Abstract—This paper presents an evaluation of two software tools dedicated to the automatic analysis of CT scanner image spatial resolution. Both methods evaluated compute the modulation transfer function (MTF) of the CT scanner: the first uses the image of an impulse source, while the second, proposed by Droege, uses the image of cyclic bar patterns. Two Digital Test Objects (DTOs) are created for this purpose. These DTOs are then blurred by convolution with a two-dimensional Gaussian point spread function (PSF) of known FWHM. The evaluation then consists of comparing the Fourier transform of the PSF, on the one hand, with the results of the two mentioned methods, on the other.
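Since the DTOs are blurred with a Gaussian PSF of known FWHM, the reference MTF against which both tools can be compared is available in closed form: the Fourier transform of a Gaussian is again a Gaussian. A minimal sketch of this reference curve (the function names and the MTF50 helper are ours, not part of the evaluated tools):

```python
import math

def mtf_gaussian(f, fwhm):
    """Analytic MTF of a Gaussian PSF with the given FWHM.

    f is the spatial frequency (in the reciprocal of the FWHM unit).
    The Fourier transform of a Gaussian is itself a Gaussian."""
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    return math.exp(-2.0 * (math.pi * sigma * f) ** 2)

def mtf50(fwhm):
    """Frequency at which the analytic MTF falls to 50%."""
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    return math.sqrt(math.log(2.0) / 2.0) / (math.pi * sigma)

print(mtf_gaussian(0.0, 1.0))  # 1.0 (the MTF is normalized at f = 0)
print(mtf50(1.0))
```

A wider PSF (larger FWHM) yields a faster-decaying MTF, i.e. poorer spatial resolution, which is exactly what the comparison between the two tools probes.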
The CATPHAN600 phantom [6] is used for the quality control of the CT scanner. These objects contain specific metallic or plastic patterns embedded in a soft-tissue-equivalent envelope, in order to obtain images that offer the possibility of controlling different parameters.
The LAP [5] phantom allows comparing the reference point of the laser skin-marking system with the reference of the CT scanner when it is used for radiation therapy applications. It consists of a 3 cm thick acrylic plaque with several slots on each of its sides: 1 longitudinal transverse slot, 1 anterior slot and 2 lateral frontal slots (cf. figure 7). Frontal and lateral laser misalignments are then calculated from the positions of these slots in the corresponding images. Each slot is 3 mm wide. The slice thickness for the CT examination of the phantom is 1 mm or less, and the slice spacing is 1 mm. These distances are chosen to facilitate the analysis. In theory, we obtain two types of images, as shown in figure 1: one with slots and one without, because the slice lies within the transverse slot. By counting the number of images of each type, we can calculate the misalignment in the Z direction, i.e. the feet-to-head direction.
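The feet-to-head check described above reduces to counting the slices of each type. A hedged sketch of that arithmetic, with invented function names and an idealized classification of each slice as showing the slot pattern or not:

```python
# Hedged sketch: estimate the Z (feet-to-head) laser misalignment from
# the sequence of CT slices of the LAP phantom. Each slice is flagged
# True (slot pattern visible) or False (the slice lies inside the 3 mm
# transverse slot, so no slot pattern is seen). All names are ours.

def z_misalignment(slot_visible, slice_spacing_mm=1.0,
                   expected_center_index=None):
    """Offset (mm) between the centre of the slot-free run of slices
    and the expected laser reference position."""
    # indices of slices falling inside the transverse slot
    inside = [i for i, visible in enumerate(slot_visible) if not visible]
    if not inside:
        raise ValueError("no slice lies inside the transverse slot")
    measured_center = sum(inside) / len(inside)
    if expected_center_index is None:
        # by default, the laser is expected at the middle of the scan
        expected_center_index = (len(slot_visible) - 1) / 2.0
    return (measured_center - expected_center_index) * slice_spacing_mm

# 7 slices at 1 mm spacing; the slot-free run is centred one slice
# after the expected middle position:
print(z_misalignment([True, True, True, False, False, False, True]))  # 1.0
```

With 1 mm slices and a 3 mm slot, roughly three consecutive slot-free images are expected, which is why the 1 mm spacing is said to facilitate the analysis.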
• SP is the instruction pointer referring to the next sub-scope in the scope graph to execute.
Initially, the input parameters and all the other variables are initialized using symbols. SP points to the header block of the scope and PC is set to true. Symbolic execution of a program takes a symbolic state and a rule, which corresponds to the block action of the sub-scope referred to by SP, and returns the symbolic states resulting from the execution of the rule. When SP points to the header block for the second time, the defined loop path is analytically evaluated following the process described in subsection 4.1, which reduces the complexity of the approach. Furthermore, only a subset of the symbolic states of the program is computed. For this purpose, we associate to each scope a set of control points which correspond to its terminal symbolic states. A control point state is composed of the set of the scope variables with their initial values V_in as well as the corresponding terminal
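The stepping scheme can be sketched as follows; this is our own minimal illustration (naive string substitution, a two-rule toy), not the paper's formalism. A state pairs a variable environment with a path condition PC; an assignment rewrites the environment, and a branch forks the state, extending PC on each side:

```python
# Toy symbolic-execution step. A state is (env, pc): env maps variable
# names to symbolic expressions (strings), pc is the list of branch
# constraints accumulated so far (empty list = PC is true).

def exec_assign(state, var, expr):
    env, pc = state
    new_env = dict(env)
    # naive substitution of current symbolic values into the RHS
    # (illustrative only; a real engine uses a proper expression tree)
    for v, val in env.items():
        expr = expr.replace(v, f"({val})")
    new_env[var] = expr
    return [(new_env, pc)]          # one successor state

def exec_branch(state, cond):
    env, pc = state
    # fork: one successor per branch outcome, PC extended accordingly
    return [(dict(env), pc + [cond]),
            (dict(env), pc + [f"not({cond})"])]

# initial state: input x is the symbol X0, PC is true
s0 = ({"x": "X0"}, [])
(s1,) = exec_assign(s0, "x", "x + 1")
forked = exec_branch(s1, "x > 0")
print(s1[0]["x"])    # (X0) + 1
print(len(forked))   # 2
```

In the same spirit, the control points of a scope would collect the terminal (env, pc) pairs reached when SP leaves the scope.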
PRINCIPLE OF THE SEGMENTATION OF MULTIVARIATE IMAGES BY WS
The watershed transformation (WS) is one of the most powerful tools for segmenting images and was introduced in Beucher and Lantuéjoul (1979). According to the flooding paradigm, the watershed lines associate a catchment basin to each minimum of the landscape to flood (i.e. a scalar or greyscale image) (Beucher and Meyer, 1992). Typically, the landscape to flood is a gradient function which defines the transitions between the regions. Using the watershed on a scalar image without any preparation leads to a strong over-segmentation (due to a large number of minima). There are two alternatives for getting rid of the over-segmentation. The first consists in first determining markers for each region of interest; then, using the homotopy modification, the markers of the regions are imposed as the only local minima of the gradient function. The extraction of the markers, especially for generic images, is a difficult task. The second alternative involves hierarchical approaches, either based on non-parametric merging of catchment basins (waterfall algorithm) or based on the selection of the most significant minima. These minima are selected according to different criteria
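The flooding paradigm itself is short to state in code. Below is a minimal marker-based flooding on a toy grey-level grid, written from the textbook description rather than from any particular implementation: pixels are flooded in increasing grey level starting from the markers, and each pixel joins the basin of the marker whose flood reaches it first (gradient computation and the homotopy modification are left out):

```python
import heapq

def watershed(landscape, markers):
    """Marker-based watershed by flooding (illustrative sketch).

    landscape: 2-D list of grey levels; markers: dict label -> (r, c).
    Returns a 2-D list assigning a marker label to every pixel."""
    rows, cols = len(landscape), len(landscape[0])
    labels = [[0] * cols for _ in range(rows)]
    heap = []
    for lab, (r, c) in markers.items():
        labels[r][c] = lab
        heapq.heappush(heap, (landscape[r][c], r, c, lab))
    while heap:
        # always flood the lowest unprocessed pixel first
        h, r, c, lab = heapq.heappop(heap)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and labels[nr][nc] == 0:
                labels[nr][nc] = lab   # first flood to arrive wins
                heapq.heappush(heap, (landscape[nr][nc], nr, nc, lab))
    return labels

# Two minima separated by a ridge (grey level 5): each side of the
# ridge ends up in the basin of its own marker.
land = [[0, 1, 5, 1, 0],
        [0, 1, 5, 1, 0]]
print(watershed(land, {1: (0, 0), 2: (0, 4)}))
```

Without the two markers, every regional minimum of a real gradient image would seed its own basin, which is precisely the over-segmentation discussed above.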
Many applications in machine learning rely on gradient-based optimization, or at least on the efficient calculation of derivatives of models expressed as computer programs. Researchers have a wide variety of tools from which they can choose, particularly if they are using the Python language (Paszke et al., 2017; Maclaurin, Duvenaud, and Adams, 2015; Tokui et al., 2015; Al-Rfou et al., 2016; Abadi et al., 2016). These tools can generally be characterized as trading off research or production use cases, and can be divided along these lines by whether they implement automatic differentiation using operator overloading (OO) or source code transformation (SCT). SCT affords more opportunities for whole-program optimization, while OO makes it easier to support convenient syntax in Python, like data-dependent control flow, or advanced features such as custom partial derivatives. We show here that it is possible to offer the programming flexibility usually thought to be exclusive to OO-based tools in an SCT framework.
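The OO side of this trade-off is easy to illustrate: with operator overloading, the host interpreter executes data-dependent control flow natively, and the overloaded arithmetic records derivatives along whichever path is actually taken. A minimal forward-mode sketch (our own toy, not the internals of any cited tool):

```python
# Minimal operator-overloading forward-mode AD with dual numbers.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot   # value and derivative
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.dot * o.val + self.val * o.dot)  # product rule
    __rmul__ = __mul__
    def __gt__(self, o):
        return self.val > (o.val if isinstance(o, Dual) else o)

def f(x):
    # data-dependent control flow: the interpreter simply runs the
    # branch, and the overloads differentiate whichever one executes
    if x > 0:
        return x * x
    return 3 * x

x = Dual(2.0, 1.0)          # seed dx/dx = 1
print(f(x).val, f(x).dot)   # 4.0 4.0
```

An SCT tool, by contrast, must rewrite the `if` statement itself at compile time, which is the source of the flexibility gap the excerpt refers to.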
Network structures and dynamics, the organization of leaf venation networks, vascular pattern formation, and their optimality in transport properties (electric, fluid, material) have been widely investigated in the recent physics literature, in particular with the aim of understanding natural networks. Most of the methodological tools use discrete models, as in [4, 5, 12], to explain conductance dynamics. Supply optimization has also been studied in a more mathematical manner, see  and the references therein. In contrast to the global effect of optimization, another explanation of the topological structure of networks, purely local and based on mechanical laws, has been proposed in [10, 11]. Passing to the limit in the discrete model, the authors also derive a continuous model of network dynamics, which consists of a Poisson-type equation for the scalar pressure p(t, x) coupled to a nonlinear diffusion equation for the vector-valued conductance m(t, x) of the network. In [10, 11], the authors propose the following system with parameters D ≥ 0, c > 0,
Recently, there has been much interest in combining dynamic and static methods for program verification. Static and dynamic analyses can enhance each other by providing valuable information that would otherwise be unavailable. This paper reports on an ongoing project that aims to provide a new combination of static analysis and structural testing of C programs. We implement our method using two existing tools: Frama-C, a framework for static analysis of C programs, and PathCrawler, a structural test generation tool.
All these studies require one to build a special-purpose code generator, with a complexity ranging from an ad-hoc template assembler to a full, domain-specific, optimizing compiler. In contrast, we take the stubs generated by an existing stub compiler, and derive the specialized stubs with Tempo, a general program specialization tool.
Kernel-level optimizations. It is well recognized that physical memory copies are an important cause of overhead in protocol implementations. Finding solutions to avoid or optimize copies is a constant concern of operating system designers. For instance, copy-on-write  was the technique that made message passing efficient enough to allow operating systems to be designed around a micro-kernel architecture [30, 31]. Buffers are needed when different modules or layers, written independently for modularity reasons, have to cooperate at run time. This cause of overhead has been clearly demonstrated by Thekkath and Levy in their performance analysis of RPC implementations . Recent proposals in the networking area explore solutions to improve network throughput and to reduce latency. Maeda and Bershad propose to restructure network layers and to move some functions into user space . Mosberger et al. describe techniques for improving protocols by reducing the number of cycles stalled waiting for memory access completion .
No mathematical theories can be accepted by biologists without a most careful experimental verification. We can but agree with the following remarks made in Nature (H. T. H. P. ’31) concerning the mathematical theory of the struggle for existence developed by Vito Volterra: “This work is connected with Prof. Volterra’s researches on integro-differential equations and their applications to mechanics. In view of the simplifying hypothesis adopted, the results are not likely to be accepted by biologists until they have been confirmed experimentally, but this work has as yet scarcely begun.” First of all, very reasonable doubts may arise whether the equations of the struggle for existence given in the preceding chapter express the essence of the processes of competition, or whether they are merely empirical expressions. Everybody remembers the attempt to study from a purely formalistic viewpoint the phenomena of heredity by calculating the likeness between ancestors and descendants. This method did not give the means of penetrating into the mechanism of the corresponding processes and was consequently entirely abandoned. In order to dissipate these doubts and to show that the above-given equations actually express the mechanism of competition, we shall now turn to an experimental analysis of a comparatively simple case. It has been possible to measure directly the factors regulating the struggle for existence in this case, and thus to verify some of the mathematical theories.
Let us mention , who used notions such as finite selection, “isolated criticalities”, stable domains or regular arcs, and argued that “functions given by evaluation procedures are almost everywhere real analytic or stably undefined”, where “undefined” means that a nonsmooth elementary function is used in the evaluation process. For piecewise smooth functions whose nonsmoothness can be described using the absolute value function (abs-normal form),  developed a piecewise linearization formalism and local approximation related to AD, and  proposed an AD-based bundle-type method. These developments are based on the notion of piecewise smooth functions , which we use in this work. More recently,  applied these techniques to single-layer neural network training, and  proposed to avoid the use of subgradient “oracles” in nonsmooth analysis, as they are not available in practice. In a similar vein, let us mention , who study lexicographic derivatives, a notion of directional derivative which satisfies a chain rule making it compatible with forward-mode AD, and , who use directional derivatives in the context of local sampling stochastic approximation algorithms for machine learning.
In a modular view of a biological organism, each task is executed by a specific set of interactions among an ensemble of biological components; in other words, it can be said that there is a specific network, or module, for each specific task (signaling, metabolic, physiological, etc.). These modules often interact with each other, one task triggering the next in a chain of events or cyclic phenomena. Examples include chains of signaling networks such as MAPK cascades, genetic-metabolic interactions (Baldazzi et al., 2010), or coupled oscillations (Gérard and Goldbeter, 2012). However, in many cases, while experimental evidence supports the existence of links between two modules, their modes of interaction are still unclear (as in the case of the mammalian cell cycle and circadian clock, see Feillet et al., 2015). In this context, mathematical tools are necessary to facilitate the analysis of the complex behavior obtained from the interconnection of two or more known modules.
improve current MDE approaches and tools, similar to what other communities in Software Engineering are already doing.
The models that we have discussed give a taste of the difficulties that anybody working on a new OCL analysis technique should consider. Nevertheless, our long-term goal is the complete specification of a full benchmark model suite covering all known challenging verification and validation scenarios. The need for such a benchmark was one of the outcomes of the last OCL Workshop. However, the notion of a ‘challenging scenario’ is not universal and is debatable, in the sense that, depending on the formalism used by a given tool, a scenario may be easy or extremely demanding. By proposing this benchmark and its hopefully forthcoming evolution, we want developers to evaluate the existing approaches, realize the strengths and drawbacks of each one, and choose a tool or an approach according to their specific needs. Generally speaking, for an OCL analysis tool benchmark there are challenges in two dimensions: (a) challenges related to the complexity of OCL (i.e., the complete and accurate handling of OCL) and (b) challenges related to the computational complexity of the underlying problem. Both should be treated in the benchmark.
In this position paper, we propose a first step toward the automatic analysis of sentiments in dreams. 100 dreams were sampled from a dream bank created for a normative study of dreams. Two human judges assigned a score to describe dream sentiments. We ran four baseline algorithms in an attempt to automate the rating of sentiments in dreams. In particular, we compared the General Inquirer (GI) tool, the Linguistic Inquiry and Word Count (LIWC), a weighted version of the GI and HM lexicons, and a standard bag-of-words. We show that machine learning allows automating the human judgment with an accuracy superior to the majority-class choice.
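A lexicon baseline of the GI/LIWC kind amounts to counting hits against curated word lists. A hedged illustration with an invented miniature lexicon (the real lexicons and dream reports are far larger, and the actual scoring schemes differ):

```python
from collections import Counter

# Toy stand-ins for GI/LIWC-style sentiment word lists (invented).
POSITIVE = {"happy", "fly", "friend"}
NEGATIVE = {"fall", "chase", "afraid"}

def bag_of_words(report):
    """Word-count vector of a dream report, the bag-of-words feature."""
    return Counter(report.lower().split())

def lexicon_score(report):
    """Positive minus negative hit counts: a crude sentiment score in
    the spirit of lexicon-lookup baselines."""
    counts = bag_of_words(report)
    return (sum(counts[w] for w in POSITIVE)
            - sum(counts[w] for w in NEGATIVE))

print(lexicon_score("i was happy to fly with a friend"))          # 3
print(lexicon_score("something would chase me and i was afraid")) # -2
```

The weighted variants mentioned above would multiply each hit by a learned per-word weight instead of counting it as 1, and the bag-of-words vectors would feed a standard classifier.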
In this work, we aim to find markers of MCI and AD in read speech for the early diagnosis of these diseases, by extracting features from patients' voices and using two classification methods. The next section describes the data set. We review the features used in our experiments in Section 3, and follow by explaining the methods in Section 4. We present the results in Section 5 and conclude the work with perspectives for future work in Section 6.
that propagate the modification to the other duplicates of the modified snippet.
We have implemented PCR in C++ using the clang compiler front-end . We chose clang because it has an AST interface with APIs that support source code rewriting. This enables PCR to generate a patched source code file without dramatically changing the overall structure of the source code. Existing program repair tools [21, 28, 33] often destroy the structure of the original source by inlining all header files and renaming all local variables in their generated patched source code. Preserving the existing source code structure helps developers understand and evaluate the patch and promotes the future maintainability of the application.
Keywords: Tolerance analysis, Mathematical issues, Overview
As technology advances and performance requirements continually tighten, the cost and the required precision of assemblies increase as well. There is a strong need for increased attention to tolerance design in order to enable high-precision assemblies to be manufactured at lower cost. Due to the variations associated with manufacturing processes, it is not possible to attain the theoretical dimensions in a repetitive manner, which causes a degradation of the functional characteristics of the product. In order to ensure the desired behavior and the functional requirements of the system in spite of variations, each component feature is assigned a tolerance zone within which its value, i.e. its situation and intrinsic characteristics, must lie.
assembly requirement (interface constraints) and the compatibility equations are respected” (Assemblability condition).
P_FR: the probability that the functional requirements are respected. Let FC be the event that the functional conditions are fulfilled. Once a mechanism assembles, in order to evaluate its performance under the influence of the deviations, it is necessary to describe an additional condition that evaluates its core functioning with respect to the basic product requirements. In terms of tolerance analysis, the basic requirement becomes the maximum or minimum clearance on a required feature that would have an impact on the mechanism’s performance. The most essential condition therefore becomes that, for all the possible gap configurations of the given set of components that assemble together, the imposed functional condition must be respected. In terms of quantification, in order to represent all possible gap configurations, the universal quantifier is required: “for all
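In our own notation (an assumption, not necessarily the paper's exact symbols), with X the feature deviations, G(X) the set of gap configurations satisfying the compatibility equations and the interface constraints, and C_f the functional condition, the quantified requirement can be sketched as:

```latex
P_{FR} \;=\; \operatorname{Prob}\bigl(\,\forall\, g \in G(X):\; C_f(X, g)\ \text{is respected}\,\bigr)
```

The universal quantifier over g is what distinguishes this functional-requirement probability from the assemblability condition, which only asks that G(X) be non-empty.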