

Paper organisation. After this introduction, Section 2 presents the CinK language, a kernel of the C++ programming language. We use CinK programs to illustrate various aspects of symbolic execution. In Section 3 we present background theoretical material used in the rest of the paper: coinduction, a general technique for defining and reasoning about possibly infinite objects such as program executions; Reachability Logic, which is used for defining operational semantics of languages and for stating program properties; and a generic language-definition framework, which makes our approach independent of the K language-definition framework. Section 4 contains our formalisation of symbolic execution, including the coverage and precision results stated earlier in this introduction. Section 5 shows how Reachability-Logic (RL) formulas can be verified using a coinductive extension of symbolic execution. In Section 6 we show how symbolic execution and its core derivative operation can be implemented in language definitions based on standard rewriting, such as the K framework. Section 7 presents a prototype tool based on the language transformations from the previous section, as well as applications of the tool to the symbolic execution, model checking, and deductive verification of nontrivial programs. We conclude in Section 8.

The system we have developed for labeling cortical anatomical structures in MRI images uses the mereo-topological relations between the various cortical structures. This knowledge is described in an ontology of cortical gyri and sulci represented in OWL DL, the Web ontology language, according to the description logics (DL) paradigm [10]. The result of the annotation process is a set of instances satisfying the axioms and constraints defined in the ontology, and representing the parts of the sulci and gyri that are shown in the images. They are associated with graphical primitives extracted from the images, such as a list of points comprising a sulcal outline on the brain surface, and a list of sulcal outlines delimiting a cortical area. Our system is a hybrid system in the sense that it relies on both symbolic and numerical knowledge. By 'symbolic knowledge', we mean knowledge expressed as class definitions using axioms that model the properties and relations of related entities (based in our case on the DL paradigm). By 'numerical knowledge' we mean prior knowledge presented as 3D maps that depict the position of the anatomical structures in a reference space (i.e., an atlas), either as a statistical map derived from images of a population of individuals or as a single-subject map assumed to be prototypical and representative of a population. Similarly, our system involves both 'symbolic reasoning' (i.e., based on the knowledge in the ontology) and 'numerical data processing', such as localizing a specific point or spatial area in reference to an atlas. The basic reason why we made this choice of a hybrid system is that, given the very large number of possible combinations when assigning anatomical structure labels (described in our ontology) to parts of the sulci and gyri, the labeling process could not be based solely on DL-based classifications.
We have thus initially selected a reasonable set of hypotheses

Fig. 1: Validation Process.
Let us overview the different steps of the approach in Figure 1. The first step (1) consists in specifying system scenarios which describe the intended interactions between all components of the system. The second step (2) consists in refining the scenarios into an executable model which specifies, with an activity diagram, the internal behavior of each component. System scenarios, as sequence diagrams, are analyzed with Diversity in step (3). For each sequence diagram, Diversity computes a symbolic tree, where each path denotes a possible (symbolic) execution in the sequence diagram. Then, in step (4), a path relating to a specific behavior is selected, and a sequence of stimuli (as a timed trace) is extracted from it. Next, in step (5), the fUML virtual machine of the tool Moka, supplied with the test stimuli, is used to set up a test environment and execute the system activities. In step (6) the system responses are collected by Moka. These are taken as inputs by the testing algorithm in Diversity, which computes in step (7) a verdict concerning the tioco-conformance of the execution and the coverage of the requirement. Naturally, in the case of fault detection, the system activities need to be revised by the designer.
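A crude sketch of the conformance check in step (7) might look as follows. The types, the deadline rule, and the verdict logic are simplifying assumptions for illustration only, not the actual tioco algorithm implemented in Diversity:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// An event of a timed trace: a stimulus or response name with its time.
struct TimedEvent {
    std::string label;
    double time;
};

enum class Verdict { Pass, Fail };

// Illustrative rule (an assumption, not tioco proper): the observed trace
// conforms iff it produces the same labels, in the same order, each no
// later than the deadline carried by the expected trace extracted from
// the selected symbolic path.
Verdict check_trace(const std::vector<TimedEvent>& expected,
                    const std::vector<TimedEvent>& observed) {
    if (observed.size() != expected.size()) return Verdict::Fail;
    for (std::size_t i = 0; i < expected.size(); ++i) {
        if (observed[i].label != expected[i].label) return Verdict::Fail;
        if (observed[i].time > expected[i].time) return Verdict::Fail;
    }
    return Verdict::Pass;
}
```

In the real setting the expected side comes from the symbolic tree (step (4)) and the observed side from Moka (step (6)); this sketch only fixes the shape of the comparison.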

of indexing, but if this indexing is geometrically meaningful, then we can ensure invariance with respect to some geometric transformations.
3 Predicates for circular arc arrangements
To address the problem of computing arrangements of circular arcs by sweep-line algorithms, it is necessary to compare abscissae of endpoints of circular arcs. In the formalism of the previous section, u is a vector of parameters defining a set of circular arcs and G(u) is the arrangement, but we will use notations better adapted to our application. This predicate was studied in a previous paper [5]. The arc endpoints are described as intersections of two circles, which leads us to consider the arrangement of all circles supporting arcs or defining their endpoints. Degeneracies occur if several vertices of the arrangement have the same abscissa or if more than two circles meet at a common point. For arrangements exhibiting many degeneracies, it may be interesting to design an algorithm that directly handles special cases, while in other contexts, where degeneracies are occasional, it would be preferable to keep the algorithm simple and handle degeneracies through a perturbation scheme.
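To see why such comparisons are delicate, the following sketch computes, in plain floating point, the abscissae of the two intersection points of a pair of circles. The function name and this numeric formulation are ours for illustration; the predicates of [5] keep the endpoints symbolic precisely to avoid the rounding issues visible near tangency here:

```cpp
#include <cmath>
#include <optional>
#include <utility>

struct Circle { double cx, cy, r; };

// Returns the abscissae (x-coordinates) of the two intersection points of
// c1 and c2, or nothing if the circles do not meet transversally.
std::optional<std::pair<double, double>>
intersection_abscissae(const Circle& c1, const Circle& c2) {
    double dx = c2.cx - c1.cx, dy = c2.cy - c1.cy;
    double d = std::hypot(dx, dy);
    if (d == 0.0 || d > c1.r + c2.r || d < std::fabs(c1.r - c2.r))
        return std::nullopt;                        // concentric or disjoint
    double a = (d * d + c1.r * c1.r - c2.r * c2.r) / (2.0 * d);
    double h2 = c1.r * c1.r - a * a;
    double h = h2 > 0.0 ? std::sqrt(h2) : 0.0;      // clamp near-tangency noise
    double mx = c1.cx + a * dx / d;                 // chord midpoint, x part
    // The two intersections lie at +/- h along the perpendicular direction.
    return std::make_pair(mx - h * dy / d, mx + h * dy / d);
}
```

Two endpoints whose abscissae differ by less than the rounding error of this computation are exactly the degenerate configurations that force either exact predicates or a perturbation scheme.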

Received: 14 November 2002 / Accepted: 15 September 2003 / Published online: 30 April 2004 Springer-Verlag 2004
Abstract The human cerebral cortex anatomy describes the brain organization at the scale of gyri and sulci. It is used as landmarks for neurosurgery as well as localization support for functional data analysis and inter-subject data comparison. Existing models of the cortex anatomy either rely on image labeling but fail to represent variability and structural properties, or rely on a conceptual model but miss the inner 3D nature and relations of anatomical structures. This study was therefore conducted to propose a model of sulco-gyral anatomy for the healthy human brain. We hypothesized that both numeric knowledge (i.e., image-based) and symbolic knowledge (i.e., concept-based) have to be represented and coordinated. In addition, the representation of this knowledge should be application-independent in order to be usable in various contexts. Therefore, we devised a symbolic model describing the specialization, composition and spatial organization of cortical anatomical structures. We also collected numeric knowledge, such as 3D models of shape and shape variation, about cortical anatomical structures. For each numeric piece of knowledge, a companion file describes the concept it refers to and the nature of the relationship. Demonstration software performs a mapping between the numeric and the symbolic aspects for browsing the knowledge base.

Abstract
Improving execution time and energy efficiency is needed for many applications and usually requires sophisticated code transformations and compiler optimizations. One such optimization technique is memoization, which saves the results of computations so that future computations with the same inputs can be avoided. In this article we present a framework that automatically applies memoization techniques to C/C++ applications. The framework is based on automatic code transformations using a source-to-source compiler and on a memoization library. With the framework, users can select functions to memoize as long as they obey certain restrictions imposed by our current memoization library. We show the use of the framework and the associated memoization technique, and its impact on reducing the execution time and energy consumption of four representative benchmarks.
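A minimal hand-written sketch of the lookup-before-compute pattern that such a memoization library automates (the class and function names are illustrative, not the framework's actual API, and a real library would also bound the cache and restrict keys to hashable value types):

```cpp
#include <cstdint>
#include <map>
#include <tuple>
#include <utility>

// Wraps a pure function and caches its results keyed by the argument tuple.
template <typename R, typename... Args>
class Memoizer {
public:
    explicit Memoizer(R (*fn)(Args...)) : fn_(fn) {}

    R operator()(Args... args) {
        auto key = std::make_tuple(args...);
        auto it = cache_.find(key);
        if (it != cache_.end()) return it->second;  // hit: skip recomputation
        R result = fn_(args...);                    // miss: compute once
        cache_.emplace(std::move(key), result);
        return result;
    }

private:
    R (*fn_)(Args...);
    std::map<std::tuple<Args...>, R> cache_;
};

// Stand-in for an expensive pure computation worth memoizing.
std::int64_t slow_square(std::int64_t x) { return x * x; }
```

The source-to-source transformation described in the article effectively rewrites calls to a selected function so that they go through a table like `cache_` instead of re-executing the body.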

Transmitted data
When an observer metaobject notifies the observer that an event occurred, it transmits the CORBA reference of the observee object, the index of this event in the observee, and the index of the method execution in which this event occurred (each observer metaobject stores the number of events and the number of method executions that have been generated so far). The observer object needs the first index to reconstruct the object's local order, and the second one to associate each event with its method execution (as objects are multi-threaded, several executions of the same method may be performed concurrently). Furthermore, for some events, additional parameters are transmitted to the observer object (Table 1 summarizes the event types recorded and their additional parameters). 1. An invocation key is recorded for each method-call and method-arrival event. This key, which contains the caller object reference, the caller method identifier and an invocation number, allows the observer object to generate the dependency between the call and the arrival. This key needs to be piggy-backed on each method invocation between application-level objects (indeed, when the method-arrival event is generated at the server side, this key needs to be sent to the observer). We modified the JacORB client stubs and server skeletons generation code to transparently add this key. Future work could tackle the use of a more generic solution. For instance, the architectural framework of the Jonathan ORB [DHTS98] provides a mechanism to plug customized stub factories into the ORB. Another, more portable, solution could be to use a standard request-level interceptor to perform this piggy-backing process.
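The invocation-key mechanism can be sketched as follows. The field and class names are our own illustration, not the actual JacORB-based implementation; the point is only how the key pairs each method-call event with the matching method-arrival event:

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <tuple>

// The three components of an invocation key, as described in the text.
struct InvocationKey {
    std::string caller_ref;       // CORBA reference of the calling object
    std::uint32_t method_id;      // identifier of the calling method
    std::uint64_t invocation_no;  // per-caller invocation counter
    bool operator<(const InvocationKey& o) const {
        return std::tie(caller_ref, method_id, invocation_no)
             < std::tie(o.caller_ref, o.method_id, o.invocation_no);
    }
};

// The observer records call events and later resolves arrival events
// against them, yielding the call -> arrival dependency used to rebuild
// the causal order of the distributed execution.
class Observer {
public:
    void on_method_call(const InvocationKey& k, std::uint64_t event_idx) {
        pending_calls_[k] = event_idx;
    }
    // Returns the event index of the matching call, or -1 if unknown.
    std::int64_t on_method_arrival(const InvocationKey& k) const {
        auto it = pending_calls_.find(k);
        return it == pending_calls_.end()
                   ? -1
                   : static_cast<std::int64_t>(it->second);
    }
private:
    std::map<InvocationKey, std::uint64_t> pending_calls_;
};
```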

2. Symbolic objects associated to a distribution base
A concept is defined by an intent (its characteristic properties, also called its "description") and an extent (the units which "satisfy" these properties). Here, each unit is described by a set of distributions. Together the units define the distribution base and are supposed to satisfy the properties of a given concept. For instance, the units are towns described by socio-economic distributions (such as the age or wage distribution of their inhabitants) and the concept is the region containing these towns. More formally, if C is a region, Extent(C) is the set of towns of this region and Intent(C) = d_C is a description of the region. A symbolic object (see Diday (1998) or Bock, Diday

We pay special attention to genericity in designing structures for which effectiveness can be maintained. Thanks to the parametrization of the code using templates and to the control of their instantiations using traits and template expressions [14], they offer generic programming without losing effectiveness. We need to combine generic implementations, which allow code reuse across different types of data representation, with specialized implementations tuned to specific critical operations on some of these data structures. This is typically the case if we want to use external or third-party libraries, such as lapack (a Fortran library for numerical linear algebra), gmp (a C library for extended-precision arithmetic), or mpsolve (a C univariate solver implementation using extended multiprecision arithmetic). For instance, lapack routines should coexist with generic ones on matrices. In order to optimize the implementation, while avoiding rewriting the same code several times, we need to consider hierarchical classes of data structures and a way to implement specializations for each level in this hierarchy. In this section, we describe the design of the library, which allows such a combination. Since it is intended to be used in dedicated applications, we should also take care of techniques to embed the library's functionalities into an external application. In this section, we also describe the method that we adopt to build dynamic modules, combining generic and special code through a transparent plugin mechanism. This approach is illustrated by the connection with the geometric modeler axel, which uses the synaps library for geometric computation on algebraic curves and surfaces.
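The traits-controlled coexistence of generic and specialized code can be sketched as follows. The trait and type names here are illustrative inventions, not the actual synaps interfaces; the idea is that one entry point dispatches at compile time to a tuned kernel when a representation provides one, and to the generic code otherwise:

```cpp
#include <array>
#include <cstddef>

// Primary trait: by default a matrix type has no specialized kernel.
template <typename Matrix>
struct algebra_traits {
    static constexpr bool has_fast_kernel = false;
};

// Generic implementation, written once for any indexable square matrix.
template <typename Matrix>
double trace_generic(const Matrix& m) {
    double t = 0.0;
    for (std::size_t i = 0; i < m.size(); ++i) t += m[i][i];
    return t;
}

// A hypothetical specialized representation with its own tuned routine
// (playing the role of a lapack-backed type in the text).
struct DiagMatrix {
    std::array<double, 3> diag;
};
template <>
struct algebra_traits<DiagMatrix> {
    static constexpr bool has_fast_kernel = true;
};
double trace_fast(const DiagMatrix& m) {
    return m.diag[0] + m.diag[1] + m.diag[2];  // reads n entries, not n*n
}

// Single entry point: the trait selects the kernel at compile time.
template <typename Matrix>
double trace(const Matrix& m) {
    if constexpr (algebra_traits<Matrix>::has_fast_kernel)
        return trace_fast(m);
    else
        return trace_generic(m);
}

using Mat2 = std::array<std::array<double, 2>, 2>;  // a generic-path type
```

Callers only ever write `trace(m)`; adding a new specialized representation means providing a trait specialization and a kernel, with no change to client code.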

While storing data in a PDMS increases user control over data, collaborative use of data is often overlooked in the PDMS context. The benefits derived from exploiting data are considerable. A user may want to share her GPS position to get accurate traffic prediction [9], or her medical records to train a shared neural network so that it can detect several diseases [5]. She may also want to adapt her energy contract based on her actual consumption without jeopardizing her privacy [3]. A naive approach to this problem is to send personal data to a trusted third party who will perform said collaborative computations. This, however, requires very strong trust in the third party's honesty. The goal of this work is to overcome this unrealistic trust assumption and propose a privacy-preserving distributed computation framework for performing collaborative computations over a large number of PDMSs.

SMOTL [9] and more recently GODZILLA [12] took advantage of domain reduction techniques to prune the search space of integer inequalities. Gotlieb et al. [25] applied Constraint Logic Programming over finite domains to solve constraints extracted from imperative programs in the tool INKA [26]. The proposed framework dealt only with constraints over integers (possibly non-linear) to automatically generate test data. SMOTL, GODZILLA and INKA did not address the problem of floating-point computations in symbolic execution, but they did use domain and interval propagation techniques to solve constraint systems. The method used in the current paper to solve path conditions over floating-point variables is closely related to these techniques. More recently, Meudec followed a similar path in [11] and proposed solving path conditions over floating-point variables by means of a constraint solver over the rationals in the ATGen symbolic execution tool. The clpq library [27] of the Constraint Logic Programming system ECLIPSE was used to solve linear constraints over rationals computed with arbitrary precision using an extended version of the simplex algorithm. Although this approach appears to be of particular interest in practice, it fails to handle correctly floating-point
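The mismatch between rational and floating-point evaluation that this approach runs into can be seen in a small sketch (our own minimal rational type for illustration, not the clpq library): a path condition such as x + y = 3/10 is satisfiable over the rationals by x = 1/10, y = 2/10, yet the same assignment evaluated in IEEE-754 doubles fails the equality test.

```cpp
#include <cstdint>
#include <numeric>

// Minimal exact rational number (assumes positive denominators and no
// overflow; a real solver would use arbitrary-precision integers).
struct Rational {
    std::int64_t num, den;  // invariant: den > 0, fraction reduced
    static Rational make(std::int64_t n, std::int64_t d) {
        std::int64_t g = std::gcd(n < 0 ? -n : n, d);
        return {n / g, d / g};
    }
    friend Rational operator+(Rational a, Rational b) {
        return make(a.num * b.den + b.num * a.den, a.den * b.den);
    }
    friend bool operator==(Rational a, Rational b) {
        return a.num == b.num && a.den == b.den;
    }
};
```

Over doubles, `0.1 + 0.2 != 0.3` holds, so a rational-domain solver can declare a path feasible whose concrete floating-point execution never takes it; this is exactly the correctness gap alluded to above.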

This is not a new problem. A long time ago, in the works of the Ancient Greeks, geometry, the art of measuring the world, was already closely tied to arithmetic problems. Pythagoras developed a complete model of computation, relating geometric constructions to (commensurable) numbers that we today call rational numbers. But Hippasus of Metapontum publicly exhibited a weakness of this model (namely that √2 is not a rational number). The story says that this act of bravery had terrible consequences for him. Today, we want to deal with models of the real world on a computer. But this machine can compute efficiently only with fixed-size or floating-point numbers, and we are facing Pythagoras' dilemma again: • Should we consider that floating point arithmetic is sufficient to analyze all

The first method, described by Li and Malik [17], models the cache by a projection of the CFG according to each cache line. This Cache Conflict Graph (CCG) is then processed to generate new constraints included in the ILP system and to modify the maximized function representing the WCET. This method is very heavy because it adds many new constraints and variables to the ILP system. As ILP solvers usually exhibit exponential complexity, the overall computation time grows quickly, while a lot of memory is required for building the CCG. Yet, this method is well integrated within the IPET approach, making the cache sensitive to any flow-fact information of the program without additional work. The second method (CAT) has been adapted from the work of Healy et al. [13] to IPET. For each cache line, a Data Flow Analysis is performed on the CFG in order to compute the state of the cache line at the entry and at the exit of each basic block. The context tree of the program, composed of function environments and loops, is then built and used for assigning a category to each point of the program

see Figure 2.
B. Implementation of the Algorithm
Algorithm 1 and Algorithm 2 described in Subsection IV-A are implemented in the library GENOM3CK, a library we originally developed for computing the genus of a plane complex algebraic curve using knot theory. Together with its main functionality to compute the genus, the library computes other topological and algebraic invariants of each singularity of the plane complex algebraic curve. GENOM3CK is implemented in the free algebraic geometric modeler Axel [23], [24] (written in C++ and using Qt Script for Applications), and in the free computer algebra system Mathemagix [25]. Axel is a new system developed at INRIA Sophia Antipolis, which provides, for our purposes, unique algebraic tools and visualization techniques to manipulate implicit algebraic curves and surfaces. Axel also uses libraries from the free computer algebra system Mathemagix [25], for instance a library for computing the singularities of a plane complex algebraic curve. The power of the Axel system comes from the fact that it allows its extension into "subprograms" with new functionalities, called plugins. We implemented the proposed symbolic-numeric algorithms in one of Axel's plugins, which was later transformed into a library. More information on the library is available at: http://people.ricam.oeaw.ac.at/m.hodorog/software.html.

* the third method is derived from the general method of Raghavan and Roth [7, 8], where the solution for one variable is given as a polynomial equation of degree 16 at most; the other variables can then be obtained.
In the case of closed-loop robots, the programs (Pieper, Paul, or the general method) give the solution for the joints on the direct path between the base of the robot and the end effector. To obtain the values of the motorized joint positions outside this path, the geometric constraint equations of the loops must be solved, as seen in the following section.



Model Checking [6] (see a survey in [19]) is a well-established technique for the automatic verification of systems. In this section, we show how to construct a symbolic, and then compact, model of the behavior of a ntcc process. Later, in Section 5, we shall use this model as input to a symbolic model checking algorithm. One of the main difficulties in developing automatic verification techniques for ntcc programs is the fact that the semantics of processes is given by two different transition systems, namely, the internal (−→) and the observable (=⇒) transitions. On one hand, building a model for the internal transition seems unnecessary, since the internal movements of a process during a time-unit are unobservable from the external environment. Moreover, abstracting away from the internal transition should lead to a more compact representation of the system, thus reducing the search space. On the other hand, the internal transition dictates much of the observable behavior when non-deterministic processes are considered (see, e.g., Rules R_ASK and R_STAR). Our approach is then to use (temporal) formulas as a compact representation of the reachable states (i.e., stores) of a process. As we shall see, the proposed formulas capture the observable contributions (i.e., constraints) that processes can make to the final store; additionally, the internal (unobservable) transitions are symbolically captured by logical connectives. More precisely, we shall follow the steps below:
