Synchronous languages such as Signal, Lustre and Esterel have been designed to facilitate the development of reactive systems. They enable a high-level specification and a modular design of complex reactive systems by structurally decomposing them into elementary processes. In this paper we show that semantics-based program analysis techniques originally developed for the imperative language paradigm can be applied to Signal programs, facilitating
Abstract. In this paper we present a novel lightweight approach to validate compilers for synchronous languages. Instead of verifying a compiler for all input programs or providing a fixed suite of regression tests, we extend the compiler to generate a test suite with high behavioral coverage, geared towards the discovery of faults, for every compiled artifact. We have implemented and evaluated our approach using a compiler from Lustre to C.
Annie Ressouche and Daniel Gaffé and Valérie Roy
Abstract Synchronous languages rely on formal methods to ease the development of applications in an efficient and reusable way. Formal methods have been advocated as a means of increasing the reliability of systems, especially those which are safety- or business-critical. It is still difficult to develop automatic specification and verification tools due to limitations such as state explosion, undecidability, etc. In this work, we design a new specification model based on a reactive synchronous approach. Then, we benefit from a formal framework well suited to perform compilation and formal validation of systems. In practice, we design and implement a special-purpose language (LE) and its two semantics: the behavioral semantics helps us define a program by the set of its behaviors and avoid ambiguity in the interpretation of programs; the equational execution semantics allows the modular compilation of programs into software and hardware targets (C code, Vhdl code, Fpga synthesis, verification tools). Our approach is pertinent with respect to the two main requirements of realistic critical applications: modular compilation allows us to deal with large systems, and the model-driven approach provides us with formal validation.
Synchronous languages are high-level, engineer-friendly, robust specification formalisms, rooted in the concepts of discrete time and deterministic concurrency. Time is usually not mentioned explicitly in the definition of traditional programming languages. Such a notion is, however, of paramount importance in the design and implementation of data-flow and control software for reactive systems [Harel and Pnueli 1985; Halbwachs 1993]; indeed, interactions with external environment processes are there subject to time constraints, memory constraints, security constraints and determinism requirements. Synchronous languages, which strive to reach such demanding objectives, are often equipped with mathematical models of timing and concurrency that are structured around automata theory and a typical core hypothesis of bounded computation and communication between logical instants; this paradigm is called the “synchronous hypothesis” and is typically checked by computing the worst-case execution time. Since the semantics of synchronous languages assumes that computation and communication are performed within logical instants, these languages, contrary to traditional ones, give programmers and the running environment explicit access to time, via clocks. Computations are specified with respect to these explicit clocks, ensuring that timing constraints can be stated by programmers and verified by compilers later on.
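As a toy illustration (a hypothetical Python sketch of the idea, not the semantics of any particular synchronous language), computations indexed by logical instants, together with a clocked sampling operator, can be rendered as:

```python
def counter(resets):
    """A counter stream: at each logical instant, restart at 0 if the
    reset signal is present, otherwise increment the previous value."""
    n, out = 0, []
    for r in resets:          # one loop iteration == one logical instant
        n = 0 if r else n + 1
        out.append(n)
    return out

def when(xs, clk):
    """Sample the stream xs on the slower clock clk: keep only the
    values at instants where clk is true."""
    return [x for x, c in zip(xs, clk) if c]
```

In an actual synchronous compiler, a clock calculus would verify that only streams on the same clock are combined; the sketch only shows that every value is attached to a logical instant.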
V. CONCLUSION AND FUTURE WORK
Synchronous languages have formal semantics that compute models of programs, either for verification purposes or for compilation. Over the last three decades, several semantics have been defined. They all have in common that they compute the status of signals in an execution of a program. We need a mathematical framework to represent and compute signal status according to the semantics rules. This paper is a review of some adopted solutions and points out another framework which provides us with both verification and separated compilation. It studies classical approaches with 3-valued algebras. In these algebras, a ⊥ element turns out to be useful for the ability to apply constructive rules. But to go further and obtain a separated compilation that relies on the semantics, 4-valued algebras are required. We study five different 4-valued algebras and show that Algebra5 is a distributive bilattice. We can then consider two orders: a Boolean order and a knowledge order, which allows us to rely on fixpoint computation to establish signal status. The Boolean order is useful to compute the current environment of signals, and the knowledge order is essential to merge environments after a separated compilation of statements. Nevertheless, Algebra3 is also an appealing framework because errors are propagated, which is not the case for Algebra5. But as soon as ⊤ becomes an absorbing element, the bilattice structure cannot exist, and this property is really indispensable.
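The interplay of the two orders can be sketched in a few lines (a minimal Python sketch of a Belnap-style four-valued bilattice; the value names and the `merge` helper are our own illustrative choices, not the paper's definition of Algebra5):

```python
BOT, F, T, TOP = "bot", "0", "1", "top"   # unknown, absent, present, error

def kjoin(a, b):
    """Least upper bound in the knowledge order (bot < 0,1 < top):
    merge two pieces of information about the same signal."""
    if a == b:   return a
    if a == BOT: return b
    if b == BOT: return a
    return TOP          # contradictory statuses (0 vs 1), or TOP involved

def bor(a, b):
    """Join in the Boolean (truth) order (0 < bot,top < 1)."""
    if T in (a, b): return T
    if a == F:      return b
    if b == F:      return a
    return a if a == b else T   # bor(BOT, TOP) == T in the truth order

def merge(env1, env2):
    """Merge the signal environments of two separately compiled statements."""
    return {s: kjoin(env1.get(s, BOT), env2.get(s, BOT))
            for s in set(env1) | set(env2)}
```

The Boolean join serves to compute signal statuses inside one statement, while the knowledge join detects when two separately compiled statements assign contradictory statuses to the same signal (the result then rises to ⊤).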
On the other hand, to formally prove safety properties we rely on model-checking techniques. In this approach, the correctness of a system with respect to a desired behavior is verified by checking whether a structure that models the system satisfies a formula describing that behavior. Such a formula is usually written in a temporal logic. Most existing verification techniques are based on a representation of the concurrent system by means of a labeled transition system (LTS). Synchronous languages are well known to have a clear semantics that makes it possible to express the set of behaviors of a program as an LTS, and thus model-checking techniques are available. They can therefore rely on formal methods to build dependable software. The same holds for the LE language: the LTS model of a program is naturally encoded in its equational semantics.
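To make the connection concrete, checking a simple safety property on an explicit LTS reduces to a reachability computation, as in this stdlib-only toy sketch (our own illustration, not the LE toolchain):

```python
from collections import deque

def reachable(init, transitions):
    """All states reachable in an LTS given as {state: [(label, state'), ...]}."""
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        for _label, t in transitions.get(s, []):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

def holds_safety(init, transitions, bad):
    """A safety property 'never bad' holds iff no reachable state is bad."""
    return not any(bad(s) for s in reachable(init, transitions))
```

Real model checkers work symbolically or on-the-fly to fight state explosion, but the principle is the same: the formula is checked against the transition structure denoted by the program.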
Our purpose here is to provide a streamlined theory of determinacy for the synchronous π-calculus introduced in . It seems appropriate to address these issues in a volume dedicated to the memory of Gilles Kahn. First, Kahn networks are a classic example of concurrent and deterministic systems. Second, Kahn networks have largely inspired the research on synchronous languages such as Lustre and, to a lesser extent, Esterel. An intended side effect of this work is to illustrate how ideas introduced in concurrency theory well after Kahn networks can be exploited to enlighten the study of determinacy in concurrent systems.
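As a reminder of why Kahn networks are the canonical deterministic model, here is a toy sketch (our own illustrative code): processes communicate only through FIFO channels with blocking reads, so the result does not depend on how the processes are scheduled.

```python
from collections import deque

def kahn_pipeline(inputs):
    """A two-process Kahn network: a doubler reads from chan1 and writes
    to chan2; a consumer reads from chan2. FIFO channels and blocking
    reads make the output independent of process interleaving."""
    chan1, chan2, out = deque(inputs), deque(), []
    while chan1 or chan2:
        if chan1:                              # doubler process fires
            chan2.append(2 * chan1.popleft())
        if chan2:                              # consumer process fires
            out.append(chan2.popleft())
    return out
```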
Several synchronous languages, such as Lustre, Esterel, Scade and Signal, have been defined to describe synchronous automata. These languages are for expert users. We propose a new user-oriented language, named ADeL (Activity Description Language), to express activities and to automatically generate recognition automata. This language is easier for non-computer scientists (e.g., physicians) to understand and use, while relying on formal semantics.
In this thesis, we introduce support for rapid prototyping and composition of aspect languages based on interpreters. We start from a base interpreter of a subset of Java, and we analyze and present a solution for its modular extension to support AOP, based on a common aspect semantics defined once and for all. The extension, called the aspect interpreter, implements a common aspect mechanism and leaves holes to be defined when developing concrete languages. The power of this approach is that aspect languages are implemented directly from their operational semantics. This is illustrated by implementing a lightweight version of AspectJ. To apply the same approach and the same architecture to full Java without changing its interpreter (the JVM), we reuse AspectJ to perform a first step of static weaving, which we complement with a second step of dynamic weaving, implemented through a thin interpretation layer. This can be seen as an interesting example of reconciling interpreters and compilers. We validate our approach by describing prototypes for AspectJ, EAOP, COOL and a couple of other DSALs, and by demonstrating the openness of our AspectJ implementation with two extensions, one dealing with dynamic scheduling of aspects and another with alternative pointcut semantics. Different aspect languages implemented with our framework can easily be composed. Moreover, we provide support for customizing this composition.
Keywords: Aspect-Oriented Programming (AOP), interpreter, semantics, prototyping, composition, Domain-Specific Aspect Language (DSAL)
3.1.3 Restructuring the simple parallel model
This model can be restructured in order to better match the intrinsic structure of an ARRAY-OL repetitive task, with input tilers, computations, and output tilers. One advantage is to have a structural transformation from ARRAY-OL to synchronous equations, which is implementable automatically as a simple translator. Another advantage is related to the introduction of control, which is a direct perspective of this work, following the preliminary results in . Indeed, if one considers controlling such a task with automata, where the states correspond to a configuration or a mode, and transitions switch between these modes, which all have the same input-output interface, then it is possible to have a control for each of the tilers or computation components of a task.
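The mode-automaton idea above can be sketched as follows (hypothetical Python with made-up `normal`/`degraded` modes, not the ARRAY-OL formalism itself): every mode implements the same input-output interface, and transitions only change which implementation is active at the next step.

```python
# Each mode maps the same input interface to the same output interface.
MODES = {
    "normal":   lambda x: x + 1,
    "degraded": lambda x: x,      # pass-through in degraded mode
}

def step(mode, x, fault):
    """One reaction: compute the output in the current mode, then switch
    mode if a fault (or recovery) event occurred."""
    y = MODES[mode](x)
    if fault:
        mode = "degraded" if mode == "normal" else "normal"
    return mode, y

def run(inputs, faults, mode="normal"):
    out = []
    for x, f in zip(inputs, faults):
        mode, y = step(mode, x, f)
        out.append(y)
    return out
```

Because both modes share one interface, the surrounding tilers never need to know which mode is active, which is exactly what makes per-component control possible.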
We described the National Research Council’s entry to the second shared task on Discriminating between similar languages. Our system uses a fairly straightforward processing and modeling approach, building a two-stage predictor that relies on a probabilistic document classifier to predict the group, and on Support Vector Machines to identify the language within each group. We tested various word and character ngram features. Group-level classification was very accurate, making only a handful of mistakes, mostly due to the presence of confounding documents from other languages. Our top system yields an average accuracy
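A toy version of the two-stage scheme can be sketched as follows (hypothetical Python; simple character-ngram overlap scoring stands in for both the probabilistic classifier and the SVMs of the actual system):

```python
from collections import Counter

def char_ngrams(text, n=2):
    """Character ngram counts of a document."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def score(doc, profile):
    """Overlap between a document's ngram counts and a class profile."""
    grams = char_ngrams(doc)
    return sum(min(c, profile.get(g, 0)) for g, c in grams.items())

def train(examples):
    """examples: list of (text, label) -> {label: aggregated ngram profile}."""
    profiles = {}
    for text, label in examples:
        profiles.setdefault(label, Counter()).update(char_ngrams(text))
    return profiles

def classify_two_stage(doc, group_profiles, lang_profiles_by_group):
    """Stage 1: pick the language group; stage 2: pick the language
    among that group's candidates only."""
    group = max(group_profiles, key=lambda g: score(doc, group_profiles[g]))
    langs = lang_profiles_by_group[group]
    return group, max(langs, key=lambda l: score(doc, langs[l]))
```

The point of the two-stage design is that the second-stage classifier only has to separate a handful of very similar languages, instead of all languages at once.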
3. Coverage-guided random testing for imperative languages
The first statistical testing method guided by coverage criteria was proposed by Thévenod-Fosse and Waeselynck in . In 2004, Denise, Gaudel, and Gouraud introduced a new method along the same lines for the random generation of tests for C programs. The control graph of a C program is represented as a combinatorial structure. A path in the control graph represents a possible execution of the program. Drawing paths randomly within a combinatorial structure (here the control graph) is an active field of research and has produced highly efficient algorithms [10, 11]. Paths within the combinatorial structure are drawn, and the testing method uses constraint solving to find the inputs whose execution leads to each path. For a path drawn within the structure, there may be no input whose execution is represented by that path: the path is said to be infeasible, and this is a general problem in software testing. A further advantage of the method is that the combinatorial decomposition makes it possible to generate inputs aiming to satisfy a coverage criterion on the structure. This method was the basis for the tool AuGuSTe. AuGuSTe takes a C program as input (currently a program written in a subset of C) and generates at random as many test inputs as requested by the user in order to cover a coverage criterion with some quality.
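The key ingredient, uniform random generation of paths by counting, can be sketched on an acyclic control graph (a stdlib-only toy of our own; real tools handle richer combinatorial structures and add the constraint-solving step):

```python
import random

def count_paths(graph, node, exit, memo=None):
    """Number of paths from node to exit in an acyclic control graph
    given as {node: [successors]}."""
    if memo is None:
        memo = {}
    if node == exit:
        return 1
    if node not in memo:
        memo[node] = sum(count_paths(graph, s, exit, memo)
                         for s in graph.get(node, []))
    return memo[node]

def draw_path(graph, entry, exit, rng=random):
    """Draw a path uniformly at random among all entry->exit paths:
    at each node, choose a successor with probability proportional to
    the number of paths it leads to."""
    path, node = [entry], entry
    while node != exit:
        succs = graph[node]
        weights = [count_paths(graph, s, exit) for s in succs]
        node = rng.choices(succs, weights=weights)[0]
        path.append(node)
    return path
```

Weighting each branch by its downstream path count is what makes the draw uniform over whole paths rather than biased toward shallow branches.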
without the possibility of time-dependent errors or data races. Our model is called DSLM, which stands for Dynamic Synchronous Language with Memory.
As in standard synchronous models, a notion of instant is present. Instants define a logical time, different from physical time; an instant is terminated when all the parallel components have reached a synchronization barrier. Our model makes sure that this barrier is actually reached, that is, it ensures the termination of instants.
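The barrier-per-instant discipline can be sketched with a cooperative scheduler (a toy Python illustration, not DSLM itself): each component yields once per instant, and the next instant starts only when every live component has reached its yield.

```python
def component(name, steps, log):
    """A component performing `steps` reactions; each yield marks the end
    of its participation in the current instant."""
    for i in range(steps):
        log.append((name, i))
        yield                      # reach the synchronization barrier

def run_instants(components):
    """Run all components instant by instant; an instant terminates once
    every live component has yielded (or finished)."""
    DONE = object()
    alive = list(components)
    while alive:
        # one instant: every live component executes exactly one step
        alive = [c for c in alive if next(c, DONE) is not DONE]

log = []
run_instants([component("A", 2, log), component("B", 3, log)])
```

Because the scheduler advances every component exactly once per instant, the log interleaves the components step by step, and instants always terminate as long as each component's step does.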
In this paper we have studied the expressive power of acceptance conditions for finite automata. Three new classes have been fully characterized. For a fourth one, partial results are given. In particular, (ninf, u) provides four distinct new classes of languages (see the diamond in the left part of Figure 2); all other acceptance conditions considered tend to give (classes of) languages populating known classes.
For control purposes, squirrel-cage induction machines can be distinguished from synchronous machines with magnets, mainly because the magnetization of the machine is entirely controlled by the inverter supplying the stator. In fact, for synchronous machines with magnets, part of the magnetic field within the machine is not controlled by the supply. Hence, the control of synchronous machines is more constrained than that of induction machines. For this reason, we will restrict the study in this chapter to synchronous machines with permanent magnets. The particularity of these machines lies in the number of phases and in taking both space and time harmonics into account. This problem of the impact of harmonics on control has already been tackled in the case of three-phase synchronous machines with trapezoidal electromotive forces in [GRE 94] and [LOU 10]. In this chapter, we emphasize the originality induced by increasing the number of phases.
This paper introduces a general language-agnostic framework in which the semantics of difference languages can be defined and on which mechanically checked reasoning can be performed. In this framework, differences are interpreted as relations between reduction traces. This interpretation domain includes both semantic equivalence relations (such as observational equivalence, simulation, bisimulation, etc.) and semantic inequivalence relations (such as bug fixes, refinements, extensions, etc.). It also makes it possible to compare non-terminating or crashing programs, or programs written in distinct programming languages. Finally, this domain is appropriate for interpreting intentional differences (resource-usage optimizations, policy enforcement, migration to new API versions, etc.).
Therefore the focused proposition can be characterized more specifically as a split assertion involving a temporal presupposition (of the predicative relationship) and a qualitative
designation (of the focused constituent). This definition, which applies to focus in general, is
of particular importance in accounting for the uses of focused forms in languages where focus marking is grammaticalized in verb morphology. In these languages, the information structure constrains the choice of the verb form more strictly than in other languages. Furthermore, according to this definition, in the focused sentence, the verb is backgrounded as presupposed, while the focused element is foregrounded. This can explain why, in the case of argument focus, the verb morphology is often reduced (reflecting the backgrounding of the verb), and, by contrast, in the case of verb focus, the verb morphology is often heavier (through reduplication for instance), reflecting the double status of the verb as syntactic predicate and focus.
Related works. Declarative query languages for data mining have been studied for years [13, 15, 18, 19]. Logical query languages for data mining have been studied, for example, in [6, 10]. In , a general query language for data mining has been devised in a different context: the authors defined a data mining language with schema variables that range over sets of n-ary tuples of attributes. Their objective was to characterize data mining queries amenable to a levelwise search strategy, i.e. exhibiting monotone properties with respect to some partial order. They obtained negative results pointing out that their class of queries was too expressive to ensure properties such as (anti-)monotonicity. Other declarative approaches have been proposed in data mining, but in a much more general setting, i.e. at the intersection of DBMS and data mining techniques (classification, clustering, pattern mining, etc.), e.g. [8, 18, 19].