
I. INTRODUCTION
A. Context
Designing real-time cyber-physical systems (CPS) is a complex task which typically entails the use of many different methodologies and tools, in different areas and at different stages of the development process. Typically, different parts of the CPS are developed by different engineering teams or even external contractors. Integrating these heterogeneous parts and evaluating the composite behavior is challenging and, with today's tools, a systematic composition is nearly impossible. Over recent years, model-based tools have gained momentum in the design of CPS as they allow for modeling at higher levels of abstraction and facilitate communication between different teams of engineers. Additionally, many tools come with add-ons such as formal verification and code generation. However, different parts of a CPS require different abstractions and tool support, and the lack of interoperability between the tools poses a major challenge.

1. Introduction
The term real-time UML profile denotes the family of modeling languages that customize the Unified Modeling Language [Gro03] for real-time applications and platforms. TURTLE [ALCdSS04] is a real-time UML profile supported by the open-source toolkit TTool (TURTLE Toolkit [Lab10]). The diagram editor and the type checker support the UML 2 compliant syntax of the profile. The code generators developed for LOTOS, RT-LOTOS, and UPPAAL give access to three formal verification tools (RTL [rtl], UPPAAL [upp], CADP [cad]) [AdSS09]. The user-friendly interface offered by TTool hides the inner workings of the verification tools. Also, the Java code generators enable fast prototyping of distributed systems.

6 Conclusion
In this paper we presented a heuristic for load balancing and efficient memory usage of homogeneous distributed real-time embedded systems. Such systems must satisfy dependence and strict periodicity constraints, and their total execution time must be minimized since they include feedback control loops. In addition, since memory is limited in embedded systems, it must be used efficiently. This is achieved by grouping the tasks into blocks and moving them to a processor such that the block start time decreases and this processor has enough memory capacity to execute the tasks of the block. We showed that the proposed heuristic has a polynomial complexity, which ensures a fast execution time. We also performed a theoretical performance study which bounds the decrease in total execution time and shows that our heuristic is a (2 − 1
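A hedged sketch of the greedy idea described above — placing each block of tasks on the processor that lets it start earliest among those with enough free memory — might look as follows. All names and data structures here are hypothetical illustrations, not the paper's actual algorithm:

```python
def balance(blocks, processors):
    """Greedy block placement sketch (hypothetical, not the paper's heuristic).

    blocks:     list of (exec_time, mem) pairs for each block of tasks.
    processors: list of dicts with 'finish' (time the processor becomes
                free, i.e. the earliest start for a new block) and 'mem'
                (remaining memory capacity). Assumes a feasible placement
                always exists.
    Returns the index of the chosen processor for each block.
    """
    placement = []
    for exec_time, mem in blocks:
        # Only processors with enough memory are candidates.
        candidates = [i for i, p in enumerate(processors) if p["mem"] >= mem]
        # Pick the candidate offering the earliest block start time.
        best = min(candidates, key=lambda i: processors[i]["finish"])
        placement.append(best)
        processors[best]["finish"] += exec_time
        processors[best]["mem"] -= mem
    return placement

procs = [{"finish": 0.0, "mem": 10}, {"finish": 0.0, "mem": 10}]
print(balance([(5, 4), (3, 4), (2, 4)], procs))  # → [0, 1, 1]
```

The memory check before the start-time minimization mirrors the paper's two conditions: the block start time decreases and the target processor can hold the block's tasks.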

VI. CONCLUSION
Industrial attention towards cobots has expanded. It is vital to consider a cobot with an energy-neutral design, so that its energy demand is met entirely by a renewable source. This paper has presented the current state of the art on energy-efficient mobile robots. Most approaches have focused only on motion planning to reduce energy consumption. To our knowledge, real-time systems with energy constraints have not been explored. It is essential to treat the cobot as a hard real-time system in order to enforce strict deadlines. Eventually, scheduling approaches with timing and energy constraints will provide a deterministic and energy-efficient design. These approaches can also exploit energy harvesting from renewables, so that the maximum energy demand of the cobot coincides with renewable availability. This approach intends to expand the CPSS application in the following ways: (i) improve processor utilization, (ii) increase the operating time with reduced energy consumption, and (iii) importantly, satisfy the energy demand with renewables.

Determinization of Timed Automata: Timed automata (TA), introduced in (1), form a usual model for the specification of real-time embedded systems. Essentially, TAs are an extension of finite automata with guards and resets of continuous clocks. They are extensively used in the context of many validation problems such as verification, control synthesis, or model-based testing (2). Determinization is a key issue for several problems such as implementability, diagnosis, or test generation, where the underlying analyses depend on the observable behavior.
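To make the model concrete, here is a minimal one-clock timed automaton replayed on a timed word. The automaton, locations, and guard are hypothetical illustrations of guards and resets over continuous clocks, not tied to any of the cited works:

```python
from dataclasses import dataclass, field

@dataclass
class Transition:
    src: str
    dst: str
    action: str
    guard: object        # callable: clock valuation -> bool
    resets: set = field(default_factory=set)

# Hypothetical one-clock TA: action 'a' is enabled only while x <= 2,
# and taking it resets x to 0.
TRANSITIONS = [
    Transition("l0", "l0", "a", lambda v: v["x"] <= 2.0, {"x"}),
]

def run(transitions, timed_word, clocks=("x",)):
    """Replay a timed word (list of (delay, action) pairs).
    Returns True iff some enabled transition exists at every step
    (nondeterminism resolved by taking the first match)."""
    loc, v = "l0", {c: 0.0 for c in clocks}
    for delay, action in timed_word:
        for c in v:
            v[c] += delay                      # time elapses on all clocks
        for t in transitions:
            if t.src == loc and t.action == action and t.guard(v):
                loc = t.dst
                for c in t.resets:
                    v[c] = 0.0                 # clock reset
                break
        else:
            return False                       # no enabled transition
    return True

print(run(TRANSITIONS, [(1.0, "a"), (2.0, "a"), (3.0, "a")]))  # → False
```

The last step fails because the 3.0 delay pushes the clock past the guard, illustrating why observable behavior depends on timing, not just on the action sequence.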

In addition to managing the time synchronization between the Ptolemy model time and the HLA logical time by implementing the TimeRegulator interface, it handles all the logic, data structures […]

Dedicated supercomputers can run real-time tasks, but volunteer computing could be a low-cost alternative if it could support real-time guarantees. In the domain of complex strategy games, Deep Blue [3] was the first machine to defeat the human world champion, in 1996. IBM developed a dedicated server system for Deep Blue, which achieved about 11.38 GFLOPS on the LINPACK benchmark. Since 2006, several researchers around the world have been developing MoGo, software to find the next move in the game of Go. They adapted Monte-Carlo-based algorithms and, running on cluster computing machines, are now as strong as professional Go players on the 9 × 9 small board [2]. Grid gaming middleware [11] has been developed to address issues such as adaptive redirection of communication or computation under variable load, as well as high-level, easy-to-use programming interfaces and monitoring tools for capacity planning. We believe our work on giving worst-case bounds on execution time is complementary to those methods. For example, our techniques guarantee performance given that the data can be stored entirely in the server's memory; this in turn could be used with capacity planning tools to determine when to replicate a server.


Ongoing work deals with the definition of such a heuristic approach in order to be able to find optimal or near-optimal allocations in the context of complex systems. Moreover, taking into account local I/O constraints when mapping partitions is another important objective, as well as analysis of a possible oversampling of emitting partitions (increasing their period to overcome exceeded end-to-end communication delays).

1. Introduction
New systems are becoming more and more complex. They are made of numerous components, which often rely on new technologies and interact with one another. These systems can be embedded inside aircraft (manned or unmanned), satellites, or defense systems. Mastering and managing the development and evaluation of such systems has become a genuinely difficult task, and simulation is increasingly required to help in these processes. Because many scientific and technical problems are addressed, and because numerous models are needed to treat them, such simulations are generally distributed. Models and simulations must interoperate in order to produce relevant results, so simulations must rely on basic mechanisms and services to interoperate properly. Moreover, some hardware equipment or real subsystems can be integrated into the loop of these simulations. In that case, hard real-time constraints must be taken into account when running these distributed simulations.

8. Conclusion
We proposed an approach focused on variables instead of tasks and processes to model and analyse distributed real-time systems. Based on state-transition-system semantics extended by a timed referential, we express timed properties of variables and of communications. These properties are used to check the freshness of values, their stability, and the compatibility of requirements. The analysis is done using propositions to derive simple proofs or, in more complex cases, using model checking. The complexity of model checking is a problem when analysing large systems, so we work on combining proofs, to reduce the problem size, with model checking, to easily analyse simple models. For that purpose we will extend the number of properties and of proved propositions binding these properties.

The concept of end-to-end deadline has been discussed in many research works, both for single-processor and distributed architectures. In particular, for data-driven activation models, end-to-end deadlines were considered in the context of schedulability analysis tests such as the holistic analysis with jitter propagation used in this work [34], or the model with offsets as in [35]. Timing analysis techniques have advanced significantly, considering new activation models, communication protocols, and more expressive task representations (e.g., the digraph model [41]). The optimization of deployment has not received comparable attention. [42] and [43] propose a heuristics-based design optimization algorithm for mixed time-triggered and event-triggered systems; its main assumption is that the nodes (in our case ECUs) are synchronized. An integrated framework for optimization is proposed in [44] for systems with periodic tasks on a network of processor nodes connected by a time-triggered bus. The authors use Simulated Annealing (SA) combined with geometric programming to hierarchically explore task allocation and the assignment of tasks' priorities and periods. In [45], the process of task allocation and priority assignment targets the optimization of system flexibility, i.e., the ability to adapt to changes, which is important for real-time systems. The change considered is the introduction of new tasks into the system, which obviously impacts the response times of already deployed tasks. To solve the problem, just as in the previous work, the authors use simulated annealing. The work of Hamann et al. [46] optimizes multi-dimensional robustness criteria in a complex embedded system. Their approach is based on a stochastic multi-dimensional sensitivity analysis technique. The authors consider multiple problems affecting system performance, such as changes in the execution times of tasks, but also period speed-ups, etc. Azketa et al. [47] deliver an approach based on genetic algorithms that optimizes the assignment of priorities to tasks and messages and then maps them onto the execution platform. Similarly, allocation and scheduling decisions are optimized in [48] and [49] under real-time constraints. There are also approaches that consider only mono-processor architectures, such as [50], [51], or [52]; hence, for all of them, allocation is out of scope.
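The schedulability tests mentioned above (holistic analysis [34], the model with offsets [35]) build on the classical fixed-point response-time recurrence R_i = C_i + Σ_{j ∈ hp(i)} ⌈R_i / T_j⌉ · C_j for fixed-priority preemptive scheduling. A minimal sketch of that underlying recurrence (the task set is illustrative; deadlines are taken equal to periods):

```python
import math

def response_time(tasks, i):
    """Worst-case response time of task i under fixed-priority
    preemptive scheduling, via fixed-point iteration.

    tasks: list of (C, T) pairs -- worst-case execution time and
           period -- sorted by decreasing priority (index 0 highest).
    Returns the response time, or None if it exceeds the period
    (taken here as the deadline), i.e. the task is unschedulable.
    """
    C_i, T_i = tasks[i]
    R = C_i
    while True:
        # Interference from all higher-priority tasks released in [0, R).
        interference = sum(math.ceil(R / T_j) * C_j for C_j, T_j in tasks[:i])
        R_next = C_i + interference
        if R_next > T_i:
            return None          # deadline miss
        if R_next == R:
            return R             # fixed point reached
        R = R_next

tasks = [(1, 4), (1, 5), (2, 10)]   # (C, T), highest priority first
print([response_time(tasks, i) for i in range(3)])  # → [1, 2, 4]
```

The iteration converges because the right-hand side is monotone and bounded by the period; the holistic analysis of [34] extends this recurrence with release jitter propagated along task chains.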



Keywords: real-time systems, validation, partial knowledge, polyhedra.
1. Introduction
Validation is a mandatory step in critical real-time systems design, and can be important even for non-critical ones. There are many ways to check the temporal behavior of a real-time system: we can use formal methods based on logic (LTL, CTL, etc.) or automata [1, 2, 3] (Petri nets, linear hybrid automata, etc.), or we can use scheduling techniques [7] (RMA method, etc.). Methods based on logic give precise results and have great expressive power. They can model very specific features, but involve a strenuous design process. Indeed, they are not initially fitted to the particular case of real-time system validation, and notions such as tasks or resource sharing do not exist in Petri nets or linear hybrid automata. Scheduling techniques have two main advantages. Firstly, they are of course completely fitted to real-time systems. Secondly, they usually have a very low complexity. However, they are often pessimistic, they require an almost complete knowledge of system parameters (task periods, priorities, precedence relationships, etc.), and they give quite
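The RMA method cited above comes with the classical Liu & Layland sufficient utilization test: n periodic tasks are schedulable under Rate Monotonic priorities if their total utilization stays below n(2^(1/n) − 1). A minimal sketch (the task set is illustrative):

```python
def rma_utilization_test(tasks):
    """Liu & Layland sufficient (not necessary) test for Rate
    Monotonic scheduling of independent periodic tasks.

    tasks: list of (C, T) pairs -- worst-case execution time and period.
    Returns True if total utilization is within the n(2^(1/n) - 1) bound.
    """
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)   # ~0.78 for n = 3, -> ln 2 as n grows
    return utilization <= bound

tasks = [(1, 4), (1, 5), (2, 10)]    # U = 0.25 + 0.20 + 0.20 = 0.65
print(rma_utilization_test(tasks))   # → True (bound for n=3 ≈ 0.779)
```

The test illustrates both advantages claimed in the text (it is tailored to periodic real-time tasks and runs in linear time) and the pessimism: a task set failing the bound may still be schedulable, which exact response-time analysis would reveal.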

martin.adelantado@onera.fr
Keywords: Simulation, ODEs, HLA, RTI, real-time, scheduling.
ABSTRACT: In the context of the Research Platform for Embedded Systems Engineering (PRISE) project, we are developing and maintaining a complete aircraft flight simulation using the High Level Architecture (HLA), an IEEE standard for distributed simulation. This complex distributed simulation is composed of different distributed HLA simulators (e.g., Flight Dynamics, Sensors), whose dynamic behaviors are implemented as Ordinary Differential Equations (ODEs). The resolution of these equations is done, locally for each simulator, by numerical integration with methods such as Euler or Adams-Bashforth. The global behavior of this distributed simulation, where each component runs its own local resolution, is a key challenge. The main problem is to ensure the global simulation consistency and, in particular, the specific data flows between components with the correct temporal real-time behavior. This paper specifically addresses the problem of solving ODEs over an HLA distributed architecture and offers a complete study (specifications, implementation, and validation) where several theoretical concepts and methods are discussed.
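As a generic sketch of the local resolution step mentioned in the abstract — explicit Euler and two-step Adams-Bashforth integration as each federate might run them between HLA time advances — here is a self-contained example. The ODE and step size are illustrative, not PRISE code:

```python
def euler(f, y0, t0, t1, h):
    """Fixed-step explicit Euler: y_{k+1} = y_k + h * f(t_k, y_k)."""
    n = round((t1 - t0) / h)
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

def adams_bashforth2(f, y0, t0, t1, h):
    """Two-step Adams-Bashforth:
    y_{k+1} = y_k + h * (3/2 f_k - 1/2 f_{k-1}),
    bootstrapped with a single Euler step."""
    n = round((t1 - t0) / h)
    t, y = t0, y0
    f_prev = f(t, y)
    y = y + h * f_prev           # bootstrap (no earlier f value yet)
    t += h
    for _ in range(n - 1):
        f_cur = f(t, y)
        y += h * (1.5 * f_cur - 0.5 * f_prev)
        f_prev = f_cur
        t += h
    return y

# Illustrative dynamics: dy/dt = -y, y(0) = 1; exact value e^-1 at t = 1.
f = lambda t, y: -y
print(euler(f, 1.0, 0.0, 1.0, 0.01))
print(adams_bashforth2(f, 1.0, 0.0, 1.0, 0.01))
```

In a distributed setting, the fixed step `h` would have to be reconciled with the HLA lookahead and time-advance grants, which is exactly the consistency problem the paper studies.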

Traditional telecom chipsets are designed with dedicated hardwired solutions, which are cost-ineffective for multi-standard mobile handsets. For more flexibility, MPSoCs (Multiprocessor Systems-on-Chip) [3] with multiple programmable processors as system components have been introduced in the telecom field. MPSoCs are well suited for systems with concurrent algorithms such as telecom applications. The implementation of such algorithms on two heterogeneous SDR platforms [4], [5] has proven the efficiency of MPSoCs in providing a valuable solution. In the context of SDR platforms, flexibility and reconfigurability are the key challenge. Homogeneous MPSoCs, which are based on the replication of

Keywords
Architecture for Autonomous Systems, OBDD, Dependability
1 Introduction
Advanced robots or satellites have an increasing need for a high level of autonomy while performing in a hard real-time environment. However, this raises a major, non-trivial problem: most complex autonomous systems, which operate with minimal human intervention and in a highly non-deterministic environment, are hard to validate. Nevertheless, these systems must be safe and dependable (e.g., avoid non-nominal and unknown system states) to avoid financial loss and/or to avoid disturbing or harming humans. These two requirements (high-level autonomy and dependability) may appear contradictory, or at least hard to satisfy together. Indeed, how can we be

significant and must be a central ingredient in the measure of flexibility.
"Real options analysis" moreover offers significant advantages over NPV or DCF. It recognizes the ability and responsibility of engineers and managers to shape the development of any system over time. They will make major choices along the way, dropping features that are no longer desirable or adding new ones that have proven to be essential. NPV/DCF analyses, however, assume that the project and its cash flows are defined in advance. On the contrary, "real options" analysis deals operationally with the reality of many different cash flows through time. This is a crucial conceptual difference that many texts go to great lengths to stress (for example, Copeland and Antikarov, 2001). Thus "real options analysis" is similar to NPV/DCF in that it is economic, but as a form of economic analysis it is substantially different.
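For contrast with the "real options" view, the NPV/DCF calculation that assumes a single, fixed-in-advance stream of cash flows is straightforward (the discount rate and cash flows below are illustrative numbers only):

```python
def npv(rate, cash_flows):
    """Net present value of cash flows at t = 0, 1, 2, ...
    This is the DCF assumption criticized above: one predetermined
    stream, discounted at a single rate, with no decision points."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Invest 100 now, receive 60 in each of the next two years, at 10%.
print(round(npv(0.10, [-100, 60, 60]), 2))  # → 4.13
```

A real-options valuation would instead model the choices available at each period (abandon, expand, defer), which is precisely the flexibility this single formula cannot capture.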
