Hence, (7) is proven.
In this paper we presented a heuristic for load balancing and efficient memory usage of homogeneous distributed real-time embedded systems. Such systems must satisfy dependence and strict periodicity constraints, and their total execution time must be minimized since they include feedback control loops. In addition, since memory is limited in embedded systems, it must be used efficiently. This is achieved by grouping the tasks into blocks and moving each block to a processor such that the block start time decreases and the processor has enough memory capacity to execute the tasks of the block. We showed that the proposed heuristic has polynomial complexity, which ensures a fast execution time. We also performed a theoretical performance study which bounds the decrease in total execution time and shows that our heuristic is a (2 − 1
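The block-moving idea described above can be sketched as a small greedy procedure. This is an illustrative assumption, not the authors' exact heuristic: all field names (`proc`, `mem`, `wcet`, `free_mem`, `next_start`) and the greedy policy are hypothetical.

```python
# Hypothetical sketch of the block-moving heuristic: a block migrates to
# another processor only if its start time decreases AND the target
# processor has enough free memory for the block's tasks.

def try_move_blocks(blocks, processors):
    """Greedily move each block to the processor that minimizes its
    start time among processors with sufficient free memory."""
    for block in blocks:
        best = block["proc"]
        best_start = processors[best]["next_start"]
        for pid, proc in processors.items():
            # candidate must have room and a strictly earlier start time
            if proc["free_mem"] >= block["mem"] and proc["next_start"] < best_start:
                best, best_start = pid, proc["next_start"]
        if best != block["proc"]:
            # release memory on the old processor, reserve it on the new one
            processors[block["proc"]]["free_mem"] += block["mem"]
            processors[best]["free_mem"] -= block["mem"]
            block["proc"] = best
        # the chosen processor is busy for the block's execution time
        processors[best]["next_start"] += block["wcet"]
    return blocks
```

A block is only moved when both conditions of the abstract hold, so total execution time can only decrease or stay the same at each step.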
L’UNAM University–IRCCyN (Institut de Recherche en Communications et Cybernétique de Nantes), UMR CNRS 6597, Nantes, France
Harvesting energy from the environment is highly desirable for many emerging applications that use embedded devices. Energy harvesting, also known as energy scavenging, makes it possible to guarantee quasi-perpetual operation for wireless sensors, medical implants, etc., without the human intervention normally required for recharging batteries in classical battery-operated systems. Nevertheless, energy harvesting calls for solving numerous technological problems: chemistry issues when batteries are used for temporary energy storage, power management of the embedded computing system that consumes the energy, and so on. The latter problem becomes more complex when the embedded system has real-time constraints, i.e., deadlines attached to computations. This paper surveys the main issues involved in designing energy harvesting embedded systems that present strict timing requirements.
This article presents an HMM-based approach to the online diagnosis of accidental faults in real-time embedded systems. By introducing a reasonable and appropriate abstraction of complex systems, HMMs are used to describe the healthy or faulty states of a system's hardware components. The observation sequences are derived from test results with respect to the functional constraints defined in the system specifications. They are parametrized to statistically simulate the real system's behavior. As it is not easy to obtain rich accidental fault data from a real system, the Baum–Welch algorithm cannot be employed here to train the parameters of the HMMs. The parameters of the initial state distribution and the state transition matrix are computed using the failure rates of hardware components. The parameters of the emission matrix are estimated using the failure propagation algorithm. The estimation method is based on the principles of fault tree analysis (FTA) and maximum entropy in Bayesian probability theory. A fault propagation distribution is thus computed, whose parameters are adapted using the backward algorithm and observations. The parameterized HMMs are then used to diagnose accidental faults online using a vote algorithm integrated with a low-pass filter. We have designed a specific test bed to analyze measures including sensitivity, specificity, precision, accuracy and F1-score by generating a large number of test cases. The test results show that the proposed approach is robust, efficient and accurate.
This article proposes an approach for the online analysis of accidental faults in real-time embedded systems using hidden Markov models (HMMs). By introducing a reasonable and appropriate abstraction of complex systems, HMMs are used to describe the healthy or faulty states of a system's hardware components. They are parametrized to statistically simulate the real system's behavior. As it is not easy to obtain rich accidental fault data from a system, the Baum–Welch algorithm cannot be employed here to train the parameters of the HMMs. Inspired by the principles of fault tree analysis and maximum entropy in Bayesian probability theory, we propose to compute the failure propagation distribution to estimate the parameters of the HMMs and to adapt them using a backward algorithm. The parameterized HMMs are then used to diagnose accidental faults online using a vote algorithm integrated with a low-pass filter. We design a specific test bed to analyze the sensitivity, specificity, precision, accuracy and F1-score measures by generating a large number of test cases. The test results show that the proposed approach is robust, efficient and accurate.
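The online-diagnosis step rests on standard HMM filtering: given parameters estimated elsewhere (here, from failure rates and fault-tree analysis), the forward recursion yields the posterior probability of each hidden health state after every observation. The sketch below is a generic, minimal forward filter, not the authors' exact parameterization; the toy matrices in the usage are assumptions.

```python
# Generic HMM forward filtering: pi is the initial state distribution,
# A the state transition matrix, B the emission matrix, and
# observations a list of observation-symbol indices.

def hmm_filter(pi, A, B, observations):
    """Return P(state | observations so far) after each observation."""
    n = len(pi)
    alpha = [pi[i] * B[i][observations[0]] for i in range(n)]
    posteriors = []
    for t, obs in enumerate(observations):
        if t > 0:
            # propagate through the transition matrix, then weight by emission
            alpha = [sum(alpha[j] * A[j][i] for j in range(n)) * B[i][obs]
                     for i in range(n)]
        z = sum(alpha)
        alpha = [a / z for a in alpha]   # normalize to avoid underflow
        posteriors.append(alpha[:])
    return posteriors
```

With a two-state model (healthy/faulty) where the faulty state is absorbing, repeated fault-like observations drive the posterior of the faulty state upward, which is the signal a vote algorithm with a low-pass filter would then threshold.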
In previous work (Ayari et al., 2016f), a simple genetic algorithm based on a novel schedulability-guided operator (SGX) proved its efficiency. Our genetic algorithm using SGX easily outperforms the classical operators, offering at least a 21% improvement in the ratio of certainly schedulable tasks. In the current paper, SGX takes its place in ImGA as part of the process, with the aim of improving the quality of the findings. The crossover operator is considered the fundamental search operator in GAs (Zhang et al., 2007). Zhang et al. (2007) underlined the efficiency of sophisticated operators in improving the quality of the solutions obtained by standard crossovers; they introduced a more intelligent crossover operator using a local hill-climbing search to construct good building blocks for object classification. Ahuja et al. (2000) proposed a greedy genetic algorithm combining the genetic algorithm with greedy approaches to solve large-scale quadratic assignment problems. An improved genetic implementation was also proposed by Drezner (2008), which includes a local search algorithm to tackle the same problem. Shrestha and Mahmood (2016) presented two main ideas, Mitochondrial DNA and the Continent Model, to improve the quality of GAs. Toğan and Daloğlu (2008) proposed a novel strategy for initial population generation and discussed two new self-adaptive member grouping strategies. Ayari et al. (2016g) proposed a simulation-based approach to assess solutions discarded by schedulability pessimism and to include them in the optimization process: a simulation stage considers solutions discarded by schedulability test pessimism during exploration. All these promising results motivated us to design an improved genetic algorithm to tackle the problem of scheduling embedded real-time applications on heterogeneous multi-core systems under timing constraints.
Most embedded systems constructed to date do not extract power efficiently from the source. As a result, they use a much larger harvester (e.g. a solar panel) than necessary to yield the same level of power as a more efficient one, or they rely on a larger, more expensive, higher-capacity battery than needed in order to sustain extended operation. In both cases, the low harvesting efficiency limits the achievable performance and precludes the system from many important applications. This has motivated researchers, for about the past four years, to design energy harvesting capabilities specifically dedicated to real-time embedded systems. The crucial issue in these systems is to find scheduling mechanisms that can adapt their performance to the available energy profile. Up to now, when designing a real-time embedded system, the first concern has usually been time, leaving energy efficiency as a hopeful consequence of empirical decisions. Now, the primary concern is that power from solar panels or other free sources that cannot be stored (or can be stored only with limited capacity) should be consumed greedily, or else this energy will be wasted.
In this report, we introduce ARTE, a new simulation tool for reconfigurable energy harvesting real-time embedded systems, which provides various functions to simulate the scheduling process of real-time task sets and their temporal behavior under a given scheduling policy. It provides the classical real-time scheduling policy EDF; EDH, the optimal scheduling algorithm for energy harvesting systems; EDF scheduling for (m,k)-constrained task sets; a new scheduling policy, EDH-MK, which extends EDH to (m,k)-firm constrained task sets; and finally a new hierarchical approach, Reconf-Algorithm. It also implements classical and new feasibility tests for both real-time and energy harvesting requirements. The main aim of this research work is to guarantee a feasible execution of the whole system in the presence of unpredictable events while satisfying a quality of service (QoS) measured first in terms of the percentage of satisfied deadlines, second the percentage of satisfied deadlines weighted by the degree of importance of tasks, and finally the overhead introduced by the proposed approach.
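The classical EDF policy mentioned above can be illustrated with a minimal discrete-time simulator for a synchronous periodic task set with implicit deadlines. This is a hedged sketch of the textbook algorithm, not ARTE itself; a real energy-harvesting simulator would additionally track the energy budget at each tick.

```python
# Minimal EDF (Earliest Deadline First) simulator: at every time unit,
# run the ready job with the earliest absolute deadline.

def edf_schedule(tasks, horizon):
    """tasks: list of (wcet, period) pairs with implicit deadlines.
    Returns the task index executed at each time unit (None = idle)."""
    n = len(tasks)
    remaining = [0] * n          # remaining work of the current job
    deadline = [0] * n           # absolute deadline of the current job
    trace = []
    for t in range(horizon):
        for i, (wcet, period) in enumerate(tasks):
            if t % period == 0:  # new job release at each period boundary
                remaining[i] = wcet
                deadline[i] = t + period
        ready = [i for i in range(n) if remaining[i] > 0]
        if ready:
            i = min(ready, key=lambda i: deadline[i])
            remaining[i] -= 1
            trace.append(i)
        else:
            trace.append(None)
    return trace
```

For the fully utilized set {(1, 2), (2, 4)} the processor never idles, as EDF's optimality on a single processor guarantees for any feasible set.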
In this paper, we propose, describe and evaluate a new task model addressing the requirements of firm real-time systems that accept deadline violations due to fault occurrences and/or processing overload. Overload conditions can be caused by bad system design, unanticipated simultaneous interrupt arrivals, hardware defects in data acquisition from sensors, under-estimated computational demands, operating system exceptions, etc. Fault-tolerance techniques aim to keep the system operational in the presence of faults, even while producing degraded results. We will show how the BGW task model makes it possible to guarantee online graceful and controlled degradation of the quality of service in embedded real-time systems.
We focus on safety-critical embedded systems, i.e. systems whose constraints must necessarily be satisfied in order to avoid catastrophic consequences. Such systems, in most cases, consist of a set of dependent periodic tasks resulting from a functional specification, usually produced with block-diagram-based tools such as Simulink, Scade, etc. The functional specification describes the functions to be executed and their dependences, which represent the data produced and consumed by the functions. Such dependences induce a precedence relation between the execution of every producer function and its one or several consumer functions, and lead to sharing of the data concerned. Dependent functions associated with temporal characteristics become dependent real-time tasks. These characteristics comprise first release time, Worst Case Execution Time (WCET), period and deadline.
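The precedence relation induced by the data dependences can be made concrete with a small topological-ordering sketch: every producer must be ordered before its consumers, and a cycle in the dependences means the specification is infeasible. Task names and the edge representation below are illustrative assumptions.

```python
# Order dependent tasks so that every producer precedes its consumers
# (Kahn's topological-sort algorithm over the dependence graph).
from collections import deque

def precedence_order(tasks, edges):
    """tasks: iterable of task names; edges: (producer, consumer) pairs.
    Returns a valid execution order, or raises on a cyclic dependence."""
    succ = {t: [] for t in tasks}
    indeg = {t: 0 for t in tasks}
    for a, b in edges:
        succ[a].append(b)
        indeg[b] += 1
    queue = deque(t for t in tasks if indeg[t] == 0)
    order = []
    while queue:
        t = queue.popleft()
        order.append(t)
        for s in succ[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                queue.append(s)
    if len(order) != len(indeg):
        raise ValueError("cyclic dependence in the functional specification")
    return order
```

In a full model each task would also carry its first release time, WCET, period and deadline; here only the precedence structure is shown.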
Context and Problems
Embedded real-time systems are omnipresent in our daily life and cover a range of different levels of complexity. They are found in several domains: robotics, automotive, aeronautics, medical technology, telecommunications, railway transport, multimedia and nuclear power plants. These systems are composed of hardware and software components with functional and timing constraints. In fact, the correctness of such systems depends not only on the logical result but also on the physical time at which this result is produced. Real-time systems are usually reactive, since they interact permanently with their environment. In addition, embedded real-time systems are often considered critical, because the computer system is entrusted with great responsibility in terms of human lives and non-negligible economic interests. In order to predict the behavior of an embedded safety-critical system, we need to know the interactions between its components and how to schedule these components on a given platform.
The Model Driven Architecture (MDA) approach, promoted by the OMG for several years, puts the model paradigm at the center of the development process. In this approach, the process activities rely on intensive use of model transformations and try, as much as possible, to separate platform-specific from platform-independent concerns. The CEA-List has driven experiments and research in the field of RTES development for several years. In this scope, much work has been done to apply the founding concepts of MDA (and model-driven approaches in general) in a development process dedicated to real-time and embedded systems. This work resulted in the Accord|UML toolkit, which consists of a complete UML-based methodology and its supporting tools. This methodology covers the whole development cycle of RTES, from very early specification activities to a first executable prototype.
Since timing is such a crucial aspect of these systems, many techniques have been developed to verify this critical non-functional property. Among these, timing analysis is one of the most widely used techniques to ensure the safe operation of CRTES [116, 73, 74].
Timing analysis strives to provide guarantees on the maximum time needed to perform a given computation, providing a safe Worst Case Execution Time (WCET). However, one of the main obstacles to accurate timing analysis is the unpredictable timing behavior of modern computer architectures; with multi-stage pipelines and multi-level memory hierarchies, it is extremely difficult to accurately predict the execution time of a given program. This becomes almost impossible with parallel architectures (such as multi-core systems), due to the presence of shared resources. Being conservative and over-estimating does not solve the problem, because of the extremely wide gap between the worst and average cases. In this context, a probabilistic analysis approach can be beneficial: by enabling truly randomized behavior in all the components of a computer, one can define probabilistic metrics for the timing behavior of a system. Successful implementation of such systems would have a tremendous impact on the way critical systems are designed. The potential benefits in terms of cost of integration, verification, and certification of real-time software are enormous. For that matter, consider the development of the on-board computer of a satellite. This is an excellent example of a CRTES: if commands or alarms are not treated in the appropriate time frame, the entire satellite could be lost. A traditional approach would take a flight-proven processor (e.g. the LEON3) equipped with a real-time operating system (e.g. RTEMS) and statically schedule all software tasks to guarantee that the control software will always respond to events within a safe time frame. This requires detailed knowledge of the hardware and expensive software analysis. The behavior of multiple interacting tasks can lead to rare corner cases with catastrophic effects. The largest part of the cost of software
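The probabilistic idea above can be illustrated with a toy sketch: instead of a single hard bound, attach an exceedance probability to an execution-time estimate. This is only an empirical-quantile illustration of the concept; rigorous measurement-based probabilistic timing analysis relies on extreme value theory, not a plain quantile.

```python
# Toy probabilistic-WCET estimate: return a time bound t such that the
# empirical fraction of measured execution times exceeding t is at most
# the requested exceedance probability.

def pwcet_quantile(samples, exceedance_prob):
    """samples: measured execution times; exceedance_prob: e.g. 0.1
    means 'exceeded in at most 10% of the measurements'."""
    s = sorted(samples)
    k = int((1.0 - exceedance_prob) * len(s))
    return s[min(k, len(s) - 1)]
```

On randomized hardware the measured distribution is well behaved by construction, which is precisely what makes such probabilistic bounds meaningful in the first place.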
Abstract: Design of real-time embedded systems requires particular attention to the careful scheduling of applications onto the execution platform. Precise cycle allocation is often required to obtain full communication and computation throughput.
Our objective is to provide a UML profile where events, actions, and objects can be annotated with "logical" clocks. Initially, clocks are not necessarily related. The goal of the scheduling process (and algorithms) is to regulate the data and control flows within predictable bounds. To this end, it extracts the clock relations that best map the application onto a desired execution platform. "Clocks-as-schedules" then act as activation conditions, driving these internal events and actions according to the desired activation patterns. Extra communication and buffering latencies can be introduced in the process.
Incremental Validation of Real-Time Systems
D. Doose, Z. Mammeri
IRIT, Paul Sabatier University, Toulouse, France
Abstract: Real-time embedded systems are used in highly important or even vital tasks (avionic and medical systems, etc.), and thus have strict temporal constraints that need to be validated. Existing solutions use temporal logic, automata or scheduling techniques. However, scheduling techniques are often pessimistic and require an almost complete knowledge of the system, while formal methods can be ill-suited to manipulating some of the concepts involved in real-time systems.
The simulations being carried out in Matlab, the authors consider 4, 8, 12 and 16 processors. The algorithm parameter Np equals 100 throughout, and there are 200, 300 or 500 iterations depending on the size of the task set, which varies from 10 to 100. In fact, the population size should be fixed at a reasonably high value (Np = 100) in order to ensure output stability and convergence. The TFTS is run without faults first and then with fault injection. The algorithms making use of the GA, ACOA or PSO are simulated with fault injection only. Each simulation scenario is run 40 times and the obtained values are then averaged. The results are evaluated by means of both the rejection rate and the fitness function, represented as a function of the number of executed iterations. The simulations show that scheduling based on the GA, ACOA or PSO outperforms the TFTS. Scheduling based on the GA has faster convergence but is slower than scheduling with the ACOA. The algorithm using the PSO shows more uniform processor utilisation than the TFTS. Moreover, it can be seen that at least 8 iterations are required to schedule 10 tasks on 4 processors, and at least 50 iterations (for the GA), 200 iterations (for the ACOA) or 250 iterations (for the PSO) are necessary to place 100 tasks on 16 processors.
We believe that multiform time, introduced by reactive languages, is of prime importance for specifying constraints in real-time embedded systems. Additionally, UML is more and more present in industry to bridge the gap between domain experts, customers and developers. This paper introduced some efforts made in the context of the forthcoming UML profile called Marte to take multiform time into account in UML diagrams. The goal is to use UML visual editors to capture specifications and time constraints. Using logical clocks keeps the specification as close as possible to the domain experts' handbooks. Time analysis tools should then be able to extract Marte annotations to validate some constraints. On a simplified example, borrowed from the automotive domain, we have shown that with few time constraints we can capture enough information to perform multiform-time analysis. Some validations have been performed in ; these validations concern performance and cost requirements (processor speed, number of buffers and their size), as well as variability requirements (number of cylinders).
The main objectives of this article are to highlight the importance of early validation and virtual (or hybrid) co-execution in complex systems and to provide an associated solution. The first step is to define a common ontology as a high-level conceptual organization of test-based engineering and validation. As a second step, we introduce a real-time and interactive co-execution platform that provides heterogeneous model integration, model validation and monitoring. We discuss the technical aspects of the platform, with emphasis on openness, independent and consistent integration, and system scalability. Then we analyze how such a unified testing system can be leveraged to better coordinate and optimize risk and reduce time to market for complex projects across different abstraction levels. Finally, we explain how real-time and interactive co-execution may accelerate integration tests and system validation in the context of multidisciplinary projects through a collaborative platform.
For monitoring and actively controlling hydrodynamic and aerodynamic systems (e.g. an aircraft wing), it can be necessary to estimate and predict the flow around those systems in real time. We propose here a new method which combines data, physical models and measurements for this purpose. Very good numerical results have been obtained on 2- and 3-dimensional wake flows at moderate Reynolds numbers, even 16 vortex shedding cycles after the learning window.