focus; we are tackling the challenge of making runtime adaptation in AUTOSAR.
5. CONCLUSION AND FUTURE WORKS
Runtime adaptation in embedded real-time systems is a topic expected to grow in the coming years, gaining particular momentum in the context of designing FEVs. Yet there is a lack of techniques and tools for performing such adaptation. We presented ASLA, a novel framework that supports task-level reconfiguration features in AUTOSAR. We have built an experimental platform using three ARM-based STM32F4Discovery boards, with ERIKA-OS, an open-source AUTOSAR implementation, serving as the BSW. A CAN bus connects the three ECUs, and we are currently implementing the algorithms behind the ASLA framework to demonstrate the theoretical ideas. As future work, ASLA will be validated on a real case study from the SafeAdapt project. We will measure the overhead of the adaptation mechanisms, in particular their impact on timeliness.
4 Goal and Objectives
The goal of this tutorial is to present the fundamentals of our Squirel framework, which mainly provides a way to perform resource reservation in a modern component-based framework named Kevoree. Kevoree is a framework and language for designing heterogeneous and distributed adaptive applications. Squirel also relies on system-level containers such as LXC to enforce resource reservation at runtime. The tutorial introduces Squirel through a practical session aimed at a wide audience of software engineering practitioners. This event would be a great venue for people to learn by example about resource-aware development and deployment in a modern component-based framework. This tutorial is also intended to be a privileged moment to collect comments and feedback on our approach and realizations. This overall goal can be broken down into several concrete sub-objectives:
As far as personal control is concerned, Frontczak et al. highlight that many other factors influence the IEQ, including personal, climatic and building factors; this was mainly deduced from survey-based studies. Among the building factors, one of the most important is the perceived level of control over the environmental conditions: "Providing occupants with the possibility to control the indoor environment improves thermal and visual comfort as well as satisfaction with the air quality". These results are partly confirmed by Bluyssen et al. who, by means of questionnaires and comfort measurements, found that beyond the level of environmental variables, other important building factors affect the occupants' IEQ. The main factors, other than environmental parameters, are 'view' and 'control over environmental parameters'. Indeed, the correlation between the perceived IEQ level and the variables considered improved markedly when control variables (control over temperature, ventilation, sun shading, lighting and noise) were included in the analysis, and this effect is season-dependent. The study shows an improved IEQ when the occupants are able to control noise, sun shading and temperature in summer, and just noise in winter, while negative impacts were registered when occupants can control ventilation and lighting in both seasons. Another step towards the inclusion of personal control in a unified comfort theory is the work of Haldi et al. In this research, which makes use of a database of several buildings, control over environmental parameters is measured through occupants' adaptive actions on building components or building services (opening a window, switching a light, using a fan, or opening and closing a shading device), while the main environmental parameters are measured (temperature, solar irradiance, illumination level, presence of occupants).
The aim of the study is to elaborate a behavioural model of occupants' adaptiveness to be included in simulation models. This model, unlike the adaptive theory, interprets an occupant's actions as influencing not only the environment but also the occupant's own perception of comfort (as suggested by [35-36]). The inclusion of adaptive actions (interpreted as physical and physiological actions) into the adaptive thermal comfort theory improves the correlation coefficients between thermal neutrality and the outdoor temperature for different occupants. The effects of the different actions are then quantified, and a probabilistic model is provided to predict the effect of these adaptive actions on occupant comfort.
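The probabilistic flavour of such models can be illustrated with a minimal sketch: adaptive-action models in this line of work are commonly fitted as logistic regressions on environmental variables. The function name and the coefficients below are hypothetical, chosen only to show the shape of such a model, not values fitted in the cited study.

```python
import math

def window_opening_probability(theta_in, a=-10.0, b=0.35):
    """Illustrative logistic model: probability that an occupant
    opens a window at indoor temperature theta_in (degrees C).
    Coefficients a and b are hypothetical, not fitted values."""
    return 1.0 / (1.0 + math.exp(-(a + b * theta_in)))
```

With coefficients of this sign, the predicted probability of the adaptive action grows monotonically with indoor temperature, which is the qualitative behaviour such models are meant to capture.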
Figure 1: "ManagedContent" package.
Except at the level of the "ManagedContent" package, SPEM does not develop the guidance concept. Its definition and use across the various packages is not treated in detail. Thus, SPEM offers neither models of preset guidance nor directives for the use and selection of Guidances_Kind. Nevertheless, the "Method Content" package defines the notion of qualification (see Figure 2). A qualification describes the skills provided by a role or the qualifications required for a task. It should be noted that the qualification mentioned in the definition of a task or role is not expressed at the level of the "ProcessWithMethod" package.
Each of the 37 offices had a commercial thermostat integrated into the same building automation system (BAS). Each thermostat contained a passive-infrared (PIR) motion sensor (5 m range, 100° horizontal and 80° vertical coverage). The PIR motion sensors' movement detections were collected on a per-event basis and stored in a commercial BAS archiver. The occupancy data records in each room were generated from the movement detections using the adaptive time delay algorithm (Nagy et al. 2015). The principle behind this algorithm was introduced in Nagy et al. (2015), and its appropriateness was verified against a ground-truth occupancy data record in Gunay et al. (2016). Although it is likely that there were brief periods with more than one occupant in these offices (e.g., meetings), the primary users of these offices were assumed to be present whenever at least one occupant was detected. Therefore, the occupancy data records from these 37 private offices were assumed to represent 37 different individuals. Note that visits shorter than 30 min between 12 am and 4 am were attributed to the cleaning staff, and the corresponding data were discarded from the records.
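The idea of turning discrete motion detections into occupancy records can be sketched as follows. This is a simplified version with a fixed hold-off delay, whereas the adaptive time delay algorithm of Nagy et al. (2015) derives the delay from the data itself; the function name and the 900 s default are illustrative assumptions.

```python
def occupancy_intervals(event_times, delay=900):
    """Collapse PIR motion-detection timestamps (seconds) into
    occupancy intervals: the room is considered occupied from a
    detection until `delay` seconds pass with no further motion.
    A fixed delay stands in for the adaptive delay of Nagy et al."""
    intervals = []
    for t in sorted(event_times):
        if intervals and t - intervals[-1][1] <= delay:
            intervals[-1][1] = t  # motion soon enough: extend current interval
        else:
            intervals.append([t, t])  # long silence: start a new interval
    # hold each interval open for the delay after its last detection
    return [(start, end + delay) for start, end in intervals]
```

For example, detections at t = 0 s, 100 s and 2000 s with a 900 s delay produce two occupancy intervals, since the 1900 s gap exceeds the hold-off.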
in general. Prior work includes choosing an optimal power level to maximize throughput, and maximizing throughput in a direct-sequence spread spectrum network via a link-layer protocol termed the Transmission Parameter Selection Algorithm (TPSA), which provides real-time distributed control of transmission parameters such as power level, data rate, and forward error correction rate. An analysis of throughput as a function of the data rate in a CDMA system has also been presented. Most previous work has taken a very specific look at throughput in different wireless voice systems (TDMA, CDMA, GSM, etc.), taking many system parameters into account, as in the parameter optimization of CDMA data systems. We take a more general look at throughput by considering its definition for a packet-based scheme and how it can be maximized for the channel model being used. Unlike most work on this topic, our research focuses on the transmission of data as opposed to voice. Most work on data throughput analysis has been done in wired networks (e.g., Ethernet, SONET), and even there the analysis mostly relies on system-specific parameters. Many variables affect the throughput of a wireless data system, including the packet size, the transmission rate, the number of overhead bits in each packet, the received signal power, the received noise power spectral density, the modulation technique, and the channel conditions. From these variables we can calculate other important quantities such as the signal-to-noise ratio γ, the binary error rate P_e(γ), and the packet success rate f(γ). Throughput depends on all of these quantities. The rest of this paper is organized as follows. In Section II, our system model is introduced. In Section III, we derive an optimal adaptation of individual design parameters.
In Section IV, we conclude by describing future areas of research in multi-user throughput optimization.
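To make this dependence concrete, here is a minimal sketch of packet-level throughput under assumptions not fixed by the text above: BPSK over an AWGN channel with independent bit errors, so that P_e(γ) = 0.5·erfc(√γ) and f(γ) = (1 − P_e)^L. The numeric search for the best packet length is only an illustration of the kind of parameter optimization carried out analytically in Section III; the function names and default values are assumptions.

```python
import math

def packet_throughput(L, h, rate, gamma):
    """Throughput of a packet-based scheme: L total bits per packet,
    h overhead bits, channel bit rate `rate` (bit/s), SNR gamma.
    Assumes BPSK over AWGN and independent bit errors."""
    pe = 0.5 * math.erfc(math.sqrt(gamma))  # binary error rate P_e(gamma)
    f = (1.0 - pe) ** L                     # packet success rate f(gamma)
    return rate * (L - h) / L * f           # useful bits delivered per second

def best_packet_length(h, rate, gamma, lmax=20000):
    """Grid search for the packet length that maximizes throughput."""
    return max(range(h + 1, lmax),
               key=lambda L: packet_throughput(L, h, rate, gamma))
```

The trade-off is visible in the formula: a longer packet amortizes the h overhead bits better, but lowers the packet success rate, so throughput is maximized at an intermediate length that shrinks as the channel worsens.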
To improve system performance, we proposed in an earlier work an approach where a critical task can run in parallel with less critical tasks, as long as the real-time constraints are met. When no further interference can be tolerated, the proposed run-time control suspends the low-criticality tasks until the critical task terminates. In an automotive context, the approach translates to a highly critical partition, namely a classic AUTOSAR one, running on one dedicated core, with several cores running less critical Adaptive AUTOSAR application(s). We briefly describe the design of our proven-correct approach. Our strategy is based on a graph grammar to formally model the critical task as a set of control flow graphs, on which a safe partial WCET analysis is applied and used at run-time to control the safe execution of the critical task.
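The shape of such a run-time control decision can be sketched as follows. This is not the proven-correct controller itself: the function name is hypothetical, and the fixed interference inflation factor stands in for the safe partial WCET bounds that the actual approach derives from the control flow graphs.

```python
def control_step(remaining_wcet, elapsed, deadline, low_crit_running):
    """One decision of a run-time controller: keep low-criticality
    tasks running only while the critical task's remaining WCET,
    inflated by an assumed worst-case interference factor, still
    fits in the time left before the deadline (illustrative sketch)."""
    INTERFERENCE = 1.5  # assumed worst-case slowdown under contention
    slack = deadline - elapsed
    if low_crit_running and remaining_wcet * INTERFERENCE > slack:
        return "SUSPEND_LOW_CRIT"
    return "CONTINUE"
```

Early in the critical task's execution the slack is large and the low-criticality tasks keep running; as the deadline approaches, the inflated remaining WCET no longer fits and they are suspended, matching the behaviour described above.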
AUTOSAR is a layered architecture, divided into four levels. The bottom layer corresponds to the hardware. Above it stands the basic software layer, which contains low-level services and the operating system. This layer contains not only standard elements but also a number of ECU-specific components. Between the basic software layer and the application software components, the Run Time Environment (RTE) acts as an ad-hoc middleware. The RTE handles communication between software components and between software components and the basic software. Finally, the applicative layer contains specific components, which are unaware of the lower layers, and which implement functions. Thus most components can be reused more easily on different targets (with the exception of sensor and actuator components).
LAAS-CNRS, Université de Toulouse
Abstract—Automotive embedded systems need to cope with antagonistic requirements: on the one hand, users and market pressure push car manufacturers to integrate more and more services that go far beyond the control of the car itself. On the other hand, recent standardization efforts in the safety domain have led to the development of the ISO 26262 standard, which defines means and requirements to ensure the safe operation of automotive embedded systems. In particular, it led to the definition of ASILs (Automotive Safety Integrity Levels), i.e., it formally defines several criticality levels. Handling the increased complexity of new services makes new architectures, such as multi- or many-core, appealing choices for the car industry. Yet these architectures provide a very low level of timing predictability due to shared resources, which contradicts the timing guarantees required by ISO 26262.
6.2 Results and Discussion
In Fig. 1, the black curves represent the system without cooperation, the light gray curves the system with cooperation, and the dark gray curves the SoS according to SApHESIA. The top-left plot shows the mean battery level of the robots, the top-right plot the number of boxes in the environment, and the bottom plot the number of robots still alive. The comparison of the three systems clearly shows that the exchange of criticality improves the systems' efficiency. Indeed, the underlying cooperation mechanism based on this exchange allows a greater number of agents to survive. Consequently, the cooperative system covers a larger part of the grid, thus improving the system's ability to detect boxes. Furthermore, the use of our cooperation process for SoS enables cooperation at a higher level, as the reification of each group of robots as a component-system improves global performance (number of boxes, battery level and number of robots alive). These results legitimate the study of cooperation in SoS and are encouraging for inter-AMAS cooperation.
For the two other applications, we considered real industrial use-cases and focused on quantitative results. We applied only SA and GA, as TS failed to find the optimum even for the small application. Recall that we enforce load-balancing constraints on each solution, that data for inter-core communication are allocated in shared memory, and that the cost function minimizes inter-core communication overhead (using the IOC). As the application size grows, it becomes impossible to enumerate all solutions exhaustively as we did for the small application, so the optimal solution cannot be determined exactly. We therefore used a different criterion to evaluate the quality of the optimization methods: the standard deviation between the costs of the solutions obtained by each algorithm and the cost of the best solution it ever found. The results for the two applications are shown in Table VI and Table VII. They show that GA can no longer find better solutions than SA; moreover, the run time of GA is much longer. The average run time of both algorithms increases with the application size, as shown in Figure 7.
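One reading of this quality criterion can be sketched as follows; the exact formula is not spelled out in the text, and here it is taken as the root-mean-square deviation of the costs found over repeated runs from the best cost the algorithm ever found. The function name is an assumption.

```python
import math

def deviation_from_best(costs, best_cost):
    """Root-mean-square deviation of run costs from the best cost
    ever found by the algorithm -- a proxy for solution quality
    when the true optimum cannot be determined exhaustively."""
    return math.sqrt(sum((c - best_cost) ** 2 for c in costs) / len(costs))
```

A smaller value means the algorithm lands close to its own best solution on most runs, i.e., it is more consistent even though the global optimum is unknown.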
Analytic Target Cascading has been proven to guarantee that a distributed system converges and that the converged value is a globally optimal solution. Additionally, its hierarchy allows for traceability of the design process and provides for integration of marketing and business systems while establishing clear relationships between design subsystems. The main advantage of CO is that it does not require system analysis, but multidisciplinary feasibility may not be satisfied; thus some intermediate designs could be infeasible. CSSO guarantees both individual and multidisciplinary feasibility at each iteration, but requires all disciplines to indirectly share all constraints. Unlike other MDO formulations, BLISS keeps common variables constant at the lower level and optimizes only the common variables at the upper level. This BLISS formulation is similar to how NASA/Jet Propulsion Laboratory's Advanced Projects Design Team (Team X) [14, 15] approaches aerospace mission design optimization.
Fig. 2. The switching scheme of the extremal control (ω_1(t), ω_3(t)), where the points A, B, C, D, E, F, G and H correspond to those in Figure 1(a).
We now claim that the number of switchings is 0. Suppose that there is an optimal trajectory with L(0) = A whose number of switchings is greater than or equal to 4. Let T_s be
An interesting application of our method is satellite image segmentation. Combining spectral and texture cues according to their discrimination power provides a powerful framework for satellite images. We applied our algorithm to various panchromatic and multi-spectral images acquired by the SPOT-3 (4c), SPOT-5 (4a, 5a, 6a), IKONOS (6c) and QuickBird (4b, 5b, 5c, 6b) satellites. Results for images without texture are illustrated in Figure 4: smooth regions such as sea, agricultural areas, low-resolution urban areas, green areas and ground are cleanly segmented. Figure 5 shows segmentation results for textured images: urban areas of different densities, vegetation and ground are well segmented. Finally, Figure 6 illustrates the capabilities of our approach on images containing both non-textured regions (e.g., agricultural areas, ground, river) and textured regions (e.g., mountains, urban areas).
The advantages of using the IIM to model systems are multiple, including: 1) the normalization of health indicators into inoperability allows modeling systems with heterogeneous components (different health indicators, value ranges, degradation patterns and failure thresholds); 2) the IIM describes a direct relationship between the mission profile effects and the degradation evolution, which eases adapting the mission profile to extend the system's life; 3) multiplying the inoperability by 100 gives the component's degradation as a percentage of its failure threshold, which facilitates communication with decision-makers.
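The normalization in point 1) can be sketched as follows: any health indicator is mapped onto a common [0, 1] inoperability scale, 0 at nominal health and 1 at the failure threshold. The function name and the linear mapping are illustrative assumptions; real degradation patterns may require a nonlinear map.

```python
def inoperability(health, nominal, failure_threshold):
    """Normalize a heterogeneous health indicator into inoperability:
    0 when the component is as good as new, 1 at its failure
    threshold (assumed linear mapping). Multiplying by 100 gives
    the percentage degradation quoted to decision-makers."""
    q = (nominal - health) / (nominal - failure_threshold)
    return min(max(q, 0.0), 1.0)  # clamp to the [0, 1] scale
```

For example, a component with nominal health 100 and failure threshold 20 that currently reads 60 has inoperability 0.5, i.e., 50% of its degradation budget consumed, regardless of what physical quantity the indicator measures.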
HAL Id: hal-02893131
Submitted on 16 Jul 2020
Table 3. Fold-change of discriminant metabolites identified by a 1H NMR metabolomics approach in aqueous liver extracts of Carassius auratus exposed to a pesticide mixture (PEST), increased temperature (TEMP) or both (PEST*TEMP), compared to the CONTROL group at T16 d.
Figure 3 Non-replicated Use-Case
Figure 4 Optimal configurations

Once the simple use-case had been fixed, other use-cases were obtained by replicating it: each path, runnable, ECU and BUS is replicated. Hence, replicating by 11 yields a use-case with 55 runnables, 11 ECUs and 11 buses. We also connected each ECU to the original BUS and to all the replicas. Note that for the replicated use-cases, the set of optimal configurations is characterized by each ECU containing only one path (no inter-ECU communication). Figure 5 shows the runtime for the MILP and the GA; indexes on the horizontal axis give the replication factor. On average, the MILP returns results in a shorter time. However, Figure 6 shows that when the architecture was multiplied 6, 9, 10 and 11 times, the solver returned no solution because of an error. For factors 5, 7 and 8, CPLEX finished with an "out of memory" exception; nevertheless, for factor 5 the returned result is optimal, which is not the case for 7 and 8. The GA was able to return the optimal solution for all replication factors. We ran similar tests with weights of 0.5 for the end-to-end responses and 0.5 for the memory optimization. The set of optimal solutions for the non-replicated use-case contains only one configuration, in which all the runnables are partitioned into one task.