HAL Id: inria-00076519
Submitted on 24 May 2006
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
In this paper, we present a formal framework to support the rigorous design of software architectures, focusing on communication aspects at the architecture level. It is based on (i) a metamodel describing high-level architectural concepts in a component-port-connector fashion, centered on communication styles, and (ii) a formal definition of those concepts and their semantics following a set of properties (specifications). The former offers a transparent structural definition of communication styles (mainly message passing and remote procedure call mechanisms). The latter supports the application designer in a rigorous development process to model and analyze architectural communication styles. In the scope of this paper, we propose to use Alloy [10] for formalizing those communication styles and for verifying conformance of the communication style at the model level. The formal specification and verification of a software architecture is represented as an Alloy module based on a set of reusable models, namely connectors corresponding to each of the considered communication styles. We provide a set of reusable connector libraries, together with a set of properties, to define architectures for systems with explicit communication models such as message passing and remote procedure calls.
Most of the existing semantics are based on the standard's definitions for the computation of traces; thus they are not suitable for SDs modelling the behaviour of distributed systems. Moreover, most existing work does not deal properly with some CFs, nor with nested CFs. Indeed, they impose strict hypotheses to avoid inconsistencies due to the use of these CFs. In our previous work, we explained the restrictions that limit the expressive power of these CFs. To overcome these insufficiencies, we proposed an operational semantics that is, on the one hand, based on an extended causal semantics suitable for UML 2.x SDs equipped with the most popular CFs for modelling distributed systems and, on the other hand, supports guards straightforwardly, since it is given as a guarded transition system. This operational semantics can be easily implemented and can serve as a basis for refinement checking in our ongoing work.
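To make the "guarded transition system" form concrete, here is a minimal sketch of such a system driving an `alt` combined fragment. The event names, guards, and the tiny interpreter are illustrative assumptions, not the paper's actual rules.

```python
# Minimal guarded transition system: transitions fire only when their
# guard holds under the current variable valuation.
class GuardedTS:
    def __init__(self, initial):
        self.state = initial
        self.transitions = []            # (source, guard, event, target)

    def add(self, source, guard, event, target):
        self.transitions.append((source, guard, event, target))

    def step(self, event, env):
        """Fire the first enabled transition for `event` in the current
        state whose guard holds under the valuation `env`."""
        for source, guard, ev, target in self.transitions:
            if source == self.state and ev == event and guard(env):
                self.state = target
                return True
        return False                     # event not enabled: nothing fires

# A hypothetical two-branch alt fragment: send m1 if x > 0, else send m2.
ts = GuardedTS("q0")
ts.add("q0", lambda env: env["x"] > 0, "send_m1", "q1")
ts.add("q0", lambda env: env["x"] <= 0, "send_m2", "q2")
ts.step("send_m1", {"x": 5})
# ts.state is now "q1"; with x = -1, only send_m2 would have been enabled.
```

Because guards are evaluated against an explicit valuation at each step, the semantics handles guarded fragments directly instead of post-filtering traces.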
5 Conclusion and future work
We applied a hybrid control architecture on a mobile robot to obtain stable trajectory tracking while avoiding obstacles. Indeed, the robot is controlled by elementary continuous controllers according to the sub-tasks to accomplish (trajectory tracking, obstacle avoidance), and switching from one controller to another is triggered by discrete events. We saw that hard switches cause chattering and are not sufficient to ensure safe navigation. Therefore, we propose to use multiple Lyapunov functions for hybrid systems to design a stable hybrid control architecture. In addition to elementary stable controllers for the two main sub-tasks, we introduce a third controller which ensures the second sufficient condition of the multiple Lyapunov function approach (cf. Section 2). Simulations show that our architecture prevents useless switches, thus guaranteeing safe navigation for the robot. Applying this stable control architecture in a dynamic environment (e.g., with moving obstacles) will be the subject of future work. Application to multi-robot systems navigating in formation also seems interesting. The objective is to make each robot able to avoid an obstacle before rejoining the convoy.
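The switching condition above can be sketched in a few lines: under the multiple-Lyapunov-function framework, a controller may only be re-activated if its Lyapunov value has not grown since it was last active. The Lyapunov functions and the 1-D "state" below are toy assumptions, not the paper's robot model.

```python
# Sketch of an MLF-based switching supervisor: grant a switch to
# controller i only if V_i at the switching instant does not exceed
# V_i at the instant controller i was last deactivated.
class StableSwitcher:
    def __init__(self, lyapunov_fns, initial=0):
        self.V = lyapunov_fns          # one Lyapunov function per controller
        self.active = initial
        self.last_value = {}           # V_i recorded when controller i was left

    def request_switch(self, target, state):
        v_target = self.V[target](state)
        if target in self.last_value and v_target > self.last_value[target]:
            return False               # would break the decreasing MLF sequence
        self.last_value[self.active] = self.V[self.active](state)
        self.active = target
        return True

# Toy 1-D example: V0 measures distance to the goal, V1 to a safe orbit.
V0 = lambda x: x ** 2
V1 = lambda x: (x - 1.0) ** 2
sw = StableSwitcher([V0, V1])
sw.request_switch(1, 2.0)             # granted: controller 1 never active before
granted = sw.request_switch(0, 3.0)   # denied: V0 grew since controller 0 was left
```

Denying the second request is exactly what suppresses the useless back-and-forth switches that cause chattering.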
Abstract— Access control concerns in MANETs are serious and considered a crucial challenge for operators who aim to exploit the unrivaled capabilities of such networks for different applications. We propose a novel hierarchical distributed AAA architecture for proactive link-state routing protocols, notably OLSR. This proposal contains a lightweight and secure design of an overlay authentication and authorization paradigm for mobile nodes, as well as a reliable accounting system enabling operators to charge nodes based on their connection duration. We also suggest a hierarchical distributed AAA (Authentication, Authorization, and Accounting) server architecture with a resource- and location-aware election mechanism. Moreover, this proposal noticeably mitigates known OLSR security issues and also defines a node-priority-based quality of service.
Therefore, health assessment functions have become a major issue for complex system designers. Among these functions, the prognostic function aims at predicting the future health of the system, which helps plan production or maintenance tasks. Among the difficulties hindering the implementation of prognostic functions in complex systems is the number of hardware or software components, devices, functions or subsystems they contain. These components are designed, manufactured and assembled by different industrial partners (OEMs, suppliers, subcontractors, etc.). Each partner holds a part of the knowledge needed to carry out the prognosis of the complex system. However, some pieces of this knowledge belong to the partners' own know-how and thus cannot be shared. To tackle this difficulty, a decentralized/distributed architecture can be proposed. Indeed, such architectures enable the implementation of Remaining Useful Lifetime (RUL) assessment and prognostic functions closer to the components, devices, functions or subsystems. Therefore, each OEM, supplier or subcontractor can provide RUL assessment and prognostic functions for its own equipment. Nevertheless, those functions have to collaborate in order to ensure the convergence of the prognostic process of the complex system as a whole.
2 Systems and Computer Engineering, Carleton University, Ottawa, Ontario, Canada
Abstract. Building distributed computing systems involves complex concerns: integrating a multitude of communication styles, technologies (IoT, cloud, big data, etc.) and stakeholders (architects, developers, integrators, etc.), and addressing a multitude of application domains (smart cities, health, mobility, etc.). Existing architectural description languages fail to rigorously bridge the gap between the abstract representation of communication styles and those supported by existing execution infrastructures. In this paper, we aim at specifying the software architecture of distributed systems using an approach combining semi-formal and formal languages to build reusable model libraries representing communication solutions. Our contribution is twofold. First, we propose a metamodel describing high-level architectural concepts in a component-port-connector fashion, focusing on communication styles. Second, we attempt to formalize those concepts and their semantics following a set of properties (specifications) to check architectural conformance. To validate our work, we provide a set of reusable connector libraries, together with a set of properties, to define architectures for systems with explicit communication models like message passing and remote procedure calls, which are common to most distributed systems.
In the context of distributed systems, a major issue is precisely the implementation of collaboration or interaction between components. Unfortunately, systems and algorithms for communication among remote components can be highly dependent on non-functional constraints (from the point of view of the “interaction semantics”): the location and number of interacting components, the state of the network used, the need to encrypt communication or to have a fault-tolerant system, etc. For instance, in an event broadcast system, whether or not the transmitted data are encrypted does not change the way the components use the system; they will still call the same services. But the implementations of these two versions of the same interaction abstraction are quite different.
System Modeling Language for Security - SysML-Sec
We described the main building blocks of the security requirement engineering methodology in the previous chapter. To pave the way for system-wide SREP, we first have to breathe life into a collection of conceptual stages following the mapping of stakeholders' needs into product functions and use cases, and preceding the design of these functions across the engineering disciplines (i.e., hardware, software, etc.). In this context, a number of modeling languages (e.g., UML, SysML) have been proposed to help engineers from different system development stages communicate, share and compare their perspectives, and reason about properties of a system. These modeling languages for software engineering practices express different concepts to serve different development purposes, such as use case modeling, requirement modeling, and protocol modeling. However, security issues involve special concerns that these traditional software engineering languages do not consider. Consider, for example, a general behavior modeling notation that expresses interactions of entities in the system without considering the harmful behavior of an adversary. Thus, the models do not convey the impact of the adversary's malicious behavior on requirements, design, and architecture to the next phases of the system development lifecycle. As we have reviewed in Section 2.2, several security modeling languages have been developed to model specific security aspects such as threats, vulnerabilities, assets, and security requirements. A number of extensions of UML (e.g., UMLsec, SecureUML, Misuse cases, Abuse cases) that allow expressing security-relevant information within the diagrams of a system specification have been proposed. Yet, to the best of our knowledge, none of them provides the expressivity required to deal effectively with system-wide SRE.
Another major group of contributions to the conceptual modeling of security requirements, such as KAOS and Secure Tropos, have defined their own graphical formalisms, each of which allows expressing security-relevant information (e.g., goals, anti-goals, requirements, obstacles).
than the dimension of the raw data, performing signal dimension reduction at the front of the processing pipeline would potentially reduce the data-conversion rate and the computation complexity for all downstream processors. For example, consider the human visual system, which has evolved to be exceptionally efficient at targeted tasks such as identifying a dangerous situation in complex natural images. Instead of capturing and processing a high-resolution image of the entire scene, the visual sensory system ignores details in the information-rich images and only extracts a few key features such as speed, color, size and shape to enable time-critical decision making [1, 2]. Similarly, for speech recognition, the essential speech features lie in the modulated harmonics of a person's voice, which occupy a small portion of the full speech spectrum. This property can be exploited to enable low-complexity processing through early-stage feature extraction.
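As a minimal sketch of early-stage feature extraction, the snippet below collapses a full spectrum into a handful of band energies; the band edges and the 440 Hz test tone are illustrative assumptions.

```python
import numpy as np

def band_energy_features(signal, fs, bands):
    """Extract per-band spectral energies as a compact feature vector.

    Instead of forwarding the full spectrum downstream, only a few band
    energies are kept, illustrating front-of-pipeline dimension reduction.
    """
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in bands])

# Hypothetical example: a 440 Hz tone sampled at 8 kHz for one second.
fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)
feats = band_energy_features(tone, fs, bands=[(0, 300), (300, 600), (600, 4000)])
# The 3-element feature vector replaces the 4001-bin spectrum; virtually
# all energy lands in the 300-600 Hz band containing the tone.
```

Downstream classifiers then operate on three numbers per frame rather than thousands of spectral bins, which is the data-rate reduction the paragraph describes.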
Unified Shared Causal History Model (SCHO)
The Shared Causal History Ontology (SCHO), shown in Figure 1, represents all the concepts of SCHO: changesets, patches, push and pull feeds. This ontology enables SCHO users to query the current state of the document and its complete history using semantic queries. The ontology is populated through the user's interaction with the system via five basic operations: createPatch, createPush, push, createPull and pull. These operations are inspired by the Push/Pull/Clone model used in distributed version control systems such as Git, Mercurial and Bazaar (Allen et al., 1995). These operations create instances of the SCHO ontology. The details and the algorithms of each operation are presented in the following section.
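To illustrate the flavor of these operations, here is a toy sketch of patch creation, publication and pulling between two sites. The class and field names are hypothetical; the real model is an RDF ontology populated with instances, not Python objects.

```python
# Toy sketch of createPatch, createPush/push and createPull/pull:
# each site keeps an ordered causal history and a published push feed.
import itertools

_ids = itertools.count(1)

class Site:
    def __init__(self, name):
        self.name = name
        self.patches = []        # local causal history, in order
        self.push_feed = []      # changesets published by this site

    def create_patch(self, change):
        patch = {"id": next(_ids), "change": change, "author": self.name}
        self.patches.append(patch)
        return patch

    def create_push(self):
        # Publish every local patch not yet present in the push feed.
        published = {p["id"] for p in self.push_feed}
        changeset = [p for p in self.patches if p["id"] not in published]
        self.push_feed.extend(changeset)
        return changeset

    def pull(self, other):
        # Integrate the other site's feed, skipping patches already
        # known and preserving their causal order.
        known = {p["id"] for p in self.patches}
        for p in other.push_feed:
            if p["id"] not in known:
                self.patches.append(p)

a, b = Site("A"), Site("B")
a.create_patch("insert 'x'")
a.create_patch("delete 'y'")
a.create_push()
b.pull(a)
# b.patches now contains A's two patches, in causal order.
```

In the actual system, each of these calls would additionally create the corresponding SCHO ontology instances so the history remains queryable.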
At present, although SOA claims to include autonomy as one of its properties, autonomy remains only a desirable feature. On the other hand, MAS claims to have a distribution property, and the AOA claims the capability to solve heterogeneity issues. However, this distribution property is based on the assumption that the distributed entities are all implemented as agents and run on the same type of agent platform. This assumption is not always true in reality and limits the open and heterogeneous nature of Open CDS. Based on the above insight, this paper proposes an amphibian Service-Oriented Agent Model (SOAM) by extending the CIR-Agent model, so that agents can survive not only in agent-oriented environments but also in service-oriented environments, and all three design issues are solved.
Abstract. Nowadays, multi-agent simulations are often used to study complex systems, in particular in the social sciences. However, most of the time, the developed models propose a simplistic modeling of the actors' behavior because of the lack of simple tools allowing the development of more complex agents. To fill this gap, this paper presents a cognitive agent architecture coupled with an emotional engine, usable through a modeling language that is easy for non-computer-scientists to understand. This ease of use is illustrated through an example dealing with the evacuation of a city.
capabilities – see Figure 1. Those devices capture information about their environment – for example humidity, noise, location, air quality, or temperature – and communicate through a mobile ad-hoc network. The devices carried by firefighters could at times lose wireless connectivity but are always able to independently sense, compute and actuate. However, mobile devices are not necessarily able to perform compute-intensive tasks due to battery, CPU and memory restrictions. Team leaders carry devices with better computation and battery capabilities that aggregate data so that useful global information can be extracted. Additionally, there are data centers in the vicinity of these sensing and/or actuating devices, which are not restricted in terms of battery or computing power. In such a setting, mobile devices should be able to offload compute-intensive tasks with real-time requirements to data centers in the vicinity, so that the overall system overcomes its inherent risks – increased latencies, slow response times, increased delays, disconnection from the network, or battery drainage – that might hinder the real-time and resource-demanding scenarios. Additionally, cloud environments accessed through the internet could be employed to execute non-real-time tasks such as Big Data analysis, data aggregation that allows coordination with other firefighting departments, or training of prediction models.
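The device/edge/cloud split described above can be sketched as a simple placement rule; the cycle threshold and task fields are illustrative assumptions, not part of the described system.

```python
# Sketch of the offloading decision: run locally unless the task is
# compute-intensive; real-time heavy tasks go to a nearby data center,
# non-real-time heavy tasks (e.g. Big Data analysis, model training)
# go to the distant cloud.
def choose_target(task, edge_reachable, cloud_reachable):
    if task["cycles"] <= 1e8:          # cheap enough for the mobile device
        return "device"
    if task["real_time"]:
        # Only a vicinity data center keeps latency within real-time bounds.
        return "edge" if edge_reachable else "device"
    if cloud_reachable:
        return "cloud"
    return "edge" if edge_reachable else "device"

target = choose_target({"cycles": 1e9, "real_time": True},
                       edge_reachable=True, cloud_reachable=True)
# A heavy real-time task is placed on the nearby data center ("edge").
```

When the ad-hoc network partitions and the edge becomes unreachable, the rule degrades gracefully to local execution, matching the requirement that devices always remain able to sense, compute and actuate.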
Figure 8. “Deny” based optimisations.
The second optimisation applies to «Permit» rules. These rules have to be duplicated between the source and the destination of each communication that has to be controlled. The basic distribution algorithm provides this property. However, when the communication to be controlled takes place between two internal devices, our basic distribution algorithm distributes these rules from the root of our network tree down to the first device interconnecting the two devices. The rules above that device are useless and have to be removed. Moreover, removing these rules also provides address-spoofing protection, because communications coming from the outside with spoofed internal addresses will be discarded, since they are not explicitly permitted.
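A minimal sketch of this pruning step: keep the «Permit» rule only on devices along the path between the two endpoints, up to their lowest common ancestor in the network tree. The tree layout and device names are hypothetical.

```python
# Prune duplicated Permit rules: for internal-to-internal traffic, no
# device above the lowest common ancestor needs the rule.
def path_to_root(tree, node):
    path = [node]
    while node in tree:                # tree maps child -> parent
        node = tree[node]
        path.append(node)
    return path

def devices_keeping_rule(tree, src, dst):
    """Devices that must keep the Permit rule for src -> dst traffic."""
    up_src = path_to_root(tree, src)
    up_dst = path_to_root(tree, dst)
    common = set(up_src) & set(up_dst)
    lca = next(n for n in up_src if n in common)   # lowest common ancestor
    keep = set(up_src[:up_src.index(lca) + 1]) \
         | set(up_dst[:up_dst.index(lca) + 1])
    return keep

# Hypothetical tree: hosts h1, h2 behind switch s1; s1 and s2 behind root r.
tree = {"h1": "s1", "h2": "s1", "s1": "r", "s2": "r"}
keep = devices_keeping_rule(tree, "h1", "h2")
# Rule kept on h1, h2 and s1 only; pruned from the root r.
```

Since the root no longer permits this internal flow, an outside packet carrying one of these internal addresses matches no Permit rule there and is discarded, which is the spoofing protection mentioned above.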
other reasons, like information filtering. To detail this last one, information filtering here refers to bottlenecks due to post- and pre-filtering: in the first case, recommendation engines typically favour a small number of popular items; the last one is sampling bias. The study continues with the temporal evolution of the previously cited popularity distributions. As opposed to standard VoD services, where content popularity fluctuation is rather predictable, UGC video popularity can be ephemeral and behaves much more unpredictably. The authors observed an ephemeral popularity for young videos during the few days after upload. While users' interests seem video-age insensitive on a gross scale, over a one-day period roughly 50% of the top twenty videos are recent ones, and as the time window increases, the median age shifts towards older videos, confirming the ephemeral popularity of young videos. When addressing the temporal evolution of popularity, the authors found that the probability of a given video being requested decreases sharply over time. In fact, this indicates that if a video did not get enough requests during its first days, it is unlikely to get many requests in the future. Less than 1% of new videos make it to the top popular list; the rest see their popularity dim over time, creating a massive number of very limited niche audiences, with their chances of becoming popular in the future being barely existent, although a few very noticeable exceptions exist due to rare and circumstantial phenomena like in this
The FRIENDS system is a very suitable platform for experimenting with object orientation and metalevel programming in various directions. Among them, we have identified the following. The first is reusing metaobjects to implement new mechanisms with respect to various fault assumptions, and evaluating how much can be reused and what the impact on the existing set of classes is. A more powerful metaobject protocol will be developed for a more efficient implementation of existing mechanisms and also for the implementation of other meta-functional properties, mainly related to real-time. We are currently improving the underlying distributed object-oriented support using CORBA layers and implementing a metaobject protocol in this support. Some experiments have already been done using Open C++ V2. Another interesting aspect is to analyze alternative ways of designing the metalevel through different chainings of metaobjects and to allow dynamic connection between objects and metaobjects. Engineering the metalevel is a long-term activity. Considering more advanced computational models and evaluating to what extent this architecture enables easier validation are two future directions of this work within the DeVa project.
Distributed storage systems rely heavily on replication to ensure data availability as well as durability. In networked systems subject to intermittent node unavailability, replicas need to be maintained (i.e. replicated and/or relocated upon failure). Repairs are well known to be extremely bandwidth-consuming, and it has been shown that, without care, they may significantly congest the system. In this paper, we propose an approach to replica management accounting for node heterogeneity with respect to availability. We show that by using the availability history of nodes, the performance of two important facets of distributed storage (replica placement and repair) can be significantly improved. Replica placement is achieved based on nodes that are complementary with respect to availability, improving the overall data availability. Repairs can be scheduled thanks to an adaptive per-node timeout according to node availability, so as to decrease the number of repairs while reaching comparable availability. We propose practical heuristics for these two issues. We evaluate our approach through extensive simulations based on real and well-known availability traces. Results clearly show the benefits of our approach with regard to the critical trade-off between data availability, load balancing and bandwidth consumption.
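The two heuristics can be sketched as follows: greedily pick replicas whose availability histories complement each other, and give highly available nodes a longer repair timeout. The history encoding and scoring rule are illustrative assumptions, not the paper's exact algorithms.

```python
# (1) Complementary placement: maximize the fraction of time slots
# covered by at least one replica holder.
def complementary_score(histories):
    slots = zip(*histories)            # one tuple of 0/1 flags per time slot
    covered = [any(slot) for slot in slots]
    return sum(covered) / len(covered)

def place_replicas(candidates, k):
    """Greedy: repeatedly add the node that best complements the set."""
    chosen = []
    pool = dict(candidates)            # name -> availability history (0/1 list)
    while pool and len(chosen) < k:
        name = max(pool, key=lambda n: complementary_score(
            [h for _, h in chosen] + [pool[n]]))
        chosen.append((name, pool.pop(name)))
    return [name for name, _ in chosen]

# (2) Adaptive timeout: stable nodes get longer timeouts, so their
# transient absences do not trigger useless (bandwidth-hungry) repairs.
def repair_timeout(history, base=1.0):
    availability = sum(history) / len(history)
    return base * (1.0 + availability / (1.0 - availability + 1e-9))

# Hypothetical histories: 1 = online during that time slot.
nodes = {"a": [1, 1, 0, 0], "b": [1, 1, 1, 0], "c": [0, 0, 1, 1]}
replicas = place_replicas(list(nodes.items()), k=2)
# Greedy picks "b" first (0.75 coverage alone), then "c" (full coverage).
```

Picking `c` over `a` is the point of the heuristic: `a` overlaps `b`'s uptime, whereas `c` covers exactly the slots `b` misses.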
Abstract— Existing electronic healthcare systems based on PACS and hospital information systems are designed for clinical practice. Yet, for security, technical and legacy reasons, they are often weakly connected to computing infrastructures and data networks. In the context of the RAGTIME project, grid infrastructures are studied to propose a cheap and reliable infrastructure enabling computerized medical applications. This raises various concerns, in particular in terms of security and data privacy. This paper presents the results of this study and proposes a complete grid-based architecture able to process medical images for assisted diagnosis in a secure way. Using this infrastructure, care practitioners are able to execute the application from any machine connected to the Internet, thereby improving their mobility. Medical image analysis jobs are certified to be correct using the latest advances in result checking and fault-tolerant algorithms. The architecture has been successfully deployed and validated on the Grid5000 large-scale infrastructure.