Abstract. The requirement for higher Security and Dependability (S&D) of systems is continuously increasing, even in domains traditionally not deeply involved in such issues. In our work, we propose a modeling environment for pattern-based secure and dependable embedded system development by design. Here we study a general scheme for representing S&D design patterns whose intention specification can be defined using a set of local properties. We propose an approach that associates Model Driven Engineering (MDE) and formal validation to obtain a common representation for specifying patterns across several domains. The contribution of this work is twofold. On the one hand, we use model-based techniques to capture a set of artifacts to specify patterns. On the other hand, we introduce a set of artifacts for the formal validation of these patterns in order to guarantee their correctness. As an illustration of the approach, we study the authorization pattern.
Unfortunately, most S&D patterns are expressed as informal indications on how to solve certain security problems, using the same template as traditional patterns. These patterns do not include sufficient semantic descriptions, including those of security and dependability concepts, for automated processing within a tool-supported development process or for extending their use. Furthermore, because patterns are implemented manually, the problem of incorrect implementation (the most important source of security problems) remains unsolved. For these reasons, model-driven software engineering can provide a solid basis for formulating design patterns that incorporate security and dependability aspects and for offering such patterns at several layers of abstraction. We will use metamodeling techniques for representing and reasoning about S&D patterns in model-based development. Note, however, that our proposition is based on the previous definition and on the classical GoF [5] specification, which we deeply refined in order to fit the S&D needs.
In our previous work, we studied pattern modeling frameworks and proposed methods to model security and dependability aspects in patterns and to validate whether these still hold in RCES after pattern application. The question remains at which stage of the development process to integrate S&D patterns. In our work, we promote a new discipline for system engineering that takes the pattern as its first-class citizen: Pattern-Based System Engineering (PBSE). PBSE addresses challenges similar to those studied in software engineering, focusing on patterns, and from this viewpoint addresses two kinds of processes: the process of pattern development and that of system development with patterns. In order to interconnect these two processes, we promote a structured model-based repository of S&D patterns and property models. Therefore, instead of defining new modeling artifacts, which is usually time- and effort-consuming as well as error-prone, the system developer merely needs to select appropriate patterns from the repository and integrate them in the system under development.
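The selection step described above can be illustrated with a small sketch. This is not the authors' metamodel or repository format; all class names, pattern names, and property labels below are hypothetical, chosen only to show how a developer might query a pattern repository by required S&D property.

```python
# Hypothetical sketch of an S&D pattern repository lookup (not the
# authors' actual metamodel): a pattern carries an intent and a set of
# local S&D properties, and the developer selects patterns by property.
from dataclasses import dataclass, field


@dataclass
class SDPattern:
    name: str
    intent: str
    properties: list = field(default_factory=list)  # local S&D properties


class PatternRepository:
    def __init__(self):
        self._patterns = []

    def add(self, pattern):
        self._patterns.append(pattern)

    def find_by_property(self, prop):
        # Select every pattern offering the required S&D property.
        return [p for p in self._patterns if prop in p.properties]


repo = PatternRepository()
repo.add(SDPattern("Authorization", "control access to protected resources",
                   ["confidentiality", "access-control"]))
repo.add(SDPattern("TMR", "mask faults through triple modular redundancy",
                   ["availability", "fault-tolerance"]))

matches = repo.find_by_property("access-control")
```

In a full PBSE setting the repository entries would additionally carry the formal validation artifacts mentioned above, so that correctness guarantees travel with the pattern.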
The approach presented here has been developed within SQUALE 1 (Security, Safety and Quality Evaluation for Dependable Systems), a European research project which is part of the ACTS program (Advanced Communications, Technologies and Services). The aim of this project was to develop assessment criteria which would make it possible to gain justified confidence that a given system will satisfy, during its operational life and its disposal, the dependability objectives assigned to it. These criteria are generic in the sense that they do not target a particular application sector; on the contrary, they have to be general enough not to require supplementary work for the system to be evaluated and certified according to the domain standards.
days (without reconfiguration) to 3.35 years (with reconfiguration), an improvement of up to 513%. We conclude that our reconfiguration approach greatly improves node availability, which in turn greatly increases cluster availability.
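The link between longer failure-free operation and higher availability can be made concrete with the standard steady-state formula A = MTTF / (MTTF + MTTR). The sketch below is only illustrative: the repair time and the "few days" baseline are assumed values, not figures from this study.

```python
# Back-of-the-envelope sketch of an availability gain using the standard
# steady-state formula A = MTTF / (MTTF + MTTR). The MTTR and the baseline
# MTTF below are hypothetical assumptions, not measurements from the study.
def availability(mttf_hours, mttr_hours):
    return mttf_hours / (mttf_hours + mttr_hours)


MTTR = 2.0                        # assumed repair time, in hours
mttf_without = 2 * 24.0           # assumed baseline of a few days, in hours
mttf_with = 3.35 * 365 * 24.0     # 3.35 years (from the text), in hours

a_without = availability(mttf_without, MTTR)
a_with = availability(mttf_with, MTTR)
```

With these assumed numbers, availability rises from roughly 0.96 to well above 0.9999, which is why extending time-to-failure at the node level compounds into much higher cluster availability.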
Afterwards, to demonstrate the feasibility of our approach, several implementations on real hardware were made. First, the PAM was implemented using two low-cost, low-power, small-size and high-security hardware solutions: a Microchip 16-bit microcontroller and an FPGA IGLOO Nano. The FPGA was considered in our study to provide an additional hardware solution for the PAM implementation alongside the traditional microcontroller device. Thanks to its high flexibility, the FPGA device may be reprogrammed to perform any task that fits within the number of gates available in the device. In contrast, microcontrollers already have their own circuitry and instruction set that programmers must follow when writing code, which restricts them to certain tasks. Besides, further improvements in terms of increasingly better power efficiency and decreasing prices may lead FPGAs to dominate most embedded electronic systems in the near future. Through the experimental results, we noted that the FPGA IGLOO Nano is much more energy-efficient than the microcontroller for a PAM implementation realizing any complex task. This is a pivotal advantage of using the FPGA IGLOO Nano in a wireless sensor node, which has a limited energy budget. However, in terms of processing time, the microcontroller offers faster processing capability than the FPGA IGLOO Nano chip. For example, in the adjustable regulator implementation, the power consumptions of the PAM in active and sleep modes are 3.2 mW and
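The importance of active/sleep power figures for a duty-cycled sensor node can be sketched as follows. Only the 3.2 mW active figure comes from the text; the sleep power (the text is truncated at that point), the duty cycle, and the battery capacity are hypothetical placeholders for illustration.

```python
# Illustrative energy-budget sketch for a duty-cycled module. The 3.2 mW
# active power is quoted from the text; the sleep power, duty cycle, and
# battery capacity are assumed values for illustration only.
def average_power_mw(p_active_mw, p_sleep_mw, duty_cycle):
    # Time-weighted average of active and sleep power draws.
    return duty_cycle * p_active_mw + (1 - duty_cycle) * p_sleep_mw


P_ACTIVE = 3.2    # mW, PAM in active mode (from the text)
P_SLEEP = 0.05    # mW, assumed sleep-mode draw (actual figure truncated)
DUTY = 0.01       # assumed 1% active duty cycle

avg = average_power_mw(P_ACTIVE, P_SLEEP, DUTY)

# Estimated lifetime on an assumed 2000 mAh, 3 V battery.
battery_mwh = 2000 * 3.0
lifetime_hours = battery_mwh / avg
```

The sketch makes the trade-off explicit: with a low duty cycle, the sleep-mode draw dominates the average power, which is why the FPGA's energy efficiency matters so much for battery-powered nodes.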
Keywords: dependability; cloud storage; distributed systems; data consistency; data placement; data confidentiality; deduplication
The quantity of data in the world is steadily increasing, bringing challenges to storage system providers to find ways to handle data efficiently, both in terms of dependability and in a cost-effective manner. We have been interested in cloud storage, which is a growing trend in data storage solutions. For instance, the International Data Corporation (IDC) predicts that by 2020, nearly 40% of the data in the world will be stored or processed in a cloud. This thesis addressed challenges around data access latency and dependability in cloud storage. We proposed Mistore, a distributed storage system that we designed to ensure data availability, durability, and low access latency by leveraging the Digital Subscriber Line (xDSL) infrastructure of an Internet Service Provider (ISP). Mistore uses the available storage resources of a large number of home gateways, Points of Presence, and datacenters for content storage and caching facilities. Mistore also targets data consistency by providing multiple types of data consistency criteria and a versioning system. We also considered data security and confidentiality in the context of storage systems applying data deduplication, which is becoming one of the most popular technologies to reduce storage cost, and we designed a data deduplication method that is secure against malicious clients while remaining efficient in terms of network bandwidth and storage space savings.
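The space savings that make deduplication attractive can be illustrated with a minimal content-addressed store. This sketch is not Mistore's scheme and deliberately ignores the security-against-malicious-clients aspect the thesis addresses; it only shows the core idea that identical chunks are stored once, keyed by their hash.

```python
# Minimal sketch of chunk-level deduplication (not Mistore's actual
# design): each chunk is stored once under its SHA-256 digest, so
# duplicate data consumes no additional space.
import hashlib


class DedupStore:
    def __init__(self):
        self._chunks = {}   # digest -> chunk bytes (each stored once)
        self._files = {}    # file name -> ordered list of chunk digests

    def put(self, name, data, chunk_size=4):
        digests = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self._chunks.setdefault(digest, chunk)  # skip known chunks
            digests.append(digest)
        self._files[name] = digests

    def get(self, name):
        # Reassemble the file from its chunk digests.
        return b"".join(self._chunks[d] for d in self._files[name])

    def stored_bytes(self):
        return sum(len(c) for c in self._chunks.values())


store = DedupStore()
store.put("a", b"ABCDABCD")   # two identical 4-byte chunks
store.put("b", b"ABCDXYZ!")   # first chunk duplicates one from file "a"
```

Here 16 bytes are ingested but only 8 bytes are stored. A secure variant must additionally prevent a malicious client from exploiting the digest-based lookup, e.g. to probe for other users' data, which is the problem the thesis tackles.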
The process of defining and implementing an I&C system can be viewed as a multi-phase process starting from the issue of a call for tenders by the stakeholder. The call for tenders gives the functional and non-functional (e.g., dependability) requirements of the system and asks candidate contractors to make offers for possible systems/architectures satisfying the specified requirements. A preliminary analysis of the numerous responses by the stakeholder, according to specific criteria, allows the pre-selection of two or three candidate systems. At this stage, the candidate systems are defined at a high level and the application software is not entirely written. The comparative analysis of the pre-selected candidate systems, in a second step, allows the selection of the most appropriate one. Finally, the retained system is refined and thoroughly analyzed to go through the qualification process. This process is illustrated in Figure 1. Even though this process is specific to a given company, the various phases are similar to those of a large category of critical systems.
AADL [SAE-AS5506 2004] is a textual and graphical ADL that provides precise execution semantics for modeling the architecture of software systems and their target platform. It has been approved and published as an international standard by SAE International (the Society of Automotive Engineers). A prototype of AADL was previously developed by Honeywell under US Government sponsorship (DARPA and others) to prove the concept. This prototype, called MetaH, has been used extensively to validate the concepts now in AADL. AADL is characterized by all the properties that an ADL should provide (composition, abstraction, reusability, configuration, heterogeneity, analysis) [Shaw & Garlan 1994]. It has substantial support for modeling reconfigurable architectures. From the analysis point of view, [Medvidovic & Taylor 2000] showed that, compared to other ADLs (e.g., ACME, C2, Darwin, Rapide, Wright), AADL/MetaH provides more advanced support for analyzing quality attributes. AADL allows analyzing the impact of different architecture choices (such as scheduling policy or redundancy scheme) on a system's properties [Feiler et al. 2004]. These characteristics have led to its serious consideration in the embedded safety-critical industry (e.g., Honeywell, Rockwell Collins, Lockheed Martin, the European Space Agency, Astrium, Airbus) in recent years. Our work related to the integration of dependability modeling and evaluation into an MDE approach focuses on AADL. AADL is further detailed in Section II.1.
In this paper we are interested in the role of two features of retirement systems in the case of economic integration: whether the system is funded or not, and whether it comprises a flexible or a mandatory early retirement age. The impact of funding has been widely studied.2 It is largely equivalent to the impact of public debt in an economic union. In contrast, the effect of mandatory versus flexible retirement has received little attention in the literature. Using an overlapping generations (OLG) model in the steady state, we show that both a PAYG pension system and a totally endogenous retirement age imply an inflow of capital from countries with fully funded pensions and mandatory early retirement. In the real world one finds all sorts of pension systems, even though in the OECD the most frequent one is a PAYG system with mandatory early retirement.3
We here consider a setting with four types. The two most noticeable types are FO and PE. Indeed, the association of PAYG pensions and early retirement on the one hand, and the association of flexible retirement and full funding on the other, are often observed and contrasted. For example, according to EC (2013), the share of PAYG pensions in GDP and the effective age of retirement were, respectively, 7.7% and 63.5 in the UK and 7.5% and 64.9 in Ireland, but 14.6% and 60.1 in France and 15.3% and 61.3 in Italy. The former two countries correspond to the type FO and the latter two to the type PE. As we have shown regarding such social security systems, it is not easy to determine which countries may benefit from an economic union without looking closely into their systems.
04: iptables -A InvRQ -j LOG --log-prefix 'InvRQ '
05: iptables -A InvRQ -j DROP
In the previous example, the main action (cf. line 03) is based on the conntrack match for iptables, which makes it possible to define filtering rules in a much more granular way than simply using stateless rules or rules based on the state match (cf. reference [3] and citations thereof for a more extensive description). This is enabled by providing the -m conntrack parameter to the rules. The --ctstate NEW parameter instructs the firewall to match those TCP packets in the conntrack table that are seen for the first time. The --syn parameter, preceded by the '!' symbol, is used to exclude from such packets those with the SYN flag set. Finally, the --ctdir ORIGINAL parameter is used to exclude those packets flowing from the server to the Internet. As a result, the above rules report and drop TCP connections across the forward chain that exhibit the invalid behavior defined in the first row of Figure 5.5 (b), i.e., transitions from the initial state to the invalid one as a result of invalid flag combinations. In other words, this corresponds to the discovery and prohibition of TCP connections that may be associated with illicit scanning activities.
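The match logic just described can be expressed as a small executable model (in Python rather than iptables, so it can be checked offline). The function below is a simplified mirror of the rule's decision only; it abstracts away the real conntrack table, and the ACCEPT fall-through is an assumed default policy for this sketch.

```python
# Simplified executable model of the filtering decision described above:
# a forwarded TCP packet that conntrack classifies as NEW, that does NOT
# carry the SYN flag, and that flows in the ORIGINAL direction is invalid
# and is dropped (after logging, in the real InvRQ chain).
def verdict(ctstate, syn_flag, ctdir):
    """Mirror of: -m conntrack --ctstate NEW ! --syn --ctdir ORIGINAL."""
    if ctstate == "NEW" and not syn_flag and ctdir == "ORIGINAL":
        return "DROP"    # matches the InvRQ chain: LOG then DROP
    return "ACCEPT"      # assumed default policy for this sketch


# A connection opener without SYN is the invalid (scan-like) case:
decision = verdict("NEW", syn_flag=False, ctdir="ORIGINAL")
```

Legitimate traffic is untouched: a NEW packet carrying SYN, any ESTABLISHED packet, and reply-direction packets all fall through to the default policy.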
The operating environment traditionally has a strong impact on system dependability. The workload should represent a typical operational profile for the considered application area. The faultload consists of a set of faults and exceptional conditions intended to emulate the real exceptional situations the system would experience. For external faults, this clearly depends on the operating environment, which in turn depends on the application area. Internal faults (e.g., software and some hardware faults) are mainly determined by the actual target system implementation. A dependability benchmark must include standards for conducting experiments and for ensuring uniform measurement conditions. These standards and rules must guide all the processes of producing dependability measures using a dependability benchmark.
percent of all help desk calls are password-related, and most of these are because a password has been forgotten (Murrer, cited in ).
Probably because of this difficulty remembering, users also have a tendency to write their passwords down. In one study, 50 percent of the users surveyed admitted to writing down their passwords, and the other 50 percent did not answer the question . Other notorious password behaviors are: (1) users share their passwords with friends and colleagues, (2) users fail to change their passwords on a regular basis even when instructed to, (3) users may choose the same password (or closely related passwords) for multiple systems, and (4) users are often willing to tell their passwords to strangers who ask for them. (Asking was the most common technique used by Kevin Mitnick in his infamous security exploits .) There are solutions to the security issues caused by the behavior of users, but they are not commonly used (see  for an excellent review). To alleviate the problem of remembering multiple passwords, for example, organizations can support synchronized passwords across systems. A related solution is a single sign-on system where users are authenticated once and then allowed to access multiple systems. Another technique is to reduce the memory load placed on users. It is well known that cued recall, where users are prompted for the information they must remember, is more accurate than free recall . This can be used in security systems by requiring personal associations for passwords, such as "dear-god", "black-white", "spring-garden". Performance can also be improved by not asking users to recall at all, but rather to recognize certain material. Recognition is much easier and more accurate than recall . There is some evidence, for example, that Passfaces are easier to remember than passwords, especially after long intervals of no use .
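The cued-recall idea can be sketched as a toy authentication flow: the system stores a cue-response association and prompts with the cue, so the user recalls "garden" when shown "spring" rather than a free-form secret. This is an illustrative sketch, not a deployed scheme; hashing the response is an assumption, and a real system would at minimum need salting and rate limiting.

```python
# Toy sketch of cued-recall authentication: the system prompts with a
# stored cue and verifies the user's associated response. Illustrative
# only; a real design would require salting, rate limiting, etc.
import hashlib


class AssociationAuth:
    def __init__(self):
        self._entries = {}   # user -> (cue, sha256 digest of response)

    def enroll(self, user, cue, response):
        digest = hashlib.sha256(response.encode()).hexdigest()
        self._entries[user] = (cue, digest)

    def challenge(self, user):
        # Prompt with the cue instead of asking for free recall.
        return self._entries[user][0]

    def verify(self, user, response):
        digest = hashlib.sha256(response.encode()).hexdigest()
        return digest == self._entries[user][1]


auth = AssociationAuth()
auth.enroll("alice", "spring", "garden")
```

The memory-load reduction comes entirely from the prompt: the cue narrows retrieval to a single rehearsed association, which is the advantage of cued recall over free recall noted above.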
customizations might be needed. Some activities could be discarded if, for example, some system components are reused or if the dependability objectives to be satisfied do not require the implementation of these activities. The list of key activities and guidelines proposed in the paper for the requirements, design, realization and integration development stages can be applied irrespective of the development methods used (conventional functional approach, object-oriented, etc.). These guidelines focus on the nature of the activities to be performed and the objectives to be met, rather than on the methods to be used to reach these objectives. Indeed, several complementary techniques and practices could be used to reach the same objectives. The selection of optimal solutions depends on the complexity of the system, the dependability attributes to be satisfied, the confidence level to be achieved, and the constraints related to cost limitation or imposed by the certification standards. In particular, the proposed model can be used to support the ongoing standardization efforts towards the definition of application-sector-specific standards focused on the development and certification of dependability-related issues. Indeed, it can be used as a baseline to define and structure the dependability objectives and requirements to be satisfied by the product to be assessed, as well as the evidence to be provided to show that the product satisfies the dependability requirements assigned to it. These requirements are to be defined taking into account the specific constraints and needs of each application sector.
All TG RFID follow a common security concept. Whereas the RFID Recommendation is primarily directed towards privacy and data protection, TG RFID cover all three security domains: safety, security and privacy. Furthermore, TG RFID provide detailed guidance on how to carry out all the detailed work the PIA Framework leaves out, because the latter is understood as a high-level document aimed more at senior management and non-IT people. TG RFID are written for IT experts who are responsible for designing systems, investigating threats and weaknesses, and providing the right protection provisions. The definition of generic controls and the proposition of scenario-specific safeguards are carried out as a joint approach. This reflects the fact that threats to privacy are often threats to information security as well. Conversely, certain safeguards can counter threats to both privacy and information security. The approach of the TGs optimizes the impact of safeguards, minimizes the cost of security and privacy, and complements the PIA Framework.
We presented the specification of a dependability benchmark for OSs with respect to erroneous parameters in system calls, along with prototypes for two families of OSs, Windows and Linux. These prototypes allowed us to obtain the benchmark measures defined in the specification. We stress that the measures obtained for the different OSs are comparable as i) the same workload (PostMark) was used to activate all OSs, ii) the faultload corresponds to similar selective substitution techniques applied to all system calls activated by the workload, and iii) the benchmark conduct was the same for all OSs. Concerning the robustness measure, the benchmark results show that all OSs of the same family are equivalent. They also show that none of the catastrophic states of the OS (Panic or Hang) occurred for any of the Windows and Linux OSs considered. Linux OSs notified more error codes (59-67%) than Windows (23-27%), while more exceptions were raised with Windows (17-22%) than with Linux (8-10%). More no-signaling cases were observed for Windows (55-56%) than for Linux (25-32%).
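The robustness measure above is essentially a tally of experiment outcomes per category. The sketch below illustrates that computation; the outcome labels follow the categories named in the text, while the sample counts are made up for illustration and are not the benchmark's actual data.

```python
# Illustrative tally of OS robustness-benchmark outcomes: each experiment
# is classified into one outcome category and per-category percentages
# are computed. The observation counts below are invented for the example.
from collections import Counter

OUTCOMES = ("error_code", "exception", "panic_or_hang", "no_signaling")


def outcome_percentages(observations):
    counts = Counter(observations)
    total = len(observations)
    return {o: 100.0 * counts[o] / total for o in OUTCOMES}


# Hypothetical run of 200 fault-injection experiments on one OS
obs = ["error_code"] * 120 + ["exception"] * 20 + ["no_signaling"] * 60
pct = outcome_percentages(obs)
```

Reporting every category, including those with zero occurrences such as Panic or Hang here, is what allows the cross-OS comparisons quoted above.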