Validation. The results were validated through literature searches, examples and case studies, prototypes, and feedback obtained during the elaboration and presentation of peer-reviewed scientific publications.
First, we started our work with a literature search of existing research results, techniques, and tools related to our work, and we continued to review new results during the three years of this Ph.D. thesis. Second, in order to understand and identify potential problems, we used and established examples and case studies, which were then reused to validate our approach. The two main case studies are the Dealer Network Architecture, a well-known case study taken from the SoaML specification, and the Travel Management System [10], a common case study on Web-based applications in which a client uses a Travel Management System to search for flights and hotels. The Dealer Network Architecture case study is used throughout this dissertation to illustrate our approach. Third, several prototypes have been developed to support and validate the contributions. The three main prototypes are: the SoaML editor, which supports the specification and validation of SoaML-based models and checks consistency between SoaML views with respect to the syntax and semantics described in the SoaML specification; the SoaML generator, which automates the generation of Web service artifacts from SoaML models; and an extension of the symbolic analysis and testing platform Diversity to support offline analysis of service choreographies under partial observability conditions. Our main contributions were published in a peer-reviewed scientific conference, where our paper received the best paper award [172]. Having our main results evaluated and validated by international specialized researchers further reinforces the validity of our contributions.
in [26]. A survey of existing methods and approaches for reliable composite services is presented in [18]. Several other automatic, semi-automatic, and manual service composition approaches have been proposed in the literature, such as [34, 28, 36, 43, 35, 25, 5]. Paik et al. [34] propose a nested multilevel dynamic composition model that provides functional scalability and seamless composition. Oh et al. [28] treat the automatic composition of Web services as AI planning and network optimization problems. In [36], a context-based semantic approach is proposed for classifying and ranking Web services in order to compose them. The classification is based on the analysis of the WSDL documents and free-text descriptions of the Web services. Medjahed et al. [25] propose a composability model to check whether Web services can be composed without failure during their execution. In this model, the Web services are compared at four levels: syntactic, static semantic, dynamic semantic, and qualitative. All these works address aspects complementary to our proposal. We automatically create compositions of the services generated from Web applications. In our work, we have used the composability model of [25] to check the syntactic composability of the services. Semantic composability is one of the perspectives of our work.
Brahim Hamid is an associate professor at the University
of Toulouse Jean-Jaurès, and he is a member of the IRIT-MACAO team. He obtained his Ph.D. degree in 2007 in the area of dependability in distributed computing systems from the University of Bordeaux (France). In addition, he has an M.Sc. in Theoretical Computer Science that provides him with a background in mathematical, logic, and formal concepts. He has been an assistant professor (ATER) at ENSEIRB (Bordeaux, France) and a member of LaBRI (France). He then worked as a post-doc in the modeling group at CEA-Saclay List (France). He was a visiting professor at the University of Concordia (August 2011), the University of Florida (September 2014), and the University of Vienna (April 2015). His main research topics are software language engineering, at both the foundations and application levels, particularly for resource-constrained systems. He works on security, dependability, software architectures, formalization, validation and verification, as well as supporting reconfiguration. Furthermore, he is an expert in model-driven development approaches
In the second category, we remain in the same optimization use case but change some parameters, e.g., adding or removing an application. In jMetal, it is necessary to manually add or remove this application from the array of applications in each crossover/mutation operator and fitness function. In Polymer, we only need to add or remove this application from the population creation factory; we do not need to modify any operator or fitness function because the model is passed to them in its entirety. Another modification that falls into this category is adding an optimization field. Suppose we want to optimize over both CPU and network resources of the applications deployed on the virtual machines. In Polymer, we just have to add a field "network" in the metaclasses Task, Application, and VmInstance. No code change is necessary in any previously implemented fitness function, mutation operator, or crossover operator. In the traditional approach, however, a substantial change needs to be made manually to the encoding. Figure 2 is no longer sufficient to represent the new encoding: a new integer needs to be reserved for each application task on each VM instance, and the total number of integers in the array representing a solution changes from the previously calculated m × (n + 1) to m × (2n + 1). Developers then need to manually update all previously defined operators to take this structural change in the encoding into account. In this simple use case implementation, we already counted as many as 43 lines of code affected by this change. Even more dangerous, it is up to the encoding designer to define what comes first in the encoded array, the v-cpu weight or the network weight. Type checkers cannot enforce that the encoding is actually used with its intended meaning in fitness functions and mutation operators. Table 2 summarizes the modifications.
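The fragility of the flat encoding can be sketched as follows. This is an illustrative sketch only, not the actual jMetal or Polymer APIs: the function names and the layout (one weight slot per field per VM, plus one placement slot per task) are assumptions chosen to reproduce the m × (n + 1) and m × (2n + 1) counts discussed above.

```python
# Illustrative flat integer encoding for m tasks over n VM instances
# (hypothetical helpers, not jMetal/Polymer code).

def solution_length(m_tasks: int, n_vms: int, fields: int = 1) -> int:
    """Length of the flat array: one slot per field per VM, plus one
    placement slot, for each task."""
    return m_tasks * (fields * n_vms + 1)

def cpu_weight_index(task: int, vm: int, n_vms: int, fields: int = 1) -> int:
    """Index of the CPU weight of `task` on `vm`. Note that the arithmetic
    depends on `fields`: every operator using such index math must be
    updated by hand when a field is added, and nothing in the type system
    prevents confusing the CPU slot with the network slot."""
    return task * (fields * n_vms + 1) + fields * vm

m, n = 4, 3
assert solution_length(m, n, fields=1) == m * (n + 1)       # CPU only
assert solution_length(m, n, fields=2) == m * (2 * n + 1)   # CPU + network
```

The point of the sketch is that adding the "network" field silently changes every index computation, which is exactly the class of manual edits the model-based approach avoids.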
II. SERVICE-ORIENTED COMPUTING
A. Overview
The Service-Oriented Computing (SOC) [17] paradigm proposes a modular and loosely coupled design for developing distributed applications. Such a design is based on the service abstraction, which uses contracts to define how services will interact with each other. SOC is realized by the Service-Oriented Architecture (SOA). The SOA describes a three-layer architecture, widely known as the SOA pyramid, which enables the conception of service-based applications. In brief, the lowest SOA layer addresses basic functionalities such as service discovery and binding. The middle layer deals with service composition, in which services are combined in order to conceive composite services. Last, the upper layer handles high-level service management features that are mainly related to the service life cycle.
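The two lower layers of the pyramid can be illustrated with a minimal sketch. The names below (Registry, publish, bind, compose) are illustrative assumptions, not a specific SOA framework: the registry stands for the discovery-and-binding layer, and the compose function stands for the composition layer that chains basic services into a composite one.

```python
# Minimal sketch of the lower SOA pyramid layers (illustrative names).
from typing import Callable, Dict

class Registry:
    """Bottom layer: service discovery and binding."""
    def __init__(self) -> None:
        self._services: Dict[str, Callable[[str], str]] = {}

    def publish(self, name: str, service: Callable[[str], str]) -> None:
        self._services[name] = service

    def bind(self, name: str) -> Callable[[str], str]:
        return self._services[name]

def compose(registry: Registry, *names: str) -> Callable[[str], str]:
    """Middle layer: chain basic services into a composite service."""
    def composite(payload: str) -> str:
        for name in names:
            payload = registry.bind(name)(payload)
        return payload
    return composite

registry = Registry()
registry.publish("normalize", str.strip)
registry.publish("shout", str.upper)
search = compose(registry, "normalize", "shout")
assert search("  hello  ") == "HELLO"
```

The composite service only depends on service names resolved through the registry, which is the loose coupling the service abstraction is meant to provide.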
Abstract. Developers in modern general-purpose programming languages create reusable code libraries by encapsulating them in Application Programming Interfaces (APIs). Domain-specific languages (DSLs) can be developed as an alternative method for code abstraction and distribution, sometimes preferable to APIs because of their expressivity and tailored development environment. However, the cost of implementing a fully functional development environment for a DSL is generally higher. In this paper we propose DSLit, a prototype tool that, given an existing API, reduces the cost of developing a corresponding DSL by analyzing the API, automatically generating a semantically equivalent DSL with its complete development environment, and allowing for user customization. To build this bridge between the API and DSL technical spaces we make use of existing Model-Driven Engineering (MDE) techniques, further promoting the vision of MDE as a unifying technical space.
Since the functionalities are viewed as services, it is possible to compose basic low-level services in different fashions to provide different functionalities. For example, in cars, a voice recognition service can be used in an entertainment system as well as in a navigation system. However, the bottleneck is more related to the business models existing in the industries that develop such systems. The standard approach for automobile OEMs is to develop systems by assembling components that have been completely or partly designed and developed by external vendors [82]. Because of the increasing complexity of automobile systems with a large number of distributed features, such an approach also leads to various compositional issues commonly known as feature interactions. Therefore, Service-Oriented Architectures (SOA) are gradually being adopted in these systems, where various functionalities are provided as services and the assembled components are seen as service providers. This software engineering paradigm has many advantages in human-machine collaborative work. Advances in human behavior research help to model various human actions, and these actions can be seen as services provided by the human: for example, steering, braking, and applying acceleration by the driver can be viewed as services provided by the human driver. In some contexts, humans provide higher-quality services, while in others the machine counterpart does. For example, in the case of assisted parking, the human steering control service is delegated to the machine, while the driver still retains authority over acceleration.
can create innovative solutions and evolve their existing service offer. Despite the flexibility of this environment, Cloud business models and technologies are in their initial hype stage and are characterised by many critical early-stage issues which pose specific challenges from a software engineering perspective. Specifically, "one of the most pressing issues with respect to Cloud computing is the current difference between the individual vendor approaches, and the implicit lack of interoperability [...]. Whilst a distributed data environment (Infrastructure-as-a-Service or IaaS) cannot be easily moved to any platform provider (Platform-as-a-Service or PaaS) and may even cause problems to be used by a specific service (Software-as-a-Service or SaaS), it is also almost impossible to move a service / image /
We are currently working on validating two properties of our transformation rules: Reflexivity and Bidirectionality.
After the implementation of the basic transformation rules, we put our approach to the test by applying our transformation process to our own Ecore subset. The purpose of this is to validate our approach by giving it a property of reflexivity: the expected result is to be able to generate in Caml a data-type description for Ecore that is close to the representation of the Ecore grammar (given in this thesis in Chapter 2). Applying our transformation rules to this subset allows us to analyze our transformation rules and identify particular cases, which were later added to the transformation rules. The other property on which we are working is Bidirectionality. In fact, the composition of our two transformation functions gives the identity when we start from the formal model. We detected this property when experimenting with our transformations on different examples. When we apply our transformation function f to a source model M_S of data types, we generate a class diagram M_T. If we apply to this generated M_T the opposite
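The bidirectionality property can be sketched as a round-trip check. The toy transformations below are assumptions for illustration, not the thesis's actual Ecore-to-Caml rules: a forward function maps a data-type description to a class diagram, an opposite function maps it back, and their composition is checked to be the identity on the source model.

```python
# Toy round-trip sketch of the bidirectionality property (hypothetical
# transformations, not the actual Ecore/Caml rules).

def f(source_model: dict) -> dict:
    """Forward transformation: data-type description -> class diagram."""
    return {"classes": [{"name": n, "fields": fs}
                        for n, fs in source_model["types"].items()]}

def f_inv(class_diagram: dict) -> dict:
    """Opposite transformation: class diagram -> data-type description."""
    return {"types": {c["name"]: c["fields"]
                      for c in class_diagram["classes"]}}

m_s = {"types": {"Book": ["title", "isbn"], "Author": ["name"]}}
m_t = f(m_s)                 # generated class diagram M_T
assert f_inv(m_t) == m_s     # round trip is the identity on the source
```

Checking this equality on a battery of example models is one practical way to detect, as the text describes, that the composition of the two transformation functions behaves as the identity.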
– All traffic is regulated by firewalls: Property P7 is also interesting when there is a new context called logging: this context is managed by those devices with a logging functionality, as in today's most popular firewalls.
– Integrity and confidentiality property: This property is related to the establishment of IPsec tunnels. It ensures the extremities of the IPsec tunnel. Moreover, particular IPsec configurations may include recursive encapsulation of traffic on a path. Verifying this property begins at higher levels: if no OrBAC security rule is stated with a protected (prot) context, no further verification is necessary. To ensure the protected context activation, the configuration of an IPsec tunnel is necessary. If no specific information concerning the IPsec tunnel establishment is provided, we may suppose the following two cases: (1) the subject/source and the object/destination are IPsec-enabled (e.g., IPv6 nodes and an end-to-end tunnel), or (2) at least one node in their neighborhood (e.g., a site-to-site tunnel) has IPsec functionalities. For the first case, it suffices to check the IPsec functionalities on both the subject and the object nodes, and this is captured by the P8 property of the WHILE loop. The second case is handled as follows: in one of the IMPORTED machines on the Weighted Forest development branch, we provide an operation predec(node) which returns the preceding node on the current shortest path from the source (src) node to the destination (dest) node. Via PROMOTE clauses, the operation may be called by higher IMPORTING machines, including the Deployment machine. We consequently check the predec(dest) and predec⁻¹(src) nodes as in the P
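The case analysis above can be summarized in a small decision sketch. This is an illustrative model only, not the B machines themselves: the node attributes and the neighbor arguments (standing in for predec(dest) and its counterpart on the source side) are assumptions made for the example.

```python
# Illustrative sketch of the protected-context verification cases
# (hypothetical data model, not the actual B development).

def ipsec_enabled(node: dict) -> bool:
    return node.get("ipsec", False)

def check_protected(rule: dict, src: dict, dest: dict,
                    neighbor_src: dict, neighbor_dest: dict) -> bool:
    """Return True if the protected (prot) context can be ensured."""
    if rule.get("context") != "prot":
        return True  # no prot rule: no further verification is necessary
    # Case 1: end-to-end tunnel between subject/source and object/destination.
    if ipsec_enabled(src) and ipsec_enabled(dest):
        return True
    # Case 2: site-to-site tunnel via IPsec-capable neighbors on the path.
    return ipsec_enabled(neighbor_src) and ipsec_enabled(neighbor_dest)

rule = {"context": "prot"}
assert check_protected(rule, {"ipsec": True}, {"ipsec": True}, {}, {})
assert not check_protected(rule, {}, {}, {}, {})
```

The sketch makes the ordering explicit: the cheap rule-level check comes first, the end-to-end check second, and the path-neighbor check only as a fallback.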
With regard to the second aspect, it should be noted that model-driven development of applications is a well-established practice [8]. However, in terms of managing the Web service development lifecycle, technology is still in its early stages. We believe that the level of automation can be substantially increased with respect to what is available today, especially in terms of factoring into the middleware those chores common to the development of many Web services. The approach proposed here has several advantages with respect to previous art, including early formal analysis and consistency checking of system functionalities, refinement, and code generation. For example, the work proposed in [2] features generation rules from UML activity diagrams to BPEL processes. The work presented in [9] focuses on generating executable process descriptions from UML process models. The contribution of our work is specializing the model-driven approach to Web service conversation and composition models. As mentioned before, our approach focuses on specifying service composition models along with the conversation definitions and generating the executable specifications of a service that not only implements the service operations as specified, but also guarantees conformance of the service implementation with the conversation specification.
The previously presented approaches [17–19] offer development processes for conceiving real-time embedded systems. They present methodologies to be followed by developers from high-level models to code. However, these approaches do not support reconfigurable systems. In this direction, TimeAdapt [20] is a development process for reconfigurable system design. It follows a three-tiered approach providing means to specify reconfiguration actions, estimate whether their execution can be carried out within a given time bound, and execute them in a timely manner. Each reconfiguration has a time bound that is based on environmental conditions and the application's structure. An admittance test calculates the probability that the given reconfiguration can meet the specified time bounds. If this probability exceeds a given threshold, the reconfiguration is scheduled as a high-priority real-time task and its reconfiguration actions are executed. In case of a reconfiguration task rejection, the reconfiguration is rescheduled with a new time bound at some later point in time. TimeAdapt supports the execution of reconfigurations on component-based real-time applications. However, this framework provides a bounded time for each reconfiguration: if a reconfiguration exceeds its estimated time, it will not be executed.
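The admittance decision described above can be sketched in a few lines. The function below is an illustrative reading of the test, not TimeAdapt's actual API: the probability estimate and the threshold are taken as given inputs, and the two outcomes mirror the schedule/reschedule behavior in the text.

```python
# Illustrative sketch of TimeAdapt's admittance test (assumed interface,
# not the framework's actual API).

def admit(prob_meets_bound: float, threshold: float) -> str:
    """Decide whether a reconfiguration is admitted."""
    if prob_meets_bound > threshold:
        # Scheduled as a high-priority real-time task.
        return "schedule"
    # Rejected: rescheduled later with a new time bound.
    return "reschedule"

assert admit(0.95, threshold=0.8) == "schedule"
assert admit(0.40, threshold=0.8) == "reschedule"
```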
VII. CONCLUSION AND FUTURE WORKS
In this paper we present the ⟨HOE⟩² process and its formalization language. We show how, by adapting the UML activity metamodel, we are able to define a very fine-grained set of activities and tasks with clear inputs and outputs. This allows us to provide concepts for project characterization and monitoring, possibly automatic. Our contribution shows benefits in terms of organization of tasks and produces consistent planning by means of explicit dependencies across tasks. We think that this is applicable not only to regular development but also to prototyping, where shorter cycles and efforts are expected to produce very targeted results. The dedicated tool CanHOE2 currently supports only the first phase of the process.
The Y symbol is frequently used to summarise these principles, as shown in Figure 2.
INSERT FIGURE 2 HERE
As stated above, the transition from one level to another is based on model transformations. A model transformation can be seen as a morphism between the elements of two models. A meta-model fixes the syntax and the semantics of the different elements that compose a model, and a morphism between two models is expressed as a mapping between the elements of the two related meta-models. On the basis of the defined mappings, a transformation can be executed to link two models: models conforming to the source meta-model are transformed into models conforming to the target meta-model. This is crucial in our problem of transforming a collaborative process model into an information system model: first, we have to define the two meta-models, of the collaborative process and of the collaborative architecture model, and second, we have to define the transformation rules based on established mappings between the different elements of the two meta-models. The Model-Driven Interoperability (MDI) proposal (Grangel, 2007) attempts to provide solutions that, following the MDA approach, can help enterprises transform models at different levels of abstraction in order to generate Enterprise Software Applications (ESA) from enterprise models, and shows how a model-driven approach can be a useful way to solve interoperability problems. An application of the MDI approach is described in
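A toy version of such a mapping-based transformation can be sketched as follows. The metamodel element names below are hypothetical, invented for illustration; the point is only the mechanism: a mapping between source and target meta-model elements drives the rewriting of a conforming source model into a conforming target model.

```python
# Toy mapping-based model transformation (hypothetical metamodels).

# Mapping between elements of the two related meta-models.
MAPPINGS = {
    "CollaborativeTask": "Service",
    "Partner": "Component",
}

def transform(source_model: list) -> list:
    """Rewrite each source-model element into its mapped target element."""
    return [{"type": MAPPINGS[e["type"]], "name": e["name"]}
            for e in source_model if e["type"] in MAPPINGS]

process = [{"type": "CollaborativeTask", "name": "ShareOrder"},
           {"type": "Partner", "name": "Supplier"}]
assert transform(process) == [{"type": "Service", "name": "ShareOrder"},
                              {"type": "Component", "name": "Supplier"}]
```

In a real MDA setting the mapping would be expressed over meta-model elements in a transformation language rather than a dictionary, but the principle, mappings defined once and executed over any conforming model, is the same.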
Software School, Fudan University, Shanghai, China
Weiming Shen
Institute of Research in Construction, National Research Council Canada, London, Ontario, Canada
Abstract: Cities are being equipped with multiple information systems to provide public services for city officials, officers, and citizens. There are problems with efficient service implementation and provision, e.g., data islands and function overlaps between systems and applications. Service-oriented portals are efficient at facilitating information sharing and collaborative work between city systems and users. The goal is to make cities responsive and agile and to provide composite services efficiently and cost-effectively. A service-oriented framework for city portals is proposed to design, integrate, and streamline city systems and applications. A model-driven collaborative development platform is developed for service-oriented digital portals of the proposed framework. The architecture and implementation issues of the platform are discussed, and the service identification policies are discussed within the framework. A case study has been developed and evaluated on the platform to provide a composite service, i.e., a traffic search service on a city portal.
However, none of them specifically addresses the needs of mobile application development. Therefore, in mobile applications, front-end development continues to be a costly and inefficient process, where manual coding is the predominant development approach, reuse of design artifacts is low, and cross-platform portability remains difficult. The availability of a platform-independent user interaction modeling language can bring several benefits to the development process of mobile application front-ends: it improves the development process by fostering the separation of concerns in the user interaction design, thus granting maximum efficiency to all the different developer roles, and it enables the communication of interface and interaction design to non-technical stakeholders, permitting early validation of requirements.
2 Problem Statement
The starting point is a double finding, from the literature review and from current practice. Related works mention that service engineering in the context of CPPSs is still a craft activity, usually carried out at the implementation level [Rodrigues et al., 2015, Morariu et al., 2013]. Two main levers are still required to go further. First, we need service models that can fit various semantics and various granularity levels. Indeed, the concept of CPPS covers many classes of (physical) systems, from the manufacturing workshop to the whole supply chain. Taking the example of HMS, the control of such systems is often recursive, if not fractal, in order to aggregate the available resources and enable a heterarchic control architecture. Therefore, services that might be used at various levels of the architecture need to fit various granularities, and the portability of services between different applications, each with its own semantics, requires the services to be adaptable in order to be effective.
All these approaches are dedicated to the development of processes, either with the intention of modeling a given process in a given context, or with the intention of improving the understanding and sharing of the information embedded in a process. Basically, process modeling languages are used to model a sequence of activities from beginning to end. Depending on the language, this will include more or fewer concepts (e.g., resources and control in SADT; message flows and pools in BPMN). However, they do not fully consider the attributes that characterize and define the elements involved in a process. As a consequence, it is necessary to allow actors to gather a maximum amount of knowledge about these elements. For instance, the time, space and shape (TSS) frame of reference (Le Moigne, 1977) allows this knowledge to be collected for any element involved in the process. Indeed, by adapting this frame, it is possible to position an activity, resource, control, or any other element in it. Moreover, this knowledge, represented in the form of attributes that characterize the element considered, can be independent of any application domain (e.g., resource capacity, availability, pre-emption (Vernadat, 1996)), or specific to a given domain (e.g., an activity can require a certain level of protection for its resources in a crisis context). It is on the basis of this enrichment, in terms of attributes, that reasoning about the possible effects can be achieved.