Fig. 14. Behavior "chooseReferringTo" being structured as a Java method.
Check Boxes, Links or Calendars, the concrete instance of any of these elements is searched for on the Presentation layer.
The Presentation component includes the My Pages Java class that performs the mapping between abstract UI elements of the ontology and the concrete/final UI components instantiated on the interface being tested. For that purpose, we make use of annotations in Java code following the Page Objects pattern, as illustrated in Fig. 15. UI components are identified through their XPath references or some other unique ID that frameworks may use to implement the interface. This link is essential to allow the framework to automatically run the Steps on the right components on the Final UI.
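The annotation-based mapping described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the @Locator annotation, the SearchPage class, and the XPath values are hypothetical stand-ins (a real Page Objects setup would typically rely on a framework such as Selenium's @FindBy).

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

// Sketch of the Page Objects mapping idea: an annotation links an abstract
// UI element name (from the ontology) to a concrete XPath locator.
public class PageObjectsSketch {

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    public @interface Locator {
        String element();  // abstract UI element in the ontology
        String xpath();    // concrete locator on the final UI
    }

    // A "My Pages"-style class: each field maps one abstract element.
    public static class SearchPage {
        @Locator(element = "TextField", xpath = "//input[@id='destination']")
        String destination;

        @Locator(element = "Calendar", xpath = "//div[@class='datepicker']")
        String departureDate;
    }

    /** Resolve the concrete XPath registered for an abstract element name. */
    public static String xpathFor(Class<?> page, String abstractElement) {
        for (Field f : page.getDeclaredFields()) {
            Locator loc = f.getAnnotation(Locator.class);
            if (loc != null && loc.element().equals(abstractElement)) {
                return loc.xpath();
            }
        }
        throw new IllegalArgumentException("No mapping for " + abstractElement);
    }

    public static void main(String[] args) {
        System.out.println(xpathFor(SearchPage.class, "Calendar"));
    }
}
```

With such a mapping in place, a test runner can resolve each abstract Step target to a concrete component at execution time, which is the link the paragraph above calls essential.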
SPARKS Team, I3S, Université Nice Sophia Antipolis, France
firstname.lastname@example.org
Nowadays many software development frameworks implement Behavior-Driven Development (BDD) as a means of automating the testing of interactive systems under construction. Automated testing helps to simulate users' actions on the User Interface and therefore to check whether the system behaves properly and in accordance with the scenarios that describe functional requirements. However, tools supporting BDD run tests on implemented User Interfaces and are thus a suitable alternative for assessing functional requirements only in later phases of the development process. Moreover, even when BDD tests can be written in early phases of the development process, they can hardly be used with specifications of User Interfaces such as prototypes. To address this problem, this paper proposes to raise the abstraction level of both system interactive behaviors and User Interfaces by means of a formal ontology that is aimed at supporting test automation using BDD. The paper presents an ontology and an ontology-based approach for automating the test of functional requirements of interactive systems. We demonstrate the feasibility of this ontology-based approach to assess functional requirements in prototypes and full-fledged applications through an illustrative case study of e-commerce applications for buying flight tickets.
The first key element in Table 2 is related to the objectives of the approach, which can be classified into two main categories: knowledge-oriented, or predictive model design. When falling into the first category, the framework can be aimed towards more precise goals, either associated with the ontologies themselves (learning, enrichment or mapping) or related to them (instance learning, relevance of association rules or discriminant features). In that case, the automation level is low to medium, as such approaches typically require at least some validation. For approaches in the second category, the automation level is higher, as the procedure is based on numerical criteria and expert involvement is not required, contrary to what happens in the first category. The complexity of the ontology is also a factor to be taken into consideration to distinguish between frameworks. In most cases, the ontology is a simple taxonomy. Among the existing frameworks, our proposal stands out for two reasons: its dual objective (both knowledge-oriented and predictive model design) and the complexity of its ontology. This dual objective allows for increased interaction between the methods, but requires more interplay between the domain expert and the analyst (Section 8 discusses this topic). The knowledge-oriented part currently aims to integrate relevant features into the ontology, by refining and possibly enriching it. Regarding the complexity of the ontology used in our approach, it is clear that more complex ontologies exist; however, to our knowledge, such ontologies have not been used in combination with data-mining techniques.
with regard to the current regulations. Our work is part of the FORMOSE project , which brings together industrial partners involved in the implementation of critical systems for which regulation imposes formal validation. The contribution presented in this paper is a direct continuation of our research work on the formal specification of systems whose requirements are captured with SysML/KAOS goal models. The Event-B method  has been chosen for the formal validation steps because it involves simple mathematical concepts and has a powerful refinement logic facilitating the separation of concerns. Furthermore, it is supported by many industrial tools. In , we defined translation rules to produce an Event-B specification from SysML/KAOS goal models. Nevertheless, the generated Event-B specification does not contain the system state. This is why, in , we presented the use of ontologies and UML class and object diagrams for the representation of domain properties, and introduced a first attempt to complete the Event-B model with specifications obtained from the translation of these domain representations. Unfortunately, the proposed approach raised several concerns, such as the use of several modeling formalisms for the representation of domain knowledge and the disregard of variable entities. In addition, the proposed translation rules did not take into account several elements of the domain model, such as data sets or predicates. We have therefore proposed in  a formalism for domain knowledge representation through ontologies. This paper is specifically concerned with establishing correspondence links between this new formalism, called SysML/KAOS Domain Modeling, and Event-B. The proposed approach allows a high-level modeling of domain properties
On the other hand, computational approaches, most of them based on finite-state automata , have no difficulty in efficiently generating correct paradigms. As a matter of fact, Karttunen  has shown that if one reduces morphological theories, including PFM and Network Morphology, solely to their ability to generate paradigms, they come down to realisational systems equivalent to finite-state automata . However, even if computational approaches perfectly achieve this goal, they are often criticised, in the eyes of theorists, for lacking what is the most interesting aspect from the theoretical point of view, namely explicitly modeling regularities and irregularities within paradigms. We introduce a means to easily implement formal analyses in a typologically sound framework that benefits from the data-processing power available through computational approaches alone. Through an experiment on modelling Maltese verbal inflection, we show the benefit for formal approaches of relying on computational approaches.
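The realisational view mentioned above can be illustrated with a deliberately toy sketch (made-up English data, not the Maltese model): default suffix rules generate the regular cells of a paradigm, while lexical overrides capture irregularity explicitly.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy realisational sketch: default rules model regularity across the
// paradigm; explicitly listed overrides model irregularity.
public class ParadigmSketch {

    static final Map<String, String> SUFFIX_RULES = new LinkedHashMap<>();
    static final Map<String, String> OVERRIDES = new LinkedHashMap<>();
    static {
        // default realisation rules: paradigm cell -> suffix
        SUFFIX_RULES.put("PRES.3SG", "s");
        SUFFIX_RULES.put("PAST", "ed");
        SUFFIX_RULES.put("GERUND", "ing");
        // irregular forms listed explicitly, keyed "stem:cell"
        OVERRIDES.put("go:PAST", "went");
    }

    /** Realise one paradigm cell: override if present, else stem + suffix. */
    public static String realise(String stem, String cell) {
        String irregular = OVERRIDES.get(stem + ":" + cell);
        if (irregular != null) return irregular;
        return stem + SUFFIX_RULES.get(cell);
    }

    /** Generate the full paradigm of a stem. */
    public static Map<String, String> paradigm(String stem) {
        Map<String, String> forms = new LinkedHashMap<>();
        for (String cell : SUFFIX_RULES.keySet()) forms.put(cell, realise(stem, cell));
        return forms;
    }

    public static void main(String[] args) {
        System.out.println(paradigm("walk")); // fully regular paradigm
        System.out.println(paradigm("go"));   // irregular past form
    }
}
```

The override table is exactly the kind of explicit regularity/irregularity boundary that, per the criticism above, purely automaton-level encodings leave implicit.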
Constraint programming can definitely be seen as a model-driven paradigm. The users write programs for modeling problems. These programs are mapped to executable models to calculate the solutions. This paper focuses on efficient model management (definition and transformation). From this point of view, we propose to revisit the design of constraint-programming systems. A model-driven architecture is introduced to map solving-independent constraint models to solving-dependent decision models. Several important questions are examined, such as the need for a visual high-level modeling language, and the quality of metamodeling techniques to implement the transformations. A main result is the s-COMMA platform that efficiently implements the chain from modeling to solving constraint problems.
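The modeling-to-solving chain can be sketched in miniature (purely illustrative, unrelated to s-COMMA's actual languages): a solver-independent model, i.e. variables over a domain plus declarative constraints, is mapped to an executable decision procedure, here a naive enumeration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiPredicate;

// Tiny illustration: the "model" is a list of declarative constraints over
// two integer variables; the "executable model" is a brute-force search.
public class TinyCsp {

    public static List<int[]> solve(int lo, int hi,
                                    List<BiPredicate<Integer, Integer>> constraints) {
        List<int[]> solutions = new ArrayList<>();
        for (int x = lo; x <= hi; x++) {
            for (int y = lo; y <= hi; y++) {
                final int fx = x, fy = y;
                // keep (x, y) only if every declarative constraint holds
                if (constraints.stream().allMatch(c -> c.test(fx, fy))) {
                    solutions.add(new int[] { x, y });
                }
            }
        }
        return solutions;
    }

    public static void main(String[] args) {
        List<BiPredicate<Integer, Integer>> model = List.of(
                (x, y) -> x + y == 10, // constraint: x + y = 10
                (x, y) -> x < y);      // constraint: x < y
        for (int[] s : solve(0, 9, model)) {
            System.out.println(s[0] + "," + s[1]);
        }
    }
}
```

A real system replaces the brute-force mapping with a transformation to an efficient solver-specific decision model, which is precisely the management problem the paper addresses.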
Among the different uses of these linked datasets, they can be exploited as knowledge bases to build games. In particular, if we consider the subcategory of Serious Games in which the objective of the game is to educate the user through the interactive discovery of real-life concepts, the inclusion of a semantic representation of the game player's profile and contextual information becomes an important element to enhance the recommendation of educational resources. To allow this data integration and knowledge organization, this paper proposes an ontology that enables the description and representation of Serious Games with such characteristics. The potential of this ontology is demonstrated through the prototype of a serious, Web-based, question-based board game, which exploits the DBpedia dataset and vocabulary to propose multiple-answer questions to the user. Our ultimate aim is to develop such a serious game within the Semantic Educloud project, in which the research work described in this paper is carried out.
Keywords: UML models, OCL constraints, refactoring, model transformation
Refactoring is an important activity within the domain of software maintenance [1, 2]. It is an essential activity for handling software evolution . Refactoring is defined as a change to the internal structure of software that improves certain software quality characteristics (such as understandability, modifiability, reusability, modularity, adaptability) without changing its observable behavior . In the domain of Model-Driven Engineering (MDE), refactoring is considered a type of endogenous model transformation . Indeed, the modification of a source model is done by model transformation to produce a target model such that both models conform to the same metamodel. Several studies have already been carried out on the refactoring of models, in particular UML  models and especially the refactoring of UML class diagram models [7, 8]. Beyond the capabilities of UML graphical diagrams to elaborate UML models, constraints are added to provide the precision needed to write executable models . These constraints are described using the Object Constraint Language (OCL) . Current UML model refactorings, especially UML class diagram refactorings, concentrate on the diagrammatic part, but the OCL constraints are neglected and become incoherent with the new model . The solution used thus far is to modify them manually, which is very time-consuming and error-prone .
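The incoherence problem can be made concrete with a deliberately tiny sketch (hypothetical names, a plain attribute reference standing in for an OCL expression): a rename refactoring applied only to the class-diagram side leaves a constraint pointing at an attribute that no longer exists.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Miniature of the problem: the model holds attribute names per class, and a
// textual constraint (standing in for OCL, e.g. "context Customer inv:
// self.age >= 18") refers to one of them. Renaming the attribute in the
// diagram alone breaks the constraint.
public class RefactoringSketch {

    static final Map<String, Set<String>> model = new HashMap<>();
    static {
        model.put("Customer", new HashSet<>(Set.of("name", "age")));
    }

    /** Does the attribute referenced by the constraint still exist? */
    public static boolean constraintCoherent(String clazz, String referencedAttr) {
        return model.getOrDefault(clazz, Set.of()).contains(referencedAttr);
    }

    /** Rename refactoring applied to the diagrammatic part only. */
    public static void renameAttribute(String clazz, String from, String to) {
        Set<String> attrs = model.get(clazz);
        attrs.remove(from);
        attrs.add(to);
    }

    public static void main(String[] args) {
        String constraintAttr = "age";
        System.out.println(constraintCoherent("Customer", constraintAttr)); // coherent
        renameAttribute("Customer", "age", "yearsOld");
        System.out.println(constraintCoherent("Customer", constraintAttr)); // now broken
    }
}
```

Co-evolving the constraints automatically, i.e. rewriting the reference from "age" to "yearsOld" as part of the same transformation, is what the manual fix described above fails to scale to.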
 Y. Zhang and S. Li, Research on Formal Categorical Ontologies, Computer Science, China, vol. 33(9), pp. 1–3.
 F. Donini, M. Lenzerini, D. Nardi and A. Schaerf, Reasoning in description logics, in G. Brewka (ed.), Principles of Knowledge Representation and Reasoning, Studies in Logic, Language and Information, CSLI Publications, 1996, pp. 193–238.
3.1 Overview of merge operations

Several design situations may require FM composition, for example when several experts work on designing variability and independently develop their FMs with different concerns, or when several products have to be merged. Another reason comes from the need for product line decomposition. Indeed, for large product lines, it is hardly possible to describe the variability in a single, complex feature model. To manage the complexity, the usually adopted solution is to apply the separation-of-concerns principle, decomposing the feature model at design time, with each sub-feature model focusing on a given concern. Then the sub-feature models are to be composed back into a global feature model. For that, merge operations are needed. To focus the study, we consider the two
Fig. 26 A section of LOTOS NT code corresponding to LTS spt
We have addressed the checking of equivalence of heterogeneous labelled state transition systems (LTS) with different sets of labels, as defined in . Classical weak bi-simulation is extended by the use of an explicit relation linking labels of the LTS, so that these LTS can be rewritten into LTS with the same set of labels and compared modulo weak bi-simulation. The defined approach has been applied to compare task models representing several interaction techniques in the field of plastic user interfaces. A domain ontology of interaction techniques and devices has been proposed to provide the relation which links labels of the LTS representing task models at the interaction level. The application of our approach has been illustrated on two case studies through which we have shown how to check with formal tools whether a task model designed for an application on a personal computer platform is equivalent to the task model designed for the same application but for another platform, such as a smartphone, a touch pad or a PC. This approach is particularly useful, for instance, to compare design strategies to face input/output hardware failure in critical interactive systems.
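The comparison step can be sketched as follows, with made-up labels and two simplifications: the ontology-provided relation is a plain map, and the check is strong bisimilarity on deterministic LTS rather than the weak bi-simulation used in the paper.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch: relabel one LTS through an explicit label relation, then compare
// the two LTS. States are ints; a deterministic LTS maps state -> (label -> successor).
public class LtsCompareSketch {

    /** Rewrite every label of a deterministic LTS through the relation. */
    public static Map<Integer, Map<String, Integer>> relabel(
            Map<Integer, Map<String, Integer>> lts, Map<String, String> relation) {
        Map<Integer, Map<String, Integer>> out = new HashMap<>();
        for (var e : lts.entrySet()) {
            Map<String, Integer> succ = new HashMap<>();
            for (var t : e.getValue().entrySet()) {
                succ.put(relation.getOrDefault(t.getKey(), t.getKey()), t.getValue());
            }
            out.put(e.getKey(), succ);
        }
        return out;
    }

    /** Strong bisimilarity of two deterministic LTS from their initial states. */
    public static boolean bisimilar(Map<Integer, Map<String, Integer>> a, int sa,
                                    Map<Integer, Map<String, Integer>> b, int sb) {
        Set<Long> seen = new HashSet<>();
        Deque<long[]> todo = new ArrayDeque<>();
        todo.push(new long[] { sa, sb });
        while (!todo.isEmpty()) {
            long[] pair = todo.pop();
            int p = (int) pair[0], q = (int) pair[1];
            if (!seen.add(((long) p << 32) | q)) continue;
            Map<String, Integer> sp = a.getOrDefault(p, Map.of());
            Map<String, Integer> sq = b.getOrDefault(q, Map.of());
            if (!sp.keySet().equals(sq.keySet())) return false; // labels must match
            for (String label : sp.keySet()) {
                todo.push(new long[] { sp.get(label), sq.get(label) });
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // PC task model: click then confirm; smartphone task model: tap then confirm
        Map<Integer, Map<String, Integer>> pc =
                Map.of(0, Map.of("click", 1), 1, Map.of("confirm", 2));
        Map<Integer, Map<String, Integer>> phone =
                Map.of(0, Map.of("tap", 1), 1, Map.of("confirm", 2));
        Map<String, String> ontologyLink = Map.of("tap", "click"); // tap ~ click
        System.out.println(bisimilar(pc, 0, relabel(phone, ontologyLink), 0));
    }
}
```

Without the relabelling step the two models are trivially inequivalent (distinct alphabets); with it, the equivalence of the two interaction techniques becomes checkable, which is the role the domain ontology plays above.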
is to capture the semantics of mode change; verification should be possible once we have refined this initial description with a concrete implementation of mode switching. Currently, we have begun to sketch Giotto and AADL mode switching. Our goal is to integrate this work into the framework we have presented here. This would give us the possibility of checking timed properties on the mode switch; for example, we could check that a mode switch must happen in less than a particular time. It is interesting to remark that mode mechanisms in asynchronous systems require more attention than in synchronous systems ; indeed, since we do not assume the basic hypotheses of the synchronous approach (zero-time computation, deterministic concurrency and instantaneous communication), we have to handle the transitional aspects related to these concepts. From our point of view, the formal specification of these aspects is challenging and worth consideration.
Then, using the ontology editor Protégé, the content of the structured table was imported into ADMO using the Protégé Cellfie plugin. Entity information was integrated as subclasses of the ADMO participant classes. During the integration, we also added a new property, has_template (a sub-property of derives_from), to formally link a gene to its related mRNA and an mRNA to its related protein. Reactions were integrated as independent subclasses of the "process" class. Then, automated reasoning was used to classify them as subclasses of the ADMO upper-model process classes depending on their formal definition (see Fig. 2a). The 1,065 inferred subclass_of axioms corresponding to this refined classification of processes were then edited. During their import, process classes from AlzPathway were formally linked to their respective location through the RO property occurs_in.
• and finally, the problem of comparing interface behaviors is handled using the classical techniques for comparing state transition systems. A revisited definition of the classical bi-simulation relationship is provided.

This paper is structured as follows. Section 2 addresses the design of human-centered computer interfaces and gives an overview of the different techniques developed to define user task models. Section 3 focuses on the concept of user interface plasticity. It reviews the basic definitions and surveys the state of the art in the design of plastic user interfaces. It also shows how devices and interaction modes can be modeled as an explicit knowledge domain, i.e., an ontology. In Section 4, the core principles of the proposed approach are presented. Then, Sections 5 and 6 revisit the definition of the bi-simulation relationship needed to compare user task models. The whole formal model for verifying plastic user interfaces and the plasticity property is presented in Section 7, where the different steps leading to the formal analysis of plastic user interfaces are composed into a sequence of methodological steps. Section 8 is devoted to the development of our approach on two illustrative case studies. The use of a model checker for formal verification of plastic user interfaces is described in this section as well. Finally, Section 9 concludes this work and gives some future research directions.
With regard to the second aspect, it should be noted that model-driven development of applications is a well-established practice . However, in terms of managing the Web service development lifecycle, the technology is still in its early stages. We believe that the level of automation can be substantially increased with respect to what is available today, especially in terms of factorizing into the middleware those chores common to the development of many Web services. The approach proposed here has several advantages with respect to previous art, including early formal analysis and consistency checking of system functionalities, refinement and code generation. For example, the work proposed in  features generation rules from UML activity diagrams to BPEL processes. The work presented in  focuses on generating executable process descriptions from UML process models. The contribution of our work is specializing the model-driven approach to Web service conversation and composition models. As mentioned before, our approach focuses on specifying service composition models along with the conversation definitions and generating the executable specifications of a service that not only implements the service operations as specified, but also guarantees conformance of the service implementation with the conversation specification.
Goal-oriented self-consistent data acquisition. An issue of critical importance concerns the acquisition of material data sets with appropriate coverage of phase space for specific applications. For general materials, phase space is of a dimension such that it cannot be covered uniformly by data. High-dimensional spaces are encountered in other areas of physics such as statistical mechanics, where the high dimensionality of state space is usually handled by means of importance sampling techniques. The main idea is to generate data that are highly relevant to the particular problem under consideration, while eschewing irrelevant areas of phase space. A method for generating such goal-oriented data sets is the self-consistent approach of Leygue et al. . In that approach, from a collection of non-homogeneous strain fields, e.g., measured through Digital Image Correlation (DIC), a self-consistent iteration builds a material data set of strain–stress pairs that cover the region of phase space relevant to a particular problem. In effect, the self-consistent approach generates the material data set and solves for the corresponding Data-Driven solution simultaneously.
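The basic Data-Driven step that the self-consistent approach builds on can be illustrated in one dimension (all numbers made up; this is the nearest-point idea only, not Leygue et al.'s iteration itself): equilibrium fixes the stress in a uniaxial bar, and the solver returns the strain–stress pair of the material data set closest to that constraint.

```java
// 1D nearest-point sketch of a Data-Driven solve: pick the material data
// point closest to the equilibrium-determined stress.
public class DataDrivenSketch {

    /** Material data set rows: { strain, stress }. Values are made up. */
    public static final double[][] DATA = {
            { 0.000, 0.0 }, { 0.001, 200.0 }, { 0.002, 390.0 }, { 0.003, 570.0 }
    };

    /** Return the data pair whose stress is closest to the equilibrium stress. */
    public static double[] closestPair(double equilibriumStress) {
        double[] best = DATA[0];
        for (double[] pair : DATA) {
            if (Math.abs(pair[1] - equilibriumStress)
                    < Math.abs(best[1] - equilibriumStress)) {
                best = pair;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        double force = 40.0, area = 0.1;  // made-up load and cross-section
        double sigma = force / area;      // equilibrium stress of the bar
        double[] state = closestPair(sigma);
        System.out.println("strain=" + state[0] + " stress=" + state[1]);
    }
}
```

The self-consistent acquisition scheme, in effect, grows and refines DATA while repeatedly performing solves of this kind, so that the data ends up concentrated where such queries actually land.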