A behavior-driven approach for specifying and testing user requirements in interactive systems

To illustrate how our approach supports such traceability and consistency checking, Figure 66 presents the successive mapping of a less refined Balsamiq prototype, a PANDA prototype, and a final UI when testing the User Story "Flight Tickets Search". In the first transition, the Balsamiq prototype designed previously in Figure 54 evolves to a more refined level by using PANDA. Notice that more detailed decisions about the design solution have already been taken. For example, suppose that during the project a business decision has been made to evolve the user requirements in order to provide a new option for booking hotels along with the flights. Thus, instead of a simple "Round trip / One way" ButtonBar, the PANDA prototype has been modeled with a three-button solution that adds a third option to book hotels alongside the round-trip / one-way flight options. Neither solution, however, is covered by the ontology for the behavior "I choose … referring to …", so the test fails. The ButtonBar used in the Balsamiq prototype is not an interaction element modeled and recognized by the ontology, and the three-button solution used in the PANDA prototype does not allow an action of choosing, since such a behavior is not supported by buttons. On the final UI, links have been chosen instead, so the test passes.

Definition of a Behavior-Driven Model for Requirements Specification and Testing of Interactive Systems

Fig. 2. Conceptual Model for testable requirements

C. Multi-Artifact Testing

Fig. 3 gives a general view of how testing integration can occur across multiple artifacts, using an example behavior. The top of the figure presents an example of a Step of a Scenario describing the behavior "choose … referring to …". In the example, a user is choosing the gender "Female" on the UI element "Gender" in a form. This task is triggered when a "When" event occurs in the Scenario. To be tested, this task is associated with values for the data ("Female") and the UI element ("Gender"), indicating a possible and executable Scenario that can be extracted from that task. Following the ontology, the behavior addressed by this task can be associated with multiple UI elements such as Radio Button, Check Box, Link and Calendar components. The arrows on the right side of the figure indicate two implementations of this ontology, highlighting these associations: first an OWL version at the top, then its conversion to Java code at the bottom. Considering that the UI element Radio Button has been chosen to answer this behavior, a locator is triggered to trace this element throughout the artifacts, thus allowing us to reach it for testing purposes. The figure shows this trace being made through a HAMSTERS Specification for Task Models [22] (in the task "Choose Gender"), through a UsiXML Specification for Prototypes [17] (Radio Button "Gender" with the data options "Male" and "Female"), and finally through a Java Specification for Final UIs (@ElementMap "Gender" with the XPath reference "//input[@id='genderSelect']").
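The mapping described above can be sketched as follows. This is a hypothetical illustration, not the authors' actual OWL or Java implementation: the behavior name, element types, and locator references are taken from the figure's example, while the data structures themselves are assumptions.

```python
# Hypothetical sketch of the ontology's behavior-to-element mapping and the
# locator that traces one element across artifacts (assumed structures).

# Each interactive behavior maps to the UI element types that can answer it.
ONTOLOGY = {
    "choose ... referring to ...": {"RadioButton", "CheckBox", "Link", "Calendar"},
    "set ... in the field ...": {"TextField"},
}

# Locators trace a named element through task model, prototype, and final UI.
LOCATORS = {
    "Gender": {
        "task_model": "Choose Gender",             # HAMSTERS task
        "prototype": "RadioButton:Gender",         # UsiXML element
        "final_ui": "//input[@id='genderSelect']", # XPath on the final UI
    },
}

def is_consistent(behavior: str, element_type: str) -> bool:
    """True if the element type can answer the given behavior."""
    return element_type in ONTOLOGY.get(behavior, set())

def locate(element: str, artifact: str) -> str:
    """Return the reference used to reach the element in a given artifact."""
    return LOCATORS[element][artifact]

# A RadioButton supports "choose"; a TextField does not, so its test fails.
assert is_consistent("choose ... referring to ...", "RadioButton")
assert not is_consistent("choose ... referring to ...", "TextField")
```

The same lookup serves both purposes in the text: checking that a chosen widget is semantically consistent with the behavior, and resolving the concrete reference to run the test against each artifact.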

Definition of a Behavior-Driven Model for Requirements Specification and Testing of Interactive Systems

On the Software Engineering (SE) side, User Stories are typically used to describe requirements in agile projects. This technique was proposed by Cohn [9] and provides, in a single artifact, a Narrative briefly describing a feature from the business point of view, and a set of Scenarios that detail business rules and serve as Acceptance Criteria, giving concrete examples of what should be tested to consider a given feature "done". This kind of description follows the Behavior-Driven Development (BDD) assumption [4], in which the system is developed from a behavior perspective, taking the user's point of view. This method gives clients and teams a semi-structured natural language description that is unambiguous (because it is supported by test cases), and it also promotes the reuse of business behaviors that can be shared across multiple features of the system.
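As a rough sketch of this structure, the snippet below encodes a hypothetical User Story (Narrative plus one Scenario) as text and checks its steps against a shared behavior vocabulary; the story content and the vocabulary are invented for illustration.

```python
# A minimal, hypothetical User Story in the Cohn/BDD style: a Narrative plus a
# Scenario whose Given/When/Then steps reuse a shared behavior vocabulary.
USER_STORY = """\
Narrative: As a traveler, I want to search flights so that I can book a trip.
Scenario: Search for a round trip
Given I am on the flight search page
When I choose "Round trip" referring to "Trip type"
Then the search results are displayed
"""

# Behaviors assumed to already exist in the shared vocabulary (illustrative).
KNOWN_BEHAVIORS = ["I am on", "I choose", "are displayed"]

def reusable_steps(story: str):
    """Return the Given/When/Then steps matched by the shared vocabulary."""
    steps = [line for line in story.splitlines()
             if line.split(" ")[0] in ("Given", "When", "Then")]
    return [s for s in steps if any(b in s for b in KNOWN_BEHAVIORS)]

# All three steps reuse known behaviors, so no new step code is needed.
assert len(reusable_steps(USER_STORY)) == 3
```

The point mirrored here is that a step only becomes an executable acceptance criterion when its behavior is already in the shared vocabulary, which is what makes reuse across features possible.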

Extending Behavior-Driven Development for Assessing User Interface Design Artifacts

V. CONCLUSION AND FUTURE WORKS

This paper summarizes the new results we obtained by applying our approach for specifying and checking the consistency of user requirements on core user interface design artifacts. Compared to plain-vanilla BDD, this approach benefits from (i) an extension to assess software artifacts other than final UIs, and (ii) a common vocabulary that can be reused for specifying interactive scenarios without requiring developers to implement the mentioned behaviors. Compared to other approaches for assessing requirements and artifacts, the term "test" is usually not employed, under the argument that such artifacts cannot be "run", i.e. executed for testing purposes; in practice they are just manually reviewed or inspected in a process called verification. Manual verification of software outcomes is highly time-consuming, error-prone and even impracticable for large software systems. Fully interactive artifacts such as final UIs can in addition be validated by users, who can interact with the artifact and assess whether its behavior is aligned with their actual needs. As our approach succeeds in automatically running User Stories on software artifacts to assess their consistency with user requirements, we actually provide the "test" component for both verification and validation of artifacts in software development. We consider this a big step towards

Towards Automated Requirements Checking Throughout Development Processes of Interactive Systems

{rocha, winckler}@irit.fr

Abstract. The user-centered development process of interactive systems is iterative and, during multiple iterations, users have the opportunity to bring new requirements that are very likely to have an impact, not only on future development, but also on previously developed artifacts. Manual testing of all artifacts when new requirements are introduced can be cumbersome and time-consuming. We therefore need flexible methods to ensure continuous consistency and accuracy among the various artifacts employed to build interactive systems. The ultimate goal of this position paper is to briefly present our vision of an approach for automating requirements assessment from a Behavior-Driven Development perspective. Thereby, automated tests can run early in the design process, providing continuous quality assurance of requirements and helping clients and teams to identify potential problems and inconsistencies before committing to a software implementation.

A Rule-driven Approach for Defining the Behavior of Negotiating Software Agents

We see the negotiation process as a form of interaction made of protocols and strategies. The protocols comprise the rules (i.e., the valid actions) of the game, and, for a given protocol, a participant (human or software) uses a strategy (i.e., a plan of action) to maximize her utility [5]. Based on this, many strategy-enabled agent-mediated negotiation systems have been described in the literature. Unfortunately, most of them use hardcoded, predefined, and non-adaptive negotiation strategies, which is evidently insufficient in regard to the ambitions and growing importance of automated negotiations research. The well-known KASBAH agent marketplace [6] is a good example of such systems. To overcome this shortcoming, we believe that negotiation strategies should be treated as declarative knowledge, and could, for instance, be represented as if-then rules, and exploited using inference engines. The focus of our research is on combined negotiations [7], a case where the consumer combines negotiations for different complementary products that are not negotiated on the same server. For instance, a consumer may want to simultaneously purchase an item and its delivery by engaging in separate negotiations. If software agents are assigned to these negotiations, this poses a coordination problem between them. Many multi-agent negotiation systems found in the literature still rely on ad hoc schemes to solve this problem [8][9]. Again, we believe that a declarative approach should be used to describe and manage the coordination of agents across several negotiations. To validate our approach, we designed and implemented an automated negotiation system called CONSENSUS [7] that enables a human user to instantiate one or more software agents, provide them with negotiation strategies, as well as coordination know-how, register them on corresponding negotiation servers, and launch them. 
The agents use the strategies to negotiate according to the protocol dictated by the server, and the coordination know-how to coordinate their actions. An example of a strategy applicable to an English auction (one of many existing negotiation protocols) is: "If you notice any form of jump bidding in the auction, then stop bidding and quit". Jump bidding means making a bid that is far greater than necessary in order to signal one's interest in the auctioned item (see Section 4). An example of coordination know-how, applicable to two agents bidding as partners in two separate auctions for two complementary items, is: "If your partner loses in its auction, then stop bidding and wait for further instructions" (see Section 4). We are currently testing various strategies and coordination schemes by way of agent tournaments. A large part of the paper is dedicated to this ongoing validation work.
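The two quoted rules can be sketched declaratively along these lines; the rule and event structures are assumptions made for illustration and do not reflect the actual CONSENSUS implementation.

```python
# Hedged sketch of negotiation strategies as declarative if-then rules, fired
# by a tiny inference step. All names and thresholds are illustrative.

def jump_bid_rule(state, events):
    """If any bid jumps far above the current price, stop bidding and quit."""
    for e in events:
        if e["type"] == "bid" and e["amount"] > state["current_price"] * 1.5:
            return "quit"
    return None

def partner_lost_rule(state, events):
    """If the partner lost its auction, stop bidding and wait for instructions."""
    if any(e["type"] == "partner_lost" for e in events):
        return "wait"
    return None

def decide(state, events, rules):
    """Fire the first rule whose condition holds; otherwise keep bidding."""
    for rule in rules:
        action = rule(state, events)
        if action:
            return action
    return "bid"  # default strategy: keep bidding

RULES = [jump_bid_rule, partner_lost_rule]
state = {"current_price": 100}
assert decide(state, [{"type": "bid", "amount": 200}], RULES) == "quit"
assert decide(state, [{"type": "partner_lost"}], RULES) == "wait"
assert decide(state, [{"type": "bid", "amount": 105}], RULES) == "bid"
```

Keeping the strategies as data (a list of rules) rather than hardcoded control flow is what makes them swappable and adaptive, which is the shortcoming the paper identifies in systems like KASBAH.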

Model-Based Testing for Building Reliable Realtime Interactive Music Systems

A score-based IMS is therefore a reactive system, interacting with the outside environment (the musicians) under strong timing constraints: the output (generally messages passed to an external audio application such as MAX [32]) must indeed be emitted at the right moment, not too late but also not too early. This can be a difficult task since audio calculations often have a significant impact on resource consumption. In this context, it is important to be able to assess the behavior of an IMS on a given score before its real use in a concert. A traditional approach is to rehearse with musicians, trying to detect potential problems manually, i.e. by audition. This tedious method offers no real guarantee since it is not precise, not complete (it covers only one or a few particular musicians' performances), and error-prone (it relies on a subjective view of the expected behavior instead of a formal specification).
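The timing property described above, output emitted neither too early nor too late, can be sketched as a simple check; the tolerance, message names, and event format are assumptions for illustration.

```python
# Hypothetical check of the "right moment" property: each output message must
# be emitted within a tolerance window around its expected time in the score.

def check_timing(expected, observed, tolerance_ms=20):
    """Return the messages missing or emitted outside [t - tol, t + tol]."""
    violations = []
    for msg, t_expected in expected.items():
        t = observed.get(msg)
        if t is None or abs(t - t_expected) > tolerance_ms:
            violations.append(msg)
    return violations

# Expected emission times (ms) from the score vs. an observed performance.
expected = {"note_on": 1000, "note_off": 1500}
observed = {"note_on": 1010, "note_off": 1560}  # note_off is 60 ms late
assert check_timing(expected, observed) == ["note_off"]
```

Unlike a rehearsal, such a check is exhaustive over the recorded run and compares against a formal expectation rather than a subjective listening impression, which is precisely the argument the excerpt makes for model-based testing.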

A multi-formalism approach for model-based dynamic distribution of user interfaces of critical interactive systems.

3.1. Requirements for a user interface generation and distribution process for dynamic partly automated systems

As presented in Section 2, in the area of complex command and control systems, some of the user tasks and activities cannot be identified beforehand, i.e. at design time. In addition, these tasks can be complex and/or inadequate for a human being (requiring, for instance, the management of a large amount of information or the execution of multiple commands under strong temporal constraints), thus requiring delegation to an autonomous sub-system. To address these issues, there is a need to provide operators with meta-level systems able to combine multiple commands and to delegate their execution to an autonomous agent. The design of this part of the partly-autonomous command and control system requires the same level of reliability and usability as the rest of the application. While the reliability aspects of user interfaces can be addressed using standard dependability and fault-tolerance techniques such as the command and monitoring architecture initially proposed by self-checking compo-

An Approach for Multi-Artifact Testing Through an Ontological Perspective for Behavior-Driven Development

system in terms of tasks that may be accomplished. This is particularly true in early phases of the development process, when the Prototypes are rudimentary samples of interactive systems. In this paper we explore the use of BDD techniques for supporting the automation of user requirements, testing a set of artifacts produced throughout the development process of interactive systems. Our ultimate goal is to test multiple artifacts throughout the development process, looking for vertical and bidirectional traceability of functional requirements. To achieve this goal, a formal ontology model is provided to describe concepts used by the platforms, models and artifacts that compose the design of interactive systems, allowing a wide description of User Interface (UI) elements (and their behaviors) to support testing activities. Whilst the approach is aimed at being generic to many types of artifacts, in this paper we have focused on Task Models, Prototypes and Final UIs. In the following sections we present the conceptual background, an overview of the underlying process for using the proposed approach, and a case study that demonstrates its feasibility. Lastly we discuss related work and the next steps for this research.

2 Conceptual Background

A FORMAL ONTOLOGY FOR DESCRIBING INTERACTIVE BEHAVIORS ON USER INTERFACES

Nowadays many software development frameworks implement Behavior-Driven Development (BDD) as a means of automating the testing of interactive systems under construction. Automated testing helps to simulate the user's actions on the User Interface and thereby check whether the system behaves properly, in accordance with the scenarios that describe the functional requirements. However, tools supporting BDD run tests on implemented User Interfaces, so they are only a suitable alternative for assessing functional requirements in later phases of the development process. Even when BDD tests are written in early phases of the development process, they can hardly be used with specifications of User Interfaces such as prototypes. To address this problem, this paper proposes to raise the abstraction level of both system interactive behaviors and User Interfaces by means of a formal ontology aimed at supporting test automation using BDD. The paper presents the ontology and an ontology-based approach for automating the testing of functional requirements of interactive systems. We demonstrate the feasibility of this ontology-based approach for assessing functional requirements in prototypes and full-fledged applications through an illustrative case study of e-commerce applications for buying flight tickets.

Model-Based Testing of Interactive Systems

1.3 Internship objective

Post-WIMP interactive systems do not consider widgets as first-class components, but prefer the use of concepts that reduce the gap between the user and the system. For instance, direct manipulation is a principle stating that users should interact through the UI with the common representation of the manipulated data [41]. A classical example of a direct manipulation UI is a vectorial shape editor whose UI presents the 2D/3D representation of the shapes to handle. So the problem with testing post-WIMP systems is that they do not depend mainly on widgets or on a graph of events applied to widgets. For example, in a recognition-based interface, where the input is uncertain, "a conventional event-based model may no longer work, since recognition systems need to provide input continuously, rather than just in discrete events when they are finished" [34]. The testing of post-WIMP systems therefore has to focus mainly on interactions rather than on UI components. In [7], Beaudoin-Lafon explained that developers have to switch from designing UIs to designing interactions. This principle should also be applied to UI testing. Moreover, current UI testing approaches do not check what the interactions do to the data of the system. A modern UI testing approach must also provide features to test that.
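The last point can be sketched as follows: a post-WIMP test exercises an interaction (here, a hypothetical drag in a shape editor) and then asserts on the manipulated data rather than on widget events. The classes and the interaction model are invented for illustration.

```python
# Sketch of interaction-centric testing: the assertion targets the data the
# interaction manipulates, not the widget tree or an event graph (assumed model).

class Shape:
    """Minimal stand-in for a shape in a hypothetical vectorial editor."""
    def __init__(self, x, y):
        self.x, self.y = x, y

def drag(shape, dx, dy):
    """Direct manipulation: the user drags the shape's on-screen representation."""
    shape.x += dx
    shape.y += dy

# The test checks the effect of the interaction on the model.
shape = Shape(10, 10)
drag(shape, 5, -3)
assert (shape.x, shape.y) == (15, 7)
```

A widget-based test would only have verified that a mouse event reached some component; this style instead verifies what the interaction did to the system's data, which is the gap the excerpt identifies in current UI testing approaches.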

Towards Automated Requirements Checking Throughout Development Processes of Interactive Systems

Keywords: Automated Requirements Checking, Behavior-Driven Development, Multi-Artifact Testing.

The activities of Requirements Engineering encompass cycles of meetings and interviews with clients, users and other stakeholders. The outcomes of these meetings are documented and modeled in several requirements artifacts which are cross-checked at the end of the process, when the system is functional and ready to be tested. When User-Centered Design (UCD) approaches are employed, intermediate tests can be conducted earlier on Prototypes. However, UCD practice has shown that users are keen to introduce new requirements along the process, which may raise problems of correspondence with artifacts already produced, demanding activities supported by Continuous Requirements Engineering. Currently, there are many solutions for tracing requirements specification artifacts such as Use Cases, Business Rules, etc., but there is still room for investigating automated solutions for tracking and testing other types of artifacts such as Task Models, Prototypes, etc. [2].

A behavior-based ontology for supporting automated assessment of interactive systems

V. CONCLUSION

In this paper we have presented a behavior-based ontology aimed at test automation that can help to validate functional requirements when building interactive systems. The proposed ontology acts as a base of common vocabulary articulated to map users' behaviors to Interaction Elements in the UI, which allows us to automate tests. The ontology also provides important improvements in the way teams should write requirements for testing purposes. Once described in the ontology, behaviors can be freely reused to write new Scenarios in natural language, providing test automation with little effort from the development team. Moreover, it allows specifying tests in a generic way that can be reused along the development process. For that reason, we are also investigating the use of the ontology to test model-based artifacts such as low-fidelity Prototypes and Task Models. Tests on these artifacts could be conducted through a static verification of their source code and would help to integrate testing into a wider spectrum of artifacts commonly used to build interactive systems.

A Formal Ontology for Describing Interactive Behaviors and Supporting Automated Testing on User Interfaces

During the last seven years, we have been involved in the development of web applications where we have observed certain patterns of low-level behaviors that are recurrent when writing BDD Scenarios for testing functional requirements through the User Interface (UI). Besides that, we could also observe that User Stories specified in natural language often contain semantic inconsistencies. For example, it is not rare to find Scenarios that specify an action, such as a selection, to be made on semantically inconsistent widgets such as a Text Field. These observations motivated us to investigate the use of a formal ontology for describing predefined behaviors that could be used to specify Scenarios. On one hand, the ontology should act as a taxonomy of terms, removing ambiguities in the description. On the other hand, the ontology would operate as a common language that could be used to write tests that can be run on many artefacts used along the development process of interactive systems.

REFAS: A PLE Approach for Simulation of Self-Adaptive Systems Requirements

Figure 3. Soft Goal Satisficing View of GridStix

We define a simulation as a sequence of scenarios, where a scenario is the definition of a partial mapping of the model's context variables to corresponding values. Thus, running a simulation means executing the constraint program (i.e., the core semantics of the requirements model) with each of the simulation scenarios (i.e., a constraint-satisfaction problem to find a configuration satisfying the context-dependent requirements) in an interactive sequence. To realize this interactive sequence we use a simulation control loop implementing the Monitor-Analyser-Planner-Executor-Knowledge base (MAPE-K) reference model [7].

Monitor. The monitor's goal is to identify and report internal and external context events. In REFAS, there are two monitored sources of events. First, the requirements model, which defines the concepts, their attributes, and relations; for example, an attribute identifies whether the concept is in the model. Second, the requirements model configuration, which defines restrictions on the selection and exclusion of concepts, as well as the values of some of the variables.

Analyser. The analyser evaluates the events notified by the monitor and the simulation's current configuration state, as specified by the requirements model. The simulation's current configuration results from the aggregation of the requirements model design and configuration, and the simulation configuration. The analyser invokes the planner if the configuration is not optimal or is invalid.

Planner. The planner evaluates the current configuration state and computes a new configuration by invoking the obtain-solutions method, logging the results to save the configuration. The planner notifies the executor with the configuration plan (i.e., a configuration solution) and the analytical execution information.

Executor. The executor formats the configuration and variable values of the selected solution using JavaScript Object Notation (JSON), writes the output files, and triggers updates on the user interface. The user interface includes the requirements model, the dashboard, the statistical information, and alerts in case of error.

Knowledge base. The knowledge-base element is a data structure storing the set of constraints automatically generated from the concepts and relations across all the requirements model views. This element also contains the constraints created with the values of variables used in conditional expressions of soft dependencies and claims. The constraints are used by the analyser and planner.
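As a rough sketch of this control loop, the following minimal MAPE-K cycle runs a sequence of scenarios and reconfigures when the analysis detects a mismatch. The configuration logic is a stand-in for REFAS's constraint solving, and all names are illustrative.

```python
# Minimal, hypothetical MAPE-K loop: a simulation is a sequence of scenarios
# (context-variable assignments); each cycle monitors, analyses, plans, executes.

def monitor(scenario):
    """Report context events: here, simply the scenario's variable assignment."""
    return dict(scenario)

def analyse(context, current_config):
    """Decide whether the current configuration still satisfies the context."""
    return current_config.get("mode") != context.get("required_mode")

def plan(context):
    """Compute a new configuration satisfying the context (stand-in solver)."""
    return {"mode": context["required_mode"]}

def execute(config):
    """Apply the configuration (here, just return it as the new state)."""
    return config

def simulate(scenarios):
    config = {}
    history = []
    for scenario in scenarios:        # a simulation is a sequence of scenarios
        context = monitor(scenario)
        if analyse(context, config):  # reconfigure only when needed
            config = execute(plan(context))
        history.append(dict(config))
    return history

runs = simulate([{"required_mode": "low_power"},
                 {"required_mode": "low_power"},
                 {"required_mode": "high_throughput"}])
assert runs[0] == {"mode": "low_power"}
assert runs[-1] == {"mode": "high_throughput"}
```

The knowledge base from the text would sit behind `analyse` and `plan` as the shared set of constraints; here it is folded into those functions to keep the sketch small.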

Testing Prototypes and Final User Interfaces Through an Ontological Perspective for Behavior-Driven Development

2 Conceptual Background

Hereafter is a summary of the basic concepts needed to explain how the approach works.

2.1 User Stories and Scenarios

A large set of requirements can be expressed as stories told by the user. Nonetheless, the term User Story may have diverse meanings in the literature. In the Human-Computer Interaction (HCI) field, a User Story refers to a description of users' activities and jobs collected during meetings, which is close to the concept of Scenarios given by Rosson and Carroll [8]. Users and other stakeholders typically talk about their business process, emphasizing the flow of activities they need to accomplish. These stories are captured in requirements meetings and are the main input for formalizing a requirements artifact. These meetings work mainly like brainstorming sessions and ideally include several stakeholders addressing needs concerning features that may be developed. As stated by Lewis & Rieman, "…scenarios forced us to get specific about our design, […] to consider how the various features of the system would work together to accomplish real work…" [9]. For Santoro [7], Scenarios provide informal descriptions of a specific use in a specific context of application, so a Scenario might be viewed as an instance of a use case. An identification of meaningful Scenarios allows designers to get a description of most of the activities that should be considered in a task model. If task models have already been developed, Scenarios can also be extracted from them to provide executable and possible paths in the system.

Rationalizing the Need of Architecture-Driven Testing of Interactive Systems

The "Input Device Type" greyed-out box describes the information flow for a given type of input device. Each new type of input device requires a separate "Input Device Type". An "Input Device Type" is composed of three components. First, the "Input Devices" component is the physical (hardware) input device manipulated by the user (e.g. a mouse or a finger on a touchscreen). The "Input Devices" component sends information to, or receives requests for information from, the "Drivers & Libraries" software component, which, in turn, makes this information available to the other components of MIODMIT. Less commonly, "Drivers and Libraries" can manage "Input Devices" behaviour, such as sampling frequency [24], or provide user identification [40]. "Drivers and Libraries" can be provided either by the "Input Devices" manufacturer or by the operating system if the hardware is standard or has been around for a significant amount of time. Lastly, the "Input Chain Device" component is a software component comprising a mirror of the state of the "Input Devices" hardware (called "Virtual Device"), the "Logical Device" of the "Input Devices" hardware (e.g. the cursor pointer position for a mouse), and a manager. These components are transducers [2] that transform raw data into low-level information. Virtual devices can be dynamically instantiated with plug-and-play devices, whereas logical devices can be dynamically instantiated at operation time. For example, each time a finger touches a multi-touch input device, a new logical device associated with the new finger is created. The manager addresses the configuration and dynamic configuration of devices.
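The dynamic instantiation of logical devices described above can be sketched as follows; the class and method names are invented for illustration and do not reflect MIODMIT's actual components.

```python
# Hypothetical sketch: each finger on a multi-touch device gets its own logical
# device, created and destroyed at operation time by a manager component.

class LogicalDevice:
    """A per-finger logical device mirroring one contact point."""
    def __init__(self, finger_id, x, y):
        self.finger_id, self.x, self.y = finger_id, x, y

class InputDeviceManager:
    """Manages the dynamic (de)instantiation of logical devices."""
    def __init__(self):
        self.logical_devices = {}

    def on_touch_down(self, finger_id, x, y):
        # A new finger touches down: instantiate a new logical device for it.
        self.logical_devices[finger_id] = LogicalDevice(finger_id, x, y)

    def on_touch_up(self, finger_id):
        # The finger lifts: its logical device is destroyed.
        del self.logical_devices[finger_id]

mgr = InputDeviceManager()
mgr.on_touch_down(1, 100, 200)  # first finger -> first logical device
mgr.on_touch_down(2, 300, 400)  # second finger -> second logical device
assert len(mgr.logical_devices) == 2
mgr.on_touch_up(1)
assert list(mgr.logical_devices) == [2]
```

This is the operation-time behaviour the text contrasts with virtual devices, which are instead instantiated when plug-and-play hardware appears.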

Testing Prototypes and Final User Interfaces Through an Ontological Perspective for Behavior-Driven Development

6 Conclusion and Future Works

In this paper we have presented an approach aimed at test automation that can help to validate functional requirements through the multiple artifacts used to build interactive systems. To that end, an ontology was provided to act as a base of common ontological concepts shared by different artifacts and to support traceability and test integration along the project. By representing the behaviors that each UI element is able to answer, the ontology also allows extending multiple solutions for the UI design. We have focused in this paper on the testing of Prototypes and Final UIs, but the same solution can be propagated to verify and validate other types of artifacts, such as Task Models, integrating the testing process and assuring traceability through artifacts. The degree of formality of these artifacts, however, can influence the process of traceability and testing, making it more or less tricky to conduct. These variations should be investigated in the future.

A Formal Ontology for Describing Interactive Behaviors and Supporting Automated Testing on User Interfaces

domains. For example, a high-level Step like "When I search for flights to 'Destination'" encapsulates all the low-level behaviors referring to individual clicks, selections, etc.; however, it also contains information that refers to the airline domain (i.e. the behavior "search for flights"). Therefore, that Step would only make sense in that particular application domain. For further research, it could be interesting to investigate domain ontologies to be used in parallel with our ontology, defining a higher-level business vocabulary database in which business behaviors could be mapped to a set of interaction behaviors, covering recurrent Scenarios for a specific domain and avoiding having to write them every time a new interaction is tested. Another aspect to be discussed is that, even having mapped synonyms for some specific behaviors, our approach does not provide any kind of semantic interpretation, i.e. the Steps must be specified exactly as they were defined in the ontology. The JBehave plugin for Eclipse shows (through different colors) whether the Step being written exists in the ontology. This resource reduces the effort of remembering exactly how a behavior has been described in the ontology.
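The envisioned mapping from business behaviors to interaction behaviors could look roughly like this; the step pattern and its expansion are invented for illustration and are not part of the authors' ontology.

```python
# Hypothetical domain-ontology sketch: a high-level, domain-specific step
# expands into low-level interaction behaviors from the generic ontology.

DOMAIN_STEPS = {
    "I search for flights to {destination}": [
        'I set "{destination}" in the field "Destination"',
        'I click on "Search"',
    ],
}

def expand(step: str, **params):
    """Expand a business step into its low-level interaction behaviors."""
    for pattern, behaviors in DOMAIN_STEPS.items():
        if pattern.split(" {")[0] in step:
            return [b.format(**params) for b in behaviors]
    return [step]  # already a low-level interaction behavior

steps = expand("I search for flights to Paris", destination="Paris")
assert steps == ['I set "Paris" in the field "Destination"',
                 'I click on "Search"']
```

Once such a mapping exists, recurrent domain Scenarios need to be written only once in business vocabulary, with the low-level clicks and selections derived automatically, which is the reuse the excerpt proposes for future work.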