
It uses layout heuristics to aid the designer with alternatives, suggesting which one will be the most correct. Those heuristics are based on relationships like justification, centring, uniformity, equilibrium, etc.

In this research, it has been acknowledged that it is impossible to satisfy all ergonomic criteria at once, since each strategy preserves some ergonomic criteria but violates others. So one idea is to adopt multi-criteria strategies, where the designer is informed of one or several strategies that are compatible with the criteria the designer wants to prioritise. Another option would be to use pattern-matching techniques: from a template base, existing templates are automatically retrieved to avoid redesigning parts of the user interface that are morphologically similar.

Task and Dialogue Specification

Figure 5.2: Activity Chaining Graph for a specific task.

Once the information and actions of a particular task are defined, we need a way to automatically distribute them into windows. Traditional model-based approaches can easily distribute information and actions into windows by deciding the repartition on the underlying model. In MECANO and GENIUS, the generated UIs are unique and restricted to some categories of interactive classes (input, display, list, browse). In task-based approaches, systems can support various interactive tasks. ADEPT uses a hierarchical tree of tasks decomposed into subtasks; information and actions are grouped at equal tree levels and then grouped into windows.

TRIDENT adopts a task-based strategy, in the sense that it uses a task model to distribute information and actions into windows.

This is done with an oriented 1-graph, the ACG or Activity Chaining Graph (see figure 5.2), which describes the interactive task the user has to carry out. It shows the information flow between chained functions. Nodes are information or actions, and links are inputs/outputs or chainings. Links are enriched with the logical conditions AND, OR and XOR.

According to the selected interaction style, and based on the syntactic information of the ACG, several window identifications can be proposed to the designer (a small sketch of some of these strategies follows the list):

1. maximum identification - identifies only one window showing all interaction objects.

2. minimum identification - for each external information item, a window is identified.

3. input/output identification - a window is identified gathering all input objects that have external information, and another window gathering all output information.


4. functional identification - a window is identified for all the information of each function having external information.

5. free identification - a window is identified for every information set resulting from an arbitrary partition of the ACG.
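To make these identification strategies concrete, the sketch below assumes a simple in-memory representation of an ACG; the class and function names are illustrative, not TRIDENT's actual notation.

```python
from dataclasses import dataclass

@dataclass
class Function:
    """A function in the ACG, with the information items it consumes and produces."""
    name: str
    inputs: list[str]           # information items required by the function
    outputs: list[str]          # information items produced by the function
    external_inputs: list[str]  # subset of the inputs supplied by the user

@dataclass
class ACG:
    """Activity Chaining Graph: functions chained by their information flow."""
    functions: list[Function]

def maximum_identification(acg: ACG) -> list[set[str]]:
    """Strategy 1: a single window showing every interaction object."""
    window: set[str] = set()
    for f in acg.functions:
        window |= set(f.inputs) | set(f.outputs)
    return [window]

def minimum_identification(acg: ACG) -> list[set[str]]:
    """Strategy 2: one window per external (user-supplied) information item."""
    return [{item} for f in acg.functions for item in f.external_inputs]

def functional_identification(acg: ACG) -> list[set[str]]:
    """Strategy 4: one window per function that takes external information."""
    return [set(f.inputs) | set(f.outputs)
            for f in acg.functions if f.external_inputs]
```

Each strategy returns a list of windows, a window being just the set of interaction objects it gathers; the free identification would correspond to an arbitrary partition of the same object set.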

5.4.2 The MOBI-D Project

MOBI-D [34] (Model-Based Interface Designer) is a model-based system which targets bi-dimensional interfaces.

The authors [34] distinguish between abstract elements and concrete elements in an interface model. A user task (e.g. getting a customer's name) is an abstract element, while scrollbars and push buttons (which are part of the presentation of the interface) are concrete elements.

In fact, interface elements can be placed into two different categories: abstract and concrete. Concrete elements are those the user can access directly (windows, push buttons, mouse clicks and audio); presentation and dialogue models contain concrete elements. Abstract elements are those the user can access only indirectly, or not at all; user-task model, domain model and user model components contain abstract elements.

Figure 5.3: The mapping problem in interface models.

The problem of linking abstract and concrete elements in an interface model is the so-called "mapping problem" (see figure 5.3). In fact, traditional model-based systems also try, in a way, to solve this problem, but with limited success. In general, they are able to automatically generate forms and dialogue boxes for database access. Furthermore, the interfaces produced by those systems all look fairly similar, and developers have little flexibility to change them in their fundamental aspects. The main problem is the gap between the abstract and the concrete in interface models, which leads to the need for a computational framework to bridge it.

Instead of full automation, they suggest that model-based systems provide tools to allow developers to interactively set the mappings, assisting the designer in pruning the space of potential mappings into a manageable set.

One hypothesis is that traditional model-based systems do not tackle the mapping problem directly because they lack a general solution to it for all interfaces, or at least a general framework to help searching for solutions for individual interfaces. Any user interface design needs a set of mappings which cannot be produced by a single method (figure 5.3). So we need a way to link abstract elements with concrete elements via multiple methods, and this raises a level-of-abstraction mismatch in interface models. This could suggest that the future of model-based systems would be to target simple, specific-type interfaces.

MOBI-D tries to overcome this assumption: with it, designers are able to design a multitude of types of user interfaces. Its authors have identified the most important types of mappings that designers need in order to specify a user interface.

User-task models are arranged into hierarchical task/subtask decompositions, which can be sequential or parallel, and where each subtask may have conditions attached. Dialogue models, on the other hand, establish a navigation schema where the accepted actions are defined. So the possible mappings between user-task models and dialogue models are: task-execution order to navigation order; conditions on task execution to enabled/disabled states in a dialogue; and input/output requirements for tasks to input/output requirements for command execution.

Presentation models define a hierarchical arrangement and grouping of UI parts. So the mapping between user-task models and presentation models is done by relating part/subpart hierarchies in the presentation model to task/subtask decompositions in the user-task model.

User models specify a number of target users of a specific interface. So the mapping between user-task models and user models is done by specifying that a user may be involved in all tasks of a user-task model, or only in a subset of them.

Domain models specify objects which possess a number of attributes, including a data type. When mapping domain models onto user-task models, the interface model defines which domain objects are involved in which user tasks.

To map between domain models and presentation models, the designer specifies how to present and make domain objects available in a user interface. A widget should be used to display the value of a domain object of a certain type; not only the associated type may influence this selection, but other attributes, such as a range or minimum and maximum values, may also play a role. Finally, to map between presentation models and dialogue models, the designer only has to assign presentation elements to a specific dialogue.
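To illustrate how a type and attributes such as a range can drive widget choice, here is a minimal sketch; the rules and widget names are assumptions for illustration, not MOBI-D's actual guidelines.

```python
from dataclasses import dataclass

@dataclass
class DomainObject:
    name: str
    data_type: str             # e.g. "int", "bool", "string", "enum"
    minimum: int | None = None
    maximum: int | None = None
    choices: tuple[str, ...] = ()

def select_widget(obj: DomainObject) -> str:
    """Pick a presentation widget from the object's type and attributes."""
    if obj.data_type == "bool":
        return "checkbox"
    if obj.data_type == "enum":
        # few alternatives fit radio buttons; many call for a drop-down list
        return "radio-group" if len(obj.choices) <= 4 else "combo-box"
    if obj.data_type == "int" and obj.minimum is not None and obj.maximum is not None:
        # a bounded range suggests a slider or a spin box rather than free text
        return "slider" if obj.maximum - obj.minimum <= 100 else "spin-box"
    return "text-field"  # default for unconstrained values

print(select_widget(DomainObject("volume", "int", minimum=0, maximum=10)))  # slider
```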

Automation and flexibility

An interface layout design can be as much an expression of the visual artistic capabilities of a designer as it is a rational selection of widgets based on data type definitions. Even in a rational selection, the designer faces a wide range of options for each mapping to be set: several widgets may be effective to display a certain domain object, or there may be many ways to distribute a task among windows, etc. Most model-based systems remove all issues of creativity and art from the mapping process. So, usually the interfaces produced are restricted to one look and feel, and to one design pattern. The ideal approach would be to automate the process supporting the choices of rational interface design as well as the creativity of artistic interface design.

Model Mapping

MOBI-D allows designers to directly set the mappings according to their needs, instead of committing the system to any particular method of setting them. It provides knowledge-based assistance in setting the mappings, in order to support a rational approach to the mapping problem without restricting the exploration of design options. A new component is introduced in the interface model: the design model, which aims to help the inspection and setting of mappings. It represents, in a declarative way, all the mappings in a user interface.
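One way to picture such a declarative design model is as a collection of explicit mapping records linking elements of the basic models. The representation below is a hypothetical sketch, not MOBI-D's internal format.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Mapping:
    """One declarative mapping between two interface-model elements."""
    source_model: str     # e.g. "domain", "user-task"
    source_element: str
    target_model: str     # e.g. "presentation", "dialogue"
    target_element: str
    annotation: str = ""  # designer's note on the nature of the mapping

# A design model is then just the collection of all mappings,
# which tools can inspect, query and analyse.
design_model: set[Mapping] = {
    Mapping("domain", "customer.name", "presentation", "text-field#name",
            "displayed read-write on the main form"),
    Mapping("user-task", "get-customer-name", "dialogue", "state:entry",
            "task enables the entry state of the dialogue"),
}

# Example query: every mapping that touches the presentation model.
presentation_mappings = [m for m in design_model
                         if "presentation" in (m.source_model, m.target_model)]
```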

In addition to editors that aid the declarative modelling of user-task models, domain models, and so forth, MOBI-D includes a design model editor and an intelligent mapping assistant. The design model editor enables the designer to inspect the entire contents of each basic model, and to set a mapping between their elements in a drag-and-drop fashion. The semantics of establishing a mapping obviously depends on the types of elements involved (e.g. mapping a domain object to a user task means that such an object is used in performing the selected task). Designers can annotate and further specify the nature of a mapping. Designers use the design model editor to set mappings at the same level of abstraction.


The design model provides no guidance on how to set mappings, so, for the more complex abstract-to-concrete mappings, the designer has the support of an intelligent assistant called TIMM (The Interface Model Mapper). The UI design process usually starts from requirements and system analysis, and thus from abstract models (user-task models, domain models and user models); for these, the designer can use the design model editor. When the time comes to choose a concrete model (e.g. which presentation model and dialogue model should be given), designers face numerous mapping options from the abstract level to the concrete level of the interface model. TIMM is supposed to assist designers in navigating the choice space of abstract-to-concrete mappings, pruning it into a manageable set which can then be explored to make the final decisions.

TIMM is able to assist in mapping domain elements to presentation elements, task elements to dialogue elements, and task elements to presentation elements. Domain-to-presentation mappings are realised in TIMM by using interactors: the designer selects a domain object, and then selects, from a list of abstract widgets (interactors), the one which is the most appropriate presentation element to display that particular domain object. TIMM uses knowledge-based design guidelines to examine the list of attributes of a domain object and build a list of potential interactors, arranged in order of priority given the current set of interface guidelines. These guidelines can also be changed, making it possible to narrow the design possibilities to a reasonably sized set.

For user-task model to dialogue model mappings and user-task model to presentation model mappings, TIMM works as follows: the user-task model is formed by a group of subtasks which can be specified as a sequence. This sequence can be enforced in several ways; in a sequence of n tasks, the interface may hide the areas for completion of subtasks two to n until the user completes the first one, leave them visible but disabled, or display an error message if a subtask is left out (see the sketch below). There are also multiple options as to how to split the user-task model (which is a tree structure with varying levels of depth) among the windows of a user interface. With TIMM, the designer can adjust the number of windows to match the depth level of the current user-task model tree.
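The three enforcement options can be read as a single policy applied to every not-yet-reached subtask. The sketch below is one possible reading of that idea, with hypothetical names.

```python
from enum import Enum

class Enforcement(Enum):
    HIDE = "hide"        # hide areas for subtasks 2..n until their turn
    DISABLE = "disable"  # keep them visible but greyed out
    WARN = "warn"        # allow interaction but show an error message

def subtask_state(index: int, current: int, policy: Enforcement) -> str:
    """UI state of subtask `index` when the user is on subtask `current`."""
    if index <= current:
        return "enabled"
    if policy is Enforcement.HIDE:
        return "hidden"
    if policy is Enforcement.DISABLE:
        return "disabled"
    return "enabled-with-error-on-use"  # WARN: using it out of order errors

# With a HIDE policy and the user on the first subtask,
# subtask 3 of the sequence is not shown at all:
assert subtask_state(3, 1, Enforcement.HIDE) == "hidden"
```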

After completing the work with TIMM, the designers move to another tool to make the final design decisions and set the concrete-to-concrete mappings between the design model and the presentation model.

MOBI-D aims to achieve generality of design in model-based systems by defining a method to explicitly define relationships between elements in a user interface. MOBI-D extends the definition of an interface model to include knowledge elements, called mappings, that encapsulate the knowledge about those relationships; an explicit representation of mappings enables the development of tools to define, inspect and analyse them.

5.4.3 The ERGOCONCEPTOR

The ERGOCONCEPTOR [26] is a model-based approach which aims to automatically generate bi-dimensional user interfaces based on a description of an industrial process. Based on previous research in this area [21], [35], the authors of [26] present a global methodology to automatically generate user interface specifications with design alternatives, using guidelines. The specifications can then be used by the designer to interactively generate the final user interface. Their work targets the automated generation of interfaces for control applications over industrial processes, so their target is quite similar to the problem of generating user interfaces for control systems.

They have identified the main requirements of the human operators who have to handle a control system on a daily basis. These operators have to solve problems involving hundreds or even thousands of variables, and they are generally far from the industrial installations.

The human tasks also require a high level of knowledge and know-how. To accomplish tasks of this complexity, the operators have to build and refresh, in real time, a mental representation of the running process, and to monitor the evolution of the system (variables and states) over time. So the UIs usually present a huge amount of information graphically, and they usually possess assistant agents to help the operator on each task (fault prediction, compensation, etc.).

They presented a global methodology for "design and evaluation" which is the foundation of the ERGOCONCEPTOR's design. One prerequisite of the ERGOCONCEPTOR, and probably of most model-based RUIP systems, is that some human-machine system analysis and modelling has taken place, like process modelling, task modelling and a list of user requirements or use cases. This is generally done by producing a database which collects all the data about the process (a minimal sketch of such a database follows the list):

1. Subsystems, variables, relations between sub-systems and variables, types of relationships.

2. The structure of the system.

3. Normal and abnormal functioning.

4. Task analysis and human requirements in case of abnormal situations.

5. All the different user profiles.

6. Changes that need to occur in the GUI depending on particular contexts and system states.
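As an illustration, such a process database could be modelled along the following lines; the types and field names are assumptions, not ERGOCONCEPTOR's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Variable:
    name: str
    unit: str                        # e.g. "degC", "rpm"
    normal_range: tuple[float, float]

@dataclass
class Subsystem:
    name: str
    variables: list[Variable]
    feeds: list[str] = field(default_factory=list)  # downstream subsystem names

@dataclass
class ProcessDatabase:
    subsystems: list[Subsystem]
    user_profiles: list[str]             # e.g. "operator", "supervisor"
    abnormal_procedures: dict[str, str]  # fault name -> required operator task
```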

With this pre-analysis assembled in a database, the ERGOCONCEPTOR can perform the semi-automated generation of a first version of the UI based on the information collected at the previous level. Then the designer moves to a specific editor to interactively generate a more concrete and almost final user interface. This UI is then subject to static validation based on ergonomic production rules, and to subsequent dynamic validation, on site or through simulations, which tests human interaction in normal and abnormal situations (usability tests).

Inside ERGOCONCEPTOR

The ERGOCONCEPTOR is made up of three main modules.

The first module is where the designer specifies and models the system, based on the previous analysis.

The modelling and specification in the first module follow three stages of the design and development of a UI, according to a model-based approach:

1. A physical model, which is a description of the industrial process with different abstraction levels - from high levels where production and safety goals are represented, to lower levels where sub-systems are described as sets of control variables, like temperatures or speeds.

2. A structural model, which describes sets of sub-systems and the data flow between them by input and output links.

3. A functional model, which describes the relations and influences between variables using causality networks (similar to Petri nets); a small sketch of such a network follows the list.
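For instance, a causality network in the functional model can be pictured as a directed graph over process variables. The sketch below is an illustrative assumption, not ERGOCONCEPTOR's representation; the variable names are made up.

```python
from collections import defaultdict

# Causality network: which variables influence which others.
# The edges are assumptions for illustration, not a real process model.
influences: dict[str, set[str]] = defaultdict(set)
influences["fuel_flow"].add("temperature")
influences["temperature"].add("pressure")

def affected_by(variable: str) -> set[str]:
    """All variables transitively influenced by `variable`."""
    seen: set[str] = set()
    stack = [variable]
    while stack:
        for nxt in influences[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(affected_by("fuel_flow"))  # the set {'temperature', 'pressure'}
```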

Based on these three stages, the second module generates a UI specification containing several design alternatives, from which the designer can then take the final decision and choose the appropriate option for the final UI. This UI specification is supposed to be the direct mapping of user requirements.

Additionally, the UI specification is validated by an ergonomic knowledge base that tries to rearrange the UI specification according to human-factor rules. After this validation, in the third module, the designer can finally edit the final graphical displays using a specific editor.

Rasmussen tells us that the only way to control systems that are complex in terms of large numbers of information sources and devices for basic control actions is to structure the situation and thereby transfer the problem to a level with less resolution. That thought takes us to another principle, applied so many times in software engineering, which tells us that every problem can be solved by introducing another level of indirection (except, of course, the problem of "too many levels of indirection").

The principles that support this architecture are based on the suggestions of Rasmussen and Lind.

They say that we can model systems and make the process description along two dimensions. The first one is the whole/part decomposition (or top-down), in which the system can be seen as a number of related components at several levels of physical aggregation. The second dimension is the degree to which the physical implementation of functions is maintained in the representation.

Lind's and Rasmussen's ideas were used in the ERGOCONCEPTOR to build a formal methodology for process description in the field of interface generation. As we can see in figure 5.4, the process descriptions ascend the "means/goals" abstraction hierarchy to allow different users to detect abnormal situations, evaluate the process state, make decisions and take actions. It also allows users to take shortcuts according to their different levels of knowledge. In that way, depending on the abstraction level, functional groups of variables or sub-systems may be created from other functional groups or from sub-sets of variables extracted from the lower abstraction level.

Figure 5.4: The "means/goals" abstraction hierarchy.

Synthesising, the principle of the "means/goals" abstraction hierarchy tells us that, to solve a problem at a certain level of abstraction, we go to an inferior level if the solution resides in a subsystem contained by the current subsystem, and we go to a superior level if the solution does not reside in any of the subsystems of the current subsystem.
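This rule can be expressed as a simple recursive search over the subsystem hierarchy. The sketch below is an illustrative reading of the principle, not ERGOCONCEPTOR's algorithm; the names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Subsystem:
    name: str
    parent: "Subsystem | None" = None
    children: list["Subsystem"] = field(default_factory=list)
    solves: set[str] = field(default_factory=set)  # problems solvable at this node

def locate_solution(node: Subsystem, problem: str,
                    came_from: "Subsystem | None" = None) -> "Subsystem | None":
    """Search inferior levels first; climb to the superior level otherwise."""
    if problem in node.solves:
        return node
    # go down: the solution may reside in a subsystem contained by this one
    for child in node.children:
        if child is not came_from:
            found = locate_solution(child, problem, came_from=node)
            if found is not None:
                return found
    # go up: the solution does not reside in any contained subsystem
    if node.parent is not None and node.parent is not came_from:
        return locate_solution(node.parent, problem, came_from=node)
    return None
```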

5.4.4 Multi-Modal Operator Interface

The Multi-Modal Operator Interface (MOI) [44] is a general concept of interfaces (bi-dimensional or even three-dimensional) dedicated to supervisory control systems (SCSs). The authors of [44] have considered several interaction and control modes, namely tele-manipulation, speech-based control, vision-based control, and model-based control (e.g. a CAD robot model). Moreover, with the MOI they have enabled multi-level control interaction. They have also presented an anticipatory system that protects against potentially catastrophic errors, produced either by the operator or by the machine. It has been acknowledged that a combination of several control modes in a complementary way may be required to efficiently perform complex tasks in control systems. This led to the design of integrated MOIs, which are user- and task-adaptable, and which are expected to transform SCSs into more adaptive systems by means of which operators can select the control modes they prefer or consider best suited for carrying out tasks in specific contexts.

However, offering various interaction and control modes increases the complexity of the system, which in turn may question the efficiency of such an integration. The design of such systems faces various practical issues in implementing the MOI: the implementation of control techniques; the computer graphics and information representation; managing the information from numerous sources; the meaningful representation of information; the allocation of functions between the human and the machine; on-line decision aids; and safety issues like the prevention of human and machine errors.
