
5.4.6 Multi-modality

Beyond applying the fundamental concepts of modality, they distinguish four types of multi-modality: exclusive, alternate, concurrent and synergistic. In exclusive multi-modality, the user switches independently between several modes of interaction during the session. In alternate multi-modality, the user switches between several modes of interaction in a logical, pre-determined sequence to complete a goal. In concurrent multi-modality, the user has several modes of interaction available that can be used concurrently and independently. Finally, in synergistic multi-modality, the user can use several modes of interaction that depend on each other in order to complete a given goal.

One of the stated user interaction paradigms is that, to enhance the user experience, the interface should offer the most pertinent interactions and visualisations that the operator needs on a given occasion. Interaction designers therefore have to choose from a broader variety of possible interaction styles.

Figure 5.7 shows a generic functional decomposition of ubiquitous augmented reality (UAR) user interfaces, which was the basis for the DWARF architecture.

Figure 5.7: DWARF architecture.

The input devices subsystem receives commands from the input devices used by the user. Each of these devices offers a specific input modality, which has to be evaluated by the multimodal interface. The user input is then translated into standard tokens by the media analysis subsystem, which is composed of different classes that deal with the individual input modalities of each device. The interaction management subsystem is then responsible for determining what should be presented as output to the user. Within this subsystem, the media fusion component takes the tokens from the various inputs and infers user intention from them.

In DWARF, a standardised format for input tokens was developed, which covers four different types: discrete values, text strings, analog values within a limited range, and analog values within an unlimited range. These standard tokens make commands interchangeable between input devices. As an example, an “ok” can come from pushing the button labelled “ok” or simply from saying “ok” to the speech device; the media fusion component receives this command as a single token and deals with it accordingly.
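As a rough illustration (the class names are hypothetical and do not reflect DWARF's actual implementation), the four token types and the device-independent “ok” command could be modelled like this:

```python
from dataclasses import dataclass
from typing import Optional, Union

# Hypothetical token classes mirroring the four DWARF token types
# (discrete values, text strings, bounded analog values, unbounded analog values).

@dataclass
class DiscreteToken:
    value: str                      # e.g. "ok", "cancel"

@dataclass
class TextToken:
    text: str                       # free-form recognised text

@dataclass
class AnalogToken:
    value: float
    low: Optional[float] = None     # range limits; None on both sides means unbounded
    high: Optional[float] = None

Token = Union[DiscreteToken, TextToken, AnalogToken]

# The same "ok" command arrives as an identical token regardless of the device:
ok_from_button = DiscreteToken("ok")     # emitted when the "ok" button is pushed
ok_from_speech = DiscreteToken("ok")     # emitted when the speech device recognises "ok"
assert ok_from_button == ok_from_speech  # media fusion sees one and the same token
```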

The continuous integration component combines tokens that can take real values within a certain range, while the discrete integration component handles devices such as speech recognition, which deliver only discrete values, such as the word that was recognised. Finally, the User Interface Controller (UIC) component selects the presentation medium and what to present through it.
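Continuing the hypothetical token classes from the previous sketch, the division of labour between continuous integration, discrete integration and the UIC might be pictured as follows; the component behaviour is invented for illustration and is not DWARF's actual logic:

```python
# Minimal sketch of the fusion stage feeding the UIC (simplified, hypothetical behaviour).

def continuous_integration(analog_tokens):
    """Combine analog tokens that take real values in a range, e.g. by averaging them."""
    values = [t.value for t in analog_tokens]
    if not values:
        return None
    return AnalogToken(sum(values) / len(values),
                       low=analog_tokens[0].low, high=analog_tokens[0].high)

def discrete_integration(discrete_tokens):
    """Pass on discrete tokens, such as recognised speech commands, most recent first."""
    return discrete_tokens[-1] if discrete_tokens else None

class UserInterfaceController:
    """Selects the presentation medium and what to present through it."""
    def present(self, token):
        if isinstance(token, DiscreteToken) and token.value == "ok":
            return ("sound", "confirmation.wav")   # e.g. acknowledge with a sound
        if isinstance(token, AnalogToken):
            return ("3d_viewer", token.value)      # e.g. update the 3D viewer
        return ("display", token)

uic = UserInterfaceController()
print(uic.present(discrete_integration([DiscreteToken("ok")])))
print(uic.present(continuous_integration([AnalogToken(0.2), AnalogToken(0.6)])))
```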

Inside DWARF

The DWARF framework is made up of a set of tools, each providing a specific functionality to the user. These tools can then be composed with one another to build more complex tools out of the simple toolset, providing higher-level functionality. In DWARF, the UIC handles all the functionality of discrete event handling and dialogue control. A UIC instance contains an internal model of the state of the user interface, in order to interpret context-sensitive commands from the user input correctly. An interesting point here is that DWARF uses Petri nets, written in XML, to specify the behaviour of a UIC instance. This was done because each interaction is realised as an individual Petri net, which makes it easy to specify multi-modal interactions. Another reason is that Petri nets can be very useful during development and also for demonstrations, because people can always see the current state of the UI immediately. User input is modelled as Petri net tokens that are placed onto Petri net places.

User input is evaluated by matching rules that check whether a transition is legal or not. If the transition is legal, it is triggered. Whenever a transition is triggered, events are sent to the media design components, which then add, remove or change the properties of parts of the UI.
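A toy interpreter along these lines, with invented place and transition names and plain Python standing in for DWARF's XML Petri net specification, could look like this:

```python
# Toy Petri net for one context-sensitive interaction: an object must be
# selected before a "delete" command is accepted. Names are illustrative only.

class PetriNetUI:
    def __init__(self):
        # places and their current token counts (the internal UI state)
        self.places = {"idle": 1, "object_selected": 0, "user_input": 0}
        # transitions: tokens required per input place -> places that receive tokens
        self.transitions = {
            "select": ({"idle": 1, "user_input": 1}, {"object_selected": 1}),
            "delete": ({"object_selected": 1, "user_input": 1}, {"idle": 1}),
        }

    def receive_input(self, command):
        """User input is modelled as a token placed onto a place."""
        self.places["user_input"] += 1
        self.fire(command)

    def fire(self, name):
        """Check the matching rule; if the transition is legal, trigger it."""
        inputs, outputs = self.transitions[name]
        if all(self.places[p] >= n for p, n in inputs.items()):
            for p, n in inputs.items():
                self.places[p] -= n
            for p, n in outputs.items():
                self.places[p] += n
            self.send_to_media_design(name)     # event to the media design components
        else:
            self.places["user_input"] -= 1      # discard input that does not match the state

    def send_to_media_design(self, event):
        print(f"media design components update the UI for event: {event}")

ui = PetriNetUI()
ui.receive_input("delete")   # ignored: nothing is selected yet
ui.receive_input("select")   # legal: idle -> object_selected
ui.receive_input("delete")   # legal: the selected object is removed
```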

Regarding output, there are two frequencies at which messages are passed between components. Discrete events are typically sent every few seconds, whereas continuous events, such as tracking data, are sent at very high frequencies.

Navigation issues are also relevant to DWARF: the three-dimensional visualisation of the virtual objects is relative to the current user position in the real world, which in turn corresponds to a specific position in the virtual world. Navigation is performed by a tracking component that can be connected directly to a viewer component, so no special user navigation commands are issued, making navigation natural and transparent to the user.

To provide the ability to add and remove I/O components conveniently at runtime, I/O components were designed to keep as little state as possible. However, this is not entirely possible for some I/O components, so a persistence layer is being built to pass state between I/O components.
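A minimal sketch of this idea, assuming hypothetical interfaces rather than DWARF's actual persistence layer, is an I/O component that externalises its small amount of state so that a replacement instance can pick it up at runtime:

```python
# Sketch of a near-stateless I/O component backed by a persistence layer,
# so components can be added and removed at runtime (hypothetical interfaces).

class PersistenceLayer:
    """Shared store through which I/O components pass state to their successors."""
    def __init__(self):
        self._store = {}
    def save(self, key, state):
        self._store[key] = state
    def load(self, key, default=None):
        return self._store.get(key, default)

class TouchGloveComponent:
    def __init__(self, persistence):
        self.persistence = persistence
        # restore whatever state a previous instance left behind
        self.calibration = persistence.load("touchglove.calibration", default=1.0)
    def shutdown(self):
        # externalise the remaining state before the component is removed
        self.persistence.save("touchglove.calibration", self.calibration)

layer = PersistenceLayer()
glove = TouchGloveComponent(layer)
glove.calibration = 0.8
glove.shutdown()                       # component removed at runtime ...
glove = TouchGloveComponent(layer)     # ... and a new instance picks up the state
print(glove.calibration)               # 0.8
```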

Some I/O components used:

1. Speech recognition: a command language configured via a context-free grammar (see the sketch after this list).

2. TouchGlove: emits continuous and discrete data. It can be directly connected to the viewer.

3. Collision detection: detects collisions between real objects, between virtual and real objects, and between virtual objects. Detecting collisions between virtual and real objects is important to detect the limits of user interaction with virtual boundaries.

4. Sound Player: maps each command to a sound file that is to be played.

5. 3D Viewer: a component they found difficult to design and implement.

6. Several viewing modes (video background, stereo modes for different stereoscopic displays).

7. They used the open-source projects Coin3D (CoinDesigner) and OpenInventor.
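As an illustration of the first item, a command language defined by a small context-free grammar could be sketched as follows; the grammar rules and the matcher are invented for illustration and are not DWARF's actual configuration:

```python
# Hypothetical command grammar: non-terminals map to alternative productions,
# anything not in GRAMMAR is a terminal word that must be spoken literally.
GRAMMAR = {
    "command": [["verb", "object"], ["confirm"]],
    "verb":    [["select"], ["move"], ["delete"]],
    "object":  [["part"], ["tool"]],
    "confirm": [["ok"], ["cancel"]],
}

def expand(symbol, words):
    """Return every possible remainder of `words` after matching `symbol`."""
    if symbol not in GRAMMAR:                       # terminal symbol: match one word
        return [words[1:]] if words and words[0] == symbol else []
    remainders = []
    for production in GRAMMAR[symbol]:              # try each alternative production
        partial = [words]
        for sym in production:
            partial = [rest for w in partial for rest in expand(sym, w)]
        remainders.extend(partial)
    return remainders

def is_command(utterance):
    """True if the recognised utterance is a complete command of the grammar."""
    return [] in expand("command", utterance.split())

print(is_command("select part"))   # True
print(is_command("ok"))            # True
print(is_command("part select"))   # False
```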
