
Multimodal Feedback and Interaction Techniques for Physically Based and Large Virtual Environments



HAL Id: tel-00652077

https://tel.archives-ouvertes.fr/tel-00652077

Submitted on 14 Dec 2011



to obtain the degree of
DOCTOR OF INSA DE RENNES
Specialty: Computer Science

Gabriel CIRIO

DOCTORAL SCHOOL: MATISSE
LABORATORY: INRIA Rennes - IRISA

Multimodal Feedback and Interaction Techniques for Physically Based and Large Virtual Environments

Thesis to be defended on 2 December 2011 before the jury composed of:

Bruno Arnaldi
Professor, INSA de Rennes / President
Marie-Paule Cani
Professor, Institut Polytechnique de Grenoble / Reviewer
Ming C. Lin
Professor, University of North Carolina at Chapel Hill / Reviewer
George Drettakis
Research Director, INRIA Sophia-Antipolis / Examiner
Anthony Steed
Professor, University College London / Examiner
Anatole Lécuyer
Research Director, INRIA Rennes / Thesis advisor
Maud Marchal


Contents i

List of figures vii

List of tables xi

Introduction 1

I Haptic and Multimodal Interaction with Physically Based Complex Media 11

1 Related Work: Physically Based Haptic Interaction with Complex Media 13

1.1 Fundamentals of Haptic Interaction . . . . 14

1.1.1 Fundamentals of the Human Haptic System . . . . 16

1.1.1.1 Tactile receptors . . . . 16

1.1.1.2 Proprioceptive receptors . . . . 17

1.1.1.3 Force control . . . . 17

1.1.2 Haptic Devices . . . . 17

1.1.2.1 Tactile interfaces . . . . 18

1.1.2.2 Kinesthetic interfaces . . . . 18

1.1.3 Haptic Rendering . . . . 19

1.1.3.1 Closed-loop rendering . . . . 20

1.1.3.2 Open-loop rendering . . . . 22

1.1.3.3 Simulation loops . . . . 23

1.2 Models for Physically Based Haptic Interaction . . . . 23

1.2.1 Rigid Bodies . . . . 24

1.2.1.1 Kinesthetic rendering . . . . 25

1.2.1.2 Vibrotactile rendering . . . . 28

1.2.2 Deformable Bodies . . . . 29

1.2.2.1 Kinesthetic rendering . . . . 30

1.2.2.2 Vibrotactile rendering . . . . 35

1.2.3 Fluids . . . . 37

1.3 Combining Haptics with other modalities . . . . 38

1.3.1 Low-level integration . . . . 39

1.3.2 High-level integration . . . . 40

1.4 Conclusion . . . . 43

2 Six Degrees-of-Freedom Haptic Interaction with Fluids 45

2.1 Smoothed-Particle Hydrodynamics Fluid Simulation . . . . 46

2.1.1 SPH Discretization . . . . 47


2.3.3 6DoF Haptic Coupling Scheme . . . . 54

2.3.4 Virtual Coupling . . . . 55

2.4 Visual Fluid Rendering . . . . 55

2.4.1 Computing Per-Pixel Fluid Data . . . . 55

2.4.2 Fluid Compositing . . . . 56

2.5 Evaluation . . . . 56

2.5.1 Hardware Setup . . . . 56

2.5.2 Computation Time . . . . 57

2.5.2.1 Fluid Simulation Performance . . . . 57

2.5.2.2 Unified Particle Model Performance . . . . 57

2.5.3 Graphic Rendering . . . . 58

2.5.4 Example Scenarios . . . . 58

2.5.4.1 6DoF Interaction . . . . 59

2.5.4.2 Container Interaction . . . . 59

2.5.4.3 Variable Viscosity . . . . 59

2.5.4.4 Bimanual coupling on the same rigid body . . . . 60

2.5.5 A complete use-case . . . . 61

2.6 Discussion . . . . 61

2.7 Conclusion . . . . 63

3 Six Degrees-of-Freedom Haptic Interaction with the Different States of Matter 65

3.1 SPH Multistate Simulation . . . . 66

3.1.1 SPH Deformable Body Model . . . . 67

3.1.2 Rigid Bodies, Fluids and Interaction Forces . . . . 68

3.1.3 Changes of State . . . . 68

3.1.4 Integration and Simulation Loop . . . . 69

3.2 6DoF Multistate Haptic Rendering . . . . 69

3.2.1 Rigid proxy . . . . 69

3.2.2 Deformable proxy . . . . 71

3.2.3 Proxy inducing state changes . . . . 71

3.2.4 Friction Forces . . . . 71

3.2.4.1 Contact point and surface detection . . . . 72

3.2.4.2 Friction force computation . . . . 72

3.3 Dual GPU Implementation . . . . 73

3.4 Evaluation . . . . 74

3.4.1 Haptic Time Step . . . . 75

3.4.2 Haptic Feedback . . . . 75

3.4.3 Perceptual Evaluation . . . . 76


3.4.3.1 Population . . . . 76

3.4.3.2 Experimental Apparatus . . . . 76

3.4.3.3 Procedure . . . . 76

3.4.3.4 Experimental Plan . . . . 77

3.4.3.5 Results . . . . 77

3.4.3.6 Subjective Questionnaire . . . . 78

3.5 Discussion . . . . 78

3.6 Conclusion . . . . 80

4 Vibrotactile Rendering of Fluids 83

4.1 Overview . . . . 84

4.2 Previous Approaches for Real-Time Fluid Sound Synthesis . . . . 84

4.3 Enhancing a Real-time Fluid Simulation with Bubbles . . . . 85

4.4 Vibrotactile Model . . . . 86

4.4.1 Initial Impact . . . . 86

4.4.1.1 Synthesis . . . . 86

4.4.1.2 Control . . . . 87

4.4.2 Harmonic Bubbles . . . . 87

4.4.2.1 Synthesis . . . . 87

4.4.2.2 Control . . . . 87

4.4.3 Main Cavity Oscillation . . . . 88

4.4.3.1 Synthesis . . . . 88

4.4.3.2 Control . . . . 88

4.5 Vibrotactile Rendering . . . . 89

4.6 Extension to Other Modalities . . . . 90

4.7 User Feedback . . . . 91

4.7.1 Scenario . . . . 91

4.7.2 Discussion . . . . 91

4.8 Conclusion . . . . 92

II Infinite Immersive Navigation Based on Natural Walking in Restricted Workspaces 93

5 Related Work: 3D Walking Interfaces for the Navigation of Large Virtual Environments within Restricted Workspaces 95

5.1 Locomotion Interfaces . . . . 96

5.1.1 Foot-based Devices . . . . 96

5.1.1.1 Foot-wearables . . . . 97

5.1.1.2 Foot platforms . . . . 97

5.1.2 Recentering Floors . . . . 98

5.1.2.1 Treadmills . . . . 98

5.1.2.2 Tiles . . . . 101

5.1.2.3 Spherical environments . . . . 101

5.2 3D Navigation techniques . . . . 102

5.2.1 Walking in place . . . . 103

5.2.2 Natural Walking Metaphors . . . . 103

5.2.3 Redirection Techniques . . . . 105

5.2.3.1 Redirected Walking . . . . 105

5.2.3.2 Motion Compression . . . . 107


6.3 Evaluation . . . . 116

6.3.1 Experiment #1: Pointing Task . . . . 116

6.3.1.1 Description . . . . 117

6.3.1.2 Results . . . . 118

6.3.2 Experiment #2: Path Following Task . . . . 119

6.3.2.1 Description . . . . 119

6.3.2.2 Results . . . . 121

6.3.3 Subjective Questionnaire . . . . 122

6.4 General Discussion . . . . 122

6.5 Conclusion . . . . 124

7 Infinite Navigation in Large Virtual Environments within Restricted Translation and Rotation Workspaces 127

7.1 Three Novel Navigation Techniques . . . . 128

7.1.1 Motivation for New Navigation Metaphors . . . . 129

7.1.2 General Terminology and Quantities . . . . 129

7.1.3 Constrained Wand and Signs . . . . 130

7.1.4 Extended Magic Barrier Tape . . . . 131

7.1.5 Virtual Companion . . . . 132

7.2 Evaluation . . . . 133

7.2.1 Experimental Conditions . . . . 134

7.2.1.1 Population . . . . 134

7.2.1.2 Experimental Apparatus . . . . 134

7.2.1.3 Procedure . . . . 135

7.2.1.4 Collected data . . . . 135

7.2.2 Task #1: Pointing Task . . . . 135

7.2.3 Task #2: Path Following Task . . . . 136

7.3 Results . . . . 136

7.3.1 Recorded time and tracking data . . . . 136

7.3.1.1 Time to complete the task . . . . 136

7.3.1.2 Physical walking distance . . . . 137

7.3.1.3 Time spent in reaction and danger zones . . . . 137

7.3.1.4 Deviation from the ideal path . . . . 138

7.3.1.5 VE differences . . . . 138

7.3.2 Questionnaire . . . . 138

7.4 Discussion . . . . 139

7.5 Conclusion . . . . 141

Conclusion 143


A Appendix: Fundamentals of Physically Based Simulation Models for Haptic Interaction 151

A.1 Rigid bodies . . . . 151

A.2 Deformable bodies . . . . 153

A.2.1 Continuum mechanics . . . . 153

A.2.2 The Finite Element Method . . . . 154

A.2.2.1 Linear FEM . . . . 156

A.2.2.2 Corotated formulation . . . . 156

A.2.3 Mass-spring systems . . . . 156

A.3 Fluids . . . . 157

A.3.1 Navier-Stokes equations . . . . 157

A.3.2 Eulerian simulation . . . . 158

A.3.2.1 Extensions . . . . 160

A.4 Time integration schemes . . . . 161

A.4.1 Explicit integration . . . . 162

A.4.1.1 The explicit Euler method . . . . 162

A.4.1.2 The Runge-Kutta method . . . . 162

A.4.2 The implicit Euler method . . . . 163

B Appendix: Extended Abstract in French (Résumé Long en Français) 165

B.1 Part 1: Haptic and Multimodal Interaction with Physically Based Complex Media . . . . 169

B.1.1 6DoF haptic interaction with fluids . . . . 169

B.1.1.1 Physically based fluid simulation . . . . 169

B.1.1.2 Rigid body simulation . . . . 169

B.1.1.3 6DoF haptic rendering . . . . 170

B.1.1.4 Test scenarios . . . . 171

B.1.2 6DoF haptic interaction with the different states of matter . . . . 171

B.1.2.1 Deformable body simulation . . . . 172

B.1.2.2 6DoF multistate haptic rendering . . . . 172

B.1.2.3 Evaluation . . . . 172

B.1.3 Vibrotactile and multimodal interaction with fluids . . . . 173

B.1.3.1 SPH bubble simulation . . . . 174

B.1.3.2 Vibrotactile model . . . . 174

B.1.3.3 Vibrotactile and multimodal rendering . . . . 175

B.2 Part 2: Infinite Immersive Navigation Based on Walking in Restricted Workspaces . . . . 176

B.2.1 Infinite navigation of VE within workspaces restricted in translation . . . . 176

B.2.1.1 The "Magic Barrier Tape" . . . . 176

B.2.1.2 Evaluation . . . . 177

B.2.2 Infinite navigation of VE within workspaces restricted in translation and rotation . . . . 177

B.2.2.1 Three new navigation techniques . . . . 177

B.2.2.2 Evaluation . . . . 178

B.3 Conclusion . . . . 179

Publications 181

Bibliography 181


1 Concept: a person walking on a beach . . . . 2

2 Objectives of Axis 1 . . . . 5

3 Objectives of Axis 2 . . . . 6

1.1 Architecture of a VR application . . . . 14

1.2 Human and machine haptic loops . . . . 15

1.3 Examples of tactile interfaces . . . . 19

1.4 Examples of kinesthetic interfaces . . . . 20

1.5 Closed-loop rendering . . . . 20

1.6 Virtual coupling mechanism . . . . 22

1.7 Open-loop rendering . . . . 22

1.8 6DoF God-object technique . . . . 26

1.9 Voxel-based rendering discretizations . . . . 27

1.10 Example of voxel-based rendering . . . . 28

1.11 Vibrotactile rendering of rigid contacts . . . . 29

1.12 Spatialized haptic rendering . . . . 30

1.13 Non-linear deformations using mass-spring systems . . . . 31

1.14 Voxel-based rendering of deformable bodies . . . . 33

1.15 Example of voxel-based rendering of highly detailed deformable bodies . . . . . 33

1.16 Linear deformations using an LCP formulation . . . . 34

1.17 Non-linear deformations using an LCP formulation . . . . 34

1.18 Meshless rendering . . . . 35

1.19 Fracture mechanics for the vibrotactile rendering of granular materials . . . . . 36

1.20 Examples of vibrotactile rendering of granular materials . . . . 36

1.21 Precomputed haptic interaction with fluids . . . . 38

1.22 Physically based haptic interaction with viscous fluid . . . . 39

1.23 Physically based haptic interaction with smoke . . . . 39

1.24 The frozen pond . . . . 40

1.25 High-level multimodal integration for virtual prototyping . . . . 41

1.26 Elaborate high-level multimodal integration . . . . 42

1.27 The Munich Knee Joint Simulation project . . . . 43

2.1 Smoothing Volume and SPH haptic forces . . . . 48

2.2 Overview of the 6DoF haptic rendering of fluids . . . . 51

2.3 Rigid body particle sampling . . . . 52

2.4 Illustration of the computation of forces acting on a rigid body . . . . 53

2.5 The bilateral sampling kernel . . . . 55

2.6 Performance evaluation of our fluid simulation algorithms . . . . 57

2.7 Evaluation of our graphic rendering method . . . . 58

2.8 6DoF haptic interaction scenario . . . . 59

2.9 Container interaction scenario . . . . 60


3.7 Experimental apparatus of the multistate evaluation . . . . 77

3.8 The three different states of matter of the multistate evaluation . . . . 77

3.9 Results for subjective ratings in the multistate evaluation . . . . 79

3.10 Scenario illustrating a cooking simulator . . . . 79

3.11 Scenario illustrating the changes of state . . . . 80

4.1 Overview of our vibrotactile fluid approach . . . . 85

4.2 The three components of our vibrotactile model . . . . 86

4.3 Interaction examples for the vibrotactile rendering of fluids . . . . 89

4.4 Vibrotactile signal generated with our model . . . . 90

5.1 Foot-wearable devices . . . . 97

5.2 Foot platforms . . . . 98

5.3 1DoF treadmills . . . . 100

5.4 1DoF treadmills for dynamic terrain . . . . 100

5.5 Omni-directional treadmills . . . . 101

5.6 The CirculaFloor . . . . 101

5.7 Spherical environments . . . . 102

5.8 The Step WIM . . . . 104

5.9 The VE used to test the Redirected Walking technique . . . . 106

5.10 The VE used to test passive haptics with the Redirected Walking technique . . 107

5.11 Motion Compression real and virtual paths . . . . 108

5.12 Another example of Motion Compression . . . . 108

5.13 Sequence of changes exploiting change blindness . . . . 109

6.1 The Magic Barrier Tape . . . . 112

6.2 The three Magic Barrier Tape visual cues . . . . 114

6.3 The Gaussian deformation of the Magic Barrier Tape . . . . 115

6.4 The visual cues from the extended resetting techniques . . . . 116

6.5 A subject wearing the tracking equipment . . . . 117

6.6 VE used in Experiment #1 of the Magic Barrier Tape . . . . 118

6.7 Results of Experiment #1 of the Magic Barrier Tape . . . . 119

6.8 Two paths used in Experiment #2 of the Magic Barrier Tape . . . . 120

6.9 The VE used in the Experiment #2 of the Magic Barrier Tape . . . . 120

6.10 Results of Experiment #2 of the Magic Barrier Tape . . . . 122

6.11 Questionnaire results of the Magic Barrier Tape . . . . 123

7.1 Screenshots illustrating the three techniques . . . . 128

7.2 Regions and boundaries for translation and rotation . . . . 130


7.3 The constrained wand . . . . 131

7.4 The extended MBT . . . . 132

7.5 The Virtual Companion . . . . 133

7.6 The gesture set for controlling the Virtual Companion . . . . 133

7.7 The simple VE used for the first block of tests . . . . 134

7.8 Recorded trajectories of a participant . . . . 136

7.9 Equivalence groups for the recorded time and tracking data . . . . 137

7.10 Questionnaire results . . . . 139


2.1 Comparison of our haptic fluid approach with previous work . . . . 62

3.1 Probabilities of correct answers for each state of matter . . . . 78

7.1 Summary overview of the main advantages and drawbacks of our techniques. . 140


This Ph.D. manuscript, entitled “Multimodal Feedback and Interaction Techniques for Physically Based and Large Virtual Environments”, presents research conducted in the context of Virtual Reality (VR). VR technologies aim at simulating digital environments with which users can interact and, as a result, perceive through different modalities the effects of their actions in real time. Burdea and Coiffet [1] define VR as “a high-end user-computer interface that involves real-time simulation and interactions through multiple sensory channels. These sensory modalities are visual, tactile, auditory, smell, and taste”.

VR has the inherent capacity to realistically create and simulate specific virtual environments (VE), even before these environments are actually used or even built in real life. Furthermore, VR is not limited to copying and imitating real world scenarios and behaviors: it allows the creation and simulation of any sort of VE, limited only by the imagination of the designer and the capabilities of the system. As a consequence, VR can be found in different applied domains outside of the many research labs devoted to this field.

In the automotive industry, for instance, a vehicle in the design stage can be displayed through VR, allowing the identification of potential design problems without producing an expensive physical mock-up. Virtual assembly and maintenance procedures can help validate or modify real procedures [2], while workforces can be trained through VR scenarios, greatly reducing training costs and risks [3]. VR has also drawn the attention of the medical field, with multimodal simulators for the training of surgeons [4, 5], dentists [6] and orthopedists [7, 8]. Patients suffering from phobias can be immersed into VE [9] in order to treat the pathology in a completely safe and controlled environment. Other application areas of VR include the fields of entertainment (video games and motion simulators), education (enhanced visualization, distant learning, virtual museums, sports) and design (CAD, architectural mockups, virtual art).

Unfortunately, in the current state of the art there are many limitations in terms of interaction possibilities in VR, both in available hardware and software components. Many of these limitations arise when interacting with complex VE. For instance, it is quite challenging to simulate natural phenomena under VR constraints, namely in real-time and with high quality feedback. It is also very hard to design a device allowing a user to walk without real forward motion while providing an accurate restitution of walking sensations.

The main limiting factors are the available computational power, the limited technology, and the inherently complex nature of natural phenomena. In fact, most situations that we experience in our (real) life cannot be simulated in a faithful manner. For instance, it is not yet possible to faithfully simulate the multimodal exploration of natural scenes such as the one illustrated in Figure 1, i.e. walking on a beach. Water motion and sand compliance are complex phenomena, and although there are physically based models to simulate them, real-time constraints and multimodal feedback present a considerable challenge. Walking


Figure 1 – A person walking on a beach, interacting with water, sand and stones, while traveling along the beach shore. This scenario cannot be efficiently simulated in VR nowadays, due to the limited availability of multistate simulations and multimodal feedback models, and due to the boundaries of the real workspace.

Research Context

Interaction contributes significantly to making VR such a powerful and immersive tool. The more believable the interaction and its feedback, the more the user unconsciously shifts his reality from the real to the virtual environment, developing a true sense of presence 1. Taking into account the capabilities of today’s and tomorrow’s VR systems, we define the research context of this thesis based on three fundamental conditions for VR interaction 2. This context will allow us to highlight the main weaknesses of current approaches, and to point out which aspects have remained largely unexplored. These issues will lead us to define our different research axes and, most importantly, drive our work.

1. Presence can be defined as the illusion of being located inside the VE depicted by the VR system: the “sense of being there” [10].

2. Not following these conditions does not necessarily mean that VR interaction is not possible. Many uses of VR do not require the fulfillment of these three conditions.

Interaction in VR should be multimodal. In real life, we interact with our surrounding environment through our five senses. Each sense provides complementary cues for a wider and more accurate perception. Ideally, it should be the same in a VR simulation. It is safe to state that, for most tasks, humans rely on vision, hearing and touch. Thus, we believe these three modalities should be simulated and rendered to the user in immersive VR applications.

Interaction in VR should be physically based. Users expect the VE to behave like the real world, except in very specific scenarios. Objects are supposed to fall, collide, deform and flow as usual, and should respond to user actions with realistic behavior. Thus, they have to follow the different laws of physics, at least from a macroscopic point of view. Doing this geometrically or with predefined animation keys only works for specific, precomputed, and therefore limited scenarios. For full interaction possibilities with different VE, the behavior has to be described by physically based models of the different objects populating the scene.

Interaction in VR should allow complex VE. By complex, we refer to a higher demand in the characteristics of the VE and/or its objects. There are many ways in which a VE can be complex:

in size: large objects, large scenes

in number: high polygon count, high object count

in shape: small-scale details, convex objects, landscape with relief

in behavior: non-rigid media (deformable bodies, fluids), large dynamic com- ponents (speed, force)

Users should be able to interact with complex environments, as they actually represent most real-life scenarios. However, since they are labeled as complex, they inherently pose computation, modeling or interface challenges.

When enforcing these three conditions in a VR simulation, we are often confronted with the issues discussed earlier, namely limited available computational power, limited technology, and the inherently complex nature of physical phenomena. Thus, in this Ph.D. thesis, we focus on enhancing the multimodal and physically based interaction with complex VE.

In order to address this problem, we adopt a subdivision approach by breaking it down into two subproblems, following the main categories of VR interaction techniques.

As defined by Hinckley et al. [11], “an interaction technique is the fusion of input and output, consisting of all software and hardware elements, that provides a way for the user to accomplish a task”. From Bowman et al.’s [12] seminal taxonomy of VR tasks, we focus on two main categories that can be identified within VR interaction techniques 3:

the manipulation category (and the very related selection category), regrouping the interaction techniques allowing the user to interact with the objects constituting the VE,

the navigation category, regrouping the interaction techniques allowing the user to move within the VE.

These categories represent the tasks that could be performed by a user in a real environment.

Objectives

In this Ph.D. thesis we focused on two research axes, corresponding to the multimodal and physically based interaction with complex VE within both fundamental interaction

3. We do not consider system control and symbolic input tasks, since these are a response to user interface issues and do not arise from real world tasks transposed to VR.


environment. It is in fact quite likely that a higher sense of presence could be generated in a VR simulation by adding a simple force feedback interface with low resolution force restitution to an existing visual and auditory VR setup, than by improving one particular modality such as the visual display alone [13]. Besides, the addition of force feedback to VR simulations has been shown to improve user immersion and performance [14, 15] when accomplishing some specific tasks in the VE. Vibrotactile and acoustic feedback are also attractive additions to VR simulations, since, unlike other modalities, they do not require expensive robotic devices: there is a wide availability of off-the-shelf and easily built vibrotactile hardware (actuated floors [16], shoes [17], and hand-held transducers) and acoustic devices (speakers).

First objective: Multimodal manipulation of fluids

Most current multimodal simulations involve only rigid bodies, since they follow simple dynamics and represent many of the objects that surround us. Complex VE, however, can contain non-rigid media, exhibiting many more degrees of freedom and following more complex phenomena. Different physically based approaches have been developed for the real-time simulation of non-rigid media such as elastic bodies and fluids. However, the multimodal interaction with these media has room for improvement, and is limited nowadays by the available computational power. Multimodal interaction with deformable bodies has received some attention in the context of force feedback. Surprisingly, multimodal interaction with fluids has been scarcely studied. Yet we often interact with fluids in our daily life, either through tools, such as when holding a glass of water or stepping on a puddle with our shoes, or directly with our body when we swim, wash our hands or walk on a beach shore. Fluids are also found in many applications such as industrial or medical manipulations, involving for instance blood flow and natural liquids. Water, an example of fluid, is the most manipulated material in industry [18]. Enabling multimodal feedback in the interaction with fluids, besides allowing more realistic simulations, would enable a wide range of novel simulation scenarios and applications.

Second objective: A unified approach for the manipulation of media with force feedback

Complex VE with fluid inside are usually also populated by solid (rigid and deformable) media. Generally speaking, complex VE usually involve simulating several types of media at the same time. However, simulating fluid, deformable and rigid media in the same simulation with haptic feedback poses several challenges. What should be a fairly common scenario implies the simulation of heterogeneous media through different models specific to each medium. Their interactions have to be computed, thus requiring coupling mechanisms between each pair of media. And, most importantly, the user needs to interact with the VE and receive force feedback, thus requiring haptic coupling mechanisms for each medium present in the VE. Taking these constraints into account increases the complexity and the computational cost of an already highly complex and time-consuming simulation. Previous haptic rendering techniques focus on a single medium, and existing multistate approaches are quite limited [19]. Haptic interaction with different physically based media would be more efficient and seamless for users, designers and developers through a unified approach for simulation and rendering.

Figure 2 illustrates the objectives of Axis 1: the multimodal manipulation of complex VE with rigid, deformable and fluid media through different modalities (kinesthetic, tactile, acoustic and visual).

Figure 2 – Objectives of Axis 1: multimodal manipulation of complex VE. Modalities: K (kinesthetic), T (tactile), A (acoustic), V (visual). Modalities that have been largely addressed in previous work are shown in green. Modalities that have been scarcely studied or not studied at all are shown in red. Those with a black frame are addressed in this manuscript.

Axis 2 - Multimodal navigation of physically based complex VE

There is a wide range of devices and metaphors for the navigation of VE. Following the fundamental conditions of our research context, we require a navigation interface providing multimodal feedback. Instead of relying on computationally expensive artificially generated sensory feedback, we simply focus on natural walking as the core of the navigation interface. Indeed, using natural walking in a VE inherently matches vestibular and proprioceptive cues from the real movement with the visual feedback from the virtual movement. Natural walking also naturally produces vibrotactile and acoustic feedback when stepping on the real ground. Thus, natural walking in a VE produces a perfectly accurate multi-sensory perception of navigation, hard to match with simulated approaches. It also provides the most natural, intuitive and direct way of controlling one’s position. In addition, several studies have shown the benefits of using natural walking for the navigation


tually possible. Although the design of a large environment is not necessarily complex (it actually depends on how it is populated), navigating it does pose many challenges: the VE might be very large or even infinite, but the real physical workspace is not. In most cases, the space in which the user moves is significantly smaller than the simulated environment.

This is the case for CAVE-like setups where the 4 screens represent the boundaries of the workspace, but it also applies to HMD setups since workspaces are bounded by the range of tracking systems. There are also boundaries rotation-wise in CAVE-like setups, where one screen (the “back” screen) is missing. In any case, walking users eventually reach the boundaries of the workspace, leading to breaks of immersion, blocking situations and safety problems. Providing an immersive and safe walking metaphor for navigating in infinite VE within the confines of restricted workspaces is a challenging task. It would provide a solution to many training and entertainment VR simulations requiring large scenes.

Figure 3 illustrates the objective of Axis 2: the multimodal navigation of complex VE, when the VE is larger than the available physical workspace.

Figure 3 – Objectives of Axis 2: multimodal navigation of complex VE. The black square represents the boundaries of the workspace. Multimodal interaction can happen in the VE inside the boundaries, but raises several issues when moving outside of the workspace, in translation and rotation.

Approach and Contributions

This manuscript presents the research carried out in order to address the three objectives mentioned above. It is naturally divided into two parts, each following a research axis:

Part I describes novel techniques for haptic and multimodal interaction with physically based and complex media. Haptic feedback includes kinesthetic (force) feedback as well as vibrotactile feedback. These modalities are combined with visual and, to some extent, acoustic feedback. The focus is given to fluids, since these are scarcely explored in previous work while being widely present in real life and in different VR application fields. Rigid and deformable bodies are also considered when proposing a unified approach for the kinesthetic interaction with different media.

Part II describes novel metaphors for infinite immersive navigation based on natural walking in restricted workspaces. Both translational boundaries (screens, tracking range) and rotational boundaries (missing screens) are considered in the design of these metaphors.

More details are given in the remainder of this chapter.

Part 1 - Haptic and Multimodal Interaction with Physically Based Complex Media

We first propose a background overview of haptic and multimodal interaction with physically based complex media in Chapter 1. We begin by providing an introduction to the fundamentals of haptic interaction, namely the human haptic system, haptic devices and the main concepts of haptic rendering. Then, we focus on the existing physically based models for haptic interaction for the different media of the computer graphics field: rigid bodies, deformable bodies and fluids. Finally, we provide an overview of existing multimodal approaches in different application areas involving the combination of visual and haptic feedback with other modalities.

The haptic interactive simulation of fluids is particularly challenging, especially to achieve realistic and stable force-feedback with high update rates using physically based models. To simulate interactions between fluids and rigid bodies with haptic rendering, previous studies have proposed precomputed ad-hoc algorithms [25], approaches featuring only 3 Degrees of Freedom (DoF) and non-viscous fluids [26], or implementations restricted to simple object shapes and small amounts of fluid [27]. Thus, as of today, there is a lack of models and rendering techniques handling complex 6DoF haptic interactions with viscous fluids in real-time.

In Chapter 2 we propose a novel approach that allows real-time 6 Degrees of Freedom haptic interaction with fluids of variable viscosity. Our haptic rendering technique is based on the Smoothed-Particle Hydrodynamics [28, 29] model, and uses a new haptic coupling scheme and a unified particle model allowing the use of arbitrary-shaped rigid bodies. Particularly, fluid containers can be created to hold fluid and hence transmit to the user force feedback coming from fluid stirring, pouring, shaking and scooping, to name a few. In addition, we adapted an existing visual rendering algorithm to meet the frame rate requirements of the haptic algorithms. We evaluate and illustrate the main features of our approach through different scenarios, highlighting the 6DoF haptic feedback and the use of containers.
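For background, the standard SPH formulation from the literature interpolates any field quantity from neighboring particles through a smoothing kernel; this is a generic reminder rather than the specific discretization detailed in Chapter 2. With $m_j$ and $\rho_j$ the mass and density of particle $j$, and $W$ a smoothing kernel of support radius $h$:

\[ A_i = \sum_j m_j \,\frac{A_j}{\rho_j}\, W(\mathbf{x}_i - \mathbf{x}_j, h), \qquad \rho_i = \sum_j m_j \, W(\mathbf{x}_i - \mathbf{x}_j, h) \]

Pressure and viscosity forces follow from kernel-weighted sums of the gradient and Laplacian of these fields; force and torque feedback on a rigid proxy can then be accumulated from the same per-particle terms.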

When populating the VE with multiple states of matter (fluid, deformable and rigid media), the complexity of the simulation increases significantly. Different simulation models


different simulation algorithms and their coupling, and uses a single haptic rendering mechanism. The approach is enhanced with state change mechanisms, friction forces and multistate proxies. Haptic rates are achieved through a dual GPU implementation.

The approach is evaluated by assessing the capability of users to recognize the different states of matter they interact with.

Force and visual feedback are not the only important modalities when interacting with the environment. Vibrotactile and acoustic feedback provide complementary cues for a better perception of materials, forces and distances, among others. The availability of cheap transducers for these modalities makes them an interesting addition to VR applications. Indeed, many common materials with which we interact on a daily basis can be simulated and displayed through the vibrotactile and acoustic modalities in real-time. Examples include solids such as wood and metal [30] and aggregates such as gravel and snow [31, 32]. However, materials such as water and other fluids have again been largely ignored in this context. Compelling multimodal VR simulations such as walking through puddles or splashing on the beach are very limited without these additional cues, but would be of great interest in the entertainment field.

To this end, in Chapter 4 we introduce the first approach for the vibrotactile rendering of fluids. Similar to other rendering approaches for virtual materials [31, 32, 30], we leverage the fact that vibrotactile and acoustic phenomena share a common physical source. Hence, we base the design of our vibrotactile model on prior knowledge of fluid sound rendering. Since fluid sound is generated mainly through bubble and air cavity resonance, we enhanced our fluid simulator presented in Chapter 2 with real-time bubble creation and solid-fluid impact mechanisms. We can synthesize vibrotactile feedback, and to some extent acoustic feedback, from interaction and simulation events.
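As background on this common physical source, bubble-based sound models in the literature typically build on the Minnaert resonance frequency of a spherical bubble of radius $r_0$; the formula below is the classic result, recalled here for reference rather than as the exact model of this chapter:

\[ f_0 = \frac{1}{2\pi r_0} \sqrt{\frac{3\gamma p_0}{\rho}} \]

where $\gamma$ is the heat capacity ratio of the gas, $p_0$ the ambient pressure and $\rho$ the liquid density. For an air bubble in water this gives roughly $f_0 \approx 3.3/r_0$ Hz (with $r_0$ in meters), i.e. a few kilohertz for millimeter-sized bubbles, while centimeter-scale bubbles and cavities resonate at a few hundred hertz, within the most sensitive vibrotactile range.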

Using this approach, we explore the use of bubble-based vibrations to convey fluid interaction sensations to users. We render the feedback for hand-based and, more innovatively, for foot-based interaction, engendering a rich perceptual experience of the sensations of water.

Part 2 - Infinite Immersive Navigation Based on Natural Walking in Restricted Workspaces

We begin the second part of this manuscript by surveying in Chapter 5 the existing 3D user interfaces using walking for the navigation of large environments within the confines of restricted workspaces. We first review existing locomotion interfaces, which propose hardware solutions. We study foot-based devices, which compensate the motion of each foot separately, and recentering floors, which compensate the overall movement.

Then, we focus on software solutions with the existing 3D navigation techniques. These include walking in place approaches, where the user performs the walking gait but without forward motion, natural walking metaphors, which combine natural walking with conscious and complementary techniques for dealing with the boundaries, and redirection techniques, which trick the user into modifying his trajectory in the VE.
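As a point of reference on how redirection is usually parameterized in the literature (a generic formulation assumed here, not a definition taken from this survey), rotation and translation gains relate the virtual camera motion to the tracked user motion:

\[ g_R = \frac{\theta_{\mathrm{virtual}}}{\theta_{\mathrm{real}}}, \qquad g_T = \frac{d_{\mathrm{virtual}}}{d_{\mathrm{real}}} \]

Keeping these gains within perceptual detection thresholds lets the system slowly steer the user away from the physical boundaries without the manipulation being noticed.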

When navigating in large or infinite VE within workspaces of restricted size, users are faced with the problem of reaching the workspace boundaries, raising safety issues and breaking immersion if not properly addressed. There are hardware- and software-based approaches to overcome these issues: locomotion interfaces [33] such as treadmills often have major limitations that restrict their widespread use (huge size and weight, high cost, lack of accuracy), while existing navigation techniques [10, 34, 35, 36] often fail at providing a simple, intuitive and immersive interaction.

Therefore, in Chapter 6 we introduce a novel interaction metaphor called the Magic Barrier Tape, which allows a user to navigate in a potentially infinite VE while confined to a walking workspace restricted in translation. Head-Mounted Displays (HMD) with limited tracking range are examples of such workspaces. The technique relies on the barrier tape metaphor and its “do not cross” implicit message by surrounding the walking workspace with a virtual barrier tape in the VE. Therefore, the technique informs the user about the boundaries of his walking workspace, providing an environment safe from collisions and tracking problems. It uses a hybrid position/rate control mechanism to enable natural walking inside the workspace and rate control navigation to move beyond the boundaries by “pushing” on the virtual barrier tape. It provides an easy, intuitive and safe way of navigating in a VE, without breaks of immersion. Two experiments were conducted in order to evaluate the Magic Barrier Tape by comparing it to two navigation techniques sharing the same objectives.
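To make the hybrid position/rate control idea concrete, the following is a minimal sketch of how such a scheme could be written; the circular workspace model, the function and variable names and the gain value are illustrative assumptions, not the actual implementation of the technique.

import numpy as np

RATE_GAIN = 1.5  # assumed rate-control gain (m/s per meter of "push"); illustrative only

def update_virtual_offset(virtual_offset, prev_head_pos, head_pos,
                          workspace_center, workspace_radius, dt):
    """Hybrid position/rate control in the spirit of the Magic Barrier Tape.

    Inside the workspace, tracked displacement maps 1:1 to virtual displacement
    (position control).  When the user pushes beyond the boundary, the
    penetration depth drives a continuous virtual velocity (rate control).
    """
    # Position control: real walking moves the viewpoint directly.
    virtual_offset = virtual_offset + (head_pos - prev_head_pos)

    # Rate control: outside the boundary, translate the viewpoint further,
    # proportionally to the penetration, along the pushing direction.
    offset = head_pos - workspace_center
    dist = np.linalg.norm(offset)
    if dist > workspace_radius:
        penetration = dist - workspace_radius
        direction = offset / dist
        virtual_offset = virtual_offset + RATE_GAIN * penetration * dt * direction

    return virtual_offset

# Example: one 10 ms update with the user 0.2 m past a 1.5 m boundary.
if __name__ == "__main__":
    print(update_virtual_offset(np.zeros(2), np.array([1.6, 0.0]),
                                np.array([1.7, 0.0]), np.zeros(2), 1.5, 0.01))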

The issue of reaching the boundaries of the workspace appears not only when the user moves towards the boundaries in translation: it can also happen when the user moves in rotation. Some VR setups, such as CAVE-like environments, present additional workspace restrictions. Indeed, in these setups users are not immersed in 360°: there are missing screens, leading to breaks of immersion when noticed by the user while turning. Hence, some workspaces are limited in both translation and rotation.

Chapter 7 presents three new techniques that deal with translation and rotation issues through common metaphors. These techniques provide a navigation metaphor that keeps the user safe from the boundaries, without breaking immersion. The first metaphor extends the basic and well-known wand paradigm by adding virtual warning signs. The second metaphor extends the Magic Barrier Tape presented in Chapter 6 by adding virtual walls that prevent the user from looking at the missing screen. The third metaphor introduces a virtual companion in the form of a bird to guide and protect the user within the VE. These techniques are evaluated by comparing them first to a base wand condition. The study provides insight into the relative strengths of each new technique, while showing that they can efficiently address the issues of navigation in large VE within restricted workspaces.

Finally, Chapter 8 provides conclusions and perspectives of the work presented in this manuscript.


Part I: Haptic and Multimodal Interaction with Physically Based Complex Media


Chapter 1 - Related Work: Physically Based Haptic Interaction with Complex Media

Contents

1.1 Fundamentals of Haptic Interaction . . . . 14
1.1.1 Fundamentals of the Human Haptic System . . . . 16
1.1.2 Haptic Devices . . . . 17
1.1.3 Haptic Rendering . . . . 19
1.2 Models for Physically Based Haptic Interaction . . . . 23
1.2.1 Rigid Bodies . . . . 24
1.2.2 Deformable Bodies . . . . 29
1.2.3 Fluids . . . . 37
1.3 Combining Haptics with other modalities . . . . 38
1.3.1 Low-level integration . . . . 39
1.3.2 High-level integration . . . . 40
1.4 Conclusion . . . . 43

Just as the synthesizing and rendering of visual images defines the area of computer graphics, the art and science of developing devices and algorithms that synthesize computer generated force-feedback and vibrotactile cues is the concern of computer haptics [37].

Haptics broadly refers to touch interactions (physical contact) that occur for the purpose of perception or manipulation of objects [38].

In this chapter, we survey previous work on haptic interaction with physically based VE. These environments are often populated with complex media, such as detailed rigid bodies, deformable objects and volumes of fluid. Thus, a significant number of approaches have been developed since the introduction of haptics, allowing the haptic interaction with different bodies of different media. We first recall the fundamentals of haptic interaction, focusing on the main mechanisms behind human, device and software haptic components.

Then, we survey the existing physically based models for haptic interaction, allowing the computation of force or vibrotactile feedback from the interaction with objects in the rigid, deformable and fluid states of matter. Finally, we provide an overview of existing multimodal approaches in different application areas involving the combination of visual and haptic feedback with other modalities.


devices responsible for the conversion of signals into physical stimuli in visual or tactile form, perceived by the user: a new position of the haptic device handle, a specific vibration of a wearable vibrator, and an image drawn on a screen, for example.

Figure 1.1 – Architecture of a VR application with visual and haptic modalities. Inspired from [38].

A specificity of haptics compared to other sensory modalities is its bidirectional flow. When subject to kinesthetic rendering, a user perceives a force from the haptic device, but also exerts a force on the haptic device. This is not the case with the audio or visual modalities, where a user does not affect the rendering device and hence the sensory loop. This bi-directionality, the capacity to exchange information and energy in two directions, from and toward the user, is often referred to as the single most important feature of the haptic modality [38], and clearly highlights its interactive nature.

Interest in force and touch feedback in computer simulations goes back to the mid-sixties [39, 40, 41]. Since then, it has been shown that haptic feedback enhances the immersion of users in VR, as well as their performance in the achievement of a task within a VE [14, 15, 42]. With the fast growth in computational power and in device rendering fidelity, many areas have been targeted by past and present applications of ongoing research, and exciting possibilities can be foreseen in the near future [13]:

Medicine: surgical simulators for medical training, remote diagnosis for telemedicine, aids for the blind such as warning or path guidance, rehabilitation of patients with gait problems

Industry: path planning, virtual prototyping, virtual assembly, virtual training

Scientific visualization: exploration of complex data sets, molecular manipulation


Entertainment: video games and simulators for a deeper immersion within the VE

Exhibitions: virtual art, virtual touching in museums

Content creation: enhanced modeling and virtual sculpturing, 3D painting

Architecture and design: virtual walkthrough, model testing

As defined by Srinivasan and Basdogan [13], research in the area of haptics can be categorized into two main areas: Human Haptics and Machine Haptics. These categories are tightly linked to the subsystems and information flow behind the haptic interaction between a human user and the VE through the haptic interface. Human haptics are related to the human sensorimotor loop: when a human user touches a real or virtual object, forces or vibrations are exerted on the skin and the muscles. The information is sensed by different receptors, depending on the type of stimuli. The associated sensory information is conveyed to the brain and leads to conscious or unconscious perception. The brain issues motor commands to activate the different effectors, eventually resulting in motion.

Conversely, Machine Haptics are related to the machine sensorimotor loop: when the human user manipulates the haptic device, the device sensors convey the different sensed data to the computer. The VE is updated, and the computed output data is sent to the actuators of the haptic device to generate the haptic feedback. Both categories form two distinct loops in the haptic interaction process, as shown in Figure 1.2.

Figure 1.2 – Human and machine loops during haptic interaction [13]

Within the machine haptics area, the haptic rendering refers to the process by which sensory stimuli are computed through a software algorithm in order to convey information about a virtual object [38]. A haptic rendering algorithm gathers data from the environment, such as the device position and the physical attributes of the virtual objects (shape, elasticity, texture, mass, etc), and produces force, torque and/or tactile signals. The design of the algorithm is crucial for an accurate stimuli restitution. It is analogous to a graphic rendering algorithm: a sphere visually rendered with simple shading techniques will look different from the same sphere rendered with ray-tracing techniques. In haptics, a sphere rendered using simple geometrical functions will feel different from the same sphere rendered with physically based techniques conveying texture and friction sensations [38].


thermal or chemical). Information collected by receptors is conveyed to the central nervous system, mainly the brain and the spinal cord, using electrical impulses through the afferent (sensory) neural network. The responses generated in the central nervous system travel through the efferent (motor) nerve fibers, conducting impulses to motor neurons that transmute neural signals into activation of muscles and glands [13].

Haptic sensory information from the body in contact with an object can be divided into two classes: tactile perception involving the perception of sensations at the surface of the skin, and kinesthetic (proprioceptive) perception involving the perception of body positions and forces [43]. Upon contact, forces are usually sensed by both tactile and kinesthetic systems. Coarse properties of objects explored through hand or arm motion, such as large shapes (one meter or more) or spring-like compliances, are conveyed by the kinesthetic system. On the other hand, spatiotemporal variations of contact forces are usually sensed by the tactile system, including fine shapes, texture, slip, and rubber-like compliances, among others [13].

1.1.1.1 Tactile receptors

Tactile sensations result from the stimulation of three kinds of receptors located in the skin [13]:

thermoreceptors, sensitive to temperature. There are two types of thermoreceptors, sensitive to changes in cold or warm temperatures.

nociceptors, sensitive to mechanical, thermal or chemical stimuli that have the potential to damage tissues.

mechanoreceptors, composed of different receptors sensitive to mechanical stimulations like pressure, vibrations, flutter, stretch and textures. Among the four types of mechanoreceptors, we can distinguish: 1) slow adapting receptors which are stimulated throughout a sustained stimulus (Merkel disks, sensitive to unchanging pressure, and Ruffini endings, responding to unchanging movements like stretching), and 2) rapid adapting receptors which are stimulated only at the onset and offset of a stimulus (Meissner corpuscles, sensitive to changing details, giving a perception of flutter, and Pacinian corpuscles, responding to changes in movement like vibrations).

At places where the tactile sensory capabilities are most acute (such as the fingertips), the spatial location of a point is detectable up to 0.15 mm, with a spatial resolution between two points of about one millimeter [13]. Gratings as shallow as 0.06 µm are detectable, as well as 2 µm high single dots [13]. Vibrations of up to 1 kHz are detectable, with the highest sensitivity around 250 Hz. The detection threshold globally decreases with increasing frequencies [13]. The frequency JNDs 1 at the fingertip have been estimated at different values, from 3% to 38%, across multiple studies [44]. The intensity JNDs at the fingertip decrease as intensity increases, and are roughly independent of frequency.

1.1.1.2 Proprioceptive receptors

Kinesthetic perception, often jointly used with the term proprioception, is involved in the perception of limbs’ positions, movements and efforts. Proprioception is the result of the fusion of information generated by two kinds of receptors [13]:

receptors from muscles, joints and tendons. The most important receptors for controlling the muscular system are the spindle fibers, sensitive to changes in the length of muscles, and the Golgi tendon organs, sensitive to stretch.

tactile receptors. Tissues and ligaments surrounding the joint contain several mechanoreceptors such as Ruffini endings and Pacinian corpuscles, providing information such as the stretch of the skin.

Some studies suggest that the maximum bandwidth of kinesthetic perception is around 12 Hz [45]. The position JNDs vary from 0.8° for the elbow to 2.5° for the fingers [45]. The perception of efforts is anisotropic. The typical associated JNDs are 5%-15% for contact forces, 10% for weight, 13% for torque and 22% for the stiffness of an object [46, 13].

1.1.1.3 Force control

During object manipulation, the maximum controllable force exerted through a finger is about 50 to 100 N, depending on whether shoulder muscles can be used or not [47].

However, typical forces in manipulation tasks are usually between 5 and 15 N, with a resolution of about 0.04 N [47]. When squeezing virtual objects, the perceptual resolution in terms of JNDs has been found to be about 7% for force and elastic stiffness, 12% for viscosity and 20% for mass [13]. In order to simulate rigid walls, a stiffness of about 25 N/mm is required, although 5 N/mm can already provide a good perception [47]. These thresholds, however, are the results of the haptic modality alone. They can be significantly altered when adding cues from other modalities [13].

1.1.2 Haptic Devices

When interacting with a VE with haptic feedback, a user receives tactual sensory information through his tactile and kinesthetic sensory systems. The interfaces in charge of displaying these haptic signals are the haptic devices. A particularity of haptic devices is that, in many cases and notably for kinesthetic interfaces, they also serve as input devices: the user controls and manipulates the device or part of it (such as a handle), through which positions, velocities and/or forces exerted by the user are sensed by the device, creating a two-way coupling between the user and the manipulated virtual object [13].

Haptic devices can be classified into tactile interfaces, involving tactile perception, and kinesthetic interfaces, involving kinesthetic perception. Since tactile perception is mostly cutaneous, tactile interfaces are usually used to simulate the direct touch and feel of objects contacting the skin [13]. Conversely, since kinesthetic perception is based on limb movement and net forces, kinesthetic interfaces have a handle or a grasping mechanical

1. Just Noticeable Difference: the smallest detectable difference in a stimulus.


1.1.2.1 Tactile interfaces

Visell [31] characterizes tactile interfaces by the format in which energy is transmitted to the tactile receptors. The list is non-exhaustive:

Low frequency, low amplitude mechanical deformation, where pieces are moved in order to render a relief for exploration by touch. Examples include the tactile shape display [51] (Fig. 1.3a) for normal strains, and the STReSS interface [52] (Fig. 1.3b) for lateral strains.

Vibrotactile stimulation, where the interface vibrates against the skin. Examples include the CyberTouch ™ glove from Immersion (Fig. 1.3c) for finger and palm stimulation, and the EcoTile [31] (Fig. 1.3d) for foot-floor interaction.

Electrotactile stimulation, where currents are used to stimulate the afferent nerves directly, bypassing the receptors.

Force feedback displays, which by nature are designed for kinesthetic perception but inherently stimulate tactile receptors through contact transients (friction, vibrations, etc). Several examples are shown in section 1.1.2.2.

Thermal displays, where heat is directed toward or away from the skin.

1.1.2.2 Kinesthetic interfaces

Kinesthetic interfaces are based on the kinesthetic part of the haptic system and thus measure and deliver positions and forces. We can differentiate passive interfaces containing only sensors and active interfaces delivering forces and movements to the user [53].

Passive interfaces regroup isotonic interfaces, which follow the user’s movements without constraining them, and isometric interfaces, which are, on the contrary, immobile [54]. Existing passive interfaces include the classic mouse for position sensing, or the SpaceMouse ™ from 3DConnexion for measuring forces and torques.

Active interfaces 3 can deliver forces using different kinds of actuation architecture, such as parallel or serial structures. In serial active interfaces, actuators are serially connected to each other from a static base to the manipulated part like a robotic arm.

Notable examples are the Phantom ® from SensAble (Fig. 1.4a), adapted to desktop use, or the Haption Virtuose ™ (Fig. 1.4b), allowing a wider workspace. In parallel interfaces, actuators are directly connected to the manipulated part. Notable examples are cable-based devices such as the SPIDAR interfaces [55] (Fig. 1.4c) or three-armed devices such as the Falcon ® from Novint (Fig. 1.4d).

2. From a user perspective, a transparent system enables the haptic feedback of a VE without the user perceiving the mechanical dynamics of the device (such as inertia and friction).

3. This class of devices is often referred to in the literature as “haptic devices”.

Figure 1.3 – Examples of tactile interfaces: (a) tactile shape display [51], (b) STReSS interface [52], (c) CyberTouch ™ glove, (d) the EcoTile [31].

Other taxonomies classify active interfaces as ground-based (such as the Phantom ® ) or body-based (such as the Exoskeleton Force ArmMaster from EXOS, Inc., Fig. 1.4e) [13], or by the number of available degrees of freedom [49].

1.1.3 Haptic Rendering

Haptic rendering is the component of machine haptics concerned with generating and rendering haptic stimuli to the human user [13].

A typical haptic loop consists of the following sequence of events [37]; a minimal code sketch is given after the list:

Find the position of the proxy in the VE

Use collision detection algorithms to detect proxy interaction with the VE

Use data from collision detection and an interaction response algorithm to compute the interaction response signal

Send the interaction response signal to the control algorithms, which apply them on the operator through the haptic device
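The sketch below illustrates this sequence; the device API (read_position, apply_force), the scene queries and the 1 kHz pacing are assumptions for illustration, not a specific device driver or the algorithms surveyed here.

import time

HAPTIC_RATE_HZ = 1000          # typical servo-loop rate, discussed below
DT = 1.0 / HAPTIC_RATE_HZ

def haptic_loop(device, scene):
    """One iteration per millisecond: proxy update, collision detection,
    interaction response computation, and force display on the device."""
    while True:
        t0 = time.perf_counter()

        # 1. Find the position of the proxy in the VE.
        proxy_pos = device.read_position()

        # 2. Detect proxy interaction with the VE.
        contacts = scene.detect_collisions(proxy_pos)

        # 3. Compute the interaction response signal (e.g., penalty forces).
        force = scene.compute_response(proxy_pos, contacts)

        # 4. Send the response to the control algorithms / haptic device.
        device.apply_force(force)

        # Keep the loop close to the target update rate.
        elapsed = time.perf_counter() - t0
        if elapsed < DT:
            time.sleep(DT - elapsed)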

The haptic rendering loop involves the user, the device and the simulation. The user controls the device, which senses a position/force that is sent to the simulation. In turn, the simulation generates the haptic stimuli, sent to the user through the device. Depending on whether there is a feedback mechanism between the device and the simulation, two different haptic rendering loops exist: closed-loop rendering, common for kinesthetic devices, and open-loop rendering, common for tactile devices.


Figure 1.4 – Examples of kinesthetic interfaces: (a) Phantom, (b) Virtuose, (c) SPIDAR [55], (d) Falcon, (e) ArmMaster.

1.1.3.1 Closed-loop rendering

In closed-loop rendering, the output of the simulation is fed back to the input of the haptic device [56], as shown in Figure 1.5. This is typically the case of kinesthetic rendering, where a kinesthetic transducer displays a force to the user. Indeed, the force display induces a mechanical motion of the device, usually through its motors, that is eventually constrained by the user manipulating the device. Thus, there is a difference between the feedback configuration and the real device configuration, which is used to compute the new values sent to the simulation.

Figure 1.5 – Closed-loop rendering

Closed-loop rendering raises some specific issues. If an instability is introduced in the loop (due to rate or time-stepping issues, for instance), the closed nature of the rendering loop will make the errors propagate and eventually amplify until failure. Hence, in closed-loop rendering, special care needs to be taken to stabilize the overall system. In addition, the loop has to run at an update rate appropriate to the type of haptic interaction that is simulated. An update rate of 1 kHz is considered to be a minimum in order to provide a good perceptive rendering, due to the frequency range of sensory receptors in the human haptic system. This rate seems to be a subjectively acceptable compromise permitting the representation of reasonably complex objects with reasonable stiffness. Higher rates can provide crisper contact and texture sensations for stiff objects, but only at the expense of reduced VE complexity or precision, or with the availability of more capable computers.

In general, closed-loop haptic rendering has to render two main features: the free motion corresponding to the unconstrained virtual object movement, and the contact restraining the interface movement during collision. The efficient rendering of these features is highly dependent on the type of closed-loop control scheme and the corresponding device.

We can differentiate two broad classes of closed-loop control schemes: admittance and impedance [13]. The choice between these two main architectures has important implications for the design of the loop and the associated interface [56] (a minimal sketch contrasting both schemes follows the list):

– admittance control systems measure the force applied by the user and control the position and/or velocity of the haptic device. They can efficiently render contact surfaces by constraining the position regardless of the force applied, but have difficulties rendering transparent unconstrained free motion.

– impedance control systems detect the motion commanded by the user and control the forces applied by the haptic device. They can thus easily render free motion by not producing any force, but cannot efficiently constrain the position on virtual contact surfaces.
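The C++ sketch below contrasts a single update step of both schemes for a one-dimensional device interacting with a virtual wall at x = 0. It only illustrates the control direction (what is measured versus what is commanded); the gains, the virtual mass-damper on the admittance side, and the function names are illustrative assumptions, not taken from a specific controller.

// Impedance control: motion is measured, force is commanded.
// Virtual wall at x = 0 (x < 0 means penetration), stiffness K and damping B.
double impedanceStep(double x, double v) {
    const double K = 1000.0, B = 5.0;        // illustrative gains
    if (x >= 0.0) return 0.0;                // free motion: no force is produced
    return -K * x - B * v;                   // spring-damper force inside the wall
}

// Admittance control: force is measured, motion is commanded.
struct AdmittanceState { double x = 0.0, v = 0.0; };

double admittanceStep(AdmittanceState& s, double userForce, double dt) {
    const double mass = 0.5, damping = 2.0;                  // illustrative virtual dynamics
    double a = (userForce - damping * s.v) / mass;           // virtual mass-damper driven by the user
    s.v += a * dt;                                           // semi-implicit Euler step
    s.x += s.v * dt;
    if (s.x < 0.0) { s.x = 0.0; if (s.v < 0.0) s.v = 0.0; }  // hard position constraint at the wall
    return s.x;                                              // position setpoint tracked by the device
}

In the impedance branch the wall can only be approximated by finite gains, while in the admittance branch the commanded position is clamped exactly at the surface regardless of the applied force, which mirrors the trade-off listed above.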

The main issue raised by free motion and contact using admittance or impedance loops and devices is the instability caused by the rendering loop, which can manifest as buzzing, oscillating or divergent behaviors that might even be harmful to the user. We subsequently describe a mechanism called virtual coupling, designed to address this issue: it simulates virtual contacts based on a trade-off between a stable interaction and a convincing rendering of transparent free motion and stiff contact.

1.1.3.1.a Virtual coupling

The haptic interaction of a user with a VE can be described using Figure 1.5: x_u and f_u represent the positions and forces exchanged between the user and the haptic device, while x_v and f_v represent the positions and forces exchanged between the haptic device and the VE. This configuration is called direct rendering, since the haptic device directly uses the data coming from the VE to display the force to the user. This approach, however, poses stability problems.

The simulation of a rigid contact can be modeled by using a contact model of stiffness K and damping B such that f_v(t) = −K x_v(t) − B ẋ_v(t) [57]. This ideal spring-damper model is dissipative: an ideal spring is a lossless system (the energy accumulated by squeezing the spring is entirely removed when releasing it) and the damper is a dissipative system. However, the interactive simulation of such a system, implemented in discrete time and values, is not dissipative. For instance, the spring force will not increase smoothly, but will be repeatedly “held” at constant values over time-steps of duration T, in a staircase fashion, and the energy accumulated during squeezing will not be entirely removed during release. Thus, the virtual spring does not behave as a lossless system but generates energy. Likewise, a discretized damper is capable of producing energy. This discrete sampled-data system constitutes one important reason for the instability of the haptic rendering of virtual contact [57].
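As a small numerical illustration of this energy leak (a sketch constructed here, not taken from [57]), the following C++ program presses a zero-order-hold virtual spring at constant speed and releases it along the same path, summing the work the wall does on the user. With an ideal continuous spring the net work would be exactly zero; with the sampled spring the device ends up injecting roughly K · x_max · v · T joules per press-release cycle.

#include <cstdio>

// Press a sampled (zero-order-hold) virtual spring of stiffness K at constant
// speed v down to depth xMax, then release it along the same path, and sum the
// work the wall does on the user. An ideal continuous spring would give exactly 0.
int main() {
    const double K = 1000.0;     // virtual wall stiffness (N/m)
    const double T = 0.001;      // sampling period (s), i.e. a 1 kHz haptic loop
    const double v = 0.1;        // hand speed (m/s)
    const double xMax = 0.05;    // maximum penetration (m)
    const int N = static_cast<int>(xMax / (v * T));

    double workOnUser = 0.0;
    // Compression: the held force lags the true penetration.
    for (int k = 0; k < N; ++k) {
        double xHeld = k * v * T;                 // penetration sampled at the start of the step
        workOnUser -= K * xHeld * (v * T);        // wall pushes against the inward motion
    }
    // Release: the held force lags again, now on the way out.
    for (int k = N; k > 0; --k) {
        double xHeld = k * v * T;
        workOnUser += K * xHeld * (v * T);        // wall pushes along the outward motion
    }
    std::printf("net work on user: %.6f J (ideal spring: 0, approx K*xMax*v*T = %.6f J)\n",
                workOnUser, K * xMax * v * T);
    return 0;
}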


Figure 1.6 – Virtual coupling mechanism: a visco-elastic link in the form of a spring and a damper is introduced between the device and the VE.

The virtual coupling is a multidimensional viscoelastic link (spring and damper) between the haptic device and the simulation, as illustrated in Figure 1.6. Thus, the position of the virtual object in the simulation is distinct from the position imposed by the haptic device, but both are connected by a viscoelastic link. The spring tries to align the simulation position of the virtual object to the position of the haptic device. The damper tries to enforce equal velocities. This link exists for position and orientation in the case of 6DoF simulations [58].

Virtual coupling guarantees the stability of the discrete-time sampled system by limiting the maximum impedance exhibited by the haptic device, as long as the simulation is discrete-time passive. Thus, even if the simulated object is constrained by a rigid contact (with high values of stiffness and damping) from the simulation, the virtual coupling will naturally limit those values to achieve a stable haptic rendering. The use of virtual coupling shifts the stability problem of the haptic rendering of a VE to the passivity of the simulation alone. Colgate et al. pointed out that this passivity of the simulation can be easily achieved by considering methods with implicit integration schemes [58].
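The force side of such a coupling can be sketched as follows: a generic spring-damper link between the device pose and the simulated object, restricted here to the translational part for brevity, with illustrative gains. This is a minimal sketch and is not reproduced from [58].

struct Vec3 { double x = 0, y = 0, z = 0; };
inline Vec3 operator-(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
inline Vec3 operator+(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
inline Vec3 operator*(double s, const Vec3& a)      { return {s * a.x, s * a.y, s * a.z}; }

// Translational part of a virtual coupling: a spring drives the simulated object
// towards the device position, a damper drives their velocities together.
// The opposite of this force is displayed to the user through the device; the
// rotational part would apply an analogous torque to the orientation error.
Vec3 couplingForce(const Vec3& devicePos, const Vec3& objectPos,
                   const Vec3& deviceVel, const Vec3& objectVel,
                   double k = 200.0, double b = 2.0) {        // illustrative gains
    Vec3 spring = k * (devicePos - objectPos);                // align positions
    Vec3 damper = b * (deviceVel - objectVel);                // align velocities
    return spring + damper;                                   // force applied to the object
}

In the simulation, this force (and the corresponding torque) is applied to the virtual object at every step, while its opposite is sent to the device; the coupling gains bound the maximum impedance felt by the user, which is precisely the stabilizing role described above.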

1.1.3.2 Open-loop rendering

In open-loop rendering, the display of the force stimuli has no influence on the data sent to the simulation [56], as shown in Figure 1.7. This is typically the case for tactile rendering, where a tactile interface displays force transients or any other signal to the user by, for example, vibrating, without influencing the position data sent to the simulation. The loop is inherently more stable than a closed-loop rendering approach [56].

Figure 1.7 – Open-loop rendering: the output signal s_v has no influence on the input data.
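As an illustration of such an open-loop signal, the sketch below generates a decaying sinusoidal transient of the kind often used for vibrotactile contact events; the waveform parameters (amplitude, frequency, decay) are illustrative assumptions and would in practice be tuned to the simulated material and the actuator.

#include <cmath>
#include <vector>

// Builds a decaying sinusoid a * exp(-b t) * sin(2*pi*f t), sampled at `rate` Hz,
// to be pushed to a vibrotactile actuator when a contact event is detected.
// Nothing is fed back to the simulation: the signal is purely open-loop.
std::vector<double> impactTransient(double amplitude = 1.0,   // actuator units
                                    double frequency = 250.0, // Hz, near the tactile sensitivity peak
                                    double decay = 60.0,      // 1/s, controls transient duration
                                    double duration = 0.05,   // s
                                    double rate = 8000.0) {   // actuator sampling rate (Hz)
    std::vector<double> samples;
    const double pi = 3.14159265358979323846;
    for (int i = 0; i < static_cast<int>(duration * rate); ++i) {
        double t = i / rate;
        samples.push_back(amplitude * std::exp(-decay * t) * std::sin(2.0 * pi * frequency * t));
    }
    return samples;
}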

In the case of a kinesthetic device, open-loop rendering can be achieved by momentarily ignoring the measured position of the device for the duration of the event [59]. This makes it possible to present an impact transient pattern through an adequate kinesthetic device. This
