animation. The second limit of motion capture is encountered when an animation needs to be repeated in sequence (looped). Stand and walk animations are typically created by the continuous repetition of a short animated sequence: to create a walk animation, for instance, a sequence of two steps is created and played over and over. For the animation to look correct while looped, the first and last frames must be as similar as possible. If these frames are not identical, the linear interpolation performed by the game engine will make the walk look unnatural. The problem with mocap is that human beings performing a natural motion (such as a walk) rarely adopt exactly the same posture twice within a sequence. As a consequence, a long editing effort is needed to identify, within a long motion-capture recording, those segments that are similar enough to serve as the starting and ending points of the loop. As a further constraint, those points must be as close as possible to the starting points of other animations: our walking loop, for instance, usually starts and finishes with a posture that is as close as possible to the starting and ending frames of our stand animation. Another limit of using mocap data for looping animations is that mocap data often include all the little random movements that give human motion its natural feel. These parasitic random movements lose their natural look when they are repeated systematically. For instance, a small movement of one hand during a walk, once the animation is looped, will be repeated in exactly the same way every two steps. For this reason, short loops such as those used for a walk cycle (around 30 frames) require a further clean-up pass on the animation. Such a clean-up is not compulsory, however, if the animation used in the loop cycle is long enough to hide the repetition from the eye of the user.
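The search for loop points described above can be sketched as a brute-force comparison of poses, picking the pair of frames whose postures differ the least. This is a minimal illustration only: the frame representation (one joint-angle vector per frame) and the Euclidean distance metric are assumptions, not the actual pipeline used in this work.

```python
import numpy as np

def find_loop_points(frames, min_len=20):
    """Search a motion-capture clip for the (start, end) frame pair whose
    poses are most similar, so the clip can be looped with minimal popping.

    frames:  (n_frames, n_dofs) array of joint angles, one row per frame.
    min_len: minimum loop length in frames, to avoid degenerate loops.
    """
    n = len(frames)
    best = (0, min_len)
    best_dist = np.inf
    for start in range(n - min_len):
        for end in range(start + min_len, n):
            # Euclidean distance between the two poses in joint-angle space.
            d = np.linalg.norm(frames[start] - frames[end])
            if d < best_dist:
                best_dist = d
                best = (start, end)
    return best

# Example on a synthetic cyclic motion: a sine wave over 3 "joints".
t = np.linspace(0, 4 * np.pi, 120)
frames = np.stack([np.sin(t), np.cos(t), np.sin(2 * t)], axis=1)
start, end = find_loop_points(frames, min_len=30)
```

On real data the selected pair would only be a candidate: as noted above, an artist still has to verify that the chosen frames are also close to the start of the other animations (e.g. the stand pose).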
In our case the stand animations were quite long (some hundreds of frames), and the different stand animations were played in random order to further reduce the chance that the user would spot any anomaly. As a consequence, the post-editing of the stand animations was minimal.
We believe that this fusion is novel in the field of mixed/augmented reality and has not been applied in current systems, where augmented reality is understood as the positioning of static information in space 3 or animated sequences, 4 but in very few cases are deformable solids taken into account. 5 We are looking for a simple and natural interaction with virtual objects that are capable of deforming as if they were really in front of the user. We believe that the union between machine learning, computer graphics, and computer vision provides very realistic results. The fusion of virtual and real objects with interaction between them is known as MR, since both realities (virtual and real) are mixed in a way that is not easily separable. We do not conceive of MR without the union of the three scientific communities (Figure 1), where each one provides the necessary tools to obtain the desired result.
simulation), FRVSense (rendering) and VR4I (interaction and collaboration in virtual worlds). My internship is carried out in the VR4I team under the responsibility of Anatole Lécuyer, Maud Marchal and Gabriel Cirio.
The team has carried out several studies on haptic interaction in virtual worlds, such as developing interaction metaphors and simulating haptic fluids. Most of these applications were uni-manual. A recent demonstration on haptic fluids ([CMHL10]) raised the question of using two hands in virtual reality. An analysis of existing work on that topic has shown that this field is recent and that only a few articles have been published. Bi-manual haptic devices have been designed and tested, but some issues, such as collision between interfaces, have not been solved. Furthermore, no dedicated studies on bi-manual haptic interaction are present in the literature.
To clarify these cognitive immersions and interactions (I²), the user will make use of a sensorimotor schema that has been assimilated in the real world. The notion of a schema was put forward by the psychologist Piaget. He considered that, in analysing the birth of intelligence in a child, in particular its sensorimotor aspect, schemas are resources the subject uses to help assimilate any situations and objects with which s/he is confronted. For Piaget, a schema is a structured group of action characteristics that can be generalised and which makes it possible to repeat the action or to apply it to new content (as the consumer does when manipulating the trolley in conditions close to those encountered in the real world). A schema is therefore the mental organisation of actions as they can be transferred or generalised when the action is repeated in similar circumstances. Schemas correspond to the stable aspects of various classes of situations. Use schemas have a personal dimension proper to each individual; they are part of the subject's personal memory in the form of resources that can be mobilised. They also have a social dimension: they are common to all or many members of a particular social group, an organisation, or a working environment. Consequently, it is appropriate to also consider them as social use schemas, as resources inscribed in the memory of groupings. We base our approach on this concept with a view to obtaining behavioural interfaces offering pseudo-natural immersion and interaction. The behavioural interface is in this case a mixed entity comprising both an artefact (the physical mechanism) and a schema that we call an "Imported Behavioural Schema" (IBS). The schema is imported from the real environment and is transposed and adapted into a virtual environment.
involved. Two other interaction metaphors can be studied to "replace" haptic perception. The first is the addition of a tangible interface, such as physical elements within the user's workspace, creating a mixed reality. However, this can be difficult to implement in the case of complex environments. The second is lighter pseudo-haptic systems for the representation of localized information such as contact points, without restriction of mobility (they can be wearable). This last metaphor can be implemented using vibrotactile interfaces. We propose to use vibrotactile feedback as a substitute for force feedback, to make users aware of impending or actual collisions in highly constrained spaces. This type of feedback has shown interesting results in the case of guidance and also in collision awareness.
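As an illustration of the proposed substitution, a minimal mapping from tool-to-obstacle distance to vibrotactile amplitude might look as follows. The function name, the linear ramp, and the thresholds are all hypothetical choices for the sketch, not parameters taken from the systems discussed above.

```python
def vibration_intensity(distance, warn_dist=0.10, max_amp=1.0):
    """Map the distance (metres) between the tracked tool and the nearest
    obstacle to a vibrotactile amplitude in [0, max_amp].

    - Beyond warn_dist: no feedback.
    - Between warn_dist and contact: amplitude ramps up linearly,
      warning the user of the impending collision.
    - At or past contact (distance <= 0): full amplitude.
    """
    if distance >= warn_dist:
        return 0.0
    if distance <= 0.0:
        return max_amp
    return max_amp * (1.0 - distance / warn_dist)
```

A continuous ramp of this kind conveys proximity before contact, which is the "impending collision" awareness mentioned above; a real system would also shape the signal's frequency and duty cycle to keep it perceptible without being fatiguing.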
al. with the virtuality-reality continuum. This section briefly describes previous work that focused on enabling the combination of, and transition between, mixed realities.
See-through displays and SAR have been combined in the past, notably in order to complement the HMD's high resolution with the projectors' scalable FOV (field of view) [10, 2]. In the context of multi-display environments, the combination of screens and projection has been studied, both with and without see-through technologies [20, 7]. Other hybrid mixed reality systems have also been explored. For example, MagicBook combines a physical non-augmented book with video see-through, enabling the transition from non-augmented interaction, through ST-AR, to immersive VR. Dedual et al. proposed a system that combines interactive tabletops with video see-through AR. Smarter Objects and exTouch use video ST-AR to control embedded systems; even when the physical artifact was the focus of attention, no spatial augmentation was presented, except the electronic behaviour itself. Closer to our work is metaDESK by Ullmer and Ishii, which combines tabletop interaction, tangible tokens and a see-through AR window.
In our work, we use the SSI framework, which recognizes the user's emotions expressed through their voice and facial expressions.
tough question, the interviewer will expect negative affects from the interviewee (distress, agitation). The affective module (described in Section 4) takes as inputs the scenario information and the detected social cues. It computes a social attitude for the virtual agent (the recruiter in our case). These attitudes are turned into non-verbal behaviours in the virtual agent animation module (Section 5) and used in the next interaction step. This module is based on the Greta platform and is composed of an intent planner that generates the communicative intentions (what the agent intends to communicate) based on the scenario, and a behaviour planner that transforms the communicative intentions into a set of signals (e.g. speech, gestures, facial expressions) based on the agent's attitude and mental states.
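The flow from scenario expectation and detected cues to an attitude and then to non-verbal signals can be sketched conceptually as below. This is a toy stand-in only: the class and function names, the scalar friendly-to-hostile attitude axis, and the signal labels are our own illustrative assumptions, not the Greta platform's API or the actual TARDIS model.

```python
from dataclasses import dataclass

@dataclass
class InteractionStep:
    question_difficulty: float  # 0 (easy) .. 1 (tough), from the scenario
    user_distress: float        # 0 .. 1, detected social cue

def compute_attitude(step, hostility=0.0):
    """Affective-module stand-in: combine the scenario expectation with the
    detected cue into an attitude on a friendly (-1) .. hostile (+1) axis."""
    # Distress beyond what the question's difficulty warrants shifts the
    # recruiter's attitude; a baseline hostility trait biases the result.
    surplus = step.user_distress - step.question_difficulty
    return max(-1.0, min(1.0, hostility + surplus))

def plan_behaviour(attitude):
    """Behaviour-planner stand-in: map the attitude to non-verbal signals."""
    if attitude > 0.3:
        return ["frown", "lean_forward", "fast_speech"]
    if attitude < -0.3:
        return ["smile", "nod", "soft_speech"]
    return ["neutral_gaze"]
```

The point of the sketch is the two-stage split described above: one component decides *what* attitude to hold given the scenario and the cues, and a separate planner decides *which* observable signals realize it.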
Social Attitudes, Emotions, Affective computing, Virtual Agent, Non-verbal behaviour
In the context of social inclusion, the TARDIS project proposes to use virtual agents to support job interview simulation and social coaching. The use of virtual agents in social coaching has increased rapidly in the last decade; studies provide evidence that virtual agents can help humans improve their social skills and, more generally, their emotional intelligence. Most models of virtual agents in the social coaching domain have focused on the simulation of emotions; however, a virtual agent should also be able to express different social attitudes. In this paper, we propose a model of social attitudes that enables a virtual agent to adapt its social attitude during the interaction with a user in a job interview simulation context. The methodology used to develop such a model combines a theoretical and an empirical approach.
Simon Richir § Arts et Métiers ParisTech
The usage of biofeedback in Virtual Reality (VR) is becoming more and more important in providing fully immersive experiences. With the rapid evolution of physiological monitoring technologies, it is important to study how different modalities of biofeedback can alter user experience. While previous studies have used biofeedback as an additional interaction mechanic, we created a protocol to assess heart rate control competency and used its results to immerse our participants in a VR experience where the biofeedback mechanics are mandatory to complete a game. We observed consistent results between our competency scale and the participants' mastery of the biofeedback game mechanic in the VR experience. We also found that the biofeedback mechanic has a significant impact on engagement.
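A heart-rate control competency score of the kind the protocol assesses could, for instance, be computed as the relative reduction in mean heart rate between a resting baseline and a guided control session. This is a hypothetical metric for illustration; the paper's actual scoring procedure is not specified in the passage above.

```python
def hr_control_score(baseline_hr, session_hr):
    """Hypothetical competency metric: relative change in mean heart rate
    (beats per minute) between a resting baseline and a session in which
    the participant tries to lower their heart rate.

    Returns a positive score when the participant succeeded in lowering
    their heart rate, zero or negative otherwise.
    """
    mean = lambda xs: sum(xs) / len(xs)
    baseline, session = mean(baseline_hr), mean(session_hr)
    return (baseline - session) / baseline
```

Normalizing by the baseline makes scores comparable across participants with different resting heart rates, which is what a competency *scale* requires.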
panels accompanying the target objects. However, many attempts have been made to make the experience more interactive. Currently running at the MFA is the "Conservation in Action" project 30 , which transforms an exhibition floor into a conservation space enclosed by glass walls, so that visitors can observe the working environment, equipment, and tools of conservators and understand how they perform their work on selected objects. Both of these contents could easily be transformed into a virtual exhibition, in which there would be fewer constraints regarding space and time. When the discussion returned to general concerns about the current state of museums, the conservators mentioned that attendance has been declining; therefore, attempts have been made to make the museum more adaptive, better curated for younger people, compatible with social media, and suitable for the creation of public spaces that support the exchange of ideas. Art was traditionally made and consumed by elites, who collected it to reinforce their status. However, such a statement no longer applies today. The classic museum buildings of today still have a formal, cold, and quiet environment, which is not welcoming, especially for kids. Additionally, while the museum wishes to attract a larger audience, appreciating the art requires background knowledge from the visitors, and educational priorities in the US have shifted. Last, compared with other types of entertainment, museums are still quite expensive. When discussing digital museums, the conservators expressed that, from their perspective, seeing the actual object is still the most valuable part of a museum experience. Photography is mediated, as is the digital model of an object. It is important to consider how to allocate resources between conserving and displaying the actual object as opposed to digitally archiving it. In terms of reaching a wider audience, digital museums could be a good means.
Since no one can prevent the younger generation from using digital media, museums should learn to use it to their advantage.
Homing is a fundamental task which plays a vital role in spatial navigation. Its performance depends on the computation of a homing vector, for which human beings can simultaneously use two different cognitive strategies: an online strategy based on self-motion cues, known as path integration (PI), and an offline strategy based on the spatial image of the path, called piloting. Studies using virtual reality environments (VEs) have shown that human beings can perform homing tasks with acceptable performance. However, in these studies, subjects were able to walk naturally across large tracking areas, or researchers provided them with high-end, large immersive displays. Unfortunately, these configurations are far from current consumer-oriented devices, and very little is known about how their limitations can influence these cognitive processes. Using a triangle completion paradigm, we assessed homing tasks with two consumer-oriented displays (an HTC Vive and a GearVR) and two consumer-oriented interaction devices (a Virtuix Omni treadmill and a touchpad control). Our results show that when locomotion is available (treadmill condition), there are significant effects of display and path complexity. In contrast, when locomotion is restricted (touchpad condition), only some effects of path complexity were found. Some future research directions are therefore proposed.
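The online strategy mentioned above, path integration, amounts to accumulating self-motion displacements along the outbound path and inverting the result. A minimal sketch of this computation, applied to the triangle completion paradigm:

```python
import math

def homing_vector(segments):
    """Path integration: given the outbound path as a list of
    (heading_radians, distance) segments, accumulate the total displacement
    and return the homing vector (the vector pointing back to the start)."""
    x = y = 0.0
    for heading, dist in segments:
        x += dist * math.cos(heading)
        y += dist * math.sin(heading)
    return (-x, -y)

# Triangle-completion example: walk 3 m east, turn, walk 4 m north.
# Home lies 5 m away, back towards the south-west corner.
hx, hy = homing_vector([(0.0, 3.0), (math.pi / 2, 4.0)])
```

In the experiments, the interest is precisely in how display and locomotion constraints degrade the accuracy of this accumulated estimate: errors in perceived heading or distance on each leg propagate directly into the final homing vector.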
The advances in Virtual and Augmented Reality technologies, including improvements in the capability to trace human movements and gestures at a low cost easily affordable by small companies, professionals and even amateurs, pave the way for the development of more natural shape interaction and modification applications usable in various industrial and leisure contexts. Early immersive Virtual Reality (VR) systems were highly expensive, and only large automotive and aeronautic companies could afford their costs. Immersive VR systems were mainly used for product evaluation, allowing simulations at 1:1 scale, unlike a desktop screen, which provides only a small-size product visualisation. However, their usage within the product development process was, and still is, requiring additional effort. The use of engineering models in VR environments involves an adaptation process which also includes format conversion and results in a loss of information, with only the shape and a few other attributes being transferred. Moreover, any further shape modification resulting from the evaluation and simulation cannot be done in the VR environment but requires going back to the original CAD (Computer-Aided Design) system.
use and visual design, as well as solicited free-form
feedback for each system in the form of a voluntary text field.
In addition to this declarative user feedback, we also recorded performance aspects (completion times, interaction measurements) and physiological data via oculometry. Eye movements bring a wealth of information: they are overt clues about an observer's attention deployment and cognitive processes in general, and are increasingly being tracked for evaluating visual analytics (Kurzhals et al., 2016). In the context of map reading, measuring gaze allows us to know precisely which map layers participants chose to observe in particular, and at which times. Furthermore, gaze features and their properties, such as saccades and fixations, can be derived, in this case by processing with the toolkit developed for the Salient360! dataset (David et al., 2018), using a velocity-based parsing algorithm (Salvucci and Goldberg, 2000).
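The velocity-based parsing cited above classifies gaze samples by their point-to-point angular velocity; a minimal sketch in the spirit of Salvucci and Goldberg's velocity-threshold identification (I-VT) follows. The threshold value, sampling layout, and labels are illustrative assumptions, not the parameters of the Salient360! toolkit.

```python
import math

def classify_ivt(samples, threshold_deg_s=100.0):
    """Velocity-threshold identification (I-VT): label each gaze sample as
    'fixation' or 'saccade' by comparing the point-to-point angular
    velocity against a fixed threshold.

    samples: list of (t_seconds, x_deg, y_deg) gaze positions.
    """
    labels = ["fixation"]  # the first sample has no preceding velocity
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        velocity = math.hypot(x1 - x0, y1 - y0) / (t1 - t0)
        labels.append("saccade" if velocity > threshold_deg_s else "fixation")
    return labels

# Toy 500 Hz trace: steady gaze, one fast 5-degree jump, steady gaze again.
trace = [(i * 0.002, 0.0, 0.0) for i in range(3)]
trace += [(0.006, 5.0, 0.0)]  # 5 deg in 2 ms -> 2500 deg/s
trace += [(0.008 + i * 0.002, 5.0, 0.0) for i in range(3)]
```

Runs of consecutive "fixation" labels are then typically merged into fixation events, from which durations and positions per map layer can be aggregated.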
that many technologies have an initial learning curve that can be hard to overcome without proper training or help.
As far as VR is concerned, the same observations hold. Beyond psychological barriers that may exist, there is no physical or cognitive barrier to using it; however, users need a period of time to gradually familiarize themselves with the technology, which may be longer for more elderly audiences. Once a good impression of the technology is established through regular usage, the convenience and even enjoyment that it brings can encourage the elderly to adopt a more positive attitude towards this technology. Thus, to ease the introduction of our platform, the next immediate step is to incorporate learnable design into our system, by using familiar and easy-to-learn interaction mechanisms (e.g., voice, point and click), and to deliver guidance either through in-application tutorials or through training procedures that can be delivered, for example, by close family or an experienced low-vision service community to the end users of our systems.
Abstract— The application of augmented reality (AR) technology for assembly guidance is a novel approach in the traditional manufacturing domain. In this paper, we propose an AR approach for assembly guidance using a virtual interactive tool that is intuitive and easy to use. The virtual interactive tool, termed the Virtual Interaction Panel (VirIP), involves two tasks: the design of the VirIPs and the real-time tracking of an interaction pen using a Restricted Coulomb Energy (RCE) neural network. The VirIP includes virtual buttons, which carry meaningful assembly information and can be activated by the interaction pen during the assembly process. A visual assembly tree structure (VATS) is used for information management and assembly instruction retrieval in this AR environment. VATS is a hierarchical tree structure that can be easily maintained via a visual interface. This paper describes a typical scenario for assembly guidance using VirIP and VATS. The main characteristic of the proposed AR system is the intuitive way in which an assembly operator can easily step through a pre-defined assembly plan/sequence without the need for any sensor schemes or markers attached to the assembly components.
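The activation of a virtual button by the tracked pen can be illustrated as a simple hit test in panel coordinates. This is a conceptual sketch only: the class and function names are hypothetical, and the actual system's pen tracking relies on an RCE neural network, which is not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class VirtualButton:
    """Hypothetical stand-in for a VirIP button: a rectangle in panel
    coordinates carrying an assembly instruction."""
    x: float
    y: float
    w: float
    h: float
    instruction: str

    def contains(self, px, py):
        """True if the pen tip (px, py) lies inside this button."""
        return (self.x <= px <= self.x + self.w
                and self.y <= py <= self.y + self.h)

def activated_instruction(buttons, pen_pos):
    """Return the instruction of the button under the tracked pen tip,
    or None if the pen is outside every button on the panel."""
    px, py = pen_pos
    for button in buttons:
        if button.contains(px, py):
            return button.instruction
    return None
```

The instruction string returned here would, in the described system, be a key into the VATS hierarchy from which the corresponding assembly step is retrieved and displayed.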
Toulouse 3 Julius-Maximilians University, Würzburg, Germany
Goal: Exploring the inner dynamics of cells in 3D biological models is of central interest. This is particularly the case for Multi-Cellular Tumor Spheroids (MCTS), in order to design new efficient therapeutic protocols. However, exploring them in vitro is technically challenging. Nowadays, computer science can help by providing increasingly realistic digital models and accessible means of visualization and interaction, in particular by relying on virtual reality (VR) approaches.
Interaction: In this version of the use case, the interaction feature only concerns two aspects: (i) coordination (the three actors being in the same scene, seeing each other, and being able to coordinate their actions) and (ii) use of tools (actors can use tools such as a firehose, enclosure tape, etc.). Other interaction aspects, such as (iii) use of the environment (players being able to open doors, move objects, etc.) or (iv) cooperation (actors being able to work simultaneously and conjointly on specific tasks that would require several actors to be performed), have not been implemented. However, considering the next steps of the EGCERSIS project, these two features are clearly in the roadmap. Finally, it is important to note explicitly that the two-phase scenario presented previously is clearly disconnected from reality. It is full of approximations and mistakes regarding the real practices of professional responders. For instance, it is almost certain that they would have the obligation to explore the area entirely and would discover the additional victim without requiring the ability of the TCMP to interact with sensors. Another example concerns the types of responders: it is almost certain as well that there would not be only three of them investigating at the same time, etc. However, on the one hand, this is mainly a proof of concept (showing that heterogeneous actors using an external decision support tool can interact in a virtual environment to take care of individual and shared objectives), and on the other hand, the main purpose of the exercise was not to be close to reality but to address some technical and scientific challenges (especially on the previously mentioned dimensions) and to open doors for the next steps of the project.