Influence of hand visualization on tool-based motor skills training in an immersive VR simulator

Aylen Ricca, Amine Chellali, Samir Otmane

To cite this version:

Aylen Ricca, Amine Chellali, Samir Otmane. Influence of hand visualization on tool-based motor skills training in an immersive VR simulator. 19th IEEE International Symposium on Mixed and Augmented Reality (ISMAR 2020), Nov 2020, Recife, Brazil. pp. 260–268. doi:10.1109/ISMAR50242.2020.00049. hal-02921192


Influence of hand visualization on tool-based motor skills training in an immersive VR simulator

Aylen Ricca*, Amine Chellali, Samir Otmane

IBISC, Univ Evry, Université Paris-Saclay

*e-mail: aylen.ricca@ibisc.fr, amine.chellali@ibisc.fr, samir.otmane@ibisc.fr

ABSTRACT

Immersive VR technologies offer versatile training tools by recreating real-world situations in a virtual and safe environment and allowing users to have a first-person experience. The design of such training systems requires defining the most critical components to simulate, and to what extent they can be simulated successfully.

One open research question when designing such systems is how to represent the user in the virtual environment, and what the added value of this representation is for training purposes. In this work, we focus on how the user's hand representation in an immersive virtual environment can impact the training of tool-based motor skills.

To investigate this question, we designed a VR trainer for a simple tool-based pick and place task. A user experiment was conducted to evaluate how the representation of the users' real hand movements influences their performance and subjective experience in the virtual environment. For that purpose, the participants performed the task on the VR simulator under two conditions: the presence or absence of their animated virtual hands.

The results of this study show that, although users prefer to have a visual representation of their hands, they achieved similar and correlated performance in the VR system regardless of the hand representation condition. These results suggest that the presence of the user's hand representation is not necessary when performing a tool-based motor skill task in a VR trainer. These findings have practical implications for the design of VR simulators for training motor skills tasks, since adding the users' hand representation may require cumbersome and expensive additional devices.

1 INTRODUCTION

Immersive technologies are gaining popularity beyond the gaming domain. Nowadays, immersive virtual reality (IVR) simulators are also being developed for training purposes in different fields, such as industry [28, 40], healthcare [22], the military [46], and education [23, 33]. For instance, surgery can benefit from VR simulations by allowing medical students and practitioners to train their skills in a safe environment before exposure to real patients [9]. The maintenance industry can also use immersive virtual environments (IVEs) for training maintenance skills, machine assembly procedures, and machine repair [38].

Most current IVR systems require the user to wear a head-mounted display (HMD) and allow them to explore the virtual environment (VE) from a first-person perspective without seeing their own physical body. One open research question in the development of such systems is how the user's own body is represented inside the VE (i.e., the self-avatar). We are particularly interested in how this representation may influence skills training. In technical skills training, such as assembling mechanical parts, handling instruments, or tying knots in surgery, and from a first-person perspective, the avatar representation primarily concerns the user's hands.

This research aims to gain insight into the importance of the user's avatar in a VR trainer when performing technical skills. In particular, we investigate how the user's performance is influenced by the representation of their real hand movements when performing a tool-based motor task inspired by existing surgical trainers.

Our research questions for this work are:

- Does the hand’s virtual representation affect users’ performance for a tool-based motor task in IVR simulators?

- Do users think they have a better sense of presence, better control of the system, and a higher sense of ownership when the real movements of their hands are reproduced in an IVR simulator?

To investigate these questions, a VR prototype was designed and developed to allow the training of a tool-based pick and place task, with the capability of representing the user’s hands. We conducted an experiment to study the influence of visualizing the hands in the VR prototype on the user’s performance and state of presence.

2 BACKGROUND

2.1 Hand representation in virtual environments

The users' body representation has been studied for some time. The rubber hand illusion, initially presented by Botvinick and Cohen in 1998 [7], is a simple experiment performed to understand how our brain resolves visual and perceptual stimuli, which leads to the appropriation of a rubber limb. Recent works have revisited this experiment [6, 16] to explain how visual and tactile cues influence the users' sense of embodiment. Kilteni et al. define the sense of embodiment towards a body as the sense that emerges when that body's properties are processed as if they were properties of one's own body [26]. It can be decomposed into three different dimensions: the sense of body ownership (feeling that the artificial body is one's own body and the source of sensations), the sense of agency (sense of having motor control), and self-location (perceived location of one's body in space) [25]. It has also been shown that there is a high correlation between presence ("sense of being in another environment" [5]) and embodiment [44].

The rubber hand experiment has had a significant impact on understanding the virtual body, and many recent experiments are based on it. Mohler et al. conclude that if we experience a conflict between the visual and proprioceptive position of our hand, we will strongly accept it as being placed where it is seen [34]. Argelaguet et al. have shown that in VR, the sense of agency improves with the increase of the virtual hand control capabilities, and the sense of ownership is related to the visual appearance of the virtual hand [2].

It has also been shown that structural and appearance differences of the virtual hand might affect the sense of ownership. An extra finger can be accepted without limiting controllability, with high reported levels of body ownership [21]. However, having fewer fingers in a virtual hand reduces the feeling of presence for realistic hands, but not for abstract hands [41]. Besides, human hands evoke higher body ownership than abstract representations [30, 35].

However, inverting gender models can decrease the users' level of acceptance and presence [42]. In particular, a personalized virtual hand (a projection of the real hand into the virtual environment) improves the accuracy of object size estimation and increases the sense of ownership compared to a generic hand model [24]. Moreover, users' object-size perception in VR is affected by the size of anthropomorphic hands, but not by non-anthropomorphic ones [35].

To summarize, several works have focused on studying the user's hand representation and its influence on the senses of presence and embodiment. Findings support the inclusion of virtual hands to enhance the user experience in a VR simulator. In addition, the virtual hands' appearance has been shown to have an impact on performance in immersive VR applications. However, it is important to understand the impact of including these hand representations in motor skills tasks.

2.2 Impact of virtual hand representation on motor skill tasks

The user's avatar visualization can also influence motor skills. For instance, Ossmy et al. studied the influence of hand size on short-term motor skill learning of a finger-movement sequence [36]. They concluded that hand size affected performance: performance increased with a larger virtual hand and with a hand close to a 1:1 size ratio. However, these results were observed only when the user was controlling the virtual hand, and not when the user played a spectator role during training.

This result highlights the importance of agency for motor skills performance, which also proved true for distance estimation [34]. Another example was found for a needle insertion VR trainer, where the authors investigated the influence of the virtual hand on the users' feeling of accuracy and sense of realism [45]. This study suggests that having a static virtual hand increased the self-perception of accuracy but limited the overall realism of the environment, with users pointing out that their real hand posture and movement were missing. Moreover, it has been shown that the visualization of limb movements while learning can improve motor task performance [12]. This suggests that if users visualize their own hands' movements while performing the task, they could accelerate the motor skills learning process.

On the other hand, self-avatar fidelity was analyzed for a block arrangement task during interaction with real objects [31]. The results suggest that to increase the sense of presence inside the VE, the kinematic fidelity of the avatar (hand movement) was more important than its visual fidelity. However, the avatar fidelity did not influence the users' performance for this task.

Moreover, performance on a pick and place task was evaluated for different hand representations in virtual grasping. Tracked hand representations (hands that can pass through the objects the user interacts with) improved task completion time. Nonetheless, users preferred more realistic interactions, where the physics between the virtual hand and the virtual objects reflects a real interaction (i.e., the hand visualization does not penetrate the manipulated object) [10].

Finally, other works have used hand visualization in training motor skills without assessing the influence it may have on performance and skills transfer to the real world [4, 11, 39, 43].

To summarize, motor skills training is influenced by visual feedback in general, and particularly by the self-avatar representation. The users' hand representation is an important design choice to enhance users' engagement and state of presence in a VR simulation and may influence motor skills training in such systems. However, its impact on tool-based motor skills training remains an open issue.

In addition to visual feedback, haptic feedback is another aspect to consider when studying tool-based motor skills training. Although visual feedback dominates how stimuli are perceived and drives the multimodal interaction [19, 37], haptic feedback has also been shown to impact motor skills training [29]. Research in the field has shown that motor learning performance is improved in terms of accuracy and trajectory optimization when haptic feedback is used [17], which highlights the importance of including multimodal feedback in VR simulators that aim to train and transfer tool-based motor skills.

2.3 Hand representation technologies

One crucial aspect to consider for representing the user's real hand in the VE is the technology to employ. In general, we can differentiate two main approaches to capture the user's hand movements: optical trackers, such as the Microsoft Kinect or the LeapMotion, or inertial trackers and data gloves (many commercial solutions are available) [15]. The advantage of vision-based systems, such as the LeapMotion, is that they are cheap and easy to use (plug and play). They are usually mounted on HMDs to track the user's hands during interactions in IVR systems. However, they may not be usable in all applications. For instance, they may not be suited for detecting a hand holding a tool, due to occlusions generated by the tool itself.

On the other hand, data gloves are more expensive and require proper calibration and data filtering to obtain acceptable tracking performance. Depending on the model, and mostly on the price, some provide only one sensor per finger for bend detection, requiring an additional solution for positional tracking of the hand. However, they offer the advantage of being robust to occlusion issues when the user manipulates a tool. This suggests that data gloves are more appropriate for tracking the user's hands in IVR environments when the task requires manipulating a tool or when occlusion issues may occur during interactions. It is to be noted that other technologies could also be used to generate the hand representation [24].

The objective of this work is to build on the existing literature, which shows a positive impact of hand presence on performance for direct object manipulation, and to go beyond it by investigating the impact that the hand representation can have on motor task completion in IVR when the user manipulates a tool (indirect object manipulation). For that purpose, we designed an IVR simulator where users could perform a tool-based pick and place task to train a motor skill. We also conducted a user study to investigate our research questions on the influence of hand visualization on users' performance and experience in this system. The users were asked to perform the task on the system under two conditions: presence or absence of the representation of their virtual hands.

3 METHODS

3.1 Working Hypotheses

Previous works have shown that the users' sense of presence increased with a self-avatar representation for tasks involving direct manipulation of objects [31]. More particularly, a higher sense of ownership is associated with higher-fidelity avatar appearance [10], while a higher sense of agency is associated with the users' ability to reproduce more realistic movements with the virtual body [2]. Therefore, we aim to investigate whether these findings also apply when users utilize tools to manipulate objects.

Besides, previous research suggests that adding a virtual hand increases depth perception and gives users additional spatial cues when manipulating tools [45]. Moreover, visual feedback from the arms was reported to be better for movement perception in tool-based tasks than the visualization of an isolated tool [18]. Therefore, we expect the presence of the hands' representation to improve the users' task performance.

Based on the previous observations, we have defined the following working hypotheses for our user study:

• H1) The users will have a higher sense of presence with the hand visualization condition in a tool-based interaction task:

– H1.1) Users in the real hand movement condition would feel a higher sense of ownership than users in the no hand representation condition.


– H1.2) Users in the real hand movement condition would feel a higher sense of agency than users in the no hand representation condition.

• H2) Users' performance (time, accuracy, and errors) would be improved when they have the real hand representation compared to the no hand representation condition.

3.2 Participants

Forty-one participants (26 males, 15 females) from the University (students and staff) enrolled in this study (N=41). The mean age was 35.68 years (SD=11.64). Thirty-one of them were right-handed, 8 left-handed, and 2 ambidextrous. All had normal or corrected-to-normal vision, and 18 wore their correction glasses during the experiment. All had previous video game experience (including smartphone games), with 18 of them being regular players. Twenty-two participants reported previous experience with 3D VEs, and 14 participants had used haptic devices before this experiment (mainly in demonstrations or previous user studies). Twenty-three participants had previously used HMDs, with only 2 of them using them regularly and for no more than 30 minutes per session.

The institutional ethics committee of Université Paris-Saclay (CER Paris-Saclay) approved the experimental protocol before any human subject was enrolled. All the participants involved in the study gave written informed consent before their participation.

3.3 Experimental design

A within-subject design was used, with one independent factor with two levels: the presence of the animated virtual hand (VH) and the absence of the animated virtual hand (NH) (i.e., only the virtual tools visualization) in the VR simulator. All participants performed two sessions on the VR simulator (one per condition). The presentation order of the conditions was counterbalanced to avoid any learning effect. One female participant had to be excluded from the data analysis since she felt uncomfortable using the plastic handles in the VR prototype and could not finish the whole experiment. This left 20 participants who started the experiment with the VH condition (N = 20) and 20 participants who started with the NH condition (N = 20). The sample size was calculated beforehand for a two-tailed t-test for matched pairs (α = 0.05 and β = 0.10) able to detect a medium effect size (d = 0.55), which determined that at least 37 participants were required for the experiment to reach an actual power of 0.90.
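For reproducibility, this a priori sample size computation can be replicated with standard power-analysis routines. The sketch below uses Python's statsmodels as a stand-in (an assumption; the paper does not state which tool was used):

```python
# Sketch of the a priori power analysis described above (assumed tool:
# statsmodels; the paper does not say which software was actually used).
from math import ceil

from statsmodels.stats.power import TTestPower

# Paired two-tailed t-test, medium effect size d = 0.55,
# alpha = 0.05, target power = 0.90 (beta = 0.10).
n = TTestPower().solve_power(effect_size=0.55, alpha=0.05,
                             power=0.90, alternative='two-sided')
print(ceil(n))  # -> 37 participants, matching the value reported above
```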

3.4 Experimental task

For this experiment, we designed and developed a VR simulator for training a motor skill task: a simple tool-based pick, transfer, and place task (PT&P), in which the trainee had to pick a set of small virtual objects from specific positions using a first tool, transfer them from one tool to the other, and place them in their final positions using the second tool.

This task is inspired by the peg transfer task used in physical and VR simulators to train surgeons' bi-manual dexterity and hand-eye coordination skills in laparoscopic surgery [13]. However, in contrast with laparoscopic surgery simulation, where trainees observe their actions on a 2D screen decoupled from their working space, trainees using our system could see the objects and tools in 3D and in the same direction in which they manipulated them.

To train bi-manual dexterity, the simulated task was designed to be symmetrical in terms of tool manipulations, i.e., both hands/tools had to be used to manipulate the same number of objects. Cubes were used as manipulation objects to gather objective measures of placement accuracy (position and rotation). Also, fixing the initial and target positions enforced the bi-manual interaction.

Previous research has shown that the haptic feedback component (senses of touch and kinesthesia) is an essential requirement for tasks involving the manual manipulation of objects [1]. For instance, it was shown for surgical simulations that this source of feedback is crucial for practitioners, who must rely on their haptic skills to avoid errors when interacting with the patient's organs [3, 48]. Therefore, the designed VR prototype included haptic interfaces to render the necessary feedback.

The experimental PT&P task was performed with two forceps tools and six small cubes. The goal was for the participant to grab each cube with one of the forceps, transfer it to the other forceps, and place it on a square target.

At the beginning of each task, the participant had to place the forceps at the initial position and wait for the countdown to start (see Fig. 1a). This ensured that the tools were positioned at the same starting position for all the trials and all the participants.

The initial and target positions of each cube were numbered with the same number (see Fig. 1b). Due to the limited workspace of the haptic devices used, both the initial and final cube positions were carefully chosen to enable full movement with the tools during the whole task. These positions remained the same for all trials and all participants. The cubes' manipulation order had to respect the number sequence, i.e., the participant should first pick cube number 1 and place it on target position number 1, and so on until placing cube number 6.

To make the task more challenging for participants, each cube had a green sticker on one of its faces. Initially, the cube was oriented with the sticker facing upwards. The participant had to place the cube with the sticker facing downwards, regardless of the cube's horizontal orientation (i.e., the sticker side had to be in contact with the target once placed), and to align the cube with the target square (see Fig. 1c). This forced the participant to rotate the cubes during their transfer.
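For illustration, this placement rule (sticker facing down, cube edges aligned with the target square) can be expressed as two orientation checks. The sketch below is a hypothetical reconstruction; the function names, axis conventions (y-up), and tolerance are ours, not taken from the authors' implementation:

```python
# Hypothetical reconstruction of the placement check: sticker face down
# and cube edges aligned with the target square (names/tolerances assumed).
import numpy as np

STICKER_LOCAL = np.array([0.0, 1.0, 0.0])   # sticker initially faces up
WORLD_DOWN = np.array([0.0, -1.0, 0.0])
ANGLE_TOL = np.deg2rad(10.0)                # assumed tolerance

def placement_ok(rotation: np.ndarray) -> bool:
    """rotation: 3x3 matrix giving the cube's world orientation."""
    sticker_world = rotation @ STICKER_LOCAL
    # Check 1: sticker face must point downwards (in contact with target).
    sticker_down = np.dot(sticker_world, WORLD_DOWN) > np.cos(ANGLE_TOL)
    # Check 2: cube edges aligned with the square target, i.e., the cube's
    # yaw is close to a multiple of 90 degrees (square symmetry).
    x_world = rotation @ np.array([1.0, 0.0, 0.0])
    yaw = np.arctan2(x_world[2], x_world[0]) % (np.pi / 2)
    aligned = min(yaw, np.pi / 2 - yaw) < ANGLE_TOL
    return bool(sticker_down and aligned)
```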

The working area was divided into two sides by a wall, with three cubes placed on each side. Each tool was used only to manipulate cubes on one side of the working area: the right tool was used to pick and place cubes on the right side of the wall, and the left tool on the left side. The only zone where both tools were allowed to manipulate a cube at the same time was the exchange zone, on top of the wall (see Fig. 1d). Participants were able to exchange the cube between the tools as many times as necessary.

An example of how all these rules apply is:

• pick cube 1 with the right tool,

• transfer it to the left tool in the transfer zone (taking into account that the green sticker must be placed facing downwards),

• place it with the left tool on target number 1.

Cubes 2 and 3 followed the same rule. Then, for cubes 4 to 6, participants had to use the left tool for picking and the right tool for placing.

The participants were asked to move the cubes as quickly as possible and to place them on the targets as precisely as possible. They were also instructed to avoid errors, namely tool-wall collisions and cube drops. When a cube was dropped, they were allowed to pick it up and continue, using the appropriate tool (left or right) depending on which side of the wall the cube fell on. Finally, either tool could be used to pick it up from the wall.

3.5 Apparatus

VR prototype: It consisted of two physical interfaces, an HTC VIVE HMD providing a first-person perspective, and a pair of data gloves (see Fig. 2). The physical interfaces were used to control two virtual forceps. Two Geomagic Touch haptic devices were used, each providing 6 DOF for position sensing and 3 DOF for force feedback. The interfaces' styluses were removed, and 3D-printed models of forceps handles were attached instead. Each handle added one extra DOF by allowing the tool to be opened and closed. The opening angle was obtained through a potentiometer installed inside each handle and connected to an Arduino Uno board. We used the Noitom Hi5 VR Gloves, with a VIVE tracker attached to each of them to track the users' hand movements. These data gloves provide 9 DOF: 1 DOF for each finger (flexion), an extra DOF for the thumb, and 3 DOF for hand rotation. Three extra DOF for the hand position were obtained through the attached HTC tracker.

Figure 1: (a) The initial position of the forceps determined by the yellow spheres on top of cubes 2 and 5. (b) The initial position of the cubes. (c) The final position of a cube. (d) The left and right tools zones (green and blue, respectively) and the exchange zone (yellow).

In our experiment, the users were required to hold the tools during the whole session. Pre-tests performed with a LeapMotion device gave unstable results, with poor detection due to the hand posture when grabbing the forceps handles attached to the haptic devices. Therefore, and as suggested by our literature review, we favored data gloves over vision-based systems. Although the Noitom VR gloves require recalibration for each user, the procedure lasts only one minute.

The virtual hand representation consisted of a human-like hand with a neutral color. Since the virtual hands' appearance may impact immersive VR applications, we chose a virtual hand model with a realistic shape and realistic movements (obtained through the data glove sensors) while controlling for differences in color and gender.

The VE consisted of six small pickable 3D cubes (dimensions: 2 cm side length, mass: 29 g) with a green sticker on one of the faces, two virtual forceps representing the tools, and the user's hand representation. It also included a delimited working area (dimensions: l = 30.5 cm, w = 22 cm) divided by a wall, with the cubes' initial and target positions specified. The initial cube positions were marked on each side of the wall. Both the initial and final positions of the cubes were numbered.

Figure 2: The VR prototype for the PT&P task. (a) User interacting with the system, (b) physical interfaces, (c) 3D printed handle, forceps model, (d) data gloves with HTC trackers, (e) HTC VIVE HMD, (f) virtual scene: working table, wall, cubes, forceps and hands representation, (g) HTC tracker for the whole setup.

The PT&P VR simulator was designed as a client-server system. The server side was developed in C++ and used the chai3d framework [14] for haptic force feedback simulation and the ODE physics engine for collision detection. The tools' interactions with the virtual cubes and the wall were computed in this application. The positions and orientations of the cubes and tools, and the opening angles of the tools, were continuously sent to the client side through a UDP socket. Haptic feedback was displayed when the virtual tools were in contact with the cubes, the wall, and the table surface, similar to what one can experience when manipulating real objects. The client side was developed using Unity3D (version 2018.3.6) with C#. It received the computed positions of the different components of the VE and rendered them in the HMD. It also communicated with the Hi5 data gloves to obtain the user's finger movements and animated the virtual hands accordingly. The client and server were executed on two different computers (CPU: Intel i7, GPU: GeForce GTX 1060/1070, RAM: 16 GB), directly connected through a UTP cable. The application frame rate was 90 fps for visual rendering and 500-600 fps for haptic rendering.
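To illustrate the client-server split, the sketch below shows a minimal version of the server's per-frame state broadcast. The packet layout, field names, and address are assumptions for illustration; the actual protocol is not described in the paper (and the real server is written in C++):

```python
# Minimal sketch of the server's per-frame state broadcast over UDP.
# The packet layout (field order, little-endian float32 encoding) is an
# assumption; the simulator's real protocol is not documented here.
import socket
import struct

CLIENT_ADDR = ("192.168.0.2", 9000)   # hypothetical client address

def pack_state(cubes, tools):
    """cubes: 6 x (pos xyz, quat xyzw); tools: 2 x (pos, quat, angle)."""
    payload = b""
    for pos, quat in cubes:
        payload += struct.pack("<7f", *pos, *quat)
    for pos, quat, opening_angle in tools:
        payload += struct.pack("<8f", *pos, *quat, opening_angle)
    return payload

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def broadcast(cubes, tools):
    # UDP is unreliable but low-latency, which is acceptable for a 90 fps
    # state stream where each packet fully supersedes the previous one.
    sock.sendto(pack_state(cubes, tools), CLIENT_ADDR)
```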

One of the main challenges faced during the development of this prototype was to match the virtual hands' positions with those of the virtual tools and the user's egocentric view of the VE. The hands' positions were obtained through the HTC trackers attached to each glove, providing the same frame of reference as the user's head (through the HTC VIVE HMD tracking). Also, a wooden platform was built to position the two haptic devices. A third HTC tracker was placed on this fixed platform to track its position and locate it in the same reference frame as the HTC VIVE. The positions of the virtual tools (controlled by the haptic devices) were then mapped to the corresponding positions inside the VE.
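This frame-chaining step can be summarized with homogeneous transforms; a minimal sketch, assuming 4x4 matrices and hypothetical names (not the authors' code):

```python
# Sketch of expressing a haptic-device point in the VIVE world frame by
# chaining transforms through the tracked platform (all names assumed).
import numpy as np

def to_homogeneous(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def tool_tip_in_world(world_T_platform: np.ndarray,
                      platform_T_device: np.ndarray,
                      tip_in_device: np.ndarray) -> np.ndarray:
    """world_T_platform: pose of the platform tracker reported by the VIVE;
    platform_T_device: fixed calibration of the haptic device on the wooden
    platform; tip_in_device: tool tip position in device coordinates."""
    p = np.append(tip_in_device, 1.0)          # homogeneous point
    return (world_T_platform @ platform_T_device @ p)[:3]
```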

3.6 Experimental procedure

Before the experiment, participants were asked to read and sign the consent form and complete the demographics questionnaire. Then, they were asked to read the experimental instructions presenting the prototype and explaining how to use the different devices and how to perform the task. After that, they moved to the VR prototype area. They were asked to put on the gloves and the HMD, to grab the two forceps handles, and to sit comfortably. The instructions were shown on the virtual screen. For the VH condition, a calibration of the data gloves was performed before the familiarization session.

They then started the familiarization session, which consisted of trying the system by performing a pick, transfer, and place of one cube (placed in the middle of the starting zone) on each side of the wall. At this point, the participants were expected to feel comfortable with both the tool manipulation and the task. After the familiarization phase, the experimental session started with three trials of the experimental task on the VR prototype for the first condition. To reduce cognitive load and allow participants to focus their attention on the motor task, the cubes' locations and the order of cubes to manipulate remained the same across trials. At the end of the third trial, the participants were asked to subjectively evaluate their experience with the system in the current condition through the presence-state questionnaire. This procedure was repeated for the other condition. We counterbalanced the order of condition presentation to control for any learning effect (this was also verified through the proper statistical tests). Finally, the participants were asked to answer a system usability questionnaire for the VR simulator. The whole procedure lasted, on average, 45 minutes per participant, with each condition taking 10 to 15 minutes (the VH condition was 2-3 minutes longer due to the data gloves calibration phase). See Fig. 3 for a summary of the experimental procedure.

Figure 3: The study procedure.

3.7 Data collection and analyses

To compare the two VR conditions (VH, NH), objective and subjective measurements were recorded. The user's performance was evaluated through the accuracy of placing the cubes, the task completion time, and the number of errors during the task (the number of cubes dropped and the number of collisions of the tools with the wall). All the data were automatically recorded in a log file for each experimental condition.

3.7.1 Time

The time calculation of the task (total time) started once a tool touched the first cube and ended when the last cube was placed. We also calculated the time for the three subtasks: pick (pick time), transfer (transfer time), and place (place time). The pick subtask for a cube started when the tool touched it in its initial position and ended when the cube was touched by the other tool, which also marks the beginning of the transfer subtask. The transfer subtask was determined by the exchange of the cube in the transfer zone at any moment in the task. Finally, the place subtask consisted of manipulating the cube with the placement tool outside of the transfer zone at any moment in the task. This means that the user could come back to the transfer subtask from the place subtask if necessary, e.g., to correct the cube's orientation.
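As an illustration, these subtask boundaries can be recovered offline from the logged events as a small state machine. The sketch below assumes a simple per-cube event stream; the event names are hypothetical, not the real log format:

```python
# Hypothetical offline segmentation of the event log into pick/transfer/
# place durations, following the subtask definitions above.
def segment_cube(events):
    """events: time-ordered list of (timestamp, name) for a single cube.
    Returns (pick, transfer, place) durations in seconds."""
    durations = {"pick": 0.0, "transfer": 0.0, "place": 0.0}
    phase, t_prev = None, None
    for t, name in events:
        if phase is not None:
            durations[phase] += t - t_prev     # time spent in current phase
        if name == "touch_pick_tool":          # pick subtask starts
            phase = "pick"
        elif name == "touch_second_tool":      # both tools on cube: transfer
            phase = "transfer"
        elif name == "leave_exchange_zone":    # placement tool alone, outside zone
            phase = "place"
        elif name == "enter_exchange_zone":    # user may return to transfer
            phase = "transfer"
        elif name == "placed":                 # cube placed: stop accumulating
            phase = None
        t_prev = t
    return durations["pick"], durations["transfer"], durations["place"]
```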

3.7.2 Accuracy

The distance and the minimal rotation angle difference between each cube's center and the corresponding target's center were used to measure each cube's placement accuracy. The accuracy of the task was determined as the average of the six cubes' distance errors (position distance) and rotation differences (rotation difference).
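For illustration, the two per-cube accuracy measures can be computed from logged positions and orientations as follows. This is a sketch under the assumption that orientations were logged as unit quaternions; the cube's rotational symmetries, which would further reduce the angle, are ignored for brevity:

```python
# Sketch of the per-cube accuracy measures: Euclidean distance between
# cube and target centers, and minimal rotation angle between their
# orientations (unit quaternions assumed; cube symmetries ignored).
import numpy as np

def position_distance(cube_center, target_center):
    return float(np.linalg.norm(np.asarray(cube_center) - np.asarray(target_center)))

def rotation_difference(q_cube, q_target):
    """Minimal angle (radians) between two unit quaternions.
    The |dot| accounts for the q / -q double cover."""
    dot = abs(float(np.dot(q_cube, q_target)))
    return 2.0 * np.arccos(np.clip(dot, 0.0, 1.0))

def task_accuracy(cubes, targets):
    """Task accuracy = averages over the six cubes, as described above."""
    dists = [position_distance(c["pos"], t["pos"]) for c, t in zip(cubes, targets)]
    rots = [rotation_difference(c["quat"], t["quat"]) for c, t in zip(cubes, targets)]
    return float(np.mean(dists)), float(np.mean(rots))
```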

3.7.3 Error rates

Error 1: An error was counted each time a cube was dropped during the task (total drops).

Error 2: The second error measure corresponded to the number of times one of the tools touched the wall (total wall collisions).

3.7.4 Subjective data

The subjective data consisted of responses to a "Presence" questionnaire for each condition, using a 5-point Likert scale. The questions covered eight different criteria: realism, possibility to act, quality of the interface, possibility to examine, self-evaluation of performance, haptics, ownership, and agency. Some of the questions (Q1-Q21) were extracted from the State of Presence Questionnaire [47], and the rest (Q22-Q25) were inspired by questionnaires used in the literature [21]. Participants were also asked to rate which condition they preferred (VH or NH).

Finally, the System Usability Scale (SUS) [8] was used to obtain a general usability score for the VR system, which will be used to improve the system in future iterations. Both questionnaires ("Presence" and SUS) are validated questionnaires used in the literature.
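For reference, the standard SUS scoring scheme [8] that yields the 0-100 score reported in Sect. 4.2 can be sketched as follows:

```python
# Standard SUS scoring: ten 1-5 Likert items; odd-numbered items
# contribute (response - 1), even-numbered items contribute
# (5 - response); the sum is scaled by 2.5 to give a 0-100 score.
def sus_score(responses):
    """responses: list of ten integers in [1, 5], in questionnaire order."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))  # index 0 = item 1 (odd)
    return total * 2.5
```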

3.7.5 Data analyses

All data analyses were performed using R version 3.6.0 (R Core Team, 2019) in RStudio (RStudio Team, 2016, Boston, MA) with the appropriate statistical tests. We used a confidence level of 95% for all statistical analyses.

First, we checked the normality assumption of the data through the Shapiro-Wilk test for the time, accuracy, and the two error measures on the VR prototype data. The results indicate that the total time, place time, position distance, and rotation difference follow a normal distribution. Therefore, the paired-samples t-test was used for these variables to compare the mean values between the two conditions in the VR prototype. The non-parametric Wilcoxon signed-rank test was used to compare the means for all dependent variables that were not normally distributed.

In addition, we used Pearson's correlation test (for variables following a normal distribution) and Spearman's correlation test (for the other variables) to analyze, for each participant, the correlation between their performance in the two VR conditions. This test was used to determine whether each participant achieved the same task performance in both conditions, regardless of differences in personal abilities between participants. To further investigate the similarity in performance between the conditions, equivalence tests (TOST [27]) were also carried out for each dependent variable.
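The per-variable analysis pipeline can be summarized as below. This sketch uses Python's SciPy and statsmodels as stand-ins for the R routines actually used, and the equivalence bound is a placeholder:

```python
# Sketch of the per-variable analysis pipeline described above, using
# SciPy/statsmodels as stand-ins for the R functions actually used.
import numpy as np
from scipy import stats
from statsmodels.stats.weightstats import ttost_paired

def analyze(vh: np.ndarray, nh: np.ndarray, eq_bound: float):
    """vh, nh: paired per-participant means for the two conditions.
    eq_bound: equivalence bound for TOST (e.g., derived from the
    smallest effect size of interest; a placeholder here)."""
    normal = stats.shapiro(vh - nh).pvalue > 0.05
    if normal:
        comparison = stats.ttest_rel(vh, nh)    # paired t-test
        correlation = stats.pearsonr(vh, nh)
    else:
        comparison = stats.wilcoxon(vh, nh)     # Wilcoxon signed-rank test
        correlation = stats.spearmanr(vh, nh)
    # TOST: a significant p-value means the two conditions are
    # statistically equivalent within [-eq_bound, +eq_bound].
    tost_p, _, _ = ttost_paired(vh, nh, -eq_bound, eq_bound)
    return comparison, correlation, tost_p
```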

Finally, the non-parametric Wilcoxon signed-rank test was used to compare the mean scores of the "Presence" questionnaire data (ordinal data). The general mean score is provided for the SUS.

4 RESULTS

4.1 Objective measures

The plots for all the performance measures for both conditions are detailed in Fig. 4.

4.1.1 Time

The paired-sample t-test shows no significant effect of the condition on the mean total time [t = 0.001, p = 0.999] nor on the mean place time [t = -1.29, p = 0.206]. The non-parametric Wilcoxon test shows no significant effect of the condition on the mean pick time [Z = -0.61, p = 0.541], nor on the mean transfer time [Z = -0.36, p = 0.722].

Figure 4: The median value (with inter-quartile range, Q1-Q3) for all measures in the VR prototype with VH and NH conditions. From left to right: time measures; accuracy: rotation and distance; and errors: drops and wall collisions.

In addition, Pearson's correlation test shows a moderate correlation between the mean total times in both conditions [r = 0.374, p = 0.018] and a high correlation between the place times [r = 0.618, p < 0.001]. Spearman's rank correlation test shows a high correlation between the mean pick times [r = 0.513, p = 0.001] and a moderate correlation between the transfer times [r = 0.469, p = 0.002] in both conditions.

Furthermore, the equivalence test was significant for the total time [t(39) = -1.89, p = 0.033], pick time [t(39) = -2.24, p = 0.016], transfer time [t(39) = -1.93, p = 0.030], and place time [t(39) = 2.17, p = 0.018].

These results suggest that participants spent an equivalent amount of time performing the task in both conditions, globally and for each subtask.

4.1.2 Accuracy

The paired-sample t-test shows no significant effect of the condition on the mean position distance [t = 1.04, p = 0.305], nor on the mean rotation difference [t = 0.02, p = 0.983].

In addition, Pearson's correlation test shows a high correlation between the mean position distances in both conditions [r = 0.672, p < 0.001], and a high correlation between the mean rotation differences in both conditions [r = 0.787, p < 0.001].

Furthermore, the equivalence test was significant for both the position distance [t(39) = -3.36, p < 0.001] and the rotation difference [t(39) = -3.20, p = 0.001].

We can observe that, regardless of personal motor skills, participants achieved the same accuracy in both conditions.

4.1.3 Error rates

The non-parametric Wilcoxon test shows no significant effect of the condition on the mean total drops [Z = -0.20, p = 0.838], nor on the mean total wall collisions [Z = -1.00, p = 0.317].

In addition, Spearman's rank correlation test shows a high correlation between the mean total drops [r = 0.707, p < 0.001], and a moderate correlation between the total wall collisions [r = 0.426, p = 0.006] in both conditions.

Furthermore, the equivalence test was significant for both error measures: [t(39) = 2.21, p = 0.017] and [t(39) = 2.18, p = 0.018] for total drops and total wall collisions, respectively.

These results show that error rates remained equivalent between both conditions too.

4.2 Subjective data

The analysis of the grouped questions through non-parametric Wilcoxon tests shows no significant effect of the condition on any of the following criteria: realism, the possibility to act (navigate and manipulate inside the environment), the quality of the interface, the possibility to examine, self-evaluation of performance, haptics, and the sense of ownership. However, a significant effect of the condition on the sense of agency can be observed [Z = -2.29, p = 0.022], with the mean score being significantly higher in the VH condition. Results are summarized in Table 1.

On the other hand, the non-parametric Wilcoxon tests show a significant effect of the condition on the participants' mean scores for questions Q8 ("Were you able to anticipate what would happen next in response to the actions that you performed?") [Z = -1.97, p = 0.049], Q14 ("How much delay did you experience between your actions and expected outcomes?") [Z = -2.00, p = 0.045], and Q24 ("I felt that I was losing control of my hand when the virtual (hand/tool) was not responding correctly.") [Z = -2.32, p = 0.020], with the mean scores being significantly higher in the VH condition for Q8 and Q24, and in the NH condition for Q14. No significant effects were found for the other questions.

Table 1: Results of the grouped questions criteria mean comparison.

Criteria               | VH Mean (SD) | NH Mean (SD) | Z     | p
-----------------------|--------------|--------------|-------|-------
Realism                | 3.65 (0.60)  | 3.67 (0.59)  | -0.21 | 0.837
Possibility to act     | 3.75 (0.53)  | 3.73 (0.64)  | -0.18 | 0.854
Quality of interface   | 2.47 (0.66)  | 2.48 (0.71)  | -0.19 | 0.854
Possibility to examine | 4.08 (0.63)  | 4.03 (0.61)  | -0.15 | 0.885
Self performance       | 3.65 (0.66)  | 3.77 (0.58)  | -0.82 | 0.415
Haptic                 | 3.64 (0.86)  | 3.60 (0.71)  | -0.43 | 0.667
Ownership              | 3.58 (1.03)  | 3.60 (1.06)  | -0.10 | 0.918
Agency                 | 3.17 (0.59)  | 2.91 (0.86)  | -2.29 | 0.022*

*p < 0.05

Concerning the preference for the visualization of the hand, 37.5% of the participants preferred the visualization of their hands, 32.5% preferred to visualize only the virtual tools, and 30% felt no difference between the two options.

Finally, the SUS score has a mean value of 74.25 (SD = 13.61), which corresponds to a grade B ("Good", percentile range 70-79) on the usability scale.

5 DISCUSSION

The results show that although a plurality of participants (37.5%) preferred to visualize their virtual hands, participants achieved the same performance in both conditions (presence and absence of the hands). Indeed, there was no significant difference between the two conditions for any objective measure (time, accuracy, and error measures). Moreover, the results show that all of these measures present a significant high-to-moderate correlation between the two experimental conditions, and equivalent results. This does not allow us to validate our hypothesis on the participants' performance with hand visualization (H2). These results contradict previous research, which has shown that the visualization of one's upper limb movements during a manual task could be beneficial for the training of motor skills [12]. A possible explanation for this finding is that, for our pick and place task, the users were holding the physical handles during the whole experimental session. This might have generated a form of embodiment beyond the body itself, allowing the participants to incorporate the tools as if they were part of their body [32], as if the hands were elongated to the extremity of the tools. This phenomenon is coherent with the "ready-to-hand" (zuhanden) concept introduced by Martin Heidegger [20]. By "ready-to-hand", Heidegger describes a tool that has become invisible, receding into the background of the work, so that we are no longer conscious of it. This, in turn, allows our concentration to be focused on the work as opposed to the tools that we use to perform it. In our case, the participants were more focused on the objects to be manipulated and the task rather than on their hands. These results suggest that the visualization of the user's animated hand is not necessary to perform a simple tool-based motor task in a VR simulator. This is an important finding for the design of IVR motor skills simulators, since obtaining a high-fidelity hand animation may be technically challenging and expensive. Furthermore, hand tracking devices, such as data gloves or optical cameras, add a small delay to the simulation. For our simulator, a delay was indeed perceived by the users in the visual hand condition (Q14) compared to the tools-only visualization.

On the other hand, the subjective data analyses show that users feel a higher sense of agency when visualizing their hands' real movements in the VE. This finding is coherent with previous results on existing VR simulators that include fully animated hands for motor skill training [34, 36]. This is also supported by responses to question 8 (Q8) of the presence questionnaire, where participants in the hand condition felt more confident controlling the tools and events in the simulator. Furthermore, the fact that participants experienced a higher sense of agency with the representation of their hands' real movements reinforces the importance of the kinematic fidelity of the avatar for enhancing the sense of embodiment.

On the contrary, the sense of ownership score was not significantly different between the two conditions. As supported by previous research, visual fidelity correlates with the sense of ownership. In our case, we used human-like hands but kept a neutral skin color (which could be associated with wearing surgical gloves). To increase the sense of ownership, personalized gender- and skin-matched hands could be proposed to the users. Moreover, the hand and object sizes could also be adjusted, as hand appearance was reported to influence object size estimation [24, 35]. Therefore, H1 is only partially validated: H1.2 (agency) is supported, while H1.1 (ownership) is not.

Finally, in terms of usability, the SUS questionnaire yielded a good score (74.25), which supports the choices made for the design of this VR pick and transfer trainer.

6 CONCLUSIONS AND FUTURE WORKS

In this paper, we have investigated the impact of self-avatar visualization in a VR simulator for tool-based motor skills training.

We have designed and developed a VR simulator for a pick, transfer, and place task. Our main objective was to investigate the impact of the users' hand visualization on task performance and user experience; in other words, whether having animated hands reproducing the users' movements allows them to achieve better performance than only visualizing the tools. A user experiment was designed and conducted to compare the users' performance between the two conditions (hand visualization and tools visualization) in the VR simulator. Results show that although users prefer to visualize their animated hands, they achieved the same performance in both VR conditions. These results suggest that an animated representation of the user's hand is not necessary when performing a tool-based motor skill task in a VR trainer. These findings have practical implications for the design of VR simulators for training tool-based motor skills tasks: adding the users' hand representation may require cumbersome and expensive additional devices. Therefore, to reduce the costs of VR trainers, designers can benefit from the findings of this study to build cost-effective and less constraining VR simulators.

One limitation of our VR prototype is related to the plastic handles connected to the haptic interfaces. Some participants mentioned that they were uncomfortable and did not provide gripping forces, which decreased the system's usability. Besides, several handles had to be made because they were fragile and broke after a couple of experimental sessions. To overcome these ergonomics and comfort issues, we have already worked on an improved model based on the 3D scan of an ergonomic scissors handle, with curved edges and a more robust design. Nonetheless, solutions remain to be explored for providing gripping force feedback through them.

Following the design objective of our system, we focused on tool-based simulation for motor skills training. Such skills are essential in domains such as surgery, where transferring the learned skills to the real world is critical. In this line, the literature has shown that haptic feedback is beneficial for surgical skills training, since surgeons rely on their haptic skills to perform surgical tasks correctly. Therefore, our simulator included multimodal (haptic and visual) feedback, and the findings of our work can be generalized to other tool-based tasks that require this mixture of physical and virtual elements, for which IVR motor skills trainers are designed, and where the main goal is to transfer the same skill to the equivalent real-world task.

A limitation that prevents further generalization of these results to other motor skills and VR simulators can be attributed to multisensory integration. Indeed, further studies are required to understand the contribution of each component: the visual feedback that originates from the tool and the hand, the functional characteristics of the tool (shape and dimensions) obtained through real tool manipulation, and the force feedback perceived through the physical device (tool). Therefore, it would be interesting to study in the future the impact of hand representation in the absence of haptic feedback, the use of purely virtual tools instead of physical ones, and the comparison between generic VR controllers and matched physical and virtual tools.

We are also planning to run a longitudinal study to determine whether the findings of the current study apply to long-term training of motor skills in IVR. For that purpose, we will conduct a learning curve study on the VR simulator, with pre- and post-tests, to determine the impact of hand representation on training tool-based motor skills. We will also study the transfer of skills to the real world by comparing the users' performance on the VR simulator to a similar physical system. The results of these studies will allow us to extract guidelines for the design of IVR simulators for training tool-based motor skills. Examples of possible applications of these systems are the training of technical skills in surgery and industry.

ACKNOWLEDGMENTS

The authors would like to thank all the volunteers who participated in the experimental study. We would also like to thank Olivier Connessons and Christophe Luquin from the University technical team for their help in the design and development of the forceps handles used in the VR prototype. Aylen Ricca received a Ph.D. grant from the University of Evry. We also acknowledge support from Genopole.

REFERENCES

[1] R. Adams, D. Klowden, and B. Hannaford. Virtual training for a manual assembly task. Haptics-e, the Electronic Journal of Haptics Research, 2(2):1–7, 2001.

[2] F. Argelaguet, L. Hoyet, M. Trico, and A. Lécuyer. The role of interaction in virtual embodiment: Effects of the virtual hand representation. In 2016 IEEE Conference on Virtual Reality (VR), pp. 3–10. IEEE, 2016. doi:10.1109/VR.2016.7504682

[3] C. Basdogan, S. De, J. Kim, M. Muniyandi, H. Kim, and M. A. Srinivasan. Haptics in minimally invasive surgical simulation and training. IEEE Computer Graphics and Applications, 24(2):56–64, 2004. doi:10.1109/MCG.2004.1274062

[4] A. U. Batmaz, M. de Mathelin, and B. Dresp-Langley. Effects of image size and structural complexity on time and precision of hand movements in head mounted virtual reality. In 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 167–174. IEEE, 2018. doi:10.1109/VR.2018.8446217

[5] F. Biocca, M. Levy, and J. Lawrence. Communication in the age of virtual reality. Psyccritiques, 42(2), 1997.

[6] O. Blanke, M. Slater, and A. Serino. Behavioral, neural, and computational principles of bodily self-consciousness. Neuron, 88(1):145–166, 2015. doi:10.1016/j.neuron.2015.09.029

[7] M. Botvinick and J. Cohen. Rubber hands 'feel' touch that eyes see. Nature, 391(6669):756, 1998. doi:10.1038/35784

[8] J. Brooke. SUS: a "quick and dirty" usability scale. Usability Evaluation in Industry, 189(194):4–7, 1996.

[9] C. Buckley, E. Nugent, D. Ryan, and P. Neary. Virtual reality – a new era in surgical training. In C. Eichenberg, ed., Virtual Reality in Psychological, Medical and Pedagogical Applications, chap. 7, pp. 139–166. InTech, 2012. doi:10.5772/46415

[10] R. Canales, A. Normoyle, Y. Sun, Y. Ye, M. D. Luca, and S. Jörg. Virtual grasping feedback and virtual hand ownership. In ACM Symposium on Applied Perception 2019, SAP '19, pp. 1–9. ACM, 2019. doi:10.1145/3343036.3343132

[11] P. Carlson, A. Peters, S. B. Gilbert, J. M. Vance, and A. Luse. Virtual training: Learning transfer of assembly tasks. IEEE Transactions on Visualization and Computer Graphics, 21(6):770–782, 2015. doi:10.1109/TVCG.2015.2393871

[12] J. C. Castro-Alonso, P. Ayres, and F. Paas. Dynamic visualisations and motor skills. In W. Huang, ed., Handbook of Human Centric Visualization, chap. 6, pp. 551–580. Springer, 2014. doi:10.1007/978-1-4614-7485-2_22

[13] A. Chellali, H. Mentis, A. Miller, W. Ahn, V. S. Arikatla, G. Sankaranarayanan, S. De, S. D. Schwaitzberg, and C. G. L. Cao. Achieving interface and environment fidelity in the Virtual Basic Laparoscopic Surgical Trainer. International Journal of Human-Computer Studies, 96:22–37, 2016. doi:10.1016/j.ijhcs.2016.07.005

[14] F. Conti, F. Barbagli, R. Balaniuk, M. Halg, C. Lu, D. Morris, L. Sentis, J. Warren, O. Khatib, and K. Salisbury. The CHAI libraries. In Proceedings of Eurohaptics 2003, pp. 496–500, 2003.

[15] S. Dargar, R. Kennedy, W. Lai, V. Arikatla, and S. De. Towards immersive virtual reality (iVR): a route to surgical expertise. Journal of Computational Surgery, 2(1):2, 2015. doi:10.1186/s40244-015-0015-8

[16] H. H. Ehrsson. How many arms make a pair? Perceptual illusion of having an additional limb. Perception, 38(2):310–312, 2009. doi:10.1068/p6304

[17] B. L. Grant, P. C. Yielder, T. A. Patrick, B. Kapralos, M. Williams-Bell, and B. A. Murphy. Audiohaptic feedback enhances motor performance in a low-fidelity simulated drilling task. Brain Sciences, 10(1):21, 2020. doi:10.3390/brainsci10010021

[18] M. Guerraz, A. Breen, L. Pollidoro, M. Luyat, and A. Kavounoudias. Contribution of visual motion cues from a held tool to kinesthesia. Neuroscience, 388:11–22, 2018. doi:10.1016/j.neuroscience.2018.06.048

[19] D. Hecht and M. Reiner. Sensory dominance in combinations of audio, visual and haptic stimuli. Experimental Brain Research, 193(2):307–314, 2009. doi:10.1007/s00221-008-1626-z

[20] M. Heidegger. Sein und Zeit (Being and Time), trans. J. Macquarrie and E. Robinson. Harper & Row, 7th ed., 1927.

[21] L. Hoyet, F. Argelaguet, C. Nicole, and A. Lécuyer. "Wow! I have six fingers!": Would you accept structural changes of your hand in VR? Frontiers in Robotics and AI, 3(27):1–12, 2016. doi:10.3389/frobt.2016.00027

[22] T. Huber, M. Paschold, C. Hansen, T. Wunderling, H. Lang, and W. Kneist. New dimensions in surgical training: immersive virtual reality laparoscopic simulation exhilarates surgical staff. Surgical Endoscopy, 31(11):4472–4477, 2017. doi:10.1007/s00464-017-5500-6

[23] L. Jensen and F. Konradsen. A review of the use of virtual reality head-mounted displays in education and training. Education and Information Technologies, 23(4):1515–1529, 2018. doi:10.1007/s10639-017-9676-0

[24] S. Jung, G. Bruder, P. J. Wisniewski, C. Sandor, and C. E. Hughes. Over my hand: Using a personalized hand in VR to improve object size estimation, body ownership, and presence. In Proceedings of the Symposium on Spatial User Interaction, SUI '18, pp. 60–68. ACM, 2018. doi:10.1145/3267782.3267920

[25] K. Kilteni, R. Groten, and M. Slater. The sense of embodiment in virtual reality. Presence: Teleoperators and Virtual Environments, 21(4):373–387, 2012. doi:10.1162/PRES_a_00124

[26] K. Kilteni, J.-M. Normand, M. V. Sanchez-Vives, and M. Slater. Extending body space in immersive virtual reality: a very long arm illusion. PLoS ONE, 7(7):1–15, 2012. doi:10.1371/journal.pone.0040867

[27] D. Lakens, A. M. Scheel, and P. M. Isager. Equivalence testing for psychological research: A tutorial. Advances in Methods and Practices in Psychological Science, 1(2):259–269, 2018. doi:10.1177/2515245918770963

[28] G. Lawson, D. Salanitri, and B. Waterfield. Future directions for the development of virtual reality within an automotive manufacturer. Applied Ergonomics, 53:323–330, 2016.

[29] C.-L. Lin, F.-Z. Shaw, K.-Y. Young, C.-T. Lin, and T.-P. Jung. EEG correlates of haptic feedback in a visuomotor tracking task. NeuroImage, 60(4):2258–2273, 2012. doi:10.1016/j.neuroimage.2012.02.008

[30] L. Lin and S. Jörg. Need a hand? How appearance affects the virtual hand illusion. In Proceedings of the ACM Symposium on Applied Perception, SAP '16, pp. 69–76. Association for Computing Machinery, 2016. doi:10.1145/2931002.2931006

[31] B. Lok, S. Naik, M. Whitton, and F. P. Brooks. Effects of handling real objects and self-avatar fidelity on cognitive task performance and sense of presence in virtual environments. Presence: Teleoperators and Virtual Environments, 12(6):615–628, 2003. doi:10.1162/105474603322955914

[32] A. Maravita and A. Iriki. Tools for the body (schema). Trends in Cognitive Sciences, 8(2):79–86, 2004. doi:10.1016/j.tics.2003.12.008

[33] Z. Merchant, E. T. Goetz, L. Cifuentes, W. Keeney-Kennicutt, and T. J. Davis. Effectiveness of virtual reality-based instruction on students' learning outcomes in K-12 and higher education: A meta-analysis. Computers & Education, 70:29–40, 2014. doi:10.1016/j.compedu.2013.07.033

[34] B. J. Mohler, S. H. Creem-Regehr, W. B. Thompson, and H. H. Bülthoff. The effect of viewing a self-avatar on distance judgments in an HMD-based virtual environment. Presence: Teleoperators and Virtual Environments, 19(3):230–242, 2010. doi:10.1162/pres.19.3.230

[35] N. Ogawa, T. Narumi, and M. Hirose. Virtual hand realism affects object size perception in body-based scaling. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 519–528. IEEE, 2019. doi:10.1109/VR.2019.8798040

[36] O. Ossmy and R. Mukamel. Short term motor-skill acquisition improves with size of self-controlled virtual hands. PLoS ONE, 12(1), 2017. doi:10.1371/journal.pone.0168520

[37] M. I. Posner, M. J. Nissen, and R. M. Klein. Visual dominance: an information-processing account of its origins and significance. Psychological Review, 83(2):157, 1976. doi:10.1037/0033-295X.83.2.157

[38] M. Poyade. Motor Skill Training Using Virtual Reality and Haptic Interaction – A Case Study in Industrial Maintenance. PhD thesis, University of Malaga, 2013.

[39] Y. Pulijala, M. Ma, M. Pears, D. Peebles, and A. Ayoub. Effectiveness of immersive virtual reality in surgical training — a randomized control trial. Journal of Oral and Maxillofacial Surgery, 76(5):1065–1072, 2018. doi:10.1016/j.joms.2017.10.002

[40] R. Sacks, A. Perlman, and R. Barak. Construction safety training using immersive virtual reality. Construction Management and Economics, 31(9):1005–1017, 2013. doi:10.1080/01446193.2013.828844

[41] V. Schwind, P. Knierim, L. Chuang, and N. Henze. "Where's Pinky?" The effects of a reduced number of fingers in virtual reality. In Proceedings of the Annual Symposium on Computer-Human Interaction in Play, CHI PLAY '17, pp. 507–515. Association for Computing Machinery, 2017. doi:10.1145/3116595.3116596

[42] V. Schwind, P. Knierim, C. Tasci, P. Franczak, N. Haas, and N. Henze. "These are not my hands!" Effect of gender on the perception of avatar hands in virtual reality. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, CHI '17, pp. 1577–1582. Association for Computing Machinery, 2017. doi:10.1145/3025453.3025602

[43] D. Sportillo, G. Avveduto, F. Tecchia, and M. Carrozzino. Training in VR: a preliminary study on learning assembly/disassembly sequences. In International Conference on Augmented and Virtual Reality, AVR 2015, pp. 332–343. Springer, 2015. doi:10.1007/978-3-319-22888-4_24

[44] M. Usoh, K. Arthur, M. C. Whitton, R. Bastos, A. Steed, M. Slater, and F. P. Brooks Jr. Walking > walking-in-place > flying, in virtual environments. In Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, pp. 359–364. ACM Press/Addison-Wesley Publishing Co., 1999. doi:10.1145/311535.311589

[45] D. Van Nguyen, S. B. Lakhal, and A. Chellali. Preliminary evaluation of a virtual needle insertion training system. In 2015 IEEE Conference on Virtual Reality (VR), pp. 247–248. IEEE, 2015. doi:10.1109/VR.2015.7223388

[46] R. Webster. Declarative knowledge acquisition in immersive virtual learning environments. Interactive Learning Environments, 24(6):1319–1333, 2016. doi:10.1080/10494820.2014.994533

[47] B. G. Witmer and M. J. Singer. Measuring presence in virtual environments: a presence questionnaire. Presence, 7(3):225–240, 1998. doi:10.1162/105474698565686

[48] M. Zhou, S. Tse, A. Derevianko, D. Jones, S. Schwaitzberg, and C. Cao. Effect of haptic feedback in laparoscopic surgery skill acquisition. Surgical Endoscopy, 26(4):1128–1134, 2012. doi:10.1007/s00464-011-2011-8
