ORIGINAL RESEARCH article

Front. Virtual Real., 18 January 2021
Sec. Virtual Reality and Human Behaviour
Volume 1 - 2020 | https://doi.org/10.3389/frvir.2020.573167

Studying the Role of Haptic Feedback on Virtual Embodiment in a Drawing Task

  • 1Univ. Lille, Inria, CNRS, Centrale Lille, UMR 9189 CRIStAL, Lille, France
  • 2Inria Rennes - Bretagne Atlantique, Rennes, France
  • 3Institut Universitaire de France, Paris, France

The role of haptic feedback on virtual embodiment is investigated in this paper in a context of active and fine manipulation. In particular, we explore which haptic cue, of varying ecological validity, has the greater influence on virtual embodiment. We conducted a within-subject experiment with 24 participants and compared self-reported embodiment over a humanoid avatar during a coloring task under three conditions: force feedback, vibrotactile feedback, and no haptic feedback. In the experiment, force feedback was more ecological, as it matched reality more closely, while vibrotactile feedback was more symbolic. Taken together, our results show a significant superiority of force feedback over no haptic feedback regarding embodiment, and a significant superiority of force feedback over the other two conditions regarding subjective performance. These results suggest that more ecological feedback is better suited to eliciting embodiment during fine manipulation tasks.

1. Introduction

A key factor of user experience in virtual reality (VR) is virtual embodiment, the “sense that emerges when [a body]'s properties are processed as if they were the properties of one's own biological body” (Kilteni et al., 2012). Embodiment is often studied when brought forth through visuomotor and/or visuotactile integration (Slater et al., 2009; Sanchez-Vives et al., 2010; Kokkinara and Slater, 2014; Kokkinara et al., 2015). Haptic perception is often decomposed into kinesthetic and tactile information. This dichotomy allows for different types of stimulation within immersive virtual environments (IVEs), mainly force feedback and vibrotactile feedback. The integration of haptic feedback is now attracting attention with regard to embodiment (Raz et al., 2008; Choi et al., 2016; Frohner et al., 2018; Krogmeier et al., 2019), but studies on that topic have not combined VR with the various forms of haptic feedback to study embodiment. As such, it is still unclear how different kinds of haptic feedback can influence the sense of embodiment (SoE) in VR.

In this paper, we explore the effect of different kinds of haptic feedback on the sense of embodiment in virtual environments during a drawing task. In particular, the question is whether one kind of haptic feedback is better suited to the designed task in terms of embodiment and performance. A controlled experiment, with interactions similar to those in the study by Burin et al. (2019), was designed in which participants could freely interact with the environment and had to color drawings in a limited amount of time. Two different kinds of feedback were evaluated: force and vibrotactile. These two conditions were compared to a control condition with no haptic feedback. Users were mainly requested to interact with virtual objects that are palpable in reality. Thus, force feedback matched reality more closely and was a more ecological kind of feedback, whereas vibrotactile feedback could be considered symbolic. Virtual embodiment was studied through its three main subcomponents, ownership, agency, and self-location, and through tactile sensations. Workload and performance were also assessed. Participants reported their perceived level of embodiment and workload through questionnaires, and performance was assessed by analyzing the drawings produced. The main hypothesis was that haptic feedback would increase the SoE. It was further hypothesized that the more ecological feedback would increase embodiment more than the symbolic feedback, i.e., that force feedback would provide higher embodiment than vibrotactile feedback. Results show that embodiment is related to the form of haptic feedback, as force feedback brings about a significantly higher overall sense of embodiment compared to no haptic feedback. Force feedback also significantly reduces subjective workload compared to either vibrotactile or no haptic feedback. The contribution of this study will help design haptic interactions with regard to embodiment in IVEs.

2. Background and Related Work

In this section, we review previous studies that tackle haptic feedback in relation to embodiment in virtual environments, and more particularly in VR.

Embodiment is usually meant to encompass motor control and affective attachment toward a body (De Vignemont, 2011; Kilteni et al., 2012). It was historically laid out through the rubber-hand illusion (RHI) paradigm, proposed by Botvinick and Cohen (1998). They investigated the interaction between tactile stimulation, proprioception, and vision during the emergence of the feeling of embodiment. They placed a life-sized rubber hand in front of participants. They then stroked both the fake limb and the participant's real hand, which was hidden from view. The synchronous visuo-tactile stimulation was enough to bring about a sense of embodiment toward the fake limb.

In the remainder of this paper, the definition of embodiment proposed by Kilteni et al. is adopted: “SoE (Sense of Embodiment) toward a body B is the sense that emerges when B's properties are processed as if they were the properties of one's own biological body” (Kilteni et al., 2012). For additional reading regarding embodiment, we refer to the in-depth studies by Kilteni et al. (2012) and De Vignemont (2011).

The following subsections further discuss the structure of embodiment proposed by Kilteni et al. (2012) and the factors that enhance or enable its subcomponents in IVEs.

2.1. Ownership

Ownership is defined by Gallagher (2000) as the self-attribution of a body that is the source of felt sensations, for example, the perception that the body is moving regardless of the will to move. The RHI (Botvinick and Cohen, 1998), and later the virtual hand illusion (VHI) (Slater et al., 2009; Sanchez-Vives et al., 2010), led to a better understanding of body ownership and limb embodiment. Tsakiris (2010) proposed that the sense of ownership comes forth from a mix of bottom-up and top-down factors. Bottom-up factors refer to information arising from our sensory organs, such as tactile and proprioceptive inputs. For example, visuo-tactile stimuli elicit ownership when congruent in terms of place and temporality (Botvinick and Cohen, 1998; Tsakiris and Haggard, 2005), while visuo-motor synchronization in passive movements is enough for the sense of ownership to emerge and elicits a greater illusion than visuo-tactile integration (Tsakiris et al., 2006; Kokkinara and Slater, 2014). Top-down factors refer to cognitive processes that allow the mechanism of embodiment to take place; the necessity for basic morphological similarity between the surrogate body part to be embodied and the participant's limb in the RHI is one such example of a top-down process (Tsakiris and Haggard, 2005; Tsakiris, 2010).

2.2. Agency

The sense of agency has been defined by Blanke and Metzinger (2009) as “global motor control, including the subjective experience of action, control, intention, motor selection, and the conscious experience of will.” It encompasses the will to move (the judgment of agency) and the effective movement of the body, along with the feedback of movement (the feeling of agency) (Bayne and Pacherie, 2007). Recent work by Jeunet et al. (2018) categorized agency in VR in a similar manner. According to that study, agency has two components, the judgment of agency and the feeling of agency, based on three principles: priority, consistency, and exclusivity. The principle of consistency, in particular, is defined as “the sensory outcome must fit the predicted outcome.” The absence of consistency, or discrepancies between visual and behavioral feedback, leads to what is usually called the uncanny valley phenomenon (Mori, 1970; Mori et al., 2012). Visuomotor integration induces a high sense of agency over the virtual body being controlled, as it correlates the movement of the user's real body, and as such the intention of movement, with the movement of the virtual body. Visuomotor integration is dependent on the coherence of the synchronization, as discrepancies (such as latency) will reduce the felt agency (Franck et al., 2001).

2.3. Self-Location

Self-location “is a determinate volume in space where one feels to be located” (Kilteni et al., 2012). This self-body space is complementary to the concept of presence, as it frames the relationship between one's self and one's body, while presence does so between one's self and the environment. There is a relationship between body representation (Maravita and Iriki, 2004; De Vignemont and Iannetti, 2015) (peri-personal space, body image) and self-location, as shown in experiments modifying and extending body representations through tool use (Giummarra et al., 2008; Cardinali et al., 2009; Bergström et al., 2019). The visuospatial perspective within the IVE is important for self-location, which is stronger in first-person perspective because the virtual body and the real body are collocated (Petkova et al., 2011; Gorisse et al., 2017). Past experiments on body appropriation (Botvinick and Cohen, 1998; Sanchez-Vives et al., 2010) showed that congruent visuo-tactile stimuli influenced self-location. Yet, Lenggenhager et al. (2007, 2009) showed the predominance of seen, congruent tactile stimuli over visual perspective. Vestibular stimulation is also linked to self-location and its modification through changes of spatial perception (Lopez et al., 2008).

2.4. Haptics and Embodiment in IVEs

Haptic perception, commonly referred to as the sense of touch, encompasses two sensory systems: the kinesthetic and the tactile senses (Oakley et al., 2000; Rincon-Gonzalez et al., 2011). While kinesthetic information refers to the posture and perception of limbs and body parts within space, along with the associated forces, tactile information refers to the nature of contact between the skin and a surface, allowing one to feel texture, heat, and pressure. Although numerous haptic devices exist to generate haptic feedback, this paper focuses on two widely used types of haptic device: force feedback devices, which mainly generate kinesthetic sensations, and vibrotactile devices, which mainly generate tactile sensations (Srinivasan and Basdogan, 1997; Culbertson et al., 2018). The description of existing haptic technologies falls beyond the scope of this paper; for further information, we refer to the extensive review of haptic technologies by Culbertson et al. (2018), of haptic gloves by Perret and Vander Poorten (2018), and of wearables by Pacchierotti et al. (2017).

When the haptic feedback is vibrotactile, the vibration models are a key point in developing the environment: users need to be able to orient themselves within the IVE and to learn how to interact with it through the vibrations (Israr et al., 2014). Furthermore, there is a need for ecological coherence between haptic cues and the spatial and visual feedback provided in VR, an effect similar to the uncanny valley (Berger et al., 2018).

A number of studies have shown the benefits and usages of haptic devices in virtual reality, such as studies on presence (García-Valle et al., 2017; Kreimeier et al., 2019), performance (Kreimeier et al., 2019), and learning (Lemole et al., 2007). However, only a few studies, detailed hereafter, focus on the role of haptics in virtual embodiment (Raz et al., 2008; Choi et al., 2016; Frohner et al., 2018; Krogmeier et al., 2019).

The work done by Krogmeier et al. (2019) is a multi-dimensional study that showed a positive correlation between embodiment and vibrotactile feedback. The authors did not consider force feedback, and the measurement of self-reported embodiment was limited to ownership. They used a vibrotactile vest to simulate collisions: participants stood still while virtual agents walked past them and bumped into their virtual representation. Such feedback, replacing a force (the bump) with vibration, can be considered symbolic. Participants were not able to actively interact with the environment during the main task to experience haptic feedback.

Two other studies did not try to compare different forms of haptic feedback, but used a grounded force feedback arm that allowed active interaction with the environment (Raz et al., 2008; Choi et al., 2016). Raz et al. (2008) recreated the RHI/VHI with haptic feedback in passive and active movements, also adding a self-stimulation condition. This study did not explore components of embodiment other than ownership. Adding audio, Choi et al. (2016) showed that multisensory integration can lead to a stronger sense of body ownership, and that ownership was strongest when multisensory integration was combined with active movements. In their study, participants actively played the xylophone with audio, visual, and tactile feedback while immersed through stereoscopic glasses, but the participants' virtual representation was limited to a hand. Both of these studies, by integrating a 3-degree-of-freedom (DoF) force feedback device to simulate tangible objects, proposed an ecological haptic feedback.

Finally, the work by Frohner et al. (2018) provides great insight into how haptic feedback rendered through wearables can increase embodiment. In their paper, the authors provided both force and vibrotactile feedback through thimbles, with a normal force over two fingers simulating force feedback, creating a wearable haptic glove. Although force feedback in their study is more ecological than vibrotactile feedback, it is still limited in that it only offers 1 DoF. They evaluated the three main subcomponents of embodiment, and found a significant superiority of force feedback and vibrotactile feedback over no haptic feedback. This was mainly driven by differences in self-location. They created a non-fully immersive environment, where the subjects were not collocated with the virtual representation of their hand, thus allowing for a proprioceptive drift.

In summary, few studies have focused on haptic feedback and its relationship to embodiment (Raz et al., 2008; Choi et al., 2016; Frohner et al., 2018; Krogmeier et al., 2019). None of them implemented both force and tactile feedback to study embodiment in a complete IVE. It is still not clear whether, in a given context, a particular kind of haptic feedback is superior regarding embodiment. The study by Frohner et al. (2018), while proposing both force and vibrotactile feedback, did not find a significant difference between the two kinds of haptic feedback, and did not make any clear distinction between one feedback being more ecological and the other being symbolic. As such, this paper investigates whether, in a particular context, there is a kind of haptic feedback that is superior to the others regarding embodiment, and if there is, whether it is more ecological or more symbolic. Thus, in the next section, an experiment is proposed in which participants actively interact with an IVE augmented with haptic feedback.

3. Experimental Setup

The main purpose of the experiment is to study how the use of haptic feedback can enhance the sense of embodiment of participants in an IVE, and to compare the relative influence of different haptic cues, namely force feedback and vibrotactile stimulation.

3.1. Hypotheses

From our review of the literature, our first research hypothesis is directly derived from the paradigm of the RHI/VHI (Botvinick and Cohen, 1998; Slater et al., 2009; Sanchez-Vives et al., 2010) and from the different studies on embodiment in relation to haptic feedback (Choi et al., 2016). Those studies show that multisensory integration leads to higher levels of embodiment and particularly of ownership. Thus, it could be expected that results regarding different haptic modalities would not differ from previous findings. Moreover, in a context where haptic feedback simulates physical surfaces, force feedback is more ecological and coherent than vibrotactile feedback, as it matches reality more closely. We hypothesize that haptic feedback would increase the level of ownership (H1) and that force feedback would elicit a higher level of ownership than vibrotactile feedback (H1.1).

Moreover, the principle of consistency, as described by Jeunet et al. (2018), leads us to think that the most coherent haptic feedback will bring a higher sense of agency. We assume here that finer control over the avatar, through agency, would lessen the perceived workload and increase performance, as the opposite was shown to be true (Waltemate et al., 2016). As earlier studies have shown (Cheng et al., 1997; Kreimeier et al., 2019), haptic feedback can increase performance, in particular by reducing completion time. As such, we hypothesize that force feedback would elicit higher agency than vibrotactile and no haptic feedback (H2), and that haptic feedback would increase performance, with force feedback increasing it further than vibrotactile feedback (H2.1).

The study by Frohner et al. (2018) and the different experiments reproducing the RHI (Botvinick and Cohen, 1998; Tsakiris and Haggard, 2005; Sanchez-Vives et al., 2010) show a change in the sense of self-location through a proprioceptive drift. But in those studies, the drift originates from the difference in apparent position between the real body and the fake or virtual one. In our experiment, the real body and the avatar are co-localized. However, in VR, if a user wants to interact with a virtual object fixed in space through the avatar, there are two possibilities. Either the virtual representation goes through the virtual object and the avatar's position remains the same as the user's position, or the avatar does not go through, in which case there is a mismatch between the avatar and the user equal to the penetration into the virtual object, which will be referred to as interpenetration later on. The second possibility is the most standard way of implementing haptics in IVEs, using a proxy (Mitra and Niemeyer, 2004), and was therefore used in this experiment. This meant that in the no haptic and vibrotactile conditions, the participants' real hand would mismatch the avatar's hand when touching a fixed virtual object, possibly lowering the level of self-location. As such, we hypothesize that only force feedback would increase the sense of self-location (H3).

3.2. Participants

This user study was carried out with 24 participants (13 male and 11 female), aged from 23 to 51 (M = 30.8; SD = 7.7). Participants were recruited within the laboratory and were naive to the experimental hypotheses. Thirteen had prior VR experience, two were familiar with the technology, and the rest had no prior experience. Two male participants were left-handed, and the rest were right-handed. The participants did not receive any compensation and took part in the study as volunteers.

3.3. Task

The task consisted of coloring a mandala, as represented in Figure 1. This task was inspired by the study by Burin et al. (2019), who evaluated body ownership in VR using a drawing task. This fine manipulation task also appeared suited to exploring the influence of haptic feedback on embodiment.

Figure 1. The virtual environment from the first-person perspective during the task (Left), and general overview of the environment with the two avatars (Right).

The diameter of the mandala was 300 mm. Participants embodied a full-body avatar, either male or female (see Figure 1). Participants held a brush in their dominant hand and could change the drawing color by touching the corresponding sphere at the bottom of the mandala with the brush. Each sphere represented a different paint color. The stroke size was coded to vary linearly with the interpenetration distance (as defined in section 3.1). The minimum and maximum stroke sizes are the same for each condition, but the three conditions differ in the linear relationship between stroke size and interpenetration. Because interpenetration distances differ in the force feedback condition compared to the others, this was done in order to have the same visual feedback in each condition. The minimum stroke size was set at 3 mm and the maximum stroke size at 60 mm. The maximum interpenetration was set at 80 mm for no haptic and vibrotactile feedback and 10 mm for force feedback. Force feedback required a smaller value as it created tangible hard surfaces that physically reduced interpenetration. These values were obtained through preliminary tests. More details regarding the conditions are given in the following subsection.
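To make this mapping concrete, the following minimal Python sketch reproduces the linear stroke-width rule described above. The numerical constants come from the text, while the function and dictionary names are hypothetical; the actual implementation was done in Unity.

```python
# Hypothetical sketch of the stroke-width mapping: the width grows linearly with
# interpenetration, with a per-condition maximum interpenetration so that all
# conditions span the same visual range (3-60 mm).
MIN_STROKE_MM = 3.0
MAX_STROKE_MM = 60.0
MAX_INTERPENETRATION_MM = {"no_haptic": 80.0, "vibrotactile": 80.0, "force": 10.0}

def stroke_width(interpenetration_mm: float, condition: str) -> float:
    """Return the brush stroke width (mm) for a given interpenetration depth."""
    max_depth = MAX_INTERPENETRATION_MM[condition]
    # Clamp the normalized depth to [0, 1] before interpolating.
    t = min(max(interpenetration_mm / max_depth, 0.0), 1.0)
    return MIN_STROKE_MM + t * (MAX_STROKE_MM - MIN_STROKE_MM)

# Example: half of the maximum interpenetration gives the same width in every condition.
assert stroke_width(40.0, "vibrotactile") == stroke_width(5.0, "force")
```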

3.4. Apparatus

The experimental setup was composed of an HTC Vive1 head-mounted display (HMD) for the visual feedback, and a Phantom Desktop from 3D Systems2 to interact with the environment and render haptic feedback. Participants used the Phantom Desktop's stylus to control the 3D position and orientation of a virtual brush. The setup also comprised a 3M 1435 noise reduction headset featuring a 19 dB (SNR) noise reduction at 250 Hz3. The experiment was implemented with Unity 3D 2018.3.11.f1 and the SteamVR plugin to support VR within Unity. The 3D Systems OpenHaptics Unity plugin provided the stylus position and orientation and sent the force vector to the device. It also artificially compensated for the inherent weight of the Phantom Desktop's stylus. The computer running the experiment featured two Intel Xeon E5-2630 @ 2.20 GHz CPUs, an Nvidia GeForce GTX 1080 GPU, and 64 GB of RAM.

The virtual representations of the participants were two avatars, one of each gender, from the RocketBox library. The avatars were controlled with inverse kinematics, with the position and rotation of the HMD linked to the head, and those of the stylus to the dominant hand. The roll rotation (rotation around the axis of the stylus) was removed and set to a fixed value because the limited number of DoFs in the avatar would otherwise cause unrealistic arm movements. The yaw and pitch rotations were kept, with a 1:1 ratio. The rest of the virtual body was arranged in a seated posture. The Phantom Desktop was laid on a desk, in front of participants, centered so that it could be manipulated in the same way by right- and left-handed participants (see the accompanying video4).

Since the workspace of the Phantom Desktop is small (360 × 180 × 180 mm, experimentally measured), an in-game/real movement ratio of 1.4 was used, staying under the 50% discrepancy threshold found by Burns and Brooks (2006).
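As an illustration of the hand mapping described in the two previous paragraphs, here is a minimal Python sketch under stated assumptions: the control-display gain is applied around the workspace center, and the stylus roll is discarded through a swing-twist decomposition using a (w, x, y, z) quaternion convention. All names are hypothetical; the actual implementation relied on Unity and inverse kinematics.

```python
import numpy as np

CD_GAIN = 1.4  # in-game/real movement ratio reported above

def map_stylus_position(real_pos, workspace_center):
    """Scale the real stylus position around the workspace center by the control-display gain."""
    real_pos = np.asarray(real_pos, dtype=float)
    center = np.asarray(workspace_center, dtype=float)
    return center + CD_GAIN * (real_pos - center)

def remove_roll(q, stylus_axis):
    """Remove the twist (roll) component of quaternion q = (w, x, y, z) about the stylus
    axis, keeping only the swing (yaw and pitch), as described for the avatar's hand."""
    w, v = q[0], np.asarray(q[1:], dtype=float)
    a = np.asarray(stylus_axis, dtype=float)
    a = a / np.linalg.norm(a)
    twist = np.array([w, *(np.dot(v, a) * a)])   # projection of q onto the roll axis
    norm = np.linalg.norm(twist)
    if norm < 1e-9:                              # degenerate case: pure 180-degree swing
        return np.asarray(q, dtype=float)
    tw, tv = twist[0] / norm, twist[1:] / norm
    # swing = q * conjugate(twist), using the Hamilton product in (w, x, y, z) convention
    sw = w * tw + np.dot(v, tv)
    sv = -w * tv + tw * v - np.cross(v, tv)
    return np.array([sw, *sv])
```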

In the force feedback condition, interactive objects in the scene could be felt either through their surface, for tangible objects, or through their viscosity, for liquids (e.g., the spheres representing paint). Force feedback was computed from Unity 3D using the OpenHaptics plugin. Tangible surfaces were emulated through the Haptic Surface component of the plugin (with a stiffness of 0.7, damping of 0.1, static friction of 0.2, and dynamic friction of 0.3), and viscosity was emulated through the Haptic Effect component (effect type viscous, gain and magnitude set to 0.6). A stiffness of 0.7 renders a continuous force of approximately 1.2 N, which is of the order of magnitude of a usual interaction (Massie and Salisbury, 1994). These values were adjusted through informal pilot studies conducted with four participants.
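The force rendering itself was handled by the OpenHaptics plugin, whose stiffness and damping parameters are normalized values. As a rough, hypothetical illustration of the penalty-based model that underlies this kind of rendering (not the plugin's actual code or parameter scale), a sketch could look as follows; the constants are merely chosen so that a few millimeters of penetration yield roughly the 1.2 N mentioned above.

```python
import numpy as np

def surface_force(penetration_m, surface_normal, tool_velocity, k=240.0, b=2.0):
    """Simplified penalty-based force for a stiff surface: a spring along the surface
    normal, proportional to the penetration depth, plus damping on the normal velocity.
    k (N/m) and b (N.s/m) are illustrative values, not the OpenHaptics 0-1 parameters."""
    n = np.asarray(surface_normal, dtype=float)
    n = n / np.linalg.norm(n)
    if penetration_m <= 0.0:
        return np.zeros(3)                        # no contact, no force
    v_n = np.dot(np.asarray(tool_velocity, dtype=float), n)
    return (k * penetration_m - b * v_n) * n      # push the tool back out of the surface

def viscous_force(tool_velocity, gain=0.6, max_force=1.0):
    """Simplified viscosity for the paint spheres: a drag force opposing the tool velocity,
    capped to keep the device within a comfortable force range."""
    f = -gain * np.asarray(tool_velocity, dtype=float)
    norm = np.linalg.norm(f)
    return f if norm <= max_force else f * (max_force / norm)

# Example: a 5 mm penetration with a static tool yields a 1.2 N restoring force.
print(surface_force(0.005, [0.0, 1.0, 0.0], [0.0, 0.0, 0.0]))
```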

For tactile feedback, an EAI C2 actuator5, also known as a tactor, was used. The tactor was attached below the Phantom stylus end point using an elastic band, as illustrated in Figure 2. A stiff contact was ensured between them, while the elastic band was kept from blocking the movement of the actuated part of the tactor. The location of the tactor on the stylus provided a good transmission of vibrations along the stylus case, without impeding participants in their movements. The tactor was connected to an Arduino Leonardo board through a custom-made electronic board equipped with a CMOS NAND gate to modulate the tactile feedback. A classical synthesized signal based on a square shape was used (Gupta et al., 2016). The tactile feedback consisted of a 250 Hz square wave (Goff, 1967), modulated with a 31 kHz square signal whose duty cycle was varied to control the amplitude. Note that 31 kHz is the fastest clock-type signal achievable with a 16 MHz microcontroller and 256 levels of duty-cycle precision. The tactile information was transmitted to the Arduino board using raw HID at 1,000 Hz to minimize communication delays (Casiez et al., 2017). The amplitude of the vibrations followed a linear relationship with the interpenetration distance for rigid contacts (Cheng et al., 1997). An informal preliminary test was conducted with four participants to determine the preferred maximum interpenetration distance. Four values were compared: 60, 80, 100, and 120 mm. The smallest value did not allow the participants in the pre-tests to control the thickness finely enough, whereas the larger values encroached too much on the working space of the device. The maximal amplitude was therefore set for an 80 mm distance as a compromise. For nonrigid contacts (e.g., the paint spheres exclusively), there was no modulation in amplitude, and the vibrations were set to 30% of the maximum amplitude.
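A minimal sketch of this amplitude mapping, assuming a normalized amplitude that is converted to an 8-bit duty cycle before being sent to the Arduino over raw HID, is given below; the helper names are hypothetical and this is not the actual firmware.

```python
MAX_INTERPENETRATION_MM = 80.0   # amplitude saturates at this depth (rigid contacts)
NONRIGID_AMPLITUDE = 0.3         # fixed 30% amplitude for the paint spheres
PWM_LEVELS = 256                 # 8-bit duty cycle on the 31 kHz carrier

def vibration_amplitude(interpenetration_mm, rigid=True):
    """Normalized amplitude (0-1) of the 250 Hz signal, following the linear mapping above."""
    if not rigid:
        return NONRIGID_AMPLITUDE
    return min(max(interpenetration_mm / MAX_INTERPENETRATION_MM, 0.0), 1.0)

def duty_cycle(amplitude):
    """Convert the normalized amplitude into the 8-bit duty cycle sent to the Arduino."""
    return int(round(amplitude * (PWM_LEVELS - 1)))

# Example: a 40 mm interpenetration gives half amplitude, i.e., a duty cycle of 128.
print(duty_cycle(vibration_amplitude(40.0)))   # -> 128
```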

Figure 2. Experimental setup: Phantom Desktop close-up, with the tactor fitted to the stylus, and the Arduino Leonardo in the background.

3.5. Design

The experiment followed a within-subjects design with one independent variable, the type of feedback, administered at three levels: no haptic feedback (CTRL), force feedback (FFB), and vibrotactile feedback (VBT). Visual feedback was the same across all conditions. The order of the types of feedback was counterbalanced with a Latin square to minimize ordering effects.

In summary, our experimental design was 24 participants × 3 types of feedback = 72 trials.
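One simple way to produce such a counterbalancing is a cyclic Latin square, sketched below in Python; this illustrates the principle under the assumption of a simple row rotation and is not necessarily the exact ordering scheme used in the study.

```python
CONDITIONS = ["CTRL", "FFB", "VBT"]

def latin_square_order(participant_index, conditions=CONDITIONS):
    """Cyclic Latin-square ordering: each row is the previous one rotated by one position,
    so every condition appears at every position equally often across participants."""
    k = len(conditions)
    shift = participant_index % k
    return conditions[shift:] + conditions[:shift]

# With 24 participants and 3 conditions, each of the 3 rotated orders is used 8 times.
orders = [latin_square_order(p) for p in range(24)]
```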

3.6. Procedure

Participants were asked to fill in a consent form and a questionnaire with demographic information, then to put on the HMD and hold the stylus linked to the haptic device as they would hold a pen, while seated. They were given information regarding the haptic device and the caution required for its manipulation. As there was no way to detect finger posture when holding the stylus, participants were also requested to hold the Phantom stylus at all times and in the same way as the virtual hand (Figure 2). They then experienced each condition, which was divided into two parts. The first phase consisted of a warm-up period and placed the participants in front of a desk with a blank canvas. Participants could explore the interactions and familiarize themselves with the sensory feedback. This warm-up lasted between 30 and 80 s, and participants could skip it by pressing a button that appeared after 30 s next to the bottom right corner of the canvas. In the second phase, participants had to color the mandala using the brush. They were instructed to color each zone of the mandala with the appropriate color, coloring what they could within a 300 s time frame and following whatever strategy they wanted. For each condition, they were asked to wear the noise reduction headset. This minimized the effect of the sound produced by the tactor in the tactile condition. After finishing each condition, they removed the HMD and noise reduction headset to fill in an online questionnaire, as detailed below, before moving to the next condition.

3.7. Dependent Variables

In order to evaluate the sense of embodiment, workload, and performance, both objective and subjective measurements were used.

3.7.1. Subjective Measures

Participants self-reported their sense of embodiment and workload through a questionnaire, using five-point Likert scales. We used the aggregated embodiment questionnaire by Gonzalez-Franco and Peck (2018). Ten questions were selected (first 10 questions in Table 1), regarding the subcomponents of embodiment that were relevant in our context. Items regarding the perception of Tactile Sensations, which are “present whenever there is tactile or haptic stimulation to enhance the embodiment illusion” (Gonzalez-Franco and Peck, 2018), were also selected, as they were relevant to our study (questions T-1 to T-4 in Table 1). Workload was measured using the standard NASA-TLX questionnaire6 (last 6 questions in Table 1).

Table 1. Post-condition questionnaire composed of five dimensions: embodiment (Agency A, Ownership O, Self-location L, Tactile sensations T) and workload W.

3.7.2. Objective Measures

To assess performance, the degree of completeness and the degree of precision of the coloring were measured. The degree of completeness was measured as the percentage of pixels colored using any color. The degree of precision was measured as the ratio of pixels correctly colored to the total number of pixels colored. The total number of pixels correctly colored was computed by counting, for each zone, the number of pixels colored using the appropriate color. Only the zones inside the circle were used to compute these two values.
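A minimal sketch of how these two metrics can be computed from per-pixel color labels is given below; the array layout and helper name are assumptions for illustration, not the actual analysis script.

```python
import numpy as np

def coloring_scores(drawn, target, inside_circle, background=0):
    """Compute completeness and precision of a colored mandala.

    drawn: 2-D array of color labels painted by the participant.
    target: 2-D array giving the expected color label of each zone.
    inside_circle: boolean mask restricting the computation to zones inside the circle.
    background: label of uncolored pixels."""
    drawn = np.asarray(drawn)
    target = np.asarray(target)
    inside_circle = np.asarray(inside_circle, dtype=bool)
    colored = (drawn != background) & inside_circle
    correct = colored & (drawn == target)
    completeness = colored.sum() / inside_circle.sum()      # fraction of mandala pixels colored
    precision = correct.sum() / colored.sum() if colored.any() else 0.0
    return completeness, precision
```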

4. Results

As the data collected through the questionnaires were ordinal and did not follow normal distributions, every item was analyzed using nonparametric tests: Friedman analyses followed by Wilcoxon post-hoc paired tests with Bonferroni correction.
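As an illustration, the analysis of a single questionnaire item could be sketched as follows in Python with SciPy. This is a hypothetical helper, not the actual analysis script, and it assumes complete paired data for the three conditions.

```python
from itertools import combinations
import numpy as np
from scipy import stats

def analyze_item(ratings, labels=("CTRL", "FFB", "VBT")):
    """Friedman test across conditions, then Wilcoxon signed-rank post-hoc tests with
    Bonferroni correction. `ratings` is an (n_participants, n_conditions) array of
    Likert responses whose columns are ordered as `labels`."""
    ratings = np.asarray(ratings)
    n_cond = ratings.shape[1]
    chi2, p = stats.friedmanchisquare(*(ratings[:, i] for i in range(n_cond)))
    pairs = list(combinations(range(n_cond), 2))
    posthoc = {}
    for i, j in pairs:
        _, p_pair = stats.wilcoxon(ratings[:, i], ratings[:, j])
        # Bonferroni correction: multiply by the number of pairwise comparisons, cap at 1.
        posthoc[(labels[i], labels[j])] = min(p_pair * len(pairs), 1.0)
    return chi2, p, posthoc
```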

A repeated-measures ANOVA on Aligned Rank Transformed data (Wobbrock et al., 2011), where the order of presentation was treated as a between-subjects independent variable and the type of feedback as a within-subjects variable, did not show any significant effect of the presentation order, nor any interaction, on any of our dependent variables.

4.1. Embodiment

The score for each sub-dimension of embodiment (agency, self-location, and ownership) and a global embodiment score were computed by grouping items as described by Gonzalez-Franco and Peck (2018), and re-scaled between 1 and 5.
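A minimal sketch of this aggregation is given below, assuming a weighted combination of the 5-point items followed by a linear re-scaling to the 1-5 range; the exact item grouping and weights follow Gonzalez-Franco and Peck (2018) and are not reproduced here.

```python
import numpy as np

def dimension_score(item_ratings, weights):
    """Combine Likert items (rated 1-5) into one sub-dimension score re-scaled to 1-5.

    item_ratings: (n_participants, n_items) array of responses.
    weights: per-item weights (possibly negative for reverse-scored items)."""
    item_ratings = np.asarray(item_ratings, dtype=float)
    weights = np.asarray(weights, dtype=float)
    raw = item_ratings @ weights                       # weighted combination per participant
    # Theoretical bounds of the raw score given 1-5 item responses.
    lo = np.where(weights > 0, 1.0, 5.0) @ weights
    hi = np.where(weights > 0, 5.0, 1.0) @ weights
    return 1.0 + 4.0 * (raw - lo) / (hi - lo)
```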

A Friedman analysis on the overall embodiment responses found a significant effect for type of feedback [χ2(3) = 52.0, p < 0.0001]. Wilcoxon post-hoc analysis revealed significant differences (p < 0.005) between FFB (Mdn = 3.35) and CTRL (Mdn = 2.99) and between FFB and VBT (Mdn = 3.05) (Figure 3—summary embodiment).

Figure 3. Boxplots for the rating of each embodiment and workload question, as described in Table 1, and grouped by each sub-category of embodiment. Red lines above the boxes represent significant differences found between the conditions.

Friedman analysis found a significant effect for type of feedback on the grouped items for agency [χ2(2) = 10.9, p = 0.004], but Wilcoxon post-hoc comparisons did not reveal significant differences. However, a Friedman analysis on each individual item of agency revealed a significant effect for type of feedback on A-1 [χ2(2) = 11.7, p = 0.003], with post-hoc comparisons showing a significant difference (p = 0.01) between FFB (Mdn = 4) and CTRL (Mdn = 3.5).

No significant effect was found for the grouped items or individual items of self-location.

Friedman analysis found a significant effect for type of feedback on the grouped items for ownership [χ2(2) = 11.9, p = 0.002], with post-hoc comparisons revealing a significant difference (p < 0.01) between FFB (Mdn = 3.5) and CTRL (Mdn = 3). Further Friedman analyses on each item of ownership showed a significant effect for type of feedback on O-1 [χ2(2) = 9.7, p = 0.007] and O-2 [χ2(2) = 6.2, p = 0.04]. However, Wilcoxon post-hoc comparisons did not reveal significant differences for O-2, but showed a significant difference (p < 0.05) between FFB (Mdn = 4) and CTRL (Mdn = 3) for O-1.

Finally, a Friedman analysis found a significant effect for type of feedback on the grouped items for tactile sensations [χ2(2) = 18.7, p < 0.0001]. Post-hoc comparisons revealed significant differences (p < 0.03) between all types of feedback (FFB: 3.38, VBT: 2.75, CTRL: 2.25). Friedman analyses on each individual item of tactile sensations showed a significant effect of type of feedback on T-2 [χ2(2) = 14.1, p < 0.0001], T-3 [χ2(2) = 16.1, p < 0.0001], and T-4 [χ2(2) = 11.2, p = 0.004]. For T-2, post-hoc comparisons revealed a significant difference (p < 0.007) between FFB (Mdn = 4) and CTRL (Mdn = 2) and a significant difference (p < 0.01) between VBT (Mdn = 3) and CTRL (Mdn = 2). Regarding T-3, significant differences (p < 0.003) were found between FFB (Mdn = 4) and CTRL (Mdn = 2) and between FFB (p < 0.03, Mdn = 4) and VBT (Mdn = 2.5). For item T-4, significant differences were found between FFB (p < 0.006, Mdn = 3.5) and CTRL (Mdn = 1) and between FFB (p < 0.01, Mdn = 3) and VBT (Mdn = 2).

4.2. Workload

Friedman analysis found a significant effect for type of feedback on effort [χ2(2) = 9.5, p = 0.009] with post-hoc comparisons revealing significant differences between FFB (p < 0.04, Mdn = 3) and VBT (Mdn = 3) and between FFB (p < 0.03, Mdn = 3) and CTRL (Mdn = 3).

There was also a significant effect for type of feedback on frustration [χ2(2) = 10.8, p = 0.005] with post-hoc comparisons revealing significant differences between FFB (p < 0.02, Mdn = 1) and VBT (Mdn = 2) and between FFB (p < 0.05, Mdn = 1) and CTRL (Mdn = 2).

There was also a significant effect for type of feedback on mental [χ2(2) = 7.3, p = 0.03] with post-hoc comparisons revealing significant differences between FFB (p < 0.05, Mdn = 2) and CTRL (Mdn = 2.5) and between FFB (p < 0.04, Mdn = 2) and VBT (Mdn = 2.5).

There was also a significant effect for type of feedback on performance [χ2(2) = 9.5, p = 0.009] with post-hoc comparisons revealing significant differences between FFB (p < 0.02, Mdn = 3) and VBT (Mdn = 3).

The analysis did not reveal any significant effect for type of feedback on physical [χ2(2) = 2.3, p = 0.33].

Finally, there was also a significant effect for type of feedback on temporal [χ2(2) = 9.3, p = 0.01] with post-hoc comparisons revealing significant differences between FFB (p < 0.05, Mdn = 2) and VBT (Mdn = 3) and between CTRL (p < 0.03, Mdn = 2.5) and VBT (Mdn = 3).

4.3. Objective Measures

A repeated-measures ANOVA on Aligned Rank Transformed data did not show any significant difference between conditions for the completeness and precision metrics (p = 0.27 and p = 0.28, respectively).

5. Discussion

In this study, we aimed to assess the relationship between the kind of haptic feedback and virtual embodiment. Overall, the results of our experiment show that, in our drawing task, the haptic cue used influences users' subjective experience: force feedback elicits higher embodiment than no haptic feedback, and it also elicits higher subjective performance than vibrotactile and no haptic feedback. This can be explained mainly by better tactile sensations, but also by improved ownership and, to a lesser extent, improved agency.

We can affirm that force feedback allows for better embodiment compared to no haptic feedback, and this is coherent with previous findings, as multisensory integration is a determinant of ownership and embodiment (Botvinick and Cohen, 1998; Tsakiris and Haggard, 2005).

Hypotheses H1 (haptic feedback would bring about higher ownership) and H1.1 (force feedback would bring about higher ownership than no haptic or vibrotactile feedback) can be partially answered. Self-reported ownership shows a significant superiority of force feedback over no haptic feedback in this particular context. Yet our results do not corroborate previous findings regarding vibrotactile feedback (Frohner et al., 2018; Krogmeier et al., 2019). This could be explained by the polarization of participants' reactions toward the vibrotactile condition: some participants considered the feedback totally fine while others disliked it during the task (as one of the participants exclaimed during the VBT condition, “This is really stressful. I feel like my alarm clock is going off continuously”). To further illustrate this, it is interesting to note that vibrotactile feedback and no haptic feedback were significantly more frustrating than force feedback. We conjecture that, in a context where users have to interact with usual palpable objects, vibrotactile feedback is too symbolic to compete with force feedback. Previous work has shown the importance of temporal and spatial congruence (Shimada et al., 2009; Tsakiris et al., 2010) to elicit embodiment, but our results suggest that haptic feedback also needs to rely on contextual congruence. This contextual congruence could be related to the principle of consistency highlighted by Jeunet et al. (2018) for the sense of agency. The study by Alimardani et al. (2016) reports quite similar results, although it does not address haptic feedback. In their work, embodiment was studied by comparing two conditions: visuomotor control and brain–computer interface (BCI) imagery control. Participants controlled the hands of a human-like robot, seen from a first-person perspective through an HMD. A delay was introduced in both conditions, but the delay in the BCI condition was twice as long. Results showed that the BCI condition brought forth a significantly higher level of embodiment. Even with a longer delay, BCI control, which contextually does not induce any mismatch between visual and proprioceptive feedback, proved to be more ecological for this kind of tele-operation task. It is important to note that embodiment is task dependent, and that it is hard to generalize these results.

Considering hypothesis H2 (force feedback would elicit higher agency than vibrotactile and no haptic feedback), there is no significant result to support it. We did not detect a significant difference in overall agency, although we found one significant difference between force and no haptic feedback for item A-1. It would be more appropriate to say that force feedback elicits a higher sense of agency than no haptic feedback. Given that force feedback is more ecological than the other two conditions, following the principle of consistency (Jeunet et al., 2018), agency should have been significantly higher in the force feedback condition. However, there is no evidence to support the hypothesis, and further tests are needed to conclude on the role of haptic feedback regarding agency.

Regarding H2.1 (haptic feedback would increase performance in the task, and force feedback would increase it further than vibrotactile feedback), quantitative measures over the drawings do not reveal any significant difference. This does not allow us to corroborate previous findings about the role of haptic feedback on performance (Kreimeier et al., 2019). Performance is usually task dependent, so it is hard to generalize this absence of result. Yet, subjective results from the TLX questionnaire show significant differences in perceived workload. Force feedback was reported to be mentally less demanding, to elicit less frustration, and to require less effort compared to the other two conditions, and the perceived performance was significantly better for force feedback than for vibrotactile feedback. The absence of specific instructions asking participants to emphasize precision or completeness could also explain the absence of significant differences between the types of feedback.

Finally, considering hypothesis H3 (only force feedback would increase the sense of self-location), there is no significant result to support it. As mentioned earlier in the paper, participants were co-localized with the avatar, and there may not have been enough of a difference, even with the 1.4:1 mapping, to create a drift and therefore a change in self-location. It is important to note that only one participant noticed the 1.4:1 mapping (at the start), and that participant said it was not noticeable anymore after a few seconds of interacting with the environment. This tends to validate the threshold found by Burns and Brooks (2006). As such, we can suppose, as shown by Frohner et al. (2018), that haptic feedback changes the sense of self-location, but that constant mislocalization is a condition for bringing forth this change.

Most studies focusing on haptics and embodiment tend to implement tasks where the interaction does not require fine manipulation (Frohner et al., 2018; Krogmeier et al., 2019). In contrast, our coloring task required fine movements of the hand and wrist, and this kind of task has not been investigated much through the prism of embodiment and haptic feedback. Moreover, besides the study by Krogmeier et al. (2019), other works like Frohner et al. (2018) and Choi et al. (2016) only represented the virtual hand, and evaluated embodiment of the virtual limb through an explicit reference to the hand in the questionnaires. In our experiment, participants were embodied in a full-body avatar that moved in accordance with participants' head and hand movements, even if the task mainly involved interaction and haptic feedback with the hand. This could explain some differences between our results and those obtained in previous studies. It could be interesting to study the influence of other tasks, inducing more visibility and use of the virtual body, and/or distributed haptic sensations over a full-body avatar. As such, our results could be useful when designing haptic interactions, especially long interactions, that require precision. On the other hand, our coloring task, involving continuous contact over long periods of time (compared to the length of the experiment), may have hindered the vibrotactile feedback, as it was less coherent than force feedback.

6. Conclusion

In this paper, we presented a user study that investigated the role of haptic feedback on virtual embodiment in an immersive environment. The purpose of the study was to evaluate the influence of different kinds of haptic feedback on embodiment of a full-body avatar. Three different conditions were compared: force feedback, vibrotactile feedback, and no haptic feedback. We observed that force feedback brought a stronger sense of embodiment and ownership. Unlike previous findings, vibrotactile feedback did not significantly improve embodiment, nor ownership, and moreover, it seemed that vibrations decreased subjective performance. We focused our study on a fine manipulation task that appears representative of daily interactions with hands and tools. Haptic feedback was thus rendered on the hand but could also be felt on the arm. As future work, other tasks could be implemented to induce more visibility or use of full-body avatars, and to distribute haptic feedback differently over the virtual body. In this particular context, force feedback was an ecological feedback and vibrotactile feedback was more symbolic. This suggests that the appropriate kind of haptic feedback might be context dependent: the more ecological, the better. It would be interesting to develop further experiments where vibrotactile feedback would be the more ecological feedback while force feedback would be the symbolic one, to see whether those results hold.

Data Availability Statement

All datasets presented in this study are included in the article/Supplementary Material. Any Supplementary Material updated in the future can be found at https://ns.inria.fr/loki/embodiment/.

Ethics Statement

The studies involving human participants were reviewed and approved by Comité Opérationnel d'Evaluation des Risques Légaux et Ethiques (COERLE)—Inria (Authorization #2020-15). The patients/participants provided their written informed consent to participate in this study.

Author Contributions

GR, TP, FA, AL, and GC contributed to the redaction of the paper. GR implemented and ran the experiment. TP and GC contributed to the implementation of the electronic hardware. All authors contributed to the article and approved the submitted version.

Funding

This work was partially supported by the Inria Challenge Avatar.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/frvir.2020.573167/full#supplementary-material

Footnotes

1. ^Vive VR System, HTC Corporation, https://www.vive.com/us/product/vive-virtual-reality-system/ (accessed December 11, 2020).

2. ^Touch X Haptic Device, 3D Systems, https://www.3dsystems.com/haptics-devices/touch-x (accessed December 11, 2020).

3. ^Noise Reduction Headset 1435, 3M, http://multimedia.3m.com/mws/media/460698O/3m-general-purpose-ear-muff-1435.pdf (accessed December 11, 2020).

4. ^https://youtu.be/o_Vb2AdBK0E

5. ^C-2 Tactor, Engineering Acoustics, Inc., https://www.eaiinfo.com/product/c2/ (accessed December 11, 2020).

6. ^NASA TLX, Task Load Index, National Aeronautics and Space Administration, https://humansystems.arc.nasa.gov/groups/TLX/ (accessed December 11, 2020).

References

Alimardani, M., Nishio, S., and Ishiguro, H. (2016). Removal of proprioception by BCI raises a stronger body ownership illusion in control of a humanlike robot. Sci. Rep. 6:33514. doi: 10.1038/srep33514

Bayne, T., and Pacherie, E. (2007). Narrators and comparators: the architecture of agentive self-awareness. Synthese 159, 475–491. doi: 10.1007/s11229-007-9239-9

Berger, C. C., Gonzalez-Franco, M., Ofek, E., and Hinckley, K. (2018). The uncanny valley of haptics. Sci. Robot. 3:7010. doi: 10.1126/scirobotics.aar7010

Bergström, J., Mottelson, A., Muresan, A., and Hornbæk, K. (2019). “Tool extension in human-computer interaction,” in Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow: ACM), 568. doi: 10.1145/3290605.3300798

Blanke, O., and Metzinger, T. (2009). Full-body illusions and minimal phenomenal selfhood. Trends Cogn. Sci. 13, 7–13. doi: 10.1016/j.tics.2008.10.003

Botvinick, M., and Cohen, J. (1998). Rubber hands “feel” touch that eyes see. Nature 391:756. doi: 10.1038/35784

Burin, D., Kilteni, K., Rabuffetti, M., Slater, M., and Pia, L. (2019). Body ownership increases the interference between observed and executed movements. PLoS ONE 14:e209899. doi: 10.1371/journal.pone.0209899

Burns, E., and Brooks, F. P. (2006). “Perceptual sensitivity to visual/kinesthetic discrepancy in hand speed, and why we might care,” in Proceedings of the ACM Symposium on Virtual Reality Software and Technology (Lymassol: ACM), 3–8. doi: 10.1145/1180495.1180499

Cardinali, L., Frassinetti, F., Brozzoli, C., Urquizar, C., Roy, A. C., and Farné, A. (2009). Tool-use induces morphological updating of the body schema. Curr. Biol. 19, R478–R479. doi: 10.1016/j.cub.2009.06.048

Casiez, G., Pietrzak, T., Marchal, D., Poulmane, S., Falce, M., and Roussel, N. (2017). “Characterizing latency in touch and button-equipped interactive systems,” in Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology, UIST '17 (New York, NY: ACM), 29–39. doi: 10.1145/3126594.3126606

Cheng, L.-T., Kazman, R., and Robinson, J. (1997). “Vibrotactile feedback in delicate virtual reality operations,” in Proceedings of the Fourth ACM International Conference on Multimedia (Boston, MA), 243–251. doi: 10.1145/244130.244220

Choi, W., Li, L., Satoh, S., and Hachimura, K. (2016). Multisensory integration in the virtual hand illusion with active movement. BioMed Res. Int. 2016:8163098. doi: 10.1155/2016/8163098

Culbertson, H., Schorr, S. B., and Okamura, A. M. (2018). Haptics: The present and future of artificial touch sensation. Annu. Rev. Control Robot. Auton. Syst. 1, 385–409. doi: 10.1146/annurev-control-060117-105043

De Vignemont, F. (2011). Embodiment, ownership and disownership. Conscious. Cogn. 20, 82–93. doi: 10.1016/j.concog.2010.09.004

De Vignemont, F., and Iannetti, G. (2015). How many peripersonal spaces? Neuropsychologia 70, 327–334. doi: 10.1016/j.neuropsychologia.2014.11.018

Franck, N., Farrer, C., Georgieff, N., Marie-Cardine, M., Daléry, J., d'Amato, T., and Jeannerod, M. (2001). Defective recognition of one's own actions in patients with schizophrenia. Am. J. Psychiatry 158, 454–459. doi: 10.1176/appi.ajp.158.3.454

Fröhner, J., Salvietti, G., Beckerle, P., and Prattichizzo, D. (2018). Can wearable haptic devices foster the embodiment of virtual limbs? IEEE Trans. Hapt. 12, 339–349. doi: 10.1109/TOH.2018.2889497

Gallagher, S. (2000). Philosophical conceptions of the self: implications for cognitive science. Trends Cogn. Sci. 4, 14–21. doi: 10.1016/S1364-6613(99)01417-5

García-Valle, G., Ferre, M., Breñosa, J., and Vargas, D. (2017). Evaluation of presence in virtual environments: haptic vest and user's haptic skills. IEEE Access 6, 7224–7233. doi: 10.1109/ACCESS.2017.2782254

Giummarra, M. J., Gibson, S. J., Georgiou-Karistianis, N., and Bradshaw, J. L. (2008). Mechanisms underlying embodiment, disembodiment and loss of embodiment. Neurosci. Biobehav. Rev. 32, 143–160. doi: 10.1016/j.neubiorev.2007.07.001

Goff, G. D. (1967). Differential discrimination of frequency of cutaneous mechanical vibration. J. Exp. Psychol. 74:294. doi: 10.1037/h0024561

Gonzalez-Franco, M., and Peck, T. C. (2018). Avatar embodiment. Towards a standardized questionnaire. Front. Robot. AI 5:74. doi: 10.3389/frobt.2018.00074

Gorisse, G., Christmann, O., Amato, E. A., and Richir, S. (2017). First-and third-person perspectives in immersive virtual environments: presence and performance analysis of embodied users. Front. Robot. AI 4:33. doi: 10.3389/frobt.2017.00033

Gupta, A., Pietrzak, T., Roussel, N., and Balakrishnan, R. (2016). “Direct manipulation in tactile displays,” in Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (San Jose, CA), 3683–3693. doi: 10.1145/2858036.2858161

Israr, A., Zhao, S., Schwalje, K., Klatzky, R., and Lehman, J. (2014). Feel effects: enriching storytelling with haptic feedback. ACM Trans. Appl. Percept. 11:11. doi: 10.1145/2641570

Jeunet, C., Albert, L., Argelaguet, F., and Lécuyer, A. (2018). “Do you feel in control?”: towards novel approaches to characterise, manipulate and measure the sense of agency in virtual environments. IEEE Trans. Visual. Comput. Graph. 24, 1486–1495. doi: 10.1109/TVCG.2018.2794598

Kilteni, K., Groten, R., and Slater, M. (2012). The sense of embodiment in virtual reality. Presence 21, 373–387. doi: 10.1162/PRES_a_00124

Kokkinara, E., and Slater, M. (2014). Measuring the effects through time of the influence of visuomotor and visuotactile synchronous stimulation on a virtual body ownership illusion. Perception 43, 43–58. doi: 10.1068/p7545

Kokkinara, E., Slater, M., and López-Moliner, J. (2015). The effects of visuomotor calibration to the perceived space and body, through embodiment in immersive virtual reality. ACM Trans. Appl. Percept. 13:3. doi: 10.1145/2818998

Kreimeier, J., Hammer, S., Friedmann, D., Karg, P., Bühner, C., Bankel, L., et al. (2019). “Evaluation of different types of haptic feedback influencing the task-based presence and performance in virtual reality,” in Proceedings of the 12th ACM International Conference on PErvasive Technologies Related to Assistive Environments (Rhodes: ACM), 289–298. doi: 10.1145/3316782.3321536

Krogmeier, C., Mousas, C., and Whittinghill, D. (2019). Human-virtual character interaction: toward understanding the influence of haptic feedback. Comput. Anim. Virt. Worlds 30:e1883. doi: 10.1002/cav.1883

Lemole Jr, G. M., Banerjee, P. P., Luciano, C., Neckrysh, S., and Charbel, F. T. (2007). Virtual reality in neurosurgical education: part-task ventriculostomy simulation with dynamic visual and haptic feedback. Neurosurgery 61, 142–149. doi: 10.1227/01.neu.0000279734.22931.21

Lenggenhager, B., Mouthon, M., and Blanke, O. (2009). Spatial aspects of bodily self-consciousness. Conscious. Cogn. 18, 110–117. doi: 10.1016/j.concog.2008.11.003

Lenggenhager, B., Tadi, T., Metzinger, T., and Blanke, O. (2007). Video ergo sum: manipulating bodily self-consciousness. Science 317, 1096–1099. doi: 10.1126/science.1143439

Lopez, C., Halje, P., and Blanke, O. (2008). Body ownership and embodiment: vestibular and multisensory mechanisms. Neurophysiol. Clin. 38, 149–161. doi: 10.1016/j.neucli.2007.12.006

Maravita, A., and Iriki, A. (2004). Tools for the body (schema). Trends Cogn. Sci. 8, 79–86. doi: 10.1016/j.tics.2003.12.008

Massie, T. H., and Salisbury, J. K. (1994). “The phantom haptic interface: a device for probing virtual objects,” in Proceedings of the ASME Winter Annual Meeting, Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, Vol. 1. (Chicago, IL), 295–301.

Mitra, P., and Niemeyer, G. (2004). “Dynamic proxy objects in haptic simulations,” in IEEE Conference on Robotics, Automation and Mechatronics, 2004, Vol. 2 (Singapore: IEEE), 1054–1059.

Mori, M. (1970). Bukimi no tani [the uncanny valley]. Energy 7, 33–35.

Mori, M., MacDorman, K. F., and Kageki, N. (2012). The uncanny valley [from the field]. IEEE Robot. Autom. Mag. 19, 98–100. doi: 10.1109/MRA.2012.2192811

Oakley, I., McGee, M. R., Brewster, S., and Gray, P. (2000). “Putting the feel in “look and feel”,” in Proceedings of the SIGCHI conference on Human Factors in Computing Systems (The Hague: ACM), 415–422. doi: 10.1145/332040.332467

Pacchierotti, C., Sinclair, S., Solazzi, M., Frisoli, A., Hayward, V., and Prattichizzo, D. (2017). Wearable haptic systems for the fingertip and the hand: taxonomy, review, and perspectives. IEEE Trans. Haptics 10, 580–600. doi: 10.1109/TOH.2017.2689006

Perret, J., and Vander Poorten, E. (2018). “Touching virtual reality: a review of haptic gloves,” in ACTUATOR 2018; 16th International Conference on New Actuators (Bremen: VDE), 1–5.

Petkova, V. I., Khoshnevis, M., and Ehrsson, H. H. (2011). The perspective matters! Multisensory integration in ego-centric reference frames determines full-body ownership. Front. Psychol. 2:35. doi: 10.3389/fpsyg.2011.00035

Raz, L., Weiss, P. L., and Reiner, M. (2008). “The virtual hand illusion and body ownership,” in International Conference on Human Haptic Sensing and Touch Enabled Computer Applications (Madrid: Springer), 367–372. doi: 10.1007/978-3-540-69057-3_47

Rincon-Gonzalez, L., Warren, J. P., Meller, D. M., and Helms Tillery, S. (2011). Haptic interaction of touch and proprioception: implications for neuroprosthetics. IEEE Trans. Neural Syst. Rehabil. Eng. 19, 490–500. doi: 10.1109/TNSRE.2011.2166808

Sanchez-Vives, M. V., Spanlang, B., Frisoli, A., Bergamasco, M., and Slater, M. (2010). Virtual hand illusion induced by visuomotor correlations. PLoS ONE 5:e10381. doi: 10.1371/journal.pone.0010381

Shimada, S., Fukuda, K., and Hiraki, K. (2009). Rubber hand illusion under delayed visual feedback. PLoS ONE 4:e6185. doi: 10.1371/journal.pone.0006185

Slater, M., Pérez Marcos, D., Ehrsson, H., and Sanchez-Vives, M. V. (2009). Inducing illusory ownership of a virtual body. Front. Neurosci. 3:29. doi: 10.3389/neuro.01.029.2009

Srinivasan, M. A., and Basdogan, C. (1997). Haptics in virtual environments: taxonomy, research status, and challenges. Comput. Graph. 21, 393–404. doi: 10.1016/S0097-8493(97)00030-7

Tsakiris, M. (2010). My body in the brain: a neurocognitive model of body-ownership. Neuropsychologia 48, 703–712. doi: 10.1016/j.neuropsychologia.2009.09.034

Tsakiris, M., and Haggard, P. (2005). The rubber hand illusion revisited: visuotactile integration and self-attribution. J. Exp. Psychol. 31:80. doi: 10.1037/0096-1523.31.1.80

Tsakiris, M., Longo, M. R., and Haggard, P. (2010). Having a body versus moving your body: neural signatures of agency and body-ownership. Neuropsychologia 48, 2740–2749. doi: 10.1016/j.neuropsychologia.2010.05.021

Tsakiris, M., Prabhu, G., and Haggard, P. (2006). Having a body versus moving your body: how agency structures body-ownership. Conscious. Cogn. 15, 423–432. doi: 10.1016/j.concog.2005.09.004

Waltemate, T., Senna, I., Hülsmann, F., Rohde, M., Kopp, S., Ernst, M., et al. (2016). “The impact of latency on perceptual judgments and motor performance in closed-loop interaction in virtual reality,” in Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology (Garching bei München), 27–35. doi: 10.1145/2993369.2993381

Wobbrock, J. O., Findlater, L., Gergle, D., and Higgins, J. J. (2011). “The aligned rank transform for nonparametric factorial analyses using only Anova procedures,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '11 (New York, NY: ACM), 143–146. doi: 10.1145/1978942.1978963

Keywords: virtual reality, haptic feedback, embodiment, vibrotactile feedback, kinesthesia, force-feedback

Citation: Richard G, Pietrzak T, Argelaguet F, Lécuyer A and Casiez G (2021) Studying the Role of Haptic Feedback on Virtual Embodiment in a Drawing Task. Front. Virtual Real. 1:573167. doi: 10.3389/frvir.2020.573167

Received: 16 July 2020; Accepted: 26 October 2020;
Published: 18 January 2021.

Edited by:

Daniel Thalmann, École Polytechnique Fédérale de Lausanne, Switzerland

Reviewed by:

Selim Balcisoy, Sabancı University, Turkey
Anderson Maciel, Federal University of Rio Grande Do Sul, Brazil

Copyright © 2021 Richard, Pietrzak, Argelaguet, Lécuyer and Casiez. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Grégoire Richard, gregoire.richard@inria.fr
