<?xml version="1.0" encoding="utf-8"?>
    <rss version="2.0">
      <channel xmlns:content="http://purl.org/rss/1.0/modules/content/">
        <title>Frontiers in Virtual Reality | Haptics section | New and Recent Articles</title>
        <link>https://www.frontiersin.org/journals/virtual-reality/sections/haptics</link>
        <description>RSS Feed for Haptics section in the Frontiers in Virtual Reality journal | New and Recent Articles</description>
        <language>en-us</language>
        <generator>Frontiers Feed Generator,version:1</generator>
        <pubDate>Thu, 07 May 2026 00:24:26 GMT</pubDate>
        <ttl>60</ttl>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2025.1685366</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2025.1685366</link>
        <title><![CDATA[Combination of hanger reflex and optical flow enhances head rotation and influences its direction]]></title>
        <pubDate>Thu, 08 Jan 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Yuki Kon</author><author>Hiroyuki Kajimoto</author>
        <description><![CDATA[The Hanger Reflex (HR) is a tactile illusion phenomenon in which involuntary head rotation occurs when a person wears a wire hanger on the head. This rotation is induced by skin deformation caused by pressure on the head, and can occur in multiple directions, including yaw, pitch, and roll. However, it has been reported that the Hanger Reflex tends to occur more easily along certain axes, while being less likely along others. To address this limitation, we propose a method to alter the direction of head rotation induced by the Hanger Reflex by simultaneously presenting optical flow (OF) stimuli. In our experiment, we presented optical flow stimuli with varying directions and phases while inducing the Hanger Reflex. As a result, we confirmed that the head rotation, which originally occurred primarily along the yaw axis due to the Hanger Reflex, could be shifted to the pitch or roll axis depending on the direction of the presented optical flow. Furthermore, compared to the condition using only the Hanger Reflex, the combined condition with optical flow and the Hanger Reflex produced 2.1 to 2.7 times more head rotation in the pitch and roll directions. These findings suggest that a Hanger Reflex-based device, originally limited to yaw-axis head rotation, can be repurposed as a head rotation actuator in arbitrary directions when combined with optical flow. This has potential applications in the design of VR content.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2025.1623119</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2025.1623119</link>
        <title><![CDATA[Proactive experiences of vibrotactile stimuli enhance learners’ task-specific confidence in cooking skills]]></title>
        <pubDate>Wed, 07 Jan 2026 00:00:00 GMT</pubDate>
        <category>Brief Research Report</category>
        <author>Sana Inoue</author><author>Kakagu Komazaki</author><author>Yuji Wada</author><author>Junji Watanabe</author>
        <description><![CDATA[In skill acquisition, learners’ motivation plays an important role. Based on embodied cognition and haptic learning, we hypothesized that vibro-acoustic stimuli could serve as valuable feedback to enhance motivation. In this study, three experiments explored whether such stimuli strengthen learners’ task-specific confidence in their cooking abilities. Experiment 1 examined the discriminability of vibro-acoustic stimuli as feedback. Sounds and vibrations were recorded when a professional chef and an amateur cut two types of food, and were presented via a haptic device to participants, who evaluated the sensations. Participants successfully distinguished between the two performers solely through the vibro-acoustic stimuli. Experiments 2 and 3 investigated how vibro-acoustic stimuli corresponding to food cutting affect task-specific confidence. Participants who actively experienced the stimuli while watching a professional chef’s video showed greater confidence than those who only felt the stimuli passively. These findings point to the potential applicability of vibro-acoustic stimuli in facilitating certain aspects of cooking skill acquisition, although the current work is an exploratory step toward integrating haptic cues into cooking education rather than a demonstration of performance improvement.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2025.1718198</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2025.1718198</link>
        <title><![CDATA[“Did she just tap me?”: qualifying multisensory feedback for social touch during human-agent interaction in virtual reality]]></title>
        <pubDate>Tue, 06 Jan 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Grégoire Richard</author><author>Fabien Boucaud</author><author>Catherine Pelachaud</author><author>Indira Thouvenin</author>
        <description><![CDATA[Social touch is one of the many modalities of social communication. It can take many forms (from handshakes to hugs) and can convey a wide variety of emotions and intentions. Social touch is also multimodal: it comprises not only haptic feedback (both tactile and kinesthetic), but also visual feedback (gestures) and even audio feedback (the sound of the hand moving on the body). Virtual agents (VAs) can perceive and interact with users by making use of multimodality to express attitudes and complex emotions. Few studies have investigated how to integrate touch into VAs’ social abilities, despite a growing interest in haptic interactions within immersive virtual environments (IVEs). While prior work has examined haptic feedback, auditory substitution, or agent-based touch in isolation, no study has systematically disentangled the respective and combined contributions of visual, auditory, and tactile cues to the perception of being socially touched by a virtual agent. To address this gap, we conducted three experiments that progressively isolate and combine modalities, revealing how each shapes touch recognition, believability, and agency attribution. Taken together, our results show that multisensory feedback improves the experience of social touch, in line with previous research in IVEs. They also show that visual feedback contributes most to guiding recognition of the stimulus, while audio and tactile feedback further help disambiguate a touch gesture in particular cases. The results also show that both the level of animation and the interpersonal distance are essential to how strongly the VA is perceived as a social agent when initiating touch. While visual and tactile feedback are the main contributors to participants feeling touched by the VA, audio feedback also has a significant impact.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2025.1648019</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2025.1648019</link>
        <title><![CDATA[Safety for mobile encountered-type haptic devices in large-scale virtual reality]]></title>
        <pubDate>Fri, 03 Oct 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Soroosh Mortezapoor</author><author>Mohammad Ghazanfari</author><author>Khrystyna Vasylevska</author><author>Emanuel Vonach</author><author>Hannes Kaufmann</author>
        <description><![CDATA[Mobile robots are becoming more common in Virtual Reality applications, especially for delivering physical interactions through Encountered-Type Haptic Devices (ETHD). However, current safety standards and definitions of collaborative robots (Cobots) do not sufficiently address situations where an immersed user shares the workspace and interacts with a mobile robot without seeing it. In this paper, we explore the specific safety challenges of using mobile platforms for ETHDs in a large-scale immersive setup. We review existing robotic safety standards and perform a risk assessment of our immersive autonomous mobile ETHD system CoboDeck. We demonstrate a structured approach for potential risk identification and mitigation via a set of hardware and software safety measures. These include strategies for robot behavior, such as pre-emptive repositioning and active collision avoidance, as well as fallback mechanisms. We suggest a simulation-based testing framework that allows evaluating the safety measures systematically before involving human subjects. Based on that, we examine the impact of different proposed safety strategies on the number of collisions, robot movement, haptic feedback rendering, and noise resilience. Our results show considerably improved safety and robustness with our suggested approach.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2025.1616442</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2025.1616442</link>
        <title><![CDATA[Neuroadaptive haptics: a proof-of-concept comparing reinforcement learning from explicit ratings and neural signals for adaptive XR systems]]></title>
        <pubDate>Mon, 11 Aug 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Lukas Gehrke</author><author>Aleksandrs Koselevs</author><author>Marius Klug</author><author>Klaus Gramann</author>
        <description><![CDATA[Introduction: Neuroadaptive technology provides a promising path to enhancing immersive extended reality (XR) experiences by dynamically tuning multisensory feedback to user preferences. This study introduces a novel system employing reinforcement learning (RL) to adapt haptic rendering in XR environments based on user feedback derived either explicitly from user ratings or implicitly from neural signals measured via electroencephalography (EEG). Methods: Participants interacted with virtual objects in a VR environment and rated their experience using a traditional questionnaire while their EEG data were recorded. Then, in two RL conditions, an RL agent tried to tune the haptics to the user, learning either from rewards based on explicit ratings or from implicit neural signals decoded from EEG. Results: The neural decoder achieved a mean F1 score of 0.8, supporting informative yet noisy classification. Exploratory analyses revealed instability in the RL agent’s behavior in both the explicit and implicit feedback conditions. Discussion: A limited number of interaction steps likely constrained exploration and contributed to convergence instability. Revisiting the interaction design to support more frequent sampling may improve robustness to EEG noise and mitigate drifts in subjective experience. By demonstrating RL-based adaptation from implicit neural signals, our proof-of-concept is a step towards seamless, low-friction personalization in XR.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2025.1566680</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2025.1566680</link>
        <title><![CDATA[VR environment of digital design laboratory: a usability study]]></title>
        <pubDate>Fri, 18 Jul 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Mariam Ali Alnuaimi</author><author>Mamoun Awad</author>
        <description><![CDATA[Introduction: Virtual reality (VR) offers an immersive learning environment with the potential to revolutionize how students engage with complex topics and interact during classes. Great care must be taken when designing and developing VR applications. In this paper, we present a usability study of our proposed educational VR application designed for the digital design and computer organization lab. The VR-lab is meant to support the traditional lab taught in the first year of college. Methods: To assess the VR application, we conducted a usability study with a wide range of students from different colleges, evaluating their performance on tasks within the VR environment and collecting their feedback on the application’s design, functionality, and ease of use. By analyzing quantitative data (task completion rates and time spent) and qualitative feedback (surveys and observations), we identified potential usability issues and recommended improvements to enhance the learning experience within the VR application. Results: The usability study showed that the majority of students found the VR application immersive and engaging, and viewed their experience as helpful for better understanding complex concepts. At a 95% confidence level, the VR application significantly improved students’ learnability, memorability, and average task completion time. Discussion: Additionally, the study found that careful design is required and that technology must be used selectively, given population variation in technology knowledge, health factors, and readiness.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2025.1617481</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2025.1617481</link>
        <title><![CDATA[Pain intensity control in virtual reality via modulated cooling rates in thermal grill illusion]]></title>
        <pubDate>Tue, 01 Jul 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Shinnosuke Hori</author><author>Renke Liu</author><author>Haruo Igarashi</author><author>Hideyuki Sawada</author>
        <description><![CDATA[As virtual reality (VR) technology advances, the need for realistic pain presentation to enhance immersion grows. The thermal grill illusion (TGI), which elicits a burning sensation through the simultaneous application of warm and cold stimuli, has emerged as a promising technique. To apply TGI in VR for pain intensity control, clarifying the relationship between stimulus parameters and perceptual intensity is crucial. In this study, we constructed a TGI display using six thermoelectric devices and conducted two user experiments. The first experiment investigated the relationship between the cooling rate of cold stimuli and TGI-induced pain intensity. Results indicated that faster cooling rates intensified perceived pain. The second experiment implemented pain intensity control in a VR environment. As a result, six out of seven participants reported that the perceived intensity of pain changed in response to the changes in the VR stimuli. Our findings demonstrate that manipulating the cooling rate can effectively control pain intensity in VR, enhancing the realism and immersion of VR experiences.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2025.1528675</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2025.1528675</link>
        <title><![CDATA[Method for modifying haptic feedback by displaying onomatopoeia]]></title>
        <pubDate>Mon, 24 Mar 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Izumi Mizoguchi</author><author>Sho Sakurai</author><author>Koichi Hirota</author><author>Takuya Nojima</author>
        <description><![CDATA[Haptic feedback, which enhances operability and realism, is extensively employed in smartphones and controllers. One notable tactile presentation method involves the use of visual stimuli to evoke tactile sensations, exemplified by the concept of pseudo-haptics. In these methodologies, resistance and force are simulated by modulating the velocity of the avatar’s finger or the operation pointer. Currently, there exists a discrepancy between the user’s inherent sensory perception and the visual information presented. In this study, we propose a novel approach to modifying the perception of tactile stimulation by concurrently presenting an onomatopoeic word with the tactile stimulus. For the experiment, we developed a smartphone application that, upon tapping a displayed button, triggers both a vibration stimulus and the presentation of an onomatopoeic word that conveys a sense of touch and sound. We employed six switches with varying tactile sensations to evaluate whether the user’s perception would be influenced by the type of onomatopoeia displayed in the application. The experimental results demonstrated that five onomatopoeic words elicited distinct tactile sensations. Additionally, we observed that four of these words enhanced the perceived realism of the button-press sensation. This method diverges from existing techniques by altering the tactile perception through visually presented linguistic information. Although this approach is constrained to scenarios where the haptic object is within the visual field, it is straightforward to implement and can be readily applied to existing smartphones and virtual reality devices.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2024.1442829</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2024.1442829</link>
        <title><![CDATA[Influence of habituation on pseudo-haptic weight perception of virtual objects]]></title>
        <pubDate>Tue, 12 Nov 2024 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Kenta Ito</author><author>Yuki Ban</author><author>Shin’ichi Warisawa</author>
        <description><![CDATA[As the demand for realistic sensations in virtual reality (VR) environments escalates, interfaces that exploit cross-modal interactions between vision and haptics are gaining prominence. Pseudo-haptics, offering a tactile illusion without mechanical feedback devices, has emerged as a viable solution to enhance user immersion by simulating various sensations such as texture, fluid resistance, and weight. However, a potential issue in the long-term effectiveness of these illusions, particularly habituation, where the illusion diminishes after prolonged exposure, remains a concern. This study investigates whether habituation to pseudo-haptic weight illusions occurs with extended use. We conducted quantitative measurements of pseudo-haptic weight perception before and after prolonged exposure. The findings indicate a diminishing effect of pseudo-haptic weight illusion with repeated exposure, particularly among participants with high accuracy in weight perception. This study contributes important insights into the design of VR experiences, highlighting the need to consider the habituation effects when implementing pseudo-haptic illusions for weight perception.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2024.1406923</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2024.1406923</link>
        <title><![CDATA[Electrotactile displays: taxonomy, cross-modality, psychophysics and challenges]]></title>
        <pubDate>Fri, 13 Sep 2024 00:00:00 GMT</pubDate>
        <category>Review</category>
        <author>Rahul Kumar Ray</author><author>Madhan Kumar Vasudevan</author><author>M. Manivannan</author>
        <description><![CDATA[Touch is one of the primary senses, and the receptors for touch sense are spread across the whole human body. Electrotactile displays provide tactile feedback to generate different sensations (such as tickling, tingling, itching, and pressure) in human-computer interfaces or man-machine interactions. These displays encode tactile properties, such as shape and texture, facilitating immersive experiences in virtual or remote environments. Their compact form factor and low maintenance requirements render them versatile for myriad applications. This paper is a comprehensive survey of the design and implementation of electrotactile displays, elucidating their taxonomy, cross-modal integration strategies, and psychophysical underpinnings. Emphasizing the crucial role of psychophysics, it delineates how human perception informs the design and utilization of electrotactile displays. Furthermore, this paper identifies prevalent challenges in electrotactile displays and outlines future directions to advance their development and deployment.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2024.1379351</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2024.1379351</link>
        <title><![CDATA[Synergy and medial effects of multimodal cueing with auditory and electrostatic force stimuli on visual field guidance in 360° VR]]></title>
        <pubDate>Tue, 04 Jun 2024 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Yasuhito Sawahata</author><author>Masamitsu Harasawa</author><author>Kazuteru Komine</author>
        <description><![CDATA[This study investigates the effects of multimodal cues on visual field guidance in 360° virtual reality (VR). Although this technology provides highly immersive visual experiences through spontaneous viewing, this capability can disrupt the quality of experience and cause users to miss important objects or scenes. Multimodal cueing using non-visual stimuli to guide the users’ heading, or their visual field, has the potential to preserve the spontaneous viewing experience without interfering with the original content. In this study, we present a visual field guidance method that imparts auditory and haptic stimulations using an artificial electrostatic force that can induce a subtle “fluffy” sensation on the skin. We conducted a visual search experiment in VR, wherein the participants attempted to find visual target stimuli both with and without multimodal cues, to investigate the behavioral characteristics produced by the guidance method. The results showed that the cues aided the participants in locating the target stimuli. However, the performance with simultaneous auditory and electrostatic cues was situated between those obtained when each cue was presented individually (medial effect), and no improvement was observed even when multiple cue stimuli pointed to the same target. In addition, a simulation analysis showed that this intermediate performance can be explained by the integrated perception model; that is, it is caused by an imbalanced perceptual uncertainty in each sensory cue for orienting to the correct view direction. The simulation analysis also showed that an improved performance (synergy effect) can be observed depending on the balance of the uncertainty, suggesting that a relative amount of uncertainty for each cue determines the performance. These results suggest that electrostatic force can be used to guide 360° viewing in VR, and that the performance of visual field guidance can be improved by introducing multimodal cues, the uncertainty of which is modulated to be less than or comparable to that of other cues. Our findings on the conditions that modulate multimodal cueing effects contribute to maximizing the quality of spontaneous 360° viewing experiences with multimodal guidance.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2023.1242587</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2023.1242587</link>
        <title><![CDATA[Haptic feedback in a virtual crowd scenario improves the emotional response]]></title>
        <pubDate>Tue, 28 Nov 2023 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>R. K. Venkatesan</author><author>Domna Banakou</author><author>Mel Slater</author><author>Manivannan M.</author>
        <description><![CDATA[Research has shown that incorporating haptics into virtual environments can increase sensory fidelity and provide powerful and immersive experiences. However, current studies on haptics in virtual interactions primarily focus on one-on-one scenarios, while kinesthetic haptic interactions in large virtual gatherings are underexplored. This study aims to investigate the impact of kinesthetic haptics on eliciting emotional responses within crowded virtual reality (VR) scenarios. Specifically, we examine the influence of type or quality of the haptic feedback on the perception of positive and negative emotions. We designed and developed different combinations of tactile and torque feedback devices and evaluated their effects on emotional responses. To achieve this, we explored different combinations of haptic feedback devices, including “No Haptic,” “Tactile Stimulus” delivering tactile cues, and “Haptic Stimulus” delivering tactile and torque cues, in combination with two immersive 360-degree video crowd scenarios, namely, “Casual Crowd” and “Aggressive Crowd.” The results suggest that varying the type or quality of haptic feedback can evoke different emotional responses in crowded VR scenarios. Participants reported increased levels of nervousness with Haptic Stimulus in both virtual scenarios, while both Tactile Stimulus and Haptic Stimulus were negatively associated with pleasantness and comfort during the interaction. Additionally, we observed that participants’ sense of touch being real was enhanced in Haptic Stimulus compared to Tactile Stimulus. The “Haptic Stimulus” condition had the most positive influence on participants’ sense of identification with the crowd.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2023.1133146</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2023.1133146</link>
        <title><![CDATA[Softness presentation by combining electro-tactile stimulation and force feedback]]></title>
        <pubDate>Fri, 24 Mar 2023 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Yui Suga</author><author>Masahiro Takeuchi</author><author>Satoshi Tanaka</author><author>Hiroyuki Kajimoto</author>
        <description><![CDATA[To provide realistic tactile sensations in a virtual environment, it is necessary to stimulate both the cutaneous and proprioceptive senses. This study focuses on a realistic method of presenting softness through the use of electro-tactile stimulation. Our system combines a force-feedback device with an electric stimulator to create a soft sensation by applying a reaction force and spreading cutaneous sensation based on the amount of indentation. We measured the change in the contact area of gel samples and used electric stimulation to reproduce the increase in the contact area of the sample. We conducted a psychophysical experiment to evaluate the effectiveness of the combination of cutaneous and force sensations and confirmed that the sensation of softness was enhanced by the simultaneous presentation.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2023.1070739</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2023.1070739</link>
        <title><![CDATA[Effective haptic feedback type for robot-mediated material discrimination depending on target properties]]></title>
        <pubDate>Fri, 03 Feb 2023 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Minami Takahashi</author><author>Hikaru Nagano</author><author>Yuichi Tazaki</author><author>Yasuyoshi Yokokohji</author>
        <description><![CDATA[Haptic feedback enables material perception via remote robotics. Both force and vibration information are essential for haptic feedback, and it is important to understand their applicability in different situations. In this study, the relationship between the effective type of haptic feedback and target properties in robot-mediated material discrimination was investigated. A remote-control system including a force presentation device and a wearable vibrotactile display was constructed. In the first experiment, the discrimination performance of material hardness was compared between two types of feedback: force-only and hybrid (vibrotactile and force). The results show that both feedback conditions allow statistically significant discrimination of the stimuli, and no significant difference in correct-answer rates between the two feedback conditions was observed. This indicates that force feedback was effective for hardness discrimination, and that there was no superimposed effect of the hybrid condition. In the second experiment, the discrimination performance of material roughness was compared between three types of feedback (force, vibrotactile, and hybrid). The results indicate that the rates of correct responses for the hybrid feedback condition were significantly higher than those for the force condition. This suggests that hybrid feedback is effective for roughness discrimination. Therefore, the effective type of feedback depends on the properties of the target materials, and the superimposed effect of hybrid feedback was observed only in roughness discrimination. These findings play an important role in selecting the best feedback method for a given situation or constructing multiple feedback methods that achieve high discrimination performance.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2022.1019302</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2022.1019302</link>
        <title><![CDATA[Introducing wearable haptics for rendering velocity feedback in VR serious games for neuro-rehabilitation of children]]></title>
        <pubDate>Fri, 06 Jan 2023 00:00:00 GMT</pubDate>
        <category>Brief Research Report</category>
        <author>Cristian Camardella</author><author>Domenico Chiaradia</author><author>Ilaria Bortone</author><author>Antonio Frisoli</author><author>Daniele Leonardis</author>
        <description><![CDATA[Rehabilitation in virtual reality offers advantages in terms of flexibility and parametrization of exercises, repeatability, and continuous data recording and analysis of the progress of the patient, also promoting high engagement and cognitive challenges. Still, most of the proposed virtual settings provide high-quality, immersive visual and audio feedback without involving the sense of touch. In this paper, we show the design, implementation, and first evaluation of a gaming scenario for upper limb rehabilitation of children with cerebral palsy. In particular, we took care to introduce haptic feedback as a useful source of sensory information for the proposed task, considering, at the same time, the strict constraints for haptic wearable devices to comply with the patient’s comfort, residual motor abilities, and the embedded tracking features of the latest VR technologies. To show the potential of haptics in a rehabilitation setup, the proposed device and rendering method have been used to improve the velocity control of upper limb movements during the VR exercise, given its importance as a motor recovery metric. Eight healthy participants were enrolled, and results showed that haptic feedback can lead to lower speed tracking errors and higher movement smoothness, making the proposed setup suitable for use in a rehabilitation context as a way to promote movement fluidity during exercises.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2022.997426</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2022.997426</link>
        <title><![CDATA[Walking on paintings: Assessment of passive haptic feedback to enhance the immersive experience]]></title>
        <pubDate>2022-09-07T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Mounia Ziat</author><author>Rishi Jhunjhunwala</author><author>Gina Clepper</author><author>Pamela Davis Kivelson</author><author>Hong Z. Tan</author>
        <description><![CDATA[Virtual reality has been used in recent years for artistic expression and as a tool to engage visitors by creating immersive experiences. Most of these immersive installations incorporate visuals and sounds to enhance the user’s interaction with the artistic pieces. Very few, however, involve physical or haptic interaction. This paper investigates virtual walking on paintings using passive haptics. More specifically, we combined vibration and ultrasound technology on the feet in four different configurations to evaluate users’ immersion while virtually walking on paintings that transform into 3D landscapes. Results show that participants with higher immersive tendencies experienced the virtual walking, reporting illusory movement of their body regardless of the haptic configuration used.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2022.954808</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2022.954808</link>
        <title><![CDATA[Avatar embodiment in VR: Are there individual susceptibilities to visuo-tactile or cardio-visual stimulations?]]></title>
        <pubDate>2022-09-06T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Yvan Pratviel</author><author>Alix Bouni</author><author>Véronique Deschodt-Arsac</author><author>Florian Larrue</author><author>Laurent M. Arsac</author>
        <description><![CDATA[Virtual reality has clear potential to help humans develop or recover brain functions, operating through the modulation of multisensory inputs. Some interventions using VR rely on the need to embody a virtual avatar, which stimulates cognitive-motor adaptations. Recent research has shown that embodiment can be facilitated by synchronizing natural sensory inputs with their visual redundancy on the avatar, e.g., the user’s heartbeat flashing around the avatar (cardio-visual stimulation) or the user’s body being physically stroked while the avatar is touched in synchrony (visuo-tactile stimulation). While different full-body illusions have shown clear value in health and disease, it is unknown to date whether individual susceptibilities to illusion are equivalent for cardio-visual and visuo-tactile stimulations. In fact, a number of factors, such as interoception, vestibular processing, a pronounced visual dependence, a specific cognitive ability for mental rotations, or user traits and habits like empathy and video game practice, may interfere with the multifaceted construct of bodily self-consciousness, the conscious experience of owning a body in space from which the world is perceived. Here, we evaluated a number of dispositions in twenty-nine young and healthy participants exposed alternately to cardio-visual and visuo-tactile stimulations to induce full-body illusions. Three components of bodily self-consciousness consensually identified in recent research, namely self-location, perspective taking, and self-identification, were quantified by self-reported feelings (questionnaires) and by specific VR tasks administered before and after the multisensory stimulations. The VR tasks measured self-location in reference to a virtual ball rolling toward the participant, perspective taking through visuomotor response times when mentally rotating an avatar suddenly presented at different angles, and self-identification through heart rate dynamics in response to a threatening stimulus applied to the (embodied) avatar. The full-body illusion was evidenced by self-reported ratings of self-identification with the avatar reaching scores in agreement with the literature, lower reaction times when taking the perspective of the avatar, and a marked drop in heart rate, indicating a freezing reaction, when the user saw the avatar being pierced by a spear. Changes in bodily self-consciousness components were not significantly dependent on the type of multisensory stimulation (visuo-tactile or cardio-visual). A principal component analysis demonstrated the lack of covariation between those components, pointing to the relative independence of the self-location, perspective taking, and self-identification measurements. Moreover, none of these components showed significant covariation with any of the individual dispositions. These results support the hypothesis that cardio-visual and visuo-tactile stimulations affect the main components of bodily self-consciousness to an extent that, on average, is mostly independent of individual perceptive-cognitive profiles, at least in healthy young people. Although this is an important observation at the group level, indicating a similar probability of inducing embodiment with either cardio-visual or visuo-tactile stimulations in VR, these results do not rule out that some individuals might have a higher susceptibility to specific sensory inputs, which would represent a target for adapting efficient VR stimulations.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2022.930848</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2022.930848</link>
        <title><![CDATA[Head rotation and illusory force sensation by lateral skin stretch on the face]]></title>
        <pubDate>2022-08-09T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Masahiro Miyakami</author><author>Akifumi Takahashi</author><author>Hiroyuki Kajimoto</author>
        <description><![CDATA[Various methods have been proposed for inducing an illusory force sensation, presenting a sense of force to users with energy- and space-saving systems. One such method is the illusory force sensation induced by cutaneous sensory stimulation. In this study, we hypothesized and empirically verified that lateral skin stretch alone on the face can induce an illusory force sensation in the direction of the stretch. We focused on the anterior temporal and cheekbone regions, where the cushion of a head-mounted display contacts the skin, and applied skin stretches of different intensities to these regions, envisioning a force presentation device built into the head-mounted display. Skin stretches in the anterior temporal and cheekbone regions generated head rotations of approximately 40 and 50 degrees, respectively, confirming an illusory force sensation in the direction of rotation. We confirmed a positive correlation between the head-turning angle and the amount of skin deformation. The intensity of the illusory force sensation can be controlled by changing the amount of lateral skin deformation, which may be applied to the development of a new force presentation head-mounted device.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2022.949324</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2022.949324</link>
        <title><![CDATA[A human-in-the-loop haptic interaction with subjective evaluation]]></title>
        <pubDate>2022-07-26T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Ying Fang</author><author>Yangjun Qiao</author><author>Fanrong Zeng</author><author>Keke Zhang</author><author>Tiesong Zhao</author>
        <description><![CDATA[To date, one of the challenges in Human-Computer Interaction (HCI) is fully immersive, multisensory remote physical interaction. Applying haptic perception in HCI can enrich interaction details and effectively improve the immersion and realism of interaction. In a human-in-the-loop haptic interaction system, the quality of experience (QoE) of the human operator plays an essential role; however, research on QoE in haptic interaction is still in its infancy. Based on typical application scenarios of haptic operation, this paper constructs a haptic-visual interaction framework and analyzes the factors influencing QoE. Through subjective evaluation experiments, the paper establishes a haptic interaction database that can provide a research basis for further exploring the relationship between these influencing factors and interactive QoE.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frvir.2022.925794</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frvir.2022.925794</link>
        <title><![CDATA[PARTI – A Haptic Virtual Reality Control Station for Model-Mediated Robotic Applications]]></title>
        <pubDate>2022-07-18T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Jean Elsner</author><author>Gerhard Reinerth</author><author>Luis Figueredo</author><author>Abdeldjallil Naceri</author><author>Ulrich Walter</author><author>Sami Haddadin</author>
        <description><![CDATA[In this paper, we introduce a tele-robotic station called “PARTI” that leverages state-of-the-art robotics and virtual reality technology to enable immersive haptic interaction with simulated and remote environments through robotic avatars. Our hardware-in-the-loop framework integrates accurate multibody system dynamics and frictional contacts with digital twins of our robots in a virtual environment with real-time computational capabilities. This model-mediated, hardware-in-the-loop approach to robotic control allows a teleoperator to use the PARTI system to teach, evaluate, and control various robotic applications. In the current contribution, we focus on the general system description, the integrated simulation and control framework, and a series of experiments highlighting the advantages of our approach.]]></description>
      </item>
      </channel>
    </rss>