ORIGINAL RESEARCH article
Front. Neurorobot.
Volume 19 - 2025 | doi: 10.3389/fnbot.2025.1628968
This article is part of the Research Topic: Multimodal human action recognition in real or virtual environments
Tri-Manual Interaction in Hybrid BCI-VR Systems: Integrating Gaze, EEG Control for Enhanced 3D Object Manipulation
Provisionally accepted
- 1 Lingnan Normal University, Zhanjiang, China
- 2 Sehan University, Yeongam, Republic of Korea
- 3 Arrow Technology Company, Zhuhai, China
Brain-computer interface (BCI) integration with virtual reality (VR) has progressed from single-limb control to multi-limb coordination, yet achieving intuitive tri-manual operation remains challenging. This study presents a consumer-grade hybrid BCI-VR framework enabling simultaneous control of two biological hands and a virtual third limb through integration of Tobii eye-tracking, NeuroSky single-channel EEG, and non-haptic controllers. The system employs eSense attention thresholds (>80% for 300 ms) to trigger virtual hand activation, combined with gaze-driven targeting within 45° visual cones. A soft-maximum weighted arbitration algorithm resolves spatiotemporal conflicts between manual and virtual inputs with a 92.4% success rate. Experimental validation with eight participants across 160 trials demonstrated an 87.5% virtual hand success rate and a 41% spatial error reduction (σ = 0.23 mm vs. 0.39 mm) compared with traditional dual-hand control. The framework achieved 320 ms activation latency and a 22% NASA-TLX workload reduction through adaptive cognitive load management. Time-frequency analysis revealed characteristic beta-band (15-20 Hz) energy modulations during successful virtual limb control, providing neurophysiological evidence for attention-mediated supernumerary limb embodiment. These findings demonstrate that sophisticated algorithmic approaches can compensate for consumer-grade hardware limitations, enabling laboratory-grade precision in accessible tri-manual VR applications for rehabilitation, training, and assistive technologies.
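The abstract describes triggering virtual hand activation when the eSense attention value stays above 80% for 300 ms. A minimal sketch of such a dwell-based trigger is shown below; the function names, the per-sample update interface, and the millisecond timestamps are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a dwell-based attention trigger: the virtual hand
# activates once the attention value has remained above the threshold
# continuously for the dwell duration. All names here are illustrative.

def make_attention_trigger(threshold=80.0, dwell_ms=300.0):
    """Return a stateful update function consuming (timestamp_ms, attention)
    samples; it reports True once attention has exceeded `threshold`
    continuously for at least `dwell_ms` milliseconds."""
    state = {"above_since": None}  # timestamp when attention first crossed up

    def update(timestamp_ms, attention):
        if attention > threshold:
            if state["above_since"] is None:
                state["above_since"] = timestamp_ms  # dwell window starts now
            return timestamp_ms - state["above_since"] >= dwell_ms
        state["above_since"] = None  # any drop below threshold resets the dwell
        return False

    return update


# Example: samples every 100 ms; activation fires once 300 ms of
# sustained high attention has elapsed, and resets after a dip.
trigger = make_attention_trigger()
print(trigger(0, 85))    # False (dwell just started)
print(trigger(100, 90))  # False (100 ms elapsed)
print(trigger(300, 88))  # True  (300 ms sustained above threshold)
print(trigger(400, 50))  # False (attention dropped; dwell reset)
```

A continuous-dwell rule like this suppresses activations from momentary attention spikes, which is one plausible way consumer-grade single-channel EEG noise could be handled.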
Keywords: brain-computer interface (BCI), cognitive load, virtual reality (VR), multimodal interaction, collaborative control
Received: 15 May 2025; Accepted: 28 Jul 2025.
Copyright: © 2025 Teng, Cho and Lee. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Jian Teng, Lingnan Normal University, Zhanjiang, China
Sukyoung Cho, Sehan University, Yeongam, Republic of Korea
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.