ORIGINAL RESEARCH article
Front. Virtual Real.
Sec. Technologies for VR
Volume 6 - 2025 | doi: 10.3389/frvir.2025.1629908
"Did you hear that?": Software-based spatial audio enhancements increase self-reported and physiological indices on auditory presence and affect in virtual reality First Author 1* , Second Author 2 , Third Author 3 , Forth Author 4 , Fifth Author 1 , Sixth Author 5 , Seventh Author 5 , Eighth Author 4 , Nineth Author 4 , Tenth Author 5 , Eleventh Author 4 , Twelfth Author 4 , Thirteenth Author 4* , Fourteenth Author 4
Provisionally accepted
1 Cognitive Science and Artificial Intelligence, Tilburg University, Tilburg, Netherlands
2 Bournemouth University, Bournemouth, United Kingdom
3 University of Geneva, Geneva, Switzerland
4 Bongiovi Acoustic Labs, Port St Lucie, United States
5 Emteq Labs, Brighton, United Kingdom
This study investigates the impact of a software-based audio enhancement tool in virtual reality (VR), examining the relationship between spatial audio, immersion, and affective responses using self-reports and physiological measures. Sixty-eight participants experienced two VR scenarios, a commercial game (Job Simulator) and a non-commercial simulation (Escape VR), under both enhanced and normal audio conditions. In this paper we propose a dual-method assessment approach, combining self-reports with moment-by-moment physiological data analysis, and emphasize the value of continuous physiological tracking for detecting subtle electrophysiological changes in VR-simulated experiences. Results show that enhanced "localised" audio significantly improved self-reported and physiological indices of auditory presence and affect.
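To make the "moment-by-moment" analysis concrete, the sketch below illustrates one way such continuous physiological tracking could be compared across the two audio conditions: a sampled signal (e.g., skin conductance) is split into short windows, and per-window averages are compared between the enhanced and normal conditions. This is a minimal, hypothetical example, not the authors' actual pipeline; the sampling rate, window length, function names, and simulated data are assumptions for illustration only.

```python
# Hypothetical sketch of window-based ("moment-by-moment") physiological analysis.
# Not the study's pipeline; names, parameters, and data are illustrative assumptions.
import numpy as np
from scipy import stats

FS = 32  # assumed sampling rate in Hz

def windowed_means(signal, fs=FS, win_s=5.0):
    """Average the signal over consecutive non-overlapping windows."""
    win = int(fs * win_s)
    n_windows = len(signal) // win
    trimmed = signal[: n_windows * win]
    return trimmed.reshape(n_windows, win).mean(axis=1)

# Simulated 5-minute recordings standing in for real sensor data per condition.
rng = np.random.default_rng(0)
enhanced_audio = rng.normal(0.55, 0.1, FS * 300)
normal_audio = rng.normal(0.50, 0.1, FS * 300)

enh_windows = windowed_means(enhanced_audio)
nor_windows = windowed_means(normal_audio)

# Paired comparison of window-level means across the two audio conditions.
t_stat, p_value = stats.ttest_rel(enh_windows, nor_windows)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```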
Keywords: audio systems, virtual reality, sensor systems, human-computer interaction, electrophysiology, signal analysis
Received: 16 May 2025; Accepted: 08 Jul 2025.
Copyright: © 2025 Mavridou, Seiss, Ugazio, Harpster, Brown, Cox, Panchevski, Erie, Lopez Jr, Copt, Nduka, Hughes, Butera and Weiss. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Ifigeneia Mavridou, Cognitive Science and Artificial Intelligence, Tilburg University, Tilburg, Netherlands
Daniel N Weiss, Bongiovi Acoustic Labs, Port St Lucie, United States
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.