
ORIGINAL RESEARCH article

Front. Psychol.

Sec. Perception Science

Volume 16 - 2025 | doi: 10.3389/fpsyg.2025.1584250

This article is part of the Research Topic: Neurocinematics: How the Brain Perceives Audiovisuals

The Neural Impact of Editing on Viewer Narrative Cognition in Virtual Reality Films: Eye-Tracking Insights into Neural Mechanisms

Provisionally accepted
Qiaoling Zou1*, Wanyu Zheng2, Zishun Su3, Li Zhang2, Ziqing Zhuo4, Dongning Li2*
  • 1Jiangnan University, Wuxi, China
  • 2School of Digital Technology & Innovation Design, Jiangnan University, Wuxi, Jiangsu Province, China
  • 3School of Art and Design, Shanghai Business School, Shanghai, China
  • 4School of Art, Wuxi Taihu University, Wuxi, Jiangsu Province, China

The final, formatted version of the article will be published soon.

Introduction: The development of virtual reality (VR) films requires novel editing strategies to optimize narrative cognition in immersive environments. While traditional film editing guides attention through a controlled sequence of shots, the interactive nature of VR disrupts linear storytelling, challenging creators to balance emotional experience and spatial coherence. By combining eye-tracking technology with neuroscientific findings, this study investigates how different editing techniques in VR films affect viewers' narrative cognition, focusing on visual attention, emotional experience, and cognitive load, and aims to optimize VR film editing strategies through a neurocognitive lens.

Methods: A controlled experiment with 42 participants was conducted using three versions of a VR film: an unedited version (F1), a hard-cut edited version (F2), and a dissolve-transition edited version (F3). Eye-tracking metrics were recorded with an HTC Vive Pro Eye headset, and emotional experiences were assessed with post-viewing questionnaires. Data were analyzed in SPSS and visualized with heat maps and trajectory maps.

Results: The unedited film (F1) elicited the highest visual attention (TDF: M = 18,953.83 vs. F2/F3, p < 0.001) and emotional immersion, with 75% of viewers rating it as "highly immersive"; it also showed sustained activation in areas related to emotional engagement. The edited films, both hard cuts (F2) and dissolve transitions (F3), reduced cognitive load (TSD: M = 16,632.83 for F1 vs. 15,953.18 for F3, p < 0.01) but produced fragmented attention. Dissolve transitions (F3) decreased viewer enjoyment (APD: M = 0.397 vs. F1, p < 0.001). One-way ANOVA revealed that seamless editing enhanced emotional coherence, whereas abrupt cuts disrupted spatial and temporal integration, reducing emotional engagement.

Discussion: Unedited VR films promote emotional coherence driven by the amygdala and maintain attention stability mediated by the prefrontal cortex, which enhances immersive narrative cognition. In contrast, editing techniques prioritize cognitive efficiency at the expense of emotional experience. To maintain immersion, filmmakers should favor seamless transitions while strategically using edits to direct attention in the complex 360° environment of VR. These findings contribute to neurocinematic theory by connecting editing-induced neural dynamics with behavioral outcomes, offering practical insights for VR content creation.
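To illustrate the kind of between-condition comparison described in the Methods and Results, the following is a minimal sketch, not the authors' analysis pipeline: the study used SPSS, whereas this example uses Python with NumPy and SciPy, and the sample values are fabricated placeholders rather than the study's data. It shows a one-way ANOVA comparing a single eye-tracking metric (e.g., total duration of fixations, TDF) across the three film conditions.

```python
# Hypothetical sketch of a one-way ANOVA across three VR film conditions.
# NOT the authors' code or data: values are simulated for illustration only.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# Simulated per-participant TDF values (ms) for 14 participants per condition
# (42 participants total), one group per film version.
tdf_f1 = rng.normal(loc=19000, scale=1500, size=14)  # F1: unedited
tdf_f2 = rng.normal(loc=16500, scale=1500, size=14)  # F2: hard cuts
tdf_f3 = rng.normal(loc=16000, scale=1500, size=14)  # F3: dissolve transitions

# One-way ANOVA testing whether mean TDF differs across the three conditions.
f_stat, p_value = f_oneway(tdf_f1, tdf_f2, tdf_f3)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

In practice, a significant omnibus F test would be followed by pairwise post-hoc comparisons (e.g., between F1 and F3) to locate which conditions differ, analogous to the reported F1 vs. F2/F3 contrasts.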

Keywords: virtual reality, eye-tracking technology, film editing, narrative cognition, visual behavior, neurocinematics

Received: 27 Feb 2025; Accepted: 13 Aug 2025.

Copyright: © 2025 Zou, Zheng, Su, Zhang, Zhuo and Li. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence:
Qiaoling Zou, Jiangnan University, Wuxi, China
Dongning Li, School of Digital Technology & Innovation Design, Jiangnan University, Wuxi, Jiangsu Province, China

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.