
Editorial

Front. Neurosci., 29 October 2025

Sec. Perception Science

Volume 19 - 2025 | https://doi.org/10.3389/fnins.2025.1719407

This article is part of the Research Topic "Neurocinematics: How the Brain Perceives Audiovisuals".

Editorial: Neurocinematics: how the brain perceives audiovisuals

  • 1Neuro-Com Research Group, Universitat Autònoma de Barcelona, Barcelona, Spain
  • 2Research and Development, Instituto Radio Televisión Española, Barcelona, Spain
  • 3Division of Neurosciences, Pablo de Olavide University, Seville, Spain

Understanding how the brain perceives audiovisuals is crucial in the current screen era (Marciano et al., 2021). This type of research provides quantitative and reliable tools for designing media content based on neural responses rather than subjective reports. Identifying patterns that link brain activity with audiovisual characteristics, whether format or content-related, can offer profound insights into brain function in response to different stimuli.

Recent studies have shown that cognitive deficits can become apparent when perceiving audiovisuals, highlighting the potential for developing non-invasive tools to address these deficits (Chu et al., 2023; Li et al., 2025, 2024). Despite significant advancements, there remain gaps in our understanding of how media content impacts brain activity and, consequently, behavior and cognition. This underscores the need for further investigation into the neural mechanisms underlying audiovisual perception.

This Research Topic aimed to expand our knowledge of neurocinematics, an interdisciplinary approach to understanding audiovisual perception through the brain activity of spectators (Hasson et al., 2004, 2008). The primary objectives included exploring how the brain processes audiovisuals, identifying neural patterns associated with different types of media content, and understanding the cognitive and physiological responses elicited by audiovisual stimuli.

The articles contributing to this Research Topic address this critical goal by leveraging advanced neuroscientific and psychophysiological methodologies to explore the complexities of media perception, spanning fundamental film grammar, emotional processing in ecologically valid settings, and the emergence of immersive media formats.

Decoding cinematic grammar and temporal integration

A key area of investigation focuses on how the brain processes the most fundamental element of film structure: the editing cut. Sanz-Aznar explores whether the cinematographic cut functions as an articulation axis between adjacent shots. Through frequency-domain analysis (Event-Related Desynchronization/Synchronization, ERD/ERS) of electroencephalographic (EEG) recordings, the study confirmed that continuity cuts elicit neural response patterns in theta-band synchronization and delta-band desynchronization, which are associated with memory encoding, narrative segmentation, and meaning construction. These results support the hypothesis that the shot change is neurally processed as an articulatory mechanism within film structure.

Complementing this, research presented by Cancer et al. investigates the mechanisms underlying the effect of editing density (master shot, slow-paced, fast-paced) on the viewer's perception of time. The study found that neuromodulation of the supplementary motor area (SMA) affected duration estimates and action speed judgments, highlighting the importance of the SMA in modulating the perception of filmic time during viewing.

To better characterize the overall integration of complex stimuli, Xi et al. present a novel EEG microstate-based method to calculate time-frequency features of multi-stage audiovisual (AV) neural processing. This study's approach segments AV processing into multiple distinct sub-stages: six under attended conditions and four under unattended conditions. The resulting time-frequency domain features achieved high classification accuracy (up to 97.8%) in distinguishing attended vs. unattended AV processing, demonstrating the method's effectiveness in providing a high-resolution temporal window into the brain's information processing stages.

Emotion, aesthetics, and predictive cognition

This Research Topic also approaches the influence of specific visual and aesthetic elements on emotional processing. Huttunen investigated the effects of lighting direction on the early emotional processing of faces. Measuring the Early Posterior Negativity (EPN), the study found that lighting styles that obscure or distort facial information (silhouette light, underlight, and toplight) elicited a statistically more negative EPN compared to neutral 45-degree light. This suggests that lighting design impacts the film experience at the subliminal level of emotional processing.

Extending to real-world media analysis, Martín-Pascual and Andreu-Sánchez used eye-tracking to examine the visual perception of war images in multi-window Spanish television news. They found that viewers dedicated the longest gaze duration and highest fixation count to the war imagery itself, compared to the journalist or explanatory texts. Notably, images depicting deceased individuals drew more time and fixations, potentially indicating the selective engagement of memory processes. Moreover, patterns of visual attention varied across political leaders, with Vladimir Putin attracting greater focus than Volodymyr Zelensky, a difference that was linked to stronger negative emotions.

Theoretically framing these aesthetic and narrative processes, Coëgnarts proposes integrating the Predictive Processing (PP) framework with cinematic aesthetics. PP posits that the mind acts as a predictive machine that minimizes error by anticipating sensory input. This theoretical application highlights how films deliberately engage the brain's inferential processes, proposing that image schemas function as mechanisms structuring sensory input in alignment with the brain's predictive logic, thereby enhancing comprehension and aesthetic pleasure.

Navigating the challenges of immersive media

The complexities of translating traditional cinematic grammar to new formats, particularly Virtual Reality (VR), are explored by Zou et al. This study investigated how different editing techniques (unedited, hard cuts, dissolve transitions) in VR films affect narrative cognition. Findings show that unedited VR films promote superior emotional coherence and immersion, likely driven by sustained amygdala activation. However, this format also imposes a higher cognitive load. Conversely, edited films reduced cognitive load but resulted in fragmented attention and diminished emotional engagement. Dissolve transitions were found to be especially detrimental to viewer enjoyment. These results show that VR editing imposes specific neurocognitive demands on viewers, emphasizing the need for flexible strategies that prioritize smooth transitions to maintain emotional engagement.

Embodiment, physiology, and inclusivity

Beyond the viewer's experience, Primett et al. explore the cinematographer's embodied experience during creation. Utilizing a neurophenomenological methodology combining physiological data (electrodermal activity, EDA, electrocardiogram, ECG), motion data, and micro-phenomenological interviews, the research investigates kinaesthetic empathy, highlighting how the cinematographer's embodied decision-making shapes and modulates emotional understanding throughout the filming process.

From another perspective, Kolesnikov et al. study how drone movements, human presence, and playback speed affect viewers' sense of movement, involvement, emotion, and time. Ascending trajectories provoked the highest levels of movement perception, aesthetic appreciation and emotional engagement. Results suggest drone cinematography can intensify embodied and emotional responses, supporting embodied simulation theory.

Addressing critical issues of accessibility, Yang et al. investigated the impact of film Audio Description (AD) style on the sense of presence in Chinese visually impaired audiences. They determined that subjective AD (which incorporates interpretation and emotional coloring) significantly enhanced key dimensions of presence—including engagement, spatial awareness, and ecological validity—compared to the more neutral, objective style.

Finally, Fusina et al. contribute to the understanding of individual differences in emotional processing by examining the heart-brain connection during ecological emotional film viewing. They focused on women with high (HD) and low (LD) traits of emotion dysregulation and found a significant positive correlation between heart rate (HR) and gamma band connectivity within the Ventral Attention Network (VAN) in the HD group, particularly during sadness and neutral clips. This consistent correlation suggests that sadness acts as a synchronizing agent coordinating cardiovascular and central cortical responses in this population.

Author contributions

CA-S: Writing – review & editing, Writing – original draft. MM-P: Writing – review & editing, Writing – original draft. JD-G: Writing – review & editing, Writing – original draft.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declare that no Gen AI was used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Chu, C. S., Wang, D. Y., Liang, C. K., Chou, M. Y., Hsu, Y. H., Wang, Y. C., et al. (2023). Automated video analysis of audio-visual approaches to predict and detect mild cognitive impairment and dementia in older adults. J. Alzheimers Dis. 92, 875–886. doi: 10.3233/JAD-220999


Hasson, U., Landesman, O., Knappmeyer, B., Vallines, I., Rubin, N., and Heeger, D. J. (2008). Neurocinematics: the neuroscience of film. Projections 2, 1–26. doi: 10.3167/proj.2008.020102


Hasson, U., Nir, Y., Levy, I., Fuhrmann, G., and Malach, R. (2004). Intersubject synchronization of cortical activity during natural vision. Science 303, 1634–1640. doi: 10.1126/science.1089506


Li, J., Yi, Y., Gao, X., Ren, Y., Gan, L., Zou, T., et al. (2025). High brain network dynamics mediate audiovisual integration deficits and cognitive impairment in Alzheimer's disease. J. Alzheimers Dis. doi: 10.1177/13872877251376717


Li, S., Yang, W., Li, Y., Li, R., Zhang, Z., Takahashi, S., et al. (2024). Audiovisual integration and sensory dominance effects in older adults with subjective cognitive decline: enhanced redundant effects and stronger fusion illusion susceptibility. Brain Behav. 14:e3570. doi: 10.1002/brb3.3570


Marciano, L., Camerini, A. L., and Morese, R. (2021). The developing brain in the digital era: a scoping review of structural and functional correlates of screen time in adolescence. Front. Psychol. 12:671817. doi: 10.3389/fpsyg.2021.671817


Keywords: neurocinematics, visual perception, cognitive neuroscience, media, audiovisuals, neurophysiology

Citation: Andreu-Sánchez C, Martín-Pascual MÁ and Delgado-García JM (2025) Editorial: Neurocinematics: how the brain perceives audiovisuals. Front. Neurosci. 19:1719407. doi: 10.3389/fnins.2025.1719407

Received: 06 October 2025; Accepted: 13 October 2025;
Published: 29 October 2025.

Edited and reviewed by: Hulusi Kafaligonul, Neuroscience and Neurotechnology Center of Excellence (NÖROM), Türkiye

Copyright © 2025 Andreu-Sánchez, Martín-Pascual and Delgado-García. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Celia Andreu-Sánchez, celia.andreu@uab.cat
