ORIGINAL RESEARCH article

Front. Hum. Neurosci.

Sec. Brain-Computer Interfaces

Volume 19 - 2025 | doi: 10.3389/fnhum.2025.1685087

This article is part of the Research Topic "Passive Brain-Computer Interfaces: Moving from Lab to Real-World Application".

DeepAttNet: Deep Neural Network Incorporating Cross-Attention Mechanism for Subject-Independent Mental Stress Detection in Passive Brain-Computer Interfaces Using Bilateral Ear-EEG

Provisionally accepted
  • Hanyang University, Seoul, Republic of Korea

The final, formatted version of the article will be published soon.

Introduction: Electroencephalography (EEG)-based mental stress detection has the potential to be applied in diverse real-world scenarios, including workplace safety, mental health monitoring, and human–computer interaction. However, most previous passive brain-computer interface (BCI) studies have employed EEG recorded during the performance of specific tasks, making the classification results susceptible to task-engagement effects rather than reflecting stress alone. To address this limitation, we introduce a rest-versus-rest paradigm that compares resting EEG recorded immediately after exposure to a stressor with resting EEG recorded after meditation, thereby isolating mental stress from task-related confounds. The EEG recording setup was designed under the assumption of bilateral ear-EEG, a compact and discreet form factor suitable for real-world applications. Furthermore, we developed a novel subject-independent deep learning classifier tailored to model interhemispheric neural dynamics for enhanced mental stress detection performance.

Methods: Thirty-two adults participated in the experiment. To classify mental stress status in a subject-independent manner, we proposed DeepAttNet, a deep learning model based on cross-attention and pointwise temporal compression, specifically designed to capture interactions between the left and right hemispheres. Classification performance was assessed using eight-fold subject-level cross-validation and compared with conventional deep learning models, including EEGNet, ShallowConvNet, DeepConvNet, and TSception. Ablation studies evaluated the contributions of the cross-attention and pointwise compression modules.

Results: DeepAttNet achieved the highest average accuracy and macro-F1 scores, and performance declined when either the cross-attention or the pointwise compression module was removed in the ablation studies. Explainability analyses indicated lower cross-attention entropy and stronger directional ear-to-ear asymmetry under stress, and temporal occlusion identified mid-to-late time windows as supporting stress decisions. Moreover, six of seven canonical scalp-EEG markers were FDR-significant for post-stressor versus post-relaxation rest.

Conclusion: The proposed rest-versus-rest paradigm and DeepAttNet enabled robust, subject-independent mental stress detection with high accuracy using only two-channel EEG recordings. This approach is expected to offer a practical solution for continuous stress monitoring, potentially advancing passive BCI applications outside laboratory settings.
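The abstract names only the high-level ingredients of DeepAttNet: per-hemisphere ear-EEG inputs, a cross-attention stage linking the two hemispheres, and pointwise temporal compression. The PyTorch sketch below is purely illustrative of that combination; the embedding scheme, all layer sizes, kernel widths, and the classification head are assumptions for demonstration and are not the authors' implementation.

```python
# Minimal, illustrative DeepAttNet-style sketch. All dimensions, kernel sizes,
# and the embedding/head design are assumptions; only the overall pattern
# (bilateral inputs -> cross-attention -> pointwise compression) follows the abstract.
import torch
import torch.nn as nn


class CrossHemisphereAttention(nn.Module):
    """Cross-attention in which each ear's features query the other ear's."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.l2r = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.r2l = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, left: torch.Tensor, right: torch.Tensor):
        # left/right: (batch, time, dim)
        l_attended, _ = self.l2r(left, right, right)  # left queries right
        r_attended, _ = self.r2l(right, left, left)   # right queries left
        return l_attended, r_attended


class DeepAttNetSketch(nn.Module):
    def __init__(self, dim: int = 32, n_classes: int = 2):
        super().__init__()
        # Per-channel temporal embedding (one ear-EEG channel per hemisphere).
        self.embed_l = nn.Conv1d(1, dim, kernel_size=25, stride=4, padding=12)
        self.embed_r = nn.Conv1d(1, dim, kernel_size=25, stride=4, padding=12)
        self.cross = CrossHemisphereAttention(dim)
        # Pointwise (1x1) convolution compressing the fused feature maps.
        self.pointwise = nn.Conv1d(2 * dim, dim, kernel_size=1)
        self.head = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                                  nn.Linear(dim, n_classes))

    def forward(self, x: torch.Tensor):
        # x: (batch, 2, samples) -- left- and right-ear EEG channels
        l = self.embed_l(x[:, :1]).transpose(1, 2)  # (batch, time, dim)
        r = self.embed_r(x[:, 1:]).transpose(1, 2)
        l, r = self.cross(l, r)
        fused = torch.cat([l, r], dim=-1).transpose(1, 2)  # (batch, 2*dim, time)
        return self.head(self.pointwise(fused))


if __name__ == "__main__":
    model = DeepAttNetSketch()
    logits = model(torch.randn(8, 2, 1000))  # 8 epochs, 2 channels, 1000 samples
    print(logits.shape)  # torch.Size([8, 2])
```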
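Likewise, "eight-fold subject-level cross-validation" implies that every epoch from a given participant is confined to a single fold, so the classifier is always tested on unseen subjects. A minimal sketch of such a split, assuming scikit-learn's GroupKFold and synthetic data of illustrative shape:

```python
# Hedged sketch of eight-fold subject-level cross-validation. Epoch counts and
# array shapes are illustrative, not taken from the paper.
import numpy as np
from sklearn.model_selection import GroupKFold

n_subjects, epochs_per_subject = 32, 10
X = np.random.randn(n_subjects * epochs_per_subject, 2, 1000)  # ear-EEG epochs
y = np.random.randint(0, 2, len(X))          # post-stressor vs. post-meditation
groups = np.repeat(np.arange(n_subjects), epochs_per_subject)  # subject IDs

for fold, (tr, te) in enumerate(GroupKFold(n_splits=8).split(X, y, groups)):
    # No subject appears in both sets -> subject-independent evaluation.
    assert not set(groups[tr]) & set(groups[te])
    print(f"fold {fold}: {len(set(groups[tr]))} train / "
          f"{len(set(groups[te]))} test subjects")
```

With 32 participants and 8 folds, each fold holds out 4 subjects, so every test prediction is made on individuals the model has never seen during training.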

Keywords: electroencephalography (EEG), deep learning, ear-EEG, mental stress, passive brain-computer interface

Received: 13 Aug 2025; Accepted: 20 Oct 2025.

Copyright: © 2025 Hyung, Kim, Kim and Im. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Chang-Hwan Im, ich@hanyang.ac.kr

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.