
ORIGINAL RESEARCH article

Front. Hum. Neurosci.

Sec. Brain-Computer Interfaces

DSP-MCF: Dual Stream Pre-training and Multi-view Consistency Fine-tuning for Cross-subject EEG Emotion Recognition

Provisionally accepted
  • 1Nanyang Normal University, Nanyang, China
  • 2Guizhou University of Traditional Chinese Medicine, Guiyang, China

The final, formatted version of the article will be published soon.

Electroencephalogram (EEG) emotion recognition is attracting increasing attention in the field of brain-computer interfaces due to its objectivity and resistance to falsification. However, cross-subject emotion recognition is complicated by individual variability, the limited availability of EEG data, and channel interference during EEG acquisition. To address these issues, we propose a novel synergistic Dual Stream Pre-training and Multi-view Consistency Fine-tuning (DSP-MCF) framework built on a domain generalization architecture. The framework includes a dual stream pre-training stage, in which a spatiotemporal encoder-decoder network extracts generalized spatiotemporal representations from masked channels and reconstructs EEG features from incomplete data. A multi-view consistency loss is then introduced during the multi-view consistency fine-tuning stage; this loss aligns the distributions of emotion predictions derived from different views, namely the actual and the masked EEG data. To simulate real-world data degradation and improve robustness to it, DSP-MCF applies a dynamic channel masking strategy during both training and testing. Experimentally, the proposed DSP-MCF outperforms the state of the art on cross-subject EEG emotion recognition tasks, achieving an accuracy of 89.76% on the SEED dataset and 77.02% on the SEED-IV dataset.
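To make the two core ideas of the abstract concrete, the following is a minimal, hypothetical sketch in a PyTorch-style setting: a dynamic channel masking step and a consistency term between predictions on the actual and masked views. The function names, the (batch, channels, time) tensor layout, the mask ratio, and the symmetric-KL form of the consistency loss are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): dynamic channel masking and a
# multi-view consistency term between predictions on full vs. masked EEG.
import torch
import torch.nn.functional as F


def dynamic_channel_mask(x: torch.Tensor, mask_ratio: float = 0.2) -> torch.Tensor:
    """Randomly zero out a fraction of EEG channels per sample.

    x: (batch, channels, time) -- hypothetical layout and ratio.
    """
    b, c, _ = x.shape
    n_mask = max(1, int(mask_ratio * c))
    masked = x.clone()
    for i in range(b):
        idx = torch.randperm(c)[:n_mask]   # channels to drop for this sample
        masked[i, idx, :] = 0.0
    return masked


def multi_view_consistency_loss(logits_full: torch.Tensor,
                                logits_masked: torch.Tensor) -> torch.Tensor:
    """Symmetric KL divergence between the emotion-prediction distributions
    obtained from the actual view and the masked view (one plausible choice
    for a multi-view consistency objective)."""
    log_p = F.log_softmax(logits_full, dim=-1)
    log_q = F.log_softmax(logits_masked, dim=-1)
    kl_pq = F.kl_div(log_q, log_p.exp(), reduction="batchmean")  # KL(P || Q)
    kl_qp = F.kl_div(log_p, log_q.exp(), reduction="batchmean")  # KL(Q || P)
    return 0.5 * (kl_pq + kl_qp)
```

In a fine-tuning loop under these assumptions, the classifier would be run once on the original EEG features and once on a dynamically masked copy, and this consistency term would be added to the supervised emotion-classification loss.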

Keywords: cross-subject, domain generalization, electroencephalogram (EEG), emotion recognition, multi-view consistency

Received: 08 Dec 2025; Accepted: 29 Jan 2026.

Copyright: © 2026 Li, Liu, Wu, Wang and Huang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Xin Huang

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.