ORIGINAL RESEARCH article

Front. Hum. Neurosci.

Sec. Brain-Computer Interfaces

Volume 19 - 2025 | doi: 10.3389/fnhum.2025.1660532

This article is part of the Research Topic: Advancing brain-computer interfaces through fNIRS and EEG integration.

Multimodal MBC-ATT: Cross-Modality Attentional Fusion of EEG-fNIRS for Cognitive State Decoding

Provisionally accepted
Yu Li1, Lei Zhu1,2*, Aiai Huang1,2, Jianhai Zhang1, Peng Yuan3
  • 1Hangzhou Dianzi University, Hangzhou, China
  • 2Hangzhou Dianzi University School of Automation, Hangzhou, China
  • 3Wuxi People's Hospital, Wuxi, China

The final, formatted version of the article will be published soon.

With the rapid development of brain-computer interface (BCI) technology, the effective integration of multimodal biological signals to improve classification accuracy has become a research hotspot. However, existing methods often fail to fully exploit cross-modality correlations in complex cognitive tasks. To address this, we propose a Multi-Branch Convolutional Neural Network with Attention (MBC-ATT) for BCI-based cognitive task classification. MBC-ATT employs independent branch structures to process electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) signals separately, thereby leveraging the strengths of each modality. To further enhance multimodal feature fusion, we introduce a cross-modal attention mechanism that weights discriminative features across modalities, strengthening the model's ability to focus on relevant signals and thereby improving classification accuracy. We conducted experiments on the n-back and WG datasets. The results demonstrate that the proposed model outperforms conventional approaches in classification performance, validating the effectiveness of MBC-ATT in brain-computer interfaces. This study not only provides novel insights for multimodal BCI systems but also holds potential for a range of applications.
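To make the fusion idea concrete, the following is a minimal NumPy sketch of one plausible form of cross-modal attention between per-branch features: EEG features act as queries and fNIRS features as keys/values, and the attended fNIRS representation is concatenated onto the EEG stream. All shapes, dimensions, and function names here are illustrative assumptions; this is not the authors' implementation, which the full paper describes.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(eeg_feat, fnirs_feat):
    """Sketch of scaled dot-product cross-attention (hypothetical shapes).

    eeg_feat:   (T_eeg, d)   features from the EEG branch
    fnirs_feat: (T_fnirs, d) features from the fNIRS branch
    Returns fused features of shape (T_eeg, 2 * d).
    """
    d = eeg_feat.shape[-1]
    # EEG queries attend over fNIRS keys: (T_eeg, T_fnirs) score matrix.
    scores = eeg_feat @ fnirs_feat.T / np.sqrt(d)
    weights = softmax(scores, axis=-1)          # rows sum to 1
    attended = weights @ fnirs_feat             # (T_eeg, d) fNIRS summary per EEG step
    # Simple fusion by concatenation of the two modality streams.
    return np.concatenate([eeg_feat, attended], axis=-1)

rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 16))    # e.g. 8 time steps of 16-dim EEG branch output
fnirs = rng.standard_normal((5, 16))  # 5 time steps of 16-dim fNIRS branch output
fused = cross_modal_attention(eeg, fnirs)
print(fused.shape)  # (8, 32)
```

In a trained model the queries, keys, and values would pass through learned linear projections, and the fused vector would feed a classification head; the sketch omits those to isolate the attention step itself.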

Keywords: Brain-computer interface, cognitive task, deep learning, multimodal signals, multimodal fusion

Received: 06 Jul 2025; Accepted: 04 Sep 2025.

Copyright: © 2025 Li, Zhu, Huang, Zhang and Yuan. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Lei Zhu, Hangzhou Dianzi University, Hangzhou, China

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.