ORIGINAL RESEARCH article
Front. Hum. Neurosci.
Sec. Brain-Computer Interfaces
Volume 19 - 2025 | doi: 10.3389/fnhum.2025.1611229
A study of motor imagery EEG classification based on feature fusion and attentional mechanisms
Provisionally accepted
- 1 Guangdong Baiyun University, Guangzhou, China
- 2 Princeton University, Princeton, New Jersey, United States
- 3 Wuhan University, Wuhan, Hubei Province, China
Motor imagery EEG-based action recognition is an emerging field at the intersection of brain science and information science, with promising applications in neurorehabilitation and human-computer collaboration. However, existing methods face challenges including the low signal-to-noise ratio of EEG signals, inter-subject variability, and model overfitting. In this paper, an end-to-end motor imagery classification network, HA-FuseNet, is proposed. Built on feature fusion and attention mechanisms, HA-FuseNet classifies imagined movements of the left hand, right hand, foot, and tongue. The model enhances classification accuracy through multi-scale dense connectivity, a hybrid attention mechanism, and a global self-attention module, while reducing computational overhead through a lightweight design. Experimental results on BCI Competition IV Dataset 2a show that the average within-subject accuracy of HA-FuseNet reaches 77.89%, outperforming mainstream models such as EEGNet by 8.42%; the cross-subject accuracy reaches 68.53%, demonstrating its robustness to variations in spatial patterns and individual differences across subjects. These results validate the proposed model and show that it mitigates key challenges in motor imagery EEG classification.
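To make the architecture description above concrete, the sketch below outlines in PyTorch how the three named components could fit together in an end-to-end classifier for the four-class task on BCI Competition IV Dataset 2a (22 EEG electrodes). It is a minimal illustration assembled from the abstract alone: the class names (HAFuseNetSketch, MultiScaleDenseBlock, HybridAttention), kernel sizes, channel counts, and layer ordering are all assumptions, not the authors' actual design.

```python
# Hypothetical sketch of an HA-FuseNet-style classifier; all hyperparameters
# below are illustrative assumptions, not the architecture reported in the paper.
import torch
import torch.nn as nn


class MultiScaleDenseBlock(nn.Module):
    """Parallel temporal convolutions at several scales, densely concatenated."""
    def __init__(self, in_ch, growth=8, kernel_sizes=(15, 31, 63)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, growth, (1, k), padding=(0, k // 2), bias=False),
                nn.BatchNorm2d(growth),
                nn.ELU(),
            )
            for k in kernel_sizes
        ])

    def forward(self, x):
        # Dense connectivity: keep the input and append every branch output.
        return torch.cat([x] + [b(x) for b in self.branches], dim=1)


class HybridAttention(nn.Module):
    """Channel gate (squeeze-and-excitation style) followed by a gate over the time axis."""
    def __init__(self, ch, reduction=4):
        super().__init__()
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1),
            nn.ELU(),
            nn.Conv2d(ch // reduction, ch, 1),
            nn.Sigmoid(),
        )
        self.temporal = nn.Sequential(
            nn.Conv2d(ch, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel(x)    # reweight feature maps
        return x * self.temporal(x)  # reweight positions along the (collapsed) spatial/time map


class HAFuseNetSketch(nn.Module):
    """Minimal end-to-end sketch: dense multi-scale features -> hybrid attention
    -> global self-attention over time -> classifier (4 motor-imagery classes)."""
    def __init__(self, n_channels=22, n_classes=4):
        super().__init__()
        self.dense = MultiScaleDenseBlock(in_ch=1, growth=8)   # 1 + 3*8 = 25 feature maps
        feat = 1 + 3 * 8
        self.spatial_conv = nn.Sequential(                     # collapse the electrode dimension
            nn.Conv2d(feat, 32, (n_channels, 1), bias=False),
            nn.BatchNorm2d(32),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),
            nn.Dropout(0.5),
        )
        self.attn = HybridAttention(32)
        self.self_attn = nn.MultiheadAttention(embed_dim=32, num_heads=4, batch_first=True)
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):                     # x: (batch, 1, electrodes, time)
        x = self.dense(x)
        x = self.spatial_conv(x)              # (batch, 32, 1, T')
        x = self.attn(x)
        x = x.squeeze(2).transpose(1, 2)      # (batch, T', 32) tokens over time
        x, _ = self.self_attn(x, x, x)        # global self-attention across the trial
        return self.head(x.mean(dim=1))       # temporal average pooling -> class logits


if __name__ == "__main__":
    model = HAFuseNetSketch()
    dummy = torch.randn(2, 1, 22, 1000)       # 2 trials, 22 electrodes, 1000 samples
    print(model(dummy).shape)                 # torch.Size([2, 4])
```

In this sketch, a channel gate plus a position gate stands in for the "hybrid attention mechanism," and a single multi-head self-attention layer over the pooled time axis stands in for the "global self-attention module"; the full paper should be consulted for the actual definitions of these blocks and the lightweight design choices.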
Keywords: brain-computer interface, motor imagery, EEG, attention mechanism, feature fusion
Received: 26 Apr 2025; Accepted: 30 Jun 2025.
Copyright: © 2025 Zhu, Tang, Jiang, Li, Li and Wu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Tingting Zhu, Guangdong Baiyun University, Guangzhou, China
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.