ORIGINAL RESEARCH article
Front. Neurorobot.
AMANet: A Data-Augmented Multi-Scale Temporal Attention Convolutional Network for Motor Imagery Classification
Provisionally accepted
- 1 School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai, China
- 2 School of Mechatronic Engineering and Automation, Shanghai University, Shanghai, China
Abstract—The motor imagery brain–computer interface (MI-BCI) has garnered considerable attention due to its potential to promote neural plasticity. However, the limited number of MI-EEG samples per subject and the susceptibility of features to noise and artifacts pose significant challenges to achieving high decoding performance. To address these problems, a Data-Augmented Multi-Scale Temporal Attention Convolutional Network (AMANet) is proposed. The network consists of four main modules. First, a data augmentation module comprises three steps: sliding-window segmentation to increase the sample size, Common Spatial Pattern (CSP) filtering to extract discriminative spatial features, and linear scaling to enhance network robustness. Second, multi-scale temporal convolution is employed to extract temporal and spatial features at different scales. Third, an Efficient Channel Attention (ECA) mechanism is integrated to adaptively adjust the weights of different channels. Finally, depthwise separable convolution fuses the deeply extracted temporal and spatial features for classification. In 10-fold cross-validation, AMANet achieves classification accuracies of 84.06% and 85.09% on BCI Competition IV Datasets 2a and 2b, respectively, significantly outperforming baseline models such as Incep-EEGNet. On the High-Gamma dataset, AMANet attains a classification accuracy of 95.48%. These results demonstrate the strong performance of AMANet on motor imagery decoding tasks.
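The sliding-window segmentation step of the data augmentation module can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the window length, stride, and trial dimensions are illustrative assumptions.

```python
import numpy as np

def sliding_windows(trial, win_len, stride):
    """Segment one EEG trial (channels x samples) into overlapping
    windows of length `win_len`, advancing by `stride` samples.
    Each window becomes one augmented training sample."""
    n_ch, n_samp = trial.shape
    starts = range(0, n_samp - win_len + 1, stride)
    return np.stack([trial[:, s:s + win_len] for s in starts])

# Example: a 22-channel trial of 1000 samples, 500-sample windows, 125-sample stride
trial = np.random.randn(22, 1000)
aug = sliding_windows(trial, win_len=500, stride=125)
print(aug.shape)  # (5, 22, 500)
```

One trial thus yields several overlapping samples, directly addressing the small per-subject sample size noted in the abstract.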
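The ECA mechanism referenced above re-weights feature channels using a global average descriptor followed by a lightweight 1-D convolution across channels. A minimal NumPy sketch of the forward pass is given below; in the actual network the kernel is learned, whereas here it is supplied as an argument.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def eca(x, kernel):
    """Efficient Channel Attention on one feature map x (C x T).
    `kernel` is a 1-D filter of odd length shared across channels
    (learned during training; passed in for this sketch)."""
    k = len(kernel)
    desc = x.mean(axis=1)                    # global average pooling -> (C,)
    padded = np.pad(desc, k // 2, mode="edge")  # 'same' padding along channels
    conv = np.array([np.dot(padded[i:i + k], kernel)
                     for i in range(len(desc))])
    w = sigmoid(conv)                        # per-channel attention weights
    return x * w[:, None]                    # re-weight each channel
```

Because the convolution operates only on the pooled channel descriptor, the mechanism adds very few parameters relative to a fully connected attention layer.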
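Depthwise separable convolution, used in the final module, factorizes a standard convolution into a per-channel (depthwise) filter followed by a pointwise channel-mixing projection. The sketch below illustrates the idea in NumPy under assumed shapes; the real network applies learned filters inside a CNN framework.

```python
import numpy as np

def depthwise_separable_conv(x, depth_filters, point_weights):
    """Depthwise separable 1-D convolution on x (C x T):
    each channel is first filtered with its own kernel (depthwise),
    then the channels are mixed by a pointwise (1x1) projection."""
    C, T = x.shape
    dw = np.stack([np.convolve(x[c], depth_filters[c], mode="valid")
                   for c in range(C)])   # (C, T - k + 1)
    return point_weights @ dw            # (C_out, T - k + 1)
```

Compared with a full convolution, this factorization cuts the parameter count roughly by a factor of the kernel size, which is why it suits small MI-EEG datasets.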
Keywords: attention mechanism, brain–computer interface, Common Spatial Pattern, Data augmentation, Motor Imagery
Received: 12 Sep 2025; Accepted: 09 Dec 2025.
Copyright: © 2025 Wang, Wang, Chang, Wu and Hu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Shu Wang
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
