ORIGINAL RESEARCH article

Front. Hum. Neurosci.

Sec. Brain-Computer Interfaces

Volume 19 - 2025 | doi: 10.3389/fnhum.2025.1599960

This article is part of the Research Topic: Passive Brain-Computer Interfaces: Moving from Lab to Real-World Application.

Multi-Branch GAT-GRU-Transformer for Explainable EEG-Based Finger Motor Imagery Classification

Provisionally accepted
Beijing University of Technology, Beijing, China

The final, formatted version of the article will be published soon.

Electroencephalography (EEG) signals provide a non-invasive, real-time approach to decoding motor imagery (MI) tasks, such as finger movements, with significant potential for brain-computer interface (BCI) applications. However, the complexity, noise, and non-stationary nature of EEG signals challenge accurate classification. Traditional methods, such as Common Spatial Pattern (CSP) and Power Spectral Density (PSD), struggle to capture the multidimensional dynamics of EEG data because they rely on manually engineered features. In contrast, recent deep learning models, such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), often focus on single-dimensional features and lack interpretability, which can limit their applicability in contexts requiring detailed neuroscientific understanding.

This study proposes a novel multi-branch deep learning framework, termed "Multi-Branch GAT-GRU-Transformer," to address these issues. The model leverages parallel branches to extract spatial, temporal, and frequency features from EEG signals. Specifically, Graph Attention Networks (GAT) model spatial relationships, a hybrid Gated Recurrent Unit (GRU) and Transformer architecture captures temporal dependencies, and one-dimensional CNNs extract frequency characteristics. Feature fusion integrates these representations for robust MI classification. To enhance interpretability, SHAP (SHapley Additive exPlanations) and Phase Locking Value (PLV) analyses are incorporated, elucidating the neural basis of model decisions.

Evaluated on the Kaya dataset, the model achieves a classification accuracy of 55.76% on a five-class MI task. Ablation studies validate the contribution of each component, while SHAP and PLV analyses reveal the model's reliance on the C3 and C4 channels and the Beta frequency band, consistent with neurophysiological findings.
This work advances EEG signal processing techniques and provides new insights and tools for BCI and neuroscience research.
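For readers unfamiliar with the Phase Locking Value used in the interpretability analysis, a minimal generic sketch follows. This is an illustration of the standard PLV definition (magnitude of the mean phase difference between two channels, with instantaneous phase obtained via the Hilbert transform), not the authors' implementation; the signals and parameters below are invented for demonstration.

```python
import numpy as np
from scipy.signal import hilbert


def plv(x, y):
    """Phase Locking Value between two equal-length 1-D signals.

    PLV = |mean(exp(i * (phi_x - phi_y)))|, where phi_* is the
    instantaneous phase from the analytic signal (Hilbert transform).
    Ranges from 0 (no phase locking) to 1 (perfect phase locking).
    """
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))


# Two sinusoids with a fixed phase lag are almost perfectly phase-locked,
# while independent noise yields a PLV near zero for long signals.
t = np.linspace(0, 2, 1000, endpoint=False)
locked = plv(np.sin(2 * np.pi * 20 * t), np.sin(2 * np.pi * 20 * t + 0.7))
rng = np.random.default_rng(0)
unlocked = plv(rng.standard_normal(t.size), rng.standard_normal(t.size))
```

In an EEG setting, `plv` would be applied per frequency band (e.g. after Beta band-pass filtering) to pairs such as C3-C4 to quantify inter-channel phase synchrony.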

Keywords: finger motor imagery classification, EEG, Multi-Branch, deep learning, Explainability

Received: 25 Mar 2025; Accepted: 05 May 2025.

Copyright: © 2025 Wang and Wang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Yunlong Wang, Beijing University of Technology, Beijing, China

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.