ORIGINAL RESEARCH article
Front. Neurosci.
Sec. Neuroprosthetics
Volume 19 - 2025 | doi: 10.3389/fnins.2025.1591398
A Feature Fusion Network with Spatial-Temporal-Enhanced Strategy for the Motor Imagery of Force Intensity Variation
Provisionally accepted
- 1Cixi Institute of Biomedicine, Wenzhou Medical University, Cixi, China
- 2Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences (CAS), Ningbo, Zhejiang Province, China
- 3Ningbo Cixi Institute of Biomedical Engineering, Ningbo, Zhejiang Province, China
- 4Hangzhou RoboCT Technology Development Co., Ltd., Hangzhou, China
- 5University of Chinese Academy of Sciences, Beijing, China
Motor imagery (MI)-based brain-computer interfaces (BCIs) offer promising applications in rehabilitation. Traditional force-based MI-BCI paradigms generally require subjects to imagine a constant force during static or dynamic states, making it difficult to meet the demands of dynamic interaction with varying force intensity. To address this gap, we designed a novel MI paradigm inspired by daily life, in which subjects imagined variations in force intensity during dynamic unilateral upper-limb movements. In a single trial, subjects completed one of three force intensity transitions: large-to-small, large-to-medium, or medium-to-small. During execution of this paradigm, electroencephalography (EEG) features exhibited dynamic coupling, with subtle variations in intensity, timing, frequency coverage, and spatial distribution as the imagined force intensity changed. To recognize these fine-grained features, we propose a feature fusion network with a spatial-temporal-enhanced strategy and information reconstruction (FN-SSIR). The model combines a multi-scale spatial-temporal convolution module with a spatial-temporal-enhanced strategy, a convolutional auto-encoder for information reconstruction, and a long short-term memory network with self-attention, enabling comprehensive extraction and fusion of EEG features across fine-grained time-frequency variations and dynamic spatial-temporal patterns. FN-SSIR achieved a classification accuracy of 86.7 ± 6.6%, 8.5% higher than the baseline algorithms. These findings highlight the potential of the proposed paradigm and algorithm for advancing MI-BCI systems in rehabilitation training based on dynamic force interaction.
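To make the described architecture concrete, the following is a minimal PyTorch sketch, not the authors' published implementation: the class names (e.g., FNSSIRSketch), kernel sizes, filter counts, and layer configurations are assumptions chosen only to illustrate how a multi-scale spatial-temporal convolution module, a convolutional auto-encoder for information reconstruction, and an LSTM with self-attention could be fused for three-class force-variation MI classification.

```python
# Hypothetical sketch of an FN-SSIR-style network (details assumed, not from the paper).
import torch
import torch.nn as nn


class MultiScaleSpatialTemporalConv(nn.Module):
    """Temporal convolutions at several kernel sizes, each followed by a
    spatial convolution across EEG channels (kernel sizes are illustrative)."""

    def __init__(self, n_channels, n_filters=16, kernel_sizes=(15, 31, 63)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(1, n_filters, (1, k), padding=(0, k // 2), bias=False),
                nn.BatchNorm2d(n_filters),
                nn.Conv2d(n_filters, n_filters, (n_channels, 1), bias=False),
                nn.BatchNorm2d(n_filters),
                nn.ELU(),
            )
            for k in kernel_sizes
        ])

    def forward(self, x):                                 # x: (batch, 1, channels, time)
        return torch.cat([b(x) for b in self.branches], dim=1)   # (B, 3*F, 1, T)


class ConvAutoEncoder(nn.Module):
    """Auto-encoder branch for information reconstruction; the latent code is
    reused as an additional feature stream for the recurrent module."""

    def __init__(self, in_ch, latent_ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(in_ch, latent_ch, 7, stride=2, padding=3), nn.ELU())
        self.decoder = nn.ConvTranspose1d(latent_ch, in_ch, 7, stride=2,
                                          padding=3, output_padding=1)

    def forward(self, x):                                 # x: (batch, in_ch, time)
        z = self.encoder(x)
        return z, self.decoder(z)                         # latent code, reconstruction


class FNSSIRSketch(nn.Module):
    def __init__(self, n_channels=32, n_classes=3, n_filters=16):
        super().__init__()
        feat_ch = n_filters * 3
        self.mst = MultiScaleSpatialTemporalConv(n_channels, n_filters)
        self.ae = ConvAutoEncoder(feat_ch)
        self.lstm = nn.LSTM(32, 64, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(128, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):                                 # x: (batch, channels, time)
        feats = self.mst(x.unsqueeze(1)).squeeze(2)       # (B, 3*F, T)
        z, recon = self.ae(feats)                         # recon is trained against feats
        seq, _ = self.lstm(z.transpose(1, 2))             # (B, T', 128)
        fused, _ = self.attn(seq, seq, seq)               # self-attention over time steps
        logits = self.classifier(fused.mean(dim=1))       # temporal average pooling
        return logits, recon, feats


if __name__ == "__main__":
    eeg = torch.randn(4, 32, 1000)                        # 4 trials, 32 channels, 1000 samples
    logits, recon, feats = FNSSIRSketch()(eeg)
    print(logits.shape, recon.shape, feats.shape)
```

In such a setup, training would typically combine a cross-entropy loss on the logits with a reconstruction loss between recon and feats; the exact loss weighting used in the paper is not specified in this abstract.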
Keywords: brain-computer interfaces, force intensity variation, spatial-temporal-enhanced strategy, motor imagery, deep learning
Received: 11 Mar 2025; Accepted: 19 May 2025.
Copyright: © 2025 Ying, Lv, Huang, Wang, Si, Zhang, Zhang, Zuo and Xu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Jiyu Zhang, Hangzhou RoboCT Technology Development Co., Ltd., Hangzhou, China
Guokun Zuo, Cixi Institute of Biomedicine, Wenzhou Medical University, Cixi, China
Jialin Xu, Cixi Institute of Biomedicine, Wenzhou Medical University, Cixi, China
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.