Brain-Computer Interfaces for Perception, Learning, and Motor Control

94.8K views · 103 authors · 17 articles · 4 editors
Original Research · 07 April 2021 · 8,884 views · 75 citations
Adaptive Filtering for Improved EEG-Based Mental Workload Assessment of Ambulant Users
Olivier Rosanne, 4 more and Tiago H. Falk
Figure: Graphical interface of the MATB-II software used to modulate high and low mental workload (MW) levels.

Recently, due to the emergence of mobile electroencephalography (EEG) devices, assessment of mental workload in highly ecological settings has gained popularity. In such settings, however, motion and other common artifacts have been shown to severely hamper signal quality and to degrade mental workload assessment performance. Here, we show that classical EEG enhancement algorithms, conventionally developed to remove ocular and muscle artifacts, are not optimal in settings where participant movement (e.g., walking or running) is expected. As such, an adaptive filter is proposed that relies on an accelerometer-based reference signal. We show that, when combined with classical algorithms, it enables accurate mental workload assessment. To test the proposed algorithm, data from 48 participants were collected as they performed the Revised Multi-Attribute Task Battery-II (MATB-II) under a low and a high workload setting, either while walking/jogging on a treadmill or while using a stationary exercise bicycle. Accuracy as high as 95% was achieved with a random forest-based mental workload classifier for ambulant users. Moreover, an increase in gamma activity was found in the parietal cortex, suggesting a connection between sensorimotor integration, attention, and workload in ambulant users.
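The core idea, adaptive noise cancellation with a movement reference, can be illustrated with a minimal sketch. The snippet below is a hypothetical normalized LMS (NLMS) filter that uses an accelerometer channel as the reference for motion-related artifacts; the filter order, step size, and function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def nlms_motion_filter(eeg, accel, n_taps=16, mu=0.1, eps=1e-8):
    """Remove motion-correlated activity from one EEG channel using an
    accelerometer channel as the noise reference (hypothetical NLMS sketch)."""
    w = np.zeros(n_taps)                     # adaptive FIR weights
    cleaned = np.zeros(len(eeg))
    for n in range(len(eeg)):
        # most recent n_taps accelerometer samples, newest first, zero-padded
        x = accel[max(0, n - n_taps + 1):n + 1][::-1]
        x = np.pad(x, (0, n_taps - len(x)))
        y = w @ x                            # current estimate of the motion artifact
        e = eeg[n] - y                       # cleaned sample = EEG minus artifact estimate
        w += mu * e * x / (x @ x + eps)      # normalized LMS weight update
        cleaned[n] = e
    return cleaned

# Toy usage: EEG contaminated by a filtered copy of the accelerometer signal
rng = np.random.default_rng(0)
accel = rng.standard_normal(5000)
eeg = rng.standard_normal(5000) + np.convolve(accel, np.ones(8) / 8, mode="same")
print(nlms_motion_filter(eeg, accel).std())
```

In practice such a reference-based filter would be applied per EEG channel and combined with the conventional ocular/muscle artifact removal steps the abstract mentions.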

Original Research · 03 February 2021 · 7,264 views · 32 citations

Original Research · 11 December 2020 · 10,272 views · 21 citations
Parallel Spatial–Temporal Self-Attention CNN-Based Motor Imagery Classification for BCI
Xiuling Liu, 4 more and Feng Lin
Figure: Schematic illustration of the proposed method. The orange, blue, and green cuboids are feature maps in different modules; their corresponding sizes are indicated in the annotation. The convolution and pooling operations are indicated by the arrow lines. (A) Parallel spatial–temporal self-attention architecture-based feature extraction phase. The spatial and temporal self-attention modules are denoted by orange and blue rectangles, respectively. (B) Feature classification phase. F is the number of feature maps, and H and W are the height and width of the input signal, respectively, i.e., 22 sampling channels with 1,125 time steps.

Motor imagery (MI) electroencephalography (EEG) classification is an important part of the brain-computer interface (BCI), allowing people with mobility problems to communicate with the outside world via assistive devices. However, EEG decoding is a challenging task because of its complexity, dynamic nature, and low signal-to-noise ratio. Designing an end-to-end framework that fully extracts the high-level features of EEG signals remains a challenge. In this study, we present a parallel spatial–temporal self-attention-based convolutional neural network for four-class MI EEG signal classification. This study is the first to define a new spatial–temporal representation of raw EEG signals that uses the self-attention mechanism to extract distinguishable spatial–temporal features. Specifically, we use the spatial self-attention module to capture the spatial dependencies between the channels of MI EEG signals. This module updates each channel by aggregating features over all channels with a weighted summation, thus improving the classification accuracy and eliminating the artifacts caused by manual channel selection. Furthermore, the temporal self-attention module encodes global temporal information into the features for each sampling time step, so that the high-level temporal features of the MI EEG signals can be extracted in the time domain. Quantitative analysis shows that our method outperforms state-of-the-art methods for intra-subject and inter-subject classification, demonstrating its robustness and effectiveness. In terms of qualitative analysis, we perform a visual inspection of the new spatial–temporal representation estimated from the learned architecture. Finally, the proposed method is employed to realize control of drones based on EEG signals, verifying its feasibility in real-time applications.
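As a rough illustration of the spatial (channel-wise) self-attention idea described above, the PyTorch module below updates each EEG channel as a softmax-weighted sum over all channels with a learnable residual connection. The shapes, layer names, and the time-pooling of queries and keys are assumptions made for a self-contained sketch, not the authors' architecture.

```python
import torch
import torch.nn as nn

class SpatialSelfAttention(nn.Module):
    """Channel-wise self-attention over EEG electrodes (hypothetical sketch).

    Input: feature maps of shape (batch, F, C, T), where C is the number of
    EEG channels and T the number of time steps. Each channel is replaced by a
    softmax-weighted sum over all channels, then added back residually.
    """
    def __init__(self, n_features: int, reduction: int = 8):
        super().__init__()
        hidden = max(n_features // reduction, 1)
        self.query = nn.Conv2d(n_features, hidden, kernel_size=1)
        self.key = nn.Conv2d(n_features, hidden, kernel_size=1)
        self.value = nn.Conv2d(n_features, n_features, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))       # learnable residual weight

    def forward(self, x):                               # x: (B, F, C, T)
        B, F, C, T = x.shape
        q = self.query(x).mean(dim=3)                   # (B, F', C), pooled over time
        k = self.key(x).mean(dim=3)                     # (B, F', C)
        attn = torch.softmax(q.transpose(1, 2) @ k, dim=-1)          # (B, C, C)
        v = self.value(x).permute(0, 2, 1, 3).reshape(B, C, -1)      # (B, C, F*T)
        out = (attn @ v).reshape(B, C, F, T).permute(0, 2, 1, 3)     # (B, F, C, T)
        return x + self.gamma * out

# Toy usage with the shape mentioned in the figure caption (22 channels, 1,125 steps)
x = torch.randn(8, 16, 22, 1125)
print(SpatialSelfAttention(16)(x).shape)   # torch.Size([8, 16, 22, 1125])
```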

9,646 views · 61 citations

Original Research · 30 September 2020 · 6,391 views · 28 citations
Figure: Construction of STFT images by a sliding window of size 2 s with a shift/hop of 200 ms; each window is divided into 256 ms sub-windows (with a 56 ms shift/hop) for calculating the STFT of the MI period within the trial.
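One plausible reading of this windowing scheme is sketched below: 2 s analysis windows hopped every 200 ms over the trial, each converted into a time-frequency image via an STFT with 256 ms sub-windows hopped every 56 ms. The 250 Hz sampling rate (as in BCI Competition IV-2b) and the use of scipy.signal.stft are assumptions; this is not the authors' code.

```python
import numpy as np
from scipy.signal import stft

FS = 250                        # Hz, assumed (BCI Competition IV-2b sampling rate)
WIN = int(2.0 * FS)             # 2 s analysis window    -> 500 samples
HOP = int(0.2 * FS)             # 200 ms window shift    -> 50 samples
SUB = int(0.256 * FS)           # 256 ms STFT sub-window -> 64 samples
SUB_HOP = int(0.056 * FS)       # 56 ms sub-window shift -> 14 samples

def stft_images(trial):
    """Slice one single-channel MI trial into overlapping 2 s windows and
    return an STFT magnitude image per window (hypothetical sketch)."""
    images = []
    for start in range(0, len(trial) - WIN + 1, HOP):
        segment = trial[start:start + WIN]
        _, _, Z = stft(segment, fs=FS, nperseg=SUB, noverlap=SUB - SUB_HOP)
        images.append(np.abs(Z))             # time-frequency image for this window
    return np.stack(images)                  # (n_windows, n_freqs, n_frames)

# Toy usage on a 4 s synthetic trial
print(stft_images(np.random.randn(4 * FS)).shape)
```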

Inter-subject transfer learning is a long-standing problem in brain-computer interfaces (BCIs) and has not yet been fully realized due to the high inter-subject variability of brain signals related to motor imagery (MI). The recent success of deep learning-based algorithms in classifying different brain signals warrants further exploration of whether inter-subject continuous decoding of MI signals is feasible for providing contingent neurofeedback, which is important for neurorehabilitative BCI designs. In this paper, we show how a convolutional neural network (CNN) based deep learning framework can be used for inter-subject continuous decoding of MI-related electroencephalographic (EEG) signals using the novel concept of Mega Blocks for adapting the network against inter-subject variability. A Mega Block repeats a specific architectural block, such as one or more convolutional layers, several times, and the parameters of such Mega Blocks can be optimized using Bayesian hyperparameter optimization. The results, obtained on the publicly available BCI Competition IV-2b dataset, yield average inter-subject continuous decoding accuracies of 71.49% (κ = 0.42) and 70.84% (κ = 0.42) for two different training methods, adaptive moment estimation (Adam) and stochastic gradient descent with momentum (SGDM), respectively, in 7 out of 9 subjects. Our results show for the first time that it is feasible to use CNN-based architectures for inter-subject continuous decoding with a sufficient level of accuracy for developing calibration-free MI-BCIs for practical purposes.
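To make the "Mega Block" idea concrete, the sketch below builds a convolutional sub-block that is repeated a tunable number of times; in the paper the repetition count and related parameters are chosen by Bayesian hyperparameter optimization, whereas the layer types, kernel sizes, and names here are purely illustrative assumptions rather than the published architecture.

```python
import torch
import torch.nn as nn

def mega_block(in_ch: int, out_ch: int, n_repeats: int) -> nn.Sequential:
    """Hypothetical 'Mega Block': a small convolutional block repeated
    n_repeats times; n_repeats would be a Bayesian-optimized hyperparameter."""
    layers, ch = [], in_ch
    for _ in range(n_repeats):
        layers += [
            nn.Conv2d(ch, out_ch, kernel_size=(1, 11), padding=(0, 5)),  # temporal conv
            nn.BatchNorm2d(out_ch),
            nn.ELU(),
        ]
        ch = out_ch
    return nn.Sequential(*layers)

# Toy usage: input shaped (batch, 1, EEG channels, time steps)
x = torch.randn(4, 1, 3, 1000)
print(mega_block(1, 16, n_repeats=2)(x).shape)   # torch.Size([4, 16, 3, 1000])
```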

7,545 views · 47 citations

Open for submission

Frontiers in Neuroscience

Investigating the combined use of EEG and fNIRS signals in studying emotional responses or cognitive processes
Edited by Md. Asadur Rahman, Haroon Khan, Hammad Nazeer
Deadline: 23 July 2025

Recommended Research Topics

Frontiers in Neuroscience
Identifying Neuroimaging-Based Markers for Distinguishing Brain Disorders
Edited by Yuhui Du, Jing Sui, Dongdong Lin
120.2K views · 181 authors · 25 articles

Frontiers in Neuroscience
Improving Diagnosis, Treatment, and Prognosis of Neuropsychiatric Disorders by Leveraging Neuroimaging-based Machine Learning
Edited by Baojuan Li, Hongbing Lu, Yu-Feng Zang, Hui Shen, Qiuyun Fan
146.2K views · 241 authors · 26 articles

Frontiers in Neuroscience
Brain Imaging Relations Through Simultaneous Recordings
Edited by Waldemar Karwowski, Surjo R Soekadar, Aleksandra Dagmara Kawala-Sterniuk
50.9K views · 72 authors · 12 articles

Frontiers in Human Neuroscience
New Insights into Neural Control Mechanisms of the Brain in Health and Disease: Modalities, Methodologies, and Applications
Edited by Feng Fang, Yingchun Zhang, Sudhakar Selvaraj, Wens Hou
17.7K views · 39 authors · 6 articles

Frontiers in Neuroscience
Exploring the intersection of neuroimaging and artificial intelligence: What is the interplay?
Edited by Indranath Chatterjee, Wael MY Mohamed, Sahil Bajaj
10.5K views · 30 authors · 4 articles