<?xml version="1.0" encoding="utf-8"?>
    <rss version="2.0">
      <channel xmlns:content="http://purl.org/rss/1.0/modules/content/">
        <title>Frontiers in Human Neuroscience | Brain-Computer Interfaces section | New and Recent Articles</title>
        <link>https://www.frontiersin.org/journals/human-neuroscience/sections/brain-computer-interfaces</link>
        <description>RSS Feed for Brain-Computer Interfaces section in the Frontiers in Human Neuroscience journal | New and Recent Articles</description>
        <language>en-us</language>
        <generator>Frontiers Feed Generator,version:1</generator>
        <pubDate>Tue, 12 May 2026 07:45:03 +0000</pubDate>
        <ttl>60</ttl>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnhum.2026.1774230</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnhum.2026.1774230</link>
        <title><![CDATA[Brain-computer interfaces and neural synchronization in esports: a systematic review of effects on reaction time, decision-making, and cognitive performance]]></title>
        <pubDate>Fri, 08 May 2026 00:00:00 +0000</pubDate>
        <category>Systematic Review</category>
        <author>Prashant Kumar Choudhary</author><author>Suchishrava Choudhary</author><author>Sohom Saha</author><author>Yajuvendra Singh Rajpoot</author><author>Vasile-Cătălin Ciocan</author><author>Voinea Nicolae-Lucian</author><author>Carmina Mihaela Gorgan</author><author>Constantin Șufaru</author>
        <description><![CDATA[Background: The rapid expansion of esports has intensified interest in the cognitive and neurophysiological mechanisms underlying elite performance, particularly reaction time (RT), decision-making (DM), and neural efficiency. Advances in brain-computer interfaces (BCIs) offer targeted neural modulation that may enhance these abilities through improved neural synchronization. Objective: To systematically review evidence on the effects of BCI-based neural synchronization, including motor imagery (MI) BCIs, visual evoked potential (VEP/c-VEP) systems, neural entrainment, and dual-brain coupling, on RT, DM, and related cognitive outcomes in esports athletes and competitive gamers. Methods: Following PRISMA 2020 guidelines for study identification and selection, comprehensive searches were conducted across PubMed, Scopus, Web of Science, IEEE Xplore, PsycINFO, ScienceDirect, and Google Scholar. Studies examining BCI-induced neural modulation and its cognitive or performance effects in esports players or experienced gamers were included. Eighteen studies met the criteria, comprising controlled trials, pre–post interventions, cross-sectional neurophysiology studies, comparative behavioural analyses, and supporting systematic reviews. Because of substantial heterogeneity in study designs, BCI modalities, and outcome measures, results were synthesised using a structured integrative narrative approach. Results: Across studies, BCI-mediated neural synchronization produced consistent improvements in RT, DM accuracy, cortical oscillatory stability, and neural connectivity. MI-BCI and gamified systems enhanced MI accuracy, user engagement, and cognitive load regulation. VEP-based BCIs accelerated perceptual processing by improving signal reliability and reducing latency. Dual-brain coupling improved coordinated decision behaviour. Additional evidence indicates that experienced gamers display superior working memory, attentional control, and visuomotor coordination compared with non-gamers. However, variability in study design, small samples, and a moderate risk of bias limit the strength of causal inference. Discussion: BCI-based neural synchronization shows promise as a tool for enhancing neurocognitive performance in esports athletes. Future studies should prioritize standardized training protocols, multimodal neural-measurement methods, and longitudinal designs to determine long-term effectiveness and real-world applicability.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnhum.2026.1778884</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnhum.2026.1778884</link>
        <title><![CDATA[Toward practical BCIs: a BMNABC-based feature selection and sensor optimization framework for implicit learning detection from multimodal EEG-fNIRS data]]></title>
        <pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Chayapol Chaiyanan</author><author>Tustanah Phukhachee</author><author>Keiji Iramina</author><author>Boonserm Kaewkamnerdpong</author>
        <description><![CDATA[Implicit learning is a fundamental cognitive process whose identification is critical for understanding human cognition and developing innovative training methodologies. We propose a generalizable feature selection and sensor optimization framework using simultaneous EEG and fNIRS to identify these events. Our approach leverages a two-stage optimization process driven by a binary multi-neighbor artificial bee colony (BMNABC) algorithm. The BMNABC uses the model’s classification accuracy to guide the heuristic search for the most discriminative feature subset. First, the framework prioritizes optimal features from high-dimensional, multimodal data using a normalized weighted sum (NWS) metric. Second, it implements a recursive backward elimination mechanism to reduce the number of sensors for practical brain-computer interface (BCI) applications. Our results demonstrate that the BMNABC framework successfully identifies a superior feature set, leading to a significant improvement in classification accuracy over using either modality alone. Critically, the selected features provided neurophysiological validation, isolating key biomarkers in the prefrontal cortex. We also show that a sparse yet highly effective sensor configuration can be achieved, maintaining high performance with up to 66% fewer sensors. This work not only provides a data-driven method for detecting implicit learning but also advances the design of more efficient and user-friendly BCI systems.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnhum.2026.1812507</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnhum.2026.1812507</link>
        <title><![CDATA[Cortical activity increases in speech motor areas as a function of the subjective loudness of inner speech]]></title>
        <pubDate>Fri, 01 May 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Barry H. Cohen</author><author>Bin Zhang</author>
        <description><![CDATA[Introduction: Inner speech, sometimes referred to as an inner monologue or silent verbal thinking, is a common mental phenomenon, often experienced as a faint auditory image of words as spoken in one’s own or a more generic voice. Brain-scanning research has shown that inner speech activates many of the same cortical areas responsible for spoken speech. This neural overlap has recently been leveraged with considerable success to decode the content of inner speech from cortical activity using cutting-edge data-analytic methods, enabling totally paralyzed patients to communicate. The hypothesis underlying the current study is that efforts to create subjectively louder inner speech will be associated with greater neural activity in cortical areas associated with overt speech. Methods: While they lay in an MRI scanner, eight participants were asked to repeat simple syllables as either soft or loud inner speech in a randomized order. For comparison purposes, the inner speech trials were followed by a phase in which participants listened to a recording of the same syllables they had made earlier, and a phase during which they repeated those syllables aloud. Results: As expected, we found significantly greater neural activity in cortical areas associated with speech motor activity during loud as compared to soft inner speech. We also found that greater suppression of neural activity in auditory perception areas was associated with louder inner speech. Discussion: The results are discussed with respect to their implications for the identification of inner speech in totally paralyzed individuals and the possibility of using neurofeedback to reduce the volume of negative inner speech.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnhum.2026.1836774</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnhum.2026.1836774</link>
        <title><![CDATA[Editorial: Non invasive BCI for communication]]></title>
        <pubDate>Mon, 27 Apr 2026 00:00:00 +0000</pubDate>
        <category>Editorial</category>
        <author>Sadaf Moaveninejad</author><author>Eduardo Santamaría-Vázquez</author><author>Jiahua Xu</author><author>Camillo Porcaro</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnhum.2026.1777024</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnhum.2026.1777024</link>
        <title><![CDATA[Brain-computer interface: an update for the clinicians]]></title>
        <pubDate>Wed, 22 Apr 2026 00:00:00 +0000</pubDate>
        <category>Review</category>
        <author>Agam Jain</author><author>Sreelakshmi Raveendran</author><author>Krishnan Padmakumari Sivaraman Nair</author><author>Subasree Ramakrishnan</author>
        <description><![CDATA[This narrative review critically examines the fundamental principles and clinical applications of Brain-Computer Interfaces (BCIs) in neuroscience and mental health. We searched PubMed, Scopus, and PEDro databases using pre-defined keywords, with inclusion restricted to clinical studies. The manuscript provides an evidence-based assessment of current indications, technological limitations, and emerging solutions, offering insights into both the opportunities and challenges for clinical integration. Clinical decision-making pathways are outlined to guide the adoption of BCI technologies in patient care. This article aims to increase awareness among clinicians and to equip them with the essential knowledge required as BCI systems advance toward mainstream clinical use.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnhum.2026.1793705</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnhum.2026.1793705</link>
        <title><![CDATA[In-ear EEG wearables for brain activity assessment and cognitive rehabilitation: the emerging role of multimodal embedded intelligence]]></title>
        <pubDate>Mon, 20 Apr 2026 00:00:00 +0000</pubDate>
        <category>Review</category>
        <author>Asma Channa</author><author>Herbert F. Jelinek</author><author>Abdelkader Nasreddine Belkacem</author><author>Mohamed Atef</author><author>Ibrahim (Abe) M. Elfadel</author>
        <description><![CDATA[This literature review critically examines the design, validation, and application of non-invasive in-ear electroencephalography (ear-EEG) systems as emerging wearable platforms for long-term neurophysiological monitoring and intervention. Following PRISMA guidelines, studies published between 2010 and 2025 were systematically selected from four major databases and organized into four thematic domains: in-ear wearable system design and validation, multimodal sensing and stimulation, embedded intelligence, and brain-state monitoring and rehabilitation. The review focuses exclusively on wearable, ear-centered EEG technologies, explicitly excluding cochlear implants and other invasive or behind-the-ear systems. We analyze key engineering challenges unique to ear-EEG, including electrode placement constraints, mechanical–electrical coupling, motion robustness, power efficiency, and long-term wearability. The review highlights a growing transition toward compact, wireless ear-EEG systems with on-device signal processing and embedded machine learning, enabling real-time brain-state estimation under ambulatory conditions. Multimodal integration, combining ear-EEG with complementary sensors such as EOG, inertial units, and cardiovascular signals, is shown to improve artifact awareness, contextual interpretation, and closed-loop capability. Beyond summarizing existing technologies, this review identifies critical gaps limiting clinical translation, including the lack of standardized validation protocols, limited embedded autonomy, and underexplored closed-loop neurofeedback and neuromodulation architectures. By synthesizing advances across hardware design, signal processing, and intelligent system integration, this work provides a systems-level roadmap for the future development of wearable, intelligent, and clinically robust ear-EEG platforms for mental health, neurorehabilitation, and continuous brain monitoring.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnhum.2025.1695370</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnhum.2025.1695370</link>
        <title><![CDATA[Exploring individual biases in BCI research and users: Does gender matter?]]></title>
        <pubDate>Fri, 17 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Cornelia Herbert</author><author>Viviana Ramos Acuna</author><author>Raphael R. K. Kneipp</author><author>Nina I. Kapfer</author>
        <description><![CDATA[Objective: Brain-Computer Interface (BCI) is an interdisciplinary research field characterized by rapid technological advances and collaborative efforts to develop user-friendly, adaptive devices that enable healthy and non-responsive users to communicate and interact with their environment through brain signals elicited by specific instructions or tasks. However, research often shows gender bias, especially in scientific disciplines with strong technological, medical, or social foundations. Gender biases have been found among scientists conducting and publishing research. They may also exist among examiners and study participants. Research question and methods: This study investigates whether gender biases are present in BCI research, particularly in the distribution of women and men across editorial boards and authorship of studies focusing on psychological human factors that influence BCI performance and usability. We systematically analyzed the gender distribution in neuroscientific journals that accept BCI research or have a strong focus on BCI, reviewed their editorial boards, analyzed BCI publications, including those related to psychological human factors, and examined gender biases among study participants. Additionally, we reviewed EEG studies investigating sex- or gender-related differences in EEG signals relevant to BCI research. Results: We observed significant differences in the representation of women and men among editorial board members and BCI authors, including first-, co-, and last-authorship. Similarly, there were differences in the gender distribution of participants in BCI studies. Moreover, the literature review suggests potential differences in brain signals between women and men within the studied samples. The impact of these differences on performance in BCIs, such as motor-imagery SMR-BCIs, SSVEP-BCIs, and P300-BCIs, as well as on training methods and BCI usability, still needs to be explored. Conclusion: Our findings emphasize the importance of increasing awareness of gender-, sex-, and user-related factors in BCI research. In line with recent perspectives highlighting the need to address gender biases and individual differences in users’ language, motivation, and cultural background, future BCI research should systematically examine gender and sex differences. This will help promote gender equality in BCI research and lead to a better understanding of users’ needs, preferences, and individual characteristics.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnhum.2026.1774409</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnhum.2026.1774409</link>
        <title><![CDATA[Case Report: post-stroke rehabilitation with a visuomotor transformation-based brain-computer interface]]></title>
        <pubDate>Tue, 14 Apr 2026 00:00:00 +0000</pubDate>
        <category>Case Report</category>
        <author>Alisa Kokorina</author><author>Nikolay Syrov</author><author>Lev Yakovlev</author><author>Mikhail Lebedev</author>
        <description><![CDATA[Brain–computer interfaces (BCIs) are increasingly explored as tools for post-stroke neurorehabilitation. Motor imagery (MI)-based paradigms are widely used but may be difficult for some patients to perform reliably, motivating the exploration of alternative control strategies. This study presents a retrospective exploratory case series (n = 5) evaluating the feasibility and safety of a P300-based BCI paradigm designed to engage visuomotor transformation processes during upper limb rehabilitation. Two patients underwent rehabilitation using the P300-based paradigm, while three patients used an MI-based BCI within the same rehabilitation framework. In both conditions, BCI control was integrated with a robotic orthosis and an immersive virtual reality (VR) environment. BCI performance, neurophysiological responses (event-related potentials and event-related desynchronization), and clinical measures (Fugl–Meyer Assessment of the Upper Extremity, NIHSS) were assessed before and after a 10-session rehabilitation course. All participants were able to achieve BCI control above chance level. Across cases, changes in clinical scores and consistent neurophysiological patterns associated with task engagement were observed. No adverse events or clinically significant safety concerns were identified. These findings suggest that a P300-based BCI paradigm incorporating visuomotor transformation can be feasibly implemented within a VR-assisted robotic rehabilitation framework. Given the exploratory design, small sample size, and heterogeneity of the cohort, the results should be interpreted as hypothesis-generating. Further controlled studies are required to determine the clinical relevance and potential applications of this approach.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnhum.2026.1811759</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnhum.2026.1811759</link>
        <title><![CDATA[MCFANet: a multi-class fusion attention network for motor imagery EEG classification]]></title>
        <pubDate>Thu, 09 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Peijie Zhao</author><author>Tong Liang</author><author>Hao Jia</author><author>Azure Dayan</author><author>Josep Dinarès-Ferran</author><author>Jordi Solé-Casals</author>
        <description><![CDATA[Introduction: In multi-class motor imagery decoding, traditional spatial filtering methods extract effective discriminative spatial features but decompose the task into independent binary subproblems and typically retain only energy statistics while discarding temporal dynamics. Deep learning methods can learn spatiotemporal features but must learn spatial patterns from scratch, making it difficult to fully capture established neurophysiological priors under limited training samples. This paper proposes a Multi-Class Fusion Attention Network (MCFANet) that combines the multi-class spatial filtering outputs of FBCSP with the spatiotemporal feature extraction capability of convolutional neural networks for multi-class motor imagery EEG classification. Methods: MCFANet concatenates the spatial filtering outputs from all classes and sub-bands along the channel dimension to construct a virtual channel representation containing the discriminative responses of all classes. The full time series is preserved and fed into a convolutional module for spatiotemporal feature extraction, and a channel attention module adaptively reweights the feature maps to focus on the most discriminative representations. Four-class classification experiments were conducted on two public datasets. Results: On Dataset 2a, MCFANet achieved an accuracy of 67.94% ± 13.70, outperforming FBEEGNet (63.98%) and EEGNet (58.79%). On the High Gamma Dataset, MCFANet achieved 87.10% ± 10.09, improving over FBEEGNet by approximately 2.5 percentage points. Paired t-tests and effect size analysis confirm that the improvements over the main baseline methods are statistically significant. Discussion: The results suggest that reorganizing multi-class spatial discriminative responses into a unified representation that preserves temporal dynamics provides an effective path for bridging traditional spatial filtering and deep learning.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnhum.2026.1697837</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnhum.2026.1697837</link>
        <title><![CDATA[EEG-based brain–computer interface with immersive virtual reality for phantom limb pain: a single-center pilot neurofeedback trial]]></title>
        <pubDate>Tue, 07 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Vincent Roualdes</author><author>Saïd Moussaoui</author><author>Jean-Marie Normand</author><author>Emmanuelle Kuhn</author><author>Julien Nizard</author><author>Aurélien Van Langhenhove</author>
        <description><![CDATA[Background: Phantom limb pain (PLP) is a challenging neuropathic pain condition following limb amputation or brachial plexus injury. Non-pharmacological interventions such as motor imagery training, phantom motor execution and mirror therapy have shown potential to alleviate PLP by engaging sensorimotor circuits, but their effects are debated. We developed GHOST, a portable EEG-based brain–computer interface (BCI) coupled with immersive virtual reality (VR), allowing patients to control a virtual limb via motor imagery in real time, as a neurofeedback-based rehabilitation tool. Methods: We conducted a single-center exploratory pilot trial to assess the feasibility and preliminary efficacy of this device. Seven patients with chronic upper-limb PLP (amputees or brachial plexus avulsion, pain ≥3/10) underwent 10 training sessions over 2 weeks. Daily pain diaries (distinguishing continuous pain vs. paroxysmal pain episodes) were recorded for 1 month before and 1 month after the intervention, with follow-up to 6 months. Motor imagery ability, anxiety-depression (HADS), and quality of life (SF-36) were also evaluated. Results: Six patients completed ≥8 sessions. All participants achieved BCI control of the virtual hand, with high success rates (>70%) even as task difficulty increased, demonstrating system feasibility. No adverse events occurred. Compared to baseline, patients experienced a significant short-term reduction in paroxysmal pain (frequency and intensity of pain “flare-ups”), with up to >80% median decrease in weekly cumulated pain episode intensity (p < 0.001). Three of five patients also reported around 30% improvement in average daily pain during the first post-training month. HADS anxiety/depression scores showed a non-significant improving trend. By 3–6 months post-training, pain levels had largely returned to pre-intervention values. Conclusion: This pilot study supports the safety and feasibility of EEG-BCI with VR for PLP and suggests that it can yield short-term analgesic effects, particularly on paroxysmal pain. These findings support the hypothesis that sensorimotor re-engagement could effectively target maladaptive neural processes underlying PLP, while warranting confirmation in controlled trials. Future work will optimize the training protocol and investigate neuroplastic changes associated with this BCI-VR intervention.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnhum.2026.1791677</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnhum.2026.1791677</link>
        <title><![CDATA[Electroencephalogram-based multimodal attention level classification using deep learning techniques]]></title>
        <pubDate>Fri, 27 Mar 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Yi Zhong</author><author>Zhenyu Wang</author><author>Xi Zhao</author><author>Tianheng Xu</author><author>Ting Zhou</author><author>Honglin Hu</author>
        <description><![CDATA[This study aims to develop a novel attention level prediction method using a multimodal brain-computer interface system that integrates electroencephalogram (EEG), electrocardiogram (ECG), and electrooculogram (EOG) signals to enhance prediction accuracy and robustness. We propose the Multi-Feature Enhanced Attention Network (MEAN), which leverages the complementary strengths of these signals: EEG provides insights into brain electrical activity, ECG captures heart rate variability to reflect emotional and cognitive states, and EOG records eye movements for contextual attention level information. The model is designed to address the limitations of single-modality signals, such as noise susceptibility and limited information range. Experimental results demonstrate that MEAN achieves an average accuracy of 0.9524, outperforming traditional models. The model exhibits superior adaptability, particularly in handling EEG and multimodal data, and shows enhanced predictive performance compared to existing approaches. In conclusion, the proposed MEAN model effectively integrates multimodal physiological signals to improve attention level prediction, offering a robust and accurate solution for applications requiring attention level monitoring. This research provides a foundation for advancing applications in education, work efficiency assessment, and cognitive enhancement technologies, highlighting the potential of multimodal approaches for understanding and predicting attention states.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnhum.2026.1720969</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnhum.2026.1720969</link>
        <title><![CDATA[Individualized electrode subset improves the calibration accuracy of an EEG P300-design brain-computer interface for people with severe cerebral palsy]]></title>
        <pubDate>Thu, 26 Mar 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Si Long Jenny Tou</author><author>Seth A. Warschausky</author><author>Petra Karlsson</author><author>Jane E. Huggins</author>
        <description><![CDATA[Introduction: This study examined the effect of individualized electroencephalogram (EEG) electrode location selection for non-invasive P300-design brain-computer interfaces (BCIs) in people with varying severity of cerebral palsy (CP) in a post-hoc offline analysis. Methods: A forward selection algorithm was used to select the best performing eight electrodes (of an available 32) to construct an individualized electrode subset for each participant. An individualized subset size of eight was chosen so that its accuracy could be compared with that of a widely used default subset. Results: Across 51 participants, individualized subsets improved calibration accuracy only for the severe CP cohort (mean +28.6% absolute; 95% CI [13.4%, 46.1%]; p < 0.0001). No group-level benefit was detected for mild CP or typically developing controls, although several individuals in these groups improved (2/17 mild CP; 1/10 controls). In the subset with held-out testing data (mild CP and controls), calibration gains did not translate to higher testing accuracy; among controls, the subset effect was reduced on testing (−9.6%, 95% CI [−13.3%, −5.8%], p < 0.0001), with no evidence of change for mild CP. Participants with severe CP typically required larger subsets to approach asymptotic accuracy, whereas ≤ 8 electrodes were sufficient for most others. Discussion: The findings suggest that electrode selection can accommodate atypical neuroanatomy in people with severe CP, while the default electrode locations are sufficient for people with milder impairments from CP and typically developing individuals.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnhum.2026.1712380</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnhum.2026.1712380</link>
        <title><![CDATA[Polarity-considered EEG microstates improve classification accuracy of oddball stimulus]]></title>
        <pubDate>Wed, 18 Mar 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Tatsumi Tsubaki</author><author>Shiho Kashihara</author><author>Tomohisa Asai</author><author>Hiroshi Imamizu</author><author>Isao Nambu</author>
        <description><![CDATA[Brain–computer interfaces (BCIs) require efficient feature extraction and dimensionality reduction from high-dimensional neural signals. Electroencephalogram (EEG) microstate analysis is a rapid and noise-resistant approach that classifies instantaneous EEG states into several spatial distribution patterns (templates). Previous BCI studies using the EEG microstate approach have typically used aggregated metrics, such as duration, frequency of occurrence, or time coverage, and have rarely applied pointwise microstate labeling as temporally ordered, one-dimensional sequences for robust classification. Moreover, the physiological relevance of EEG topographic polarity has often been overlooked, despite its potential to reveal smoother state transitions and align with event-related potential components. In this study, we applied polarity-considered microstate labeling to stimulus-driven classification in an oddball paradigm. EEG data from 40 healthy participants (20 per response type) were analyzed across three factors: stimulus modality (auditory or visual), modality condition (unimodal or cross-modal), and response type (key-response task or mental counting task). Preprocessed 32-channel EEG data were labeled with microstate templates (A–E ± topographical polarity) using a winner-take-all approach, and the resulting sequences were classified using multiple machine-learning models. The results showed that tree-based ensemble models (Random Forest, XGBoost, and CatBoost) achieved the most stable and accurate performance in the key-response task with cross-modal visual targets. These models reached an area under the receiver operating characteristic curve above 0.8 and a mean F1 score of 0.83. Preserving polarity improved classification by approximately 20% across tasks, doubling the label-space granularity and revealing temporal patterns aligned with the N200 and P300 components. 
Visual stimuli generally outperformed auditory stimuli, and cross-modal benefits emerged primarily in key-response tasks. These findings demonstrate that polarity-considered microstate labeling enhances classification accuracy and interpretability in BCIs. This method highlights the potential for real-time applications, such as P300 spellers and multimodal attention monitoring.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnhum.2026.1747655</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnhum.2026.1747655</link>
        <title><![CDATA[Open access individual finger movement dataset with fNIRS]]></title>
        <pubDate>Fri, 13 Mar 2026 00:00:00 +0000</pubDate>
        <category>Data Report</category>
        <author>Haroon Khan</author><author>Hammad Nazeer</author><author>Peyman Mirtaheri</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnhum.2026.1755549</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnhum.2026.1755549</link>
        <title><![CDATA[Dynamic graph based attention spectral network for motor imagery-brain computer interface]]></title>
        <pubdate>2026-03-04T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Zexiong Shao</author><author>Zhenghui Gu</author><author>Le Che</author><author>Zhuliang Yu</author><author>Yuanqing Li</author>
        <description><![CDATA[Motor imagery-based brain-computer interfaces (MI-BCIs) have been increasingly adopted in neurorehabilitation and related fields. The performance of MI-electroencephalogram (MI-EEG) decoding algorithms is central to the advancement of MI-BCIs. However, current studies often lack rigorous investigation into the brain's complex network organization. Moreover, most existing methods do not incorporate the cross-frequency coupling (CFC) phenomena that occur during MI into their algorithmic designs, nor do they adequately account for how temporal dynamics across different MI stages influence decoding outcomes. To address these limitations, we propose the Dynamic Spectral-Spatial Interaction Convolution Neural Network (DSSICNN), a parameter-efficient MI-EEG decoding framework that jointly extracts temporal-spectral-spatial features. DSSICNN adopts a dual-branch parallel architecture to concurrently learn spatial representations in both Euclidean and non-Euclidean domains. It further integrates a CFC-inspired attention module to model cross-spectral interactions, followed by an additional attention mechanism that quantifies the contributions of distinct MI stages to decoding performance. DSSICNN achieves decoding performance on two public datasets that surpasses the current state-of-the-art (SOTA) under both session-dependent and session-independent settings. Beyond its empirical advantages, DSSICNN offers design insights for developing Graph Neural Network (GNN)-based MI-EEG decoding algorithms and provides a network neuroscience-inspired perspective for understanding the neurophysiological mechanisms underlying MI.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnhum.2026.1795349</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnhum.2026.1795349</link>
        <title><![CDATA[Editorial: Brain-Computer Interfaces (BCIs) for daily activities: innovations in EEG signal analysis and machine learning approaches]]></title>
        <pubdate>2026-02-20T00:00:00Z</pubdate>
        <category>Editorial</category>
        <author>Baidaa Al-Bander</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnhum.2026.1751058</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnhum.2026.1751058</link>
        <title><![CDATA[Advancing individual finger classification through a sandwich enhanced CBAM network with ultra-high-density EEG data]]></title>
        <pubdate>2026-02-18T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Xinguo Zhang</author><author>Yiman Zhang</author><author>Hong Peng</author><author>Tao Deng</author>
        <description><![CDATA[Introduction: Ultra-High-Density Electroencephalography (uHD EEG) has gained increasing attention for its potential in individual finger decoding. However, accurately classifying these movements remains challenging due to the subtle spatial overlaps in cortical activity, which standard architectures often fail to isolate. Methods: To address this, we propose the Sandwich enhanced Convolutional Block Attention Module (SCBAM). The unique sandwich structure integrates dual attention mechanisms between convolutional layers, enabling the network to more effectively refine high-dimensional spatial features. Results and discussion: The proposed network achieves an average accuracy of 78.63 (1.56)% in binary classification across ten finger pairs in five subjects, with the highest accuracy of 85% obtained for the Thumb vs. Ring pair. In five-class classification across the same five subjects, it achieves an average accuracy of 61.12 (0.95)%, with the highest accuracy of 62.36% on subject S2. The five-class classification is performed using 10 binary classifiers under a one-vs.-one strategy. Notably, five-class classification of individual fingers has not been extensively explored in the current literature, particularly with high-density EEG (HDEEG) data. This study addresses this gap, offering a valuable reference for future discussions. We conduct ablation studies to investigate the individual and synergistic effects of the modules in the proposed model. The results highlight the contributions of the two sequential attention mechanisms to this task. In comparative experiments against six benchmark networks, SCBAM significantly outperforms these established models with FBCSP features. The proposed SCBAM also significantly improves accuracy in binary finger classification compared to SVM and MLP using the same uHD EEG dataset. 
In summary, this study presents a high-performance hybrid network for individual finger classification and highlights the potential of uHD EEG for dexterous task decoding in Brain-Computer Interfaces (BCIs).]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnhum.2026.1723907</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnhum.2026.1723907</link>
        <title><![CDATA[DSP-MCF: dual stream pre-training and multi-view consistency fine-tuning for cross-subject EEG emotion recognition]]></title>
        <pubdate>2026-02-18T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Jingjing Li</author><author>Xinqi Liu</author><author>Xia Wu</author><author>Ya Wang</author><author>Xin Huang</author>
        <description><![CDATA[Introduction: Electroencephalogram (EEG) emotion recognition is attracting increasing attention in the field of brain-computer interfaces due to its strong objectivity and resistance to forgery. However, cross-subject emotion recognition is complicated by individual variability, limited availability of EEG data, and interference in certain channels during EEG acquisition. Methods: We propose a novel synergistic Dual Stream Pre-training and Multi-view Consistency Fine-tuning (DSP-MCF) framework. The DSP-MCF is based on a domain generalization architecture. The framework includes a dual stream pre-training stage, wherein the spatiotemporal encoder-decoder network extracts generalized spatiotemporal representations from masked channels and reconstructs EEG features from incomplete data. A multi-view consistency loss function is then introduced during the multi-view consistency fine-tuning stage. This loss function is essential for aligning the distributions of emotion predictions derived from different views, specifically from actual and masked EEG data. Results: Experimental results demonstrate that the proposed DSP-MCF framework outperforms state-of-the-art methods in cross-subject EEG emotion recognition tasks. The model achieved an accuracy of 89.76% on the SEED dataset and 77.02% on the SEED-IV dataset. Discussion: The findings indicate that the DSP-MCF framework effectively addresses individual variability and maintains robust performance even under channel loss. By integrating spatiotemporal reconstruction with multi-view consistency, the model provides a reliable solution for handling incomplete or degraded EEG signals in practical BCI applications.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnhum.2026.1738876</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnhum.2026.1738876</link>
        <title><![CDATA[Individualized brain-computer interface for people with disabilities: a review]]></title>
        <pubdate>2026-02-10T00:00:00Z</pubdate>
        <category>Review</category>
        <author>Simanto Saha</author><author>Petra Karlsson</author><author>Collin Anderson</author><author>Omid Kavehei</author><author>Alistair McEwan</author>
        <description><![CDATA[Brain-computer interfaces (BCIs) facilitate functional interaction between the brain and external devices, enabling users to bypass their typical peripheral motor actions to control assistive and rehabilitative technologies (ARTs). This review critically evaluates the state-of-the-art BCI-based ARTs by integrating the psychosocial and health-related factors impacting user needs, highlighting the influence of brain changes during development and aging on the design and ethical use of BCI technologies. As direct human-computer interfaces, BCI-based ARTs offer extended degrees of freedom via augmented mobility, cognition and communication, especially to people with disabilities. However, innovation in BCI-based ARTs is guided by the complexity of disability types and levels of function across users that define individual needs. Therefore, an adaptable design is essential for tailoring a BCI-based ART to user-specific requirements, although such individualization may hinder the scalability of BCIs and their widespread adoption across users with disabilities. The trade-offs between implantable and non-implantable BCIs are explored, along with complex decisions around informed consent for people with communication or cognitive disabilities and in pediatric settings. Non-implantable BCIs offer broader accessibility and transferability across users due to more standardized signal acquisition and algorithm generalization, making them suited for a more comprehensive user group. This review contributes to the field by providing a discussion of BCI-based ARTs informed by individualized user needs, emphasizing the need for adaptable designs that align with the evolving functional and developmental needs of users with disabilities.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnhum.2026.1743936</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnhum.2026.1743936</link>
        <title><![CDATA[To assure aviation safety: the pilot fatigue detection based on short-term multimodal physiological signals]]></title>
        <pubdate>2026-02-03T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Kai Chen</author><author>Jiming Liu</author><author>Jiamei Zhu</author><author>Yan Xu</author><author>Lin Zhang</author><author>Zhenxing Gao</author>
        <description><![CDATA[Pilot fatigue detection based on physiological signals is practical for aviation safety. Current methods face challenges in balancing the high computational cost of deep learning models with robust accuracy, especially when integrating short-term multimodal physiological signals. To address these challenges, this paper proposes a framework for fast, accurate, and robust pilot fatigue detection by fusing features from electroencephalogram (EEG) and electrocardiogram (ECG) signals. The primary novelty of this work lies in a streamlined selection and classification strategy that overcomes the intrinsic limitations of Heart Rate Variability (HRV) analysis in short (2-s) segments while maintaining competitive accuracy at a drastically lower training cost. Specifically, the framework utilizes statistical ECG features, which are then integrated with EEG markers through a two-stage ANOVA-SVM feature selection process. The optimized, low-dimensional feature set is then classified using an XGBoost model. Evaluated on data from 32 pilots, the framework demonstrated robust generalization with an accuracy of 88.42% in rigorous cross-subject cross-validation, significantly outperforming our previous EEG-only ASFT-Transformer. While standard cross-clip validation yielded a higher accuracy of 98.36%, the cross-subject metric highlights the model's potential utility for unseen individuals. Crucially, the framework achieves this performance with an average training time of only 39.3 s, a drastic reduction compared to mainstream deep learning models. By striking a balance between accuracy, generalization, and efficiency, this study presents a promising and feasible approach for objective pilot fatigue management.]]></description>
      </item>
      </channel>
    </rss>