
EDITORIAL article

Front. Hum. Neurosci.

Sec. Brain-Computer Interfaces

This article is part of the Research Topic: Brain-Computer Interfaces (BCIs) for daily activities: Innovations in EEG signal analysis and machine learning approaches

Editorial: Brain-Computer Interfaces (BCIs) for daily activities: Innovations in EEG signal analysis and machine learning approaches

Provisionally accepted
  • Keele University School of Computer Science and Mathematics, Keele, United Kingdom

The final, formatted version of the article will be published soon.

Human-machine systems designed for daily activities must balance direct user control with autonomous system behaviour, reducing cognitive effort while maintaining responsiveness. Douglas et al. have demonstrated how shared-autonomy frameworks enable BCIs to operate across multiple levels of user involvement, from direct neural control to high-level goal specification. Rather than relying solely on low-level EEG commands, their approach integrates intent inference with adaptive robotic planning, allowing multiple users to coordinate multiple robots in functional daily tasks. A key insight from this contribution is that increasing autonomy can significantly reduce user workload while preserving task accuracy and efficiency, supporting more sustainable long-term BCI use in real-world settings. This work highlights a broader shift in BCI design toward collaborative control paradigms, in which machines proactively assist users rather than acting only as passive command receivers.

Single-modality EEG BCIs are often limited by noise, signal ambiguity, and performance variability across users and environments. Hybrid approaches that fuse EEG with additional sensing modalities offer a promising solution. Coutray et al. have demonstrated how integrating EEG with eye-tracking improves command disambiguation, interaction speed, and overall control reliability, particularly in immersive virtual reality environments. This hybrid design reduces reliance on high-precision EEG decoding alone and enables more natural, intuitive hands-free interaction, expanding BCI applicability beyond clinical contexts into entertainment, accessibility, and smart-environment control. This contribution reinforces the growing consensus that multimodal BCIs are more scalable, resilient, and user-friendly than EEG-only systems.

Robust BCI performance depends not only on decoding algorithms but also on the user's cognitive and neural state. Mohamed et al.
have provided evidence that baseline neural oscillations, particularly pre-cue alpha activity, influence the strength of event-related desynchronisation (ERD), a core signal used in motor-imagery BCIs. A concrete implication of this finding is that BCIs could dynamically adjust classification thresholds, training protocols, or feedback timing based on real-time cognitive state estimates, potentially improving accuracy, consistency, and user learning rates. These insights support the development of state-aware BCIs that adapt decoding strategies in response to moment-to-moment brain dynamics, improving reliability in naturalistic, everyday environments.

Beyond EEG decoding alone, combining physiological stress markers with subjective workload measures enables more comprehensive prediction of user performance in demanding tasks. Wei et al. have demonstrated that interpretable machine-learning models can forecast task outcomes while revealing which physiological factors most strongly influence performance. This is particularly relevant for real-world BCI deployment, where fatigue, stress, and cognitive overload can degrade system reliability. A key insight is that interpretable predictive models can support adaptive intervention strategies, such as adjusting task difficulty or providing rest prompts, while maintaining transparency and user trust.

Across the contributions, several overarching themes emerge:

1. Multimodal integration: Combining EEG with complementary sensors enhances decoding robustness and interaction flexibility.
2. Adaptive autonomy: Systems that balance user intent with automated assistance reduce workload and improve task efficiency.
3. Machine-learning innovation: Deep and interpretable models improve accuracy while supporting transparency and trust.
4. Cognitive context awareness: Accounting for neural and psychological state enables more reliable and personalised BCI performance.

Together, these themes suggest a transition from static, single-user BCIs toward adaptive, context-aware, and multi-agent systems designed for long-term everyday use.
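To make the state-aware adaptation theme concrete, the following sketch illustrates one possible (hypothetical) realisation, not the method of any of the contributions above: it estimates pre-cue alpha-band power from a single EEG channel and shifts a motor-imagery decision threshold up when alpha is high relative to a user's baseline (weaker ERD expected) and down when it is low. The channel choice, band limits, and gain parameter are all illustrative assumptions.

```python
import numpy as np

def alpha_power(eeg, fs=250.0, band=(8.0, 12.0)):
    """Mean spectral power in the alpha band of a single-channel EEG segment.

    eeg: 1-D array of samples from one pre-cue epoch; fs: sampling rate in Hz.
    """
    eeg = np.asarray(eeg, dtype=float)
    freqs = np.fft.rfftfreq(eeg.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg)) ** 2 / eeg.size  # simple periodogram
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(psd[mask].mean())

def adapted_threshold(pre_cue_alpha, baseline_alpha,
                      base_threshold=0.5, gain=0.1):
    """Raise the classifier's decision threshold when pre-cue alpha is high
    relative to the user's baseline (weaker ERD expected), lower it when
    alpha is low. `gain` controls adaptation strength (illustrative value).
    """
    ratio = np.log(pre_cue_alpha / baseline_alpha)
    return float(np.clip(base_threshold + gain * ratio, 0.1, 0.9))

# Example: a pre-cue epoch dominated by 10 Hz (alpha) activity
fs = 250.0
t = np.arange(int(fs)) / fs
epoch = np.sin(2 * np.pi * 10 * t)
threshold = adapted_threshold(alpha_power(epoch, fs), baseline_alpha=0.5)
```

In a real system the baseline would be estimated per user during calibration, and the adapted threshold would gate the motor-imagery classifier's output probability on each trial.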

Keywords: Assistive Technology, Electroencephalography (EEG), human machine interaction, hybrid brain-computer interfaces (BCIs), Interpretable AI, Machine Learning (ML), Neurotechnology and brain-machine interface, Real-world BCI applications

Received: 24 Jan 2026; Accepted: 02 Feb 2026.

Copyright: © 2026 Al-Bander. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Baidaa Al-Bander

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.