<?xml version="1.0" encoding="utf-8"?>
    <rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
      <channel>
        <title>Frontiers in Neuroergonomics | New and Recent Articles</title>
        <link>https://www.frontiersin.org/journals/neuroergonomics</link>
        <description>RSS Feed for Frontiers in Neuroergonomics | New and Recent Articles</description>
        <language>en-us</language>
        <generator>Frontiers Feed Generator,version:1</generator>
        <pubDate>Thu, 14 May 2026 14:57:16 GMT</pubDate>
        <ttl>60</ttl>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnrgo.2026.1796721</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnrgo.2026.1796721</link>
        <title><![CDATA[Reframing neuroergonomics in an evolutionary and active inference context]]></title>
        <pubDate>Thu, 07 May 2026 00:00:00 GMT</pubDate>
        <category>Perspective</category>
        <author>Farah I. Corona-Strauss</author><author>Jonas Vibell</author><author>Alexander L. Francis</author><author>Martina Lehser</author><author>Sebastian M. Markert</author><author>Daniel J. Strauss</author>
        <description><![CDATA[Everyday situations, such as feeling nauseous in virtual-reality environments or getting dizzy when reading as a car passenger, reveal how easily our senses can become confused when modern technology disrupts the innate relationship between the physical environment and human sensory systems. Such disruptions expose the vulnerability of the human senses to conflicting input arising in technologically altered environments. Even in the absence of direct sensory conflict, in complex technological settings such as digital factories and modern operating rooms, the convergence of multiple competing stimuli within and across sensory modalities further amplifies sensory load and cognitive strain. The common denominator of all such problems is that our ancient sensory processing and perceptual systems do not fit well with the technological world we have created. This evolutionary mismatch is already significant, but it will become even more critical as mixed reality concepts and advanced digital technologies integrate more deeply into our daily lives. Focusing on sensory mismatch and sensory strain as two significant ramifications of the Anthropocene, we reframe neuroergonomics in an evolutionary and active inference context. Our reframing argues that neuroergonomics must prioritize technology design that respects evolutionarily tuned priors, and should additionally deploy measured epigenetic, gene–culture, and learning-driven interventions as complementary levers to support adaptive change. Thus, we highlight the importance of aligning and grounding neuroergonomic design with the human sensory system according to constraints and affordances defined by human evolutionary history.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnrgo.2026.1797540</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnrgo.2026.1797540</link>
        <title><![CDATA[Physiological sensing for situational awareness: a theory-driven integrative review of multimodal and unsupervised approaches for visual search and human–autonomy teaming]]></title>
        <pubDate>Wed, 29 Apr 2026 00:00:00 GMT</pubDate>
        <category>Review</category>
        <author>Hicham Sekkati</author><author>Timothy Lam</author><author>Jean-Francois Lapointe</author><author>Luc Belliveau</author><author>Sebin Im</author>
        <description><![CDATA[Situational awareness (SA) is fundamental to performance in visually demanding, safety-critical tasks and in human–autonomy teaming (HAT), yet existing measurement approaches remain limited. Direct probes such as SAGAT or SART disrupt task flow, while performance-based metrics capture outcomes rather than the underlying cognitive processes of perception, comprehension, and projection. This article presents a theory-driven integrative review of physiological sensing approaches for estimating SA. Physiological sensing provides a complementary, continuous, and objective alternative, with evidence that neural (EEG), ocular (eye tracking and pupillometry), and autonomic (ECG and EDA) signals each index distinct components of SA. The review synthesizes findings across these modalities with emphasis on two key domains: visual search tasks, where operators must detect, classify, and assess potential threats, and human–autonomy teaming, where effective coordination depends on shared SA, trust, and transparency. Beyond traditional feature-based pipelines, we examine recent advances in multimodal fusion and deep learning, highlighting the increasing role of unsupervised and self-supervised representation learning. These approaches exploit large volumes of unlabeled physiological data to reveal latent cross-modal structure, reduce reliance on sparse ground-truth labels, and enhance scalability and ecological validity. By integrating evidence across sensing modalities and computational frameworks, this review outlines the current state of physiological SA estimation and identifies key research directions for continuous, real-time monitoring in visual search and human–autonomy teaming.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnrgo.2026.1757738</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnrgo.2026.1757738</link>
        <title><![CDATA[EEG hyperscanning in intellectual disability: a scoping review with implications for cognitive stimulation therapy]]></title>
        <pubDate>Mon, 13 Apr 2026 00:00:00 GMT</pubDate>
        <category>Review</category>
        <author>Pavithra Pavithra</author><author>James K. Bradshaw Bernacchi</author><author>Salma Ahmed</author><author>Garret McDermott</author><author>Mary McCarron</author><author>Philip McCallion</author><author>Eimear McGlinchey</author><author>Alejandro Lopez Valdes</author>
        <description><![CDATA[Electroencephalography (EEG) hyperscanning has emerged as a valuable method for examining social dynamics during group-based activities and may serve as a promising outcome measure in group interventions. Cognitive stimulation therapy (CST) is one such intervention, shown to improve cognition and quality of life in people with dementia and recently adapted for individuals with intellectual disability (ID). However, the potential for obtaining objective neural markers of CST benefit via EEG and hyperscanning is yet to be explored. This scoping review aims to identify existing evidence and gaps related to the use of EEG within CST research for adults with ID by examining three relevant areas: (1) the use of individual EEG and hyperscanning to evaluate cognitive and social outcomes in CST; (2) the evidence base for individual and group-based CST in people with ID; and (3) the use of EEG to evaluate cognitive and social outcomes for people with ID. Following the PRISMA-ScR guidelines, studies were searched in CINAHL, MEDLINE, PsycINFO, and EMBASE. Our search focused on adult participants with ID and studies that used EEG for the purpose of evaluating cognitive or social outcomes. Following screening and eligibility assessment, no studies met the inclusion criteria for EEG and CST: currently, there are no studies that use EEG to evaluate CST in adults with ID. Five studies were included for CST and ID, and 14 articles met criteria for EEG and ID. In total, 19 articles were included in the final review. The evidence base suggests that EEG has been successfully used to investigate neural mechanisms in ID and Down syndrome-related Alzheimer's disease. Existing CST research in ID remains largely feasibility-focused, but some preliminary findings show cognitive benefits, enhanced enjoyment, and social connectedness. Our review shows that there is a large gap in objective metrics for CST in general. Given that there is evidence of EEG studies including populations with ID, we propose that this gap can be filled by EEG hyperscanning, which offers a non-invasive, objective approach to evaluate cognitive and social outcomes in people with ID in future CST research.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnrgo.2026.1805149</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnrgo.2026.1805149</link>
        <title><![CDATA[Editorial: Virtual and robotic embodiment]]></title>
        <pubDate>Wed, 01 Apr 2026 00:00:00 GMT</pubDate>
        <category>Editorial</category>
        <author>Giacinto Barresi</author><author>Klaus Gramann</author><author>Giovanni Vecchiato</author><author>Gaetano Tieri</author><author>Philipp Beckerle</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnrgo.2026.1774423</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnrgo.2026.1774423</link>
        <title><![CDATA[Analysis of inter-brain synchrony in group-based electroencephalography to assess task-dependent interactions]]></title>
        <pubDate>Tue, 31 Mar 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Alex Kennedy</author><author>Nathan Shields</author><author>Sean Farrell</author><author>Alejandro Lopez Valdes</author>
        <description><![CDATA[Introduction: Social interaction and cooperative behavior are inherent and important aspects of daily life. Neuroscience research has demonstrated that neural activity synchronizes during cooperative group behavior. Hyperscanning, a method of simultaneously recording neural activity from two or more subjects, allows insight into the underpinnings of these neural dynamics. Methods: This study involves a triadic 24-channel EEG hyperscanning experiment, using a cooperative card game to elicit group interaction and cognitive puzzle games as individual control tasks. The study was split into two experiments: Experiment One, in which two groups repeatedly performed experimental blocks, and Experiment Two, in which 10 individual groups each participated in one block into which an adversary was randomly introduced to determine whether negative social behavior changed neural synchrony. After removing artefactual contributions from muscle and eyeblink components, and accounting for task duration discrepancies that may affect a group's synchrony, the neural correlation between subjects was examined via Inter-Subject Correlation (ISC). Linear mixed-effects models were used to assess the magnitude of differences in ISC, both unadjusted and adjusted for trial duration. Results: Similar neural synchrony levels were observed among group members in Experiment One (unadjusted: cooperative ISC = 0.286 ± 0.013, individual ISC = 0.267 ± 0.02, baseline ISC = 0.219 ± 0.008; duration-adjusted: cooperative ISC = 0.225 ± 0.015, individual ISC = 0.278 ± 0.017, baseline ISC = 0.23 ± 0.007) and Experiment Two (unadjusted and duration-adjusted: cooperative ISC = 0.186 ± 0.009, individual ISC = 0.177 ± 0.01, baseline ISC = 0.157 ± 0.005). Discussion: While no statistically significant differences were found between cooperative and non-cooperative tasks, task-based synchrony was higher than resting-state synchrony. Furthermore, significantly higher brain synchrony was observed in cooperative tasks when no adversaries were present in the group. This study highlights the importance of analysis parameters, such as the analysis time window, and of task contrasts that avoid overlapping cognitive demands when evaluating brain synchronization in naturalistic environments for group-based interactions.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnrgo.2026.1741655</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnrgo.2026.1741655</link>
        <title><![CDATA[SSVEP-driven BCI authentication with reduced number of EEG electrodes across high and low frequency ranges]]></title>
        <pubDate>Thu, 26 Mar 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Ayas Kiser</author><author>Atilla Cantürk</author><author>Ivan Volosyak</author>
        <description><![CDATA[Growing concerns over data privacy, credential theft, and spoofing attacks have highlighted the limitations of traditional authentication methods in high-security settings. To address these challenges, we propose a steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) authentication system that verifies short-lived, session-specific identity prompts using neural activity. The proposed system uses a single flickering visual stimulus to encode a unique, system-generated random code that remains unknown to the user. Instead of relying on conscious input, the system directly extracts the user's brain responses to the stimulus. Authentication is achieved by matching the frequency components of the recorded electroencephalography (EEG) signals to those embedded in the visual stimulus, enabling implicit verification without prior training or manual interaction. In an online BCI study involving 21 healthy participants, we evaluated four configurations differing in stimulation frequencies and EEG electrode count. Mean symbol-level accuracy reached 99% (95% CI: 98.3–99.6) for high-frequency stimulation with three electrodes, 95% (95% CI: 91.1–98.2) for high-frequency stimulation with a single electrode, 97% (95% CI: 95.1–98.4) for low-frequency stimulation with three electrodes, and 96% (95% CI: 94.5–97.8) for low-frequency stimulation with a single electrode. The corresponding mean trial durations were 38.6 s, 76.6 s, 17.2 s, and 27.1 s, respectively. Participants generally rated high-frequency flickering stimuli as more comfortable, whereas setup time and EEG wearability were identified as the main barriers to usability. These findings demonstrate that SSVEP-based authentication can provide accurate and training-free implicit authentication, while also offering potential resistance to spoofing attacks. The results suggest that this passive BCI approach is a promising direction for secure authentication, although practical deployment will require further improvements in speed, comfort, and wearability.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnrgo.2026.1765659</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnrgo.2026.1765659</link>
        <title><![CDATA[Identifying neural correlates of cognitive workload in high-performance motorsport simulation: an integrated EEG and telemetry analysis of driver performance]]></title>
        <pubDate>Wed, 11 Mar 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Wasinee Terapaptommakol</author><author>Yuil Tripathee</author><author>Danai Phaoharuhansa</author><author>Tipporn Laohakangvalvit</author>
        <description><![CDATA[High-performance motorsport requires precise cognitive regulation and rapid decision-making under extreme dynamic conditions, yet traditional vehicle telemetry alone cannot reveal the psychophysiological mechanisms that influence driving performance. This study presents an integrated neuroengineering framework combining electroencephalography (EEG) and vehicle telemetry to identify objective neural markers of cognitive workload, emotional valence, and mental fatigue in a high-fidelity Formula 1 simulation. Fifteen participants drove on the Silverstone Circuit in the simulation platform, during which physiological data were continuously recorded and synchronized. Performance tiers were classified using k-means clustering on lap times and trajectory consistency, followed by EEG-based analysis of workload and fatigue indices. Results showed that high-performing drivers exhibited efficient workload modulation, higher alertness, and reduced fatigue compared with lower-performing drivers. A track-specific “cognitive workload profile” was also identified, revealing that technically demanding corners induced higher neural workload, whereas moderate turns corresponded to transient engagement peaks. The findings demonstrate that integrating EEG with telemetry enables objective, data-driven assessment of driver cognitive states and provides a foundation for predictive modeling, driver performance optimization, and advanced simulation-based training systems in high-performance vehicle engineering.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnrgo.2026.1693662</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnrgo.2026.1693662</link>
        <title><![CDATA[Not just heard, but judged: a multidimensional perspective on auditory attention in everyday life]]></title>
        <pubDate>Mon, 23 Feb 2026 00:00:00 GMT</pubDate>
        <category>Review</category>
        <author>Silvia Korte</author><author>Martin G. Bleichner</author>
        <description><![CDATA[This review examines how listeners evaluate sounds in everyday contexts and how auditory attention research has approached this process. While experimental paradigms have yielded important insights into auditory processing, their constructs often rely on task-specific definitions that may not fully reflect how sounds are perceived and interpreted outside the laboratory. We argue that auditory evaluation is shaped by the interaction of acoustic properties, affective tone, task demands, and contextual framing. To account for this, we propose a multidimensional framework based on arousal, valence, and context, which enables a more flexible characterization of how sounds are judged in everyday listening. We also examine how different methodological approaches highlight distinct facets of this evaluative process. By focusing on the conditions under which sounds are experienced, this review contributes to a more integrative understanding of auditory attention across both controlled and naturalistic settings.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnrgo.2026.1702748</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnrgo.2026.1702748</link>
        <title><![CDATA[Multi-method characterization of neurophysiological and biological stress responses in surgical teams during real surgical procedures]]></title>
        <pubDate>Wed, 18 Feb 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Vincenzo Ronca</author><author>Lidia Castagneto Gissey</author><author>Maria Irene Bellini</author><author>Alessandra Iodice</author><author>Valentina Sada</author><author>Emilia Sbardella</author><author>Ludovica Vincenzi</author><author>Pietro Aricò</author><author>Gianluca Di Flumeri</author><author>Andrea Giorgi</author><author>Alessia Vozzi</author><author>Rossella Capotorto</author><author>Fabio Babiloni</author><author>Giovanni Casella</author><author>Gianluca Borghini</author>
        <description><![CDATA[Purpose: The surgical operating room is a high-stakes environment where stress can impact performance and patient safety. While hormonal and neurophysiological markers are established stress indicators, integrative studies in real-world surgical settings are scarce. This study aimed to provide a comprehensive, multimodal characterization of stress in surgical teams during live operations, comparing neurophysiological, biological, and behavioral responses across different levels of expertise and surgical phases. The goal was to validate a multi-method approach and identify objective markers for monitoring stress in real time. Methods: Surgical teams, each composed of four members, were categorized as “Expert” or “Novice” based on the lead surgeon's experience. All teams performed a standardized inguinal hernia repair. Continuous electroencephalography (EEG) and electrodermal activity (EDA) were recorded throughout the procedure to derive stress indices. Blood samples were collected pre- and post-surgery to measure adrenocorticotropic hormone (ACTH) and cortisol levels. Subjective stress was assessed via questionnaires, and team performance was quantified using a Combined Behavioral Teamwork Index (CBTI) based on surgical time, materials used, and patient outcomes. Findings: Neurophysiological data showed that the EEG-based stress index was significantly higher in Novice surgeons compared with Experts, particularly during the final and most demanding phase of the surgery (p = 0.008). This effect was most pronounced for the lead Novice surgeon (p = 0.01). Similarly, the EDA-based stress index was higher overall in Novices (p = 0.02). Post-surgery, ACTH levels increased significantly in Novices while decreasing in Experts (p = 0.008), indicating a sustained endocrine stress response in the less experienced group. Strong positive correlations were found between the EEG stress index and both ACTH levels (R = 0.67) and subjective stress (R = 0.63), validating the multimodal assessment. Conclusion: This study demonstrates that a multimodal approach can effectively characterize stress dynamics in a real-world surgical environment. The EEG-derived metric emerged as the most sensitive indicator, capable of discriminating stress levels with high temporal and role-specific precision. Novice surgeons exhibited significantly greater neurophysiological and endocrine stress responses, underscoring the need for targeted support and advanced training protocols. These findings lay the groundwork for developing real-time, objective stress monitoring systems to enhance surgical performance, training, and patient safety.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnrgo.2026.1771753</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnrgo.2026.1771753</link>
        <title><![CDATA[When robots reshape teams: neurodynamic insights into taskwork and teamwork in search and rescue]]></title>
        <pubDate>Wed, 11 Feb 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Robert J. Spenceley</author><author>Ranjana K. Mehta</author>
        <description><![CDATA[Introduction: Beyond traditional, dyadic human-robot interaction, embedding robots into multi-human teams, such as in search and rescue (SAR), requires an understanding of fundamental aspects of team composition and dynamics. While considerable work has examined how robot agents influence both taskwork and teamwork, few studies have focused on identifying which factor best explains differences in team outputs. This research investigates the neurodynamics of taskwork and teamwork as SAR teams transition between multi-human (mH) and multi-human-robot (mHR) configurations. Methods: Electroencephalography (EEG) has been a key tool in human teamwork research because of its sensitivity to changes in cognitive states such as mental workload, sustained attention, and engagement. Specifically, EEG power spectral density (PSD), particularly frontal theta activity (4–7 Hz), has been used to assess variations in mental workload and social cognition associated with task performance. EEG hyperscanning, which evaluates interbrain synchrony between two or more individuals using metrics such as the weighted phase lag index (wPLI), has been widely employed to study teamwork among humans. In this study, PSD and EEG hyperscanning were used to analyze taskwork and teamwork in 22 teams comprising a highly engaged SAR team member (mission commander), a less-involved member (safety officer), and a navigator as they searched for victims in a virtual emergency environment. The navigator was either a trained researcher posing as a participant or a virtual robot, with the robot's performance manipulated using the Wizard of Oz technique. Results: Taskwork results show that the social-cognitive abilities of mission commanders, but not those of safety officers, are adversely impacted by a robot navigator compared with a human navigator, despite perceived workload remaining stable. Although team trust outcomes were similar, neural synchrony across occipital, parietal, and temporal regions increased in mHR teams relative to mH teams, indicating different neurodynamic patterns of teamwork. Discussion: The study findings provide evidence that both taskwork and teamwork are fundamentally altered in mHR teams compared with mH teams, regardless of the effectiveness of robotic capabilities and functions. Therefore, beyond dyadic interactions, multi-human-robot teaming must be viewed as a fundamentally distinct team construct rather than simply an extension of human-human teaming.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnrgo.2026.1756956</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnrgo.2026.1756956</link>
        <title><![CDATA[Reviewing digital collaborative interactions with multimodal hyperscanning through an ever-growing database]]></title>
        <pubDate>Tue, 10 Feb 2026 00:00:00 GMT</pubDate>
        <category>Systematic Review</category>
        <author>Anna Vorreuther</author><author>Anne-Marie Brouwer</author><author>Mathias Vukelić</author>
        <description><![CDATA[Introduction: Digital technologies now mediate a substantial proportion of human collaboration, reshaping how individuals coordinate attention, share information, and jointly act on goals. These digitally mediated interactions engage neural, physiological, and behavioral processes differently than face-to-face settings do. Mobile hyperscanning, i.e., simultaneous (neuro-)physiological measurement of two or more individuals, offers a unique window into these multidimensional dynamics. Yet the existing literature is highly fragmented in design, modality, and analytic rigor, making it difficult to accumulate knowledge. This review systematically synthesizes hyperscanning research investigating collaboration involving digital components and identifies key methodological and conceptual gaps that must be addressed to advance the field. Methods: We searched Scopus, PubMed, and Web of Science (April 2025) for mobile hyperscanning studies on digital collaboration. Forty-five eligible studies involving simultaneous measurements of at least two healthy adults engaged in collaborative tasks with a digital interaction component were included. Studies were categorized across 13 dimensions, including modality, task design, interaction type, analysis method, and cognitive domain. To ensure transparency and support cumulative synthesis, we created a continuously updated online resource (“InterBrainDB”). Results: Most studies relied on unimodal neuroimaging, predominantly electroencephalography (EEG) or functional near-infrared spectroscopy (fNIRS), with only seven studies implementing multimodal combinations. Study designs favored cooperative tasks or naturalistic scenarios with symmetrical roles, typically using same-sex dyads of unfamiliar individuals. Non-verbal interaction was studied slightly more often than verbal interaction. Analytically, functional connectivity dominated, whereas effective connectivity, multimodal fusion, and machine learning were scarcely used. Executive and social cognition were investigated more frequently than creativity, memory, and language. Discussion: Research on digital collaboration through hyperscanning is growing, yet progress is limited by methodological heterogeneity, narrow use of modalities, and analytical conservatism. Future advances will require: (1) multimodal integration to fully capture neural, physiological, and behavioral dynamics; (2) systematic comparisons across varying degrees of digitalization to understand how technology shapes interaction; (3) physiology-informed analysis frameworks capable of modeling high-dimensional interpersonal dynamics; and (4) clearer reporting standards to enable reproducibility and large-scale synthesis. Resources like our InterBrainDB can structure community-driven progress toward ecologically grounded models of digitally mediated collaboration, a domain of increasing scientific and societal relevance.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnrgo.2026.1696865</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnrgo.2026.1696865</link>
        <title><![CDATA[Hybrid EEG-fNIRS phoneme classification based on imagined and perceived speech]]></title>
        <pubDate>Tue, 10 Feb 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Manuel Hons</author><author>Silvia Erika Kober</author><author>Selina Christin Wriessnegger</author><author>Guilherme Wood</author>
        <description><![CDATA[Introduction: Individuals affected by severe motor impairments often have no means of communicating with others. To build an intuitive speech prosthesis, imagined speech brain-computer interface research began to prosper, with numerous studies attempting to classify imagined speech from brain signals. While unimodal neuroimaging techniques such as electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) have been widely used, multimodal approaches combining two or more of them remain scarce. Methods: In this study, offline phoneme decoding was performed on hybrid EEG-fNIRS data. Twenty-two right-handed participants performed imagined and perceived speech trials encompassing four phonemes: /a/, /i/, /b/, and /k/. Features in the form of power spectral densities and mean hemoglobin concentration changes were extracted from the EEG and fNIRS data, respectively. Features were ranked according to the mutual information criterion relative to the target vector, and the optimal number of features to include was determined through optimization via 10-fold cross-validation. Results: Hybrid classification yielded accuracy scores of 77.29% and 76.05% for imagined and perceived speech, respectively. In both conditions, hybrid and EEG-based classification performances did not differ significantly, while fNIRS-based phoneme discrimination produced lower accuracies. Discussion: This study represents an innovative phoneme decoding attempt based on multimodal EEG-fNIRS data, in terms of both imagined speech and perception. Four-class imagined speech classification was primarily driven by EEG features, yet outperformed comparable previous studies.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnrgo.2025.1736672</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnrgo.2025.1736672</link>
        <title><![CDATA[Combining EEG and eye-tracking for cognitive and physiological states monitoring: a systematic review]]></title>
        <pubDate>Thu, 29 Jan 2026 00:00:00 GMT</pubDate>
        <category>Systematic Review</category>
        <author>Maria Rivas-Vidal</author><author>Alberto Calvo Cordoba</author><author>Cecilia E. García Cena</author><author>Fernando Daniel Farfán</author>
        <description><![CDATA[Monitoring situational awareness is critical in highly demanding environments where sustained attention and vigilance are essential for safety and performance. Electroencephalography (EEG) and eye-tracking (ET) provide complementary insights into the perceptual layer of situational awareness, capturing neural and ocular signatures of information processing, attention, and fatigue. However, studies have typically examined perception-related conditions such as workload, fatigue, stress, and drowsiness in isolation, limiting understanding of their shared and distinct physiological patterns. This systematic review synthesizes findings from studies that recorded EEG and ET concurrently to investigate perception-related conditions. Following the PRISMA 2020 statement, five databases were searched, and 47 studies met the inclusion criteria. The most frequently reported EEG features included theta, alpha, and beta activity, while ET metrics commonly involved fixation patterns, pupil diameter, blink dynamics, and percentage of eyes closed (PERCLOS). Across studies, fatigue, mental workload, and stress exhibited overlapping physiological signatures, although multimodal data helped differentiate these closely related states. Drowsiness and vigilance decrement appeared along a shared continuum, with microsleeps showing distinct physiological profiles. Classification models generally achieved higher accuracy when integrating EEG and ET features than when using either modality alone. This review highlights the potential of concurrent EEG and ET monitoring for improving the detection of perception-related conditions and for disambiguating closely related states. These findings also support the need for standardized multimodal protocols and real-time multimodal classification models to strengthen cognitive-state monitoring, operational performance, and error prevention in high-risk domains.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnrgo.2025.1673268</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnrgo.2025.1673268</link>
        <title><![CDATA[The state of the art in assessing mental fatigue in the cockpit using head-worn sensing technology]]></title>
        <pubdate>2026-01-12T00:00:00Z</pubdate>
        <category>Review</category>
        <author>Anneke Hamann</author><author>Carmen van Klaren</author><author>Rolf Zon</author><author>Frédéric Dehais</author><author>Nils Carstengerdes</author><author>Maykel van Miltenburg</author><author>Kalou Cabrera Castillos</author>
        <description><![CDATA[Mental fatigue is an important construct for aviation, as it can impair pilots' performance. However, its assessment remains challenging. Most research in this field is based on basic laboratory experiments, and the measurement methods in use have limitations that must be overcome before they can be applied in a cockpit. In this review, we present an overview of research on mental fatigue, its assessment, and the gap between fundamental research and its application in aviation. We provide an overview of classical experimental paradigms for mental fatigue induction and subjective measures, as well as advanced head-worn sensing technologies (or technologies targeting the head and face), namely electroencephalography (EEG), functional near-infrared spectroscopy (fNIRS), and eye-tracking. For each measure, we discuss limitations and open challenges. Finally, we draw conclusions on the feasibility of integrating these measurements into the cockpit and highlight gaps that future research needs to bridge.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnrgo.2025.1674928</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnrgo.2025.1674928</link>
        <title><![CDATA[Automated thermo-mechanical therapy for immediate relief in chronic non-specific lower back pain: a randomized controlled trial]]></title>
        <pubdate>2026-01-09T00:00:00Z</pubdate>
        <category>Clinical Trial</category>
        <author>Kyle Donnery</author><author>Giuseppina Pilloni</author><author>Mohamad FallahRad</author><author>Kiwon Lee</author><author>Byungyun Han</author><author>Soonhi Park</author><author>Jihye Kim</author><author>Leigh Charvet</author><author>Marom Bikson</author>
        <description><![CDATA[Objective: Chronic non-specific lower back pain (cNSLBP) is a prevalent and disabling condition, imposing a substantial socioeconomic burden due to high healthcare costs and productivity losses, with limited accessible and effective long-term treatment options. Automated Thermo-mechanical Therapy (ATT) is a promising, non-drug intervention that leverages innovative technical advances to provide multimodal pain relief, offering accessibility and low-cost delivery. This study tested ATT for immediate pain relief in individuals with cNSLBP in a single-session, double-blind, randomized controlled trial. Methods: Forty participants with cNSLBP were assigned via urn randomization to receive either active ATT (n = 20) or control ATT (n = 20) in a 40-min session. The active device applied heated cylindrical rollers along the spine, using far-infrared heat and mechanical tissue stimulation tailored to spinal alignment. In the control condition, the device used minimal mechanical therapy intensity without heat, targeting only the cervical area to avoid lower back therapeutic effects. Pre- and post-intervention assessments measured changes in pain intensity (primary outcome) via a 100-mm Visual Analog Scale for Pain (VAS-P100), alongside secondary outcomes assessing pain characteristics, anxiety, and functional mobility. Results: The active ATT group showed a significant reduction in pain on the VAS-P100, with an average decrease of 46.8%, compared to 17.0% in the control group. Participants in the active group also reported significantly greater subjective pain relief (p = 7.88e−05). Secondary outcomes demonstrated significant improvements in lumbar flexibility (Modified-Modified Schober Test, MMST) for the active ATT group compared to the control group (p = 0.0031). No adverse events were reported, and all participants tolerated the intervention well. Conclusions: A single session of ATT provides immediate, significant pain relief in individuals with cNSLBP, supporting its potential as a safe, non-invasive option for managing chronic back pain. Future studies should examine the long-term benefits of repeated ATT sessions and explore mechanistic insights into thermo-mechanical stimulation's effects on pain and function. Clinical Trial Registration: ClinicalTrials.gov, identifier: NCT06769321.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnrgo.2025.1629128</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnrgo.2025.1629128</link>
        <title><![CDATA[Attentional demands during walking are increased by small simulated leg length discrepancy]]></title>
        <pubdate>2025-12-08T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Keisuke Takada</author><author>Miyu Sugimoto</author><author>Yuma Takenaka</author><author>Kenichi Sugawara</author><author>Tomotaka Suzuki</author>
        <description><![CDATA[Introduction: Leg length discrepancy (LLD) is known to disrupt gait symmetry and affect motor control. However, the effects of LLD-induced gait asymmetry on attentional function during walking remain unclear. Therefore, this study aimed to investigate the impact of simulated LLD and walking track on attentional demands and gait parameters in young, healthy adults. Methods: This prospective study included participants who completed walking trials on straight (n = 14) and circular (n = 16) tracks under randomly assigned LLD conditions (no lift and 10-, 20-, 30-, and 40-mm shoe lifts). Attentional demands during walking were assessed using a simple reaction time (RT) paradigm. Gait symmetry was evaluated by the step-time ratio and triaxial trunk acceleration root mean square (RMS) ratios, calculated from timing and accelerometer data. The data were analyzed using a two-way mixed analysis of variance. Results: LLD significantly increased RT and step-time ratio compared to zero LLD. However, the circular walking track did not significantly affect RT or step-time ratio. LLD also significantly increased trunk movement asymmetry (RMS ratios). No significant interaction effects were found for any variable. Conclusion: Simulated LLD significantly increased attentional demands and gait asymmetry, although the rise in attentional demands was limited in healthy participants. The circular walking track had minimal effects and did not exacerbate the challenges associated with LLD. These results provide insights into the effects of gait asymmetry, caused by the degree of LLD and the walking environment, on human gait strategy and its associated attentional demands.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnrgo.2025.1671311</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnrgo.2025.1671311</link>
        <title><![CDATA[Estimating the valence and arousal of dyadic conversations using autonomic nervous system responses and regression algorithms]]></title>
        <pubdate>2025-12-03T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Iman Chatterjee</author><author>Maja Goršič</author><author>Robert A. Kaya</author><author>Joshua D. Clapp</author><author>Vesna D. Novak</author>
        <description><![CDATA[Introduction: Autonomic nervous system responses provide valuable information about interactions between pairs or groups of people but have primarily been studied using group-level statistical analysis, with a few studies attempting single-trial classification. As an alternative to classification, our study uses regression algorithms to estimate the valence and arousal of specific conversation intervals from dyads' autonomic nervous system responses. Methods: Forty-one dyads took part in 20-minute conversations following several different prompts. The conversations were divided into ten 2-minute intervals, with participants self-reporting perceived conversation valence and arousal after each 2-minute interval. Observers watched videos of the conversations and separately also rated valence and arousal. Four autonomic nervous system responses (electrocardiogram, electrodermal activity, respiration, skin temperature) were recorded, and both individual and synchrony features were extracted for each 2-minute interval. These extracted features were used with feature selection and a multilayer perceptron to estimate self-reported and observer-reported valence and arousal of each interval in both a dyad-specific (based on data from the same dyad) and dyad-nonspecific (based on data from other dyads) manner. Results: Both dyad-specific and dyad-nonspecific regression using the multilayer perceptron resulted in lower root-mean-square errors than a simple median-based estimator and two other regression methods (linear regression and support vector machines). Discussion: The results suggest that physiological measurements can be used to characterize dyadic conversations at the level of individual dyads and conversation intervals. In the long term, such regression algorithms could potentially be used in applications such as education and mental health counseling.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnrgo.2025.1520434</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnrgo.2025.1520434</link>
        <title><![CDATA[Let's put a person back into Cyber-Physical-Social research: Public Mental Models Framework]]></title>
        <pubdate>2025-11-26T00:00:00Z</pubdate>
        <category>Conceptual Analysis</category>
        <author>Mare Teichmann</author><author>Jaanus Kaugerand</author><author>Merik Meriste</author><author>Kalev Rannat</author>
        <description><![CDATA[This paper focuses on linking Public Mental Models with behavior, Situation Awareness, and stress management, and on predicting and intervening in public behavior in critical situations. Understanding and influencing behavior within complex Cyber-Physical-Social Systems (CPSS) requires an explicit link between mental models, behavior, situation awareness, and stress management. This paper introduces the Public Mental Models Framework (PMMF) as a systematic approach for analyzing and predicting public behavior in critical situations, thereby improving adaptive decision-making and person-AI collaboration. The PMMF explains how internal and external indicators (cognitive, social, cultural, political, economic, and technological) shape perception and behavioral responses across multiple levels: individual, team, organizational, community, and societal. By identifying these triggers and markers, the framework helps explain why behaviors deviate or stabilize under stress, providing an analytical basis for targeted interventions and resilience-oriented design. In contrast to traditional Situation Awareness models that emphasize what is perceived and how it is processed, the PMMF focuses on the interpretive mechanisms through which actors construct meaning and make decisions. Integrating the PMMF with Motivation-Opportunity-Ability (MOA) theory enables systematic assessment of behavioral potential and performance within CPSS. This integration strengthens the neuroergonomic foundation for evaluating human and AI entities and enhances the capacity to design interventions that foster informed, adaptive, and ethically aligned behavior in complex sociotechnical environments.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnrgo.2025.1672492</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnrgo.2025.1672492</link>
        <title><![CDATA[Pilot mental workload analysis in the A320 traffic pattern based on HRV features]]></title>
        <pubdate>2025-11-12T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Jiajun Yuan</author><author>Bo Jia</author><author>Chenyang Zhang</author><author>Lu Tian</author><author>Han Yi</author><author>Lin Wei</author>
        <description><![CDATA[Pilot mental workload is a critical factor influencing flight safety, particularly during dynamic flight phases with high cognitive demands such as takeoff and landing. This study evaluates pilot workload across different flight phases (takeoff, climb, cruise, descent, and landing) using heart rate variability (HRV) features and machine learning methods. Heart rate data were collected during simulated A320 traffic pattern flight missions and combined with multidimensional task assessments to obtain flight performance scores. Selected HRV features, namely Min_HR (minimum heart rate), SDNN (standard deviation of normal-to-normal intervals), SD2 (long-term variability index in the Poincaré plot), and Modified_csi (modified cardiac sympathetic index), were identified and used to train classifiers (RF, KNN, GBDT, XGBoost) for pilot mental workload level classification. The XGBoost model demonstrated the best performance after feature selection, with accuracy increasing from 50.09% to 66.67% (a 16.58-percentage-point improvement) and F1-score rising from 37.63% to 58.33% (a 20.70-percentage-point improvement) compared with using all HRV features. The findings revealed suppression of the selected HRV features during the high-workload phase (landing), which also showed the lowest performance scores, whereas HRV recovery and peak performance scores were observed in the low-workload phase (cruise). This research establishes a reliable framework for real-time pilot mental workload monitoring and provides predictive insights into cognitive overload risks during critical flight operations.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnrgo.2025.1589734</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnrgo.2025.1589734</link>
        <title><![CDATA[Towards neuroadaptive chatbots: a feasibility study]]></title>
        <pubdate>2025-10-15T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Diana E. Gherman</author><author>Thorsten O. Zander</author>
        <description><![CDATA[Introduction: Large language models (LLMs) are transforming most industries today and are set to become a cornerstone of the human digital experience. While integrating explicit human feedback into the training and development of LLM-based chatbots has been integral to the progress we see today, more work is needed to understand how best to align them with human values. Implicit human feedback enabled by passive brain-computer interfaces (pBCIs) could help unlock the hidden nuance of users' cognitive and affective states during interaction with chatbots. This study investigates the feasibility of using pBCIs to decode mental states in reaction to text stimuli, laying the groundwork for neuroadaptive chatbots. Methods: Two paradigms were created to elicit moral judgment and error processing with text stimuli. Electroencephalography (EEG) data were recorded with 64 gel electrodes while participants completed reading tasks. Mental state classifiers were trained offline with a windowed-means approach and linear discriminant analysis (LDA) for full-component and brain-component data. The corresponding event-related potentials (ERPs) were visually inspected. Results: Moral salience was successfully decoded at the single-trial level, with an average calibration accuracy of 78% on the basis of a 600-ms data window. Subsequent classifiers were not able to distinguish moral judgment congruence (i.e., moral agreement) from incongruence (i.e., moral disagreement). Error processing in reaction to factual inaccuracy was decoded with an average calibration accuracy of 66%. The identified ERPs for the investigated mental states partly aligned with previous findings. Discussion: This study demonstrates the feasibility of using pBCIs to distinguish mental states from readers' brain data at the single-trial level. More work is needed to transition from offline to online investigations and to determine whether reliable pBCI classifiers can also be obtained in less controlled language tasks and more realistic chatbot interactions. Our work marks a preliminary step toward understanding and using neural implicit human feedback for LLM alignment.]]></description>
      </item>
      </channel>
    </rss>