<?xml version="1.0" encoding="utf-8"?>
    <rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
      <channel>
        <title>Frontiers in Computational Neuroscience | New and Recent Articles</title>
        <link>https://www.frontiersin.org/journals/computational-neuroscience</link>
        <description>RSS Feed for Frontiers in Computational Neuroscience | New and Recent Articles</description>
        <language>en-us</language>
        <generator>Frontiers Feed Generator, version 1</generator>
        <pubDate>Tue, 12 May 2026 23:15:59 GMT</pubDate>
        <ttl>60</ttl>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fncom.2026.1780000</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fncom.2026.1780000</link>
        <title><![CDATA[A predictive map learned from diverse entorhinal inputs explains the role of context-dependent reorganization of hippocampal place cells]]></title>
        <pubDate>Mon, 11 May 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Yusuke Kuniyoshi</author><author>Tadashi Yamazaki</author>
        <description><![CDATA[The hippocampus is thought to support spatial memory and navigation by constructing predictive representations of the environment. Predictive map theory formalizes this function as a successor representation (SR). However, existing models assume a fixed and uniform distribution of place fields, despite experimental findings that place cell density is dynamically modulated by rewards and objects. Here, we propose a biologically inspired neural model in which predictive maps emerge from diverse entorhinal inputs. In the model, place cell-like representations are generated via non-negative sparse coding of medial entorhinal spatial signals and lateral entorhinal contextual and motivational signals, and are subsequently transformed into predictive maps using successor features. By coupling the predictive map to an actor–critic framework, the model supports goal-directed navigation in continuous environments. Furthermore, the model reproduces experience-dependent restructuring of hippocampal representations, including object-centered overrepresentation of place fields in two-dimensional environments and reward-centered overrepresentation in one-dimensional environments. Together, these results demonstrate that hippocampal predictive maps can emerge from the integration of diverse entorhinal inputs, providing a unified account of how spatial, contextual, and motivational information jointly shape hippocampal representations and behavior.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fncom.2026.1835802</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fncom.2026.1835802</link>
        <title><![CDATA[Feature fusion and WOA-GWO optimization for Alzheimer’s disease detection with sparse EEG channels]]></title>
        <pubDate>Fri, 08 May 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Ruofan Wang</author><author>Jitong Wang</author><author>Jiaxuan Cai</author><author>Siqian Wang</author><author>Zixuan Bai</author><author>Yanqiu Che</author>
        <description><![CDATA[Alzheimer’s Disease (AD) is a neurodegenerative disorder with insidious onset, making early diagnosis challenging. Electroencephalogram (EEG) is a promising noninvasive tool for AD diagnosis, but high-density EEG configurations cause computational burdens and hinder clinical translation. Thus, developing an efficient sparse EEG channel selection method with high classification accuracy is urgent for AD auxiliary diagnosis. This study proposes a multi-strategy enhanced Whale Optimization Algorithm-Grey Wolf Optimizer (WOA-GWO) hybrid model for EEG channel selection, combined with a nonlinear dynamic feature fusion framework. We extracted geometric features from second-order difference plot (SODP) and complexity features (sample entropy, fuzzy entropy) of EEG signals, then adopted the ReliefF algorithm for feature fusion and key feature selection. The WOA-GWO model was improved via chaotic initialization, nonlinear convergence factors, spiral-hierarchical position update, and random perturbation to avoid local optima. Experimental results show that the proposed framework achieves a classification accuracy of 96.97% for AD detection, with significantly reduced EEG channel dimensions (four optimal channels identified: T5, FP1, T4, F4). The WOA-GWO model outperforms the original WOA and GWO in convergence speed and optimization accuracy, and the fused features exhibit strong discriminability for AD-related EEG abnormalities. This work provides a reliable computational framework for developing lightweight, portable AD diagnostic systems, and the identified optimal EEG channels offer neurophysiological evidence for AD electrophysiological biomarkers.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fncom.2026.1797090</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fncom.2026.1797090</link>
        <title><![CDATA[Computational predictive processing models of consciousness: a systematic review of non-invasive brain signal analysis in disorders of consciousness]]></title>
        <pubDate>Fri, 08 May 2026 00:00:00 GMT</pubDate>
        <category>Systematic Review</category>
        <author>Sophie Adama</author><author>Martin Bogdan</author>
        <description><![CDATA[Introduction: The clinical assessment of patients with Disorders of Consciousness (DoC), ranging from the Vegetative State (VS/UWS) to the Minimally Conscious State (MCS), remains a significant challenge in neurology. Gold-standard behavioral tools are prone to high misdiagnosis rates because they depend on overt motor responses, which may be masked by physical impairments. Consequently, there is an urgent need for objective neurophysiological biomarkers to identify residual awareness. Predictive Processing (PP) is a leading theory that views the brain as a hierarchical inference engine. Under this framework, the brain minimizes “prediction errors” between internal generative models and sensory inputs. Neural signatures of these errors, such as the Mismatch Negativity (MMN), provide a window into the brain's automatic modeling of environmental regularities, serving as a proxy for conscious processing. Objective: This systematic review aims to identify and appraise peer-reviewed studies from the past 15 years that apply computational PP models to non-invasive brain signals in DoC patients. It synthesizes evidence for their diagnostic and prognostic utility and identifies methodological hurdles to clinical translation. Methods: A systematic synthesis was conducted on 30 peer-reviewed studies. Data regarding population demographics (total N≈2045), paradigms, and computational methods, including multivariate pattern analysis and deep learning, were extracted and appraised. Results: The evidence reveals a transition from simple waveform averaging to high-dimensional decoding of hierarchical prediction errors. Global information-sharing markers effectively distinguish conscious states, while the temporal progression of prediction error signatures in the early stages of coma demonstrates high specificity for predicting awakening. Conclusion: Computational PP models offer a transformative path toward reducing misdiagnosis. Future research must prioritize 24-hour continuous monitoring and multimodal data fusion to translate these theoretical frameworks into viable bedside clinical tools.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fncom.2026.1825622</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fncom.2026.1825622</link>
        <title><![CDATA[A brief history of dopamine prediction errors]]></title>
        <pubDate>Thu, 07 May 2026 00:00:00 GMT</pubDate>
        <category>Mini Review</category>
        <author>Biru B. Dudhabhate</author><author>Kauê M. Costa</author>
        <description><![CDATA[Dopamine signaling has become closely associated with reward prediction errors (RPEs), the difference between expected and experienced value. Although not without controversy, the dopamine RPE hypothesis is one of the most influential ideas in neuroscience. This review briefly summarizes its origins, empirical foundations, and theoretical development. We begin with early psychological studies that demonstrated that prediction errors, broadly defined, are central drivers of learning. These experiments inspired mathematical models that formalized associative learning rules and informed the development of reinforcement learning algorithms for artificial learning, including the influential temporal difference reinforcement learning (TDRL) framework, where learning is guided by prediction errors in value or reward. These theoretical proposals converged with neuroscience through the landmark discovery that midbrain dopamine neurons show activity patterns that are strikingly similar to the RPEs proposed in TDRL. The idea that this unique neuronal population, already implicated in several behavioral processes and brain disorders, could encode a computational variable central to reinforcement learning algorithms was a major conceptual shift, and provided a strong framework that allowed for rigorous hypothesis testing. Over the past three decades, increasingly sophisticated experiments have both replicated the core dopamine RPE finding across distinct experimental contexts and revealed important deviations from the canonical model predictions. These exceptions have sparked ongoing debate about how the hypothesis should be enhanced, revised, or replaced. The history of the dopamine RPE hypothesis is a quintessential example of how the integration of theory and experiments can drive progress in neuroscience and offers a template for theoretical–experimental synthesis.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fncom.2026.1793265</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fncom.2026.1793265</link>
        <title><![CDATA[A novel image-based neuronal network model framework for understanding visual multistability and neurological disorders]]></title>
        <pubDate>Wed, 29 Apr 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Kaito N. Hikino</author><author>Marina Nakayama</author><author>Yihui Wu</author><author>Victor J. Barranca</author>
        <description><![CDATA[While perceptual multistability arises from many types of stimuli across different sensory systems, there are common dynamical features that may be rooted in universal organizing principles underlying perception. We probe the fundamental mechanisms responsible for visual multistability using a neuronal network model framework in which a set of realistic images directly drives competing pools of neurons with nonlinear dynamics. Incorporating balanced network architecture, long-range connections from excitatory neurons to inhibitory neurons in competing pools, and a dynamic spiking threshold, the model produces irregular percept switching and replicates key experimental observations regarding dominance durations in binocular rivalry. Using a sequence of short-time observations of neuronal dynamics, we derive a new methodology for reconstructing the dynamic percept that generalizes to an arbitrary number of percepts, suggesting how rivalry, fusion, and interocular grouping may serve as different states in a single decision-making system. The model dynamics illustrate that perceptual alternations are potentially rooted in the breakdown of balance between excitation and inhibition when the spiking thresholds of suppressed neurons become sufficiently small, with more balanced dynamics generally facilitating longer dominance durations. Finally, we apply our model analysis toward characterizing the causes of psychiatric or neurological disorders, such as amblyopia and autism. Increasing the strength of connections originating from the pool of neurons associated with the stronger eye in amblyopia, we find that the weaker eye experiences shorter dominance durations, as found experimentally, supporting the notion that sufficiently imbalanced inter-eye competition prompts the suppression of information from the monocular stimulus corresponding to the weakened eye. Similarly, we show that increasing the ratio of excitatory to inhibitory inputs in the network systematically yields longer dominance durations, as observed for individuals with autism, and we thus demonstrate support for the excitation/inhibition imbalance hypothesis for autism.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fncom.2026.1839583</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fncom.2026.1839583</link>
        <title><![CDATA[Contralateral dominance emerges from geometric transformation in bilateral control systems]]></title>
        <pubDate>Wed, 29 Apr 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Nobuchika Yamaki</author><author>Tenna Churiki</author>
        <description><![CDATA[Introduction: Contralateral organization is a defining feature of vertebrate nervous systems, yet its functional origin remains incompletely understood. We examined whether contralateral routing can arise as an advantageous solution in delayed bilateral control systems using a minimal computational framework. Methods: We constructed abstract bilateral sensorimotor networks composed of sensory, central, and motor units on the left and right sides, and systematically compared alternative architectures differing in sensory laterality, commissural coupling, and local connectivity. We evaluated one-dimensional and two-dimensional models, introducing in the latter a continuous twist parameter representing transformations between sensory and motor coordinate relationships. Dense parameter scanning and bootstrap analysis were used to estimate the transition point and its robustness. Results: In one-dimensional models, contralateral configurations were dynamically viable but sensitive to the choice of objective function. In two-dimensional models, the twist parameter reorganized the architecture landscape: without transformation, optimal solutions were predominantly ipsilateral, whereas under strong transformation they became predominantly contralateral. Intermediate conditions exhibited an abrupt transition rather than a gradual shift. Dense parameter scanning localized this transition to a threshold at θ_c ≈ 0.483. Bootstrap analysis showed that this threshold was stable (95% CI: 0.481766–0.483507) and only weakly dependent on longitudinal delay. Objective values were minimized near and just above the transition region. Discussion: These results indicate that, within an abstract dynamical framework, contralateral routing can become advantageous under conditions of transformed sensorimotor relationships and delayed interactions.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fncom.2026.1798561</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fncom.2026.1798561</link>
        <title><![CDATA[Bridging modalities: a deep learning framework for brain tumor classification via CT-MRI integration and model fusion]]></title>
        <pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Ahmad Almadhor</author><author>Shtwai Alsubai</author><author>Najib Ben Aoun</author><author>Abdullah Al Hejaili</author><author>Amina Salhi</author><author>Tahani Alsubait</author><author>Fares Hamad Aljahani</author>
        <description><![CDATA[Artificial intelligence (AI) and machine learning (ML) have shown remarkable promise in advancing medical image analysis, yet their potential in neurology and psychiatry remains underexplored. This work explores the use of deep learning approaches for automated brain tumor classification, leveraging multimodal neuroimaging data comprising computed tomography (CT) and magnetic resonance imaging (MRI) scans. Two model families were evaluated: a custom CNN trained from scratch and a transfer-learning approach based on ResNet-18. Models were trained and validated separately on CT and MRI datasets, and further extended to a combined dataset through multimodal fusion. Experimental results demonstrate that the CNN achieved accuracies of 97% and 99% on the CT and MRI datasets, respectively, outperforming ResNet-18, which yielded 95% and 97% under the same settings. On the combined dataset, the CNN maintained superior performance (98%) compared to ResNet-18 (94%), highlighting the adaptability of CNNs to domain-specific features in medical imaging. These findings suggest that lightweight CNNs can be highly effective for neuroimaging-based tumor detection, particularly when multimodal data are leveraged. Beyond clinical utility in early diagnosis, the authors underscore the importance of exploring modality-specific characteristics and model adaptability in designing AI-driven diagnostic systems for neurological disorders.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fncom.2026.1800284</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fncom.2026.1800284</link>
        <title><![CDATA[A critical analysis of MBTI-based personality profiling with large language models]]></title>
        <pubDate>Wed, 22 Apr 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Jean Marie Tshimula</author><author>René Manassé Galekwa</author><author>Belkacem Chikhaoui</author>
        <description><![CDATA[This paper critically analyzes MBTI-based personality profiling using Large Language Models (LLMs), examining both their use as tools for inferring human personality and as subjects evaluated through psychometric frameworks. We review recent work (2020–2025) spanning traditional machine learning, fine-tuned transformer models, and zero-shot prompting approaches across datasets such as Kaggle MBTI, PersonalityCafe, Pandora, and MBTIBench. While top-performing LLM-based systems report 75%–85% accuracy at the dichotomy level, improvements over baselines are often modest, domain-dependent, and sensitive to dataset biases. Recent benchmarks employing soft labels reveal systematic issues, including polarized predictions, overconfidence, and limited calibration relative to population trait distributions. Beyond predictive performance, we examine emerging research that applies MBTI instruments directly to LLMs, showing that models exhibit reproducible yet context-dependent “personality-like” profiles, often skewed toward socially desirable traits due to alignment training. These findings raise conceptual questions about whether stable internal dispositions can meaningfully be attributed to generative systems whose outputs vary across prompts and versions. We argue that MBTI-based modeling with LLMs faces three core challenges: psychometric limitations of the MBTI construct itself, methodological weaknesses in self-reported training data, and philosophical ambiguity regarding the notion of AI personality. The paper concludes by outlining ethical risks, evaluation gaps, and research directions for more rigorous, calibrated, and theoretically grounded personality modeling in artificial intelligence systems.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fncom.2026.1762692</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fncom.2026.1762692</link>
        <title><![CDATA[Deterministic, stochastic, and mean-field PDE models in neuroscience]]></title>
        <pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate>
        <category>Review</category>
        <author>Coşkun Çetin</author><author>Jose Roberto Castilho Piqueira</author><author>Burhaneddin İzgi</author><author>Ayse Peker-Dobie</author><author>Semra Ahmetolan</author><author>Murat Özkaya</author>
        <description><![CDATA[Large neuronal networks demonstrate complex dynamics across multiple scales, ranging from single-neuron excitability and spike-train variability to mesoscopic rhythms and whole-brain activity. Different types of differential equation models have been developed to comprehend these phenomena, connecting deterministic, stochastic, and mean-field descriptions. At the deterministic level, ordinary differential equation (ODE) models, including conductance-based neuron models, neural-mass systems, and whole-brain networks, summarize neural behavior through a reduced set of macroscopic variables. At the population level, mean-field partial differential equation (PDE) models such as Fokker-Planck, age-structured, kinetic, and neural field equations describe the evolution of probability or population densities over membrane potentials, synaptic states, and other kinetic variables. These PDEs link single-neuron mechanisms to population-level activity and allow one to analyze bifurcations, oscillations, and other collective patterns. Stochastic differential equation (SDE) models and their extensions, which include jump-diffusion processes and stochastic PDEs (SPDEs), are widely used to describe random membrane fluctuations, irregular spike trains, synaptic plasticity, and large-scale variability in neural activity. These stochastic models are also applied to neural data analysis, for example to quantify noise in electrophysiological recordings and to infer latent neural dynamics. Because variability and noise are central in neural systems, we devote more space to stochastic models but always relate them back to the surrounding ODE and PDE frameworks. This hierarchy of ODE, PDE, and SDE-SPDE models demonstrates the versatility of differential-equation-based approaches in neuroscience, offering unified tools for multiscale modeling, neural signal processing, cognitive modeling, and the analysis of noisy neural systems. We also discuss some known numerical and computational approaches, especially for stochastic models, and conclude by outlining open challenges, such as multiscale inference, control-oriented formulations, and the integration of differential-equation models with modern machine-learning methods.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fncom.2026.1810869</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fncom.2026.1810869</link>
        <title><![CDATA[Commentary: Editorial: The convergence of AI, LLMs, and industry 4.0: enhancing BCI, HMI, and neuroscience research]]></title>
        <pubDate>Mon, 13 Apr 2026 00:00:00 GMT</pubDate>
        <category>General Commentary</category>
        <author>Alessandro Rossi</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fncom.2026.1761735</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fncom.2026.1761735</link>
        <title><![CDATA[Adaptation modulates effective connectivity and network stability]]></title>
        <pubDate>Fri, 10 Apr 2026 00:00:00 GMT</pubDate>
        <category>Perspective</category>
        <author>Thomas J. Richner</author><author>Martynas Dervinis</author><author>Brian Nils Lundstrom</author>
        <description><![CDATA[The brain is a highly recurrent, nonlinear network hypothesized to remain near the edge of chaos for optimal performance. Excitation and inhibition must be balanced precisely within every neuron to ensure a consistent level of dynamical stability and rich dynamics during the transition to chaos. However, analysis of biologically realistic synaptic weight matrices suggests that sparsity and low-dimensional structure interact such that there is no known synaptic balancing rule that constrains the stability (i.e., eigenvalues) of the network while also preserving computationally useful, low-dimensional structure. Further, even if a network were well-balanced, external stimuli interact with the nonlinear activation functions to unbalance the network in real time. Therefore, the brain must utilize dynamic, rather than static, mechanisms to actively regulate its level of stability. We propose that two specific adaptation mechanisms, spike frequency adaptation (SFA) and short-term synaptic depression (STD), continuously modulate the effective connectivity, keeping the brain near the edge of chaos and reducing dynamical fluctuations caused by stimuli. This theoretical framework links intrinsic and synaptic negative feedback mechanisms to network-level dynamics. This offers an explanation of why data-driven modeling of human brain signals, an exciting and useful method in epilepsy and anesthesiology research, seems to require linear time-varying (LTV) models that are refit every half second: difficult-to-observe adaptation processes interact with nonlinearities to make connectivity effectively dynamic at the macroelectrode scale. We suggest that compromised adaptation may underlie neurological conditions characterized by altered excitability, and that targeted brain stimulation could be used to probe the regulatory action of adaptation.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fncom.2026.1834521</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fncom.2026.1834521</link>
        <title><![CDATA[Editorial: Advancements in neural coding: sensory perception and multiplexed encoding strategies]]></title>
        <pubDate>Wed, 08 Apr 2026 00:00:00 GMT</pubDate>
        <category>Editorial</category>
        <author>Mohammad Amin Kamaleddin</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fncom.2026.1805106</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fncom.2026.1805106</link>
        <title><![CDATA[Simulated target search by bats using biomimetic SCAT biosonar model]]></title>
        <pubDate>Wed, 08 Apr 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>James A. Simmons</author><author>Prithvi Thakur</author><author>Ashok Ragavendran</author><author>Chen Ming</author><author>Andrea Megela Simmons</author>
        <description><![CDATA[Echolocating big brown bats broadcast short, wideband ultrasonic FM pulses for foraging and navigation. These broadcasts contain frequencies from 100 to 20 kHz (wavelengths 0.34–1.7 cm). Bats perceive target distance by measuring the time delay between the outgoing pulse and the returning echo. Acuity of this delay perception depends on the frequency content of echoes and the associated microsecond-level coherence between neural representations of the 1st and 2nd harmonic frequencies. Bats perceive target shape by estimating differences in the delay of mini-echoes from different reflecting points, or glints, within the target. A matched-filter receiver would register glints as prominent peaks in the pulse-echo cross-correlation output, but in bats the overlapping glint reflections mix together to create echo interference patterns that are transposed back into delay estimates. The process is modeled as spectrogram correlation and transformation (SCAT). The first, nearest glint is registered by echo delay itself, but subsequent glints are extracted from the nulls in the interference spectrum. Here, the SCAT receiver was evaluated for its ability to locate targets with a specific glint spacing in the 2D range/cross-range plane while rejecting other targets with larger or smaller spacings.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fncom.2026.1796308</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fncom.2026.1796308</link>
        <title><![CDATA[Replication challenges in linking personality to resting-state functional connectomics]]></title>
        <pubDate>Tue, 07 Apr 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Nikola Jajcay</author><author>David Tomeček</author><author>Iveta Fajnerová</author><author>Jan Rydlo</author><author>Jaroslav Tintěra</author><author>Jiří Horáček</author><author>Jiří Lukavský</author><author>Jaroslav Hlinka</author>
        <description><![CDATA[An increasing number of studies are currently focusing on “personality neuroscience,” a term denoting the research aimed at neuroimaging correlates of inter-individual temperament and character variability. Among other methods, a graph theoretical analysis of the functional connectivity in resting-state functional magnetic resonance imaging data was applied in a study by Gao et al., reporting novel functional connectivity correlates of personality traits. The current paper presents a conceptual replication of the results of this study and discusses the related challenges, including an extension of the original statistical methods in order to illustrate the effect of the multiple comparison problem. Five personality dimensions were obtained using the revised “Big Five” Personality Inventory, including scores of Extraversion and Neuroticism covered in the original paper. Using a larger sample (84 subjects) with adequate statistical power (ranging from 0.75 to 0.95 across analyses), we failed to replicate any of the nine specific neuroimaging correlates of personality presented by Gao et al. While acknowledging differences in the experimental procedures, we discuss that the lack of replication might be caused by the relatively liberal control of false positives in the original study. Indeed, the original testing scheme leads to an expected count of about 10 false positive observations among all tests; applying this scheme to our data, we observed a similar number of positive tests, albeit for different relations. No significant correlations were found in our data when standard family-wise error control was applied. These results illustrate the importance of combining exploration with independent validation, the use of large datasets, and appropriate control of the multiple comparison problem in order to prevent false alarms in research into neural substrates of personality differences. Importantly, our findings do not disprove the existence of a link between personality and the brain's intrinsic functional architecture, but rather suggest that such a link might be even more subtle and elusive than previously reported.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fncom.2026.1780552</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fncom.2026.1780552</link>
        <title><![CDATA[Designing implicit population learners: a permutation-equivariant state space approach for brain disease diagnosis]]></title>
        <pubDate>Fri, 27 Mar 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Chuan Yang</author>
        <description><![CDATA[Introduction: Group-aware learning has recently emerged as a promising paradigm for neuroimaging-based disease diagnosis, as population-level interactions can provide complementary information beyond individual imaging features. However, most existing approaches rely on explicitly constructed graphs, which introduce non-trivial design choices, scalability limitations, and sensitivity to graph topology. By incorporating the design philosophy of participatory interaction, we propose IP-Mamba, a scalable and memory-efficient framework tailored for neuroimaging cohorts that models implicit population interactions without the computational burden of explicit graph construction. Methods: IP-Mamba treats a mini-batch of subjects as an unordered set and employs a bidirectional Mamba-based sequence modeling mechanism to capture latent inter-subject dependencies. To address the inherent order sensitivity of sequence models, we introduce a Shuffle Consistency Strategy, which promotes permutation equivariance under random permutations of subject order, thereby aligning the model behavior with the clinically relevant, set-based nature of population data. This design enables efficient implicit hypergraph modeling while maintaining linear computational complexity with respect to the population size. We evaluate IP-Mamba on the OASIS-1 dataset, focusing on the binary classification of Alzheimer’s disease (Normal Controls vs. Abnormal) as an early clinical screening task. To address severe class imbalance and ensure diagnostic stability, we implement a Contextual Population Support Set inference mechanism coupled with a robust hybrid SVM decision layer. Results: Experimental results demonstrate that IP-Mamba achieves a balanced accuracy of 87.84% and maintains a high sensitivity (recall) of 89% for the minority disease class. Compared to conventional 3D CNNs and Transformer-based baselines, IP-Mamba provides highly competitive diagnostic robustness while maintaining efficient linear O(N) memory scaling without the quadratic computational bottlenecks typical of graph-based attention networks. Discussion: Comprehensive ablation studies further confirm the necessity of bidirectional modeling and shuffle consistency regularization. Overall, IP-Mamba offers a principled, memory-efficient alternative to explicit graph-based methods, providing a scalable solution for population-aware neuroimaging analysis under imbalanced clinical settings.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fncom.2026.1826791</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fncom.2026.1826791</link>
        <title><![CDATA[Editorial: Passive brain-computer interfaces: moving from lab to real-world application]]></title>
        <pubDate>2026-03-25T00:00:00Z</pubDate>
        <category>Editorial</category>
        <author>Vincenzo Ronca</author><author>Luca Longo</author><author>Rossella Capotorto</author><author>Pietro Aricò</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fncom.2026.1781080</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fncom.2026.1781080</link>
        <title><![CDATA[NeuralVisionNet: a probabilistic neural process model for continuous visual anticipation]]></title>
        <pubDate>2026-03-24T00:00:00Z</pubDate>
        <category>Brief Research Report</category>
        <author>Han He</author><author>Ruinan Chen</author><author>Yixiang Wang</author><author>Xia Chen</author>
        <description><![CDATA[The ability to anticipate future events continuously is a hallmark of biological vision, yet standard deep learning models often struggle with long-term coherence due to the rigid discretization of time. In this paper, we propose NeuralVisionNet, a probabilistic framework that models visual anticipation as a continuous generative process, drawing inspiration from the predictive coding mechanisms of the hippocampal-entorhinal circuit. Our architecture combines hierarchical Video Swin Transformers with Attentive Neural Processes, employing a novel grid-like coding scheme to represent spatiotemporal dynamics as a continuous function rather than a fixed sequence of frames. Furthermore, we introduce a variational global latent variable to encode the “event gist,” ensuring semantic consistency over extended horizons. Extensive evaluations on the KTH, Human3.6M, and UCF101 benchmarks demonstrate that NeuralVisionNet significantly outperforms state-of-the-art stochastic baselines in perceptual quality (FVD) and structural fidelity (SSIM), offering a robust computational proof-of-concept for continuous, bio-inspired visual forecasting.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fncom.2026.1784913</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fncom.2026.1784913</link>
        <title><![CDATA[Pre-movement EEG microstates reflect intended lifted load of volitional movement]]></title>
        <pubDate>2026-03-23T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Rohit Kumar Yadav</author><author>Sutirtha Ghosh</author><author>Lalan Kumar</author><author>Shubhendu Bhasin</author><author>Sitikantha Roy</author><author>Ratna Sharma</author><author>Suriya Prakash Muthukrishnan</author>
        <description><![CDATA[Introduction: Load estimation is an essential parameter for assistive robotic control in rehabilitation. The high temporal resolution of electroencephalography (EEG) makes it well suited to resolving the temporal dynamics of movement intention and planning. The quasi-stable scalp electrical potential topographies represented by EEG microstates could capture the real-time information processing in the brain for controlling assistive devices. We hypothesized that the EEG microstates preceding movement could reflect increasing load during a biceps curl movement. Methods: Ten healthy participants performed biceps curl movements while their brain activity and muscle activation were recorded using EEG and EMG. Results: Eight microstate maps were found to represent the functional brain state before the movements. Two pre-movement microstate maps were found to reflect the load increments. The source maxima of these two reflective microstate maps were localized to the right insula and cingulate gyrus. Discussion: Our results imply that the load increments of volitional movement could be reflected by the pre-movement EEG microstates.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fncom.2026.1778902</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fncom.2026.1778902</link>
        <title><![CDATA[The two dragons of cognition: recursive condensation for predictive processing]]></title>
        <pubDate>2026-03-23T00:00:00Z</pubDate>
        <category>Hypothesis and Theory</category>
        <author>Xin Li</author>
        <description><![CDATA[Computation separates time from space: nondeterministic problems are exponential in time (the “Time Dragon”) but polynomially simulable in space (the “Space Dragon”), as formalized by Savitch's theorem (NPSPACE⊆PSPACE). We propose that the brain physically instantiates this theorem through Recursive Condensation, a topological mechanism that converts intractable high-dimensional search into efficient low-dimensional navigation. Drawing on Urysohn's Lemma, we demonstrate that separability is a property of connectivity, not volume; a stable decision boundary exists independent of ambient dimension provided the underlying manifolds are topologically disjoint. To manufacture this disjointness, the cortex employs a parity alternation strategy: it alternates between odd-parity metric expansion (exploratory search) to untangle local geometry, and even-parity topological contraction (closure/condensation) to lock in validated invariants. This cycle acts as a biological “Savitch Machine,” mediating a Topological Trinity Transformation (TTT), Search→Closure→Navigation, that compiles high-entropy exploration paths into low-energy quotient tokens. Under Memory-Amortized Inference (MAI), the cortex slays the Space Dragon by collapsing vast state spaces into compact metric singularities, and tames the Time Dragon by memoizing these traversals as structural priors. Evolution's “cheat code,” linear cortical growth yielding exponential cognitive gain, emerges as a physical law of topological inference: exponential search in time becomes polynomial reuse in space via recursive metric collapse.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fncom.2026.1682176</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fncom.2026.1682176</link>
        <title><![CDATA[Wave turbulence and cortical dynamics]]></title>
        <pubDate>2026-03-20T00:00:00Z</pubDate>
        <category>Hypothesis and Theory</category>
        <author>Gerald K. Cooray</author>
        <description><![CDATA[Cortical activity recorded through EEG and MEG reflects complex dynamics that span multiple temporal and spatial scales. Spectral analyses of these signals consistently reveal power-law behavior, a hallmark of turbulent systems. In this paper, we derive a kinetic equation for neural field activity based on wave turbulence theory, highlighting how quantities such as energy and pseudo-particle density flow through wave-space (k-space) via direct and inverse cascades. We explore how different forms of nonlinearity, particularly 3-wave and 4-wave interactions, shape spectral features, including harmonic generation, spectral dispersion, and transient dynamics. While the observed power-law decays in empirical data are broadly consistent with turbulent cascades, variations across studies, such as the presence of dual decay rates or harmonic structures, point to a diversity of underlying mechanisms. We argue that although no single model fully explains all spectral observations, key constraints emerge: namely, that cortical dynamics exhibit features consistent with turbulent wave systems involving both single and dual cascades and a mixture of 3- and 4-wave interactions. This turbulence-based framework offers a principled and unifying approach to interpreting large-scale brain activity, including state transitions and seizure dynamics.]]></description>
      </item>
      </channel>
    </rss>