<?xml version="1.0" encoding="utf-8"?>
    <rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
      <channel>
        <title>Frontiers in Neuroscience | Perception Science section | New and Recent Articles</title>
        <link>https://www.frontiersin.org/journals/neuroscience/sections/perception-science</link>
        <description>RSS Feed for Perception Science section in the Frontiers in Neuroscience journal | New and Recent Articles</description>
        <language>en-us</language>
        <generator>Frontiers Feed Generator, version 1</generator>
        <pubDate>13 May 2026 14:22:40 +0000</pubDate>
        <ttl>60</ttl>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2026.1816455</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2026.1816455</link>
        <title><![CDATA[Distributed cortico-subcortical networks enable robust speech state detection from sparse intracranial recordings]]></title>
        <pubDate>08 May 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Chen Feng</author><author>En Zhang</author><author>Yifei Jia</author><author>Zhoule Zhu</author><author>Junming Zhu</author><author>Di Wu</author><author>Kedi Xu</author>
        <description><![CDATA[Introduction: Accurate and reliable detection of speech state transitions is a prerequisite for practical speech brain–computer interfaces (BCIs). While cortical language areas have been extensively studied, it remains unclear whether speech onset information is exclusively localized to these regions or distributed across a broader cortico-subcortical network. Here, we investigated the feasibility of decoding speech state transitions using sparse stereo-electroencephalography (SEEG) recordings that sample both cortical and subcortical structures. Methods: Four Mandarin-speaking epilepsy patients undergoing clinical SEEG monitoring performed a sentence-reading task. Neural signals were segmented and labeled as rest or speech based on acoustic onset. A convolutional neural network was trained to classify speech states using broadband or high-gamma features derived from different anatomical channel subsets. We further evaluated continuous decoding performance, model robustness to channel dropout, and the specific contributions of different brain regions. Results: Speech state decoding accuracy exceeded chance level (50%) in all participants, with peak single-participant accuracies surpassing 90%. Models integrating both cortical and subcortical signals generally outperformed those restricted to a single anatomical domain. Notably, broadband signals yielded higher classification accuracy than high-gamma features. In continuous decoding simulations, performance remained above chance, although reduced relative to discretized evaluation. Crucially, decoding accuracy was robust to random channel reduction (up to 50%) and remained above 70% even after excluding classical speech-related cortical regions. Contribution analyses indicated participant-specific patterns of model sensitivity, with relatively higher contributions observed in frontal regions and the thalamus in multiple participants. Discussion: These findings support the hypothesis that speech state information is represented in a distributed cortico-subcortical network rather than being confined to canonical language areas. The robustness of decoding performance despite channel reduction and regional exclusion suggests that sparsely sampled SEEG data can effectively drive speech detection modules. This study demonstrates the feasibility of utilizing deep brain recordings for speech BCIs, offering a pathway toward more stable and generalized implantable systems. Moreover, such autonomous speech state detection may also serve as an ethical safeguard, ensuring that neural language decoding is activated only during intended communicative acts.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2026.1844642</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2026.1844642</link>
        <title><![CDATA[A multichannel MEG time–frequency analysis framework for detecting stage-specific effects of spatial distraction in visual-spatial working memory]]></title>
        <pubDate>08 May 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Zhengchen Li</author><author>Qian Liang</author><author>Wuqiang Xiao</author><author>Tao Li</author><author>Zhilin Chen</author><author>Xiaoshun Tang</author><author>Yetong Ouyang</author><author>Zhexue Huang</author><author>Limin Sun</author><author>Xiaohui Tang</author><author>Xijin Wang</author>
        <description><![CDATA[Introduction: Spatial distraction can disrupt visual-spatial working memory (VSWM), but its stage-dependent effects on multichannel neural dynamics remain insufficiently characterized. This study presents a multichannel magnetoencephalography (MEG) time–frequency analysis framework to detect stage-specific oscillatory responses to spatial distraction during a VSWM task. Methods: MEG signals were recorded from healthy participants under Distractor and No-distractor conditions and analyzed across encoding, maintenance, and retrieval/decision epochs. Time–frequency power was estimated in the delta, theta, alpha, beta, and gamma bands, and condition differences were evaluated using sensor-level spatiotemporal cluster-based permutation testing and Bonferroni correction within each predefined epoch. Results: The proposed analysis revealed a clear stage-specific pattern, with the most prominent modulation occurring during maintenance. Specifically, distraction induced robust and sustained increases in theta-, alpha-, and beta-band power during the retention interval (all cluster-level p < 0.01). Theta activity increased rapidly after maintenance onset and remained elevated throughout the full maintenance period over bilateral temporal and widespread parieto-occipital sensors, while alpha and beta enhancements also showed temporally continuous and spatially stable patterns across widespread sensor networks. Discussion: These findings highlight sustained large-scale oscillatory modulation as a key neural signature of distraction during mnemonic maintenance. The study provides an interpretable multichannel signal-analysis perspective on distraction effects in working memory and offers a practical framework for stage-resolved analysis of brain dynamics in cognitive tasks.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2026.1829021</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2026.1829021</link>
        <title><![CDATA[A game-theoretic framework for multimodal information utilization under heterogeneous processing environments in neuroscience and perception science]]></title>
        <pubDate>01 May 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Zhanhong Cui</author><author>Kai Li</author>
        <description><![CDATA[Multimodal data integration is increasingly central to neuroscience and perception science, where heterogeneous signals such as behavioral responses, sensory inputs, electrophysiological recordings, neuroimaging measurements, and computational representations must be jointly interpreted. Against this background, a core theoretical problem remains open: under what heterogeneous processing conditions does enhanced multimodal information utilization produce meaningful gains, when does it become strategically necessary, and when does it generate only limited benefits relative to its cost? To address this problem, this study develops a conceptual game-theoretic framework in which information utilization is treated not as a universally beneficial technical upgrade, but as a conditional strategic choice shaped by signal heterogeneity, information asymmetry, integration cost, and differential decision influence across actors. Within this framework, we compare three endogenous strategic profiles—no enhanced information utilization, unilateral information enhancement, and bilateral information enhancement—across multiple heterogeneous environments. The analysis shows that the value of multimodal information utilization is fundamentally environment-dependent. In highly homogeneous environments, additional information processing yields little marginal benefit and is therefore not sustained in equilibrium. In moderately heterogeneous environments, however, multimodal information utilization emerges as a strategically necessary response because it reduces mismatch, improves alignment, and stabilizes decision outcomes. In more asymmetric environments, stronger decision agents capture a disproportionate share of the gains from enhanced information utilization and increasingly rely on differentiated strategic responses, whereas weaker agents adopt more defensive and uniform strategies.
In highly dominated environments, the marginal value of additional information utilization declines again because structural dominance itself already secures most attainable advantages. These findings contribute to multimodal neuroscience and perception science by clarifying that the consequences of information utilization depend not only on fusion efficiency, but also on environmental structure, asymmetry, and the distribution of strategic power.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2026.1780980</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2026.1780980</link>
        <title><![CDATA[Rhythmic visual stimulation enhances visual search via occipito-parietal alpha modulation: an electroencephalographic study]]></title>
        <pubDate>15 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Hongwei Wang</author><author>Suya Bao</author><author>Zihan Gang</author><author>Bo Gao</author><author>Wenliang Fu</author><author>Yuhao Chi</author><author>Mingzhe Zhang</author><author>Yue Wu</author><author>Haowei Wu</author><author>Huan Niu</author><author>Chao Zhang</author><author>Donggang Xu</author><author>Yongcong Shao</author><author>Weiwei Xing</author>
        <description><![CDATA[Introduction: As a non-invasive neuromodulation technique, visual flicker entrainment has demonstrated considerable potential in enhancing basic visual perception; however, the neurophysiological mechanisms underlying these effects remain unclear. This study investigated whether rhythmic visual stimulation at individualized alpha frequencies can improve low-contrast visual search performance by selectively modulating alpha-band neural oscillations. Methods: Forty-three healthy male participants completed a low-contrast visual search task under two conditions: personalized rhythmic flicker and arrhythmic (random) flicker. Behavioral performance was evaluated using reaction time, accuracy, and perceptual sensitivity. Simultaneously, high-density electroencephalographic data were recorded. Neural activity was quantified using power spectral density analysis across delta, theta, and beta frequency bands. Neural oscillatory characteristics were compared across prefrontal, central, parietal, and temporal areas under different flicker conditions. Results: Behaviorally, performance under the rhythmic flicker condition was significantly enhanced relative to that under the random flicker condition, as reflected by significantly reduced reaction times. Electrophysiologically, rhythmic flicker elicited a significant increase in overall alpha power (F(1, 42) = 6.90, p = 0.012, ηp² = 0.14). Critically, this effect was region-specific, with a significant Condition × Region interaction (F(3, 126) = 7.83, p < 0.001). Alpha power in the occipital region was significantly higher during rhythmic flicker compared to arrhythmic flicker (mean difference = 0.51 μV², p = 0.007, uncorrected). Analysis of the parietal region using the Wilcoxon signed-rank test revealed a significant moderate increase (Z = 2.15, p = 0.031, uncorrected). No significant differences between conditions were found in the frontal or temporal regions (all ps > 0.05). Additionally, a significant Region × Electrode interaction was observed (F(6, 252) = 12.83, p < 0.001, ηp² = 0.21), indicating that the distribution of alpha power across electrodes differed by brain region. Furthermore, enhanced parietal alpha power was significantly correlated with reduced reaction time (Pearson’s r = −0.35, p = 0.021). By contrast, no significant modulation by rhythmic stimulation was observed in the delta, theta, or beta bands (all Condition main effects and Condition × Region interactions, p > 0.05). Conclusion: Individualized alpha-frequency visual flicker entrainment effectively enhances performance in male participants on complex visual search tasks, with the behavioral benefits mediated by selective modulation of neural oscillations in the parietal alpha band. These findings provide mechanistic electrophysiological evidence that rhythmic stimulation improves visual cognition by modulating frequency- and region-specific neural dynamics.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2026.1815538</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2026.1815538</link>
        <title><![CDATA[Decoding acupoint specificity: from neural patterns to bodily maps]]></title>
        <pubDate>13 Apr 2026 00:00:00 +0000</pubDate>
        <category>Opinion</category>
        <author>Da-Eun Yoon</author><author>Yeonhee Ryu</author><author>Younbyoung Chae</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2026.1788255</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2026.1788255</link>
        <title><![CDATA[Continuous attractor dynamics in spatial navigation: from population geometry to flexible computation]]></title>
        <pubDate>08 Apr 2026 00:00:00 +0000</pubDate>
        <category>Review</category>
        <author>Yani Chen</author><author>Mu Hua</author><author>Xuelong Sun</author><author>Jigen Peng</author>
        <description><![CDATA[A central computational problem in spatial navigation is how spatial representations remain stable under noise and uncertainty while reliably updating estimates of continuous variables such as head direction and position, which rely on the head-direction system and the grid-cell system in the entorhinal cortex, respectively. The two systems demonstrate strong population-level dynamics, suggesting a potential framework to explain this central problem of spatial representation. Currently, this framework combines continuous attractor networks and neural field theories into a unified perspective, in which population activity is described as the evolution of continuous variables on a low-dimensional attractor manifold, together with the selective instantiation of these dynamics across symmetry-related or context-dependent subspaces. From this viewpoint, a key question is how different sources of information, such as self-motion, sensory cues and environmental structure, interact with attractor dynamics to regulate the evolution and stability of population states. Specifically, external inputs can stabilize attractor states by anchoring them to landmarks; intrinsic network connectivity, symmetry, and multi-timescale dynamics determine whether an attractor is stable and whether it supports continuous motion; environmental boundaries and geometric constraints can systematically shape the local geometry of spatial activity patterns; direction- or context-dependent signals may selectively recruit neuronal subpopulations with specific tuning preferences; and the cross-level organization of attractor dynamics can enable a unified representational and control framework from individual decision-making to collective behavioral organization. Through the joint action of these mechanistic dimensions, continuous attractor representations are able to support the core computations required for navigation.
More broadly, this perspective provides a theoretical foundation for understanding how continuous spatial representations are computed, read out, and flexibly manipulated to support planning and behavioral control.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2026.1702124</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2026.1702124</link>
        <title><![CDATA[Facial micro-movements as a proxy of increasingly erratic heart rate variability while experiencing pressure pain]]></title>
        <pubDate>01 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Elizabeth B. Torres</author><author>Mona Elsayed</author>
        <description><![CDATA[Introduction: The sensation of pain varies from person to person. These patterns of individual variation are difficult to capture using coarse subjective self-reports. However, they are important when prescribing therapies and tailoring them to each person’s own sensations. Pain can be experienced differently by the same person and can fluctuate based on context; yet, most analyses treat the problem with a one-size-fits-all model. Methods: In this work, we introduce a series of assays to assess pressure pain across tasks with different motoric and cognitive demands, in relation to a resting state. In a cohort of healthy individuals, we examine pain-free vs. pain states at rest, during drawing with heavy cognitive demands, during pointing to a visual target, and during a grooved peg task, such as inserting a grooved key into a matching keyhole. We adopt a standardized data type called micro-movement spikes (MMS) to characterize the biorhythmic activities of facial micro-expressions and the micro-fluctuations in the heart’s inter-beat interval (IBI) timings. Results: Using the MMS peaks, we find that the continuous Gamma family of probability distribution functions best fits the frequency histograms of both the facial and heart data. Furthermore, we find that the Gamma shape and scale parameters in both signals span a scaling power law whereby, as the noise-to-signal ratio (Gamma scale parameter) increases, so does the randomness of the stochastic process. We find that as the heart IBI becomes more erratic (noisier and more random), the facial ophthalmic region also increases in noise and randomness, with higher linear correlation for tasks requiring haptic feedback (R² = 0.84) and lower correlation for tasks requiring greater cognitive and memory loads (R² = 0.77). Conclusion: Increases in transfer entropy show that recent past activity (~167 ms back) of the heart IBI and facial data combined lowers the uncertainty in predicting the present ophthalmic facial activity, suggesting that this facial region may serve as a proxy for the increasingly dysregulated heart. These results have implications for the detection and monitoring of pressure pain.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2026.1705922</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2026.1705922</link>
        <title><![CDATA[The internal representation of fractions: component, holistic or hybrid?]]></title>
        <pubDate>20 Mar 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Weimin Lin</author><author>Yun Pan</author><author>Jun Zhu</author><author>Liangzhi Jia</author><author>Huanyu Yang</author><author>Yajie Bi</author><author>Fangwen Yu</author><author>Di Zhang</author>
        <description><![CDATA[This study aimed to differentiate among componential, holistic, and hybrid accounts of fraction representation through three behavioral experiments. Using the stimulus detection paradigm, we systematically tested the competing theoretical hypotheses of spatial attention mapping in fraction processing. Experiments 1 and 2 found that fraction numerical processing could effectively regulate spatial attention under conditions of consistent numerical information. The crucial Experiment 3, which employed a conflict design (where holistic value and component size information were contradictory), revealed no significant spatial attention effect. Integrating these findings, we propose that fraction processing simultaneously activates both holistic and component information, with its behavioral output subject to immediate evaluation and regulation: automatic expression is permitted when information is consistent, whereas inhibitory control is triggered in the presence of conflict. These results reveal the dynamic dual-processing of fractions in the domain of spatial attention and provide key behavioral evidence for understanding the interaction between rapid extraction and controlled processes in mathematical cognition.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2026.1781002</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2026.1781002</link>
        <title><![CDATA[The effects of mirror visual feedback involved network priming on embodiment perception in healthy subjects: a proof-of-concept study]]></title>
        <pubDate>12 Mar 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Yongxin Luo</author><author>Lin Cai</author><author>Juan Li</author><author>Jianwei Lu</author><author>Li Ding</author>
        <description><![CDATA[Introduction: Mirror visual feedback (MVF) efficacy varies with individual embodiment perception. Objective: The study aimed to investigate the behavioral effects of priming via the rubber hand illusion (RHI) and action observation (AO) on embodiment perception during MVF. Methods: Twenty healthy participants were recruited. The experiment contained three rounds: MVF, RHI-MVF, and AO-MVF. First, all participants completed the MVF round; after 24 hours, they received the RHI-MVF and AO-MVF rounds in a random order with an interval of 24 hours. Each round comprised two sessions: a simple motor task (SMT) session and an objective-based task (OBT) session. Each session contained 5 tasks, each repeated 10 times at a rate of one repetition every 2 seconds. Results: The results showed that priming of networks overlapping with MVF through RHI/AO paradigms could enhance the intensity of embodiment perception. Machine learning analysis further revealed a stronger predictive association between RHI and heightened embodiment perception compared to AO. Additionally, we found that OBT facilitated embodiment elicitation compared to SMT. Conclusion: Our findings provide insights into modulating embodiment perception during MVF paradigms. These preliminary results might benefit future investigations of therapeutic efficacy in neuro-rehabilitation. Clinical trial registration: Identifier ChiCTR2500102438.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2026.1759372</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2026.1759372</link>
        <title><![CDATA[Cosmetic after-feel modulates brain activity in sensory and reward networks: an fMRI study]]></title>
        <pubDate>10 Mar 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Audrey Maniere</author><author>Arnaud Pêtre</author><author>Ron Kupers</author><author>Céline Manetta</author><author>Joan Attia</author><author>Eloïse Gerardin</author>
        <description><![CDATA[The affective dimensions of cosmetic textures were investigated using functional magnetic resonance imaging (fMRI) to examine how after-feel, defined as residual tactile sensations persisting on the skin after product application, modulates sensory and emotional processing. Twenty healthy women took part in three conditions: no cream (control), cream A, or cream B, differing only in emulsifier composition. A fixed amount of cream was applied to predefined areas of the left hand. After absorption, participants stroked these areas at a controlled speed. fMRI data were acquired during this self-touch task, preprocessed using a standardized pipeline, and analyzed using a general linear model. Results showed that the no-cream and cream B conditions primarily engaged primary somatosensory regions, consistent with basic tactile encoding. In contrast, cream A additionally recruited brain areas involved in affective and reward processing, including the orbitofrontal cortex, amygdala, and putamen, with key reward-related responses, notably within striatal and insular regions, showing a right-hemispheric dominance contralateral to the hand receiving the tactile input. This broader activation pattern suggests that specific cosmetic ingredients can enhance the emotional salience of after-feel, potentially through C-tactile afferent pathways mediating affective tactile signals. These findings reflect a hierarchical integration of tactile input, from sensory encoding to higher-order affective appraisal. They highlight the potential of cosmetic formulations to influence central touch representation beyond surface-level sensation. This proof-of-concept study offers novel insights into how the sensory and emotional qualities of cosmetic products take shape in the brain, providing a neuroscientific foundation for the development of emotionally engaging textures.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2026.1816472</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2026.1816472</link>
        <title><![CDATA[Correction: Long-term relief of refractory trigeminal neuropathy using high-frequency spinal cord stimulation at the cervicomedullary junction: a 6-year follow-up case report]]></title>
        <pubDate>10 Mar 2026 00:00:00 +0000</pubDate>
        <category>Correction</category>
        <author>Daniela Floridia</author><author>Rossana Panasiti</author><author>Anna Anselmo</author><author>Francesco Corallo</author><author>Maria Pagano</author><author>Irene Cappadona</author><author>Salvatore Leonardi</author><author>Rocco S. Calabrò</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2026.1738646</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2026.1738646</link>
        <title><![CDATA[Association of brain cortical changes with efficacy of treatment in patients with chronic neck and shoulder pain: a longitudinal surface-based morphometry study]]></title>
        <pubDate>03 Mar 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Zhiqiang Qiu</author><author>Jinming Tong</author><author>Maojiang Yang</author><author>Libing He</author><author>Hongjian Li</author><author>Tianci Liu</author><author>Xiaoxue Xu</author>
        <description><![CDATA[Objective: Studies have shown that the pathophysiological mechanisms of chronic neck and shoulder pain (CNSP) involve not only local spinal and neural abnormalities but also abnormal brain cortical structures related to pain modulation. However, it remains unclear how these cortical alterations may influence treatment efficacy. Materials and methods: Thirty-one CNSP patients and 30 age- and gender-matched healthy controls (HCs) underwent 3D high-resolution structural magnetic resonance imaging (MRI) scans. The CNSP patients underwent a second MRI scan 3 months after receiving minimally invasive interventional treatment. Longitudinal changes in cortical thickness (CT), fractal dimension (FD), gyrification index (GI), and sulcal depth (SD) were studied before and after treatment in the CNSP patients, and partial correlation analyses with treatment efficacy were conducted. Results: Compared to healthy controls, CNSP patients at baseline exhibited significantly reduced CT in the bilateral precentral gyrus, superior frontal gyrus, lingual gyrus, left paracentral lobule, fusiform gyrus, superior temporal gyrus, supramarginal gyrus, and right precuneus. Deeper SD was observed in the bilateral central sulcus, anterior and posterior cingulate cortices, insula, lateral orbitofrontal cortex (OFC), and left dorsolateral prefrontal cortex (DLPFC). Additionally, an increased GI was found in the bilateral lingual gyrus, left lateral OFC, anterior/posterior cingulate cortices, and right medial OFC. Three months after minimally invasive intervention, these morphological abnormalities showed widespread normalization. Correlation analyses revealed that higher baseline CT in the left precentral gyrus and paracentral lobule, lower baseline SD in the left cingulate cortex and central sulcus, and higher baseline GI in the right medial OFC were significant predictors of greater pain relief. Furthermore, the longitudinal restoration of CT in the left precentral gyrus and SD normalization in the left DLPFC and cingulate cortex were positively correlated with the reduction in VAS scores. Conclusion: This study identifies specific morphological alterations characterized by cortical thinning and increased sulcal depth in the sensorimotor cortex (precentral gyrus, paracentral lobule, central sulcus) and the pain modulation network (cingulate cortex, DLPFC, OFC) as key biomarkers for CNSP. The findings demonstrate that baseline structural integrity in these specific regions serves as a robust predictor of treatment efficacy. Moreover, the longitudinal structural recovery paralleling pain relief confirms the reversible nature of maladaptive neuroplasticity, highlighting CT in the precentral gyrus and SD in the DLPFC as critical indicators for evaluating chronic pain interventions.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2026.1754329</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2026.1754329</link>
        <title><![CDATA[Sensory deficiencies correlate with tau protein and dementia]]></title>
        <pubDate>03 Mar 2026 00:00:00 +0000</pubDate>
        <category>Mini Review</category>
        <author>Marina Avila-Villanueva</author><author>Félix Hernández</author><author>Jesús Avila</author><author>Germán Plascencia-Villa</author><author>George Perry</author>
        <description><![CDATA[Sensory decline is a common feature of aging and an early sign of a high risk of developing neurodegenerative diseases. Abnormal protein deposits of tau are also observed in sensory areas in early stages of Alzheimer’s disease and related dementia (ADRD), indicating that these two features are associated with common neuropathological changes in the affected brain areas. Alterations in taste and smell are evident in subjects with cognitive decline, and sensory decline in olfaction, vision, hearing (in early stages of degeneration), and even touch correlates with disease progression. Consequently, affected individuals may exhibit varying altered behaviors that emerge from the declined capability to process and perceive information, suggesting that differences in sensory perception of the environment may play a key role in explaining these behavioral variations in subjects with cognitive impairment. This commentary discusses some of the alterations in sensory functionality and how these could contribute to the development of neurodegenerative disorders, such as ADRD.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2026.1710656</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2026.1710656</link>
        <title><![CDATA[Mechanisms of postoperative anorexia in surgical patients: a narrative review]]></title>
        <pubdate>2026-03-02T00:00:00Z</pubdate>
        <category>Review</category>
        <author>Yanbo Sun</author><author>Zhichun Li</author><author>Ying Cai</author><author>Yunyun Cen</author><author>Yanli Li</author><author>Chengbin Li</author>
        <description><![CDATA[Postoperative anorexia is a highly prevalent condition among surgical patients, which exerts a profound impact on their recovery trajectories and nutritional status. The underlying mechanisms are complex and multifactorial, including neuroendocrine dysregulation, activation of inflammatory signaling pathways, and the interaction between psychological processes and pathological conditions. Emerging evidence underscores the significant role of altered hunger and satiety perception, cognitive modulation of food-related cues, and emotion-driven behavioral responses in the regulation of postoperative appetite. Despite these insights, there are currently no definitive targeted interventions available to effectively restore appetite in the postoperative setting. This narrative review summarizes recent advances in the understanding of appetite regulation, delineates key biological and psychosocial factors contributing to postoperative anorexia, systematically synthesizes current clinical assessment approaches, and discusses emerging therapeutic strategies. By integrating insights from physiology, cognition, and affective science, this narrative review seeks to provide a comprehensive understanding of the pathogenesis, assessment, and current therapeutic strategies of postoperative anorexia.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2026.1665633</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2026.1665633</link>
        <title><![CDATA[Long-term relief of refractory trigeminal neuropathy using high-frequency spinal cord stimulation at the cervicomedullary junction: a 6-year follow-up case report]]></title>
        <pubdate>2026-02-20T00:00:00Z</pubdate>
        <category>Case Report</category>
        <author>Daniela Floridia</author><author>Rossana Panasiti</author><author>Anna Anselmo</author><author>Francesco Corallo</author><author>Maria Pagano</author><author>Irene Cappadona</author><author>Salvatore Leonardi</author><author>Rocco S. Calabrò</author>
        <description><![CDATA[Chronic neuropathic pain profoundly impairs quality of life and often remains refractory to pharmacological or surgical management. Spinal cord stimulation (SCS) is considered a second-line therapy when conventional treatments fail. In this context, high-frequency spinal cord stimulation (HFSCS) targeting the cervicomedullary junction (CMJ) has emerged as a promising option for drug-refractory facial pain syndromes, including trigeminal neuropathy, though clinical evidence remains limited. We report the case of a 67-year-old woman who developed severe right-sided trigeminal neuropathic pain following petroclival meningioma surgery. After multiple unsuccessful interventions, she underwent implantation of a 10 kHz HFSCS system targeting the CMJ. An epidural lead was placed at the C1-C2 level and connected to an implantable pulse generator, delivering continuous stimulation. The procedure produced complete relief of paroxysmal electric shock-like pain and neurophysiological evidence of reduced trigeminal nociceptive activity. Analgesia was sustained for 6 years, with a transient relapse due to battery depletion, which resolved completely after generator replacement. These findings confirm the long-term efficacy and durability of CMJ-targeted HFSCS and highlight the importance of structured follow-up and device maintenance. HFSCS at the CMJ may represent a safe and durable therapeutic option for refractory trigeminal neuropathy, warranting validation through larger prospective studies.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2026.1798060</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2026.1798060</link>
        <title><![CDATA[Retraction: Correlation analysis between clinical effective emotional treatment and plasma N-methyl-D-aspartate receptor function-related indexes]]></title>
        <pubdate>2026-02-05T00:00:00Z</pubdate>
        <category>Retraction</category>
        
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2026.1666558</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2026.1666558</link>
        <title><![CDATA[Evaluating haptic experience using EEG and deep learning across multiple modalities: linking stimulus and self-reports]]></title>
        <pubdate>2026-01-30T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Haneen Alsuradi</author><author>Yonas Atinafu</author><author>Mohamad Eid</author>
        <description><![CDATA[Conventionally, evaluations of haptic interfaces have relied on self-reported assessments, which offer limited objectivity and can disrupt the user experience, making it challenging to design interfaces that dynamically adapt to users' cognitive state in real time. To overcome these limitations, cognitive haptic interfaces leverage neurophysiological measures such as EEG and deep learning to directly capture the brain's responses to haptic stimulation. A key challenge is how to label these neural responses: do we ground models in objectively controlled Physical Stimulation (PS) parameters, or in participants' Self-Reported (SR) perceptions? The goal of this work is not to demonstrate that EEG can reproduce subjective reports, but rather to systematically examine how neural responses relate to these two aspects of haptic experience by training deep learning models under both PS and SR labeling schemes. Here, we investigate how PS- versus SR-based labeling impacts model performance across four modalities: (i) delayed force-feedback (DFF), (ii) fingertip vibration feedback (FVF), (iii) upper-body vibration feedback (UVF), and (iv) fingertip thermal feedback (FTF). We evaluate three benchmarked deep learning architectures (ATCNet, EEG Inception, and EEG Conformer) on EEG data labeled according to both approaches. Across all modalities, PS-labeled models yield more stable and higher performance than SR-labeled models in a group-level leave-one-subject-out (LOSO) setting, with the largest gains at near-perceptual-threshold levels (e.g., mild thermal changes, moderate vibration intensities, borderline delay settings) where SR labels are most variable across individuals. 
Rather than aiming to replace self-reports, these results reveal when EEG-based models align more closely with the physical stimulation than with participants' reports and support using PS-trained decoders as a structured first-stage representation that can later be adapted with user-specific SR information.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2026.1731980</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2026.1731980</link>
        <title><![CDATA[Perceptual punctuation: fixational eye movements reveal segmentation of auditory streams]]></title>
        <pubdate>2026-01-23T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Vincenzo Rizzuto</author><author>Oren Kadosh</author><author>Roberto Montanari</author><author>Yoram Bonneh</author>
        <description><![CDATA[IntroductionPerception operates as rhythmically structured sampling in which temporal predictions determine when incoming signals are weighted. Fixational eye movements carry opposing consequences, enhancing acuity yet inducing brief peri-saccadic suppression, suggesting that their timing is paced by expected, salient rhythms. Auditory scenes can be parsed into competing streams that unfold over time. If fixation dynamics are shaped by temporal expectation, and auditory streaming imposes a percept-dependent temporal structure on otherwise identical acoustics, then fixational eye movements might provide a window into how listeners parse sound over time. We asked whether fixational eye movements reflect the perceived rather than the physical temporal organization of an ambiguous ABA– pattern.MethodsWhile listeners fixated and either attended High, Low, or All tones (Experiment 1, n = 15) or freely reported their percept (Experiment 2, n = 15), we recorded binocular eye position (500 Hz) and quantified microsaccade (MS) dynamics and eye-velocity spectra.ResultsAcross both experiments, eye-velocity spectra showed a percept-dependent redistribution between 2 and 4 Hz, with relative power shifting with the instructed/reported stream. A normalized 4–2 Hz index (ΔPSD) separated Low-tone from High-tone percepts across procedures. Time-resolved analyses further revealed within-trial waxing-and-waning of 2 vs. 4 Hz dominance, consistent with bistable fluctuations in maintaining a stream. Moreover, microsaccade reaction time (msRT), aligned to the onset of the sound sequence, differed significantly depending on the percept.DiscussionThese findings extend oculomotor inhibition beyond discrete events, positioning fixation dynamics as a sensitive, report-free marker of auditory scene organization. We discuss mechanistic links to temporal attention and active sensing, and implications for a multisensory timing framework.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2025.1710208</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2025.1710208</link>
        <title><![CDATA[Contextual cues shape facial emotion recognition: a combined behavioral and ERP study]]></title>
        <pubdate>2026-01-14T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Mónica Toro</author><author>Cristian Cortés-Rivera</author><author>Francisco Cerić</author><author>Juan Carlos Oliveros</author>
        <description><![CDATA[IntroductionBeing able to recognize the emotions in others is fundamental to social interaction, yet the precise temporal dynamics by which the brain integrates contextual cues with facial expressions remain unclear. This study used behavioral measures and event-related potentials (ERPs) to investigate how contextual congruency and emotional valence modulate facial emotion recognition in a neurotypical population.MethodsParticipants viewed emotional faces preceded by either congruent or incongruent bimodal cues, combining vocalizations and visual images.ResultsBehaviorally, participants responded faster and made fewer errors in congruent trials than in incongruent trials, indicating that context facilitates emotional processing. At the neural level, incongruent cues elicited a significantly larger P1 component, suggesting that the brain allocates increased early attentional resources to conflicting stimuli. Furthermore, the P3 component was significantly larger for negative stimuli compared to neutral ones, highlighting the role of emotional valence in later stages of cognitive processing.DiscussionTogether, these findings support a multi-stage model of emotional integration, where contextual incongruency impacts processing from early perceptual encoding to later cognitive evaluation. By integrating behavioral and neural evidence, this study clarifies the temporal course of contextual integration in multisensory emotion perception and provides new insights with implications for clinical and applied research.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2025.1605800</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2025.1605800</link>
        <title><![CDATA[Figure–ground relationship of voices in musical structure modulates reciprocal frontotemporal connectivity]]></title>
        <pubdate>2026-01-13T00:00:00Z</pubdate>
        <category>Brief Research Report</category>
        <author>Chan Hee Kim</author><author>Jeong-Eun Seo</author><author>Jaeho Seol</author><author>Chun Kee Chung</author>
        <description><![CDATA[When listening to polyphonic music, we often perceive a melody as the figure against the ground of accompanying sounds. However, with repeated exposure, this figure–ground relationship may naturally shift, allowing the melody to recede into the ground. In a previous study, we found a consistent pattern of frontotemporal connectivity for the “Twinkle, Twinkle, Little Star” (TTLS) melody in the headings of two Variations (II and IV) in Mozart's 12 Variations, K. 265, indicating that the TTLS melody, but not the different lower voices, was the figure. However, the frontotemporal connectivity pattern may change in the same phrases repeating in the two variations. In the current study, we examined how frontotemporal connectivity changes in the repeated phrases. The frontotemporal connectivity pattern between the two variations changed in the final phrase after repeated passages. This suggests that the shift in the figure–ground relationship persists, with the TTLS melody becoming less prominent while the lower voices become relatively more prominent. Additionally, frontotemporal connectivity was strongly correlated with temporofrontal connectivity in the opposite direction. Finally, our data indicate that TTLS melody-based and sensory-based processes, in response to a switched figure–ground relationship, are incorporated into the bidirectional connections between frontotemporal and temporofrontal connectivity. Our study highlights the brain's ability to reconfigure figure–ground relationships in the processing of musical voices.]]></description>
      </item>
      </channel>
    </rss>