<?xml version="1.0" encoding="utf-8"?>
    <rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
      <channel>
        <title>Frontiers in Cognition | Perception section | New and Recent Articles</title>
        <link>https://www.frontiersin.org/journals/cognition/sections/perception</link>
        <description>RSS Feed for Perception section in the Frontiers in Cognition journal | New and Recent Articles</description>
        <language>en-us</language>
        <generator>Frontiers Feed Generator,version:1</generator>
        <pubDate>Mon, 13 Apr 2026 22:56:57 +0000</pubDate>
        <ttl>60</ttl>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcogn.2026.1638501</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcogn.2026.1638501</link>
        <title><![CDATA[Gender identity impacts the perception of vocal congruence]]></title>
        <pubDate>Tue, 10 Mar 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Chiara De Livio</author><author>Claudia Mazzuca</author><author>Chiara Fini</author><author>Anna M. Borghi</author>
        <description><![CDATA[This study investigated vocal congruence, i.e., the alignment between self-voice perception and the sense of identity, across cisgender and transgender and gender non-conforming (TGNC) participants (N = 44) in three conditions: Silent Reading, Reading Aloud, and Listening to recorded speech. Results revealed that TGNC participants reported significantly lower vocal congruence than cisgender participants across all experimental conditions, with the starkest difference in conditions where auditory feedback was present. This experience of incongruence appears to be modulated by interoceptive sensibility and alexithymia, with TGNC individuals reporting lower interoceptive trust and higher levels of alexithymia. Emotional awareness was positively linked to inner-voice congruence in the TGNC group. Additionally, aspects related to gender-minority stress predicted lower congruence. These findings highlight the complex interplay between gender identity, interoception, emotion regulation strategies, and voice perception.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcogn.2025.1689600</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcogn.2025.1689600</link>
        <title><![CDATA[Bi-temporal processing in music notation reading: a theory linking prediction, memory, and automaticity]]></title>
        <pubDate>Fri, 06 Feb 2026 00:00:00 GMT</pubDate>
        <category>Hypothesis and Theory</category>
        <author>Karen L. Heath</author>
        <description><![CDATA[Reading music notation requires musicians to extract and interpret visual information in real time while simultaneously anticipating future performance actions. This dual engagement, in which one acts in the present while processing material to be performed in the future, suggests that music reading relies on a bi-temporal cognitive architecture. Grounded in this premise, this theoretical paper develops a model that integrates Hebbian learning and automaticity as core mechanisms supporting the simultaneous perceptual and anticipatory demands of notation-based music performance. A systematic review of neuroimaging studies involving music-reading tasks was conducted to evaluate current evidence on the neural correlates of notation processing. The results of the review showed that music reading engaged distributed cortical and subcortical networks, including regions commonly implicated in text reading, and recruited auditory-motor integration systems essential for music performance. However, most studies isolated single parameters of notation (e.g., pitch identification), thereby limiting ecological validity and constraining interpretations of how musicians process notation in real-world contexts that require concurrent multi-parameter integration. Complementary research on cognitive prediction, sensorimotor coupling, and perceptual-motor learning demonstrates that musicians employ a dual-pathway system of immediate perception and forward prediction, shaped by Hebbian synaptic strengthening and the development of automaticity through repeated procedural engagement. Synthesizing these findings, this article proposes a bi-temporal cognitive model of music-notation processing that accounts for the dynamic interplay between associative learning, predictive processing, and automated motor execution. The implications of this model for cognitive theory and music pedagogy are discussed, with recommendations for empirical approaches to test the bi-temporal framework and advance understanding of real-time cognitive coordination in music performance.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcogn.2025.1750627</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcogn.2025.1750627</link>
        <title><![CDATA[Editorial: Detrimental effects of hypoxia on brain and cognitive functions]]></title>
        <pubDate>Fri, 19 Dec 2025 00:00:00 GMT</pubDate>
        <category>Editorial</category>
        <author>Alberto Zani</author><author>Stephanie Otto</author><author>Terry McMorris</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcogn.2025.1715617</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcogn.2025.1715617</link>
        <title><![CDATA[The role of top-down and bottom-up factors in parafoveal reading]]></title>
        <pubDate>Tue, 02 Dec 2025 00:00:00 GMT</pubDate>
        <category>Brief Research Report</category>
        <author>Valentina Bandiera</author><author>Silvia Primativo</author><author>Roberta Daini</author><author>Marialuisa Martelli</author><author>Lisa S. Arduino</author>
        <description><![CDATA[Recently, we showed that when two words are presented simultaneously, one in the fovea and one in the parafovea, participants are more accurate and faster when the two words are semantically related. The present study confirmed and extended those results by using the same Rapid Parallel Visual Presentation (RPVP) paradigm while changing the relative proportion of unrelated vs. semantically related word pairs. Whereas in other studies semantically unrelated and related word pairs were equally represented (50% each), in the present study only 30% of word pairs were semantically related. Results again showed an advantage when the two words were semantically related; we interpret these findings in terms of automatic links between lexical/sublexical processing and semantic access.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcogn.2025.1439439</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcogn.2025.1439439</link>
        <title><![CDATA[Learning and teaching of fluent musical note recognition: the visual perceptual perspective]]></title>
        <pubDate>Mon, 01 Dec 2025 00:00:00 GMT</pubDate>
        <category>Review</category>
        <author>Yetta Kwailing Wong</author><author>Jiaqi Fion Fang</author>
        <description><![CDATA[Musical notation enables communication among composers, performers, music learners, and music lovers. However, the learning and teaching of fluent musical note recognition are often thought to be highly challenging. This paper aimed to summarize the current understanding of the development of musical note recognition, explain its pedagogical bottleneck, and propose a pedagogical tool to address this problem. A review of the psychology and neuroscience literature identified eight psychological factors associated with fluent recognition of musical notes at both behavioral and neural levels. Many of the identified factors involve specialized visual perceptual mechanisms that are automatic and implicit, operating without conscious effort. Since classroom teaching relies heavily on verbal explanation, which cannot efficiently address these visual perceptual mechanisms, musical note recognition becomes difficult to teach and learn. We propose that visual perceptual training can serve as an innovative pedagogical tool to efficiently relax the visual bottleneck and enhance fluency in recognizing musical notes. We discuss why it should work in theory, the empirical basis for its effectiveness, its advantages, and potential concerns about adoption of this tool by the music education community. In sum, visual perceptual training can directly facilitate the development of fluency in recognizing musical notes in an efficient and personalized manner. This will encourage music exposure, learning, and participation, and may therefore widely benefit the music learning community.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcogn.2025.1692578</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcogn.2025.1692578</link>
        <title><![CDATA[Cognitive rehabilitation among long COVID patients using vibratory and auditory treatment (VAT) is linked to BDNF]]></title>
        <pubDate>Thu, 20 Nov 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Abdullah Mosabbir</author><author>Jed A. Meltzer</author><author>Arkady Uryash</author><author>Erika L. Beroncal</author><author>Ana C. Andreazza</author><author>Lee Bartel</author>
        <description><![CDATA[Cognitive dysfunction occurs in around 40% of long COVID (LC) patients and in many cases appears second only to fatigue in prevalence. Vibratory and auditory treatment (VAT) within the gamma range has demonstrated improvements in symptoms associated with cognition and fatigue. In this open-label pilot study, we tested the effects of VAT on measures of cognition and fatigue in LC. Twenty patients were randomly divided into a treatment and a control group. Symptoms were monitored remotely through mobile apps and in-person visits before and after the treatment period. The treatment group received a take-home device delivering VAT at 40 Hz, used every day from Monday to Friday for 4 weeks (i.e., 20 sessions over 28 days), whereas the control group did not use any device but followed the same data collection procedures. This study found that after 4 weeks of VAT, participants with LC exhibited improved performance in selective attention and response inhibition, increased circulating brain-derived neurotrophic factor (BDNF), and a reduced resting heart rate. We propose that VAT may be a useful rehabilitative tool for LC as well as for other populations that seek improvements in cognition or general health but are immunologically or physically compromised.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcogn.2025.1561842</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcogn.2025.1561842</link>
        <title><![CDATA[A sequential model of two-choice intensity identification]]></title>
        <pubDate>Wed, 19 Mar 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Robert C. G. Johansson</author><author>Rolf Ulrich</author>
        <description><![CDATA[A model of perceptual decision-making in two-choice intensity identification tasks is advanced. The model assumes that sensory pathways encode the physical intensity of the stimulus in the firing rates of sensory afferents, characterized by exponentially distributed interarrival times. The decision-making process entails a sequential comparison of each interarrival time with memory traces from prior stimulus exposure. This yields a random walk process reminiscent of the two-choice RT model by Stone (1960), but with an additional stochastic element introduced by variable sampling times. The model provides a reasonable account of data garnered in a brightness identification task (Experiment 1), aligning with distributional RT statistics and intensity effects on mean RTs. However, several post hoc assumptions, such as variability and bias in the starting point of the random walk, are required to accurately predict error RT distributions, and these introduce problematic asymmetries in predicted error probabilities. Applying the model to a loudness identification task (Experiment 2) necessitated the additional assumption of variability in transduction rates to overcome challenges in accommodating longer RTs for errors compared to correct responses in this task.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcogn.2025.1565759</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcogn.2025.1565759</link>
        <title><![CDATA[Neonatal hypoxia: impacts on the developing mind and brain]]></title>
        <pubDate>Fri, 28 Feb 2025 00:00:00 GMT</pubDate>
        <category>Mini Review</category>
        <author>Nafiseh Shabani</author><author>Alice Mado Proverbio</author>
        <description><![CDATA[Neonatal hypoxic-ischemic encephalopathy (NHIE) is a critical condition with profound and lasting effects on brain development and function. This mini review examines the short- and long-term structural, cognitive, emotional, behavioral, and psychopathological outcomes associated with NHIE, highlighting its impact on neurodevelopment. NHIE is linked to structural abnormalities such as reduced white matter integrity, ventricular enlargement, and damage to key regions including the basal ganglia, hippocampus, and corpus callosum. These changes correlate with long-term impairments in cognition, memory, and motor skills, alongside elevated risks of neurodevelopmental disorders such as autism spectrum disorder (ASD) and attention deficit/hyperactivity disorder (ADHD). Behavioral and emotional challenges, including anxiety, depression, and mood instability, are also prevalent. This review underscores the significant and multifaceted impact of NHIE on neurodevelopmental and behavioral health, emphasizing the importance of developing methodologies to eliminate or minimize neonatal hypoxic states as much as possible.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcogn.2025.1533913</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcogn.2025.1533913</link>
        <title><![CDATA[Music as a social instrument: a brief historical and conceptual perspective]]></title>
        <pubDate>Mon, 24 Feb 2025 00:00:00 GMT</pubDate>
        <category>Mini Review</category>
        <author>Nicholas Bannan</author><author>Alan R. Harvey</author>
        <description><![CDATA[This article addresses the origins and purpose of communal music-making, including dance, and its role in human sociality. It accords special significance to the adapted nature of human vocalization, and the sensorimotor discrimination that allows the prediction and then generation of musically relevant, coordinated and simultaneous movements. Commencing with a historical survey of the development of ideas about the evolutionary importance of music in human social behavior, this mini-review then sets out to define and explore key issues involved in an evolutionary explanation. These include: acquisition and control of parameters required for vocal production (synchronization of pitch, timbre, duration and loudness); the exchange and transmission of pitched utterances in unison as well as in harmony; the roles of natural and sexual selection in shaping human musical abilities; the nature of cooperative behavior, and the consequences for social bonding of such interaction throughout life; transmission of such behaviors across generations, and the interaction between genes and culture that drives the evolution of complex social behavior in Homo sapiens. The article concludes with a brief review of current research that deals with contributory features of this field, especially in neuroscience which continues to provide important psychophysiological data that reinforces the long-held proposal that music has a key role in promoting cooperative, prosocial interactions leading to health and wellbeing over the human lifespan.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcogn.2025.1503028</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcogn.2025.1503028</link>
        <title><![CDATA[EEG as a neural measure of hypoxia-related impairment]]></title>
        <pubDate>Thu, 06 Feb 2025 00:00:00 GMT</pubDate>
        <category>Review</category>
        <author>Stephanie R. Otto</author><author>Cammi K. Borden</author><author>Daniel G. McHail</author><author>Kara J. Blacker</author>
        <description><![CDATA[Ambient oxygen decreases with increasing altitude, which poses a primary threat to aviators known as hypoxic hypoxia. Decades of research have shown that hypoxia impairs cognition, but the neurophysiological bases for these effects remain poorly understood. Recent advances in neuroscience have permitted non-invasive observation of neural activity under controlled hypoxia exposures and have begun to uncover how the brain responds to hypoxia. Electroencephalography (EEG) in particular has been used to explore how electrical activity produced by networks of cortical neurons changes under hypoxia. Here we review studies that have explored how hypoxia affects prominent EEG brain rhythms as well as responses to specific events or stimuli in the time and frequency domains. Experimental conditions have varied widely, including whether hypoxia exposures were normobaric or hypobaric and the range of equivalent altitudes and durations of exposures. Collectively, these studies have accumulated support for a variety of candidate neural markers of hypoxia impairment spanning sensory and cognitive domains. Continued research will build on these findings to leverage emerging technologies in neuroscience and further our understanding of how hypoxia affects cognition and associated neural activity.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcogn.2024.1468306</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcogn.2024.1468306</link>
        <title><![CDATA[From oxygen shortage to neurocognitive challenges: behavioral patterns and imaging insights]]></title>
        <pubDate>Tue, 05 Nov 2024 00:00:00 GMT</pubDate>
        <category>Review</category>
        <author>Alberto Zani</author><author>Yldjana Dishi</author><author>Alice Mado Proverbio</author>
        <description><![CDATA[Environmental hypoxia, resulting from reduced oxygen supply, poses a significant risk of dysfunction and damage to the neurocognitive system, particularly in relation to anxiety and stress. Inadequate oxygenation can lead to acute and chronic brain damage. Scholars have used behavioral, hemodynamic, and electromagnetic neurofunctional techniques to investigate the effects of normobaric and hypobaric hypoxia on neurocognitive systems. They found a correlation between hypoxia, altered psychomotor responses, and changes in EEG alpha, theta, beta, and gamma rhythms, which affect spatial attention and memory. Hypoxia affects event-related potential (ERP) components differently depending on latency. Perceptual responses N1 and P2 remain largely unaffected, while the amplitudes of preattentive MMN, vMMN, and P3a are significantly altered. Late-latency components related to attention, particularly P3b, are also altered. These changes illustrate the spectrum from sensory detection to more complex cognitive processing, highlighting the brain's efficiency in managing information. Interestingly, the amplitudes of P3b, ADAN, and CNV can increase with increased cognitive demands in hypoxia, suggesting a compensatory response. Prolonged exposure exacerbates these effects, resulting in compensatory delayed behavioral responses and alterations in behavioral monitoring and conflict inhibitory control, as reflected by reduced amplitudes in some attention-related ERP components, including N2, N2pc, and ERN. Thus, neurocognitive function and integrity are under stress. ERP sources and hemodynamic images reveal that vulnerable brain regions include the frontal and prefrontal cortices, hippocampus, basal ganglia, and parietal and visual cortices, which are essential for attention-related processes like decision making and spatial memory. The auditory system appears less affected.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcogn.2024.1425005</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcogn.2024.1425005</link>
        <title><![CDATA[Coupling of anticipation and breathing in expert flute performance: the influence of musical structure and practice]]></title>
        <pubDate>Tue, 17 Sep 2024 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Michel A. Cara</author><author>Divna Mitrovic</author>
        <description><![CDATA[Introduction: In this study, we examined the cognitive processes and physiological responses involved in learning a flute piece by the composer Charles Koechlin among musicians of different expertise levels. Participants performed the piece four times consecutively, with a 2-min practice interval between the first and the second trial. Methods: Using data obtained from an eye tracker, respiratory sensors, and an audio recorder, we assessed short-term improvement and the effect of musical structure and practice on key variables identified through a multivariate approach: eye-hand span (EHS), time index of EHS, thoracic and abdominal amplitude (breathing patterns), and pupil dilation. Results: The analysis revealed two main dimensions: one associated with EHS, and the other with embodied responses to music, closely linked to breathing patterns and pupil dilation. We found an effect of musical structure on all the variables studied, while the EHS improved with practice. Expert musicians demonstrated enhanced EHS and adapted their breathing patterns more effectively to the music's structure. Discussion: These insights support the hypothesis of a coupling between anticipation and breathing, emphasizing the role of perceptual and embodied components in music reading and learning.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcogn.2024.1417011</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcogn.2024.1417011</link>
        <title><![CDATA[Classifying musical reading expertise by eye-movement analysis using machine learning]]></title>
        <pubDate>Fri, 30 Aug 2024 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Véronique Drai-Zerbib</author><author>Manon Ansart</author><author>Clément Grenot</author><author>Bénédicte Poulin-Charronnat</author><author>Joris Perra</author><author>Thierry Baccino</author>
        <description><![CDATA[Music reading is the key to literacy for musicians in the Western music tradition. This high-level activity requires efficient extraction of visual information from the score, tailored to the current needs of the execution. Differences in eye movements between expert and non-expert musicians during music reading have previously been shown. The present study goes further, using a machine learning approach to classify musicians according to their level of expertise by analyzing their eye movements and performance during sight-reading. We used a support vector machine (SVM) technique to (a) investigate whether the underlying expertise in musical reading could be reliably inferred from eye movements, performance, and subjective measures collected across five levels of expertise and (b) determine the best predictors for classifying expertise from 24 visual measures (e.g., the number of progressive fixations, the number of regressive fixations, pupil size, first-pass fixations, and second-pass fixations), 10 performance measures (e.g., eye–hand span, velocity, latency, play duration, tempo, and false notes), and 4 subjective measures (perceived complexity and cognitive skills). Eye movements from 68 pianists at five different levels of music expertise (according to their level in the conservatory of music, from first cycle to professional) were co-registered with their piano performance via a Musical Instrument Digital Interface while they sight-read classical and contemporary music scores. Results revealed relevant classifications based on the SVM analysis. The model optimally classified the lower levels of expertise (1 and 2) compared to the higher levels (3, 4, and 5) and the medium level (3) compared to the higher levels (4 and 5). Furthermore, across a total of 38 measures, the model identified the four best predictors of the level of expertise: the sum of fixations by note, the number of blinks, the number of fixations, and the average fixation duration. Thus, efficiently classifying musical reading expertise from musicians' eye movements and performance using SVM is possible. The results have important theoretical and practical implications for music cognition and pedagogy, enhancing understanding of the specialized eye and performance behaviors required for expert music reading.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcogn.2024.1403584</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcogn.2024.1403584</link>
        <title><![CDATA[Music expertise differentially modulates the hemispheric lateralization of music reading]]></title>
        <pubDate>Mon, 19 Aug 2024 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Sara Tze Kwan Li</author>
        <description><![CDATA[Previous studies have shown that music expertise relates to the hemispheric lateralization of music reading among musicians and non-musicians. However, it remains unclear how music expertise modulates the hemispheric lateralization of music reading along the music learning trajectory, and how it modulates the hemispheric lateralization of reading different musical elements. This study examined how music expertise modulates the hemispheric lateralization of music reading for pitch elements (e.g., pitch, harmony), temporal elements (e.g., rhythm), and expressive elements (e.g., articulation) among musicians, music learners, and non-musicians. Musicians (n = 38), music learners (n = 26), and non-musicians (n = 33) worked on a set of divided visual field sequential matching tasks with four musical elements, i.e., pitch, harmony, rhythm, and articulation, in separate blocks. An eye-tracker was used to ensure participants' central fixation before each trial. Participants judged whether the first and second target stimuli were the same as quickly and accurately as possible. The findings showed that for musicians, no significant differences were observed between the left visual field (LVF) and the right visual field (RVF), suggesting bilateral representation in music reading. Music learners had an RVF/LH (left hemisphere) advantage over the LVF/RH (right hemisphere), suggesting music learners tended to be more left-lateralized in music reading. In contrast, non-musicians had an LVF/RH advantage over the RVF/LH, suggesting non-musicians tended to be more right-lateralized in music reading. In addition, music expertise correlated with the laterality index (LI) in music reading, suggesting that the better the overall performance on the music expertise task, the greater the tendency to be left-lateralized in music reading. Nonetheless, musicians, music learners, and non-musicians did not show different visual field effects for any individual musical element, suggesting that the cognitive processes involved might share similar lateralization effects across the three groups when only one particular musical element is examined. In general, this study suggests an effect of music training on brain plasticity along the music learning trajectory. It also highlights the possibility that bilateral or left hemispheric lateralization may serve as an expertise marker for musical reading.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcogn.2024.1400292</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcogn.2024.1400292</link>
        <title><![CDATA[Auditory-motor adaptation: induction of a lateral shift in sound localization after biased immersive virtual reality training]]></title>
        <pubDate>Wed, 31 Jul 2024 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Alma Guilbert</author><author>Tristan-Gael Bara</author><author>Tifanie Bouchara</author>
        <description><![CDATA[Introduction: Sensorimotor adaptation has often been studied in the visual modality through the Prism Adaptation (PA) paradigm, in which a lateral shift in visual pointing is found after wearing prismatic goggles. An effect of PA has sometimes been observed on hearing, in favor of a cross-modality recalibration. However, no study has shown whether a biased auditory-motor adaptation could induce this lateral shift, which appears essential to a better understanding of the mechanisms of auditory adaptation. The present study aimed at inducing an auditory prism-like effect. Methods: Sixty healthy young adults underwent a session of active audio-proprioceptive training in immersive virtual reality based on Head-Related Transfer Functions (HRTF). This training consisted of a game in which the hand-held controller emitted sounds either at its actual position (control group) or at 10° or 20° to the right of its actual position (two experimental groups). Sound localization was assessed before and after the training. Results: The difference between the two localization tests differed significantly across the three groups. As expected, the difference was significantly leftward for the group with a 20° deviation compared to the control group. However, this effect was due to a significant rightward deviation in the control group, whereas no significant difference between localization tests emerged in the two experimental groups, suggesting that other factors, such as fatigue, may have combined with the training after-effect. Discussion: More studies are needed to determine which angle of deviation and which number of sessions of this audio-proprioceptive training are required to obtain the best after-effect. Although the coupling of hearing and vision in PA still needs to be studied, adding spatial hearing to PA programs could be a promising way to reinforce after-effects and optimize their benefits.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcogn.2024.1404112</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcogn.2024.1404112</link>
        <title><![CDATA[Emotional modulation of statistical learning in visual search]]></title>
        <pubDate>Thu, 13 Jun 2024 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Artyom Zinchenko</author><author>Afton M. Bierlich</author><author>Markus Conci</author><author>Hermann J. Müller</author><author>Thomas Geyer</author>
        <description><![CDATA[Introduction: Visual search is facilitated when participants encounter targets in repeated display arrangements. This “contextual-cueing” effect is attributed to incidental learning of spatial distractor-target relations, which subsequently guides visual search more effectively toward the target location. Conversely, behaviorally significant, though task-irrelevant, negative emotional stimuli may involuntarily capture attention and thus hamper performance in visual search. This raises the question of how these two attention-guiding factors connect. Methods: To this end, we investigated how an emotionally alerting stimulus induced by different classes of emotional (face, scene) pictures prior to the search task relates to memory-related plasticity. We tested 46 participants who were presented with repeated and non-repeated search layouts, preceded at variable (50, 500, 1,000 ms) intervals by emotional vs. neutral faces or scenes. Results: We found that contextual learning was increased with emotional compared to neutral scenes, while no modulation of the cueing effect was observed for emotional (vs. neutral) faces. This modulation occurred independently of the intervals between the emotional stimulus and the search display. Discussion: We conclude that emotional scenes are particularly effective in withdrawing attentional resources, biasing individual participants to perform a visual search task in a passive, i.e., receptive, manner, which, in turn, improves automatic contextual learning.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcogn.2024.1369638</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcogn.2024.1369638</link>
        <title><![CDATA[Children's recognition of slapstick humor is linked to their Theory of Mind]]></title>
        <pubdate>2024-05-22T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Ebru Ger</author><author>Moritz M. Daum</author><author>Mirella Manfredi</author>
        <description><![CDATA[Humor is an important component of children's learning and development. Yet, the cognitive mechanisms that underlie humor recognition in children have not been well-researched. In this pre-registered study, we asked whether (1) 4- to 5-year-old children recognize and categorize a misfortunate situation as funny only if the victims show a funny bewildered face (slapstick humor), and not a painful or angry expression, (2) this ability increases with age, (3) it is associated with children's Theory of Mind (ToM) abilities, (4) it is related to the ability to recognize facial emotional expressions. In an online experiment platform, children (N = 61, Mage = 53 months) were asked to point to the funny picture between a funny and an affective picture. Then, children were asked to point to the happy, sad, fearful, or angry face among four faces displaying these emotions. Children's ToM was assessed using the Children's Social Understanding Scale (CSUS), which was filled out online by parents. Results showed that from the earliest age onward, the predicted probability of humor recognition exceeded the chance level. Only ToM but not age was a significant predictor. Children with higher ToM scores showed better humor recognition. We found no evidence for a relation between children's humor recognition and their recognition of any emotion (happy, sad, fearful, or angry). Our findings suggest that 4–5-year-old children recognize facial emotional expressions and slapstick humor, although these abilities seem unrelated. Instead, children's understanding of mental states appears to play a role in their recognition of slapstick humor.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcogn.2024.1375919</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcogn.2024.1375919</link>
        <title><![CDATA[The time course of hypoxia effects using an aviation survival trainer]]></title>
        <pubdate>2024-04-10T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Cammi K. Borden</author><author>Daniel G. McHail</author><author>Kara J. Blacker</author>
        <description><![CDATA[Introduction: Reduced environmental oxygen levels at high altitudes can result in hypoxic hypoxia, which remains a primary threat in tactical aviation. Hypoxia broadly impairs cognition and can degrade a pilot's ability to safely operate the aircraft. Current hypoxia countermeasures include aircraft life support systems that deliver supplemental oxygen and using controlled hypoxia exposures to train aviators to recognize symptoms. To maximize the effectiveness of these countermeasures, it is critical to understand how hypoxia impacts performance and associated neurocognitive outcomes. We previously showed that a neural marker that indexes sensory processing integrity is sensitive to hypoxia impairment. Methods: Here, we extend this line of research closer to the training environment by using hypoxia simulation equipment currently standard in aviation survival training. In a single-blind, repeated-measures, counterbalanced design, we exposed 34 healthy participants to either normoxic air (ground level) or normobaric hypoxia (altitude equivalent gradually increasing from 10 to 25k') for 20 min after a 10 min baseline at ground level. During the exposure, participants completed a cognitive assessment battery while passively elicited neural responses to auditory tones were recorded using electroencephalography (EEG). Participants reported their hypoxia symptoms throughout and upon completion of their exposures. Results: We found that the hypoxia exposure rapidly elicited the predicted physiological responses in peripheral oxygen saturation (decrease) and heart rate (increase) within 2–3 min of exposure onset. On average, participants reported hypoxia symptoms in a delayed manner, ~8 min following the exposure onset. Performance on the cognitive tasks was relatively unaffected by hypoxia for basic tasks including Stroop, fine motor tracking, color vision, and arithmetic, but was significantly degraded by hypoxia for more advanced tasks that combined a visual search component with Stroop and a working memory task. EEG activity associated with pre-attentive auditory processing was impaired on average shortly after the first symptom report, ~10 min from exposure start. Discussion: Together, these results move hypoxia research closer to conditions encountered in aviation survival training and support the use of training devices for future hypoxia research.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcogn.2024.1352656</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcogn.2024.1352656</link>
        <title><![CDATA[Can natural scenes cue attention to multiple locations? Evidence from eye-movements in contextual cueing]]></title>
        <pubdate>2024-03-13T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Josefine Albert</author><author>Werner X. Schneider</author><author>Christian H. Poth</author>
        <description><![CDATA[Humans find visual targets more quickly when the target appears at the same location in a repeated configuration of other stimuli. However, when the target alternates between two locations in the repeated configuration, the benefit for visual search is smaller. This reduction of benefits has been explained as the result of an averaging of a benefit for one location and a cost for the other location. In two experiments, we investigated this two-target-locations effect in real-world scenes using high-resolution eye-tracking. Experiment 1 adapted a study in which subjects searched for a small “T” or “L” superimposed on real-world photographs. Half of the trials showed repeated scenes with one possible target location each; half showed novel scenes. We replicated the pronounced contextual cueing effect in real-world scenes. In Experiment 2, two conditions were added. In one of them, targets appeared in repeated scenes alternating between two possible locations per scene. In the other condition, targets appeared in repeated scenes but at new locations, constrained to one side of the screen. Subjects were faster to search for and identify a target in repeated scenes than in novel scenes, including when the scene was paired with two alternating target locations and (after extensive training) even when the scene only predicted the hemifield. Separate analyses of the two possible target locations provided no evidence of costs for the additional target location: the contextual cueing effect was present in the second half of the experiment for both the favored and the less favored target location. The eye-tracking data demonstrated that contextual cueing influences searching fixations, characteristic of attentional guidance, rather than responding fixations, characteristic of facilitation of response processes. Further, these data revealed that adding another possible target location leads to less guidance, rather than impeding response processes. Thus, this study delivers evidence for a flexible attentional-guidance mechanism that is able to prioritize more than one location in natural contexts.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcogn.2024.1349505</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcogn.2024.1349505</link>
        <title><![CDATA[Feature discrimination learning transfers to noisy displays in complex stimuli]]></title>
        <pubdate>2024-03-06T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Orly Azulai</author><author>Lilach Shalev</author><author>Carmel Mevorach</author>
        <description><![CDATA[Introduction: Perception under noisy conditions requires not only feature identification but also a process whereby target features are selected and noise is filtered out (e.g., when identifying an animal hiding in the savannah). Interestingly, previous perceptual learning studies demonstrated the utility of training feature representation (without noise) for improving discrimination under noisy conditions. Furthermore, learning to filter out noise also appears to transfer to other perceptual tasks under similar noisy conditions. However, such learning transfer effects were thus far demonstrated predominantly with simple stimuli. Here we sought to explore whether similar learning transfer can be observed with complex real-world stimuli.Methods: We assessed the feature-to-noise transfer effect by using complex stimuli of human faces. We first examined participants' performance on a face-noise task following training either in the same task or in a different face-feature task. Second, we assessed the transfer effect across different noise tasks defined by stimulus complexity: simple stimuli (Gabor) and complex stimuli (faces). Results: We found a clear learning transfer effect in the face-noise task following learning of face features. In contrast, we found no transfer effect across the different noise tasks (from Gabor-noise to face-noise). Conclusion: These results extend previous findings regarding transfer of feature learning to noisy conditions using real-life stimuli.]]></description>
      </item>
      </channel>
    </rss>