1 Introduction
When we perceive complex sounds such as speech or music, the auditory system must process multiple sound features and integrate distinct characteristics across different timescales. Recent research has highlighted neurophysiological mechanisms underlying auditory perception, such as neural synchronization with the temporal structure of external stimuli and rate coding across different frequencies, including those corresponding to the fundamental frequency (F0) of complex sounds (Nourski and Brugge, 2011; Wang, 2018). Synchronization mechanisms appear particularly important for encoding fast-changing phonemic information, whereas rate coding contributes to prosody perception, which tracks sound variations over longer time windows (Giraud and Poeppel, 2012; Rosen, 1992). These mechanisms may be altered in neurodevelopmental conditions such as autism spectrum disorder (ASD). Children with ASD often exhibit atypical auditory processing (O'Connor, 2012), which may contribute to their difficulties in speech perception and language acquisition (Mody and Belliveau, 2013).
The vast majority of existing studies have focused on speakers of non-tonal languages, particularly English, thereby overlooking populations of children with ASD who grow up speaking tonal languages, where pitch carries lexical meaning and auditory processing demands may differ. This opinion paper aims to synthesize recent findings on auditory perception, examine cross-linguistic differences at the neurophysiological level, and explore perspectives for studying these processes in populations with ASD. We propose that electroencephalographic (EEG) markers hold significant promise for investigating how auditory perception varies across language backgrounds in individuals with ASD. Thanks to its high temporal resolution, EEG is particularly well-suited for capturing both synchronized and rate-coded neural responses. In addition, EEG is non-invasive and cost-effective, making it ideal for use in pediatric and clinical populations.
2 Brain mechanisms of auditory perception
Pure tone perception relies on the tonotopic organization of the cochlear basilar membrane and the phase-locking properties of auditory nerve fibers, which tend to generate spikes at specific phases of acoustic waves (Oxenham, 2018). However, timing information becomes increasingly less precise at higher stages of the auditory pathway. For example, in the inferior colliculus of the midbrain, phase-locked responses are detectable at and below 1,000 Hz (Liu et al., 2006), while in the auditory cortex the threshold is about 100 Hz (Lu and Wang, 2000; Nourski and Brugge, 2011). As temporal precision decreases along the auditory pathway, cortical processing must rely on different mechanisms for encoding sound information. At the cortical stage, different types of neurons support auditory perception: stimulus-synchronized spiking neurons and non-synchronized neurons that rely on rate coding, meaning that they are selectively tuned to specific fundamental frequencies (F0; Nourski and Brugge, 2011; Wang, 2018). Thus, the analysis of complex auditory stimuli, including speech, may be the result of the coordinated involvement of these different mechanisms. By complex auditory stimuli, we refer to sounds that contain multiple spectrotemporal features that must be integrated over time, such as speech and music.
Stimulus-synchronized activity in humans is often studied by presenting amplitude-modulated tones or click trains. Such stimulation elicits an evoked EEG response aligned with the frequency and phase of the external stimulus; the precision of synchronization between neuronal and auditory signals across trials can be estimated as inter-trial phase coherence, or ITPC (Stroganova et al., 2020). This response has the highest amplitude when stimuli vary at frequencies of 20–40 Hz (Picton et al., 2003). Rhythmic stimuli varying within this range are perceived as roughness, a perceptual quality intermediate between distinct pulses and sounds with pitch. This response is associated with gap detection and, in older adults, with speech-in-noise perception (Dheerendra et al., 2021; Ross and Fujioka, 2016). It is also associated with rate discrimination of rhythmic stimuli varying at frequencies around 27 Hz in children (Neklyudova et al., 2026). Moreover, cortical oscillations at low-gamma frequencies (25–40 Hz) are associated with phoneme perception (Giraud and Poeppel, 2012; Leong and Goswami, 2014). Taken together, these findings suggest that rhythmic responses in this frequency range may play an important role in processes that rely on precise temporal cues.
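To illustrate the ITPC measure mentioned above, the following sketch estimates it as the length of the mean unit phase vector at the stimulation frequency across single-trial epochs. This is our own minimal illustration, not the pipeline of any cited study; the 40-Hz stimulation frequency, sampling rate, and noise levels are arbitrary choices made for the simulation.

```python
import numpy as np

def itpc(trials, fs, freq):
    """Inter-trial phase coherence at a single frequency.

    trials: array of shape (n_trials, n_samples), single-trial epochs.
    Returns a value in [0, 1]; 1 means perfect phase locking across trials.
    """
    n = trials.shape[1]
    t = np.arange(n) / fs
    # Fourier coefficient at the frequency of interest, one per trial
    coeffs = trials @ np.exp(-2j * np.pi * freq * t)
    phases = coeffs / np.abs(coeffs)   # keep only the phase (unit vectors)
    return np.abs(phases.mean())       # length of the mean resultant vector

# Simulate 50 trials of a 40-Hz steady-state response embedded in noise
rng = np.random.default_rng(0)
fs, dur, n_trials = 1000, 1.0, 50
t = np.arange(int(fs * dur)) / fs
locked = np.sin(2 * np.pi * 40 * t) + 2 * rng.standard_normal((n_trials, t.size))
unlocked = 2 * rng.standard_normal((n_trials, t.size))

print(itpc(locked, fs, 40))    # high: evoked phase repeats across trials
print(itpc(unlocked, fs, 40))  # low: phases scatter across trials
```

A value near 1 indicates that the evoked response reproduces its phase on every trial, while for phase-random activity ITPC decays toward 1/√N with the number of trials.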
A rate coding mechanism can be detected with EEG when periodic stimuli that convey a discernible pitch (F0) are presented. Pitch perception typically emerges around 40 Hz and becomes more salient at higher frequencies (Krumbholz et al., 2000). Recent studies have associated pitch processing with the sustained wave (SW), a brain response that is elicited by continuous periodic sounds (click trains or vowels) and persists until the stimulus ends (Gutschalk et al., 2002; Keceli et al., 2012; Orekhova et al., 2024). During the presentation of rhythmic click trains, SW amplitude increases with the frequency of stimulation (Gutschalk et al., 2002; Keceli et al., 2015). Notably, the involvement of the SW in discriminating rhythmic stimuli around 40 Hz has also been demonstrated in children (Neklyudova et al., 2026). Overall, converging evidence supports the view that the SW reflects a rate-dependent neural mechanism that is preferentially sensitive to the higher repetition frequencies at which a stimulus gives rise to pitch perception rather than to distinct pulses or roughness.
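To make the stimulus class discussed here concrete, the snippet below generates periodic click trains of the general kind used to elicit the SW. This is a minimal sketch under our own assumptions (duration, click width, and sampling rate are arbitrary), not the exact paradigm of any cited study; the perceptual boundary near 40 Hz is taken from the text above.

```python
import numpy as np

def click_train(rate_hz, dur_s=0.5, fs=44100, click_ms=0.1):
    """Periodic click train: a brief rectangular click repeated at rate_hz.

    Below roughly 40 Hz such trains are heard as a rough flutter of
    distinct pulses; at higher rates the periodicity (F0 = rate_hz)
    is heard as pitch.
    """
    n = int(dur_s * fs)
    train = np.zeros(n)
    period = int(round(fs / rate_hz))          # samples between click onsets
    width = max(1, int(click_ms * 1e-3 * fs))  # click width in samples
    for start in range(0, n, period):
        train[start:start + width] = 1.0
    return train

fs = 44100
rough = click_train(20, fs=fs)     # below the pitch limit: distinct pulses
pitched = click_train(100, fs=fs)  # well above ~40 Hz: pitch at 100 Hz
```

The only acoustic difference between the two stimuli is the repetition rate, which is what makes such trains convenient for separating rate-coded (SW) from synchronized (ITPC) responses.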
These two brain responses also follow different developmental trajectories: while ITPC increases with age, reaching its peak during adolescence (Cho et al., 2015), the SW either remains stable or, according to some reports, even decreases with age (Arutiunian et al., 2022; Neklyudova et al., 2024; Stroganova et al., 2020). Thus, the SW may reflect neural mechanisms that are particularly important in early development for shaping auditory perception. However, the SW remains much less studied in both tonal and non-tonal language contexts. An interesting, yet unexplored, research question is whether this response plays a greater role in speech development in tonal than in non-tonal languages.
3 Language experience and brain mechanisms of pitch perception
Pitch is a fundamental feature of all languages. In both tonal and non-tonal languages, pitch variation is crucial for the perception of prosody, including features such as intonation and stress. At the same time, in tonal languages such as Mandarin Chinese, Vietnamese, or Thai, pitch modulations, known as lexical tones, serve not only prosodic but also lexical functions, distinguishing word meanings. Importantly, lexical tone perception relies on pitch contour variations over relatively short time windows, typically at the syllable level (hundreds of milliseconds), whereas intonation and stress perception involve tracking pitch changes over longer durations, spanning words or entire sentences.
Evidence indicates that experience with a tonal language shapes pitch perception. Specifically, tones tend to be perceived in a more categorical manner, leading to better discrimination across category boundaries than between equivalently separated stimuli within the same category. Studies have shown that speakers of tonal languages have narrower boundary widths between tones (Morett, 2020; Peng et al., 2010). This sensitivity could be linked to enhanced neural plasticity in pitch processing regions, such as the superior temporal gyrus (STG; Bhaya-Grossman and Chang, 2022). A meta-analysis has shown that only tonal language speakers consistently recruit the left STG, an area implicated in phoneme processing in non-tonal languages, for lexical tone processing (Liang and Du, 2018). At the same time, variation in intonation at the sentence level elicits bilateral or right-lateralized activation in both tonal and non-tonal language speakers (Gandour et al., 2003).
One theory describing functional asymmetry in the auditory cortex is the asymmetric sampling in time hypothesis, according to which the left auditory cortex processes auditory information that varies over time windows of 25–80 ms, whereas the right auditory cortex operates over timescales of 150–250 ms (Oderbolz et al., 2025; Poeppel, 2003). This division of labor appears to be rooted in hemispheric differences in architectonic structure, connectivity patterns, and the heterogeneity of temporal receptive fields at the level of single neurons (Buxhoeveden et al., 2001; Caeyenberghs and Leemans, 2014; Cavanagh et al., 2020). Syllables in Mandarin Chinese, the most frequently studied tonal language, have an average duration of approximately 250 ms (Peng, 2006), which exceeds the 25–80 ms temporal window considered optimal for left-hemisphere processing. Therefore, it remains unclear which specific mechanisms in the left hemisphere contribute to the processing of lexical tones, and how this sensitivity to lexical tones is shaped in such languages. To address this knowledge gap, future research should examine how pitch variations are integrated across different temporal windows in speakers of both tonal and non-tonal languages. This apparent contradiction also highlights that findings on hemispheric specialization based on non-tonal languages may not be fully generalizable to tonal language contexts. Thus, there is a pressing need for more cross-linguistic studies to better understand the neural basis of auditory perception in diverse language systems.
The EEG responses described above, indexing synchronized and non-synchronized activity, are well-suited tools for studying these processes from the perspective of auditory development and its impairment across different language environments. Of particular relevance to our opinion paper is the finding that the SW is enhanced in speakers of tonal languages compared to speakers of non-tonal languages, but only when semantically meaningful syllables (with tones) are presented (Fan et al., 2017). Similar results have been found for the auditory brainstem frequency-following response (FFR): speakers of tonal languages show stronger FFRs than English-speaking listeners (Krishnan et al., 2010). However, this difference characterizes only adults, not neonates (Jeng et al., 2011), suggesting that tonal language experience sharpens neuronal sensitivity to linguistic pitch information at the level of the auditory brainstem (Figure 1).
Figure 1
4 Altered brain mechanisms of auditory perception in ASD
Disruptions in either of the described mechanisms (synchronized activity and rate-coding activity) can significantly affect the development of speech perception. This is particularly relevant in neurodevelopmental conditions such as ASD, where atypical auditory processing in early infancy may trigger cascading effects on speech development and broader cognitive functions. Indeed, children with ASD exhibit difficulties in the development of both auditory and speech perception (Mody and Belliveau, 2013; O'Connor, 2012).
Several studies have reported reduced ITPC in individuals with ASD (De Stefano et al., 2019; Seymour et al., 2020), although findings remain inconsistent (Edgar et al., 2016; Stroganova et al., 2020). Moreover, some studies have found that this response is associated with speech difficulties in children with ASD (Arutiunian et al., 2023; Roberts et al., 2021). All of these studies were conducted with speakers of non-tonal languages.
The SW is also impaired in ASD (Arutiunian et al., 2023; Fadeev et al., 2024; Stroganova et al., 2020). In a non-tonal language context (Russian), the SW correlates with the ability to perceive words in noise in children with ASD (Fadeev et al., 2024). Impaired pitch-processing mechanisms may have an even greater effect in children with ASD from tonal language contexts, since this response has higher amplitude in individuals with a tonal language background (Fan et al., 2017). Studies have shown that children with ASD from tonal language backgrounds exhibit deficits in pitch processing, but only in the speech domain, while their discrimination of pure tones or melodic contours is even enhanced (Jiang et al., 2015).
Taken together, these findings highlight the language-specific role of pitch processing in ASD and support the utility of neurophysiological markers for tracking atypical speech perception mechanisms.
5 Discussion
In this opinion paper, we reviewed studies on the brain mechanisms underlying auditory processing, with a focus on how these mechanisms may differ between speakers of tonal and non-tonal languages. Neurophysiological evidence suggests that the perception of pitch variation engages distinct neural processes depending on linguistic experience. However, it remains unknown how phase- and frequency-synchronized mechanisms, such as ITPC, function in speakers of tonal languages, as this phenomenon has not yet been systematically investigated in this population.
In individuals with ASD from non-tonal language contexts, both ITPC and the pitch-related response (SW) appear to be affected (De Stefano et al., 2019; Fadeev et al., 2024; Orekhova et al., 2024). However, these auditory responses have not yet been examined in individuals with ASD who speak tonal languages. Nevertheless, existing evidence indicates that children with ASD with tonal language experience exhibit wider categorical boundary widths than typically developing peers, showing reduced categorical perception of lexical pitch (Chen et al., 2022), which might affect language abilities in this population. Similarly, in languages where syllable duration carries lexical meaning (e.g., Finnish or Japanese), categorical perception of duration is also impaired in individuals with ASD (Kasai et al., 2005; Lepistö et al., 2005), whereas this deficit is not observed in languages where syllable duration is not lexically relevant (Huang et al., 2018).
Several limitations of the reviewed studies should be acknowledged. First, substantial variability in stimulation paradigms across studies, together with relatively small sample sizes in some cases, limits the direct comparability and generalizability of findings. Importantly, this variability extends to the nature of the auditory stimuli: studies differ in their use of simple vs. complex sounds, synthetic vs. natural speech, listening conditions, and key acoustic properties (e.g., duration, modulation rate, and spectral content). These differences can engage partially distinct processing mechanisms and timescales, complicating the interpretation of inconsistent findings across studies. Notably, studies reporting altered categorical perception of linguistic tones in children with tonal language backgrounds (Morett, 2020; Peng et al., 2010) have predominantly used lexical or pure tone stimuli, which limits the ability to generalize findings to more naturalistic listening conditions. To better understand the neural mechanisms underlying these effects in ASD, future research should incorporate a broader range of stimuli that vary in complexity and ecological validity, allowing clearer dissociation of which aspects of auditory processing are affected. In addition, the marked heterogeneity of ASD may contribute to inconsistent results; however, these challenges can be addressed in future research through larger, well-characterized samples and the use of harmonized experimental protocols. While the present review considers both clinical and cultural differences, maintaining consistency within a given experimental paradigm is critical, and future research should prioritize more standardized stimulus designs.
Taken together, these findings underscore the importance of cross-linguistic studies in children with ASD to identify which specific alterations in auditory perception contribute to the development of speech perception deficits within different language environments. It is possible that a common underlying auditory processing mechanism is affected in ASD, but the behavioral and neurophysiological manifestations vary depending on the linguistic context. Tonal languages, which place a high functional load on pitch, offer a valuable model for disentangling language-specific and universal components of auditory processing atypicalities in ASD.
Statements
Author contributions
AN: Conceptualization, Writing – review & editing, Writing – original draft. YL: Writing – original draft, Writing – review & editing, Conceptualization. YX: Writing – review & editing, Writing – original draft. OS: Writing – original draft, Conceptualization, Writing – review & editing.
Funding
The author(s) declared that financial support was received for this work and/or its publication. This work is supported by the Open Project Program of Key Laboratory of Child Development and Learning Science of the Ministry of Education, Southeast University (No. CDLS-2024-03), China.
Conflict of interest
The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declared that generative AI was used in the creation of this manuscript, to assist with grammar and language polishing during manuscript preparation.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Arutiunian, V., Arcara, G., Buyanova, I., Davydova, E., Pereverzeva, D., Sorokin, A., et al. (2023). Neuromagnetic 40 Hz auditory steady-state response in the left auditory cortex is related to language comprehension in children with autism spectrum disorder. Prog. Neuro-Psychopharmacol. Biol. Psychiatry 122:110690. doi: 10.1016/j.pnpbp.2022.110690
Arutiunian, V., Arcara, G., Buyanova, I., Gomozova, M., and Dragoy, O. (2022). The age-related changes in 40 Hz auditory steady-state response and sustained event-related fields to the same amplitude-modulated tones in typically developing children: a magnetoencephalography study. Hum. Brain Mapp. 43, 5370–5383. doi: 10.1002/hbm.26013
Bhaya-Grossman, I., and Chang, E. F. (2022). Speech computations of the human superior temporal gyrus. Annu. Rev. Psychol. 73, 79–102. doi: 10.1146/annurev-psych-022321-035256
Buxhoeveden, D. P., Switala, A. E., Litaker, M., Roy, E., and Casanova, M. F. (2001). Lateralization of minicolumns in human planum temporale is absent in nonhuman primate cortex. Brain Behav. Evol. 57, 349–358. doi: 10.1159/000047253
Caeyenberghs, K., and Leemans, A. (2014). Hemispheric lateralization of topological organization in structural brain networks. Hum. Brain Mapp. 35, 4944–4957. doi: 10.1002/hbm.22524
Cavanagh, S. E., Hunt, L. T., and Kennerley, S. W. (2020). A diversity of intrinsic timescales underlie neural computations. Front. Neural Circuits 14:615626. doi: 10.3389/fncir.2020.615626
Chen, Y., Tang, E., Ding, H., and Zhang, Y. (2022). Auditory pitch perception in autism spectrum disorder: a systematic review and meta-analysis. J. Speech Lang. Hear. Res. 65, 4866–4886. doi: 10.1044/2022_JSLHR-22-00254
Cho, R. Y., Walker, C. P., Polizzotto, N. R., Wozny, T. A., Fissell, C., Chen, C.-M. A., et al. (2015). Development of sensory gamma oscillations and cross-frequency coupling from childhood to early adulthood. Cereb. Cortex 25, 1509–1518. doi: 10.1093/cercor/bht341
De Stefano, L. A., Schmitt, L. M., White, S. P., Mosconi, M. W., Sweeney, J. A., and Ethridge, L. E. (2019). Developmental effects on auditory neural oscillatory synchronization abnormalities in autism spectrum disorder. Front. Integr. Neurosci. 13:34. doi: 10.3389/fnint.2019.00034
Dheerendra, P., Barascud, N., Kumar, S., Overath, T., and Griffiths, T. D. (2021). Dynamics underlying auditory-object-boundary detection in primary auditory cortex. Eur. J. Neurosci. 54, 7274–7288. doi: 10.1111/ejn.15471
Edgar, J. C., Fisk, C. L., Liu, S., Pandey, J., Herrington, J. D., Schultz, R. T., and Roberts, T. P. L. (2016). Translating adult electrophysiology findings to younger patient populations: difficulty measuring 40-Hz auditory steady-state responses in typically developing children and children with autism spectrum disorder. Dev. Neurosci. 38, 1–14. doi: 10.1159/000441943
Fadeev, K. A., Romero Reyes, I. V., Goiaeva, D. E., Obukhova, T. S., Ovsiannikova, T. M., Prokofyev, A. O., Rytikova, A. M., et al. (2024). Attenuated processing of vowels in the left temporal cortex predicts speech-in-noise perception deficit in children with autism. J. Neurodev. Disord. 16:67. doi: 10.1186/s11689-024-09585-2
Fan, C. S.-D., Zhu, X., Dosch, H. G., von Stutterheim, C., and Rupp, A. (2017). Language related differences of the sustained response evoked by natural speech sounds. PLoS ONE 12:e0180441. doi: 10.1371/journal.pone.0180441
Gandour, J., Dzemidzic, M., Wong, D., Lowe, M., Tong, Y., Hsieh, L., et al. (2003). Temporal integration of speech prosody is shaped by language experience: an fMRI study. Brain Lang. 84, 318–336. doi: 10.1016/S0093-934X(02)00505-9
Giraud, A.-L., and Poeppel, D. (2012). Cortical oscillations and speech processing: emerging computational principles and operations. Nat. Neurosci. 15, 511–517. doi: 10.1038/nn.3063
Gutschalk, A., Patterson, R. D., Rupp, A., Uppenkamp, S., and Scherg, M. (2002). Sustained magnetic fields reveal separate sites for sound level and temporal regularity in human auditory cortex. NeuroImage 15, 207–216. doi: 10.1006/nimg.2001.0949
Huang, D., Yu, L., Wang, X., Fan, Y., Wang, S., and Zhang, Y. (2018). Distinct patterns of discrimination and orienting for temporal processing of speech and nonspeech in Chinese children with autism: an event-related potential study. Eur. J. Neurosci. 47, 662–668. doi: 10.1111/ejn.13657
Jeng, F.-C., Hu, J., Dickman, B., Montgomery-Reagan, K., Tong, M., Wu, G., et al. (2011). Cross-linguistic comparison of frequency-following responses to voice pitch in American and Chinese neonates and adults. Ear Hear. 32:699. doi: 10.1097/AUD.0b013e31821cc0df
Jiang, J., Liu, F., Wan, X., and Jiang, C. (2015). Perception of melodic contour and intonation in autism spectrum disorder: evidence from Mandarin speakers. J. Autism Dev. Disord. 45, 2067–2075. doi: 10.1007/s10803-015-2370-4
Kasai, K., Hashimoto, O., Kawakubo, Y., Yumoto, M., Kamio, S., Itoh, K., et al. (2005). Delayed automatic detection of change in speech sounds in adults with autism: a magnetoencephalographic study. Clin. Neurophysiol. 116, 1655–1664. doi: 10.1016/j.clinph.2005.03.007
Keceli, S., Inui, K., Okamoto, H., Otsuru, N., and Kakigi, R. (2012). Auditory sustained field responses to periodic noise. BMC Neurosci. 13:7. doi: 10.1186/1471-2202-13-7
Keceli, S., Okamoto, H., and Kakigi, R. (2015). Hierarchical neural encoding of temporal regularity in the human auditory cortex. Brain Topogr. 28, 459–470. doi: 10.1007/s10548-013-0300-3
Krishnan, A., Gandour, J. T., and Bidelman, G. M. (2010). The effects of tone language experience on pitch processing in the brainstem. J. Neurolinguistics 23, 81–95. doi: 10.1016/j.jneuroling.2009.09.001
Krumbholz, K., Patterson, R. D., and Pressnitzer, D. (2000). The lower limit of pitch as determined by rate discrimination. J. Acoust. Soc. Am. 108, 1170–1180. doi: 10.1121/1.1287843
Leong, V., and Goswami, U. (2014). Impaired extraction of speech rhythm from temporal modulation patterns in speech in developmental dyslexia. Front. Hum. Neurosci. 8:96. doi: 10.3389/fnhum.2014.00096
Lepistö, T., Kujala, T., Vanhala, R., Alku, P., Huotilainen, M., and Näätänen, R. (2005). The discrimination of and orienting to speech and non-speech sounds in children with autism. Brain Res. 1066, 147–157. doi: 10.1016/j.brainres.2005.10.052
Liang, B., and Du, Y. (2018). The functional neuroanatomy of lexical tone perception: an activation likelihood estimation meta-analysis. Front. Neurosci. 12:495. doi: 10.3389/fnins.2018.00495
Liu, L.-F., Palmer, A. R., and Wallace, M. N. (2006). Phase-locked responses to pure tones in the inferior colliculus. J. Neurophysiol. 95, 1926–1935. doi: 10.1152/jn.00497.2005
Lu, T., and Wang, X. (2000). Temporal discharge patterns evoked by rapid sequences of wide- and narrowband clicks in the primary auditory cortex of cat. J. Neurophysiol. 84, 236–246. doi: 10.1152/jn.2000.84.1.236
Mody, M., and Belliveau, J. W. (2013). Speech and language impairments in autism: insights from behavior and neuroimaging. North Am. J. Med. Sci. 5, 157–161. doi: 10.7156/v5i3p157
Morett, L. M. (2020). The influence of tonal and atonal bilingualism on children's lexical and non-lexical tone perception. Lang. Speech 63, 221–241. doi: 10.1177/0023830919834679
Neklyudova, A., Kuramagomedova, R., Voinova, V., and Sysoeva, O. (2024). Atypical brain responses to 40-Hz click trains in girls with Rett syndrome: auditory steady-state response and sustained wave. Psychiatry Clin. Neurosci. 78, 282–290. doi: 10.1111/pcn.13638
Neklyudova, A., Rebreikina, A., and Sysoeva, O. (2026). The neurophysiological correlates of click rate discrimination in children. Int. J. Psychophysiol. 220:113304. doi: 10.1016/j.ijpsycho.2025.113304
Nourski, K. V., and Brugge, J. F. (2011). Representation of temporal sound features in the human auditory cortex. Rev. Neurosci. 22, 187–203. doi: 10.1515/rns.2011.016
O'Connor, K. (2012). Auditory processing in autism spectrum disorder: a review. Neurosci. Biobehav. Rev. 36, 836–854. doi: 10.1016/j.neubiorev.2011.11.008
Oderbolz, C., Poeppel, D., and Meyer, M. (2025). Asymmetric sampling in time: evidence and perspectives. Neurosci. Biobehav. Rev. 171:106082. doi: 10.1016/j.neubiorev.2025.106082
Orekhova, E. V., Fadeev, K. A., Goiaeva, D. E., Obukhova, T. S., Ovsiannikova, T. M., Prokofyev, A. O., et al. (2024). Different hemispheric lateralization for periodicity and formant structure of vowels in the auditory cortex and its changes between childhood and adulthood. Cortex 171, 287–307. doi: 10.1016/j.cortex.2023.10.020
Oxenham, A. J. (2018). How we hear: the perception and neural coding of sound. Annu. Rev. Psychol. 69, 27–50. doi: 10.1146/annurev-psych-122216-011635
Peng, G. (2006). Temporal and tonal aspects of Chinese syllables: a corpus-based comparative study of Mandarin and Cantonese. J. Chinese Linguistics 34, 134–154.
Peng, G., Zheng, H.-Y., Gong, T., Yang, R.-X., Kong, J.-P., and Wang, W. S.-Y. (2010). The influence of language experience on categorical perception of pitch contours. J. Phon. 38, 616–624. doi: 10.1016/j.wocn.2010.09.003
Picton, T. W., John, M. S., Dimitrijevic, A., and Purcell, D. (2003). Human auditory steady-state responses. Int. J. Audiol. 42, 177–219. doi: 10.3109/14992020309101316
Poeppel, D. (2003). The analysis of speech in different temporal integration windows: cerebral lateralization as 'asymmetric sampling in time.' Speech Commun. 41, 245–255. doi: 10.1016/S0167-6393(02)00107-3
Roberts, T. P. L., Bloy, L., Liu, S., Ku, M., Blaskey, L., and Jackel, C. (2021). Magnetoencephalography studies of the envelope following response during amplitude-modulated sweeps: diminished phase synchrony in autism spectrum disorder. Front. Hum. Neurosci. 15:787229. doi: 10.3389/fnhum.2021.787229
Rosen, S. (1992). Temporal information in speech: acoustic, auditory and linguistic aspects. Philos. Trans. R. Soc. Lond. B Biol. Sci. 336, 367–373. doi: 10.1098/rstb.1992.0070
Ross, B., and Fujioka, T. (2016). 40-Hz oscillations underlying perceptual binding in young and older adults: perceptual binding and aging. Psychophysiology 53:12654. doi: 10.1111/psyp.12654
Seymour, R. A., Rippon, G., Gooding-Williams, G., Sowman, P. F., and Kessler, K. (2020). Reduced auditory steady state responses in autism spectrum disorder. Mol. Autism 11:56. doi: 10.1186/s13229-020-00357-y
Stroganova, T. A., Komarov, K. S., Sysoeva, O. V., Goiaeva, D. E., Obukhova, T. S., Ovsiannikova, T. M., et al. (2020). Left hemispheric deficit in the sustained neuromagnetic response to periodic click trains in children with ASD. Mol. Autism 11:100. doi: 10.1186/s13229-020-00408-4
Wang, X. (2018). Cortical coding of auditory features. Annu. Rev. Neurosci. 41, 527–552. doi: 10.1146/annurev-neuro-072116-031302
Keywords
auditory cortex, auditory perception development, autistic spectrum disorder, pitch processing, tonal and non-tonal languages
Citation
Neklyudova A, Liu Y, Xia Y and Sysoeva O (2026) Brain mechanisms of auditory perception in autism spectrum disorder: a comparative perspective on tonal and non-tonal languages. Front. Syst. Neurosci. 20:1628536. doi: 10.3389/fnsys.2026.1628536
Received
14 May 2025
Revised
17 April 2026
Accepted
20 April 2026
Published
12 May 2026
Volume
20 - 2026
Edited by
Maria Mody, Massachusetts General Hospital and Harvard Medical School, United States
Reviewed by
Mohammad Shamim Ansari, Ali Yavar Jung National Institute for the Hearing Handicapped, India
Copyright
© 2026 Neklyudova, Liu, Xia and Sysoeva.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Anastasia Neklyudova, anastacia.neklyudova@gmail.com; Yiyun Xi, Sharon-xiayiyun@foxmail.com