Abstract
The study of spoken communication has long been entrenched in a debate surrounding the interdependence of speech production and perception. This mini review summarizes findings from prior studies to elucidate the reciprocal relationships between speech production and perception. We also discuss key theoretical perspectives relevant to the speech perception-production loop, including hyper-articulation and hypo-articulation (H&H) theory, the motor theory of speech perception, direct realism theory, articulatory phonology, the Directions into Velocities of Articulators (DIVA) and Gradient Order DIVA (GODIVA) models, and predictive coding. Building on prior findings, we propose a revised auditory-motor integration model of speech and provide insights for future research in speech perception and production, focusing on the effects of impaired peripheral auditory systems.
Introduction
Debates on whether spoken communication involves both speech/language production and perception/comprehension have shaped theories and research in the field. One side argues for a “general auditory” view, stating that speech perception involves processing acoustic signals independently of production components (Kluender, 1994; Diehl et al., 2004), despite substantial evidence supporting the contrary perspective (Casserly and Pisoni, 2010). This dichotomy echoes broader debates in cognitive psychology, where the idea of separate modules for perception and action, often called a “cognitive sandwich,” has been contested (Hurley, 2008). The same debate extends to language production and comprehension more broadly, as forms of cognitive processing that involve both perception and action. These two components need to work in tandem to establish “signal parity” between the produced and perceived representations for successful bidirectional communication (Liberman and Mattingly, 1989; Massaro, 2014). Together, speech production and perception form a dynamic process whose two cooperating sides construct a stable and effective “speech chain” for verbal communication (Denes and Pinson, 1963).
Human language development is thought either to begin with inherent linguistic abilities (e.g., Chomsky’s universal grammar theory; Chomsky, 1995) or with no innate knowledge, i.e., a blank slate (e.g., Skinner’s behaviorist theory; Skinner, 1938). These competing theories are still being debated. The development of both language perception and production requires an interwoven interaction between these two systems (Kuhl et al., 2008; Turnbull and Justice, 2017). The native language magnet expanded (NLM-e) model of speech perception demonstrates a strong linkage between the perceptual representation of speech sounds, the degree of exposure to language, and vocal imitation (Kuhl et al., 2008), and the significant impact of early language input on speech and language outcomes (Hart and Risley, 1995; Weisleder and Fernald, 2013; Arjmandi et al., 2022) further demonstrates this connection during development. According to the NLM-e model, organizing the phonetic perceptual space into prototypes during development allows children to form a perceptual map for representing linguistic phonetic units (Kuhl et al., 2008). Children later use these perceptual maps of speech to produce the sounds and words of their native language (Kuhl et al., 2008), demonstrating the tight interaction between the encoding and decoding pathways that translate speech to language and vice versa.
Connection between speech perception and production
The closest process to perceiving speech is its representation in action, namely speech production. Early support for a link between speech perception and production comes from Gregory and Webster (1996), who showed convergence in speech patterns between speakers of different status in dyadic interviews. This study provided evidence of how the perception of speech influences the intonational structure of a speaker’s speech through accommodation. Pardo (2006) showed that speakers’ speaking styles gradually align phonetically (i.e., “phonetic convergence”) during a real-time communication task, especially while playing a communication game (i.e., a “map task”). Speakers also actively monitor their environment to compensate for any reduction in the quality of the speech delivered to listeners. Factors such as ambient background noise, complex multi-talker situations, and the speaker’s psychological state may impact the clarity of a speaker’s speech (Lindblom, 1990). The same compensatory reaction is evident in the “Lombard speech” effect, whereby speakers produce louder or more carefully articulated speech in noisy conditions to communicate their linguistic message successfully (Winkworth and Davis, 1997). Such perception-driven adaptations provide compelling evidence for the role of speech perception in production and for their interconnection.
Connection between language comprehension and production
In language, the interconnection between comprehension and production is fundamental to effective communication. Studies on single-word naming (Bock, 1994) and sentence completion (Bock and Miller, 1991) have demonstrated the connection between language production and comprehension. The single-word naming task, often misunderstood as a purely comprehension-based activity, involves both comprehension and production: it necessitates the active generation and articulation of words, extending beyond mere comprehension. Sentence completion, typically assumed to be a production task, is likewise not feasible without comprehension. The connection between comprehension of linguistic units and production of speech is also supported by neurobiological evidence. One such piece of evidence is the discovery of “mirror neurons,” neuronal assemblies in the premotor cortex and other areas implicated in speech processing (e.g., Broca’s area) (Hickok, 2011). Shared neuronal activation during speech production and silent listening reinforces the link between speech perception and production, providing neurobiological support for Levelt’s proposed internal feedback loop (Levelt et al., 1999) and Moore’s PRESENCE model (Moore, 2007). These models describe how speakers emulate the articulatory-to-acoustic mapping and how listeners mirror another speaker’s articulatory map (Levelt et al., 1999; Moore, 2007).
Other neurobiological evidence linking perception and production comes from Fadiga et al. (2002) and Watkins and Paus (2004), who showed that listening to speech, but not non-speech, triggers activity in cortical motor regions associated with speech articulation, such as those controlling tongue movement, as well as in Broca’s area. Hickok et al. (2009), however, argued against the role of mirror neurons in speech perception, underscoring evidence such as the lack of influence on speech comprehension when areas related to speech production (e.g., Broca’s area) are impaired. They also highlighted infants’ ability to categorize speech sounds (categorical perception) before language production begins as additional evidence against a direct connection between speech perception and production. Several other studies delineated highly overlapping neural pathways implicated in processing and producing language, providing neurobiological evidence of an interconnection between language comprehension and production (Scott and Johnsrude, 2003; Wilson et al., 2004; Chang et al., 2015). Rate-dependent neural activation related to speech perception has been reported during whispering, even when the speech was not audible (Paus, 1996). In addition, the direct relationship between lip muscle activity and the level of neural activity in Broca’s area suggests that auditory input modulates the excitability of the motor system during speech perception (Watkins and Paus, 2004). The active interaction between the speech perception and production systems is further supported by the involvement of the same cortical regions in semantic, lexical, and syntactic processing during speaking and listening tasks (Menenti et al., 2011; see McGettigan and Tremblay, 2017 for a detailed review).
Real-time auditory feedback and speech perception-production loop
Experimental data from human (Purcell and Munhall, 2006a,b; MacDonald et al., 2011; Khoshhal Mollasaraei and Behroozmand, 2023) and animal (Eliades and Wang, 2005; Eliades and Wang, 2008; Eliades and Tsunada, 2018) studies highlighting the impacts of internal and external auditory feedback mechanisms on speech production further underscore the link between perception and production. Internal feedback serves as a self-calibrating mechanism that uses an internal model to predict the sensory (e.g., auditory) consequences of intended productions based on previously learned associations between motor commands and their feedback signals. This mechanism allows speakers to adjust phonatory and articulatory movements before and shortly after the onset of speech production without relying on external auditory feedback, as evident in the centering behavior observed in the early stages of vowel production (Hockett, 1967; Gracco and Abbs, 1987; Niziolek et al., 2013). External feedback, on the other hand, allows speakers to maintain phonatory and articulatory accuracy after production, update the internal model in response to production errors, and drive adaptive behavior for speech motor learning. Auditory feedback is incorporated in some computational models of speech acquisition and production (e.g., the DIVA and GODIVA models) to train the speech production process and simulate fine-grained articulatory movements (Tourville and Guenther, 2013). Other computational models have also been proposed in which auditory perception is defined as a pathway for developing the learning process for speaking (Plaut and Kello, 2013). In this context, connectionist models assume that production and comprehension occur through the same network of nodes and connections, such that the pathway used for auditory feedback during production is also recruited to perceive the speech of others (MacKay, 1982; Dell, 1988). Artificial perturbation of real-time auditory feedback triggers compensatory motor behaviors that aim to minimize feedback error by modifying phonatory and articulatory movements to match the acoustic characteristics of the intended productions. Real-time shifts in vowel first and second formant frequencies prompt speakers to adjust the perturbed formant in a compensatory direction while leaving the unperturbed formant unchanged (Purcell and Munhall, 2006a,b; MacDonald et al., 2011; Khoshhal Mollasaraei and Behroozmand, 2023). Similar compensatory vocal responses have been extensively demonstrated in response to pitch-shifted auditory feedback (Behroozmand and Larson, 2011; Behroozmand et al., 2016). This well-known phenomenon of “sensorimotor adaptation” demonstrates the critical role of the speech perception-production loop in real-time self-monitoring of speech output (Houde and Jordan, 2002; Villacorta et al., 2007).
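To make the compensatory dynamic described above concrete, the following minimal sketch simulates trial-by-trial adaptation to an upward shift in first-formant (F1) auditory feedback. The proportional update rule, the learning rate, and the function name simulate_adaptation are illustrative assumptions rather than the equations of any published model; real speakers typically compensate only partially.

```python
# Minimal sketch of sensorimotor adaptation to a formant perturbation.
# The update rule and all parameter values are illustrative assumptions,
# not the equations of the cited studies (cf. Houde and Jordan, 2002;
# Purcell and Munhall, 2006b).
import numpy as np

def simulate_adaptation(n_trials=60, target_f1=700.0, shift_hz=100.0,
                        learning_rate=0.15):
    """The speaker aims for a target F1 while feedback is shifted up by
    `shift_hz`; the feedforward command is nudged opposite to the perceived
    error on every trial, yielding gradual compensation."""
    command = target_f1                       # current feedforward F1 plan
    produced = []
    for _ in range(n_trials):
        produced_f1 = command                 # formant actually produced
        feedback_f1 = produced_f1 + shift_hz  # perturbed auditory feedback
        error = feedback_f1 - target_f1       # heard vs. intended mismatch
        command -= learning_rate * error      # adapt opposite to the error
        produced.append(produced_f1)
    return np.array(produced)

if __name__ == "__main__":
    produced = simulate_adaptation()
    print(f"Initial produced F1: {produced[0]:.1f} Hz")
    print(f"Final produced F1:   {produced[-1]:.1f} Hz (opposes the +100 Hz shift)")
```

In this toy setup, the produced F1 drifts downward until the shifted feedback again matches the intended target, mirroring the direction (though not the partial magnitude) of the compensatory responses reported in the perturbation studies cited above.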
Speech perception-production in theories
The interconnection between speech perception and production is a fundamental aspect of most theories of speech perception or speech production. The “hyper-articulation and hypo-articulation” (H&H) theory is one of the earliest theories of speech production that aligns with the notion of perception-driven adaptation in speech production. H&H theory highlights the influence of listeners and environmental factors on speakers’ adaptive behavior, wherein speakers adjust their articulatory patterns to balance articulatory effort against communicative clarity (Lindblom, 1990).
Speakers incorporate auditory feedback to refine articulatory movements during speech production (Tourville et al., 2008), as demonstrated by the DIVA (Guenther, 1995; Guenther et al., 1998, 2006) and GODIVA models (Civier et al., 2013). These models integrate auditory input and articulatory control in speech production by training the network from a babbling phase onward (Oller and Eilers, 1988), incorporating both feedforward and feedback pathways grounded in neural theories of language development. GODIVA specifically addresses the ordering and sequencing of speech sounds, complementing DIVA’s speech sound map. The dynamic interaction between action and perception in sound production emphasizes the role of auditory perception in refining the speaking process. Studies have revealed that the auditory target and error maps, constructed through auditory feedback in these models, are situated in distinct regions along the posterior superior temporal gyrus and are activated during both perception and production (Buchsbaum et al., 2001; Hickok and Poeppel, 2004). However, these models have limitations, failing to account for aspects such as the adaptive mechanisms that preserve speech intelligibility and the sensorimotor adaptation underlying Lombard speech. While these models account for predictive aspects of speech production, including acoustic cues and somatosensory signals, uncertainties persist regarding the integration of prosodic elements such as intonation, rhythm, and amplitude modulation. Prosodic patterns contribute to predicting syllable and word boundaries in continuous speech, as demonstrated in studies involving both children (Fernald and Mazzie, 1991) and adults (Cutler et al., 1997).
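As a rough illustration of the babbling-driven learning that DIVA-type models emphasize, the sketch below learns a simple articulatory-to-acoustic mapping from random “babbles” and then combines a feedforward command with feedback-based correction. The one-dimensional linear vocal tract, the gains, and the helper names (vocal_tract, feedforward_command) are hypothetical simplifications for illustration only, not components of the published models.

```python
# Toy illustration of babbling-based learning of an auditory-motor mapping,
# followed by combined feedforward and feedback control. The linear one-
# dimensional "vocal tract" and all gains are assumptions for illustration;
# the actual DIVA/GODIVA models are far richer (Tourville and Guenther, 2013).
import numpy as np

rng = np.random.default_rng(0)

def vocal_tract(motor_command):
    """Hypothetical articulatory-to-acoustic mapping, unknown to the learner."""
    return 300.0 + 50.0 * motor_command + rng.normal(scale=5.0)

# Babbling phase: issue random motor commands and observe their auditory outcomes.
commands = rng.uniform(0.0, 10.0, size=200)
outcomes = np.array([vocal_tract(c) for c in commands])
slope, intercept = np.polyfit(commands, outcomes, 1)  # learned forward model

def feedforward_command(auditory_target):
    """Invert the learned forward model to plan a motor command."""
    return (auditory_target - intercept) / slope

# Production phase: feedforward plan plus feedback-based correction.
target = 650.0
command = feedforward_command(target)
for _ in range(5):
    heard = vocal_tract(command)                # auditory feedback
    command -= 0.5 * (heard - target) / slope   # feedback correction (gain 0.5)

print(f"Final production is ~{vocal_tract(command):.1f} Hz for a {target:.0f} Hz target")
```

The babbling phase here plays the role that exploratory vocalizations play in DIVA-type learning: it supplies the paired motor-auditory samples from which a forward (and inverse) mapping is estimated before feedback control can operate.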
The Motor Theory (MT) of speech perception posits that listeners reference their knowledge of speech production to perceive speech, relying on an internal structure that maps acoustic cues to articulatory movements (Liberman and Mattingly, 1985). However, MT lacks an explanation of the neurophysiological pathways underlying this mapping and is based on a simplified view of the speech production system that does not account for predictive abilities during speech perception. Hickok et al. (2009) argued for an auditory theory, suggesting that the motor system’s role is limited to a minor modulatory function, consistent with the “general auditory” view of speech perception (Stevens, 2002). They propose two networks, auditory-phonological and lexical-conceptual, operating at different cortical levels to map acoustic signals onto linguistic concepts. The “general auditory” view, however, overlooks human capacity limits: listeners lack the unlimited memory needed to store every acoustic-to-phoneme mapping, which is highly variable given the lack-of-invariance problem in speech comprehension (Browman and Goldstein, 1990; Goldstein and Fowler, 2003).
The Direct Realist Theory (DRT) of speech perception, akin to the Motor Theory (MT), connects speech perception to the production mechanism (Fowler, 1986). Contrary to acoustic invariance theory (Stevens, 2002), DRT posits that speech is perceived by reconstructing speakers’ articulatory gestures rather than by directly decoding acoustic features. In DRT, a group of neurons directly represents articulatory patterns, mapping relevant acoustic information to phonemic units. This active theory requires neural mechanisms for speech production to reconstruct vocal tract movements. Both MT and DRT claim that gestures are perceived during speech listening, involving the reconstruction of articulatory-phonetic patterns up to the point at which execution would begin. However, neither theory provides compelling evidence for how acoustic cues map onto phonemic categories, and both lack a component for predictive coding during perception.
Articulatory phonology underscores gestures as the fundamental units for mapping articulation to the perception of lexical items (Ohala et al., 1986; Goldstein and Fowler, 2003). In this framework, phonological events result from dynamic variations in gestural patterns during articulation, such as changes in tongue position. Unlike in traditional models, articulatory gestures do not strictly correspond to acoustic features at the segmental or phonemic level, leading to overlap between the onset, plateau, and offset periods during the pronunciation of phonemic units and thereby addressing the lack-of-invariance problem (Browman and Goldstein, 1990; Goldstein and Fowler, 2003). Syllable and word formation rely on patterns of location and degree of constriction created by articulatory movements in the vocal tract, rather than on sequences of segments and phonemes in continuous speech. Perception involves reconstructing these articulatory patterns, either directly (as in DRT) or indirectly, via a mapping from acoustic to articulatory patterns (as in MT). Biological evidence, such as the activation of mirror neurons in the motor cortex during speech listening, supports this mapping, but debates persist about the necessity of the connection between motor neurons and articulation-related activations, as discussed above for MT.
Pickering and Garrod’s integrated theory of language comprehension and production posits a psycholinguistic framework in which comprehension and production are interlinked components that both involve predictive coding (Pickering and Garrod, 2013). This theory addresses the connection between action, action perception, and joint action in spoken communication. As in Moore’s model (Moore, 2007), speakers construct forward models of their actions before execution, and listeners activate the same forward models of articulation. The prediction system in both parties ensures the “signal parity” essential for effective communication. Predictions span semantic, syntactic, and phonological levels through covert imitation and forward modeling. Listeners use this mechanism as active feedback, closely intertwining perception and comprehension. While the model accounts for dyadic communication, it does not explain the neurobiological pathways of the forward model, and it reduces intention reading in verbal communication to a motor behavior task, overlooking the broader complexity of predicting interlocutors’ intentions.
Discussion
The extensive body of research discussed in this mini review underscores the intricate link between speech perception and production, emphasizing their bidirectional nature. Notably, online monitoring of auditory feedback plays a key role in normal speech production through a complex sensorimotor integration mechanism that adjusts phonatory and articulatory movements (Tourville et al., 2008; Behroozmand and Larson, 2011; Behroozmand et al., 2022). While existing models of speech perception-production offer valuable insights into the sources of deficit in disorders of language (e.g., aphasia) and speech (e.g., stuttering, dysarthria) (Hickok et al., 2011; Chang and Guenther, 2020), they may not fully explain the impact of impaired hearing and hearing devices (cochlear implants and hearing aids) on speech production.
The integrative sensorimotor model of speech (Behroozmand et al., 2018) proposes a framework in which the auditory-motor interface transforms speech motor plans into forward predictions about the auditory feedback consequences of intended productions. The original model assumes a normal auditory pathway, identifying sensory prediction errors and translating them into corrective signals through the auditory-motor interface to adjust speech motor parameters. While previous models have emphasized the role of the auditory feedback system in speech, no distinction was made between the mechanisms underlying peripheral vs. central auditory processing pathways. Here, we propose a revised model that incorporates a separate module to account for the role of the peripheral auditory system in speech (Peripheral Auditory System in Figure 1). This revision is critical for explicitly examining the impact of peripheral auditory dysfunction, such as in patients with hearing loss or users of hearing assistive devices (cochlear implants and hearing aids), on speech sensorimotor processes. The model illustrates how a spectrotemporally degraded signal, caused by impaired peripheral auditory pathways, may modify components and relationships within the classical model.
Figure 1

Revised auditory-motor integration model of speech with a peripheral auditory system and the relevant neurobiological pathways. In the model, an intact auditory system detects sensory prediction errors in response to a change in auditory feedback, and the auditory-motor interface transforms speech motor plans into forward predictions of auditory feedback. The generation of corrective signals in response to errors in speech production can be disrupted by impaired peripheral hearing, such as loss of or damage to the hair cells in the cochlea of the peripheral auditory system (e.g., hearing loss or cochlear implants), and/or by an impairment or distortion in the Auditory-Motor Interface (e.g., aphasia, stuttering, or dysarthria; see Hickok et al., 2011; Chang et al., 2015). HG, Heschl’s Gyrus; PT, Planum Temporale; STG, Superior Temporal Gyrus; STS, Superior Temporal Sulcus; Spt, posterior Planum Temporale; SMA, Supplementary Motor Area; PMC, Premotor Cortex; MC, Motor Cortex; BN, Brainstem Nuclei.
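The control logic summarized in Figure 1 can be sketched schematically: a forward prediction is compared against auditory feedback that has first passed through the peripheral auditory system, and the resulting sensory prediction error drives a corrective command. In the sketch below, the degradation model (attenuation plus noise), the gains, and the function names are illustrative assumptions rather than fitted parameters; it merely shows how a degraded peripheral stage can shrink and destabilize the corrective response.

```python
# Schematic sketch of the loop in Figure 1 with an explicit peripheral stage.
# The degradation model (attenuation plus additive noise) and all gains are
# illustrative assumptions, not fitted parameters.
import numpy as np

rng = np.random.default_rng(1)

def peripheral_auditory_system(signal_hz, attenuation=1.0, noise_sd=0.0):
    """Intact hearing: attenuation=1, noise_sd=0; impaired or device-processed
    hearing is mimicked by attenuation < 1 and noise_sd > 0."""
    return attenuation * signal_hz + rng.normal(scale=noise_sd)

def one_trial(target_hz, perturbation_hz, attenuation, noise_sd, gain=0.8):
    produced = target_hz                                   # feedforward plan
    feedback = peripheral_auditory_system(produced + perturbation_hz,
                                          attenuation, noise_sd)
    predicted = peripheral_auditory_system(produced, attenuation, 0.0)
    error = feedback - predicted                           # sensory prediction error
    return -gain * error                                   # corrective command

n_trials = 200
intact = [one_trial(200.0, 30.0, attenuation=1.0, noise_sd=1.0) for _ in range(n_trials)]
degraded = [one_trial(200.0, 30.0, attenuation=0.4, noise_sd=15.0) for _ in range(n_trials)]
print(f"Corrective response, intact feedback:   {np.mean(intact):6.1f} Hz (SD {np.std(intact):.1f})")
print(f"Corrective response, degraded feedback: {np.mean(degraded):6.1f} Hz (SD {np.std(degraded):.1f})")
```

In this sketch, the same 30 Hz perturbation evokes a smaller and more variable corrective command once the peripheral stage attenuates and adds noise to the feedback, paralleling the weakened error detection and correction discussed below.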
We propose that these modifications advance our understanding of how peripheral auditory deficits may degrade the accuracy of forward predictions, the detection of errors, and the generation of corrective speech motor commands by the auditory-motor interface. Impairment in the peripheral auditory system (Figure 1), such as loss of or damage to cochlear mechanisms [e.g., hair cells (HCs) and auditory nerve fibers (ANs)], can create a cascade of deficiencies that impact components of the model at different levels. Hearing impairment, particularly sensorineural hearing loss, is often linked to missing or damaged HCs in the cochlea of the inner ear (Ashmore et al., 2010; Fettiplace and Kim, 2014). This condition leaves HCs unable to effectively transduce acoustic energy into the electrical signals transmitted to the brain through ANs, leading to degraded transmission of fine- and sometimes coarse-grained spectral and temporal cues along the auditory pathway from both the left and right cochleae to the brain (Saada et al., 1996; Kral et al., 2000; Raggio and Schreiner, 2003; Middlebrooks et al., 2005; Loizou et al., 2009; Sanes and Kotak, 2011). This lack of sensory input induces neuroplastic changes in the brains of both humans (Huttenlocher and Dabholkar, 1997; Moore and Guan, 2001; Moore and Angeles, 2002; Moore and Linthicum, 2007; Iyengar, 2012; Pundir et al., 2012) and animals (Arenberg et al., 2000; Eliades and Wang, 2005; Middlebrooks et al., 2005; Eliades and Wang, 2008; Eliades and Tsunada, 2018; Middlebrooks, 2018). The spectro-temporally degraded auditory input is expected to impact the initial cortical processing of speech in the superior temporal gyrus (STG; e.g., spectro-temporal analysis, region-specific responses to different sound frequencies) and superior temporal sulcus (STS; e.g., phonological analysis and complex processing of speech) (Hickok et al., 2011, 2023; Oganian et al., 2023), mainly in Heschl’s Gyrus (HG) and the Planum Temporale (PT) (Ratnanather, 2020; Oganian et al., 2023) (Central Auditory System in Figure 1), which could also lead to deficient formation of the Auditory Target (Figure 1). These areas also project back to other brain structures via the thalamus and brainstem (Kara et al., 2006; Li et al., 2012, 2013; Hribar et al., 2014; Shiell et al., 2016; Smittenaar et al., 2016; Kumar and Mishra, 2018; Pereira-Jorge et al., 2018; Shiohama et al., 2019).
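To give a concrete sense of the spectro-temporal degradation discussed above, the following sketch implements a simple few-channel noise vocoder of the kind often used to approximate cochlear implant processing in simulation studies; the channel count, band edges, and filter settings are arbitrary illustrative choices and do not correspond to any particular device or processing strategy.

```python
# Rough sketch of how a cochlear-implant-like front end discards spectro-temporal
# detail: a few-band noise vocoder that keeps only slow amplitude envelopes.
# Band edges, channel count, and filter settings are illustrative assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_channels=4, f_low=100.0, f_high=7000.0):
    """Split `signal` into log-spaced bands, extract each band's envelope,
    and use it to modulate band-limited noise, discarding fine structure."""
    edges = np.geomspace(f_low, f_high, n_channels + 1)
    rng = np.random.default_rng(0)
    vocoded = np.zeros_like(signal)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        envelope = np.abs(hilbert(band))           # slow amplitude envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(signal)))
        vocoded += envelope * carrier              # envelope-modulated noise
    return vocoded

if __name__ == "__main__":
    fs = 16000
    t = np.arange(0, 0.5, 1 / fs)
    vowel_like = np.sin(2 * np.pi * 250 * t) + 0.5 * np.sin(2 * np.pi * 2500 * t)
    degraded = noise_vocode(vowel_like, fs)
    print("Vocoded output:", len(degraded), "samples; fine spectral structure removed")
```

Inspecting the vocoded output shows that the slow amplitude envelopes survive while harmonic fine structure and formant detail are largely lost, which is the sense in which the input reaching the Central Auditory System in Figure 1 is spectro-temporally degraded.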
The degraded signal may also impact the formation of sensory information during the learning phase (Goupell, 2015; Svirsky, 2017; Ratnanather, 2020; Arjmandi et al., 2021, 2022) and its transformation into appropriate speech motor commands in the Supplementary Motor Area (SMA) and/or Premotor Cortex (PMC; Pre-Motor System in Figure 1), thus impacting motor planning and the initiation and temporal organization of the movement sequences involved in speech production. Such a distorted internal model for sensory prediction may impact the integration of motor plans and auditory input in the Auditory-Motor Interface (Figure 1), a process believed to involve multiple cortical regions, primarily the posterior PT (Spt) (Hickok et al., 2003, 2009; Chang et al., 2015), potentially resulting in the generation of impaired forward predictions and motor control commands. The Speech Motor System in Figure 1 is therefore also expected to be impacted, because transforming any mismatch between the learned motor commands and auditory feedback into compensatory gestures in the motor cortex (MC) requires a normal motor plan signal as well as faithful transmission of auditory feedback (Brown et al., 2008; Simonyan and Horwitz, 2011; Tourville and Guenther, 2013; Simonyan, 2014; Scott et al., 2020). Thus, an impaired ability to detect errors complicates the generation of effective corrective speech motor commands, hindering the auditory-motor interface. Motor neurons in the brainstem nuclei (BN), in turn, may not be able to accurately innervate the muscles that control the components of the Speech Articulators in Figure 1 involved in speech production, such as the respiratory system, vocal fold vibration, and the movements of the tongue, lips, jaw, and velopharyngeal port. Despite this potential cascade of impairments, the neurophysiological pathways that explain how these components affect sensorimotor processing when the peripheral auditory system is impaired remain largely unknown. Understanding these effects can help elucidate the atypical features of speech production at the segmental and suprasegmental levels exhibited by listeners with hearing loss and those with cochlear implants, such as a contracted vowel space (Economou et al., 1992; Langereis et al., 1997; Schenk et al., 2003; Lane et al., 2007; Ménard et al., 2007), deviated vocal pitch (Perkell et al., 1992; Svirsky et al., 1992; Lane et al., 1995) and loudness (Plant and Oster, 1986; Perkell et al., 1992; Schenk et al., 2003; Evans and Deliyski, 2007), decreased vocal stability (Campisi et al., 2005; Hocevar-Boltezar et al., 2006; Evans and Deliyski, 2007; Dehqan and Scherer, 2011; Eskander et al., 2014; Wang et al., 2017), and increased variability in voice-onset time during consonant production (Tartter et al., 1989; Economou et al., 1992; Lane et al., 1994, 1995; Kishon-Rabin et al., 1999).
In conclusion, our understanding of the speech perception-production relationship has advanced significantly; however, the effects of impaired hearing, specifically at the peripheral level, remain elusive. To address the challenges presented by impaired auditory feedback, such as restricted access to spectrotemporal information, existing models need to be enhanced. A refined model that integrates the peripheral auditory system can better explain the intricate interplay between perception and production of speech in the presence of impaired auditory feedback. Experimental data from testing such a model have the potential to lay the groundwork for developing customized diagnostic tools and personalized treatment approaches, ultimately optimizing both auditory input and speech outcomes.
Statements
Author contributions
MA: Conceptualization, Data curation, Investigation, Project administration, Resources, Supervision, Validation, Writing – original draft, Writing – review & editing. RB: Conceptualization, Resources, Validation, Writing – review & editing, Supervision.
Funding
The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
The author(s) declared that they were an editorial board member of Frontiers, at the time of submission. This had no impact on the peer review process and the final decision.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
1
Arenberg J. G. Furukawa S. Middlebrooks J. C. (2000). Auditory cortical images of tones and noise bands. JARO1, 183–194. doi: 10.1007/s101620010036
2
Arjmandi M. K. Houston D. Dilley L. C. (2022). Variability in quantity and quality of early linguistic experience in children with Cochlear implants: evidence from analysis of natural auditory environments. Ear Hear.43, 685–698. doi: 10.1097/AUD.0000000000001136
3
Arjmandi M. Houston D. Wang Y. Dilley L. (2021). Estimating the reduced benefit of infant-directed speech in cochlear implant-related speech processing. Neurosci. Res.171, 49–61. doi: 10.1016/j.neures.2021.01.007
4
Arjmandi M. K. Jahn K. N. Arenberg J. G. (2022). Single-channel focused thresholds relate to vowel identification in pediatric and adult cochlear implant listeners. Trends Hear26:233121652210953. doi: 10.1177/23312165221095364
5
Ashmore J. Avan P. Brownell W. E. Dallos P. Dierkes K. Fettiplace R. et al . (2010). The remarkable cochlear amplifier. Hear. Res.266, 1–17. doi: 10.1016/j.heares.2010.05.001
6
Behroozmand R. Bonilha L. Rorden C. Hickok G. Fridriksson J. (2022). Neural correlates of impaired vocal feedback control in post-stroke aphasia. Neuroimage250:118938. doi: 10.1016/j.neuroimage.2022.118938
7
Behroozmand R. Larson C. R. (2011). Error-dependent modulation of speech-induced auditory suppression for pitch-shifted voice feedback. BMC Neurosci.12:54. doi: 10.1186/1471-2202-12-54
8
Behroozmand R. Phillip L. Johari K. Bonilha L. Rorden C. Hickok G. et al . (2018). Sensorimotor impairment of speech auditory feedback processing in aphasia. Neuroimage165, 102–111. doi: 10.1016/j.neuroimage.2017.10.014
9
Behroozmand R. Sangtian S. Korzyukov O. Larson C. R. (2016). A temporal predictive code for voice motor control: evidence from ERP and behavioral responses to pitch-shifted auditory feedback. Brain Res.1636, 1–12. doi: 10.1016/j.brainres.2016.01.040
10
Bock J. K. (1994). Language production: methods and methodologies. Psychon. Bull. Rev.3, 395–421. doi: 10.3758/BF03214545
11
Bock K. Miller C. A. (1991). Broken agreement. Cogn. Psychol.23, 45–93. doi: 10.1016/0010-0285(91)90003-7
12
Browman C. P. Goldstein L. (1990). “Tiers in articulatory phonology, with some implications for casual speech,” in Papers in laboratory phonology I: between the grammar and physics of speech. (Eds.) KingstonJ.BeckmanM. (Cambridge: Cambridge University Press), 341–378.
13
Brown S. Ngan E. Liotti M. (2008). A larynx area in the human motor cortex. Cereb. Cortex18, 837–845. doi: 10.1093/cercor/bhm131
14
Buchsbaum B. Hickok G. Humphries C. (2001). Role of left posterior superior temporal gyrus in phonological processing for speech perception and production. Cogn. Sci.25, 663–678. doi: 10.1207/s15516709cog2505_2
15
Campisi P. Low A. Papsin B. Mount R. Cohen-Kerem R. Harrison R. (2005). Acoustic analysis of the voice in pediatric cochlear implant recipients: a longitudinal study. Laryngoscope115, 1046–1050. doi: 10.1097/01.MLG.0000163343.10549.4C
16
Casserly E. D. Pisoni D. B. (2010). Speech perception and production. Wiley Interdiscip. Rev. Cogn. Sci.1, 629–647. doi: 10.1002/wcs.63
17
Chang S. E. Guenther F. H. (2020). Involvement of the Cortico-basal ganglia-Thalamocortical loop in developmental stuttering. Front. Psychol.10:3088. doi: 10.3389/fpsyg.2019.03088
18
Chang E. F. Raygor K. P. Berger M. S. (2015). Contemporary model of language organization: an overview for neurosurgeons. J. Neurosurg.122, 250–261. doi: 10.3171/2014.10.JNS132647
19
Chomsky N. (1995). Mind association language and nature. Mind104, 1–61. doi: 10.1093/mind/104.413.1
20
Civier O. Bullock D. Max L. Guenther F. H. (2013). Computational modeling of stuttering caused by impairments in a basal ganglia thalamo-cortical circuit involved in syllable selection and initiation. Brain Lang.126, 263–278. doi: 10.1016/j.bandl.2013.05.016
21
Cutler A. Dahan D. van Donselaar W. (1997). Prosody in the comprehension of spoken language: a literature review. Lang. Speech40, 141–201. doi: 10.1177/002383099704000203
22
Dehqan A. Scherer R. C. (2011). Objective voice analysis of boys with profound hearing loss. J. Voice25, e61–e65. doi: 10.1016/j.jvoice.2010.08.006
23
Dell G. S. (1988). The retrieval of phonological forms in production: tests of predictions from a connectionist model. J. Mem. Lang.27, 124–142. doi: 10.1016/0749-596X(88)90070-8
24
Denes P. B. Pinson E. N. (1963). The Speech Chain: The Physics and Biology of Spoken Language. Murray Hill, NJ: Bell Telephone Laboratories.
25
Diehl R. L. Lotto A. J. Holt L. L. (2004). Speech perception. Annu. Rev. Psychol.55, 149–179. doi: 10.1146/annurev.psych.55.090902.142028
26
Economou A. Tartter V. C. Chute P. M. Hellman S. A. (1992). Speech changes following reimplantation from a single-channel to a multichannel cochlear implant. J. Acoust. Soc. Am.92, 1310–1323. doi: 10.1121/1.403925
27
Eliades S. J. Tsunada J. (2018). Auditory cortical activity drives feedback-dependent vocal control in marmosets. Nat. Commun.9:2540. doi: 10.1038/s41467-018-04961-8
28
Eliades S. J. Wang X. (2005). Dynamics of auditory-vocal interaction in monkey auditory cortex. Cereb. Cortex15, 1510–1523. doi: 10.1093/cercor/bhi030
29
Eliades S. J. Wang X. (2008). Neural substrates of vocalization feedback monitoring in primate auditory cortex. Nature453, 1102–1106. doi: 10.1038/nature06910
30
Eskander A. Gordon K. A. Tirado Y. Hopyan T. Russell L. Allegro J. et al . (2014). Normal-like motor speech parameters measured in children with long-term cochlear implant experience using a novel objective analytic technique. JAMA Otolaryngol. Head Neck Surg.140, 967–974. doi: 10.1001/jamaoto.2014.1730
31
Evans M. K. Deliyski D. D. (2007). Acoustic voice analysis of prelingually deaf adults before and after cochlear implantation. J. Voice21, 669–682. doi: 10.1016/j.jvoice.2006.07.005
32
Fadiga L. Craighero L. Buccino G. Rizzolatti G. (2002). Speech listening specifically modulates the excitability of tongue muscles: a TMS study. Eur. J. Neurosci.15, 399–402. doi: 10.1046/j.0953-816x.2001.01874.x
33
Fernald A. Mazzie C. (1991). Prosody and focus in speech to infants and adults. Dev. Psychol.27, 209–221. doi: 10.1037/0012-1649.27.2.209
34
Fettiplace R. Kim K. X. (2014). The physiology of mechanoelectrical transduction channels in hearing. Physiol. Rev.94, 951–986. doi: 10.1152/physrev.00038.2013
35
Fowler C. A. (1986). An event approach to the study of speech perception from a direct- realist perspective. J. Phon.14, 3–28. doi: 10.1016/S0095-4470(19)30607-2
36
Goldstein L. Fowler C. A. (2003). “Articulatory phonology: a phonology for public language use” in Phonetics and phonology in language comprehension and production (New York, NY: Mouton), 159–207.
37
Goupell M. J. (2015). Pushing the envelope of auditory research with cochlear implants. Acoust Today11, 26–33.
38
Gracco V. L. Abbs J. H. (1987). “Programming and execution processes of speech movement control: potential neural correlates” in Motor and sensory processes of language. eds. KellerE.GopnikM. (Hillsdale NJ: Lawrence Erlbaum Associates, Inc.), 163–201.
39
Gregory S. W. Webster S. (1996). A nonverbal signal in voices of interview partners effectively predicts communication accommodation and social status perceptions. J. Pers. Soc. Psychol.70, 1231–1240. doi: 10.1037/0022-3514.70.6.1231
40
Guenther F. H. Ghosh S. S. Tourville J. A. (2006). Neural modeling and imaging of the cortical interactions underlying syllable production. Brain Lang.96, 280–301. doi: 10.1016/j.bandl.2005.06.001
41
Guenther F. H. Hampson M. Johnson D. (1998). A theoretical investigation of reference frames for the planning of speech movements. Psychol. Rev.105, 611–633. doi: 10.1037/0033-295X.105.4.611-633
42
Hart B. Risley T. R. (1995). Meaningful differences in the everyday experience of young American children. Baltimore, MD: Paul H. Brookes Publishing.
43
Hickok G. (2011). The role of mirror neurons in speech and language processing. Brain Lang.112, 1–2. doi: 10.1016/j.bandl.2009.10.006
44
Hickok G. Buchsbaum B. Humphries C. Muftuler T. (2003). Auditory-motor interaction revealed by fMRI: speech, music, and working memory in area Spt. J. Cogn. Neurosci.15, 673–682. doi: 10.1162/089892903322307393
45
Hickok G. Holt L. L. Lotto A. J. (2009). Response to Wilson: what does motor cortex contribute to speech perception? Trends Cogn. Sci.13, 330–331. doi: 10.1016/j.tics.2009.05.002
46
Hickok G. Houde J. Rong F. (2011). Sensorimotor integration in speech processing: computational basis and neural organization. Neuron69, 407–422. doi: 10.1016/j.neuron.2011.01.019
47
Hickok G. Okada K. Serences J. T. (2009). Area SPT in the human planum temporale supports sensory-motor integration for speech processing. J. Neurophysiol.101, 2725–2732. doi: 10.1152/jn.91099.2008
48
Hickok G. Poeppel D. (2004). Dorsal and ventral streams: a framework for understanding aspects of the functional anatomy of language. Cognition92, 67–99. doi: 10.1016/j.cognition.2003.10.011
49
Hickok G. Venezia J. Teghipco A. (2023). Beyond Broca: neural architecture and evolution of a dual motor speech coordination system. Brain146, 1775–1790. doi: 10.1093/brain/awac454
50
Hocevar-Boltezar I. Radsel Z. Vatovec J. Geczy B. Cernelc S. Gros A. et al . (2006). Change of phonation Control after Cochlear implantation. Otol. Neurotol.27, 499–503. doi: 10.1097/00129492-200606000-00011
51
Hockett C. F. (1967). Where the tongue slips, there slip I. To Honor Roman Jakobson2, 910–936. doi: 10.1515/9783111349121-007
52
Houde J. F. Jordan M. I. (2002). Sensorimotor adaptation of speech I: compensation and adaptation. J. Speech Lang. Hear. Res.45, 295–310. doi: 10.1044/1092-4388(2002/023)
53
Hribar M. Šuput D. Carvalho A. A. Battelino S. Vovk A. (2014). Structural alterations of brain grey and white matter in early deaf adults. Hear. Res.318, 1–10. doi: 10.1016/j.heares.2014.09.008
54
Hurley S. (2008). The shared circuits model (SCM): how control, mirroring, and simulation can enable imitation, deliberation, and mindreading. Behav. Brain. Sci.31, 1–22; discussion 22–58. doi: 10.1017/S0140525X07003123
55
Huttenlocher P. R. Dabholkar A. S. (1997). Regional differences in synaptogenesis in human cerebral cortex. J. Comp. Neurol.387, 167–178. doi: 10.1002/(SICI)1096-9861(19971020)387:2<167::AID-CNE1>3.0.CO;2-Z
56
Iyengar S. (2012). Development of the human auditory system. J. Indian Inst. Sci.92, 427–440.
57
Kara A. Hakan Ozturk A. Kurtoglu Z. Umit Talas D. Aktekin M. Saygili M. et al . (2006). Morphometric comparison of the human corpus callosum in deaf and hearing subjects: an MRI study. J. Neuroradiol.33, 158–163. doi: 10.1016/S0150-9861(06)77253-4
58
Khoshhal Mollasaraei Z. Behroozmand R. (2023). Impairment of the internal forward model and feedback mechanisms for vocal sensorimotor control in post-stroke aphasia: evidence from directional responses to altered auditory feedback. Exp. Brain Res.242, 225–239. doi: 10.1007/s00221-023-06743-1
59
Kishon-Rabin L. Taitelbaum R. Tobin Y. Hildesheimer M. (1999). The effect of partially restored hearing on speech production of postlingually deafened adults with multichannel cochlear implants. J. Acoust. Soc. Am.106, 2843–2857. doi: 10.1121/1.428109
60
Kluender K. R. (1994). Speech perception as a tractable problem in cognitive science. San Diego, CA: Academic Press.
61
Kral A. Hartmann R. Tillein J. Heid S. Klinke R. (2000). Congenital auditory deprivation reduces synaptic activity within the auditory cortex in a layer-specific manner. Cereb. Cortex10, 714–726. doi: 10.1093/cercor/10.7.714
62
Kuhl P. K. Conboy B. T. Coffey-Corina S. Padden D. Rivera-Gaxiola M. Nelson T. (2008). Phonetic learning as a pathway to language: new data and native language magnet theory expanded (NLM-e). Philos. Trans. R. Soc. B Biol. Sci.363, 979–1000. doi: 10.1098/rstb.2007.2154
63
Kumar U. Mishra M. (2018). Pattern of neural divergence in adults with prelingual deafness: based on structural brain analysis. Brain Res.1701, 58–63. doi: 10.1016/j.brainres.2018.07.021
64
Lane H. Matthies M. L. Guenther F. H. Denny M. Perkell J. S. Stockmann E. et al . (2007). Effects of short- and long-term changes in auditory feedback on vowel and sibilant contrasts. J. Speech Lang. Hear. Res.50, 913–927. doi: 10.1044/1092-4388(2007/065)
65
Lane H. Wozniak J. Matthies M. Svirsky M. Perkell J. (1995). Phonemic resetting versus postural adjustments in the speech of cochlear implant users: an exploration of voice-onset time. J. Acoust. Soc. Am.98, 3096–3106. doi: 10.1121/1.413798
66
Lane H. Wozniak J. Perkell J. (1994). Changes in voice-onset time in speakers with cochlear implants. J. Acoust. Soc. Am.96, 56–64. doi: 10.1121/1.410442
67
Langereis M. C. Bosman A. J. van Olphen A. F. Smoorenburg G. F. (1997). Changes in vowel quality in post-lingually deafened cochlear implant users. Int. J. Audiol.36, 279–297. doi: 10.3109/00206099709071980
68
Levelt W. J. M. Roelofs A. Meyer A. S. (1999). A theory of lexical access in speech production. Behav. Brain Sci.22, 1–75. doi: 10.1017/S0140525X99001776
69
Li J. Li W. Xian J. Li Y. Liu Z. Liu S. et al . (2012). Cortical thickness analysis and optimized voxel-based morphometry in children and adolescents with prelingually profound sensorineural hearing loss. Brain Res.1430, 35–42. doi: 10.1016/j.brainres.2011.09.057
70
Li W. Li J. Xian J. Lv B. Li M. Wang C. et al . (2013). Alterations of grey matter asymmetries in adolescents with prelingual deafness: a combined VBM and cortical thickness analysis. Restor. Neurol. Neurosci.31, 1–17. doi: 10.3233/RNN-2012-120269
71
Liberman A. M. Mattingly I. G. (1989). A specialization for speech perception. Science243, 489–494. doi: 10.1126/science.2643163
72
Liberman A. M. Mattingly I. G. (1985). The motor theory of speech perception revised. Cognition21, 1–36. doi: 10.1016/0010-0277(85)90021-6
73
Lindblom B. (1990). “Explaining phonetic variation: A sketch of the H&H theory,” in Speech production and speech modelling. Dordrecht: Springer Netherlands. 403–439.
74
Loizou P. C. Hu Y. Litovsky R. Yu G. Peters R. Lake J. et al . (2009). Speech recognition by bilateral cochlear implant users in a cocktail-party setting. J. Acoust. Soc. Am.125, 372–383. doi: 10.1121/1.3036175
75
MacDonald E. N. Purcell D. W. Munhall K. G. (2011). Probing the independence of formant control using altered auditory feedback. J. Acoust. Soc. Am.129, 955–965. doi: 10.1121/1.3531932
76
MacKay D. G. (1982). The problems of flexibility, fluency, and speed-accuracy trade-off in skilled behavior. Psychol. Rev.89, 483–506. doi: 10.1037/0033-295X.89.5.483
77
Massaro D. W. (2014). Understanding language: an information-processing analysis of speech perception, reading, and psycholinguistics. Cambridge (Massachusetts): Academic Press.
78
McGettigan C. Tremblay P. (2017). “Links between perception and production: examining the roles of motor and premotor cortices in understanding speech,” in Oxford handbook of psycholinguistics (Oxford: Oxford University Press).
79
Ménard L. Polak M. Denny M. Burton E. Lane H. Matthies M. L. et al . (2007). Interactions of speaking condition and auditory feedback on vowel production in postlingually deaf adults with cochlear implants. J. Acoust. Soc. Am.121, 3790–3801. doi: 10.1121/1.2710963
80
Menenti L. Gierhan S. M. E. Segaert K. Hagoort P. (2011). Shared language: overlap and segregation of the neuronal infrastructure for speaking and listening revealed by functional MRI. Psychol. Sci.22, 1173–1182. doi: 10.1177/0956797611418347
81
Middlebrooks J. C. (2018). Chronic deafness degrades temporal acuity in the electrically stimulated auditory pathway. JARO19, 541–557. doi: 10.1007/s10162-018-0679-3
82
Middlebrooks J. C. Bierer J. A. Snyder R. L. (2005). Cochlear implants: the view from the brain. Curr. Opin. Neurobiol.15, 488–493. doi: 10.1016/j.conb.2005.06.004
83
Moore R. K. (2007). Spoken language processing: piecing together the puzzle. Speech Comm.49, 418–435. doi: 10.1016/j.specom.2007.01.011
84
Moore J. K. Angeles L. (2002). Maturation of human auditory cortex: implications for speech perception. Ann. Otol. Rhinol. Laryngol.111, 7–10. doi: 10.1177/00034894021110S502
85
Moore J. K. Guan Y. L. (2001). Cytoarchitectural and axonal maturation in human auditory cortex. JARO2, 297–311. doi: 10.1007/s101620010052
86
Moore J. K. Linthicum F. H. (2007). The human auditory system: a timeline of development. Int. J. Audiol.46, 460–478. doi: 10.1080/14992020701383019
87
Niziolek C. A. Nagarajan S. S. Houde J. F. (2013). What does motor efference copy represent? Evidence from speech production. J. Neurosci.33, 16110–16116. doi: 10.1523/JNEUROSCI.2137-13.2013
88
Oganian Y. Bhaya-Grossman I. Johnson K. Chang E. F. (2023). Vowel and formant representation in the human auditory speech cortex. Neuron111, 2105–2118.e4. doi: 10.1016/j.neuron.2023.04.004
89
Ohala J. J. Browman C. P. Goldstein L. M. (1986). Towards an articulatory phonology. Phonology3, 219–252. doi: 10.1017/S0952675700000658
90
Oller D. K. Eilers R. E. (1988). The role of audition in infant babbling. Child Dev.59, 441–449. doi: 10.2307/1130323
91
Pardo J. S. (2006). On phonetic convergence during conversational interaction. J. Acoust. Soc. Am.119, 2382–2393. doi: 10.1121/1.2178720
92
Paus T. (1996). Modulation of cerebral blood flow in the human auditory cortex during speech: role of motor-to-sensory discharges. Eur. J. Neurosci.8, 2236–2246. doi: 10.1111/j.1460-9568.1996.tb01187.x
93
Pereira-Jorge M. R. Andrade K. C. Palhano-Fontes F. X. Diniz P. R. B. Sturzbecher M. Santos A. C. et al . (2018). Anatomical and functional MRI changes after one year of auditory rehabilitation with hearing aids. Neural Plast.2018, 1–13. doi: 10.1155/2018/9303674
94
Perkell J. Lane H. Svirsky M. Webster J. (1992). Speech of cochlear implant patients: a longitudinal study of vowel production. J. Acoust. Soc. Am.91, 2961–2978. doi: 10.1121/1.402932
95
Pickering M. J. Garrod S. (2013). An integrated theory of language production and comprehension. Behav. Brain Sci.36, 329–347. doi: 10.1017/S0140525X12001495
96
Plant G. Oster A.-M. (1986). The effects of cochlear implantation on speech production. A case study. TL-QPSR27, 65–86.
97
Plaut D. C. Kello C. T. (2013). “The emergence of phonology from the interplay of speech comprehension and production: A distributed connectionist approach,” in The emergence of language. Psychology Press. 399–434.
98
Pundir A. S. Hameed L. S. Dikshit P. C. Kumar P. Mohan S. Radotra B. et al . (2012). Expression of medium and heavy chain neurofilaments in the developing human auditory cortex. Brain Struct. Funct.217, 303–321. doi: 10.1007/s00429-011-0352-7
99
Purcell D. W. Munhall K. G. (2006a). Compensation following real-time manipulation of formants in isolated vowels. J. Acoust. Soc. Am.119, 2288–2297. doi: 10.1121/1.2173514
100
Purcell D. W. Munhall K. G. (2006b). Adaptive control of vowel formant frequency: evidence from real-time formant manipulation. J. Acoust. Soc. Am.120, 966–977. doi: 10.1121/1.2217714
101
Raggio M. W. Schreiner C. E. (2003). Neuronal responses in cat primary auditory cortex to electrical cochlear stimulation: IV. Activation pattern for sinusoidal stimulation. J. Neurophysiol.89, 3190–3204. doi: 10.1152/jn.00341.2002
102
Ratnanather J. T. (2020). Structural neuroimaging of the altered brain stemming from pediatric and adolescent hearing loss—scientific and clinical challenges. Wiley Interdiscip. Rev. Syst. Biol. Med.12:e1469. doi: 10.1002/wsbm.1469
103
Saada A. A. Niparko J. K. Ryugo D. K. (1996). Morphological changes in the cochlear nucleus of congenitally deaf white cats. Brain Res.736, 315–328. doi: 10.1016/0006-8993(96)00719-6
104
Sanes D. H. Kotak V. C. (2011). Developmental plasticity of auditory cortical inhibitory synapses. Hear. Res.279, 140–148. doi: 10.1016/j.heares.2011.03.015
105
Schenk B. S. Baumgartner W. D. Hamzavi J. S. (2003). Changes in vowel quality after cochlear implantation. ORL65, 184–188. doi: 10.1159/000072257
106
Scott T. L. Haenchen L. Daliri A. Chartove J. Guenther F. H. Perrachione T. K. (2020). Noninvasive neurostimulation of left ventral motor cortex enhances sensorimotor adaptation in speech production. Brain Lang.209:104840. doi: 10.1016/j.bandl.2020.104840
107
Scott S. K. Johnsrude I. S. (2003). The neuroanatomical and functional organization of speech perception. Trends Neurosci.26, 100–107. doi: 10.1016/S0166-2236(02)00037-1
108
Shiell M. M. Champoux F. Zatorre R. J. (2016). The right hemisphere Planum Temporale supports enhanced visual motion detection ability in deaf people: evidence from cortical thickness. Neural Plast.2016, 1–9. doi: 10.1155/2016/7217630
109
Shiohama T. McDavid J. Levman J. Takahashi E. (2019). The left lateral occipital cortex exhibits decreased thickness in children with sensorineural hearing loss. Int. J. Dev. Neurosci.76, 34–40. doi: 10.1016/j.ijdevneu.2019.05.009
110
Simonyan K. (2014). The laryngeal motor cortex: its organization and connectivity. Curr. Opin. Neurobiol.28, 15–21. doi: 10.1016/j.conb.2014.05.006
111
Simonyan K. Horwitz B. (2011). Laryngeal motor cortex and control of speech in humans. Neuroscientist17, 197–208. doi: 10.1177/1073858410386727
112
Skinner B. F. (1938). The behavior of organisms. New York, NY: Appleton-Century.
113
Smittenaar C. R. MacSweeney M. Sereno M. I. Schwarzkopf D. S. (2016). Does congenital deafness affect the structural and functional architecture of primary visual cortex? Open Neuroimaging J.10, 1–19. doi: 10.2174/1874440001610010001
114
Stevens K. N. (2002). Toward a model for lexical access based on acoustic landmarks and distinctive features. J. Acoust. Soc. Am.111, 1872–1891. doi: 10.1121/1.1458026
115
Svirsky M. (2017). Cochlear implants and electronic hearing. Phys. Today70, 52–58. doi: 10.1063/PT.3.3661
116
Svirsky M. A. Lane H. Perkell J. S. Wozniak J. (1992). Effects of short-term auditory deprivation on speech production in adult cochlear implant users. J. Acoust. Soc. Am.92, 1284–1300. doi: 10.1121/1.403923
117
Guenther F. H. (1995). Speech sound acquisition, coarticulation, and rate effects in a neural network model of speech production. Psychol. Rev.102, 594–621.
118
Tartter V. C. Chute P. M. Hellman S. A. (1989). The speech of a postlingually deafened teenager during the first year of use of a multichannel cochlear implant. J. Acoust. Soc. Am.86, 2113–2121. doi: 10.1121/1.398471
119
Tourville J. A. Guenther F. H. (2013). The DIVA model: A neural theory of speech acquisition and production. Lang. Cogn. Process.26, 952–981. doi: 10.1080/01690960903498424
120
Tourville J. A. Reilly K. J. Guenther F. H. (2008). Neural mechanisms underlying auditory feedback control of speech. Neuroimage39, 1429–1443. doi: 10.1016/j.neuroimage.2007.09.054
121
Turnbull K. L. P. Justice L. M. (2017). Language development from theory to practice. 3rd edition. Upper Saddle River: Pearson Education Inc.
122
Villacorta V. M. Perkell J. S. Guenther F. H. (2007). Sensorimotor adaptation to feedback perturbations of vowel acoustics and its relation to perception. J. Acoust. Soc. Am.122, 2306–2319. doi: 10.1121/1.2773966
123
Wang Y. Liang F. Yang J. Zhang X. Liu J. Zheng Y. (2017). The acoustic characteristics of the voice in Cochlear-implanted children: a longitudinal study. J. Voice31, 773.e21–773.e26. doi: 10.1016/j.jvoice.2017.02.007
124
Watkins K. Paus T. (2004). Modulation of motor excitability during speech perception: the role of Broca’s area. J. Cogn. Neurosci.16, 978–987. doi: 10.1162/0898929041502616
125
Weisleder A. Fernald A. (2013). Talking to children matters: early language experience strengthens processing and builds vocabulary. Psychol. Sci.24, 2143–2152. doi: 10.1177/0956797613488145
126
Wilson S. M. Saygin A. P. Sereno M. I. Iacoboni M. (2004). Listening to speech activates motor areas involved in speech production. Nat. Neurosci.7, 701–702. doi: 10.1038/nn1263
127
Winkworth A. L. Davis P. J. (1997). Speech breathing and the Lombard effect. J. Speech Lang. Hear. Res.40, 159–169. doi: 10.1044/jslhr.4001.159
Summary
Keywords
speech perception-production, auditory-motor integration, perception-driven adaptation, real-time auditory feedback, impaired peripheral auditory processing
Citation
Arjmandi MK and Behroozmand R (2024) On the interplay between speech perception and production: insights from research and theories. Front. Neurosci. 18:1347614. doi: 10.3389/fnins.2024.1347614
Received
01 December 2023
Accepted
08 January 2024
Published
25 January 2024
Volume
18 - 2024
Edited by
Jufang He, City University of Hong Kong, Hong Kong SAR, China
Reviewed by
Wenjian Sun, University of Southern California, United States
Lixia Gao, Zhejiang University, China
Copyright
© 2024 Arjmandi and Behroozmand.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Meisam K. Arjmandi, meisam@mailbox.sc.edu