MINI REVIEW article

Front. Hum. Neurosci., 06 October 2015

Sec. Speech and Language

Volume 9 - 2015 | https://doi.org/10.3389/fnhum.2015.00558

Neural bases of accented speech perception

  • 1. Division of Psychology and Language Sciences, Department of Speech, Hearing, and Phonetic Sciences, University College London, London, UK

  • 2. School of Psychological Sciences, University of Manchester, Manchester, UK

Abstract

The recognition of unfamiliar regional and foreign accents represents a challenging task for the speech perception system (Floccia et al., 2006; Adank et al., 2009). Despite the frequency with which we encounter such accents, the neural mechanisms supporting successful perception of accented speech are poorly understood. Nonetheless, candidate neural substrates involved in processing speech in challenging listening conditions, including accented speech, are beginning to be identified. This review will outline neural bases associated with perception of accented speech in the light of current models of speech perception, and compare these data to brain areas associated with processing other speech distortions. We will subsequently evaluate competing models of speech processing with regards to neural processing of accented speech. See Cristia et al. (2012) for an in-depth overview of behavioral aspects of accent processing.

Processing accent variation at pre- and post-lexical levels

Models outlining the neural organization of speech perception (Hickok and Poeppel, 2007; Rauschecker and Scott, 2009) propose that the locus of processing intelligible speech is the temporal lobe within the ventral stream of speech processing. Rauschecker and Scott suggest that intelligibility processing has its center of gravity in the left anterior STS (Superior Temporal Sulcus), while Hickok and Poeppel propose that processing of intelligible speech is bilaterally organized and located both anteriorly and posteriorly to Heschl's Gyrus. However, both models are based on intelligible speech perception and do not make explicit predictions about the cortical substrates that subserve speech perception under challenging listening conditions (cf. Adank, 2012a, for a discussion of processing of intelligible speech).

A handful of fMRI studies address how the brain processes accent variation. Listening to difficult foreign phonemic contrasts (e.g., /l/-/r/ contrasts for Japanese listeners) has been associated with increased activation in auditory processing/speech production areas, including left Inferior Frontal Gyrus (IFG), left insula, bilateral ventral Premotor Cortex, right Pre- and Post-Central Gyrus, left anterior Superior Temporal Sulcus and Gyrus (STS/STG), left Planum Temporale (PT), left Sylvian parietal-temporal area (Spt), left Supramarginal Gyrus (SMG), and cerebellum bilaterally (Callan et al., 2004, 2014). It is noteworthy that the neural bases associated with listening to foreign languages overlap with those reported for unfamiliar accent processing, including bilateral STG/STS/MTG and left IFG (Perani et al., 1996; Perani and Abutalebi, 2005; Hesling et al., 2012).

For sentence processing (Table 1, Figure 1), listening to an unfamiliar accent involves a network of frontal (left IFG, bilateral operculum/insula, Superior Frontal Gyrus), temporal (left Middle Temporal Gyrus [MTG], right STG), and medial regions (Supplementary Motor Area [SMA]) (Adank, 2012b; Adank et al., 2012b, 2013; Yi et al., 2014). It is unclear how the accent processing network maps onto the networks in Rauschecker and Scott (2009) and Hickok and Poeppel (2007). The coordinates for accent processing in the left temporal lobe are located anteriorly and posteriorly to Hickok and Poeppel's proposed STG area for spectrotemporal analysis, while the coordinates in left IFG are located inside Hickok and Poeppel's left inferior frontal area assigned to the dorsal stream's articulatory network. In contrast, the temporal coordinates in Table 1 fit well with Rauschecker and Scott's antero-ventral and postero-dorsal areas placed anteriorly and posteriorly to left primary auditory cortex, respectively, and the left IFG coordinates fall within their antero-ventral left inferior frontal area.

Table 1

| Distortion | Study | Contrast | MNI (x, y, z) | Location | Original location* |
|---|---|---|---|---|---|
| Unfamiliar accent | Adank et al., 2012b | Sentences in unfamiliar > sentences in familiar accent | −54, −40, 4 | L MTG | L Post. STG/SMG |
| | | | −60, −34, 8 | L MTG | L Post. STG/PT |
| | | | −60, −26, −4 | L MTG | L Post. MTG |
| | | | 60, −32, 2 | R STG | R Post. STG/SMG |
| | | | −50, 12, 24 | L POp | L POp/PG |
| | | | −46, 16, 12 | L POp | L POp/PTr |
| | | | 54, −26, −2 | R STG | R Post. STG/MTG/SMG |
| | | | 54, 4, −16 | R RO | R Ant. STG/TP/MTG |
| | | | 38, 18, 26 | R PTr | R Central Opercular Cortex |
| | Adank et al., 2012a | Sentences in unfamiliar > sentences in familiar accent | −60, −12, −6 | L MTG | L STG/STS |
| | Adank et al., 2013 | Sentences in unfamiliar accent > unintelligible sentences | −62, −32, 4 | L MTG | L STS |
| | | | −58, −4, −8 | L FO | L STG |
| | | | −60, −16, −8 | L MTG | L MTG |
| | | | −50, 18, 24 | L PTr | L IFG (PTr) |
| | | | −46, 28, −4 | L POrb | L IFG (POrb) |
| | | | −36, 22, −4 | L Insula | L Insula |
| | | | 56, −20, −6 | R STG | R STG |
| | | | 60, 2, −12 | R STG | R STG |
| | | | −2, 10, 60 | L SMA | L SMA |
| | Yi et al., 2014 | Sentences in foreign accent > sentences in native accent | 4, 24, 34 | R MCC | R Paracingulate Gyrus |
| | | | 34, −52, 62 | R SPL | R Motor cortex, SPL, somatosensory cortex |
| | | | −40, 14, 8 | L Insula | L Insula |
| | | | 20, −2, 60 | R SFG | R SFG |
| | | | 32, 20, −6 | No location given | R Insula |
| | | | −52, 10, 10 | L POp | L IFG |
| | | | −26, 24, 0 | L Insula | L Insula |
| | | | 42, 14, 8 | R IFG | R Insula |
| Time-compressed speech | Adank and Devlin, 2010 | Time-compressed > normal-speed sentences | −60, −14, 0 | L MTG | L Ant. STG/STS |
| | | | −58, −46, 4 | L MTG | L Post. STG/STS |
| | | | 64, −14, 0 | R STG | R Ant. STG/STS |
| | | | 56, −32, 4 | R STG | R Post. STG/STS |
| | | | 0, 12, 60 | SMA | Pre-SMA |
| | | | 0, 22, 44 | SMA | Cingulate sulcus |
| | | | −36, 24, −4 | L Insula | L FO |
| | | | 36, 25, 2 | R Insula | R FO |
| | Peelle et al., 2004 | Time-compressed > normal-speed sentences | −28.38, −66.82, 47.33 | L SPL | L Posterior parietal (BA19/39/40) |
| | | | −28.54, −76.78, 32.63 | L MOG | L Inferior parietal (BA19/39) |
| | | | −54.12, −38.58, −16.66 | L STG | L Inferior temporal (BA20) |
| | | | −15.43, −62.52, 46.69 | L SPL | L Posterior parietal (BA7) |
| | | | 14.07, −23.17, −4.77 | R Thalamus | R Thalamus |
| | | | 13.99, −7.32, −7.46 | R Thalamus | R Subthalamic nucleus |
| | | | 1.02, −38.08, −14.28 | R Cerebellar Vermis | R Cerebellum |
| | Poldrack et al., 2001 | Compression-related increases during sentence processing | −28, 54, 16 | L MFG | L MFG |
| | | | 34, 26, −4 | R Insula | R IFG/Insula |
| | | | 4, 32, 20 | R ACC | R ACC |
| | | | 18, 4, 8 | No location given | Striatum |
| | | | 66, −40, 8 | R MTG | R STG |
| Noise-vocoded speech | Erb et al., 2013 | Noise-vocoded > clear sentences | −6, 26, 40 | L SMedG | L SMA/ACC |
| | | | −30, 20, −5 | L Insula | L Ant. Insula |
| | | | 33, 23, −3 | R Insula | R Ant. Insula |
| | | | −9, 11, 7 | L Caudate Nucleus | L Caudate Nucleus |
| | | | 12, 17, 10 | R Caudate Nucleus | R Caudate Nucleus |
| | Zekveld et al., 2014 | Noise-vocoded > clear sentences | −4, 8, 60 | L SMedG | L SFG |
| | | | −64, −40, 10 | L STG | L STG |
| | | | −48, −42, 2 | L MTG | L MTG |
| | | | −44, −38, 8 | L STG | L MTG |
| Background noise | Adank et al., 2012a | Sentences in background noise > sentences in quiet | 32, 28, 10 | No location given | R IFG/FO |
| | | | −32, 24, 8 | L Insula | L FO/IFG/Insula |
| | | | 6, 14, 28 | No location given | R Cingulate Gyrus |
| | | | −24, 40, −2 | No location given | L Parahippocampal Gyrus |
| | | | −12, 10, −2 | L Putamen | L Caudate |
| | | | 12, 20, 36 | R MCC | R Paracingulate/Cingulate |
| | | | 30, 40, 24 | R MFG | R Frontal Pole |
| | | | 8, 22, 18 | No location given | R Cingulate Gyrus |
| | Peelle et al., 2010 | Sentences in continuous scanning EPI sequence > sentences in quiet EPI sequence | −36, −74, 44 | L IPL | L Inferior parietal cortex |
| | | | −40, −66, 44 | L AG | L Angular gyrus |
| | | | −48, −60, 48 | L IPL | L Inferior parietal cortex |
| | | | −56, −46, 8 | L MTG | L Post. MTG |
| | | | −66, −44, 0 | L MTG | L Post. MTG |
| | | | −68, −14, 2 | L STG | L Ant. STS |
| | | | −68, 2, −8 | No location given | L Ant. STS |
| | | | −60, 4, −14 | L STG | L Ant. STS |

Reported brain regions in studies investigating the processing of accented, time-compressed, or noise-vocoded speech, or speech in background noise, compared with undistorted words or sentences.

Note that the list of papers is not exhaustive. Coordinates in Talairach space were converted to MNI space using the tal2icbm_spm algorithm (www.brainmap.org/ale). Anatomical locations were determined using the Anatomy Toolbox (Eickhoff et al., 2005, 2006, 2007) in SPM8 (Wellcome Imaging Department, University College London, London, UK).

*

Original location as reported in the study. AG, Angular Gyrus; FFG, Fusiform Gyrus; FO, Frontal Operculum; IFG, Inferior Frontal Gyrus; IOG, Inferior Occipital Gyrus; IPL, Inferior Parietal Lobule; MCC, Middle Cingulate Cortex; MFG, Middle Frontal Gyrus; MTG, Middle Temporal Gyrus; PG, Precentral Gyrus; POp, Pars Opercularis; POrb, Pars Orbitalis; PT, Planum Temporale; PTr, Pars Triangularis; RO, Rolandic Operculum; SMA, Supplementary Motor Area; SMedG, Superior Medial Gyrus; SMG, Supramarginal Gyrus; STG, Superior Temporal Gyrus; STP, Superior Temporal Planum; STS, Superior Temporal Sulcus; TP, Temporal Pole.
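The Talairach-to-MNI conversion mentioned in the table note is a simple affine transform applied in homogeneous coordinates. As a minimal sketch, the Python snippet below inverts the icbm_spm2tal affine of Lancaster et al. (2007) to map a Talairach peak into MNI (ICBM-152/SPM) space. The coefficient values are an assumption reproduced from memory and should be verified against the reference implementation at www.brainmap.org/ale before any real use.

```python
import numpy as np

# Affine published as icbm_spm2tal (Lancaster et al., 2007). Coefficients are
# reproduced from memory here -- verify against www.brainmap.org/ale before use.
ICBM_SPM2TAL = np.array([
    [ 0.9254,  0.0024, -0.0118, -1.0207],
    [-0.0048,  0.9316, -0.0871, -1.7667],
    [ 0.0152,  0.0883,  0.8924,  4.0926],
    [ 0.0,     0.0,     0.0,     1.0   ],
])

# tal2icbm_spm is the inverse affine: it maps Talairach coordinates
# back into MNI space as used by SPM.
TAL2ICBM_SPM = np.linalg.inv(ICBM_SPM2TAL)

def tal_to_mni(xyz):
    """Convert one (x, y, z) Talairach coordinate to MNI space."""
    homog = np.append(np.asarray(xyz, dtype=float), 1.0)  # homogeneous coords
    return (TAL2ICBM_SPM @ homog)[:3]

# Example: a left posterior parietal Talairach peak from Peelle et al. (2004)
print(np.round(tal_to_mni([-28.38, -66.82, 47.33]), 1))
```

Whatever the exact coefficients, the round trip Talairach → MNI → Talairach must return the original coordinate, which is a useful sanity check on any conversion pipeline.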

Figure 1

Accented speech vs. other challenging listening conditions

As is the case with other types of distorted speech, understanding accented speech is associated with increased listening effort (Van Engen and Peelle, 2014). However, accent variation is conceptually different from variation in the acoustic signal that results from an extrinsic source such as noise: it is speech-intrinsic, consisting of phonetic realizations that differ from the listener's native realizations of speech sounds. Furthermore, in contrast to speech-intrinsic variation, noise compromises the auditory system's representation of speech from ear to brain. Accented speech also differs from distortions such as noise-vocoded or time-compressed speech in that the variation does not affect the integrity of the acoustic signal; only specific phonemic and suprasegmental characteristics vary.

Processing speech in noise involves areas also activated for speech in an unfamiliar accent (Table 1): left insula (Adank et al., 2012a), left MTG (Peelle et al., 2010), left Pars Opercularis (POp), and bilateral Pars Triangularis (PTr). Comprehension of time-compressed sentences activates left MTG (Poldrack et al., 2001; Adank and Devlin, 2010), right STG (Peelle et al., 2004; Adank and Devlin, 2010), SMA, and left insula (Adank and Devlin, 2010), while noise-vocoded speech activates left insula (Erb et al., 2013) and left MTG/STG (Zekveld et al., 2014). However, it is clear from Figure 1 that processing accented speech also activates areas outside the network activated for processing speech in noise, time-compressed speech, and noise-vocoded speech.
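The overlap described above can be made concrete by comparing peak coordinates directly. The sketch below takes a handful of MNI peaks from Table 1 and counts pairs falling within an illustrative 15 mm radius; this is a toy distance check under an assumed threshold, not a formal coordinate-based meta-analysis such as ALE.

```python
import numpy as np

# Toy cross-check of peak overlap between two conditions from Table 1.
# Peaks (MNI): accented speech (Adank et al., 2012b; Yi et al., 2014)
# vs. speech in background noise (Adank et al., 2012a; Peelle et al., 2010).
accent_peaks = np.array([[-60, -34, 8],   # L MTG
                         [-40, 14, 8],    # L insula
                         [-52, 10, 10]])  # L POp
noise_peaks = np.array([[-32, 24, 8],     # L insula
                        [-36, -74, 44],   # L IPL
                        [-56, -46, 8]])   # L post. MTG

def shared_peaks(peaks_a, peaks_b, radius=15.0):
    """Return pairs of peaks lying within `radius` mm of each other.

    The 15 mm radius is an illustrative assumption, not a validated criterion.
    """
    return [(tuple(a), tuple(b))
            for a in peaks_a for b in peaks_b
            if np.linalg.norm(a - b) <= radius]

for a, b in shared_peaks(accent_peaks, noise_peaks):
    print(f"accent peak {a} ~ noise peak {b}")
```

With these peaks, the insula and posterior temporal coordinates pair up across conditions while the remaining peaks do not, mirroring the partial overlap between the accent and noise networks noted in the text.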

Another problem in identifying the networks governing accent processing is that perceiving variation in an unfamiliar accent (i.e., an accent that differs from one's own and to which the listener has had little or no exposure) is confounded with cognitive load. Note that such confounds also exist for other distortions of the speech signal, such as background noise. Listeners process speech in an unfamiliar accent more slowly and less efficiently (Floccia et al., 2006). It is thus unclear to what extent the network supporting accented speech perception is shared with the network associated with increased task/cognitive load. Notably, an increase in task difficulty/working memory load is associated with increased BOLD activation in left insula, left MTG, SMA, left PTr, and right STG (Wild et al., 2012), and could therefore explain activations in these regions related to processing accented speech. Directly comparing the neural processing of familiar and unfamiliar accents may help distinguish between the two networks.

Accounts of accented and distorted speech processing

The current debate regarding how listeners understand others in challenging listening conditions focuses on the location and nature of neural substrates recruited for effective speech comprehension. The three accounts discussed below offer specific predictions regarding the neural networks involved in processing accented speech.

First, auditory-only accounts (Obleser and Eisner, 2009) hold that speech perception includes a prelexical abstraction process in which variation in the acoustic signal is “stripped away” to allow the perception system access to abstract linguistic representations. The abstraction process is placed predominantly in the temporal lobe (STS and STG). This account predicts that processing of accented speech takes place predominantly in the ventral stream, with minimal involvement of the dorsal stream.

Second, motor recruitment accounts suggest that auditory areas in the ventral stream and speech production areas in the dorsal stream are required to process unfamiliar speech signals (Wilson and Knoblich, 2005; Pickering and Garrod, 2013). These accounts assume that listening to speech results in the automatic activation of the articulatory motor plans required for producing speech (Watkins et al., 2003). These motor plans provide forward models with information about articulatory mechanics, to be used when the incoming signal is ambiguous or unclear. Accented speech contains variation that can lead to such ambiguities, and these accounts thus predict that perception of accented speech requires active involvement of speech production processes.

Third, executive recruitment accounts propose that activation of (pre-)motor areas during perception of distorted speech signals is not related to actual articulatory processing, but reflects the recruitment of general cognitive processes, such as increased attention or decision-making (Rodríguez-Fornells et al., 2009; Venezia et al., 2012). Indeed, behavioral data suggesting that executive functions are recruited for processing accented speech (Adank and Janse, 2010; Janse and Adank, 2012; Banks et al., 2015) predict activation of frontal regions including left frontal operculum, anterior insula, and precentral gyrus, as these regions have also been associated with executive functions such as working memory (Moisala et al., 2015).

The results in Table 1 contrast with the predictions of the auditory-only account (Obleser and Eisner, 2009), as the areas associated with processing accent variation form a more widespread network than predicted. Instead, the network in Table 1 converges with the latter two accounts, as activation is located across ventral areas and prefrontal areas in the dorsal stream. We propose synthesizing these three accounts into a single mixed account of accented speech processing that brings together neural substrates associated with increased involvement of auditory and phonological processing (e.g., bilateral posterior STG), (pre-)motor recruitment for sensorimotor mapping (e.g., SMA), and increased reliance on cognitive control processes (e.g., IFG, insula, and frontal operculum).

Concluding remarks

The neural mechanisms responsible for processing accent variation in speech are not clearly outlined, but constitute a topic of active investigation in the field of speech perception. However, to advance our understanding in this area, future studies should address several aims to overcome previous design limitations.

First, experiments should be designed so that contributions from processing accented speech and effortful processing can be teased apart (Venezia et al., 2012). Second, studies should aim to distinguish between brain activity related to processing accent variation and other distortions, such as background noise. Adank et al. (2012a) contrasted sentences in a familiar accent embedded in background noise with sentences in an unfamiliar accent, to disentangle areas associated with processing accent-related variation from those associated with processing speech in background noise: left posterior temporal areas in STG (extending to PT) and right STG (extending into insula) were more activated for accented speech than speech in noise, while bilateral FO/insula were more activated for speech in noise than for accented speech, indicating that the neural architecture for processing accented speech and speech in background noise is not generic. Third, different accents vary in how much they deviate from the listener's own accent. Greater deviation between accents is associated with greater processing cost, but the neural response associated with variation in the distance between accents has not been explored using fMRI. A recent study using Transcranial Magnetic Stimulation (TMS) showed that the degree to which lip and tongue motor cortex contributes causally to speech perception depends on the perceived distance between speaker and listener (Bartoli et al., 2013). Another study used EEG to show that regional and foreign accents might be processed differently: processing sentences in an unfamiliar foreign accent reduces the size of the N400 compared to unfamiliar native accents (Goslin et al., 2012). It may be fruitful to use a wider variety of neuroscience techniques, including (combinations of) fMRI, EEG, MEG, and TMS, to investigate how the brain successfully accomplishes accented speech perception.
Fourth, as processing effort, or cognitive load, is inevitably confounded with processing unfamiliar variation in accented speech, experiments should be designed to identify neural substrates associated with processing accent variation and those associated with increased cognitive load. One possibility would be to examine task difficulty and accent processing in a fully crossed factorial design to single out areas that show increased BOLD activation for accented speech and for task difficulty. Finally, the contribution of production resources to processing accented speech should be examined, to explicitly test predictions from motor and executive recruitment accounts (e.g., Du et al., 2014).
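The fully crossed design proposed above can be sketched numerically. Assuming a hypothetical 2 × 2 design (accent: familiar/unfamiliar × task load: low/high), the snippet below defines the standard main-effect and interaction contrasts over the four condition regressors and shows that a voxel driven purely by task load yields a null accent contrast; the condition ordering and effect sizes are invented for illustration, not taken from any of the studies reviewed here.

```python
import numpy as np

# Hypothetical 2x2 factorial: accent (familiar/unfamiliar) x task load (low/high).
# Condition order: familiar-low, familiar-high, unfamiliar-low, unfamiliar-high.
load = np.array([0, 1, 0, 1])  # 1 = high task load

# Contrast vectors over the four condition betas:
c_accent      = np.array([-1, -1,  1,  1])  # main effect of accent
c_load        = np.array([-1,  1, -1,  1])  # main effect of task load
c_interaction = np.array([ 1, -1, -1,  1])  # accent x load interaction

# A voxel whose (invented) betas depend only on task load:
betas_load_only = 2.0 * load

print("accent effect:", c_accent @ betas_load_only)  # 0.0 -> not accent-specific
print("load effect:  ", c_load @ betas_load_only)    # 4.0 -> load-driven
```

A region that responds to accent over and above load would instead load on `c_accent`, which is exactly the dissociation such a factorial design is meant to reveal.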

Acknowledgments

This work was supported by the Leverhulme Trust under award number RPG-2013-254.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  • 1

    Adank, P. (2012a). Design choices in imaging speech comprehension: an Activation Likelihood Estimation (ALE) meta-analysis. Neuroimage 63, 1601–1613. doi: 10.1016/j.neuroimage.2012.07.027

  • 2

    Adank, P. (2012b). The neural bases of difficult speech comprehension and speech production and their overlap: two Activation Likelihood Estimation (ALE) meta-analyses. Brain Lang. 122, 42–54. doi: 10.1016/j.bandl.2012.04.014

  • 3

    Adank, P., Davis, M., and Hagoort, P. (2012a). Neural dissociation in processing noise and accent in spoken language comprehension. Neuropsychologia 50, 77–84. doi: 10.1016/j.neuropsychologia.2011.10.024

  • 4

    Adank, P., and Devlin, J. T. (2010). On-line plasticity in spoken sentence comprehension: adapting to time-compressed speech. Neuroimage 49, 1124–1132. doi: 10.1016/j.neuroimage.2009.07.032

  • 5

    Adank, P., Evans, B. G., Stuart-Smith, J., and Scott, S. K. (2009). Comprehension of familiar and unfamiliar native accents under adverse listening conditions. J. Exp. Psychol. Hum. Percept. Perform. 35, 520–529. doi: 10.1037/a0013552

  • 6

    Adank, P., and Janse, E. (2010). Comprehension of a novel accent by young and older listeners. Psychol. Aging 25, 736–740. doi: 10.1037/a0020054

  • 7

    Adank, P., Noordzij, M. L., and Hagoort, P. (2012b). The role of Planum Temporale in processing accent variation in spoken language comprehension. Hum. Brain Mapp. 33, 360–372. doi: 10.1002/hbm.21218

  • 8

    Adank, P., Rueschemeyer, S. A., and Bekkering, H. (2013). The role of accent imitation in sensorimotor integration during processing of intelligible speech. Front. Hum. Neurosci. 7:634. doi: 10.3389/fnhum.2013.00634

  • 9

    Banks, B., Gowen, E., Munro, K., and Adank, P. (2015). Cognitive predictors of perceptual adaptation to accented speech. J. Acoust. Soc. Am. 137, 2015–2024. doi: 10.1121/1.4916265

  • 10

    Bartoli, E., D'Ausilio, A., Berry, J., Badino, L., Bever, T., and Fadiga, L. (2013). Listener–speaker perceived distance predicts the degree of motor contribution to speech perception. Cereb. Cortex 25, 281–288. doi: 10.1093/cercor/bht257

  • 11

    Callan, D. E., Callan, A. M., and Jones, J. A. (2014). Speech motor brain regions are differentially recruited during perception of native and foreign-accented phonemes for first and second language listeners. Front. Neurosci. 8:275. doi: 10.3389/fnins.2014.00275

  • 12

    Callan, D. E., Jones, J. A., Callan, A. M., and Akahane-Yamada, R. (2004). Phonetic perceptual identification by native- and second-language speakers differentially activates brain regions involved with acoustic phonetic processing and those involved with articulatory–auditory/orosensory internal models. Neuroimage 22, 1182–1194. doi: 10.1016/j.neuroimage.2004.03.006

  • 13

    Cristia, A., Seidl, A., Vaughn, C., Schmale, R., Bradlow, A. R., and Floccia, C. (2012). Linguistic processing of accented speech across the lifespan. Front. Psychol. 3:479. doi: 10.3389/fpsyg.2012.00479

  • 14

    Du, Y., Buchsbaum, B., Grady, C. L., and Alain, C. (2014). Noise differentially impacts phoneme representations in the auditory and speech motor systems. Proc. Natl. Acad. Sci. U.S.A. 111, 7126–7131. doi: 10.1073/pnas.1318738111

  • 15

    Eickhoff, S. B., Heim, S., Zilles, K., and Amunts, K. (2006). Testing anatomically specified hypotheses in functional imaging using cytoarchitectonic maps. Neuroimage 32, 570–582. doi: 10.1016/j.neuroimage.2006.04.204

  • 16

    Eickhoff, S. B., Paus, T., Caspers, S., Grosbras, M. H., Evans, A., Zilles, K., et al. (2007). Assignment of functional activations to probabilistic cytoarchitectonic areas revisited. Neuroimage 36, 511–521. doi: 10.1016/j.neuroimage.2007.03.060

  • 17

    Eickhoff, S. B., Stephan, K. E., Mohlberg, H., Grefkes, C., Fink, G. R., Amunts, K., et al. (2005). A new SPM toolbox for combining probabilistic cytoarchitectonic maps and functional imaging data. Neuroimage 25, 1325–1335. doi: 10.1016/j.neuroimage.2004.12.034

  • 18

    Erb, J., Henry, M. J., Eisner, F., and Obleser, J. (2013). The brain dynamics of rapid perceptual adaptation to adverse listening conditions. J. Neurosci. 33, 10688–10697. doi: 10.1523/JNEUROSCI.4596-12.2013

  • 19

    Floccia, C., Goslin, J., Girard, F., and Konopczynski, G. (2006). Does a regional accent perturb speech processing? J. Exp. Psychol. Hum. Percept. Perform. 32, 1276–1293. doi: 10.1037/0096-1523.32.5.1276

  • 20

    Goslin, J., Duffy, H., and Floccia, C. (2012). An ERP investigation of regional and foreign accent processing. Brain Lang. 122, 92–102. doi: 10.1016/j.bandl.2012.04.017

  • 21

    Hesling, I., Dilharreguy, B., Bordessoules, M., and Allard, M. (2012). The neural processing of second language comprehension modulated by the degree of proficiency: a listening connected speech fMRI study. Open Neuroimag. J. 6, 1–11. doi: 10.2174/1874440001206010044

  • 22

    Hickok, G., and Poeppel, D. (2007). The cortical organization of speech processing. Nat. Rev. Neurosci. 8, 393–402. doi: 10.1038/nrn2113

  • 23

    Janse, E., and Adank, P. (2012). Predicting foreign-accent adaptation in older adults. Q. J. Exp. Psychol. 65, 1563–1585. doi: 10.1080/17470218.2012.658822

  • 24

    Moisala, M., Salmela, V., Salo, E., Carlson, S., Vuontela, V., Salonen, O., et al. (2015). Brain activity during divided and selective attention to auditory and visual sentence comprehension tasks. Front. Hum. Neurosci. 9:86. doi: 10.3389/fnhum.2015.00086

  • 25

    Obleser, J., and Eisner, F. (2009). Pre-lexical abstraction of speech in the auditory cortex. Trends Cogn. Sci. 13, 14–19. doi: 10.1016/j.tics.2008.09.005

  • 26

    Peelle, J. E., Eason, R. J., Schmitter, S., Schwarzbauer, C., and Davis, M. H. (2010). Evaluating an acoustically quiet EPI sequence for use in fMRI studies of speech and auditory processing. Neuroimage 52, 1410–1419. doi: 10.1016/j.neuroimage.2010.05.015

  • 27

    Peelle, J. E., McMillan, C., Moore, P., Grossman, M., and Wingfield, A. (2004). Dissociable patterns of brain activity during comprehension of rapid and syntactically complex speech: evidence from fMRI. Brain Lang. 91, 315–325. doi: 10.1016/j.bandl.2004.05.007

  • 28

    Perani, D., and Abutalebi, J. (2005). The neural basis of first and second language processing. Curr. Opin. Neurobiol. 15, 202–206. doi: 10.1016/j.conb.2005.03.007

  • 29

    Perani, D., Dehaene, S., Grassi, F., Cohen, L., Cappa, S. F., Dupoux, E., et al. (1996). Brain processing of native and foreign languages. Neuroreport 7, 2439–2444. doi: 10.1097/00001756-199611040-00007

  • 30

    Pickering, M. J., and Garrod, S. (2013). An integrated theory of language production and comprehension. Behav. Brain Sci. 36, 329–347. doi: 10.1017/S0140525X12001495

  • 31

    Poldrack, R. A., Temple, E., Protopapas, A., Nagarajan, S., Tallal, P., Merzenich, M., et al. (2001). Relations between the neural bases of dynamic auditory processing and phonological processing: evidence from fMRI. J. Cogn. Neurosci. 13, 687–697. doi: 10.1162/089892901750363235

  • 32

    Rauschecker, J. P., and Scott, S. K. (2009). Maps and streams in the auditory cortex: nonhuman primates illuminate human speech processing. Nat. Neurosci. 12, 718–724. doi: 10.1038/nn.2331

  • 33

    Rodríguez-Fornells, A., Cunillera, T., Mestres-Missé, A., and de Diego-Balaguer, R. (2009). Neurophysiological mechanisms involved in language learning in adults. Philos. Trans. R. Soc. B Biol. Sci. 364, 3711–3735. doi: 10.1098/rstb.2009.0130

  • 34

    Van Engen, K. J., and Peelle, J. E. (2014). Listening effort and accented speech. Front. Hum. Neurosci. 8:577. doi: 10.3389/fnhum.2014.00577

  • 35

    Venezia, J. H., Saberi, K., Chubb, C., and Hickok, G. (2012). Response bias modulates the speech motor system during syllable discrimination. Front. Psychol. 3:157. doi: 10.3389/fpsyg.2012.00157

  • 36

    Watkins, K. E., Strafella, A. P., and Paus, T. (2003). Seeing and hearing speech excites the motor system involved in speech production. Neuropsychologia 41, 989–994. doi: 10.1016/S0028-3932(02)00316-0

  • 37

    Wild, C. J., Yusuf, A., Wilson, D. E., Peelle, J. E., Davis, M. H., and Johnsrude, I. S. (2012). Effortful listening: the processing of degraded speech depends critically on attention. J. Neurosci. 32, 14010–14021. doi: 10.1523/JNEUROSCI.1528-12.2012

  • 38

    Wilson, M., and Knoblich, G. (2005). The case for motor involvement in perceiving conspecifics. Psychol. Bull. 131, 460–473. doi: 10.1037/0033-2909.131.3.460

  • 39

    Yi, H., Smiljanic, R., and Chandrasekaran, B. (2014). The neural processing of foreign-accented speech and its relationship to listener bias. Front. Hum. Neurosci. 8:768. doi: 10.3389/fnhum.2014.00768

  • 40

    Zekveld, A. A., Heslenfeld, D. J., Johnsrude, I. S., Versfeld, N. J., and Kramer, S. E. (2014). The eye as a window to the listening brain: neural correlates of pupil size as a measure of cognitive listening load. Neuroimage 101, 76–86. doi: 10.1016/j.neuroimage.2014.06.069

Keywords

cognitive neuroscience, speech perception, accented speech, fMRI, speech in noise, noise-vocoded speech, time-compressed speech

Citation

Adank P, Nuttall HE, Banks B and Kennedy-Higgins D (2015) Neural bases of accented speech perception. Front. Hum. Neurosci. 9:558. doi: 10.3389/fnhum.2015.00558

Received

13 April 2015

Accepted

22 September 2015

Published

06 October 2015

Volume

9 - 2015

Edited by

Guadalupe Dávila, University of Málaga, Spain

Reviewed by

Antoni Rodriguez-Fornells, University of Barcelona, Spain; Kristin Van Engen, Washington University in St. Louis, USA

*Correspondence: Patti Adank, Speech, Hearing and Phonetic Sciences, University College London (UCL), Chandler House, 2 Wakefield St., London WC1N 1PF, UK

Disclaimer

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
