%A Rampinini, Alessandra Cecilia
%A Handjaras, Giacomo
%A Leo, Andrea
%A Cecchetti, Luca
%A Betta, Monica
%A Marotta, Giovanna
%A Ricciardi, Emiliano
%A Pietrini, Pietro
%D 2019
%J Frontiers in Human Neuroscience
%G English
%K fMRI, Language, motor theory of speech perception, Vowel acoustics, speech perception, phonology, Broca, Functional segregation, parcellation, model fit, Canonical correlation (CC) analysis, Brain Mapping
%R 10.3389/fnhum.2019.00032
%8 2019-February-08
%9 Original Research
%! Formant space reconstruction from brain activity in pSTS-MTG and IFGpTri.
%T Formant Space Reconstruction From Brain Activity in Frontal and Temporal Regions Coding for Heard Vowels
%U https://www.frontiersin.org/articles/10.3389/fnhum.2019.00032
%V 13
%0 JOURNAL ARTICLE
%@ 1662-5161
%X Classical studies have isolated a distributed network of temporal and frontal areas engaged in the neural representation of speech perception and production. With modern literature arguing against unique roles for these cortical regions, different theories have favored either neural code-sharing or cortical space-sharing to explain the intertwined spatial and functional organization of motor and acoustic components across the fronto-temporal cortical network. In this context, attention has recently shifted toward explicit model fitting, aimed at reconstructing motor and/or acoustic spaces from brain activity within the language network. Here, we tested a model based on acoustic properties (formants) and one based on motor properties (articulation parameters) in regions where model-free decoding of evoked fMRI activity during perception, imagery, and production of vowels had previously been successful. Results revealed that phonological information is organized around formant structure during the perception of vowels; interestingly, this model was reconstructed not only in a broad temporal region outside the primary auditory cortex but also in the pars triangularis of the left inferior frontal gyrus. Conversely, articulatory features were not associated with brain activity in these regions. Overall, our results suggest a degree of interdependence, based on acoustic information, between the frontal and temporal ends of the language network.