AUTHOR=Martin Stéphanie, Brunner Peter, Holdgraf Chris, Heinze Hans-Jochen, Crone Nathan E., Rieger Jochem, Schalk Gerwin, Knight Robert T., Pasley Brian N.
TITLE=Decoding spectrotemporal features of overt and covert speech from the human cortex
JOURNAL=Frontiers in Neuroengineering
VOLUME=7
YEAR=2014
URL=https://www.frontiersin.org/journals/neuroengineering/articles/10.3389/fneng.2014.00014
DOI=10.3389/fneng.2014.00014
ISSN=1662-6443
ABSTRACT=Auditory perception and auditory imagery have been shown to activate overlapping brain regions. We hypothesized that these phenomena also share a common underlying neural representation. To assess this, we used intracranial recordings from epileptic patients performing an out-loud or a silent reading task. In these tasks, short stories scrolled across a video screen in two conditions: subjects read the same stories both aloud (overt) and silently (covert). In a control condition the subject remained in a resting state. We first built a high gamma (70-150 Hz) neural decoding model to reconstruct spectrotemporal auditory features of self-generated overt speech. We then evaluated whether this same model could reconstruct auditory speech features in the covert speech condition. Two speech models were tested: a spectrogram and a modulation-based feature space. For the overt condition, reconstruction accuracy was evaluated as the correlation between original and predicted speech features, and was significant in each subject. For the covert speech condition, dynamic time warping was first used to realign the covert speech reconstruction with the corresponding original speech from the overt condition. Covert reconstruction accuracy was then compared to the accuracy obtained from reconstructions in the baseline control condition. Across the group of subjects, reconstruction accuracy for the covert condition was significantly better than for the control condition. Electrodes in the superior temporal gyrus and in the pre- and postcentral gyri were the strongest predictors of decoding accuracy. These results provide evidence that auditory representations of covert speech can be reconstructed from models that are built from an overt speech data set, supporting a partially shared neural substrate.
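The evaluation pipeline described in the abstract (dynamic time warping to realign a reconstruction with its reference, then correlation as the accuracy measure) can be sketched as follows. This is a generic illustration on 1-D feature traces, not the authors' code: the paper warps full spectrotemporal reconstructions, and the symmetric DTW and absolute-difference cost used here are illustrative assumptions.

```python
import numpy as np

def dtw_path(x, y):
    """Return the optimal DTW warping path between two 1-D sequences.

    Classic dynamic-programming DTW with an absolute-difference local cost
    (an illustrative choice; other distances are possible).
    """
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    # Trace the optimal path back from the end of both sequences.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def aligned_correlation(reconstruction, reference):
    """Warp the reconstruction onto the reference, then take Pearson r."""
    path = dtw_path(reconstruction, reference)
    a = np.array([reconstruction[i] for i, _ in path])
    b = np.array([reference[j] for _, j in path])
    return np.corrcoef(a, b)[0, 1]
```

In the paper's scheme, `aligned_correlation` would be applied to the covert-condition reconstruction against the overt original, and the resulting accuracy compared with that of reconstructions from the resting-state control condition.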