Generating text from functional brain images
- 1 Department of Psychology, Princeton University, Princeton, NJ, USA
- 2 Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
Recent work has shown that it is possible to take brain images acquired while a participant views a scene and reconstruct an approximation of that scene from the images. Here we show that it is also possible to generate text about the mental content reflected in brain images. We began with images collected as participants read the names of concrete items (e.g., “Apartment”) while also viewing line drawings of the named items. We built a model of the mental semantic representation of concrete concepts from text data and learned to map aspects of that representation to the patterns of activation in the corresponding brain images. To validate this mapping, we generated from each left-out brain image, without accessing any information about the item viewed, a collection of semantically pertinent words (e.g., “door,” “window” for “Apartment”). Furthermore, we show that the ability to generate such words allows us to perform a classification task, validating our method quantitatively.
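The pipeline the abstract describes can be illustrated in miniature: learn a regression from voxel activation patterns to topic weights, then, for a held-out image, decode its topic weights and rank vocabulary words by the resulting mixture of topic–word distributions. The sketch below is a hedged illustration only, with synthetic data and hypothetical dimensions (60 concepts, 500 voxels, 20 topics, 100 words); it uses plain ridge regression as a stand-in and is not the paper's actual model or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative, not from the paper).
n_concepts, n_voxels, n_topics, n_words = 60, 500, 20, 100

# Stand-in for a topic model learned from text: per-concept topic
# weights and per-topic word distributions (random placeholders).
concept_topics = rng.dirichlet(np.ones(n_topics), size=n_concepts)
topic_words = rng.dirichlet(np.ones(n_words), size=n_topics)

# Simulated brain images: activation as a noisy linear function
# of each concept's topic weights.
true_map = rng.normal(size=(n_topics, n_voxels))
images = concept_topics @ true_map + 0.1 * rng.normal(size=(n_concepts, n_voxels))

# Leave one concept out; fit a ridge regression from voxels to
# topic weights on the remaining concepts.
test = 0
train = np.arange(n_concepts) != test
X, Y = images[train], concept_topics[train]
lam = 1.0  # ridge penalty (arbitrary choice here)
W = np.linalg.solve(X.T @ X + lam * np.eye(n_voxels), X.T @ Y)

# Decode topic weights for the held-out image, then score every
# vocabulary word by the decoded mixture of topic-word distributions.
topics_hat = images[test] @ W
word_scores = topics_hat @ topic_words
top_words = np.argsort(word_scores)[::-1][:5]  # indices of the 5 best-scoring words
print(top_words)
```

With a real topic model, `top_words` would index actual vocabulary items, yielding the kind of semantically pertinent word list the abstract describes for each left-out image.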
Keywords: fMRI, topic models, semantic categories, classification, multivariate
Citation: Pereira F, Detre G and Botvinick M (2011) Generating text from functional brain images. Front. Hum. Neurosci. 5:72. doi: 10.3389/fnhum.2011.00072
Received: 17 April 2011; Accepted: 17 July 2011;
Published online: 23 August 2011.
Copyright: © 2011 Pereira, Detre and Botvinick. This is an open-access article subject to a non-exclusive license between the authors and Frontiers Media SA, which permits use, distribution and reproduction in other forums, provided the original authors and source are credited and other Frontiers conditions are complied with.
*Correspondence: Francisco Pereira, Department of Psychology, Green Hall, Princeton University, Princeton, NJ, USA. e-mail: email@example.com