Reading the mind's eye: Decoding object information during mental imagery from fMRI patterns
Several studies have shown that category information for visually presented objects can be read out from fMRI patterns of brain activation spread over several millimeters of cortex. What is the nature and reliability of these activation patterns in the absence of any bottom-up visual input, for example, during visual imagery? Previous work has shown that the BOLD response under conditions of visual imagery is lower and limited to a smaller number of voxels, suggesting that activation patterns during imagery could be substantially different from those observed during actual viewing. Here we ask, first, how well category information can be read out for imagined objects and, second, how the representations of imagined objects compare to those obtained during actual viewing.
These questions were addressed in an fMRI study using four categories (faces, buildings, tools, and fruits) in two conditions: in the V-condition, subjects were visually presented with the images, and in the I-condition they imagined them. Using pattern classification techniques, we found, as in previous work, that object category information could be reliably decoded in the V-condition, both from category-selective regions (i.e., FFA and PPA) and from more distributed patterns of fMRI activity in object-selective voxels of ventral temporal cortex. Good classification performance was also observed in the I-condition, indicating that, even in the absence of any bottom-up signals, object representations in visual areas carry information about imagined objects. Interestingly, when the pattern classifier was trained on the V-condition and tested on the I-condition (or vice versa), classification performance was comparable to that in the I-condition, suggesting that the functional patterns of neural activity during imagery and actual viewing are surprisingly similar to each other. In addition, we found that the performance of such a classifier correlated with the vividness of imagery reported by individual subjects. This suggests that people with more vivid imagery are better able to re-activate the patterns of neural activity in the neural populations associated with the representation of specific objects.
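To make the analysis logic concrete, the following is a minimal sketch (in Python with scikit-learn, run on simulated data; it is not the code used in the study) of cross-condition decoding: a linear classifier is trained on viewing-condition voxel patterns, tested on imagery-condition patterns, and per-subject accuracy is then correlated with a self-reported vividness score. The array shapes, the choice of a linear SVM, and the vividness values are illustrative assumptions.

# Cross-condition decoding sketch: train on V-condition, test on I-condition,
# then correlate per-subject accuracy with reported imagery vividness.
# All data below are simulated placeholders, not the study's data.
import numpy as np
from scipy.stats import pearsonr
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_subjects, n_trials, n_voxels, n_categories = 10, 80, 300, 4

accuracies, vividness = [], []
for s in range(n_subjects):
    # Simulated voxel patterns: imagery shares the category "signal"
    # with viewing but at reduced amplitude.
    labels = rng.integers(0, n_categories, n_trials)
    signal = rng.standard_normal((n_categories, n_voxels))
    X_view = signal[labels] + rng.standard_normal((n_trials, n_voxels))
    X_imag = 0.5 * signal[labels] + rng.standard_normal((n_trials, n_voxels))

    clf = LinearSVC(C=1.0, max_iter=10000).fit(X_view, labels)  # train on V-condition
    accuracies.append(clf.score(X_imag, labels))                # test on I-condition

    vividness.append(rng.uniform(1, 5))  # placeholder self-report score (e.g., VVIQ-like)

r, p = pearsonr(accuracies, vividness)
print(f"mean cross-decoding accuracy: {np.mean(accuracies):.2f} (chance = 0.25)")
print(f"accuracy-vividness correlation: r = {r:.2f}, p = {p:.3f}")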
Overall, these results provide strong constraints for computational theories of vision and suggest that, in the absence of any bottom-up input, cortical back-projections can selectively re-activate specific patterns of neural activity.
Conference:
Computational and Systems Neuroscience 2009, Salt Lake City, UT, United States, 26 Feb - 3 Mar, 2009.
Presentation Type:
Poster Presentation
Topic:
Poster Presentations
Citation:
(2009). Reading the mind's eye: Decoding object information during mental imagery from fMRI patterns. Front. Syst. Neurosci. Conference Abstract: Computational and Systems Neuroscience 2009. doi: 10.3389/conf.neuro.06.2009.03.274
Received:
04 Feb 2009;
Published Online:
04 Feb 2009.