Decoding multiple objects from populations of macaque IT neurons with and without spatial attention
1. MIT, United States
2. MIT, McGovern Institute/Department of Brain and Cognitive Sciences, United States
3. MIT, McGovern Institute for Brain Research, United States
Spatial attention improves visual processing of specific objects when they are embedded in complex, cluttered scenes. Computational simulations examining the limits of biologically plausible feedforward hierarchical architectures have shown that these models can successfully discriminate between objects in low levels of clutter (2-3 objects). However, under more substantial clutter (> 3-5 objects), recognition performance decreases towards chance due to interference between the different object representations at higher levels of the visual hierarchy, where neurons have larger receptive fields (Serre et al., 2005). These interference effects, which have also been seen in electrophysiology (e.g., Moran and Desimone, 1985; Missal et al., 1997; Zoccolan et al., 2007) and psychophysics (Serre et al., 2007), provide a computational motivation for why attention is needed to achieve above-chance recognition in cluttered display conditions (Serre et al., 2007; Chikkerur et al., 2009).
In this work we examine this conjecture by assessing how attention affects the amount of information carried by populations of neurons in anterior inferior temporal cortex (AIT) in multiple-object displays. We recorded from neurons in AIT as monkeys engaged in a spatial attention task in which three objects were displayed simultaneously at different spatial locations. A cue was shown near fixation, which pointed to the behaviorally relevant object. The monkey was rewarded for making a saccade to the cued object when it changed color and for ignoring color changes of the distractors. Additionally, there were trials with the same temporal sequence in which only a single, isolated object was displayed.
We used population decoding (Meyers et al., 2008) to analyze the neural data. Using pseudo-populations of neurons (72 neurons from one monkey, 95 from the other), we trained a classifier to discriminate between the objects using data from a subset of the single-object trials. We then tested the classifier’s predictions of which object was shown, using either a different subset of single-object trials or the cluttered-display trials, before and after the attentional cue. This allowed us to compare the amount of information present in the population when an object is shown in isolation to the amount of information about the object in a cluttered display, with and without spatial attention.
When multiple objects were displayed, the classifier could predict which objects were present at a level above chance, but the accuracy was greatly reduced compared to predictions made from data from single-object trials. However, after spatial attention was deployed, the decoding accuracy for the attended stimulus greatly improved, while the decoding accuracy for non-attended stimuli decreased to near chance. These findings are consistent with computational models showing that AIT is capable of supporting limited representations of multiple objects (as predicted by Serre, Kreiman et al., 2007), and that attention-related changes play a significant role in increasing the amount of information available about behaviorally important objects.
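To make the decoding procedure concrete, the following is a minimal sketch of the analysis logic using synthetic data. The Poisson pseudo-population responses, the weighted-average model of cluttered displays, the attention gain, and the scikit-learn linear classifier are all illustrative assumptions standing in for the recorded AIT responses and the actual decoding pipeline of Meyers et al. (2008).

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

n_neurons = 72   # pseudo-population size (illustrative; one monkey contributed 72 neurons)
n_objects = 6    # hypothetical stimulus set; 3 of these appear in each cluttered display
n_trials = 40    # trials per object and condition (assumed)

# Hypothetical tuning: each neuron has a mean firing rate for each object.
tuning = rng.gamma(shape=2.0, scale=5.0, size=(n_objects, n_neurons))

def single_object_trials(obj):
    """Simulate pseudo-population responses to an isolated object."""
    return rng.poisson(tuning[obj], size=(n_trials, n_neurons))

def cluttered_trials(target, attention_gain=1.0):
    """Simulate 3-object displays containing `target` plus two random distractors.
    The population rate is a weighted average of the single-object rates, with the
    target's weight scaled by attention_gain (gain = 1 approximates the pre-cue period)."""
    trials = np.empty((n_trials, n_neurons))
    others = [o for o in range(n_objects) if o != target]
    for t in range(n_trials):
        distractors = rng.choice(others, size=2, replace=False)
        objs = np.array([target, *distractors])
        weights = np.array([attention_gain, 1.0, 1.0])
        weights /= weights.sum()
        trials[t] = rng.poisson(weights @ tuning[objs])
    return trials

# Train the classifier on one set of single-object trials.
X_train = np.vstack([single_object_trials(o) for o in range(n_objects)])
y_train = np.repeat(np.arange(n_objects), n_trials)
clf = make_pipeline(StandardScaler(), LinearSVC())
clf.fit(X_train, y_train)

# Test generalization to (i) held-out single-object trials, (ii) cluttered displays
# without attention, and (iii) cluttered displays with attention on the decoded object.
conditions = [("single object", None),
              ("clutter, unattended", 1.0),
              ("clutter, attended", 3.0)]
for label, gain in conditions:
    accuracies = []
    for obj in range(n_objects):
        X_test = (single_object_trials(obj) if gain is None
                  else cluttered_trials(obj, attention_gain=gain))
        accuracies.append(np.mean(clf.predict(X_test) == obj))
    print(f"{label}: mean decoding accuracy = {np.mean(accuracies):.2f}")
```

Under these assumptions, the sketch reproduces the qualitative pattern described above: near-perfect decoding for isolated objects, reduced but above-chance decoding of the target object in unattended clutter, and improved decoding when the target's contribution to the population response is boosted.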
Conference:
Computational and Systems Neuroscience 2010, Salt Lake City, UT, United States, 25 Feb - 2 Mar, 2010.
Presentation Type:
Poster Presentation
Topic:
Poster session II
Citation:
Meyers E, Zhang Y, Bichot N, Serre T, Poggio T, Desimone R and Chikkerur S (2010). Decoding multiple objects from populations of macaque IT neurons with and without spatial attention. Front. Neurosci. Conference Abstract: Computational and Systems Neuroscience 2010. doi: 10.3389/conf.fnins.2010.03.00322
Copyright:
The abstracts in this collection have not been subject to any Frontiers peer review or checks, and are not endorsed by Frontiers.
They are made available through the Frontiers publishing platform as a service to conference organizers and presenters.
The copyright in the individual abstracts is owned by the author of each abstract or his/her employer unless otherwise stated.
Each abstract, as well as the collection of abstracts, are published under a Creative Commons CC-BY 4.0 (attribution) licence (https://creativecommons.org/licenses/by/4.0/) and may thus be reproduced, translated, adapted and be the subject of derivative works provided the authors and Frontiers are attributed.
For Frontiers’ terms and conditions please see https://www.frontiersin.org/legal/terms-and-conditions.
Received:
08 Mar 2010;
Published Online:
08 Mar 2010.
* Correspondence:
Ethan Meyers, MIT, United States, emeyers@mit.edu