<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Comput. Neurosci.</journal-id>
<journal-title>Frontiers in Computational Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Comput. Neurosci.</abbrev-journal-title>
<issn pub-type="epub">1662-5188</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fncom.2014.00132</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research Article</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Modeling the shape hierarchy for visually guided grasping</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Rezai</surname> <given-names>Omid</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="author-notes" rid="fn001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://community.frontiersin.org/people/u/168038"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Kleinhans</surname> <given-names>Ashley</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<uri xlink:href="http://community.frontiersin.org/people/u/139384"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Matallanas</surname> <given-names>Eduardo</given-names></name>
<xref ref-type="aff" rid="aff4"><sup>4</sup></xref>
<uri xlink:href="http://community.frontiersin.org/people/u/161701"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Selby</surname> <given-names>Ben</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<uri xlink:href="http://community.frontiersin.org/people/u/160166"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Tripp</surname> <given-names>Bryan P.</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="author-notes" rid="fn001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://community.frontiersin.org/people/u/2497"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Department of Systems Design Engineering, Centre for Theoretical Neuroscience, University of Waterloo</institution> <country>Waterloo, ON, Canada</country></aff>
<aff id="aff2"><sup>2</sup><institution>Mobile Intelligent Autonomous Systems, Council for Scientific and Industrial Research</institution> <country>Pretoria, South Africa</country></aff>
<aff id="aff3"><sup>3</sup><institution>School of Mechanical and Industrial Engineering, University of Johannesburg</institution> <country>Johannesburg, South Africa</country></aff>
<aff id="aff4"><sup>4</sup><institution>ETSI Telecomunicaci&#x000F3;n, Universidad Polit&#x000E9;cnica de Madrid</institution> <country>Madrid, Spain</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Antonio J. Rodriguez-Sanchez, University of Innsbruck, Austria</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Abdelmalik Moujahid, University of the Basque Country UPV/EHU, Spain; Ko Sakai, University of Tsukuba, Japan; Peter Janssen, Catholic University of Leuven, Belgium</p></fn>
<fn fn-type="corresp" id="fn001"><p>&#x0002A;Correspondence: Omid Rezai and Bryan P. Tripp, Systems Design Engineering, University of Waterloo, 200 University Avenue West, Waterloo, ON N2L 3G1, Canada e-mail: <email>omid.srezai&#x00040;uwaterloo.ca</email>; <email>bptripp&#x00040;uwaterloo.ca</email></p></fn>
<fn fn-type="other" id="fn002"><p>This article was submitted to the journal Frontiers in Computational Neuroscience.</p></fn>
</author-notes>
<pub-date pub-type="epub">
<day>27</day>
<month>10</month>
<year>2014</year>
</pub-date>
<pub-date pub-type="collection">
<year>2014</year>
</pub-date>
<volume>8</volume>
<elocation-id>132</elocation-id>
<history>
<date date-type="received">
<day>16</day>
<month>05</month>
<year>2014</year>
</date>
<date date-type="accepted">
<day>26</day>
<month>09</month>
<year>2014</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2014 Rezai, Kleinhans, Matallanas, Selby and Tripp.</copyright-statement>
<copyright-year>2014</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract><p>The monkey anterior intraparietal area (AIP) encodes visual information about three-dimensional object shape that is used to shape the hand for grasping. We modeled shape tuning in visual AIP neurons and its relationship with curvature and gradient information from the caudal intraparietal area (CIP). The main goal was to gain insight into the kinds of shape parameterizations that can account for AIP tuning and that are consistent with both the inputs to AIP and the role of AIP in grasping. We first experimented with superquadric shape parameters. We considered superquadrics because they occupy a role in robotics that is similar to AIP, in that superquadric fits are derived from visual input and used for grasp planning. We also experimented with an alternative shape parameterization that was based on an Isomap dimension reduction of spatial derivatives of depth (i.e., distance from the observer to the object surface). We considered an Isomap-based model because its parameters lacked discontinuities between similar shapes. When we matched the dimension of the Isomap to the number of superquadric parameters, the superquadric model fit the AIP data somewhat more closely. However, higher-dimensional Isomaps provided excellent fits. Also, we found that the Isomap parameters could be approximated much more accurately than superquadric parameters by feedforward neural networks with CIP-like inputs. We conclude that Isomaps, or perhaps alternative dimension reductions of visual inputs to AIP, provide a promising model of AIP electrophysiology data. Further work is needed to test whether such shape parameterizations actually provide an effective basis for grasp control.</p></abstract>
<kwd-group>
<kwd>AIP</kwd>
<kwd>CIP</kwd>
<kwd>grasping</kwd>
<kwd>3D shape</kwd>
<kwd>cosine tuning</kwd>
<kwd>superquadrics</kwd>
<kwd>Isomap</kwd>
</kwd-group>
<counts>
<fig-count count="11"/>
<table-count count="0"/>
<equation-count count="10"/>
<ref-count count="64"/>
<page-count count="13"/>
<word-count count="9794"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="introduction" id="s1">
<title>1. Introduction</title>
<p>The macaque anterior intraparietal area (AIP) receives input from the visual cortex, and is involved in visually guided grasping. A large fraction of neurons in this area encode information about three-dimensional object shapes from visual input (Murata et al., <xref ref-type="bibr" rid="B35">2000</xref>; Sakaguchi et al., <xref ref-type="bibr" rid="B47">2010</xref>). Responses are typically relatively invariant to object position in depth (Srivastava et al., <xref ref-type="bibr" rid="B54">2009</xref>). The responses of some neurons are also invariant to other properties. For example, some are orientation-tuned but not highly sensitive to object shape (Murata et al., <xref ref-type="bibr" rid="B35">2000</xref>). AIP has a strong recurrent connection with premotor area F5, which is involved in hand shaping for grasping (Rizzolatti et al., <xref ref-type="bibr" rid="B44">1990</xref>; Luppino et al., <xref ref-type="bibr" rid="B34">1999</xref>; Borra et al., <xref ref-type="bibr" rid="B5">2008</xref>). Reversible inactivation of AIP leads to grasping impairment, specifically a mismatch between object shape and hand preshape (Gallese et al., <xref ref-type="bibr" rid="B18">1994</xref>; Fogassi et al., <xref ref-type="bibr" rid="B17">2001</xref>). AIP is therefore thought to provide visual information for grasp control (Jeannerod et al., <xref ref-type="bibr" rid="B27">1995</xref>; Fagg and Arbib, <xref ref-type="bibr" rid="B14">1998</xref>).</p>
<p>The focus of this paper is the pathway from V3 and V3A, to the caudal intraparietal area (CIP), to visual-dominant neurons in AIP (Nakamura et al., <xref ref-type="bibr" rid="B36">2001</xref>; Tsutsui et al., <xref ref-type="bibr" rid="B61">2002</xref>). This pathway makes binocular disparity information available for grasp control. Most V3 neurons are selective for binocular disparity (Adams and Zeki, <xref ref-type="bibr" rid="B1">2001</xref>). V3 sends a major projection to V3A (Felleman et al., <xref ref-type="bibr" rid="B15">1997</xref>), which is also strongly activated during binocular disparity processing (Tsao et al., <xref ref-type="bibr" rid="B59">2003</xref>). Both V3 and V3A project to CIP (Katsuyama et al., <xref ref-type="bibr" rid="B28">2010</xref>). CIP neurons are selective for depth gradients (Taira et al., <xref ref-type="bibr" rid="B55">2000</xref>; Tsutsui et al., <xref ref-type="bibr" rid="B61">2002</xref>; Rosenberg et al., <xref ref-type="bibr" rid="B46">2013</xref>) and curvature (Katsuyama et al., <xref ref-type="bibr" rid="B28">2010</xref>). Neurons in AIP receive disynaptic input from V3A via CIP (Nakamura et al., <xref ref-type="bibr" rid="B36">2001</xref>; Borra et al., <xref ref-type="bibr" rid="B5">2008</xref>). Visual-dominant AIP neurons are selective for 3D object shape (Srivastava et al., <xref ref-type="bibr" rid="B54">2009</xref>; Sakaguchi et al., <xref ref-type="bibr" rid="B47">2010</xref>) cued by binocular disparity, consistent with input from this pathway.</p>
<p>AIP also receives many other inputs that we do not model in the present study. The first of these is input from the premotor area F5, which together with AIP forms a circuit for grasp-related visuomotor transformations. AIP also receives input from the second somatosensory (SII) cortical region (Krubitzer et al., <xref ref-type="bibr" rid="B30">1995</xref>; Fitzgerald et al., <xref ref-type="bibr" rid="B16">2004</xref>; Gregoriou et al., <xref ref-type="bibr" rid="B20">2006</xref>), which may provide tactile feedback and memory-based somatosensory expectations for grasping. Strong connections have also been identified with other parietal areas, as well as with prefrontal areas 46 and 12. Area 12 is implicated in high-level non-spatial processing, including the encoding of objects in working memory, suggesting that AIP may be influenced by visual memory of object features (Borra et al., <xref ref-type="bibr" rid="B5">2008</xref>). AIP also contains neurons that fire in conjunction with motor plans, in addition to or instead of visual input (Sakata et al., <xref ref-type="bibr" rid="B48">1997</xref>; Murata et al., <xref ref-type="bibr" rid="B35">2000</xref>; Taira et al., <xref ref-type="bibr" rid="B55">2000</xref>). Interestingly, AIP also receives subcortical input (via the thalamus) from both the cerebellum and the basal ganglia (Clower et al., <xref ref-type="bibr" rid="B7">2005</xref>). Finally, AIP receives input from the inferotemporal cortex (IT), which is likely to provide additional visual information about shapes. Our present focus, however, is the visual input from CIP.</p>
<p>The main goal of this study is to model the neural spike code of object-selective visual-dominant AIP neurons. In particular, we wanted to know whether there are certain sets of shape parameters that are consistent with the responses of visual AIP neurons, and which can furthermore be estimated in a physiologically plausible way from the information available in CIP.</p>
<p>We therefore compared two ways of parameterizing shapes. First we considered the superquadric family of shapes, a continuum that includes cuboids, ellipsoids, spheres, octahedra, and cylinders, and which can also be extended in various ways to model more complex shapes (Solina and Bajcsy, <xref ref-type="bibr" rid="B53">1990</xref>). We considered superquadrics because they play a role in robotic grasp control (Duncan et al., <xref ref-type="bibr" rid="B11">2013</xref>) that seems to be similar to the role of AIP in primate grasp control, i.e., they represent shapes compactly as a basis for grasp planning. We also considered an alternative shape parameterization that is based on non-linear dimension reduction of the depth field. In particular, we used an Isomap (Tenenbaum et al., <xref ref-type="bibr" rid="B56">2000</xref>). We considered Isomap parameters partly because they are continuous, i.e., similar shapes have similar parameters. This is consistent with datasets in which similar 3D stimuli elicit similar spike rate patterns in AIP (Theys et al., <xref ref-type="bibr" rid="B57">2012</xref>, Figure 10; Srivastava et al., <xref ref-type="bibr" rid="B54">2009</xref>, Figure 11C).</p>
<p>This study is one of the first to model the mapping from CIP to AIP. Oztop et al. (<xref ref-type="bibr" rid="B39">2006</xref>) modeled AIP as a hidden layer in a multi-layer perceptron network that mapped visual depth onto hand configuration. The output layer of this model (corresponding to F5) was a self-organizing map of subnetworks that corresponded to different hand configurations. Prevete et al. (<xref ref-type="bibr" rid="B42">2011</xref>) developed a mixed neural and Gaussian-mixture model in which AIP received monocular infero-temporal input. This model did not include stereoscopic input from CIP. The FARS grasping model (Fagg and Arbib, <xref ref-type="bibr" rid="B14">1998</xref>) did not address in detail how AIP activity arises from visual input. While past AIP models have been relatively abstract, here our goal is to fit published tuning curves from AIP recordings, and furthermore to do so using depth-related input from a model of CIP. As far as we are aware, there have not been previous attempts to model AIP tuning in terms of either superquadric parameters or non-linear dimension reduction of depth features.</p>
</sec>
<sec sec-type="materials and methods" id="s2">
<title>2. Materials and methods</title>
<p>This study consists of three main parts. The first is a model of tuning for depth features in the caudal intraparietal area (CIP, see Section 2.1.1). The second is a model of tuning for three-dimensional shape features in the anterior intraparietal area (AIP, see Section 2.1.2). Finally, the third is an investigation of physiologically plausible feedforward mappings between CIP and AIP (see Section 2.5).</p>
<sec>
<title>2.1. Cosine-tuning models of neurophysiological data</title>
<p>We tested how well various tuning curves from the CIP and AIP electrophysiology literature could be approximated by cosine-tuned neuron models. In particular, given a vector <italic>x</italic> of stimulus variables, we modeled the net current, <italic>I</italic>, driving spiking activity in each neuron as</p>
<disp-formula id="E1"><label>(1)</label><mml:math id="M1"><mml:mrow><mml:mi>I</mml:mi><mml:mo>=</mml:mo><mml:msup><mml:mover accent='true'><mml:mi>&#x003D5;</mml:mi><mml:mo>&#x002DC;</mml:mo></mml:mover><mml:mi>T</mml:mi></mml:msup><mml:mi>x</mml:mi><mml:mo>+</mml:mo><mml:mi>b</mml:mi><mml:mo>,</mml:mo></mml:mrow></mml:math></disp-formula>
<p>where <italic>b</italic> is a bias term and <inline-formula><mml:math id="M2"><mml:mover accent='true'><mml:mi>&#x003D5;</mml:mi><mml:mo>&#x002DC;</mml:mo></mml:mover></mml:math></inline-formula> is parallel to the neuron&#x00027;s preferred direction in the space of stimulus parameters. Longer <inline-formula><mml:math id="M3"><mml:mover accent='true'><mml:mi>&#x003D5;</mml:mi><mml:mo>&#x002DC;</mml:mo></mml:mover></mml:math></inline-formula> corresponds to higher sensitivity of the neuron to variations along its preferred direction.</p>
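As a minimal sketch of Equation (1) in Python (illustrative names; not taken from the authors' code), the cosine-tuned driving current is simply an inner product plus a bias:

```python
def driving_current(x, phi, b):
    """Net driving current I = phi^T x + b (Equation 1).

    x   : vector of stimulus variables
    phi : vector parallel to the neuron's preferred direction;
          a longer phi means higher sensitivity along that direction
    b   : bias current
    """
    return sum(p * s for p, s in zip(phi, x)) + b
```

For a fixed stimulus magnitude, the stimulus-dependent part of <italic>I</italic> varies with the cosine of the angle between <italic>x</italic> and the preferred direction, which is why this is called cosine tuning.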
<p>We used a normalized version of the leaky-integrate-and-fire (LIF) spiking model. In this model, the membrane potential <italic>V</italic> has subthreshold dynamics &#x003C4;<sub><italic>RC</italic></sub> <inline-formula><mml:math id="M4"><mml:mover accent='true'><mml:mi>V</mml:mi><mml:mo>&#x002D9;</mml:mo></mml:mover></mml:math></inline-formula> &#x0003D; &#x02212;<italic>V</italic> &#x0002B; <italic>I</italic>, where &#x003C4;<sub><italic>RC</italic></sub> is the membrane time constant and <italic>I</italic> is the driving current. The neuron spikes when <italic>V</italic> &#x0003E;&#x0003D; 1, after which <italic>V</italic> is held at 0 for a post-spike refractory time &#x003C4;<sub><italic>ref</italic></sub> before subthreshold integration begins again. These neurons have spike rate</p>
<disp-formula id="E2"><label>(2)</label><mml:math id="M5"><mml:mrow><mml:mi>r</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:msub><mml:mi>&#x003C4;</mml:mi><mml:mrow><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>f</mml:mi></mml:mrow></mml:msub><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>&#x003C4;</mml:mi><mml:mrow><mml:mi>R</mml:mi><mml:mi>C</mml:mi></mml:mrow></mml:msub><mml:mo>&#x000B7;</mml:mo><mml:mi>ln</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x02212;</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mi>I</mml:mi></mml:mfrac><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mfrac><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula>
<p>Except where noted, &#x003C4;<sub><italic>RC</italic></sub> was included among the optimization parameters and constrained to the range [0.02<italic>s</italic>, 0.2<italic>s</italic>]. In some cases (where noted), when the basic cosine-LIF model (above) produced poor fits, we also added Gaussian background noise to <italic>I</italic>. Such background noise more realistically reflects the input to neurons <italic>in vivo</italic> (Carandini, <xref ref-type="bibr" rid="B6">2004</xref>) and causes the LIF model to emit more realistic, irregular spike trains. It also has the potential to produce better tuning curve fits, because depending on the amplitude of the noise, the spike-rate function may be compressive [as in Equation (2)], sigmoidal, or nearly linear. In these cases we fixed &#x003C4;<sub><italic>ref</italic></sub> &#x0003D; 0.005<italic>s</italic> and &#x003C4;<sub><italic>RC</italic></sub> &#x0003D; 0.02<italic>s</italic>, included the noise variance as an optimization parameter, and interpolated the spike rate from a lookup table based on simulations. Given a tuning curve from the electrophysiology literature and a list of hypothesized tuning variables, we found least-squares optimal values of <inline-formula><mml:math id="M6"><mml:mover accent='true'><mml:mi>&#x003D5;</mml:mi><mml:mo>&#x002DC;</mml:mo></mml:mover></mml:math></inline-formula> and <italic>b</italic>, and of either &#x003C4;<sub><italic>RC</italic></sub> or &#x003C3;<sub><italic>noise</italic></sub> (as noted in the corresponding sections), using Matlab&#x00027;s <italic>lsqcurvefit</italic> function. This function uses Matlab&#x00027;s trust-region-reflective algorithm, based partly on Coleman and Li (<xref ref-type="bibr" rid="B8">1994</xref>), to solve non-linear least-squares curve-fitting problems. We restarted each optimization from at least 1000 random initial points in order to increase the probability of finding a global optimum.</p>
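The noise-free rate function of Equation (2) can be sketched as follows (a hypothetical Python illustration; the authors fit their models with Matlab's <italic>lsqcurvefit</italic>):

```python
import math

def lif_rate(I, tau_ref=0.005, tau_rc=0.02):
    """Spike rate of the normalized LIF model (Equation 2), in spikes/s.

    Below threshold (I <= 1) the membrane never reaches V = 1, so the
    rate is zero.  tau_ref and tau_rc are in seconds.
    """
    if I <= 1.0:
        return 0.0
    return 1.0 / (tau_ref - tau_rc * math.log(1.0 - 1.0 / I))
```

This rate function is compressive: doubling a suprathreshold current less than doubles the rate. Adding Gaussian noise to <italic>I</italic>, as described above, smooths the threshold and can make the effective rate function sigmoidal or nearly linear.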
<p>We preferred cosine tuning models over more complex non-linear models for a number of reasons, including that they are simple and that cosine tuning is widespread in the cortex and elsewhere (Zhang and Sejnowski, <xref ref-type="bibr" rid="B64">1999</xref>). (See more detailed rationale in the Discussion).</p>
<sec>
<title>2.1.1. CIP tuning</title>
<p>We approximated CIP responses in terms of depth and its first and second spatial derivatives. CIP has been proposed to encode these variables (Orban et al., <xref ref-type="bibr" rid="B38">2006</xref>), and they have been the basis for several experimental studies of CIP responses (Sakata et al., <xref ref-type="bibr" rid="B49">1998</xref>; Taira et al., <xref ref-type="bibr" rid="B55">2000</xref>; Tsutsui et al., <xref ref-type="bibr" rid="B60">2001</xref>; Katsuyama et al., <xref ref-type="bibr" rid="B28">2010</xref>; Rosenberg et al., <xref ref-type="bibr" rid="B46">2013</xref>).</p>
<p>We fit cosine-tuned LIF neuron models to tuning curves from Tsutsui et al. (<xref ref-type="bibr" rid="B61">2002</xref>) and Rosenberg et al. (<xref ref-type="bibr" rid="B46">2013</xref>), and from Katsuyama et al. (<xref ref-type="bibr" rid="B28">2010</xref>), in which the stimuli varied in terms of first and second derivatives of depth, respectively. The stimuli in Katsuyama et al. (<xref ref-type="bibr" rid="B28">2010</xref>) consisted of curved surfaces with depth</p>
<disp-formula id="E3"><label>(3)</label><mml:math id="M7"><mml:mrow><mml:mi>z</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mn>2</mml:mn></mml:mfrac><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>K</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msup><mml:mi>x</mml:mi><mml:mn>2</mml:mn></mml:msup><mml:mo>+</mml:mo><mml:msub><mml:mi>K</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:msup><mml:mi>y</mml:mi><mml:mn>2</mml:mn></mml:msup></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula>
<p><italic>K</italic><sub>1</sub> and <italic>K</italic><sub>2</sub> were varied to produce two levels of &#x0201C;curvedness,&#x0201D;</p>
<disp-formula id="E4"><mml:math id="M8"><mml:mrow><mml:mi>C</mml:mi><mml:mo>=</mml:mo><mml:msqrt><mml:mrow><mml:mfrac><mml:mrow><mml:msubsup><mml:mi>K</mml:mi><mml:mrow><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>x</mml:mi></mml:mrow><mml:mn>2</mml:mn></mml:msubsup><mml:mo>+</mml:mo><mml:msubsup><mml:mi>K</mml:mi><mml:mrow><mml:mi>m</mml:mi><mml:mi>i</mml:mi><mml:mi>n</mml:mi></mml:mrow><mml:mn>2</mml:mn></mml:msubsup></mml:mrow><mml:mn>2</mml:mn></mml:mfrac></mml:mrow></mml:msqrt></mml:mrow></mml:math></disp-formula>
<p>and a range of &#x0201C;shape indices&#x0201D;</p>
<disp-formula id="E5"><mml:math id="M9"><mml:mrow><mml:mi>S</mml:mi><mml:mi>I</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mn>2</mml:mn><mml:mi>&#x003C0;</mml:mi></mml:mfrac><mml:mtext>arctan</mml:mtext><mml:mfrac><mml:mrow><mml:msub><mml:mi>K</mml:mi><mml:mrow><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>K</mml:mi><mml:mrow><mml:mi>m</mml:mi><mml:mi>i</mml:mi><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:msub><mml:mi>K</mml:mi><mml:mrow><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>K</mml:mi><mml:mrow><mml:mi>m</mml:mi><mml:mi>i</mml:mi><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:mo>,</mml:mo></mml:mrow></mml:math></disp-formula>
<p>where <italic>K</italic><sub><italic>max</italic></sub> and <italic>K</italic><sub><italic>min</italic></sub> are the larger and the smaller of the curvatures along the <italic>x</italic> and <italic>y</italic> axes.</p>
<p>In terms of the depth <italic>z</italic>, the principal curvature along the <italic>x</italic> axis is</p>
<disp-formula id="E6"><label>(4)</label><mml:math id="M10"><mml:mrow><mml:msub><mml:mi>K</mml:mi><mml:mi>x</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:msup><mml:mo>&#x02202;</mml:mo><mml:mn>2</mml:mn></mml:msup><mml:mi>z</mml:mi><mml:mo>/</mml:mo><mml:mo>&#x02202;</mml:mo><mml:msup><mml:mi>x</mml:mi><mml:mn>2</mml:mn></mml:msup></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>+</mml:mo><mml:msup><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mo>&#x02202;</mml:mo><mml:mi>z</mml:mi><mml:mo>/</mml:mo><mml:mo>&#x02202;</mml:mo><mml:mi>x</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mn>2</mml:mn></mml:msup></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mn>3</mml:mn><mml:mo>/</mml:mo><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:mfrac></mml:mrow></mml:math></disp-formula>
<p>(de Vries et al., <xref ref-type="bibr" rid="B9">1993</xref>). For these stimuli &#x02202;<italic>z</italic>/&#x02202;<italic>x</italic> &#x0003D; 0 at the center, and so <italic>K<sub>x</sub></italic> &#x0003D; &#x02202;<sup>2</sup><italic>z</italic>/&#x02202;<italic>x</italic><sup>2</sup>.</p>
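The stimulus descriptors above can be sketched in Python (hypothetical helper names, not from the authors' code): curvedness, shape index, and the principal curvature of Equation (4).

```python
import math

def curvedness(k_max, k_min):
    """Curvedness C = sqrt((K_max^2 + K_min^2) / 2)."""
    return math.sqrt((k_max ** 2 + k_min ** 2) / 2.0)

def shape_index(k_max, k_min):
    """Shape index SI = (2/pi) * arctan((K_max + K_min) / (K_max - K_min)).

    Undefined for k_max == k_min (a spherical patch).
    """
    return (2.0 / math.pi) * math.atan((k_max + k_min) / (k_max - k_min))

def principal_curvature_x(dz_dx, d2z_dx2):
    """Principal curvature along x (Equation 4)."""
    return d2z_dx2 / (1.0 + dz_dx ** 2) ** 1.5
```

At the center of the stimuli of Equation (3), &#x02202;<italic>z</italic>/&#x02202;<italic>x</italic> &#x0003D; 0, so <monospace>principal_curvature_x(0.0, K1)</monospace> reduces to <italic>K</italic><sub>1</sub>, as stated above; a symmetric saddle (<italic>K</italic><sub><italic>max</italic></sub> &#x0003D; &#x02212;<italic>K</italic><sub><italic>min</italic></sub>) has SI &#x0003D; 0.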
</sec>
<sec>
<title>2.1.2. AIP tuning</title>
<p>Following Sakata et al. (<xref ref-type="bibr" rid="B49">1998</xref>) and Murata et al. (<xref ref-type="bibr" rid="B35">2000</xref>) and consistent with the role of AIP in grasping (Fagg and Arbib, <xref ref-type="bibr" rid="B14">1998</xref>), we took the visual-dominant neurons in AIP to be responsive to three-dimensional shape. Available tuning curves (e.g., Murata et al., <xref ref-type="bibr" rid="B35">2000</xref>) span small numbers of data points relative to the large space of shape variations that are relevant to hand pre-shaping. For this reason we fit models to various &#x0201C;augmented&#x0201D; tuning curves that matched published tuning curves for some shapes, and made assumptions about how these neurons might respond to other shapes (see <bold>Figure 2</bold>). These assumptions were based on additional data for separate AIP neurons (see below). Our augmented tuning curves spanned four of the shapes in Murata et al. (<xref ref-type="bibr" rid="B35">2000</xref>), specifically a sphere, cylinder, cube, and plate. Two other shapes (ring and cone) were omitted for simplicity, because they require additional superquadric shape parameters (see Section 2.2). The augmented tuning curves spanned four sizes and four orientations for each of the four shapes. Due to symmetries in the shapes, there were a total of 36 points in these tuning curves (see Figure <xref ref-type="fig" rid="F1">1</xref>). Four of these points corresponded to AIP data, and the rest (the augmented points) were extrapolated from the data.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p><bold>The complete set of 36 shapes used in the augmented tuning curves</bold>. Four basic shapes (sphere, cube, plate, and cylinder) were adapted from Murata et al. (<xref ref-type="bibr" rid="B35">2000</xref>). In order to constrain the models more fully, and in particular to ensure that tuning curves included more points than there were parameters in our models, we augmented these basic shapes by adding copies with different sizes (shown with 4 different colors) and orientations (i.e., horizontal, vertical, tilted forward 45&#x000B0;, tilted backward 45&#x000B0;). Note that due to the symmetry of the basic shapes, some orientations are redundant (e.g., rotating a sphere does not create a distinguishable shape).</p></caption>
<graphic xlink:href="fncom-08-00132-g0001.tif"/>
</fig>
<p>We based the augmented points on additional data from other AIP neurons, including aggregate data. Murata et al. (<xref ref-type="bibr" rid="B35">2000</xref>) provide shape-tuning curves for six different object-type visual-dominant AIP neurons. We tested different augmented versions of these curves with various combinations of size and orientation tuning (see Figure <xref ref-type="fig" rid="F2">2</xref>). Murata et al. (<xref ref-type="bibr" rid="B35">2000</xref>) reported (without plotting shape tuning for these neurons) that most object-type neurons were orientation selective, and that 16/26 were size-selective. Therefore, we created two augmented tuning curves for each of the six shape-tuning curves. Both were orientation-selective; one was size-selective and the other was size-invariant. For the size-selective tuning curves we assumed that spike rate increased monotonically with size (consistent with Murata et al., <xref ref-type="bibr" rid="B35">2000</xref>, Figure 19; note that preference for intermediate sizes was reported only for motor-dominant neurons). We assumed that orientation tuning was roughly Gaussian and fairly narrow (consistent with Murata et al., <xref ref-type="bibr" rid="B35">2000</xref>, Figure 18). Some AIP neurons are orientation selective with only mild selectivity across various elongated shapes (Sakata et al., <xref ref-type="bibr" rid="B49">1998</xref>). Therefore, we created a final augmented tuning curve that was orientation selective but responded equally to cylinders and plates. Figure <xref ref-type="fig" rid="F2">2</xref> shows an example of an augmented tuning curve and its relationship to the data. This procedure made the tuning curve optimization more challenging. This was important because even our simple cosine-tuned neuron models had more parameters than the number of points in the published tuning curves (see Section 3). It also allowed us to make use of additional AIP data.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p><bold>An example of an augmented AIP tuning curve</bold>. <bold>(A)</bold> Tuning curve adapted from Murata et al. (<xref ref-type="bibr" rid="B35">2000</xref>), Selectivity for the shape, size, and orientation of objects for grasping in neurons of monkey parietal area AIP, 2580-2601, with permission. (See their Figure 11.) <bold>(B)</bold> The four points from the same tuning curve that belong to the basic superquadric family (a ring and cone are excluded from the current study). The spike rates are plotted as 3D bars. <bold>(C)</bold> An augmented tuning curve that includes the points in <bold>(B)</bold>, as well as other rotations and scales. This augmented tuning curve is both size-tuned and orientation-tuned, as were the majority of object-type visual neurons in Murata et al. (<xref ref-type="bibr" rid="B35">2000</xref>). Another large minority were orientation-tuned but not size-tuned. As in Figure <xref ref-type="fig" rid="F1">1</xref>, the colors correspond to different sizes.</p></caption>
<graphic xlink:href="fncom-08-00132-g0002.tif"/>
</fig>
</sec>
</sec>
<sec>
<title>2.2. Superquadrics</title>
<p>We modeled AIP shape tuning both on the parameters of the superquadric family of shapes, and on an Isomap dimension reduction of depth features. The superquadric family is a continuum that includes cuboids, ellipsoids, spheres, octahedra, and cylinders as examples. Superquadrics are often used to approximate observed shapes as an intermediate step in robotic grasp control (Ikeuchi and Hebert, <xref ref-type="bibr" rid="B25">1996</xref>; Biegelbauer and Vincze, <xref ref-type="bibr" rid="B4">2007</xref>; Goldfeder et al., <xref ref-type="bibr" rid="B19">2007</xref>; Huebner et al., <xref ref-type="bibr" rid="B23">2008</xref>; Duncan et al., <xref ref-type="bibr" rid="B11">2013</xref>). In this context, superquadric shape parameters are typically estimated from 3D point-cloud data using iterative non-linear optimization methods (Huebner et al., <xref ref-type="bibr" rid="B23">2008</xref>).</p>
<p>Their role in robotics suggests that superquadrics are a plausible model of AIP shape tuning. Specifically, they can be parameterized from visual information and they contain information about an object that is useful as a basis for grasp planning. One goal of the present study was to examine their physiological plausibility more closely, by fitting superquadric-tuned neuron models to AIP tuning curves. The surface of a superquadric shape is defined in <italic>x</italic>&#x02212;<italic>y</italic>&#x02212;<italic>z</italic> space as</p>
<disp-formula id="E7"><label>(4)</label><mml:math id="M11"><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:mfrac><mml:mi>x</mml:mi><mml:mrow><mml:msub><mml:mi>A</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow></mml:mfrac></mml:mrow><mml:mo>|</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mn>2</mml:mn><mml:mo>/</mml:mo><mml:msub><mml:mi>&#x003F5;</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow></mml:msup><mml:mo>+</mml:mo><mml:msup><mml:mrow><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:mfrac><mml:mi>y</mml:mi><mml:mrow><mml:msub><mml:mi>A</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:mfrac></mml:mrow><mml:mo>|</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mn>2</mml:mn><mml:mo>/</mml:mo><mml:msub><mml:mi>&#x003F5;</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:msup><mml:mo>+</mml:mo><mml:msup><mml:mrow><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:mfrac><mml:mi>z</mml:mi><mml:mrow><mml:msub><mml:mi>A</mml:mi><mml:mn>3</mml:mn></mml:msub></mml:mrow></mml:mfrac></mml:mrow><mml:mo>|</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mn>2</mml:mn><mml:mo>/</mml:mo><mml:msub><mml:mi>&#x003F5;</mml:mi><mml:mn>3</mml:mn></mml:msub></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo></mml:mrow></mml:math></disp-formula>
<p>where <italic>A</italic><sub><italic>i</italic></sub> &#x0003E; 0 are scale parameters and &#x003F5;<sub><italic>i</italic></sub> &#x0003E; 0 are curvature parameters. Values of &#x003F5; close to zero correspond to squared corners, while values close to one correspond to rounded corners. For example, a sphere has <italic>A</italic><sub>1</sub> &#x0003D; <italic>A</italic><sub>2</sub> &#x0003D; <italic>A</italic><sub>3</sub> and &#x003F5;<sub>1</sub> &#x0003D; &#x003F5;<sub>2</sub> &#x0003D; &#x003F5;<sub>3</sub> &#x0003D; 1. We used one additional parameter, &#x003B8;, to describe the orientation of the superquadric; &#x003B8; comprised three angles, one per coordinate axis. The superquadric is rotated by applying the rotation matrix given in Equation 5.</p>
<disp-formula id="E8"><label>(5)</label><mml:math id="M12"><mml:mrow><mml:mi>R</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mn>3</mml:mn></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mtable><mml:mtr><mml:mtd><mml:mrow><mml:mi>cos</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x000B7;</mml:mo><mml:mi>cos</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mn>3</mml:mn></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mtd><mml:mtd><mml:mrow><mml:mi>cos</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x000B7;</mml:mo><mml:mi>sin</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mn>3</mml:mn></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:mi>sin</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x000B7;</mml:mo><mml:mi>sin</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x000B7;</mml:mo><mml:mi>cos</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mn>3</mml:mn></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mtd><mml:mtd><mml:mrow><mml:mi>sin</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x000B7;</mml:mo><mml:mi>sin</mml:mi><mml:mo 
stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mn>3</mml:mn></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mi>cos</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x000B7;</mml:mo><mml:mi>sin</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x000B7;</mml:mo><mml:mi>cos</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mn>3</mml:mn></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mi>cos</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x000B7;</mml:mo><mml:mi>sin</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mn>3</mml:mn></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mtd><mml:mtd><mml:mrow><mml:mi>cos</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x000B7;</mml:mo><mml:mi>cos</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mn>3</mml:mn></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mi>sin</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x000B7;</mml:mo><mml:mi>sin</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x000B7;</mml:mo><mml:mi>sin</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mn>3</mml:mn></mml:msub><mml:mo 
stretchy='false'>)</mml:mo></mml:mrow></mml:mtd><mml:mtd><mml:mrow><mml:mi>sin</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x000B7;</mml:mo><mml:mi>cos</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mn>3</mml:mn></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:mi>cos</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x000B7;</mml:mo><mml:mi>sin</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x000B7;</mml:mo><mml:mi>sin</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mn>3</mml:mn></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mrow><mml:mi>sin</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mtd><mml:mtd><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mi>sin</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x000B7;</mml:mo><mml:mi>cos</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mtd><mml:mtd><mml:mrow><mml:mi>cos</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x000B7;</mml:mo><mml:mi>cos</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo 
stretchy='false'>)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:mrow></mml:math></disp-formula>
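<p>As a concrete illustration, the superquadric surface test and the rotation matrix of Equation 5 can be sketched as follows. This is a minimal sketch rather than the fitting code used in this study; it assumes the standard inside-outside form of the superquadric, in which a point lies on the surface when the function evaluates to one.</p>

```python
import numpy as np

def superquadric_f(point, A, eps):
    """Inside-outside function of a superquadric: f < 1 inside,
    f = 1 on the surface, f > 1 outside.  A = (A1, A2, A3) are scale
    parameters; eps = (eps1, eps2, eps3) are curvature parameters."""
    x, y, z = point
    return (abs(x / A[0]) ** (2.0 / eps[0])
            + abs(y / A[1]) ** (2.0 / eps[1])
            + abs(z / A[2]) ** (2.0 / eps[2]))

def rotation_matrix(t1, t2, t3):
    """Rotation matrix R(theta1, theta2, theta3) of Equation 5."""
    c1, s1 = np.cos(t1), np.sin(t1)
    c2, s2 = np.cos(t2), np.sin(t2)
    c3, s3 = np.cos(t3), np.sin(t3)
    return np.array([
        [c2 * c3,  c1 * s3 + s1 * s2 * c3,  s1 * s3 - c1 * s2 * c3],
        [-c2 * s3, c1 * c3 - s1 * s2 * s3,  s1 * c3 + c1 * s2 * s3],
        [s2,       -s1 * c2,                c1 * c2],
    ])
```

<p>For a sphere (equal scales, all &#x003F5; = 1) the function reduces to the familiar sum-of-squares test, and the matrix is orthonormal for any angles.</p>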
<p>We generated a database of 40,000 shapes that included spheres, cylinders, plates, and cubes as well as variations on these shapes with different scales in each dimension, and rotated versions of them. Our database contained roughly equal numbers of box-like, sphere-like, and cylinder-like shapes. For round edges we set &#x003F5; &#x0003D; 1. For squared edges we drew &#x003F5; from an exponential distribution that was shifted slightly away from zero, <italic>p</italic> &#x0003D; 10H(&#x003F5; &#x02212; &#x003B7;) exp(&#x02212;(&#x003F5; &#x02212; &#x003B7;)/0.1) with &#x003B7; &#x0003D; 0.01, where <italic>H</italic> is the Heaviside step function. The shift away from 0 (perfectly sharp corners) helped to avoid numerical problems. The objects had widths between 0.02 m and 0.12 m. We also allowed arbitrary rotations in three dimensions (except where symmetry made rotations redundant), so that each shape had a total of nine parameters.</p>
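<p>The sampling of the curvature parameters described above can be sketched as follows. The helper function is hypothetical, but the shifted exponential density matches the one given in the text, with &#x003B7; = 0.01 and scale 0.1.</p>

```python
import random

def sample_epsilon(rounded, eta=0.01, scale=0.1):
    """Draw one curvature parameter.  Rounded edges use eps = 1; squared
    edges draw eps from p(eps) = (1/scale) exp(-(eps - eta)/scale) for
    eps >= eta, i.e., an exponential distribution shifted slightly away
    from zero so that corners are never perfectly sharp."""
    if rounded:
        return 1.0
    return eta + random.expovariate(1.0 / scale)
```

<p>The mean of the squared-edge draws is &#x003B7; plus the exponential scale, i.e., about 0.11.</p>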
<p>This study considers only the basic superquadric family, which does not include all the shapes for which AIP responses have been reported. However, the basic family can also be extended in various ways to deal with more complex shapes. For example, hyperquadrics introduce asymmetry (Kumar et al., <xref ref-type="bibr" rid="B31">1995</xref>), and trees of superquadrics can be used to approximate complex shapes with arbitrary precision (Goldfeder et al., <xref ref-type="bibr" rid="B19">2007</xref>).</p>
</sec>
<sec>
<title>2.3. Creation of depth maps</title>
<p>CIP receives input from V3 and V3A, which encode binocular disparity information (Anzai et al., <xref ref-type="bibr" rid="B2">2011</xref>). Disparity is monotonically related to visual depth, or distance from observer to surface. As a simplified model of this input we created depth maps, i.e., grids of distances from a viewpoint to object surfaces. We created depth maps from the shapes in our superquadric database by finding intersections of the surfaces with rays cast at various visual angles from the viewpoint. We used a 16 &#x000D7; 16 grid of visual angles. Grid spacing was finer near the center than in the periphery, both to reflect the higher visual acuity near the fovea and to ensure that at least a few rays intersected the smallest shapes (specifically, distances from the center were <italic>a</italic><sup>1.5</sup>, where <italic>a</italic> were evenly-spaced points). The grid covered &#x000B1; 10&#x000B0; of visual angle in each direction. The object centers were at a depth of 0.75 m from the viewpoint. Depth at each grid point was found as the intersection of the superquadric surface with a line from the observation point (Figure <xref ref-type="fig" rid="F3">3</xref>).</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p><bold>Illustration of the depth map construction process</bold>. Each superquadric was centered at (0, 0, 0.75) relative to an observer at (0, 0, 0). Rays were traced between the observation point and a grid of points in the frontoparallel plane at <italic>z</italic> &#x0003D; 0.75, and intersections (red dots) were found with the superquadric surface. The depth map consisted of a grid of distances from (0, 0, 0) to these intersections.</p></caption>
<graphic xlink:href="fncom-08-00132-g0003.tif"/>
</fig>
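<p>The foveated ray grid and the depth-map construction can be illustrated with an analytically tractable special case. The sketch below casts rays against a sphere rather than a general superquadric (the latter requires numerical root finding); the grid spacing follows the <italic>a</italic><sup>1.5</sup> rule described above, and the boundary handling (NaN for rays that miss the object) is our assumption.</p>

```python
import numpy as np

def foveated_grid(n=16, max_angle_deg=10.0):
    """1-D grid of visual angles, denser near the fovea: distances from
    the center are a**1.5 for evenly spaced a in (0, 1]."""
    a = np.arange(1, n // 2 + 1) / (n // 2)
    half = (a ** 1.5) * np.radians(max_angle_deg)
    return np.concatenate([-half[::-1], half])

def sphere_depth_map(center_z=0.75, radius=0.04, n=16):
    """Depth map of a sphere centered at (0, 0, center_z), viewed from the
    origin.  Entries are ray lengths to the nearest intersection; rays
    that miss the sphere yield NaN."""
    ang = foveated_grid(n)
    az, el = np.meshgrid(ang, ang)
    d = np.stack([np.tan(az), np.tan(el), np.ones_like(az)], axis=-1)
    d /= np.linalg.norm(d, axis=-1, keepdims=True)   # unit ray directions
    c = np.array([0.0, 0.0, center_z])
    b = d @ c                                        # projection of center onto each ray
    disc = b ** 2 - (c @ c - radius ** 2)            # quadratic discriminant
    return b - np.sqrt(np.where(disc >= 0, disc, np.nan))
```

<p>A central ray meets a 4 cm sphere at a depth just under 0.75 m, while a 10&#x000B0; corner ray misses it entirely.</p>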
</sec>
<sec>
<title>2.4. Isomap shape parameters</title>
<p>Within the superquadric family there is typically more than one set of parameters that can describe a given shape. For example, a tall box can be parameterized either as a tall box or as a wide box turned on its end. This is not very problematic in robotics, because an iterative search for matching parameters finds one of these solutions. However, our goal was to model a feedforward mapping from depth (V3A) to shape parameters (AIP). In order to use the superquadric parameters as the basis for AIP tuning we therefore needed the superquadric-to-depth function to be invertible. We achieved this by restricting the ranges of the angles. For example, for box-like shapes we restricted all angles to within &#x000B1;&#x003C0;/4. This resulted in a unique set of superquadric parameters for each shape. However, large discontinuities remained, in that some very similar shapes had very different parameters. For example, a tall box at an angle slightly less than &#x003C0;/4 has a depth map very similar to that of a wide box at an angle just greater than &#x02212;&#x003C0;/4. Similar discontinuities seem to exist regardless of the angle convention. We anticipated that these discontinuities would impair a feedforward mapping in a neural network, so we also explored an alternative low-dimensional shape parameterization.</p>
<p>In the alternative model, neurons were tuned to an Isomap (Tenenbaum et al., <xref ref-type="bibr" rid="B56">2000</xref>) derived from depth data. Isomap is a non-linear dimension-reduction method in which samples are embedded in a lower-dimensional space in such a way that geodesic distances (i.e., distances along the shortest paths through edges between neighboring points) are maintained as well as possible. This method ensured that similar depth maps would be close together in the shape-parameter space, minimizing parameter discontinuities like those of the superquadric parameters. We constructed an Isomap of the first and second spatial derivatives of the depth maps in the horizontal and vertical directions.</p>
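<p>Isomap itself is standard; a self-contained sketch of the algorithm (k-nearest-neighbor graph, shortest-path geodesic distances, then classical multidimensional scaling) is shown below. This is an illustration of the method, not the implementation used here, and the Floyd-Warshall step limits it to small sample sets.</p>

```python
import numpy as np

def isomap(X, n_neighbors=5, n_components=2):
    """Minimal Isomap sketch: k-NN graph -> geodesic distances
    (Floyd-Warshall) -> classical MDS.  X is (n_samples, n_features)."""
    n = len(X)
    # Pairwise Euclidean distances.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Keep edges only to the k nearest neighbors (symmetrized).
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    for i in range(n):
        for j in np.argsort(D[i])[1:n_neighbors + 1]:
            G[i, j] = G[j, i] = D[i, j]
    # Geodesic (shortest-path) distances through the graph.
    for k in range(n):
        G = np.minimum(G, G[:, k:k + 1] + G[k:k + 1, :])
    # Classical MDS on the geodesic distance matrix.
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (G ** 2) @ J
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:n_components]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

<p>Applied to points sampled along a curved arc, the one-dimensional embedding orders the points by arc length, which is exactly the property that keeps similar depth maps close together in the shape-parameter space.</p>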
<p>We tested whether our augmented AIP tuning curves (above) were consistent with cosine tuning for these shape parameters. We also tested how well these shape parameters could be approximated by a neural network with CIP parameters as input.</p>
</sec>
<sec>
<title>2.5. Neural network models of CIP-to-AIP map</title>
<p>In addition to fitting cosine-LIF models to neural tuning curves in CIP and AIP, we also developed feedforward networks to map from CIP variables to AIP variables. Our general approach was to decode shape parameters from the spike rates of CIP models.</p>
<p>We experimented with several different networks including neural engineering framework networks (Eliasmith and Anderson, <xref ref-type="bibr" rid="B12">2003</xref>; Eliasmith et al., <xref ref-type="bibr" rid="B13">2012</xref>), multilayer perceptrons trained with the back-propagation algorithm (Haykin, <xref ref-type="bibr" rid="B22">1999</xref>) and convolutional networks (LeCun et al., <xref ref-type="bibr" rid="B32">1998</xref>).</p>
<p>In each case the output units were linear. Linear decoding of the tuning parameters was of interest because decoding weights can be multiplied with preferred directions to give synaptic weights for any cosine tuning curve over the decoded variables (Eliasmith and Anderson, <xref ref-type="bibr" rid="B12">2003</xref>). Specifically, suppose we have presynaptic rates <bold>r</bold><sub><italic>pre</italic></sub> and linearly decoded estimates <inline-formula><mml:math id="M13"><mml:mrow><mml:mover accent='true'><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>p</mml:mi></mml:mstyle><mml:mo stretchy='true'>&#x0005E;</mml:mo></mml:mover></mml:mrow></mml:math></inline-formula> &#x0003D; &#x003A6;<bold>r</bold><sub><italic>pre</italic></sub> of shape parameters <bold>p</bold>, where &#x003A6; is a matrix of decoding weights. In this case the family of cosine tuning curves over <inline-formula><mml:math id="M14"><mml:mrow><mml:mover accent='true'><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>p</mml:mi></mml:mstyle><mml:mo stretchy='true'>&#x0005E;</mml:mo></mml:mover></mml:mrow></mml:math></inline-formula> is</p>
<disp-formula id="E9"><label>(6)</label><mml:math id="M15"><mml:mrow><mml:msub><mml:mi>r</mml:mi><mml:mrow><mml:mi>p</mml:mi><mml:mi>o</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>G</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msup><mml:mover accent='true'><mml:mi>&#x003D5;</mml:mi><mml:mo>&#x002DC;</mml:mo></mml:mover><mml:mi>T</mml:mi></mml:msup><mml:mover accent='true'><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>p</mml:mi></mml:mstyle><mml:mo stretchy='true'>&#x0005E;</mml:mo></mml:mover><mml:mo>+</mml:mo><mml:mi>b</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mrow></mml:math></disp-formula>
<p>where <inline-formula><mml:math id="M16"><mml:mrow><mml:msup><mml:mover accent='true'><mml:mi>&#x003D5;</mml:mi><mml:mo>&#x002DC;</mml:mo></mml:mover><mml:mi>T</mml:mi></mml:msup><mml:mover accent='true'><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>p</mml:mi></mml:mstyle><mml:mo stretchy='true'>&#x0005E;</mml:mo></mml:mover><mml:mo>+</mml:mo><mml:mi>b</mml:mi></mml:mrow></mml:math></inline-formula> is the driving current, <inline-formula><mml:math id="M17"><mml:mover accent='true'><mml:mi>&#x003D5;</mml:mi><mml:mo>&#x002DC;</mml:mo></mml:mover></mml:math></inline-formula> is the neuron&#x00027;s preferred direction, <italic>G</italic> is a physiological model of the current-spike rate relationship, and <italic>b</italic> is a bias current. Such a tuning curve can then be obtained with synaptic weights (from all <italic>presynaptic</italic> neurons to a single <italic>postsynaptic</italic> neuron)</p>
<disp-formula id="E10"><label>(7)</label><mml:math id="M18"><mml:mrow><mml:msup><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>w</mml:mi></mml:mstyle><mml:mi>T</mml:mi></mml:msup><mml:mo>=</mml:mo><mml:msup><mml:mover accent='true'><mml:mi>&#x003D5;</mml:mi><mml:mo>&#x002DC;</mml:mo></mml:mover><mml:mi>T</mml:mi></mml:msup><mml:mi>&#x003A6;</mml:mi><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula>
<p>This allows us to draw general conclusions about how well our various models can account for AIP tuning, and how they would relate to future data.</p>
<p>Equations 6 and 7 are important components of the Neural Engineering Framework (Eliasmith and Anderson, <xref ref-type="bibr" rid="B12">2003</xref>; Eliasmith et al., <xref ref-type="bibr" rid="B13">2012</xref>), a method of developing large-scale neural circuit models.</p>
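<p>The weight construction of Equations 6 and 7 can be sketched numerically as follows. The population sizes, rate ranges, and the time constants of the LIF response function <italic>G</italic> are arbitrary choices for illustration.</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def G(J, tau_ref=0.002, tau_rc=0.02):
    """LIF steady-state rate (spikes/s) for driving current J; zero below
    the threshold J = 1."""
    J = np.asarray(J, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        r = 1.0 / (tau_ref - tau_rc * np.log(1.0 - 1.0 / J))
    return np.where(J > 1.0, r, 0.0)

n_pre, dim = 100, 3                     # 100 presynaptic neurons, 3 shape parameters
r_pre = rng.random(n_pre) * 50          # presynaptic spike rates
Phi = rng.standard_normal((dim, n_pre)) * 0.01   # linear decoding weights (Phi)
phi = rng.standard_normal(dim)          # postsynaptic preferred direction
b = 2.0                                 # bias current

p_hat = Phi @ r_pre                     # linearly decoded shape parameters
w = phi @ Phi                           # synaptic weights, w^T = phi^T Phi (Eq. 7)
rate_from_decode = G(phi @ p_hat + b)   # Eq. 6, driven by decoded parameters
rate_from_weights = G(w @ r_pre + b)    # same rate, driven directly by r_pre
```

<p>Because the weights are built from &#x003A6; and the preferred direction, driving the postsynaptic neuron directly with presynaptic rates reproduces the cosine tuning over the decoded variables.</p>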
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>3. Results</title>
<sec>
<title>3.1. CIP tuning</title>
<p>Figure <xref ref-type="fig" rid="F4">4</xref> shows an optimal fit of a cosine-tuned LIF model to a tuning curve from Katsuyama et al. (<xref ref-type="bibr" rid="B28">2010</xref>). Following their convention the spike rates are shown as a function of shape index, separately for the two curvedness levels. Inspection of the tuning curve revealed that it contained an expansive non-linearity, so we included Gaussian background noise in the model (as described in Section 2). To improve the fit further, in addition to tuning variables <italic>X</italic> &#x0003D; &#x02202;<sup>2</sup><italic>z</italic>/&#x02202;<italic>x</italic><sup>2</sup> and <italic>Y</italic> &#x0003D; &#x02202;<sup>2</sup><italic>z</italic>/&#x02202;<italic>y</italic><sup>2</sup> we introduced new tuning variables <inline-formula><mml:math id="M19"><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mn>2</mml:mn></mml:mfrac></mml:mrow></mml:math></inline-formula>(3(<italic>X</italic>)<sup>2</sup> &#x02212; 1) and <inline-formula><mml:math id="M20"><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mn>2</mml:mn></mml:mfrac></mml:mrow></mml:math></inline-formula>(3(<italic>Y</italic>)<sup>2</sup> &#x02212; 1). The rationale for their inclusion was that these are the non-linear functions for which linear reconstruction is (with reasonable assumptions) most accurate from populations of LIF neurons tuned to <italic>X</italic> and <italic>Y</italic> (Eliasmith and Anderson, <xref ref-type="bibr" rid="B12">2003</xref>). However, the fit to the Katsuyama et al. (<xref ref-type="bibr" rid="B28">2010</xref>) data remained poor despite these measures.</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p><bold>Fit of CIP model (squares) to tuning curve (circles) of an example neuron (0.04 &#x000B1; 5.22 spikes/s; mean error &#x000B1; <italic>SD</italic>)</bold>. The tuning curve is replotted from Katsuyama et al. (<xref ref-type="bibr" rid="B28">2010</xref>), with permission from Elsevier. In our model of CIP, neurons are cosine-tuned to five dimensions: depth, horizontal and vertical first spatial derivatives of depth, and horizontal and vertical second spatial derivatives of depth. The stimuli in Katsuyama et al. (<xref ref-type="bibr" rid="B28">2010</xref>) varied only in terms of the second derivatives. We also added non-linear tuning functions to improve the fit (see text). The left and right tuning curves are for two different levels of curvedness.</p></caption>
<graphic xlink:href="fncom-08-00132-g0004.tif"/>
</fig>
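<p>The role of the Gaussian background noise mentioned above can be sketched as follows. The noise standard deviation and time constants are illustrative values, and the expectation is computed by direct numerical integration; this is a sketch of the effect, not the exact procedure of Section 2.</p>

```python
import numpy as np

def lif_rate(J, tau_ref=0.002, tau_rc=0.02):
    """Noise-free LIF steady-state rate; the threshold is at J = 1."""
    J = np.asarray(J, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        r = 1.0 / (tau_ref - tau_rc * np.log(1.0 - 1.0 / J))
    return np.where(J > 1.0, r, 0.0)

def noisy_lif_rate(J, sigma=0.2, n=20001):
    """Expected rate E[G(J + eta)] under zero-mean Gaussian current noise
    eta, computed by numerical integration.  Noise smooths the hard
    threshold, producing a gradual, expansive onset of firing."""
    eta = np.linspace(-5.0 * sigma, 5.0 * sigma, n)
    p = np.exp(-0.5 * (eta / sigma) ** 2)
    p /= p.sum()
    return float(np.sum(lif_rate(J + eta) * p))
```

<p>Below threshold the noise-free rate is exactly zero, while the noisy expectation is small but positive and grows smoothly through threshold.</p>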
<p>We considered whether a linear-nonlinear receptive field model with depth inputs might produce a better fit. Such models are essentially cosine tuning models with multiple input variables on a grid. However, the depth stimuli in this case (see Equation 3) consisted of linear combinations of <italic>x</italic><sup>2</sup> and <italic>y</italic><sup>2</sup>, so any receptive-field model over the depth field has an equivalent cosine tuning model over <italic>K</italic><sub>1</sub> and <italic>K</italic><sub>2</sub>. Since no such cosine tuning model fit the data well, the neuron is not cosine tuned to either the depth map or the curvature parameters.</p>
<p>Figure <xref ref-type="fig" rid="F5">5</xref> shows an example of a more complex non-linear neuron model that fits the data. This model is based on non-linear interactions between nearby inputs on the same dendrite, which suggest that pyramidal cells may function similarly to multilayer perceptrons (Polsky et al., <xref ref-type="bibr" rid="B41">2004</xref>). The input to this model was a 3 &#x000D7; 3 depth grid. The model contained 50 dendritic branches, each of which was cosine tuned to the depths. The linear kernels (analogous to preferred directions) were random. The output of each branch was a sigmoid function of the point-wise product of the depth stimulus and the linear kernel. The spike rate was a least-squares optimal weighted sum of the branch outputs, with the weights found using a truncated singular-value-decomposition pseudoinverse that retained 14 singular values. We also created another version of this model (not shown) in which the tuning curve was augmented with additional stimuli (completing the outer circle of points in Figure <xref ref-type="fig" rid="F5">5B</xref>) and it was assumed that the neuron would respond to these stimuli at the background spike rate. This version of the model therefore fit 26 points, and we used 20 singular values in the pseudoinverse. The fit was similar in this case.</p>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p><bold>(A)</bold> Non-linear model (squares) of same neuron as in Figure <xref ref-type="fig" rid="F4">4</xref> (circles). <bold>(B)</bold> The same spike rates as <bold>(A)</bold> (black circles), re-plotted as a function of &#x02202;<sup>2</sup><italic>z</italic>/&#x02202;<italic>x</italic><sup>2</sup> and &#x02202;<sup>2</sup><italic>z</italic>/&#x02202;<italic>y</italic><sup>2</sup>, and the best model fit (mesh) (0.00 &#x000B1; 1.82 spikes/s; mean error &#x000B1; <italic>SD</italic>). The data plots (black circles) are adapted from Katsuyama et al. (<xref ref-type="bibr" rid="B28">2010</xref>), with permission from Elsevier.</p></caption>
<graphic xlink:href="fncom-08-00132-g0005.tif"/>
</fig>
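<p>The branch model described above can be sketched as follows. The random data at the end stand in for the 3 &#x000D7; 3 depth stimuli and recorded spike rates, which are not reproduced here; the branch count and number of retained singular values follow the text.</p>

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_branch_model(stimuli, rates, n_branches=50, n_sv=14):
    """Two-stage dendritic model: each branch applies a fixed random linear
    kernel to the depth input, followed by a sigmoid; branch outputs are
    combined by least-squares weights obtained from a truncated-SVD
    pseudoinverse with n_sv singular values."""
    kernels = rng.standard_normal((n_branches, stimuli.shape[1]))

    def branches(X):
        return 1.0 / (1.0 + np.exp(-X @ kernels.T))  # sigmoidal branch outputs

    B = branches(stimuli)
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    k = min(n_sv, len(s))
    pinv = Vt[:k].T @ np.diag(1.0 / s[:k]) @ U[:, :k].T  # truncated pseudoinverse
    w = pinv @ rates
    return lambda X: branches(X) @ w

# Hypothetical stand-in data: 14 stimuli on a 3 x 3 depth grid.
stimuli = rng.standard_normal((14, 9))
rates = rng.random(14) * 30.0
model = fit_branch_model(stimuli, rates)
```

<p>With 50 branches and 14 retained singular values the model interpolates 14 training rates, so the singular-value count governs how tightly the fit tracks the data.</p>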
<p>We also constructed another alternative model of this cell that was based on a more detailed model of V3A activity. Specifically, instead of a 3 &#x000D7; 3 depth grid, this model received input from seven non-linear functions of depth at each point. Five of these were Gaussian functions based on &#x0201C;tuned near,&#x0201D; &#x0201C;tuned zero,&#x0201D; and &#x0201C;tuned far&#x0201D; neurons (Poggio et al., <xref ref-type="bibr" rid="B40">1988</xref>). Two were sigmoidal functions based on &#x0201C;near&#x0201D; and &#x0201C;far&#x0201D; tuning (Poggio et al., <xref ref-type="bibr" rid="B40">1988</xref>). This model (not shown) reproduced the tuning curve somewhat less accurately than the non-linear cell model above. This was the case regardless of minor variations in the set of input tuning functions and their parameters.</p>
<p>Figure <xref ref-type="fig" rid="F6">6</xref> shows a cosine-tuning fit of data from Tsutsui et al. (<xref ref-type="bibr" rid="B61">2002</xref>). This tuning curve is an average over multiple cells that were tuned to depth gradients of visual stimuli. The best fitting cosine-tuning model has a notably different shape than the aggregate data. In particular, the actual spike rates are fairly constant far away from the preferred stimulus, while the model spike rates continue to decrease farther from the preferred stimulus.</p>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption><p><bold>Cosine tuning model (left) of spike rate data aggregated across neurons (right) (0.03 &#x000B1; 3.00 spikes/s; mean error &#x000B1; <italic>SD</italic>)</bold>. The right panel is from Tsutsui et al. (<xref ref-type="bibr" rid="B61">2002</xref>). Reprinted with permission from AAAS. In the right panel, <italic>N</italic> is the number of neurons and <italic>r</italic> is the regression coefficient.</p></caption>
<graphic xlink:href="fncom-08-00132-g0006.tif"/>
</fig>
<p>Rosenberg et al. (<xref ref-type="bibr" rid="B46">2013</xref>) provide several additional CIP tuning curves over 49 different plane stimuli. Some of these tuning curves are clearly not consistent with cosine tuning for first derivatives of depth or disparity, e.g., with multimodal responses to surface tilt. We fit the non-linear model of Figure <xref ref-type="fig" rid="F5">5</xref> to seven of these tuning curves (their Figures 4, 5B). Using 20 singular values, the correlations between data and our best model fits were <italic>r</italic> &#x0003D; 0.98 &#x000B1; 0.01 <italic>SD</italic> for the four tuning curves in their Figure 4, and <italic>r</italic> &#x0003D; 0.78 &#x000B1; 0.09 <italic>SD</italic> for the three tuning curves in their Figure 5B. (These fits are somewhat closer than fits reported by Rosenberg et al. to Bingham functions, which is unsurprising as our model has more parameters.) Using 40 singular values, our correlations improved to <italic>r</italic> &#x0003D; 0.91 &#x000B1; 0.02 <italic>SD</italic> for the tuning curves in their Figure 5B.</p>
<p>In summary, the spike rates of these CIP neurons varied with the first and second spatial derivatives of depth, but not in a way that is consistent with cosine tuning to either the depth map, its first and second derivatives, or low-order polynomial functions of these derivatives. Other models, which are physiologically plausible but more complex, fit the data more closely.</p>
</sec>
<sec>
<title>3.2. AIP tuning</title>
<p>Figure <xref ref-type="fig" rid="F7">7</xref> shows an example cosine-tuning fit of an augmented tuning curve in superquadric space. This fit is based on a noise-free LIF neuron. For this dataset the shapes were rotated only in one dimension, so we avoided angle discontinuities by using a 2D direction vector in place of the angle. The optimized parameters were the 8-dimensional preferred direction vector <inline-formula><mml:math id="M21"><mml:mover accent='true'><mml:mi>&#x003D5;</mml:mi><mml:mo>&#x002DC;</mml:mo></mml:mover></mml:math></inline-formula>, the bias <italic>b</italic>, and the membrane time constant &#x003C4;<sub><italic>RC</italic></sub>. Across the 36 points in the augmented tuning curve, the spike rate error (difference between augmented and model spike rates) was 0.70 &#x000B1; 1.57 spikes/s (mean &#x000B1; <italic>SD</italic>).</p>
<fig id="F7" position="float">
<label>Figure 7</label>
<caption><p><bold>Best fit of a model neuron that is cosine-tuned over superquadric parameters to an augmented tuning curve</bold>. This augmented tuning curve is size-invariant. Color corresponds to the size of the object (see Figure <xref ref-type="fig" rid="F1">1</xref>). <bold>Left</bold>: Augmented tuning curve. This includes data replotted with permission from Murata et al. (<xref ref-type="bibr" rid="B35">2000</xref>). <bold>Center</bold>: Best fit of a cosine-tuned neuron to the augmented tuning curve. <bold>Right</bold>: Error (ideal minus model augmented tuning curve).</p></caption>
<graphic xlink:href="fncom-08-00132-g0007.tif"/>
</fig>
<p>Figure <xref ref-type="fig" rid="F8">8</xref> shows the means and standard deviations of spike-rate errors for each of the augmented tuning curves. Good fits were obtained for some of the neurons (&#x00023;1 and &#x00023;3 in Murata et al., <xref ref-type="bibr" rid="B35">2000</xref>, Figure 10, and the second in Figure 11, which we label &#x00023;5). This was true for both size-invariant and size-selective augmented tuning curves. Neuron &#x00023;1 had low spike rates for the stimuli that we studied. Neuron &#x00023;3 was highly selective for cylinders, and &#x00023;5 was more broadly tuned but also preferred cylinders. The worst fits were obtained for neuron &#x00023;6, which responded strongly to plates and cylinders but not to cubes or spheres.</p>
<fig id="F8" position="float">
<label>Figure 8</label>
<caption><p><bold>Quality of fit of cosine tuning model over superquadric parameters with various augmented tuning curves</bold>. The plot shows, for each tuning curve, the mean &#x000B1; <italic>SD</italic> of the errors over the grid of shapes, sizes, and orientations shown in Figure <xref ref-type="fig" rid="F7">7</xref>. Note that in this model we can trivially achieve invariance to any superquadric parameter by setting the corresponding component of the preferred direction to zero.</p></caption>
<graphic xlink:href="fncom-08-00132-g0008.tif"/>
</fig>
<p>Figure <xref ref-type="fig" rid="F9">9A</xref> shows the means and standard deviations of spike-rate errors for each of the augmented tuning curves in an 8-dimensional Isomap space. We plot the results for the 8-dimensional Isomap in order to match the number of superquadric parameters. The cosine tuning errors (&#x02212;0.88 &#x000B1; 10.68 spikes/s; mean &#x000B1; <italic>SD</italic>) were larger than those in the superquadric space (&#x02212;0.53 &#x000B1; 6.75 spikes/s). The difference between these variances was significant according to Levene&#x00027;s test [<italic>W</italic><sub>(1, 910)</sub> &#x0003D; 41.3; <italic>p</italic> &#x0003C; 0.001].</p>
<fig id="F9" position="float">
<label>Figure 9</label>
<caption><p><bold>Quality of fit of cosine tuning model over Isomap parameters with the same augmented tuning curves</bold>. <bold>(A)</bold>, Mean &#x000B1; <italic>SD</italic> of errors with 8-dimensional Isomap (the same number of parameters as the superquadric family used in Figure <xref ref-type="fig" rid="F8">8</xref>). Across tuning curves the error is 0.89 &#x000B1; 10.68. <bold>(B)</bold>, Standard deviation of error over all augmented tuning curves vs. dimension of the Isomap. The error declines sharply with increasing dimension.</p></caption>
<graphic xlink:href="fncom-08-00132-g0009.tif"/>
</fig>
<p>Figure <xref ref-type="fig" rid="F9">9B</xref> shows how the error declined with higher-dimensional Isomaps. Error variance with the 16-dimensional Isomap (&#x02212;1.77 &#x000B1; 6.35) was not significantly different from that of the 8-parameter superquadric [Levene&#x00027;s Test; <italic>W</italic><sub>(1, 910)</sub> &#x0003D; 1.83; <italic>p</italic> &#x0003D; 0.18]. (Recalculating the variances around 0 instead of &#x02212;1.77 and &#x02212;0.89 did not make the difference significant; <italic>p</italic> &#x0003D; 0.058). The cosine-tuning fits were excellent in the 32-dimensional Isomap space, with significantly lower variance [&#x02212;0.17 &#x000B1; 1.29 spikes/s; <italic>W</italic><sub>(1, 910)</sub> &#x0003D; 316.2; <italic>p</italic> &#x0003C; 0.001]. This higher-dimensional shape representation is therefore consistent with the data and with the augmented tuning curves.</p>
</sec>
<sec>
<title>3.3. Mapping from CIP to AIP</title>
<p>We trained multi-layer perceptrons in order to understand whether the superquadric or Isomap models of AIP were more consistent with mapping from CIP input. Because CIP neurons are sensitive to depth and to first and second spatial derivatives of depth, we used these as inputs to the networks. Specifically, the inputs consisted of 16 &#x000D7; 16 depth maps, their 16 &#x000D7; 16 horizontal and vertical derivatives, and their 16 &#x000D7; 16 horizontal and vertical second derivatives. The derivatives were approximated by convolving with 3 &#x000D7; 3 kernels (e.g., [1 1 1]<sup><italic>T</italic></sup>[1 0 &#x02212;1] and [1 1 1]<sup><italic>T</italic></sup>[0.5 &#x02212;1 0.5]). The total number of inputs was therefore 16 &#x000D7; 16 &#x000D7; 5 &#x0003D; 1280. The hidden layers had logistic activation functions. The weights and biases were trained with the backpropagation algorithm in Matlab&#x00027;s Neural Network Toolbox. The output layer had a linear activation function in order to model the input to cosine-tuned neurons, as described in the Methods. A dataset of 40000 rotated superquadric objects was generated, from which depth and curvature images were derived. This dataset was divided into 28000 objects for training the network and 12000 objects for validation.</p>
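The separable 3 &#x000D7; 3 kernels above can be written as outer products, and the five maps stacked into the 1280-element input vector. This is a minimal sketch in Python (the study used Matlab's toolbox); the function name and the use of SciPy's convolve2d are illustrative assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

# The 3x3 kernels in the text are separable outer products:
# a [1 1 1]^T smoothing column times a derivative row.
col = np.ones((3, 1))
d1 = col @ np.array([[1.0, 0.0, -1.0]])   # first-derivative kernel
d2 = col @ np.array([[0.5, -1.0, 0.5]])   # second-derivative kernel

def network_inputs(depth):
    # Stack the depth map with its horizontal/vertical first and second
    # derivatives: 16 * 16 * 5 = 1280 inputs, as in the text.
    # (Transposing the kernels for the vertical derivatives is an assumption.)
    maps = [depth,
            convolve2d(depth, d1, mode="same"),    # horizontal 1st derivative
            convolve2d(depth, d1.T, mode="same"),  # vertical 1st derivative
            convolve2d(depth, d2, mode="same"),    # horizontal 2nd derivative
            convolve2d(depth, d2.T, mode="same")]  # vertical 2nd derivative
    return np.concatenate([m.ravel() for m in maps])

x = network_inputs(np.random.default_rng(1).random((16, 16)))
print(x.shape)  # (1280,)
```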
<p>Figure <xref ref-type="fig" rid="F10">10</xref> shows results from networks with two hidden layers, the first with 600 units and the second with 300 units. The scatter plots show the network&#x00027;s output vs. the actual values of the validation dataset. Figure <xref ref-type="fig" rid="F10">10A</xref> shows the network&#x00027;s approximation of the superquadric shape parameter &#x003F5;<sub>1</sub>. The other scatterplots in Figures <xref ref-type="fig" rid="F10">10C,E</xref> illustrate the network&#x00027;s approximation of the scale and orientation parameters <italic>A</italic><sub>1</sub> and &#x003B8;<sub>1</sub>. Approximation of the other six parameters was similar (e.g., the scatterplots for &#x003F5;<sub>2</sub> and &#x003F5;<sub>3</sub> resemble that for &#x003F5;<sub>1</sub>). The scatterplots in Figures <xref ref-type="fig" rid="F10">10B,D,F</xref> illustrate the network&#x00027;s approximation of Isomap parameters. The first, fourth, and seventh dimensions are shown as illustrative examples.</p>
<fig id="F10" position="float">
<label>Figure 10</label>
<caption><p><bold>Regression plots comparing neural network approximations of superquadric parameters and Isomap parameters</bold>. <bold>(A)</bold> superquadric epsilon parameter, <bold>(B)</bold> Isomap dimension 1 parameter, <bold>(C)</bold> superquadric scale parameter, <bold>(D)</bold> Isomap dimension 4 parameter, <bold>(E)</bold> superquadric rotation angle parameter, and <bold>(F)</bold> Isomap dimension 7 parameter.</p></caption>
<graphic xlink:href="fncom-08-00132-g0010.tif"/>
</fig>
<p>Approximation of the Isomap parameters was much more accurate than approximation of the superquadric parameters. This outcome was very consistent across a variety of networks of different sizes, with one or two hidden layers, with pre-training of hidden layers as autoencoders, etc. We also experimented with networks that contained a hidden layer of LIF neurons with random preferred directions over various local kernels, and optimal linear estimates of the shape parameters from the hidden-layer activity (Eliasmith and Anderson, <xref ref-type="bibr" rid="B12">2003</xref>). The results were also similar in this case, although (as expected) more neurons were required to achieve performance like that of the more fully-optimized multilayer perceptrons.</p>
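The "optimal linear estimate" step mentioned above reduces to a least-squares problem on the hidden-layer activity. The sketch below is an illustration under assumed synthetic data: rectified-linear responses stand in for LIF rates, with random preferred directions as in the text; it is not the paper's implementation.

```python
import numpy as np

# Given hidden-layer activities A (trials x neurons) and target shape
# parameters X (trials x dims), the optimal linear decoding weights D
# solve min ||A D - X||^2, i.e., ordinary least squares.
rng = np.random.default_rng(2)
n_trials, n_neurons, n_dims = 500, 200, 8

X = rng.standard_normal((n_trials, n_dims))      # shape parameters to decode
E = rng.standard_normal((n_dims, n_neurons))     # random preferred directions
E /= np.linalg.norm(E, axis=0)
# Rectified-linear responses stand in for LIF rates (an assumption here).
A = np.maximum(X @ E + rng.normal(0, 0.1, (n_trials, n_neurons)), 0)

D, *_ = np.linalg.lstsq(A, X, rcond=None)        # decoding weights
X_hat = A @ D
rmse = np.sqrt(np.mean((X_hat - X) ** 2))
print(f"decoding RMSE: {rmse:.3f}")
```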
<p>Figure <xref ref-type="fig" rid="F11">11</xref> compares the distribution of the network&#x00027;s Isomap approximation errors with the distribution of pairwise distances between shape examples in our database. The errors were much smaller than typical distances between examples.</p>
<fig id="F11" position="float">
<label>Figure 11</label>
<caption><p><bold>Histogram of Euclidean distances in Isomap space vs. the root-sum-square error</bold>.</p></caption>
<graphic xlink:href="fncom-08-00132-g0011.tif"/>
</fig>
<p>We also experimented with a wide variety of larger networks, including convolutional networks, using the cuda-convnet package (Krizhevsky et al., <xref ref-type="bibr" rid="B29">2012</xref>). These networks did not substantially outperform the multilayer perceptron of Figures <xref ref-type="fig" rid="F10">10</xref>, <xref ref-type="fig" rid="F11">11</xref> (lowest mean Euclidean error 0.066 as opposed to 0.081 in Figure <xref ref-type="fig" rid="F11">11</xref>). We also trained some convolutional networks with only the depth map as input, and with a 3 &#x000D7; 3 kernel in the first convolutional layer. Interestingly, some of the resulting kernels resembled the kernels that we created manually to approximate the first and second derivatives.</p>
</sec>
</sec>
<sec sec-type="discussion" id="s4">
<title>4. Discussion</title>
<p>This study examined the neural code for three-dimensional shape in visual-dominant AIP neurons. AIP is critical for hand pre-shaping in grasping, and these neurons encode properties that are relevant to grasping, including object shape, size, and orientation.</p>
<p>Our motivation for testing superquadric parameters as a model of AIP tuning was that superquadrics have been used in robotics, in a role that we take to be similar to the role of AIP in the primate brain. Specifically, they have been used as compact approximate representations of point clouds on which to base grasp planning. Such a representation is useful because it allows generalization from training examples to unseen examples, e.g., by interpolating between known solutions for known sets of parameters. An alternative approach in robotics is to cluster point clouds into discrete shape categories (Detry et al., <xref ref-type="bibr" rid="B10">2013</xref>). We see the Isomap as an intermediate approach with some of the advantages of both superquadric fitting and clustering. The Isomap is data-driven and adapts to the statistics of the environment (like clustering), but its parameters make up a low-dimensional and continuous space (like those of superquadrics). Furthermore, unlike the superquadric representation, the Isomap representation does not have large discontinuities between very similar shapes.</p>
<p>We found that cosine tuning on a 32-dimensional Isomap accounted well for the tuning curves of object-selective AIP neurons. We also found that, in contrast with superquadric parameters, the Isomap parameters could be approximated fairly well by various neural networks with CIP-like input.</p>
<sec>
<title>4.1. Augmented tuning curves</title>
<p>Available AIP data includes the responses of individual neurons to only a few different shapes, in fact fewer shapes than there are parameters in even the simplest superquadric model. To more rigorously test the different shape parameterizations as a basis for plausible neural tuning, and to incorporate additional aggregate information on shape tuning (e.g., the fact that most visual-dominant AIP neurons are orientation selective), we created &#x0201C;augmented&#x0201D; tuning curves that included both data and extrapolations of the data. It is likely that some of these augmented tuning curves were unrealistic. While the general trends in our AIP fitting results are informative (e.g., that Isomap fits improve and outperform superquadrics as dimensions increase), the details depend on our augmentation assumptions. For example, we found that the Isomap error declined more rapidly when we excluded orientation-selective/shape-invariant tuning curves from the analysis. This limitation does not affect interpretation of our other main result, i.e., that superquadrics were poorly approximated by feedforward neural networks while Isomaps were well approximated.</p>
<p>Future modeling would be facilitated by tuning curves with greater numbers of data points. For example, the dataset in Lehky et al. (<xref ref-type="bibr" rid="B33">2011</xref>) includes responses of 674 inferotemporal neurons to a common set of 806 images. A relatively extensive AIP dataset was recently collected (Schaffelhofer and Scherberger, <xref ref-type="bibr" rid="B51">2014</xref>), but no tuning curves from this dataset have yet been published.</p>
</sec>
<sec>
<title>4.2. Cosine tuning</title>
<p>We were primarily interested in cosine-tuning models for several reasons, not least because cosine tuning is widespread in the brain (see many examples in Zhang and Sejnowski, <xref ref-type="bibr" rid="B64">1999</xref>). Linear-nonlinear receptive field models of the early visual system are another kind of cosine tuning, with multiple tuning variables on a 2D grid. Furthermore, a practical advantage of cosine tuning models is that they require only <italic>n</italic> &#x0002B; 1 tuning parameters for <italic>n</italic> stimulus variables (in contrast, a full <italic>n</italic>-dimensional Gaussian tuning curve has <italic>n</italic> &#x0002B; <italic>n</italic><sup>2</sup> parameters). This is important because published tuning curves in CIP and AIP consist of relatively few points, so models with large numbers of parameters may be underconstrained. Cosine tuning is also physiologically realistic in that it can arise from linear synaptic integration. For example, if a matrix <italic>W</italic> of synaptic weights has <italic>n</italic> large singular values, then the post-synaptic neurons are tuned to an <italic>n</italic>-dimensional space (if <italic>W</italic> &#x0003D; <italic>U</italic>&#x003A3;<italic>V<sup>T</sup></italic> then the preferred directions are in the first <italic>n</italic> columns of <italic>U</italic>). Cosine tuning curves are also optimal for linear decoding (Salinas and Abbott, <xref ref-type="bibr" rid="B50">1994</xref>). There are also many neurons that do not appear to be cosine tuned, for example speed-tuned neurons in the middle temporal area (Nover et al., <xref ref-type="bibr" rid="B37">2005</xref>). However, where applicable, cosine tuning models provide rich insight into neural activity. We therefore attempted to fit such models to the data where possible. 
Many AIP tuning curves over similar stimuli with different curvatures vary smoothly and monotonically (Srivastava et al., <xref ref-type="bibr" rid="B54">2009</xref>), consistent with cosine tuning.</p>
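The singular-value argument above can be checked numerically. This sketch (an illustration, not from the paper) builds a rank-<italic>n</italic> weight matrix W = U&#x003A3;V<sup>T</sup> and verifies that a neuron's input weight vector, its preferred direction in stimulus space, lies in the span of the first <italic>n</italic> columns of U.

```python
import numpy as np

# If the synaptic weight matrix has only n nonzero singular values, every
# neuron's weight vector lies in span(U[:, :n]), so the population is
# effectively tuned to an n-dimensional subspace of the stimulus space.
rng = np.random.default_rng(4)
n_stim, n_neurons, n = 100, 40, 3

U, _ = np.linalg.qr(rng.standard_normal((n_stim, n)))   # n orthonormal columns
V, _ = np.linalg.qr(rng.standard_normal((n_neurons, n)))
sigma = np.array([5.0, 3.0, 2.0])                       # n large singular values
W = U @ np.diag(sigma) @ V.T                            # rank-n weight matrix

# Neuron 0's weight vector is reconstructed exactly by projecting onto U,
# i.e., its residual outside span(U) is numerically zero.
w_j = W[:, 0]
residual = np.linalg.norm(w_j - U @ (U.T @ w_j))
print(f"residual outside span(U): {residual:.2e}")
```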
<p>Cosine tuning to modest numbers of Isomap parameters (relative to the 256-element depth maps on which they were based) accounted for the AIP data and for our augmented AIP tuning curves.</p>
<p>In contrast, we concluded that the CIP neurons we modeled were not cosine tuned to the stimulus variables with which they have been examined. CIP has been proposed to encode first and second derivatives of depth (Orban et al., <xref ref-type="bibr" rid="B38">2006</xref>). Various neurons in CIP respond to disparity gradient (Shikata et al., <xref ref-type="bibr" rid="B52">1996</xref>; Sakata et al., <xref ref-type="bibr" rid="B49">1998</xref>), texture gradient (Tsutsui et al., <xref ref-type="bibr" rid="B60">2001</xref>), and/or perspective cues for oriented surfaces (Tsutsui et al., <xref ref-type="bibr" rid="B60">2001</xref>). Accordingly, visual-dominant AIP neurons respond to monocular as well as disparity cues, and respond most strongly when disparity and other depth cues are congruent (Romero et al., <xref ref-type="bibr" rid="B45">2013</xref>). Sakata et al. (<xref ref-type="bibr" rid="B49">1998</xref>) describe various neurons in CIP as axis-orientation-selective and surface-orientation-selective. The former were sensitive to the orientation of a long cylinder, consistent with two-dimensional tuning for horizontal and vertical curvature. The latter were selective for the orientation of a flat plate, consistent with two-dimensional tuning for depth gradient. Furthermore, Sakata et al. (<xref ref-type="bibr" rid="B49">1998</xref>) recorded a neuron that preferred a cylinder of a certain diameter that was tilted back and to the right, but did not respond strongly to a square column of similar dimensions. This suggests selectivity for both first and second derivatives within the same neuron. Katsuyama et al. (<xref ref-type="bibr" rid="B28">2010</xref>) recorded CIP responses to curved surfaces that varied in terms of their second derivatives. Tuning to the first and second derivatives of depth is physiologically plausible in that these quantities are linear functions of the depth field, which is available from V3A. 
We therefore attempted to fit models that were cosine tuned over these variables, but we obtained poor fits.</p>
<p>While CIP neurons are certainly responsive to these variables (and more complex non-linear models of tuning to these variables fit the data closely), it is possible that other related variables provide a more elegant account of these neurons&#x00027; responses. Notably, some CIP neurons prefer intermediate cylinder diameters (Sakata et al., <xref ref-type="bibr" rid="B49">1998</xref>), whereas cosine tuning would be constrained to vary monotonically with curvature. Also, some of the neurons in Rosenberg et al. (<xref ref-type="bibr" rid="B46">2013</xref>) are clearly non-cosine-tuned for depth slope.</p>
<p>Some CIP tuning curves (see e.g., Figure <xref ref-type="fig" rid="F6">6</xref>) seem to be fairly similar to rectified cosine functions (Salinas and Abbott, <xref ref-type="bibr" rid="B50">1994</xref>) with a negative offset, except that their baseline rates are not zero. In general, spike sorting limitations, which cannot be completely avoided in extracellular recordings (Harris et al., <xref ref-type="bibr" rid="B21">2000</xref>), are a potential source of uncertainty in tuning curves. However, if misclassification rates had been substantial then multi-peaked tuning curves might have been expected, and none were reported in these studies.</p>
</sec>
<sec>
<title>4.3. Relationship to shape representation in IT</title>
<p>Area IT has been shown to represent medial axes and surfaces of objects (Yamane et al., <xref ref-type="bibr" rid="B62">2008</xref>; Hung et al., <xref ref-type="bibr" rid="B24">2012</xref>). AIP has significant connections with IT areas including the lower bank of the superior temporal sulcus (STS), specifically areas TEa and TEm (Borra et al., <xref ref-type="bibr" rid="B5">2008</xref>). These areas partially correspond to functional area TEs, which encodes curvature of depth (Janssen et al., <xref ref-type="bibr" rid="B26">2000</xref>) similarly to CIP. However, AIP responds to depth differences much earlier than TEs (Srivastava et al., <xref ref-type="bibr" rid="B54">2009</xref>). It is possible that a shape representation in IT, with some similarities to that in CIP, provides longer latency reinforcement and/or correction of shape representation in AIP.</p>
</sec>
<sec>
<title>4.4. Future work</title>
<p>A key direction for future work is to test how well the Isomap shape representation works for robotic grasp planning. This would provide important information about the functional plausibility of this representation. For example, if Isomap-based shape parameters cannot be used to shape a hand for effective grasping, this would strongly suggest that there are critical differences between AIP tuning parameters and Isomap parameters. On the other hand, if the Isomap representation performs well, it may suggest a new biologically-inspired approach for robotic grasping.</p>
<p>An apparent advantage of the Isomap approach is that it is data-driven and makes no prior assumptions about shapes. It would be informative to build Isomaps for less idealized shapes that monkeys might grasp in nature.</p>
<p>Other non-linear dimension-reduction methods (e.g., Yan et al., <xref ref-type="bibr" rid="B63">2007</xref>) could also be compared with the Isomap in terms of fitting AIP data and providing an effective basis for grasp planning. We would expect such differences to be subtle relative to the available AIP data, but distinct advantages might appear in a grasp-control system. One interesting possibility would be to emphasize features that are related to reward or performance (Bar-Gad et al., <xref ref-type="bibr" rid="B3">2003</xref>).</p>
<p>Another important direction for future work is to extend the model to include motor-dominant AIP neurons and to F5 neurons as in e.g., Theys et al. (<xref ref-type="bibr" rid="B57">2012</xref>, <xref ref-type="bibr" rid="B58">2013</xref>) and Raos et al. (<xref ref-type="bibr" rid="B43">2006</xref>).</p>
<p>Finally, our models produced constant spike rates in response to static inputs. A more sophisticated future model would account for response timing and dynamics (Sakaguchi et al., <xref ref-type="bibr" rid="B47">2010</xref>). The Neural Engineering Framework (Eliasmith and Anderson, <xref ref-type="bibr" rid="B12">2003</xref>) provides a principled approach to modeling dynamics in systems of spiking neurons.</p>
</sec>
<sec>
<title>Conflict of interest statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p></sec>
</sec>
</body>
<back>
<ack>
<p>Financial support was provided by CrossWing Inc.; NSERC and Mitacs (Canada); DAAD-NRF (South Africa); and the Spanish Ministry of Education and Consejo Social UPM (Spain). We thank Paul Calamai, Renaud Detry, and Andr&#x000E9; Nel for helpful discussions.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Adams</surname> <given-names>D. L.</given-names></name> <name><surname>Zeki</surname> <given-names>S.</given-names></name></person-group> (<year>2001</year>). <article-title>Functional organization of macaque v3 for stereoscopic depth</article-title>. <source>J. Neurophysiol</source>. <volume>86</volume>, <fpage>2195</fpage>&#x02013;<lpage>2203</lpage>. Available online at: <ext-link ext-link-type="uri" xlink:href="http://jn.physiology.org/content/86/5/2195">http://jn.physiology.org/content/86/5/2195</ext-link> <pub-id pub-id-type="pmid">11698511</pub-id></citation>
</ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Anzai</surname> <given-names>A.</given-names></name> <name><surname>Chowdhury</surname> <given-names>S. A.</given-names></name> <name><surname>DeAngelis</surname> <given-names>G. C.</given-names></name></person-group> (<year>2011</year>). <article-title>Coding of stereoscopic depth information in visual areas v3 and v3a</article-title>. <source>J. Neurosci</source>. <volume>31</volume>, <fpage>10270</fpage>&#x02013;<lpage>10282</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.5956-10.2011</pub-id><pub-id pub-id-type="pmid">21753004</pub-id></citation>
</ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bar-Gad</surname> <given-names>I.</given-names></name> <name><surname>Morris</surname> <given-names>G.</given-names></name> <name><surname>Bergman</surname> <given-names>H.</given-names></name></person-group> (<year>2003</year>). <article-title>Information processing, dimensionality reduction and reinforcement learning in the basal ganglia</article-title>. <source>Prog. Neurobiol</source>. <volume>71</volume>, <fpage>439</fpage>&#x02013;<lpage>473</lpage>. <pub-id pub-id-type="doi">10.1016/j.pneurobio.2003.12.001</pub-id><pub-id pub-id-type="pmid">15013228</pub-id></citation>
</ref>
<ref id="B4">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Biegelbauer</surname> <given-names>G.</given-names></name> <name><surname>Vincze</surname> <given-names>M.</given-names></name></person-group> (<year>2007</year>). <article-title>Efficient 3D object detection by fitting superquadrics to range image data for Robot&#x00027;s object manipulation</article-title>, in <source>2007 IEEE International Conference on Robotics and Automation (ICRA)</source> (<publisher-loc>Piscataway, NJ</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1086</fpage>&#x02013;<lpage>1091</lpage>. <pub-id pub-id-type="doi">10.1109/ROBOT.2007.363129</pub-id></citation>
</ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Borra</surname> <given-names>E.</given-names></name> <name><surname>Belmalih</surname> <given-names>A.</given-names></name> <name><surname>Calzavara</surname> <given-names>R.</given-names></name> <name><surname>Gerbella</surname> <given-names>M.</given-names></name> <name><surname>Murata</surname> <given-names>A.</given-names></name> <name><surname>Rozzi</surname> <given-names>S.</given-names></name> <etal/></person-group>. (<year>2008</year>). <article-title>Cortical connections of the macaque anterior intraparietal (AIP) area</article-title>. <source>Cereb. Cortex</source> <volume>18</volume>, <fpage>1094</fpage>&#x02013;<lpage>1111</lpage>. <pub-id pub-id-type="doi">10.1093/cercor/bhm146</pub-id><pub-id pub-id-type="pmid">17720686</pub-id></citation>
</ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Carandini</surname> <given-names>M.</given-names></name></person-group> (<year>2004</year>). <article-title>Amplification of trial-to-trial response variability by neurons in visual cortex</article-title>. <source>PLoS Biol</source>. <volume>2</volume>:<fpage>E264</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pbio.0020264</pub-id><pub-id pub-id-type="pmid">15328535</pub-id></citation>
</ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Clower</surname> <given-names>D. M.</given-names></name> <name><surname>Dum</surname> <given-names>R. P.</given-names></name> <name><surname>Strick</surname> <given-names>P. L.</given-names></name></person-group> (<year>2005</year>). <article-title>Basal ganglia and cerebellar inputs to &#x02018;AIP&#x02019;</article-title>. <source>Cereb. Cortex</source> <volume>15</volume>, <fpage>913</fpage>&#x02013;<lpage>920</lpage>. <pub-id pub-id-type="doi">10.1093/cercor/bhh190</pub-id><pub-id pub-id-type="pmid">15459083</pub-id></citation>
</ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Coleman</surname> <given-names>T. F.</given-names></name> <name><surname>Li</surname> <given-names>Y.</given-names></name></person-group> (<year>1994</year>). <article-title>On the convergence of interior-reflective Newton methods for nonlinear minimization subject to bounds</article-title>. <source>Math. Prog</source>. <volume>67</volume>, <fpage>189</fpage>&#x02013;<lpage>224</lpage>. <pub-id pub-id-type="doi">10.1007/BF01582221</pub-id></citation>
</ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>de Vries</surname> <given-names>S. C.</given-names></name> <name><surname>Kappers</surname> <given-names>A. M.</given-names></name> <name><surname>Koenderink</surname> <given-names>J. J.</given-names></name></person-group> (<year>1993</year>). <article-title>Shape from stereo: a systematic approach using quadratic surfaces</article-title>. <source>Percept. Psychophys</source>. <volume>53</volume>, <fpage>71</fpage>&#x02013;<lpage>80</lpage>. <pub-id pub-id-type="doi">10.3758/BF03211716</pub-id><pub-id pub-id-type="pmid">8433907</pub-id></citation>
</ref>
<ref id="B10">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Detry</surname> <given-names>R.</given-names></name> <name><surname>Ek</surname> <given-names>C. H.</given-names></name> <name><surname>Madry</surname> <given-names>M.</given-names></name> <name><surname>Kragic</surname> <given-names>D.</given-names></name></person-group> (<year>2013</year>). <article-title>Learning a dictionary of prototypical grasp-predicting parts from grasping experience</article-title>, in <source>2013 IEEE International Conference on Robotics and Automation (ICRA)</source> (<publisher-loc>Piscataway, NJ</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>601</fpage>&#x02013;<lpage>608</lpage>. <pub-id pub-id-type="doi">10.1109/ICRA.2013.6630635</pub-id></citation>
</ref>
<ref id="B11">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Duncan</surname> <given-names>K.</given-names></name> <name><surname>Sarkar</surname> <given-names>S.</given-names></name> <name><surname>Alqasemi</surname> <given-names>R.</given-names></name> <name><surname>Dubey</surname> <given-names>R.</given-names></name></person-group> (<year>2013</year>). <article-title>Multi-scale superquadric fitting for efficient shape and pose recovery of unknown objects</article-title>, in <source>2013 IEEE International Conference on Robotics and Automation (ICRA)</source> (<publisher-loc>Piscataway, NJ</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>4238</fpage>&#x02013;<lpage>4243</lpage>. <pub-id pub-id-type="doi">10.1109/ICRA.2013.6631176</pub-id></citation>
</ref>
<ref id="B12">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Eliasmith</surname> <given-names>C.</given-names></name> <name><surname>Anderson</surname> <given-names>C. H.</given-names></name></person-group> (<year>2003</year>). <source>Neural Engineering: Computation, Representation, and Dynamics in Neurobiological Systems</source>. <publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>MIT Press</publisher-name>.</citation>
</ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Eliasmith</surname> <given-names>C.</given-names></name> <name><surname>Stewart</surname> <given-names>T. C.</given-names></name> <name><surname>Choo</surname> <given-names>X.</given-names></name> <name><surname>Bekolay</surname> <given-names>T.</given-names></name> <name><surname>DeWolf</surname> <given-names>T.</given-names></name> <name><surname>Tang</surname> <given-names>Y.</given-names></name> <etal/></person-group>. (<year>2012</year>). <article-title>A large-scale model of the functioning brain</article-title>. <source>Science</source> <volume>338</volume>, <fpage>1202</fpage>&#x02013;<lpage>1205</lpage>. <pub-id pub-id-type="doi">10.1126/science.1225266</pub-id><pub-id pub-id-type="pmid">23197532</pub-id></citation>
</ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fagg</surname> <given-names>A. H.</given-names></name> <name><surname>Arbib</surname> <given-names>M. A.</given-names></name></person-group> (<year>1998</year>). <article-title>Modeling parietal-premotor interactions in primate control of grasping</article-title>. <source>Neural Netw</source>. <volume>11</volume>, <fpage>1277</fpage>&#x02013;<lpage>1303</lpage>. <pub-id pub-id-type="doi">10.1016/S0893-6080(98)00047-1</pub-id><pub-id pub-id-type="pmid">12662750</pub-id></citation>
</ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Felleman</surname> <given-names>D. J.</given-names></name> <name><surname>Burkhalter</surname> <given-names>A.</given-names></name> <name><surname>Van Essen</surname> <given-names>D. C.</given-names></name></person-group> (<year>1997</year>). <article-title>Cortical connections of areas V3 and VP of macaque monkey extrastriate visual cortex</article-title>. <source>J. Comp. Neurol</source>. <volume>379</volume>, <fpage>21</fpage>&#x02013;<lpage>47</lpage>. <pub-id pub-id-type="doi">10.1002/(SICI)1096-9861(19970303)379:1&#x0003C;21::AID-CNE3&#x0003E;3.0.CO;2-K</pub-id><pub-id pub-id-type="pmid">9057111</pub-id></citation>
</ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fitzgerald</surname> <given-names>P. J.</given-names></name> <name><surname>Lane</surname> <given-names>J. W.</given-names></name> <name><surname>Thakur</surname> <given-names>P. H.</given-names></name> <name><surname>Hsiao</surname> <given-names>S. S.</given-names></name></person-group> (<year>2004</year>). <article-title>Receptive field properties of the macaque second somatosensory cortex: evidence for multiple functional representations</article-title>. <source>J. Neurosci</source>. <volume>24</volume>, <fpage>11193</fpage>&#x02013;<lpage>11204</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.3481-04.2004</pub-id><pub-id pub-id-type="pmid">15590936</pub-id></citation>
</ref>
<ref id="B17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fogassi</surname> <given-names>L.</given-names></name> <name><surname>Gallese</surname> <given-names>V.</given-names></name> <name><surname>Buccino</surname> <given-names>G.</given-names></name> <name><surname>Craighero</surname> <given-names>L.</given-names></name> <name><surname>Fadiga</surname> <given-names>L.</given-names></name> <name><surname>Rizzolatti</surname> <given-names>G.</given-names></name></person-group> (<year>2001</year>). <article-title>Cortical mechanism for the visual guidance of hand grasping movements in the monkey: a reversible inactivation study</article-title>. <source>Brain</source> <volume>124(Pt 3)</volume>, <fpage>571</fpage>&#x02013;<lpage>586</lpage>. <pub-id pub-id-type="doi">10.1093/brain/124.3.571</pub-id><pub-id pub-id-type="pmid">11222457</pub-id></citation>
</ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gallese</surname> <given-names>V.</given-names></name> <name><surname>Murata</surname> <given-names>A.</given-names></name> <name><surname>Kaseda</surname> <given-names>M.</given-names></name> <name><surname>Niki</surname> <given-names>N.</given-names></name> <name><surname>Sakata</surname> <given-names>H.</given-names></name></person-group> (<year>1994</year>). <article-title>Deficit of hand preshaping after muscimol injection in monkey parietal cortex</article-title>. <source>Neuroreport</source> <volume>5</volume>, <fpage>1525</fpage>&#x02013;<lpage>1529</lpage>. <pub-id pub-id-type="doi">10.1097/00001756-199407000-00029</pub-id><pub-id pub-id-type="pmid">7948854</pub-id></citation>
</ref>
<ref id="B19">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Goldfeder</surname> <given-names>C.</given-names></name> <name><surname>Allen</surname> <given-names>P. K.</given-names></name> <name><surname>Lackner</surname> <given-names>C.</given-names></name> <name><surname>Pelossof</surname> <given-names>R.</given-names></name></person-group> (<year>2007</year>). <article-title>Grasp planning via decomposition trees</article-title>, in <source>2007 IEEE International Conference on Robotics and Automation (ICRA)</source> (<publisher-loc>Piscataway, NJ</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>4679</fpage>&#x02013;<lpage>4684</lpage>. <pub-id pub-id-type="doi">10.1109/ROBOT.2007.364200</pub-id></citation>
</ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gregoriou</surname> <given-names>G. G.</given-names></name> <name><surname>Borra</surname> <given-names>E.</given-names></name> <name><surname>Matelli</surname> <given-names>M.</given-names></name> <name><surname>Luppino</surname> <given-names>G.</given-names></name></person-group> (<year>2006</year>). <article-title>Architectonic organization of the inferior parietal convexity of the macaque monkey</article-title>. <source>J. Comp. Neurol</source>. <volume>496</volume>, <fpage>422</fpage>&#x02013;<lpage>451</lpage>. <pub-id pub-id-type="doi">10.1002/cne.20933</pub-id><pub-id pub-id-type="pmid">16566007</pub-id></citation>
</ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Harris</surname> <given-names>K. D.</given-names></name> <name><surname>Henze</surname> <given-names>D. A.</given-names></name> <name><surname>Csicsvari</surname> <given-names>J.</given-names></name> <name><surname>Hirase</surname> <given-names>H.</given-names></name> <name><surname>Buzs&#x000E1;ki</surname> <given-names>G.</given-names></name></person-group> (<year>2000</year>). <article-title>Accuracy of tetrode spike separation as determined by simultaneous intracellular and extracellular measurements</article-title>. <source>J. Neurophysiol</source>. <volume>84</volume>, <fpage>401</fpage>&#x02013;<lpage>414</lpage>. <pub-id pub-id-type="pmid">10899214</pub-id></citation>
</ref>
<ref id="B22">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Haykin</surname> <given-names>S.</given-names></name></person-group> (<year>1999</year>). <source>Neural Networks and Learning Machines, 3rd Edn</source>. <publisher-loc>Upper Saddle River, NJ</publisher-loc>: <publisher-name>Prentice Hall PTR</publisher-name>.</citation>
</ref>
<ref id="B23">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Huebner</surname> <given-names>K.</given-names></name> <name><surname>Ruthotto</surname> <given-names>S.</given-names></name> <name><surname>Kragic</surname> <given-names>D.</given-names></name></person-group> (<year>2008</year>). <article-title>Minimum volume bounding box decomposition for shape approximation in robot grasping</article-title>, in <source>2008 IEEE International Conference on Robotics and Automation (ICRA)</source> (<publisher-loc>Piscataway, NJ</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1628</fpage>&#x02013;<lpage>1633</lpage>. <pub-id pub-id-type="doi">10.1109/ROBOT.2008.4543434</pub-id></citation>
</ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hung</surname> <given-names>C.-C.</given-names></name> <name><surname>Carlson</surname> <given-names>E.</given-names></name> <name><surname>Connor</surname> <given-names>C.</given-names></name></person-group> (<year>2012</year>). <article-title>Medial axis shape coding in macaque inferotemporal cortex</article-title>. <source>Neuron</source> <volume>74</volume>, <fpage>1099</fpage>&#x02013;<lpage>1113</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2012.04.029</pub-id><pub-id pub-id-type="pmid">22726839</pub-id></citation>
</ref>
<ref id="B25">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ikeuchi</surname> <given-names>K.</given-names></name> <name><surname>Hebert</surname> <given-names>M.</given-names></name></person-group> (<year>1996</year>). <article-title>Task-oriented vision</article-title>, in <source>Exploratory Vision, Springer Series in Perception Engineering</source>, eds <person-group person-group-type="editor"><name><surname>Landy</surname> <given-names>M.</given-names></name> <name><surname>Maloney</surname> <given-names>L.</given-names></name> <name><surname>Pavel</surname> <given-names>M.</given-names></name></person-group> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>257</fpage>&#x02013;<lpage>277</lpage>. <pub-id pub-id-type="doi">10.1007/978-1-4612-3984-0_11</pub-id></citation>
</ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Janssen</surname> <given-names>P.</given-names></name> <name><surname>Vogels</surname> <given-names>R.</given-names></name> <name><surname>Orban</surname> <given-names>G. A.</given-names></name></person-group> (<year>2000</year>). <article-title>Three-dimensional shape coding in inferior temporal cortex</article-title>. <source>Neuron</source> <volume>27</volume>, <fpage>385</fpage>&#x02013;<lpage>397</lpage>. <pub-id pub-id-type="doi">10.1016/S0896-6273(00)00045-3</pub-id><pub-id pub-id-type="pmid">10985357</pub-id></citation>
</ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jeannerod</surname> <given-names>M.</given-names></name> <name><surname>Arbib</surname> <given-names>M.</given-names></name> <name><surname>Rizzolatti</surname> <given-names>G.</given-names></name> <name><surname>Sakata</surname> <given-names>H.</given-names></name></person-group> (<year>1995</year>). <article-title>Grasping objects: the cortical mechanisms of visuomotor transformation</article-title>. <source>Trends Neurosci</source>. <volume>18</volume>, <fpage>314</fpage>&#x02013;<lpage>320</lpage>. <pub-id pub-id-type="doi">10.1016/0166-2236(95)93921-J</pub-id><pub-id pub-id-type="pmid">7571012</pub-id></citation>
</ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Katsuyama</surname> <given-names>N.</given-names></name> <name><surname>Yamashita</surname> <given-names>A.</given-names></name> <name><surname>Sawada</surname> <given-names>K.</given-names></name> <name><surname>Naganuma</surname> <given-names>T.</given-names></name> <name><surname>Sakata</surname> <given-names>H.</given-names></name> <name><surname>Taira</surname> <given-names>M.</given-names></name></person-group> (<year>2010</year>). <article-title>Functional and histological properties of caudal intraparietal area of macaque monkey</article-title>. <source>Neuroscience</source> <volume>167</volume>, <fpage>1</fpage>&#x02013;<lpage>10</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroscience.2010.01.028</pub-id><pub-id pub-id-type="pmid">20096334</pub-id></citation>
</ref>
<ref id="B29">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Krizhevsky</surname> <given-names>A.</given-names></name> <name><surname>Sutskever</surname> <given-names>I.</given-names></name> <name><surname>Hinton</surname> <given-names>G. E.</given-names></name></person-group> (<year>2012</year>). <article-title>ImageNet classification with deep convolutional neural networks</article-title>, in <source>Advances in Neural Information Processing Systems 25</source>, eds <person-group person-group-type="editor"><name><surname>Pereira</surname> <given-names>F.</given-names></name> <name><surname>Burges</surname> <given-names>C.</given-names></name> <name><surname>Bottou</surname> <given-names>L.</given-names></name> <name><surname>Weinberger</surname> <given-names>K.</given-names></name></person-group> (<publisher-name>Curran Associates, Inc.</publisher-name>), <fpage>1097</fpage>&#x02013;<lpage>1105</lpage>.</citation>
</ref>
<ref id="B30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Krubitzer</surname> <given-names>L.</given-names></name> <name><surname>Clarey</surname> <given-names>J.</given-names></name> <name><surname>Tweedale</surname> <given-names>R.</given-names></name> <name><surname>Elston</surname> <given-names>G.</given-names></name> <name><surname>Calford</surname> <given-names>M.</given-names></name></person-group> (<year>1995</year>). <article-title>A redefinition of somatosensory areas in the lateral sulcus of macaque monkeys</article-title>. <source>J. Neurosci</source>. <volume>15</volume>, <fpage>3821</fpage>&#x02013;<lpage>3839</lpage>. <pub-id pub-id-type="pmid">7751949</pub-id></citation>
</ref>
<ref id="B31">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kumar</surname> <given-names>S.</given-names></name> <name><surname>Goldgof</surname> <given-names>D.</given-names></name> <name><surname>Bowyer</surname> <given-names>K.</given-names></name></person-group> (<year>1995</year>). <article-title>On recovering hyperquadrics from range data</article-title>. <source>IEEE Trans. Patt. Anal. Mach. Intell</source>. <volume>17</volume>, <fpage>1079</fpage>&#x02013;<lpage>1083</lpage>. <pub-id pub-id-type="doi">10.1109/34.473234</pub-id></citation>
</ref>
<ref id="B32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>LeCun</surname> <given-names>Y.</given-names></name> <name><surname>Bottou</surname> <given-names>L.</given-names></name> <name><surname>Bengio</surname> <given-names>Y.</given-names></name> <name><surname>Haffner</surname> <given-names>P.</given-names></name></person-group> (<year>1998</year>). <article-title>Gradient-based learning applied to document recognition</article-title>. <source>Proc. IEEE</source> <volume>86</volume>, <fpage>2278</fpage>&#x02013;<lpage>2324</lpage>. <pub-id pub-id-type="doi">10.1109/5.726791</pub-id></citation>
</ref>
<ref id="B33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lehky</surname> <given-names>S. R.</given-names></name> <name><surname>Kiani</surname> <given-names>R.</given-names></name> <name><surname>Esteky</surname> <given-names>H.</given-names></name> <name><surname>Tanaka</surname> <given-names>K.</given-names></name></person-group> (<year>2011</year>). <article-title>Statistics of visual responses in primate inferotemporal cortex to object stimuli</article-title>. <source>J. Neurophysiol</source>. <volume>106</volume>, <fpage>1097</fpage>&#x02013;<lpage>1117</lpage>. <pub-id pub-id-type="doi">10.1152/jn.00990.2010</pub-id><pub-id pub-id-type="pmid">21562200</pub-id></citation>
</ref>
<ref id="B34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Luppino</surname> <given-names>G.</given-names></name> <name><surname>Murata</surname> <given-names>A.</given-names></name> <name><surname>Govoni</surname> <given-names>P.</given-names></name> <name><surname>Matelli</surname> <given-names>M.</given-names></name></person-group> (<year>1999</year>). <article-title>Largely segregated parietofrontal connections linking rostral intraparietal cortex (areas AIP and VIP) and the ventral premotor cortex (areas F5 and F4)</article-title>. <source>Exp. Brain Res</source>. <volume>128</volume>, <fpage>181</fpage>&#x02013;<lpage>187</lpage>. <pub-id pub-id-type="doi">10.1007/s002210050833</pub-id><pub-id pub-id-type="pmid">10473756</pub-id></citation>
</ref>
<ref id="B35">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Murata</surname> <given-names>A.</given-names></name> <name><surname>Gallese</surname> <given-names>V.</given-names></name> <name><surname>Luppino</surname> <given-names>G.</given-names></name> <name><surname>Kaseda</surname> <given-names>M.</given-names></name> <name><surname>Sakata</surname> <given-names>H.</given-names></name></person-group> (<year>2000</year>). <article-title>Selectivity for the shape, size, and orientation of objects for grasping in neurons of monkey parietal area AIP</article-title>. <source>J. Neurophysiol</source>. <volume>83</volume>, <fpage>2580</fpage>&#x02013;<lpage>2601</lpage>. Available online at: <ext-link ext-link-type="uri" xlink:href="http://jn.physiology.org/content/83/5/2580">http://jn.physiology.org/content/83/5/2580</ext-link> <pub-id pub-id-type="pmid">10805659</pub-id></citation>
</ref>
<ref id="B36">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nakamura</surname> <given-names>H.</given-names></name> <name><surname>Kuroda</surname> <given-names>T.</given-names></name> <name><surname>Wakita</surname> <given-names>M.</given-names></name> <name><surname>Kusunoki</surname> <given-names>M.</given-names></name> <name><surname>Kato</surname> <given-names>A.</given-names></name> <name><surname>Mikami</surname> <given-names>A.</given-names></name> <etal/></person-group>. (<year>2001</year>). <article-title>From three-dimensional space vision to prehensile hand movements: the lateral intraparietal area links the area V3A and the anterior intraparietal area in macaques</article-title>. <source>J. Neurosci</source>. <volume>21</volume>, <fpage>8174</fpage>&#x02013;<lpage>8187</lpage>. <pub-id pub-id-type="pmid">11588190</pub-id></citation>
</ref>
<ref id="B37">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nover</surname> <given-names>H.</given-names></name> <name><surname>Anderson</surname> <given-names>C. H.</given-names></name> <name><surname>DeAngelis</surname> <given-names>G. C.</given-names></name></person-group> (<year>2005</year>). <article-title>A logarithmic, scale-invariant representation of speed in macaque middle temporal area accounts for speed discrimination performance</article-title>. <source>J. Neurosci</source>. <volume>25</volume>, <fpage>10049</fpage>&#x02013;<lpage>10060</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.1661-05.2005</pub-id><pub-id pub-id-type="pmid">16251454</pub-id></citation>
</ref>
<ref id="B38">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Orban</surname> <given-names>G. A.</given-names></name> <name><surname>Janssen</surname> <given-names>P.</given-names></name> <name><surname>Vogels</surname> <given-names>R.</given-names></name></person-group> (<year>2006</year>). <article-title>Extracting 3D structure from disparity</article-title>. <source>Trends Neurosci</source>. <volume>29</volume>, <fpage>466</fpage>&#x02013;<lpage>473</lpage>. <pub-id pub-id-type="doi">10.1016/j.tins.2006.06.012</pub-id><pub-id pub-id-type="pmid">16842865</pub-id></citation>
</ref>
<ref id="B39">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Oztop</surname> <given-names>E.</given-names></name> <name><surname>Imamizu</surname> <given-names>H.</given-names></name> <name><surname>Cheng</surname> <given-names>G.</given-names></name> <name><surname>Kawato</surname> <given-names>M.</given-names></name></person-group> (<year>2006</year>). <article-title>A computational model of anterior intraparietal (AIP) neurons</article-title>. <source>Neurocomputing</source> <volume>69</volume>, <fpage>1354</fpage>&#x02013;<lpage>1361</lpage>. <pub-id pub-id-type="doi">10.1016/j.neucom.2005.12.106</pub-id></citation>
</ref>
<ref id="B40">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Poggio</surname> <given-names>G. F.</given-names></name> <name><surname>Gonzalez</surname> <given-names>F.</given-names></name> <name><surname>Krause</surname> <given-names>F.</given-names></name></person-group> (<year>1988</year>). <article-title>Stereoscopic mechanisms in monkey visual cortex: binocular correlation and disparity selectivity</article-title>. <source>J. Neurosci</source>. <volume>8</volume>, <fpage>4531</fpage>&#x02013;<lpage>4550</lpage>. <pub-id pub-id-type="pmid">3199191</pub-id></citation>
</ref>
<ref id="B41">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Polsky</surname> <given-names>A.</given-names></name> <name><surname>Mel</surname> <given-names>B. W.</given-names></name> <name><surname>Schiller</surname> <given-names>J.</given-names></name></person-group> (<year>2004</year>). <article-title>Computational subunits in thin dendrites of pyramidal cells</article-title>. <source>Nat. Neurosci</source>. <volume>7</volume>, <fpage>621</fpage>&#x02013;<lpage>627</lpage>. <pub-id pub-id-type="doi">10.1038/nn1253</pub-id><pub-id pub-id-type="pmid">15156147</pub-id></citation>
</ref>
<ref id="B42">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Prevete</surname> <given-names>R.</given-names></name> <name><surname>Tessitore</surname> <given-names>G.</given-names></name> <name><surname>Catanzariti</surname> <given-names>E.</given-names></name> <name><surname>Tamburrini</surname> <given-names>G.</given-names></name></person-group> (<year>2011</year>). <article-title>Perceiving affordances: a computational investigation of grasping affordances</article-title>. <source>Cogn. Syst. Res</source>. <volume>12</volume>, <fpage>122</fpage>&#x02013;<lpage>133</lpage>. <pub-id pub-id-type="doi">10.1016/j.cogsys.2010.07.005</pub-id></citation>
</ref>
<ref id="B43">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Raos</surname> <given-names>V.</given-names></name> <name><surname>Umilt&#x000E1;</surname> <given-names>M.-A.</given-names></name> <name><surname>Murata</surname> <given-names>A.</given-names></name> <name><surname>Fogassi</surname> <given-names>L.</given-names></name> <name><surname>Gallese</surname> <given-names>V.</given-names></name></person-group> (<year>2006</year>). <article-title>Functional properties of grasping-related neurons in the ventral premotor area F5 of the macaque monkey</article-title>. <source>J. Neurophysiol</source>. <volume>95</volume>, <fpage>709</fpage>&#x02013;<lpage>729</lpage>. <pub-id pub-id-type="doi">10.1152/jn.00463.2005</pub-id><pub-id pub-id-type="pmid">16251265</pub-id></citation>
</ref>
<ref id="B44">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rizzolatti</surname> <given-names>G.</given-names></name> <name><surname>Gentilucci</surname> <given-names>M.</given-names></name> <name><surname>Camarda</surname> <given-names>R.</given-names></name> <name><surname>Gallese</surname> <given-names>V.</given-names></name> <name><surname>Luppino</surname> <given-names>G.</given-names></name> <name><surname>Matelli</surname> <given-names>M.</given-names></name> <etal/></person-group>. (<year>1990</year>). <article-title>Neurons related to reaching-grasping arm movements in the rostral part of area 6 (area 6a&#x003B2;)</article-title>. <source>Exp. Brain Res</source>. <volume>82</volume>, <fpage>337</fpage>&#x02013;<lpage>350</lpage>. <pub-id pub-id-type="doi">10.1007/BF00231253</pub-id><pub-id pub-id-type="pmid">2286236</pub-id></citation>
</ref>
<ref id="B45">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Romero</surname> <given-names>M. C.</given-names></name> <name><surname>Van Dromme</surname> <given-names>I. C. L.</given-names></name> <name><surname>Janssen</surname> <given-names>P.</given-names></name></person-group> (<year>2013</year>). <article-title>The role of binocular disparity in stereoscopic images of objects in the macaque anterior intraparietal area</article-title>. <source>PLoS ONE</source> <volume>8</volume>:<fpage>e55340</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0055340</pub-id><pub-id pub-id-type="pmid">23408970</pub-id></citation>
</ref>
<ref id="B46">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rosenberg</surname> <given-names>A.</given-names></name> <name><surname>Cowan</surname> <given-names>N. J.</given-names></name> <name><surname>Angelaki</surname> <given-names>D. E.</given-names></name></person-group> (<year>2013</year>). <article-title>The visual representation of 3D object orientation in parietal cortex</article-title>. <source>J. Neurosci</source>. <volume>33</volume>, <fpage>19352</fpage>&#x02013;<lpage>19361</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.3174-13.2013</pub-id><pub-id pub-id-type="pmid">24305830</pub-id></citation>
</ref>
<ref id="B47">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sakaguchi</surname> <given-names>Y.</given-names></name> <name><surname>Ishida</surname> <given-names>F.</given-names></name> <name><surname>Shimizu</surname> <given-names>T.</given-names></name> <name><surname>Murata</surname> <given-names>A.</given-names></name></person-group> (<year>2010</year>). <article-title>Time course of information representation of macaque AIP neurons in hand manipulation task revealed by information analysis</article-title>. <source>J. Neurophysiol</source>. <volume>104</volume>, <fpage>3625</fpage>&#x02013;<lpage>3643</lpage>. <pub-id pub-id-type="doi">10.1152/jn.00125.2010</pub-id><pub-id pub-id-type="pmid">20943943</pub-id></citation>
</ref>
<ref id="B48">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sakata</surname> <given-names>H.</given-names></name> <name><surname>Taira</surname> <given-names>M.</given-names></name> <name><surname>Kusunoki</surname> <given-names>M.</given-names></name> <name><surname>Murata</surname> <given-names>A.</given-names></name> <name><surname>Tanaka</surname> <given-names>Y.</given-names></name></person-group> (<year>1997</year>). <article-title>The TINS Lecture. The parietal association cortex in depth perception and visual control of hand action</article-title>. <source>Trends Neurosci</source>. <volume>20</volume>, <fpage>350</fpage>&#x02013;<lpage>357</lpage>. <pub-id pub-id-type="doi">10.1016/S0166-2236(97)01067-9</pub-id><pub-id pub-id-type="pmid">9246729</pub-id></citation>
</ref>
<ref id="B49">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sakata</surname> <given-names>H.</given-names></name> <name><surname>Taira</surname> <given-names>M.</given-names></name> <name><surname>Kusunoki</surname> <given-names>M.</given-names></name> <name><surname>Murata</surname> <given-names>A.</given-names></name> <name><surname>Tanaka</surname> <given-names>Y.</given-names></name> <name><surname>Tsutsui</surname> <given-names>K.</given-names></name></person-group> (<year>1998</year>). <article-title>Neural coding of 3D features of objects for hand action in the parietal cortex of the monkey</article-title>. <source>Philos. Trans. R. Soc. Lond. B Biol. Sci</source>. <volume>353</volume>, <fpage>1363</fpage>&#x02013;<lpage>1373</lpage>. <pub-id pub-id-type="doi">10.1098/rstb.1998.0290</pub-id><pub-id pub-id-type="pmid">9770229</pub-id></citation>
</ref>
<ref id="B50">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Salinas</surname> <given-names>E.</given-names></name> <name><surname>Abbott</surname> <given-names>L. F.</given-names></name></person-group> (<year>1994</year>). <article-title>Vector reconstruction from firing rates</article-title>. <source>J. Comput. Neurosci</source>. <volume>1</volume>, <fpage>89</fpage>&#x02013;<lpage>107</lpage>. <pub-id pub-id-type="doi">10.1007/BF00962720</pub-id><pub-id pub-id-type="pmid">8792227</pub-id></citation>
</ref>
<ref id="B51">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Schaffelhofer</surname> <given-names>S.</given-names></name> <name><surname>Scherberger</surname> <given-names>H.</given-names></name></person-group> (<year>2014</year>). <article-title>From vision to action: a comparative population study of hand grasping areas AIP, F5, and M1</article-title>, in <source>Bernstein Conference 2014</source> (<publisher-loc>G&#x000F6;ttingen</publisher-loc>). <pub-id pub-id-type="doi">10.12751/nncn.bc2014.0253</pub-id></citation>
</ref>
<ref id="B52">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shikata</surname> <given-names>E.</given-names></name> <name><surname>Tanaka</surname> <given-names>Y.</given-names></name> <name><surname>Nakamura</surname> <given-names>H.</given-names></name> <name><surname>Taira</surname> <given-names>M.</given-names></name> <name><surname>Sakata</surname> <given-names>H.</given-names></name></person-group> (<year>1996</year>). <article-title>Selectivity of the parietal visual neurones in 3D orientation of surface of stereoscopic stimuli</article-title>. <source>Neuroreport</source> <volume>7</volume>, <fpage>2389</fpage>&#x02013;<lpage>2394</lpage>. <pub-id pub-id-type="doi">10.1097/00001756-199610020-00022</pub-id><pub-id pub-id-type="pmid">8951858</pub-id></citation>
</ref>
<ref id="B53">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Solina</surname> <given-names>F.</given-names></name> <name><surname>Bajcsy</surname> <given-names>R.</given-names></name></person-group> (<year>1990</year>). <article-title>Recovery of parametric models from range images: the case for superquadrics with global deformations</article-title>. <source>IEEE Trans. Patt. Anal. Mach. Intell</source>. <volume>12</volume>, <fpage>131</fpage>&#x02013;<lpage>147</lpage>. <pub-id pub-id-type="doi">10.1109/34.44401</pub-id></citation>
</ref>
<ref id="B54">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Srivastava</surname> <given-names>S.</given-names></name> <name><surname>Orban</surname> <given-names>G. A.</given-names></name> <name><surname>De Mazi&#x000E8;re</surname> <given-names>P. A.</given-names></name> <name><surname>Janssen</surname> <given-names>P.</given-names></name></person-group> (<year>2009</year>). <article-title>A distinct representation of three-dimensional shape in macaque anterior intraparietal area: fast, metric, and coarse</article-title>. <source>J. Neurosci</source>. <volume>29</volume>, <fpage>10613</fpage>&#x02013;<lpage>10626</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.6016-08.2009</pub-id><pub-id pub-id-type="pmid">19710314</pub-id></citation>
</ref>
<ref id="B55">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Taira</surname> <given-names>M.</given-names></name> <name><surname>Tsutsui</surname> <given-names>K.-I.</given-names></name> <name><surname>Jiang</surname> <given-names>M.</given-names></name> <name><surname>Yara</surname> <given-names>K.</given-names></name> <name><surname>Sakata</surname> <given-names>H.</given-names></name></person-group> (<year>2000</year>). <article-title>Parietal neurons represent surface orientation from the gradient of binocular disparity</article-title>. <source>J. Neurophysiol</source>. <volume>83</volume>, <fpage>3140</fpage>&#x02013;<lpage>3146</lpage>. Available online at: <ext-link ext-link-type="uri" xlink:href="http://jn.physiology.org/content/83/5/3140">http://jn.physiology.org/content/83/5/3140</ext-link> <pub-id pub-id-type="pmid">10805708</pub-id></citation>
</ref>
<ref id="B56">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tenenbaum</surname> <given-names>J. B.</given-names></name> <name><surname>de Silva</surname> <given-names>V.</given-names></name> <name><surname>Langford</surname> <given-names>J. C.</given-names></name></person-group> (<year>2000</year>). <article-title>A global geometric framework for nonlinear dimensionality reduction</article-title>. <source>Science</source> <volume>290</volume>, <fpage>2319</fpage>&#x02013;<lpage>2323</lpage>. <pub-id pub-id-type="doi">10.1126/science.290.5500.2319</pub-id><pub-id pub-id-type="pmid">11125149</pub-id></citation>
</ref>
<ref id="B57">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Theys</surname> <given-names>T.</given-names></name> <name><surname>Pani</surname> <given-names>P.</given-names></name> <name><surname>van Loon</surname> <given-names>J.</given-names></name> <name><surname>Goffin</surname> <given-names>J.</given-names></name> <name><surname>Janssen</surname> <given-names>P.</given-names></name></person-group> (<year>2012</year>). <article-title>Selectivity for three-dimensional shape and grasping-related activity in the macaque ventral premotor cortex</article-title>. <source>J. Neurosci</source>. <volume>32</volume>, <fpage>12038</fpage>&#x02013;<lpage>12050</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.1790-12.2012</pub-id><pub-id pub-id-type="pmid">22933788</pub-id></citation>
</ref>
<ref id="B58">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Theys</surname> <given-names>T.</given-names></name> <name><surname>Pani</surname> <given-names>P.</given-names></name> <name><surname>van Loon</surname> <given-names>J.</given-names></name> <name><surname>Goffin</surname> <given-names>J.</given-names></name> <name><surname>Janssen</surname> <given-names>P.</given-names></name></person-group> (<year>2013</year>). <article-title>Three-dimensional shape coding in grasping circuits: a comparison between the anterior intraparietal area and ventral premotor area F5a</article-title>. <source>J. Cogn. Neurosci</source>. <volume>25</volume>, <fpage>352</fpage>&#x02013;<lpage>364</lpage>. <pub-id pub-id-type="doi">10.1162/jocn_a_00332</pub-id><pub-id pub-id-type="pmid">23190325</pub-id></citation>
</ref>
<ref id="B59">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tsao</surname> <given-names>D. Y.</given-names></name> <name><surname>Vanduffel</surname> <given-names>W.</given-names></name> <name><surname>Sasaki</surname> <given-names>Y.</given-names></name> <name><surname>Fize</surname> <given-names>D.</given-names></name> <name><surname>Knutsen</surname> <given-names>T. A.</given-names></name> <name><surname>Mandeville</surname> <given-names>J. B.</given-names></name> <etal/></person-group>. (<year>2003</year>). <article-title>Stereopsis activates V3A and caudal intraparietal areas in macaques and humans</article-title>. <source>Neuron</source> <volume>39</volume>, <fpage>555</fpage>&#x02013;<lpage>568</lpage>. <pub-id pub-id-type="doi">10.1016/S0896-6273(03)00459-8</pub-id><pub-id pub-id-type="pmid">12895427</pub-id></citation>
</ref>
<ref id="B60">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tsutsui</surname> <given-names>K.</given-names></name> <name><surname>Jiang</surname> <given-names>M.</given-names></name> <name><surname>Yara</surname> <given-names>K.</given-names></name> <name><surname>Sakata</surname> <given-names>H.</given-names></name> <name><surname>Taira</surname> <given-names>M.</given-names></name></person-group> (<year>2001</year>). <article-title>Integration of perspective and disparity cues in surface-orientation-selective neurons of area CIP</article-title>. <source>J. Neurophysiol</source>. <volume>86</volume>, <fpage>2856</fpage>&#x02013;<lpage>2867</lpage>. <pub-id pub-id-type="pmid">11731542</pub-id></citation>
</ref>
<ref id="B61">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tsutsui</surname> <given-names>K.-I.</given-names></name> <name><surname>Sakata</surname> <given-names>H.</given-names></name> <name><surname>Naganuma</surname> <given-names>T.</given-names></name> <name><surname>Taira</surname> <given-names>M.</given-names></name></person-group> (<year>2002</year>). <article-title>Neural correlates for perception of 3D surface orientation from texture gradient</article-title>. <source>Science</source> <volume>298</volume>, <fpage>409</fpage>&#x02013;<lpage>412</lpage>. <pub-id pub-id-type="doi">10.1126/science.1074128</pub-id><pub-id pub-id-type="pmid">12376700</pub-id></citation>
</ref>
<ref id="B62">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yamane</surname> <given-names>Y.</given-names></name> <name><surname>Carlson</surname> <given-names>E. T.</given-names></name> <name><surname>Bowman</surname> <given-names>K. C.</given-names></name> <name><surname>Wang</surname> <given-names>Z.</given-names></name> <name><surname>Connor</surname> <given-names>C. E.</given-names></name></person-group> (<year>2008</year>). <article-title>A neural code for three-dimensional object shape in macaque inferotemporal cortex</article-title>. <source>Nat. Neurosci</source>. <volume>11</volume>, <fpage>1352</fpage>&#x02013;<lpage>1360</lpage>. <pub-id pub-id-type="doi">10.1038/nn.2202</pub-id><pub-id pub-id-type="pmid">18836443</pub-id></citation>
</ref>
<ref id="B63">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yan</surname> <given-names>S.</given-names></name> <name><surname>Xu</surname> <given-names>D.</given-names></name> <name><surname>Zhang</surname> <given-names>B.</given-names></name> <name><surname>Zhang</surname> <given-names>H.-J.</given-names></name> <name><surname>Yang</surname> <given-names>Q.</given-names></name> <name><surname>Lin</surname> <given-names>S.</given-names></name></person-group> (<year>2007</year>). <article-title>Graph embedding and extensions: a general framework for dimensionality reduction</article-title>. <source>IEEE Trans. Patt. Anal. Mach. Intell</source>. <volume>29</volume>, <fpage>40</fpage>&#x02013;<lpage>51</lpage>. <pub-id pub-id-type="doi">10.1109/TPAMI.2007.250598</pub-id><pub-id pub-id-type="pmid">17108382</pub-id></citation>
</ref>
<ref id="B64">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>K.</given-names></name> <name><surname>Sejnowski</surname> <given-names>T. J.</given-names></name></person-group> (<year>1999</year>). <article-title>A theory of geometric constraints on neural activity for natural three-dimensional movement</article-title>. <source>J. Neurosci</source>. <volume>19</volume>, <fpage>3122</fpage>&#x02013;<lpage>3145</lpage>. <pub-id pub-id-type="pmid">10191327</pub-id></citation>
</ref>
</ref-list>
</back>
</article>
