<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Hum. Neurosci.</journal-id>
<journal-title>Frontiers in Human Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Hum. Neurosci.</abbrev-journal-title>
<issn pub-type="epub">1662-5161</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fnhum.2016.00137</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Rectilinear Edge Selectivity Is Insufficient to Explain the Category Selectivity of the Parahippocampal Place Area</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Bryan</surname> <given-names>Peter B.</given-names></name>
<xref ref-type="author-notes" rid="fn002"><sup>&#x02020;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/312526/overview"/>
<xref ref-type="aff" rid="aff1"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Julian</surname> <given-names>Joshua B.</given-names></name>
<xref ref-type="author-notes" rid="fn001"><sup>&#x0002A;</sup></xref>
<xref ref-type="author-notes" rid="fn002"><sup>&#x02020;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/310371/overview"/>
<xref ref-type="aff" rid="aff1"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Epstein</surname> <given-names>Russell A.</given-names></name>
<uri xlink:href="http://loop.frontiersin.org/people/175215/overview"/>
<xref ref-type="aff" rid="aff1"/>
</contrib>
</contrib-group>
<aff id="aff1"><institution>Department of Psychology, University of Pennsylvania</institution> <country>Philadelphia, PA, USA</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Merim Bilali&#x00107;, Alpen Adria University Klagenfurt, Austria</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Chris I. Baker, National Institutes of Health, USA; Jonathan S. Cant, University of Toronto Scarborough, Canada</p></fn>
<fn fn-type="corresp" id="fn001"><p>&#x0002A;Correspondence: Joshua B. Julian <email>joshua.b.julian&#x00040;gmail.com</email></p></fn>
<fn fn-type="other" id="fn002"><p><sup>&#x02020;</sup>These authors have contributed equally to this work.</p></fn>
</author-notes>
<pub-date pub-type="epub">
<day>30</day>
<month>03</month>
<year>2016</year>
</pub-date>
<pub-date pub-type="collection">
<year>2016</year>
</pub-date>
<volume>10</volume>
<elocation-id>137</elocation-id>
<history>
<date date-type="received">
<day>22</day>
<month>01</month>
<year>2016</year>
</date>
<date date-type="accepted">
<day>15</day>
<month>03</month>
<year>2016</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2016 Bryan, Julian and Epstein.</copyright-statement>
<copyright-year>2016</copyright-year>
<copyright-holder>Bryan, Julian and Epstein</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution and reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract><p>The parahippocampal place area (PPA) is one of several brain regions that respond more strongly to scenes than to non-scene items such as objects and faces. The mechanism underlying this scene-preferential response remains unclear. One possibility is that the PPA is tuned to low-level stimulus features that are found more often in scenes than in less-preferred stimuli. Supporting this view, Nasr et al. (<xref ref-type="bibr" rid="B28">2014</xref>) recently observed that some of the stimuli that are known to strongly activate the PPA contain a large number of rectilinear edges. They further demonstrated that PPA response is modulated by rectilinearity for a range of non-scene images. Motivated by these results, we tested whether rectilinearity suffices to explain PPA selectivity for scenes. In the first experiment, we replicated the previous finding of modulation by rectilinearity in the PPA for arrays of 2-d shapes. However, two further experiments failed to find a rectilinearity effect for faces or scenes: high-rectilinearity faces and scenes did not activate the PPA any more strongly than low-rectilinearity faces and scenes. Moreover, the categorical advantage for scenes vs. faces was maintained in the PPA and two other scene-selective regions&#x02014;the retrosplenial complex (RSC) and occipital place area (OPA)&#x02014;when rectilinearity was matched between stimulus sets. We conclude that selectivity for scenes in the PPA cannot be explained by a preference for low-level rectilinear edges.</p></abstract>
<kwd-group>
<kwd>fMRI</kwd>
<kwd>scene perception</kwd>
<kwd>neural specialization</kwd>
<kwd>vision</kwd>
<kwd>ventral stream</kwd>
</kwd-group>
<contract-num rid="cn001">R01 EY-022350</contract-num>
<contract-num rid="cn002">SBE-0541957</contract-num>
<contract-sponsor id="cn001">Office of Extramural Research, National Institutes of Health<named-content content-type="fundref-id">10.13039/100006955</named-content></contract-sponsor>
<contract-sponsor id="cn002">National Science Foundation<named-content content-type="fundref-id">10.13039/100000001</named-content></contract-sponsor>
<counts>
<fig-count count="5"/>
<table-count count="0"/>
<equation-count count="22"/>
<ref-count count="41"/>
<page-count count="12"/>
<word-count count="9062"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="introduction" id="s1">
<title>Introduction</title>
<p>Functional magnetic resonance imaging (fMRI) studies have identified several brain regions that respond preferentially to visual scenes. For example, a region in ventral temporal cortex known as the parahippocampal place area (PPA) responds more strongly when people view scenes (e.g., landscapes, cityscapes, rooms) than when they view isolated single objects or faces (Aguirre et al., <xref ref-type="bibr" rid="B1">1998</xref>; Epstein and Kanwisher, <xref ref-type="bibr" rid="B9">1998</xref>). Although the robustness of this scene-preferential response is well-established, the mechanism behind it is not entirely understood. The standard explanation is that PPA selectivity reflects tuning to a high-level stimulus category such as &#x0201C;scene&#x0201D;, &#x0201C;landmark&#x0201D;, or &#x0201C;place&#x0201D; (Epstein, <xref ref-type="bibr" rid="B8">2005</xref>; Downing et al., <xref ref-type="bibr" rid="B6">2006</xref>). However, an alternative possibility is that the PPA is tuned for low-level features that are more commonly found in scenes than in non-preferred stimulus categories.</p>
<p>Recent work has provided some support for the low-level feature explanation by demonstrating response biases in the PPA that relate to the distribution of low-level features. For example, the PPA responds more strongly to high spatial frequency stimuli than low spatial frequency stimuli (Rajimehr et al., <xref ref-type="bibr" rid="B30">2011</xref>; Zeidman et al., <xref ref-type="bibr" rid="B41">2012</xref>; Kauffmann et al., <xref ref-type="bibr" rid="B19">2015</xref>; Watson et al., <xref ref-type="bibr" rid="B34">2016</xref>) and more strongly to images with edges at cardinal orientations (vertical, horizontal) than to images with edges at non-cardinal orientations (Nasr and Tootell, <xref ref-type="bibr" rid="B27">2012</xref>; Lescroart et al., <xref ref-type="bibr" rid="B22">2015</xref>). Moreover, in an intriguing recent study, Nasr et al. (<xref ref-type="bibr" rid="B28">2014</xref>) report that the PPA is sensitive to the presence of rectilinear edges: it responds more strongly to stimuli with many right angles than to stimuli with few right angles, even when the stimuli are basic shapes without any high-level semantic content. These authors further report that this preference for right angles is at least as large as the PPA preference for scenes, and they note the interesting fact that many stimuli that strongly activated the PPA in previous studies had a large quantity of right angles. They speculate that scene-selectivity in the ventral visual stream might be explained by a sensitivity to rectilinear edges&#x02014;an idea that has some plausibility given the ubiquitous nature of rectilinear junctions in modern built environments. We will henceforth refer to the idea that rectilinearity sensitivity might explain PPA scene selectivity as the &#x0201C;rectilinearity hypothesis.&#x0201D;</p>
<p>Although these results are suggestive, the presence of low-level feature biases in a region does not preclude the possibility that it might also encode high-level category information that is tolerant to transformations of those low-level features. As an example, the PPA exhibits retinotopic organization (Levy et al., <xref ref-type="bibr" rid="B23">2001</xref>, <xref ref-type="bibr" rid="B24">2004</xref>; Arcaro et al., <xref ref-type="bibr" rid="B2">2009</xref>; Silson et al., <xref ref-type="bibr" rid="B32">2015</xref>), but it also represents scene identity in a manner that is invariant to retinotopic location (MacEvoy and Epstein, <xref ref-type="bibr" rid="B25">2007</xref>; Golomb and Kanwisher, <xref ref-type="bibr" rid="B12">2011</xref>). Nor is it clear that these low-level biases are sufficient to explain all aspects of regional tuning. For example, the PPA response to objects and buildings is modulated by their navigational history, an effect that cannot be explained in terms of the low-level visual features of the objects and buildings (Janzen and van Turennout, <xref ref-type="bibr" rid="B16">2004</xref>; Schinazi and Epstein, <xref ref-type="bibr" rid="B31">2010</xref>). Moreover, although Nasr and colleagues established a rectilinearity effect for single objects, arrays of objects, and arrays of geometric shapes, they did not examine the effect of rectilinearity on PPA response to naturalistic scenes. Thus, it remains possible that category selectivity rather than low-level feature selectivity best characterizes the response properties of the PPA.</p>
<p>The present study addresses this issue. We present results from three experiments that aimed to determine whether rectilinearity suffices to explain the scene-selectivity of the PPA. The first experiment attempted to replicate Nasr and colleagues&#x02019; finding of a rectilinearity bias for basic shapes in the PPA. The second experiment examined whether a similar rectilinearity bias could be found for naturalistic stimuli (i.e., scenes and faces), and tested whether the &#x0201C;categorical&#x0201D; difference between scenes and faces in the PPA would be maintained when rectilinearity was matched between the two stimulus classes. The third experiment again tested for a rectilinearity bias in naturalistic stimuli, by using faces and scenes with artificially enhanced or degraded rectilinearity. To anticipate, our results show that the PPA is indeed sensitive to rectilinearity for arrays of 2-d shapes, but rectilinearity does not suffice to explain the scene-selectivity of the PPA.</p>
</sec>
<sec sec-type="materials and methods" id="s2">
<title>Materials and Methods</title>
<sec id="s2-1">
<title>Participants</title>
<p>Participants were recruited from the University of Pennsylvania community to participate in one of three experiments (Experiment 1: <italic>n</italic> = 8, 4 female, age range: 21&#x02013;38; Experiment 2: <italic>n</italic> = 8, 3 female, age range 20&#x02013;38; Experiment 3: <italic>n</italic> = 15, 7 female, age range: 20&#x02013;38). Five subjects who participated in Experiment 1 also participated in Experiment 2 (separated by around 3 months). All subjects who participated in Experiment 1 also participated in Experiment 3 during the same testing session, with Experiment 3 preceding Experiment 1. Subjects had normal or corrected-to-normal vision and radiologically normal brains, with no history of neuropsychological disorder. All participants provided written consent according to procedures approved by the University of Pennsylvania institutional review board.</p>
</sec>
<sec id="s2-2">
<title>MRI Acquisition</title>
<p>Scanning was performed at the Hospital of the University of Pennsylvania using a 3T Siemens Trio scanner equipped with a 32-channel head coil. High-resolution T1-weighted images for anatomical localization were acquired using a three-dimensional magnetization-prepared rapid acquisition gradient echo pulse sequence [repetition time (TR), 1620 ms; echo time (TE), 3.09 ms; inversion time (TI), 950 ms; voxel size, 1 &#x000D7; 1 &#x000D7; 1 mm; matrix size, 192 &#x000D7; 256 &#x000D7; 160]. T2*-weighted images sensitive to blood oxygenation level-dependent (BOLD) contrasts were acquired using a gradient echo, echoplanar pulse sequence [TR, 3000 ms; TE, 30 ms; flip angle 90&#x000B0;; voxel size, 3 &#x000D7; 3 &#x000D7; 3 mm; field of view (FOV), 192; matrix size, 64 &#x000D7; 64 &#x000D7; 44]. Visual stimuli were displayed by rear-projecting them onto a Mylar screen at 1024 &#x000D7; 768 pixel resolution with an Epson 8100 3-LCD projector equipped with a Buhl long-throw lens. Subjects viewed stimuli through a mirror attached to the head coil.</p>
</sec>
<sec id="s2-3">
<title>General Design and Procedure</title>
<p>Each experiment consisted of two 5 min 25 s fMRI scan runs in which subjects viewed stimuli from four conditions that were chosen to test specific hypotheses about PPA function (see &#x0201C;Stimuli&#x0201D; Section). Scan runs were divided into sixteen 15 s blocks; in each, subjects viewed 15 stimuli from the same condition presented one at a time for 600 ms each followed by a 400 ms interstimulus interval. Stimuli had a visual extent of approximately 13 &#x000D7; 13 degrees and were presented on a gray background. The experimental blocks were interspersed with five 15 s fixation blocks in which a black fixation cross was presented at the middle of a uniform gray screen. Subject attention was maintained by asking them to perform a one-back image repetition detection task during the experimental blocks. Stimulus repetitions occurred twice per block, so each block contained 13 unique stimuli; across the four blocks per condition in each run, this yielded 52 unique stimuli per condition.</p>
<p>Following the experimental runs, subjects completed two functional localizer runs in which they viewed scenes, objects, faces, and scrambled objects in separate blocks. Data from these runs were used to identify the location of the PPA and other scene-selective regions. These runs had the same length, design, timing, and task as the main experimental runs.</p>
</sec>
<sec id="s2-4">
<title>Stimuli</title>
<sec id="s2-4-1">
<title>Experiment 1</title>
<p>To replicate Nasr and colleagues&#x02019; finding of an effect of rectilinearity on PPA response during viewing of geometric shapes, subjects were presented with arrays of computer-generated 2-d squares (high-rectilinearity) or circles (low-rectilinearity; Figure <xref ref-type="fig" rid="F1">1A</xref>). Because it is unknown whether the PPA rectilinearity bias depends on the spatial extent of the rectilinear edges, the squares and circles in the two shape arrays were generated at two different sizes (large and small; Figure <xref ref-type="fig" rid="F1">1A</xref>). Widths of squares and diameters of circles were ten times larger, on average, in the large shape conditions than the small shape conditions. Fifty-two unique images were generated per condition. Each individual shape in an array in a given image was randomly assigned a gray-scale fill. Stimulus conditions were matched on mean luminance and contrast using the SHINE toolbox (Willenbockel et al., <xref ref-type="bibr" rid="B36">2010</xref>).</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p><bold>Stimulus conditions (left) and example right angle convolution intensities (right) for Experiments 1&#x02013;3. (A)</bold> Experiment 1 stimuli consisted of squares (high-rectilinearity) and circles (low-rectilinearity) that were either large or small in size. <bold>(B)</bold> Experiment 2 stimuli consisted of naturalistic high- and low-rectilinearity scene and face images. <bold>(C)</bold> Experiment 3 stimuli consisted of pixelated (high-rectilinearity) and pointillized (low-rectilinearity) scene and face images.</p></caption>
<graphic xlink:href="fnhum-10-00137-g0001.tif"/>
</fig>
<p>Rectilinearity for each condition was calculated using the methods outlined in Nasr et al. (<xref ref-type="bibr" rid="B28">2014</xref>) and clarified through personal correspondence with the authors. In brief, right angle wavelet filters were first constructed with an algorithm originally used to generate curved &#x0201C;banana&#x0201D; filters (Kr&#x000FC;ger et al., <xref ref-type="bibr" rid="B21">1996</xref>). Rather than using a square root function to produce curved filters, however, an absolute value function was used to produce angled filters. Wavelets were constructed at four different spatial scales (1/5, 1/9, 1/15, and 1/27 cycles per pixel) and 16 different orientations (22.5&#x02013;360&#x000B0; in 22.5&#x000B0; steps) at a size of 300 &#x000D7; 300 pixels. Edges in the images were then extracted using Canny edge detection at a threshold of 0.2, and each filter was individually convolved with the edge map. Intensities from the resultant convolved matrix were averaged across edge points and orientations to generate orientation-invariant wavelet coefficients. These coefficients were then normalized within spatial scale across the image set for each experiment by subtracting the minimum value within spatial scale and dividing by the range. The final rectilinearity index for each image was determined by averaging these normalized coefficients across the four spatial scales. As expected, squares had significantly higher rectilinearity than circles (<italic>t</italic><sub>(206)</sub> = 5.54, <italic>p</italic> &#x0003C; 10<sup>&#x02212;7</sup>).</p>
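The logic of this pipeline can be illustrated in miniature. The sketch below does not reproduce the banana-wavelet filter bank; it substitutes a Harris-style structure-tensor corner response as a crude stand-in for right-angle energy, merely to show that a square scores higher than a circle of comparable size. All function names here are ours, not from the article.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def corner_energy(img, k=0.05, window=5):
    # Harris-style corner response: a crude stand-in for the right-angle
    # wavelet coefficients described above (NOT the actual banana filters).
    gy, gx = np.gradient(img.astype(float))
    ixx = uniform_filter(gx * gx, size=window)
    iyy = uniform_filter(gy * gy, size=window)
    ixy = uniform_filter(gx * gy, size=window)
    r = ixx * iyy - ixy ** 2 - k * (ixx + iyy) ** 2
    # Straight or gently curving edges give r <= 0; only corner-like
    # neighborhoods with two edge orientations contribute.
    return np.clip(r, 0, None).sum()

def draw_square(n=100, half=25):
    img = np.zeros((n, n))
    c = n // 2
    img[c - half:c + half, c - half:c + half] = 1.0
    return img

def draw_circle(n=100, radius=25):
    yy, xx = np.mgrid[:n, :n]
    c = n // 2
    d = np.sqrt((yy - c) ** 2 + (xx - c) ** 2)
    return np.clip(radius + 0.5 - d, 0.0, 1.0)  # anti-aliased edge

square_energy = corner_energy(draw_square())
circle_energy = corner_energy(draw_circle())
```

Because the response is clipped at zero, the circle's smoothly rotating edge contributes almost nothing, while the square's four right angles dominate the total.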
</sec>
<sec id="s2-4-2">
<title>Experiment 2</title>
<p>To test whether rectilinearity effects could be found for naturalistic stimuli, subjects were presented with grayscale images of faces and scenes that were grouped by rectilinearity (Figure <xref ref-type="fig" rid="F1">1B</xref>). Specifically, 52 high-rectilinearity scenes, 52 low-rectilinearity scenes, 52 high-rectilinearity faces, and 52 low-rectilinearity faces were chosen from a larger image set (377 faces; 543 scenes) based on their rectilinearity values. High-rectilinearity stimuli had, by design, significantly higher rectilinearity than low-rectilinearity stimuli (<italic>t</italic><sub>(206)</sub> = 27.92, <italic>p</italic> &#x0003C; 10<sup>&#x02212;72</sup>). Crucially, high-rectilinearity faces and scenes were statistically matched (<italic>t</italic><sub>(102)</sub> = 1.53, <italic>p</italic> = 0.13), as were low-rectilinearity faces and scenes (<italic>t</italic><sub>(102)</sub> = 1.42, <italic>p</italic> = 0.16). Thus, response differences between faces and scenes could not be explained by differences in rectilinearity. To ensure that all stimuli had equal retinotopic extent, faces were displayed on a phase-scrambled variation of a single scene image, which was included in the rectilinearity calculation. Each stimulus condition was matched for mean luminance and contrast using the SHINE toolbox. All scene stimuli depicted natural outdoor scenes (e.g., forests, lakes).</p>
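To make the selection logic concrete, here is a minimal sketch using synthetic rectilinearity values. The pool sizes follow the text, but the simple take-the-extremes rule is our simplification: the actual sets were additionally chosen so that faces and scenes were matched within each rectilinearity level.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
# Hypothetical rectilinearity indices for the candidate pools
face_rect = rng.uniform(0.0, 1.0, 377)
scene_rect = rng.uniform(0.0, 1.0, 543)

def split_extremes(values, n=52):
    # Take the n highest and n lowest values as the high/low groups.
    order = np.argsort(values)
    return values[order[-n:]], values[order[:n]]

face_hi, face_lo = split_extremes(face_rect)
scene_hi, scene_lo = split_extremes(scene_rect)

# High vs. low groups should differ strongly by construction; the real
# selection additionally iterated until faces and scenes were matched
# within each level (non-significant within-level t-tests).
t_hl, p_hl = ttest_ind(np.r_[face_hi, scene_hi], np.r_[face_lo, scene_lo])
```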
</sec>
<sec id="s2-4-3">
<title>Experiment 3</title>
<p>To further test whether the PPA and other scene regions are sensitive to the rectilinearity of naturalistic stimuli, we created a new set of grayscale images of natural faces and scenes, which had rectilinearity artificially enhanced or reduced. These images were pseudorandomly drawn from the same image set as in Experiment 2 and from the SUN image database (Xiao et al., <xref ref-type="bibr" rid="B38">2010</xref>). The same images were presented to each participant. For each stimulus category, half of the images were decomposed into square pixels that were larger than the original pixels (high-rectilinearity) and half were decomposed into round points (low-rectilinearity). The result of these manipulations is to shift the perceptual salience of high spatial frequency rectilinearity up or down, respectively (Figure <xref ref-type="fig" rid="F1">1C</xref>). Pixelated images were divided into pixels aligned by row and column across the image. Pointillized images consisted of imbricated circles covering the full image. Pixels and points had edge lengths or diameters, respectively, of 6 pixels at display resolution. Pixelation and pointillization were executed using Pixelmator software (v3.3.2, 2014). Fifty-two unique images were generated per condition (pixelated scenes, pointillized scenes, pixelated faces, pointillized faces). Pixelated scenes had higher rectilinearity than pointillized scenes (<italic>t</italic><sub>(102)</sub> = 3.02, <italic>p</italic> &#x0003C; 0.01), and pixelated faces had higher rectilinearity than pointillized faces (<italic>t</italic><sub>(102)</sub> = 4.44, <italic>p</italic> &#x0003C; 0.0001). 
Further, rectilinearity was biased against the expected fMRI category effect in the PPA: pixelated faces had marginally greater rectilinearity than pixelated scenes (<italic>t</italic><sub>(102)</sub> = 1.88, <italic>p</italic> = 0.06) and pointillized faces had significantly greater rectilinearity than pointillized scenes (<italic>t</italic><sub>(102)</sub> = 3.26, <italic>p</italic> &#x0003C; 0.01). Each stimulus condition was matched for mean luminance and contrast using the SHINE toolbox.</p>
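The two image manipulations can be approximated in a few lines. This is a simplified sketch: the published stimuli were produced in Pixelmator with overlapping ("imbricated") circles, whereas this version paints non-overlapping discs on a mean-gray canvas.

```python
import numpy as np

def pixelate(img, block=6):
    # Replace each block x block tile with its mean intensity
    # (high-rectilinearity manipulation).
    h, w = img.shape
    h2, w2 = (h // block) * block, (w // block) * block
    tiles = img[:h2, :w2].reshape(h2 // block, block, w2 // block, block)
    means = tiles.mean(axis=(1, 3))
    return np.repeat(np.repeat(means, block, axis=0), block, axis=1)

def pointillize(img, block=6):
    # Paint a disc of each tile's mean intensity on a mean-gray canvas
    # (low-rectilinearity manipulation). Simplified: a non-overlapping
    # grid of discs rather than imbricated circles.
    pix = pixelate(img, block)
    yy, xx = np.mgrid[:block, :block]
    disc = ((yy - (block - 1) / 2) ** 2
            + (xx - (block - 1) / 2) ** 2) <= (block / 2) ** 2
    out = np.full(pix.shape, img.mean())
    for i in range(0, pix.shape[0], block):
        for j in range(0, pix.shape[1], block):
            out[i:i + block, j:j + block][disc] = pix[i, j]
    return out

rng = np.random.default_rng(0)
img = rng.random((60, 60))
pix, pts = pixelate(img), pointillize(img)
```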
</sec>
</sec>
<sec id="s2-5">
<title>Data Analysis</title>
<p>Functional MR images for both the main experiments and functional localizer were preprocessed using the following steps. First, they were corrected for differences in slice timing by resampling slices in time to match the first slice of each volume. Second, they were corrected for subject motion by realigning to the first volume of the scan run using MCFLIRT (Jenkinson et al., <xref ref-type="bibr" rid="B17">2002</xref>). Third, the timecourses for each voxel were high-pass filtered to remove low temporal frequency fluctuations in the BOLD signal with periods longer than 100 s. Data from the functional localizer scan were smoothed with a 5 mm full-width at half-maximum (FWHM) Gaussian filter. Data from the experimental scans were smoothed with a 5 mm FWHM Gaussian filter for all region of interest analyses and 8 mm FWHM for all whole-brain group analyses.</p>
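For illustration, a 100 s high-pass cutoff corresponds to removing frequencies below 0.01 Hz. The actual filtering was done in FSL (which uses a Gaussian-weighted running-line detrend); the sketch below substitutes a zero-phase Butterworth high-pass to show the intended effect on a drifting voxel timecourse.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def highpass_timecourse(ts, tr=3.0, cutoff_s=100.0):
    # Zero-phase Butterworth high-pass as an approximation of the
    # 100 s temporal filtering described above (not FSL's exact method).
    nyquist = 0.5 / tr
    b, a = butter(2, (1.0 / cutoff_s) / nyquist, btype='highpass')
    return filtfilt(b, a, ts)

t = np.arange(0, 600, 3.0)           # 200 volumes at TR = 3 s
drift = np.sin(2 * np.pi * t / 500)  # slow drift (period 500 s, removed)
signal = np.sin(2 * np.pi * t / 30)  # task-band fluctuation (period 30 s, kept)
cleaned = highpass_timecourse(drift + signal)
```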
<p>We examined univariate responses within several regions of interest (ROIs) known to be involved in visual processing. ROIs were defined individually for each subject using data from the functional localizer scans. In addition to the PPA, we defined ROIs for two other scene-responsive regions [retrosplenial complex (RSC) and occipital place area (OPA); Hasson et al., <xref ref-type="bibr" rid="B14">2002</xref>; Bar and Aminoff, <xref ref-type="bibr" rid="B3">2003</xref>; Dilks et al., <xref ref-type="bibr" rid="B5">2013</xref>], and early visual cortex (EVC). The OPA, but not the RSC or EVC, has been previously reported to show a similar rectilinearity bias to PPA (Nasr et al., <xref ref-type="bibr" rid="B28">2014</xref>). ROIs were defined using a contrast of scenes &#x0003E; objects for PPA, RSC, and OPA, and scrambled-objects &#x0003E; baseline for EVC, and they were further constrained by a group-based anatomical map of scene- or scrambled-object-selective activation previously derived in our lab from 42 localizer subjects (Julian et al., <xref ref-type="bibr" rid="B18">2012</xref>). Specifically, each ROI was defined as the top 100 voxels in each hemisphere that exhibited the defining contrast and fell within the group-parcel mask for that ROI. The group-parcel mask for EVC was defined based on a scrambled-objects &#x0003E; intact-objects contrast. The voxels comprising each ROI did not need to be contiguous. This method ensured that all ROIs could be defined in both hemispheres in every subject and that all ROIs contained the same number of voxels. All ROIs were combined across hemispheres unless otherwise noted. All contrasts were performed in the native anatomical space for each subject and the group-parcel map was mapped into that space using a linear transformation.</p>
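The voxel-selection rule (top 100 voxels exhibiting the contrast within one hemisphere's group parcel) can be sketched as follows. The array names are hypothetical; in practice the maps are NIfTI volumes in each subject's native space, and the mask is assumed to contain at least 100 voxels.

```python
import numpy as np

def define_roi(contrast_map, parcel_mask, n_voxels=100):
    # Select the n_voxels voxels with the strongest localizer contrast
    # (e.g., scenes > objects t-values) inside the group-parcel mask.
    # Voxels outside the mask are excluded by setting them to -inf.
    vals = np.where(parcel_mask, contrast_map, -np.inf).ravel()
    top = np.argpartition(vals, -n_voxels)[-n_voxels:]
    roi = np.zeros(contrast_map.size, dtype=bool)
    roi[top] = True
    return roi.reshape(contrast_map.shape)

rng = np.random.default_rng(0)
tmap = rng.standard_normal((20, 20, 20))   # toy contrast map
mask = rng.random((20, 20, 20)) > 0.5      # toy group parcel
roi = define_roi(tmap, mask)
```

Note that the selected voxels need not be contiguous, matching the procedure described above.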
<p>We then used general linear models (GLMs) implemented in FSL<xref ref-type="fn" rid="fn0001"><sup>1</sup></xref> to estimate the response of each voxel to the four experimental conditions for each experiment. Each condition was modeled as a boxcar function convolved with a canonical hemodynamic response function. To test for the effects of independent factors, we applied repeated-measures analysis of variances (ANOVAs) to the univariate responses within each ROI. Subsequent comparisons between individual conditions were based on paired-sampled <italic>t-</italic>tests. For tests of the rectilinearity hypothesis, significance was assessed using 1-tailed tests in the direction of the rectilinearity hypothesis (i.e., greater response to high- than low-rectilinearity conditions). For all other tests, significance was assessed using 2-tailed tests.</p>
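The GLM setup can be sketched as follows. The double-gamma HRF parameters are the common SPM-style defaults, assumed here rather than taken from the article, and the onsets are toy values, not the actual design.

```python
import numpy as np
from scipy.stats import gamma

def canonical_hrf(tr=3.0, length_s=30.0):
    # Double-gamma hemodynamic response function (assumed parameters).
    t = np.arange(0.0, length_s, tr)
    h = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
    return h / h.sum()

def condition_regressor(onsets_s, block_s, n_vols, tr=3.0):
    # Boxcar for one condition's blocks, convolved with the HRF.
    box = np.zeros(n_vols)
    for onset in onsets_s:
        start = int(round(onset / tr))
        box[start:start + int(round(block_s / tr))] = 1.0
    return np.convolve(box, canonical_hrf(tr))[:n_vols]

# Toy design: two conditions with 15 s blocks, plus an intercept column.
n_vols = 100
X = np.column_stack([
    condition_regressor([30, 120, 210], 15, n_vols),
    condition_regressor([75, 165, 255], 15, n_vols),
    np.ones(n_vols),
])
true_betas = np.array([2.0, 0.5, 10.0])
y = X @ true_betas                      # noise-free simulated voxel timecourse
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
```

In the absence of noise, ordinary least squares recovers the generating betas exactly, which is the per-voxel estimate the condition comparisons are based on.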
<p>In addition to the ROI analyses, we also performed a whole-brain group analysis to test for effects of rectilinearity and category in Experiments 2 and 3 outside of our ROIs. For this analysis, data from those subjects who participated in both Experiments 2 and 3 were first combined via a within-subject fixed-effects analysis prior to the group analysis. To generate group-averaged maps, each individual participant&#x02019;s functional maps were spatially transformed onto the averaged human brain using a spherical transformation in FreeSurfer<xref ref-type="fn" rid="fn0002"><sup>2</sup></xref> (Fischl et al., <xref ref-type="bibr" rid="B11">1999</xref>) and then averaged using random effects models in FSL.</p>
<p>In addition to univariate analyses, we also assessed whether there was information about rectilinearity and stimulus category represented in the multivoxel patterns of response in each ROI in Experiments 2 and 3. To do so, for each participant, we used GLMs to estimate the response pattern evoked by each stimulus condition separately for each of the two fMRI runs. Multivoxel pattern analyses (MVPA) were then performed through split-half pattern comparison (Haxby et al., <xref ref-type="bibr" rid="B15">2001</xref>). Individual patterns were normalized prior to this computation by subtracting the grand mean pattern (i.e., the cocktail mean) for each half of the data. For each ROI, we then computed the correlation between the response patterns resulting from the same stimulus conditions and from different stimulus conditions. To test for coding of rectilinearity controlling for stimulus category, we computed a discrimination index that was the difference in the average correlation between the same rectilinearity condition and the corresponding different rectilinearity condition (i.e., [same rectilinearity, same category] &#x02212; [different rectilinearity, same category]). This rectilinearity discrimination index was computed separately for scenes and faces. Likewise, to test for coding of stimulus category controlling for rectilinearity, we computed a discrimination index that was the difference in the average correlation between the same category condition and the corresponding different category condition (i.e., [same category, same rectilinearity] &#x02212; [different category, same rectilinearity]). This category discrimination index was computed separately for high and low rectilinearity stimulus conditions. To assess statistical significance, <italic>t</italic>-tests were used to evaluate whether the discrimination indices were greater than zero.</p>
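The split-half correlation analysis can be sketched on synthetic patterns. For brevity this sketch implements only the category discrimination index; the rectilinearity index is computed analogously with the roles of category and rectilinearity exchanged. The data and condition labels are hypothetical.

```python
import numpy as np

CONDS = [('scene', 'high'), ('scene', 'low'), ('face', 'high'), ('face', 'low')]

def cocktail_normalize(patterns):
    # Subtract the grand-mean (cocktail-mean) pattern within one half.
    grand = np.mean([patterns[c] for c in CONDS], axis=0)
    return {c: patterns[c] - grand for c in CONDS}

def category_index(half1, half2):
    # [same category, same rectilinearity] - [different category, same
    # rectilinearity], averaged across conditions, as described above.
    h1, h2 = cocktail_normalize(half1), cocktail_normalize(half2)
    r = lambda a, b: np.corrcoef(h1[a], h2[b])[0, 1]
    same, diff = [], []
    for rect in ('high', 'low'):
        same += [r(('scene', rect), ('scene', rect)),
                 r(('face', rect), ('face', rect))]
        diff += [r(('scene', rect), ('face', rect)),
                 r(('face', rect), ('scene', rect))]
    return np.mean(same) - np.mean(diff)

# Synthetic voxel patterns with a genuine category signal plus noise:
rng = np.random.default_rng(1)
scene_t, face_t = rng.standard_normal(200), rng.standard_normal(200)

def make_half():
    return {(cat, rect): (scene_t if cat == 'scene' else face_t)
            + 0.5 * rng.standard_normal(200)
            for cat, rect in CONDS}

half1, half2 = make_half(), make_half()
idx = category_index(half1, half2)  # reliably positive for these data
```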
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>Results</title>
<sec id="s3-1">
<title>Experiment 1: Does the PPA Respond More to High- than Low-Rectilinearity Shapes?</title>
<p>In our first experiment, we sought to replicate the PPA rectilinearity bias for simple shapes reported by Nasr et al. (<xref ref-type="bibr" rid="B28">2014</xref>). To do so, we scanned participants while they viewed arrays of computer-generated gray-scale squares (high-rectilinearity) and circles (low-rectilinearity) presented at two different sizes (small or large; Figure <xref ref-type="fig" rid="F1">1A</xref>). We then examined the fMRI response in each of the predefined ROIs.</p>
<p>Consistent with the rectilinearity hypothesis, the PPA responded more strongly to arrays of squares than to arrays of circles (Figure <xref ref-type="fig" rid="F2">2</xref>). Confirming this observation, a 2 &#x000D7; 2 ANOVA with factors for shape (square vs. circle) and size (small vs. large) found a main effect of shape (<italic>F</italic><sub>(1,7)</sub> = 7.48, <italic>p</italic> &#x0003C; 0.05, <inline-formula><mml:math id="M1"><mml:mrow><mml:msubsup><mml:mi>&#x003B7;</mml:mi><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.52). The greater PPA response to squares than circles was significant for large shapes (<italic>t</italic><sub>(7)</sub> = 2.64, <italic>p</italic> &#x0003C; 0.05) and marginally significant for small shapes (<italic>t</italic><sub>(7)</sub> = 1.72, <italic>p</italic> = 0.07). Thus, our results replicate the basic finding of Nasr and colleagues that the PPA&#x02019;s response to arrays of 2-d shapes is modulated by rectilinearity.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p><bold>Results for Experiment 1.</bold> Average percent signal change (&#x000B1;1 SEM) to large and small squares (high-rectilinearity) and circles (low-rectilinearity) is shown for each region of interest (ROI) averaged across hemispheres. The parahippocampal place area (PPA) and occipital place area (OPA) both showed a significant main effect of shape, with greater overall responses to squares than circles. (<sup>&#x02020;</sup><italic>p</italic> &#x0003C; 0.07; *<italic>p</italic> &#x0003C; 0.05).</p></caption>
<graphic xlink:href="fnhum-10-00137-g0002.tif"/>
</fig>
<p>We also observed a significant effect of rectilinearity, with greater response to arrays of squares than to arrays of circles, in OPA (<italic>F</italic><sub>(1,7)</sub> = 10.37, <italic>p</italic> &#x0003C; 0.01, <inline-formula><mml:math id="M2"><mml:mrow><mml:msubsup><mml:mi>&#x003B7;</mml:mi><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.60). In contrast, there was no rectilinearity bias in RSC (<italic>F</italic><sub>(1,7)</sub> = 0.09, <italic>p</italic> = 0.77) or EVC (<italic>F</italic><sub>(1,7)</sub> = 2.94, <italic>p</italic> = 0.13). The fact that EVC responds equally to squares and circles suggests that the rectilinearity bias observed in PPA and OPA was not simply inherited from this region. The OPA showed a significantly greater rectilinearity effect than EVC (<italic>F</italic><sub>(1,7)</sub> = 8.64, <italic>p</italic> &#x0003C; 0.05, <inline-formula><mml:math id="M3"><mml:mrow><mml:msubsup><mml:mi>&#x003B7;</mml:mi><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.55) but the region-by-rectilinearity interaction between PPA and EVC fell short of significance (<italic>F</italic><sub>(1,7)</sub> = 3.89, <italic>p</italic> = 0.089).</p>
<p>Unexpectedly, we also observed size effects in the PPA, OPA, and RSC. All three scene regions responded significantly more to large than small shapes (PPA: <italic>F</italic><sub>(1,7)</sub> = 41.91, <italic>p</italic> &#x0003C; 0.01, <inline-formula><mml:math id="M4"><mml:mrow><mml:msubsup><mml:mi>&#x003B7;</mml:mi><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.86; RSC: <italic>F</italic><sub>(1,7)</sub> = 14.35, <italic>p</italic> &#x0003C; 0.01, <inline-formula><mml:math id="M5"><mml:mrow><mml:msubsup><mml:mi>&#x003B7;</mml:mi><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.59; OPA: <italic>F</italic><sub>(1,7)</sub> = 10.01, <italic>p</italic> &#x0003C; 0.01, <inline-formula><mml:math id="M6"><mml:mrow><mml:msubsup><mml:mi>&#x003B7;</mml:mi><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.67). The reason for this preference for large shapes is unclear. It may indicate a preference for larger objects (Konkle and Oliva, <xref ref-type="bibr" rid="B20">2012</xref>), or it might be driven by uncontrolled variables such as spatial frequency or numerosity. Notably, EVC showed the opposite effect, responding more to small than large shapes (<italic>F</italic><sub>(1,7)</sub> = 72.97, <italic>p</italic> &#x0003C; 0.001, <inline-formula><mml:math id="M7"><mml:mrow><mml:msubsup><mml:mi>&#x003B7;</mml:mi><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.91). In no ROI was there a significant interaction between shape and size (PPA: <italic>F</italic><sub>(1,7)</sub> = 0.51, <italic>p</italic> = 0.50; RSC: <italic>F</italic><sub>(1,7)</sub> = 0.01, <italic>p</italic> = 0.92; OPA: <italic>F</italic><sub>(1,7)</sub> = 0.15, <italic>p</italic> = 0.71; EVC: <italic>F</italic><sub>(1,7)</sub> = 2.36, <italic>p</italic> = 0.17). 
Further, an additional 2 &#x000D7; 2 &#x000D7; 2 analysis with hemisphere as a factor found that the shape and size effects did not vary by hemisphere in any ROI (all <italic>F</italic><sub>(1,7)</sub>s &#x0003C; 2.59, <italic>p</italic>s &#x0003E; 0.15).</p>
</sec>
<sec id="s3-2">
<title>Experiment 2: Does the PPA Exhibit a Rectilinearity Effect for Naturalistic Stimuli (Scenes and Faces)?</title>
<p>After replicating the rectilinearity bias for shapes in the PPA, we next moved on to test whether there is a rectilinearity effect for naturalistic images, by scanning participants while they viewed images of high- and low-rectilinearity scenes and high- and low-rectilinearity faces (Figure <xref ref-type="fig" rid="F1">1B</xref>). Importantly, rectilinearity was matched between the scenes and faces; that is, the high-rectilinearity scenes and faces had a similar level of rectilinearity, as did the low-rectilinearity scenes and faces. Thus, our design not only allowed us to examine rectilinearity effects for scenes and faces, it also provided a strong test of the rectilinearity hypothesis. If the preferential response to scenes compared to faces in the PPA is due to the greater rectilinearity of scenes, then the &#x0201C;categorical&#x0201D; effect should be eliminated by rectilinear matching. On the other hand, if the PPA responds strongly to scenes in part because it is tuned to scenes as a category, then it should continue to exhibit a preferential response to scenes even after rectilinear matching.</p>
<p>Results are plotted in Figure <xref ref-type="fig" rid="F3">3A</xref>. A 2 &#x000D7; 2 ANOVA with factors for category (scene vs. face) and rectilinearity (high vs. low) found a strong effect of category in the PPA (<italic>F</italic><sub>(1,7)</sub> = 153.65, <italic>p</italic> &#x0003C; 0.001, <inline-formula><mml:math id="M8"><mml:mrow><mml:msubsup><mml:mi>&#x003B7;</mml:mi><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.96), with greater response to scenes than to faces. Crucially, there was no main effect of rectilinearity (<italic>F</italic><sub>(1,7)</sub> = 0.49, <italic>p</italic> = 0.51). There was a significant interaction between category and rectilinearity (<italic>F</italic><sub>(1,7)</sub> = 11.26, <italic>p</italic> &#x0003C; 0.05, <inline-formula><mml:math id="M9"><mml:mrow><mml:msubsup><mml:mi>&#x003B7;</mml:mi><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.62); however, this interaction was driven by a <italic>lower</italic> response to high- than low-rectilinearity scenes (<italic>t</italic><sub>(7)</sub> = &#x02212;3.77, <italic>p</italic> = 0.99) and the numerically converse effect for faces, though the rectilinearity effect for faces was not significant (<italic>t</italic><sub>(7)</sub> = 1.52, <italic>p</italic> = 0.17).</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p><bold>Univariate results for Experiments 2 and 3. (A)</bold> Experiment 2 average percent signal change (&#x000B1;1 SEM) to high-rectilinearity and low-rectilinearity scenes and faces in each ROI. No main effect of rectilinearity was observed in any ROI, although all ROIs showed a greater response to scenes than faces. <bold>(B)</bold> Experiment 2 comparison of rectilinearity-selectivity (high-rectilinearity &#x0003E; low-rectilinearity contrast <italic>t</italic>-statistic) and category-selectivity (scenes &#x0003E; faces contrast <italic>t</italic>-statistic) for all voxels in the group-defined PPA parcel for all participants (top row). Points that fall below the unity line are voxels with greater category-selectivity than rectilinearity-selectivity (shown in purple), and points that fall above the unity line are voxels with greater rectilinearity-selectivity than category-selectivity (shown in red). Gray voxels were not significant (<italic>p</italic> &#x0003C; 0.05, uncorrected) for either contrast. Few voxels exhibited greater rectilinearity-selectivity than scene-selectivity. The bottom row shows a histogram of rectilinearity- and category-selective voxels in each participant. In all subjects, the number of category-selective voxels far exceeded the number of rectilinearity-selective voxels. <bold>(C)</bold> Experiment 3 average percent signal change (&#x000B1;1 SEM) to pixelated (high-rectilinearity) and pointillized (low-rectilinearity) scenes and faces in each ROI. No main effect of rectilinearity was observed in any ROI, although all ROIs showed a greater response to scenes than faces. <bold>(D)</bold> Experiment 3 comparison of rectilinearity-selectivity (pixelated &#x0003E; pointillized contrast <italic>t</italic>-statistic) and category-selectivity (scenes &#x0003E; faces contrast <italic>t</italic>-statistic) for all voxels in the group-defined PPA parcel for all participants (top row). 
As in Experiment 2, few voxels exhibited greater rectilinearity-selectivity than scene-selectivity. Further, in all participants the number of category-selective voxels again far exceeded the number of rectilinearity-selective voxels (bottom row). (<sup>&#x02020;</sup><italic>p</italic> &#x0003C; 0.07; *<italic>p</italic> &#x0003C; 0.05).</p></caption>
<graphic xlink:href="fnhum-10-00137-g0003.tif"/>
</fig>
<p>Results in the other two scene regions were similar to the PPA: both RSC and OPA responded significantly more to scenes than faces (RSC: <italic>F</italic><sub>(1,7)</sub> = 31.10, <italic>p</italic> &#x0003C; 0.01, <inline-formula><mml:math id="M10"><mml:mrow><mml:msubsup><mml:mi>&#x003B7;</mml:mi><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.82; OPA: <italic>F</italic><sub>(1,7)</sub> = 46.09, <italic>p</italic> &#x0003C; 0.001, <inline-formula><mml:math id="M11"><mml:mrow><mml:msubsup><mml:mi>&#x003B7;</mml:mi><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.87) but neither region exhibited a main effect of rectilinearity (RSC: <italic>F</italic><sub>(1,7)</sub> = 0.26, <italic>p</italic> = 0.63; OPA: <italic>F</italic><sub>(1,7)</sub> = 0, <italic>p</italic> = 0.99). Like PPA, OPA also showed a significant interaction between category and rectilinearity (<italic>F</italic><sub>(1,7)</sub> = 15.45, <italic>p</italic> &#x0003C; 0.01, <inline-formula><mml:math id="M12"><mml:mrow><mml:msubsup><mml:mi>&#x003B7;</mml:mi><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.69), with a lower response to high- than low-rectilinearity scenes (<italic>t</italic><sub>(7)</sub> = &#x02212;3.10, <italic>p</italic> = 0.99) and a marginal rectilinearity effect for faces (<italic>t</italic><sub>(7)</sub> = 1.65, <italic>p</italic> = 0.06). There was no interaction between category and rectilinearity in RSC (<italic>F</italic><sub>(1,7)</sub> = 2.08, <italic>p</italic> = 0.19). 
Comparison of the three scene regions revealed no interaction between region and rectilinearity (<italic>F</italic><sub>(2,14)</sub> = 0.05, <italic>p</italic> = 0.95), but an interaction between region and category (<italic>F</italic><sub>(2,14)</sub> = 15.99, <italic>p</italic> &#x0003C; 0.001, <inline-formula><mml:math id="M13"><mml:mrow><mml:msubsup><mml:mi>&#x003B7;</mml:mi><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.70; scene-face response difference: OPA &#x0003E; PPA &#x0003E; RSC). Additional 2 &#x000D7; 2 &#x000D7; 2 ANOVAs with hemisphere as a factor found no significant interaction between hemisphere and rectilinearity in the PPA or RSC (both <italic>F</italic><sub>(1,7)</sub>s &#x0003C; 2.5, <italic>p</italic>s &#x0003E; 0.16). In the OPA, there was a significant interaction between hemisphere and rectilinearity (<italic>F</italic><sub>(1,7)</sub> = 5.69, <italic>p</italic> &#x0003C; 0.05, <inline-formula><mml:math id="M14"><mml:mrow><mml:msubsup><mml:mi>&#x003B7;</mml:mi><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.45), with a numerically greater response to high- than low-rectilinearity in the left hemisphere, and the converse effect in the right hemisphere, although neither hemisphere exhibited a significant effect of rectilinearity (both <italic>t</italic><sub>(7)</sub>s &#x0003C; 0.44, <italic>p</italic>s &#x0003E; 0.3). There was no interaction between category and hemisphere in any scene region (all <italic>F</italic><sub>(1,7)</sub>s &#x0003C; 2.61, <italic>p</italic>s &#x0003E; 0.15).</p>
<p>Like the scene regions, EVC responded similarly to high and low rectilinearity stimuli (Figure <xref ref-type="fig" rid="F3">3A</xref>; <italic>F</italic><sub>(1,7)</sub> = 0.85, <italic>p</italic> = 0.39), and there was also no significant interaction between category and rectilinearity (<italic>F</italic><sub>(1,7)</sub> = 0.98, <italic>p</italic> = 0.36) or hemisphere and rectilinearity (<italic>F</italic><sub>(1,7)</sub> = 0.66, <italic>p</italic> = 0.44). Moreover, there was no significant region-by-rectilinearity interaction between EVC and any of the scene regions (all <italic>F</italic><sub>(1,7)</sub>s &#x0003C; 2.52, <italic>p</italic>s &#x0003E; 0.18). However, EVC responded more to scenes than faces (<italic>F</italic><sub>(1,7)</sub> = 23.33, <italic>p</italic> &#x0003C; 0.01, <inline-formula><mml:math id="M15"><mml:mrow><mml:msubsup><mml:mi>&#x003B7;</mml:mi><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.77), and there was a significant interaction between hemisphere and category (<italic>F</italic><sub>(1,7)</sub> = 11.29, <italic>p</italic> &#x0003C; 0.05, <inline-formula><mml:math id="M16"><mml:mrow><mml:msubsup><mml:mi>&#x003B7;</mml:mi><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.62). Although PPA and OPA showed a greater scene-preferential response than EVC (all <italic>F</italic><sub>(1,7)</sub>s &#x0003E; 39.95, <italic>p</italic>s &#x0003C; 0.001, <inline-formula><mml:math id="M17"><mml:mrow><mml:msubsup><mml:mi>&#x003B7;</mml:mi><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:math></inline-formula>s &#x0003E; 0.85), the presence of category effects in EVC nonetheless indicates that there are some low-level differences between the scene and face categories despite our efforts to control for rectilinearity, overall visual extent, contrast, and luminance. 
There was no region-by-category interaction between EVC and RSC (<italic>F</italic><sub>(1,7)</sub> = 0.44, <italic>p</italic> = 0.53).</p>
<p>We considered the possibility that a subregion in the vicinity of the PPA, but not the most scene-selective part of the region, might be selective for the presence of right angles. Such a subregion might not be included in the PPA as defined by the top 100 voxels showing scene-selectivity in each hemisphere in the functional localizers. To address this possibility, we compared rectilinearity-selectivity (high vs. low rectilinearity contrast <italic>t</italic>-statistic) and category-selectivity (scene vs. face category contrast <italic>t</italic>-statistic) for each participant for each voxel in both hemispheres in the group-defined PPA parcel, which is larger than the individually-defined ROIs (Julian et al., <xref ref-type="bibr" rid="B18">2012</xref>). Figure <xref ref-type="fig" rid="F3">3B</xref> shows the results of this comparison. Only a small fraction of voxels was more rectilinearity-selective than scene-selective. Further, in each participant the number of category-selective voxels far exceeded the number of rectilinearity-selective voxels (Figure <xref ref-type="fig" rid="F3">3B</xref>). Thus, it is unlikely that the failure to find rectilinearity-selectivity in the PPA was due to a bias in ROI definition induced by our analysis methods.</p>
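<p>For illustration, the voxel classification underlying this comparison can be sketched as follows. This is a minimal reconstruction, not the analysis code actually used; the uncorrected threshold (one-tailed <italic>p</italic> &#x0003C; 0.05, df = 7) and all variable names are assumptions.</p>

```python
import numpy as np

def classify_voxels(t_category, t_rectilinearity, t_crit=1.895):
    """Classify voxels by comparing two contrast t-statistics.

    t_category: scenes > faces t-values, one per voxel
    t_rectilinearity: high > low rectilinearity t-values, one per voxel
    t_crit: assumed uncorrected threshold (one-tailed p < .05, df = 7)
    Returns boolean masks for category-preferring and
    rectilinearity-preferring voxels among those passing threshold.
    """
    t_cat = np.asarray(t_category, dtype=float)
    t_rect = np.asarray(t_rectilinearity, dtype=float)
    # A voxel counts only if it is significant for at least one contrast
    significant = (t_cat > t_crit) | (t_rect > t_crit)
    # Below the unity line: more category- than rectilinearity-selective
    category_pref = significant & (t_cat > t_rect)
    # Above the unity line: more rectilinearity- than category-selective
    rect_pref = significant & (t_rect > t_cat)
    return category_pref, rect_pref

# Toy t-values for three voxels
cat_pref, rect_pref = classify_voxels([3.1, 0.4, 2.2], [1.0, 0.2, 2.9])
```

<p>Counting the <monospace>True</monospace> entries in each mask per participant yields the histograms of category- vs. rectilinearity-selective voxels.</p>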
<p>We also considered the possibility that the PPA might be sensitive to rectilinearity at the representational level. That is, even though the overall level of activity in PPA did not distinguish between high vs. low rectilinearity scenes and faces, such distinctions might be apparent in multi-voxel response patterns. To test this, we performed split-half MVPA to see if it was possible to distinguish between stimuli based on rectilinearity (Figure <xref ref-type="fig" rid="F4">4A</xref>). We did not find strong evidence for this: in PPA and OPA, scenes with the same level of rectilinearity were no more representationally similar than scenes with different levels of rectilinearity (both <italic>t</italic><sub>(7)</sub>s &#x0003C; 1.20, <italic>p</italic>s &#x0003E; 0.26), and an absence of rectilinearity information was also observed for faces (both <italic>t</italic><sub>(7)</sub>s &#x0003C; &#x02212;0.03, <italic>p</italic>s &#x0003E; 0.51). In RSC and EVC, there was significant information about rectilinearity for scenes (both <italic>t</italic><sub>(7)</sub>s &#x0003E; 2.41, <italic>p</italic>s &#x0003C; 0.05), but not faces (both <italic>t</italic><sub>(7)</sub>s &#x0003C; &#x02212;0.82, <italic>p</italic>s &#x0003E; 0.78). In contrast, both high- and low-rectilinearity stimuli were more representationally similar if they were drawn from same category than if they were drawn from different categories in all ROIs (all <italic>t</italic><sub>(7)</sub>s &#x0003E; 5.83, <italic>p</italic>s &#x0003C; 0.001; Figure <xref ref-type="fig" rid="F4">4B</xref>). These results reinforce the idea that scene regions are driven more by category than by rectilinearity.</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p><bold>Multivariate results for Experiments 2 and 3. (A)</bold> To test for information about rectilinearity in each ROI, we computed a discrimination index that was the difference in split-half pattern similarity (r) between the same rectilinearity conditions and different rectilinearity conditions, separately for scenes and faces. To test for information about stimulus category, we computed a discrimination index that was the difference in split-half pattern similarity between the same category conditions and different category conditions, separately for high- and low-rectilinearity stimuli. <bold>(B)</bold> Experiment 2 average discrimination indices (&#x000B1;1 SEM) for rectilinearity and category in each ROI. There was no information about rectilinearity independent of category in any ROI, although all ROIs exhibited significant information about stimulus category.<bold> (C)</bold> Experiment 3 average discrimination indices (&#x000B1;1 SEM) for rectilinearity and category in each ROI. There was significant information about rectilinearity in early visual cortex (EVC), but not in other ROIs. All ROIs exhibited significant information about stimulus category. (<sup>&#x02020;</sup><italic>p</italic> &#x0003C; 0.08; *<italic>p</italic> &#x0003C; 0.05; **<italic>p</italic> &#x0003C; 0.01; ***<italic>p</italic> &#x0003C; 0.001).</p></caption>
<graphic xlink:href="fnhum-10-00137-g0004.tif"/>
</fig>
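<p>For illustration, the split-half discrimination index defined in the Figure 4 caption can be sketched as follows. This is a toy reconstruction with four-voxel patterns and hypothetical condition names, not the actual analysis code; full details are given in the Methods.</p>

```python
import numpy as np

def split_half_index(half1, half2, groups):
    """Discrimination index: mean split-half pattern correlation for
    same-group condition pairs minus that for different-group pairs."""
    conds = sorted(half1)
    same, diff = [], []
    for a in conds:
        for b in conds:
            r = np.corrcoef(half1[a], half2[b])[0, 1]
            (same if groups[a] == groups[b] else diff).append(r)
    return float(np.mean(same) - np.mean(diff))

# Toy multi-voxel patterns (4 voxels) for the four conditions;
# scene patterns resemble each other, face patterns resemble each other
patterns = {
    "scene_hi": np.array([1.0, 2.0, 3.0, 4.0]),
    "scene_lo": np.array([1.0, 2.0, 3.0, 5.0]),
    "face_hi":  np.array([4.0, 3.0, 2.0, 1.0]),
    "face_lo":  np.array([5.0, 3.0, 2.0, 1.0]),
}
# Grouping by category separates these patterns well...
idx_cat = split_half_index(patterns, patterns, {
    "scene_hi": "scene", "scene_lo": "scene",
    "face_hi": "face", "face_lo": "face"})
# ...while grouping by rectilinearity does not
idx_rect = split_half_index(patterns, patterns, {
    "scene_hi": "hi", "face_hi": "hi",
    "scene_lo": "lo", "face_lo": "lo"})
```

<p>With these toy patterns the category index is large and positive while the rectilinearity index is near zero, mirroring the qualitative pattern reported above.</p>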
<p>In sum, our data in this case did not support the rectilinearity hypothesis. Not only was the categorical effect maintained after rectilinear matching, but no rectilinearity effect was observed for scenes and faces. Moreover, MVPA could not distinguish between stimuli that differed only in rectilinearity independent of category.</p>
</sec>
<sec id="s3-3">
<title>Experiment 3: Does the PPA Exhibit a Rectilinearity Effect for Scenes and Faces with Artificially Enhanced Rectilinearity?</title>
<p>One possible reason for the lack of a rectilinearity effect in Experiment 2 may have been that the difference between the high- and low-rectilinearity conditions was too subtle to be noticed by participants. Although the high- and low-rectilinearity conditions in Experiment 2 differed on rectilinearity according to the rectilinearity index designed by Nasr et al. (<xref ref-type="bibr" rid="B28">2014</xref>), this index may fail to capture the most perceptually salient rectilinearity dimensions. Further, the greater response to scenes than faces in EVC in Experiment 2 emphasizes that there were other uncontrolled low-level differences between the categories, complicating the interpretation of the category results. To address these concerns, participants in Experiment 3 viewed images of scenes and faces with artificially enhanced or degraded rectilinearity (Figure <xref ref-type="fig" rid="F1">1C</xref>). Images were decomposed into square pixels (pixelated) to increase rectilinearity or round points (pointillized) to decrease rectilinearity. We then examined the fMRI response in each of the predefined ROIs.</p>
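<p>A minimal sketch of the pixelation manipulation, assuming simple block-averaging of a grayscale array, is shown below. The stimuli&#x02019;s actual construction (block size, color handling) is described in the Methods, and the complementary pointillization with round points is analogous but omitted here.</p>

```python
import numpy as np

def pixelate(img, block=8):
    """Quantize a grayscale image into uniform square blocks.

    Replacing each block x block region with its mean intensity creates
    hard, axis-aligned block boundaries, i.e., many right-angle edge
    junctions, raising the image's rectilinearity.
    """
    h, w = img.shape
    out = np.empty_like(img, dtype=float)
    for i in range(0, h, block):
        for j in range(0, w, block):
            out[i:i + block, j:j + block] = img[i:i + block, j:j + block].mean()
    return out

# Toy 4 x 4 gradient image, pixelated with 2 x 2 blocks
demo = pixelate(np.arange(16, dtype=float).reshape(4, 4), block=2)
```

<p>Each 2 &#x000D7; 2 block of the toy image collapses to a single intensity, so the output is piecewise-constant with square boundaries.</p>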
<p>Once again, we failed to find an effect of rectilinearity on PPA response (Figure <xref ref-type="fig" rid="F3">3C</xref>). A 2 &#x000D7; 2 ANOVA with factors for category (scene vs. face) and rectilinearity (pixelated vs. pointillized) found greater response in the PPA to scenes compared to faces (<italic>F</italic><sub>(1,14)</sub> = 100.19, <italic>p</italic> &#x0003C; 0.0001, <inline-formula><mml:math id="M18"><mml:mrow><mml:msubsup><mml:mi>&#x003B7;</mml:mi><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.88) but no difference between pixelated and pointillized stimuli (<italic>F</italic><sub>(1,14)</sub> = 0.93, <italic>p</italic> = 0.35). This lack of a rectilinearity bias was found for both scenes (<italic>t</italic><sub>(14)</sub> = 1.20, <italic>p</italic> = 0.25) and faces (<italic>t</italic><sub>(14)</sub> = 0.20, <italic>p</italic> = 0.84). There was no interaction between category and rectilinearity (<italic>F</italic><sub>(1,14)</sub> = 0.46, <italic>p</italic> = 0.51). Further, comparing rectilinearity selectivity (pixelated vs. pointillized contrast <italic>t</italic>-statistic) and category selectivity (scene vs. face category contrast <italic>t</italic>-statistic) for each voxel in both hemispheres of the group-defined PPA parcel (Julian et al., <xref ref-type="bibr" rid="B18">2012</xref>), there were few PPA voxels that were more selective for right angles than for scenes, and each participant exhibited substantially more voxels selective for scenes than right angles (Figure <xref ref-type="fig" rid="F3">3D</xref>). Thus, the failure to find rectilinearity-selectivity in the PPA in the present experiment was not due to a bias induced by our method of defining the ROI.</p>
<p>Results in the other scene regions were similar. Both RSC and OPA responded more to scenes than faces (RSC: <italic>F</italic><sub>(1,14)</sub> = 96.60, <italic>p</italic> &#x0003C; 0.001, <inline-formula><mml:math id="M19"><mml:mrow><mml:msubsup><mml:mi>&#x003B7;</mml:mi><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.87; OPA: <italic>F</italic><sub>(1,14)</sub> = 90.15, <italic>p</italic> &#x0003C; 0.0001, <inline-formula><mml:math id="M20"><mml:mrow><mml:msubsup><mml:mi>&#x003B7;</mml:mi><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.87), but neither region showed a significant main effect of rectilinearity (RSC: <italic>F</italic><sub>(1,14)</sub> = 1.02, <italic>p</italic> = 0.33; OPA: <italic>F</italic><sub>(1,14)</sub> = 2.96, <italic>p</italic> = 0.11), although the nonsignificant trend in OPA was in the predicted direction. There was no interaction between category and rectilinearity (RSC: <italic>F</italic><sub>(1,14)</sub> = 0.69, <italic>p</italic> = 0.42; OPA: <italic>F</italic><sub>(1,14)</sub> = 1.79, <italic>p</italic> = 0.20). Comparison of the three scene regions revealed no interaction between region and rectilinearity (<italic>F</italic><sub>(2,28)</sub> = 2.26, <italic>p</italic> = 0.12), but an interaction between region and category (<italic>F</italic><sub>(2,28)</sub> = 45.19, <italic>p</italic> &#x0003C; 0.001, <inline-formula><mml:math id="M21"><mml:mrow><mml:msubsup><mml:mi>&#x003B7;</mml:mi><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.78; scene-face response difference: OPA &#x0003E; PPA &#x0003E; RSC). 
EVC did not show effects of category (<italic>F</italic><sub>(1,14)</sub> = 1.39, <italic>p</italic> = 0.26) or rectilinearity (<italic>F</italic><sub>(1,14)</sub> = 0.38, <italic>p</italic> = 0.55), and no interaction between category and rectilinearity (<italic>F</italic><sub>(1,14)</sub> = 2.61, <italic>p</italic> = 0.13), indicating that the category effects in scene regions could not have been inherited from EVC. There was no significant region-by-rectilinearity interaction between EVC and any of the scene regions (all <italic>F</italic><sub>(1,14)</sub>s &#x0003C; 1.15, <italic>p</italic>s &#x0003E; 0.30), but all scene regions showed a greater scene-preferential response than EVC (all <italic>F</italic><sub>(1,14)</sub>s &#x0003E; 12.50, <italic>p</italic>s &#x0003C; 0.004, <inline-formula><mml:math id="M22"><mml:mrow><mml:msubsup><mml:mi>&#x003B7;</mml:mi><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:math></inline-formula>s &#x0003E; 0.47). Additional 2 &#x000D7; 2 &#x000D7; 2 ANOVAs with hemisphere as a factor found no significant interactions between hemisphere and rectilinearity or category in any ROI (all <italic>F</italic><sub>(1,14)</sub>s &#x0003C; 3.98, <italic>p</italic>s &#x0003E; 0.07).</p>
<p>To test whether the scene regions distinguished between pixelated and pointillized stimuli at the level of multivoxel patterns, we again performed split-half MVPA (Figure <xref ref-type="fig" rid="F4">4C</xref>). There was no significant information about rectilinearity in the RSC or OPA for either scenes or faces (both <italic>t</italic><sub>(14)</sub>s &#x0003C; 1.22, <italic>p</italic>s &#x0003E; 0.12). The PPA showed marginal information about rectilinearity for faces (<italic>t</italic><sub>(14)</sub> = 1.59, <italic>p</italic> = 0.07), but not scenes (<italic>t</italic><sub>(14)</sub> = 0.72, <italic>p</italic> = 0.24). All ROIs contained significant information about category for both pixelated and pointillized stimuli (all <italic>t</italic><sub>(14)</sub>s &#x0003E; 6.46, <italic>p</italic>s &#x0003C; 0.0001). Notably, there was significant information about rectilinearity in EVC for scenes (<italic>t</italic><sub>(14)</sub> = 3.28, <italic>p</italic> &#x0003C; 0.01) and marginal information for faces (<italic>t</italic><sub>(14)</sub> = 1.52, <italic>p</italic> = 0.075), indicating that this region was sensitive to the difference between the pixelated and pointillized stimulus conditions.</p>
</sec>
<sec id="s3-4">
<title>Effects of Category and Rectilinearity Outside of our ROIs</title>
<p>To test for effects of rectilinearity and category outside of our ROIs, we performed a whole-brain group analysis, aggregating data from Experiments 2 and 3 to maximize power to detect effects of rectilinearity and category. This analysis revealed very strong category effects throughout high-level visual cortex (Figure <xref ref-type="fig" rid="F5">5</xref>): the PPA, RSC, and OPA responded more strongly to scenes, whereas lateral and ventral occipitotemporal regions responded more strongly to faces. By contrast, we observed no rectilinearity effects that survived correction for multiple comparisons, although notably there was sensitivity to rectilinearity observed near the posterior right PPA and the left OPA at uncorrected statistical thresholds.</p>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p><bold>Group-averaged contrast maps of the effect of category (scenes &#x0003E; faces contrast) and rectilinearity (high-rectilinearity &#x0003E; low-rectilinearity contrast).</bold> Outlines of the group-defined parcels are shown for the PPA in brown, RSC in light blue, OPA in green, and EVC in purple.</p></caption>
<graphic xlink:href="fnhum-10-00137-g0005.tif"/>
</fig>
</sec>
</sec>
<sec sec-type="discussion" id="s4">
<title>Discussion</title>
<p>Replicating the findings of Nasr et al. (<xref ref-type="bibr" rid="B28">2014</xref>), our data provide evidence that the PPA is sensitive to the presence of right-angle junctions in basic shapes (Experiment 1). This result suggests that rectilinearity may play a role in PPA stimulus tuning. However, the rectilinearity bias observed in Experiment 1 failed to explain PPA scene-selectivity in two further experiments. In particular, we did not observe a rectilinearity effect for naturalistic scene or face stimuli (Experiment 2), even when these stimuli had artificially enhanced rectilinearity (Experiment 3). Furthermore, the PPA responded more to scenes than faces in Experiment 2 even when the scene and face stimuli were matched on rectilinearity. The presence of a PPA rectilinearity bias in Experiment 1, but not Experiments 2 and 3, suggests that sensitivity to a single low-level image feature, namely right-angle junctions, may be more relevant to understanding PPA tuning to basic shapes than to naturalistic stimuli. More broadly, these results demonstrate that sensitivity to rectilinearity is insufficient to explain PPA category tuning.</p>
<p>Why might the PPA exhibit sensitivity to rectilinearity for basic shapes, but not naturalistic scene and face images? One possibility is that basic shapes are interpreted as being more scene-like when they contain more right angles. Scenes and faces, by contrast, do not suffer from such ambiguity of interpretation: they are clearly either scenes or non-scenes, irrespective of their rectilinearity content. Alternatively, although our naturalistic stimuli were matched on overall visual extent, luminance, and contrast, the high- and low-rectilinearity stimuli in the present studies may have differed along some other uncontrolled stimulus dimension that modulated the PPA response in the opposite direction of the predicted rectilinearity effect. Finally, the scene stimuli used in the current experiments only depicted naturalistic outdoor scenes, and it is possible that the PPA response is modulated by rectilinearity for other scene categories (e.g., man-made scenes; Walther and Shen, <xref ref-type="bibr" rid="B33">2014</xref>). However, note that for these latter two possibilities, even if there were uncontrolled low-level differences between the stimulus conditions, and even if a rectilinearity effect were observed for man-made scenes, the lack of a rectilinearity effect in the current data is still evidence against the strongest form of the rectilinearity hypothesis (i.e., that the PPA response is determined mainly by stimulus rectilinearity).</p>
<p>Before dismissing the significance of right-angle junctions to PPA scene tuning, there is one caveat that merits mention. In the present study, rectilinearity of an image was calculated using an index introduced by Nasr et al. (<xref ref-type="bibr" rid="B28">2014</xref>). It is possible that the PPA is selective for rectilinear edges, but this rectilinearity index fails to detect the rectilinear edges to which the PPA is most tuned. In our experiments, there are at least three ways in which the index may have been inadequate. First, the rectilinearity index only reflects the presence of rectilinear edges at four spatial scales. While we found some insensitivity to spatial scale in Experiment 1, the PPA may be particularly sensitive to rectilinear edges at larger or smaller spatial scales than those in the current stimulus sets. Second, although this rectilinearity index is robust to rotations and translations of rectilinear edges in an image, it lacks invariance to skew deformations introduced by shifts in real-world viewpoint. The PPA could be highly sensitive to veridical rectilinearity&#x02014;that is, true right angle junctions in the world&#x02014;while maintaining invariance to the angular distortions caused by viewpoint shifts as they appear on the retina. If the PPA is tuned to this veridical (rather than image-level) rectilinearity, the current index would be sufficient from the vantage of a viewer positioned orthogonally to the surface plane, but not for a viewer positioned oblique to the plane, and the low-rectilinearity stimuli in the present experiments may have contained more right angle junctions oblique to the plane of the viewer. Finally, it is possible that the rectilinearity difference between the high- and low-rectilinearity conditions in Experiments 2 and 3 was smaller than in Experiment 1, or than in previous reports. Ideally, it would be possible to compare the rectilinearity range across stimulus sets. 
However, because the rectilinearity index normalizes rectilinearity within an image set, comparing across stimulus sets directly is problematic; in order to compare stimulus sets, rectilinearity values for each image must be recomputed, and this can cause changes in relative rectilinearity if new minimum or maximum rectilinearity values are introduced.</p>
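<p>To illustrate why within-set normalization precludes direct cross-set comparison, assume for the sake of example a simple min-max normalization (the actual index of Nasr et al., 2014 may normalize differently; the values below are arbitrary toy scores).</p>

```python
import numpy as np

def normalize_rectilinearity(raw_scores):
    """Min-max normalize raw rectilinearity values within an image set,
    mapping the least rectilinear image to 0 and the most to 1."""
    s = np.asarray(raw_scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

# The same three images normalized within their own set...
set_a = normalize_rectilinearity([2.0, 4.0, 6.0])
# ...get different normalized values once a more extreme image joins
# the set, because the maximum (and hence the scaling) changes
set_b = normalize_rectilinearity([2.0, 4.0, 6.0, 10.0])
```

<p>The middle image&#x02019;s normalized score shifts from 0.5 to 0.25 merely because the set&#x02019;s maximum changed, even though the image itself is unchanged; hence normalized values from different stimulus sets are not directly comparable.</p>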
<p>These caveats aside, we believe that our results serve to illustrate some of the possible dangers in attributing the responses of high-level visual areas such as the PPA to low-level biases. Granted, if the PPA and other scene-selective regions are involved in the perceptual analysis of the currently visible scene, it should be possible to explain how selectivity for scenes emerges from low-level representations in early visual areas (Op de Beeck et al., <xref ref-type="bibr" rid="B29">2008</xref>). This observation does not imply, however, that scene region tuning can be reduced to a small set of visual features. There are four reasons for this. First, as discussed, low-level feature biases in the scene regions may occur simply because images with those features are more likely to be interpreted as scenes. Second, it is possible that scene-selectivity reflects tuning to the feature conjunctions that jointly define scenes, rather than a small set of low-level features. Third, representations in the scene regions may be tolerant to identity preserving transformations of the low-level features towards which these regions exhibit some bias (e.g., Marchette et al., <xref ref-type="bibr" rid="B26">2015</xref>). Fourth and finally, the extent to which the scene regions are purely involved in visual perception rather than multimodal processing of scene shape and identity remains unknown (e.g., Wolbers et al., <xref ref-type="bibr" rid="B37">2011</xref>).</p>
<p>Indeed, the issues addressed in the present work are germane to debates regarding the mechanisms of category selectivity in other ventral visual stream regions. For instance, biases toward curvilinear shapes (Wilkinson et al., <xref ref-type="bibr" rid="B35">2000</xref>; Caldara et al., <xref ref-type="bibr" rid="B4">2006</xref>), increasing contrast (Yue et al., <xref ref-type="bibr" rid="B40">2011</xref>), and the upper visual field (Caldara et al., <xref ref-type="bibr" rid="B4">2006</xref>) have been reported in the face-selective fusiform face area (FFA). Such findings have been taken to imply that low-level stimulus features may determine FFA tuning (Caldara et al., <xref ref-type="bibr" rid="B4">2006</xref>; Yue et al., <xref ref-type="bibr" rid="B40">2011</xref>). As with the scene regions, however, caution is warranted here as well. For example, the FFA may be sensitive to curved shapes simply because such stimuli tend to look more face-like. In general, whenever a low-level feature is proposed to explain a brain region&#x02019;s category selectivity, even in part, we suggest a simple test, implemented here and inspired by previous approaches to understanding FFA face-selectivity (Yue et al., <xref ref-type="bibr" rid="B40">2011</xref>): insofar as it is possible, test whether the region responds more to its preferred category than to non-preferred categories when stimuli are matched on that low-level feature.</p>
<p>The results of the present experiments also reinforce the importance of testing whether low-level biases detected in high-level visual areas survive semantic variation in naturalistic image sets. We failed to find a rectilinearity bias in the PPA for natural scene and face images. In general, high-level regions may exhibit biases toward certain low-level image statistics for low-complexity stimuli because such statistics are a defining characteristic of the preferred stimulus category (like sphericity for faces). Preferences observed in low-dimensionality image sets thus cannot be interpreted as sufficient explanations of cortical selectivity in general. For such claims, robust effects must be demonstrated across a wide range of semantic categories using naturalistic stimuli.</p>
<p>Finally, the types of information represented in the ventral visual stream may be sensitive to task demands. In the present work, participants performed a repetition detection task, whereas in Nasr et al. (<xref ref-type="bibr" rid="B28">2014</xref>) participants performed an orthogonal attention task that involved detecting whether a small fixation point changed shape. The task in the present experiments may thus have required greater attention to stimulus identity than the orthogonal perceptual task employed by Nasr et al. (<xref ref-type="bibr" rid="B28">2014</xref>), and the influence of low-level image statistics on activity in late visual areas may be exaggerated by orthogonal attention tasks. Indeed, task demands modulate representations across the ventral visual stream (Egner and Hirsch, <xref ref-type="bibr" rid="B7">2005</xref>; Harel et al., <xref ref-type="bibr" rid="B13">2014</xref>; Erez and Duncan, <xref ref-type="bibr" rid="B10">2015</xref>), and in the PPA specifically, attention has been shown to attenuate the processing of task-irrelevant background scenes (Yi et al., <xref ref-type="bibr" rid="B39">2004</xref>). Tasks that direct attention to visual features unrelated to stimulus identity may therefore inflate the apparent importance of low-level image properties in driving univariate responses.</p>
<p>In sum, we found that rectilinearity is not sufficient to explain the category selectivity of the PPA. This result illustrates that reductive efforts to explain high-level semantic preferences with low-level image statistics, while informative, may not elucidate the ultimate mechanism of category-selectivity in high-level visual areas.</p>
</sec>
<sec id="s5">
<title>Author Contributions</title>
<p>Conceptualization, PBB and JBJ; Methodology, PBB and JBJ; Software, PBB and JBJ; Formal Analysis, PBB and JBJ; Investigation, PBB and JBJ; Writing, PBB, JBJ, and RAE; Visualization, PBB, JBJ, and RAE; Supervision, JBJ and RAE; Funding Acquisition, RAE.</p>
</sec>
<sec id="s6">
<title>Funding</title>
<p>This work was supported by NIH (R01 EY-022350) and NSF (SBE-0541957) Grants to RAE and an NSF Graduate Research Fellowship to JBJ.</p>
</sec>
<sec id="s7">
<title>Conflict of Interest Statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</body>
<back>
<ack>
<p>We thank Jack Ryan for help with data collection. We also thank S. Nasr and C. Echavarria for clarifying several technical details from their earlier article.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="B1"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Aguirre</surname> <given-names>G. K.</given-names></name> <name><surname>Zarahn</surname> <given-names>E.</given-names></name> <name><surname>D&#x02019;Esposito</surname> <given-names>M.</given-names></name></person-group> (<year>1998</year>). <article-title>An area within human ventral cortex sensitive to &#x0201C;building&#x0201D; stimuli evidence and implications</article-title>. <source>Neuron</source> <volume>21</volume>, <fpage>373</fpage>&#x02013;<lpage>383</lpage>. <pub-id pub-id-type="doi">10.1016/s0896-6273(00)80546-2</pub-id><pub-id pub-id-type="pmid">9728918</pub-id></citation></ref>
<ref id="B2"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Arcaro</surname> <given-names>M. J.</given-names></name> <name><surname>McMains</surname> <given-names>S. A.</given-names></name> <name><surname>Singer</surname> <given-names>B. D.</given-names></name> <name><surname>Kastner</surname> <given-names>S.</given-names></name></person-group> (<year>2009</year>). <article-title>Retinotopic organization of human ventral visual cortex</article-title>. <source>J. Neurosci.</source> <volume>29</volume>, <fpage>10638</fpage>&#x02013;<lpage>10652</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.2807-09.2009</pub-id><pub-id pub-id-type="pmid">19710316</pub-id></citation></ref>
<ref id="B3"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bar</surname> <given-names>M.</given-names></name> <name><surname>Aminoff</surname> <given-names>E.</given-names></name></person-group> (<year>2003</year>). <article-title>Cortical analysis of visual context</article-title>. <source>Neuron</source> <volume>38</volume>, <fpage>347</fpage>&#x02013;<lpage>358</lpage>. <pub-id pub-id-type="doi">10.1016/s0896-6273(03)00167-3</pub-id><pub-id pub-id-type="pmid">12718867</pub-id></citation></ref>
<ref id="B4"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Caldara</surname> <given-names>R.</given-names></name> <name><surname>Seghier</surname> <given-names>M. L.</given-names></name> <name><surname>Rossion</surname> <given-names>B.</given-names></name> <name><surname>Lazeyras</surname> <given-names>F.</given-names></name> <name><surname>Michel</surname> <given-names>C.</given-names></name> <name><surname>Hauert</surname> <given-names>C.-A.</given-names></name></person-group> (<year>2006</year>). <article-title>The fusiform face area is tuned for curvilinear patterns with more high-contrasted elements in the upper part</article-title>. <source>Neuroimage</source> <volume>31</volume>, <fpage>313</fpage>&#x02013;<lpage>319</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2005.12.011</pub-id><pub-id pub-id-type="pmid">16460963</pub-id></citation></ref>
<ref id="B5"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dilks</surname> <given-names>D. D.</given-names></name> <name><surname>Julian</surname> <given-names>J. B.</given-names></name> <name><surname>Paunov</surname> <given-names>A. M.</given-names></name> <name><surname>Kanwisher</surname> <given-names>N.</given-names></name></person-group> (<year>2013</year>). <article-title>The occipital place area is causally and selectively involved in scene perception</article-title>. <source>J. Neurosci.</source> <volume>33</volume>, <fpage>1331</fpage>&#x02013;<lpage>1336</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.4081-12.2013</pub-id><pub-id pub-id-type="pmid">23345209</pub-id></citation></ref>
<ref id="B6"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Downing</surname> <given-names>P. E.</given-names></name> <name><surname>Chan</surname> <given-names>A. W.-Y.</given-names></name> <name><surname>Peelen</surname> <given-names>M. V.</given-names></name> <name><surname>Dodds</surname> <given-names>C. M.</given-names></name> <name><surname>Kanwisher</surname> <given-names>N.</given-names></name></person-group> (<year>2006</year>). <article-title>Domain specificity in visual cortex</article-title>. <source>Cereb. Cortex</source> <volume>16</volume>, <fpage>1453</fpage>&#x02013;<lpage>1461</lpage>. <pub-id pub-id-type="doi">10.1093/cercor/bhj086</pub-id><pub-id pub-id-type="pmid">16339084</pub-id></citation></ref>
<ref id="B7"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Egner</surname> <given-names>T.</given-names></name> <name><surname>Hirsch</surname> <given-names>J.</given-names></name></person-group> (<year>2005</year>). <article-title>Cognitive control mechanisms resolve conflict through cortical amplification of task-relevant information</article-title>. <source>Nat. Neurosci.</source> <volume>8</volume>, <fpage>1784</fpage>&#x02013;<lpage>1790</lpage>. <pub-id pub-id-type="doi">10.1038/nn1594</pub-id><pub-id pub-id-type="pmid">16286928</pub-id></citation></ref>
<ref id="B8"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Epstein</surname> <given-names>R.</given-names></name></person-group> (<year>2005</year>). <article-title>The cortical basis of visual scene processing</article-title>. <source>Vis. Cogn.</source> <volume>12</volume>, <fpage>954</fpage>&#x02013;<lpage>978</lpage>. <pub-id pub-id-type="doi">10.1080/13506280444000607</pub-id></citation></ref>
<ref id="B9"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Epstein</surname> <given-names>R.</given-names></name> <name><surname>Kanwisher</surname> <given-names>N.</given-names></name></person-group> (<year>1998</year>). <article-title>A cortical representation of the local visual environment</article-title>. <source>Nature</source> <volume>392</volume>, <fpage>598</fpage>&#x02013;<lpage>601</lpage>. <pub-id pub-id-type="doi">10.1038/33402</pub-id><pub-id pub-id-type="pmid">9560155</pub-id></citation></ref>
<ref id="B10"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Erez</surname> <given-names>Y.</given-names></name> <name><surname>Duncan</surname> <given-names>J.</given-names></name></person-group> (<year>2015</year>). <article-title>Discrimination of visual categories based on behavioral relevance in widespread regions of frontoparietal cortex</article-title>. <source>J. Neurosci.</source> <volume>35</volume>, <fpage>12383</fpage>&#x02013;<lpage>12393</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.1134-15.2015</pub-id><pub-id pub-id-type="pmid">26354907</pub-id></citation></ref>
<ref id="B11"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fischl</surname> <given-names>B.</given-names></name> <name><surname>Sereno</surname> <given-names>M. I.</given-names></name> <name><surname>Dale</surname> <given-names>A. M.</given-names></name></person-group> (<year>1999</year>). <article-title>Cortical surface-based analysis: II: inflation, flattening and a surface-based coordinate system</article-title>. <source>Neuroimage</source> <volume>9</volume>, <fpage>195</fpage>&#x02013;<lpage>207</lpage>. <pub-id pub-id-type="doi">10.1006/nimg.1998.0396</pub-id><pub-id pub-id-type="pmid">9931269</pub-id></citation></ref>
<ref id="B12"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Golomb</surname> <given-names>J. D.</given-names></name> <name><surname>Kanwisher</surname> <given-names>N.</given-names></name></person-group> (<year>2011</year>). <article-title>Higher level visual cortex represents retinotopic, not spatiotopic, object location</article-title>. <source>Cereb. Cortex</source> <volume>22</volume>, <fpage>2794</fpage>&#x02013;<lpage>2810</lpage>. <pub-id pub-id-type="doi">10.1093/cercor/bhr357</pub-id><pub-id pub-id-type="pmid">22190434</pub-id></citation></ref>
<ref id="B13"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Harel</surname> <given-names>A.</given-names></name> <name><surname>Kravitz</surname> <given-names>D. J.</given-names></name> <name><surname>Baker</surname> <given-names>C. I.</given-names></name></person-group> (<year>2014</year>). <article-title>Task context impacts visual object processing differentially across the cortex</article-title>. <source>Proc. Natl. Acad. Sci. U S A</source> <volume>111</volume>, <fpage>E962</fpage>&#x02013;<lpage>E971</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.1312567111</pub-id><pub-id pub-id-type="pmid">24567402</pub-id></citation></ref>
<ref id="B14"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hasson</surname> <given-names>U.</given-names></name> <name><surname>Levy</surname> <given-names>I.</given-names></name> <name><surname>Behrmann</surname> <given-names>M.</given-names></name> <name><surname>Hendler</surname> <given-names>T.</given-names></name> <name><surname>Malach</surname> <given-names>R.</given-names></name></person-group> (<year>2002</year>). <article-title>Eccentricity bias as an organizing principle for human high-order object areas</article-title>. <source>Neuron</source> <volume>34</volume>, <fpage>479</fpage>&#x02013;<lpage>490</lpage>. <pub-id pub-id-type="doi">10.1016/s0896-6273(02)00662-1</pub-id><pub-id pub-id-type="pmid">11988177</pub-id></citation></ref>
<ref id="B15"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Haxby</surname> <given-names>J. V.</given-names></name> <name><surname>Gobbini</surname> <given-names>M. I.</given-names></name> <name><surname>Furey</surname> <given-names>M. L.</given-names></name> <name><surname>Ishai</surname> <given-names>A.</given-names></name> <name><surname>Schouten</surname> <given-names>J. L.</given-names></name> <name><surname>Pietrini</surname> <given-names>P.</given-names></name></person-group> (<year>2001</year>). <article-title>Distributed and overlapping representations of faces and objects in ventral temporal cortex</article-title>. <source>Science</source> <volume>293</volume>, <fpage>2425</fpage>&#x02013;<lpage>2430</lpage>. <pub-id pub-id-type="doi">10.1126/science.1063736</pub-id><pub-id pub-id-type="pmid">11577229</pub-id></citation></ref>
<ref id="B16"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Janzen</surname> <given-names>G.</given-names></name> <name><surname>van Turennout</surname> <given-names>M.</given-names></name></person-group> (<year>2004</year>). <article-title>Selective neural representation of objects relevant for navigation</article-title>. <source>Nat. Neurosci.</source> <volume>7</volume>, <fpage>673</fpage>&#x02013;<lpage>677</lpage>. <pub-id pub-id-type="doi">10.1038/nn1257</pub-id><pub-id pub-id-type="pmid">15146191</pub-id></citation></ref>
<ref id="B17"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jenkinson</surname> <given-names>M.</given-names></name> <name><surname>Bannister</surname> <given-names>P.</given-names></name> <name><surname>Brady</surname> <given-names>M.</given-names></name> <name><surname>Smith</surname> <given-names>S.</given-names></name></person-group> (<year>2002</year>). <article-title>Improved optimization for the robust and accurate linear registration and motion correction of brain images</article-title>. <source>Neuroimage</source> <volume>17</volume>, <fpage>825</fpage>&#x02013;<lpage>841</lpage>. <pub-id pub-id-type="doi">10.1006/nimg.2002.1132</pub-id><pub-id pub-id-type="pmid">12377157</pub-id></citation></ref>
<ref id="B18"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Julian</surname> <given-names>J. B.</given-names></name> <name><surname>Fedorenko</surname> <given-names>E.</given-names></name> <name><surname>Webster</surname> <given-names>J.</given-names></name> <name><surname>Kanwisher</surname> <given-names>N.</given-names></name></person-group> (<year>2012</year>). <article-title>An algorithmic method for functionally defining regions of interest in the ventral visual pathway</article-title>. <source>Neuroimage</source> <volume>60</volume>, <fpage>2357</fpage>&#x02013;<lpage>2364</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2012.02.055</pub-id><pub-id pub-id-type="pmid">22398396</pub-id></citation></ref>
<ref id="B19"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kauffmann</surname> <given-names>L.</given-names></name> <name><surname>Ramano&#x000EB;l</surname> <given-names>S.</given-names></name> <name><surname>Guyader</surname> <given-names>N.</given-names></name> <name><surname>Chauvin</surname> <given-names>A.</given-names></name> <name><surname>Peyrin</surname> <given-names>C.</given-names></name></person-group> (<year>2015</year>). <article-title>Spatial frequency processing in scene-selective cortical regions</article-title>. <source>Neuroimage</source> <volume>112</volume>, <fpage>86</fpage>&#x02013;<lpage>95</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2015.02.058</pub-id><pub-id pub-id-type="pmid">25754068</pub-id></citation></ref>
<ref id="B20"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Konkle</surname> <given-names>T.</given-names></name> <name><surname>Oliva</surname> <given-names>A.</given-names></name></person-group> (<year>2012</year>). <article-title>A real-world size organization of object responses in occipitotemporal cortex</article-title>. <source>Neuron</source> <volume>74</volume>, <fpage>1114</fpage>&#x02013;<lpage>1124</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2012.04.036</pub-id><pub-id pub-id-type="pmid">22726840</pub-id></citation></ref>
<ref id="B21"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Kr&#x000FC;ger</surname> <given-names>N.</given-names></name> <name><surname>Peters</surname> <given-names>G.</given-names></name> <name><surname>von der Malsburg</surname> <given-names>C.</given-names></name></person-group> (<year>1996</year>). <article-title>Object recognition with a sparse and autonomously learned representation based on banana wavelets</article-title>. <source>Technical Report IR-INI 96&#x02013;11</source>. <publisher-name>Institut f&#x000FC;r Neuroinformatik, Ruhr-Universit&#x000E4;t Bochum</publisher-name>.</citation></ref>
<ref id="B22"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lescroart</surname> <given-names>M. D.</given-names></name> <name><surname>Stansbury</surname> <given-names>D. E.</given-names></name> <name><surname>Gallant</surname> <given-names>J. L.</given-names></name></person-group> (<year>2015</year>). <article-title>Fourier power, subjective distance and object categories all provide plausible models of BOLD responses in scene-selective visual areas</article-title>. <source>Front. Comput. Neurosci.</source> <volume>9</volume>:<fpage>135</fpage>. <pub-id pub-id-type="doi">10.3389/fncom.2015.00135</pub-id><pub-id pub-id-type="pmid">26594164</pub-id></citation></ref>
<ref id="B23"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Levy</surname> <given-names>I.</given-names></name> <name><surname>Hasson</surname> <given-names>U.</given-names></name> <name><surname>Avidan</surname> <given-names>G.</given-names></name> <name><surname>Hendler</surname> <given-names>T.</given-names></name> <name><surname>Malach</surname> <given-names>R.</given-names></name></person-group> (<year>2001</year>). <article-title>Center-periphery organization of human object areas</article-title>. <source>Nat. Neurosci.</source> <volume>4</volume>, <fpage>533</fpage>&#x02013;<lpage>539</lpage>. <pub-id pub-id-type="doi">10.1038/87490</pub-id><pub-id pub-id-type="pmid">11319563</pub-id></citation></ref>
<ref id="B24"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Levy</surname> <given-names>I.</given-names></name> <name><surname>Hasson</surname> <given-names>U.</given-names></name> <name><surname>Harel</surname> <given-names>M.</given-names></name> <name><surname>Malach</surname> <given-names>R.</given-names></name></person-group> (<year>2004</year>). <article-title>Functional analysis of the periphery effect in human building related areas</article-title>. <source>Hum. Brain Mapp.</source> <volume>22</volume>, <fpage>15</fpage>&#x02013;<lpage>26</lpage>. <pub-id pub-id-type="doi">10.1002/hbm.20010</pub-id><pub-id pub-id-type="pmid">15083523</pub-id></citation></ref>
<ref id="B25"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>MacEvoy</surname> <given-names>S. P.</given-names></name> <name><surname>Epstein</surname> <given-names>R. A.</given-names></name></person-group> (<year>2007</year>). <article-title>Position selectivity in scene- and object-responsive occipitotemporal regions</article-title>. <source>J. Neurophysiol.</source> <volume>98</volume>, <fpage>2089</fpage>&#x02013;<lpage>2098</lpage>. <pub-id pub-id-type="doi">10.1152/jn.00438.2007</pub-id><pub-id pub-id-type="pmid">17652421</pub-id></citation></ref>
<ref id="B26"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Marchette</surname> <given-names>S. A.</given-names></name> <name><surname>Vass</surname> <given-names>L. K.</given-names></name> <name><surname>Ryan</surname> <given-names>J.</given-names></name> <name><surname>Epstein</surname> <given-names>R. A.</given-names></name></person-group> (<year>2015</year>). <article-title>Outside looking in: landmark generalization in the human navigational system</article-title>. <source>J. Neurosci.</source> <volume>35</volume>, <fpage>14896</fpage>&#x02013;<lpage>14908</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.2270-15.2015</pub-id><pub-id pub-id-type="pmid">26538658</pub-id></citation></ref>
<ref id="B28"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nasr</surname> <given-names>S.</given-names></name> <name><surname>Echavarria</surname> <given-names>C. E.</given-names></name> <name><surname>Tootell</surname> <given-names>R. B. H.</given-names></name></person-group> (<year>2014</year>). <article-title>Thinking outside the box: rectilinear shapes selectively activate scene-selective cortex</article-title>. <source>J. Neurosci.</source> <volume>34</volume>, <fpage>6721</fpage>&#x02013;<lpage>6735</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.4802-13.2014</pub-id><pub-id pub-id-type="pmid">24828628</pub-id></citation></ref>
<ref id="B27"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nasr</surname> <given-names>S.</given-names></name> <name><surname>Tootell</surname> <given-names>R. B. H.</given-names></name></person-group> (<year>2012</year>). <article-title>A cardinal orientation bias in scene-selective visual cortex</article-title>. <source>J. Neurosci.</source> <volume>32</volume>, <fpage>14921</fpage>&#x02013;<lpage>14926</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.2036-12.2012</pub-id><pub-id pub-id-type="pmid">23100415</pub-id></citation></ref>
<ref id="B29"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Op de Beeck</surname> <given-names>H. P.</given-names></name> <name><surname>Haushofer</surname> <given-names>J.</given-names></name> <name><surname>Kanwisher</surname> <given-names>N. G.</given-names></name></person-group> (<year>2008</year>). <article-title>Interpreting fMRI data: maps, modules and dimensions</article-title>. <source>Nat. Rev. Neurosci.</source> <volume>9</volume>, <fpage>123</fpage>&#x02013;<lpage>135</lpage>. <pub-id pub-id-type="doi">10.1038/nrn2314</pub-id><pub-id pub-id-type="pmid">18200027</pub-id></citation></ref>
<ref id="B30"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rajimehr</surname> <given-names>R.</given-names></name> <name><surname>Devaney</surname> <given-names>K. J.</given-names></name> <name><surname>Bilenko</surname> <given-names>N. Y.</given-names></name> <name><surname>Young</surname> <given-names>J. C.</given-names></name> <name><surname>Tootell</surname> <given-names>R. B. H.</given-names></name></person-group> (<year>2011</year>). <article-title>The &#x0201C;parahippocampal place area&#x0201D; responds preferentially to high spatial frequencies in humans and monkeys</article-title>. <source>PLoS Biol.</source> <volume>9</volume>:<fpage>e1000608</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pbio.1000608</pub-id><pub-id pub-id-type="pmid">21483719</pub-id></citation></ref>
<ref id="B31"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schinazi</surname> <given-names>V. R.</given-names></name> <name><surname>Epstein</surname> <given-names>R. A.</given-names></name></person-group> (<year>2010</year>). <article-title>Neural correlates of real-world route learning</article-title>. <source>Neuroimage</source> <volume>53</volume>, <fpage>725</fpage>&#x02013;<lpage>735</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2010.06.065</pub-id><pub-id pub-id-type="pmid">20603219</pub-id></citation></ref>
<ref id="B32"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Silson</surname> <given-names>E. H.</given-names></name> <name><surname>Chan</surname> <given-names>A. W.-Y.</given-names></name> <name><surname>Reynolds</surname> <given-names>R. C.</given-names></name> <name><surname>Kravitz</surname> <given-names>D. J.</given-names></name> <name><surname>Baker</surname> <given-names>C. I.</given-names></name></person-group> (<year>2015</year>). <article-title>A retinotopic basis for the division of high-level scene processing between lateral and ventral human occipitotemporal cortex</article-title>. <source>J. Neurosci.</source> <volume>35</volume>, <fpage>11921</fpage>&#x02013;<lpage>11935</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.0137-15.2015</pub-id><pub-id pub-id-type="pmid">26311774</pub-id></citation></ref>
<ref id="B33"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Walther</surname> <given-names>D. B.</given-names></name> <name><surname>Shen</surname> <given-names>D.</given-names></name></person-group> (<year>2014</year>). <article-title>Nonaccidental properties underlie human categorization of complex natural scenes</article-title>. <source>Psychol. Sci.</source> <volume>25</volume>, <fpage>851</fpage>&#x02013;<lpage>860</lpage>. <pub-id pub-id-type="doi">10.1177/0956797613512662</pub-id><pub-id pub-id-type="pmid">24474725</pub-id></citation></ref>
<ref id="B34"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Watson</surname> <given-names>D. M.</given-names></name> <name><surname>Hymers</surname> <given-names>M.</given-names></name> <name><surname>Hartley</surname> <given-names>T.</given-names></name> <name><surname>Andrews</surname> <given-names>T. J.</given-names></name></person-group> (<year>2016</year>). <article-title>Patterns of neural response in scene-selective regions of the human brain are affected by low-level manipulations of spatial frequency</article-title>. <source>Neuroimage</source> <volume>124</volume>, <fpage>107</fpage>&#x02013;<lpage>117</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2015.08.058</pub-id><pub-id pub-id-type="pmid">26341028</pub-id></citation></ref>
<ref id="B35"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wilkinson</surname> <given-names>F.</given-names></name> <name><surname>James</surname> <given-names>T. W.</given-names></name> <name><surname>Wilson</surname> <given-names>H. R.</given-names></name> <name><surname>Gati</surname> <given-names>J. S.</given-names></name> <name><surname>Menon</surname> <given-names>R. S.</given-names></name> <name><surname>Goodale</surname> <given-names>M. A.</given-names></name></person-group> (<year>2000</year>). <article-title>An fMRI study of the selective activation of human extrastriate form vision areas by radial and concentric gratings</article-title>. <source>Curr. Biol.</source> <volume>10</volume>, <fpage>1455</fpage>&#x02013;<lpage>1458</lpage>. <pub-id pub-id-type="doi">10.1016/s0960-9822(00)00800-9</pub-id><pub-id pub-id-type="pmid">11102809</pub-id></citation></ref>
<ref id="B36"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Willenbockel</surname> <given-names>V.</given-names></name> <name><surname>Sadr</surname> <given-names>J.</given-names></name> <name><surname>Fiset</surname> <given-names>D.</given-names></name> <name><surname>Horne</surname> <given-names>G. O.</given-names></name> <name><surname>Gosselin</surname> <given-names>F.</given-names></name> <name><surname>Tanaka</surname> <given-names>J. W.</given-names></name></person-group> (<year>2010</year>). <article-title>Controlling low-level image properties: the SHINE toolbox</article-title>. <source>Behav. Res. Methods</source> <volume>42</volume>, <fpage>671</fpage>&#x02013;<lpage>684</lpage>. <pub-id pub-id-type="doi">10.3758/BRM.42.3.671</pub-id><pub-id pub-id-type="pmid">20805589</pub-id></citation></ref>
<ref id="B37"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wolbers</surname> <given-names>T.</given-names></name> <name><surname>Klatzky</surname> <given-names>R. L.</given-names></name> <name><surname>Loomis</surname> <given-names>J. M.</given-names></name> <name><surname>Wutte</surname> <given-names>M. G.</given-names></name> <name><surname>Giudice</surname> <given-names>N. A.</given-names></name></person-group> (<year>2011</year>). <article-title>Modality-independent coding of spatial layout in the human brain</article-title>. <source>Curr. Biol.</source> <volume>21</volume>, <fpage>984</fpage>&#x02013;<lpage>989</lpage>. <pub-id pub-id-type="doi">10.1016/j.cub.2011.04.038</pub-id><pub-id pub-id-type="pmid">21620708</pub-id></citation></ref>
<ref id="B38"><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Xiao</surname> <given-names>J.</given-names></name> <name><surname>Hays</surname> <given-names>J.</given-names></name> <name><surname>Ehinger</surname> <given-names>K. A.</given-names></name> <name><surname>Oliva</surname> <given-names>A.</given-names></name> <name><surname>Torralba</surname> <given-names>A.</given-names></name></person-group> (<year>2010</year>). &#x0201C;<article-title>SUN database: large-scale scene recognition from abbey to zoo</article-title>,&#x0201D; in <source>2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</source>, (<conf-loc>San Francisco, CA</conf-loc>: <conf-name>IEEE</conf-name>), <fpage>3485</fpage>&#x02013;<lpage>3492</lpage>.</citation></ref>
<ref id="B39"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yi</surname> <given-names>D.-J.</given-names></name> <name><surname>Woodman</surname> <given-names>G. F.</given-names></name> <name><surname>Widders</surname> <given-names>D.</given-names></name> <name><surname>Marois</surname> <given-names>R.</given-names></name> <name><surname>Chun</surname> <given-names>M. M.</given-names></name></person-group> (<year>2004</year>). <article-title>Neural fate of ignored stimuli: dissociable effects of perceptual and working memory load</article-title>. <source>Nat. Neurosci.</source> <volume>7</volume>, <fpage>992</fpage>&#x02013;<lpage>996</lpage>. <pub-id pub-id-type="doi">10.1038/nn1294</pub-id><pub-id pub-id-type="pmid">15286791</pub-id></citation></ref>
<ref id="B40"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yue</surname> <given-names>X.</given-names></name> <name><surname>Cassidy</surname> <given-names>B. S.</given-names></name> <name><surname>Devaney</surname> <given-names>K. J.</given-names></name> <name><surname>Holt</surname> <given-names>D. J.</given-names></name> <name><surname>Tootell</surname> <given-names>R. B. H.</given-names></name></person-group> (<year>2011</year>). <article-title>Lower-level stimulus features strongly influence responses in the fusiform face area</article-title>. <source>Cereb. Cortex</source> <volume>21</volume>, <fpage>35</fpage>&#x02013;<lpage>47</lpage>. <pub-id pub-id-type="doi">10.1093/cercor/bhq050</pub-id><pub-id pub-id-type="pmid">20375074</pub-id></citation></ref>
<ref id="B41"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zeidman</surname> <given-names>P.</given-names></name> <name><surname>Mullally</surname> <given-names>S. L.</given-names></name> <name><surname>Schwarzkopf</surname> <given-names>D. S.</given-names></name> <name><surname>Maguire</surname> <given-names>E. A.</given-names></name></person-group> (<year>2012</year>). <article-title>Exploring the parahippocampal cortex response to high and low spatial frequency spaces</article-title>. <source>Neuroreport</source> <volume>23</volume>, <fpage>503</fpage>&#x02013;<lpage>507</lpage>. <pub-id pub-id-type="doi">10.1097/WNR.0b013e328353766a</pub-id><pub-id pub-id-type="pmid">22473293</pub-id></citation></ref>
</ref-list>
<fn-group>
<fn id="fn0001"><p><sup>1</sup><ext-link ext-link-type="uri" xlink:href="http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/">http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/</ext-link></p></fn>
<fn id="fn0002"><p><sup>2</sup><ext-link ext-link-type="uri" xlink:href="http://surfer.nmr.mgh.harvard.edu/">http://surfer.nmr.mgh.harvard.edu/</ext-link></p></fn>
</fn-group>
</back>
</article>