<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Syst. Neurosci.</journal-id>
<journal-title>Frontiers in Systems Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Syst. Neurosci.</abbrev-journal-title>
<issn pub-type="epub">1662-5137</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fnsys.2016.00078</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Contextual Interactions in Grating Plaid Configurations Are Explained by Natural Image Statistics and Neural Modeling</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Ernst</surname> <given-names>Udo A.</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="author-notes" rid="fn001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/431/overview"/></contrib>
<contrib contrib-type="author">
<name><surname>Schiffer</surname> <given-names>Alina</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/362309/overview"/></contrib>
<contrib contrib-type="author">
<name><surname>Persike</surname> <given-names>Malte</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/228001/overview"/></contrib>
<contrib contrib-type="author">
<name><surname>Meinhardt</surname> <given-names>G&#x000FC;nter</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/130097/overview"/></contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Computational Neuroscience Lab, Department of Physics, Institute for Theoretical Physics, University of Bremen</institution> <country>Bremen, Germany</country></aff>
<aff id="aff2"><sup>2</sup><institution>Methods Section, Department of Psychology, Johannes Gutenberg University Mainz</institution> <country>Mainz, Germany</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Jochem W. Rieger, University of Oldenburg, Germany</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Gregor Rainer, University of Fribourg, Switzerland; Sacha Jennifer Van Albada, Forschungszentrum J&#x000FC;lich, Germany</p></fn>
<fn fn-type="corresp" id="fn001"><p>&#x0002A;Correspondence: Udo A. Ernst <email>udo&#x00040;neuro.uni-bremen.de</email></p></fn>
</author-notes>
<pub-date pub-type="epub">
<day>04</day>
<month>10</month>
<year>2016</year>
</pub-date>
<pub-date pub-type="collection">
<year>2016</year>
</pub-date>
<volume>10</volume>
<elocation-id>78</elocation-id>
<history>
<date date-type="received">
<day>01</day>
<month>02</month>
<year>2016</year>
</date>
<date date-type="accepted">
<day>16</day>
<month>09</month>
<year>2016</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2016 Ernst, Schiffer, Persike and Meinhardt.</copyright-statement>
<copyright-year>2016</copyright-year>
<copyright-holder>Ernst, Schiffer, Persike and Meinhardt</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract><p>Processing natural scenes requires the visual system to integrate local features into global object descriptions. To achieve coherent representations, the human brain uses statistical dependencies to guide the weighting of local feature conjunctions. Pairwise interactions among feature detectors in early visual areas may form the early substrate of these local feature bindings. To investigate local interaction structures in visual cortex, we combined psychophysical experiments with computational modeling and natural scene analysis. We first measured contrast thresholds for 2 &#x000D7; 2 grating patch arrangements (plaids), which differed in spatial frequency composition (low, high, or mixed), number of grating patch co-alignments (0, 1, or 2), and inter-patch distances (1&#x000B0; and 2&#x000B0; of visual angle). Contrast thresholds for the different configurations were compared to the prediction of probability summation (PS) among detector families tuned to the four retinal positions. For 1&#x000B0; distance the thresholds for all configurations were larger than predicted by PS, indicating inhibitory interactions. For 2&#x000B0; distance, thresholds were significantly lower compared to PS when the plaids were homogeneous in spatial frequency and orientation, but not when spatial frequencies were mixed or there was at least one misalignment. Next, we constructed a neural population model with horizontal laminar structure, which reproduced the detection thresholds after adaptation of connection weights. Consistent with prior work, contextual interactions comprised medium-range inhibition and long-range, orientation-specific excitation. However, the inclusion of orientation-specific, inhibitory interactions between populations with different spatial frequency preferences was crucial for explaining detection thresholds. Finally, for all plaid configurations we computed their likelihood of occurrence in natural images. 
The likelihoods turned out to be inversely related to the detection thresholds obtained at larger inter-patch distances. However, likelihoods were almost independent of inter-patch distance, implying that natural image statistics could not explain the crowding-like results at short distances. This failure of natural image statistics to resolve the patch distance modulation of plaid visibility remains a challenge to the approach.</p></abstract>
<kwd-group>
<kwd>natural image statistics</kwd>
<kwd>network model</kwd>
<kwd>contextual interactions</kwd>
<kwd>visual perception</kwd>
<kwd>feature integration</kwd>
<kwd>visual cortex</kwd>
</kwd-group>
<contract-num rid="cn001">01GQ1106</contract-num>
<contract-sponsor id="cn001">Bundesministerium f&#x000FC;r Bildung und Forschung<named-content content-type="fundref-id">10.13039/501100002347</named-content></contract-sponsor>
<contract-sponsor id="cn002">Volkswagen Foundation<named-content content-type="fundref-id">10.13039/501100001663</named-content></contract-sponsor>
<contract-sponsor id="cn003">Deutsche Forschungsgemeinschaft<named-content content-type="fundref-id">10.13039/501100001659</named-content></contract-sponsor>
<counts>
<fig-count count="9"/>
<table-count count="4"/>
<equation-count count="4"/>
<ref-count count="60"/>
<page-count count="16"/>
<word-count count="11459"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>1. Introduction</title>
<p>Visual scenes are composed of many objects which usually extend over large regions in the visual field. However, since visual information is represented in the early visual system as a collection of isolated local features, one of the most challenging tasks for our brain is to integrate this information into coherent percepts. This task is performed by a hierarchical and recurrent network, which builds increasingly complex representations of visual scenes as information propagates to downstream visual areas (Lamme and Roelfsema, <xref ref-type="bibr" rid="B25">2000</xref>; Roelfsema et al., <xref ref-type="bibr" rid="B45">2002</xref>). The corresponding computations are highly non-linear (Adini et al., <xref ref-type="bibr" rid="B1">1997</xref>), and even the first stages of this network are still not well understood.</p>
<p>For more than a century, psychophysical studies have strived to identify principles of feature integration: starting from the first attempt at quantifying the laws of feature integration by the Gestalt psychologists (Metzger, <xref ref-type="bibr" rid="B36">2006</xref>), a large body of facts has been assembled which describes elementary feature integration processes in the early visual system (Ehrenstein et al., <xref ref-type="bibr" rid="B7">2003</xref>). Most of this work uses oriented and localized gratings like Gabor patches since these stimuli are known to drive neurons in primary visual cortex well (Hubel and Wiesel, <xref ref-type="bibr" rid="B16">1962</xref>). These patches are typically set into context with one or more flanking patches in various spatial configurations. Prominent findings using these stimuli include threshold modulation in collinear configurations, exhibiting suppression at small element distances and facilitation at larger element distances (Polat and Sagi, <xref ref-type="bibr" rid="B43">1993</xref>). The range of these effects typically scales almost linearly with the spatial frequency of the patches (Polat and Sagi, <xref ref-type="bibr" rid="B44">1994</xref>). In addition, there is a strong dependence on stimulus contrast, with facilitation prevailing at low contrasts and suppression observed at high contrasts (Mizobe et al., <xref ref-type="bibr" rid="B37">2001</xref>). Beyond these effects, element density also plays an important role. The closer single elements in a scene are to each other, the more difficult it becomes to perceive a target element, which is typically placed in the center of such an arrangement. This effect is commonly referred to as <italic>crowding</italic> (Whitney and Levi, <xref ref-type="bibr" rid="B55">2011</xref>).</p>
<p>The findings on the behavioral level have been complemented by anatomical and physiological studies. In primary visual cortex, neurons are tuned to the orientation of stimuli inside a small region in the visual field, which is termed the &#x0201C;classical receptive field&#x0201D; (shorthand: cRF). Neurons with different preferred orientations between 0 and 180&#x000B0; are organized into orientation hypercolumns (Hubel and Wiesel, <xref ref-type="bibr" rid="B16">1962</xref>); additionally, each hypercolumn separates into populations with low or high spatial frequency preference (Shmuel and Grinvald, <xref ref-type="bibr" rid="B49">1996</xref>). Contextual interactions in psychophysical studies have been related to the so-called &#x0201C;non-classical&#x0201D; receptive fields (shorthand: ncRFs): modulations of neural responses by stimuli positioned outside the cRF, in addition to a stimulus inside the cRF (Haider et al., <xref ref-type="bibr" rid="B13">2010</xref>; Ernst, <xref ref-type="bibr" rid="B8">2013</xref>). These physiological effects turned out to be (partly) compatible with the behavioral evidence: for example, in collinear configurations, suppression and facilitation depend on grating contrast (Mizobe et al., <xref ref-type="bibr" rid="B37">2001</xref>). Also, interactions between two oriented line segments at different visual field positions (Kapadia et al., <xref ref-type="bibr" rid="B22">2000</xref>) resemble interaction patterns (&#x0201C;association fields&#x0201D;) proposed to explain contour integration (Field et al., <xref ref-type="bibr" rid="B9">1993</xref>; Kovacs, <xref ref-type="bibr" rid="B23">1996</xref>). 
For mediating these effects, anatomical studies identified connection structures putatively responsible for ncRFs, such as orientation-specific long-range excitatory horizontal interactions (Bosking et al., <xref ref-type="bibr" rid="B3">1997</xref>) for enhancing collinear configurations, or short-range feedback projections from higher visual areas targeting inhibitory circuits (Johnson and Burkhalter, <xref ref-type="bibr" rid="B21">1996</xref>; Lamme et al., <xref ref-type="bibr" rid="B26">1998</xref>; Hup&#x000E9; et al., <xref ref-type="bibr" rid="B18">2001</xref>; Callaway, <xref ref-type="bibr" rid="B5">2004</xref>) for surround suppression. In addition, there is a dense and not yet fully understood network within a cortical column. In particular, any projections entering a cortical column may target inhibitory or excitatory populations, thus being able to exert a potentially positive or negative modulation.</p>
<p>While models constructed from anatomical and physiological knowledge were reasonably successful in explaining a range of extra-classical receptive field properties (for an overview, see Ernst, <xref ref-type="bibr" rid="B8">2013</xref>), a different idea is to understand feature integration processes from first principles. This includes deriving stylized facts about ncRFs from postulating that neurons in visual cortex perform probabilistic inference on visual scenes (Lochmann et al., <xref ref-type="bibr" rid="B31">2012</xref>), or from requiring visual cortex to construct a sparse representation of natural stimuli (Zhu and Rozell, <xref ref-type="bibr" rid="B60">2013</xref>). It has also been shown that natural image statistics explains fundamental laws in feature integration, such as the law of good continuation by demonstrating a close match between contour statistics and the shape of the association field used by the visual system for contour integration (Geisler et al., <xref ref-type="bibr" rid="B11">2001</xref>; Geisler and Perry, <xref ref-type="bibr" rid="B10">2009</xref>).</p>
<p>Taken together, the results from the past 20 years begin to form a coherent account of feature integration. However, since experimental work often uses structurally simple stimuli, we still do not understand enough about how more complex stimulus configurations, or stimuli involving two or more elementary features, are represented and processed. To fill this gap, we present a study which analyzes feature integration with a combination of methods (experiment, modeling, image statistics) spanning a range of observation levels (psychophysics, neural network simulations, external world), thus aiming at a unifying perspective. In particular, we focus on the following questions: How do different feature dimensions interact, and how are they processed by the visual system? What kind of neural interactions would be required to explain the corresponding effects? What does behavior tell us about computations performed by the visual system, and are the observed effects linked to the higher-order statistics of the &#x0201C;typical&#x0201D; stimuli processed by the visual system?</p>
<p>For this purpose, we extended the standard experimental paradigm of using visual stimuli consisting of strings of oriented grating patches to patches arranged in more complex, two-by-two element plaids. This enables us to investigate the interplay of interactions along two orthogonal axes in visual space. We introduced spatial frequency as a second feature dimension besides orientation and first quantified human detection thresholds for different plaid configurations with varying inter-patch distances. Next we reproduced human behavior in a simplistic neural network and identified interaction structures which are capable of explaining our experimental data. Finally, we compared the statistics of the plaid configurations in natural images to human behavior and tested the hypothesis that visual stimuli occurring more frequently are detected more easily. Our results turned out to be compatible with prior work and in addition reveal three major findings going beyond well-established facts:
<list list-type="bullet">
<list-item><p>Detection thresholds are perfectly explained by pairwise couplings in a structurally simple model; thus no higher-order interaction schemes are required.</p></list-item>
<list-item><p>Interactions between feature detectors with different preferred spatial frequencies must be both suppressive, and orientation-specific.</p></list-item>
<list-item><p>For larger inter-patch distances, detection performance is inversely related to plaid likelihood (ratio) in natural images.</p></list-item>
</list></p>
<p>By obtaining these results in a common framework encompassing experiment, modeling, and image statistics, our study directly addresses two main goals of this special issue in Frontiers, namely that &#x0201C;brain activity is predicted from [e.g.,] stimuli (encoding),&#x0201D; and that &#x0201C;subjective/cognitive states are predicted from brain activity (decoding).&#x0201D;</p>
</sec>
<sec sec-type="materials and methods" id="s2">
<title>2. Materials and methods</title>
<sec>
<title>2.1. Experiments</title>
<sec>
<title>2.1.1. Outline</title>
<p>We created 2D spatial arrangements of four grating patches (&#x0201C;plaids&#x0201D;) to study the impact of spatial distance, spatial frequency homogeneity, and orientation alignment on the detectability of the patch arrangement. Two spatial frequencies and two orientations were used for the grating patches. In preparatory measurements the carrier frequencies were determined such that all four patches were equally detectable when presented individually on the spatial position grids. Contrast thresholds were measured for plaids with 0, 1, or 2 orientation alignments in spatial frequency homogeneous and inhomogeneous configurations, and with near and far distance between patches. To compare with a benchmark, the contrast thresholds were tested against the prediction derived from probability summation among the four locations of a plaid arrangement. This test was used to indicate whether the specific parameter combination of a plaid yielded inhibition, facilitation, or independent feature processing.</p>
</sec>
<sec>
<title>2.1.2. Stimuli</title>
<p>The grating patches were circular sinusoids with an effective diameter of 1&#x000B0;, achieved by multiplying the sinusoid with a radially symmetric logistic envelope. The envelope was defined as</p>
<disp-formula id="E1"><label>(1)</label><mml:math id="M1"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:mi>a</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x0002B;</mml:mo><mml:mo>exp</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>b</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>r</mml:mi><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x0002B;</mml:mo><mml:mo>exp</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mo>-</mml:mo><mml:mi>b</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>r</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mfrac></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>with <inline-formula><mml:math id="M45"><mml:mi>r</mml:mi><mml:mo>=</mml:mo><mml:msqrt><mml:mrow><mml:msup><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>&#x0002B;</mml:mo><mml:msup><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:msqrt></mml:math></inline-formula> (in degrees of visual angle) and <italic>b</italic> &#x0003D; ln(128)/0.05&#x000B0; (in 1/&#x000B0; of visual angle). The choice of the parameter <italic>b</italic> made the envelope rise within the interval (&#x02212;<italic>r</italic><sub>0</sub> &#x02212; 0.05&#x000B0;, &#x02212;<italic>r</italic><sub>0</sub> &#43; 0.05&#x000B0;) and fall within (<italic>r</italic><sub>0</sub> &#x02212; 0.05&#x000B0;, <italic>r</italic><sub>0</sub> &#43; 0.05&#x000B0;). Two orientations (&#x02212;45&#x000B0;, 45&#x000B0;) and two carrier spatial frequencies (<italic>f</italic><sub>low</sub>, <italic>f</italic><sub>high</sub>) were used, with <italic>f</italic><sub>low</sub>, <italic>f</italic><sub>high</sub> being determined in preparatory measurements (see below). The stimuli are illustrated in Figure <xref ref-type="fig" rid="F1">1A</xref> (labeled P1&#x02013;P4).</p>
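<p>The envelope can be evaluated numerically. The following sketch is not the authors' code: it assumes the envelope is the product of two logistic functions (consistent with the rise and fall intervals described above), and assumes <italic>r</italic><sub>0</sub> = 0.5&#x000B0; from the 1&#x000B0; effective patch diameter.</p>

```python
import numpy as np

# Hedged sketch of the radial logistic envelope of Equation (1).
# b = ln(128)/0.05 per degree is taken from the text; r0 = 0.5 deg is
# an assumption derived from the 1 deg effective patch diameter.
b = np.log(128) / 0.05
r0 = 0.5

def envelope(r):
    """a(r) = 1 / ((1 + exp(b*(r - r0))) * (1 + exp(-b*(r + r0))))."""
    return 1.0 / ((1.0 + np.exp(b * (r - r0))) * (1.0 + np.exp(-b * (r + r0))))

print(envelope(0.0))   # close to 1 at the patch center
print(envelope(1.0))   # close to 0 well outside r0
```

<p>At <italic>r</italic> = <italic>r</italic><sub>0</sub> the envelope has dropped to one half, and the transition from 1 to 0 is essentially complete within &#x000B1;0.05&#x000B0; around <italic>r</italic><sub>0</sub>, as the choice of <italic>b</italic> intends.</p>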
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p><bold>Construction of plaids from grating patches</bold>. The four grating patches <bold>(A)</bold> used for 4&#x02013;plaid configurations in small and large inter-patch distance <bold>(B)</bold>.</p></caption>
<graphic xlink:href="fnsys-10-00078-g0001.tif"/>
</fig>
<p>Grating patches were located on the edge points of the cardinal axes of a spatial position grid to define square arrangements. Two inner radii were used to define squares with near (1&#x000B0; inner radius, side length <inline-formula><mml:math id="M5"><mml:msup><mml:mrow><mml:msqrt><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msqrt></mml:mrow><mml:mrow><mml:mo>&#x000B0;</mml:mo></mml:mrow></mml:msup></mml:math></inline-formula>) and far (2&#x000B0; inner radius, side length <inline-formula><mml:math id="M6"><mml:msup><mml:mrow><mml:msqrt><mml:mrow><mml:mn>8</mml:mn></mml:mrow></mml:msqrt></mml:mrow><mml:mrow><mml:mo>&#x000B0;</mml:mo></mml:mrow></mml:msup></mml:math></inline-formula>) patch distance (see Figure <xref ref-type="fig" rid="F1">1B</xref>).</p>
</sec>
<sec>
<title>2.1.3. Stimulus plaid configurations and experimental design</title>
<p>In order to create different patch configurations we first formed pairs of the 4 primary stimuli P1&#x02013;P4 (i.e., left or right oblique gratings with either high or low spatial frequency). Allowing replication of the same element, <inline-formula><mml:math id="M7"><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mtable><mml:mtr><mml:mtd><mml:mn>4</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>2</mml:mn></mml:mtd></mml:mtr></mml:mtable></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x000A0;</mml:mo><mml:mo>+</mml:mo><mml:mo>&#x000A0;</mml:mo><mml:mn>4</mml:mn><mml:mo>=</mml:mo><mml:mn>10</mml:mn></mml:mrow></mml:math></inline-formula> pairs can be formed. The pairs were then doubled to create 4-tuples containing each element twice. Such sets can be allocated to 4 locations in 4!/(2!2!) &#x0003D; 6 different ways. However, as illustrated in Figure <xref ref-type="fig" rid="F2">2A</xref>, the 6 spatial arrangements fall into 3 base configurations, each one having a mirrored equivalent (see mirror axes in Figure <xref ref-type="fig" rid="F2">2A</xref>). For pairings of the same stimulus (i.e., P1-P1, P2-P2, P3-P3, P4-P4) the 3 base configurations are not distinguished. This means there are <inline-formula><mml:math id="M8"><mml:mrow><mml:mn>3</mml:mn><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mtable><mml:mtr><mml:mtd><mml:mn>4</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>2</mml:mn></mml:mtd></mml:mtr></mml:mtable></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x000A0;</mml:mo><mml:mo>+</mml:mo><mml:mo>&#x000A0;</mml:mo><mml:mn>4</mml:mn><mml:mo>=</mml:mo><mml:mn>22</mml:mn></mml:mrow></mml:math></inline-formula> distinct spatial arrangements. Following this rule of combining the 10 pairs with the 3 spatial configurations, plaid configurations with 0, 1, or 2 orientation alignments of the patches were formed in subsets containing both spatial frequencies, only the low, or only the high spatial frequency. 
This means that alignment (0, 1, 2) and spatial frequency homogeneity (mixed, <italic>f</italic><sub>low</sub>, <italic>f</italic><sub>high</sub>) are the dimensions of an orthogonal experimental plan for generating plaid configurations from 4 patches with either right or left oblique orientation and either high or low spatial frequency (see Figure <xref ref-type="fig" rid="F2">2B</xref>).</p>
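<p>The counting argument above can be checked mechanically. The following sketch (our illustration, not part of the study) enumerates the 10 unordered pairs and the 22 distinct spatial arrangements.</p>

```python
from itertools import combinations_with_replacement

# Hedged sketch of the counting argument in the text: 10 unordered pairs
# of the 4 primary patches P1..P4 (replication allowed), combined with
# 3 base spatial arrangements per pair of distinct elements.
patches = ["P1", "P2", "P3", "P4"]
pairs = list(combinations_with_replacement(patches, 2))
print(len(pairs))  # C(4,2) + 4 = 10

# Pairs of two identical patches yield a single arrangement; pairs of
# distinct patches yield 3 base configurations (mirror images identified).
n_arrangements = sum(1 if a == b else 3 for a, b in pairs)
print(n_arrangements)  # 3 * C(4,2) + 4 = 22
```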
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p><bold>Categories of plaids</bold>. Different grating patch configurations obtained from allocating two pairs of patches to four locations <bold>(A)</bold> and alignment variation in frequency&#x02013;homogeneous and inhomogeneous patch compositions <bold>(B)</bold>.</p></caption>
<graphic xlink:href="fnsys-10-00078-g0002.tif"/>
</fig>
</sec>
<sec>
<title>2.1.4. Subjects</title>
<p>Two male students, FA (22 years) and KF (25 years), served as subjects. Both were highly experienced psychophysical observers and familiar with staircase procedures for contrast detection threshold measurement. Both were paid for participation as part of their student aid contract. Prior to the experiment, participants were informed about the course and expected duration of the experiment. They received a general description of the purpose of the experiment, but not of specific outcome expectations. All participants signed a written consent form according to the World Medical Association Helsinki Declaration and were informed that they could withdraw from the experiment at any time without penalty. At the time of data collection, no local ethics committee was in place. Non-invasive experimental studies without deception did not require a formal ethics review, provided the experiment complied with the relevant institutional and national regulations and legislation, which the authors carefully ascertained. After completing the experiment, a summary of their individual data was shown to the observers, and the pattern of results was explained within the scope of the purpose of the study.</p>
</sec>
<sec>
<title>2.1.5. Contrast threshold measurement procedure</title>
<p>Contrast thresholds were measured with an adapted version of the method of limits (see Meinhardt, <xref ref-type="bibr" rid="B34">1999</xref>). The method was constructed as a semi-adaptive procedure that adjusted starting values from the results of former measurements within a set of successive runs, but kept the advantage of multiple independent threshold determinations, like the original method of limits. A temporal staircase with a range of 512 equidistant contrast steps, each with a duration of 35 ms, was used. By this procedure we estimated a contrast threshold value in the <italic>i</italic>-th trial, &#x00398;<sub><italic>i</italic></sub>, from two up-runs and two down-runs. This was done as follows: the initial contrast was set to the starting value. For the first measurement, this was a value well above threshold; for the subsequent measurements, it was the last measured threshold contrast plus 25% of contrast. Then the first down-run started: the contrast was decremented using the temporal staircase until the subject signaled, by pressing a button on a small response keyboard, that the pattern was no longer visible. Then the contrast was diminished by 25% and incremented using the temporal staircase until the subject signaled that the pattern was just distinguishable from the background. The average of both threshold contrast values &#x00398;<sub>0, <italic>up</italic></sub> and &#x00398;<sub>0, <italic>down</italic></sub> was then taken; after adding 25% of contrast, this value was assigned as the next starting contrast and a second down-up run started. The contrast threshold &#x00398;<sub>0</sub> was then computed as the mean of all four threshold determinations. Eight replications of this threshold measurement procedure were carried out for each of the 22 spatial plaid arrangements. All threshold measurements for the plaid patterns were randomly interleaved. 
The subjects were instructed to base their judgments on any local deviations of contrast they perceived.</p>
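<p>The semi-adaptive procedure can be summarized in a short sketch. This is our illustration only, with a deterministic hypothetical observer (the stimulus counts as visible whenever the contrast exceeds a fixed true threshold); the true threshold, starting value, and step size below are illustrative assumptions, not the study's parameters.</p>

```python
# Hedged sketch of the semi-adaptive method of limits described above.
def measure_threshold(true_theta, start, step=0.0005, n_runs=2):
    estimates = []
    for _ in range(n_runs):
        c = start
        while c > true_theta:            # down-run: decrement until invisible
            c -= step
        theta_down = c
        c *= 0.75                        # diminish contrast by 25%
        while c < true_theta:            # up-run: increment until just visible
            c += step
        theta_up = c
        estimates += [theta_down, theta_up]
        # next starting value: mean of both estimates plus 25% of contrast
        start = 1.25 * (theta_down + theta_up) / 2.0
    # final threshold: mean of all four determinations (two down, two up)
    return sum(estimates) / len(estimates)

print(measure_threshold(0.01, 0.05))
```

<p>With this deterministic observer the estimate converges to the true threshold within roughly one staircase step; the real procedure additionally averages eight such replications per plaid configuration.</p>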
</sec>
<sec>
<title>2.1.6. Apparatus</title>
<p>Patterns were programmed using the VSG2/3 stimulus generator (Cambridge Research Systems) and displayed on an EIZO FlexScan 6600 21&#x0201D; grayscale monitor with gamma-correction. The linearity between the digital gray values of the VSG2/3 and luminance <italic>L</italic> in cd/m<sup>2</sup>, measured by an LMT 1003 photometer, was checked before each experimental session. Grating patterns were displayed using a linear gray staircase with 256 entries chosen from a palette of 4096 possible gray values, the medium step (128) always referring to gray value no. 2048. Contrast variation was realized by scaling the step size of the staircase. Hence, independent of contrast, a grating was always displayed with a grayscale resolution of 256 steps. We used Maxwell contrast as the contrast metric for the grating plaids, &#x00398; &#x0003D; (<italic>L</italic><sub>max</sub> &#x02212; <italic>L</italic><sub>min</sub>)/(2<italic>L</italic><sub>0</sub>). The contrast value describes the contrast of each single patch, while all 4 patches of a plaid had the same contrast. The luminance of the grating patches was modulated around the mean luminance <italic>L</italic><sub>0</sub> of the screen, which means that a grating of 0 contrast had mean luminance. The refresh rate of the monitor was 85 Hz at a horizontal frequency of 67.8 kHz; the pixel resolution was set to 1024 &#x000D7; 768 pixels. The room was darkened so that the ambient illumination matched the illumination on the screen. The mean luminance of the screen was set to <italic>L</italic><sub>0</sub> &#x0003D; 50 cd/m<sup>2</sup>. Patterns were viewed monocularly at a distance of 75 cm. The subjects used a chin rest and an ocular. The ocular limited the visible area of the screen to a circular field of 8.5&#x000B0; in diameter. A small black dot in the center of the screen was used for fixation. The subjects signaled the presence or absence of the stimulus by pressing a button on an external response box.</p>
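<p>As a minimal illustration of the contrast metric above (the luminance values are hypothetical, only <italic>L</italic><sub>0</sub> = 50 cd/m<sup>2</sup> is taken from the text):</p>

```python
# Hedged sketch of the Maxwell contrast metric used in the study:
# Theta = (L_max - L_min) / (2 * L_0), with L_0 the mean screen luminance.
def maxwell_contrast(l_max, l_min, l0=50.0):
    return (l_max - l_min) / (2.0 * l0)

# A grating modulated +/- 1 cd/m^2 around L_0 = 50 cd/m^2:
print(maxwell_contrast(51.0, 49.0))  # 0.02
```

<p>Since the patches are modulated symmetrically around <italic>L</italic><sub>0</sub>, this metric coincides with Michelson contrast whenever <italic>L</italic><sub>max</sub> &#x0002B; <italic>L</italic><sub>min</sub> &#x0003D; 2<italic>L</italic><sub>0</sub>.</p>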
</sec>
<sec>
<title>2.1.7. Preparation, preliminary measurements, and estimation of threshold reduction</title>
<p>In preliminary measurements, the threshold contrasts for a single grating patch, presented at any of the four possible patch positions on the spatial grid, were determined for both the small and the large patch distance. Measurements for the two patch distances were arranged in separate experimental blocks. As for the plaid patterns, the adapted version of the method of limits was used as the threshold measurement procedure (see above). In order to avoid spatial uncertainty effects (Yager et al., <xref ref-type="bibr" rid="B59">1984</xref>; H&#x000FC;bner, <xref ref-type="bibr" rid="B17">1996</xref>), the fixation point turned into a small arrow that pointed to the grid position where the patch was subsequently presented. The grid position for stimulus presentation changed randomly from trial to trial. Seven carrier spatial frequencies, ranging from 1.5 to 7 cycles per degree (cpd), were tested. Sixteen replications of the threshold measurement procedure were carried out for each carrier spatial frequency. The threshold contrast as a function of spatial frequency was fitted with a 3rd-order polynomial, and two spatial frequencies with equal contrast threshold below and above the minimum were derived from the fit. These frequencies were <italic>f</italic><sub>low</sub> &#x0003D; 2 cpd, <italic>f</italic><sub>high</sub> &#x0003D; 4 cpd for subject FA and <italic>f</italic><sub>low</sub> &#x0003D; 2 cpd, <italic>f</italic><sub>high</sub> &#x0003D; 5 cpd for subject KF. For both subjects, the contrast threshold functions for the small and the large patch distance were shifted against each other on the contrast scale, but their principal course across spatial frequency was the same. The subject-specific selections for <italic>f</italic><sub>low</sub> and <italic>f</italic><sub>high</sub> were used for constructing the plaid stimuli in the main experiment.</p>
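<p>The selection of two equally detectable carrier frequencies can be sketched as follows. The threshold values below are invented for illustration; only the 1.5&#x02013;7 cpd range and the 3rd-order polynomial fit are taken from the text.</p>

```python
import numpy as np

# Hedged sketch: fit threshold vs. spatial frequency with a 3rd-order
# polynomial and read off one frequency below and one above the fitted
# minimum at a common target threshold. Data are illustrative.
freqs = np.array([1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])               # cpd
thresholds = np.array([0.016, 0.012, 0.010, 0.012, 0.016, 0.022, 0.030])

poly = np.poly1d(np.polyfit(freqs, thresholds, 3))   # 3rd-order fit

grid = np.linspace(freqs[0], freqs[-1], 2001)
vals = poly(grid)
i_min = int(np.argmin(vals))
target = 0.012                                        # common threshold level
# closest match to the target level on each side of the minimum
f_low = grid[:i_min][np.argmin(np.abs(vals[:i_min] - target))]
f_high = grid[i_min:][np.argmin(np.abs(vals[i_min:] - target))]
print(f_low, f_high)
```

<p>Because the fitted function is U-shaped over the tested range, the target level is crossed exactly once on each side of the minimum, yielding one low and one high equally detectable frequency.</p>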
<p>The average threshold contrast (across trials and subjects) for the two equally detectable spatial frequency patches was &#x00398;<sub>0</sub> &#x0003D; 0.0111 for the small distance grid (1&#x000B0;) and &#x00398;<sub>0</sub> &#x0003D; 0.0166 for the large distance grid (2&#x000B0;).</p>
<p>In order to judge whether a given plaid configuration caused inhibitory or excitatory patch interactions across the four grid positions, the expected threshold contrast under the assumption of spatial independence was derived from the threshold contrast for a single grating patch. Assuming probability summation (for detailed derivations see <xref ref-type="supplementary-material" rid="SM1">Supplementary Material</xref>) yields estimated threshold reduction factors <inline-formula><mml:math id="M9"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>&#x00398;</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> between 0.673 and 0.707. Multiplying &#x00398;<sub>0</sub> by <inline-formula><mml:math id="M10"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>&#x00398;</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> yields the threshold contrast prediction for probability summation on the non-normalized scale.</p>
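The quoted reduction factors are consistent with probability summation over four independent detectors with a Weibull psychometric slope &#x003B2; in [3.5, 4] (Section 3.1.1 gives the reduction factor as <italic>k</italic><sup>&#x02212;1/&#x003B2;</sup>; taking <italic>k</italic> = 4 patches is an assumption consistent with the quoted numbers):

```python
def threshold_reduction(n_patches, beta):
    """Threshold reduction factor under probability summation over
    n_patches independent detectors with Weibull slope beta
    (assumption: k = n_patches in the formula k ** (-1 / beta))."""
    return n_patches ** (-1.0 / beta)

for beta in (3.5, 4.0):
    print(round(threshold_reduction(4, beta), 3))  # prints 0.673 then 0.707
```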
</sec>
<sec>
<title>2.1.8. Data analysis</title>
<p>The threshold contrast mean across all trials for the same condition was used as the estimate of the true threshold contrast for each plaid configuration. The threshold contrast means of the two subjects were again averaged to result in the contrast threshold estimate for the <italic>c</italic>-th plaid configuration, <inline-formula><mml:math id="M11"><mml:msub><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mi>&#x00398;</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>c</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>. Since there were <italic>n</italic> &#x0003D; 8 replications of contrast threshold measurement for each plaid configuration, <inline-formula><mml:math id="M12"><mml:msub><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mi>&#x00398;</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>c</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> rested on <italic>M</italic> &#x0003D; 2<italic>n</italic> replications. 
Standard errors <inline-formula><mml:math id="M13"><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mi>e</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mi>&#x00398;</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>c</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>&#x003C3;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x00398;</mml:mi></mml:mrow><mml:mrow><mml:mi>c</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>/</mml:mo><mml:msqrt><mml:mrow><mml:mi>M</mml:mi></mml:mrow></mml:msqrt></mml:math></inline-formula>, were based on pooled variance estimates from the data of the two subjects, <inline-formula><mml:math id="M14"><mml:msup><mml:mrow><mml:mi>&#x003C3;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x00398;</mml:mi></mml:mrow><mml:mrow><mml:mi>c</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>n</mml:mi><mml:msubsup><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mi>c</mml:mi><mml:mo>,</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>&#x0002B;</mml:mo><mml:mi>n</mml:mi><mml:msubsup><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mi>c</mml:mi><mml:mo>,</mml:mo><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>/</mml:mo><mml:mrow><mml:mo 
stretchy="false">(</mml:mo><mml:mrow><mml:mn>2</mml:mn><mml:mi>n</mml:mi><mml:mo>-</mml:mo><mml:mn>2</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula>. Confidence intervals for <inline-formula><mml:math id="M15"><mml:msub><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mi>&#x00398;</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>c</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> were calculated assuming a Student <italic>t</italic>-distribution for the means, <inline-formula><mml:math id="M16"><mml:mi>C</mml:mi><mml:mi>I</mml:mi><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mi>&#x00398;</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>c</mml:mi></mml:mrow></mml:msub><mml:mo>&#x000B1;</mml:mo><mml:msub><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>975</mml:mn><mml:mo>;</mml:mo><mml:mi>M</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mi>e</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mi>&#x00398;</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>c</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula>. 
The critical test for the <italic>c</italic>-th plaid configuration was to decide whether the contrast interval predicted by probability summation among the 4 grating patches fell above, below, or within the confidence interval of the threshold contrast mean, <inline-formula><mml:math id="M17"><mml:msub><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mi>&#x00398;</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>c</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>.</p>
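The pooled-variance confidence interval described above can be sketched as follows (a minimal transcription; variable names are illustrative, and the biased per-subject variances are used so that <italic>n s</italic><sup>2</sup> equals the sum of squared deviations):

```python
import numpy as np
from scipy import stats

def threshold_ci(thresholds_s1, thresholds_s2, alpha=0.05):
    """95% CI for the across-subject mean threshold of one plaid
    configuration, pooling the variances of the two subjects."""
    t1, t2 = np.asarray(thresholds_s1), np.asarray(thresholds_s2)
    n = len(t1)                      # replications per subject (here n = 8)
    M = 2 * n                        # total replications
    mean = (t1.mean() + t2.mean()) / 2.0
    # pooled variance (n * s1^2 + n * s2^2) / (2n - 2); np.var is biased
    # (ddof=0), so n * var equals the sum of squared deviations
    pooled_var = (n * t1.var() + n * t2.var()) / (2 * n - 2)
    se = np.sqrt(pooled_var) / np.sqrt(M)
    t_crit = stats.t.ppf(1 - alpha / 2, M - 1)   # Student-t quantile
    return mean - t_crit * se, mean + t_crit * se
```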
</sec>
</sec>
<sec>
<title>2.2. Cortex model</title>
<sec>
<title>2.2.1. Outline</title>
<p>We studied two variants of a recurrently coupled neuronal network model representing populations in early visual cortex engaged in processing plaid stimuli. Recurrent weights were adapted such that network activations for different plaid configurations most closely predicted human detection thresholds. The structure of the model was kept as simple as possible, with a minimum number of free parameters, while still being able to reproduce all experimental findings.</p>
</sec>
<sec>
<title>2.2.2. Single units</title>
<p>Each unit <italic>i</italic> in our model network represents a population of neurons and is described in terms of its mean activity <italic>A</italic><sub><italic>i</italic></sub>(<italic>t</italic>). The activation changes depending on the current feedforward input <inline-formula><mml:math id="M40"><mml:mrow><mml:msubsup><mml:mi>J</mml:mi><mml:mi>i</mml:mi><mml:mrow><mml:mtext>ffw</mml:mtext></mml:mrow></mml:msubsup></mml:mrow></mml:math></inline-formula>(<italic>t</italic>) and the recurrent feedback <inline-formula><mml:math id="M41"><mml:mrow><mml:msubsup><mml:mi>J</mml:mi><mml:mi>i</mml:mi><mml:mrow><mml:mtext>rec</mml:mtext></mml:mrow></mml:msubsup></mml:mrow></mml:math></inline-formula>(<italic>t</italic>), and follows a time coarse-grained Wilson-Cowan dynamics (Wilson and Cowan, <xref ref-type="bibr" rid="B56">1972</xref>)</p>
<disp-formula id="E2"><label>(2)</label><mml:math id="M2"><mml:mi>&#x003C4;</mml:mi><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:msub><mml:mi>A</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>A</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:mi>g</mml:mi><mml:mtext>&#x02009;</mml:mtext><mml:mo stretchy='false'>[</mml:mo><mml:msubsup><mml:mi>J</mml:mi><mml:mi>i</mml:mi><mml:mrow><mml:mtext>rec</mml:mtext></mml:mrow></mml:msubsup><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:msubsup><mml:mi>J</mml:mi><mml:mi>i</mml:mi><mml:mrow><mml:mtext>ffw</mml:mtext></mml:mrow></mml:msubsup><mml:mo stretchy='false'>(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x02212;</mml:mo><mml:msup><mml:mi>J</mml:mi><mml:mrow><mml:mtext>thr</mml:mtext></mml:mrow></mml:msup><mml:mo stretchy='false'>]</mml:mo><mml:mtext>&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;</mml:mtext><mml:mo>.</mml:mo></mml:math></disp-formula>
<p>Here, &#x003C4; is a time constant (w.l.o.g. set to 1), and <italic>g</italic>[&#x02026;] denotes a rectifying gain function which we chose to be <italic>g</italic>[<italic>J</italic>]: &#x0003D; <italic>J</italic><sup>max</sup>(1 &#x02212; ((<italic>J</italic><sup>max</sup> &#x02212; 1)/<italic>J</italic><sup>max</sup>)<sup><italic>J</italic></sup>) for <italic>J</italic> &#x0003E; 0 and 0 otherwise. Choosing <italic>J</italic><sup>max</sup> &#x0003D; 10, the gain function is approximately linear with slope 1 around <italic>J</italic> &#x0003D; 1 and saturates at <italic>J</italic><sup>max</sup> for <italic>J</italic> &#x02192; &#x0221E; (inset of Figure <xref ref-type="fig" rid="F3">3</xref>). For simplicity, we model <italic>A</italic> as a dimensionless quantity which can, for comparison to a particular experimental situation, be scaled to fit the corresponding neurophysiological quantity such as the population firing rate.</p>
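A minimal sketch of the gain function and of a forward-Euler integration of Equation (2) (the step size and input values are illustrative, not part of the original model specification):

```python
import numpy as np

J_MAX = 10.0

def gain(J):
    """Rectifying gain g[J] = J_max * (1 - ((J_max - 1)/J_max)**J) for J > 0,
    and 0 otherwise; g(1) = 1 and g saturates at J_max for large J."""
    J = np.asarray(J, dtype=float)
    return np.where(J > 0, J_MAX * (1.0 - ((J_MAX - 1.0) / J_MAX) ** J), 0.0)

def euler_step(A, J_rec, J_ffw, J_thr, dt=0.01, tau=1.0):
    """One forward-Euler step of tau dA/dt = -A + g[J_rec + J_ffw - J_thr]."""
    return A + dt / tau * (-A + gain(J_rec + J_ffw - J_thr))
```

With constant input, iterating `euler_step` relaxes the activity toward the fixed point <italic>A</italic> = <italic>g</italic>[<italic>J</italic><sup>rec</sup> + <italic>J</italic><sup>ffw</sup> &#x02212; <italic>J</italic><sup>thr</sup>].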
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p><bold>Network model</bold>. Feedforward input from the visual stimulus (bottom) activates neural columns (marked in yellow) with matching orientation and spatial frequency (SF) preference in each of the four hypercolumns (vertical structures). Horizontal interactions (in red) provide recurrent feedback between different (hyper-)columns in the network. Note that for clarity, we only show connections originating from the top column in the rearmost hypercolumn, targeting columns with the same orientation and SF preference in the neighboring three hypercolumns (i.e., the set of interactions shown in the top left subpanel of <bold>Figures 6</bold>, <bold>7</bold>). The inset graph shows the neural gain function <italic>g</italic>[<italic>J</italic>] mapping a synaptic input <italic>J</italic> to a neural population response.</p></caption>
<graphic xlink:href="fnsys-10-00078-g0003.tif"/>
</fig>
</sec>
<sec>
<title>2.2.3. Full network</title>
<p>The network consists of <italic>i</italic> &#x0003D; 1, &#x02026;, <italic>N</italic> &#x0003D; 16 units, comprising four &#x0201C;hypercolumns&#x0201D; of four units each (Figure <xref ref-type="fig" rid="F3">3</xref>, vertical structures). The four units in each hypercolumn represent populations with different preferred orientations and preferred spatial frequencies, but with the same (classical) spatial receptive field, centered on one of the four positions within a plaid configuration. For a specific plaid configuration, exactly one unit in each hypercolumn becomes activated with an input of <italic>J</italic><sup>ffw</sup> &#x0003E; 0. All other units receive a feedforward input of <italic>J</italic><sup>ffw</sup> &#x0003D; 0, leading to zero activation. This is a simplification of the fact that neurons whose receptive field properties deviate from, or are orthogonal to, those of the stimulus are only weakly activated or remain silent, respectively.</p>
<p>Recurrent input <inline-formula><mml:math id="M35"><mml:mrow><mml:msubsup><mml:mi>J</mml:mi><mml:mi>i</mml:mi><mml:mrow><mml:mtext>rec</mml:mtext></mml:mrow></mml:msubsup></mml:mrow></mml:math></inline-formula> provides feedback from other units via a coupling matrix <italic>W</italic> &#x0003D; {<italic>w</italic><sub><italic>ik</italic></sub>}, <inline-formula><mml:math id="M19"><mml:mrow><mml:msubsup><mml:mi>J</mml:mi><mml:mi>i</mml:mi><mml:mrow><mml:mtext>rec</mml:mtext></mml:mrow></mml:msubsup></mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:munder class="msub"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:munder><mml:msub><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>A</mml:mi></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula> (w.l.o.g. we assume self-interactions to be zero). For finding suitable weights, we used two complementary approaches. These have different advantages and disadvantages as explained below.</p>
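Building on Equation (2), the recurrent input of unit <italic>i</italic> is a weighted sum over the other units' activities. A sketch of the full <italic>N</italic> = 16 network relaxed to its steady state (the coupling matrix below is a small random placeholder, not the fitted weights of model A or B):

```python
import numpy as np

N = 16
rng = np.random.default_rng(0)
W = 0.05 * rng.standard_normal((N, N))   # placeholder coupling matrix
np.fill_diagonal(W, 0.0)                 # no self-interactions (w_ii = 0)

def simulate(J_ffw, J_thr=0.0, dt=0.01, steps=20000, J_max=10.0, tau=1.0):
    """Relax the network to its steady state A^inf for a fixed input."""
    A = np.zeros(N)
    for _ in range(steps):
        J = W @ A + J_ffw - J_thr        # recurrent plus feedforward input
        g = np.where(J > 0, J_max * (1 - ((J_max - 1) / J_max) ** J), 0.0)
        A += dt / tau * (-A + g)
    return A

# one driven unit per hypercolumn: a "plaid" activates four units
J_ffw = np.zeros(N)
J_ffw[[0, 4, 8, 12]] = 2.0
A_inf = simulate(J_ffw)
```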
</sec>
<sec>
<title>2.2.4. No prior assumptions on interactions</title>
<p>In our first approach (from here on termed &#x0201C;model A&#x0201D;), we decided to ignore prior knowledge about the nature of the interactions from psychophysical or physiological evidence. This (essentially) assumption-free approach allows the discovery of functional principles beyond the current state of anatomical and functional knowledge. The high number of degrees of freedom (<italic>df</italic>&#x00027;s) can be drastically reduced by imposing symmetry constraints on the weights (for details see <xref ref-type="supplementary-material" rid="SM1">Supplementary Material</xref>), leading to 30 free parameters.</p>
</sec>
<sec>
<title>2.2.5. Postulating interactions from prior knowledge</title>
<p>In our second approach (&#x0201C;model B&#x0201D;), we computed weights by postulating three types of (parametrized) interactions motivated by psychophysical or physiological evidence (Polat and Sagi, <xref ref-type="bibr" rid="B43">1993</xref>, <xref ref-type="bibr" rid="B44">1994</xref>; Kapadia et al., <xref ref-type="bibr" rid="B22">2000</xref>). Although more restrictive in &#x0201C;weight space,&#x0201D; this approach yields a parametrization of interactions that can be extended to other stimuli, such as more complex plaid configurations going beyond (2 &#x000D7; 2) patches.</p>
<p>In particular, we hypothesized that three types of interactions play a role for explaining contextual integration:
<list list-type="bullet">
<list-item><p><italic>w</italic><sup>iso</sup>: Orientation-<italic>unspecific</italic>, isotropic <italic>inhibitory</italic> interactions</p></list-item>
<list-item><p><italic>w</italic><sup>ori</sup>: Orientation-<italic>specific, excitatory</italic> interactions (between <italic>similar</italic> spatial frequencies)</p></list-item>
<list-item><p><italic>w</italic><sup>frq</sup>: Orientation-<italic>specific, inhibitory</italic> interactions (between <italic>different</italic> spatial frequencies).</p></list-item>
</list></p>
<p>All of these types have a typical strength and range of interaction, described by Gaussian functions with amplitude, mean, and variance as free parameters, giving a total of 9 <italic>df</italic> compared to the 30 <italic>df</italic> of model A. The total interaction strength <italic>w</italic><sub><italic>ik</italic></sub> between units <italic>i</italic> and <italic>k</italic> is then obtained by adding the three contributions, <inline-formula><mml:math id="M36"><mml:mrow><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo>&#x02212;</mml:mo><mml:msubsup><mml:mi>w</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>k</mml:mi></mml:mrow><mml:mrow><mml:mtext>iso</mml:mtext></mml:mrow></mml:msubsup><mml:mo>+</mml:mo><mml:msubsup><mml:mi>w</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>k</mml:mi></mml:mrow><mml:mrow><mml:mtext>ori</mml:mtext></mml:mrow></mml:msubsup><mml:mo>&#x02212;</mml:mo><mml:msubsup><mml:mi>w</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>k</mml:mi></mml:mrow><mml:mrow><mml:mtext>frq</mml:mtext></mml:mrow></mml:msubsup></mml:mrow></mml:math></inline-formula>.</p>
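A sketch of the model-B weight composition (the Gaussian amplitude/mean/width values and the distance-based parametrization below are illustrative placeholders, not the fitted parameters):

```python
import numpy as np

def gaussian(d, amp, mu, sigma):
    """Interaction strength as a Gaussian function of inter-patch distance d."""
    return amp * np.exp(-((d - mu) ** 2) / (2.0 * sigma ** 2))

def weight(d, same_orientation, same_frequency, p_iso, p_ori, p_frq):
    """Total weight w_ik = -w_iso + w_ori - w_frq; each p_* = (amp, mu, sigma)."""
    w = -gaussian(d, *p_iso)              # orientation-unspecific, isotropic inhibition
    if same_orientation and same_frequency:
        w += gaussian(d, *p_ori)          # orientation-specific excitation (similar SFs)
    if same_orientation and not same_frequency:
        w -= gaussian(d, *p_frq)          # orientation-specific inhibition (different SFs)
    return w

# illustrative (amp, mean, sigma) triples for the three interaction types
P_ISO, P_ORI, P_FRQ = (0.2, 0.0, 1.0), (0.5, 1.0, 0.5), (0.3, 1.0, 0.5)
w_aligned = weight(1.0, True, True, P_ISO, P_ORI, P_FRQ)   # net excitatory
```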
</sec>
<sec>
<title>2.2.6. Linking hypothesis</title>
<p>For linking simulation to experiment, we needed a suitable mapping from model activities to psychophysical detection thresholds &#x00398;. In general, we assumed that the <italic>higher</italic> the model activity for a <italic>fixed</italic> input, the lower the corresponding detection threshold. Equivalently, the required input becomes <italic>lower</italic> in order to achieve a <italic>fixed</italic> activation level. The reciprocal dependency between activity <italic>A</italic> and the average human threshold <inline-formula><mml:math id="M20"><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mi>&#x00398;</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:math></inline-formula> does not need to be linear, but can be convex or concave. Setting <italic>A</italic><sup>min</sup>: &#x0003D; 0, introducing two additional free parameters <italic>A</italic><sup>max</sup> and &#x003BA;, and abbreviating the total steady-state activity as <inline-formula><mml:math id="M21"><mml:msup><mml:mrow><mml:mi>A</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x0221E;</mml:mi></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:munder class="msub"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:munder><mml:msub><mml:mrow><mml:mi>A</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x02192;</mml:mo><mml:mi>&#x0221E;</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula>, we defined a linking hypothesis by</p>
<disp-formula id="E3"><label>(3)</label><mml:math id="M22"><mml:mrow><mml:msub><mml:mover accent='true'><mml:mi>&#x00398;</mml:mi><mml:mo>&#x0005E;</mml:mo></mml:mover><mml:mi>c</mml:mi></mml:msub><mml:mo>:</mml:mo><mml:mo>=</mml:mo><mml:mtext>&#x000A0;</mml:mtext><mml:msup><mml:mi>&#x00398;</mml:mi><mml:mrow><mml:mtext>min</mml:mtext></mml:mrow></mml:msup><mml:mo>+</mml:mo><mml:mo stretchy='false'>(</mml:mo><mml:msup><mml:mi>&#x00398;</mml:mi><mml:mrow><mml:mtext>max</mml:mtext></mml:mrow></mml:msup><mml:mo>&#x02212;</mml:mo><mml:msup><mml:mi>&#x00398;</mml:mi><mml:mrow><mml:mi>min</mml:mi></mml:mrow></mml:msup><mml:mo stretchy='false'>)</mml:mo><mml:msup><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:msubsup><mml:mi>A</mml:mi><mml:mi>c</mml:mi><mml:mi>&#x0221E;</mml:mi></mml:msubsup><mml:mo>&#x02212;</mml:mo><mml:msup><mml:mi>A</mml:mi><mml:mrow><mml:mi>min</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mrow><mml:msup><mml:mi>A</mml:mi><mml:mrow><mml:mi>max</mml:mi></mml:mrow></mml:msup><mml:mo>&#x02212;</mml:mo><mml:msup><mml:mi>A</mml:mi><mml:mrow><mml:mi>min</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:mfrac></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mi>&#x003BA;</mml:mi></mml:msup><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula>
<p>Here, &#x00398;<sup>min</sup> and &#x00398;<sup>max</sup> are the minimum and maximum thresholds measured in the experiment, respectively, while <italic>c</italic> indexes the plaid configuration for which the corresponding threshold <inline-formula><mml:math id="M23"><mml:msub><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mi>&#x00398;</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>c</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> was measured. By Equation (3), <inline-formula><mml:math id="M24"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>&#x00398;</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>c</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> defines the model estimate for <inline-formula><mml:math id="M25"><mml:msub><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mi>&#x00398;</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>c</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>.</p>
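Equation (3) can be transcribed directly (a sketch; <italic>A</italic><sup>max</sup> and &#x003BA; are the free parameters, and the numeric values in the usage line are illustrative, not fitted):

```python
def predicted_threshold(A_inf, theta_min, theta_max, A_max, kappa, A_min=0.0):
    """Eq. (3): theta_hat = theta_min
    + (theta_max - theta_min) * ((A_inf - A_min) / (A_max - A_min)) ** kappa.
    kappa bends the mapping between activity and threshold convex or concave."""
    x = (A_inf - A_min) / (A_max - A_min)
    return theta_min + (theta_max - theta_min) * x ** kappa

# illustrative call: activity 2.0 mapped into the measured threshold range
theta_hat = predicted_threshold(2.0, 0.007, 0.012, A_max=10.0, kappa=1.5)
```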
</sec>
</sec>
<sec>
<title>2.3. Natural image statistics</title>
<p>We also performed a natural image analysis to test our hypothesis that human detection thresholds are linked to the frequency with which different plaid configurations occur in natural scenes. For this purpose, we quantified whether plaids occur more or less often than predicted from the likelihoods of their constituting single patches. These statistics were derived from analyzing how similar local image regions are to the four different oriented gratings used in our experiments (Figure <xref ref-type="fig" rid="F4">4</xref>). Mathematical details of the procedures described below can be found in the <xref ref-type="supplementary-material" rid="SM1">Supplementary Material</xref>.</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p><bold>Image analysis</bold>. Image regions taken from a full image converted to gray scale (left) are compared with plaid configurations (top right) by comparing Gabor templates (bottom right) with different spatial frequencies and different orientations to image patches (yellow outlines) positioned at the four positions in a plaid.</p></caption>
<graphic xlink:href="fnsys-10-00078-g0004.tif"/>
</fig>
<p>Image processing consisted of the following basic steps, making use of the convolution theorem to realize whitening and Gabor filtering in a numerically efficient way:
<list list-type="order">
<list-item><p>Conversion from RGB color space to grayscale.</p></list-item>
<list-item><p>Transformation into Fourier space.</p></list-item>
<list-item><p>Multiplication by Whitening filter <italic>F</italic><sub><italic>w</italic></sub>(<bold>k</bold>), see below.</p></list-item>
<list-item><p>Multiplication by Gabor filter(s) <italic>g</italic><sub><bold>p</bold></sub> transformed into Fourier space.</p></list-item>
<list-item><p>Inverse Fourier transform, thus providing the grating patch &#x02013; image patch overlaps <italic>O</italic><sub><bold>p</bold></sub>(<bold>r</bold>) for each position <bold>r</bold> in the image, see below.</p></list-item>
<list-item><p>Removal of an image border of width 4&#x003C3; (approximately the size of one Gabor template) to exclude Fourier transformation artifacts at the (non-periodic) image boundaries.</p></list-item>
</list></p>
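The steps above can be sketched with FFT-based filtering (a minimal illustration; the whitening filter <italic>F</italic><sub><italic>w</italic></sub>(<bold>k</bold>) &#x0007E; |<bold>k</bold>| and the Gabor parameters are placeholder assumptions, and a square grayscale image is assumed; see Supplementary Material for the exact definitions):

```python
import numpy as np

def gabor(size, sigma, freq, theta):
    """Gabor template: Gaussian envelope times a cosine carrier with
    spatial frequency freq (cycles/pixel) and orientation theta."""
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return env * np.cos(2.0 * np.pi * freq * xr)

def patch_overlaps(image_gray, sigma=8.0, freq=0.1, theta=np.pi / 4):
    """Whiten and Gabor-filter a (square) grayscale image via the
    convolution theorem, returning the overlaps O_p(r) at each position r."""
    F = np.fft.fft2(image_gray)                       # step 2: Fourier space
    k = np.sqrt(np.add.outer(np.fft.fftfreq(image_gray.shape[0]) ** 2,
                             np.fft.fftfreq(image_gray.shape[1]) ** 2))
    F *= k                                            # step 3: placeholder whitening ~ |k|
    g = gabor(image_gray.shape[0], sigma, freq, theta)
    G = np.fft.fft2(np.fft.ifftshift(g))              # step 4: Gabor in Fourier space
    O = np.fft.ifft2(F * G).real                      # step 5: overlaps O_p(r)
    b = int(4 * sigma)                                # step 6: strip 4*sigma border
    return O[b:-b, b:-b]
```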
<p>In natural scenes, lower spatial frequencies typically occur with higher amplitudes than higher spatial frequencies (van der Schaaf and van Hateren, <xref ref-type="bibr" rid="B51">1996</xref>). To compensate for this effect, whitening is used to equalize the average spectral composition of image ensembles, allowing us to separate the actual probabilities of occurrence of the different grating plaid configurations from the typical intensity with which they are present.</p>
<p>The grating patch &#x02013; image patch overlaps <italic>O</italic><sub><bold>p</bold></sub> statistically quantify how well an image patch is explained by the presence of a single grating with parameters <bold>p</bold> (e.g., comprising orientation and spatial frequency; for a detailed explanation see <xref ref-type="supplementary-material" rid="SM1">Supplementary Material</xref>). Consequently, they also allow assessing the presence or absence of full plaid configurations <bold>C</bold> comprising a combination of four grating patches. To quantify how often configuration <bold>C</bold> is encountered in an image ensemble <inline-formula><mml:math id="M32"><mml:mi mathvariant="-tex-caligraphic">E</mml:mi></mml:math></inline-formula>, we computed the ratio &#x0039B;(<bold>C</bold>) between the joint likelihood of observing <bold>C</bold> and the likelihood of independently observing the single grating patches <bold>c</bold><sub><italic>i</italic></sub> of <bold>C</bold>:</p>
<disp-formula id="E4"><label>(4)</label><mml:math id="M26"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:mi>&#x0039B;</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>C</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd><mml:mtd><mml:mo>:</mml:mo><mml:mo>=</mml:mo></mml:mtd><mml:mtd><mml:mfrac><mml:mrow><mml:mi>L</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>C</mml:mtext></mml:mstyle><mml:mo>|</mml:mo><mml:mi mathvariant="-tex-caligraphic">E</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x0220F;</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mn>4</mml:mn></mml:mrow></mml:munderover><mml:mi>L</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>c</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>|</mml:mo><mml:mi mathvariant="-tex-caligraphic">E</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:mfrac></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>For example, a value of &#x0039B;(<bold>C</bold>) &#x0003D; 2 would mean that plaid <bold>C</bold> occurs twice as frequently as expected from the probability of occurrence of its single grating patches <bold>c</bold><sub><italic>i</italic></sub>.</p>
<p>As image ensembles <inline-formula><mml:math id="M33"><mml:mi mathvariant="-tex-caligraphic">E</mml:mi></mml:math></inline-formula>, we used two different databases: first, the Corel Image Database [Corel Mega Gallery (add-on to CorelDraw version 6), Corel Corporation (1996)] with about 68,000 JPEG-compressed images of size 384 &#x000D7; 256 pixels, and second, the McGill Color Image Database with about 820 color-calibrated, uncompressed images of size 576 &#x000D7; 768 pixels (Olmos and Kingdom, <xref ref-type="bibr" rid="B38">2004</xref>). JPEG compression is known to introduce artifacts at cardinal orientations. However, since we were only interested in oblique orientations, this putative confounding factor was of negligible concern for our investigations.</p>
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>3. Results</title>
<sec>
<title>3.1. Experiment</title>
<p>The threshold contrast results are summarized in Figure <xref ref-type="fig" rid="F5">5B</xref> for the small distance grid (1&#x000B0;, left panel) and the large distance grid (2&#x000B0;, right panel). The result patterns for the small and the large inter-patch distance were remarkably different. To substantiate the different effects of alignment and spatial frequency for the two grid sizes, we analyzed the threshold contrast data with an ANOVA and tested against the assumption of probability summation among the four grid positions with a confidence interval test.</p>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p><bold>Human detection thresholds for different plaid configurations</bold>. <bold>(A)</bold> The 22 plaid configurations used in the experiment, sorted according to grating patch alignment (zero alignments, one alignment, or two alignments) and SF content (only low SFs, only high SFs, or both SFs), giving nine categories in total. <bold>(B)</bold> The graph on the left shows results for a plaid distance of <italic>d</italic> &#x0003D; 1&#x000B0;, the graph on the right for a plaid distance of <italic>d</italic> &#x0003D; 2&#x000B0;. The height of the bars indicates the average detection threshold and the vertical black lines the corresponding 95% confidence interval. For comparison, the gray bars display the approximate detection thresholds predicted from single element detection by assuming independence and probability summation. Bars significantly above the gray region thus indicate suppressive interactions, while bars significantly below indicate facilitating interactions.</p></caption>
<graphic xlink:href="fnsys-10-00078-g0005.tif"/>
</fig>
<sec>
<title>3.1.1. Results for the small distance grid (1&#x000B0;)</title>
<p>For the small distance grid there were main effects of alignment [<italic>F</italic><sub>(2, 875)</sub> &#x0003D; 8.32, <italic>p</italic> &#x0003C; 0.001] and spatial frequency [<italic>F</italic><sub>(2, 875)</sub> &#x0003D; 13.64, <italic>p</italic> &#x0003C; 0.001], but no significant interaction of both factors [<italic>F</italic><sub>(4, 875)</sub> &#x0003D; 1.79, <italic>p</italic> &#x0003D; 0.129]. Pairwise comparisons showed that configurations with 1 alignment had significantly larger threshold contrasts compared to configurations with 2 alignments [<italic>F</italic><sub>(1, 875)</sub> &#x0003D; 16.31, <italic>p</italic> &#x0003C; 0.001] and 0 alignment [<italic>F</italic><sub>(1, 875)</sub> &#x0003D; 5.62, <italic>p</italic> &#x0003C; 0.02], while threshold contrasts did not differ significantly for 0 and 2 alignments [<italic>F</italic><sub>(1, 875)</sub> &#x0003D; 0.517, <italic>p</italic> &#x0003D; 0.517]. Homogeneous low spatial frequency patches were detected at lower contrasts than homogeneous high frequency patches [<italic>F</italic><sub>(1, 875)</sub> &#x0003D; 20.61, <italic>p</italic> &#x0003C; 0.001], and also compared to plaids combining both spatial frequencies [<italic>F</italic><sub>(1, 875)</sub> &#x0003D; 22.95, <italic>p</italic> &#x0003C; 0.001]. High spatial frequency and mixed frequency plaids were detected at equal contrast levels [<italic>F</italic><sub>(1, 875)</sub> &#x0003D; 0.44, <italic>p</italic> &#x0003D; 0.506].</p>
<p>The hypothesis of probability summation among equally detectable grating patch stimuli at the 4 grid positions was tested with a confidence interval test for the threshold contrast derived from assuming probability summation, using estimates of the shape parameter &#x003B2; in the interval [3.5, 4] (see <xref ref-type="supplementary-material" rid="SM1">Supplementary Material</xref>). For this range of &#x003B2;, <inline-formula><mml:math id="M27"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>&#x00398;</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mi>k</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mn>1</mml:mn><mml:mo>/</mml:mo><mml:mi>&#x003B2;</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula> leads to estimated threshold reduction factors in the range of [0.673, 0.707]. For the measured threshold contrast of a single patch at any of the 4 grid positions, &#x00398;<sub>0</sub> &#x0003D; 0.0111, the threshold contrast for probability summation among 4 grating patches presented simultaneously on the spatial grid is expected within the interval [0.0075, 0.0078] (see gray shaded area in Figure <xref ref-type="fig" rid="F5">5B</xref>, left panel; see Table <xref ref-type="table" rid="T1">1</xref>). The test shows that only plaid configurations with low spatial frequencies at 0 and 2 alignments had threshold contrasts that were compatible with a probability summation mechanism. All the other combinations yielded threshold contrasts significantly above the prediction, indicating strong inhibitory interactions. For 1 alignment, where the orientations of 2 grating patches were orthogonal to an aligned array of the 2 other gratings, inhibitory interactions were strongest, and present for all spatial frequency compositions.</p>
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p><bold>Confidence interval tests for 1&#x000B0;</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>Alignments</bold></th>
<th valign="top" align="left"><bold>Spatial frequency</bold></th>
<th valign="top" align="center"><bold><inline-formula><mml:math id="M28"><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mi>&#x00398;</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:math></inline-formula></bold></th>
<th valign="top" align="center"><bold><italic>s</italic><sub><italic>e</italic></sub></bold></th>
<th valign="top" align="center"><bold>&#x00398;<sub><italic>l</italic></sub></bold></th>
<th valign="top" align="center"><bold>&#x00398;<sub><italic>u</italic></sub></bold></th>
<th valign="top" align="left"><bold>CI(&#x003B2;<sub>1</sub>)</bold></th>
<th valign="top" align="left"><bold>CI(&#x003B2;<sub>2</sub>)</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">0</td>
<td valign="top" align="left"><italic>f</italic><sub>low</sub></td>
<td valign="top" align="center">0.0081</td>
<td valign="top" align="center">0.0002</td>
<td valign="top" align="center">0.0077</td>
<td valign="top" align="center">0.0086</td>
<td valign="top" align="left">OUT</td>
<td valign="top" align="left">IN</td>
</tr>
<tr>
<td valign="top" align="left">1</td>
<td valign="top" align="left"><italic>f</italic><sub>low</sub></td>
<td valign="top" align="center">0.0084</td>
<td valign="top" align="center">0.0002</td>
<td valign="top" align="center">0.0080</td>
<td valign="top" align="center">0.0087</td>
<td valign="top" align="left">OUT</td>
<td valign="top" align="left">OUT</td>
</tr>
<tr style="border-bottom: thin solid #000000;">
<td valign="top" align="left">2</td>
<td valign="top" align="left"><italic>f</italic><sub>low</sub></td>
<td valign="top" align="center">0.0079</td>
<td valign="top" align="center">0.0001</td>
<td valign="top" align="center">0.0077</td>
<td valign="top" align="center">0.0082</td>
<td valign="top" align="left">OUT</td>
<td valign="top" align="left">IN</td>
</tr> <tr>
<td valign="top" align="left">0</td>
<td valign="top" align="left"><italic>f</italic><sub>high</sub></td>
<td valign="top" align="center">0.0089</td>
<td valign="top" align="center">0.0002</td>
<td valign="top" align="center">0.0084</td>
<td valign="top" align="center">0.0094</td>
<td valign="top" align="left">OUT</td>
<td valign="top" align="left">OUT</td>
</tr>
<tr>
<td valign="top" align="left">1</td>
<td valign="top" align="left"><italic>f</italic><sub>high</sub></td>
<td valign="top" align="center">0.0091</td>
<td valign="top" align="center">0.0002</td>
<td valign="top" align="center">0.0088</td>
<td valign="top" align="center">0.0095</td>
<td valign="top" align="left">OUT</td>
<td valign="top" align="left">OUT</td>
</tr>
<tr style="border-bottom: thin solid #000000;">
<td valign="top" align="left">2</td>
<td valign="top" align="left"><italic>f</italic><sub>high</sub></td>
<td valign="top" align="center">0.0085</td>
<td valign="top" align="center">0.0001</td>
<td valign="top" align="center">0.0082</td>
<td valign="top" align="center">0.0087</td>
<td valign="top" align="left">OUT</td>
<td valign="top" align="left">OUT</td>
</tr> <tr>
<td valign="top" align="left">0</td>
<td valign="top" align="left"><italic>f</italic><sub>both</sub></td>
<td valign="top" align="center">0.0084</td>
<td valign="top" align="center">0.0001</td>
<td valign="top" align="center">0.0082</td>
<td valign="top" align="center">0.0087</td>
<td valign="top" align="left">OUT</td>
<td valign="top" align="left">OUT</td>
</tr>
<tr>
<td valign="top" align="left">1</td>
<td valign="top" align="left"><italic>f</italic><sub>both</sub></td>
<td valign="top" align="center">0.0090</td>
<td valign="top" align="center">0.0001</td>
<td valign="top" align="center">0.0088</td>
<td valign="top" align="center">0.0092</td>
<td valign="top" align="left">OUT</td>
<td valign="top" align="left">OUT</td>
</tr>
<tr>
<td valign="top" align="left">2</td>
<td valign="top" align="left"><italic>f</italic><sub>both</sub></td>
<td valign="top" align="center">0.0088</td>
<td valign="top" align="center">0.0001</td>
<td valign="top" align="center">0.0086</td>
<td valign="top" align="center">0.0090</td>
<td valign="top" align="left">OUT</td>
<td valign="top" align="left">OUT</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Confidence interval tests for the prediction of detection by probability summation among the 4 grid positions for the small distance grid (1&#x000B0;). The table shows mean, <inline-formula><mml:math id="M29"><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mi>&#x00398;</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:math></inline-formula>, standard error, s<sub>e</sub>, lower and upper confidence limits for a 5% &#x003B1;-level, &#x00398;<sub>l</sub> and &#x00398;<sub>u</sub>, and decision whether the probability summation prediction falls within or outside the confidence interval of the mean, assuming &#x003B2;<sub>1</sub> &#x0003D; 3.5 and &#x003B2;<sub>2</sub> &#x0003D; 4.0.</italic></p>
</table-wrap-foot>
</table-wrap>
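<p>The IN/OUT entries in Table 1 amount to checking whether the probability summation prediction, for a given &#x003B2;, falls inside the confidence interval [&#x00398;<sub><italic>l</italic></sub>, &#x00398;<sub><italic>u</italic></sub>]. A minimal sketch of this decision rule (function name and interface are ours; the published limits derive from the measured threshold distributions):</p>

```python
def ci_decision(theta_l, theta_u, theta_0=0.0111, k=4, beta=4.0):
    """Return 'IN' if the probability-summation prediction falls inside
    the confidence interval [theta_l, theta_u], else 'OUT'."""
    prediction = theta_0 * k ** (-1.0 / beta)
    return "IN" if theta_l <= prediction <= theta_u else "OUT"

# First row of Table 1 (0 alignments, f_low): CI = [0.0077, 0.0086]
print(ci_decision(0.0077, 0.0086, beta=3.5))  # prediction ~0.0075 -> "OUT"
print(ci_decision(0.0077, 0.0086, beta=4.0))  # prediction ~0.0078 -> "IN"
```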
</sec>
<sec>
<title>3.1.2. Results for the large distance grid (2&#x000B0;)</title>
<p>For the large distance grid there were main effects of alignment [<italic>F</italic><sub>(2, 875)</sub> &#x0003D; 19.65, <italic>p</italic> &#x0003C; 0.001] and spatial frequency [<italic>F</italic><sub>(2, 875)</sub> &#x0003D; 12.04, <italic>p</italic> &#x0003C; 0.001], and also the interaction of both factors reached significance [<italic>F</italic><sub>(4, 875)</sub> &#x0003D; 5.78, <italic>p</italic> &#x0003C; 0.001]. The data shown in the right panel of Figure <xref ref-type="fig" rid="F5">5B</xref> confirm that the main effects of spatial frequency and alignment were not independent, but were mediated by the alignment &#x000D7; spatial frequency interaction. Pairwise comparisons across alignment revealed that, for mixed spatial frequencies, there was no alignment effect [D(0&#x02013;1): <italic>F</italic><sub>(1, 875)</sub> &#x0003D; 0.03, <italic>p</italic> &#x0003D; 0.873; D(0&#x02013;2): <italic>F</italic><sub>(1, 875)</sub> &#x0003D; 0.11, <italic>p</italic> &#x0003D; 0.745; D(1&#x02013;2): <italic>F</italic><sub>(1, 875)</sub> &#x0003D; 0.33, <italic>p</italic> &#x0003D; 0.567]. 
For spatial frequency homogeneous plaid configurations the threshold contrasts did not differ for 0 and 1 alignment, but were significantly lowered for 2 alignments, with a particularly pronounced threshold reduction at 2 alignments for homogeneous high spatial frequency plaids [<italic>f</italic><sub>low</sub>: D(0&#x02013;1): <italic>F</italic><sub>(1, 875)</sub> &#x0003D; 0.71, <italic>p</italic> &#x0003D; 0.398; D(0&#x02013;2): <italic>F</italic><sub>(1, 875)</sub> &#x0003D; 9.21, <italic>p</italic> &#x0003C; 0.01; D(1&#x02013;2): <italic>F</italic><sub>(1, 875)</sub> &#x0003D; 7.35, <italic>p</italic> &#x0003C; 0.01; <italic>f</italic><sub>high</sub>: D(0&#x02013;1): <italic>F</italic><sub>(1, 875)</sub> &#x0003D; 0.08, <italic>p</italic> &#x0003D; 0.772; D(0&#x02013;2): <italic>F</italic><sub>(1, 875)</sub> &#x0003D; 18.81, <italic>p</italic> &#x0003C; 0.001; D(1&#x02013;2): <italic>F</italic><sub>(1, 875)</sub> &#x0003D; 25.26, <italic>p</italic> &#x0003C; 0.001].</p>
<p>The confidence interval test for probability summation among grating patches at the four grid positions showed that the probability summation hypothesis could not be rejected for any of the plaid configurations with 0 or 1 alignments (see gray shaded area in Figure <xref ref-type="fig" rid="F5">5B</xref>, right panel; see Table <xref ref-type="table" rid="T2">2</xref>). For 2 alignments, probability summation was compatible with the threshold contrast data for mixed frequency plaids. For high spatial frequency plaids, the threshold contrasts fell significantly below the predicted contrast range. For low spatial frequency plaids, the confidence interval of threshold contrasts fell significantly below the predicted contrast range for larger &#x003B2; values, but overlapped with the predicted threshold contrasts when smaller &#x003B2; values were assumed.</p>
<table-wrap position="float" id="T2">
<label>Table 2</label>
<caption><p><bold>Confidence interval tests for 2&#x000B0;</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>Alignments</bold></th>
<th valign="top" align="left"><bold>Spatial frequency</bold></th>
<th valign="top" align="center"><bold><inline-formula><mml:math id="M30"><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mi>&#x00398;</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:math></inline-formula></bold></th>
<th valign="top" align="center"><bold><italic>s</italic><sub><italic>e</italic></sub></bold></th>
<th valign="top" align="center"><bold>&#x00398;<sub><italic>l</italic></sub></bold></th>
<th valign="top" align="center"><bold>&#x00398;<sub><italic>u</italic></sub></bold></th>
<th valign="top" align="left"><bold>CI(&#x003B2;<sub>1</sub>)</bold></th>
<th valign="top" align="left"><bold>CI(&#x003B2;<sub>2</sub>)</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">0</td>
<td valign="top" align="left"><italic>f</italic><sub>low</sub></td>
<td valign="top" align="center">0.0121</td>
<td valign="top" align="center">0.0003</td>
<td valign="top" align="center">0.0114</td>
<td valign="top" align="center">0.0127</td>
<td valign="top" align="left">OUT</td>
<td valign="top" align="left">IN</td>
</tr>
<tr>
<td valign="top" align="left">1</td>
<td valign="top" align="left"><italic>f</italic><sub>low</sub></td>
<td valign="top" align="center">0.0117</td>
<td valign="top" align="center">0.0002</td>
<td valign="top" align="center">0.0113</td>
<td valign="top" align="center">0.0122</td>
<td valign="top" align="left">OUT</td>
<td valign="top" align="left">IN</td>
</tr>
<tr style="border-bottom: thin solid #000000;">
<td valign="top" align="left">2</td>
<td valign="top" align="left"><italic>f</italic><sub>low</sub></td>
<td valign="top" align="center">0.0110</td>
<td valign="top" align="center">0.0002</td>
<td valign="top" align="center">0.0106</td>
<td valign="top" align="center">0.0113</td>
<td valign="top" align="left">IN</td>
<td valign="top" align="left">OUT</td>
</tr> <tr>
<td valign="top" align="left">0</td>
<td valign="top" align="left"><italic>f</italic><sub>high</sub></td>
<td valign="top" align="center">0.0116</td>
<td valign="top" align="center">0.0003</td>
<td valign="top" align="center">0.0110</td>
<td valign="top" align="center">0.0123</td>
<td valign="top" align="left">IN</td>
<td valign="top" align="left">IN</td>
</tr>
<tr>
<td valign="top" align="left">1</td>
<td valign="top" align="left"><italic>f</italic><sub>high</sub></td>
<td valign="top" align="center">0.0115</td>
<td valign="top" align="center">0.0002</td>
<td valign="top" align="center">0.0110</td>
<td valign="top" align="center">0.0119</td>
<td valign="top" align="left">IN</td>
<td valign="top" align="left">IN</td>
</tr>
<tr style="border-bottom: thin solid #000000;">
<td valign="top" align="left">2</td>
<td valign="top" align="left"><italic>f</italic><sub>high</sub></td>
<td valign="top" align="center">0.0100</td>
<td valign="top" align="center">0.0002</td>
<td valign="top" align="center">0.0096</td>
<td valign="top" align="center">0.0103</td>
<td valign="top" align="left">OUT</td>
<td valign="top" align="left">OUT</td>
</tr> <tr>
<td valign="top" align="left">0</td>
<td valign="top" align="left"><italic>f</italic><sub>both</sub></td>
<td valign="top" align="center">0.0119</td>
<td valign="top" align="center">0.0002</td>
<td valign="top" align="center">0.0115</td>
<td valign="top" align="center">0.0123</td>
<td valign="top" align="left">OUT</td>
<td valign="top" align="left">IN</td>
</tr>
<tr>
<td valign="top" align="left">1</td>
<td valign="top" align="left"><italic>f</italic><sub>both</sub></td>
<td valign="top" align="center">0.0119</td>
<td valign="top" align="center">0.0001</td>
<td valign="top" align="center">0.0116</td>
<td valign="top" align="center">0.0122</td>
<td valign="top" align="left">OUT</td>
<td valign="top" align="left">IN</td>
</tr>
<tr>
<td valign="top" align="left">2</td>
<td valign="top" align="left"><italic>f</italic><sub>both</sub></td>
<td valign="top" align="center">0.0118</td>
<td valign="top" align="center">0.0001</td>
<td valign="top" align="center">0.0116</td>
<td valign="top" align="center">0.0121</td>
<td valign="top" align="left">OUT</td>
<td valign="top" align="left">IN</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Confidence interval tests for the prediction of detection by probability summation among the 4 grid positions for the large distance grid (2&#x000B0;). Conventions as in Table <xref ref-type="table" rid="T1">1</xref>.</italic></p>
</table-wrap-foot>
</table-wrap>
<p>The overall pattern of threshold contrast results differed markedly between the two distance grids. For the small grid there was evidence for inhibitory interactions, most pronounced for the 1 alignment configurations. For the large grid there was evidence for independence, except for the frequency-homogeneous configurations: these reflected excitatory interactions, with patches becoming visible at contrasts well below those predicted by an OR-detection rule for spatially distributed stimulus events.</p>
</sec>
</sec>
<sec>
<title>3.2. Cortex model</title>
<p>For assessing how well a model with a particular set of adapted parameters fits the experimental data, we first counted the number of conditions <italic>N</italic><sub>outside</sub> in which the model prediction fell outside the confidence intervals around the measured thresholds. The lower the minimum <italic>N</italic><sub>outside</sub> achieved over the full set of simulations, the better the fit between model and experiment. In total, there were 19 conditions: 9 plaid configurations for each distance plus one probability summation threshold which we required the model to reproduce if all interactions were set to zero (see <xref ref-type="supplementary-material" rid="SM1">Supplementary Material</xref>). Second, we computed the mean quadratic error <italic>E</italic><sub>2</sub> between predicted and measured thresholds.</p>
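<p>The two fit criteria can be sketched as follows (a simplified illustration with invented toy values, not the actual model output):</p>

```python
def fit_metrics(predicted, measured, ci_lower, ci_upper):
    """N_outside: number of conditions where the model prediction falls
    outside the measured confidence interval; E2: mean quadratic error
    between predicted and measured thresholds."""
    n_outside = sum(not (lo <= p <= hi)
                    for p, lo, hi in zip(predicted, ci_lower, ci_upper))
    e2 = sum((p - m) ** 2 for p, m in zip(predicted, measured)) / len(measured)
    return n_outside, e2

# Toy example with two conditions: the first prediction lies inside the
# confidence interval, the second outside.
n_out, e2 = fit_metrics(predicted=[0.0080, 0.0120],
                        measured=[0.0081, 0.0110],
                        ci_lower=[0.0077, 0.0106],
                        ci_upper=[0.0086, 0.0113])
```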
<p>Parameter optimization of model <italic>A</italic> was performed starting from 500 different initial conditions. After convergence of the stochastic gradient descent method, 457 parameter sets yielded thresholds for which <italic>N</italic><sub>outside</sub> &#x0003D; 0, giving an average <italic>E</italic><sub>2</sub> of 1.68 &#x000B7; 10<sup>&#x02212;6</sup> &#x000B1; 4.72 &#x000B7; 10<sup>&#x02212;6</sup>. The best performing model with the lowest quadratic distance <italic>E</italic><sub>2</sub> &#x0003D; 6.6 &#x000B7; 10<sup>&#x02212;8</sup> is shown in Figure <xref ref-type="fig" rid="F6">6</xref>, with its interactions displayed in Figure <xref ref-type="fig" rid="F6">6A</xref>, and the corresponding linking function in Figure <xref ref-type="fig" rid="F6">6B</xref>. Detection thresholds are shown in Figure <xref ref-type="fig" rid="F6">6C</xref>, demonstrating a perfect fit between model and experiment. In particular, this fit is much tighter than the human response variability expressed by the confidence intervals. Typically, such a perfect match indicates that a model contains too many free parameters (&#x0201C;overfitting&#x0201D;) and thus will not generalize well to other experimental situations. For different initializations, the parameters after convergence were very similar, which we demonstrate by also showing the nine linking functions for the next best matches of model to experiment (Figure <xref ref-type="fig" rid="F6">6B</xref>, black lines). All linking functions have exponents in the range 1.17&#x02013;3.35 and exhibit a similar shape (concave down).</p>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption><p><bold>Parameters and performance of model A</bold>. For this figure, we used the parameter set yielding model results best matching the experimental data. <bold>(A)</bold> Interactions for plaid distances <italic>d</italic> &#x0003D; 1&#x000B0; (left) and <italic>d</italic> &#x0003D; 2&#x000B0; (right). The six subplots corresponding to each plaid distance show interactions between the neuronal unit in the lower left corner to all other units, with their orientation preferences and SFs indicated by the black bars (thick bar for low SF, thin bar for high SF). Interaction strength is color coded (insides of circles). The upper row displays all interactions for units with similar orientation preferences, while the lower row displays interactions between units with orthogonal orientation preferences. The outer columns display interactions between units with similar SFs, while the middle column displays interactions between units with different SFs. For simplifying the figure, the original plaid configuration has been rotated by 45 degrees. <bold>(B)</bold> Mapping of activities onto thresholds for the &#x00027;best&#x00027; model (thick red line), and for nine other models with the next-best performances (thin black lines). <bold>(C)</bold> Comparison of detection thresholds from model and experiment for plaid distances <italic>d</italic> &#x0003D; 1&#x000B0; (left) and <italic>d</italic> &#x0003D; 2&#x000B0; (right). The predicted thresholds from the model are displayed as colored bars (color code as inset), while the psychophysical thresholds are indicated by the black circles with the vertical lines showing the corresponding 95%-confidence intervals. The region shaded in gray color indicates the range of thresholds expected from probability summation. Note that model predictions and actual thresholds are indistinguishable from each other.</p></caption>
<graphic xlink:href="fnsys-10-00078-g0006.tif"/>
</fig>
<p>For model B, we also performed simulations from 500 initial conditions. Since the lower number of parameters restricts the degrees of freedom in the model&#x00027;s dynamics, the 8 best models yielded a minimum <italic>N</italic><sub>outside</sub> &#x0003D; 2, with an average <italic>E</italic><sub>2</sub> of 2.54 &#x000B7; 10<sup>&#x02212;4</sup> &#x000B1; 4.64 &#x000B7; 10<sup>&#x02212;5</sup>. The best performing model with the lowest quadratic distance <italic>E</italic><sub>2</sub> is shown in Figure <xref ref-type="fig" rid="F7">7</xref>, with its interactions displayed in Figure <xref ref-type="fig" rid="F7">7A</xref>, and the corresponding linking function in Figure <xref ref-type="fig" rid="F7">7B</xref>. The detection thresholds shown in Figure <xref ref-type="fig" rid="F7">7C</xref> confirm that the fit of model to experiment is now less accurate. However, model B has about a third as many parameters, so a perfect fit is less likely than for model A. Again, for different initializations, the parameters after convergence are very similar. For example, the exponent of the linking function now varies between 0.99 and 1.37 (see Figure <xref ref-type="fig" rid="F7">7B</xref>, black lines, for examples of linking functions). An advantage of model B is that the parametric definition of the interactions as distance-dependent Gaussian functions allows one to predict interaction strengths also for other plaid configurations not used in this particular experiment.</p>
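<p>The distance-dependent Gaussian form of model B&#x00027;s interactions can be illustrated as follows; the parameter values below are invented for the example and are not those of the fitted model:</p>

```python
import math

def interaction(delta_r, amplitude, sigma):
    """Interaction weight modeled as a Gaussian of the patch distance
    delta_r (degrees); a negative amplitude gives inhibition that
    weakens smoothly with distance."""
    return amplitude * math.exp(-delta_r ** 2 / (2.0 * sigma ** 2))

# Illustrative inhibition: strong at 1 degree, nearly gone at 2 degrees
w1 = interaction(1.0, amplitude=-0.2, sigma=0.8)
w2 = interaction(2.0, amplitude=-0.2, sigma=0.8)
```

<p>Because the weight is a smooth function of distance, the same expression yields predictions for patch separations that were never tested in the experiment.</p>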
<fig id="F7" position="float">
<label>Figure 7</label>
<caption><p><bold>Parameters and performance of model B</bold>. We again use the parameter set yielding the best matching results. Display as in Figure <xref ref-type="fig" rid="F6">6</xref>. Although match of model to experiment is not as good as before, still only one model prediction is outside the confidence interval.</p></caption>
<graphic xlink:href="fnsys-10-00078-g0007.tif"/>
</fig>
<p>Although individual weight values were different between models A and B, the general pattern which emerged after learning was very similar and thus confirmed the consistency of our approach. Comparing interaction weights for models A and B, we find very similar structures that provide an intuitive explanation for the empirical results:
<list list-type="bullet">
<list-item><p>First, we computed the mean over all interactions as displayed in Figures <xref ref-type="fig" rid="F6">6A,B</xref> for <italic>d</italic> &#x0003D; 1&#x000B0; and <italic>d</italic> &#x0003D; 2&#x000B0;. Averaged over the 457 (8) best models of type A (type B), we obtained &#x02329;<italic>w</italic>&#x0232A; &#x0003D; &#x02212;0.14 &#x000B1; 0.07 for 1&#x000B0; and &#x02329;<italic>w</italic>&#x0232A; &#x0003D; &#x02212;0.03 &#x000B1; 0.02 for 2&#x000B0; (&#x02329;<italic>w</italic>&#x0232A; &#x0003D; &#x02212;0.28 &#x000B1; 0.10 for 1&#x000B0; and &#x02329;<italic>w</italic>&#x0232A; &#x0003D; &#x02212;0.01 &#x000B1; 0.05 for 2&#x000B0;), respectively. Clearly, for the smaller patch distance interactions must be more inhibitory, thus explaining the higher detection thresholds.</p></list-item>
<list-item><p>Second, we assessed the difference in coupling strengths between feature detectors for similar spatial frequencies (low-low or high-high) and coupling strengths between feature detectors for different spatial frequencies (low-high), averaged over patch distances (Table <xref ref-type="table" rid="T3">3</xref>). We found that coupling strengths are similar between feature detectors with orthogonal orientation preferences (second and fourth line in Table <xref ref-type="table" rid="T3">3</xref>). However, for parallel orientation preferences, interactions between units with different spatial frequency preferences are much lower (inhibitory) than between units with similar preferred spatial frequencies (first and third line in Table <xref ref-type="table" rid="T3">3</xref>). These inhibitory couplings explain the higher detection thresholds for plaids with different spatial frequencies.</p></list-item>
<list-item><p>Third, we compared the coupling strengths between units with low spatial frequency preferences and units with high spatial frequency preferences, averaged over (relative) orientations (Table <xref ref-type="table" rid="T4">4</xref>). Averaged over the best models of type A (type B), we found an inverse relation for different patch distances. In particular for smaller distances, low-frequency interactions must be stronger than high-frequency interactions, while high-frequency interactions must be more positive than low-frequency interactions for larger patch distances.</p></list-item>
</list></p>
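<p>The &#x02329;<italic>w</italic>&#x0232A; values above are plain group averages over interaction weights; a toy illustration (the couplings are invented for the example, not fitted values):</p>

```python
from statistics import mean

def group_mean(weights):
    """Average interaction weight of a group of couplings, as used for
    the mean-weight summaries in the text."""
    return mean(weights)

# Illustrative couplings: stronger inhibition at d = 1 deg than at d = 2 deg
w_near = group_mean([-0.20, -0.12, -0.10])  # -> -0.14
w_far = group_mean([-0.05, 0.01, -0.02])    # -> -0.02
```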
<table-wrap position="float" id="T3">
<label>Table 3</label>
<caption><p><bold>Comparison of interactions between parallel and orthogonal orientation preferences</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>Model</bold></th>
<th valign="top" align="left"><bold>Orientations</bold></th>
<th valign="top" align="center"><bold>Low-low SFs</bold></th>
<th valign="top" align="center"><bold>High-low SFs</bold></th>
<th valign="top" align="center"><bold>High-high SFs</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">A</td>
<td valign="top" align="left">Parallel</td>
<td valign="top" align="center">&#x0002B;0.01 &#x000B1; 0.01</td>
<td valign="top" align="center">&#x02212;0.24 &#x000B1; 0.11</td>
<td valign="top" align="center">&#x0002B;0.06 &#x000B1; 0.02</td>
</tr>
<tr style="border-bottom: thin solid #000000;">
<td/>
<td valign="top" align="left">Orthogonal</td>
<td valign="top" align="center">&#x02212;0.12 &#x000B1; 0.05</td>
<td valign="top" align="center">&#x02212;0.08 &#x000B1; 0.04</td>
<td valign="top" align="center">&#x02212;0.15 &#x000B1; 0.05</td>
</tr> <tr>
<td valign="top" align="left">B</td>
<td valign="top" align="left">Parallel</td>
<td valign="top" align="center">&#x0002B;0.05 &#x000B1; 0.003</td>
<td valign="top" align="center">&#x02212;0.55 &#x000B1; 0.36</td>
<td valign="top" align="center">&#x0002B;0.08 &#x000B1; 0.01</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">Orthogonal</td>
<td valign="top" align="center">&#x02212;0.13 &#x000B1; 0.02</td>
<td valign="top" align="center">&#x02212;0.13 &#x000B1; 0.02</td>
<td valign="top" align="center">&#x02212;0.19 &#x000B1; 0.03</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap position="float" id="T4">
<label>Table 4</label>
<caption><p><bold>Comparison of interactions between similar spatial frequencies (SFs) for small and large patch distances</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>Model</bold></th>
<th valign="top" align="center"><bold>Distance</bold></th>
<th valign="top" align="center"><bold>Low-low SFs</bold></th>
<th valign="top" align="center"><bold>Relation</bold></th>
<th valign="top" align="center"><bold>High-high SFs</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">A</td>
<td valign="top" align="center"><italic>d</italic> &#x0003D; 1&#x000B0;</td>
<td valign="top" align="center">&#x02212;0.10 &#x000B1; 0.04</td>
<td valign="top" align="center">&#x0003E;</td>
<td valign="top" align="center">&#x02212;0.14 &#x000B1; 0.07</td>
</tr>
<tr style="border-bottom: thin solid #000000;">
<td/>
<td valign="top" align="center"><italic>d</italic> &#x0003D; 2&#x000B0;</td>
<td valign="top" align="center">&#x02212;0.01 &#x000B1; 0.03</td>
<td valign="top" align="center">&#x0003C;</td>
<td valign="top" align="center">&#x0002B;0.06 &#x000B1; 0.01</td>
</tr> <tr>
<td valign="top" align="left">B</td>
<td valign="top" align="center"><italic>d</italic> &#x0003D; 1&#x000B0;</td>
<td valign="top" align="center">&#x02212;0.07 &#x000B1; 0.02</td>
<td valign="top" align="center">&#x0003E;</td>
<td valign="top" align="center">&#x02212;0.28 &#x000B1; 0.05</td>
</tr>
<tr>
<td/>
<td valign="top" align="center"><italic>d</italic> &#x0003D; 2&#x000B0;</td>
<td valign="top" align="center">&#x02212;0.004 &#x000B1; 0.01</td>
<td valign="top" align="center">&#x0003C;</td>
<td valign="top" align="center">&#x0002B;0.17 &#x000B1; 0.01</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec>
<title>3.3. Natural image statistics</title>
<p>The image analysis was performed for all 256 possible patch configurations, including plaids never used in the experiment. For comparison with the psychophysical data, the corresponding likelihood ratios &#x0039B;(<bold>C</bold>) for the 22 patch configurations <bold>C</bold> used were extracted and sorted into the 9 categories defined by alignment (0, 1, or 2 alignments) and spatial frequency (only low SFs, only high SFs, or both SFs), identically to the presentation of the experimental data in Figure <xref ref-type="fig" rid="F5">5</xref>. In Figure <xref ref-type="fig" rid="F8">8A</xref>, the results are shown for both data bases for a patch distance of <italic>d</italic> &#x0003D; 2&#x000B0;. In particular, we plotted 1/&#x0039B; since our hypothesis is that the more likely a patch, the lower will be the corresponding detection threshold. Compared to the experimental data for the same distance shown in Figure <xref ref-type="fig" rid="F8">8B</xref>, it turns out that the general result pattern is well reproduced, in particular for configurations with two alignments. Two exceptions are the inverse likelihood ratios for high SF configurations: they are much lower than in the corresponding psychophysical data.</p>
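<p>Sorting the configurations into the 9 alignment &#x000D7; spatial frequency categories and averaging 1/&#x0039B; within each can be sketched as follows (the data format and function are our own illustration, not the original analysis code):</p>

```python
from collections import defaultdict

def category_means(configs):
    """Group inverse likelihood ratios 1/Lambda by (alignments, sf_class)
    and average within each category. configs is a list of
    (alignments, sf_class, lambda_ratio) tuples."""
    groups = defaultdict(list)
    for alignments, sf_class, lam in configs:
        groups[(alignments, sf_class)].append(1.0 / lam)
    return {key: sum(v) / len(v) for key, v in groups.items()}

# Three invented configurations falling into two categories:
configs = [(0, "low", 2.0), (0, "low", 4.0), (2, "high", 5.0)]
means = category_means(configs)
# means[(0, "low")] averages 1/2 and 1/4, giving 0.375
```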
<fig id="F8" position="float">
<label>Figure 8</label>
<caption><p><bold>Results of image analysis compared to human psychophysics</bold>. <bold>(A)</bold> The height of the bars indicates the inverse of the average likelihood ratio &#x0039B; for the corresponding plaid configurations, sorted into the same categories as used in Figure <xref ref-type="fig" rid="F5">5</xref> for showing the psychophysics results. The left graph and right graphs show the results for the Corel and McGill data bases, respectively. For comparison, <bold>(B)</bold> shows the experimental results for <italic>d</italic> &#x0003D; 2&#x000B0; (same as in Figure <xref ref-type="fig" rid="F5">5</xref>, right graph), which come closest to the observed result pattern.</p></caption>
<graphic xlink:href="fnsys-10-00078-g0008.tif"/>
</fig>
<p>Since natural images can be observed from different viewing angles, one degree of visual angle can correspond to a varying number of pixels. To check how strongly results depend on this unknown variable, we analyzed image data using a range from 6 to 24 pixels/degree for the Corel data base, and 12&#x02013;48 pixels/degree for the McGill data base, over a range of spatial distances from <italic>d</italic> &#x0003D; 0.5&#x000B0; to <italic>d</italic> &#x0003D; 4&#x000B0;. Although results varied quantitatively, the general pattern as displayed in Figure <xref ref-type="fig" rid="F8">8</xref> remained unchanged (not shown). There was also no conspicuous change when we varied the threshold used for reducing noise. While this finding means that the psychophysical data for <italic>d</italic> &#x0003D; 1&#x000B0; has no apparent relation to natural image statistics, it nevertheless confirms the well-known fact that natural image statistics is in many respects scale-invariant (Ruderman, <xref ref-type="bibr" rid="B46">1997</xref>).</p>
</sec>
</sec>
<sec sec-type="discussion" id="s4">
<title>4. Discussion</title>
<p>Combining image analysis, computational modeling, and psychophysical experiments, we have investigated visual feature integration of oriented patch gratings and established a link between stimuli (image statistics), predicted brain activity (network model), and cognitive states (perception). In particular, our model consistently and precisely reproduces human detection thresholds in all experimental conditions. Moreover, image statistics are closely linked to perception for 2&#x000B0; inter-patch distance: the more likely a particular plaid configuration, the lower its detection threshold. The model predicts three types of interactions required to explain the observed effects: medium-range spatially isotropic inhibition; long-range iso-orientation excitation for feature detectors with <italic>similar</italic> spatial frequency preferences; and long-range suppression between iso-oriented feature detectors with <italic>different</italic> spatial frequencies.</p>
 
<sec>
<title>4.1. Interactions and their putative computational role</title>
<p>A single, common functional principle emerges when putting these observations into context with our knowledge about neural activation and detection thresholds for grating patches smaller than 1&#x000B0;: typically, neural activation increases and detection thresholds decrease when the diameter of the patches becomes larger (Kretzberg and Ernst, <xref ref-type="bibr" rid="B24">2013</xref>). This suggests an interaction profile resembling a Mexican hat: short-range excitation combined with medium-range inhibition. Now considering the orientation-specific interactions, a second Mexican-hat profile emerges in spatial frequency space: excitation if frequencies are close, and inhibition if frequencies are further apart. Functionally, Mexican-hat interactions are closely related to edge detection and image compression (e.g., see examples in Ernst, <xref ref-type="bibr" rid="B8">2013</xref>): stimuli consisting of similar features are suppressed (low neural activity, Sillito et al., <xref ref-type="bibr" rid="B50">1995</xref>; Levitt and Lund, <xref ref-type="bibr" rid="B28">1997</xref>), while stimuli consisting of dissimilar features are enhanced (high neural activity, Sillito et al., <xref ref-type="bibr" rid="B50">1995</xref>; Levitt and Lund, <xref ref-type="bibr" rid="B28">1997</xref>). More complex interaction patterns (surrounds) which might be used by the brain to detect specific patch configurations have also been reported in physiological studies (Walker et al., <xref ref-type="bibr" rid="B53">1999</xref>). In our situation, the Mexican hat in Cartesian space will suppress configurations with multiple, closely spaced patches (of any orientation and spatial frequency) in favor of configurations with more widely spaced, or isolated, patches. 
At the same time, the Mexican hat in spatial frequency space will suppress configurations containing many spatial frequencies, while enhancing configurations with a single spatial frequency&#x02014;provided that the patches have similar orientations, since this latter interaction is orientation (difference)-specific (Figure <xref ref-type="fig" rid="F9">9</xref>).</p>
<fig id="F9" position="float">
<label>Figure 9</label>
<caption><p><bold>Schematic representation of interactions</bold>. Coupling scheme implied by our findings, shown for feature detectors with similar orientation preferences: antagonistic interactions in space (on small and intermediate distances &#x00394;<italic>r</italic>, horizontal axis) are complemented by antagonistic interactions in spatial frequency (vertical axis, &#x00394;<italic>f</italic>) at long spatial distances. Excitatory and inhibitory interactions are shown in red and blue shading, respectively. For clarity of illustration, we do not show that the interaction length scales additionally depend on spatial frequency.</p></caption>
<graphic xlink:href="fnsys-10-00078-g0009.tif"/>
</fig>
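The combined interaction profile sketched in Figure 9 can be made concrete as a pair of difference-of-Gaussians ("Mexican hat") functions, one over spatial distance and one over spatial-frequency difference. The following minimal sketch is purely illustrative: all widths, amplitudes, and the 2&#x000B0; gating distance are made-up values chosen to reproduce the qualitative picture, not the fitted model parameters.

```python
import numpy as np

def dog(x, sigma_e, sigma_i, a_e=1.0, a_i=0.8):
    """Difference of Gaussians: narrow excitation minus broader inhibition."""
    return (a_e * np.exp(-x**2 / (2 * sigma_e**2))
            - a_i * np.exp(-x**2 / (2 * sigma_i**2)))

def coupling(dr, df):
    """Illustrative coupling between two iso-oriented feature detectors.

    dr: spatial distance (deg of visual angle)
    df: spatial-frequency difference (octaves)

    Small/intermediate dr: spatial Mexican hat, independent of df.
    Large dr (gated around ~2 deg here): Mexican hat in df, i.e.,
    excitation for similar SFs turning into suppression for dissimilar SFs.
    All parameter values are illustrative assumptions, not model fits.
    """
    near = dog(dr, sigma_e=0.3, sigma_i=1.0)
    far_gate = np.exp(-(dr - 2.0)**2 / (2 * 0.5**2))  # active near ~2 deg
    far = far_gate * dog(df, sigma_e=0.4, sigma_i=1.2)
    return near + far

# Qualitative checks mirroring the observed effects:
assert coupling(1.0, 0.0) < 0   # close spacing: net suppression
assert coupling(2.0, 0.0) > 0   # wide spacing, same SF: facilitation
assert coupling(2.0, 2.0) < 0   # wide spacing, different SF: suppression
```

In the full model, such a profile would additionally be gated by orientation difference and its length scales would depend on the detectors' SF preferences, which this sketch omits.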
<p>The finding of suppression in configurations with multiple, closely spaced patches resembles crowding phenomena, i.e., general detrimental effects of nearby probe stimuli on the perception of test stimulus attributes. Crowding effects were first observed in letter identification, but appear in a large variety of tasks (see Levi, <xref ref-type="bibr" rid="B27">2008</xref>, for a comprehensive review). However, crowding effects mostly concern object feature identification and discrimination, but hardly object detection (Pelli et al., <xref ref-type="bibr" rid="B39">2004</xref>). In particular, crowding does <italic>not</italic> affect the apparent contrast of the test stimulus (Levi, <xref ref-type="bibr" rid="B27">2008</xref>). Further, crowding effects are quite feature specific (Herzog et al., <xref ref-type="bibr" rid="B15">2015</xref>). On the contrary, the inhibitory effects of close grating patch spacing reported here are effects on <italic>contrast detection</italic>, and were observed irrespective of orientation alignment and spatial frequency homogeneity. We therefore conclude that the inhibitory interactions for close spacings are better understood as lateral or surround masking effects (see Levi, <xref ref-type="bibr" rid="B27">2008</xref>, for further aspects of distinguishing crowding from masking phenomena). Masking effects of surround gratings on central grating patches were extensively studied by Xing and Heeger (<xref ref-type="bibr" rid="B57">2000</xref>, <xref ref-type="bibr" rid="B58">2001</xref>). Center (test) and surround (inducer) gratings were separated by a thin annulus, and the test contrast was matched to a reference grating of equal size, but without surrounding stimulus. The stimulus geometry in Xing and Heeger&#x00027;s experiments compares to ours in the short (1&#x000B0;) spacing condition, since there the patches were separated by just 0.41&#x000B0; of background luminance. 
Results showed that the test grating had lower perceived contrast than the reference grating, even when inducer contrasts were low. This result was practically independent of the spatial frequency of the gratings, but an orientation difference between center and surround diminished the inhibitory contextual influence. In a similar center-surround arrangement, Bruchmann et al. (<xref ref-type="bibr" rid="B4">2010</xref>) studied the temporal dynamics of the center-surround interaction, and consistently found evidence for inhibitory effects of the surround masker. Consistent with these results, we conclude that, despite some orientation selectivity, the net contextual influence is inhibitory in the near surround.</p>
<p>Orientation-specific interactions are well known from electrophysiological studies on contour integration and are already observed in the form of firing rate enhancements for configurations of only two aligned edges (Ito and Gilbert, <xref ref-type="bibr" rid="B20">1999</xref>). These effects increase in strength with increasing length of a contour embedded in a randomly oriented background (Li et al., <xref ref-type="bibr" rid="B29">2006</xref>). Optical imaging confirms these findings, and allows modulatory effects to be observed in a spatially extended manner (Gilad et al., <xref ref-type="bibr" rid="B12">2013</xref>). Note that while our study locates all interactions within a single cortical layer, in the real brain different types of contextual interactions are typically located in different visual areas; e.g., contour integration might be performed not in V1 but in V2 or V4 (Chen et al., <xref ref-type="bibr" rid="B6">2014</xref>). Moreover, psychophysical studies have shown that in contour integration, spatial frequency and orientation alignment information are combined to yield higher detection performance than expected from the additive combination of the single cues (Persike and Meinhardt, <xref ref-type="bibr" rid="B40">2015a</xref>). Furthermore, if spatial frequencies in contour and background become homogeneous, detection performance is enhanced (Persike et al., <xref ref-type="bibr" rid="B42">2009</xref>; Persike and Meinhardt, <xref ref-type="bibr" rid="B41">2015b</xref>), similar to the effects observed with SF-homogeneous plaids in the 2&#x000B0; condition.</p>
<p>A potential functional role of the Mexican hat profile in spatial frequency is an advantage in reaching unique shape descriptions from single spatial scales. Studies on the detectability of simple global shapes have shown a detection advantage for shapes formed by parameter-homogeneous Gabor patches, compared to heterogeneous Gabor elements (Saarinen et al., <xref ref-type="bibr" rid="B48">1997</xref>; Saarinen and Levi, <xref ref-type="bibr" rid="B47">2001</xref>). The advantage was found to be relatively independent of orientation alignment, and stressed the benefit of parameter homogeneity, in contrast to mixed configurations (Saarinen et al., <xref ref-type="bibr" rid="B48">1997</xref>). These findings correspond to our finding of enhanced plaid detectability for orientation and spatial frequency homogeneous grating patches. However, evidence for suppressive interaction among different spatial scales is closely bound to contextual interactions in 2D configurations. Studying the interaction of different spatial scales at the same retinal location has revealed a gradual decline of facilitation among grating patches when the spatial scale difference is increased, until independence is reached for far-apart local carrier frequencies of the grating stimuli (Watson, <xref ref-type="bibr" rid="B54">1982</xref>). While the bandwidth of the psychophysical tuning function is close to the bandwidth of the grating patches, there is no evidence for inhibitory interactions among spatial frequency channels at one retinal location (see Meinhardt, <xref ref-type="bibr" rid="B35">2001</xref>, p. 417). In their seminal psychophysical study on lateral grating patch interactions, Polat and Sagi (<xref ref-type="bibr" rid="B43">1993</xref>) also studied contextual interactions for spatial scale differences between test and flankers. 
Results showed that the biphasic contextual response profile was attenuated for increasing spatial scale differences, but there was no change in the form of the profile, indicating no suppressive interactions for larger spatial scale differences. However, Polat and Sagi (<xref ref-type="bibr" rid="B43">1993</xref>) tested 1D contextual configurations of co-aligned stimuli, but not 2D configurations with collinear and collateral stimulus arrangements, as done here. Electrophysiological studies aiming to measure a complete contextual interaction topography in 2D (Kapadia et al., <xref ref-type="bibr" rid="B22">2000</xref>) have unfortunately not yet explored whether the contextual response field changes qualitatively if there is a spatial scale difference between central test and peripheral probe stimulus. More data are needed to settle the constraints on the relationship between object descriptions on different spatial scales and their neural underpinnings.</p>
</sec>
<sec>
<title>4.2. Similarities to natural image statistics</title>
<p>The observation that the statistics of plaid configurations in natural images are similar for a wide range of inter-patch distances is not surprising: many studies have shown (albeit sometimes with respect to simpler features) that natural images have self-similar structures, i.e., that their statistics are invariant with respect to the particular observation scale (e.g., see Ruderman, <xref ref-type="bibr" rid="B46">1997</xref>). For large inter-patch distances (in our case 2&#x000B0;), the visual system seems to realize interactions that enhance feature combinations with higher likelihoods of occurring in &#x0201C;nature.&#x0201D; But why does this parallelism fail at 1&#x000B0;? Apart from the trivial explanations that there might be no reason at all, or that biophysical constraints prevent the brain from realizing the necessary neural couplings, there might be one functional explanation: instead of just enhancing representations proportionally to their likelihood, our brain might rather be interested in enhancing representations <italic>only</italic> when they appear in contexts normally leading to reduced saliency. For example, in contour integration a continuous curve is much easier to follow and to integrate than a broken curve consisting of isolated, collinearly oriented line segments separated by &#x0201C;open&#x0201D; space: the larger the separation of elements, the less salient the contours become (Mandon and Kreiter, <xref ref-type="bibr" rid="B32">2005</xref>). Enhanced processing at larger distances would help to bridge the gaps and allow the contour to be detected, in particular if these gaps were filled with distractor elements. This example bears a close resemblance to a plaid consisting of four iso-oriented, frequency-homogeneous gratings, where we find suppression when the elements are very close (1&#x000B0;), but enhancement when the elements are well separated (2&#x000B0;). 
Here one might speculate that the visual system suppresses &#x0201C;trivial&#x0201D; feature conjunctions while enhancing &#x0201C;surprising,&#x0201D; or more challenging, ones.</p>
</sec>
<sec>
<title>4.3. Model comparison and parameter discussion</title>
<p>The two variants of our network model are extremely simplified versions of more complex approaches (e.g., Li, <xref ref-type="bibr" rid="B30">1998</xref>; Ichida et al., <xref ref-type="bibr" rid="B19">2006</xref>; Hansen and Neumann, <xref ref-type="bibr" rid="B14">2008</xref>). This reductionist approach was taken for two reasons: first, to reveal computational mechanisms as succinctly as possible, and second, to reduce the number of free parameters as far as possible. Even so, model A is still susceptible to overfitting, as indicated by its match to the experiment being far better than expected from the confidence intervals of the experimental results. Model B is superior in the sense of avoiding overfitting: except for one configuration, all experimental results are still explained within their confidence intervals.</p>
<p>Short-range excitatory couplings, which would have been implemented as self-interactions between orientation columns, were not included in our model. Mathematically, <italic>including</italic> these interactions is equivalent to re-scaling the feedforward and recurrent input strengths <italic>without</italic> having such interactions. Consistent with previous results, our model also requires the interaction ranges to depend on the SF preference of the feature detectors involved. It turned out that this range has to scale less steeply than exactly anti-proportionally to SF (Polat and Sagi, <xref ref-type="bibr" rid="B43">1993</xref>). Furthermore, we also tested whether cross-SF interactions are required to be orientation-specific. Without this specificity, the match between experimental results and model predictions was far worse. A comparison of our interaction scheme to association fields obtained from studies on contour integration is, unfortunately, not possible: since our visual stimuli sample orientation (difference) space only extremely sparsely, it is difficult to predict interactions for orientation preferences that are neither parallel nor orthogonal to each other. In addition, we obtain the best fit of the model by <italic>not</italic> having different interaction strengths between parallel and aligned configurations with the same orientation. This feature is in contrast to interactions predicted from contour integration studies, where parallel configurations (&#x0201C;ladders&#x0201D;) are much harder to perceive than aligned configurations (&#x0201C;snakes&#x0201D;) (Bex et al., <xref ref-type="bibr" rid="B2">2001</xref>; May and Hess, <xref ref-type="bibr" rid="B33">2007</xref>; Vancleef and Wagemans, <xref ref-type="bibr" rid="B52">2013</xref>).</p>
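The sub-proportional range scaling described above can be illustrated with a power law: exact anti-proportionality corresponds to an exponent of 1, and a shallower dependence to an exponent below 1. The reference range, reference SF, and exponent in this sketch are hypothetical illustration values, not the parameters fitted in our model.

```python
def interaction_range(sf, r0=2.0, sf0=1.0, alpha=0.7):
    """Interaction range (deg) as a function of SF preference (cyc/deg).

    alpha = 1 would mean exact anti-proportionality (range ~ 1/sf);
    scaling 'less steeply than anti-proportionally' means alpha < 1.
    r0, sf0, and alpha are illustrative assumptions, not model fits.
    """
    return r0 * (sf0 / sf) ** alpha

# Doubling the preferred SF shrinks the range by a factor 2**alpha,
# which is less than the factor of 2 strict anti-proportionality implies.
shrink = interaction_range(1.0) / interaction_range(2.0)
assert 1.0 < shrink < 2.0
```

With alpha = 0.7, the shrink factor is about 1.62 instead of 2, so high-SF detectors still interact over somewhat longer distances (relative to their period) than a strict 1/SF scaling would allow.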
</sec>
<sec>
<title>4.4. Outlook</title>
<p>Can our approach make predictions for even more complex plaids? Our interpretation of the interactions as one functional principle (the Mexican hat) extending over multiple feature dimensions allows for some &#x0201C;educated guesses.&#x0201D; For example, if a 2&#x000B0; configuration is &#x0201C;filled&#x0201D; by adding five patches in the spaces between the original plaid elements, we expect inhibition to kick in and raise detection thresholds. This would be consistent with our functional explanation that a closely spaced 3 &#x000D7; 3 configuration of identical patches would not be surprising at all, but would be considered a homogeneous texture, possibly just the background of much more interesting image features. It would also be interesting to restrict the image analysis to &#x0201C;informative&#x0201D; patch configurations, as has been done, e.g., for oriented edge statistics by requiring human observers to label contours belonging to the same object (Geisler and Perry, <xref ref-type="bibr" rid="B10">2009</xref>).</p>
</sec>
</sec>
<sec id="s5">
<title>Author contributions</title>
<p>All authors contributed equally to the conceptualization of the study. GM set up the basic design, performed the data analysis, and contributed to the interpretation. MP conducted the experiments and data preparation. UE set up the models and image analysis, performed the simulations, and analyzed model and image analysis results. AS performed the initial simulations for model A. All authors were involved in writing, preparation of the manuscript, and final approval. All authors agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are investigated and resolved appropriately.</p>
</sec>
<sec id="s6">
<title>Funding</title>
<p>This work was supported by the BMBF (Bernstein Award UE, grant no. 01GQ1106), by the Volkswagen Foundation (SmartStart grant to AS), and by the DFG (Priority Program 1665, grant ER 324/3).</p>
<sec>
<title>Conflict of interest statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p></sec>
</sec>
</body>
<back>
<ack><p>We would like to thank Sophie Den&#x000E8;ve for fruitful discussions and useful comments on an earlier version of this manuscript.</p>
</ack>
<sec sec-type="supplementary-material" id="s7">
<title>Supplementary material</title>
<p>The Supplementary Material for this article can be found online at: <ext-link ext-link-type="uri" xlink:href="http://journal.frontiersin.org/article/10.3389/fnsys.2016.00078">http://journal.frontiersin.org/article/10.3389/fnsys.2016.00078</ext-link></p>
<supplementary-material xlink:href="DataSheet1.PDF" id="SM1" mimetype="application/pdf" xmlns:xlink="http://www.w3.org/1999/xlink"/>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Adini</surname> <given-names>Y.</given-names></name> <name><surname>Sagi</surname> <given-names>D.</given-names></name> <name><surname>Tsodyks</surname> <given-names>M.</given-names></name></person-group> (<year>1997</year>). <article-title>Excitatory-inhibitory network in the visual cortex: psychophysical evidence</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A.</source> <volume>94</volume>, <fpage>10426</fpage>&#x02013;<lpage>10431</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.94.19.10426</pub-id><pub-id pub-id-type="pmid">9294227</pub-id></citation>
</ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bex</surname> <given-names>P. J.</given-names></name> <name><surname>Simmers</surname> <given-names>A. J.</given-names></name> <name><surname>Dakin</surname> <given-names>S. C.</given-names></name></person-group> (<year>2001</year>). <article-title>Snakes and ladders: the role of temporal modulation in visual contour integration</article-title>. <source>Vis. Res.</source> <volume>41</volume>, <fpage>3775</fpage>&#x02013;<lpage>3782</lpage>. <pub-id pub-id-type="doi">10.1016/S0042-6989(01)00222-X</pub-id><pub-id pub-id-type="pmid">11712989</pub-id></citation>
</ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bosking</surname> <given-names>W. H.</given-names></name> <name><surname>Zhang</surname> <given-names>Y.</given-names></name> <name><surname>Schofield</surname> <given-names>B.</given-names></name> <name><surname>Fitzpatrick</surname> <given-names>D.</given-names></name></person-group> (<year>1997</year>). <article-title>Orientation selectivity and the arrangement of horizontal connections in tree shrew striate cortex</article-title>. <source>J. Neurosci.</source> <volume>17</volume>, <fpage>2112</fpage>&#x02013;<lpage>2127</lpage>. <pub-id pub-id-type="pmid">9045738</pub-id></citation>
</ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bruchmann</surname> <given-names>M.</given-names></name> <name><surname>Breitmeyer</surname> <given-names>B. G.</given-names></name> <name><surname>Pantev</surname> <given-names>C.</given-names></name></person-group> (<year>2010</year>). <article-title>Metacontrast masking within and between visual channels: effects of orientation and spatial frequency contrasts</article-title>. <source>J. Vis.</source> <volume>10</volume>, <fpage>1</fpage>&#x02013;<lpage>14</lpage>. <pub-id pub-id-type="doi">10.1167/10.6.12</pub-id><pub-id pub-id-type="pmid">20884561</pub-id></citation>
</ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Callaway</surname> <given-names>E. M.</given-names></name></person-group> (<year>2004</year>). <article-title>Feedforward, feedback and inhibitory connections in primate visual cortex</article-title>. <source>Neural Netw.</source> <volume>17</volume>, <fpage>625</fpage>&#x02013;<lpage>632</lpage>. <pub-id pub-id-type="doi">10.1016/j.neunet.2004.04.004</pub-id><pub-id pub-id-type="pmid">15288888</pub-id></citation>
</ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>M.</given-names></name> <name><surname>Yan</surname> <given-names>Y.</given-names></name> <name><surname>Gong</surname> <given-names>X.</given-names></name> <name><surname>Gilbert</surname> <given-names>C. D.</given-names></name> <name><surname>Liang</surname> <given-names>H.</given-names></name> <name><surname>Li</surname> <given-names>W.</given-names></name></person-group> (<year>2014</year>). <article-title>Incremental integration of global contours through interplay between visual cortical areas</article-title>. <source>Neuron</source> <volume>82</volume>, <fpage>682</fpage>&#x02013;<lpage>694</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2014.03.023</pub-id><pub-id pub-id-type="pmid">24811385</pub-id></citation>
</ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ehrenstein</surname> <given-names>W. H.</given-names></name> <name><surname>Spillmann</surname> <given-names>L.</given-names></name> <name><surname>Sarris</surname> <given-names>V.</given-names></name></person-group> (<year>2003</year>). <article-title>Gestalt issues in modern neuroscience</article-title>. <source>Axiomathes</source> <volume>13</volume>, <fpage>433</fpage>&#x02013;<lpage>458</lpage>. <pub-id pub-id-type="doi">10.1023/B:AXIO.0000007203.44686.aa</pub-id></citation>
</ref>
<ref id="B8">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ernst</surname> <given-names>U.</given-names></name></person-group> (<year>2013</year>). <article-title>Computational role of center-surround processing</article-title>, in <source>Encyclopedia of Computational Neuroscience</source>, eds <person-group person-group-type="editor"><name><surname>Jaeger</surname> <given-names>D.</given-names></name> <name><surname>Jung</surname> <given-names>R.</given-names></name></person-group> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>1</fpage>&#x02013;<lpage>12</lpage>.</citation>
</ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Field</surname> <given-names>D. J.</given-names></name> <name><surname>Hayes</surname> <given-names>A.</given-names></name> <name><surname>Hess</surname> <given-names>R. F.</given-names></name></person-group> (<year>1993</year>). <article-title>Contour integration by the human visual system: evidence for a local &#x0201C;association field.&#x0201D;</article-title> <source>Vis. Res.</source> <volume>33</volume>, <fpage>173</fpage>&#x02013;<lpage>193</lpage>. <pub-id pub-id-type="doi">10.1016/0042-6989(93)90156-Q</pub-id><pub-id pub-id-type="pmid">8447091</pub-id></citation>
</ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Geisler</surname> <given-names>W. S.</given-names></name> <name><surname>Perry</surname> <given-names>J. S.</given-names></name></person-group> (<year>2009</year>). <article-title>Contour statistics in natural images: grouping across occlusions</article-title>. <source>Vis. Neurosci.</source> <volume>26</volume>, <fpage>109</fpage>&#x02013;<lpage>121</lpage>. <pub-id pub-id-type="doi">10.1017/S0952523808080875</pub-id><pub-id pub-id-type="pmid">19216819</pub-id></citation>
</ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Geisler</surname> <given-names>W. S.</given-names></name> <name><surname>Perry</surname> <given-names>J. S.</given-names></name> <name><surname>Super</surname> <given-names>B. J.</given-names></name> <name><surname>Gallogly</surname> <given-names>D. P.</given-names></name></person-group> (<year>2001</year>). <article-title>Edge co-occurrence in natural images predicts contour grouping performance</article-title>. <source>Vis. Res.</source> <volume>41</volume>, <fpage>711</fpage>&#x02013;<lpage>724</lpage>. <pub-id pub-id-type="doi">10.1016/S0042-6989(00)00277-7</pub-id><pub-id pub-id-type="pmid">11248261</pub-id></citation>
</ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gilad</surname> <given-names>A.</given-names></name> <name><surname>Meirovithz</surname> <given-names>E.</given-names></name> <name><surname>Slovin</surname> <given-names>H.</given-names></name></person-group> (<year>2013</year>). <article-title>Population responses to contour integration: early encoding of discrete elements and late perceptual grouping</article-title>. <source>Neuron</source> <volume>78</volume>, <fpage>389</fpage>&#x02013;<lpage>402</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2013.02.013</pub-id><pub-id pub-id-type="pmid">23622069</pub-id></citation>
</ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Haider</surname> <given-names>B.</given-names></name> <name><surname>Krause</surname> <given-names>M. R.</given-names></name> <name><surname>Duque</surname> <given-names>A.</given-names></name> <name><surname>Yu</surname> <given-names>Y.</given-names></name> <name><surname>Touryan</surname> <given-names>J.</given-names></name> <name><surname>Mazer</surname> <given-names>J. A.</given-names></name> <etal/></person-group>. (<year>2010</year>). <article-title>Synaptic and network mechanisms of sparse and reliable visual cortical activity during nonclassical receptive field stimulation</article-title>. <source>Neuron</source> <volume>65</volume>, <fpage>107</fpage>&#x02013;<lpage>121</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2009.12.005</pub-id><pub-id pub-id-type="pmid">20152117</pub-id></citation>
</ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hansen</surname> <given-names>T.</given-names></name> <name><surname>Neumann</surname> <given-names>H.</given-names></name></person-group> (<year>2008</year>). <article-title>A recurrent model of contour integration in primary visual cortex</article-title>. <source>J. Vis.</source> <volume>8</volume>, <fpage>1</fpage>&#x02013;<lpage>25</lpage>. <pub-id pub-id-type="doi">10.1167/8.8.8</pub-id><pub-id pub-id-type="pmid">18831631</pub-id></citation>
</ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Herzog</surname> <given-names>M. H.</given-names></name> <name><surname>Sayim</surname> <given-names>B.</given-names></name> <name><surname>Chicherov</surname> <given-names>V.</given-names></name> <name><surname>Manassi</surname> <given-names>M.</given-names></name></person-group> (<year>2015</year>). <article-title>Crowding, grouping, and object recognition: a matter of appearance</article-title>. <source>J. Vis.</source> <volume>15</volume>, <fpage>1</fpage>&#x02013;<lpage>18</lpage>. <pub-id pub-id-type="doi">10.1167/15.6.5</pub-id><pub-id pub-id-type="pmid">26024452</pub-id></citation>
</ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hubel</surname> <given-names>D. H.</given-names></name> <name><surname>Wiesel</surname> <given-names>T. N.</given-names></name></person-group> (<year>1962</year>). <article-title>Receptive fields, binocular interaction and functional architecture in the cat&#x00027;s visual cortex</article-title>. <source>J. Physiol.</source> <volume>160</volume>, <fpage>106</fpage>&#x02013;<lpage>154</lpage>. <pub-id pub-id-type="doi">10.1113/jphysiol.1962.sp006837</pub-id><pub-id pub-id-type="pmid">14449617</pub-id></citation>
</ref>
<ref id="B17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>H&#x000FC;bner</surname> <given-names>R.</given-names></name></person-group> (<year>1996</year>). <article-title>Specific effects of spatial-frequency uncertainty and different cue types on contrast detection: data and models</article-title>. <source>Vis. Res.</source> <volume>36</volume>, <fpage>3429</fpage>&#x02013;<lpage>3439</lpage>. <pub-id pub-id-type="doi">10.1016/0042-6989(95)00112-3</pub-id><pub-id pub-id-type="pmid">8977010</pub-id></citation>
</ref>
<ref id="B18">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Hup&#x000E9;</surname> <given-names>J. M.</given-names></name> <name><surname>James</surname> <given-names>A. C.</given-names></name> <name><surname>Girard</surname> <given-names>P.</given-names></name> <name><surname>Lomber</surname> <given-names>S. G.</given-names></name> <name><surname>Payne</surname> <given-names>B. R.</given-names></name> <name><surname>Bullier</surname> <given-names>J.</given-names></name></person-group> (<year>2001</year>). <article-title>Feedback connections act on the early part of the responses in monkey visual cortex</article-title>. <source>J. Neurophysiol.</source> <volume>85</volume>, <fpage>134</fpage>&#x02013;<lpage>145</lpage>. Available online at: <ext-link ext-link-type="uri" xlink:href="http://jn.physiology.org/content/85/1/134">http://jn.physiology.org/content/85/1/134</ext-link></citation>
</ref>
<ref id="B19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ichida</surname> <given-names>J.</given-names></name> <name><surname>Schwabe</surname> <given-names>L.</given-names></name> <name><surname>Bressloff</surname> <given-names>P.</given-names></name> <name><surname>Angelucci</surname> <given-names>A.</given-names></name></person-group> (<year>2006</year>). <article-title>The role of feedback in shaping the extra-classical receptive field of cortical neurons: a recurrent network model</article-title>. <source>J. Neurosci.</source> <volume>26</volume>, <fpage>9117</fpage>&#x02013;<lpage>9129</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.1253-06.2006</pub-id><pub-id pub-id-type="pmid">16957068</pub-id></citation>
</ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ito</surname> <given-names>M.</given-names></name> <name><surname>Gilbert</surname> <given-names>C.</given-names></name></person-group> (<year>1999</year>). <article-title>Attention modulates contextual influences in the primary visual cortex of alert monkeys</article-title>. <source>Neuron</source> <volume>22</volume>, <fpage>593</fpage>&#x02013;<lpage>604</lpage>. <pub-id pub-id-type="doi">10.1016/S0896-6273(00)80713-8</pub-id><pub-id pub-id-type="pmid">10197538</pub-id></citation>
</ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Johnson</surname> <given-names>R. R.</given-names></name> <name><surname>Burkhalter</surname> <given-names>A.</given-names></name></person-group> (<year>1996</year>). <article-title>Microcircuitry of forward and feedback connections within rat visual cortex</article-title>. <source>J. Comp. Neurol.</source> <volume>368</volume>, <fpage>383</fpage>&#x02013;<lpage>398</lpage>. <pub-id pub-id-type="pmid">8725346</pub-id></citation>
</ref>
<ref id="B22">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Kapadia</surname> <given-names>M. K.</given-names></name> <name><surname>Westheimer</surname> <given-names>G.</given-names></name> <name><surname>Gilbert</surname> <given-names>C. D.</given-names></name></person-group> (<year>2000</year>). <article-title>Spatial distribution of contextual interactions in primary visual cortex and in visual perception</article-title>. <source>J. Neurophysiol.</source> <volume>84</volume>, <fpage>2048</fpage>&#x02013;<lpage>2064</lpage>. Available online at: <ext-link ext-link-type="uri" xlink:href="http://jn.physiology.org/content/84/4/2048">http://jn.physiology.org/content/84/4/2048</ext-link></citation>
</ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kovacs</surname> <given-names>I.</given-names></name></person-group> (<year>1996</year>). <article-title>Gestalten of today: early processing of visual contours and surfaces</article-title>. <source>Behav. Brain Res.</source> <volume>82</volume>, <fpage>1</fpage>&#x02013;<lpage>11</lpage>. <pub-id pub-id-type="doi">10.1016/S0166-4328(97)81103-5</pub-id><pub-id pub-id-type="pmid">9021065</pub-id></citation>
</ref>
<ref id="B24">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Kretzberg</surname> <given-names>J.</given-names></name> <name><surname>Ernst</surname> <given-names>U. A.</given-names></name></person-group> (<year>2013</year>). <source>Neurosciences - From Molecule to Behavior: A University Textbook</source>. <publisher-loc>Berlin; Heidelberg</publisher-loc>: <publisher-name>Springer-Verlag</publisher-name>, <fpage>363</fpage>&#x02013;<lpage>407</lpage>.</citation>
</ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lamme</surname> <given-names>V. A.</given-names></name> <name><surname>Roelfsema</surname> <given-names>P. R.</given-names></name></person-group> (<year>2000</year>). <article-title>The distinct modes of vision offered by feedforward and recurrent processing</article-title>. <source>Trends Neurosci.</source> <volume>23</volume>, <fpage>571</fpage>&#x02013;<lpage>579</lpage>. <pub-id pub-id-type="doi">10.1016/S0166-2236(00)01657-X</pub-id><pub-id pub-id-type="pmid">11074267</pub-id></citation>
</ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lamme</surname> <given-names>V. A.</given-names></name> <name><surname>Super</surname> <given-names>H.</given-names></name> <name><surname>Spekreijse</surname> <given-names>H.</given-names></name></person-group> (<year>1998</year>). <article-title>Feedforward, horizontal, and feedback processing in the visual cortex</article-title>. <source>Curr. Opin. Neurobiol.</source> <volume>8</volume>, <fpage>529</fpage>&#x02013;<lpage>535</lpage>. <pub-id pub-id-type="doi">10.1016/S0959-4388(98)80042-1</pub-id><pub-id pub-id-type="pmid">9751656</pub-id></citation>
</ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Levi</surname> <given-names>D. M.</given-names></name></person-group> (<year>2008</year>). <article-title>Crowding - an essential bottleneck for object recognition: a mini-review</article-title>. <source>Vis. Res.</source> <volume>48</volume>, <fpage>635</fpage>&#x02013;<lpage>654</lpage>. <pub-id pub-id-type="doi">10.1016/j.visres.2007.12.009</pub-id><pub-id pub-id-type="pmid">18226828</pub-id></citation>
</ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Levitt</surname> <given-names>J.</given-names></name> <name><surname>Lund</surname> <given-names>J.</given-names></name></person-group> (<year>1997</year>). <article-title>Contrast dependence of contextual effects in primate visual cortex</article-title>. <source>Nature</source> <volume>387</volume>, <fpage>73</fpage>&#x02013;<lpage>76</lpage>. <pub-id pub-id-type="doi">10.1038/387073a0</pub-id><pub-id pub-id-type="pmid">9139823</pub-id></citation>
</ref>
<ref id="B29">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>W.</given-names></name> <name><surname>Pi&#x000EB;ch</surname> <given-names>V.</given-names></name> <name><surname>Gilbert</surname> <given-names>C. D.</given-names></name></person-group> (<year>2006</year>). <article-title>Contour saliency in primary visual cortex</article-title>. <source>Neuron</source> <volume>50</volume>, <fpage>951</fpage>&#x02013;<lpage>962</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2006.04.035</pub-id><pub-id pub-id-type="pmid">16772175</pub-id></citation>
</ref>
<ref id="B30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>Z.</given-names></name></person-group> (<year>1998</year>). <article-title>A neural model of contour integration in the primary visual cortex</article-title>. <source>Neural Comput.</source> <volume>10</volume>, <fpage>903</fpage>&#x02013;<lpage>940</lpage>. <pub-id pub-id-type="doi">10.1162/089976698300017557</pub-id><pub-id pub-id-type="pmid">9573412</pub-id></citation>
</ref>
<ref id="B31">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lochmann</surname> <given-names>T.</given-names></name> <name><surname>Ernst</surname> <given-names>U.</given-names></name> <name><surname>Deneve</surname> <given-names>S.</given-names></name></person-group> (<year>2012</year>). <article-title>Perceptual inference predicts contextual modulations of sensory responses</article-title>. <source>J. Neurosci.</source> <volume>32</volume>, <fpage>4179</fpage>&#x02013;<lpage>4195</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.0817-11.2012</pub-id><pub-id pub-id-type="pmid">22442081</pub-id></citation>
</ref>
<ref id="B32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mandon</surname> <given-names>S.</given-names></name> <name><surname>Kreiter</surname> <given-names>A. K.</given-names></name></person-group> (<year>2005</year>). <article-title>Rapid contour integration in macaque monkeys</article-title>. <source>Vis. Res.</source> <volume>45</volume>, <fpage>291</fpage>&#x02013;<lpage>300</lpage>. <pub-id pub-id-type="doi">10.1016/j.visres.2004.08.010</pub-id><pub-id pub-id-type="pmid">15607346</pub-id></citation>
</ref>
<ref id="B33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>May</surname> <given-names>K. A.</given-names></name> <name><surname>Hess</surname> <given-names>R. F.</given-names></name></person-group> (<year>2007</year>). <article-title>Dynamics of snakes and ladders</article-title>. <source>J. Vis.</source> <volume>7</volume>, <fpage>13.1</fpage>&#x02013;<lpage>19</lpage>. <pub-id pub-id-type="doi">10.1167/7.12.13</pub-id><pub-id pub-id-type="pmid">17997655</pub-id></citation>
</ref>
<ref id="B34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Meinhardt</surname> <given-names>G.</given-names></name></person-group> (<year>1999</year>). <article-title>Evidence for different nonlinear summation schemes for lines and gratings at threshold</article-title>. <source>Biol. Cybernet.</source> <volume>81</volume>, <fpage>263</fpage>&#x02013;<lpage>277</lpage>. <pub-id pub-id-type="doi">10.1007/s004220050561</pub-id><pub-id pub-id-type="pmid">10473850</pub-id></citation>
</ref>
<ref id="B35">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Meinhardt</surname> <given-names>G.</given-names></name></person-group> (<year>2001</year>). <article-title>Detection of sinusoidal gratings by pattern-specific detectors: further evidence for the correlation principle in human vision</article-title>. <source>Biol. Cybernet.</source> <volume>85</volume>, <fpage>401</fpage>&#x02013;<lpage>422</lpage>. <pub-id pub-id-type="doi">10.1007/s004220000236</pub-id><pub-id pub-id-type="pmid">11762232</pub-id></citation>
</ref>
<ref id="B36">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Metzger</surname> <given-names>W.</given-names></name></person-group> (<year>2006</year>). <source>Laws of Seeing</source>. <publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>MIT Press</publisher-name>.</citation>
</ref>
<ref id="B37">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mizobe</surname> <given-names>K.</given-names></name> <name><surname>Polat</surname> <given-names>U.</given-names></name> <name><surname>Pettet</surname> <given-names>M. W.</given-names></name> <name><surname>Kasamatsu</surname> <given-names>T.</given-names></name></person-group> (<year>2001</year>). <article-title>Facilitation and suppression of single striate-cell activity by spatially discrete pattern stimuli presented beyond the receptive field</article-title>. <source>Vis. Neurosci.</source> <volume>18</volume>, <fpage>377</fpage>&#x02013;<lpage>391</lpage>. <pub-id pub-id-type="doi">10.1017/S0952523801183045</pub-id><pub-id pub-id-type="pmid">11497414</pub-id></citation>
</ref>
<ref id="B38">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Olmos</surname> <given-names>A.</given-names></name> <name><surname>Kingdom</surname> <given-names>F.</given-names></name></person-group> (<year>2004</year>). <article-title>A biologically inspired algorithm for the recovery of shading and reflectance images</article-title>. <source>Perception</source> <volume>33</volume>, <fpage>1463</fpage>&#x02013;<lpage>1473</lpage>. <pub-id pub-id-type="doi">10.1068/p5321</pub-id><pub-id pub-id-type="pmid">15729913</pub-id></citation>
</ref>
<ref id="B39">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pelli</surname> <given-names>D. G.</given-names></name> <name><surname>Palomares</surname> <given-names>M.</given-names></name> <name><surname>Majaj</surname> <given-names>N. J.</given-names></name></person-group> (<year>2004</year>). <article-title>Crowding is unlike ordinary masking: distinguishing feature integration from detection</article-title>. <source>J. Vis.</source> <volume>4</volume>, <fpage>1136</fpage>&#x02013;<lpage>1169</lpage>. <pub-id pub-id-type="doi">10.1167/4.12.12</pub-id><pub-id pub-id-type="pmid">15669917</pub-id></citation>
</ref>
<ref id="B40">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Persike</surname> <given-names>M.</given-names></name> <name><surname>Meinhardt</surname> <given-names>G.</given-names></name></person-group> (<year>2015a</year>). <article-title>Cue combination anisotropies in contour integration: the role of lower spatial frequencies</article-title>. <source>J. Vis.</source> <volume>15</volume>, <fpage>1</fpage>&#x02013;<lpage>16</lpage>. <pub-id pub-id-type="doi">10.1167/15.5.17</pub-id><pub-id pub-id-type="pmid">26067535</pub-id></citation>
</ref>
<ref id="B41">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Persike</surname> <given-names>M.</given-names></name> <name><surname>Meinhardt</surname> <given-names>G.</given-names></name></person-group> (<year>2015b</year>). <article-title>Effects of spatial frequency similarity and dissimilarity on contour integration</article-title>. <source>PLoS ONE</source> <volume>10</volume>:<fpage>e0126449</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0126449</pub-id><pub-id pub-id-type="pmid">26057620</pub-id></citation>
</ref>
<ref id="B42">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Persike</surname> <given-names>M.</given-names></name> <name><surname>Olzak</surname> <given-names>L.</given-names></name> <name><surname>Meinhardt</surname> <given-names>G.</given-names></name></person-group> (<year>2009</year>). <article-title>Contour integration across spatial frequency</article-title>. <source>J. Exp. Psychol. Hum. Percept. Perform.</source> <volume>35</volume>, <fpage>1629</fpage>&#x02013;<lpage>1648</lpage>. <pub-id pub-id-type="doi">10.1037/a0016473</pub-id><pub-id pub-id-type="pmid">19968425</pub-id></citation>
</ref>
<ref id="B43">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Polat</surname> <given-names>U.</given-names></name> <name><surname>Sagi</surname> <given-names>D.</given-names></name></person-group> (<year>1993</year>). <article-title>Lateral interactions between spatial channels: suppression and facilitation revealed by lateral masking experiments</article-title>. <source>Vis. Res.</source> <volume>33</volume>, <fpage>993</fpage>&#x02013;<lpage>999</lpage>. <pub-id pub-id-type="doi">10.1016/0042-6989(93)90081-7</pub-id><pub-id pub-id-type="pmid">8506641</pub-id></citation>
</ref>
<ref id="B44">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Polat</surname> <given-names>U.</given-names></name> <name><surname>Sagi</surname> <given-names>D.</given-names></name></person-group> (<year>1994</year>). <article-title>The architecture of perceptual spatial interactions</article-title>. <source>Vis. Res.</source> <volume>34</volume>, <fpage>73</fpage>&#x02013;<lpage>78</lpage>. <pub-id pub-id-type="doi">10.1016/0042-6989(94)90258-5</pub-id><pub-id pub-id-type="pmid">8116270</pub-id></citation>
</ref>
<ref id="B45">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Roelfsema</surname> <given-names>P. R.</given-names></name> <name><surname>Lamme</surname> <given-names>V. A.</given-names></name> <name><surname>Spekreijse</surname> <given-names>H.</given-names></name> <name><surname>Bosch</surname> <given-names>H.</given-names></name></person-group> (<year>2002</year>). <article-title>Figure-ground segregation in a recurrent network architecture</article-title>. <source>J. Cogn. Neurosci.</source> <volume>14</volume>, <fpage>525</fpage>&#x02013;<lpage>537</lpage>. <pub-id pub-id-type="doi">10.1162/08989290260045756</pub-id><pub-id pub-id-type="pmid">12126495</pub-id></citation>
</ref>
<ref id="B46">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ruderman</surname> <given-names>D.</given-names></name></person-group> (<year>1997</year>). <article-title>Origins of scaling in natural images</article-title>. <source>Vis. Res.</source> <volume>37</volume>, <fpage>3385</fpage>&#x02013;<lpage>3398</lpage>. <pub-id pub-id-type="doi">10.1016/S0042-6989(97)00008-4</pub-id><pub-id pub-id-type="pmid">9425551</pub-id></citation>
</ref>
<ref id="B47">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Saarinen</surname> <given-names>J.</given-names></name> <name><surname>Levi</surname> <given-names>D. M.</given-names></name></person-group> (<year>2001</year>). <article-title>Integration of local features into a global shape</article-title>. <source>Vis. Res.</source> <volume>41</volume>, <fpage>1785</fpage>&#x02013;<lpage>1790</lpage>. <pub-id pub-id-type="doi">10.1016/S0042-6989(01)00058-X</pub-id><pub-id pub-id-type="pmid">11369042</pub-id></citation>
</ref>
<ref id="B48">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Saarinen</surname> <given-names>J.</given-names></name> <name><surname>Levi</surname> <given-names>D. M.</given-names></name> <name><surname>Shen</surname> <given-names>B.</given-names></name></person-group> (<year>1997</year>). <article-title>Integration of local pattern elements into a global shape in human vision</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A.</source> <volume>94</volume>, <fpage>8267</fpage>&#x02013;<lpage>8271</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.94.15.8267</pub-id><pub-id pub-id-type="pmid">9223350</pub-id></citation>
</ref>
<ref id="B49">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shmuel</surname> <given-names>A.</given-names></name> <name><surname>Grinvald</surname> <given-names>A.</given-names></name></person-group> (<year>1996</year>). <article-title>Functional organization for direction of motion and its relationship to orientation maps in cat area 18</article-title>. <source>J. Neurosci.</source> <volume>16</volume>, <fpage>6945</fpage>&#x02013;<lpage>6964</lpage>. <pub-id pub-id-type="pmid">8824332</pub-id></citation>
</ref>
<ref id="B50">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sillito</surname> <given-names>A.</given-names></name> <name><surname>Grieve</surname> <given-names>K.</given-names></name> <name><surname>Jones</surname> <given-names>H.</given-names></name> <name><surname>Cudeiro</surname> <given-names>J.</given-names></name> <name><surname>Davis</surname> <given-names>J.</given-names></name></person-group> (<year>1995</year>). <article-title>Visual cortical mechanisms detecting focal orientation discontinuities</article-title>. <source>Nature</source> <volume>378</volume>, <fpage>492</fpage>&#x02013;<lpage>496</lpage>. <pub-id pub-id-type="doi">10.1038/378492a0</pub-id><pub-id pub-id-type="pmid">7477405</pub-id></citation>
</ref>
<ref id="B51">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>van der Schaaf</surname> <given-names>A.</given-names></name> <name><surname>van Hateren</surname> <given-names>J. H.</given-names></name></person-group> (<year>1996</year>). <article-title>Modelling the power spectra of natural images: statistics and information</article-title>. <source>Vis. Res.</source> <volume>36</volume>, <fpage>2759</fpage>&#x02013;<lpage>2770</lpage>. <pub-id pub-id-type="doi">10.1016/0042-6989(96)00002-8</pub-id><pub-id pub-id-type="pmid">8917763</pub-id></citation>
</ref>
<ref id="B52">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vancleef</surname> <given-names>K.</given-names></name> <name><surname>Wagemans</surname> <given-names>J.</given-names></name></person-group> (<year>2013</year>). <article-title>Component processes in contour integration: a direct comparison between snakes and ladders in a detection and a shape discrimination task</article-title>. <source>Vis. Res.</source> <volume>92</volume>, <fpage>39</fpage>&#x02013;<lpage>46</lpage>. <pub-id pub-id-type="doi">10.1016/j.visres.2013.09.003</pub-id><pub-id pub-id-type="pmid">24051198</pub-id></citation>
</ref>
<ref id="B53">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Walker</surname> <given-names>G. A.</given-names></name> <name><surname>Ohzawa</surname> <given-names>I.</given-names></name> <name><surname>Freeman</surname> <given-names>R. D.</given-names></name></person-group> (<year>1999</year>). <article-title>Asymmetric suppression outside the classical receptive field of the visual cortex</article-title>. <source>J. Neurosci.</source> <volume>19</volume>, <fpage>10536</fpage>&#x02013;<lpage>10553</lpage>. <pub-id pub-id-type="pmid">10575050</pub-id></citation>
</ref>
<ref id="B54">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Watson</surname> <given-names>A. B.</given-names></name></person-group> (<year>1982</year>). <article-title>Summation of grating patches indicates many types of detector at one retinal location</article-title>. <source>Vis. Res.</source> <volume>22</volume>, <fpage>17</fpage>&#x02013;<lpage>25</lpage>. <pub-id pub-id-type="doi">10.1016/0042-6989(82)90162-6</pub-id><pub-id pub-id-type="pmid">7101741</pub-id></citation>
</ref>
<ref id="B55">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Whitney</surname> <given-names>D.</given-names></name> <name><surname>Levi</surname> <given-names>D. M.</given-names></name></person-group> (<year>2011</year>). <article-title>Visual crowding: a fundamental limit on conscious perception and object recognition</article-title>. <source>Trends Cogn. Sci.</source> <volume>15</volume>, <fpage>160</fpage>&#x02013;<lpage>168</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2011.02.005</pub-id><pub-id pub-id-type="pmid">21420894</pub-id></citation>
</ref>
<ref id="B56">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wilson</surname> <given-names>H. R.</given-names></name> <name><surname>Cowan</surname> <given-names>J. D.</given-names></name></person-group> (<year>1972</year>). <article-title>Excitatory and inhibitory interactions in localized populations of model neurons</article-title>. <source>Biophys. J.</source> <volume>12</volume>, <fpage>1</fpage>&#x02013;<lpage>24</lpage>. <pub-id pub-id-type="doi">10.1016/S0006-3495(72)86068-5</pub-id><pub-id pub-id-type="pmid">4332108</pub-id></citation>
</ref>
<ref id="B57">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xing</surname> <given-names>J.</given-names></name> <name><surname>Heeger</surname> <given-names>D. J.</given-names></name></person-group> (<year>2000</year>). <article-title>Center-surround interactions in foveal and peripheral vision</article-title>. <source>Vis. Res.</source> <volume>40</volume>, <fpage>3065</fpage>&#x02013;<lpage>3072</lpage>. <pub-id pub-id-type="doi">10.1016/S0042-6989(00)00152-8</pub-id><pub-id pub-id-type="pmid">10996610</pub-id></citation>
</ref>
<ref id="B58">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xing</surname> <given-names>J.</given-names></name> <name><surname>Heeger</surname> <given-names>D. J.</given-names></name></person-group> (<year>2001</year>). <article-title>Measurement and modeling of center-surround suppression and enhancement</article-title>. <source>Vis. Res.</source> <volume>41</volume>, <fpage>571</fpage>&#x02013;<lpage>583</lpage>. <pub-id pub-id-type="doi">10.1016/S0042-6989(00)00270-4</pub-id><pub-id pub-id-type="pmid">11226503</pub-id></citation>
</ref>
<ref id="B59">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yager</surname> <given-names>D.</given-names></name> <name><surname>Kramer</surname> <given-names>P.</given-names></name> <name><surname>Shaw</surname> <given-names>M.</given-names></name> <name><surname>Graham</surname> <given-names>N.</given-names></name></person-group> (<year>1984</year>). <article-title>Detection and identification of spatial frequency: models and data</article-title>. <source>Vis. Res.</source> <volume>24</volume>, <fpage>1021</fpage>&#x02013;<lpage>1035</lpage>. <pub-id pub-id-type="doi">10.1016/0042-6989(84)90079-8</pub-id><pub-id pub-id-type="pmid">6506466</pub-id></citation>
</ref>
<ref id="B60">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhu</surname> <given-names>M.</given-names></name> <name><surname>Rozell</surname> <given-names>C. J.</given-names></name></person-group> (<year>2013</year>). <article-title>Visual nonclassical receptive field effects emerge from sparse coding in a dynamical system</article-title>. <source>PLoS Comput. Biol.</source> <volume>9</volume>:<fpage>e1003191</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pcbi.1003191</pub-id><pub-id pub-id-type="pmid">24009491</pub-id></citation>
</ref>
</ref-list>
</back>
</article>
