<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Syst. Neurosci.</journal-id>
<journal-title>Frontiers in Systems Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Syst. Neurosci.</abbrev-journal-title>
<issn pub-type="epub">1662-5137</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fnsys.2014.00183</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research Article</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Neural representation of calling songs and their behavioral relevance in the grasshopper auditory system</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Meckenh&#x000E4;user</surname> <given-names>Gundula</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="author-notes" rid="fn003"><sup>&#x02020;</sup></xref>
<uri xlink:href="http://community.frontiersin.org/people/u/39082"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Kr&#x000E4;mer</surname> <given-names>Stefanie</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="author-notes" rid="fn003"><sup>&#x02020;</sup></xref>
<uri xlink:href="http://community.frontiersin.org/people/u/184744"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Farkhooi</surname> <given-names>Farzad</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<uri xlink:href="http://community.frontiersin.org/people/u/9029"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Ronacher</surname> <given-names>Bernhard</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://community.frontiersin.org/people/u/17426"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Nawrot</surname> <given-names>Martin P.</given-names></name>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<xref ref-type="author-notes" rid="fn001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://community.frontiersin.org/people/u/8880"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Neuroinformatics and Theoretical Neuroscience, Department of Biology, Chemistry and Pharmacy, Institute of Biology, Freie Universit&#x000E4;t Berlin</institution> <country>Berlin, Germany</country></aff>
<aff id="aff2"><sup>2</sup><institution>Behavioural Physiology Group, Department of Biology, Humboldt-Universit&#x000E4;t zu Berlin</institution> <country>Berlin, Germany</country></aff>
<aff id="aff3"><sup>3</sup><institution>Bernstein Center for Computational Neuroscience</institution> <country>Berlin, Germany</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Detlef H. Heck, University of Tennessee Health Science Center, USA</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Thomas Nowotny, University of Sussex, UK; Ewa Kublik, Polish Academy of Sciences, Poland</p></fn>
<fn fn-type="corresp" id="fn001"><p>&#x0002A;Correspondence: Martin P. Nawrot, Bernstein Center for Computational Neuroscience, Philippstra&#x000DF;e 13, Haus 6, 10119 Berlin, Germany e-mail: <email>martin.nawrot&#x00040;fu-berlin.de</email></p></fn>
<fn fn-type="other" id="fn002"><p>This article was submitted to the journal Frontiers in Systems Neuroscience.</p></fn>
<fn fn-type="present-address" id="fn003"><p>&#x02020;These authors have contributed equally to this work.</p></fn>
</author-notes>
<pub-date pub-type="epub">
<day>19</day>
<month>12</month>
<year>2014</year>
</pub-date>
<pub-date pub-type="collection">
<year>2014</year>
</pub-date>
<volume>8</volume>
<elocation-id>183</elocation-id>
<history>
<date date-type="received">
<day>14</day>
<month>03</month>
<year>2014</year>
</date>
<date date-type="accepted">
<day>09</day>
<month>09</month>
<year>2014</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2014 Meckenh&#x000E4;user, Kr&#x000E4;mer, Farkhooi, Ronacher and Nawrot.</copyright-statement>
<copyright-year>2014</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract><p>Acoustic communication plays a key role in mate attraction in grasshoppers. Males use songs to advertise themselves to females. Females evaluate the song pattern, a repetitive structure of sound syllables separated by short pauses, to recognize a conspecific male and as a proxy for his fitness. In their natural habitat females often receive songs with degraded temporal structure. Perturbations may, for example, result from the overlap with other songs. We studied the response behavior of females to songs that show different signal degradations. A perturbation of an otherwise attractive song at later positions in the syllable diminished the behavioral response, whereas the same perturbation at the onset of a syllable did not affect song attractiveness. We applied na&#x000EF;ve Bayes classifiers to the spike trains of identified neurons in the auditory pathway to explore how sensory evidence about the acoustic stimulus and its attractiveness is represented in the neuronal responses. We find that populations of three or more neurons were sufficient to reliably decode the acoustic stimulus and to predict its behavioral relevance from the single-trial integrated firing rate. A simple model of decision making simulates the female response behavior. It computes for each syllable the likelihood for the presence of an attractive song pattern as evidenced by the population firing rate. Integration across syllables allows the likelihood to reach a decision threshold and to elicit the behavioral response. The close match between model performance and animal behavior shows that a spike rate code is sufficient to enable song pattern recognition.</p></abstract>
<kwd-group>
<kwd>acoustic communication</kwd>
<kwd>decision making</kwd>
<kwd>na&#x000EF;ve Bayes classifier</kwd>
<kwd>neural information processing</kwd>
<kwd>pattern recognition</kwd>
<kwd>population coding</kwd>
</kwd-group>
<counts>
<fig-count count="7"/>
<table-count count="0"/>
<equation-count count="10"/>
<ref-count count="49"/>
<page-count count="12"/>
<word-count count="9560"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="introduction" id="s1">
<title>Introduction</title>
<p>Acoustic communication of grasshoppers has become a prominent model system to investigate principles of neuronal processing of acoustic stimuli. It provides the opportunity to study perceptual decision making in a comparatively simple nervous system. Grasshoppers produce acoustic signals, termed &#x0201C;songs,&#x0201D; to attract a mating partner. Natural songs consist of a repetition of stereotyped subunits with species-specific amplitude modulations of a broad carrier frequency band that are produced by moving the hind legs against the forewings (Von Helversen and von Helversen, <xref ref-type="bibr" rid="B44">1997</xref>). Due to characteristic differences between grasshopper species the songs constitute an important barrier against hybridization. Both the song production and the song recognition are innate behaviors, and therefore we can be confident that the corresponding neuronal circuits are &#x0201C;hard-wired.&#x0201D; In behavioral tests one can use artificial song models that mimic and vary certain song features, and thereby explore which cues are crucial for song recognition (Von Helversen, <xref ref-type="bibr" rid="B43">1972</xref>; Von Helversen and von Helversen, <xref ref-type="bibr" rid="B44">1997</xref>, <xref ref-type="bibr" rid="B45">1998</xref>). These experiments demonstrated that the decisive cues for song recognition reside in the temporal pattern of amplitude modulations, i.e., in a song&#x00027;s envelope. In the grasshopper <italic>Chorthippus biguttulus</italic>, the subject of this investigation, a very simple but highly attractive song model consists of a series of sound &#x0201C;syllables&#x0201D; separated by pauses (see Figure <xref ref-type="fig" rid="F1">1A</xref>). Using song models we can reduce the signal&#x00027;s complexity and compare the behavioral responses directly with the processing capacities of neurons at different stages of the auditory pathway.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p><bold>Perturbation of the standard song affects attractiveness when placed at later syllable positions. (A)</bold> Envelopes of song models used for behavioral and neurophysiological tests. An attractive standard song consisted of 72 ms syllables and 12 ms pauses. The other stimuli had the same syllable and pause durations but exhibited perturbations at different positions within a syllable (onset, middle, end). <bold>(B)</bold> The median response rate of 33 <italic>C. biguttulus</italic> females to the stimulus with onset perturbation was 83%, very similar to the response to the standard stimulus. In contrast, stimuli with a perturbation in the middle or at the end were mostly rejected (median response rate 6%). The median is displayed as the central mark in the box plot; the edges of the box are the 25th and 75th percentiles. <bold>(C)</bold> Note the high variance in female responses, especially when the perturbation is at the syllable onset.</p></caption>
<graphic xlink:href="fnsys-08-00183-g0001.tif"/>
</fig>
<p>The nervous system of grasshoppers offers an important advantage: it contains identifiable neurons that can be discriminated on the basis of their characteristic morphology (R&#x000F6;mer and Marquart, <xref ref-type="bibr" rid="B26">1984</xref>; Stumpner and Ronacher, <xref ref-type="bibr" rid="B37">1991</xref>). Thus, specific processing properties can be assigned to groups of identified neurons in the auditory pathway. The first stages of auditory processing comprise three neuron classes: auditory receptor neurons, local neurons (LNs), and ascending neurons (ANs). The ears of grasshoppers are located on the sides of the first abdominal segment. A total of approximately 60 receptor neurons transduce the vibrations of the tympanum into series of action potentials that travel via the axons into the metathoracic ganglion complex, which houses the first auditory processing stage. There, axons make contact with various types of LNs&#x02014;about 10&#x02013;15 different types of LNs have been identified so far. The LNs then contact a set of about 20 types of ANs, the axons of which ascend to the animal&#x00027;s head and constitute the sole auditory input to higher processing circuits and decision centers located in the brain (Ronacher et al., <xref ref-type="bibr" rid="B32">1986</xref>; Bauer and von Helversen, <xref ref-type="bibr" rid="B1">1987</xref>). Since the population of ANs constitutes a bottleneck for the information that is available to the brain, they are the focus of the present study. Remarkably, the auditory pathway including the ANs is highly conserved between different grasshopper species (Ronacher and Stumpner, <xref ref-type="bibr" rid="B31">1988</xref>; Neuhofer et al., <xref ref-type="bibr" rid="B21">2008</xref>). Not only are the neurons&#x00027; morphologies extremely similar in two distantly related species (<italic>C. biguttulus</italic> and the locust <italic>Locusta migratoria</italic>), but homologous neurons also exhibit the same physiological properties and processing capacities&#x02014;for a detailed description of the response types see (R&#x000F6;mer and Marquart, <xref ref-type="bibr" rid="B26">1984</xref>; Stumpner and Ronacher, <xref ref-type="bibr" rid="B37">1991</xref>; Stumpner et al., <xref ref-type="bibr" rid="B38">1991</xref>; Wohlgemuth and Ronacher, <xref ref-type="bibr" rid="B50">2007</xref>). Neuhofer et al. (<xref ref-type="bibr" rid="B21">2008</xref>) have shown that auditory neurons of the locust respond in the very same manner to a song signal of <italic>C. biguttulus</italic> as do the homologous neurons of <italic>C. biguttulus</italic>; the similarity of responses has been quantified by the van Rossum metric. Only at the next processing stages, located in the brain, do we expect to find neuronal networks that respond selectively to the species-specific song patterns. Due to the high interspecific similarity of the local and ascending neurons we can compare the properties of the locust&#x00027;s neurons with behavioral data obtained with <italic>C. biguttulus</italic>.</p>
<p>The decision centers located in the female brain must evaluate whether a heard song follows the conspecific pattern and whether it is attractive enough to trigger a response song as the appropriate behavior. This task appears simple under ideal conditions, since the song patterns of different species differ considerably (Stumpner and von Helversen, <xref ref-type="bibr" rid="B39">1994</xref>; Gottsberger and Mayer, <xref ref-type="bibr" rid="B10">2007</xref>). However, in nature there are many factors that may degrade the acoustic signal on its way from sender to receiver. This aggravates the classification problem. Here we introduced perturbations of the signal envelope that strongly influenced behavioral decisions. Applying perturbations to the pattern of an attractive song model affected the signal&#x00027;s attractiveness, as measured by female response rates, to different degrees depending on the specific position of a perturbation within a song syllable (Figure <xref ref-type="fig" rid="F1">1A</xref>). Presenting the same stimuli while performing intracellular recordings from identified neurons allowed us to investigate the neural representation of the stimulus identity and of its behavioral relevance.</p>
<p>Using na&#x000EF;ve Bayes classifiers (for review see Pouget et al., <xref ref-type="bibr" rid="B23">2000</xref>; Quiroga and Panzeri, <xref ref-type="bibr" rid="B24">2009</xref>) we specifically asked to what degree the acoustic stimulus can be decoded and whether the behavioral stimulus category can be predicted from the single-trial responses of single neurons and neuron populations. We introduce an abstract model of decision-making for triggering a behavior based on the sensory information encoded in the AN population firing rate during a single trial. This model accounts for the observed behavioral scores to different stimulus types.</p>
</sec>
<sec sec-type="materials and methods" id="s2">
<title>Materials and methods</title>
<sec>
<title>Animals</title>
<p>The behavioral tests were performed with females of <italic>C. biguttulus</italic>. The animals were reared as the filial generation (F1) from eggs of individuals collected as adults near G&#x000F6;ttingen, Germany. After adult molt females and males were held separately in plastic cages to ensure virginity. In this species the females respond to a male&#x00027;s song with a song of their own, thereby indicating their readiness to mate. This response song is an ideal criterion showing that a female has identified a song as belonging to a potential conspecific mating partner.</p>
<p>Electrophysiological experiments were performed on locusts, <italic>L. migratoria</italic>, that were bought from a commercial supplier (for details of the breeding and keeping procedures see Schmidt et al., <xref ref-type="bibr" rid="B34">2008</xref>; Stange and Ronacher, <xref ref-type="bibr" rid="B36">2012</xref>). We can homologize identified neurons between the two species on the basis of their characteristic morphology (R&#x000F6;mer and Marquart, <xref ref-type="bibr" rid="B26">1984</xref>; Stumpner and Ronacher, <xref ref-type="bibr" rid="B37">1991</xref>). The homologous auditory neurons of the thoracic ganglia show quantitatively similar response patterns in both species (Neuhofer et al., <xref ref-type="bibr" rid="B21">2008</xref>). In these experiments songs or song models of <italic>C. biguttulus</italic> were presented to both species, and neurons of the locust showed the same responses as neurons of <italic>C. biguttulus</italic> although these songs have, of course, no relevance for the locust (see also Ronacher and Stumpner, <xref ref-type="bibr" rid="B31">1988</xref>; Sokoliuk et al., <xref ref-type="bibr" rid="B35">1989</xref>). On the basis of this strong homology we can use recordings from <italic>L. migratoria</italic> neurons and compare their spike patterns with behavioral responses of <italic>C. biguttulus</italic>.</p>
</sec>
<sec>
<title>Acoustic stimuli</title>
<p>A digitally generated song envelope consisting of rectangular syllables of 72 ms duration separated by 12 ms pauses served as an attractive standard stimulus (Figure <xref ref-type="fig" rid="F1">1A</xref>). In order to systematically screen the detrimental effect of degradation at different syllable positions, we inserted perturbations of 24 ms either in the first, or in the middle, or in the last part of each syllable (Figure <xref ref-type="fig" rid="F1">1A</xref>). A perturbation consisted of two accents alternating with two gaps, each of 6 ms duration and 12 dB higher (accents) or lower (gaps) sound pressure relative to the syllable plateau. Earlier experiments had revealed that gaps within a syllable markedly reduce the stimulus attractiveness; accentuations that occur at the end of a syllable have similar detrimental effects (Von Helversen, <xref ref-type="bibr" rid="B43">1972</xref>, <xref ref-type="bibr" rid="B46">1979</xref>; Ronacher and Stumpner, <xref ref-type="bibr" rid="B31">1988</xref>; Von Helversen and von Helversen, <xref ref-type="bibr" rid="B44">1997</xref>; for reviews see Ronacher et al., <xref ref-type="bibr" rid="B28">2004</xref>; Ronacher and Stange, <xref ref-type="bibr" rid="B30">2013</xref>).</p>
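<p>The stimulus parameters above (72 ms syllables, 12 ms pauses, 24 ms perturbations built from four 6 ms segments at &#x000B1;12 dB) can be sketched in code. This is a minimal illustration, not the original stimulation software; the sample rate, the segment ordering (accent first), and the function names are our assumptions:</p>

```python
import numpy as np

FS = 20000  # samples per second (assumed; not specified for stimulus synthesis)

def db_to_lin(db):
    """Convert a level difference in dB to a linear amplitude factor."""
    return 10.0 ** (db / 20.0)

def song_envelope(n_syllables=9, syl_ms=72, pause_ms=12, perturb_at=None):
    """Rectangular syllable/pause envelope with an optional 24 ms
    perturbation: two 6 ms accents alternating with two 6 ms gaps,
    +/-12 dB relative to the syllable plateau (amplitude 1.0).
    perturb_at: None, 'onset', 'middle', or 'end'."""
    ms = FS // 1000                       # samples per millisecond
    syl = np.ones(syl_ms * ms)
    if perturb_at is not None:
        start = {'onset': 0,
                 'middle': (syl_ms - 24) // 2 * ms,
                 'end': (syl_ms - 24) * ms}[perturb_at]
        seg = 6 * ms
        for k, level in enumerate([+12, -12, +12, -12]):  # accent, gap, ...
            syl[start + k * seg : start + (k + 1) * seg] = db_to_lin(level)
    pause = np.zeros(pause_ms * ms)
    return np.concatenate([np.concatenate([syl, pause])] * n_syllables)

env = song_envelope(perturb_at='middle')  # 9-subunit electrophysiology stimulus
```

<p>Multiplying such an envelope with a broadband noise carrier would yield the playback waveform.</p>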
<p>The envelopes of all song models were multiplied with the same carrier (a broadband noise with a 5&#x02013;40 kHz spectrum). Sound intensity was calibrated with a half-inch microphone (type 4133; Br&#x000FC;el and Kj&#x000E6;r, N&#x000E6;rum, Denmark) and a measuring amplifier (type 2209, Br&#x000FC;el and Kj&#x000E6;r) at the position of the animal. All four test patterns were presented with the same effective intensity (RMS) of 70 dB SPL; therefore, the peak and plateau intensities differed between stimuli (syllable plateau 70 dB for the standard stimulus and 65 dB for perturbed stimuli, Figure <xref ref-type="fig" rid="F1">1A</xref>). Yet, these intensities fall into the intensity range well accepted by <italic>C. biguttulus</italic> females (Von Helversen and von Helversen, <xref ref-type="bibr" rid="B47">1994</xref>, <xref ref-type="bibr" rid="B44">1997</xref>). The songs presented in the behavioral and electrophysiological tests comprised the same envelope structure but differed in length: 2772 ms (33 subunits; behavior) and 756 ms (9 subunits; electrophysiology), respectively.</p>
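<p>Presenting all four patterns at the same effective (RMS) intensity amounts to rescaling each waveform to a common RMS amplitude before playback. A minimal sketch (the function name and target value are ours):</p>

```python
import numpy as np

def equalize_rms(signals, target_rms=1.0):
    """Scale each waveform so that all share the same effective (RMS)
    amplitude; peak and plateau levels will then differ between waveforms."""
    return [s * (target_rms / np.sqrt(np.mean(s ** 2))) for s in signals]
```

<p>Because the perturbed stimuli contain +12 dB accents, equalizing the RMS necessarily lowers their syllable plateau relative to the standard stimulus, consistent with the 70 vs. 65 dB plateau levels stated above.</p>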
</sec>
<sec>
<title>Behavioral experiments</title>
<p>Virgin <italic>C. biguttulus</italic> females were tested in a sound-proof chamber at a constant temperature of 30 &#x000B1; 2&#x000B0;C. The experiments were conducted automatically by a custom-made program (written by M. Hennig in LabVIEW 7.1, National Instruments) that presented songs in a pseudo-randomized order while recording the females&#x00027; responses (for details of the apparatus and testing procedures see Schmidt et al., <xref ref-type="bibr" rid="B34">2008</xref>). Each song was presented 18 times. As a measure of stimulus attractiveness we used, for each female, the percentage of responses normalized to the 18 presentations; from these individual response rates, median response rates were calculated. Additionally, a negative control was presented, comprising the same carrier frequency and length as the standard signal but lacking any syllable&#x02013;pause structure. This negative control served to detect females that responded indiscriminately to song patterns. We therefore excluded 11 of 44 females from further analysis because they responded more than twice to the negative control. Statistical analyses were performed with GraphPad InStat version 3.06.</p>
</sec>
<sec>
<title>Electrophysiological experiments</title>
<p>Auditory interneurons were recorded intracellularly in the frontal auditory neuropil of the metathoracic ganglion in both sexes of <italic>L. migratoria</italic>. During the experiments the torso of the animal was filled with a locust Ringer solution (Pearson and Robertson, <xref ref-type="bibr" rid="B22a">1981</xref>) to prevent the ganglia from drying. The temperature was kept constant at 30 &#x000B1; 2&#x000B0;C. For the recordings we used glass microelectrodes (borosilicate, outer diameter 1 mm, inner diameter 0.58 mm, GC100F-10; Harvard Apparatus, LTD, USA), with resistances varying between 20 and 100 M&#x003A9;. They were filled with a fluorescent dye, a 3&#x02013;5% solution of Lucifer yellow (Sigma&#x02013;Aldrich, Taufkirchen, Germany) in 0.5 M LiCl. Neural responses were amplified (10-fold, BRAMP-01 R, npi, USA) and recorded by a data-acquisition board (PCI-MIO-16E-4, 16 bit, National Instruments, USA) with a sampling rate of 20 kHz. The dye was injected into the recorded cell by applying a hyperpolarizing current of 0.5&#x02013;1 nA. Subsequently, the thoracic ganglia were incubated in a fixation solution (4% paraformaldehyde), dehydrated, and cleared in methyl salicylate. This procedure allowed an identification of the stained cells under a fluorescent microscope according to their characteristic morphology (R&#x000F6;mer and Marquart, <xref ref-type="bibr" rid="B26">1984</xref>; Stumpner and Ronacher, <xref ref-type="bibr" rid="B37">1991</xref>).</p>
<p>Experiments were performed in a Faraday cage lined with reflection-absorbing prisms. One of two speakers (frequency response 2&#x02013;40 kHz, D21, Dynaudio, Denmark), which were placed laterally at a distance of 30 cm from the animal&#x00027;s tympanal organ, emitted the sound signal. The acoustic stimuli were attenuated (PA5, Tucker-Davis Technologies, USA) and amplified (Raveland-XA600, Conrad Electronics, Germany). They were stored digitally and delivered by custom-made software (LabVIEW, National Instruments) using 100-kHz D/A conversion (PCI-MIO-16E-1, National Instruments). For this study we analyzed ANs, which represent the third processing stage in the metathoracic ganglion and transmit the auditory information to the grasshopper&#x00027;s brain. We recorded four different types of ANs (AN1, AN4, AN3, AN12) from 25 animals (details of the response properties of these neurons can be found in Ronacher and Stumpner, <xref ref-type="bibr" rid="B31">1988</xref>; Stumpner and Ronacher, <xref ref-type="bibr" rid="B37">1991</xref>; Wohlgemuth and Ronacher, <xref ref-type="bibr" rid="B50">2007</xref>). The direction from which the sound stimuli were presented depended on the side to which a neuron was more sensitive. With the exception of AN1, the ANs AN4, AN3, and AN12 do not exhibit strong direction sensitivity. AN1 was mostly recorded from the contralateral side (relative to the soma), the other neurons from both sides. The songs were presented in a looped order: standard stimulus, onset perturbation, perturbation in the middle, perturbation at the end, and then starting again with the standard stimulus. Each stimulus was presented 8 times, and each presentation comprised the full stimulus length (9 subunits).</p>
</sec>
<sec>
<title>Data analysis</title>
<sec>
<title>Estimation of firing rates and trial-by-trial variability</title>
<p>We estimated time-resolved firing rate profiles from single spike trains by convolution with a Gaussian kernel with width &#x003C3; ranging from 1 to 30 ms and support [&#x02212;4&#x000B7;&#x003C3;, 4&#x000B7;&#x003C3;] (Nawrot et al., <xref ref-type="bibr" rid="B18">1999</xref>). The kernel was normalized to unit area such that the time integral of the estimated rates equals the number of spikes.</p>
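<p>The kernel rate estimate can be sketched as follows; this is a minimal illustration on a 1 ms time grid, and the binning choices and function name are ours:</p>

```python
import numpy as np

def firing_rate(spike_times_ms, sigma_ms=5.0, t_max_ms=756, dt_ms=1.0):
    """Time-resolved firing rate (spikes/s) from a single spike train,
    by convolution with a unit-area Gaussian kernel of width sigma,
    truncated at +/- 4*sigma. The time integral of the returned rate
    equals the number of spikes (up to edge effects)."""
    dt_s = dt_ms / 1000.0
    t = np.arange(0.0, t_max_ms, dt_ms)
    support = np.arange(-4.0 * sigma_ms, 4.0 * sigma_ms + dt_ms, dt_ms)
    kernel = np.exp(-support ** 2 / (2.0 * sigma_ms ** 2))
    kernel /= kernel.sum() * dt_s          # normalize to unit area (in seconds)
    train = np.zeros(t.size)               # binary spike indicator per time bin
    idx = (np.asarray(spike_times_ms) / dt_ms).astype(int)
    train[idx[(idx >= 0) & (idx < t.size)]] = 1.0
    return t, np.convolve(train, kernel, mode='same')
```

<p>For example, <monospace>firing_rate([100.0, 400.0])</monospace> returns a rate profile whose time integral is 2 spikes.</p>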
<p>To quantify the trial-by-trial variability of the single neuron spike count we employed the commonly used measure of the Fano factor (Nawrot et al., <xref ref-type="bibr" rid="B19">2008</xref>; Nawrot, <xref ref-type="bibr" rid="B17">2010</xref>), which computes the variance of the spike count across repeated trials divided by the trial-averaged spike count within a fixed observation interval.</p>
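<p>The Fano factor computation is straightforward; a minimal sketch (the use of the unbiased sample variance is our choice):</p>

```python
import numpy as np

def fano_factor(spike_counts):
    """Variance of the spike count across repeated trials divided by the
    trial-averaged count, for one fixed observation interval."""
    counts = np.asarray(spike_counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()
```

<p>For a Poisson process the expected Fano factor is 1; values below 1 indicate more regular spiking across trials.</p>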
</sec>
<sec>
<title>Na&#x000ef;ve Bayes classification</title>
<p>Na&#x000EF;ve Bayes classifiers are statistical classifiers that are based on Bayes&#x00027; theorem together with na&#x000EF;ve independence assumptions. We applied Bayesian classifiers to decode which stimulus class evoked a particular neural response. Na&#x000EF;ve Bayes classifiers have frequently been used to quantify encoded information in neural spike trains (for reviews see Pouget et al., <xref ref-type="bibr" rid="B23">2000</xref>; Quiroga and Panzeri, <xref ref-type="bibr" rid="B24">2009</xref>), for instance in olfactory sensory neurons in Drosophila larvae (Hoare et al., <xref ref-type="bibr" rid="B12">2011</xref>), in visual interneurons of the blowfly (Karmeier et al., <xref ref-type="bibr" rid="B14">2005</xref>), or in motor cortical neurons of behaving monkeys (Rickert et al., <xref ref-type="bibr" rid="B25">2009</xref>). Let <italic>P</italic>(s) denote the probability of presentation of stimulus class s and <italic>P</italic>(<italic>x</italic><sub>1</sub>, &#x02026;, <italic>x<sub>n</sub></italic>|<italic>s</italic>) the conditional probability of observing spike train features <italic>x</italic><sub>1</sub>, &#x02026;, <italic>x<sub>n</sub></italic> given s. The posterior probability that stimulus class s was presented given <italic>x</italic><sub>1</sub>, &#x02026;, <italic>x<sub>n</sub></italic> is according to Bayes&#x00027; theorem</p>
<disp-formula id="E1"><mml:math id="M1"><mml:mtable columnalign='left'><mml:mtr><mml:mtd><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>s</mml:mi><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mo>&#x02026;</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>n</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mo>&#x02026;</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>n</mml:mi></mml:msub><mml:mo>&#x0007C;</mml:mo><mml:mi>s</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mrow><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mo>&#x02026;</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>n</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mfrac><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>s</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>,</mml:mo><mml:mtext>with</mml:mtext></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mtext>&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;</mml:mtext><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mo>&#x02026;</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>n</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mstyle displaystyle='true'><mml:munder><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>s</mml:mi><mml:mo>&#x02208;</mml:mo><mml:mi>S</mml:mi></mml:mrow></mml:munder><mml:mrow><mml:mi>P</mml:mi><mml:mo 
stretchy='false'>(</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mo>&#x02026;</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>n</mml:mi></mml:msub><mml:mo>&#x0007C;</mml:mo><mml:mi>s</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mstyle><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>s</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Under the na&#x000EF;ve independence assumption that each feature <italic>x</italic><sub><italic>i</italic></sub> is conditionally independent of every other feature <italic>x</italic><sub><italic>j</italic></sub> given s, this simplifies to</p>
<disp-formula id="E2"><mml:math id="M2"><mml:mrow><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>s</mml:mi><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mo>&#x02026;</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>n</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mstyle displaystyle='true'><mml:msubsup><mml:mo>&#x0220F;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>n</mml:mi></mml:msubsup><mml:mrow><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x0007C;</mml:mo><mml:mi>s</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mstyle></mml:mrow><mml:mrow><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mo>&#x02026;</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>n</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mfrac><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>s</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula>
<p>From this posterior probability distribution the stimulus class <inline-formula><mml:math id="M11"><mml:mrow><mml:mover accent='true'><mml:mtext>s</mml:mtext><mml:mo stretchy='true'>&#x0005E;</mml:mo></mml:mover></mml:mrow></mml:math></inline-formula> with the maximum posterior probability given the observed features <italic>x</italic><sub>1</sub>, &#x02026;, <italic>x<sub>n</sub></italic> is chosen:</p>
<disp-formula id="E3"><mml:math id="M3"><mml:mrow><mml:mover accent='true'><mml:mi>s</mml:mi><mml:mo>&#x0005E;</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:mi>a</mml:mi><mml:mi>r</mml:mi><mml:mi>g</mml:mi><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mo>&#x02208;</mml:mo><mml:mi>S</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0007B;</mml:mo><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>s</mml:mi><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mo>&#x02026;</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>n</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x0007D;</mml:mo><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula>
<p>Since <italic>P</italic>(<italic>x</italic><sub>1</sub>, &#x02026;, <italic>x<sub>n</sub></italic>) is constant for any choice of the stimulus class s, the classification rule can be written as</p>
<disp-formula id="E4"><mml:math id="M4"><mml:mrow><mml:mover accent='true'><mml:mi>s</mml:mi><mml:mo>&#x0005E;</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:mi>a</mml:mi><mml:mi>r</mml:mi><mml:mi>g</mml:mi><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mo>&#x02208;</mml:mo><mml:mi>S</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x0220F;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>n</mml:mi></mml:munderover><mml:mrow><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x0007C;</mml:mo><mml:mi>s</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>s</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mstyle></mml:mrow><mml:mo>}</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula>
<p><bold><italic>Different decoding approaches</italic></bold>. First, we decoded stimulus classes based on the spike count of single neurons, which can be considered a very simple descriptor of a neural spike response pattern. For each stimulus of 756 ms duration we counted the number of spikes in each of the eight trials, which is proportional to the time-averaged firing rate over the total stimulus length. In a leave-one-out cross-validation every count <italic>c</italic> was used once as validation data to decode the stimulus class as:
<disp-formula id="E5"><mml:math id="M5"><mml:mrow><mml:mover accent='true'><mml:mi>s</mml:mi><mml:mo>&#x0005E;</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:mi>a</mml:mi><mml:mi>r</mml:mi><mml:mi>g</mml:mi><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mo>&#x02208;</mml:mo><mml:mi>S</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0007B;</mml:mo><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>c</mml:mi><mml:mo>&#x0007C;</mml:mo><mml:mi>s</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>s</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x0007D;</mml:mo><mml:mo>,</mml:mo></mml:mrow></mml:math></disp-formula>
while the remaining counts were used as training data to compute the probability density functions <italic>P</italic>(<italic>c</italic>|<italic>s</italic>) with kernel density estimation. The estimation was implemented with scipy.stats.gaussian_kde (Oliphant, <xref ref-type="bibr" rid="B22">2007</xref>). As the procedure includes automatic bandwidth determination, the probability density functions were estimated with different bandwidths. To account for the non-negativity of the counts, we restricted the support to positive values and normalized the probability density function to unit area. In the very rare case that no more than two counts had different values, we assumed a Poisson distribution with the mean of the counts.</p>
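As a concrete illustration, the single-neuron count decoder described above can be sketched in Python. The function names, the integer evaluation grid, and the handling of counts outside the training range are our own illustrative choices, not the authors' implementation; only the use of scipy.stats.gaussian_kde with automatic bandwidth selection, the restriction to non-negative support with renormalization, and the rule of Equation (5) follow the text.

```python
import numpy as np
from scipy.stats import gaussian_kde

def truncated_count_density(train_counts):
    """KDE of spike counts with automatic (Scott) bandwidth,
    evaluated on a non-negative integer grid and renormalized
    to unit mass (the grid extent is an illustrative choice)."""
    kde = gaussian_kde(train_counts)        # needs >= 2 distinct values
    grid = np.arange(0, int(max(train_counts)) * 2 + 2)
    pmf = kde(grid)
    pmf /= pmf.sum()                        # renormalize after truncation
    return grid, pmf

def decode_count(c, class_counts, priors):
    """Pick the stimulus class s maximizing P(c | s) P(s) for a
    single left-out count c (Eq. 5).  class_counts maps each
    class to an array of training counts."""
    best_class, best_score = None, -np.inf
    for s, counts in class_counts.items():
        grid, pmf = truncated_count_density(counts)
        idx = np.searchsorted(grid, c)
        likelihood = pmf[idx] if idx < len(grid) else 0.0
        score = likelihood * priors[s]
        if score > best_score:
            best_class, best_score = s, score
    return best_class
```

The Poisson fallback mentioned in the text for degenerate training sets (gaussian_kde fails when the data are nearly constant) is omitted here for brevity.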
<p>Second, for decoding from a pseudo-population of neurons, we used the counts <italic>c</italic><sub>1</sub>, &#x02026;, <italic>c<sub>n</sub></italic> of <italic>n</italic> neurons of different types recorded in different females and calculated
<disp-formula id="E6"><mml:math id="M6"><mml:mrow><mml:mover accent='true'><mml:mi>s</mml:mi><mml:mo>&#x0005E;</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:mi>a</mml:mi><mml:mi>r</mml:mi><mml:mi>g</mml:mi><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mo>&#x02208;</mml:mo><mml:mi>S</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x0220F;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>n</mml:mi></mml:munderover><mml:mrow><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x0007C;</mml:mo><mml:mi>s</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mstyle><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>s</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mrow></mml:math></disp-formula>
to decode which stimulus class triggered the counts <italic>c</italic><sub>1</sub>, &#x02026;, <italic>c<sub>n</sub></italic>.</p>
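The pseudo-population rule of Equation (6) multiplies the per-neuron likelihoods under an independence assumption. A minimal sketch, assuming the same truncated-KDE likelihoods as above; summing log-likelihoods rather than multiplying probabilities is our own choice for numerical stability, and all names are illustrative:

```python
import numpy as np
from scipy.stats import gaussian_kde

def decode_population(counts, class_data, priors):
    """Decode a stimulus class from a pseudo-population count
    vector (Eq. 6).  class_data[s] is an (n_trials, n_neurons)
    array of training counts for class s; per-neuron likelihoods
    are combined under the independence assumption."""
    best_class, best_logp = None, -np.inf
    for s, data in class_data.items():
        logp = np.log(priors[s])
        for i, c in enumerate(counts):
            kde = gaussian_kde(data[:, i])   # automatic bandwidth
            grid = np.arange(0, int(data[:, i].max()) * 2 + 2)
            pmf = kde(grid)
            pmf /= pmf.sum()                 # non-negative support only
            idx = np.searchsorted(grid, c)
            lik = pmf[idx] if idx < len(grid) else 0.0
            logp += np.log(max(lik, 1e-300)) # guard against log(0)
        if logp > best_logp:
            best_class, best_logp = s, logp
    return best_class
```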
<p><bold><italic>Grouping of stimuli into classes</italic></bold>. We first applied these decoding approaches to decode the four stimuli. In this case the set <italic>S</italic> of stimulus classes consists of the standard stimulus and the onset-, middle-, and end-perturbed songs, i.e., each song forms a single class. As all four songs were presented equally often we applied the classification rules with <italic>P</italic>(<italic>s</italic>) &#x0003D; 1/4 for all <italic>s</italic> &#x02208; <italic>S</italic>. However, we may also define stimulus classes that consist of grouped stimuli. For example, decoding whether or not a song shows degradation yields two classes, one consisting of the standard stimulus, and the other of the three perturbed songs. The prior of these two classes is:</p>
<disp-formula id="E7"><mml:math id="M7"><mml:mrow><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>s</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mtable><mml:mtr><mml:mtd><mml:mrow><mml:mn>1</mml:mn><mml:mo>/</mml:mo><mml:mn>4</mml:mn><mml:mo>,</mml:mo><mml:mtext>&#x02009;</mml:mtext><mml:mi>f</mml:mi><mml:mi>o</mml:mi><mml:mi>r</mml:mi><mml:mtext>&#x02009;</mml:mtext><mml:mi>s</mml:mi><mml:mo>=</mml:mo><mml:mi>s</mml:mi><mml:mi>t</mml:mi><mml:mi>a</mml:mi><mml:mi>n</mml:mi><mml:mi>d</mml:mi><mml:mi>a</mml:mi><mml:mi>r</mml:mi><mml:mi>d</mml:mi><mml:mtext>&#x02009;</mml:mtext><mml:mi>s</mml:mi><mml:mi>t</mml:mi><mml:mi>i</mml:mi><mml:mi>m</mml:mi><mml:mi>u</mml:mi><mml:mi>l</mml:mi><mml:mi>u</mml:mi><mml:mi>s</mml:mi><mml:mtext>&#x02009;&#x02009;</mml:mtext></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mrow><mml:mn>3</mml:mn><mml:mo>/</mml:mo><mml:mn>4</mml:mn><mml:mo>,</mml:mo><mml:mtext>&#x02009;</mml:mtext><mml:mi>f</mml:mi><mml:mi>o</mml:mi><mml:mi>r</mml:mi><mml:mtext>&#x02009;</mml:mtext><mml:mi>s</mml:mi><mml:mo>=</mml:mo><mml:mi>p</mml:mi><mml:mi>e</mml:mi><mml:mi>r</mml:mi><mml:mi>t</mml:mi><mml:mi>u</mml:mi><mml:mi>r</mml:mi><mml:mi>b</mml:mi><mml:mi>e</mml:mi><mml:mi>d</mml:mi><mml:mtext>&#x02009;</mml:mtext><mml:mi>s</mml:mi><mml:mi>t</mml:mi><mml:mi>i</mml:mi><mml:mi>m</mml:mi><mml:mi>u</mml:mi><mml:mi>l</mml:mi><mml:mi>u</mml:mi><mml:mi>s</mml:mi></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:mrow></mml:mrow></mml:math></disp-formula>
<p><bold><italic>Performance of the classifier</italic></bold>. To validate the performance of the classifier we performed a leave-one-out cross-validation in which each single trial response was used once for decoding based on the distribution of the remaining trials. The results were stored in a confusion matrix (Jurman et al., <xref ref-type="bibr" rid="B13">2012</xref>) whose entry (<italic>i</italic>, <italic>j</italic>) represents the number of times that a presentation of stimulus class <italic>i</italic> was predicted to be stimulus class <italic>j</italic>. Based on the confusion matrix we quantified the decoding performance with the Matthews correlation coefficient (MCC) as defined in Jurman et al. (<xref ref-type="bibr" rid="B13">2012</xref>). The MCC assumes values between &#x02212;1 and 1, where 0 indicates chance level classification and 1 perfect prediction. In the case of binary classification (e.g., decoding the standard stimulus against the three perturbed stimuli) the formula reads
<disp-formula id="E8"><mml:math id="M8"><mml:mrow><mml:mi>M</mml:mi><mml:mi>C</mml:mi><mml:mi>C</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>T</mml:mi><mml:mi>P</mml:mi><mml:mo>&#x000B7;</mml:mo><mml:mi>T</mml:mi><mml:mi>N</mml:mi><mml:mo>&#x02212;</mml:mo><mml:mi>F</mml:mi><mml:mi>P</mml:mi><mml:mtext>&#x02009;</mml:mtext><mml:mo>&#x000B7;</mml:mo><mml:mi>F</mml:mi><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:msqrt><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>T</mml:mi><mml:mi>P</mml:mi><mml:mo>+</mml:mo><mml:mi>F</mml:mi><mml:mi>P</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='false'>(</mml:mo><mml:mi>T</mml:mi><mml:mi>P</mml:mi><mml:mo>+</mml:mo><mml:mi>F</mml:mi><mml:mi>N</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='false'>(</mml:mo><mml:mi>T</mml:mi><mml:mi>N</mml:mi><mml:mo>+</mml:mo><mml:mi>F</mml:mi><mml:mi>P</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='false'>(</mml:mo><mml:mi>T</mml:mi><mml:mi>N</mml:mi><mml:mo>+</mml:mo><mml:mi>F</mml:mi><mml:mi>N</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:msqrt></mml:mrow></mml:mfrac></mml:mrow></mml:math></disp-formula>
where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives, respectively. The MCC has two advantages over the more common measure of accuracy &#x0003D; (TP &#x0002B; TN)/(TP &#x0002B; TN &#x0002B; FP &#x0002B; FN), commonly referred to as &#x0201C;fraction correct.&#x0201D; First, the MCC can be applied in multiclass problems even if the classes are of different sizes (Gorodkin, <xref ref-type="bibr" rid="B9">2004</xref>; Jurman et al., <xref ref-type="bibr" rid="B13">2012</xref>) whereas the measure of accuracy is biased in the case of uneven sample sizes. In our case the sample size is uneven when we group stimuli into classes. Second, the chance level of the MCC is 0 independent of the number <italic>m</italic> of classes whereas the chance level of accuracy (1/<italic>m</italic>) depends on the class number. In our case the MCC thus allows for a direct comparison of decoding performance for stimulus classification (3 or 4 different stimuli) and prediction of the behavioral state (2 classes: attractive or unattractive).</p>
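The binary MCC of Equation (8) and the accuracy measure it is contrasted with translate directly into code (a minimal sketch; returning 0 for a degenerate denominator is a common convention, not stated in the text):

```python
import math

def mcc_binary(tp, tn, fp, fn):
    """Matthews correlation coefficient from the entries of a
    binary confusion matrix (Eq. 8)."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0   # convention for a degenerate confusion matrix
    return (tp * tn - fp * fn) / denom

def accuracy(tp, tn, fp, fn):
    """'Fraction correct', shown for comparison; unlike the MCC
    its chance level depends on the number and size of classes."""
    return (tp + tn) / (tp + tn + fp + fn)
```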
<p>To test whether a classifier decodes significantly better than chance we performed a leave-one-out cross-validation based on spike train features that were randomly reassigned to the stimuli, followed by a calculation of the MCC. We repeated this procedure 1000 times and calculated the <italic>p</italic>-value as the percentage of MCCs that are larger than or equal to the actual MCC. A significance level of 0.05 was chosen.</p>
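The permutation test above can be sketched as follows. The callable `score_fn` stands in for the full leave-one-out decoding plus MCC pipeline (an illustrative simplification); the p-value is the fraction of label-shuffled scores at least as large as the actual score, as described in the text:

```python
import numpy as np

def permutation_p_value(features, labels, score_fn, n_perm=1000, seed=0):
    """Estimate the p-value of a decoding score by randomly
    reassigning the stimulus labels n_perm times and recomputing
    the score each time."""
    rng = np.random.default_rng(seed)
    actual = score_fn(features, labels)
    null_scores = np.array([score_fn(features, rng.permutation(labels))
                            for _ in range(n_perm)])
    p = float(np.mean(null_scores >= actual))
    return actual, p
```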
<p>We implemented all data analysis algorithms in the Python programming language.</p>
</sec>
</sec>
<sec>
<title>Model of decision making</title>
<p>Following Gold and Shadlen (<xref ref-type="bibr" rid="B8">2007</xref>) we used the experimental realizations of the count pattern in <italic>n</italic> &#x0003D; 8 ANs to fit a simple probabilistic model for the female&#x00027;s decision to respond to a calling song. This model is based on the log likelihood ratio (LR) of the song attractiveness given the AN population spike count. We computed the log LR separately for each syllable <italic>j</italic> as
<disp-formula id="E9"><mml:math id="M9"><mml:mrow><mml:mi>log</mml:mi><mml:mi>L</mml:mi><mml:msub><mml:mi>R</mml:mi><mml:mo>&#x000B1;</mml:mo></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>j</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mi>log</mml:mi><mml:mfrac><mml:mrow><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>j</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>,</mml:mo><mml:mo>&#x02026;</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mn>8</mml:mn></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>j</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>h</mml:mi><mml:mo>+</mml:mo></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mrow><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>j</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>,</mml:mo><mml:mo>&#x02026;</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mn>8</mml:mn></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:mi>j</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>h</mml:mi><mml:mo>&#x02212;</mml:mo></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mfrac></mml:mrow></mml:math></disp-formula>
where the numerator accounts for the probability that a given count vector <italic>c</italic><sub>1</sub>(<italic>j</italic>), &#x02026;, <italic>c</italic><sub>8</sub>(<italic>j</italic>) (test trial) across 8 neurons stems from the hypothesis h<sub>&#x0002B;</sub>, which is represented by the probability distribution of counts estimated from the remaining trials given an attractive stimulus s<sub>&#x0002B;</sub>; the denominator accounts analogously for the hypothesis h<sub>&#x02212;</sub> of an unattractive stimulus. We then defined the decision variable as:</p>
<disp-formula id="E10"><mml:math id="M10"><mml:mrow><mml:mi>D</mml:mi><mml:mi>V</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>k</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>k</mml:mi></mml:munderover><mml:mrow><mml:mi>log</mml:mi><mml:mi>L</mml:mi><mml:msub><mml:mi>R</mml:mi><mml:mo>&#x000B1;</mml:mo></mml:msub></mml:mrow></mml:mstyle><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula>
<p>The decision variable is updated after each syllable <italic>k</italic> by taking the cumulative sum over the past log LR values up to the <italic>k</italic>th syllable. It thus accumulates the evidence for the presence of an attractive song: the larger the DV, the more likely is the presence of an attractive song compared to an unattractive song.</p>
<p>For any combination of <italic>n</italic> &#x0003D; 8 selected ANs, two of each type, we computed for each single song presentation (test trial) the LR and the DV based on the remaining trials (leave-one-out). We repeated this for all possible combinations of 8 neurons that comprise 2 neurons of each type of AN, representing input from both ears. We next introduced a decision threshold &#x003B8; on the DV. For a single trial, i.e., a particular song presentation, a behavioral response is elicited if DV(<italic>k</italic>) &#x0003E; &#x003B8; for any <italic>k</italic>. This approach allows us to simulate the female single trial response behavior based on the experimentally recorded AN population activity.</p>
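Equation (10) and the threshold rule above amount to a cumulative sum and a crossing test; a minimal sketch (function names are ours):

```python
import numpy as np

def decision_variable(log_lr_per_syllable):
    """DV(k): cumulative sum of the per-syllable log likelihood
    ratios (Eq. 10), i.e., the evidence for an attractive song
    accumulated up to syllable k."""
    return np.cumsum(log_lr_per_syllable)

def simulate_response(log_lr_per_syllable, theta):
    """A behavioral response is elicited if the DV exceeds the
    decision threshold theta at any syllable k."""
    return bool(np.any(decision_variable(log_lr_per_syllable) > theta))
```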
<p>We compared the performance of the simulated animal decisions to the actual animal performance in the behavioral experiments. For a given value of &#x003B8; the true positive (TP) rate is defined as the fraction of correct detections, i.e., threshold crossings in the presence of an attractive song over all presentations of an attractive song. The false positive (FP) rate quantifies the fraction of false alarms, i.e., the threshold crossings in the presence of an unattractive song over all presentations of an unattractive song. TP and FP rates depend on the choice of &#x003B8;. We thus computed the receiver operating characteristic (ROC) that represents the TP rate as a function of the FP rate for varying &#x003B8; (Wiley, <xref ref-type="bibr" rid="B48">2006</xref>). We measure the area under the ROC to quantify the model performance independent of the behavioral threshold &#x003B8;.</p>
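The ROC construction above can be sketched by sweeping the threshold over the per-trial DV maxima. Summarizing each trial by its maximum DV over syllables matches the any-syllable crossing rule; the trapezoidal integration of the area is our own illustrative choice:

```python
import numpy as np

def roc_curve(dv_max_attractive, dv_max_unattractive, thresholds):
    """TP and FP rates as a function of the decision threshold.
    Inputs are per-trial maxima of the DV over syllables for
    attractive and unattractive song presentations."""
    att = np.asarray(dv_max_attractive)
    unatt = np.asarray(dv_max_unattractive)
    tp = np.array([(att > th).mean() for th in thresholds])
    fp = np.array([(unatt > th).mean() for th in thresholds])
    return tp, fp

def area_under_roc(tp, fp):
    """Trapezoidal area under the ROC; points are ordered by FP
    rate, with ties broken by TP rate."""
    order = np.lexsort((tp, fp))
    x, y = fp[order], tp[order]
    return float(np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2.0))
```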
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>Results</title>
<sec>
<title>Behavioral decisions reveal two behaviorally relevant stimulus classes</title>
<p>In behavioral tests we investigated how degradation at specific positions within the signal affects signal recognition. We compared the responses of <italic>C. biguttulus</italic> females to four stimulus types (Figure <xref ref-type="fig" rid="F1">1A</xref>): (i) standard stimulus without perturbation, (ii) with perturbation during the first third of the syllable (&#x0201C;onset&#x0201D;), (iii) during the second third (&#x0201C;middle&#x0201D;), and (iv) during the last third (&#x0201C;end&#x0201D;). Figure <xref ref-type="fig" rid="F1">1B</xref> shows the distribution of response rates across individual females for all four stimuli (see Materials and Methods). The standard stimulus was highly attractive (median: 83%), although individual females differed considerably in their response rate (compare quartile ranges and see variance in Figure <xref ref-type="fig" rid="F1">1C</xref>). Females showed similarly high response rates toward the stimulus with onset perturbation, whereas the same perturbation in the middle or at the end of a syllable led to a behavioral rejection (median response levels of &#x0003C;10%). Only 3 out of 33 females responded to the latter stimuli in more than 50% of the stimulus presentations.</p>
<p>In order to further analyze differences in attractiveness we compared stimulus responses pairwise in individual females. For each female, the response rates for any two stimuli (see left column in Figure <xref ref-type="fig" rid="F2">2</xref>) were subtracted. This showed that the responses to the onset stimulus did not differ significantly from the responses to the standard (top row, Figure <xref ref-type="fig" rid="F2">2</xref>); the same holds for the comparison of the stimuli perturbed in the second and third parts of the syllable (bottom row, Figure <xref ref-type="fig" rid="F2">2</xref>). In contrast, the responses to the unperturbed song and the songs with middle and end perturbations differed significantly (<italic>p</italic> &#x0003C; 0.001; Friedman and Dunn&#x00027;s Multiple Comparison Test), and in both cases the median difference was about 60%. Similar results were found for the comparison between the onset-perturbed stimulus and the other two perturbed stimuli (median differences &#x0003E;50%, <italic>p</italic> &#x0003C; 0.001).</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p><bold>Pairwise comparison of individual female responses allows a distinction between attractive and unattractive stimulus classes</bold>. Box plots show medians of response differences in individual females for the stimulus comparisons shown on the left. Whereas there is no difference in response between stimuli with onset perturbation and the standard song, both are significantly more attractive than stimuli with perturbation at the middle and end (&#x0002A;&#x0002A;&#x0002A;<italic>p</italic> &#x0003C; 0.001, Dunn&#x00027;s <italic>post-hoc</italic> test after Friedman).</p></caption>
<graphic xlink:href="fnsys-08-00183-g0002.tif"/>
</fig>
</sec>
<sec>
<title>Decoding stimulus identity and behavioral class from the neuronal spike count</title>
<p>Grasshoppers have to make their decisions based on the information about the environment provided by the sensory and higher order neurons of the auditory pathway. The clear separation into two behavioral stimulus classes raises the question of how the different stimuli and the different behavioral classes are represented and discriminated within the grasshopper&#x00027;s nervous system. We address this question using intracellular <italic>in vivo</italic> recordings of identified ANs during repeated presentations of all four songs. To quantify the encoded information we apply a single-trial decoding approach to the neural spiking activity using a Bayesian classifier. We first decode the identity of the auditory stimulus before we predict the behavioral class (attractive vs. unattractive).</p>
<sec>
<title>Stimulus classification based on single neuron and population activity</title>
<p>How is information about a stimulus, such as the stimulus type or its attractiveness, represented in the spike responses of the ANs? We obtained intracellular recordings from AN1 (<italic>n</italic> &#x0003D; 9), AN3 (<italic>n</italic> &#x0003D; 10), AN4 (<italic>n</italic> &#x0003D; 4), and AN12 (<italic>n</italic> &#x0003D; 2; for the terminology see R&#x000F6;mer and Marquart, <xref ref-type="bibr" rid="B26">1984</xref>; Stumpner and Ronacher, <xref ref-type="bibr" rid="B37">1991</xref>). Figure <xref ref-type="fig" rid="F3">3</xref> shows example voltage traces of <italic>in vivo</italic> intracellular recordings from two individual ANs, and the corresponding spike raster plots. The example AN3 neuron responded with a burst of spikes to the stimulus onset and with smaller bursts at syllable onsets. In response to the two unattractive stimuli, however, additional spike bursts occurred in the middle or at the end of the syllables. The AN1 neuron marked the syllable onsets of the standard stimulus, whereas the perturbations evoked additional spikes within the syllables. The trial-averaged firing rates (Figure <xref ref-type="fig" rid="F3">3</xref>, color coded) of all recorded neurons indicate that neuronal response patterns vary for the four different song patterns. Also, neurons of the same morphological type (AN1, AN3, AN4, AN12) show variations in their response patterns across individual animals.</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p><bold>Neuronal responses to all four songs categorized by their behavioral relevance</bold>. Voltage traces and spike raster plots (8 trials) in the second and third columns show responses to the first four syllable&#x02013;pause subunits for two example neurons AN3 and AN1. The fourth column shows trial-averaged firing rates estimated with a Gaussian kernel of width &#x003C3; &#x0003D; 4 ms during the whole stimulus presentation. Each row within a block of a neuron type represents the response of a single neuron [from top to bottom AN12 (<italic>n</italic> &#x0003D; 2), AN4 (<italic>n</italic> &#x0003D; 4), AN3 (<italic>n</italic> &#x0003D; 10), AN1 (<italic>n</italic> &#x0003D; 9)]. Color denotes the amplitude of the estimated firing rates normalized to the maximum rate within each neuron class. Arrows point out the firing rates of the examples shown.</p></caption>
<graphic xlink:href="fnsys-08-00183-g0003.tif"/>
</fig>
<p>We use a Bayesian approach to classify the acoustic stimulus based on the neural activity (see Materials and Methods). To this end we counted the number of spikes in each single trial and for each of the four stimuli during the complete stimulus duration of 756 ms, comprising 9 syllables and the respective pauses. Based on the spike count we decoded the stimulus identity according to the classification rules in Different Decoding Approaches. We measured the classification performance by the MCC.</p>
<p>Figure <xref ref-type="fig" rid="F4">4</xref> shows the results for decoding the four stimuli from single neuron activity. The MCC was higher than chance level for all but two neurons (see Figure <xref ref-type="fig" rid="F4">4</xref>) and 11 out of 25 neurons decoded the stimuli significantly better than on the basis of randomized counts (black dots in Figure <xref ref-type="fig" rid="F4">4</xref>, <italic>p</italic> &#x0003C; 0.05). Averaging across all 25 neurons yielded a mean MCC of 0.32. The decoding results were best for the standard song (not shown). As shown in Figure <xref ref-type="fig" rid="F1">1A</xref> the standard song had a higher syllable plateau than the perturbed songs, which is a consequence of our constraint that all stimuli have the same effective intensity (see Materials and Methods). A closer look showed that the trial-averaged spike count elicited by the standard syllables differed from the spike counts evoked by the perturbed syllables. However, this difference was not consistent across neurons. For some neurons the spike count evoked by the standard stimulus was considerably larger than the spike count evoked by any of the perturbed stimuli; for other neurons this relation was reversed. This difference between the spike counts triggered by the standard and the perturbed stimuli is reflected in a higher performance in decoding the standard stimulus against the class of perturbed stimuli (Figure <xref ref-type="supplementary-material" rid="SM1">S1</xref>: averaged MCC is 0.78; 22 neurons decode significantly better than by chance). To avoid a bias of the decoding performance due to the higher syllable plateau of the unperturbed standard stimulus, we restrict our analyses to the stimulus set of the three perturbed songs throughout the rest of the manuscript. This reduced stimulus set yielded only 5 neurons that allowed for a successful decoding of the three stimuli, and the average MCC dropped sharply to 0.08 (Figure <xref ref-type="fig" rid="F5">5A</xref>).</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p><bold>Count based decoding of stimuli in single neurons</bold>. In 11 (filled circles) out of 25 neurons the classification of the four stimuli is significantly better than a classification based on randomized counts. The distribution of MCC values of all 25 neurons differs significantly from the MCC distribution of the classifiers that are based on randomized counts (<italic>p</italic> &#x0003C; 0.05, one-sided Wilcoxon rank-sum test). Dashed line represents chance level based on randomized counts.</p></caption>
<graphic xlink:href="fnsys-08-00183-g0004.tif"/>
</fig>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p><bold>Count based decoding of the three perturbed stimuli in single neurons and populations. (A)</bold> In only 5 neurons are the three perturbed stimuli decoded significantly better than by a classification based on randomized counts. The distribution of MCC values of all 25 classifiers does not differ significantly from the MCC distribution of the classifiers that are based on randomized counts (<italic>p</italic> &#x0003D; 0.23, one-sided Wilcoxon rank-sum test). Dashed line represents chance level. <bold>(B)</bold> The averaged time course of the MCC does not increase with stimulus duration (thick black line). <bold>(C)</bold> Decoding performance increases with population size. Classification is based on the spike count measured over all nine stimulus periods. MCCs are averaged across neurons and vertical error bars depict the standard deviation. The mean performance increases significantly from single neurons to populations of size three, four, and eight (&#x0002A;<italic>p</italic> &#x0003C; 0.05, one-sided Wilcoxon rank-sum test).</p></caption>
<graphic xlink:href="fnsys-08-00183-g0005.tif"/>
</fig>
<p>So far, the spike count was measured during the complete stimulus presentation, which consists of nine periods (syllable plus pause). Next, we asked how well we can decode the stimuli based on spike counts extracted over shorter time windows. To this end, we investigated the MCC as a function of the number of periods starting at stimulus onset (Figure <xref ref-type="fig" rid="F5">5B</xref>). Interestingly, the MCC, averaged across neurons within one class, stayed constant over stimulus time (see thick lines in Figure <xref ref-type="fig" rid="F5">5B</xref>). For single neurons the MCC fluctuated without apparent increase or decrease (thin lines in Figure <xref ref-type="fig" rid="F5">5B</xref>).</p>
<p>The performance of the Bayesian classifier generally depends on the encoded rate signal and on the noise that is evident in the trial-by-trial variability of the spike train responses. High variability increases the uncertainty of the decoder model. We estimated the trial-by-trial spike count variability in our AN recordings using the Fano factor (see Materials and Methods). As shown in Figure <xref ref-type="supplementary-material" rid="SM1">S2</xref> the variability remained constant with increasing stimulus time in almost all neurons. This is consistent with the constant decoding performance independent of stimulus duration in Figure <xref ref-type="fig" rid="F5">5B</xref>.</p>
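The Fano factor used here, assuming the standard variance-to-mean definition of spike-count variability (the exact estimator is given in the authors' Materials and Methods, not reproduced in this section), can be sketched as:

```python
import numpy as np

def fano_factor(counts):
    """Trial-by-trial spike-count variability: variance of the
    counts across trials divided by the mean count (equals 1.0
    for a Poisson process)."""
    counts = np.asarray(counts, dtype=float)
    mean = counts.mean()
    return counts.var(ddof=1) / mean if mean > 0 else float("nan")
```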
<p>As the grasshopper brain receives input from several ANs (up to 20 on each side; Stumpner and Ronacher, <xref ref-type="bibr" rid="B37">1991</xref>) we next decoded the three perturbed songs from neuronal populations (see Materials and Methods). We constructed neuronal populations up to size four with each neuron of a different type, representing a subpopulation of ANs in one hemisphere. Additionally, we decoded on the basis of populations of size eight, consisting of two different neurons of each available type, reflecting the input from both ears. As expected, the averaged decoding performance increases with population size, up to an average MCC &#x0003D; 0.41 for 8 neurons when counts were extracted over the complete stimulus duration (Figure <xref ref-type="fig" rid="F5">5C</xref>). This improvement was significant between populations of size 3 or larger and single neurons (<italic>p</italic> &#x0003C; 0.05, one-sided Wilcoxon rank-sum test).</p>
</sec>
<sec>
<title>Decoding of the behavioral relevance</title>
<p>In our behavioral experiments stimuli fell into two behaviorally relevant classes: the standard song and the onset-perturbed song were attractive whereas songs with middle- and end-perturbed syllables were rejected (Figure <xref ref-type="fig" rid="F1">1B</xref>). Here we asked: is it possible to predict whether a song belongs to the accepted or rejected class based on the neuronal spike count? We again used a Bayesian decoder and evaluated the success of correct predictions in single trials with the MCC. We first considered the total spike count over all nine periods in single neurons. Only half of all MCC values were larger than zero and the number of neurons that decoded significantly better than by chance was reduced to 3 (Figure <xref ref-type="fig" rid="F6">6A</xref>). The MCC averaged across all 25 neurons was 0.19 and the distribution of the MCC did not differ significantly from the distribution of the performance values based on randomized counts (<italic>p</italic> &#x0003D; 0.45, one-sided Wilcoxon rank-sum test). Investigating the MCC as a function of the number of periods starting at stimulus onset again showed a constant representation across syllables (cf. Figure <xref ref-type="fig" rid="F6">6B</xref>).</p>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption><p><bold>Count based decoding of behaviorally relevant classes in single neurons and populations. (A)</bold> Decoding the class of accepted versus the class of rejected stimuli is successful in only 3 neurons. The distribution of the 25 MCC values does not differ significantly from the MCC distribution of the classifiers that are based on randomized counts (<italic>p</italic> &#x0003D; 0.45, one-sided Wilcoxon rank-sum test). <bold>(B)</bold> The averaged time course of the MCC for each neuron class does not increase with stimulus duration (thick black line). Gray lines indicate results for single neurons. <bold>(C)</bold> Decoding performance increases with population size. The increase differs significantly between single neurons and populations of size three and larger (&#x0002A;<italic>p</italic> &#x0003C; 0.05, one-sided Wilcoxon rank-sum test).</p></caption>
<graphic xlink:href="fnsys-08-00183-g0006.tif"/>
</fig>
<p>When information from AN populations was used, the performance improved remarkably, up to an average MCC of 0.69 (counts over all nine periods; Figure <xref ref-type="fig" rid="F6">6C</xref>) for populations of size eight. This increase was significant between single neurons and populations of size three or larger for counts measured over the complete stimulus duration (Figure <xref ref-type="fig" rid="F6">6C</xref>). Our results show that information about the behavioral relevance is encoded in the time-averaged AN population rate.</p>
</sec>
</sec>
<sec>
<title>Modeling the behavioral decision based on sensory evidence</title>
<p>Thus far we have shown that a population of ANs carries a significant amount of information about the behavioral relevance of the stimulus, allowing for a binary classification of the attractive vs. the unattractive stimulus class based on the neurons&#x00027; spike count (Figure <xref ref-type="fig" rid="F6">6C</xref>). Here we introduce a simple model of decision making inspired by Gold and Shadlen (<xref ref-type="bibr" rid="B8">2007</xref>). In our model we interpret the population spike count of the ANs as sensory evidence about the behaviorally relevant cues that indicate an attractive calling song (see Materials and Methods). Our results in Figure <xref ref-type="fig" rid="F6">6B</xref> indicate that this information is encoded in a persistent and stable manner across syllables. We thus hypothesize that a decision circuit at a higher processing level makes use of this stable representation at the sensory level by accumulating evidence across successive syllables.</p>
<p>Formally, our model (cf. Materials and Methods, Model of Decision Making) assumes that the AN population firing rate for each syllable provides an independent piece of evidence about the behaviorally relevant cues. For each single-trial spike count pattern in a population of 8 neurons, and for each syllable separately, we computed the log LR for the presence of an attractive song over the presence of an unattractive song. In a second step we integrated the log LR across syllables. We then defined the decision variable (DV) as the time integral over the log LR. Positive values of the DV indicate that the presence of an attractive stimulus is more likely than the presence of an unattractive stimulus, and vice versa for negative values of the DV.</p>
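<p>The accumulation step can be sketched in a few lines. The following is a minimal illustration, not the analysis code of this study: it assumes Poisson spike-count statistics per syllable with hypothetical class-conditional rates, for which the log LR reduces to a linear function of the count and the DV is its cumulative sum.</p>
<preformat>
```python
import numpy as np

# Illustrative sketch: DV as the cumulative log likelihood ratio (log LR)
# across syllables. The Poisson count model and the two rate parameters
# are assumptions for this example, not values from the paper.
rng = np.random.default_rng(0)

n_syllables = 9
rate_attractive = 12.0    # assumed mean count per syllable, attractive class
rate_unattractive = 9.0   # assumed mean count per syllable, unattractive class

def decision_variable(counts):
    """Cumulative log LR (attractive vs. unattractive) over syllables.

    For Poisson counts the per-syllable log LR reduces to
    k * log(mu1 / mu0) - (mu1 - mu0); the k! terms cancel in the ratio.
    """
    counts = np.asarray(counts, dtype=float)
    log_lr = (counts * np.log(rate_attractive / rate_unattractive)
              - (rate_attractive - rate_unattractive))
    return np.cumsum(log_lr)

# One simulated attractive trial: counts drawn from the attractive model.
# Positive DV values favor the attractive class; for the true class the
# DV drifts upward on average as evidence accumulates.
counts = rng.poisson(rate_attractive, size=n_syllables)
dv = decision_variable(counts)
```
</preformat>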
<p>Figure <xref ref-type="fig" rid="F7">7A</xref> shows the DV as a function of time based on the measured neuronal response patterns. In the case of attractive calling songs (red) the average DV is positive already during the first syllable and shows an overall increase over the 9 syllables. For trials in which an unattractive song was presented the average DV (black) steadily decreased across syllables. The individual single-trial curves of the DV vary considerably (Figure <xref ref-type="fig" rid="F7">7A</xref>). In order to simulate the behavioral decision we introduced a decision threshold on the DV. In each single trial a response is simulated if the DV crosses this threshold during any of the syllables.</p>
<fig id="F7" position="float">
<label>Figure 7</label>
<caption><p><bold>A model of decision making based on the experimental spike trains. (A)</bold> The DVs as a function of the song syllables for 100 presentations of an attractive song (perturbed at the beginning of each syllable) are shown as light red lines. The average (red line) is computed across all possible combinations of 8 neurons and all trials; it shows an overall increase over time. The single-trial DVs for 100 unattractive song presentations are shown as gray lines. The average (black line) shows a monotonic decrease over time. <bold>(B)</bold> TP rate (red) and FP rate (black) as a function of the decision threshold, computed across 9720 different combinations of 8 neurons and all single-trial stimulus presentations. <bold>(C)</bold> The ROC (black line) relates TP rate and FP rate for a varying threshold. The area under the ROC (gray) amounts to 0.97.</p></caption>
<graphic xlink:href="fnsys-08-00183-g0007.tif"/>
</fig>
<p>In the case of an attractive (unattractive) trial we counted a threshold crossing as a TP (FP) result. We then computed the TP and FP rates as a function of the threshold value. The TP rate of the model relates to the female response rate for attractive song presentations in animal experiments, while the FP rate relates to the female response rate to unattractive songs (Figure <xref ref-type="fig" rid="F1">1B</xref>). As shown in Figure <xref ref-type="fig" rid="F7">7B</xref>, the FP rate drops sharply and much faster than the TP rate when the decision threshold is increased.</p>
<p>How does the model performance compare quantitatively to the behavioral experiments? The median female response rates were 83% for attractive stimuli and 6% for unattractive stimuli (Figure <xref ref-type="fig" rid="F1">1B</xref>). Varying the decision boundary in our model so that the TP rate lies in the range of 80&#x02013;85% yields FP rates in the range of 3&#x02013;4% (Figure <xref ref-type="fig" rid="F7">7B</xref>). This indicates that behavioral decisions based on the neural recordings from a population of 8 ANs and the simple decision model presented here are, on average, comparable to the average performance in the behavioral experiments with female grasshoppers.</p>
<p>The ROC in Figure <xref ref-type="fig" rid="F7">7C</xref> quantifies the model performance independently of the threshold. Integrating over the ROC (area under the ROC) yielded a high value of 0.97, indicating that this decision model based on the neuronal population spike count performs very well both in correctly detecting attractive calling songs and in avoiding false alarms in the case of unattractive calling songs.</p>
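<p>The threshold sweep and the area under the ROC can be illustrated as follows. The Gaussian single-trial DV distributions below are assumptions made for the sketch; in the study the DVs are derived from the recorded spike-count patterns.</p>
<preformat>
```python
import numpy as np

# Illustrative ROC construction from simulated single-trial decision
# variables (DVs); the class-conditional Gaussians are assumptions.
rng = np.random.default_rng(1)

dv_attractive = rng.normal(4.0, 2.0, size=1000)     # max DV, attractive trials
dv_unattractive = rng.normal(-4.0, 2.0, size=1000)  # max DV, unattractive trials

# A trial produces a simulated response when its DV reaches the threshold.
thresholds = np.linspace(-12.0, 12.0, 201)
tp_rate = np.array([np.mean(np.greater_equal(dv_attractive, t))
                    for t in thresholds])
fp_rate = np.array([np.mean(np.greater_equal(dv_unattractive, t))
                    for t in thresholds])

# Area under the ROC via the trapezoidal rule. Both rates decrease with
# the threshold, so integrate along the reversed (increasing-FP) arrays.
auc = np.trapz(tp_rate[::-1], fp_rate[::-1])
```
</preformat>
<p>Because the two assumed DV distributions are well separated, the resulting area under the ROC is close to 1, mirroring the high value reported for the measured data.</p>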
</sec>
</sec>
<sec sec-type="discussion" id="s4">
<title>Discussion</title>
<sec>
<title>Population rate code at the output of the grasshopper thoracic pathway</title>
<p>We evaluated the information about stimulus and behavioral contingency using a simple measure of neuronal activity: the total spike count during stimulus presentation. For single neurons we obtained only poor decoding performances. The full time-resolved firing rate estimate over the stimulus duration carries much more stimulus information and naturally results in much higher decoding performances (Figure <xref ref-type="supplementary-material" rid="SM1">S3</xref>). However, in the realistic scenario of decoding the spike counts from a population of neurons, the performance increased significantly compared to the single-neuron case. For the maximum population size of 8 ANs we obtained on average MCC &#x0003D; 0.69 for predicting the behavioral class (Figure <xref ref-type="fig" rid="F6">6C</xref>). We grouped at most 8 neurons, two from each of the morphological types that had been recorded in our experiments. This represents a realistic subpopulation of ANs from an individual animal. We expect that decoding from an intact population of at least 20 morphologically distinct ANs per hemisphere in the grasshopper would reach considerably higher decoding performances, indicating that the relevant stimulus features are represented by a combinatorial rate code in the AN population. These results are particularly interesting in view of recent papers investigating different aspects of the grasshopper&#x00027;s auditory pathway. Clemens et al. (<xref ref-type="bibr" rid="B3">2011</xref>) provided evidence that between the local and ascending neurons, i.e., between the second and third processing stage, the coding principle changes from a summed population code to a labeled-line population code in which the population&#x00027;s information is maximal if a decoder takes neuronal identity into account. At the level of the AN population, both the temporal sparseness and the population sparseness increase (Clemens et al., <xref ref-type="bibr" rid="B5">2012</xref>). At the same time, integrated spike rate information gains in significance compared to spike timing information (Clemens et al., <xref ref-type="bibr" rid="B3">2011</xref>, <xref ref-type="bibr" rid="B5">2012</xref>; see also Wohlgemuth and Ronacher, <xref ref-type="bibr" rid="B50">2007</xref>; Creutzig et al., <xref ref-type="bibr" rid="B6">2009</xref>; Ronacher, <xref ref-type="bibr" rid="B27">2014</xref>), which fits our results. In addition, the use of a spike count code would also explain why the remarkably imprecise spike timing found in ANs (Vogel et al., <xref ref-type="bibr" rid="B41">2005</xref>) does not impair the precise evaluation of song features in the millisecond range observed in behavioral tests (Von Helversen, <xref ref-type="bibr" rid="B46">1979</xref>; Ronacher and Stumpner, <xref ref-type="bibr" rid="B31">1988</xref>; Ronacher and Stange, <xref ref-type="bibr" rid="B30">2013</xref>; Ronacher, <xref ref-type="bibr" rid="B27">2014</xref>).</p>
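<p>For reference, the MCC used to score the count-based classifiers can be computed directly from the binary confusion counts. The labels below are illustrative, and reporting an undefined MCC (zero denominator) as 0, i.e., chance level, follows a common convention.</p>
<preformat>
```python
import numpy as np

# Matthews correlation coefficient (MCC) for binary decoding:
# 1 = perfect, 0 = chance, -1 = total disagreement.
# Here 1 stands for the attractive class, 0 for the unattractive class.
def mcc(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(np.logical_and(y_true, y_pred))
    tn = np.sum(np.logical_and(~y_true, ~y_pred))
    fp = np.sum(np.logical_and(~y_true, y_pred))
    fn = np.sum(np.logical_and(y_true, ~y_pred))
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    if denom == 0.0:
        return 0.0  # convention: undefined MCC is reported as chance (0)
    return (tp * tn - fp * fn) / denom
```
</preformat>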
</sec>
<sec>
<title>Persistent and reliable sensory evidence at the level of ascending neurons</title>
<p>We found that information is encoded persistently and reliably across syllables in the AN population rate, and we hypothesize that the role of the grasshopper&#x00027;s auditory system is to provide stable sensory evidence that can be evaluated in the brain. The performance of the Bayesian classifier depends on both the encoded rate signal and the noise. We found that the Fano factor of ANs, which estimates the noise as the trial-by-trial variability of the spike number (Nawrot, <xref ref-type="bibr" rid="B17">2010</xref>), is constant across time, indicating a constant level of noise in the peripheral auditory system (Figure <xref ref-type="supplementary-material" rid="SM1">S2</xref>). The absolute values of the Fano factor match previous results showing that the variability of spike trains increases from receptor neurons to the ANs (Ronacher et al., <xref ref-type="bibr" rid="B28">2004</xref>; Vogel et al., <xref ref-type="bibr" rid="B41">2005</xref>; Vogel and Ronacher, <xref ref-type="bibr" rid="B42">2007</xref>; Neuhofer et al., <xref ref-type="bibr" rid="B20">2011</xref>); accordingly, ANs on average showed a reduced performance in stimulus classification compared to LNs (Wohlgemuth and Ronacher, <xref ref-type="bibr" rid="B50">2007</xref>). Using song models that were progressively degraded, Neuhofer et al. (<xref ref-type="bibr" rid="B20">2011</xref>) could estimate the respective contributions of external signal degradation and of the trial-to-trial variability of spike trains caused by intrinsic neuronal noise. Intrinsic neuronal noise had a very strong impact on the spike train variability, in particular in ANs, thus likely affecting the representation of acoustic signals along the auditory pathway, and with it the discrimination and recognition of grasshopper songs (Ronacher, <xref ref-type="bibr" rid="B27">2014</xref>).</p>
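<p>As an illustration of this variability measure, the Fano factor can be estimated per time window as the across-trial variance of the spike count divided by its mean. The simulated Poisson counts below are an assumption for the sketch; for them the Fano factor stays close to 1 in every window, i.e., flat over time, as observed for the ANs.</p>
<preformat>
```python
import numpy as np

# Fano factor of spike counts: trial-by-trial variance divided by the
# mean count, evaluated per time window (cf. Nawrot, 2010). The Poisson
# counts below are simulated for illustration only.
rng = np.random.default_rng(2)

n_trials, n_windows = 50, 9
counts = rng.poisson(8.0, size=(n_trials, n_windows))  # counts per window

def fano_factor(count_matrix):
    """FF per time window, estimated across trials (axis 0 = trials)."""
    return np.var(count_matrix, axis=0, ddof=1) / np.mean(count_matrix, axis=0)

ff = fano_factor(counts)  # one value per window; flat for stationary noise
```
</preformat>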
</sec>
<sec>
<title>Integrating sensory evidence for behavioral decisions&#x02014;a hypothetical brain algorithm in the grasshopper</title>
<p>At the level of ANs, which provide the sole auditory input to the grasshopper&#x00027;s brain, we found a steady representation of information about the stimulus and its behavioral relevance in the population spike count. We devised a simple decision-making model that integrates evidence over time, generating a decision variable that may eventually reach a decision threshold and elicit a behavioral response. Such models have previously been formulated for alternative choices in sensory decision tasks (e.g., Gold and Shadlen, <xref ref-type="bibr" rid="B8">2007</xref>; Beck et al., <xref ref-type="bibr" rid="B2">2008</xref>; Drugowitsch and Pouget, <xref ref-type="bibr" rid="B7">2012</xref>). The model integrates the estimated Bayesian likelihood across successive syllables and forms a behavioral decision when a decision threshold is crossed. In the grasshopper, recognition and evaluation of a conspecific calling song simplify to the female&#x00027;s decision between showing or not showing her response behavior, depending on whether and when the evidence reaches the threshold. In a neuroethological context as well as in controlled behavioral experiments animals can modulate their behavioral response level (Von Helversen and von Helversen, <xref ref-type="bibr" rid="B47">1994</xref>, <xref ref-type="bibr" rid="B44">1997</xref>; Wirmer et al., <xref ref-type="bibr" rid="B49">2010</xref>). In our model this could be realized by a modulation of the response threshold, e.g., through neuromodulators in the relevant brain circuit (Heinrich et al., <xref ref-type="bibr" rid="B11">2001</xref>; Wirmer et al., <xref ref-type="bibr" rid="B49">2010</xref>).</p>
<p>Our model is based on neural recordings in the auditory pathway and thus extends approaches that model female response behavior based on the auditory stimuli alone. Clemens and Ronacher (<xref ref-type="bibr" rid="B4">2013</xref>) devised an abstract linear-nonlinear cascade model: in a first step the model continuously extracts characteristic stimulus features from the sound stimulus by means of linear filters. In a second step the model transforms each filter output with a nonlinear function. The resulting signals are then integrated across features and over the whole stimulus period, neglecting the exact temporal position of specific song features. Their model was able to predict behavioral responses with high reliability (<italic>r</italic><sup>2</sup> &#x0003D; 0.87) with a set of only two distinct song features. This serial structure of (i) extraction of sensory evidence and (ii) subsequent temporal integration over this evidence is shared by our model and the model proposed by Clemens and Ronacher (<xref ref-type="bibr" rid="B4">2013</xref>).</p>
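<p>The serial structure of such a cascade can be sketched schematically. The filters, the static nonlinearity, and the feature weights below are illustrative assumptions, not the fitted components of the published model:</p>
<preformat>
```python
import numpy as np

# Schematic linear-nonlinear (LN) cascade: linear filtering of the sound
# envelope, a static nonlinearity per feature, then integration over time
# and across features. All components are assumptions for this sketch.
rng = np.random.default_rng(3)

envelope = rng.random(1000)  # stimulus amplitude envelope (arbitrary units)
# Two assumed feature filters: a smoothing filter and its derivative.
filters = [np.hanning(40), np.diff(np.hanning(41))]

def ln_prediction(env, filts, weights=(1.0, 1.0)):
    outputs = []
    for f, w in zip(filts, weights):
        x = np.convolve(env, f, mode="valid")  # linear stage
        y = np.maximum(x, 0.0) ** 2            # assumed static nonlinearity
        outputs.append(w * np.mean(y))         # integrate over time
    return float(np.sum(outputs))              # integrate across features
```
</preformat>
<p>The single scalar output plays the role of the predicted response level; the exact temporal position of song features is discarded by the time averaging, as in the published cascade.</p>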
<p>If we assume a time-integrating algorithm in the grasshopper&#x00027;s brain, what could be the underlying neuronal mechanism? The relevant time span is indicated by the reported response times, typically several hundreds of milliseconds. One cellular mechanism that could serve this task is short-term synaptic plasticity. Facilitation and depression at synapses are governed by processes with typical time constants of the right order of magnitude, and they have repeatedly been suggested to be involved in decision-making processes (Mongillo et al., <xref ref-type="bibr" rid="B16">2008</xref>; Mart&#x000ED;nez-Garc&#x000ED;a et al., <xref ref-type="bibr" rid="B15">2011</xref>), including a suggested algorithm for auditory pattern recognition in the cricket&#x00027;s central brain (Rost et al., <xref ref-type="bibr" rid="B33">2013</xref>).</p>
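<p>A minimal sketch of such a mechanism, assuming Tsodyks-Markram-style dynamics with hypothetical parameters, shows how facilitation and depression shape the efficacy of successive inputs on the few-hundred-millisecond timescale:</p>
<preformat>
```python
import numpy as np

# Minimal short-term plasticity model in the Tsodyks-Markram spirit.
# u = release probability (facilitation), R = available resources
# (depression). All time constants and U are illustrative assumptions.
def synaptic_efficacies(spike_times, tau_f=0.5, tau_d=0.2, U=0.2):
    """Relative synaptic efficacy u*R at each presynaptic spike (times in s)."""
    u, R = U, 1.0
    amps, last_t = [], None
    for t in spike_times:
        if last_t is not None:
            dt = t - last_t
            u = U + (u - U) * np.exp(-dt / tau_f)      # facilitation decays to U
            R = 1.0 + (R - 1.0) * np.exp(-dt / tau_d)  # resources recover to 1
        u = u + U * (1.0 - u)   # facilitation jump at the spike
        amps.append(u * R)      # efficacy of this spike
        R = R * (1.0 - u)       # resources consumed by the spike
        last_t = t
    return np.array(amps)
```
</preformat>
<p>With the assumed time constants the efficacy of an input train carries a memory of the recent spiking history over a few hundred milliseconds, which is the property invoked above for evidence integration.</p>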
<p>In summary, our results support the hypothesis of a population rate code in the ANs that project the acoustic information to the central brain (see Clemens et al., <xref ref-type="bibr" rid="B3">2011</xref>, <xref ref-type="bibr" rid="B5">2012</xref>). The information about the behavioral relevance of a stimulus is well represented in the population rate, and this information is constantly present throughout the stimulus presentation. The good performance of our decision model suggests a computational process within the grasshopper brain that infers the behaviorally relevant information and integrates this evidence over time to reach a behavioral decision.</p>
</sec>
<sec>
<title>Conflict of interest statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</sec>
</body>
<back>
<ack>
<p>We thank R. Matthias Hennig for providing the programs for data acquisition and data evaluation and for valuable comments on an earlier version of the manuscript. We thank both reviewers for their constructive comments and helpful suggestions for improving the manuscript. This research was funded by the German Research Foundation (DFG) through the <italic>Collaborative Research Center for Theoretical Biology</italic> (SFB 618&#x02014;grants to Martin P. Nawrot and Bernhard Ronacher) and the Research Training Group <italic>Sensory Computation in Neural Systems</italic> (GRK 1589&#x02014;grant to Martin P. Nawrot). Martin P. Nawrot received additional funding from the Federal Ministry of Education and Research (grant 01GQ0413 to <italic>Bernstein Center for Computational Neuroscience Berlin</italic> and grant 01GQ0941 to <italic>Bernstein Focus Neuronal Basis of Learning: Memory in Decision Making</italic>).</p>
</ack>
<sec sec-type="supplementary material" id="s5">
<title>Supplementary material</title>
<p>The Supplementary Material for this article can be found online at: <ext-link ext-link-type="uri" xlink:href="http://www.frontiersin.org/journal/10.3389/fnsys.2014.00183/abstract">http://www.frontiersin.org/journal/10.3389/fnsys.2014.00183/abstract</ext-link></p>
<supplementary-material xlink:href="DataSheet1.DOCX" id="SM1" mimetype="application/vnd.openxmlformats-officedocument.wordprocessingml.document" xmlns:xlink="http://www.w3.org/1999/xlink"/>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bauer</surname> <given-names>M.</given-names></name> <name><surname>von Helversen</surname> <given-names>O.</given-names></name></person-group> (<year>1987</year>). <article-title>Separate localization of sound recognizing and sound producing neural mechanisms in a grasshopper</article-title>. <source>J. Comp. Physiol. A</source> <volume>161</volume>, <fpage>95</fpage>&#x02013;<lpage>101</lpage>. <pub-id pub-id-type="doi">10.1007/BF00609458</pub-id></citation>
</ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Beck</surname> <given-names>J. M.</given-names></name> <name><surname>Ma</surname> <given-names>W. J.</given-names></name> <name><surname>Kiani</surname> <given-names>R.</given-names></name> <name><surname>Hanks</surname> <given-names>T.</given-names></name> <name><surname>Churchland</surname> <given-names>A. K.</given-names></name> <name><surname>Roitman</surname> <given-names>J.</given-names></name> <etal/></person-group>. (<year>2008</year>). <article-title>Probabilistic population codes for Bayesian decision making</article-title>. <source>Neuron</source> <volume>60</volume>, <fpage>1142</fpage>&#x02013;<lpage>1152</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2008.09.021</pub-id><pub-id pub-id-type="pmid">19109917</pub-id></citation>
</ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Clemens</surname> <given-names>J.</given-names></name> <name><surname>Kutzki</surname> <given-names>O.</given-names></name> <name><surname>Ronacher</surname> <given-names>B.</given-names></name> <name><surname>Schreiber</surname> <given-names>S.</given-names></name> <name><surname>Wohlgemuth</surname> <given-names>S.</given-names></name></person-group> (<year>2011</year>). <article-title>Efficient transformation of an auditory population code in a small sensory system</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A</source>. <volume>108</volume>, <fpage>13812</fpage>&#x02013;<lpage>13817</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.1104506108</pub-id><pub-id pub-id-type="pmid">21825132</pub-id></citation>
</ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Clemens</surname> <given-names>J.</given-names></name> <name><surname>Ronacher</surname> <given-names>B.</given-names></name></person-group> (<year>2013</year>). <article-title>Feature extraction and integration underlying perceptual decision making during courtship behavior</article-title>. <source>J. Neurosci</source>. <volume>33</volume>, <fpage>12136</fpage>&#x02013;<lpage>12145</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.0724-13.2013</pub-id><pub-id pub-id-type="pmid">23864698</pub-id></citation>
</ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Clemens</surname> <given-names>J.</given-names></name> <name><surname>Wohlgemuth</surname> <given-names>S.</given-names></name> <name><surname>Ronacher</surname> <given-names>B.</given-names></name></person-group> (<year>2012</year>). <article-title>Nonlinear computations underlying temporal and population sparseness in the auditory system of the grasshopper</article-title>. <source>J. Neurosci</source>. <volume>32</volume>, <fpage>10053</fpage>&#x02013;<lpage>62</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.5911-11.2012</pub-id><pub-id pub-id-type="pmid">22815519</pub-id></citation>
</ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Creutzig</surname> <given-names>F.</given-names></name> <name><surname>Wohlgemuth</surname> <given-names>S.</given-names></name> <name><surname>Stumpner</surname> <given-names>A.</given-names></name> <name><surname>Benda</surname> <given-names>J.</given-names></name> <name><surname>Ronacher</surname> <given-names>B.</given-names></name> <name><surname>Herz</surname> <given-names>A. V. M.</given-names></name></person-group> (<year>2009</year>). <article-title>Timescale-invariant representation of acoustic communication signals by a bursting neuron</article-title>. <source>J. Neurosci</source>. <volume>29</volume>, <fpage>2575</fpage>&#x02013;<lpage>2580</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.0599-08.2009</pub-id><pub-id pub-id-type="pmid">19244533</pub-id></citation>
</ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Drugowitsch</surname> <given-names>J.</given-names></name> <name><surname>Pouget</surname> <given-names>A.</given-names></name></person-group> (<year>2012</year>). <article-title>Probabilistic vs. non-probabilistic approaches to the neurobiology of perceptual decision-making</article-title>. <source>Curr. Opin. Neurobiol</source>. <volume>22</volume>, <fpage>963</fpage>&#x02013;<lpage>969</lpage>. <pub-id pub-id-type="doi">10.1016/j.conb.2012.07.007</pub-id><pub-id pub-id-type="pmid">22884815</pub-id></citation>
</ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gold</surname> <given-names>J.</given-names></name> <name><surname>Shadlen</surname> <given-names>M. N.</given-names></name></person-group> (<year>2007</year>). <article-title>The neural basis of decision making</article-title>. <source>Annu. Rev. Neurosci</source>. <volume>30</volume>, <fpage>535</fpage>&#x02013;<lpage>574</lpage>. <pub-id pub-id-type="doi">10.1146/annurev.neuro.29.051605.113038</pub-id><pub-id pub-id-type="pmid">17600525</pub-id></citation>
</ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gorodkin</surname> <given-names>J.</given-names></name></person-group> (<year>2004</year>). <article-title>Comparing two K-category assignments by a K-category correlation coefficient</article-title>. <source>Comput. Biol. Chem</source>. <volume>28</volume>, <fpage>367</fpage>&#x02013;<lpage>374</lpage> <pub-id pub-id-type="doi">10.1016/j.compbiolchem.2004.09.006</pub-id><pub-id pub-id-type="pmid">15556477</pub-id></citation>
</ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gottsberger</surname> <given-names>B.</given-names></name> <name><surname>Mayer</surname> <given-names>F.</given-names></name></person-group> (<year>2007</year>). <article-title>Behavioral sterility of hybrid males in acoustically communicating grasshoppers (Acrididae, Gomphocerinae)</article-title>. <source>J. Comp. Physiol. A</source> <volume>193</volume>, <fpage>703</fpage>&#x02013;<lpage>714</lpage>. <pub-id pub-id-type="doi">10.1007/s00359-007-0225-y</pub-id><pub-id pub-id-type="pmid">17440734</pub-id></citation>
</ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Heinrich</surname> <given-names>R.</given-names></name> <name><surname>Wenzel</surname> <given-names>B.</given-names></name> <name><surname>Elsner</surname> <given-names>N.</given-names></name></person-group> (<year>2001</year>). <article-title>A role for muscarinic excitation: control of specific singing behavior by activation of the adenylate cyclase pathway in the brain of grasshoppers</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A</source>. <volume>98</volume>, <fpage>9919</fpage>&#x02013;<lpage>9923</lpage> <pub-id pub-id-type="doi">10.1073/pnas.151131998</pub-id><pub-id pub-id-type="pmid">11438697</pub-id></citation>
</ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hoare</surname> <given-names>D. J.</given-names></name> <name><surname>Humble</surname> <given-names>J.</given-names></name> <name><surname>Jin</surname> <given-names>D.</given-names></name> <name><surname>Gilding</surname> <given-names>N.</given-names></name> <name><surname>Petersen</surname> <given-names>R.</given-names></name> <name><surname>Cobb</surname> <given-names>M.</given-names></name> <etal/></person-group>. (<year>2011</year>). <article-title>Modeling peripheral olfactory coding in <italic>Drosophila</italic> larvae</article-title>. <source>PLoS ONE</source> <volume>6</volume>:<fpage>e22996</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0022996</pub-id><pub-id pub-id-type="pmid">21857978</pub-id></citation>
</ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jurman</surname> <given-names>G.</given-names></name> <name><surname>Riccadonna</surname> <given-names>S.</given-names></name> <name><surname>Furlanello</surname> <given-names>C.</given-names></name></person-group> (<year>2012</year>). <article-title>A comparison of MCC and CEN error measures in multi-class prediction</article-title>. <source>PLoS ONE</source> <volume>7</volume>:<fpage>e41882</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0041882</pub-id><pub-id pub-id-type="pmid">22905111</pub-id></citation>
</ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Karmeier</surname> <given-names>K.</given-names></name> <name><surname>Krapp</surname> <given-names>H. G.</given-names></name> <name><surname>Egelhaaf</surname> <given-names>M.</given-names></name></person-group> (<year>2005</year>). <article-title>Population coding of self-motion: applying Bayesian analysis to a population of visual interneurons in the fly</article-title>. <source>J. Neurophysiol</source>. <volume>94</volume>, <fpage>2182</fpage>&#x02013;<lpage>2194</lpage>. <pub-id pub-id-type="doi">10.1152/jn.00278.2005</pub-id><pub-id pub-id-type="pmid">15901759</pub-id></citation>
</ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mart&#x000ED;nez-Garc&#x000ED;a</surname> <given-names>M.</given-names></name> <name><surname>Rolls</surname> <given-names>E. T.</given-names></name> <name><surname>Deco</surname> <given-names>G.</given-names></name> <name><surname>Romo</surname> <given-names>R.</given-names></name></person-group> (<year>2011</year>). <article-title>Neural and computational mechanisms of postponed decisions</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A</source>. <volume>108</volume>, <fpage>11626</fpage>&#x02013;<lpage>11631</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.1108137108</pub-id><pub-id pub-id-type="pmid">21709222</pub-id></citation>
</ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mongillo</surname> <given-names>G.</given-names></name> <name><surname>Barak</surname> <given-names>O.</given-names></name> <name><surname>Tsodyks</surname> <given-names>M.</given-names></name></person-group> (<year>2008</year>). <article-title>Synaptic theory of working memory</article-title>. <source>Science</source> <volume>319</volume>, <fpage>1543</fpage>&#x02013;<lpage>1546</lpage>. <pub-id pub-id-type="doi">10.1126/science.1150769</pub-id><pub-id pub-id-type="pmid">18339943</pub-id></citation>
</ref>
<ref id="B17">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Nawrot</surname> <given-names>M. P.</given-names></name></person-group> (<year>2010</year>). <article-title>Analysis and interpretation of interval and count variability in neural spike trains</article-title>, in <source>Analysis of Parallel Spike Trains</source>, eds <person-group person-group-type="editor"><name><surname>Gr&#x000FC;n</surname> <given-names>S.</given-names></name> <name><surname>Rotter</surname> <given-names>S.</given-names></name></person-group> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>37</fpage>&#x02013;<lpage>58</lpage>.</citation>
</ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nawrot</surname> <given-names>M. P.</given-names></name> <name><surname>Aertsen</surname> <given-names>A.</given-names></name> <name><surname>Rotter</surname> <given-names>S.</given-names></name></person-group> (<year>1999</year>). <article-title>Single-trial estimation of neuronal firing rates: from single-neuron spike trains to population activity</article-title>. <source>J. Neurosci. Methods</source> <volume>94</volume>, <fpage>81</fpage>&#x02013;<lpage>92</lpage>. <pub-id pub-id-type="doi">10.1016/S0165-0270(99)00127-2</pub-id><pub-id pub-id-type="pmid">10638817</pub-id></citation>
</ref>
<ref id="B19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nawrot</surname> <given-names>M. P.</given-names></name> <name><surname>Boucsein</surname> <given-names>C.</given-names></name> <name><surname>Rodriguez Molina</surname> <given-names>V.</given-names></name> <name><surname>Riehle</surname> <given-names>A.</given-names></name> <name><surname>Aertsen</surname> <given-names>A.</given-names></name> <name><surname>Rotter</surname> <given-names>S.</given-names></name></person-group> (<year>2008</year>). <article-title>Measurement of variability dynamics in cortical spike trains</article-title>. <source>J. Neurosci. Methods</source> <volume>169</volume>, <fpage>374</fpage>&#x02013;<lpage>390</lpage>. <pub-id pub-id-type="doi">10.1016/j.jneumeth.2007.10.013</pub-id><pub-id pub-id-type="pmid">18155774</pub-id></citation>
</ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Neuhofer</surname> <given-names>D.</given-names></name> <name><surname>Stemmler</surname> <given-names>M.</given-names></name> <name><surname>Ronacher</surname> <given-names>B.</given-names></name></person-group> (<year>2011</year>). <article-title>Neuronal precision and the limits for acoustic signal recognition in a small neuronal network</article-title>. <source>J. Comp. Physiol. A</source> <volume>197</volume>, <fpage>251</fpage>&#x02013;<lpage>265</lpage>. <pub-id pub-id-type="doi">10.1007/s00359-010-0606-5</pub-id><pub-id pub-id-type="pmid">21063712</pub-id></citation>
</ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Neuhofer</surname> <given-names>D.</given-names></name> <name><surname>Wohlgemuth</surname> <given-names>S.</given-names></name> <name><surname>Stumpner</surname> <given-names>A.</given-names></name> <name><surname>Ronacher</surname> <given-names>B.</given-names></name></person-group> (<year>2008</year>). <article-title>Evolutionarily conserved coding properties of auditory neurons across grasshopper species</article-title>. <source>Proc. R. Soc. B</source> <volume>275</volume>, <fpage>1965</fpage>&#x02013;<lpage>1974</lpage>. <pub-id pub-id-type="doi">10.1098/rspb.2008.0527</pub-id><pub-id pub-id-type="pmid">18505715</pub-id></citation>
</ref>
<ref id="B22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Oliphant</surname> <given-names>T. E.</given-names></name></person-group> (<year>2007</year>). <article-title>Python for scientific computing</article-title>. <source>Comput. Sci. Eng</source> <volume>9</volume>, <fpage>10</fpage>&#x02013;<lpage>20</lpage>. <pub-id pub-id-type="doi">10.1109/MCSE.2007.58</pub-id></citation>
</ref>
<ref id="B22a">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pearson</surname> <given-names>K. G.</given-names></name> <name><surname>Robertson</surname> <given-names>R. M.</given-names></name></person-group> (<year>1981</year>). <article-title>Interneurones coactivating hindleg flexor and extensor motoneurones in the locust</article-title>. <source>J. Comp. Physiol</source>. <volume>144</volume>, <fpage>391</fpage>&#x02013;<lpage>400</lpage>. <pub-id pub-id-type="doi">10.1007/BF00612571</pub-id></citation>
</ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pouget</surname> <given-names>A.</given-names></name> <name><surname>Dayan</surname> <given-names>P.</given-names></name> <name><surname>Zemel</surname> <given-names>R.</given-names></name></person-group> (<year>2000</year>). <article-title>Information processing with population codes</article-title>. <source>Nat. Rev. Neuroci</source>. <volume>1</volume>, <fpage>125</fpage>&#x02013;<lpage>132</lpage>. <pub-id pub-id-type="doi">10.1038/35039062</pub-id><pub-id pub-id-type="pmid">11252775</pub-id></citation>
</ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Quiroga</surname> <given-names>R. Q.</given-names></name> <name><surname>Panzeri</surname> <given-names>S.</given-names></name></person-group> (<year>2009</year>). <article-title>Extracting information from neuronal populations: information theory and decoding approaches</article-title>. <source>Nat. Rev. Neurosci</source>. <volume>10</volume>, <fpage>173</fpage>&#x02013;<lpage>185</lpage>. <pub-id pub-id-type="doi">10.1038/nrn2578</pub-id><pub-id pub-id-type="pmid">19229240</pub-id></citation>
</ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rickert</surname> <given-names>J.</given-names></name> <name><surname>Riehle</surname> <given-names>A.</given-names></name> <name><surname>Aertsen</surname> <given-names>A.</given-names></name> <name><surname>Rotter</surname> <given-names>S.</given-names></name> <name><surname>Nawrot</surname> <given-names>M. P.</given-names></name></person-group> (<year>2009</year>). <article-title>Dynamic encoding of movement direction in motor cortical neurons</article-title>. <source>J. Neurosci</source>. <volume>29</volume>, <fpage>13870</fpage>&#x02013;<lpage>13882</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.5441-08.2009</pub-id><pub-id pub-id-type="pmid">19889998</pub-id></citation>
</ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>R&#x000F6;mer</surname> <given-names>H.</given-names></name> <name><surname>Marquart</surname> <given-names>V.</given-names></name></person-group> (<year>1984</year>). <article-title>Morphology and physiology of auditory interneurons in the metathoracic ganglion of the locust</article-title>. <source>J. Comp. Physiol. A</source> <volume>155</volume>, <fpage>249</fpage>&#x02013;<lpage>262</lpage>. <pub-id pub-id-type="doi">10.1007/BF00612642</pub-id></citation>
</ref>
<ref id="B27">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ronacher</surname> <given-names>B.</given-names></name></person-group> (<year>2014</year>). <article-title>Processing of species-specific signals in the auditory pathway of grasshoppers</article-title>, in <source>Insect Hearing and Acoustic Communication</source>, ed <person-group person-group-type="editor"><name><surname>Hedwig</surname> <given-names>B.</given-names></name></person-group> (<publisher-loc>Berlin; Heidelberg</publisher-loc>: <publisher-name>Springer Verlag</publisher-name>), <fpage>185</fpage>&#x02013;<lpage>204</lpage>.</citation>
</ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ronacher</surname> <given-names>B.</given-names></name> <name><surname>Franz</surname> <given-names>A.</given-names></name> <name><surname>Wohlgemuth</surname> <given-names>S.</given-names></name> <name><surname>Hennig</surname> <given-names>R. M.</given-names></name></person-group> (<year>2004</year>). <article-title>Variability of spike trains and the processing of temporal patterns of acoustic signals-problems, constraints, and solutions</article-title>. <source>J. Comp. Physiol. A</source> <volume>190</volume>, <fpage>257</fpage>&#x02013;<lpage>277</lpage>. <pub-id pub-id-type="doi">10.1007/s00359-004-0494-7</pub-id><pub-id pub-id-type="pmid">14872260</pub-id></citation>
</ref>
<ref id="B30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ronacher</surname> <given-names>B.</given-names></name> <name><surname>Stange</surname> <given-names>N.</given-names></name></person-group> (<year>2013</year>). <article-title>Processing of acoustic signals in grasshoppers - A neuroethological approach towards female choice</article-title>. <source>J. Physiol. Paris</source> <volume>107</volume>, <fpage>41</fpage>&#x02013;<lpage>50</lpage>. <pub-id pub-id-type="doi">10.1016/j.jphysparis.2012.05.005</pub-id><pub-id pub-id-type="pmid">22728472</pub-id></citation>
</ref>
<ref id="B31">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ronacher</surname> <given-names>B.</given-names></name> <name><surname>Stumpner</surname> <given-names>A.</given-names></name></person-group> (<year>1988</year>). <article-title>Filtering of behaviourally relevant temporal parameters of a grasshopper&#x00027;s song by an auditory interneuron</article-title>. <source>J. Comp. Physiol. A</source> <volume>163</volume>, <fpage>517</fpage>&#x02013;<lpage>523</lpage>. <pub-id pub-id-type="doi">10.1007/BF00604905</pub-id></citation>
</ref>
<ref id="B32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ronacher</surname> <given-names>B.</given-names></name> <name><surname>von Helversen</surname> <given-names>D.</given-names></name> <name><surname>von Helversen</surname> <given-names>O.</given-names></name></person-group> (<year>1986</year>). <article-title>Routes and stations in the processing of auditory directional information in the CNS of a grasshopper, as revealed by surgical experiments</article-title>. <source>J. Comp. Physiol. A</source> <volume>158</volume>, <fpage>363</fpage>&#x02013;<lpage>374</lpage>. <pub-id pub-id-type="doi">10.1007/BF00603620</pub-id></citation>
</ref>
<ref id="B33">
<citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Rost</surname> <given-names>T.</given-names></name> <name><surname>Ramachandran</surname> <given-names>H.</given-names></name> <name><surname>Nawrot</surname> <given-names>M. P.</given-names></name> <name><surname>Chicca</surname> <given-names>E.</given-names></name></person-group> (<year>2013</year>). <article-title>A neuromorphic approach to auditory pattern recognition in cricket phonotaxis</article-title>, in <source>2013 European Conference on Circuit Theory and Design (ECCTD) (Dresden)</source>, <fpage>1</fpage>&#x02013;<lpage>4</lpage>. <pub-id pub-id-type="doi">10.1109/ECCTD.2013.6662247</pub-id></citation>
</ref>
<ref id="B34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schmidt</surname> <given-names>A.</given-names></name> <name><surname>Ronacher</surname> <given-names>B.</given-names></name> <name><surname>Hennig</surname> <given-names>R. M.</given-names></name></person-group> (<year>2008</year>). <article-title>The role of frequency, phase and time for processing of amplitude modulated signals by grasshoppers</article-title>. <source>J. Comp. Physiol. A</source> <volume>194</volume>, <fpage>221</fpage>&#x02013;<lpage>233</lpage>. <pub-id pub-id-type="doi">10.1007/s00359-007-0295-x</pub-id><pub-id pub-id-type="pmid">18043922</pub-id></citation>
</ref>
<ref id="B35">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sokoliuk</surname> <given-names>T.</given-names></name> <name><surname>Stumpner</surname> <given-names>A.</given-names></name> <name><surname>Ronacher</surname> <given-names>B.</given-names></name></person-group> (<year>1989</year>). <article-title>GABA-like immunoreactivity suggests an inhibitory function of the thoracic low-frequency neuron (TN1) in Acridid grasshoppers</article-title>. <source>Naturwissenschaften</source> <volume>76</volume>, <fpage>223</fpage>&#x02013;<lpage>225</lpage>. <pub-id pub-id-type="doi">10.1007/BF00627695</pub-id></citation>
</ref>
<ref id="B36">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Stange</surname> <given-names>N.</given-names></name> <name><surname>Ronacher</surname> <given-names>B.</given-names></name></person-group> (<year>2012</year>). <article-title>Grasshopper calling songs convey information about condition and health of males</article-title>. <source>J. Comp. Physiol. A</source> <volume>198</volume>, <fpage>309</fpage>&#x02013;<lpage>318</lpage>. <pub-id pub-id-type="doi">10.1007/s00359-012-0709-2</pub-id><pub-id pub-id-type="pmid">22246210</pub-id></citation>
</ref>
<ref id="B37">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Stumpner</surname> <given-names>A.</given-names></name> <name><surname>Ronacher</surname> <given-names>B.</given-names></name></person-group> (<year>1991</year>). <article-title>Auditory interneurones in the metathoracic ganglion of the grasshopper <italic>Chorthippus biguttulus</italic>. I. Morphological and physiological characterization</article-title>. <source>J. Exp. Biol</source>. <volume>158</volume>, <fpage>391</fpage>&#x02013;<lpage>410</lpage>.</citation>
</ref>
<ref id="B38">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Stumpner</surname> <given-names>A.</given-names></name> <name><surname>Ronacher</surname> <given-names>B.</given-names></name> <name><surname>von Helversen</surname> <given-names>O.</given-names></name></person-group> (<year>1991</year>). <article-title>Auditory interneurones in the metathoracic ganglion of the grasshopper <italic>Chorthippus biguttulus</italic>. II Processing of temporal patterns of the song of the male</article-title>. <source>J. Exp. Biol</source>. <volume>158</volume>, <fpage>411</fpage>&#x02013;<lpage>430</lpage>.</citation>
</ref>
<ref id="B39">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Stumpner</surname> <given-names>A.</given-names></name> <name><surname>von Helversen</surname> <given-names>O.</given-names></name></person-group> (<year>1994</year>). <article-title>Song production and song recognition in a group of sibling grasshopper species (<italic>Ch. dorsatus, Ch. dichrous and Ch. loratus</italic>: Orthoptera, Acrididae)</article-title>. <source>Bioacoustics</source> <volume>6</volume>, <fpage>1</fpage>&#x02013;<lpage>23</lpage>. <pub-id pub-id-type="doi">10.1080/09524622.1994.9753268</pub-id></citation>
</ref>
<ref id="B41">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vogel</surname> <given-names>A.</given-names></name> <name><surname>Hennig</surname> <given-names>R. M.</given-names></name> <name><surname>Ronacher</surname> <given-names>B.</given-names></name></person-group> (<year>2005</year>). <article-title>Increase of neuronal response variability at higher processing levels as revealed by simultaneous recordings</article-title>. <source>J. Neurophysiol</source>. <volume>93</volume>, <fpage>3548</fpage>&#x02013;<lpage>3559</lpage>. <pub-id pub-id-type="doi">10.1152/jn.01288.2004</pub-id><pub-id pub-id-type="pmid">15716366</pub-id></citation>
</ref>
<ref id="B42">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vogel</surname> <given-names>A.</given-names></name> <name><surname>Ronacher</surname> <given-names>B.</given-names></name></person-group> (<year>2007</year>). <article-title>Neural correlations increase between consecutive processing levels in the auditory system of locusts</article-title>. <source>J. Neurophysiol</source>. <volume>97</volume>, <fpage>3376</fpage>&#x02013;<lpage>3385</lpage>. <pub-id pub-id-type="doi">10.1152/jn.00796.2006</pub-id><pub-id pub-id-type="pmid">17360818</pub-id></citation>
</ref>
<ref id="B43">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>von Helversen</surname> <given-names>D.</given-names></name></person-group> (<year>1972</year>). <article-title>Gesang des M&#x000E4;nnchens und Lautschema des Weibchens bei der Feldheuschrecke <italic>Chorthippus biguttulus</italic> (Orthoptera, Acrididae)</article-title>. <source>J. Comp. Physiol. A</source> <volume>81</volume>, <fpage>381</fpage>&#x02013;<lpage>422</lpage>. <pub-id pub-id-type="doi">10.1007/BF00697757</pub-id></citation>
</ref>
<ref id="B44">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>von Helversen</surname> <given-names>D.</given-names></name> <name><surname>von Helversen</surname> <given-names>O.</given-names></name></person-group> (<year>1997</year>). <article-title>Recognition of sex in the acoustic communication of the grasshopper <italic>Chorthippus biguttulus</italic> (Orthoptera, Acrididae)</article-title>. <source>J. Comp. Physiol. A</source> <volume>180</volume>, <fpage>373</fpage>&#x02013;<lpage>386</lpage>. <pub-id pub-id-type="doi">10.1007/s003590050056</pub-id></citation>
</ref>
<ref id="B45">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>von Helversen</surname> <given-names>D.</given-names></name> <name><surname>von Helversen</surname> <given-names>O.</given-names></name></person-group> (<year>1998</year>). <article-title>Acoustic pattern recognition in a grasshopper: processing in the time or frequency domain?</article-title> <source>Biol. Cybern</source>. <volume>79</volume>, <fpage>467</fpage>&#x02013;<lpage>476</lpage>. <pub-id pub-id-type="doi">10.1007/s004220050496</pub-id></citation>
</ref>
<ref id="B46">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>von Helversen</surname> <given-names>O.</given-names></name></person-group> (<year>1979</year>). <article-title>Angeborenes Erkennen akustischer Schl&#x000FC;sselreize</article-title>, in <source>Verhandlungen der Deutschen Zoologischen Gesellschaft</source>, ed <person-group person-group-type="editor"><name><surname>Rathmayer</surname> <given-names>W.</given-names></name></person-group> (<publisher-loc>Stuttgart</publisher-loc>: <publisher-name>Gustav Fischer Verlag</publisher-name>), <fpage>42</fpage>&#x02013;<lpage>59</lpage>.</citation>
</ref>
<ref id="B47">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>von Helversen</surname> <given-names>O.</given-names></name> <name><surname>von Helversen</surname> <given-names>D.</given-names></name></person-group> (<year>1994</year>). <article-title>Forces driving coevolution of song and song recognition in grasshoppers</article-title>, in <source>Fortschritte der Zoologie</source>, eds <person-group person-group-type="editor"><name><surname>Schildberger</surname> <given-names>K.</given-names></name> <name><surname>Elsner</surname> <given-names>N.</given-names></name></person-group> (<publisher-loc>Stuttgart</publisher-loc>: <publisher-name>Gustav Fischer Verlag</publisher-name>), <fpage>253</fpage>&#x02013;<lpage>283</lpage>.</citation>
</ref>
<ref id="B48">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wiley</surname> <given-names>R. H.</given-names></name></person-group> (<year>2006</year>). <article-title>Signal detection and animal communication</article-title>. <source>Adv. Study Behav</source>. <volume>36</volume>, <fpage>217</fpage>. <pub-id pub-id-type="doi">10.1016/S0065-3454(06)36005-6</pub-id></citation>
</ref>
<ref id="B49">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wirmer</surname> <given-names>A.</given-names></name> <name><surname>Faustmann</surname> <given-names>M.</given-names></name> <name><surname>Heinrich</surname> <given-names>R.</given-names></name></person-group> (<year>2010</year>). <article-title>Reproductive behaviour of female <italic>Chorthippus biguttulus</italic> grasshoppers</article-title>. <source>J. Insect. Physiol</source>. <volume>56</volume>, <fpage>745</fpage>&#x02013;<lpage>753</lpage>. <pub-id pub-id-type="doi">10.1016/j.jinsphys.2010.01.006</pub-id><pub-id pub-id-type="pmid">20116380</pub-id></citation>
</ref>
<ref id="B50">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wohlgemuth</surname> <given-names>S.</given-names></name> <name><surname>Ronacher</surname> <given-names>B.</given-names></name></person-group> (<year>2007</year>). <article-title>Auditory discrimination of amplitude modulations based on metric distances of spike trains</article-title>. <source>J. Neurophysiol</source>. <volume>97</volume>, <fpage>3082</fpage>&#x02013;<lpage>3092</lpage>. <pub-id pub-id-type="doi">10.1152/jn.01235.2006</pub-id><pub-id pub-id-type="pmid">17314239</pub-id></citation>
</ref>
</ref-list>
</back>
</article>
