<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Neurosci.</journal-id>
<journal-title>Frontiers in Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Neurosci.</abbrev-journal-title>
<issn pub-type="epub">1662-453X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fnins.2014.00374</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Original Research Article</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Electrophysiological evidence for change detection in speech sound patterns by anesthetized rats</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Astikainen</surname> <given-names>Piia</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="author-notes" rid="fn001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://community.frontiersin.org/people/u/18452"/>
</contrib>
<contrib contrib-type="author">
<name><surname>M&#x000E4;llo</surname> <given-names>Tanel</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<uri xlink:href="http://community.frontiersin.org/people/u/95442"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Ruusuvirta</surname> <given-names>Timo</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<uri xlink:href="http://community.frontiersin.org/people/u/36370"/>
</contrib>
<contrib contrib-type="author">
<name><surname>N&#x000E4;&#x000E4;t&#x000E4;nen</surname> <given-names>Risto</given-names></name>
<xref ref-type="aff" rid="aff4"><sup>4</sup></xref>
<xref ref-type="aff" rid="aff5"><sup>5</sup></xref>
<xref ref-type="aff" rid="aff6"><sup>6</sup></xref>
<uri xlink:href="http://community.frontiersin.org/people/u/92963"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Department of Psychology, University of Jyv&#x000E4;skyl&#x000E4;</institution> <country>Jyv&#x000E4;skyl&#x000E4;, Finland</country></aff>
<aff id="aff2"><sup>2</sup><institution>Centre for Learning Research, University of Turku</institution> <country>Turku, Finland</country></aff>
<aff id="aff3"><sup>3</sup><institution>Department of Teacher education/Rauma Unit, University of Turku</institution> <country>Rauma, Finland</country></aff>
<aff id="aff4"><sup>4</sup><institution>Institute of Psychology, University of Tartu</institution> <country>Tartu, Estonia</country></aff>
<aff id="aff5"><sup>5</sup><institution>Center of Functionally Integrative Neuroscience, University of &#x000C5;rhus</institution> <country>&#x000C5;rhus, Denmark</country></aff>
<aff id="aff6"><sup>6</sup><institution>Cognitive Brain Research Unit, Institute of Behavioral Sciences, University of Helsinki</institution> <country>Helsinki, Finland</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Lynne E. Bernstein, George Washington University, USA</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Alexandra Bendixen, Carl von Ossietzky University of Oldenburg, Germany; Guangying Wu, George Washington University, USA</p></fn>
<fn fn-type="corresp" id="fn001"><p>&#x0002A;Correspondence: Piia Astikainen, Department of Psychology, University of Jyv&#x000E4;skyl&#x000E4;, PO Box 35, Ylist&#x000F6;nm&#x000E4;entie 33, 40014 Jyv&#x000E4;skyl&#x000E4;, Finland e-mail: <email>piia.astikainen&#x00040;jyu.fi</email></p></fn>
<fn fn-type="other" id="fn002"><p>This article was submitted to Auditory Cognitive Neuroscience, a section of the journal Frontiers in Neuroscience.</p></fn>
</author-notes>
<pub-date pub-type="epub">
<day>17</day>
<month>11</month>
<year>2014</year>
</pub-date>
<pub-date pub-type="collection">
<year>2014</year>
</pub-date>
<volume>8</volume>
<elocation-id>374</elocation-id>
<history>
<date date-type="received">
<day>25</day>
<month>06</month>
<year>2014</year>
</date>
<date date-type="accepted">
<day>30</day>
<month>10</month>
<year>2014</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2014 Astikainen, M&#x000E4;llo, Ruusuvirta and N&#x000E4;&#x000E4;t&#x000E4;nen.</copyright-statement>
<copyright-year>2014</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract><p>Human infants are able to detect changes in grammatical rules in a speech sound stream. Here, we tested whether rats have a comparable ability by using an electrophysiological measure that has been shown to reflect higher-order auditory cognition even before it is manifested at the behavioral level. Urethane-anesthetized rats were presented with a stream of sequences consisting of three pseudowords presented at a fast pace. Frequently presented &#x0201C;standard&#x0201D; sequences had 16 variants, all of which had the same structure. They were occasionally replaced by acoustically novel &#x0201C;deviant&#x0201D; sequences of two different types: structurally consistent and structurally inconsistent sequences. Two stimulus conditions were presented to separate animal groups. In one stimulus condition, the standard and the pattern-obeying deviant sequences had an AAB structure, while the pattern-violating deviant sequences had an ABB structure. In the other stimulus condition, these assignments were reversed. During the stimulus presentation, local-field potentials were recorded from the dura above the auditory cortex. Two temporally separate differential brain responses to the deviant sequences reflected the detection of the deviant speech sound sequences. The first response was elicited by both types of deviant sequences and most probably reflected their acoustical novelty. The second response was elicited specifically by the structurally inconsistent (pattern-violating) deviant sequences, suggesting that rats were able to detect changes in the pattern of the three-syllabic speech sound sequences (i.e., the location of the reduplicated element in the sequence). Since all the deviant sound sequences were constructed of novel items, our findings indicate that, similarly to the human brain, the rat brain has the ability to automatically generalize extracted structural information to new items.</p></abstract>
<kwd-group>
<kwd>local-field potentials</kwd>
<kwd>pattern perception</kwd>
<kwd>auditory cortex</kwd>
<kwd>rat</kwd>
<kwd>mismatch negativity</kwd>
<kwd>speech</kwd>
</kwd-group>
<counts>
<fig-count count="2"/>
<table-count count="1"/>
<equation-count count="0"/>
<ref-count count="33"/>
<page-count count="6"/>
<word-count count="5070"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="introduction" id="s1">
<title>Introduction</title>
<p>The ability to detect abstract grammatical rules, i.e., the principles that govern speech sound streams, is essential for learning a language. To investigate infants&#x00027; ability to extract abstract algebraic rules, Marcus et al. (<xref ref-type="bibr" rid="B18">1999</xref>) familiarized infants with sequences of syllables (or sentences) that followed a particular &#x0201C;grammatical&#x0201D; rule (e.g., &#x0201C;ga ti ga&#x0201D; for ABA). During the test, infants were observed to be more attentive to sequences that were grammatically inconsistent (e.g., &#x0201C;wo fe fe,&#x0201D; which is ABB) than to sequences that were grammatically consistent (e.g., &#x0201C;wo fe wo&#x0201D;). Because the test sentences were different from those used in the training phase, the authors concluded that infants can extract an abstract rule and generalize it to novel instances. The detection of ABB and AAB structures was also compared, and it was found that even though both structures contain a reduplicated element, the infants paid more attention to the inconsistent patterns.</p>
<p>It is not known, however, whether the ability to extract grammatical rules from speech sounds only applies to human linguistic cognition or whether this cognitive element has originally evolved for other, more general purposes. In the latter case, these skills could also be found in non-human animal species.</p>
<p>It is known that non-human animal species can process speech up to a certain level of cognitive complexity. Speech sound discrimination has been demonstrated in various animal species both neurophysiologically (e.g., Dooling and Brown, <xref ref-type="bibr" rid="B6">1990</xref> in birds; Kraus et al., <xref ref-type="bibr" rid="B15">1994</xref> in guinea pigs; Ahmed et al., <xref ref-type="bibr" rid="B1a">2011</xref> in rats) and on a behavioral level (e.g., Engineer et al., <xref ref-type="bibr" rid="B9">2008</xref> in rats; Sinnott et al., <xref ref-type="bibr" rid="B27">1976</xref> in monkeys; Sinnott and Mosteller, <xref ref-type="bibr" rid="B28">2001</xref> in gerbils). Word segmentation based on transitional probabilities has also been demonstrated, on a behavioral level, in rats (Toro and Trobalon, <xref ref-type="bibr" rid="B32">2005</xref>) as well as in cotton-top tamarins (Hauser et al., <xref ref-type="bibr" rid="B12">2001</xref>). Extraction of grammatical rules (i.e., structural patterns) from speech sounds in non-human species has been studied in tamarin monkeys and rats with stimulus conditions similar to those originally applied by Marcus et al. (<xref ref-type="bibr" rid="B18">1999</xref>). The report concerning tamarin monkeys (Hauser et al., <xref ref-type="bibr" rid="B13">2002</xref>) was later retracted (Retraction notice, <xref ref-type="bibr" rid="B1">2010</xref>). In rats, no evidence of pattern extraction was found (Toro and Trobalon, <xref ref-type="bibr" rid="B32">2005</xref>).</p>
<p>It might be, however, too early to conclude that rats are not able to extract structural patterns from three-syllabic speech sequences such as those applied in the classic study by Marcus et al. (<xref ref-type="bibr" rid="B18">1999</xref>) in infants. Since there is evidence that rats can represent abstract rules extracted from pure tones (Murphy et al., <xref ref-type="bibr" rid="B20">2008</xref>), this issue should be further explored. In the present study we applied the neurophysiological mismatch response (MMR), a measure of automatic cognition, which is the equivalent of the human electrophysiological response called mismatch negativity (MMN; N&#x000E4;&#x000E4;t&#x000E4;nen et al., <xref ref-type="bibr" rid="B22">1978</xref>, <xref ref-type="bibr" rid="B23">1997</xref>, <xref ref-type="bibr" rid="B21">2010</xref>). The MMR can reflect auditory cognition before its behavioral manifestation (e.g., Tremblay et al., <xref ref-type="bibr" rid="B33">1998</xref>). Using this method, we have previously demonstrated that the rat&#x00027;s brain is able to detect changes in abstract auditory features, such as melodic patterns in tone pairs (Ruusuvirta et al., <xref ref-type="bibr" rid="B26">2007</xref>) and combinatory rules between the frequency and intensity of sound objects (Astikainen et al., <xref ref-type="bibr" rid="B3">2006</xref>, <xref ref-type="bibr" rid="B2">2014</xref>). Rats also form representations of spectro-temporally complex sounds, such as speech sounds, in their brains, and they can detect changes in these sounds based on the content of transient memory (Ahmed et al., <xref ref-type="bibr" rid="B1a">2011</xref>). Urethane-anesthetized rats have been used in these studies, as urethane is known to largely preserve the awake-like function of the brain (Maggi and Meli, <xref ref-type="bibr" rid="B16">1986</xref>).</p>
<p>In the present study, capitalizing on the above-mentioned studies, we recorded local-field potentials (LFPs) from the dura above the auditory cortex in urethane-anesthetized rats. We presented the animals with a series of synthesized speech sounds. The stimulus series (modified from Marcus et al., <xref ref-type="bibr" rid="B18">1999</xref>) consisted of several different sequences of three pseudowords (called sentences here). Ninety percent of the sentences followed a specific pattern structure (&#x0201C;standards&#x0201D;). Acoustically novel sentences (&#x0201C;deviants&#x0201D;) were introduced rarely (10% of the sentences) and randomly in the sequences. Deviant sentences were of two different types: (1) &#x0201C;pattern-obeying deviants&#x0201D; that shared the pattern structure of the standard sentences but deviated from them physically, and (2) &#x0201C;pattern-violating deviants&#x0201D; that differed from the standards both physically and in pattern structure. We expected an early MMR to be triggered by the first pseudoword of both types of deviant sentences due to their acoustical differences from the standard pseudowords. We also expected a later MMR to be triggered by the second word of the pattern-violating deviant sentences. This would indicate that the syntax-like rule carried by the standard patterns was extracted by the animals&#x00027; brains.</p>
</sec>
<sec sec-type="materials and methods" id="s2">
<title>Materials and methods</title>
<sec>
<title>Subjects</title>
<p>The subjects were 14 male Sprague-Dawley rats from Harlan Laboratories (England, UK), weighing 410&#x02013;500 g and aged between 13 and 18 weeks at the time of the individual recordings. The animals were housed in standard plastic cages, in groups of 2&#x02013;4, under a controlled temperature and subjected to a 12 h light/dark cycle, with free access to water and food pellets in the Experimental Animal Unit of the University of Jyv&#x000E4;skyl&#x000E4;, Jyv&#x000E4;skyl&#x000E4;, Finland. The experiments were approved by the Finnish National Animal Experiment Board, and carried out in accordance with the European Communities Council Directive (86/609/EEC) regarding the care and use of animals used for experimental procedures. The license for the present experiments has been approved by County Administrative Board of Southern Finland (Permit code: ESLH-2007-00662).</p>
</sec>
<sec>
<title>Surgery</title>
<p>All surgical procedures were done under urethane (Sigma Chemicals, St Louis, MO, USA) induced anesthesia (1.2 g/kg dose, 0.24 g/ml concentration, injected intraperitoneally). Supplemental doses were injected if the required level of anesthesia was not obtained. The level of anesthesia was monitored by testing the withdrawal reflexes. The anesthetized animal was moved into a Faraday cage and mounted in a standard stereotaxic frame (David Kopf Instruments, Model 962, Tujunga, CA, USA). The animal&#x00027;s head was fixed to the stereotaxic frame using blunt ear bars. Under additional local anesthesia (lidocaine 20%, Orion Pharma, Espoo, Finland), the skin was removed from the top of the head and the skull was exposed. Positioned contralaterally to the recording site, two stainless steel skull screws (0.9 mm diameter, World Precision Instruments, Berlin, Germany) fixed above the cerebellum (AP &#x02212;11.0, ML 3.0) and frontal cortex (AP &#x0002B;4.0, ML 3.0) served as the reference and ground electrodes, respectively. A headstage, composed of a screw and dental acrylic, was attached to the right prefrontal part of the skull to hold the head in place and allow removal of the right ear bar. A unilateral craniotomy was performed to expose a 2 &#x000D7; 2 mm region over the left auditory cortex (4.5&#x02013;6.5 mm posterior to the bregma and 2&#x02013;4 mm lateral to the bony ridge between the dorsal and lateral skull surfaces) for the placement of the recording electrode. The level of anesthesia was monitored periodically throughout the whole experiment. The animals were rehydrated with a 2 ml subcutaneous injection of saline every 2 h. After the surgery, the right ear bar was removed and the recording started. After the experiment, the animals were further anesthetized with urethane and then put down by cervical dislocation.</p>
</sec>
<sec>
<title>Recording</title>
<p>Local-field potentials in response to the auditory stimuli were recorded with a teflon-coated stainless steel wire (200 &#x003BC;m in diameter, A-M Systems, Chantilly, VA) positioned on the dural surface above the left auditory cortex. The continuous electrocorticogram was first amplified 10-fold using an AI 405 amplifier (Molecular Devices Corporation, Union City, CA, USA), high-pass filtered at 0.1 Hz, amplified a further 200-fold, and low-pass filtered at 400 Hz (CyberAmp 380, Molecular Devices Corporation), and finally sampled with 16-bit precision at 2 kHz (DigiData 1320A, Molecular Devices Corporation). The data were stored on a computer hard disk using Axoscope 9.0 data acquisition software (Molecular Devices Corporation) for later off-line analysis.</p>
</sec>
<sec>
<title>Stimuli</title>
<p>Synthesized human male voice speech sounds, which consisted of five formants, were created using Mikropuhe 5 software (Timehouse, Helsinki, Finland). The speech sound stream consisted of consonant-vowel syllables (words) that were 100 ms in duration. These were presented in groups of three (modified from Marcus et al., <xref ref-type="bibr" rid="B18">1999</xref>). There was a 50-ms pause between consecutive words within a sentence and a 100-ms pause between sentences.</p>
<p>One of the two stimulus blocks (1 or 2) was presented to each animal (<italic>n</italic> &#x0003D; 7 for both blocks; see Table <xref ref-type="table" rid="T1">1</xref>). In each block, 90% of the sentences (&#x0201C;standards&#x0201D;) followed a specific structure. For one block, this structure was of the AAB type (two identical words followed by a different word), and for the other block of the ABB type (one word followed by two identical words). In each block, one structure was assigned to the standards (16 different variants, <italic>p</italic> &#x0003D; 0.9) and to the pattern-obeying deviants, and the other structure to the pattern-violating deviants. The deviants (<italic>p</italic> &#x0003D; 0.1) were thus of two different types: (1) &#x0201C;pattern-obeying deviants&#x0201D; (2 variants, <italic>p</italic> &#x0003D; 0.05) that physically differed from the standards but obeyed the structure of the standard sentences, and (2) &#x0201C;pattern-violating deviants&#x0201D; (2 variants, <italic>p</italic> &#x0003D; 0.05) that differed from the standard sentences both physically and in pattern structure. Since all the stimulus types included a repetition of an element, they could not be differentiated by detecting this property of the stimulus alone. The sentences were ordered in a pseudorandom fashion with the restriction that consecutive deviants were separated by at least two standards. There were a total of 996 stimulus sequences in one stimulus block.</p>
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p><bold>Stimulus categories and sequence variants</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th/>
<th align="left"><bold>Stimulus categories</bold></th>
<th align="left"><bold>Sequence variants</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Stimulus block 1</td>
<td align="left">Standard &#x0201C;A-A-B&#x0201D; (90%)</td>
<td align="left">LE-LE-JE; LE-LE-WE; LE-LE-DI; LE-LE-LI; WI-WI-JE; WI-WI-WE; WI-WI-DI; WI-WI-LI; JI-JI-JE; JI-JI-WE; JI-JI-DI; JI-JI-LI; DE-DE-JE; DE-DE-WE; DE-DE-DI; DE-DE-LI</td>
</tr>
<tr>
<td/>
<td align="left">Pattern-obeying deviant &#x0201C;A-A-B&#x0201D; (5%)</td>
<td align="left">BA-BA-BO, KO-KO-GE</td>
</tr>
<tr>
<td/>
<td align="left">Pattern-violating deviant &#x0201C;A-B-B&#x0201D; (5%)</td>
<td align="left">BA-PO-PO, KO-GA-GA</td>
</tr>
<tr>
<td align="left">Stimulus block 2</td>
<td align="left">Standard &#x0201C;A-B-B&#x0201D; (90%)</td>
<td align="left">LE-JE-JE; LE-WE-WE; LE-DI-DI; LE-LI-LI; WI-JE-JE; WI-WE-WE; WI-DI-DI; WI-LI-LI; JI-JE-JE; JI-WE-WE; JI-DI-DI; JI-LI-LI; DE-JE-JE; DE-WE-WE; DE-DI-DI; DE-LI-LI</td>
</tr>
<tr>
<td/>
<td align="left">Pattern-obeying deviant &#x0201C;A-B-B&#x0201D; (5%)</td>
<td align="left">BA-BO-BO, KO-GE-GE</td>
</tr>
<tr>
<td/>
<td align="left">Pattern-violating deviant &#x0201C;A-A-B&#x0201D; (5%)</td>
<td align="left">BA-BA-PO, KO-KO-GA</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>One of the structures (AAB or ABB, in different stimulus blocks) was assigned to the standards and pattern-obeying deviants. The other structure was assigned to the pattern-violating deviants. Sixteen variants of standard sentences were used in both stimulus blocks to exclude the possibility of the standards being memorized by the brain as individual objects. For both types of deviants, two variants were applied per stimulus block. The percentages refer to the proportion of each stimulus category out of the total number of sentences (996). Stimulus block 1 was applied to one animal group (n &#x0003D; 7) and stimulus block 2 to the other animal group (n &#x0003D; 7).</italic></p>
</table-wrap-foot>
</table-wrap>
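<p>As an illustration, a stimulus block satisfying the constraints above (16 standard variants at <italic>p</italic> &#x0003D; 0.9, two deviant types at <italic>p</italic> &#x0003D; 0.05 each, consecutive deviants separated by at least two standards) could be assembled as sketched below. This is not the authors&#x00027; stimulus-generation code; the gap-distribution scheme and all names are our own assumptions that merely enforce the stated constraints, using the stimulus block 1 syllables from Table 1.</p>

```python
import random

# Stimulus block 1 (Table 1): AAB standards and pattern-obeying
# deviants, ABB pattern-violating deviants.
STANDARDS = [(a, a, b) for a in ("LE", "WI", "JI", "DE")
             for b in ("JE", "WE", "DI", "LI")]         # 16 variants, p = 0.90
OBEYING = [("BA", "BA", "BO"), ("KO", "KO", "GE")]      # p = 0.05
VIOLATING = [("BA", "PO", "PO"), ("KO", "GA", "GA")]    # p = 0.05

def make_block(n_total=996, p_deviant=0.10, min_gap=2, seed=0):
    """Order the sentences pseudorandomly so that any two consecutive
    deviants are separated by at least `min_gap` standards."""
    rng = random.Random(seed)
    n_dev = round(n_total * p_deviant)                  # 100 deviants in total
    deviants = [("obeying", rng.choice(OBEYING)) for _ in range(n_dev // 2)]
    deviants += [("violating", rng.choice(VIOLATING)) for _ in range(n_dev - n_dev // 2)]
    rng.shuffle(deviants)
    # Distribute the standards into the n_dev + 1 gaps around the deviants,
    # giving every interior gap at least `min_gap` standards.
    n_std = n_total - n_dev
    gaps = [0] + [min_gap] * (n_dev - 1) + [0]
    for _ in range(n_std - min_gap * (n_dev - 1)):
        gaps[rng.randrange(n_dev + 1)] += 1
    block = []
    for i, deviant in enumerate(deviants):
        block += [("standard", rng.choice(STANDARDS)) for _ in range(gaps[i])]
        block.append(deviant)
    block += [("standard", rng.choice(STANDARDS)) for _ in range(gaps[-1])]
    return block
```

With the defaults this yields 996 sentences: 896 standards, 50 pattern-obeying, and 50 pattern-violating deviants.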
<p>The speech sounds were played from a PC via an active loudspeaker system (Studiopro 3, M-Audio, Irwindale, CA, USA). The loudspeaker was directed toward the right ear of the animal at a distance of 20 cm. In all conditions, the sound pressure level for each stimulus was 70 dB, as measured with a sound level meter (type 2235, Bruel and Kjaer, N&#x000E6;rum, Denmark) with C-weighting (optimized for 40&#x02013;100 dB measurements) in the vicinity of the animal&#x00027;s right pinna during the recording.</p>
</sec>
<sec>
<title>Analysis</title>
<p>The data were off-line filtered at 0.1&#x02013;30 Hz (24 dB/octave roll-off). The data of the two animal groups (stimulus blocks 1 and 2) were averaged together. Sweeps from 50 ms before to 500 ms after each stimulus onset were segmented. In order to have the same number of standard and deviant responses in the analysis, only the responses to the standard sentences immediately preceding the deviant sentences were analyzed. The averaged waveforms were then baseline-corrected. The baseline correction was calculated for the period of &#x02212;50 to 0 ms relative to the onset of the second word in the sentence, since the change in pattern occurred at that point in the pattern-violating deviants.</p>
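<p>The segmentation and baseline correction described above can be sketched as follows, for a continuous recording sampled at 2 kHz with the second word beginning 150 ms after sentence onset (100-ms word plus 50-ms pause). This is an illustrative reconstruction rather than the authors&#x00027; analysis code; the variable names and the list-based signal representation are our own assumptions, and the baseline is applied per sweep here, which is equivalent to baselining the average because the operation is linear.</p>

```python
# Epoching and baseline correction: sweeps from 50 ms before to 500 ms
# after each sentence onset; baseline taken from -50 to 0 ms relative
# to the onset of the second word.
FS = 2000                 # sampling rate, Hz
MS = FS // 1000           # samples per millisecond
PRE_MS, POST_MS = 50, 500
WORD2_MS = 150            # second-word onset, ms after sentence onset

def epoch(signal, onsets):
    """Cut one fixed-length sweep per sentence onset (onsets in samples)."""
    return [signal[s - PRE_MS * MS : s + POST_MS * MS] for s in onsets]

def baseline_correct(sweep):
    """Subtract the mean of the -50 to 0 ms pre-second-word window."""
    b0 = (PRE_MS + WORD2_MS - 50) * MS
    b1 = (PRE_MS + WORD2_MS) * MS
    base = sum(sweep[b0:b1]) / (b1 - b0)
    return [v - base for v in sweep]
```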
<p>First, the timing of the MMR was investigated by applying point-by-point two-tailed paired <italic>t</italic>-tests to compare the local-field potential amplitudes for the standard and deviant sentences. <italic>P</italic>-values smaller than or equal to 0.05 for at least 20 consecutive sample points (i.e., for a period of 10 ms) were required for the difference in local-field potentials to be considered robust. Next, an ANOVA with the factors stimulus type (standard vs. deviant) and deviant type (pattern-obeying deviant vs. pattern-violating deviant) was applied to the MMR specific to the pattern-violating deviant sentences. For the ANOVA, mean amplitude values were extracted from the latency range of the significant differential response indicated by the point-by-point <italic>t</italic>-tests. Partial eta squared values are reported as effect-size estimates for the ANOVA and Cohen&#x00027;s <italic>d</italic> for the <italic>t</italic>-tests.</p>
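<p>The point-by-point statistical criterion (two-tailed paired <italic>t</italic>-tests significant over at least 20 consecutive samples, i.e., 10 ms at 2 kHz) can be sketched as below. This is an illustration written for this text, not the authors&#x00027; code; for simplicity it compares the absolute <italic>t</italic> statistic against the two-tailed critical value for 13 degrees of freedom (2.160) rather than computing exact <italic>p</italic>-values.</p>

```python
from statistics import mean, stdev

T_CRIT = 2.160  # two-tailed critical t for df = 13, alpha = 0.05

def paired_t(xs, ys):
    """Paired-samples t statistic for two equal-length lists."""
    diffs = [x - y for x, y in zip(xs, ys)]
    return mean(diffs) / (stdev(diffs) / len(diffs) ** 0.5)

def significant_windows(std_lfp, dev_lfp, min_run=20):
    """Run a paired t-test at every sample point across animals and
    return the (start, end) sample-index ranges where the difference
    stays significant for at least `min_run` consecutive samples."""
    n_samples = len(std_lfp[0])
    sig = [abs(paired_t([animal[i] for animal in dev_lfp],
                        [animal[i] for animal in std_lfp])) > T_CRIT
           for i in range(n_samples)]
    windows, start = [], None
    for i, s in enumerate(sig + [False]):   # sentinel flushes the last run
        if s and start is None:
            start = i
        elif not s and start is not None:
            if i - start >= min_run:
                windows.append((start, i - 1))
            start = None
    return windows
```

Here `std_lfp` and `dev_lfp` would each hold one averaged waveform per animal (14 rows), so each point-by-point test is paired across animals.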
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>Results</title>
<p>The first MMR, i.e., an amplitude difference in local-field potentials between the standard and the deviant sentences, was found for both the pattern-violating deviant sentences (Figure <xref ref-type="fig" rid="F1">1</xref>, left) and the pattern-obeying deviant sentences (Figure <xref ref-type="fig" rid="F1">1</xref>, right). This first MMR for the pattern-violating deviant sentences was significant at 194&#x02013;213 ms after the sentence onset [<italic>t</italic><sub>(13)</sub> &#x0003D; 2.2&#x02013;2.7, <italic>p</italic> &#x0003D; 0.020&#x02013;0.047] and at 231.5&#x02013;251 ms after the sentence onset [<italic>t</italic><sub>(13)</sub> &#x0003D; 2.2&#x02013;2.3, <italic>p</italic> &#x0003D; 0.039&#x02013;0.050]. For the pattern-obeying deviant sentences, the corresponding latency ranges were 187.5&#x02013;206.5 ms after the sentence onset [<italic>t</italic><sub>(13)</sub> &#x0003D; 2.2&#x02013;2.4, <italic>p</italic> &#x0003D; 0.033&#x02013;0.050] and 228&#x02013;261 ms after the sentence onset [<italic>t</italic><sub>(13)</sub> &#x0003D; 2.2&#x02013;2.8, <italic>p</italic> &#x0003D; 0.016&#x02013;0.048].</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p><bold>Local-field potentials in response to pseudo-sentences</bold>. Responses to pattern-violating deviants and the standard sentences immediately preceding them (left), and responses to pattern-obeying deviants and the standard sentences immediately preceding them (right). The horizontal black bars represent the three pseudowords, each 100 ms in duration; the words within a triplet were presented at a 150-ms stimulus-onset asynchrony. The gray arrows refer to the onset of the first word of a deviant sentence, which physically differed from the standards; the black arrow in the left figure refers to the onset of the structural change present only in the pattern-violating deviants. The two time scales at the bottom of the left figure refer to the two different onsets of the deviances in the pattern-violating deviant sentences (onset of the physical difference&#x02014;the gray time line; onset of the pattern-related difference&#x02014;the black time line). Shaded rectangles illustrate the time windows of significant amplitude differences (<italic>p</italic> &#x0003C; 0.05) between the two waveforms as indicated by the point-by-point <italic>t</italic>-tests.</p></caption>
<graphic xlink:href="fnins-08-00374-g0001.tif"/>
</fig>
<p>The second MMR was found only for the pattern-violating deviant sentences, in which the second word violated, at a low probability (0.05), the pattern that the rest of the sentences followed (probability 0.95). The latency of this second MMR was 217.5&#x02013;316.5 ms from the onset of the second word [<italic>t</italic><sub>(13)</sub> &#x0003D; 2.2&#x02013;3.6, <italic>p</italic> &#x0003D; 0.003&#x02013;0.050] (Figure <xref ref-type="fig" rid="F1">1</xref>, left).</p>
<p>Next, an ANOVA was conducted comparing the responses to the pattern-violating and pattern-obeying deviants and their preceding standards in the time window in which the second MMR was found (i.e., 217.5&#x02013;316.5 ms from the onset of the second word). A significant interaction effect of stimulus type &#x000D7; deviant type was found [<italic>F</italic><sub>(1, 13)</sub> &#x0003D; 8.7, <italic>p</italic> &#x0003D; 0.011, &#x003B7;<sup>2</sup><sub>p</sub> &#x0003D; 0.401]; the main effects were non-significant. Responses to the pattern-violating deviant sequences and to the preceding standard sequences differed significantly [<italic>t</italic><sub>(13)</sub> &#x0003D; 3.5, <italic>p</italic> &#x0003D; 0.004, <italic>d</italic> &#x0003D; 1.02]. The corresponding difference was non-significant for the pattern-obeying deviants and their preceding standards [<italic>t</italic><sub>(13)</sub> &#x0003D; 0.7, <italic>p</italic> &#x0003D; 0.525, <italic>d</italic> &#x0003D; 0.23]. Figure <xref ref-type="fig" rid="F2">2</xref> depicts the mean amplitude values, standard deviations, and individual subjects&#x00027; amplitude values for the differential responses.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p><bold>Mean amplitude values, standard deviation and scatterplots for the individual animals&#x00027; amplitude values for the second MMR (217.5&#x02013;316.5 ms from the onset of the second word)</bold>. Differential LFPs (deviant - standard) to pattern-obeying and pattern-violating deviant sentences.</p></caption>
<graphic xlink:href="fnins-08-00374-g0002.tif"/>
</fig>
</sec>
<sec sec-type="discussion" id="s4">
<title>Discussion</title>
<p>Both types of deviant sentences, pattern-obeying and pattern-violating, were detected from among the repeated standard sentences by the rat brain, as indexed by the electrophysiological mismatch response. The earlier difference, starting at 187.5 ms after the sentence onset, was most probably elicited by the physical novelty of the deviant sounds, since the probability of each standard first word was 22.5% whereas that of each deviant first word was 5%. An additional mismatch response, starting at 217.5 ms from the onset of the pattern change, was found specifically for the deviant sound sequences whose pattern structure differed from that of the frequently presented standard sequences. This finding suggests that anesthetized rats are able to extract structural patterns from a speech stream presented at a fast pace and to generalize this information to new items (since the deviant sentences differed physically from the standard sentences). Namely, in order to detect the pattern-violating deviant sequences, the animals&#x00027; brains needed to form a representation of the structure of the frequently presented &#x0201C;standard&#x0201D; sequences (N&#x000E4;&#x000E4;t&#x000E4;nen et al., <xref ref-type="bibr" rid="B24">2001</xref>, <xref ref-type="bibr" rid="B21">2010</xref>).</p>
<p>There is previous evidence of non-human animals&#x00027; ability to extract grammatical rules from speech sounds. Common marmosets (New World monkeys) detected grammatical differences based on simpler learning strategies than Rhesus monkeys (Old World monkeys) (Wilson et al., <xref ref-type="bibr" rid="B35">2013</xref>). A similar ability for rule extraction has previously been reported for sinusoidal sounds in rats (Murphy et al., <xref ref-type="bibr" rid="B20">2008</xref>) and for species-specific vocalizations in songbirds (e.g., Gentner et al., <xref ref-type="bibr" rid="B10">2006</xref>). In human infants, there is evidence that rule-like regularities are learned more easily from speech than from other auditory material (Marcus et al., <xref ref-type="bibr" rid="B17">2007</xref>). It is not known whether this preference is related to the linguistic potential of the infant&#x00027;s brain, the familiarity of the speech sounds, or some other factor. Future studies in non-human animals could shed light on this issue.</p>
<p>In the present study we tested the rats&#x00027; ability to detect pattern violations in speech sound sequences that all included a repetition of an element. Therefore, the sequences could not be differentiated by detecting this property of the stimulus alone. On the other hand, the generalizability of the present results may be restricted to stimuli in which the pattern is defined as a repetition of an element and only the position of the repetition in the three-syllabic sequence is varied. Humans are particularly sensitive to rules that are expressed as a repetition of an element at the edges of a sequence (Endress et al., <xref ref-type="bibr" rid="B8">2005</xref>). In our experiment, the repetitions were always at an edge of the sequence. It is thus unclear to what extent the present results in rats can be generalized to other types of rules. Furthermore, the types of rules applied in previous studies on rule extraction have been under debate (Gentner et al., <xref ref-type="bibr" rid="B11">2010</xref>; ten Cate et al., <xref ref-type="bibr" rid="B31">2010</xref>). Thus far, studies in songbirds have made progress in solving this problem (e.g., van Heijningen et al., <xref ref-type="bibr" rid="B34">2013</xref>), but open questions remain (ten Cate and Okanoya, <xref ref-type="bibr" rid="B30">2012</xref>). Electrophysiological methods, which provide accurate information on the timing of neural activity (recorded in animals and humans), would be a feasible addition when studying the different levels of cognitive complexity required in rule extraction. In humans, event-related potentials have been utilized to study the processing of non-adjacent dependencies, i.e., AXC structures in which the first and the last elements are dependent (De Diego Balaguer et al., <xref ref-type="bibr" rid="B4">2007</xref>; Mueller et al., <xref ref-type="bibr" rid="B19">2009</xref>), and of structural rules (ABB vs. ABA; Sun et al., <xref ref-type="bibr" rid="B29">2012</xref>) in speech sounds.</p>
<p>Previous behavioral research failed to find evidence for rule extraction from speech sounds in rats (Toro and Trobalon, <xref ref-type="bibr" rid="B32">2005</xref>). In that study, rats were presented with three-syllabic sequences of speech sounds similar to those in the third experiment of Marcus et al. (<xref ref-type="bibr" rid="B18">1999</xref>). Our stimuli were nearly identical, and the variability in the &#x0201C;standard&#x0201D; and &#x0201C;deviant&#x0201D; sequences was also the same (16 standard variants and two deviant variants of each deviant type). In the study by Toro and Trobalon (<xref ref-type="bibr" rid="B32">2005</xref>), rats indicated the detection of the pattern violation by pressing a lever. The present positive finding may thus be related to the methodology used. Namely, the mismatch response is known to be capable of probing auditory cognition regardless of its behavioral manifestations (Tremblay et al., <xref ref-type="bibr" rid="B33">1998</xref>). This method can bypass a wide range of behavior-related factors, such as motivation, attention, or the requirement of overt responses. However, the constraints of such non-behavioral measures should also be acknowledged: it is unclear whether the ability demonstrated here can support behavioral adaptation in rats. Nevertheless, its existence in an animal species that does not use complex sequences of calls in intra-species communication (in contrast to human speech or birdsong, e.g., Doupe and Kuhl, <xref ref-type="bibr" rid="B7">1999</xref>; Gentner et al., <xref ref-type="bibr" rid="B10">2006</xref>) supports the notion of its non-linguistic origin. Moreover, these findings endorse the view that even the most complex functions, quintessentially considered inherent to the human brain alone, may in fact be represented in a primitive form (N&#x000E4;&#x000E4;t&#x000E4;nen et al., <xref ref-type="bibr" rid="B21">2010</xref>) in brains thus far considered evolutionarily incapable of such operations. Since the extraction of rule-like patterns from serially presented, spectro-temporally complex sounds is one of the mechanisms utilized by humans in receptive language learning, the present results might imply that some of the mechanisms supporting human language learning did not evolve solely for language.</p>
<p>In conclusion, the present results demonstrate the ability of the anesthetized rat brain to detect and represent a common abstract rule, or pattern, obeyed by a sequence of speech-like sound stimuli with wide acoustic variation. These results thus add substantially to the evidence for an automatic sensory-cognitive core of cognitive function that is shared by humans and other, at least higher, species, at different developmental stages, and even in different states of consciousness, as proposed by N&#x000E4;&#x000E4;t&#x000E4;nen et al. (<xref ref-type="bibr" rid="B24">2001</xref>, <xref ref-type="bibr" rid="B21">2010</xref>).</p>
</sec>
<sec>
<title>Author contributions</title>
<p>All authors contributed substantially to the conception and design of the work. Tanel M&#x000E4;llo, Timo Ruusuvirta, and Piia Astikainen contributed to the acquisition and analysis of the data, and all authors contributed to the interpretation of the data. Tanel M&#x000E4;llo and Piia Astikainen drafted the work, and Timo Ruusuvirta and Risto N&#x000E4;&#x000E4;t&#x000E4;nen contributed to revising it critically for important intellectual content. Final approval of the version to be published was obtained from all authors, who also agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.</p>
<sec>
<title>Conflict of interest statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p></sec>
</sec>
</body>
<back>
<ack>
<p>This work was supported by the Academy of Finland (grant numbers 127595 and 273134 to Piia Astikainen and grant number 122743 to Risto N&#x000E4;&#x000E4;t&#x000E4;nen). The authors are grateful to Petri Kinnunen for preparing the stimulus materials, to M.Sc. Mustak Ahmed for assisting in the electrophysiological recordings, and to Dr. Markku Penttonen for technical help.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="B1a">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ahmed</surname> <given-names>M.</given-names></name> <name><surname>M&#x000E4;llo</surname> <given-names>T.</given-names></name> <name><surname>Lepp&#x000E4;nen</surname> <given-names>P. H. T.</given-names></name> <name><surname>H&#x000E4;m&#x000E4;l&#x000E4;inen</surname> <given-names>J.</given-names></name> <name><surname>&#x000C4;yr&#x000E4;v&#x000E4;inen</surname> <given-names>L.</given-names></name> <name><surname>Ruusuvirta</surname> <given-names>T.</given-names></name> <etal/></person-group>. (<year>2011</year>). <article-title>Mismatch brain response to speech sound changes in rats</article-title>. <source>Front. Psychol</source>. <volume>2</volume>:<issue>283</issue>. <pub-id pub-id-type="doi">10.3389/fpsyg.2011.00283</pub-id><pub-id pub-id-type="pmid">22059082</pub-id></citation>
</ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Astikainen</surname> <given-names>P.</given-names></name> <name><surname>Ruusuvirta</surname> <given-names>T.</given-names></name> <name><surname>N&#x000E4;&#x000E4;t&#x000E4;nen</surname> <given-names>R.</given-names></name></person-group> (<year>2014</year>). <article-title>Rapid categorization of sound objects in anesthetized rats as indexed by the electrophysiological mismatch response</article-title>. <source>Psychophysiology</source> <volume>51</volume>, <fpage>1195</fpage>&#x02013;<lpage>1199</lpage>. <pub-id pub-id-type="doi">10.1111/psyp.12284</pub-id><pub-id pub-id-type="pmid">24981508</pub-id></citation>
</ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Astikainen</surname> <given-names>P.</given-names></name> <name><surname>Ruusuvirta</surname> <given-names>T.</given-names></name> <name><surname>Wikgren</surname> <given-names>J.</given-names></name> <name><surname>Penttonen</surname> <given-names>M.</given-names></name></person-group> (<year>2006</year>). <article-title>Memory-based detection of rare sound feature combinations in anesthetized rats</article-title>. <source>Neuroreport</source> <volume>17</volume>, <fpage>1561</fpage>&#x02013;<lpage>1564</lpage>. <pub-id pub-id-type="doi">10.1097/01.wnr.0000233097.13032.7d</pub-id><pub-id pub-id-type="pmid">16957608</pub-id></citation>
</ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>De Diego Balaguer</surname> <given-names>R.</given-names></name> <name><surname>Toro</surname> <given-names>J. M.</given-names></name> <name><surname>Rodriguez-Fornells</surname> <given-names>A.</given-names></name> <name><surname>Bachoud-Levi</surname> <given-names>A. C.</given-names></name></person-group> (<year>2007</year>). <article-title>Different neurophysiological mechanisms underlying word and rule extraction from speech</article-title>. <source>PLoS ONE</source> <volume>2</volume>:<fpage>e1175</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0001175</pub-id><pub-id pub-id-type="pmid">18000546</pub-id></citation>
</ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dooling</surname> <given-names>R. J.</given-names></name> <name><surname>Brown</surname> <given-names>S. D.</given-names></name></person-group> (<year>1990</year>). <article-title>Speech perception by budgerigars (<italic>Melopsittacus undulatus</italic>): spoken vowels</article-title>. <source>Percept. Psychophys</source>. <volume>47</volume>, <fpage>568</fpage>&#x02013;<lpage>574</lpage>. <pub-id pub-id-type="doi">10.3758/BF03203109</pub-id><pub-id pub-id-type="pmid">2367177</pub-id></citation>
</ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Doupe</surname> <given-names>A. J.</given-names></name> <name><surname>Kuhl</surname> <given-names>P. K.</given-names></name></person-group> (<year>1999</year>). <article-title>Birdsong and human speech: common themes and mechanisms</article-title>. <source>Ann. Rev. Neurosci</source>. <volume>22</volume>, <fpage>567</fpage>&#x02013;<lpage>631</lpage>. <pub-id pub-id-type="doi">10.1146/annurev.neuro.22.1.567</pub-id><pub-id pub-id-type="pmid">10202549</pub-id></citation>
</ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Endress</surname> <given-names>A. D.</given-names></name> <name><surname>Scholl</surname> <given-names>B. J.</given-names></name> <name><surname>Mehler</surname> <given-names>J.</given-names></name></person-group> (<year>2005</year>). <article-title>The role of salience in the extraction of algebraic rules</article-title>. <source>J. Exp. Psychol. Gen</source>. <volume>134</volume>, <fpage>406</fpage>&#x02013;<lpage>419</lpage>. <pub-id pub-id-type="doi">10.1037/0096-3445.134.3.406</pub-id><pub-id pub-id-type="pmid">16131271</pub-id></citation>
</ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Engineer</surname> <given-names>C. T.</given-names></name> <name><surname>Perez</surname> <given-names>C. A.</given-names></name> <name><surname>Chen</surname> <given-names>Y. H.</given-names></name> <name><surname>Carraway</surname> <given-names>R. S.</given-names></name> <name><surname>Reed</surname> <given-names>A. C.</given-names></name> <name><surname>Shetake</surname> <given-names>J. A.</given-names></name> <etal/></person-group>. (<year>2008</year>). <article-title>Cortical activity patterns predict speech discrimination ability</article-title>. <source>Nat. Neurosci</source>. <volume>11</volume>, <fpage>603</fpage>&#x02013;<lpage>608</lpage>. <pub-id pub-id-type="doi">10.1038/nn.2109</pub-id><pub-id pub-id-type="pmid">18425123</pub-id></citation>
</ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gentner</surname> <given-names>T. Q.</given-names></name> <name><surname>Fenn</surname> <given-names>K. M.</given-names></name> <name><surname>Margoliash</surname> <given-names>D.</given-names></name> <name><surname>Nusbaum</surname> <given-names>H. C.</given-names></name></person-group> (<year>2006</year>). <article-title>Recursive syntactic pattern learning by songbirds</article-title>. <source>Nature</source> <volume>440</volume>, <fpage>1204</fpage>&#x02013;<lpage>1207</lpage>. <pub-id pub-id-type="doi">10.1038/nature04675</pub-id><pub-id pub-id-type="pmid">16641998</pub-id></citation>
</ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gentner</surname> <given-names>T. Q.</given-names></name> <name><surname>Fenn</surname> <given-names>K. M.</given-names></name> <name><surname>Margoliash</surname> <given-names>D.</given-names></name> <name><surname>Nusbaum</surname> <given-names>H. C.</given-names></name></person-group> (<year>2010</year>). <article-title>Simple stimuli, simple strategies</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A</source>. <volume>107</volume>:<fpage>E65</fpage>. <pub-id pub-id-type="doi">10.1073/pnas.1000501107</pub-id><pub-id pub-id-type="pmid">20388905</pub-id></citation>
</ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hauser</surname> <given-names>M. D.</given-names></name> <name><surname>Newport</surname> <given-names>E. L.</given-names></name> <name><surname>Aslin</surname> <given-names>R. N.</given-names></name></person-group> (<year>2001</year>). <article-title>Segmentation of the speech stream in a non-human primate: statistical learning in cotton-top tamarins</article-title>. <source>Cognition</source> <volume>78</volume>, <fpage>B53</fpage>&#x02013;<lpage>B64</lpage>. <pub-id pub-id-type="doi">10.1016/S0010-0277(00)00132-3</pub-id><pub-id pub-id-type="pmid">11124355</pub-id></citation>
</ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hauser</surname> <given-names>M. D.</given-names></name> <name><surname>Weiss</surname> <given-names>D.</given-names></name> <name><surname>Marcus</surname> <given-names>G.</given-names></name></person-group> (<year>2002</year>). <article-title>Rule learning by cotton-top tamarins</article-title>. <source>Cognition</source> <volume>86</volume>, <fpage>B15</fpage>&#x02013;<lpage>B22</lpage>. <pub-id pub-id-type="doi">10.1016/S0010-0277(02)00139-7</pub-id><pub-id pub-id-type="pmid">12208654</pub-id></citation>
</ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kraus</surname> <given-names>N.</given-names></name> <name><surname>McGee</surname> <given-names>T.</given-names></name> <name><surname>Carrell</surname> <given-names>T.</given-names></name> <name><surname>King</surname> <given-names>C.</given-names></name> <name><surname>Littman</surname> <given-names>T.</given-names></name> <name><surname>Nicol</surname> <given-names>T.</given-names></name></person-group> (<year>1994</year>). <article-title>Discrimination of speech-like contrasts in the auditory thalamus and cortex</article-title>. <source>J. Acoust. Soc. Am</source>. <volume>96</volume>, <fpage>2758</fpage>&#x02013;<lpage>2768</lpage>. <pub-id pub-id-type="doi">10.1121/1.411282</pub-id><pub-id pub-id-type="pmid">7983281</pub-id></citation>
</ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Maggi</surname> <given-names>C. A.</given-names></name> <name><surname>Meli</surname> <given-names>A.</given-names></name></person-group> (<year>1986</year>). <article-title>Suitability of urethane anesthesia for physiopharmacological investigations in various systems Part 1: general considerations</article-title>. <source>Experientia</source> <volume>42</volume>, <fpage>109</fpage>&#x02013;<lpage>114</lpage>. <pub-id pub-id-type="doi">10.1007/BF01952426</pub-id><pub-id pub-id-type="pmid">2868911</pub-id></citation>
</ref>
<ref id="B17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Marcus</surname> <given-names>G. F.</given-names></name> <name><surname>Fernandes</surname> <given-names>K. J.</given-names></name> <name><surname>Johnson</surname> <given-names>S. J.</given-names></name></person-group> (<year>2007</year>). <article-title>Infant rule learning facilitated by speech</article-title>. <source>Psychol. Sci</source>. <volume>18</volume>, <fpage>387</fpage>&#x02013;<lpage>391</lpage>. <pub-id pub-id-type="doi">10.1111/j.1467-9280.2007.01910.x</pub-id><pub-id pub-id-type="pmid">17576276</pub-id></citation>
</ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Marcus</surname> <given-names>G. F.</given-names></name> <name><surname>Vijayan</surname> <given-names>S.</given-names></name> <name><surname>BandiRao</surname> <given-names>S.</given-names></name> <name><surname>Vishton</surname> <given-names>P. M.</given-names></name></person-group> (<year>1999</year>). <article-title>Rule learning by seven-month-old infants</article-title>. <source>Science</source> <volume>283</volume>, <fpage>77</fpage>&#x02013;<lpage>80</lpage>. <pub-id pub-id-type="doi">10.1126/science.283.5398.77</pub-id><pub-id pub-id-type="pmid">9872745</pub-id></citation>
</ref>
<ref id="B19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mueller</surname> <given-names>J. L.</given-names></name> <name><surname>Oberecker</surname> <given-names>R.</given-names></name> <name><surname>Friederici</surname> <given-names>A. D.</given-names></name></person-group> (<year>2009</year>). <article-title>Syntactic learning by mere exposure - an ERP study in adult learners</article-title>. <source>BMC Neurosci</source>. <volume>10</volume>:<fpage>89</fpage>. <pub-id pub-id-type="doi">10.1186/1471-2202-10-89</pub-id><pub-id pub-id-type="pmid">19640301</pub-id></citation>
</ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Murphy</surname> <given-names>R. A.</given-names></name> <name><surname>Mondragon</surname> <given-names>E.</given-names></name> <name><surname>Murphy</surname> <given-names>V. A.</given-names></name></person-group> (<year>2008</year>). <article-title>Rule learning by rats</article-title>. <source>Science</source> <volume>319</volume>, <fpage>1849</fpage>&#x02013;<lpage>1851</lpage>. <pub-id pub-id-type="doi">10.1126/science.1151564</pub-id><pub-id pub-id-type="pmid">18369151</pub-id></citation>
</ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>N&#x000E4;&#x000E4;t&#x000E4;nen</surname> <given-names>R.</given-names></name> <name><surname>Astikainen</surname> <given-names>P.</given-names></name> <name><surname>Ruusuvirta</surname> <given-names>T.</given-names></name> <name><surname>Huotilainen</surname> <given-names>M.</given-names></name></person-group> (<year>2010</year>). <article-title>Automatic auditory intelligence: an expression of the sensory-cognitive core of cognitive processes</article-title>. <source>Brain Res. Rev</source>. <volume>64</volume>, <fpage>123</fpage>&#x02013;<lpage>136</lpage>. <pub-id pub-id-type="doi">10.1016/j.brainresrev.2010.03.001</pub-id><pub-id pub-id-type="pmid">20298716</pub-id></citation>
</ref>
<ref id="B22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>N&#x000E4;&#x000E4;t&#x000E4;nen</surname> <given-names>R.</given-names></name> <name><surname>Gaillard</surname> <given-names>A. W.</given-names></name> <name><surname>M&#x000E4;ntysalo</surname> <given-names>S.</given-names></name></person-group> (<year>1978</year>). <article-title>Early selective-attention effect on evoked potential reinterpreted</article-title>. <source>Acta Psychol</source>. <volume>42</volume>, <fpage>313</fpage>&#x02013;<lpage>329</lpage>. <pub-id pub-id-type="doi">10.1016/0001-6918(78)90006-9</pub-id><pub-id pub-id-type="pmid">685709</pub-id></citation>
</ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>N&#x000E4;&#x000E4;t&#x000E4;nen</surname> <given-names>R.</given-names></name> <name><surname>Lehtokoski</surname> <given-names>A.</given-names></name> <name><surname>Lennes</surname> <given-names>M.</given-names></name> <name><surname>Cheour</surname> <given-names>M.</given-names></name> <name><surname>Huotilainen</surname> <given-names>M.</given-names></name> <name><surname>Iivonen</surname> <given-names>A.</given-names></name> <etal/></person-group>. (<year>1997</year>). <article-title>Language-specific phoneme representations revealed by electric and magnetic brain responses</article-title>. <source>Nature</source> <volume>385</volume>, <fpage>432</fpage>&#x02013;<lpage>434</lpage>. <pub-id pub-id-type="doi">10.1038/385432a0</pub-id><pub-id pub-id-type="pmid">9009189</pub-id></citation>
</ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>N&#x000E4;&#x000E4;t&#x000E4;nen</surname> <given-names>R.</given-names></name> <name><surname>Tervaniemi</surname> <given-names>M.</given-names></name> <name><surname>Sussman</surname> <given-names>E.</given-names></name> <name><surname>Paavilainen</surname> <given-names>P.</given-names></name> <name><surname>Winkler</surname> <given-names>I.</given-names></name></person-group> (<year>2001</year>). <article-title>&#x0201C;Primitive intelligence&#x0201D; in the auditory cortex</article-title>. <source>Trends Neurosci</source>. <volume>24</volume>, <fpage>283</fpage>&#x02013;<lpage>288</lpage>. <pub-id pub-id-type="doi">10.1016/S0166-2236(00)01790-2</pub-id><pub-id pub-id-type="pmid">11311381</pub-id></citation>
</ref>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><collab>Retraction notice.</collab></person-group> (<year>2010</year>). <article-title>Retraction notice. Rule learning by cotton-top tamarins. Cognition 86, B15&#x02013;B22</article-title>. <source>Cognition</source> <volume>117</volume>:<fpage>106</fpage>. <pub-id pub-id-type="pmid">20839386</pub-id></citation>
</ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ruusuvirta</surname> <given-names>T.</given-names></name> <name><surname>Koivisto</surname> <given-names>K.</given-names></name> <name><surname>Wikgren</surname> <given-names>J.</given-names></name> <name><surname>Astikainen</surname> <given-names>P.</given-names></name></person-group> (<year>2007</year>). <article-title>Processing of melodic contours in urethane-anaesthetized rats</article-title>. <source>Eur. J. Neurosci</source>. <volume>26</volume>, <fpage>701</fpage>&#x02013;<lpage>703</lpage>. <pub-id pub-id-type="doi">10.1111/j.1460-9568.2007.05687.x</pub-id><pub-id pub-id-type="pmid">17634069</pub-id></citation>
</ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sinnott</surname> <given-names>J. M.</given-names></name> <name><surname>Beecher</surname> <given-names>M. D.</given-names></name> <name><surname>Moody</surname> <given-names>D. B.</given-names></name> <name><surname>Stebbins</surname> <given-names>W. C.</given-names></name></person-group> (<year>1976</year>). <article-title>Speech sound discrimination by monkeys and humans</article-title>. <source>J. Acoust. Soc. Am</source>. <volume>60</volume>, <fpage>687</fpage>&#x02013;<lpage>695</lpage>. <pub-id pub-id-type="doi">10.1121/1.381140</pub-id><pub-id pub-id-type="pmid">824334</pub-id></citation>
</ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sinnott</surname> <given-names>J. M.</given-names></name> <name><surname>Mosteller</surname> <given-names>K. W.</given-names></name></person-group> (<year>2001</year>). <article-title>A comparative assessment of speech sound discrimination in the <italic>Mongolian gerbil.</italic></article-title> <source>J. Acoust. Soc. Am</source>. <volume>110</volume>, <fpage>1729</fpage>&#x02013;<lpage>1732</lpage>. <pub-id pub-id-type="doi">10.1121/1.1398055</pub-id><pub-id pub-id-type="pmid">11681351</pub-id></citation>
</ref>
<ref id="B29">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sun</surname> <given-names>F.</given-names></name> <name><surname>Hoshi-Shiba</surname> <given-names>R.</given-names></name> <name><surname>Abla</surname> <given-names>D.</given-names></name> <name><surname>Okanoya</surname> <given-names>K.</given-names></name></person-group> (<year>2012</year>). <article-title>Neural correlates of abstract rule learning: an event-related potential study</article-title>. <source>Neuropsychologia</source> <volume>50</volume>, <fpage>2617</fpage>&#x02013;<lpage>2624</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2012.07.013</pub-id><pub-id pub-id-type="pmid">22820632</pub-id></citation>
</ref>
<ref id="B30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>ten Cate</surname> <given-names>C.</given-names></name> <name><surname>Okanoya</surname> <given-names>K.</given-names></name></person-group> (<year>2012</year>). <article-title>Revisiting the syntactic abilities of non-human animals: natural vocalizations and artificial grammar learning</article-title>. <source>Philos. Trans. R. Soc. Biol. Sci</source>. <volume>367</volume>, <fpage>1984</fpage>&#x02013;<lpage>1994</lpage>. <pub-id pub-id-type="doi">10.1098/rstb.2012.0055</pub-id><pub-id pub-id-type="pmid">22688634</pub-id></citation>
</ref>
<ref id="B31">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>ten Cate</surname> <given-names>C.</given-names></name> <name><surname>van Heijningen</surname> <given-names>C. A. A.</given-names></name> <name><surname>Zuidema</surname> <given-names>W.</given-names></name></person-group> (<year>2010</year>). <article-title>Reply to Gentner et al.: as simple as possible, but not simpler</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A</source>. <volume>107</volume>, <fpage>E66</fpage>&#x02013;<lpage>E67</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.1002174107</pub-id></citation>
</ref>
<ref id="B32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Toro</surname> <given-names>J. M.</given-names></name> <name><surname>Trobalon</surname> <given-names>J. B.</given-names></name></person-group> (<year>2005</year>). <article-title>Statistical computations over a speech stream in a rodent</article-title>. <source>Percept. Psychophys</source>. <volume>67</volume>, <fpage>867</fpage>&#x02013;<lpage>875</lpage>. <pub-id pub-id-type="doi">10.3758/BF03193539</pub-id><pub-id pub-id-type="pmid">16334058</pub-id></citation>
</ref>
<ref id="B33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tremblay</surname> <given-names>K.</given-names></name> <name><surname>Kraus</surname> <given-names>N.</given-names></name> <name><surname>McGee</surname> <given-names>T.</given-names></name></person-group> (<year>1998</year>). <article-title>The time course of auditory perceptual learning: neurophysiological changes during speech&#x02013;sound training</article-title>. <source>Neuroreport</source> <volume>9</volume>, <fpage>3557</fpage>&#x02013;<lpage>3560</lpage>. <pub-id pub-id-type="doi">10.1097/00001756-199811160-00003</pub-id><pub-id pub-id-type="pmid">9858359</pub-id></citation>
</ref>
<ref id="B34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>van Heijningen</surname> <given-names>C. A.</given-names></name> <name><surname>Chen</surname> <given-names>J.</given-names></name> <name><surname>van Laatum</surname> <given-names>I.</given-names></name> <name><surname>van der Hulst</surname> <given-names>B.</given-names></name> <name><surname>ten Cate</surname> <given-names>C.</given-names></name></person-group> (<year>2013</year>). <article-title>Rule learning by zebra finches in an artificial grammar learning task: which rule?</article-title> <source>Anim. Cogn</source>. <volume>16</volume>, <fpage>165</fpage>&#x02013;<lpage>175</lpage>. <pub-id pub-id-type="doi">10.1007/s10071-012-0559-x</pub-id><pub-id pub-id-type="pmid">22971840</pub-id></citation>
</ref>
<ref id="B35">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wilson</surname> <given-names>B.</given-names></name> <name><surname>Slater</surname> <given-names>H.</given-names></name> <name><surname>Kikuchi</surname> <given-names>Y.</given-names></name> <name><surname>Milne</surname> <given-names>A. E.</given-names></name> <name><surname>Marslen-Wilson</surname> <given-names>W. D.</given-names></name> <name><surname>Smith</surname> <given-names>K.</given-names></name> <etal/></person-group>. (<year>2013</year>). <article-title>Auditory artificial grammar learning in macaque and marmoset monkeys</article-title>. <source>J. Neurosci</source>. <volume>33</volume>, <fpage>18825</fpage>&#x02013;<lpage>18835</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.2414-13.2013</pub-id><pub-id pub-id-type="pmid">24285889</pub-id></citation>
</ref>
</ref-list>
</back>
</article>
