<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="brief-report" dtd-version="2.3" xml:lang="EN">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title>Frontiers in Psychology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Psychol.</abbrev-journal-title>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fpsyg.2022.957389</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Brief Research Report</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Familiarization with meaningless sound patterns facilitates learning to detect those patterns among distracters</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes"><name><surname>Wisniewski</surname><given-names>Matthew G.</given-names></name><xref rid="c001" ref-type="corresp"><sup>&#x002A;</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/1351184/overview"/>
</contrib>
</contrib-group>
<aff><institution>Department of Psychological Sciences, Kansas State University</institution>, <addr-line>Manhattan, KS</addr-line>, <country>United States</country></aff>
<author-notes>
<fn id="fn0001" fn-type="edited-by">
<p>Edited by: Claude Alain, Rotman Research Institute (RRI), Canada</p>
</fn>
<fn id="fn0002" fn-type="edited-by">
<p>Reviewed by: E. Glenn Schellenberg, University of Toronto, Canada; Psyche Loui, Northeastern University, United States</p>
</fn>
<corresp id="c001">&#x002A;Correspondence: Matthew G. Wisniewski, <email>mgwisniewski@ksu.edu</email>
</corresp>
<fn id="fn0003" fn-type="other">
<p>This article was submitted to Auditory Cognitive Neuroscience, a section of the journal Frontiers in Psychology</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>14</day>
<month>09</month>
<year>2022</year>
</pub-date>
<pub-date pub-type="collection">
<year>2022</year>
</pub-date>
<volume>13</volume>
<elocation-id>957389</elocation-id>
<history>
<date date-type="received">
<day>31</day>
<month>05</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>23</day>
<month>08</month>
<year>2022</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2022 Wisniewski.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>Wisniewski</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract>
<p>Initially &#x201C;meaningless&#x201D; and randomly generated sounds can be learned over exposure. This is demonstrated by studies where repetitions of randomly determined sound patterns are detected better if they are the same sounds presented on previous trials than if they are novel. This experiment posed two novel questions about this learning. First, does familiarization with a sound outside of the repetition detection context facilitate later performance? Second, does familiarization enhance performance when repeats are interleaved with distracters? Listeners were first trained to distinguish a unique pattern of synchronous complex tone trains (210 ms in duration) from other tone trains with similar qualities (familiarization phase). They were then tasked to detect repeated pattern presentations interleaved with similar distracters in 4.2 s long excerpts (repetition detection phase). The familiarized pattern (Familiar-Fixed &#x2013; FF), an unfamiliar pattern that remained fixed throughout (Unfamiliar-Fixed &#x2013; UF), or patterns that were uniquely determined on each trial (Unfamiliar-Unfixed &#x2013; UU) could be presented as repeats. FF patterns were learned at a faster rate and achieved higher repetition detection sensitivity than UF and UU patterns. FF patterns also showed steeper learning slopes in their response times (RTs) than UF patterns. The data show that familiarity with a &#x201C;meaningless&#x201D; sound pattern on its own (i.e., without repetition) can facilitate repetition detection even in the presence of distracters. Familiarity effects become most apparent in the potential for learning.</p>
</abstract>
<kwd-group>
<kwd>frozen noise</kwd>
<kwd>perceptual learning</kwd>
<kwd>temporal dynamics</kwd>
<kwd>pattern detection</kwd>
<kwd>learning rate</kwd>
<kwd>informational masking</kwd>
<kwd>auditory memory</kwd>
</kwd-group>
<counts>
<fig-count count="2"/>
<table-count count="0"/>
<equation-count count="1"/>
<ref-count count="34"/>
<page-count count="7"/>
<word-count count="4656"/>
</counts>
</article-meta>
</front>
<body>
<sec id="sec1" sec-type="intro">
<title>Introduction</title>
<p>Sounds that are familiar to us can show advantages in perceptual processing compared to unfamiliar sounds. This is a phenomenon indicative of <italic>perceptual learning</italic> (for review, see <xref ref-type="bibr" rid="ref34">Wright and Zhang, 2009</xref>; <xref ref-type="bibr" rid="ref19">Irvine, 2018</xref>; <xref ref-type="bibr" rid="ref21">Maniglia and Seitz, 2018</xref>). The impacts of sound familiarity can appear rather quickly with initially &#x201C;meaningless&#x201D; sound stimuli. For instance, <xref ref-type="bibr" rid="ref3">Agus et al. (2010)</xref> found that repetitions of random Gaussian noise samples were detected with greater sensitivity if they were the same noise samples presented on previous trials than if they were unfamiliar samples generated under the same constraints (also, see <xref ref-type="bibr" rid="ref1">Agus and Pressnitzer, 2013</xref>). These frozen noise effects occur after a few trials (<xref ref-type="bibr" rid="ref3">Agus et al., 2010</xref>), can be seen for noise samples as short as 10&#x2009;ms (<xref ref-type="bibr" rid="ref6">Andrillon et al., 2015</xref>), and are observable a month or more after initial exposure (<xref ref-type="bibr" rid="ref28">Viswanathan et al., 2016</xref>; <xref ref-type="bibr" rid="ref7">Bianco et al., 2020</xref>; <xref ref-type="bibr" rid="ref2">Agus and Pressnitzer, 2021</xref>). Also notable is that they can occur with randomly generated tone pattern stimuli that closely resemble the acoustic characteristics of many real-world sounds and receptive field properties of auditory cortical neurons (<xref ref-type="bibr" rid="ref10">DeCharms et al., 1998</xref>; <xref ref-type="bibr" rid="ref27">Sohoglu and Chait, 2016</xref>; <xref ref-type="bibr" rid="ref7">Bianco et al., 2020</xref>; <xref ref-type="bibr" rid="ref18">Herrmann et al., 2021</xref>). 
This introduces exciting possibilities for studies of perceptual learning with complex sounds that are uncorrupted by listeners&#x2019; prior knowledge (unlike speech or environmental sounds).</p>
<p>In these previous works, sound repetitions mostly occurred consecutively with no intervening stimuli. In real-world environments, however, repetitions are rarely experienced without distracters. Examples include environmental noise during alarm sound repetition (<xref ref-type="bibr" rid="ref12">Edworthy et al., 2018</xref>) and accompaniment to the melody of a single musical instrument (<xref ref-type="bibr" rid="ref29">Waggoner, 2011</xref>). It is well known that detection can be hindered in these types of scenarios (<xref ref-type="bibr" rid="ref11">Durlach, 2006</xref>). In one recent study, it was found that listeners could learn repeated tone patterns interleaved with other random patterns over the course of trials, but this learning was limited compared to patterns presented without distracters (<xref ref-type="bibr" rid="ref7">Bianco et al., 2020</xref>). Also, this learning occurred within the repetition task itself. This leaves ambiguity as to whether learning entails memory for a specific sound pattern, or a learned strategy to listen for the sound quality that results from pattern repeats (e.g., &#x201C;wooshing&#x201D;; <xref ref-type="bibr" rid="ref30">Warren and Bashford, 1981</xref>). Distinguishing these will be informative for development of perceptual learning models where these possibilities can be associated with different mechanisms (e.g., plasticity in signal representations or changes in top-down selective attention). Whether or not perceptual learning transfers to a listening situation where repeats are interleaved with similar sounds is also undetermined. 
Such work is needed to assess the predictions of several learning theories that the benefits of perceptual learning lie in the potential for further learning on untrained tasks, not just performance in the trained task (e.g., <xref ref-type="bibr" rid="ref14">Gibson, 1969</xref>; <xref ref-type="bibr" rid="ref22">Mercado III, 2008</xref>; <xref ref-type="bibr" rid="ref16">Goldstone et al., 2010</xref>; <xref ref-type="bibr" rid="ref25">Sepp&#x00E4;nen et al., 2013</xref>). It is especially needed for tasks that induce learning with stimuli that, unlike speech, carry no preexisting biases. The current experiment addresses both of these questions.</p>
<p>In a first familiarization phase, synchronous tone train patterns having randomly generated frequencies between 300 and 1,200&#x2009;Hz were presented to listeners. Instructions were to answer whether a sound was &#x201C;Sound A&#x201D; or &#x201C;Sound B.&#x201D; While Sound A was frozen throughout this phase (&#x201C;Familiar-Fixed&#x201D; &#x2013; FF), Sound B was generated randomly on each trial (&#x201C;Unfamiliar-Unfixed&#x201D; &#x2013; UU). In the following repetition detection phase of the experiment, listeners were tasked to detect repeating patterns of synchronous tone trains within a relatively long excerpt (4.2&#x2009;s in length) containing multiple tonal patterns. Previously familiarized (FF) and unfamiliar patterns (&#x201C;Unfamiliar-Fixed&#x201D; &#x2013; UF; UU) were presented within excerpts. On &#x201C;repeating&#x201D; trials, repeating patterns were always interleaved with other randomly generated tonal patterns. It was hypothesized that familiarization with a &#x201C;meaningless&#x201D; sound pattern presented by itself would lead to differences in repetition detection sensitivity, response time, and learning rate between familiar and unfamiliar sounds. It was expected that this effect would be observable when patterns were interleaved with other patterns having similar characteristics.</p>
</sec>
<sec id="sec2" sec-type="materials|methods">
<title>Materials and methods</title>
<sec id="sec3">
<title>Listeners</title>
<p>Listeners were 31 individuals enrolled in General Psychology courses at Kansas State University. The <italic>N</italic> was determined <italic>a priori</italic> based on a presumed effect size of Cohen&#x2019;s <italic>d</italic>&#x2009;=&#x2009;0.5 for a comparison between FF and UF sounds (achieving &#x003E;80% power for a one-tailed test). All signed an informed consent document. All procedures were approved by Kansas State University&#x2019;s institutional review board. All listeners reported normal hearing. One listener was eliminated from analysis for failing to perform above chance in the familiarization phase of the experiment.</p>
</sec>
<sec id="sec4">
<title>Apparatus</title>
<p>Sounds were presented by an RME UC Fireface device over Sennheiser HD-280 closed-back headphones in a WhisperRoom sound-attenuating booth. Experimental procedures and stimuli were programmed in Matlab. Listeners made responses using a computer mouse to click buttons on an on-screen GUI.</p>
</sec>
<sec id="sec5">
<title>Stimuli</title>
<p>All sounds were synchronous tone trains containing tones with 42&#x2009;ms duration (cosine on- and off-ramps of 5&#x2009;ms). Parameters for these trains were selected based on pilot testing aimed at keeping <italic>d&#x2032;</italic> for the repetition detection task at or near 1.0. The &#x201C;synchronous&#x201D; aspect of the trains corresponded to two tones combining to form a multitone complex. The frequencies making up a complex were randomly selected from 500 possible frequencies spaced between 300 and 1,200&#x2009;Hz on a log scale. Patterns were then made by combining 5 randomly generated complexes consecutively, making patterns of 210&#x2009;ms in duration. For each listener, there were three types of patterns. A &#x201C;Familiar-Fixed&#x201D; (FF) and an &#x201C;Unfamiliar-Fixed&#x201D; (UF) pattern were determined at the beginning of the experiment. The seed of Matlab&#x2019;s random number generator was reset at the start of an experimental session to assure unique fixed patterns for every listener. &#x201C;Unfamiliar-Unfixed&#x201D; (UU) patterns were generated randomly throughout the experiment. In relation to previous work, the UF <italic>vs</italic>. UU comparison has been made several times (see Introduction). The FF <italic>vs</italic>. UF comparison is the novel and relevant comparison for the current study, with the UU condition serving as a control for procedural learning (e.g., learning what &#x201C;repeat&#x201D; means, learning the timing of pattern repeats, etc.). In the repetition detection phase, patterns could also be combined to create repeating or non-repeating long-duration sound excerpts. In non-repeating excerpts, 20 different patterns were concatenated to make a 4,200&#x2009;ms excerpt. In repeating excerpts, every other pattern of the 20 consecutive patterns was the same. This procedure created a stimulus with a repeating pattern that was interleaved with other sounds immediately preceding and following it. See <xref rid="fig1" ref-type="fig">Figure 1</xref> for spectrogram depictions of example stimuli.</p>
<fig position="float" id="fig1">
<label>Figure 1</label>
<caption>
<p>Spectrogram example of repeating and non-repeating excerpts of synchronous tone trains. The breakout spectrogram shows an example pattern made up of 5 randomly generated multitone complexes. The dashed boxes in the repeating and non-repeating excerpts mark the presentation of that pattern.</p>
</caption>
<graphic xlink:href="fpsyg-13-957389-g001.tif"/>
</fig>
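The stimulus construction just described can be sketched as follows (in Python rather than the study&#x2019;s original Matlab; the function names are illustrative, and patterns are represented as tuples of frequencies rather than rendered audio):

```python
import math
import random

def log_spaced_frequencies(n=500, lo=300.0, hi=1200.0):
    """n candidate frequencies spaced evenly on a log scale between lo and hi (Hz)."""
    step = (math.log(hi) - math.log(lo)) / (n - 1)
    return [math.exp(math.log(lo) + i * step) for i in range(n)]

FREQS = log_spaced_frequencies()

def random_pattern(rng, n_complexes=5, tones_per_complex=2):
    """One 210-ms pattern: 5 consecutive 42-ms two-tone complexes."""
    return tuple(tuple(rng.choice(FREQS) for _ in range(tones_per_complex))
                 for _ in range(n_complexes))

def make_excerpt(rng, target=None, n_patterns=20):
    """A 4,200-ms excerpt of 20 consecutive patterns. If `target` is given,
    it occupies every other slot (a repeating excerpt); otherwise all 20 differ."""
    excerpt = [random_pattern(rng) for _ in range(n_patterns)]
    if target is not None:
        for i in range(0, n_patterns, 2):
            excerpt[i] = target
    return excerpt

rng = random.Random(1)    # per-listener seed -> unique but reproducible fixed patterns
ff = random_pattern(rng)  # the listener's Familiar-Fixed pattern
rep = make_excerpt(rng, target=ff)
```

Seeding the generator per listener mirrors the resetting of Matlab&#x2019;s random number generator described above; a repeating excerpt interleaves 10 presentations of the target pattern with 10 unique distracter patterns.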
</sec>
<sec id="sec6">
<title>Procedure</title>
<p>There were two phases of the experiment. A familiarization phase employed a categorization task intended to familiarize listeners with the FF pattern. On each trial either an FF or a UU pattern was presented. Listeners&#x2019; task was to indicate whether the pattern was a &#x201C;Sound A&#x201D; type or a &#x201C;Sound B&#x201D; type. The FF pattern was assigned to the &#x201C;Sound A&#x201D; category, while UU patterns were assigned to the &#x201C;Sound B&#x201D; category. Listeners made responses by clicking on-screen buttons corresponding to these labels. There was no response deadline. They received on-screen feedback in the form of &#x201C;Correct&#x201D; or &#x201C;Wrong&#x201D; text presented for 1.5&#x2009;s after each response. There were 3 blocks with 15 FF presentations and 15 UU presentations in each block. Order was completely randomized within a block. In between each block, an irrelevant 15&#x2009;s silent video was presented to give listeners a break from listening and to mitigate fatigue. All videos were neutral valence (e.g., nature-, recreation-, or transportation-related; <italic>cf.</italic> <xref ref-type="bibr" rid="ref31">Wisniewski et al., 2019</xref>).</p>
<p>The second experimental phase involved a pattern repetition detection task. On half of trials a repeating excerpt was presented; on the other half a non-repeating excerpt was presented. Listeners&#x2019; task was to click on a GUI button labeled &#x201C;no repeats&#x201D; or &#x201C;some repeats.&#x201D; They were instructed to value accuracy over RT, but to respond as soon as they knew the answer. For repeating excerpt trials, the repeating pattern could be an FF pattern, a UF pattern, or a UU pattern fixed within that one excerpt (i.e., the same UU pattern was not fixed across trials). There were also three different types of non-repeating excerpt trials. All patterns were different within each non-repeating excerpt, with a single FF, UF, or UU pattern contained within at a randomly determined position. The position was determined randomly on each trial with equal probability for any position in the excerpt (1&#x2013;20). This was done to assure that detection of repeats for the FF and UF conditions was not due solely to the recognition of a familiar pattern (<italic>cf.</italic> <xref ref-type="bibr" rid="ref1">Agus and Pressnitzer, 2013</xref>). Note that this repetition detection task requires detecting repeats of a pattern, not explicit recognition of a pattern from the training phase.</p>
<p>Trials were organized in 12 trial blocks with 2 of each type of trial in each block: 2 repeating FF, 2 repeating UF, 2 repeating UU, 2 non-repeating FF, 2 non-repeating UF, and 2 non-repeating UU. Order within blocks was completely randomized. Feedback of correctness was given at the end of each block in percent correct. There were 10 total blocks for a total of 120 trials, with 40 trials per condition (20 repeating, 20 non-repeating).</p>
</sec>
<sec id="sec7">
<title>Performance measures</title>
<p>For the familiarization phase, the signal detection <italic>d&#x2032;</italic> measure was used to determine whether listeners had become familiar with the FF sound. Signal detection <italic>d&#x2032;</italic> along with median response times were analyzed for the repetition detection phase. To characterize the temporal dynamics of learning, a 10-trial running average (boxcar window) of the hit (&#x201C;some repeats&#x201D; responses to repeating excerpts) and false alarm (&#x201C;some repeats&#x201D; responses to non-repeating excerpts) rates was taken. The running average window length of 10 trials was chosen based on previous data showing minimal distortion of true <italic>d&#x2032;</italic> for ~10 trials within the <italic>d&#x2032;</italic> range observed for the current data (<xref ref-type="bibr" rid="ref23">Miller, 1996</xref>). These rates were used to create a running average <italic>d&#x2032;</italic> using Equation 1, where H represents the hit rate and F represents the false alarm rate. Hit and false alarm rates of 1 or 0 were adjusted to 1&#x2013;1/(2n) and 1/(2n), respectively, where n is the number of trials in the window (<xref ref-type="bibr" rid="ref20">Macmillan and Creelman, 2005</xref>). Median response times were also taken for hits (correct responses on repeating trials) across the same 10-trial sliding window.</p>
<disp-formula id="EQ1">
<label>(1)</label>
<mml:math id="M1">
<mml:mrow>
<mml:msup>
<mml:mi>d</mml:mi>
<mml:mo>&#x2032;</mml:mo>
</mml:msup>
<mml:mo>=</mml:mo>
<mml:mi>z</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi mathvariant="normal">H</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>z</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>F</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math></disp-formula>
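Equation 1, combined with the windowing and rate adjustments described above, can be sketched as follows (a minimal Python illustration using the standard library&#x2019;s inverse normal CDF; function names are illustrative, not from the study&#x2019;s code):

```python
from statistics import NormalDist

def dprime(hits, false_alarms, n):
    """Equation 1 within an n-trial window; rates of 0 or 1 are adjusted
    to 1/(2n) and 1 - 1/(2n) (Macmillan and Creelman, 2005)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    clamp = lambda r: min(max(r, 1 / (2 * n)), 1 - 1 / (2 * n))
    return z(clamp(hits)) - z(clamp(false_alarms))

def running_dprime(hit_seq, fa_seq, window=10):
    """Boxcar running-average d' over per-trial 0/1 outcomes for
    repeating (hit_seq) and non-repeating (fa_seq) trials."""
    out = []
    for i in range(len(hit_seq) - window + 1):
        h = sum(hit_seq[i:i + window]) / window
        f = sum(fa_seq[i:i + window]) / window
        out.append(dprime(h, f, window))
    return out
```

With n&#x2009;=&#x2009;10, a perfect window (H&#x2009;=&#x2009;1, F&#x2009;=&#x2009;0) yields an adjusted <italic>d&#x2032;</italic> of about 3.29 rather than infinity.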
</sec>
<sec id="sec8">
<title>Statistics</title>
<p>Linear mixed effects models were used. All models were fit in Matlab&#x2019;s statistics toolbox using maximum likelihood. First, a linear mixed effects model was fit to <italic>d&#x2032;</italic> and hit RT data with sound type (FF, UF, and UU), window (centered before fitting), and the sound type &#x00D7; window interaction as fixed effects. Sound type was reference coded to FF. Listener intercepts and slopes for sound type and window were entered as random effects. The significance of fixed effects was assessed by comparing the likelihood of this model to those of models where the effect of interest was absent. A <italic>p</italic>-value for each fixed effect was generated by comparing the observed ratio of likelihoods to a &#x03C7;<sup>2</sup> distribution with <italic>df</italic> being the difference in number of coefficients for the full and reduced model (<xref ref-type="bibr" rid="ref26">Singmann and Kellen, 2019</xref>). Any <italic>p</italic>-values less than <italic>&#x03B1;</italic>&#x2009;=&#x2009;0.05 were deemed significant. Significant effects were followed by post-hoc tests of regression coefficients interpreted with Bonferroni corrections (uncorrected <italic>p</italic>-values reported).</p>
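The likelihood-ratio comparison just described can be sketched as follows (a hypothetical Python illustration: the log-likelihood values are placeholders, not fitted values from this study, and the closed-form &#x03C7;<sup>2</sup> survival functions cover only the 1- and 2-df cases that arise when dropping the window term or a term involving the three-level sound type factor):

```python
import math

def lr_test(loglik_full, loglik_reduced, df):
    """Likelihood-ratio test: chi2 = 2 * (LL_full - LL_reduced), compared
    against a chi-square distribution whose df is the difference in the
    number of coefficients between the full and reduced models."""
    chi2 = 2.0 * (loglik_full - loglik_reduced)
    if df == 1:
        p = math.erfc(math.sqrt(chi2 / 2.0))  # chi-square(1) survival function
    elif df == 2:
        p = math.exp(-chi2 / 2.0)             # chi-square(2) survival function
    else:
        raise ValueError("closed form implemented for df in {1, 2} only")
    return chi2, p
```

For example, a likelihood ratio giving &#x03C7;<sup>2</sup>&#x2009;=&#x2009;5.99 with 2 df returns p &#x2248; 0.05, matching the familiar critical value.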
</sec>
</sec>
<sec id="sec9" sec-type="results">
<title>Results</title>
<p>Listeners learned the categorization task adequately in the familiarization phase. Mean sensitivity (<italic>d&#x2032;</italic>) to Sound A <italic>vs</italic>. Sound B differences was 1.78 (SD&#x2009;=&#x2009;1.11), 2.82 (SD&#x2009;=&#x2009;0.87), and 3.14 (SD&#x2009;=&#x2009;0.75) in blocks 1&#x2013;3, respectively. A sign test showed sensitivity in the last block of the familiarization phase to be significantly above zero, <italic>&#x03C7;</italic><sup>2</sup>&#x2009;=&#x2009;30, <italic>p</italic>&#x2009;&#x003C;&#x2009;0.001. Thus, listeners moved on to the repetition detection phase with familiarity for their unique FF sound pattern.</p>
<p><xref rid="fig2" ref-type="fig">Figure 2</xref> shows <italic>d&#x2032;</italic> and hit RTs over the course of the repetition detection phase. For <italic>d&#x2032;</italic>, there was a significant effect of window, <italic>&#x03C7;</italic><sup>2</sup>&#x2009;=&#x2009;23.43, <italic>p</italic>&#x2009;&#x003C;&#x2009;0.001, demonstrating increased sensitivity over the course of the repetition detection task. There was also a significant sound type &#x00D7; window interaction, <italic>&#x03C7;</italic><sup>2</sup>&#x2009;=&#x2009;43.58, <italic>p</italic>&#x2009;&#x003C;&#x2009;0.001, demonstrating that the rate of learning was not equal across sound types. Indeed, the model-estimated learning slope for FF sounds was significantly steeper than for UF, <italic>&#x03B2;</italic>&#x2009;=&#x2009;&#x2212;0.04, <italic>t</italic>&#x2009;=&#x2009;4.36, <italic>p</italic>&#x2009;&#x003C;&#x2009;0.001, and UU sounds, <italic>&#x03B2;</italic>&#x2009;=&#x2009;&#x2212;0.06, <italic>t</italic>&#x2009;=&#x2009;6.57, <italic>p&#x2009;&#x003C;</italic> 0.001. UF sounds showed no significant difference in learning slope compared to UU sounds, <italic>p</italic>&#x2009;&#x003E;&#x2009;0.10. The main effect of sound type was not significant, <italic>p</italic>&#x2009;&#x003E;&#x2009;0.10.</p>
<fig position="float" id="fig2">
<label>Figure 2</label>
<caption>
<p>Repetition detection phase data. Signal detection <italic>d</italic>&#x2032; and median RT for hit trials. Signal detection <italic>d</italic>&#x2032; was computed using running average hit and false alarm rates where each rate reflected the contribution of 10 different trials. Median response times (RT) are shown for repeating trials that were correctly detected. All error bars represent within-subject standard errors of the mean (<xref ref-type="bibr" rid="ref9">Cousineau, 2005</xref>).</p>
</caption>
<graphic xlink:href="fpsyg-13-957389-g002.tif"/>
</fig>
<p>The relatively long RTs shown in <xref rid="fig2" ref-type="fig">Figure 2</xref> likely reflect the fact that the task was difficult and participants were instructed to value accuracy over response speed. Nevertheless, RTs on hit trials showed a significant effect of window, <italic>&#x03C7;</italic><sup>2</sup>&#x2009;=&#x2009;16.42, <italic>p</italic>&#x2009;&#x003C;&#x2009;0.001, owing to RTs decreasing over the course of the repetition detection task. There was also a significant sound type &#x00D7; window interaction, <italic>&#x03C7;</italic><sup>2</sup>&#x2009;=&#x2009;27.98, <italic>p</italic>&#x2009;&#x003C;&#x2009;0.001, demonstrating differences in the slope of these RT reductions. The model-estimated learning slope was significantly steeper for FF sounds, <italic>&#x03B2;</italic>&#x2009;=&#x2009;0.02, <italic>t</italic>&#x2009;=&#x2009;5.22, <italic>p&#x2009;&#x003C;</italic> 0.001, and UU sounds, <italic>&#x03B2;</italic>&#x2009;=&#x2009;0.01, <italic>t</italic>&#x2009;=&#x2009;3.98, <italic>p&#x2009;&#x003C;</italic> 0.001, compared to UF sounds. The slope for FF and UU sounds was not significantly different, <italic>p</italic>&#x2009;=&#x2009;0.094. The main effect of sound type was not significant, <italic>p</italic>&#x2009;&#x003E;&#x2009;0.10.</p>
</sec>
<sec id="sec10" sec-type="discussions">
<title>Discussion</title>
<p>The current study was designed to determine whether familiarization with a meaningless sound pattern would facilitate the ability to detect repeated presentations of that pattern among acoustically similar distracters. Familiarity was induced through a categorization task in which a randomly generated pattern of multitone complexes was assigned to one category (FF type) while other randomly generated patterns (UU types) were assigned to another. Though initial sensitivity to FF sounds in the subsequent repetition detection task was comparable to that for unfamiliar sounds, FF sounds showed a learning advantage. Sensitivity (<italic>d&#x2032;</italic>) improved at a faster rate and reached a higher level compared to sounds that were repeated throughout the repetition detection task (UF), or that were generated randomly on each trial (UU). Response times decreased over the course of the repetition detection task, but did so at a faster rate for FF and UU sounds compared to UF sounds.</p>
<p>The processes leading to the familiarity effects observed here are unlikely to be based in any procedural type of learning. This is because any procedural learning (e.g., developing a concept for what &#x201C;repeat&#x201D; means, learning the repetition rate of repeats, etc.) should have been equal across all sound types. It is also unlikely that differences among sound types were related to learning a specific repetition sound quality during the familiarization phase. This is because no consecutive presentations of the FF sound occurred at a rate as fast as that experienced in the repetition detection task (e.g., from trial to trial), nor were presentations isochronous during categorization training. The impacts of familiarity are likely perceptual and based in memory for the FF sound itself. This is in line with previous conclusions on learning in UF <italic>vs</italic>. UU sound type comparisons with Gaussian noise samples (<xref ref-type="bibr" rid="ref1">Agus and Pressnitzer, 2013</xref>). Though in that study all learning took place within the repetition detection task, those authors showed an increased false alarm rate on trials in which a single familiar pattern was presented. This showed that UF patterns were detectable even without repeats. Together, these studies suggest that learning in repetition detection tasks reflects the learning of a specific sound pattern.</p>
<p>An advantage of UF over UU sound types has been consistently demonstrated in repetition detection tasks similar to the one used here (<xref ref-type="bibr" rid="ref3">Agus et al., 2010</xref>; <xref ref-type="bibr" rid="ref1">Agus and Pressnitzer, 2013</xref>; <xref ref-type="bibr" rid="ref6">Andrillon et al., 2015</xref>; <xref ref-type="bibr" rid="ref28">Viswanathan et al., 2016</xref>; <xref ref-type="bibr" rid="ref18">Herrmann et al., 2021</xref>). However, no such advantage was observable in this study. Rather, the only difference between UF and UU sounds appeared as a disadvantage for the former in learning as measured by RT. This failure to replicate might be related to the interleaving with distracter sounds (i.e., more informational masking), the lack of immediate feedback, and/or the accompanying FF sound type. Further speculation would require an alternative experimental design. Nevertheless, the FF <italic>vs</italic>. UF comparisons shed light on the impact that familiarity with meaningless sound patterns can have on repetition detection performance and learning. That differences between FF and UF sounds appeared over the course of learning is consistent with a recurring idea in perceptual learning theory that the benefits of familiarization with a stimulus lie in the impact on learning potential (e.g., accelerated learning rates or capacity to learn; <xref ref-type="bibr" rid="ref14">Gibson, 1969</xref>; <xref ref-type="bibr" rid="ref22">Mercado III, 2008</xref>; <xref ref-type="bibr" rid="ref16">Goldstone et al., 2010</xref>). Prior research on this hypothesis has mostly employed familiar speech sounds (<italic>cf.</italic> <xref ref-type="bibr" rid="ref32">Wisniewski et al., 2014</xref>), known environmental objects (for review, see <xref ref-type="bibr" rid="ref13">Fine and Jacobs, 2002</xref>), or preexisting individual differences in perceptual acuity (e.g., <xref ref-type="bibr" rid="ref24">Ordu&#x00F1;a et al., 2005</xref>). In these approaches, familiarity stems from an unknown learning history that carries learned perceptual and social biases (e.g., towards one spoken accent or another; <xref ref-type="bibr" rid="ref15">Gluszek and Dovidio, 2010</xref>). In contrast, this study produced familiarity effects that are unlikely to have any contribution from preexisting listener biases. FF versus UF comparisons in tasks like the one used here will be useful for furthering the study of familiarity effects in perceptual learning without these confounding biases.</p>
<p>Future studies are needed to tease out the roles of specific learning processes in the effect of familiarity on learning rates observed here. One possibility is that a filter is developed during the familiarization phase based on predictable sound patterns (<xref ref-type="bibr" rid="ref17">Heilbron and Chait, 2018</xref>). This could then transfer over to the repetition detection task (e.g., <xref ref-type="bibr" rid="ref5">Amitay et al., 2014</xref>). A filter such as this could operate to separate previously experienced patterns (signals) from novel patterns (noise). That filtering could be accomplished through long-term plasticity in the auditory processing hierarchy (e.g., <xref ref-type="bibr" rid="ref33">Wisniewski et al., 2017</xref>; <xref ref-type="bibr" rid="ref19">Irvine, 2018</xref>) or be dependent on transient shifts in attention (<xref ref-type="bibr" rid="ref8">Carlin and Elhilali, 2015</xref>). Another possibility is that representations of initially &#x201C;meaningless&#x201D; sounds come to carry meaning over the course of the familiarization phase. For instance, an FF sound could end up being represented by the Sound A label/category. When performing the repetition detection task, listeners may be able to use this representation at a higher level of the auditory hierarchy to enhance learning (<xref ref-type="bibr" rid="ref4">Ahissar et al., 2009</xref>). This study has laid out a paradigm in which these possibilities can be further investigated.</p>
</sec>
<sec id="sec11" sec-type="data-availability">
<title>Data availability statement</title>
<p>The raw data supporting the conclusions of this article will be made available by the author, without undue reservation.</p>
</sec>
<sec id="sec12">
<title>Ethics statement</title>
<p>The studies involving human participants were reviewed and approved by Kansas State University Institutional Review Board. The patients/participants provided their written informed consent to participate in this study.</p>
</sec>
<sec id="sec13">
<title>Author contributions</title>
<p>The author confirms being the sole contributor of this work and has approved it for publication.</p>
</sec>
<sec id="sec14" sec-type="funding-information">
<title>Funding</title>
<p>This research was supported by the Cognitive and Neurobiological Approaches to Plasticity (CNAP) Center of Biomedical Research Excellence (COBRE) of the National Institutes of Health under Grant No. P20GM113109. Publication of this article was funded in part by the Kansas State University Open Access Publishing Fund.</p>
</sec>
<sec id="conf1" sec-type="COI-statement">
<title>Conflict of interest</title>
<p>The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec id="sec100" sec-type="disclaimer">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
</body>
<back>
<ack>
<p>The author thanks Raelynn Slipke, Jenny Amerin, Anna Turco, and Alexys Anguiano for their help with data collection.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="ref1"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Agus</surname> <given-names>T. R.</given-names></name> <name><surname>Pressnitzer</surname> <given-names>D.</given-names></name></person-group> (<year>2013</year>). <article-title>The detection of repetitions in noise before and after perceptual learning</article-title>. <source>J. Acoust. Soc. Am.</source> <volume>134</volume>, <fpage>464</fpage>&#x2013;<lpage>473</lpage>. doi: <pub-id pub-id-type="doi">10.1121/1.4807641</pub-id>, PMID: <pub-id pub-id-type="pmid">23862821</pub-id></citation></ref>
<ref id="ref2"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Agus</surname> <given-names>T. R.</given-names></name> <name><surname>Pressnitzer</surname> <given-names>D.</given-names></name></person-group> (<year>2021</year>). <article-title>Repetition detection and rapid auditory learning for stochastic tone clouds</article-title>. <source>J. Acoust. Soc. Am.</source> <volume>150</volume>, <fpage>1735</fpage>&#x2013;<lpage>1749</lpage>. doi: <pub-id pub-id-type="doi">10.1121/10.0005935</pub-id>, PMID: <pub-id pub-id-type="pmid">34598638</pub-id></citation></ref>
<ref id="ref3"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Agus</surname> <given-names>T. R.</given-names></name> <name><surname>Thorpe</surname> <given-names>S. J.</given-names></name> <name><surname>Pressnitzer</surname> <given-names>D.</given-names></name></person-group> (<year>2010</year>). <article-title>Rapid formation of robust auditory memories: insights from noise</article-title>. <source>Neuron</source> <volume>66</volume>, <fpage>610</fpage>&#x2013;<lpage>618</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.neuron.2010.04.014</pub-id>, PMID: <pub-id pub-id-type="pmid">20510864</pub-id></citation></ref>
<ref id="ref4"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ahissar</surname> <given-names>M.</given-names></name> <name><surname>Nahum</surname> <given-names>M.</given-names></name> <name><surname>Nelken</surname> <given-names>I.</given-names></name> <name><surname>Hochstein</surname> <given-names>S.</given-names></name></person-group> (<year>2009</year>). <article-title>Reverse hierarchies and sensory learning</article-title>. <source>Philos. Trans. R. Soc. B</source> <volume>364</volume>, <fpage>285</fpage>&#x2013;<lpage>299</lpage>. doi: <pub-id pub-id-type="doi">10.1098/rstb.2008.0253</pub-id>, PMID: <pub-id pub-id-type="pmid">18986968</pub-id></citation></ref>
<ref id="ref5"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Amitay</surname> <given-names>S.</given-names></name> <name><surname>Zhang</surname> <given-names>Y. X.</given-names></name> <name><surname>Jones</surname> <given-names>P. R.</given-names></name> <name><surname>Moore</surname> <given-names>D. R.</given-names></name></person-group> (<year>2014</year>). <article-title>Perceptual learning: top to bottom</article-title>. <source>Vis. Res.</source> <volume>99</volume>, <fpage>69</fpage>&#x2013;<lpage>77</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.visres.2013.11.006</pub-id></citation></ref>
<ref id="ref6"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Andrillon</surname> <given-names>T.</given-names></name> <name><surname>Kouider</surname> <given-names>S.</given-names></name> <name><surname>Agus</surname> <given-names>T.</given-names></name> <name><surname>Pressnitzer</surname> <given-names>D.</given-names></name></person-group> (<year>2015</year>). <article-title>Perceptual learning of acoustic noise generates memory-evoked potentials</article-title>. <source>Curr. Biol.</source> <volume>25</volume>, <fpage>2823</fpage>&#x2013;<lpage>2829</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.cub.2015.09.027</pub-id>, PMID: <pub-id pub-id-type="pmid">26455302</pub-id></citation></ref>
<ref id="ref7"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bianco</surname> <given-names>R.</given-names></name> <name><surname>Harrison</surname> <given-names>P. M.</given-names></name> <name><surname>Hu</surname> <given-names>M.</given-names></name> <name><surname>Bolfer</surname> <given-names>C.</given-names></name> <name><surname>Picken</surname> <given-names>S.</given-names></name> <name><surname>Pearce</surname> <given-names>M. T.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Long-term implicit memory for sequential auditory patterns in humans</article-title>. <source>elife</source> <volume>9</volume>:<fpage>e56073</fpage>. doi: <pub-id pub-id-type="doi">10.7554/eLife.56073</pub-id>, PMID: <pub-id pub-id-type="pmid">32420868</pub-id></citation></ref>
<ref id="ref8"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Carlin</surname> <given-names>M. A.</given-names></name> <name><surname>Elhilali</surname> <given-names>M.</given-names></name></person-group> (<year>2015</year>). <article-title>Modeling attention-driven plasticity in auditory cortical receptive fields</article-title>. <source>Front. Comput. Neurosci.</source> <volume>9</volume>:<fpage>106</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fncom.2015.00106</pub-id></citation></ref>
<ref id="ref9"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cousineau</surname> <given-names>D.</given-names></name></person-group> (<year>2005</year>). <article-title>Confidence intervals in within-subject designs: a simpler solution to Loftus and Masson&#x2019;s method</article-title>. <source>Tutor. Quant. Methods Psychol.</source> <volume>1</volume>, <fpage>42</fpage>&#x2013;<lpage>45</lpage>. doi: <pub-id pub-id-type="doi">10.20982/tqmp.01.1.p042</pub-id></citation></ref>
<ref id="ref10"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>DeCharms</surname> <given-names>R.</given-names></name> <name><surname>Blake</surname> <given-names>D. T.</given-names></name> <name><surname>Merzenich</surname> <given-names>M. M.</given-names></name></person-group> (<year>1998</year>). <article-title>Optimizing sound features for cortical neurons</article-title>. <source>Science</source> <volume>280</volume>, <fpage>1439</fpage>&#x2013;<lpage>1444</lpage>. doi: <pub-id pub-id-type="doi">10.1126/science.280.5368.1439</pub-id>, PMID: <pub-id pub-id-type="pmid">9603734</pub-id></citation></ref>
<ref id="ref11"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Durlach</surname> <given-names>N.</given-names></name></person-group> (<year>2006</year>). <article-title>Auditory masking: need for improved conceptual structure</article-title>. <source>J. Acoust. Soc. Am.</source> <volume>120</volume>, <fpage>1787</fpage>&#x2013;<lpage>1790</lpage>. doi: <pub-id pub-id-type="doi">10.1121/1.2335426</pub-id>, PMID: <pub-id pub-id-type="pmid">17069274</pub-id></citation></ref>
<ref id="ref12"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Edworthy</surname> <given-names>J. R.</given-names></name> <name><surname>McNeer</surname> <given-names>R. R.</given-names></name> <name><surname>Bennett</surname> <given-names>C. L.</given-names></name> <name><surname>Dudaryk</surname> <given-names>R.</given-names></name> <name><surname>McDougall</surname> <given-names>S. J. P.</given-names></name> <name><surname>Schlesinger</surname> <given-names>J. J.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>Getting better hospital alarm sounds into a global standard</article-title>. <source>Ergon. Des.</source> <volume>26</volume>, <fpage>4</fpage>&#x2013;<lpage>13</lpage>. doi: <pub-id pub-id-type="doi">10.1177/1064804618763268</pub-id></citation></ref>
<ref id="ref13"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fine</surname> <given-names>I.</given-names></name> <name><surname>Jacobs</surname> <given-names>R. A.</given-names></name></person-group> (<year>2002</year>). <article-title>Comparing perceptual learning across tasks: a review</article-title>. <source>J. Vis.</source> <volume>2</volume>, <fpage>190</fpage>&#x2013;<lpage>203</lpage>. doi: <pub-id pub-id-type="doi">10.1167/2.2.5</pub-id>, PMID: <pub-id pub-id-type="pmid">12678592</pub-id></citation></ref>
<ref id="ref14"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Gibson</surname> <given-names>E. J.</given-names></name></person-group> (<year>1969</year>). <source>Principles of Perceptual Learning and Development</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Appleton Century Crofts</publisher-name>.</citation></ref>
<ref id="ref15"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gluszek</surname> <given-names>A.</given-names></name> <name><surname>Dovidio</surname> <given-names>J. F.</given-names></name></person-group> (<year>2010</year>). <article-title>Speaking with a nonnative accent: perceptions of bias, communication difficulties, and belonging in the United States</article-title>. <source>J. Lang. Soc. Psychol.</source> <volume>29</volume>, <fpage>224</fpage>&#x2013;<lpage>234</lpage>. doi: <pub-id pub-id-type="doi">10.1177/0261927X09359590</pub-id></citation></ref>
<ref id="ref16"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Goldstone</surname> <given-names>R. L.</given-names></name> <name><surname>Landy</surname> <given-names>D. H.</given-names></name> <name><surname>Son</surname> <given-names>J. Y.</given-names></name></person-group> (<year>2010</year>). <article-title>The education of perception</article-title>. <source>Top. Cogn. Sci.</source> <volume>2</volume>, <fpage>265</fpage>&#x2013;<lpage>284</lpage>. doi: <pub-id pub-id-type="doi">10.1111/j.1756-8765.2009.01055.x</pub-id></citation></ref>
<ref id="ref17"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Heilbron</surname> <given-names>M.</given-names></name> <name><surname>Chait</surname> <given-names>M.</given-names></name></person-group> (<year>2018</year>). <article-title>Great expectations: is there evidence for predictive coding in auditory cortex?</article-title> <source>Neuroscience</source> <volume>389</volume>, <fpage>54</fpage>&#x2013;<lpage>73</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.neuroscience.2017.07.061</pub-id>, PMID: <pub-id pub-id-type="pmid">28782642</pub-id></citation></ref>
<ref id="ref18"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Herrmann</surname> <given-names>B.</given-names></name> <name><surname>Araz</surname> <given-names>K.</given-names></name> <name><surname>Johnsrude</surname> <given-names>I. S.</given-names></name></person-group> (<year>2021</year>). <article-title>Sustained neural activity correlates with rapid perceptual learning of auditory patterns</article-title>. <source>NeuroImage</source> <volume>238</volume>:<fpage>118238</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.neuroimage.2021.118238</pub-id>, PMID: <pub-id pub-id-type="pmid">34098064</pub-id></citation></ref>
<ref id="ref19"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Irvine</surname> <given-names>D. R.</given-names></name></person-group> (<year>2018</year>). <article-title>Auditory perceptual learning and changes in the conceptualization of auditory cortex</article-title>. <source>Hear. Res.</source> <volume>366</volume>, <fpage>3</fpage>&#x2013;<lpage>16</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.heares.2018.03.011</pub-id>, PMID: <pub-id pub-id-type="pmid">29551308</pub-id></citation></ref>
<ref id="ref20"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Macmillan</surname> <given-names>N. A.</given-names></name> <name><surname>Creelman</surname> <given-names>C. D.</given-names></name></person-group> (<year>2005</year>). <source>Detection Theory: A User&#x2019;s Guide</source> (<edition>2nd Edn.</edition>). <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Routledge</publisher-name>.</citation></ref>
<ref id="ref21"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Maniglia</surname> <given-names>M.</given-names></name> <name><surname>Seitz</surname> <given-names>A. R.</given-names></name></person-group> (<year>2018</year>). <article-title>Towards a whole brain model of perceptual learning</article-title>. <source>Curr. Opin. Behav. Sci.</source> <volume>20</volume>, <fpage>47</fpage>&#x2013;<lpage>55</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.cobeha.2017.10.004</pub-id>, PMID: <pub-id pub-id-type="pmid">29457054</pub-id></citation></ref>
<ref id="ref22"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mercado</surname> <given-names>E.</given-names> <suffix>III</suffix></name></person-group> (<year>2008</year>). <article-title>Neural and cognitive plasticity: from maps to minds</article-title>. <source>Psychol. Bull.</source> <volume>134</volume>, <fpage>109</fpage>&#x2013;<lpage>137</lpage>. doi: <pub-id pub-id-type="doi">10.1037/0033-2909.134.1.109</pub-id>, PMID: <pub-id pub-id-type="pmid">18193997</pub-id></citation></ref>
<ref id="ref23"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Miller</surname> <given-names>J. O.</given-names></name></person-group> (<year>1996</year>). <article-title>The sampling distribution of d</article-title>. <source>Percept. Psychophys.</source> <volume>58</volume>, <fpage>65</fpage>&#x2013;<lpage>72</lpage>. doi: <pub-id pub-id-type="doi">10.3758/BF03205476</pub-id>, PMID: <pub-id pub-id-type="pmid">8668521</pub-id></citation></ref>
<ref id="ref24"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ordu&#x00F1;a</surname> <given-names>I.</given-names></name> <name><surname>Mercado</surname> <given-names>E.</given-names> <suffix>III</suffix></name> <name><surname>Gluck</surname> <given-names>M. A.</given-names></name> <name><surname>Merzenich</surname> <given-names>M. M.</given-names></name></person-group> (<year>2005</year>). <article-title>Cortical responses in rats predict perceptual sensitivities to complex sounds</article-title>. <source>Behav. Neurosci.</source> <volume>119</volume>, <fpage>256</fpage>&#x2013;<lpage>264</lpage>. doi: <pub-id pub-id-type="doi">10.1037/0735-7044.119.1.256</pub-id>, PMID: <pub-id pub-id-type="pmid">15727530</pub-id></citation></ref>
<ref id="ref25"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sepp&#x00E4;nen</surname> <given-names>M.</given-names></name> <name><surname>H&#x00E4;m&#x00E4;l&#x00E4;inen</surname> <given-names>J.</given-names></name> <name><surname>Pesonen</surname> <given-names>A. K.</given-names></name> <name><surname>Tervaniemi</surname> <given-names>M.</given-names></name></person-group> (<year>2013</year>). <article-title>Passive sound exposure induces rapid perceptual learning in musicians: event-related potential evidence</article-title>. <source>Biol. Psychol.</source> <volume>94</volume>, <fpage>341</fpage>&#x2013;<lpage>353</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.biopsycho.2013.07.004</pub-id>, PMID: <pub-id pub-id-type="pmid">23886959</pub-id></citation></ref>
<ref id="ref26"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Singmann</surname> <given-names>H.</given-names></name> <name><surname>Kellen</surname> <given-names>D.</given-names></name></person-group> (<year>2019</year>). &#x201C;<article-title>An introduction to mixed models for experimental psychology</article-title>,&#x201D; in <source>New Methods in Cognitive Psychology</source>. eds. <person-group person-group-type="editor"><name><surname>Spieler</surname> <given-names>D. H.</given-names></name> <name><surname>Schumacher</surname> <given-names>E.</given-names></name></person-group> (<publisher-loc>Hove</publisher-loc>: <publisher-name>Psychology Press</publisher-name>), <fpage>4</fpage>&#x2013;<lpage>31</lpage>.</citation></ref>
<ref id="ref27"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sohoglu</surname> <given-names>E.</given-names></name> <name><surname>Chait</surname> <given-names>M.</given-names></name></person-group> (<year>2016</year>). <article-title>Detecting and representing predictable structure during auditory scene analysis</article-title>. <source>elife</source> <volume>5</volume>:<fpage>e19113</fpage>. doi: <pub-id pub-id-type="doi">10.7554/eLife.19113</pub-id>, PMID: <pub-id pub-id-type="pmid">27602577</pub-id></citation></ref>
<ref id="ref28"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Viswanathan</surname> <given-names>J.</given-names></name> <name><surname>R&#x00E9;my</surname> <given-names>F.</given-names></name> <name><surname>Bacon-Mac&#x00E9;</surname> <given-names>N.</given-names></name> <name><surname>Thorpe</surname> <given-names>S. J.</given-names></name></person-group> (<year>2016</year>). <article-title>Long term memory for noise: evidence of robust encoding of very short temporal acoustic patterns</article-title>. <source>Front. Neurosci.</source> <volume>10</volume>:<fpage>490</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fnins.2016.00490</pub-id>, PMID: <pub-id pub-id-type="pmid">27932941</pub-id></citation></ref>
<ref id="ref29"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Waggoner</surname> <given-names>D. T.</given-names></name></person-group> (<year>2011</year>). <article-title>Effects of listening conditions, error types, and ensemble textures on error detection skills</article-title>. <source>J. Res. Music. Educ.</source> <volume>59</volume>, <fpage>56</fpage>&#x2013;<lpage>71</lpage>. doi: <pub-id pub-id-type="doi">10.1177/0022429410396094</pub-id></citation></ref>
<ref id="ref30"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Warren</surname> <given-names>R. M.</given-names></name> <name><surname>Bashford</surname> <given-names>J. A.</given-names></name></person-group> (<year>1981</year>). <article-title>Perception of acoustic iterance: pitch and infrapitch</article-title>. <source>Percept. Psychophys.</source> <volume>29</volume>, <fpage>395</fpage>&#x2013;<lpage>402</lpage>. doi: <pub-id pub-id-type="doi">10.3758/BF03207350</pub-id>, PMID: <pub-id pub-id-type="pmid">7279564</pub-id></citation></ref>
<ref id="ref31"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wisniewski</surname> <given-names>M. G.</given-names></name> <name><surname>Church</surname> <given-names>B. A.</given-names></name> <name><surname>Mercado</surname> <given-names>E.</given-names> <suffix>III</suffix></name> <name><surname>Radell</surname> <given-names>M. L.</given-names></name> <name><surname>Zakrzewski</surname> <given-names>A. C.</given-names></name></person-group> (<year>2019</year>). <article-title>Easy-to-hard effects in perceptual learning depend upon the degree to which initial trials are &#x201C;easy&#x201D;</article-title>. <source>Psychon. Bull. Rev.</source> <volume>26</volume>, <fpage>1889</fpage>&#x2013;<lpage>1895</lpage>. doi: <pub-id pub-id-type="doi">10.3758/s13423-019-01627-4</pub-id>, PMID: <pub-id pub-id-type="pmid">31243721</pub-id></citation></ref>
<ref id="ref32"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wisniewski</surname> <given-names>M. G.</given-names></name> <name><surname>Mercado</surname> <given-names>E.</given-names> <suffix>III</suffix></name> <name><surname>Church</surname> <given-names>B. A.</given-names></name> <name><surname>Gramann</surname> <given-names>K.</given-names></name> <name><surname>Makeig</surname> <given-names>S.</given-names></name></person-group> (<year>2014</year>). <article-title>Brain dynamics that correlate with effects of learning on auditory distance perception</article-title>. <source>Front. Neurosci.</source> <volume>8</volume>:<fpage>396</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fnins.2014.00396</pub-id></citation></ref>
<ref id="ref33"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wisniewski</surname> <given-names>M. G.</given-names></name> <name><surname>Radell</surname> <given-names>M. L.</given-names></name> <name><surname>Church</surname> <given-names>B. A.</given-names></name> <name><surname>Mercado</surname> <given-names>E.</given-names> <suffix>III</suffix></name></person-group> (<year>2017</year>). <article-title>Benefits of fading in perceptual learning are driven by more than dimensional attention</article-title>. <source>PLoS One</source> <volume>12</volume>:<fpage>e0180959</fpage>. doi: <pub-id pub-id-type="doi">10.1371/journal.pone.0180959</pub-id>, PMID: <pub-id pub-id-type="pmid">28723976</pub-id></citation></ref>
<ref id="ref34"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wright</surname> <given-names>B. A.</given-names></name> <name><surname>Zhang</surname> <given-names>Y.</given-names></name></person-group> (<year>2009</year>). <article-title>A review of the generalization of auditory learning</article-title>. <source>Philos. Trans. R. Soc. B</source> <volume>364</volume>, <fpage>301</fpage>&#x2013;<lpage>311</lpage>. doi: <pub-id pub-id-type="doi">10.1098/rstb.2008.0262</pub-id>, PMID: <pub-id pub-id-type="pmid">18977731</pub-id></citation></ref></ref-list>
</back>
</article>