<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Hum. Neurosci.</journal-id>
<journal-title>Frontiers in Human Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Hum. Neurosci.</abbrev-journal-title>
<issn pub-type="epub">1662-5161</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fnhum.2021.585817</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Human Neuroscience</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Phonological Underspecification: An Explanation for How a Rake Can Become Awake</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Cummings</surname> <given-names>Alycia E.</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/880864/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Wu</surname> <given-names>Ying C.</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/186904/overview"/>
</contrib> 
<contrib contrib-type="author">
<name><surname>Ogiela</surname> <given-names>Diane A.</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/1082132/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Department of Communication Sciences and Disorders, Idaho State University</institution>, <addr-line>Meridian, ID</addr-line>, <country>United States</country></aff>
<aff id="aff2"><sup>2</sup><institution>Swartz Center for Computational Neuroscience, University of California, San Diego</institution>, <addr-line>San Diego, CA</addr-line>, <country>United States</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Arild Hestvik, University of Delaware, United States</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Wolfram Ziegler, Ludwig Maximilian University of Munich, Germany; Karthik Durvasula, Michigan State University, United States</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Alycia E. Cummings <email>cummalyc&#x00040;isu.edu</email></corresp>
<fn fn-type="other" id="fn001"><p><bold>Specialty section</bold>: This article was submitted to Speech and Language, a section of the journal Frontiers in Human Neuroscience</p></fn>
</author-notes>
<pub-date pub-type="epub">
<day>17</day>
<month>02</month>
<year>2021</year>
</pub-date>
<pub-date pub-type="collection">
<year>2021</year>
</pub-date>
<volume>15</volume>
<elocation-id>585817</elocation-id>
<history>
<date date-type="received">
<day>21</day>
<month>07</month>
<year>2020</year>
</date>
<date date-type="accepted">
<day>25</day>
<month>01</month>
<year>2021</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2021 Cummings, Wu and Ogiela.</copyright-statement>
<copyright-year>2021</copyright-year>
<copyright-holder>Cummings, Wu and Ogiela</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract><p>Neural markers, such as the mismatch negativity (MMN), have been used to examine the phonological underspecification of English feature contrasts using the Featurally Underspecified Lexicon (FUL) model. However, neural indices have not been examined within the approximant phoneme class, even though there is evidence suggesting processing asymmetries between liquid (e.g., /&#x00279;/) and glide (e.g., /w/) phonemes. The goal of this study was to determine whether glide phonemes elicit electrophysiological asymmetries related to [consonantal] underspecification when contrasted with liquid phonemes in adult English speakers. Specifically, /&#x00279;&#x00251;/ is categorized as [+consonantal] while /w&#x00251;/ is not specified [i.e., (&#x02013;consonantal)]. Following the FUL framework, if /w/ is less specified than /&#x00279;/, the former phoneme should elicit a larger MMN response than the latter phoneme. Fifteen English-speaking adults were presented with two syllables, /&#x00279;&#x00251;/ and /w&#x00251;/, in an event-related potential (ERP) oddball paradigm in which both syllables served as the standard and deviant stimulus in opposite stimulus sets. Three types of analyses were used: (1) traditional mean amplitude measurements; (2) cluster-based permutation analyses; and (3) event-related spectral perturbation (ERSP) analyses. The less specified /w&#x00251;/ elicited a large MMN, while a much smaller MMN was elicited by the more specified /&#x00279;&#x00251;/. In the standard and deviant ERP waveforms, /w&#x00251;/ elicited a significantly larger negative response than did /&#x00279;&#x00251;/. Theta activity elicited by /&#x00279;&#x00251;/ was significantly greater than that elicited by /w&#x00251;/ in the 100&#x02013;300 ms time window. Also, low gamma activation was significantly lower for /&#x00279;&#x00251;/ vs. /w&#x00251;/ deviants over the left hemisphere, as compared to the right, in the 100&#x02013;150 ms window. 
These outcomes suggest that the [consonantal] feature follows the underspecification predictions of FUL previously tested with the place of articulation and voicing features. Thus, this study provides new evidence for phonological underspecification. Moreover, as neural oscillation patterns have not previously been discussed in the underspecification literature, the ERSP analyses identified potential new indices of phonological underspecification.</p></abstract>
<kwd-group>
<kwd>ERP</kwd>
<kwd>EEG</kwd>
<kwd>underspecification</kwd>
<kwd>MMN</kwd>
<kwd>ERSP</kwd>
<kwd>theta</kwd>
<kwd>gamma</kwd>
<kwd>phonology</kwd>
</kwd-group>
<contract-sponsor id="cn001">National Institute on Deafness and Other Communication Disorders<named-content content-type="fundref-id">10.13039/100000055</named-content></contract-sponsor>
<contract-sponsor id="cn002">SBE Office of Multidisciplinary Activities<named-content content-type="fundref-id">10.13039/100005717</named-content></contract-sponsor>
<counts>
<fig-count count="7"/>
<table-count count="1"/>
<equation-count count="0"/>
<ref-count count="98"/>
<page-count count="18"/>
<word-count count="14026"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="introduction" id="s1">
<title>Introduction</title>
<p>Distinctive features are often described as the functional units of phonological systems (Chomsky and Halle, <xref ref-type="bibr" rid="B13">1968</xref>). Phonemes are composed of combinations of features, with each phoneme being distinguished from all other phonemes by at least one feature. Phonological underspecification theories propose that only the distinctive features that differentiate a phoneme are present in the adult phonological representation (Kiparsky, <xref ref-type="bibr" rid="B53">1985</xref>; Archangeli, <xref ref-type="bibr" rid="B1">1988</xref>; Mohanan, <xref ref-type="bibr" rid="B67">1991</xref>; Steriade, <xref ref-type="bibr" rid="B89">1995</xref>). Specifically, underspecification identifies some features as &#x0201C;default&#x0201D; and others as &#x0201C;marked.&#x0201D; Default features are not stored within the phonological representation because they are assumed to be predictable by phonological rule. Conversely, marked features are the contrastive, or not otherwise predictable, phonological information that must be specified and stored. A marked phoneme is presumed to require the storage of more distinctive features in its phonological representation as compared to an unmarked phoneme. Thus, marked phonemes are considered to be more phonologically specified than unmarked phonemes.</p>
<p>By only storing specified features within the phonological representation, underspecification can improve speech processing efficiency when encountering the wide variability present in natural speech (Chomsky and Halle, <xref ref-type="bibr" rid="B13">1968</xref>; Eulitz and Lahiri, <xref ref-type="bibr" rid="B34">2004</xref>). Indeed, evidence for the effectiveness of phonological underspecification can be found in speech production. For example, phonological code retrieval in adults is slower when naming words beginning with marked phonemes, such as /&#x00279;/, as compared to unmarked phonemes, such as /b/ (Cummings et al., <xref ref-type="bibr" rid="B21">2016</xref>).</p>
<p>The application of underspecification is also often observed in speech production errors. Specifically, speech errors typically affect specified features and phonemes rather than underspecified features and phonemes (Fromkin, <xref ref-type="bibr" rid="B36">1973</xref>; Levelt et al., <xref ref-type="bibr" rid="B58">1999</xref>; Brown, <xref ref-type="bibr" rid="B8">2004</xref>). For example, approximants are involved in a common phonological process, called liquid gliding, found in the productions of both typically developing children and those with speech sound disorders (Shriberg, <xref ref-type="bibr" rid="B87">1980</xref>; Broen et al., <xref ref-type="bibr" rid="B7">1983</xref>). That is, many young English-speaking children incorrectly produce pre-vocalic /&#x00279;/ as [w] (e.g., &#x02018;rake&#x02019; is pronounced as &#x02018;wake&#x02019;); however, children rarely, if ever, produce /w/ as [&#x00279;]<xref ref-type="fn" rid="fn0001"><sup>1</sup></xref>. Thus, during typical and atypical development, children tend to incorrectly produce phonemes with specified features (Stoel-Gammon and Dunn, <xref ref-type="bibr" rid="B90">1985</xref>; Grunwell, <xref ref-type="bibr" rid="B42">1987</xref>). Such evidence suggests that the underlying phonological representations can affect speech production. A better understanding of how specified and underspecified features are stored within phonological representations has important clinical implications for speech-language pathologists working with clients who have speech production errors. Due to the high frequency of the liquid gliding phonological process in pediatric American English-speaking populations, the examination of the /&#x00279;/-/w/ contrast is of particular interest.</p>
<p>As underlying phonological representations cannot be easily, if at all, accessed behaviorally, neuroimaging tools have proven useful in examining phonological underspecification. Neural markers of phonological underspecification have primarily been examined using the framework established by the Featurally Underspecified Lexicon (FUL) model (Lahiri and Marslen-Wilson, <xref ref-type="bibr" rid="B55">1991</xref>; Lahiri and Reetz, <xref ref-type="bibr" rid="B56">2002</xref>, <xref ref-type="bibr" rid="B57">2010</xref>). Phonological underspecification has been found in vowels (Diesch and Luce, <xref ref-type="bibr" rid="B28">1997</xref>; Eulitz and Lahiri, <xref ref-type="bibr" rid="B34">2004</xref>; Cornell et al., <xref ref-type="bibr" rid="B18">2011</xref>; Scharinger et al., <xref ref-type="bibr" rid="B83">2012</xref>), as well as in consonants such as stops (Cummings et al., <xref ref-type="bibr" rid="B20">2017</xref>), nasals (Cornell et al., <xref ref-type="bibr" rid="B19">2013</xref>), and fricatives (Schluter et al., <xref ref-type="bibr" rid="B84">2016</xref>). Many of these studies have indexed underspecification using the mismatch negativity (MMN), which is a well-studied event-related potential (ERP) peak that is elicited by auditory oddballs presented within a stream of standard stimuli (N&#x000E4;&#x000E4;t&#x000E4;nen and Winkler, <xref ref-type="bibr" rid="B69">1999</xref>; Picton et al., <xref ref-type="bibr" rid="B76">2000</xref>; N&#x000E4;&#x000E4;t&#x000E4;nen et al., <xref ref-type="bibr" rid="B71">2007</xref>). The MMN is a neurophysiological index of auditory change detection. As the deviant oddball becomes more different from the standard, MMN amplitude increases and latency decreases.
Thus, the timing and size of the MMN may reflect the amount of perceived difference between the standard and the deviant stimuli (Tiitinen et al., <xref ref-type="bibr" rid="B92">1994</xref>; N&#x000E4;&#x000E4;t&#x000E4;nen et al., <xref ref-type="bibr" rid="B70">1997</xref>).</p>
<p>Within the FUL framework, the size of the MMN depends on the degree of specification of the features extracted from the stimuli (Winkler et al., <xref ref-type="bibr" rid="B96">1999</xref>; Eulitz and Lahiri, <xref ref-type="bibr" rid="B34">2004</xref>; Scharinger et al., <xref ref-type="bibr" rid="B82">2011</xref>). For example, a true mismatch occurs when the more specified sound is the standard and the less specified sound is the deviant in the MMN oddball paradigm. In this situation, large MMN responses are elicited by the less specified deviant sound because it violates the feature expectations established by the standard. Conversely, a no-mismatch occurs when the less specified sound serves as the standard and the more specified sound is the deviant. In this context, no conflict between the phonetic features is identified because the feature was not specified by the standard. Thus, a very small, or no, MMN is elicited. Because of the predicted size differences of the MMN responses, the true mismatch contrast could be considered an easier feature comparison to make than the no-mismatch feature comparison.</p>
<p>Neural indices of underspecification have not been examined within the English approximant phoneme class, even though there is evidence suggesting processing asymmetries exist between liquid (e.g., /&#x00279;/) and glide (e.g., /w/) phonemes (Greenberg, <xref ref-type="bibr" rid="B40">1975</xref>; Shriberg, <xref ref-type="bibr" rid="B87">1980</xref>; Edwards, <xref ref-type="bibr" rid="B33">1983</xref>; Clements, <xref ref-type="bibr" rid="B14">1990</xref>). These asymmetries suggest that /&#x00279;/ and /w/ might differ in how they are stored within a phonological representation. While /&#x00279;/ and /w/ share several distinctive features (Chomsky and Halle, <xref ref-type="bibr" rid="B13">1968</xref>), liquid phonemes are specified as [+consonantal] while glide phonemes are considered semi-vowels and are not specified for that feature (i.e., [&#x02013;consonantal]). The basic definition of [consonantal] is: &#x0201C;&#x02026; sounds [are] produced with a radical obstruction in the midsagittal region of the vocal tract; nonconsonantal sounds are produced without such an obstruction.&#x0201D; (Chomsky and Halle, <xref ref-type="bibr" rid="B13">1968</xref>; p. 302). That is, [consonantal] phonemes are produced with varying amounts of constriction created by the labial, coronal, and/or dorsal articulators in the oral cavity. This feature classification essentially places vowels, glides, and laryngeal consonants in one natural sound class: [&#x02013;consonantal], while the other consonant phonemes, including /&#x00279;/, are in a separate sound class: [+consonantal]. Thus, glide phonemes can be considered underspecified for [consonantal] in comparison to liquid phonemes.</p>
<p>While [consonantal] never functions as the sole feature responsible for distinguishing phonemes (Hume and Odden, <xref ref-type="bibr" rid="B50">1996</xref>), it is hypothesized that constriction is the primary distinguishing feature of /&#x00279;/ and /w/, at least in American English. There are many ways that the American pre-vocalic /&#x00279;/ can be produced (Preston et al., <xref ref-type="bibr" rid="B80">2020</xref>), with productions broadly described as either &#x0201C;retroflex&#x0201D; or &#x0201C;bunched&#x0201D; in nature. Regardless of the type of production used, two separate constrictions are necessary for /&#x00279;/ to be produced: palatal constriction and pharyngeal constriction (Delattre and Freeman, <xref ref-type="bibr" rid="B25">1968</xref>; Gick, <xref ref-type="bibr" rid="B38">1999</xref>; Secord et al., <xref ref-type="bibr" rid="B85">2007</xref>). The palatal constriction is made by bringing the dorsum of the tongue near the soft palate, while the pharyngeal constriction is achieved with tongue root retraction. Indeed, problems with vocal tract constriction, and arguably the application of [consonantal], are often observed in children with speech sound disorders, who frequently have difficulty achieving the adequate palatal and pharyngeal constriction necessary for an &#x0201C;accurate&#x0201D; /&#x00279;/ production. That is, they produce /&#x00279;/ with /w/-level constriction, which is insufficient in its amount and/or place.</p>
<p>Previous studies examining underspecification within the FUL-MMN paradigm relied on strict superset-subset relationships between the specified and underspecified features [e.g., contrasting voiced stops differing only by place of articulation: (coronal) vs. (labial); Cummings et al., <xref ref-type="bibr" rid="B20">2017</xref>]. Such a contrast is not available for /&#x00279;/ and /w/ because they vary both in terms of manner and place of articulation. Thus, a strict application of FUL cannot be applied to identify the underspecification differences in /&#x00279;/ and /w/. Nevertheless, there are other potential ways to identify phonological underspecification in this contrast, which could then be tested using FUL-based predictions in an MMN paradigm.</p>
<p>Feature Geometry is an alternative way of organizing features in a hierarchical relationship that reflects the configuration of the vocal tract and articulators in a tree diagram. It allows for broad feature groupings (e.g., manner and place of articulation) to be associated with individual features. This means that some features, such as those at the root node (e.g., consonantal, sonorant) dominate place (e.g., coronal, labial) nodes<xref ref-type="fn" rid="fn0002"><sup>2</sup></xref> (Bernhardt and Stoel-Gammon, <xref ref-type="bibr" rid="B5">1994</xref>; Clements and Hume, <xref ref-type="bibr" rid="B15">1995</xref>; Halle et al., <xref ref-type="bibr" rid="B46">2000</xref>; Lahiri and Reetz, <xref ref-type="bibr" rid="B56">2002</xref>). It is assumed that the determination of features in the higher nodes of the tree will impact the features available at lower nodes in the tree. Thus, the idea of markedness is present in both feature geometry and underspecification theory.</p>
<p>Given the hypothesis that constriction is the distinguishing articulatory property of /w/ and /&#x00279;/, the feature geometry theory of Clements and Hume (<xref ref-type="bibr" rid="B15">1995</xref>) was used (<xref ref-type="fig" rid="F1">Figure 1</xref>). The Clements and Hume (<xref ref-type="bibr" rid="B15">1995</xref>) model is a constriction-based approach that defines most phonemes in terms of their constriction location and degree. This means that the place features (i.e., the articulators and dependents) define the constriction location while the articulator-free features define constriction degree (i.e., consonantal/vocoid, sonorant, approximant, and continuant). Three major class features are located at the root node: [sonorant], [approximant], and [vocoid]. As [vocoid] is the terminal opposite of [consonantal], we will refer to /w/ as [&#x02013;consonantal] ([+vocoid]) and /&#x00279;/ as [+consonantal] ([-vocoid]). These distinct [+consonantal] and [&#x02013;consonantal] designations place /&#x00279;/ on the C-place tier and /w/ on the V-place tier, respectively. Both phonemes are [+sonorant], [+approximant], and [+continuant].</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p>Feature geometry trees based on Clements and Hume (<xref ref-type="bibr" rid="B15">1995</xref>). Panel <bold>(A)</bold> displays the full feature geometry tree for consonants. Panel <bold>(B)</bold> displays the full feature geometry tree for vocoids (i.e., vowels and glides). Panel <bold>(C)</bold> displays the features of /&#x00279;/: [+consonantal] ([-vocoid]), [+sonorant], [+approximant], [+continuant], and [coronal: +distributed]. Panel <bold>(D)</bold> displays the features of /w/: [&#x02013;consonantal] ([+vocoid]), [+sonorant], [+approximant], [+continuant], and [labial].</p></caption>
<graphic xlink:href="fnhum-15-585817-g0001.tif"/>
</fig>
<p>In this model, the place nodes for vowels and consonants are on separate tiers, designated <italic>V</italic>-place and <italic>C</italic>-place, respectively, with the vocalic node linking under the <italic>C</italic>-place node. The actual constriction location (i.e., place of articulation) is largely the same for both vowels and consonants: [labial], [coronal], and [dorsal]. As a result, consonant and vowel articulators are placed on the same tier. In addition, [coronal] has two dependents: [anterior] and [distributed]. This means that coronal itself is not the terminal place of articulation&#x02014;[anterior] or [distributed] is; conversely, [labial] and [dorsal] are terminal. This feature tree organization leads to /w/ being characterized as [&#x02013;consonantal, labial] while /&#x00279;/ contains the features [+consonantal, coronal: +distributed]. With [coronal: +distributed] being located lower on the feature tree than [labial], /&#x00279;/ is more specified for the place of articulation than /w/. Thus, following Clements and Hume (<xref ref-type="bibr" rid="B15">1995</xref>), as compared to /w/, /&#x00279;/ is more specified both in terms of the manner of articulation [+consonantal] and place of articulation [coronal: +distributed].</p>
<p>With regard to /&#x00279;/ and /w/, the feature [consonantal] ([vocoid]) is located on the highest node of the Clements and Hume (<xref ref-type="bibr" rid="B15">1995</xref>) tree (<xref ref-type="fig" rid="F1">Figure 1</xref>). As such, processing this feature should dominate the processing of features at lower nodes, including the place of articulation nodes (i.e., <italic>C</italic>-Place and <italic>V</italic>-Place). That is, feature geometry theory predicts that the presence or absence of the [consonantal] feature will be the relevant contrasting feature of /w/ and /&#x00279;/. While the Clements and Hume (<xref ref-type="bibr" rid="B15">1995</xref>) model does not have the same organization as FUL, both can identify features and phonemes that are less specified than others. It is assumed that if underspecification is a language-universal phenomenon, the underspecification-specific MMN predictions of FUL would hold regardless of whether FUL was strictly adhered to, or whether another theoretical interpretation of underspecification was used. Thus, the Clements and Hume (<xref ref-type="bibr" rid="B15">1995</xref>) model was employed as the framework for the [consonantal] underspecification of /w/, as compared to /&#x00279;/. This prediction was then tested in the present study using the FUL-based predictions in an MMN paradigm.</p>
<p>Feature geometry can also provide a framework to explain children&#x02019;s acquisition of phonemes and speech production errors. That is, higher and dominant nodes in the hierarchy are proposed to be acquired before subordinate nodes (Bernhardt, <xref ref-type="bibr" rid="B3">1992</xref>; Core, <xref ref-type="bibr" rid="B17">1997</xref>). Moreover, default features would be acquired early in development, with minimal specification present in the phonological representations (Bernhardt and Gilbert, <xref ref-type="bibr" rid="B4">1992</xref>). The liquid gliding phonological process could then be explained by the early acquisition of /w/, which is not specified for [consonantal] and is the default feature. Only after the [consonantal] feature of /&#x00279;/ is fully established in the phonological representation is the gliding pattern suppressed in children&#x02019;s production. As the basic definition of [consonantal] suggests that articulatory precision (i.e., constriction control) is necessary, it seems logical that an underspecified [&#x02013;consonantal] phoneme (i.e., /w/) would be acquired prior to a [+consonantal] phoneme (i.e., /&#x00279;/). Thus, there is speech production evidence for the underspecification of [consonantal] in typical and atypical development.</p>
<p>There is clear evidence from developmental and clinical (i.e., disordered speech) data that there is a relationship between /w/ and /&#x00279;/ in American English, with young children and children with speech disorders substituting [w] for /&#x00279;/. Moreover, when adults mimic the speech of young children, they almost always substitute [w] for /&#x00279;/. Thus, the liquid gliding phonological process is an arguably ingrained stereotype of young children&#x02019;s speech&#x02014;even for adults who have essentially no explicit knowledge of the phonological system. Given these observations, it was hypothesized that /w/ contains one or more default features leading to its common usage in development while /&#x00279;/ contains one or more specified features that limits its production early in development. The purpose of the study was to address this potential underspecified/specified feature relationship in adults before examining the neural processing patterns in children. Thus, this study aims to determine whether glide phonemes elicit [consonantal] underspecification-related electrophysiological asymmetries when contrasted with liquid phonemes in adult English speakers.</p>
<p>Following FUL&#x02019;s predictions and framework, if /w/ is less specified than /&#x00279;/ in terms of the manner of articulation, the former phoneme should elicit a larger MMN response than the latter phoneme. That is, a standard stream of /w/ phonemes would not set expectations for [consonantal], so when a deviant /&#x00279;/ is presented, it would be a no-mismatch. Thus, a small, or no, MMN response is predicted to occur in the no-mismatch situation. Conversely, hearing /&#x00279;/ as the standard stimulus would set up the expectation for [+consonantal], which would be violated by a deviant /w/. Thus, a large MMN response is predicted to occur in the true mismatch situation.</p>
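<p>The asymmetric prediction above can be expressed compactly. The following is a minimal sketch of the mismatch logic as we read it from the FUL framework; the function name, the feature-set encoding, and the use of string labels are our own illustrative choices, not part of the model's formal machinery:</p>

```python
def mmn_prediction(standard_features, deviant_features, feature):
    """Sketch of the FUL mismatch logic for one feature contrast.

    A feature that the standard stream specifies but the deviant lacks
    conflicts with the expectation set up by the standards (true mismatch);
    if the standard leaves the feature unspecified, no conflict arises.
    """
    if feature in standard_features and feature not in deviant_features:
        return "true mismatch: large MMN"
    return "no-mismatch: small or absent MMN"

# /ra/ standard specifies [+consonantal]; deviant /wa/ is underspecified
print(mmn_prediction({"+consonantal"}, set(), "+consonantal"))
# prints "true mismatch: large MMN"

# /wa/ standard sets no [consonantal] expectation; deviant /ra/ specifies it
print(mmn_prediction(set(), {"+consonantal"}, "+consonantal"))
# prints "no-mismatch: small or absent MMN"
```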
<p>While underspecification has primarily been addressed with ERPs, subtle processing differences between distinct phonemes may not be detected due to the averaging of brain signals in traditional ERP methods. In contrast, time-frequency analyses provide an alternative approach that involves decomposing the spectral power of the EEG signal over time (Davidson and Indefrey, <xref ref-type="bibr" rid="B23">2007</xref>; Cohen, <xref ref-type="bibr" rid="B16">2014</xref>). Unlike ERPs, which only reveal phase-locked changes in the time series data, this approach affords a view both of changes in EEG signals that are phase-locked to stimulus onset (evoked responses) and of changes that are not phase-locked (induced responses). The synchronization of neuronal cell assemblies proposed to underlie increases in induced power has been hypothesized to mediate the binding of perceptual information (Singer and Gray, <xref ref-type="bibr" rid="B88">1995</xref>). Experimental results have also implicated induced responses in various cognitive functions such as working memory (Gurariy et al., <xref ref-type="bibr" rid="B43">2016</xref>) and attentional processes (Ward, <xref ref-type="bibr" rid="B95">2003</xref>).</p>
<p>In keeping with these findings, there is reason to believe that phonological underspecification could also be indicated by neural oscillation patterns. For example, cortical oscillations in the theta (&#x0007E;4&#x02013;7 Hz) and low gamma (&#x0007E;25&#x02013;35 Hz) bands have been implicated in decoding syllabic and phonemic segments, respectively, from continuous speech (Luo and Poeppel, <xref ref-type="bibr" rid="B60">2007</xref>; Ghitza, <xref ref-type="bibr" rid="B37">2011</xref>; Giraud and Poeppel, <xref ref-type="bibr" rid="B39">2012</xref>; Doelling et al., <xref ref-type="bibr" rid="B31">2014</xref>; Di Liberto et al., <xref ref-type="bibr" rid="B27">2015</xref>). That is, theta band has been proposed to represent higher-order syllable-level processing while low gamma band activities have been linked to phoneme feature-level processing (e.g., formant transitions, voicing). Possibly, one, or both, of these bands could demonstrate underspecification response asymmetries. As neural oscillation patterns underlying phonological underspecification have not previously been examined, this work was exploratory in nature and no specific hypotheses were proposed regarding theta and low gamma response patterns.</p>
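<p>Time-frequency decompositions of this kind are commonly computed by convolving the EEG time series with complex Morlet wavelets. The sketch below illustrates the idea for a single channel; the function name, the 250-Hz sampling rate, and the five-cycle wavelet width are illustrative assumptions, not the analysis parameters used in this study:</p>

```python
import numpy as np

def tf_power(signal, srate, freqs, n_cycles=5):
    """Single-channel time-frequency power via complex Morlet wavelets.

    Returns an array of shape (len(freqs), len(signal)). Averaging such
    maps over trials and baseline-normalizing (not shown) yields an ERSP.
    """
    t = np.arange(-1, 1, 1 / srate)            # 2-s wavelet support
    power = np.empty((len(freqs), len(signal)))
    for i, f in enumerate(freqs):
        sigma = n_cycles / (2 * np.pi * f)     # Gaussian width in seconds
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-(t ** 2) / (2 * sigma ** 2))
        power[i] = np.abs(np.convolve(signal, wavelet, mode="same")) ** 2
    return power

# Example: a pure 6-Hz (theta-band) oscillation yields peak power at 6 Hz
srate = 250
time = np.arange(0, 2, 1 / srate)
theta_sig = np.sin(2 * np.pi * 6 * time)
freqs = np.arange(4, 36, 2)                    # theta through low gamma
mean_power = tf_power(theta_sig, srate, freqs).mean(axis=1)
print(freqs[np.argmax(mean_power)])            # prints 6
```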
</sec>
<sec sec-type="materials and methods" id="s2">
<title>Materials and Methods</title>
<sec id="s2-1">
<title>Participants</title>
<p>Fifteen native speakers of (American) English (three male, 12 female; mean age: 21.71 years, range: 19&#x02013;26 years), all undergraduate students, participated in the study. All had normal or corrected-to-normal vision, and none had a history of speech, language, and/or hearing impairment. This study was approved by the university institutional review board, and each participant signed informed consent in accordance with the university human research protection program.</p>
</sec>
<sec id="s2-2">
<title>Stimuli</title>
<p>Syllables (consonant + /&#x00251;/) were pronounced by a male North American English speaker. The syllables were digitally recorded in a sound-isolated room (Industrial Acoustics Company, Inc., Winchester, UK) using a Beyer Dynamic (Heilbronn, Germany) Soundstar MK II unidirectional dynamic microphone and a Behringer (Willich, Germany) Eurorack MX602A mixer. All syllables were digitized with a 16-bit AD converter at a 44.1 kHz sampling rate. The average intensity of all the syllable stimuli was normalized to 65 dB SPL.</p>
<p>The adults heard two oddball stimulus sets, each containing the same four English speech consonant-vowel (CV) syllables: &#x0201C;ra&#x0201D; (/&#x00279;&#x00251;/), &#x0201C;wa&#x0201D; (/w&#x00251;/), &#x0201C;ba&#x0201D; (/b&#x00251;/), and &#x0201C;da&#x0201D; (/d&#x00251;/). In one stimulus set, /&#x00279;&#x00251;/ served as the standard syllable, with the other three CV syllables serving as deviants. In the second stimulus set, /w&#x00251;/ served as the standard syllable, with the other three syllables being deviants. Only responses to the /&#x00279;&#x00251;/ and /w&#x00251;/ syllables will be addressed further since they served as both standard and deviant stimuli, which allowed for the creation of same-stimulus identity difference waves. Since /b&#x00251;/ and /d&#x00251;/ deviants were incorporated to prevent MMN habituation, they were not examined. As initially recorded, the syllables varied slightly in duration, due to the individual phonetic make-up of each consonant. Syllable duration was minimally modified in /w&#x00251;/ (by shortening the steady-state vowel duration by 24 ms) so that all syllables were 375 ms in length. Each syllable token used in the study was correctly identified by at least 15 adult listeners.</p>
<p>The phonotactic probability<xref ref-type="fn" rid="fn0003"><sup>3</sup></xref> of each phoneme and syllable was calculated using the online phonotactic probability calculator<xref ref-type="fn" rid="fn0004"><sup>4</sup></xref> (Vitevitch and Luce, <xref ref-type="bibr" rid="B94">2004</xref>). These probability values are presented in <xref ref-type="table" rid="T1">Table 1</xref>. The singleton /&#x00279;/ occurs 2.5 times more frequently than /w/ in English. Similarly, the /&#x00279;&#x00251;/ syllable occurs 1.375 times more frequently in English than /w&#x00251;/.</p>
<table-wrap id="T1" position="float">
<label>Table 1</label>
<caption><p>The phonotactic probability in English of the phonemes and syllables used in the study.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="center"></th>
<th align="center">Consonant</th>
<th align="center">Consonant + /&#x00251;/</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">/&#x00279;/</td>
<td align="center">0.0501</td>
<td align="center">0.0011</td>
</tr>
<tr>
<td align="left">/w/</td>
<td align="center">0.0203</td>
<td align="center">0.0008</td>
</tr>
<tr>
<td align="left">/b/</td>
<td align="center">0.0512</td>
<td align="center">0.0039</td>
</tr>
<tr>
<td align="left">/d/</td>
<td align="center">0.0518</td>
<td align="center">0.0023</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="s2-3">
<title>Stimulus Presentation</title>
<p>The stimuli were presented in blocks containing 237 standard stimuli and 63 deviant stimuli (21 per deviant), with five blocks of each stimulus set being presented to each participant (10 total blocks). The stimulus sets were presented sequentially in the session, with all five blocks of one stimulus set (e.g., /&#x00279;&#x00251;/ standard set) being presented before the other stimulus set (e.g., /w&#x00251;/ standard set); the presentation of the stimulus sets was counterbalanced across participants. Each block lasted approximately 6 min and the participants were given a break between blocks when necessary. Within each block, the four stimuli were presented using an oddball paradigm in which the three deviant stimuli (probability = 0.07 for each) were presented in a series of standard stimuli (probability = 0.79). Stimuli were presented in a pseudorandom sequence and the onset-to-onset inter-stimulus interval varied randomly between 600 and 800 ms. The syllables were delivered by stimulus presentation software (Presentation, www.neurobs.com). The syllable sounds were played <italic>via</italic> two loudspeakers situated 30 degrees to the right and left of the midline, 120 cm in front of the participant, which allowed the sounds to be perceived as emanating from the midline space. The participants sat in a sound-treated room and watched a silent cartoon video of their choice. The recording of the ERPs took approximately 1 h.</p>
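<p>The block structure described above can be sketched as follows. This is a minimal illustration, not the authors&#x02019; presentation script: the deviant labels and the no-adjacent-deviants constraint are assumptions, since the text specifies only that the sequence was pseudorandom.</p>

```python
import random

def make_block(n_standard=237, deviant_labels=("dev1", "dev2", "dev3"),
               per_deviant=21, seed=None):
    """Build one pseudorandom oddball block: 237 standards ("std") and
    63 deviants (21 per deviant type), i.e., p = 0.79 and 3 x 0.07.

    The no-adjacent-deviants rule is a hypothetical constraint for
    illustration: each deviant is dropped into its own gap between
    standards, which guarantees no two deviants occur back-to-back.
    """
    rng = random.Random(seed)
    deviants = [d for d in deviant_labels for _ in range(per_deviant)]
    rng.shuffle(deviants)
    # at most one deviant per gap; there are n_standard + 1 gaps
    gaps = dict(zip(rng.sample(range(n_standard + 1), len(deviants)), deviants))
    block = []
    for g in range(n_standard + 1):
        if g in gaps:
            block.append(gaps[g])
        if g < n_standard:
            block.append("std")
    return block
```

A block built this way contains 300 trials with the standard probability of 237/300 = 0.79 reported above.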
</sec>
<sec id="s2-4">
<title>EEG Recording and Averaging</title>
<p>Sixty-six channels of continuous EEG (DC-128 Hz) were recorded using an ActiveTwo data acquisition system (Biosemi, Inc., Amsterdam, Netherlands) at a sampling rate of 256 Hz. This system provides &#x0201C;active&#x0201D; EEG amplification at the scalp that substantially minimizes movement artifacts. The amplifier gain on this system is fixed, allowing ample input range (&#x02212;264 to 264 mV) on a wide dynamic range (110 dB) Delta-Sigma (&#x00394;&#x003A3;) 24-bit AD converter. Sixty-four channels of scalp data were recorded using electrodes mounted in an elastic cap according to the International 10-20 system. Two additional electrodes were placed on the right and left mastoids. Eye movements were monitored using the FP1/FP2 (blinks) and F7/F8 channels (lateral movements, saccades). During data acquisition, all channels were referenced to the system&#x02019;s internal loop (CMS/DRL sensors located in the centro-parietal region), which drives the average potential of a subject (the Common Mode voltage) as close as possible to the Analog-Digital Converter reference voltage (the amplifier &#x0201C;zero&#x0201D;). The DC offsets were kept below 25 microvolts at all channels. Off-line, data were re-referenced to the common average of the 64 scalp electrode tracings.</p>
<p>Data processing followed an EEGLAB (Delorme and Makeig, <xref ref-type="bibr" rid="B26">2004</xref>) pipeline. Briefly, data were high-pass filtered at 0.5 Hz. Line noise was removed using the CleanLine EEGLAB plugin. Bad channels were rejected using the trimOutlier EEGLAB plugin and the removed channels were interpolated. Source-level contributions to the channel EEG were decomposed using Adaptive Mixture Independent Component Analysis (AMICA; Palmer et al., <xref ref-type="bibr" rid="B74">2008</xref>) in EEGLAB<xref ref-type="fn" rid="fn0005"><sup>5</sup></xref>. Artifactual independent components (ICs) were identified by their activation patterns, scalp topographies, and power spectra, and the contribution of these components to the channel EEG was zeroed (Jung et al., <xref ref-type="bibr" rid="B51">2000</xref>; Delorme and Makeig, <xref ref-type="bibr" rid="B26">2004</xref>). Epochs extending from 100 ms before to 800 ms after auditory stimulus onset were baseline-corrected for the pre-stimulus interval and averaged by stimulus type. On average, individual data contained 804 (SD = 84) /&#x00279;&#x00251;/ standard syllable epochs (i.e., trials), 794 (SD = 79) /w&#x00251;/ standard syllable epochs, 96 (SD = 9) /&#x00279;&#x00251;/ deviant syllable epochs, and 97 (SD = 9) /w&#x00251;/ deviant syllable epochs.</p>
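<p>The final epoching and averaging step can be sketched in a few lines. This is an illustrative numpy sketch of the generic procedure (epoch extraction, pre-stimulus baseline correction, averaging by stimulus type), not the EEGLAB code the authors used; the array layout and function name are assumptions.</p>

```python
import numpy as np

def epoch_and_average(eeg, srate, onsets, labels, tmin=-0.1, tmax=0.8):
    """Cut epochs around each stimulus onset, baseline-correct each epoch
    on its pre-stimulus interval, and average the epochs by stimulus type.

    eeg: (n_channels, n_samples) array; onsets: onset sample indices;
    labels: one stimulus-type tag per onset.
    """
    pre = int(round(-tmin * srate))   # pre-stimulus samples (100 ms here)
    post = int(round(tmax * srate))   # post-stimulus samples (800 ms here)
    by_type = {}
    for onset, label in zip(onsets, labels):
        epoch = eeg[:, onset - pre:onset + post]
        # subtract each channel's mean over the pre-stimulus interval
        epoch = epoch - epoch[:, :pre].mean(axis=1, keepdims=True)
        by_type.setdefault(label, []).append(epoch)
    return {label: np.mean(epochs, axis=0) for label, epochs in by_type.items()}
```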
</sec>
<sec id="s2-5">
<title>ERP and EEG Measurements</title>
<p>Three different data analysis strategies were used in the present study: (1) traditional mean amplitude repeated measure ANOVA analyses using averaged data; (2) cluster-based permutation analyses of averaged data (Bullmore et al., <xref ref-type="bibr" rid="B9">1999</xref>; Groppe et al., <xref ref-type="bibr" rid="B41">2011</xref>); and (3) event-related spectral perturbation (ERSP) analyses (Makeig, <xref ref-type="bibr" rid="B64">1993</xref>).</p>
<sec id="s2-5-1">
<title>Mean Amplitude Measurements of Averaged Data</title>
<p>The dual stimulus set nature of the present study allowed for the creation of &#x0201C;same-stimulus&#x0201D;, or identity, difference waveforms. These difference waves were created by subtracting the ERP response of a stimulus serving as the standard from that of the same stimulus serving as the deviant, across stimulus sets. For example, the ERP response for /&#x00279;&#x00251;/ as the standard was subtracted from the ERP response for /&#x00279;&#x00251;/ as the deviant (of the reversed stimulus set; Eulitz and Lahiri, <xref ref-type="bibr" rid="B34">2004</xref>; Cornell et al., <xref ref-type="bibr" rid="B18">2011</xref>, <xref ref-type="bibr" rid="B19">2013</xref>). The creation of identity difference waveforms eliminates the potential confound that may result from acoustic stimulus differences since the same stimulus is used to elicit both the standard and deviant responses. The waveforms were visually inspected from 0 to 400 ms, with the MMN appearing between approximately 100 and 250 ms post-syllable onset.</p>
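<p>The identity difference wave is a simple subtraction of averaged waveforms; a minimal sketch (the function name and array layout are illustrative):</p>

```python
import numpy as np

def identity_difference_wave(erp_as_deviant, erp_as_standard):
    """Same-stimulus (identity) difference wave: the averaged ERP to a
    syllable when it served as the deviant in one stimulus set, minus the
    averaged ERP to the same syllable when it served as the standard in
    the reversed set. Because both responses come from physically
    identical stimuli, acoustic differences cannot drive the result.
    """
    return np.asarray(erp_as_deviant) - np.asarray(erp_as_standard)
```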
<p>Since the MMN was present in 12 electrodes centered around the scalp midline (Fz, F1/F2, FCz, FC1/FC2, Cz, C1/C2, CPz, and CP1/CP2)<xref ref-type="fn" rid="fn0006"><sup>6</sup></xref>, these electrodes were selected for the mean amplitude analyses. The MMN elicited by /w&#x00251;/ extended for approximately 150 ms. Given this extended duration, the mean amplitude measurement of the MMN was split into three 50 ms windows: 100&#x02013;150, 150&#x02013;200, and 200&#x02013;250 ms post-stimulus onset. Phonological underspecification in the identity difference waves was analyzed separately in each time window using a Phoneme Type (/&#x00279;&#x00251;/, /w&#x00251;/) &#x000D7; Anterior-Posterior (4 Levels) &#x000D7; Left-Right (3 Levels) repeated-measures ANOVA.</p>
<p>Since the difference waves were generated from the standard and deviant syllable ERPs, the mean amplitude measurements of the standard and deviant waveforms were taken from the same three time windows as the MMN: 100&#x02013;150, 150&#x02013;200, and 200&#x02013;250 ms post-syllable onset. In terms of ERP waveform morphology, these measurements approximately captured the auditory N1 (100&#x02013;200 ms) and auditory P2 (200&#x02013;250 ms). Phonological underspecification in these ERPs was analyzed separately for each time window using a Phoneme Type (/&#x00279;&#x00251;/, /w&#x00251;/) &#x000D7; Trial Type (Standard, Deviant) &#x000D7; Anterior-Posterior (4 Levels) &#x000D7; Left-Right (3 Levels) repeated-measures ANOVA. Partial eta squared (<italic>&#x003B7;</italic><sup>2</sup>) effect sizes are reported for all significant effects and interactions. When applicable, Greenhouse&#x02013;Geisser corrected <italic>p</italic>-values are reported.</p>
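<p>The windowed mean amplitude measure can be sketched as follows, assuming an epoch that begins 100 ms before syllable onset; the function name and arguments are illustrative, not the authors&#x02019; code.</p>

```python
import numpy as np

def window_mean_amplitudes(wave, srate, epoch_start=-0.1,
                           windows=((0.100, 0.150), (0.150, 0.200), (0.200, 0.250))):
    """Mean amplitude of one channel's waveform in each analysis window.

    epoch_start is the time of the first sample relative to syllable
    onset (here -100 ms); windows are (start, end) times in seconds
    post-onset, matching the three 50 ms windows described above.
    """
    wave = np.asarray(wave)
    means = []
    for start, end in windows:
        i = int(round((start - epoch_start) * srate))
        j = int(round((end - epoch_start) * srate))
        means.append(float(wave[i:j].mean()))
    return means
```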
</sec>
<sec id="s2-5-2">
<title>Cluster Mass Permutation Tests of Averaged Data</title>
<p>The ERPs were submitted to repeated measures two-tailed cluster-based permutation tests (Bullmore et al., <xref ref-type="bibr" rid="B9">1999</xref>; Groppe et al., <xref ref-type="bibr" rid="B41">2011</xref>) using the Mass Univariate ERP Toolbox for EEGLAB<xref ref-type="fn" rid="fn0007"><sup>7</sup></xref>. Four tests were conducted: (1) /&#x00279;&#x00251;/ standard vs. /&#x00279;&#x00251;/ deviant ERPs; (2) /w&#x00251;/ standard vs. /w&#x00251;/ deviant ERPs; (3) /&#x00279;&#x00251;/ vs. /w&#x00251;/ standard ERPs; and (4) /&#x00279;&#x00251;/ vs. /w&#x00251;/ deviant ERPs. Each test included the same 12 electrodes from the mean amplitude ERP measurements: F1/F2, Fz, FC1/FC2, FCz, C1/C2, Cz, CP1/CP2, and CPz. All of the time points (measured every 4 ms; 155 total time points) between 0 and 600 ms at the 12 scalp electrodes were included in the test (i.e., 1,860 total comparisons).</p>
<p><italic>T</italic>-tests were performed for each comparison using the original data and 2,500 random within-participant permutations of the data. For each permutation, all <italic>t</italic>-scores corresponding to uncorrected <italic>p</italic>-values of 0.05 or less were formed into clusters. Electrodes within about 5.44 cm of one another were considered spatial neighbors, and adjacent time points were considered temporal neighbors. The sum of the <italic>t</italic>-scores in each cluster was the &#x0201C;mass&#x0201D; of that cluster. The most extreme cluster mass in each of the 2,501 sets of tests was recorded and used to estimate the distribution of the null hypothesis (i.e., no difference between conditions). The permutation cluster mass percentile ranking of each cluster from the observed data was used to derive <italic>p</italic>-values assigned to each member of the cluster. <italic>t</italic>-scores that were not included in a cluster were given a <italic>p</italic>-value of 1.</p>
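<p>The permutation procedure can be sketched for a single electrode (the published analysis additionally clusters over spatially neighboring electrodes). This is an illustrative sketch, not the Mass Univariate ERP Toolbox implementation; the critical <italic>t</italic> value is hardcoded for df = 14 (15 participants) to keep the sketch dependency-free.</p>

```python
import numpy as np

def cluster_mass_perm(cond_a, cond_b, n_perm=2500, t_crit=2.145, seed=0):
    """Single-electrode sketch of a cluster-mass permutation test on
    paired data. cond_a, cond_b: (n_subjects, n_timepoints) arrays.

    Clusters are runs of adjacent time points whose paired t-scores pass
    t_crit (two-tailed 0.05 critical t for df = 14) with a common sign;
    each observed cluster mass is ranked against the null distribution
    of maximal cluster masses obtained from sign-flip permutations.
    """
    diff = np.asarray(cond_a) - np.asarray(cond_b)
    n = diff.shape[0]
    rng = np.random.default_rng(seed)

    def cluster_masses(d):
        t = d.mean(0) / (d.std(0, ddof=1) / np.sqrt(n))
        masses, run, sign = [], 0.0, 0
        for ti in t:
            s = 0 if abs(ti) <= t_crit else (1 if ti > 0 else -1)
            if s and s == sign:
                run += ti                      # extend same-sign cluster
            else:
                if run:
                    masses.append(run)         # close previous cluster
                run, sign = (ti, s) if s else (0.0, 0)
        if run:
            masses.append(run)
        return masses

    observed = cluster_masses(diff)
    # null distribution: largest |cluster mass| under random sign flips
    # (the published analysis also counts the observed data, 2,501 sets)
    null = np.empty(n_perm)
    for p in range(n_perm):
        flips = rng.choice([-1.0, 1.0], size=(n, 1))
        null[p] = max((abs(m) for m in cluster_masses(diff * flips)), default=0.0)
    p_values = [float((null >= abs(m)).mean()) for m in observed]
    return observed, p_values
```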
</sec>
<sec id="s2-5-3">
<title>Event-Related Spectral Perturbation (ERSP) Analyses</title>
<p>ERSP analyses were performed to examine theta (4&#x02013;7 Hz) and low gamma (25&#x02013;35 Hz) band activities elicited by the /&#x00279;&#x00251;/ and /w&#x00251;/ standard and deviant syllable stimuli. This approach was informed by prior work on speech syllable decoding (Ghitza, <xref ref-type="bibr" rid="B37">2011</xref>; Giraud and Poeppel, <xref ref-type="bibr" rid="B39">2012</xref>). ERSPs were computed from time-series data from 16 electrodes: F3/F4, F1/F2, FC3/FC4, FC1/FC2, C3/C4, C1/C2, CP3/CP4, CP1/CP2<xref ref-type="fn" rid="fn0008"><sup>8</sup></xref> (<xref ref-type="supplementary-material" rid="SM1">Supplementary Figure 1</xref>). Data were epoched from 0.6 s before stimulus onset to 1.6 s after. Estimates of spectral power for each of these EEG epochs were computed across 200 equally spaced time points along 100 frequency steps spanning 3&#x02013;50 Hz using Morlet wavelets with cycles gradually increasing with frequency (Delorme and Makeig, <xref ref-type="bibr" rid="B26">2004</xref>). ERSPs were created by converting spectral density estimates to log power, averaging across single trials, and subtracting the mean log power derived from the pre-stimulus baseline period of the same trials. The final output for each channel was a matrix of 100 frequency values (3&#x02013;50 Hz) by 200 time points (&#x02212;0.5 to 1 s).</p>
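<p>The ERSP computation described above (wavelet power, log transform, trial averaging, baseline subtraction) can be sketched for one channel as follows. This is an illustrative numpy sketch rather than the authors&#x02019; EEGLAB code; the cycle range and the dB floor are assumed parameter choices.</p>

```python
import numpy as np

def morlet(srate, freq, n_cycles):
    """Complex Morlet wavelet at one frequency."""
    sigma_t = n_cycles / (2 * np.pi * freq)
    t = np.arange(-4 * sigma_t, 4 * sigma_t, 1.0 / srate)
    return np.exp(2j * np.pi * freq * t) * np.exp(-t ** 2 / (2 * sigma_t ** 2))

def ersp(trials, srate, freqs, baseline_samples, cycles=(3, 10)):
    """ERSP for one channel: per-trial Morlet spectral power is converted
    to log power (dB), averaged across trials, and the mean log power of
    the pre-stimulus baseline is subtracted at each frequency.

    trials: (n_trials, n_samples); the number of wavelet cycles increases
    linearly from cycles[0] at the lowest frequency to cycles[1] at the
    highest, mirroring "cycles gradually increasing with frequency".
    """
    freqs = np.asarray(freqs, dtype=float)
    n_cycles = np.linspace(cycles[0], cycles[1], freqs.size)
    out = np.empty((freqs.size, trials.shape[1]))
    for i, (f, c) in enumerate(zip(freqs, n_cycles)):
        w = morlet(srate, f, c)
        power = np.stack([np.abs(np.convolve(tr, w, mode="same")) ** 2
                          for tr in trials])
        log_power = 10 * np.log10(power + 1e-20)     # dB; floor avoids log(0)
        mean_log = log_power.mean(axis=0)            # average across trials
        out[i] = mean_log - mean_log[:baseline_samples].mean()
    return out
```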
<p>It has been proposed that the decoding of auditory information during speech perception occurs on two distinct time scales&#x02014;one related to syllable-level processing (&#x0007E;200 ms) and one related to phoneme-level processing (&#x0007E;25 ms; Poeppel, <xref ref-type="bibr" rid="B78">2003</xref>; Ghitza, <xref ref-type="bibr" rid="B37">2011</xref>; Giraud and Poeppel, <xref ref-type="bibr" rid="B39">2012</xref>; Doelling et al., <xref ref-type="bibr" rid="B31">2014</xref>). As such, theta (4&#x02013;7 Hz) bandwidth responses were measured in one 200 ms window occurring 100&#x02013;300 ms post-syllable onset. Low gamma (25&#x02013;35 Hz) bandwidth responses were measured separately in five 50 ms windows occurring 50&#x02013;300 ms post-syllable onset.</p>
<p>For each participant, the magnitude of synchronized theta and gamma activity at each electrode was derived by averaging estimates of spectral power computed across frequency steps within each of these bandwidths and across time points within the selected time interval. Phoneme-related differences in theta and low gamma power were examined in separate Phoneme Type (/&#x00279;&#x00251;/, /w&#x00251;/) &#x000D7; Trial Type (Standard, Deviant) &#x000D7; Laterality (Left, Right) &#x000D7; Anterior-Posterior (4) &#x000D7; Electrode Laterality (Far, Close) repeated-measures ANOVAs.</p>
</sec>
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>Results</title>
<p>Only significant results for all analyses are reported.</p>
<sec id="s3-1">
<title>ERP Mean Amplitude Results</title>
<p>For both the /&#x00279;&#x00251;/ and /w&#x00251;/ syllables, the ERP waveforms elicited by the standard and deviant stimuli consisted of a P1 at ca. 75 ms, N1 at ca. 150 ms, P2 at ca. 225 ms, and N2 at ca. 350 ms (<xref ref-type="fig" rid="F2">Figure 2</xref>). In the same-stimulus identity difference waves, an MMN was visible in both the /&#x00279;&#x00251;/ and /w&#x00251;/ identity waveforms at ca. 200 ms; the /w&#x00251;/ MMN extended from ca. 100&#x02013;250 ms while the /&#x00279;&#x00251;/ MMN extended from ca. 175&#x02013;225 ms (<xref ref-type="fig" rid="F2">Figure 2</xref>).</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p>Event-related potential (ERP) waveforms elicited by the /w&#x00251;/ (left side) and /&#x00279;&#x00251;/ (right side) syllables in the ERP study. The deviant waveforms represent the neural responses when the deviant syllable was presented within a stream of the opposite syllable standards. Subtracting the standard syllable response from the deviant syllable response resulted in the identity difference waves. Note that negative is plotted <italic>up</italic> in all waveforms.</p></caption>
<graphic xlink:href="fnhum-15-585817-g0002.tif"/>
</fig>
<sec id="s3-1-1">
<title>Identity Difference Waves: MMN</title>
<p>Individual participants&#x02019; mean ERP responses to /&#x00279;&#x00251;/ and /w&#x00251;/ stimuli are presented in <xref ref-type="supplementary-material" rid="SM1">Supplementary Figure 2</xref>. During the 200&#x02013;250 ms time window, MMN responses elicited by /w&#x00251;/ were significantly more negative than those elicited by /&#x00279;&#x00251;/ (<italic>F</italic><sub>(1,14)</sub> = 5.479, <italic>p</italic> &#x0003C; 0.04, <italic>&#x003B7;</italic><sup>2</sup> = 0.281; <xref ref-type="fig" rid="F2">Figures 2</xref>, <xref ref-type="fig" rid="F3">3</xref>). During the 150&#x02013;200 ms time window, the overall magnitude of the MMN was larger over the left hemisphere, as compared to the right (<italic>F</italic><sub>(2,28)</sub> = 5.343, <italic>p</italic> &#x0003C; 0.02, <italic>&#x003B7;</italic><sup>2</sup> = 0.276; <xref ref-type="supplementary-material" rid="SM1">Supplementary Figure 3</xref>).</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p>Average mean amplitudes for standard and deviant ERPs (left side) and mismatch responses measured in identity difference waves (right side) across three time windows: (1) 100&#x02013;150 ms post-syllable onset; (2) 150&#x02013;200 ms post-syllable onset; and (3) 200&#x02013;250 ms post-syllable onset. Error bars represent SEM. Time Windows 1 and 2 broadly captured the auditory N1 response, while Time Window 3 captured the auditory P2 response. The Mismatch Negativity (MMN) was present in all three time windows. The /w&#x00251;/ deviants were significantly more negative than the /w&#x00251;/ standards during Window 3. The MMN responses elicited by /w&#x00251;/ were significantly more negative than those elicited by /&#x00279;&#x00251;/ during Window 3. *Significant effects.</p></caption>
<graphic xlink:href="fnhum-15-585817-g0003.tif"/>
</fig>
</sec>
<sec id="s3-1-2">
<title>Standard and Deviant Waveforms</title>
<p>The standard and deviant ERP responses elicited by /w&#x00251;/ were significantly more negative than those elicited by /&#x00279;&#x00251;/ during both the 150&#x02013;200 ms time window (<italic>F</italic><sub>(1,14)</sub> = 12.448, <italic>p</italic> &#x0003C; 0.004, <italic>&#x003B7;</italic><sup>2</sup> = 0.471) and the 200&#x02013;250 ms time window (<italic>F</italic><sub>(1,14)</sub> = 21.272, <italic>p</italic> &#x0003C; 0.0001, <italic>&#x003B7;</italic><sup>2</sup> = 0.603; <xref ref-type="fig" rid="F3">Figure 3</xref> and <xref ref-type="supplementary-material" rid="SM1">Supplementary Figure 4</xref>). Deviant trials elicited significantly more negative responses than did standard trials during the 150&#x02013;200 ms time window (<italic>F</italic><sub>(1,14)</sub> = 10.029, <italic>p</italic> &#x0003C; 0.008, <italic>&#x003B7;</italic><sup>2</sup> = 0.417).</p>
<p>A Phoneme Type &#x000D7; Trial Type interaction (<italic>F</italic><sub>(1,14)</sub> = 5.481, <italic>p</italic> &#x0003C; 0.04, <italic>&#x003B7;</italic><sup>2</sup> = 0.281) was observed during the 200&#x02013;250 ms time window (<xref ref-type="fig" rid="F3">Figure 3</xref>). Whereas the /&#x00279;&#x00251;/ standard and /&#x00279;&#x00251;/ deviant responses did not reliably differ, ERPs elicited by /w&#x00251;/ deviants were consistently more negative than those elicited by /w&#x00251;/ standards (<italic>F</italic><sub>(1,14)</sub> = 14.189, <italic>p</italic> &#x0003C; 0.003, <italic>&#x003B7;</italic><sup>2</sup> = 0.503).</p>
</sec>
<sec id="s3-1-3">
<title>ERP Summary</title>
<p>The FUL underspecification model predicts that an underspecified phoneme deviant presented within a stream of specified phoneme standards will elicit a large MMN response, as this situation creates a true feature mismatch context. The opposite stimulus presentation is predicted to elicit a small, or no, MMN response due to the feature no-match context. These hypotheses were supported: the underspecified /w&#x00251;/ stimuli elicited significantly larger and more negative responses than did the specified /&#x00279;&#x00251;/.</p>
</sec>
</sec>
<sec id="s3-2">
<title>Cluster Permutation Analysis Results</title>
<p>Four cluster-level mass permutation tests encompassing 0&#x02013;600 ms were applied to the standard and deviant syllable data. The results of the tests are displayed in raster diagrams in <xref ref-type="fig" rid="F4">Figures 4A&#x02013;C</xref>.</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p><bold>(A)</bold> Raster diagram illustrating differences between the /w&#x00251;/ deviants and standards, which extended from 132 ms post-syllable onset to the end of the analysis window (600 ms). <bold>(B)</bold> Raster diagram illustrating differences between /w&#x00251;/ standards and /&#x00279;&#x00251;/ standards; one cluster extended from 175 to 230 ms post-syllable onset and the second cluster extended from 243 to 343 ms post-syllable onset. <bold>(C)</bold> Raster diagram illustrating differences between /w&#x00251;/ deviants and /&#x00279;&#x00251;/ deviants, which extended from 136 to 253 ms post-syllable onset. There were no reliable clusters for the comparison of /&#x00279;&#x00251;/ deviants and standards. Note: in the raster diagrams, colored rectangles indicate electrodes/time points at which the ERPs to one stimulus differed significantly from those to another. The color scale indicates the magnitude of the <italic>t</italic>-scores, with darker red and blue colors corresponding to greater significance. Green areas indicate electrodes/time points at which no significant differences were found. Note that the electrodes are organized along the <italic>y</italic>-axis somewhat topographically. Electrodes on the left and right sides of the head are grouped at the figure&#x02019;s top and bottom, respectively; midline electrodes are shown in the middle. Within those three groupings, the <italic>y</italic>-axis top-to-bottom corresponds to scalp anterior-to-posterior.</p></caption>
<graphic xlink:href="fnhum-15-585817-g0004.tif"/>
</fig>
<p>No reliable clusters were identified when examining the difference between /&#x00279;&#x00251;/ standards and /&#x00279;&#x00251;/ deviants. On the other hand, one broadly distributed cluster extending from 132 to 600 ms signified a period during which the /w&#x00251;/ deviants elicited more negative ERP responses than the /w&#x00251;/ standards; the smallest significant <italic>t</italic>-score (in absolute value) was <italic>t</italic><sub>(14)</sub> = &#x02212;2.159, <italic>p</italic> &#x0003C; 0.0001 (<xref ref-type="fig" rid="F4">Figure 4A</xref>).</p>
<p>When contrasting the standard syllables, two broadly distributed clusters extending from 175 to 230 ms and 242 to 343 ms signified two time periods during which the /&#x00279;&#x00251;/ standards differed from the /w&#x00251;/ standards (<xref ref-type="fig" rid="F4">Figure 4B</xref>); the smallest significant <italic>t</italic>-score was: <italic>t</italic><sub>(14)</sub> = 2.149, <italic>p</italic> &#x0003C; 0.05. When the /&#x00279;&#x00251;/ and /w&#x00251;/ deviant syllables were contrasted, a broadly distributed cluster extending from 136 to 253 ms signified a time period during which the /w&#x00251;/ deviants elicited more negative (i.e., larger) ERP responses than the /&#x00279;&#x00251;/ deviants (<xref ref-type="fig" rid="F4">Figure 4C</xref>); the smallest significant <italic>t</italic>-score was: <italic>t</italic><sub>(14)</sub> = 2.158, <italic>p</italic> &#x0003C; 0.005.</p>
<sec id="s3-2-1">
<title>Cluster Permutation Analysis Summary</title>
<p>FUL predicts that a larger MMN will be elicited by an underspecified phoneme, as compared to a specified phoneme. Consistent with the ERP analyses, this prediction was confirmed. The MMN appeared in the difference waveforms between 100 and 300 ms post-syllable onset. The effects seen in the /w&#x00251;/ stimuli extended far beyond the traditional timeline of the MMN<xref ref-type="fn" rid="fn0009"><sup>9</sup></xref>. This result was unexpected. As no phoneme type differences were observed in this late time range in the standard trial and deviant trial analyses, the extended effect appears to be specific to the contrast of the /w&#x00251;/ standard and deviant trials. Visual analysis of the cluster permutation results (<xref ref-type="fig" rid="F4">Figure 4A</xref>) suggests that there were potentially three parts to the /w&#x00251;/ effect: &#x0007E;132&#x02013;275, &#x0007E;300&#x02013;400, and &#x0007E;400&#x02013;600 ms. Thus, the first part could be attributed to the MMN, the second part could represent the deviance-related or novelty N2 (Folstein and Van Petten, <xref ref-type="bibr" rid="B35">2008</xref>), and the third part could be attributed to a late MMN or Late Negativity (LN)<xref ref-type="fn" rid="fn0010"><sup>10</sup></xref>. Previous studies have identified the LN as a secondary index of speech perception and discrimination (Korpilahti et al., <xref ref-type="bibr" rid="B54">1995</xref>; &#x0010C;eponien&#x00117; et al., <xref ref-type="bibr" rid="B11">1998</xref>; Cheour et al., <xref ref-type="bibr" rid="B12">1998</xref>; Shafer et al., <xref ref-type="bibr" rid="B86">2005</xref>; Datta et al., <xref ref-type="bibr" rid="B22">2010</xref>; Hestvik and Durvasula, <xref ref-type="bibr" rid="B47">2016</xref>). However, there is currently insufficient information in the underspecification literature to further interpret this finding.</p>
<p>Consistent with the ERP analyses, phoneme type differences in the standard and deviant trials were observed. The two clusters in the analysis of standard trials were consistent with the auditory N1 and P2 ERP responses. That is, in the first period, the /w&#x00251;/ standards elicited a larger auditory N1 than did the /&#x00279;&#x00251;/ standards. During the second period, the /&#x00279;&#x00251;/ standards elicited a larger auditory P2 than did the /w&#x00251;/ standards. Similarly, in the analysis of deviant trials, the identified cluster almost exclusively encompassed the auditory N1 ERP response. As the MMN is derived from the subtraction of the standard stimulus from the deviant stimulus, the responses elicited by /w&#x00251;/ are consistent with the prediction that the underspecified phoneme should elicit larger (i.e., more negative) responses than the more specified /&#x00279;&#x00251;/. Thus, the cluster permutation analyses provide converging evidence for the underspecification of /w&#x00251;/.</p>
</sec>
</sec>
<sec id="s3-3">
<title>ERSP Results</title>
<sec id="s3-3-1">
<title>Theta Band (4&#x02013;7 Hz) 100&#x02013;300 ms</title>
<p>Individual participants&#x02019; mean theta band responses to /&#x00279;&#x00251;/ and /w&#x00251;/ standards and deviants are presented in <xref ref-type="supplementary-material" rid="SM1">Supplementary Figure 5</xref>. Theta responses elicited by /&#x00279;&#x00251;/ were significantly greater than those elicited by /w&#x00251;/ (<italic>F</italic><sub>(1,14)</sub> = 4.571, <italic>p</italic> = 0.05, <italic>&#x003B7;</italic><sup>2</sup> = 0.246; <xref ref-type="fig" rid="F5">Figures 5</xref>, <xref ref-type="fig" rid="F6">6</xref>). A significant electrode laterality effect was found (<italic>F</italic><sub>(1,14)</sub> = 14.053, <italic>p</italic> &#x0003C; 0.003, <italic>&#x003B7;</italic><sup>2</sup> = 0.501), as the electrodes closer to the midline (1- and 2-level electrodes; <italic>M</italic> = 0.235, SEM = 0.059) elicited greater theta activity than did the far lateral electrodes (3- and 4-level electrodes; <italic>M</italic> = 0.140, SEM = 0.043).</p>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p>Event-related spectral perturbation (ERSP) activation patterns (in dB) elicited by /w&#x00251;/ and /&#x00279;&#x00251;/ in the standard and deviant stimuli averaged across the eight left hemisphere electrodes and eight right hemisphere electrodes for theta (4&#x02013;7 Hz) and low gamma (25&#x02013;35 Hz) bandwidths. Time is on the <italic>x</italic>-axis and frequency is on the <italic>y</italic>-axis. Theta band window of interest is highlighted by the solid black box while the low gamma band window of interest is highlighted by the dashed box. Overall, /&#x00279;&#x00251;/ elicited greater neural synchrony (i.e., more activation) in the theta band than did /w&#x00251;/. The /&#x00279;&#x00251;/ deviant elicited less neural synchrony over the left hemisphere, as compared to the right, in the low gamma band.</p></caption>
<graphic xlink:href="fnhum-15-585817-g0005.tif"/>
</fig>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption><p>ERSP activation (in dB) elicited by /w&#x00251;/ and /&#x00279;&#x00251;/ for the theta (4&#x02013;7 Hz) bandwidth in the 100&#x02013;300 ms time window. The /&#x00279;&#x00251;/ elicited greater neural synchrony (i.e., more activation) in the theta band than did /w&#x00251;/. The electrodes closer to midline (e.g., F1 and F2) elicited greater theta activation than did the electrodes further away from midline (e.g., F3 and F4). *Significant effects.</p></caption>
<graphic xlink:href="fnhum-15-585817-g0006.tif"/>
</fig>
</sec>
<sec id="s3-3-2">
<title>Low Gamma Band (25&#x02013;35 Hz) 50&#x02013;300 ms</title>
<p>Individual participants&#x02019; mean low gamma band responses to /&#x00279;&#x00251;/ and /w&#x00251;/ standards and deviants are presented in <xref ref-type="supplementary-material" rid="SM1">Supplementary Figure 6</xref>. Low gamma activation varied across variables and time windows (<xref ref-type="fig" rid="F5">Figures 5</xref>, <xref ref-type="fig" rid="F7">7</xref>). The laterality of low gamma activation patterns changed over time, as significantly less gamma band activation was found across left hemisphere electrodes as compared to right hemisphere electrodes from 50 to 100 ms (Left: <italic>M</italic> = &#x02212;0.028, SEM = 0.023; Right: <italic>M</italic> = 0.011, SEM = 0.016; <italic>F</italic><sub>(1,14)</sub> = 5.042, <italic>p</italic> &#x0003C; 0.05, <italic>&#x003B7;</italic><sup>2</sup> = 0.265) and from 100 to 150 ms (Left: <italic>M</italic> = &#x02212;0.042, SEM = 0.028; Right: <italic>M</italic> = 0.012, SEM = 0.021; <italic>F</italic><sub>(1,14)</sub> = 6.030, <italic>p</italic> &#x0003C; 0.03, <italic>&#x003B7;</italic><sup>2</sup> = 0.301).</p>
<fig id="F7" position="float">
<label>Figure 7</label>
<caption><p>Event-related spectral perturbation activation (in dB) elicited by /w&#x00251;/ and /&#x00279;&#x00251;/ for the low gamma (25&#x02013;35 Hz) bandwidth across five 50 ms time windows: 50&#x02013;100 ms, 100&#x02013;150 ms, 150&#x02013;200 ms, 200&#x02013;250 ms, and 250&#x02013;300 ms. The /&#x00279;&#x00251;/ deviants elicited significantly less low gamma neural synchrony over the left hemisphere, as compared to the right. *Significant effects.</p></caption>
<graphic xlink:href="fnhum-15-585817-g0007.tif"/>
</fig>
<p>The trial type &#x000D7; laterality interaction was significant from 50 to 100 ms (<italic>F</italic><sub>(1,14)</sub> = 6.019, <italic>p</italic> &#x0003C; 0.03, <italic>&#x003B7;</italic><sup>2</sup> = 0.301) and 100 to 150 ms (<italic>F</italic><sub>(1,14)</sub> = 7.589, <italic>p</italic> &#x0003C; 0.02, <italic>&#x003B7;</italic><sup>2</sup> = 0.352). The deviants elicited significantly less gamma activation over the left hemisphere than over the right during the 50&#x02013;100 ms window (<italic>F</italic><sub>(1,14)</sub> = 6.152, <italic>p</italic> &#x0003C; 0.03, <italic>&#x003B7;</italic><sup>2</sup> = 0.305) and 100&#x02013;150 ms window (<italic>F</italic><sub>(1,14)</sub> = 7.434, <italic>p</italic> &#x0003C; 0.02, <italic>&#x003B7;</italic><sup>2</sup> = 0.348), while no laterality difference was observed for the standards. This effect was driven primarily by the /&#x00279;&#x00251;/ deviant responses elicited over the left hemisphere. Specifically, low gamma activation elicited by /&#x00279;&#x00251;/ over the left hemisphere was significantly less than low gamma activation recorded over the right hemisphere during the 100&#x02013;150 ms window (<italic>F</italic><sub>(1,14)</sub> = 5.575, <italic>p</italic> &#x0003C; 0.04, <italic>&#x003B7;</italic><sup>2</sup> = 0.285; <xref ref-type="fig" rid="F7">Figure 7</xref>). No laterality differences were noted for the /w&#x00251;/ deviant responses. Moreover, there was a strong trend for the /&#x00279;&#x00251;/ deviants to elicit less low gamma activation than the /w&#x00251;/ deviants over the left hemisphere during both the 50&#x02013;100 ms (<italic>F</italic><sub>(1,14)</sub> = 2.899, <italic>p</italic> &#x0003C; 0.11, <italic>&#x003B7;</italic><sup>2</sup> = 0.172) and 100&#x02013;150 ms windows (<italic>F</italic><sub>(1,14)</sub> = 3.063, <italic>p</italic> &#x0003C; 0.10, <italic>&#x003B7;</italic><sup>2</sup> = 0.180); no phoneme differences were noted over the right hemisphere.</p>
</sec>
<sec id="s3-3-3">
<title>ERSP Summary</title>
<p>The ERSP analyses were exploratory, as previous underspecification work has not addressed this aspect of phonological processing. Thus, the findings are preliminary. Theta band activation was examined as a measure of syllable-level processing, while low gamma band activation was examined as a measure of phoneme-level processing. At the syllable level, /&#x00279;&#x00251;/ elicited greater theta activation than did /w&#x00251;/. At the phoneme level, the /&#x00279;&#x00251;/ deviants elicited significantly less low gamma activation over the left hemisphere than over the right, and tended to elicit less left-hemisphere low gamma activation than the /w&#x00251;/ deviants.</p>
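<p>For illustration, band-limited ERSP values of the kind summarized here can be computed by convolving single-channel epochs with complex Morlet wavelets and expressing power in dB relative to a pre-stimulus baseline. The sketch below is a generic illustration with assumed parameters (sampling rate, frequencies, baseline window); it is not the processing pipeline used in this study:</p>

```python
import numpy as np

def morlet_power(signal, sfreq, freqs, n_cycles=5):
    """Time-frequency power via complex Morlet wavelet convolution.

    signal: 1-D epoch (n_times,); sfreq: sampling rate in Hz;
    freqs: frequencies of interest in Hz. Returns (n_freqs, n_times) power.
    """
    power = np.empty((len(freqs), signal.size))
    for i, f in enumerate(freqs):
        sigma_t = n_cycles / (2 * np.pi * f)              # wavelet width in seconds
        t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / sfreq)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))  # unit-energy normalization
        power[i] = np.abs(np.convolve(signal, wavelet, mode="same")) ** 2
    return power

def ersp_db(power, times, baseline=(-0.2, 0.0)):
    """Event-related spectral perturbation in dB relative to a baseline window."""
    mask = (times >= baseline[0]) & (times < baseline[1])
    base = power[:, mask].mean(axis=1, keepdims=True)
    return 10 * np.log10(power / base)
```

<p>Averaging the output rows spanning the theta or low gamma range would give band-limited ERSP estimates of the general kind compared in this section.</p>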
</sec>
</sec>
</sec>
<sec sec-type="discussion" id="s4">
<title>Discussion</title>
<p>This study provides the first neural evidence for [consonantal] underspecification in English-speaking adults. Two phonemes differing in their specification of the [consonantal] feature were contrasted: /&#x00279;/ and /w/. As /w/ is not specified for [consonantal] while /&#x00279;/ is, it was hypothesized that asymmetrical speech processing differences would be apparent. Indeed, mean amplitude measurements and cluster permutation analyses both showed that /w&#x00251;/, as an oddball in a sequence of /&#x00279;&#x00251;/, elicited significantly larger MMN responses than did the reciprocal stimulus set&#x02014;namely, /&#x00279;&#x00251;/ oddballs embedded within frequently occurring instances of /w&#x00251;/. Characterizing the theta and low gamma band neural oscillation patterns provided further evidence for underspecification. The more specified /&#x00279;&#x00251;/ elicited increased activation, or neural synchrony, in the theta bandwidth as compared to /w&#x00251;/. Moreover, the /&#x00279;&#x00251;/ deviants elicited less low gamma activation over the left hemisphere, as compared to the right hemisphere. As neural oscillation patterns have not previously been discussed concerning underspecification, these ERSP analyses identified potentially new indices of phonological underspecification.</p>
<sec id="s4-1">
<title>ERP Evidence For [Consonantal] Underspecification</title>
<p>Consistent with previous reports of phonological underspecification (Diesch and Luce, <xref ref-type="bibr" rid="B28">1997</xref>; Eulitz and Lahiri, <xref ref-type="bibr" rid="B34">2004</xref>; Cornell et al., <xref ref-type="bibr" rid="B18">2011</xref>, <xref ref-type="bibr" rid="B19">2013</xref>; Scharinger et al., <xref ref-type="bibr" rid="B83">2012</xref>; Schluter et al., <xref ref-type="bibr" rid="B84">2016</xref>; Cummings et al., <xref ref-type="bibr" rid="B20">2017</xref>), ERP evidence for underspecification was observed here. The underspecified /w&#x00251;/ elicited larger neural responses than did the more specified /&#x00279;&#x00251;/. Moreover, the cluster permutation analyses identified a significant difference between the /w&#x00251;/ standards and deviants, indicative of a reliable MMN response, whereas no significant difference was observed between the /&#x00279;&#x00251;/ standards and deviants. Thus, the /w&#x00251;/ deviant response (elicited within the /&#x00279;&#x00251;/ standard stream) appeared to drive the phoneme underspecification differences. These findings were consistent with the underspecification logic of FUL (Eulitz and Lahiri, <xref ref-type="bibr" rid="B34">2004</xref>), which predicts that an underspecified phoneme deviant (i.e., /w&#x00251;/) presented within a stream of specified phoneme standards (i.e., /&#x00279;&#x00251;/) will elicit a large mismatch response due to the contrast in [consonantal] feature specification.</p>
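<p>As an illustration of the cluster permutation logic referenced above, the sketch below implements a simplified one-dimensional cluster-based permutation test for paired conditions (e.g., deviant vs. standard waveforms at one channel). All parameters, including the cluster-forming threshold (here the two-tailed critical <italic>t</italic> for 14 degrees of freedom) and the permutation count, are illustrative assumptions rather than the settings used in this study:</p>

```python
import numpy as np

def cluster_permutation_test(cond_a, cond_b, n_perm=1000, t_thresh=2.145, seed=0):
    """Simplified 1-D cluster-based permutation test for paired data.

    cond_a, cond_b: (n_subjects, n_times) arrays of per-subject waveforms.
    Clusters are contiguous runs of |t| above t_thresh; cluster mass is the
    summed |t| within a run. The largest observed cluster mass is assessed
    against a sign-flipping permutation null distribution.
    """
    rng = np.random.default_rng(seed)
    diff = np.asarray(cond_a) - np.asarray(cond_b)
    n_sub = diff.shape[0]

    def max_cluster_mass(d):
        t = d.mean(axis=0) / (d.std(axis=0, ddof=1) / np.sqrt(n_sub))
        best = run = 0.0
        for t_val in t:
            run = run + abs(t_val) if abs(t_val) > t_thresh else 0.0
            best = max(best, run)
        return best

    observed = max_cluster_mass(diff)
    null = np.array([max_cluster_mass(diff * rng.choice([-1.0, 1.0], (n_sub, 1)))
                     for _ in range(n_perm)])
    # Monte Carlo p-value with the +1 correction
    p_value = (np.sum(null >= observed) + 1) / (n_perm + 1)
    return observed, p_value
```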
<p>While [consonantal] was the most obvious feature differentiating /&#x00279;/ and /w/, these two phonemes also differ in place of articulation, with /&#x00279;/ characterized as [coronal: +distributed] and /w/ as [labial] (Clements and Hume, <xref ref-type="bibr" rid="B15">1995</xref>; <xref ref-type="fig" rid="F1">Figure 1</xref>). Given previous investigations of [coronal] underspecification (Eulitz and Lahiri, <xref ref-type="bibr" rid="B34">2004</xref>; Cornell et al., <xref ref-type="bibr" rid="B18">2011</xref>, <xref ref-type="bibr" rid="B19">2013</xref>; Scharinger et al., <xref ref-type="bibr" rid="B83">2012</xref>; Cummings et al., <xref ref-type="bibr" rid="B20">2017</xref>), it is possible that the place of articulation of these phonemes affected the neural response patterns.</p>
<p>Based on previous FUL work, /&#x00279;/ could arguably have constituted the underspecified phoneme due to its [coronal] place. However, [coronal] also has the assigned daughter [+distributed] within the Clements and Hume (<xref ref-type="bibr" rid="B15">1995</xref>) model<xref ref-type="fn" rid="fn0011"><sup>11</sup></xref>. This dependent feature sits on a lower level of the feature tree than /w/&#x02019;s [labial]. As features lower on the tree are more specified than those higher up (Core, <xref ref-type="bibr" rid="B17">1997</xref>), in the Clements and Hume (<xref ref-type="bibr" rid="B15">1995</xref>) model /&#x00279;/ is more specified both in place of articulation [coronal: +distributed] and in manner of articulation [+consonantal]. If the [labial] of /w/ were considered underspecified relative to the [coronal: +distributed] of /&#x00279;/, this would be contrary to all previous work proposing [coronal] underspecification. As a result, it is hypothesized that place of articulation was not the target feature contrast of /&#x00279;/ and /w/. However, the multiple underspecified features in /w/ (i.e., [&#x02013;consonantal] and [labial]) make it unclear exactly which feature drove the observed MMN asymmetry.</p>
<p>Additional studies contrasting liquid and glide phonemes are necessary to further test [consonantal] underspecification. Since /&#x00279;/ and /w/ differ not only in [consonantal] but also in place of articulation, a contrast that varies only [consonantal] is needed. Such a contrast is possible with /l/ and /j/. Both phonemes are [coronal] in nature, so their main feature distinction is [consonantal], with /l/ being [+consonantal] ([&#x02212;vocoid]) and /j/ [&#x02013;consonantal] ([+vocoid]). Importantly, similar to /&#x00279;/, prevocalic /l/ often undergoes the phonological process of liquid gliding during typical and atypical phonological development, with /l/ replaced by [j] and/or [w]. For example, young children commonly produce &#x0201C;like&#x0201D; (i.e., /l&#x00251;&#x0026A;k/) as [j&#x00251;&#x0026A;k]. Thus, both liquid phonemes in American English are commonly observed to undergo liquid gliding during phonological development. These developmental and clinical observations provide additional evidence that both American English glides, /w/ and /j/, are underspecified as compared to the American English liquids /&#x00279;/ and /l/. Replication of the present study with /l/ and /j/ would provide important converging evidence for the underspecification of [consonantal] in glide phonemes.</p>
</sec>
<sec id="s4-2">
<title>ERSP Evidence For [Consonantal] Underspecification</title>
<p>Since EEG neural oscillation patterns drive ERP responses, it was hypothesized that they could serve as additional indices of phonological underspecification. Exploratory analyses were conducted to examine whether specified and underspecified phonemes elicited distinct patterns of neural activity. Indeed, significant differences in neural oscillation patterns were elicited by /&#x00279;&#x00251;/ and /w&#x00251;/. In the theta band, /&#x00279;&#x00251;/ elicited more spectral power than did /w&#x00251;/. It has been proposed that inherent, resting-state oscillations in the primary auditory cortex undergo phase resetting, particularly in the theta range, in response to speech stimuli (Ghitza, <xref ref-type="bibr" rid="B37">2011</xref>; Giraud and Poeppel, <xref ref-type="bibr" rid="B39">2012</xref>). Thus, the enhanced theta activity between 100 and 300 ms in response to /&#x00279;&#x00251;/ relative to /w&#x00251;/ likely reflects the impact of specification on this phase resetting process. Within the FUL framework, a more specified phoneme contains more exact phonetic feature information in its phonological representation, which may drive more precise theta phase-locking to the presentation of /&#x00279;&#x00251;/ syllables and yield a stronger evoked response than an underspecified phoneme that lacks the same degree of robust featural specification.</p>
<p>As theta activities are proposed to capture syllable-level processing (Ghitza, <xref ref-type="bibr" rid="B37">2011</xref>; Giraud and Poeppel, <xref ref-type="bibr" rid="B39">2012</xref>), a secondary interpretation of the theta band results is acoustic in nature. That is, /&#x00279;&#x00251;/ may have been acoustically more distinct, with a clearer syllable onset boundary, than /w&#x00251;/. As the sharpness of a syllable&#x02019;s acoustic edges affects how easily the stimulus can be parsed into chunks (Prendergast et al., <xref ref-type="bibr" rid="B79">2010</xref>; Ding and Simon, <xref ref-type="bibr" rid="B30">2014</xref>; Doelling et al., <xref ref-type="bibr" rid="B31">2014</xref>), /&#x00279;&#x00251;/ could elicit greater theta neural synchrony than /w&#x00251;/. To explain further, /&#x00279;/ is a more preferable syllable onset consonant than /w/ due to the sounds&#x02019; sonority differences (Clements, <xref ref-type="bibr" rid="B14">1990</xref>). Specifically, listeners prefer syllables with strong consonant onsets that are clearly differentiated from the vowel nucleus (e.g., the Head Law; Vennemann, <xref ref-type="bibr" rid="B93">1988</xref>). Since /w/ is nearly as sonorous as vowels, it does not provide a clearly differentiated onset; thus, syllable-initial /&#x00279;/ is preferred over /w/ cross-linguistically (Dziubalska-Ko&#x00142;aczyk, <xref ref-type="bibr" rid="B32">2001</xref>). This acoustic interpretation is still consistent with the idea of feature specification, as the [consonantal] aspect of /&#x00279;/ is what arguably makes it a stronger syllable onset than /w/. Thus, while /w/ can function as a syllable onset (Bernhardt and Stoel-Gammon, <xref ref-type="bibr" rid="B5">1994</xref>), /&#x00279;&#x00251;/ is a better-formed syllable than /w&#x00251;/ because of its specified [consonantal] feature.</p>
<p>While theta band activity has been correlated with syllable-level processing, the low gamma band has been correlated with more rapid information sampling, analysis, and decoding (Poeppel, <xref ref-type="bibr" rid="B78">2003</xref>; Ghitza, <xref ref-type="bibr" rid="B37">2011</xref>; Giraud and Poeppel, <xref ref-type="bibr" rid="B39">2012</xref>), likely linked to the binding of the different acoustic features needed to derive phonological representations from incoming speech signals. Notably, low gamma band responses have not been consistently observed in auditory paradigms (Luo and Poeppel, <xref ref-type="bibr" rid="B60">2007</xref>; Howard and Poeppel, <xref ref-type="bibr" rid="B49">2010</xref>; Luo et al., <xref ref-type="bibr" rid="B62">2010</xref>), potentially due to the stimuli used (Luo and Poeppel, <xref ref-type="bibr" rid="B61">2012</xref>). The present study provided an ideal situation for eliciting distinguishing gamma responses, as [consonantal] was the contrasting feature between the phonemes.</p>
<p>Our findings revealed less low gamma activation over the left hemisphere as compared to the right hemisphere overall. Specifically, the low gamma activation in response to the /&#x00279;&#x00251;/ deviants was reliably less over the left hemisphere, as compared to the right, whereas the /w&#x00251;/ deviants did not elicit laterality differences. Moreover, /w&#x00251;/ elicited greater low gamma activation over the left hemisphere as compared to /&#x00279;&#x00251;/, while no phoneme differences were observed over the right hemisphere. Thus, /&#x00279;&#x00251;/ appeared to elicit a distinct pattern of activation over the left hemisphere. Interpreting this finding is challenging, given the lack of prior work on oscillatory indices of underspecification. However, a general interpretation could parallel that of the MMN results: namely, /w&#x00251;/ elicited greater low gamma activation over the left hemisphere due to its underspecified nature. Future studies will need to continue to test the relationship between underspecification and low gamma activation.</p>
</sec>
<sec id="s4-3">
<title>Alternative Interpretations and Study Limitations</title>
<p>While the data in the present study provide evidence of [consonantal] underspecification, other interpretations are possible. For example, a memory/usage-based account of language (UBA; Pierrehumbert, <xref ref-type="bibr" rid="B77">2006</xref>; Bybee, <xref ref-type="bibr" rid="B10">2010</xref>) addresses how the neighborhood density of phonemes affects processing. That is, the larger a phoneme&#x02019;s phonological neighborhood, the more difficult it is to identify and differentiate a specific phoneme from others within the neighborhood. Within UBA, the [+consonantal] category contains many more consonants (21: /p b t d k g f v &#x003B8; &#x000F0; s z &#x00283; &#x00292; t&#x00283; d&#x00292; m n &#x0014B; l &#x00279;/) than does the unspecified [&#x02013;consonantal] category (3: /w j h/). Thus, /&#x00279;/ has a denser phonological neighborhood than does /w/. When considering MMN responses in the context where /&#x00279;&#x00251;/ is the standard and /w&#x00251;/ is the deviant, UBA would predict that the large phonological neighborhood of /&#x00279;/ would negatively impact the system&#x02019;s ability to create a strong feature prediction of [+consonantal]. Without clear feature specification, this situation should result in no mismatch and a small or absent MMN elicited by the /w&#x00251;/ deviant. Conversely, in the context where /w&#x00251;/ is the standard and /&#x00279;&#x00251;/ is the deviant, UBA would predict that the small phonological neighborhood of /w/ would allow the system to establish a strong feature prediction. This should result in a true mismatch and a large MMN elicited by the /&#x00279;&#x00251;/ deviant. However, neither of these proposed results was observed in the present study. Instead, the exact opposite MMN response patterns were observed. Thus, it does not appear that UBA can account for the present study&#x02019;s findings.</p>
<p>The frequency of occurrence of sounds in the ambient language environment could have unintentionally affected the MMN responses observed in the present study. Specifically, the MMN can reflect the phonotactic probability of phoneme combinations (Bonte et al., <xref ref-type="bibr" rid="B6">2005</xref>; N&#x000E4;&#x000E4;t&#x000E4;nen et al., <xref ref-type="bibr" rid="B71">2007</xref>). That is, the statistical regularity of sound combinations in a language can modulate the size of the MMN response. For example, nonwords with high phonotactic probability have been found to elicit larger MMN responses than nonwords with low phonotactic probability (Bonte et al., <xref ref-type="bibr" rid="B6">2005</xref>). Bonte et al. (<xref ref-type="bibr" rid="B6">2005</xref>) suggested that the frequent co-occurrence of certain phoneme combinations could result in enhanced auditory cortical responses. In the present study, the phonotactic probability of the /&#x00279;&#x00251;/ syllable in English was greater than that of /w&#x00251;/ (<xref ref-type="table" rid="T1">Table 1</xref>). Thus, following the results of Bonte et al. (<xref ref-type="bibr" rid="B6">2005</xref>), the more frequently occurring /&#x00279;&#x00251;/ should have elicited a larger MMN than did /w&#x00251;/, which was not observed. The same general argument could be made for the frequency of occurrence of single phonemes, with /&#x00279;/ occurring much more frequently in English than /w/ (<xref ref-type="table" rid="T1">Table 1</xref>). However, again, the high-frequency /&#x00279;/ did not elicit larger MMN responses than did the less commonly occurring /w/. While the findings of the present study do not appear to be driven by the frequency of occurrence of the phonemes, this will remain a possible interpretation until the prediction is directly tested. Fully crossed stimulus sets with similar individual phoneme and syllable phonotactic probabilities should be used to elicit responses from high- and low-frequency phonemes and syllables.</p>
<p>The present study included identity difference waves to control for basic differences in acoustic detail present in the /&#x00279;&#x00251;/ and /w&#x00251;/ stimuli. However, it is still a possibility that the study design and/or stimuli did not test phonological representations, but rather tested the phonetic differences between the stimuli. It has been suggested that a single-standard MMN experiment can only capture the phonetic differences between speech sounds. That is, if the standards are not varied, the established memory trace is based on the consistent phonetic makeup of the standard. It has been argued that a variable-standards MMN experimental design (e.g., /t/ produced with multiple voice onset time allophones) is necessary instead to establish a true phonemic MMN (Phillips et al., <xref ref-type="bibr" rid="B75">2000</xref>; Hestvik and Durvasula, <xref ref-type="bibr" rid="B47">2016</xref>). For example, Hestvik and Durvasula (<xref ref-type="bibr" rid="B47">2016</xref>) only observed an underspecification MMN asymmetry using a variable-standards paradigm; symmetrical MMN responses were elicited with a single-standards paradigm.</p>
<p>While the possibility remains that the present study only captured phonetic differences between /&#x00279;&#x00251;/ and /w&#x00251;/, the data suggest that the phonological level of representation was tested. The previous MMN studies accessing phonological representations only used a single deviant within their multiple-standard presentations (Phillips et al., <xref ref-type="bibr" rid="B75">2000</xref>; Hestvik and Durvasula, <xref ref-type="bibr" rid="B47">2016</xref>). Although the present study used a single-standard paradigm, it did incorporate three phoneme deviants. The three deviants were included to maximize the MMN responses. That is, the response to a deviant is reduced not only when it is preceded by itself, but also when it is preceded by other similar stimuli (Sams et al., <xref ref-type="bibr" rid="B81">1984</xref>; N&#x000E4;&#x000E4;t&#x000E4;nen et al., <xref ref-type="bibr" rid="B71">2007</xref>; Symonds et al., <xref ref-type="bibr" rid="B91">2017</xref>). However, this reduction in MMN amplitude is lessened if the second of two successive deviants differs from the standards in a different attribute/feature than the first deviant (Nousak et al., <xref ref-type="bibr" rid="B72">1996</xref>; M&#x000FC;ller et al., <xref ref-type="bibr" rid="B68">2005</xref>; N&#x000E4;&#x000E4;t&#x000E4;nen et al., <xref ref-type="bibr" rid="B71">2007</xref>). The two unused deviants in the present study, /b&#x00251;/ and /d&#x00251;/, were chosen in part because they were phonetically distinct from /&#x00279;/ and /w/. Thus, the presentation of multiple deviants, and the phonetic distinctiveness of the stimuli, could have allowed for phonological categorization to occur. Indeed, unlike Hestvik and Durvasula (<xref ref-type="bibr" rid="B47">2016</xref>), asymmetrical MMN responses were found in the present study, indicative of phonological-level processing.</p>
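<p>The design constraint described above, in which successive deviants are avoided because a deviant&#x02019;s MMN is reduced when it follows another deviant, can be sketched as a simple oddball sequence generator. The syllable labels, trial count, and deviant probability below are hypothetical placeholders, not the study&#x02019;s actual presentation parameters:</p>

```python
import random

def oddball_sequence(n_trials=800, standard="ra", deviants=("wa", "ba", "da"),
                     p_deviant=0.15, min_standards_between=1, seed=0):
    """Pseudorandom oddball sequence in which deviants never occur back-to-back."""
    rng = random.Random(seed)
    seq, standards_since_deviant = [], min_standards_between
    for _ in range(n_trials):
        if standards_since_deviant >= min_standards_between and rng.random() < p_deviant:
            seq.append(rng.choice(deviants))  # draw one of the deviant types
            standards_since_deviant = 0
        else:
            seq.append(standard)
            standards_since_deviant += 1
    return seq
```

<p>With these placeholder settings, roughly 13% of trials end up as deviants, and every deviant is preceded by at least one standard.</p>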
<p>A basic stimulus difference could also explain why a phonological mismatch asymmetry was elicited, rather than the symmetrical phonetic mismatch response predicted by previous studies. That is, previous studies used synthetic speech, while the present study used naturally-produced syllables. The acoustic-phonetic structure of synthetic speech conveys less information (per unit of time) than that of natural speech (Nusbaum and Pisoni, <xref ref-type="bibr" rid="B73">1985</xref>). As a result, synthetic speech is considered to be perceptually impoverished as compared to natural speech because basic acoustic-phonetic cues are obscured, masked, or physically degraded in some way. Natural speech is highly redundant at the level of acoustic-phonetic structure, with many acoustic cues being present in the signal. As limited acoustic information is present in synthetic speech, some phonetic feature distinctions are minimally cued. This means that a single cue presented within a single synthetic stimulus might not be enough to convey a particular level of feature distinction. As a result, multiple different tokens of a synthetic phoneme might need to be presented to fully establish a phonemic category. This hypothesis is supported by the results of the previous studies (Phillips et al., <xref ref-type="bibr" rid="B75">2000</xref>; Hestvik and Durvasula, <xref ref-type="bibr" rid="B47">2016</xref>). Alternatively, the spectral variation and redundancy found in the naturally produced speech tokens of the present study might have been enough to accurately establish phonemic categories.</p>
<p>Thus, the naturally produced standard and deviants in the present study could have allowed for phonological categorization of all the stimuli, much like the variable standard presentation of synthetic speech did in previous studies. That said, it is still a possibility that the memory trace tested in the present study was a detailed acoustic/phonetic representation rather than a phonemic representation. Future studies that systematically vary the phonetic allophonic productions and phonemic categories of both standards and deviants are needed to address how best to access phonological representations. Additional studies contrasting synthetic and naturally-produced speech will also provide information regarding how specified and unspecified features are stored and accessed.</p>
<p>As discussed previously concerning theta band activities, it is possible that the acoustic differences between /&#x00279;&#x00251;/ and /w&#x00251;/ alone were responsible for the observed MMN response asymmetry. That is, the intrinsic physical differences between stimuli could elicit different MMN response patterns (N&#x000E4;&#x000E4;t&#x000E4;nen et al., <xref ref-type="bibr" rid="B71">2007</xref>). For example, the larger sonority difference between /&#x00279;/ and /&#x00251;/ made /&#x00279;&#x00251;/ an acoustically more distinctive syllable than /w&#x00251;/<xref ref-type="fn" rid="fn0012"><sup>12</sup></xref>. In other words, /w/ is perceptually more similar to /&#x00251;/ than is /&#x00279;/. Thus, if acoustic distinctiveness and clarity were the underlying mechanisms driving the MMN responses, hearing the deviant /&#x00279;&#x00251;/ within a stream of /w&#x00251;/ standards should have elicited a larger MMN response than hearing the deviant /w&#x00251;/ in a stream of /&#x00279;&#x00251;/ standards. Yet, the opposite MMN response pattern was observed: the less acoustically distinct /w&#x00251;/ deviant elicited a larger MMN than did the acoustically preferable /&#x00279;&#x00251;/. Moreover, the MMN response elicited by both syllables was larger over the left hemisphere, as compared to the right, which is indicative of feature-level processing; acoustic change detection would have been indicated by similar bilateral MMN responses (N&#x000E4;&#x000E4;t&#x000E4;nen et al., <xref ref-type="bibr" rid="B70">1997</xref>).</p>
<p>While underspecification is presumed to be a language-universal phenomenon, it is possible that the specification of features varies across languages. For example, voiced stops are underspecified in English, while voiceless stops are underspecified in Japanese (Hestvik and Durvasula, <xref ref-type="bibr" rid="B47">2016</xref>; Hestvik et al., <xref ref-type="bibr" rid="B48">2020</xref>). In terms of /&#x00279;/, Natvig (<xref ref-type="bibr" rid="B950">2020</xref>) proposed that liquids, and rhotics in particular, are underspecified consonantal sonorants due to the multiple variations of &#x0201C;r-sounds&#x0201D; that occur in languages such as German, Arabic, Hawaiian, New Zealand Maori, Malayalam, and Norwegian. While it is beyond the scope of this study to address whether /&#x00279;/ is specified in languages other than English, cross-linguistic differences in [consonantal] underspecification are possible.</p>
<p>The decision to use /&#x00279;/ and /w/ here was driven by the need to better understand the clinical observation of particular speech error patterns during phonological development. Specifically, young typically developing children, as well as older children with speech sound disorders, often have difficulty producing /&#x00279;/ with adequate palatal and pharyngeal constriction, resulting in an incorrect [w] production. Thus, it was hypothesized that constriction (i.e., [consonantal]) is the primary distinguishing feature of /&#x00279;/ and /w/, at least in American English. Clements and Hume&#x02019;s (<xref ref-type="bibr" rid="B15">1995</xref>) feature geometry theory was used to address the underlying differences in the phonological representations of /&#x00279;/ and /w/. Alternative explanations, including usage-based phonology, phonotactic probability, and sonority/acoustics/phonetics, were explored. However, none of the predictions made by these approaches fit the data. Moreover, while it is possible that [labial] underspecification of /w/ elicited the observed results, that explanation would not be consistent with the many previous studies showing [coronal] to be the underspecified place of articulation. As a result, the presence or absence of [consonantal] in the phonological representations of /&#x00279;/ and /w/, respectively, is the current best explanation of the results. Future work can either further confirm and extend our proposal, or correct it as needed.</p>
<p>FUL&#x02019;s underspecification predictions, tested within an oddball paradigm, provide a clear framework within which to examine feature encoding and the specification of phonological representations. By contrasting single phonemes, different patterns of neural responses can be associated with distinctive features. The identification of individual features&#x02019; neural patterns is a necessary first step in understanding how speech perception and processing lead to language comprehension and production. However, as pointed out by a reviewer, the use of individual phonemes and/or syllables in the oddball paradigm does not capture the complexity of parsing phonemes (and features) and their subsequent mapping onto lexical items in single words or continuous speech (Gwilliams et al., <xref ref-type="bibr" rid="B45">2018</xref>, <xref ref-type="bibr" rid="B44">2020</xref>; Dikker et al., <xref ref-type="bibr" rid="B29">2020</xref>). To further understand how phonological underspecification improves the efficiency of speech processing, studies involving naturalistic language tasks are an important next step.</p>
</sec>
<sec id="s4-4">
<title>Underlying Neural Mechanisms for Underspecification</title>
<p>From its theoretical inception, underspecification has been proposed as a mechanism to improve the efficiency<xref ref-type="fn" rid="fn0013"><sup>13</sup></xref> of speech processing (Chomsky and Halle, <xref ref-type="bibr" rid="B13">1968</xref>; Kiparsky, <xref ref-type="bibr" rid="B53">1985</xref>; Archangeli, <xref ref-type="bibr" rid="B1">1988</xref>; Mohanan, <xref ref-type="bibr" rid="B67">1991</xref>; Clements and Hume, <xref ref-type="bibr" rid="B15">1995</xref>; Steriade, <xref ref-type="bibr" rid="B89">1995</xref>; Eulitz and Lahiri, <xref ref-type="bibr" rid="B34">2004</xref>). That is, an underspecified feature is the default in a phonological representation. It is efficient to assume a feature is underspecified unless evidence is presented to the contrary. The predictability of that default status allows for ease of phonological processing.</p>
<p>The hallmark neural index of underspecification in electrophysiological studies has been a larger MMN to underspecified phonemes, as compared to specified ones. However, few proposals have been made to address the underlying neural mechanisms of this underspecification response. The size of the MMN has been associated with ease of discrimination (Tiitinen et al., <xref ref-type="bibr" rid="B92">1994</xref>; N&#x000E4;&#x000E4;t&#x000E4;nen et al., <xref ref-type="bibr" rid="B70">1997</xref>). The large underspecification MMN response would thus suggest that it is easier to discriminate an underspecified feature in a phoneme within a stream of specified phonemes, as compared to contrasting a specified feature within a stream of underspecified phonemes. But what does this large MMN response characterize at a neural level?</p>
<p>From a neurophysiological standpoint, one possibility is that the size of the MMN reflects the tuning characteristics of the responding neural populations. That is, the specification of a feature could lead to the recruitment of specialized neural populations that are tuned to respond only to that feature. Conversely, if a phoneme is not specified for a feature, other less-specialized populations of neurons could be recruited to respond. These less-specialized neurons could be weakly tuned for phonetic-acoustic content. By having weaker encoding, these neurons might be more flexible in their perceptual responses and would likely respond to more types of features at the same time. As a result, the responses elicited by the less-specified neurons could be larger than those of the specifically tuned neurons because they are coded to respond to more types of acoustic-phonetic information. Moreover, since the less-specified neurons might be activated more frequently due to their lack of feature specification, their responses could become more practiced, which could also result in larger responses.</p>
<p>With regard to the present study, perhaps the underspecified [&#x02013;consonantal] feature in /w&#x00251;/ could activate the weakly coded neurons that were tuned to respond to a variety of phonetic-acoustic content. This broad phonetic-acoustic tuning could elicit a large neural response due to the many different cues that might be summed together in the response. Alternatively, the specified feature in /&#x00279;&#x00251;/ could access neuronal populations that were explicitly coded for a single feature, [+consonantal]. Thus, the neurons would respond, but only to that specific feature, ignoring all other features. This could result in a small neural response.</p>
<p>Neuroimaging studies have provided some evidence in support of this proposal. For example, very small populations of neurons (characterized by single electrodes or voxels) have been found to encode and respond to linguistically meaningful information, such as formant frequencies (e.g., low-back vowels), phonetic features (e.g., obstruent, plosive, voicing), and/or entire phonemes (Mesgarani et al., <xref ref-type="bibr" rid="B66">2014</xref>; Arsenault and Buchsbaum, <xref ref-type="bibr" rid="B2">2015</xref>; de Heer et al., <xref ref-type="bibr" rid="B24">2017</xref>; Gwilliams et al., <xref ref-type="bibr" rid="B45">2018</xref>; Yi et al., <xref ref-type="bibr" rid="B97">2019</xref>). Also, phonemes and features elicited activation across multiple electrodes and voxels, suggesting that responses were not constrained to a single neural population. Thus, there is evidence for highly tuned neural populations to respond to one, or many, features, while also working in conjunction with other neural populations.</p>
<p>To our knowledge, previous studies of underspecification have not directly discussed the neural implications of underspecification, and rightfully so, given the limited spatial resolution of scalp-level EEG recordings (Luck, <xref ref-type="bibr" rid="B59">2014</xref>). The present study proposes some possible neural-level interpretations of its results. Future collaborative work with researchers using spatially sensitive neuroimaging techniques will be necessary to further define the underlying neural mechanisms of underspecification.</p>
</sec>
<sec id="s4-5">
<title>Summary and Conclusions</title>
<p>The less specified /w&#x00251;/ elicited a large MMN, whereas a much smaller MMN was elicited by the more specified /&#x00279;&#x00251;/. This outcome demonstrates that the [consonantal] feature follows the underspecification predictions of FUL previously tested with the place of articulation and voicing features. Thus, this study provides new evidence for the language universality of underspecification by addressing a different phoneme feature. Moreover, left-hemisphere low gamma activation characterized distinct phoneme-specific feature processing patterns for /&#x00279;/ and /w/, revealing a potentially novel index of underspecification. Examining theta and/or low gamma bandwidths in future studies could provide further support for the claims of underspecification.</p>
</sec>
</sec>
<sec id="s5">
<title>Data Availability Statement</title>
<p>The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.</p>
</sec>
<sec id="s6">
<title>Ethics Statement</title>
<p>The studies involving human participants were reviewed and approved by Idaho State University Human Subjects Committee and the University of North Dakota Institutional Review Board. The patients/participants provided their written informed consent to participate in this study.</p>
</sec>
<sec id="s7">
<title>Author Contributions</title>
<p>AC created the stimuli, tested participants, prepared and analyzed the data, and helped write the manuscript. DO and YW analyzed data and helped write the manuscript. All authors contributed to the article and approved the submitted version.</p>
</sec>
<sec sec-type="COI-statement" id="s8">
<title>Conflict of Interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</body>
<back>
<ack>
<p>We would like to thank Kari Lehr for her assistance during the testing of participants.</p>
</ack>
<fn-group>
<fn fn-type="financial-disclosure">
<p><bold>Funding.</bold> This research was supported by NIH grant R15DC013359 (from the National Institute on Deafness and Other Communication Disorders) awarded to the first author (AC). This funding paid for research program equipment, participant research payments, and student research assistants. The open access fees were paid for by Idaho State University start-up funds awarded to the first author. The second author (YW) was supported by NSF grant 1540943.</p>
</fn>
</fn-group>
<sec id="s9">
<title>Supplementary Material</title>
<p>The Supplementary Material for this article can be found online at: <ext-link ext-link-type="uri" xlink:href="https://www.frontiersin.org/articles/10.3389/fnhum.2021.585817/full&#x00023;supplementary-material">https://www.frontiersin.org/articles/10.3389/fnhum.2021.585817/full&#x00023;supplementary-material</ext-link>.</p>
<supplementary-material xlink:href="Data_Sheet_1.PDF" id="SM1" mimetype="application/pdf" xmlns:xlink="http://www.w3.org/1999/xlink"/>
<supplementary-material xlink:href="Data_Sheet_2.PDF" id="SM2" mimetype="application/pdf" xmlns:xlink="http://www.w3.org/1999/xlink"/>
<supplementary-material xlink:href="Data_Sheet_3.PDF" id="SM3" mimetype="application/pdf" xmlns:xlink="http://www.w3.org/1999/xlink"/>
<supplementary-material xlink:href="Data_Sheet_4.PDF" id="SM4" mimetype="application/pdf" xmlns:xlink="http://www.w3.org/1999/xlink"/>
<supplementary-material xlink:href="Data_Sheet_5.PDF" id="SM5" mimetype="application/pdf" xmlns:xlink="http://www.w3.org/1999/xlink"/>
<supplementary-material xlink:href="Data_Sheet_6.PDF" id="SM6" mimetype="application/pdf" xmlns:xlink="http://www.w3.org/1999/xlink"/>
</sec>
<ref-list>
<title>References</title>
<ref id="B1"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Archangeli</surname> <given-names>D.</given-names></name></person-group> (<year>1988</year>). <article-title>Aspects of underspecification theory</article-title>. <source>Phonology</source> <volume>5</volume>, <fpage>183</fpage>&#x02013;<lpage>207</lpage>.</citation></ref>
<ref id="B2"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Arsenault</surname> <given-names>J. S.</given-names></name> <name><surname>Buchsbaum</surname> <given-names>B. R.</given-names></name></person-group> (<year>2015</year>). <article-title>Distributed neural representations of phonological features during speech perception</article-title>. <source>J. Neurosci.</source> <volume>35</volume>, <fpage>634</fpage>&#x02013;<lpage>642</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.2454-14.2015</pub-id><pub-id pub-id-type="pmid">25589757</pub-id></citation></ref>
<ref id="B3"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bernhardt</surname> <given-names>B.</given-names></name></person-group> (<year>1992</year>). <article-title>The application of nonlinear phonological theory to intervention with one phonologically disordered child</article-title>. <source>Clin. Linguist. Phon.</source> <volume>6</volume>, <fpage>283</fpage>&#x02013;<lpage>316</lpage>. <pub-id pub-id-type="doi">10.3109/02699209208985537</pub-id><pub-id pub-id-type="pmid">20670204</pub-id></citation></ref>
<ref id="B4"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bernhardt</surname> <given-names>B.</given-names></name> <name><surname>Gilbert</surname> <given-names>J.</given-names></name></person-group> (<year>1992</year>). <article-title>Applying linguistic theory to speech-language pathology: the case for nonlinear phonology</article-title>. <source>Clin. Linguist. Phon.</source> <volume>6</volume>, <fpage>123</fpage>&#x02013;<lpage>145</lpage>. <pub-id pub-id-type="doi">10.3109/02699209208985523</pub-id><pub-id pub-id-type="pmid">20672888</pub-id></citation></ref>
<ref id="B5"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bernhardt</surname> <given-names>B.</given-names></name> <name><surname>Stoel-Gammon</surname> <given-names>C.</given-names></name></person-group> (<year>1994</year>). <article-title>Nonlinear phonology: introduction and clinical application</article-title>. <source>J. Speech Hear. Res.</source> <volume>37</volume>, <fpage>123</fpage>&#x02013;<lpage>143</lpage>. <pub-id pub-id-type="pmid">8170119</pub-id></citation></ref>
<ref id="B6"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bonte</surname> <given-names>M. L.</given-names></name> <name><surname>Mitterer</surname> <given-names>H.</given-names></name> <name><surname>Zellagui</surname> <given-names>N.</given-names></name> <name><surname>Poelmans</surname> <given-names>H.</given-names></name> <name><surname>Blomert</surname> <given-names>L.</given-names></name></person-group> (<year>2005</year>). <article-title>Auditory cortical tuning to statistical regularities in phonology</article-title>. <source>Clin. Neurophysiol.</source> <volume>116</volume>, <fpage>2765</fpage>&#x02013;<lpage>2774</lpage>. <pub-id pub-id-type="doi">10.1016/j.clinph.2005.08.012</pub-id><pub-id pub-id-type="pmid">16256430</pub-id></citation></ref>
<ref id="B7"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Broen</surname> <given-names>P. A.</given-names></name> <name><surname>Strange</surname> <given-names>W.</given-names></name> <name><surname>Doyle</surname> <given-names>S. S.</given-names></name> <name><surname>Heller</surname> <given-names>J. H.</given-names></name></person-group> (<year>1983</year>). <article-title>Perception and production of approximant consonants by normal and articulation-delayed preschool children</article-title>. <source>J. Speech Hear. Res.</source> <volume>26</volume>, <fpage>601</fpage>&#x02013;<lpage>608</lpage>. <pub-id pub-id-type="doi">10.1044/jshr.2604.601</pub-id><pub-id pub-id-type="pmid">6199587</pub-id></citation></ref>
<ref id="B8"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brown</surname> <given-names>J. C.</given-names></name></person-group> (<year>2004</year>). <article-title>Eliminating the segmental tier: evidence from speech errors</article-title>. <source>J. Psycholinguist. Res.</source> <volume>33</volume>, <fpage>97</fpage>&#x02013;<lpage>101</lpage>. <pub-id pub-id-type="doi">10.1023/b:jopr.0000017222.24698.73</pub-id><pub-id pub-id-type="pmid">15098510</pub-id></citation></ref>
<ref id="B9"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bullmore</surname> <given-names>E. T.</given-names></name> <name><surname>Suckling</surname> <given-names>J.</given-names></name> <name><surname>Overmeyer</surname> <given-names>S.</given-names></name> <name><surname>Rabe-Hesketh</surname> <given-names>S.</given-names></name> <name><surname>Taylor</surname> <given-names>E.</given-names></name> <name><surname>Brammer</surname> <given-names>M. J.</given-names></name></person-group> (<year>1999</year>). <article-title>Global, voxel and cluster tests, by theory and permutation, for a difference between two groups of structural MR images of the brain</article-title>. <source>IEEE Trans. Med. Imaging</source> <volume>18</volume>, <fpage>32</fpage>&#x02013;<lpage>42</lpage>. <pub-id pub-id-type="doi">10.1109/42.750253</pub-id><pub-id pub-id-type="pmid">10193695</pub-id></citation></ref>
<ref id="B10"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Bybee</surname> <given-names>J.</given-names></name></person-group> (<year>2010</year>). <source>Language, Usage and Cognition.</source> <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>.</citation></ref>
<ref id="B11"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>&#x0010C;eponien&#x00117;</surname> <given-names>R.</given-names></name> <name><surname>Cheour</surname> <given-names>M.</given-names></name> <name><surname>N&#x000E4;&#x000E4;t&#x000E4;nen</surname> <given-names>R.</given-names></name></person-group> (<year>1998</year>). <article-title>Interstimulus interval and auditory event-related potentials in children: evidence for multiple generators</article-title>. <source>Electroencephalogr. Clin. Neurophysiol.</source> <volume>108</volume>, <fpage>345</fpage>&#x02013;<lpage>354</lpage>. <pub-id pub-id-type="doi">10.1016/s0168-5597(97)00081-6</pub-id><pub-id pub-id-type="pmid">9714376</pub-id></citation></ref>
<ref id="B12"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cheour</surname> <given-names>M.</given-names></name> <name><surname>&#x0010C;eponien&#x00117;</surname> <given-names>R.</given-names></name> <name><surname>Lehtokoski</surname> <given-names>A.</given-names></name> <name><surname>Luuk</surname> <given-names>A.</given-names></name> <name><surname>Allik</surname> <given-names>J.</given-names></name> <name><surname>Alho</surname> <given-names>K.</given-names></name> <etal/></person-group>. (<year>1998</year>). <article-title>Development of language-specific phoneme representations in the infant brain</article-title>. <source>Nat. Neurosci.</source> <volume>1</volume>, <fpage>351</fpage>&#x02013;<lpage>353</lpage>. <pub-id pub-id-type="doi">10.1038/1561</pub-id><pub-id pub-id-type="pmid">10196522</pub-id></citation></ref>
<ref id="B13"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Chomsky</surname> <given-names>N.</given-names></name> <name><surname>Halle</surname> <given-names>M.</given-names></name></person-group> (<year>1968</year>). <source>The Sound Pattern of English</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Harper and Row</publisher-name>.</citation></ref>
<ref id="B14"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Clements</surname> <given-names>G.</given-names></name></person-group> (<year>1990</year>). &#x0201C;<article-title>The role of the sonority cycle in core syllabification</article-title>,&#x0201D; in <source>Papers in Laboratory Phonology I: Between the Grammar and the Physics of Speech</source>, eds <person-group person-group-type="editor"><name><surname>Kingston</surname> <given-names>J.</given-names></name> <name><surname>Beckman</surname> <given-names>M.</given-names></name></person-group> (<publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>), <fpage>283</fpage>&#x02013;<lpage>333</lpage>.</citation></ref>
<ref id="B15"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Clements</surname> <given-names>G.</given-names></name> <name><surname>Hume</surname> <given-names>E.</given-names></name></person-group> (<year>1995</year>). &#x0201C;<article-title>The internal organization of speech sounds</article-title>,&#x0201D; in <source>The Handbook of Phonological Theory</source>, ed. <person-group person-group-type="editor"><name><surname>Goldsmith</surname> <given-names>J. A.</given-names></name></person-group> (<publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>Blackwell Publishing</publisher-name>), <fpage>245</fpage>&#x02013;<lpage>306</lpage>.</citation></ref>
<ref id="B16"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Cohen</surname> <given-names>M. X.</given-names></name></person-group> (<year>2014</year>). <source>Analyzing Neural Time Series Data: Theory and Practice.</source> <publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>MIT Press</publisher-name>.</citation></ref>
<ref id="B17"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Core</surname> <given-names>C.</given-names></name></person-group> (<year>1997</year>). <source>Feature Geometry, Underspecification and Child Substitutions.</source> <publisher-loc>Miami, FL</publisher-loc>: <publisher-name>Florida International University. Thesis</publisher-name>.</citation></ref>
<ref id="B18"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cornell</surname> <given-names>S. A.</given-names></name> <name><surname>Lahiri</surname> <given-names>A.</given-names></name> <name><surname>Eulitz</surname> <given-names>C.</given-names></name></person-group> (<year>2011</year>). <article-title>What you encode is not necessarily what you store: evidence for sparse feature representations from mismatch negativity</article-title>. <source>Brain Res.</source> <volume>1394</volume>, <fpage>79</fpage>&#x02013;<lpage>89</lpage>. <pub-id pub-id-type="doi">10.1016/j.brainres.2011.04.001</pub-id><pub-id pub-id-type="pmid">21549357</pub-id></citation></ref>
<ref id="B19"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cornell</surname> <given-names>S. A.</given-names></name> <name><surname>Lahiri</surname> <given-names>A.</given-names></name> <name><surname>Eulitz</surname> <given-names>C.</given-names></name></person-group> (<year>2013</year>). <article-title>Inequality across consonantal contrasts in speech perception: evidence from mismatch negativity</article-title>. <source>J. Exp. Psychol. Hum. Percept. Perform.</source> <volume>39</volume>, <fpage>757</fpage>&#x02013;<lpage>772</lpage>. <pub-id pub-id-type="doi">10.1037/a0030862</pub-id><pub-id pub-id-type="pmid">23276108</pub-id></citation></ref>
<ref id="B20"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cummings</surname> <given-names>A.</given-names></name> <name><surname>Madden</surname> <given-names>J.</given-names></name> <name><surname>Hefta</surname> <given-names>K.</given-names></name></person-group> (<year>2017</year>). <article-title>Converging evidence for [coronal] underspecification in English-speaking adults</article-title>. <source>J. Neurolinguistics</source> <volume>44</volume>, <fpage>147</fpage>&#x02013;<lpage>162</lpage>. <pub-id pub-id-type="doi">10.1016/j.jneuroling.2017.05.003</pub-id><pub-id pub-id-type="pmid">29085183</pub-id></citation></ref>
<ref id="B21"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cummings</surname> <given-names>A.</given-names></name> <name><surname>Seddoh</surname> <given-names>A.</given-names></name> <name><surname>Jallo</surname> <given-names>B.</given-names></name></person-group> (<year>2016</year>). <article-title>Phonological code retrieval during picture naming: influence of consonant class</article-title>. <source>Brain Res.</source> <volume>1635</volume>, <fpage>71</fpage>&#x02013;<lpage>85</lpage>. <pub-id pub-id-type="doi">10.1016/j.brainres.2016.01.014</pub-id><pub-id pub-id-type="pmid">26801830</pub-id></citation></ref>
<ref id="B22"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Datta</surname> <given-names>H.</given-names></name> <name><surname>Shafer</surname> <given-names>V. L.</given-names></name> <name><surname>Morr</surname> <given-names>M. L.</given-names></name> <name><surname>Kurtzberg</surname> <given-names>D.</given-names></name> <name><surname>Schwartz</surname> <given-names>R. G.</given-names></name></person-group> (<year>2010</year>). <article-title>Electrophysiological indices of discrimination of long-duration, phonetically similar vowels in children with typical and atypical language development</article-title>. <source>J. Speech Lang. Hear. Res.</source> <volume>53</volume>, <fpage>757</fpage>&#x02013;<lpage>777</lpage>. <pub-id pub-id-type="doi">10.1044/1092-4388(2009/08-0123)</pub-id><pub-id pub-id-type="pmid">20530387</pub-id></citation></ref>
<ref id="B23"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Davidson</surname> <given-names>D. J.</given-names></name> <name><surname>Indefrey</surname> <given-names>P.</given-names></name></person-group> (<year>2007</year>). <article-title>An inverse relation between event-related and time-frequency violation responses in sentence processing</article-title>. <source>Brain Res.</source> <volume>1158</volume>, <fpage>81</fpage>&#x02013;<lpage>92</lpage>. <pub-id pub-id-type="doi">10.1016/j.brainres.2007.04.082</pub-id><pub-id pub-id-type="pmid">17560965</pub-id></citation></ref>
<ref id="B24"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>de Heer</surname> <given-names>W. A.</given-names></name> <name><surname>Huth</surname> <given-names>A. G.</given-names></name> <name><surname>Griffiths</surname> <given-names>T. L.</given-names></name> <name><surname>Gallant</surname> <given-names>J. L.</given-names></name> <name><surname>Theunissen</surname> <given-names>F. E.</given-names></name></person-group> (<year>2017</year>). <article-title>The hierarchical cortical organization of human speech processing</article-title>. <source>J. Neurosci.</source> <volume>37</volume>, <fpage>6539</fpage>&#x02013;<lpage>6557</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.3267-16.2017</pub-id><pub-id pub-id-type="pmid">28588065</pub-id></citation></ref>
<ref id="B25"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Delattre</surname> <given-names>P.</given-names></name> <name><surname>Freeman</surname> <given-names>D.</given-names></name></person-group> (<year>1968</year>). <article-title>A dialect study of American R&#x02019;s by X-ray motion picture</article-title>. <source>Linguistics</source> <volume>6</volume>, <fpage>29</fpage>&#x02013;<lpage>68</lpage>.</citation></ref>
<ref id="B26"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Delorme</surname> <given-names>A.</given-names></name> <name><surname>Makeig</surname> <given-names>S.</given-names></name></person-group> (<year>2004</year>). <article-title>EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis</article-title>. <source>J. Neurosci. Methods</source> <volume>134</volume>, <fpage>9</fpage>&#x02013;<lpage>21</lpage>. <pub-id pub-id-type="doi">10.1016/j.jneumeth.2003.10.009</pub-id><pub-id pub-id-type="pmid">15102499</pub-id></citation></ref>
<ref id="B28"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Diesch</surname> <given-names>E.</given-names></name> <name><surname>Luce</surname> <given-names>T.</given-names></name></person-group> (<year>1997</year>). <article-title>Magnetic mismatch fields elicited by vowels and consonants</article-title>. <source>Exp. Brain Res.</source> <volume>116</volume>, <fpage>139</fpage>&#x02013;<lpage>152</lpage>. <pub-id pub-id-type="doi">10.1007/pl00005734</pub-id><pub-id pub-id-type="pmid">9305823</pub-id></citation></ref>
<ref id="B29"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dikker</surname> <given-names>S.</given-names></name> <name><surname>Assaneo</surname> <given-names>M. F.</given-names></name> <name><surname>Gwilliams</surname> <given-names>L.</given-names></name> <name><surname>Wang</surname> <given-names>L.</given-names></name> <name><surname>K&#x000F6;sem</surname> <given-names>A.</given-names></name></person-group> (<year>2020</year>). <article-title>Magnetoencephalography and language</article-title>. <source>Neuroimaging Clin. N. Am.</source> <volume>30</volume>, <fpage>229</fpage>&#x02013;<lpage>238</lpage>. <pub-id pub-id-type="doi">10.1016/j.nic.2020.01.004</pub-id><pub-id pub-id-type="pmid">32336409</pub-id></citation></ref>
<ref id="B27"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Di Liberto</surname> <given-names>G. M.</given-names></name> <name><surname>O&#x02019;Sullivan</surname> <given-names>J. A.</given-names></name> <name><surname>Lalor</surname> <given-names>E. C.</given-names></name></person-group> (<year>2015</year>). <article-title>Low-frequency cortical entrainment to speech reflects phoneme-level processing</article-title>. <source>Curr. Biol.</source> <volume>25</volume>, <fpage>2457</fpage>&#x02013;<lpage>2465</lpage>. <pub-id pub-id-type="doi">10.1016/j.cub.2015.08.030</pub-id><pub-id pub-id-type="pmid">26412129</pub-id></citation></ref>
<ref id="B30"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ding</surname> <given-names>N.</given-names></name> <name><surname>Simon</surname> <given-names>J. Z.</given-names></name></person-group> (<year>2014</year>). <article-title>Cortical entrainment to continuous speech: functional roles and interpretations</article-title>. <source>Front. Hum. Neurosci.</source> <volume>8</volume>:<fpage>311</fpage>. <pub-id pub-id-type="doi">10.3389/fnhum.2014.00311</pub-id><pub-id pub-id-type="pmid">24904354</pub-id></citation></ref>
<ref id="B31"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Doelling</surname> <given-names>K. B.</given-names></name> <name><surname>Arnal</surname> <given-names>L. H.</given-names></name> <name><surname>Ghitza</surname> <given-names>O.</given-names></name> <name><surname>Poeppel</surname> <given-names>D.</given-names></name></person-group> (<year>2014</year>). <article-title>Acoustic landmarks drive delta-theta oscillations to enable speech comprehension by facilitating perceptual parsing</article-title>. <source>NeuroImage</source> <volume>85</volume>, <fpage>761</fpage>&#x02013;<lpage>768</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2013.06.035</pub-id><pub-id pub-id-type="pmid">23791839</pub-id></citation></ref>
<ref id="B32"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Dziubalska-Ko&#x00142;aczyk</surname> <given-names>K.</given-names></name></person-group> (<year>2001</year>). <source>Constraints and Preferences.</source> <publisher-loc>Berlin</publisher-loc>: <publisher-name>Walter de Gruyter</publisher-name>.</citation></ref>
<ref id="B33"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Edwards</surname> <given-names>M. L.</given-names></name></person-group> (<year>1983</year>). <article-title>Selection criteria for developing therapy goals</article-title>. <source>J. Childhood Commun. Disord.</source> <volume>7</volume>, <fpage>36</fpage>&#x02013;<lpage>45</lpage>.</citation></ref>
<ref id="B34"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Eulitz</surname> <given-names>C.</given-names></name> <name><surname>Lahiri</surname> <given-names>A.</given-names></name></person-group> (<year>2004</year>). <article-title>Neurobiological evidence for abstract phonological representations in the mental lexicon during speech recognition</article-title>. <source>J. Cogn. Neurosci.</source> <volume>16</volume>, <fpage>577</fpage>&#x02013;<lpage>583</lpage>. <pub-id pub-id-type="doi">10.1162/089892904323057308</pub-id><pub-id pub-id-type="pmid">15185677</pub-id></citation></ref>
<ref id="B35"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Folstein</surname> <given-names>J. R.</given-names></name> <name><surname>Van Petten</surname> <given-names>C.</given-names></name></person-group> (<year>2008</year>). <article-title>Influence of cognitive control and mismatch on the N2 component of the ERP: a review</article-title>. <source>Psychophysiology</source> <volume>45</volume>, <fpage>152</fpage>&#x02013;<lpage>170</lpage>. <pub-id pub-id-type="doi">10.1111/j.1469-8986.2007.00602.x</pub-id><pub-id pub-id-type="pmid">17850238</pub-id></citation></ref>
<ref id="B36"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Fromkin</surname> <given-names>V.</given-names></name></person-group> (<year>1973</year>). <source>Speech Errors as Linguistic Evidence.</source> <publisher-loc>The Hague</publisher-loc>: <publisher-name>Mouton</publisher-name>.</citation></ref>
<ref id="B37"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ghitza</surname> <given-names>O.</given-names></name></person-group> (<year>2011</year>). <article-title>Linking speech perception and neurophysiology: speech decoding guided by cascaded oscillators locked to the input rhythm</article-title>. <source>Front. Psychol.</source> <volume>2</volume>:<fpage>130</fpage>. <pub-id pub-id-type="doi">10.3389/fpsyg.2011.00130</pub-id><pub-id pub-id-type="pmid">21743809</pub-id></citation></ref>
<ref id="B38"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gick</surname> <given-names>B.</given-names></name></person-group> (<year>1999</year>). <article-title>A gesture-based account of intrusive consonants in English</article-title>. <source>Phonology</source> <volume>16</volume>, <fpage>29</fpage>&#x02013;<lpage>54</lpage>.</citation></ref>
<ref id="B39"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Giraud</surname> <given-names>A.-L.</given-names></name> <name><surname>Poeppel</surname> <given-names>D.</given-names></name></person-group> (<year>2012</year>). <article-title>Cortical oscillations and speech processing: emerging computational principles and operations</article-title>. <source>Nat. Neurosci.</source> <volume>15</volume>, <fpage>511</fpage>&#x02013;<lpage>517</lpage>. <pub-id pub-id-type="doi">10.1038/nn.3063</pub-id><pub-id pub-id-type="pmid">22426255</pub-id></citation></ref>
<ref id="B40"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Greenberg</surname> <given-names>J. H.</given-names></name></person-group> (<year>1975</year>). <article-title>Research on language universals</article-title>. <source>Ann. Rev. Anthropol.</source> <volume>4</volume>, <fpage>75</fpage>&#x02013;<lpage>94</lpage>.</citation></ref>
<ref id="B41"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Groppe</surname> <given-names>D. M.</given-names></name> <name><surname>Urbach</surname> <given-names>T. P.</given-names></name> <name><surname>Kutas</surname> <given-names>M.</given-names></name></person-group> (<year>2011</year>). <article-title>Mass univariate analysis of event-related brain potentials/fields I: a critical tutorial review</article-title>. <source>Psychophysiology</source> <volume>48</volume>, <fpage>1711</fpage>&#x02013;<lpage>1725</lpage>. <pub-id pub-id-type="doi">10.1111/j.1469-8986.2011.01273.x</pub-id><pub-id pub-id-type="pmid">21895683</pub-id></citation></ref>
<ref id="B42"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Grunwell</surname> <given-names>P.</given-names></name></person-group> (<year>1987</year>). <source>Clinical Phonology, 2nd Edn.</source> <publisher-loc>Baltimore, MD</publisher-loc>: <publisher-name>Williams and Wilkins</publisher-name>.</citation></ref>
<ref id="B43"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gurariy</surname> <given-names>G.</given-names></name> <name><surname>Killebrew</surname> <given-names>K. W.</given-names></name> <name><surname>Berryhill</surname> <given-names>M. E.</given-names></name> <name><surname>Caplovitz</surname> <given-names>G. P.</given-names></name></person-group> (<year>2016</year>). <article-title>Induced and evoked human electrophysiological correlates of visual working memory set-size effects at encoding</article-title>. <source>PLoS One</source> <volume>11</volume>:<fpage>e0167022</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0167022</pub-id><pub-id pub-id-type="pmid">27902738</pub-id></citation></ref>
<ref id="B44"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gwilliams</surname> <given-names>L.</given-names></name> <name><surname>King</surname> <given-names>J.-R.</given-names></name> <name><surname>Marantz</surname> <given-names>A.</given-names></name> <name><surname>Poeppel</surname> <given-names>D.</given-names></name></person-group> (<year>2020</year>). <article-title>Neural dynamics of phoneme sequencing in real speech jointly encode order and invariant content</article-title>. <source>BioRxiv</source> [Preprint]. <pub-id pub-id-type="doi">10.1101/2020.04.04.025684</pub-id></citation></ref>
<ref id="B45"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gwilliams</surname> <given-names>L.</given-names></name> <name><surname>Linzen</surname> <given-names>T.</given-names></name> <name><surname>Poeppel</surname> <given-names>D.</given-names></name> <name><surname>Marantz</surname> <given-names>A.</given-names></name></person-group> (<year>2018</year>). <article-title>In spoken word recognition, the future predicts the past</article-title>. <source>J. Neurosci.</source> <volume>38</volume>, <fpage>7585</fpage>&#x02013;<lpage>7599</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.0065-18.2018</pub-id><pub-id pub-id-type="pmid">30012695</pub-id></citation></ref>
<ref id="B46"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Halle</surname> <given-names>M.</given-names></name> <name><surname>Vaux</surname> <given-names>B.</given-names></name> <name><surname>Wolfe</surname> <given-names>A.</given-names></name></person-group> (<year>2000</year>). <article-title>On feature spreading and the representation of place of articulation</article-title>. <source>Linguist. Inq.</source> <volume>31</volume>, <fpage>387</fpage>&#x02013;<lpage>444</lpage>. <pub-id pub-id-type="doi">10.1162/002438900554398</pub-id></citation></ref>
<ref id="B47"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hestvik</surname> <given-names>A.</given-names></name> <name><surname>Durvasula</surname> <given-names>K.</given-names></name></person-group> (<year>2016</year>). <article-title>Neurobiological evidence for voicing underspecification in English</article-title>. <source>Brain Lang.</source> <volume>152</volume>, <fpage>28</fpage>&#x02013;<lpage>43</lpage>. <pub-id pub-id-type="doi">10.1016/j.bandl.2015.10.007</pub-id><pub-id pub-id-type="pmid">26705957</pub-id></citation></ref>
<ref id="B48"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hestvik</surname> <given-names>A.</given-names></name> <name><surname>Shinohara</surname> <given-names>Y.</given-names></name> <name><surname>Durvasula</surname> <given-names>K.</given-names></name> <name><surname>Verdonschot</surname> <given-names>R. G.</given-names></name> <name><surname>Sakai</surname> <given-names>H.</given-names></name></person-group> (<year>2020</year>). <article-title>Abstractness of human speech sound representations</article-title>. <source>Brain Res.</source> <volume>1732</volume>:<fpage>146664</fpage>. <pub-id pub-id-type="doi">10.1016/j.brainres.2020.146664</pub-id><pub-id pub-id-type="pmid">31930995</pub-id></citation></ref>
<ref id="B49"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Howard</surname> <given-names>M. F.</given-names></name> <name><surname>Poeppel</surname> <given-names>D.</given-names></name></person-group> (<year>2010</year>). <article-title>Discrimination of speech stimuli based on neuronal response phase patterns depends on acoustics but not comprehension</article-title>. <source>J. Neurophysiol.</source> <volume>104</volume>, <fpage>2500</fpage>&#x02013;<lpage>2511</lpage>. <pub-id pub-id-type="doi">10.1152/jn.00251.2010</pub-id><pub-id pub-id-type="pmid">20484530</pub-id></citation></ref>
<ref id="B50"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hume</surname> <given-names>E.</given-names></name> <name><surname>Odden</surname> <given-names>D.</given-names></name></person-group> (<year>1996</year>). <article-title>Reconsidering [consonantal]</article-title>. <source>Phonology</source> <volume>13</volume>, <fpage>345</fpage>&#x02013;<lpage>376</lpage>.</citation></ref>
<ref id="B51"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jung</surname> <given-names>T. P.</given-names></name> <name><surname>Makeig</surname> <given-names>S.</given-names></name> <name><surname>Westerfield</surname> <given-names>M.</given-names></name> <name><surname>Townsend</surname> <given-names>J.</given-names></name> <name><surname>Courchesne</surname> <given-names>E.</given-names></name> <name><surname>Sejnowski</surname> <given-names>T. J.</given-names></name></person-group> (<year>2000</year>). <article-title>Removal of eye activity artifacts from visual event-related potentials in normal and clinical subjects</article-title>. <source>Clin. Neurophysiol.</source> <volume>111</volume>, <fpage>1745</fpage>&#x02013;<lpage>1758</lpage>. <pub-id pub-id-type="doi">10.1016/s1388-2457(00)00386-2</pub-id><pub-id pub-id-type="pmid">11018488</pub-id></citation></ref>
<ref id="B52"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jusczyk</surname> <given-names>P. W.</given-names></name> <name><surname>Luce</surname> <given-names>P. A.</given-names></name> <name><surname>Charles-Luce</surname> <given-names>J.</given-names></name></person-group> (<year>1994</year>). <article-title>Infants&#x02019; sensitivity to phonotactic patterns in the native language</article-title>. <source>J. Mem. Lang.</source> <volume>33</volume>, <fpage>630</fpage>&#x02013;<lpage>645</lpage>.</citation></ref>
<ref id="B53"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kiparsky</surname> <given-names>P.</given-names></name></person-group> (<year>1985</year>). <article-title>Some consequences of lexical phonology</article-title>. <source>Phonol. Yearbook</source> <volume>2</volume>, <fpage>85</fpage>&#x02013;<lpage>138</lpage>.</citation></ref>
<ref id="B54"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Korpilahti</surname> <given-names>P.</given-names></name> <name><surname>Lang</surname> <given-names>H.</given-names></name> <name><surname>Aaltonen</surname> <given-names>O.</given-names></name></person-group> (<year>1995</year>). <article-title>Is there a late-latency mismatch negativity (MMN) component?</article-title> <source>Electroencephalogr. Clin. Neurophysiol.</source> <volume>95</volume>:<fpage>P96</fpage>.</citation></ref>
<ref id="B55"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lahiri</surname> <given-names>A.</given-names></name> <name><surname>Marslen-Wilson</surname> <given-names>W.</given-names></name></person-group> (<year>1991</year>). <article-title>The mental representation of lexical form: a phonological approach to the recognition lexicon</article-title>. <source>Cognition</source> <volume>38</volume>, <fpage>245</fpage>&#x02013;<lpage>294</lpage>. <pub-id pub-id-type="doi">10.1016/0010-0277(91)90008-r</pub-id><pub-id pub-id-type="pmid">2060271</pub-id></citation></ref>
<ref id="B56"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Lahiri</surname> <given-names>A.</given-names></name> <name><surname>Reetz</surname> <given-names>H.</given-names></name></person-group> (<year>2002</year>). &#x0201C;<article-title>Underspecified recognition</article-title>,&#x0201D; in <source>Labphon 7</source>, eds <person-group person-group-type="editor"><name><surname>Gussenhoven</surname> <given-names>C.</given-names></name> <name><surname>Warner</surname> <given-names>N.</given-names></name></person-group> (<publisher-loc>Berlin</publisher-loc>: <publisher-name>Mouton</publisher-name>), <fpage>637</fpage>&#x02013;<lpage>676</lpage>.</citation></ref>
<ref id="B57"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lahiri</surname> <given-names>A.</given-names></name> <name><surname>Reetz</surname> <given-names>H.</given-names></name></person-group> (<year>2010</year>). <article-title>Distinctive features: phonological underspecification in representation and processing</article-title>. <source>J. Phon.</source> <volume>38</volume>, <fpage>44</fpage>&#x02013;<lpage>59</lpage>. <pub-id pub-id-type="doi">10.1016/j.wocn.2010.01.002</pub-id></citation></ref>
<ref id="B58"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Levelt</surname> <given-names>W. J.</given-names></name> <name><surname>Roelofs</surname> <given-names>A.</given-names></name> <name><surname>Meyer</surname> <given-names>A. S.</given-names></name></person-group> (<year>1999</year>). <article-title>A theory of lexical access in speech production</article-title>. <source>Behav. Brain Sci.</source> <volume>22</volume>, <fpage>1</fpage>&#x02013;<lpage>38</lpage>; discussion <fpage>38</fpage>&#x02013;<lpage>75</lpage>. <pub-id pub-id-type="doi">10.1017/s0140525x99001776</pub-id><pub-id pub-id-type="pmid">11301520</pub-id></citation></ref>
<ref id="B59"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Luck</surname> <given-names>S.</given-names></name></person-group> (<year>2014</year>). <source>An Introduction to the Event-Related Potential Technique</source>, 2nd edition. <publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>MIT Press</publisher-name>.</citation></ref>
<ref id="B60"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Luo</surname> <given-names>H.</given-names></name> <name><surname>Poeppel</surname> <given-names>D.</given-names></name></person-group> (<year>2007</year>). <article-title>Phase patterns of neuronal responses reliably discriminate speech in human auditory cortex</article-title>. <source>Neuron</source> <volume>54</volume>, <fpage>1001</fpage>&#x02013;<lpage>1010</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2007.06.004</pub-id><pub-id pub-id-type="pmid">17582338</pub-id></citation></ref>
<ref id="B61"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Luo</surname> <given-names>H.</given-names></name> <name><surname>Poeppel</surname> <given-names>D.</given-names></name></person-group> (<year>2012</year>). <article-title>Cortical oscillations in auditory perception and speech: evidence for two temporal windows in human auditory cortex</article-title>. <source>Front. Psychol.</source> <volume>3</volume>:<fpage>170</fpage>. <pub-id pub-id-type="doi">10.3389/fpsyg.2012.00170</pub-id><pub-id pub-id-type="pmid">22666214</pub-id></citation></ref>
<ref id="B62"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Luo</surname> <given-names>H.</given-names></name> <name><surname>Liu</surname> <given-names>Z.</given-names></name> <name><surname>Poeppel</surname> <given-names>D.</given-names></name></person-group> (<year>2010</year>). <article-title>Auditory cortex tracks both auditory and visual stimulus dynamics using low-frequency neuronal phase modulation</article-title>. <source>PLoS Biol.</source> <volume>8</volume>:<fpage>e1000445</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pbio.1000445</pub-id><pub-id pub-id-type="pmid">20711473</pub-id></citation></ref>
<ref id="B63"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Maddieson</surname> <given-names>I.</given-names></name></person-group> (<year>1984</year>). <source>Patterns of Sounds.</source> <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>.</citation></ref>
<ref id="B64"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Makeig</surname> <given-names>S.</given-names></name></person-group> (<year>1993</year>). <article-title>Auditory event-related dynamics of the EEG spectrum and effects of exposure to tones</article-title>. <source>Electroencephalogr. Clin. Neurophysiol.</source> <volume>86</volume>, <fpage>283</fpage>&#x02013;<lpage>293</lpage>. <pub-id pub-id-type="doi">10.1016/0013-4694(93)90110-h</pub-id><pub-id pub-id-type="pmid">7682932</pub-id></citation></ref>
<ref id="B65"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>McLeod</surname> <given-names>S.</given-names></name> <name><surname>Crowe</surname> <given-names>K.</given-names></name></person-group> (<year>2018</year>). <article-title>Children&#x02019;s consonant acquisition in 27 languages: a cross-linguistic review</article-title>. <source>Am. J. Speech Lang. Pathol.</source> <volume>27</volume>, <fpage>1</fpage>&#x02013;<lpage>26</lpage>. <pub-id pub-id-type="doi">10.1044/2018_AJSLP-17-0100</pub-id><pub-id pub-id-type="pmid">30177993</pub-id></citation></ref>
<ref id="B66"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mesgarani</surname> <given-names>N.</given-names></name> <name><surname>Cheung</surname> <given-names>C.</given-names></name> <name><surname>Johnson</surname> <given-names>K.</given-names></name> <name><surname>Chang</surname> <given-names>E. F.</given-names></name></person-group> (<year>2014</year>). <article-title>Phonetic feature encoding in human superior temporal gyrus</article-title>. <source>Science</source> <volume>343</volume>, <fpage>1006</fpage>&#x02013;<lpage>1010</lpage>. <pub-id pub-id-type="doi">10.1126/science.1245994</pub-id><pub-id pub-id-type="pmid">24482117</pub-id></citation></ref>
<ref id="B67"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mohanan</surname> <given-names>K. P.</given-names></name></person-group> (<year>1991</year>). <article-title>On the bases of radical underspecification</article-title>. <source>Nat. Lang. Linguist. Theory</source> <volume>9</volume>, <fpage>285</fpage>&#x02013;<lpage>325</lpage>. <pub-id pub-id-type="doi">10.1007/BF00134678</pub-id></citation></ref>
<ref id="B68"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>M&#x000FC;ller</surname> <given-names>D.</given-names></name> <name><surname>Widmann</surname> <given-names>A.</given-names></name> <name><surname>Schr&#x000F6;ger</surname> <given-names>E.</given-names></name></person-group> (<year>2005</year>). <article-title>Deviance-repetition effects as a function of stimulus feature, feature value variation and timing: a mismatch negativity study</article-title>. <source>Biol. Psychol.</source> <volume>68</volume>, <fpage>1</fpage>&#x02013;<lpage>14</lpage>. <pub-id pub-id-type="doi">10.1016/j.biopsycho.2004.03.018</pub-id><pub-id pub-id-type="pmid">15312692</pub-id></citation></ref>
<ref id="B70"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>N&#x000E4;&#x000E4;t&#x000E4;nen</surname> <given-names>R.</given-names></name> <name><surname>Lehtokoski</surname> <given-names>A.</given-names></name> <name><surname>Lennes</surname> <given-names>M.</given-names></name> <name><surname>Cheour</surname> <given-names>M.</given-names></name> <name><surname>Huotilainen</surname> <given-names>M.</given-names></name> <name><surname>Iivonen</surname> <given-names>A.</given-names></name> <etal/></person-group>. (<year>1997</year>). <article-title>Language-specific phoneme representations revealed by electric and magnetic brain responses</article-title>. <source>Nature</source> <volume>385</volume>, <fpage>432</fpage>&#x02013;<lpage>434</lpage>. <pub-id pub-id-type="doi">10.1038/385432a0</pub-id><pub-id pub-id-type="pmid">9009189</pub-id></citation></ref>
<ref id="B71"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>N&#x000E4;&#x000E4;t&#x000E4;nen</surname> <given-names>R.</given-names></name> <name><surname>Paavilainen</surname> <given-names>P.</given-names></name> <name><surname>Rinne</surname> <given-names>T.</given-names></name> <name><surname>Alho</surname> <given-names>K.</given-names></name></person-group> (<year>2007</year>). <article-title>The mismatch negativity (MMN) in basic research of central auditory processing: a review</article-title>. <source>Clin. Neurophysiol.</source> <volume>118</volume>, <fpage>2544</fpage>&#x02013;<lpage>2590</lpage>. <pub-id pub-id-type="doi">10.1016/j.clinph.2007.04.026</pub-id><pub-id pub-id-type="pmid">17931964</pub-id></citation></ref>
<ref id="B69"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>N&#x000E4;&#x000E4;t&#x000E4;nen</surname> <given-names>R.</given-names></name> <name><surname>Winkler</surname> <given-names>I.</given-names></name></person-group> (<year>1999</year>). <article-title>The concept of auditory stimulus representation in cognitive neuroscience</article-title>. <source>Psychol. Bull.</source> <volume>125</volume>, <fpage>826</fpage>&#x02013;<lpage>859</lpage>. <pub-id pub-id-type="doi">10.1037/0033-2909.125.6.826</pub-id><pub-id pub-id-type="pmid">10589304</pub-id></citation></ref>
<ref id="B950"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Natvig</surname> <given-names>D.</given-names></name></person-group> (<year>2020</year>). <article-title>Rhotic underspecification: deriving variability and arbitrariness through phonological representations</article-title>. <source>Glossa</source> <volume>5</volume>:<fpage>48</fpage>. <pub-id pub-id-type="doi">10.5334/gjgl.1172</pub-id></citation></ref>
<ref id="B72"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nousak</surname> <given-names>J. M.</given-names></name> <name><surname>Deacon</surname> <given-names>D.</given-names></name> <name><surname>Ritter</surname> <given-names>W.</given-names></name> <name><surname>Vaughan</surname> <given-names>H. G.</given-names></name></person-group> (<year>1996</year>). <article-title>Storage of information in transient auditory memory</article-title>. <source>Cogn. Brain Res.</source> <volume>4</volume>, <fpage>305</fpage>&#x02013;<lpage>317</lpage>. <pub-id pub-id-type="doi">10.1016/s0926-6410(96)00068-7</pub-id><pub-id pub-id-type="pmid">8957572</pub-id></citation></ref>
<ref id="B73"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nusbaum</surname> <given-names>H. C.</given-names></name> <name><surname>Pisoni</surname> <given-names>D. B.</given-names></name></person-group> (<year>1985</year>). <article-title>Constraints on the perception of synthetic speech generated by rule</article-title>. <source>Behav. Res. Methods Instrum. Comput.</source> <volume>17</volume>, <fpage>235</fpage>&#x02013;<lpage>242</lpage>. <pub-id pub-id-type="doi">10.3758/bf03214389</pub-id><pub-id pub-id-type="pmid">24511177</pub-id></citation></ref>
<ref id="B74"><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Palmer</surname> <given-names>J. A.</given-names></name> <name><surname>Makeig</surname> <given-names>S.</given-names></name> <name><surname>Kreutz-Delgado</surname> <given-names>K.</given-names></name> <name><surname>Rao</surname> <given-names>B. D.</given-names></name></person-group> (<year>2008</year>). &#x0201C;<article-title>Newton method for the ICA mixture model</article-title>,&#x0201D; in <source>2008 IEEE International Conference on Acoustics, Speech and Signal Processing,</source> (Las Vegas, NV: IEEE), <fpage>1805</fpage>&#x02013;<lpage>1808</lpage>. <pub-id pub-id-type="doi">10.1109/ICASSP.2008.4517982</pub-id></citation></ref>
<ref id="B75"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Phillips</surname> <given-names>C.</given-names></name> <name><surname>Pellathy</surname> <given-names>T.</given-names></name> <name><surname>Marantz</surname> <given-names>A.</given-names></name> <name><surname>Yellin</surname> <given-names>E.</given-names></name> <name><surname>Wexler</surname> <given-names>K.</given-names></name> <name><surname>Poeppel</surname> <given-names>D.</given-names></name> <etal/></person-group>. (<year>2000</year>). <article-title>Auditory cortex accesses phonological categories: an MEG mismatch study</article-title>. <source>J. Cogn. Neurosci.</source> <volume>12</volume>, <fpage>1038</fpage>&#x02013;<lpage>1055</lpage>. <pub-id pub-id-type="doi">10.1162/08989290051137567</pub-id><pub-id pub-id-type="pmid">11177423</pub-id></citation></ref>
<ref id="B76"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Picton</surname> <given-names>T. W.</given-names></name> <name><surname>Alain</surname> <given-names>C.</given-names></name> <name><surname>Otten</surname> <given-names>L.</given-names></name> <name><surname>Ritter</surname> <given-names>W.</given-names></name> <name><surname>Achim</surname> <given-names>A.</given-names></name></person-group> (<year>2000</year>). <article-title>Mismatch negativity: different water in the same river</article-title>. <source>Audiol. Neurootol.</source> <volume>5</volume>, <fpage>111</fpage>&#x02013;<lpage>139</lpage>. <pub-id pub-id-type="doi">10.1159/000013875</pub-id><pub-id pub-id-type="pmid">10859408</pub-id></citation></ref>
<ref id="B77"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pierrehumbert</surname> <given-names>J.</given-names></name></person-group> (<year>2006</year>). <article-title>The next toolkit</article-title>. <source>J. Phon.</source> <volume>34</volume>, <fpage>516</fpage>&#x02013;<lpage>530</lpage>. <pub-id pub-id-type="doi">10.1016/j.wocn.2006.06.003</pub-id></citation></ref>
<ref id="B78"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Poeppel</surname> <given-names>D.</given-names></name></person-group> (<year>2003</year>). <article-title>The analysis of speech in different temporal integration windows: cerebral lateralization as &#x0201C;asymmetric sampling in time.&#x0201D;</article-title> <source>Speech Commun.</source> <volume>41</volume>, <fpage>245</fpage>&#x02013;<lpage>255</lpage>. <pub-id pub-id-type="doi">10.1016/S0167-6393(02)00107-3</pub-id></citation></ref>
<ref id="B79"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Prendergast</surname> <given-names>G.</given-names></name> <name><surname>Johnson</surname> <given-names>S. R.</given-names></name> <name><surname>Green</surname> <given-names>G. G. R.</given-names></name></person-group> (<year>2010</year>). <article-title>Temporal dynamics of sinusoidal and non-sinusoidal amplitude modulation</article-title>. <source>Eur. J. Neurosci.</source> <volume>32</volume>, <fpage>1599</fpage>&#x02013;<lpage>1607</lpage>. <pub-id pub-id-type="doi">10.1111/j.1460-9568.2010.07423.x</pub-id><pub-id pub-id-type="pmid">21039961</pub-id></citation></ref>
<ref id="B80"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Preston</surname> <given-names>J.</given-names></name> <name><surname>Benway</surname> <given-names>N.</given-names></name> <name><surname>Leece</surname> <given-names>M.</given-names></name> <name><surname>Hitchcock</surname> <given-names>E.</given-names></name> <name><surname>McAllister</surname> <given-names>T.</given-names></name></person-group> (<year>2020</year>). <article-title>Tutorial: motor-based treatment strategies for /r/ distortions</article-title>. <source>Lang. Speech Hear. Serv. Sch.</source> <volume>51</volume>, <fpage>966</fpage>&#x02013;<lpage>980</lpage>. <pub-id pub-id-type="doi">10.1044/2020_LSHSS-20-00012</pub-id><pub-id pub-id-type="pmid">32783706</pub-id></citation></ref>
<ref id="B81"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sams</surname> <given-names>M.</given-names></name> <name><surname>Alho</surname> <given-names>K.</given-names></name> <name><surname>N&#x000E4;&#x000E4;t&#x000E4;nen</surname> <given-names>R.</given-names></name></person-group> (<year>1984</year>). <article-title>Short-term habituation and dishabituation of the mismatch negativity of the ERP</article-title>. <source>Psychophysiology</source> <volume>21</volume>, <fpage>434</fpage>&#x02013;<lpage>441</lpage>. <pub-id pub-id-type="doi">10.1111/j.1469-8986.1984.tb00223.x</pub-id><pub-id pub-id-type="pmid">6463176</pub-id></citation></ref>
<ref id="B82"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Scharinger</surname> <given-names>M.</given-names></name> <name><surname>Merickel</surname> <given-names>J.</given-names></name> <name><surname>Riley</surname> <given-names>J.</given-names></name> <name><surname>Idsardi</surname> <given-names>W. J.</given-names></name></person-group> (<year>2011</year>). <article-title>Neuromagnetic evidence for a featural distinction of English consonants: sensor- and source-space data</article-title>. <source>Brain Lang.</source> <volume>116</volume>, <fpage>71</fpage>&#x02013;<lpage>82</lpage>. <pub-id pub-id-type="doi">10.1016/j.bandl.2010.11.002</pub-id><pub-id pub-id-type="pmid">21185073</pub-id></citation></ref>
<ref id="B83"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Scharinger</surname> <given-names>M.</given-names></name> <name><surname>Monahan</surname> <given-names>P. J.</given-names></name> <name><surname>Idsardi</surname> <given-names>W. J.</given-names></name></person-group> (<year>2012</year>). <article-title>Asymmetries in the processing of vowel height</article-title>. <source>J. Speech Lang. Hear. Res.</source> <volume>55</volume>, <fpage>903</fpage>&#x02013;<lpage>918</lpage>. <pub-id pub-id-type="doi">10.1044/1092-4388(2011/11-0065)</pub-id><pub-id pub-id-type="pmid">22232394</pub-id></citation></ref>
<ref id="B84"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schluter</surname> <given-names>K.</given-names></name> <name><surname>Politzer-Ahles</surname> <given-names>S.</given-names></name> <name><surname>Almeida</surname> <given-names>D.</given-names></name></person-group> (<year>2016</year>). <article-title>No place for /h/: an ERP investigation of English fricative place features</article-title>. <source>Lang. Cogn. Neurosci.</source> <volume>31</volume>, <fpage>728</fpage>&#x02013;<lpage>740</lpage>. <pub-id pub-id-type="doi">10.1080/23273798.2016.1151058</pub-id><pub-id pub-id-type="pmid">27366758</pub-id></citation></ref>
<ref id="B85"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Secord</surname> <given-names>W. A.</given-names></name> <name><surname>Boyce</surname> <given-names>S. E.</given-names></name> <name><surname>Donohue</surname> <given-names>J. S.</given-names></name> <name><surname>Fox</surname> <given-names>R. A.</given-names></name> <name><surname>Shine</surname> <given-names>R. E.</given-names></name></person-group> (<year>2007</year>). <source>Eliciting Sounds: Techniques and Strategies for Clinicians</source>, 2nd edition. <publisher-loc>Clifton Park, NY</publisher-loc>: <publisher-name>Cengage Learning</publisher-name>.</citation></ref>
<ref id="B86"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shafer</surname> <given-names>V. L.</given-names></name> <name><surname>Morr</surname> <given-names>M. L.</given-names></name> <name><surname>Datta</surname> <given-names>H.</given-names></name> <name><surname>Kurtzberg</surname> <given-names>D.</given-names></name> <name><surname>Schwartz</surname> <given-names>R. G.</given-names></name></person-group> (<year>2005</year>). <article-title>Neurophysiological indexes of speech processing deficits in children with specific language impairment</article-title>. <source>J. Cogn. Neurosci.</source> <volume>17</volume>, <fpage>1168</fpage>&#x02013;<lpage>1180</lpage>. <pub-id pub-id-type="doi">10.1162/0898929054475217</pub-id><pub-id pub-id-type="pmid">16138434</pub-id></citation></ref>
<ref id="B87"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shriberg</surname> <given-names>L. D.</given-names></name></person-group> (<year>1980</year>). <article-title>An intervention procedure for children with persistent /r/ errors</article-title>. <source>Lang. Speech Hear. Serv. Sch.</source> <volume>11</volume>, <fpage>102</fpage>&#x02013;<lpage>110</lpage>. <pub-id pub-id-type="doi">10.1044/0161-1461.1102.102</pub-id></citation></ref>
<ref id="B88"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Singer</surname> <given-names>W.</given-names></name> <name><surname>Gray</surname> <given-names>C. M.</given-names></name></person-group> (<year>1995</year>). <article-title>Visual feature integration and the temporal correlation hypothesis</article-title>. <source>Ann. Rev. Neurosci.</source> <volume>18</volume>, <fpage>555</fpage>&#x02013;<lpage>586</lpage>. <pub-id pub-id-type="doi">10.1146/annurev.ne.18.030195.003011</pub-id><pub-id pub-id-type="pmid">7605074</pub-id></citation></ref>
<ref id="B89"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Steriade</surname> <given-names>D.</given-names></name></person-group> (<year>1995</year>). &#x0201C;<article-title>Underspecification and markedness</article-title>,&#x0201D; in <source>The Handbook of Phonological Theory</source>, ed <person-group person-group-type="editor"><name><surname>Goldsmith</surname> <given-names>J. A.</given-names></name></person-group> (<publisher-loc>Oxford and Cambridge, MA</publisher-loc>: <publisher-name>Blackwell Publishing</publisher-name>), <fpage>114</fpage>&#x02013;<lpage>174</lpage>.</citation></ref>
<ref id="B90"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Stoel-Gammon</surname> <given-names>C.</given-names></name> <name><surname>Dunn</surname> <given-names>C.</given-names></name></person-group> (<year>1985</year>). <source>Normal and Disordered Phonology in Children.</source> <publisher-loc>Baltimore, MD</publisher-loc>: <publisher-name>University Park Press</publisher-name>.</citation></ref>
<ref id="B91"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Symonds</surname> <given-names>R. M.</given-names></name> <name><surname>Lee</surname> <given-names>W. W.</given-names></name> <name><surname>Kohn</surname> <given-names>A.</given-names></name> <name><surname>Schwartz</surname> <given-names>O.</given-names></name> <name><surname>Witkowski</surname> <given-names>S.</given-names></name> <name><surname>Sussman</surname> <given-names>E. S.</given-names></name></person-group> (<year>2017</year>). <article-title>Distinguishing neural adaptation and predictive coding hypotheses in auditory change detection</article-title>. <source>Brain Topogr.</source> <volume>30</volume>, <fpage>136</fpage>&#x02013;<lpage>148</lpage>. <pub-id pub-id-type="doi">10.1007/s10548-016-0529-8</pub-id><pub-id pub-id-type="pmid">27752799</pub-id></citation></ref>
<ref id="B92"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tiitinen</surname> <given-names>H.</given-names></name> <name><surname>May</surname> <given-names>P.</given-names></name> <name><surname>Reinikainen</surname> <given-names>K.</given-names></name> <name><surname>N&#x000E4;&#x000E4;t&#x000E4;nen</surname> <given-names>R.</given-names></name></person-group> (<year>1994</year>). <article-title>Attentive novelty detection in humans is governed by pre-attentive sensory memory</article-title>. <source>Nature</source> <volume>372</volume>, <fpage>90</fpage>&#x02013;<lpage>92</lpage>. <pub-id pub-id-type="doi">10.1038/372090a0</pub-id><pub-id pub-id-type="pmid">7969425</pub-id></citation></ref>
<ref id="B93"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Vennemann</surname> <given-names>T.</given-names></name></person-group> (<year>1988</year>). <source>Preference Laws For Syllable Structure and the Explanation of Sound Change: With Special Reference to German, Germanic, Italian and Latin.</source> <publisher-loc>Berlin</publisher-loc>: <publisher-name>Mouton de Gruyter</publisher-name>.</citation></ref>
<ref id="B94"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vitevitch</surname> <given-names>M. S.</given-names></name> <name><surname>Luce</surname> <given-names>P. A.</given-names></name></person-group> (<year>2004</year>). <article-title>A web-based interface to calculate phonotactic probability for words and nonwords in English</article-title>. <source>Behav. Res. Methods Instrum. Comput.</source> <volume>36</volume>, <fpage>481</fpage>&#x02013;<lpage>487</lpage>. <pub-id pub-id-type="doi">10.3758/bf03195594</pub-id><pub-id pub-id-type="pmid">15641436</pub-id></citation></ref>
<ref id="B95"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ward</surname> <given-names>L. M.</given-names></name></person-group> (<year>2003</year>). <article-title>Synchronous neural oscillations and cognitive processes</article-title>. <source>Trends Cogn. Sci.</source> <volume>7</volume>, <fpage>553</fpage>&#x02013;<lpage>559</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2003.10.012</pub-id><pub-id pub-id-type="pmid">14643372</pub-id></citation></ref>
<ref id="B96"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Winkler</surname> <given-names>I.</given-names></name> <name><surname>Lehtokoski</surname> <given-names>A.</given-names></name> <name><surname>Alku</surname> <given-names>P.</given-names></name> <name><surname>Vainio</surname> <given-names>M.</given-names></name> <name><surname>Czigler</surname> <given-names>I.</given-names></name> <name><surname>Cs&#x000E9;pe</surname> <given-names>V.</given-names></name> <etal/></person-group>. (<year>1999</year>). <article-title>Pre-attentive detection of vowel contrasts utilizes both phonetic and auditory memory representations</article-title>. <source>Cogn. Brain Res.</source> <volume>7</volume>, <fpage>357</fpage>&#x02013;<lpage>369</lpage>. <pub-id pub-id-type="doi">10.1016/s0926-6410(98)00039-1</pub-id><pub-id pub-id-type="pmid">9838192</pub-id></citation></ref>
<ref id="B97"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yi</surname> <given-names>H. G.</given-names></name> <name><surname>Leonard</surname> <given-names>M. K.</given-names></name> <name><surname>Chang</surname> <given-names>E. F.</given-names></name></person-group> (<year>2019</year>). <article-title>The encoding of speech sounds in the superior temporal gyrus</article-title>. <source>Neuron</source> <volume>102</volume>, <fpage>1096</fpage>&#x02013;<lpage>1110</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2019.04.023</pub-id><pub-id pub-id-type="pmid">31220442</pub-id></citation></ref>
</ref-list>
<fn-group>
<fn id="fn0001"><p><sup>1</sup>The /&#x00279;/ is generally considered to be more complex and specified than /w/ (Greenberg, <xref ref-type="bibr" rid="B40">1975</xref>) as it is acquired later in development (McLeod and Crowe, <xref ref-type="bibr" rid="B65">2018</xref>) and occurs in fewer world languages than /w/ (Maddieson, <xref ref-type="bibr" rid="B63">1984</xref>).</p></fn>
<fn id="fn0002"><p><sup>2</sup>There is some debate as to the organization of the tree, but all theories agree upon a root node and separate class nodes.</p></fn>
<fn id="fn0003"><p><sup>3</sup>Phonotactic probability refers to the frequency with which a phonological segment, such as /&#x00279;/, and a sequence of phonological segments, such as /&#x00279;&#x00251;/, occur in a given position in a word (Jusczyk et al., <xref ref-type="bibr" rid="B52">1994</xref>).</p></fn>
<fn id="fn0004"><p><sup>4</sup><ext-link ext-link-type="uri" xlink:href="https://calculator.ku.edu/phonotactic/about">https://calculator.ku.edu/phonotactic/about</ext-link></p></fn>
<fn id="fn0005"><p><sup>5</sup><ext-link ext-link-type="uri" xlink:href="http://www.sccn.ucsd.edu/eeglab">http://www.sccn.ucsd.edu/eeglab</ext-link></p></fn>
<fn id="fn0006"><p><sup>6</sup>These electrodes encompass four anterior-posterior levels (Frontal, Frontal-Central, Central, Central-Parietal) and three left-right laterality levels [Left (1), Midline (z), Right (2)].</p></fn>
<fn id="fn0007"><p><sup>7</sup><ext-link ext-link-type="uri" xlink:href="https://openwetware.org/wiki/Mass_UnivariateERPToolbox">https://openwetware.org/wiki/Mass_UnivariateERPToolbox</ext-link></p></fn>
<fn id="fn0008"><p><sup>8</sup>These electrodes encompassed two laterality levels (Left: F3, F1, FC3, FC1, C3, C1, CP3, CP1; Right: F4, F2, FC4, FC2, C4, C2, CP4, CP2), four anterior-posterior levels (Frontal, Frontal-Central, Central, Central-Parietal) and two electrode laterality levels (Far laterality: F3/F4, FC3/FC4, C3/C4, CP3/CP4; Close laterality: F1/F2, FC1/FC2, C1/C2, CP1/CP2).</p></fn>
<fn id="fn0009"><p><sup>9</sup>Note that while only the mismatch responses elicited by /w&#x00251;/ were found to be significant <italic>via</italic> the cluster permutation analyses, mismatch responses were also present in the /&#x00279;&#x00251;/ difference wave (<xref ref-type="fig" rid="F2">Figure 2</xref>).</p></fn>
<fn id="fn0010"><p><sup>10</sup>Note that while only the mismatch responses elicited by /w&#x00251;/ were found to be significant <italic>via</italic> the cluster permutation analyses, mismatch responses were also present in the /&#x00279;&#x00251;/ difference wave (<xref ref-type="fig" rid="F2">Figure 2</xref>).</p></fn>
<fn id="fn0011"><p><sup>11</sup>The LN is a negativity that follows the MMN and typically peaks between 300 and 500 ms at fronto-central electrode sites.</p></fn>
<fn id="fn0012"><p><sup>12</sup>It should be noted that this is a different feature assignment than what would be found in FUL (Lahiri and Reetz, 2002), as there are no articulator dependents in FUL; this is what allows for [coronal] to be underspecified. The articulator dependents in the Clements and Hume (<xref ref-type="bibr" rid="B15">1995</xref>) model make it difficult, or impossible, for [coronal] to be underspecified.</p></fn>
<fn id="fn0013"><p><sup>13</sup>It is important to note that the acoustic aspects of speech cannot be fully differentiated from language experience and phonotactics, as phoneme combinations that are perceptually more distinct tend to occur more often in and across languages (Bonte et al., <xref ref-type="bibr" rid="B6">2005</xref>).</p></fn>
</fn-group>
</back>
</article>
