<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Hum. Neurosci.</journal-id>
<journal-title>Frontiers in Human Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Hum. Neurosci.</abbrev-journal-title>
<issn pub-type="epub">1662-5161</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fnhum.2019.00215</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Learning to Expect: Predicting Sounds During Movement Is Related to Sensorimotor Association During Listening</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Burgess</surname> <given-names>Jed D.</given-names></name>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/97044/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Major</surname> <given-names>Brendan P.</given-names></name>
</contrib>
<contrib contrib-type="author">
<name><surname>McNeel</surname> <given-names>Claire</given-names></name>
<uri xlink:href="https://loop.frontiersin.org/people/654608/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Clark</surname> <given-names>Gillian M.</given-names></name>
<uri xlink:href="https://loop.frontiersin.org/people/705143/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Lum</surname> <given-names>Jarrad A. G.</given-names></name>
<uri xlink:href="https://loop.frontiersin.org/people/275767/overview"/>
</contrib> 
<contrib contrib-type="author">
<name><surname>Enticott</surname> <given-names>Peter G.</given-names></name>
<uri xlink:href="https://loop.frontiersin.org/people/23826/overview"/>
</contrib>
</contrib-group>
<aff><institution>Cognitive Neuroscience Unit, School of Psychology, Deakin University</institution>, <addr-line>Geelong, VIC</addr-line>, <country>Australia</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Yusuf Ozgur Cakmak, University of Otago, New Zealand</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Maher A. Quraan, Izaak Walton Killam Health Centre, Canada; Xing Tian, New York University Shanghai, China; Pekcan Ungan, Ko&#x000E7; University, Turkey</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Jed D. Burgess <email>jed.burgess&#x00040;deakin.edu.au</email></corresp>
</author-notes>
<pub-date pub-type="epub">
<day>04</day>
<month>07</month>
<year>2019</year>
</pub-date>
<pub-date pub-type="collection">
<year>2019</year>
</pub-date>
<volume>13</volume>
<elocation-id>215</elocation-id>
<history>
<date date-type="received">
<day>05</day>
<month>12</month>
<year>2018</year>
</date>
<date date-type="accepted">
<day>11</day>
<month>06</month>
<year>2019</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2019 Burgess, Major, McNeel, Clark, Lum and Enticott.</copyright-statement>
<copyright-year>2019</copyright-year>
<copyright-holder>Burgess, Major, McNeel, Clark, Lum and Enticott</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract><p>Sensory experiences, such as sound, often result from our motor actions. Over time, repeated sound-producing performance can generate sensorimotor associations. However, it is not clear how sensory and motor information are associated. Here, we explore if sensory prediction is associated with the formation of sensorimotor associations during a learning task. We recorded event-related potentials (ERPs) while participants produced index and little finger-swipes on a bespoke device, generating novel sounds. ERPs were also obtained as participants heard those sounds played back. Peak suppression was compared to assess sensory prediction. Additionally, transcranial magnetic stimulation (TMS) was used during listening to generate finger-motor evoked potentials (MEPs). MEPs were recorded before and after training upon hearing these sounds, and then compared to reveal sensorimotor associations. Finally, we explored the relationship between these components. Results demonstrated that an increased positive-going peak (e.g., P2) and a suppressed negative-going peak (e.g., N2) were recorded during action, revealing some sensory prediction outcomes (P2: <italic>p</italic> = 0.050, <inline-formula><mml:math id="M1"><mml:mrow><mml:msubsup><mml:mi>&#x003B7;</mml:mi><mml:mtext>p</mml:mtext><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.208; N2: <italic>p</italic> = 0.001, <inline-formula><mml:math id="M2"><mml:mrow><mml:msubsup><mml:mi>&#x003B7;</mml:mi><mml:mtext>p</mml:mtext><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.474). 
Increased MEPs were also observed upon hearing congruent sounds compared with incongruent sounds (i.e., sounds associated with a specific finger), demonstrating precise sensorimotor associations that were not present before learning (Index finger: <italic>p</italic> &#x0003C; 0.001, <inline-formula><mml:math id="M3"><mml:mrow><mml:msubsup><mml:mi>&#x003B7;</mml:mi><mml:mtext>p</mml:mtext><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.614; Little finger: <italic>p</italic> &#x0003C; 0.001, <inline-formula><mml:math id="M4"><mml:mrow><mml:msubsup><mml:mi>&#x003B7;</mml:mi><mml:mtext>p</mml:mtext><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.529). Consistent with our broad hypotheses, a negative association was observed between the MEPs in one finger during listening and the ERPs recorded during performance with the other (Index finger MEPs and Fz N1 action ERPs; <italic>r</italic> = &#x02212;0.655, <italic>p</italic> = 0.003). Overall, the data suggest that predictive mechanisms are associated with the fine-tuning of sensorimotor associations.</p></abstract>
<kwd-group>
<kwd>sensory prediction</kwd>
<kwd>sensorimotor association</kwd>
<kwd>predictive comparison</kwd>
<kwd>TMS</kwd>
<kwd>EEG</kwd>
</kwd-group>
<counts>
<fig-count count="4"/>
<table-count count="0"/>
<equation-count count="13"/>
<ref-count count="94"/>
<page-count count="14"/>
<word-count count="11189"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="introduction" id="s1">
<title>Introduction</title>
<p>Typically, sounds are produced by our motor actions. Through performance, the <italic>cause and effect</italic> relationship between motor and sound information can become evident. Over time, repeated experience can generate sensorimotor associations, binding motor and sensory information. It is proposed that these sensorimotor associations assist with precise motor control (Shadmehr et al., <xref ref-type="bibr" rid="B77">2010</xref>). For instance, when learning to press a key on the piano, a student will begin to recognize that sounds are aligned with keys, and these are (typically) activated by specific finger movements. Thus, should one desire to hear those sounds, the known finger movements should be executed.</p>
<p>In humans, evidence of sensorimotor association is usually demonstrated as activation of motor brain regions during sensory processing. This can be achieved by applying transcranial magnetic stimulation (TMS) to the primary motor cortex (M1) while participants listen to sounds that are associated with a particular action, such as clicking one&#x02019;s fingers (for review, see Aglioti and Pazzaglia, <xref ref-type="bibr" rid="B1">2010</xref>). Upon stimulation, hearing the sound activates the motor program that formerly produced it, revealing heightened M1 excitability. Consequently, larger motor evoked potentials (MEPs) are measured at the peripheral muscle involved in the action when compared to a baseline condition. This response, where known sounds activate motor regions and specific corticospinal tracts, is called <italic>auditory-motor resonance</italic> (AMR; for review, see Burgess et al., <xref ref-type="bibr" rid="B13">2017</xref>).</p>
<p>AMR has been demonstrated extensively. Hearing piano sounds increases MEPs in the finger muscles of pianists when compared to non-pianists (Furukawa et al., <xref ref-type="bibr" rid="B30">2017</xref>), highlighting the importance of experience. Similarly, MEPs in tongue muscles are facilitated in response to speech listening (D&#x02019;Ausilio et al., <xref ref-type="bibr" rid="B19">2014</xref>; Nuttall et al., <xref ref-type="bibr" rid="B63">2017</xref>). Sensorimotor training has even produced novel MEP <italic>dissociations</italic> (Ticini et al., <xref ref-type="bibr" rid="B83">2011</xref>, <xref ref-type="bibr" rid="B82">2019</xref>). That is, an index or little finger button-press can be associated with distinct sounds during training. Afterwards, hearing the index-<italic>congruent</italic> sound generates larger MEPs in the index muscle than hearing the <italic>incongruent</italic> little-finger sound. Together, these dissociations highlight the precision of the AMR networks.</p>
<p>However, how sensory and motor systems integrate to produce a sensorimotor association is unclear. Associating a sound with a motor action must overcome inherent time delays that are met during sensory processing. For example, the action that generated <italic>this</italic> sound occurred in the past. Thus, in the first instance, there is a temporal disconnect between the motor and sensory aspects that might seem to work against the formation of an experience-dependent association. To overcome this issue (and others), it is suggested the central nervous system (CNS) predicts impending sensory changes during action (Shadmehr et al., <xref ref-type="bibr" rid="B77">2010</xref>; Burgess et al., <xref ref-type="bibr" rid="B13">2017</xref>).</p>
<p>Sensory predictions are critical for effective and fluid behavior (for review, see Sawtell, <xref ref-type="bibr" rid="B74">2017</xref>; Schneider and Mooney, <xref ref-type="bibr" rid="B76">2018</xref>; Straka et al., <xref ref-type="bibr" rid="B80">2018</xref>). From a prediction perspective, smooth movement is achieved over time <italic>via</italic> a comparison between desired (predicted) and produced (actual) sensory consequences. When predicted and actual stimuli are compared, expected sensory consequences are attenuated or suppressed (Aliu et al., <xref ref-type="bibr" rid="B2">2009</xref>; Kilteni and Ehrsson, <xref ref-type="bibr" rid="B50">2017</xref>). Unexpected or novel stimuli, however, are not (Knolle et al., <xref ref-type="bibr" rid="B52">2013</xref>; Mathias et al., <xref ref-type="bibr" rid="B55">2015</xref>). In turn, the feedback generated by this process helps make the movement more efficient and accurate.</p>
<p>In humans, electroencephalography (EEG) and event-related potentials (ERPs) can be used to indicate the sensory prediction processes during action (for reviews, see Woodman, <xref ref-type="bibr" rid="B93">2010</xref>; Bendixen et al., <xref ref-type="bibr" rid="B10">2012</xref>; Joos et al., <xref ref-type="bibr" rid="B48">2014</xref>; Horv&#x000E1;th, <xref ref-type="bibr" rid="B45">2015</xref>). When individuals produce sounds (e.g., speaking, arm, leg, or finger-press generated sounds), which are presumably highly predictable, the suppression of a negative-going peak around 100 ms is often demonstrated when compared to the same audition-obtained peak (Ford et al., <xref ref-type="bibr" rid="B23">2007</xref>; Baess et al., <xref ref-type="bibr" rid="B4">2011</xref>; Van Elk et al., <xref ref-type="bibr" rid="B86">2014</xref>). This suppression represents attenuation of those expected sensory consequences during action. Alternatively, the accentuation of the ERP peak recorded during listening is thought to represent the absence of sensory prediction. For the CNS, it indicates that the incoming sounds are unexpected and important or might even be produced by someone else (Haggard, <xref ref-type="bibr" rid="B36">2017</xref>).</p>
<p>Beyond attenuation of incoming sounds during performance, as indexed by N1 peak suppression, modulation of other peaks during action has also been discussed in terms of sensory prediction. Although such modulation is not well understood (Crowley and Colrain, <xref ref-type="bibr" rid="B18">2004</xref>; Tong et al., <xref ref-type="bibr" rid="B85">2009</xref>), and it remains relatively unclear what sensory prediction outcomes it could represent (Horv&#x000E1;th, <xref ref-type="bibr" rid="B45">2015</xref>; Pinheiro et al., <xref ref-type="bibr" rid="B69">2018</xref>), modulation of the positive-going P2 peak around 200 ms is also reported during action (Chen et al., <xref ref-type="bibr" rid="B16">2012</xref>; Knolle et al., <xref ref-type="bibr" rid="B52">2013</xref>; Timm et al., <xref ref-type="bibr" rid="B84">2014</xref>; Ghio et al., <xref ref-type="bibr" rid="B32">2018</xref>). Effects include decreased suppression for delayed stimulus onsets (Behroozmand et al., <xref ref-type="bibr" rid="B9">2011</xref>; Pereira et al., <xref ref-type="bibr" rid="B67">2014</xref>), pitch-shifted sounds (Behroozmand et al., <xref ref-type="bibr" rid="B8">2014</xref>), or trained sounds (Reinke et al., <xref ref-type="bibr" rid="B71">2003</xref>; Tong et al., <xref ref-type="bibr" rid="B85">2009</xref>). Enhancement of an earlier P1 component (Boutonnet and Lupyan, <xref ref-type="bibr" rid="B12">2015</xref>), as well as suppression of later N2 peaks (Knolle et al., <xref ref-type="bibr" rid="B52">2013</xref>; Mathias et al., <xref ref-type="bibr" rid="B55">2015</xref>) or of the related (Horv&#x000E1;th et al., <xref ref-type="bibr" rid="B46">2008</xref>) mismatch negativity (MMN), is also reported (for review, see N&#x000E4;&#x000E4;t&#x000E4;nen et al., <xref ref-type="bibr" rid="B62">2007</xref>; Winkler, <xref ref-type="bibr" rid="B88">2007</xref>; Garrido et al., <xref ref-type="bibr" rid="B31">2009</xref>; Bartha-Doering et al., <xref ref-type="bibr" rid="B7">2015</xref>).
Altogether, changes in negative and positive-going ERP peaks across action and audition recordings are considered to reflect the sensory prediction mechanisms and their outcomes.</p>
<p>In sum, there is reliable evidence for sensory prediction and sensorimotor associations. However, few reports investigate how they interact. Outside of studies exploring the cerebellum&#x02019;s role in sensory prediction during visually perturbed actions (Miall et al., <xref ref-type="bibr" rid="B59">2007</xref>; Izawa et al., <xref ref-type="bibr" rid="B47">2012</xref>; Yavari et al., <xref ref-type="bibr" rid="B94">2016</xref>), the interaction between auditory predictions and audio-motor associations following a learning task has rarely been examined. Here, we explore for the first time, from an auditory perspective in humans, how motor behavior, sensory prediction markers, and sensorimotor associations are correlated within a single paradigm.</p>
<p>To investigate, an auditory-motor task was designed. This required participants to make an index or little finger-swipe movement on a bespoke device. Activation of one of two switches would result in playback of a sound <italic>via</italic> in-ear headphones. The sensory prediction mechanisms were assessed <italic>via</italic> ERPs, as demonstrated by changes in ERP suppression across action and audition stages. The sensorimotor associations, however, were assessed <italic>via</italic> TMS-induced MEPs during listening, before and after the training period. Finally, we explored the relationship between sensory prediction, sensorimotor association, and motor behavior [e.g., electromyography (EMG) recordings during swipes].</p>
<p>We hypothesized that sensory prediction would be evident. That is, increases in ERP suppression for a negative-going peak such as the N1 (e.g., Van Elk et al., <xref ref-type="bibr" rid="B86">2014</xref>) or a later N2 peak were expected during action (e.g., Mathias et al., <xref ref-type="bibr" rid="B55">2015</xref>; given the novelty of the action and the relatively long swipe duration, we investigated three broad ERP peak windows). Despite the conflicting evidence regarding a positive peak usually around 200 ms, we hypothesized that a decrease in suppression of the first positive-going peak (e.g., P2) would be observed during performance, reflecting a signal to maintain perceptual gaps during sensory prediction processes (e.g., Wang et al., <xref ref-type="bibr" rid="B87">2014</xref>). AMR was also expected to be revealed. Specifically, we hypothesized that congruent sounds would generate larger MEPs than incongruent sounds (e.g., Ticini et al., <xref ref-type="bibr" rid="B83">2011</xref>). Finally, given the close ties between sensory prediction and sensorimotor association under theoretical accounts (Wolpert and Kawato, <xref ref-type="bibr" rid="B90">1998</xref>; Burgess et al., <xref ref-type="bibr" rid="B13">2017</xref>), we hypothesized that correlations between the ERP and EMG data during action and the MEPs during listening would be present (see <xref ref-type="supplementary-material" rid="SM1">Supplementary Figure S1</xref> in <xref ref-type="supplementary-material" rid="SM1">Supplementary Materials</xref> for an illustration of the experimental hypotheses).</p>
</sec>
<sec sec-type="materials and methods" id="s2">
<title>Materials and Methods</title>
<sec id="s2-1">
<title>Participants</title>
<p>We recruited 18 healthy adult participants, including eight females [mean age = 27.33 years (SD = 5.28)]. Participants self-reported no history of neurological or psychiatric illness. All participants indicated normal hearing and were right-handed, as confirmed by the Edinburgh handedness inventory (Oldfield, <xref ref-type="bibr" rid="B64">1971</xref>). Participants were also screened to ensure they met TMS safety standards (Rossi et al., <xref ref-type="bibr" rid="B73">2009</xref>, <xref ref-type="bibr" rid="B72">2011</xref>). Participants provided informed written consent in accordance with the Declaration of Helsinki and were compensated for their time. The research was approved by the Deakin University Human Research Ethics Committee (2015-034).</p>
</sec>
<sec id="s2-2">
<title>Experimental Design and Procedure</title>
<p>The experimental paradigm was based on an ERP investigation (Ford et al., <xref ref-type="bibr" rid="B25">2010</xref>) and a TMS motor-learning study (Ticini et al., <xref ref-type="bibr" rid="B83">2011</xref>). It consisted of two main stages: (1) <italic>Action</italic> and (2) <italic>Audition</italic>. During the action stage, participants produced sounds by making finger-swipes on the experimental device while EEG and EMG recorded CNS activity. During the audition stage, TMS and EMG recorded CNS activity while participants passively listened to sounds played back <italic>via</italic> the device. These stages also comprised individual blocks to help minimize waning participant attention (e.g., Finkbeiner et al., <xref ref-type="bibr" rid="B21">2016</xref>). Overall, participants produced sounds (i.e., action) or heard them (i.e., audition) while EEG, EMG, and TMS recorded CNS activity within separate experimental blocks (see <xref ref-type="fig" rid="F1">Figure 1</xref> for an illustration of the protocol).</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p>Experimental protocol. Top panel: the experimental stages, which are made up of blocks of transcranial magnetic stimulation (TMS) or electroencephalography (EEG) trials. Bottom panel: the order of blocks (within their respective stages) across the experiment.</p></caption>
<graphic xlink:href="fnhum-13-00215-g0001.tif"/>
</fig>
<p>Participants sat comfortably in a chair. The custom-made device, labeled the AMRJ (outlined below and illustrated in <xref ref-type="fig" rid="F2">Figure 2</xref>), was placed in front of them. After the setup of each recording technique was completed (described below), the experiment began with the <italic>Baseline</italic> stage. This stage consisted of two blocks (<italic>B1</italic> and <italic>B2</italic>). These blocks and the final Baseline block (<italic>B3</italic>) were designed to measure transient-state influences (Schmidt et al., <xref ref-type="bibr" rid="B75">2009</xref>) and potential cumulative effects of single-pulse TMS (Pellicciari et al., <xref ref-type="bibr" rid="B66">2016</xref>). Each block comprised 20 TMS pulses with a 4-s inter-stimulus interval and a 5-s inter-block interval, and lasted approximately 2 min.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p>The AMRJ. Left panel: during the action stage, the custom-made AMRJ device (abbreviation used as the proper name) uses finger movement to activate the respective capacitance switch. This activation produces playback of an original sound <italic>via</italic> the in-ear headphones (i.e., there are two sounds). During the audition stage, the AMRJ can produce a pseudorandom playback of the sounds without the need for finger switch activation. In either mode, the AMRJ triggers the respective recording devices <italic>via</italic> the serial port or BNC connectors. Right panel: swipes are produced by either the (right-hand) index or little fingers. For the index finger, a swipe is produced upon the touch of the left capacitance switch, from inside-to-outside of the switch (e.g., left direction). For the little finger, a right directed finger-swipe initiates the respective switch (i.e., over the right-hand side switch from inside-to-outside).</p></caption>
<graphic xlink:href="fnhum-13-00215-g0002.tif"/>
</fig>
<p>Following a 1-min break, the <italic>Pre-learning</italic> stage (<italic>Pre-LP</italic>) began. The Pre-LP stage comprised two blocks (<italic>Pre-LP 1</italic> and <italic>Pre-LP 2</italic>), each establishing a baseline of MEPs and, therefore, AMR. Participants listened to a quasi-randomized block of 48 sound samples (i.e., 48 trials per block; the sounds are described below). During listening, TMS was applied to M1, and MEPs were recorded from the index and little finger muscles simultaneously. Each block lasted approximately 5 min, with a 1-min break between blocks.</p>
<p>After another 1-min break, the <italic>Learning Procedure</italic> (LP) stage began. This stage was intended to associate finger-swipes and sounds, and participants produced the sounds <italic>via</italic> finger-swipes. There were four LP blocks (<italic>LP 1</italic>, <italic>LP 2</italic>, <italic>LP 3</italic>, and <italic>LP 4</italic>). Each block required participants to perform 48 swipe movements with their index or little fingers across the corresponding switch, which would generate the sounds. Beginning with the inside switch-edge, each finger would move towards the outer edge of the device (see <xref ref-type="fig" rid="F3">Figure 3</xref> for illustration). The index finger-swipe was toward the left-hand side of the device, while the little finger generated a swipe towards the right. The starting position required the middle and ring fingers to be positioned over the two <italic>home</italic> plates. This allowed the index and little fingers to be aligned with the inside edge of the respective switch. Before testing, the experimenter provided an example of both finger-swipes. Each swipe needed to be at least 300 ms in duration for sound playback to occur. Participants were instructed to modulate their speed to ensure swipe time was a minimum of 300 ms.</p>
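The 300 ms playback rule above can be pictured as a simple duration gate. The sketch below is purely illustrative and is not the AMRJ firmware; the function and constant names are hypothetical.

```python
# Hypothetical sketch of the swipe-duration gate described above:
# sound playback is triggered only when a swipe across the capacitance
# switch lasts at least 300 ms. Not the actual AMRJ firmware.

MIN_SWIPE_MS = 300  # minimum swipe duration for playback, per the protocol


def swipe_triggers_playback(touch_on_ms: float, touch_off_ms: float) -> bool:
    """Return True when the swipe lasted long enough to trigger playback."""
    return (touch_off_ms - touch_on_ms) >= MIN_SWIPE_MS
```

Under this reading, a participant who swipes too quickly (under 300 ms of switch contact) simply hears no sound and adjusts their speed on the next trial.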
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p>Grand average event-related potentials (ERPs). Top panel: grand average ERPs recorded during the experimental stages across both fingers. Shown here are the traces for the Fz and FCz electrodes (i.e., the action stage data have been corrected for movement-related potentials, and both finger action-sounds have been combined to illustrate the ERPs). Bottom panel: the peak amplitudes within each time window are plotted (*<italic>p</italic> &#x0003C; 0.05, **<italic>p</italic> &#x0003C; 0.001; error bars represent standard error of the mean).</p></caption>
<graphic xlink:href="fnhum-13-00215-g0003.tif"/>
</fig>
<p>Once sound playback had finished after each swipe, participants returned to the original starting position. At the beginning of each LP block, participants were instructed to begin another swipe only after a self-timed 3-s break had expired. This break was designed to mitigate fatigue during sensorimotor learning tasks (Bock et al., <xref ref-type="bibr" rid="B11">2005</xref>; McDonnell and Ridding, <xref ref-type="bibr" rid="B57">2006</xref>). If the experimenter observed a participant beginning a swipe before the 3-s break had expired, the participant was notified <italic>via</italic> a shoulder tap before the next swipe, indicating that they were to increase the rest period between trials.</p>
<p>To help promote motor learning, participants voluntarily chose which movement to execute (Herwig et al., <xref ref-type="bibr" rid="B41">2007</xref>). Participants were asked to perform an approximately equivalent number of index and little finger-swipes (24 each) to mitigate any potential learning biases. Compliance was monitored, and participants were informed of the swipe distribution between LP blocks. During testing, 50.51% of swipes were generated with the index finger and 49.49% with the little finger. Within any LP block, at least 18 index finger-swipes and at least 20 little finger-swipes were produced. Each block lasted approximately 5 min, with a 1-min break between LP blocks.</p>
<p>Following a 4-min break, the post-learning (<italic>Post-LP</italic>) stage began. Like the Pre-LP stage, it required listening to the 48 sound-trials over separate blocks (<italic>Post-LP 1</italic>, <italic>Post-LP 2</italic>, <italic>Post-LP 3</italic>, and <italic>Post-LP 4</italic>). Post-LP 1 and 2 used TMS, while Post-LP 3 and 4 used EEG; the techniques were used independently to minimize interference across recordings. A 1-min break separated all blocks.</p>
<p>Participants then completed the <italic>Learning Procedure-control</italic> (<italic>LP-C</italic>) stage. Here, two LP-C blocks (<italic>LP-C 1</italic> and <italic>LP-C 2</italic>) were used to isolate the motor component within the ERP trace. Convention suggests that data associated with movement should be subtracted from the LP block ERPs. This is thought to improve the comparison with listening-derived ERPs (Martikainen et al., <xref ref-type="bibr" rid="B54">2005</xref>; Ford et al., <xref ref-type="bibr" rid="B23">2007</xref>, <xref ref-type="bibr" rid="B25">2010</xref>, <xref ref-type="bibr" rid="B24">2014</xref>; Baess et al., <xref ref-type="bibr" rid="B4">2011</xref>; Van Elk et al., <xref ref-type="bibr" rid="B86">2014</xref>). Therefore, the LP-C blocks used the same overall design as the LP but did not produce any sound following a swipe action. That is, swipe movements were produced; however, no sounds were played. During testing, 50.77% of swipes were generated with the index finger while 49.23% were produced using the little finger.</p>
<p>Lastly, another <italic>Baseline</italic> stage was completed (block <italic>B3</italic>). As in the initial Baseline blocks, four sets of five TMS pulses at the motor threshold (MT) were applied while MEPs were recorded.</p>
<p>Participants were asked to observe their right hand throughout the experiment to ensure a degree of uniformity across experimental stages. Throughout all listening blocks, participants were asked to pay attention to the sounds and to indicate whether they heard a control sound after sound playback had finished (no control sound was actually presented during listening trials). In total, the experiment lasted approximately 90&#x02013;120 min.</p>
</sec>
<sec id="s2-3">
<title>The AMRJ</title>
<p>Manufactured by SPLat Controls (SPLat Controls, Seaford, VIC, Australia) and Maximum Design (Max Designs, Croydon North, VIC, Australia), the AMRJ consists of the MS121USB216 controller and an MP3 Trigger printed circuit board. Designed for our experimental protocol, the AMRJ uses finger movement to activate inbuilt switches: one for the index finger and another for the little finger. These capacitance switches detect changes in electrical capacitance upon touch and therefore require no activation force. Mechanical button presses can produce unwanted sounds and even accentuate tactile information, which can affect ERP recordings (Horv&#x000E1;th, <xref ref-type="bibr" rid="B44">2014</xref>). The use of capacitance switches reduces the potential confound of tactile and other sound feedback on ERP recordings.</p>
<p>The device can also bypass switch activation and play a quasi-randomized sequence of the audio samples. In either mode, the AMRJ triggers the EEG, EMG, and TMS equipment as required (see <xref ref-type="fig" rid="F2">Figure 2</xref> for an illustration of the AMRJ; more details regarding the switches and trigger design can be found in the <xref ref-type="supplementary-material" rid="SM1">Supplementary Materials</xref>).</p>
</sec>
<sec id="s2-4">
<title>Sounds</title>
<p>Swipe movements were followed by one of two complex sounds at a stimulus onset asynchrony (SOA) of 10 ms. This delay ensured that all techniques were triggered simultaneously (where required). The sounds were recorded in Ableton 9.0 software (Ableton AG, Mitte, BER, Germany) using a Roland Juno 60 synthesizer (Roland Corporation, Hamamatsu, 22, Japan) at Otologic Studios (Toorak, VIC, Australia). One sound consisted of an approximately 500 Hz fundamental tone with 250 Hz and 1,000 Hz partials, and was heard as a <italic>low</italic> sound. The other sound, heard as the high sound, comprised an approximately 1,800 Hz fundamental tone with 2,100 Hz and 4,000 Hz partials (for sound spectrograms, see <xref ref-type="supplementary-material" rid="SM1">Supplementary Figure S2</xref> in <xref ref-type="supplementary-material" rid="SM1">Supplementary Materials</xref>).</p>
<p>Assignment of the sounds (low or high) to each switch (index or little finger) was counterbalanced across participants (Ticini et al., <xref ref-type="bibr" rid="B83">2011</xref>). For example, some participants produced the high sound with an index finger-swipe, while others produced the low sound with that finger. For statistical purposes, however, we refer to the MEPs recorded in the index finger upon hearing the sound associated with that finger as the Congruent<sub>FDI</sub> data. Because the sound-finger mapping was fixed for each participant, the Congruent<sub>FDI</sub> sound is the incongruent sound for the little finger; the term Incongruent<sub>ADM</sub> therefore refers to the MEPs obtained in the little finger during playback of the index finger sound. Similarly, Congruent<sub>ADM</sub> describes the MEPs obtained in the little finger during playback of the little finger-congruent sound. This sound is, in turn, incongruent for the index finger, and Incongruent<sub>FDI</sub> refers to the MEPs recorded in the index finger during playback of the little finger-swipe sound.</p>
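The congruency labels defined above amount to a mapping from (muscle, sound) pairs to conditions under the counterbalanced assignment. A minimal sketch, assuming a hypothetical `assign_conditions` helper (not the authors' analysis code):

```python
# Illustrative sketch of the counterbalanced sound-to-finger mapping and
# the resulting MEP condition labels. Function and argument names are
# hypothetical; only the labeling logic follows the text.

def assign_conditions(high_sound_finger: str) -> dict:
    """Map each (muscle, sound) pair to its congruency label.

    high_sound_finger: which finger produces the 'high' sound for this
    participant ('index' or 'little'); counterbalanced across participants.
    """
    index_sound = "high" if high_sound_finger == "index" else "low"
    little_sound = "low" if high_sound_finger == "index" else "high"
    return {
        ("FDI", index_sound): "Congruent_FDI",     # index muscle, index-associated sound
        ("FDI", little_sound): "Incongruent_FDI",  # index muscle, little-finger sound
        ("ADM", little_sound): "Congruent_ADM",    # little-finger muscle, its own sound
        ("ADM", index_sound): "Incongruent_ADM",   # little-finger muscle, index sound
    }
```

Because the mapping is fixed per participant, each muscle contributes exactly one congruent and one incongruent condition regardless of which counterbalancing group the participant was in.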
<p>Sounds were played through Etymotic ER3-10 ABR insert earphones (Compumedics USA, Charlotte, NC, USA), <italic>via</italic> the AMRJ 3.5 mm stereo port, and amplified to 75 dB or slightly lower if individual comfort levels were exceeded (i.e., sound levels were determined <italic>via</italic> the inbuilt MP3 Trigger printed circuit board).</p>
</sec>
<sec id="s2-5">
<title>EEG Setup and Data Extraction</title>
<p>During the EEG blocks, data were recorded using 12 Ag-AgCl sintered electrodes. EEG electrode sites comprised Fz, FCz, Pz, P3, and P4. Electrodes were also placed on both mastoids, and a ground electrode was placed on the forehead for off-line referencing. Both vertical and horizontal electrooculograms (EOGs) were recorded using electrodes above and below the left eye, and on the outer canthus of each eye. EEG data were obtained <italic>via</italic> a SynAmps RT system (Compumedics USA, Charlotte, NC, USA) and Curry 7.07xs (Compumedics USA, Charlotte, NC, USA). Data were sampled at 10 kHz, and impedance was kept below 5 k&#x003A9; for each electrode.</p>
<p>EEG and EOG data were analyzed offline in Curry 7.07xs (Compumedics USA, Charlotte, NC, USA). A 50 Hz notch filter and a band-pass filter between 0.5 and 15 Hz were applied. Data were re-referenced to the average of the left and right mastoids. To minimize the influence of eye blinks on the ERP, horizontal and vertical EOG data were corrected using Curry&#x02019;s covariate analysis tool.</p>
<p>Bad EEG periods exceeding &#x000B1;75 &#x003BC;V in amplitude were detected, and the 700 ms epochs around these (i.e., 200 ms prior to and 500 ms after the trigger) were removed from further analysis to obtain conservative EEG epochs. Epochs with a signal-to-noise ratio below 0.5 or above 2.5 in a 700 ms window time-locked to triggers were also excluded from further analysis. From a possible 7,114 epochs, 1,556 (21.9%) were removed.</p>
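The two rejection rules above can be sketched as a single predicate. This is an illustrative sketch, not the authors' Curry 7.07xs pipeline; the function name, the precomputed SNR input, and the per-sample amplitude check are our assumptions.

```python
def reject_epoch(epoch_uv, snr, amp_limit=75.0, snr_lo=0.5, snr_hi=2.5):
    """Return True if a 700 ms epoch should be excluded.

    epoch_uv: EEG samples (microvolts) from 200 ms before to
    500 ms after the trigger; snr: signal-to-noise ratio
    computed over that same window.
    """
    # Amplitude criterion: any sample beyond +/-75 microvolts
    if any(abs(v) > amp_limit for v in epoch_uv):
        return True
    # SNR criterion: exclude epochs below 0.5 or above 2.5
    return snr < snr_lo or snr > snr_hi
```

Applied over all candidate epochs, a rule set like this produced the study's 21.9% rejection rate (1,556 of 7,114 epochs).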
<p>For each participant, epochs were labeled for Sound (Congruent<sub>FDI</sub> or Congruent<sub>ADM</sub>), Block [(action or audition) LP 1, LP 2, LP 3, LP 4, Post-LP 3, Post-LP 4], and LP-C blocks (LP-C 1 and LP-C 2). Subsequent analyses of the epochs investigated the peak amplitude of the N1, P2, and N2 ERP components. These were examined within windows of 50&#x02013;150 ms (N1), 100&#x02013;200 ms (P2), and 151&#x02013;250 ms (N2) for each participant across each ERP recording.</p>
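Peak extraction within the stated windows can be sketched as follows. The function and window table are illustrative assumptions: we take the N1 and N2 peaks as the most negative value in their windows and the P2 as the most positive, which the article does not state explicitly.

```python
# Latency windows (ms from SOA) and polarities per component:
# -1 -> most negative value is the peak, +1 -> most positive
WINDOWS = {"N1": ((50, 150), -1),
           "P2": ((100, 200), +1),
           "N2": ((151, 250), -1)}

def peak_amplitude(times_ms, amps_uv, component):
    """Peak amplitude (microvolts) of one ERP component
    within its latency window."""
    (lo, hi), polarity = WINDOWS[component]
    in_win = [a for t, a in zip(times_ms, amps_uv) if lo <= t <= hi]
    return min(in_win) if polarity < 0 else max(in_win)
```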
</sec>
<sec id="s2-6">
<title>TMS Setup and Data Extraction</title>
<p>During TMS blocks, focal TMS pulses were delivered to the scalp over the left M1. A 70 mm figure-of-eight stimulation coil was used (Magstim Company, Whitland, UK), and the coil was connected to a Magstim 200 stimulator (Magstim Company, Whitland, UK). Using self-adhesive Ag-AgCl electrodes, TMS-induced MEPs were recorded simultaneously from the first dorsal interosseous (FDI) and abductor digiti minimi (ADM) muscles. A ground electrode was placed on the dorsal surface of the wrist (i.e., over the ulna). The EMG signal was amplified by a PowerLab/4SP (ADInstruments, Colorado Springs, CO, USA), and data were sampled <italic>via</italic> a FE135 Dual Bio-amp (ADInstruments, Colorado Springs, CO, USA). A band-pass filter between 0.3 and 1,000 Hz was applied, as well as a mains 50 Hz notch filter.</p>
<p>The site on the scalp that produced the largest median (peak-to-peak) MEP in five consecutive trials from the right-FDI while at rest was defined as M1. Stimulation to M1 determined the motor threshold (MT). MT was defined here as the stimulation intensity that evoked a median peak-to-peak MEP of approximately 1 mV; the acceptable range was between 0.8 and 1.3 mV. Once a suitable stimulator output had been determined (i.e., MEPs from M1 were within the 0.8&#x02013;1.3 mV range), recordings were obtained from 10 trials using the FDI muscle while the hand was at rest. If the median MEP across those 10 trials did not fall within the accepted range, the MT process began again. The stimulator output ranged from 33% to 60% [<italic>M</italic> = 46% (<italic>SD</italic> = 7.6%)] of the maximum stimulator output. We also used the Baseline blocks (B1, B2, and B3) to ensure no significant changes in background corticospinal excitability occurred. Corticospinal excitability across the paradigm was stable as measured <italic>via</italic> a mean of TMS Baseline blocks (see <xref ref-type="supplementary-material" rid="SM1">Supplementary Materials</xref> for test details).</p>
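The threshold search might be sketched as the loop below. Here `collect_meps` is a hypothetical stand-in for acquiring 10 resting-state FDI MEPs at a given stimulator output, and the stepwise ascending search order is an assumption; the article does not specify how the output was adjusted between attempts.

```python
from statistics import median

def find_mt(collect_meps, outputs=range(33, 61), lo=0.8, hi=1.3):
    """Return the first stimulator output (% of maximum) whose median
    peak-to-peak MEP over 10 resting trials falls in the accepted
    0.8-1.3 mV band, or None if no candidate output qualifies."""
    for out in outputs:
        meps = collect_meps(out, n_trials=10)  # 10 FDI trials at rest
        if lo <= median(meps) <= hi:
            return out
    return None  # restart the MT process
```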
<p>Due to the uniqueness and duration of swipe movements, we opted for a broad range of trigger-points to obtain MEPs during listening. We were unsure when peak AMR would be recorded during listening. It was foreseeable that the sensorimotor association might involve an internal mapping that encodes the sound with movement commencement, movement transition, swipe termination, or a variation of each (for related discussion, see Horv&#x000E1;th, <xref ref-type="bibr" rid="B43">2013</xref>). We could not, however, precisely determine each participant&#x02019;s swipe time, which would have helped calculate a suitable AMR time-window or trigger point. For example, one individual might produce slower swipes than another, suggesting a longer TMS trigger latency is appropriate to assess the sensorimotor association. Thus, a variety of <italic>static</italic> TMS-trigger time points were used to overcome this issue.</p>
<p>Focal-TMS pulses were delivered to M1 at 50, 150, 300, or 450 ms from SOA in each of the 48 trials (i.e., 12 pulses at each of 50, 150, 300, and 450 ms from SOA). While these triggers do not account for individual variability, we considered the AMR time-window suitable for swipes of 300+ ms in duration. Additionally, we considered these time points helpful for exploring basic questions about timing during the association process; that is, how the brain overcomes sensory delays while integrating a present sound with a past action (for discussion on this point, see Hanuschkin et al., <xref ref-type="bibr" rid="B37">2013</xref>; Giret et al., <xref ref-type="bibr" rid="B33">2014</xref>; Keysers and Gazzola, <xref ref-type="bibr" rid="B49">2014</xref>; Burgess et al., <xref ref-type="bibr" rid="B13">2017</xref>).</p>
<p>As is recommended for MEP data (Schmidt et al., <xref ref-type="bibr" rid="B75">2009</xref>), individual median peak-to-peak MEP amplitudes (mV) were extracted for each TMS block (Pre-LP 1, Pre-LP 2, Post-LP 1, Post-LP 2, B1, B2, and B3) across Muscle (FDI or ADM), Sound (Congruent<sub>FDI</sub> or Congruent<sub>ADM</sub>), and Time point (50, 150, 300, and 450 ms). Approximately 252 MEPs were obtained for each participant. Missing data points in baseline blocks, due to faulty TMS-based triggers, were replaced by the median MEPs of the remaining blocks for those participants (this required the alteration of two data points or 3.7% of Baseline MEPs collected across all participants). Also, one participant&#x02019;s Post-LP 1 block was corrupt, requiring the (presumably comparable) Post-LP 2 dataset to be used (i.e., alteration of 1.4% of total MEPs averaged across all participants).</p>
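The per-cell MEP summary described above can be sketched as the median of per-trial peak-to-peak amplitudes. The function names and the raw-trace input format are our assumptions.

```python
from statistics import median

def peak_to_peak(trace_mv):
    # Peak-to-peak amplitude of one EMG trace (mV)
    return max(trace_mv) - min(trace_mv)

def block_median_mep(traces):
    """Median peak-to-peak MEP across the trials in one
    Block x Muscle x Sound x Time point cell; the median is
    recommended over the mean for MEP data (Schmidt et al., 2009)."""
    return median(peak_to_peak(t) for t in traces)
```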
<p>To minimize their influence on tests of normality, extreme outliers at a sample level were reduced to one value above the next highest data point (Tabachnick and Fidell, <xref ref-type="bibr" rid="B81">2006</xref>). In this way, 1.2% of MEP recordings from the FDI muscle were altered, as was 1.9% of the ADM raw data. Tests of normality, histograms, and stem and leaf plots were inspected and were considered satisfactory for parametric data analyses.</p>
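One reading of this outlier-reduction rule is sketched below. The outlier criterion (`is_outlier`) and the size of the step above the highest retained value are assumptions, since the article defers both to Tabachnick and Fidell's procedure.

```python
def reduce_high_outliers(values, is_outlier, step=0.01):
    """Pull extreme high values down to `step` above the highest
    non-outlier data point, keeping them at the top of the
    distribution without distorting tests of normality."""
    kept = [v for v in values if not is_outlier(v)]
    ceiling = max(kept) + step
    return [ceiling if is_outlier(v) else v for v in values]
```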
</sec>
<sec id="s2-7">
<title>Statistical Analyses</title>
<sec id="s2-7-1">
<title>Testing Sensory Prediction</title>
<p>To assess if sensory prediction mechanisms are present, we investigated changes in N1, P2, and N2 peaks across action and audition stages. We examined each peak component separately <italic>via</italic> 2 (Finger: index or little) &#x000D7; 2 (Stage: Action or Audition) &#x000D7; 2 (Electrode: Fz or FCz) ANOVAs. A main effect for Stage would reveal sensory prediction mechanisms, with <italic>post hoc</italic> tests showing how ERP peaks are modulated by hearing the sounds across the action and audition stages.</p>
</sec>
<sec id="s2-7-2">
<title>Testing Sensorimotor Associations</title>
<p>To assess the development of AMR, a 2 (Finger: Index or Little) &#x000D7; 4 (Audition block: Pre-LP 1&#x02013;2 or Post-LP 1&#x02013;2) &#x000D7; 4 (Time point: 50, 150, 300, or 450) &#x000D7; 2 (Sound: Congruent<sub>FDI/ADM</sub> or Incongruent<sub>FDI/ADM</sub>) repeated-measures ANOVA was conducted on the normalized median peak-to-peak amplitude MEPs. This analysis compares Pre and Post-LP MEPs, which are recorded upon hearing congruent and incongruent sounds at a variety of time points. As stated, we were unsure where the largest AMR recordings would be revealed. Therefore, to show AMR, we expect the four-way ANOVA to reveal Audition block &#x000D7; Sound &#x000D7; Time point interactions. Subsequent <italic>post hoc</italic> tests should indicate larger MEPs in the Post-LP 1 and 2 blocks when the congruent sounds are heard at some time point in comparison to the incongruent sound MEPs at that time point. This effect, though, should not be present with the Pre-LP blocks (i.e., it is a <italic>trained</italic> effect).</p>
</sec>
<sec id="s2-7-3">
<title>Testing Sensory Prediction and Sensorimotor Association</title>
<p>Finally, we were interested in exploring the relationship between sensory prediction, sensorimotor association, and behavioral data. Therefore, we used Spearman&#x02019;s correlations to investigate the associations between: (a) EMG data obtained during the LP blocks; (b) N1; (c) P2; and (d) N2 ERP components also recorded during LP training; (e) MEPs from Post-LP blocks (i.e., we decided to omit pre-learning AMR data for the sake of clarity); and (f) N1; (g) P2; and (h) N2 ERP components recorded during the audition stage. Here, we expected some correlations between ERP and EMG data during action and MEPs during listening. This would indicate close ties between sensory prediction and sensorimotor association mechanisms.</p>
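Spearman's rho for any one of these pairings can be computed from rank differences. This minimal sketch assumes untied data; tied observations need average ranks, which SPSS handles internally.

```python
def spearman_rho(x, y):
    """Spearman's rank correlation for untied data:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```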
<p>All statistical analyses were carried out using SPSS 24 (IBM Corporation, Armonk, NY, USA) and analyses used a criterion of <italic>p</italic> &#x0003C; 0.05. All significant effects were investigated <italic>via</italic> follow-up ANOVA and pairwise comparisons (PCs) using a Bonferroni adjustment for multiple comparisons. Partial eta-squared effect sizes (<inline-formula><mml:math id="M5"><mml:mrow><mml:msubsup><mml:mi>&#x003B7;</mml:mi><mml:mtext>p</mml:mtext><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:math></inline-formula>) were calculated to estimate the magnitude of an effect. Finally, in an effort towards conciseness, reporting of statistics is limited. For all test results, see <xref ref-type="supplementary-material" rid="SM1">Supplementary Materials</xref>.</p>
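Partial eta-squared can be recovered directly from a reported F statistic and its degrees of freedom; a small sketch (the function name is ours):

```python
def partial_eta_squared(f, df_effect, df_error):
    """eta_p^2 = (F * df_effect) / (F * df_effect + df_error),
    algebraically equivalent to SS_effect / (SS_effect + SS_error)."""
    return (f * df_effect) / (f * df_effect + df_error)
```

For example, F(1,17) = 6.031 yields approximately 0.262, matching the effect size reported in the Results.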
</sec>
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>Results</title>
<sec id="s3-1">
<title>Testing Sensory Prediction</title>
<p>First, we investigated changes in sensory prediction. <xref ref-type="fig" rid="F3">Figure 3</xref> displays the grand-average ERP traces from SOA. Regarding the N1 peak, a three-way interaction between Finger, Stage, and Electrode was revealed (<italic>F</italic><sub>(1,17)</sub> = 6.031, <italic>p</italic> = 0.025, <inline-formula><mml:math id="M6"><mml:mrow><mml:msubsup><mml:mi>&#x003B7;</mml:mi><mml:mtext>p</mml:mtext><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.262) requiring further investigation (see <xref ref-type="supplementary-material" rid="SM1">Supplementary Materials</xref>). However, no main effects were present, suggesting action and audition peaks did not differ within this time window (50&#x02013;150 ms).</p>
<p>Regarding the P2 peak, ANOVA revealed a main effect for Stage as hypothesized (<italic>F</italic><sub>(1,17)</sub> = 4.471, <italic>p</italic> = 0.050, <inline-formula><mml:math id="M7"><mml:mrow><mml:msubsup><mml:mi>&#x003B7;</mml:mi><mml:mtext>p</mml:mtext><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.208). <italic>Post hoc</italic> PCs, which were Bonferroni corrected, indicated that the action stage produced larger P2 peaks (<italic>M</italic> = 2.704, <italic>SE</italic> = 0.653) than the audition stage (<italic>M</italic> = 1.438, <italic>SE</italic> = 0.486). This suggests that some sensory prediction outcomes during action are being reflected in this peak.</p>
<p>As hypothesized, a main effect for Stage was demonstrated with the N2 peak data (<italic>F</italic><sub>(1,17)</sub> = 15.296, <italic>p</italic> = 0.001, <inline-formula><mml:math id="M8"><mml:mrow><mml:msubsup><mml:mi>&#x003B7;</mml:mi><mml:mtext>p</mml:mtext><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.474). PCs (Bonferroni corrected) revealed audition generated larger (more negative) peaks (<italic>M</italic> = &#x02212;9.354<italic>, SE</italic> = 0.738) in comparison to action (<italic>M</italic> = &#x02212;6.676, <italic>SE</italic> = 0.435). This suggests that a predictive process is being undertaken during action which suppresses the incoming sounds.</p>
<p>Altogether, we found evidence for some sensory prediction outcomes during finger-swipe movements.</p>
</sec>
<sec id="s3-2">
<title>Testing Sensorimotor Association</title>
<p>Next, we determined if AMR developed (i.e., sensorimotor associations). Comparisons between Pre and Post-LP MEPs, which were recorded upon hearing congruent and incongruent sounds using the static time points, were explored.</p>
<p>The four-way ANOVA revealed a main effect for Time point (<italic>F</italic><sub>(3,51)</sub> = 4.809, <italic>p</italic> = 0.005, <inline-formula><mml:math id="M9"><mml:mrow><mml:msubsup><mml:mi>&#x003B7;</mml:mi><mml:mtext>p</mml:mtext><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.220). PCs (Bonferroni corrected) indicated the MEPs at the 50 ms time point (<italic>M</italic> = 0.681, <italic>SE</italic> = 0.060) were significantly larger than the 150 ms time point (<italic>M</italic> = 0.612, <italic>SE</italic> = 0.051; <italic>p</italic> = 0.003). This suggests motor-brain activity is high during the early stages of swipe-sound listening.</p>
<p>ANOVA also revealed a main effect for Audition blocks (<italic>F</italic><sub>(3,51)</sub> = 3.799, <italic>p</italic> = 0.016, <inline-formula><mml:math id="M10"><mml:mrow><mml:msubsup><mml:mi>&#x003B7;</mml:mi><mml:mtext>p</mml:mtext><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.183). Although PCs did not survive Bonferroni corrections, estimated means indicated that Post-LP 1 MEPs were, unexpectedly, the smallest recorded [Pre-LP 1: <italic>M</italic> = 0.708 (<italic>SE</italic> = 0.076); Pre-LP 2: <italic>M</italic> = 0.704 (<italic>SE</italic> = 0.068); Post-LP 1: <italic>M</italic> = 0.538 (<italic>SE</italic> = 0.063); Post-LP 2: <italic>M</italic> = 0.645 (<italic>SE</italic> = 0.060)]. When both congruent and incongruent MEPs are examined, the reduction in MEP size immediately post-training is in direct contrast with our expectations. Indeed, it will be difficult to show AMR across pre- and post-training comparisons if, overall, post-training MEPs are reduced when compared with baseline measurements (see <xref ref-type="supplementary-material" rid="SM1">Supplementary Materials</xref> for discussion of repetition suppression during the LP blocks, which might explain this unanticipated finding).</p>
<p>Regarding AMR illustration, no Audition block &#x000D7; Sound &#x000D7; Time point interactions were present. This suggests that AMR did not develop. Trained sounds did not increase finger-corticospinal networks beyond baseline measures when all blocks and time points are considered.</p>
<p>However, we were concerned with the use of static TMS trigger points, which do not account for individual variability and learning. We suspected these triggers might be censoring the AMR illustration. If sensorimotor associations are experience dependent, and a participant learns to complete the swipe in 450 ms, then the largest MEPs for the congruent sounds might be generated at this 450 ms time point. Another participant, however, may produce a swipe duration of 300 ms; the 300 ms time point might then be better suited to revealing AMR for this person. Still others might encode the swipe initiation with the sound, which suggests the 50 ms time point might be suitable. Therefore, examining the MEPs without regard for individual variability could conceal the AMR illustration. Combined with the surprising reduction in MEPs immediately post-training, this perhaps explains why AMR was not revealed.</p>
<p>Accordingly, we examined changes in MEPs across congruent and incongruent sounds for both fingers <italic>via</italic> pre and post-learning blocks separately (for a related examination, see Ticini et al., <xref ref-type="bibr" rid="B83">2011</xref>, <xref ref-type="bibr" rid="B82">2019</xref>). Furthermore, we selected a single time point to overcome the stated challenges with individual variability. We supposed that if (a) individuals learn to associate a finger movement with a sound (e.g., the index finger with the Congruent<sub>FDI</sub> sound), and (b) behavioral learning variability causes changes in the timing of the sensorimotor association process, then (c) we should explore the inhibition of the incongruent sound MEPs relative to the maximum congruent sound MEP at a given TMS time point. In other words, we were interested in the maximal dissociation of congruent vs. incongruent sound-generated MEPs across post-blocks within a time point.</p>
<p>We determined the time point at which a participant&#x02019;s maximal congruent sound MEP in either Post-LP block was recorded. This time point then became the <italic>guide</italic>. We obtained the congruent and incongruent MEPs for both Post-LP blocks at this time point only. A 2 (<italic>Sound</italic>: MAX-congruent and incongruent) &#x000D7; 2 (<italic>Block</italic>: Post-LP 1 and 2) ANOVA was run for each muscle. Demonstration of AMR would show that listening to incongruent sounds generates smaller, perhaps inhibited, MEPs when compared to congruent, trained sounds.</p>
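The guide selection can be sketched as follows; the data layout (a mapping from (block, time point) to the participant's median congruent-sound MEP) is an assumption for illustration.

```python
TIME_POINTS = (50, 150, 300, 450)  # ms from SOA
POST_BLOCKS = ("Post-LP 1", "Post-LP 2")

def guide_time_point(congruent_mep):
    """Time point (ms) of the participant's largest congruent-sound
    MEP in either Post-LP block; congruent_mep maps (block, tp) -> mV.
    Congruent and incongruent MEPs are then compared at this
    time point only."""
    return max(TIME_POINTS,
               key=lambda tp: max(congruent_mep[(b, tp)] for b in POST_BLOCKS))
```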
<p>Provided the AMR illustration is time-locked to maximal trained muscle activity during audition, the index finger ANOVA with post-learning blocks demonstrated a significant main effect for Sound (<italic>F</italic><sub>(1,17)</sub> = 26.987, <italic>p</italic> &#x0003C; 0.001, <inline-formula><mml:math id="M11"><mml:mrow><mml:msubsup><mml:mi>&#x003B7;</mml:mi><mml:mtext>p</mml:mtext><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.614). PCs (Bonferroni corrected) showed hearing the congruent sound produced larger MEPs (<italic>M</italic> = 1.404, <italic>SE</italic> = 0.143) than the incongruent sound within a time point (<italic>M</italic> = 0.744, <italic>SE</italic> = 0.101). This indicates that AMR develops, and trained sounds generate larger MEPs in corticospinal tracts than those recorded upon hearing untrained (incongruent) sounds.</p>
<p>There was also a significant main effect for Post-LP block (<italic>F</italic><sub>(1,17)</sub> = 12.322, <italic>p</italic> = 0.003, <inline-formula><mml:math id="M12"><mml:mrow><mml:msubsup><mml:mi>&#x003B7;</mml:mi><mml:mtext>p</mml:mtext><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.420). PCs (Bonferroni corrected) revealed the Post-LP 1 block generated smaller MEPs (<italic>M</italic> = 0.847, <italic>SE</italic> = 0.125) than those recorded within the Post-LP 2 (<italic>M</italic> = 1.301, <italic>SE</italic> = 0.123). Perhaps this modulation of MEPs across blocks reveals a memory consolidation period, which is facilitated by the TMS pulse and sound playback. Sound playback during TMS to M1 over the FDI region could aid the formed sensorimotor association. In other words, Post-LP block 1 might act like another training block (although see the &#x0201C;Discussion&#x0201D; section for a caveat to this explanation).</p>
<p>The same procedure was followed for the little finger. Despite using the higher threshold muscle to generate the MT, ANOVA revealed a significant main effect for Sound (<italic>F</italic><sub>(1,17)</sub> = 19.062, <italic>p</italic> &#x0003C; 0.001, <inline-formula><mml:math id="M13"><mml:mrow><mml:msubsup><mml:mi>&#x003B7;</mml:mi><mml:mtext>p</mml:mtext><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:math></inline-formula> = 0.529). Bonferroni corrected PCs indicated the congruent sound produced larger MEPs (<italic>M</italic> = 0.313, <italic>SE</italic> = 0.234) than those recorded when the incongruent sound was played (<italic>M</italic> = 0.234, <italic>SE</italic> = 0.034). These data support the index finger AMR illustrations, and suggest sensorimotor associations developed after training.</p>
<p>To limit concerns with <italic>post hoc</italic> statistical biases, we also used each individual&#x02019;s congruent guide time point to explore the Pre-LP MEPs. For a convincing AMR illustration, the post-training disassociation between congruent and incongruent sound-derived MEPs should not be present in the Pre-LP blocks.</p>
<p>Indeed, separate ANOVAs for each finger did not reveal any main effects or interactions when the Pre-LP MEPs were examined using the guide time point. As shown in <xref ref-type="fig" rid="F4">Figure 4</xref>, a disassociation between congruent and incongruent sound MEPs is only present after learning. This indicates that AMR is a trained effect. Together, this suggests that a bidirectional sensorimotor association developed.</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p>Maximal disassociation of congruent and incongruent sounds across fingers. Left panel: provided the auditory-motor resonance (AMR) illustration is time-locked to the maximal trained motor evoked potential (MEP) time point, a disassociation between congruent index finger MEPs and the incongruent sound-derived MEPs is present after training. This effect, indicating congruent sounds generate larger MEPs than incongruent sounds at an individual time point, is not revealed in the Pre-LP blocks before learning takes place. Right panel: the trained disassociation is also present with MEPs recorded from the little finger [significance levels for <italic>congruency</italic> differences are determined by one-way ANOVAs within a block (see <xref ref-type="supplementary-material" rid="SM1">Supplementary Materials</xref> for test details); *<italic>p</italic> &#x0003C; 0.05, **<italic>p</italic> &#x0003C; 0.001; error bars represent standard error of the mean].</p></caption>
<graphic xlink:href="fnhum-13-00215-g0004.tif"/>
</fig>
</sec>
<sec id="s3-3">
<title>Testing the Relationship Between Sensory Prediction and Sensorimotor Association</title>
<p>Having established sensory prediction mechanisms during action and AMR during audition, we undertook nonparametric Spearman correlational analyses to explore how these mechanisms are putatively associated. First, negative correlations between the maximum Post-LP 1 MEPs in the index finger (i.e., audition) and N1 peak data during the little finger-swipe (i.e., action) are present (Fz, <italic>r</italic> = &#x02212;0.655, <italic>p</italic> = 0.003; FCz, <italic>r</italic> = &#x02212;0.544, <italic>p</italic> = 0.020). This relationship is mirrored at the Fz electrode with the Post-LP 2 MEPs (<italic>r</italic> = &#x02212;0.598, <italic>p</italic> = 0.009). Also, the maximum Post-LP 1 MEPs in the index finger and the P2 peak during a little finger-swipe are negatively correlated (Fz, <italic>r</italic> = &#x02212;0.544, <italic>p</italic> = 0.020; FCz, <italic>r</italic> = &#x02212;0.610, <italic>p</italic> = 0.007). These data suggest that large index finger MEPs during congruent sound listening are associated with recordings of larger (i.e., less suppressed and more negative) ERP data during the little finger-swipe. Together, this highlights the close ties between a sensorimotor association and sensory prediction during action learning.</p>
<p>In support of this close relationship, behavioral activity and sensory prediction markers also demonstrate some correlation. EMG activity in the index finger during the swipe is negatively correlated with the N1 peak data during audition of the incongruent little finger-sound at Fz (<italic>r</italic> = &#x02212;0.513, <italic>p</italic> = 0.030) and FCz electrodes (<italic>r</italic> = &#x02212;0.548, <italic>p</italic> = 0.019). It would seem that efficient index-swipe movements are associated with the absence of early sensory prediction data when hearing the different (related) sound.</p>
<p>Therefore, negative correlations across sensory prediction, sensorimotor association, and behavioral data highlight an interdependent nature of these components. Simply put, the disassociation of closely related motor control plans (e.g., sensorimotor associations) might be achieved <italic>via</italic> sensory prediction processes, which are learnt during behavior.</p>
</sec>
</sec>
<sec sec-type="discussion" id="s4">
<title>Discussion</title>
<p>Few studies have explored how sensorimotor associations develop by way of sensory predictions within a single paradigm. Here, we combined a bespoke device with TMS, EMG, and EEG techniques to explore how these study components were correlated. As expected, our data show increases in P2 and suppression of N2 peaks during action, demonstrating some aspects of sensory prediction outcomes. Also, time-locked AMR disassociations are present, which show that congruent sounds generate larger MEPs than hearing incongruent sounds. These disassociations are considered here to represent precise sensorimotor associations and only appear after learning. Finally, novel findings show that negative correlations between MEPs after learning and ERP data during action are present. Taken together, these results might suggest that sensory prediction mechanisms fine-tune sensorimotor associations, perhaps in line with an internal modeling account of sensorimotor learning.</p>
<p>First, sensory prediction mechanisms are present during action. The N2 peak-component around 200 ms was suppressed during action. Typically, modulation of the N2 is thought to reflect higher-order conflict monitoring (Folstein and Van Petten, <xref ref-type="bibr" rid="B22">2008</xref>), and it is often shown in response to an auditory violation (Kujala et al., <xref ref-type="bibr" rid="B53">2007</xref>). For example, larger N2 components are related to hearing deviant musical notes embedded in known melodies (Mathias et al., <xref ref-type="bibr" rid="B55">2015</xref>, <xref ref-type="bibr" rid="B56">2016</xref>). Here, however, larger N2 peaks were evident during audition than during action, even though no deviants were heard during sound playback. To reconcile, some suggest the amount of suppression of the N2 peak is contingent upon a comparison between a memory trace and the reafferent information (N&#x000E4;&#x000E4;t&#x000E4;nen et al., <xref ref-type="bibr" rid="B62">2007</xref>). So, in the context of conflict monitoring or auditory violation, the N2 peak could represent feedback from the reafferent comparison.</p>
<p>Taken further, if the N2 peak represents the amount of <italic>comparative feedback</italic>, does this explain why the earlier P2 peak, approximately 150 ms, was increased during the action? Conceivably, the relatively long action has been able to draw out the sensory prediction outcomes. Perhaps, then, the P2 peak is revealing the comparison or even the prediction itself, rather than some type of signal to maintain the sensory representation during and following suppression (Wang et al., <xref ref-type="bibr" rid="B87">2014</xref>). In doing so, this might explain why the P2 component here was increased. To clarify, sensory predictions should be generated first, before comparison with reafferent sensory stimuli. The comparison should then produce some feedback. Therefore, a sensory prediction mechanism should have three main processes: prediction, comparison, and feedback.</p>
<p>In a practical sense, a swipe movement is made, and an epoch of EEG activity is recorded. Simultaneously, a prediction is made (e.g., expect the index swipe-sound). Meanwhile, motor preparations for the swipe-termination component (e.g., lift-off) are initialized and executed. When available, this prediction or a negative image copy (Ramaswami, <xref ref-type="bibr" rid="B70">2014</xref>; Barron et al., <xref ref-type="bibr" rid="B6">2017</xref>; Enikolopov et al., <xref ref-type="bibr" rid="B20">2018</xref>) is compared with the reafferent auditory stimuli (e.g., a comparison will determine if that was the index swipe-sound). Finally, feedback is provided to motor control areas. We suspect this feedback is in the form of an N2 peak here. If the N2 represents feedback from the predictive comparison, we wonder whether the P2 component might then represent a preceding stage of the prediction process. This could be either the prediction (centrally located) or even the comparison with sensory reafference (perhaps located in auditory-parietal regions).</p>
<p>While we acknowledge that assigning specific processes to individual ERP components is difficult (for a related discussion on complexities of ERP analyses, see Horv&#x000E1;th, <xref ref-type="bibr" rid="B45">2015</xref>; Spriggs et al., <xref ref-type="bibr" rid="B78">2018</xref>), we suggest the long action and effect might have teased out some of these separate prediction stages. In contrast, fast actions like a button press (B&#x000E4;ss et al., <xref ref-type="bibr" rid="B5">2008</xref>; Baess et al., <xref ref-type="bibr" rid="B4">2011</xref>; Ford et al., <xref ref-type="bibr" rid="B24">2014</xref>) or short speech sounds (Heinks-Maldonado et al., <xref ref-type="bibr" rid="B39">2005</xref>) might conflate these separate sensory prediction processes into a single N1 outcome. In that case, the time window of action, prediction, and feedback-response is so small that EEG recordings might not be able to show the underlying computations. In any case, we find evidence for some sensory prediction processes during action.</p>
<p>Sensorimotor associations were also present. As hypothesized, when the learned sounds are heard, activation of the related corticospinal circuits involved in the associated actions is revealed. This is supported by published data regarding sensorimotor association that indicate training can lead to AMR (Butler et al., <xref ref-type="bibr" rid="B15">2011</xref>; Butler and James, <xref ref-type="bibr" rid="B14">2013</xref>; D&#x02019;Ausilio et al., <xref ref-type="bibr" rid="B19">2014</xref>; Furukawa et al., <xref ref-type="bibr" rid="B30">2017</xref>).</p>
<p>More specifically, the AMR response was shown <italic>via</italic> a time-locked dissociation between congruent and incongruent sound-derived MEPs. When training generates the largest AMR response for the congruent sound, hearing an incongruent sound generates less activation in the motor circuit. This type of AMR illustration is also supported by other published works on AMR congruency (Ticini et al., <xref ref-type="bibr" rid="B83">2011</xref>, <xref ref-type="bibr" rid="B82">2019</xref>).</p>
<p>Importantly, this type of AMR disassociation is not present before learning. It develops because of sensorimotor experience and appears predicated on behavioral variability, albeit <italic>via</italic> a <italic>post hoc</italic> and simple delineation of individual differences. That is, the disassociation between congruent and incongruent sounds is not revealed <italic>via</italic> the static TMS time points. Only when an individual time point for the maximally trained (congruent) response is used as an index or guide does AMR appear after learning and not before. Given this, it would seem the sensorimotor association here is more complicated than just a broad cause and effect relationship between motor and sound information. Add to this the sensory prediction data regarding different stages of the predictive process, and we suspect an internal modeling process might be generating the sensorimotor associations.</p>
<p>Indeed, other interpretations can explain how sensory and motor information are associated, such as an associative account (for review, see Cook et al., <xref ref-type="bibr" rid="B17">2014</xref>), an ideomotor perspective (for review, see Herwig, <xref ref-type="bibr" rid="B40">2015</xref>), or more contemporary prediction theories (Friston, <xref ref-type="bibr" rid="B26">2010</xref>, <xref ref-type="bibr" rid="B27">2011</xref>; Friston et al., <xref ref-type="bibr" rid="B28">2011</xref>, <xref ref-type="bibr" rid="B29">2017</xref>; Pickering and Clark, <xref ref-type="bibr" rid="B68">2014</xref>). However, we focus here on a conventional internal modeling perspective.</p>
<p>Simply put, an internal model mimics the behavior of actions and their consequences within the CNS (for review, see Miall et al., <xref ref-type="bibr" rid="B60">1993</xref>; Miall and Wolpert, <xref ref-type="bibr" rid="B58">1995</xref>; Wolpert et al., <xref ref-type="bibr" rid="B92">1995</xref>, <xref ref-type="bibr" rid="B91">2011</xref>; Wolpert and Kawato, <xref ref-type="bibr" rid="B90">1998</xref>; Wolpert and Ghahramani, <xref ref-type="bibr" rid="B89">2000</xref>; Grush, <xref ref-type="bibr" rid="B35">2004</xref>; Burgess et al., <xref ref-type="bibr" rid="B13">2017</xref>). Traditionally, such models consist of an inverse, or controller, component that causally maps sensory consequences onto the actions and motor commands that produce them, and a forward component that generates predictions of upcoming sensory change given the outgoing motor commands. Principally, the combination of inverse and forward components helps overcome the temporal delays inherent in large sensorimotor loops when integrating sound and action (Wolpert et al., <xref ref-type="bibr" rid="B92">1995</xref>; Wolpert and Kawato, <xref ref-type="bibr" rid="B90">1998</xref>). It does so by reducing error in the underlying component mappings through a comparison between predicted and produced sensory stimuli; in turn, this feedback updates both model components. Over time, fewer cortical resources are needed to produce an action as the models increase in accuracy and efficiency.</p>
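The forward-model error-correction loop described above can be sketched computationally. This is a minimal illustrative sketch, not the authors' model or analysis code: we assume a hypothetical linear action-to-sound mapping as the environment, and a forward model that learns that mapping by repeatedly comparing its prediction against the produced (reafferent) stimulus and updating from the error.

```python
# Minimal sketch of a forward model refined by prediction error.
# The "plant" (the true action-to-sound mapping) and all parameter
# values are illustrative assumptions, not taken from the study.

def plant(command):
    """True (initially unknown) sensory consequence of a motor command."""
    return 2.0 * command + 1.0  # hypothetical linear action-to-sound mapping

def train_forward_model(commands, lr=0.1, epochs=200):
    """Fit predicted = w * command + b by the delta rule on prediction error."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for u in commands:
            predicted = w * u + b          # forward model's expectation
            error = plant(u) - predicted   # comparison with reafferent feedback
            w += lr * error * u            # feedback updates the model,
            b += lr * error                # reducing error on future trials
    return w, b

# After training, predictions closely match actual sensory consequences,
# so less corrective processing is needed per action.
w, b = train_forward_model([0.0, 0.5, 1.0, 1.5])
```

Under these assumptions, `w` and `b` converge toward the plant's true parameters (2.0 and 1.0), mirroring the idea that sensory predictions become more accurate and efficient with sensorimotor experience.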
<p>In support of the interdependence of components expected under an internal modeling account, recent studies have shown that sensory predictions are involved in activating sensorimotor associations and motor representations (Gordon et al., <xref ref-type="bibr" rid="B34">2018</xref>). Indeed, Stephan et al. (<xref ref-type="bibr" rid="B79">2018</xref>) demonstrated that anticipatory MEPs were produced upon hearing sounds in a musical sequence after learning (i.e., sound sequences automatically cued future movement in specific finger muscles). Others have implicated the cerebellum in hand-reaching experiments in which inverse and forward models work in tandem to support behavioral adaptation (Honda et al., <xref ref-type="bibr" rid="B42">2018</xref>). Altogether, we suspect the <italic>post hoc</italic> explorations were necessary here to find AMR because sensorimotor associations are highly sensitive to individual timing and temporal delays during action learning. This type of association relies on feedback from reafferent comparisons <italic>via</italic> sensory prediction, as shown by the peak modulation and correlational data across study components. As such, we suspect sensory predictions fine-tune sensorimotor associations during learning, as expected under internal modeling.</p>
<p>Finally, we recognize the complexity of demonstrating these relationships across the sensorimotor divide and concede some methodological issues, which future investigations may wish to consider. Measuring AMR should accommodate individual learning variability. More detailed indices of the behavior (e.g., swipe time and distance) should be recorded and used to inform, for example, TMS-triggering schedules in real time. In turn, more accurate recordings of the AMR time course might mitigate problems with <italic>post hoc</italic> time point selection. More accurate TMS triggers might also help explain the decrease in Post-LP 1 MEPs recorded immediately after training. Some TMS studies have indicated that repetitive finger movements can decrease MEPs when measured immediately after training (i.e., 1&#x02013;2 min), even without fatigue (McDonnell and Ridding, <xref ref-type="bibr" rid="B57">2006</xref>; Avanzino et al., <xref ref-type="bibr" rid="B3">2011</xref>; Kluger et al., <xref ref-type="bibr" rid="B51">2012</xref>; Miyaguchi et al., <xref ref-type="bibr" rid="B61">2017</xref>). Although we applied a 4-min break between training and the TMS assessment during listening, in addition to the obligatory 3 s break between swipes, this might not have been long enough to overcome the supposed <italic>post-exercise depression</italic> of MEPs when using static time points.</p>
<p>Additionally, changes in the movement should also trigger the EEG equipment so that the protocol can determine when an <italic>error</italic> occurred. For instance, flagging a movement that falls outside a participant&#x02019;s typical response may help clarify more precisely how behavior affects sensory prediction, which in turn affects AMR development. Alternatively, it might show how these prediction processes develop during learning (e.g., what happens to ERP peak modulation as behavior shifts from atypical to typical movements). Similarly, future sensory prediction studies might wish to examine motor-related potentials more closely, without subtracting the motor trace from the ERP; doing so might make the brain dynamics of sensory prediction during action more tractable. Future studies might also consider whether differences in stimulus sequences and inter-stimulus intervals across the action and audition stages affect the ERP traces. Indeed, many questions remain when analyzing sensory prediction with EEG (Horv&#x000E1;th, <xref ref-type="bibr" rid="B45">2015</xref>). Finally, there is evidence that the menstrual cycle may affect motor cortex excitability measured <italic>via</italic> TMS (Hattemer et al., <xref ref-type="bibr" rid="B38">2007</xref>; Pellegrini et al., <xref ref-type="bibr" rid="B65">2018</xref>), and future studies should take this into account.</p>
<p>The problem with attempting to measure these internal modeling mechanisms in humans goes beyond the simple inferences afforded by <italic>gross</italic> or system-level recordings. Instead, methodological protocols and technologies that can measure, record, and reflect these processes as they develop should be used. As is, we can only assume that our data reflect these modeling processes rather than something more straightforward.</p>
</sec>
<sec id="s5">
<title>Summary</title>
<p>Overall, we have documented putative sensorimotor association, or AMR, development from a sensory prediction and internal modeling perspective. In the present study, sensory prediction indices are present in the form of enhanced P2 and suppressed N2 peaks during action. These might represent different stages of the prediction process. Also, novel sensorimotor associations develop and appear tuned. Once time-locked and within a TMS time point, hearing congruent sounds generates larger MEPs than those recorded during incongruent sound listening. Importantly, these dissociations are not present before learning, suggesting that AMR and sensorimotor associations are experience dependent. Finally, there appears to be a relationship between the strength of a sensorimotor association measured during listening and how a related, yet incongruent, sound is predicted during action. In other words, sensory predictions seem to affect how precisely a sensorimotor action is encoded. While future investigations may wish to examine behavioral indices further, we consider our data to represent a preliminary step towards understanding how, and perhaps why, sensory signals activate motor brain regions.</p>
</sec>
<sec id="s6">
<title>Ethics Statement</title>
<p>All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and national research committee, and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. The research was approved by Deakin University Human Research Ethics Committee (2015-034).</p>
</sec>
<sec id="s7">
<title>Author Contributions</title>
<p>JB and PE: experiment design. JB, BM, and CM: data collection. JB: data analysis. JB, GC, JL, and PE: wrote the article.</p>
</sec>
<sec id="s8">
<title>Conflict of Interest Statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</body>
<back>
<ack>
<p>A special thanks to our colleague, Dr Wei-Peng Teo, for his helpful comments regarding manuscript accessibility.</p>
</ack>
<fn-group>
<fn fn-type="financial-disclosure">
<p><bold>Funding.</bold> PE is supported by a Future Fellowship from the Australian Research Council (FT160100077). CM is supported by an Australian government Research Training Program scholarship.</p>
</fn>
</fn-group>
<sec sec-type="supplementary material" id="s9">
<title>Supplementary Material</title>
<p>The Supplementary Material for this article can be found online at: <ext-link ext-link-type="uri" xlink:href="https://www.frontiersin.org/articles/10.3389/fnhum.2019.00215/full&#x00023;supplementary-material">https://www.frontiersin.org/articles/10.3389/fnhum.2019.00215/full&#x00023;supplementary-material</ext-link></p>
<supplementary-material xlink:href="Table_1.pdf" id="SM1" mimetype="application/pdf" xmlns:xlink="http://www.w3.org/1999/xlink"/>
</sec>
<ref-list>
<title>References</title>
<ref id="B1"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Aglioti</surname> <given-names>S. M.</given-names></name> <name><surname>Pazzaglia</surname> <given-names>M.</given-names></name></person-group> (<year>2010</year>). <article-title>Representing actions through their sound</article-title>. <source>Exp. Brain Res.</source> <volume>206</volume>, <fpage>141</fpage>&#x02013;<lpage>151</lpage>. <pub-id pub-id-type="doi">10.1007/s00221-010-2344-x</pub-id><pub-id pub-id-type="pmid">20602092</pub-id></citation></ref>
<ref id="B2"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Aliu</surname> <given-names>S. O.</given-names></name> <name><surname>Houde</surname> <given-names>J. F.</given-names></name> <name><surname>Nagarajan</surname> <given-names>S. S.</given-names></name></person-group> (<year>2009</year>). <article-title>Motor-induced suppression of the auditory cortex</article-title>. <source>J. Cogn. Neurosci.</source> <volume>21</volume>, <fpage>791</fpage>&#x02013;<lpage>802</lpage>. <pub-id pub-id-type="doi">10.1162/jocn.2009.21055</pub-id><pub-id pub-id-type="pmid">18593265</pub-id></citation></ref>
<ref id="B3"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Avanzino</surname> <given-names>L.</given-names></name> <name><surname>Tacchino</surname> <given-names>A.</given-names></name> <name><surname>Abbruzzese</surname> <given-names>G.</given-names></name> <name><surname>Quartarone</surname> <given-names>A.</given-names></name> <name><surname>Ghilardi</surname> <given-names>M. F.</given-names></name> <name><surname>Bonzano</surname> <given-names>L.</given-names></name> <etal/></person-group>. (<year>2011</year>). <article-title>Recovery of motor performance deterioration induced by a demanding finger motor task does not follow cortical excitability dynamics</article-title>. <source>Neuroscience</source> <volume>174</volume>, <fpage>84</fpage>&#x02013;<lpage>90</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroscience.2010.11.008</pub-id><pub-id pub-id-type="pmid">21075172</pub-id></citation></ref>
<ref id="B4"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Baess</surname> <given-names>P.</given-names></name> <name><surname>Horv&#x000E1;th</surname> <given-names>J.</given-names></name> <name><surname>Jacobsen</surname> <given-names>T.</given-names></name> <name><surname>Schr&#x000F6;ger</surname> <given-names>E.</given-names></name></person-group> (<year>2011</year>). <article-title>Selective suppression of self-initiated sounds in an auditory stream: an ERP study</article-title>. <source>Psychophysiology</source> <volume>48</volume>, <fpage>1276</fpage>&#x02013;<lpage>1283</lpage>. <pub-id pub-id-type="doi">10.1111/j.1469-8986.2011.01196.x</pub-id><pub-id pub-id-type="pmid">21449953</pub-id></citation></ref>
<ref id="B5"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>B&#x000E4;ss</surname> <given-names>P.</given-names></name> <name><surname>Jacobsen</surname> <given-names>T.</given-names></name> <name><surname>Schr&#x000F6;ger</surname> <given-names>E.</given-names></name></person-group> (<year>2008</year>). <article-title>Suppression of the auditory N1 event-related potential component with unpredictable self-initiated tones: evidence for internal forward models with dynamic stimulation</article-title>. <source>Int. J. Psychophysiol.</source> <volume>70</volume>, <fpage>137</fpage>&#x02013;<lpage>143</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijpsycho.2008.06.005</pub-id><pub-id pub-id-type="pmid">18627782</pub-id></citation></ref>
<ref id="B6"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Barron</surname> <given-names>H. C.</given-names></name> <name><surname>Vogels</surname> <given-names>T. P.</given-names></name> <name><surname>Behrens</surname> <given-names>T. E.</given-names></name> <name><surname>Ramaswami</surname> <given-names>M.</given-names></name></person-group> (<year>2017</year>). <article-title>Inhibitory engrams in perception and memory</article-title>. <source>Proc. Natl. Acad. Sci. U S A</source> <volume>114</volume>, <fpage>6666</fpage>&#x02013;<lpage>6674</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.1701812114</pub-id><pub-id pub-id-type="pmid">28611219</pub-id></citation></ref>
<ref id="B7"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bartha-Doering</surname> <given-names>L.</given-names></name> <name><surname>Deuster</surname> <given-names>D.</given-names></name> <name><surname>Giordano</surname> <given-names>V.</given-names></name> <name><surname>Am Zehnhoff-Dinnesen</surname> <given-names>A.</given-names></name> <name><surname>Dobel</surname> <given-names>C.</given-names></name></person-group> (<year>2015</year>). <article-title>A systematic review of the mismatch negativity as an index for auditory sensory memory: from basic research to clinical and developmental perspectives</article-title>. <source>Psychophysiology</source> <volume>52</volume>, <fpage>1115</fpage>&#x02013;<lpage>1130</lpage>. <pub-id pub-id-type="doi">10.1111/psyp.12459</pub-id><pub-id pub-id-type="pmid">26096130</pub-id></citation></ref>
<ref id="B8"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Behroozmand</surname> <given-names>R.</given-names></name> <name><surname>Ibrahim</surname> <given-names>N.</given-names></name> <name><surname>Korzyukov</surname> <given-names>O.</given-names></name> <name><surname>Robin</surname> <given-names>D. A.</given-names></name> <name><surname>Larson</surname> <given-names>C. R.</given-names></name></person-group> (<year>2014</year>). <article-title>Left-hemisphere activation is associated with enhanced vocal pitch error detection in musicians with absolute pitch</article-title>. <source>Brain Cogn.</source> <volume>84</volume>, <fpage>97</fpage>&#x02013;<lpage>108</lpage>. <pub-id pub-id-type="doi">10.1016/j.bandc.2013.11.007</pub-id><pub-id pub-id-type="pmid">24355545</pub-id></citation></ref>
<ref id="B9"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Behroozmand</surname> <given-names>R.</given-names></name> <name><surname>Liu</surname> <given-names>H.</given-names></name> <name><surname>Larson</surname> <given-names>C. R.</given-names></name></person-group> (<year>2011</year>). <article-title>Time-dependent neural processing of auditory feedback during voice pitch error detection</article-title>. <source>J. Cogn. Neurosci.</source> <volume>23</volume>, <fpage>1205</fpage>&#x02013;<lpage>1217</lpage>. <pub-id pub-id-type="doi">10.1162/jocn.2010.21447</pub-id><pub-id pub-id-type="pmid">20146608</pub-id></citation></ref>
<ref id="B10"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bendixen</surname> <given-names>A.</given-names></name> <name><surname>SanMiguel</surname> <given-names>I.</given-names></name> <name><surname>Schr&#x000F6;ger</surname> <given-names>E.</given-names></name></person-group> (<year>2012</year>). <article-title>Early electrophysiological indicators for predictive processing in audition: a review</article-title>. <source>Int. J. Psychophysiol.</source> <volume>83</volume>, <fpage>120</fpage>&#x02013;<lpage>131</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijpsycho.2011.08.003</pub-id><pub-id pub-id-type="pmid">21867734</pub-id></citation></ref>
<ref id="B11"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bock</surname> <given-names>O.</given-names></name> <name><surname>Thomas</surname> <given-names>M.</given-names></name> <name><surname>Grigorova</surname> <given-names>V.</given-names></name></person-group> (<year>2005</year>). <article-title>The effect of rest breaks on human sensorimotor adaptation</article-title>. <source>Exp. Brain Res.</source> <volume>163</volume>, <fpage>258</fpage>&#x02013;<lpage>260</lpage>. <pub-id pub-id-type="doi">10.1007/s00221-005-2231-z</pub-id><pub-id pub-id-type="pmid">15754173</pub-id></citation></ref>
<ref id="B12"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Boutonnet</surname> <given-names>B.</given-names></name> <name><surname>Lupyan</surname> <given-names>G.</given-names></name></person-group> (<year>2015</year>). <article-title>Words jump-start vision: a label advantage in object recognition</article-title>. <source>J. Neurosci.</source> <volume>35</volume>, <fpage>9329</fpage>&#x02013;<lpage>9335</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.5111-14.2015</pub-id><pub-id pub-id-type="pmid">26109657</pub-id></citation></ref>
<ref id="B13"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Burgess</surname> <given-names>J. D.</given-names></name> <name><surname>Lum</surname> <given-names>J. A. G.</given-names></name> <name><surname>Hohwy</surname> <given-names>J.</given-names></name> <name><surname>Enticott</surname> <given-names>P. G.</given-names></name></person-group> (<year>2017</year>). <article-title>Echoes on the motor network: how internal motor control structures afford sensory experience</article-title>. <source>Brain Struct. Funct.</source> <volume>222</volume>, <fpage>3865</fpage>&#x02013;<lpage>3888</lpage>. <pub-id pub-id-type="doi">10.1007/s00429-017-1484-1</pub-id><pub-id pub-id-type="pmid">28770338</pub-id></citation></ref>
<ref id="B14"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Butler</surname> <given-names>A. J.</given-names></name> <name><surname>James</surname> <given-names>K. H.</given-names></name></person-group> (<year>2013</year>). <article-title>Active learning of novel sound-producing objects: motor reactivation and enhancement of visuo-motor connectivity</article-title>. <source>J. Cogn. Neurosci.</source> <volume>25</volume>, <fpage>203</fpage>&#x02013;<lpage>218</lpage>. <pub-id pub-id-type="doi">10.1162/jocn_a_00284</pub-id><pub-id pub-id-type="pmid">22905816</pub-id></citation></ref>
<ref id="B15"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Butler</surname> <given-names>A. J.</given-names></name> <name><surname>James</surname> <given-names>T. W.</given-names></name> <name><surname>James</surname> <given-names>K. H.</given-names></name></person-group> (<year>2011</year>). <article-title>Enhanced multisensory integration and motor reactivation after active motor learning of audiovisual associations</article-title>. <source>J. Cogn. Neurosci.</source> <volume>23</volume>, <fpage>3515</fpage>&#x02013;<lpage>3528</lpage>. <pub-id pub-id-type="doi">10.1162/jocn_a_00015</pub-id><pub-id pub-id-type="pmid">21452947</pub-id></citation></ref>
<ref id="B16"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>Z.</given-names></name> <name><surname>Chen</surname> <given-names>X.</given-names></name> <name><surname>Liu</surname> <given-names>P.</given-names></name> <name><surname>Huang</surname> <given-names>D.</given-names></name> <name><surname>Liu</surname> <given-names>H.</given-names></name></person-group> (<year>2012</year>). <article-title>Effect of temporal predictability on the neural processing of self-triggered auditory stimulation during vocalization</article-title>. <source>BMC Neurosci.</source> <volume>13</volume>:<fpage>55</fpage>. <pub-id pub-id-type="doi">10.1186/1471-2202-13-55</pub-id><pub-id pub-id-type="pmid">22646514</pub-id></citation></ref>
<ref id="B17"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cook</surname> <given-names>R.</given-names></name> <name><surname>Bird</surname> <given-names>G.</given-names></name> <name><surname>Catmur</surname> <given-names>C.</given-names></name> <name><surname>Press</surname> <given-names>C.</given-names></name> <name><surname>Heyes</surname> <given-names>C.</given-names></name></person-group> (<year>2014</year>). <article-title>Mirror neurons: from origin to function</article-title>. <source>Behav. Brain Sci.</source> <volume>37</volume>, <fpage>177</fpage>&#x02013;<lpage>192</lpage>. <pub-id pub-id-type="doi">10.1017/S0140525X13000903</pub-id><pub-id pub-id-type="pmid">24775147</pub-id></citation></ref>
<ref id="B18"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Crowley</surname> <given-names>K. E.</given-names></name> <name><surname>Colrain</surname> <given-names>I. M.</given-names></name></person-group> (<year>2004</year>). <article-title>A review of the evidence for P2 being an independent component process: age, sleep and modality</article-title>. <source>Clin. Neurophysiol.</source> <volume>115</volume>, <fpage>732</fpage>&#x02013;<lpage>744</lpage>. <pub-id pub-id-type="doi">10.1016/j.clinph.2003.11.021</pub-id><pub-id pub-id-type="pmid">15003751</pub-id></citation></ref>
<ref id="B19"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>D&#x02019;Ausilio</surname> <given-names>A.</given-names></name> <name><surname>Maffongelli</surname> <given-names>L.</given-names></name> <name><surname>Bartoli</surname> <given-names>E.</given-names></name> <name><surname>Campanella</surname> <given-names>M.</given-names></name> <name><surname>Ferrari</surname> <given-names>E.</given-names></name> <name><surname>Berry</surname> <given-names>J.</given-names></name> <etal/></person-group>. (<year>2014</year>). <article-title>Listening to speech recruits specific tongue motor synergies as revealed by transcranial magnetic stimulation and tissue-Doppler ultrasound imaging</article-title>. <source>Philos. Trans. R. Soc. B Biol. Sci.</source> <volume>369</volume>:<fpage>20130418</fpage>. <pub-id pub-id-type="doi">10.1098/rstb.2013.0418</pub-id><pub-id pub-id-type="pmid">24778384</pub-id></citation></ref>
<ref id="B20"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Enikolopov</surname> <given-names>A. G.</given-names></name> <name><surname>Abbott</surname> <given-names>L. F.</given-names></name> <name><surname>Sawtell</surname> <given-names>N. B.</given-names></name></person-group> (<year>2018</year>). <article-title>Internally generated predictions enhance neural and behavioral detection of sensory stimuli in an electric fish</article-title>. <source>Neuron</source> <volume>99</volume>, <fpage>135.e3</fpage>&#x02013;<lpage>146.e3</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2018.06.006</pub-id><pub-id pub-id-type="pmid">30001507</pub-id></citation></ref>
<ref id="B21"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Finkbeiner</surname> <given-names>K. M.</given-names></name> <name><surname>Russell</surname> <given-names>P. N.</given-names></name> <name><surname>Helton</surname> <given-names>W. S.</given-names></name></person-group> (<year>2016</year>). <article-title>Rest improves performance, nature improves happiness: assessment of break periods on the abbreviated vigilance task</article-title>. <source>Conscious. Cogn.</source> <volume>42</volume>, <fpage>277</fpage>&#x02013;<lpage>285</lpage>. <pub-id pub-id-type="doi">10.1016/j.concog.2016.04.005</pub-id><pub-id pub-id-type="pmid">27089530</pub-id></citation></ref>
<ref id="B22"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Folstein</surname> <given-names>J. R.</given-names></name> <name><surname>Van Petten</surname> <given-names>C.</given-names></name></person-group> (<year>2008</year>). <article-title>Influence of cognitive control and mismatch on the N2 component of the ERP: a review</article-title>. <source>Psychophysiology</source> <volume>45</volume>, <fpage>152</fpage>&#x02013;<lpage>170</lpage>. <pub-id pub-id-type="doi">10.1111/j.1469-8986.2007.00602.x</pub-id><pub-id pub-id-type="pmid">17850238</pub-id></citation></ref>
<ref id="B23"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ford</surname> <given-names>J. M.</given-names></name> <name><surname>Gray</surname> <given-names>M.</given-names></name> <name><surname>Faustman</surname> <given-names>W. O.</given-names></name> <name><surname>Roach</surname> <given-names>B. J.</given-names></name> <name><surname>Mathalon</surname> <given-names>D. H.</given-names></name></person-group> (<year>2007</year>). <article-title>Dissecting corollary discharge dysfunction in schizophrenia</article-title>. <source>Psychophysiology</source> <volume>44</volume>, <fpage>522</fpage>&#x02013;<lpage>529</lpage>. <pub-id pub-id-type="doi">10.1111/j.1469-8986.2007.00533.x</pub-id><pub-id pub-id-type="pmid">17565658</pub-id></citation></ref>
<ref id="B24"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ford</surname> <given-names>J. M.</given-names></name> <name><surname>Palzes</surname> <given-names>V. A.</given-names></name> <name><surname>Roach</surname> <given-names>B. J.</given-names></name> <name><surname>Mathalon</surname> <given-names>D. H.</given-names></name></person-group> (<year>2014</year>). <article-title>Did i do that? Abnormal predictive processes in schizophrenia when button pressing to deliver a tone</article-title>. <source>Schizophr. Bull.</source> <volume>40</volume>, <fpage>804</fpage>&#x02013;<lpage>812</lpage>. <pub-id pub-id-type="doi">10.1093/schbul/sbt072</pub-id><pub-id pub-id-type="pmid">23754836</pub-id></citation></ref>
<ref id="B25"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ford</surname> <given-names>J. M.</given-names></name> <name><surname>Roach</surname> <given-names>B. J.</given-names></name> <name><surname>Mathalon</surname> <given-names>D. H.</given-names></name></person-group> (<year>2010</year>). <article-title>Assessing corollary discharge in humans using noninvasive neurophysiological methods</article-title>. <source>Nat. Protoc.</source> <volume>5</volume>, <fpage>1160</fpage>&#x02013;<lpage>1168</lpage>. <pub-id pub-id-type="doi">10.1038/nprot.2010.67</pub-id><pub-id pub-id-type="pmid">20539291</pub-id></citation></ref>
<ref id="B26"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Friston</surname> <given-names>K.</given-names></name></person-group> (<year>2010</year>). <article-title>The free-energy principle: a unified brain theory?</article-title> <source>Nat. Rev. Neurosci.</source> <volume>11</volume>, <fpage>127</fpage>&#x02013;<lpage>138</lpage>. <pub-id pub-id-type="doi">10.1038/nrn2787</pub-id><pub-id pub-id-type="pmid">20068583</pub-id></citation></ref>
<ref id="B27"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Friston</surname> <given-names>K.</given-names></name></person-group> (<year>2011</year>). <article-title>What is optimal about motor control?</article-title> <source>Neuron</source> <volume>72</volume>, <fpage>488</fpage>&#x02013;<lpage>498</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2011.10.018</pub-id><pub-id pub-id-type="pmid">22078508</pub-id></citation></ref>
<ref id="B28"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Friston</surname> <given-names>K.</given-names></name> <name><surname>Mattout</surname> <given-names>J.</given-names></name> <name><surname>Kilner</surname> <given-names>J.</given-names></name></person-group> (<year>2011</year>). <article-title>Action understanding and active inference</article-title>. <source>Biol. Cybern.</source> <volume>104</volume>, <fpage>137</fpage>&#x02013;<lpage>160</lpage>. <pub-id pub-id-type="doi">10.1007/s00422-011-0424-z</pub-id><pub-id pub-id-type="pmid">21327826</pub-id></citation></ref>
<ref id="B29"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Friston</surname> <given-names>K.</given-names></name> <name><surname>Rosch</surname> <given-names>R.</given-names></name> <name><surname>Parr</surname> <given-names>T.</given-names></name> <name><surname>Price</surname> <given-names>C.</given-names></name> <name><surname>Bowman</surname> <given-names>H.</given-names></name></person-group> (<year>2017</year>). <article-title>Deep temporal models and active inference</article-title>. <source>Neurosci. Biobehav. Rev.</source> <volume>77</volume>, <fpage>388</fpage>&#x02013;<lpage>402</lpage>. <pub-id pub-id-type="doi">10.1016/j.neubiorev.2017.04.009</pub-id><pub-id pub-id-type="pmid">28416414</pub-id></citation></ref>
<ref id="B30"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Furukawa</surname> <given-names>Y.</given-names></name> <name><surname>Uehara</surname> <given-names>K.</given-names></name> <name><surname>Furuya</surname> <given-names>S.</given-names></name></person-group> (<year>2017</year>). <article-title>Expertise-dependent motor somatotopy of music perception</article-title>. <source>Neurosci. Lett.</source> <volume>650</volume>, <fpage>97</fpage>&#x02013;<lpage>102</lpage>. <pub-id pub-id-type="doi">10.1016/j.neulet.2017.04.033</pub-id><pub-id pub-id-type="pmid">28435044</pub-id></citation></ref>
<ref id="B31"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Garrido</surname> <given-names>M. I.</given-names></name> <name><surname>Kilner</surname> <given-names>J. M.</given-names></name> <name><surname>Stephan</surname> <given-names>K. E.</given-names></name> <name><surname>Friston</surname> <given-names>K. J.</given-names></name></person-group> (<year>2009</year>). <article-title>The mismatch negativity: a review of underlying mechanisms</article-title>. <source>Clin. Neurophysiol.</source> <volume>120</volume>, <fpage>453</fpage>&#x02013;<lpage>463</lpage>. <pub-id pub-id-type="doi">10.1016/j.clinph.2008.11.029</pub-id><pub-id pub-id-type="pmid">19181570</pub-id></citation></ref>
<ref id="B32"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ghio</surname> <given-names>M.</given-names></name> <name><surname>Scharmach</surname> <given-names>K.</given-names></name> <name><surname>Bellebaum</surname> <given-names>C.</given-names></name></person-group> (<year>2018</year>). <article-title>ERP correlates of processing the auditory consequences of own versus observed actions</article-title>. <source>Psychophysiology</source> <volume>55</volume>:<fpage>e13048</fpage>. <pub-id pub-id-type="doi">10.1111/psyp.13048</pub-id><pub-id pub-id-type="pmid">29266338</pub-id></citation></ref>
<ref id="B33"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Giret</surname> <given-names>N.</given-names></name> <name><surname>Kornfeld</surname> <given-names>J.</given-names></name> <name><surname>Ganguli</surname> <given-names>S.</given-names></name> <name><surname>Hahnloser</surname> <given-names>R. H. R.</given-names></name></person-group> (<year>2014</year>). <article-title>Evidence for a causal inverse model in an avian cortico-basal ganglia circuit</article-title>. <source>Proc. Natl. Acad. Sci. U S A</source> <volume>111</volume>, <fpage>6063</fpage>&#x02013;<lpage>6068</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.1317087111</pub-id><pub-id pub-id-type="pmid">24711417</pub-id></citation></ref>
<ref id="B34"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gordon</surname> <given-names>C. L.</given-names></name> <name><surname>Iacoboni</surname> <given-names>M.</given-names></name> <name><surname>Balasubramaniam</surname> <given-names>R.</given-names></name></person-group> (<year>2018</year>). <article-title>Multimodal music perception engages motor prediction: a TMS study</article-title>. <source>Front. Neurosci.</source> <volume>12</volume>:<fpage>736</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2018.00736</pub-id><pub-id pub-id-type="pmid">30405332</pub-id></citation></ref>
<ref id="B35"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Grush</surname> <given-names>R.</given-names></name></person-group> (<year>2004</year>). <article-title>The emulation theory of representation: motor control, imagery, and perception</article-title>. <source>Behav. Brain Sci.</source> <volume>27</volume>, <fpage>377</fpage>&#x02013;<lpage>396</lpage>. <pub-id pub-id-type="doi">10.1017/s0140525x04000093</pub-id><pub-id pub-id-type="pmid">15736871</pub-id></citation></ref>
<ref id="B36"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Haggard</surname> <given-names>P.</given-names></name></person-group> (<year>2017</year>). <article-title>Sense of agency in the human brain</article-title>. <source>Nat. Rev. Neurosci.</source> <volume>18</volume>, <fpage>197</fpage>&#x02013;<lpage>207</lpage>. <pub-id pub-id-type="doi">10.1038/nrn.2017.14</pub-id><pub-id pub-id-type="pmid">28251993</pub-id></citation></ref>
<ref id="B37"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hanuschkin</surname> <given-names>A.</given-names></name> <name><surname>Ganguli</surname> <given-names>S.</given-names></name> <name><surname>Hahnloser</surname> <given-names>R. H. R.</given-names></name></person-group> (<year>2013</year>). <article-title>A Hebbian learning rule gives rise to mirror neurons and links them to control theoretic inverse models</article-title>. <source>Front. Neural Circuits</source> <volume>7</volume>:<fpage>106</fpage>. <pub-id pub-id-type="doi">10.3389/fncir.2013.00106</pub-id><pub-id pub-id-type="pmid">23801941</pub-id></citation></ref>
<ref id="B38"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hattemer</surname> <given-names>K.</given-names></name> <name><surname>Knake</surname> <given-names>S.</given-names></name> <name><surname>Reis</surname> <given-names>J.</given-names></name> <name><surname>Rochon</surname> <given-names>J.</given-names></name> <name><surname>Oertel</surname> <given-names>W. H.</given-names></name> <name><surname>Rosenow</surname> <given-names>F.</given-names></name> <etal/></person-group>. (<year>2007</year>). <article-title>Excitability of the motor cortex during ovulatory and anovulatory cycles: a transcranial magnetic stimulation study</article-title>. <source>Clin. Endocrinol.</source> <volume>66</volume>, <fpage>387</fpage>&#x02013;<lpage>393</lpage>. <pub-id pub-id-type="doi">10.1111/j.1365-2265.2007.02744.x</pub-id><pub-id pub-id-type="pmid">17302873</pub-id></citation></ref>
<ref id="B39"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Heinks-Maldonado</surname> <given-names>T. H.</given-names></name> <name><surname>Mathalon</surname> <given-names>D. H.</given-names></name> <name><surname>Gray</surname> <given-names>M.</given-names></name> <name><surname>Ford</surname> <given-names>J. M.</given-names></name></person-group> (<year>2005</year>). <article-title>Fine-tuning of auditory cortex during speech production</article-title>. <source>Psychophysiology</source> <volume>42</volume>, <fpage>180</fpage>&#x02013;<lpage>190</lpage>. <pub-id pub-id-type="doi">10.1111/j.1469-8986.2005.00272.x</pub-id><pub-id pub-id-type="pmid">15787855</pub-id></citation></ref>
<ref id="B40"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Herwig</surname> <given-names>A.</given-names></name></person-group> (<year>2015</year>). <article-title>Linking perception and action by structure or process? Toward an integrative perspective</article-title>. <source>Neurosci. Biobehav. Rev.</source> <volume>52</volume>, <fpage>105</fpage>&#x02013;<lpage>116</lpage>. <pub-id pub-id-type="doi">10.1016/j.neubiorev.2015.02.013</pub-id><pub-id pub-id-type="pmid">25732773</pub-id></citation></ref>
<ref id="B41"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Herwig</surname> <given-names>A.</given-names></name> <name><surname>Prinz</surname> <given-names>W.</given-names></name> <name><surname>Waszak</surname> <given-names>F.</given-names></name></person-group> (<year>2007</year>). <article-title>Two modes of sensorimotor integration in intention-based and stimulus-based actions</article-title>. <source>Q. J. Exp. Psychol.</source> <volume>60</volume>, <fpage>1540</fpage>&#x02013;<lpage>1554</lpage>. <pub-id pub-id-type="doi">10.1080/17470210601119134</pub-id><pub-id pub-id-type="pmid">17853217</pub-id></citation></ref>
<ref id="B42"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Honda</surname> <given-names>T.</given-names></name> <name><surname>Nagao</surname> <given-names>S.</given-names></name> <name><surname>Hashimoto</surname> <given-names>Y.</given-names></name> <name><surname>Ishikawa</surname> <given-names>K.</given-names></name> <name><surname>Yokota</surname> <given-names>T.</given-names></name> <name><surname>Mizusawa</surname> <given-names>H.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>Tandem internal models execute motor learning in the cerebellum</article-title>. <source>Proc. Natl. Acad. Sci. U S A</source> <volume>115</volume>, <fpage>7428</fpage>&#x02013;<lpage>7433</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.1716489115</pub-id><pub-id pub-id-type="pmid">29941578</pub-id></citation></ref>
<ref id="B43"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Horv&#x000E1;th</surname> <given-names>J.</given-names></name></person-group> (<year>2013</year>). <article-title>Action-sound coincidence-related attenuation of auditory ERPs is not modulated by affordance compatibility</article-title>. <source>Biol. Psychol.</source> <volume>93</volume>, <fpage>81</fpage>&#x02013;<lpage>87</lpage>. <pub-id pub-id-type="doi">10.1016/j.biopsycho.2012.12.008</pub-id><pub-id pub-id-type="pmid">23298717</pub-id></citation></ref>
<ref id="B44"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Horv&#x000E1;th</surname> <given-names>J.</given-names></name></person-group> (<year>2014</year>). <article-title>The role of mechanical impact in action-related auditory attenuation</article-title>. <source>Cogn. Affect. Behav. Neurosci.</source> <volume>14</volume>, <fpage>1392</fpage>&#x02013;<lpage>1406</lpage>. <pub-id pub-id-type="doi">10.3758/s13415-014-0283-x</pub-id><pub-id pub-id-type="pmid">24723005</pub-id></citation></ref>
<ref id="B45"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Horv&#x000E1;th</surname> <given-names>J.</given-names></name></person-group> (<year>2015</year>). <article-title>Action-related auditory ERP attenuation: paradigms and hypotheses</article-title>. <source>Brain Res.</source> <volume>1626</volume>, <fpage>54</fpage>&#x02013;<lpage>65</lpage>. <pub-id pub-id-type="doi">10.1016/j.brainres.2015.03.038</pub-id><pub-id pub-id-type="pmid">25843932</pub-id></citation></ref>
<ref id="B46"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Horv&#x000E1;th</surname> <given-names>J.</given-names></name> <name><surname>Winkler</surname> <given-names>I.</given-names></name> <name><surname>Bendixen</surname> <given-names>A.</given-names></name></person-group> (<year>2008</year>). <article-title>Do N1/MMN, P3a and RON form a strongly coupled chain reflecting the three stages of auditory distraction?</article-title> <source>Biol. Psychol.</source> <volume>79</volume>, <fpage>139</fpage>&#x02013;<lpage>147</lpage>. <pub-id pub-id-type="doi">10.1016/j.biopsycho.2008.04.001</pub-id><pub-id pub-id-type="pmid">18468765</pub-id></citation></ref>
<ref id="B47"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Izawa</surname> <given-names>J.</given-names></name> <name><surname>Criscimagna-Hemminger</surname> <given-names>S. E.</given-names></name> <name><surname>Shadmehr</surname> <given-names>R.</given-names></name></person-group> (<year>2012</year>). <article-title>Cerebellar contributions to reach adaptation and learning sensory consequences of action</article-title>. <source>J. Neurosci.</source> <volume>32</volume>, <fpage>4230</fpage>&#x02013;<lpage>4239</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.6353-11.2012</pub-id><pub-id pub-id-type="pmid">22442085</pub-id></citation></ref>
<ref id="B48"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Joos</surname> <given-names>K.</given-names></name> <name><surname>Gilles</surname> <given-names>A.</given-names></name> <name><surname>Van De Heyning</surname> <given-names>P.</given-names></name> <name><surname>De Ridder</surname> <given-names>D.</given-names></name> <name><surname>Vanneste</surname> <given-names>S.</given-names></name></person-group> (<year>2014</year>). <article-title>From sensation to percept: the neural signature of auditory event-related potentials</article-title>. <source>Neurosci. Biobehav. Rev.</source> <volume>42</volume>, <fpage>148</fpage>&#x02013;<lpage>156</lpage>. <pub-id pub-id-type="doi">10.1016/j.neubiorev.2014.02.009</pub-id><pub-id pub-id-type="pmid">24589492</pub-id></citation></ref>
<ref id="B49"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Keysers</surname> <given-names>C.</given-names></name> <name><surname>Gazzola</surname> <given-names>V.</given-names></name></person-group> (<year>2014</year>). <article-title>Hebbian learning and predictive mirror neurons for actions, sensations and emotions</article-title>. <source>Philos. Trans. R. Soc. B Biol. Sci.</source> <volume>369</volume>:<fpage>20130175</fpage>. <pub-id pub-id-type="doi">10.1098/rstb.2013.0175</pub-id><pub-id pub-id-type="pmid">24778372</pub-id></citation></ref>
<ref id="B50"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kilteni</surname> <given-names>K.</given-names></name> <name><surname>Ehrsson</surname> <given-names>H. H.</given-names></name></person-group> (<year>2017</year>). <article-title>Sensorimotor predictions and tool use: hand-held tools attenuate self-touch</article-title>. <source>Cognition</source> <volume>165</volume>, <fpage>1</fpage>&#x02013;<lpage>9</lpage>. <pub-id pub-id-type="doi">10.1016/j.cognition.2017.04.005</pub-id><pub-id pub-id-type="pmid">28458089</pub-id></citation></ref>
<ref id="B51"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kluger</surname> <given-names>B. M.</given-names></name> <name><surname>Palmer</surname> <given-names>C.</given-names></name> <name><surname>Shattuck</surname> <given-names>J. T.</given-names></name> <name><surname>Triggs</surname> <given-names>W. J.</given-names></name></person-group> (<year>2012</year>). <article-title>Motor evoked potential depression following repetitive central motor initiation</article-title>. <source>Exp. Brain Res.</source> <volume>216</volume>, <fpage>585</fpage>&#x02013;<lpage>590</lpage>. <pub-id pub-id-type="doi">10.1007/s00221-011-2962-y</pub-id><pub-id pub-id-type="pmid">22130780</pub-id></citation></ref>
<ref id="B52"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Knolle</surname> <given-names>F.</given-names></name> <name><surname>Schr&#x000F6;ger</surname> <given-names>E.</given-names></name> <name><surname>Kotz</surname> <given-names>S. A.</given-names></name></person-group> (<year>2013</year>). <article-title>Prediction errors in self- and externally-generated deviants</article-title>. <source>Biol. Psychol.</source> <volume>92</volume>, <fpage>410</fpage>&#x02013;<lpage>416</lpage>. <pub-id pub-id-type="doi">10.1016/j.biopsycho.2012.11.017</pub-id><pub-id pub-id-type="pmid">23246535</pub-id></citation></ref>
<ref id="B53"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kujala</surname> <given-names>T.</given-names></name> <name><surname>Tervaniemi</surname> <given-names>M.</given-names></name> <name><surname>Schr&#x000F6;ger</surname> <given-names>E.</given-names></name></person-group> (<year>2007</year>). <article-title>The mismatch negativity in cognitive and clinical neuroscience: theoretical and methodological considerations</article-title>. <source>Biol. Psychol.</source> <volume>74</volume>, <fpage>1</fpage>&#x02013;<lpage>19</lpage>. <pub-id pub-id-type="doi">10.1016/j.biopsycho.2006.06.001</pub-id><pub-id pub-id-type="pmid">16844278</pub-id></citation></ref>
<ref id="B54"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Martikainen</surname> <given-names>M. H.</given-names></name> <name><surname>Kaneko</surname> <given-names>K. I.</given-names></name> <name><surname>Hari</surname> <given-names>R.</given-names></name></person-group> (<year>2005</year>). <article-title>Suppressed responses to self-triggered sounds in the human auditory cortex</article-title>. <source>Cereb. Cortex</source> <volume>15</volume>, <fpage>299</fpage>&#x02013;<lpage>302</lpage>. <pub-id pub-id-type="doi">10.1093/cercor/bhh131</pub-id><pub-id pub-id-type="pmid">15238430</pub-id></citation></ref>
<ref id="B55"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mathias</surname> <given-names>B.</given-names></name> <name><surname>Palmer</surname> <given-names>C.</given-names></name> <name><surname>Perrin</surname> <given-names>F.</given-names></name> <name><surname>Tillmann</surname> <given-names>B.</given-names></name></person-group> (<year>2015</year>). <article-title>Sensorimotor learning enhances expectations during auditory perception</article-title>. <source>Cereb. Cortex</source> <volume>25</volume>, <fpage>2238</fpage>&#x02013;<lpage>2254</lpage>. <pub-id pub-id-type="doi">10.1093/cercor/bhu030</pub-id><pub-id pub-id-type="pmid">24621528</pub-id></citation></ref>
<ref id="B56"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mathias</surname> <given-names>B.</given-names></name> <name><surname>Tillmann</surname> <given-names>B.</given-names></name> <name><surname>Palmer</surname> <given-names>C.</given-names></name></person-group> (<year>2016</year>). <article-title>Sensory, cognitive and sensorimotor learning effects in recognition memory for music</article-title>. <source>J. Cogn. Neurosci.</source> <volume>28</volume>, <fpage>1111</fpage>&#x02013;<lpage>1126</lpage>. <pub-id pub-id-type="doi">10.1162/jocn_a_00958</pub-id><pub-id pub-id-type="pmid">27027544</pub-id></citation></ref>
<ref id="B57"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>McDonnell</surname> <given-names>M. N.</given-names></name> <name><surname>Ridding</surname> <given-names>M. C.</given-names></name></person-group> (<year>2006</year>). <article-title>Transient motor evoked potential suppression following a complex sensorimotor task</article-title>. <source>Clin. Neurophysiol.</source> <volume>117</volume>, <fpage>1266</fpage>&#x02013;<lpage>1272</lpage>. <pub-id pub-id-type="doi">10.1016/j.clinph.2006.02.008</pub-id><pub-id pub-id-type="pmid">16600678</pub-id></citation></ref>
<ref id="B59"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Miall</surname> <given-names>R. C.</given-names></name> <name><surname>Christensen</surname> <given-names>L. O. D.</given-names></name> <name><surname>Cain</surname> <given-names>O.</given-names></name> <name><surname>Stanley</surname> <given-names>J.</given-names></name></person-group> (<year>2007</year>). <article-title>Disruption of state estimation in the human lateral cerebellum</article-title>. <source>PLOS Biol.</source> <volume>5</volume>:<fpage>e316</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pbio.0050316</pub-id><pub-id pub-id-type="pmid">18044990</pub-id></citation></ref>
<ref id="B60"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Miall</surname> <given-names>R. C.</given-names></name> <name><surname>Weir</surname> <given-names>D. J.</given-names></name> <name><surname>Wolpert</surname> <given-names>D. M.</given-names></name> <name><surname>Stein</surname> <given-names>J. F.</given-names></name></person-group> (<year>1993</year>). <article-title>Is the cerebellum a smith predictor?</article-title> <source>J. Mot. Behav.</source> <volume>25</volume>, <fpage>203</fpage>&#x02013;<lpage>216</lpage>. <pub-id pub-id-type="doi">10.1080/00222895.1993.9942050</pub-id><pub-id pub-id-type="pmid">12581990</pub-id></citation></ref>
<ref id="B58"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Miall</surname> <given-names>R. C.</given-names></name> <name><surname>Wolpert</surname> <given-names>D. M.</given-names></name></person-group> (<year>1995</year>). &#x0201C;<article-title>The cerebellum as a predictive model of the motor system: a smith predictor hypothesis</article-title>,&#x0201D; in <source>Neural Control of Movement</source>, eds <person-group person-group-type="editor"><name><surname>Ferrell</surname> <given-names>W.</given-names></name> <name><surname>Proske</surname> <given-names>U.</given-names></name></person-group> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>Plenum Press</publisher-name>), <fpage>215</fpage>&#x02013;<lpage>223</lpage>.</citation></ref>
<ref id="B61"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Miyaguchi</surname> <given-names>S.</given-names></name> <name><surname>Kojima</surname> <given-names>S.</given-names></name> <name><surname>Sasaki</surname> <given-names>R.</given-names></name> <name><surname>Kotan</surname> <given-names>S.</given-names></name> <name><surname>Kirimoto</surname> <given-names>H.</given-names></name> <name><surname>Tamaki</surname> <given-names>H.</given-names></name> <etal/></person-group>. (<year>2017</year>). <article-title>Decrease in short-latency afferent inhibition during corticomotor postexercise depression following repetitive finger movement</article-title>. <source>Brain Behav.</source> <volume>7</volume>:<fpage>e00744</fpage>. <pub-id pub-id-type="doi">10.1002/brb3.744</pub-id><pub-id pub-id-type="pmid">28729946</pub-id></citation></ref>
<ref id="B62"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>N&#x000E4;&#x000E4;t&#x000E4;nen</surname> <given-names>R.</given-names></name> <name><surname>Paavilainen</surname> <given-names>P.</given-names></name> <name><surname>Rinne</surname> <given-names>T.</given-names></name> <name><surname>Alho</surname> <given-names>K.</given-names></name></person-group> (<year>2007</year>). <article-title>The mismatch negativity (MMN) in basic research of central auditory processing: a review</article-title>. <source>Clin. Neurophysiol.</source> <volume>118</volume>, <fpage>2544</fpage>&#x02013;<lpage>2590</lpage>. <pub-id pub-id-type="doi">10.1016/j.clinph.2007.04.026</pub-id><pub-id pub-id-type="pmid">17931964</pub-id></citation></ref>
<ref id="B63"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nuttall</surname> <given-names>H. E.</given-names></name> <name><surname>Kennedy-Higgins</surname> <given-names>D.</given-names></name> <name><surname>Devlin</surname> <given-names>J. T.</given-names></name> <name><surname>Adank</surname> <given-names>P.</given-names></name></person-group> (<year>2017</year>). <article-title>The role of hearing ability and speech distortion in the facilitation of articulatory motor cortex</article-title>. <source>Neuropsychologia</source> <volume>94</volume>, <fpage>13</fpage>&#x02013;<lpage>22</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2016.11.016</pub-id><pub-id pub-id-type="pmid">27884757</pub-id></citation></ref>
<ref id="B64"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Oldfield</surname> <given-names>R. C.</given-names></name></person-group> (<year>1971</year>). <article-title>The assessment and analysis of handedness: the Edinburgh inventory</article-title>. <source>Neuropsychologia</source> <volume>9</volume>, <fpage>97</fpage>&#x02013;<lpage>113</lpage>. <pub-id pub-id-type="doi">10.1016/0028-3932(71)90067-4</pub-id><pub-id pub-id-type="pmid">5146491</pub-id></citation></ref>
<ref id="B65"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pellegrini</surname> <given-names>M.</given-names></name> <name><surname>Zoghi</surname> <given-names>M.</given-names></name> <name><surname>Jaberzadeh</surname> <given-names>S.</given-names></name></person-group> (<year>2018</year>). <article-title>Biological and anatomical factors influencing interindividual variability to noninvasive brain stimulation of the primary motor cortex: a systematic review and meta-analysis</article-title>. <source>Rev. Neurosci.</source> <volume>29</volume>, <fpage>199</fpage>&#x02013;<lpage>222</lpage>. <pub-id pub-id-type="doi">10.1515/revneuro-2017-0048</pub-id><pub-id pub-id-type="pmid">29055940</pub-id></citation></ref>
<ref id="B66"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pellicciari</surname> <given-names>M. C.</given-names></name> <name><surname>Miniussi</surname> <given-names>C.</given-names></name> <name><surname>Ferrari</surname> <given-names>C.</given-names></name> <name><surname>Koch</surname> <given-names>G.</given-names></name> <name><surname>Bortoletto</surname> <given-names>M.</given-names></name></person-group> (<year>2016</year>). <article-title>Ongoing cumulative effects of single TMS pulses on corticospinal excitability: an intra- and inter-block investigation</article-title>. <source>Clin. Neurophysiol.</source> <volume>127</volume>, <fpage>621</fpage>&#x02013;<lpage>628</lpage>. <pub-id pub-id-type="doi">10.1016/j.clinph.2015.03.002</pub-id><pub-id pub-id-type="pmid">25823698</pub-id></citation></ref>
<ref id="B67"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pereira</surname> <given-names>D. R.</given-names></name> <name><surname>Cardoso</surname> <given-names>S.</given-names></name> <name><surname>Ferreira-Santos</surname> <given-names>F.</given-names></name> <name><surname>Fernandes</surname> <given-names>C.</given-names></name> <name><surname>Cunha-Reis</surname> <given-names>C.</given-names></name> <name><surname>Paiva</surname> <given-names>T. O.</given-names></name> <etal/></person-group>. (<year>2014</year>). <article-title>Effects of inter-stimulus interval (ISI) duration on the N1 and P2 components of the auditory event-related potential</article-title>. <source>Int. J. Psychophysiol.</source> <volume>94</volume>, <fpage>311</fpage>&#x02013;<lpage>318</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijpsycho.2014.09.012</pub-id><pub-id pub-id-type="pmid">25304172</pub-id></citation></ref>
<ref id="B68"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pickering</surname> <given-names>M. J.</given-names></name> <name><surname>Clark</surname> <given-names>A.</given-names></name></person-group> (<year>2014</year>). <article-title>Getting ahead: forward models and their place in cognitive architecture</article-title>. <source>Trends Cogn. Sci.</source> <volume>18</volume>, <fpage>451</fpage>&#x02013;<lpage>456</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2014.05.006</pub-id><pub-id pub-id-type="pmid">24909775</pub-id></citation></ref>
<ref id="B69"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pinheiro</surname> <given-names>A. P.</given-names></name> <name><surname>Schwartze</surname> <given-names>M.</given-names></name> <name><surname>Kotz</surname> <given-names>S. A.</given-names></name></person-group> (<year>2018</year>). <article-title>Voice-selective prediction alterations in nonclinical voice hearers</article-title>. <source>Sci. Rep.</source> <volume>8</volume>:<fpage>14717</fpage>. <pub-id pub-id-type="doi">10.1038/s41598-018-32614-9</pub-id><pub-id pub-id-type="pmid">30283058</pub-id></citation></ref>
<ref id="B70"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ramaswami</surname> <given-names>M.</given-names></name></person-group> (<year>2014</year>). <article-title>Network plasticity in adaptive filtering and behavioral habituation</article-title>. <source>Neuron</source> <volume>82</volume>, <fpage>1216</fpage>&#x02013;<lpage>1229</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2014.04.035</pub-id><pub-id pub-id-type="pmid">24945768</pub-id></citation></ref>
<ref id="B71"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Reinke</surname> <given-names>K. S.</given-names></name> <name><surname>He</surname> <given-names>Y.</given-names></name> <name><surname>Wang</surname> <given-names>C.</given-names></name> <name><surname>Alain</surname> <given-names>C.</given-names></name></person-group> (<year>2003</year>). <article-title>Perceptual learning modulates sensory evoked response during vowel segregation</article-title>. <source>Cogn. Brain Res.</source> <volume>17</volume>, <fpage>781</fpage>&#x02013;<lpage>791</lpage>. <pub-id pub-id-type="doi">10.1016/s0926-6410(03)00202-7</pub-id><pub-id pub-id-type="pmid">14561463</pub-id></citation></ref>
<ref id="B72"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rossi</surname> <given-names>S.</given-names></name> <name><surname>Hallett</surname> <given-names>M.</given-names></name> <name><surname>Rossini</surname> <given-names>P. M.</given-names></name> <name><surname>Pascual-Leone</surname> <given-names>A.</given-names></name></person-group> (<year>2011</year>). <article-title>Screening questionnaire before TMS: an update</article-title>. <source>Clin. Neurophysiol.</source> <volume>122</volume>:<fpage>1686</fpage>. <pub-id pub-id-type="doi">10.1016/j.clinph.2010.12.037</pub-id><pub-id pub-id-type="pmid">21227747</pub-id></citation></ref>
<ref id="B73"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rossi</surname> <given-names>S.</given-names></name> <name><surname>Hallett</surname> <given-names>M.</given-names></name> <name><surname>Rossini</surname> <given-names>P. M.</given-names></name> <name><surname>Pascual-Leone</surname> <given-names>A.</given-names></name> <collab>Safety of TMS Consensus Group</collab></person-group>. (<year>2009</year>). <article-title>Safety, ethical considerations, and application guidelines for the use of transcranial magnetic stimulation in clinical practice and research</article-title>. <source>Clin. Neurophysiol.</source> <volume>120</volume>, <fpage>2008</fpage>&#x02013;<lpage>2039</lpage>. <pub-id pub-id-type="doi">10.1016/j.clinph.2009.08.016</pub-id><pub-id pub-id-type="pmid">19833552</pub-id></citation></ref>
<ref id="B74"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sawtell</surname> <given-names>N. B.</given-names></name></person-group> (<year>2017</year>). <article-title>Neural mechanisms for predicting the sensory consequences of behavior: insights from electrosensory systems</article-title>. <source>Annu. Rev. Physiol.</source> <volume>79</volume>, <fpage>381</fpage>&#x02013;<lpage>399</lpage>. <pub-id pub-id-type="doi">10.1146/annurev-physiol-021115-105003</pub-id><pub-id pub-id-type="pmid">27813831</pub-id></citation></ref>
<ref id="B75"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schmidt</surname> <given-names>S.</given-names></name> <name><surname>Cichy</surname> <given-names>R. M.</given-names></name> <name><surname>Kraft</surname> <given-names>A.</given-names></name> <name><surname>Brocke</surname> <given-names>J.</given-names></name> <name><surname>Irlbacher</surname> <given-names>K.</given-names></name> <name><surname>Brandt</surname> <given-names>S. A.</given-names></name></person-group> (<year>2009</year>). <article-title>An initial transient-state and reliable measures of corticospinal excitability in TMS studies</article-title>. <source>Clin. Neurophysiol.</source> <volume>120</volume>, <fpage>987</fpage>&#x02013;<lpage>993</lpage>. <pub-id pub-id-type="doi">10.1016/j.clinph.2009.02.164</pub-id><pub-id pub-id-type="pmid">19359215</pub-id></citation></ref>
<ref id="B76"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schneider</surname> <given-names>D. M.</given-names></name> <name><surname>Mooney</surname> <given-names>R.</given-names></name></person-group> (<year>2018</year>). <article-title>How movement modulates hearing</article-title>. <source>Annu. Rev. Neurosci.</source> <volume>41</volume>, <fpage>553</fpage>&#x02013;<lpage>572</lpage>. <pub-id pub-id-type="doi">10.1146/annurev-neuro-072116-031215</pub-id><pub-id pub-id-type="pmid">29986164</pub-id></citation></ref>
<ref id="B77"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shadmehr</surname> <given-names>R.</given-names></name> <name><surname>Smith</surname> <given-names>M. A.</given-names></name> <name><surname>Krakauer</surname> <given-names>J. W.</given-names></name></person-group> (<year>2010</year>). <article-title>Error correction, sensory prediction, and adaptation in motor control</article-title>. <source>Annu. Rev. Neurosci.</source> <volume>33</volume>, <fpage>89</fpage>&#x02013;<lpage>108</lpage>. <pub-id pub-id-type="doi">10.1146/annurev-neuro-060909-153135</pub-id><pub-id pub-id-type="pmid">20367317</pub-id></citation></ref>
<ref id="B78"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Spriggs</surname> <given-names>M. J.</given-names></name> <name><surname>Sumner</surname> <given-names>R. L.</given-names></name> <name><surname>McMillan</surname> <given-names>R. L.</given-names></name> <name><surname>Moran</surname> <given-names>R. J.</given-names></name> <name><surname>Kirk</surname> <given-names>I. J.</given-names></name> <name><surname>Muthukumaraswamy</surname> <given-names>S. D.</given-names></name></person-group> (<year>2018</year>). <article-title>Indexing sensory plasticity: evidence for distinct predictive coding and hebbian learning mechanisms in the cerebral cortex</article-title>. <source>Neuroimage</source> <volume>176</volume>, <fpage>290</fpage>&#x02013;<lpage>300</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2018.04.060</pub-id><pub-id pub-id-type="pmid">29715566</pub-id></citation></ref>
<ref id="B79"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Stephan</surname> <given-names>M. A.</given-names></name> <name><surname>Lega</surname> <given-names>C.</given-names></name> <name><surname>Penhune</surname> <given-names>V. B.</given-names></name></person-group> (<year>2018</year>). <article-title>Auditory prediction cues motor preparation in the absence of movements</article-title>. <source>Neuroimage</source> <volume>174</volume>, <fpage>288</fpage>&#x02013;<lpage>296</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2018.03.044</pub-id><pub-id pub-id-type="pmid">29571713</pub-id></citation></ref>
<ref id="B80"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Straka</surname> <given-names>H.</given-names></name> <name><surname>Simmers</surname> <given-names>J.</given-names></name> <name><surname>Chagnaud</surname> <given-names>B. P.</given-names></name></person-group> (<year>2018</year>). <article-title>A new perspective on predictive motor signaling</article-title>. <source>Curr. Biol.</source> <volume>28</volume>, <fpage>R232</fpage>&#x02013;<lpage>R243</lpage>. <pub-id pub-id-type="doi">10.1016/j.cub.2018.01.033</pub-id><pub-id pub-id-type="pmid">29510116</pub-id></citation></ref>
<ref id="B81"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Tabachnick</surname> <given-names>B. G.</given-names></name> <name><surname>Fidell</surname> <given-names>L. S.</given-names></name></person-group> (<year>2006</year>). <source>Using Multivariate Statistics.</source> <edition>5th Edn.</edition> <publisher-loc>Boston, MA</publisher-loc>: <publisher-name>Allyn and Bacon, Inc.</publisher-name></citation></ref>
<ref id="B82"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ticini</surname> <given-names>L. F.</given-names></name> <name><surname>Sch&#x000FC;tz-Bosbach</surname> <given-names>S.</given-names></name> <name><surname>Waszak</surname> <given-names>F.</given-names></name></person-group> (<year>2019</year>). <article-title>From goals to muscles: motor familiarity shapes the representation of action-related sounds in the human motor system</article-title>. <source>Cogn. Neurosci.</source> <volume>10</volume>, <fpage>20</fpage>&#x02013;<lpage>29</lpage>. <pub-id pub-id-type="doi">10.1080/17588928.2018.1424128</pub-id><pub-id pub-id-type="pmid">29307264</pub-id></citation></ref>
<ref id="B83"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ticini</surname> <given-names>L. F.</given-names></name> <name><surname>Sch&#x000FC;tz-Bosbach</surname> <given-names>S.</given-names></name> <name><surname>Weiss</surname> <given-names>C.</given-names></name> <name><surname>Casile</surname> <given-names>A.</given-names></name> <name><surname>Waszak</surname> <given-names>F.</given-names></name></person-group> (<year>2011</year>). <article-title>When sounds become actions: higher-order representation of newly learned action sounds in the human motor system</article-title>. <source>J. Cogn. Neurosci.</source> <volume>24</volume>, <fpage>464</fpage>&#x02013;<lpage>474</lpage>. <pub-id pub-id-type="doi">10.1162/jocn_a_00134</pub-id><pub-id pub-id-type="pmid">21916562</pub-id></citation></ref>
<ref id="B84"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Timm</surname> <given-names>J.</given-names></name> <name><surname>Sch&#x000F6;nwiesner</surname> <given-names>M.</given-names></name> <name><surname>Sanmiguel</surname> <given-names>I.</given-names></name> <name><surname>Schr&#x000F6;ger</surname> <given-names>E.</given-names></name></person-group> (<year>2014</year>). <article-title>Sensation of agency and perception of temporal order</article-title>. <source>Conscious. Cogn.</source> <volume>23</volume>, <fpage>42</fpage>&#x02013;<lpage>52</lpage>. <pub-id pub-id-type="doi">10.1016/j.concog.2013.11.002</pub-id><pub-id pub-id-type="pmid">24362412</pub-id></citation></ref>
<ref id="B85"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tong</surname> <given-names>Y.</given-names></name> <name><surname>Melara</surname> <given-names>R. D.</given-names></name> <name><surname>Rao</surname> <given-names>A.</given-names></name></person-group> (<year>2009</year>). <article-title>P2 enhancement from auditory discrimination training is associated with improved reaction times</article-title>. <source>Brain Res.</source> <volume>1297</volume>, <fpage>80</fpage>&#x02013;<lpage>88</lpage>. <pub-id pub-id-type="doi">10.1016/j.brainres.2009.07.089</pub-id><pub-id pub-id-type="pmid">19651109</pub-id></citation></ref>
<ref id="B86"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Van Elk</surname> <given-names>M.</given-names></name> <name><surname>Salomon</surname> <given-names>R.</given-names></name> <name><surname>Kannape</surname> <given-names>O.</given-names></name> <name><surname>Blanke</surname> <given-names>O.</given-names></name></person-group> (<year>2014</year>). <article-title>Suppression of the N1 auditory evoked potential for sounds generated by the upper and lower limbs</article-title>. <source>Biol. Psychol.</source> <volume>102</volume>, <fpage>108</fpage>&#x02013;<lpage>117</lpage>. <pub-id pub-id-type="doi">10.1016/j.biopsycho.2014.06.007</pub-id><pub-id pub-id-type="pmid">25019590</pub-id></citation></ref>
<ref id="B87"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>J.</given-names></name> <name><surname>Mathalon</surname> <given-names>D. H.</given-names></name> <name><surname>Roach</surname> <given-names>B. J.</given-names></name> <name><surname>Reilly</surname> <given-names>J.</given-names></name> <name><surname>Keedy</surname> <given-names>S. K.</given-names></name> <name><surname>Sweeney</surname> <given-names>J. A.</given-names></name> <etal/></person-group>. (<year>2014</year>). <article-title>Action planning and predictive coding when speaking</article-title>. <source>Neuroimage</source> <volume>91</volume>, <fpage>91</fpage>&#x02013;<lpage>98</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2014.01.003</pub-id><pub-id pub-id-type="pmid">24423729</pub-id></citation></ref>
<ref id="B88"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Winkler</surname> <given-names>I.</given-names></name></person-group> (<year>2007</year>). <article-title>Interpreting the mismatch negativity</article-title>. <source>J. Psychophysiol.</source> <volume>21</volume>, <fpage>147</fpage>&#x02013;<lpage>163</lpage>. <pub-id pub-id-type="doi">10.1027/0269-8803.21.34.147</pub-id></citation></ref>
<ref id="B91"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wolpert</surname> <given-names>D. M.</given-names></name> <name><surname>Diedrichsen</surname> <given-names>J.</given-names></name> <name><surname>Flanagan</surname> <given-names>J. R.</given-names></name></person-group> (<year>2011</year>). <article-title>Principles of sensorimotor learning</article-title>. <source>Nat. Rev. Neurosci.</source> <volume>12</volume>, <fpage>739</fpage>&#x02013;<lpage>751</lpage>. <pub-id pub-id-type="doi">10.1038/nrn3112</pub-id><pub-id pub-id-type="pmid">22033537</pub-id></citation></ref>
<ref id="B89"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wolpert</surname> <given-names>D. M.</given-names></name> <name><surname>Ghahramani</surname> <given-names>Z.</given-names></name></person-group> (<year>2000</year>). <article-title>Computational principles of movement neuroscience</article-title>. <source>Nat. Neurosci.</source> <volume>3</volume>, <fpage>1212</fpage>&#x02013;<lpage>1217</lpage>. <pub-id pub-id-type="doi">10.1038/81497</pub-id><pub-id pub-id-type="pmid">11127840</pub-id></citation></ref>
<ref id="B92"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wolpert</surname> <given-names>D. M.</given-names></name> <name><surname>Ghahramani</surname> <given-names>Z.</given-names></name> <name><surname>Jordan</surname> <given-names>M. I.</given-names></name></person-group> (<year>1995</year>). <article-title>An internal model for sensorimotor integration</article-title>. <source>Science</source> <volume>269</volume>, <fpage>1880</fpage>&#x02013;<lpage>1882</lpage>. <pub-id pub-id-type="doi">10.1126/science.7569931</pub-id><pub-id pub-id-type="pmid">7569931</pub-id></citation></ref>
<ref id="B90"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wolpert</surname> <given-names>D. M.</given-names></name> <name><surname>Kawato</surname> <given-names>M.</given-names></name></person-group> (<year>1998</year>). <article-title>Multiple paired forward and inverse models for motor control</article-title>. <source>Neural Netw.</source> <volume>11</volume>, <fpage>1317</fpage>&#x02013;<lpage>1329</lpage>. <pub-id pub-id-type="doi">10.1016/s0893-6080(98)00066-5</pub-id><pub-id pub-id-type="pmid">12662752</pub-id></citation></ref>
<ref id="B93"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Woodman</surname> <given-names>G. F.</given-names></name></person-group> (<year>2010</year>). <article-title>A brief introduction to the use of event-related potentials in studies of perception and attention</article-title>. <source>Atten. Percept. Psychophys.</source> <volume>72</volume>, <fpage>2031</fpage>&#x02013;<lpage>2046</lpage>. <pub-id pub-id-type="doi">10.3758/app.72.8.2031</pub-id><pub-id pub-id-type="pmid">21097848</pub-id></citation></ref>
<ref id="B94"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yavari</surname> <given-names>F.</given-names></name> <name><surname>Mahdavi</surname> <given-names>S.</given-names></name> <name><surname>Towhidkhah</surname> <given-names>F.</given-names></name> <name><surname>Ahmadi-Pajouh</surname> <given-names>M. A.</given-names></name> <name><surname>Ekhtiari</surname> <given-names>H.</given-names></name> <name><surname>Darainy</surname> <given-names>M.</given-names></name></person-group> (<year>2016</year>). <article-title>Cerebellum as a forward but not inverse model in visuomotor adaptation task: a tDCS-based and modeling study</article-title>. <source>Exp. Brain Res.</source> <volume>234</volume>, <fpage>997</fpage>&#x02013;<lpage>1012</lpage>. <pub-id pub-id-type="doi">10.1007/s00221-015-4523-2</pub-id><pub-id pub-id-type="pmid">26706039</pub-id></citation></ref>
</ref-list>
</back>
</article>
