<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="2.3" xml:lang="EN">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title>Frontiers in Psychology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Psychol.</abbrev-journal-title>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fpsyg.2023.1148793</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Active self-motion control and the role of agency under ambiguity</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes"><name>
<surname>Rineau</surname>
<given-names>Anne-Laure</given-names>
</name><xref rid="aff1" ref-type="aff"><sup>1</sup></xref>
<xref rid="c001" ref-type="corresp"><sup>&#x002A;</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/2176277/overview"/>
</contrib>
<contrib contrib-type="author"><name>
<surname>Berberian</surname>
<given-names>Bruno</given-names>
</name><xref rid="aff1" ref-type="aff"><sup>1</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/358584/overview"/>
</contrib>
<contrib contrib-type="author"><name>
<surname>Sarrazin</surname>
<given-names>Jean-Christophe</given-names>
</name><xref rid="aff1" ref-type="aff"><sup>1</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/274433/overview"/>
</contrib>
<contrib contrib-type="author"><name>
<surname>Bringoux</surname>
<given-names>Lionel</given-names>
</name><xref rid="aff2" ref-type="aff"><sup>2</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/143940/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>ONERA, Information Processing and Systems Department (DTIS)</institution>, <addr-line>Salon-de-Provence</addr-line>, <country>France</country></aff>
<aff id="aff2"><sup>2</sup><institution>CNRS, ISM, Aix Marseille Univ</institution>, <addr-line>Marseille</addr-line>, <country>France</country></aff>
<author-notes>
<fn id="fn0001" fn-type="edited-by">
<p>Edited by: Birgitta Dresp-Langley, Centre National de la Recherche Scientifique (CNRS), France</p>
</fn>
<fn id="fn0002" fn-type="edited-by">
<p>Reviewed by: Bin Yin, Fujian Normal University, China; Rob Whitwell, Western University, Canada</p>
</fn>
<corresp id="c001">&#x002A;Correspondence: Anne-Laure Rineau, <email>anrineau@gmail.com</email></corresp>
<fn id="fn0003" fn-type="other">
<p>This article was submitted to Perception Science, a section of the journal Frontiers in Psychology</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>20</day>
<month>04</month>
<year>2023</year>
</pub-date>
<pub-date pub-type="collection">
<year>2023</year>
</pub-date>
<volume>14</volume>
<elocation-id>1148793</elocation-id>
<history>
<date date-type="received">
<day>20</day>
<month>01</month>
<year>2023</year>
</date>
<date date-type="accepted">
<day>31</day>
<month>03</month>
<year>2023</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2023 Rineau, Berberian, Sarrazin and Bringoux.</copyright-statement>
<copyright-year>2023</copyright-year>
<copyright-holder>Rineau, Berberian, Sarrazin and Bringoux</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract>
<sec>
<title>Purpose</title>
<p>Self-motion perception is a key factor in daily behaviours such as driving a car or piloting an aircraft. It is mainly based on visuo-vestibular integration, whose weighting mechanisms are modulated by the reliability properties of sensory inputs. Recently, it has been shown that the internal state of the operator can also modulate multisensory integration and may sharpen the representation of relevant inputs. In line with the concept of <italic>agency</italic>, it thus appears relevant to evaluate the impact of being in control of our own action on self-motion perception.</p>
</sec>
<sec>
<title>Methodology</title>
<p>Here, we tested two conditions of motion control (active/manual trigger versus passive/observer condition), asking participants to discriminate between two consecutive longitudinal movements by identifying the larger displacement (displacement of higher intensity). We also tested motion discrimination under two levels of ambiguity by applying acceleration ratios that differed from our two &#x201C;standard&#x201D; displacements (i.e., 3 s; 0.012 m.s<sup>&#x2212;2</sup> and 0.030 m.s<sup>&#x2212;2</sup>).</p>
</sec>
<sec>
<title>Results</title>
<p>We found an effect of control condition, but not of the level of ambiguity on the way participants perceived the standard displacement, i.e., perceptual bias (Point of Subjective Equality; PSE). Also, we found a significant effect of interaction between the active condition and the level of ambiguity on the ability to discriminate between displacements, i.e., sensitivity (Just Noticeable Difference; JND).</p>
</sec>
<sec>
<title>Originality</title>
<p>Being in control of our own motion through a manual intentional trigger of self-displacement maintains overall motion sensitivity when ambiguity increases.</p>
</sec>
</abstract>
<kwd-group>
<kwd>self-motion perception</kwd>
<kwd>multisensory integration</kwd>
<kwd>agency</kwd>
<kwd>discrimination task</kwd>
<kwd>predictive coding</kwd>
</kwd-group>
<counts>
<fig-count count="5"/>
<table-count count="0"/>
<equation-count count="0"/>
<ref-count count="43"/>
<page-count count="10"/>
<word-count count="7024"/>
</counts>
</article-meta>
</front>
<body>
<sec id="sec1" sec-type="intro">
<title>1. Introduction</title>
<p>Whether for simple or more complex tasks, accurate perception of one&#x2019;s own motion is crucial. It is now widely accepted that perceiving self-motion accurately requires integrating a variety of information provided by both environment-centred (e.g., optic flow) and body-centred cues (e.g., vestibular, proprioceptive inputs and motor output; <xref ref-type="bibr" rid="ref12">Cheng and Gu, 2018</xref>; <xref ref-type="bibr" rid="ref14">Cullen, 2019</xref>). Over the last decades, the human brain has been credited with adaptive mechanisms that contribute to an optimal integration of multisensory cues, combining redundant and complementary inputs while accounting for stimulus characteristics (<xref ref-type="bibr" rid="ref18">Ernst and Banks, 2002</xref>; <xref ref-type="bibr" rid="ref1">Alais and Burr, 2004</xref>; <xref ref-type="bibr" rid="ref19">Ernst and B&#x00FC;lthoff, 2004</xref>; <xref ref-type="bibr" rid="ref20">Fetsch et al., 2013</xref>). Seminal works stressed the importance of the sensory reliability of inputs in multisensory integration for self-motion perception (<xref ref-type="bibr" rid="ref250">Gu et al., 2008</xref>; <xref ref-type="bibr" rid="ref31">Morgan et al., 2008</xref>; <xref ref-type="bibr" rid="ref220">ter Horst et al., 2015</xref>).</p>
<p>As promoted by the concept of <italic>Active sensing</italic> (<xref ref-type="bibr" rid="ref28">Kveraga et al., 2007</xref>; <xref ref-type="bibr" rid="ref36">Schroeder et al., 2010</xref>; <xref ref-type="bibr" rid="ref38">van Atteveldt et al., 2014</xref>), it has been recently suggested that multisensory processing does not depend solely on the nature of sensory inputs, but also on the motor and attentional contexts of an action (<xref ref-type="bibr" rid="ref17">Donohue et al., 2015</xref>). A growing body of studies has recently investigated perceptual responses in situations where an external event (stimulus) is the result of an intentional action (<xref ref-type="bibr" rid="ref27">van Kemenade et al., 2016</xref>; <xref ref-type="bibr" rid="ref2">Arikan et al., 2017</xref>; <xref ref-type="bibr" rid="ref37">Straube et al., 2017</xref>). Sensory integration during self-generated actions has been found to be modulated at both the physiological (<xref ref-type="bibr" rid="ref25">Hughes et al., 2013</xref>) and behavioural (<xref ref-type="bibr" rid="ref6">Bays et al., 2006</xref>) levels compared to the processing of the same sensory inputs generated by an external system. Yet most studies exploring self-motion perception are limited to &#x201C;passive&#x201D; stimulations, i.e., where motion is not self-generated (<xref ref-type="bibr" rid="ref12">Cheng and Gu, 2018</xref>). So far, visuo-vestibular integration for motion perception has not been investigated through the prism of intentional action.</p>
<p>However, the link between intention and perception has been widely studied within the theoretical framework of agency. Indeed, the sense of agency describes the subjective feeling associated with controlling one&#x2019;s own actions and, through these actions, events in the outside world (<xref ref-type="bibr" rid="ref24">Haggard and Tsakiris, 2009</xref>; <xref ref-type="bibr" rid="ref11">Chambon and Haggard, 2012</xref>). Agency is largely explained through a comparator model (CM) that describes internal computational predictive mechanisms of human action control (<xref ref-type="bibr" rid="ref22">Frith et al., 2000</xref>; <xref ref-type="bibr" rid="ref7">Blakemore et al., 2002</xref>). Interestingly, previous work has highlighted the dependence of the vestibular system on this model. Indeed, a decrease in response of the VO (<italic>vestibular only</italic>) neurons was specifically observed when the efference copy due to active motion was in agreement with current sensory inputs, both for rotation (<xref ref-type="bibr" rid="ref35">Roy and Cullen, 2004</xref>) and translation (<xref ref-type="bibr" rid="ref9">Carriot et al., 2013</xref>). Specifically, it has recently been reported that the active vs. passive internal state distinction at an early stage of integration may be a source of modulation in the computation of motion (<xref ref-type="bibr" rid="ref23">Gu, 2018</xref>; <xref ref-type="bibr" rid="ref8">Brooks and Cullen, 2019</xref>; <xref ref-type="bibr" rid="ref15">Cullen and Wang, 2020</xref>; <xref ref-type="bibr" rid="ref16">Cullen and Zobeiri, 2021</xref>). There is, however, strikingly little information on the consequences of active versus passive states on self-motion perception. In this context, our study explored the impact of the intentional nature of an action triggering a visuo-vestibular stimulation on self-motion perception, as seen through the prism of agency.</p>
<p><xref ref-type="bibr" rid="ref400">Yon and Frith (2021)</xref> recently argued that intentional action comes with prior knowledge (e.g., prediction) that could be used to optimise perception in an uncertain world. In addition, recent work demonstrates that being active (in terms of motor control of the action) potentiates the integration of relevant cues at the audio-visual level (<xref ref-type="bibr" rid="ref27">van Kemenade et al., 2016</xref>; <xref ref-type="bibr" rid="ref2">Arikan et al., 2017</xref>). It can therefore reasonably be hypothesised that being in control of an action may optimise the integration of the different sensory inputs relevant to the task at hand. Here, we speculate that the agentive context of an action may help reduce uncertainty by promoting multisensory integration of relevant inputs. Our hypothesis is consistent with the fact that action can be considered as a powerful way to reduce uncertainty, since it allows better prediction of outcomes (<xref ref-type="bibr" rid="ref40">Yon et al., 2018</xref>). We therefore speculate that this reduction of uncertainty during sensory integration would help refine the distinction of motions whose characteristics would slightly vary. Thus, being intentionally active during a perceptual task may be particularly valuable in situations with a high level of sensory ambiguity.</p>
<p>The present study sought to explore how being active might impact the perception of one&#x2019;s own motion at different levels of sensory ambiguity. Specifically, we aimed to investigate to what extent having control over one&#x2019;s own motion helps distinguish it from an externally generated motion under uncertainty. To that purpose, we used a two-alternative forced-choice (2AFC) discrimination task adapted from previous studies on audition (<xref ref-type="bibr" rid="ref34">Reznik et al., 2015</xref>; <xref ref-type="bibr" rid="ref32">Paraskevoudi and SanMiguel, 2021</xref>). In this task, the participants had to compare two displacements and identify which was larger. To compare active versus passive motion, two different conditions were presented. In the active condition, the first displacement was intentionally triggered by the participant whilst the second was externally generated (that is, without any participant-intentional action). In the passive condition, both displacements were externally generated. The first displacement had a fixed acceleration value, and the second displacement varied around this value. Two levels of ambiguity were introduced into the discrimination task by manipulating acceleration ratios between the two movements. We expected better perceptual discrimination between movements when participants intentionally triggered the first displacement than when both displacements were externally generated, particularly under a high level of ambiguity between movements.</p>
</sec>
<sec id="sec2" sec-type="materials|methods">
<title>2. Materials and methods</title>
<sec id="sec3">
<title>2.1. Participants</title>
<p>Twenty participants (12 men, 8 women, M<sub>age</sub>&#x2009;=&#x2009;27, SD&#x2009;=&#x2009;5, age range: 20&#x2013;32 years) took part in the experiment. This sample size was determined using a power analysis based on a comparable sound-discrimination study (<xref ref-type="bibr" rid="ref32">Paraskevoudi and SanMiguel, 2021</xref>). Participants were recruited from the population of students and engineers of the ONERA center of Salon-de-Provence. They were all na&#x00EF;ve to the purpose and hypotheses of the study. None of them reported vestibular or other sensory issues (all had corrected-to-normal vision), nor any history of motion or cybersickness. The French CERSTAPS ethics committee approved the experiment (IRB00012476-2021-23-06-119), and participants gave their informed consent prior to the experiment, in accordance with the 1964 Declaration of Helsinki.</p>
</sec>
<sec id="sec4">
<title>2.2. Apparatus</title>
<p>The physical motion was generated via a mobile platform (Motion Systems PS-6TM-550<sub>&#x00A9;</sub>; <xref rid="fig1" ref-type="fig">Figure 1A</xref>). The visual dynamic environment was simulated using a virtual reality headset (Varjo VR-3 Pro<sub>&#x00A9;</sub>). It consisted of a virtual textured corridor (provided by the Unity3D game engine) 2&#x2009;m wide &#x00D7; 5&#x2009;m high &#x00D7; 6.25&#x2009;m deep (<xref rid="fig1" ref-type="fig">Figure 1B</xref>). All visual events allowing participants to situate themselves within the trials (trial start signal, choice gauge, movement announcements, responses) were displayed 3&#x2009;m from the headset in the virtual corridor. Participants&#x2019; position in the virtual environment was individually calibrated via a pair of cameras (SteamVR Lighthouses 2.0<sub>&#x00A9;</sub>) installed at the top two corners of the wall facing the platform. The image centre was adjusted to each participant&#x2019;s eye level. All actions, choices and responses of participants were generated by a dual throttle controller (Thrustmaster HOTAS Warthog TM) and a button placed at the handle extremity. To mask any possible sound information from the platform, earphones (Turtle Beach Stealth 350VR headset) were used to produce constant white noise throughout trials.</p>
<fig position="float" id="fig1"><label>Figure 1</label>
<caption>
<p><bold>(A)</bold> Schematic representation of the setup, <bold>(B)</bold> Virtual environment.</p>
</caption>
<graphic xlink:href="fpsyg-14-1148793-g001.tif"/>
</fig>
</sec>
<sec id="sec5">
<title>2.3. Experimental task and stimuli</title>
<p>Participants were subjected to forward translations of a fixed duration of 3&#x2009;s and with a triangular acceleration profile (1.5&#x2009;s of acceleration and 1.5&#x2009;s of deceleration). This same profile was applied at the visual level so that participants advanced through the virtual scene by scrolling the textured corridor congruently and synchronously to the platform. Based on this, they had to perform a two-alternative forced-choice (2AFC) discrimination task, identifying which displacement, the first (standard) or second (comparison), was larger.</p>
<p>Participants performed the comparison task at two levels of intensity of the standard displacement. The first was the minimum stimulation at which the platform could generate all the comparison pairs, i.e., 0.012&#x2009;m.s<sup>&#x2212;2</sup>, considered here as the low level of intensity. The second was set at 0.030&#x2009;m.s<sup>&#x2212;2</sup>, close to the maximum limit (distance to be covered) of the system, considered here as the high level of intensity. In both cases, we made sure that the standard stimuli were perceptible by the participants, i.e., above threshold (determined via a detection task performed the day before). From these standard values, we increased or decreased the acceleration rate of the second, comparison displacement. The comparison displacement thus varied, being larger or smaller than the standard displacement depending on the difference applied. These differences were 0, &#x00B1;0.002, &#x00B1;0.004, &#x00B1;0.006, &#x00B1;0.008, &#x00B1;0.010&#x2009;m.s<sup>&#x2212;2</sup> and&#x2009;+&#x2009;0.012&#x2009;m.s<sup>&#x2212;2</sup>. Thus, whatever the level of intensity, the differences in acceleration rate between the standard (fixed) displacement and the comparison (variable) displacement were identical. However, for a given difference, the ratio to the standard reference differed depending on the current intensity level: by keeping the same comparison values across the two intensity levels, we generated two conditions in which the differences were more or less marked from a relative point of view (i.e., a difference of 0.002&#x2009;m.s<sup>&#x2212;2</sup> is relatively more marked for a 0.012&#x2009;m.s<sup>&#x2212;2</sup> standard stimulation than for a 0.030&#x2009;m.s<sup>&#x2212;2</sup> standard stimulation). This configuration thus yielded two conditions differing in terms of ambiguity (difficulty): a high level of ambiguity for the high level of intensity, and a low level of ambiguity for the low level of intensity.</p>
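The relative nature of the two ambiguity levels can be checked numerically. The following short Python sketch (illustrative only, not code from the study) uses the acceleration values given above to show that the same absolute difference corresponds to a smaller Weber fraction, hence more ambiguity, for the high-intensity standard:

```python
# Illustration of the two ambiguity levels: the same absolute acceleration
# differences are applied to both standard intensities, so the *relative*
# difference (Weber fraction) is smaller, i.e., more ambiguous, for the
# high-intensity standard.

standards = {"low": 0.012, "high": 0.030}  # m.s^-2

# Differences applied to the comparison displacement (m.s^-2):
# 0, +/-0.002 ... +/-0.010, and +0.012
diffs = [0.0] + [s * d for d in (0.002, 0.004, 0.006, 0.008, 0.010)
                 for s in (+1, -1)] + [0.012]

def weber_fraction(standard, diff):
    """Relative difference between comparison and standard displacement."""
    return abs(diff) / standard

# A 0.002 m.s^-2 difference is ~16.7% of the low standard,
# but only ~6.7% of the high standard.
low_wf = weber_fraction(standards["low"], 0.002)
high_wf = weber_fraction(standards["high"], 0.002)
```

The asymmetry `low_wf > high_wf` is exactly what makes the high-intensity condition the high-ambiguity condition.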
<p>In addition, the participants performed the task under two conditions of motion control, i.e., passive (observer) versus active (manual trigger). In the active condition, participants manually chose the intensity level for the next trial. By confirming this choice via a button press, they had control over the first displacement, whereas the second displacement was generated automatically. In passive trials, both displacements were automatically generated. Thus, four types of trials were considered: Active-High (AH), Active-Low (AL), Passive-High (PH) and Passive-Low (PL). Each block of 48 trials allowed all trial types (AH, AL, PH, PL) and comparison values to be presented once in a randomised fashion. Each participant performed 6 blocks, thus representing 288 pseudo-randomised trials, for a total duration of approximately 2&#x2009;h&#x2009;10&#x2009;min. A mandatory break was scheduled in the middle of the session, but participants were free to take a break at the end of each block.</p>
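The block structure described above (4 trial types crossed with 12 comparison values, giving 48 trials per block and 288 over 6 blocks) can be sketched as follows. This is an assumed reconstruction for illustration, not the study's actual trial-generation code:

```python
import itertools
import random

# Assumed block construction: each block fully crosses the 4 trial types
# with the 12 comparison offsets, presented once each in random order.
trial_types = ["AH", "AL", "PH", "PL"]
comparison_diffs = [0.0, 0.002, -0.002, 0.004, -0.004, 0.006, -0.006,
                    0.008, -0.008, 0.010, -0.010, 0.012]  # m.s^-2

def make_block(rng):
    """One randomised block: every (type, offset) pair exactly once."""
    block = list(itertools.product(trial_types, comparison_diffs))
    rng.shuffle(block)
    return block

rng = random.Random(0)
# 4 types x 12 values = 48 trials/block; 6 blocks = 288 trials/session.
session = [trial for _ in range(6) for trial in make_block(rng)]
```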
</sec>
<sec id="sec6">
<title>2.4. Procedure</title>
<p>Participants were first provided with the experimental objectives and instructions and signed the consent form. Then, they were strapped into seats on the mobile platform. The two-alternative forced-choice (2AFC) discrimination task started after a calibration process and the completion of a training block of about 15 trials.</p>
<p>During the 2AFC discrimination, each trial had three main phases: a pre-stimulation phase, a stimulation phase, and a response phase (<xref rid="fig2" ref-type="fig">Figure 2</xref>).</p>
<fig position="float" id="fig2"><label>Figure 2</label>
<caption>
<p>Schematic representation of the different phases of a trial.</p>
</caption>
<graphic xlink:href="fpsyg-14-1148793-g002.tif"/>
</fig>
<p>The pre-stimulation phase began with a one-second signal onset (black frame and central cross) that indicated whether the trial would be externally generated (passive) or intentionally generated (active). In the active condition, the participant was also kept informed of the number of trials of each type (low or high) remaining to be generated by him or her. Then, the first displacement was announced during 1&#x2009;s. In the active condition, participants were then asked to choose an intensity level for the next trial through a cursor to be positioned on a visual gauge, using a throttle (<xref rid="fig2" ref-type="fig">Figure 2</xref>: <italic>motion selection</italic>). They then pressed a button on the throttle to generate the first displacement, which started without latency after the disappearance of the gauge. In the passive condition, as previously designed in a similar task, the interval between the visual cue and the onset of the standard displacement was randomly selected from the participants&#x2019; distribution of press times (<xref ref-type="bibr" rid="ref32">Paraskevoudi and SanMiguel, 2021</xref>). In addition, participants were not informed of the type of intensity selected in this passive condition.</p>
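The passive-condition timing described above, where the cue-to-onset interval is drawn from the participant's own press-time distribution, can be sketched as follows (hypothetical latencies; illustrative only, not the study's code):

```python
import random

# Hypothetical button-press latencies recorded in active trials (seconds).
press_times = [0.62, 0.71, 0.55, 0.80, 0.66, 0.74]

def passive_onset_delay(recorded_press_times, rng=random):
    """Resample one observed latency so that passive trials mimic the
    self-paced cue-to-onset timing of active trials."""
    return rng.choice(recorded_press_times)

delay = passive_onset_delay(press_times, random.Random(1))
```

Resampling from the empirical distribution (rather than a fixed interval) keeps the temporal predictability of passive trials comparable to that of active trials.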
<p>Participants then entered the stimulation phase, during which the two displacements (i.e., standard and comparison displacements, lasting 3&#x2009;s each) were successively produced (<xref rid="fig2" ref-type="fig">Figure 2</xref>: <italic>standard and comparison displacements</italic>). Importantly, the comparison displacement was announced and always externally generated after visual pre-cueing. The time interval between pre-cueing and the start of the comparison displacement was randomly drawn from each participant&#x2019;s distribution of press times. The stimulation phase ended with a red ending signal (END) presented for 1&#x2009;s.</p>
<p>Then, participants entered a response phase during which they first identified and then confirmed with the throttle one of the two displacements as larger (displacement of higher intensity; <xref rid="fig2" ref-type="fig">Figure 2</xref>: <italic>discrimination</italic>). Next, they had to respond using a continuous (analogue) scale from 1 to 5, ranging from <italic>&#x201C;I am not sure at all&#x201D;</italic> to <italic>&#x201C;I am absolutely sure,&#x201D;</italic> to assess their confidence in their motion discrimination (<xref rid="fig2" ref-type="fig">Figure 2</xref>: <italic>confidence</italic>).</p>
<p>At the end of the trial, the platform moved back from its final to its initial position along a smooth, fixed-duration (3&#x2009;s) trajectory, so that participants were ready for the next trial. Videos of the events that take place during the trials, from the point of view of the virtual environment, are available in the Supplementary data.</p>
</sec>
<sec id="sec7">
<title>2.5. Data analysis</title>
<p>The proportion of responses in which the second displacement was perceived as larger was calculated for each condition, according to the different comparison values. This was used to fit psychometric curves for each participant and condition with a cumulative normal function via the <italic>quickpsy</italic> package of R (version 4.0.0). We used this package since it has been specifically developed for this type of analysis (<xref ref-type="bibr" rid="ref29">Linares and L&#x00F3;pez-Moliner, 2016</xref>) and was recently used in a similar task (<xref ref-type="bibr" rid="ref32">Paraskevoudi and SanMiguel, 2021</xref>). The lower asymptote of the psychometric function, corresponding to the gamma parameter of the fitting model, was set to 0. The upper asymptote (i.e., lambda), which corresponds to the lapse rate, was set to 0.001. These fitting parameters have previously been used in other 2AFC discrimination tasks (<xref ref-type="bibr" rid="ref32">Paraskevoudi and SanMiguel, 2021</xref>) and enabled us to generate fitting models with the most satisfactory Akaike Information Criterion (AIC) for our data. The AIC evaluates how well a model fits the data from which it was generated, based on the relative amount of information lost by the model: the less information a model loses, the lower its score and the better its quality.</p>
<p>Two variables were extracted from the psychometric curves for each participant in each condition. First, the Point of Subjective Equality (PSE), corresponding to the value at which the comparison displacement is judged statistically equal to the standard displacement, was used to express a potential perceptual bias across conditions. Indeed, a shift in PSE relative to the Point of Objective Equality (i.e., the point of physical equality between the two displacements, here 0&#x2009;m.s<sup>&#x2212;2</sup>) reflects a biased estimate of perceived motion intensity. A higher PSE indicates that the standard displacement is perceived as larger: if the PSE is positive, the participant judged the two displacements as identical when the comparison displacement had a slightly higher acceleration rate than the standard displacement (by the value of the PSE), meaning that the standard displacement was perceived as having a higher acceleration rate than its actual one. The PSE corresponds to the alpha value of the model. Second, the Just Noticeable Difference (JND) was extracted to establish the discrimination sensitivity between the two displacements. It corresponds to the minimum gap between stimuli for a difference between displacements to be perceived: the lower the JND, the better the discrimination performance; the higher the JND, the more difficult it was for the participant to discriminate differences between motions. Therefore, and in accordance with Weber&#x2019;s law, a greater JND is expected as the intensity increases. The JND corresponds to the beta value of the model.</p>
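The curve fitting above was performed with R's quickpsy; as an illustration of the same model (cumulative normal with the asymptotes gamma = 0 and lambda = 0.001), a minimal Python/SciPy sketch on synthetic data might look as follows. Here the fitted location plays the role of the PSE (alpha) and the fitted spread is taken as the JND (beta), matching the description above; all data values are simulated:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

GAMMA, LAM = 0.0, 0.001  # lower asymptote and lapse rate, as in the text

def psychometric(x, pse, sigma):
    """Cumulative-normal psychometric function with fixed asymptotes."""
    return GAMMA + (1.0 - GAMMA - LAM) * norm.cdf(x, loc=pse, scale=sigma)

# Simulated data: comparison-minus-standard differences (m.s^-2) and
# proportion of "second displacement larger" responses.
x = np.array([-0.010, -0.008, -0.006, -0.004, -0.002, 0.0,
               0.002,  0.004,  0.006,  0.008,  0.010, 0.012])
true_pse, true_sigma = 0.0017, 0.007
rng = np.random.default_rng(0)
y = rng.binomial(24, psychometric(x, true_pse, true_sigma)) / 24

(pse, sigma), _ = curve_fit(psychometric, x, y, p0=[0.0, 0.01],
                            bounds=([-0.05, 1e-4], [0.05, 0.1]))
# pse   -> alpha: point of subjective equality (perceptual bias)
# sigma -> beta: spread of the curve, taken here as the JND (sensitivity)
```

A positive fitted `pse` corresponds to the bias described above: the standard displacement is perceived as larger than it physically is.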
<p>In addition, the confidence level each participant reported for their responses was recorded for all conditions. Higher confidence indicates less uncertainty in the participant&#x2019;s judgement on the current task. Additional metacognition analyses were conducted based on these confidence levels, enabling the <italic>M-ratio</italic> and <italic>meta-d</italic>&#x2019; variables to be explored. These analyses assessed the participants&#x2019; ability to translate their performance into their confidence levels. Indeed, <italic>meta-d</italic>&#x2019; indexes metacognitive sensitivity, i.e., the ability of participants to adapt their confidence to their performance (giving high confidence scores when they are right and low confidence scores when they are wrong): the higher it is, the better the participant&#x2019;s metacognitive sensitivity. The <italic>M-ratio</italic> is the ratio of <italic>meta-d&#x2019;</italic> to <italic>d&#x2019;</italic>, where <italic>d&#x2019;</italic> represents performance on the task; <italic>d&#x2019;</italic> is not reported here since the JND is our reference measure of discrimination performance, and the analyses showed that results behaved in the same way for these two parameters. Metacognitive efficiency was computed for each participant based on confidence scores, in each condition separately, using the <italic>metaSDT</italic> package (<xref ref-type="bibr" rid="ref13">Craddock, 2018</xref>) in the R environment.</p>
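The metacognitive measures above can be illustrated with standard signal-detection formulas. Note that in the study <italic>meta-d</italic>' itself is estimated by maximum likelihood via the metaSDT R package; the Python sketch below (with hypothetical hit and false-alarm rates) only shows the type-1 d' computation and the M-ratio arithmetic:

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # probit transform

def type1_dprime(hit_rate, fa_rate):
    """Type-1 sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
    return z(hit_rate) - z(fa_rate)

def m_ratio(meta_d, d):
    """Metacognitive efficiency: meta-d' expressed relative to d'."""
    return meta_d / d

# Hypothetical rates; in the study meta-d' is fitted by maximum
# likelihood (metaSDT in R), not computed from a closed-form expression.
d = type1_dprime(0.80, 0.30)
eff = m_ratio(1.1, d)  # M-ratio < 1 indicates imperfect efficiency
```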
<p>All these variables (i.e., PSE, JND, levels of confidence, <italic>M-ratio</italic> and <italic>meta-d</italic>&#x2019;) were analysed using a repeated-measures ANOVA combining two factors: agency condition (Active versus Passive) and level of ambiguity (Low versus High). Effect sizes were estimated using partial eta-squared (&#x03B7;<sup>2</sup><sub>p</sub>). All statistical analyses were performed using R (version 4.0.0). Analysis code is available from the authors upon request, without undue reservation.</p>
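For a 2 x 2 within-subject design such as this one, each ANOVA effect reduces to a paired t-test on per-subject contrast scores, with F(1, n - 1) = t^2. A minimal Python sketch on hypothetical per-subject JND data (illustrative only, not the study's analysis code):

```python
import math
from statistics import mean, stdev

def paired_t(contrasts):
    """t statistic for per-subject contrast scores; in a 2x2
    repeated-measures design, F(1, n-1) = t**2 for each effect."""
    n = len(contrasts)
    return mean(contrasts) / (stdev(contrasts) / math.sqrt(n))

# Hypothetical per-subject JNDs (m.s^-2) in the four cells.
AH = [0.007, 0.008, 0.006, 0.007, 0.009]
AL = [0.007, 0.007, 0.006, 0.008, 0.008]
PH = [0.011, 0.012, 0.010, 0.011, 0.013]
PL = [0.006, 0.007, 0.005, 0.006, 0.007]

# Interaction contrast: (AH - AL) - (PH - PL), computed per subject.
inter = [(ah - al) - (ph - pl) for ah, al, ph, pl in zip(AH, AL, PH, PL)]
F_interaction = paired_t(inter) ** 2
```

The same contrast logic yields the main effects (e.g., averaging the two active cells against the two passive cells per subject).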
</sec>
</sec>
<sec id="sec8" sec-type="results">
<title>3. Results</title>
<sec id="sec9">
<title>3.1. Psychometric curves</title>
<p>Visual inspection of the psychometric curves from the discrimination task (<xref rid="fig3" ref-type="fig">Figure 3</xref>) reveals a comparable slope for both levels of ambiguity in the self-generated motion condition (active). However, the slope appears to become shallower under high ambiguity in the externally generated motion condition (passive). More strikingly, there appears at first glance to be a difference between the active and the passive condition under both levels of motion ambiguity (low <italic>vs.</italic> high). To corroborate these observations, statistical analyses of the PSE and JND extracted from each individual curve were subsequently performed. To assess whether these parameters differed across agentive conditions and levels of ambiguity, we performed a repeated-measures ANOVA evaluating the influence of the agentive condition (active <italic>vs.</italic> passive) and the level of ambiguity (high <italic>vs.</italic> low). We applied Bonferroni correction for post-hoc comparisons.</p>
<fig position="float" id="fig3"><label>Figure 3</label>
<caption>
<p>Psychometric curve depending on level of motion ambiguity and agency condition.</p>
</caption>
<graphic xlink:href="fpsyg-14-1148793-g003.tif"/>
</fig>
<p>The analysis conducted on PSE revealed a significant main effect of the agentive condition (<italic>F</italic><sub>(1,19)</sub>&#x2009;=&#x2009;6.90, <italic>p</italic>&#x2009;=&#x2009;0.017, &#x03B7;<sup>2</sup><sub>p</sub>&#x2009;=&#x2009;0.27), with a higher PSE in the active condition (M<sub>A</sub>&#x2009;=&#x2009;1.7&#x2009;&#x00D7;&#x2009;10<sup>&#x2212;3</sup>, M<sub>P</sub>&#x2009;=&#x2009;0.9&#x2009;&#x00D7;&#x2009;10<sup>&#x2212;3</sup>, SD<sub>A</sub>&#x2009;=&#x2009;0.002, SD<sub>P</sub>&#x2009;=&#x2009;0.003). However, no effect of the level of motion ambiguity was revealed (<italic>F</italic><sub>(1,19)</sub>&#x2009;=&#x2009;1.76, <italic>p</italic>&#x2009;=&#x2009;0.2; <xref rid="fig4" ref-type="fig">Figure 4A</xref>), nor any interaction effect (<italic>F</italic><sub>(1,19)</sub>&#x2009;=&#x2009;0.76, <italic>p</italic>&#x2009;=&#x2009;0.395). In addition, we analysed whether any of the PSEs differed significantly from zero. <italic>t</italic>-tests revealed that PSEs differed from zero in the AL (<italic>t</italic>(19)&#x2009;=&#x2009;3.30, <italic>p</italic>&#x2009;=&#x2009;0.004), PL (<italic>t</italic>(19)&#x2009;=&#x2009;2.71, <italic>p</italic>&#x2009;=&#x2009;0.014), and AH (<italic>t</italic>(19)&#x2009;=&#x2009;3.12, <italic>p</italic>&#x2009;=&#x2009;0.006) conditions, whilst the difference was non-significant for the PH condition (<italic>t</italic>(19)&#x2009;=&#x2009;0.27, <italic>p</italic>&#x2009;=&#x2009;0.79).</p>
<fig position="float" id="fig4"><label>Figure 4</label>
<caption>
<p><bold>(A)</bold> Mean of PSE, <bold>(B)</bold> Mean of JND. Significant interaction effect between motion ambiguity and agentive condition on JND (<italic>p</italic>&#x003C;0.001), with post-hoc comparisons showing a lower JND for the active than for the passive high-ambiguity condition (one-tailed paired-samples post-hoc <italic>t</italic>-test; <italic>p</italic>&#x003C;0.001) and a significantly higher JND for the passive high-ambiguity than for the passive low-ambiguity condition (one-tailed paired-samples post-hoc <italic>t</italic>-test; <italic>p</italic>&#x003C;0.001). Error bars represent within-subject confidence intervals, calculated using the <italic>summarySEwithin</italic> function in <italic>R</italic>.</p>
</caption>
<graphic xlink:href="fpsyg-14-1148793-g004.tif"/>
</fig>
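The within-subject error bars mentioned in the caption follow the Cousineau normalisation with Morey's correction that <italic>summarySEwithin</italic> implements in R. A minimal Python sketch of that computation, on hypothetical JND data (not the authors' code):

```python
from math import sqrt
from statistics import mean, stdev

def within_subject_sem(data):
    """Per-condition SEM after Cousineau normalisation with Morey's
    correction, mirroring what R's summarySEwithin computes.
    data: {subject: {condition: value}} for a full within-subject design."""
    subjects = list(data)
    conditions = list(next(iter(data.values())))
    grand = mean(v for scores in data.values() for v in scores.values())
    # Remove each subject's own mean, then add back the grand mean
    normed = {s: {c: data[s][c] - mean(data[s].values()) + grand
                  for c in conditions} for s in subjects}
    morey = sqrt(len(conditions) / (len(conditions) - 1))  # bias correction
    n = len(subjects)
    return {c: morey * stdev([normed[s][c] for s in subjects]) / sqrt(n)
            for c in conditions}

# Hypothetical JNDs for two conditions (AH, PH) and four participants
jnd = {"s1": {"AH": 0.006, "PH": 0.010},
       "s2": {"AH": 0.008, "PH": 0.012},
       "s3": {"AH": 0.007, "PH": 0.013},
       "s4": {"AH": 0.007, "PH": 0.009}}
sems = within_subject_sem(jnd)
```

The normalisation removes between-subject variability so that the error bars reflect only the within-subject variance relevant to the repeated-measures comparisons.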
<p>In contrast, the analysis conducted on JND did not reveal a main effect of the agentive condition (<italic>F</italic><sub>(1,19)</sub>&#x2009;=&#x2009;2.71, <italic>p</italic>&#x2009;=&#x2009;0.12; <xref rid="fig4" ref-type="fig">Figure 4B</xref>). However, it revealed a main effect of the level of ambiguity (<italic>F</italic><sub>(1,19)</sub>&#x2009;=&#x2009;19.06, <italic>p</italic>&#x2009;&#x003C;&#x2009;0.001, &#x03B7;<sup>2</sup><sub>p</sub>&#x2009;=&#x2009;0.50), with a higher JND for the high level of ambiguity (M<sub>L</sub>&#x2009;=&#x2009;0.007, M<sub>H</sub>&#x2009;=&#x2009;0.009, SD<sub>L</sub>&#x2009;=&#x2009;0.002, SD<sub>H</sub>&#x2009;=&#x2009;0.004). Importantly, a significant interaction between the two factors was found (<italic>F</italic><sub>(1,19)</sub>&#x2009;=&#x2009;19.07, <italic>p</italic>&#x2009;&#x003C;&#x2009;0.001, &#x03B7;<sup>2</sup><sub>p</sub>&#x2009;=&#x2009;0.50). The Bonferroni-corrected post-hoc analysis revealed that JND was lower for intentionally generated motions (active) than for externally generated motions (passive) under high ambiguity (M<sub>AH</sub>&#x2009;=&#x2009;0.007, M<sub>PH</sub>&#x2009;=&#x2009;0.011, SD<sub>AH</sub>&#x2009;=&#x2009;0.002, SD<sub>PH</sub>&#x2009;=&#x2009;0.005, <italic>t</italic><sub>(19)</sub>&#x2009;=&#x2009;&#x2212;4.27, <italic>p</italic>&#x2009;&#x003C;&#x2009;0.001, <italic>d</italic>&#x2009;=&#x2009;0.98). In addition, JND was significantly higher for high <italic>vs.</italic> low motion ambiguity in the passive condition (M<sub>PL</sub>&#x2009;=&#x2009;0.006, M<sub>PH</sub>&#x2009;=&#x2009;0.011, SD<sub>PL</sub>&#x2009;=&#x2009;0.002, SD<sub>PH</sub>&#x2009;=&#x2009;0.005, <italic>t</italic><sub>(19)</sub>&#x2009;=&#x2009;6.16, <italic>p</italic>&#x2009;&#x003C;&#x2009;0.001, <italic>d</italic>&#x2009;=&#x2009;1.22).</p>
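The one-tailed paired post-hoc tests and the reported Cohen's <italic>d</italic> values follow standard paired-samples formulas; a self-contained sketch with hypothetical per-participant JNDs (not the study data):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(x, y):
    """Paired-samples t statistic and degrees of freedom."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / sqrt(n))
    return t, n - 1

def cohens_d_paired(x, y):
    """Paired effect size: mean difference over SD of the differences."""
    diffs = [a - b for a, b in zip(x, y)]
    return mean(diffs) / stdev(diffs)

# Hypothetical per-participant JNDs in the AH and PH conditions
jnd_ah = [0.006, 0.008, 0.007, 0.007, 0.006]
jnd_ph = [0.010, 0.012, 0.013, 0.009, 0.011]
t, df = paired_t(jnd_ah, jnd_ph)  # negative t: lower JND when active
```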
</sec>
<sec id="sec10">
<title>3.2. Confidence and metacognition</title>
<p>The analyses of confidence levels revealed a significant effect of agentive condition (<italic>F</italic><sub>(1,19)</sub>&#x2009;=&#x2009;5.77, <italic>p</italic>&#x2009;=&#x2009;0.027, &#x03B7;<sup>2</sup><sub>p</sub>&#x2009;=&#x2009;0.23). Participants were more confident when they intentionally generated the motions (M<sub>A</sub>&#x2009;=&#x2009;3.63, M<sub>P</sub>&#x2009;=&#x2009;3.56, SD<sub>A</sub>&#x2009;=&#x2009;1.14, SD<sub>P</sub>&#x2009;=&#x2009;1.14). However, neither an effect of level of ambiguity (<italic>F</italic><sub>(1,19)</sub>&#x2009;=&#x2009;1.47, <italic>p</italic>&#x2009;=&#x2009;0.24) nor an interaction with agentive condition (<italic>F</italic><sub>(1,19)</sub>&#x2009;=&#x2009;2.16, <italic>p</italic>&#x2009;=&#x2009;0.16) was found.</p>
<p>Complementary metacognition analyses were conducted on these confidence levels to assess the extent to which each participant&#x2019;s reported confidence correlated with his/her actual performance. The <italic>M-ratio</italic> analysis revealed no effect of level of ambiguity (<italic>F</italic><sub>(1,19)</sub>&#x2009;=&#x2009;0.65, <italic>p</italic>&#x2009;=&#x2009;0.43), no effect of agentive condition (<italic>F</italic><sub>(1,19)</sub>&#x2009;=&#x2009;1.24, <italic>p</italic>&#x2009;=&#x2009;0.28), and no interaction effect (<italic>F</italic><sub>(1,19)</sub>&#x2009;=&#x2009;0.002, <italic>p</italic>&#x2009;=&#x2009;0.92; <xref rid="fig5" ref-type="fig">Figure 5A</xref>). In contrast, the analysis revealed an effect of level of ambiguity on the <italic>meta-d&#x2019;</italic> variable, with a lower <italic>meta-d&#x2019;</italic> at the high level of ambiguity (M<sub>L</sub>&#x2009;=&#x2009;1.24, M<sub>H</sub>&#x2009;=&#x2009;0.92, SD<sub>L</sub>&#x2009;=&#x2009;0.85, SD<sub>H</sub>&#x2009;=&#x2009;0.76, <italic>F</italic><sub>(1,19)</sub>&#x2009;=&#x2009;7.87, <italic>p</italic>&#x2009;=&#x2009;0.011, &#x03B7;<sup>2</sup><sub>p</sub>&#x2009;=&#x2009;0.29). However, no effect of agentive condition was revealed (<italic>F</italic><sub>(1,19)</sub>&#x2009;=&#x2009;1.56, <italic>p</italic>&#x2009;=&#x2009;0.22; <xref rid="fig5" ref-type="fig">Figure 5B</xref>). Although we did not find a significant interaction between the two factors (ambiguity, agentive condition), a trend was noted (<italic>F</italic><sub>(1,19)</sub>&#x2009;=&#x2009;3.13, <italic>p</italic>&#x2009;=&#x2009;0.09).</p>
<fig position="float" id="fig5"><label>Figure 5</label>
<caption>
<p><bold>(A)</bold> Mean of M-ratio, <bold>(B)</bold> Mean of meta-d&#x2019;. Significant effect of level of ambiguity on the meta-d&#x2019; variable (<italic>p</italic>&#x003C;0.05), but no effect of agentive condition. Error bars represent within-subject confidence intervals, calculated using the <italic>summarySEwithin</italic> function in <italic>R</italic>.</p>
</caption>
<graphic xlink:href="fpsyg-14-1148793-g005.tif"/>
</fig>
</sec>
</sec>
<sec id="sec11" sec-type="discussions">
<title>4. Discussion</title>
<p>According to Bergson&#x2019;s thinking (1896), voluntary action is linked to perception. Thus, understanding what governs being in control could help to elucidate human perception, and exploring the intentional nature of action may shed light on the mechanisms underlying perception. Recently, it has been proposed that action control could be a way to sharpen the representation of expected outcomes (<xref ref-type="bibr" rid="ref40">Yon et al., 2018</xref>; <xref ref-type="bibr" rid="ref26">Jagini, 2021</xref>). Based on these premises, the present study explored the role of action control in the perception of one&#x2019;s own motion at different levels of sensory ambiguity. To this end, we designed a discrimination task enabling us to manipulate both the ambiguity of pairs of displacements and the voluntary nature of the action. Performance on the discrimination task, as well as participants&#x2019; reported confidence in their perception of the displacements, was considered for each trial. We expected better perceptual discrimination between displacements when participants intentionally triggered the first movement, especially under a high level of motion ambiguity.</p>
<p>Analysis of the psychometric curves and of confidence levels highlighted three main results. First, we found better sensitivity (lower JND) in the active condition than in the passive condition, specific to the high level of ambiguity. Second, we observed an effect of agentive condition on PSE (perceptual bias), with a higher PSE in the active condition, but no effect of the level of ambiguity. Third, we observed a general effect of agentive condition, with a higher level of confidence in the active condition, but no interaction with the level of ambiguity.</p>
<p>The two main parameters (PSE and JND) obtained from participants&#x2019; psychometric curves indicate how they performed the task. The PSE represents the value to be added to the standard displacement for the comparison displacement to be perceived as equal. It indicates how the standard displacement is perceived, since a difference in perceptual bias (PSE) between conditions represents either a perceptual attenuation or an enhancement of the first displacement relative to its physical reality. We did not observe any effect of level of ambiguity: the first displacement was perceived identically under both levels of ambiguity. Nor did we find any interaction between ambiguity and agentive condition. In contrast, we did find a general effect of the agentive condition, with a higher PSE in the active condition. Overall, participants tended to enhance the first displacement when they controlled it. In recent years, the control of action has been associated with both attenuation and enhancement of the related sensory processing, and the context of the action and of its sensory consequences appears to play a role in the underlying mechanisms. For example, previous studies suggest that the direction of the effect may depend on the intensity of the sensory consequences (<xref ref-type="bibr" rid="ref33">Reznik et al., 2015</xref>; <xref ref-type="bibr" rid="ref32">Paraskevoudi and SanMiguel, 2021</xref>). In our study, two levels of intensity were manipulated (<italic>&#x201C;High&#x201D;</italic> and <italic>&#x201C;Low&#x201D;</italic>); however, both can be considered rather low displacement intensities (as reported by the participants). We can assume that, in both cases, having control over the first displacement enhanced its perception, facilitating its comparison with the second.
The second parameter (JND) is particularly informative about the sensitivity of discrimination between displacements, representing the value at which a difference between the two displacements is perceived: the lower the JND, the better the discrimination performance. Under high ambiguity, better sensitivity (lower JND) was observed in the active condition than in the passive condition. Taken together, these results suggest that having control over the first displacement leads to greater stability of the mechanisms enabling its discrimination from the second when ambiguity varies. Therefore, being active leads to better discrimination of controlled information (the first displacement) from uncontrolled information (the second displacement) when ambiguity increases.</p>
<p>It should be noted that previous studies using the same type of discrimination task observed different effects of the active condition on these two parameters (<xref ref-type="bibr" rid="ref33">Reznik et al., 2015</xref>; <xref ref-type="bibr" rid="ref32">Paraskevoudi and SanMiguel, 2021</xref>). For example, <xref ref-type="bibr" rid="ref32">Paraskevoudi and SanMiguel (2021)</xref> found reduced perceived intensity (i.e., perceptual bias) for self-generated sounds when presented at supra-threshold intensities (high level), but which increased when presented at near-threshold intensities (low level). Also, the authors found no difference in discrimination sensitivity (i.e., JND). The fact that we extended the aforementioned paradigm to cover several new and important aspects could explain such differences.</p>
<p>First, in our study, participants had control over both the level of intensity they wanted to work at and the timing of the first displacement (the pair of displacements starting as soon as they confirmed their choice). Therefore, the agentive condition strongly mobilises the participant&#x2019;s intention towards the forthcoming comparison pair, rather than towards continuous control of the displacements. We added this intentional selection because the choice between different possible actions strongly influences the experience of control and agency (<xref ref-type="bibr" rid="ref5">Barlas and Obhi, 2013</xref>; <xref ref-type="bibr" rid="ref3">Barlas et al., 2017</xref>; <xref ref-type="bibr" rid="ref4">Barlas and Kopp, 2018</xref>). One could hypothesise that this stems from a higher predictability of events following the choice (i.e., regarding both the timing and the intensity of the first displacement). In addition, we speculate that the predictive mechanisms involved in this agentive situation lie mainly in the predictability and attentional commitment related to the participant&#x2019;s intention. Indeed, it is now established that agency is associated with a better commitment to the task (<xref ref-type="bibr" rid="ref10">Caspar et al., 2016</xref>) and with the mobilisation of attentional mechanisms (<xref ref-type="bibr" rid="ref39">Wen and Haggard, 2018</xref>). Since the participant&#x2019;s intention was always respected, we speculate that being in intentional control of the first displacement allowed a better integration of the related information. More precisely, according to the comparator model, the agentive situation refines the comparative mechanisms linking action to its consequences, and we can speculate that this serves a better integration of the sensory inputs related to the action.
Besides, it has recently been suggested that action may help sharpen the sensory integration of relevant sensory outcomes (<xref ref-type="bibr" rid="ref36">Schroeder et al., 2010</xref>; <xref ref-type="bibr" rid="ref40">Yon et al., 2018</xref>). In addition, we observed a general stability of both parameters (i.e., perceptual bias and sensitivity) when ambiguity varied: the sensitivity of discrimination remained constant between the two levels of ambiguity in the active condition, whereas it decreased (higher JND) in the passive condition when ambiguity increased. Thus, we speculate that intentional control led to a smaller prediction error on the first displacement, allowing better detection of the prediction-error gap on the second, uncontrolled displacement, even when ambiguity increased. In our high-ambiguity condition, the differences in the second displacement were more difficult to perceive (in terms of ratio to the standard displacement). Rather than demonstrating better discrimination in ambiguous situations under the agentive condition, our results tend to show a reduction in performance under the passive condition when the level of ambiguity increases. Therefore, a lower error signal in the agentive condition appears to promote stability and consistency in integrative performance when ambiguity increases. In this case, the prediction-error difference between the first, controlled displacement and the second, uncontrolled displacement remains sufficient to maintain discrimination. In contrast, a fully passive situation generates a higher prediction error, preventing the detection of smaller deviations from the first displacement during the second. Thus, our results highlight the impact of the decrease in prediction error in an agentive situation when there is a match between intentional control and its consequences. The fact that the difference from the passive situation is observed under more ambiguous conditions strengthens this hypothesis.</p>
<p>Second, we considered a far more complex multisensory stimulation than previous studies, which mainly used unimodal input (typically brief beep sounds). Recent studies suggest that predictive mechanisms engaged during an action promote the binding of sensory inputs relevant to the task at hand (<xref ref-type="bibr" rid="ref27">van Kemenade et al., 2016</xref>; <xref ref-type="bibr" rid="ref2">Arikan et al., 2017</xref>; <xref ref-type="bibr" rid="ref37">Straube et al., 2017</xref>). Since our results differ from those of previous studies conducted with a specific single sensory stimulation, we also speculate that action may shape sensory consequences differently depending on the amount of task-relevant information available. Future studies should clarify the contribution of each of the underlying mechanisms in such an agentive condition (i.e., attention, prediction, predictability, choice).</p>
<p>In addition, we performed an analysis of the participants&#x2019; confidence levels and metacognition. Having observed a general effect of being in control on confidence levels, but no interaction with the level of ambiguity, we decided to complement our analysis by considering metacognition variables. The <italic>meta-d&#x2019;</italic> variable was used to evaluate the correlation between the participant&#x2019;s confidence in his/her performance and the performance itself (<xref ref-type="bibr" rid="ref30">Maniscalco and Lau, 2012</xref>; <xref ref-type="bibr" rid="ref21">Fleming and Lau, 2014</xref>): when this score decreases, the participant&#x2019;s metacognitive performance decreases. An effect of the level of ambiguity on metacognitive performance was observed: as the level of ambiguity increases, metacognitive performance decreases. Interestingly, we observed a trend towards an interaction with the agentive condition. In the graphical representation of these results (<xref rid="fig5" ref-type="fig">Figure 5B</xref>), the metacognitive performance can be seen to decrease strongly in the passive condition compared to the active condition.</p>
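For context, meta-d&#x2019; expresses metacognitive sensitivity in the units of type-1 d&#x2019;, and the M-ratio is their quotient. A hedged stdlib-Python sketch with hypothetical values (the actual meta-d&#x2019; estimate requires a full type-2 SDT fit, e.g., the metaSDT R package cited in the references):

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # probit transform

def d_prime(hit_rate, fa_rate):
    """Type-1 sensitivity from hit and false-alarm rates."""
    return z(hit_rate) - z(fa_rate)

def m_ratio(meta_d, d):
    """Metacognitive efficiency: meta-d' expressed in units of d'.
    meta-d' itself comes from a type-2 SDT fit (not computed here)."""
    return meta_d / d

# Hypothetical participant: type-1 performance and a fitted meta-d'
d = d_prime(0.80, 0.30)        # type-1 sensitivity
efficiency = m_ratio(0.92, d)  # < 1: confidence tracks accuracy imperfectly
```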
<p>One limitation here is the number of trials compared to other studies using much shorter stimulations (increasing the number of trials using our multisensory stimulations would have made the study too long and onerous for the participants). A greater number of trials might highlight this trend towards maintaining metacognitive performance in the agentive condition. However, it is interesting to note that the agentive nature of the stimulation could promote not only the mechanisms of multisensory integration but also the mechanisms underlying metacognitive performance.</p>
</sec>
<sec id="sec12" sec-type="conclusions">
<title>5. Conclusion</title>
<p>To our knowledge, our study is the first to extend the notions of both agency and ambiguity management to human motion perception. Its main contribution lies in showing that being in control of one&#x2019;s motion is beneficial when faced with ambiguous situations. Our conclusions are strengthened by the fact that participants did not perceive themselves as performing better in the high-ambiguity active condition. Such a difference in the management of ambiguous situations should be further explored to better understand the perception of an observer versus an operator, particularly for critical situations. This would provide answers in key areas such as aeronautics and the automotive industry. Thus, there is clearly a need to better understand the underlying integrative mechanisms involved according to the operator&#x2019;s level of control.</p>
</sec>
<sec id="sec13" sec-type="data-availability">
<title>Data availability statement</title>
<p>The raw data supporting the conclusions of this article will be made available by the authors upon request and without undue reservation.</p>
</sec>
<sec id="sec14">
<title>Ethics statement</title>
<p>The studies involving human participants were reviewed and approved by The French CERSTAPS ethics committee (IRB00012476-2021-23-06-119). The patients/participants provided their written informed consent to participate in this study.</p>
</sec>
<sec id="sec15">
<title>Author contributions</title>
<p>A-LR, BB, J-CS, and LB: study conception and design, interpretation of results, and draft manuscript. A-LR: data collection and analysis preparation. All authors contributed to the article and approved the submitted version.</p>
</sec>
<sec id="sec150">
<title>Funding</title>
<p>This work was supported by APR Grants (DAR 1107) from the French National Space Research Centre (CNES).</p>
</sec>
<sec id="conf1" sec-type="COI-statement">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec id="sec100" sec-type="disclaimer">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
</body>
<back>
<sec id="sec17" sec-type="supplementary-material">
<title>Supplementary material</title>
<p>The Supplementary material for this article can be found online at: <ext-link xlink:href="https://www.frontiersin.org/articles/10.3389/fpsyg.2023.1148793/full#supplementary-material" ext-link-type="uri">https://www.frontiersin.org/articles/10.3389/fpsyg.2023.1148793/full#supplementary-material</ext-link></p>
<supplementary-material xlink:href="Video_1.MP4" id="SM2" mimetype="video/mp4" xmlns:xlink="http://www.w3.org/1999/xlink"/>
<supplementary-material xlink:href="Video_2.MP4" id="SM3" mimetype="video/mp4" xmlns:xlink="http://www.w3.org/1999/xlink"/>
</sec>
<ref-list>
<title>References</title>
<ref id="ref1"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Alais</surname> <given-names>D.</given-names></name> <name><surname>Burr</surname> <given-names>D.</given-names></name></person-group> (<year>2004</year>). <article-title>The ventriloquist effect results from near-optimal bimodal integration</article-title>. <source>Curr. Biol.</source> <volume>14</volume>, <fpage>257</fpage>&#x2013;<lpage>262</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.cub.2004.01.029</pub-id>, PMID: <pub-id pub-id-type="pmid">14761661</pub-id></citation></ref>
<ref id="ref2"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Arikan</surname> <given-names>B. E.</given-names></name> <name><surname>van Kemenade</surname> <given-names>B. M.</given-names></name> <name><surname>Straube</surname> <given-names>B.</given-names></name> <name><surname>Harris</surname> <given-names>L. R.</given-names></name> <name><surname>Kircher</surname> <given-names>T.</given-names></name></person-group> (<year>2017</year>). <article-title>Voluntary and involuntary movements widen the window of subjective simultaneity</article-title>. <source>i-Perception</source> <volume>8</volume>:<fpage>204166951771929</fpage>. doi: <pub-id pub-id-type="doi">10.1177/2041669517719297</pub-id>, PMID: <pub-id pub-id-type="pmid">28835813</pub-id></citation></ref>
<ref id="ref3"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Barlas</surname> <given-names>Z.</given-names></name> <name><surname>Hockley</surname> <given-names>W. E.</given-names></name> <name><surname>Obhi</surname> <given-names>S. S.</given-names></name></person-group> (<year>2017</year>). <article-title>The effects of freedom of choice in action selection on perceived mental effort and the sense of agency</article-title>. <source>Acta Psychol.</source> <volume>180</volume>, <fpage>122</fpage>&#x2013;<lpage>129</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.actpsy.2017.09.004</pub-id>, PMID: <pub-id pub-id-type="pmid">28942124</pub-id></citation></ref>
<ref id="ref4"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Barlas</surname> <given-names>Z.</given-names></name> <name><surname>Kopp</surname> <given-names>S.</given-names></name></person-group> (<year>2018</year>). <article-title>Action choice and outcome congruency independently affect intentional binding and feeling of control judgments</article-title>. <source>Front. Hum. Neurosci.</source> <volume>12</volume>:<fpage>137</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fnhum.2018.00137</pub-id>, PMID: <pub-id pub-id-type="pmid">29695958</pub-id></citation></ref>
<ref id="ref5"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Barlas</surname> <given-names>Z.</given-names></name> <name><surname>Obhi</surname> <given-names>S. S.</given-names></name></person-group> (<year>2013</year>). <article-title>Freedom, choice, and the sense of agency</article-title>. <source>Front. Hum. Neurosci.</source> <volume>7</volume>:<fpage>514</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fnhum.2013.00514</pub-id>, PMID: <pub-id pub-id-type="pmid">24009575</pub-id></citation></ref>
<ref id="ref6"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bays</surname> <given-names>P. M.</given-names></name> <name><surname>Flanagan</surname> <given-names>J. R.</given-names></name> <name><surname>Wolpert</surname> <given-names>D. M.</given-names></name></person-group> (<year>2006</year>). <article-title>Attenuation of self-generated tactile sensations is predictive, not postdictive</article-title>. <source>PLoS Biol.</source> <comment>Edited by J. Lackner</comment> <volume>4</volume>:<fpage>e28</fpage>. doi: <pub-id pub-id-type="doi">10.1371/journal.pbio.0040028</pub-id>, PMID: <pub-id pub-id-type="pmid">16402860</pub-id></citation></ref>
<ref id="ref7"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Blakemore</surname> <given-names>S.-J.</given-names></name> <name><surname>Wolpert</surname> <given-names>D. M.</given-names></name> <name><surname>Frith</surname> <given-names>C. D.</given-names></name></person-group> (<year>2002</year>). <article-title>Abnormalities in the awareness of action</article-title>. <source>Trends Cogn. Sci.</source> <volume>6</volume>, <fpage>237</fpage>&#x2013;<lpage>242</lpage>. doi: <pub-id pub-id-type="doi">10.1016/S1364-6613(02)01907-1</pub-id></citation></ref>
<ref id="ref8"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brooks</surname> <given-names>J. X.</given-names></name> <name><surname>Cullen</surname> <given-names>K. E.</given-names></name></person-group> (<year>2019</year>). <article-title>Predictive sensing: the role of motor signals in sensory processing</article-title>. <source>Biol. Psychiatry. Cogn. Neurosci. Neuroimaging</source> <volume>4</volume>, <fpage>842</fpage>&#x2013;<lpage>850</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.bpsc.2019.06.003</pub-id>, PMID: <pub-id pub-id-type="pmid">31401034</pub-id></citation></ref>
<ref id="ref9"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Carriot</surname> <given-names>J.</given-names></name> <name><surname>Brooks</surname> <given-names>J. X.</given-names></name> <name><surname>Cullen</surname> <given-names>K. E.</given-names></name></person-group> (<year>2013</year>). <article-title>Multimodal integration of self-motion cues in the vestibular system: active versus passive translations</article-title>. <source>J. Neurosci.</source> <volume>33</volume>, <fpage>19555</fpage>&#x2013;<lpage>19566</lpage>. doi: <pub-id pub-id-type="doi">10.1523/JNEUROSCI.3051-13.2013</pub-id>, PMID: <pub-id pub-id-type="pmid">24336720</pub-id></citation></ref>
<ref id="ref10"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Caspar</surname> <given-names>E. A.</given-names></name> <name><surname>Desantis</surname> <given-names>A.</given-names></name> <name><surname>Dienes</surname> <given-names>Z.</given-names></name> <name><surname>Cleeremans</surname> <given-names>A.</given-names></name> <name><surname>Haggard</surname> <given-names>P.</given-names></name></person-group> (<year>2016</year>). <article-title>The sense of agency as tracking control</article-title>. <source>PLoS One</source> <volume>11</volume>:<fpage>e0163892</fpage>. doi: <pub-id pub-id-type="doi">10.1371/journal.pone.0163892</pub-id>, PMID: <pub-id pub-id-type="pmid">27741253</pub-id></citation></ref>
<ref id="ref11"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chambon</surname> <given-names>V.</given-names></name> <name><surname>Haggard</surname> <given-names>P.</given-names></name></person-group> (<year>2012</year>). <article-title>Sense of control depends on fluency of action selection, not motor performance</article-title>. <source>Cognition</source> <volume>125</volume>, <fpage>441</fpage>&#x2013;<lpage>451</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.cognition.2012.07.011</pub-id></citation></ref>
<ref id="ref12"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cheng</surname> <given-names>Z.</given-names></name> <name><surname>Gu</surname> <given-names>Y.</given-names></name></person-group> (<year>2018</year>). <article-title>Vestibular system and self-motion</article-title>. <source>Front. Cell. Neurosci.</source> <volume>12</volume>:<fpage>456</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fncel.2018.00456</pub-id>, PMID: <pub-id pub-id-type="pmid">30524247</pub-id></citation></ref>
<ref id="ref13"><citation citation-type="other"><person-group person-group-type="author"><name><surname>Craddock</surname> <given-names>M.</given-names></name></person-group> (<year>2018</year>). <source>metaSDT: Calculate type 1 and type 2 signal detection measures</source>. R package version 0.5. 0, 2018. Available at: <ext-link xlink:href="https://github.com/craddm/metaSDT" ext-link-type="uri">https://github.com/craddm/metaSDT</ext-link> (Accessed April 5, 2023).</citation></ref>
<ref id="ref14"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cullen</surname> <given-names>K. E.</given-names></name></person-group> (<year>2019</year>). <article-title>Vestibular processing during natural self-motion: implications for perception and action</article-title>. <source>Nat. Rev. Neurosci.</source> <volume>20</volume>, <fpage>346</fpage>&#x2013;<lpage>363</lpage>. doi: <pub-id pub-id-type="doi">10.1038/s41583-019-0153-1</pub-id>, PMID: <pub-id pub-id-type="pmid">30914780</pub-id></citation></ref>
<ref id="ref15"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cullen</surname> <given-names>K. E.</given-names></name> <name><surname>Wang</surname> <given-names>L.</given-names></name></person-group> (<year>2020</year>). <article-title>Predictive coding in early vestibular pathways: implications for vestibular cognition</article-title>. <source>Cogn. Neuropsychol.</source> <volume>37</volume>, <fpage>423</fpage>&#x2013;<lpage>426</lpage>. doi: <pub-id pub-id-type="doi">10.1080/02643294.2020.1783222</pub-id>, PMID: <pub-id pub-id-type="pmid">32619395</pub-id></citation></ref>
<ref id="ref16"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cullen</surname> <given-names>K. E.</given-names></name> <name><surname>Zobeiri</surname> <given-names>O. A.</given-names></name></person-group> (<year>2021</year>). <article-title>Proprioception and the predictive sensing of active self-motion</article-title>. <source>Curr. Opin. Physiol.</source> <volume>20</volume>, <fpage>29</fpage>&#x2013;<lpage>38</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.cophys.2020.12.001</pub-id>, PMID: <pub-id pub-id-type="pmid">33954270</pub-id></citation></ref>
<ref id="ref17"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Donohue</surname> <given-names>S. E.</given-names></name> <name><surname>Green</surname> <given-names>J. J.</given-names></name> <name><surname>Woldorff</surname> <given-names>M. G.</given-names></name></person-group> (<year>2015</year>). <article-title>The effects of attention on the temporal integration of multisensory stimuli</article-title>. <source>Front. Integr. Neurosci.</source> <volume>9</volume>:<fpage>32</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fnint.2015.00032</pub-id></citation></ref>
<ref id="ref18"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ernst</surname> <given-names>M. O.</given-names></name> <name><surname>Banks</surname> <given-names>M. S.</given-names></name></person-group> (<year>2002</year>). <article-title>Humans integrate visual and haptic information in a statistically optimal fashion</article-title>. <source>Nature</source> <volume>415</volume>, <fpage>429</fpage>&#x2013;<lpage>433</lpage>. doi: <pub-id pub-id-type="doi">10.1038/415429a</pub-id>, PMID: <pub-id pub-id-type="pmid">11807554</pub-id></citation></ref>
<ref id="ref19"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ernst</surname> <given-names>M. O.</given-names></name> <name><surname>B&#x00FC;lthoff</surname> <given-names>H. H.</given-names></name></person-group> (<year>2004</year>). <article-title>Merging the senses into a robust percept</article-title>. <source>Trends Cogn. Sci.</source> <volume>8</volume>, <fpage>162</fpage>&#x2013;<lpage>169</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.tics.2004.02.002</pub-id>, PMID: <pub-id pub-id-type="pmid">15050512</pub-id></citation></ref>
<ref id="ref20"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fetsch</surname> <given-names>C. R.</given-names></name> <name><surname>DeAngelis</surname> <given-names>G. C.</given-names></name> <name><surname>Angelaki</surname> <given-names>D. E.</given-names></name></person-group> (<year>2013</year>). <article-title>Bridging the gap between theories of sensory cue integration and the physiology of multisensory neurons</article-title>. <source>Nat. Rev. Neurosci.</source> <volume>14</volume>, <fpage>429</fpage>&#x2013;<lpage>442</lpage>. doi: <pub-id pub-id-type="doi">10.1038/nrn3503</pub-id>, PMID: <pub-id pub-id-type="pmid">23686172</pub-id></citation></ref>
<ref id="ref21"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fleming</surname> <given-names>S. M.</given-names></name> <name><surname>Lau</surname> <given-names>H. C.</given-names></name></person-group> (<year>2014</year>). <article-title>How to measure metacognition</article-title>. <source>Front. Hum. Neurosci.</source> <volume>8</volume>:<fpage>443</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fnhum.2014.00443</pub-id></citation></ref>
<ref id="ref22"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Frith</surname> <given-names>C. D.</given-names></name> <name><surname>Blakemore</surname> <given-names>S.-J.</given-names></name> <name><surname>Wolpert</surname> <given-names>D. M.</given-names></name></person-group> (<year>2000</year>). <article-title>Abnormalities in the awareness and control of action</article-title>. <source>Philos. Trans. R. Soc. Lond. B Biol. Sci.</source> <volume>355</volume>, <fpage>1771</fpage>&#x2013;<lpage>1788</lpage>. doi: <pub-id pub-id-type="doi">10.1098/rstb.2000.0734</pub-id>, PMID: <pub-id pub-id-type="pmid">11205340</pub-id></citation></ref>
<ref id="ref23"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gu</surname> <given-names>Y.</given-names></name></person-group> (<year>2018</year>). <article-title>Vestibular system and self-motion</article-title>. <source>Front. Cell. Neurosci.</source> <volume>12</volume>:<fpage>456</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fncel.2018.00456</pub-id></citation></ref>
<ref id="ref250"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gu</surname> <given-names>Y.</given-names></name> <name><surname>Angelaki</surname> <given-names>D. E.</given-names></name> <name><surname>DeAngelis</surname> <given-names>G. C.</given-names></name></person-group> (<year>2008</year>). <article-title>Neural correlates of multisensory cue integration in macaque MSTd</article-title>. <source>Nat. Neurosci.</source> <volume>11</volume>, <fpage>1201</fpage>&#x2013;<lpage>1210</lpage>. doi: <pub-id pub-id-type="doi">10.1038/nn.2191</pub-id></citation></ref>
<ref id="ref24"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Haggard</surname> <given-names>P.</given-names></name> <name><surname>Tsakiris</surname> <given-names>M.</given-names></name></person-group> (<year>2009</year>). <article-title>The experience of agency: feelings, judgments, and responsibility</article-title>. <source>Curr. Dir. Psychol. Sci.</source> <volume>18</volume>, <fpage>242</fpage>&#x2013;<lpage>246</lpage>. doi: <pub-id pub-id-type="doi">10.1111/j.1467-8721.2009.01644.x</pub-id></citation></ref>
<ref id="ref25"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hughes</surname> <given-names>G.</given-names></name> <name><surname>Desantis</surname> <given-names>A.</given-names></name> <name><surname>Waszak</surname> <given-names>F.</given-names></name></person-group> (<year>2013</year>). <article-title>Attenuation of auditory N1 results from identity-specific action-effect prediction</article-title>. <source>Eur. J. Neurosci.</source> <volume>37</volume>, <fpage>1152</fpage>&#x2013;<lpage>1158</lpage>. doi: <pub-id pub-id-type="doi">10.1111/ejn.12120</pub-id>, PMID: <pub-id pub-id-type="pmid">23331545</pub-id></citation></ref>
<ref id="ref26"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jagini</surname> <given-names>K. K.</given-names></name></person-group> (<year>2021</year>). <article-title>Temporal binding in multisensory and motor-sensory contexts: toward a unified model</article-title>. <source>Front. Hum. Neurosci.</source> <volume>15</volume>:<fpage>629437</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fnhum.2021.629437</pub-id>, PMID: <pub-id pub-id-type="pmid">33841117</pub-id></citation></ref>
<ref id="ref220"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>ter Horst</surname> <given-names>A. C.</given-names></name> <name><surname>Koppen</surname> <given-names>M.</given-names></name> <name><surname>Selen</surname> <given-names>L. P. J.</given-names></name> <name><surname>Medendorp</surname> <given-names>W. P.</given-names></name></person-group> (<year>2015</year>). <article-title>Reliability-based weighting of visual and vestibular cues in displacement estimation</article-title>. <source>PLoS One</source> <volume>10</volume>:<fpage>e0145015</fpage>. doi: <pub-id pub-id-type="doi">10.1371/journal.pone.0145015</pub-id></citation></ref>
<ref id="ref27"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>van Kemenade</surname> <given-names>B. M.</given-names></name> <name><surname>Arikan</surname> <given-names>B. E.</given-names></name> <name><surname>Kircher</surname> <given-names>T.</given-names></name> <name><surname>Straube</surname> <given-names>B.</given-names></name></person-group> (<year>2016</year>). <article-title>Predicting the sensory consequences of one&#x2019;s own action: first evidence for multisensory facilitation</article-title>. <source>Atten. Percept. Psychophys.</source> <volume>78</volume>, <fpage>2515</fpage>&#x2013;<lpage>2526</lpage>. doi: <pub-id pub-id-type="doi">10.3758/s13414-016-1189-1</pub-id>, PMID: <pub-id pub-id-type="pmid">27515031</pub-id></citation></ref>
<ref id="ref28"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kveraga</surname> <given-names>K.</given-names></name> <name><surname>Ghuman</surname> <given-names>A. S.</given-names></name> <name><surname>Bar</surname> <given-names>M.</given-names></name></person-group> (<year>2007</year>). <article-title>Top-down predictions in the cognitive brain</article-title>. <source>Brain Cogn.</source> <volume>65</volume>, <fpage>145</fpage>&#x2013;<lpage>168</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.bandc.2007.06.007</pub-id>, PMID: <pub-id pub-id-type="pmid">17923222</pub-id></citation></ref>
<ref id="ref29"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Linares</surname> <given-names>D.</given-names></name> <name><surname>L&#x00F3;pez-Moliner</surname> <given-names>J.</given-names></name></person-group> (<year>2016</year>). <article-title>Quickpsy: an R package to fit psychometric functions for multiple groups</article-title>. <source>The R Journal</source> <volume>8</volume>:<fpage>122</fpage>. doi: <pub-id pub-id-type="doi">10.32614/RJ-2016-008</pub-id></citation></ref>
<ref id="ref30"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Maniscalco</surname> <given-names>B.</given-names></name> <name><surname>Lau</surname> <given-names>H.</given-names></name></person-group> (<year>2012</year>). <article-title>A signal detection theoretic approach for estimating metacognitive sensitivity from confidence ratings</article-title>. <source>Conscious. Cogn.</source> <volume>21</volume>, <fpage>422</fpage>&#x2013;<lpage>430</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.concog.2011.09.021</pub-id>, PMID: <pub-id pub-id-type="pmid">22071269</pub-id></citation></ref>
<ref id="ref31"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Morgan</surname> <given-names>M. L.</given-names></name> <name><surname>DeAngelis</surname> <given-names>G. C.</given-names></name> <name><surname>Angelaki</surname> <given-names>D. E.</given-names></name></person-group> (<year>2008</year>). <article-title>Multisensory integration in macaque visual cortex depends on cue reliability</article-title>. <source>Neuron</source> <volume>59</volume>, <fpage>662</fpage>&#x2013;<lpage>673</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.neuron.2008.06.024</pub-id>, PMID: <pub-id pub-id-type="pmid">18760701</pub-id></citation></ref>
<ref id="ref32"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Paraskevoudi</surname> <given-names>N.</given-names></name> <name><surname>SanMiguel</surname> <given-names>I.</given-names></name></person-group> (<year>2021</year>). <article-title>Self-generation and sound intensity interactively modulate perceptual bias, but not perceptual sensitivity</article-title>. <source>Sci. Rep.</source> <volume>11</volume>:<fpage>17103</fpage>. doi: <pub-id pub-id-type="doi">10.1038/s41598-021-96346-z</pub-id>, PMID: <pub-id pub-id-type="pmid">34429453</pub-id></citation></ref>
<ref id="ref33"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Reznik</surname> <given-names>D.</given-names></name> <name><surname>Henkin</surname> <given-names>Y.</given-names></name> <name><surname>Levy</surname> <given-names>O.</given-names></name> <name><surname>Mukamel</surname> <given-names>R.</given-names></name></person-group> (<year>2015</year>). <article-title>Perceived loudness of self-generated sounds is differentially modified by expected sound intensity</article-title>. <source>PLoS One</source> <volume>10</volume>:<fpage>e0127651</fpage>. doi: <pub-id pub-id-type="doi">10.1371/journal.pone.0127651</pub-id></citation></ref>
<ref id="ref34"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Reznik</surname> <given-names>D.</given-names></name> <name><surname>Ossmy</surname> <given-names>O.</given-names></name> <name><surname>Mukamel</surname> <given-names>R.</given-names></name></person-group> (<year>2015</year>). <article-title>Enhanced auditory evoked activity to self-generated sounds is mediated by primary and supplementary motor cortices</article-title>. <source>J. Neurosci.</source> <volume>35</volume>, <fpage>2173</fpage>&#x2013;<lpage>2180</lpage>. doi: <pub-id pub-id-type="doi">10.1523/JNEUROSCI.3723-14.2015</pub-id>, PMID: <pub-id pub-id-type="pmid">25653372</pub-id></citation></ref>
<ref id="ref35"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Roy</surname> <given-names>J. E.</given-names></name> <name><surname>Cullen</surname> <given-names>K. E.</given-names></name></person-group> (<year>2004</year>). <article-title>Dissociating self-generated from passively applied head motion: neural mechanisms in the vestibular nuclei</article-title>. <source>J. Neurosci.</source> <volume>24</volume>, <fpage>2102</fpage>&#x2013;<lpage>2111</lpage>. doi: <pub-id pub-id-type="doi">10.1523/JNEUROSCI.3988-03.2004</pub-id>, PMID: <pub-id pub-id-type="pmid">14999061</pub-id></citation></ref>
<ref id="ref36"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schroeder</surname> <given-names>C. E.</given-names></name> <name><surname>Wilson</surname> <given-names>D. A.</given-names></name> <name><surname>Radman</surname> <given-names>T.</given-names></name> <name><surname>Scharfman</surname> <given-names>H.</given-names></name> <name><surname>Lakatos</surname> <given-names>P.</given-names></name></person-group> (<year>2010</year>). <article-title>Dynamics of active sensing and perceptual selection</article-title>. <source>Curr. Opin. Neurobiol.</source> <volume>20</volume>, <fpage>172</fpage>&#x2013;<lpage>176</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.conb.2010.02.010</pub-id>, PMID: <pub-id pub-id-type="pmid">20307966</pub-id></citation></ref>
<ref id="ref37"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Straube</surname> <given-names>B.</given-names></name> <name><surname>van Kemenade</surname> <given-names>B.</given-names></name> <name><surname>Arikan</surname> <given-names>B. E.</given-names></name> <name><surname>Fiehler</surname> <given-names>K.</given-names></name> <name><surname>Leube</surname> <given-names>D. T.</given-names></name> <name><surname>Harris</surname> <given-names>L. R.</given-names></name> <etal/></person-group> (<year>2017</year>). <article-title>Predicting the multisensory consequences of one&#x2019;s own action: BOLD suppression in auditory and visual cortices</article-title>. <source>PLoS One</source> <volume>12</volume>:<fpage>e0169131</fpage>. doi: <pub-id pub-id-type="doi">10.1371/journal.pone.0169131</pub-id>, PMID: <pub-id pub-id-type="pmid">28060861</pub-id></citation></ref>
<ref id="ref38"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>van Atteveldt</surname> <given-names>N.</given-names></name> <name><surname>Murray</surname> <given-names>M. M.</given-names></name> <name><surname>Thut</surname> <given-names>G.</given-names></name> <name><surname>Schroeder</surname> <given-names>C. E.</given-names></name></person-group> (<year>2014</year>). <article-title>Multisensory integration: flexible use of general operations</article-title>. <source>Neuron</source> <volume>81</volume>, <fpage>1240</fpage>&#x2013;<lpage>1253</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.neuron.2014.02.044</pub-id>, PMID: <pub-id pub-id-type="pmid">24656248</pub-id></citation></ref>
<ref id="ref39"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wen</surname> <given-names>W.</given-names></name> <name><surname>Haggard</surname> <given-names>P.</given-names></name></person-group> (<year>2018</year>). <article-title>Control changes the way we look at the world</article-title>. <source>J. Cogn. Neurosci.</source> <volume>30</volume>, <fpage>603</fpage>&#x2013;<lpage>619</lpage>. doi: <pub-id pub-id-type="doi">10.1162/jocn_a_01226</pub-id>, PMID: <pub-id pub-id-type="pmid">29308984</pub-id></citation></ref>
<ref id="ref400"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yon</surname> <given-names>D.</given-names></name> <name><surname>Frith</surname> <given-names>C. D.</given-names></name></person-group> (<year>2021</year>). <article-title>Precision and the Bayesian brain</article-title>. <source>Curr. Biol.</source> <volume>31</volume>, <fpage>R1026</fpage>&#x2013;<lpage>R1032</lpage>.</citation></ref>
<ref id="ref40"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yon</surname> <given-names>D.</given-names></name> <name><surname>Gilbert</surname> <given-names>S. J.</given-names></name> <name><surname>de Lange</surname> <given-names>F. P.</given-names></name> <name><surname>Press</surname> <given-names>C.</given-names></name></person-group> (<year>2018</year>). <article-title>Action sharpens sensory representations of expected outcomes</article-title>. <source>Nat. Commun.</source> <volume>9</volume>:<fpage>4288</fpage>. doi: <pub-id pub-id-type="doi">10.1038/s41467-018-06752-7</pub-id>, PMID: <pub-id pub-id-type="pmid">30327503</pub-id></citation></ref>
</ref-list>
</back>
</article>