<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article article-type="research-article" dtd-version="2.3" xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Artif. Intell.</journal-id>
<journal-title>Frontiers in Artificial Intelligence</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Artif. Intell.</abbrev-journal-title>
<issn pub-type="epub">2624-8212</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">531316</article-id>
<article-id pub-id-type="doi">10.3389/frai.2021.531316</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Artificial Intelligence</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>An Overcomplete Approach to Fitting Drift-Diffusion Decision Models to Trial-By-Trial Data</article-title>
<alt-title alt-title-type="left-running-head">Feltgen and Daunizeau</alt-title>
<alt-title alt-title-type="right-running-head">An Overcomplete Approach to DDMs</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Feltgen</surname>
<given-names>Q.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<uri xlink:href="https://loop.frontiersin.org/people/938966/overview"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Daunizeau</surname>
<given-names>J.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="corresp" rid="c001">&#x2a;</xref>
<uri xlink:href="https://loop.frontiersin.org/people/4309/overview"/>
</contrib>
</contrib-group>
<aff id="aff1">
<label>
<sup>1</sup>
</label>Paris Brain Institute (ICM), Sorbonne Universit&#x00E9;, Inserm, CNRS, H&#x00F4;pital Piti&#x00E9;&#x2010;Salp&#x00EA;tri&#x00E8;re, <addr-line>Paris</addr-line>, <country>France</country>
</aff>
<aff id="aff2">
<label>
<sup>2</sup>
</label>ETH, <addr-line>Zurich</addr-line>, <country>Switzerland</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>
<bold>Edited by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/499268/overview">Thomas Parr</ext-link>, University College London, United&#x20;Kingdom</p>
</fn>
<fn fn-type="edited-by">
<p>
<bold>Reviewed by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/20813/overview">Sebastian Gluth</ext-link>, University of Hamburg, Germany</p>
<p>
<ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/435119/overview">Vincent Moens</ext-link>, Catholic University of Louvain, Belgium</p>
</fn>
<corresp id="c001">&#x2a;Correspondence: J.&#x20;Daunizeau, <email>jean.daunizeau@gmail.com</email>
</corresp>
<fn fn-type="other">
<p>This article was submitted to Machine Learning and Artificial Intelligence, a section of the journal Frontiers in Artificial Intelligence</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>09</day>
<month>04</month>
<year>2021</year>
</pub-date>
<pub-date pub-type="collection">
<year>2021</year>
</pub-date>
<volume>4</volume>
<elocation-id>531316</elocation-id>
<history>
<date date-type="received">
<day>31</day>
<month>01</month>
<year>2020</year>
</date>
<date date-type="accepted">
<day>17</day>
<month>02</month>
<year>2021</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#xa9; 2021 Feltgen and Daunizeau.</copyright-statement>
<copyright-year>2021</copyright-year>
<copyright-holder>Feltgen and Daunizeau</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these&#x20;terms.</p>
</license>
</permissions>
<abstract>
<p>Drift-diffusion models or DDMs are becoming a standard in the field of computational neuroscience. They extend models from signal detection theory by proposing a simple mechanistic explanation for the observed relationship between decision outcomes and reaction times (RT). In brief, they assume that decisions are triggered once the accumulated evidence in favor of a particular alternative option has reached a predefined threshold. Fitting a DDM to empirical data then allows one to interpret observed group or condition differences in terms of a change in the underlying model parameters. However, current approaches only yield reliable parameter estimates in specific situations (cf. fixed drift rates vs. drift rates varying over trials). In addition, they become computationally infeasible when more general DDM variants are considered (e.g., with collapsing bounds). In this note, we propose a fast and efficient approach to parameter estimation that relies on fitting a &#x201c;self-consistency&#x201d; equation that RTs fulfill under the DDM. This effectively bypasses the computational bottleneck of standard DDM parameter estimation approaches, at the cost of estimating the trial-specific neural noise variables that perturb the underlying evidence accumulation process. For the purpose of behavioral data analysis, these act as nuisance variables and render the model &#x201c;overcomplete,&#x201d; which is finessed using a variational Bayesian system identification scheme. However, for the purpose of neural data analysis, estimates of neural noise perturbation terms are a desirable (and unique) feature of the approach. Using numerical simulations, we show that this &#x201c;overcomplete&#x201d; approach matches the performance of current parameter estimation approaches for simple DDM variants, and outperforms them for more complex DDM variants. Finally, we demonstrate the added value of the approach when applied to a recent value-based decision making experiment.</p>
</abstract>
<kwd-group>
<kwd>DDM</kwd>
<kwd>decision making</kwd>
<kwd>computational modeling</kwd>
<kwd>variational bayes</kwd>
<kwd>neural noise</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<title>Introduction</title>
<p>Over the past two decades, neurocognitive processes of decision making have been extensively studied under the framework of so-called <italic>drift-diffusion models</italic> or DDMs. These models tie together decision outcomes and response times (RT) by assuming that decisions are triggered once the accumulated evidence in favor of a particular alternative option has reached a predefined threshold (<xref ref-type="bibr" rid="B43">Ratcliff and McKoon, 2008</xref>; <xref ref-type="bibr" rid="B44">Ratcliff et&#x20;al., 2016</xref>). They owe their popularity both to experimental successes in capturing observed data in a broad set of behavioral studies (<xref ref-type="bibr" rid="B21">Gold and Shadlen, 2007</xref>; <xref ref-type="bibr" rid="B47">Resulaj et&#x20;al., 2009</xref>; <xref ref-type="bibr" rid="B34">Milosavljevic et&#x20;al., 2010</xref>; <xref ref-type="bibr" rid="B11">De Martino et&#x20;al., 2012</xref>; <xref ref-type="bibr" rid="B25">Hanks et&#x20;al., 2014</xref>; <xref ref-type="bibr" rid="B40">Pedersen et&#x20;al., 2017</xref>), and to theoretical work showing that DDMs can be thought of as optimal decision problem solvers (<xref ref-type="bibr" rid="B5">Bogacz et&#x20;al., 2006</xref>; <xref ref-type="bibr" rid="B1">Balci et&#x20;al., 2011</xref>; <xref ref-type="bibr" rid="B12">Drugowitsch et&#x20;al., 2012</xref>; <xref ref-type="bibr" rid="B61">Zhang, 2012</xref>; <xref ref-type="bibr" rid="B51">Tajima et&#x20;al., 2016</xref>). Critically, mathematical analyses of the DDM soon demonstrated that it suffers from inherent non-identifiability issues, e.g., predicted choices and RTs are invariant under any arbitrary rescaling of DDM parameters (<xref ref-type="bibr" rid="B46">Ratcliff and Tuerlinckx, 2002</xref>; <xref ref-type="bibr" rid="B44">Ratcliff et&#x20;al., 2016</xref>). This is important because, in principle, this precludes proper, quantitative, DDM-based data analysis. 
Nevertheless, over the past decade, many statistical approaches to DDM parameter estimation have been proposed, which yield efficient estimates under simplifying assumptions (<xref ref-type="bibr" rid="B55">Voss and Voss, 2007</xref>; <xref ref-type="bibr" rid="B58">Wagenmakers et&#x20;al., 2007</xref>, <xref ref-type="bibr" rid="B57">2008</xref>; <xref ref-type="bibr" rid="B53">Vandekerckhove and Tuerlinckx, 2008</xref>; <xref ref-type="bibr" rid="B23">Grasman et&#x20;al., 2009</xref>; <xref ref-type="bibr" rid="B61">Zhang, 2012</xref>; <xref ref-type="bibr" rid="B59">Wiecki et&#x20;al., 2013</xref>; <xref ref-type="bibr" rid="B62">Zhang et&#x20;al., 2014</xref>; <xref ref-type="bibr" rid="B26">Hawkins et&#x20;al., 2015</xref>; <xref ref-type="bibr" rid="B54">Voskuilen et&#x20;al., 2016</xref>; <xref ref-type="bibr" rid="B41">Pedersen and Frank, 2020</xref>). These techniques essentially fit the choice-conditional distribution of observed RTs (or moments thereof), after arbitrarily fixing at least one of the DDM parameters. They are now established statistical tools for experimental designs where the observed RT variability is mostly induced by internal (e.g., neural) stochasticity in the decision process (<xref ref-type="bibr" rid="B4">Boehm et&#x20;al., 2018</xref>).</p>
<p>However, current decision making experiments typically consider situations in which decision-relevant variables are manipulated on a trial-by-trial basis. For example, the reliability of perceptual evidence (e.g., the psychophysical contrast in a perceptual decision) may be systematically varied from one trial to the next. Under current applications of the DDM, this implies that some internal model variables (e.g., the drift rate) effectively vary over trials. Classical DDM parameter estimation approaches do not optimally handle this kind of experimental design, because it lacks the trial repetitions that would be necessary to provide empirical estimates of RT moments in each condition. In turn, alternative statistical approaches to parameter estimation have been proposed, which can exploit predictable inter-trial variations of DDM variables to fit the model to RT data (<xref ref-type="bibr" rid="B56">Wabersich and Vandekerckhove, 2014</xref>; <xref ref-type="bibr" rid="B35">Moens and Zenon, 2017</xref>; <xref ref-type="bibr" rid="B40">Pedersen et&#x20;al., 2017</xref>; <xref ref-type="bibr" rid="B16">Fontanesi et&#x20;al., 2019a</xref>; <xref ref-type="bibr" rid="B17">Fontanesi et&#x20;al., 2019b</xref>; <xref ref-type="bibr" rid="B20">Gluth and Meiran, 2019</xref>). In brief, they directly compare raw RT data with expected RTs, which vary over trials in response to known variations in internal variables. Although close to optimal from a statistical perspective, these approaches suffer from a challenging computational bottleneck: the trial-by-trial derivation of RT first-order moments. This is why they are typically constrained to simple DDM variants, for which analytical solutions exist (<xref ref-type="bibr" rid="B36">Navarro and Fuss, 2009</xref>; <xref ref-type="bibr" rid="B50">Srivastava et&#x20;al., 2016</xref>; <xref ref-type="bibr" rid="B14">Fengler et&#x20;al., 2020</xref>; <xref ref-type="bibr" rid="B49">Shinn et&#x20;al., 2020</xref>).</p>
<p>This note is concerned with the issue of obtaining reliable DDM parameter estimates from concurrent trial-by-trial choice and response time data, for a broad class of DDM variants. We propose a fast and efficient approach that relies on fitting a <italic>self-consistency</italic> equation, which RTs necessarily fulfill under the DDM. This provides a simple and elegant way of bypassing the common computational bottleneck of existing approaches, at the cost of considering additional trial-specific nuisance model variables, namely: the cumulated &#x201c;neural&#x201d; noise that perturbs the evidence accumulation process at the corresponding trial. Including these variables makes the model &#x201c;overcomplete&#x201d;; its identification is finessed with a dedicated variational Bayesian scheme. In turn, the ensuing overcomplete approach generalizes to a large class of DDM model variants, without any additional computational and/or implementational burden.</p>
<p>In the <italic>Model Formulation and Impact of DDM Parameters</italic> section of this document, we briefly recall the derivation of the DDM, and summarize the impact of DDM parameters on the conditional RT distributions. In the <italic>A Self-Consistency Equation for DDMs</italic> and <italic>An Overcomplete Likelihood Approach to DDM Inversion</italic> sections, we derive the DDM&#x27;s self-consistency equation and describe the ensuing overcomplete approach to DDM-based data analysis. In the <italic>Parameter Recovery Analysis</italic> section, we perform parameter recovery analyses for standard DDM fitting procedures and the overcomplete approach. In the <italic>Application to a Value-Based Decision Making Experiment</italic> section, we demonstrate the added value of the overcomplete approach when applied to a value-based decision making experiment. Finally, in the <italic>Discussion</italic> section, we discuss our results in the context of the existing literature. In particular, we comment on the potential utility of neural noise perturbation estimates for concurrent neuroimaging data analysis.</p>
</sec>
<sec id="s2">
<title>Model Formulation and Impact of DDM Parameters</title>
<p>First, let us recall the simplest form of a drift-diffusion decision model or DDM (in what follows, we will refer to this variant as the &#x201c;vanilla&#x201d; DDM). Let <inline-formula id="inf1">
<mml:math id="minf1">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> be a decision variable that captures the accumulated evidence (up to time <inline-formula id="inf2">
<mml:math id="minf2">
<mml:mi>t</mml:mi>
</mml:math>
</inline-formula>) in favor of a given option in a binary choice set. Under the vanilla DDM, a decision is triggered whenever <inline-formula id="inf3">
<mml:math id="minf3">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> hits either of two bounds, which are positioned at <inline-formula id="inf4">
<mml:math id="minf4">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>b</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf5">
<mml:math id="minf5">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>b</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>, respectively. The time at which a bound is hit defines the decision time, and which bound is hit determines the (binary) decision outcome <inline-formula id="inf6">
<mml:math id="minf6">
<mml:mi>o</mml:mi>
</mml:math>
</inline-formula>. By assumption, the decision variable <inline-formula id="inf7">
<mml:math id="minf7">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> obeys the following stochastic differential equation:<disp-formula id="e1">
<mml:math id="me1">
<mml:mrow>
<mml:mi>d</mml:mi>
<mml:mi>x</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>v</mml:mi>
<mml:mo>&#x2323;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#xd7;</mml:mo>
<mml:mi>d</mml:mi>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x2323;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#xd7;</mml:mo>
<mml:mi>d</mml:mi>
<mml:mi>&#x3b7;</mml:mi>
</mml:mrow>
</mml:math>
<label>(1)</label>
</disp-formula>where <inline-formula id="inf8">
<mml:math id="minf8">
<mml:mrow>
<mml:mover accent="true">
<mml:mi>v</mml:mi>
<mml:mo>&#x2323;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:math>
</inline-formula> is the drift rate, <inline-formula id="inf9">
<mml:math id="minf9">
<mml:mrow>
<mml:mi>d</mml:mi>
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x223c;</mml:mo>
<mml:mi>N</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi>d</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> is the increment of a standard Wiener process, and <inline-formula id="inf10">
<mml:math id="minf10">
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x2323;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:math>
</inline-formula> is the standard deviation of the stochastic (diffusion) perturbation&#x20;term.</p>
<p>
<xref ref-type="disp-formula" rid="e1">Equation 1</xref> can be discretized using an Euler-Maruyama scheme (<xref ref-type="bibr" rid="B28">Kloeden and Platen, 1992</xref>), yielding the following discrete form of the decision variable dynamics:<disp-formula id="e2">
<mml:math id="me2">
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>v</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>&#x3c3;</mml:mi>
<mml:msub>
<mml:mi>&#x3b7;</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
<label>(2)</label>
</disp-formula>where <inline-formula id="inf11">
<mml:math id="minf11">
<mml:mi>t</mml:mi>
</mml:math>
</inline-formula> indexes time on a temporal grid with resolution <inline-formula id="inf12">
<mml:math id="minf12">
<mml:mrow>
<mml:mi>&#x394;</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf13">
<mml:math id="minf13">
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>v</mml:mi>
<mml:mo>&#x2323;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>&#x394;</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> is the discrete-time drift rate, <inline-formula id="inf14">
<mml:math id="minf14">
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x2323;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:msqrt>
<mml:mrow>
<mml:mi>&#x394;</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:math>
</inline-formula> is the discrete-time standard deviation of the perturbation term, and <inline-formula id="inf15">
<mml:math id="minf15">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b7;</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>&#x223c;</mml:mo>
<mml:mi>N</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>0,1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> is a standard normal random variable. By convention, the system&#x27;s initial condition is denoted as <inline-formula id="inf16">
<mml:math id="minf16">
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, which we refer to as the &#x201c;initial bias&#x201d;.</p>
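The discretized dynamics of Eq. 2 translate directly into a forward simulation. Below is a minimal sketch in Python (the function name and its defaults are ours, purely for illustration, and not part of any published toolbox). Note that the drift rate and noise standard deviation are the discrete-time quantities defined above, so no explicit time resolution appears in the update rule:

```python
import numpy as np

def simulate_ddm_trial(v, sigma, b, x0, max_steps=10_000, rng=None):
    """Simulate one trial of the "vanilla" DDM via the discretized
    dynamics of Eq. 2: x_{t+1} = x_t + v + sigma * eta_t, eta_t ~ N(0, 1).
    Here v and sigma are the discrete-time drift rate and noise standard
    deviation (i.e., already scaled by the time resolution).

    Returns (outcome, hitting_time): outcome is +1 for an upper-bound
    ("up") hit, -1 for a lower-bound ("down") hit, and 0 if no bound is
    hit within max_steps; hitting_time is in units of time steps.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = x0  # initial bias
    for t in range(1, max_steps + 1):
        x += v + sigma * rng.standard_normal()  # drift + diffusion step
        if x >= b:
            return 1, t
        if x <= -b:
            return -1, t
    return 0, np.nan
```

With the noise switched off (sigma = 0), the trajectory is deterministic and the hitting time is simply the number of drift steps needed to reach the bound, which provides a quick sanity check of the implementation.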
<p>The joint distribution of response times and decision outcomes depends upon the DDM parameters, which include: the drift rate <inline-formula id="inf17">
<mml:math id="minf17">
<mml:mi>v</mml:mi>
</mml:math>
</inline-formula>, the bound&#x2019;s height <inline-formula id="inf18">
<mml:math id="minf18">
<mml:mi>b</mml:mi>
</mml:math>
</inline-formula>, the noise&#x2019;s standard deviation <inline-formula id="inf19">
<mml:math id="minf19">
<mml:mi>&#x3c3;</mml:mi>
</mml:math>
</inline-formula> and the initial condition <inline-formula id="inf20">
<mml:math id="minf20">
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>. DDMs also typically include a so-called &#x201c;non-decision&#x201d; time parameter <inline-formula id="inf21">
<mml:math id="minf21">
<mml:mrow>
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mi>D</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, which captures systematic latencies between covert bound hit times and overt response times. Under such a simple DDM variant, variability in response times and decision outcomes derives from the stochastic terms <inline-formula id="inf22">
<mml:math id="minf22">
<mml:mi>&#x3b7;</mml:mi>
</mml:math>
</inline-formula>. These are typically thought of as neural noise that perturbs the evidence accumulation process within the brain&#x2019;s decision system (<xref ref-type="bibr" rid="B21">Gold and Shadlen, 2007</xref>; <xref ref-type="bibr" rid="B52">Turner et&#x20;al., 2015</xref>; <xref ref-type="bibr" rid="B24">Guevara Erra et&#x20;al., 2019</xref>).</p>
<p>Under such a simple DDM variant, analytical expressions exist for the first two moments of the RT distributions (<xref ref-type="bibr" rid="B50">Srivastava et&#x20;al., 2016</xref>). Higher-order moments can also be derived from efficient semi-analytical solutions for the joint choice/RT distribution (<xref ref-type="bibr" rid="B36">Navarro and Fuss, 2009</xref>). However, more complex variants of the DDM (including, e.g., collapsing bounds) are much more difficult to simulate, and require either sampling schemes or numerical solvers of the underlying Fokker-Planck equation (<xref ref-type="bibr" rid="B14">Fengler et&#x20;al., 2020</xref>; <xref ref-type="bibr" rid="B49">Shinn et&#x20;al., 2020</xref>).</p>
<p>
<xref ref-type="fig" rid="F1">Figures 1</xref>&#x2013;<xref ref-type="fig" rid="F4">4</xref> below demonstrate the impact of model parameters on the decision outcome ratios <inline-formula id="inf23">
<mml:math id="minf23">
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>o</mml:mi>
<mml:mrow>
<mml:mo>&#x7c;</mml:mo>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi>b</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>&#x3c3;</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> and the first three moments of conditional hitting time (HT) distributions, namely: their mean <inline-formula id="inf24">
<mml:math id="minf24">
<mml:mrow>
<mml:mi>E</mml:mi>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:mi>H</mml:mi>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mo>&#x7c;</mml:mo>
<mml:mrow>
<mml:mi>o</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>v</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi>b</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>&#x3c3;</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>, variance <inline-formula id="inf25">
<mml:math id="minf25">
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:mi>H</mml:mi>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mo>&#x7c;</mml:mo>
<mml:mrow>
<mml:mi>o</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>v</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi>b</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>&#x3c3;</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> and skewness <inline-formula id="inf26">
<mml:math id="minf26">
<mml:mrow>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:mi>H</mml:mi>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mo>&#x7c;</mml:mo>
<mml:mrow>
<mml:mi>o</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>v</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi>b</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>&#x3c3;</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>. As we will see, each DDM parameter has a specific signature, in terms of its joint impact on these seven quantities. This does not imply, however, that different parameter settings necessarily yield distinct moments. In fact, some changes in the DDM parameters leave the predicted moments unchanged. This will induce parameter recovery issues, which we will demonstrate&#x20;later.</p>
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption>
<p>Impact of initial bias <inline-formula id="inf27">
<mml:math id="minf27">
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>. In all panels, the color code indicates the decision outcomes (green: &#x201c;up&#x201d; decisions, red: &#x201c;down&#x201d; decisions). The black dotted line indicates the default parameter value (for ease of comparison with other figures below). Upper-left panel: mean hitting times (<italic>y</italic>-axis) as a function of initial bias (<italic>x</italic>-axis). Upper-right panel: hitting times&#x2019; variance (<italic>y</italic>-axis) as a function of initial bias (<italic>x</italic>-axis). Lower-left panel: hitting times&#x27; skewness (<italic>y</italic>-axis) as a function of initial bias (<italic>x</italic>-axis). Lower-right panel: outcome ratios (<italic>y</italic>-axis) as a function of initial bias (<italic>x</italic>-axis).</p>
</caption>
<graphic xlink:href="frai-04-531316-g001.tif"/>
</fig>
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption>
<p>Impact of drift rate <inline-formula id="inf28">
<mml:math id="minf28">
<mml:mi>v</mml:mi>
</mml:math>
</inline-formula>. Same format as <xref ref-type="fig" rid="F1">Figure&#x20;1</xref>.</p>
</caption>
<graphic xlink:href="frai-04-531316-g002.tif"/>
</fig>
<fig id="F3" position="float">
<label>FIGURE 3</label>
<caption>
<p>Impact of the perturbation&#x2019;s standard deviation <inline-formula id="inf29">
<mml:math id="minf29">
<mml:mi>&#x3c3;</mml:mi>
</mml:math>
</inline-formula>. Same format as <xref ref-type="fig" rid="F1">Figure&#x20;1</xref> (but the <italic>x</italic>-axis is now in log-scale).</p>
</caption>
<graphic xlink:href="frai-04-531316-g003.tif"/>
</fig>
<fig id="F4" position="float">
<label>FIGURE 4</label>
<caption>
<p>Impact of the threshold&#x2019;s height <italic>b</italic>. Same format as <xref ref-type="fig" rid="F1">Figure&#x20;1</xref>.</p>
</caption>
<graphic xlink:href="frai-04-531316-g004.tif"/>
</fig>
<p>Let us first summarize the impact of the DDM parameters. To do this, we set the model parameters to the following &#x201c;default&#x201d; values: <inline-formula id="inf30">
<mml:math id="minf30">
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>/</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf31">
<mml:math id="minf31">
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf32">
<mml:math id="minf32">
<mml:mrow>
<mml:mi>b</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>10</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf33">
<mml:math id="minf33">
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>4</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>. This parameter setting yields about 30% decision errors, which we take as a valid reference point for typical studies of decision making. In what follows, we vary each model parameter one by one, keeping the others at their default&#x20;values.</p>
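As a sanity check on this default setting, the outcome ratio and the conditional hitting-time moments can be estimated by Monte Carlo simulation of the discretized dynamics of Eq. 2. The sketch below is illustrative (the function name, defaults, and output format are ours, not part of any published toolbox); with the default parameters, roughly 70% of simulated decisions should be &#x201c;up&#x201d; decisions, consistent with the ~30% error rate quoted above:

```python
import numpy as np

def ht_summary(v=0.5, x0=1.0, b=10.0, sigma=4.0, n_trials=20_000, seed=0):
    """Monte Carlo estimates of the outcome ratio P(o | v, x0, b, sigma)
    and of the conditional hitting-time (HT) mean, variance, and skewness,
    under the discrete-time vanilla DDM x_{t+1} = x_t + v + sigma * eta_t
    (time measured in steps)."""
    rng = np.random.default_rng(seed)
    outcomes = np.empty(n_trials, dtype=int)
    hts = np.empty(n_trials)
    for i in range(n_trials):
        x, t = x0, 0
        while -b < x < b:  # accumulate evidence until a bound is hit
            x += v + sigma * rng.standard_normal()
            t += 1
        outcomes[i] = 1 if x >= b else -1  # +1: "up", -1: "down"
        hts[i] = t
    summary = {"P(up)": float(np.mean(outcomes == 1))}
    for o, label in [(1, "up"), (-1, "down")]:
        ht = hts[outcomes == o]
        z = (ht - ht.mean()) / ht.std()  # standardized HTs
        summary[label] = (ht.mean(), ht.var(), float(np.mean(z ** 3)))
    return summary
```

Sweeping one parameter at a time through such a routine, while holding the others at their default values, reproduces the parameter signatures displayed in Figures 1&#x2013;4.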
<p>
<xref ref-type="fig" rid="F1">Figure&#x20;1</xref> below shows the impact of initial bias&#x20;<inline-formula id="inf34">
<mml:math id="minf34">
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>.</p>
<p>One can see that increasing the initial bias accelerates decision times for &#x201c;up&#x201d; decisions, and decelerates decision times for &#x201c;down&#x201d; decisions. This is because increasing <inline-formula id="inf35">
<mml:math id="minf35">
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> mechanically increases the probability of an early upper bound hit, and decreases the probability of an early lower bound hit. Increasing <inline-formula id="inf36">
<mml:math id="minf36">
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> also decreases (resp., increases) the variance for &#x201c;up&#x201d; (resp., &#x201c;down&#x201d;) decisions, and increases (resp., decreases) the skewness for &#x201c;up&#x201d; (resp., &#x201c;down&#x201d;) decisions. Finally, increasing the initial bias increases the ratio of &#x201c;up&#x201d; decisions. These are corollary consequences of increasing (resp., decreasing) the probability of an early upper (resp., lower) bound hit: when an increasing proportion of stochastic paths hits a bound very early, the distribution of hitting times is squeezed just above zero. Note that the outcome ratios are not equal to <inline-formula id="inf37">
<mml:math id="minf37">
<mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>/</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> when <inline-formula id="inf38">
<mml:math id="minf38">
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>. This is because the default drift rate <inline-formula id="inf39">
<mml:math id="minf39">
<mml:mi>v</mml:mi>
</mml:math>
</inline-formula> is positive, and therefore favors &#x201c;up&#x201d; decisions. Most importantly, the initial bias is the only DDM parameter that has opposite effects on mean HT for &#x201c;up&#x201d; and &#x201c;down&#x201d; decision outcomes.</p>
<p>
<xref ref-type="fig" rid="F2">Figure&#x20;2</xref> below shows the impact of drift rate&#x20;<inline-formula id="inf40">
<mml:math id="minf40">
<mml:mi>v</mml:mi>
</mml:math>
</inline-formula>.</p>
<p>One can see that the mean and variance of decision times are maximal when the drift rate is null. This is because the probability of an early (upper or lower) bound hit decreases as <inline-formula id="inf41">
<mml:math id="minf41">
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mo>&#x2192;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>. Also, the drift rate has little impact on the HT skewness. Note that, in contrast to the initial bias, the impact of the drift rate on mean HT is identical for both &#x201c;up&#x201d; and &#x201c;down&#x201d; decisions. Finally, and as expected, increasing the drift rate increases the ratio of &#x201c;up&#x201d; decisions.</p>
<p>
<xref ref-type="fig" rid="F3">Figure&#x20;3</xref> below shows the impact of the noise&#x27;s standard deviation <inline-formula id="inf42">
<mml:math id="minf42">
<mml:mi>&#x3c3;</mml:mi>
</mml:math>
</inline-formula>.</p>
<p>One can see that increasing the standard deviation decreases the mean HT, and increases its skewness. This is, again, because increasing <inline-formula id="inf43">
<mml:math id="minf43">
<mml:mi>&#x3c3;</mml:mi>
</mml:math>
</inline-formula> increases the probability of an early bound hit. Its impact on the variance, however, is less trivial. When the standard deviation <inline-formula id="inf44">
<mml:math id="minf44">
<mml:mi>&#x3c3;</mml:mi>
</mml:math>
</inline-formula> is very low, increasing <inline-formula id="inf45">
<mml:math id="minf45">
<mml:mi>&#x3c3;</mml:mi>
</mml:math>
</inline-formula> first increases the hitting times&#x27; variance. This is because it progressively frees the system from its deterministic fate, thereby enabling HT variability around the mean. Beyond a critical point, however, further increases in the standard deviation decrease the variance. This is again a consequence of the increased probability of an early bound hit. The associated HT squeezing effect can be seen in the skewness, which steadily increases beyond the critical point. Note that the standard deviation has the same impact on mean HT for &#x201c;up&#x201d; and &#x201c;down&#x201d; decisions. Finally, increasing the standard deviation eventually maximizes the entropy of the decision outcomes, i.e.,&#x20;<inline-formula id="inf46">
<mml:math id="minf46">
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>o</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2192;</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>/</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> when <inline-formula id="inf47">
<mml:math id="minf47">
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x2192;</mml:mo>
<mml:mi>&#x221e;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>. This is because the relative contribution of the diffusion term eventually masks the&#x20;drift.</p>
<p>
<xref ref-type="fig" rid="F4">Figure&#x20;4</xref> below shows the impact of the bound&#x2019;s height&#x20;<inline-formula id="inf48">
<mml:math id="minf48">
<mml:mi>b</mml:mi>
</mml:math>
</inline-formula>.</p>
<p>One can see that increasing the bound&#x27;s height increases both the mean and the variance of HT, and decreases its skewness, identically for &#x201c;up&#x201d; and &#x201c;down&#x201d; decisions. Finally, increasing the threshold&#x2019;s height decreases the entropy of the decision outcomes, i.e.,&#x20;<inline-formula id="inf49">
<mml:math id="minf49">
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>o</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2192;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mtext>&#xa0;</mml:mtext>
<mml:mi mathvariant="normal">or</mml:mi>
<mml:mtext>&#xa0;</mml:mtext>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> when <inline-formula id="inf50">
<mml:math id="minf50">
<mml:mrow>
<mml:mi>b</mml:mi>
<mml:mo>&#x2192;</mml:mo>
<mml:mi>&#x221e;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>. This directly derives from the fact that increasing <inline-formula id="inf51">
<mml:math id="minf51">
<mml:mi>b</mml:mi>
</mml:math>
</inline-formula> decreases the probability of an early bound hit. This effect basically competes with the effect of the standard deviation <inline-formula id="inf52">
<mml:math id="minf52">
<mml:mi>&#x3c3;</mml:mi>
</mml:math>
</inline-formula>, which accelerates HTs. This is why one may say that increasing the threshold&#x2019;s height effectively increases the demand for evidence strength in favor of one of the decision outcomes.</p>
<p>Note that the impact of the &#x201c;non-decision&#x201d; time <inline-formula id="inf53">
<mml:math id="minf53">
<mml:mrow>
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mi>D</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> simply reduces to shifting the mean of the RT distribution, without any effect on other moments.</p>
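The parameter effects summarized above can be reproduced numerically. Below is a minimal sketch of an Euler-Maruyama simulation of the vanilla DDM; the function name and all parameter values are arbitrary illustrative choices, not the settings used for the figures.

```python
import numpy as np

def simulate_ddm(v, sigma, b, x0, n_trials=500, dt=1e-3, t_max=5.0, seed=0):
    """Euler-Maruyama simulation of the vanilla DDM.

    Returns per-trial hitting times (NaN if no bound hit before t_max)
    and decision outcomes (+1: "up", -1: "down", 0: no decision).
    """
    rng = np.random.default_rng(seed)
    x = np.full(n_trials, x0, dtype=float)   # decision variables
    ht = np.full(n_trials, np.nan)           # hitting times
    out = np.zeros(n_trials)                 # decision outcomes
    active = np.ones(n_trials, dtype=bool)   # trials still diffusing
    for t in range(1, int(t_max / dt) + 1):
        # drift + diffusion increment for trials that have not yet hit a bound
        x[active] += v * dt + sigma * np.sqrt(dt) * rng.standard_normal(active.sum())
        hit = active & (np.abs(x) >= b)
        ht[hit] = t * dt
        out[hit] = np.sign(x[hit])
        active &= ~hit
        if not active.any():
            break
    return ht, out

ht, out = simulate_ddm(v=1.0, sigma=1.0, b=1.0, x0=0.0)
p_up = np.mean(out == 1)  # a positive drift rate favors "up" decisions
```

With a positive drift rate and no initial bias, the ratio of &#x201c;up&#x201d; decisions exceeds one half, and the hitting-time moments can be read off `ht` separately for each outcome.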
<p>In brief, DDM parameters have distinct impacts on the sufficient statistics of response times. This means that they could, in principle, be discriminated from each other. Standard DDM fitting procedures rely on adjusting the DDM parameters so that the RT moments (e.g., up to third order) match model predictions. In what follows, we refer to this as the &#x201c;method of moments&#x201d; (see <xref ref-type="sec" rid="s12">Supplementary Appendix S2</xref>). However, we will see below that the DDM is not perfectly identifiable. One can also see that changing any of these parameters from trial to trial will most likely induce non-trivial variations in RT data. Here, the method of moments may not be optimal, because predictable trial-by-trial variations in DDM parameters will be lumped together with stochastic perturbation-induced variations. One may instead attempt to match the trial-by-trial series of raw response times directly with their corresponding first-order moments. In what follows, we refer to this as the &#x201c;method of trial means&#x201d; (see <xref ref-type="sec" rid="s12">Supplementary Appendix S3</xref>). Given the computational cost of deriving expected response times for each trial, this type of approach is typically restricted to the vanilla DDM, since there is no known analytical expression for response time moments under more complex DDM variants.</p>
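As a concrete illustration of the moment-matching idea, one could score candidate DDM parameters by the mismatch between empirical and simulated RT central moments up to third order. This is only a schematic sketch (the function names are ours), not the procedure detailed in Supplementary Appendix S2, which would typically evaluate the moments separately for &#x201c;up&#x201d; and &#x201c;down&#x201d; decisions.

```python
import numpy as np

def central_moments(rt):
    """First three moments of a sample of response times:
    mean, variance, and third central moment (unnormalized skewness)."""
    m = rt.mean()
    return np.array([m, ((rt - m) ** 2).mean(), ((rt - m) ** 3).mean()])

def moment_loss(simulated_rt, empirical_rt):
    """Squared mismatch between simulated and empirical RT moments.
    A 'method of moments' fit would minimize this over DDM parameters."""
    diff = central_moments(simulated_rt) - central_moments(empirical_rt)
    return float(np.sum(diff ** 2))
```

The loss vanishes when the two samples share their first three moments; minimizing it over parameters (e.g., with any off-the-shelf optimizer) yields the moment-matched fit.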
<p>Below, we suggest a simple and efficient way of performing DDM parameter estimation, which applies to a broad class of DDM variants without significant additional computational burden. It follows from fitting a self-consistency equation that response times have to obey under such variants.</p>
</sec>
<sec id="s3">
<title>A Self-Consistency Equation for DDMs</title>
<p>First, note that <xref ref-type="disp-formula" rid="e2">Eq. 2</xref> can be rewritten as follows:<disp-formula id="e3">
<mml:math id="me3">
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>t</mml:mi>
<mml:mi>v</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>&#x3c3;</mml:mi>
<mml:mstyle displaystyle="true">
<mml:munderover>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mi>t</mml:mi>
<mml:mo>&#x2032;</mml:mo>
</mml:msup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:munderover>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b7;</mml:mi>
<mml:msup>
<mml:mi>t</mml:mi>
<mml:mo>&#x2032;</mml:mo>
</mml:msup>
</mml:msub>
</mml:mrow>
</mml:mstyle>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>t</mml:mi>
<mml:mi>v</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>&#x3c3;</mml:mi>
<mml:msqrt>
<mml:mi>t</mml:mi>
</mml:msqrt>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
<label>(3)</label>
</disp-formula>where we coin <inline-formula id="inf54">
<mml:math id="minf54">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>&#x225c;</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>/</mml:mo>
<mml:mrow>
<mml:msqrt>
<mml:mi>t</mml:mi>
</mml:msqrt>
</mml:mrow>
</mml:mrow>
<mml:mstyle displaystyle="true">
<mml:msubsup>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mi>t</mml:mi>
<mml:mo>&#x2032;</mml:mo>
</mml:msup>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b7;</mml:mi>
<mml:msup>
<mml:mi>t</mml:mi>
<mml:mo>&#x2032;</mml:mo>
</mml:msup>
</mml:msub>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
</mml:math>
</inline-formula> the &#x201c;normalized cumulative perturbation&#x201d;. Now let <inline-formula id="inf55">
<mml:math id="minf55">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c4;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> be the decision time of the <italic>i</italic>th trial. Note that decision times are trivially related to cumulative perturbations because, by definition, <inline-formula id="inf56">
<mml:math id="minf56">
<mml:mrow>
<mml:mrow>
<mml:mo>&#x7c;</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c4;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>&#x7c;</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>b</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>. This implies that:<disp-formula id="e4">
<mml:math id="me4">
<mml:mrow>
<mml:mi>b</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mo>&#x7c;</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>&#x3c4;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mi>v</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>&#x3c3;</mml:mi>
<mml:msqrt>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c4;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:msqrt>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c4;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>&#x7c;</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(4)</label>
</disp-formula>where <inline-formula id="inf57">
<mml:math id="minf57">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c4;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> denotes the (unknown) normalized cumulative perturbation term of the <italic>i</italic>th&#x20;trial.</p>
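The rewriting in Eq. 3 is exact and easy to check numerically: summing i.i.d. unit-variance perturbations and rescaling by 1/&#x221a;t recovers the same trajectory endpoint, and the normalized cumulative perturbation is itself standard normal. The snippet below is a sketch with arbitrary parameter values and unit time steps.

```python
import numpy as np

rng = np.random.default_rng(1)
v, sigma, x0, t = 0.3, 1.2, 0.1, 50     # arbitrary illustrative values

eta = rng.standard_normal(t)                          # i.i.d. perturbations eta_{t'}
x_t = x0 + t * v + sigma * eta.sum()                  # first line of Eq. 3
eta_tilde = eta.sum() / np.sqrt(t)                    # normalized cumulative perturbation
x_alt = x0 + t * v + sigma * np.sqrt(t) * eta_tilde   # second line of Eq. 3

# eta_tilde follows a standard normal distribution, whatever t is:
samples = np.array([rng.standard_normal(t).sum() / np.sqrt(t) for _ in range(4000)])
```

The two expressions for the decision variable agree to machine precision, and the empirical mean and standard deviation of `samples` are close to 0 and 1, respectively.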
<p>Information regarding the binary decision outcome <inline-formula id="inf58">
<mml:math id="minf58">
<mml:mrow>
<mml:msub>
<mml:mi>o</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x2208;</mml:mo>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1,1</mml:mn>
</mml:mrow>
<mml:mo>}</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> further disambiguates <xref ref-type="disp-formula" rid="e4">Eq. 4</xref> as follows:<disp-formula id="e5">
<mml:math id="me5">
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mi>b</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>&#x3c4;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mi>v</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>&#x3c3;</mml:mi>
<mml:msqrt>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c4;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:msqrt>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c4;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mtext>&#xa0;&#xa0;&#xa0;&#xa0;&#xa0;&#xa0;if&#xa0;</mml:mtext>
<mml:msub>
<mml:mi>o</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mtext>&#xa0;&#xa0;&#xa0;</mml:mtext>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mtext>&#x27;up&#x27;&#xa0;decision</mml:mtext>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>&#x3c4;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mi>v</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>&#x3c3;</mml:mi>
<mml:msqrt>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c4;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:msqrt>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c4;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mtext>&#xa0;&#xa0;&#xa0;&#xa0;if&#xa0;</mml:mtext>
<mml:msub>
<mml:mi>o</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mtext>&#xa0;</mml:mtext>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mtext>&#x27;down&#x27;&#xa0;decision</mml:mtext>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi>o</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>&#x3c4;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mi>v</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>&#x3c3;</mml:mi>
<mml:msqrt>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c4;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:msqrt>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c4;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
<label>(5)</label>
</disp-formula>where <inline-formula id="inf59">
<mml:math id="minf59">
<mml:mrow>
<mml:msub>
<mml:mi>o</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> can only take two possible values (&#x2212;1 or 1). <xref ref-type="disp-formula" rid="e5">Eq. 5</xref> can then be used to relate decision times directly to DDM model parameters (and cumulative perturbations):<disp-formula id="e6">
<mml:math id="me6">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c4;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mi>o</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mi>b</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mi>v</mml:mi>
</mml:mfrac>
<mml:mo>&#x2212;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:msqrt>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c4;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
<mml:mi>v</mml:mi>
</mml:mfrac>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c4;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
<label>(6)</label>
</disp-formula>
</p>
<p>From <xref ref-type="disp-formula" rid="e6">Eq. 6</xref>, one can express observed trial-by-trial empirical response times <inline-formula id="inf60">
<mml:math id="minf60">
<mml:mrow>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> as follows:<disp-formula id="e7">
<mml:math id="me7">
<mml:mrow>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x2248;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mi>o</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mi>b</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mi>v</mml:mi>
</mml:mfrac>
<mml:mo>&#x2212;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:msqrt>
<mml:mrow>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mi>D</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
<mml:mi>v</mml:mi>
</mml:mfrac>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c4;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mi>D</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>&#x3b5;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
<label>(7)</label>
</disp-formula>where <inline-formula id="inf61">
<mml:math id="minf61">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b5;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> are unknown i.i.d. model residuals.</p>
<p>Note that decision times effectively appear on both the left-hand and the right-hand side of <xref ref-type="disp-formula" rid="e6">Eqs 6,</xref> <xref ref-type="disp-formula" rid="e7">7</xref>. This is a slightly unorthodox feature but, as we will see, it has effectively no consequence from the perspective of model inversion. In fact, one can think of <xref ref-type="disp-formula" rid="e7">Eq. 7</xref> as a &#x201c;self-consistency&#x201d; constraint that response times have to fulfill under the DDM. This is why we refer to <xref ref-type="disp-formula" rid="e7">Eq. 7</xref> as the <italic>self-consistency equation</italic> of DDMs. This, however, prevents us from using <xref ref-type="disp-formula" rid="e7">Eq. 7</xref> to generate data under the DDM. In other words, <xref ref-type="disp-formula" rid="e7">Eq. 7</xref> is only useful when analyzing empirical RT&#x20;data.</p>
<p>
<xref ref-type="fig" rid="F5">Figure&#x20;5</xref> below exemplifies the accuracy of DDM&#x2019;s self-consistency equation, using a Monte-Carlo simulation of 200 trials under the vanilla&#x20;DDM.</p>
<fig id="F5" position="float">
<label>FIGURE 5</label>
<caption>
<p>Self-consistency equation. Monte-Carlo simulation of 200 trials of a DDM, with arbitrary parameters (in this example, the drift rate is positive). In all panels, the color code indicates the decision outcomes, which depends upon the sign of the drift rate (green: correct decisions, red: incorrect decisions). Upper-left panel: simulated trajectories of the decision variable (<italic>y</italic>-axis) as a function of time (<italic>x</italic>-axis). Upper-right panel: response times&#x2019; distribution for both correct and incorrect choice outcomes over the 200&#x20;Monte-Carlo simulations. Lower-left panel: outcome ratios. Lower-right panel: the left-hand side of <xref ref-type="disp-formula" rid="e7">Eq. 7</xref> (<italic>y</italic>-axis) is plotted against the right-hand side of <xref ref-type="disp-formula" rid="e7">Eq. 7</xref> (<italic>x</italic>-axis), for each of the 200 trials.</p>
</caption>
<graphic xlink:href="frai-04-531316-g005.tif"/>
</fig>
<p>One can see that the DDM&#x2019;s self-consistency equation is valid, i.e.,&#x20;simulated response times almost always match their theoretical predictions. The few (small) deviations that can be eyeballed on the lower-right panel of <xref ref-type="fig" rid="F5">Figure&#x20;5</xref> actually correspond to simulation artifacts, where the decision variable exceeds the bound by some relatively small amount. This happens when the discretization step <inline-formula id="inf62">
<mml:math id="minf62">
<mml:mrow>
<mml:mi>&#x394;</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> (cf. <xref ref-type="disp-formula" rid="e2">Eq. 2</xref>) is too large when compared to the relative magnitude of the stochastic component of the system&#x2019;s dynamics. In effect, these artifactual errors grow when <inline-formula id="inf63">
<mml:math id="minf63">
<mml:mrow>
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>/</mml:mo>
<mml:mi>v</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> increases. Nevertheless, in principle, these and other errors would be absorbed in the model residuals <inline-formula id="inf64">
<mml:math id="minf64">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b5;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> of <xref ref-type="disp-formula" rid="e7">Eq.&#x20;7</xref>.</p>
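The check shown in Figure 5 can be sketched as follows: simulate discrete Gaussian random walks, record each trial&#x2019;s decision time, outcome, and normalized cumulative perturbation, and compare both sides of Eq. 6. Parameter values below are arbitrary; the small residual deviations are the discretization overshoots discussed above.

```python
import numpy as np

rng = np.random.default_rng(2)
v, sigma, b, x0 = 0.05, 0.4, 4.0, 0.0   # arbitrary illustrative values
lhs, rhs = [], []                        # both sides of Eq. 6, one entry per trial
for _ in range(200):
    x, cum_eta, t = x0, 0.0, 0
    while abs(x) < b and t < 100_000:
        eta = rng.standard_normal()
        x += v + sigma * eta             # unit-time-step random walk (Eq. 3)
        cum_eta += eta
        t += 1
    o = np.sign(x)                       # decision outcome o_i
    eta_tilde = cum_eta / np.sqrt(t)     # normalized cumulative perturbation
    lhs.append(t)                        # observed decision time tau_i
    rhs.append((o * b - x0) / v - sigma * np.sqrt(t) * eta_tilde / v)

r = np.corrcoef(lhs, rhs)[0, 1]
```

Up to bound overshoots, the two sides agree trial by trial, so their correlation across trials is close to one.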
<p>Now recall that recent extensions of vanilla DDMs include, e.g., collapsing bounds (<xref ref-type="bibr" rid="B26">Hawkins et&#x20;al., 2015</xref>; <xref ref-type="bibr" rid="B54">Voskuilen et&#x20;al., 2016</xref>) and/or nonlinear transformations of the state-space (<xref ref-type="bibr" rid="B51">Tajima et&#x20;al., 2016</xref>). As the astute reader may have already guessed, the self-consistency equation can be generalized to such DDM variants. Let us assume that <xref ref-type="disp-formula" rid="e2">Eqs 2,</xref> <xref ref-type="disp-formula" rid="e3">3</xref> still hold, i.e.,&#x20;the decision process is still based upon a Gaussian random walk. However, we now assume that the decision is triggered when an arbitrary transformation <inline-formula id="inf65">
<mml:math id="minf65">
<mml:mrow>
<mml:mi>z</mml:mi>
<mml:mo>:</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>&#x2192;</mml:mo>
<mml:mi>z</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> of the base random walk <inline-formula id="inf66">
<mml:math id="minf66">
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> has reached a predefined threshold <inline-formula id="inf67">
<mml:math id="minf67">
<mml:mrow>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>b</mml:mi>
<mml:mo>&#x2322;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> that can vary over time (e.g., a collapsing bound). <xref ref-type="disp-formula" rid="e5">Eq. 5</xref> now becomes:<disp-formula id="e8">
<mml:math id="me8">
<mml:mrow>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>b</mml:mi>
<mml:mo>&#x2322;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c4;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi>o</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mi>z</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>&#x3c4;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mi>v</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>&#x3c3;</mml:mi>
<mml:msqrt>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c4;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:msqrt>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(8)</label>
</disp-formula>
</p>
<p>If the transformation <inline-formula id="inf68">
<mml:math id="minf68">
<mml:mrow>
<mml:mi>z</mml:mi>
<mml:mo>:</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>&#x2192;</mml:mo>
<mml:mi>z</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> is invertible (i.e.,&#x20;if <inline-formula id="inf69">
<mml:math id="minf69">
<mml:mrow>
<mml:msup>
<mml:mi>z</mml:mi>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> exists and is unique), then the self-consistency equation for reaction times <inline-formula id="inf70">
<mml:math id="minf70">
<mml:mrow>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> now generalizes as follows:<disp-formula id="e9">
<mml:math id="me9">
<mml:mrow>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x2248;</mml:mo>
<mml:munder>
<mml:munder>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:mi>z</mml:mi>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>o</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>b</mml:mi>
<mml:mo>&#x2322;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mi>D</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mi>v</mml:mi>
</mml:mfrac>
<mml:mo>&#x2212;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:msqrt>
<mml:mrow>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mi>D</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
<mml:mi>v</mml:mi>
</mml:mfrac>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mi>D</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo stretchy="true">&#xfe38;</mml:mo>
</mml:munder>
<mml:mrow>
<mml:mi>g</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mi>D</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:munder>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>&#x3b5;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
<label>(9)</label>
</disp-formula>where <inline-formula id="inf71">
<mml:math id="minf71">
<mml:mrow>
<mml:mi>g</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mi>D</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> is the &#x201c;expected&#x201d; (or rather, &#x201c;self-consistent&#x201d;) response time at trial <inline-formula id="inf72">
<mml:math id="minf72">
<mml:mi>i</mml:mi>
</mml:math>
</inline-formula>, which depends nonlinearly on DDM parameters (and on response times). Note that one recovers the self-consistency equation of &#x201c;vanilla&#x201d; DDM (<xref ref-type="disp-formula" rid="e7">Eq. 7</xref>) when setting <inline-formula id="inf73">
<mml:math id="minf73">
<mml:mrow>
<mml:mi>z</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mi>z</mml:mi>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf74">
<mml:math id="minf74">
<mml:mrow>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>b</mml:mi>
<mml:mo>&#x2322;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>b</mml:mi>
<mml:mtext>&#x2002;</mml:mtext>
<mml:mo>&#x2200;</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>.</p>
<p>Importantly, inverting <xref ref-type="disp-formula" rid="e9">Eq. 9</xref> can be used to estimate parameters <inline-formula id="inf75">
<mml:math id="minf75">
<mml:mi>&#x3b3;</mml:mi>
</mml:math>
</inline-formula> and <inline-formula id="inf76">
<mml:math id="minf76">
<mml:mi>&#x3c9;</mml:mi>
</mml:math>
</inline-formula> that control the transformation <inline-formula id="inf77">
<mml:math id="minf77">
<mml:mrow>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>&#x3b3;</mml:mi>
</mml:msub>
<mml:mo>:</mml:mo>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mover accent="true">
<mml:mo>&#x2192;</mml:mo>
<mml:mi>&#x3b3;</mml:mi>
</mml:mover>
</mml:mrow>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>&#x3b3;</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> or the collapsing bounds <inline-formula id="inf78">
<mml:math id="minf78">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>b</mml:mi>
<mml:mo>&#x2322;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>&#x3c9;</mml:mi>
</mml:msub>
<mml:mo>:</mml:mo>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mover accent="true">
<mml:mo>&#x2192;</mml:mo>
<mml:mi>&#x3c9;</mml:mi>
</mml:mover>
</mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>b</mml:mi>
<mml:mo>&#x2322;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>&#x3c9;</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>, respectively. We will see examples of this in the Results section below. This implies that the self-consistency equation can be used, in conjunction with adequate statistical parameter estimation approaches (see below), for estimating DDM parameters under many different variants of DDM, including those for which no analytical result exists for the response time distribution.</p>
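As an illustration of how Eq. 9 accommodates such variants, the sketch below evaluates the self-consistent response time for the identity mapping z(x) = x and a hypothetical exponentially collapsing bound b(t) = b0&#x2009;exp(&#x2212;&#x3c9;t); the exponential form and all names here are our assumptions, not a parameterization prescribed in the text.

```python
import numpy as np

def g_collapsing(y, o, v, x0, sigma, t_nd, eta_tilde, b0, omega):
    """Right-hand side of Eq. 9 for z(x) = x and an (assumed) exponentially
    collapsing bound b(t) = b0 * exp(-omega * t). With omega = 0, this
    reduces to the vanilla self-consistency equation (Eq. 7)."""
    tau = y - t_nd                       # decision time = RT minus non-decision time
    bound = b0 * np.exp(-omega * tau)    # bound height at the hitting time
    return (o * bound - x0) / v - sigma * np.sqrt(tau) * eta_tilde / v + t_nd

# with omega = 0, no initial bias, and a null cumulative perturbation,
# the self-consistent RT is simply b0 / v + t_nd
rt0 = g_collapsing(y=2.0, o=1, v=1.0, x0=0.0, sigma=1.0, t_nd=0.3,
                   eta_tilde=0.0, b0=1.5, omega=0.0)
```

Fitting &#x3c9; (or the parameters &#x3b3; of a non-trivial transformation z) then simply amounts to including them among the unknowns when inverting Eq. 9.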
</sec>
<sec id="s4">
<title>An Overcomplete Likelihood Approach to DDM Inversion</title>
<p>Fitting <xref ref-type="disp-formula" rid="e9">Eq. 9</xref> to response time data reduces to finding the set of parameters that renders the DDM self-consistent. In doing so, normalized cumulative perturbation terms <inline-formula id="inf79">
<mml:math id="minf79">
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:math>
</inline-formula> are treated as nuisance model parameters, but model parameters nonetheless. This means that there are more model parameters than there are data points. In other words, <xref ref-type="disp-formula" rid="e9">Eq. 9</xref> induces an &#x201c;overcomplete&#x201d; likelihood function <inline-formula id="inf80">
<mml:math id="minf80">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mo>&#x7c;</mml:mo>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>&#x3c9;</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>&#x3b3;</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mi>D</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>:<disp-formula id="e10">
<mml:math id="me10">
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mo>&#x7c;</mml:mo>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>&#x3c9;</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>&#x3b3;</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mi>D</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mstyle displaystyle="true">
<mml:munderover>
<mml:mo>&#x220f;</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>n</mml:mi>
</mml:munderover>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>&#x7c;</mml:mo>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>&#x3c9;</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>&#x3b3;</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mi>D</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mstyle>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mo>&#x3d;</mml:mo>
<mml:mstyle displaystyle="true">
<mml:munderover>
<mml:mo>&#x220f;</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>n</mml:mi>
</mml:munderover>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>g</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>&#x3c9;</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>&#x3b3;</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mi>D</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mi>&#x3bb;</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mstyle>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
<label>(10)</label>
</disp-formula>where <inline-formula id="inf81">
<mml:math id="minf81">
<mml:mi>&#x3bb;</mml:mi>
</mml:math>
</inline-formula> is the variance of the model residuals <inline-formula id="inf82">
<mml:math id="minf82">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b5;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> of <xref ref-type="disp-formula" rid="e9">Eq. 9</xref>, <inline-formula id="inf83">
<mml:math id="minf83">
<mml:mrow>
<mml:mi>g</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mo>&#x22c5;</mml:mo>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> is the &#x201c;self-consistent&#x201d; response time given in <xref ref-type="disp-formula" rid="e9">Eq. 9</xref>, and we have used the (convenient but slightly abusive) notation <inline-formula id="inf84">
<mml:math id="minf84">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> to reference cumulative perturbations w.r.t. their corresponding trial&#x20;index.</p>
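To make the structure of Eq. 10 concrete, the overcomplete likelihood is simply a product of per-trial Gaussian densities centered on the self-consistent response time, with one nuisance perturbation per trial. The sketch below is illustrative only: <monospace>g</monospace> is a hypothetical stand-in for the mapping of Eq. 9, and <monospace>theta</monospace> bundles the remaining DDM parameters.

```python
import numpy as np

def overcomplete_loglik(y, eta, theta, g, lam):
    """Log of Eq. 10: a sum of per-trial Gaussian log-densities around
    the 'self-consistent' RT g(theta, eta_i), with one nuisance
    perturbation eta_i per trial. Here g and theta are placeholders
    for the mapping and parameters of Eq. 9."""
    resid = y - np.array([g(theta, e) for e in eta])
    return -0.5 * np.sum(resid ** 2 / lam + np.log(2 * np.pi * lam))

# Toy example with a hypothetical (linear) stand-in for g
g = lambda theta, e: theta[0] + theta[1] * e
y = np.array([0.9, 1.1, 1.0])
ll = overcomplete_loglik(y, np.zeros(3), (1.0, 0.2), g, lam=0.05)
```

Note that, because each trial carries its own nuisance variable, this likelihood alone cannot identify the parameters; the prior constraints introduced below are what regularize the inversion.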
<p>Dealing with such an overcomplete likelihood function requires additional constraints on model parameters: this is easily done within a Bayesian framework. We therefore rely on the variational Laplace approach (<xref ref-type="bibr" rid="B19">Friston et&#x20;al., 2007</xref>; <xref ref-type="bibr" rid="B9">Daunizeau, 2017</xref>), which was developed to perform approximate Bayesian inference on nonlinear generative models (see <xref ref-type="sec" rid="s12">Supplementary Appendix S1</xref> for mathematical details). In what follows, we propose a simple set of prior constraints that help regularize the inference.<list list-type="simple">
<list-item>
<p>a. Prior moments of the cumulative perturbations: the &#x201c;no barrier&#x201d; approximation</p>
</list-item>
</list>
</p>
<p>Recall that, under the DDM framework, errors can only be due to the stochastic perturbation noise. More precisely, errors are due to those perturbations that are strong enough to deflect the system&#x2019;s trajectory and make it hit the &#x201c;wrong&#x201d; bound (e.g., the lower bound if the drift rate is positive). Let <inline-formula id="inf85">
<mml:math id="minf85">
<mml:mrow>
<mml:msub>
<mml:mi>Q</mml:mi>
<mml:mo>&#x3d;</mml:mo>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> be the proportion of correct responses. For example, if the drift rate is positive, then <inline-formula id="inf86">
<mml:math id="minf86">
<mml:mrow>
<mml:msub>
<mml:mi>Q</mml:mi>
<mml:mo>&#x3d;</mml:mo>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> corresponds to responses that hit the upper bound. Now let <inline-formula id="inf87">
<mml:math id="minf87">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> be the critical value of <inline-formula id="inf88">
<mml:math id="minf88">
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:math>
</inline-formula> such that <inline-formula id="inf89">
<mml:math id="minf89">
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x2265;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi>Q</mml:mi>
<mml:mo>&#x3d;</mml:mo>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> (see <xref ref-type="fig" rid="F6">Figure&#x20;6</xref> below). Then, we know that errors correspond to those perturbations <inline-formula id="inf90">
<mml:math id="minf90">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> that are smaller than <inline-formula id="inf91">
<mml:math id="minf91">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>. But what do we know about the distribution of perturbations? Importantly, if the DDM&#x2019;s stochastic evidence accumulation process had no decision bound, then the distribution of normalized cumulative perturbations would be invariant over time and such that <inline-formula id="inf92">
<mml:math id="minf92">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>&#x223c;</mml:mo>
<mml:mi>N</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>0,1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mtext>&#x2003;</mml:mtext>
<mml:mo>&#x2200;</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>. This, in fact, is the very reason why we introduced normalized cumulative perturbations in <xref ref-type="disp-formula" rid="e3">Eq. 3</xref>. Under this &#x201c;no barrier&#x201d; approximation, one can now derive the conditional expectations <inline-formula id="inf93">
<mml:math id="minf93">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3bc;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf94">
<mml:math id="minf94">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3bc;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x2260;</mml:mo>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> of the perturbation <inline-formula id="inf95">
<mml:math id="minf95">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, given that the decision outcome <inline-formula id="inf96">
<mml:math id="minf96">
<mml:mrow>
<mml:msub>
<mml:mi>o</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is correct or erroneous, respectively:<disp-formula id="e11">
<mml:math id="me11">
<mml:mrow>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3bc;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
</mml:msub>
<mml:mo>&#x225c;</mml:mo>
<mml:mi>E</mml:mi>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>&#x7c;</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>o</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>E</mml:mi>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>&#x7c;</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x3e;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:msub>
<mml:mi>Q</mml:mi>
<mml:mo>&#x3d;</mml:mo>
</mml:msub>
<mml:msqrt>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>&#x3c0;</mml:mi>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:mfrac>
<mml:mi>exp</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mn>2</mml:mn>
</mml:mfrac>
<mml:msubsup>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3bc;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x2260;</mml:mo>
</mml:msub>
<mml:mo>&#x225c;</mml:mo>
<mml:mi>E</mml:mi>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>&#x7c;</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>o</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>E</mml:mi>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>&#x7c;</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x3c;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mo>&#x2212;</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>Q</mml:mi>
<mml:mo>&#x3d;</mml:mo>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:msqrt>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>&#x3c0;</mml:mi>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:mfrac>
<mml:mi>exp</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mn>2</mml:mn>
</mml:mfrac>
<mml:msubsup>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(11)</label>
</disp-formula>
</p>
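Given the definition P(&#x3b7;&#x2dc; &#x2265; &#x3b7;&#x2dc;<sub>=</sub>) = Q<sub>=</sub>, these conditional means follow from the standard truncated-normal identities E[X|X&#x3e;a] = &#x3c6;(a)/(1&#x2212;&#x3a6;(a)) and E[X|X&#x3c;a] = &#x2212;&#x3c6;(a)/&#x3a6;(a). A minimal numerical sketch (illustrative, not the authors&#x2019; code) with a quick Monte-Carlo check:

```python
import numpy as np
from scipy.stats import norm

def conditional_means(Q_correct):
    """'No barrier' conditional means of the normalized cumulative
    perturbation, from standard truncated-normal identities, with the
    critical value a defined by P(eta >= a) = Q_correct."""
    a = norm.ppf(1.0 - Q_correct)                 # critical value
    mu_correct = norm.pdf(a) / Q_correct          # E[eta | eta > a]
    mu_error = -norm.pdf(a) / (1.0 - Q_correct)   # E[eta | eta < a]
    return a, mu_correct, mu_error

# Monte-Carlo sanity check at Q_correct = 0.8
rng = np.random.default_rng(0)
eta = rng.standard_normal(500_000)
a, mu_c, mu_e = conditional_means(0.8)
assert abs(eta[eta > a].mean() - mu_c) < 1e-2
assert abs(eta[eta < a].mean() - mu_e) < 1e-2
```

As expected, correct trials carry mildly positive perturbations on average, whereas errors require strongly negative ones.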
<fig id="F6" position="float">
<label>FIGURE 6</label>
<caption>
<p>Approximate conditional distributions of the normalized cumulative perturbations. Upper-left panel: The black line shows the &#x201c;no barrier&#x201d; standard normal distribution of normalized cumulative perturbations. The shaded gray area has size <inline-formula id="inf97">
<mml:math id="minf97">
<mml:mrow>
<mml:msub>
<mml:mi>Q</mml:mi>
<mml:mo>&#x3d;</mml:mo>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, and its left bound (dashed black line) is the critical value <inline-formula id="inf98">
<mml:math id="minf98">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> below which cumulative perturbations eventually induce errors. The green and red lines depict the ensuing approximate conditional distributions given in <xref ref-type="disp-formula" rid="e13">Eq. 13</xref>. Upper-right panel: A representative Monte-Carlo simulation. The green and red bars show the sample histogram of normalized cumulative perturbations for correct and erroneous decisions, respectively (over 200 trials, same simulation as in <xref ref-type="fig" rid="F5">Figure&#x20;5</xref>). The green and red lines depict the corresponding approximate conditional normal distributions (cf. <xref ref-type="disp-formula" rid="e13">Eq. 13</xref>). Lower-left panel: The sample mean estimates of conditional perturbations (<italic>y</italic>-axis) are plotted against their &#x201c;no barrier&#x201d; approximation (<italic>x</italic>-axis, <xref ref-type="disp-formula" rid="e11">Eq. 11</xref>). Monte-Carlo simulations are split according to the sign of the drift rate, and then binned according to deciles of approximate conditional means of normalized cumulative perturbations (green: correct, red: error, error bars: within-decile means&#x20;&#xb1; standard deviations). The black dotted line shows the identity mapping (perfect approximation). Lower-right panel: The sample variance estimates of normalized cumulative perturbations (<italic>y</italic>-axis) are plotted against their &#x201c;no barrier&#x201d; approximation (<italic>x</italic>-axis, <xref ref-type="disp-formula" rid="e12">Eq. 12</xref>). Same format as the lower-left&#x20;panel.</p>
</caption>
<graphic xlink:href="frai-04-531316-g006.tif"/>
</fig>
<p>
<xref ref-type="disp-formula" rid="e11">Equation 11</xref> is obtained from the known expression of first-order moments of a truncated normal density <inline-formula id="inf99">
<mml:math id="minf99">
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>0,1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>. Critically, <xref ref-type="disp-formula" rid="e11">Eq. 11</xref> does not depend upon DDM parameters. Of course, the same logic extends to conditional variances <inline-formula id="inf100">
<mml:math id="minf100">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3a3;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf101">
<mml:math id="minf101">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3a3;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x2260;</mml:mo>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, whose analytical expressions are given by:<disp-formula id="e12">
<mml:math id="me12">
<mml:mrow>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3a3;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
</mml:msub>
<mml:mo>&#x225c;</mml:mo>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>&#x7c;</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>o</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>&#x7c;</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x3e;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3bc;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3bc;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3a3;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x2260;</mml:mo>
</mml:msub>
<mml:mo>&#x225c;</mml:mo>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>&#x7c;</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>o</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>&#x7c;</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x3c;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3bc;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x2260;</mml:mo>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3bc;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x2260;</mml:mo>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(12)</label>
</disp-formula>
</p>
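The conditional variances admit the same treatment, via the standard truncated-normal identity V[X|trunc.] = 1 + a&#x22c5;&#x3bc; &#x2212; &#x3bc;&#xb2;. The following self-contained sketch (again illustrative, not the authors&#x2019; code) verifies both variances against truncated samples:

```python
import numpy as np
from scipy.stats import norm

def truncated_variances(Q_correct):
    """'No barrier' conditional variances via the standard
    truncated-normal identity V = 1 + a*mu - mu**2, with the
    critical value a defined by P(X >= a) = Q_correct."""
    a = norm.ppf(1.0 - Q_correct)
    mu_c = norm.pdf(a) / Q_correct            # E[X | X > a]
    mu_e = -norm.pdf(a) / (1.0 - Q_correct)   # E[X | X < a]
    var_c = 1.0 + a * mu_c - mu_c ** 2        # V[X | X > a]
    var_e = 1.0 + a * mu_e - mu_e ** 2        # V[X | X < a]
    return a, var_c, var_e

# Monte-Carlo sanity check at Q_correct = 0.75
rng = np.random.default_rng(2)
x = rng.standard_normal(400_000)
a, var_c, var_e = truncated_variances(0.75)
assert abs(x[x > a].var() - var_c) < 1e-2
assert abs(x[x < a].var() - var_e) < 1e-2
```

Truncation shrinks both variances below one, more so on the (more sharply truncated) error side.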
<p>A simple moment-matching approach thus suggests approximating the conditional distribution <inline-formula id="inf102">
<mml:math id="minf102">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>&#x7c;</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>o</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> of normalized cumulative perturbations as follows:<disp-formula id="e13">
<mml:math id="me13">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>&#x7c;</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>o</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:mi>N</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3bc;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3a3;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mtext>&#xa0;&#xa0;&#xa0;if&#xa0;</mml:mtext>
<mml:msub>
<mml:mi>o</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>c</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>t</mml:mi>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mi>N</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3bc;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x2260;</mml:mo>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3a3;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x2260;</mml:mo>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mtext>&#xa0;&#xa0;&#xa0;if&#xa0;</mml:mtext>
<mml:msub>
<mml:mi>o</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>e</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>r</mml:mi>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(13)</label>
</disp-formula>where the correct/error label depends on the sign of the drift rate. This concludes the derivation of our simple &#x201c;no barrier&#x201d; approximation to the conditional moments of cumulative perturbations.</p>
<p>Note that we derived this approximation without accounting for the (only) mathematical subtlety of the DDM: namely, the fact that decision bounds formally act as &#x201c;absorbing barriers&#x201d; for the system (<xref ref-type="bibr" rid="B7">Broderick et&#x20;al., 2009</xref>). Critically, absorbing barriers induce some non-trivial forms of dynamical degeneracy. In particular, they eventually favor paths that are made of extreme samples of the perturbation noise. This is because such paths have a higher chance of crossing the boundary, despite being comparatively less likely than near-zero samples under the corresponding &#x201c;no barrier&#x201d; distribution. One may thus wonder whether ignoring absorbing barriers invalidates the moment-matching approximation given in <xref ref-type="disp-formula" rid="e11">Eqs 11</xref>&#x2013;<xref ref-type="disp-formula" rid="e13">13</xref>. To address this concern, we conducted a series of 1000&#x20;Monte-Carlo simulations, where DDM parameters were randomly drawn (each simulation consisted of 200 trials of the same decision system). We used these to compare the sample estimates of first- and second-order moments of normalized cumulative perturbations with their analytical approximations (as given in <xref ref-type="disp-formula" rid="e11">Eqs. 11,</xref> <xref ref-type="disp-formula" rid="e12">12</xref>). The results are given in <xref ref-type="fig" rid="F6">Figure&#x20;6</xref>&#x20;below.</p>
<p>One can see in the upper-right panel of <xref ref-type="fig" rid="F6">Figure&#x20;6</xref> that the distribution of normalized cumulative perturbations may strongly deviate from the standard normal density. In particular, this distribution clearly exhibits two modes, which correspond to correct and incorrect decisions, respectively. We observed this bimodal shape across almost all Monte-Carlo simulations. This means that bound hits are less likely to be caused by perturbations of small magnitude than expected under the &#x201c;no-barrier&#x201d; distribution (cf. the lack of probability mass around zero). Nevertheless, the ensuing approximate conditional distributions match their sample estimates reasonably well. In fact, the lower panels of <xref ref-type="fig" rid="F6">Figure&#x20;6</xref> demonstrate that sample means and variances of normalized cumulative perturbations are well approximated by <xref ref-type="disp-formula" rid="e11">Eqs 11,</xref> <xref ref-type="disp-formula" rid="e12">12</xref> for a broad range of DDM parameters. We note that the &#x201c;no-barrier&#x201d; approximation tends to slightly underestimate first-order moments and overestimate second-order moments. This bias is negligible, however, when compared to the overall range of variation of conditional moments. In brief, the effect of absorbing barriers on system dynamics has little impact on the conditional moments of normalized cumulative perturbations.</p>
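The bimodal pattern described above is easy to reproduce with a minimal Euler simulation of a two-bound accumulator. The sketch below uses assumed illustrative parameter values (it is not the simulation setup of Figure&#x20;6): with a positive drift, error trials concentrate on strongly negative normalized cumulative perturbations.

```python
import numpy as np

# Minimal two-bound drift-diffusion (Euler scheme); parameter values
# are illustrative, not those used for the Figure 6 simulations.
rng = np.random.default_rng(1)
dt, drift, noise_sd, bound = 5e-3, 1.0, 1.0, 1.0
eta_correct, eta_error = [], []

for _ in range(400):
    x, w, n_steps = 0.0, 0.0, 0
    while abs(x) < bound:                 # accumulate until a bound is hit
        dW = np.sqrt(dt) * rng.standard_normal()
        x += drift * dt + noise_sd * dW
        w += dW                           # raw cumulative perturbation
        n_steps += 1
    eta = w / np.sqrt(n_steps * dt)       # normalized: ~N(0,1) without bounds
    (eta_correct if x >= bound else eta_error).append(eta)

# With a positive drift, lower-bound hits (errors) require strongly
# negative cumulative perturbations; correct trials mildly positive ones.
```

Plotting histograms of <monospace>eta_correct</monospace> and <monospace>eta_error</monospace> reproduces the two modes visible in the upper-right panel of Figure&#x20;6.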
<p>When fitting the DDM to empirical RT data, one thus wants to enforce the distributional constraint in <xref ref-type="disp-formula" rid="e11">Eqs 11</xref>&#x2013;<xref ref-type="disp-formula" rid="e13">13</xref> onto the perturbation term in <xref ref-type="disp-formula" rid="e9">Eq. 9</xref>. This can be done using a change of variable <inline-formula id="inf103">
<mml:math id="minf103">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>h</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c2;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>, where <inline-formula id="inf104">
<mml:math id="minf104">
<mml:mi>&#x3c2;</mml:mi>
</mml:math>
</inline-formula> are unconstrained dummy variables and <inline-formula id="inf105">
<mml:math id="minf105">
<mml:mrow>
<mml:mi>h</mml:mi>
<mml:mo>:</mml:mo>
<mml:msub>
<mml:mi>&#x3c2;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x2192;</mml:mo>
<mml:mi>h</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c2;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> is the following moment-enforcing mapping:<disp-formula id="e14">
<mml:math id="me14">
<mml:mrow>
<mml:mi>h</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c2;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3bc;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c2;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:msub>
<mml:mi>Q</mml:mi>
<mml:mo>&#x3d;</mml:mo>
</mml:msub>
</mml:mrow>
</mml:mfrac>
<mml:mstyle displaystyle="true">
<mml:munder>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:msub>
<mml:mi>I</mml:mi>
<mml:mo>&#x3d;</mml:mo>
</mml:msub>
</mml:mrow>
</mml:munder>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c2;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:msqrt>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:msub>
<mml:mi>Q</mml:mi>
<mml:mo>&#x3d;</mml:mo>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3a3;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:munder>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:msub>
<mml:mi>I</mml:mi>
<mml:mo>&#x3d;</mml:mo>
</mml:msub>
</mml:mrow>
</mml:munder>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c2;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:msub>
<mml:mi>Q</mml:mi>
<mml:mo>&#x3d;</mml:mo>
</mml:msub>
</mml:mrow>
</mml:mfrac>
<mml:mstyle displaystyle="true">
<mml:munder>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mi>i</mml:mi>
<mml:mo>&#x2032;</mml:mo>
</mml:msup>
<mml:mo>&#x2208;</mml:mo>
<mml:msub>
<mml:mi>I</mml:mi>
<mml:mo>&#x3d;</mml:mo>
</mml:msub>
</mml:mrow>
</mml:munder>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c2;</mml:mi>
<mml:msup>
<mml:mi>i</mml:mi>
<mml:mo>&#x2032;</mml:mo>
</mml:msup>
</mml:msub>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:msqrt>
<mml:mtext>&#xa0;&#xa0;&#xa0;&#xa0;&#xa0;&#xa0;&#xa0;&#xa0;&#xa0;&#xa0;&#xa0;&#xa0;&#xa0;&#xa0;&#xa0;&#xa0;&#xa0;&#xa0;&#xa0;</mml:mtext>
<mml:mi>i</mml:mi>
<mml:mi>f</mml:mi>
<mml:mtext>&#xa0;</mml:mtext>
<mml:mi>i</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:msub>
<mml:mi>I</mml:mi>
<mml:mo>&#x3d;</mml:mo>
</mml:msub>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3bc;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x2260;</mml:mo>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c2;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>Q</mml:mi>
<mml:mo>&#x3d;</mml:mo>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
<mml:mstyle displaystyle="true">
<mml:munder>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:msub>
<mml:mi>I</mml:mi>
<mml:mo>&#x2260;</mml:mo>
</mml:msub>
</mml:mrow>
</mml:munder>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c2;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:msqrt>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>Q</mml:mi>
<mml:mo>&#x3d;</mml:mo>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3a3;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x2260;</mml:mo>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:munder>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:msub>
<mml:mi>I</mml:mi>
<mml:mo>&#x2260;</mml:mo>
</mml:msub>
</mml:mrow>
</mml:munder>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c2;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>Q</mml:mi>
<mml:mo>&#x3d;</mml:mo>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
<mml:mstyle displaystyle="true">
<mml:munder>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mi>i</mml:mi>
<mml:mo>&#x2032;</mml:mo>
</mml:msup>
<mml:mo>&#x2208;</mml:mo>
<mml:msub>
<mml:mi>I</mml:mi>
<mml:mo>&#x2260;</mml:mo>
</mml:msub>
</mml:mrow>
</mml:munder>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c2;</mml:mi>
<mml:msup>
<mml:mi>i</mml:mi>
<mml:mo>&#x2032;</mml:mo>
</mml:msup>
</mml:msub>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:msqrt>
<mml:mtext>&#xa0;&#xa0;&#xa0;</mml:mtext>
<mml:mi>i</mml:mi>
<mml:mi>f</mml:mi>
<mml:mtext>&#xa0;</mml:mtext>
<mml:mi>i</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:msub>
<mml:mi>I</mml:mi>
<mml:mo>&#x2260;</mml:mo>
</mml:msub>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(14)</label>
</disp-formula>where <inline-formula id="inf106">
<mml:math id="minf106">
<mml:mrow>
<mml:msub>
<mml:mi>I</mml:mi>
<mml:mo>&#x3d;</mml:mo>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf107">
<mml:math id="minf107">
<mml:mrow>
<mml:msub>
<mml:mi>I</mml:mi>
<mml:mo>&#x2260;</mml:mo>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> are the indices of correct and incorrect trials, respectively (and <inline-formula id="inf108">
<mml:math id="minf108">
<mml:mi>n</mml:mi>
</mml:math>
</inline-formula> is the total number of trials). <xref ref-type="disp-formula" rid="e14">Eq. 14</xref> ensures that the sample moments of the estimated normalized cumulative perturbations <inline-formula id="inf109">
<mml:math id="minf109">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3b7;</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>h</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c2;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> match <xref ref-type="disp-formula" rid="e11">Eqs 11,</xref> <xref ref-type="disp-formula" rid="e12">12</xref>, irrespective of the dummy variable <inline-formula id="inf110">
<mml:math id="minf110">
<mml:mi>&#x3c2;</mml:mi>
</mml:math>
</inline-formula>. This also implies that the effective degrees of freedom of the constrained model are in fact lower than what the native self-consistency function would suggest.<list list-type="simple">
<list-item>
<p>b. Prior constraints on native DDM parameters.</p>
</list-item>
</list>
</p>
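The moment-matching constraint of Eq. 14 amounts to standardizing the dummy variables separately within correct and incorrect trials, so that each subset attains the target mean and variance. The sketch below illustrates this in Python; the function and argument names are our own, and the correct/incorrect trial counts stand in for the n·Q and n·(1 − Q) terms of Eq. 14.

```python
import numpy as np

def standardize_perturbations(zeta, correct, mu_c, Sigma_c, mu_e, Sigma_e):
    """Rescale dummy variables zeta (one per trial) so that, within correct
    and incorrect trials separately, their sample mean and variance match
    the target moments (mu, Sigma), irrespective of the raw zeta values.
    This mirrors the structure of Eq. 14."""
    zeta = np.asarray(zeta, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    out = np.empty_like(zeta)
    for mask, mu, Sigma in [(correct, mu_c, Sigma_c), (~correct, mu_e, Sigma_e)]:
        dev = zeta[mask] - zeta[mask].mean()          # center within subset
        out[mask] = mu + dev * np.sqrt(mask.sum() * Sigma / np.sum(dev ** 2))
    return out
```

By construction, the output subsets have exactly the prescribed sample moments whatever the input values, which is why the effective degrees of freedom are lower than the raw count of dummy variables suggests.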
<p>In addition, one may want to introduce the following prior constraints on the native DDM parameters:<list list-type="simple">
<list-item>
<p>&#x2022; The bound&#x2019;s height <inline-formula id="inf111">
<mml:math id="minf111">
<mml:mi>b</mml:mi>
</mml:math>
</inline-formula> is necessarily positive. This positivity constraint can be enforced by replacing <inline-formula id="inf112">
<mml:math id="minf112">
<mml:mi>b</mml:mi>
</mml:math>
</inline-formula> with a non-bounded parameter <inline-formula id="inf113">
<mml:math id="minf113">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c6;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, which relates to the bound&#x2019;s height through the following mapping: <inline-formula id="inf114">
<mml:math id="minf114">
<mml:mrow>
<mml:mi>b</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>exp</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c6;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>. We note that parameters <inline-formula id="inf115">
<mml:math id="minf115">
<mml:mi>&#x3c9;</mml:mi>
</mml:math>
</inline-formula> of collapsing bounds <inline-formula id="inf116">
<mml:math id="minf116">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>b</mml:mi>
<mml:mo>&#x2322;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>&#x3c9;</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> may not have to obey such a positivity constraint.</p>
</list-item>
<list-item>
<p>&#x2022; The standard deviation <inline-formula id="inf117">
<mml:math id="minf117">
<mml:mi>&#x3c3;</mml:mi>
</mml:math>
</inline-formula> is necessarily positive. Again, this can be enforced by replacing it with the following mapped parameter <inline-formula id="inf118">
<mml:math id="minf118">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c6;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>: <inline-formula id="inf119">
<mml:math id="minf119">
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>exp</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c6;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>.</p>
</list-item>
<list-item>
<p>&#x2022; The non-decision time <inline-formula id="inf120">
<mml:math id="minf120">
<mml:mrow>
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mi>D</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is necessarily positive and smaller than the minimum observed reaction time. This can be enforced by replacing the native non-decision time with the following mapped parameter <inline-formula id="inf121">
<mml:math id="minf121">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c6;</mml:mi>
<mml:mn>3</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>: <inline-formula id="inf122">
<mml:math id="minf122">
<mml:mrow>
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mi>D</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>min</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>R</mml:mi>
<mml:mi>T</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c6;</mml:mi>
<mml:mn>3</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>, where <inline-formula id="inf123">
<mml:math id="minf123">
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mo>&#xb7;</mml:mo>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> is the standard sigmoid mapping.</p>
</list-item>
<list-item>
<p>&#x2022; The initial bias <inline-formula id="inf124">
<mml:math id="minf124">
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is necessarily constrained between <inline-formula id="inf125">
<mml:math id="minf125">
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>b</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf126">
<mml:math id="minf126">
<mml:mi>b</mml:mi>
</mml:math>
</inline-formula>. This can be enforced by replacing the native initial condition with the following mapped parameter <inline-formula id="inf127">
<mml:math id="minf127">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c6;</mml:mi>
<mml:mn>4</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>: <inline-formula id="inf128">
<mml:math id="minf128">
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>exp</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c6;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c6;</mml:mi>
<mml:mn>4</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>.</p>
</list-item>
<list-item>
<p>&#x2022; In principle, the drift rate <inline-formula id="inf129">
<mml:math id="minf129">
<mml:mi>v</mml:mi>
</mml:math>
</inline-formula> can be either positive or negative. However, its magnitude is necessarily smaller than <inline-formula id="inf130">
<mml:math id="minf130">
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mi>b</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mrow>
<mml:mo>&#x7c;</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>&#x7c;</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>min</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>R</mml:mi>
<mml:mi>T</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mi>D</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
</inline-formula>, which corresponds to its &#x201c;ballistic&#x201d; limit (see <xref ref-type="sec" rid="s12">Supplementary Appendix S6</xref> for more details). This can be enforced by replacing the native drift rate with the following mapped parameter <inline-formula id="inf131">
<mml:math id="minf131">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c6;</mml:mi>
<mml:mn>5</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>: <inline-formula id="inf132">
<mml:math id="minf132">
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2b;</mml:mo>
<mml:mrow>
<mml:mo>&#x7c;</mml:mo>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c6;</mml:mi>
<mml:mn>4</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>&#x7c;</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
<mml:mi>exp</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c6;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>min</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>R</mml:mi>
<mml:mi>T</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c6;</mml:mi>
<mml:mn>3</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c6;</mml:mi>
<mml:mn>5</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>.</p>
</list-item>
</list>
</p>
<p>Here again, we use the set of dummy variables <inline-formula id="inf133">
<mml:math id="minf133">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c6;</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mn>5</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> in lieu of native DDM parameters.</p>
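For concreteness, the five mappings above can be collected into a single function. This is a minimal NumPy sketch of our own (function and variable names are assumptions, not the authors' code), with phi holding the dummy variables φ1 through φ5 in order.

```python
import numpy as np

def s(x):
    """Standard sigmoid mapping."""
    return 1.0 / (1.0 + np.exp(-x))

def native_ddm_parameters(phi, min_rt):
    """Map unbounded dummy variables phi (length 5) onto native DDM
    parameters, enforcing the prior constraints listed above."""
    b = np.exp(phi[0])                      # bound height: positive
    sigma = np.exp(phi[1])                  # perturbation std: positive
    t_nd = min_rt * s(phi[2])               # non-decision time in (0, min RT)
    x0 = b * (2.0 * s(phi[3]) - 1.0)        # initial bias in (-b, b)
    # drift rate bounded in magnitude by its "ballistic" limit
    # (b + |x0|) / (min RT - T_ND):
    v = (1.0 + abs(2.0 * s(phi[3]) - 1.0)) * b \
        / (min_rt * (1.0 - s(phi[2]))) * (2.0 * s(phi[4]) - 1.0)
    return b, sigma, t_nd, x0, v
```

Any real-valued phi then respects all constraints by construction, e.g., phi = 0 yields b = σ = 1, T_ND = min(RT)/2, x0 = 0 and v = 0.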
<p>The statistical efficiency of the ensuing overcomplete approach can be evaluated by simulating RT and choice data under different settings of the DDM parameters, and then comparing estimated and simulated parameters. Below, we use such a recovery analysis to compare the overcomplete approach with standard DDM fitting procedures.<list list-type="simple">
<list-item>
<p>c. Accounting for predictable trial-by-trial RT variability.</p>
</list-item>
</list>
</p>
<p>Critically, the above overcomplete approach can be extended to ask whether trial-by-trial variations in DDM parameters explain trial-by-trial variations in observed RT, above and beyond the impact of the random perturbation term in <xref ref-type="disp-formula" rid="e7">Eq. 7</xref>. For example, one may want to assess whether predictable variations in, e.g., the drift term accurately predict variations in RT data. This kind of question underlies many recent empirical studies of human and/or animal decision making. In the context of perceptual decision making, the drift rate is assumed to derive from the strength of momentary evidence, which is controlled experimentally and varies in a trial-by-trial fashion (<xref ref-type="bibr" rid="B27">Huk and Shadlen, 2005</xref>; <xref ref-type="bibr" rid="B3">Bitzer et&#x20;al., 2014</xref>). A straightforward extension of this logic to value-based decisions implies that the drift rate should vary in proportion to the value difference between alternative options (<xref ref-type="bibr" rid="B29">Krajbich et&#x20;al., 2010</xref>; <xref ref-type="bibr" rid="B11">De Martino et&#x20;al., 2012</xref>; <xref ref-type="bibr" rid="B32">Lopez-Persem et&#x20;al., 2016</xref>). In both cases, a prediction for drift rate variations across trials is available, which is likely to induce trial-by-trial variations in choice and RT data. Let <inline-formula id="inf134">
<mml:math id="minf134">
<mml:mi>D</mml:mi>
</mml:math>
</inline-formula> be a known predictor variable, which is expected to capture trial-by-trial variations in some DDM parameter (e.g., the drift rate). One may then alter the self-consistency equation such that DDM parameters are treated as affine functions of trial-by-trial predictors (e.g., <inline-formula id="inf135">
<mml:math id="minf135">
<mml:mrow>
<mml:msub>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x225c;</mml:mo>
<mml:msub>
<mml:mi>v</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>v</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:msub>
<mml:mi>D</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>), and exploit trial-by-trial variations in response times to fit the ensuing offset and slope parameters (here, <inline-formula id="inf136">
<mml:math id="minf136">
<mml:mrow>
<mml:msub>
<mml:mi>v</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf137">
<mml:math id="minf137">
<mml:mrow>
<mml:msub>
<mml:mi>v</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>). Alternatively, one can simply set the drift rate to the predictor variable (i.e.,&#x20;assume <italic>a priori</italic> <inline-formula id="inf138">
<mml:math id="minf138">
<mml:mrow>
<mml:msub>
<mml:mi>v</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf139">
<mml:math id="minf139">
<mml:mrow>
<mml:msub>
<mml:mi>v</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>), which is currently the favored approach in the field. As we will see below, this significantly improves model identifiability for the remaining parameters. This is because trial-by-trial variations in the drift rate will accurately predict trial-by-trial variations in response time data only if the remaining parameters are correctly set. This is just one example, of course, and any prior dependency on a predictor variable can be accounted for just as easily. The critical point here is that the overcomplete approach can exploit predictable trial-by-trial variations in RT data to improve the inference on model parameters.</p>
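As an illustration, the affine parameterization of the drift rate boils down to a one-line mapping. The sketch below is our own; D would typically hold, e.g., signed evidence strengths or value differences, and the defaults reproduce the common practice of equating the drift rate with the predictor.

```python
import numpy as np

def trial_drift_rates(D, v0=0.0, v1=1.0):
    """Affine mapping from a trial-by-trial predictor D to drift rates,
    v_i = v0 + v1 * D_i. Defaults (v0 = 0, v1 = 1) set the drift rate
    equal to the predictor."""
    return v0 + v1 * np.asarray(D, dtype=float)
```

Fitting then proceeds on the offset v0 and slope v1 (or on the remaining DDM parameters when v0 and v1 are fixed a priori).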
</sec>
<sec id="s5">
<title>Parameter Recovery Analysis</title>
<p>In what follows, we use numerical simulations to evaluate the approach&#x2019;s ability to recover DDM parameters. Our parameter recovery analyses proceed as follows. First, we sample 1,000 sets of model parameters <inline-formula id="inf140">
<mml:math id="minf140">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c6;</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mn>5</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> under some arbitrary distribution. Second, for each of these parameter sets, we simulate a series of N &#x3d; 200 DDM trials according to <xref ref-type="disp-formula" rid="e2">Eq. 2</xref> above. Third, we fit the DDM to each series of simulated reaction times (200 data points) and extract parameter estimates. Last, we compare simulated and estimated parameters to each other. In particular, we measure the relative estimation error for each DDM parameter. We also quantify potential non-identifiability issues using so-called recovery matrices and the ensuing identifiability index. We note that details regarding parameter recovery analyses can be found in <xref ref-type="sec" rid="s12">Supplementary Appendix S4</xref> of this manuscript (along with definitions of the relative estimation error <inline-formula id="inf141">
<mml:math id="minf141">
<mml:mrow>
<mml:mi>R</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>E</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>, recovery matrices and identifiability index <inline-formula id="inf142">
<mml:math id="minf142">
<mml:mrow>
<mml:mi>&#x394;</mml:mi>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>).</p>
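The four steps of this recovery analysis can be sketched as a generic loop. Everything here is schematic and of our own devising: sample_phi, simulate, and fit stand in for the corresponding DDM routines, and the relative estimation error shown is a simple normalized error, not necessarily the exact definition used in Supplementary Appendix S4.

```python
import numpy as np

def recovery_analysis(sample_phi, simulate, fit, n_sets=1000, n_trials=200, seed=0):
    """Sample parameter sets, simulate RT series, fit them, and compare
    simulated with estimated parameters."""
    rng = np.random.default_rng(seed)
    phi_sim, phi_est = [], []
    for _ in range(n_sets):
        phi = sample_phi(rng)                 # step 1: draw parameters
        data = simulate(phi, n_trials, rng)   # step 2: simulate n_trials RTs
        phi_est.append(fit(data))             # step 3: invert the model
        phi_sim.append(phi)
    phi_sim, phi_est = np.array(phi_sim), np.array(phi_est)
    # step 4: relative estimation error per parameter (schematic definition)
    ree = np.abs(phi_est - phi_sim) / (np.abs(phi_sim) + 1e-8)
    return phi_sim, phi_est, ree
```

Recovery matrices would then be computed from the columns of phi_sim and phi_est, e.g., as squared partial correlations between each estimated parameter and every simulated one.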
<p>To begin with, we will focus on &#x201c;vanilla&#x201d; DDMs, because they provide a fair benchmark for parameter estimation methods. In this context, we will compare the overcomplete approach with two established methods (<xref ref-type="bibr" rid="B35">Moens and Zenon, 2017</xref>; <xref ref-type="bibr" rid="B4">Boehm et&#x20;al., 2018</xref>), namely the &#x201c;method of moments&#x201d; and the &#x201c;method of trial means&#x201d;. These methods are summarized in <xref ref-type="sec" rid="s12">Supplementary Appendixes S2, S3</xref>, respectively. In brief, the former attempts to match empirical and theoretical moments of RT data. We expect this method to perform best when DDM parameters are fixed across trials. The latter instead attempts to match raw trial-by-trial RT data to trial-by-trial theoretical RT means. This will be most reliable when DDM parameters (e.g., the drift rate) vary over trials. Note that, in all cases, we imposed the prior constraints on DDM parameters given in <italic>An Overcomplete Likelihood Approach to DDM Inversion</italic> (section b) above, along with standard normal priors on unmapped parameters <inline-formula id="inf143">
<mml:math id="minf143">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c6;</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mn>5</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>. We will therefore compare the ability of these methods to recover DDM parameters (i) when no parameter is fixed (full parameter set), (ii) when the drift rate is fixed, and (iii) when drift rates vary over trials.</p>
<p>Finally, we perform a parameter recovery analysis in the context of a generalized DDM, which includes collapsing bounds. This will serve to demonstrate the flexibility and robustness of the overcomplete approach.<list list-type="simple">
<list-item>
<p>a. Vanilla DDM: recovery analysis for the full parameter&#x20;set.</p>
</list-item>
</list>
</p>
<p>First, we compare the three approaches when all DDM parameters have to be estimated. This essentially serves as a reference point for the other recovery analyses. The ensuing recovery analysis is summarized in <xref ref-type="fig" rid="F7">Figure&#x20;7</xref> below, in terms of the comparison between simulated and estimated parameters.</p>
<fig id="F7" position="float">
<label>FIGURE 7</label>
<caption>
<p>Comparison of simulated and estimated DDM parameters (full parameter set). Left panel: Estimated parameters using the overcomplete approach (<italic>y</italic>-axis) are plotted against simulated parameters (<italic>x</italic>-axis). Each dot is a monte-carlo simulation and different colors indicate distinct parameters (blue: <inline-formula id="inf144">
<mml:math id="minf144">
<mml:mi>&#x3c3;</mml:mi>
</mml:math>
</inline-formula>, red: <inline-formula id="inf145">
<mml:math id="minf145">
<mml:mi>v</mml:mi>
</mml:math>
</inline-formula>, yellow: <inline-formula id="inf146">
<mml:math id="minf146">
<mml:mi>b</mml:mi>
</mml:math>
</inline-formula>, purple: <inline-formula id="inf147">
<mml:math id="minf147">
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, green: <inline-formula id="inf148">
<mml:math id="minf148">
<mml:mrow>
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mi>D</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>). The black dotted line indicates the identity line (perfect estimation). Middle panel: Method of moments, same format as left panel. Right panel: Method of trial means, same format as left&#x20;panel.</p>
</caption>
<graphic xlink:href="frai-04-531316-g007.tif"/>
</fig>
<p>Unsurprisingly, parameter estimates depend on the chosen estimation method, i.e., different methods exhibit distinct estimation error structures. In addition, estimated and simulated parameters vary with similar magnitudes, and no systematic estimation bias is noticeable. It turns out that, in this setting, estimation error is minimal for the method of moments, which exhibits lower error than both the overcomplete approach (mean error difference: <inline-formula id="inf149">
<mml:math id="minf149">
<mml:mrow>
<mml:mi>&#x394;</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mi>log</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>R</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>E</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.27</mml:mn>
<mml:mo>&#xb1;</mml:mo>
<mml:mn>0.03</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, p &#x3c; 10<sup>&#x2013;4</sup>, two-sided F-test) and the method of trial means (mean error difference: <inline-formula id="inf150">
<mml:math id="minf150">
<mml:mrow>
<mml:mi>&#x394;</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mi>log</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>R</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>E</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.26</mml:mn>
<mml:mo>&#xb1;</mml:mo>
<mml:mn>0.02</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, p &#x3c; 10<sup>&#x2013;4</sup>, two-sided F-test). However, the overcomplete approach and the method of trial means yield comparable estimation errors (mean error difference: <inline-formula id="inf151">
<mml:math id="minf151">
<mml:mrow>
<mml:mi>&#x394;</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mi>log</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>R</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>E</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.006</mml:mn>
<mml:mo>&#xb1;</mml:mo>
<mml:mn>0.04</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, p &#x3d; 0.88, two-sided F-test).</p>
<p>Now, although estimation errors enable a coarse comparison of methods, they do not provide any quantitative insight regarding potential non-identifiability issues. We address this using recovery matrices (see <xref ref-type="sec" rid="s12">Supplementary Appendix S4</xref>), which are shown in <xref ref-type="fig" rid="F8">Figure&#x20;8</xref>&#x20;below.</p>
<fig id="F8" position="float">
<label>FIGURE 8</label>
<caption>
<p>DDM parameter recovery matrices (full parameter set). Left panel: overcomplete approach. Middle panel: method of moments. Right panel: method of trial means. Each line shows the squared partial correlation coefficient between a given estimated parameter and each simulated parameter (across 1,000&#x20;Monte-Carlo simulations). Note that perfect recovery would exhibit a diagonal structure, where variations in each estimated parameter are only due to variations in the corresponding simulated parameter. Diagonal elements of the recovery matrix measure &#x201c;correct estimation variability&#x201d;, i.e.,&#x20;variations in the estimated parameters that are due to variations in the corresponding simulated parameter. In contrast, non-diagonal elements of the recovery matrix measure &#x201c;incorrect estimation variability&#x201d;, i.e.,&#x20;variations in the estimated parameters that are due to variations in other parameters. Strong non-diagonal elements in recovery matrices thus signal pairwise non-identifiability issues.</p>
</caption>
<graphic xlink:href="frai-04-531316-g008.tif"/>
</fig>
<p>None of the estimation methods is capable of perfectly identifying DDM parameters (except <inline-formula id="inf152">
<mml:math id="minf152">
<mml:mrow>
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mi>D</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>), i.e.,&#x20;all methods exhibit strong non-identifiability issues. In particular, variations in the perturbations&#x2019; standard deviation <inline-formula id="inf153">
<mml:math id="minf153">
<mml:mi>&#x3c3;</mml:mi>
</mml:math>
</inline-formula> are partially confused with variations in the bound&#x2019;s height <inline-formula id="inf154">
<mml:math id="minf154">
<mml:mi>b</mml:mi>
</mml:math>
</inline-formula>, and reciprocally. This is because increasing both at the same time leaves RT trial-by-trial variability unchanged. Therefore, RT produced under strong neural perturbations can be equally well explained with a small bound height (and reciprocally). Interestingly, drift rate estimates are the least reliable: though their amount of &#x201c;correct variability&#x201d; is decent for the method of moments (45.3%), it is very low for both the overcomplete approach (5.3%) and the method of trial means (7.5%). If anything, non-identifiability issues are strongest for the overcomplete approach, which also exhibits weak &#x201c;correct variability&#x201d; for initial conditions (5.1%).<list list-type="simple">
<list-item>
<p>b. Vanilla DDM: recovery analysis with a fixed drift&#x20;rate.</p>
</list-item>
</list>
</p>
<p>In fact, we expect non-identifiability issues of this sort, which were already highlighted in early DDM studies (<xref ref-type="bibr" rid="B42">Ratcliff, 1978</xref>). Note that this basic form of non-identifiability is easily disclosed from the self-consistency equation, which is invariant to a rescaling of all DDM parameters (except <inline-formula id="inf155">
<mml:math id="minf155">
<mml:mrow>
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mi>D</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>). In other words, response times are left unchanged if all these parameters are rescaled by the same amount. Although this problematic invariance would disappear if a single DDM parameter were fixed rather than fitted, other non-identifiability issues may still hamper DDM parameter recovery. To test this, we re-performed the above parameter recovery analysis, but this time informing estimation methods about the drift rate, which was set to its simulated value. We note that such an arbitrary reduction of the parameter space is routinely performed, as it was already suggested in seminal empirical applications of the DDM (<xref ref-type="bibr" rid="B42">Ratcliff, 1978</xref>). <xref ref-type="fig" rid="F9">Figure&#x20;9</xref> below summarizes the ensuing comparison between simulated and estimated parameters.</p>
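This scale invariance is easy to verify numerically: in a Euler-discretized simulation sharing the same noise draws, rescaling v, σ, b, and x0 by a common factor scales every trajectory by that factor and leaves first-passage times untouched. The simulator below is a minimal sketch of our own (not the paper's fitting code), ignoring the non-decision time, which is the one parameter the rescaling spares.

```python
import numpy as np

def simulate_rts(v, sigma, b, x0, n_trials=50, dt=1e-3, t_max=5.0, seed=1):
    """Euler-Maruyama first-passage times of a vanilla DDM (T_ND = 0).
    Trials that do not hit a bound before t_max are returned as NaN."""
    rng = np.random.default_rng(seed)
    rts = np.full(n_trials, np.nan)
    for i in range(n_trials):
        x = x0
        for step in range(int(t_max / dt)):
            x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            if abs(x) >= b:  # bound hit: record decision time
                rts[i] = (step + 1) * dt
                break
    return rts
```

With a shared seed, simulate_rts(0.5, 1.0, 1.0, 0.2) and simulate_rts(1.0, 2.0, 2.0, 0.4) return identical RT series, which is exactly the invariance at stake here.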
<fig id="F9" position="float">
<label>FIGURE 9</label>
<caption>
<p>Comparison of simulated and estimated DDM parameters (fixed drift rates). Same format as <xref ref-type="fig" rid="F7">Figure&#x20;7</xref>, except for the color code in upper panels (blue: <inline-formula id="inf156">
<mml:math id="minf156">
<mml:mi>&#x3c3;</mml:mi>
</mml:math>
</inline-formula>, red: <inline-formula id="inf157">
<mml:math id="minf157">
<mml:mi>b</mml:mi>
</mml:math>
</inline-formula>, yellow: <inline-formula id="inf158">
<mml:math id="minf158">
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, purple: <inline-formula id="inf159">
<mml:math id="minf159">
<mml:mrow>
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mi>D</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>).</p>
</caption>
<graphic xlink:href="frai-04-531316-g009.tif"/>
</fig>
<p>Comparing <xref ref-type="fig" rid="F7">Figures 7</xref>, <xref ref-type="fig" rid="F9">9</xref> provides a clear insight regarding the impact of reducing the DDM&#x2019;s parameter space. In brief, estimation errors decrease for all methods, which seem to provide much more reliable parameter estimates. The method of moments still yields the most reliable parameter estimates, exhibiting lower error than the overcomplete approach (mean error difference: <inline-formula id="inf160">
<mml:math id="minf160">
<mml:mrow>
<mml:mi>&#x394;</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mi>log</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>R</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>E</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.21</mml:mn>
<mml:mo>&#xb1;</mml:mo>
<mml:mn>0.03</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, p &#x3d; 0.04, two-sided F-test) and the method of trial means (mean error difference: <inline-formula id="inf161">
<mml:math id="minf161">
<mml:mrow>
<mml:mi>&#x394;</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mi>log</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>R</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>E</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.53</mml:mn>
<mml:mo>&#xb1;</mml:mo>
<mml:mn>0.03</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, p &#x3c; 10<sup>&#x2013;4</sup>, two-sided F-test). In addition, the overcomplete approach yields lower estimation error than the method of trial means (mean error difference: <inline-formula id="inf162">
<mml:math id="minf162">
<mml:mrow>
<mml:mi>&#x394;</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mi>log</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>R</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>E</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.33</mml:mn>
<mml:mo>&#xb1;</mml:mo>
<mml:mn>0.04</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, p &#x3c; 10<sup>&#x2013;4</sup>, two-sided F-test). The reason why the method of trial means performs worst here is that it is blind to trial-by-trial variability in the data (beyond mean RT differences between the two decision outcomes). This is not the case, however, for the two other methods.</p>
<p>We then evaluated non-identifiability issues using recovery matrices, which are summarized in <xref ref-type="fig" rid="F10">Figure&#x20;10</xref>&#x20;below.</p>
<fig id="F10" position="float">
<label>FIGURE 10</label>
<caption>
<p>DDM parameter recovery matrices (fixed drift rates). Same format as <xref ref-type="fig" rid="F8">Figure&#x20;8</xref>, except that recovery matrices do not include the line that corresponds to the drift rate estimates. Note, however, that we still account for variations in the remaining estimated parameters that are attributable to variations in simulated drift&#x20;rates.</p>
</caption>
<graphic xlink:href="frai-04-531316-g010.tif"/>
</fig>
<p>
<xref ref-type="fig" rid="F10">Figure&#x20;10</xref> clearly demonstrates an overall improvement in parameter identifiability (compare to <xref ref-type="fig" rid="F8">Figure&#x20;8</xref>). In brief, most parameters are now identifiable, at least for the method of moments (which clearly performs best) and the overcomplete approach. Nevertheless, some weaker non-identifiability issues still remain, even when fixing the drift rate to its simulated value. For example, the overcomplete approach and the method of trial means still confuse, to some extent, the bound&#x2019;s height with the perturbations&#x2019; standard deviation. More precisely, <inline-formula id="inf163">
<mml:math id="minf163">
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:math>
</inline-formula> shows unacceptably weak &#x201c;correct variations&#x201d; (overcomplete approach: 12.3%, method of trial means: 2.7%), when compared to &#x201c;incorrect variations&#x201d; due to the bound&#x2019;s height (overcomplete approach: 12.4%, method of trial means: 14.3%). Note that this does not hold for the method of moments, for which <inline-formula id="inf164">
<mml:math id="minf164">
<mml:mrow>
<mml:mover accent="true">
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:math>
</inline-formula> shows strong &#x201c;correct variations&#x201d; (30.2%). Having said this, even the method of moments exhibits partial non-identifiability issues, in particular between the perturbations&#x2019; standard deviation and the drift rate (incorrect variations:&#x20;4.1%).</p>
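A recovery matrix of this kind can be sketched as follows. The construction below, which regresses each estimated parameter on all simulated parameters across Monte-Carlo simulations and normalizes the absolute standardized regression weights so that each column sums to one, is an assumed reading of the procedure (the exact construction is given in the Supplementary Appendix). Diagonal entries then play the role of &#x201c;correct variations&#x201d; and off-diagonal entries of &#x201c;incorrect variations&#x201d;.

```python
import numpy as np

def recovery_matrix(theta_sim, theta_est):
    """Entry (i, j): share of variation in the j-th estimated parameter
    attributable to the i-th simulated parameter, from a linear regression
    across Monte-Carlo simulations. Columns sum to one; diagonal entries
    play the role of "correct variations"."""
    n, p = theta_sim.shape
    X = np.column_stack([np.ones(n), theta_sim])
    R = np.zeros((p, p))
    for j in range(p):
        coef, *_ = np.linalg.lstsq(X, theta_est[:, j], rcond=None)
        contrib = np.abs(coef[1:]) * theta_sim.std(axis=0)  # standardized weights
        R[:, j] = contrib / (contrib.sum() + 1e-12)
    return R

# Toy check: estimates that track their own simulated parameter (plus noise)
# should yield a diagonal-dominant recovery matrix.
rng = np.random.default_rng(0)
theta_sim = rng.normal(0.0, 1.0, size=(500, 4))
theta_est = theta_sim + rng.normal(0.0, 0.1, size=(500, 4))
R = recovery_matrix(theta_sim, theta_est)
```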
<p>We note that fixing another DDM parameter, e.g., the noise&#x2019;s standard deviation <inline-formula id="inf165">
<mml:math id="minf165">
<mml:mi>&#x3c3;</mml:mi>
</mml:math>
</inline-formula> (instead of <inline-formula id="inf166">
<mml:math id="minf166">
<mml:mi>&#x3bd;</mml:mi>
</mml:math>
</inline-formula>), would not change the relative merits of the estimation methods in terms of parameter recovery. In other words, the above results are representative of the impact of fixing any DDM parameter. But situations where the drift rate is fixed can be directly compared with situations where one attempts to exploit predictable trial-by-trial variations in drift rates, which is the focus of the next section.<list list-type="simple">
<list-item>
<p>c. Vanilla DDM: recovery analysis with varying drift&#x20;rates.</p>
</list-item>
</list>
</p>
<p>Now, accounting for predictable trial-by-trial variations in model parameters may, in principle, improve model identifiability. This is because the net effect of each DDM parameter depends upon the setting of other parameters. Let us assume, for example, that the drift rate varies across trials according to some predictor variable (e.g., the relative evidence strength of alternative options in the context of perceptual decision making). The impact of other DDM parameters will not be the same, depending on whether the drift rate is high or low. In turn, there are fewer settings of these parameters that can predict trial-by-trial variations in RT data from variations in drift rate. To test this, we re-performed the recovery analysis, this time setting the drift rate according to a varying predictor variable, which is assumed to be known. The ensuing comparison between simulated and estimated parameters is summarized in <xref ref-type="fig" rid="F11">Figure&#x20;11</xref>&#x20;below.</p>
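As an illustration, simulating vanilla-DDM trials whose drift rate is locked to a known trial-by-trial predictor takes only a few lines (an Euler-Maruyama sketch; all parameter values, and the hypothetical predictor u, are illustrative assumptions rather than the paper's simulation settings).

```python
import numpy as np

def simulate_ddm_trial(nu, sigma, b, x0, t_nd, dt=1e-3, t_max=10.0, rng=None):
    """Euler-Maruyama simulation of one vanilla-DDM trial.

    nu: drift rate, sigma: perturbations' standard deviation, b: bound's
    height, x0: initial condition, t_nd: non-decision time.
    Returns (choice, rt); choice is +1/-1 (0 and rt=nan if no bound is hit).
    """
    rng = np.random.default_rng() if rng is None else rng
    x, t = x0, 0.0
    while t < t_max:
        x += nu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if abs(x) >= b:
            return (1 if x > 0 else -1), t + t_nd
    return 0, float("nan")

# Drift rates vary across trials with a known predictor u
# (e.g., relative evidence strength), here nu = 2 * u.
rng = np.random.default_rng(0)
u = rng.uniform(-1.0, 1.0, size=200)
trials = [simulate_ddm_trial(2.0 * ui, sigma=1.0, b=1.5, x0=0.0,
                             t_nd=0.3, rng=rng) for ui in u]
choices, rts = (np.array(z) for z in zip(*trials))
```

Because the drift rate is tied to the predictor, simulated choices correlate with u, which is exactly the structure a recovery analysis with varying drift rates can exploit.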
<fig id="F11" position="float">
<label>FIGURE 11</label>
<caption>
<p>Comparison of simulated and estimated DDM parameters (varying drift rates). Same format as <xref ref-type="fig" rid="F9">Figure&#x20;9</xref>.</p>
</caption>
<graphic xlink:href="frai-04-531316-g011.tif"/>
</fig>
<p>On the one hand, the estimation error has now been strongly reduced, at least for the overcomplete approach and the method of trial means. On the other hand, estimation error has increased for the method of moments. This is because the method of moments confuses trial-by-trial variations that are caused by variations in drift rates with those that arise from the DDM&#x2019;s stochastic &#x201c;neural&#x201d; perturbation term. This is not the case for the overcomplete approach and the method of trial means. In turn, the method of moments now shows much higher estimation error than the overcomplete approach (mean error difference: <inline-formula id="inf167">
<mml:math id="minf167">
<mml:mrow>
<mml:mi>&#x394;</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mi>log</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>R</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>E</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.55</mml:mn>
<mml:mo>&#xb1;</mml:mo>
<mml:mn>0.03</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, p &#x3c; 10<sup>&#x2013;4</sup>, two-sided F-test) or the method of trial means (mean error difference: <inline-formula id="inf168">
<mml:math id="minf168">
<mml:mrow>
<mml:mi>&#x394;</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mi>log</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>R</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>E</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.83</mml:mn>
<mml:mo>&#xb1;</mml:mo>
<mml:mn>0.04</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, p &#x3c; 10<sup>&#x2013;4</sup>, two-sided F-test). Note that the method of trial means now performs slightly better than the overcomplete approach (mean error difference: <inline-formula id="inf169">
<mml:math id="minf169">
<mml:mrow>
<mml:mi>&#x394;</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mi>log</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>R</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>E</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.28</mml:mn>
<mml:mo>&#xb1;</mml:mo>
<mml:mn>0.03</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, p &#x3d; 0.04, two-sided F-test).</p>
<p>
<xref ref-type="fig" rid="F12">Figure&#x20;12</xref> below then summarizes the evaluation of non-identifiability issues, in terms of recovery matrices.</p>
<fig id="F12" position="float">
<label>FIGURE 12</label>
<caption>
<p>DDM parameter recovery matrices (varying drift rates). Same format as <xref ref-type="fig" rid="F10">Figure&#x20;10</xref>, except that fixed drift rates are replaced by their average across DDM trials.</p>
</caption>
<graphic xlink:href="frai-04-531316-g012.tif"/>
</fig>
<p>For the overcomplete approach and the method of trial means, <xref ref-type="fig" rid="F12">Figure&#x20;12</xref> shows a further improvement in parameter identifiability (compare to <xref ref-type="fig" rid="F8">Figures 8</xref>, <xref ref-type="fig" rid="F10">10</xref>). For these two methods, all parameters are now well identifiable (&#x201c;correct variations&#x201d; always exceed 67.2%), and no parameter estimate is strongly influenced by other simulated parameters. This is a simple example of the gain in statistical efficiency that results from exploiting known trial-by-trial variations in DDM parameters. The situation is quite different for the method of moments, which exhibits clear non-identifiability issues for all parameters except the non-decision time. In particular, the bound&#x2019;s height is frequently confused with the perturbations&#x2019; standard deviation (20.3% of &#x201c;incorrect variations&#x201d;), the estimate of which has become unreliable (only 17.6% of &#x201c;correct variations&#x201d;).</p>
<p>We note that the gain in parameter recovery that obtains from exploiting predictable trial-by-trial variations in drift rates (with either the method of trial means or the overcomplete approach) does not generalize to situations where drift rates are defined in terms of an affine transformation of some predictor variable (see <italic>An Overcomplete Likelihood Approach to DDM Inversion</italic> section, point c above). This is because the ensuing offset and slope parameters would then need to be estimated along with the other native DDM parameters. In turn, this would reintroduce identifiability issues similar to, or worse than, those arising when the full set of parameters has to be estimated (cf. <italic>An Overcomplete Likelihood Approach to DDM Inversion</italic> section, point a). This is why modelers typically fix another DDM parameter, e.g., the standard deviation <inline-formula id="inf170">
<mml:math id="minf170">
<mml:mi>&#x3c3;</mml:mi>
</mml:math>
</inline-formula> (<xref ref-type="bibr" rid="B44">Ratcliff et&#x20;al., 2016</xref>). But the risk of drawing erroneous conclusions, e.g., blindly interpreting differences due to <inline-formula id="inf171">
<mml:math id="minf171">
<mml:mi>&#x3c3;</mml:mi>
</mml:math>
</inline-formula> in terms of differences in other DDM parameters, should invite modelers to be cautious with this kind of strategy.<list list-type="simple">
<list-item>
<p>d. Generalized DDM: recovery analysis with collapsing bounds.</p>
</list-item>
</list>
</p>
<p>We now consider generalized DDMs that include collapsing bounds. More precisely, we will consider a DDM where the bound <inline-formula id="inf172">
<mml:math id="minf172">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>b</mml:mi>
<mml:mo>&#x2322;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>&#x3c9;</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> is exponentially decaying in time, i.e.: <inline-formula id="inf173">
<mml:math id="minf173">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>b</mml:mi>
<mml:mo>&#x2322;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mi>&#x3c9;</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>exp</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c9;</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>&#x3c9;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>, where <inline-formula id="inf174">
<mml:math id="minf174">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c9;</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf175">
<mml:math id="minf175">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c9;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> control the bound&#x2019;s initial height and decay rate, respectively. This DDM variant reduces to the vanilla DDM when <inline-formula id="inf176">
<mml:math id="minf176">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c9;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>&#x2248;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, in which case the parameter <inline-formula id="inf177">
<mml:math id="minf177">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c9;</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is formally identical to the vanilla bound&#x2019;s height <inline-formula id="inf178">
<mml:math id="minf178">
<mml:mi>b</mml:mi>
</mml:math>
</inline-formula>. When <inline-formula id="inf179">
<mml:math id="minf179">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c9;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>&#x2260;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> however, collapsing bounds induce a causal dependency between choice accuracy and response times that cannot be captured by the vanilla DDM (<xref ref-type="bibr" rid="B61">Zhang, 2012</xref>; <xref ref-type="bibr" rid="B62">Zhang et&#x20;al., 2014</xref>; <xref ref-type="bibr" rid="B26">Hawkins et&#x20;al., 2015</xref>; <xref ref-type="bibr" rid="B51">Tajima et&#x20;al., 2016</xref>; <xref ref-type="bibr" rid="B54">Voskuilen et&#x20;al., 2016</xref>).</p>
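As a concrete illustration, here is a minimal Python sketch of such a generalized DDM with exponentially collapsing bounds, simulated by Euler-Maruyama discretization (the parameter values are illustrative assumptions, not the paper's settings).

```python
import numpy as np

def bound(t, w0, w1):
    """Exponentially collapsing bound: b(t) = exp(w0 - w1 * t)."""
    return np.exp(w0 - w1 * t)

def simulate_collapsing_ddm(nu, sigma, w0, w1, x0=0.0, t_nd=0.3,
                            dt=1e-3, t_max=10.0, rng=None):
    """One generalized-DDM trial with time-dependent bounds.

    Reduces to the vanilla DDM (bound height exp(w0)) when w1 = 0. With
    w1 > 0, the evidence demand shrinks over time, so late decisions
    require less accumulated evidence: this couples accuracy to RT.
    """
    rng = np.random.default_rng() if rng is None else rng
    x, t = x0, 0.0
    while t < t_max:
        x += nu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if abs(x) >= bound(t, w0, w1):
            return (1 if x > 0 else -1), t + t_nd
    return 0, float("nan")

choice, rt = simulate_collapsing_ddm(nu=0.5, sigma=1.0, w0=0.4, w1=0.5,
                                     rng=np.random.default_rng(3))
```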
<p>In what follows, we report the results of a recovery analysis in which data were simulated under the above generalized DDM (with drift rates varying across trials). We note that, under such a generalized DDM variant, no analytical solution is available to derive RT moments. Applying the method of moments or the method of trial means to this variant thus involves either sampling schemes or numerical solvers for the underlying Fokker-Planck equation (<xref ref-type="bibr" rid="B49">Shinn et&#x20;al., 2020</xref>). However, the computational cost of deriving trial-by-trial estimates of RT moments precludes routine data analysis using these methods, which is why most model-based studies are currently restricted to the vanilla DDM (<xref ref-type="bibr" rid="B14">Fengler et&#x20;al., 2020</xref>). In turn, we do not consider such computationally intensive extensions of the method of moments and/or the method of trial means here. In this setting, these two methods thus do not rely on the correct generative model, and the ensuing estimation errors and related identifiability issues should be interpreted in terms of (lack of) robustness to simplifying modeling assumptions. This is not the case for the overcomplete approach, which bypasses this computational bottleneck and hence generalizes to such DDM variants at no additional computational cost.</p>
<p>
<xref ref-type="fig" rid="F13">Figure&#x20;13</xref> below summarizes the ensuing comparison between simulated and estimated parameters.</p>
<fig id="F13" position="float">
<label>FIGURE 13</label>
<caption>
<p>Comparison of simulated and estimated DDM parameters (collapsing bounds). Same format as <xref ref-type="fig" rid="F9">Figure&#x20;9</xref>, except that the left panel includes an additional parameter (<inline-formula id="inf180">
<mml:math id="minf180">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c9;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>: green color), which controls the decay rate of DDM bounds.</p>
</caption>
<graphic xlink:href="frai-04-531316-g013.tif"/>
</fig>
<p>In brief, the overcomplete approach performs about as well as it does with non-collapsing bounds (see <xref ref-type="fig" rid="F11">Figure&#x20;11</xref>). As expected, however, the method of moments and the method of trial means incur some reliability loss. Quantitatively, the overcomplete approach shows much smaller estimation error than the method of moments (mean error difference: <inline-formula id="inf181">
<mml:math id="minf181">
<mml:mrow>
<mml:mi>&#x394;</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mi>log</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>R</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>E</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.88</mml:mn>
<mml:mo>&#xb1;</mml:mo>
<mml:mn>0.05</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, p &#x3c; 10<sup>&#x2013;4</sup>, two-sided F-test) or the method of trial means (mean error difference: <inline-formula id="inf182">
<mml:math id="minf182">
<mml:mrow>
<mml:mi>&#x394;</mml:mi>
<mml:mo>&#x2061;</mml:mo>
<mml:mi>log</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>R</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>E</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0.61</mml:mn>
<mml:mo>&#xb1;</mml:mo>
<mml:mn>0.05</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, p &#x3c; 10<sup>&#x2013;4</sup>, two-sided F-test).</p>
<p>
<xref ref-type="fig" rid="F14">Figure&#x20;14</xref> below then summarizes the ensuing evaluation of non-identifiability issues, in terms of recovery matrices.</p>
<fig id="F14" position="float">
<label>FIGURE 14</label>
<caption>
<p>DDM parameter recovery matrices (collapsing bounds). Same format as <xref ref-type="fig" rid="F12">Figure&#x20;12</xref>, except that recovery matrices now also include the bound&#x2019;s decay rate parameter (<inline-formula id="inf183">
<mml:math id="minf183">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c9;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>), in addition to the bound&#x2019;s initial height (<inline-formula id="inf184">
<mml:math id="minf184">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c9;</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>).</p>
</caption>
<graphic xlink:href="frai-04-531316-g014.tif"/>
</fig>
<p>For the overcomplete approach, <xref ref-type="fig" rid="F14">Figure&#x20;14</xref> shows parameter identifiability similar to that of <xref ref-type="fig" rid="F12">Figure&#x20;12</xref>. In brief, all parameters of the generalized DDM are identifiable from each other (the amount of &#x201c;correct variations&#x201d; is 33.8% for the bound&#x2019;s decay parameter, and greater than 75.5% for all other parameters). This implies that including collapsing bounds does not impact parameter recovery with this method. This is not the case for the two other methods, however. In particular, the method of moments confuses the perturbations&#x2019; standard deviation with the bound&#x2019;s decay rate (7.2% &#x201c;correct variations&#x201d; against 20.8% &#x201c;incorrect variations&#x201d;). This is also true, though to a lesser extent, for the method of trial means (31.6% &#x201c;correct variations&#x201d; against 5.4% &#x201c;incorrect variations&#x201d;). Again, these identifiability issues are expected, given that neither the method of moments nor the method of trial means (or, more properly, the variant that we use here) relies on the correct generative model. Perhaps more surprising is the fact that these methods now exhibit non-identifiability issues w.r.t. parameters that they can, in principle, estimate. This exemplifies the sorts of interpretation issues that arise when relying on methods that neglect decision-relevant mechanisms. We will comment on this and related issues further in the Discussion section below.<list list-type="simple">
<list-item>
<p>e. Summary of recovery analyses.</p>
</list-item>
</list>
</p>
<p>
<xref ref-type="fig" rid="F15">Figure&#x20;15</xref> below summarizes all our recovery analyses above, in terms of the average (log-) relative estimation error <inline-formula id="inf185">
<mml:math id="minf185">
<mml:mrow>
<mml:mi>R</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>E</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> and the parameter identifiability index <inline-formula id="inf186">
<mml:math id="minf186">
<mml:mrow>
<mml:mi>&#x394;</mml:mi>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> (cf. <xref ref-type="sec" rid="s12">Supplementary Appendix&#x20;S4</xref>).</p>
<fig id="F15" position="float">
<label>FIGURE 15</label>
<caption>
<p>Summary of DDM parameter recovery analyses. Left panel: The mean log relative estimation error <inline-formula id="inf187">
<mml:math id="minf187">
<mml:mrow>
<mml:mi>R</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>E</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> (<italic>y</italic>-axis) is shown for all methods (OcA: overcomplete approach, MoM: method of moments, MoTM: method of trial means), and all simulation series (black: full parameter set, blue: fixed drift rate, red: varying drift rates, green: collapsing bounds). Right panel: The mean identifiability index <inline-formula id="inf188">
<mml:math id="minf188">
<mml:mrow>
<mml:mi>&#x394;</mml:mi>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> (<italic>y</italic>-axis) is shown for all methods and all simulation series (same format as left panel). Note that the situation in which the full parameter set has to be estimated serves as a reference point. To enable a fair comparison, both the estimation error and the identifiability index are computed for the parameter subset that is common to all simulation series (i.e., the perturbations&#x2019; standard deviation <inline-formula id="inf189">
<mml:math id="minf189">
<mml:mi>&#x3c3;</mml:mi>
</mml:math>
</inline-formula>, the bound&#x2019;s height <inline-formula id="inf190">
<mml:math id="minf190">
<mml:mi>b</mml:mi>
</mml:math>
</inline-formula>, the initial condition <inline-formula id="inf191">
<mml:math id="minf191">
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, and the non-decision time <inline-formula id="inf192">
<mml:math id="minf192">
<mml:mrow>
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mi>D</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>).</p>
</caption>
<graphic xlink:href="frai-04-531316-g015.tif"/>
</fig>
<p>
<xref ref-type="fig" rid="F15">Figure&#x20;15</xref> enables a visual comparison of the impact of simulation series on parameter estimation methods. As expected, for the method of moments and the method of trial means, the most favorable situation (in terms of estimation error and identifiability) is when the drift rate is fixed (method of moments) and when it varies over trials (method of trial means), respectively. This is also when these methods perform best in relation to each other. All other situations are detrimental, and yield estimation error and identifiability issues similar to, or worse than, those observed when the full parameter set has to be estimated. This is not the case for the overcomplete approach, which exhibits estimation error and/or identifiability comparable to the best method in all situations, except for collapsing bounds, where it strongly outperforms the two other methods. Here again, we note that parameter recovery for generalized DDMs may, in principle, be improved for the method of moments and/or the method of trial means. But extending these methods to generalized DDMs is beyond the scope of the current&#x20;work.</p>
</sec>
<sec id="s6">
<title>Application to a Value-Based Decision Making Experiment</title>
<p>To demonstrate the above overcomplete likelihood approach, we apply it to data acquired in the context of a value-based decision making experiment (<xref ref-type="bibr" rid="B32">Lopez-Persem et&#x20;al., 2016</xref>). This experiment was designed to understand how option values are compared when making a choice. In particular, it tested whether agents may have prior preferences that create default policies and shape the neural comparison process.</p>
<p>Prior to the choice session, participants (n &#x3d; 24) rated the likeability of 432 items belonging to three different domains (food, music, magazines). Each domain included four categories of 36 items. At that time, participants were unaware of these categories. During the choice session, subjects performed a series of choices between two items, knowing that one choice in each domain would be randomly selected at the end of the experiment and that they would stay in the lab for another 15&#xa0;min to enjoy their reward (listening to the selected music, eating the selected food and reading the selected magazine). Trials were blocked in a series of nine choices between items belonging to the same two categories within the same domain. The two categories were announced at the beginning of the block, such that subjects could form a prior or "default" preference (although they were not explicitly asked to do so). We quantified this prior preference as the difference between mean likeability ratings (across all items within each of the two categories). In what follows, we refer to the "default" option as the choice option that belonged to the favored category. Each choice can then be described in terms of choosing between the default and the alternative option.</p>
<p>
<xref ref-type="fig" rid="F16">Figure&#x20;16</xref> below summarizes the main effects of a bias toward the default option (i.e.,&#x20;the option belonging to the favored category) in both choice and response time, above and beyond the effect of individual item values.</p>
<fig id="F16" position="float">
<label>FIGURE 16</label>
<caption>
<p>Evidence for choice and RT biases in the default/alternative frame. Left: Probability of choosing the default option (<italic>y</italic>-axis) is plotted as a function of decision value V<sub>def</sub>-V<sub>alt</sub> (<italic>x</italic>-axis), divided into 10 bins. Values correspond to likeability ratings given by the subject prior to choice session. For each participant, the choice bias was defined as the difference between chance level (50%) and the observed probability of choosing the default option for a null decision value (i.e.,&#x20;when V<sub>def</sub> &#x3d; V<sub>alt</sub>). Right: Response time RT (<italic>y</italic>-axis) is plotted as a function of the absolute decision value &#x7c;V<sub>def</sub>-V<sub>alt</sub>&#x7c; (<italic>x</italic>-axis) divided into 10 bins, separately for trials in which the default option was chosen (black) or not (red). For each participant, the RT bias was defined as the difference between the RT intercepts (when V<sub>def</sub> &#x3d; V<sub>alt</sub>) observed for each choice outcome.</p>
</caption>
<graphic xlink:href="frai-04-531316-g016.tif"/>
</fig>
<p>A simple random-effects analysis based upon logistic regression shows that the probability of choosing the default option significantly increases with decision value, i.e., the difference V<sub>def</sub>-V<sub>alt</sub> between the default and alternative option values (t&#x20;&#x3d;&#x20;8.4, dof &#x3d; 23, p &#x3c; 10<sup>&#x2013;4</sup>). In addition, choice bias is significant at the group level (t &#x3d; 8.7, dof &#x3d; 23, p &#x3c; 10<sup>&#x2013;4</sup>). Similarly, RT significantly decreases with absolute decision value &#x7c;V<sub>def</sub>-V<sub>alt</sub>&#x7c; (t &#x3d; 8.7, dof &#x3d; 23, p &#x3c; 10<sup>&#x2013;4</sup>), and RT bias is significant at the group level (t &#x3d; 7.4, dof &#x3d; 23, p &#x3c; 10<sup>&#x2013;4</sup>).</p>
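The choice-bias part of this analysis can be sketched as follows: regress default choices on decision value and read the bias off the intercept (a positive intercept means the default option is chosen above chance when V_def = V_alt). The gradient-ascent fit and the synthetic generating values (bias 0.8, value sensitivity 1.5) are illustrative assumptions, not the paper's analysis code.

```python
import numpy as np

def fit_logistic(dv, y, lr=0.1, n_iter=2000):
    """Plain gradient-ascent logistic regression of choice on decision value.

    Model: P(choose default) = sigmoid(beta0 + beta1 * dv), where dv is
    the decision value V_def - V_alt. beta0 > 0 captures a choice bias.
    """
    X = np.column_stack([np.ones_like(dv), dv])
    beta = np.zeros(2)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        beta += lr * X.T @ (y - p) / len(y)  # average log-likelihood gradient
    return beta  # [bias, value sensitivity]

# Synthetic single-subject example with a positive default bias.
rng = np.random.default_rng(1)
dv = rng.normal(0.0, 1.0, size=500)
p_true = 1.0 / (1.0 + np.exp(-(0.8 + 1.5 * dv)))  # true bias = 0.8
y = (rng.uniform(size=500) < p_true).astype(float)
beta = fit_logistic(dv, y)
```

Fitting one such regression per participant and testing the intercepts against zero across participants gives the random-effects choice-bias test described above.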
<p>To interpret these results, we fitted the DDM using the above overcomplete approach, encoding the choice either (i) in terms of default versus alternative option (i.e.,&#x20;as is implicit on <xref ref-type="fig" rid="F16">Figure&#x20;16</xref>) or (ii) in terms of right option versus left option. In what follows, we refer to the former choice frame as the &#x201c;default/alternative&#x201d; frame, and to the latter as the &#x201c;native&#x201d; frame. In both cases, the drift rate of each choice trial was set to the corresponding decision value (either V<sub>def</sub>-V<sub>alt</sub> or V<sub>right</sub>-V<sub>left</sub>). It turns out that within-subject estimates of <inline-formula id="inf193">
<mml:math id="minf193">
<mml:mi>&#x3c3;</mml:mi>
</mml:math>
</inline-formula>, <inline-formula id="inf194">
<mml:math id="minf194">
<mml:mi>b</mml:mi>
</mml:math>
</inline-formula> and <inline-formula id="inf195">
<mml:math id="minf195">
<mml:mrow>
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mi>D</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> do not depend upon the choice frame. More precisely, the cross-subjects correlation of these estimates between the two choice frames is significant in all three cases (<inline-formula id="inf196">
<mml:math id="minf196">
<mml:mi>&#x3c3;</mml:mi>
</mml:math>
</inline-formula>: r &#x3d; 0.76, p &#x3c; 10<sup>&#x2013;4</sup>; <inline-formula id="inf197">
<mml:math id="minf197">
<mml:mi>b</mml:mi>
</mml:math>
</inline-formula>: r &#x3d; 0.82, p &#x3c; 10<sup>&#x2013;4</sup>; <inline-formula id="inf198">
<mml:math id="minf198">
<mml:mrow>
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mi>D</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>: r &#x3d; 0.94, p &#x3c; 10<sup>&#x2013;4</sup>). This implies that inter-individual differences in <inline-formula id="inf199">
<mml:math id="minf199">
<mml:mi>&#x3c3;</mml:mi>
</mml:math>
</inline-formula>, <inline-formula id="inf200">
<mml:math id="minf200">
<mml:mi>b</mml:mi>
</mml:math>
</inline-formula> and <inline-formula id="inf201">
<mml:math id="minf201">
<mml:mrow>
<mml:msub>
<mml:mi>T</mml:mi>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mi>D</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> can be robustly identified, irrespective of the choice frame. However, the between-frame correlation is not significant for the initial bias <inline-formula id="inf202">
<mml:math id="minf202">
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> (r &#x3d; 0.29, p &#x3d; 0.17). In addition, the initial bias is significant at the group level for the default/alternative frame (F &#x3d; 45.2, dof &#x3d; [1,23], p &#x3c; 10<sup>&#x2013;4</sup>) but not for the native frame (F &#x3d; 2.36, dof &#x3d; [1,23], p &#x3d; 0.14). In brief, the two choice frames only differ in terms of the underlying initial bias, which is only revealed in the default/alternative&#x20;frame.</p>
<p>Now, we expect, from model simulations, that the presence of an initial bias induces both a choice bias, and a reduction of response times for default choices when compared to alternative choices (cf. upper-left and lower-right panels in <xref ref-type="fig" rid="F1">Figure&#x20;1</xref>). The fact that <inline-formula id="inf203">
<mml:math id="minf203">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is significant in the default/alternative frame thus explains the observed choice and RT biases shown on <xref ref-type="fig" rid="F16">Figure&#x20;16</xref>. But do inter-individual differences in <inline-formula id="inf204">
<mml:math id="minf204">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> predict inter-individual differences in observed choice and RT biases? The corresponding statistical relationships are summarized on <xref ref-type="fig" rid="F17">Figure&#x20;17</xref>&#x20;below.</p>
<fig id="F17" position="float">
<label>FIGURE 17</label>
<caption>
<p>Model-based analyses of choice and RT data. Left: For each participant, the observed choice bias (<italic>y</italic>-axis) is plotted as a function of the initial bias estimate <inline-formula id="inf205">
<mml:math id="minf205">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>&#x5e;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> in the default/alternative frame (<italic>x</italic>-axis). Right: Same for the observed RT&#x20;bias.</p>
</caption>
<graphic xlink:href="frai-04-531316-g017.tif"/>
</fig>
<p>One can see that both pairs of variables are statistically related (choice bias: r &#x3d; 0.70, p &#x3c; 10<sup>&#x2013;4</sup>; RT bias: r &#x3d; 0.44, p &#x3d; 0.03). This is important, because it provides further evidence in favor of the hypothesis that people&#x27;s covert decision frame facilitates the default option. Note that this could not be shown using the method of moments or the method of trial means, which were not able to capture these inter-individual differences (see <xref ref-type="sec" rid="s12">Supplementary Appendix S7</xref> for details).</p>
<p>Finally, can we exploit model fits to provide a normative argument for why the brain favors a biased choice frame? Recall that, if properly set, the DDM can implement the optimal speed-accuracy tradeoff inherent in making online value-based decisions (<xref ref-type="bibr" rid="B51">Tajima et&#x20;al., 2016</xref>). Here, it may seem that the presence of an initial bias would induce a gain in decision speed that would be overcompensated by the ensuing loss of accuracy. But in fact, the net tradeoff between decision speed and accuracy depends upon how the system sets the bound&#x27;s height <inline-formula id="inf206">
<mml:math id="minf206">
<mml:mi>b</mml:mi>
</mml:math>
</inline-formula>. This is because <inline-formula id="inf207">
<mml:math id="minf207">
<mml:mi>b</mml:mi>
</mml:math>
</inline-formula> determines the demand for evidence before the system commits to a decision. More precisely, the system can favor decision accuracy by increasing <inline-formula id="inf208">
<mml:math id="minf208">
<mml:mi>b</mml:mi>
</mml:math>
</inline-formula>, or improve decision speed by decreasing <inline-formula id="inf209">
<mml:math id="minf209">
<mml:mi>b</mml:mi>
</mml:math>
</inline-formula>. We thus defined a measure <inline-formula id="inf210">
<mml:math id="minf210">
<mml:mrow>
<mml:mover accent="true">
<mml:mi>e</mml:mi>
<mml:mo>&#x2322;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:math>
</inline-formula> of the optimality of each participant&#x27;s decisions, by comparing the speed-accuracy efficiency of her estimated DDM and the maximum speed-accuracy efficiency that can be achieved over alternative bound heights <inline-formula id="inf211">
<mml:math id="minf211">
<mml:mi>b</mml:mi>
</mml:math>
</inline-formula> (see <xref ref-type="sec" rid="s12">Supplementary Appendix SA5</xref>). This measure of optimality can be obtained either under the default/alternative frame or under the native frame. It turns out that the measured optimality of participants&#x27; decisions is significantly higher under the default/alternative frame than under the native frame (<inline-formula id="inf212">
<mml:math id="minf212">
<mml:mrow>
<mml:mi>&#x394;</mml:mi>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>e</mml:mi>
<mml:mo>&#x2322;</mml:mo>
</mml:mover>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> &#x3d; 0.007&#x20;&#xb1; 0.003, t &#x3d; 2.2, dof &#x3d; 23, p &#x3d; 0.02). In other words, participants&#x27; decisions appear more optimal under the default/alternative frame than under the native frame. We comment on possible interpretations of this result in the Discussion section&#x20;below.</p>
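To make the logic of this optimality measure concrete, the following Python sketch estimates a speed-accuracy efficiency by Monte-Carlo simulation of a basic DDM, and compares the efficiency achieved at a fitted bound height against the best efficiency achievable over a grid of alternative bound heights. The efficiency definition used here (accuracy per unit mean RT) and all parameter values are illustrative assumptions, not the exact criterion of the Supplementary Appendix:

```python
import numpy as np

def simulate_ddm(v, b, x0=0.0, s=1.0, dt=2e-3, n_trials=400, t_max=4.0, seed=0):
    """Euler-Maruyama simulation of a basic DDM: dx = v*dt + s*dW, stop when |x| >= b."""
    rng = np.random.default_rng(seed)
    n_steps = int(t_max / dt)
    x = np.full(n_trials, float(x0))
    choices = np.zeros(n_trials)          # +1 / -1 once a bound is hit
    rts = np.full(n_trials, t_max)        # censored at t_max if no bound is hit
    active = np.ones(n_trials, dtype=bool)
    for k in range(n_steps):
        x[active] += v * dt + s * np.sqrt(dt) * rng.standard_normal(active.sum())
        hit = active & (np.abs(x) >= b)
        choices[hit] = np.sign(x[hit])
        rts[hit] = (k + 1) * dt
        active &= ~hit
        if not active.any():
            break
    return choices, rts

def efficiency(v, b, x0=0.0):
    """Hypothetical speed-accuracy efficiency: accuracy per unit mean RT."""
    choices, rts = simulate_ddm(v, b, x0)
    return np.mean(choices == np.sign(v)) / np.mean(rts)

# optimality e_hat: efficiency at the fitted bound, relative to the maximum
# efficiency achievable over alternative bound heights (cf. main text)
b_fitted = 1.0
b_grid = np.linspace(0.2, 3.0, 15)        # grid of alternative bounds, includes b_fitted
e_hat = efficiency(1.0, b_fitted) / max(efficiency(1.0, b) for b in b_grid)
```

Under this toy efficiency criterion, lowering the bound buys large speed gains at a modest accuracy cost, so `e_hat` quantifies how far the fitted bound sits from the grid optimum.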
</sec>
<sec sec-type="discussion" id="s7">
<title>Discussion</title>
<p>In this note, we have described an overcomplete approach to fitting the DDM to trial-by-trial RT data. This approach is based upon a self-consistency equation that response times obey under DDM models. It bypasses the computational bottleneck of existing DDM parameter estimation approaches, at the cost of augmenting the model with stochastic neural noise variables that perturb the underlying decision process. This makes it suitable for generalized variants of the DDM, which would not otherwise be considered for behavioral data analysis.</p>
<p>Strictly speaking, the DDM predicts the RT distribution conditional on choice outcomes. This is why variants of the method of moments are not optimal when empirical design parameters (e.g., evidence strength) are varied on a trial-by-trial basis. More precisely, one would need repeated trials per empirical condition (e.g., at least a few tens of trials per evidence strength) to estimate the underlying DDM parameters from the observed moments of associated RT distributions (<xref ref-type="bibr" rid="B4">Boehm et&#x20;al., 2018</xref>; <xref ref-type="bibr" rid="B45">Ratcliff, 2008</xref>; <xref ref-type="bibr" rid="B50">Srivastava et&#x20;al., 2016</xref>). Alternatively, one could rely on variants of the method of trial means to find the DDM parameters that best match expected and observed RTs (<xref ref-type="bibr" rid="B16">Fontanesi et&#x20;al., 2019a</xref>; <xref ref-type="bibr" rid="B17">Fontanesi et&#x20;al., 2019b</xref>; <xref ref-type="bibr" rid="B20">Gluth and Meiran, 2019</xref>; <xref ref-type="bibr" rid="B35">Moens and Zenon, 2017</xref>; <xref ref-type="bibr" rid="B40">Pedersen et&#x20;al., 2017</xref>; <xref ref-type="bibr" rid="B56">Wabersich and Vandekerckhove, 2014</xref>). But this becomes computationally cumbersome when the number of trials is high and one wishes to use generalized variants of the DDM. This, however, is not the case for the overcomplete approach. As with the method of trial means, its statistical power is maximal when design parameters are varied on a trial-by-trial basis. But the overcomplete approach does not suffer from the same computational bottleneck. This is because evaluating the underlying self-consistency equation (<xref ref-type="disp-formula" rid="e7">Eqs. 
7</xref>&#x2013;<xref ref-type="disp-formula" rid="e9">9</xref>) is much simpler than deriving moments of the conditional RT distributions (<xref ref-type="bibr" rid="B7">Broderick et&#x20;al., 2009</xref>; <xref ref-type="bibr" rid="B36">Navarro and Fuss, 2009</xref>). In turn, the statistical added-value of the overcomplete approach is probably highest for analyzing data acquired with such designs, under generalized DDM variants.</p>
<p>We note that this feature of the overcomplete approach makes it particularly suited for learning experiments, where sequential decisions are based upon beliefs that are updated on a trial-by-trial basis from systematically varying pieces of evidence. In such contexts, existing modeling studies restrict the number of DDM parameters to deal with parameter recovery issues (<xref ref-type="bibr" rid="B18">Frank et&#x20;al., 2015</xref>; <xref ref-type="bibr" rid="B40">Pedersen et&#x20;al., 2017</xref>). This is problematic, since reducing the set of free DDM parameters can lead to systematic interpretation errors. In contrast, it would be trivial to extend the overcomplete approach to learning experiments without having to simplify the parameter space. We will pursue this in forthcoming publications.</p>
<p>Now what are the limitations of the overcomplete approach?</p>
<p>In brief, the overcomplete approach effectively reduces to adjusting DDM parameters such that RTs become self-consistent. Interestingly, we derived the self-consistency equation without regard to the subtle dynamical degeneracies that (absorbing) bounds induce on stochastic processes (<xref ref-type="bibr" rid="B7">Broderick et&#x20;al., 2009</xref>). It simply follows from noting that if a decision is triggered at time <inline-formula id="inf213">
<mml:math id="minf213">
<mml:mi>&#x3c4;</mml:mi>
</mml:math>
</inline-formula>, then the underlying stochastic process has reached the bound (i.e.,&#x20;<inline-formula id="inf214">
<mml:math id="minf214">
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>&#x3c4;</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mo>&#xb1;</mml:mo>
<mml:mi>b</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>). This serves to identify the cumulative perturbation that eventually drove the system toward the bound. But a bound hit event at time <inline-formula id="inf215">
<mml:math id="minf215">
<mml:mi>&#x3c4;</mml:mi>
</mml:math>
</inline-formula> is more informative about the history of the stochastic process than just its fate: it also tells us that the path did not cross the barrier before (i.e.,&#x20;<inline-formula id="inf216">
<mml:math id="minf216">
<mml:mrow>
<mml:mrow>
<mml:mo>&#x7c;</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>&#x7c;</mml:mo>
</mml:mrow>
<mml:mo>&#x3c;</mml:mo>
<mml:mi>b</mml:mi>
<mml:mtext>&#x2003;</mml:mtext>
<mml:mo>&#x2200;</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>&#x3c;</mml:mo>
<mml:mi>&#x3c4;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>). This disqualifies those sample paths whose first-passage time happens sooner, even though all barrier crossings are (by definition) &#x201c;self-consistent&#x201d;. In retrospect, one may thus wonder whether the self-consistency equation is suboptimal, in the sense of incurring some loss of information. Critically, however, no information is lost about cumulative perturbations (or about DDM parameters). Although these are not sufficient to discriminate between the many sample paths that are compatible with a given RT, this is essentially irrelevant to the objective of the overcomplete approach. In turn, the existing limitations of the overcomplete approach lie elsewhere.</p>
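As a minimal illustration of the two pieces of information a bound hit carries, the sketch below simulates a single DDM sample path and checks both conditions: at the decision time the path sits at the bound (self-consistency), and at all earlier times it stays strictly within the bounds (the extra first-passage information). Parameter values are arbitrary and purely illustrative:

```python
import numpy as np

def first_passage_path(v=0.5, b=1.0, s=1.0, dt=1e-3, t_max=5.0, seed=1):
    """Simulate one DDM path; return (path, index of first bound hit or None)."""
    rng = np.random.default_rng(seed)
    n_steps = int(t_max / dt)
    x = np.zeros(n_steps + 1)
    for k in range(n_steps):
        x[k + 1] = x[k] + v * dt + s * np.sqrt(dt) * rng.standard_normal()
        if abs(x[k + 1]) >= b:
            return x[: k + 2], k + 1   # truncate the path at the decision time
    return x, None                      # censored: no decision within t_max

path, tau = first_passage_path()
if tau is not None:
    # (i) self-consistency: the path is at the bound at the decision time tau
    assert abs(path[tau]) >= 1.0
    # (ii) the extra information: the path never reached the bound before tau
    assert np.all(np.abs(path[:tau]) < 1.0)
```

Condition (ii) is exactly what disqualifies sample paths with earlier first-passage times, even though each of them would satisfy condition (i) at its own crossing time.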
<p>First and foremost, the self-consistency equation cannot be used to simulate data (recall that RTs appear on both the left- and right-hand sides of the equation). This restricts the utility of the approach to data analysis. Note, however, that data simulations can still be performed using <xref ref-type="disp-formula" rid="e2">Eq. 2</xref>, once the model parameters have been identified. This enables all forms of posterior predictive checks and/or other types of model fit diagnostics (<xref ref-type="bibr" rid="B39">Palminteri et&#x20;al., 2017</xref>). Second, the accuracy of the method depends upon the reliability of response time data. In particular, the recovery of the noise&#x2019;s standard deviation depends upon the accuracy of the empirical proxy for decision times (cf. second term in <xref ref-type="disp-formula" rid="e7">Eq. 7</xref>). In addition, the method inherits the potential limitations of its underlying parameter estimation technique: namely, the variational Laplace approach (<xref ref-type="bibr" rid="B19">Friston et&#x20;al., 2007</xref>; <xref ref-type="bibr" rid="B9">Daunizeau, 2017</xref>). In particular, and as is the case for any numerical optimization scheme, it is not immune to multimodal likelihood landscapes. We note that this may result in non-identifiability issues of the sort that we have demonstrated here (cf., e.g., <xref ref-type="fig" rid="F8">Figures 8</xref>, <xref ref-type="fig" rid="F10">10</xref>). One cannot guarantee that this will not happen for some generalized DDM variant of interest. A possible diagnostic for this problem is to perform a systematic fit/sample/refit analysis to evaluate the stability of parameter estimates. In any case, we would advise re-evaluating (and reporting) parameter recovery for any novel DDM variant. Third, the computational cost of model inversion scales with the number of trials. This is because each trial has its own nuisance perturbation parameter. 
Note, however, that the ensuing computational cost is many orders of magnitude lower than that of standard methods for generalized DDM variants. Fourth, proper Bayesian model comparison may be more difficult. In particular, simulations show that a chance model always has a higher model evidence than the overcomplete model. This is another consequence of the overcompleteness of the likelihood function, which eventually pays a high complexity penalty in the context of Bayesian model comparison. Whether different DDM variants can be discriminated using the overcomplete approach is beyond the scope of the current work.</p>
<p>Let us now discuss the results of our model-based data analysis from the value-based decision making experiment (<xref ref-type="bibr" rid="B32">Lopez-Persem et&#x20;al., 2016</xref>). Recall that we eventually provided evidence that people&#x2019;s decisions are more optimal under the default/alternative frame than under the native frame. This efficiency gain is inherited from the initial condition parameter <inline-formula id="inf217">
<mml:math id="minf217">
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, which turns out to be significant under the default/alternative frame. The implicit interpretation here is that the brain&#x2019;s decision system starts with a prior bias toward the default option. Critically, however, we would have obtained the exact same results had we fixed the initial condition to zero and allowed the upper and lower decision bounds to be asymmetric. This is interesting because it highlights a slightly different interpretation of our results. Under this alternative scenario, one would state that the brain&#x2019;s decision system is comparatively less demanding regarding the evidence that is required for committing to the default option. In turn, the benefit of lowering the bound for the default option may simply be to speed up decisions when evidence is congruent with default preferences, at the expense of slowing down incongruent decisions. Importantly, this strategy does not compromise decision accuracy if the incongruent decisions are rarer than the congruent ones (as is effectively the case in this experiment).</p>
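The equivalence invoked here, between a biased initial condition and asymmetric decision bounds, can be checked path by path: shifting the accumulation variable by the initial bias turns a biased start with symmetric bounds into a zero start with a lowered upper bound and a raised lower bound, without changing any choice or RT. A minimal sketch with illustrative parameter values:

```python
import numpy as np

def passage(noise, v, dt, x0, upper, lower):
    """First passage of x_t = x0 + v*t + cumulative noise through asymmetric bounds."""
    x = x0
    for k, eps in enumerate(noise):
        x += v * dt + eps
        if x >= upper:
            return +1, k   # "default" bound hit (choice, RT index)
        if x <= lower:
            return -1, k   # "alternative" bound hit
    return 0, len(noise)   # censored: no decision

rng = np.random.default_rng(7)
dt, v, b, x0 = 1e-3, 0.8, 1.0, 0.3
for _ in range(20):
    noise = np.sqrt(dt) * rng.standard_normal(4000)
    # biased start x0 with symmetric bounds +/- b ...
    biased_start = passage(noise, v, dt, x0=x0, upper=b, lower=-b)
    # ... equals a zero start with asymmetric bounds (b - x0) and (-b - x0)
    shifted_bounds = passage(noise, v, dt, x0=0.0, upper=b - x0, lower=-b - x0)
    assert biased_start == shifted_bounds
```

Because the equivalence holds for each noise realization separately, the two parameterizations produce identical likelihoods, which is why the data cannot arbitrate between the "prior bias" and "lowered bound" interpretations.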
<p>At this point, we would like to discuss potential neuroscientific applications of trial-by-trial estimates of &#x201c;neural&#x201d; perturbation terms. Recall that the self-consistency equation makes it possible to infer these neural noise variables from response times (cf. <xref ref-type="disp-formula" rid="e7">Eq. 7</xref> or <xref ref-type="disp-formula" rid="e9">9</xref>). For the purpose of behavioral data analysis, where one is mostly interested in native DDM parameters, these are treated as nuisance variables. However, should one acquire neuroimaging data concurrently with behavioral data, one may want to exploit this unique feature of the overcomplete approach. In brief, estimates of &#x201c;neural&#x201d; perturbation terms move the DDM one step closer to neural data. This is because DDM-based analysis of behavioral data now provides quantitative trial-by-trial predictions of an underlying neural variable. This becomes particularly interesting when internal variables (e.g., drift rates) are systematically varied over trials, hence de-correlating the neural predictor from response times. For example, in the context of fMRI investigations of value-based decisions, one may search for brain regions whose activity eventually perturbs the computation and/or comparison of options&#x2019; values. This would extend the portfolio of recent empirical studies of neural noise perturbations to learning-relevant computations (<xref ref-type="bibr" rid="B13">Drugowitsch et&#x20;al., 2016</xref>; <xref ref-type="bibr" rid="B60">Wyart and Koechlin, 2016</xref>; <xref ref-type="bibr" rid="B15">Findling et&#x20;al., 2019</xref>). Reciprocally, using some variant of mediation analysis (<xref ref-type="bibr" rid="B33">MacKinnon et&#x20;al., 2007</xref>; <xref ref-type="bibr" rid="B31">Lindquist, 2012</xref>; <xref ref-type="bibr" rid="B6">Brochard and Daunizeau, 2020</xref>), one may extract neuroimaging estimates of neural noise that can inform DDM-based behavioral data analysis. 
Alternatively, one may model neural and behavioral data in a joint and symmetrical manner, with the purpose of testing some predefined DDM variant (<xref ref-type="bibr" rid="B48">Rigoux and Daunizeau, 2015</xref>; <xref ref-type="bibr" rid="B52">Turner et&#x20;al., 2015</xref>).</p>
<p>Finally, one may ask how generalizable the overcomplete approach is. Strictly speaking, one can evaluate the self-consistency equation under any DDM variant, as long as the mapping <inline-formula id="inf218">
<mml:math id="minf218">
<mml:mrow>
<mml:mi>z</mml:mi>
<mml:mo>:</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>&#x2192;</mml:mo>
<mml:mi>z</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> from the base random walk to the bound subspace is invertible (cf. <xref ref-type="disp-formula" rid="e8">Eqs. 8</xref>, <xref ref-type="disp-formula" rid="e9">9</xref>). No such formal constraint exists for the dynamical form of the collapsing bound. This spans a family of DDM variants that is much broader than what is currently being used in the field (<xref ref-type="bibr" rid="B14">Fengler et&#x20;al., 2020</xref>; <xref ref-type="bibr" rid="B49">Shinn et&#x20;al., 2020</xref>). For example, this family includes decision models that trigger a decision when decision <italic>confidence</italic> reaches a bound (<xref ref-type="bibr" rid="B51">Tajima et&#x20;al., 2016</xref>; <xref ref-type="bibr" rid="B30">Lee and Daunizeau, 2020</xref>). To the best of our knowledge, no existing DDM variant falls outside this class. Having said this, future extensions of the DDM framework may render the current overcomplete approach obsolete. Our guess is that such DDM improvements may then need to be informed with additional behavioral data, such as decision confidence (<xref ref-type="bibr" rid="B11">De Martino et&#x20;al., 2012</xref>) and/or mental effort (<xref ref-type="bibr" rid="B30">Lee and Daunizeau, 2020</xref>), for which other kinds of self-consistency equations may be derived.</p>
<p>To conclude, we note that the code that is required to perform a DDM-based data analysis under the overcomplete approach will be made available soon from the VBA academic freeware <ext-link ext-link-type="uri" xlink:href="https://mbb-team.github.io/VBA-toolbox/">https://mbb-team.github.io/VBA-toolbox/</ext-link> (<xref ref-type="bibr" rid="B8">Daunizeau et&#x20;al., 2014</xref>).</p>
</sec>
</body>
<back>
<sec id="s8">
<title>Data Availability Statement</title>
<p>The datasets presented in this article are not readily available because they were not acquired by the authors. Requests to access the datasets should be directed to <email>jean.daunizeau@inserm.fr</email>.</p>
</sec>
<sec id="s9">
<title>Ethics Statement</title>
<p>Ethical review and approval was not required for the reuse of data from human participants in accordance with the local legislation. The patients/participants provided their written informed consent to participate in this&#x20;study.</p>
</sec>
<sec id="s10">
<title>Author Contributions</title>
<p>All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.</p>
</sec>
<sec sec-type="COI-statement" id="s11">
<title>Conflict of Interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<ack>
<p>We would like to thank Aliz&#xe9;e Lopez-Persem for providing us with the empirical data that serves to demonstrate our approach.</p>
</ack>
<sec id="s12">
<title>Supplementary Material</title>
<p>The Supplementary Material for this article can be found online at: <ext-link ext-link-type="uri" xlink:href="https://www.frontiersin.org/articles/10.3389/frai.2021.531316/full#supplementary-material">https://www.frontiersin.org/articles/10.3389/frai.2021.531316/full&#x23;supplementary-material</ext-link>.</p>
<supplementary-material xlink:href="DataSheet1.docx" id="SM1" mimetype="application/docx" xmlns:xlink="http://www.w3.org/1999/xlink"/>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Balci</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Simen</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Niyogi</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Saxe</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Hughes</surname>
<given-names>J.&#x20;A.</given-names>
</name>
<name>
<surname>Holmes</surname>
<given-names>P.</given-names>
</name>
<etal/>
</person-group> (<year>2011</year>). <article-title>Acquisition of decision making criteria: reward rate ultimately beats accuracy</article-title>. <source>Atten. Percept. Psychophys.</source> <volume>73</volume>, <fpage>640</fpage>&#x2013;<lpage>657</lpage>. <pub-id pub-id-type="doi">10.3758/s13414-010-0049-7</pub-id> </citation>
</ref>
<ref id="B2">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Beal</surname>
<given-names>M. J.</given-names>
</name>
</person-group> (<year>2003</year>). <article-title>Variational algorithms for approximate Bayesian inference</article-title>. <comment>PhD Thesis</comment>. <publisher-loc>London</publisher-loc>: <publisher-name>University College London</publisher-name>. </citation>
</ref>
<ref id="B3">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bitzer</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Park</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Blankenburg</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Kiebel</surname>
<given-names>S. J.</given-names>
</name>
</person-group> (<year>2014</year>). <article-title>Perceptual decision making: drift-diffusion model is equivalent to a Bayesian model</article-title>. <source>Front. Hum. Neurosci.</source> <volume>8</volume>, <fpage>102</fpage>. <pub-id pub-id-type="doi">10.3389/fnhum.2014.00102</pub-id> </citation>
</ref>
<ref id="B4">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Boehm</surname>
<given-names>U.</given-names>
</name>
<name>
<surname>Annis</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Frank</surname>
<given-names>M. J.</given-names>
</name>
<name>
<surname>Hawkins</surname>
<given-names>G. E.</given-names>
</name>
<name>
<surname>Heathcote</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Kellen</surname>
<given-names>D.</given-names>
</name>
<etal/>
</person-group> (<year>2018</year>). <article-title>Estimating across-trial variability parameters of the diffusion decision model: expert advice and recommendations</article-title>. <source>J.&#x20;Math. Psychol.</source> <volume>87</volume>, <fpage>46</fpage>&#x2013;<lpage>75</lpage>. <pub-id pub-id-type="doi">10.1016/j.jmp.2018.09.004</pub-id> </citation>
</ref>
<ref id="B5">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bogacz</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Brown</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Moehlis</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Holmes</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Cohen</surname>
<given-names>J.&#x20;D.</given-names>
</name>
</person-group> (<year>2006</year>). <article-title>The physics of optimal decision making: a formal analysis of models of performance in two-alternative forced-choice tasks</article-title>. <source>Psychol. Rev.</source> <volume>113</volume>, <fpage>700</fpage>&#x2013;<lpage>765</lpage>. <pub-id pub-id-type="doi">10.1037/0033-295x.113.4.700</pub-id> </citation>
</ref>
<ref id="B6">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brochard</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Daunizeau</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Blaming blunders on the brain: can indifferent choices be driven by range adaptation or synaptic plasticity?</article-title> <source>BioRxiv</source>, <fpage>287714</fpage>. <pub-id pub-id-type="doi">10.1101/2020.09.08.287714</pub-id> </citation>
</ref>
<ref id="B7">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Broderick</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Wong-Lin</surname>
<given-names>K. F.</given-names>
</name>
<name>
<surname>Holmes</surname>
<given-names>P.</given-names>
</name>
</person-group> (<year>2009</year>). <article-title>Closed-form approximations of first-passage distributions for a stochastic decision-making model</article-title>. <source>Appl. Math. Res. Express</source> <volume>2009</volume>, <fpage>123</fpage>&#x2013;<lpage>141</lpage>. <pub-id pub-id-type="doi">10.1093/amrx/abp008</pub-id> </citation>
</ref>
<ref id="B8">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Daunizeau</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Adam</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Rigoux</surname>
<given-names>L.</given-names>
</name>
</person-group> (<year>2014</year>). <article-title>VBA: a probabilistic treatment of nonlinear models for neurobiological and behavioral data</article-title>. <source>Plos Comput. Biol.</source> <volume>10</volume>, <fpage>e1003441</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pcbi.1003441</pub-id> </citation>
</ref>
<ref id="B9">
<citation citation-type="web">
<person-group person-group-type="author">
<name>
<surname>Daunizeau</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2017</year>). <article-title>The variational Laplace approach to approximate Bayesian inference</article-title>. <ext-link ext-link-type="uri" xlink:href="http://arXiv:1703.02089">arXiv:1703.02089</ext-link>. </citation>
</ref>
<ref id="B10">
<citation citation-type="web">
<person-group person-group-type="author">
<name>
<surname>Daunizeau</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Variational Bayesian modeling of mixed-effects</article-title>. <ext-link ext-link-type="uri" xlink:href="http://arXiv:1903.09003">arXiv:1903.09003</ext-link>. </citation>
</ref>
<ref id="B11">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>De Martino</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Fleming</surname>
<given-names>S. M.</given-names>
</name>
<name>
<surname>Garrett</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Dolan</surname>
<given-names>R. J.</given-names>
</name>
</person-group> (<year>2012</year>). <article-title>Confidence in value-based choice</article-title>. <source>Nat. Neurosci.</source> <volume>16</volume>, <fpage>105</fpage>&#x2013;<lpage>110</lpage>. <pub-id pub-id-type="doi">10.1038/nn.3279</pub-id> </citation>
</ref>
<ref id="B12">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Drugowitsch</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Moreno-Bote</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Churchland</surname>
<given-names>A. K.</given-names>
</name>
<name>
<surname>Shadlen</surname>
<given-names>M. N.</given-names>
</name>
<name>
<surname>Pouget</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2012</year>). <article-title>The cost of accumulating evidence in perceptual decision making</article-title>. <source>J.&#x20;Neurosci.</source> <volume>32</volume>, <fpage>3612</fpage>&#x2013;<lpage>3628</lpage>. <pub-id pub-id-type="doi">10.1523/jneurosci.4010-11.2012</pub-id> </citation>
</ref>
<ref id="B13">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Drugowitsch</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Wyart</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Devauchelle</surname>
<given-names>A.-D.</given-names>
</name>
<name>
<surname>Koechlin</surname>
<given-names>E.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>Computational precision of mental inference as critical source of human choice suboptimality</article-title>. <source>Neuron</source> <volume>92</volume>, <fpage>1398</fpage>&#x2013;<lpage>1411</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2016.11.005</pub-id> </citation>
</ref>
<ref id="B14">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Fengler</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Govindarajan</surname>
<given-names>L. N.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Frank</surname>
<given-names>M. J.</given-names>
</name>
</person-group> (<year>2020</year>). <source>Likelihood approximation networks (LANs) for fast inference of simulation models in cognitive neuroscience</source>. <publisher-name>BioRxiv</publisher-name>. <pub-id pub-id-type="doi">10.1101/2020.11.20.392274</pub-id> </citation>
</ref>
<ref id="B15">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Findling</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Chopin</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Koechlin</surname>
<given-names>E.</given-names>
</name>
</person-group> (<year>2019</year>). <source>Imprecise neural computations as source of human adaptive behavior in volatile environments</source>. <publisher-name>BioRxiv</publisher-name>, <fpage>799239</fpage>.</citation>
</ref>
<ref id="B16">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fontanesi</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Gluth</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Spektor</surname>
<given-names>M. S.</given-names>
</name>
<name>
<surname>Rieskamp</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2019a</year>). <article-title>A reinforcement learning diffusion decision model for value-based decisions</article-title>. <source>Psychon. Bull. Rev.</source> <volume>26</volume>, <fpage>1099</fpage>&#x2013;<lpage>1121</lpage>. <pub-id pub-id-type="doi">10.3758/s13423-018-1554-2</pub-id> </citation>
</ref>
<ref id="B17">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fontanesi</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Palminteri</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Lebreton</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2019b</year>). <article-title>Decomposing the effects of context valence and feedback information on speed and accuracy during reinforcement learning: a meta-analytical approach using diffusion decision modeling</article-title>. <source>Cogn. Affect. Behav. Neurosci.</source> <volume>19</volume>, <fpage>490</fpage>&#x2013;<lpage>502</lpage>. <pub-id pub-id-type="doi">10.3758/s13415-019-00723-1</pub-id> </citation>
</ref>
<ref id="B18">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Frank</surname>
<given-names>M. J.</given-names>
</name>
<name>
<surname>Gagne</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Nyhus</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Masters</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Wiecki</surname>
<given-names>T. V.</given-names>
</name>
<name>
<surname>Cavanagh</surname>
<given-names>J.&#x20;F.</given-names>
</name>
<etal/>
</person-group> (<year>2015</year>). <article-title>fMRI and EEG predictors of dynamic decision parameters during human reinforcement learning</article-title>. <source>J.&#x20;Neurosci.</source> <volume>35</volume>, <fpage>485</fpage>&#x2013;<lpage>494</lpage>. <pub-id pub-id-type="doi">10.1523/jneurosci.2036-14.2015</pub-id> </citation>
</ref>
<ref id="B19">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Friston</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Mattout</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Trujillo-Barreto</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Ashburner</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Penny</surname>
<given-names>W.</given-names>
</name>
</person-group> (<year>2007</year>). <article-title>Variational free energy and the Laplace approximation</article-title>. <source>NeuroImage</source> <volume>34</volume>, <fpage>220</fpage>&#x2013;<lpage>234</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2006.08.035</pub-id> </citation>
</ref>
<ref id="B20">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gluth</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Meiran</surname>
<given-names>N.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Leave-One-Trial-Out, LOTO, a general approach to link single-trial parameters of cognitive models to neural data</article-title>. <source>eLife Sciences</source> <volume>8</volume>. <pub-id pub-id-type="doi">10.7554/eLife.42607</pub-id> </citation>
</ref>
<ref id="B21">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gold</surname>
<given-names>J.&#x20;I.</given-names>
</name>
<name>
<surname>Shadlen</surname>
<given-names>M. N.</given-names>
</name>
</person-group> (<year>2007</year>). <article-title>The neural basis of decision making</article-title>. <source>Annu. Rev. Neurosci.</source> <volume>30</volume>, <fpage>535</fpage>&#x2013;<lpage>574</lpage>. <pub-id pub-id-type="doi">10.1146/annurev.neuro.29.051605.113038</pub-id> </citation>
</ref>
<ref id="B22">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Goldfarb</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Leonard</surname>
<given-names>N. E.</given-names>
</name>
<name>
<surname>Simen</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Caicedo-N&#xfa;&#xf1;ez</surname>
<given-names>C. H.</given-names>
</name>
<name>
<surname>Holmes</surname>
<given-names>P.</given-names>
</name>
</person-group> (<year>2014</year>). <article-title>A comparative study of drift diffusion and linear ballistic accumulator models in a reward maximization perceptual choice task</article-title>. <source>Front. Neurosci.</source> <volume>8</volume>, <fpage>148</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2014.00148</pub-id> </citation>
</ref>
<ref id="B23">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Grasman</surname>
<given-names>R. P. P. P.</given-names>
</name>
<name>
<surname>Wagenmakers</surname>
<given-names>E.-J.</given-names>
</name>
<name>
<surname>van der Maas</surname>
<given-names>H. L. J.</given-names>
</name>
</person-group> (<year>2009</year>). <article-title>On the mean and variance of response times under the diffusion model with an application to parameter estimation</article-title>. <source>J.&#x20;Math. Psychol.</source> <volume>53</volume>, <fpage>55</fpage>&#x2013;<lpage>68</lpage>. <pub-id pub-id-type="doi">10.1016/j.jmp.2009.01.006</pub-id> </citation>
</ref>
<ref id="B24">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Guevara Erra</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Arbotto</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Schurger</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>An integration-to-bound model of decision-making that accounts for the spectral properties of neural data</article-title>. <source>Sci. Rep.</source> <volume>9</volume>, <fpage>8365</fpage>. <pub-id pub-id-type="doi">10.1038/s41598-019-44197-0</pub-id> </citation>
</ref>
<ref id="B25">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hanks</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Kiani</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Shadlen</surname>
<given-names>M. N.</given-names>
</name>
</person-group> (<year>2014</year>). <article-title>A neural mechanism of speed-accuracy tradeoff in macaque area LIP</article-title>. <source>eLife</source> <volume>3</volume>, <fpage>e02260</fpage>. <pub-id pub-id-type="doi">10.7554/elife.02260</pub-id> </citation>
</ref>
<ref id="B26">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hawkins</surname>
<given-names>G. E.</given-names>
</name>
<name>
<surname>Forstmann</surname>
<given-names>B. U.</given-names>
</name>
<name>
<surname>Wagenmakers</surname>
<given-names>E.-J.</given-names>
</name>
<name>
<surname>Ratcliff</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Brown</surname>
<given-names>S. D.</given-names>
</name>
</person-group> (<year>2015</year>). <article-title>Revisiting the evidence for collapsing boundaries and urgency signals in perceptual decision-making</article-title>. <source>J.&#x20;Neurosci.</source> <volume>35</volume>, <fpage>2476</fpage>&#x2013;<lpage>2484</lpage>. <pub-id pub-id-type="doi">10.1523/jneurosci.2410-14.2015</pub-id> </citation>
</ref>
<ref id="B27">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Huk</surname>
<given-names>A. C.</given-names>
</name>
<name>
<surname>Shadlen</surname>
<given-names>M. N.</given-names>
</name>
</person-group> (<year>2005</year>). <article-title>Neural activity in macaque parietal cortex reflects temporal integration of visual motion signals during perceptual decision making</article-title>. <source>J.&#x20;Neurosci.</source> <volume>25</volume>, <fpage>10420</fpage>&#x2013;<lpage>10436</lpage>. <pub-id pub-id-type="doi">10.1523/jneurosci.4684-04.2005</pub-id> </citation>
</ref>
<ref id="B28">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Kloeden</surname>
<given-names>P. E.</given-names>
</name>
<name>
<surname>Platen</surname>
<given-names>E.</given-names>
</name>
</person-group> (<year>1992</year>). <source>Numerical solution of stochastic differential equations</source>. <publisher-loc>Berlin, Heidelberg</publisher-loc>: <publisher-name>Springer-Verlag</publisher-name>. </citation>
</ref>
<ref id="B29">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Krajbich</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Armel</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Rangel</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2010</year>). <article-title>Visual fixations and the computation and comparison of value in simple choice</article-title>. <source>Nat. Neurosci.</source> <volume>13</volume>, <fpage>1292</fpage>&#x2013;<lpage>1298</lpage>. <pub-id pub-id-type="doi">10.1038/nn.2635</pub-id> </citation>
</ref>
<ref id="B30">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lee</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Daunizeau</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Trading mental effort for confidence: the metacognitive control of value-based decision-making</article-title>. <source>BioRxiv 837054</source>. </citation>
</ref>
<ref id="B31">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lindquist</surname>
<given-names>M. A.</given-names>
</name>
</person-group> (<year>2012</year>). <article-title>Functional causal mediation analysis with an application to brain connectivity</article-title>. <source>J.&#x20;Am. Stat. Assoc.</source> <volume>107</volume>, <fpage>1297</fpage>&#x2013;<lpage>1309</lpage>. <pub-id pub-id-type="doi">10.1080/01621459.2012.695640</pub-id> </citation>
</ref>
<ref id="B32">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lopez-Persem</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Domenech</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Pessiglione</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>How prior preferences determine decision-making frames and biases in the human brain</article-title>. <source>eLife</source> <volume>5</volume>, <fpage>e20317</fpage>. <pub-id pub-id-type="doi">10.7554/elife.20317</pub-id> </citation>
</ref>
<ref id="B33">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>MacKinnon</surname>
<given-names>D. P.</given-names>
</name>
<name>
<surname>Fairchild</surname>
<given-names>A. J.</given-names>
</name>
<name>
<surname>Fritz</surname>
<given-names>M. S.</given-names>
</name>
</person-group> (<year>2007</year>). <article-title>Mediation analysis</article-title>. <source>Annu. Rev. Psychol.</source> <volume>58</volume>, <fpage>593</fpage>. <pub-id pub-id-type="doi">10.1146/annurev.psych.58.110405.085542</pub-id> </citation>
</ref>
<ref id="B34">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Milosavljevic</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Malmaud</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Huth</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Koch</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Rangel</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2010</year>). <article-title>The drift diffusion model can account for the accuracy and reaction time of value-based choices under high and low time pressure</article-title>. <source>Judgm. Decis. Mak.</source> <volume>5</volume>, <fpage>437</fpage>&#x2013;<lpage>449</lpage>. </citation>
</ref>
<ref id="B35">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Moens</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Zenon</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2017</year>). <article-title>Variational treatment of trial-by-trial drift-diffusion models of behavior</article-title>. <source>BioRxiv 220517</source>. </citation>
</ref>
<ref id="B36">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Navarro</surname>
<given-names>D. J.</given-names>
</name>
<name>
<surname>Fuss</surname>
<given-names>I. G.</given-names>
</name>
</person-group> (<year>2009</year>). <article-title>Fast and accurate calculations for first-passage times in Wiener diffusion models</article-title>. <source>J.&#x20;Math. Psychol.</source> <volume>53</volume>, <fpage>222</fpage>&#x2013;<lpage>230</lpage>. <pub-id pub-id-type="doi">10.1016/j.jmp.2009.02.003</pub-id> </citation>
</ref>
<ref id="B37">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Newey</surname>
<given-names>W. K.</given-names>
</name>
<name>
<surname>West</surname>
<given-names>K. D.</given-names>
</name>
</person-group> (<year>1987</year>). <article-title>Hypothesis testing with efficient method of moments estimation</article-title>. <source>Int. Econ. Rev.</source> <volume>28</volume>, <fpage>777</fpage>&#x2013;<lpage>787</lpage>. <pub-id pub-id-type="doi">10.2307/2526578</pub-id> </citation>
</ref>
<ref id="B38">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Osth</surname>
<given-names>A. F.</given-names>
</name>
<name>
<surname>Bora</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Dennis</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Heathcote</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2017</year>). <article-title>Diffusion vs. linear ballistic accumulation: different models, different conclusions about the slope of the zROC in recognition memory</article-title>. <source>J.&#x20;Mem. Lang.</source> <volume>96</volume>, <fpage>36</fpage>&#x2013;<lpage>61</lpage>. <pub-id pub-id-type="doi">10.1016/j.jml.2017.04.003</pub-id> </citation>
</ref>
<ref id="B39">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Palminteri</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Wyart</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Koechlin</surname>
<given-names>E.</given-names>
</name>
</person-group> (<year>2017</year>). <article-title>The importance of falsification in computational cognitive modeling</article-title>. <source>Trends Cogn. Sci.</source> <volume>21</volume>, <fpage>425</fpage>&#x2013;<lpage>433</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2017.03.011</pub-id> </citation>
</ref>
<ref id="B40">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pedersen</surname>
<given-names>M. L.</given-names>
</name>
<name>
<surname>Frank</surname>
<given-names>M. J.</given-names>
</name>
<name>
<surname>Biele</surname>
<given-names>G.</given-names>
</name>
</person-group> (<year>2017</year>). <article-title>The drift diffusion model as the choice rule in reinforcement learning</article-title>. <source>Psychon. Bull. Rev.</source> <volume>24</volume>, <fpage>1234</fpage>&#x2013;<lpage>1251</lpage>. <pub-id pub-id-type="doi">10.3758/s13423-016-1199-y</pub-id> </citation>
</ref>
<ref id="B41">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pedersen</surname>
<given-names>M. L.</given-names>
</name>
<name>
<surname>Frank</surname>
<given-names>M. J.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Simultaneous hierarchical Bayesian parameter estimation for reinforcement learning and drift diffusion models: a tutorial and links to neural data</article-title>. <source>Comput. Brain Behav.</source> <volume>3</volume>, <fpage>458</fpage>&#x2013;<lpage>471</lpage>. <pub-id pub-id-type="doi">10.1007/s42113-020-00084-w</pub-id> </citation>
</ref>
<ref id="B42">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ratcliff</surname>
<given-names>R.</given-names>
</name>
</person-group> (<year>1978</year>). <article-title>A theory of memory retrieval</article-title>. <source>Psychol. Rev.</source> <volume>85</volume>, <fpage>59</fpage>&#x2013;<lpage>108</lpage>. <pub-id pub-id-type="doi">10.1037/0033-295x.85.2.59</pub-id> </citation>
</ref>
<ref id="B43">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ratcliff</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>McKoon</surname>
<given-names>G.</given-names>
</name>
</person-group> (<year>2008</year>). <article-title>The diffusion decision model: theory and data for two-choice decision tasks</article-title>. <source>Neural Comput.</source> <volume>20</volume>, <fpage>873</fpage>&#x2013;<lpage>922</lpage>. <pub-id pub-id-type="doi">10.1162/neco.2008.12-06-420</pub-id> </citation>
</ref>
<ref id="B44">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ratcliff</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Smith</surname>
<given-names>P. L.</given-names>
</name>
<name>
<surname>Brown</surname>
<given-names>S. D.</given-names>
</name>
<name>
<surname>McKoon</surname>
<given-names>G.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>Diffusion decision model: current issues and history</article-title>. <source>Trends Cogn. Sci.</source> <volume>20</volume>, <fpage>260</fpage>&#x2013;<lpage>281</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2016.01.007</pub-id> </citation>
</ref>
<ref id="B45">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ratcliff</surname>
<given-names>R.</given-names>
</name>
</person-group> (<year>2008</year>). <article-title>The EZ diffusion method: too EZ?</article-title> <source>Psychon. Bull. Rev.</source> <volume>15</volume>, <fpage>1218</fpage>&#x2013;<lpage>1228</lpage>. <pub-id pub-id-type="doi">10.3758/pbr.15.6.1218</pub-id> </citation>
</ref>
<ref id="B46">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ratcliff</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Tuerlinckx</surname>
<given-names>F.</given-names>
</name>
</person-group> (<year>2002</year>). <article-title>Estimating parameters of the diffusion model: approaches to dealing with contaminant reaction times and parameter variability</article-title>. <source>Psychon. Bull. Rev.</source> <volume>9</volume>, <fpage>438</fpage>&#x2013;<lpage>481</lpage>. <pub-id pub-id-type="doi">10.3758/bf03196302</pub-id> </citation>
</ref>
<ref id="B47">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Resulaj</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Kiani</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Wolpert</surname>
<given-names>D. M.</given-names>
</name>
<name>
<surname>Shadlen</surname>
<given-names>M. N.</given-names>
</name>
</person-group> (<year>2009</year>). <article-title>Changes of mind in decision-making</article-title>. <source>Nature</source> <volume>461</volume>, <fpage>263</fpage>&#x2013;<lpage>266</lpage>. <pub-id pub-id-type="doi">10.1038/nature08275</pub-id> </citation>
</ref>
<ref id="B48">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rigoux</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Daunizeau</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2015</year>). <article-title>Dynamic causal modeling of brain-behavior relationships</article-title>. <source>NeuroImage</source> <volume>117</volume>, <fpage>202</fpage>&#x2013;<lpage>221</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2015.05.041</pub-id> </citation>
</ref>
<ref id="B49">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shinn</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Lam</surname>
<given-names>N. H.</given-names>
</name>
<name>
<surname>Murray</surname>
<given-names>J.&#x20;D.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>A flexible framework for simulating and fitting generalized drift-diffusion models</article-title>. <source>eLife</source> <volume>9</volume>, <fpage>e56938</fpage>. <pub-id pub-id-type="doi">10.7554/elife.56938</pub-id> </citation>
</ref>
<ref id="B50">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Srivastava</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Holmes</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Simen</surname>
<given-names>P.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>Explicit moments of decision times for single- and double-threshold drift-diffusion processes</article-title>. <source>J.&#x20;Math. Psychol.</source> <volume>75</volume>, <fpage>96</fpage>&#x2013;<lpage>109</lpage>. <pub-id pub-id-type="doi">10.1016/j.jmp.2016.03.005</pub-id> </citation>
</ref>
<ref id="B51">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tajima</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Drugowitsch</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Pouget</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>Optimal policy for value-based decision-making</article-title>. <source>Nat. Commun.</source> <volume>7</volume>, <fpage>12400</fpage>. <pub-id pub-id-type="doi">10.1038/ncomms12400</pub-id> </citation>
</ref>
<ref id="B52">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Turner</surname>
<given-names>B. M.</given-names>
</name>
<name>
<surname>van Maanen</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Forstmann</surname>
<given-names>B. U.</given-names>
</name>
</person-group> (<year>2015</year>). <article-title>Informing cognitive abstractions through neuroimaging: the neural drift diffusion model</article-title>. <source>Psychol. Rev.</source> <volume>122</volume>, <fpage>312</fpage>&#x2013;<lpage>336</lpage>. <pub-id pub-id-type="doi">10.1037/a0038894</pub-id> </citation>
</ref>
<ref id="B53">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vandekerckhove</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Tuerlinckx</surname>
<given-names>F.</given-names>
</name>
</person-group> (<year>2008</year>). <article-title>Diffusion model analysis with MATLAB: a DMAT primer</article-title>. <source>Behav. Res. Methods</source> <volume>40</volume>, <fpage>61</fpage>&#x2013;<lpage>72</lpage>. <pub-id pub-id-type="doi">10.3758/brm.40.1.61</pub-id> </citation>
</ref>
<ref id="B54">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Voskuilen</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Ratcliff</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Smith</surname>
<given-names>P. L.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>Comparing fixed and collapsing boundary versions of the diffusion model</article-title>. <source>J.&#x20;Math. Psychol.</source> <volume>73</volume>, <fpage>59</fpage>&#x2013;<lpage>79</lpage>. <pub-id pub-id-type="doi">10.1016/j.jmp.2016.04.008</pub-id> </citation>
</ref>
<ref id="B55">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Voss</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Voss</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2007</year>). <article-title>Fast-dm: a free program for efficient diffusion model analysis</article-title>. <source>Behav. Res. Methods</source> <volume>39</volume>, <fpage>767</fpage>&#x2013;<lpage>775</lpage>. <pub-id pub-id-type="doi">10.3758/bf03192967</pub-id> </citation>
</ref>
<ref id="B56">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wabersich</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Vandekerckhove</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2014</year>). <article-title>Extending JAGS: a tutorial on adding custom distributions to JAGS (with a diffusion model example)</article-title>. <source>Behav. Res. Methods</source> <volume>46</volume>, <fpage>15</fpage>&#x2013;<lpage>28</lpage>. <pub-id pub-id-type="doi">10.3758/s13428-013-0369-3</pub-id> </citation>
</ref>
<ref id="B57">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wagenmakers</surname>
<given-names>E.-J.</given-names>
</name>
<name>
<surname>van der Maas</surname>
<given-names>H. L. J.</given-names>
</name>
<name>
<surname>Dolan</surname>
<given-names>C. V.</given-names>
</name>
<name>
<surname>Grasman</surname>
<given-names>R. P. P. P.</given-names>
</name>
</person-group> (<year>2008</year>). <article-title>EZ does it! Extensions of the EZ-diffusion model</article-title>. <source>Psychon. Bull. Rev.</source> <volume>15</volume>, <fpage>1229</fpage>&#x2013;<lpage>1235</lpage>. <pub-id pub-id-type="doi">10.3758/pbr.15.6.1229</pub-id> </citation>
</ref>
<ref id="B58">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wagenmakers</surname>
<given-names>E.-J.</given-names>
</name>
<name>
<surname>van der Maas</surname>
<given-names>H. L. J.</given-names>
</name>
<name>
<surname>Grasman</surname>
<given-names>R. P. P. P.</given-names>
</name>
</person-group> (<year>2007</year>). <article-title>An EZ-diffusion model for response time and accuracy</article-title>. <source>Psychon. Bull. Rev.</source> <volume>14</volume>, <fpage>3</fpage>&#x2013;<lpage>22</lpage>. <pub-id pub-id-type="doi">10.3758/bf03194023</pub-id> </citation>
</ref>
<ref id="B59">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wiecki</surname>
<given-names>T. V.</given-names>
</name>
<name>
<surname>Sofer</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Frank</surname>
<given-names>M. J.</given-names>
</name>
</person-group> (<year>2013</year>). <article-title>HDDM: hierarchical Bayesian estimation of the drift-diffusion model in Python</article-title>. <source>Front. Neuroinform.</source> <volume>7</volume>, <fpage>14</fpage>. <pub-id pub-id-type="doi">10.3389/fninf.2013.00014</pub-id> </citation>
</ref>
<ref id="B60">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wyart</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Koechlin</surname>
<given-names>E.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>Choice variability and suboptimality in uncertain environments</article-title>. <source>Curr. Opin. Behav. Sci.</source> <volume>11</volume>, <fpage>109</fpage>&#x2013;<lpage>115</lpage>. <pub-id pub-id-type="doi">10.1016/j.cobeha.2016.07.003</pub-id> </citation>
</ref>
<ref id="B61">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2012</year>). <article-title>The effects of evidence bounds on decision-making: theoretical and empirical developments</article-title>. <source>Front. Psychol.</source> <volume>3</volume>, <fpage>263</fpage>. <pub-id pub-id-type="doi">10.3389/fpsyg.2012.00263</pub-id> </citation>
</ref>
<ref id="B62">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Lee</surname>
<given-names>M. D.</given-names>
</name>
<name>
<surname>Vandekerckhove</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Maris</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Wagenmakers</surname>
<given-names>E.-J.</given-names>
</name>
</person-group> (<year>2014</year>). <article-title>Time-varying boundaries for diffusion models of decision making and response time</article-title>. <source>Front. Psychol.</source> <volume>5</volume>, <fpage>1364</fpage>. <pub-id pub-id-type="doi">10.3389/fpsyg.2014.01364</pub-id> </citation>
</ref>
</ref-list>
</back>
</article>