<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title>Frontiers in Psychology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Psychol.</abbrev-journal-title>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fpsyg.2018.02051</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Hypothesis and Theory</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Imprecise Uncertain Reasoning: A Distributional Approach</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Kleiter</surname> <given-names>Gernot D.</given-names></name>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/200741/overview"/>
</contrib>
</contrib-group>
<aff><institution>Fachbereich Psychologie, Universit&#x000E4;t Salzburg</institution>, <addr-line>Salzburg</addr-line>, <country>Austria</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Nathan Dieckmann, Oregon Health &#x00026; Science University, United States</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Nadia Ben Abdallah, NATO Centre for Maritime Research and Experimentation, Italy; Edgar Merkle, University of Missouri, United States</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Gernot D. Kleiter <email>gernot.kleiter&#x00040;gmail.com</email>;<email>gernot.kleiter&#x00040;sbg.ac.at</email></corresp>
<fn fn-type="other" id="fn001"><p>This article was submitted to Cognition, a section of the journal Frontiers in Psychology</p></fn></author-notes>
<pub-date pub-type="epub">
<day>26</day>
<month>10</month>
<year>2018</year>
</pub-date>
<pub-date pub-type="collection">
<year>2018</year>
</pub-date>
<volume>9</volume>
<elocation-id>2051</elocation-id>
<history>
<date date-type="received">
<day>12</day>
<month>04</month>
<year>2018</year>
</date>
<date date-type="accepted">
<day>05</day>
<month>10</month>
<year>2018</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2018 Kleiter.</copyright-statement>
<copyright-year>2018</copyright-year>
<copyright-holder>Kleiter</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract><p>The contribution proposes to model imprecise and uncertain reasoning by a mental probability logic that is based on probability distributions. It shows how distributions are combined with logical operators and how distributions propagate in inference rules. It discusses a series of examples like the Linda task, the suppression task, Doherty&#x00027;s pseudodiagnosticity task, and some of the deductive reasoning tasks of Rips. It demonstrates how to update distributions by soft evidence and how to represent correlated risks. The probabilities inferred from different logical inference forms may be so similar that it will be impossible to distinguish them empirically in a psychological study. Second-order distributions make it possible to obtain the probability distribution of being coherent. The maximum probability of being coherent is a second-order criterion of rationality. Technically, the contribution relies on beta distributions, copulas, vines, and stochastic simulation.</p></abstract>
<kwd-group>
<kwd>uncertain reasoning</kwd>
<kwd>judgment under uncertainty</kwd>
<kwd>probability logic</kwd>
<kwd>imprecise probability</kwd>
<kwd>second-order distributions</kwd>
<kwd>coherence</kwd>
</kwd-group>
<counts>
<fig-count count="8"/>
<table-count count="4"/>
<equation-count count="17"/>
<ref-count count="86"/>
<page-count count="16"/>
<word-count count="12135"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>1. Introduction</title>
<sec>
<title>1.1. Logic, probability, and statistics in models of human reasoning</title>
<p>Fifty years ago Peterson and Beach (<xref ref-type="bibr" rid="B60">1967</xref>) wrote a paper with the title &#x0201C;Man as an intuitive statistician.&#x0201D; In the time before the heuristics-and-biases paradigm, human judgments and decisions were seen against the background of Bayesian statistics. At the same time, human reasoning was seen exclusively against the background of classical logic. The Wason task became a prototypical experimental paradigm. One might have written a paper with the title &#x0201C;The human reasoner as an intuitive logician.&#x0201D; This changed from the middle of the 1990s, when probability entered the scene of human reasoning research. In 1993 <italic>Cognition</italic> published a special issue on the interaction between reasoning and decision making (Johnson-Laird and Shafir, <xref ref-type="bibr" rid="B38">1993</xref>) with contributions by, among others, Johnson-Laird, Tversky, and Evans. Shortly afterwards Oaksford and Chater (<xref ref-type="bibr" rid="B57">1995</xref>) proposed to model the Wason task in terms of probabilistic information seeking. In the same year Over investigated the suppression task in terms of probabilities (Stevenson and Over, <xref ref-type="bibr" rid="B73">1995</xref>). Before that time, reasoning research was done exclusively against the background of logical benchmarks, while judgment under uncertainty was investigated against the background of probabilistic and decision-theoretic benchmarks. Reasoning research investigated the human understanding of material implications (as in the Wason task), propositional inference rules (like the <sc>modus ponens</sc>), inferences with quantifiers (like syllogisms), and the validity of inference forms. The <sc>modus ponens</sc>, for example, was not cast into a probabilistic format (except by George Boole more than 100 years earlier). 
The judgment under uncertainty community investigated updating probabilities via Bayes&#x00027; theorem, calibration, and later on the heuristics and biases. Logicians had already started probability logic and default reasoning in the 1960s (Adams, <xref ref-type="bibr" rid="B1">1965</xref>, <xref ref-type="bibr" rid="B2">1966</xref>; Suppes, <xref ref-type="bibr" rid="B74">1966</xref>)<xref ref-type="fn" rid="fn0001"><sup>1</sup></xref>.</p>
<p>In judgment under uncertainty, logical rules like the <sc>modus ponens</sc> or the <sc>modus tollens</sc> were not investigated. Inference forms of classical logic could not directly be cast into a probabilistic format. First, there was the problem of conditionals. In classical logic a conditional is a material implication. In probability logic the conditional is a conditional event to which a conditional probability may be assigned. Conditional events, however, are outside of classical logic. Second, probabilistic inference is not &#x0201C;truth-functional&#x0201D; in a way that is analogous to classical logic. In classical logic the truth values of the premises determine the truth value of the conclusion. If <italic>A</italic> is true and <italic>A</italic> &#x02192; <italic>B</italic> is true, then <italic>B</italic> is true. In probability theory the probabilities of the premises of a <sc>modus ponens</sc> do not exactly determine the probability of its conclusion; the premises only constrain the probability of the conclusion by lower and upper probabilities. If <italic>P</italic>(<italic>A</italic>) &#x0003D; <italic>x</italic> and <italic>P</italic>(<italic>B</italic>|<italic>A</italic>) &#x0003D; <italic>y</italic>, then <italic>xy</italic> &#x02264; <italic>P</italic>(<italic>B</italic>) &#x02264; 1&#x02212;<italic>x</italic> &#x0002B; <italic>xy</italic>. Research on mental probability logic and the new (probabilistic) paradigm after the middle of the 1990s might have been published under the title &#x0201C;The human reasoner as an intuitive probabilist.&#x0201D; At conferences one could follow discussions on questions like &#x0201C;should binary truth values be basic ingredients in models of human reasoning?&#x0201D;</p>
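The bounds on the conclusion of a probabilistic modus ponens are easy to compute. A minimal sketch (the function name `modus_ponens_bounds` is my own, not from the article):

```python
# Coherent bounds for the MODUS PONENS conclusion, as stated in the text:
# if P(A) = x and P(B|A) = y, then x*y <= P(B) <= 1 - x + x*y.
# Illustrative sketch only; the function name is not from the article.

def modus_ponens_bounds(x, y):
    """Return the lower and upper coherent probability for P(B)."""
    return x * y, 1 - x + x * y

lo, hi = modus_ponens_bounds(0.9, 0.8)  # the interval [0.72, 0.82]
```

Note that the interval collapses to a point only when x = 1: with a certain categorical premise, P(B) = y, recovering the classical modus ponens as a limiting case.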
<p>No doubt, the adoption of probability extended and enriched the research on human reasoning. However, probability combined with some logic is still insufficient to model reasoning and decision making in a complex and uncertain environment. The reasoner as an &#x0201C;intuitive statistician&#x0201D; is missing. The intuitive statistician is required when it comes to learning, to prediction, and to decision making. A typical problem that cannot be handled in elementary probability logic but that can conveniently be handled in statistics is <italic>distributional precision</italic>. By distributional precision I mean the spread and dispersion of a continuous distribution around a favored value. Mental probability logic assumes precise point probabilities or probability intervals where the lower and upper bounds are again precise. Representing imprecise uncertainties by distributions opens the door to an interface with frequencies observed in the outside world. We will borrow the tool of beta distributions from Bayesian statistics. Their use in psychological modeling has the advantage of making it possible to update beliefs in the light of new evidence and observed frequencies. &#x0201C;&#x02026;the true power of a probabilistic representation is its ability not only to deal with <italic>imprecise</italic> probability assessments, but to welcome them as providing a natural basis for the system to improve with experience&#x0201D; (Spiegelhalter et al., <xref ref-type="bibr" rid="B70">1990</xref>, p. 285). In Pfeifer and Kleiter (<xref ref-type="bibr" rid="B62">2006a</xref>) we used mixtures of beta distributions to model inferences with imprecise probabilities.</p>
<p>The present paper proposes first steps toward a mental probability logic based on distributions. It employs second-order probability distributions and some more recent concepts of modeling probabilistic dependence by copulas and vines. Human reasoners and decision makers should be seen as a combination of intuitive logicians, of intuitive probabilists, and of intuitive statisticians. All three levels should be addressed in the basic research questions, in the experimental paradigms, and in the normative models.</p>
<p>Imprecision may be expressed by various distributions. One option, for example, is the family of log-normal distributions. We made a different choice and decided on beta distributions, a family of distributions that seems to be simpler and more flexible than the log-normal. So let us, at the outset, give a short characterization of the beta family.</p>
</sec>
<sec>
<title>1.2. Beta distribution</title>
<p>Throughout the contribution we will express imprecise probabilities by beta distributions. Beta distributions form a rich and flexible family of probability density functions (Johnson and Kotz, <xref ref-type="bibr" rid="B37">1970</xref>; Gupta and Nadarajah, <xref ref-type="bibr" rid="B30">2004</xref>). An uncertain quantity <italic>X</italic> is (standard) beta distributed in the interval [0, 1] with shape parameters <italic>&#x003B1;</italic> and <italic>&#x003B2;</italic> if</p>
<disp-formula id="E1"><label>(1)</label><mml:math id="M1"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:mi>p</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mtext>&#x00393;</mml:mtext><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>&#x003B1;</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mtext>&#x00393;</mml:mtext><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>&#x003B1;</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mtext>&#x00393;</mml:mtext><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:mfrac><mml:msup><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003B1;</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:msup><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>-</mml:mo><mml:mi>x</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>&#x003B2;</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mo>,</mml:mo><mml:mtext>&#x000A0;</mml:mtext><mml:mn>0</mml:mn><mml:mo>&#x02264;</mml:mo><mml:mi>x</mml:mi><mml:mo>&#x02264;</mml:mo><mml:mn>1</mml:mn><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>For integer values the ratio of gamma functions simplifies to (<italic>&#x003B1;</italic> &#x0002B; <italic>&#x003B2;</italic> &#x02212; 1)!/[(<italic>&#x003B1;</italic> &#x02212; 1)!(<italic>&#x003B2;</italic> &#x02212; 1)!]. We write for short <italic>X</italic> &#x0007E; Be(<italic>&#x003B1;</italic>, <italic>&#x003B2;</italic>). The mean and the variance of the distribution are</p>
<disp-formula id="E2"><label>(2)</label><mml:math id="M2"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:mtext class="textrm" mathvariant="normal">E</mml:mtext><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>X</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>&#x003B1;</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003B1;</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mi>&#x003B2;</mml:mi></mml:mrow></mml:mfrac><mml:mtext>&#x000A0;and&#x000A0;</mml:mtext><mml:mtext class="textrm" mathvariant="normal">Var</mml:mtext><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>X</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>&#x003B1;</mml:mi><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>&#x003B1;</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mi>&#x003B2;</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>&#x003B1;</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mi>&#x003B2;</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:mfrac><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>In the present context the random variable <italic>X</italic> is a first-order probability and <italic>p</italic>(<italic>X</italic>) is a second-order probability density function. In Bayesian statistics the shape parameters <italic>&#x003B1;</italic> and <italic>&#x003B2;</italic> are related to the frequencies of success and failure. <italic>&#x003B1;</italic> and <italic>&#x003B2;</italic> may be interpreted as weights of evidence, the pros and contras for a binary event, or as real or hypothetical sample sizes. <italic>Be</italic>(1, 1) is the uniform distribution. If <italic>&#x003B1;</italic> &#x0003E; 1 and <italic>&#x003B2;</italic> &#x0003E; 1 the distribution is uni-modal, if either <italic>&#x003B1;</italic> &#x0003C; 1 or <italic>&#x003B2;</italic> &#x0003C; 1 it is J-shaped, and if <italic>&#x003B1;</italic> &#x0003C; 1 and <italic>&#x003B2;</italic> &#x0003C; 1 it is U-shaped. Figure <xref ref-type="fig" rid="F1">1</xref> shows uni-modal examples.</p>
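Equation (2) can be checked numerically. A small sketch (the helper name `beta_mean_var` is illustrative, not from the article):

```python
# Mean and variance of X ~ Be(alpha, beta) from Equation (2).
# Illustrative helper; not part of the article's own code.

def beta_mean_var(alpha, beta):
    mean = alpha / (alpha + beta)
    var = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))
    return mean, var

# Be(1, 1) is the uniform distribution on [0, 1]: mean 1/2, variance 1/12.
m_uniform, v_uniform = beta_mean_var(1, 1)

# Be(7, 7), the fit for "about as likely as not" in Table 1,
# has mean 0.5 and a standard deviation of about 0.13.
m_even, v_even = beta_mean_var(7, 7)
```

The Be(7, 7) standard deviation of roughly 13% matches the value reported for "about as likely as not" in Row C of Table 1.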
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p>Beta distributions for the verbal phrases of the Lichtenstein and Newman data in Table <xref ref-type="table" rid="T1">1</xref>. From <bold>(left)</bold> to <bold>(right)</bold>: Very unlikely, unlikely, about as likely as not, likely, very likely.</p></caption>
<graphic xlink:href="fpsyg-09-02051-g0001.tif"/>
</fig>
<p>While beta distributions do not arise exclusively in Bayesian statistics, Bayesian statistics is the field in which they are most prominent. For the assessment of subjective probability distributions, Sta&#x000EB;l von Holstein proposed, in 1970 and earlier, to fit beta distributions to quantiles (Sta&#x000EB;l von Holstein, <xref ref-type="bibr" rid="B72">1970</xref>; Kleiter, <xref ref-type="bibr" rid="B40">1981</xref>). Thomas Bayes was actually the pioneer of beta distributions in his investigation of an uncertain probability (Bayes, <xref ref-type="bibr" rid="B7">1958</xref>).</p>
<p>The next section gives a motivating example of the application of beta distributions. Imprecision is contained in the verbal uncertainty phrases we use in everyday conversation and beta distributions may be used to represent the imprecision in a mathematical form.</p>
</sec>
<sec>
<title>1.3. Verbal uncertainty phrases</title>
<p>Practically all human probability judgments are imprecise. Take the following phrases in everyday communication: &#x0201C;very probably,&#x0201D; &#x0201C;pretty sure,&#x0201D; &#x0201C;highly unlikely,&#x0201D; and so on. Verbal phrases are not only used to express degrees of belief in everyday conversation, they are also used to communicate expert knowledge, for example in geopolitical forecasting (Friedman et al., <xref ref-type="bibr" rid="B20">2018</xref>) or in climate research. The United States Government&#x00027;s Climate Science Special Report (Wuebbles et al., <xref ref-type="bibr" rid="B85">2017</xref>) presents a list of Key Findings. In the Climate Report each Key Finding is weighted by a verbal phrase for its likelihood. The &#x0201C;semantics&#x0201D; given to each of the phrases are shown in Table <xref ref-type="table" rid="T1">1</xref>.</p>
<disp-quote><p>&#x0201C;The frequency and intensity of extreme heat and heavy precipitation events are increasing in most continental regions of the world (<italic>very high confidence</italic>). These trends are consistent with expected physical responses to a warming climate. Climate model studies are also consistent with these trends, although models tend to underestimate the observed trends, especially for the increase in extreme precipitation events (<italic>very high confidence</italic> for temperature, <italic>high confidence</italic> for extreme precipitation). The frequency and intensity of extreme high temperature events are virtually certain to increase in the future as global temperature increases (<italic>high confidence</italic>). Extreme precipitation events will very likely continue to increase in frequency and intensity throughout most of the world (<italic>high confidence</italic>). Observed and projected trends for some other types of extreme events, such as floods, droughts, and severe storms, have more variable regional characteristics&#x0201D; Wuebbles et al. (<xref ref-type="bibr" rid="B85">2017</xref>, p. 35).</p></disp-quote>
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p>Verbal uncertainty phrases (Row A) and their numerical interpretation (Row B) as used in the US Government&#x00027;s climate report [Wuebbles et al. (<xref ref-type="bibr" rid="B85">2017</xref>, p. 35)].</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>A</bold></th>
<th valign="top" align="center"><bold>Exceptionally</bold></th>
<th valign="top" align="center"><bold>Extremely</bold></th>
<th valign="top" align="center"><bold>Very</bold></th>
<th valign="top" align="center"><bold>Unlikely</bold></th>
<th valign="top" align="center"><bold>About as </bold></th>
<th valign="top" align="center"><bold>Likely</bold></th>
<th valign="top" align="center"><bold>Very</bold></th>
<th valign="top" align="center"><bold>Extremely</bold></th>
<th valign="top" align="center"><bold>Virtually</bold></th>
</tr>
<tr>
<th/>
<th valign="top" align="center"><bold>unlikely</bold></th>
<th valign="top" align="center"><bold>unlikely</bold></th>
<th valign="top" align="center"><bold>unlikely</bold></th>
<th/>
<th valign="top" align="center"><bold>likely as not</bold></th>
<th/>
<th valign="top" align="center"><bold>likely</bold></th>
<th valign="top" align="center"><bold>likely</bold></th>
<th valign="top" align="center"><bold>certain</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">B</td>
<td valign="top" align="center">0&#x02013;1%</td>
<td valign="top" align="center">0&#x02013;5%</td>
<td valign="top" align="center">0&#x02013;10%</td>
<td valign="top" align="center">0&#x02013;33%</td>
<td valign="top" align="center">33&#x02013;66%</td>
<td valign="top" align="center">66&#x02013;100%</td>
<td valign="top" align="center">90&#x02013;100%</td>
<td valign="top" align="center">95&#x02013;100%</td>
<td valign="top" align="center">99&#x02013;100%</td>
</tr>
<tr>
<td valign="top" align="left">C</td>
<td/>
<td/>
<td valign="top" align="center">10% (7%)</td>
<td valign="top" align="center">16% (10%)</td>
<td valign="top" align="center">50% (13%)</td>
<td valign="top" align="center">75% (11%)</td>
<td valign="top" align="center">90% (4%)</td>
<td/>
<td/>
</tr>
<tr>
<td valign="top" align="left">D</td>
<td/>
<td/>
<td valign="top" align="center">Be(6, 25)</td>
<td valign="top" align="center">Be(5,14)</td>
<td valign="top" align="center">Be(7,7)</td>
<td valign="top" align="center">Be(12,6)</td>
<td valign="top" align="center">Be(66, 12)</td>
<td/>
<td/>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Row C, Medians and standard deviations (in parentheses) of the interpretation of the same verbal phrases as in Row A by 180 persons in the study of Lichtenstein and Newman (<xref ref-type="bibr" rid="B54">1967</xref>). Row D, Shape parameters of the fitted beta distributions shown in Figure <xref ref-type="fig" rid="F1">1</xref></italic>.</p>
</table-wrap-foot>
</table-wrap>
<p>One of the first empirical studies on the interpretation of verbal uncertainty phrases in terms of numerical probabilities was performed by Lichtenstein and Newman (<xref ref-type="bibr" rid="B54">1967</xref>). Table <xref ref-type="table" rid="T1">1</xref> shows the medians and standard deviations of the distributions of the responses of 180 persons. We represent the verbal uncertainty phrases by beta distributions. Figure <xref ref-type="fig" rid="F1">1</xref> shows the beta distributions fitted to the medians and standard deviations of the data.</p>
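A simple way to obtain shape parameters from such summary statistics is the method of moments. The sketch below is my own illustration, and it treats the reported median as if it were the mean, so the resulting parameters only roughly correspond to Row D of Table 1:

```python
# Method-of-moments fit of Be(alpha, beta) to a location and a standard
# deviation. Assumption (mine, for illustration): the reported median is
# used as if it were the mean, so results only approximate Row D of Table 1.

def fit_beta(mean, sd):
    v = sd ** 2
    k = mean * (1 - mean) / v - 1  # valid only if v < mean * (1 - mean)
    return mean * k, (1 - mean) * k

# "Very likely" in the Lichtenstein and Newman data: 90% (4%).
alpha, beta = fit_beta(0.90, 0.04)
```

The fit is exact in the sense that the returned parameters reproduce the given mean and standard deviation under Equation (2).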
<p>There are two different directions in which imprecise uncertainty can be modeled, by down-shifting or by up-shifting. Down-shifting relaxes the precision of the description and works with qualitative or comparative probabilities. Baratgin et al. (<xref ref-type="bibr" rid="B5">2013</xref>), for example, investigated human reasoning in terms of qualitative probabilities. Up-shifting refines the level of the description on a meta-level. Describing imprecise uncertainty by distributions, as proposed in the present contribution, is an example of up-shifting.</p>
<p>The elementary theorems of probability theory propagate precise probabilities of the premises to precise probabilities of the conclusions. If, for example, <italic>A</italic> and <italic>B</italic> are two probabilistically independent events and <italic>P</italic>(<italic>A</italic>) &#x0003D; <italic>x</italic> and <italic>P</italic>(<italic>B</italic>) &#x0003D; <italic>y</italic>, then <italic>P</italic>(<italic>A</italic>&#x02227;<italic>B</italic>) &#x0003D; <italic>z</italic> &#x0003D; <italic>x</italic>&#x000B7;<italic>y</italic>. If probabilities are introduced in elementary <italic>logical</italic> operators or theorems, however, precise probabilities of the premises propagate to <italic>imprecise</italic> probabilities of the conclusions. If the two events <italic>A</italic> and <italic>B</italic> are not probabilistically independent, then the probability of <italic>A</italic>&#x02227;<italic>B</italic> is an interval probability, <italic>P</italic>(<italic>A</italic>&#x02227;<italic>B</italic>) &#x0003D; <italic>z</italic>&#x02208;[max{0, <italic>x</italic>&#x0002B;<italic>y</italic>&#x02212;1}, min{<italic>x, y</italic>}].</p>
<p>The theory of imprecise probabilities (Walley, <xref ref-type="bibr" rid="B79">1991</xref>; Augustin et al., <xref ref-type="bibr" rid="B4">2014</xref>) expresses imprecision by lower and upper probabilities, i.e., by <italic>interval probabilities</italic>. For psychological modeling, however, interval probabilities have several disadvantages. The iteration of conditional interval probabilities leads to theoretically complex solutions (Gilio and Sanfilippo, <xref ref-type="bibr" rid="B28">2013</xref>). Moreover, empirically checking the endorsement of inferences may become too permissive because the responses of the participants may fall into very wide intervals. Another, more fundamental theoretical difficulty is the question of how to base decisions on probability intervals. This problem was raised especially by Smets (<xref ref-type="bibr" rid="B68">1990</xref>) (for a review see Cuzzolin, <xref ref-type="bibr" rid="B11">2012</xref>). Smets distinguished <italic>credal</italic> and <italic>pignistic</italic> degrees of belief, the first one for contemplation and the second one for action. We will tackle the question below and propose a new criterion, the maximum probability of being coherent. But let us first turn to the question of how to incorporate and propagate distributions in the framework of basic logical operators.</p>
</sec>
</sec>

<sec id="s2">
<title>2. Propagating imprecision in logical inference forms</title>
<sec>
<title>2.1. Elementary logical operators</title>
<p>If our knowledge about the probability of an event <italic>A</italic> is represented by the beta distribution <italic>P</italic>(<italic>A</italic>) &#x0007E; <italic>Be</italic>(<italic>&#x003B1;</italic>, <italic>&#x003B2;</italic>), then our knowledge about its negation &#x000AC;<italic>A</italic> should be expressed by <italic>P</italic>(&#x000AC;<italic>A</italic>) &#x0007E; <italic>Be</italic>(<italic>&#x003B2;</italic>, <italic>&#x003B1;</italic>). The parameters <italic>&#x003B1;</italic> and <italic>&#x003B2;</italic> just switch positions.</p>
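The parameter switch can be verified directly from the density in Equation (1). A small pure-Python sketch (the helper `beta_pdf` is my own, illustrative implementation):

```python
import math

# Standard beta density from Equation (1); illustrative helper only.
def beta_pdf(x, a, b):
    c = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return c * x ** (a - 1) * (1 - x) ** (b - 1)

# If P(A) ~ Be(a, b), then P(not-A) = 1 - P(A) ~ Be(b, a):
# the density of A at x equals the density of its negation at 1 - x.
a, b, x = 6, 25, 0.2
density_event = beta_pdf(x, a, b)
density_negation = beta_pdf(1 - x, b, a)
```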
<p>In many investigations (see for example Kleiter et al., <xref ref-type="bibr" rid="B46">2002</xref>) it was observed that probability assessments of <italic>A</italic> and &#x000AC;<italic>A</italic> do not add up to 1. If the participants of an experiment assess the probability of <italic>A</italic> and after a while give an assessment of &#x000AC;<italic>A</italic>, then usually <italic>P</italic>(<italic>A</italic>)&#x0002B;<italic>P</italic>(&#x000AC;<italic>A</italic>) &#x02260; 1.0. Probability judgments of &#x0201C;Is New York north of Rome?&#x0201D; and &#x0201C;Is Rome north of New York?&#x0201D; may easily lead to superadditivity, <italic>P</italic><sub>1</sub> &#x0002B; <italic>P</italic><sub>2</sub> &#x0003E; 1. Deviations from 1.0 may be systematic or random. Poor experimental conditions contribute to low reliability and next-best judgments. Erev et al. (<xref ref-type="bibr" rid="B18">1994</xref>) have shown that low reliability of probability judgments may lead to overconfidence and hyper-precision.</p>
<p>Let us next consider logical conjunction. For precise probabilities of the premises we have</p>
<disp-formula id="E3"><label>(3)</label><mml:math id="M3"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mtext class="textrm" mathvariant="normal">If&#x000A0;</mml:mtext><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>A</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>x</mml:mi><mml:mtext>&#x000A0;</mml:mtext><mml:mtext class="textrm" mathvariant="normal">and&#x000A0;</mml:mtext><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>B</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>y</mml:mi><mml:mo>,</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mtext class="textrm" mathvariant="normal">then&#x000A0;</mml:mtext><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>A</mml:mi><mml:mo>&#x02227;</mml:mo><mml:mi>B</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>z</mml:mi><mml:mo>&#x02208;</mml:mo><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mo class="qopname">max</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mi>x</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mi>y</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo>}</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mo class="qopname">min</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:mi>y</mml:mi></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>The lower and the upper bounds are known as the two Fr&#x000E9;chet-Hoeffding copulas (Nelsen, <xref ref-type="bibr" rid="B56">2006</xref>). Any probability assessment <italic>z</italic> in the interval is <italic>coherent</italic>. A probability assessment is coherent if it does not lead to a Dutch book (losing for sure). The top left panel in Figure <xref ref-type="fig" rid="F2">2</xref> shows lines for equal lower (upper) probabilities as functions of the marginals <italic>P</italic>(<italic>A</italic>) and <italic>P</italic>(<italic>B</italic>). At (0.8, 0.6) the probabilities &#x0201C;project&#x0201D; to the interval [0.4, 0.6].</p>
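The projection just mentioned is Equation (3) evaluated at (0.8, 0.6). A one-function sketch (the function name is mine, not from the article):

```python
# Frechet-Hoeffding bounds for P(A and B) from Equation (3).
# Illustrative sketch; the function name is not from the article.

def conjunction_bounds(x, y):
    return max(0.0, x + y - 1), min(x, y)

lo, hi = conjunction_bounds(0.8, 0.6)  # the interval [0.4, 0.6] of Figure 2
```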
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p>Lower and upper probabilities for the conjunction, the disjunction, the conditional with <italic>P</italic>(<italic>A</italic>) &#x0003D; <italic>X</italic> and <italic>P</italic>(<italic>B</italic>) &#x0003D; <italic>Y</italic>, and the <sc>modus ponens</sc> with <italic>P</italic>(<italic>A</italic>) &#x0003D; <italic>X</italic> and <italic>P</italic>(<italic>B</italic>|<italic>A</italic>) &#x0003D; <italic>Y</italic>. Numerical example for <italic>x</italic> &#x0003D; 0.8 and <italic>y</italic> &#x0003D; 0.6 (for the <sc>MODUS PONENS</sc> slightly above 0.6). The yellow shadowed areas indicate the projections to the intervals [0.4, 0.6], [0.8, 1], [0.5, 0.7], and [0.5, 0.7].</p></caption>
<graphic xlink:href="fpsyg-09-02051-g0002.tif"/>
</fig>
<p>Next we replace the precise probabilities <italic>x</italic> and <italic>y</italic> by the two random variables <italic>X</italic> and <italic>Y</italic>, where <italic>X</italic> &#x0007E; <italic>Be</italic>(<italic>&#x003B1;</italic><sub>1</sub>, <italic>&#x003B2;</italic><sub>1</sub>) and <italic>Y</italic> &#x0007E; <italic>Be</italic>(<italic>&#x003B1;</italic><sub>2</sub>, <italic>&#x003B2;</italic><sub>2</sub>). Moreover, we specify the kind and the degree of dependence between <italic>X</italic> and <italic>Y</italic> by a copula <italic>C</italic>(<italic>x, y</italic>). To keep the contribution as simple as possible we will use Gaussian copulas, that is, Pearson&#x00027;s correlations. The coefficients will be denoted by <italic>&#x003C1;</italic>. There are many other copulas (Nelsen, <xref ref-type="bibr" rid="B56">2006</xref>). The two marginal distributions of <italic>X</italic> and <italic>Y</italic>, together with the copula <italic>C</italic>(<italic>x, y</italic>), determine the joint distribution with the densities <italic>p</italic>(<italic>x, y</italic>) on the unit square [0, 1]<sup>2</sup>. The bivariate Gaussian copula with the correlation coefficient <italic>&#x003C1;</italic> is given by</p>
<disp-formula id="E5"><label>(4)</label><mml:math id="M5"><mml:mtable columnalign='left'><mml:mtr><mml:mtd><mml:mi>C</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>u</mml:mi><mml:mo>,</mml:mo><mml:mi>v</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:msub><mml:mi>N</mml:mi><mml:mi>&#x003C1;</mml:mi></mml:msub><mml:mo stretchy='false'>(</mml:mo><mml:msup><mml:mi>&#x003A6;</mml:mi><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mo stretchy='false'>(</mml:mo><mml:mi>u</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>,</mml:mo><mml:msup><mml:mi>&#x003A6;</mml:mi><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mo stretchy='false'>(</mml:mo><mml:mi>v</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='false'>)</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mtext>&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;</mml:mtext><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mn>2</mml:mn><mml:mi>&#x003C0;</mml:mi><mml:msqrt><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x02212;</mml:mo><mml:msup><mml:mi>&#x003C1;</mml:mi><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:msqrt></mml:mrow></mml:mfrac><mml:mstyle displaystyle='true'><mml:mrow><mml:msubsup><mml:mo>&#x0222B;</mml:mo><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mi>&#x0221E;</mml:mi></mml:mrow><mml:mrow><mml:msup><mml:mi>&#x003A6;</mml:mi><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mo stretchy='false'>(</mml:mo><mml:mi>u</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:msubsup><mml:mtext>&#x0200B;</mml:mtext></mml:mrow></mml:mstyle><mml:mtext>&#x0200B;</mml:mtext><mml:mstyle 
displaystyle='true'><mml:mrow><mml:msubsup><mml:mo>&#x0222B;</mml:mo><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mi>&#x0221E;</mml:mi></mml:mrow><mml:mrow><mml:msup><mml:mi>&#x003A6;</mml:mi><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mo stretchy='false'>(</mml:mo><mml:mi>v</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:msubsup><mml:mtext>&#x0200B;</mml:mtext></mml:mrow></mml:mstyle><mml:mtext>&#x0200B;</mml:mtext><mml:mi>e</mml:mi><mml:mi>x</mml:mi><mml:mi>p</mml:mi><mml:mrow><mml:mo>[</mml:mo> <mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mn>2</mml:mn></mml:mfrac><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:msup><mml:mi>s</mml:mi><mml:mn>2</mml:mn></mml:msup><mml:mo>&#x02212;</mml:mo><mml:mn>2</mml:mn><mml:mi>&#x003C1;</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:msup><mml:mi>t</mml:mi><mml:mn>2</mml:mn></mml:msup></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x02212;</mml:mo><mml:msup><mml:mi>&#x003C1;</mml:mi><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:mfrac></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow> <mml:mo>]</mml:mo></mml:mrow><mml:mi>d</mml:mi><mml:mi>s</mml:mi><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>with <inline-formula><mml:math id="M6"><mml:mi>s</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>u</mml:mi><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003BC;</mml:mi></mml:mrow><mml:mrow><mml:mi>u</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003C3;</mml:mi></mml:mrow><mml:mrow><mml:mi>u</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:math></inline-formula> and <inline-formula><mml:math id="M7"><mml:mi>t</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>v</mml:mi><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003BC;</mml:mi></mml:mrow><mml:mrow><mml:mi>v</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003C3;</mml:mi></mml:mrow><mml:mrow><mml:mi>v</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:math></inline-formula>, where &#x003A6;<sup>&#x02212;1</sup> denotes the inverse of the univariate standard normal distribution function.</p>
<p>The unit square is analogous to the 2 &#x000D7; 2 truth table in classical logic. While a truth table has only the two values 0 and 1 on its margins, the unit square has the real numbers between 0 and 1 along its two margins. In logic an operator maps the entries of the 2 &#x000D7; 2 table into {0, 1}. In the distributional approach an operator maps the densities on the unit square to densities on the [0, 1] interval. Two-place operators require two mappings, one for the lower bound and one for the upper bound.</p>
<p>Each fixed value of the lower probability in (3) determines a contour line in the joint distribution on the unit square. Collecting the densities along such a contour line gives the probability density for that value of the lower probability, and the same holds for the upper probability. So we obtain two distributions, one for the lower and one for the upper probabilities. In most cases these steps cannot be performed analytically in closed form. We use a stochastic simulation method implemented in the VineCopula package (Mai and Scherer, <xref ref-type="bibr" rid="B55">2012</xref>; Schepsmeier et al., <xref ref-type="bibr" rid="B67">2018</xref>) of the statistical software R (R Development Core Team, <xref ref-type="bibr" rid="B64">2016</xref>). The R code of the program for the analysis of the four inference forms discussed below is contained in the <xref ref-type="supplementary-material" rid="SM1">Supplementary Material</xref>.</p>
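<p>The copula-based sampling step can be sketched as follows. This is an illustrative Python re-implementation, not the article&#x00027;s R/VineCopula code; the function name and the NumPy/SciPy calls are the author of this sketch&#x00027;s own:</p>

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def gaussian_copula_sample(n, rho, rng):
    """Sample n pairs (u, v) on the unit square whose dependence
    is a Gaussian copula with correlation rho."""
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    return stats.norm.cdf(z)  # componentwise probability integral transform

# First-order probabilities P(A) = X ~ Be(30, 3) and P(B) = Y ~ Be(20, 20),
# coupled by a Gaussian copula with rho = 0.5 (the example of Figure 3).
n = 10_000
u, v = gaussian_copula_sample(n, 0.5, rng).T
x = stats.beta.ppf(u, 30, 3)   # quantile transform to the beta marginals
y = stats.beta.ppf(v, 20, 20)

# Frechet-Hoeffding bounds of the conjunction P(A and B) for each pair;
# histograms of `lower` and `upper` approximate the two distributions.
lower = np.maximum(0.0, x + y - 1.0)
upper = np.minimum(x, y)
```

<p>Histograms of <monospace>lower</monospace> and <monospace>upper</monospace> should approximate the shapes of the corresponding panels of Figure 3.</p>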
<p>We applied the stochastic simulation method to the conjunction, the disjunction, the conditional event interpretation of the conditional (if <italic>A</italic>, then <italic>B</italic> means <italic>B</italic>|<italic>A</italic>), and the exclusive disjunction. Figure <xref ref-type="fig" rid="F3">3</xref> shows a numerical example for each of the four operators. The distributions of the probabilities of <italic>X</italic> &#x0007E; <italic>Be</italic>(30, 3) and of <italic>Y</italic> &#x0007E; <italic>Be</italic>(20, 20) are plotted in the left panel of the top row. The two first-order probabilities are correlated with the Gaussian copula <italic>&#x003C1;</italic> &#x0003D; 0.5. The scatter diagram shows a simulation of 10,000 points of the joint distribution on the unit square.</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p>Basic logical operators. <bold>(Top row, Left)</bold> Premises <italic>P</italic>(<italic>A</italic>) &#x0007E; <italic>Be</italic>(30, 3) and <italic>P</italic>(<italic>B</italic>) &#x0007E; <italic>Be</italic>(20, 20). <bold>(Right)</bold> Scatter diagram of the joint distribution with Gaussian copula <italic>&#x003C1;</italic> &#x0003D; 0.5. <bold>(Middle and Bottom row)</bold> Histograms of the lower and upper probabilities for the <sc>and</sc>, <sc>or</sc>, <sc>if-then</sc>, and <sc>xor</sc> operators, together with bold lines showing the probability of being coherent. The upper probability of the disjunction is degenerate at 1.</p></caption>
<graphic xlink:href="fpsyg-09-02051-g0003.tif"/>
</fig>
<p>The histograms in the four panels show the relative frequencies of the lower and upper bounds resulting from the simulations. The continuous distributions approximate the probability density of being coherent. This is a meta-criterion. It corresponds to the probability that the value of a first-order probability assessment falls into the coherent interval between the two Fr&#x000E9;chet-Hoeffding bounds. The concept will be explained below.</p>
<p>Considering correlations between probabilities may require a short comment. Probabilities may provide information about other probabilities. Take as an example co-morbidity in age-related diseases. Diabetes, Parkinson&#x00027;s, and Alzheimer&#x00027;s disease often come together (Bellantuono, <xref ref-type="bibr" rid="B8">2018</xref>). If we are 90% sure that an elderly person gets diabetes, we infer that the probability that the person gets Parkinson&#x00027;s disease rises above average. The probabilities of having the two diseases are correlated. Risks may be correlated as well. Assume the father of a male person suffers from prostate cancer. Knowing that the probability of having inherited some of the critical genes is high increases the risk that the person will get prostate cancer.</p>
<p>Figure <xref ref-type="fig" rid="F3">3</xref> shows a stunning result: the conjunction and the conditional (with the conditional event interpretation) lead to nearly the same results<xref ref-type="fn" rid="fn0002"><sup>2</sup></xref>. It will not be possible to distinguish the two operators empirically in a psychological study. For a speaker who expresses imprecise uncertainties the <italic>if-then</italic> and the <italic>and</italic> have practically the same &#x0201C;meaning.&#x0201D; This throws new light on the conjunctive interpretation of conditionals. In Fugard et al. (<xref ref-type="bibr" rid="B23">2011</xref>) and Kleiter et al. (<xref ref-type="bibr" rid="B47">2018</xref>) we observed that about twenty percent of the participants give conjunctive interpretations of the conditional. We also observed a higher frequency of conjunctive interpretations in female participants. In real-life communication, where most content is uncertain and the uncertainty is imprecise, this may not make a practical difference. We will come back to this question below, after we have introduced the distribution of being coherent.</p>
<p>Figure <xref ref-type="fig" rid="F4">4</xref> shows the results for an example with rectangular distributions. It assumes rectangular distributions of <italic>X</italic> and <italic>Y</italic> on the intervals <italic>Re</italic>[<italic>l</italic><sub>1</sub>, <italic>u</italic><sub>1</sub>] and <italic>Re</italic>[<italic>l</italic><sub>2</sub>, <italic>u</italic><sub>2</sub>]. Again, the conjunction and the conditional are so similar that they cannot be distinguished empirically.</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p>Logical operators applied to rectangular distributions <italic>Re</italic>(0.60, 0.90) and <italic>Re</italic>(0.10, 0.30) and <italic>&#x003C1;</italic> &#x0003D; 0.7. The modes of the four probability-of-coherence distributions are 0.101, 0.901, 0.157, and 0.701, respectively.</p></caption>
<graphic xlink:href="fpsyg-09-02051-g0004.tif"/>
</fig>
<p>Before we proceed with a discussion of the conjunction fallacy we introduce the concept of the probability of being coherent. The conjunction fallacy focuses on errors. The probability of being coherent focuses on coherent probability assessments.</p>
</sec>
<sec>
<title>2.2. The probability of being coherent</title>
<p>Probabilistic inferences that mimic logical inferences lead from a set of precise coherent probabilities of the premises to coherent interval probabilities of the conclusion. Coherence means not allowing a Dutch book, i.e., a bet where you lose for sure<xref ref-type="fn" rid="fn0003"><sup>3</sup></xref>. Denote the inferred interval by [<italic>w, m</italic>]. All values between <italic>w</italic> and <italic>m</italic> are coherent.</p>
<p>In the present approach <italic>w</italic> and <italic>m</italic> are realizations of random variables. The probability for an assessment <italic>z</italic> to be coherent is equal to the probability that <italic>z</italic> is greater than <italic>w</italic> and less than <italic>m</italic>, i.e., <italic>p</italic>(<italic>z</italic>&#x02208;[<italic>w, m</italic>]). The distribution cannot be obtained in closed form. Numerical results are determined by stochastic simulation. Consider for example the conjunction of <italic>A</italic> and <italic>B</italic> with <italic>P</italic>(<italic>A</italic>) &#x0003D; <italic>X</italic> &#x0007E; <italic>Be</italic>(<italic>&#x003B1;</italic><sub>1</sub>, <italic>&#x003B2;</italic><sub>1</sub>), <italic>P</italic>(<italic>B</italic>) &#x0003D; <italic>Y</italic> &#x0007E; <italic>Be</italic>(<italic>&#x003B1;</italic><sub>2</sub>, <italic>&#x003B2;</italic><sub>2</sub>), and the copula <italic>C</italic>(<italic>x, y</italic>). We perform the following steps:
<list list-type="order">
<list-item><p>Discretize the real numbers between 0 and 1 into <italic>n</italic> steps; here we rescale the [0, 1] interval to the grid 0, 1, &#x02026;, 1000 (<italic>n</italic> &#x0003D; 1000).</p></list-item>
<list-item><p>Initialize an array <italic>f</italic>[0], <italic>f</italic>[1], &#x02026;, <italic>f</italic>[<italic>n</italic>] of length <italic>n</italic>&#x0002B;1 with all values equal to 0. The array will collect frequency counts.</p></list-item>
<list-item><p>Sample two random probabilities <italic>x</italic> and <italic>y</italic> from the two beta distributions of <italic>A</italic> and <italic>B</italic>; for doing this use the copula <italic>C</italic>(<italic>x, y</italic>). Independence is a special case.</p></list-item>
<list-item><p>Determine the lower and upper bounds <italic>w</italic> &#x0003D; max{0, <italic>x</italic>&#x0002B;<italic>y</italic>&#x02212;1} and <italic>m</italic> &#x0003D; min{<italic>x, y</italic>}.</p></list-item>
<list-item><p>Add 1 to the frequency count of each discretized value between <italic>w</italic> and <italic>m</italic>, <italic>f</italic>[<italic>i</italic>] &#x0003D; <italic>f</italic>[<italic>i</italic>]&#x0002B;1, <italic>i</italic> &#x0003D; 1000&#x000B7;<italic>w</italic>, &#x02026;, 1000&#x000B7;<italic>m</italic>.</p></list-item>
<list-item><p>Repeat steps 3 to 5 <italic>N</italic> times; <italic>N</italic> may, for example, be 50,000.</p></list-item>
<list-item><p>Divide the frequency counts of the discretized values by <italic>N</italic>. The result approximates the distribution of the probability of being coherent.</p></list-item>
</list></p>
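<p>The steps above can be sketched as follows. This is an illustrative Python version of the procedure for the conjunction (the article&#x00027;s own program is the R code in the Supplementary Material):</p>

```python
import numpy as np
from scipy import stats

def coherence_distribution(a1, b1, a2, b2, rho, N=50_000, n=1000, seed=1):
    """Approximate the distribution of the probability of being coherent
    for the conjunction, following steps 1-7."""
    rng = np.random.default_rng(seed)
    f = np.zeros(n + 1)                           # steps 1-2: grid and counts
    z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=N)
    u, v = stats.norm.cdf(z).T                    # step 3: Gaussian copula ...
    x = stats.beta.ppf(u, a1, b1)                 # ... with beta marginals
    y = stats.beta.ppf(v, a2, b2)
    w = np.maximum(0.0, x + y - 1.0)              # step 4: lower bound
    m = np.minimum(x, y)                          #         upper bound
    lo = np.round(n * w).astype(int)              # step 5: discretize [w, m]
    hi = np.round(n * m).astype(int)
    for i, j in zip(lo, hi):                      # step 6: all N repetitions
        f[i:j + 1] += 1
    return f / N                                  # step 7: relative frequencies

p_coh = coherence_distribution(30, 3, 20, 20, rho=0.5)
```

<p>Each entry <monospace>p_coh[i]</monospace> estimates the probability that the assessment <italic>i</italic>/1000 lies inside the coherent interval [<italic>w, m</italic>].</p>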
<p>We implemented these steps in R (R Development Core Team, <xref ref-type="bibr" rid="B64">2016</xref>) using the package VineCopula (Schepsmeier et al., <xref ref-type="bibr" rid="B67">2018</xref>). The package offers a multitude of different copulas that may be used to specify the kind and the strength of dependencies (see also Mai and Scherer, <xref ref-type="bibr" rid="B55">2012</xref>).</p>
<p>It is rational to require that a precise probability assessment in a probabilistically imprecise world maximizes the probability of being coherent. The second-order probabilities do not lose the Dutch book criterion, contrary to the claim of Smets and Kruse (<xref ref-type="bibr" rid="B69">1997</xref>, p. 243). If there is a set of bets, it is reasonable to prefer the one that maximizes the probability of avoiding losses. <italic>The hierarchical construction of first- and second-order probabilities goes hand in hand with a multi-level rationality criterion</italic>.</p>
<p>Smets (<xref ref-type="bibr" rid="B68">1990</xref>) distinguished two levels of uncertainty representation: The <italic>credal</italic> level&#x02014;beliefs are entertained&#x02014;and the <italic>pignistic</italic> level&#x02014;beliefs are used to act. Interval probabilities are typical of the credal level. They may be entertained in the cognitive representation of uncertainty. Practical decisions, however, require the selection of precise point values that maximize, e.g., expected utility. Smets&#x00027; pignistic probabilities are different from the maximum probability of being coherent. We note that point probabilities are not always required for decision making. In decision theory, economics, and risk management <italic>distributions</italic> and not only exact probabilities are compared. The criterion of stochastic dominance (Sriboonchitta et al., <xref ref-type="bibr" rid="B71">2010</xref>) may, for example, be applied to two distributions of being coherent.</p>
<p>The discriminatory sensitivity of the logical connectives may be studied by measuring the distance between two distributions of being coherent. A well-known measure of the distance between two distributions is the Kullback-Leibler distance. Because of the stochastic simulation the distributions of the probability of being coherent are discrete, in our case with <italic>N</italic> &#x0003D; 1,000 increments. The Kullback-Leibler distance between two probability distributions <italic>P</italic> and <italic>Q</italic> is given by</p>
<disp-formula id="E6"><label>(5)</label><mml:math id="M8"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>D</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>P</mml:mi><mml:mo>;</mml:mo><mml:mi>Q</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo></mml:mtd><mml:mtd><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:munderover></mml:mstyle><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo class="qopname">log</mml:mo><mml:mfrac><mml:mrow><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>Q</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:mfrac><mml:mo>,</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd></mml:mtd><mml:mtd><mml:mtext class="textrm" 
mathvariant="normal">where</mml:mtext><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>/</mml:mo><mml:mi>N</mml:mi><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>2</mml:mn><mml:mo>/</mml:mo><mml:mi>N</mml:mi><mml:mo>,</mml:mo><mml:mo>&#x02026;</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Numerical probabilities equal to zero were set equal to 0.0001. Table <xref ref-type="table" rid="T2">2</xref> shows the distances between ten pairs of distributions, three kinds of beta distributions, and the two correlation coefficients <italic>&#x003C1;</italic> &#x0003D; 0.5 and <italic>&#x003C1;</italic> &#x0003D; &#x02212;0.5.</p>
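<p>A minimal sketch of this computation; the zero-replacement value 0.0001 is taken from the text, while the helper function itself is illustrative and not from the article&#x00027;s code:</p>

```python
import numpy as np

def kl_distance(P, Q, eps=1e-4):
    """Kullback-Leibler distance D(P; Q) of Equation (5).
    Probabilities equal to zero are replaced by 0.0001."""
    P = np.where(np.asarray(P, float) == 0, eps, P)
    Q = np.where(np.asarray(Q, float) == 0, eps, Q)
    return float(np.sum(P * np.log(P / Q)))
```

<p>Note that <italic>D</italic>(<italic>P</italic>; <italic>Q</italic>) is not symmetric in <italic>P</italic> and <italic>Q</italic>, so the direction of the comparison matters.</p>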
<table-wrap position="float" id="T2">
<label>Table 2</label>
<caption><p>Logical operators: Kullback-Leibler distances between the second order distributions of the probability of being coherent and the uniform distribution (UFD) and between the distributions of the conjunction (AND), the disjunction (OR), the conditional (IF) and the exclusive disjunctions (XOR).</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>P(A)</bold></th>
<th valign="top" align="left"><bold>P(B)</bold></th>
<th valign="top" align="center"><bold>&#x003C1;</bold></th>
<th valign="top" align="center"><bold>AND</bold></th>
<th valign="top" align="center"><bold>OR</bold></th>
<th valign="top" align="center"><bold>IF</bold></th>
<th valign="top" align="center"><bold>XOR</bold></th>
<th valign="top" align="center"><bold>OR</bold></th>
<th valign="top" align="center"><bold>IF</bold></th>
<th valign="top" align="center"><bold>XOR</bold></th>
<th valign="top" align="center"><bold>IF</bold></th>
<th valign="top" align="center"><bold>XOR</bold></th>
<th valign="top" align="center"><bold>XOR</bold></th>
</tr>
<tr>
<th/>
<th/>
<th/>
<th valign="top" align="center"><bold>UFD</bold></th>
<th valign="top" align="center"><bold>UFD</bold></th>
<th valign="top" align="center"><bold>UFD</bold></th>
<th valign="top" align="center"><bold>UFD</bold></th>
<th valign="top" align="center"><bold>AND</bold></th>
<th valign="top" align="center"><bold>AND</bold></th>
<th valign="top" align="center"><bold>AND</bold></th>
<th valign="top" align="center"><bold>OR</bold></th>
<th valign="top" align="center"><bold>OR</bold></th>
<th valign="top" align="center"><bold>IF</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Be(30,3)</td>
<td valign="top" align="left">Be(20,20)</td>
<td valign="top" align="center">0.5</td>
<td valign="top" align="center">7.80</td>
<td valign="top" align="center">8.78</td>
<td valign="top" align="center">7.80</td>
<td valign="top" align="center">7.74</td>
<td valign="top" align="center">11.06</td>
<td valign="top" align="center">0.14</td>
<td valign="top" align="center">0.63</td>
<td valign="top" align="center">9.94</td>
<td valign="top" align="center">9.30</td>
<td valign="top" align="center">0.21</td>
</tr>
<tr style="border-bottom: thin solid #000000;">
<td valign="top" align="left">Be(30,3)</td>
<td valign="top" align="left">Be(20,20)</td>
<td valign="top" align="center">&#x02212;0.5</td>
<td valign="top" align="center">8.77</td>
<td valign="top" align="center">7.80</td>
<td valign="top" align="center">7.80</td>
<td valign="top" align="center">7.74</td>
<td valign="top" align="center">11.06</td>
<td valign="top" align="center">0.47</td>
<td valign="top" align="center">0.19</td>
<td valign="top" align="center">9.40</td>
<td valign="top" align="center">9.77</td>
<td valign="top" align="center">0.21</td>
</tr> <tr>
<td valign="top" align="left">Be(100,10)</td>
<td valign="top" align="left">Be(20,2)</td>
<td valign="top" align="center">0.5</td>
<td valign="top" align="center">8.22</td>
<td valign="top" align="center">9.10</td>
<td valign="top" align="center">8.45</td>
<td valign="top" align="center">8.17</td>
<td valign="top" align="center">6.78</td>
<td valign="top" align="center">3.16</td>
<td valign="top" align="center">10.43</td>
<td valign="top" align="center">0.716</td>
<td valign="top" align="center">10.47</td>
<td valign="top" align="center">10.46</td>
</tr>
<tr style="border-bottom: thin solid #000000;">
<td valign="top" align="left">Be(100,10)</td>
<td valign="top" align="left">Be(20,2)</td>
<td valign="top" align="center">&#x02212;0.5</td>
<td valign="top" align="center">8.55</td>
<td valign="top" align="center">9.34</td>
<td valign="top" align="center">8.54</td>
<td valign="top" align="center">8.33</td>
<td valign="top" align="center">8.86</td>
<td valign="top" align="center">4.80</td>
<td valign="top" align="center">10.63</td>
<td valign="top" align="center">1.35</td>
<td valign="top" align="center">10.63</td>
<td valign="top" align="center">10.63</td>
</tr> <tr>
<td valign="top" align="left">Be(20,100)</td>
<td valign="top" align="left">Be(5,20)</td>
<td valign="top" align="center">0.5</td>
<td valign="top" align="center">8.55</td>
<td valign="top" align="center">7.88</td>
<td valign="top" align="center">6.91</td>
<td valign="top" align="center">7.67</td>
<td valign="top" align="center">6.41</td>
<td valign="top" align="center">6.18</td>
<td valign="top" align="center">3.25</td>
<td valign="top" align="center">3.13</td>
<td valign="top" align="center">1.46</td>
<td valign="top" align="center">0.68</td>
</tr>
<tr>
<td valign="top" align="left">Be(20,100)</td>
<td valign="top" align="left">Be(5,20)</td>
<td valign="top" align="center">&#x02212;0.5</td>
<td valign="top" align="center">8.66</td>
<td valign="top" align="center">8.12</td>
<td valign="top" align="center">6.92</td>
<td valign="top" align="center">7.77</td>
<td valign="top" align="center">8.73</td>
<td valign="top" align="center">6.51</td>
<td valign="top" align="center">4.82</td>
<td valign="top" align="center">4.16</td>
<td valign="top" align="center">1.98</td>
<td valign="top" align="center">0.74</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>&#x003C1; denotes the value of the Gaussian copula</italic>.</p>
</table-wrap-foot>
</table-wrap>
<p>The left side of Table <xref ref-type="table" rid="T2">2</xref> contains distances from the uniform distribution (UFD). These distances are all high and relatively insensitive to the kind of the distributions of <italic>P</italic>(<italic>A</italic>) and <italic>P</italic>(<italic>B</italic>) and to the correlation coefficient <italic>&#x003C1;</italic>. The greatest distances are between <sc>or</sc> and UFD and between <sc>and</sc> and UFD.</p>
<p>On the right side of Table <xref ref-type="table" rid="T2">2</xref> small distances indicate that the probabilistic semantics of the two operators are similar. The smallest value, <italic>D</italic>(<italic>P</italic>; <italic>Q</italic>) &#x0003D; 0.14, is obtained for the distance between <sc>if</sc> and <sc>and</sc> for <italic>P</italic>(<italic>A</italic>) &#x0007E; <italic>Be</italic>(30, 3) and <italic>P</italic>(<italic>B</italic>) &#x0007E; <italic>Be</italic>(20, 20), that is, for one distribution with a high mean of 0.91 and one distribution with a mean of 0.5. This may be related to the empirical finding that about twenty percent of the interpretations of if-then sentences are conjunction interpretations (Fugard et al., <xref ref-type="bibr" rid="B23">2011</xref>; Kleiter et al., <xref ref-type="bibr" rid="B47">2018</xref>).</p>
<p>The conclusion that may be drawn from this analysis is: <italic>The difference or the similarity of the probabilistic meaning of two logical operators depends on the high, middle, or low probabilities of the events and on the copula between the two</italic>. This makes the empirical investigation of the semantics of the logical operators in reasoning and everyday language more difficult than often assumed. This holds, for example, for our own experiments, where we used truth-table tasks with relative frequencies selected to discriminate between conjunctions, disjunctions, conditionals, etc. Such discrimination is only possible if the frequencies presented to the participants in the truth tables are close to being equally distributed and not rather high or low.</p>
<p>We next turn to the conjunction fallacy, one of the best known fallacies in the heuristics and biases paradigm. We will see that imprecision is a factor that may explain the fallacy at least to some degree.</p>
</sec>
<sec>
<title>2.3. Conjunction fallacy</title>
<p>In the same way as we asked for the probability of being coherent, we may ask for the probability of being incoherent. A prototypical example for incoherent probability judgments is the Linda task (Tversky and Kahneman, <xref ref-type="bibr" rid="B77">1983</xref>):</p>
<p>Linda is 31 years old, single, outspoken and very bright. She majored in philosophy. As a student, she was especially concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations. Rank order the probabilities for</p>
<list list-type="bullet">
<list-item><p>Linda is a bank teller.</p></list-item>
<list-item><p>Linda is active in the feminist movement.</p></list-item>
<list-item><p>Linda is a bank teller and is active in the feminist movement.</p></list-item>
</list>
<p>Many people think the conjunction is more probable than one or even both of its conjuncts. They are victims of the conjunction fallacy.</p>
<p>Like many other tasks in the literature on fallacies and biases, the Linda task is an example for highly imprecise probabilities. Denote &#x0201C;Linda is a bank teller&#x0201D; by <italic>A</italic>, &#x0201C;Linda is a feminist&#x0201D; by <italic>B</italic> and assume <italic>P</italic>(<italic>A</italic>) &#x0003D; <italic>X</italic> &#x0007E; <italic>Be</italic>(<italic>&#x003B1;</italic><sub>1</sub>, <italic>&#x003B2;</italic><sub>1</sub>), <italic>P</italic>(<italic>B</italic>) &#x0003D; <italic>Y</italic> &#x0007E; <italic>Be</italic>(<italic>&#x003B1;</italic><sub>2</sub>, <italic>&#x003B2;</italic><sub>2</sub>), and a Gaussian copula with <italic>&#x003C1;</italic> &#x0003D; 0.7.</p>
<p>You create two vague ideas of the probabilities of <italic>A</italic> and <italic>B</italic>, modeled here by two beta distributions. Next you think about reasonable values for the probabilities of the conjunction, modeled here by the distribution of the probability of being coherent. In the terminology of Smets the three distributions belong to the <italic>credal</italic> level. The beliefs are just &#x0201C;entertained&#x0201D; and their imprecision is part of their representation. When it is time for judgment, one value <italic>x</italic> is sampled from the distribution for <italic>A</italic> and one value <italic>y</italic> from the distribution for <italic>B</italic>. Now, if you really think hard, you <italic>infer</italic> the third value <italic>z</italic> on the basis of <italic>x</italic> and <italic>y</italic>, and the inferred value may be coherent. If you are lazy, you sample a third time, now a value <italic>z</italic> from the distribution for being coherent. You come up with a judgment <italic>z</italic> that is <italic>decoupled</italic> from <italic>x</italic> and <italic>y</italic>. If you think hard, your judgment of <italic>z</italic> is coupled to the precise values <italic>x</italic> and <italic>y</italic>; with less strain, it is sampled from a distribution. In this case <italic>z</italic> may easily exceed the upper bound of the conjunction probability, i.e., the minimum of <italic>x</italic> and <italic>y</italic>, and the result is a conjunction error. The probability of this one-sided incoherence corresponds to the probability that <italic>z</italic> is in the interval between the upper bound <italic>m</italic> and 1, <italic>P</italic>(<italic>z</italic>&#x02208;[<italic>m</italic>, 1]).</p>
<p>Applying simulation methods again gives a surprising result. If my probability assessment of &#x0201C;Linda is a bank teller&#x0201D; is close to 0.5, or if my assessment of &#x0201C;Linda is active in the feminist movement&#x0201D; is close to 0.5, the probability of a conjunction error may be as high as 50%. <italic>Imprecise probabilities may induce a high percentage of conjunction errors</italic>. If the central tendency of one of the marginals is located close to 0.5, then the probability of a conjunction error is close to 0.5. The probability decreases as both means move away from 0.5. The size of the correlation (or the copula parameter) hardly matters. Table <xref ref-type="table" rid="T3">3</xref> gives a few numerical examples.</p>
<table-wrap position="float" id="T3">
<label>Table 3</label>
<caption><p>Probability of a conjunction error.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>Beta distribution</bold></th>
<th valign="top" align="center"><bold>Be(1,1)</bold></th>
<th valign="top" align="center"><bold>Be(2,2)</bold></th>
<th valign="top" align="center"><bold>Be(4,2)</bold></th>
<th valign="top" align="center"><bold>Be(8,2)</bold></th>
<th valign="top" align="center"><bold>Be(16,2)</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Probability of a conjunction error</td>
<td valign="top" align="center">0.50</td>
<td valign="top" align="center">0.50</td>
<td valign="top" align="center">0.33</td>
<td valign="top" align="center">0.22</td>
<td valign="top" align="center">0.15</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>The beta distribution of one conjunct is held constant at Be(30, 5); Gaussian copula &#x003C1; &#x0003D; 0.5</italic>.</p>
</table-wrap-foot>
</table-wrap>
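The simulation behind Table 3 can be sketched in a few lines. The following is a minimal, illustrative Python sketch, not the author's code: it draws the two marginals independently (the text notes that the copula correlation has hardly any effect) and, as a simplifying assumption, models the decoupled judgment <italic>z</italic> as uniform on [0, 1]; the function and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Independent draws for simplicity; the correlation hardly matters here.
x = rng.beta(30, 5, n)  # one conjunct held at Be(30, 5), as in Table 3
y = rng.beta(2, 2, n)   # the other conjunct, e.g. Be(2, 2)

# Coherent interval for P(A and B) given P(A) = x, P(B) = y
lower = np.maximum(0.0, x + y - 1.0)
upper = np.minimum(x, y)

# A decoupled judgment z commits a conjunction error when z > min(x, y);
# z uniform on [0, 1] is an assumption standing in for a vague judgment.
z = rng.uniform(0.0, 1.0, n)
error_rate = np.mean(z > upper)
print(round(error_rate, 2))
```

Under these assumptions the estimated error rate for the Be(2,2) column comes out near the tabulated value of 0.50; moving the second marginal's mean away from 0.5 (e.g., Be(16,2)) lowers the rate, as in the table.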
<p>We next turn to uncertain conditionals, the salt in the soup of probability logic. The interpretation of conditionals by humans has been, and remains, an especially important topic in human reasoning research. Imprecise conditionals have been studied in terms of lower and upper probabilities. In the next section we will turn to distributional imprecision.</p>
</sec>
<sec>
<title>2.4. Conditional</title>
<p>Modeling conditioning with imprecise probabilities is an intricate problem. This is evident from the many different proposals made in many-valued logic, in work on lower probabilities and the Dempster-Shafer belief functions, and in work on possibilistic and fuzzy approaches. In the coherence approach, inferences where the <italic>conclusion</italic> is a conditional require special methods. The extension of the Fundamental Theorem of de Finetti to conditional probabilities is due to Lad (<xref ref-type="bibr" rid="B52">1996</xref>). He also explains how numerical results are found by linear inequalities and fractional programming (Lad, <xref ref-type="bibr" rid="B52">1996</xref>).</p>
<p>The psychological literature reports many experiments on the interpretation of uncertain conditionals. The <italic>truth table method</italic> is used to distinguish between the material implication of classical logic and the conditional event interpretation. In particular the &#x0201C;new probabilistic paradigm&#x0201D; (Over, <xref ref-type="bibr" rid="B59">2009</xref>; Elqayam, <xref ref-type="bibr" rid="B17">2017</xref>) in reasoning research has used this task. The task is based on the truth values of the antecedent and the consequent. I, the experimenter, show you, the participant, the four combinations of the binary truth values of <italic>A</italic> and of <italic>B</italic> together with their associated probabilities. You tell me the probability you assign to &#x0201C;If <italic>A</italic> then <italic>B</italic>.&#x0201D; I infer which truth values you attended to, and this allows me to reconstruct your logical interpretation of the conditional.</p>
<p>Given <italic>P</italic>(<italic>A</italic>) &#x0003D; <italic>x</italic> and <italic>P</italic>(<italic>B</italic>) &#x0003D; <italic>y</italic> the probability of <italic>P</italic>(<italic>B</italic>|<italic>A</italic>) &#x0003D; <italic>z</italic> is in the interval</p>
<disp-formula id="E8"><label>(6)</label><mml:math id="M10"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:mi>z</mml:mi><mml:mo>&#x02208;</mml:mo><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mo class="qopname">max</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mfrac><mml:mrow><mml:mi>x</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mi>y</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:mfrac></mml:mrow><mml:mo>}</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mo class="qopname">min</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mfrac><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:mfrac></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mtext>&#x000A0;</mml:mtext><mml:mi>x</mml:mi><mml:mo>&#x0003E;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Figures <xref ref-type="fig" rid="F3">3</xref>, <xref ref-type="fig" rid="F4">4</xref> show examples for the distribution of <italic>P</italic>(<italic>B</italic>|<italic>A</italic>), the probability of a conditional. We have already pointed out that the results for the conjunction and the conditional can be very similar.</p>
<p>For the material implication (denoted by <italic>A</italic> &#x02192; <italic>B</italic>) this is different. Given <italic>P</italic>(<italic>A</italic>) &#x0003D; <italic>x</italic> and <italic>P</italic>(<italic>B</italic>) &#x0003D; <italic>y</italic> the probability of <italic>P</italic>(<italic>A</italic> &#x02192; <italic>B</italic>) &#x0003D; <italic>z</italic> is in the interval</p>
<disp-formula id="E9"><label>(7)</label><mml:math id="M11"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:mi>z</mml:mi><mml:mo>&#x02208;</mml:mo><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>-</mml:mo><mml:mo class="qopname">min</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mi>y</mml:mi><mml:mo>,</mml:mo><mml:mn>1</mml:mn><mml:mo>-</mml:mo><mml:mi>x</mml:mi></mml:mrow><mml:mo>}</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mo class="qopname">min</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>-</mml:mo><mml:mi>y</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>The lower and upper probabilities are equivalent to those of the disjunction &#x000AC;<italic>A</italic> &#x02228; <italic>B</italic>. If the probability of the antecedent <italic>P</italic>(<italic>A</italic>) is high, then the distributions of the lower and upper probabilities and the probability of being coherent are very similar to those of the disjunction <italic>A</italic> &#x02228; <italic>B</italic>. With increasing <italic>P</italic>(<italic>A</italic>) the distributions of &#x000AC;<italic>A</italic> &#x02228; <italic>B</italic> and <italic>A</italic> &#x02228; <italic>B</italic> become increasingly indistinguishable. In an imprecise probabilistic environment the question &#x0201C;material implication or disjunction?&#x0201D; hardly matters. The question &#x0201C;conditional event or material implication?&#x201D;, however, makes a big difference: the conditional event interpretation leads to much lower probabilities than the material implication. This is a highly relevant aspect for the interpretation of <italic>if-then</italic> sentences in the context of risk assessment.</p>
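The contrast between the two interpretations is easy to verify numerically. A minimal Python sketch of the intervals in Equations (6) and (7); the function names are hypothetical and the values x = 0.9, y = 0.5 are merely illustrative:

```python
def conditional_event_interval(x, y):
    # Eq. (6): coherent interval for P(B|A) given P(A) = x, P(B) = y, x > 0
    return max(0.0, (x + y - 1.0) / x), min(1.0, y / x)

def material_implication_interval(x, y):
    # Eq. (7): coherent interval for P(A -> B), equivalent to P(not-A or B)
    return 1.0 - min(y, 1.0 - x), min(1.0 - y + x, 1.0)

x, y = 0.9, 0.5
print(conditional_event_interval(x, y))     # roughly [0.444, 0.556]
print(material_implication_interval(x, y))  # [0.9, 1.0]
```

With a high antecedent probability the conditional event interval sits far below the material implication interval, which is the point made in the text.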
<p>The interpretation of conditionals leads us to the next section, to logical inference rules. Psychologists have often investigated the <sc>modus ponens</sc> along with the <sc>modus tollens</sc> and two logically non-valid argument forms.</p>
</sec>
<sec>
<title>2.5. The MP-quartet</title>
<p>Four inference rules were often investigated in the psychology of human reasoning: The quartet of the <sc>modus ponens</sc>, the <sc>modus tollens</sc> (both logically valid) and the argument forms of <sc>denying the antecedent</sc> and <sc>affirming the consequent</sc> (both logically nonvalid), here called &#x0201C;the MP-quartet&#x0201D; for short. The <sc>modus ponens</sc></p>
<disp-formula id="E10"><mml:math id="M12"><mml:mrow><mml:mtext class="textrm" mathvariant="normal">From&#x000A0;&#x000A0;</mml:mtext><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mtext class="textrm" mathvariant="normal">if&#x000A0;</mml:mtext><mml:mi>A</mml:mi><mml:mtext>&#x000A0;</mml:mtext><mml:mtext class="textrm" mathvariant="normal">then&#x000A0;</mml:mtext><mml:mi>B</mml:mi><mml:mo>,</mml:mo><mml:mi>A</mml:mi></mml:mrow><mml:mo>}</mml:mo></mml:mrow><mml:mtext>&#x000A0;&#x000A0;</mml:mtext><mml:mtext class="textrm" mathvariant="normal">infer</mml:mtext><mml:mi>B</mml:mi></mml:mrow></mml:math></disp-formula>
<p>is the best known and most important inference rule in deductive logic. It is endorsed by practically all people (Rips, <xref ref-type="bibr" rid="B65">1994</xref>). If the premises are uncertain and the conditional is interpreted as a conditional event we have in terms of point probability:</p>
<disp-formula id="E11"><label>(8)</label><mml:math id="M13"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:mtext class="textrm" mathvariant="normal">From&#x000A0;</mml:mtext><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>B</mml:mi><mml:mo>|</mml:mo><mml:mi>A</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>A</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>y</mml:mi></mml:mrow><mml:mo>}</mml:mo></mml:mrow><mml:mtext>&#x000A0;</mml:mtext><mml:mtext class="textrm" mathvariant="normal">infer</mml:mtext><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>B</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>z</mml:mi><mml:mo>,</mml:mo><mml:mtext>&#x000A0;</mml:mtext><mml:mtext class="textrm" mathvariant="normal">and&#x000A0;</mml:mtext><mml:mi>z</mml:mi><mml:mo>&#x02208;</mml:mo><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mi>x</mml:mi><mml:mi>y</mml:mi><mml:mo>,</mml:mo><mml:mn>1</mml:mn><mml:mo>-</mml:mo><mml:mi>y</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mi>x</mml:mi><mml:mi>y</mml:mi></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>For the lower and upper bounds of the three other rules see, for example, Pfeifer and Kleiter (<xref ref-type="bibr" rid="B61">2005</xref>).</p>
<p>Figure <xref ref-type="fig" rid="F5">5</xref> shows the results for the four inference rules for a numerical example. The premises have the distributions <italic>X</italic> &#x0007E; <italic>Be</italic>(15, 3), <italic>Y</italic> &#x0007E; <italic>Be</italic>(6, 3), and the Gaussian copula <italic>&#x003C1;</italic> &#x0003D; 0.5<xref ref-type="fn" rid="fn0004"><sup>4</sup></xref>.</p>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p>Four inference rules. <bold>(Upper panels)</bold> Probability distribution of the minor premise and the major premise <italic>P</italic>(<italic>B</italic>|<italic>A</italic>). Histograms of the lower and upper probabilities of the four rules. The continuous curves show the distributions of the probability of being coherent.</p></caption>
<graphic xlink:href="fpsyg-09-02051-g0005.tif"/>
</fig>
<p>The <sc>modus ponens</sc> has a maximum probability of being coherent that is close to the distribution of the minor premise <italic>P</italic>(<italic>A</italic>). For the <sc>modus tollens</sc> the maximum probability is at 1.0. The <sc>modus tollens</sc> is the strongest inference rule (Pfeifer and Kleiter, <xref ref-type="bibr" rid="B61">2005</xref>, <xref ref-type="bibr" rid="B63">2006b</xref>). Psychologically the <sc>modus tollens</sc> is difficult and complex; it is a &#x0201C;backwards&#x0201D; rule and involves two negations. Usually its endorsement is much lower than that of the <sc>modus ponens</sc>.</p>
<p>The two logically non-valid inference forms lead to probabilities of being coherent that are close to uniform distributions. In a psychological investigation the two rules should stick out by the <italic>variance</italic> of the probability judgments: more or less any probability judgment in [0, 1] is coherent.</p>
<p>The following section applies distributional imprecision to a series of examples. Most of them are well-known from the psychological literature but the inclusion of imprecision into their analysis leads to new properties and results.</p>
</sec>
</sec>
<sec id="s3">
<title>3. Applications and examples</title>
<sec>
<title>3.1. Natural sampling</title>
<p>One of the best known fallacies in judgment under uncertainty is the base rate neglect (Kahneman and Tversky, <xref ref-type="bibr" rid="B39">1973</xref>; Bar-Hillel, <xref ref-type="bibr" rid="B6">1980</xref>; Koehler, <xref ref-type="bibr" rid="B48">1996</xref>). A doctor may, for example, neglect the prevalence of a disease and concentrate only on the likelihood of a symptom given the disease. While this is often a major fallacy, there are situations in which base rate neglect is completely rational. This holds also for beta distributions: Assume the shape parameters <italic>&#x003B1;</italic> and <italic>&#x003B2;</italic> of a distribution <italic>Be</italic>(<italic>&#x003B1;</italic>, <italic>&#x003B2;</italic>) correspond to the frequencies of a binary feature in a sample of <italic>n</italic> observations, <italic>n</italic> &#x0003D; <italic>&#x003B1;</italic> &#x0002B; <italic>&#x003B2;</italic>. Split the total sample into two subsamples so that the sample sizes add up to <italic>n</italic>; the subsample sizes are thus not pre-planned. In statistics this is called <italic>natural sampling</italic> (Aitchison and Dunsmore, <xref ref-type="bibr" rid="B3">1975</xref>). We have <italic>Be</italic>(<italic>&#x003B1;</italic><sub>1</sub>, <italic>&#x003B2;</italic><sub>1</sub>), <italic>Be</italic>(<italic>&#x003B1;</italic><sub>2</sub>, <italic>&#x003B2;</italic><sub>2</sub>) and <italic>&#x003B1;</italic> &#x0003D; <italic>&#x003B1;</italic><sub>1</sub> &#x0002B; <italic>&#x003B1;</italic><sub>2</sub> and <italic>&#x003B2;</italic> &#x0003D; <italic>&#x003B2;</italic><sub>1</sub> &#x0002B; <italic>&#x003B2;</italic><sub>2</sub>, and <italic>n</italic> &#x0003D; <italic>&#x003B1;</italic><sub>1</sub> &#x0002B; <italic>&#x003B1;</italic><sub>2</sub> &#x0002B; <italic>&#x003B2;</italic><sub>1</sub> &#x0002B; <italic>&#x003B2;</italic><sub>2</sub>.
For natural sampling it was proven by Kleiter (<xref ref-type="bibr" rid="B41">1994</xref>) that the base rates in Bayes&#x00027; Theorem are &#x0201C;redundant&#x0201D; and may be ignored. The result for precise probabilities has often been used by Gigerenzer within his frequentist approach (Gigerenzer and Hoffrage, <xref ref-type="bibr" rid="B24">1995</xref>; Kleiter, <xref ref-type="bibr" rid="B42">1996</xref>).</p>
<p>Ignoring base rates may be rational not only for precise but also for imprecise probabilities. For natural sampling it holds that if the knowledge about the prevalence of a disease <italic>H</italic> is represented by the beta <italic>P</italic>(<italic>H</italic>) &#x0007E; <italic>Be</italic>(<italic>&#x003B1;</italic>, <italic>&#x003B2;</italic>) and the conditional probabilities of a symptom <italic>D</italic> are represented by the betas <italic>P</italic>(<italic>D</italic>|<italic>H</italic>) &#x0007E; <italic>Be</italic>(<italic>&#x003B1;</italic><sub>1</sub>, <italic>&#x003B2;</italic><sub>1</sub>) and <italic>P</italic>(<italic>D</italic>|&#x000AC;<italic>H</italic>) &#x0007E; <italic>Be</italic>(<italic>&#x003B1;</italic><sub>2</sub>, <italic>&#x003B2;</italic><sub>2</sub>), then the posterior distribution of the disease given the symptom <italic>D</italic> is simply</p>
<disp-formula id="E12"><label>(9)</label><mml:math id="M14"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:mtable style="text-align:axis;" equalrows="false" columnlines="" equalcolumns="false" class="array"><mml:mtr><mml:mtd><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>H</mml:mi><mml:mo>|</mml:mo><mml:mi>D</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo> &#x0007E; </mml:mo><mml:mi>B</mml:mi><mml:mi>e</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003B1;</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B1;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mtext class="textrm" mathvariant="normal">mean&#x000A0;</mml:mtext><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003B1;</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003B1;</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B1;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:mo>,</mml:mo><mml:mtext>&#x000A0;&#x000A0;</mml:mtext><mml:mtext class="textrm" mathvariant="normal">variance&#x000A0;</mml:mtext><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003B1;</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>&#x003B1;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mo 
stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003B1;</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B1;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003B1;</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x003B1;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:mfrac><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>If frequencies are used to update subjective probabilities and if (and only if) natural sampling conditions hold, the resulting degrees of belief remain in the family of beta distributions, i.e., the distributions are natural conjugates. Note that (relative) frequencies and probabilities are not the same: the frequencies are used to estimate probabilities, and the representation of the imprecision of these estimates is an integral part of any statistical approach. The property of natural sampling extends to multivariate Dirichlet distributions and is thus helpful for representing imprecise degrees of belief in more complex environments. If the natural sampling assumption is dropped, then vines and copulas offer elegant methods to model the representation and propagation of degrees of belief.</p>
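The posterior in Equation (9) depends only on the two symptom counts. A minimal Python sketch with hypothetical counts (21 patients with symptom and disease, 9 with symptom but without the disease); the function name is ours:

```python
def natural_sampling_posterior(a1, a2):
    # Eq. (9): under natural sampling P(H|D) ~ Be(a1, a2), where a1 counts
    # cases with D and H, and a2 counts cases with D and not-H;
    # the base rate drops out entirely.
    mean = a1 / (a1 + a2)
    var = a1 * a2 / ((a1 + a2) ** 2 * (a1 + a2 + 1))
    return mean, var

mean, var = natural_sampling_posterior(21, 9)  # hypothetical counts
print(round(mean, 3), round(var, 4))
```

Note that the prevalence counts &#x003B2;<sub>1</sub>, &#x003B2;<sub>2</sub> never enter: this is the distributional form of rational base rate neglect.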
</sec>
<sec>
<title>3.2. Rips inference tasks</title>
<p>To show that a wide range of logical inference tasks can be modeled within the distributional approach we discuss very briefly two examples from Rips (<xref ref-type="bibr" rid="B65">1994</xref>). Rips compared the predictions of his proof-logical PSYCOP model with empirical data. He investigated 32 inference problems of classical sentential logic. Among them the following one:</p>
<disp-quote><p>IF Betty is in Little Rock THEN Ellen is in Hammond. Phoebe is in Tucson AND Sandra is in Memphis. Is the following conclusion true: IF Betty is in Little Rock THEN (Ellen is in Hammond AND Sandra is in Memphis) (Rips, <xref ref-type="bibr" rid="B65">1994</xref>, p. 105).</p></disp-quote>
<p>We represent the conditional by a conditional event<xref ref-type="fn" rid="fn0005"><sup>5</sup></xref> and first introduce precise probabilities:</p>
<disp-formula id="E18"><mml:math id="M100"><mml:mtable columnalign='left'><mml:mtr><mml:mtd><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>B</mml:mi><mml:mo>&#x0007C;</mml:mo><mml:mi>A</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mi>x</mml:mi></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>C</mml:mi><mml:mo>&#x02227;</mml:mo><mml:mi>D</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mi>y</mml:mi></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mover accent='true'><mml:mrow><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>B</mml:mi><mml:mo>&#x02227;</mml:mo><mml:mi>D</mml:mi><mml:mo>&#x0007C;</mml:mo><mml:mi>A</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x02208;</mml:mo><mml:mo stretchy='false'>[</mml:mo><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mi>x</mml:mi><mml:mo stretchy='false'>]</mml:mo></mml:mrow><mml:mo stretchy='true'>&#x000AF;</mml:mo></mml:mover></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>The interval probability of the conclusion, <italic>P</italic>(<italic>B</italic>&#x02227;<italic>D</italic>|<italic>A</italic>)&#x02208;[0, <italic>x</italic>], is easily obtained after seeing that the probability of the conjunctive premise is irrelevant. <italic>P</italic>(<italic>D</italic>) is greater than <italic>P</italic>(<italic>C</italic>&#x02227;<italic>D</italic>) and may maximally be 1. The upper probability of the conclusion is thus <inline-formula><mml:math id="M15"><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>B</mml:mi><mml:mo>&#x02227;</mml:mo><mml:mi>D</mml:mi><mml:mo>|</mml:mo><mml:mi>A</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>A</mml:mi><mml:mo>&#x02227;</mml:mo><mml:mi>B</mml:mi><mml:mo>&#x02227;</mml:mo><mml:mi>D</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>A</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:mfrac></mml:math></inline-formula> and <inline-formula><mml:math id="M16"><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>B</mml:mi><mml:mo>&#x02227;</mml:mo><mml:mi>D</mml:mi><mml:mo>|</mml:mo><mml:mi>A</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>D</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mfrac><mml:mrow><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>A</mml:mi><mml:mo>&#x02227;</mml:mo><mml:mi>B</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>A</mml:mi></mml:mrow><mml:mo 
stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>B</mml:mi><mml:mo>|</mml:mo><mml:mi>A</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula>. Analogous relationships hold for the probability distributions.</p>
<p>In a second step beta distributions for the premises are introduced, say <italic>X</italic> and <italic>Y</italic>, and by stochastic simulation the distributions for the lower and upper probabilities and the distribution of the probability of being coherent are determined. The distribution of the probability of being coherent is practically uniform over the range between 0 and the mean of <italic>X</italic>. For high probabilities of the conditional premise the inference is inconclusive. In classical logic and in the proof-logical approach of Rips the inference is valid.</p>
<p>Here is a second example (Example M in Rips, 1994, p. 151):</p>
<disp-formula id="E20"><mml:math id="M101"><mml:mtable columnalign='left'><mml:mtr><mml:mtd><mml:mo>&#x000AC;</mml:mo><mml:mi>A</mml:mi></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mi>B</mml:mi></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mover accent='true'><mml:mrow><mml:mo>&#x000AC;</mml:mo><mml:mo stretchy='false'>(</mml:mo><mml:mo stretchy='false'>(</mml:mo><mml:mi>A</mml:mi><mml:mo>&#x02227;</mml:mo><mml:mi>C</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x02227;</mml:mo><mml:mo stretchy='false'>(</mml:mo><mml:mi>B</mml:mi><mml:mo>&#x02228;</mml:mo><mml:mi>D</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo stretchy='true'>&#x000AF;</mml:mo></mml:mover></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>With <italic>P</italic>(&#x000AC;<italic>A</italic>) &#x0003D; <italic>x</italic> and <italic>P</italic>(<italic>B</italic>) &#x0003D; <italic>y</italic> the probability of the conclusion is in the interval <italic>z</italic>&#x02208;[max{0, <italic>x</italic>&#x0002B;<italic>y</italic>&#x02212;1}, 1]. The lower probability is the same as the lower probability of a conjunction. If <italic>x</italic> and <italic>y</italic> are less than 0.5, then the inference is noninformative and the distribution of the probability of being coherent is a uniform distribution. The inference was endorsed by only 22.2% of the participants.</p>
<p>We next turn to an example from the judgment under uncertainty domain. It may be considered as an example of Ockham&#x00027;s razor (Tweney et al., <xref ref-type="bibr" rid="B78">2010</xref>) where less is more.</p>
</sec>
<sec>
<title>3.3. The Doherty task</title>
<p>For the conjunction of <italic>n</italic> events we have: If <italic>P</italic>(<italic>D</italic><sub><italic>i</italic></sub>) &#x0003D; <italic>&#x003B1;</italic><sub><italic>i</italic></sub> for <italic>i</italic> &#x0003D; 1, &#x02026;, <italic>n</italic>, then</p>
<disp-formula id="E13"><label>(10)</label><mml:math id="M17"><mml:mi>P</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>D</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x02227;</mml:mo><mml:msub><mml:mi>D</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x02227;</mml:mo><mml:mo>&#x02026;</mml:mo><mml:mo>&#x02227;</mml:mo><mml:msub><mml:mi>D</mml:mi><mml:mi>n</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x02208;</mml:mo><mml:mrow><mml:mo>[</mml:mo> <mml:mrow><mml:mi>max</mml:mi><mml:mrow><mml:mo>{</mml:mo> <mml:mrow><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>n</mml:mi></mml:munderover><mml:mrow><mml:msub><mml:mi>&#x003B1;</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mstyle><mml:mo>&#x02212;</mml:mo><mml:mo stretchy='false'>(</mml:mo><mml:mi>n</mml:mi><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='false'>&#x0007D;</mml:mo><mml:mo>,</mml:mo><mml:mi>min</mml:mi><mml:mo stretchy='false'>&#x0007B;</mml:mo><mml:msub><mml:mi>&#x003B1;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>&#x0007D;</mml:mo></mml:mrow> <mml:mo>}</mml:mo></mml:mrow></mml:mrow> <mml:mo>]</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:math></disp-formula>
<p>This is a straightforward generalization of the elementary conjunction rule. Such generalizations were first investigated by Gilio (<xref ref-type="bibr" rid="B27">2012</xref>) and are also studied in Wallmann and Kleiter (<xref ref-type="bibr" rid="B81">2012a</xref>,<xref ref-type="bibr" rid="B82">b</xref>, <xref ref-type="bibr" rid="B80">2014a</xref>,<xref ref-type="bibr" rid="B83">b</xref>). These generalizations have a psychologically interesting property, the phenomenon called <italic>degradation</italic>: as <italic>n</italic>, the number of events in the generalization, increases, the inferences become less and less informative. More information leads to less conclusive inferences.</p>
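Degradation is easy to see by computing the interval of Equation (10) for growing <italic>n</italic>. A minimal Python sketch (the function name is ours, the value 0.9 illustrative):

```python
def conjunction_interval(alphas):
    # Eq. (10): coherent interval for P(D1 and D2 and ... and Dn)
    # given the marginal probabilities alphas = [a1, ..., an]
    n = len(alphas)
    lower = max(0.0, sum(alphas) - (n - 1))
    upper = min(alphas)
    return lower, upper

# Degradation: with every P(Di) = 0.9 the lower bound sinks toward 0
for n in (2, 5, 10):
    print(n, conjunction_interval([0.9] * n))
```

With two events the interval is [0.8, 0.9]; with ten events of the same high marginal probability the lower bound has already collapsed to 0, so the inference is nearly vacuous.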
<p>An example in the field of judgment under uncertainty is the so called pseudodiagnosticity task introduced by Michael Doherty (Doherty et al., <xref ref-type="bibr" rid="B14">1979</xref>, <xref ref-type="bibr" rid="B13">1996</xref>; Tweney et al., <xref ref-type="bibr" rid="B78">2010</xref>; Kleiter, <xref ref-type="bibr" rid="B43">2013</xref>). It was analyzed with second-order distribution by Kleiter (<xref ref-type="bibr" rid="B44">2015</xref>).</p>
<disp-quote><p>Assume you are a physician and you are 50% sure that one of your patients is suffering from disease <italic>H</italic>, <italic>P</italic>(<italic>H</italic>) &#x0003D; 0.5. You know that the probability that if the patient is suffering from <italic>H</italic>, the patient shows symptom <italic>D</italic><sub>1</sub> is 0.7, <italic>P</italic>(<italic>D</italic><sub>1</sub>|<italic>H</italic>) &#x0003D; 0.7. You may obtain just one more piece of information. There are three options:
<list list-type="order">
<list-item><p><italic>P</italic>(<italic>D</italic><sub>2</sub>|<italic>H</italic>), the probability of a second symptom given the presence of the disease,</p></list-item>
<list-item><p><italic>P</italic>(<italic>D</italic><sub>1</sub>|&#x000AC;<italic>H</italic>), the probability of the first symptom given the absence of the disease, or</p></list-item>
<list-item><p><italic>P</italic>(<italic>D</italic><sub>2</sub>|&#x000AC;<italic>H</italic>), the probability of the second symptom given the absence of the disease.</p></list-item></list></p>
<p>What is your choice?</p>
</disp-quote>
<p>Most people select <italic>P</italic>(<italic>D</italic><sub>2</sub>|<italic>H</italic>). Actually <italic>P</italic>(<italic>D</italic><sub>1</sub>|&#x000AC;<italic>H</italic>) is the best choice. With <italic>P</italic>(<italic>D</italic><sub>1</sub>|&#x000AC;<italic>H</italic>) Bayes&#x00027; theorem gives the posterior probability</p>
<disp-formula id="E14"><label>(11)</label><mml:math id="M18"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>H</mml:mi><mml:mo>|</mml:mo><mml:msub><mml:mrow><mml:mi>D</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>H</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>D</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>|</mml:mo><mml:mi>H</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>H</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>D</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>|</mml:mo><mml:mi>H</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>-</mml:mo><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>H</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>D</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>|</mml:mo><mml:mo>&#x000AC;</mml:mo><mml:mi>H</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:mfrac><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Before any of the three options is selected, the posterior probability is in the interval (Tweney et al., <xref ref-type="bibr" rid="B78">2010</xref>)</p>
<disp-formula id="E15"><label>(12)</label><mml:math id="M19"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>H</mml:mi><mml:mo>|</mml:mo><mml:msub><mml:mrow><mml:mi>D</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x02208;</mml:mo><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>H</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>D</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>|</mml:mo><mml:mi>H</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>H</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>D</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>|</mml:mo><mml:mi>H</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn><mml:mo>-</mml:mo><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>H</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:mfrac><mml:mo>,</mml:mo><mml:mtext>&#x000A0;</mml:mtext><mml:mn>1</mml:mn></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
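Equations (11) and (12) can be checked with the task's own numbers. A minimal Python sketch (function name ours): the lower bound of interval (12) is obtained from Bayes' theorem by setting <italic>P</italic>(<italic>D</italic><sub>1</sub>|&#x000AC;<italic>H</italic>) to its worst-case value 1, and the upper bound 1 corresponds to <italic>P</italic>(<italic>D</italic><sub>1</sub>|&#x000AC;<italic>H</italic>) = 0.

```python
def posterior(p_h, p_d_h, p_d_not_h):
    # Eq. (11): Bayes' theorem once P(D1|not-H) is available
    num = p_h * p_d_h
    return num / (num + (1.0 - p_h) * p_d_not_h)

# Task values: P(H) = 0.5, P(D1|H) = 0.7
p_h, p_d1_h = 0.5, 0.7

# Eq. (12): before any option is chosen, only the bounds are fixed
lower_bound = posterior(p_h, p_d1_h, 1.0)   # worst case P(D1|not-H) = 1
print(round(lower_bound, 3))  # 0.412
```

Choosing <italic>P</italic>(<italic>D</italic><sub>1</sub>|&#x000AC;<italic>H</italic>) pins the posterior down inside [0.412, 1], which is why it is the diagnostic choice.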
<p>If, however, as most participants do, <italic>P</italic>(<italic>D</italic><sub>2</sub>|<italic>H</italic>) is selected, then the interval is</p>
<disp-formula id="E16"><label>(13)</label><mml:math id="M20"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>H</mml:mi><mml:mo>|</mml:mo><mml:msub><mml:mrow><mml:mi>D</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>D</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x02208;</mml:mo><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>H</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>D</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>D</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>|</mml:mo><mml:mi>H</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>H</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>D</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>D</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>|</mml:mo><mml:mi>H</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn><mml:mo>-</mml:mo><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>H</mml:mi></mml:mrow><mml:mo 
stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:mfrac><mml:mo>,</mml:mo><mml:mtext>&#x000A0;</mml:mtext><mml:mn>1</mml:mn></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>The interval in (13) is wider than the interval in (12) as</p>
<disp-formula id="E17"><mml:math id="M21"><mml:mrow><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>D</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>D</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>|</mml:mo><mml:mi>H</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x02264;</mml:mo><mml:mo class="qopname">min</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>D</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>|</mml:mo><mml:mi>H</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>D</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>|</mml:mo><mml:mi>H</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo>}</mml:mo></mml:mrow><mml:mo>&#x02264;</mml:mo><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>D</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>|</mml:mo><mml:mi>H</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula>
<p>Selecting <italic>P</italic>(<italic>D</italic><sub>1</sub>|&#x000AC;<italic>H</italic>) results in a precise point probability while selecting <italic>P</italic>(<italic>D</italic><sub>2</sub>|<italic>H</italic>) results in an interval that is <italic>wider</italic> than the initial one.</p>
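The intervals in Equations (11) and (12) can be reproduced with a small numerical sketch (an illustration, not part of the original article; the function name is ours):

```python
def posterior_interval(prior, likelihood):
    """Interval for P(H|D) as in Eq. (12), when P(D|not-H) is unknown.

    Plugging P(D|not-H) = 1 into Bayes' theorem (Eq. 11) gives the
    lower bound; P(D|not-H) = 0 gives the upper bound 1.
    """
    lower = prior * likelihood / (prior * likelihood + 1 - prior)
    return lower, 1.0
```

Since P(D1, D2|H) &#x02264; P(D1|H), calling the function with the joint likelihood yields a smaller lower bound, i.e., a wider interval, as stated above.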
<p>If we continue to select only the &#x0201C;affirmative&#x0201D; likelihoods given <italic>H</italic> and not those given &#x000AC;<italic>H</italic>, then the intervals get wider and wider and after a few more steps become noninformative, that is, [0, 1]. The additional information imports noise. Figure <xref ref-type="fig" rid="F6">6</xref> shows an example for <italic>P</italic>(<italic>H</italic>) &#x0007E; <italic>Be</italic>(5, 5), <italic>P</italic>(<italic>D</italic><sub><italic>i</italic></sub>|<italic>H</italic>) &#x0007E; <italic>Be</italic>(20, 10), and <italic>P</italic>(<italic>D</italic><sub><italic>i</italic></sub>|&#x000AC;<italic>H</italic>) &#x0007E; <italic>Be</italic>(1, 1). For <italic>i</italic> &#x0003D; 1 there is one posterior distribution, the lower and the upper distributions coincide; for <italic>i</italic> &#x0003D; 3 and <italic>i</italic> &#x0003D; 4 the lower and upper distributions get close to 0 and 1. The probability of being coherent becomes a uniform distribution. One reason contributing to the degradation effect is that the probabilities of the conjunctions <italic>P</italic>(<italic>D</italic><sub>1</sub>|<italic>H</italic>)&#x02227;&#x02026;&#x02227;<italic>P</italic>(<italic>D</italic><sub><italic>n</italic></sub>|<italic>H</italic>) and <italic>P</italic>(<italic>D</italic><sub>1</sub>|&#x000AC;<italic>H</italic>)&#x02227;&#x02026;&#x02227;<italic>P</italic>(<italic>D</italic><sub><italic>n</italic></sub>|&#x000AC;<italic>H</italic>) are unknown.</p>
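The degradation effect can be illustrated by a Monte Carlo sketch with the distributions above. One caveat: because the probabilities of the conjunctions are unknown, the sketch simply multiplies the affirmative likelihoods, an independence assumption made here only for illustration:

```python
import random

def mean_interval_width(n_symptoms, n_draws=20000):
    """Average width of the posterior interval for P(H) when only the
    affirmative likelihoods are selected: prior ~ Be(5, 5), each
    P(D_i|H) ~ Be(20, 10), P(D_i|not-H) unknown (upper bound is 1).
    The joint affirmative likelihood is taken as a product, an
    independence assumption made for this illustration only.
    """
    total = 0.0
    for _ in range(n_draws):
        prior = random.betavariate(5, 5)
        lik = 1.0
        for _ in range(n_symptoms):
            lik *= random.betavariate(20, 10)
        lower = prior * lik / (prior * lik + 1 - prior)
        total += 1.0 - lower          # width of the interval [lower, 1]
    return total / n_draws
```

The mean widths grow with every added symptom, approaching the noninformative interval [0, 1].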
<fig id="F6" position="float">
<label>Figure 6</label>
<caption><p>Degradation in the Doherty task. <bold>(Top left panel)</bold> The symmetric beta distributions <italic>Be</italic>(5, 5) of <italic>P</italic>(<italic>H</italic>) (blue), <italic>Be</italic>(20, 10) for <italic>P</italic>(<italic>D</italic>|<italic>H</italic>) (red), and the uniform distribution <italic>Be</italic>(1, 1) for <italic>P</italic>(<italic>D</italic>|&#x000AC;<italic>H</italic>) (black). <bold>(Top right panel)</bold> Second-order posterior distribution of the probability of <italic>H</italic> when the distributions of the likelihoods <italic>P</italic>(<italic>D</italic>|<italic>H</italic>) and <italic>P</italic>(<italic>D</italic>|&#x000AC;<italic>H</italic>) are known. <bold>(Bottom panels)</bold> Lower and upper distributions of the probability of <italic>H</italic> when the distributions of the likelihoods of three <bold>(Left panel)</bold> and four <bold>(Right panel)</bold> symptoms are known; all likelihood distributions are <italic>Be</italic>(20, 10) and <italic>Be</italic>(1, 1), respectively. The black line shows the probability of being coherent.</p></caption>
<graphic xlink:href="fpsyg-09-02051-g0006.tif"/>
</fig>
<p>The Doherty task demonstrates why we should compare the results from experimental groups with those from control groups. The preference for selecting only the affirmative likelihood is seen as a confirmation bias: people do not consider alternative hypotheses. The phenomenon that more information may induce more imprecision has been studied in Wallmann and Kleiter (<xref ref-type="bibr" rid="B81">2012a</xref>,<xref ref-type="bibr" rid="B82">b</xref>, <xref ref-type="bibr" rid="B80">2014a</xref>,<xref ref-type="bibr" rid="B83">b</xref>) and Kleiter (<xref ref-type="bibr" rid="B43">2013</xref>).</p>
<p>Technically the analysis of a <italic>multivariate</italic> problem like the Doherty task requires stochastic simulation in <italic>vines</italic>. &#x0201C;Vines are graphical structures that represent joint probabilistic distributions. They were named for their close visual resemblance to grapes &#x02026;&#x0201D; (Kurowicka and Joe, <xref ref-type="bibr" rid="B51">2011</xref>, p. 1). Vines may be compared to Bayesian networks. In psychology, Bayesian networks have been used, for example, to model uncertain reasoning (Oaksford and Chater, <xref ref-type="bibr" rid="B58">2007</xref>), causal reasoning (Tenenbaum et al., <xref ref-type="bibr" rid="B75">2007</xref>), word learning (Xu and Tenenbaum, <xref ref-type="bibr" rid="B86">2007</xref>), and cognitive development (Gopnik and Tenenbaum, <xref ref-type="bibr" rid="B29">2007</xref>). Bayesian networks encode conditional independencies and represent the (usually precise) joint probabilities in tables. Vines encode marginal probabilities and (partial) correlations, or more generally, copulas. Psychologically it is more plausible that humans encode multivariate uncertain structures by their (conditional) dependencies and not by their (conditional) independencies. Moreover, encoding marginal probabilities is much easier than encoding multivariate probability tables. There is no space here for further speculations. For the mathematical treatment of vines the reader is referred to Kurowicka and Cooke (<xref ref-type="bibr" rid="B49">2004</xref>, <xref ref-type="bibr" rid="B50">2006</xref>), Kurowicka and Joe (<xref ref-type="bibr" rid="B51">2011</xref>), and Mai and Scherer (<xref ref-type="bibr" rid="B55">2012</xref>).</p>
<p>A psychologically interesting difference between Bayesian networks and vines is that vines encode dependencies &#x0201C;directly&#x0201D; by (partial) correlations (actually copulas) and not by conditional probabilities. It is highly plausible (but seldom investigated) that humans encode the strength of a dependence not by a probability table but by a one-dimensional quantity.</p>
<p>While Bayesian networks rely on (conditional) independence assumptions, vines rely on <italic>copulas</italic>. Copulas encode dependencies. To keep the present text simple we use Gaussian copulas (correlations) only (see Equation 4). The recent advances in the theory of copulas and vines, and the development of software for the <italic>simulation</italic> methods, make it possible to model multivariate imprecise inference. There is not enough space here to discuss a more complex example, but see the study of Doherty&#x00027;s pseudodiagnosticity task in Kleiter (<xref ref-type="bibr" rid="B44">2015</xref>). The suppression task in the following section involves three variables.</p>
</sec>
<sec>
<title>3.4. Suppression task</title>
<p>The Suppression Task was introduced by Byrne (<xref ref-type="bibr" rid="B10">1989</xref>). She observed that while a simple <sc>modus ponens</sc> is endorsed by nearly all people, the endorsement decreases substantially when an <italic>additional</italic> conditional premise is introduced. The additional premise <italic>suppresses</italic> the acceptance of the conclusion. Table <xref ref-type="table" rid="T4">4</xref> shows Byrne&#x00027;s by now classical example:</p>
<table-wrap position="float" id="T4">
<label>Table 4</label>
<caption><p>The various premises and the conclusion in the Suppression Task.</p></caption>
<table frame="hsides" rules="groups">
<tbody>
<tr>
<td valign="top" align="left"><italic>P</italic>1</td>
<td valign="top" align="left">Main conditional</td>
<td valign="top" align="left">If Mary has an essay to write, then she will study late in the library.</td>
</tr>
<tr>
<td valign="top" align="left"><italic>P</italic>2<italic>a</italic></td>
<td valign="top" align="left">Additional conditional</td>
<td valign="top" align="left">If the library is open, then she will study late in the library.</td>
</tr>
<tr>
<td valign="top" align="left"><italic>P</italic>2<italic>b</italic></td>
<td valign="top" align="left">Alternative conditional</td>
<td valign="top" align="left">If Mary has some textbook to read, then she will study late in the library.</td>
</tr>
<tr>
<td valign="top" align="left"><italic>P</italic>3</td>
<td valign="top" align="left">Categorical premise</td>
<td valign="top" align="left">Mary has an essay to write.</td>
</tr>
<tr>
<td valign="top" align="left"><italic>C</italic></td>
<td valign="top" align="left">Conclusion</td>
<td valign="top" align="left">Mary will study late in the library.</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>The simple <sc>modus ponens</sc> &#x0201C;from {<italic>P</italic>1, <italic>P</italic>3} infer <italic>C</italic>&#x0201D; is endorsed by 96% of the participants in Byrne&#x00027;s Experiment 1. When the additional premise <italic>P</italic>2<italic>a</italic> is included, &#x0201C;from {<italic>P</italic>1, <italic>P</italic>2<italic>a, P</italic>3} infer <italic>C</italic>,&#x0201D; the endorsement drops to 38%. When the alternative premise <italic>P</italic>2<italic>b</italic> is introduced, &#x0201C;from {<italic>P</italic>1, <italic>P</italic>2<italic>b, P</italic>3} infer <italic>C</italic>,&#x0201D; the endorsement is the same as for the simple <sc>modus ponens</sc>.</p>
<p>In an abstract formal system the second premise is logically and probabilistically irrelevant. It has no impact upon the conclusion, neither upon its truth nor upon its probability. Attending to the semantic content of the conditional premises, however, leads to a reinterpretation of the inferences. The conditionals <italic>P</italic>1 and <italic>P</italic>2 have the same consequent and Mary can only study late in the library if the library is open. Thus for the additional conditional the semantic content (Byrne, <xref ref-type="bibr" rid="B10">1989</xref>) invites a conjunctive interpretation of the antecedent, {if <italic>A</italic>&#x02227;<italic>B</italic> then <italic>C, A</italic>}. The alternative conditional P2b, however, invites a disjunctive interpretation of the antecedent, {if <italic>A</italic> &#x02228; <italic>B</italic> then <italic>C, A</italic>}.</p>
<p>The distributional interpretations of the three inferences are:
<list list-type="order">
<list-item><p>Simple <sc>modus ponens</sc>: <italic>P</italic>(<italic>C</italic>|<italic>A</italic>) &#x0003D; <italic>X</italic>, <italic>P</italic>(<italic>A</italic>) &#x0003D; <italic>Y</italic>.</p></list-item>
<list-item><p>Conjunctive antecedent: <italic>P</italic>(<italic>C</italic>|<italic>A</italic> &#x02227; <italic>B</italic>) &#x0003D; <italic>X</italic>, <italic>P</italic>(<italic>A</italic> &#x02227; <italic>B</italic>) &#x0003D; <italic>Y</italic>. We note that if <italic>P</italic>(<italic>A</italic>) &#x0003D; <italic>x</italic> and <italic>P</italic>(<italic>B</italic>) is unknown and thus may have any value between 0 and 1, <italic>P</italic>(<italic>A</italic> &#x02227; <italic>B</italic>) is in the interval [0, <italic>x</italic>]. The bounds for the <sc>modus ponens</sc> are <italic>z</italic>&#x02208;[0, 1&#x02212;<italic>x</italic>&#x0002B;<italic>xy</italic>].</p></list-item>
<list-item><p>Disjunctive antecedent: <italic>P</italic>(<italic>C</italic>|<italic>A</italic> &#x02228; <italic>B</italic>) &#x0003D; <italic>X</italic>, <italic>P</italic>(<italic>A</italic> &#x02228; <italic>B</italic>) &#x0003D; <italic>Y</italic>. <italic>P</italic>(<italic>B</italic>) is unknown and <italic>P</italic>(<italic>A</italic> &#x02228; <italic>B</italic>) may have any value in the interval [<italic>x</italic>, 1]. The bounds for the <sc>modus ponens</sc> are <italic>z</italic>&#x02208;[<italic>xy, y</italic>].</p></list-item>
</list></p>
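The bounds stated in the three items can be collected in a small helper function (a sketch; the bounds for the simple <sc>modus ponens</sc> are the standard coherent bounds [<italic>xy</italic>, 1 &#x02212; <italic>x</italic> &#x0002B; <italic>xy</italic>], the other two are copied from the items above):

```python
def mp_bounds(x, y, interpretation):
    """Bounds for z = P(C) under the three readings listed above.

    x is the probability of the (possibly compound) antecedent premise,
    y the conditional probability; P(B) is unknown.
    """
    if interpretation == "simple":       # P(C|A) = y, P(A) = x
        return x * y, 1 - x + x * y
    if interpretation == "conjunctive":  # P(A and B) in [0, x]
        return 0.0, 1 - x + x * y
    if interpretation == "disjunctive":  # P(A or B) in [x, 1]
        return x * y, y
    raise ValueError(interpretation)
```

For x = 0.6 and y = 0.8 the three intervals are [0.48, 0.88], [0, 0.88], and [0.48, 0.8]: the conjunctive reading widens and the disjunctive reading narrows the simple <sc>modus ponens</sc> interval.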
<p>Figure <xref ref-type="fig" rid="F7">7</xref> shows the distributions of the lower and the upper bounds and of the probability of being coherent. The example uses the following distributions: (1) For the simple <sc>modus ponens</sc>, <italic>P</italic>(<italic>A</italic>) &#x0003D; <italic>X</italic> &#x0007E; <italic>Be</italic>(10, 5) and <italic>P</italic>(<italic>C</italic>|<italic>A</italic>) &#x0003D; <italic>Y</italic> &#x0007E; <italic>Be</italic>(20, 5). (2) For the conjunctive interpretation (additional conditional), <italic>P</italic>(<italic>A</italic> &#x02227; <italic>B</italic>) &#x0003D; <italic>X</italic> &#x0007E; <italic>Be</italic>(10, 5) and <italic>P</italic>(<italic>C</italic>|<italic>A</italic> &#x02227; <italic>B</italic>) &#x0003D; <italic>Y</italic> &#x0007E; <italic>Be</italic>(20, 5). (3) For the disjunctive interpretation (alternative conditional), <italic>P</italic>(<italic>A</italic> &#x02228; <italic>B</italic>) &#x0003D; <italic>X</italic> &#x0007E; <italic>Be</italic>(10, 5) and <italic>P</italic>(<italic>C</italic>|<italic>A</italic> &#x02228; <italic>B</italic>) &#x0003D; <italic>Y</italic> &#x0007E; <italic>Be</italic>(20, 5).</p>
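A minimal simulation in the spirit of Figure 7 (a sketch, not the original simulation code) samples the premise probabilities from the beta distributions just given and averages the resulting bounds:

```python
import random

def mean_mp_bounds(interpretation, n=20000):
    """Mean lower and upper bounds of P(C) when the premise
    probabilities are themselves random: X ~ Be(10, 5) for the
    antecedent premise, Y ~ Be(20, 5) for the conditional,
    as in the text.
    """
    lo_sum = up_sum = 0.0
    for _ in range(n):
        x = random.betavariate(10, 5)
        y = random.betavariate(20, 5)
        if interpretation == "simple":
            lo, up = x * y, 1 - x + x * y
        elif interpretation == "conjunctive":
            lo, up = 0.0, 1 - x + x * y   # lower bound degenerate at 0
        else:                             # disjunctive
            lo, up = x * y, y
        lo_sum += lo
        up_sum += up
    return lo_sum / n, up_sum / n
```

The conjunctive interpretation produces the degenerate lower bound at zero seen in the bottom left panel; the disjunctive interpretation stays close to the simple <sc>modus ponens</sc>.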
<fig id="F7" position="float">
<label>Figure 7</label>
<caption><p><sc>Modus ponens</sc> in the Suppression Task. <bold>(Top panels)</bold> Probability distributions of the premises <italic>P</italic>(<italic>C</italic>|<italic>A</italic>) &#x0007E; <italic>Be</italic>(15, 3) <bold>(Right)</bold> and <italic>P</italic>(<italic>A</italic>) &#x0007E; <italic>Be</italic>(6, 4) <bold>(Left)</bold>. Simple <sc>modus ponens</sc>: Lower and upper histograms of the probability of the conclusion <italic>P</italic>(<italic>C</italic>). <bold>(Bottom panels, Left)</bold> The premises are interpreted as a conjunction, <italic>P</italic>(<italic>C</italic>|<italic>A</italic> &#x02227; <italic>B</italic>) and <italic>P</italic>(<italic>A</italic> &#x02227; <italic>B</italic>). <bold>(Right)</bold> The premises are interpreted as a disjunction, <italic>P</italic>(<italic>C</italic>|<italic>A</italic> &#x02228; <italic>B</italic>) and <italic>P</italic>(<italic>A</italic> &#x02228; <italic>B</italic>).</p></caption>
<graphic xlink:href="fpsyg-09-02051-g0007.tif"/>
</fig>
<p>In the figure the simple <sc>modus ponens</sc> and the disjunctive antecedent (If Mary has an essay to write or if Mary has a textbook to read) lead to very similar results. The conjunctive antecedent (If Mary has an essay to write and if the library is open) leads to a very flat distribution. The distribution of the lower bound is degenerate at zero. The probability of the conjunction is much lower than the probability of the disjunction.</p>
<p>The distributional approach models the results of the Suppression Task pretty well. Moreover, it provides quantitative predictions for the differences in the various experimental conditions.</p>
<p>The suppositional interpretation of an &#x0201C;if H then E&#x0201D; sentence assumes <italic>H</italic> to be true. Likewise, in a conditional probability <italic>P</italic>(<italic>E</italic>|<italic>H</italic>) the event <italic>H</italic> is assumed to be true. Jeffrey pointed to cases where observations are blurred. Under candle light the color of an object may be ambiguous. How should one condition on soft evidence? Jeffrey was the pioneer of the analysis of soft evidence, to which we turn next.</p>
</sec>
<sec>
<title>3.5. Soft evidence</title>
<p>Usually <italic>conditioning</italic> updates probabilities in the light of <italic>hard</italic> evidence, that is, the conditioning event is supposed to be <italic>true</italic>. But what if the conditioning event is only uncertain? Jeffrey introduced &#x0201C;Jeffrey&#x00027;s rule,&#x0201D; a proposal of how to update probabilities by <italic>soft</italic> evidence (Jeffrey, <xref ref-type="bibr" rid="B34">1965</xref>, <xref ref-type="bibr" rid="B35">1992</xref>, <xref ref-type="bibr" rid="B36">2004</xref>). Historically the problem was already posed by Donkin (<xref ref-type="bibr" rid="B15">1851</xref>), and his solution is equivalent to Jeffrey&#x00027;s rule (for a proof see Draheim, <xref ref-type="bibr" rid="B16">2017</xref>). Draheim gives an overview of the literature in Appendix A of his monograph. Jeffrey&#x00027;s rule has been criticized by several authors (Levi, <xref ref-type="bibr" rid="B53">1967</xref>; Diaconis and Zabell, <xref ref-type="bibr" rid="B12">1982</xref>; Wedlin, <xref ref-type="bibr" rid="B84">1996</xref>; Halpern, <xref ref-type="bibr" rid="B32">2003</xref>; Jaynes, <xref ref-type="bibr" rid="B33">2003</xref>). The rule is <italic>non-commutative</italic>, i.e., it is not invariant with respect to the order of updating. Moreover, it involves an independence assumption. For a psychological investigation of Jeffrey&#x00027;s rule see Hadjichristidis et al. (<xref ref-type="bibr" rid="B31">2014</xref>).</p>
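For a binary partition {<italic>E</italic>, &#x000AC;<italic>E</italic>}, Jeffrey's rule reads <italic>P</italic>*(<italic>H</italic>) &#x0003D; <italic>P</italic>(<italic>H</italic>|<italic>E</italic>)<italic>P</italic>*(<italic>E</italic>) &#x0002B; <italic>P</italic>(<italic>H</italic>|&#x000AC;<italic>E</italic>)(1 &#x02212; <italic>P</italic>*(<italic>E</italic>)), where <italic>P</italic>*(<italic>E</italic>) is the soft evidence and the conditionals are held fixed (the rigidity assumption behind the independence criticism just mentioned). A one-line sketch:

```python
def jeffrey_update(p_h_given_e, p_h_given_not_e, p_star_e):
    """Jeffrey's rule for a binary partition {E, not-E}:
    P*(H) = P(H|E) * P*(E) + P(H|not-E) * (1 - P*(E)).
    The conditionals are held fixed (rigidity)."""
    return p_h_given_e * p_star_e + p_h_given_not_e * (1 - p_star_e)
```

With hard evidence <italic>P</italic>*(<italic>E</italic>) &#x0003D; 1 the rule reduces to ordinary conditioning on <italic>E</italic>.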
<p>In the present approach it is straightforward to update probabilities by evidence that is only probable. We have two random variables <italic>X</italic> and <italic>Y</italic> (first-order probabilities). We want to know the (second-order) distribution of <italic>Y</italic> given a fixed value of <italic>X</italic>. The problem is analogous to a regression problem in statistics: the distribution of <italic>Y</italic> is predicted on the basis of a given value of <italic>X</italic> &#x0003D; <italic>x</italic>. The distributional approach offers a direct representation of Jeffrey&#x00027;s problem.</p>
<p>Figure <xref ref-type="fig" rid="F8">8</xref> shows a numerical example. The left side shows the unit square [0, 1]<sup>2</sup> and the contour lines of the bivariate joint distribution resulting from two beta marginals and a Spearman copula<xref ref-type="fn" rid="fn0006"><sup>6</sup></xref>. The right side shows the two marginals and the distribution of <italic>Y</italic> at <italic>X</italic> &#x0003D; 0.9. The contour lines and the distribution at the cutting point 0.9 are obtained by stochastic simulation.</p>
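The simulation behind such an example can be sketched as follows. The sketch uses a Gaussian copula (as in Equation 4; the figure itself uses a Spearman copula) together with an empirical quantile transform onto the beta marginals; all function names are ours:

```python
import math
import random

def copula_conditional(n=40000, rho=0.5, x0=0.9, window=0.05):
    """Sample (X, Y) with X ~ Be(9, 3), Y ~ Be(4, 4) coupled by a
    Gaussian copula with correlation rho, and read off the conditional
    distribution of Y near X = x0 empirically.
    Returns (overall mean of Y, conditional mean of Y near x0)."""
    phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))  # N(0,1) cdf
    # bivariate standard normal -> copula uniforms
    z1 = [random.gauss(0, 1) for _ in range(n)]
    z2 = [rho * a + math.sqrt(1 - rho ** 2) * random.gauss(0, 1) for a in z1]
    u1, u2 = [phi(z) for z in z1], [phi(z) for z in z2]
    # empirical quantile transform onto the beta marginals
    xq = sorted(random.betavariate(9, 3) for _ in range(n))
    yq = sorted(random.betavariate(4, 4) for _ in range(n))
    x = [xq[min(n - 1, int(u * n))] for u in u1]
    y = [yq[min(n - 1, int(u * n))] for u in u2]
    # condition on soft evidence: keep y-values with x close to x0
    cond_y = [yi for xi, yi in zip(x, y) if abs(xi - x0) < window]
    return sum(y) / n, sum(cond_y) / len(cond_y)
```

With a positive correlation, conditioning on the high value <italic>x</italic><sub>0</sub> &#x0003D; 0.9 shifts the distribution of <italic>Y</italic> above its marginal mean of 0.5, as in the right panel of the figure.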
<fig id="F8" position="float">
<label>Figure 8</label>
<caption><p><bold>(Left panel)</bold> Contour lines of the joint distribution with the marginals <italic>X</italic> &#x0007E; <italic>Be</italic>(9, 3) and <italic>Y</italic> &#x0007E; <italic>Be</italic>(4, 4), and Spearman correlation &#x003C4; = 0.5. Regression line at <italic>x</italic> &#x0003D; 0.9 (quantile at 0.5) together with 90% confidence band (quantiles at 0.05 and 0.95). <bold>(Right panel)</bold> The two marginal betas and the conditional distribution <italic>p</italic>(<italic>y</italic>|<italic>x</italic><sub>0</sub> &#x0003D; 0.9) along the vertical line in the contour plot.</p></caption>
<graphic xlink:href="fpsyg-09-02051-g0008.tif"/>
</fig>
</sec>
</sec>
<sec sec-type="discussion" id="s4">
<title>4. Discussion</title>
<p>We have distinguished logical, probabilistic, and statistical principles and argued that a plausible model of human reasoning needs ingredients from all three domains. We have seen that the constraints of probability logic induce only lower and upper probabilities, or lower and upper distributions in the case of imprecision; they do not lead to exact point probabilities, or to just one distribution in the case of imprecision. To overcome this kind of indeterminacy we have introduced the concept of the <italic>probability of being coherent</italic>. One may follow the proposal of Smets (<xref ref-type="bibr" rid="B68">1990</xref>) and distinguish <italic>credal</italic> and <italic>pignistic</italic> degrees of belief, corresponding to the whole distribution for the cognitive representation and to its maximum for selecting just one favorite value. It is rational to base one&#x00027;s decisions on values that attain the maximum probability of being coherent.</p>
<p>We have investigated the differences between the logical conjunction and the conditional. For not too extreme probabilities these differences may be small, so small that it will be impossible to distinguish the two interpretations empirically. We observed that in typical truth table tasks about twenty percent of the participants interpret if-then sentences as conjunctions (Fugard et al., <xref ref-type="bibr" rid="B23">2011</xref>; Kleiter et al., <xref ref-type="bibr" rid="B47">2018</xref>). In the context of everyday conversation the different interpretations would hardly matter. We compared the sensitivity of the differences between the logical operators by the Kullback-Leibler distances between their distributions. The distance of a distribution inferred from a logical argument to the uniform distribution, taken as a standard of ignorance, is an indicator of the informativeness and strength of the argument.</p>
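As an illustration of this Kullback-Leibler indicator for beta-distributed probabilities (an illustration only; the function is ours, not the authors' computation), the divergence to the uniform <italic>Be</italic>(1, 1) equals the expected log density and can be estimated by Monte Carlo:

```python
import math
import random

def kl_to_uniform(a, b, n=50000):
    """Monte Carlo estimate of D(Be(a, b) || Be(1, 1)): since the
    uniform density is 1, D(f || uniform) = E_f[log f(X)]."""
    log_beta = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    def log_density(x):
        return (a - 1) * math.log(x) + (b - 1) * math.log(1 - x) - log_beta
    return sum(log_density(random.betavariate(a, b)) for _ in range(n)) / n
```

A peaked <italic>Be</italic>(20, 10), for instance, lies farther from the standard of ignorance than a flat <italic>Be</italic>(2, 2).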
<p>We recalled that neglecting base rates may be rational under natural sampling conditions. This property holds for beta distributions, their expected values and variances. We have demonstrated how typical tasks of deductive reasoning (Rips, <xref ref-type="bibr" rid="B65">1994</xref>) can be cast into a probabilistic format including imprecision. A paradoxical property is observed in Doherty&#x00027;s information seeking task (Doherty et al., <xref ref-type="bibr" rid="B14">1979</xref>; Tweney et al., <xref ref-type="bibr" rid="B78">2010</xref>; Kleiter, <xref ref-type="bibr" rid="B44">2015</xref>): Sampling more and more information from just one experimental condition, without sampling from a control condition, leads to less and less precise conclusions. The suppression task (Byrne, <xref ref-type="bibr" rid="B10">1989</xref>) was among the first tasks framed and analyzed in a probabilistic format (Stevenson and Over, <xref ref-type="bibr" rid="B73">1995</xref>). Expressing the implicit assumptions by second-order probability distributions predicts the empirical results reported in the literature. Jeffrey&#x00027;s proposal of how to update probabilities by uncertain evidence is well known as Jeffrey&#x00027;s rule (Jeffrey, <xref ref-type="bibr" rid="B34">1965</xref>). In a bivariate model with two first-order probabilities <italic>X</italic> and <italic>Y</italic> treated as random variables the problem becomes a typical regression problem, predicting the distribution of <italic>Y</italic> given a value of <italic>X</italic>.</p>
<p>Gigerenzer et al. (<xref ref-type="bibr" rid="B25">1991</xref>) proposed a probabilistic mental model (PMM) of confidence judgments. The model was introduced and demonstrated by the experimental paradigm of <italic>city size judgments</italic>. In the first of two experiments twenty-five German cities with more than 100,000 inhabitants were selected. Participants were presented with all 300 pairs of the cities and asked to decide which one has more inhabitants. In addition, the participants rated how sure they were that each of their choices was correct.</p>
<p>Using just one quantitative property, city size, underlying all questions in the experimental procedure introduced a big difference with respect to the general knowledge almanac questions widely used in other studies of overconfidence<xref ref-type="fn" rid="fn0007"><sup>7</sup></xref>.</p>
<p>The data may be looked at from the perspective of the method of paired comparison (Thurstone, <xref ref-type="bibr" rid="B76">1927</xref>). Processing the data with Thurstone&#x00027;s probabilistic model of paired comparison one would introduce a normal distribution for the size of each of the cities. Such a probability distribution models the participant&#x00027;s knowledge about the size of a city and the precision of this knowledge. The confidence judgment then becomes a function of the differences in the location and spread of these distributions. The distributions are thus not second order probability distributions, but distributions over a quantitative property, here the number of inhabitants of a city. The property is imprecise (compare the intervals in Figure 2 of Gigerenzer et al., <xref ref-type="bibr" rid="B25">1991</xref>), not the probability<xref ref-type="fn" rid="fn0008"><sup>8</sup></xref>. The same holds for the cues in the PMMs.</p>
<p>I consider the analyses presented in this contribution as part of a thorough task analysis of reasoning tasks. Task analysis is a prerequisite for a good psychological investigation. The results of our analyses show how difficult it may be to run a good reasoning experiment. A major problem, e.g., is how to manipulate and measure imprecision. Another problem is that inferences with the same logical operators or the same logical inference rules may be different for different levels of the probabilities of the premises. High probabilities may lead to one result, low probabilities to a different one. Results may also not be invariant with respect to positive or negative correlations of the involved uncertain quantities and risks.</p>
<p>Modeling imprecise judgments has a long history. It started with Gauss and his analysis of human judgment errors in astronomical observations. It continued in the nineteenth century with Weber&#x00027;s and Fechner&#x00027;s just noticeable differences, thresholds, and psychophysical functions. The probabilistic modeling of sensory data by von Helmholtz pioneered present day&#x00027;s Free Energy Principle. Thurstone introduced the law of comparative judgment. In the second half of the twentieth century signal detection theory, stimulus sampling theory, stochastic choice theory, Brunswik&#x00027;s lens model, stochastic response models, neural networks, and decision theory took up the problem. At the beginning of the twenty-first century computational neuroscience contributed substantially to modeling imprecision in information processing.</p>
<p>Models of the functioning of the brain claim that the neuronal processes underlying cognitive processes like memory, perception, or decision making are inherently <italic>stochastic</italic> and <italic>noisy</italic>. A good example is the work of Rolls and Deco (<xref ref-type="bibr" rid="B66">2010</xref>). Spike trains of neurons follow Poisson distributions, cell assemblies are modeled by mean-field analysis and the dynamics of elementary decision processes are simulated by integrate-and-fire neural networks. The authors observe that &#x0201C;&#x02026;if a decision must be made based on one&#x00027;s confidence about a decision just made, a second decision-making network can read the information encoded in the firing rates of the first decision-making network to make a decision based on confidence &#x02026;&#x0201D; (Rolls and Deco, <xref ref-type="bibr" rid="B66">2010</xref>, p. 167). A probability assessment is a read-out of one&#x00027;s own confidence, the product of an auto-epistemic self-monitoring process (Rolls and Deco, <xref ref-type="bibr" rid="B66">2010</xref>, p.196ff.). The assessment might correspond to the point of maximum probability of being coherent.</p>
<p>Precision plays an important role in the theories of free energy, active inference, and predictive coding (Friston, <xref ref-type="bibr" rid="B21">2010</xref>; Buckley et al., <xref ref-type="bibr" rid="B9">2017</xref>). In a task in which the participants had to decide on the direction of a set of systematically moving dots embedded in a set of randomly moving dots, the precision of the responses was related to the response times. It was shown that the precision of the responses was controlled (among other locations) in the posterior parietal cortex (FitzGerald et al., <xref ref-type="bibr" rid="B19">2015</xref>). Precision may be modulated by neurotransmitters. Friston et al. (<xref ref-type="bibr" rid="B22">2012</xref>), for example, hypothesized that precision is related to dopamine.</p>
<p>In probability logic all operators and inference rules infer interval probabilities. Using conclusions iteratively would require propagating lower and upper probabilities again and again. For a human brain, keeping track of lower and upper bounds would soon become too messy. One way out of the exploding complexity is to simplify and process the probability distributions of being coherent. To use a metaphor: in a cell assembly the distributions may result from the many single cell activations constrained by the coherence criterion.</p>
</sec>
<sec id="s5">
<title>Author contributions</title>
<p>The author confirms being the sole contributor of this work and has approved it for publication.</p>
<sec>
<title>Conflict of interest statement</title>
<p>The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</sec>
</body>
<back>
<sec sec-type="supplementary-material" id="s6">
<title>Supplementary material</title>
<p>The Supplementary Material for this article can be found online at: <ext-link ext-link-type="uri" xlink:href="https://www.frontiersin.org/articles/10.3389/fpsyg.2018.02051/full#supplementary-material">https://www.frontiersin.org/articles/10.3389/fpsyg.2018.02051/full#supplementary-material</ext-link></p>
<supplementary-material xlink:href="Data_Sheet_1.PDF" id="SM1" mimetype="application/pdf" xmlns:xlink="http://www.w3.org/1999/xlink"/>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Adams</surname> <given-names>E. W.</given-names></name></person-group> (<year>1965</year>). <article-title>The logic of conditionals</article-title>. <source>Inquiry</source> <volume>8</volume>, <fpage>166</fpage>&#x02013;<lpage>197</lpage>. <pub-id pub-id-type="doi">10.1080/00201746508601430</pub-id></citation></ref>
<ref id="B2">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Adams</surname> <given-names>E. W.</given-names></name></person-group> (<year>1966</year>). <article-title>Probability and the logic of conditionals</article-title> in <source>Aspects of Inductive Logic</source>, eds <person-group person-group-type="editor"><name><surname>Hintikka</surname> <given-names>J.</given-names></name> <name><surname>Suppes</surname> <given-names>P.</given-names></name></person-group> (<publisher-loc>Amsterdam</publisher-loc>: <publisher-name>North-Holland</publisher-name>), <fpage>265</fpage>&#x02013;<lpage>316</lpage>.</citation></ref>
<ref id="B3">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Aitchison</surname> <given-names>J.</given-names></name> <name><surname>Dunsmore</surname> <given-names>I. R.</given-names></name></person-group> (<year>1975</year>). <source>Statistical Prediction Analysis</source>. <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>.</citation></ref>
<ref id="B4">
<citation citation-type="book"><person-group person-group-type="editor"><name><surname>Augustin</surname> <given-names>T.</given-names></name> <name><surname>Coolen</surname> <given-names>F. P. A.</given-names></name> <name><surname>de Cooman</surname> <given-names>G.</given-names></name> <name><surname>Troffaes</surname> <given-names>M. C. M.</given-names></name></person-group> editors (<year>2014</year>). <source>Introduction to Imprecise Probabilities</source>. <publisher-loc>Chichester</publisher-loc>: <publisher-name>Wiley</publisher-name>.</citation>
</ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Baratgin</surname> <given-names>J.</given-names></name> <name><surname>Over</surname> <given-names>D. E.</given-names></name> <name><surname>Politzer</surname> <given-names>G.</given-names></name></person-group> (<year>2013</year>). <article-title>Uncertainty and the de Finetti tables</article-title>. <source>Think. Reason.</source> <volume>19</volume>, <fpage>308</fpage>&#x02013;<lpage>328</lpage>. <pub-id pub-id-type="doi">10.1080/13546783.2013.809018</pub-id></citation></ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bar-Hillel</surname> <given-names>M.</given-names></name></person-group> (<year>1980</year>). <article-title>The base-rate fallacy in probability judgments</article-title>. <source>Acta Psychol.</source> <volume>44</volume>, <fpage>211</fpage>&#x02013;<lpage>233</lpage>. <pub-id pub-id-type="doi">10.1016/0001-6918(80)90046-3</pub-id></citation></ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bayes</surname> <given-names>T. R.</given-names></name></person-group> (<year>1763/1958</year>). <article-title>An essay towards solving a problem in the doctrine of chances</article-title>. <source>Biometrika</source> <volume>45</volume>, <fpage>296</fpage>&#x02013;<lpage>315</lpage>. <pub-id pub-id-type="doi">10.1093/biomet/45.3-4.296</pub-id>.</citation></ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bellantuono</surname> <given-names>I.</given-names></name></person-group> (<year>2018</year>). <article-title>Find drugs that delay many diseases of old age</article-title>. <source>Nature</source> <volume>554</volume>, <fpage>293</fpage>&#x02013;<lpage>295</lpage>. <pub-id pub-id-type="doi">10.1038/d41586-018-01668-0</pub-id><pub-id pub-id-type="pmid">29446384</pub-id></citation></ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Buckley</surname> <given-names>C. L.</given-names></name> <name><surname>Kim</surname> <given-names>C. S.</given-names></name> <name><surname>McGregor</surname> <given-names>S.</given-names></name> <name><surname>Seth</surname> <given-names>A. K.</given-names></name></person-group> (<year>2017</year>). <article-title>The free energy principle for action and perception: a mathematical review</article-title>. <source>J. Math. Psychol.</source> <volume>81</volume>, <fpage>55</fpage>&#x02013;<lpage>79</lpage>. <pub-id pub-id-type="doi">10.1016/j.jmp.2017.09.004</pub-id></citation></ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Byrne</surname> <given-names>R. M. J.</given-names></name></person-group> (<year>1989</year>). <article-title>Suppressing valid inferences with conditionals</article-title>. <source>Cognition</source> <volume>39</volume>, <fpage>61</fpage>&#x02013;<lpage>83</lpage>. <pub-id pub-id-type="doi">10.1016/0010-0277(89)90018-8</pub-id></citation></ref>
<ref id="B11">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Cuzzolin</surname> <given-names>F.</given-names></name></person-group> (<year>2012</year>). <article-title>Generalizations of the relative belief transform</article-title> in <source>Belief Functions: Theory and Applications. 2nd International Conference on Belief Functions</source>, eds <person-group person-group-type="editor"><name><surname>Denoeux</surname> <given-names>T.</given-names></name> <name><surname>Masson</surname> <given-names>M.</given-names></name></person-group> (<publisher-loc>Berlin</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>109</fpage>&#x02013;<lpage>116</lpage>.</citation></ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Diaconis</surname> <given-names>P.</given-names></name> <name><surname>Zabell</surname> <given-names>S. L.</given-names></name></person-group> (<year>1982</year>). <article-title>Updating subjective probability</article-title>. <source>J. Am. Stat. Assoc.</source> <volume>77</volume>, <fpage>822</fpage>&#x02013;<lpage>830</lpage>. <pub-id pub-id-type="doi">10.1080/01621459.1982.10477893</pub-id></citation></ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Doherty</surname> <given-names>M. E.</given-names></name> <name><surname>Chadwick</surname> <given-names>R.</given-names></name> <name><surname>Garavan</surname> <given-names>H.</given-names></name> <name><surname>Barr</surname> <given-names>D.</given-names></name> <name><surname>Mynatt</surname> <given-names>C. R.</given-names></name></person-group> (<year>1996</year>). <article-title>On people&#x00027;s understanding of the diagnostic implications of probabilistic data</article-title>. <source>Mem. Cogn.</source> <volume>24</volume>, <fpage>644</fpage>&#x02013;<lpage>654</lpage>. <pub-id pub-id-type="pmid">8870533</pub-id></citation></ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Doherty</surname> <given-names>M. E.</given-names></name> <name><surname>Mynatt</surname> <given-names>C. R.</given-names></name> <name><surname>Tweney</surname> <given-names>R. D.</given-names></name> <name><surname>Schiavo</surname> <given-names>M. D.</given-names></name></person-group> (<year>1979</year>). <article-title>Pseudodiagnosticity</article-title>. <source>Acta Psychol.</source> <volume>43</volume>, <fpage>111</fpage>&#x02013;<lpage>121</lpage>. <pub-id pub-id-type="doi">10.1016/0001-6918(79)90017-9</pub-id></citation></ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Donkin</surname> <given-names>W. F.</given-names></name></person-group> (<year>1851</year>). <article-title>On certain questions relating to the theory of probabilities</article-title>. <source>Philos. Mag.</source> <volume>1</volume>, <fpage>353</fpage>&#x02013;<lpage>368</lpage>. <pub-id pub-id-type="doi">10.1080/14786445108646751</pub-id></citation></ref>
<ref id="B16">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Draheim</surname> <given-names>D.</given-names></name></person-group> (<year>2017</year>). <source>Generalized Jeffrey Conditionalization. A Frequentist Semantics of Partial Conditionalization</source>. <publisher-loc>Cham</publisher-loc>: <publisher-name>Springer</publisher-name>.</citation></ref>
<ref id="B17">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Elqayam</surname> <given-names>S.</given-names></name></person-group> (<year>2017</year>). <article-title>New psychology of reasoning</article-title> in <source>International Handbook of Thinking and Reasoning</source>, eds <person-group person-group-type="editor"><name><surname>Ball</surname> <given-names>L. J.</given-names></name> <name><surname>Thompson</surname> <given-names>V. E.</given-names></name></person-group> (<publisher-loc>Hove</publisher-loc>: <publisher-name>Routledge</publisher-name>), <fpage>130</fpage>&#x02013;<lpage>150</lpage>.</citation></ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Erev</surname> <given-names>I.</given-names></name> <name><surname>Wallsten</surname> <given-names>T. S.</given-names></name> <name><surname>Budescu</surname> <given-names>D. V.</given-names></name></person-group> (<year>1994</year>). <article-title>Simultaneous over- and underconfidence: The role of error in judgment processes</article-title>. <source>Psychol. Rev.</source> <volume>101</volume>, <fpage>519</fpage>&#x02013;<lpage>527</lpage>. <pub-id pub-id-type="doi">10.1037/0033-295X.101.3.519</pub-id></citation></ref>
<ref id="B19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>FitzGerald</surname> <given-names>T. H. B.</given-names></name> <name><surname>Moran</surname> <given-names>R. J.</given-names></name> <name><surname>Friston</surname> <given-names>K. J.</given-names></name> <name><surname>Dolan</surname> <given-names>R. J.</given-names></name></person-group> (<year>2015</year>). <article-title>Precision and neuronal dynamics in the human posterior parietal cortex during evidence accumulation</article-title>. <source>NeuroImage</source> <volume>107</volume>, <fpage>219</fpage>&#x02013;<lpage>228</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2014.12.015</pub-id><pub-id pub-id-type="pmid">25512038</pub-id></citation></ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Friedman</surname> <given-names>J. A.</given-names></name> <name><surname>Baker</surname> <given-names>J.</given-names></name> <name><surname>Mellers</surname> <given-names>B. A.</given-names></name> <name><surname>Tetlock</surname> <given-names>P. E.</given-names></name> <name><surname>Zeckhauser</surname> <given-names>R.</given-names></name></person-group> (<year>2018</year>). <article-title>The value of precision in probability assessment: evidence from a large-scale geopolitical forecasting tournament</article-title>. <source>Int. Stud. Q.</source> <volume>62</volume>, <fpage>410</fpage>&#x02013;<lpage>422</lpage>. <pub-id pub-id-type="doi">10.1093/isq/sqx078</pub-id></citation></ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Friston</surname> <given-names>K.</given-names></name></person-group> (<year>2010</year>). <article-title>The free-energy principle: a rough guide to the brain?</article-title> <source>Nat. Rev. Neurosci.</source> <volume>11</volume>, <fpage>127</fpage>&#x02013;<lpage>138</lpage>. <pub-id pub-id-type="doi">10.1038/nrn2787</pub-id></citation></ref>
<ref id="B22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Friston</surname> <given-names>K.</given-names></name> <name><surname>Shiner</surname> <given-names>T.</given-names></name> <name><surname>FitzGerald</surname> <given-names>T.</given-names></name> <name><surname>Galea</surname> <given-names>J. M.</given-names></name> <name><surname>Adams</surname> <given-names>R.</given-names></name> <name><surname>Brown</surname> <given-names>H.</given-names></name> <etal/></person-group>. (<year>2012</year>). <article-title>Dopamine, affordance and active inference</article-title>. <source>PLoS Comput. Biol</source>. <volume>8</volume>:<fpage>e1002327</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pcbi.1002327</pub-id></citation></ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fugard</surname> <given-names>A. J.</given-names></name> <name><surname>Pfeifer</surname> <given-names>N.</given-names></name> <name><surname>Mayerhofer</surname> <given-names>B.</given-names></name> <name><surname>Kleiter</surname> <given-names>G. D.</given-names></name></person-group> (<year>2011</year>). <article-title>How people interpret conditionals: shifts toward the conditional event</article-title>. <source>J. Exp. Psychol.</source> <volume>37</volume>, <fpage>635</fpage>&#x02013;<lpage>648</lpage>. <pub-id pub-id-type="doi">10.1037/a0022329</pub-id><pub-id pub-id-type="pmid">21534706</pub-id></citation></ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gigerenzer</surname> <given-names>G.</given-names></name> <name><surname>Hoffrage</surname> <given-names>U.</given-names></name></person-group> (<year>1995</year>). <article-title>How to improve Bayesian reasoning without instruction: frequency formats</article-title>. <source>Psychol. Rev.</source> <volume>102</volume>, <fpage>684</fpage>&#x02013;<lpage>704</lpage>. <pub-id pub-id-type="doi">10.1037/0033-295X.102.4.684</pub-id></citation></ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gigerenzer</surname> <given-names>G.</given-names></name> <name><surname>Hoffrage</surname> <given-names>U.</given-names></name> <name><surname>Kleinb&#x000F6;lting</surname> <given-names>H.</given-names></name></person-group> (<year>1991</year>). <article-title>Probabilistic mental models: a Brunswikian theory of confidence</article-title>. <source>Psychol. Rev.</source> <volume>98</volume>, <fpage>506</fpage>&#x02013;<lpage>528</lpage>. <pub-id pub-id-type="doi">10.1037/0033-295X.98.4.506</pub-id><pub-id pub-id-type="pmid">1961771</pub-id></citation></ref>
<ref id="B26">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Gilio</surname> <given-names>A.</given-names></name></person-group> (<year>1995</year>). <article-title>Algorithms for precise and imprecise conditional probability assessments</article-title> in <source>Mathematical Models for Handling Partial Knowledge in Artificial Intelligence</source>, eds <person-group person-group-type="editor"><name><surname>Coletti</surname> <given-names>G.</given-names></name> <name><surname>Dubois</surname> <given-names>D.</given-names></name> <name><surname>Scozzafava</surname> <given-names>R.</given-names></name></person-group> (<publisher-loc>New York</publisher-loc>: <publisher-name>Plenum Press</publisher-name>), <fpage>231</fpage>&#x02013;<lpage>254</lpage>.</citation></ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gilio</surname> <given-names>A.</given-names></name></person-group> (<year>2012</year>). <article-title>Generalization of inference rules in coherence-based probabilistic default reasoning</article-title>. <source>Int. J. Approx. Reason.</source> <volume>53</volume>, <fpage>413</fpage>&#x02013;<lpage>434</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijar.2011.08.004</pub-id></citation></ref>
<ref id="B28">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Gilio</surname> <given-names>A.</given-names></name> <name><surname>Sanfilippo</surname> <given-names>G.</given-names></name></person-group> (<year>2013</year>). <article-title>Conditional random quantities and iterated conditioning in the setting of coherence</article-title> in <source>Symbolic and Quantitative Approaches to Reasoning with Uncertainty</source>, ed <person-group person-group-type="editor"><name><surname>Van der Gaag</surname> <given-names>L.</given-names></name></person-group> (<publisher-loc>Berlin</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>218</fpage>&#x02013;<lpage>229</lpage>.</citation></ref>
<ref id="B29">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gopnik</surname> <given-names>A.</given-names></name> <name><surname>Tenenbaum</surname> <given-names>J. B.</given-names></name></person-group> (<year>2007</year>). <article-title>Bayesian networks, bayesian learning, and cognitive development</article-title>. <source>Dev. Sci.</source> <volume>10</volume>, <fpage>281</fpage>&#x02013;<lpage>287</lpage>. <pub-id pub-id-type="doi">10.1111/j.1467-7687.2007.00584.x</pub-id><pub-id pub-id-type="pmid">17444969</pub-id></citation></ref>
<ref id="B30">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Gupta</surname> <given-names>A. K.</given-names></name> <name><surname>Nadarajah</surname> <given-names>S.</given-names></name></person-group> editors (<year>2004</year>). <source>Handbook of Beta Distribution and Its Application</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Marcel Dekker</publisher-name>.</citation></ref>
<ref id="B31">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hadjichristidis</surname> <given-names>C.</given-names></name> <name><surname>Sloman</surname> <given-names>S. A.</given-names></name> <name><surname>Over</surname> <given-names>D. E.</given-names></name></person-group> (<year>2014</year>). <source>Categorical Induction From Uncertain Premises: Jeffrey&#x00027;s Doesn&#x00027;t Completely Rule</source>. Technical report, <publisher-name>Department of Economics and Management, University of Trento</publisher-name> (<publisher-loc>Trento</publisher-loc>).</citation></ref>
<ref id="B32">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Halpern</surname> <given-names>J. Y.</given-names></name></person-group> (<year>2003</year>). <source>Reasoning About Uncertainty</source>. <publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>MIT Press</publisher-name>.</citation></ref>
<ref id="B33">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Jaynes</surname> <given-names>E. T.</given-names></name></person-group> (<year>2003</year>). <source>Probability Theory. The Logic of Science</source>. <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>.</citation></ref>
<ref id="B34">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Jeffrey</surname> <given-names>R.</given-names></name></person-group> (<year>1965</year>). <source>The Logic of Decision</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>McGraw-Hill</publisher-name>.</citation></ref>
<ref id="B35">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Jeffrey</surname> <given-names>R.</given-names></name></person-group> (<year>1992</year>). <source>Probability and the Art of Judgment</source>. <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>.</citation></ref>
<ref id="B36">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Jeffrey</surname> <given-names>R.</given-names></name></person-group> (<year>2004</year>). <source>Subjective Probability. The Real Thing</source>. <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>.</citation></ref>
<ref id="B37">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Johnson</surname> <given-names>N. L.</given-names></name> <name><surname>Kotz</surname> <given-names>S.</given-names></name></person-group> (<year>1970</year>). <source>Continuous Univariate Distributions</source>, <volume>Vol. 2</volume>. <publisher-loc>Boston, MA</publisher-loc>: <publisher-name>Houghton Mifflin</publisher-name>.</citation></ref>
<ref id="B38">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Johnson-Laird</surname> <given-names>P. N.</given-names></name> <name><surname>Shafir</surname> <given-names>E.</given-names></name></person-group> (<year>1993</year>). <article-title>The interaction between reasoning and decision making: an introduction</article-title>. <source>Cognition</source> <volume>49</volume>, <fpage>1</fpage>&#x02013;<lpage>9</lpage>. <pub-id pub-id-type="doi">10.1016/0010-0277(93)90033-R</pub-id><pub-id pub-id-type="pmid">8287670</pub-id></citation></ref>
<ref id="B39">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kahneman</surname> <given-names>D.</given-names></name> <name><surname>Tversky</surname> <given-names>A.</given-names></name></person-group> (<year>1973</year>). <article-title>On the psychology of prediction</article-title>. <source>Psychol. Rev.</source> <volume>80</volume>, <fpage>237</fpage>&#x02013;<lpage>251</lpage>. <pub-id pub-id-type="doi">10.1037/h0034747</pub-id></citation></ref>
<ref id="B40">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Kleiter</surname> <given-names>G. D.</given-names></name></person-group> (<year>1981</year>). <source>Bayes-Statistik. Grundlagen und Anwendungen</source>. <publisher-loc>Berlin</publisher-loc>: <publisher-name>Walter de Gruyter</publisher-name>.</citation></ref>
<ref id="B41">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Kleiter</surname> <given-names>G. D.</given-names></name></person-group> (<year>1994</year>). <article-title>Natural sampling: rationality without base rates</article-title> in <source>Contributions to Mathematical Psychology, Psychometrics, and Methodology</source>, eds <person-group person-group-type="editor"><name><surname>Fischer</surname> <given-names>G. H.</given-names></name> <name><surname>Laming</surname> <given-names>D.</given-names></name></person-group> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>375</fpage>&#x02013;<lpage>388</lpage>.</citation></ref>
<ref id="B42">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kleiter</surname> <given-names>G. D.</given-names></name></person-group> (<year>1996</year>). <article-title>Critical and natural sensitivity to base rates [comments to Koehler (1996)]</article-title>. <source>Behav. Brain Sci.</source> <volume>19</volume>, <fpage>27</fpage>&#x02013;<lpage>29</lpage>. <pub-id pub-id-type="doi">10.1017/S0140525X00041297</pub-id></citation></ref>
<ref id="B43">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Kleiter</surname> <given-names>G. D.</given-names></name></person-group> (<year>2013</year>). <article-title>Ockham&#x00027;s razor in probability logic</article-title> in <source>Synergies of Soft Computing and Statistics for Intelligent Data Analysis, Advances in Intelligent Systems and Computation</source>, <volume>190</volume>, eds <person-group person-group-type="editor"><name><surname>Kruse</surname> <given-names>R.</given-names></name> <name><surname>Berthold</surname> <given-names>M. R.</given-names></name> <name><surname>Moewes</surname> <given-names>C.</given-names></name> <name><surname>Gil</surname> <given-names>M. A.</given-names></name> <name><surname>Grzegorzewski</surname> <given-names>P.</given-names></name> <name><surname>Hryniewicz</surname> <given-names>O.</given-names></name></person-group> (<publisher-loc>Heidelberg</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>409</fpage>&#x02013;<lpage>417</lpage>.</citation></ref>
<ref id="B44">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kleiter</surname> <given-names>G. D.</given-names></name></person-group> (<year>2015</year>). <article-title>Modeling biased information seeking with second order probability distributions</article-title>. <source>Kybernetika</source> <volume>51</volume>, <fpage>469</fpage>&#x02013;<lpage>485</lpage>. <pub-id pub-id-type="doi">10.14736/kyb-2015-3-0469</pub-id></citation></ref>
<ref id="B45">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kleiter</surname> <given-names>G. D.</given-names></name></person-group> (<year>2018</year>). <article-title>Adams&#x00027; p-validity in the research on human reasoning</article-title>. <source>J. Appl. Logics</source> <volume>5</volume>, <fpage>775</fpage>&#x02013;<lpage>825</lpage>.</citation></ref>
<ref id="B46">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Kleiter</surname> <given-names>G. D.</given-names></name> <name><surname>Doherty</surname> <given-names>M. E.</given-names></name> <name><surname>Brake</surname> <given-names>G. L.</given-names></name></person-group> (<year>2002</year>). <article-title>The psychophysics metaphor in calibration research</article-title> in <source>Frequency Processing and Cognition</source>, eds <person-group person-group-type="editor"><name><surname>Sedlmeier</surname> <given-names>P.</given-names></name> <name><surname>Betsch</surname> <given-names>T.</given-names></name></person-group> (<publisher-loc>Oxford</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>), <fpage>239</fpage>&#x02013;<lpage>255</lpage>.</citation></ref>
<ref id="B47">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kleiter</surname> <given-names>G. D.</given-names></name> <name><surname>Fugard</surname> <given-names>A. J. B.</given-names></name> <name><surname>Pfeifer</surname> <given-names>N.</given-names></name></person-group> (<year>2018</year>). <article-title>A process model of the understanding of uncertain conditionals</article-title>. <source>Think. Reason.</source> <volume>24</volume>, <fpage>386</fpage>&#x02013;<lpage>422</lpage>. <pub-id pub-id-type="doi">10.1080/13546783.2017.1422542</pub-id></citation></ref>
<ref id="B48">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Koehler</surname> <given-names>J. J.</given-names></name></person-group> (<year>1996</year>). <article-title>The base rate fallacy reconsidered: descriptive, normative, and methodological challenges</article-title>. <source>Behav. Brain Sci.</source> <volume>19</volume>, <fpage>1</fpage>&#x02013;<lpage>53</lpage>. <pub-id pub-id-type="doi">10.1017/S0140525X00041157</pub-id></citation></ref>
<ref id="B49">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Kurowicka</surname> <given-names>D.</given-names></name> <name><surname>Cooke</surname> <given-names>R.</given-names></name></person-group> (<year>2004</year>). <article-title>Distribution-free continuous Bayesian belief nets</article-title> in <source>Proceedings of the Fourth International Conference on Mathematical Methods in Reliability Methodology and Practice</source> (<publisher-loc>Santa Fe</publisher-loc>).</citation></ref>
<ref id="B50">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Kurowicka</surname> <given-names>D.</given-names></name> <name><surname>Cooke</surname> <given-names>R.</given-names></name></person-group> (<year>2006</year>). <source>Uncertainty Analysis With High Dimension Dependence Modelling</source>. <publisher-loc>Chichester</publisher-loc>: <publisher-name>Wiley</publisher-name>.</citation></ref>
<ref id="B51">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Kurowicka</surname> <given-names>D.</given-names></name> <name><surname>Joe</surname> <given-names>H.</given-names></name></person-group> (<year>2011</year>). <source>Dependence Modeling: Vine Copula Handbook</source>. <publisher-loc>Singapore</publisher-loc>: <publisher-name>World Scientific</publisher-name>.</citation></ref>
<ref id="B52">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Lad</surname> <given-names>F.</given-names></name></person-group> (<year>1996</year>). <source>Operational Subjective Statistical Methods: A Mathematical, Philosophical, and Historical Introduction</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Wiley</publisher-name>.</citation></ref>
<ref id="B53">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Levi</surname> <given-names>I.</given-names></name></person-group> (<year>1967</year>). <article-title>Probability kinematics</article-title>. <source>Brit. J. Philos. Sci.</source> <volume>18</volume>, <fpage>197</fpage>&#x02013;<lpage>209</lpage>. <pub-id pub-id-type="doi">10.1093/bjps/18.3.197</pub-id></citation></ref>
<ref id="B54">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lichtenstein</surname> <given-names>S.</given-names></name> <name><surname>Newman</surname> <given-names>J. R.</given-names></name></person-group> (<year>1967</year>). <article-title>Empirical scaling of common verbal phrases associated with numerical probabilities</article-title>. <source>Psychon. Sci.</source> <volume>9</volume>, <fpage>563</fpage>&#x02013;<lpage>564</lpage>. <pub-id pub-id-type="doi">10.3758/BF03327890</pub-id></citation></ref>
<ref id="B55">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Mai</surname> <given-names>J.-F.</given-names></name> <name><surname>Scherer</surname> <given-names>M.</given-names></name></person-group> (<year>2012</year>). <source>Simulating Copulas. Stochastic Models, Sampling Algorithms, and Applications</source>. <publisher-loc>London</publisher-loc>: <publisher-name>Imperial College Press</publisher-name>.</citation></ref>
<ref id="B56">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Nelsen</surname> <given-names>R. B.</given-names></name></person-group> (<year>2006</year>). <source>An Introduction to Copulas</source>. <publisher-loc>Berlin</publisher-loc>: <publisher-name>Springer</publisher-name>.</citation></ref>
<ref id="B57">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Oaksford</surname> <given-names>M.</given-names></name> <name><surname>Chater</surname> <given-names>N.</given-names></name></person-group> (<year>1995</year>). <article-title>Information gain explains relevance which explains the selection task</article-title>. <source>Cognition</source> <volume>57</volume>, <fpage>97</fpage>&#x02013;<lpage>108</lpage>. <pub-id pub-id-type="doi">10.1016/0010-0277(95)00671-K</pub-id><pub-id pub-id-type="pmid">7587019</pub-id></citation></ref>
<ref id="B58">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Oaksford</surname> <given-names>M.</given-names></name> <name><surname>Chater</surname> <given-names>N.</given-names></name></person-group> (<year>2007</year>). <source>Bayesian Rationality. The Probabilistic Approach to Human Reasoning</source>. <publisher-loc>Oxford</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>. <pub-id pub-id-type="pmid">19210833</pub-id></citation></ref>
<ref id="B59">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Over</surname> <given-names>D.</given-names></name></person-group> (<year>2009</year>). <article-title>New paradigm psychology of reasoning</article-title>. <source>Think. Reason.</source> <volume>15</volume>, <fpage>431</fpage>&#x02013;<lpage>438</lpage>. <pub-id pub-id-type="doi">10.1080/13546780903266188</pub-id></citation></ref>
<ref id="B60">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Peterson</surname> <given-names>C. R.</given-names></name> <name><surname>Beach</surname> <given-names>L. R.</given-names></name></person-group> (<year>1967</year>). <article-title>Man as an intuitive statistician</article-title>. <source>Psychol. Bull.</source> <volume>68</volume>, <fpage>29</fpage>&#x02013;<lpage>46</lpage>. <pub-id pub-id-type="doi">10.1037/h0024722</pub-id><pub-id pub-id-type="pmid">6046307</pub-id></citation></ref>
<ref id="B61">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pfeifer</surname> <given-names>N.</given-names></name> <name><surname>Kleiter</surname> <given-names>G. D.</given-names></name></person-group> (<year>2005</year>). <article-title>Towards a mental probability logic</article-title>. <source>Psychol. Bel.</source> <volume>45</volume>, <fpage>71</fpage>&#x02013;<lpage>99</lpage>. <pub-id pub-id-type="doi">10.5334/pb-45-1-71</pub-id></citation></ref>
<ref id="B62">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Pfeifer</surname> <given-names>N.</given-names></name> <name><surname>Kleiter</surname> <given-names>G. D.</given-names></name></person-group> (<year>2006a</year>). <article-title>Towards a probability logic based on statistical reasoning</article-title> in <source>Proceedings of the 11th IPMU Conference</source> (<publisher-loc>Paris</publisher-loc>), <fpage>2308</fpage>&#x02013;<lpage>2315</lpage>.</citation></ref>
<ref id="B63">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pfeifer</surname> <given-names>N.</given-names></name> <name><surname>Kleiter</surname> <given-names>G. D.</given-names></name></person-group> (<year>2006b</year>). <article-title>Inference in conditional probability logic</article-title>. <source>Kybernetika</source> <volume>42</volume>, <fpage>391</fpage>&#x02013;<lpage>404</lpage>.</citation></ref>
<ref id="B64">
<citation citation-type="book"><person-group person-group-type="author"><collab>R Development Core Team</collab></person-group> (<year>2016</year>). <source>R: A Language and Environment for Statistical Computing</source>. <publisher-loc>Vienna</publisher-loc>: <publisher-name>R Foundation for Statistical Computing</publisher-name>.</citation></ref>
<ref id="B65">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Rips</surname> <given-names>L. J.</given-names></name></person-group> (<year>1994</year>). <source>The Psychology of Proof. Deductive Reasoning in Human Thinking</source>. Bradford; <publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>MIT Press</publisher-name>.</citation></ref>
<ref id="B66">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Rolls</surname> <given-names>E. T.</given-names></name> <name><surname>Deco</surname> <given-names>G.</given-names></name></person-group> (<year>2010</year>). <source>The Noisy Brain. Stochastic Dynamics as a Principle of Brain Function</source>. <publisher-loc>Oxford</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>.</citation></ref>
<ref id="B67">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Schepsmeier</surname> <given-names>U.</given-names></name> <name><surname>Stoeber</surname> <given-names>J.</given-names></name> <name><surname>Brechmann</surname> <given-names>E. C.</given-names></name> <name><surname>Graeler</surname> <given-names>B.</given-names></name> <name><surname>Nagler</surname> <given-names>T.</given-names></name> <name><surname>Erhardt</surname> <given-names>T.</given-names></name> <etal/></person-group> (<year>2018</year>). <source>Statistical Inference of Vine Copulas</source>. Software.</citation></ref>
<ref id="B68">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Smets</surname> <given-names>P.</given-names></name></person-group> (<year>1990</year>). <article-title>Constructing the pignistic probability function in a context of uncertainty</article-title> in <source>Uncertainty in Artificial Intelligence</source>, <volume>Vol. 5</volume> (<publisher-loc>Amsterdam</publisher-loc>: <publisher-name>North Holland</publisher-name>), <fpage>29</fpage>&#x02013;<lpage>40</lpage>.</citation></ref>
<ref id="B69">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Smets</surname> <given-names>P.</given-names></name> <name><surname>Kruse</surname> <given-names>R.</given-names></name></person-group> (<year>1997</year>). <article-title>Imperfect information: imprecision and uncertainty</article-title> in <source>Uncertainty Management in Information Systems</source>, eds <person-group person-group-type="editor"><name><surname>Motro</surname> <given-names>A.</given-names></name> <name><surname>Smets</surname> <given-names>P.</given-names></name></person-group> (<publisher-loc>Boston, MA</publisher-loc>: <publisher-name>Kluwer</publisher-name>), <fpage>343</fpage>&#x02013;<lpage>368</lpage>.</citation></ref>
<ref id="B70">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Spiegelhalter</surname> <given-names>D. J.</given-names></name> <name><surname>Franklin</surname> <given-names>R. C. G.</given-names></name> <name><surname>Bull</surname> <given-names>K.</given-names></name></person-group> (<year>1990</year>). <article-title>Assessment, criticism and improvement of imprecise subjective probabilities for a medical expert system</article-title> in <source>Uncertainty in Artificial Intelligence 5</source>, eds <person-group person-group-type="editor"><name><surname>Henrion</surname> <given-names>M.</given-names></name> <name><surname>Shachter</surname> <given-names>R. D.</given-names></name> <name><surname>Kanal</surname> <given-names>L. N.</given-names></name> <name><surname>Lemmer</surname> <given-names>J. F.</given-names></name></person-group> (<publisher-loc>Amsterdam</publisher-loc>: <publisher-name>North-Holland</publisher-name>), <fpage>285</fpage>&#x02013;<lpage>294</lpage>.</citation></ref>
<ref id="B71">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Sriboonchitta</surname> <given-names>S.</given-names></name> <name><surname>Wong</surname> <given-names>W.-K.</given-names></name> <name><surname>Dhompongsa</surname> <given-names>S.</given-names></name> <name><surname>Nguyen</surname> <given-names>H. R.</given-names></name></person-group> (<year>2010</year>). <source>Stochastic Dominance and Applications to Finance, Risk and Economics</source>. <publisher-loc>Boca Raton, FL</publisher-loc>: <publisher-name>CRC, Taylor &#x00026; Francis</publisher-name>.</citation></ref>
<ref id="B72">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Sta&#x000EB;l von Holstein</surname> <given-names>C.-A.S.</given-names></name></person-group> (<year>1970</year>). <source>Assessment and Evaluation of Subjective Probability Distributions</source>. <publisher-loc>Stockholm</publisher-loc>: <publisher-name>The Economic Research Institute at the Stockholm School of Economics</publisher-name>.</citation></ref>
<ref id="B73">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Stevenson</surname> <given-names>R. J.</given-names></name> <name><surname>Over</surname> <given-names>D. E.</given-names></name></person-group> (<year>1995</year>). <article-title>Deduction from uncertain premises</article-title>. <source>Q. J. Exp. Psychol.</source> <volume>48</volume>, <fpage>613</fpage>&#x02013;<lpage>643</lpage>. <pub-id pub-id-type="doi">10.1080/14640749508401408</pub-id></citation></ref>
<ref id="B74">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Suppes</surname> <given-names>P.</given-names></name></person-group> (<year>1966</year>). <article-title>Probabilistic inference and the concept of total evidence</article-title> in <source>Aspects of Inductive Logic</source>, eds <person-group person-group-type="editor"><name><surname>Hintikka</surname> <given-names>J.</given-names></name> <name><surname>Suppes</surname> <given-names>P.</given-names></name></person-group> (<publisher-loc>Amsterdam</publisher-loc>: <publisher-name>North-Holland</publisher-name>), <fpage>49</fpage>&#x02013;<lpage>65</lpage>.</citation></ref>
<ref id="B75">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Tenenbaum</surname> <given-names>J. B.</given-names></name> <name><surname>Griffiths</surname> <given-names>T. L.</given-names></name> <name><surname>Niyogi</surname> <given-names>S.</given-names></name></person-group> (<year>2007</year>). <article-title>Intuitive theories as grammars for causal inference</article-title> in <source>Causal Learning: Psychology, Philosophy, and Computation</source>, eds <person-group person-group-type="editor"><name><surname>Gupnik</surname> <given-names>A.</given-names></name> <name><surname>Schulz</surname> <given-names>L.</given-names></name></person-group> (<publisher-loc>Oxford</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>), <fpage>301</fpage>&#x02013;<lpage>322</lpage>.</citation></ref>
<ref id="B76">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Thurstone</surname> <given-names>L. L.</given-names></name></person-group> (<year>1927</year>). <article-title>The method of paired comparisons for social values</article-title>. <source>J. Abnorm. Soc. Psychol.</source> <volume>21</volume>, <fpage>384</fpage>&#x02013;<lpage>400</lpage>. <pub-id pub-id-type="doi">10.1037/h0065439</pub-id></citation></ref>
<ref id="B77">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tversky</surname> <given-names>A.</given-names></name> <name><surname>Kahneman</surname> <given-names>D.</given-names></name></person-group> (<year>1983</year>). <article-title>Extension versus intuitive reasoning: the conjunction fallacy in probability judgment</article-title>. <source>Psychol. Rev.</source> <volume>90</volume>, <fpage>293</fpage>&#x02013;<lpage>315</lpage>. <pub-id pub-id-type="doi">10.1037/0033-295X.90.4.293</pub-id></citation></ref>
<ref id="B78">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tweney</surname> <given-names>R. D.</given-names></name> <name><surname>Doherty</surname> <given-names>M. E.</given-names></name> <name><surname>Kleiter</surname> <given-names>G. D.</given-names></name></person-group> (<year>2010</year>). <article-title>The pseudodiagnosticity trap. Should subjects consider alternative hypotheses?</article-title> <source>Think. Reason.</source> <volume>16</volume>, <fpage>332</fpage>&#x02013;<lpage>345</lpage>. <pub-id pub-id-type="doi">10.1080/13546783.2010.525860</pub-id></citation></ref>
<ref id="B79">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Walley</surname> <given-names>P.</given-names></name></person-group> (<year>1991</year>). <source>Statistical Reasoning with Imprecise Probabilities</source>. <publisher-loc>London</publisher-loc>: <publisher-name>Chapman and Hall</publisher-name>.</citation></ref>
<ref id="B80">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wallmann</surname> <given-names>C.</given-names></name> <name><surname>Kleiter</surname> <given-names>G.</given-names></name></person-group> (<year>2014a</year>). <article-title>Probability propagation in generalized inference forms</article-title>. <source>Studia Logica</source> <volume>102</volume>, <fpage>913</fpage>&#x02013;<lpage>929</lpage>. <pub-id pub-id-type="doi">10.1007/s11225-013-9513-4</pub-id></citation></ref>
<ref id="B81">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Wallmann</surname> <given-names>C.</given-names></name> <name><surname>Kleiter</surname> <given-names>G. D.</given-names></name></person-group> (<year>2012a</year>). <article-title>Beware of too much information</article-title> in <source>Proceedings of the 9th Workshop on Uncertainty Processing, WUPES</source>, eds <person-group person-group-type="editor"><name><surname>Kroupa</surname> <given-names>T.</given-names></name> <name><surname>Vejnarova</surname> <given-names>J.</given-names></name></person-group> (<publisher-loc>Prague</publisher-loc>: <publisher-name>Faculty of Management, University of Economics</publisher-name>), <fpage>214</fpage>&#x02013;<lpage>225</lpage>.</citation></ref>
<ref id="B82">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Wallmann</surname> <given-names>C.</given-names></name> <name><surname>Kleiter</surname> <given-names>G. D.</given-names></name></person-group> (<year>2012b</year>). <article-title>Exchangeability in probability logic</article-title> in <source>Communications in Computer and Information Science, IPMU (4)</source>, <volume>Vol. 300</volume>, eds <person-group person-group-type="editor"><name><surname>Greco</surname> <given-names>S.</given-names></name> <name><surname>Bouchon-Meunier</surname> <given-names>B.</given-names></name> <name><surname>Coletti</surname> <given-names>G.</given-names></name> <name><surname>Fedrizzi</surname> <given-names>M.</given-names></name> <name><surname>Matarazzo</surname> <given-names>B.</given-names></name> <name><surname>Yager</surname> <given-names>R. R.</given-names></name></person-group> (<publisher-loc>Berlin</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>157</fpage>&#x02013;<lpage>167</lpage>.</citation></ref>
<ref id="B83">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wallmann</surname> <given-names>C.</given-names></name> <name><surname>Kleiter</surname> <given-names>G. D.</given-names></name></person-group> (<year>2014b</year>). <article-title>Degradation in probability logic: when more information leads to less precise conclusions</article-title>. <source>Kybernetika</source> <volume>50</volume>, <fpage>268</fpage>&#x02013;<lpage>283</lpage>. <pub-id pub-id-type="doi">10.14736/kyb-2014-2-0268</pub-id></citation></ref>
<ref id="B84">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Wedlin</surname> <given-names>A.</given-names></name></person-group> (<year>1996</year>). <article-title>Some remarks on the transition from a standard Bayesian to a subjectivistic statistical standpoint</article-title> in <source>Proceedings of the International Conference &#x0201C;The Notion of Event in Probabilistic Epistemology&#x0201D;</source> (<publisher-loc>Trieste</publisher-loc>: <publisher-name>Dipartimento di Matematica Applicata &#x0201C;Bruno de Finetti&#x0201D;</publisher-name>), <fpage>91</fpage>&#x02013;<lpage>110</lpage>.</citation></ref>
<ref id="B85">
<citation citation-type="book"><person-group person-group-type="editor"><name><surname>Wuebbles</surname> <given-names>D. J.</given-names></name> <name><surname>Fahey</surname> <given-names>D. W.</given-names></name> <name><surname>Hibbard</surname> <given-names>K. A.</given-names></name> <name><surname>Dokken</surname> <given-names>D. J.</given-names></name> <name><surname>Stewart</surname> <given-names>B. C.</given-names></name> <name><surname>Maycock</surname> <given-names>T. K.</given-names></name></person-group> (eds.). (<year>2017</year>). <source>Climate Science Special Report: Fourth National Climate Assessment</source>, <volume>Vol. 1</volume>. <publisher-loc>Washington, DC</publisher-loc>: <publisher-name>U. S. Global Change Research Program</publisher-name>.</citation>
</ref>
<ref id="B86">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xu</surname> <given-names>F.</given-names></name> <name><surname>Tenenbaum</surname> <given-names>J. B.</given-names></name></person-group> (<year>2007</year>). <article-title>Word learning as Bayesian inference</article-title>. <source>Psychol. Rev.</source> <volume>114</volume>, <fpage>245</fpage>&#x02013;<lpage>272</lpage>. <pub-id pub-id-type="doi">10.1037/0033-295X.114.2.245</pub-id><pub-id pub-id-type="pmid">17500627</pub-id></citation></ref>
</ref-list>
<fn-group>
<fn id="fn0001"><p><sup>1</sup>For Adams&#x00027; probabilistic validity in the more recent research on human reasoning, see Kleiter (<xref ref-type="bibr" rid="B45">2018</xref>).</p></fn>
<fn id="fn0002"><p><sup>2</sup>From Equation (6) it may be seen that, as <italic>x</italic> approaches 1, the bounds of the conditional approach the bounds of the conjunction in Equation (3).</p></fn>
<fn id="fn0003"><p><sup>3</sup>If the premises are specified by interval probabilities, the situation becomes more complicated and requires the concept of g-coherence (Gilio, <xref ref-type="bibr" rid="B26">1995</xref>) or the avoidance of sure loss (Walley, <xref ref-type="bibr" rid="B79">1991</xref>). We do not need these concepts here.</p></fn>
<fn id="fn0004"><p><sup>4</sup>Denying the antecedent and affirming the consequent degenerate at 0; the <sc>modus tollens</sc> degenerates at 1.</p></fn>
<fn id="fn0005"><p><sup>5</sup>Note that Rips (<xref ref-type="bibr" rid="B65">1994</xref>, p. 125) prefers the suppositional interpretation of the conditional; the domain of a conditional consists only of those possibilities in which the antecedent is true. PSYCOP rejects the paradoxes of the material implication!</p></fn>
<fn id="fn0006"><p><sup>6</sup>In the literature, Spearman correlation copulas are often preferred to Gaussian copulas because they keep the marginal distributions and the correlation independent.</p></fn>
<fn id="fn0007"><p><sup>7</sup>The study of overconfidence can be tricky as overconfidence for <italic>E</italic> goes hand in hand with underconfidence for non-<italic>E</italic>. Scoring rules avoid this problem (Kleiter et al., <xref ref-type="bibr" rid="B46">2002</xref>).</p></fn>
<fn id="fn0008"><p><sup>8</sup>It may be mentioned that evaluating the data by the method of paired comparisons would make it possible to calculate several interesting statistics, such as item characteristics, the consistency of the judgments, or interindividual differences.</p></fn>
</fn-group>
</back>
</article> 