<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Phys.</journal-id>
<journal-title>Frontiers in Physics</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Phys.</abbrev-journal-title>
<issn pub-type="epub">2296-424X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fphy.2016.00047</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Physics</subject>
<subj-group>
<subject>Hypothesis and Theory</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Probabilities and Shannon&#x00027;s Entropy in the Everett Many-Worlds Theory</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Wichert</surname> <given-names>Andreas</given-names></name>
<xref ref-type="author-notes" rid="fn001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/152872/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Moreira</surname> <given-names>Catarina</given-names></name>
<uri xlink:href="http://loop.frontiersin.org/people/258336/overview"/>
</contrib>
</contrib-group>
<aff><institution> Department of Informatics, INESC-ID/Instituto Superior T&#x000E9;cnico - University of Lisboa</institution> <country>Porto Salvo, Portugal</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Lev Shchur, Landau Institute for Theoretical Physics, Russia</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Ignazio Licata, Institute for Scientific Methodology, Italy; Arkady M. Satanin, N. I. Lobachevsky State University of Nizhny Novgorod, Russia</p></fn>
<fn fn-type="corresp" id="fn001"><p>&#x0002A;Correspondence: Andreas Wichert <email>andreas.wichert&#x00040;tecnico.ulisboa.pt</email></p></fn>
<fn fn-type="other" id="fn002"><p>This article was submitted to Interdisciplinary Physics, a section of the journal Frontiers in Physics</p></fn>
</author-notes>
<pub-date pub-type="epub">
<day>06</day>
<month>12</month>
<year>2016</year>
</pub-date>
<pub-date pub-type="collection">
<year>2016</year>
</pub-date>
<volume>4</volume>
<elocation-id>47</elocation-id>
<history>
<date date-type="received">
<day>23</day>
<month>09</month>
<year>2016</year>
</date>
<date date-type="accepted">
<day>22</day>
<month>11</month>
<year>2016</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2016 Wichert and Moreira.</copyright-statement>
<copyright-year>2016</copyright-year>
<copyright-holder>Wichert and Moreira</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract><p>Following a controversial suggestion by David Deutsch that decision theory can solve the problem of probabilities in the Everett many-worlds theory, we suggest that the probabilities are induced by Shannon&#x00027;s entropy, which measures the uncertainty of events. We argue that a rational person prefers certainty to uncertainty due to the fundamental biological principle of homeostasis.</p></abstract>
<kwd-group>
<kwd>decision theory</kwd>
<kwd>Everett many-worlds</kwd>
<kwd>homeostasis</kwd>
<kwd>probability</kwd>
<kwd>Shannon&#x00027;s entropy</kwd>
</kwd-group>
<contract-num rid="cn001">UID/CEC/50021/2013</contract-num>
<contract-num rid="cn001">SFRH/BD/92391/2013</contract-num>
<contract-sponsor id="cn001">Funda&#x000E7;&#x000E3;o para a Ci&#x000EA;ncia e a Tecnologia<named-content content-type="fundref-id">10.13039/501100001871</named-content></contract-sponsor>
<counts>
<fig-count count="0"/>
<table-count count="0"/>
<equation-count count="58"/>
<ref-count count="35"/>
<page-count count="7"/>
<word-count count="4479"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>1. Introduction</title>
<p>The Everett many-worlds theory views reality as a many-branched tree in which every possible quantum outcome is realized [<xref ref-type="bibr" rid="B1">1</xref>&#x02013;<xref ref-type="bibr" rid="B8">8</xref>]. The modern version of the wave-function collapse is based on decoherence and leads to the multiverse interpretation of quantum mechanics [<xref ref-type="bibr" rid="B9">9</xref>]. Every time a quantum experiment with different possible outcomes is performed, all outcomes are obtained. If a quantum experiment is performed with two outcomes with quantum mechanical probability 1/100 for outcome <italic>A</italic> and 99/100 for outcome <italic>B</italic>, then both the world with outcome <italic>A</italic> and the world with outcome <italic>B</italic> will exist. A person should not expect any difference between the experience in world <italic>A</italic> and in world <italic>B</italic>. The open question is the following: given the deterministic nature of the branching, why should a rational person care about the corresponding probabilities? Why not simply assume that the outcomes are equally probable [<xref ref-type="bibr" rid="B10">10</xref>]? How can we solve this problem without introducing additional structure into the many-worlds theory?</p>
<p>David Deutsch suggested that decision theory can solve this problem [<xref ref-type="bibr" rid="B11">11</xref>&#x02013;<xref ref-type="bibr" rid="B13">13</xref>]. A person identifies the consequences of decision theory with things that happen to their individual future copies on particular branches. A person who does not care about receiving 1 on the first branch <italic>A</italic> and 99 on the second branch <italic>B</italic> labels them with a probability of 1/2 each. A rational person who cares assigns the probability 1/100 to outcome <italic>A</italic> and 99/100 to outcome <italic>B</italic>. It should be noted that David Deutsch introduced a rational person into the explanation of the corresponding problem: the probability rule within unitary quantum mechanics is based on the rationality of the person. However, since the branching is deterministic and no uncertainty is present, how can this rational person justify the application of decision-making?</p>
<p>In this work, we intend to give an alternative and simpler explanation of Deutsch&#x00027;s decision-theoretic argument, motivated by biological mechanisms and based on Epstein&#x00027;s idea that human beings tend to be averse toward uncertainty. They prefer an action that brings them a certain but lower utility to an action that is uncertain but can yield a higher utility [<xref ref-type="bibr" rid="B14">14</xref>]. We proceed in two steps.</p>
<p>In the first step we propose to use Shannon&#x00027;s entropy as the expected utility in Deutsch&#x00027;s approach. Probabilities in Shannon&#x00027;s entropy function can be seen as frequencies; they can be measured only by performing an experiment many times and reflect past experience. Surprise is inversely related to probability: the larger the probability that we receive a certain message, the less surprised we are. For example, the message &#x0201C;Dog bites man&#x0201D; is quite common, has a high probability, and usually does not surprise us. The message &#x0201C;Man bites dog,&#x0201D; however, is unusual and has a low probability. The more surprised we are by the occurrence of an event, the more difficult it is to explain that event. Surprise is defined in relation to previous events, in our example men and dogs.</p>
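<p>The inverse relation between probability and surprise, and entropy as expected surprisal, can be sketched numerically. The probabilities below are illustrative stand-ins for the dog/man example, not values given in the text:</p>

```python
import math

def surprisal(p):
    # Surprisal (self-information) in bits: rare events are surprising,
    # common events are not.
    return -math.log2(p)

def entropy(probs):
    # Shannon's entropy: the expected surprisal over all outcomes,
    # i.e. the uncertainty of the experiment.
    return sum(p * surprisal(p) for p in probs if p > 0)

p_dog_bites_man = 0.99   # common message, low surprise
p_man_bites_dog = 0.01   # unusual message, high surprise

low = surprisal(p_dog_bites_man)
high = surprisal(p_man_bites_dog)
uncertainty = entropy([p_dog_bites_man, p_man_bites_dog])
```

<p>A certain outcome (probability 1) has zero surprisal and zero entropy; maximal uncertainty occurs for equally probable outcomes.</p>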
<p>In the second step we introduce the experience of identity derived from homeostasis as a fundamental biological principle. It is performed subconsciously by our brain as a coherent explanation of events in a temporal window [<xref ref-type="bibr" rid="B15">15</xref>]. Events with higher surprise are more difficult to explain and require more energy. Before an event happens, an explanation has to be initiated, so that after the event has happened it can be integrated into the present explanation in the temporal window. A rational person may not care about the attached weights during deterministic branching, but our brain machinery cares. This information is essential for the ability to give a continuous explanation of our &#x0201C;self&#x0201D; identity.</p>
<p>The paper is organized as follows:
<list list-type="bullet">
<list-item><p>We review Deutsch&#x00027;s decision-theoretic argument.</p></list-item>
<list-item><p>We propose Shannon&#x00027;s entropy as the expected utility function and surprisal as the utility function.</p></list-item>
<list-item><p>We argue that probabilities that are the basis of surprise are essential for the ability to give a continuous explanation.</p></list-item>
</list></p>
</sec>
<sec id="s2">
<title>2. Review of Deutsch&#x00027;s decision-theoretic argument</title>
<p>Decision theory according to Savage is a theory designed for the analysis of rational decision-making under conditions of uncertainty [<xref ref-type="bibr" rid="B11">11</xref>]. A rational person faces a choice of acts, where each act is a function from the set of possible states to the set of consequences. There are some constraints on the acts of the rational person; for example, the preferences must be transitive. It can then be shown that there exists a probability measure <italic>p</italic> on states <italic>s</italic> and a utility function <italic>U</italic> on the set of consequences of an act <italic>A</italic> such that the expected utility is defined as
<disp-formula id="E1"><label>(1)</label><mml:math id="M1"><mml:mrow><mml:mi>E</mml:mi><mml:mi>U</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>A</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mtext>&#x02009;</mml:mtext><mml:mo>:</mml:mo><mml:mtext>&#x0200B;</mml:mtext><mml:mo>=</mml:mo><mml:mstyle displaystyle='true'><mml:munder><mml:mo>&#x02211;</mml:mo><mml:mi>s</mml:mi></mml:munder><mml:mi>p</mml:mi></mml:mstyle><mml:mo stretchy='false'>(</mml:mo><mml:mi>s</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x000B7;</mml:mo><mml:mi>U</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>A</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>s</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='false'>)</mml:mo><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>It follows that a rational person prefers act <italic>A</italic> to act <italic>B</italic> if the expected utility of <italic>A</italic> is greater than that of <italic>B</italic>. This behavior corresponds to the maximization of the expected utility with respect to some probability measure [<xref ref-type="bibr" rid="B16">16</xref>].</p>
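<p>Equation (1) can be checked with a minimal sketch. The two-branch probabilities and payoffs below follow the 1/100 vs. 99/100 example from the introduction; the dictionary representation of states is our own assumption:</p>

```python
def expected_utility(act, p, utility):
    # EU(A) := sum over states s of p(s) * U(A(s)), as in Equation (1).
    # `act` maps each state to a consequence; `p` assigns each state a probability.
    return sum(p[s] * utility(act[s]) for s in p)

# Two future branches with the quantum mechanical weights from the text
p = {"A": 1 / 100, "B": 99 / 100}
act = {"A": 1, "B": 99}            # payoff 1 on branch A, 99 on branch B

eu = expected_utility(act, p, lambda payoff: payoff)  # utility = payoff itself
# eu = 0.01 * 1 + 0.99 * 99 = 98.02
```

<p>A rational person compares such values and prefers the act whose expected utility is larger.</p>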
<sec>
<title>2.1. Decision-theoretic argument</title>
<p>In the context of the many-worlds theory, the rational person is able to describe each of her acts as a function from the set of possible future branches that will result from a given quantum measurement to the set of consequences [<xref ref-type="bibr" rid="B11">11</xref>]. Consequences are the things that happen to the individual future copies of the person on a particular branch. An act is a function from states to consequences, and the preferences must be transitive: if a rational person prefers act <italic>A</italic> to act <italic>B</italic>, and prefers act <italic>B</italic> to act <italic>C</italic>, then the same person must prefer act <italic>A</italic> to act <italic>C</italic>. This can be summarized by assigning a real number, called the utility or value, to each possible outcome in such a way that the preferences are transitive. The deterministic process of branching is identified as a chance setup for the rational person by a quantum game with a payoff function <italic>P</italic> associating a consequence with each eigenvalue of the observable <inline-formula><mml:math id="M2"><mml:mover accent="true"><mml:mrow><mml:mi>X</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:math></inline-formula>. When the measurement is performed, the state vector collapses into one or the other eigenstate of the observable being measured; a projection onto an eigenstate is performed. For an observable <inline-formula><mml:math id="M3"><mml:mover accent="true"><mml:mrow><mml:mi>X</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:math></inline-formula> and a state |<italic>y</italic>&#x0232A;, the expression for the probability reduces to |&#x02329;<italic>x</italic>|<italic>y</italic>&#x0232A;|<sup>2</sup>, in which <italic>x</italic> is an eigenvector of <inline-formula><mml:math id="M4"><mml:mover accent="true"><mml:mrow><mml:mi>X</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:math></inline-formula>. 
A quantum game is specified by the triple
<disp-formula id="E2"><mml:math id="M5"><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:mi>y</mml:mi><mml:mo>&#x0232A;</mml:mo><mml:mo>,</mml:mo><mml:mover accent='true'><mml:mi>X</mml:mi><mml:mo>&#x0005E;</mml:mo></mml:mover><mml:mo>,</mml:mo><mml:mi>P</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>It is assumed that the utilities of the possible payoffs have an additivity property [<xref ref-type="bibr" rid="B11">11</xref>]. The approach is based on two non-probabilistic axioms of decision theory. The principle of substitutability constrains the values of sub-games: if any sub-game is replaced by a game of equal value, then the value of the composite game is unchanged. The other axiom concerns two-player zero-sum games: if there are two possible acts <italic>A</italic> and <italic>B</italic> with payoff <italic>c</italic> for <italic>A</italic> and &#x02212;<italic>c</italic> for <italic>B</italic>, then playing both <italic>A</italic> and <italic>B</italic> results in zero. The expected utility or value of playing <italic>A</italic> and <italic>B</italic> is
<disp-formula id="E3"><mml:math id="M6"><mml:mrow><mml:mi>E</mml:mi><mml:mi>U</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>A</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:mi>E</mml:mi><mml:mi>U</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>B</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mi>c</mml:mi><mml:mo>&#x02212;</mml:mo><mml:mi>c</mml:mi><mml:mo>=</mml:mo><mml:mn>0.</mml:mn></mml:mrow></mml:math></disp-formula></p>
<p>For any two acts <italic>A</italic>, <italic>B</italic>, the rational person prefers <italic>A</italic> to <italic>B</italic> if
<disp-formula id="E4"><mml:math id="M7"><mml:mrow><mml:mi>E</mml:mi><mml:mi>U</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>A</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x0003E;</mml:mo><mml:mi>E</mml:mi><mml:mi>U</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>B</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>The person acts as if she regarded her multiple future branches as multiple possible futures. In classical decision theory, two rational persons may be represented by different probability measures on the set of states. Not so in the approach suggested by David Deutsch for the many-worlds theory [<xref ref-type="bibr" rid="B10">10</xref>]. The expected value with respect to an observable <inline-formula><mml:math id="M8"><mml:mover accent="true"><mml:mrow><mml:mi>X</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:math></inline-formula> is
<disp-formula id="E5"><label>(2)</label><mml:math id="M9"><mml:mrow><mml:mi>E</mml:mi><mml:mi>U</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:mi>y</mml:mi><mml:mo>&#x0232A;</mml:mo><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mo>&#x02329;</mml:mo><mml:mi>y</mml:mi><mml:mo>&#x0007C;</mml:mo><mml:mover accent='true'><mml:mi>X</mml:mi><mml:mo>&#x0005E;</mml:mo></mml:mover><mml:mo>&#x0007C;</mml:mo><mml:mi>y</mml:mi><mml:mo>&#x0232A;</mml:mo></mml:mrow></mml:math></disp-formula>
and since
<disp-formula id="E6"><label>(3)</label><mml:math id="M10"><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mi>y</mml:mi><mml:mo>&#x0007C;</mml:mo><mml:mover accent='true'><mml:mi>X</mml:mi><mml:mo>&#x0005E;</mml:mo></mml:mover><mml:mo>&#x0007C;</mml:mo><mml:mi>y</mml:mi><mml:mo>&#x0232A;</mml:mo><mml:mo>=</mml:mo><mml:mstyle displaystyle='true'><mml:munder><mml:mo>&#x02211;</mml:mo><mml:mi>i</mml:mi></mml:munder><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x0007C;</mml:mo><mml:mi>y</mml:mi><mml:mo>&#x0232A;</mml:mo></mml:mrow></mml:mstyle><mml:msup><mml:mo>&#x0007C;</mml:mo><mml:mn>2</mml:mn></mml:msup><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mstyle displaystyle='true'><mml:munder><mml:mo>&#x02211;</mml:mo><mml:mi>i</mml:mi></mml:munder><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mstyle><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:math></disp-formula>
the expected utility is the weighted mean over the eigenvalues of <inline-formula><mml:math id="M11"><mml:mover accent="true"><mml:mrow><mml:mi>X</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover><mml:mo>.</mml:mo></mml:math></inline-formula> In quantum physics, this weighted mean is called the expected value. A rational person who makes decisions about the outcomes of a measurement believes that each possible eigenvalue <italic>x</italic><sub><italic>i</italic></sub> has the probability <inline-formula><mml:math id="M12"><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>|</mml:mo><mml:mi>y</mml:mi></mml:mrow><mml:mo>&#x0232A;</mml:mo></mml:mrow><mml:msup><mml:mrow><mml:mo>|</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> due to the process of maximizing the probabilistic expectation value of the payoff [<xref ref-type="bibr" rid="B11">11</xref>].</p>
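<p>Equation (3), the expected value as the Born-weighted mean over the eigenvalues, can be verified numerically. The 2&#x000D7;2 observable and the state below are our own illustrative choices, not taken from the text:</p>

```python
import numpy as np

# Illustrative Hermitian observable with eigenvalues 1 and 3
X_hat = np.array([[1.0, 0.0],
                  [0.0, 3.0]])
y = np.array([1.0, 1.0]) / np.sqrt(2.0)   # normalized state |y>

direct = float(y @ X_hat @ y)             # <y|X^|y>

# Weighted mean over the eigenvalues: sum_i |<x_i|y>|^2 * x_i
x_vals, x_vecs = np.linalg.eigh(X_hat)
p = np.abs(x_vecs.T @ y) ** 2             # Born probabilities p_i
weighted = float(np.sum(p * x_vals))
# direct == weighted == (1 + 3) / 2 = 2
```

<p>Both routes give the same number, which is the content of Equation (3).</p>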
<sec>
<title>2.1.1. The derivation of probabilities for real amplitudes</title>
<p>We sketch the proof by David Elieser Deutsch, see Deutsch [<xref ref-type="bibr" rid="B11">11</xref>]. If |<italic>y</italic>&#x0232A; is an eigenstate |<italic>x</italic>&#x0232A; of <inline-formula><mml:math id="M13"><mml:mover accent="true"><mml:mrow><mml:mi>X</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:math></inline-formula> it follows that
<disp-formula id="E7"><mml:math id="M14"><mml:mrow><mml:mi>E</mml:mi><mml:mi>U</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:mi>y</mml:mi><mml:mo>&#x0232A;</mml:mo><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mi>x</mml:mi></mml:mrow></mml:math></disp-formula>
is equal to the eigenvalue <italic>x</italic>. By appealing twice to the additivity of utilities for eigenvectors |<italic>x</italic><sub><italic>i</italic></sub>&#x0232A; and adding a constant <italic>k</italic> we arrive at
<disp-formula id="E8"><label>(4)</label><mml:math id="M15"><mml:mrow><mml:mi>E</mml:mi><mml:mi>U</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mstyle displaystyle='true'><mml:munder><mml:mo>&#x02211;</mml:mo><mml:mi>i</mml:mi></mml:munder><mml:mrow><mml:msub><mml:mi>&#x003C9;</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mstyle><mml:mo>&#x000B7;</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:mi>k</mml:mi><mml:mo>&#x0232A;</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>E</mml:mi><mml:mi>U</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mstyle displaystyle='true'><mml:munder><mml:mo>&#x02211;</mml:mo><mml:mi>i</mml:mi></mml:munder><mml:mrow><mml:msub><mml:mi>&#x003C9;</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mstyle><mml:mo>&#x000B7;</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x0232A;</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mi>k</mml:mi><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>The rational person is indifferent between receiving the separate payoffs with utilities <italic>x</italic><sub>1</sub> and <italic>x</italic><sub>2</sub> and receiving a single payoff with utility <italic>x</italic><sub>1</sub> &#x0002B; <italic>x</italic><sub>2</sub>. The expected utility of |<italic>x</italic><sub><italic>i</italic></sub> &#x0002B; <italic>k</italic>&#x0232A; has, by additivity, the same value as the expected utility of |<italic>x</italic><sub><italic>i</italic></sub>&#x0232A; followed by <italic>k</italic>. This is the central equation on which the proof of David Deutsch is based. The constant <italic>k</italic> on the left side corresponds to a combination of eigenstates, because it is required that the payoffs are present on each branch. The constant <italic>k</italic> on the right side corresponds to a combination of the corresponding eigenvalues.</p>
<p>According to additivity, the left side of the equation, the superposition of possible branches, has the same expected utility as the right side, where the payoffs are represented by the corresponding combination of eigenvalues. The other equation follows from the axiom concerning two-player zero-sum games
<disp-formula id="E9"><label>(5)</label><mml:math id="M16"><mml:mrow><mml:mi>E</mml:mi><mml:mi>U</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mstyle displaystyle='true'><mml:munder><mml:mo>&#x02211;</mml:mo><mml:mi>i</mml:mi></mml:munder><mml:mrow><mml:msub><mml:mi>&#x003C9;</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mstyle><mml:mo>&#x000B7;</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x0232A;</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mi>E</mml:mi><mml:mi>U</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mstyle displaystyle='true'><mml:munder><mml:mo>&#x02211;</mml:mo><mml:mi>i</mml:mi></mml:munder><mml:mrow><mml:msub><mml:mi>&#x003C9;</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mstyle><mml:mo>&#x000B7;</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x0232A;</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mn>0.</mml:mn></mml:mrow></mml:math></disp-formula></p>
<p>If |<italic>y</italic>&#x0232A; is in a superposition
<disp-formula id="E10"><label>(6)</label><mml:math id="M17"><mml:mrow><mml:mo>&#x0007C;</mml:mo><mml:mi>y</mml:mi><mml:mo>&#x0232A;</mml:mo><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:msqrt><mml:mn>2</mml:mn></mml:msqrt></mml:mrow></mml:mfrac><mml:mo>&#x000B7;</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x0232A;</mml:mo><mml:mo>+</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:msqrt><mml:mn>2</mml:mn></mml:msqrt></mml:mrow></mml:mfrac><mml:mo>&#x000B7;</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x0232A;</mml:mo><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula>
and with
<disp-formula id="E11"><label>(7)</label><mml:math id="M18"><mml:mrow><mml:mi>k</mml:mi><mml:mo>=</mml:mo><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:math></disp-formula>
it follows
<disp-formula id="E12"><label>(8)</label><mml:math id="M19"><mml:mtable columnalign='left'><mml:mtr><mml:mtd><mml:mi>E</mml:mi><mml:mi>U</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:msqrt><mml:mn>2</mml:mn></mml:msqrt></mml:mrow></mml:mfrac><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x0232A;</mml:mo><mml:mo>+</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x0232A;</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mtext>&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;</mml:mtext><mml:mo>=</mml:mo><mml:mi>E</mml:mi><mml:mi>U</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:msqrt><mml:mn>2</mml:mn></mml:msqrt></mml:mrow></mml:mfrac><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mo>&#x0007C;</mml:mo><mml:mtext>&#x0200B;</mml:mtext><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x0232A;</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x0232A;</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mtext>&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;</mml:mtext><mml:mo>=</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mi>E</mml:mi><mml:mi>U</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mfrac>
<mml:mn>1</mml:mn><mml:mrow><mml:msqrt><mml:mn>2</mml:mn></mml:msqrt></mml:mrow></mml:mfrac><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x0232A;</mml:mo><mml:mo>+</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x0232A;</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mtext>&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;&#x02009;</mml:mtext><mml:mo>=</mml:mo><mml:mi>E</mml:mi><mml:mi>U</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:msqrt><mml:mn>2</mml:mn></mml:msqrt></mml:mrow></mml:mfrac><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x0232A;</mml:mo><mml:mo>+</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x0232A;</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
and by Equation (4) the value of the expected utility is derived as
<disp-formula id="E13"><label>(9)</label><mml:math id="M20"><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mn>2</mml:mn></mml:mfrac><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>E</mml:mi><mml:mi>U</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:msqrt><mml:mn>2</mml:mn></mml:msqrt></mml:mrow></mml:mfrac><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x0232A;</mml:mo><mml:mo>+</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x0232A;</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>For a superposition of <italic>n</italic> eigenstates
<disp-formula id="E14"><label>(10)</label><mml:math id="M21"><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mi>n</mml:mi></mml:mfrac><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:mtext>&#x0200B;</mml:mtext><mml:mo>&#x022EF;</mml:mo><mml:mtext>&#x0200B;</mml:mtext><mml:mo>+</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>n</mml:mi></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>E</mml:mi><mml:mi>U</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:msqrt><mml:mi>n</mml:mi></mml:msqrt></mml:mrow></mml:mfrac><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x0232A;</mml:mo><mml:mo>+</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x0232A;</mml:mo><mml:mo>+</mml:mo><mml:mtext>&#x0200B;</mml:mtext><mml:mo>&#x022EF;</mml:mo><mml:mtext>&#x0200B;</mml:mtext><mml:mo>+</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>n</mml:mi></mml:msub><mml:mo>&#x0232A;</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:math></disp-formula>
the proof is based on induction using the principle of substitutability and additivity. For <italic>n</italic> &#x0003D; 2<sup><italic>m</italic></sup> the proof follows from substitutability by inserting a two-outcome equal-amplitude game into the remaining 2<sup><italic>m</italic>&#x02212;1</sup> equal-amplitude outcomes. Otherwise the proof follows from the inductive hypothesis and additivity by replacing <italic>n</italic> &#x02212; 1 by <italic>n</italic>; for details see Deutsch [<xref ref-type="bibr" rid="B11">11</xref>]. For unequal amplitudes
<disp-formula id="E15"><label>(11)</label><mml:math id="M22"><mml:mrow><mml:mfrac><mml:mi>m</mml:mi><mml:mi>n</mml:mi></mml:mfrac><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:mfrac><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x02212;</mml:mo><mml:mi>m</mml:mi></mml:mrow><mml:mi>n</mml:mi></mml:mfrac><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:mi>E</mml:mi><mml:mi>U</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msqrt><mml:mrow><mml:mfrac><mml:mi>m</mml:mi><mml:mi>n</mml:mi></mml:mfrac></mml:mrow></mml:msqrt><mml:mo>&#x000B7;</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x0232A;</mml:mo><mml:mo>+</mml:mo><mml:msqrt><mml:mrow><mml:mfrac><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x02212;</mml:mo><mml:mi>m</mml:mi></mml:mrow><mml:mi>n</mml:mi></mml:mfrac></mml:mrow></mml:msqrt><mml:mo>&#x000B7;</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x0232A;</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula>
we introduce an auxiliary system that can be in two different states, either
<disp-formula id="E16"><label>(12)</label><mml:math id="M23"><mml:mrow><mml:msqrt><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mi>n</mml:mi></mml:mfrac></mml:mrow></mml:msqrt><mml:mo>&#x000B7;</mml:mo><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>a</mml:mi><mml:mtext>&#x0200A;</mml:mtext><mml:mo>=</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mn>1</mml:mn></mml:mrow><mml:mi>m</mml:mi></mml:munderover><mml:mrow><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>z</mml:mi><mml:mi>a</mml:mi></mml:msub><mml:mo>&#x0232A;</mml:mo></mml:mrow></mml:mstyle></mml:mrow></mml:math></disp-formula>
or
<disp-formula id="E17"><label>(13)</label><mml:math id="M24"><mml:mrow><mml:msqrt><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x02212;</mml:mo><mml:mi>m</mml:mi></mml:mrow></mml:mfrac></mml:mrow></mml:msqrt><mml:mo>&#x000B7;</mml:mo><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>a</mml:mi><mml:mtext>&#x0200A;</mml:mtext><mml:mo>=</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mi>m</mml:mi><mml:mtext>&#x0200A;</mml:mtext><mml:mo>+</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mn>1</mml:mn></mml:mrow><mml:mi>n</mml:mi></mml:munderover><mml:mrow><mml:mo>|</mml:mo><mml:msub><mml:mi>z</mml:mi><mml:mi>a</mml:mi></mml:msub><mml:mo>&#x0232A;</mml:mo></mml:mrow></mml:mstyle></mml:mrow></mml:math></disp-formula>
with eigenstates |<italic>z</italic><sub><italic>a</italic></sub>&#x0232A; and eigenvalues <italic>z</italic><sub><italic>a</italic></sub> of the observable &#x01E90; that are all distinct. Then the joint state is given by
<disp-formula id="E18"><label>(14)</label><mml:math id="M25"><mml:mrow><mml:msqrt><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mi>n</mml:mi></mml:mfrac></mml:mrow></mml:msqrt><mml:mo>&#x000B7;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>a</mml:mi><mml:mtext>&#x0200A;</mml:mtext><mml:mo>=</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mn>1</mml:mn></mml:mrow><mml:mi>m</mml:mi></mml:munderover><mml:mo>&#x0007C;</mml:mo></mml:mstyle><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x0232A;</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>z</mml:mi><mml:mi>a</mml:mi></mml:msub><mml:mo>&#x0232A;</mml:mo><mml:mo>+</mml:mo><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>a</mml:mi><mml:mtext>&#x0200A;</mml:mtext><mml:mo>=</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mi>m</mml:mi><mml:mtext>&#x0200A;</mml:mtext><mml:mo>+</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mn>1</mml:mn></mml:mrow><mml:mi>n</mml:mi></mml:munderover><mml:mo>&#x0007C;</mml:mo></mml:mstyle><mml:msub><mml:mi>x</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x0232A;</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>z</mml:mi><mml:mi>a</mml:mi></mml:msub><mml:mo>&#x0232A;</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>When we measure the observable &#x01E90;, the index <italic>a</italic> of the observed eigenvalue <italic>z</italic><sub><italic>a</italic></sub> tells us that <inline-formula><mml:math id="M26"><mml:mover accent="true"><mml:mrow><mml:mi>X</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:math></inline-formula> is <italic>x</italic><sub>1</sub> if <italic>a</italic> &#x02264; <italic>m</italic> and <italic>x</italic><sub>2</sub> otherwise. With the additional properties that the eigenvalues satisfy
<disp-formula id="E19"><label>(15)</label><mml:math id="M27"><mml:mrow><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>a</mml:mi><mml:mtext>&#x0200A;</mml:mtext><mml:mo>=</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mn>1</mml:mn></mml:mrow><mml:mi>m</mml:mi></mml:munderover><mml:mrow><mml:msub><mml:mi>z</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:mrow></mml:mstyle><mml:mo>=</mml:mo><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>a</mml:mi><mml:mtext>&#x0200A;</mml:mtext><mml:mo>=</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mi>m</mml:mi><mml:mtext>&#x0200A;</mml:mtext><mml:mo>+</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mn>1</mml:mn></mml:mrow><mml:mi>n</mml:mi></mml:munderover><mml:mrow><mml:msub><mml:mi>z</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:mrow></mml:mstyle><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:math></disp-formula>
and all <italic>n</italic> values
<disp-formula id="E20"><label>(16)</label><mml:math id="M28"><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>z</mml:mi><mml:mi>a</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:mo>&#x000A0;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x02264;</mml:mo><mml:mi>a</mml:mi><mml:mo>&#x02264;</mml:mo><mml:mi>m</mml:mi></mml:mrow></mml:math></disp-formula>
and
<disp-formula id="E21"><label>(17)</label><mml:math id="M29"><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>z</mml:mi><mml:mi>a</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:mo>&#x000A0;</mml:mo><mml:mi>m</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x02264;</mml:mo><mml:mi>a</mml:mi><mml:mo>&#x02264;</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:math></disp-formula>
are all distinct, the composite measurement with observables <inline-formula><mml:math id="M30"><mml:mover accent="true"><mml:mrow><mml:mi>X</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:math></inline-formula> and &#x01E90; has the same value as the measurement with observable <inline-formula><mml:math id="M31"><mml:mover accent="true"><mml:mrow><mml:mi>X</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:math></inline-formula> alone. Because of additivity, the state is equivalent to the amplitude superposition
<disp-formula id="E22"><label>(18)</label><mml:math id="M32"><mml:mrow><mml:msqrt><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mi>n</mml:mi></mml:mfrac></mml:mrow></mml:msqrt><mml:mo>&#x000B7;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>a</mml:mi><mml:mtext>&#x0200A;</mml:mtext><mml:mo>=</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mn>1</mml:mn></mml:mrow><mml:mi>m</mml:mi></mml:munderover><mml:mo>&#x0007C;</mml:mo></mml:mstyle><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x0232A;</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>z</mml:mi><mml:mi>a</mml:mi></mml:msub><mml:mo>&#x0232A;</mml:mo><mml:mo>+</mml:mo><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>a</mml:mi><mml:mtext>&#x0200A;</mml:mtext><mml:mo>=</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mi>m</mml:mi><mml:mtext>&#x0200A;</mml:mtext><mml:mo>+</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mn>1</mml:mn></mml:mrow><mml:mi>n</mml:mi></mml:munderover><mml:mo>&#x0007C;</mml:mo></mml:mstyle><mml:msub><mml:mi>x</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x0232A;</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>z</mml:mi><mml:mi>a</mml:mi></mml:msub><mml:mo>&#x0232A;</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:math></disp-formula>
and it follows
<disp-formula id="E23"><label>(19)</label><mml:math id="M33"><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mi>n</mml:mi></mml:mfrac><mml:mo>&#x000B7;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>a</mml:mi><mml:mtext>&#x0200A;</mml:mtext><mml:mo>=</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mn>1</mml:mn></mml:mrow><mml:mi>m</mml:mi></mml:munderover><mml:mo>&#x0007C;</mml:mo></mml:mstyle><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x0232A;</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>z</mml:mi><mml:mi>a</mml:mi></mml:msub><mml:mo>&#x0232A;</mml:mo><mml:mo>+</mml:mo><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>a</mml:mi><mml:mtext>&#x0200A;</mml:mtext><mml:mo>=</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mi>m</mml:mi><mml:mtext>&#x0200A;</mml:mtext><mml:mo>+</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mn>1</mml:mn></mml:mrow><mml:mi>n</mml:mi></mml:munderover><mml:mo>&#x0007C;</mml:mo></mml:mstyle><mml:msub><mml:mi>x</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x0232A;</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>z</mml:mi><mml:mi>a</mml:mi></mml:msub><mml:mo>&#x0232A;</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mi>m</mml:mi><mml:mi>n</mml:mi></mml:mfrac><mml:mo>&#x000B7;</mml:mo><mml:mtext>&#x02009;</mml:mtext><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:mfrac><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x02212;</mml:mo><mml:mi>m</mml:mi></mml:mrow><mml:mi>n</mml:mi></mml:mfrac><mml:mo>&#x000B7;</mml:mo><mml:mtext>&#x02009;</mml:mtext><mml:msub><mml:mi>x</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula></p>
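As a numerical sanity check (our own sketch with hypothetical values of <italic>n</italic>, <italic>m</italic>, <italic>x</italic><sub>1</sub>, <italic>x</italic><sub>2</sub>, not part of the original argument), the Born weights of the amplitude superposition in Equation (11) reproduce the weighted mean of Equation (19):

```python
import math

# Hypothetical example values: n equal-amplitude branches, m of which
# carry eigenvalue x1 and the rest eigenvalue x2.
n, m = 5, 2
x1, x2 = 3.0, -1.0

# Amplitudes of the two outcomes, as in Equation (11).
a1 = math.sqrt(m / n)
a2 = math.sqrt((n - m) / n)

# Born weights are the squared amplitudes.
p1, p2 = a1 ** 2, a2 ** 2

# Expected utility of the game, as in Equation (19).
eu = p1 * x1 + p2 * x2
print(eu)  # equals (m/n)*x1 + ((n-m)/n)*x2
```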
<p>A rational person would value the state |<italic>y</italic>&#x0232A; measured with an observable <inline-formula><mml:math id="M34"><mml:mover accent="true"><mml:mrow><mml:mi>X</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:math></inline-formula> by its expected value
<disp-formula id="E24"><label>(20)</label><mml:math id="M35"><mml:mrow><mml:mi>E</mml:mi><mml:mi>U</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:mi>y</mml:mi><mml:mo>&#x0232A;</mml:mo><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mo>&#x02329;</mml:mo><mml:mi>y</mml:mi><mml:mo>&#x0007C;</mml:mo><mml:mover accent='true'><mml:mi>X</mml:mi><mml:mo>&#x0005E;</mml:mo></mml:mover><mml:mo>&#x0007C;</mml:mo><mml:mi>y</mml:mi><mml:mo>&#x0232A;</mml:mo></mml:mrow></mml:math></disp-formula>
and since
<disp-formula id="E25"><label>(21)</label><mml:math id="M36"><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:mi>y</mml:mi><mml:mo>&#x0007C;</mml:mo><mml:mover accent='true'><mml:mi>X</mml:mi><mml:mo>&#x0005E;</mml:mo></mml:mover><mml:mo>&#x0007C;</mml:mo><mml:mi>y</mml:mi><mml:mo>&#x0232A;</mml:mo><mml:mo>=</mml:mo><mml:mstyle displaystyle='true'><mml:munder><mml:mo>&#x02211;</mml:mo><mml:mi>i</mml:mi></mml:munder><mml:mrow><mml:mo>&#x02329;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x0007C;</mml:mo><mml:mi>y</mml:mi><mml:mo>&#x0232A;</mml:mo></mml:mrow></mml:mstyle><mml:msup><mml:mo>&#x0007C;</mml:mo><mml:mn>2</mml:mn></mml:msup><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mstyle displaystyle='true'><mml:munder><mml:mo>&#x02211;</mml:mo><mml:mi>i</mml:mi></mml:munder><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mstyle><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:math></disp-formula>
represents the weighted mean over the eigenvalues of <inline-formula><mml:math id="M37"><mml:mover accent="true"><mml:mrow><mml:mi>X</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:math></inline-formula>, the rational person interprets <italic>p</italic><sub><italic>i</italic></sub> as probabilities. For complex amplitudes it is assumed that the unitary transformation
<disp-formula id="E26"><label>(22)</label><mml:math id="M38"><mml:mrow><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>a</mml:mi></mml:msub><mml:mo>&#x0232A;</mml:mo><mml:mo>&#x02192;</mml:mo><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:mrow></mml:msup><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>a</mml:mi></mml:msub><mml:mo>&#x0232A;</mml:mo></mml:mrow></mml:math></disp-formula>
with a corresponding phase &#x003B8;<sub><italic>a</italic></sub> does not alter the payoff; the player is indifferent as to whether it occurs. The proof can be extended to irrational numbers; it is based on the idea that the state undergoes some unitary evolution before the measurement [<xref ref-type="bibr" rid="B11">11</xref>]. The unitary evolution leads to real amplitudes with eigenvalues that exceed those of the original state. Each game played after the unitary transformation is as valuable as the original game, and the values of such games are bounded below by the value of the original game.</p>
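A small numerical illustration (ours, with a made-up amplitude and phase) of the phase invariance in Equation (22): multiplying an amplitude by e<sup><italic>i</italic>&#x000B7;&#x003B8;<sub><italic>a</italic></sub></sup> leaves its squared magnitude, and hence the payoff weight, unchanged:

```python
import cmath

omega = 0.6 + 0.0j   # hypothetical amplitude of an eigenstate |x_a>
theta = 1.234        # arbitrary phase theta_a

# Apply the unitary phase rotation of Equation (22): |x_a> -> e^{i*theta_a}|x_a>.
rotated = cmath.exp(1j * theta) * omega

# The squared magnitude, i.e., the probability weight, is unchanged.
print(abs(omega) ** 2, abs(rotated) ** 2)
```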
</sec>
</sec>
</sec>
<sec id="s3">
<title>3. Expected utility and entropy</title>
<p>A fundamental property of rational persons is that they prefer certainty to uncertainty. Humans prefer an action that brings them a certain but lower utility to an action that is uncertain but may bring a higher utility [<xref ref-type="bibr" rid="B14">14</xref>].</p>
<p>We measure the uncertainty by the entropy of the experiment. The experiment starts at <italic>t</italic><sub>0</sub> and ends at <italic>t</italic><sub>1</sub>. At <italic>t</italic><sub>0</sub>, we have no information about the result of the experiment; at <italic>t</italic><sub>1</sub>, we have all of the information, so the entropy of the experiment is 0. We can describe an experiment by probabilities. For the flip of a fair coin, the probability of a head or a tail is 0.5, <italic>p</italic> &#x0003D; (0.5, 0.5). A person <italic>A</italic> knows the outcome, but person <italic>B</italic> does not. Person <italic>B</italic> can ask <italic>A</italic> about the outcome of the experiment. If the questions are of the most basic nature, then we can measure the minimal number of optimally chosen questions <italic>B</italic> must pose to know the result of the experiment. A most basic question corresponds to the smallest unit of information, a yes or no answer. For a fair coin, we pose just one question, for example: is it a tail? For a card game in which we must determine whether a card is red, clubs, or spades, the number of questions varies. If the card is red, we need only one question. If the card is not red, we need a second question to determine whether it is a spade or a club. The probability of red is 0.5, of clubs 0.25, and of spades 0.25, <italic>p</italic> &#x0003D; (0.5, 0.25, 0.25). For clubs and spades, we need two questions, so on average we must ask 1 &#x000B7; 0.5 &#x0002B; 2 &#x000B7; 0.25 &#x0002B; 2 &#x000B7; 0.25 &#x0003D; 1.5 questions. The entropy is represented by Shannon&#x00027;s entropy <italic>H</italic> for an experiment <italic>A</italic>
<disp-formula id="E27"><label>(23)</label><mml:math id="M39"><mml:mrow><mml:mi>H</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>A</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mstyle displaystyle='true'><mml:munder><mml:mo>&#x02211;</mml:mo><mml:mi>i</mml:mi></mml:munder><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mstyle><mml:msub><mml:mrow><mml:mi>log</mml:mi></mml:mrow><mml:mn>2</mml:mn></mml:msub><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula></p>
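As a quick numerical check (our own sketch, not part of the original argument), Equation (23) reproduces the average question counts of the coin and card examples above:

```python
import math

def shannon_entropy(p):
    """Shannon entropy in bits, -sum p_i * log2(p_i), as in Equation (23)."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

print(shannon_entropy([0.5, 0.5]))         # fair coin: 1 question
print(shannon_entropy([0.5, 0.25, 0.25]))  # card example: 1.5 questions
```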
<p>It indicates the minimal number of optimal binary yes/no questions that a rational person must pose to know the result of an experiment [<xref ref-type="bibr" rid="B17">17</xref>, <xref ref-type="bibr" rid="B18">18</xref>]. We can describe the process of measuring a state by an observable as measuring the entropy of the experiment. Before the measurement of a state |<italic>x</italic>&#x0232A; by an observable, we are uncertain about the outcome; we measure this uncertainty by Shannon&#x00027;s entropy. After the measurement the state is in an eigenstate and the entropy is zero. Shannon&#x00027;s entropy is defined for any observable and any probability distribution; according to Ballentine [<xref ref-type="bibr" rid="B19">19</xref>, p. 617], &#x0201C;It measures the maximum amount of information that may be gained by measuring that observable.&#x0201D;</p>
<p>Assume that a Hilbert space <italic>H</italic><sub><italic>n</italic></sub> can be represented as a direct sum of orthogonal subspaces
<disp-formula id="E28"><label>(24)</label><mml:math id="M40"><mml:mrow><mml:msub><mml:mi>H</mml:mi><mml:mi>n</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>E</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x02295;</mml:mo><mml:msub><mml:mi>E</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x02295;</mml:mo><mml:mo>&#x02026;</mml:mo><mml:mo>&#x02295;</mml:mo><mml:msub><mml:mi>E</mml:mi><mml:mi>f</mml:mi></mml:msub></mml:mrow></mml:math></disp-formula>
with <italic>f</italic> &#x02264; <italic>n</italic>. Then a state |<italic>y</italic>&#x0232A; can be represented with |<italic>x</italic><sub><italic>i</italic></sub>&#x0232A; &#x02208; <italic>E</italic><sub><italic>i</italic></sub> as
<disp-formula id="E29"><mml:math id="M41"><mml:mrow><mml:mo>&#x0007C;</mml:mo><mml:mi>y</mml:mi><mml:mo>&#x0232A;</mml:mo><mml:mo>=</mml:mo><mml:msub><mml:mi>&#x003C9;</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x000B7;</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x0232A;</mml:mo><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x003C9;</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x000B7;</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x0232A;</mml:mo><mml:mo>+</mml:mo><mml:mo>&#x022EF;</mml:mo><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x003C9;</mml:mi><mml:mi>f</mml:mi></mml:msub><mml:mo>&#x000B7;</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>f</mml:mi></mml:msub><mml:mo>&#x0232A;</mml:mo><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>For one-dimensional subspaces, <italic>f</italic> &#x0003D; <italic>n</italic>, and the value |<italic>x</italic><sub><italic>k</italic></sub>&#x0232A; is observed with probability ||&#x003C9;<sub><italic>k</italic></sub> &#x000B7; |<italic>x<sub>k</sub></italic>&#x0232A;||<sup>2</sup> = |&#x003C9;<sub><italic>k</italic></sub>|<sup>2</sup>. Shannon&#x00027;s entropy is defined as
<disp-formula id="E30"><label>(25)</label><mml:math id="M42"><mml:mrow><mml:mi>H</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:mi>y</mml:mi><mml:mo>&#x0232A;</mml:mo><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mtext>&#x0200A;</mml:mtext><mml:mo>=</mml:mo><mml:mtext>&#x0200A;</mml:mtext><mml:mn>1</mml:mn></mml:mrow><mml:mi>n</mml:mi></mml:munderover><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>&#x003C9;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:msup><mml:mo>&#x0007C;</mml:mo><mml:mn>2</mml:mn></mml:msup><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mrow><mml:mi>log</mml:mi></mml:mrow><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>&#x003C9;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:msup><mml:mo>&#x0007C;</mml:mo><mml:mn>2</mml:mn></mml:msup><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mstyle><mml:mo>=</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mstyle displaystyle='true'><mml:munder><mml:mo>&#x02211;</mml:mo><mml:mi>i</mml:mi></mml:munder><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mstyle><mml:msub><mml:mrow><mml:mi>log</mml:mi></mml:mrow><mml:mn>2</mml:mn></mml:msub><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula></p>
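The entropy of Equation (25) can be computed directly from the amplitudes &#x003C9;<sub><italic>i</italic></sub>; this sketch (with made-up amplitudes, not from the paper) squares them into Born probabilities first:

```python
import math

def state_entropy(omegas):
    """Entropy of a measurement from amplitudes: -sum |w|^2 * log2(|w|^2)."""
    probs = [abs(w) ** 2 for w in omegas]
    assert abs(sum(probs) - 1.0) < 1e-9, "state must be normalized"
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical normalized state with amplitudes (1/sqrt(2), 1/2, 1/2),
# i.e., the same probability distribution as the card example.
print(state_entropy([1 / math.sqrt(2), 0.5, 0.5]))  # close to 1.5 bits
```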
<sec>
<title>3.1. Weighted sum of surprisals</title>
<p>We say that events that seldom happen, for example the letter <italic>x</italic> in a message, are more surprising. Some letters are more frequent than others; an <italic>e</italic> is more frequent than an <italic>x</italic>. The larger the probability of receiving a character, the less surprised we are. Surprise is inversely related to probability.</p>
<disp-formula id="E31"><mml:math id="M43"><mml:mrow><mml:msub><mml:mi>s</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mfrac></mml:mrow></mml:math></disp-formula>
<p>The logarithm of surprise
<disp-formula id="E32"><label>(26)</label><mml:math id="M44"><mml:mrow><mml:msub><mml:mi>I</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>log</mml:mi></mml:mrow><mml:mn>2</mml:mn></mml:msub><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>&#x003C9;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:msup><mml:mo>&#x0007C;</mml:mo><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mn>2</mml:mn><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mrow><mml:mi>log</mml:mi></mml:mrow><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>&#x003C9;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x0007C;</mml:mo><mml:mo>=</mml:mo><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mrow><mml:mi>log</mml:mi></mml:mrow><mml:mrow><mml:msqrt><mml:mn>2</mml:mn></mml:msqrt></mml:mrow></mml:msub><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>&#x003C9;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x0007C;</mml:mo><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>log</mml:mi></mml:mrow><mml:mn>2</mml:mn></mml:msub><mml:msub><mml:mi>s</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:math></disp-formula>
is the self-information or surprisal <italic>I</italic><sub><italic>i</italic></sub>. Shannon&#x00027;s entropy <italic>H</italic> represents the weighted sum of the surprisals.</p>
<disp-formula id="E33"><label>(27)</label><mml:math id="M45"><mml:mrow><mml:mi>H</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>A</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mstyle displaystyle='true'><mml:munder><mml:mo>&#x02211;</mml:mo><mml:mi>i</mml:mi></mml:munder><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mstyle><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mrow><mml:mi>log</mml:mi></mml:mrow><mml:mn>2</mml:mn></mml:msub><mml:msub><mml:mi>s</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mi>i</mml:mi><mml:mi>n</mml:mi></mml:munderover><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mstyle><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mi>I</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula>
<p>It can be interpreted as an expected utility
<disp-formula id="E34"><label>(28)</label><mml:math id="M46"><mml:mrow><mml:mi>E</mml:mi><mml:mi>U</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>A</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mtext>&#x02009;</mml:mtext><mml:mo>:</mml:mo><mml:mtext>&#x0200B;</mml:mtext><mml:mo>=</mml:mo><mml:mstyle displaystyle='true'><mml:munder><mml:mo>&#x02211;</mml:mo><mml:mi>i</mml:mi></mml:munder><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mstyle><mml:mo>&#x000B7;</mml:mo><mml:mi>U</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>A</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>i</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mi>i</mml:mi><mml:mi>n</mml:mi></mml:munderover><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mstyle><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mi>I</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula>
with the utility function <italic>U</italic>(<italic>A</italic>(<italic>i</italic>))
<disp-formula id="E35"><label>(29)</label><mml:math id="M47"><mml:mrow><mml:mi>U</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>A</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>i</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:msub><mml:mi>I</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>For acts, the expected utility is identified with the entropy of the action, represented by <italic>H</italic>
<disp-formula id="E36"><mml:math id="M48"><mml:mrow><mml:mi>H</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>A</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mtext>&#x02009;</mml:mtext><mml:mo>:</mml:mo><mml:mtext>&#x0200B;</mml:mtext><mml:mo>=</mml:mo><mml:mi>E</mml:mi><mml:mi>U</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>A</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>That the rewards are determined by the negative information content is already known and is used for utility representation problems in economics [<xref ref-type="bibr" rid="B20">20</xref>&#x02013;<xref ref-type="bibr" rid="B22">22</xref>]. If there are two possible acts A and B, with payoff <italic>c</italic> for A and &#x02212;<italic>c</italic> for B, then playing both A and B results in a payoff of zero. The expected utility, or value, of playing A and B is
<disp-formula id="E37"><mml:math id="M49"><mml:mrow><mml:mi>H</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>A</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>+</mml:mo><mml:mi>H</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>B</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mi>c</mml:mi><mml:mo>&#x02212;</mml:mo><mml:mi>c</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>,</mml:mo></mml:mrow></mml:math></disp-formula>
but for any two acts A, B, the rational person prefers A to B if
<disp-formula id="E38"><mml:math id="M50"><mml:mrow><mml:mi>H</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>A</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x0003C;</mml:mo><mml:mi>H</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>B</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></disp-formula>
because the uncertainty is lower. The theoretical proof of David Deutsch can be applied; consider, for example,
<disp-formula id="E39"><label>(30)</label><mml:math id="M51"><mml:mrow><mml:mo>&#x0007C;</mml:mo><mml:mi>y</mml:mi><mml:mo>&#x0232A;</mml:mo><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:msqrt><mml:mn>2</mml:mn></mml:msqrt></mml:mrow></mml:mfrac><mml:mo>&#x000B7;</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x0232A;</mml:mo><mml:mo>+</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:msqrt><mml:mn>2</mml:mn></mml:msqrt></mml:mrow></mml:mfrac><mml:mo>&#x000B7;</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x0232A;</mml:mo><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula>
and with
<disp-formula id="E40"><label>(31)</label><mml:math id="M52"><mml:mrow><mml:mi>k</mml:mi><mml:mo>=</mml:mo><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:math></disp-formula>
with eigenvalues <italic>x</italic><sub>1</sub> &#x0003D; 1 and <italic>x</italic><sub>2</sub> &#x0003D; 1 the right side of the equation becomes
<disp-formula id="E41"><label>(32)</label><mml:math id="M53"><mml:mrow><mml:mi>k</mml:mi><mml:mo>=</mml:mo><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mn>2</mml:mn></mml:mrow></mml:math></disp-formula>
or <italic>U</italic>(<italic>A</italic>(<italic>i</italic>)) &#x0003D; <italic>I</italic><sub><italic>i</italic></sub> &#x0003D; log<sub>2</sub>2 &#x0003D; 1. Therefore
<disp-formula id="E42"><label>(33)</label><mml:math id="M54"><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mn>2</mml:mn></mml:mfrac><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mn>2</mml:mn></mml:mfrac><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>E</mml:mi><mml:mi>U</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:msqrt><mml:mn>2</mml:mn></mml:msqrt></mml:mrow></mml:mfrac><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x0232A;</mml:mo><mml:mo>+</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x0232A;</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:math></disp-formula>
and
<disp-formula id="E43"><label>(34)</label><mml:math id="M55"><mml:mrow><mml:mn>1</mml:mn><mml:mo>=</mml:mo><mml:mn>0.5</mml:mn><mml:mo>&#x000B7;</mml:mo><mml:mn>1</mml:mn><mml:mo>+</mml:mo><mml:mn>0.5</mml:mn><mml:mo>&#x000B7;</mml:mo><mml:mn>1.</mml:mn></mml:mrow></mml:math></disp-formula></p>
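Putting Equations (30)&#x02013;(34) together numerically (our own check): for the equal superposition with eigenvalues <italic>x</italic><sub>1</sub> &#x0003D; <italic>x</italic><sub>2</sub> &#x0003D; 1, the expected utility is 1, which matches the surprisal <italic>I</italic><sub><italic>i</italic></sub> &#x0003D; log<sub>2</sub>2 &#x0003D; 1 of each branch:

```python
import math

# Equal-amplitude superposition of Equation (30).
amp = 1 / math.sqrt(2)
p1 = p2 = amp ** 2            # Born probabilities, each 0.5

x1 = x2 = 1.0                 # eigenvalues chosen in the text

eu = p1 * x1 + p2 * x2        # Equation (34): 0.5*1 + 0.5*1 = 1
surprisal = -math.log2(p1)    # I_i = -log2(0.5) = 1 = U(A(i))
print(eu, surprisal)
```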
<p>However, we can also recover the probabilities from the definition of Shannon&#x00027;s entropy. This strengthens the idea even more.</p>
</sec>
<sec>
<title>3.2. Recovering probabilities from entropy</title>
<p>The entropy, or uncertainty, is maximal when all probabilities are equal, that is, <italic>p</italic> &#x0003D; (1/<italic>n</italic>, 1/<italic>n</italic>, &#x02026;, 1/<italic>n</italic>). In this case
<disp-formula id="E44"><label>(35)</label><mml:math id="M56"><mml:mrow><mml:mi>H</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>F</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mstyle displaystyle='true'><mml:munder><mml:mo>&#x02211;</mml:mo><mml:mi>i</mml:mi></mml:munder><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mstyle><mml:msub><mml:mrow><mml:mi>log</mml:mi></mml:mrow><mml:mn>2</mml:mn></mml:msub><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mrow><mml:mi>log</mml:mi></mml:mrow><mml:mn>2</mml:mn></mml:msub><mml:mn>1</mml:mn><mml:mo>/</mml:mo><mml:mi>n</mml:mi><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>log</mml:mi></mml:mrow><mml:mn>2</mml:mn></mml:msub><mml:mi>n</mml:mi><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula>
and the surprisal <italic>I</italic><sub><italic>i</italic></sub> is equal to the entropy
<disp-formula id="E45"><label>(36)</label><mml:math id="M57"><mml:mrow><mml:mi>H</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>F</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>log</mml:mi></mml:mrow><mml:mn>2</mml:mn></mml:msub><mml:mi>n</mml:mi><mml:mo>=</mml:mo><mml:msub><mml:mi>I</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>If |<italic>y</italic>&#x0232A; is in a superposition of <italic>n</italic> eigenstates
<disp-formula id="E46"><label>(37)</label><mml:math id="M58"><mml:mrow><mml:mo>&#x0007C;</mml:mo><mml:mi>y</mml:mi><mml:mo>&#x0232A;</mml:mo><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:msqrt><mml:mi>n</mml:mi></mml:msqrt></mml:mrow></mml:mfrac><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x0232A;</mml:mo><mml:mo>+</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x0232A;</mml:mo><mml:mo>+</mml:mo><mml:mo>&#x022EF;</mml:mo><mml:mo>+</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>n</mml:mi></mml:msub><mml:mo>&#x0232A;</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:math></disp-formula>
it follows that a surprisal <inline-formula><mml:math id="M59"><mml:msubsup><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mo>*</mml:mo></mml:mrow></mml:msubsup></mml:math></inline-formula> of each state represented by amplitude <inline-formula><mml:math id="M60"><mml:msub><mml:mrow><mml:mi>&#x003C9;</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:msqrt><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msqrt></mml:mrow></mml:mfrac></mml:math></inline-formula> is
<disp-formula id="E47"><label>(38)</label><mml:math id="M61"><mml:mrow><mml:msubsup><mml:mi>I</mml:mi><mml:mi>i</mml:mi><mml:mo>*</mml:mo></mml:msubsup><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>log</mml:mi></mml:mrow><mml:mn>2</mml:mn></mml:msub><mml:msqrt><mml:mi>n</mml:mi></mml:msqrt><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mn>2</mml:mn></mml:mfrac><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mrow><mml:mi>log</mml:mi></mml:mrow><mml:mn>2</mml:mn></mml:msub><mml:mi>n</mml:mi><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>We know that in the case of equal amplitudes log<sub>2</sub><italic>n</italic> yes/no questions have to be asked, with <italic>H</italic>(<italic>F</italic>) &#x0003D; <italic>I</italic><sub><italic>i</italic></sub>, so the function <italic>f</italic>(<italic>I</italic><sup>&#x0002A;</sup>) amounts to multiplication by two
<disp-formula id="E48"><label>(39)</label><mml:math id="M62"><mml:mrow><mml:mi>H</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:mi>y</mml:mi><mml:mo>&#x0232A;</mml:mo><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:msub><mml:mi>I</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mi>f</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msubsup><mml:mi>I</mml:mi><mml:mi>i</mml:mi><mml:mo>*</mml:mo></mml:msubsup><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mn>2</mml:mn><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mrow><mml:mi>log</mml:mi></mml:mrow><mml:mn>2</mml:mn></mml:msub><mml:msqrt><mml:mi>n</mml:mi></mml:msqrt><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>log</mml:mi></mml:mrow><mml:mn>2</mml:mn></mml:msub><mml:mi>n</mml:mi><mml:mo>=</mml:mo><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mrow><mml:mi>log</mml:mi></mml:mrow><mml:mn>2</mml:mn></mml:msub><mml:msup><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:msqrt><mml:mi>n</mml:mi></mml:msqrt></mml:mrow></mml:mfrac></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:math></disp-formula>
that is equivalent to
<disp-formula id="E49"><label>(40)</label><mml:math id="M63"><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:msup><mml:mn>2</mml:mn><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>I</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:msqrt><mml:mi>n</mml:mi></mml:msqrt></mml:mrow></mml:mfrac></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mn>2</mml:mn></mml:msup><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula></p>
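<p>As a numerical sanity check (our addition, not part of the original derivation), the doubling of the amplitude surprisal and the recovery of the branch probability for equal amplitudes can be sketched in Python:</p>

```python
import math

def amplitude_surprisal(n):
    # surprisal of the amplitude 1/sqrt(n): I*_i = log2(sqrt(n)) = (1/2) log2(n)
    return math.log2(math.sqrt(n))

def recovered_probability(n):
    # f doubles the amplitude surprisal: I_i = 2 * I*_i = log2(n);
    # the branch probability is then recovered as p_i = 2**(-I_i) = 1/n
    I_i = 2 * amplitude_surprisal(n)
    return 2 ** (-I_i)

for n in (2, 4, 8):
    assert abs(recovered_probability(n) - 1 / n) < 1e-12
```

<p>For <italic>n</italic> = 4 branches the recovered probability is 1/4, matching the squared amplitude.</p>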
<p>In the case of complex amplitudes with phase &#x003B8;<sub><italic>a</italic></sub>
<disp-formula id="E50"><label>(41)</label><mml:math id="M64"><mml:mrow><mml:msub><mml:mi>&#x003C9;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:mrow></mml:msup><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:msqrt><mml:mi>n</mml:mi></mml:msqrt></mml:mrow></mml:mfrac></mml:mrow></mml:math></disp-formula>
<disp-formula id="E51"><label>(42)</label><mml:math id="M65"><mml:mrow><mml:msubsup><mml:mi>I</mml:mi><mml:mi>i</mml:mi><mml:mo>*</mml:mo></mml:msubsup><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>log</mml:mi></mml:mrow><mml:mn>2</mml:mn></mml:msub><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:mrow></mml:msup><mml:msqrt><mml:mi>n</mml:mi></mml:msqrt><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>i</mml:mi><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:mi>log</mml:mi><mml:mn>2</mml:mn></mml:mrow></mml:mfrac><mml:mo>+</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mn>2</mml:mn></mml:mfrac><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mrow><mml:mi>log</mml:mi></mml:mrow><mml:mn>2</mml:mn></mml:msub><mml:mi>n</mml:mi><mml:mo>,</mml:mo></mml:mrow></mml:math></disp-formula>
the operations described by <italic>f</italic>(<italic>I</italic><sup>&#x0002A;</sup>) are: first remove the imaginary term <italic>i</italic>&#x000B7;&#x003B8;<sub><italic>a</italic></sub>/log2 by subtracting it, and then multiply by 2. Equivalently,
<disp-formula id="E52"><label>(43)</label><mml:math id="M66"><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:msup><mml:mn>2</mml:mn><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>I</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:mi>i</mml:mi><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:mrow></mml:msup><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mi>&#x003B8;</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:mrow></mml:msup><mml:mo>&#x000B7;</mml:mo><mml:msqrt><mml:mi>n</mml:mi></mml:msqrt></mml:mrow></mml:mfrac></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mn>2</mml:mn></mml:msup><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:msqrt><mml:mi>n</mml:mi></mml:msqrt></mml:mrow></mml:mfrac></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mn>2</mml:mn></mml:msup><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula></p>
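<p>The phase-removal step can be illustrated the same way; the following sketch (an illustration under our reading of the text, with <monospace>theta</monospace> a hypothetical phase value) strips the imaginary part of the complex surprisal before doubling:</p>

```python
import cmath
import math

def complex_surprisal(theta, n):
    # surprisal of the amplitude e^(i*theta)/sqrt(n): the principal complex
    # logarithm gives i*theta/ln(2) + (1/2) log2(n) for theta in (-pi, pi]
    return cmath.log(cmath.exp(1j * theta) * math.sqrt(n)) / math.log(2)

def recovered_probability(theta, n):
    # remove the imaginary phase term, then multiply by two;
    # the branch probability follows as 2**(-I_i)
    I_i = 2 * complex_surprisal(theta, n).real
    return 2 ** (-I_i)

# the phase does not influence the recovered probability
assert abs(recovered_probability(math.pi / 3, 4) - 0.25) < 1e-12
```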
<p>For unequal amplitudes we multiply the surprisal <inline-formula><mml:math id="M67"><mml:msubsup><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mo>*</mml:mo></mml:mrow></mml:msubsup></mml:math></inline-formula> of each state by two and recover from <italic>I</italic><sub><italic>i</italic></sub> the value <inline-formula><mml:math id="M68"><mml:msub><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msup></mml:math></inline-formula>. This operation leads to the minimal number of optimal binary yes/no questions. For example, for
<disp-formula id="E53"><label>(44)</label><mml:math id="M69"><mml:mrow><mml:mo>&#x0007C;</mml:mo><mml:mi>y</mml:mi><mml:mo>&#x0232A;</mml:mo><mml:mo>=</mml:mo><mml:msqrt><mml:mrow><mml:mfrac><mml:mi>m</mml:mi><mml:mi>n</mml:mi></mml:mfrac></mml:mrow></mml:msqrt><mml:mo>&#x000B7;</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x0232A;</mml:mo><mml:mo>+</mml:mo><mml:msqrt><mml:mrow><mml:mfrac><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x02212;</mml:mo><mml:mi>m</mml:mi></mml:mrow><mml:mi>n</mml:mi></mml:mfrac></mml:mrow></mml:msqrt><mml:mo>&#x000B7;</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x0232A;</mml:mo></mml:mrow></mml:math></disp-formula>
we get by operations described by <italic>f</italic>(<italic>I</italic><sup>&#x0002A;</sup>)
<disp-formula id="E54"><label>(45)</label><mml:math id="M70"><mml:mrow><mml:mi>H</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:mi>y</mml:mi><mml:mo>&#x0232A;</mml:mo><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mo>&#x02212;</mml:mo><mml:mfrac><mml:mi>m</mml:mi><mml:mi>n</mml:mi></mml:mfrac><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mrow><mml:mi>log</mml:mi></mml:mrow><mml:mn>2</mml:mn></mml:msub><mml:mfrac><mml:mi>m</mml:mi><mml:mi>n</mml:mi></mml:mfrac><mml:mo>&#x02212;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x02212;</mml:mo><mml:mi>m</mml:mi></mml:mrow><mml:mi>n</mml:mi></mml:mfrac></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mrow><mml:mi>log</mml:mi></mml:mrow><mml:mn>2</mml:mn></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x02212;</mml:mo><mml:mi>m</mml:mi></mml:mrow><mml:mi>n</mml:mi></mml:mfrac></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mrow></mml:math></disp-formula>
the correct value of Shannon&#x00027;s entropy. We assume
<disp-formula id="E55"><label>(46)</label><mml:math id="M71"><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mrow><mml:mo>&#x0007C;</mml:mo><mml:msub><mml:mi>&#x003C9;</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x0007C;</mml:mo></mml:mrow></mml:mrow><mml:mn>2</mml:mn></mml:msup><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula></p>
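<p>For the two-branch state of Equation (44), the procedure of doubling each amplitude surprisal and weighting by the recovered probability reproduces Shannon's entropy. A minimal Python sketch of this check (our addition) follows:</p>

```python
import math

def branch_entropy(m, n):
    # |y> = sqrt(m/n)|x1> + sqrt((n-m)/n)|x2>, with 0 < m < n:
    # double each amplitude surprisal I*_i = -log2|w_i| and weight
    # the sum by the recovered branch probability
    amps = (math.sqrt(m / n), math.sqrt((n - m) / n))
    H = 0.0
    for w in amps:
        I_i = 2 * (-math.log2(abs(w)))  # I_i = -log2(|w|**2)
        p_i = 2 ** (-I_i)               # recovered branch probability
        H += p_i * I_i
    return H

def shannon_entropy(probs):
    # textbook Shannon entropy for comparison
    return -sum(p * math.log2(p) for p in probs if p > 0)

assert abs(branch_entropy(1, 4) - shannon_entropy((0.25, 0.75))) < 1e-12
```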
</sec>
<sec>
<title>3.3. Biological principle of energy minimization</title>
<p>Identity is a concept that defines the properties of a rational person over time [<xref ref-type="bibr" rid="B23">23</xref>]. It is a unifying concept based on the biological principle of homeostasis [<xref ref-type="bibr" rid="B24">24</xref>, <xref ref-type="bibr" rid="B25">25</xref>]. Organisms have to be kept stable to guarantee the maintenance of life, as in the regulation of body temperature. This principle was extended by allostasis [<xref ref-type="bibr" rid="B26">26</xref>] to the regulation of bodily functions over time. To perform this task, efficient mechanisms for the prediction of future states are needed to anticipate future environmental constellations [<xref ref-type="bibr" rid="B27">27</xref>, <xref ref-type="bibr" rid="B28">28</xref>], because the homeostatic state may be violated by unexpected changes in the future. It means as well that every organism implies a kind of self-identity over time [<xref ref-type="bibr" rid="B29">29</xref>]. This identity requires a time interval of finite duration within which sensory information is integrated. Different sensory signals arrive at different time stamps, so the fusion process has to be carried out over some time window. Similar problems arise during a sensor-fusion task in a mobile robot. For example, in human visual and auditory perception the transduction of acoustic information is much shorter than that of visual information [<xref ref-type="bibr" rid="B30">30</xref>]. It has been suggested that in humans a temporal window with a duration of 3 s is created [<xref ref-type="bibr" rid="B15">15</xref>]. This window represents the psychological concept of &#x0201C;now&#x0201D; [<xref ref-type="bibr" rid="B29">29</xref>]. The conscious concept of &#x0201C;now&#x0201D; represented by the temporal window is shifted backward in time relative to consciousness itself, since a subconscious mechanism is required to perform the integration task.</p>
<sec>
<title>3.3.1. Prediction of events</title>
<p>One of the brain&#x00027;s functions is to provide a causally consistent explanation of events to maintain self-identity over time, leading to the psychological concept of &#x0201C;now.&#x0201D; Split-brain research and stimulation of brain regions during awake surgery suggest that the brain generates explanations of effects that were not initiated by consciousness [<xref ref-type="bibr" rid="B31">31</xref>, <xref ref-type="bibr" rid="B32">32</xref>]. Before an event happens an explanation has to be incited by the subconscious parts of the brain, so that it can be integrated into the temporal window of the self when the event happens. Other bodily functions likewise need to be put on alert because of predicted possible events.</p>
<p>Events with higher surprise are more difficult to explain than events with low surprise values. An explanation has to be possible; when the surprise is too high, an explanation may be impossible and the identity of the self could break. This idea is related to the constructor theory of David Elieser Deutsch, see Deutsch [<xref ref-type="bibr" rid="B33">33</xref>]. The metabolic cost of the neural information processing required to explain higher-surprise events is greater than for lower-surprise events. Fechner&#x00027;s law states that there is a logarithmic relation between a stimulus and its intensity [<xref ref-type="bibr" rid="B34">34</xref>]. We assume as well that there is a logarithmic relation between the cost of initiating an explanation of an event and its surprise value <italic>s</italic><sub><italic>i</italic></sub>, that is log<italic>s</italic><sub><italic>i</italic></sub>. Neuronal computation is energetically expensive [<xref ref-type="bibr" rid="B35">35</xref>]. Consequently, the brain&#x00027;s limited energy supply imposes constraints on its information-processing capability. The costs should be fairly divided among the explanations of all predicted possible branches, since the organism will deterministically be present in all of them. A possible solution is given by Shannon&#x00027;s entropy. The corresponding value <italic>I</italic><sub><italic>i</italic></sub> is weighted by
<disp-formula id="E56"><label>(47)</label><mml:math id="M72"><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:msup><mml:mn>2</mml:mn><mml:mrow><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>I</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:msup></mml:mrow></mml:math></disp-formula>
and the resulting cost is <italic>p</italic><sub><italic>i</italic></sub> &#x000B7; <italic>I</italic><sub><italic>i</italic></sub>. The total cost of initiating <italic>n</italic> explanations for an action <italic>A</italic> with <italic>n</italic> predicted branches before a split is
<disp-formula id="E57"><label>(48)</label><mml:math id="M73"><mml:mrow><mml:mi>H</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>A</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mi>i</mml:mi><mml:mi>n</mml:mi></mml:munderover><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mstyle><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mrow><mml:mi>log</mml:mi></mml:mrow><mml:mn>2</mml:mn></mml:msub><mml:msub><mml:mi>s</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mi>i</mml:mi><mml:mi>n</mml:mi></mml:munderover><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mstyle><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mi>I</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:math></disp-formula></p>
<p>For the human (subconscious) brain it makes sense to choose <italic>A</italic> over <italic>B</italic> if
<disp-formula id="E58"><mml:math id="M74"><mml:mrow><mml:mi>H</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>A</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x0003C;</mml:mo><mml:mi>H</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>B</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>since it requires less energy.</p>
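<p>The energy-based choice between two actions can be sketched numerically as follows (the branch weights are hypothetical illustration values, not taken from the article):</p>

```python
import math

def explanation_cost(probs):
    # H(A) = sum_i p_i * I_i with I_i = -log2(p_i): the expected cost of
    # preparing explanations for all predicted branches of an action
    return -sum(p * math.log2(p) for p in probs if p > 0)

# hypothetical branch weights for two candidate actions
H_A = explanation_cost([0.9, 0.1])   # two unequal branches
H_B = explanation_cost([0.25] * 4)   # four equal branches

# the subconscious brain prefers the action with the lower expected cost
assert H_A < H_B
```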
</sec>
</sec>
</sec>
<sec sec-type="conclusions" id="s4">
<title>4. Conclusion</title>
<p>Since the branching is deterministic and no uncertainty is present, how can a rational person justify the application of decision-making? Why not simply assume that the branches are equally probable, given the deterministic nature of the branching? David Deutsch&#x00027;s probability rule within unitary quantum mechanics is based on rationality. Instead of rationality we introduced the biological principle of homeostasis. Before an event happens an explanation has to be prepared. The costs of the explanation should be fairly divided among the explanations of all predicted possible branches, since the organism will deterministically be present in all of them. The costs are described by the negative expected utility represented by Shannon&#x00027;s entropy. The probabilities can be recovered from Shannon&#x00027;s entropy.</p>
</sec>
<sec id="s5">
<title>Author contributions</title>
<p>All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.</p>
</sec>
<sec id="s6">
<title>Funding</title>
<p>This work was supported by national funds through Funda&#x000E7;&#x000E3;o para a Ci&#x000EA;ncia e Tecnologia (FCT) with reference UID/CEC/50021/2013 and through the PhD. grant SFRH/BD/92391/2013. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.</p>
<sec>
<title>Conflict of interest statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</sec>
</body>
<back>
<ref-list>
<title>References</title>
<ref id="B1">
<label>1.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Everett</surname> <given-names>H</given-names></name></person-group>. <article-title>&#x0201C;Relative state&#x0201D; formulation of quantum mechanics</article-title>. <source>Rev Mod Phys.</source> (<year>1957</year>) <volume>29</volume>:<fpage>454</fpage>&#x02013;<lpage>62</lpage>. <pub-id pub-id-type="doi">10.1103/RevModPhys.29.454</pub-id></citation>
</ref>
<ref id="B2">
<label>2.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wheeler</surname> <given-names>J</given-names></name></person-group>. <article-title>Assessment of Everett&#x00027;s &#x0201C;relative state</article-title>.&#x0201D; <source>Rev Mod Phys.</source> (<year>1957</year>) <volume>29</volume>:<fpage>463</fpage>&#x02013;<lpage>5</lpage>. <pub-id pub-id-type="doi">10.1103/RevModPhys.29.463</pub-id></citation>
</ref>
<ref id="B3">
<label>3.</label>
<citation citation-type="book"><person-group person-group-type="editor"><name><surname>Dewitt</surname> <given-names>BS</given-names></name> <name><surname>Graham</surname> <given-names>N</given-names></name></person-group> (eds.). <source>The Many-Worlds Interpretation of Quantum Mechanics</source>. <publisher-loc>Princeton, NJ</publisher-loc>: <publisher-name>Princeton University Press</publisher-name> (<year>1973</year>).</citation>
</ref>
<ref id="B4">
<label>4.</label>
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Deutsch</surname> <given-names>D</given-names></name></person-group>. <source>The Fabric of Reality</source>. <publisher-loc>London</publisher-loc>: <publisher-name>Penguin Group</publisher-name> (<year>1997</year>).</citation>
</ref>
<ref id="B5">
<label>5.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Deutsch</surname> <given-names>D</given-names></name></person-group>. <article-title>The structure of the multiverse</article-title>. <source>Proc R Soc A</source> (<year>2002</year>) <volume>458</volume>:<fpage>2911</fpage>&#x02013;<lpage>23</lpage>. <pub-id pub-id-type="doi">10.1098/rspa.2002.1015</pub-id></citation>
</ref>
<ref id="B6">
<label>6.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wallace</surname> <given-names>D</given-names></name></person-group>. <article-title>Worlds in the Everett interpretation</article-title>. <source>Stud Hist Philos Mod Phys.</source> (<year>2002</year>) <volume>33</volume>:<fpage>637</fpage>&#x02013;<lpage>61</lpage>. <pub-id pub-id-type="doi">10.1016/S1355-2198(02)00032-1</pub-id></citation>
</ref>
<ref id="B7">
<label>7.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wallace</surname> <given-names>D</given-names></name></person-group>. <article-title>Everett and structure</article-title>. <source>Stud Hist Philos Mod Phys.</source> (<year>2003a</year>) <volume>34</volume>:<fpage>87</fpage>&#x02013;<lpage>105</lpage>. <pub-id pub-id-type="doi">10.1016/S1355-2198(02)00085-0</pub-id></citation>
</ref>
<ref id="B8">
<label>8.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Byrne</surname> <given-names>P</given-names></name></person-group>. <article-title>The many worlds of Hugh Everett</article-title>. <source>Sci Am.</source> (<year>2007</year>) <volume>297</volume>:<fpage>98</fpage>&#x02013;<lpage>105</lpage>. <pub-id pub-id-type="doi">10.1038/scientificamerican1207-98</pub-id><pub-id pub-id-type="pmid">18237103</pub-id></citation>
</ref>
<ref id="B9">
<label>9.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bousso</surname> <given-names>R</given-names></name> <name><surname>Susskind</surname> <given-names>L</given-names></name></person-group>. <article-title>Multiverse interpretation of quantum mechanics</article-title>. <source>Phys Rev D</source> (<year>2012</year>) <volume>85</volume>:045007. <pub-id pub-id-type="doi">10.1103/PhysRevD.85.045007</pub-id><pub-id pub-id-type="pmid">25015084</pub-id></citation>
</ref>
<ref id="B10">
<label>10.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Greaves</surname> <given-names>H</given-names></name></person-group>. <article-title>Probability in the Everett interpretation</article-title>. <source>Philos Comp.</source> (<year>2007</year>) <volume>1</volume>:<fpage>109</fpage>&#x02013;<lpage>128</lpage>.</citation>
</ref>
<ref id="B11">
<label>11.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Deutsch</surname> <given-names>D</given-names></name></person-group>. <article-title>Quantum theory of probability and decisions</article-title>. <source>Proc R Soc A</source> (<year>1999</year>) <volume>455</volume>:<fpage>3129</fpage>&#x02013;<lpage>97</lpage>. <pub-id pub-id-type="doi">10.1098/rspa.1999.0443</pub-id></citation>
</ref>
<ref id="B12">
<label>12.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wallace</surname> <given-names>D</given-names></name></person-group>. <article-title>Everettian rationality: defending Deutsch&#x00027;s approach to probability in the Everett interpretation</article-title>. <source>Stud Hist Philos Mod Phys.</source> (<year>2003b</year>) <volume>34</volume>:<fpage>415</fpage>&#x02013;<lpage>39</lpage>. <pub-id pub-id-type="doi">10.1016/S1355-2198(03)00036-4</pub-id></citation>
</ref>
<ref id="B13">
<label>13.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wallace</surname> <given-names>D</given-names></name></person-group>. <article-title>Quantum probability from subjective likelihood: improving on Deutsch&#x00027;s proof of the probability rule</article-title>. <source>Stud Hist Philos Mod Phys.</source> (<year>2007</year>) <volume>38</volume>:<fpage>311</fpage>&#x02013;<lpage>32</lpage>. <pub-id pub-id-type="doi">10.1016/j.shpsb.2006.04.008</pub-id></citation>
</ref>
<ref id="B14">
<label>14.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Epstein</surname> <given-names>LG</given-names></name></person-group>. <article-title>A definition of uncertainty aversion</article-title>. <source>Rev Econ Stud.</source> (<year>1999</year>) <volume>66</volume>:<fpage>579</fpage>&#x02013;<lpage>608</lpage>. <pub-id pub-id-type="doi">10.1111/1467-937X.00099</pub-id></citation>
</ref>
<ref id="B15">
<label>15.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>P&#x000F6;ppel</surname> <given-names>E</given-names></name></person-group>. <article-title>Pre-semantically defined temporal windows for cognitive processing</article-title>. <source>Philos Trans R Soc Lond B Biol Sci.</source> (<year>2009</year>) <volume>364</volume>:<fpage>1887</fpage>&#x02013;<lpage>1896</lpage>. <pub-id pub-id-type="doi">10.1098/rstb.2009.0015</pub-id><pub-id pub-id-type="pmid">19487191</pub-id></citation>
</ref>
<ref id="B16">
<label>16.</label>
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Russell</surname> <given-names>SJ</given-names></name> <name><surname>Norvig</surname> <given-names>P</given-names></name></person-group>. <source>Artificial Intelligence: A Modern Approach, 2nd Edn</source>. <publisher-loc>Harlow</publisher-loc>: <publisher-name>Prentice-Hall</publisher-name> (<year>2003</year>).</citation>
</ref>
<ref id="B17">
<label>17.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shannon</surname> <given-names>CE</given-names></name></person-group>. <article-title>A mathematical theory of communication</article-title>. <source>Bell Syst Tech J.</source> (<year>1948</year>) <volume>27</volume>:<fpage>379</fpage>&#x02013;<lpage>423</lpage>. <pub-id pub-id-type="doi">10.1002/j.1538-7305.1948.tb01338.x</pub-id></citation>
</ref>
<ref id="B18">
<label>18.</label>
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Topsoe</surname> <given-names>F</given-names></name></person-group>. <source>Informationstheorie</source>. <publisher-loc>Stuttgart</publisher-loc>: <publisher-name>Teubner Studienb&#x000FC;cher</publisher-name> (<year>1974</year>). <pub-id pub-id-type="doi">10.1007/978-3-322-94886-1</pub-id></citation>
</ref>
<ref id="B19">
<label>19.</label>
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ballentine</surname> <given-names>LE</given-names></name></person-group>. <source>Quantum Mechanics A Modern Development, 2nd Edn</source>. <publisher-loc>Singapore</publisher-loc>: <publisher-name>World Scientific</publisher-name> (<year>2012</year>).</citation>
</ref>
<ref id="B20">
<label>20.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Candeal</surname> <given-names>JC</given-names></name> <name><surname>De Miguel</surname> <given-names>JR</given-names></name> <name><surname>Indur&#x000E1;in</surname> <given-names>E</given-names></name> <name><surname>Mehta</surname> <given-names>GB</given-names></name></person-group>. <article-title>Utility and entropy</article-title>. <source>Econ Theory</source> (<year>2001</year>) <volume>17</volume>:<fpage>233</fpage>&#x02013;<lpage>8</lpage>. <pub-id pub-id-type="doi">10.1007/PL00004100</pub-id></citation>
</ref>
<ref id="B21">
<label>21.</label>
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ortega</surname> <given-names>PA</given-names></name> <name><surname>Braun</surname> <given-names>DA</given-names></name></person-group>. <article-title>A conversion between utility and information</article-title>. In: <source>Proceedings of the Third Conference on Artificial General Intelligence</source>. <publisher-loc>Mountain View, CA</publisher-loc>: <publisher-name>Springer; Heidelberg</publisher-name> (<year>2009</year>). p. <fpage>115</fpage>&#x02013;<lpage>120</lpage>.</citation>
</ref>
<ref id="B22">
<label>22.</label>
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Robert</surname> <given-names>FN</given-names></name> <name><surname>Jose</surname> <given-names>VRR</given-names></name> <name><surname>Winkler</surname> <given-names>RL</given-names></name></person-group>. <article-title>Duality between maximization of expected utility and minimization of relative entropy when probabilities are imprecise</article-title>. In: <source>6th International Symposium on Imprecise Probability: Theories and Applications</source>. <publisher-loc>Durham</publisher-loc> (<year>2009</year>).</citation>
</ref>
<ref id="B23">
<label>23.</label>
<citation citation-type="book"><person-group person-group-type="author"><name><surname>P&#x000F6;ppel</surname> <given-names>E</given-names></name></person-group>. <article-title>Perceptual identity and personal self: neurobiological reflections</article-title>. In: <person-group person-group-type="editor"><name><surname>Fajkowska</surname> <given-names>M</given-names></name> <name><surname>Eysenck</surname> <given-names>MM</given-names></name></person-group> editors. <source>Personality From Biological, Cognitive, and Social Perspectives</source>. <publisher-loc>Clinton Corners, NY</publisher-loc>: <publisher-name>Eliot Werner Public</publisher-name> (<year>2010</year>). p. <fpage>75</fpage>&#x02013;<lpage>82</lpage>.</citation>
</ref>
<ref id="B24">
<label>24.</label>
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Bernard</surname> <given-names>C</given-names></name></person-group>. <source>An Introduction to the Study of Experimental Medicine</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Dover</publisher-name> (<year>1957</year>).</citation>
</ref>
<ref id="B25">
<label>25.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gross</surname> <given-names>CG</given-names></name></person-group>. <article-title>Claude Bernard and the constancy of the internal environment</article-title>. <source>Neuroscientist</source> (<year>1998</year>) <volume>4</volume>:<fpage>380</fpage>&#x02013;<lpage>5</lpage>. <pub-id pub-id-type="doi">10.1177/107385849800400520</pub-id></citation>
</ref>
<ref id="B26">
<label>26.</label>
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Sterling</surname> <given-names>P</given-names></name></person-group>. <article-title>Principles of allostasis: optimal design, predictive regulation, pathophysiology and rational therapeutics</article-title>. In: <person-group person-group-type="editor"><name><surname>Schulkin</surname> <given-names>J</given-names></name></person-group> editor. <source>Allostasis, Homeostasis, and the Costs of Adaptation</source>. <publisher-loc>Cambridge</publisher-loc>: <publisher-name>University Press</publisher-name> (<year>2004</year>). p. <fpage>17</fpage>&#x02013;<lpage>64</lpage>. <pub-id pub-id-type="doi">10.1017/CBO9781316257081.004</pub-id></citation>
</ref>
<ref id="B27">
<label>27.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>von Holst</surname> <given-names>E</given-names></name> <name><surname>Mittelstaedt</surname> <given-names>H</given-names></name></person-group>. <article-title>Das Reafferenzprinzip (Wechselwirkungen zwischen Zentralnervensystem und Peripherie)</article-title>. <source>Naturwissenschaften</source> (<year>1950</year>) <volume>37</volume>:<fpage>464</fpage>&#x02013;<lpage>76</lpage>. <pub-id pub-id-type="doi">10.1007/BF00622503</pub-id></citation>
</ref>
<ref id="B28">
<label>28.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bao</surname> <given-names>Y</given-names></name> <name><surname>P&#x000F6;ppel</surname> <given-names>E</given-names></name> <name><surname>Liang</surname> <given-names>W</given-names></name> <name><surname>Yang</surname> <given-names>T</given-names></name></person-group>. <article-title>When is the right time? a little later! &#x02013; delayed responses show better temporal control</article-title>. <source>Proc Soc Behav Sci.</source> (<year>2014</year>) <volume>126</volume>:<fpage>199</fpage>&#x02013;<lpage>200</lpage>. <pub-id pub-id-type="doi">10.1016/j.sbspro.2014.02.370</pub-id></citation>
</ref>
<ref id="B29">
<label>29.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhou</surname> <given-names>B</given-names></name> <name><surname>P&#x000F6;ppel</surname> <given-names>E</given-names></name> <name><surname>Bao</surname> <given-names>Y</given-names></name></person-group>. <article-title>In the jungle of time: the concept of identity as a way out</article-title>. <source>Front Psychol.</source> (<year>2014</year>) <volume>5</volume>:844. <pub-id pub-id-type="doi">10.3389/fpsyg.2014.00844</pub-id><pub-id pub-id-type="pmid">25120528</pub-id></citation>
</ref>
<ref id="B30">
<label>30.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>P&#x000F6;ppel</surname> <given-names>E</given-names></name> <name><surname>Schill</surname> <given-names>K</given-names></name> <name><surname>von Steinb&#x000FC;chel</surname> <given-names>N</given-names></name></person-group>. <article-title>Sensory integration within temporally neutral system states: a hypothesis</article-title>. <source>Naturwissenschaften</source> (<year>1990</year>) <volume>77</volume>:<fpage>89</fpage>&#x02013;<lpage>91</lpage>. <pub-id pub-id-type="doi">10.1007/BF01131783</pub-id></citation>
</ref>
<ref id="B31">
<label>31.</label>
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Libet</surname> <given-names>B</given-names></name></person-group>. <source>Mind Time - The Temporal Factor in Consciousness</source>. <publisher-loc>Cambridge, MA; London</publisher-loc>: <publisher-name>Harvard University Press</publisher-name> (<year>2004</year>).</citation>
</ref>
<ref id="B32">
<label>32.</label>
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Coon</surname> <given-names>D</given-names></name> <name><surname>Mitterer</surname> <given-names>JO</given-names></name></person-group>. <source>Introduction to Psychology: Gateways to Mind and Behavior, 13th Edn</source>. <publisher-loc>Boston, MA</publisher-loc>: <publisher-name>Wadsworth Publishing</publisher-name> (<year>2012</year>).</citation>
</ref>
<ref id="B33">
<label>33.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Deutsch</surname> <given-names>D</given-names></name></person-group>. <article-title>Constructor theory</article-title>. <source>Synthese</source> (<year>2013</year>) <volume>190</volume>:<fpage>4331</fpage>&#x02013;<lpage>59</lpage>. <pub-id pub-id-type="doi">10.1007/s11229-013-0279-z</pub-id></citation>
</ref>
<ref id="B34">
<label>34.</label>
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Frisby</surname> <given-names>JP</given-names></name> <name><surname>Stone</surname> <given-names>JV</given-names></name></person-group>. <source>Seeing, The Computational Approach to Biological Vision, 2nd Edn</source>. <publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>MIT Press</publisher-name> (<year>2010</year>).</citation>
</ref>
<ref id="B35">
<label>35.</label>
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Laughlin</surname> <given-names>SB</given-names></name> <name><surname>de Ruyter van Steveninck</surname> <given-names>RR</given-names></name> <name><surname>Anderson</surname> <given-names>JC</given-names></name></person-group>. <article-title>The metabolic cost of neural information</article-title>. <source>Nat Neurosci.</source> (<year>1998</year>) <volume>1</volume>:<fpage>36</fpage>&#x02013;<lpage>41</lpage>. <pub-id pub-id-type="doi">10.1038/236</pub-id><pub-id pub-id-type="pmid">10195106</pub-id></citation>
</ref>
</ref-list>
</back>
</article>