Inconclusive quantum measurements and decisions under uncertainty

We give a mathematical definition of the notion of inconclusive quantum measurements. In physics, such measurements occur at intermediate stages of a complex measurement procedure, with the final measurement result being operationally testable. Since the mathematical structure of Quantum Decision Theory has been developed in analogy with the theory of quantum measurements, inconclusive quantum measurements correspond, in Quantum Decision Theory, to intermediate stages of decision making in the process of taking decisions under uncertainty. The general form of the quantum probability of a composite event is the sum of a utility factor, describing the rational evaluation of the considered prospect, and an attraction factor, characterizing the irrational, subconscious attitudes of the decision maker. Despite the involved irrationality, the probabilities of prospects can be evaluated; this is equivalent to the possibility of calculating quantum probabilities without specifying hidden variables. We formulate a general way of evaluation based on the use of non-informative priors. As an example, we suggest an explanation of the decoy effect. Our quantitative predictions are in very good agreement with experimental data.


Introduction
The standard theory of quantum measurements (von Neumann, 1955) is based on the projection operator measure corresponding to operationally testable events. Simple measurements have to be operationally testable in order to possess physical meaning. However, if a measurement is composite, consisting of several parts, the intermediate stages do not necessarily have to be operationally testable, but can be inconclusive.
As a typical example, we can recall the well-known double-slit experiment, in which particles pass through a screen with two slits and are then registered by particle detectors some distance away from the screen. This experiment can be treated as a composite event consisting of two parts: first, the passage through one of the slits and, second, the registration by detectors. The registration of a particle by a detector is an operationally testable event, since the particle is either detected or not, with the result being evident to the observer. But the passage of the particle through one of the slits is not directly observed, and the experimentalist does not know which of the slits the particle has passed through. In that sense, the passage of the particle through a slit is an inconclusive event. The existence of this inconclusive event, occurring at the intermediate stage of the experiment, is intimately associated with the interference effect: if the experimentalist were to determine precisely the slit through which the particle had passed, the interference pattern registered by the particle detectors would be destroyed. The existence of interference is precisely due to the presence of the inconclusive event at the intermediate stage.
The occurrence of inconclusive events in decision making is even more frequent and important. Practically any decision, before it is explicitly formulated, passes through a stage of deliberation and hesitation accompanying the choice. That is, any decision is actually a composite event consisting of an intermediate stage of deliberation and a final stage of taking the decision. The final stage of decision making is equivalent to an operationally testable event in quantum measurements, while the intermediate stage of deliberation is analogous to an inconclusive event.
The analogy between the theory of quantum measurements and decision theory was mentioned by von Neumann (1955). Following this analogy, Quantum Decision Theory (QDT) has been advanced (Yukalov and Sornette, 2008, 2009a, 2011, 2014a), with a mathematical structure that is applicable both to decision making and to quantum measurements. The generality of our framework, being equally suitable for quantum measurements and decision making, is its principal difference from all other attempts that employ quantum techniques in the psychological sciences. An extensive literature on various quantum models in psychology and cognitive science can be found in the books (Khrennikov, 2010; Busemeyer and Bruza, 2012; Bagarello, 2013; Haven and Khrennikov, 2013) and review articles (Yukalov and Sornette, 2009b; Sornette, 2014; Ashtiani and Azgomi, 2015; Haven and Khrennikov, 2016).
Any approach applying quantum techniques to decision theory is naturally based on the notion of probability, because quantum theory is intrinsically probabilistic. This intrinsically probabilistic nature is what makes quantum decision theory principally different from stochastic decision theory, where the choice is treated as always being deterministic, while the decision maker acts with errors in the process of choosing (Hey, 1995; Loomes and Sugden, 1998; Carbone and Hey, 2000; Hey, 2005; Conte et al., 2010). Such stochastic decision theories can be termed "deterministic theories embedded into an environment with stochastic noise". The standard way of using a stochastic approach is to assume a probability distribution over the values characterizing the errors made by the subjects in the process of decision making. The parameters entering the distribution are then fitted a posteriori to empirical data by maximizing the log-likelihood function. Such a procedure allows one to better fit a given set of data to the assumed basic deterministic decision theory, in particular owing to the introduction of additional fitting parameters. However, it does not essentially change the structure of the underlying deterministic theory, only improving it slightly. And, being descriptive, the classical stochastic approach does not provide quantitative predictions.
Contrary to classical stochastic theory, in the quantum approach we do not assume that the choice of a decision maker is deterministic with just some weak disturbance by errors. Following the general quantum interpretation, we consider the choice process, including deliberations, hesitations, and the subconscious estimation of competing outcomes, as intrinsically random. The probabilistic decision, in the quantum case, is not just a stochastic decoration of a deterministic process; it is an unavoidable random part of any choice. The existence of hidden, often irrational, subconscious feelings and deliberations results in the appearance of quantum interference and entanglement. The difference between classical stochastic decision theory and quantum decision theory is similar to the difference between classical statistical physics and quantum theory. In the former, all processes are assumed to be deterministic, with statistics coming into play because of measurement uncertainties, such as imprecise knowledge of initial conditions and the impossibility of measuring exactly the locations and velocities of all particles. In contrast, quantum theory is probabilistic in principle, which becomes especially important for composite measurements.
A detailed mathematical theory of quantum measurements in the case of composite events has been developed in our previous papers (Yukalov and Sornette, 2013, 2016). In the present paper, we concentrate our attention on composite measurements including intermediate inconclusive events and on the application of this notion to characterizing decision making under risk and uncertainty. The importance of composite events, including intermediate inconclusive events, in decision theory makes it necessary to pay special attention to the correct mathematical formulation of such events and to the description of their properties, allowing for the quantitative evaluation of the corresponding quantum probabilities. We show that, despite the uncertainty accompanying inconclusive events, it is possible to give quantitative evaluations of quantum probabilities in decision theory, based on non-informative priors. Considering the decoy effect as an illustration, we demonstrate that even simple non-informative priors provide predictions in very good agreement with experimental data.

Composite quantum measurements and events
In this section, we give a brief summary of the general scheme for defining quantum probabilities for composite events. As we have stressed above, in our approach, the mathematics is the same for describing either quantum measurements or decision making. Therefore, when referring to an event, we can keep in mind either a fact of measurement or a decision action.
Let A_n be a conclusive operationally testable event labelled by an index n. And let B = {B_α} be a set of inconclusive events labelled by α. Defining the space of events as a Hilbert space H, we associate with an event A_n a state |n⟩ in this Hilbert space and an event operator P̂_n,

A_n → |n⟩ → P̂_n = |n⟩⟨n| .   (1)

The event operator for an operationally testable event is a projector. The set of inconclusive events B generates in the Hilbert space H the state |B⟩ and the event operator P̂_B, where the state reads

|B⟩ = Σ_α b_α |α⟩ ,   (2)

with the coefficients b_α being random complex numbers, and

P̂_B = |B⟩⟨B| .   (3)

The event operator for an inconclusive event is not necessarily a projector, but a member of a positive operator-valued measure (Yukalov and Sornette, 2013, 2016). The space of events, in the quantum approach, is the Hilbert space that is the tensor product of the spaces of the two event types,

H = H_A ⊗ H_B .   (4)

Each decision maker is characterized by an operator ρ̂ that can be termed the strategic state of the decision maker and that, in quantum theory, corresponds to a statistical operator. The pair {H, ρ̂} is named, in physics, a statistical ensemble, and in decision theory, a decision ensemble. A composite event is called a prospect and is denoted as

π_n = A_n ⊗ B .   (5)

A prospect π_n generates a state |π_n⟩ in the Hilbert space of events H,

|π_n⟩ = |n⟩ ⊗ |B⟩ ,   (6)

and a prospect operator

P̂(π_n) = |π_n⟩⟨π_n| .   (7)

The prospect operators are members of a positive operator-valued measure, which implies that they satisfy the resolution of unity Σ_n P̂(π_n) = 1̂ (Yukalov and Sornette, 2013, 2016). Since they contain the random quantities b_α, the corresponding random resolution has to be understood not as a direct equality between numbers but, e.g., as an equality in mean (Kallenberg, 2001). The prospect probability is

p(π_n) = Tr[ ρ̂ P̂(π_n) ] ,   (8)

with the trace over the space H.
To form a probability measure, the prospect probabilities are to be normalized:

Σ_n p(π_n) = 1 ,   0 ≤ p(π_n) ≤ 1 .   (9)

Taking explicitly the trace in expression (8) and separating diagonal and off-diagonal terms, we see that the prospect probability is represented as a sum

p(π_n) = f(π_n) + q(π_n)   (10)

of a positive-definite term

f(π_n) = Σ_α |b_α|² ⟨nα| ρ̂ |nα⟩   (11)

and a sign-undefined term

q(π_n) = Σ_{α≠β} b_α* b_β ⟨nα| ρ̂ |nβ⟩ .   (12)

The appearance of the sign-undefined term is a purely quantum effect responsible, in quantum measurements, for interference patterns. The attenuation of this quantum term is called decoherence. In quantum theory, decoherence can be due to external as well as internal perturbations and to the influence of measurements (Yukalov, 2011, 2012a). And in quantum decision theory, decoherence can occur due to the accumulation of information (Yukalov and Sornette, 2015c).
The disappearance of the quantum term implies the transition to classical theory. This is formulated as the quantum-classical correspondence principle (Bohr, 1976), which in our case reads as

p(π_n) → f(π_n) ,   q(π_n) → 0 .   (13)

This principle tells us that the term f(π_n) plays the role of a classical probability, hence it is to be normalized:

Σ_n f(π_n) = 1 ,   0 ≤ f(π_n) ≤ 1 .   (14)

When decisions concern a choice between lotteries, the classical term f(π_n) has to be defined according to classical decision theory, based either on expected utility theory or on some non-expected value functional. This suggests calling f(π_n) a utility factor, since it is defined on rational grounds and reflects the utility of a choice. The quantum term is caused by interference and entanglement effects in quantum theory, which correspond, in decision making, to irrational effects describing the attractiveness of a choice; therefore it can be called the attraction factor. From equations (9) and (14), there follows the alternation law

Σ_n q(π_n) = 0 .   (15)

Note that, in quantum theory, the definition of the composite event in the form of prospect (5) is valid for any type of operators, since they are defined on different spaces. No problem with the noncommutativity of operators defined on a common Hilbert space arises. This makes it possible to introduce joint quantum probabilities for several measurements (Yukalov and Sornette, 2013, 2016). Contrary to this, considering operators on the same Hilbert space does not allow one to define joint probabilities for noncommuting operators. Sometimes, one treats the Lüders probability of consecutive measurements as a conditional probability. This, however, is not justified from the theoretical point of view (Yukalov and Sornette, 2013, 2014b) and also contradicts experimental data (Boyer-Kassem et al., 2015a,b). But defining the quantum joint probability according to expression (8) contains no contradictions.
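As a minimal numerical sketch of the decomposition (10) and of the constraints (9), (14), and (15), one can mimic the double-slit situation with a two-mode superposition. The random amplitudes, the unitary of transition amplitudes, and the seed below are illustrative choices, not taken from the text.

```python
import numpy as np

# Toy two-slit sketch: two inconclusive alternatives alpha = 1, 2 with random
# complex weights b_alpha, and two operationally testable outcomes n = 1, 2.
rng = np.random.default_rng(7)
b = rng.normal(size=2) + 1j * rng.normal(size=2)
b /= np.linalg.norm(b)                      # normalized inconclusive state |B>

# Illustrative unitary supplying the transition amplitudes <n|alpha>
U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

amps = U @ b                                # total amplitude for each outcome n
p = np.abs(amps) ** 2                       # full quantum probability p(pi_n)
f = (np.abs(U) ** 2) @ (np.abs(b) ** 2)     # diagonal ("classical") part f(pi_n)
q = p - f                                   # sign-undefined interference term q(pi_n)

assert np.isclose(p.sum(), 1.0)             # normalization (9)
assert np.isclose(f.sum(), 1.0)             # normalization (14)
assert np.isclose(q.sum(), 0.0)             # alternation law (15)
```

The diagonal part f is non-negative by construction, while q carries the interference and can take either sign, exactly as in the decomposition (10).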
In the present section, the general scheme of QDT has been presented. Limited by the length of this paper, we cannot go into all the mathematical details that have been thoroughly described in our previous publications. However, we would like to stress that, for the purpose of practical applications, it is not necessary to study all these mathematical peculiarities; it is sufficient to employ the final formulas following the prescribed rules. One can use the formulated rules as given prescriptions, without studying their justification. The main formulas of this section, which are necessary for the following applications, are: the form of the quantum probability (10), the normalization conditions (9) and (14), and the alternation law (15). More details required for practical applications will be given in the sections below.
Non-informative prior for utility factors

To make the above scheme applicable to decision theory, it is necessary to specify how one should quantify the values of the utility factors and attraction factors. Here we show how these values can be defined as non-informative priors.
Let us consider a set of lotteries L_n = {x_i, p_n(x_i) : i = 1, 2, . . . , N_n}, enumerated by the index n = 1, 2, . . . , N_L, with payoffs x_i and their probabilities p_n(x_i). The related expected utilities

U(L_n) = Σ_i u(x_i) p_n(x_i)

can be defined according to expected utility theory (von Neumann and Morgenstern, 1953). For the present consideration, the utility functions u(x) do not need to be specified. For instance, they can be taken as linear functions, since this choice has the advantage of making the utility factors independent of the units measuring the payoffs.
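For concreteness, the expected utilities U(L_n) can be computed as follows; the two lotteries are hypothetical examples, and the linear utility u(x) = x is just the admissible choice mentioned above.

```python
def expected_utility(lottery, u=lambda x: x):
    """U(L_n) = sum_i u(x_i) p_n(x_i); lottery is a list of (payoff, probability) pairs."""
    return sum(u(x) * p for x, p in lottery)

# Hypothetical lotteries (not from the text): a risky one and a sure payoff
L1 = [(100.0, 0.5), (0.0, 0.5)]
L2 = [(40.0, 1.0)]

print(expected_utility(L1), expected_utility(L2))  # 50.0 40.0
```

Any other utility function u(x) can be passed in the same way, so the construction does not depend on the linearity assumption.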
In quantum decision theory, the act of choosing a lottery L_n, denoted as A_n, together with the accompanying set of inconclusive events B, including the decision-maker's hesitations (Yukalov and Sornette, 2014a, 2014b), composes the prospect (5). Depending on whether the expected utilities are positive or negative, there can be two cases.
If the expected utilities of the considered set of lotteries are all non-negative, such that

U(L_n) ≥ 0   (n = 1, 2, . . . , N_L) ,   (16)

then it is reasonable to require that zero utility corresponds to a zero utility factor:

f(π_n) → 0 ,   U(L_n) → 0 .   (17)

The case where the utility factor is simply proportional to the related expected utility trivially obeys condition (17). Taking into account the normalization condition (14) gives the utility factor

f(π_n) = U(L_n) / Σ_m U(L_m) .   (18)

When the expected utilities are negative, which happens in the domain of losses, such that

U(L_n) < 0   (n = 1, 2, . . . , N_L) ,   (19)

the required condition is that an infinite loss corresponds to a zero utility factor:

f(π_n) → 0 ,   U(L_n) → −∞ .   (20)

The simplest way to satisfy condition (20) is for the utility factor to be inversely proportional to the related expected utility. Taking into account the normalization condition, we get

f(π_n) = |U(L_n)|⁻¹ / Σ_m |U(L_m)|⁻¹ .   (21)

The utility-factor forms (18) and (21) coincide with the choice probabilities in the Luce choice axiom (Luce, 1959). It is possible to show that generalized forms of the utility factors can be derived by maximizing a conditional Shannon entropy or from the principle of minimal information (Yukalov and Sornette, 2009b; Frank, 2009; Batty, 2010).
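The two utility-factor forms (18) and (21) can be sketched in a few lines, assuming, as the text requires, that all expected utilities in the set share the same sign; the numerical utilities are illustrative.

```python
def utility_factors(utils):
    """f(pi_n) from Eq. (18) for gains (U >= 0) or Eq. (21) for losses (U < 0)."""
    if all(u >= 0 for u in utils):
        total = sum(utils)
        return [u / total for u in utils]
    if all(u < 0 for u in utils):
        inv = [1.0 / abs(u) for u in utils]
        return [w / sum(inv) for w in inv]
    raise ValueError("Eqs. (18) and (21) assume a common sign of the utilities")

print(utility_factors([50.0, 40.0]))    # gains: the larger utility gets the larger factor
print(utility_factors([-50.0, -40.0]))  # losses: the smaller loss gets the larger factor
```

In both branches the factors sum to one, in accordance with the normalization condition (14).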
In the case of positive expected utilities, we consider the information functional, taking into account the normalization condition (14) and the expected log-likelihood Λ. This functional reads as

I[f] = Σ_n f(π_n) ln f(π_n) + λ [ Σ_n f(π_n) − 1 ] + α [ Σ_n f(π_n) Λ(π_n) − Λ ] ,   (22)

where

Λ(π_n) = − ln U(L_n) ,   U(L_n) ≥ 0 .
Minimizing functional (22) results in the utility factor

f(π_n) = U^α(L_n) / Σ_m U^α(L_m) ,   (23)

in which the positive sign of α is prescribed by the condition that a larger utility implies a larger factor. In the case of negative expected utilities, the information functional takes the same form (22), but with

Λ(π_n) = ln |U(L_n)| ,   U(L_n) < 0 .   (24)

Its minimization then yields the utility factor

f(π_n) = |U(L_n)|⁻ᵞ / Σ_m |U(L_m)|⁻ᵞ ,   (25)

with the positive sign of γ prescribed by the requirement that a larger cost implies a smaller factor. The utility factors (23) and (25) are examples of the power-law distributions that are known in many applications (Frank, 2009; Batty, 2010; Saichev et al., 2010).
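The power-law forms (23) and (25) reduce to the linear forms (18) and (21) when the exponents equal one; a brief sketch, with the exponent values purely illustrative:

```python
import numpy as np

def power_utility_factors(utils, exponent=1.0):
    """Eq. (23) for gains, f ~ U^alpha; Eq. (25) for losses, f ~ |U|^(-gamma)."""
    u = np.asarray(utils, dtype=float)
    w = u ** exponent if (u >= 0).all() else np.abs(u) ** (-exponent)
    return w / w.sum()

# exponent = 1 recovers the linear factors of Eqs. (18) and (21)
print(power_utility_factors([50.0, 40.0], exponent=1.0))
print(power_utility_factors([50.0, 40.0], exponent=2.0))  # sharper preference
```

A larger exponent concentrates the factors on the best lottery, which is the typical behavior of power-law weighting.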

Non-informative prior for attraction factors
Although the attraction factor characterizes irrational features of decision making, it can be estimated by invoking non-informative prior assumptions. An important consequence of the latter is the quarter law derived earlier (Yukalov and Sornette, 2009b, 2011). Here we first give a new, probably the simplest, derivation of the quarter law and, second, show how this law can be used for estimating the attraction factors in the case of an arbitrary number of prospects.
Let us consider the typical amplitude of the attraction factors, defined through the sum of the attraction-factor moduli,

q̄ ≡ (1/N_L) Σ_{n=1}^{N_L} |q(π_n)| = ∫₀¹ x φ(x) dx ,   (26)

where

φ(x) ≡ (1/N_L) Σ_{n=1}^{N_L} δ(x − q(π_n))   (27)

plays the role of the attraction-factor distribution. The latter is normalized as

∫₋₁¹ φ(x) dx = 1 ,   (28)

since the attraction factors, in view of condition (15), vary in the interval [−1, 1]. If q(π_n) does not equal zero, then normalization (28) is evident. And when q(π_n) = 0, one should use the identity

∫₀¹ δ(x) dx = 1/2

for the semi-integral of the Dirac function. The use of a non-informative prior implies that the values of the attraction factor are not known. Full ignorance is captured by a uniform distribution, which, according to normalization (28), gives

φ(x) = 1/2   (−1 ≤ x ≤ 1) .   (29)

In that case, integral (26) results in the quarter law

q̄ = ∫₀¹ x φ(x) dx = 1/4 .   (30)

If the prospect lattice L = {π_n} consists of N_L prospects, we can always arrange the corresponding attraction factors in decreasing order, such that

q(π_n) > q(π_{n+1})   (n = 1, 2, . . . , N_L − 1) .   (31)
We denote the largest attraction factor as

q_max ≡ q(π_1) .   (32)

Given the unknown values of the attraction factors, the non-informative prior assumes that they are uniformly distributed, while at the same time obeying the ordering constraint (31). Then the joint cumulative distribution of the attraction factors is given by the order-statistics expression

F(η_1, η_2, . . . , η_{N_L}) = N_L! ∫₋₁^{η_1} (dx_1/2) ∫_{x_1}^{η_2} (dx_2/2) · · · ∫_{x_{N_L−1}}^{η_{N_L}} (dx_{N_L}/2) ,   (33)

where the series η_1 ≤ η_2 ≤ . . . ≤ η_{N_L} of inequalities ensures the ordering. It is then straightforward to show that the average values of the q(π_n) are equidistant, i.e., the difference between any two neighboring factors is on average the constant gap

Δ ≡ ⟨q(π_n)⟩ − ⟨q(π_{n+1})⟩   (n = 1, 2, . . . , N_L − 1) .   (34)

Taking their average values as determining their typical values, we omit the symbol ⟨·⟩ representing the averaging and use equation (34) to represent the n-th attraction factor as

q(π_n) = q_max − (n − 1)Δ .   (35)

With notations (32) and (34), the alternation condition (15) yields

q_max = (N_L − 1)Δ/2 .   (36)

And the quarter law (30) leads to the gap

Δ = N_L / ( 2 Σ_{n=1}^{N_L} |N_L + 1 − 2n| ) .   (37)

If N_L is even, then

Σ_{n=1}^{N_L} |N_L + 1 − 2n| = N_L²/2 ,

while when N_L is odd, then

Σ_{n=1}^{N_L} |N_L + 1 − 2n| = (N_L² − 1)/2 .

This allows us to represent gap (37) as

Δ = 1/N_L   (N_L even) ,   Δ = N_L/(N_L² − 1)   (N_L odd) .   (38)

And for the largest attraction factor, we find

q_max = (N_L − 1)/(2N_L)   (N_L even) ,   q_max = N_L/(2(N_L + 1))   (N_L odd) .   (39)

The above expressions make it possible to evaluate, on the basis of the non-informative prior, the whole set

Q_{N_L} ≡ {q(π_n) : n = 1, 2, . . . , N_L}   (40)

of the attraction factors:

q(π_n) = (N_L + 1 − 2n)Δ/2 .   (41)

For example, in the case of two prospects, we have Δ = 1/2, which yields the attraction set Q_2 = {1/4, −1/4}. For three prospects, we get Q_3 = {3/8, 0, −3/8}. Similarly, for four prospects, we find Δ = 1/4, with the attraction set Q_4 = {3/8, 1/8, −1/8, −3/8}. When there are five prospects, then Q_5 = {5/12, 5/24, 0, −5/24, −5/12}. Thus, we can evaluate the attraction factors for any number of prospects, obtaining a kind of quantized attraction set. In the case of an asymptotically large number N_L of prospects, we have

Δ ≃ 1/N_L   (N_L → ∞)

and

q_max ≃ 1/2   (N_L → ∞) .

The non-informative priors can be employed for predicting the results of decision making. This makes the principal difference compared with the introduction into expected utility of adjustment parameters that are fitted post hoc to the given experimental data (Siniscalchi, 2009).
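The quantized attraction sets above can be generated programmatically; a sketch verifying, with exact rational arithmetic, that each set obeys the alternation law (15) and the quarter law (30), using the even/odd gap Δ derived from those two constraints:

```python
from fractions import Fraction

def attraction_set(n_prospects):
    """Equidistant attraction factors q(pi_n) = (N_L + 1 - 2n) * Delta / 2,
    with the gap Delta = 1/N_L (even N_L) or N_L/(N_L^2 - 1) (odd N_L)."""
    N = n_prospects
    gap = Fraction(1, N) if N % 2 == 0 else Fraction(N, N * N - 1)
    return [gap * Fraction(N + 1 - 2 * n, 2) for n in range(1, N + 1)]

for N in (2, 3, 4, 5):
    Q = attraction_set(N)
    assert sum(Q) == 0                               # alternation law (15)
    assert sum(abs(q) for q in Q) == Fraction(N, 4)  # quarter law (30): mean modulus 1/4

print(attraction_set(2), attraction_set(4))
```

For N_L = 2 this reproduces the set {1/4, −1/4}, and for N_L = 4 the set {3/8, 1/8, −1/8, −3/8} quoted in the text.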

Quantitative explanation of the decoy effect
We now show how the non-informative priors for the attraction factors can be employed to explain the decoy effect and to make quantitative predictions in decision making. Throughout this section, we denote, for simplicity, an object of choice, say A, as well as the act of choosing the object A, by the same letter A. As has been emphasized above, the act of choice under uncertainty is a composite prospect. But, again for simplicity, we employ the same letter for denoting the action A and the related prospect (5).
The decoy effect was first studied by Huber, Payne and Puto (1982), who called it the effect of asymmetrically dominated alternatives. Later, this effect was confirmed in a number of experimental investigations (Simonson, 1989; Wedell, 1991; Tversky and Simonson, 1993; Ariely and Wallsten, 1995). The meaning of the decoy effect can be illustrated by the following example. Suppose a buyer is choosing between two objects, A and B. The object A is of better quality but higher price, while the object B is of slightly lower quality but less expensive. Since the functional properties of the two objects are not drastically different, but B is cheaper, the majority of buyers value the object B higher. At this moment, the salesperson mentions that there is a third object, C, which is of about the same quality as A but of a much higher price than A. This causes the buyer to reconsider the choice between the objects A and B, while the object C, having the same quality as A but being much more expensive, is of no interest. Choosing now between A and B, the majority of buyers prefer the higher-quality but more expensive object A. The object C, not being a real choice alternative, plays the role of a decoy. Experimental studies have confirmed the decoy effect for a variety of objects: cars, microwave ovens, shoes, computers, bicycles, beer, apartments, mouthwash, etc. The same decoy effect also exists in the choice of human mates distinguished by attractiveness and sense of humor (Bateson and Healy, 2005). It is common for animals as well, for instance, in female frogs' choice of males with different attraction calls, characterized either by low frequency and longer duration or by faster call rates (Lea and Ryan, 2015).
The decoy effect contradicts the regularity axiom in decision making, which states that if B is preferred to A in the absence of C, then this preference has to remain in the presence of C.
In the frame of QDT, the decoy effect is explained as follows. Assume buyers consider an object A, which is of higher quality but more expensive, and an object B, which is of moderate quality but cheaper. Suppose the buyers have evaluated these objects A and B, which implies that the initial values of the objects are described by the utility factors f(A) and f(B). In experiments, the latter correspond to the fractions of buyers valuing the related object higher. When the decoy C, of high quality but essentially more expensive, is presented, it attracts the attention of buyers to the quality characteristic. The role of the decoy is well understood as attracting the attention of buyers to a particular feature of the considered objects, because of which the decoy effect is sometimes named the attraction effect (Simonson, 1989). In the present case, the decoy attracts the buyers' attention to quality. The attraction induced by the decoy is described by the attraction factors q(A) and q(B). Hence the probabilities of the related choices are now

p(A) = f(A) + q(A) ,   p(B) = f(B) + q(B) .

Since the quality feature becomes more attractive, q(A) > q(B). According to the non-informative prior, we can estimate the attraction factors as q(A) = 1/4 and q(B) = −1/4.
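A minimal sketch of this prediction scheme; the no-decoy fractions below are hypothetical placeholders, not the experimental values discussed next.

```python
# QDT decoy sketch: with the decoy present, the choice probabilities are the
# no-decoy utility factors shifted by the quarter-law attraction factors.
Q_ATTRACT = 0.25   # non-informative prior: q(A) = +1/4, q(B) = -1/4

def decoy_probabilities(f_A, f_B):
    return f_A + Q_ATTRACT, f_B - Q_ATTRACT

f_A, f_B = 0.40, 0.60          # hypothetical no-decoy fractions favoring B
p_A, p_B = decoy_probabilities(f_A, f_B)
print(round(p_A, 2), round(p_B, 2))   # 0.65 0.35 -- preference reverses toward A
```

Any initial preference for B weaker than the quarter-law shift is thus predicted to reverse once the decoy is introduced.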
To be more precise, one can take numerical values from the experiment of Ariely and Wallsten (1995). Another example can be taken from the studies of frog mate choice (Lea and Ryan, 2015), where frog males have attraction calls differing in either low-frequency sound or call rate. The males with lower-frequency calls are denoted as A, while those with a higher call rate, as B. In an experiment with 80 frog females, without a decoy, it was found that the females evaluate the fastest call rate higher, so that f(B) > f(A).

To make it clear how the decoy effect fits the title of the paper, "Inconclusive quantum measurements and decisions under uncertainty", it is worth extending the comments made in the Introduction.
Our principal point of view is that decision making almost always deals with composite events, since any choice is accompanied by subconscious feelings and irrational biases. The latter are often difficult to formalize and, moreover, their weights are usually not known and are practically unmeasurable. This is why these subconscious irrational factors can be treated as what are called inconclusive events. When choosing between several possibilities, say A_n, one actually considers composite prospects, as defined in (5). And the composite nature of choices requires the use of quantum techniques, as has been explained in our previous paper (Yukalov and Sornette, 2014b); otherwise, the probabilities of simple events could be characterized by classical theory. It is the composite nature of the considered prospects that yields the appearance of the quantum term q(π_n), related to interference and coherence effects. In that way, the choice between the objects in the decoy effect is also a composite prospect, being composed of the choice as such and the accompanying subconscious feelings forming an inconclusive set. This is why the use of QDT is necessary here and why it gives such good results.
It is admissible to give a schematic picture of the choice in the decoy effect by analogy with the double-slit experiment in physics mentioned in the Introduction. Thus, making a concrete selection of either the object A or B is the analog of the registration of a particle by a detector. But before such a selection is made, there exists the uncertainty of deciding which of the object features are actually more important. These not precisely defined acts of hesitation play the role of the slits, with their uncertainty analogous to not knowing which of the slits the particle has passed through. When it is known which of the slits the particle has passed through, the interference effects in physics disappear. Similarly, in decision theory, if the values of each object are clearly defined, there are no hesitations, no interference, and the selection can be based on classical rules. Such an objective evaluation in the decoy effect happens in the absence of any decoy, when a decision maker rationally evaluates the features of the given objects, say quality and price. The appearance of a decoy induces hesitations concerning which of the features are actually more important. These hesitations before the choice are the analogs of the uncertainty about which slit is visited by the travelling particle. The uncertainty results in the interference and the arising quantum term, whether in the registration of a particle or in the final choice of a decision maker.

Discussion
We have presented a mathematical formulation of the concept of inconclusive quantum measurements and events. This type of measurement in physics happens at intermediate stages of composite measuring procedures, while the final measurement stage is operationally testable. In decision making, inconclusive events correspond to the intermediate stage of deliberation. Invoking non-informative priors, it is possible to estimate the prospect probabilities, thus predicting the results of decision making.
Generally, invoking more information on the properties of the attraction factor, it is possible to define its form more accurately than by the value given by the non-informative prior. For example, from condition (9) it follows that

−f(π_n) ≤ q(π_n) ≤ 1 − f(π_n) .
This suggests that the absolute value of the attraction factor can be modelled by an expression proportional to q(π_n) ∝ f^µ(π_n)[1 − f(π_n)]^ν, with µ and ν being positive parameters and the sign defined by the ambiguity and risk aversion principle (Yukalov and Sornette, 2009b, 2011, 2014a). A more detailed study of such a form will be given in a separate paper. But it turns out that even the simple non-informative prior provides us with a rather good estimate allowing for quantitative predictions in decision making. And we have illustrated the approach by the decoy effect, for which the non-informative priors yield quantitative predictions in very good agreement with empirical data.

In this paper, decision making by separate subjects has been considered. We think that the theory can be generalized to societies of decision makers. The exchange of information in a society should certainly influence the decisions of the society members. To develop a theory of many agents, it is necessary to generalize the approach by treating a dynamical model of agents exchanging information. Then, we think, it would be feasible to describe the behavior of agents operating in a financial market and taking decisions about buying or selling shares in the presence of information asymmetry. And it would be possible to explain the known stylized facts of financial markets, such as, for example, the fat tails of return distributions and volatility clustering, as well as transient bubbles and crashes, which are connected with herding behavior. Some first results in that direction are reported in our previous papers (Yukalov and Sornette, 2014a, 2015c), where the role of additional information received by decision makers is analyzed and it is shown that the amount of additional information essentially influences the value of the quantum term. Further work on the generalization of the approach towards a dynamical theory of decision-maker societies is in progress.
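As a small sketch of this parametrized form, one can check that a candidate attraction factor stays within the bound implied by condition (9); the exponent values µ = ν = 1, the unit prefactor, and the test points are illustrative assumptions, not values from the text.

```python
def attraction_bounds(f):
    """Bounds -f <= q <= 1 - f implied by 0 <= p = f + q <= 1."""
    return -f, 1.0 - f

def attraction_model(f, mu=1.0, nu=1.0, amplitude=1.0, sign=1):
    """Candidate form |q| ~ f^mu (1 - f)^nu; all parameter values illustrative."""
    return sign * amplitude * f**mu * (1.0 - f)**nu

for f in (0.1, 0.5, 0.9):
    lo, hi = attraction_bounds(f)
    for sign in (-1, 1):
        q = attraction_model(f, sign=sign)
        assert lo <= q <= hi     # the modelled q respects condition (9)
```

The factor f(1 − f) vanishes at f = 0 and f = 1, so the modelled attraction automatically dies out when the rational evaluation already gives a certain choice.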