Article

Quantum Probabilities as Behavioral Probabilities

by Vyacheslav I. Yukalov 1,* and Didier Sornette 1,2
1 Department of Management, Technology and Economics, ETH Zürich, Swiss Federal Institute of Technology, CH 8032 Zürich, Switzerland
2 Finance Institute, c/o University of Geneva, 40 blvd. Du Pont d’Arve, CH 1211 Geneva, Switzerland
* Author to whom correspondence should be addressed.
Entropy 2017, 19(3), 112; https://doi.org/10.3390/e19030112
Submission received: 20 December 2016 / Revised: 27 February 2017 / Accepted: 7 March 2017 / Published: 13 March 2017
(This article belongs to the Special Issue Foundations of Quantum Mechanics)

Abstract:
We demonstrate that behavioral probabilities of human decision makers share many common features with quantum probabilities. This does not imply that humans are some quantum objects, but just shows that the mathematics of quantum theory is applicable to the description of human decision making. The applicability of quantum rules for describing decision making is connected with the nontrivial process of making decisions in the case of composite prospects under uncertainty. Such a process involves deliberations of a decision maker when making a choice. In addition to the evaluation of the utilities of considered prospects, real decision makers also appreciate their respective attractiveness. Therefore, human choice is not based solely on the utility of prospects, but includes the necessity of resolving the utility-attraction duality. In order to justify that human consciousness really functions similarly to the rules of quantum theory, we develop an approach defining human behavioral probabilities as the probabilities determined by quantum rules. We show that quantum behavioral probabilities of humans do not merely explain qualitatively how human decisions are made, but they predict quantitative values of the behavioral probabilities. Analyzing a large set of empirical data, we find good quantitative agreement between theoretical predictions and observed experimental data.

1. Goals

Quantum theory is one of the most fascinating and successful constructs in the intellectual history of mankind. It applies to light and matter, from the smallest scales so far explored, up to mesoscopic scales. It is also a necessary ingredient for understanding the evolution of the universe. It has given rise to an impressive number of new technologies. Additionally, in recent years, it has been applied to several branches of science, which have previously been analyzed by classical means, such as quantum information processing and quantum computing [1,2,3] and quantum games [4,5,6,7].
Our goal here is to demonstrate that the applicability of quantum theory can be extended in one more direction, to the theory of decision making, thus generalizing classical utility theory. This generalization allows us to characterize the behavior of real decision makers, who, when making decisions, evaluate not only the utility of the given prospects, but also are influenced by their respective attractiveness. We show that behavioral probabilities of human decision makers can be modeled by quantum probabilities.
We are perfectly aware that the philosophical interpretation of quantum theory still contains a number of unsettled problems. These have triggered active discussions, starting with Einstein’s objection to considering quantum theory as complete, reported in detail in the famous Einstein–Bohr debate [8,9]. The discussions on the completeness of quantum theory stimulated approaches assuming the existence of hidden variables satisfying classical probabilistic laws. The best known of such hidden-variable theories is the De Broglie–Bohm pilot-wave approach [10]. It can be shown that a quantum system can be described as if it were a classical system, by introducing so-called nonlocal contextual hidden variables. However, their number has to be infinite in order to capture the same level of elaboration as their quantum equivalent [11], which makes an equivalent classical description impractical. Instead, quantum techniques are employed, which is much simpler than dealing with a classical system having an infinite number of unknown, hidden and nonlocal variables.
In the present paper, our aim is not to address the open issues associated with the not yet fully-resolved interpretation of quantum theory. Instead, our focus is more at the technical level, to demonstrate that the mathematics of quantum theory can be applied for describing human decision making.
The intimate connection of quantum laws with human decision making was suggested by Deutsch [12] who attempted to derive the quantum Born rules from the notion of rational preferences of standard classical decision theory. This attempt, however, was criticized by several authors, as summarized by Lewis [13,14].
Here, we consider the inverse problem to that of Deutsch. We do not try to derive quantum rules from classical decision theory, which, we think, is impossible, but we demonstrate that human decision making can be described by laws resembling those of quantum theory that use the Hilbert space formalism.
Our approach is a generalization of quantum decision theory, advanced earlier by the authors [15,16,17,18,19,20,21,22,23,24] for lotteries consisting only of gains, to make it applicable to both types of lotteries, with gains as well as with losses. The extension of the approach to lotteries with losses is necessary for developing the associated methods of decision theory taking into account behavioral biases. A behavioral quantum probability is found to be the sum of two terms: the utility factor, quantifying the objective utility of a lottery, and the attraction factor, describing subjective behavioral biases. The utility factor can be found from the minimization of an information functional, resulting in different forms for lotteries with prevailing gains or with prevailing losses. To quantify the behavioral deviation from rationality, we introduce a measure q̄ describing the aggregate deviation from rationality of a population of decision makers for a given set of lotteries. This measure is shown to depend on the level of difficulty in comparing the game lotteries, for which we propose a simple metric. For a given set of choices, the determination of the fraction ν of difficult choices allows us to predict the dependence of q̄ on ν, thus generalizing the zero-information prior value q̄ = 0.25 that we have derived in previous articles.
An important part of our approach is the use of the principle of minimal information, which is equivalent to the conditional maximization of entropy under additional constraints imposed on the process of decision making. This method is justified by the Cox results proving that the Shannon entropy is the natural information measure for probabilistic theories [25,26,27,28].
By analyzing a large set of empirical data, we show that human decision making is well quantified by the quantum approach to calculate behavioral probabilities, even in those cases where classical decision theory fails in principle.

2. Related Literature

The predominant theory, describing decision maker behavior under risk and uncertainty, is nowadays the expected utility theory of preferences over uncertain prospects. This theory was axiomatized by von Neumann and Morgenstern [29] and integrated with the theory of subjective probability by Savage [30]. The theory was shown to possess great analytical power by Arrow [31] and Pratt [32] in their work on risk aversion and by Rothschild and Stiglitz [33,34] in their work on comparative risk. Friedman and Savage [35] and Markowitz [36] demonstrated its tremendous flexibility in representing decision makers’ attitudes toward risk. Expected utility theory has provided solid foundations to the theory of games, the theory of investment and capital markets, the theory of search and other branches of economics, finance and management [37,38,39,40,41,42,43,44,45,46].
However, a number of economists and psychologists have uncovered a growing body of evidence that individuals do not always conform to prescriptions of expected utility theory and indeed very often depart from the theory in a predictable and systematic way [47]. Many researchers, starting with the works by Allais [48], Edwards [49,50] and Ellsberg [51], and continuing through the present, have experimentally confirmed pronounced and systematic deviations from the predictions of expected utility theory, leading to the appearance of many paradoxes. A large literature on this topic can be found in the recent reviews by Camerer et al. [52] and Machina [53].
Because of the large number of paradoxes associated with classical decision making, there have been many attempts to modify the expected utility approach, which have been classified as non-expected utility theories. There are a number of such non-expected utility theories, among which we may mention a few of the best known ones: prospect theory [49,54], weighted-utility theory [55,56,57], regret theory [58], optimism-pessimism theory [59], dual-utility theory [60], ordinal-independence theory [61] and quadratic-probability theory [62]. More detailed information can be found in the articles [53,63,64].
However, as has been shown by Safra and Segal [65], none of the non-expected utility theories can explain all of those paradoxes. The best that could be achieved is a kind of fitting that interprets just one or, in the best case, a few paradoxes, while the other paradoxes remain unexplained. In addition, spoiling the structure of expected utility theory results in the appearance of several complications and inconsistencies. As has been concluded in the detailed analysis of Al-Najjar and Weinstein [66,67], any variation of the classical expected utility theory “ends up creating more paradoxes and inconsistencies than it resolves”.
The idea that the functioning of the human brain could be described by the techniques of quantum theory has been advanced by Bohr [8,68], one of the founders of quantum theory. von Neumann, who is both a founding father of game theory and of expected utility theory, on the one hand, and the developer of the mathematical theory of quantum mechanics, on the other hand, mentioned that the quantum theory of measurement can be interpreted as decision theory [69].
The main difference between the classical and quantum techniques is the way of calculating the probability of events. As soon as one accepts the quantum way of defining the concept of probability, the latter, generally, becomes nonadditive. Additionally, one immediately meets such quantum effects as coherence and interference.
The possibility of employing the techniques of quantum theory in several branches of science, which have previously been analyzed by classical means, is nowadays widely known. As examples, we can recall quantum game theory [4,5,6,7], quantum information processing and quantum computing [1,2,3].
After the works by Bohr [8,68] and von Neumann [69], there have been a number of discussions on the possibility of applying quantum rules for characterizing the process of human decision making. These discussions have been summarized in the books [70,71,72,73,74] and in the review articles [75,76,77], where numerous citations to the previous literature can be found. However, this literature suffers from a fragmentation of approaches that lack general quantitative predictive power.
An original approach has been developed by the present authors [15,16,17,18,19,20,21,22,23,24], where we have followed the ideas of Bohr and von Neumann, treating decision theory as a theory of quantum measurements, generalizing these ideas for being applicable to human decision makers.
Although we have named our approach Quantum Decision Theory (QDT), it is worth stressing that the use of quantum techniques does not require that the brain or consciousness have anything to do with genuinely quantum systems. The techniques of quantum theory are used solely as a convenient and efficient mathematical tool and language to capture the complicated properties associated with decision making. The necessity of generalizing classical utility theory has been mentioned in connection with taking into account the notion of bounded rationality [78] and confirmed by numerous studies in behavioral economics, behavioral finance, attention economy and neuroeconomics [79,80,81].

3. Main Results

In our previous papers [15,16,17,18,19,20,21,22,23,24], we have developed a rigorous mathematical approach for defining behavioral quantum probabilities. The approach is general and can be applied to human decision makers, as well as to the interpretation of quantum measurements [19,24]. However, when applying the approach to human decision makers, we have limited ourselves to lotteries with gains.
In the present paper, we extend the applicability of QDT to the case of lotteries with losses. Such an extension is necessary for developing the associated general methods taking into account behavioral biases in decision theory and risk management [82]. Risk assessment and classification are parts of decision theory that are known to be strongly influenced by behavioral biases. We generalize the approach to be applicable for both types of lotteries, with gains, as well as with losses.
It is necessary to stress that quantum theory is intrinsically probabilistic. The probabilistic theory of decision making has to include, as a particular case, the standard utility theory. However, the intrinsically probabilistic nature of quantum theory makes our approach principally different from the known classical stochastic utility theories, which are based on the assumption of the existence of a value functional composed of value functions and weighting functions with random parameters [83,84]. In classical probabilistic theories, one often assumes that the decision maker's choice is actually deterministic, based on the rules of utility theory or some of its variants, but this choice is accompanied by random noise and decision errors, which makes the overall process stochastic. In this picture, one assumes that a deterministic decision theory is embedded into a stochastic environment [85,86,87].
However, there are in the mathematical psychology literature a number of approaches, such as the random preference models or mixture models, which also assume intrinsically random preferences [88,89,90,91,92,93]. These models emphasize that “stochastic specification should not be considered as an ‘optional add-on,’ but rather as integral part of every theory which seeks to make predictions about decision making under risk and uncertainty” [58]. Using different probabilistic specifications has been shown to lead to possibly opposite predictions, even when starting from the same core (deterministic) theory [58,85,94,95,96,97]. This stresses the important role of the probabilistic specification together with the more standard core component.
In this spirit, our approach also assumes that the probability is not a notion that arises due to some external noise or random errors, but it is the basic characteristic of decision making. This is because quantum theory does not simply decorate classical theory with stochastic elements, but is probabilistic in principle. This is in agreement with the understanding that the genuine ontological indeterminacy, pertaining to quantum systems, is not due to errors or noise.
The quantum probability in QDT is defined according to the general rules of quantum theory, which naturally results in the probability being composed of two terms that are called the utility factor and the attraction factor. The utility factor describes the utility of a prospect, being defined according to rational evaluations of a decision maker. Such factors are introduced as minimizers of an information functional, producing the best factors under the minimal available information. In the process of the minimization, there appears a Lagrange multiplier playing the role of a belief parameter characterizing the level of belief of a decision maker in the given set of lotteries. The belief parameter is zero under decision maker disbelief.
The additional quantum term is called the attraction factor. It shows how a decision maker appreciates the attractiveness of a lottery according to his/her subconscious feelings and biases. That is, the attraction factor quantifies the deviation from rationality. In the present paper, we introduce a measure defined as the average modulus of the attraction factors over the given set of lotteries. On the one hand, this measure characterizes the level of difficulty, as perceived by decision makers, of choosing between the lotteries of the given set of games. On the other hand, the measure reflects the influence of irrationality on the choices of decision makers. These two sides are intimately interrelated, since the irrationality in decision making results from the uncertainty of the game, when a decision maker chooses between two lotteries whose mutual advantages and disadvantages are not clear [23]. To summarize, the introduced measure is, on the one hand, a measure of the difficulty of evaluating the given lotteries and, on the other hand, an irrationality measure.
To illustrate the approach, we analyze two sets of games, one set of games with very close lotteries and another large set of empirical data of diverse lotteries. We calculate the introduced measure and show that the predicted values of this measure are in good agreement with experimental data.
Briefly, the main novel results of the present paper are the following:
(i)
We demonstrate that the laws of quantum theory can be applied for modeling human decision making, so that quantum probabilities can characterize behavioral probabilities of human choice in decision making.
(ii)
Generalization of the approach for defining quantum behavioral probabilities to arbitrary lotteries, with gains, as well as with losses.
(iii)
Introduction of an irrationality measure, quantifying the strength of the deviation from rationality in decision making, under a given set of games involving different lotteries.
(iv)
Analysis of a large collection of empirical data demonstrating good agreement between the predicted values of the irrationality measure and experimentally observed data.
(v)
Demonstration that predicted quantum probabilities are in good agreement with empirically-observed behavioral probabilities, which suggests that human decision making can be described by rules similar to those of quantum theory.

4. Expected Utility

As will be illustrated in the following sections, expected utility theory is a particular case of QDT. Therefore, it is useful to very briefly recall the notion of expected utility. Below, we introduce the related notations to be used in the following sections.
Let us consider several sets of payoffs, or outcomes,
X_n = \{ x_{in} : \; i = 1, 2, \ldots, M_n \} \qquad ( n = 1, 2, \ldots, N ) ,
where the index i enumerates payoffs and n enumerates the payoff sets. The sets can be different or identical. In the latter case, they are copies of each other. A payoff x_{in} is characterized by its probabilistic weight p_n(x_{in}). The probability distribution over a payoff set composes a lottery:
L_n = \{ x_{in} , \, p_n(x_{in}) : \; i = 1, 2, \ldots, M_n \} ,
with x_{in} \in X_n, under the condition:
\sum_i p_n(x_{in}) = 1 , \qquad 0 \leq p_n(x_{in}) \leq 1 .
The total number of lotteries equals the number of payoff sets N, assuming a one-to-one correspondence between a payoff x_{in} and its probability p_n(x_{in}), both being associated with a given state of the world.
The expected utility of a lottery is:
U(L_n) = \sum_i u(x_{in}) \, p_n(x_{in}) ,
where u(x) is a utility function.
In classical utility theory, one compares the expected utilities for the set of lotteries and selects with probability one the lottery with the largest expected utility. In each lottery, its payoffs can be either positive, corresponding to gains, or negative, corresponding to losses. The expected utility, which is a linear combination (4), can be either positive or negative, depending on which terms, positive or negative, prevail in the sum.
Note that a positive expected utility does not imply that all of its payoffs are gains. Respectively, a negative expected utility can be composed of both gains and losses. Therefore, the signs of payoffs can be positive or negative in each lottery. What matters for the following is only the resulting sign of the total sum (4), that is of the expected utility. We show that a set of lotteries with negative expected utilities has to be treated differently from a set of lotteries with positive expected utilities. In both of these cases, behavioral biases are important.
A negative expected utility defines the expected cost:
C(L_n) \equiv - U(L_n) > 0 .
Among the lotteries with negative expected utilities, the most preferable is the one with the minimal cost.
Decision making is concerned with the choice of a lottery from a given set {L_n} of several lotteries. Generally, the lotteries of the given set can have expected utilities of different signs. In the present paper, we consider the case where the lotteries of a set all have the same sign. That is, although each separate lottery can contain payoffs of both signs, representing gains as well as losses, we consider the situation when the expected utilities (4) of the lotteries within a given set are all of the same sign, either plus or minus.
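To fix the notation in a concrete form, the following minimal Python sketch (our own illustration; the example lotteries and the linear utility u(x) = x are assumptions, not data taken from the paper) evaluates the expected utility of a lottery given as payoff–probability pairs and applies the classical deterministic selection rule.

```python
# Illustrative sketch of the classical expected-utility rule of this section.
# The example lotteries and the linear utility u(x) = x are assumptions made
# for demonstration; they are not data from the paper.

def expected_utility(lottery, u=lambda x: x):
    """Expected utility U(L) = sum_i u(x_i) * p(x_i) of a lottery
    given as a list of (payoff, probability) pairs."""
    assert abs(sum(p for _, p in lottery) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(u(x) * p for x, p in lottery)

def classical_choice(lotteries, u=lambda x: x):
    """Classical utility theory: select, with probability one, the lottery
    with the largest expected utility."""
    utilities = [expected_utility(L, u) for L in lotteries]
    return max(range(len(lotteries)), key=lambda n: utilities[n]), utilities

if __name__ == "__main__":
    L1 = [(4.0, 0.8), (0.0, 0.2)]   # hypothetical lottery: gain 4 with probability 0.8
    L2 = [(3.0, 1.0)]               # hypothetical lottery: certain gain 3
    n_best, U = classical_choice([L1, L2])
    print("expected utilities:", U)             # [3.2, 3.0]
    print("classical choice: L%d" % (n_best + 1))
```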

5. Quantum Decision Theory

We identify behavioral probabilities with the probabilities defined by quantum laws, as is done in the theory of quantum measurements [15,19,24]. Then, it is straightforward to calculate such probabilities. Here, we shall not plunge into the details of these calculations, which have been described in full mathematical detail in our previous papers, but will list only the main notions and properties of the defined behavioral probabilities. All related mathematics have been thoroughly expounded in our previous papers [15,16,17,18,19,20,21,22,23,24].
Although these mathematics are rather involved, the final results can be formulated in a simple form sufficient for practical usage. Therefore, the reader actually does not need to know the calculation details, provided the final properties are clearly stated, as we do below.
Let the letter A_n denote the action of choosing a lottery L_n, with n = 1, 2, \ldots, N. Strictly speaking, any such choice is accompanied by uncertainty that has two sides, objective and subjective. Objectively, when choosing a lottery, the decision maker does not know what particular payoff he/she would get. Subjectively, the decision maker may be unsure whether the setup is correctly understood, whether there are hidden traps in the problem and whether he/she is able to make an optimal decision. Let such a set of uncertain items be denoted by the letter B = \{ B_\alpha : \; \alpha = 1, 2, \ldots \}.
The operationally testable event A_n is represented by a vector | n \rangle pertaining to a Hilbert space:
\mathcal{H}_A = {\rm span}_n \{ \, | n \rangle \, \} .
The uncertain event B is represented by a vector:
| B \rangle = \sum_\alpha b_\alpha \, | \alpha \rangle
in the Hilbert space:
\mathcal{H}_B = {\rm span}_\alpha \{ \, | \alpha \rangle \, \} ,
with the coefficients b_\alpha being random. The sets \{ | n \rangle \} and \{ | \alpha \rangle \} form orthonormal bases in their related Hilbert spaces.
Thus, choosing a lottery is a composite event, consisting of a final choice A_n as such, which is accompanied by deliberations involving the set of uncertain events B. The choice of a lottery, under uncertainty, defines a composite event called the prospect:
\pi_n = A_n \otimes B ,
which is represented by a state:
| \pi_n \rangle = | n \rangle \otimes | B \rangle
in the Hilbert space:
\mathcal{H} = \mathcal{H}_A \otimes \mathcal{H}_B = {\rm span}_{n \alpha} \{ \, | n \alpha \rangle \, \} .
The prospect operators:
\hat{P}(\pi_n) = | \pi_n \rangle \langle \pi_n |
play the role of observables in the theory of quantum measurements. The decision maker's strategic state is defined by an operator \hat{\rho} that is analogous to a statistical operator in quantum theory.
Each prospect \pi_n is characterized by its quantum behavioral probability:
p(\pi_n) = {\rm Tr} \, \hat{\rho} \, \hat{P}(\pi_n) ,
where the trace is taken over the basis of the space \mathcal{H} formed by the vectors | n \alpha \rangle = | n \rangle \otimes | \alpha \rangle. The family of these probabilities composes a probability measure, such that:
\sum_{n=1}^{N} p(\pi_n) = 1 , \qquad 0 \leq p(\pi_n) \leq 1 .
Calculating the prospect probability in the basis of the space \mathcal{H}, it is straightforward to separate the diagonal, non-negative terms from the off-diagonal, sign-indefinite terms, which results in the expression:
p(\pi_n) = f(\pi_n) + q(\pi_n) ,
consisting of two terms. The first, non-negative term,
f(\pi_n) = \sum_\alpha | b_\alpha |^2 \, \langle n \alpha | \hat{\rho} | n \alpha \rangle ,
corresponds to a classical probability describing the objective utility of the lottery L_n, because of which it is called the utility factor, satisfying the condition:
\sum_{n=1}^{N} f(\pi_n) = 1 , \qquad 0 \leq f(\pi_n) \leq 1 .
The explicit form of the utility factors depends on whether the expected utilities are positive or negative and will be presented in the following sections.
The second, off-diagonal term,
q(\pi_n) = \sum_{\alpha \neq \beta} b_\alpha^* b_\beta \, \langle n \alpha | \hat{\rho} | n \beta \rangle ,
represents subjective attitudes of the decision maker towards the prospect \pi_n, thus being called the attraction factor. This factor encapsulates behavioral biases of the decision maker and is nonzero when the prospect (6) is entangled, that is, when the decision maker deliberates, being uncertain about the correct choice. The prospect \pi_n is called entangled when its prospect operator \hat{P}(\pi_n) cannot be represented as a separable operator in the corresponding composite Hilbert–Schmidt space. The accurate mathematical formulation of entangled operators requires rather long explanations and can be found, with all details, in [19,24].
The attraction factor reflects the subjective attitude of the decision maker, caused by his/her behavioral biases, subconscious feelings, emotions, and so on. Being subjective, the attraction factors are different for different decision makers and even for the same decision maker at different times. Subjective feelings are known to essentially depend on the emotional state of decision makers, their affect, framing, available information and the like [98,99,100,101,102]. It would thus seem that such a subjective and contextual quantity is difficult to characterize. Nevertheless, in agreement with its structure and employing the above normalization conditions, it is easy to show that the attraction factor enjoys the following general features. It ranges in the interval:
- 1 \leq q(\pi_n) \leq 1
and satisfies the alternation law:
\sum_{n=1}^{N} q(\pi_n) = 0 .
The absence of deliberations, hence lack of uncertainty, is, in quantum parlance, equivalent to decoherence, when:
p(\pi_n) \rightarrow f(\pi_n) , \qquad q(\pi_n) \rightarrow 0 .
This is also called the quantum-classical correspondence principle, according to which the choice between the lotteries reduces to the evaluation of their objective utilities, which occurs when uncertainty is absent, leading to vanishing attraction factors. In this case, decision making is reduced to its classical probabilistic formulation described by the utility factors f ( π n ) playing the role of classical probabilities.
We have also demonstrated [17,18,20,75] that the average values of the attraction factor moduli for a given set of lotteries can be determined on the basis of assumptions constraining the distribution of the attraction factor values. This suggests the necessity of studying this aggregate quantity more attentively. For this purpose, we introduce here the notation:
\bar{q} \equiv \frac{1}{N} \sum_{n=1}^{N} | q(\pi_n) | .
This quantity appears due to the quantum definition of probability, and it characterizes the level of deviation from rational decision making for the given set of N lotteries considered by decision makers. If decision making were based on completely rational grounds, this measure would be zero. Since the value q̄ describes the deviation from rationality in decision making, it can be named the irrationality measure. Additionally, since it results from the use of quantum rules, it is also a quantum correction measure.
It can be shown that, in the case of a non-informative prior, when the attraction factors of the considered lotteries are uniformly distributed in the interval [-1, 1], then q̄ = 1/4, which is termed the quarter law. This property also holds for some other distributions that are symmetric with respect to the inversion q → -q [17,18,20,75]. The value 1/4 describes the average level of irrationality. However, it is clear that this value does not need to be the same for all sets of lotteries. It is straightforward to compose a set of simple lotteries, having practically no uncertainty, so that decision choices could be governed by rational evaluations. For such lotteries, the attraction factors can be very small, since the behavioral probability p(π_n) practically coincides with the utility factor f(π_n). For instance, this happens for lotteries with low uncertainty, when one of the lotteries from the given set enjoys much higher gains, with higher probabilities, than the other lotteries. In such a case, the irrationality measure (13) can be rather small. On the contrary, it is possible to compose a set of highly uncertain lotteries for which the irrationality measure would be larger than 0.25. In this way, the irrationality measure (13) is a convenient characteristic allowing for a quantitative classification of the typical deviation from rationality in decision making.
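The following minimal sketch (with made-up numbers; the function names are ours) illustrates the decomposition p(π_n) = f(π_n) + q(π_n), the constraints listed above, and the irrationality measure defined as the average modulus of the attraction factors.

```python
# Minimal sketch (made-up numbers, not data from the paper) of the QDT
# decomposition p(pi_n) = f(pi_n) + q(pi_n) and of the irrationality
# measure defined as the average modulus of the attraction factors.

def check_probabilities(f, q, tol=1e-9):
    """Verify the QDT constraints: f sums to 1, q obeys the alternation
    law (sums to 0), each q lies in [-1, 1] and each p = f + q in [0, 1]."""
    p = [fi + qi for fi, qi in zip(f, q)]
    assert abs(sum(f) - 1.0) < tol
    assert abs(sum(q)) < tol                 # alternation law
    assert all(-1.0 <= qi <= 1.0 for qi in q)
    assert all(0.0 <= pi <= 1.0 for pi in p)
    return p

def irrationality_measure(q):
    """q_bar = (1/N) * sum_n |q(pi_n)| for one set of N prospects."""
    return sum(abs(qi) for qi in q) / len(q)

if __name__ == "__main__":
    f = [0.6, 0.4]        # hypothetical utility factors of two prospects
    q = [-0.25, 0.25]     # hypothetical attraction factors (sum to zero)
    p = check_probabilities(f, q)
    print("behavioral probabilities:", p)                             # [0.35, 0.65]
    print("irrationality measure q_bar:", irrationality_measure(q))   # 0.25
```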
The above consideration concerns a single decision maker. It is straightforward to extend the theory to a society of D decision makers choosing between N prospects, with the collective state being represented by a tensor product of partial statistical operators. Then, for the i-th decision maker, we have the prospect probability:
p_i(\pi_n) = f_i(\pi_n) + q_i(\pi_n) ,
where i = 1, 2, \ldots, D.
The typical behavioral probability, characterizing the decision-maker society, is the average:
p(\pi_n) \equiv \frac{1}{D} \sum_{i=1}^{D} p_i(\pi_n) .
The utility factor f_i(\pi_n) is an objective quantity for each decision maker. In general, this utility factor can be person-dependent, in order to reflect the specific skills, intelligence, knowledge, etc. of the decision maker, which shape his/her rational decision. Another reason to consider different f_i(\pi_n) is that risk aversion is, in general, different for different people. However, being averaged, as in (15), such an aggregate quantity describes the behavioral probability of a typical agent, which is a feature of the considered society on average.
Similarly, the typical attraction factor is:
q(\pi_n) \equiv \frac{1}{D} \sum_{i=1}^{D} q_i(\pi_n) ,
which, generally, depends on the number of questioned decision makers, q(\pi_n) = q(\pi_n, D). These typical values describe the society of D agents on average, thus defining a typical agent. For a society of D decision makers, the decision irrationality measure takes the aggregate form:
\bar{q} = \frac{1}{N} \sum_{n=1}^{N} \left| \, \frac{1}{D} \sum_{i=1}^{D} q_i(\pi_n) \, \right| .
In standard experiments, one usually questions a pool of D decision makers. If the number of decision makers choosing a prospect \pi_n is D(\pi_n), such that:
\sum_{n=1}^{N} D(\pi_n) = D ,
then the experimental frequentist probability is:
p_{\exp}(\pi_n) = \frac{D(\pi_n)}{D} .
This, using the notation f(\pi_n) for the aggregate utility factor, makes it possible to define the experimental attraction factor:
q_{\exp}(\pi_n) \equiv p_{\exp}(\pi_n) - f(\pi_n) ,
which depends on the number of decision makers D.
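The extraction of experimental attraction factors from choice frequencies can be sketched as follows (the counts and utility factors below are hypothetical, not data from any experiment discussed in the paper).

```python
# Sketch of how experimental attraction factors are obtained from choice
# frequencies. The counts below are hypothetical, not taken from any
# experiment reported in the paper.

def experimental_attraction(counts, f):
    """Given the numbers of subjects D(pi_n) choosing each prospect and the
    aggregate utility factors f(pi_n), return the frequentist probabilities
    p_exp(pi_n) = D(pi_n)/D and the attraction factors
    q_exp(pi_n) = p_exp(pi_n) - f(pi_n)."""
    D = sum(counts)
    p_exp = [c / D for c in counts]
    q_exp = [p - fn for p, fn in zip(p_exp, f)]
    return p_exp, q_exp

if __name__ == "__main__":
    counts = [18, 82]     # hypothetical: 18 of 100 subjects chose prospect 1
    f = [0.50, 0.50]      # hypothetical utility factors of the two lotteries
    p_exp, q_exp = experimental_attraction(counts, f)
    print("p_exp:", p_exp)    # [0.18, 0.82]
    print("q_exp:", q_exp)    # [-0.32, 0.32]
```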
More generally, in standard experimental tests, decision makers are asked to formulate not a single choice between N lotteries, but multiple choices involving different lottery sets. A single choice between a given set of lotteries is called a game, which is the operation:
\hat{G} : \; \{ L_n \} \rightarrow \{ L_n , \, p(\pi_n) \}
ascribing the probabilities p(\pi_n) to each lottery L_n. When a number of games are proposed to the decision maker, enumerated by the index k = 1, 2, \ldots, they are the operations:
\hat{G}_k : \; \{ L_n \}_k \rightarrow \{ L_n , \, p(\pi_n) \}_k .
Averaging over all games gives the aggregate irrationality, or quantum correction, measure:
\bar{q} \equiv \frac{1}{K} \sum_{k=1}^{K} \frac{1}{N} \sum_{n=1}^{N} \left| \, \frac{1}{D} \sum_{i=1}^{D} q_i(\pi_n) \, \right| ,
quantifying the level of irrationality associated with the whole set of these games.
As is clear from Equations (8), (14) and (20), the attraction factor defines the deviation of the behavioral probability from the classical rational value prescribed by the utility factor. We shall say that a prospect \pi_m is more useful than \pi_n if and only if f(\pi_m) > f(\pi_n). A prospect \pi_m is said to be more attractive than \pi_n if and only if q(\pi_m) > q(\pi_n). Additionally, a prospect \pi_m is preferable to \pi_n if and only if p(\pi_m) > p(\pi_n). Therefore, a prospect can be more useful but less attractive, as a result being less preferable. This is why the behavioral probability, combining both the objective and subjective features, provides a more correct and full description of decision making.
It is important to emphasize that, in our approach, the form of the probability (8) is not an assumption, but directly follows from the definition of the quantum probability. This is principally different from the suggestions of some authors to add to expected utility an additional phenomenological term corresponding either to information entropy [103,104] or to the account of social interactions [105]. In our case, we do not spoil expected utility, but we work with a probability whose form is prescribed by quantum theory.
Our approach is principally different from the various models of stochastic decision making, where one assumes a particular form of a utility functional or a value functional, whose parameters are treated as random and fitted a posteriori for a given set of lotteries. Such stochastic models are only descriptive and do not enjoy predictive power. In the following sections, we show that our method provides an essentially more accurate description of decision making.
It is worth mentioning the Luce choice axiom [106,107,108,109], which states the following. Consider a set of objects enumerated by an index n, each labeled by a scaling quantity s_n. Then, the probability of choosing the n-th object can be written as s_n / \sum_n s_n. In the case of decision making, one can associate the objects with lotteries and their scaling characteristics with expected utilities. Then, the Luce axiom gives a way of estimating the probability of choosing among the lotteries. Below, we show that the Luce axiom is a particular case of the more general principle of minimal information. Moreover, it allows for the estimation of only the rational part of the behavioral probability, related to the lottery utilities. In our approach, however, there also exists another part of the behavioral probability, represented by the attraction factor. This makes QDT principally different and results in an essentially more accurate description of decision making.
Furthermore, it is important to distinguish our approach from a variety of the so-called non-expected utility theories, such as weighted-utility theory [55,56,57], regret theory [58], optimism-pessimism theory [59], dual-utility theory [60], ordinal-independence theory [61], quadratic-probability theory [62] and prospect theory [49,54,110]. These non-expected utility theories are based on an ad hoc replacement of expected utility by a phenomenological functional, whose parameters are fitted afterwards from empirical data. See more details in the review articles [53,63,64]. Therefore, all such theories are descriptive, but not predictive. After a posterior fitting, practically any such theory can be made to correspond to experimental data, so that it is difficult to distinguish between them [64]. Contrary to most of these, as is stressed above, we first of all do not deal with only utility, but are concerned with probability. More importantly, we do not assume phenomenological forms, but derive all properties of the probability from a self-consistent formulation of quantum theory. In particular, the properties of the attraction factor, described above, and the explicit form of the utility factor, to be derived below, give us a unique opportunity for quantitative predictions.
Similarly to quantum theory, where one can accomplish different experiments for different systems, in decision theory, one can arrange different sets of lotteries for different pools of decision makers. The results of such empirical data can be compared with the results of calculations in QDT.

6. Difficult or Easy Choice

In the process of decision making, subjects try to evaluate the utility of the given lotteries. Such an evaluation can be easy when the lotteries are noticeably different from each other; alternatively, the choice is difficult when the lotteries are rather similar.
The difference between lotteries can be quantified as follows. Suppose we compare two lotteries, whose utility factors are f(\pi_1) and f(\pi_2). Let us introduce the relative lottery difference as:
\Delta(\pi_1 , \pi_2) \equiv \frac{ 2 \, | f(\pi_1) - f(\pi_2) | }{ f(\pi_1) + f(\pi_2) } \times 100 \% .
When there are only two lotteries in a game, because of the normalization (9), we then have:
\Delta(\pi_1 , \pi_2) \equiv 2 \, | f(\pi_1) - f(\pi_2) | \times 100 \% .
As is evident, the difficulty in choosing between the two lotteries is a decreasing function of the utility difference (22). The problem of discriminating between two similar objects or stimuli has been studied in psychology and psychophysics, where the critical threshold quantifying how much difference between two alternatives is sufficient to decide that they are really different is termed the discrimination threshold, just-noticeable difference or difference threshold [111]. In applications of decision theory to economics, one sometimes selects a threshold difference of 1%, because “it is worth spending one percent of the value of a decision analyzing the decision” [112]. This implies that spending 1% of the value to improve the decision does not significantly change the value of the chosen lottery.
More rigorously, the threshold difference, below which the lotteries can be treated as almost equivalent, can be justified in the following way. In psychology and operations research, to quantify the similarity, or closeness, of two alternatives f_1 and f_2 with close utilities or close probabilities, one introduces [113,114,115,116] a measure of distance between alternatives of the form | f_1 - f_2 |^m with m > 0. In applications, one employs different values of the exponent m, obtaining the linear distance for m = 1, the quadratic distance for m = 2, and so on. In order to remove the arbitrariness in setting the exponent m, it is reasonable to require that the difference threshold be invariant with respect to the choice of m, so that:
| \Delta(\pi_1 , \pi_2) |^{m_1} = | \Delta(\pi_1 , \pi_2) |^{m_2} ,
for any positive m_1 and m_2. Counting the threshold in percentage units, the sole nontrivial solution to the above equation is | \Delta(\pi_1 , \pi_2) | = 1 percent. This implies that the difference threshold, capturing the psychological margin of significance, has to be equal to 1%. Then, one says that the choice is difficult when:
\Delta(\pi_1 , \pi_2) < 1 \% .
Otherwise, when the lottery utility factors differ more substantially, the choice is said to be easy.
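A small sketch of this classification rule (the helper names are ours) computes the relative difference of the utility factors and compares it with the 1% threshold.

```python
# Sketch of the difficulty classification of this section: the relative
# difference between the utility factors of two lotteries is compared with
# the 1% threshold. The utility factors below are assumed for illustration.

def relative_difference(f1, f2):
    """Delta(pi_1, pi_2) = 2 |f1 - f2| / (f1 + f2) * 100%."""
    return 200.0 * abs(f1 - f2) / (f1 + f2)

def is_difficult(f1, f2, threshold=1.0):
    """A choice is classified as difficult when Delta < 1%."""
    return relative_difference(f1, f2) < threshold

if __name__ == "__main__":
    print(relative_difference(0.501, 0.499), is_difficult(0.501, 0.499))  # 0.4 True
    print(relative_difference(0.60, 0.40), is_difficult(0.60, 0.40))      # 40.0 False
```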
The value of the aggregate attraction factor depends on whether the choice between lotteries is difficult or easy. The attraction factors for different decision makers are certainly different. However, they are not absolutely chaotic, so that, being averaged over many decision makers and several games, the average modulus of the attraction factor can represent a sensible estimation of the irrationality measure q̄ defined in the previous section.
An important question is whether it is possible to predict the irrationality measure for a given set of games. Such a prediction, if possible, would provide very valuable information on what decisions a society could make. The evaluation of the irrationality measure can be done in the following way. Suppose that φ(q) is the probability distribution of attraction factors for a society of decision makers. From the admissible domain (10) of attraction-factor values, the distribution satisfies the normalization condition:
\int_{-1}^{1} \varphi(q) \, dq = 1 .
Experience suggests that, except in the presence of certain gains and losses, there are practically no absolutely certain games that would involve no hesitations and no subconscious feelings. In mathematical terms, this can be formulated as follows. In the manifold of all possible games, absolutely rational games compose a set of zero measure:
\varphi(q) \rightarrow 0 \qquad ( q \rightarrow 0 ) .
On the other side, there are almost no completely irrational decisions, containing absolutely no utility evaluations. That is, on the manifold of all possible games, absolutely irrational games make a set of zero measure:
\varphi(q) \rightarrow 0 \qquad ( | q | \rightarrow 1 ) .
The last condition is also necessary for the probability p(\pi_n) = f(\pi_n) + q(\pi_n) to be in the range [0, 1].
Consider a decision maker to whom a set of games is presented, each of which can be classified as difficult or easy according to condition (23). Additionally, let the fraction of difficult choices be ν. Then, a simple probability distribution that satisfies all of the above conditions is the Bernoulli distribution:
\varphi(q) = \frac{1}{\Gamma(1 + \nu) \, \Gamma(2 - \nu)} \; | q |^{\nu} \, ( 1 - | q | )^{1 - \nu} .
The Bernoulli distribution is a particular case of the beta distribution employed as a prior distribution under Conditions (25) and (26) in standard inference tasks [117,118,119].
The expected irrationality measure then reads:
\bar{q} = \int_0^1 q \, \varphi(q) \, dq ,
which, with Expression (27), yields:
\bar{q} = \frac{1}{6} \, ( 1 + \nu ) .
In this way, for any set of games, we can a priori predict the irrationality measure by Formula (29) and compare this prediction with the corresponding quantity (20) that can be defined from a posteriori experimental data.
For example, when there are no difficult choices, hence ν = 0, we have q̄ = 1/6. On the contrary, when all games involve difficult choices and ν = 1, then q̄ = 1/3. In the case when half of the games involve difficult choices, so that ν = 1/2, then q̄ = 1/4. This case reproduces the result of the non-informative prior. It is reasonable to argue that, if we knew nothing about the level of the games' difficulty, we could assume that half of them are difficult and half are easy.
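As a numerical cross-check (our own sketch, not part of the paper's derivation), a simple midpoint quadrature of the integral defining q̄ with the distribution given above reproduces the closed form q̄ = (1 + ν)/6.

```python
# Numerical check (a sketch, not part of the paper's derivation) that the
# distribution phi(q) of this section reproduces the closed form
# q_bar = (1 + nu)/6, using a simple midpoint quadrature on [0, 1].

from math import gamma

def phi(q, nu):
    """phi(q) = |q|^nu (1 - |q|)^(1 - nu) / (Gamma(1+nu) Gamma(2-nu)) on [-1, 1]."""
    return abs(q) ** nu * (1.0 - abs(q)) ** (1.0 - nu) / (gamma(1.0 + nu) * gamma(2.0 - nu))

def predicted_q_bar(nu, steps=100000):
    """q_bar = integral_0^1 q phi(q) dq, evaluated by the midpoint rule."""
    h = 1.0 / steps
    return sum((k + 0.5) * h * phi((k + 0.5) * h, nu) for k in range(steps)) * h

if __name__ == "__main__":
    for nu in (0.0, 0.5, 1.0):
        print(nu, round(predicted_q_bar(nu), 4), round((1.0 + nu) / 6.0, 4))
    # nu = 0 -> 1/6,  nu = 1/2 -> 1/4 (quarter law),  nu = 1 -> 1/3
```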

7. Positive Expected Utilities

To be precise, it is necessary to prescribe a general method for calculating the utility factors. When all payoffs in sets (1) are gains, then all expected utilities (4) are positive. This is the case we have treated in our previous papers [16,17,18,21,75]. However, if among the payoffs there are gains, as well as losses, then the signs of expected utilities can be positive, as well as negative. Here, we shall consider two classes of lotteries including both gains and losses, a first class of lotteries with positive expected utilities and a second class with negative expected utilities.
In the present section, we consider the lotteries with semi-positive utilities, such that:
U(L_n) \geq 0 .
Recall that such a lottery does not need to be composed solely of gains, but it can include both gains and losses, in such a way that the expected utility (4) is semi-positive.
The utility factor, by definition, quantifies the objective utility of a lottery; in other words, it is supposed to be a function of the lottery expected utility. The explicit form of this function can be found from the conditional minimization of the Kullback–Leibler [120,121] information:
I_{KL}[ f(\pi_n) ] = \sum_{n=1}^{N} f(\pi_n) \, \ln \frac{ f(\pi_n) }{ f_0(\pi_n) } ,
in which f_0(\pi_n) is a trial likelihood function [16].
The use of the Kullback–Leibler information for deriving the classical utility distribution is justified by the Shore–Johnson theorem [122]. This theorem proves that there exists only one distribution satisfying consistency conditions, and this distribution is uniquely defined by the minimum of the Kullback–Leibler information, under given constraints. This method has been successfully employed in a remarkable variety of fields, including physics, statistics, reliability estimations, traffic networks, queuing theory, computer modeling, system simulation, optimization of production lines, organizing memory patterns, system modularity, group behavior, stock market analysis, problem solving and decision theory. Numerous references related to these applications can be found in the literature [122,123,124,125,126].
It is also worth recalling that the Kullback–Leibler information is actually a slightly modified Shannon entropy. Additionally, this entropy is known to be the natural information measure for probabilistic theories [25,26,27,28].
The total information functional is prescribed to take into account those additional constraints that uniquely define a representative statistical ensemble [124,127,128]. First of all, such a constraint is the normalization condition (9). Then, since the utility factor plays the role of a classical probability, the average quantity should be defined:
\sum_{n=1}^{N} f(\pi_n) \, U(L_n) = U .
This quantity can be either finite or infinite. The latter case would mean that there could exist infinite (or very large) utilities, which, in real life, could be interpreted as a kind of “miracle”, leading to large “surprise” [82]. Therefore, the assumption that U can be infinite (or extremely large) can be interpreted as equivalent to the belief in the absence of constraints, in other words, to the assumption of strong uncertainty.
In this way, the information functional is written as:
I[ f(\pi_n) ] = I_{KL}[ f(\pi_n) ] + \lambda \left[ \sum_{n=1}^{N} f(\pi_n) - 1 \right] + \beta \left[ U - \sum_{n=1}^{N} f(\pi_n) \, U(L_n) \right] ,
in which λ and β are the Lagrange multipliers guaranteeing the validity of the imposed constraints.
In order to correctly reflect the objective meaning of the utility factor, it has to grow together with the utility, so as to satisfy the variational condition
\frac{ \delta f(\pi_n) }{ \delta U(L_n) } > 0
for any value of the expected utility. Additionally, the utility factor has to be zero for zero utility, which implies the boundary condition:
f(\pi_n) \rightarrow 0 , \qquad U(L_n) \rightarrow 0 .
To satisfy these conditions, it is feasible to take the likelihood function f_0(\pi_n) proportional to U(L_n).
The minimization of the information functional (33) yields the utility factor:
f(\pi_n) = \frac{ U(L_n) }{ Z } \, \exp\{ \beta \, U(L_n) \} ,
with the normalization factor:
Z = \sum_{n=1}^{N} U(L_n) \, \exp\{ \beta \, U(L_n) \} .
Conditions (34) and (35) require that the Lagrange multiplier β be non-negative, varying in the interval 0 ≤ β < ∞. This quantity can be called the belief parameter, or certainty parameter, because of its meaning following from Equations (32) and (33). The value of β reflects the level of certainty of a decision maker with respect to the given set of lotteries and to the possible occurrence of infinite (or extremely large) utilities.
If one is strongly uncertain about the outcome of the decision to be made with respect to the given lotteries, thinking that nothing should be excluded, so that quantity (32) can take any value, including infinite ones, then, to make the information functional (33) meaningful, one must set the belief parameter to zero: β = 0. Thus, a zero belief parameter reflects strong uncertainty with respect to the given set of lotteries. Then, the utility factor (36) becomes:
f(\pi_n) = \frac{ U(L_n) }{ \sum_{n=1}^{N} U(L_n) } \qquad ( \beta = 0 ) .
In that way, the uncertainty in decision making leads to the probabilistic decision theory, with the probability weight described by (36). In the case of strong uncertainty, with the zero belief parameter, the probabilistic weight (36) reduces to form (38) suggested by Luce. It has been mentioned [129] that the Luce form cannot describe the situations where behavioral effects are important. However, in our approach, form (38) is only a part of the total behavioral probability (8). Expression (36), by construction, represents only the objective value of a lottery; hence, it is not supposed to include subjective phenomena. The subjective part of the behavioral probability (8) is characterized by the attraction factor (16). As a consequence, the total behavioral probability (8) includes both objective, as well as subjective effects.
In the intermediate case, when one is not completely certain, but, anyway, assumes that (32) cannot be infinite (or extremely large), then β is also finite, and the utility factor (36) is to be used. This is the general case of the probabilistic decision making.
However, when one is absolutely certain of the rationality of the choice among the given lottery set, that is, when one believes that the decision can be made completely rationally, then the belief parameter is large, β → ∞, which results in the utility factor:
f(\pi_n) = \begin{cases} 1 , & U(L_n) = \max_n U(L_n) \\ 0 , & U(L_n) \neq \max_n U(L_n) , \end{cases}
corresponding to the deterministic classical utility theory, when, with probability one, the lottery with the largest expected utility is to be chosen.
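The following sketch (with illustrative expected utilities; the function name is ours) implements the utility factor of this section for lotteries with positive expected utilities and shows how the belief parameter β interpolates between the Luce form at β = 0 and the deterministic classical rule as β → ∞.

```python
# Sketch of the utility factor of this section, f(pi_n) proportional to
# U(L_n) exp(beta * U(L_n)), for a set of lotteries with positive expected
# utilities. The expected utilities below are assumptions for illustration.

from math import exp

def utility_factors_gains(U, beta=0.0):
    """f(pi_n) = U(L_n) exp(beta U(L_n)) / Z for positive expected utilities.
    beta = 0 reproduces the Luce form U(L_n)/sum_m U(L_m); increasing beta
    shifts the weight toward the lottery with the largest expected utility,
    approaching the deterministic classical rule as beta -> infinity."""
    weights = [u * exp(beta * u) for u in U]
    Z = sum(weights)
    return [w / Z for w in weights]

if __name__ == "__main__":
    U = [2.409, 2.4, 1.2]                    # hypothetical expected utilities
    for beta in (0.0, 1.0, 10.0):
        f = utility_factors_gains(U, beta)
        print(beta, [round(fn, 3) for fn in f])
    # beta = 0 gives the probabilistic (Luce) weights; larger beta tilts the
    # weights further toward the lottery with the largest expected utility.
```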

8. Negative Expected Utilities

We now consider lotteries with negative expected utilities, when:
U(L_n) < 0 .
Instead of the negative lottery expected utility, it is convenient to introduce a positive quantity:
C(L_n) \equiv - U(L_n) > 0 ,
called the lottery expected cost or the lottery expected risk.
Similarly to Equation (32), it is possible to define the average cost:
\sum_{n=1}^{N} f(\pi_n) \, C(L_n) = C
that can be either finite or infinite. The latter case, assuming the existence of infinite (or extremely large) costs, corresponds to a situation that can be interpreted as a “disaster”, ending the decision-making process. For instance, this can be interpreted as the loss of life of the decision maker, who, when dead, “sees” in a sense any arbitrary positive lottery payoff as useless, i.e., dwarfed by his/her infinite loss. One should not confuse this with the perspective of society, which puts a price tag on human life, a valuation that depends on culture and affluence. It remains true, however, that an arbitrary positive payoff has no impact on a dead person, neglecting here bequest considerations.
The utility factor can again be defined as a minimizer of the information functional that now reads as:
I[ f(\pi_n) ] = I_{KL}[ f(\pi_n) ] + \lambda \left[ \sum_{n=1}^{N} f(\pi_n) - 1 \right] + \beta \left[ \sum_{n=1}^{N} f(\pi_n) \, C(L_n) - C \right] .
To preserve the meaning of the utility factor, reflecting the usefulness of a lottery, it is required that a larger cost correspond to a smaller utility factor, so that:
\frac{ \delta f(\pi_n) }{ \delta C(L_n) } < 0
for any cost. Additionally, as is obvious, an infinite cost must suppress the utility, thus requiring the boundary condition:
f(\pi_n) \rightarrow 0 , \qquad C(L_n) \rightarrow \infty .
These conditions make it reasonable to consider a likelihood function f_0(\pi_n) inversely proportional to the lottery cost C(L_n).
Minimizing the information functional (43), we obtain the utility factor:
f(\pi_n) = \frac{ C^{-1}(L_n) }{ Z } \, \exp\{ - \beta \, C(L_n) \} ,
with the normalization constant:
Z = \sum_{n=1}^{N} C^{-1}(L_n) \, \exp\{ - \beta \, C(L_n) \} .
Here, again, β has the meaning of a belief parameter connected with the belief of a decision maker in the rationality of the choice among the given lotteries and the possibility of a disaster related to a lottery with an infinite (or extremely large) cost.
The possible occurrence of any outcome, including a disaster, implies that quantity (42) is not restricted and could even go to infinity. To allow for such an occurrence, and to make the information functional meaningful, we need to set β = 0. Thus, similarly to the considerations for utility functions with non-negative expectation, strong uncertainty about the given lottery set and the related outcome of decision making implies the zero belief parameter β = 0. Then, the utility factor is:
f(\pi_n) = \frac{ C^{-1}(L_n) }{ \sum_{n=1}^{N} C^{-1}(L_n) } ,
and we recover the probabilistic utility theory (or probabilistic cost theory).
Similarly to the previous case, an intermediate level of uncertainty implies a finite belief parameter β, in which case form (46) should be used. This is the general situation in the probabilistic cost theory.
Additionally, when one is absolutely certain of the full rationality of the given lottery set, then the belief parameter β → ∞, which gives:
f(\pi_n) = \begin{cases} 1 , & C(L_n) = \min_n C(L_n) \\ 0 , & C(L_n) \neq \min_n C(L_n) . \end{cases}
Then, we return to the deterministic classical utility theory (cost theory).
Following the procedure used for positive utilities, it is straightforward to classify the lotteries into more or less useful, more or less attractive and more or less preferable.
In what follows, considering a lottery L_n, we shall keep in mind the related prospect π_n, but for simplicity, it is also possible to write f(L_n) instead of f(π_n).
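A companion sketch (again with illustrative numbers; the function name is ours) gives the cost-based utility factor of this section for lotteries with negative expected utilities.

```python
# Companion sketch for lotteries with negative expected utilities: the
# utility factor is built from the expected costs C(L_n) = -U(L_n),
# f(pi_n) proportional to C(L_n)^(-1) exp(-beta * C(L_n)). The costs used
# below are illustrative assumptions.

from math import exp

def utility_factors_losses(C, beta=0.0):
    """f(pi_n) = C(L_n)^(-1) exp(-beta C(L_n)) / Z for positive expected costs.
    beta = 0 gives the probabilistic form C^(-1)/sum C^(-1); beta -> infinity
    selects, with probability one, the lottery with the smallest cost."""
    weights = [exp(-beta * c) / c for c in C]
    Z = sum(weights)
    return [w / Z for w in weights]

if __name__ == "__main__":
    C = [3.2, 3.0]                           # hypothetical expected costs
    for beta in (0.0, 2.0, 50.0):
        print(beta, [round(fn, 3) for fn in utility_factors_losses(C, beta)])
    # beta = 0: f = [0.484, 0.516]; larger beta pushes the weight onto the
    # lottery with the smaller expected cost.
```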

9. Typical Examples of Decisions under Strong Uncertainty

To give a feeling of how our approach works in practice, let us consider the series of classical laboratory experiments treated by Kahneman and Tversky [54], where the number of decision makers was around D ≈ 100 and the related statistical errors on the frequencies of decisions were about ±0.1. The respondents had to choose between two lotteries, where payoffs were counted in monetary units. These experiments stress the influence of uncertainty, similarly to the Allais paradox [48], although being simpler in their setup.
Each pair of lotteries in a decision choice has been composed in such a way that the two lotteries have very close, or in many cases just coinciding, expected utilities, hence coinciding or almost coinciding utility factors, and not much different payoff weights. The choice between these very similar lotteries is essentially uncertain. Therefore, we would expect that the irrationality measure, for such strongly uncertain lotteries, should be larger than 0.25.
In order to interpret these experimental results in our framework, we use a linear utility function u(x) = const · x. The advantage of working with the linear utility function, using the utility factors, is that, by their structure, the latter do not depend on the constant in the definition of the utility function, nor on the monetary units used, which can be arbitrary. We calculate the utility factors assuming that decisions are made under uncertainty, such that the belief parameter β = 0.
The most important point of the consideration is to calculate the predicted irrationality measure (29) and to compare it with the irrationality measure (20) found in the experiments.
Below, we analyze fourteen problems in decision making, seven of which deal with lotteries with positive expected utilities and seven with negative expected utilities. The general scheme is as follows. First, the problem of choosing between two lotteries is formulated. Then, the utility factors are calculated. For positive expected utilities, these factors are given by expression (38) that, in the case of two lotteries, read as:
f(\pi_n) = \frac{ U(L_n) }{ U(L_1) + U(L_2) } .
For negative expected utilities, it is necessary to use Expression (48) that, in the case of two lotteries, reduces to:
f(\pi_n) = 1 - \frac{ C(L_n) }{ C(L_1) + C(L_2) } .
Calculating the lottery utility differences (22) for each game and using the threshold (23) allows us to determine the fraction ν of the games that can be classified as difficult, from which we obtain the predicted irrationality measure (29).
After this, using the experimental results to determine the frequentist probabilities, we find the attraction factors (19). We then calculate the irrationality measure (20) as the average over the absolute values of the attraction factors |q_exp| found from (19) for each game. Finally, we compare this aggregate quantity, over the set of games and the ensemble of subjects, with the predicted value (29).
Below, we give a brief description of the games treated by Kahneman and Tversky [54] and then summarize the results in Table 1.
Game 1. Lotteries:
L_1 = {2.5, 0.33 | 2.4, 0.66 | 0, 0.01}, L_2 = {2.4, 1}.
The first lottery is more useful; however, it is less attractive, becoming less preferable. It is clear why the second lottery is more attractive: it provides a more certain gain, although the gains in both lotteries are close to each other. As a result, the second lottery is preferable (π_1 < π_2).
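As a worked illustration of the scheme described above (our own calculation under the stated assumptions of a linear utility function and β = 0; the choice frequency used for q_exp is hypothetical, not the experimental value), the utility factors and the difficulty classification of Game 1 can be reproduced as follows.

```python
# Worked sketch for Game 1 under the stated assumptions: linear utility
# u(x) = x and belief parameter beta = 0. The choice frequency used to
# illustrate q_exp is a hypothetical number, not the experimental value.

def expected_utility(lottery):
    return sum(x * p for x, p in lottery)

L1 = [(2.5, 0.33), (2.4, 0.66), (0.0, 0.01)]
L2 = [(2.4, 1.0)]

U1, U2 = expected_utility(L1), expected_utility(L2)   # 2.409 and 2.400
f1, f2 = U1 / (U1 + U2), U2 / (U1 + U2)               # about 0.5009 and 0.4991

delta = 200.0 * abs(f1 - f2) / (f1 + f2)              # relative difference in %
print("f1, f2 =", round(f1, 4), round(f2, 4))
print("Delta = %.2f%% -> %s choice" % (delta, "difficult" if delta < 1.0 else "easy"))

p2_exp = 0.8                                          # hypothetical fraction choosing L2
q2_exp = p2_exp - f2                                  # attraction factor of prospect 2
print("q_exp(pi_2) =", round(q2_exp, 3))              # positive: L2 is more attractive
```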
Game 2. Lotteries:
$$
L_1 = \{ 2.5, 0.33 \mid 0, 0.67 \} , \qquad L_2 = \{ 2.4, 0.34 \mid 0, 0.66 \} .
$$
Now, the first lottery is both more useful and more attractive, since the payoff weights in the two lotteries are close to each other, while the first lottery allows for a slightly higher gain. Thus, the first lottery is preferable (π1 > π2).
Game 3. Lotteries:
$$
L_1 = \{ 4, 0.8 \mid 0, 0.2 \} , \qquad L_2 = \{ 3, 1 \} .
$$
The first lottery is more useful, but less attractive. The second lottery is more attractive because it gives a certain gain, although the gains in both lotteries are comparable. The second lottery becomes preferable (π1 < π2).
Game 4. Lotteries:
$$
L_1 = \{ 4, 0.2 \mid 0, 0.8 \} , \qquad L_2 = \{ 3, 0.25 \mid 0, 0.75 \} .
$$
The first lottery is more useful and also more attractive, since it suggests a slightly larger gain under very close payoff weights. The first lottery is preferable (π1 > π2).
Game 5. Lotteries:
$$
L_1 = \{ 6, 0.45 \mid 0, 0.55 \} , \qquad L_2 = \{ 3, 0.9 \mid 0, 0.1 \} .
$$
Both lotteries are equally useful. However, the second lottery gives a more certain gain, thus being more attractive and becoming preferable (π1 < π2).
Game 6. Lotteries:
$$
L_1 = \{ 6, 0.001 \mid 0, 0.999 \} , \qquad L_2 = \{ 3, 0.002 \mid 0, 0.998 \} .
$$
Again, both lotteries are equally useful, but the first lottery is more attractive, suggesting a larger gain under close payoff weights. Therefore, the first lottery is preferable (π1 > π2).
Game 7. Lotteries:
$$
L_1 = \{ 6, 0.25 \mid 0, 0.75 \} , \qquad L_2 = \{ 4, 0.25 \mid 2, 0.25 \mid 0, 0.5 \} .
$$
Both lotteries are equally useful. However, the second lottery gives more chances for gains, being more attractive. The second lottery becomes preferable (π1 < π2).
Game 8. Lotteries:
$$
L_1 = \{ -4, 0.8 \mid 0, 0.2 \} , \qquad L_2 = \{ -3, 1 \} .
$$
The second lottery is more useful; however, it is less attractive, since it implies a certain loss. Therefore, the first lottery is preferable (π1 > π2).
Game 9. Lotteries:
$$
L_1 = \{ -4, 0.20 \mid 0, 0.80 \} , \qquad L_2 = \{ -3, 0.25 \mid 0, 0.75 \} .
$$
The second lottery is more useful and more attractive, since its loss is lower, while the loss weights in both lotteries are close to each other. This makes the second lottery preferable (π1 < π2).
Game 10. Lotteries:
$$
L_1 = \{ -3, 0.9 \mid 0, 0.1 \} , \qquad L_2 = \{ -6, 0.45 \mid 0, 0.55 \} .
$$
Although the utilities of both lotteries are the same, the first lottery is less attractive, since the loss there is more certain. As a result, the second lottery is preferable (π1 < π2).
Game 11. Lotteries:
$$
L_1 = \{ -3, 0.002 \mid 0, 0.998 \} , \qquad L_2 = \{ -6, 0.001 \mid 0, 0.999 \} .
$$
Both lotteries are again equally useful. However, the second lottery is less attractive, yielding a higher loss under close loss weights. This is why the first lottery is preferable (π1 > π2).
Game 12. Lotteries:
$$
L_1 = \{ -1, 0.5 \mid 0, 0.5 \} , \qquad L_2 = \{ -0.5, 1 \} .
$$
The utilities of both lotteries are equal. However, the second lottery is less attractive, since it results in a certain loss. Hence, the first lottery is preferable (π1 > π2).
Game 13. Lotteries:
$$
L_1 = \{ -6, 0.25 \mid 0, 0.75 \} , \qquad L_2 = \{ -4, 0.25 \mid -2, 0.25 \mid 0, 0.5 \} .
$$
Although the utilities of the lotteries are again equal, the second lottery has more chances to result in a loss, thus being less attractive. Consequently, the first lottery is preferable (π1 > π2).
Game 14. Lotteries:
$$
L_1 = \{ -5, 0.001 \mid 0, 0.999 \} , \qquad L_2 = \{ -0.005, 1 \} .
$$
Both lotteries are equally useful. However, the second lottery is more attractive, since the loss there is three orders of magnitude smaller than in the first lottery. Hence, the second lottery is preferable (π1 < π2).
The results for these games are summarized in Table 1. Among the 14 games above, nine are difficult, which yields a fraction of difficult choices ν = 9/14. Expression (29) then predicts q̄ = 0.274.
The irrationality measure is larger than 0.25. This is not surprising, since the lotteries are arranged in such a way that their expected utilities are very close to each other or, in the majority of cases, even coincide. Hence, it is not easy to choose between such similar lotteries, which makes the decision choice rather difficult. Averaging the experimentally found moduli of the attraction factors over all fourteen problems, we get the irrationality measure q̄ = 0.275, which practically coincides with the theoretically predicted value q̄ = 0.274.
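This comparison can be reproduced from the q(π1) column of Table 1 alone. The short sketch below is an illustration under the assumption that Expression (29) takes the closed form q̄ = (1 + ν)/6, which is consistent with all the values 0.274, 0.17, 1/6 and 1/3 quoted in the text:

```python
# Averaging the moduli of the Table 1 attraction factors and comparing with the
# assumed closed form q_bar = (1 + nu)/6 of Expression (29).

q_pi1 = [-0.321, 0.327, -0.316, 0.134, -0.36, 0.23, -0.32,
         0.436, -0.064, -0.42, 0.20, 0.19, 0.20, -0.33]      # q(pi_1) for Games 1-14

q_bar_observed = sum(abs(q) for q in q_pi1) / len(q_pi1)
q_bar_predicted = (1 + 9 / 14) / 6                           # nu = 9/14 difficult games

print(round(q_bar_observed, 3), round(q_bar_predicted, 3))   # 0.275 0.274
```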
It is worth mentioning that the values of the attraction factors for a particular decision choice and, moreover, for each separate decision maker are, of course, quite different. Dealing with a large pool of decision makers and several choices smooths out these particular differences, so that the resulting irrationality measure characterizes the typical decision making of a large society facing quite uncertain choices. In the case treated above, the number of decision makers D was about 100. Hence, the total number of choices for the 14 lottery pairs is sufficiently large, being around 100 × 14 = 1400.

10. Analysis for a Large Recent Set of Empirical Data

When the lotteries are not specially arranged to produce high uncertainty in the decision choice, but are composed in a random way, we may expect the irrationality measure to be smaller than in the case of the previous section. In order to check this, we consider a large set of decision choices, using the results of a recently completed massive experimental test with different lotteries, among which there are many for which the decision choice is simple [130]. The subject pool consisted of 142 participants, each choosing within a set of 91 pairs of binary lotteries. The choices were administered in two sessions, approximately two weeks apart, with the same set of lotteries. However, the item order was randomized, so that the choices in the two sessions can be treated as independent. Thus, the total effective number of choices was 142 × 91 × 2 = 25,844.
Each choice was made between two binary option lotteries, which are denoted as:
$$
A = \{ A_1, p(A_1) \mid A_2, p(A_2) \} , \qquad B = \{ B_1, p(B_1) \mid B_2, p(B_2) \} ,
$$
with payoffs Ai and Bi, payoff weights p(Ai) and p(Bi), and with pA and pB denoting the fractions of subjects choosing lottery A or B, respectively. There were three main types of lotteries: lotteries with only gains (Table 2 and Table 3), lotteries with only losses (Table 4 and Table 5), and mixed lotteries with both gains and losses (Table 6 and Table 7). The specific order of the 91 choices within each session is not important, since the sessions were administered two weeks apart and the items were randomized, so that the choices can be treated as independent. The fractions of decision makers choosing the same lottery in the first and second sessions, although close to each other, were generally different, the difference varying between zero and 0.15, which reflects the contextuality of decisions. This variation represents what can be considered as random noise in the decision process, limiting the accuracy of the results to an error of order 0.1.
We calculate the expected utilities U(A) and U(B), or the expected costs C(A) and C(B), and the corresponding utility factors f(A) and f(B), as explained above. Then, we find the attraction factors:
$$
q(A) \equiv p_A - f(A) , \qquad q(B) \equiv p_B - f(B) .
$$
The results are presented in Table 2 and Table 3 for the lotteries with gains, in Table 4 and Table 5 for the lotteries with losses, and in Table 6 and Table 7 for the mixed lotteries. Table 3, Table 5 and Table 7 also include the difference Δ given by Expression (22), which allows us to determine the number of games with a difficult choice.
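As a concrete check (a sketch in the same spirit as above, not the authors' code), the first row of Table 3 follows from the first pair of gain lotteries in Table 2:

```python
# First pair of gain lotteries from Table 2, with the first-session choice fraction p_A.
A = [(24, 0.34), (59, 0.66)]
B = [(47, 0.42), (64, 0.58)]
p_A = 0.14

U_A = sum(x * p for x, p in A)        # 47.10
U_B = sum(x * p for x, p in B)        # 56.86
f_A = U_A / (U_A + U_B)               # 0.453
q_A = p_A - f_A                       # -0.313, as in the first row of Table 3

print(round(U_A, 2), round(U_B, 2), round(f_A, 3), round(q_A, 3))
```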
Analyzing these games, we see that there is just one difficult game; hence, ν = 1/91. Formula (29) then predicts the irrationality measure q̄ = 0.17. Averaging the moduli of the empirical attraction factors over all lotteries yields the experimentally observed irrationality measure q̄ = 0.17, in perfect agreement with the predicted value.
In this way, the irrationality measure is a convenient characteristic quantifying a given set of lotteries in decision making. It measures the level of irrationality, that is, the deviation from rationality, of decision makers facing games with the given lottery set. Such a deviation is caused by the uncertainty encapsulated in the lottery set. On average, the irrationality measure typical of the non-informative prior equals 0.25. However, in particular realizations, this measure can deviate from 0.25, depending on the typical level of uncertainty contained in the given set of lotteries. The irrationality measure for societies of decision makers can thus be predicted. The large set of games considered here demonstrates that the predicted values of the irrationality measure are in perfect agreement with the empirical data.
Recall that, in analyzing the experimental data, we have used two conditions: the threshold of one percent for the relative lottery difference in (23) and the Bernoulli distribution (27). Employing these conditions cannot, however, be treated as fitting. The standard fitting procedure introduces unknown fitting parameters that are calibrated to the observed data of each given experiment. In contrast, in our approach, the imposed conditions are introduced according to general theoretical arguments and are not adjusted afterwards. Thus, the threshold of one percent follows from the requirement that the distance between two alternatives be invariant with respect to the definition of the distance measure [113,114,115,116]. Additionally, the Bernoulli distribution is the usual prior distribution under Conditions (25) and (26) in standard inference tasks [117,118,119]. Moreover, as is easy to check, the results do not change if, instead of the difference threshold of one percent, we accept any value between 0.8% and 1.2%.
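For instance, taking the Δ values of Table 1 and counting the difficult games for several thresholds makes this insensitivity explicit (a sketch; the predicted q̄ again uses the assumed closed form (1 + ν)/6 of Expression (29)):

```python
# Delta values of Table 1, in percent; a game is difficult when Delta lies below the threshold.
delta_table1 = [0.4, 1.2, 6.4, 6.4, 0, 0, 0, 6.4, 6.4, 0, 0, 0, 0, 0]

for threshold in (0.8, 1.0, 1.2):
    nu = sum(d < threshold for d in delta_table1) / len(delta_table1)
    print(threshold, nu, round((1 + nu) / 6, 3))   # nu = 9/14 and q_bar = 0.274 in all cases
```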
Being general, the developed scheme can be applied to any set of games, without fitting to each particular case. In this respect, it is instructive to consider the limiting cases predicted at the end of Section 6. Formula (29) predicts that, when there are no difficult choices, hence ν = 0, one should have q̄ = 1/6. On the contrary, when all games involve difficult choices, so that ν = 1, then q̄ = 1/3. To check these predictions, we can take from Table 3, Table 5 and Table 7 all 90 games with an easy choice, implying ν = 0. Then, we find q̄ = 0.17, which is very close to the predicted value q̄ = 1/6 ≈ 0.167. The opposite limiting case of ν = 1, when all games involve a difficult choice, is represented by the set of nine games, 1, 5, 6, 7, 10, 11, 12, 13 and 14, from Table 1. For this set of games, we find q̄ = 0.29, which is close to the predicted value q̄ = 1/3 ≈ 0.333. Actually, the found and predicted values are not distinguishable within the accuracy of the experiments.
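The second of these checks can be reproduced directly from the q(π1) values of Table 1 for the nine difficult games (a minimal sketch):

```python
# Attraction factors q(pi_1) of the nine difficult games 1, 5, 6, 7, 10, 11, 12, 13, 14.
q_difficult = [-0.321, -0.36, 0.23, -0.32, -0.42, 0.20, 0.19, 0.20, -0.33]

q_bar = sum(abs(q) for q in q_difficult) / len(q_difficult)
print(round(q_bar, 2))   # 0.29, to be compared with the predicted 1/3 = 0.333
```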
It is important to recall that the irrationality measure q̄ has been defined as an aggregate quantity, constructed as an average over many decision makers and many games. To be statistically representative, such an averaging has to involve as many subjects and games as possible. Therefore, when taking a subset of easy or difficult games from the set of all given games, we have to take the maximal number of them, as is done in the examples above. Taking not the maximal number of games, but an arbitrary limited subset, could lead to quite different values of q̄. In the extreme case, the attraction factor q of a single game can be very different. However, the value of q for separate games has nothing to do with the aggregate quantity q̄. When selecting subsets of easy or difficult games, we must include the maximal number of available games.

11. Conclusions

We have developed a probabilistic approach to decision making under risk and uncertainty for the general case of payoffs that can be gains or losses. The approach is based on the notion of behavioral probability, defined as the probability of making decisions in the presence of behavioral biases. This probability is calculated by invoking quantum techniques, justifying the name quantum decision theory. The resulting behavioral probability is a sum of two terms: a utility factor, representing the objective value of the considered lottery, and an attraction factor, characterizing the subjective attitude of a decision maker toward the given lotteries. The utility factors are defined from the principle of minimal information, yielding the best utility weights under the given minimal information. Minimizing the information functional yields the explicit form of the utility factor as a function of the lottery expected utility, or of the lottery expected cost. The form of the utility factor differs for two situations, depending on whether gains or losses prevail: in the first case, the expected utilities of all lotteries are positive (or non-negative), while in the second case, the expected utilities are negative.
In the process of minimizing the information functional, there appears a parameter named the belief parameter, which is a Lagrange multiplier associated with the condition reflecting the belief of decision makers about the level of uncertainty in the process of decision making. When the decision makers are absolutely sure about how to choose, the theory reduces to classical deterministic decision making, where, with probability one, the lottery enjoying the largest expected utility, or the minimal expected cost, is chosen. However, for choices under uncertainty, decision making remains probabilistic.
The attraction factor, despite its contextuality, possesses several general features allowing for a quantitative description of decision making. We introduced an irrationality measure, defined as the average of the attraction-factor moduli over the given lottery set, characterizing how much decision makers deviate from rationality when dealing with this set of lotteries. In the case of the non-informative prior, the irrationality measure equals 0.25. However, for particular lottery sets, it may deviate from this value, depending on the level of uncertainty encapsulated in the considered sets of lotteries. Thus, Formula (29) predicts that, when there are no difficult choices, hence ν = 0, one should have q̄ = 1/6, whereas, when all games involve difficult choices, so that ν = 1, then q̄ = 1/3. These predictions turn out to be surprisingly accurate when compared with empirical data.
We illustrated in detail the applicability of our approach to fourteen examples of highly uncertain lotteries, suggested by Kahneman and Tversky [54], including lotteries with both gains and losses. The calculations are done with a linear utility function, whose advantage is its universality with respect to the monetary units of the payoffs. The form of the utility factor used corresponds to decision making under uncertainty. We then extended the consideration by analyzing 91 more binary-lottery choice problems, administered in two sessions [130]. Taking into account the number of subjects involved, the total number of analyzed choices is about 27,250. The irrationality measure proves to be a convenient tool for quantifying the deviation from rationality in decision making, as well as for characterizing the level of uncertainty in the considered set of lotteries.
Theoretical predictions for the irrationality measure have been found in perfect agreement with the observed empirical data.
Finally, the reader could ask whether quantum theory is needed after all. Indeed, one could formulate the whole approach by just postulating the basic features of the considered quantities and the main rules for calculating the probabilities, without mentioning quantum theory. One can always replace derived properties by postulates to be exploited further. However, such a so-called “theory” would then be a collection of numerous postulates and axioms, whose origin and meaning would be unclear. In contrast, following our approach, the basic properties have been derived from the general definition of quantum probabilities. Thus, the structure of the quantum probability as the sum p = f + q is not a postulate, but the consequence of calculating the probability according to quantum rules. The meaning of f, as the classical probability representing the utility of a prospect, follows from the quantum-classical correspondence principle. Conditions determining when q is nonzero can be found from the underlying quantum techniques [21,24]. Thus, instead of formally fixing a number of postulates of not always clear origin, we derive the main facts of our approach from the general rules of quantum theory. In a deep sense, anchoring our theory of decision making and its structure on the rigid laws of quantum theory makes the approach more logical. It is also worth mentioning that theories based on a smaller number of postulates are termed more “beautiful” than those based on a larger number [131].

Acknowledgments

We are indebted to Ryan O. Murphy and Robert H.W. ten Brincke for their generosity in sharing the experimental data. One of the authors (Vyacheslav I. Yukalov) appreciates the help from and discussions with Elizaveta P. Yukalova.

Author Contributions

Vyacheslav I. Yukalov and Didier Sornette discussed methodology and wrote the text. Vyacheslav I. Yukalov and Didier Sornette contributed equally to this work. Both the authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Williams, C.P.; Clearwater, S.H. Explorations in Quantum Computing; Springer: New York, NY, USA, 1988. [Google Scholar]
  2. Nielsen, M.A.; Chuang, I.L. Quantum Computation and Quantum Information; Cambridge University Press: Cambridge, UK, 2000. [Google Scholar]
  3. Keyl, M. Fundamentals of quantum information theory. Phys. Rep. 2002, 369, 431–548. [Google Scholar] [CrossRef]
  4. Eisert, J.; Wilkens, M. Quantum games. J. Mod. Opt. 2000, 47, 2543–2556. [Google Scholar] [CrossRef]
  5. Landsburg, S.E. Quantum game theory. Am. Math. Soc. 2004, 51, 394–399. [Google Scholar]
  6. Guo, H.; Zhang, J.; Koehler, G.J. A survey of quantum games. Decis. Support. Syst. 2008, 46, 318–332. [Google Scholar] [CrossRef]
  7. Martinez-Martinez, I. A connection between quantum decision theory and quantum games: The Hamiltonian of strategic interaction. J. Math. Psychol. 2014, 58, 33–44. [Google Scholar] [CrossRef]
  8. Bohr, N. Atomic Physics and Human Knowledge; Wiley: Hoboken, NJ, USA, 1958. [Google Scholar]
  9. Bohr, N. Collected Works, Foundations of Quantum Physics; North-Holland: Amsterdam, The Netherlands, 1985. [Google Scholar]
  10. Bohm, D.; Hiley, B. The Undivided Universe: An Ontological Interpretation of Quantum Theory; Routledge Chapman & Hall: London, UK, 1993. [Google Scholar]
  11. Dakic, B.; Suvakov, M.; Paterek, T.; Brukner, C. Efficient hidden-variable simulation of measurements in quantum experiments. Phys. Rev. Lett. 2008, 101, 190402. [Google Scholar] [CrossRef] [PubMed]
  12. Deutsch, D. Quantum theory of probability and decision. Proc. R. Soc. Lond. A 1999, 455, 3129–3137. [Google Scholar] [CrossRef]
  13. Lewis, P.J. Uncertainty and probability for branching selves. Stud. Hist. Philos. Mod. Phys. 2007, 38, 1–14. [Google Scholar] [CrossRef]
  14. Lewis, P.J. Probability, self-location, and quantum branching. Philos. Sci. 2009, 76, 1009–1019. [Google Scholar] [CrossRef]
  15. Yukalov, V.I.; Sornette, D. Quantum decision theory as quantum theory of measurement. Phys. Lett. A 2008, 372, 6867–6871. [Google Scholar] [CrossRef]
  16. Yukalov, V.I.; Sornette, D. Physics of risk and uncertainty in quantum decision making. Eur. Phys. J. B 2009, 71, 533–548. [Google Scholar] [CrossRef]
  17. Yukalov, V.I.; Sornette, D. Mathematical structure of quantum decision theory. Adv. Complex Syst. 2010, 13, 659–698. [Google Scholar] [CrossRef]
  18. Yukalov, V.I.; Sornette, D. Decision theory with prospect interference and entanglement. Theor. Decis. 2011, 70, 283–328. [Google Scholar] [CrossRef]
  19. Yukalov, V.I.; Sornette, D. Quantum probabilities of composite events in quantum measurements with multimode states. Laser Phys. 2013, 23, 105502. [Google Scholar] [CrossRef]
  20. Yukalov, V.I.; Sornette, D. Manipulating decision making of typical agents. IEEE Trans. Syst. Man Cybern. Syst. 2014, 44, 1155–1168. [Google Scholar] [CrossRef]
  21. Yukalov, V.I.; Sornette, D. Conditions for quantum interference in cognitive sciences. Top. Cogn. Sci. 2014, 6, 79–90. [Google Scholar] [CrossRef] [PubMed]
  22. Yukalov, V.I.; Sornette, D. Self-organization in complex systems as decision making. Adv. Compl. Syst. 2014, 17, 1450016. [Google Scholar] [CrossRef]
  23. Yukalov, V.I.; Sornette, D. Preference reversal in quantum decision theory. Front. Psychol. 2015, 6, 01538. [Google Scholar] [CrossRef] [PubMed]
  24. Yukalov, V.I.; Sornette, D. Quantum probability and quantum decision making. Philos. Trans. R. Soc. A 2016, 374, 20150100. [Google Scholar] [CrossRef] [PubMed]
  25. Cox, R.T. The Algebra of Probable Inference; Johns Hopkins Press: Baltimore, MD, USA, 1961. [Google Scholar]
  26. Holik, F.; Saenz, M.; Plastino, A. A discussion on the origin of quantum probabilities. Ann. Phys. 2014, 340, 293–310. [Google Scholar] [CrossRef]
  27. Holik, F.; Bosyk, G.M.; Bellomo, G. Quantum information as a non-Kolmogorovian generalization of Shannon’s theory. Entropy 2015, 17, 7349–7373. [Google Scholar] [CrossRef]
  28. Holik, F.; Plastino, A.; Saenz, M. Natural information measures in Cox approach for contextual probabilistic theories. Quant. Inf. Comput. 2016, 16, 115–133. [Google Scholar]
  29. Von Neumann, J.; Morgenstern, O. Theory of Games and Economic Behavior; Princeton University: Princeton, NJ, USA, 1953. [Google Scholar]
  30. Savage, L.J. The Foundations of Statistics; Wiley: Hoboken, NJ, USA, 1954. [Google Scholar]
  31. Arrow, K.J. Essays in the Theory of Risk Bearing; Markham: Chicago, IL, USA, 1971. [Google Scholar]
  32. Pratt, J.W. Risk aversion in the small and in the large. Econometrica 1964, 32, 122–136. [Google Scholar] [CrossRef]
  33. Rothschild, M.; Stiglitz, J. Increasing risk: A definition. J. Econ. Theor. 1970, 2, 225–243. [Google Scholar] [CrossRef]
  34. Rothschild, M.; Stiglitz, J. Increasing risk: Its economic consequences. J. Econ. Theor. 1971, 3, 66–84. [Google Scholar] [CrossRef]
  35. Friedman, M.; Savage, L. The utility analysis of choices involving risk. J. Political Econ. 1948, 56, 279–304. [Google Scholar] [CrossRef]
  36. Markowitz, H. The utility of wealth. J. Political Econ. 1952, 60, 151–158. [Google Scholar] [CrossRef]
  37. Lindgren, B.W. Elements of Decision Theory; Macmillan: New York, NY, USA, 1971. [Google Scholar]
  38. White, D.I. Fundamentals of Decision Theory; Elsevier: Amsterdam, The Netherlands, 1976. [Google Scholar]
  39. Rivett, P. Model Building for Decision Analysis; Wiley: Hoboken, NJ, USA, 1980. [Google Scholar]
  40. Berger, J.O. Statistical Decision Theory and Bayesian Analysis; Springer: Berlin/Heidelberg, Germany, 1985. [Google Scholar]
  41. Marshall, K.T.; Oliver, R.M. Decision Making and Forecasting; McGraw-Hill: New York, NY, USA, 1995. [Google Scholar]
  42. Bather, J. Decision Theory; Wiley: Hoboken, NJ, USA, 2000. [Google Scholar]
  43. French, S.; Insua, D.R. Statistical Decision Theory; Arnold: London, UK, 2000. [Google Scholar]
  44. Raiffa, H.; Schlaifer, R. Applied Statistical Decision Theory; Wiley: Hoboken, NJ, USA, 2000. [Google Scholar]
  45. Weirich, P. Decision Space; Cambridge University Press: Cambridge, UK, 2001. [Google Scholar]
  46. Gollier, C. Economics of Risk and Time; MIT Press: Cambridge, MA, USA, 2001. [Google Scholar]
  47. Ariely, D. Predictably Irrational; Harper: New York, NY, USA, 2008. [Google Scholar]
  48. Allais, M. Le comportement de l’homme rationnel devant le risque: Critique des postulats et axiomes de l’ecole Americaine. Econometrica 1953, 21, 503–546. [Google Scholar] [CrossRef]
  49. Edwards, W. The prediction of decision among bets. J. Exp. Psychol. 1955, 50, 200–204. [Google Scholar] [CrossRef]
  50. Edwards, W. Subjective probabilities inferred from decisions. Psychol. Rev. 1962, 69, 109–135. [Google Scholar] [CrossRef] [PubMed]
  51. Ellsberg, D. Risk, ambiguity, and the Savage axioms. Q. J. Econ. 1961, 75, 643–669. [Google Scholar] [CrossRef]
  52. Camerer, C.F.; Loewenstein, G.; Rabin, R. Advances in Behavioral Economics; Princeton University: Princeton, NJ, USA, 2003. [Google Scholar]
  53. Machina, M.J. Non-expected utility theory. In New Palgrave Dictionary of Economics; Durlauf, S.N., Blume, L.E., Eds.; Macmillan: New York, NY, USA, 2008. [Google Scholar]
  54. Kahneman, D.; Tversky, A. Prospect theory: An analysis of decision under risk. Econometrica 1979, 47, 263–292. [Google Scholar] [CrossRef]
  55. Karmarkar, U. Subjectively weighted utility: A descriptive extension of the expected utility model. Org. Behav. Hum. Perform. 1978, 21, 61–72. [Google Scholar] [CrossRef]
  56. Karmarkar, U. Subjectively weighted utility and the Allais paradox. Org. Behav. Hum. Perform. 1979, 24, 67–72. [Google Scholar] [CrossRef]
  57. Chew, S. A generalization of the quasilinear mean with applications to the measurement of income inequality and decision theory resolving the Allais paradox. Econometrica 1983, 51, 1065–1092. [Google Scholar]
  58. Loomes, G.; Sugden, R. Incorporating a stochastic element into decision theories. Eur. Econ. Rev. 1995, 39, 641–648. [Google Scholar] [CrossRef]
  59. Hey, J. The economics of optimism and pessimism: A definition and some applications. Kyklos 1984, 37, 181–205. [Google Scholar] [CrossRef]
  60. Yaari, M. The dual theory of choice under risk. Econometrica 1987, 55, 95–115. [Google Scholar] [CrossRef]
  61. Green, J.; Jullien, B. Ordinal independence in nonlinear utility theory. J. Risk Uncertain. 1988, 1, 355–387. [Google Scholar] [CrossRef]
  62. Chew, S.; Epstein, L.; Segal, U. Mixture symmetry and quadratic utility. Econometrica 1991, 59, 139–163. [Google Scholar] [CrossRef]
  63. Kothiyal, A.; Spinu, V.; Wakker, P.P. An experimental test of prospect theory for predicting choice under ambiguity. J. Risk Uncertain. 2014, 48, 1–17. [Google Scholar] [CrossRef]
  64. Hey, J.D.; Pace, N. The explanatory and predictive power of non-two-stage probability theories of decision making under ambiguity. J. Risk Uncertain. 2014, 49, 1–29. [Google Scholar] [CrossRef] [Green Version]
  65. Safra, Z.; Segal, U. Calibration results for non-expected utility theories. Econometrica 2008, 76, 1143–1166. [Google Scholar]
  66. Al-Najjar, N.I.; Weinstein, J. The ambiguity aversion literature: A critical assessment. Econ. Philos. 2009, 25, 249–284. [Google Scholar] [CrossRef]
  67. Al-Najjar, N.I.; Weinstein, J. The ambiguity aversion literature: A critical assessment. Econ. Philos. 2009, 25, 357–369. [Google Scholar] [CrossRef]
  68. Bohr, N. Light and life. Nature 1933, 131, 421–423; 457–459. [Google Scholar] [CrossRef]
  69. Von Neumann, J. Mathematical Foundations of Quantum Mechanics; Princeton University: Princeton, NJ, USA, 1955. [Google Scholar]
  70. Baaquie, B.E. Quantum Finance; Cambridge University: Cambridge, UK, 2004. [Google Scholar]
  71. Khrennikov, A. Ubiquitous Quantum Structure; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  72. Busemeyer, J.R.; Bruza, P. Quantum Models of Cognition and Decision; Cambridge University Press: Cambridge, UK, 2012. [Google Scholar]
  73. Haven, E.; Khrennikov, A. Quantum Social Science; Cambridge University Press: Cambridge, UK, 2013. [Google Scholar]
  74. Bagarello, F. Quantum Dynamics for Classical Systems; Wiley: Hoboken, NJ, USA, 2013. [Google Scholar]
  75. Yukalov, V.I.; Sornette, D. Processing information in quantum decision theory. Entropy 2009, 11, 1073–1120. [Google Scholar] [CrossRef]
  76. Sornette, D. Physics and financial economics (1776-2014): Puzzles, Ising and agent-based models. Rep. Prog. Phys. 2014, 77, 062001. [Google Scholar] [CrossRef] [PubMed]
  77. Ashtiani, M.; Azgomi, M.A. A survey of quantum-like approaches to decision making and cognition. Math. Soc. Sci. 2015, 75, 49–50. [Google Scholar] [CrossRef]
  78. Simon, H.A. A behavioral model of rational choice. Q. J. Econ. 1955, 69, 99–118. [Google Scholar] [CrossRef]
  79. Cialdini, R.B. The science of persuasion. Sci. Am. 2001, 284, 76–81. [Google Scholar] [CrossRef]
  80. Loewenstein, G.; Rick, S.; Cohen, J.D. Neuroeconomics. Ann. Rev. Psychol. 2008, 59, 647–672. [Google Scholar] [CrossRef] [PubMed]
  81. Hens, T.; Bachmann, K. Behavioral Finance for Private Banking; Wiley: Hoboken, NJ, USA, 2008. [Google Scholar]
  82. Malvergne, Y.; Sornette, D. Extreme Financial Risks; Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  83. Marley, A.A.J. A historical and contemporary perspective on random scale representations of choice probabilities and reaction time. J. Math. Psychol. 1990, 34, 81–87. [Google Scholar] [CrossRef]
  84. Scott, H.P. Cumulative prospect theory’s functional menagerie. J. Risk Uncertain. 2006, 32, 101–130. [Google Scholar]
  85. Hey, J.D.; Orme, C. Investigating generalizations of expected utility theory using experimental data. Econometrica 1994, 62, 1291–1326. [Google Scholar] [CrossRef]
  86. Loomes, G.; Sugden, R. Regret theory: An alternative theory of rational choice under uncertainty. Econ. J. 1982, 92, 805–824. [Google Scholar] [CrossRef]
  87. Loomes, G.; Moffatt, P.G.; Sugden, R. A microeconomic test of alternative stochastic theories of risky choice. J. Risk Uncertain. 2002, 24, 103–130. [Google Scholar] [CrossRef]
  88. Heyer, D.; Niederee, R. Elements of a model-theoretic framework for probabilistic measurement. In Mathematical Psychology in Progress; Springer: Berlin/Heidelberg, Germany, 1989; pp. 99–112. [Google Scholar]
  89. Heyer, D.; Niederee, R. Generalizing the concept of binary choice systems induced by rankings: One way of probabilizing deterministic measurement structures. Math. Soc. Sci. 1982, 23, 31–44. [Google Scholar] [CrossRef]
  90. Regenwetter, M. Random utility representations of finite many relations. J. Math. Psychol. 1996, 40, 219–234. [Google Scholar] [CrossRef] [PubMed]
  91. Niederee, R.; Heyer, D. Generalized random utility models and the representational theory of measurement: A conceptual link. In Choice, Decision and Measurement: Essays in Honor of R. Duncan Luce; Lawrence Erlbaum: Mahwah, NJ, USA, 1997; pp. 155–189. [Google Scholar]
  92. Regenwetter, M.; Marley, A.A. Random relations, random utilities, and random functions. J. Math. Psychol. 2001, 45, 864–912. [Google Scholar] [CrossRef]
  93. Regenwetter, M.; Dana, J.; Davis-Stober, C.P. Testing transitivity of preferences on two-alternative forced choice data. Front. Psychol. 2010, 1. [Google Scholar] [CrossRef] [PubMed]
  94. Hey, J.D. Experimental investigations of errors in decision making under risk. Eur. Econ. Rev. 1995, 39, 633–640. [Google Scholar] [CrossRef]
  95. Carbone, E.; Hey, J.D. Which error story is best? J. Risk Uncertain. 2000, 20, 161–176. [Google Scholar] [CrossRef]
  96. Hey, J.D. Why we should not be silent about noise. Exp. Econ. 2005, 8, 325–345. [Google Scholar] [CrossRef]
  97. Loomes, G. Modelling the stochastic component of behaviour in experiments: Some issues for the interpretation of the data. Exp. Econ. 2005, 8, 301–323. [Google Scholar] [CrossRef]
  98. Isen, A.M.; Geva, N. The influence of positive affect on acceptable level of risk: The person with a large canoe has a large worry. Org. Behav. Hum. Decis. Proc. 1987, 39, 145–154. [Google Scholar] [CrossRef]
  99. Mano, H. Risk-taking, framing effects, and affect. Org. Behav. Hum. Decis. Proc. 1994, 57, 38–58. [Google Scholar] [CrossRef]
  100. Kühberger, A. The influence of framing on risky decisions: A meta-analysis. Org. Behav. Hum. Decis. Proc. 1998, 75, 23–55. [Google Scholar] [CrossRef]
  101. Kühberger, A.; Perner, J. The role of competition and knowledge in the Ellsberg task. J. Behav. Decis. Mak. 2003, 16, 181–191. [Google Scholar] [CrossRef]
  102. Charness, G.; Karni, E.; Levin, D. Individual and group decision making under risk: An experimental study of Bayesian updating and violations of first-order dominance. J. Risk Uncertain. 2007, 35, 129–148. [Google Scholar] [CrossRef]
  103. Luce, R.D.; Ng, C.N.; Marley, A.A.J.; Aczel, J. Utility of gambling: Entropy modified linear weighted utility. Econ. Theory 2008, 36, 1–33. [Google Scholar] [CrossRef]
  104. Luce, R.D.; Ng, C.T.; Marley, A.A.J.; Aczel, J. Utility of gambling: Risk, paradoxes, and data. Econ. Theory 2008, 36, 165–187. [Google Scholar] [CrossRef]
  105. Brock, W.A.; Durlauf, S.N. Discrete choice with social interactions. Rev. Econ. Stud. 2001, 68, 235–260. [Google Scholar] [CrossRef]
  106. Luce, R.D. A probabilistic theory of utility. Econometrica 1958, 26, 193–224. [Google Scholar] [CrossRef]
  107. Luce, R.D. Individual Choice Behavior: A Theoretical Analysis; Wiley: Hoboken, NJ, USA, 1959. [Google Scholar]
  108. Luce, R.D. The choice axiom after twenty years. J. Math. Psychol. 1977, 15, 215–233. [Google Scholar] [CrossRef]
  109. Yellott, J.J. The relationship between Luce choice axiom, Thurstone theory of comparative judgement, and the double exponential distribution. J. Math. Psychol. 1977, 15, 109–144. [Google Scholar] [CrossRef]
  110. Tversky, A.; Kahneman, D. Advances in prospect theory: cumulative representation of uncertainty. J. Risk Uncertain. 1992, 5, 297–323. [Google Scholar] [CrossRef]
  111. Gescheider, G.A. Psychophysics: The Fundamentals; Lawrence Erlbaum Associates: Mahwah, NJ, USA, 1997. [Google Scholar]
  112. Henderson, D.R.; Hooper, C.L. Making Great Decisions in Business and Life; Chicago Park Press: Chicago, IL, USA, 2006. [Google Scholar]
  113. Thurstone, L.L. A law of comparative judgment. Psychol. Rev. 1927, 4, 273–286. [Google Scholar] [CrossRef]
  114. Krantz, D.H. Rational distance functions for multidimensional scaling. J. Math. Psychol. 1967, 4, 226–245. [Google Scholar] [CrossRef]
  115. Rumelhart, D.L.; Greeno, G. Similarity between stimuli: An experimental test of the Luce and Restle choice models. J. Math. Psychol. 1971, 8, 370–381. [Google Scholar] [CrossRef]
  116. Lorentziadis, P.L. Preference under risk in the presence of indistinguishable probabilities. Oper. Res. 2013, 13, 429–446. [Google Scholar] [CrossRef]
  117. Devroye, L. Non-Uniform Random Variate Generation; Springer: Berlin/Heidelberg, Germany, 1986. [Google Scholar]
  118. MacKay, D. Information Theory, Inference, and Learning; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
  119. Cover, T.M.; Thomas, J.A. Elements of Information Theory; Wiley: Hoboken, NJ, USA, 2006. [Google Scholar]
  120. Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Stat. 1951, 22, 79–86. [Google Scholar] [CrossRef]
  121. Kullback, S. Information Theory and Statistics; Wiley: Hoboken, NJ, USA, 1959. [Google Scholar]
  122. Shore, J.E.; Johnson, R.W. Axiomatic derivation of the principle of maximum entropy and the principle of minimum cross-entropy. IEEE Trans. Inf. Theory 1980, 26, 26–37. [Google Scholar] [CrossRef]
  123. Tribus, M.; Fitts, G. The widget problem revisited. IEEE Trans. Syst. Sci. Cybern. 1968, 4, 241–248. [Google Scholar] [CrossRef]
  124. Yukalov, V.I. Phase transitions and heterophase fluctuations. Phys. Rep. 1991, 208, 395–489. [Google Scholar] [CrossRef]
  125. Batty, M. Space, scale and scaling in entropy maximizing. Geogr. Anal. 2010, 42, 395–421. [Google Scholar] [CrossRef]
  126. Langen, T.; Erne, S.; Geiger, R.; Rauer, B.; Schweigler, T.; Kuhnert, M.; Rohringer, W.; Mazets, I.E.; Gasenzer, T.; Schmiedmayer, J.; et al. Experimental observation of a generalized Gibbs ensemble. Science 2015, 348, 207–211. [Google Scholar] [CrossRef] [PubMed]
  127. Yukalov, V.I. Representative ensembles in statistical mechanics. Int. J. Mod. Phys. B 2007, 21, 69–86. [Google Scholar]
  128. Yukalov, V.I. Theory of cold atoms: Basics of quantum statistics. Laser Phys. 2013, 23, 062001. [Google Scholar] [CrossRef]
  129. Gul, F.; Natenzon, P.; Pesendorfer, W. Random choice as behavioral optimization. Econometrica 2014, 82, 1873–1912. [Google Scholar]
  130. Murphy, R.O.; ten Brincke, R.H.W. Hierarchical maximum likelihood parameter estimation for cumulative prospect theory: Improving the reliability of individual risk parameter estimates. Manag. Sci. 2017, in press. [Google Scholar] [CrossRef]
  131. Chandrasekhar, S. Beauty and the quest for beauty in science. Phys. Today 1979, 7, 25–30. [Google Scholar] [CrossRef]
Table 1. Results for the lotteries with close utility factors f(π1) and f(π2). p(π1) and p(π2) are the fractions of decision makers choosing the corresponding prospects πi, with the related attraction factors q(π1) and q(π2). The utility difference Δ defined in (21) is given in percent.
Game | f(π1) | f(π2) | p(π1) | p(π2) | q(π1) | q(π2) | Δ (%)
1 | 0.501 | 0.499 | 0.18 | 0.82 | −0.321 | 0.321 | 0.4
2 | 0.503 | 0.497 | 0.83 | 0.17 | 0.327 | −0.327 | 1.2
3 | 0.516 | 0.484 | 0.20 | 0.80 | −0.316 | 0.316 | 6.4
4 | 0.516 | 0.484 | 0.65 | 0.35 | 0.134 | −0.134 | 6.4
5 | 0.5 | 0.5 | 0.14 | 0.86 | −0.36 | 0.36 | 0
6 | 0.5 | 0.5 | 0.73 | 0.27 | 0.23 | −0.23 | 0
7 | 0.5 | 0.5 | 0.18 | 0.82 | −0.32 | 0.32 | 0
8 | 0.484 | 0.516 | 0.92 | 0.08 | 0.436 | −0.436 | 6.4
9 | 0.484 | 0.516 | 0.42 | 0.58 | −0.064 | 0.064 | 6.4
10 | 0.5 | 0.5 | 0.08 | 0.92 | −0.42 | 0.42 | 0
11 | 0.5 | 0.5 | 0.70 | 0.30 | 0.20 | −0.20 | 0
12 | 0.5 | 0.5 | 0.69 | 0.31 | 0.19 | −0.19 | 0
13 | 0.5 | 0.5 | 0.70 | 0.30 | 0.20 | −0.20 | 0
14 | 0.5 | 0.5 | 0.17 | 0.83 | −0.33 | 0.33 | 0
Table 2. Experimental data for lotteries with gains. Payoffs Ai, Bi; payoff weights p(Ai), p(Bi); fractions of subjects choosing the corresponding lottery, pA and pB, in the first session, with the results for the second session in brackets.
A1 | p(A1) | A2 | p(A2) | B1 | p(B1) | B2 | p(B2) | pA | pB
24 | 0.34 | 59 | 0.66 | 47 | 0.42 | 64 | 0.58 | 0.14 (0.11) | 0.86 (0.89)
79 | 0.88 | 82 | 0.12 | 57 | 0.20 | 94 | 0.80 | 0.47 (0.44) | 0.53 (0.56)
62 | 0.74 | 0 | 0.26 | 23 | 0.44 | 31 | 0.56 | 0.61 (0.56) | 0.39 (0.44)
56 | 0.05 | 72 | 0.95 | 68 | 0.95 | 95 | 0.05 | 0.51 (0.46) | 0.49 (0.54)
84 | 0.25 | 43 | 0.75 | 7 | 0.43 | 97 | 0.57 | 0.66 (0.69) | 0.34 (0.31)
7 | 0.28 | 74 | 0.72 | 55 | 0.71 | 63 | 0.29 | 0.32 (0.38) | 0.68 (0.62)
56 | 0.09 | 19 | 0.91 | 13 | 0.76 | 90 | 0.24 | 0.20 (0.25) | 0.80 (0.75)
41 | 0.63 | 18 | 0.37 | 56 | 0.98 | 8 | 0.02 | 0.11 (0.10) | 0.89 (0.90)
72 | 0.88 | 29 | 0.12 | 67 | 0.39 | 63 | 0.61 | 0.48 (0.48) | 0.52 (0.52)
37 | 0.61 | 50 | 0.39 | 6 | 0.60 | 45 | 0.40 | 0.96 (0.95) | 0.04 (0.05)
54 | 0.08 | 31 | 0.92 | 44 | 0.15 | 29 | 0.85 | 0.79 (0.81) | 0.21 (0.19)
63 | 0.92 | 5 | 0.08 | 43 | 0.63 | 53 | 0.37 | 0.60 (0.71) | 0.40 (0.29)
32 | 0.78 | 99 | 0.22 | 39 | 0.32 | 56 | 0.68 | 0.60 (0.63) | 0.40 (0.37)
66 | 0.16 | 23 | 0.84 | 15 | 0.79 | 29 | 0.21 | 0.88 (0.92) | 0.12 (0.08)
52 | 0.12 | 73 | 0.88 | 92 | 0.98 | 19 | 0.02 | 0.11 (0.18) | 0.89 (0.82)
88 | 0.29 | 78 | 0.71 | 53 | 0.29 | 91 | 0.71 | 0.53 (0.44) | 0.47 (0.56)
39 | 0.31 | 51 | 0.69 | 16 | 0.84 | 91 | 0.16 | 0.77 (0.73) | 0.23 (0.27)
70 | 0.17 | 65 | 0.83 | 100 | 0.35 | 50 | 0.65 | 0.28 (0.28) | 0.72 (0.72)
80 | 0.91 | 19 | 0.09 | 37 | 0.64 | 65 | 0.36 | 0.87 (0.85) | 0.13 (0.15)
83 | 0.09 | 67 | 0.91 | 77 | 0.48 | 6 | 0.52 | 0.93 (0.93) | 0.07 (0.07)
14 | 0.44 | 72 | 0.56 | 9 | 0.21 | 31 | 0.79 | 0.85 (0.87) | 0.15 (0.13)
41 | 0.68 | 65 | 0.32 | 100 | 0.85 | 2 | 0.15 | 0.20 (0.20) | 0.80 (0.80)
40 | 0.38 | 55 | 0.62 | 26 | 0.14 | 99 | 0.86 | 0.11 (0.11) | 0.89 (0.89)
1 | 0.62 | 83 | 0.38 | 37 | 0.41 | 24 | 0.59 | 0.35 (0.30) | 0.65 (0.70)
15 | 0.49 | 50 | 0.51 | 64 | 0.94 | 14 | 0.06 | 0.13 (0.07) | 0.87 (0.93)
40 | 0.10 | 32 | 0.90 | 77 | 0.10 | 2 | 0.90 | 0.85 (0.87) | 0.15 (0.13)
40 | 0.20 | 32 | 0.80 | 77 | 0.20 | 2 | 0.80 | 0.86 (0.82) | 0.14 (0.18)
40 | 0.30 | 32 | 0.70 | 77 | 0.30 | 2 | 0.70 | 0.84 (0.80) | 0.16 (0.20)
40 | 0.40 | 32 | 0.60 | 77 | 0.40 | 2 | 0.60 | 0.75 (0.74) | 0.25 (0.26)
40 | 0.50 | 32 | 0.50 | 77 | 0.50 | 2 | 0.50 | 0.64 (0.65) | 0.36 (0.35)
40 | 0.60 | 32 | 0.40 | 77 | 0.60 | 2 | 0.40 | 0.60 (0.53) | 0.40 (0.47)
40 | 0.70 | 32 | 0.30 | 77 | 0.70 | 2 | 0.30 | 0.42 (0.35) | 0.58 (0.65)
40 | 0.80 | 32 | 0.20 | 77 | 0.80 | 2 | 0.20 | 0.27 (0.21) | 0.73 (0.79)
40 | 0.90 | 32 | 0.10 | 77 | 0.90 | 2 | 0.10 | 0.19 (0.10) | 0.81 (0.90)
40 | 1.00 | 32 | 0.00 | 77 | 1.00 | 2 | 0.00 | 0.07 (0.04) | 0.93 (0.96)
Table 3. Lotteries with gains. Expected utilities U(A), U(B); utility factors f(A), f(B); attraction factors q(A) and q(B) (results for the second session in brackets). The utility difference Δ in percent.
U(A) | U(B) | f(A) | f(B) | q(A) | q(B) | Δ (%)
47.10 | 56.86 | 0.453 | 0.547 | −0.313 (−0.343) | 0.313 (0.343) | 19
79.36 | 86.60 | 0.478 | 0.522 | −0.008 (−0.038) | 0.008 (0.038) | 8.8
45.88 | 27.48 | 0.625 | 0.375 | −0.015 (−0.065) | 0.015 (0.065) | 50
71.20 | 69.35 | 0.507 | 0.493 | 0.003 (−0.047) | −0.003 (0.047) | 2.8
53.25 | 58.30 | 0.477 | 0.523 | 0.183 (0.213) | −0.183 (−0.213) | 9.2
55.24 | 57.32 | 0.491 | 0.509 | −0.171 (−0.111) | 0.171 (0.111) | 3.6
22.33 | 31.48 | 0.415 | 0.585 | −0.215 (−0.165) | 0.215 (0.165) | 34
32.49 | 55.04 | 0.371 | 0.629 | −0.261 (−0.271) | 0.261 (0.271) | 52
66.84 | 64.56 | 0.509 | 0.491 | −0.029 (−0.029) | 0.029 (0.029) | 3.6
42.07 | 21.60 | 0.661 | 0.339 | 0.299 (0.289) | −0.299 (−0.289) | 64
32.84 | 31.25 | 0.512 | 0.488 | 0.278 (0.298) | −0.278 (−0.298) | 4.8
58.36 | 46.70 | 0.555 | 0.445 | 0.045 (0.155) | −0.045 (−0.155) | 22
46.74 | 50.56 | 0.480 | 0.520 | 0.120 (0.150) | −0.120 (−0.150) | 8
29.88 | 17.94 | 0.625 | 0.375 | 0.255 (0.295) | −0.255 (−0.295) | 50
70.48 | 90.54 | 0.438 | 0.562 | −0.328 (−0.258) | 0.328 (0.258) | 25
80.90 | 79.98 | 0.503 | 0.497 | 0.027 (−0.063) | −0.027 (0.063) | 1.2
47.28 | 28.00 | 0.628 | 0.372 | 0.142 (0.102) | −0.142 (−0.102) | 51
65.85 | 67.50 | 0.494 | 0.506 | −0.214 (−0.214) | 0.214 (0.214) | 2.4
74.51 | 47.08 | 0.613 | 0.387 | 0.257 (0.237) | −0.257 (−0.237) | 45
68.44 | 40.08 | 0.631 | 0.369 | 0.299 (0.299) | −0.299 (−0.299) | 52
46.48 | 26.38 | 0.638 | 0.362 | 0.212 (0.232) | −0.212 (−0.232) | 55
48.68 | 85.30 | 0.363 | 0.637 | −0.163 (−0.163) | 0.163 (0.163) | 55
49.30 | 86.20 | 0.364 | 0.636 | −0.254 (−0.254) | 0.254 (0.254) | 54
32.16 | 29.33 | 0.523 | 0.477 | −0.173 (−0.223) | 0.173 (0.223) | 9.2
32.85 | 61.00 | 0.350 | 0.650 | −0.220 (−0.280) | 0.220 (0.280) | 60
32.80 | 9.50 | 0.775 | 0.225 | 0.075 (0.095) | −0.075 (−0.095) | 110
33.60 | 17.00 | 0.664 | 0.336 | 0.196 (0.156) | −0.196 (−0.156) | 66
34.40 | 24.50 | 0.584 | 0.416 | 0.256 (0.216) | −0.256 (−0.216) | 34
35.20 | 32.00 | 0.524 | 0.476 | 0.226 (0.216) | −0.226 (−0.216) | 9.6
36.00 | 39.50 | 0.477 | 0.523 | 0.163 (0.173) | −0.163 (−0.173) | 9.2
36.80 | 47.00 | 0.439 | 0.561 | 0.161 (0.091) | −0.161 (−0.091) | 24
37.60 | 54.50 | 0.408 | 0.592 | 0.012 (−0.058) | −0.012 (0.058) | 37
38.40 | 62.00 | 0.382 | 0.618 | −0.112 (−0.172) | 0.112 (0.172) | 47
39.20 | 69.50 | 0.361 | 0.639 | −0.171 (−0.261) | 0.171 (0.261) | 56
40.00 | 77.00 | 0.342 | 0.658 | −0.272 (−0.302) | 0.272 (0.302) | 63
Table 4. Experimental data for lotteries with losses. Payoffs Ai, Bi; payoff weights p(Ai), p(Bi); fractions of subjects choosing the corresponding lottery, pA and pB (results for the second session in brackets).
A1 | p(A1) | A2 | p(A2) | B1 | p(B1) | B2 | p(B2) | pA | pB
−15 | 0.16 | −67 | 0.84 | −56 | 0.72 | −83 | 0.28 | 0.77 (0.75) | 0.23 (0.25)
−19 | 0.13 | −56 | 0.87 | −32 | 0.70 | −37 | 0.30 | 0.15 (0.17) | 0.85 (0.83)
−67 | 0.29 | −28 | 0.71 | −46 | 0.05 | −44 | 0.95 | 0.72 (0.71) | 0.28 (0.29)
−40 | 0.82 | −90 | 0.18 | −46 | 0.17 | −64 | 0.83 | 0.56 (0.58) | 0.44 (0.42)
−25 | 0.29 | −86 | 0.71 | −38 | 0.76 | −99 | 0.24 | 0.44 (0.41) | 0.56 (0.59)
−46 | 0.60 | −21 | 0.40 | −99 | 0.42 | −37 | 0.58 | 0.96 (0.92) | 0.04 (0.08)
−15 | 0.48 | −91 | 0.52 | −48 | 0.28 | −74 | 0.72 | 0.70 (0.68) | 0.30 (0.32)
−93 | 0.53 | −26 | 0.47 | −52 | 0.80 | −93 | 0.20 | 0.46 (0.50) | 0.54 (0.50)
−1 | 0.49 | −54 | 0.51 | −33 | 0.77 | −30 | 0.23 | 0.73 (0.72) | 0.27 (0.28)
−24 | 0.99 | −13 | 0.01 | −15 | 0.44 | −62 | 0.56 | 0.79 (0.84) | 0.21 (0.16)
−67 | 0.79 | −37 | 0.21 | 0 | 0.46 | −97 | 0.54 | 0.34 (0.37) | 0.66 (0.63)
−58 | 0.56 | −80 | 0.44 | −58 | 0.86 | −97 | 0.14 | 0.43 (0.43) | 0.57 (0.57)
−96 | 0.63 | −38 | 0.37 | −12 | 0.17 | −69 | 0.83 | 0.20 (0.11) | 0.80 (0.89)
−55 | 0.59 | −77 | 0.41 | −30 | 0.47 | −61 | 0.53 | 0.11 (0.08) | 0.89 (0.92)
−29 | 0.13 | −76 | 0.87 | −100 | 0.55 | −28 | 0.45 | 0.66 (0.71) | 0.34 (0.29)
−57 | 0.84 | −90 | 0.16 | −63 | 0.25 | −30 | 0.75 | 0.13 (0.07) | 0.87 (0.93)
−29 | 0.86 | −30 | 0.14 | −17 | 0.26 | −43 | 0.74 | 0.79 (0.74) | 0.21 (0.26)
−8 | 0.66 | −95 | 0.34 | −42 | 0.93 | −30 | 0.07 | 0.54 (0.50) | 0.46 (0.50)
−35 | 0.39 | −72 | 0.61 | −57 | 0.76 | −28 | 0.24 | 0.18 (0.23) | 0.82 (0.77)
−26 | 0.51 | −76 | 0.49 | −48 | 0.77 | −34 | 0.23 | 0.35 (0.30) | 0.65 (0.70)
−73 | 0.73 | −54 | 0.27 | −42 | 0.17 | −70 | 0.83 | 0.41 (0.38) | 0.59 (0.62)
−66 | 0.49 | −92 | 0.51 | −97 | 0.78 | −34 | 0.22 | 0.55 (0.58) | 0.45 (0.42)
−9 | 0.56 | −56 | 0.44 | −15 | 0.64 | −80 | 0.36 | 0.79 (0.86) | 0.21 (0.14)
−61 | 0.96 | −56 | 0.04 | −7 | 0.34 | −63 | 0.66 | 0.11 (0.10) | 0.89 (0.90)
−4 | 0.56 | −80 | 0.44 | −46 | 0.04 | −58 | 0.96 | 0.76 (0.74) | 0.24 (0.26)
Table 5. Lotteries with losses. Expected costs C(A), C(B); utility factors f(A), f(B); attraction factors q(A) and q(B) (results for the second session in brackets). The utility difference Δ in percent.
C(A) | C(B) | f(A) | f(B) | q(A) | q(B) | Δ (%)
58.68 | 63.56 | 0.520 | 0.480 | 0.250 (0.230) | −0.250 (−0.230) | 8
51.19 | 33.50 | 0.396 | 0.604 | −0.246 (−0.226) | 0.246 (0.226) | 42
39.31 | 44.10 | 0.529 | 0.471 | 0.191 (0.181) | −0.191 (−0.181) | 12
49.00 | 60.94 | 0.554 | 0.446 | 0.006 (0.026) | −0.006 (−0.026) | 22
68.31 | 52.64 | 0.435 | 0.565 | 0.005 (−0.025) | −0.005 (0.025) | 26
36.00 | 63.04 | 0.637 | 0.363 | 0.323 (0.283) | −0.323 (−0.283) | 55
54.52 | 66.72 | 0.550 | 0.450 | 0.150 (0.130) | −0.150 (−0.130) | 20
61.51 | 60.20 | 0.495 | 0.505 | −0.035 (0.005) | 0.035 (−0.005) | 2
28.03 | 32.31 | 0.535 | 0.465 | 0.195 (0.185) | −0.195 (−0.185) | 14
23.89 | 41.32 | 0.634 | 0.366 | 0.156 (0.206) | −0.156 (−0.206) | 54
60.70 | 52.38 | 0.463 | 0.537 | −0.123 (−0.093) | 0.123 (0.093) | 15
67.68 | 63.46 | 0.484 | 0.516 | −0.054 (−0.054) | 0.054 (0.054) | 6.4
74.54 | 59.31 | 0.443 | 0.557 | −0.250 (−0.243) | 0.243 (0.333) | 23
64.02 | 46.43 | 0.420 | 0.580 | −0.310 (−0.340) | 0.310 (0.340) | 32
69.89 | 67.60 | 0.492 | 0.508 | 0.168 (0.218) | −0.168 (−0.218) | 3.2
62.28 | 38.25 | 0.380 | 0.620 | −0.250 (−0.310) | 0.250 (0.310) | 48
29.14 | 36.24 | 0.554 | 0.446 | 0.236 (0.186) | −0.236 (−0.186) | 22
37.58 | 41.16 | 0.523 | 0.477 | 0.017 (−0.023) | −0.017 (0.023) | 9.2
57.57 | 50.04 | 0.465 | 0.535 | −0.285 (−0.235) | 0.285 (0.235) | 14
50.50 | 44.78 | 0.470 | 0.530 | −0.120 (−0.170) | 0.120 (0.170) | 12
67.87 | 65.24 | 0.490 | 0.510 | −0.080 (−0.110) | 0.080 (0.110) | 4
79.26 | 83.14 | 0.512 | 0.488 | 0.038 (0.068) | −0.038 (−0.068) | 4.8
29.68 | 38.40 | 0.564 | 0.436 | 0.226 (0.296) | −0.226 (−0.296) | 26
60.80 | 43.96 | 0.420 | 0.580 | −0.310 (−0.320) | 0.310 (0.320) | 32
37.44 | 57.52 | 0.606 | 0.394 | 0.154 (0.134) | −0.154 (−0.134) | 42
Table 6. Experimental data for mixed lotteries, including gains and losses. Payoffs Ai, Bi; payoff weights p(Ai), p(Bi); fractions of subjects choosing the corresponding lottery, pA and pB (results for the second session in brackets).
A1 | p(A1) | A2 | p(A2) | B1 | p(B1) | B2 | p(B2) | pA | pB
−91 | 0.43 | 66 | 0.57 | −83 | 0.27 | 24 | 0.73 | 0.31 (0.34) | 0.69 (0.66)
−82 | 0.06 | 54 | 0.94 | 38 | 0.91 | −73 | 0.09 | 0.85 (0.85) | 0.15 (0.15)
−70 | 0.79 | 98 | 0.21 | −85 | 0.65 | 93 | 0.35 | 0.37 (0.35) | 0.63 (0.65)
−8 | 0.37 | 52 | 0.63 | 23 | 0.87 | −39 | 0.13 | 0.87 (0.82) | 0.13 (0.18)
96 | 0.61 | −67 | 0.39 | 71 | 0.50 | −26 | 0.50 | 0.49 (0.52) | 0.51 (0.48)
−47 | 0.43 | 63 | 0.57 | −69 | 0.02 | 14 | 0.98 | 0.38 (0.39) | 0.62 (0.61)
−70 | 0.39 | 19 | 0.61 | 8 | 0.30 | −37 | 0.70 | 0.64 (0.61) | 0.36 (0.39)
−100 | 0.59 | 81 | 0.41 | −73 | 0.47 | 15 | 0.53 | 0.36 (0.46) | 0.64 (0.54)
−73 | 0.92 | 96 | 0.08 | 16 | 0.11 | −48 | 0.89 | 0.29 (0.35) | 0.71 (0.65)
−31 | 0.89 | 27 | 0.11 | 26 | 0.36 | −48 | 0.64 | 0.31 (0.37) | 0.69 (0.63)
−39 | 0.86 | 83 | 0.14 | 8 | 0.80 | −88 | 0.20 | 0.44 (0.44) | 0.56 (0.56)
77 | 0.74 | −23 | 0.26 | 75 | 0.67 | −7 | 0.33 | 0.34 (0.40) | 0.66 (0.60)
−33 | 0.91 | 28 | 0.09 | 9 | 0.27 | −67 | 0.73 | 0.72 (0.72) | 0.28 (0.28)
75 | 0.93 | −90 | 0.07 | 96 | 0.87 | −89 | 0.13 | 0.48 (0.37) | 0.52 (0.63)
67 | 0.99 | −3 | 0.01 | 74 | 0.68 | −2 | 0.32 | 0.87 (0.85) | 0.13 (0.15)
58 | 0.48 | −5 | 0.52 | −40 | 0.40 | 96 | 0.60 | 0.42 (0.48) | 0.58 (0.52)
−55 | 0.07 | 95 | 0.93 | −13 | 0.48 | 99 | 0.52 | 0.75 (0.77) | 0.25 (0.23)
−51 | 0.97 | 30 | 0.03 | −89 | 0.68 | 46 | 0.32 | 0.23 (0.30) | 0.77 (0.70)
−26 | 0.86 | 82 | 0.14 | −39 | 0.60 | 31 | 0.40 | 0.49 (0.50) | 0.51 (0.50)
−90 | 0.88 | 88 | 0.12 | −86 | 0.80 | 14 | 0.20 | 0.58 (0.63) | 0.42 (0.37)
−78 | 0.87 | 45 | 0.13 | −69 | 0.88 | 83 | 0.12 | 0.13 (0.08) | 0.87 (0.92)
17 | 0.96 | −48 | 0.04 | −60 | 0.49 | 84 | 0.51 | 0.61 (0.67) | 0.39 (0.33)
−49 | 0.38 | 2 | 0.62 | 19 | 0.22 | −18 | 0.78 | 0.27 (0.30) | 0.73 (0.70)
−59 | 0.28 | 96 | 0.72 | −4 | 0.04 | 63 | 0.96 | 0.20 (0.17) | 0.80 (0.83)
98 | 0.50 | −24 | 0.50 | −76 | 0.14 | 46 | 0.86 | 0.67 (0.63) | 0.33 (0.37)
−20 | 0.50 | 60 | 0.50 | 0 | 0.50 | 0 | 0.50 | 0.73 (0.73) | 0.27 (0.27)
−30 | 0.50 | 60 | 0.50 | 0 | 0.50 | 0 | 0.50 | 0.71 (0.64) | 0.29 (0.36)
−40 | 0.50 | 60 | 0.50 | 0 | 0.50 | 0 | 0.50 | 0.70 (0.55) | 0.30 (0.45)
−50 | 0.50 | 60 | 0.50 | 0 | 0.50 | 0 | 0.50 | 0.61 (0.62) | 0.39 (0.38)
−60 | 0.50 | 60 | 0.50 | 0 | 0.50 | 0 | 0.50 | 0.48 (0.44) | 0.52 (0.56)
−70 | 0.50 | 60 | 0.50 | 0 | 0.50 | 0 | 0.50 | 0.37 (0.35) | 0.63 (0.65)
Table 7. Mixed lotteries containing gains and losses. Expected utilities U(A), U(B); utility factors f(A), f(B); attraction factors q(A) and q(B) (results for the second session in brackets). The utility difference Δ in percent.
U(A) | U(B) | f(A) | f(B) | q(A) | q(B) | Δ (%)
−3.22 | −4.89 | 0.603 | 0.397 | −0.293 (−0.263) | 0.293 (0.263) | 41
45.84 | 28.01 | 0.621 | 0.379 | 0.229 (0.229) | −0.229 (−0.229) | 48
−34.72 | −22.70 | 0.395 | 0.605 | −0.025 (−0.045) | 0.025 (0.045) | 42
29.80 | 14.94 | 0.666 | 0.334 | 0.204 (0.154) | −0.204 (−0.154) | 66
32.43 | 22.50 | 0.590 | 0.410 | −0.100 (−0.070) | 0.100 (0.070) | 36
15.70 | 12.34 | 0.560 | 0.440 | −0.180 (−0.170) | 0.180 (0.170) | 24
−15.71 | −23.50 | 0.559 | 0.401 | 0.041 (0.011) | −0.041 (−0.011) | 32
−25.79 | −26.36 | 0.505 | 0.495 | −0.145 (−0.045) | 0.145 (0.045) | 2
−59.48 | −40.96 | 0.408 | 0.592 | −0.118 (−0.058) | 0.118 (0.058) | 37
−24.62 | −21.36 | 0.465 | 0.535 | −0.155 (−0.095) | 0.155 (0.095) | 14
−21.92 | −11.20 | 0.338 | 0.662 | 0.102 (0.102) | −0.102 (−0.102) | 65
51.00 | 47.94 | 0.515 | 0.485 | −0.175 (−0.115) | 0.175 (0.115) | 6
−27.51 | −46.48 | 0.628 | 0.372 | 0.092 (0.092) | −0.092 (−0.092) | 51
63.45 | 71.95 | 0.469 | 0.531 | 0.011 (0.099) | −0.011 (−0.099) | 12
66.30 | 49.68 | 0.572 | 0.428 | 0.298 (0.278) | −0.298 (−0.278) | 29
25.24 | 41.60 | 0.378 | 0.622 | 0.042 (0.102) | −0.042 (−0.102) | 49
84.50 | 45.24 | 0.651 | 0.349 | 0.099 (0.119) | −0.099 (−0.119) | 60
−48.57 | −45.80 | 0.485 | 0.515 | −0.255 (−0.185) | 0.255 (0.185) | 6
−10.88 | −11.00 | 0.503 | 0.497 | −0.013 (−0.003) | 0.013 (0.003) | 1.2
−68.64 | −66.00 | 0.490 | 0.510 | 0.090 (0.140) | −0.090 (−0.140) | 4
−62.01 | −50.76 | 0.450 | 0.550 | −0.320 (−0.370) | 0.320 (0.370) | 20
14.40 | 13.44 | 0.517 | 0.483 | 0.093 (0.153) | −0.093 (−0.153) | 6.8
−17.38 | −9.86 | 0.362 | 0.638 | −0.092 (−0.062) | 0.092 (0.062) | 55
52.60 | 60.32 | 0.466 | 0.534 | −0.266 (−0.296) | 0.266 (0.296) | 14
37.00 | 28.92 | 0.561 | 0.439 | 0.109 (0.069) | −0.109 (−0.069) | 24
20 | 0 | 1 | 0 | −0.270 (−0.270) | 0.270 (0.270) | 200
15 | 0 | 1 | 0 | −0.290 (−0.360) | 0.290 (0.360) | 200
10 | 0 | 1 | 0 | −0.300 (−0.450) | 0.300 (0.450) | 200
5 | 0 | 1 | 0 | −0.390 (−0.380) | 0.390 (0.380) | 200
0 | 0 | 0.5 | 0.5 | −0.020 (−0.060) | 0.020 (0.060) | 0
−5 | 0 | 0 | 1 | −0.370 (−0.350) | 0.370 (0.350) | 200
