
Impressionable Rational Choice: Revealed-Preference Theory with Framing Effects

1 Department of Economics, Ruppin Academic Center, Emek Hefer 4025000, Israel
2 Department of Economics, Yildiz Technical University, Istanbul 34349, Turkey
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(23), 4496; https://doi.org/10.3390/math10234496
Submission received: 2 November 2022 / Revised: 24 November 2022 / Accepted: 25 November 2022 / Published: 28 November 2022
(This article belongs to the Special Issue Data-Driven Decision Making: Models, Methods and Applications)

Abstract:
Revealed preference is one of the most influential ideas in economics. It is, however, not clear how it can be generally applied in cases where agents’ choices depend on arbitrary changes in the decision environment. In this paper, we propose a generalization of the classic rational choice theory that allows for such framing effects. Frames are modeled as different presentations (e.g., visual or conceptual) of the alternatives that may affect choice. Our main premise is that framing effects are neutral (i.e., independent of labeling the alternatives). An agent exhibiting these neutral framing effects who is otherwise rational is called impressionable rational. We show that our theory encompasses many familiar behavioral models such as status-quo bias, satisficing, present bias, framing effects resulting from indecisiveness, certain forms of limited attention, categorization bias, and the salience theory of choice, as well as hybrid models. Moreover, in all these models, sufficiently rich choice data allow our theory to identify the “correct” underlying preferences without invoking each specific cognitive process. Additionally, we introduce a falsifiable axiom that completely characterizes the behavior of agents who are impressionable rational.
MSC:
91B15; 91B08; 91B06

1. Introduction

The accumulating evidence on choice that varies with arbitrary changes in the decision environment (e.g., the way alternatives are displayed) has motivated a large body of theoretical work to study such framing effects. The standard approach in the revealed preference literature is to generalize the classic rational agent framework by incorporating a particular cognitive bias into the analysis. One of the pioneering examples is [1], who wield this strategy to model an agent with status-quo bias who is otherwise rational. Ref. [2], among others, consider an agent who is sensitive to the timing of choice and exhibits present bias, and Rubinstein and Salant offer five models, each of which allows for a different (single) cognitive bias with different testable conditions and different welfare implications (see [3,4]). For example, they allow choice to be affected by the order in which alternatives are presented and model a satisficer [5] who exhibits a primacy bias (a few of the other examples are [6,7,8,9,10]). However, in large datasets, identifying each agent’s cognitive bias may be tedious. Moreover, a single agent may exhibit multiple cognitive biases, and testing them simultaneously may be difficult, if not impossible.
A feature that is common to all the models mentioned above and many similar others is that they describe agents who are sensitive to the way the alternatives are presented, but are otherwise standard rational agents. These types of decision-makers will henceforth be referred to as impressionably rational (IR). The aim of this paper is to develop a testable theory of IR agents that is sufficiently general to unify different models of cognitive biases but still enables us to identify individuals’ underlying preferences from their choices.
To achieve these goals, we incorporate a novel component into the standard revealed preference method (see Appendix B for a general discussion of the revealed preference method when applied to behavioral models, its advantages over alternative methods, and its limitations), namely impression relations: asymmetric binary relations that impact choice by promoting some options as more attractive than others, and that serve as the only possible source of bounded rationality. In other words, whenever the impression relation is silent, the agent chooses rationally (i.e., chooses the best option according to her preferences). The main premise is that frames, which determine how the available alternatives are presented, are mapped into those impression relations in an unbiased way. For example, if presenting alternative x before alternative y in menu {x, y} promotes x, then presenting y before x must promote y. Another example could be that if receiving expert advice to choose x over y promotes x over y, then receiving the same expert advice to choose y over x promotes y over x. In other words, we assume that when the way any two alternatives are presented is swapped, the impression relation covaries (we call this property neutrality).
A promising feature of our IR model is that it encompasses many familiar behavioral models, including not only the aforementioned status-quo bias, present bias, and satisficing, but also certain forms of limited attention, categorization bias, the salience theory of choice, and other hybrid models. In fact, it can accommodate all these cognitive biases simultaneously.
We provide the revealed preference of IR agents; to gain intuition on how it operates, consider two alternatives, x and y, to be chosen, and two frames, f and g, where x is the status quo under frame f and y is the status quo under frame g. When an IR agent chooses x over y under both frames f and g, then, by virtue of our neutrality assumption, one of her choices must reveal her underlying preferences. More generally, whenever an agent chooses x over y across two frames that are reflective with respect to x and y (i.e., swapping the way x and y are presented in one frame produces the other frame), she reveals her preferences.
Strikingly, when applied to all of the above-mentioned behavioral models, our revealed preference identifies the “correct” underlying preferences without relying on the specific cognitive processes, provided that the data are rich enough. In other words, our theory allows revealing an individual’s underlying preferences from choice data generated by many different frame-sensitive cognitive biases.
In addition to the revealed preference, we provide each frame’s revealed impression relation. These relations provide information regarding the agent’s impression mapping, which can guide firms on how to present their products (e.g., first or last in the list). This result has interesting implications for industrial and commercial organizations, which, for example, can utilize our theory to promote products with high profitability or to present them prominently relative to the competitors’ products.
Finally, we translate the above-mentioned revealed preference into a single choice axiom that can be tested non-parametrically. In other words, we introduce a falsifiable axiom that completely characterizes the behavior of IR agents. This result is important not only from a positive (i.e., prediction-oriented) perspective, but also from a normative perspective, because it allows us to avoid mistaken inferences on the agent’s welfare. As explained below, it is in this respect that our theory substantially differs from the model-free approaches to choice-based welfare analysis.
Studies adopting the model-free approach to welfare (e.g., [11,12,13,14]) do not rely on any testable theory; hence, they must occasionally result in a welfare relation that contains mistaken inferences. Consider, for instance, Bernheim and Rangel’s [12] Pareto welfare criterion (i.e., alternative x is welfare superior to alternative y if and only if y is never chosen when x is available), which is the most permissive approach in this branch of the literature, in the following scenario. An agent exhibiting a status-quo bias prefers a pension plan y to a pension plan x, and never chooses x over y unless x is the default plan. However, in the available data, we observe all the choices between x and y when x is the default plan. Then, the Pareto welfare criterion will wrongly infer that x is welfare superior to y. By contrast, our theory avoids this mistaken inference, because the agent does not choose x over y across two reflective frames (i.e., under both defaults).
In a recent study, Goldin and Reck ([15], henceforth GR) offer a general method to reveal the welfare preferences of agents who are sensitive to framing effects using information of other (IR) agents’ choices, but without providing a testable theory for IR agents. Thus, similar to the model-free approach to welfare, their method may result in wrong welfare inferences. We demonstrate this point and show that our testable axiom can also be useful in this respect.
The remainder of the paper is organized as follows. Section 2 presents the general setup and the main definition; Section 3 provides the revealed preference, the revealed impression, and the choice-theoretic foundation of our theory; Section 4 examines the applications of our model to certain familiar special cases and to GR’s theory. Proofs that are not part of the text are in Appendix D.

2. Main Definition

Let X be a finite set representing the grand set of alternatives, χ be the set of all nonempty subsets of X, and F be a finite set of frames. The elements of χ are regarded as menus, collections of alternatives that are available for choice. Frames are the properties of the choice environment that determine how the available alternatives are presented. For example, f ∈ F may be a single option designated as the status quo, a set of options that are highlighted by some external mechanism, or a list (i.e., a sequence of distinct elements) on X representing the order of alternatives in the menu. For any f ∈ F and any x, y ∈ X, let f_{xy} ∈ F be the frame obtained from f when swapping the way alternatives x and y are presented (note that when f is a set not containing x and y, f = f_{xy}. More generally, as in [4], we do not impose a structure on F. Thus, mathematically speaking, f ∈ F need not be a representation of the alternatives; clearly, in this case we also have f = f_{xy} for all x, y ∈ X). It is assumed that f_{xy} ∈ F for all f ∈ F.
The choice domain in our model is Σ ⊆ χ × F such that for any S ∈ χ, there exists f ∈ F with (S, f) ∈ Σ (this is just the usual richness property (e.g., [16,17]), which assumes that at least one choice from each menu can be observed). A (general) choice function is any function c : Σ → X such that c(S, f) ∈ S denotes the single option chosen from menu S under frame f.
Let A_X be the set of asymmetric binary relations on X; the novel component of our model is an impression mapping ⊳ : F → A_X (where ⊳(f) is simply denoted by ⊳_f), which translates frames into impression relations that render some of the available options more salient or more attractive than others. For example, when f is an alternative designated as the status quo, a natural ⊳_f is {(f, x) | x ∈ X ∖ {f}}; that is, the status quo is promoted relative to the other options. If f = (x_1, x_2, …, x_n) is a list denoting the order in which alternatives are presented, a possible ⊳_f may be {(x_i, x_j) | i < j} due to a primacy effect, but ⊳_f can also take the form {(x_i, x_j) | i > j} due to a recency effect (this example demonstrates that a modeler often cannot deduce the direction of the framing effect from the structure of the frame, which reveals the advantage of our model over one that captures frames by the impression relations they induce (i.e., one that assumes that each ⊳_f, f ∈ F, is directly observable)).
An impression mapping ⊳(.) is neutral if
x ⊳_f y ⟺ y ⊳_{f_{xy}} x, for all f ∈ F.
Thus, ⊳(.) is neutral if it covaries with the permutation of any pair of alternatives in f ∈ F. In the introduction, we demonstrated the appeal of the neutrality property using the status-quo example; here, we provide two other, more general examples. First, assume that x ⊳_f y because x is presented in a manner that renders y unnoticeable. That is, the agent is unaware that y is available (e.g., x is located on the store’s shelf in a manner that covers y). Then, by swapping the manner in which x and y are presented, it is expected that x will become unnoticeable. Alternatively, assume that both x and y receive the agent’s full attention, but the manner in which they are presented increases the perceived value of x relative to that of y (e.g., the agent received expert advice to choose x over y). Then, it is very likely that swapping the manner in which the two alternatives are presented will increase the perceived value of y relative to that of x (see also Section 2.2 on the neutrality property).
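For a finite example, the neutrality condition can be checked mechanically. The following sketch (all names are our own illustrative encoding, not part of the paper) represents frames as status-quo labels, defines the swap operation f_{xy}, and verifies the displayed biconditional for the status-quo impression mapping:

```python
from itertools import combinations

X = ["x", "y", "z"]          # grand set of alternatives
frames = list(X)             # each frame designates one option as the status quo

def swap(f, a, b):
    """f_{ab}: the frame obtained by swapping how a and b are presented."""
    return b if f == a else a if f == b else f

def impression(f):
    """Status-quo impression mapping: the status quo is promoted over all others."""
    return {(f, w) for w in X if w != f}

def is_neutral(impression, X, frames, swap):
    """Check: a ⊳_f b  <=>  b ⊳_{f_ab} a, for every frame and every ordered pair."""
    for f in frames:
        for a, b in combinations(X, 2):
            for p, q in ((a, b), (b, a)):
                if ((p, q) in impression(f)) != ((q, p) in impression(swap(f, p, q))):
                    return False
    return True

print(is_neutral(impression, X, frames, swap))  # True
```

By contrast, a constant, nonempty impression mapping (one that promotes x over y regardless of the frame) fails the same check, in line with the discussion of constant mappings below.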
For any S ∈ χ and any binary relation B on X, let max(S, B) := {x ∈ S | x B y for all y ∈ S ∖ {x}} be the set of B-maximal elements in S. We are now ready to state our main definition.
Definition 1.
A choice function c is impressionably rational (IR) if there exist a neutral impression mapping ⊳ : F → A_X and a linear order ⪰ on X such that for any (S, f) ∈ Σ:
c(S, f) ∈ max(S, ⪰ ∪ ⊳_f).      (1)
That is, an agent is IR if she always chooses the better option over the worse option according to a linear preference relation whenever the latter is not promoted over the former (it is straightforward to show that expression (1) can equivalently be written as: x = c(S, f) and y ∈ S imply [x ⪰ y or x ⊳_f y]), where the impression relations are mapped from the set of frames in a neutral way. Thus, Definition 1 reduces to the classic rational choice model when ⊳(.) is a constant function (when ⊳(.) is a constant function, by the neutrality property, each ⊳_f is empty and rational choice is obtained).
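The footnote form of (1) suggests a direct consistency test on finite choice data. A minimal sketch, assuming menus are encoded as frozensets, a linear order is given as a best-to-worst list, and the impression mapping is a function from frames to sets of promoted pairs (all encodings are ours):

```python
def satisfies_ir_condition(choice, pref, impression):
    """Check the footnote form of (1): for every observation, the chosen x
    satisfies, for each y in the menu, x >= y (in pref) or x is promoted over y.
    choice: dict mapping (menu, frame) -> chosen alternative;
    pref: list ranking alternatives best to worst; impression: frame -> set of pairs."""
    rank = {a: i for i, a in enumerate(pref)}   # lower index = better
    for (menu, f), x in choice.items():
        for y in menu:
            if rank[x] <= rank[y] or (x, y) in impression(f):
                continue
            return False
    return True

# A status-quo-biased agent with y preferred to x: she sticks with the default x
# under frame "x" but chooses y otherwise.
pref = ["y", "x"]
imp = lambda f: {(f, w) for w in ["x", "y"] if w != f}
choice = {(frozenset({"x", "y"}), "x"): "x",
          (frozenset({"x", "y"}), "y"): "y"}
print(satisfies_ir_condition(choice, pref, imp))  # True
```

Choosing x under frame "y" (where x is not promoted) would instead fail the test, since the worse option would be chosen without being promoted.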
In passing, we note that (1) is not a complete description of behavior, as it does not describe a unique choice for each menu, but only restricts choice to a specific subset of the menu. This is apparently in contrast with most of the literature, where the choice is completely specified. However, most of the models in the choice-theoretic literature are not completely identified from the data (e.g., [18,19,20,21,22,23]) (this is also true for all the models in Examples 1–8 and A1–A2 below), which means that they too can often be described by means of an incomplete model. (In Appendix C, we demonstrate this for the model of Masatlioglu et al. [22].) Moreover, like these models, our model has a complete two-stage representation (see Equations (3)–(4) in Section 2.2). Thus, in terms of completeness, there is no difference between most of the models from the two-stage literature and ours.

2.1. Familiar Behavioral Models as Special Cases

In this section, we present several examples of familiar behavioral models that our theory covers. These examples will demonstrate the generality of our approach (see also Table 1 on p. 6).
Our first example is a model of status-quo bias, as introduced by Masatlioglu and Ok ([1], Theorem 3).
Example 1
(Status-quo bias). A decision-maker has in mind two (one-to-one) functions, u (which represents ⪰) and β, from X to the positive reals. For any S ∈ χ, the choice problem (S, f) ∈ Σ if f ∈ S is an alternative interpreted as the status quo (F is taken here to be equal to X). Given the choice problem (S, f), the decision-maker chooses the status quo f if u(f) + β(f) ≥ u(x) for every other element x ∈ S; otherwise, she chooses the u-maximal element in S (we note that our model can also capture the temptation-driven status-quo bias model [24]).
It is straightforward to show that our theory includes Example 1 for ⊳_f = {(f, y) | y ∈ X ∖ {f}} for all f ∈ F. That is, when the status quo is promoted relative to all other options.
Note that Example 1 can also capture choice under advice [25] as long as the content of the advice is available in the data. In each frame, the agent receives advice for the most recommended option, which increases the value of that option when making a choice.
Second, consider the satisficing model [23].
Example 2
(Satisficing). A decision-maker has in mind linear preferences ⪰ and a threshold mapping r : χ → X. F is the set of lists on X; a choice problem (S, f) ∈ Σ if S ∈ χ and f lists all the alternatives in S. Given (S, f), the decision-maker chooses the first element in f that ⪰-dominates the threshold option r(S).
It is readily verified that our theory includes Example 2 for ⊳_f = {(f_i, f_j) | i < j}, where f_i denotes the i-th element in list f. That is, when alternative x is promoted over alternative y if x appears before y in the list.
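The order-dependence of the satisficing procedure in Example 2 is easy to see in a small simulation. A sketch under our own encoding (a best-to-worst list for ⪰, a fixed threshold option standing in for r(S)):

```python
def satisfice(menu_list, pref, threshold):
    """Example 2 sketch: choose the first element of the list f that is weakly
    preferred to the threshold option r(S).
    menu_list: the frame f, listing all alternatives in S;
    pref: best-to-worst ranking; threshold: the threshold option r(S)."""
    rank = {a: i for i, a in enumerate(pref)}
    for a in menu_list:
        if rank[a] <= rank[threshold]:
            return a
    return None  # unreachable when the threshold option itself is listed

pref = ["a", "b", "c", "d"]  # a is best
# Same menu, two orders, threshold b: the choice varies with the list (primacy bias).
print(satisfice(["c", "b", "a", "d"], pref, "b"))  # "b"
print(satisfice(["d", "a", "b", "c"], pref, "b"))  # "a"
```

The two observations above are exactly the kind of frame-driven choice variation that the primacy impression relation ⊳_f = {(f_i, f_j) | i < j} accommodates.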
The theory of IR agents captures many other choice processes that are affected by the order of the alternatives in a menu. For example, [3,8,23] allow for such choice procedures, which may capture not only the primacy bias but also the recency bias (see also Example 6 below and Example A1 in Appendix A).
Third, consider the following simple, frame-sensitive, limited attention model.
Example 3
(Limited attention). Given a choice problem (S, f) ∈ Σ, where f ⊆ S is interpreted as the set of options highlighted by an external mechanism, the agent chooses the ⪰-maximal element from f. If f is empty, then the ⪰-maximal element from S is chosen.
Our theory includes Example 3 for ⊳_f = {(x, y) | x ∈ f and y ∉ f}. That is, the highlighted options are promoted over the non-highlighted options. Note that Example 3 can also capture the familiarity-based attention model [26] if f ∈ F is interpreted as a set of options the agent is familiar with. However, our theory does not subsume all frame-sensitive limited attention models. For example, the authors of [27] (Section 5) develop a model of limited attention where an agent, given two alternatives, may choose the worse over the better option across all framings of choice, thereby violating IR.
Our next example is a generalization of the familiar β , δ model similar to [28].
Example 4
(A generalized β, δ model). The agent’s task is to choose the timing x ∈ X = F := {1, …, T} of receiving a reward with a value v(x) ∈ ℝ₊, where f ∈ F is the time of choice. A choice problem (S, f) ∈ Σ if x ≥ f for all x ∈ S (i.e., past rewards are not available). Given (S, f), the agent chooses reward x = f if x ∈ S and v(x) ≥ β(x)·δ^{y−f}·v(y) for all y ∈ S; otherwise, she chooses the reward that maximizes the function δ^y·v(y), where β : F → (0, 1) and δ is some positive constant (following empirical evidence on time discounting (e.g., Frederick et al. [29]), we allow β to be a function of the timing of choice; the case where β is constant is discussed in Remark 1 in Section 4. We note that allowing δ to be non-constant has no effect on the empirical content of Example 4.)
Our theory includes Example 4 for ⊳_f = {(f, y) | y ∈ X ∖ {f}} (i.e., at each period f, the present reward is promoted over all future periods). We follow prior studies [2,30,31] that measure the agent’s underlying preferences in Example 4 using the “long-run criterion”. In other words, for this example, we assume that the underlying preferences ⪰ are represented by the function δ^x·v(x).
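A small numeric illustration of Example 4 (the values v, β, and δ below are hypothetical, chosen only to exhibit present bias):

```python
def choose_reward(S, f, v, beta, delta):
    """Example 4 sketch: at decision time f, take the immediate reward f if it
    beta-delta-dominates every available alternative; otherwise maximize the
    long-run value delta**y * v(y)."""
    if f in S and all(v(f) >= beta(f) * delta**(y - f) * v(y) for y in S):
        return f
    return max(S, key=lambda y: delta**y * v(y))

v = lambda x: {1: 10.0, 2: 13.0}[x]   # reward values at periods 1 and 2
beta = lambda f: 0.5                  # heavy present bias at every decision time
delta = 0.9

# At time 1, the present reward 10 beats 0.5 * 0.9 * 13 = 5.85, so she takes it
# now, even though the long-run criterion ranks period 2 higher (0.81*13 > 0.9*10).
print(choose_reward({1, 2}, 1, v, beta, delta))  # 1
```

Here the underlying preferences (the long-run criterion δ^x·v(x)) rank period 2 first, yet the period-1 frame promotes the immediate reward, exactly as ⊳_f = {(f, y) | y ∈ X ∖ {f}} prescribes.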
The next two examples demonstrate that our theory can also encompass agents who are subject to several cognitive biases.
Example 5
(Status-quo bias, limited attention, and satisficing). Let F = F_1 ∪ F_2 ∪ F_3, where each F_i is the set of frames defined in Example i, i = 1, 2, 3. Given any problem (S, f) ∈ χ × F with f ∈ F_i, the agent employs the choice process in Example i.
It follows directly from Examples 1–3 that our theory also includes Example 5 for ⊳_f = {(x, y) | x = f and y ∈ X ∖ {f}} for all f ∈ F_1, ⊳_f = {(f_i, f_j) | i < j} for all f ∈ F_2, and ⊳_f = {(x, y) | x ∈ f and y ∉ f} for all f ∈ F_3; thus, capturing an agent who is subject to status-quo bias, satisficing, and limited attention simultaneously.
Example 6
(Primacy and recency bias). Let X_1, X_2 ⊆ X with X_1 ∪ X_2 = X and X_1 ∩ X_2 = ∅ be a division of the goods in X into two different categories, and let F be the set of lists on X. For any choice problem (S, f) ∈ Σ with S ⊆ X_1 and f ∈ F, the agent employs the satisficing choice process in Example 2. For example, X_1 may be a set of jams that can be tasted before purchasing, and the agent may choose the first jam that she finds tasty enough; she thus exhibits a primacy bias. By contrast, for any choice problem (S, f) ∈ Σ with S ⊆ X_2, the agent employs a different choice process. She examines the entire set of available products and spends a lot of time examining each one of them. X_2, for instance, may be a set of different types of cars. As the agent remembers better the cars examined later, she is subject here to a recency bias. Specifically, assume that when choosing between options in X_2, the agent chooses the last element in f that ⪰-dominates the threshold option r(S), where r : χ → X is as defined in Example 2.
In Example 6, frames always appear in the form of a list, but the agent uses different choice processes for different sets of goods. This example is included in our theory for ⊳_f = {(f_i, f_j) | i < j} for all lists f on X_1 and ⊳_f = {(f_i, f_j) | i > j} for all lists f on X_2.
Examples 1–4 demonstrate that our theory is general enough to encompass status-quo bias, satisficing, certain forms of limited attention, present bias, and choice under advice (see also Examples 7–8 and A1–A2, in Section 2.2 and Appendix A, respectively, for other choice models that our theory covers). Examples 5–6 demonstrate that our theory encompasses several cognitive biases simultaneously. This is an important feature, rendering our theory applicable to field choice data.

2.2. Alternative Models of IR Agents

In this section, we briefly discuss four (seemingly) alternative definitions of IR. First, note that it is straightforward to show that it is possible to weaken our neutrality property without changing the empirical content of Definition 1. Specifically, consider the following weaker condition:
(⋆)  x ⊳_f y ⟹ y ⊳_{f_{xy}} x, for all y ≻ x.
In contrast to neutrality, condition (⋆) allows for the possibility that the preferred x is ranked higher than the inferior y (according to the impression relation) both under frame f and under f_{xy}, which is reasonable when the impression relation is preference driven. For example, we may have x ⊳_f y because the agent deliberately decides not to consider y under f, in which option x, which she values greatly, is presented in a salient way, but at the same time to consider x under f_{xy}, where y, which she does not value, is presented saliently. Thus, condition (⋆) weakens neutrality by allowing for an a priori bias toward preferred alternatives.
Second, it is also possible to show that the following definition is empirically equivalent to IR (see Proposition 3 in Section 3).
Definition 2.
A choice function c is weak impressionably rational (WIR) if there exist an acyclic binary relation ⪰′ on X and a neutral impression mapping ⊳(.) with ⊳_f being a complete relation for all f ∈ F such that for any (S, f) ∈ Σ:
c(S, f) ∈ max(S, ⪰′ ∪ ⊳_f).      (2)
WIR differs from IR by requiring that, instead of the agent’s preferences, each impression relation be complete. Then, whenever the agent is indecisive between two alternatives, she chooses the promoted option. Thus, the preference relation underlying IR choice functions can also be seen as an incomplete relation; consider, for instance, the following example.
Example 7
(Framing effects resulting from indecisiveness). The agent has in mind a partial order ⪰′. A choice problem (S, f) ∈ Σ if S ∈ χ and f ∈ A_X is a complete binary relation, where x f y is interpreted as x being promoted over y under f. For each choice problem, the agent chooses c(S, f) ∈ max(max(S, ⪰′), f). That is, the agent always maximizes her preferences, but whenever indecisive she chooses the most promoted option.
The following notation will allow us to present a third model that is empirically equivalent to IR. Let ⊳′ : χ × F → A_X be a shading mapping that translates (in a menu-dependent way) frames into shading relations such that x ⊳′(S, f) y is interpreted as alternative x shading alternative y in menu S under frame f (i.e., y is disregarded under (S, f)); the equivalent model is then given by (see Proposition A2 in Appendix D on the equivalence between the models):
c(S, f) = max(max(S, ⊳′(S, f)), ⪰) for all (S, f) ∈ Σ,      (3)
with the additional condition:
x ⊳′(S, f) y ⟹ [y ⊳′(S, f_{xy}) x for all S with c(S, f_{xy}) = y].      (4)
In other words, our model can be completely described by means of a two-stage maximization process (e.g., [18,19,20,21,23]). However, while these models and similar others [22,32] are all special cases of (3), because they do not allow frames to vary in the analysis, it is impossible to test whether they satisfy condition (4), which has a similar interpretation to condition (⋆) above. Nevertheless, fixing f ∈ F with f ≠ f_{xy} for all x, y ∈ X, condition (4) is satisfied automatically. In other words, there always exists a frame such that viewing any of the two-stage models as a representation of behavior under that frame renders it a special case of our model. The following example, which is a generalization of the “transitive categorize then choose” model of [21] and which reduces to the original model when fixing (and ignoring) the frame, demonstrates this point.
Example 8
(Frame-sensitive categorization bias). Let each f = (f_1, …, f_n) ∈ F be a list of mutually exclusive subsets of X that cover it (i.e., ∪_i f_i = X), where each f_i denotes a different category of products and i < j is interpreted as category f_i being promoted over category f_j. Endowed with a preference relation ⪰ on X, the agent chooses from each (S, f) ∈ χ × F the ⪰-maximal option from S ∩ f_{i(S,f)}, with f_{i(S,f)} being the first category in f whose intersection with S is nonempty.
Our theory includes Example 8 for ⊳_f = {(x, y) | x ∈ f_i, y ∈ f_j, and i < j}.
By contrast, fixing f ∈ F for some neutral frame (i.e., f = f_{xy} for all x, y ∈ X; e.g., f = X) immediately reduces (3)–(4) to the rational choice model. In other words, there also always exists a frame such that, viewing any of the above two-stage models as a representation of behavior under that frame, its only intersection with IR is the rational choice model. (See Counter-example 1 in Section 3 for a formal generalization of this observation.)
Finally, we remark that our notion of neutrality is weaker than the one often considered in social choice theory, which requires ⊳ to covary with any permutation of the alternatives (not only those that swap a single pair). Formally, let Π be the set of all permutations (or bijections) on X, and let πf ∈ F be the frame obtained from f ∈ F by relabeling the alternatives according to π. An impression mapping ⊳ is strictly neutral if:
x ⊳_f y if and only if π(x) ⊳_{πf} π(y), for all π ∈ Π.
However, replacing neutrality with this stricter notion rules out some reasonable, unbiased impression mappings that should be included within our theory. Specifically, it results in a theory that often cannot accommodate multiple cognitive biases. For instance, the impression mapping in Example 6, ⊳_f = {(f_i, f_j) | i < j} for all lists f on X_1 and ⊳_f = {(f_i, f_j) | i > j} for all lists f on X_2, violates strict neutrality. As another instance, recall our example from the introduction, where receiving expert advice to choose x over y promotes x over y and, thus, receiving the same expert advice to choose y over x must promote y over x. Strict neutrality implies that receiving advice to choose z over t must also promote z over t. However, one can deem strict neutrality too strong, because expertise in the categories that include x and y does not imply expertise in the categories to which z and t belong. We hence conclude that replacing neutrality with its strict version substantially reduces the applicability of our model, especially to field choice data (in the same vein, one may wonder about the consequences of dropping the neutrality condition altogether; however, it is easily verified from (1) that defining ⊳_f = {(x, y) | y ≻ x} for all f ∈ F implies that any c vacuously admits this definition of IR for any ⪰).

3. Revealed Preference, Revealed Impression, and a Characterization

The first goal of this section is to identify when, in our theory, an option is revealed preferred to another. When an IR agent chooses x over y in two reflective frames f and f_{xy}, then, by the neutrality assumption (which implies ¬(x ⊳_f y) or ¬(y ⊳_f x)), at least one of these choices must be a consequence of preference maximization; formally, for any distinct x and y, define:
x R y if x = c(S, f) = c(S′, f_{xy}) and y ∈ S ∩ S′.
By the argument above, if x R y, then x is directly revealed preferred to y. Thus, if x R y R z, then we have x ⪰ y ⪰ z, and by the transitivity of ⪰, x ⪰ z. More generally, for the transitive closure of R (denoted by R^T), we have that x R^T y implies x ⪰ y. The following result shows that the converse statement is also true and, thus, R^T captures all welfare inferences that are possible in our model.
Proposition 1.
Let c be an IR choice function. Then, x ⪰ y for all ⪰ that are consistent with Definition 1 if and only if x R^T y.
Hence, we refer to R^T as the revealed preference in our model and to R as the direct revealed preference.
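On finite data, the direct revealed preference R and its transitive closure R^T can be extracted mechanically. A sketch under our own encoding (choice data as a dict, status-quo frames with their swap operation):

```python
def revealed_preference(choice, swap):
    """Direct revealed preference: x R y if x = c(S, f) = c(S', f_{xy}) with
    y in S and S'; then take the transitive closure R^T by iteration.
    choice: dict (menu, frame) -> chosen; swap(f, x, y): the reflective frame f_{xy}."""
    R = set()
    for (S1, f1), x1 in choice.items():
        for (S2, f2), x2 in choice.items():
            if x1 != x2:
                continue
            for y in S1 & S2:
                if y != x1 and f2 == swap(f1, x1, y):
                    R.add((x1, y))
    # transitive closure of R
    alts = {a for S, _ in choice for a in S}
    closed, changed = set(R), True
    while changed:
        changed = False
        for a, b in list(closed):
            for z in alts:
                if (b, z) in closed and (a, z) not in closed:
                    closed.add((a, z))
                    changed = True
    return closed

# Status-quo frames: choosing x over y under both defaults reveals x over y.
swap = lambda f, a, b: b if f == a else a if f == b else f
data = {(frozenset({"x", "y"}), "x"): "x",
        (frozenset({"x", "y"}), "y"): "x"}
print(revealed_preference(data, swap))  # {('x', 'y')}
```

Note that a single observation (x chosen only when x is the default) reveals nothing here, which is exactly how the theory avoids the mistaken welfare inference discussed in the introduction.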
As it is very useful for firms to understand which manner of presenting each of their products promotes it relative to other products, we proceed with the revealed impression of our model. We identify three choice situations under which the revealed impression of frame f is possible:
x E_f y ⟺ (i) x = c(S, f), y = c(S′, f_{xy}), and x, y ∈ S ∩ S′; or
(ii) x = c(S, f), y ∈ S, and y R^T x; or
(iii) y = c(S, f_{xy}), x ∈ S, and x R^T y.
When an agent exhibits choice reversal in two reflective frames, as in expression (i) above, it must be the case that one of her choices is due to impression (i.e., x ⊳_f y or y ⊳_{f_{xy}} x). Thus, by neutrality, we find that both choices are due to impression and, hence, x ⊳_f y. In addition, when alternative x is chosen over alternative y under frame f although y is revealed preferred to x (as in expression (ii)), we can be certain that x is promoted over y under f. Moreover, by the neutrality property, we can also state that, in this situation, y is promoted over x under frame f_{xy}, which explains expression (iii).
Proposition 2.
Let c be an IR choice function. Then, x ⊳_f y for all ⊳ that are consistent with IR if and only if x E_f y.
Proposition 2 shows that, in our model, for any frame f ∈ F, E_f is the revealed impression of that frame. This identification allows firms to deduce the manner in which each frame operates in promoting the different alternatives.
Next, we provide a testable axiom that can identify whether a choice function is IR. However, before considering it, let us recall the classic rationality axiom.
  • Weak Axiom of Revealed Preference (WARP). For any T ∈ χ, there exists x ∈ T such that for any S ∋ x, c(S, ·) ∈ T implies x = c(S, ·).
WARP states that every menu T has a “best alternative” x such that whenever x is available and the chosen option lies in T, x must be chosen. WARP is too strict for an IR agent: when facing a choice problem (S, f) in which the chosen option lies in T and which includes x (i.e., the best alternative in T), she may not choose x, because there is some option y ∈ S that is promoted over x under frame f. However, because ⊳ is neutral, this situation cannot occur again under frame f_{xy}. This observation strongly suggests the following relaxation of WARP.
  • WARP for Impressionable Rationality (WARP-IR). For any T ∈ χ, there exists x* ∈ T such that for any S, S′ ∋ x*, x* ≠ c(S, f) = c(S′, g) = y ∈ T implies g ≠ f^{x*y}.
That is, whenever the best option in T is available but rejected while the chosen option lies in T, under two frames f and g, it must be that the chosen option is promoted over the best option under both frames; WARP-IR then requires that f and g be non-reflective with respect to the best and the chosen options (We note that, while our domain only requires the standard richness property, WARP-IR has bite only under the following condition: there exist S, S′ ∈ χ and x, y ∈ S ∩ S′ such that (S, f), (S′, f^{xy}) ∈ Σ. This is a very weak condition and is consistent with observing choices under a single frame (e.g., when Σ = χ × {z} with z ∈ X ∖ {x, y} being a fixed status quo).).
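To make the axiom concrete, here is a minimal sketch of how WARP-IR could be checked on a finite choice data set. It is an illustration only, not part of the paper's formal apparatus: we assume frames are encoded as tuples over the alternatives (so that the reflective frame f^{xy} is obtained by relabeling), data is a dictionary mapping (menu, frame) pairs to the chosen option, and all function names are our own.

```python
from itertools import chain, combinations, product

def reflect(frame, x, y):
    # f^{xy}: the frame obtained from f by swapping the labels of x and y.
    swap = {x: y, y: x}
    return tuple(swap.get(a, a) for a in frame)

def is_best_in(x, T, data):
    # Observations in which x was available but some y != x in T was chosen.
    obs = [(S, f, c) for (S, f), c in data.items() if x in S and c in T and c != x]
    # WARP-IR forbids the same y beating x under a pair of reflective frames.
    for (S1, f, y1), (S2, g, y2) in product(obs, repeat=2):
        if y1 == y2 and g == reflect(f, x, y1):
            return False
    return True

def satisfies_warp_ir(X, data):
    # Every menu T must contain a "best" alternative in the sense above.
    options = sorted(X)
    menus = chain.from_iterable(combinations(options, k)
                                for k in range(1, len(options) + 1))
    return all(any(is_best_in(x, set(T), data) for x in T) for T in menus)
```

For instance, a single choice reversal between f and f^{xy} passes the check (it is attributed to impressions), while data revealing both x over y and y over x under reflective frame pairs fail it.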
The following examples demonstrate the types of restrictions that WARP-IR imposes.
Counter-example 1
(Non-presentation-driven irrationality). Let c : χ × F → X be an irrational choice function with F being a set of neutral frames (e.g., frames that indicate the deadline for making a choice). Formally, assume that there is no linear order ⪰ such that c can be written as:
c(S, f) = max(S, ⪰) for all S ∈ χ and for all f ∈ F
Then, c is inconsistent with WARP-IR (Because the next three proofs are constructive, we provide them here rather than in the Appendix.).
Proof. 
As mentioned, being irrational, c violates WARP. That is, there are f, g ∈ F, S, S′ ∈ χ, and {x, y} ⊆ S ∩ S′ such that c(S, f) = x and c(S′, g) = y. By f = f^{xy} and g = g^{xy} (which follow from the neutrality of all frames), we find that c(S, f^{xy}) = x and c(S′, g^{xy}) = y. This means that no option in T = {x, y} can play the role of x* in WARP-IR. □
Many of the choice models in the literature do not allow frames to vary (e.g., [20] or any of the other two-stage choice procedures discussed in Section 2.2; [2,33,34]). Counter-example 1 shows that WARP-IR precludes all these models when they are viewed as describing choices under neutral frames. More generally, Counter-example 1 demonstrates that our model can only accommodate framing effects resulting from asymmetric presentations of the alternatives.
One special case of Counter-example 1 is an agent who, given (S, f) (with f = X), chooses the available option that is most similar (according to an unobservable similarity metric) to the best option in X. Our second counter-example shows that a generalization of this special case to the aspiration-based choice model [8] is also precluded by WARP-IR.
Counter-example 2
(Aspiration-based choice). Consider an irrational decision-maker who has in mind a similarity metric d : X × X → R_+ and a linear order ⪰. Each f ∈ F = χ is a potential set that includes all available options and possibly some phantom alternatives. Given (S, f) with S ⊆ f, the decision maker chooses the d-most similar option in S to the ⪰-best option in f.
Proof. 
Since the decision maker is irrational, she violates WARP. Thus, there are S, S′ ∈ χ and {x, y} ⊆ S ∩ S′ such that c(S, f) = x and c(S′, g) = y. By S ⊆ f and S′ ⊆ g, we have x, y ∈ f ∩ g; hence, f = f^{xy} and g = g^{xy}, and WARP-IR is violated as in Counter-example 1. □
While an agent who admits the aspiration-based model chooses rationally whenever the potential set equals the set of available options, she is not IR (except in the trivial case of full rationality). Indeed, the agent may choose a worse option over a better one even when both options are presented symmetrically (e.g., choosing x over y ≻ x because d(x, z) < d(y, z) for the ⪰-best option z ∈ f with z ∉ {x, y}).
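For intuition, the aspiration-based procedure itself takes only a few lines of code. The sketch below is our own illustration, not from [8]: u is a dictionary representing the linear order, and d is a symmetric distance dictionary keyed by unordered pairs (distance zero to oneself is the default).

```python
def aspiration_choice(S, f, u, d):
    # The decision maker targets the u-best option of the potential set f
    # (which may contain unavailable "phantom" options), then chooses the
    # available option in S that is d-closest to that target.
    target = max(f, key=lambda a: u[a])
    return min(S, key=lambda s: d.get(frozenset({s, target}), 0.0))

# Hypothetical numbers: a phantom z in the potential set reverses the choice
# between x and y even though x and y are presented symmetrically.
u = {'x': 1, 'y': 2, 'z': 3}
d = {frozenset({'x', 'z'}): 1.0,
     frozenset({'y', 'z'}): 5.0,
     frozenset({'x', 'y'}): 2.0}
```

With the potential set {x, y, z}, the target is z and the worse option x is chosen; with the potential set {x, y}, the target is y and y is chosen.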
As a final example of the power of WARP-IR, we informally note that it also precludes Masatlioglu and Ok's [1] basic status-quo bias model because, similar to the aspiration-based model, the status quo can lead the decision maker to choose a worse option over a better one where neither option is the status quo (Specifically, she may choose the worse option y over the better x when only the former beats the status quo in all dimensions she deems relevant.).
We can now state the characterization theorem of IR agents.
Theorem 1.
Let c be a choice function. Then, c satisfies WARP-IR if and only if c is IR.
Proof. 
We first show that, for IR choice functions, x R y implies x ≻ y. Assume that c admits IR and that x = c(S, f) = c(S′, f^{xy}) with y ∈ S ∩ S′, and note that, by neutrality, we have [¬(y ⊳_f x) or ¬(y ⊳_{f^{xy}} x)]. Thus, by IR, x ≻ y holds. Now, assume WARP-IR fails. Then, there exists T ∈ χ such that no option can serve the role of x* in the axiom. That is, for any x ∈ T, there are S, S′ ∋ x such that x ≠ c(S, f) = c(S′, f^{xy}) = y ∈ T. Thus, if c is IR, then for any x ∈ T, there is y ∈ T such that y ≻ x, and since T is finite, ⪰ cannot be a linear order, a contradiction.
For the other direction, we first show that WARP-IR implies that R is acyclic. Take x_i, i = 0, 1, …, n, such that x_{i−1} R x_i for all i > 0 and x_n R x_0. Then, no option in T = {x_0, …, x_n} can serve the role of x* in WARP-IR, as for any x ∈ T, there is y ∈ T such that y R x, namely, y = c(S, f) = c(S′, f^{xy}) with x ∈ S ∩ S′. Being acyclic, R can be extended to a linear order ⪰.
Next, for any f ∈ F, let ⊳̃_f := {(x, y) | x = c(S, f) for some S ∋ y ≻ x}, and define ⊳ by x ⊳_f y if and only if [x ⊳̃_f y or y ⊳̃_{f^{xy}} x]. Note that ⊳ satisfies neutrality by construction. We now show that ⊳_f is also asymmetric for all f ∈ F. Assume not; then [x ⊳_f y and y ⊳_f x], namely, [x ⊳̃_f y or y ⊳̃_{f^{xy}} x] and [y ⊳̃_f x or x ⊳̃_{f^{xy}} y]. However, because x ⊳̃_f y implies y ≻ x, we find that [[x ⊳̃_f y and x ⊳̃_{f^{xy}} y] or [y ⊳̃_f x and y ⊳̃_{f^{xy}} x]]. Assume first that [x ⊳̃_f y and x ⊳̃_{f^{xy}} y]; then we have x ≻ y and y ≻ x, a contradiction. The case where [y ⊳̃_f x and y ⊳̃_{f^{xy}} x] follows equivalently and, thus, ⊳ is asymmetric.
It now follows directly that c is IR for the constructed ⪰ and ⊳, as x = c(S, f) and y ≻ x imply x ⊳_f y. This completes the proof. □
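The constructive direction of the proof suggests a simple elicitation procedure, which we sketch below for a finite data set. The encoding follows the same illustrative conventions as before (frames as tuples, data as a dictionary; all names are ours): first collect the revealed preference R from choices under reflective frame pairs, then extend it to a linear order by repeatedly removing undominated options.

```python
from itertools import product

def reflect(frame, x, y):
    # f^{xy}: swap the labels of x and y in the frame.
    swap = {x: y, y: x}
    return tuple(swap.get(a, a) for a in frame)

def revealed_preference(data):
    # x R y when x is chosen over y under a pair of reflective frames
    # (possibly the same neutral frame observed once): neutrality rules out
    # attributing both choices to impressions.
    R = set()
    for ((S1, f), c1), ((S2, g), c2) in product(data.items(), repeat=2):
        if c1 == c2:
            for y in (S1 & S2) - {c1}:
                if g == reflect(f, c1, y):
                    R.add((c1, y))
    return R

def linear_extension(X, R):
    # Extend the acyclic relation R to a linear order (best first);
    # returns None when R has a cycle, i.e., the data violate WARP-IR.
    order, rest = [], set(X)
    while rest:
        undominated = sorted(x for x in rest if not any((y, x) in R for y in rest))
        if not undominated:
            return None
        order.append(undominated[0])
        rest.remove(undominated[0])
    return order
```

When R is incomplete, ties among undominated options are broken arbitrarily here (alphabetically), mirroring the fact that any linear extension of R is consistent with the data.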
Theorem 1 gives us a falsifiable condition that is equivalent to IR and, thus, allows us to test our theory nonparametrically using the standard revealed-preference technique à la Samuelson [35]. As explained in the introduction, this result is important from both positive and normative perspectives. From a positive perspective, identifying the model and its primitives allows us to make predictions about the agent's behavior. From a normative perspective, it allows us to avoid mistaken inferences about the agent's welfare.
We close this section by noting that WARP-IR also characterizes our WIR model and, thus, the two models are empirically indistinguishable.
Proposition 3.
Let c be a choice function. Then, c is IR if and only if c is WIR.
Proposition 3 shows that WARP-IR also characterizes a version of IR where the agent’s underlying preferences are incomplete. Thus, WARP-IR covers a larger class of behavioral models, for example, those in which framing effects result from indecisiveness (as in Example 7).

4. Applications

In this section, we examine the applications of our theory to Examples 1–8 and to GR's theory. Strikingly, we find that, in all of our examples, R_T coincides with the part of the underlying preferences ⪰ that is observable from choice when the exact choice process is known. In other words, R_T completely identifies the revealed preference in each of Examples 1–8 (The same is true for the list-rationalization model and the salience theory of choice; see Proposition A1 in Appendix A for details.).
Theorem 2.
Let c admit Example i = 1, …, 8 and let R_T^i denote the revealed preference of Example i. Then, R_T = R_T^i for all i.
Theorem 2 demonstrates that our model identifies the "correct" revealed preference from choice data that could have been generated by many different cognitive biases, without relying on the specific process that generated the data. It is worth noting that this result holds in spite of the fact that, in all of our examples, the revealed preference may be incomplete. Consider, for instance, Example 1, and note that if there are x and y such that u(z) + β(z) > u(x), u(y) for all z ∈ X, then x is chosen over y if and only if x is the status quo, and it is impossible to reveal whether x is preferred to y. More formally, we showed that x R^1 y (where R^1 denotes the direct revealed preference of Example 1 (Recall that the direct revealed preference contains all couples (x, y) ∈ X × X such that it is possible to conclude that x ≻ y without invoking the transitivity of ⪰.)) if and only if, for any status-quo bias representation of c, we have [u(x) > u(y) and there exists z such that u(x) > u(z) + β(z)]. However, note that u(x) > u(y) and u(x) > u(z) + β(z) imply that x = c({x, y, z}, f) with f = z = f^{xy}. Hence, we obtain x R y.
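To illustrate, the elicitation for Example 1 can be simulated directly. The sketch below is ours (the encoding of the model as dictionaries u and beta is an assumption): the chooser keeps the status quo z unless some option beats u(z) + β(z), and a status-quo frame z is reflective with respect to any pair not containing z, so every such choice feeds the revealed preference R.

```python
from itertools import combinations

def sq_choice(S, z, u, beta):
    # Status-quo-biased choice: keep the status quo z unless some option
    # exceeds the threshold u(z) + beta(z); then take the u-best option.
    if any(u[s] > u[z] + beta[z] for s in S):
        return max(S, key=lambda s: u[s])
    return z

def revealed_from_status_quo(X, u, beta):
    # x R y whenever x is chosen over y under a status quo z outside {x, y}:
    # such a frame is unchanged by swapping x and y, so the choice reveals
    # the preference rather than an impression.
    R = set()
    for k in range(2, len(X) + 1):
        for menu in combinations(sorted(X), k):
            S = set(menu)
            for z in S:
                x = sq_choice(S, z, u, beta)
                R |= {(x, y) for y in S - {x} if z not in (x, y)}
    return R
```

With the hypothetical values u = {'x': 3, 'y': 2, 'z': 1} and a uniform β of 0.5, the procedure reveals ('x', 'y') and ('x', 'z') but never ranks y against z, since in any menu containing x, x is chosen; this matches the partial identification discussed above.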
Similarly, in Example 2, whenever there are x and y such that x, y ≺ r(S) for all S ∋ x, y, it is impossible to reveal whether x is preferred to y. Specifically, we find x R^2 y if and only if x = c(S, (y, x)) for some S ∈ χ. However, x = c(S, (y, x)) implies x = c(S, (x, y)), and we obtain x R y.
This fact is especially salient when considering Example 3, in which the domain is extremely free and the direct revealed preference can be anything from an empty relation to a complete relation; but in any case, it coincides with R. Specifically, we showed that x R^3 y if and only if x = c(S, f), y ∈ S, and [x, y ∈ f or f = ∅]; that is, if x is chosen over y when either both or none of the two options are highlighted. Note that [x, y ∈ f or f = ∅] implies that f = f^{xy} and, hence, x R y. Similar arguments follow for all the other examples and, thus, our theory identifies the "correct" revealed preference in all of the special cases discussed in Section 2.
Remark 1
(On the elicitation result in the β, δ model). In Example 4, we assume that the choices from all menus not containing past rewards are observable. However, the complete elicitation result also holds without this richness property. Specifically, we have shown that x R^4 y if and only if x = c(S, f), y ∈ S, and f ≠ x, y; that is, reward x is chosen over reward y when neither reward is immediate. Since, in this case, f is reflective with respect to alternatives x and y, we have R^4 = R. We note that an almost equivalent result is obtained when β(·) is a constant function. Specifically, in this special case, we have the following revelation for Example 4: x R y if and only if x R^4 y, for all x, y ≠ 1 (This is straightforward to show, as in Example 4, we observe the agent choosing between any x, y ≠ 1 under f = 1, which is reflective with respect to x and y.).
In the remainder of this section, we demonstrate our claim in the introduction that our model allows us to test an untestable assumption used in GR and, hence, provides another application of our theory.
GR offer a method to reveal the underlying preferences of agents who are sensitive to framing effects using the information contained in other agents' choices. Their general idea is that, in a framework with only two alternatives and only two (reflective) frames, this information can reveal the direction of the effect of changing the frame on decision makers. GR then conclude that those who choose against this direction must be consistent and, hence, their choices represent their underlying preferences. This last inference relies on the following assumption (see GR's assumption 3, p. 2765):
[x = c({x, y}, f) = c({x, y}, g)] ⟹ x ≻ y.
We note that, without writing it explicitly, GR mean that g = f^{xy}. First, this is the case in all the examples that demonstrate their approach (Examples 1.1–1.4). For instance, in Example 1.1 in GR, x is the default option in f, and y is the default option in g. In Example 1.2, x appears before y under frame f and y appears before x under frame g, etc. Second, when g ≠ f^{xy}, assumption (5) is violated in many instances. For example, if under both f and g, x appears before y in the menu, then x = c({x, y}, f) = c({x, y}, g) may well be observed, although y ≻ x (e.g., in the satisficing model). More generally, when f and g are non-reflective, it is very likely that one option (say, x) is promoted over the other (y) under both frames. Thus, choosing x under both frames does not allow one to conclude that x is preferred to y, and assumption (5) fails. Note that assumption (5) involves the unobservable ⪰; hence, it cannot be tested directly against the data. Finally, it follows directly from our analysis in Section 3 that, for the case of two options, [x = c({x, y}, f) = c({x, y}, f^{xy})] ⟹ x ≻ y if and only if the agent is IR. Thus, WARP-IR provides us with a way to test assumption (5) using individuals' choice data. This allows one to assess the applicability of GR's approach and to avoid mistaken inferences.

5. Conclusions

With the growing interest in bounded rationality, an increasing number of behavioral models have appeared in the economic literature, each focusing on a single cognitive bias, with different testable conditions and different welfare implications. Thus, in the current state of the literature, when facing actual consumer data sets, one needs to test an enormous number of conditions, apply different elicitation rules to different consumers' observations, and hope that each consumer is subject to a single cognitive bias. By contrast, the current paper provides a model that encompasses many of the behavioral models in the literature (see Examples 1–8 and A1–A2), including the most important choice procedures, such as satisficing [5], and cognitive traits, such as status-quo bias [36]. We provide a single choice axiom that characterizes our model and an intuitive observable criterion that allows one to elicit individuals' underlying preferences. Thus, when facing real data, one can use our single axiom to test several cognitive biases simultaneously, both across consumers and within a single consumer. In addition, we provide an observable choice relation that delivers the information on individuals' impression mappings. This information is useful for industrial and other firms in guiding them on how they should present their products to increase sales and profitability.
While very general, our model can still provide a complete identification of agents' underlying preferences, provided that the data are rich enough. This means that, with large enough data, the generality of our approach does not reduce the identification (or predictive) power of our model. However, when there are not enough observations, the information on individuals' preferences may be incomplete. In this respect, we have suggested a specification of our model in which the property of neutrality is replaced with the stricter version common in social choice theory. While this, of course, reduces the applicability of our theory, we strongly conjecture that the specialized model will enable elicitation of individuals' underlying preferences even under missing choice data. This is a topic for future research.

Author Contributions

Conceptualization, G.B.; methodology, G.B.; validation, G.B. and B.Ü.; formal analysis, G.B.; investigation, G.B. and B.Ü.; resources, G.B. and B.Ü.; data curation, not applicable; writing—original draft preparation, G.B.; writing—review and editing, G.B. and B.Ü.; visualization, G.B. and B.Ü.; supervision, G.B. and B.Ü.; project administration, G.B. and B.Ü.; funding acquisition, G.B. and B.Ü. All authors have read and agreed to the published version of the manuscript.

Funding

This research has been supported by the Yildiz Technical University Scientific Research Projects Coordination Department, project number SBA-2020-3815, and by Ruppin Academic Center internal grant 33080.

Data Availability Statement

Not applicable.

Acknowledgments

Guy Barokas is thankful to Adi Luria for helpful comments.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Two More Special Cases

Consider a special case of the list-rationalizable choice model by [10], as presented in [37] (Example 4), where the agent overviews the alternatives one by one and has a bias towards her current position.
Example A1
(List-rationalizable choice with primacy bias). A decision-maker has in mind two (one-to-one) functions u (which represents ⪰) and β from X to the positive reals. F is the set of lists on X; a choice problem (S, f) ∈ Σ if S ∈ χ, |S| ≥ 2, and f lists all the elements of S. For any choice problem (S, f) = ({x, y}, (x, y)), the agent chooses x if and only if u(x) + β(x) ≥ u(y). For any other choice problem (S, (f_1, …, f_n)) ∈ Σ, the agent follows the recursive process: c(S, f) = c({c_f^S, f_n}, (c_f^S, f_n)), where c_f^S = c(S ∖ {f_n}, (f_1, …, f_{n−1})).
Our theory also includes Example A1 for ⊳_f = {(f_i, f_j) | i < j}.
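The recursive process above unwinds into a single left-to-right pass, which we sketch here for concreteness (an illustrative implementation under our own encoding, with u and beta as dictionaries): the current position holds an incumbent that is replaced only when a later entrant strictly beats u(incumbent) + β(incumbent).

```python
def list_choice(listing, u, beta):
    # Scan the list in order; the incumbent enjoys a primacy premium beta,
    # so an entrant must strictly exceed u(incumbent) + beta(incumbent).
    incumbent = listing[0]
    for entrant in listing[1:]:
        if u[entrant] > u[incumbent] + beta[incumbent]:
            incumbent = entrant
    return incumbent
```

Under the hypothetical values u = {'a': 2, 'b': 2.4} and β ≡ 1, each of a and b survives against the other from the front position, so the choice from {a, b} flips with the list order, exactly the frame-sensitivity the example is meant to capture.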
Next, we consider a simple version of the salience model introduced by Bordalo et al. [38], as studied by GR.
Example A2
(Salience theory of choice). Let each x ∈ X be a product with two attributes, quality and price (with a negative sign). Let F = {f_1, f_2}; each frame corresponds to one of the attributes, quality or price, being salient relative to the other attribute in that frame. Formally, f_1 and f_2 are two opposite lists on X (Thus, we assume that, for any two products, lower quality also means lower price.). For any x ∈ X, let x_k, k = 1, 2, denote the position of alternative x in frame f_k. Given any (S, f_k) ∈ χ × F, the decision-maker chooses the maximal option from S according to the one-to-one function v_k(x) = u(x) + β_k(x_k), such that u : X → R_+ is also a one-to-one function and β_k : {1, …, |X|} → R_+ is a weakly increasing function for each k.
Our theory includes Example A2 for ⊳_{f_k} = {(f_j, f_i) | i < j}; that is, when alternative x is promoted over y under frame f_k if and only if x is better in terms of the attribute that is salient in that frame.
Proposition A1.
Let c admit Example i = A1, A2 and let R_T^i denote the revealed preference of Example i. Then, R_T = R_T^i for all i.

Appendix B. Discussion on the Behavioral Approach to Welfare and the Related Literature

The aim of this appendix is to discuss the behavioral approach to welfare, comparing it to the model-free approach, and to enrich the discussion of the related literature. The behavioral approach to welfare applies the revealed-preference method to behavioral models. It involves three steps: (i) presenting a model that captures certain behavioral traits or a specific choice process (often as a generalization of the classical rational choice theory); (ii) finding the testable conditions (on choice data) that are equivalent to the presented model; and (iii) finding the "revealed preference" of the model, that is, a condition that allows us to reveal the agent's "true preferences" according to the model, after confirming it against the data using the conditions in step (ii) (In the current paper, these three steps are carried out in Definition 1, Theorem 1, and Proposition 1, respectively.). The alternative approach, called the model-free approach, simply offers a normative criterion to apply to all possible models. Thus, the advantage of the model-free approach is its comprehensive generality; however, as explained, its lack of testable conditions often results in mistaken inferences. Specifically, in the introduction, we showed this by comparing our approach with Bernheim and Rangel's [12], and in Section 5, we demonstrated this by comparing our approach to that of [15]. Here, we give another example that compares our approach to that of [11]. Assume that we observe an agent, who satisfies the satisficing model of Example 2, choosing from the following lists:
L_1 = (x, z_1, y), L_2 = (x, y, z_1), L_3 = (z_1, x, y), L_4 = (x, y, z_2), and L_5 = (y, x, z_2).
The agent chooses x from L_1–L_3 but y from L_4 and L_5. Since Apesteguia and Ballester's [11] criterion looks for the preference relation that is closest to the data in terms of the number of choices that should be swapped in order to render the data consistent with that preference relation, it will conclude that x ≻ y. By contrast, since the agent chooses y over x in two reflective frames (i.e., L_4 and L_5), our criterion will conclude that y ≻ x, which is also the conclusion consistent with Example 2 (where, in addition to y ≻ x, we have that x passes the agent's threshold r({x, y, z_1}) but not the threshold r({x, y, z_2})).
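The choice data in this example can be reproduced with a small satisficing sketch. This is our own illustration; the numeric utilities and thresholds below are hypothetical values chosen to satisfy Example 2's conditions, with y preferred to x and both preferred to z_1 and z_2.

```python
def satisfice(listing, v, threshold):
    # Scan the list in order and take the first option that meets the
    # menu-dependent threshold; fall back to the v-best option otherwise.
    t = threshold[frozenset(listing)]
    for option in listing:
        if v[option] >= t:
            return option
    return max(listing, key=lambda a: v[a])

v = {'y': 3, 'x': 2, 'z1': 1, 'z2': 0}        # underlying preference: y > x > z1 > z2
threshold = {frozenset({'x', 'y', 'z1'}): 2,  # x passes r({x, y, z1}) ...
             frozenset({'x', 'y', 'z2'}): 3}  # ... but not r({x, y, z2})
```

This chooser picks x from L_1, L_2, and L_3 but y from L_4 and L_5, so counting swaps favors x over y even though the two reflective lists L_4 and L_5 reveal y over x.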
Within the behavioral approach to welfare, our model is closest to the two-stage choice procedures. While we compared our approach to that literature throughout the paper, here we wish to give a summary. The main difference between our model and the two-stage approach (e.g., [18,19,20,21,22,23]) is that we allow for framing effects. Thus, to compare these models to ours, one needs to imagine that they describe behavior in which the frame is held fixed. In this respect, we have shown in Section 2.2 that any model in this literature can be seen as a special case of our theory, provided that we can freely choose the specific frame under which it is presented. By contrast, assuming that the fixed frame is neutral, the intersection between our model and any of these models results in the rational choice theory. This means that, without framing effects, our model cannot capture any of the choice anomalies captured by these models. This is not surprising, considering that our aim is to model agents who are susceptible to changes in the framing of choice, but who are otherwise rational.
We close this section by noting a drawback of the behavioral approach to welfare, which we think should be another topic of future research. It was shown elsewhere [21] that two models can have the same testable implications but different welfare consequences. For a simple example, consider an agent who always chooses to minimize his utility (i.e., he chooses against his complete and transitive preferences). While this agent satisfies the conditions of the classical rational choice theory (i.e., WARP), applying the standard revealed-preference method in this case will result in the wrong preference relation. Thus, in extreme cases, (behavioral) revealed preference can also result in mistaken inferences. As mentioned, this is a subject for future economic and philosophical research.

Appendix C. The Equivalence between Masatlioglu et al. [22] and an Incomplete Model

The aim of this appendix is to show that Masatlioglu et al.'s [22] model has an equivalent incomplete representation. Because we have already shown that our model has a complete two-stage representation, this establishes that there is no difference between our model and their model in terms of completeness. Let C : χ → X be a standard choice function and assume that it admits Masatlioglu et al.'s [22] limited attention model. That is, there exist a linear order ⪰ and an attention mapping Γ : χ → χ, with Γ(S) ⊆ S, satisfying:
[x ∉ Γ(S)] ⟹ [Γ(S) = Γ(S ∖ {x})] for all S ∈ χ,
such that:
C(S) = max(Γ(S), ⪰) for all S ∈ χ.
Then, C is indistinguishable from the following incomplete model:
C(S) ∈ max(S, ⊳)
[C(S) ≠ C(S ∖ {y})] ⟹ [C(S) ⊳ y],
where ⊳ is an acyclic binary relation on X. To show this, we first note that [22] shows that (A1)–(A2) are equivalent to the acyclicity of the binary relation P on X defined by [x P y] ⟺ [C(S) = x ≠ C(S ∖ {y})]. Clearly, under (A3)–(A4), x P y implies x ⊳ y, and by the acyclicity of ⊳, P is also acyclic under (A3)–(A4). The other direction is straightforward, and so the complete model in (A1)–(A2) is indistinguishable from the incomplete model in (A3)–(A4).
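As an illustration of the complete model (A1)–(A2) and the revealed relation P, the following is our own toy encoding (not from [22]): `better` is the linear order given as a best-first list, and the attention filter is a dictionary of exceptions that defaults to full attention.

```python
from itertools import combinations

def la_choice(S, attention, better):
    # (A2): maximize the linear order over the consideration set Gamma(S).
    considered = attention.get(frozenset(S), frozenset(S))
    return min(considered, key=better.index)

def revealed_P(data):
    # x P y when x = C(S) but C(S \ {y}) != C(S): removing the unchosen y
    # changes the choice, so y was considered and x was revealed preferred.
    P = set()
    for S, x in data.items():
        for y in S - {x}:
            if S - {y} in data and data[S - {y}] != x:
                P.add((x, y))
    return P
```

For example, a filter that overlooks the best option x only in the full menu {x, y, z} satisfies the attention-filter property (A1), makes y the choice from the full menu, and yields the acyclic revealed relation P = {(y, z)}.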

Appendix D. Remaining Proofs

Propositions 1 and 2 are proved as corollaries of Theorem 1.
Proof of Proposition 1.
The “if part” follows as explained in the main text. The “only if part” follows from a standard result (originally due to [39]), stating that an asymmetric and transitive relation coincides with the intersection of its linear extensions. □
Proof of Proposition 2.
The “if part” follows as explained in the main text. For the “only if part,” assume that ¬(x E^f y). If, in addition, y R x, then by ¬(x E^f y), x ≠ c(S, f) for any S ∋ y, and the result follows by the proof of Theorem 1, which constructed ⊳ such that ¬(x ⊳_f y). Otherwise, ¬(y R x). If x R y, then x ⊳_f y, where ⊳ is as defined in Theorem 1, only if [y = c(S, f^{xy}) and x ∈ S]; however, this implies x E^f y, a contradiction. Thus, assume that ¬(y R x) and ¬(x R y); then, by the aforementioned Dushnik and Miller's [39] result, there exist two linear extensions ⪰′ and ⪰″ of R such that x ≻′ y and y ≻″ x. Now, note that, by ¬(x E^f y), we have either [x ≠ c(S, f) for all S ∋ y] or [y ≠ c(S, f^{xy}) for all S ∋ x]. Assume first that x ≠ c(S, f) for all S ∋ y, and pick ⪰ = ⪰″; then, by the proof of Theorem 1, there exists ⊳ such that ¬(x ⊳_f y). Otherwise, y ≠ c(S, f^{xy}) for all S ∋ x, and the result follows equivalently by picking ⪰ = ⪰′. □
Proof of Proposition 3.
We first show that WIR implies WARP-IR. Recall that, being asymmetric and transitive, ≻́ is also acyclic, which ensures the existence of a maximal element; if x* is the unique maximal element in T, then WARP-IR follows as in the proof of Theorem 1. Otherwise, y = c(S, f) = c(S′, f^{x*y}) ∈ T implies that y is also a ≻́-maximal element in T; however, by (4) and neutrality, y cannot be chosen under at least one of the frames f and f^{x*y}, a contradiction. For the other direction, it is enough to show that restricting ⊳ in Definition 1 to satisfy condition (⋆): [x ⊳_f y or y ⊳_f x for any x, y ∈ X] is without loss of generality. It follows directly from Definition 1 that if c admits IR with ⊳ and ⪰, it also admits IR with any ⊳̃ such that x ⊳_f y implies x ⊳̃_f y for all f ∈ F. Thus, it remains to show that ⊳, as defined in the proof of Theorem 1, can be extended to a neutral impression mapping satisfying (⋆). We call this extended mapping ⊳̃ and construct it recursively. Assume that F = {f_1, …, f_n} without loss of generality, and for the recursive construction denote it by F^1 = {f_1^1, …, f_n^1}. For any x, y ∈ X, define x ⊳̃_{f_1^1} y if and only if [x ⊳_{f_1^1} y or [¬(y ⊳_{f_1^1} x) and x ≻ y]]. Next, pick j ≤ n such that f_j^1 is reflective to f_1^1, and define F^2 = {f_2^2, …, f_n^2} by f_i^2 = f_i^1 for all i ≠ 1, j, and ⊳̃_{f_j^2} := {(x, y) | y ⊳̃_{f_1^1} x}. Note that, since ⊳ is neutral, x ⊳_{f_j^2} y implies x ⊳̃_{f_j^2} y. Now, for any x, y ∈ X, let x ⊳̃_{f_2^2} y if and only if [x ⊳_{f_2^2} y or [¬(y ⊳_{f_2^2} x) and x ≻ y]], and so on up to ⊳̃_{f_n^n}. It follows by construction that ⊳̃ is both neutral and satisfies property (⋆). This completes the proof. □
Proof of Theorem 2.
First, note that, by Proposition 1, for any IR choice function, x R y implies x ≻ y. Thus, x R y implies x R^i y for all i. The other direction is proved case by case.
For Example 1, let R̄^1 be a binary relation on X defined by x R̄^1 y if x = c({x, y, z}, z) for some z ∈ X ∖ {x, y}. We now show that x R^1 y implies x R̄^1 y. Assume that c admits Example 1 with u(·) and β(·) such that u(x) > u(y) but ¬(x R̄^1 y). We first show that c admits Example 1 also for u′ and β′ such that u′(w) = u(w) and β′(w) = β(w) for all w ≠ x, u′(x) = min_{z∈X} u(z)/2, and β′(x) = u(x) + β(x) − u′(x). Clearly, the representation holds for all choice problems that do not include x. Now, assume that c(S, z) ≠ x ∈ S; then, we have [u(z) + β(z) ≥ u(x) or [u(w) > u(x) for some w ∈ S]], which imply [u′(z) + β′(z) ≥ u′(x) or [u′(w) > u′(x) for some w ∈ S]], respectively. In addition, x = c(S, x) if and only if u(x) + β(x) ≥ u(w) for all w ∈ S, which holds if and only if u′(x) + β′(x) ≥ u′(w) for all w ∈ S. Finally, z ≠ x = c(S, z) is impossible because, owing to c admitting Example 1, it implies z ≠ c({x, y, z}, z); and if x = c({x, y, z}, z), then x R̄^1 y, a contradiction; otherwise, y = c({x, y, z}, z), and u(y) > u(x), a contradiction. This shows that x R^1 y implies x R̄^1 y. It remains to show that x R̄^1 y implies x R y; but since x = c({x, y, z}, f) with f = z satisfies f = f^{xy}, the proof is completed for the case of status-quo bias.
For Example 2, first note that, for any c that admits satisficing, if x = c(S, f) and y appears before x in f (henceforth denoted (y, x) ∈ f), then [r(S) ≻ y and x ≻ y]. Thus, we have (⋆): x = c(S, f) and (y, x) ∈ f imply y ≠ c(S, g) for all g ∈ F. Second, let R̄^2 be defined by x R̄^2 y if x = c(S, f) and (y, x) ∈ f. Clearly, for c that admits satisficing, x R̄^2 y implies x ≻ y. Hence, R̄^2 is acyclic and can be extended to a linear order ⪰′. We now show that c admits satisficing for any such linear extension ⪰′ of R̄^2 and for r′(S) = min({x ∈ X | x = c(S, f) for some f ∈ F}, ⪰′). Assume x = c(S, f); then, x ⪰′ r′(S) holds by definition; moreover, for all y such that (y, x) ∈ f, we have (i) x ≻′ y, by the definition of ⪰′, and (ii) r′(S) ≻′ y, for otherwise, y = c(S, g) for some g ∈ F, and we have a contradiction to (⋆). It is well known that x ≻′ y for every such extension ⪰′ of R̄^2 implies x R̄_T^2 y (cf. [39]); hence, R_T^2 = R̄_T^2. It remains to show that x R̄^2 y implies x R y: x = c(S, f) and (y, x) ∈ f imply that x = c(S, f^{xy}) and, hence, x R y.
For Example 3, it follows directly that R is given by x R y if x = c(S, f), y ∈ S, and [y ∈ f or f = ∅] (recall that x = c(S, f) implies that x ∈ f whenever f ≠ ∅). We now show that R = R^3 by showing that, if c admits limited attention with an underlying preference ⪰, then it admits limited attention for any linear order ⪰′ that extends R. Assume that x = c(S, f) and that f ≠ ∅; then, because c admits limited attention, we have x ∈ f and, by the definition of R, we have x R y for all y ∈ S ∩ f. Thus, assume that x = c(S, f) and that f = ∅; then, x R y follows in the same way. This shows that R = R^3.
For Example 4, first note that R in this example is given by x R y if x = c(S, f), y ∈ S, and f ≠ x, y, and that, since x R y implies u(x) > u(y) (where u(x) := δ^x v(x)), it is acyclic and can be extended to a linear order represented by u. Second, note that, for all x, y ≠ 1, we have, without loss of generality, x = c({x, y}, 1) and, hence, R is complete on X ∖ {1}. Thus, to complete the proof, it remains to show that x R^4 y implies x, y ≠ 1. To prove this, we show that c admits a generalized β, δ representation with u(1) > u(x) if and only if it admits a generalized β, δ representation with u′(x) > u′(1).
Assume first that the representation holds with u(1) > u(x), and define u′ and β′ such that u′(w) = u(w) and β′(w) = β(w) for all w ≠ 1, u′(1) = u(x)/2, and β′(1) = β(1) + u(1) − u′(1). Note that, to show that c admits a generalized β, δ representation also with u′ and β′, we only need to show that the representation holds for all (S, 1) ∈ Σ such that 1 ≠ c(S, 1), but this follows directly, as u′(1) + β′(1) = u(1) + β(1). Thus, assume that u(x) > u(1), and let u′(w) = u(w) and β′(w) = β(w) for all w ≠ 1, u′(1) = u(x) + β(x), and β′(1) = β(1) + u(1) − u′(1); the representation with u′ and β′ then holds in the same way.
The proof for Example 5 follows directly from those of Examples 1–3, and the proof for Example 6 follows very similarly to that of Example 2; hence, we proceed with the proof for Example 7.
First, note that if c admits Example 7, then (⋆⋆): x = c(S, f), y ∈ S, and y = c(S′, f) imply x ∉ S′, for otherwise, [x ⊳_f y and y ⊳_f x], a contradiction. We show that R^7 ⊆ R. Let ≻́ := R_T (we know that R_T is acyclic because the agent in Example 7 is WIR), and define ⊳ by x ⊳_f y if and only if [x = c(S, f), y ∈ S, and ¬(x ≻́ y)] or [y = c(S, f^{xy}), x ∈ S, and ¬(y ≻́ x)]. Note that ⊳_f is asymmetric, for otherwise, we have, without loss of generality, either (i) x = c(S, f), y ∈ S, y = c(S′, f), and x ∈ S′, or (ii) x = c(S, f) = c(S′, f^{xy}), y ∈ S ∩ S′, and ¬(x ≻́ y). However, (i) is a contradiction to (⋆⋆), and (ii) also leads to a contradiction because x = c(S, f) = c(S′, f^{xy}) imply x R y. To see that the representation holds for the constructed ≻́ and ⊳, note that because, for any choice function that admits Example 7, x R_T y implies x ≻́ y, we will never observe x = c(S, f), y ∈ S, and y R_T x. In addition, ¬(x R_T y), ¬(y R_T x), x = c(S, f), and y ∈ S imply x ⊳_f y.
The proof for Example 8 follows in the same manner as that for Example 3, where R is given by x R y if x = c(S, f) and y ∈ i(S, f) (recall that x = c(S, f) implies that x ∈ i(S, f)). This completes the proof. □
Proof of Proposition 4.
For Example A1, let R_9 be defined by x R_9 y if there exists (S, f) ∈ Σ such that x = c(S, f) and (y, x) ∈ f. We show that u(x) > u(y) in every list-rationalization (u, β) of c implies x R_9 y. Assume that c is list-rationalizable with u and β such that u(x) > u(y) and ¬(x R_9 y); we first show that c is list-rationalizable also for the following u′ and β′. Let z := argmax_{t: x R_9 t} (u(t) + β(t)), ϵ := min_{x,y ∈ X: u(x) > u(y)} (u(x) − u(y))/2, and define: u′(t) = u(t) and β′(t) = β(t) for all t ≠ x, u′(x) = u(z) + β(z) + ϵ, and β′(x) = u(x) + β(x) − u′(x). Note that u(y) > u′(x), for otherwise u(z) + β(z) + ϵ ≥ u(y), and we have x = c({x, y, z}, (y, z, x)) and x R_9 y, a contradiction. Clearly, the new representation holds for all choice problems in which x is not chosen. In addition, if x = c({x, y}, (x, y)), then u(x) + β(x) ≥ u(y) and u′(x) + β′(x) ≥ u′(y). Now, assume that x = c({x, t}, (t, x)); then x R_9 t and u′(x) ≥ u(t) + β(t) + ϵ; hence, u′(x) > u(t) + β(t). This shows that the representation holds also for all doubleton menus.
We now show by induction that the representation (u′, β′) holds for any other choice problem. Assume, by induction, that the representation holds for all (S, (f_1, …, f_n)) ∈ Σ such that n ≤ k − 1. Now, assume that x = c(S, (f_1, …, f_k)). If x = c_f(S), where c_f(S) denotes the choice from the truncated list (f_1, …, f_{k−1}), then because c is list-rationalizable by (u, β), we have x = c({x, f_k}, (x, f_k)) and u′(x) + β′(x) ≥ u′(f_k), and the result follows by the induction assumption. Otherwise, we have x = f_k, x = c({x, c_f(S)}, (c_f(S), x)), and u′(x) > u′(c_f(S)) + β′(c_f(S)). This shows that u(x) > u(y) in every list-rationalization of c implies x R_9 y. We are left to show that x R_9 y implies x R y, but this follows directly from the fact that x = c(S, f) and (y, x) ∈ f imply x = c(S, f^{x↔y}).
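To illustrate the list-rationalizability argument, here is a sketch under the assumption — consistent with the doubleton inequalities in the proof, but not a definitive reading of the model — that a later item t displaces the current incumbent s only when u(t) > u(s) + β(s); the helper names are ours:

```python
def list_choice(listing, u, beta):
    """Sequential list choice: walk the list, keeping an incumbent;
    a later item t displaces incumbent s only if u(t) > u(s) + beta(s)."""
    incumbent = listing[0]
    for t in listing[1:]:
        if u[t] > u[incumbent] + beta[incumbent]:
            incumbent = t
    return incumbent

def revealed_R9(listings, u, beta):
    """Collect the pairs behind R_9: x R_9 y if x is chosen from some
    list in which y appears before x."""
    R = set()
    for f in listings:
        x = list_choice(f, u, beta)
        R.update((x, y) for y in f[: f.index(x)])
    return R

u = {"a": 3, "b": 2, "c": 1}
beta = {"a": 0, "b": 2, "c": 0}
# listed first, b's inertia bonus protects it against a: 3 < 2 + 2
assert list_choice(("b", "a"), u, beta) == "b"
# listed the other way, a wins outright
assert list_choice(("a", "b"), u, beta) == "a"
# a chosen with c listed earlier reveals a R_9 c
assert revealed_R9([("c", "a")], u, beta) == {("a", "c")}
```

Overcoming an earlier position despite the β-bonus is exactly what makes the choice of x informative, which is why R_9 conditions on (y, x) ∈ f.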
For Example A2, let R_10 be defined by x R_10 y if there exists i such that y_i > x_i and x = c({x, y}, f_i). Note that R_10 is acyclic as x = c({x, y}, f_i) implies v_i(x) ≥ v_i(y), and y_i > x_i for i ∈ {1, 2} and, thus, u(x) > u(y). Also note that x R_10 y implies that x = c({x, y}, f_i^{x↔y}) and, thus, we have x R y. To see that R_10 exhausts the revealed preferences, we show that if ¬(x R_10^T y), then there is a linear extension ⊵ of R_10^T with y ⊵ x such that (u′, β′_1, β′_2) represents c and u′ represents ⊵. Assume without loss of generality that y_1 > x_1, define u′(x) = v_1(x), and for all x ∈ X, let β′_1(x_1) = 0 and β′_2(x_2) = β_2(x_2) + v_2(x) − u(x). Notice that the representation still holds because v′_i(x) := u′(x) + β′_i(x_i) = v_i(x) for all i, and that u′(y) > u′(x). We are left to show that β′_2 is an increasing function. Assume x_2 > y_2; then we need to show that β_2(x_2) + v_2(x) − u(x) > β_2(y_2) + v_2(y) − u(y), but because β_2(x_2) ≥ β_2(y_2), it is enough to show that v_2(x) − u(x) ≥ v_2(y) − u(y); replacing v_2(z) = u(z) + β_2(z_2) for z = x, y, this inequality holds if and only if β_2(x_2) ≥ β_2(y_2), and the proof for the salience model is complete. □
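As an illustration of the salience model, the following sketch assumes the additive form v_i(x) = u(x) + β_i(x_i) used in the proof, with alternatives encoded as attribute tuples; the encoding and all names are ours, not the paper's:

```python
def salience_choice(menu, i, u, beta):
    """Choice when dimension i is salient: maximize v_i(x) = u(x) + beta_i(x_i),
    where x_i is the i-th attribute of alternative x and beta_i is increasing."""
    return max(menu, key=lambda x: u[x] + beta[i](x[i]))

def revealed_R10(pairs, u, beta):
    """x R_10 y if, for some salient dimension i, y_i > x_i and yet x is
    chosen from {x, y} under frame f_i."""
    R = set()
    for x, y in pairs:
        for i in range(len(x)):
            if y[i] > x[i] and salience_choice([x, y], i, u, beta) == x:
                R.add((x, y))
    return R

# two alternatives described by two attributes
x, y = (2, 1), (1, 3)
u = {x: 5, y: 3}
beta = [lambda a: a, lambda a: 0.5 * a]  # increasing salience bonuses
# dimension 2 salient: 5 + 0.5 = 5.5 beats 3 + 1.5 = 4.5, so x is chosen
assert salience_choice([x, y], 1, u, beta) == x
# y_2 > x_2 and yet x was chosen, so x R_10 y (and indeed u(x) > u(y))
assert revealed_R10([(x, y)], u, beta) == {(x, y)}
```

Winning despite the other alternative's advantage on the salient dimension is what makes the observation informative about u, mirroring the acyclicity argument above.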
Proposition A2.
A choice function c admits IR if and only if it satisfies (3)–(4).
Proof of Proposition A2.
First note that it follows directly that, under (3)–(4), x R y implies x ≻ y. Thus, by Theorem 1, WARP-IR follows by the acyclicity of ≻. For the other direction, assume that c satisfies Definition 1 and let ⊳_{(S,f)} := {(x, y) ∈ f : c(S, f) = x} for all (S, f) ∈ Σ. Then, (3) holds for the constructed ⊳ and ⪰. To see that (4) is satisfied, note that x ⊳_{(S,f)} y implies x ≻_f y, which implies y ≻_{f^{x↔y}} x and, thus, y ⊳_{(S, f^{x↔y})} x for all S such that c(S, f^{x↔y}) = y. □

Table 1. List of behavioral models.
| Example | Model | Included in IR |
| --- | --- | --- |
| 1 | Status-quo bias | Yes |
| 2 | Satisficing | Yes |
| 3 | Limited attention | Yes |
| 4 | β–δ model | Yes |
| 5 | Hybrid model including Examples 1–3 | Yes |
| 6 | Primacy and recency bias | Yes |
| 7 | Indecisiveness | Yes |
| 8 | Categorization bias | Yes |
| A1 | List rationalizability | Yes |
| A2 | Salience theory | Yes |
| C-1 | Non-presentation-driven irrationality | No |
| C-2 | Aspiration-based model | No |

Barokas, G.; Ünveren, B. Impressionable Rational Choice: Revealed-Preference Theory with Framing Effects. Mathematics 2022, 10, 4496. https://doi.org/10.3390/math10234496
