
(Un)Bounded Rationality in Decision Making and Game Theory – Back to Square One?

Werner Güth and Hartmut Kliemt
1 Max Planck Institute of Economics, Kahlaische Straße 10, 07745 Jena, Germany
2 Frankfurt School of Finance and Management, Sonnemannstraße 9-11, 60314 Frankfurt am Main, Germany
* Author to whom correspondence should be addressed.
Games 2010, 1(1), 53-65; https://doi.org/10.3390/g1010053
Submission received: 18 February 2010 / Revised: 15 March 2010 / Accepted: 19 March 2010 / Published: 23 March 2010

Abstract: Game and decision theory start from rather strong premises. Preferences represented by utilities, beliefs represented by probabilities, and common knowledge and symmetric rationality as background assumptions are treated as “given.” A richer language enabling us to capture the process leading to what is “given” seems superior to the stenography of decision making in terms of utility cum probability. However, similar to traditional rational choice modeling, boundedly rational choice modeling, as outlined here, is far from being a “global” theory with empirical content; rather, it serves as a tool to formulate “local” theories with empirical content.

1. Introduction and Overview

Let G = (S1, S2, …, Sn; π1(·), π2(·), …, πn(·); Z) be an interactive structure corresponding to a finite stochastic normal form game model with players i = 1, 2, …, n (n ≥ 2) whose strategy sets are Si and whose substantive payoff functions πi(s, z) assign a monetary payoff to every strategy vector s = (s1, s2, …, sn) and chance move z ∈ Z.
The πi(·) values merely represent monetary rankings. Unlike conventional utilities, they are not indicative of all evaluative considerations of the interacting individuals. Therefore the preference ranking of results can always deviate from the natural order suggested by the monetary payoffs πi(·). As a participant involved in G – an objective interactive situation – each individual i = 1, 2, …, n can physically perform si ∈ Si.
Human actors do not merely interact in the physical world; they also represent that world to themselves in mental models. Accordingly, assume that each individual i = 1, 2, …, n can and will form a mental model Mi(G) of the interaction. Each of these models represents G in terms of the subjective views of i = 1, 2, …, n. A priori, nothing guarantees that the mental models Mi(G) as well as the first and higher order beliefs etc. of the several individuals i = 1, 2, …, n “coincide” with each other. In view of the complexity of human mental processes and the idiosyncratic nature of individual knowledge, Mi(G) ≠ Mj(G) for i ≠ j should generally be expected to apply. For the theoretician who intends to incorporate human mental processes and reasoning into his models, this creates an obvious difficulty: he would have to form a higher order model M(M1(G), M2(G), …, Mn(G)) containing a representation of all individual models. At least in a way, he would need to know what all individuals know. But it seems impossible to represent all individuals’ idiosyncratic or “local” knowledge.1
To overcome this difficulty, conventional models of interactive decision making typically introduce strong (common) knowledge assumptions. Due to these assumptions, participants of the interaction are, on some abstract level of analysis, dealing with the same mental model as an object of common concern. Beliefs and desires, respectively, are assumed to be given; it is also assumed to be commonly known which of these aspects are private information and which are not.
Though in some extremely simple situations of interactive decision making with clearly defined (competitive) aims, the substantive (typically monetary) payoffs may plausibly be assumed to be common knowledge and, at the same time, dominant concerns of individual decision making, this abstraction loses its plausibility in contexts in which more complex beliefs and desires exert their influence. Then individual rankings of states of affairs may be expected to diverge from easily observable substantive (monetary) payoffs. In such cases “common” (and even the weaker “shared”) knowledge assumptions become exceedingly precarious.2 The mental models become idiosyncratic and can diverge to an extent rendering it impossible to assume that a “common object of reasoning” does in fact exist and is commonly known as a “matter of common concern.”
Though, in general, Mj(G) = Mi(G) = G cannot be ruled out, we intend to consider cases in which beliefs and aspirations (standing for desires) are idiosyncratic. Whenever we want to emphasize the potentially idiosyncratic character of variables, we will indicate this by marking them with a prime. Beyond this merely “cosmetic” step, we will, by means of specific examples, introduce and illustrate several additional elements of a language that can be used to build models of situations of (interactive) decision making that do not rely on the rather extreme assumptions of traditional game theory.3 Basically, our aim is to develop a “theory” rather than only a language. It would, however, be premature to claim that our language allows us to develop a general theory.
In Section 2, we succinctly introduce the basic tools we suggest for building models of (interactive) decision-making situations. In Section 3, we illustrate the intended use of these tools by forming a model of a non-interactive situation that, in Section 4, we extend to interactive situations. While the first two examples are still in line with structures corresponding to stochastic normal form games, the example in Section 5 explicitly models sequential aspects of a process of interactive decision making. In Section 6, we emphasize that what we are proposing is a language to formulate models and not yet a model or theory. The language as such is mostly devoid of empirical content. However, the traditional utility maximization approach does no better in that regard. As we shall indicate, it merely nurtures a delusion of analytical power and predictive content without in fact delivering on its promise. A bounded rationality approach, on the other hand, does not support such illusions. Conclusions follow in Section 7.

2. Language Tools for Model Building

There are n participants i = 1, 2, …, n who are involved in an objective situation of the same structure as G. Assume that each participant i = 1, 2, …, n is aware of the number of others also involved. Let us assume, too, that there is a strategy set for each of the actors as well as a representation for the stochastic aspects of the interaction. Let us further make the rather daring assumption that each actor imagines only sets of strategies and sets of chance moves that are in fact subsets of the sets in G (i.e., all primed variables are elements of the sets to which the underlying variables belong).
Finally, assume that each of the i = 1, 2, …, n participants “subjectively” anticipates (is of the opinion) that she cannot control
  • any chance move z' ∈ Z she might imagine,
  • the strategy constellation s'−i = (s'1, …, s'i−1, s'i+1, …, s'n) ∈ ×j≠i Sj fixed by her co-players j(≠ i).
Being involved in an objective situation of the type G, each participant i = 1, 2, …, n must decide which scenarios (s'−i, z') ∈ (×j≠i Sj) × Z she takes into account in forming her mental model of the situation. Each participant or – in more familiar parlance – player i = 1, 2, …, n forms her idiosyncratic set of “relevant possibilities” or “scenarios,” i.e., a set φi with φi ⊆ (×j≠i Sj) × Z and the interpretation that (s'−i, z') ∈ φi if and only if i does not rule out (s'−i, z').
We refer to φi also as player i’s “belief set.” In doing so, we neither exclude nor require that i has formed – objective or subjective – prior probabilities for the various scenarios in φi. We assume, however, that φi ≠ ∅ for all i = 1, 2, …, n.
Besides beliefs, individuals have desires. Both factors enter the process in which actors form their aspirations for several scenarios they imagine on the basis of their mental models. Each player i = 1, 2, …, n forms an aspiration profile Ai = (Ai(s'−i, z') : (s'−i, z') ∈ φi) specifying a payoff aspiration Ai(s'−i, z') for each of the scenarios. Here we restrict ourselves to real-valued aspirations Ai(s'−i, z') expressing, for instance, monetary success levels. In case of multi-objective decision making, aspiration levels would be vectors of variables which need not be numerical.
Put within a context of repeated decision making, scenarios and/or aspirations may need to be adapted. They may turn out to be too complicated and in need of simplification, or else they may be incomplete and require additions. Moreover, aspirations may be either too ambitious or insufficiently ambitious. If too ambitious, i.e., if Ai(s'−i, z') is not satisfiable, aspirations would, in all likelihood, be lowered; if insufficiently ambitious, they would, in all likelihood, be raised [2]. Both adjustments are suggested by common sense and widely confirmed by numerous empirical studies on actual decision making, including that of corporations [3].4
Aspiration adaptation is a most interesting object of study. Yet for the sake of proceeding with our modest project, let us assume for the time being that some adaptive process has left behind only satisfiable aspiration profiles Ai; i.e., profiles such that there exists at least one strategy si ∈ Si with πi(si, s'−i, z') ≥ Ai(s'−i, z') for all (s'−i, z') ∈ φi. For this set we can now characterize an undominated subset in a conventional manner. An alternative in that subset is “optimal” in that – given the beliefs (s'−i, z') ∈ φi – there is no satisfiable aspiration vector payoff dominating it. Accordingly, we define for each player i = 1, 2, …, n the set Oi of optimal (satisfiable) aspiration profiles Ai for which no satisfiable aspiration profile A°i exists with
∀ (s'−i, z') ∈ φi: A°i(s'−i, z') ≥ Ai(s'−i, z')  and  ∃ (s'−i, z') ∈ φi: A°i(s'−i, z') > Ai(s'−i, z').
Bounded rationality requires that each player i = 1, 2, …, n should generate a belief set φi on the basis of his mental model of a situation, form aspirations Ai(s'−i, z') for all states of affairs, and search for a strategy si ∈ Si satisfying these aspirations. As envisioned here, this neither requires optimality – in the sense of undominatedness – nor excludes it. The core rationality requirement is the weak procedural one of a willingness to revise and to adapt aspirations in the light of evidence.
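To make these definitions concrete, the following minimal Python sketch spells out, for a single player i, the satisficing check and the domination check on aspiration profiles. The payoff table, the two scenarios, and the aspiration values are purely hypothetical illustrations; only the structure (a belief set of scenarios, an aspiration per scenario, and the two tests defined above) is taken from the text.

```python
# Illustrative sketch (hypothetical numbers): which strategies of player i
# satisfy a given aspiration profile over her belief set phi_i, and when is
# one aspiration profile dominated by another satisfiable one?

phi_i = [("L", "g"), ("R", "b")]     # belief set: scenarios (s'_-i, z') not ruled out
S_i = ["top", "bottom"]              # player i's own strategy set

def payoff(s_i, s_minus_i, z):
    """Substantive (monetary) payoff pi_i -- an arbitrary numerical example."""
    table = {("top", "L", "g"): 4, ("top", "R", "b"): 1,
             ("bottom", "L", "g"): 2, ("bottom", "R", "b"): 3}
    return table[(s_i, s_minus_i, z)]

def satisfying_strategies(A_i):
    """Strategies meeting the aspiration A_i[(s'_-i, z')] in every scenario."""
    return [s for s in S_i
            if all(payoff(s, sm, z) >= A_i[(sm, z)] for (sm, z) in phi_i)]

def dominates(A_new, A_old):
    """A_new weakly raises every aspiration and strictly raises at least one."""
    return (all(A_new[sc] >= A_old[sc] for sc in phi_i)
            and any(A_new[sc] > A_old[sc] for sc in phi_i))

A_i = {("L", "g"): 3, ("R", "b"): 1}             # a satisfiable aspiration profile
print(satisfying_strategies(A_i))                 # -> ['top']

# A_i is not optimal: A_better is still satisfiable (by 'top') and dominates it.
A_better = {("L", "g"): 4, ("R", "b"): 1}
print(dominates(A_better, A_i), satisfying_strategies(A_better))   # -> True ['top']
```

In the adaptive reading sketched above, aspirations that leave the list of satisfying strategies empty would be lowered, while profiles dominated by a still satisfiable one would be raised.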
As far as the traditional side of substantive rationality requirements is concerned, it may be worth noting that for deterministic games (#Z = 1) satisfying optimal aspiration profiles via strategies by all players will correspond to equilibrium behavior – i.e., to strategy choices from which no individual player can unilaterally and profitably deviate – if φi = {(s−i, z)} for i = 1, 2, …, n.
There are neither roofs (ˆ) nor primes (′) involved in this condition. In addition to being certain about z ∈ Z with #Z = 1, all n players i form point beliefs ŝ−i regarding the choices of their co-players j(≠ i) that agree with the actual choices s−i = (sj)j≠i of their co-players, i.e., ŝ−i = s−i. In slightly different terms, all players i = 1, 2, …, n entertain rational expectations to which they react optimally. Under rational expectations an equilibrium must prevail if all react optimally. General optimality and rational behavioral expectations by all characterize strategic equilibria [4].
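As a purely hypothetical illustration of this limiting case, the sketch below takes an arbitrary deterministic two-player game (#Z = 1): when each player’s single scenario contains the co-player’s actual choice and each reacts optimally to it, the resulting strategy pair is an equilibrium in the sense just described. The payoff numbers are invented for the example.

```python
# Deterministic limiting case (hypothetical payoffs): point beliefs that agree
# with the co-player's actual choice plus optimal reactions yield a profile
# from which no player can unilaterally and profitably deviate.

payoffs = {  # (s1, s2) -> (pi_1, pi_2)
    ("A", "A"): (3, 3), ("A", "B"): (0, 4),
    ("B", "A"): (4, 0), ("B", "B"): (1, 1),
}
S1 = S2 = ["A", "B"]

def best_reply_1(belief_s2):
    return max(S1, key=lambda s1: payoffs[(s1, belief_s2)][0])

def best_reply_2(belief_s1):
    return max(S2, key=lambda s2: payoffs[(belief_s1, s2)][1])

# Actual choices (B, B); beliefs are "rational", i.e. coincide with actual play.
s1, s2 = "B", "B"
assert best_reply_1(s2) == s1 and best_reply_2(s1) == s2      # mutual best replies

# Hence no unilateral profitable deviation exists: (B, B) is an equilibrium.
assert all(payoffs[(d, s2)][0] <= payoffs[(s1, s2)][0] for d in S1)
assert all(payoffs[(s1, d)][1] <= payoffs[(s1, s2)][1] for d in S2)
print("equilibrium:", (s1, s2))
```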
Equilibrium behavior is not excluded by bounded rationality but is rather a limiting case of it under optimal satisficing and rational expectations. Note, however, that in case of #Z > 1 general optimality need not imply equilibrium behavior. If, for instance, n = 2 and Z = {g, b} with g ≠ b, player 1 may consider only scenarios (ŝ2, g) and player 2 only scenarios (ŝ1, b). Thus, even in case of rational expectations concerning their co-player’s behavior (ŝ2 = s2 as well as ŝ1 = s1), the two players could still disagree on which chance moves they expect. To align optimal satisficing with equilibrium behavior in case of #Z > 1, much stronger assumptions therefore have to be imposed: the players would have to form – objective or subjective – probabilities, agree on these probabilities, and use them in the sense of – objective or subjective – expected utility maximization.
The conceptual apparatus to describe scenario generation, aspiration formation, and adaptation sketched so far shows how a bounded rationality or satisficing approach, which neither excludes nor presupposes optimality, can in principle accommodate a model based on conventional optimality and common knowledge assumptions as a limiting case. This makes the relation of the bounded rationality approach to traditional game theoretic approaches transparent. Yet it is rather unlikely that optimizing behavior in the conventional sense will ever prevail.
Before we return to a more general discussion of methodological issues in Section 6, we will, by means of simple models of “noninteractive” (Section 3), “interactive” (Section 4), and finally “sequentially interactive” (Section 5) decision making, further illustrate some of the uses and features of the conceptual apparatus introduced so far. For our present, modest purposes of “language development” we leave out the crucial procedural aspect of aspiration adaptation.

3. A Noninteractive Situation

Imagine an investor who plans on investing a fixed sum. He wonders which options he might have and which of several possible alternatives he might choose. Assume that the investor is oblivious of other strategic actors or thinks they are irrelevant. He takes into account merely two relevant states of the world that he cannot influence, B and D – Boom and Doom.
The essential aspects of the situation can easily be captured in terms of the conceptual framework outlined above. Each investor i = 1, 2, …, n focuses on z ∈ Z rather than on φi = {(s'−i, z')}.5
Now, let e refer to an initial interest-free credit line that needs to be repaid at a certain future point in time but can be invested in the meantime in
  • liquidity granting a return rate of 1 with certainty,
  • a safe investment yielding a return rate r>1,
  • a risky investment leading to return rate h or return rate l, with h > r > 1.
The unfavorable return rate l(<r) applies after D, the favorable rate h(> r) on condition that B prevails.
A typical investor will treat the states contingent on B and D as scenarios. For these (scenarios) he will form aspiration levels A(D) and A(B) with A(D) ≤ A(B) due to h > l.
Let a, b, c, respectively, refer to investments in liquidity, safe, and risky assets, with a, b, c ≥ 0 and a + b + c = e.6 An investment strategy or portfolio (a, b, c) satisfies both aspiration levels if we have
(A)   a + br + cl − e ≥ A(D)   and   a + br + ch − e ≥ A(B).
Due to r > 1, bounded as well as unbounded rationality require a = 0 since leaving money idle when a risk-free investment with positive returns is available is “dominated.” Assuming a = 0, we can use b = e − c to transform condition (A) to yield
e(r − 1) + c(l − r) ≥ A(D)   and   e(r − 1) + c(h − r) ≥ A(B).
This can be transformed into
[e(r − 1) − A(D)]/(r − l) ≥ c ≥ [A(B) − e(r − 1)]/(h − r).
Requiring optimality – call this requirement (O) – implies
[e(r − 1) − A(D)]/(r − l) = c = [A(B) − e(r − 1)]/(h − r).
Otherwise an improvement in one scenario would be viable without reducing the result in the other. If optimality prevails, there is a linear relationship between A(D) and A(B).7 This is very simple. A rational decision maker may find this out on her or his own or learn it from teaching or through advice. Before commenting on these issues, let us generalize to interactive and then to sequential and interactive situations first.
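For readers who want to see the algebra at work, here is a small numerical sketch of the investment example. All parameter values (e, r, l, h and the aspiration levels) are hypothetical; the formulas are exactly the bounds on c derived above, and the last lines exhibit the linear relation between A(D) and A(B) that optimality enforces.

```python
# Numerical sketch of the investment example (hypothetical parameter values).
e, r, l, h = 100.0, 1.05, 0.8, 1.4      # credit line and return rates, l < 1 < r < h

def c_range(A_D, A_B):
    """Admissible c from  e(r-1) + c(l-r) >= A(D)  and  e(r-1) + c(h-r) >= A(B)."""
    c_max = (e * (r - 1) - A_D) / (r - l)     # upper bound, from the Doom scenario
    c_min = (A_B - e * (r - 1)) / (h - r)     # lower bound, from the Boom scenario
    return c_min, c_max

# A satisfiable but non-optimal aspiration pair: the admissible range is wide.
print(c_range(A_D=2.0, A_B=6.0))              # -> (about 2.86, 12.0)

# Raising A(B) until the two bounds coincide makes the pair undominated:
A_D = 2.0
c_star = (e * (r - 1) - A_D) / (r - l)        # 12.0
A_B_opt = e * (r - 1) + c_star * (h - r)      # 9.2, i.e. A(B) = e(r-1) + (h-r)/(r-l) * [e(r-1) - A(D)]
print(c_star, A_B_opt, c_range(A_D, A_B_opt)) # -> roughly (12.0, 12.0): a degenerate range
```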

4. An Interactive Situation

Market interaction is, of course, considered the paradigm interactive situation in economics. Therefore it is natural to use it as an example of how to model interactive decision making within the framework proposed here.
Imagine a simple oligopolistic market on which sellers i = 1, 2, …, n offer imperfect substitutes and perceive themselves as facing constant unit costs. For the sake of simplicity, set the cost to zero. Moreover, let that perception be correct in all cases. Let si be the price choice of player i = 1, 2, …, n and xi (≥ 0) the units sold by i = 1, 2, …, n. Then πi = si xi is the relevant measure of success for i = 1, 2, …, n.
Participants of the interaction will, as a rule, operate on the basis of boundedly rational conjectures and aspirations. They are objectively involved in an oligopolistic interaction situation but do not necessarily look at it in terms of oligopoly theory. We assume that they treat the situation as “broadly” oligopolistic. As participants of the interaction, they formulate a mental model to represent the interdependencies of their choices to themselves. All i = 1, 2, …, n understand that their sales xi will depend both on how they themselves and how the others fix their prices. It seems likely that the actors do not take into account the whole price vector as charged by their competitors but rather something like the average of other prices. Though it seems doubtful that they would follow a complicated way of calculating that average, their mental model may well focus on the sum of the other prices divided by (n - 1). For each individual we have:
si – seller i’s own price choice,
s̄'−i := (1/(n − 1)) Σj≠i s'j – the other sellers’ choices, as expected by i (their average).
Even rather simple-minded sellers will correctly perceive that quantities xi, i = 1, 2, …, n, brought to the market by different sellers, are substitutes. They will form mental models that have at least the property of being endowed with a negative partial in own price setting and a positive partial in the average price fixing by others.
xi = fi(si, s̄'−i),  with ∂fi/∂si < 0 and ∂fi/∂s̄'−i > 0.
When going beyond this qualitative prediction, most individuals would presumably base their mental representations of the underlying interactive decision-making situation on linear relationships that exhibit the preceding directional changes as, for instance,
xi = fi(si, s̄'−i) = 1 − αsi + β(s̄'−i − si),
where the appropriate directional changes apply for α, β > 0.
The scenarios a boundedly rational actor might take into account would in all likelihood be rather few. Something like high, medium, and low sensitivity, h, m, l, respectively, to changes in either own or other price setting would presumably exhaust the possibilities taken into account by the boundedly rational. Some specific values of α, β > 0 with αh > αm > αl > 0 and βh > βm > βl > 0 will form the corresponding set of pairs z = (α, β) ∈ Z. Accordingly, scenarios of each individual should be expected to be of the form sci = (s̄'−i, α, β) ∈ φi, meaning that i does not rule out (s̄'−i, α, β). After forming the set of all scenarios not excluded from consideration, the aspiration profile would be (Ai(sci) : sci ∈ φi). For this aspiration profile each individual i seeks a price choice si such that the aspirations are predicted to be fulfilled for each of the scenarios in φi.8
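The following sketch illustrates, with invented numbers, how such an oligopolist’s deliberation might be operationalized: scenarios sci = (s̄'−i, α, β) built from a few conjectured average prices and the h/m/l sensitivity levels, one profit aspiration per scenario, and a grid search for own prices whose predicted profit si·xi meets the aspiration in every scenario. The demand specification is the linear form given above; everything else (parameter values, aspiration level, price grid) is an assumption made for illustration only.

```python
# Hypothetical oligopoly sketch: satisficing price choices over a small belief set.

def demand(s_i, s_bar, alpha, beta):
    """Perceived sales x_i = 1 - alpha*s_i + beta*(s_bar - s_i), truncated at 0."""
    return max(0.0, 1.0 - alpha * s_i + beta * (s_bar - s_i))

alphas = {"l": 0.05, "m": 0.10, "h": 0.20}   # own-price sensitivities (invented)
betas  = {"l": 0.02, "m": 0.05, "h": 0.10}   # cross-price sensitivities (invented)

# Belief set phi_i: conjectured average prices of the others combined with the
# sensitivity scenarios that player i does not rule out.
phi_i = [(s_bar, a, b) for s_bar in (4.0, 6.0) for a in ("m", "h") for b in ("m",)]

# Aspiration profile: here a flat profit aspiration for every scenario.
A_i = {sc: 1.0 for sc in phi_i}

def satisfices(s_i):
    return all(s_i * demand(s_i, s_bar, alphas[a], betas[b]) >= A_i[(s_bar, a, b)]
               for (s_bar, a, b) in phi_i)

grid = [k / 2 for k in range(1, 21)]          # candidate prices 0.5, 1.0, ..., 10.0
print([p for p in grid if satisfices(p)])     # -> [1.5, 2.0, 2.5, 3.0, 3.5]
```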
We believe that the preceding sketch of a boundedly rational approach would apply even if the actors participating in the oligopolistic interaction had access to economic consulting and some kind or other of econometric analysis of former market performance. In any event, we are content to let the discussion of the example rest at this point and turn to our final example of a sequential interaction. The two preceding cases (with only one actor and/or only one stage, respectively) could obviously be regarded as limiting cases of the next one. Conversely, the next case, though fairly special in other regards, is general in that it contains interactive decision making and stages of it.

5. The Ultimatum Game Example

The strategic game metaphor relied upon in the preceding discussion conceals the fact that strategies are composed of moves (in the limit a single move). Since actors can only choose moves in a game and cannot “choose” but only “plan on” strategies (as planned sequences of moves), a framework that can be applied by the decision makers themselves to the action situation as perceived by themselves should allow for an explicit representation of the sequential nature of moves as they occur in real world interactions. To illustrate how this can, in principle, be accomplished within the bounded rationality framework envisioned here, we turn to a simple paradigm example: an ultimatum game-type sequential interaction. In contrast to the standard variant of that game, we include a stochastic outside option so as to guarantee again the presence of both strategic and stochastic elements.

5.1. An Ultimatum Interaction with an Outside Option

In the familiar ultimatum interaction, a pie p is to be allocated according to a proposal, y, 0 ≤ y ≤ p, made by a “proposer,” X, to a “responder,” Y. Should Y accept the proposal, the payoff vector will be (p - y, y) – with the first payoff accruing to X and the second to Y.
In the standard case of an ultimatum interaction, both actors receive a zero payoff if Y rejects offer y. As a slight modification of the standard case, let proposer X receive nothing, 0, while Y receives z, z ∈ (0, p/2) = Z, if Y rejects the offer.9 We assume that both X and Y know that z ∈ (0, p/2) and that X does not know which z applies when choosing y, whereas Y is informed about z when responding to y. Indicating for a given z that Y accepts y by g(y, z) = 1 while using g(y, z) = 0 for rejection, we can sum up the resulting payoffs as g(y, z)(p − y) for X and g(y, z)y + [1 − g(y, z)]z for Y.
In terms of the previous “strategic form” characterization of the interaction situation we get
G = (SX, SY; πX(·), πY(·); Z),
with sX ∈ SX = [0, p] and the list of responses sY(sX) = (g(sX, z))z∈Z ∈ SY.10
The proposals y = sX that X makes to Y are chosen from [0, p]. Responder Y’s reactions depend on the stochastically determined outside option z ∈ Z = (0, p/2). X does not know which z ∈ Z applies and therefore has to work with scenario vectors for all z taken into consideration. For the sake of simplifying the subsequent illustrative discussion, we assume that – as a matter of fact, a behavioral law – X assumes that Y will reject offers y ≤ z and accept all offers y that go beyond his reservation payoff z ∈ Z. Empirically, this is a precarious assumption since g(y, z) = 1 might frequently not apply for small y − z > 0. Yet for the sake of the argument, this additional complication is left out of account here. Moreover, we assume that X will consider only a subset of the possible offers y and of z ∈ Z.
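To fix the bookkeeping, here is a small sketch of the payoff rules of this modified ultimatum game together with the (empirically precarious) acceptance rule just assumed, namely that Y rejects y ≤ z and accepts y > z. The pie size and the sample values of y and z are arbitrary.

```python
# Modified ultimatum game: payoffs g(y,z)(p-y) for X and g(y,z)y + (1-g(y,z))z for Y.

p = 10.0                              # size of the pie (hypothetical)

def g(y, z):
    """Acceptance indicator under the assumed behavioral rule: accept iff y > z."""
    return 1 if y > z else 0

def payoffs(y, z):
    """(payoff of proposer X, payoff of responder Y) for offer y and outside option z."""
    accept = g(y, z)
    return accept * (p - y), accept * y + (1 - accept) * z

print(payoffs(y=3.0, z=2.0))          # accepted: (7.0, 3.0)
print(payoffs(y=3.0, z=4.0))          # rejected: (0.0, 4.0) -- X gets 0, Y keeps z
```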

5.2. Mental Models of Ultimatum Interactions

Since a bounded rationality approach does not presuppose so-called rational expectations, it must specify how individuals will perceive situations. In particular, their beliefs about law-like regularities concerning the behavior of others must be mentally represented. A bounded rationality approach, as opposed to a straightforward empirical psychology approach, assumes the presence of some – “psychologically plausible” – form of rationality as an a priori constraint on modeling. In a way, this recalls the principle of causality that guides our modeling of the world, assuring us that for all events there is a cause, a kind of a priori structure to our mental representations.11 Likewise the assumption that all actions of other individuals allow for an understanding and/or explanation in terms of the intentional pursuit of aims, ends, or values restricts the ways of model building to a specific class.
In view of the preceding “normative presupposition,” it is assumed in the case at hand that X is aware of the z-dependence of responder reactions to his offers in the ultimatum interaction and that he conceives of them as purposefully rational. In the simple “either-or case” the beliefs g’(·) concerning g(sX, z) are represented by g’(sX, z).12 The set of options S’X that proposer X seriously considers is finite. It contains a “select few” values from the full range [0, p] of possible offers. For these values X anticipates the reactions of Y. Actor X forms opinions on the reactions of Y exclusively with respect to S’X. Depending on the reservation payoff z and the size of the offer y, the value of g’(y, z) represents X’s belief of whether or not offer y ∈ S’X will be accepted by a responder Y with reservation payoff z.13
A boundedly rational actor does not necessarily – and probably will not – have beliefs that could adequately be represented by a probability distribution concerning the prevalence of responses over z ∈ Z= (0 , p/2). It seems realistic that typical actors will consider relatively few values z’k ∈ (0 , p/2), k = 1, 2 , …, K. Let the imagined conflict payoffs be from the true range, i.e., z’ ∈ Z= (0 , p/2) and let them be ordered according to size with z’k < z’k+1, for all k = 1, 2 , …, K-1. The “subjective uncertainty” concerning the states of the world X perceives as relevant is captured by
Z’X = {z’1, z’2, …, z’K}.
The boundedly rational proposer X will consider only proposals y ∈ S’X such that there is z’ ∈ Z’X with y ≥ z’; i.e., there exists at least one z’ ∈ {z’1, z’2, …, z’K} such that y would be accepted. Since p − y ≥ 0, such a proposal, given the assumed beliefs of X, dominates all strategies sX = y with y < z’1. To the extent that X trusts in his own model and relies on it in a consistent manner, he should also endorse the view that any y ≥ z’K – and therefore y = p/2 and all larger offers y – will be accepted with “certainty.”14 In view of the latter, a boundedly rational actor might want to take y = p/2 into account as the largest offer.
In view of the preceding, a boundedly rational proposer X should consider a small number L of offers yj, j = 1, 2, …, L ≤ K + 1, with 0 < z’1 ≤ y1 < y2 < … < yL−1 < yL = p/2, for each of which he expects some z’ ∈ Z’X with z’ ≤ yj. In view of the anticipated states of the world, the space S’X becomes
S’X = {y1, y2,…,yL-1, yL}.
Within a bounded rationality framework, the set of hypotheses g’ concerning g is constrained by requirements of bounded rationality. Corresponding to a variant of what may be called “boundedly rational expectations,” we assume that a boundedly rational actor who believes in the bounded rationality of his interaction partner should also believe that an offer y≥ z’h, 0<h<K, will be accepted by Y and should endorse beliefs such that g’(y, z’j)≥ g’(y, z’j+1) for all j ∈{1, …, K-1}. Likewise for all z ∈ Z’X we would then have g’(yj+1, z)≥ g’(yj, z) for all j ∈{1, …, L-1}. Within an empirical context, it would have to be checked whether the bounded rationality constraint on the function g’ is actually fulfilled or not. We assume here that it is.15
In view of the preceding, the scenarios that X considers are assumed to form a nonempty set
ScX ⊆ {scX=(g’(y, z), z): z ∈ Z’X and y ∈ S’X}.
The elements of ScX give rise to aspirations. These aspirations can be collected in the aspiration set of X, which is
ÅX = {AX(scX) ∈ (0, p] : scX ∈ ScX}.
The monotonicity assumptions outlined before should carry over to AX(scX); i.e., AX(scX) = AX(g’(y, z), z) should neither increase in y nor in z.
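To pull the pieces of the proposer’s model together, the sketch below operationalizes one possible reading of the preceding paragraphs: a few considered outside options Z’X, a few candidate offers S’X with yL = p/2, monotone beliefs g’, and a search for offers whose anticipated payoff g’(y, z’)·(p − y) meets X’s aspiration in every scenario he does not rule out. The concrete numbers, the flat aspiration level, and the way aspirations are indexed by z’ are assumptions made for illustration; they are not taken from the paper.

```python
# Hypothetical sketch of proposer X's boundedly rational deliberation.

p = 10.0
Z_X = [1.0, 2.5, 4.0]                  # considered outside options z'_1 < z'_2 < z'_3
S_X = [1.5, 3.0, 4.5, 5.0]             # considered offers, the largest equal to p/2

def g_prime(y, z):
    """X's belief whether offer y is accepted given outside option z.
    Here simply 'accept iff y > z', which respects the monotonicity
    requirements stated above (nonincreasing in z, nondecreasing in y)."""
    return 1 if y > z else 0

A_X = {z: 4.0 for z in Z_X}            # a flat payoff aspiration per scenario

def satisfices(y):
    """Does the anticipated payoff meet the aspiration in every considered scenario?"""
    return all(g_prime(y, z) * (p - y) >= A_X[z] for z in Z_X)

print([y for y in S_X if satisfices(y)])   # -> [4.5, 5.0]
```

In this example, offers below the largest considered outside option fail because X anticipates rejection, and hence a zero payoff, in at least one scenario; among the remaining offers, aspiration adaptation would decide how far X trades off a wider acceptance margin against a smaller residual p − y.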
Obviously, we could continue here with further specifications and discussions of models other than those hitherto considered. However, the discussion so far should suffice to illustrate, in principle, the central issues involved when it comes to choosing between a bounded and a full rationality approach to interactive decision making. Therefore we now turn to the relative merits of the two approaches.

6. Languages and Theories in Decision Making

In repeated interaction, adaptation and blind selection of more successful forms of behavior will often appear “as if” they are the outcome of a perfectly rational forward-looking choice. This coherence of observed behavior with ideal theory implications should not be misread as implying that perfect rationality provides an explanation of that behavior. In repeated choice making, empirically valid explanations of apparently forward-looking rational behavior will characteristically be in terms of adaptive selection processes and not in terms of forward-looking choice making.
Since we focus on non-repetitive choice making, the interesting relation between teleological and evolutionary views on choice making – often mistakenly viewed as “justification” of traditional explanations in terms of forward-looking perfectly rational choice – has not been the focal point of the preceding discussion. In nonrecurrent situations of choice making, rational decision makers should make up their mind in terms of forward-looking deliberations without having to rely on routines and prescriptions adapted to recurrent problems.16 The underlying mental processes are “teleological” or “deliberative” rather than adaptive or “evolutionary.” The language proposed above and illustrated by specific examples is intended to relate to such forward-looking decisions. Its basic categories are meant to capture what happens in boundedly rational decision making in one-off decision situations as they occur, for instance, in real business life.
In claiming that the language is suitable to describe what occurs in boundedly rational decision making, we do not have to assume that the boundedly rational actors express themselves in the same theoretical terms as proposed here. For instance, should they be asked to give an account of what they are doing, they would typically not use mathematics to describe it. We claim, however, that individuals basically behave in ways corresponding to the mathematical terms of the theoretical language. The description may be in terms other than those used by the actors, yet they reason and behave as described.
Traditional rational choice approaches may be interpreted as mere languages. Quite tellingly, more often than not they claim to describe phenomena “as if” these were the outcome of fully rational choice making. The language of boundedly rational choice making outlined before goes beyond such “as if” claims (which are typically justified in adaptive terms anyway). We claim that, as opposed to the accounts of standard rational choice approaches, individuals do not merely behave “as if” they were forming scenarios, aspirations, etc. They may not decide all their behavior by relying on boundedly rational deliberation; but to the extent that they are in fact boundedly rational, they should “more or less” engage in the tasks described in the proposed theoretical categories.17
Obviously, rationality requires more than just a description of average behavior. What is implied here is a kind of “technological norm” of how individuals should behave if they intend to reach certain aims or ends. This “norm” stipulates the requirements that must be fulfilled if the behavior in question is to count as rational behavior rather than merely as behavior. As far as minimum “normative” standards are concerned, rational choice analysis and psychology differ. However, the role of the normative element should not be overemphasized. The ways in which individuals who are not described as irrational do behave matter for the standard of what is regarded as rational. The practices and standards that real actors who are obviously not ‘out of their mind’ or overexcited would mention – if asked to give a stylized account of “good behavior” – are normatively relevant.18
To put it slightly differently, the theorist of rational behavior has to start where boundedly rational individuals actually “stand,” give a stylized account of this in an effort to identify the characteristically rational, and then form certain proposals for how to improve behavior. He will derive his proposals in a more or less local manner, starting from the actual practices and mental processes of real actors who are nevertheless interested in improving their behavior. Real actors are the theoretician’s addressees.
If used by a theoretically trained counselor, the language proposed above seems well adapted to the purpose of representing idiosyncratic knowledge of law-like regularities and to planning, within a weakly normative perspective, purposeful interventions in the course of the world. Conceivably, there may be no general regularities that apply across local games and could be known to researchers. What we can offer so far is a conceptual scheme that may be used to organize and store local knowledge available in a rather idiosyncratic form. Our knowledge so far is not in the language, but the language is well adapted to represent the knowledge relevant to the practical decision making of real individuals.19
These individuals think in terms of scenarios. They are not endowed with “given” preferences and “beliefs” but have to develop and to form them. They need to build up boundedly rational expectations rather than being blessed with so-called “rational expectations.”

7. Conclusion

The advocates of traditional rationality concepts are going far beyond realistic idealizations. The requirements they impose on behavior and deem rational are developed according to a priori standards. They do not start from real behavior that is to be idealized in certain aspects. Their point of departure is an ideal point whose very nature and characteristics are determined independently of factual behavior in an a priori manner. The result is a fascinating world of ideally rational beings in which we ourselves, as boundedly rational beings, take an interest. Ideal game theory is a way to learn about possible worlds. However, we insist that the world of ideal rationality is not the real world.
In complex human affairs the only knowledge that we really have is local. To represent this kind of knowledge, a language such as that sketched earlier is more adequate than that of traditional rational choice. F.A. Hayek’s warning that, at least in practice, it seems impossible to know globally in some super-mental model M(M1(G), M2(G), …, Mn(G)) what everybody knows locally should be taken seriously here. Who is to be in command of the global mental model M? We can hardly assume that this will be one of the individuals i = 1, 2, …, n. A godlike super-rational theoretician does not exist either. We are clearly back to square one of the original puzzle of what I think that you think that I think …
In proposing our language which, as illustrated by our examples, can be rigorously defined in specific contexts, we agree with the traditional rational choice theorist that science has to work with simplifications based on abstractions and accept finding ourselves “back to square one.” Good science transforms “rich descriptions” that try to represent a situation along all dimensions we are aware of into “lean descriptions” of theoretical interest. Lean descriptions or stylized accounts focus merely on one or a few aspects deemed significant.
What is classified in this process as a significant or, in more traditional terms, an “essential” aspect is not (solely) a matter of pre-theoretical facts. It rather depends on the theoretical framework within which the situation is approached and re-constructed for theoretical purposes. The language proposed here will make us focus on the right aspects of a situation. It will make us ask the right questions when modeling the situation as one in which (boundedly) rational beings of the type of Homo sapiens rather than rational economic men interact. Moreover, it will be useful to build bridges between psychology and normative rational choice theory adapted to the needs and practices of Homo sapiens.
  • 1An argument forcefully made in [1].
  • 2Simply put, in characterizing representative utility u, we cannot have it both ways: maintain the common accessibility of substantive payoffs π as determining u as well as the flexibility of subjective rankings which may diverge from substantive payoffs π in any arbitrary way and yet be represented by u.
  • 3Although we often simplify in a rather daring way as well, the underlying assumptions are not constitutive or defining characteristics of our language. Our vocabulary and grammar could in principle be amended any time, whereas the basic assumptions of the rational choice approach cannot be modified or given up without modifying or giving up the approach itself.
  • 4It is useful to distinguish purely deliberational from experiential adaptation. The first occurs if an individual, by means of a mental model, anticipates certain consequences of action and comes to the conclusion that aspirations are inadequate, the second emerges if the individual experiences certain consequences of action and learns from them to adapt.
  • 5The vector (s−i) is considered as constant or irrelevant.
  • 6The uncertain “z” of our general framework refers to boom and doom here.
  • 7Relying on a stated as opposed to a revealed preference (or probability) approach, satisficing and optimal satisficing can be related to empirical evidence. On this basis hypotheses are then open to empirical testing.
  • 8Again it is possible to directly elicit belief sets and aspiration profiles. Then rigorous definitions of satisficing and optimal satisficing will emerge with respect to market behavior, and empirical analyses become viable; for this, see [5].
  • 9Again the set Z contains results of a chance move.
  • 10g(sX,z) ∈{0, 1}, for all sX ∈SX, z ∈Z=(0, p/2).
  • 11A so-called all-and-some clause that is – due to the “all” part – neither verifiable nor – due to the “some” part – falsifiable, yet a meaningful, practical guide in building mental models.
  • 12If we assume that (g’(sX,z)z∈Z)= (g(sX,z)z∈Z), the remaining uncertainty of X is solely about which z∈Z=(0 , p/2) applies and not about the behavioral relationship per se.
  • 13In the probabilistic case, the function g’ would assign values according to g’: S’X × (0, p/2) → [0, 1] and assign for y ∈ S’X probabilities to g(y) = 1 and g(y) = 0, respectively.
  • 14Recall that our empirically precarious simplifying assumption guarantees acceptance.
  • 15Note that this does not imply that g’ could not misrepresent g. It only means that certain monotonicity properties are shared by both g’ and g.
  • 16The adaptive toolbox that may be very useful in repetitive contexts is of limited value in non-repetitive decision making – or for the “exceptions” in the management-by-exception metaphor.
  • 17This is rather different from making it – by means of a rational expectations assumption – one of the requirements of the applicability of the rational choice approach that individuals do in fact behave as assumed in the approach.
  • 18The methodology is the same as that suggested by Nelson Goodman to give an account of inductive reasoning. The actual practices matter and are reconstructed in a stylized way to derive standards of good practice and to improve them normatively in certain aspects; see [6].
  • 19The relations to Hayekian notions of the necessarily local character of knowledge are again obvious [1].

Acknowledgments

We gratefully acknowledge the constructive and very helpful comments of two anonymous referees as well as of the Editor-in-Chief of Games, Carlos Alós-Ferrer, leading to a major revision of this paper.

References

  1. von Hayek, F.A. The Use of Knowledge in Society. Am. Econ. Rev. 1945, 35, 519–530. [Google Scholar]
  2. Sauermann, H.; Selten, R. Anspruchsanpassungstheorie der Unternehmung. Z. Gesamte Staatswiss. 1962, 118, 577–597. [Google Scholar]
  3. Cyert, R.M.; March, J.G. A behavioral theory of the firm; Prentice-Hall: Englewood Cliffs, NJ, USA, 1963. [Google Scholar]
  4. Aumann, R.J.; Brandenburger, A. Epistemic conditions for Nash Equilibrium. Econometrica 1995, 63, 1161–1180. [Google Scholar] [CrossRef]
  5. Güth, W.; Levati, M.V.; Ploner, M. Satisficing in strategic environments: a theoretical approach and experimental evidence. J. Socio-Econ. 2009. [Google Scholar] [CrossRef]
  6. Goodman, N. Fact, Fiction, and Forecast; Harvard University Press: Cambridge, MA, USA, 1955. [Google Scholar]
