Article

Cheap Talk with Transparent and Monotone Motives from a Seller to an Informed Buyer

1 Department of Mathematics, Pohang University of Science and Technology (POSTECH), Pohang 37673, Republic of Korea
2 Department of Economics, Kyung Hee University, 1 Hoegidong, Dongdaenumku, Seoul 130-701, Republic of Korea
* Author to whom correspondence should be addressed.
Games 2024, 15(3), 20; https://doi.org/10.3390/g15030020
Submission received: 14 March 2024 / Revised: 21 May 2024 / Accepted: 29 May 2024 / Published: 31 May 2024

Abstract
We develop a model of cheap talk with transparent and monotone motives from a seller to an informed buyer. By transparent and monotone motives, we mean that the seller’s preference does not depend on the state of the world and is increasing in the choice(s) of the buyer regardless of the state of the world. We first show that if the buyer is completely uninformed, only the babbling equilibrium exists. Then, we obtain our main result that even if the buyer has the slightest information, full revelation can be supported by using the crosschecking strategy of the buyer if and only if the seller has a CARA (constant absolute risk aversion) utility function, unless the buyer has too much information. In this equilibrium, the buyer can punish the seller who sends a message far above the buyer’s information by ignoring the seller’s message. Paradoxically, both no information and too much information on the buyer’s side eliminate the fully revealing equilibrium with the crosschecking strategy. We also obtain a counterintuitive result that the seller prefers a more informed buyer to a less informed one.

1. Introduction

People often obtain advice from experts. Lawyers give legal advice to clients. Patients obtain medical advice from doctors. Mechanics recommend some repair services to customers. Academic advisers give their opinions on theses of students. In those cases, advisees are often not completely ignorant of the subject, although it is true that they are less informed than the experts.
However, in the literature on cheap talk games that follows [1], which will be abbreviated as CS hereafter, the feature that the receiver (she) is also partially informed has been largely neglected1. It seems obvious that if the receiver is informed to some extent, the message of the sender (he) will be affected by the extent to which the receiver is informed. Then, a natural question would be whether the informativeness of the receiver can discipline the sender’s message. How different could a mechanic’s recommendations to a complete novice and to a customer with some expertise be? Do they make a more honest recommendation to knowledgeable customers? Are they more fraudulent to novices? Does an academic adviser write a recommendation letter for their student differently from someone who knows them a little more than someone who has no prior information about the student at all? Will a professor who has some information about the student interpret the same recommendation letter differently? Are lawyers more honest with professional clients in their legal service? In general, does an advisee’s partial information always make the recommendation of the expert credible and make them worse off? If so, how much?
As an example, which is borrowed from Chakraborty and Harbaugh (2010) [7] and will be used as the main scenario in our paper, consider a salesperson who sells a product. If they are paid based on their sales performance, they will not have a proper incentive to be honest about the quality of the product. Such an incentive to exaggerate the quality may be disciplined if the consumer is partially informed about the quality themselves. Indeed, many firms hire marketing experts to sell the products they want to sell using deceitful marketing practices, and consumers’ lack of information often tricks them into purchasing products that turn out to be unsatisfactory. If a consumer does not have all of the needed information about a product, they may mistakenly purchase it, but proper information about the product may protect them from being deceived into purchasing an unnecessary amount of goods.
In this example, the utility of the seller increases with the quantity purchased by the buyer regardless of the quality of the product. In other words, the seller has a transparent motive in the sense that their preference does not depend on their information about the quality, and a monotone motive in the sense that they always prefer the buyer to purchase more2. There was a widely held conjecture that if the sender has transparent motives, credible communication is not possible and only the babbling equilibrium exists, since the interests of the sender and the receiver conflict too much. But the falsity of the conjecture was shown by Seidmann (1990) [2] when the receiver has some private information and their action is one-dimensional, and by Chakraborty and Harbaugh [7] when the private information of the sender is multi-dimensional3.
Also, consider the example of a mechanic, which is often used in the literature on experts starting from [8]. In this example, an informed expert recommends a certain kind of repair (expensive or inexpensive) to a consumer. It is clear that an expert need not be honest at all if the performance of his repair is unobservable to the consumer. However, there may be several devices for disciplining fraud. Pitchik and Schotter (1987) [8] considered second opinions obtained by searches, and Wolinsky (1993) [9] considered the expert’s reputation. In this paper, we consider the consumer’s expertise as another disciplining device. Besides, in the model of Pitchik and Schotter (1987) [8], an expert’s recommendation is not cheap talk, but a binding option like a repair price. Our paper is distinguished from theirs in the sense that we are mainly interested in the payoff-irrelevant message of an expert.
In Crawford and Sobel (1982) [1], the preferences of a sender and a receiver are single-peaked. For example, we can consider the utility functions of a sender and a receiver given by $-(a-\theta)^2$ and $-(a-\theta-b)^2$, respectively, where $a$ is the receiver’s payoff-relevant action, $\theta$ is the state of the world, and $b\,(>0)$ is the conflict of interest between them. If we take two actions $a$ and $a'$ such that $\theta < \theta + b < a < a' < \theta' < \theta' + b$, it is easy to see that both the sender and the receiver prefer $a$ to $a'$ for $\theta$, whereas both prefer $a'$ to $a$ for $\theta'$, implying that there is some common interest. Owing to the common interest, the message inducing $a$ and the one inducing $a'$ can contain some truthful information about $\theta$. However, in the situation we consider in this paper, the preference of the sender (seller) is not single-peaked. They always prefer $a'$ to $a$ for any pair of actions $a < a'$, although the receiver (buyer) may prefer $a$ to $a'$ if $\theta$ is small and vice versa if $\theta$ is large. Thus, the communication mechanism of Crawford and Sobel (1982) [1] does not work in this setting. However, if the informed buyer has an outside option not to buy from the seller but from other sellers, it can make credible their threat to penalize an extremely high message of the seller, thereby dissuading the seller from inflating their message.
Following the spirit of Chakraborty and Harbaugh (2010) [7], we will consider a seller (sender) and a buyer (receiver) in this paper and address the question of whether more informativeness of the buyer prompts the seller to communicate truthfully when the sender has transparent and monotone motives.
We mainly argue that there is a truthfully revealing communicative equilibrium in a fairly general setting with an unbounded state space. The difficulty in our setup arises mainly because the assumption of normally distributed noises with full supports leaves no off-the-equilibrium messages at all. Even if the message is too high or too far from the buyer’s information, it is a possible event, although its likelihood is very low. So, the buyer cannot conclude that it is a consequence of the seller’s lying. This makes it difficult to penalize strongly enough a seller who sends a higher message than the true value. However, as long as the buyer has a two-dimensional action space, for example, if they can buy the product either from the seller or from other sellers, they can choose one action (which is relevant to the seller’s utility) to discipline the seller’s message and choose another (which is irrelevant to the seller’s utility) as their best response to the seller’s message based on the consistent posterior belief4.
To support the truth-revealing outcome as an equilibrium, we use a specific form of strategy for the buyer, which will be called a “crosschecking strategy”. By a crosschecking strategy of the buyer, we mean a strategy whereby the buyer responds to the seller’s message based on the belief that the message is true only if it is congruent enough with the buyer’s information, in the sense that it falls within the normal (confidence) range, i.e., it is not too far above the buyer’s information, and that the message is exaggerated if it falls in the punishment range, i.e., it is much higher than the buyer’s information. Under the crosschecking strategy, the seller always sends a truthful message, but the message can fall either into the normal region or into the punishment region in equilibrium. In the former case, the buyer buys the amount corresponding to the seller’s message from the seller, while in the latter case, they buy a smaller amount than the seller’s message from the seller and buy the rest (the difference between the seller’s message and the amount purchased from the seller) from other sellers. We will call the minimum distance between the buyer’s information and the punishment range the tolerance in deviation.
Our recourse to crosschecking strategies can be justified by casual observations. People often ignore the opinion of experts when it is strongly opposed to what they believe, inter alia, if the experts have a strong motive to go a certain way5. A non-expert tends to use their own information to tell whether the expert is deceiving them or not, and to decide whether to resort to a second source by exercising the outside option if the option is available6.
We first show that unless the seller’s information and the buyer’s information are both noiseless, the crosschecking strategy cannot induce full revelation if the utility function of the seller is linear in the buyer’s choice, since the penalty for inflating the seller’s information is then not severe enough. We next show that if the utility function of the seller is strictly concave, full honesty of the seller is possible by using the crosschecking strategy even if the buyer has only the slightest information about the state of the world, unless their information is too precise. In this case, strict concavity of the seller’s utility function can make the penalty for inflating the message exceed the reward from it, so it can discipline a seller who is tempted to lie. However, if the buyer is too well informed, such truthful communication may not be possible. The rough intuition behind this result goes as follows. The deviation tolerance can become larger as a well-informed buyer becomes much better informed, so the seller will have an incentive to inflate their message. This surprising result, that more information of the buyer may hinder effective communication and consequently hurt them, counters the widely held perception that better information of the buyer will pay off by disciplining the seller’s message. We obtain another counterintuitive comparative static result that the seller as well as the buyer is better off as the buyer becomes more informed. This result arises mainly because less informativeness of the buyer may bottleneck the transaction between them. Finally, we show that the seller having a CARA (constant absolute risk aversion) utility function is the necessary and sufficient condition for the existence of a fully revealing equilibrium with the crosschecking strategy.
If the buyer has no information, or completely noisy information in the sense that the variance is infinite, it is clear that only the babbling equilibrium, in which no meaningful message is conveyed, exists. It is interesting that the seller’s message cannot be fully revealed in two opposite extreme cases, i.e., either if the buyer has no information or if they have very precise information. In the former case, the seller can say anything because the buyer has no way to check the honesty of the message, due to a lack of information or useless information. In the latter case, the seller has no reason to be honest because there is little possibility that the buyer will receive an unacceptable message from the seller, due to the high precision of their information.
There is some literature on cheap talk with transparent motives. Chakraborty and Harbaugh (2010) [7] and Lipnowski and Ravid (2020) [12] are the most notable examples. The term transparent motives is used to mean that the informed sender does not care about the state but only about the receiver’s action. The authors of both papers show that cheap talk can be informative even if the sender has a transparent motive. The main difference between our model and theirs is that the receiver is also (partially) informed in our model. In Chakraborty and Harbaugh (2010) [7], the informativeness of cheap talk relies on the multi-dimensionality of the state variable, which implies that the receiver cares about multiple issues rather than one issue; this is not assumed in our model. Lipnowski and Ravid (2020) [12] introduce the possibility of an informed receiver in an example of a salesperson selling several products, but they use a different assumption that the receiver has information not about the valuations of the products but about the valuation of their outside option. In their model, the sender’s information and the receiver’s information are independent (i.e., the receiver’s information does not give meaningful information about the sender’s information), so our crosschecking strategy cannot be used in their setting. The main source of the effectiveness of the crosschecking strategy in our paper is the correlation between the two pieces of information of the seller and the buyer.
The possibility of an informed receiver in a cheap talk game has been considered by several other authors. However, this paper provides a quite different setup and a new insight. Above all, as we mentioned above, we assume that the sender has a transparent and monotone motive. Besides, we consider a situation in which the support of each player’s information is unbounded under the assumption of normally distributed noises, whereas all other papers assume a finite or bounded support7. A direct consequence of the normal distribution assumption is that there is no off-the-equilibrium message. That is, it is impossible to detect a false message for sure, because any message is possible in equilibrium. Therefore, we cannot use Lai’s (2014) [5] punishment strategy based on a proper off-the-equilibrium belief arising from inconsistent supports between the sender’s message and the receiver’s information, and we have no reason to resort to Watson’s (1996) [3] matching strategy, because the sender’s message and the receiver’s information do not match with probability one even if the sender is honest, as long as the information of both players has noise with unbounded support. Since there is no off-the-equilibrium path and no off-the-equilibrium posterior belief in our model, the receiver cannot penalize a sender who is presumed to have sent a false message. In our paper, however, the receiver can penalize the sender by a crosschecking strategy. Owing to this strategy, the receiver can choose the action favorable to the sender if the sender’s message is not too far above their own information, and choose the action unfavorable to them if it is too high.
Seidmann (1990) [2] also showed the possibility that cheap talk could influence the receiver’s equilibrium actions even when the sender has transparent and monotone motives if the receiver is privately informed, but they did not answer the question of whether more information about the receiver makes honest communication of the sender easier or more difficult. Moreover, their model is quite restrictive in the sense that the support of the players’ information is finite.
In our model, in which any message is possible in equilibrium, the crosschecking strategy can work well to discipline the sender’s incentive to inflate their information. The difference between the two pieces of information can give the receiver some indication of whether the sender lied, because of the correlation between the sender’s information and the receiver’s information. The receiver’s belief recognizes this valuable information, and the crosschecking strategy based on that belief exploits the correlation structure between the sender’s information and the receiver’s information. The correlation between them is essential to driving the outcome of full revelation. This distinguishes our paper from Seidmann’s.
Does the receiver believe that the sender lied if the sender’s message is too far above their own information? This is not the case. They do believe that the sender sent a truthful message, because they know that the sender chose the equilibrium strategy of reporting truthfully, but they act strategically as if they did not believe it, ignoring the message and responding to the sender as if they took only their own information seriously. Otherwise, the sender would inflate their message. In other words, the receiver overrules the sender’s message not because they believe that the message is likely to be wrong, but because they need to give the sender an incentive to report correctly. This feature of the crosschecking strategy is somewhat similar to the trigger strategy of Green and Porter (1984) [16], in which a player who receives a certain signal penalizes the opponent even while knowing that the opponent did not cheat, because the opponent would cheat otherwise.
The main results of Lai (2014) [5] and Ishida and Shimizu (2016) [6] have a common feature with our result in that the incentive for truthful communication diminishes as the receiver is better informed. However, their main insight that the receiver’s response becomes less sensitive to the sender’s message as they are better informed as a result of Bayesian belief updating is not valid in our model. In our model, as the receiver’s expertise becomes more precise, the possibility of truthful communication disappears discontinuously at some level of their expertise. In their model, the receiver’s belief is a weighted average of their information and the sender’s information. So, as the receiver’s information is more accurate, the weight of their own information should be larger, and thus they must rationally respond to the sender’s message less sensitively. In our model, the receiver’s belief which directly governs the sender’s cheap talk message is not a weighted average of their information and the sender’s message.
Our results contrast sharply with Lai’s in several respects. According to Lai (2014) [5], a receiver who becomes informed can benefit only if the receiver’s information is sufficiently different from the sender’s, i.e., it is very useful information. In our model, reliable information can be conveyed from the sender only when the receiver’s information is sufficiently close to the sender’s. Besides, in our model, an informed receiver is always better off than an uninformed receiver, whereas a completely uninformed receiver may be better off than an informed receiver in Lai’s model, because the sender would provide an informed receiver with less informative advice that yields the receiver a lower payoff. Most importantly, the results of Lai (2014) [5] and Ishida and Shimizu (2016) [6] hold only when the preferences of the sender and the receiver are sufficiently congruent, whereas our result of perfectly truthful communication holds even if their preferences conflict enough in the sense that the sender’s favorite action is independent of the receiver’s favorite action. Another important difference is that in Lai (2014) [5], the receiver’s information does not elicit full revelation, which is not the case in our model.
The paper is organized as follows. In Section 2, we set up a cheap talk model of an informed buyer. In Section 3, we characterize the fully revealing communicative equilibrium with crosschecking strategies in the cheap talk game. In Section 4, we provide some comparative statics to examine the effect of a change in the buyer’s informativeness. Concluding remarks and some ramifications follow in Section 5. Proofs are provided in Appendix A.

2. Model

There is a seller (sender) S, and a buyer (receiver) B. The state of nature $\theta$ is a random variable distributed over $\mathbb{R}$. For example, $\theta$ could be the quality of a product that a salesperson sells to a consumer. Following the Bayes–Laplace principle of indifference (insufficient reason)8, we assume that $\theta$ is uniformly distributed over $\mathbb{R}$9, i.e., players have no information about $\theta$ a priori. Although neither the seller nor the buyer knows the accurate value of $\theta$, both of them receive a noisy signal on the state of nature, $v_i \in V = \mathbb{R}$ for $i = S, B$, where $v_i = \theta + \epsilon_i$, $\epsilon_i$ is stochastically independent of $\theta$, and the $\epsilon_i$’s are independent of each other. It is important to note that B is also partially informed. We assume that $\epsilon_i$ follows a normal distribution10 with mean zero and variance $\sigma_i^2$, $i = S, B$, where $\sigma_S^2 < \sigma_B^2$11. The inequality between the variances reflects the feature that the seller has higher expertise about $\theta$ than the buyer.
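The information structure above is easy to simulate. The following sketch (our own illustration; the variance values are assumptions satisfying $\sigma_S^2 < \sigma_B^2$, not taken from the paper) checks that the gap $v_B - v_S$, which the crosschecking logic of Section 3 exploits, has variance $\sigma_S^2 + \sigma_B^2$:

```python
import math
import random
import statistics

random.seed(0)
SIGMA_S, SIGMA_B = 0.5, 1.0   # illustrative values with sigma_S^2 < sigma_B^2

def draw_signals(theta):
    """One draw of (v_S, v_B): v_i = theta + eps_i, eps_i ~ N(0, sigma_i^2)."""
    v_S = theta + random.gauss(0.0, SIGMA_S)
    v_B = theta + random.gauss(0.0, SIGMA_B)
    return v_S, v_B

# The gap v_B - v_S = eps_B - eps_S has standard deviation
# sqrt(sigma_S^2 + sigma_B^2), the spread the buyer's tolerance must handle.
gaps = []
for _ in range(200_000):
    v_S, v_B = draw_signals(10.0)
    gaps.append(v_B - v_S)
print(statistics.stdev(gaps))   # ~ sqrt(0.5**2 + 1.0**2) = 1.118...
```

The standard deviation of this gap is what the buyer’s tolerance $\rho$ in Section 3 must be calibrated against.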
The game proceeds as follows. First, the state of nature $\theta$ is realized, and then the seller and the buyer receive private signals $v_S$ and $v_B$, respectively, without knowing $\theta$. After observing private information $v_S$, S sends a payoff-irrelevant message (cheap talk) $m \in M = \mathbb{R}$ to B12. Then, receiving a message $m \in M$, B updates their posterior belief about $v_S$, $\hat{b}(m)$, and then forms their belief about $\theta$, $b(m, v_B)$, by using $m$ and $v_B$, where $b : M \times V \to \mathbb{R}$. Based on this belief, they choose $a_1, a_2 \in A (= \mathbb{R})$. The buyer’s payoff depends directly on $a = a_1 + a_2$, while the seller’s payoff depends only on $a_1$. Thus, we will call $a_1$ S’s payoff-relevant action, and $a_2$ S’s payoff-irrelevant action. For example, when a realtor gives some information about the prospects of real estate in some area, the buyer may buy a house that the realtor recommends ($a_1 = 1, a_2 = 0$) or some other house in the area ($a_1 = 0, a_2 = 1$). Note that $a_1$ and $a_2$ are perfect substitutes from the buyer’s point of view, whereas they are not from the seller’s point of view. Also, note that both $a_1$ and $a_2$ are chosen from an unbounded set.
The payoff to S is given by a continuously differentiable function $U^S : A \to \mathbb{R}$ and the payoff to B by a twice continuously differentiable function $U^B : A \times \Theta \to \mathbb{R}$. Throughout the paper, we will assume that (A1) $U^S(a_1) = u(a_1)$ where $u' > 0$, $u'' \le 0$, i.e., $u(a_1)$ is increasing and concave in $a_1$, and (A2) $U^B(a, \theta) = -(a - \theta)^2$. The buyer’s utility function thus has a unique maximum in $a$ for all $\theta$, and the maximizer of $U^B$, denoted by $a^B(\theta)$, is strictly increasing in $\theta$. The independence of the seller’s utility function from $\theta$ means that S has transparent motives, and the fact that the utility is increasing in $a_1$ means that S has monotone motives. A typical example of transparent and monotone motives is the preference of a salesperson who is paid based on the quantity they sell. One possible interpretation of $a_1$, $a_2$ and $a$ is that $a$ is the total amount of a consumer’s purchases, while $a_1$ and $a_2$ are the amounts purchased from the salesperson and from other sellers, respectively. Then, the salesperson’s utility increases with the consumer’s purchases from them regardless of $\theta$. The monotonic increase of $a^B(\theta)$ in $\theta$ means that the buyer wants to buy more units when $\theta$, which can be interpreted as quality, is high. For example, the exponential utility function $u(a_1) = 1 - e^{-c a_1}$ (for $c > 0$), or an affine transformation of it, which is called the CARA (constant absolute risk aversion) utility function, satisfies assumption (A1), because $u'(a_1) = c e^{-c a_1} > 0$ and $u''(a_1) = -c^2 e^{-c a_1} < 0$.
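As a quick numerical check of assumption (A1) for the CARA family just mentioned, the snippet below (with an illustrative coefficient $c$ of our choosing) verifies that $u' > 0$, $u'' < 0$, and that the absolute risk aversion $-u''/u'$ is constant in $a_1$, which is what “constant absolute risk aversion” means:

```python
import math

def u(a1, c=1.0):
    """CARA utility u(a1) = 1 - exp(-c*a1), c > 0."""
    return 1.0 - math.exp(-c * a1)

def u_prime(a1, c=1.0):
    return c * math.exp(-c * a1)        # > 0: monotone motive

def u_double_prime(a1, c=1.0):
    return -c * c * math.exp(-c * a1)   # < 0: strict concavity

for a1 in (-2.0, 0.0, 3.5):
    assert u_prime(a1, c=2.0) > 0 and u_double_prime(a1, c=2.0) < 0
    # absolute risk aversion -u''/u' equals c at every a1
    assert abs(-u_double_prime(a1, c=2.0) / u_prime(a1, c=2.0) - 2.0) < 1e-12
```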
A strategy for S specifies a signaling rule given by a continuous function $s : V \to M$. A strategy for B is an action rule given by a function $a : M \times V \to A \times A$ that determines $a_1$ and $a_2$. The equilibrium concept that we will employ is that of a weak Perfect Bayesian equilibrium (wPBE). An equilibrium of this game consists of a signaling rule for S, an action rule for B and a system of beliefs $(s^*(v_S), a^*(m, v_B), (\hat{b}(m), b(m, v_B)))$, where $a^* = (a_1^*, a_2^*)$, such that
(2-I)
$s^*(v_S) \in \arg\max_m \int U^S(a_1^*(m, v_B)) f(v_B \mid v_S)\, dv_B$, where $f(v_B \mid v_S)$ is the conditional density function of $v_B$ given $v_S$;
(2-II)
B’s posterior beliefs $\hat{b}(m)$ and $b(m, v_B)$ are consistent with Bayes’ rule on the equilibrium path;
(2-III)
$a^*(m, v_B) \in \arg\max_a U^B(a, b(m, v_B))$, where $a^* = a_1^* + a_2^*$.
(2-I) and (2-III) are conditions for sequential rationality and (2-II) is the condition for consistency. Henceforth, we will simply write $f(v_B)$ for the density function conditional on $v_S$ and $F(v_B)$ for the corresponding distribution function, suppressing $v_S$. For the time being, we restrict our attention to B’s choice of $a_1$ (not $a$), since our primary concern is the truth-telling incentive of the seller, whose utility relies only on $a_1$.
Before we characterize equilibria, we will adapt some standard definitions often used in the literature.
Definition 1. 
A message $m$ induces an action $a_1$ of the buyer with $v_B$ if $a_1 = a_1^*(m; v_B)$.
Definition 2. 
An equilibrium is communicative (or influential) for $a_1$ if there exist two different observations $v_S, v_S'$ such that $s^*(v_S) \ne s^*(v_S')$ and $a_1^*(m; v_B) \ne a_1^*(m'; v_B)$ for some $v_B$, where $m = s^*(v_S)$, $m' = s^*(v_S')$. An equilibrium is uncommunicative (or babbling) for $a_1$ otherwise.
In other words, if two different messages sent by two different types of the seller induce two different S-payoff-relevant actions ($a_1$) of the buyer with some information $v_B$, the equilibrium is communicative (or influential) for $a_1$ in the sense that some meaningful message that can affect the buyer’s choice of $a_1$ is conveyed by cheap talk communication in equilibrium13. Henceforth, we will call “communicative for $a_1$” just “communicative” for brevity.
Definition 3. 
A communicative equilibrium is fully revealing if $s^*(v_S) \ne s^*(v_S')$ for any $v_S, v_S'$ such that $v_S \ne v_S'$. In particular, if $s^*(v_S) = v_S$, a fully revealing equilibrium is called a truth-revealing equilibrium14.
Observe that, insofar as the buyer has no information, the message sent by the seller cannot be credible at all in this model with transparent and monotone motives. In the CS model, the payoff function of S as well as that of B is single-peaked, so that, given $\theta$, the favorite actions of S and B do not differ very much even if they do differ. This implies that, for some low value of $\theta$, both S and B prefer one action to another, while the reverse is true for some other high value of $\theta$. In other words, there is room for coordination between S and B, and in effect, cheap talk enables such coordination to occur by conveying the message of whether $\theta$ is high or low. In our model, however, the assumption of single-peaked preferences is violated, and all types of S prefer a higher level of the buyer’s action $a_1$. Thus, S would like to pretend to have observed as high a $v_S$ as possible to induce B’s highest possible action, regardless of their type.
We now summarize with
Proposition 1. 
If B has no information about θ, there exists no communicative equilibrium.
It is easy to prove. Suppose that, in a communicative equilibrium, for some distinct $v_S$ and $v_S'$, $m \ne m'$ and $a_1 \ne a_1'$, where $m = s^*(v_S)$, $m' = s^*(v_S')$, $a_1 = a_1^*(m)$ and $a_1' = a_1^*(m')$. If $a_1 < a_1'$, S with information $v_S$ would prefer sending $m'$ to $m$, since $u(a_1) < u(a_1')$. If $a_1' < a_1$, S with information $v_S'$ would choose $m$ instead of $m'$, since $u(a_1') < u(a_1)$. This violates the definition of an equilibrium.
However, if B has some information about $\theta$, the above argument breaks down. Suppose $m$ induces $a_1$ and $m'$ induces $a_1'$ with $a_1 < a_1'$ for some $v_B$. Nonetheless, we cannot conclude that S will prefer sending $m'$ to $m$, because $m'$ could induce a lower level of action for some other $v_B$.
In the next section, we will make a formal analysis of cheap talk to an informed buyer.

3. Informed Buyer

As is typical in cheap talk games, even if the buyer has some noisy information about $\theta$, a babbling equilibrium for $a_1$ exists, in which the seller sends an arbitrary message regardless of their information and the buyer ignores whatever the seller says. Henceforth, we will be mainly interested in characterizing communicative equilibria which are fully revealing.
For our purpose of finding a fully revealing communicative equilibrium, the following specific form of strategy profile will be useful, although it is not an equilibrium in itself. The actual equilibrium strategy profile will be provided later in (4-I), (4-II) and (4-III):
(3-I)
The seller with $v_S$ announces $m = v_S$.
(3-II)
The buyer believes $\hat{b}(m) = m$, and
$$b_0(m, v_B) = \begin{cases} m & \text{if } m - v_B \le \rho, \\ v_B & \text{if } m - v_B > \rho \end{cases}$$
for some $\rho > 0$.
(3-III)
The buyer buys the amount $a(m, v_B) = a_1(m, v_B) = b_0$ from the seller and $a_2(m, v_B) = 0$ from other sellers.
The buyer’s purchasing rule given by (3-III), together with the belief given in (3-II), will be called a “crosschecking strategy”, and $\rho$ will be called the tolerance in deviation between $m$ and $v_B$. It says that the buyer punishes the seller by ignoring the seller’s message if the message is too far above the buyer’s own information15.
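The rule in (3-II) and (3-III) can be transcribed directly into code. The sketch below (the function name is ours) returns the purchases $(a_1, a_2)$ as a function of the message, the buyer’s signal, and the tolerance:

```python
def crosscheck_response(m, v_B, rho):
    """Crosschecking strategy (3-II)/(3-III):
    believe b0 = m if m - v_B <= rho (normal range),
    b0 = v_B if m - v_B > rho (punishment range);
    then buy a1 = b0 from the seller and a2 = 0 from other sellers."""
    b0 = m if m - v_B <= rho else v_B
    a1, a2 = b0, 0.0
    return a1, a2

# message within the tolerance: taken at face value
assert crosscheck_response(5.0, 4.0, 2.0) == (5.0, 0.0)
# message too far above the buyer's signal: ignored
assert crosscheck_response(5.0, 1.0, 2.0) == (1.0, 0.0)
```

Note that the rule is one-sided: only messages too far above $v_B$ trigger punishment, since the seller with monotone motives would only ever overstate, never understate.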
If this is an equilibrium, there is no off-the-equilibrium message, because any message can occur even if the seller reports truthfully, as long as $\epsilon_i$ follows a normal distribution over $(-\infty, \infty)$. Therefore, the beliefs given in (3-II) do not satisfy consistency, because consistent beliefs must follow Bayes’ rule using both $m$ and $v_B$ for any $m$ (on any equilibrium path). Nonetheless, this strategy profile can give us useful insight for characterizing a wPBE.

3.1. Sequential Rationality

Aside from consistency, we can check the sequential rationality of the strategies given in (3-I) and (3-III). Since it is obvious that (3-III) is an optimal decision of the buyer given the seller’s truth-revealing strategy $m(v_S) = v_S$ and the belief $b_0$, it is enough to focus on the optimal decision of the seller. Given that $a = a_1 = b_0$ ($a_2 = 0$), the seller will maximize
$$U^S(m; v_S) = \int_{-\infty}^{m - \rho} u(v_B) f(v_B)\, dv_B + \int_{m - \rho}^{\infty} u(m) f(v_B)\, dv_B.$$
The economic reasoning behind this formula goes as follows. The first term represents the punishment that the seller suffers when $v_B$ is very low ($v_B < m - \rho$). The second term indicates their utility when $v_B$ falls into the normal confidence region ($v_B \ge m - \rho$). Thus, the effect of inflating the message on the seller’s utility is
∂U_S/∂m = u(m−ρ) f(m−ρ) + u′(m) ∫_{m−ρ}^{∞} f(v_B) dv_B − u(m) f(m−ρ) = u′(m) ∫_{m−ρ}^{∞} f(v_B) dv_B − (u(m) − u(m−ρ)) f(m−ρ).
The first term is the effect of utility increases in normal cases due to the inflated announcement ( m > v S ) and the second term is the loss that they are expected to bear from being punished by increasing their announcement marginally.
Truthful revelation requires that ∂U_S/∂m |_{m=v_S} = 0, i.e., the marginal gain is equal to the marginal loss. This implies that
u′(v_S) ∫_{v_S−ρ}^{∞} f(v_B) dv_B = (u(v_S) − u(v_S−ρ)) f(v_S−ρ).
If u(a_1) is linear in a_1, u′(v_S) = (u(v_S) − u(v_S−ρ))/ρ. It is easy to see that there is no ρ which satisfies (3). The proof exploits u(v_S) − u(v_S−ρ) = ρ u′(v_S), which holds by the linearity of the utility function. Then, because of this equality, it is clear that the direct utility effect measured over the large range of v_B ∈ [v_S + ϵ − ρ, ∞), which is u′(v_S) ∫_{v_S−ρ}^{∞} f(v_B) dv_B, exceeds the penalty effect measured over the small range of v_B ∈ [v_S − ρ, v_S + ϵ − ρ] for a small ϵ > 0, which is roughly ρ u′(v_S) f(v_S−ρ).
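This no-solution claim is easy to verify numerically. Writing t = ρ/σ, the first-order condition under linear utility reduces to Φ(t) = t·φ(t) for the standard normal cdf Φ and pdf φ, and the two sides never meet: t·φ(t) peaks at roughly 0.242, below the floor Φ(t) ≥ 0.5. The scan below is our illustrative sketch, not part of the paper:

```python
import math

def phi(t):  # standard normal pdf
    return math.exp(-0.5 * t * t) / math.sqrt(2 * math.pi)

def Phi(t):  # standard normal cdf
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2)))

# The marginal gain Phi(t) always exceeds the marginal loss t*phi(t):
# t*phi(t) peaks at t = 1 with value phi(1), which is below 0.5 <= Phi(t).
gap = min(Phi(t) - t * phi(t) for t in [k * 0.01 for k in range(1, 1001)])
print(gap > 0)  # -> True
```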
Proposition 2. 
If the utility function u(a_1) is linear in a_1, the truth-revealing strategy (3-I) together with the crosschecking strategy (3-III) cannot satisfy sequential rationality^16.
If u″(a_1) > 0, it is obvious that there is no truth-revealing equilibrium with the crosschecking strategy, because u″(a_1) > 0 implies u′(v_S) > (u(v_S) − u(v_S−ρ))/ρ, so that the gain from exaggeration exceeds the loss.
If u″(a_1) < 0, however, u(v_S) − u(v_S−ρ) > ρ u′(v_S), i.e., the penalty effect of exaggerating v_S becomes more severe, so Equation (3) may have a solution for ρ which is independent of v_S. We will denote the solution by ρ*.
To confirm the existence of the equilibrium tolerance ρ*^17, take u(a_1) = 1 − e^{−a_1}, which is a special form of the CARA utility function. Equation (3) can be simplified into
1 − F(v_S − ρ) = (e^ρ − 1) f(v_S − ρ),
or equivalently,
1/(e^ρ − 1) = G(v_S − ρ),
where G(v_S − ρ) = f(v_S − ρ) / ∫_{v_S−ρ}^{∞} f(v_B) dv_B is the hazard rate. This equation determines the equilibrium tolerance ρ*. Moreover, since v_B (= v_S + ϵ_B − ϵ_S) has the distribution N(v_S, σ_S² + σ_B²), we have
f(v_S − ρ*) = (1/(√(2π) σ)) e^{−(1/2)(ρ*/σ)²},
F(v_S − ρ*) = Prob(v_B ≤ v_S − ρ*) = Φ((v_S − ρ* − v_S)/σ) = Φ(−ρ*/σ) = 1 − Φ(ρ*/σ),
where σ ≡ √(σ_S² + σ_B²) and Φ(x) = (1/2)[1 + erf(x/√2)]. Note that neither depends on v_S, so ρ* does not, either. By substituting (6) and (7) into (4), we obtain
(1/2)[1 + erf(ρ*/(√2 σ))] = (e^{ρ*} − 1) (1/(√(2π) σ)) e^{−(1/2)(ρ*/σ)²}.
Figure 1 shows for this particular utility function that (i) there exists σ̄ (> 0) such that the first order condition given by (3) is satisfied for some ρ*(σ) whenever σ ≥ σ̄ and (ii) there does not exist ρ*(σ) for low values of σ (< σ̄). It also shows that there are two solutions ρ*(σ) except at σ = σ̄. Note that higher values of ρ*(σ) change more elastically with respect to a change in σ than lower values. If we define the elasticity of ρ* with respect to σ by ϵ_ρ = (dρ*/dσ)(σ/ρ*), we can see that ϵ_{ρ_1} > 1 > ϵ_{ρ_2}, where ρ_1(σ) > ρ_2(σ). Figure 2 illustrates σ̄ > 0, which is the minimum σ for sequential rationality of the fully revealing strategy and the crosschecking strategy given in (3-I)–(3-III), and above which there are two values of ρ*(σ): roughly speaking, an elastic one and an inelastic one.
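The behavior shown in Figure 1 can be reproduced with a short numerical scan of the first-order-condition gap Φ(ρ/σ) − (e^ρ − 1) f(v_S − ρ), where f(v_S − ρ) = φ(ρ/σ)/σ for the standard normal pdf φ. The script below is our illustrative sketch, with σ = 1 and σ = 3 chosen as hypothetical values below and above σ̄:

```python
import math

def Phi(t):  # standard normal cdf
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2)))

def phi(t):  # standard normal pdf
    return math.exp(-0.5 * t * t) / math.sqrt(2 * math.pi)

def foc_gap(rho, sigma):
    """Gap 1 - F(vS - rho) - (e^rho - 1) f(vS - rho) of the first-order
    condition; a root in rho is an equilibrium tolerance rho*."""
    return Phi(rho / sigma) - (math.exp(rho) - 1.0) * phi(rho / sigma) / sigma

def count_roots(sigma, grid=None):
    """Count sign changes of the gap on a coarse rho-grid."""
    grid = grid or [k * 0.05 for k in range(1, 601)]  # rho in (0, 30]
    vals = [foc_gap(r, sigma) for r in grid]
    return sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0)

print(count_roots(1.0))  # no tolerance rho* exists for small sigma
print(count_roots(3.0))  # two candidate tolerances for larger sigma
```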

3.2. Consistency

Unfortunately, however, the beliefs given in (3-II) fail to satisfy consistency, because the consistency of a belief requires it to be updated by Bayes’ law for any m and v_B insofar as any m ∈ ℝ is an equilibrium message given the equilibrium strategy s*(v_S) = v_S for v_S ∈ ℝ. Thus, the unique consistent belief in this model is that
b(m, v_B) = θ̂(m, v_B) ≡ (h_S m + h_B v_B)/h,
where h_S = 1/σ_S², h_B = 1/σ_B², h = h_S + h_B, and θ̂(m, v_B) is the maximum likelihood estimator for θ^18, because the likelihood function is
L(θ; v_S, v_B) = f(v_S, v_B; θ) = (1/(√(2π) σ_S)) e^{−((θ−v_S)/σ_S)²} · (1/(√(2π) σ_B)) e^{−((θ−v_B)/σ_B)²}.
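The consistent belief is simply a precision-weighted average of the two signals. A minimal sketch, with illustrative numbers of our own choosing:

```python
# Precision-weighted estimate theta_hat = (hS*m + hB*vB) / (hS + hB),
# the consistent belief formed from the seller's message m and the
# buyer's own signal vB.
def theta_hat(m, v_B, sigma_S, sigma_B):
    h_S = 1.0 / sigma_S**2  # precision of the seller's signal
    h_B = 1.0 / sigma_B**2  # precision of the buyer's signal
    return (h_S * m + h_B * v_B) / (h_S + h_B)

# The more precise signal receives the larger weight:
print(theta_hat(2.0, 1.0, sigma_S=1.0, sigma_B=0.5))  # -> 1.2
```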
This leads us to consider the following modified strategies and beliefs;
(4-I)
S with v S announces m = v S .
(4-II)
B believes b ^ ( m ) = m and b ( m , v B ) = θ ^ ( m , v B ) .
(4-III)
B chooses a ( m , v B ) = b ( m , v B ) and a 1 ( m , v B ) = b 0 ( m , v B ) .
Note that B’s choice of a relies on b, while their choice of a_1 relies on b_0. In the example of a salesperson, the strategy (4-III) can be interpreted as purchasing a = b(m, v_B) in total, but purchasing a_1 = b_0(m, v_B) only from the seller and the remaining amount b − b_0 from other sellers. The outside option of buying from sellers other than the particular seller S can serve as a threat that induces the seller to report truthfully even though the consistency requirement on beliefs leaves no leeway to doubt the genuineness of a suspicious message.
Clearly, the beliefs given in (4-II) satisfy consistency and the buyer’s response, a = b ( m , v B ) , which is given in (4-III), is optimal given the belief. Moreover, given the buyer’s response, a 1 = b 0 ( m , v B ) , the seller’s cheap talk message given in (4-I) is also optimal for ρ * , as we showed before. Therefore, the strategies and beliefs given in (4-I)–(4-III) constitute an equilibrium and it is truth-revealing. Now, our main result follows.
Proposition 3. 
There exists σ̄ > 0 such that for any σ ≥ σ̄, there is a truth-revealing equilibrium in which (4-I), (4-II), and (4-III) constitute an equilibrium for some ρ* > 0 that is independent of v_S and v_B, if the utility function u is any negative affine transformation of e^{−a_1}, i.e., u(a_1) = γ − β e^{−a_1} where β > 0.
Intuitively, if the seller’s utility function is u(a_1) = γ − β e^{−a_1}, which is strictly concave, inflating the message raises the penalty probability as well as the direct utility, but the direct utility gain is exceeded by the penalty effect. Therefore, it may be optimal for the seller not to inflate their message.
This proposition says that the seller’s utility function u(a_1) = γ − β e^{−a_1} satisfies the differential equation given by (3) for some ρ which is independent of v_S and v_B, implying that under this utility function there can exist a ρ* characterizing the equilibrium crosschecking strategy which, moreover, does not depend on v_S or v_B. This utility function enables the seller to reveal the truth by making the punishment larger than the direct gain when they inflate their information.
Is it still possible that there is a truth-revealing equilibrium for a different form of utility function? The following proposition suggests that it is not possible.
Proposition 4. 
If there is a truth-telling equilibrium such that (4-I), (4-II), and (4-III) constitute an equilibrium for some ρ* > 0 that is independent of v_S and v_B, then the utility function must have the form u(a_1) = γ − β e^{−c a_1} where β, c > 0.
It says that a negative affine transformation of e^{−c a_1} for some c > 0, i.e., u(a_1) = γ − β e^{−c a_1}, is indeed a necessary as well as a sufficient condition for the existence of a fully revealing equilibrium. So, we can conclude that when the buyer is partially informed, it is possible to fully reveal the private information of the seller with the crosschecking strategy only if the seller has a utility function of this form.
This result could be interpreted as a possibility theorem in the sense that truth-telling is possible in equilibrium if the seller has this CARA utility function, or interpreted as an impossibility theorem in the sense that truth-telling is possible only if the seller has the CARA utility function. Considering the fact that the CARA function is a reasonable approximation to the real but unknown utility function^19, we believe that this result is reassuring^20.
Before closing this section, it will be worthwhile to discuss the possibility of other communicative equilibria. Some readers may think that it is possible to construct the following trivial truth-revealing equilibrium in which s*(v_S) = v_S, a_1* = â and a_2* = θ̂ − â, where â is a constant. First of all, it is not a strict equilibrium, although it is a wPBE: the seller’s truth-revealing strategy is not a unique best response to the buyer’s buying strategy. It relies heavily on a tie-breaking rule, whereas our equilibrium consists of a seller’s strategy (4-I) which is a unique best response to the buyer’s strategy and a buyer’s strategy (4-III) which is also a unique best response to the belief (4-II). This means that such an equilibrium is not robust to slight perturbations in payoffs. For example, if the seller obtains a slightly positive utility from a big lie itself, they will strictly prefer such a lie to being honest. Therefore, the argument that the seller is always truthful about their product quality because the buyer always buys the same amount from them regardless of their message is not strongly convincing, in the sense that the seller could lie freely for the same reason. Moreover, it is not a communicative equilibrium in our definition, because no pair of different messages induces different values of the seller’s payoff-relevant action a_1.

4. Comparative Statics

Is the buyer who is less informed more credulous? We will investigate the effect of an increase in σ B on the equilibrium tolerance ρ * . Consider the first order condition of the seller’s incentive compatibility given by (5) as follows:
1/(e^ρ − 1) = G(v_S − ρ).
The left-hand side is the ratio of the marginal benefit to the marginal cost of inflating the message slightly, in terms of utility. Roughly, G(v_S − ρ) can be interpreted as the ratio of the punishment probability to the no-punishment probability. The punishment probability is affected by both ρ and σ_B. If σ_B increases, the probability increases because the tail probability ∫_{−∞}^{v_S−ρ} f(v_B) dv_B increases. If ρ increases, the probability decreases. Therefore, if the left-hand side of (5) does not change very much for large values of ρ, a larger σ_B must be complemented by a larger ρ to maintain a constant probability that a slightly inflated message is penalized, which means that a larger value of σ is associated with a larger value of ρ. This is why the buyer uses a more lenient strategy, permitting a larger deviation, as their information becomes less accurate, for most values of σ_B (except for small values of σ_B and ρ). Of course, if σ_B and ρ are very small, this monotonicity may not hold.
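This comparative static can be illustrated numerically by bisecting for the lower (inelastic) root of the first-order condition at two hypothetical noise levels; consistent with the discussion above, the tolerance rises with σ. The sketch below is ours, not the paper’s:

```python
import math

def Phi(t):  # standard normal cdf
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2)))

def phi(t):  # standard normal pdf
    return math.exp(-0.5 * t * t) / math.sqrt(2 * math.pi)

def lower_tolerance(sigma, lo=1e-6, hi=5.0):
    """Smallest root rho* of Phi(rho/sigma) = (e^rho - 1) phi(rho/sigma)/sigma,
    found by stepping to the first sign change and bisecting."""
    gap = lambda r: Phi(r / sigma) - (math.exp(r) - 1.0) * phi(r / sigma) / sigma
    step, r = 0.01, lo
    while gap(r) > 0:
        r += step
        if r > hi:
            raise ValueError("no root below hi")
    a, b = r - step, r
    for _ in range(60):  # bisection
        mid = 0.5 * (a + b)
        if gap(mid) > 0:
            a = mid
        else:
            b = mid
    return 0.5 * (a + b)

# A noisier buyer (larger sigma) is matched by a larger tolerance:
print(lower_tolerance(3.0) < lower_tolerance(4.0))  # -> True
```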
Now, on the seller’s side, is it more likely that the seller tells the truth as the buyer becomes more informed (σ_B² gets smaller)? Proposition 3 suggests that it is not, because there may not exist a truth-revealing equilibrium for a very low value of σ² ≡ σ_S² + σ_B². To see the intuition, consider the limiting case of σ_S² → 0 and σ_B² → 0. Given any fixed ρ, if σ_S² and σ_B² keep falling (maintaining σ_S² < σ_B²), the penalty probability approaches zero, and thus the expected net loss from inflating m converges to zero, implying that the seller will have an incentive to inflate their message. However, as σ approaches zero (so that σ is too small), ρ*(σ) becomes larger, and thus the penalty probability becomes even smaller; hence, there is no communicative equilibrium for low values of σ_S² and σ_B².
We will briefly compare the utilities of the players when the buyer is completely uninformed and when they are partially informed. It is clear that the buyer cannot be worse off by obtaining some noisy information v_B with any finite variance σ_B² < ∞. Conditional on the information m and v_B, the buyer’s belief about θ is b(m, v_B) = θ̂(m, v_B). So, assuming that the seller reveals v_S truthfully, the loss function of the buyer, MSE(a | v_S, v_B) = E[(a − θ)² | v_S, v_B], is minimized when they choose a* = E(θ | v_S, v_B) = θ̂(v_S, v_B). Since a* = θ̂(v_S, v_B) minimizes MSE(a | v_S, v_B), E[(a* − θ)² | v_S, v_B] = 2/h is clearly lower than MSE(a = 0) = ∞, which is the buyer’s loss when they are uninformed. On the other hand, it is also clear that the seller has no reason to prefer an informed buyer, since an informed buyer may choose a_1 < 0 if v_B < 0, while an uninformed buyer will always choose a_1 = E(θ) = 0.
It is more intriguing to compare utilities when the buyer is more informed and less informed. Increasing σ_B has two effects on the seller’s utility, a direct effect and an indirect effect through ρ*. The direct effect is negative because the probability that v_B is low enough to penalize S is higher. On the other hand, the indirect effect is positive, because the seller is less likely to be penalized with a larger ρ*, implying that B chooses a_1 = v_S rather than a_1 = v_B more often, as long as the seller reveals truthfully. The next proposition shows that the direct negative effect dominates the indirect positive effect, so U_S decreases as σ_B² increases for any v_S. This is rather contrary to the widely held belief that the seller prefers a less informed buyer. The correct intuition behind this counterintuitive result is that the wrong information of a less informed buyer may derail the deal between the seller and the buyer, in the sense that the buyer may choose a very low quantity based on their own preposterous information.
Proposition 5. 
In equilibrium, the seller’s utility decreases as σ (< ∞) increases for any v_S, if ϵ_ρ ≡ (dρ/dσ)(σ/ρ) ∈ (0, 1).
Intuitively, if the tolerance schedule is inelastic with respect to a change in σ , the positive indirect effect through ρ * is outweighed by the negative direct effect, so the result immediately follows.
A seller’s preference for an informed buyer clearly depends on their information v_S. It is obvious that the seller’s utility in a communicative equilibrium increases as v_S increases, because a higher value of v_S shifts the conditional distribution of v_B to the right and has the direct positive effect of u′(v_S) > 0. This leads to the following proposition.
Proposition 6. 
For any σ_B < ∞, there is a large v̄_S such that for any v_S ≥ v̄_S, the seller with v_S prefers an informed buyer with expertise σ_B to an uninformed buyer.
This proposition suggests that the monotonicity of U_S with respect to σ_B for any v_S does not extend to σ_B = ∞.
Finally, we are interested in how the buyer’s utility is affected by the accuracy of their own information.
Proposition 7. 
The buyer with information v B is better off as their information becomes more accurate, i.e., h B increases.
This proposition implies that the buyer’s utility increases unambiguously with the precision of their own information, unlike the seller’s utility. It is clear because the buyer’s objective is to minimize the mean square error of their decision based on their estimator of θ given the information available to them, which is to minimize the variance of their unbiased estimator θ̂ = (h_S v_S + h_B v_B)/h. While the seller’s utility depends on the effect of h_B on ρ*, because their utility is determined by the amount the buyer buys from them, a_1, the buyer’s utility is free from the effect of h_B on ρ*, because their utility is determined by the total amount they buy, a = a_1 + a_2, not just the amount a_1 they buy from the seller.
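Using the paper’s expression 2/(h_S + h_B) for the buyer’s conditional loss, the monotonicity in h_B is immediate; a short numerical check with illustrative precisions:

```python
# Buyer's loss (conditional variance of theta, stated in the paper as
# 2/(hS + hB)) falls monotonically as the buyer's precision hB rises.
def buyer_mse(h_S, h_B):
    return 2.0 / (h_S + h_B)

losses = [buyer_mse(1.0, h_B) for h_B in (1.0, 2.0, 4.0, 8.0)]
print(losses)
print(all(a > b for a, b in zip(losses, losses[1:])))  # -> True
```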

5. Conclusions

In this paper, we have shown that the information of a buyer can effectively deter an expert from giving false information to the buyer. However, we are also well aware of the limitation of our current analysis that there can be other equilibria including the babbling equilibrium. The empirical validity of the equilibrium we constructed could be tested by investigating the relation between the degree of buyer’s expertise and the transaction volume between the trading partners; it will be a promising future research agenda.
In reality, consumer information can protect a consumer from fraud by a salesperson by disciplining their deceptive motive. A salesperson’s persuasion can be credible if consumers have some relevant information about the product. This is also true for referral processes of the quality of a newly introduced experience. Information diffusion by word-of-mouth communication can be credible if consumers who seek opinions have some expertise. We believe that the general insight in this paper can be applied to richer economic or social situations beyond transactions between a seller and a buyer.

Author Contributions

Conceptualization, J.Y.K.; Writing—Original Draft Preparation, J.Y.K.; Writing—Review & Editing, J.Y.K.; Supervision, J.Y.K.; Formal analysis, J.J. and J.Y.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Ministry of Education of the Republic of Korea and the National Research Foundation of Korea (NRF-2022S1A5A2A0304932311).

Data Availability Statement

No data were used for this research.

Conflicts of Interest

The authors report that there are no competing interests to declare.

Appendix A

Proof of Proposition 2. 
We have
u′(v_S) ∫_{v_S−ρ}^{∞} f(v_B) dv_B = u′(v_S) [ ∫_{v_S−ρ}^{v_S} f(v_B) dv_B + ∫_{v_S}^{∞} f(v_B) dv_B ] > ρ u′(v_S) f(v_S−ρ) = (u(v_S) − u(v_S−ρ)) f(v_S−ρ).
The inequality follows from ∫_{v_S}^{∞} f(v_B) dv_B > 0 and ∫_{v_S−ρ}^{v_S} f(v_B) dv_B > ρ f(v_S−ρ), which holds because f′(v_B) > 0 if v_B < v_S. The last equality follows from u(v_S) − u(v_S−ρ) = ρ u′(v_S) due to the linearity of u(a_1). Therefore, the first-order condition given by (3) cannot be satisfied for any ρ. □
Proof of Proposition 3. 
Lemma A1. 
If u(a_1) = 1 − e^{−a_1}, the first order condition of the seller’s optimization given by (8) implies the second order condition.
Proof. 
We have
∂²U_S/∂m² = u″(m) ∫_{m−ρ}^{∞} f(v_B) dv_B − u′(m) f(m−ρ) − (u′(m) − u′(m−ρ)) f(m−ρ) − (u(m) − u(m−ρ)) h(m−ρ), where h(·) denotes the derivative f′(·).
Therefore, we have
∂²U_S/∂m² |_{m=v_S} = −e^{−v_S} [ ∫_{v_S−ρ}^{∞} f(v_B) dv_B + f(v_S−ρ) + (e^ρ − 1)(h(v_S−ρ) − f(v_S−ρ)) ] = −e^{−v_S} [ f(v_S−ρ) + (e^ρ − 1) h(v_S−ρ) ] < 0,
since ρ > 0, h(v_S−ρ) > 0 and 1 − F(v_S−ρ) = (e^ρ − 1) f(v_S−ρ) from the first order condition given by (4). □
Lemma A2. 
If u(a_1) = 1 − e^{−a_1}, local optimality of the seller implies global optimality.
Proof. 
Since it is clear that a seller will not deviate to m < v S , we will check only the incentive to deviate to m > v S .
Since u(a_1) = 1 − e^{−a_1}, Equation (2) can be rearranged into
∂U_S/∂m = e^{−m} [ 1 − F(m−ρ) − (e^ρ − 1) f(m−ρ) ].
We will show that ψ(m) ≡ 1 − F(m−ρ) − (e^ρ − 1) f(m−ρ) < 0 for all m > v_S.
Without loss of generality, assume that v S = 0 . Since m = 0 is a local maximum, m = 0 must satisfy the first-order condition,
1 − F(m−ρ) = (e^ρ − 1) f(m−ρ),
or equivalently,
1/(e^ρ − 1) = G(m−ρ),
where G(x) is the hazard rate function. It is well known that a normal distribution satisfies log-concavity, so its hazard rate function G(x) is monotonically increasing, i.e., G′(x) > 0 (see Bagnoli and Bergstrom (2005) [26]). Also, we have lim_{x→−∞} G(x) = 0 and lim_{x→∞} G(x) = ∞. Therefore, Equation (A5) has a unique solution, which is m* = 0, and ψ(m) < 0 for all m > 0. This completes the proof. □
Now, we can rewrite u(a_1) as follows:
u(a_1) = γ − β e^{−a_1} = γ (1 − (β/γ) e^{−a_1}) = γ (1 − e^{−(a_1 − ln(β/γ))}).
The graph of u(a_1) is obtained simply by a scaling of the vertical axis and a translation of the a_1-axis. This changes neither the first-order condition nor the second-order condition. □
Proof of Proposition 4. 
By Taylor expansion, we have
u(v_S − ρ) = u(v_S) − u′(v_S) ρ + u″(v_S) ρ²/2 + O(ρ³).
So, for small ρ > 0 , the first order condition given by Equation (3) can be rewritten as
u′(v_S) ∫_{v_S−ρ}^{∞} f(v_B) dv_B = ( u′(v_S) ρ − u″(v_S) ρ²/2 ) f(v_S−ρ).
By using G(v_S − ρ) ≡ f(v_S − ρ) / ∫_{v_S−ρ}^{∞} f(t) dt, one can reduce Equation (A8) to
u″(v_S) = −(2/ρ) [ 1/(G(v_S−ρ) ρ) − 1 ] u′(v_S).
Note that the solution of the differential equation given by (A9) must be of the form u(x) = γ − β e^{−cx}, where c = (2/ρ)[1/(G(v_S−ρ)ρ) − 1] = (2/ρ)[(e^ρ − 1)/ρ − 1] > 0 for ρ > 0. Also, u′ > 0 implies that β > 0. □
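As a quick check of this step, for u(x) = γ − βe^{−cx} we have

$$u'(x) = \beta c\, e^{-cx} > 0, \qquad u''(x) = -\beta c^{2} e^{-cx} = -c\, u'(x),$$

so u solves (A9) exactly when c = (2/ρ)[1/(G(v_S−ρ)ρ) − 1], and substituting the equilibrium condition G(v_S−ρ) = 1/(e^ρ − 1) gives c = (2/ρ)[(e^ρ − 1)/ρ − 1], which is positive because e^ρ − 1 > ρ for ρ > 0.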
Derivation of dU_S/dσ:
The seller’s utility in the fully revealing equilibrium is given by
U_S(v_S; v_S) = ∫_{−∞}^{v_S−ρ*(σ)} u(v_B) f(v_B) dv_B + ∫_{v_S−ρ*(σ)}^{∞} u(v_S) f(v_B) dv_B,
where u(x) = 1 − e^{−x} and f(x) = (1/(√(2π) σ)) e^{−(1/2)((x−v_S)/σ)²}. We will use the following Leibniz rule:
d/dx ∫_{a(x)}^{b(x)} f(x, t) dt = f(x, b(x)) (db/dx) − f(x, a(x)) (da/dx) + ∫_{a(x)}^{b(x)} ∂f(x, t)/∂x dt.
Then, we obtain
∂U_S(v_S; v_S)/∂σ = −u(v_S − ρ*(σ)) f(v_S − ρ*(σ)) (∂ρ*/∂σ) + ∫_{−∞}^{v_S−ρ*(σ)} u(v_B) (∂f(v_B)/∂σ) dv_B + u(v_S) f(v_S − ρ*(σ)) (∂ρ*/∂σ) + ∫_{v_S−ρ*(σ)}^{∞} u(v_S) (∂f(v_B)/∂σ) dv_B.
Using f(v_S − ρ*(σ)) = f(v_S + ρ*(σ)), ∂f(v_B)/∂σ = f(v_B) [ (v_B − v_S)²/σ³ − 1/σ ], and ∫_{v_S−ρ*(σ)}^{v_S} (∂f(v_B)/∂σ) dv_B = ∫_{v_S}^{v_S+ρ*(σ)} (∂f(v_B)/∂σ) dv_B, we have
∂U_S(v_S; v_S)/∂σ = (u(v_S) − u(v_S − ρ*(σ))) f(v_S − ρ*(σ)) (∂ρ*/∂σ) + ∫_{−∞}^{v_S−ρ*(σ)} u(v_B) f_σ(v_B) dv_B + ∫_{v_S−ρ*(σ)}^{∞} u(v_S) f_σ(v_B) dv_B,
where f_σ ≡ ∂f/∂σ. The first term is the indirect effect, and the second and third terms are the direct effects.
Proof of Proposition 5. 
Let U_S*(v_S, σ) ≡ U_S(v_S; v_S, σ) be the equilibrium utility of a seller when the noise has standard deviation σ. We will prove that ∂U_S*(v_S, σ)/∂σ < 0 for any v_S. To do that, we will show that (i) ∂U_S*(v_S, σ)/∂σ is increasing in v_S, and (ii) lim_{v_S→∞} ∂U_S*(v_S, σ)/∂σ < 0.
(i) Let x = (v_B − v_S)/σ. Then, x follows a standard normal distribution. Let the density function and the distribution function of the standard normal distribution be f(x) and F(x), respectively. Then, U_S* can be rewritten as
U_S* = σ [ ∫_{−∞}^{−ρ/σ} u(v_S + σx) f(x) dx + ∫_{−ρ/σ}^{∞} u(v_S) f(x) dx ].
Due to Young’s Theorem, it suffices to show that ∂²U_S*/∂σ∂v_S = ∂²U_S*/∂v_S∂σ > 0.
By using u(v_S) = 1 − e^{−v_S} and u(v_S + σx) = 1 − e^{−v_S−σx}, we have
∂U_S*/∂v_S = e^{−v_S} Ψ(σ),
where Ψ(σ) = σ [ ∫_{−∞}^{−ρ/σ} e^{−σx} f(x) dx + ∫_{−ρ/σ}^{∞} f(x) dx ]. By using the Leibniz rule, we have
dΨ(σ)/dσ = ∫_{−∞}^{−ρ/σ} e^{−σx} f(x) dx + ∫_{−ρ/σ}^{∞} f(x) dx + σ [ (ρ/σ²) e^ρ f(ρ/σ) − ∫_{−∞}^{−ρ/σ} x e^{−σx} f(x) dx − (ρ/σ²) f(ρ/σ) ] + Δ = ∫_{−∞}^{−ρ/σ} (1 − σx) e^{−σx} f(x) dx + ∫_{−ρ/σ}^{∞} f(x) dx + (ρ/σ)(e^ρ − 1) f(ρ/σ) + Δ,
where Δ = (dΨ(σ)/dρ)(dρ*/dσ) = f(ρ/σ)(1 − e^ρ)(dρ*/dσ). Therefore, we obtain
dΨ(σ)/dσ = ∫_{−∞}^{−ρ/σ} (1 − σx) e^{−σx} f(x) dx + ∫_{−ρ/σ}^{∞} f(x) dx + (e^ρ − 1) f(ρ/σ) ( ρ/σ − dρ*/dσ ) > 0,
if ϵ_ρ < 1, where ϵ_ρ = (dρ*/dσ)(σ/ρ) is the elasticity of ρ* with respect to σ.
(ii) lim_{v_S→∞} ∂U_S*(v_S, σ)/∂σ = ∫_{−∞}^{∞} u(v_B) f_σ dv_B, since lim_{v_S→∞} (u(v_S) − u(v_S−ρ)) = 0, lim_{v_S→∞} f(v_S−ρ) = 0 and ∂ρ*/∂σ is independent of v_S. We want to show that lim_{v_S→∞} ∂U_S*(v_S, σ)/∂σ < 0.
We have
∫_{−∞}^{∞} u(v_B) f_σ dv_B = ∫_{−∞}^{∞} (1 − e^{−v_B}) f_σ dv_B < ∫_{−∞}^{∞} f_σ dv_B = ∫_{−∞}^{∞} f(v_B) (1/σ)[ ((v_B − v_S)/σ)² − 1 ] dv_B = (1/σ) ∫_{−∞}^{∞} f(x)(x² − 1) dx = 0,
since 1 − e^{−v_B} < 1. □
Proof of Proposition 6. 
Since the seller’s utility in a babbling equilibrium is U_S = u(0) = 1 − e^{−0} = 0, it suffices to show that there is a large v̄_S such that for all v_S ≥ v̄_S, U_S(v_S; v_S) > 0, for all σ < ∞.
Lemma A3. 
Let g(v_B) ≡ u(v_B) f(v_B). Then, g(v_B) → 0 as v_B → −∞.
Proof. 
We have
g(v_B) = (1 − e^{−v_B}) (1/(√(2π) σ)) e^{−(1/2)((v_B − v_S)/σ)²}.
Let x ≡ (v_B − v_S)/σ. Then, (A15) reduces to
g(v_B) = (1 − e^{−(σx + v_S)}) (1/(√(2π) σ)) e^{−(1/2)x²} = (1/(√(2π) σ)) e^{−(1/2)x²} − (1/(√(2π) σ)) e^{−(σx + v_S + (1/2)x²)} ≡ ĝ(x).
Therefore, lim_{v_B→−∞} g(v_B) = lim_{x→−∞} ĝ(x) = 0. □
Lemma A4. 
For any ϵ > 0, there exists v̄_S(ϵ) such that for any v_S ≥ v̄_S(ϵ), ∫_{−∞}^{v_S−ρ*} g(v_B) dv_B > −ϵ.
Proof. 
We have
∫_{−∞}^{v_S−ρ*} g(v_B) dv_B = ∫_{−∞}^{−ρ*/σ} (1/(√(2π) σ)) e^{−(1/2)x²} dx − ∫_{−∞}^{−ρ*/σ} (1/(√(2π) σ)) e^{−(σx + v_S + (1/2)x²)} dx.
If σ > 1 , we have
ρ * σ 1 2 π σ e 1 2 x 2 d x = 1 σ Φ ( ρ * σ ) ( 0 , 1 2 ) .
Note that ∫_{−∞}^{−ρ*/σ} (1/(√(2π) σ)) e^{−(σx + v_S + (1/2)x²)} dx = ∫_{−∞}^{−ρ*/σ} (1/(√(2π) σ)) e^{−(1/2)(x+σ)²} e^{−v_S + (1/2)σ²} dx. Since lim_{v_S→∞} e^{−v_S} = 0 and ∫_{−∞}^{−ρ*/σ} (1/(√(2π) σ)) e^{−(1/2)(x+σ)²} dx is finite, we have
lim_{v_S→∞} ∫_{−∞}^{−ρ*/σ} (1/(√(2π) σ)) e^{−(σx + v_S + (1/2)x²)} dx = 0.
Therefore, for any ϵ > 0, there exists v̄_S(ϵ) such that for any v_S ≥ v̄_S(ϵ),
∫_{−∞}^{−ρ*/σ} (1/(√(2π) σ)) e^{−(σx + v_S + (1/2)x²)} dx < ϵ.
This implies that
−ϵ < ∫_{−∞}^{v_S−ρ*} g(v_B) dv_B < 1/2 + ϵ.
For any given σ_B (< ∞) and the corresponding ρ*(σ_B) < ∞, (A10) implies
U_S = ∫_{−∞}^{v_S−ρ*} g(v_B) dv_B + ∫_{v_S−ρ*}^{∞} u(v_S) f(v_B) dv_B.
Since lim_{v_S→∞} u(v_S) = 1 and ∫_{v_S−ρ*}^{∞} f(v_B) dv_B > 1/2, it follows from Lemma A4 that U_S > −ϵ + 1/2 > 0 for any ϵ < 1/2 and any v_S ≥ v̄_S(ϵ). □
Proof of Proposition 7. 
Since v_S = θ + ϵ_S and v_B = θ + ϵ_B, where ϵ_S ~ N(0, σ_S²), ϵ_B ~ N(0, σ_B²), σ_S² < σ_B², and ϵ_S, ϵ_B are independent, we have
f(v_B | θ) = (1/(√(2π) σ_B)) exp( −(v_B − θ)²/(2σ_B²) ),
f(v_S | θ) = (1/(√(2π) σ_S)) exp( −(v_S − θ)²/(2σ_S²) ).
Thus, it follows that
f(θ | v_S, v_B) ∝ f(v_B | θ) f(v_S | θ) ∝ exp( −(v_B − θ)²/(2σ_B²) − (v_S − θ)²/(2σ_S²) ) ∝ exp( −[ θ² − 2 ((v_S σ_B² + v_B σ_S²)/(σ_S² + σ_B²)) θ ] / (2 σ_S² σ_B²/(σ_S² + σ_B²)) ) ∝ exp( −( θ − (h_S v_S + h_B v_B)/(h_S + h_B) )² / (2/(h_S + h_B)) ),
which means
θ | v_S, v_B ~ N( (h_S v_S + h_B v_B)/(h_S + h_B), 2/(h_S + h_B) ).
Therefore, we obtain
θ̂ := arg min E[(θ̂ − θ)² | v_S, v_B] = E(θ | v_S, v_B) = (h_S v_S + h_B v_B)/(h_S + h_B),
E[(θ̂ − θ)² | v_S, v_B] = Var(θ | v_S, v_B) = 2/(h_S + h_B).
Hence, we have
∂MSE(θ̂)/∂h_B = ∂E[(θ̂ − θ)² | v_S, v_B]/∂h_B < 0.

Notes

1
Exceptions include Seidmann (1990) [2], Watson (1996) [3], Olszewski (2004) [4], Lai (2014) [5], Ishida and Shimizu (2016) [6], etc., to name a few.
2
Seidmann (1990) [2] used the terms (IA) and (CI) for a transparent motive and a monotone motive respectively.
3
Seidmann (1990) [2] also demonstrates that the conjecture is incorrect when the receiver has no information and their action is two-dimensional, similar to Chakraborty and Harbaugh (2010) [7]. In our model, the sender cares about a one-dimensional action of the receiver, unlike the other two papers. In other words, the sender’s utility depends on only one of the two actions available to the receiver.
4
Battaglini (2002) [10] also considers the two-dimensional action space of the buyer, but in their model, the two-dimensional type space and two sellers are crucial to informative cheap talk, while our result does not rely on them.
5
For example, at trials, jurors are instructed to ignore expert testimony rather than discount it, if they find it not credible. They are free to disregard the expert opinion in whole or in part, if they determine that the testimony is motivated by some bias or interest in the case. (Moenssens, 2009) [11].
6
One of the main reasons why we focus on this communicative equilibrium based on crosschecking strategies is that it is faithful to the Bayesian spirit, which requires exploiting all the available information, including the buyer’s information. One can think of other equilibria in which a seller always makes an honest report simply because they are indifferent between being honest and not, or, more concretely speaking, because their message is always ignored no matter what they say. However, such an equilibrium seems less intuitively appealing. For detailed discussions, see the end of Section 3.
7
These papers include Seidmann (1990) [2], Watson (1996) [3], Olszewski (2004) [4], Chen (2009) [13], Moreno de Barreda (2012) [14], Galeotti et al. (2013) [15], Lai (2014) [5] and Ishida and Shimizu (2016) [6]. The support of the receiver’s information is assumed to be binary in Chen (2009) [13], Galeotti et al. (2013) [15] and Lai (2014) [5], finite in Seidmann (1990) [2], Watson (1996) [3], Olszewski (2004) [4], and Ishida and Shimizu (2016) [6], and bounded in Moreno de Barreda (2012) [14].
8
The principle of indifference, so-named by Keynes (1921) [17], specifies that a uniform prior distribution should be assumed when nothing is known about the true state of nature before observable data are available.
9
Note, that we are assuming an improper prior distribution.
10
We assume the unbounded support for v S and v B and correspondingly unbounded strategy spaces to avoid triviality that differing (inconsistent) supports can lead to. For a similar approach, see Kartik et al. (2007) [18].
11
Degan and Li (2021) [19] and Espinosa and Ray (2022) [20] also assume normally distributed noisy signal generating processes like us but they endogenize the precision of noises. In Degan and Li (2021) [19], the seller chooses the precision of the noise with some cost which is increasing in the precision size. Since the precision is observable, it can be used as a signal of their information together with the noisy signal realization. Similarly, Espinosa and Ray (2022) [20] assume that the seller (agent) chooses the precision of noises with some cost. In their model, however, the buyer (principal) cannot observe the precision but only observe the signal realization.
12
Since the cheap talk message of the seller, m, is payoff-irrelevant by the definition of cheap talk, the payoffs of the players ( U S and U B ) which are described below should not depend on m. This is distinguished from Pitchik and Schotter (1987) [8]. In their model, an expert makes a binding recommendation, for example, about the price, so it is not cheap talk, whereas we consider an unbinding recommendation of an expert (for example, about the quality) thereby making the payoff of the buyer not directly depend on the recommendation m.
13
Some information is conveyed in equilibrium if a_1*(m; v_B) = a_1*(m′; v_B) but a_2*(m; v_B) ≠ a_2*(m′; v_B) for some different v_S and v_S′ such that m = s*(v_S) and m′ = s*(v_S′). But such an equilibrium is not of interest to us, because our main concern is the relation between the seller and the buyer and the transactions between them.
14
Since even fully-revealing strategies with s*(v_S) ≠ v_S reveal the truth in equilibrium, those strategies are literally truth-revealing. So, in fact, the terms “fully-revealing” and “truth-revealing” could be used interchangeably.
15
Since a message farther from the buyer’s signal is more likely to be punished, it can be regarded as a costly signal as in costly signaling models with single-crossing preferences (e.g., Cho and Kreps (1987) [21]) or double-crossing preferences (e.g., Chen et al. (2022) [22]). However, the cost is indirect in our model, whereas the signaling cost is direct in costly signaling models.
16
If σ_S² = σ_B² = 0, a fully revealing equilibrium can be attained by the buyer’s extreme form of the crosschecking strategy (ρ = 0), b_0(m, v_B) = m if m ≤ v_B and b_0(m, v_B) = v_B if m > v_B, or by their self-confirmatory strategy, b_0(m, v_B) = v_B for any m. However, our assumption that σ_S < σ_B excludes this case.
17
We are abusing the word “equilibrium”. We use the term in a broader sense to mean that it satisfies sequential rationality.
18
It is well known that under the normality assumption of error terms, the maximum likelihood estimator is equivalent to the (generalized) Bayesian estimator minimizing the loss function defined by the mean square error, which is the posterior mean.
19
See Zuhair et al. (1992) [23].
20
For empirical findings that a CARA utility function fits the data reasonably well, see Snowberg and Wolfers (2010) [24] and Barseghyan et al. (2018) [25].

References

  1. Crawford, V.; Sobel, J. Strategic Information Transmission. Econometrica 1982, 50, 1431–1451. [Google Scholar] [CrossRef]
  2. Seidmann, D. Effective Cheap Talk with Conflicting Interests. J. Econ. Theory 1990, 50, 445–458. [Google Scholar] [CrossRef]
  3. Watson, J. Information Transmission When the Informed Party Is Confused. Games Econ. Behav. 1996, 12, 143–161. [Google Scholar] [CrossRef]
  4. Olszewski, W. Informal Communication. J. Econ. Theory 2004, 117, 180–200. [Google Scholar] [CrossRef]
  5. Lai, E. Expert Advice for Amateurs. J. Econ. Behav. Organ. 2014, 103, 1–16. [Google Scholar] [CrossRef]
  6. Ishida, J.; Shimizu, T. Cheap Talk with an Informed Receiver. Econ. Theory Bull. 2016, 4, 61–72. [Google Scholar] [CrossRef]
  7. Chakraborty, A.; Harbaugh, R. Persuasion by Cheap Talk. Am. Econ. Rev. 2010, 100, 2361–2382. [Google Scholar] [CrossRef]
  8. Pitchik, C.; Schotter, A. Honesty in a Model of Strategic Information Transmission. Am. Econ. Rev. 1987, 77, 1032–1036. [Google Scholar]
  9. Wolinsky, A. Competition in a Market for Informed Experts’ Services. Rand J. Econ. 1993, 24, 380–398. [Google Scholar] [CrossRef]
  10. Battaglini, M. Multiple Referrals and Multidimensional Cheap Talk. Econometrica 2002, 70, 1379–1401. [Google Scholar] [CrossRef]
  11. Moenssens, A. Jury Instructions on Expert Testimony. In Wiley Encyclopedia of Forensic Science; Wiley: Hoboken, NJ, USA, 2009. [Google Scholar]
  12. Lipnowski, E.; Ravid, D. Cheap Talk with Transparent Motives. Econometrica 2020, 88, 1631–1660. [Google Scholar] [CrossRef]
  13. Chen, Y. Communication with Two-Sided Asymmetric Information; University of Southampton: Southampton, UK, 2009. [Google Scholar]
  14. Moreno de Barreda, L. Cheap Talk with Two-Sided Private Information; University of Oxford: Oxford, UK, 2012. [Google Scholar]
  15. Galeotti, A.; Ghiglino, C.; Squintani, F. Strategic Information Transmission Networks. J. Econ. Theory 2013, 148, 1751–1769. [Google Scholar] [CrossRef]
  16. Green, E.; Porter, R. Noncooperative Collusion under Imperfect Price Information. Econometrica 1984, 52, 87–100. [Google Scholar] [CrossRef]
  17. Keynes, J.M. A Treatise on Probability; Macmillan: London, UK, 1921. [Google Scholar]
  18. Kartik, N.; Ottaviani, M.; Squintani, F. Credulity, Lies, and Costly Talk. J. Econ. Theory 2007, 134, 93–116. [Google Scholar] [CrossRef]
  19. Degan, A.; Li, M. Persuasion with Costly Precision. Econ. Theory 2021, 72, 869–908. [Google Scholar] [CrossRef]
  20. Espinosa, F.; Ray, D. Too Good to be True? Retention Rules for Noisy Agents. Am. Econ. J. Microecon. 2022, 15, 493–535. [Google Scholar] [CrossRef]
  21. Cho, I.-K.; Kreps, D.M. Signaling Games and Stable Equilibria. Q. J. Econ. 1987, 102, 179–221. [Google Scholar] [CrossRef]
  22. Chen, C.-H.; Ishida, J.; Suen, W. Signaling under Double-Crossing Preferences. Econometrica 2022, 90, 1225–1260. [Google Scholar] [CrossRef]
  23. Zuhair, S.; Taylor, D.; Kramer, R. Choice of Utility Function Form: Its Effect on Classification of Risk Preferences and the Prediction of Farmer Decisions. Agric. Econ. 1992, 6, 333–344. [Google Scholar] [CrossRef]
  24. Snowberg, E.; Wolfers, J. Explaining the Favorite–Long Shot Bias: Is it Risk-Love or Misperceptions? J. Political Econ. 2010, 118, 723–746. [Google Scholar] [CrossRef]
  25. Barseghyan, L.; Molinari, F.; O’Donoghue, T.; Teitelbaum, J. Estimating Risk Preferences in the Field. J. Econ. Lit. 2018, 56, 501–564. [Google Scholar] [CrossRef]
  26. Bagnoli, M.; Bergstrom, T. Log-Concave Probability and Its Applications. Econ. Theory 2005, 26, 445–469. [Google Scholar] [CrossRef]
Figure 1. Optimal ρ * for various values of σ.
Figure 2. Multiple Solutions for ρ * ( σ ) when σ > σ ¯.
Share and Cite

Jung, J.; Kim, J.Y. Cheap Talk with Transparent and Monotone Motives from a Seller to an Informed Buyer. Games 2024, 15, 20. https://doi.org/10.3390/g15030020
