Article

Leveraging Possibilistic Beliefs in Unrestricted Combinatorial Auctions

1 Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, USA
2 Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA 02139, USA
* Author to whom correspondence should be addressed.
Games 2016, 7(4), 32; https://doi.org/10.3390/g7040032
Submission received: 9 August 2016 / Revised: 15 October 2016 / Accepted: 17 October 2016 / Published: 26 October 2016
(This article belongs to the Special Issue Epistemic Game Theory and Logic)

Abstract: In unrestricted combinatorial auctions, we put forward a mechanism that guarantees a meaningful revenue benchmark based on the possibilistic beliefs that the players have about each other’s valuations. In essence, the mechanism guarantees, within a factor of two, the maximum revenue that the “best informed player” would be sure to obtain if he/she were to sell the goods to his/her opponents via take-it-or-leave-it offers. Our mechanism is probabilistic and of an extensive form. It relies on a new solution concept, for analyzing extensive-form games of incomplete information, which assumes only mutual belief of rationality. Moreover, our mechanism enjoys several novel properties with respect to privacy, computation and collusion.

1. Introduction

In this paper, we study the problem of generating revenue in unrestricted combinatorial auctions, relying solely on the players’ possibilistic beliefs about each other’s valuations. Let us explain.
In a combinatorial auction, there are multiple indivisible goods for sale and multiple players who are interested in buying. A valuation of a player is a function specifying a non-negative value for each subset of the goods. Many constraints on the players’ valuations have been considered in the literature for combinatorial auctions (see Footnote 1). We instead focus on combinatorial auctions that are unrestricted. That is, in our auctions, a player’s value for one subset of the goods may be totally unrelated to his/her value for another subset and to another player’s value for any subset. This is the most general class of auctions. It is well known that, for such auctions, the famous Vickrey–Clarke–Groves (VCG) mechanism [4,5,6] maximizes social welfare in dominant strategies, but offers no guarantee about the amount of revenue it generates. In fact, for unrestricted combinatorial auctions, no known mechanism guarantees any significant revenue benchmark in settings of incomplete information (see Footnote 2).
In our setting, the seller has no information about the players’ valuations, and each player knows his/her own valuation, but not necessarily the valuations of his/her opponents. Our players, however, have beliefs about the valuations of their opponents. Typically, beliefs are modeled as probability distributions: for instance, it is often assumed that the valuation profile θ is drawn from a common prior. Our setting is instead non-Bayesian: the players’ beliefs are possibilistic and can be arbitrary. That is, a player i’s belief consists of a set of valuation profiles, $B_i$, to which he/she believes θ belongs. We impose no restriction on $B_i$ except that, since i knows his/her own valuation, for every profile $v \in B_i$, we have $v_i = \theta_i$. In a sense, therefore, such possibilistic beliefs are not assumed to exist, but always exist. For instance, if a player i has no information about his/her opponents, then $B_i$ consists of all valuation profiles v such that $v_i = \theta_i$; if i has complete information about his/her opponents, then $B_i = \{\theta\}$; and if θ is indeed drawn from a common prior D, then $B_i$ consists of the support of D conditioned on $\theta_i$.
Possibilistic beliefs are much less structured than Bayesian ones. Therefore, it should be harder for an auction mechanism to generate revenue solely based on the players’ possibilistic beliefs. Yet, in single-good auctions, the authors of [10] have constructed a mechanism that guarantees revenue at least as high as the second-highest valuation and, sometimes, much higher. In this paper, for unrestricted combinatorial auctions, we construct a mechanism that guarantees, within a factor of two, another interesting revenue benchmark, $BB$, solely based on the players’ possibilistic beliefs.
The benchmark $BB$ is formally defined in Section 3, following the framework put forward by Harsanyi [11] and Aumann [12]. However, it can be intuitively described as follows. Let $BB_i$ (for “best belief”) be the maximum social welfare player i can guarantee, based on his/her beliefs, by assigning the goods to his/her opponents. Then, $BB = \max_i BB_i$, and the revenue guaranteed by our main mechanism is virtually $BB/2$. Notice that each $BB_i$ does not depend on $\theta_i$ at all, a property that, as we shall see, gives our mechanism some advantage in protecting the players’ privacy.
To ease the discussion of our main mechanism, in Section 4, we construct a first mechanism, of normal form, that guarantees revenue virtually equal to $BB/2$ under two-step elimination of weakly-dominated strategies. The analysis of our first mechanism is very intuitive. However, elimination of weakly-dominated strategies is order-dependent and does not yet have a well-understood epistemic characterization. Moreover, our first mechanism suffers from two problems shared by most normal-form mechanisms. Namely, (1) it reveals all players’ true valuations, and (2) it requires an amount of communication that is exponential in the number of goods. Neither problem is an issue from a purely game-theoretic point of view (see Footnote 3), but both are quite serious in several realistic applications, where privacy and communication are, together with collusion and computational complexity, legitimate concerns [14].
Our main mechanism, the best-belief mechanism, significantly decreases the magnitude of the above problems. This second mechanism is designed and analyzed in Section 6 and is of extensive form. In order to analyze it in settings where the players have possibilistic beliefs, we propose a new and compelling solution concept that only assumes mutual belief of rationality, where the notion of rationality is the one considered by Aumann [15].

The Resiliency of the Best-Belief Mechanism.

Besides guaranteeing revenue virtually equal to $BB/2$ under a strong solution concept, the best-belief mechanism enjoys several novel properties with respect to privacy, computation, communication and collusion.
  • Privacy: People value privacy. Thus, “by definition”, a privacy-valuing player i de facto receives some “negative utility” if, in an auction, he/she reveals his/her true valuation $\theta_i$ in its entirety, but does not win any goods. Even if he/she wins some goods, his/her traditional utility (namely, his/her value for the goods he/she receives minus the price he/she pays) should be discounted by the loss he/she suffers from having revealed $\theta_i$. One advantage of our best-belief mechanism is that it elicits little information from a player, which presumably diminishes the likelihood that privacy concerns substantially distort a player’s incentives.
  • Computation: Typically, unrestricted combinatorial auctions require the evaluation of complex functions, such as the maximum social welfare. In principle, one may resort to approximating such functions, but approximation may distort incentives (see Footnote 4). By contrast, our mechanism delegates all difficult computation to the players and ensures that the use of approximation is properly aligned with their incentives.
  • Communication: By eliciting little information from the players, our mechanism also has low communication complexity: quadratic in the number of players and the number of goods.
  • Collusion: Collusion can totally disrupt many mechanisms. In particular, the efficiency of the VCG mechanism can be destroyed by just two collusive players [14]. By contrast, collusion can somewhat degrade the performance of our mechanism, but not totally disrupt it, unless all players are collusive. As long as collusive players are also rational, at least in a very mild sense, the revenue guaranteed by our mechanism is at least half of that obtainable by “the best informed independent player”.
For a detailed discussion about these properties of our mechanism, see Section 6.2.

2. Related Work

Generating revenue is one of the most important objectives in auction design; see [16,17] for thorough introductions about this area. Following the seminal result of [13], there has been a huge literature on Bayesian auctions [18]. Since we do not assume the existence of a common prior and we focus on the players’ possibilistic rather than probabilistic beliefs, our study is different from Bayesian auctions. Spectrum auctions have been widely studied both in theory and in practice, and several interesting auction forms have been proposed recently; see, e.g., [19,20,21,22,23]. Most existing works consider auctions of restricted forms, such as auctions with multiple identical goods and single-demand valuations [24], valuations with free disposal [25], auctions with additive valuations [26], auctions with unlimited supply [27], etc. Revenue in unrestricted combinatorial auctions has been considered by [28], which generalizes the second-price revenue benchmark to such auctions and provides a mechanism guaranteeing a logarithmic fraction of their benchmark in dominant strategies.
The solution concept developed in this paper refines the notion of implementation in undominated strategies [29] and considers a two-round elimination of dominated strategies. In particular, we extend the notion of distinguishably-dominated strategies [30] from extensive-form games of complete information to extensive-form games of incomplete information and possibilistic beliefs. As shown in [30], iterated elimination of distinguishably-dominated strategies is order independent with respect to histories and characterizes extensive-form rationalizability [31,32]. In [10,33], elimination of strictly-dominated strategies has been extended to deal with possibilistic beliefs, but only for normal-form games. Moreover, [34] leverages the players’ beliefs for increasing the sum of social welfare and revenue in unrestricted combinatorial auctions.
Preserving the privacy of the players’ valuations, or types in general, in the execution of a mechanism has been studied by [35]. The authors present a general method using some elementary physical equipment (i.e., envelopes and an envelope randomizer) to execute any given normal-form mechanism without trusting anyone and without revealing any information about the players’ true types beyond what is unavoidably revealed in the final outcome. An alternative way to protect the players’ privacy that has often been considered for auctions is to use encryption and zero-knowledge proofs. In particular, the authors of [36] make efficient use of cryptography to implement single-good auctions so that, after learning all bids, an untrusted auctioneer can prove who won the good and what price he/she should pay, without having any player learn any information about the bid of another player. Moreover, in differential privacy [37], the mechanisms or databases inject noise into the final outcome to preserve the participants’ privacy. By contrast, our main mechanism relies neither on envelopes or any other form of physical equipment, nor on cryptography or noise. It preserves the players’ privacy because, despite the fact that all actions are public, a player is asked to reveal little information about himself/herself.
Strong notions of collusion-resilient implementation have been studied in the literature, such as coalition incentive compatibility [38] and bribe-proofness [39]. However, those works prove that many social choice functions cannot be implemented under these solution concepts. Collusive dominant-strategy truthful implementation is defined in [40], together with a mechanism maximizing social welfare in multi-unit auctions under this notion. Other forms of collusion resiliency have also been investigated, in particular by [41,42,43,44,45,46]. Their mechanisms, however, are not applicable to unrestricted combinatorial auctions in non-Bayesian settings. Moreover, the collusion models there assume various restrictions (e.g., collusive players cannot make side-payments to one another or enter binding agreements, there is a single coalition, no coalition can have more than a given number of players, etc.). By contrast, in unrestricted combinatorial auctions, our main mechanism does not assume any such restrictions. The resiliency of our mechanism is similar to that of [28], where the guaranteed revenue benchmark is defined only over the independent players’ valuations when collusion exists.

3. Preliminaries and the Best-Belief Revenue Benchmark

A combinatorial auction context is specified by a triple $(n, m, \theta)$: the set of players is $\{1, \ldots, n\}$; the set of goods is $\{1, \ldots, m\}$; and the true valuation profile is θ. Adopting a discrete perspective, we assume that a player’s value for a set of goods is always an integer. Thus, each $\theta_i$, the true valuation of i, is a function from the powerset $2^{\{1,\ldots,m\}}$ to the set of non-negative integers $\mathbb{Z}_+$, with $\theta_i(\emptyset) = 0$. The set of possible valuations of i, $\Theta_i$, consists of all such functions, and $\Theta = \Theta_1 \times \cdots \times \Theta_n$. After constructing and analyzing our mechanisms, we will discuss the scenarios where values are real numbers.
An outcome of a combinatorial auction is a pair of profiles $(A, P)$. Here, A is the allocation, with $A_i \subseteq \{1, \ldots, m\}$ being the set of goods each player i gets and $A_i \cap A_j = \emptyset$ for each player $j \neq i$; and P is the price profile, with $P_i \in \mathbb{R}$ denoting how much each player i pays; if $P_i < 0$, then i receives $-P_i$ from the seller. The set of all possible outcomes is denoted by Ω.
The utility function of i, $u_i$, maps each valuation $t_i \in \Theta_i$ and each outcome $\omega = (A, P)$ to a real: $u_i(t_i, \omega) = t_i(A_i) - P_i$. The social welfare of ω is $SW(\omega) \triangleq \sum_i \theta_i(A_i)$, and the revenue of ω is $REV(\omega) \triangleq \sum_i P_i$. If ω is a probability distribution over outcomes, then $u_i(t_i, \omega)$, $SW(\omega)$ and $REV(\omega)$ denote the corresponding expectations.
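For concreteness, the following minimal Python sketch spells out these three quantities under an illustrative encoding that is not taken from the paper: a valuation is a function from frozensets of goods to non-negative integers, and an outcome is a pair of dictionaries indexed by player.

from typing import Callable, Dict, FrozenSet, Tuple

Valuation = Callable[[FrozenSet[int]], int]                    # t_i: subsets of goods -> non-negative ints
Outcome = Tuple[Dict[int, FrozenSet[int]], Dict[int, float]]   # (allocation A, price profile P)

def utility(t_i: Valuation, omega: Outcome, i: int) -> float:
    """u_i(t_i, omega) = t_i(A_i) - P_i."""
    A, P = omega
    return t_i(A[i]) - P[i]

def social_welfare(theta: Dict[int, Valuation], omega: Outcome) -> float:
    """SW(omega) = sum_i theta_i(A_i)."""
    A, _ = omega
    return sum(theta[i](A[i]) for i in theta)

def revenue(omega: Outcome) -> float:
    """REV(omega) = sum_i P_i."""
    _, P = omega
    return sum(P.values())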
Definition 1.
An augmented combinatorial auction context is a four-tuple $(n, m, \theta, B)$, where $(n, m, \theta)$ is a combinatorial auction context and B is the belief profile: for each player i, $B_i$, the belief of i, is a set of valuation profiles such that $t_i = \theta_i$ for all $t \in B_i$.
In an augmented combinatorial auction context, $B_i$ is the set of candidate valuation profiles in i’s mind for θ. The restriction that $t_i = \theta_i$ for all $t \in B_i$ corresponds to the fact that player i knows his/her own valuation. Player i’s belief is correct if $\theta \in B_i$ and incorrect otherwise. As we shall see, our result holds whether or not the players’ beliefs are correct. From now on, since we do not consider any other type of auctions, we use the terms “augmented” and “combinatorial” for emphasis only.
A revenue benchmark f is a function that maps each auction context C to a real number $f(C)$, denoting the amount of revenue that is desired under this context.
Definition 2.
The best-belief revenue benchmark, $BB$, is defined as follows. For each auction context $C = (n, m, \theta, B)$,
$$BB(C) \triangleq \max_i BB_i,$$
where for each player i,
$$BB_i \triangleq \max_{(A,P) \in \Omega:\ A_i = \emptyset \text{ and } P_j \le t_j(A_j)\ \forall j \neq i,\ \forall t \in B_i} REV(A, P).$$
Note that $BB_i$ represents the maximum revenue that player i would be sure to obtain if he/she were to sell the goods to his/her opponents via take-it-or-leave-it offers, which is also the maximum social welfare player i can guarantee, based on his/her beliefs, by assigning the goods to his/her opponents. As an example, consider a combinatorial auction with two items and three players. Player 1 only wants Item 1, and $\theta_1(\{1\}) = 100$; Player 2 only wants Item 2, and $\theta_2(\{2\}) = 100$; and Player 3 only wants the two items together, and $\theta_3(\{1,2\}) = 50$. All the unspecified values are zero. Moreover, Player 1 believes that Player 2’s value for Item 2 is at least 25 and Player 3’s value for the two items together is at least 10: that is, $B_1 = \{v \mid v_1 = \theta_1,\ v_2(\{2\}) \ge 25,\ v_3(\{1,2\}) \ge 10\}$. Accordingly, $BB_1 = 25$: the best Player 1 can do in selling to others is to offer Item 2 to Player 2 at price 25. Furthermore, $B_2 = \{v \mid v_2 = \theta_2,\ v_1(\{1\}) \ge 80,\ v_3(\{2\}) \ge 20\}$, which implies $BB_2 = 100$, achieved by offering Item 1 to Player 1 at price 80 and Item 2 to Player 3 at price 20. Finally, $B_3 = \{v \mid v_3 = \theta_3,\ v_1(\{1\}) \ge 80,\ v_2(\{2\}) \ge 70\}$, which implies $BB_3 = 150$, achieved by offering Item 1 to Player 1 at price 80 and Item 2 to Player 2 at price 70. Therefore, $BB = 150$ in this example. Note that Player 1’s and Player 3’s beliefs are correct, but Player 2’s beliefs are incorrect because $\theta_3(\{2\}) = 0$.
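The computation of each $BB_i$ can be made concrete with a small brute-force sketch. The code below assumes a simplified, hypothetical encoding of a belief $B_i$: a lower-bound valuation lb[j] for each opponent j, such that i believes $t_j(A) \ge \mathrm{lb}[j](A)$ for every $t \in B_i$; under this encoding, the highest price i can guarantee for offering bundle A to j is lb[j](A). The example reproduces $BB_3 = 150$ from the text.

from itertools import product

def bb_i(goods, opponents, lb):
    """Brute-force BB_i: assign every good to some opponent (or to nobody) and keep
    the best guaranteed revenue, i.e., the best sum of lower-bound prices."""
    best = 0
    for owners in product([None] + opponents, repeat=len(goods)):
        bundles = {j: frozenset(g for g, o in zip(goods, owners) if o == j)
                   for j in opponents}
        best = max(best, sum(lb[j](bundles[j]) for j in opponents))
    return best

# Player 3's belief from the example, encoded as lower bounds (all other bundles: 0).
lb_for_player_3 = {
    1: lambda A: 80 if A == frozenset({1}) else 0,   # Player 1 values Item 1 at >= 80
    2: lambda A: 70 if A == frozenset({2}) else 0,   # Player 2 values Item 2 at >= 70
}
print(bb_i(goods=[1, 2], opponents=[1, 2], lb=lb_for_player_3))  # prints 150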
Furthermore, note that, if there is really a common prior from which the players’ valuations are drawn, then the players’ possibilistic beliefs consist of the support of the distribution. In this case, it is expected that the optimal Bayesian mechanism generates more revenue than the best-belief benchmark. However, this is a totally different ball game, because Bayesian mechanisms assume that the seller has much knowledge about the players. Besides, little is known in the literature about the structure of the optimal Bayesian mechanism for unrestricted combinatorial auctions or even a good approximation to it.
Finally, the best-belief benchmark is measured based on the players’ beliefs about each other, not on their true valuations. If the players all know nothing about each other and believe that the others’ values can be anything from close to zero to close to infinity (or a huge finite number), then the benchmark is low. The power of the benchmark comes from the class of contexts where the players know each other well (e.g., as long-time competitors in the same market) and can effectively narrow down the range of the others’ values. In this case, our mechanism generates good revenue without assuming a common prior.

4. A Normal-Form Mechanism

As a warm-up, in this section, we construct a normal-form mechanism that implements the best-belief revenue benchmark within a factor of two, under two-step elimination of weakly-dominated strategies. Indeed, weakly-dominant/dominated strategies have been widely used in analyzing combinatorial auctions where the players only report their valuations: that is, it is weakly dominant for each player to report his/her true valuation. When each player reports both his/her own valuation and his/her beliefs about the other players, it is intuitive that a player i first reasons about what the other players report for their valuations and then reasons about what to report for his/her beliefs about them given their reported valuations: that is, an iterated elimination of dominated strategies. However, in our mechanism, there is no need to go all the way to the end of the iterated procedure, and two steps are sufficient.
Roughly speaking, all players first simultaneously remove all of their weakly-dominated strategies; then, each player further removes all of his/her strategies that have now become weakly dominated, based on all players’ surviving strategies. However, care must be taken when defining this solution concept in our setting. Indeed, since a player does not know the other players’ true valuations, he/she cannot compute their strategies surviving the first round of elimination, which are needed for him/her to carry out his/her second round of elimination. To be “on the safe side”, we require that the players eliminate their strategies conservatively: that is, a player eliminates a strategy in the second round only if it is dominated by the same alternative strategy with respect to all valuation profiles that he/she believes to be possible. This notion of elimination is the same as the one used by Aumann in [15], except that the latter uses strict instead of weak domination. In [33], the authors provide an epistemic characterization for iterated elimination based on the notion of [15].
More precisely, given a normal-form auction mechanism M, let $S_i$ be the set of strategies of each player i and $S = S_1 \times \cdots \times S_n$. For any strategy profile s, $M(s)$ is the outcome when each player i uses strategy $s_i$. If $T = T_i \times T_{-i}$ is a subset of strategy profiles, $t_i \in \Theta_i$, $s_i \in T_i$, and $\sigma_i \in \Delta(T_i)$ (see Footnote 5), then we say that $s_i$ is weakly dominated by $\sigma_i$ with respect to $t_i$ and T, in symbols $s_i \prec_T^{t_i} \sigma_i$, if:
  • $u_i(t_i, M(s_i, s_{-i})) \le u_i(t_i, M(\sigma_i, s_{-i}))$ for all $s_{-i} \in T_{-i}$; and
  • $u_i(t_i, M(s_i, s_{-i})) < u_i(t_i, M(\sigma_i, s_{-i}))$ for some $s_{-i} \in T_{-i}$.
That is, $s_i$ is weakly dominated by $\sigma_i$ when the valuation of player i is $t_i$ and the set of strategy sub-profiles of the other players is $T_{-i}$. The set of strategies in $T_i$ that are not weakly dominated with respect to $t_i$ and T is denoted by $U_i(t_i, T)$. For simplicity, we use $U_i$ to denote $U_i(\theta_i, S)$, the set of undominated strategies of player i.
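For small finite games, this definition can be verified mechanically. The sketch below is illustrative only: it assumes a function M mapping a full strategy profile to an outcome, a utility function u_i(t_i, outcome), and a mixed strategy represented as a dictionary from pure strategies to probabilities; none of these encodings appear in the paper.

from itertools import product

def weakly_dominated(i, s_i, sigma_i, t_i, T, M, u_i):
    """Return True iff s_i is weakly dominated by the mixed strategy sigma_i
    with respect to valuation t_i and the strategy sets T = (T_1, ..., T_n)."""
    others = [T[j] for j in range(len(T)) if j != i]
    never_better, sometimes_worse = True, False
    for s_minus_i in product(*others):
        profile = list(s_minus_i)
        profile.insert(i, s_i)
        u_pure = u_i(t_i, M(tuple(profile)))
        u_mixed = 0.0
        for alt, prob in sigma_i.items():        # expected utility of the mixed strategy
            profile[i] = alt
            u_mixed += prob * u_i(t_i, M(tuple(profile)))
        if u_pure > u_mixed:
            never_better = False
        if u_pure < u_mixed:
            sometimes_worse = True
    return never_better and sometimes_worse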
Definition 3.
Given an auction context $C = (n, m, \theta, B)$ and a mechanism M, the set of conservatively weakly-rational strategies of player i is:
$$C_i \triangleq U_i \setminus \{s_i : \exists \sigma_i \in \Delta(U_i) \text{ s.t. } \forall t \in B_i,\ s_i \prec_{U(t)}^{\theta_i} \sigma_i\},$$
where $U(t) \triangleq \times_j U_j(t_j, S)$ for any $t \in \Theta$. The set of conservatively weakly-rational strategy profiles is $C = C_1 \times \cdots \times C_n$.
Mechanism M conservatively weakly implements a revenue benchmark f if, for any auction context C and any strategy profile $s \in C$,
$$REV(M(s)) \ge f(C).$$
Now, we provide and analyze our normal-form mechanism $M_{Normal}$. Intuitively, the players compete for the right to sell to others, and the mechanism generates revenue by delegating this right to the player who offers the most revenue. Besides the number of players n and the number of goods m, the mechanism takes as input a constant $\epsilon \in (0, 1]$. The players act only in Step 1, and Steps a through f are “steps taken by the mechanism”. The expression “$X := x$” sets or resets variable X to value x. Moreover, $[m] = \{1, 2, \ldots, m\}$.
Mechanism $M_{Normal}$:
Step 1: Each player i, publicly and simultaneously with the other players, announces:
  - a valuation $v_i$; and
  - an outcome $\omega^i = (\alpha^i, \pi^i)$ such that $\alpha^i_i = \emptyset$ and, for each player j, $\pi^i_j$ is zero whenever $\alpha^i_j = \emptyset$ and is a positive integer otherwise.
After the players simultaneously execute Step 1, the mechanism chooses the outcome $(A, P)$ by means of the following six steps.
Step a: Set $A_i := \emptyset$ and $P_i := 0$ for each player i.
Step b: Set $R_i := REV(\omega^i)$ for each player i, and $w := \operatorname{argmax}_i R_i$, with ties broken lexicographically.
Step c: Publicly flip a fair coin and denote the result by r.
Step d: If $r = Heads$, then $A_w := \operatorname{argmax}_{a \subseteq [m]} v_w(a)$, with ties broken lexicographically, and halt.
Step e: (Note that $r = Tails$ when this step is reached.) For each player i such that $\alpha^w_i \neq \emptyset$:
  - If $v_i(\alpha^w_i) < \pi^w_i$, then $P_w := P_w + \pi^w_i$.
  - Otherwise, $A_i := \alpha^w_i$ and $P_i := \pi^w_i - \frac{\epsilon}{n}$.
Step f: For each player i, $P_i := P_i - \delta_i$, where $\delta_i = \frac{\epsilon}{n} \cdot \frac{R_i}{1 + R_i}$.
The final outcome is $(A, P)$.
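To make the flow of the six steps concrete, here is a minimal Python sketch of the mechanism’s outcome computation, under an illustrative encoding that is not from the paper: announcements[i] = (v_i, alpha_i, pi_i), where v_i maps frozensets of goods to integers, alpha_i[j] is the bundle player i offers to player j (possibly empty) and pi_i[j] its integer price.

import random
from itertools import combinations

def all_subsets(goods):
    return [frozenset(c) for r in range(len(goods) + 1) for c in combinations(goods, r)]

def mechanism_normal(n, m, eps, announcements, rng=random):
    """Sketch of M_Normal; announcements[i] = (v_i, alpha_i, pi_i) as described above."""
    goods = list(range(1, m + 1))
    A = {i: frozenset() for i in range(n)}                              # Step a
    P = {i: 0.0 for i in range(n)}
    R = {i: sum(pi.values()) for i, (_, _, pi) in enumerate(announcements)}   # Step b
    w = min(range(n), key=lambda i: (-R[i], i))                         # ties broken lexicographically
    if rng.random() < 0.5:                                              # Steps c, d: Heads
        v_w = announcements[w][0]
        A[w] = max(all_subsets(goods), key=v_w)                         # winner's favorite subset, free
        return A, P
    _, alpha_w, pi_w = announcements[w]                                 # Step e: Tails
    for i in range(n):
        bundle = alpha_w.get(i, frozenset())
        if not bundle:
            continue
        v_i = announcements[i][0]
        if v_i(bundle) < pi_w[i]:
            P[w] += pi_w[i]                                             # the price is charged to the winner as a fine
        else:
            A[i], P[i] = bundle, pi_w[i] - eps / n                      # offer accepted at a small discount
    for i in range(n):                                                  # Step f: rewards
        P[i] -= (eps / n) * R[i] / (1 + R[i])
    return A, P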
In the analysis, we refer to player w as the winner and to each $\delta_i$ as player i’s reward. Furthermore, given a context $(n, m, \theta, B)$ and an outcome ω, for succinctness, we use $u_i(\omega)$ instead of $u_i(\theta_i, \omega)$ for player i’s utility under ω. We have the following.
Theorem 1.
For any context $(n, m, \theta, B)$ and constant $\epsilon \in (0, 1]$, mechanism $M_{Normal}$ conservatively weakly implements the revenue benchmark $\frac{BB}{2} - \epsilon$.
As we will see in the proof of Theorem 1, the mechanism incentivizes each player i to report his/her true valuation and an outcome whose revenue is at least $BB_i$. In particular, the latter is achieved by the fair coin toss: when $r = Heads$, the winner is given his/her favorite subset of goods for free, which is better than any offer he/she can possibly get if somebody else becomes the winner. Moreover, the rewards are strictly increasing in the revenue of the reported outcomes. Accordingly, the players have no incentive to underbid, that is, to report an outcome whose revenue is lower than the corresponding $BB_i$. Thus, the winner’s reported outcome has revenue at least $\max_i BB_i$. When $r = Tails$, the mechanism tries to sell the goods as suggested by the winner to the other players, as take-it-or-leave-it offers. If a player accepts the offer, then he/she pays the suggested price; otherwise, this price is charged to the winner as a fine. Accordingly, with probability 1/2 (that is, when $r = Tails$), the mechanism generates revenue essentially $\max_i BB_i$. Formally, we show the following two lemmas.
Lemma 1.
For any context $(n, m, \theta, B)$, constant ϵ, player i and strategy $s_i = (v_i, \omega^i)$, if $s_i \in U_i$, then $v_i = \theta_i$.
Proof. 
Notice that $v_i$ is used in two places in the mechanism: to select player i’s “favorite subset” in Step d when he/she is the winner, and to decide whether he/she gets the set $\alpha^w_i$ in Step e when he/she is not the winner. Intuitively, it is i’s best strategy to announce his/her true valuation, so as to select his/her “truly favorite subset” and to take the allocated set if and only if its price is less than or equal to his/her true value for it.
More precisely, arbitrarily fix a strategy $s_i = (v_i, \omega^i)$ with $v_i \neq \theta_i$, and let $s'_i = (\theta_i, \omega^i)$. We show that $s_i \prec_S^{\theta_i} s'_i$, where S is the set of all strategy profiles of $M_{Normal}$. To do so, arbitrarily fix a strategy sub-profile $s_{-i}$ of the other players; let $(A, P)$ be the outcome of $s = (s_i, s_{-i})$, and let $(A', P')$ be the outcome of $s' = (s'_i, s_{-i})$. Since $s_i$ and $s'_i$ announce the same outcome $\omega^i$, i is the winner under s if and only if he/she is the winner under $s'$. We discuss these two cases separately.
Case 1:
i is the winner under both s and $s'$.
In this case, conditioned on $r = Heads$, we have $A_i = \operatorname{argmax}_{a \subseteq [m]} v_i(a)$, $A'_i = \operatorname{argmax}_{a \subseteq [m]} \theta_i(a)$ and $P_i = P'_i = 0$. Accordingly, $\theta_i(A_i \mid r = Heads) \le \theta_i(A'_i \mid r = Heads)$ and $u_i(A, P \mid r = Heads) \le u_i(A', P' \mid r = Heads)$.
Conditioned on $r = Tails$, we have $A_i = A'_i = \emptyset$ and:
$$P_i = P'_i = \sum_{j:\ \alpha^i_j \neq \emptyset \text{ and } v_j(\alpha^i_j) < \pi^i_j} \pi^i_j \; - \; \delta_i,$$
where $\delta_i = \frac{\epsilon}{n} \cdot \frac{REV(\omega^i)}{1 + REV(\omega^i)}$ is player i’s reward under both strategy profiles. Accordingly, $u_i(A, P \mid r = Tails) = u_i(A', P' \mid r = Tails)$.
In sum, $u_i(A, P) \le u_i(A', P')$ in Case 1.
Case 2:
i is the winner under neither s nor $s'$.
In this case, the winner w is the same under both s and $s'$. Conditioned on $r = Heads$, we have $A_i = A'_i = \emptyset$ and $P_i = P'_i = 0$; thus, $u_i(A, P \mid r = Heads) = u_i(A', P' \mid r = Heads)$.
Conditioned on $r = Tails$, if $v_i(\alpha^w_i) < \pi^w_i$ and $\theta_i(\alpha^w_i) < \pi^w_i$, or if both inequalities are reversed, then $(A_i, P_i) = (A'_i, P'_i)$ and $u_i(A, P \mid r = Tails) = u_i(A', P' \mid r = Tails)$. Otherwise, if $v_i(\alpha^w_i) < \pi^w_i$ and $\theta_i(\alpha^w_i) \ge \pi^w_i$, then:
$$u_i(A', P' \mid r = Tails) = \theta_i(\alpha^w_i) - \pi^w_i + \tfrac{\epsilon}{n} + \delta_i > \delta_i = u_i(A, P \mid r = Tails), \tag{1}$$
where again $\delta_i$ is i’s reward under both strategy profiles. Otherwise, we have $v_i(\alpha^w_i) \ge \pi^w_i$ and $\theta_i(\alpha^w_i) < \pi^w_i$; thus:
$$u_i(A, P \mid r = Tails) = \theta_i(\alpha^w_i) - \pi^w_i + \tfrac{\epsilon}{n} + \delta_i \le -1 + \tfrac{\epsilon}{n} + \delta_i < \delta_i = u_i(A', P' \mid r = Tails). \tag{2}$$
In sum, $u_i(A, P) \le u_i(A', P')$ in Case 2, as well.
It remains to show that there exists a strategy sub-profile $s_{-i}$ such that $u_i(A, P) < u_i(A', P')$, and such an $s_{-i}$ has actually appeared in Case 2 above. Indeed, since $v_i \neq \theta_i$, there exists $a \subseteq [m]$ such that $v_i(a) \neq \theta_i(a)$. When $v_i(a) < \theta_i(a)$, arbitrarily fix a player $j \neq i$, and choose strategy $s_j$ such that:
$$\alpha^j_i = a, \quad \pi^j_i = \theta_i(a), \quad \text{and} \quad REV(\omega^j) > \max\{\pi^j_i,\ REV(\omega^i)\}.$$
Notice that such a strategy exists in $S_j$: player j can set $\pi^j_k$ to be arbitrarily high for any player $k \notin \{i, j\}$. Moreover, for any player $k \notin \{i, j\}$, choose $s_k$ such that $REV(\omega^k) = 0$. By construction, $w = j$ under both s and $s'$, $v_i(\alpha^w_i) < \pi^w_i$ and $\theta_i(\alpha^w_i) \ge \pi^w_i$. Following Case 2 above, $u_i(A, P \mid r = Heads) = u_i(A', P' \mid r = Heads)$ and $u_i(A, P \mid r = Tails) < u_i(A', P' \mid r = Tails)$ by Inequality (1). Thus, $u_i(A, P) < u_i(A', P')$.
When $v_i(a) > \theta_i(a)$, similarly, choose strategy $s_j$ such that:
$$\alpha^j_i = a, \quad \pi^j_i = v_i(a), \quad \text{and} \quad REV(\omega^j) > \max\{\pi^j_i,\ REV(\omega^i)\},$$
and choose strategy $s_k$ as above for any $k \notin \{i, j\}$. The analysis again follows from Case 2 above (in particular, Inequality (2)); thus, $u_i(A, P) < u_i(A', P')$.
Combining everything together, $s_i \prec_S^{\theta_i} s'_i$, and Lemma 1 holds. ☐
Lemma 2.
For any context $(n, m, \theta, B)$, constant ϵ, player i and strategy $s_i = (v_i, \omega^i)$, if $s_i \in C_i$, then $REV(\omega^i) \ge BB_i$.
Proof. 
By Lemma 1, we only need to consider strategies such that $v_i = \theta_i$. Arbitrarily fix a strategy $s_i = (\theta_i, \omega^i) \in U_i$ with $REV(\omega^i) < BB_i$. Consider a strategy $\hat{s}_i = (\theta_i, \hat{\omega}^i)$ such that $\hat{\omega}^i = (\hat{\alpha}^i, \hat{\pi}^i)$ satisfies the following conditions:
$$\hat{\omega}^i \in \operatorname{argmax}_{(A,P) \in \Omega:\ A_i = \emptyset \text{ and } P_j \le t_j(A_j)\ \forall j \neq i,\ \forall t \in B_i} REV(A, P)$$
and
$$\hat{\pi}^i_j > 0 \quad \text{whenever} \quad \hat{\alpha}^i_j \neq \emptyset.$$
Notice that $REV(\hat{\omega}^i) = BB_i > REV(\omega^i)$. We show that for all $t \in B_i$, $s_i \prec_{U(t)}^{\theta_i} \hat{s}_i$.
To do so, arbitrarily fix a valuation profile $t \in B_i$ and a strategy sub-profile $s_{-i}$ such that $s_j \in U_j(t_j, S)$ for each player j. Note that $t_i = \theta_i$ by the definition of $B_i$. Moreover, by Lemma 1, each $s_j$ is of the form $(t_j, \omega^j)$: that is, the valuation it announces is $t_j$. Let $(A, P)$ be the outcome of the strategy profile $s = (s_i, s_{-i})$ and $(\hat{A}, \hat{P})$ that of the strategy profile $\hat{s} = (\hat{s}_i, s_{-i})$. There are three possibilities for the winners under s and $\hat{s}$: (1) player i is the winner under both of them; (2) player i is the winner under neither of them; and (3) player i is the winner under $\hat{s}$, but not under s. Below, we consider them one by one.
Case 1:
i is the winner under both s and $\hat{s}$.
In this case, conditioned on $r = Heads$, $(A_i, P_i) = (\hat{A}_i, \hat{P}_i)$ and $u_i(A, P \mid r = Heads) = u_i(\hat{A}, \hat{P} \mid r = Heads)$, since under both s and $\hat{s}$, player i gets his/her favorite subset for free.
Conditioned on $r = Tails$, $A_i = \hat{A}_i = \emptyset$, $\hat{P}_i = \sum_{j:\ \hat{\alpha}^i_j \neq \emptyset \text{ and } t_j(\hat{\alpha}^i_j) < \hat{\pi}^i_j} \hat{\pi}^i_j - \hat{\delta}_i$, and $P_i = \sum_{j:\ \alpha^i_j \neq \emptyset \text{ and } t_j(\alpha^i_j) < \pi^i_j} \pi^i_j - \delta_i$, where $\hat{\delta}_i$ is player i’s reward under $\hat{s}$ and $\delta_i$ is that under s. By the definition of $\hat{\omega}^i$, the set $\{j : \hat{\alpha}^i_j \neq \emptyset \text{ and } t_j(\hat{\alpha}^i_j) < \hat{\pi}^i_j\}$ is empty, so $\hat{P}_i = -\hat{\delta}_i$. As $REV(\hat{\omega}^i) > REV(\omega^i)$, by definition we have $\hat{\delta}_i > \delta_i$, which implies $\hat{P}_i < -\delta_i \le P_i$. Accordingly, $u_i(\hat{A}, \hat{P} \mid r = Tails) > u_i(A, P \mid r = Tails)$.
In sum, we have $u_i(\hat{A}, \hat{P}) > u_i(A, P)$ in Case 1.
Case 2:
i is the winner under neither s nor $\hat{s}$.
In this case, the winner w is the same under both strategy profiles. Conditioned on $r = Heads$, $A_i = \hat{A}_i = \emptyset$ and $P_i = \hat{P}_i = 0$; thus $u_i(A, P \mid r = Heads) = u_i(\hat{A}, \hat{P} \mid r = Heads)$.
Conditioned on $r = Tails$, i gets the set $\alpha^w_i$ under s if and only if he/she gets it under $\hat{s}$, as he/she announces valuation $\theta_i$ under both strategy profiles. That is, $A_i = \hat{A}_i$. Moreover, the only difference in player i’s prices is the rewards he/she gets, and $P_i - \hat{P}_i = -\delta_i + \hat{\delta}_i > 0$. Accordingly, $u_i(\hat{A}, \hat{P} \mid r = Tails) > u_i(A, P \mid r = Tails)$.
In sum, we have $u_i(\hat{A}, \hat{P}) > u_i(A, P)$ in Case 2.
Case 3:
i is the winner under $\hat{s}$, but not under s.
In this case, letting w be the winner under s, we have $REV(\omega^i) \le REV(\omega^w) \le REV(\hat{\omega}^i)$, and at least one of the inequalities is strict. We compare player i’s utilities under s and $\hat{s}$, but conditioned on different outcomes of the random coin. More specifically, we use r to denote the outcome of the coin under s and $\hat{r}$ that under $\hat{s}$.
First, conditioned on $\hat{r} = Heads$, $\hat{A}_i = \operatorname{argmax}_{a \subseteq [m]} \theta_i(a)$ and $\hat{P}_i = 0$; thus:
$$u_i(\hat{A}, \hat{P} \mid \hat{r} = Heads) = \theta_i(\hat{A}_i).$$
While conditioned on $r = Tails$, we have either $A_i = \emptyset$ and $P_i = -\delta_i$, or $A_i = \alpha^w_i$ and $P_i = \pi^w_i - \frac{\epsilon}{n} - \delta_i$; thus:
$$u_i(A, P \mid r = Tails) \le \max\Big\{\delta_i,\ \theta_i(\alpha^w_i) - \pi^w_i + \tfrac{\epsilon}{n} + \delta_i\Big\} \le \max\Big\{\delta_i,\ \theta_i(\hat{A}_i) - 1 + \tfrac{\epsilon}{n} + \delta_i\Big\} \le \theta_i(\hat{A}_i) + \delta_i,$$
where the second inequality is because $\theta_i(\alpha^w_i) \le \theta_i(\hat{A}_i)$ and $\pi^w_i \ge 1$, and the third inequality is because both terms in $\max\{\cdot\}$ are less than or equal to $\theta_i(\hat{A}_i) + \delta_i$. Accordingly,
$$u_i(\hat{A}, \hat{P} \mid \hat{r} = Heads) - u_i(A, P \mid r = Tails) \ge -\delta_i. \tag{3}$$
Second, conditioned on $\hat{r} = Tails$, $\hat{A}_i = \emptyset$ and $\hat{P}_i = -\hat{\delta}_i$, similar to Case 1 above. Thus:
$$u_i(\hat{A}, \hat{P} \mid \hat{r} = Tails) = \hat{\delta}_i.$$
While conditioned on $r = Heads$, $A_i = \emptyset$ and $P_i = 0$; thus:
$$u_i(A, P \mid r = Heads) = 0.$$
Accordingly,
$$u_i(\hat{A}, \hat{P} \mid \hat{r} = Tails) - u_i(A, P \mid r = Heads) \ge \hat{\delta}_i. \tag{4}$$
Combining Inequalities (3) and (4) and given that r and $\hat{r}$ are both fair coins, we have:
$$u_i(\hat{A}, \hat{P}) - u_i(A, P) \ge \frac{\hat{\delta}_i - \delta_i}{2} > 0;$$
thus $u_i(\hat{A}, \hat{P}) > u_i(A, P)$ in Case 3, as well.
In sum, $s_i \prec_{U(t)}^{\theta_i} \hat{s}_i$ for all $t \in B_i$, which implies $s_i \notin C_i$. Thus, Lemma 2 holds. ☐
We now analyze the revenue of $M_{Normal}$.
Proof of Theorem 1.
Arbitrarily fix an auction context $C = (n, m, \theta, B)$ and a strategy profile $s \in C$. By Lemma 1, we can write $s_i = (\theta_i, \omega^i)$ for each player i. Let $(A, P)$ be the outcome of $M_{Normal}$ under s. By Lemma 2, $REV(\omega^i) \ge BB_i$ for each i, so:
$$R_w = \max_i REV(\omega^i) \ge \max_i BB_i = BB(C).$$
Note that $REV(A, P \mid r = Heads) = 0$, while:
$$\begin{aligned}
REV(A, P \mid r = Tails) &= \sum_i P_i \\
&= P_w + \sum_{i:\ \alpha^w_i \neq \emptyset,\ \theta_i(\alpha^w_i) \ge \pi^w_i} \Big(\pi^w_i - \tfrac{\epsilon}{n} - \delta_i\Big) + \sum_{i:\ \alpha^w_i \neq \emptyset,\ \theta_i(\alpha^w_i) < \pi^w_i} (-\delta_i) + \sum_{i:\ \alpha^w_i = \emptyset,\ i \neq w} (-\delta_i) \\
&= \sum_{i:\ \alpha^w_i \neq \emptyset,\ \theta_i(\alpha^w_i) < \pi^w_i} \pi^w_i - \delta_w + \sum_{i:\ \alpha^w_i \neq \emptyset,\ \theta_i(\alpha^w_i) \ge \pi^w_i} \Big(\pi^w_i - \tfrac{\epsilon}{n} - \delta_i\Big) + \sum_{i:\ \alpha^w_i \neq \emptyset,\ \theta_i(\alpha^w_i) < \pi^w_i} (-\delta_i) + \sum_{i:\ \alpha^w_i = \emptyset,\ i \neq w} (-\delta_i) \\
&\ge \sum_{i:\ \alpha^w_i \neq \emptyset} \pi^w_i - \sum_i \tfrac{\epsilon}{n} - \sum_i \delta_i = R_w - \epsilon - \sum_i \delta_i \\
&> R_w - \epsilon - \sum_i \tfrac{\epsilon}{n} = R_w - 2\epsilon \ge BB(C) - 2\epsilon.
\end{aligned}$$
Combining the two cases together, we have $REV(A, P) > \frac{BB(C)}{2} - \epsilon$, and Theorem 1 holds. ☐

5. Conservative Distinguishable Implementation

Our main mechanism, together with an auction context, specifies an extensive game with perfect information, chance moves and simultaneous moves [47] (see Footnote 6). For such a mechanism M, we denote the set of all pure strategy profiles by $S = S_1 \times \cdots \times S_n$, the history of a strategy profile s by $H(s)$ and, again, the outcome of s by $M(s)$. If σ is a mixed strategy profile, then $H(\sigma)$ and $M(\sigma)$ are the corresponding distributions.
Even for extensive games of complete information, the literature has several notions of rationality, with different epistemic foundations and predictions about the players’ strategies. Since our setting is of incomplete information without Bayesian beliefs, it is important to define a proper solution concept in order to analyze mechanisms in such settings. Iterated eliminations of dominated strategies and their epistemic characterizations have been the focus of many studies in epistemic game theory. In [30], the authors define distinguishable dominance, prove that it is order independent with respect to surviving histories and characterize it with extensive-form rationalizability [31,32,48]. In some sense, distinguishable dominance is the counterpart of strict dominance in extensive-form games. We combine this solution concept with the players’ possibilistic beliefs.
Definition 4.
Let $C = (n, m, \theta, B)$ be an auction context, M an extensive-form mechanism, i a player, $t_i$ a valuation of i and $T = T_i \times T_{-i}$ a set of pure strategy profiles. A strategy $s_i \in T_i$ is distinguishably dominated by another strategy $\sigma_i \in \Delta(T_i)$ with respect to $t_i$ and T, in symbols $s_i \prec_T^{t_i} \sigma_i$, if:
1. there exists $s_{-i} \in T_{-i}$ distinguishing $s_i$ and $\sigma_i$: that is, $H(s_i, s_{-i}) \neq H(\sigma_i, s_{-i})$; and
2. $u_i(t_i, M(s_i, s_{-i})) < u_i(t_i, M(\sigma_i, s_{-i}))$ for all $s_{-i} \in T_{-i}$ distinguishing $s_i$ and $\sigma_i$.
Intuitively, $s_i$ is distinguishably dominated by $\sigma_i$ if it leads to a smaller utility for i than $\sigma_i$ when played against any $s_{-i}$, except those $s_{-i}$ that produce the same history with $s_i$ and with $\sigma_i$: when such an $s_{-i}$ is used, not only does player i have the same utility under $s_i$ and $\sigma_i$, but nobody can distinguish whether i is using $s_i$ or $\sigma_i$ by observing the history of the game.
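For small finite games, Definition 4 can again be checked by brute force. The sketch below is illustrative and restricted, for simplicity, to a pure alternative strategy (the definition also allows mixed ones); it assumes functions H and M mapping a full strategy profile to a hashable history encoding and to an outcome, respectively, plus a utility function u_i(t_i, outcome). None of these encodings come from the paper.

from itertools import product

def distinguishably_dominated(i, s_i, alt_i, t_i, T, M, H, u_i):
    """Return True iff s_i is distinguishably dominated by the pure strategy alt_i
    with respect to valuation t_i and the strategy sets T = (T_1, ..., T_n)."""
    others = [T[j] for j in range(len(T)) if j != i]
    found_distinguishing, always_worse = False, True
    for s_minus_i in product(*others):
        rest = list(s_minus_i)
        prof_s = tuple(rest[:i] + [s_i] + rest[i:])
        prof_a = tuple(rest[:i] + [alt_i] + rest[i:])
        if H(prof_s) == H(prof_a):
            continue                        # s_minus_i does not distinguish s_i and alt_i
        found_distinguishing = True         # condition 1 of Definition 4
        if u_i(t_i, M(prof_s)) >= u_i(t_i, M(prof_a)):
            always_worse = False            # condition 2 fails for this sub-profile
    return found_distinguishing and always_worse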
For each player i, we denote by $DU_i(t_i, T)$ the set of strategies in $T_i$ that are not distinguishably dominated with respect to $t_i$ and T, and by $DU_i$ the set $DU_i(\theta_i, S)$. Having seen how to incorporate the iterated elimination of weakly-dominated strategies into our setting, the reader should find the following definition a natural analog.
Definition 5.
Let $C = (n, m, \theta, B)$ be an auction context, M a mechanism and i a player. The set of conservatively distinguishably-rational strategies of player i is:
$$CD_i \triangleq DU_i \setminus \{s_i : \exists \sigma_i \in \Delta(DU_i) \text{ s.t. } \forall t \in B_i,\ s_i \prec_{DU(t)}^{\theta_i} \sigma_i\},$$
where $DU(t) \triangleq \times_j DU_j(t_j, S)$ for any $t \in \Theta$. The set of conservatively distinguishably-rational strategy profiles is $CD = CD_1 \times \cdots \times CD_n$.
Mechanism M conservatively distinguishably implements a revenue benchmark f if, for any auction context C and any strategy profile $s \in CD$, $REV(M(s)) \ge f(C)$.
A player i may further refine $CD_i$, but doing so requires more than mutual belief of rationality. We thus do not consider any further refinements.

6. The Best-Belief Mechanism

Now, we construct and analyze our best-belief mechanism $M_{BB}$. Similar to the normal-form mechanism, it is parameterized by n, m and a constant $\epsilon \in (0, 1]$. In the description below, Steps 1–3 correspond to decision nodes, while Steps a–e are again “steps taken by the mechanism”.
The best-belief mechanism, $M_{BB}$:
Step a: Set $A_i := \emptyset$ and $P_i := 0$ for each player i.
Step 1: Each player i, publicly and simultaneously with the other players, announces:
  (1) a subset $\xi_i$ of the goods; and
  (2) an outcome $\omega^i = (\alpha^i, \pi^i)$ such that $\alpha^i_i = \emptyset$ and, for each player j, $\pi^i_j$ is zero whenever $\alpha^i_j = \emptyset$ and is a positive integer otherwise.
Step b: Set $R_i := REV(\omega^i)$ for each player i, and $w := \operatorname{argmax}_i R_i$, with ties broken lexicographically.
Step 2: Publicly flip a fair coin, and denote the result by r.
Step c: If $r = Heads$, then $A_w := \xi_w$, and halt.
Step 3: (Note that $r = Tails$ when this step is reached.) Each player i such that $\alpha^w_i \neq \emptyset$ publicly and simultaneously announces YES or NO.
Step d: For each player i announcing NO, $P_w := P_w + \pi^w_i$.
For each player i announcing YES, $A_i := \alpha^w_i$ and $P_i := \pi^w_i - \frac{\epsilon}{n}$.
For each player i, $P_i := P_i - \delta_i$, where $\delta_i = \frac{\epsilon}{n} \cdot \frac{R_i}{1 + R_i}$.
Step e: The final outcome is $(A, P)$.
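To complement the step-by-step description, here is a minimal Python sketch of how the mechanism computes its outcome, under illustrative encodings that are not from the paper: step1[i] = (xi_i, alpha_i, pi_i) is player i's Step-1 announcement, and answer[i] is a callback modelling i's Step-3 move, receiving the offered bundle and price and returning True for YES.

import random

def mechanism_bb(n, m, eps, step1, answer, rng=random):
    """Sketch of M_BB; step1[i] = (xi_i, alpha_i, pi_i), answer[i](bundle, price) -> bool."""
    A = {i: frozenset() for i in range(n)}                            # Step a
    P = {i: 0.0 for i in range(n)}
    R = {i: sum(pi.values()) for i, (_, _, pi) in enumerate(step1)}   # Step b
    w = min(range(n), key=lambda i: (-R[i], i))                       # ties broken lexicographically
    if rng.random() < 0.5:                                            # Steps 2, c: Heads
        A[w] = step1[w][0]                                            # winner gets the subset xi_w, free
        return A, P
    _, alpha_w, pi_w = step1[w]                                       # Steps 3, d: Tails
    for i in range(n):
        bundle = alpha_w.get(i, frozenset())
        if not bundle:
            continue
        if answer[i](bundle, pi_w[i]):                                # YES: i takes the offer
            A[i], P[i] = bundle, pi_w[i] - eps / n
        else:                                                         # NO: the price is charged to the winner
            P[w] += pi_w[i]
    for i in range(n):                                                # rewards
        P[i] -= (eps / n) * R[i] / (1 + R[i])
    return A, P

# Per Lemma 3 below, a rational player i's Step-3 callback reduces to
#   answer[i] = lambda bundle, price: theta_i(bundle) >= price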

6.1. Analysis of Our Mechanism

As before, given a context $(n, m, \theta, B)$ and an outcome ω, we use $u_i(\omega)$ instead of $u_i(\theta_i, \omega)$ for player i’s utility under ω. We have the following.
Theorem 2.
For any context $(n, m, \theta, B)$ and constant $\epsilon \in (0, 1]$, mechanism $M_{BB}$ conservatively distinguishably implements the revenue benchmark $\frac{BB}{2} - \epsilon$.
Unlike in the normal-form mechanism, here a player does not report his/her true valuation. Instead, the use of his/her valuation is divided into two parts: a subset of the goods, which will be his/her favorite subset as we will see in the proof; and a simple “yes or no” answer to the take-it-or-leave-it offer suggested by the winner. All of the other information about his/her true valuation is redundant and has been removed from the player’s report. This can be done because the mechanism is extensive and the players give their answers directly after seeing the offers; thus, the seller does not need to deduce their answers from their reported valuations. We again start by proving the following two lemmas. Some ideas are similar to those for Lemmas 1 and 2; thus, some details are omitted.
Lemma 3.
For any context $(n, m, \theta, B)$, constant ϵ, player i and strategy $s_i$, if $s_i \in DU_i$, then, according to $s_i$, in Step 3 of $M_{BB}$, i announces YES if and only if $\theta_i(\alpha^w_i) \ge \pi^w_i$ (see Footnote 7).
Proof. 
We only prove the “if” direction, as the “only if” direction is totally symmetric. Assume that, according to $s_i$, i announces NO at some reachable decision node d of i where $\theta_i(\alpha^w_i) \ge \pi^w_i$. We refer to such a node d as a deviating node. Consider the following strategy $s'_i$:
  • $s'_i$ announces the same $\xi_i$ and $\omega^i$ as $s_i$ in Step 1; and
  • according to $s'_i$, in Step 3, i announces YES if and only if $\theta_i(\alpha^w_i) \ge \pi^w_i$.
Below, we show that $s_i \prec_S^{\theta_i} s'_i$, where S is the set of all strategy profiles of $M_{BB}$.
For any deviating node d, since d is reachable by $s_i$, there exists a strategy sub-profile $s_{-i} \in S_{-i}$ such that the history $H(s_i, s_{-i})$ reaches d with positive probability. In fact, by the construction of the mechanism, the probability is exactly 1/2: when $r = Tails$. For any such $s_{-i}$, by the construction of $s'_i$, the history $H(s'_i, s_{-i})$ also reaches d with probability 1/2. By definition, i announces YES at d under $s'_i$ and NO under $s_i$; thus, $H(s_i, s_{-i} \mid r = Tails) \neq H(s'_i, s_{-i} \mid r = Tails)$ and $s_{-i}$ distinguishes $s_i$ and $s'_i$.
Indeed, for any strategy sub-profile $s_{-i}$, it distinguishes $s_i$ and $s'_i$ if and only if $H(s_i, s_{-i})$ reaches a deviating node d (with probability 1/2). Arbitrarily fixing such an $s_{-i}$ and the corresponding deviating node d, it suffices to show:
$$u_i(M_{BB}(s_i, s_{-i})) < u_i(M_{BB}(s'_i, s_{-i})). \tag{5}$$
Because $i \neq w$ under $(s_i, s_{-i})$ when $r = Tails$ (that is, when d is reached), $i \neq w$ under $(s_i, s_{-i})$ when $r = Heads$ as well, since w is the same in the two cases. Moreover, because $s'_i$ announces the same $\xi_i$ and $\omega^i$ as $s_i$ in Step 1, we have $H(s_i, s_{-i} \mid r = Heads) = H(s'_i, s_{-i} \mid r = Heads)$ and $u_i(M_{BB}(s_i, s_{-i}) \mid r = Heads) = u_i(M_{BB}(s'_i, s_{-i}) \mid r = Heads) = 0$.
Similar to Lemma 1, $u_i(M_{BB}(s_i, s_{-i}) \mid r = Tails) = \delta_i$, as i announces NO at d under $s_i$. Furthermore, $u_i(M_{BB}(s'_i, s_{-i}) \mid r = Tails) = \theta_i(\alpha^w_i) - P'_i = \theta_i(\alpha^w_i) - \pi^w_i + \frac{\epsilon}{n} + \delta_i \ge \frac{\epsilon}{n} + \delta_i > \delta_i$, as $\theta_i(\alpha^w_i) \ge \pi^w_i$ at d, and i announces YES at d under $s'_i$. Therefore, $u_i(M_{BB}(s'_i, s_{-i}) \mid r = Tails) > u_i(M_{BB}(s_i, s_{-i}) \mid r = Tails)$, which implies Equation (5). Accordingly, $s_i \prec_S^{\theta_i} s'_i$, $s_i \notin DU_i$, and Lemma 3 holds. ☐
Lemma 4.
For any context $(n, m, \theta, B)$, constant ϵ, player i and strategy $s_i$, if $s_i \in CD_i$, then, according to $s_i$, player i announces $\omega^i$ in Step 1 with $REV(\omega^i) \ge BB_i$.
Proof. 
Arbitrarily fix a strategy $s_i \in DU_i$ according to which, in Step 1, i announces $\xi_i$ and $\omega^i = (\alpha^i, \pi^i)$ with $REV(\omega^i) < BB_i$. By Lemma 3, according to $s_i$, in Step 3, i announces YES if and only if $\theta_i(\alpha^w_i) \ge \pi^w_i$. Consider the following strategy $\hat{s}_i$:
  • In Step 1, i announces $\hat{\xi}_i$ and $\hat{\omega}^i = (\hat{\alpha}^i, \hat{\pi}^i)$ such that:
    - $\theta_i(\hat{\xi}_i) = \max_{A \subseteq \{1, \ldots, m\}} \theta_i(A)$;
    - $REV(\hat{\omega}^i) = \max_{(A,P) \in \Omega:\ A_i = \emptyset \text{ and } P_j \le t_j(A_j)\ \forall j \neq i,\ \forall t \in B_i} REV(A, P)$; and
    - $\hat{\pi}^i_j > 0$ whenever $\hat{\alpha}^i_j \neq \emptyset$.
  • In Step 3, i announces YES if and only if $\theta_i(\alpha^w_i) \ge \pi^w_i$.
By definition, $REV(\hat{\omega}^i) = BB_i > REV(\omega^i)$, which implies that $\hat{s}_i$ and $s_i$ differ in Step 1: the root of the game tree. Thus, any strategy sub-profile $s_{-i}$ distinguishes them. We show that for all $t \in B_i$, $s_i \prec_{DU(t)}^{\theta_i} \hat{s}_i$ (see Footnote 8). To do so, arbitrarily fixing a valuation profile $t \in B_i$ and a strategy sub-profile $s_{-i} \in \times_{j \neq i} DU_j(t_j, S)$, it suffices to show:
$$u_i(M_{BB}(s_i, s_{-i})) < u_i(M_{BB}(\hat{s}_i, s_{-i})). \tag{6}$$
Let $\delta_i$ and $\hat{\delta}_i$ be the rewards of player i in Step d under $(s_i, s_{-i})$ and $(\hat{s}_i, s_{-i})$, respectively. Because $REV(\hat{\omega}^i) > REV(\omega^i)$, we have:
$$\delta_i < \hat{\delta}_i. \tag{7}$$
Similar to Lemma 2, we distinguish three cases.
Case 1.
i is the winner under both $(s_i, s_{-i})$ and $(\hat{s}_i, s_{-i})$.
In this case, on the one hand,
$$u_i(M_{BB}(s_i, s_{-i}) \mid r = Heads) = \theta_i(\xi_i) \le \theta_i(\hat{\xi}_i) = u_i(M_{BB}(\hat{s}_i, s_{-i}) \mid r = Heads),$$
where the inequality is by the definition of $\hat{\xi}_i$.
On the other hand, $u_i(M_{BB}(s_i, s_{-i}) \mid r = Tails) = -\big(\sum_{j:\ j \text{ announces NO in } (s_i, s_{-i})} \pi^i_j - \delta_i\big) \le \delta_i$ and $u_i(M_{BB}(\hat{s}_i, s_{-i}) \mid r = Tails) = -\big(\sum_{j:\ j \text{ announces NO in } (\hat{s}_i, s_{-i})} \hat{\pi}^i_j - \hat{\delta}_i\big)$. For any player j such that $\hat{\alpha}^i_j \neq \emptyset$, because $t \in B_i$, by the construction of $\hat{\omega}^i$, we have $\hat{\pi}^i_j \le t_j(\hat{\alpha}^i_j)$. Because $s_j \in DU_j(t_j, S)$, by Lemma 3, j announces YES in Step 3 under $(\hat{s}_i, s_{-i})$. Accordingly, $\sum_{j:\ j \text{ announces NO in } (\hat{s}_i, s_{-i})} \hat{\pi}^i_j = 0$ and:
$$u_i(M_{BB}(\hat{s}_i, s_{-i}) \mid r = Tails) = \hat{\delta}_i > \delta_i \ge u_i(M_{BB}(s_i, s_{-i}) \mid r = Tails),$$
where the strict inequality is by Equation (7). In sum, Equation (6) holds in Case 1.
Case 2.
i is the winner under neither $(s_i, s_{-i})$ nor $(\hat{s}_i, s_{-i})$.
Letting w be the winner under both strategy profiles, we have $u_i(M_{BB}(s_i, s_{-i}) \mid r = Heads) = u_i(M_{BB}(\hat{s}_i, s_{-i}) \mid r = Heads) = 0$. Moreover, conditioned on $r = Tails$, by the construction of $\hat{s}_i$, player i announces the same thing under $(s_i, s_{-i})$ and $(\hat{s}_i, s_{-i})$. Thus, the only difference between i’s allocation and price under the two strategy profiles is the rewards: one is $\delta_i$, and the other is $\hat{\delta}_i$. Accordingly, $u_i(M_{BB}(s_i, s_{-i}) \mid r = Tails) - u_i(M_{BB}(\hat{s}_i, s_{-i}) \mid r = Tails) = \delta_i - \hat{\delta}_i < 0$, where the inequality is by Equation (7). In sum, Equation (6) holds in Case 2.
Case 3.
i is the winner under $(\hat{s}_i, s_{-i})$, but not under $(s_i, s_{-i})$.
In this case, let w be the winner under $(s_i, s_{-i})$, and let r and $\hat{r}$ be the outcomes of the coins under $(s_i, s_{-i})$ and $(\hat{s}_i, s_{-i})$, respectively. Similar to Lemma 2, we have:
$$u_i(M_{BB}(s_i, s_{-i}) \mid r = Tails) \le \max\Big\{\delta_i,\ \theta_i(\alpha^w_i) - \pi^w_i + \tfrac{\epsilon}{n} + \delta_i\Big\} \le \theta_i(\alpha^w_i) + \delta_i,$$
$$u_i(M_{BB}(s_i, s_{-i}) \mid r = Heads) = 0,$$
$$u_i(M_{BB}(\hat{s}_i, s_{-i}) \mid \hat{r} = Heads) = \theta_i(\hat{\xi}_i),$$
and:
$$u_i(M_{BB}(\hat{s}_i, s_{-i}) \mid \hat{r} = Tails) = -\Big(\sum_{j:\ j \text{ announces NO in } (\hat{s}_i, s_{-i})} \hat{\pi}^i_j - \hat{\delta}_i\Big) = \hat{\delta}_i.$$
Accordingly,
$$u_i(M_{BB}(\hat{s}_i, s_{-i})) = \frac{\theta_i(\hat{\xi}_i) + \hat{\delta}_i}{2} > \frac{\theta_i(\alpha^w_i) + \delta_i}{2} \ge u_i(M_{BB}(s_i, s_{-i})),$$
and Equation (6) holds in Case 3.
Therefore, $s_i \notin CD_i$, and Lemma 4 holds. ☐
Proof of Theorem 2.
Given Lemmas 3 and 4, the proof of Theorem 2 is almost the same as that of Theorem 1, except that, rather than distinguishing players with $\theta_i(\alpha^w_i) \ge \pi^w_i$ or $\theta_i(\alpha^w_i) < \pi^w_i$, here we distinguish players announcing YES or NO in Step 3. The details have been omitted. ☐
Note that the revenue guarantee of the mechanism holds whether or not the players’ beliefs about each other are correct. If a player i has low values for the goods and believes the others’ values to be high, and if the others’ true values and beliefs are all low, then player i may end up being the winner and getting a negative utility. However, according to player i’s beliefs, his/her utility will always be positive, and it is individually rational for him/her to participate. This is not too dissimilar to the stock market, where not everybody makes money, but everybody believes he/she will make money when entering. Indeed, the final outcome implemented may not be an ex-post Nash equilibrium and is instead supported by the two-step elimination of dominated strategies.
Furthermore, note that the idea of asking players to report their beliefs about each other has been explored in the Nash implementation literature (see, e.g., [49,50]). However, our mechanism does not assume complete information or common beliefs. Moreover, our mechanism does not try to utilize the winner’s true valuation for generating revenue: indeed, the focus here is how to generate revenue by leveraging the players’ beliefs. By simply choosing at random between this mechanism and the VCG mechanism (or any other mechanism for unrestricted combinatorial auctions that may achieve better revenue in some contexts), one can achieve a good approximation to the better of the two.
Finally, it suffices for the players’ values to be numbers of a given precision, say with two decimal digits, so that there is a gap between any two different values. If the values are arbitrary real numbers, then the rewards in our mechanisms are set to zero, and our results hold under a weaker notion of dominance: that is, the desired strategies are still at least as good as any deviation, but may not be strictly better.

6.2. Privacy, Complexity and Collusion in Our Mechanism

Finally, we discuss the resiliency of our mechanism with respect to privacy, complexity and collusion concerns.

6.2.1. Privacy

Our main mechanism achieves our revenue benchmark by eliciting from the players much less information than they possess. In Step 1, a player does not reveal anything about his/her own valuation except a subset of goods, which is supposed to be his/her favorite subset. Nor does he/she reveal his/her full beliefs about the valuations of his/her opponents: he/she only reveals a maximum guaranteed-revenue outcome, according to his/her beliefs.
This is all of the information that is revealed if the coin flipped by the mechanism ends up heads. If it ends up tails, then a player i reveals at most a modest amount of information about his/her own true valuation in Step 3. Namely, only if he/she is offered a subset A of the goods for a price p does he/she reveal that his/her true value for that specific subset is at least p (if he/she answers YES) or less than p (if he/she answers NO). In particular, therefore, in our mechanism, a player who is not offered any goods does not reveal any information about his/her own valuation. This is very far from what may happen in many other auction mechanisms: that is, fully revealing your valuation and receiving no goods.
Because privacy is important to many strategic agents, we hope that trying to preserve it will become a standard goal in mechanism design. Achieving this goal will require putting a greater emphasis on extensive mechanisms, where the players and the mechanism may interact over time (see Footnote 9). The power of “interaction” for privacy preservation is very well documented in cryptography (see Footnote 10). This power extends to mechanism design as well: as we have seen in our case, even three sequential moves can save a considerable amount of privacy compared with the previous normal-form mechanism.

6.2.2. Computation and Communication Efficiency

Our mechanism is highly efficient in both computation and communication. Essentially, it only needs to sum up the prices in each reported outcome $\omega^i$ and figure out which reported outcome has the highest revenue. Moreover, each player only reports a subset of goods and an outcome, and perhaps announces YES or NO in Step 3. One might object, however, that our mechanism transfers all of the hard computation to the players themselves. This is indeed true, but our mechanism also gives them the incentives to approximate this hard computation.
As we recalled in the Introduction, approximation (1) may be necessary to compute a reasonable outcome when finding “the best one” is computationally hard, but (2) may also distort incentives. Our mechanism instead ensures that approximation is aligned with incentives. Indeed, our mechanism entrusts the players to propose outcomes, but ensures, as per Lemma 4, that each player wishes to become the winner. Thus, our mechanism makes it in a player’s own interest to use the best computationally-efficient approximation algorithm he/she knows in order to propose a high-revenue outcome. Of course, the best algorithm known by a player may not be the best in the literature, in terms of its approximation ratio to the optimal outcome. In this case, the mechanism’s revenue is at least half of the highest revenue the players are capable of computing. To the best of our knowledge, this is the first mechanism that gives the buyers incentives to perform computationally-efficient approximations. Incentive-compatible and computationally-efficient approximation on the seller’s side has also been studied, but again for valuations of restricted forms, such as single-minded players [1], single-value players [2], auctions of multiple copies of the same good [53], etc. By contrast, we do not impose any such restrictions.

6.2.3. Collusion

Collusion is traditionally prohibited (e.g., by using solution concepts that only consider individual deviations in game theory) and punished (e.g., by laws). However, it continues to exist. We thus wish to point out that our mechanism offers a reasonable form of protection against collusion. Namely, when at least some players are independent, denoting by I the set of independent players, it guarantees at least half of the revenue benchmark $BB_I \triangleq \max_{i \in I} BB_i$.
Thus, our mechanism is not responsible for generating any revenue if all players are collusive, but must generate revenue at least half of $BB_I$ otherwise. This guarantee holds in a strong collusion model: that is, even when collusive players are capable of making side payments and coordinating their actions via secret and enforceable agreements, and the independent players have no idea that collusion is afoot. The only constraint is that every coalition is rational; that is, its members act so as to maximize the sum of their individual utilities. In this case, an independent player i, reporting in Step 1 an outcome ω offering a player j a subset of the goods X for a price p, need not worry whether j is independent or collusive. If i becomes the winner and the coin toss of the mechanism is tails, then j will answer YES if and only if his/her individual true value for X is greater than or equal to p. Accordingly, i will report in Step 1 an outcome whose revenue is at least $BB_i$. If an independent player becomes the winner, then the mechanism will generate at least $BB_I / 2$ revenue. Else, some collusive player has become the winner; but then, such a player must have reported an outcome with revenue $R \ge BB_I$, and the mechanism will generate at least $R/2$ revenue.
Let us point out that the $BB_I$ benchmark is actually guaranteed under a weaker requirement of coalition rationality (see Footnote 11).

6.2.4. Social Welfare

Note that each player i has a “truthful” strategy: to report $\theta_i$ and the outcome $\hat{\omega}^i$ as defined in the proof of Lemma 4, whose revenue is exactly $BB_i$. Since the price suggested by $\hat{\omega}^i$ for each player $i' \neq i$ is no more than the true value of $i'$ for the suggested subset of goods for him/her, the players all say YES when i is the winner, and player i’s utility is non-negative. Under the truthful strategy profile, the social welfare of the final outcome is at least $\frac{BB}{2}$. When the players overbid and report outcomes whose revenue is higher than the corresponding $BB_i$’s, the social welfare may be smaller than the revenue.

6.3. Variants of Our Mechanism

Our mechanism sets aside a “budget” of $\epsilon > 0$ for rewarding the players and achieves the benchmark $BB/2 - \epsilon$. We note that our analysis also holds if the mechanism chooses its reward budget to be not an absolute value ϵ, but an ϵ fraction of the revenue it collects. In such a case, however, its guaranteed revenue will be $(1 - \epsilon) BB / 2$.
Furthermore, for simplicity, we have assumed that the seller/designer knows nothing about the players. However, it is easy to accommodate the case in which he/she too has some beliefs about the players’ valuations. For instance, in keeping with our overall approach, let $\omega^\star$ be the highest-revenue outcome among all of the outcomes $(A, P)$ for which he/she is sure that $\theta_i(A_i) \ge P_i$ for all i. Then, he/she can use $\omega^\star$ as a “reserve outcome” as follows. If, in Step 1, the revenue of the outcome reported by the winner is at least that of $\omega^\star$, then he/she keeps on running our mechanism; otherwise, roughly speaking, he/she makes himself/herself the “winner” and continues with $\omega^\star$ playing the role of the outcome reported by the winner.
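As a very rough illustration of this reserve-outcome variant (the description above is informal and leaves several details open, so the following is only one hypothetical way to instantiate it, reusing the mechanism_bb sketch from Section 6): reserve = (alpha_star, pi_star) encodes the seller's outcome $\omega^\star$, and when the best reported revenue falls below $REV(\omega^\star)$, the sketch simply has the seller make the take-it-or-leave-it offers of $\omega^\star$ itself.

import random

def mechanism_bb_with_reserve(n, m, eps, step1, answer, reserve, rng=random):
    """Hypothetical sketch of the reserve-outcome variant; reserve = (alpha_star, pi_star)."""
    alpha_star, pi_star = reserve
    best_reported = max(sum(pi.values()) for _, _, pi in step1)
    if best_reported >= sum(pi_star.values()):
        return mechanism_bb(n, m, eps, step1, answer, rng)            # run M_BB as usual
    A = {i: frozenset() for i in range(n)}                            # seller acts as the "winner"
    P = {i: 0.0 for i in range(n)}
    for i in range(n):
        bundle = alpha_star.get(i, frozenset())
        if bundle and answer[i](bundle, pi_star[i]):
            A[i], P[i] = bundle, pi_star[i]
        # a NO costs nothing here: there is no player-winner to fine
    return A, P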

Acknowledgments

We thank David Easley, Shafi Goldwasser, Avinatan Hassidim, Robert Kleinberg, Eric Maskin, Paul Milgrom, Rafael Pass, Ron Rivest, Paul Valiant and Avi Wigderson for comments and encouragement and several anonymous reviewers for helpful suggestions about the presentation of our results. The authors were partially supported by Office of Naval Research (ONR) Grant No. N00014-09-1-0597. The first author is partially supported by NSF CAREER Award No. 1553385.

Author Contributions

Jing Chen and Silvio Micali contributed equally to this work and jointly wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.
  • 1.Such as monotonicity, single-mindedness and additivity [1,2,3].
  • 2.In complete information settings (where the players have common knowledge about their valuations), assuming common knowledge of rationality, [7,8,9] have designed mechanisms that guarantee revenue arbitrarily close to the maximum social welfare.
  • 3.Indeed, the revelation principle [13] explicitly asks the players to directly reveal all of their private information.
  • 4.For instance, the outcome function of the VCG mechanism is NP-hard to compute even when each player only values a single subset of the goods for $1 and all other subsets for $0. Moreover, if one replaces this outcome function with an approximation, then VCG would no longer be dominant-strategy truthful.
  • 5.As usual, for a set T, $\Delta(T)$ is the set of probability distributions over T.
  • 6.Section 6.3 of [47] defines extensive games with perfect information and chance moves, as well as extensive games with perfect information and simultaneous moves. It is easy to combine the two to define extensive games with all three characteristics. Such a game can be described by a “game tree”. A decision node is an internal node, where the players take actions or chance moves. A terminal node is a leaf, where an outcome is specified. The history of a strategy profile is the probability distribution over paths from the root to the leaves determined by this profile. The outcome of a strategy profile is the probability distribution over outcomes at the leaves determined by this profile.
  • 7.That is, i will announce YES or NO as above at every decision node corresponding to Step 3, which is reachable (with positive probability) by $s_i$ together with some strategy sub-profile $s_{-i}$, where i is an acting player.
  • 8.Without loss of generality, we can assume $\hat{s}_i \in CD_i$. Otherwise, by the well-studied properties of distinguishable dominance [30], there exists $\sigma_i \in \Delta(CD_i)$ such that $\sigma_i$ distinguishably dominates $\hat{s}_i$ in the game $U(t)$ given $\theta_i$ for all $t \in B_i$, and we can prove that $\sigma_i$ distinguishably dominates $s_i$ as well.
  • 9.It is well known that every extensive mechanism can be transformed to an “equivalent” normal-form game, but this equivalence does not extend to privacy. Indeed, in an extensive mechanism M, a player i reveals information only if a decision node of i is reached, and in an execution of M, only some of these nodes are reached. Transforming M into the normal form instead asks i to reveal how he/she would like to act at any possible decision node involving him/her.
  • 10.Interaction is indeed at the base of zero-knowledge proofs [51,52], where a mistrusted prover can convince a skeptical verifier that a theorem statement is true without revealing any additional information.
  • 11.That is, when the members of a coalition act so as to maximize a different function of their individual utilities. All we need is a mild “monotonicity” condition, informally described as follows. Consider a coalition C and two outcomes $\omega$ and $\omega'$, such that (1) a member i of C is offered a set $A_i$ for a price $P_i$ in outcome $\omega$ and no goods for a price $P_i'$ in $\omega'$; and (2) every other member j of C is offered the same set of goods $A_j$ for the same price $P_j$ in both outcomes. Then, the only rationality condition that we require from C is that it prefers $\omega$ to $\omega'$ if and only if $\theta_i(A_i) - P_i \geq -P_i'$. Under this model, in Step 3 of our mechanism, each coalition can delegate the YES or NO decisions to its members as if they were independent. Thus, again, an independent player need not worry about whether another player is independent or collusive.

References

  1. Lehmann, D.; O'Callaghan, L.; Shoham, Y. Truth revelation in approximately efficient combinatorial auctions. J. ACM 2002, 49, 577–602.
  2. Babaioff, M.; Lavi, R.; Pavlov, E. Single-value combinatorial auctions and implementation in undominated strategies. In Proceedings of the 17th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2006), Miami, FL, USA, 22–26 January 2006; pp. 1054–1063.
  3. Hart, S.; Nisan, N. Approximate revenue maximization with multiple items. In Proceedings of the 13th ACM Conference on Electronic Commerce (EC), Valencia, Spain, 4–8 June 2012; p. 656.
  4. Vickrey, W. Counterspeculation, auctions, and competitive sealed tenders. J. Financ. 1961, 16, 8–37.
  5. Clarke, E. Multipart pricing of public goods. Public Choice 1971, 11, 17–33.
  6. Groves, T. Incentives in teams. Econometrica 1973, 41, 617–631.
  7. Abreu, D.; Matsushima, H. Virtual implementation in iteratively undominated actions: Complete information. Econometrica 1992, 60, 993–1008.
  8. Glazer, J.; Perry, M. Virtual implementation in backwards induction. Games Econ. Behav. 1996, 15, 27–32.
  9. Chen, J.; Hassidim, A.; Micali, S. Robust perfect revenue from perfectly informed players. In Proceedings of the Innovations in Theoretical Computer Science (ITCS), Beijing, China, 5–7 January 2010; pp. 94–105.
  10. Chen, J.; Micali, S. Mechanism design with possibilistic beliefs. J. Econ. Theory 2015, 156, 77–102.
  11. Harsanyi, J. Games with incomplete information played by “Bayesian” players, I–III. Manag. Sci. 1967–1968, 14, 159–182, 320–334, 486–502.
  12. Aumann, R. Agreeing to disagree. Ann. Stat. 1976, 4, 1236–1239.
  13. Myerson, R. Optimal auction design. Math. Oper. Res. 1981, 6, 58–73.
  14. Ausubel, L.; Milgrom, P. The lovely but lonely Vickrey auction. In Combinatorial Auctions; Cramton, P., Shoham, Y., Steinberg, R., Eds.; MIT Press: Cambridge, MA, USA, 2006; pp. 17–40.
  15. Aumann, R. Backward induction and common knowledge of rationality. Games Econ. Behav. 1995, 8, 6–19.
  16. Milgrom, P. Putting Auction Theory to Work; Cambridge University Press: Cambridge, UK, 2004.
  17. Cramton, P.; Shoham, Y.; Steinberg, R. (Eds.) Combinatorial Auctions; MIT Press: Cambridge, MA, USA, 2006.
  18. Klemperer, P. Auctions: Theory and Practice; Princeton University Press: Princeton, NJ, USA, 2004.
  19. Milgrom, P.; Ausubel, L.; Levin, J.; Segal, I. Incentive Auction Rules Option and Discussion; Technical Report, FCC-12-118A2; Federal Communications Commission: Washington, DC, USA, 2012.
  20. Milgrom, P.; Segal, I. Deferred-acceptance auctions and radio spectrum reallocation. In Proceedings of the 15th ACM Conference on Economics and Computation (EC), Palo Alto, CA, USA, 8–12 June 2014; pp. 185–186.
  21. Kash, I.A.; Murty, R.; Parkes, D.C. Enabling spectrum sharing in secondary market auctions. IEEE Trans. Mob. Comput. 2014, 13, 556–568.
  22. Cramton, P.; Lopez, H.; Malec, D.; Sujarittanonta, P. Design of the Reverse Auction in the Broadcast Incentive Auction; Working Paper; University of Maryland: College Park, MD, USA, 2015.
  23. Nguyen, T.D.; Sandholm, T. Multi-option descending clock auction. In Proceedings of the 15th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), Singapore, 9–13 May 2016; pp. 1461–1462.
  24. Milgrom, P.; Weber, R. A theory of auctions and competitive bidding, II. In The Economic Theory of Auctions; Klemperer, P., Ed.; Edward Elgar: Cheltenham, UK, 2000; Volume I, pp. 179–194.
  25. Parkes, D. Iterative combinatorial auctions. In Combinatorial Auctions; Cramton, P., Shoham, Y., Steinberg, R., Eds.; MIT Press: Cambridge, MA, USA, 2006; Chapter 2.
  26. Likhodedov, A.; Sandholm, T. Approximating revenue-maximizing combinatorial auctions. In Proceedings of the 20th National Conference on Artificial Intelligence (AAAI), Pittsburgh, PA, USA, 9–13 July 2005; pp. 267–274.
  27. Balcan, M.; Blum, A.; Mansour, Y. Single Price Mechanisms for Revenue Maximization in Unlimited Supply Combinatorial Auctions; Technical Report, CMU-CS-07-111; Carnegie Mellon University: Pittsburgh, PA, USA, 2007.
  28. Micali, S.; Valiant, P. Resilient Mechanisms for Unrestricted Combinatorial Auctions; Technical Report, MIT-CSAIL-TR-2008-067; Massachusetts Institute of Technology: Cambridge, MA, USA, 2008.
  29. Jackson, M. Implementation in undominated strategies: A look at bounded mechanisms. Rev. Econ. Stud. 1992, 59, 757–775.
  30. Chen, J.; Micali, S. The order independence of iterated dominance in extensive games. Theor. Econ. 2013, 8, 125–163.
  31. Pearce, D.G. Rationalizable strategic behavior and the problem of perfection. Econometrica 1984, 52, 1029–1050.
  32. Battigalli, P. On rationalizability in extensive games. J. Econ. Theory 1997, 74, 40–61.
  33. Chen, J.; Micali, S.; Pass, R. Tight revenue bounds with possibilistic beliefs and level-k rationality. Econometrica 2015, 83, 1619–1639.
  34. Chen, J.; Micali, S.; Valiant, P. Robustly leveraging collusion in combinatorial auctions. In Proceedings of the Innovations in Theoretical Computer Science (ITCS), Beijing, China, 5–7 January 2010; pp. 81–93.
  35. Izmalkov, S.; Lepinski, M.; Micali, S. Perfect implementation. Games Econ. Behav. 2011, 71, 121–140.
  36. Parkes, D.; Rabin, M.; Shieber, S.; Thorpe, C. Practical secrecy-preserving, verifiably correct and trustworthy auctions. Electron. Commer. Res. Appl. 2008, 7, 294–312.
  37. Dwork, C.; Roth, A. The Algorithmic Foundations of Differential Privacy; Foundations and Trends in Theoretical Computer Science; NOW Publishers: Breda, The Netherlands, 2014.
  38. Green, J.; Laffont, J. On coalition incentive compatibility. Rev. Econ. Stud. 1979, 46, 243–254.
  39. Schummer, J. Manipulation through bribes. J. Econ. Theory 2000, 91, 180–198.
  40. Chen, J.; Micali, S. Collusive dominant-strategy truthfulness. J. Econ. Theory 2012, 147, 1300–1312.
  41. Moulin, H.; Shenker, S. Strategyproof sharing of submodular costs: Budget balance versus efficiency. Econ. Theory 2001, 18, 511–533.
  42. Jain, K.; Vazirani, V. Applications of approximation algorithms to cooperative games. In Proceedings of the 33rd ACM Symposium on Theory of Computing (STOC), Heraklion, Greece, 6–8 July 2001; pp. 364–372.
  43. Feigenbaum, J.; Papadimitriou, C.; Shenker, S. Sharing the cost of multicast transmissions. J. Comput. Syst. Sci. 2001, 63, 21–41.
  44. Laffont, J.; Martimort, D. Mechanism design with collusion and correlation. Econometrica 2000, 68, 309–342.
  45. Goldberg, A.; Hartline, J. Collusion-resistant mechanisms for single-parameter agents. In Proceedings of the Symposium on Discrete Algorithms (SODA), Vancouver, BC, Canada, 23–25 January 2005; pp. 620–629.
  46. Che, Y.; Kim, J. Robustly collusion-proof implementation. Econometrica 2006, 74, 1063–1107.
  47. Osborne, M.; Rubinstein, A. A Course in Game Theory; MIT Press: Cambridge, MA, USA, 1994.
  48. Shimoji, M.; Watson, J. Conditional dominance, rationalizability, and game forms. J. Econ. Theory 1998, 83, 161–195.
  49. Maskin, E. Nash equilibrium and welfare optimality. Rev. Econ. Stud. 1999, 66, 23–38.
  50. Jackson, M.; Palfrey, T.; Srivastava, S. Undominated Nash implementation in bounded mechanisms. Games Econ. Behav. 1994, 6, 474–501.
  51. Goldwasser, S.; Micali, S.; Rackoff, C. The knowledge complexity of interactive proof-systems. SIAM J. Comput. 1989, 18, 186–208.
  52. Goldreich, O.; Micali, S.; Wigderson, A. Proofs that yield nothing but their validity or all languages in NP have zero-knowledge proofs. J. ACM 1991, 38, 691–729.
  53. Dobzinski, S.; Dughmi, S. On the power of randomization in algorithmic mechanism design. In Proceedings of the 50th Symposium on Foundations of Computer Science (FOCS), Atlanta, GA, USA, 25–27 October 2009; pp. 505–514.
