Article

Inverse Stackelberg Solutions for Games with Many Followers

by
Yurii Averboukh
1,2
1
Department of Control Systems, Krasovskii Institute of Mathematics and Mechanics, 16, S. Kovalevskoi str., Yekaterinburg 620990, Russia
2
Department of Applied Mathematics and Mechanics, Ural Federal University, 19, Mira str., Yekaterinburg 620002, Russia
Mathematics 2018, 6(9), 151; https://doi.org/10.3390/math6090151
Submission received: 30 July 2018 / Revised: 23 August 2018 / Accepted: 27 August 2018 / Published: 30 August 2018
(This article belongs to the Special Issue Mathematical Game Theory)

Abstract

The paper is devoted to inverse Stackelberg games with many players. We consider both static and differential games. The main assumption of the paper is the compactness of the strategy sets. We obtain a characterization of inverse Stackelberg solutions and, under additional concavity conditions, establish an existence theorem.

1. Introduction

The paper is concerned with the inverse Stackelberg game, also known as the incentive problem. In ordinary Stackelberg games, one player (called a leader) announces his strategy, while the other players (called followers) maximize their payoffs using this information. In inverse Stackelberg games, the leader announces an incentive strategy, i.e., a reaction to the followers' strategies ([1,2,3,4,5] and references therein). In the dynamic case, the reaction should be nonanticipative.
Inverse Stackelberg games appear in several models (see, for example, [6,7,8]). In games with many followers, it is often assumed that the followers play a Nash game ([6,9,10]). If the strategy sets are normed spaces, then the incentive strategy can be constructed in affine form (see Ref. [11] for static games and Ref. [12] for differential games).
In this paper, we consider a case where the control spaces of the players are compact metric spaces. We consider both static and dynamic cases. Moreover, for the dynamic case, we apply punishment strategies. The concept of punishment strategies was first used for the analysis of Stackelberg games in the class of feedback strategies in Ref. [13]. The inverse Stackelberg solutions of two-person differential games were studied via punishment strategies in Ref. [14], where the set of inverse Stackelberg solutions was described and an existence result was derived. In particular, the set of inverse Stackelberg payoffs is equal to the set of feedback Stackelberg payoffs. Note that the incentive strategies considered in Ref. [14] use full memory, i.e., the leader plays with the nonanticipating strategies proposed by Elliott and Kalton [15] and Varaiya and Lin [16] for zero-sum differential games. The usage of strategies that depend only on the followers' current controls decreases the payoffs.
In this paper, punishment strategies are applied to static inverse Stackelberg games and to differential inverse Stackelberg games with many followers. We obtain a characterization of the inverse Stackelberg solution and, under additional concavity conditions, establish an existence theorem.
The paper is organized as follows. Section 2 is concerned with the static inverse Stackelberg game with n followers. The differential game case is considered in Section 3. In Section 4, we prove an existence theorem for the inverse Stackelberg solution of a differential game.

2. Static Games

We denote the leader by 0 and designate the followers by $1,\dots,n$. Player $i$ has a set of strategies $P_i$ and a payoff function $J_i : P_0 \times P_1 \times \dots \times P_n \to \mathbb{R}$. We assume that the sets $P_i$ are compact and the functions $J_i$ are continuous.
The incentive strategy of the leader is a mapping
$$\alpha : \times_{i=1}^n P_i \to P_0.$$
To define the inverse Stackelberg game, we specify the solution concept used by the followers. We suppose that the followers play a Nash game. Let
$$P = \times_{i=1}^n P_i.$$
An element $u = (u_1,\dots,u_n)$ of $P$ is a profile of the followers' strategies. If $u_i' \in P_i$, then $(u_i',u_{-i})$ denotes the profile of strategies $(u_1,\dots,u_{i-1},u_i',u_{i+1},\dots,u_n)$. For simplicity, we write $J_i(u_0,u)$ for $J_i(u_0,u_1,\dots,u_n)$ and put $J_i(u_0,u_i',u_{-i}) \triangleq J_i(u_0,(u_i',u_{-i}))$. If $\alpha$ is an incentive strategy of the leader and $u$ is a profile of strategies of the followers, we denote $J_i[\alpha,u] \triangleq J_i(\alpha[u],u)$ and $J_i[\alpha,u_i',u_{-i}] \triangleq J_i[\alpha,(u_i',u_{-i})]$. Further, let $E(\alpha)$ be the set of the followers' Nash equilibria for the case where the leader uses the incentive strategy $\alpha$:
$$E(\alpha) \triangleq \{u \in P : J_i[\alpha,u] \ge J_i[\alpha,u_i',u_{-i}] \text{ for any } i=\overline{1,n} \text{ and any } u_i' \in P_i\}.$$
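For finite strategy sets, membership in $E(\alpha)$ can be checked by direct enumeration of unilateral deviations. The following Python sketch illustrates this; the names (in_E, P, J) are illustrative and not part of the paper's formalism.

```python
# Hedged sketch: checking u in E(alpha) for finite strategy sets by
# enumerating unilateral deviations of each follower.
from typing import Callable, Sequence

def in_E(alpha: Callable, u: tuple, P: Sequence[Sequence], J: Sequence[Callable]) -> bool:
    """Return True if the profile u is a followers' Nash equilibrium when
    the leader responds with the incentive strategy alpha.

    alpha: maps a followers' profile to a leader strategy u0 = alpha(u).
    P[i]:  finite strategy set of follower i (0-indexed here).
    J[i]:  payoff of follower i, called as J[i](u0, u).
    """
    n = len(u)
    for i in range(n):
        base = J[i](alpha(u), u)
        for ui_dev in P[i]:
            dev = u[:i] + (ui_dev,) + u[i + 1:]
            # The leader reacts to the deviated profile as well.
            if J[i](alpha(dev), dev) > base:
                return False
    return True
```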
Definition 1.
The pair $(\alpha^*, u^*)$ is an inverse Stackelberg solution in the game with one leader and $n$ followers playing a Nash equilibrium if
(1) $u^* \in E(\alpha^*)$;
(2) $J_0[\alpha^*, u^*] = \max_{\alpha} \max_{u \in E(\alpha)} J_0[\alpha, u]$.
The structure of the inverse Stackelberg solution is given in the following statements. Denote
$$B \triangleq \Big\{(u_0,u) : \text{for any } i=\overline{1,n},\ J_i(u_0,u) \ge \max_{u_i' \in P_i}\, \min_{u_0' \in P_0} J_i(u_0',u_i',u_{-i})\Big\}.$$
Lemma 1.
The following properties hold true:
(1) If $u \in E(\alpha)$, then $(\alpha[u],u) \in B$;
(2) If the strategy of the leader $u_0$ and the profile of the followers' strategies $u$ satisfy $(u_0,u) \in B$, then an incentive strategy of the leader $\alpha$ exists such that $\alpha[u] = u_0$ and $u \in E(\alpha)$.
Proof. 
To prove the first statement of the lemma, pick $\hat{u}_i$ maximizing
$$\max_{u_i' \in P_i}\, \min_{u_0' \in P_0} J_i(u_0',u_i',u_{-i}).$$
Using the definition of the set $E(\alpha)$, for $u_0 \triangleq \alpha[u]$ and each $i = 1,\dots,n$, we have
$$J_i(u_0,u) = J_i[\alpha,u] \ge J_i[\alpha,\hat{u}_i,u_{-i}] = J_i(\alpha[\hat{u}_i,u_{-i}],\hat{u}_i,u_{-i}) \ge \min_{u_0' \in P_0} J_i(u_0',\hat{u}_i,u_{-i}) = \max_{u_i' \in P_i}\, \min_{u_0' \in P_0} J_i(u_0',u_i',u_{-i}).$$
Thus, $(\alpha[u],u) \in B$.
Now, let us prove the second statement of the lemma.
For $u_i' \in P_i$, let $\beta_i[u_i'] \in \mathrm{Argmin}\,\{J_i(u_0',u_i',u_{-i}) : u_0' \in P_0\}$. Further, pick an arbitrary $\bar{u}_0 \in P_0$.
Put
$$\alpha[u_1',\dots,u_n'] \triangleq \begin{cases} u_0, & u_i' = u_i,\ i=\overline{1,n},\\ \beta_i[u_i'], & u_i' \ne u_i,\ u_j' = u_j,\ j \ne i,\\ \bar{u}_0, & \text{otherwise}. \end{cases}$$
First, notice that $\alpha[u] = u_0$. Further, if $u' \in P$ is such that $u_i' \ne u_i$ for some $i$ and, for all other $j$, $u_j' = u_j$, then
$$J_i(\alpha[u'],u') = J_i(\beta_i[u_i'],u_i',u_{-i}) \le \max_{u_i'' \in P_i}\, \min_{u_0' \in P_0} J_i(u_0',u_i'',u_{-i}) \le J_i(u_0,u) = J_i[\alpha,u].$$
This proves the second statement of the lemma. ☐
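The punishment construction in this proof admits a direct computational reading for finite games. Below is a hedged Python sketch of the incentive strategy $\alpha$ from the second part of the proof; make_incentive and its arguments are hypothetical names.

```python
# Hedged sketch of the punishment construction from the proof of Lemma 1,
# for finite strategy sets: play u0_star on the equilibrium path, punish a
# unilateral deviator i via argmin over P0, and answer any other profile
# with an arbitrary fixed u0_bar (here: the first element of P0).
def make_incentive(u0_star, u_star, P0, J):
    """u0_star: leader strategy on the path; u_star: followers' profile;
    P0: finite leader strategy set; J[i](u0, u): payoff of follower i."""
    u0_bar = P0[0]

    def alpha(u):
        if u == u_star:
            return u0_star
        deviators = [i for i in range(len(u)) if u[i] != u_star[i]]
        if len(deviators) == 1:
            i = deviators[0]
            # beta_i[u_i]: the leader's most punishing reply to follower i.
            return min(P0, key=lambda u0: J[i](u0, u))
        return u0_bar
    return alpha
```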
Theorem 1.
(1) If $(\alpha^*,u^*)$ is an inverse Stackelberg solution, then the profile of strategies $(u_0^*,u^*)$ with $u_0^* = \alpha^*[u^*]$ maximizes the value $J_0(u_0,u)$ over the set $B$. (2) If the profile of strategies $(u_0^*,u^*)$ maximizes the value $J_0(u_0,u)$ over the set $B$, then an incentive strategy $\alpha^*$ exists such that $\alpha^*[u^*] = u_0^*$, and $(\alpha^*,u^*)$ is an inverse Stackelberg solution. (3) If the function $u_i \mapsto J_i(u_0,u_i,u_{-i})$ is quasi-concave for all $u_0$, $u_{-i}$, and $i = 1,\dots,n$, then at least one inverse Stackelberg solution exists.
Proof. 
The proof of the first two statements directly follows from Lemma 1.
Let us prove the third statement of the theorem. Put
$$K_i(u_1,\dots,u_n) \triangleq \min_{u_0 \in P_0} J_i(u_0,u_1,\dots,u_n).$$
The functions $u_i \mapsto K_i(u_i,u_{-i})$ are quasi-concave for all $u_{-i}$, as minima of quasi-concave functions. Therefore, a profile of followers' strategies $u^*$ exists such that, for all $i$ and all $u_i \in P_i$, $K_i(u^*) \ge K_i(u_i,u_{-i}^*)$. Hence, any pair $(u_0,u^*)$ belongs to $B$. Consequently, $B$ is nonempty. Moreover, the set $B$ is compact. This proves the existence of a pair $(u_0^*,u^*)$ maximizing $J_0$ over the set $B$. The existence of an inverse Stackelberg solution directly follows from the second statement of the theorem. ☐
Example 1.
Consider a game with two followers. Let the set of strategies of each player be equal to $\{0,1\}$. In addition, let the followers' rewards for $u_0 = 0$ be given by the bimatrix
$$\begin{pmatrix} (a,b) & (0,0) \\ (0,0) & (b,a) \end{pmatrix},$$
where $a > b > 0$, the rows correspond to the strategies of follower 1, and the columns to those of follower 2. Further, let the followers' rewards for $u_0 = 1$ be given by
$$\begin{pmatrix} (0,0) & (a,b) \\ (b,a) & (0,0) \end{pmatrix}.$$
Finally, we assume that the leader's reward is equal to 1 when the followers' outcome is $(0,0)$ and 0 otherwise. One can consider this game as a variant of the battle of the sexes with a leader who can swap the roles of the players and who wins when there is no arrangement between the followers.
It is easy to check that the set $B$ is equal to the whole strategy space $\{0,1\}^3$. By maximizing the leader's payoff over this set, we get that the outcome of the players is $(1,0,0)$.
It is instructive to compare this result with the case where the leader declares his strategy first. Clearly, in this case, whatever the leader's strategy is, the leader's outcome is 0, whereas the followers' Nash equilibrium payoffs are $(a,b)$ and $(b,a)$.
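The computations of this example can be verified numerically. The following Python sketch (with the illustrative choice $a = 2$, $b = 1$) enumerates $B$ from its max-min characterization and maximizes the leader's payoff over it; the outcome $(1,0,0)$ reported above is among the maximizers.

```python
# Hedged numerical check of Example 1 with the illustrative values a=2, b=1.
from itertools import product

a, b = 2.0, 1.0
P0 = P1 = P2 = (0, 1)

def follower_payoffs(u0, u1, u2):
    # Bimatrices from Example 1: the winning cells swap with u0.
    if u0 == 0:
        table = {(0, 0): (a, b), (1, 1): (b, a)}
    else:
        table = {(0, 1): (a, b), (1, 0): (b, a)}
    return table.get((u1, u2), (0.0, 0.0))

def J0(u0, u1, u2):
    # The leader wins iff the followers produce the outcome (0, 0).
    return 1.0 if (u1, u2) == (0, 0) else 0.0

def punishment_level(i, u_other):
    # max_{u_i} min_{u_0} J_i(u_0, u_i, u_{-i}) from the definition of B.
    if i == 1:
        return max(min(follower_payoffs(u0, u1, u_other)[0] for u0 in P0) for u1 in P1)
    return max(min(follower_payoffs(u0, u_other, u2)[1] for u0 in P0) for u2 in P2)

B = [(u0, u1, u2) for u0, u1, u2 in product(P0, P1, P2)
     if follower_payoffs(u0, u1, u2)[0] >= punishment_level(1, u2)
     and follower_payoffs(u0, u1, u2)[1] >= punishment_level(2, u1)]

print(len(B))  # 8: B is the whole strategy space {0,1}^3
best = max(J0(*v) for v in B)
print([v for v in B if J0(*v) == best])  # contains (1, 0, 0)
```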

3. Inverse Stackelberg Solution for Differential Games

As above, we assume that player 0 is the leader, while players $1,\dots,n$ are followers. The dynamics of the system are given by the equation
$$\dot{x} = f(t,x,u_0,u_1,\dots,u_n),\quad t \in [0,T],\ x \in \mathbb{R}^d,\ x(0) = x_0,\ u_i \in P_i. \tag{1}$$
Player $i$ wishes to maximize the payoff
$$\sigma_i(x(T)) + \int_0^T g_i(t,x,u_0,u_1,\dots,u_n)\,dt.$$
The set
$$U_i = \{u_i : [0,T] \to P_i\ \text{measurable}\}$$
is the set of open-loop strategies of player $i$. As above, the $n$-tuple of open-loop strategies of the followers $u = (u_1,\dots,u_n)$ is called the profile of strategies. To simplify notation, denote
$$f(t,x,u_0,u) \triangleq f(t,x,u_0,u_1,\dots,u_n),\quad g_i(t,x,u_0,u) \triangleq g_i(t,x,u_0,u_1,\dots,u_n).$$
Further, put
$$U = \times_{i=1}^n U_i.$$
If $u_0 \in U_0$, $u = (u_1,\dots,u_n) \in U$, and $(t_*,x_*) \in [0,T] \times \mathbb{R}^d$, then denote by $x(\cdot,t_*,x_*,u_0,u)$ the solution of the initial value problem
$$\dot{x}(t) = f(t,x(t),u_0(t),u_1(t),\dots,u_n(t)),\quad x(t_*) = x_*.$$
Put
$$z_i(t,t_*,x_*,u_0,u) = \int_{t_*}^{t} g_i(s,x(s,t_*,x_*,u_0,u),u_0(s),u_1(s),\dots,u_n(s))\,ds.$$
If $t_* = 0$, $x_* = x_0$, we omit the arguments $t_*$ and $x_*$. Let $z(\cdot,t_*,x_*,u_0,u) = (z_0(\cdot,t_*,x_*,u_0,u),z_1(\cdot,t_*,x_*,u_0,u),\dots,z_n(\cdot,t_*,x_*,u_0,u))$. We assume that the set of motions is closed, i.e., for all $(t_*,x_*) \in [0,T] \times \mathbb{R}^d$,
$$\mathrm{cl}\,\{(x(\cdot,t_*,x_*,u_0,u),z(\cdot,t_*,x_*,u_0,u)) : u_0 \in U_0,\ u \in U\} = \{(x(\cdot,t_*,x_*,u_0,u),z(\cdot,t_*,x_*,u_0,u)) : u_0 \in U_0,\ u \in U\}.$$
Here, $\mathrm{cl}$ stands for the closure in the space of continuous functions from $[0,T]$ to $\mathbb{R}^d \times \mathbb{R}^{n+1}$.
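For intuition, trajectories $x(\cdot)$ and payoffs $J_i$ for given open-loop strategies can be approximated by an explicit Euler scheme. The sketch below uses placeholder dynamics and rewards ($f$, $g_1$, $\sigma_1$ are illustrative scalar examples with one follower, not data from the paper).

```python
# Hedged sketch: Euler approximation of x(.) and of the payoff J_1 for
# given open-loop strategies, under illustrative placeholder data.
import numpy as np

T, N = 1.0, 1000
ts = np.linspace(0.0, T, N + 1)
dt = T / N

def f(t, x, u0, u1):   # illustrative dynamics, d = 1, one follower
    return u1 - u0 * x

def g1(t, x, u0, u1):  # follower 1's running reward (illustrative)
    return -(x ** 2) - 0.1 * u1 ** 2

def sigma1(x):         # follower 1's terminal reward (illustrative)
    return -abs(x)

def payoff(u0_path, u1_path, x0=1.0):
    """Euler integration of the dynamics and of z_1; returns J_1."""
    x, z = x0, 0.0
    for k in range(N):
        t = ts[k]
        z += g1(t, x, u0_path[k], u1_path[k]) * dt
        x += f(t, x, u0_path[k], u1_path[k]) * dt
    return sigma1(x) + z

# Example: constant measurable controls u0 = 1, u1 = 0.5.
print(payoff(np.full(N, 1.0), np.full(N, 0.5)))
```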
We assume that the followers use open-loop strategies $u_i \in U_i$, while the leader's strategy is a nonanticipative strategy $\alpha : U \to U_0$. The nonanticipation property means that $\alpha[u] = \alpha[u']$ on $[0,\tau]$ for any $u$ and $u'$ coinciding on $[0,\tau]$.
For $u_0 \in U_0$, $u \in U$, and a position $(t_*,x_*)$, define
$$J_i(t_*,x_*,u_0,u) \triangleq \sigma_i(x(T,t_*,x_*,u_0,u)) + z_i(T,t_*,x_*,u_0,u).$$
Further, put
$$J_i[t_*,x_*,\alpha,u] \triangleq J_i(t_*,x_*,\alpha[u],u).$$
We omit the arguments $t_*$ and $x_*$ if $t_* = 0$, $x_* = x_0$.
We assume that the followers’ solution concept is Nash equilibrium. Let E d ( α ) denote the set of Nash equilibria in the case when the leader plays with the nonanticipating strategy α :
E d ( α ) { u U : J i [ α , u ] J i [ α , u i , u i ] for all u i U i and any i = 1 , n ¯ } .
Denote the set of nonanticipating strategies by $\Gamma$.
Definition 2.
The pair consisting of a nonanticipative strategy of the leader $\alpha^*$ and a profile $u^* \in U$ is an inverse Stackelberg solution of the differential game if
(1) $u^* \in E_d(\alpha^*)$;
(2) $J_0[\alpha^*,u^*] = \max_{\alpha} \max_{u \in E_d(\alpha)} J_0[\alpha,u]$.
The proposed definition is analogous to the definition of the inverse Stackelberg solution for static games. The characterization in the differential game case is close to the characterization in the static game case.
For a fixed profile $u_{-i}$ of the strategies of all players but the $i$-th, one can consider the zero-sum differential game between player 0 and player $i$. In this game, we assume that player 0 uses nonanticipating strategies on $[t,T]$, which are mappings $\beta_i : U_i \to U_0$ satisfying the feasibility condition: if $u_i = u_i'$ on $[t,\tau]$, then $\beta_i[u_i] = \beta_i[u_i']$ on $[t,\tau]$. Denote the set of feasible mappings $\beta_i : U_i \to U_0$ by $\Gamma_i[t]$. The lower value of this game is
$$V_i(t,x,u_{-i}) \triangleq \min_{\beta_i \in \Gamma_i[t]}\, \max_{u_i \in U_i} J_i(t,x,\beta_i[u_i],u_i,u_{-i}).$$
Let
$$C = \{(u_0,u) \in U_0 \times U : \text{for any } i=\overline{1,n},\ t \in [0,T],\ \text{and } x(\cdot) = x(\cdot,u_0,u),\ J_i(t,x(t),u_0,u) \ge V_i(t,x(t),u_{-i})\}.$$
Lemma 2.
Let $\alpha$ be an incentive strategy of the leader. If $u \in E_d(\alpha)$, then $(\alpha[u],u) \in C$.
Proof. 
Denote
$$u_0 \triangleq \alpha[u],\quad x(\cdot) = x(\cdot,u_0,u).$$
We claim that
$$J_i(t,x(t),u_0,u) \ge J_i(t,x(t),\alpha[u_i',u_{-i}],u_i',u_{-i}) \tag{2}$$
for any $u_i' \in U_i$ and $t \in [0,T]$. Assume the converse. This means that, for some $u_i'$ and $\tau$,
$$J_i(\tau,x(\tau),u_0,u) < J_i(\tau,x(\tau),\alpha[u_i',u_{-i}],u_i',u_{-i}). \tag{3}$$
Let us introduce the control $u_i''$ by the following rule:
$$u_i''(t) \triangleq \begin{cases} u_i(t), & t \in [0,\tau],\\ u_i'(t), & t \in (\tau,T]. \end{cases}$$
Further, denote
$$u_0' \triangleq \alpha[u_i'',u_{-i}],\quad x'(\cdot) = x(\cdot,u_0',(u_i'',u_{-i})).$$
We have
$$J_i[\alpha,u_i'',u_{-i}] = \sigma_i(x'(T)) + \int_0^T g_i(t,x'(t),u_0'(t),(u_i'',u_{-i})(t))\,dt.$$
Since, for $t \in [0,\tau]$,
$$u_i''(t) = u_i(t),\quad u_0'(t) = u_0(t) = \alpha[u](t),\quad x'(t) = x(t),$$
and, for $t \in [\tau,T]$,
$$x'(t) = x(t,\tau,x(\tau),u_0',(u_i'',u_{-i})),$$
inequality (3) implies the following inequality:
$$J_i[\alpha,u_i'',u_{-i}] > \int_0^\tau g_i(t,x(t),u_0(t),u(t))\,dt + J_i[\tau,x(\tau),\alpha,u] = J_i[\alpha,u].$$
This contradicts the assumption that $u \in E_d(\alpha)$.
Since the mapping $u_i' \mapsto \alpha[u_i',u_{-i}]$ determines a feasible strategy in $\Gamma_i[t]$, inequality (2) yields the inequality $J_i(t,x(t),u_0,u) \ge V_i(t,x(t),u_{-i})$. ☐
Lemma 3.
For any $(u_0,u) \in C$, a nonanticipative strategy of the leader $\alpha$ exists such that $\alpha[u] = u_0$ and $u \in E_d(\alpha)$.
Proof. 
Denote $x(\cdot) = x(\cdot,u_0,u)$.
Pick $u' \in U$. Let $i_1,i_2,\dots,i_n \in \overline{1,n}$ and $\tau_{i_1},\dots,\tau_{i_n} \in [0,T]$ satisfy the following properties:
(1) $i_1,\dots,i_n$ is a permutation of $1,\dots,n$;
(2) $\tau_{i_1} \le \tau_{i_2} \le \dots \le \tau_{i_n}$;
(3) for each $k$, $\tau_{i_k}$ is the greatest time such that $u_{i_k}' = u_{i_k}$ on $[0,\tau_{i_k}]$.
Let $y_{i_1} = x(\tau_{i_1})$. A mapping $\beta_{i_1} \in \Gamma_{i_1}[\tau_{i_1}]$ exists such that
$$V_{i_1}(\tau_{i_1},y_{i_1},u_{-i_1}) = \max_{u_{i_1}' \in U_{i_1}} J_{i_1}(\tau_{i_1},y_{i_1},\beta_{i_1}[u_{i_1}'],u_{i_1}',u_{-i_1}).$$
Further, pick $\bar{u}_0 \in U_0$ arbitrarily.
Put
$$\alpha[u'](t) \triangleq \begin{cases} u_0(t), & t \in [0,\tau_{i_1}];\\ \beta_{i_1}[u_{i_1}'](t), & t \in (\tau_{i_1},\tau_{i_2}];\\ \bar{u}_0(t), & t \in (\tau_{i_2},T]. \end{cases}$$
Notice that $\alpha[u] = u_0$. Now let $u' = (u_i',u_{-i})$. Denote by $\tau$ the greatest time such that $u_i' = u_i$ on $[0,\tau]$. In this case, $i_1 = i$, $\tau_{i_1} = \tau$, and $\tau_{i_k} = T$ for $k = 2,\dots,n$. By construction, we have
$$J_i[\alpha,u_i',u_{-i}] = \int_0^\tau g_i(t,x(t),u_0(t),u(t))\,dt + J_i[\tau,x(\tau),\alpha,u_i',u_{-i}] \le \int_0^\tau g_i(t,x(t),u_0(t),u(t))\,dt + V_i(\tau,x(\tau),u_{-i}) \le \int_0^\tau g_i(t,x(t),u_0(t),u(t))\,dt + J_i(\tau,x(\tau),u_0,u) = J_i[\alpha,u].$$
 ☐
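In a discrete-time approximation, the strategy built in this proof has a simple trigger structure: replay the agreed control until the first deviation, then switch to the punishing map. Below is a hedged Python sketch for a single deviating follower (all names are hypothetical, and beta is assumed to be itself nonanticipative, as in the lemma).

```python
# Hedged discrete-time sketch of the trigger strategy from Lemma 3 for one
# follower. Controls are lists over a uniform time grid; beta is a
# placeholder for the punishing map attaining the lower value V_i.
def alpha_nonanticipative(u0_star, u_star, beta, u0_bar):
    """u0_star, u_star, u0_bar: control paths over the grid;
    beta: maps (deviation index tau, follower path u) -> leader path."""
    def alpha(u):
        # tau = greatest grid index such that u == u_star on [0, tau)
        tau = next((k for k in range(len(u)) if u[k] != u_star[k]), len(u))
        if tau == len(u):
            return list(u0_star)      # no deviation: keep the agreed plan
        punish = beta(tau, u)         # punishment tail from time tau on
        return list(u0_star[:tau]) + list(punish[tau:])
    return alpha
```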
Theorem 2.
(1) If the pair $(\alpha^*,u^*)$ is an inverse Stackelberg solution, then $(u_0^*,u^*) \in C$ for $u_0^* = \alpha^*[u^*]$, and $(u_0^*,u^*)$ maximizes the value $J_0$ over the set $C$. (2) Conversely, if the pair $(u_0^*,u^*)$ maximizes the value $J_0$ over the set $C$, then a nonanticipative strategy of the leader $\alpha^*$ exists such that $\alpha^*[u^*] = u_0^*$ and $(\alpha^*,u^*)$ is an inverse Stackelberg solution.
The theorem directly follows from Lemmas 2 and 3.

4. Existence of the Inverse Stackelberg Solution for Differential Games

In this section, we consider the differential game in mixed strategies. This means that we replace system (1) with the control system described by the following equation:
$$\dot{x}(t) = \int_{P_0}\int_{P_1}\cdots\int_{P_n} f(t,x(t),u_0,u_1,\dots,u_n)\,\mu_n(t,du_n)\cdots\mu_1(t,du_1)\,\mu_0(t,du_0). \tag{4}$$
Here, $\mu_i(t,\cdot)$ are probability measures on $P_i$.
The relaxation means that we replace the control spaces $P_i$ with the spaces $\mathrm{rpm}(P_i)$ of regular probability measures on $P_i$. Therefore, an open-loop mixed strategy of the $i$-th player is a weakly measurable function $\mu_i : [0,T] \to \mathrm{rpm}(P_i)$. This means that the mapping
$$t \mapsto \int_{P_i} \varphi(u_i)\,\mu_i(t,du_i)$$
is measurable for any continuous function $\varphi \in C(P_i)$. The set of open-loop mixed strategies of the $i$-th player is denoted by $M_i$.
Further, we use the following designations. Put
$$P \triangleq \times_{j=1}^n P_j,\quad P_{-i} \triangleq \times_{j \ne i} P_j.$$
If $m_j \in \mathrm{rpm}(P_j)$, $j = 1,\dots,n$, then, with a slight abuse of notation, denote $m(du) = m_1(du_1)\cdots m_n(du_n)$. Further, for $\varphi \in C(P)$,
$$\int_P \varphi(u)\,m(du) = \int_{P_1}\cdots\int_{P_n} \varphi(u_1,\dots,u_n)\,m_1(du_1)\cdots m_n(du_n).$$
Analogously, we put $m_{-i}(du_{-i}) \triangleq \times_{j \ne i} m_j(du_j)$. Thus,
$$\int_{P_{-i}} \varphi(u_{-i})\,m_{-i}(du_{-i}) = \int_{P_1}\cdots\int_{P_{i-1}}\int_{P_{i+1}}\cdots\int_{P_n} \varphi(u_1,\dots,u_{i-1},u_{i+1},\dots,u_n)\,m_1(du_1)\cdots m_{i-1}(du_{i-1})\,m_{i+1}(du_{i+1})\cdots m_n(du_n).$$
If $(t_*,x_*) \in [0,T] \times \mathbb{R}^d$, $\mu_0 \in M_0$, $\mu_1 \in M_1$, …, $\mu_n \in M_n$, then we denote the solution of the initial value problem for Equation (4) and the position $(t_*,x_*)$ by $x(\cdot,t_*,x_*,\mu_0,\mu_1,\dots,\mu_n)$.
As above, we call the $n$-tuple $\mu = (\mu_1,\dots,\mu_n)$ the profile of the followers' mixed strategies. Denote the set of profiles of the followers' mixed strategies by $M \triangleq \times_{i=1}^n M_i$. Put $x(\cdot,t_*,x_*,\mu_0,\mu) = x(\cdot,t_*,x_*,\mu_0,\mu_1,\dots,\mu_n)$ and $x(\cdot,t_*,x_*,\mu_0,\mu_i',\mu_{-i}) = x(\cdot,t_*,x_*,\mu_0,(\mu_i',\mu_{-i}))$.
For a given position $(t_*,x_*) \in [0,T] \times \mathbb{R}^d$ and measures $\mu_0 \in M_0$, $\mu \in M$, the corresponding payoff of player $i$ is equal to
$$J_i(t_*,x_*,\mu_0,\mu) = \sigma_i(x(T,t_*,x_*,\mu_0,\mu)) + \int_{t_*}^T \int_{P_0}\int_P g_i(t,x(t,t_*,x_*,\mu_0,\mu),u_0,u)\,\mu(t,du)\,\mu_0(t,du_0)\,dt.$$
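When the control sets $P_i$ are discretized to finite grids, a mixed strategy $\mu_i(t,\cdot)$ becomes a weight vector over grid points, and the right-hand side of (4) is a weighted average of $f$ over the product measure. The following Python sketch illustrates this with placeholder dynamics (all names and grids are illustrative).

```python
# Hedged sketch: the relaxed right-hand side of (4) on finite grids, with
# one follower and illustrative placeholder dynamics f.
import numpy as np

P0_grid = np.array([0.0, 1.0])          # grid on P_0
P1_grid = np.array([-1.0, 0.0, 1.0])    # grid on P_1

def f(t, x, u0, u1):
    return u1 - u0 * x

def relaxed_rhs(t, x, w0, w1):
    """sum_{j,k} f(t, x, u0_j, u1_k) * w0_j * w1_k over the product measure."""
    vals = np.array([[f(t, x, u0, u1) for u1 in P1_grid] for u0 in P0_grid])
    return w0 @ vals @ w1

# mu_0(t,.) uniform, mu_1(t,.) concentrated near u1 = 1:
w0 = np.array([0.5, 0.5])
w1 = np.array([0.1, 0.2, 0.7])
print(relaxed_rhs(0.0, 1.0, w0, w1))
```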
As above, a mapping $\alpha : M \to M_0$ satisfying the feasibility condition (the equality of $\mu$ and $\mu'$ on $[0,\tau]$ yields the equality $\alpha[\mu](t,\cdot) = \alpha[\mu'](t,\cdot)$ on $[0,\tau]$) is called a nonanticipative strategy. We denote the set of nonanticipating strategies by $\Gamma$. Analogously, the set of mappings $\beta_i : M_i \to M_0$ satisfying the feasibility property on $[t,T]$ is denoted by $\Gamma_i[t]$.
Further, we use the nonanticipating strategies of player $i$. Such a strategy is a mapping $\gamma_i : M_0 \to M_i$ satisfying the feasibility property on $[t,T]$: if $\mu_0 = \mu_0'$ on $[t,\tau]$, then $\gamma_i[\mu_0] = \gamma_i[\mu_0']$ on $[t,\tau]$. Let $N_i[t]$ stand for the set of nonanticipating strategies of player $i$ on $[t,T]$. By using these strategies, one can introduce the upper value function by the rule: if $(t,x) \in [0,T] \times \mathbb{R}^d$, $\mu_1 \in M_1$, …, $\mu_{i-1} \in M_{i-1}$, $\mu_{i+1} \in M_{i+1}$, …, $\mu_n \in M_n$, then
$$V_i^+(t,x,\mu_{-i}) \triangleq \max_{\gamma_i \in N_i[t]}\, \min_{\mu_0 \in M_0} J_i(t,x,\mu_0,\gamma_i[\mu_0],\mu_{-i}).$$
Generally,
$$V_i^+(t,x,\mu_{-i}) \ge V_i(t,x,\mu_{-i}). \tag{5}$$
Theorem 3.
Assume that the following conditions hold true for each $i = \overline{1,n}$:
(1) $x \mapsto \sigma_i(x)$ is concave;
(2) $g_i(t,x,u_0,u) = g_i^0(t,x,u_i) + g_i^1(t,u_0,u_i) + g_i^2(t,u)$, and the function $x \mapsto g_i^0(t,x,u_i)$ is concave.
Then, an inverse Stackelberg solution $(\alpha^*,\mu^*)$ exists in mixed strategies.
Proof. 
Let us prove that the set C is nonempty.
Define the multivalued map $G : M_0 \times M \rightrightarrows M_0 \times M$ by the rule: $(\mu_0',\mu') \in G(\mu_0,\mu)$ if, for each $i = \overline{1,n}$ and all $t \in [0,T]$,
$$J_i(t,x_i(t),\mu_0',\mu_i',\mu_{-i}) \ge V_i(t,x_i(t),\mu_{-i}).$$
Here, $x_i(\cdot) = x(\cdot,\mu_0',\mu_i',\mu_{-i})$.
The assumption of the theorem implies that the set $G(\mu_0,\mu)$ is convex for all $\mu_0 \in M_0$, $\mu \in M$. Moreover, $G$ has a closed graph. Let us prove the nonemptiness of $G(\mu_0,\mu)$.
Put $\mu_0' = \mu_0$. From the Bellman principle, it follows that
$$V_i^+(t,x,\mu_{-i}) = \max_{\gamma_i \in N_i[t]}\, \min_{\nu_0 \in M_0} \Big[ V_i^+\big(t^+,x(t^+,t,x,\nu_0,\gamma_i[\nu_0],\mu_{-i}),\mu_{-i}\big) + \int_t^{t^+} \int_{P_0}\int_{P_i}\int_{P_{-i}} g_i\big(s,x(s,t,x,\nu_0,\gamma_i[\nu_0],\mu_{-i}),u_0,u_i,u_{-i}\big)\,\mu_{-i}(s,du_{-i})\,\gamma_i[\nu_0](s,du_i)\,\nu_0(s,du_0)\,ds \Big]. \tag{6}$$
Let $N$ be a natural number. Put $t_N^k = Tk/N$. Let $\gamma_{i,N}^k$ maximize the right-hand side of (6) for $t = t_N^k$, $t^+ = t_N^{k+1}$, $x = y_{i,N}^k$. Here, $y_{i,N}^k$ is defined inductively by the rule
$$y_{i,N}^0 = x_0,\quad y_{i,N}^{k+1} = x(t_N^{k+1},t_N^k,y_{i,N}^k,\mu_0,\gamma_{i,N}^k[\mu_0],\mu_{-i}).$$
Put $\tilde{\mu}_{i,N}(t,\cdot) = \gamma_{i,N}^k[\mu_0](t,\cdot)$ for $t \in [t_N^k,t_N^{k+1})$. Denote $x_{i,N}(\cdot) = x(\cdot,0,x_0,\mu_0,\tilde{\mu}_{i,N},\mu_{-i})$. Notice that $y_{i,N}^k = x_{i,N}(t_N^k)$. We have, for $k < l$, the inequality
$$V_i^+(t_N^k,x_{i,N}(t_N^k),\mu_{-i}) \le V_i^+(t_N^l,x_{i,N}(t_N^l),\mu_{-i}) + \int_{t_N^k}^{t_N^l} \int_{P_0}\int_{P_i}\int_{P_{-i}} g_i(t,x_{i,N}(t),u_0,u_i,u_{-i})\,\mu_{-i}(t,du_{-i})\,\tilde{\mu}_{i,N}(t,du_i)\,\mu_0(t,du_0)\,dt.$$
Note that $V_i^+(t_N^N,y_{i,N}^N,\mu_{-i}) = \sigma_i(y_{i,N}^N)$.
Using the continuity of the function $V_i^+$, we get
$$V_i^+(t,x_{i,N}(t),\mu_{-i}) \le V_i^+(T,x_{i,N}(T),\mu_{-i}) + \int_t^T \int_{P_0}\int_{P_i}\int_{P_{-i}} g_i(s,x_{i,N}(s),u_0,u_i,u_{-i})\,\mu_{-i}(s,du_{-i})\,\tilde{\mu}_{i,N}(s,du_i)\,\mu_0(s,du_0)\,ds + \delta_N. \tag{7}$$
Here, $\delta_N \to 0$ as $N \to \infty$.
A subsequence $\{\tilde{\mu}_{i,N_r}\}$ converges to some $\mu_i^* \in M_i$ as $r \to \infty$. Therefore, $x_{i,N_r}(\cdot) = x(\cdot,0,x_0,\mu_0,\tilde{\mu}_{i,N_r},\mu_{-i})$ tends to $x_i^*(\cdot) = x(\cdot,0,x_0,\mu_0,\mu_i^*,\mu_{-i})$. This and inequalities (5) and (7) yield, for any $t \in [0,T]$, the inequality
$$V_i(t,x_i^*(t),\mu_{-i}) \le V_i^+(t,x_i^*(t),\mu_{-i}) \le V_i^+(T,x_i^*(T),\mu_{-i}) + \int_t^T \int_{P_0}\int_{P_i}\int_{P_{-i}} g_i(s,x_i^*(s),u_0,u_i,u_{-i})\,\mu_{-i}(s,du_{-i})\,\mu_i^*(s,du_i)\,\mu_0(s,du_0)\,ds.$$
Put $\mu^* \triangleq (\mu_1^*,\dots,\mu_n^*)$. We have $(\mu_0,\mu^*) \in G(\mu_0,\mu)$.
Since $M_0 \times M$ is compact and $G$ is an upper semicontinuous multivalued map with nonempty convex compact values, $G$ admits a fixed point $(\mu_0^*,\mu^*)$. Obviously, it belongs to $C$. The conclusion of the theorem follows from this and Theorem 2. ☐

Funding

This research was funded by the Russian Foundation for Basic Research (grant No. 17-01-00069).

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Ho, Y.C.; Luh, P.; Muralidharan, R. Information structure, Stackelberg games, and incentive controllability. IEEE Trans. Autom. Control 1981, 26, 454–460.
  2. Ho, Y.C.; Luh, P.; Olsder, G. A control-theoretic view on incentives. Automatica 1982, 18, 167–179.
  3. Ho, Y.C. On incentive problems. Syst. Control Lett. 1983, 3, 63–68.
  4. Olsder, G. Phenomena in Inverse Stackelberg Games, Part 1: Static Problems. J. Optim. Theory Appl. 2009, 143, 589–600.
  5. Olsder, G. Phenomena in Inverse Stackelberg Games, Part 2: Dynamic Problems. J. Optim. Theory Appl. 2009, 143, 601–618.
  6. Martín-Herrán, G.; Taboubi, S. Incentive Strategies for Shelf-Space Allocation in Duopolies. In Dynamic Games: Theory and Applications; Haurie, A., Zaccour, G., Eds.; Springer: Berlin, Germany, 2005; pp. 231–253.
  7. Staňková, K.; Olsder, G.; Bliemer, M. Bilevel optimal toll design problem solved by the inverse Stackelberg games approach. Urban Transp. 2006, 12, 871–880.
  8. Ferrara, M.; Khademi, M.; Salimi, M.; Sharifi, S. A Dynamic Stackelberg Game of Supply Chain for a Corporate Social Responsibility. Discret. Dyn. Nat. Soc. 2017, 2017.
  9. Başar, T.; Olsder, G. Dynamic Noncooperative Game Theory; Academic Press: Philadelphia, PA, USA, 1999.
  10. Martín-Herrán, G.; Taboubi, S.; Zaccour, G. A time-consistent open-loop Stackelberg equilibrium of shelf-space allocation. Automatica 2005, 41, 971–982.
  11. Zheng, Y.; Başar, T. Existence and derivation of optimal affine incentive schemes for Stackelberg games with partial information: A geometric approach. Int. J. Control 1982, 35, 997–1011.
  12. Ehtamo, H.; Hämäläinen, R. Incentive strategies and equilibria for dynamic games with delayed information. J. Optim. Theory Appl. 1989, 63, 355–369.
  13. Kleimenov, A. Nonantagonistic Positional Differential Games; Nauka, Ural'skoe Otdelenie: Ekaterinburg, Russia, 1993.
  14. Averboukh, Y.; Baklanov, A. Stackelberg Solutions of Differential Games in the Class of Nonanticipative Strategies. Dyn. Games Appl. 2014, 4, 1–9.
  15. Elliott, R.; Kalton, N. The Existence of Value for Differential Games. J. Differ. Equ. 1972, 12, 504–523.
  16. Varaiya, P.; Lin, J. Existence of Saddle Points in Differential Games. SIAM J. Control Optim. 1967, 7, 141–157.
