1. Introduction
The paper is concerned with the inverse Stackelberg game, also known as the incentive problem. In ordinary Stackelberg games, one player (called the leader) announces his strategy, while the other players (called followers) maximize their payoffs using this information. In inverse Stackelberg games, the leader announces an incentive strategy, i.e., a reaction to the followers' strategies ([1,2,3,4,5] and references therein). For dynamic cases, the reaction should be nonanticipative.
Inverse Stackelberg games appear in several models (see, for example, [6,7,8]). In games with many followers, it is often assumed that the followers play a Nash game ([6,9,10]). If the strategy sets are normed spaces, then the incentive strategy can be constructed in affine form (see Ref. [11] for static games and Ref. [12] for differential games).
In this paper, we consider the case where the control spaces of the players are compact metric spaces. We consider both static and dynamic cases. Moreover, for the dynamic case, we apply punishment strategies. The concept of punishment strategies was first used for the analysis of Stackelberg games in the class of feedback strategies in Ref. [13]. The inverse Stackelberg solutions of two-person differential games were studied via punishment strategies in the paper by Kleimenov [14]. That paper described the set of inverse Stackelberg solutions and derived an existence result. In particular, the set of inverse Stackelberg payoffs is equal to the set of feedback Stackelberg payoffs. Note that the incentive strategies considered in the paper by Kleimenov [14] use full memory, i.e., the leader plays the nonanticipating strategies proposed in the papers by Elliott and Kalton [15] and Varaiya and Lin [16] for zero-sum differential games. Using strategies that depend only on the follower's current control decreases the payoffs.
In this paper, punishment strategies are applied to static inverse Stackelberg games and to differential inverse Stackelberg games with many followers. We obtain a characterization of the inverse Stackelberg solution and, under additional concavity conditions, establish an existence theorem.
The paper is organized as follows. Section 2 is concerned with the static inverse Stackelberg game with n followers. The differential game case is considered in Section 3. In Section 4, we prove the existence theorem for the inverse Stackelberg solution of a differential game.
2. Static Games
We denote the leader by 0. Further, we designate the followers by 1, …, n. Each player i has a set of strategies and a payoff function. We assume that the strategy sets are compact and the payoff functions are continuous.
An incentive strategy of the leader is a mapping from the followers' strategy profiles to the leader's strategy set.
To define the inverse Stackelberg game, we specify the solution concept used by the followers. We suppose that the followers play a Nash game. Let P denote the set of profiles of the followers' strategies.
An element u of P is a profile of the followers' strategies; we use the standard shorthand for the profile obtained by replacing its i-th component and for the collection of the remaining components. If α is an incentive strategy of the leader and u is a profile of the followers' strategies, then the corresponding payoffs of the players are denoted accordingly. Further, the set of the followers' Nash equilibria for the case where the leader uses the incentive strategy α is defined in the natural way: a profile is an equilibrium if no follower can increase his payoff by a unilateral deviation, taking the leader's reaction into account.
Definition 1. The pair consisting of an incentive strategy of the leader and a profile of the followers' strategies is an inverse Stackelberg solution in the game with one leader and n followers playing a Nash equilibrium if
- (1) the followers' profile is a Nash equilibrium when the leader uses this incentive strategy;
- (2) the leader's payoff at this pair is maximal over all incentive strategies and corresponding followers' Nash equilibria.
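Definition 1 can be made concrete on finite strategy sets. The following is a minimal brute-force sketch under the reading stated above (the followers' profile must be a Nash equilibrium under the announced incentive strategy, and the leader's payoff must be maximal over all incentive strategies and their equilibria); the strategy sets, payoff functions, and all identifiers are illustrative placeholders, not taken from the paper.

```python
from itertools import product

# Hypothetical finite strategy sets and payoffs (placeholders, not from the paper).
U0, U1, U2 = [0, 1], [0, 1], [0, 1]

def sigma0(u0, u1, u2):  # leader's payoff
    return 1 if u1 != u2 else 0

def sigma1(u0, u1, u2):  # follower 1's payoff
    return (u1 == u2) + (u1 == u0)

def sigma2(u0, u1, u2):  # follower 2's payoff
    return (u1 == u2) + (u2 != u0)

def nash_set(alpha):
    """Followers' Nash equilibria when the leader commits to the incentive strategy alpha."""
    eq = []
    for u1, u2 in product(U1, U2):
        best1 = all(sigma1(alpha[(u1, u2)], u1, u2) >= sigma1(alpha[(v1, u2)], v1, u2) for v1 in U1)
        best2 = all(sigma2(alpha[(u1, u2)], u1, u2) >= sigma2(alpha[(u1, v2)], u1, v2) for v2 in U2)
        if best1 and best2:
            eq.append((u1, u2))
    return eq

# Enumerate all incentive strategies (maps from follower profiles to a leader action).
profiles = list(product(U1, U2))
best_value, best_pair = float("-inf"), None
for choice in product(U0, repeat=len(profiles)):
    alpha = dict(zip(profiles, choice))
    for u in nash_set(alpha):
        val = sigma0(alpha[u], *u)
        if val > best_value:
            best_value, best_pair = val, (alpha, u)

print("leader's inverse Stackelberg payoff:", best_value)
if best_pair is not None:
    print("supporting followers' profile:", best_pair[1])
```

Enumerating all maps from follower profiles to leader actions is exponential; the sketch is only meant to make the definition operational on toy instances.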
The structure of the inverse Stackelberg solution is given in the following statements. To state them, we introduce an auxiliary set consisting of the pairs of a leader's control and a followers' profile at which every follower receives at least the payoff he can secure against the leader's punishment.
Lemma 1. The following properties hold true:
- (1)
If the profile is a followers' Nash equilibrium under some incentive strategy, then a leader's control exists such that the corresponding pair belongs to the set introduced above;
- (2)
If the leader's strategy and the profile of the followers' strategies form a pair belonging to the set introduced above, then an incentive strategy of the leader α exists realizing this pair.
Proof. To prove the first statement of the lemma, a control of the leader is picked to maximize the corresponding expression. Using the definition of the set of Nash equilibria, for this control and each follower, we obtain the required inequalities. Thus, the pair belongs to the set introduced above.
Now, let us prove the second statement of the lemma.
For the designated profile, let the incentive strategy return the leader's control from the given pair; for a profile in which a single follower deviates, let it return a control punishing the deviator; for all remaining profiles, an arbitrary control is picked. First, notice that the designated profile is then a Nash equilibrium under this strategy. Indeed, if a profile is such that its i-th component differs from the designated one for some i and, for all other j, the j-th components coincide, then the deviating follower's payoff does not exceed the punishment level.
This proves the second statement of the lemma. ☐
Theorem 1. (1) If a pair is an inverse Stackelberg solution, then the corresponding pair of a leader's control and a followers' profile maximizes the value over the set introduced above. (2) Conversely, if a pair maximizes the value over this set, then an incentive strategy of the leader exists such that the resulting pair is an inverse Stackelberg solution. (3) If, for each follower, the payoff function is quasi-concave in that follower's own strategy for every choice of the other players' strategies, then at least one inverse Stackelberg solution exists.
Proof. The proof of the first two statements directly follows from Lemma 1.
Let us prove the third statement of the theorem. By the assumption, the relevant functions are quasi-concave for all fixed strategies of the other players. Therefore, a profile of followers' strategies exists satisfying the required conditions. Hence, any pair formed by a leader's control and this profile belongs to the set introduced above. Consequently, this set is nonempty; moreover, it is compact. This proves the existence of a pair maximizing the value over it. The existence of an inverse Stackelberg solution then directly follows from the second statement of the theorem. ☐
Example 1. Consider a game with two followers. Let the set of strategies of each player consist of two elements. In addition, let the followers' rewards for one choice of the leader's control be given by one payoff matrix, and let their rewards for the other choice be given by the other matrix. Finally, we assume that the leader's reward is equal to 1 when the followers' outcome is the designated one and 0 in the opposite case. One can consider this game as a variant of the battle of the sexes with a leader who can shift the roles of the players and who wins when there is no arrangement between the players.
It is easy to check that the corresponding set is equal to the set of all strategies. By maximizing the leader's payoff over this set, we obtain the resulting outcome of the players.
It is instructive to compare this result with the case where the leader declares his strategy first. Clearly, in this case, whatever the leader's strategy is, the leader's outcome is 0, whereas the followers receive their Nash equilibrium payoffs.
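Since the payoff expressions of Example 1 are elided above, the sketch below only illustrates the computation pattern on an assumed battle-of-the-sexes variant: the leader's action decides which follower is favored at a coordination point, miscoordination gives both followers an intermediate payoff, and the leader wins exactly on miscoordination. The reading of the sustainable set (each follower gets at least what he could secure by deviating while the leader reacts adversarially) is also an assumption, since its precise definition is elided. With these placeholder numbers the computed set comes out smaller than the one described in the text, but the qualitative conclusion is preserved: the leader secures payoff 1 at a miscoordinated outcome, in contrast with payoff 0 when a fixed control is announced first.

```python
from itertools import product

# Placeholder payoffs (the paper's exact numbers are elided above):
# leader action c in {1, 2} decides which follower is favored; followers pick events in {1, 2}.
U0 = U1 = U2 = [1, 2]

def followers_payoffs(c, u1, u2):
    if u1 != u2:                       # no arrangement between the followers
        return (1, 1)
    favored = 1 if c == 1 else 2       # the leader shifts the roles
    return (2, 0) if favored == 1 else (0, 2)

def sigma0(c, u1, u2):
    return 1 if u1 != u2 else 0        # the leader wins on miscoordination

def punishment(i, u_other):
    """Best payoff follower i can secure by deviating while the leader reacts adversarially."""
    if i == 1:
        return max(min(followers_payoffs(c, v, u_other)[0] for c in U0) for v in U1)
    return max(min(followers_payoffs(c, u_other, v)[1] for c in U0) for v in U2)

# Sustainable triples: each follower gets at least his punishment level.
D = [(c, u1, u2) for c, u1, u2 in product(U0, U1, U2)
     if followers_payoffs(c, u1, u2)[0] >= punishment(1, u2)
     and followers_payoffs(c, u1, u2)[1] >= punishment(2, u1)]

best = max(D, key=lambda t: sigma0(*t))
print("leader-optimal sustainable outcome:", best, "leader payoff:", sigma0(*best))
```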
3. Inverse Stackelberg Solution for Differential Games
As above, we assume that player 0 is the leader, while players 1, …, n are the followers. The dynamics of the system is given by Equation (1).
Player i wishes to maximize his payoff functional.
The corresponding set is the set of open-loop strategies of player i. As above, the n-tuple of open-loop strategies of the followers is called the profile of strategies. To simplify notation, we introduce the usual shorthand for profiles and products of the strategy sets.
If an initial position and the players' controls are fixed, then denote by the corresponding trajectory the solution of the initial value problem generated by these controls.
If the initial position is the one fixed above, we omit the corresponding arguments. We assume that the set of motions is closed, i.e., for every initial position, the set of trajectories generated by all admissible controls is closed. Here, the closure is taken in the space of continuous functions on the time interval.
We assume that the followers use open-loop strategies, while the leader's strategy is a nonanticipative strategy. The nonanticipation property means that, for any two followers' control profiles coinciding up to a given time, the leader's responses coincide up to that time as well.
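In discrete time, the nonanticipation property can be checked directly. The sketch below is illustrative only: controls are sampled on a finite grid, a leader strategy is a function from a follower control sequence to a leader control sequence, and all names are assumptions rather than the paper's notation.

```python
def is_nonanticipative(alpha, controls, horizon):
    """alpha maps a follower control sequence (tuple) to a leader control sequence of equal length.
    The strategy is nonanticipative if sequences that coincide up to step t are mapped to
    leader controls that coincide up to step t."""
    for u in controls:
        for v in controls:
            for t in range(horizon):
                if u[: t + 1] == v[: t + 1] and alpha(u)[: t + 1] != alpha(v)[: t + 1]:
                    return False
    return True

horizon = 3
controls = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
delayed = lambda u: (0,) + u[:-1]        # reacts to the follower's previous control: nonanticipative
lookahead = lambda u: u[1:] + (0,)       # reacts to the follower's *next* control: anticipative

print(is_nonanticipative(delayed, controls, horizon))    # True
print(is_nonanticipative(lookahead, controls, horizon))  # False
```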
For a given initial position and given strategies of the players, define the corresponding payoffs. We omit the arguments when the initial position is the one fixed above.
We assume that the followers’ solution concept is Nash equilibrium. Let
denote the set of Nash equilibria in the case when the leader plays with the nonanticipating strategy
:
Denote the set of nonanticpating strategies by .
Definition 2. The pair consisting of a nonanticipative strategy of the leader and a profile of the followers' open-loop strategies is an inverse Stackelberg solution of the differential game if
- (1) the profile is a Nash equilibrium of the followers when the leader uses this strategy;
- (2) the leader's payoff at this pair is maximal over all nonanticipative strategies and corresponding followers' Nash equilibria.
The proposed definition is analogous to the definition of the inverse Stackelberg solution for static games. The characterization in the differential game case is close to the characterization in the static game case.
For a fixed profile of strategies of all players but the i-th, one can consider the zero-sum differential game between player 0 and player i. In this case, we assume that player 0 uses nonanticipative strategies on the corresponding time interval, i.e., mappings that satisfy the feasibility condition: if two controls of player i coincide up to a given time, then the leader's responses coincide up to that time. Denote the set of these feasible mappings accordingly. The lower value of this game serves as the punishment level for player i below.
Lemma 2. Let α be an incentive strategy of the leader. If a profile of the followers' strategies is a Nash equilibrium under α, then each follower's payoff is not less than the corresponding lower value.
Proof. We claim that inequality (2) holds for all admissible choices of the arguments. Assume the converse; this means that, for some choice of the arguments, the opposite strict inequality holds. Let us introduce a control of the deviating follower pieced together from the two controls under consideration. Since the corresponding estimates hold on each of the two time intervals, Equation (3) implies an inequality showing that the deviation strictly improves the follower's payoff.
This contradicts the assumption that the profile is a Nash equilibrium under α. Inequality (2) yields the conclusion of the lemma. ☐
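The lower value used in Lemma 2 can be approximated by backward induction on a time grid. The sketch below is a toy illustration under assumed data (scalar dynamics, finite control sets, purely terminal payoff, none of which come from the paper); the per-step max–min order reflects the informational advantage of a leader who reacts to the follower's current control.

```python
import numpy as np

# Assumed data: scalar dynamics x' = u0 + ui with controls in {-1, 0, 1},
# terminal payoff sigma_i(x) = -x**2; names are illustrative only.
T, N = 1.0, 4                    # horizon and number of time steps
dt = T / N
U0 = Ui = (-1.0, 0.0, 1.0)
xs = np.linspace(-3.0, 3.0, 61)  # state grid

def sigma_i(x):
    return -x ** 2

def step(x, u0, ui):
    return x + dt * (u0 + ui)

V = sigma_i(xs)                  # value at the terminal time
for _ in range(N):
    newV = np.empty_like(V)
    for k, x in enumerate(xs):
        # follower i maximizes, knowing the leader will react to his current control:
        # per-step operator is max over ui of min over u0 (the lower value)
        newV[k] = max(min(np.interp(step(x, u0, ui), xs, V) for u0 in U0) for ui in Ui)
    V = newV

print("approximate punishment value at x0 = 0:", V[len(xs) // 2])
```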
Lemma 3. For any pair of a leader's control and a followers' profile satisfying the corresponding inequalities, a nonanticipative strategy of the leader (α) exists such that the profile is a Nash equilibrium under α and α responds to this profile with the given leader's control.
Proof. We introduce the following notation. Pick a followers' profile, a permutation of the followers, and a collection of times satisfying the following properties:
- (1) the indices form a permutation of the followers;
- (2) the times are ordered;
- (3) for each k, the chosen time is the greatest time such that the corresponding follower's control coincides with the designated one up to that time.
A mapping then exists that punishes the first deviator after the corresponding time; for the remaining arguments, the leader's control is picked arbitrarily. Notice that the resulting strategy is nonanticipative. Now, consider a profile in which only follower i deviates, and denote by the corresponding time the greatest time up to which the deviating control coincides with the designated one. In this case, the constructed strategy coincides with the designated leader's control up to this time and punishes follower i afterwards. By construction, we have the required inequality. ☐
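The construction behind Lemma 3 can be sketched in discrete time: the leader plays the agreed control while the observed followers' controls conform to the designated profile, and switches to a punishment rule against the first unilateral deviator from the step at which the deviation is observed. The representation below (control sequences as tuples, punishment rules as callables) is an assumed discretization, not the paper's formalism.

```python
def incentive_strategy(u_star, u0_star, punishments):
    """Return a nonanticipative leader strategy on control sequences of equal length.

    u_star      : tuple of the followers' agreed control sequences, u_star[i][t]
    u0_star     : the leader's agreed control sequence
    punishments : punishments[i](t, observed) -> leader control punishing follower i
                  (to keep the strategy nonanticipative, it should use observed[:][:t+1] only)
    """
    def alpha(observed):                       # observed[i][t]: followers' actual controls
        horizon = len(u0_star)
        out = []
        deviator = None
        for t in range(horizon):
            if deviator is None:
                deviants = [i for i in range(len(u_star)) if observed[i][t] != u_star[i][t]]
                if len(deviants) == 1:         # a unilateral deviation triggers punishment
                    deviator = deviants[0]
            if deviator is None:
                out.append(u0_star[t])         # conform: play the agreed leader control
            else:
                out.append(punishments[deviator](t, observed))
        return tuple(out)
    return alpha

# Usage with toy data: two followers, horizon 3, punishment = constant control -1.
u_star = ((0, 0, 0), (1, 1, 1))
alpha = incentive_strategy(u_star, (0, 1, 0), [lambda t, obs: -1, lambda t, obs: -1])
print(alpha(u_star))                 # (0, 1, 0): no deviation, agreed play
print(alpha(((0, 5, 0), (1, 1, 1)))) # (0, -1, -1): punish follower 0 from step 1 on
```

Simultaneous deviations by several followers are handled arbitrarily here; they are immaterial for Nash deviations, which are unilateral.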
Theorem 2. (1) If a pair is an inverse Stackelberg solution, then the corresponding pair of a leader's control and a followers' profile maximizes the value over the set introduced above. (2) Conversely, if such a pair maximizes the value over this set, then an incentive strategy of the leader exists such that the resulting pair is an inverse Stackelberg solution.
The theorem directly follows from Lemmas 2 and 3.
4. Existence of the Inverse Stackelberg Solution for Differential Games
In this section, we consider the differential game in mixed strategies. This means that we replace the system (1) with the control system described by the following equation: Here, the controls are probability measures on the original control sets.
The relaxation means that we replace the original control spaces with the spaces of probability measures on them. Therefore, an open-loop strategy of the i-th player is a weakly measurable measure-valued function: this means that the integral of any continuous function against the measure is a measurable function of time. The set of relaxed open-loop strategies of the i-th player is denoted accordingly.
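The relaxation replaces a control value by a probability measure over the control set and the velocity by its average with respect to that measure. The following Euler sketch assumes scalar dynamics, a finite control grid, and time-constant weights; all of these are illustrative choices, not the paper's data.

```python
import numpy as np

# Assumed data: scalar dynamics f(t, x, u) = u - x on the finite control grid below.
U = np.array([-1.0, 0.0, 1.0])

def f(t, x, u):
    return u - x

def relaxed_rhs(t, x, mu):
    """Right-hand side of the relaxed system: the average of f with respect to mu (weights on U)."""
    return float(np.dot(mu, f(t, x, U)))

def euler(x0, mu_of_t, T=1.0, steps=100):
    x, dt = x0, T / steps
    for k in range(steps):
        t = k * dt
        x += dt * relaxed_rhs(t, x, mu_of_t(t))
    return x

# A mixed (relaxed) control with weight 1/2 on u = -1 and u = 1 averages to u = 0.
mixed = lambda t: np.array([0.5, 0.0, 0.5])
pure_zero = lambda t: np.array([0.0, 1.0, 0.0])
print(euler(1.0, mixed), euler(1.0, pure_zero))   # identical averaged dynamics in this example
```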
Further, we use the following designations. With a slight abuse of notation, the notation of the previous sections for profiles, shortened profiles, and the corresponding products of control spaces is carried over to the relaxed controls. For a given position and given relaxed controls of the players, we denote the solution of the initial value problem for Equation (4) accordingly.
As above, we call the n-tuple of the followers' relaxed controls the profile of the followers' mixed strategies, and we denote the set of such profiles accordingly.
For a given position and given measures, the corresponding payoff of player i is defined analogously to the open-loop case.
As above, a mapping satisfying the feasibility condition (if two followers' profiles coincide up to a given time, then the responses coincide up to that time) is called a nonanticipative strategy. We denote the set of nonanticipative strategies accordingly; analogously, the set of mappings satisfying the feasibility property on a subinterval is denoted in the corresponding way.
Further, we use the nonanticipative strategies of player i. Such a strategy is a mapping satisfying the feasibility property on the corresponding interval: if two controls of the other players coincide up to a given time, then the responses of player i coincide up to that time. Let the corresponding set stand for the set of nonanticipative strategies of player i on this interval. By using these strategies, one can introduce the upper value function of the corresponding auxiliary game.
Theorem 3. Assume that the following conditions hold true for each of the players:
- (1)
is concave;
- (2)
and the function is concave.
Then, an inverse Stackelberg solution exists in mixed strategies.
Proof. Let us first prove that the relevant set is nonempty. Define the multivalued map by the following rule: a profile belongs to the image of a given profile if, for each follower, the corresponding inequality involving the upper value function holds. The assumption of the theorem implies that the images of this map are convex. Moreover, the map has a closed graph. Let us prove that the images are nonempty.
From the Bellman principle, it follows that the upper value function satisfies a recursive relation. Let N be a natural number and partition the time interval into N subintervals. On each subinterval, let the follower's control maximize the right-hand side of (6), where the corresponding states are defined inductively along the trajectory. Denote the resulting control and notice that it is admissible. We then obtain, for each subinterval, an estimate of the corresponding increment.
Using the continuity of the involved functions, we get an inequality of the form (7), where the error term vanishes as N tends to infinity.
The constructed sequence of controls converges to some limit control as N tends to infinity; therefore, the corresponding values converge as well. This and inequalities (5) and (7) yield the required inequality for the limit control.
Since the set of profiles is compact, and the multivalued map is upper semicontinuous with nonempty convex compact values, it admits a fixed point. Obviously, this fixed point belongs to the set of the followers' Nash equilibria. The conclusion of the theorem follows from this and Theorem 2. ☐
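The role of the concavity assumptions in the proof above is to make the images of the multivalued map convex, which the fixed-point argument requires. The sketch below illustrates the finite static analogue with assumed data: for linear (expected) payoffs, the set of mixed best responses is the set of distributions supported on pure maximizers, hence convex.

```python
import numpy as np

# Follower i's expected payoff for each pure action against a fixed behaviour of the others
# (assumed data, for illustration only).
payoff = np.array([1.0, 3.0, 3.0, 0.5])

def is_best_response(mu, payoff, tol=1e-9):
    """A mixed strategy is a best response iff it only puts weight on pure maximizers."""
    best = payoff.max()
    return bool(np.all(mu[payoff < best - tol] <= tol))

mu1 = np.array([0.0, 0.4, 0.6, 0.0])
mu2 = np.array([0.0, 1.0, 0.0, 0.0])
blend = 0.3 * mu1 + 0.7 * mu2                # convex combination of two best responses
print(is_best_response(mu1, payoff), is_best_response(mu2, payoff), is_best_response(blend, payoff))
# all True: the best-response set is convex, as the fixed-point argument requires
```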