Article

On a Special Two-Person Dynamic Game

1
Department of Economics, Chuo University, 742-1 Higashi-Nakano, Tokyo 192-0393, Japan
2
Department of Mathematics, Corvinus University, Fővám tér 8, 1093 Budapest, Hungary
3
Department of Industrial and Systems Engineering, Lamar University, 2210 Cherry Engineering Building, Beaumont, TX 77710, USA
*
Author to whom correspondence should be addressed.
Games 2023, 14(6), 67; https://doi.org/10.3390/g14060067
Submission received: 29 September 2023 / Revised: 17 October 2023 / Accepted: 19 October 2023 / Published: 24 October 2023
(This article belongs to the Special Issue Learning and Evolution in Games, 1st Edition)

Abstract:
The asymptotical properties of a special dynamic two-person game are examined under best-response dynamics in both discrete and continuous time scales. The direction of strategy changes by the players depends on the best responses to the strategies of the competitors and on the players' own strategies. Conditions are given first for the local asymptotical stability of the equilibrium if instantaneous data are available to the players concerning all current strategies. Next, it is assumed that only delayed information is available about one or more strategies. In the discrete case, the presence of delays affects only the order of the governing difference equations. Under continuous scales, several possibilities are considered: each player has a delay in the strategy of its competitor; player 1 has identical delays in both strategies; the players have identical delays in their own strategies; player 1 has different delays in both strategies; and the players have different delays in their own strategies. In all cases, it is assumed that the equilibrium is asymptotically stable without delays, and we examine how delays can make the equilibrium unstable. For small delays, stability is preserved. In the one-delay models, the critical value of the delay at which stability changes to instability is determined. In the cases of two and three delays, the stability-switching curves are determined in the two-dimensional space of the delays, where stability is lost if the delay pair crosses this curve. The methodology is different for the one-, two-, and three-delay cases outlined in this paper.

1. Introduction

Game theory is one of the most frequently studied fields in mathematical economics. Its foundation and main concepts are discussed in many textbooks and monographs, for example, Dresher (1961), Szép and Forgó (1985), Vorob'ev (1994), and Matsumoto and Szidarovszky (2016) [1,2,3,4]. These include both two-person and general n-person games. In the earliest decades, the existence and uniqueness of the Nash equilibrium were the central research issues (von Neumann and Morgenstern, 1944; Nash, 1951; Fudenberg and Tirole, 1991; Rosen, 1965) [5,6,7,8], and then the dynamic extensions of these static games received increasing attention (Okuguchi, 1976; Hahn, 1962) [9,10]. Models have been developed and examined in both discrete and continuous time scales. In most studies, the dynamic processes are driven by either gradient adjustments or best-response dynamics. In the first case, only interior equilibria are the steady states of the resulting dynamic systems. In the second case, this difficulty is avoided; however, the construction of the dynamic processes requires knowledge of the best responses of the players (Bischi et al., 2010) [11]. In both cases, the asymptotical stability of the equilibrium is examined by applying the Lyapunov theory (Cheban, 2013; Saeed, 2017) [12,13] or local linearization (Bischi et al., 2010) [11]. In the discrete case, the theory of noninvertible maps and critical sets is a commonly used approach for nonlinear discrete systems (Gumowski and Mira, 1980; Mira et al., 1996) [14,15], while for continuous systems Bellman (1969), LaSalle (1968), and Sánchez (1968) [16,17,18] can be suggested as the main references. After developments in the stability theory of differential equations, time delays were added to dynamic models, in which it is usually assumed that the players make decisions based on delayed information about their own and others' actions.
This additional assumption makes the models more realistic, since data collection, determining the best decision alternatives, and their implementation need time (Bellman and Cooke, 1963) [19]. Delay systems have many applications in population dynamics (Kuang, 1993; Cushing, 1977) [20,21]; economics (Gori et al., 2014; Özbay et al., 2017) [22,23]; and engineering (Berezowski, 2001; Wang et al., 1999) [24,25], among other disciplines.
A two-person continuous game is considered, in which the strategy sets S 1 and S 2 are compact intervals and the payoff functions φ k ( x 1 , x 2 ) ( k = 1 , 2 ) are continuous and strictly concave in x k , which is the strategy of player k. Under these conditions, φ k ( x 1 , x 2 ) has a unique maximizer R k ( x ) for all x in the strategy set of the competitor. This function is called the best response of player k. At each time period, the players select strategies to move closer to their best responses. In the discrete case, this concept is realized by the difference equation system
x 1 ( t + 1 ) = x 1 ( t ) + α 1 [ R 1 ( x 2 ( t ) ) − x 1 ( t ) ] ,
x 2 ( t + 1 ) = x 2 ( t ) + α 2 [ R 2 ( x 1 ( t ) ) − x 2 ( t ) ] ,
where α 1 and α 2 are positive adjustment coefficients. In order to avoid overshooting, it is assumed that both are less than unity. In the continuous case, the following differential equation system describes the dynamic evolution of the strategies:
x ˙ 1 ( t ) = α 1 [ R 1 ( x 2 ( t ) ) − x 1 ( t ) ] ,
x ˙ 2 ( t ) = α 2 [ R 2 ( x 1 ( t ) ) − x 2 ( t ) ] ,
where α 1 , α 2 > 0 .
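As a minimal numerical sketch of the discrete dynamics (1) and (2), the simulation below uses hypothetical linear best responses R1(x) = R2(x) = 2 − 0.4x and α1 = α2 = 0.5. These values are illustrative assumptions, not taken from the paper; they are chosen so that the derivative product R1′R2′ = 0.16 lies well inside the stability range discussed later.

```python
# Sketch of the best-response dynamics (1)-(2) with assumed linear
# best responses R1(x) = R2(x) = 2 - 0.4*x (illustrative only).

def simulate(alpha1=0.5, alpha2=0.5, x1=0.0, x2=0.0, steps=200):
    R1 = lambda x: 2.0 - 0.4 * x
    R2 = lambda x: 2.0 - 0.4 * x
    for _ in range(steps):
        # simultaneous update: both players react to time-t strategies
        x1, x2 = (x1 + alpha1 * (R1(x2) - x1),
                  x2 + alpha2 * (R2(x1) - x2))
    return x1, x2

x1, x2 = simulate()
# the steady state solves x = 2 - 0.4*x, i.e. x* = 2/1.4
assert abs(x1 - 2 / 1.4) < 1e-6
assert abs(x2 - 2 / 1.4) < 1e-6
```

The iteration contracts toward the fixed point x∗ = R(x∗) because, for these assumed values, both eigenvalues of the linearized map lie inside the unit circle.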
We provide one example here. Consider a duopoly, in which two firms produce the same product and sell their outputs in the same market. Let x 1 and x 2 denote the production levels of the firms. The production costs are
C 1 ( x 1 ) = a 1 x 1 + b 1 and C 2 ( x 2 ) = a 2 x 2 + b 2 ,
and the selling unit price is
p ( x 1 + x 2 ) = A − B ( x 1 + x 2 ) .
The profit of firm i ( i = 1 , 2 ) is given as the difference between its revenue and cost:
φ i ( x 1 , x 2 ) = x i [ A − B x i − B x j ] − ( a i x i + b i ) , i ≠ j .
It is a natural assumption that firm i does not have instantaneous information on the output level of the competitor, so at time t, it believes that the output of the competitor is x j ( t − τ j ) . The marginal profit of the firm is clearly
∂ φ i ( x 1 , x 2 ) / ∂ x i = A − 2 B x i − B x j − a i .
Hence, at time t, the best believed response of firm i is
R i ( x j ( t − τ j ) ) = [ A − a i − B x j ( t − τ j ) ] / ( 2 B ) if the numerator is positive, and 0 otherwise.
The best response dynamics have the following forms:
x i ( t + 1 ) = x i ( t ) + α i [ R i ( x j ( t − τ j ) ) − x i ( t ) ] for i = 1 , 2
in the discrete-time case, and
x ˙ i ( t ) = α i [ R i ( x j ( t − τ j ) ) − x i ( t ) ] for i = 1 , 2
in the continuous-time case, leading to delay difference and delay differential equations.
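To make the duopoly example concrete, the short check below plugs assumed parameter values (A = 10, B = 1, a1 = 1, a2 = 2, purely illustrative) into the best-response formula and confirms that the outputs x i ∗ = ( A − 2 a i + a j ) / ( 3 B ), the Cournot equilibrium of this linear model, are mutual best responses.

```python
# Numerical check of the duopoly best responses with assumed values.
A, B = 10.0, 1.0
a1, a2 = 1.0, 2.0

def R(a_i, x_j):
    # best believed response: (A - a_i - B x_j)/(2B), truncated at zero
    return max((A - a_i - B * x_j) / (2 * B), 0.0)

# Cournot equilibrium of the linear model
x1 = (A - 2 * a1 + a2) / (3 * B)
x2 = (A - 2 * a2 + a1) / (3 * B)

# each equilibrium output is the best response to the other's output
assert abs(R(a1, x2) - x1) < 1e-12
assert abs(R(a2, x1) - x2) < 1e-12
```

The truncation at zero mirrors the "0 otherwise" branch of the best-response formula; with these assumed parameters, the interior branch applies.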
In this paper, a special two-person game is examined with best-response dynamics in discrete and continuous time scales. The main question we try to answer is the following: How does the presence of time delays affect the asymptotical properties of a dynamic extension of a special stable two-person game? In the case of one delay, the stability interval or intervals are determined in which the stability of the equilibrium is still preserved. In the case of multiple delays, the stability regions in the delay space are determined, where the equilibrium is still stable. This paper introduces a mathematical methodology to find the stability intervals and regions.
The paper is developed as follows. Section 2 considers discrete time scales and is divided into two parts: models with and without time delays. The structure of Section 3 is similar, with two subsections. Section 4 introduces and examines alternative models, and Section 5 offers concluding remarks and further research areas.

2. Discrete Systems

2.1. Stability without Time Delays

Assuming the differentiability of the best-response functions, we can linearize Equations (1) and (2) about an equilibrium that is also a steady state of the system. Let x 1 and x 2 denote the equilibrium strategies and let
x ¯ 1 ( t ) = x 1 ( t ) − x 1 and x ¯ 2 ( t ) = x 2 ( t ) − x 2
be the discrepancies of the strategies from their equilibrium levels. Linearizing Equations (1) and (2) around ( x 1 , x 2 ) gives the following:
x ¯ 1 ( t + 1 ) = ( 1 − α 1 ) x ¯ 1 ( t ) + α 1 R 1 ′ ( x 2 ) x ¯ 2 ( t )
and
x ¯ 2 ( t + 1 ) = ( 1 − α 2 ) x ¯ 2 ( t ) + α 2 R 2 ′ ( x 1 ) x ¯ 1 ( t ) .
This is a linear system. The asymptotic behavior of the state trajectories depends on the locations of the eigenvalues of the system. For finding the eigenvalues, we consider exponential solutions x ¯ 1 ( t ) = λ t u and x ¯ 2 ( t ) = λ t v . Substituting them into Equations ( 5 ) and ( 6 ) , we see that
λ t + 1 u = ( 1 − α 1 ) λ t u + α 1 R 1 ′ ( x 2 ) λ t v , λ t + 1 v = ( 1 − α 2 ) λ t v + α 2 R 2 ′ ( x 1 ) λ t u .
After simplifying both equations by λ t , a linear algebraic system is obtained for u and v, and nonzero solutions exist if and only if the determinant of the system is zero:
0 = det ( 1 − α 1 − λ , α 1 R 1 ′ ( x 2 ) ; α 2 R 2 ′ ( x 1 ) , 1 − α 2 − λ ) = λ 2 − λ ( 2 − α 1 − α 2 ) + ( 1 − α 1 ) ( 1 − α 2 ) − α 1 α 2 R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) .
This is a quadratic polynomial λ 2 + a 1 λ + a 2 with
a 1 = − ( 2 − α 1 − α 2 ) and a 2 = ( 1 − α 1 ) ( 1 − α 2 ) − α 1 α 2 R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) .
It is well known (see, for example, Appendix F of Bischi et al., 2010) [11] that its roots are inside the unit circle if and only if
1 + a 1 + a 2 > 0 , 1 − a 1 + a 2 > 0 , a 2 < 1 .
In our case, these conditions can be written as
R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) < 1 , R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) < 1 + ( 4 − 2 α 1 − 2 α 2 ) / ( α 1 α 2 ) , R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) > 1 − ( α 1 + α 2 ) / ( α 1 α 2 ) .
Since α 1 and α 2 are below unity, the first inequality is stronger than the second one, so we have the following:
Proposition 1.
The equilibrium is locally asymptotically stable if and only if
1 − ( α 1 + α 2 ) / ( α 1 α 2 ) < R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) < 1 .
Notice that
( α 1 + α 2 ) / ( α 1 α 2 ) = 1 / α 2 + 1 / α 1 > 2 ,
so the left-hand side of condition (8) is negative and below − 1 .
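Proposition 1 can be spot-checked numerically: the sketch below builds the quadratic characteristic polynomial from its coefficients a1 and a2 for assumed values of α1, α2 and of the derivative product R1′(x2)R2′(x1), and tests whether both roots lie inside the unit circle.

```python
# Spot-check of Proposition 1 with assumed parameter values.
import cmath

def eigs(alpha1, alpha2, r1, r2):
    # coefficients of the characteristic polynomial λ² + a1 λ + a2
    a1 = -(2 - alpha1 - alpha2)
    a2 = (1 - alpha1) * (1 - alpha2) - alpha1 * alpha2 * r1 * r2
    d = cmath.sqrt(a1 * a1 - 4 * a2)
    return (-a1 + d) / 2, (-a1 - d) / 2

def stable(alpha1, alpha2, r1, r2):
    return all(abs(lam) < 1 for lam in eigs(alpha1, alpha2, r1, r2))

# R1'R2' = 0.5 < 1: stable; R1'R2' = 1.5 > 1: unstable
assert stable(0.5, 0.5, 1.0, 0.5)
assert not stable(0.5, 0.5, 1.0, 1.5)
```

Both test cases respect the lower bound as well, since 1 − ( α 1 + α 2 ) / ( α 1 α 2 ) = − 3 for these assumed adjustment speeds.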

2.2. Stability with Delays

It is now assumed that the players have access only to delayed information about the strategies of the others. We can select the time unit as the length of the common delay. Thus, in Equation (1), x 2 ( t ) is replaced by x 2 ( t − 1 ) , and in (2), x 1 ( t ) is replaced by x 1 ( t − 1 ) . Assuming again exponential solutions x ¯ 1 ( t ) = λ t u and x ¯ 2 ( t ) = λ t v , we have
λ t + 1 u = ( 1 − α 1 ) λ t u + α 1 R 1 ′ ( x 2 ) λ t − 1 v , λ t + 1 v = ( 1 − α 2 ) λ t v + α 2 R 2 ′ ( x 1 ) λ t − 1 u .
After simplifying by λ t − 1 , the resulting algebraic system has the form
λ 2 u = ( 1 − α 1 ) λ u + α 1 R 1 ′ ( x 2 ) v , λ 2 v = ( 1 − α 2 ) λ v + α 2 R 2 ′ ( x 1 ) u .
A nonzero solution of u and v exists if and only if
0 = det ( ( 1 − α 1 ) λ − λ 2 , α 1 R 1 ′ ( x 2 ) ; α 2 R 2 ′ ( x 1 ) , ( 1 − α 2 ) λ − λ 2 ) = λ 4 − λ 3 ( 2 − α 1 − α 2 ) + λ 2 ( 1 − α 1 ) ( 1 − α 2 ) − α 1 α 2 R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) .
This is a quartic equation λ 4 + a 1 λ 3 + a 2 λ 2 + a 3 λ + a 4 with
a 1 = − ( 2 − α 1 − α 2 ) , a 2 = ( 1 − α 1 ) ( 1 − α 2 ) , a 3 = 0 and a 4 = − α 1 α 2 R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) .
Farebrother (1973) [26] showed that the sufficient and necessary conditions to only have roots inside the unit circle are as follows:
| a 4 | < 1 , 3 + 3 a 4 > a 2 , 1 + a 1 + a 2 + a 3 + a 4 > 0 , 1 − a 1 + a 2 − a 3 + a 4 > 0 , ( 1 − a 4 ) ( 1 − a 4 2 ) − a 2 ( 1 − a 4 ) 2 + ( a 1 − a 3 ) ( a 3 − a 1 a 4 ) > 0 .
Notice that the first four inequalities are linear in a 4 as well as in R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) ; however, the last inequality is cubic in a 4 , so it is difficult to find simple stability conditions. Nevertheless, the first four inequalities give necessary stability conditions. They can be written as
R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) > − 1 / ( α 1 α 2 ) , < ( α 1 + α 2 − α 1 α 2 + 2 ) / ( 3 α 1 α 2 ) , < 1 , < 1 + ( 4 − 2 α 1 − 2 α 2 ) / ( α 1 α 2 ) .
Notice that
− 1 / ( α 1 α 2 ) < − 1 ,
( α 1 + α 2 − α 1 α 2 + 2 ) / ( 3 α 1 α 2 ) = ( 1 / 3 ) ( 1 / α 2 + 1 / α 1 ) − 1 / 3 + ( 2 / 3 ) ( 1 / ( α 1 α 2 ) ) > 1
and
1 + ( 4 − 2 α 1 − 2 α 2 ) / ( α 1 α 2 ) > 1 .
Hence, the necessary conditions are
− 1 / ( α 1 α 2 ) < R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) < 1 .
In this case,
− α 1 α 2 < a 4 < 1 .
Let F ( a 4 ) denote the left-hand side of the last stability condition. Notice that
F ( 1 ) = − a 1 2 < 0 and F ( 0 ) = 1 − a 2 > 0 ,
so its sign is indeterminate in general.
We will now examine the additional condition F ( a 4 ) > 0 in the interval ( − α 1 α 2 , 1 ) . Clearly,
F ( a 4 ) = ( 1 − a 4 ) ( 1 − a 4 2 ) − a 2 ( 1 − a 4 ) 2 − a 1 2 a 4 = a 4 3 − a 4 2 ( 1 + a 2 ) + a 4 ( − 1 + 2 a 2 − a 1 2 ) + 1 − a 2 .
Notice first that
− ( 1 + a 2 ) < − 1 , 1 − a 2 > 0
and
− 1 + 2 a 2 − a 1 2 = − ( α 1 − 1 ) 2 − ( α 2 − 1 ) 2 − 1 ≤ − 1 .
Since
F ′ ( a 4 ) = 3 a 4 2 − 2 a 4 ( 1 + a 2 ) + ( − 1 + 2 a 2 − a 1 2 ) ,
the stationary points of F ( a 4 ) are
a 4 ± = ( 1 / 6 ) [ 2 ( 1 + a 2 ) ± √ D ] ,
where
D = 4 ( 1 + a 2 ) 2 + 12 ( 1 − 2 a 2 + a 1 2 ) ≥ 4 ( 1 + a 2 ) 2 + 12 > 16 ,
implying that a 4 + > 0 and a 4 − < 0 . Clearly,
a 4 + > ( 1 / 6 ) ( 2 + 4 ) = 1 ,
being outside the range of the necessary condition. Additionally,
F ′ ( a 4 ) > 0 for a 4 < a 4 − and a 4 > a 4 + ,
and
F ′ ( a 4 ) < 0 for a 4 − < a 4 < a 4 + .
Notice that
F ( 0 ) = 1 − a 2 > 0 and F ( 1 ) = − a 1 2 < 0 ,
and so F ( a 4 ) has three real roots: one before a 4 − , one between 0 and 1, and one after 1. Consequently, if a 4 ∗ denotes the negative root and a 4 ∗ ∗ the positive root between 0 and 1, then F ( a 4 ) > 0 if
a 4 ∗ < a 4 < a 4 ∗ ∗
or
− a 4 ∗ ∗ / ( α 1 α 2 ) < R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) < min { 1 , − a 4 ∗ / ( α 1 α 2 ) } .
Proposition 2.
A necessary condition for the local asymptotical stability of the equilibrium in the delayed model is condition (9). A sufficient and necessary condition is the simultaneous satisfaction of (9) and (10).
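As a hedged numerical companion to Proposition 2, the function below evaluates the five Farebrother inequalities for the quartic of the delayed model, with a4 = − α1 α2 R1′(x2)R2′(x1). The parameter values in the two checks are illustrative assumptions.

```python
# Check of the Farebrother stability conditions for the delayed
# discrete model; parameter values are illustrative assumptions.

def farebrother_stable(alpha1, alpha2, r1r2):
    a1 = -(2 - alpha1 - alpha2)
    a2 = (1 - alpha1) * (1 - alpha2)
    a3 = 0.0
    a4 = -alpha1 * alpha2 * r1r2
    return (abs(a4) < 1
            and 3 + 3 * a4 > a2
            and 1 + a1 + a2 + a3 + a4 > 0
            and 1 - a1 + a2 - a3 + a4 > 0
            and (1 - a4) * (1 - a4**2) - a2 * (1 - a4)**2
                + (a1 - a3) * (a3 - a1 * a4) > 0)

assert farebrother_stable(0.5, 0.5, 0.5)      # R1'R2' = 0.5: stable
assert not farebrother_stable(0.5, 0.5, 1.5)  # R1'R2' = 1.5 > 1: unstable
```

For the stable example, r = 0.5 also satisfies the necessary condition − 1 / ( α 1 α 2 ) < R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) < 1 of (9), consistent with the proposition.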

3. Continuous Systems

3.1. Stability without Time Delays

We will now examine the stability of systems (3) and (4). Similarly to the discrete case, we linearize the system around the equilibrium ( x 1 , x 2 ) to have the following:
x ˙ 1 ( t ) = − α 1 x 1 ( t ) + α 1 R 1 ′ ( x 2 ) x 2 ( t ) ,
x ˙ 2 ( t ) = α 2 R 2 ′ ( x 1 ) x 1 ( t ) − α 2 x 2 ( t ) ,
where the overbar is omitted for simplicity of notation. In order to find the eigenvalues, we look for the solutions in exponential form as before:
x 1 ( t ) = e λ t u and x 2 ( t ) = e λ t v .
The substitution of these solutions into Equations (11) and (12) gives
λ e λ t u = − α 1 e λ t u + α 1 R 1 ′ ( x 2 ) e λ t v ,
λ e λ t v = α 2 R 2 ′ ( x 1 ) e λ t u − α 2 e λ t v .
After simplifying both equations by e λ t , the resulting algebraic system for u and v has nonzero solutions if and only if
0 = det ( − α 1 − λ , α 1 R 1 ′ ( x 2 ) ; α 2 R 2 ′ ( x 1 ) , − α 2 − λ ) = λ 2 + λ ( α 1 + α 2 ) + α 1 α 2 − α 1 α 2 R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) .
This is a quadratic polynomial, λ 2 + a 1 λ + a 2 , with
a 1 = α 1 + α 2 and a 2 = α 1 α 2 [ 1 − R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) ] .
It is well known (Bischi et al., 2010) [11] that the real parts of the roots of this quadratic equation are negative if and only if both a 1 and a 2 are positive, which implies the following:
Proposition 3.
The equilibrium with dynamics (3) and (4) is locally asymptotically stable if
R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) < 1
and unstable if R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) > 1 .
In comparing conditions (8), (9), (10), and (13), it is clear that the stability of the equilibrium with the continuous model is implied by its stability for the discrete cases with and without delays. In comparing the two discrete cases, notice that the sufficient and necessary stability condition of the no-delay case implies the necessary conditions for the delay case since
− 1 / ( α 1 α 2 ) < 1 − ( α 1 + α 2 ) / ( α 1 α 2 )
or
( α 1 − 1 ) ( 1 − α 2 ) < 0 .

3.2. Stability with Delays

Assume again that both players face delays, τ 1 and τ 2 , in the information about the strategies of the others. Therefore, in Equation (3), x 2 ( t ) is replaced by x 2 ( t − τ 2 ) , and in (4), x 1 ( t ) is replaced by x 1 ( t − τ 1 ) . The linearized equations can therefore be written as follows:
x ˙ 1 ( t ) = − α 1 x 1 ( t ) + α 1 R 1 ′ ( x 2 ) x 2 ( t − τ 2 ) ,
x ˙ 2 ( t ) = α 2 R 2 ′ ( x 1 ) x 1 ( t − τ 1 ) − α 2 x 2 ( t ) .
The eigenvalues clearly depend on the lengths of the delays. Searching for exponential solutions
x 1 ( t ) = e λ t u and x 2 ( t ) = e λ t v
and substituting them into these equations, we obtain
λ e λ t u = − α 1 e λ t u + α 1 R 1 ′ ( x 2 ) e λ ( t − τ 2 ) v ,
λ e λ t v = α 2 R 2 ′ ( x 1 ) e λ ( t − τ 1 ) u − α 2 e λ t v .
After simplifying by e λ t , the resulting algebraic equations for u and v have nonzero solutions if and only if
0 = det ( − α 1 − λ , α 1 R 1 ′ ( x 2 ) e − λ τ 2 ; α 2 R 2 ′ ( x 1 ) e − λ τ 1 , − α 2 − λ ) = λ 2 + λ ( α 1 + α 2 ) + α 1 α 2 [ 1 − R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) e − λ ( τ 1 + τ 2 ) ] .
Notice first that the characteristic equation does not depend on the individual values of τ 1 and τ 2 —it depends on only τ = τ 1 + τ 2 . We know that at τ 1 = τ 2 = 0 (no-delay case), the equilibrium is locally asymptotically stable if (13) holds. By increasing the value of τ from zero, the stability might be lost. In this case, an eigenvalue has a zero real part, λ = i ω . Since the complex conjugate of an eigenvalue is also an eigenvalue, we can assume that ω > 0 . Substituting this eigenvalue into the characteristic Equation (16), we have the following:
− ω 2 + i ω ( α 1 + α 2 ) + α 1 α 2 [ 1 − R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) ( cos ω τ − i sin ω τ ) ] = 0 .
The separation of the real and imaginary parts shows that
α 1 α 2 R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) cos ω τ = − ω 2 + α 1 α 2 ,
α 1 α 2 R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) sin ω τ = − ω ( α 1 + α 2 ) .
By adding the squares of these equations, we have
ω 4 + ω 2 ( α 1 2 + α 2 2 ) + α 1 2 α 2 2 [ 1 − ( R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) ) 2 ] = 0 .
If | R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) | ≤ 1 , then no stability switch occurs. If − 1 ≤ R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) < 1 , then the equilibrium remains locally asymptotically stable for all τ > 0 . If R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) = 1 , then the characteristic equation of the no-delay case has a negative and a zero eigenvalue. Thus, no conclusion can be drawn about stability, and the same is the case for all τ > 0 .
Assume next that R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) > 1 . In this case, Equation (19) has two real roots for ω 2 ,
ω ± 2 = [ − ( α 1 2 + α 2 2 ) ± √ D ] / 2 ,
where
D = ( α 1 2 + α 2 2 ) 2 − 4 α 1 2 α 2 2 [ 1 − ( R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) ) 2 ] > ( α 1 2 + α 2 2 ) 2 ,
implying that ω + 2 > 0 and ω − 2 < 0 . Thus, we have a unique solution ω + . From (17) and (18), we see that sin ω τ < 0 and
cos ω τ > 0 if ω + 2 < α 1 α 2 , = 0 if ω + 2 = α 1 α 2 , < 0 if ω + 2 > α 1 α 2 .
Therefore, we have the critical values of the delay:
τ n = ( 1 / ω + ) [ 2 n π − cos − 1 ( ( − ω + 2 + α 1 α 2 ) / ( α 1 α 2 R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) ) ) ] for n = 1 , 2 , …
The direction of the stability switches can be determined by Hopf bifurcation. For this purpose, we select τ as the bifurcation parameter and consider the eigenvalues as functions of τ : λ = λ ( τ ) . Implicitly differentiating the characteristic Equation (16) with respect to τ , we have
2 λ λ ′ + λ ′ ( α 1 + α 2 ) + α 1 α 2 R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) e − λ τ ( λ + λ ′ τ ) = 0 ,
implying that
λ ′ [ 2 λ + α 1 + α 2 + α 1 α 2 τ R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) e − λ τ ] + α 1 α 2 R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) λ e − λ τ = 0 .
From Equation (16), we know that
α 1 α 2 R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) e − λ τ = λ 2 + λ ( α 1 + α 2 ) + α 1 α 2 ,
implying that
1 / λ ′ = − ( 2 λ + α 1 + α 2 ) / [ λ ( λ 2 + λ ( α 1 + α 2 ) + α 1 α 2 ) ] − τ / λ .
Notice that the real parts of λ ′ and 1 / λ ′ have the same sign, and at λ = i ω , the last term is a pure imaginary number with a zero real part. At λ = i ω , the first term equals
− ( 2 i ω + α 1 + α 2 ) / [ i ω ( − ω 2 + i ω ( α 1 + α 2 ) + α 1 α 2 ) ] = − ( 2 i ω + α 1 + α 2 ) / [ − ω 2 ( α 1 + α 2 ) + i ω ( α 1 α 2 − ω 2 ) ] .
The real part of this expression has the same sign as
ω 2 ( α 1 + α 2 ) 2 − 2 ω 2 ( α 1 α 2 − ω 2 ) = ω 2 ( α 1 2 + α 2 2 + 2 ω 2 ) > 0 ,
implying that at any critical value, at least one eigenvalue changes the sign of its real part from negative to positive. Consequently, stability cannot be regained with delayed information.
Assume next that R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) < − 1 . The equilibrium is locally asymptotically stable without delays. From Equations (17) and (18), we see that sin ω τ > 0 and
cos ω τ > 0 if ω + 2 > α 1 α 2 , = 0 if ω + 2 = α 1 α 2 , < 0 if ω + 2 < α 1 α 2 .
Hence, the critical values are
τ ¯ n = ( 1 / ω + ) [ cos − 1 ( ( − ω + 2 + α 1 α 2 ) / ( α 1 α 2 R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) ) ) + 2 n π ] for n = 0 , 1 , 2 , … ,
and at the smallest critical value τ ¯ 0 , stability is lost.
In summary, we have the following results:
Proposition 4.
(a) If R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) < − 1 , then the equilibrium is locally asymptotically stable for τ < τ ¯ 0 . At τ = τ ¯ 0 , the stability is lost via Hopf bifurcation. (b) If − 1 ≤ R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) < 1 , then the equilibrium is locally asymptotically stable for all τ ≥ 0 . (c) If R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) > 1 , then the equilibrium is unstable for all τ ≥ 0 . (d) If R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) = 1 , then no stability result can be determined for τ ≥ 0 .
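Case (a) of Proposition 4 can be verified numerically. Assuming α1 = α2 = 1 and R1′(x2)R2′(x1) = −2 (illustrative values only), the sketch below computes ω+ from (19), the smallest critical delay from (21), and checks that λ = iω+ indeed solves the characteristic Equation (16).

```python
# Verification sketch for case (a) of Proposition 4; the parameter
# values are illustrative assumptions.
import math, cmath

def smallest_critical_delay(alpha1, alpha2, r):
    """Smallest total delay at which eigenvalues reach the imaginary
    axis, for the case r = R1'(x2)R2'(x1) < -1."""
    s = alpha1**2 + alpha2**2
    D = s * s - 4 * alpha1**2 * alpha2**2 * (1 - r * r)   # D > s² when |r| > 1
    omega = math.sqrt((-s + math.sqrt(D)) / 2)            # unique positive root
    # cos ωτ = (α1α2 - ω²)/(α1α2 r); sin ωτ > 0 when r < -1, so acos fits
    tau = math.acos((alpha1 * alpha2 - omega**2) / (alpha1 * alpha2 * r)) / omega
    return tau, omega

tau, omega = smallest_critical_delay(1.0, 1.0, -2.0)

# λ = iω must solve λ² + λ(α1+α2) + α1α2(1 - r e^{-λτ}) = 0
lam = 1j * omega
res = lam**2 + lam * 2.0 + 1.0 * (1.0 - (-2.0) * cmath.exp(-lam * tau))
assert abs(res) < 1e-12
```

With these assumed values, ω+ = 1 and the critical delay sum is τ̄0 = π/2 ≈ 1.5708, beyond which stability is lost via Hopf bifurcation.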

4. Alternative Models

4.1. One-Delay Models

Consider first the case where the first player faces delays in its own and the other player’s strategy. In this case, Equations (3) and (4) are modified as follows:
x ˙ 1 ( t ) = α 1 [ R 1 ( x 2 ( t − τ ) ) − x 1 ( t − τ ) ] ,
x ˙ 2 ( t ) = α 2 [ R 2 ( x 1 ( t ) ) − x 2 ( t ) ] .
Linearization around the equilibrium gives
x ˙ 1 ( t ) = − α 1 x 1 ( t − τ ) + α 1 R 1 ′ ( x 2 ) x 2 ( t − τ ) ,
x ˙ 2 ( t ) = α 2 R 2 ′ ( x 1 ) x 1 ( t ) − α 2 x 2 ( t ) .
Assuming again exponential solutions,
x 1 ( t ) = e λ t u and x 2 ( t ) = e λ t v ,
and substituting them into the linearized equations, we have
λ e λ t u = − α 1 e λ ( t − τ ) u + α 1 R 1 ′ ( x 2 ) e λ ( t − τ ) v ,
λ e λ t v = α 2 R 2 ′ ( x 1 ) e λ t u − α 2 e λ t v .
After simplifying with e λ t , the resulting algebraic equations have nonzero solutions if and only if
0 = det ( − α 1 e − λ τ − λ , α 1 R 1 ′ ( x 2 ) e − λ τ ; α 2 R 2 ′ ( x 1 ) , − α 2 − λ ) = λ 2 + λ α 2 + e − λ τ [ α 1 λ + α 1 α 2 − α 1 α 2 R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) ] .
We showed earlier that the equilibrium is locally asymptotically stable if R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) < 1 and unstable if R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) > 1 . By increasing the value of τ from zero, stability might be lost or regained when λ = i ω . Substituting this eigenvalue into Equation (26), we have
− ω 2 + i ω α 2 + ( cos ω τ − i sin ω τ ) [ i ω α 1 + α 1 α 2 ( 1 − R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) ) ] = 0 .
The separation of the real and imaginary parts shows that
α 1 α 2 [ 1 − R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) ] cos ω τ + ω α 1 sin ω τ = ω 2 ,
α 1 α 2 [ 1 − R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) ] sin ω τ − ω α 1 cos ω τ = ω α 2 .
Adding up the squares of these equations, we have
( α 1 α 2 [ 1 − R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) ] ) 2 + ω 2 α 1 2 = ω 4 + ω 2 α 2 2 .
There is a positive solution for ω 2 ,
ω + 2 = [ α 1 2 − α 2 2 + √ D ] / 2 ,
with
D = ( α 1 2 − α 2 2 ) 2 + 4 ( α 1 α 2 [ 1 − R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) ] ) 2 ≥ ( α 1 2 − α 2 2 ) 2 .
For the sake of notational convenience, let
A = α 1 α 2 [ 1 − R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) ] and B = ω α 1
in order to give
A cos ω τ + B sin ω τ = ω 2 ,
A sin ω τ − B cos ω τ = ω α 2 ,
implying that
cos ω τ = ( ω 2 − B sin ω τ ) / A .
Thus,
A sin ω τ − B ( ω 2 − B sin ω τ ) / A = ω α 2
or
sin ω τ = ω ( B ω + A α 2 ) / ( A 2 + B 2 ) ,
and
cos ω τ = ω ( A ω − B α 2 ) / ( A 2 + B 2 ) .
If A > 0 , then sin ω τ > 0 and
cos ω τ > 0 if ω > B α 2 / A , = 0 if ω = B α 2 / A , < 0 if ω < B α 2 / A ,
and the critical values are
τ n = ( 1 / ω + ) [ cos − 1 ( ω + ( A ω + − B α 2 ) / ( A 2 + B 2 ) ) + 2 n π ] for n = 0 , 1 , 2 , …
If A = 0 , then nothing can be concluded about stability in the no-delay case. From (27), the nonzero solution is
ω 2 = α 1 2 − α 2 2 .
Thus, if α 1 ≤ α 2 , then no stability switch occurs, and if α 1 > α 2 , then
ω + = √ ( α 1 2 − α 2 2 ) .
From (30) and (31), we know that sin ω τ > 0 and cos ω τ < 0 , implying that the critical values are also given by (32).
If A < 0 , then the equilibrium is unstable without delays. In this case, cos ω τ < 0 and
sin ω τ > 0 if ω > − A α 2 / B , = 0 if ω = − A α 2 / B , < 0 if ω < − A α 2 / B .
The critical values are as follows:
τ n = ( 1 / ω + ) [ cos − 1 ( ω + ( A ω + − B α 2 ) / ( A 2 + B 2 ) ) + 2 n π ] if sin ω τ > 0 , for n = 0 , 1 , 2 , … , and τ n = ( 1 / ω + ) [ − cos − 1 ( ω + ( A ω + − B α 2 ) / ( A 2 + B 2 ) ) + 2 n π ] if sin ω τ < 0 , for n = 1 , 2 , …
The directions of the stability switches are determined by Hopf bifurcation. Assuming that λ = λ ( τ ) , we differentiate implicitly Equation (26) with respect to τ to obtain
2 λ λ ′ + λ ′ α 2 + e − λ τ α 1 λ ′ + ( α 1 λ + A ) e − λ τ ( − λ ′ τ − λ ) = 0 ,
implying that
λ ′ [ 2 λ + α 2 − ( α 1 λ + A ) e − λ τ τ + e − λ τ α 1 ] − ( α 1 λ 2 + A λ ) e − λ τ = 0 .
From ( 26 ) , we know that
e − λ τ = − ( λ 2 + λ α 2 ) / ( α 1 λ + A ) ,
implying that
λ 2 λ + α 2 + λ 2 + λ α 2 τ = λ 2 + λ α 2 λ α 1 + A α 1 λ ( α 1 λ + A ) ,
from which we have
1 λ = λ α 1 + A 2 λ + α 2 + λ 2 + λ α 2 τ λ λ + α 2 α 1 λ ( α 1 λ + A ) .
At λ = i ω , this expression becomes
i ω α 1 + A 2 i ω + α 2 + ω 2 + i ω α 2 τ i ω i ω + α 2 α 1 i ω ( i ω α 1 + A ) .
We first simplify the numerator,
N = A α 2 A ω 2 τ α 1 ω 2 ω + ω α 2 τ + i α 1 α 2 ω α 1 ω 3 τ + 2 A ω + ω α 2 τ A .
The denominator is the following:
D = α 1 ω 2 α 1 ω 4 + α 2 A ω 2 + i A ω 3 + α 1 α 2 ω + α 1 α 2 ω 3 .
After multiplying both N and D by the complex conjugate of D, the denominator becomes positive, and the real part of the numerator becomes
A α 2 A ω 2 τ 2 α 1 ω 2 α 1 α 2 ω 2 τ α 1 ω 2 α 1 ω 4 + a 2 A ω 2 + α 1 α 2 ω α 1 ω 3 τ + 2 A ω + ω α 2 τ A A ω 3 + α 1 α 2 ω + α 1 α 2 ω 3 .
It can be shown that (34) can be simplified as
ω 6 2 α 1 2 + ω 4 A α 1 τ + 2 α 1 2 + α 1 2 α 2 2 + 2 A 2 + ω 2 A α 1 α 2 + A 2 α 2 2 + α 1 2 α 2 2 + A α 1 α 2 2 τ ,
which is positive if A > 0 . Thus, if the equilibrium is stable with A > 0 in the no-delay case, then stability is lost at the smallest critical value τ 0 , and stability cannot be regained with larger values of the delay.
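For this one-delay model, the smallest critical delay given by the formula above can be checked directly against the characteristic Equation (26). The values α1 = α2 = 1 and R1′(x2)R2′(x1) = −2 (so that A = 3 > 0) are illustrative assumptions.

```python
# Critical-delay check for the one-delay model; parameter values are
# illustrative assumptions.
import math, cmath

alpha1, alpha2, r = 1.0, 1.0, -2.0
A = alpha1 * alpha2 * (1 - r)                 # A = 3 > 0 here
# ω₊² is the positive root of ω⁴ + ω²(α2² − α1²) − A² = 0, from (27)
s = alpha1**2 - alpha2**2
omega = math.sqrt((s + math.sqrt(s * s + 4 * A * A)) / 2)
B = omega * alpha1
# first critical delay: cos ωτ = ω(Aω − Bα2)/(A² + B²), sin ωτ > 0
tau0 = math.acos(omega * (A * omega - B * alpha2) / (A * A + B * B)) / omega

# λ = iω must solve (26): λ² + λα2 + e^{−λτ}(α1 λ + A) = 0
lam = 1j * omega
res = lam**2 + lam * alpha2 + cmath.exp(-lam * tau0) * (alpha1 * lam + A)
assert abs(res) < 1e-12
```

With these assumed values the computation gives ω+ = √3 and τ0 = (π/3)/√3 ≈ 0.605, at which an eigenvalue pair reaches the imaginary axis.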
Another model is obtained if we assume that both players have delays in their own strategies. Then, Equations (3) and (4) become
x ˙ 1 ( t ) = α 1 [ R 1 ( x 2 ( t ) ) − x 1 ( t − τ ) ] ,
x ˙ 2 ( t ) = α 2 [ R 2 ( x 1 ( t ) ) − x 2 ( t − τ ) ] .
The linearized equations are
x ˙ 1 ( t ) = − α 1 x 1 ( t − τ ) + α 1 R 1 ′ ( x 2 ) x 2 ( t ) ,
x ˙ 2 ( t ) = α 2 R 2 ′ ( x 1 ) x 1 ( t ) − α 2 x 2 ( t − τ ) .
Substituting exponential solutions as before, we have
λ e λ t u = − α 1 e λ ( t − τ ) u + α 1 R 1 ′ ( x 2 ) e λ t v ,
λ e λ t v = α 2 R 2 ′ ( x 1 ) e λ t u − α 2 e λ ( t − τ ) v .
After simplifying with e λ t , the resulting algebraic system for u and v has nonzero solutions if and only if
0 = det ( − α 1 e − λ τ − λ , α 1 R 1 ′ ( x 2 ) ; α 2 R 2 ′ ( x 1 ) , − α 2 e − λ τ − λ ) = λ 2 + λ e − λ τ ( α 1 + α 2 ) + α 1 α 2 [ e − 2 λ τ − R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) ] .
Multiplying this equation by e λ τ , we obtain
λ 2 e λ τ + λ ( α 1 + α 2 ) + α 1 α 2 e − λ τ − α 1 α 2 R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) e λ τ = 0 .
Assuming again that the eigenvalue has a zero real part, λ = i ω , we substitute it into this equation to obtain
0 = − ω 2 ( cos ω τ + i sin ω τ ) + i ω ( α 1 + α 2 ) + α 1 α 2 ( cos ω τ − i sin ω τ ) − α 1 α 2 R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) ( cos ω τ + i sin ω τ ) .
Separating the real and imaginary parts, we have
[ − ω 2 + α 1 α 2 ( 1 − R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) ) ] cos ω τ = 0 ,
[ ω 2 + α 1 α 2 ( 1 + R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) ) ] sin ω τ = ω ( α 1 + α 2 ) .
Now, we have two cases:
(a)
cos ω τ = 0 .
Then, sin ω τ = ± 1 . If sin ω τ = 1 , from (39),
ω 2 − ω ( α 1 + α 2 ) + α 1 α 2 [ 1 + R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) ] = 0 ,
with roots
ω ± = [ α 1 + α 2 ± √ D ] / 2 ,
where
D = ( α 1 + α 2 ) 2 − 4 α 1 α 2 [ 1 + R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) ] .
If D < 0 , no stability switch occurs. If D = 0 , then the unique solution is
ω + = ( α 1 + α 2 ) / 2 ,
and if D > 0 and 1 + R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) ≤ 0 , then we have again one positive solution, ω + . Otherwise, both ω + and ω − are solutions. In the case of one solution, the critical values are
τ n = ( 1 / ω + ) ( π / 2 + 2 n π ) for n = 0 , 1 , 2 , … ,
and in the case of two solutions,
τ n + = ( 1 / ω + ) ( π / 2 + 2 n π ) for n = 0 , 1 , 2 , …
and
τ n − = ( 1 / ω − ) ( π / 2 + 2 n π ) for n = 0 , 1 , 2 , …
Assume next that sin ω τ = − 1 ; then, Equation (39) implies that
ω 2 + ω ( α 1 + α 2 ) + α 1 α 2 [ 1 + R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) ] = 0 ,
with roots
ω ± = [ − ( α 1 + α 2 ) ± √ D ] / 2 .
If D ≤ 0 , then no stability switch occurs. If 1 + R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) ≥ 0 , then √ D ≤ α 1 + α 2 , implying that there is no stability switch. Otherwise, there is a unique positive root,
ω + = [ √ D − ( α 1 + α 2 ) ] / 2 .
Then, the critical values are
τ n = ( 1 / ω + ) ( 3 π / 2 + 2 n π ) for n = 0 , 1 , 2 , …
(b)
cos ω τ ≠ 0 , so
− ω 2 + α 1 α 2 − α 1 α 2 R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) = 0 .
The multiplier of sin ω τ in (43) can be rewritten as
ω 2 + α 1 α 2 [ 1 + R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) ] = α 1 α 2 [ 1 − R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) ] + α 1 α 2 [ 1 + R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) ] = 2 α 1 α 2 .
From (43),
ω + 2 = α 1 α 2 [ 1 − R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) ] .
If R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) < 1 , then the equilibrium is locally asymptotically stable without delay, and we have a positive ω value,
ω + = √ ( α 1 α 2 [ 1 − R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) ] ) .
Furthermore, the critical values are
τ n + = ( 1 / ω + ) [ sin − 1 ( ω + ( α 1 + α 2 ) / ( 2 α 1 α 2 ) ) + 2 n π ] for n = 0 , 1 , 2 , …
and
τ n − = ( 1 / ω + ) [ π − sin − 1 ( ω + ( α 1 + α 2 ) / ( 2 α 1 α 2 ) ) + 2 n π ] for n = 0 , 1 , 2 , …
In the special case when R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) = 1 , Equation (43) shows that ω = 0 is the only solution, implying that no stability switch occurs.
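In case (b), the first critical delay can be validated by substituting λ = iω+ into the characteristic equation of this own-delay model. The values α1 = α2 = 1 and R1′(x2)R2′(x1) = 0.5 are illustrative assumptions; the product is below 1, so the no-delay equilibrium is stable.

```python
# Critical-delay check for the model with delays in the players' own
# strategies; parameter values are illustrative assumptions.
import math, cmath

alpha1 = alpha2 = 1.0
r = 0.5                                          # assumed R1'(x2)R2'(x1) < 1
omega = math.sqrt(alpha1 * alpha2 * (1.0 - r))   # ω₊ from the case-(b) formula
# sin ωτ = ω(α1+α2)/(2 α1 α2) on the first branch of critical delays
tau0 = math.asin(omega * (alpha1 + alpha2) / (2 * alpha1 * alpha2)) / omega

# λ = iω must solve λ² e^{λτ} + λ(α1+α2) + α1α2 e^{−λτ} − α1α2 r e^{λτ} = 0
lam = 1j * omega
res = (lam**2 * cmath.exp(lam * tau0) + lam * (alpha1 + alpha2)
       + alpha1 * alpha2 * cmath.exp(-lam * tau0)
       - alpha1 * alpha2 * r * cmath.exp(lam * tau0))
assert abs(res) < 1e-12
```

Here ω+ = √0.5 and ω+τ0 = π/4, so the residual of the characteristic equation vanishes at the computed critical delay.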

4.2. Two-Delay Model

Assume next that player 1 faces different delays in the data of its own strategy and that of the other player. The associated dynamic equations have the forms
x ˙ 1 ( t ) = α 1 [ R 1 ( x 2 ( t − τ 2 ) ) − x 1 ( t − τ 1 ) ] ,
x ˙ 2 ( t ) = α 2 [ R 2 ( x 1 ( t ) ) − x 2 ( t ) ] ,
and the corresponding linearized system is as follows:
x ˙ 1 ( t ) = α 1 [ R 1 ′ ( x 2 ) x 2 ( t − τ 2 ) − x 1 ( t − τ 1 ) ] ,
x ˙ 2 ( t ) = α 2 [ R 2 ′ ( x 1 ) x 1 ( t ) − x 2 ( t ) ] .
Substituting exponential solutions again into these equations, after simplifying by e λ t , we have
λ u = α 1 [ R 1 ′ ( x 2 ) e − λ τ 2 v − e − λ τ 1 u ] ,
λ v = α 2 [ R 2 ′ ( x 1 ) u − v ] ,
with determinant equation
0 = det ( λ + α 1 e − λ τ 1 , − α 1 R 1 ′ ( x 2 ) e − λ τ 2 ; − α 2 R 2 ′ ( x 1 ) , λ + α 2 ) = λ 2 + α 2 λ + ( α 1 λ + α 1 α 2 ) e − λ τ 1 − α 1 α 2 R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) e − λ τ 2 = P 0 ( λ ) + P 1 ( λ ) e − λ τ 1 + P 2 ( λ ) e − λ τ 2 ,
where
P 0 ( λ ) = λ 2 + α 2 λ ,
P 1 ( λ ) = α 1 λ + α 1 α 2 ,
P 2 ( λ ) = − α 1 α 2 R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) .
In order to guarantee that Equation (46) is the characteristic equation of a delay system, we assume the following:
(a)
deg P 0 ( λ ) ≥ max { deg P 1 ( λ ) , deg P 2 ( λ ) } , so there are finitely many eigenvalues in C + . This clearly holds.
(b)
P 0 ( 0 ) + P 1 ( 0 ) + P 2 ( 0 ) ≠ 0 , which holds if R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) ≠ 1 , as is assumed in the following.
(c)
Polynomials P 0 ( λ ) , P 1 ( λ ) , and P 2 ( λ ) have no common roots. This is obvious if R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) ≠ 0 . If R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) = 0 , then λ = − α 2 is a common root, which does not remove stability.
(d)
lim | λ | → ∞ ( | P 1 ( λ ) / P 0 ( λ ) | + | P 2 ( λ ) / P 0 ( λ ) | ) < 1 ,
which is again obvious, since P 0 ( λ ) is quadratic, P 1 ( λ ) is linear, and P 2 ( λ ) is constant.
We will now follow the suggestions given by Gu et al. (2005) [27]. Dividing both sides of (46) by P 0 ( λ ) , we obtain
1 + a 1 ( λ ) e − λ τ 1 + a 2 ( λ ) e − λ τ 2 = 0 ,
where
a 1 ( λ ) = P 1 ( λ ) / P 0 ( λ ) = α 1 / λ
and
a 2 ( λ ) = P 2 ( λ ) / P 0 ( λ ) = − α 1 α 2 R 1 ′ ( x 2 ) R 2 ′ ( x 1 ) / ( λ 2 + α 2 λ ) .
We are looking for pure complex solutions, λ = i ω . If P 0 ( i ω ) = 0 , then from (46), we have
P 1 ( i ω ) + P 2 ( i ω ) e − i ω ( τ 2 − τ 1 ) = 0 ,
implying that
τ 2 − τ 1 = ( 1 / ω ) [ arg ( − P 2 ( i ω ) / P 1 ( i ω ) ) + 2 n π ] .
However, if P 1 ( i ω ) = 0 as well, then P 2 ( i ω ) ≠ 0 implies no solution, and P 2 ( i ω ) = 0 implies that arbitrary positive τ 1 and τ 2 values are solutions. Notice that this case cannot occur, since P 0 ( λ ) has only two real roots, λ = 0 and λ = − α 2 .
Consider next the case of P 0 ( i ω ) 0 . Then, notice that the complex vectors 1 , a 1 ( i ω ) e i ω τ 1 , and a 2 ( i ω ) e i ω τ 2 form a triangle in the complex plane, as shown in Figure 1.
In the special case of a 1 ( i ω ) = 0 , (47) implies that
1 + a 2 ( i ω ) e − i ω τ 2 = 0 ,
so τ 1 is arbitrary and
τ 2 n = ( 1 / ω ) [ arg ( − a 2 ( i ω ) ) + 2 n π ] .
If a 2 ( i ω ) = 0 , then, similarly,
1 + a 1 ( i ω ) e − i ω τ 1 = 0 ,
implying that τ 2 is arbitrary and
τ 1 m = ( 1 / ω ) [ arg ( − a 1 ( i ω ) ) + 2 m π ] .
If a 1 ( i ω ) and a 2 ( i ω ) are nonzero, then the three vectors form a triangle if and only if
| a 1 ( i ω ) | + | a 2 ( i ω ) | ≥ 1 ,
− 1 ≤ | a 1 ( i ω ) | − | a 2 ( i ω ) | ≤ 1 .
In our case,
a_1(iω) = −iα_1/ω, so |a_1(iω)| = α_1/ω,
a_2(iω) = −α_1α_2R_1'(x_2)R_2'(x_1)/( −ω² + iωα_2 ), so |a_2(iω)| = α_1α_2|R_1'(x_2)R_2'(x_1)| / ( ω√(ω² + α_2²) ).
Relations (48) and (49) are quartic inequalities. In general, they cannot be solved analytically, but if the numerical values are known, then the roots of the corresponding quartic equations can be determined, and certain segments between the roots provide solutions. The law of cosines can be used to find angles θ_1 and θ_2:
θ_1 = cos⁻¹[ ( 1 + |a_1(iω)|² − |a_2(iω)|² ) / ( 2|a_1(iω)| ) ],
and
θ_2 = cos⁻¹[ ( 1 + |a_2(iω)|² − |a_1(iω)|² ) / ( 2|a_2(iω)| ) ].
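As a quick numerical illustration of the law-of-cosines step, the following sketch verifies that the angles θ_1 and θ_2 close the triangle of Figure 1. The magnitudes used correspond to the purely illustrative choice α_1 = α_2 = 1, R_1'(x_2)R_2'(x_1) = 0.5, ω = 1; any pair satisfying (48) and (49) behaves the same way.

```python
import math

# Assumed (illustrative) magnitudes |a1(i*omega)| and |a2(i*omega)|
m1 = 1.0                      # alpha1 / omega
m2 = 0.5 / math.sqrt(2.0)     # alpha1*alpha2*|r| / (omega*sqrt(omega^2 + alpha2^2))

# Triangle (crossing) conditions (48)-(49)
assert m1 + m2 >= 1.0 and abs(m1 - m2) <= 1.0

# Interior angles of the triangle with side lengths 1, |a1|, |a2|
theta1 = math.acos((1 + m1**2 - m2**2) / (2 * m1))
theta2 = math.acos((1 + m2**2 - m1**2) / (2 * m2))

# The triangle closes: projections onto the unit side sum to 1,
# and the heights of the two slanted sides agree.
assert abs(m1 * math.cos(theta1) + m2 * math.cos(theta2) - 1) < 1e-12
assert abs(m1 * math.sin(theta1) - m2 * math.sin(theta2)) < 1e-12
print(theta1, theta2)
```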
The triangle can be placed above and under the horizontal axis, so from Figure 1, we have
arg a_1(iω) − ωτ_1 ± θ_1 = π
and
arg a_2(iω) − ωτ_2 ∓ θ_2 = π,
implying that the critical values are as follows:
τ_{1n}^{±}(ω) = (1/ω)[ arg a_1(iω) + (2n − 1)π ± θ_1 ]
and
τ_{2m}^{±}(ω) = (1/ω)[ arg a_2(iω) + (2m − 1)π ∓ θ_2 ].
Let Ω denote the set of all solutions of conditions (48) and (49), which consists of finitely many intervals:
Ω_k (k = 1, 2, …, N).
For k = 1, 2, …, N, we define
T_{n,m}^{k±} = { ( τ_{1n}^{±}(ω), τ_{2m}^{±}(ω) ) : ω ∈ Ω_k }
and
T^k = ⋃_n ⋃_m ( T_{n,m}^{k+} ∪ T_{n,m}^{k−} ) ∩ ℝ_+².
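The construction above can be sketched numerically. In the fragment below, the model data (α_1 = α_2 = 1 and the derivative product r = R_1'(x_2)R_2'(x_1) = 0.5) are illustrative assumptions; for each crossing frequency ω in a grid inside Ω, one point of T_{n,m}^{k+} is built from (50) and (51), and each computed pair (τ_1, τ_2) is checked to solve (47) with λ = iω.

```python
import cmath
import math

# Illustrative (assumed) parameters
alpha1, alpha2, r = 1.0, 1.0, 0.5

def a1(w):  # a1(i*w) = alpha1 / (i*w)
    return alpha1 / (1j * w)

def a2(w):  # a2(i*w) = -alpha1*alpha2*r / ((i*w)^2 + alpha2*i*w)
    return -alpha1 * alpha2 * r / ((1j * w) ** 2 + alpha2 * 1j * w)

def switching_point(w, n=1, m=1):
    """One point (tau1, tau2) of the switching curve at frequency w, or None."""
    m1, m2 = abs(a1(w)), abs(a2(w))
    if not (m1 + m2 >= 1.0 and abs(m1 - m2) <= 1.0):
        return None  # w is outside Omega
    th1 = math.acos((1 + m1**2 - m2**2) / (2 * m1))
    th2 = math.acos((1 + m2**2 - m1**2) / (2 * m2))
    t1 = (cmath.phase(a1(w)) + (2 * n - 1) * math.pi + th1) / w  # (50), + branch
    t2 = (cmath.phase(a2(w)) + (2 * m - 1) * math.pi - th2) / w  # (51), - branch
    return t1, t2

# Sweep a frequency grid; every point found must solve (47) with lambda = i*w
for w in [0.8 + 0.05 * k for k in range(9)]:
    pt = switching_point(w)
    if pt is None:
        continue
    t1, t2 = pt
    residual = 1 + a1(w) * cmath.exp(-1j * w * t1) + a2(w) * cmath.exp(-1j * w * t2)
    assert abs(residual) < 1e-9
    print(round(w, 2), round(t1, 4), round(t2, 4))
```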
The set T of all stability-switching curves is the union of all sets T^k. By not restricting arg a_1(iω) and arg a_2(iω) to the interval [0, 2π], we make them continuous functions of ω in Ω_k, so with fixed n and m, T_{n,m}^{k+} and T_{n,m}^{k−} also become continuous curves.
Notice that any left endpoint ω_k^ℓ and right endpoint ω_k^r of Ω_k satisfy at least one of the conditions (48) and (49) with equality, so one of the following equalities must hold:
|a_1(iω)| + |a_2(iω)| = 1,
|a_2(iω)| − |a_1(iω)| = 1,
|a_1(iω)| − |a_2(iω)| = 1,
and the cases of ω_k^ℓ = 0 and ω_k^r = ∞ are also possible. If (52) holds, then θ_1 = θ_2 = 0, and T_{n,m}^{k+} is connected with T_{n,m}^{k−} at this endpoint. If (53) holds, then similarly, θ_1 = π and θ_2 = 0, so T_{n,m}^{k+} is connected with T_{n+1,m}^{k−} at this endpoint, and if (54) is satisfied, then θ_1 = 0 and θ_2 = π, showing that at this endpoint T_{n,m}^{k+} is connected with T_{n,m−1}^{k−}. If ω_k^ℓ = 0, then as ω → 0, both T_{n,m}^{k+} and T_{n,m}^{k−} converge to infinity. This discussion shows that, within ℝ_+², the stability-switching curve T conforms to one of the following types:
(i)
A series of closed curves;
(ii)
A series of spiral-like curves with either horizontal, vertical, or diagonal axes;
(iii)
A series of open-ended curves with endpoints converging to infinity.
If a point ( τ 1 , τ 2 ) crosses the stability-switching curve, then stability switching might occur. Before discussing the direction of stability switching (stability loss or gain), some comments are in order.
The direction of a curve is positive if it corresponds to an increasing value of ω. The direction is reversed if a curve passes through an endpoint. Moving along a curve in the positive direction, the region on the left-hand side is called the region on the left, and the region on the right is defined similarly. We define for any point on T the following:
R_ℓ = Re( a_ℓ(iω)e^{−iωτ_ℓ} ) and I_ℓ = Im( a_ℓ(iω)e^{−iωτ_ℓ} ) for ℓ = 1, 2 and ω ∈ Ω.
The following result is given by Gu et al. (2005) [27].
Theorem 1.
Let ω ∈ Ω_k and (τ_1, τ_2) ∈ T^k such that λ = iω is a simple pure complex eigenvalue. As the point (τ_1, τ_2) moves from the right to the left of the corresponding curve of T^k, a pair of eigenvalues cross the imaginary axis to the right (stability loss) if R_2I_1 − R_1I_2 > 0. If the inequality is reversed, then the crossing is to the left (stability gain).
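The sign test of Theorem 1 is easy to evaluate numerically. The sketch below again assumes the illustrative values α_1 = α_2 = 1, R_1'(x_2)R_2'(x_1) = 0.5, and ω = 1, rebuilds one critical pair from (50) and (51), and reports the direction of the crossing:

```python
import cmath
import math

# Illustrative (assumed) data: alpha1 = alpha2 = 1, r = 0.5, crossing at omega = 1
alpha1, alpha2, r, w = 1.0, 1.0, 0.5, 1.0

a1 = alpha1 / (1j * w)
a2 = -alpha1 * alpha2 * r / ((1j * w) ** 2 + alpha2 * 1j * w)

m1, m2 = abs(a1), abs(a2)
th1 = math.acos((1 + m1**2 - m2**2) / (2 * m1))
th2 = math.acos((1 + m2**2 - m1**2) / (2 * m2))
t1 = (cmath.phase(a1) + math.pi + th1) / w  # (50), n = 1, + branch
t2 = (cmath.phase(a2) + math.pi - th2) / w  # (51), m = 1, - branch

# Quantities R_l, I_l of Theorem 1
z1 = a1 * cmath.exp(-1j * w * t1)
z2 = a2 * cmath.exp(-1j * w * t2)
R1, I1 = z1.real, z1.imag
R2, I2 = z2.real, z2.imag
disc = R2 * I1 - R1 * I2
print("stability lost" if disc > 0 else "stability gained",
      "when crossing from right to left")
```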

4.3. Three-Delay Model

Consider next the case in which the players have different delays in the data about their own strategies. In this case, the delay equations become
ẋ_1(t) = α_1[ R_1(x_2(t)) − x_1(t − τ_1) ],
ẋ_2(t) = α_2[ R_2(x_1(t)) − x_2(t − τ_2) ].
By linearizing these equations around the equilibrium, we obtain
ẋ_1(t) = −α_1x_1(t − τ_1) + α_1R_1'(x_2)x_2(t),
ẋ_2(t) = α_2R_2'(x_1)x_1(t) − α_2x_2(t − τ_2).
Substituting the exponential solutions
x_1(t) = e^{λt}u and x_2(t) = e^{λt}v
into these equations, we have
λe^{λt}u = −α_1e^{λ(t−τ_1)}u + α_1R_1'(x_2)e^{λt}v,
λe^{λt}v = α_2R_2'(x_1)e^{λt}u − α_2e^{λ(t−τ_2)}v.
After simplifying these equations by e^{λt}, an algebraic equation system is obtained for u and v, and nonzero solutions exist if and only if
0 = det( [ λ + α_1e^{−λτ_1}, −α_1R_1'(x_2) ; −α_2R_2'(x_1), λ + α_2e^{−λτ_2} ] ) = λ² + α_1λe^{−λτ_1} + α_2λe^{−λτ_2} − α_1α_2R_1'(x_2)R_2'(x_1) + α_1α_2e^{−λ(τ_1+τ_2)},
which can be written as
P_0(λ) + P_1(λ)e^{−λτ_1} + P_2(λ)e^{−λτ_2} + P_3(λ)e^{−λ(τ_1+τ_2)} = 0,
where
P_0(λ) = λ² − α_1α_2R_1'(x_2)R_2'(x_1),
P_1(λ) = α_1λ, P_2(λ) = α_2λ, and P_3(λ) = α_1α_2.
The following assumptions are made to guarantee that (57) can be the characteristic equation of a delay system and to exclude some trivial cases:
(a)
There is a finite number of eigenvalues on ℂ_+ when
deg P_0(λ) ≥ max{ deg P_1(λ), deg P_2(λ), deg P_3(λ) }.
This holds, since P 0 ( λ ) is quadratic, while P 1 ( λ ) and P 2 ( λ ) are linear and P 3 ( λ ) is constant.
(b)
The zero frequency λ = 0 is not an eigenvalue for any τ_1 and τ_2:
P_0(0) + P_1(0) + P_2(0) + P_3(0) ≠ 0.
This condition holds if R_1'(x_2)R_2'(x_1) ≠ 1, so we have to assume this relation in the following discussion.
(c)
Polynomials P_0(λ), P_1(λ), P_2(λ), and P_3(λ) have no common roots, which is trivial in our case.
(d)
lim_{λ→∞} ( |P_1(λ)/P_0(λ)| + |P_2(λ)/P_0(λ)| + |P_3(λ)/P_0(λ)| ) < 1,
which also holds, since P 0 ( λ ) is quadratic and the others have lower degrees.
In examining the stability switches with Equation (57), we will follow the ideas of Lin and Wang (2012) [28]. We can rewrite Equation (57) at λ = i ω as follows:
P_0(iω) + P_1(iω)e^{−iωτ_1} + [ P_2(iω) + P_3(iω)e^{−iωτ_1} ]e^{−iωτ_2} = 0.
Since |e^{−iωτ_2}| = 1,
|P_0(iω) + P_1(iω)e^{−iωτ_1}| = |P_2(iω) + P_3(iω)e^{−iωτ_1}|
or
( P_0 + P_1e^{−iωτ_1} )( P̄_0 + P̄_1e^{iωτ_1} ) = ( P_2 + P_3e^{−iωτ_1} )( P̄_2 + P̄_3e^{iωτ_1} ),
where an overbar indicates a complex conjugate and the arguments of P 0 , P 1 , P 2 , and P 3 are omitted for simple notation. A simple calculation shows that (59) has the equivalent form,
|P_0|² + |P_1|² − |P_2|² − |P_3|² = 2A_1(ω)cos(ωτ_1) − 2B_1(ω)sin(ωτ_1),
where
A_1(ω) = Re( P_2P̄_3 − P_0P̄_1 ) and B_1(ω) = Im( P_2P̄_3 − P_0P̄_1 ).
With any value of ω , this is a trigonometric equation for τ 1 . In our case,
P_0(iω) = −ω² − α_1α_2R_1'(x_2)R_2'(x_1),
P_1(iω) = iωα_1,
P_2(iω) = iωα_2,
P_3(iω) = α_1α_2.
Thus,
A_1(ω) = Re[ iωα_1α_2² − ( ω² + α_1α_2R_1'(x_2)R_2'(x_1) )iωα_1 ] = 0
and
B_1(ω) = ωα_1α_2² − α_1ω³ − α_1²α_2ωR_1'(x_2)R_2'(x_1).
Similarly,
|P_0|² + |P_1|² − |P_2|² − |P_3|² = ( ω² + α_1α_2R_1'(x_2)R_2'(x_1) )² + (α_1ω)² − (α_2ω)² − (α_1α_2)² = ω⁴ + ω²( 2α_1α_2R_1'(x_2)R_2'(x_1) + α_1² − α_2² ) − (α_1α_2)²( 1 − ( R_1'(x_2)R_2'(x_1) )² ).
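These closed forms for A_1(ω), B_1(ω), and the left-hand side of (61) can be cross-checked against direct complex arithmetic; the parameter values in the sketch are illustrative assumptions, not taken from the paper:

```python
# Assumed (illustrative) parameters: alpha1, alpha2 and r = R1'(x2)*R2'(x1)
alpha1, alpha2, r = 2.0, 1.0, 0.5

for w in (0.5, 1.0, 1.7):
    P0 = complex(-w**2 - alpha1 * alpha2 * r)
    P1 = 1j * w * alpha1
    P2 = 1j * w * alpha2
    P3 = complex(alpha1 * alpha2)

    g = P2 * P3.conjugate() - P0 * P1.conjugate()  # = A1 + i*B1
    A1, B1 = g.real, g.imag

    B1_formula = w * alpha1 * alpha2**2 - alpha1 * w**3 - alpha1**2 * alpha2 * w * r
    lhs = abs(P0)**2 + abs(P1)**2 - abs(P2)**2 - abs(P3)**2
    lhs_formula = (w**4 + w**2 * (2 * alpha1 * alpha2 * r + alpha1**2 - alpha2**2)
                   - (alpha1 * alpha2)**2 * (1 - r**2))

    assert abs(A1) < 1e-12            # A1(omega) vanishes identically
    assert abs(B1 - B1_formula) < 1e-9
    assert abs(lhs - lhs_formula) < 1e-9
print("closed forms for A1, B1 and (61) confirmed")
```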
(A) We assume first that ω is a solution of equation P_2P̄_3 − P_0P̄_1 = 0. Then, A_1(ω) = B_1(ω) = 0, and an arbitrary value of τ_1 > 0 is a solution. The corresponding values of τ_2 can be obtained from Equation (58) as
τ_{2n} = (1/ω)[ arg( −( P_2 + P_3e^{−iωτ_1} ) / ( P_0 + P_1e^{−iωτ_1} ) ) + 2nπ ].
If the denominator is zero, then (58) implies that the numerator is also zero, so an arbitrary τ 2 > 0 is a solution.
(B) We assume next that P_2P̄_3 − P_0P̄_1 ≠ 0, so A_1(ω)² + B_1(ω)² > 0. Clearly, there is a ϕ_1(ω) such that
cos ϕ_1(ω) = A_1(ω)/√( A_1(ω)² + B_1(ω)² ) = 0 and sin ϕ_1(ω) = sign B_1(ω).
Then, (60) can be rewritten as
|P_0|² + |P_1|² − |P_2|² − |P_3|² = 2√( A_1(ω)² + B_1(ω)² ) cos( ϕ_1(ω) + ωτ_1 ),
and by defining
ψ_1(ω) = cos⁻¹[ ( |P_0|² + |P_1|² − |P_2|² − |P_3|² ) / ( 2√( A_1(ω)² + B_1(ω)² ) ) ] ∈ [0, π],
it becomes clear that
τ_{1n}^{±} = (1/ω)[ ±ψ_1(ω) − ϕ_1(ω) + 2nπ ].
From (62), we see that a solution for τ 1 exists if and only if
| |P_0|² + |P_1|² − |P_2|² − |P_3|² | ≤ 2√( A_1(ω)² + B_1(ω)² ),
or in our case
( |P_0|² + |P_1|² − |P_2|² − |P_3|² − 2B_1 )( |P_0|² + |P_1|² − |P_2|² − |P_3|² + 2B_1 ) ≤ 0.
The two factors must have different signs, or one of them has to be zero. This inequality and (64) cannot be solved in general, but if numerical values of the model parameters are available, then computer solutions can produce finitely many segments Ω_k forming the set of all solutions for ω, which is denoted by Ω. For each ω ∈ Ω, the corresponding critical values of τ_1 are given by (63), and then from (58), we have
τ_{2m} = (1/ω)[ arg( −( P_2(iω) + P_3(iω)e^{−iωτ_1} ) / ( P_0(iω) + P_1(iω)e^{−iωτ_1} ) ) + 2mπ ]
if the denominator is nonzero. If it is zero, then (58) implies that the numerator is also zero, so an arbitrary τ_2 > 0 is a solution.
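The whole three-delay procedure can be sketched end to end. In the fragment below, the parameters (α_1 = 2, α_2 = 1, R_1'(x_2)R_2'(x_1) = 0.5) and the crossing frequency ω = 1 are illustrative assumptions chosen so that ω ∈ Ω; the computed pair (τ_1, τ_2) is then verified to solve the characteristic Equation (57) with λ = iω.

```python
import cmath
import math

# Illustrative (assumed) data
alpha1, alpha2, r, w = 2.0, 1.0, 0.5, 1.0

P0 = complex(-w**2 - alpha1 * alpha2 * r)
P1 = 1j * w * alpha1
P2 = 1j * w * alpha2
P3 = complex(alpha1 * alpha2)

g = P2 * P3.conjugate() - P0 * P1.conjugate()  # = A1 + i*B1
A1, B1 = g.real, g.imag
lhs = abs(P0)**2 + abs(P1)**2 - abs(P2)**2 - abs(P3)**2

# Existence condition (64): |lhs| <= 2*sqrt(A1^2 + B1^2), i.e. w lies in Omega
assert abs(lhs) <= 2 * math.hypot(A1, B1)

phi1 = math.atan2(B1, A1)                          # cos(phi1) = A1/|g|, sin(phi1) = B1/|g|
psi1 = math.acos(lhs / (2 * math.hypot(A1, B1)))   # psi1 in [0, pi]
t1 = (psi1 - phi1) / w                             # (63), + branch, n = 0

# tau2 from (58): e^{-i*w*tau2} = -(P0 + P1*e^{-i*w*tau1}) / (P2 + P3*e^{-i*w*tau1})
E1 = cmath.exp(-1j * w * t1)
t2 = cmath.phase(-(P2 + P3 * E1) / (P0 + P1 * E1)) / w
if t2 < 0:
    t2 += 2 * math.pi / w

# lambda = i*w must solve the characteristic Equation (57)
residual = (P0 + P1 * cmath.exp(-1j * w * t1) + P2 * cmath.exp(-1j * w * t2)
            + P3 * cmath.exp(-1j * w * (t1 + t2)))
assert abs(residual) < 1e-9
print(round(t1, 4), round(t2, 4))
```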
A simpler method can be suggested by interchanging τ_1 and τ_2 and repeating the above procedure for τ_2. It can be shown that for each segment Ω_k, the critical values form the curves
T_{n,m}^{k±} = { ( ( ±ψ_1(ω) − ϕ_1(ω) + 2nπ )/ω , ( ∓ψ_2(ω) − ϕ_2(ω) + 2mπ )/ω ) : ω ∈ Ω_k },
which are continuous curves in ℝ².
With any left and right endpoints ω_k^ℓ and ω_k^r of Ω_k,
F(ω_k^ℓ) = F(ω_k^r) = 0,
where
F(ω) = ( |P_0|² + |P_1|² − |P_2|² − |P_3|² )² − 4( A_1(ω)² + B_1(ω)² );
therefore,
ψ_i(ω_k^ℓ) = δ_i^ℓπ and ψ_i(ω_k^r) = δ_i^rπ,
where δ_i^ℓ, δ_i^r ∈ {0, 1}. The connection of the segments of the stability-switching curves can be given as follows: T_{n,m}^{k+} connects T_{n+δ_1^ℓ, m−δ_2^ℓ}^{k−} and T_{n+δ_1^r, m−δ_2^r}^{k−} at its two ends:
(i)
If ( δ_1^ℓ, δ_2^ℓ ) = ( δ_1^r, δ_2^r ), then T_{n,m}^{k+} and T_{n+δ_1^ℓ, m−δ_2^ℓ}^{k−} form a loop, so the set of stability-switching curves is a collection of closed continuous curves.
(ii)
Otherwise, it is a collection of continuous curves with endpoints on the axes or extending to infinity in R 2 .
The direction of stability switching now depends on
R_ℓ = Re[ P_ℓ(iω)e^{−iωτ_ℓ} + P_3(iω)e^{−iω(τ_1+τ_2)} ]
and
I_ℓ = Im[ P_ℓ(iω)e^{−iωτ_ℓ} + P_3(iω)e^{−iω(τ_1+τ_2)} ] for ℓ = 1, 2,
and Theorem 1 remains valid with these quantities.

5. Conclusions

In engineering, population dynamics, and realistic economies, there are many examples in which only delayed responses can be observed and instantaneous data are not available. In such cases, designs and decisions are based on delayed data and information. In this paper, we presented some important methods for dealing with this situation. Under discrete time scales, it is usually assumed that the lengths of the delays are positive integers, which only increase the order of the governing difference equations (Matsumoto and Szidarovszky, 2018) [29]. In the continuous case, different methods can be used depending on the number of delays. It is always assumed that the equilibrium is locally asymptotically stable without delays, and stability is preserved if the delays are sufficiently small, based on the fact that the eigenvalues of a matrix depend continuously on its elements. However, by increasing the lengths of the delays, this stability might be lost. In the one-delay case, the critical value of the delay was determined at which stability turns into instability (Section 3.2 and Section 4.1). In the two-delay case, the two-dimensional space of the delays was considered, and the stability-switching curves were determined (Section 4.2 and Section 4.3). If a pair of delays crosses this curve from the region containing the origin, then stability is lost. The situation is the same in the three-delay case, in which two delays and their sum affect the dynamic properties of the equilibrium. The mathematical methodology is different in these cases, which was illustrated in the case of a special two-person game wherein the dynamic evolution of the strategies was governed by best-response dynamics. Several cases were used for the presentation of the methodology, which could be very useful in solving practical problems involving one, two, or even three delays. The material of this paper can be extended in several directions.
One could include more players, different dynamic rules, and more variants of the delayed quantities. In addition, Bayesian methodology is used in population dynamics to assess the survival probabilities of competing species (Dragicevic, 2015) [30] or, in general, to find the probability of the occurrence of certain properties among the players. For the same issues, artificial neural networks could be an alternative approach (Poulton, 2001; Swingler, 1996) [31,32].

Author Contributions

Conceptualization, all authors; methodology, A.M. and F.S.; software, A.M.; validation, A.M., F.S. and M.H.; formal analysis, F.S.; investigation, M.H.; writing—original draft preparation, F.S.; writing—review and editing, M.H.; visualization, A.M.; project administration, F.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

The authors are grateful to three referees; their comments and suggestions were very helpful in finalizing the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dresher, M. Games of Strategy: Theory and Applications; Applied Mathematics Series; Prentice Hall: Englewood Cliffs, NJ, USA, 1961. [Google Scholar]
  2. Matsumoto, A.; Szidarovszky, F. Game Theory and Its Applications; Springer: Tokyo, Japan, 2016. [Google Scholar]
  3. Szep, J.; Forgo, F. Introduction to the Theory of Games; Akademia Kiado: Budapest, Hungary, 1985. [Google Scholar]
  4. Vorob’ev, N.N. Foundations of Game Theory: Noncooperative Games; Birkhäuser: Basel, Switzerland, 1994. [Google Scholar]
  5. Fudenberg, D.; Tirole, J. Game Theory; MIT Press: Cambridge, MA, USA, 1991. [Google Scholar]
  6. Nash, J.F., Jr. Non-Cooperative Games. Ann. Math. 1951, 54, 286–295. [Google Scholar] [CrossRef]
  7. Rosen, J.B. Existence and uniqueness of equilibrium points for concave n-person games. Econometrica 1965, 33, 520–534. [Google Scholar] [CrossRef]
  8. von Neumann, J.; Morgenstern, O. Theory of Games and Economic Behavior; Princeton University Press: Princeton, NJ, USA, 1944. [Google Scholar]
  9. Hahn, F. The stability of the Cournot oligopoly solution. Rev. Econ. Stud. 1962, 29, 329–331. [Google Scholar] [CrossRef]
  10. Okuguchi, K. Expectations and Stability in Oligopoly Models; Springer: Berlin, Germany, 1976. [Google Scholar]
  11. Bischi, G.-I.; Chiarella, C.; Kopel, M.; Szidarovszky, F. Nonlinear Oligopolies: Stability and Bifurcations; Springer, Science+Business Media: Heidelberg, Germany, 2010. [Google Scholar]
  12. Cheban, D.D. Lyapunov Stability of Non-Autonomous Systems; Nova Science Publishers: Hauppauge, NY, USA, 2013. [Google Scholar]
  13. Saeed, R. Lyapunov Stability Theorem with Some Applications; LAMBERT Academic Publishing: Saarbrücken, Germany, 2017. [Google Scholar]
  14. Gumowski, I.; Mira, C. Dynamique Chaotique; Cepadues Editions: Toulouse, France, 1980. [Google Scholar]
  15. Mira, C.; Gardini, L.; Barugola, A.; Cathala, J.-C. Chaotic Dynamics in Two-Dimensional Noninvertible Maps; Nonlinear Sciences, Series A; World Scientific: Singapore, 1996. [Google Scholar]
  16. Bellman, R. Stability Theory of Differential Equations; Dover: New York, NY, USA, 1969. [Google Scholar]
  17. LaSalle, J.P. Stability theory for ordinary differential equations. J. Differ. Equ. 1968, 4, 37–65. [Google Scholar] [CrossRef]
  18. Sánchez, D.A. Ordinary Differential Equations and Stability Theory; Dover Publications: Mineola, NY, USA, 1968. [Google Scholar]
  19. Bellman, R.; Cooke, K.L. Differential-Difference Equations; Academic Press: New York, NY, USA, 1963. [Google Scholar]
  20. Cushing, J. Integro-Differential Equations and Delay Models in Population Dynamics; Springer: Berlin, Germany, 1977. [Google Scholar]
  21. Kuang, Y. Delay Differential Equations with Applications in Population Dynamics; Academic Press: Boston, MA, USA, 1993. [Google Scholar]
  22. Gori, L.; Guerrini, L.; Sodini, M. Hopf bifurcation in a cobweb model with discrete time delays. Discret. Dyn. Nat. Soc. 2014, 2014, 137090. [Google Scholar] [CrossRef]
  23. Özbay, H.; Sağlam, H.Ç.; Yüksel, M.K. Hopf bifurcation in a one-sector optimal growth model with time delay. Macroecon. Dyn. 2017, 21, 1887–1901. [Google Scholar] [CrossRef]
  24. Berezowski, M. Effect of delay time on the generation of chaos in continuous systems: One-dimensional model, two-dimensional model—tubular chemical reactor with recycle. Chaos Solitons Fractals 2001, 12, 83–89. [Google Scholar] [CrossRef]
  25. Wang, Q.G.; Lee, T.H.; Tan, K.K. Time delay systems. In Finite-Spectrum Assignment for Time Delay Systems; Lecture Notes in Control and Information Systems 239; Springer: London, UK, 1999. [Google Scholar]
  26. Farebrother, R.W. Simplified Samuelson conditions for cubic and quartic equations. Manch. Sch. Econ. Soc. 1973, 41, 395–400. [Google Scholar] [CrossRef]
  27. Gu, K.; Niculescu, S.-I.; Chen, J. On stability crossing curves for general systems with two delays. J. Math. Anal. Appl. 2005, 311, 231–253. [Google Scholar] [CrossRef]
  28. Lin, X.; Wang, H. Stability analysis of delay differential equations with two discrete delays. Can. Appl. Math. Q. 2012, 20, 519–533. [Google Scholar]
  29. Matsumoto, A.; Szidarovszky, F. Dynamic Oligopolies with Time Delays; Springer-Nature: Singapore, 2018. [Google Scholar]
  30. Dragicevic, A. Bayesian population dynamics of spreading species. Environ. Model. Assess. 2015, 20, 17–27. [Google Scholar] [CrossRef]
  31. Poulton, M. Computational Neural Networks for Geophysical Data Processing; Pergamon: Amsterdam, The Netherlands, 2001. [Google Scholar]
  32. Swingler, K. Applying Neural Networks; Academic: San Diego, CA, USA, 1996. [Google Scholar]
Figure 1. Triangle constructed by these vectors.

Matsumoto, A.; Szidarovszky, F.; Hamidi, M. On a Special Two-Person Dynamic Game. Games 2023, 14, 67. https://doi.org/10.3390/g14060067

