Article

Distributed GNE-Seeking under Partial Information Based on Preconditioned Proximal-Point Algorithms

Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, College of Electronic and Information Engineering, Southwest University, Chongqing 400715, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(11), 6405; https://doi.org/10.3390/app13116405
Submission received: 16 April 2023 / Revised: 15 May 2023 / Accepted: 22 May 2023 / Published: 24 May 2023
(This article belongs to the Special Issue Advanced Artificial Intelligence Theories and Applications)

Abstract

This paper proposes a distributed algorithm for games with shared coupling constraints based on the variational approach and the proximal-point algorithm. The paper demonstrates the effectiveness of the proximal-point algorithm in distributed computing of generalized Nash equilibrium (GNE) problems using local data and communication with neighbors in any networked game. The algorithm achieves the goal of reflecting local decisions in the Nash–Cournot game under partial-decision information while maintaining the distributed nature and convergence of the algorithm.

1. Introduction

The problem of finding a generalized Nash equilibrium (GNE) for networked systems has gained significant interest recently owing to its applicability to multi-agent decision-making scenarios, such as demand-side management in intelligent grids [1], demand response in competitive markets [2], and electric vehicle charging [3]. In such systems, the agents aim to minimize their cost functions under joint feasibility constraints in a non-cooperative setting. Each agent has a local cost function that depends on its own decision as well as on the other agents' decisions, and, owing to limited network resources, each agent's feasible decision set is also coupled with the other agents' decisions. A natural solution concept for such systems is the GNE, which captures the non-cooperative behavior of multiple interacting agents: a GNE is a vector of decisions such that no agent can reduce its local cost by unilaterally deviating from it, given the decisions of the other agents. Hence, a GNE is a self-enforcing outcome that all agents will implement once it is computed, as it achieves their individual minimum costs.
In a traditional computing environment, most algorithms for finding a Nash equilibrium require a central information center to collect and store all game information, such as cost functions, coupling constraints, and feasible local sets. However, this approach is not suitable for large-scale network games, in which it is difficult for a player to observe the true decisions of all other players. Additionally, a central coordinator node may be impractical for technical, geographical, or game-related reasons. In this case, each player needs to compute its local decision corresponding to the GNE in a distributed way, utilizing its local objective function, local feasible set, and possibly local data related to the coupling constraints, and communicating with its neighbors. Therefore, our goal is to develop a distributed algorithm in which each player estimates the strategies of all other agents only by communicating with adjacent agents, so that local decisions can be computed in the absence of global information and the true decision profile is eventually reconstructed.
The proximal-point algorithm has been widely used in distributed computation of GNE problems. In our study, we demonstrate that it can compute local decisions in the absence of global information while maintaining the distributed nature and convergence of the algorithm; that is, without a central coordinator, the generalized Nash equilibrium is calculated by each participant using local information and communication with neighbors. Each player computes a local decision corresponding to the GNE by utilizing its local objective function, local feasible set, and possibly local data related to the coupling constraints, and by communicating with its neighbors. By continuously updating the proximal point, the proximal-point algorithm achieves convergence for games with restricted monotone and Lipschitz continuous pseudogradients under appropriate fixed step-size conditions. We propose a new distributed GNE-seeking algorithm for games with shared affine coupling constraints, based on the variational GNE approach and the proximal-point algorithm, and we demonstrate its effectiveness in achieving a GNE in a distributed manner in the presence of shared constraints.
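As a minimal illustration of the proximal-point iteration underlying this approach, the sketch below applies $v_{k+1} = (\mathrm{Id} + \text{step} \cdot \mathcal{A})^{-1}(v_k)$ to an affine monotone operator, for which the resolvent reduces to a linear solve. The operator and all values are illustrative only; the paper's operator also contains subdifferential and normal-cone terms.

```python
import numpy as np

# Proximal-point iteration v_{k+1} = (I + step * A)^{-1}(v_k) for the
# monotone operator A(v) = M v + q (gradient of a convex quadratic),
# whose resolvent is a linear solve. Illustrative sketch only.
def proximal_point(M, q, v0, step=1.0, iters=200):
    n = len(q)
    I = np.eye(n)
    v = v0.astype(float)
    for _ in range(iters):
        # resolvent: solve (I + step*M) v_next = v - step*q
        v = np.linalg.solve(I + step * M, v - step * q)
    return v

M = np.array([[3.0, 1.0], [1.0, 2.0]])  # positive definite -> strongly monotone
q = np.array([-1.0, 0.5])
v_star = proximal_point(M, q, np.zeros(2))
# the fixed point of the resolvent is a zero of the operator: M v + q = 0
print(np.allclose(M @ v_star + q, 0.0, atol=1e-8))
```

The fixed point coincides with the unique zero of the operator, which here can be verified against a direct linear solve.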

1.1. Literature Review

Research on the GNE was initiated by the seminal works [4,5]. In recent years, the computation of distributed GNE in monotone games has received considerable interest. However, most existing studies rely on the assumption that every agent in the system has access to the decision information of the other agents, e.g., by employing a central coordinator node to orchestrate the information processing. The initial approach was based on the variational inequality method [6]. The variational inequality framework for generalized Nash equilibrium problems established in [6] has been extended to more general settings, such as time-dependent or stochastic problems [7]. Ref. [8] developed a Tikhonov-regularized primal-dual algorithm, and [9] devised a primal-dual gradient method; both assume that each player can acquire the decisions of all adversaries that influence its cost. In ref. [10], a payoff-based GNE algorithm with a diminishing step size was introduced, which converges for a class of convex games. Recently, operator-splitting methods have emerged as a very powerful design technique, ensuring global convergence with a fixed step size and a succinct convergence analysis. However, most GNE application scenarios are aggregative games, such as [3,11,12,13,14]. In refs. [12,13], the algorithms are semi-decentralized, requiring a central node to disseminate the common multipliers and aggregate variables, thus resulting in a star topology. In ref. [14], this requirement is relaxed by a projected-gradient algorithm based on a two-layer consensus; both approaches, however, are only suitable for aggregative games.
In networked non-cooperative games, a distributed forward–backward operator-splitting algorithm for seeking the GNE was proposed by Yi and Pavel [15], based on the convex analysis framework of Bauschke and Combettes [16]. Under partial information, the algorithm requires only the local objective function, feasible set, and block of the affine constraints of each agent, and it introduces a local estimate of all agents' decisions. They further extended their algorithm to an asynchronous setting by using auxiliary variables associated with the edges of the communication graph [17]; in this setting, each agent iterates asynchronously using private data and delayed information from its neighbors, and the convergence and effectiveness of the algorithm were proved under mild assumptions. Ref. [18] investigated a generalized Nash equilibrium problem in which the players are modeled as nodes of a network and each player's utility function depends on its own and its neighbors' actions, and derived a variational decomposition of the game under a quadratic reference model with shared constraints, illustrated with numerical examples. Bianchi et al. [19] developed a novel algorithm that differs from existing projected pseudogradient dynamics in that it is fully distributed, single-layer, and uses a proximal best response with consensus terms; it overcomes the limitations of double-layer iterations and the conservative step sizes of gradient-based methods, and extends the applicability of the proximal-point method via an analysis of the restricted monotonicity property. Pavel [20] applied a preconditioned proximal-point algorithm (PPPA) that decomposes the GNE-seeking task into the computation of Nash equilibria (NE) of a sequence of regularized subgames based on local information; the PPPA also performs distributed updates of multipliers and auxiliary variables, which require multiple communications among agents to solve each subgame.
For games with general coupling costs and affine coupling constraints, refs. [15,21] adopt operator-theoretic methods to perform fully distributed GNE seeking: a forward–backward splitting algorithm for strongly monotone games [15] and a preconditioned proximal-point algorithm for monotone games [21]. The players exchange local multipliers over a network with arbitrary topology, but each player can access all the agent decisions that affect its cost, i.e., complete decision information.

1.2. Contributions

Compared with most existing distributed optimization algorithms, the main contributions of this paper are summarized as follows:
  • The proposal of a GNE-seeking algorithm for games with shared affine coupling constraints. The algorithm is based on the variational GNE approach and the proximal-point algorithm, and is improved by introducing two selection matrices to enhance its accuracy, as in [22,23]; we design a novel preconditioning matrix to distribute the computation and obtain a single-layer iteration. Each player maintains an auxiliary variable to estimate the decisions of the other agents. The algorithm is distributed: each player only utilizes its local objective function, local feasible set, and local data related to the coupling constraints, and there is no centralized coordinator to update and propagate dual variables.
  • An original dual analysis of the Karush–Kuhn–Tucker (KKT) conditions of the variational inequality (VI) is conducted, which introduces a local copy of the multiplier and an auxiliary variable for each player. It is observed that the KKT conditions mandate consensus among all agents on the multiplier for shared constraints. By reformulating the original problem as finding the zero point of a monotone operator that includes the Laplacian matrix of the connected graph, the consistency of local multipliers is enhanced.
This paper presents global and distributed methods for finding GNE in games with shared affine coupling constraints under partial decision information. Section 2 introduces the game model and formulates the GNE problem. Section 3 proposes a global GNE-seeking method based on the proximal-point algorithm with global information. Section 4 develops a distributed GNE-finding method with partial information and proves its convergence and implementation feasibility. Section 5 illustrates the performance of our methods through numerical simulations. Finally, Section 6 concludes the paper and discusses some future work directions.

2. Game Formulation

We study a non-cooperative generalized game among a group of agents. Each agent $i \in \mathcal{I}$ ($\mathcal{I} = \{1, \dots, N\}$) has a local decision set $\Omega_i \subseteq \mathbb{R}^{n_i}$ and chooses its own decision $x_i \in \Omega_i$, where $\mathbb{R}$ is the set of real numbers and $n_i$ is the dimension of agent $i$'s decision. The global decision space is $\Omega = \prod_{i=1}^{N} \Omega_i \subseteq \mathbb{R}^{n}$, where $n = \sum_{i=1}^{N} n_i$, and the stacked vector of all agents' decisions is $x = \operatorname{col}(x_i)_{i \in \mathcal{I}}$. Let $x_{-i} = \operatorname{col}(x_j)_{j \in \mathcal{I}, j \neq i}$ denote the decision profile of all agents except agent $i$, so that $x = (x_i, x_{-i})$. The feasible set of each agent depends on the coupling constraints shared with the other agents and on their decisions. Each agent aims to minimize its objective function over this feasible set as follows:
$$\min_{x_i} \; J_i(x_i, x_{-i}) := f_i(x_i, x_{-i}) + g_i(x_i).$$
The objective function (1) describes how multiple participants optimize their own interests in a non-cooperative situation while being affected by shared constraints. This model applies to a variety of practical scenarios, such as smart grids, competitive markets, and electric vehicle charging.
Assumption 1. 
The cost functions $f_i$ and $g_i$ of each agent $i \in \mathcal{I}$ are convex. The common cost function $f_i$ is continuously differentiable, and the local idiosyncratic cost function $g_i$ is lower semicontinuous. $\Omega_i$ is closed and bounded for every $i \in \mathcal{I}$.
We denote $A := [A_1, \dots, A_N]$ and $b := \sum_{i=1}^{N} b_i$, where $A_i \in \mathbb{R}^{m \times n_i}$ and $b_i \in \mathbb{R}^{m}$ are local parameters. With this affine map, the set of feasible decision profiles under the affine coupling constraints is denoted as follows:
$$\mathcal{X} := \Omega \cap \{ x \in \mathbb{R}^{n} \mid A x - b \leq \mathbf{0}_m \},$$
where $\mathcal{X}$ is a nonempty, closed, and convex set of decisions. Such coupling constraints can model phenomena such as group behavior, competition, and cooperation, since they reflect the interactions and dependencies among participants, making the game more complex and interesting.
Assumption 2. 
Slater's constraint qualification holds for the collective set $\mathcal{X}$. Therefore, each agent $i \in \mathcal{I}$ in the generalized game tries to solve the following interdependent optimization problem:
$$\forall i \in \mathcal{I}: \quad \min_{x_i \in \Omega_i} \; J_i(x_i, x_{-i}) \quad \text{s.t.} \quad A_i x_i \leq b - \sum_{j \neq i} A_j x_j.$$
To obtain the primal–dual characterization for each agent $i \in \mathcal{I}$, we define a Lagrangian function with the dual multiplier $\lambda_i \in \mathbb{R}^{m}_{\geq 0}$ as follows:
$$\mathcal{L}_i(x_i, x_{-i}; \lambda_i) := f_i(x_i, x_{-i}) + g_i(x_i) + \lambda_i^{\top} \Big( \sum_{j=1}^{N} A_j x_j - b \Big).$$
We call a decision $x^* \in \mathcal{X}$ that satisfies (3) a GNE of the game. This implies that, for any agent $i \in \mathcal{I}$,
$$J_i(x_i^*, x_{-i}^*) \leq \inf \{ J_i(y, x_{-i}^*) \mid y \in \mathcal{X}_i(x_{-i}^*) \},$$
where $\mathcal{X}_i(x_{-i}) := \{ y_i \in \Omega_i \mid A_i y_i \leq b - \sum_{j \neq i} A_j x_j \}$.
If $x_i^* \in \Omega_i$ satisfies the KKT conditions below with some $\lambda_i^* \in \mathbb{R}^{m}_{\geq 0}$, then it is an optimal solution to (3):
$$\forall i \in \mathcal{I}: \quad \nabla_{x_i} \mathcal{L}_i(x_i^*, x_{-i}^*) = 0, \quad \langle \lambda_i^*, A x^* - b \rangle = 0, \quad A x^* - b \leq 0, \quad \lambda_i^* \geq 0.$$
We reformulate the KKT conditions using the normal cone operator as follows:
$$0 \in \nabla_{x_i} f_i(x_i^*, x_{-i}^*) + \partial g_i(x_i^*) + A_i^{\top} \lambda_i^*, \qquad 0 \in -\Big( \sum_{i=1}^{N} A_i x_i^* - \sum_{i=1}^{N} b_i \Big) + N_{\mathbb{R}^{m}_{\geq 0}}(\lambda_i^*).$$
We denote the pseudogradient of the game by $F(x) = \operatorname{col}(\nabla_{x_i} f_i(x_i, x_{-i}))_{i \in \mathcal{I}} \in \mathbb{R}^{n}$, and write $g(x) = \sum_{i=1}^{N} g_i(x_i)$.
Let $\lambda = \operatorname{col}(\lambda_i)_{i \in \mathcal{I}}$ denote the stacked vector of Lagrangian multipliers of the agents. A variational GNE is a decision $x^* \in \mathcal{X}$ that solves the variational inequality problem VI$(F, \mathcal{X})$ and has equal Lagrangian multipliers ($\lambda_1 = \cdots = \lambda_N$):
$$\text{find } x^* \in \mathcal{X} \;\; \text{s.t.} \;\; \langle F(x^*), x - x^* \rangle + g(x) - g(x^*) \geq 0, \quad \forall x \in \mathcal{X}.$$
Under Assumption 1, $J_i$ is a continuous and convex function, and $\Omega_i$ is a bounded local decision set; these conditions ensure that VI$(F, \mathcal{X})$ has a solution. Let $x^*$ be such a solution. Then it satisfies (7) with equal Lagrange multipliers $\lambda^*$, and hence $x^*$ is a variational GNE for (3).

3. Iterative Algorithms with Global Information

We first propose an algorithm based on preconditioned proximal-point iterations under full-decision information; that is, every agent can directly access the decisions of all other agents that affect its local objective function.

3.1. Communication Graph

Let $|\mathcal{N}| = N$ and $|\mathcal{E}| = M$. We define $\mathcal{E}_i$ as the set of edges adjacent to node $i$, consisting of $\mathcal{E}_i^{\mathrm{in}}$ (incoming edges) and $\mathcal{E}_i^{\mathrm{out}}$ (outgoing edges). Let $W = [w_{i,j}]_{i,j \in \mathcal{I}} \in \mathbb{R}^{N \times N}$ be the symmetric weight matrix of $\mathcal{G}$, where $w_{i,j} > 0$ if $(i,j) \in \mathcal{E}$ and $w_{i,j} = 0$ otherwise; we also set $w_{i,i} = 0$ for all agents. The Laplacian matrix of $\mathcal{G}$ is denoted by $L := D - W$, where $D := \operatorname{diag}(d_i)_{i \in \mathcal{I}}$ is the degree matrix and $d_i := \sum_{j=1}^{N} w_{i,j}$ for all $i \in \mathcal{I}$.
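The Laplacian construction $L = D - W$ can be sketched as follows; the example graph and unit weights are illustrative.

```python
import numpy as np

# Build the graph Laplacian L = D - W from a symmetric weight matrix W
# (zero diagonal) and check two standard properties: zero row sums, and
# a positive second-smallest eigenvalue (algebraic connectivity) when
# the graph is connected.
def laplacian(W):
    d = W.sum(axis=1)            # degrees d_i = sum_j w_ij
    return np.diag(d) - W

# small connected 4-node graph (weights are illustrative)
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = laplacian(W)
print(np.allclose(L.sum(axis=1), 0))       # rows sum to zero
eigvals = np.sort(np.linalg.eigvalsh(L))
print(eigvals[1] > 0)                       # connected graph => lambda_2 > 0
```

Connectivity of the graph (Assumption 3) is exactly what makes the second-smallest eigenvalue positive, which the later analysis relies on.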
Assumption 3. 
G is a connected and undirected graph.

3.2. Algorithm Development

Assumption 4. 
$F(x)$ is $\mu$-strongly monotone, i.e., $\langle F(x) - F(y), x - y \rangle \geq \mu \| x - y \|_2^2$ for all $x, y \in \Omega$, and $\theta_0$-Lipschitz continuous, i.e., $\| F(x) - F(y) \|_2 \leq \theta_0 \| x - y \|_2$ for all $x, y \in \Omega$.
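For an affine pseudogradient $F(x) = Mx + q$ (a form assumed here purely for illustration, not the game's pseudogradient), the constants of Assumption 4 can be computed directly: $\mu$ is the smallest eigenvalue of the symmetric part of $M$, and $\theta_0$ is the largest singular value of $M$.

```python
import numpy as np

# For an affine map F(x) = M x + q, the strong-monotonicity modulus mu is
# the smallest eigenvalue of (M + M^T)/2, and the Lipschitz constant
# theta_0 is the largest singular value of M. M is illustrative.
M = np.array([[4.0, 1.0], [0.0, 3.0]])
mu = np.linalg.eigvalsh((M + M.T) / 2).min()
theta0 = np.linalg.svd(M, compute_uv=False).max()

# empirical check of both inequalities on random pairs
rng = np.random.default_rng(0)
for _ in range(100):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    gap = M @ (x - y)
    assert gap @ (x - y) >= mu * np.dot(x - y, x - y) - 1e-9
    assert np.linalg.norm(gap) <= theta0 * np.linalg.norm(x - y) + 1e-9
print(mu > 0)   # F is strongly monotone
```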
The strong monotonicity of $F$ implies that the v-GNE exists and is unique. The variational problem associated with the original game can be written as
$$\min_{x \in \mathbb{R}^{n}} \; \langle F(x^*), x \rangle + g(x) \quad \text{s.t.} \quad \sum_{i=1}^{N} A_i x_i \leq \sum_{i=1}^{N} b_i.$$
The objective function is associated with the following Lagrangian function:
$$\tilde{\mathcal{L}}(x, \lambda_g) := \langle F(x^*), x \rangle + g(x) + \lambda_g^{\top} \Big( \sum_{i=1}^{N} A_i x_i - b \Big),$$
where $\lambda_g \in \mathbb{R}^{m}$ is a global Lagrangian multiplier. To design distributed optimization algorithms, we impose the consistency constraints $\lambda_i = \lambda_j$ for all $(i,j) \in \mathcal{E}$, i.e., $(L \otimes I_m) \lambda = 0$, where $I_m$ is the identity matrix of order $m$. Define $G(x) = \operatorname{col}(\partial g_i(x_i))_{i \in \mathcal{I}}$, $\lambda = \operatorname{col}(\lambda_1, \dots, \lambda_N) \in \mathbb{R}^{Nm}$, $\mathbf{A} = \operatorname{diag}(A_1, \dots, A_N)$, and $\mathbf{b} = \operatorname{col}(b_i)$. Then,
$$\mathcal{L}_1(x, \lambda) := \langle F(x^*), x \rangle + g(x) + \lambda^{\top} (\mathbf{A} x - \mathbf{b}).$$
Its corresponding saddle-point problem is
$$\max_{\lambda} \min_{x} \; \langle F(x^*), x \rangle + g(x) + \lambda^{\top} (\mathbf{A} x - \mathbf{b}) \quad \text{s.t.} \quad (L \otimes I_m) \lambda = 0.$$
Let $\mathbf{L} = L \otimes I_m$, $\mathbf{W} = W \otimes I_m$, and $\mathbf{D} = D \otimes I_m$. The Lagrangian function of this saddle-point problem is $\mathcal{L}_2(x, \lambda, z) = \langle F(x^*), x \rangle + g(x) + \lambda^{\top}(\mathbf{A} x - \mathbf{b}) + z^{\top} \mathbf{L} \lambda$. The optimality conditions are obtained by sequentially taking the partial (sub)differentials with respect to $x$, $\lambda$, and $z$:
$$0 \in F(x) + G(x) + \mathbf{A}^{\top} \lambda, \qquad 0 = \mathbf{L} \lambda, \qquad 0 \in \mathbf{b} - \mathbf{A} x - \mathbf{L} z + N_{\mathbb{R}^{Nm}_{\geq 0}}(\lambda).$$
It follows from Lagrangian duality that the variational problem has an optimal solution $x^* = (x_i^*, x_{-i}^*)$ only if there exists $\lambda_g^* \in \mathbb{R}^{m}$ satisfying the KKT conditions (13).
Lemma 1. 
If Assumptions 1–3 hold, then (13) implies (7).
Let $v = \operatorname{col}(x, z, \lambda)$ and consider the following operator:
$$\mathcal{A}: \begin{pmatrix} x \\ z \\ \lambda \end{pmatrix} \mapsto \begin{pmatrix} F(x) \\ 0 \\ \mathbf{b} \end{pmatrix} + \begin{pmatrix} G(x) \\ 0 \\ N_{\mathbb{R}^{Nm}_{\geq 0}}(\lambda) \end{pmatrix} + \begin{pmatrix} \mathbf{A}^{\top} \lambda \\ \mathbf{L} \lambda \\ -\mathbf{A} x - \mathbf{L} z \end{pmatrix}.$$
We regard the iterative algorithm as a special case of the proximal-point algorithm (PPA) [16] for finding a zero of $\mathcal{A}$. The general form of the PPA can be written as
$$v_{k+1} = J_{\mathcal{A}}(v_k),$$
where $J_{\mathcal{A}} := (\mathrm{Id} + \mathcal{A})^{-1}$ denotes the resolvent of $\mathcal{A}$.
We apply the iteration rule (9) to the operator $\Phi^{-1} \mathcal{A}$, where $\Phi$ is defined as follows:
$$\Phi = \begin{pmatrix} 0 & 0 & -\mathbf{A}^{\top} \\ 0 & 0 & -\mathbf{L} \\ -\mathbf{A} & -\mathbf{L} & 0 \end{pmatrix} + \begin{pmatrix} \alpha^{-1} & 0 & 0 \\ 0 & \tau^{-1} & 0 \\ 0 & 0 & \gamma^{-1} \end{pmatrix}.$$
We choose the step sizes $\alpha = \operatorname{diag}(\alpha_i I_{n_i})$, $\tau = \operatorname{diag}(\tau_i I_m)$, and $\gamma = \operatorname{diag}(\gamma_i I_m)$ such that $\Phi \succ 0$. The next lemma provides sufficient conditions for $\Phi \succ 0$ based on Gershgorin's circle theorem.
Lemma 2. 
For any agent $i \in \mathcal{I}$ and any $\delta > 0$, the preconditioning matrix $\Phi$ in (16) is positive definite if
$$0 < \alpha_i \leq \Big( \max_{j \in \{1, \dots, n_i\}} \sum_{k=1}^{m} |[A_i]_{kj}| + \delta \Big)^{-1}, \quad 0 < \tau_i \leq (2 d_i + \delta)^{-1}, \quad 0 < \gamma_i \leq \Big( \max_{j \in \{1, \dots, m\}} \sum_{k=1}^{n_i} |[A_i]_{jk}| + 2 d_i + \delta \Big)^{-1}.$$
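A sketch of step-size selection under Gershgorin-type bounds of this kind, using the absolute column and row sums of $A_i$ as the radii; the constraint block and degree below are illustrative, not from the paper's experiments.

```python
import numpy as np

# Step sizes satisfying Gershgorin-type bounds as in Lemma 2 (sketch).
# A_i is agent i's m x n_i constraint block, d_i its weighted degree,
# and delta > 0 a design parameter.
def step_sizes(A_i, d_i, delta=0.1):
    col_sums = np.abs(A_i).sum(axis=0)   # sum_k |[A_i]_{kj}| for each column j
    row_sums = np.abs(A_i).sum(axis=1)   # sum over the n_i columns, per row
    alpha_i = 1.0 / (col_sums.max() + delta)
    tau_i = 1.0 / (2 * d_i + delta)
    gamma_i = 1.0 / (row_sums.max() + 2 * d_i + delta)
    return alpha_i, tau_i, gamma_i

A_i = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # m = 3, n_i = 2
alpha_i, tau_i, gamma_i = step_sizes(A_i, d_i=2.0)
print(alpha_i, tau_i, gamma_i)
```

Any strictly smaller positive step sizes also satisfy the bounds, so in practice one can shrink them further for robustness.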
Assuming that Φ > 0 for all chosen step sizes, we can state the following results.
Lemma 3. 
The iterative algorithm is equivalent to the following:
$$(\forall k \in \mathbb{N}) \quad v_{k+1} = J_{\Phi^{-1} \mathcal{A}}(v_k),$$
where $\mathcal{A}$ is given by (14) and $\Phi$ by (16).
Proof of Lemma 3. 
We use the definition of the inverse operator and obtain that
$$v_{k+1} = (\mathrm{Id} + \Phi^{-1} \mathcal{A})^{-1} v_k \;\Leftrightarrow\; 0 \in \Phi^{-1} \mathcal{A}(v_{k+1}) + v_{k+1} - v_k \;\Leftrightarrow\; 0 \in \Phi (v_{k+1} - v_k) + \mathcal{A}(v_{k+1}).$$
Substituting $\mathcal{A}$ and $\Phi$ into (19) and simplifying, we obtain
$$x_{k+1} + \alpha F(x_{k+1}) + \alpha G(x_{k+1}) \ni x_k - \alpha \mathbf{A}^{\top} \lambda_k, \qquad z_{k+1} = z_k - \tau \mathbf{L} \lambda_k,$$
$$\lambda_{k+1} + N_{\mathbb{R}^{Nm}_{\geq 0}}(\lambda_{k+1}) \ni \lambda_k - \gamma \mathbf{A} x_k - \gamma \mathbf{L} z_k + 2\gamma \mathbf{A} x_{k+1} + 2\gamma \mathbf{L} z_{k+1} - \gamma \mathbf{b}.$$
Using the definition of the proximal operator [16], we can write (20) as
$$x_{k+1} = \operatorname*{arg\,min}_{x} \; \tfrac{1}{2} \| x - x_k \|^2 + \langle x, \alpha \mathbf{A}^{\top} \lambda_k \rangle + \alpha f(x) + \alpha g(x), \qquad z_{k+1} = z_k - \tau \mathbf{L} \lambda_k,$$
$$\lambda_{k+1} = \operatorname{proj}_{\mathbb{R}^{Nm}_{\geq 0}} \big( \lambda_k - \gamma \mathbf{A} x_k - \gamma \mathbf{L} z_k + 2\gamma \mathbf{A} x_{k+1} + 2\gamma \mathbf{L} z_{k+1} - \gamma \mathbf{b} \big).$$
   □

4. Distributed Algorithm with Partial Information

In a distributed setting, it is impractical for each agent to access the decisions of all other agents, which would require a central coordinator to collect and deliver information from all participants, as assumed in the previous section. In this section, the global-information assumptions of Section 3 are relaxed: each participant iteratively updates its decision, multipliers, and auxiliary variables using only its own local information and the estimates exchanged with its neighbors. Therefore, this section proposes a distributed GNE-seeking algorithm under partial information based on the preconditioned proximal-point algorithm.

4.1. Algorithm Development

This section presents an algorithm for GNE-seeking in game (3) in a fully distributed manner.
Each agent $i$ has a cost function $J_i$ and a feasible set $\Omega_i$ but does not know the full decision profile $x_{-i}$ of the other agents. Agent $i$ can only exchange information with its neighboring agents over a network $\mathcal{G}(\mathcal{I}, \mathcal{E})$; the edge $(i,j)$ belongs to $\mathcal{E}$ if agents $i$ and $j$ can exchange information mutually. Let $x^i := \operatorname{col}(x^i_j)_{j \in \mathcal{I}} \in \mathbb{R}^{n}$ denote agent $i$'s local estimate of the full decision profile, where $x^i_j$ is agent $i$'s estimate of agent $j$'s decision. Then $x^i_i := x_i$, and $x^i_{-i} := \operatorname{col}(x^i_j)_{j \in \mathcal{I} \setminus \{i\}}$ collects agent $i$'s estimates of all other agents. If $x^i = x^j$ for all $i, j$, we can replace the cost function of agent $i$ with $J_i(x^i_i, x^i_{-i})$. Then we equivalently transform the game (3) into the following:
$$\min_{x^i_i} \; J_i(x^i_i, x^i_{-i}) = f_i(x^i_i, x^i_{-i}) + g_i(x^i_i) \quad \text{s.t.} \quad x^i_i \in \tilde{X}_i, \quad x^i = x^j, \;\; \forall j \in \mathcal{I}.$$
It is worth noting that problems (3) and (22) are equivalent under a certain condition. We will explain this point below.
Let $\tilde{X}_i = \{ x \in \Omega_i \mid A_i x + \sum_{j \neq i} A_j x^i_j \leq b \}$ be the set of feasible solutions for agent $i$ under the consistency constraint. Then agent $i$'s cost function $J_i$ depends only on its own local information $x^i$.
We introduce matrices R i and S i to develop a distributed algorithm that uses partial decision information for game (22).
$$R_i := \begin{pmatrix} 0_{n_i \times n_{<i}} & I_{n_i} & 0_{n_i \times n_{>i}} \end{pmatrix}, \qquad S_i := \begin{pmatrix} I_{n_{<i}} & 0_{n_{<i} \times n_i} & 0_{n_{<i} \times n_{>i}} \\ 0_{n_{>i} \times n_{<i}} & 0_{n_{>i} \times n_i} & I_{n_{>i}} \end{pmatrix},$$
where $n_{<i} := \sum_{j=1}^{i-1} n_j$ and $n_{>i} := \sum_{j=i+1}^{N} n_j$. Then $R_i x^i = x^i_i = x_i$ and $S_i x^i = x^i_{-i}$. Let $R := \operatorname{diag}(R_i)_{i \in \mathcal{I}}$, $S := \operatorname{diag}(S_i)_{i \in \mathcal{I}}$, and $\hat{x} := \operatorname{col}(x^i)_{i \in \mathcal{I}}$. Hence $R \hat{x} = x$ and $S \hat{x} = \operatorname{col}(x^i_{-i})_{i \in \mathcal{I}} \in \mathbb{R}^{(N-1)n}$. Moreover, $\hat{x} = R^{\top} R \hat{x} + S^{\top} S \hat{x}$.
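The selection matrices $R_i$ and $S_i$ can be built explicitly; the block dimensions below are illustrative.

```python
import numpy as np

# R_i picks agent i's own block out of its estimate vector x^i;
# S_i picks all the remaining blocks, following (23)-(24).
def selection_matrices(dims, i):
    n = sum(dims)
    n_lt = sum(dims[:i])          # n_{<i}
    n_i = dims[i]
    R_i = np.zeros((n_i, n))
    R_i[:, n_lt:n_lt + n_i] = np.eye(n_i)
    # deleting the rows of block i from the identity yields S_i
    S_i = np.delete(np.eye(n), range(n_lt, n_lt + n_i), axis=0)
    return R_i, S_i

dims = [2, 1, 3]                  # n_1, n_2, n_3 (illustrative)
x_hat_1 = np.arange(6.0)          # agent 1's estimate of all decisions
R_1, S_1 = selection_matrices(dims, 0)
print(R_1 @ x_hat_1)              # agent 1's own decision block
print(S_1 @ x_hat_1)              # its estimates of the others
```

Stacking these blocks as `scipy.linalg.block_diag(R_1, ..., R_N)` would give the matrix $R$ used in the stacked iteration.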
Similarly to (13), we write sufficient conditions for (22) as follows:
$$0 \in R^{\top} \hat{F}(\hat{x}^*) + R^{\top} G(R \hat{x}^*) + R^{\top} \mathbf{A}^{\top} \lambda^* + c \hat{\mathbf{L}} \hat{x}^*, \qquad 0 = \mathbf{L} \lambda^*, \qquad 0 \in \mathbf{b} - \mathbf{A} R \hat{x}^* - \mathbf{L} z^* + N_{\mathbb{R}^{Nm}_{\geq 0}}(\lambda^*),$$
where $\hat{\mathbf{L}} = L \otimes I_n \in \mathbb{R}^{Nn \times Nn}$, $R = \operatorname{diag}(R_i)$, $\hat{F}(\hat{x}) = \operatorname{col}(\nabla_{x_i} f_i(x^i_i, x^i_{-i}))_{i \in \mathcal{I}} \in \mathbb{R}^{n}$, and $G(R\hat{x}) = \operatorname{col}(\partial g_i(x^i_i))_{i \in \mathcal{I}}$. Moreover, $c > 0$ is a parameter associated with the dual variable of the consensus constraint $0 = \hat{\mathbf{L}} \hat{x}^*$.
Define the following operator and preconditioning matrix:
$$\bar{\mathcal{A}}: \begin{pmatrix} \hat{x} \\ z \\ \lambda \end{pmatrix} \mapsto \begin{pmatrix} R^{\top} \hat{F}(\hat{x}) + R^{\top} G(R \hat{x}) + R^{\top} \mathbf{A}^{\top} \lambda + c \hat{\mathbf{L}} \hat{x} \\ \mathbf{L} \lambda \\ \mathbf{b} - \mathbf{A} R \hat{x} - \mathbf{L} z + N_{\mathbb{R}^{Nm}_{\geq 0}}(\lambda) \end{pmatrix},$$
$$\Phi = \begin{pmatrix} c \hat{\mathbf{W}} & 0 & -R^{\top} \mathbf{A}^{\top} \\ 0 & 0 & -\mathbf{L} \\ -\mathbf{A} R & -\mathbf{L} & 0 \end{pmatrix} + \begin{pmatrix} \alpha^{-1} & 0 & 0 \\ 0 & \tau^{-1} & 0 \\ 0 & 0 & \gamma^{-1} \end{pmatrix}.$$
The variables $\hat{x}$, $z$, and $\lambda$ satisfy condition (25) if $0 \in \bar{\mathcal{A}}(v)$, where $v = \operatorname{col}(\hat{x}, z, \lambda)$.
Lemma 4. 
Algorithm 1 is equivalent to the following:
$$(\forall k \in \mathbb{N}) \quad v_{k+1} = J_{\Phi^{-1} \bar{\mathcal{A}}}(v_k),$$
with $\bar{\mathcal{A}}$ defined in (26) and $\Phi$ in (27). Then, Algorithm 1 generates a sequence $(\hat{x}_k, z_k, \lambda_k)$, $k \in \mathbb{N}$, for any initial condition $v_0 = \operatorname{col}(\hat{x}_0, z_0, \lambda_0)$.
Proof of Lemma 4. 
By applying the definition of the inverse operation, we obtain
$$v_{k+1} = (\mathrm{Id} + \Phi^{-1} \bar{\mathcal{A}})^{-1} v_k \;\Leftrightarrow\; 0 \in \Phi (v_{k+1} - v_k) + \bar{\mathcal{A}}(v_{k+1}),$$
which reads componentwise as
$$0 \in \alpha^{-1}(\hat{x}_{k+1} - \hat{x}_k) + c \hat{\mathbf{W}}(\hat{x}_{k+1} - \hat{x}_k) + c \hat{\mathbf{L}} \hat{x}_{k+1} - R^{\top} \mathbf{A}^{\top} (\lambda_{k+1} - \lambda_k) + R^{\top} \hat{F}(\hat{x}_{k+1}) + R^{\top} G(R \hat{x}_{k+1}) + R^{\top} \mathbf{A}^{\top} \lambda_{k+1},$$
$$0 = \tau^{-1}(z_{k+1} - z_k) - \mathbf{L}(\lambda_{k+1} - \lambda_k) + \mathbf{L} \lambda_{k+1},$$
$$0 \in \gamma^{-1}(\lambda_{k+1} - \lambda_k) + N_{\mathbb{R}^{Nm}_{\geq 0}}(\lambda_{k+1}) + \mathbf{b} - \mathbf{A} R (2\hat{x}_{k+1} - \hat{x}_k) - \mathbf{L}(2 z_{k+1} - z_k).$$
By $\hat{\mathbf{L}} = \hat{\mathbf{D}} - \hat{\mathbf{W}}$, $R \alpha R^{\top} = \tilde{\alpha}$, $S \alpha R^{\top} = 0$, $R \alpha \hat{\mathbf{L}} = \tilde{\alpha} R \hat{\mathbf{L}}$, and $S \alpha \hat{\mathbf{L}} = \hat{\alpha} \hat{S}$, we have
$$0 \in S \big[ (I + c \hat{\alpha} \hat{\mathbf{D}}) \hat{x}_{k+1} - \hat{x}_k - c \hat{\alpha} \hat{\mathbf{W}} \hat{x}_k \big],$$
$$0 \in R \big[ (I + c \tilde{\alpha} \hat{\mathbf{D}}) \hat{x}_{k+1} - \hat{x}_k - c \tilde{\alpha} \hat{\mathbf{W}} \hat{x}_k \big] + \tilde{\alpha} \hat{F}(\hat{x}_{k+1}) + \tilde{\alpha} G(R \hat{x}_{k+1}) + \tilde{\alpha} \mathbf{A}^{\top} \lambda_k,$$
or, componentwise for each $i \in \mathcal{I}$,
$$x^i_{-i,k+1} = \frac{1}{1 + c \alpha_i d_i} \Big( x^i_{-i,k} + c \alpha_i \sum_{j=1}^{N} w_{ij} x^j_{-i,k} \Big),$$
$$x_{i,k+1} \in \operatorname*{arg\,min}_{x} \Big( \tfrac{1}{2} \| x - x_{i,k} \|^2 + \alpha_i x^{\top} A_i^{\top} \lambda_{i,k} + \tfrac{c}{2} \alpha_i d_i \Big\| x - \tfrac{1}{d_i} \sum_{j=1}^{N} w_{ij} x^j_{i,k+1} \Big\|^2 + \alpha_i f_i(x, x^i_{-i,k+1}) + \alpha_i g_i(x) \Big).$$
Hence, by the property that a convex function is minimized exactly where zero belongs to its subdifferential [16], we can reformulate (30) as follows:
$$\forall i \in \mathcal{I}: \quad x^i_{-i,k+1} = \frac{1}{1 + \tau_i d_i} \Big( x^i_{-i,k} + \tau_i \sum_{j=1}^{N} w_{i,j} x^j_{-i,k} \Big),$$
$$x_{i,k+1} = \operatorname*{arg\,min}_{x} \Big( \tfrac{1}{2}\|x - x_{i,k}\|^2 + \alpha_i x^{\top} A_i^{\top} \lambda_{i,k} + \alpha_i f_i(x, x^i_{-i,k+1}) + \alpha_i g_i(x) + \tfrac{c}{2} \alpha_i d_i \Big\| x - \tfrac{1}{d_i} \sum_{j=1}^{N} w_{ij} x^j_{i,k+1} \Big\|^2 \Big),$$
$$z_{k+1} = z_k - \tau \mathbf{L} \lambda_k, \qquad \lambda_{k+1} = \operatorname{proj}_{\mathbb{R}^{Nm}_{\geq 0}} \big( \lambda_k + \gamma \mathbf{A} R (2 \hat{x}_{k+1} - \hat{x}_k) + 2 \gamma \mathbf{L} z_{k+1} - \gamma \mathbf{L} z_k - \gamma \mathbf{b} \big).$$
   □
We summarize (31) as the distributed procedure in Algorithm 1.

4.2. Convergence Analysis

In this section, we prove through a rigorous mathematical analysis that Algorithm 1 applied to game (22) converges to the variational GNE. We show that any limit point of Algorithm 1 is a zero of $\Phi^{-1} \bar{\mathcal{A}}$, and that every such zero lies in the consensus subspace and solves VI$(F, \mathcal{X})$.
Algorithm 1. Distributed Algorithm with Partial Information
Initialize: for all $i \in \mathcal{I}$, set $x_{i,0} \in \Omega_i$, $x^i_{-i,0} \in \mathbb{R}^{n - n_i}$, $\lambda_{i,0} \in \mathbb{R}^{m}_{\geq 0}$, $z_{i,0} \in \mathbb{R}^{m}$.
   for $k = 1, 2, 3, \dots$ do
       $x^i_{-i,k+1} = \frac{1}{1 + \tau_i d_i} \big( x^i_{-i,k} + \tau_i \sum_{j=1}^{N} w_{i,j} x^j_{-i,k} \big)$
       $x_{i,k+1} = \operatorname*{arg\,min}_x \big( \tfrac{1}{2}\|x - x_{i,k}\|^2 + \alpha_i x^{\top} A_i^{\top} \lambda_{i,k} + \tfrac{c}{2} \alpha_i d_i \| x - \tfrac{1}{d_i}\sum_{j=1}^{N} w_{ij} x^j_{i,k+1} \|^2$
       $\qquad\qquad + \alpha_i f_i(x, x^i_{-i,k+1}) + \alpha_i g_i(x) \big)$
       $z_{i,k+1} = z_{i,k} - \tau_i \sum_{j=1}^{N} w_{ij} (\lambda_{i,k} - \lambda_{j,k})$
       $\lambda_{i,k+1} = \operatorname{proj}_{\mathbb{R}^{m}_{\geq 0}} \big( \lambda_{i,k} + \gamma_i A_i (2 x_{i,k+1} - x_{i,k}) + \gamma_i \sum_{j=1}^{N} w_{ij} [ 2(z_{i,k+1} - z_{j,k+1}) - (z_{i,k} - z_{j,k}) ] - \gamma_i b_i \big)$
   end for
Return: the sequence $(x_{i,k})_{k \geq 1}$, which approximates the optimal solution.
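To make the update structure concrete, the toy sketch below implements one synchronous sweep of the four updates for scalar decisions ($n_i = 1$, $A_i = 1$, $g_i = 0$) and a quadratic common cost $f_i(x_i, x_{-i}) = \tfrac{1}{2} q_i x_i^2 + x_i (\beta \sum_{j \neq i} x_j - p_i)$, for which the arg min has a closed form. The cost parameters $q_i$, $p_i$, $\beta$, the budget $b$, and all step sizes are illustrative, not taken from the paper.

```python
import numpy as np

def algorithm1_sweep(X, lam, z, W, q, p, beta, b, alpha, tau, gamma, c):
    """One synchronous sweep of the four updates; X[i, j] is agent i's
    estimate of agent j's decision (diagonal entries = own decisions)."""
    d = W.sum(axis=1)
    X_old, z_old = X.copy(), z.copy()
    # 1) consensus step on the estimate vectors
    X_cons = (X_old + tau * (W @ X_old)) / (1 + tau * d)[:, None]
    # neighbors' (updated) estimates of x_i, averaged with weights w_ij
    mean = np.diag(W @ X_cons) / d
    X = X_cons.copy()
    # 2) local proximal x-update; closed form for the quadratic f_i
    for i in range(len(lam)):
        lin = beta * (X_cons[i].sum() - X_cons[i, i]) - p[i]
        X[i, i] = (X_old[i, i] - alpha * lam[i] - alpha * lin
                   + c * alpha * d[i] * mean[i]) / (1 + alpha * q[i] + c * alpha * d[i])
    # 3) auxiliary update: z_i <- z_i - tau * sum_j w_ij (lam_i - lam_j)
    z = z_old - tau * (d * lam - W @ lam)
    # 4) reflected multiplier update, projected onto the nonnegative orthant
    Lz_new, Lz_old = d * z - W @ z, d * z_old - W @ z_old
    x_new, x_old = np.diag(X), np.diag(X_old)
    lam = np.maximum(0.0, lam + gamma * (2 * x_new - x_old - b)
                     + gamma * (2 * Lz_new - Lz_old))
    return X, lam, z

N = 3
W = np.ones((N, N)) - np.eye(N)          # complete graph, unit weights
X, lam, z = np.zeros((N, N)), np.zeros(N), np.zeros(N)
q, p = np.full(N, 3.0), np.full(N, 1.0)
X, lam, z = algorithm1_sweep(X, lam, z, W, q, p, beta=0.1, b=0.5,
                             alpha=0.1, tau=0.1, gamma=0.1, c=1.0)
print(np.diag(X))   # each agent's own decision after one sweep
```

Iterating the sweep (with step sizes chosen per Lemma 8) would drive the estimates, decisions, and multipliers toward the consensus equilibrium.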
Assumption 5. 
$F$ is a Lipschitz continuous mapping: there exists a positive constant $\theta$ such that $\| F(x) - F(y) \| \leq \theta \| x - y \|$ for any $x$ and $y$.
Define the operator
$$F_c(\hat{x}) := R^{\top} \hat{F}(\hat{x}) + R^{\top} G(R \hat{x}) + c \hat{\mathbf{L}} \hat{x}.$$
From (26), we know that
$$\bar{\mathcal{A}}: \begin{pmatrix} \hat{x} \\ z \\ \lambda \end{pmatrix} \mapsto \begin{pmatrix} F_c(\hat{x}) \\ 0 \\ \mathbf{b} \end{pmatrix} + \begin{pmatrix} R^{\top} \mathbf{A}^{\top} \lambda \\ \mathbf{L} \lambda \\ -\mathbf{A} R \hat{x} - \mathbf{L} z \end{pmatrix} + \begin{pmatrix} 0 \\ 0 \\ N_{\mathbb{R}^{Nm}_{\geq 0}}(\lambda) \end{pmatrix}.$$
Lemma 5. 
Define
$$c_{\min} := \frac{1}{s_2(L)} \left( \frac{(\theta + \theta_0)^2}{4 \mu} + \theta \right), \qquad \Psi := \begin{pmatrix} \frac{\mu}{N} & -\frac{\theta + \theta_0}{2\sqrt{N}} \\ -\frac{\theta + \theta_0}{2\sqrt{N}} & c\, s_2(L) - \theta \end{pmatrix}, \qquad \mu_{F_c} := s_{\min}(\Psi),$$
where $s_2(L)$ is the second-smallest eigenvalue of $L$. When $c > c_{\min}$, we have $\mu_{F_c} = s_{\min}(\Psi) > 0$, and $\bar{\mathcal{A}}$ is restricted monotone if Assumptions 1–5 hold.
Proof of Lemma 5. 
Let $v^* = \operatorname{col}(\hat{x}^*, z^*, \lambda^*) \in \operatorname{zer}(\bar{\mathcal{A}})$ be any zero of $\bar{\mathcal{A}}$, which exists by Lemma 4. From (33), we can decompose $\bar{\mathcal{A}}$ as a sum of three operators, two of which are monotone. Moreover, for any $c > c_{\min}$ we have $\Psi \succ 0$, so that
$$\langle \hat{x} - \hat{x}^*, F_c(\hat{x}) - F_c(\hat{x}^*) \rangle \geq \mu_{F_c} \| \hat{x} - \hat{x}^* \|^2.$$
Hence, for any $(v, u) \in \operatorname{gra}(\bar{\mathcal{A}})$ with $v = \operatorname{col}(\hat{x}, z, \lambda)$, we obtain $\langle v - v^*, u - 0 \rangle \geq \mu_{F_c} \| \hat{x} - \hat{x}^* \|^2 \geq 0$. □
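Assuming $c_{\min} = \big( (\theta+\theta_0)^2/(4\mu) + \theta \big)/s_2(L)$ and the $2 \times 2$ matrix $\Psi$ of Lemma 5, the threshold behavior can be checked numerically; all constants below are illustrative.

```python
import numpy as np

# Numerical illustration of Lemma 5 (sketch): any c > c_min makes the
# 2x2 matrix Psi positive definite, while c < c_min does not.
def c_minimum(mu, theta, theta0, s2):
    return ((theta + theta0) ** 2 / (4 * mu) + theta) / s2

def psi(mu, theta, theta0, s2, N, c):
    off = -(theta + theta0) / (2 * np.sqrt(N))
    return np.array([[mu / N, off], [off, c * s2 - theta]])

mu, theta, theta0, s2, N = 1.0, 2.0, 2.0, 1.5, 4   # illustrative constants
c_min = c_minimum(mu, theta, theta0, s2)
for c in (1.1 * c_min, 2.0 * c_min):
    eigs = np.linalg.eigvalsh(psi(mu, theta, theta0, s2, N, c))
    print(eigs.min() > 0)
```

At $c = c_{\min}$ the determinant of $\Psi$ vanishes, which is exactly the boundary of restricted monotonicity in this construction.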
Lemma 6. 
Let $B: \mathbb{R}^q \to \mathbb{R}^q$ be restricted monotone in $\mathcal{H}_{\Phi}$ and let $J_B$ be firmly quasinonexpansive in $\mathcal{H}_{\Phi}$, where $\mathcal{H}_{\Phi}$ is the Hilbert space induced by the inner product $\langle \cdot, \cdot \rangle_{\Phi}$. Then for any $(v, u) \in \operatorname{gra}(J_B)$ and $v^* \in \operatorname{zer}(B) = \operatorname{fix}(J_B)$, it holds that
$$\langle v - u, v - v^* \rangle_{\Phi} - \| u - v \|_{\Phi}^2 = \langle v - u, u - v^* \rangle_{\Phi} \geq 0.$$
Proof of Lemma 6. 
We use the definition of the resolvent: $v^* = J_B(v^*) \Leftrightarrow v^* \in v^* + B(v^*) \Leftrightarrow 0 \in B(v^*)$. Moreover, for any $(v, u) \in \operatorname{gra}(J_B)$, we have $v - u \in B(u)$. Therefore, (36) follows from the restricted monotonicity of $B$ and some simple algebra. Finally, setting $v = v^*$ in (36), we obtain that $J_B$ is single-valued. □
Lemma 7. 
Let $\operatorname{zer}(B) \neq \emptyset$, with $B$ as in Lemma 6. For $k \in \mathbb{N}$, assume that $\beta_k \in [0, 2]$ and $e_k \in \mathbb{R}^q$ are such that $(\beta_k \| e_k \|)_{k \in \mathbb{N}} \in \ell^1$. For any $v_0 \in \mathbb{R}^q$, consider
$$(\forall k \in \mathbb{N}) \quad v_{k+1} = v_k + \beta_k (u_k - v_k + e_k), \qquad u_k = J_B(v_k).$$
We obtain Algorithm 1 by applying (13) to the operator $\Phi^{-1} \bar{\mathcal{A}}$, where
$$\Phi = \begin{pmatrix} c \hat{\mathbf{W}} & 0 & -R^{\top} \mathbf{A}^{\top} \\ 0 & 0 & -\mathbf{L} \\ -\mathbf{A} R & -\mathbf{L} & 0 \end{pmatrix} + \begin{pmatrix} \alpha^{-1} & 0 & 0 \\ 0 & \tau^{-1} & 0 \\ 0 & 0 & \gamma^{-1} \end{pmatrix}$$
is called a preconditioning matrix. It ensures that the agents can compute the resulting iteration in a fully distributed manner.
We choose the step sizes $\alpha = \operatorname{diag}(\alpha_i I_{n_i})$, $\tau = \operatorname{diag}(\tau_i I_m)$, and $\gamma = \operatorname{diag}(\gamma_i I_m)$ such that $\Phi \succ 0$. This implies that $\operatorname{zer}(\Phi^{-1} \bar{\mathcal{A}}) = \operatorname{zer}(\bar{\mathcal{A}})$. The next lemma provides sufficient conditions for $\Phi \succ 0$ based on Gershgorin's circle theorem.
Lemma 8. 
For any agent $i \in \mathcal{I}$ and any $\delta > 0$, the preconditioning matrix $\Phi$ in (38) is positive definite if
$$0 < \alpha_i \leq \Big( \max_{j \in \{1, \dots, n_i\}} \sum_{k=1}^{m} |[A_i]_{kj}| + \delta \Big)^{-1}, \quad 0 < \tau_i \leq (2 d_i + \delta)^{-1}, \quad 0 < \gamma_i \leq \Big( \max_{j \in \{1, \dots, m\}} \sum_{k=1}^{n_i} |[A_i]_{jk}| + 2 d_i + \delta \Big)^{-1}.$$
Using Lemma 4 and Assumption 4, we can show that J Φ 1 A ¯ is single-valued by applying Equation (28) to Φ 1 A ¯ .
Lemma 9. 
Let $c > c_{\min}$, with $c_{\min}$ as in Lemma 5. Then $\Phi^{-1} \bar{\mathcal{A}}$ is restricted monotone in $\mathcal{H}_{\Phi}$.
This means that finding a zero of $\bar{\mathcal{A}}$ is equivalent to finding a variational GNE for problem (3). Moreover, since $\Phi \succ 0$, we have $\operatorname{zer}(\bar{\mathcal{A}}) = \operatorname{zer}(\Phi^{-1} \bar{\mathcal{A}})$ by Lemma 9, so finding a zero of $\Phi^{-1} \bar{\mathcal{A}}$ is also equivalent to finding a variational GNE for problem (3). This establishes the equivalence between problems (3) and (22) under the condition $c > c_{\min}$.
Theorem 1. 
Assume that $c > c_{\min}$, with $c_{\min}$ as defined in Lemma 5, and that the step sizes $\alpha$, $\tau$, $\gamma$ satisfy Lemma 8. Then, Algorithm 1 generates a sequence $(\hat{x}_k, z_k, \lambda_k)_{k \in \mathbb{N}}$ that converges to an equilibrium $(\hat{x}^*, z^*, \lambda^*)$.
Proof of Theorem 1. 
The set of inclusions (29) is equivalent to Equation (37) when $\beta_k = 1$ and $e_k = 0$ for all $k \in \mathbb{N}$. By Lemma 9, $\Phi^{-1} \bar{\mathcal{A}}$ is restricted monotone in $\mathcal{H}_{\Phi}$. Define
$$u_k = J_{\Phi^{-1} \bar{\mathcal{A}}}(v_k),$$
so that $u_k = v_{k+1}$.
Let $h_k := v_k + \beta_k (u_k - v_k)$, so that $v_{k+1} = h_k + \beta_k e_k$. For all $k \in \mathbb{N}$,
$$\| h_k - v^* \|_{\Phi}^2 = \| v_k - v^* \|_{\Phi}^2 - 2 \beta_k \langle v_k - u_k, v_k - v^* \rangle_{\Phi} + \beta_k^2 \| u_k - v_k \|_{\Phi}^2.$$
Then, by Lemma 7, we have
$$\| h_k - v^* \|_{\Phi}^2 \leq \| v_k - v^* \|_{\Phi}^2 - \beta_k (2 - \beta_k) \| u_k - v_k \|_{\Phi}^2.$$
By the Cauchy–Schwarz inequality, it holds that
$$\| h_k - v^* \|_{\Phi} \leq \| v_k - v^* \|_{\Phi}.$$
By (43), the sequence $(v_k)_{k \in \mathbb{N}}$ is bounded, so it has at least one cluster point $\bar{v}$. Define
$$\bar{\mathcal{A}}_1: \begin{pmatrix} \hat{x} \\ z \\ \lambda \end{pmatrix} \mapsto \begin{pmatrix} F_c(\hat{x}) \\ 0 \\ \mathbf{b} \end{pmatrix} + \begin{pmatrix} R^{\top} \mathbf{A}^{\top} \lambda \\ \mathbf{L} \lambda \\ -\mathbf{A} R \hat{x} - \mathbf{L} z \end{pmatrix},$$
i.e., $\bar{\mathcal{A}}_1$ is the sum of the first two terms of $\bar{\mathcal{A}}$. By (29) and (44), it holds that
$$\langle \bar{\mathcal{A}}_1(u_k) + \Phi(u_k - v_k), v - u_k \rangle \geq 0.$$
Let $\eta := \sup_{k \in \mathbb{N}} \| v_k - v^* \|_{\Phi} < \infty$ and $\epsilon_k := 2 \eta \beta_k \| e_k \|_{\Phi} + \beta_k^2 \| e_k \|_{\Phi}^2$ for all $k \in \mathbb{N}$. Then $(\epsilon_k)_{k \in \mathbb{N}} \in \ell^1$. Moreover, for all $k \in \mathbb{N}$ we have
$$\| v_{k+1} - v^* \|_{\Phi}^2 \leq \big( \| h_k - v^* \|_{\Phi} + \beta_k \| e_k \|_{\Phi} \big)^2 \leq \| v_k - v^* \|_{\Phi}^2 - \beta_k (2 - \beta_k) \| u_k - v_k \|_{\Phi}^2 + \epsilon_k.$$
By recursion, it holds that
$$\big( \beta_k (2 - \beta_k) \| u_k - v_k \|_{\Phi}^2 \big)_{k \in \mathbb{N}} \in \ell^1.$$
By (47), $u_k - v_k$ converges to $0$ as $k \to \infty$. Let $(l_k)$ be a subsequence such that $v_{l_k} \to \bar{v}$. Then, for any $v \in \Omega \times \mathbb{R}^{m} \times \mathbb{R}^{m}_{\geq 0}$, we have $\langle \bar{\mathcal{A}}_1(\bar{v}), v - \bar{v} \rangle \geq 0$ by the continuity of $\bar{\mathcal{A}}_1$, which means $\bar{v} \in \operatorname{zer}(\bar{\mathcal{A}}_1) = \operatorname{fix}(J_{\Phi^{-1} \bar{\mathcal{A}}_1})$. By Lemma 6, any cluster point of $(v_k)$ belongs to this fixed-point set. Hence, $v_k$ converges to an equilibrium of (29). □

5. Numerical Studies

In this section, we explore a networked Nash–Cournot game [21], which models the competition among $N$ companies in $m$ markets and has been shown to be an effective tool for analyzing such competition. The markets impose shared affine constraints (equivalently, global coupling affine constraints), which makes it possible to model the interactions among the companies realistically. Note that the agents can communicate only with their neighbors; there is no central node with bidirectional communication to all participants, which makes the game more challenging, as each agent must rely on local resources to make its decisions.
Although this game has been studied in networked Cournot settings [15], those works assume that the agents' decision information is global. In contrast, our work considers the network structure and partial decision information, which yields a more accurate representation of the real-world dynamics of competition among companies. Incorporating these features provides deeper insight into the complex interactions among the agents, which can have significant implications for the overall performance of the companies in the markets. Additionally, our research can help inform future policy decisions, as it provides a more accurate picture of competition dynamics in networked markets.

5.1. Cournot Market Competition

Each firm i decides the quantity x_i ∈ R^{n_i} of the commodity for n_i ≤ m markets, subject to 0 ≤ x_i ≤ X_i. The maximum capacity of each market l = 1, …, m is r_l. Hence, we have a shared affine constraint Ax ≤ r, where r = col(r_l)_{l=1,…,m} and A = [A_1, …, A_N]. The matrix A_i ∈ R^{m×n_i} indicates which markets firm i enters: [A_i]_{l,j} = 1 if firm i delivers its j-th product to market l, and [A_i]_{l,j} = 0 otherwise, for all j = 1, …, n_i and l = 1, …, m. The cost function (negative profit) of each firm i is J_i(x_i, x_{−i}) = f_i(x_i, x_{−i}) + g_i(x_i), where f_i(x_i, x_{−i}) = −p(Ax)^⊤ A_i x_i is the negative revenue and g_i(x_i) = x_i^⊤ Q_i x_i + q_i^⊤ x_i is the production cost of firm i, with given parameters Q_i ∈ R^{n_i×n_i}, Q_i ≻ 0, and q_i ∈ R^{n_i}. The price in each market l = 1, …, m is given by [p(x)]_l = P̄_l − χ_l [Ax]_l, where P̄_l, χ_l > 0 are constants.
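The price and cost structure above can be sketched in code as follows. This is a minimal illustration under our own assumptions (function names, NumPy representation, and the small test dimensions are ours), not the paper's implementation.

```python
import numpy as np

def price(x, A, P_bar, chi):
    """Linear inverse demand: [p(x)]_l = P_bar_l - chi_l * [A x]_l."""
    return P_bar - chi * (A @ x)  # A @ x is the aggregate supply per market

def firm_cost(x_i, x, A, A_i, P_bar, chi, Q_i, q_i):
    """Firm i's cost J_i = g_i(x_i) - p(Ax)^T A_i x_i (production cost minus revenue)."""
    g_i = x_i @ Q_i @ x_i + q_i @ x_i      # quadratic production cost
    revenue = price(x, A, P_bar, chi) @ (A_i @ x_i)
    return g_i - revenue
```

For example, with two single-product firms both selling in market 1, the per-market prices and each firm's cost follow directly from these two functions.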
In this simulation, we set the total number of participants to 20 and the number of markets to 7, i.e., N = 20 and m = 7. The market structure is defined in Figure 1a, which does not depict the actual spatial relationships and distances among markets and firms; the arrows represent only the participation of firms in the markets. We also set n_i = 1 for all i ∈ I, so x = col(x_i)_{i∈I} ∈ R^n.
We considered m markets distributed across seven continents. Individual firms cannot communicate with all other firms because of geographical location, communication technology, or company structure; each firm can communicate only with its neighbors on the undirected communication graph G_c shown in Figure 1b, i.e., only connected firms i and j can exchange information. We randomly selected r_l ∈ [1, 2], the diagonal elements of Q_i in [1, 8], q_i ∈ [1, 2], P̄_l ∈ [10, 20], χ_l ∈ [1, 3], and X_i ∈ [5, 10] for all i ∈ I and l = 1, …, m.
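The random experimental setup can be reproduced along these lines; the random generator and seed are our own assumptions, so the sampled values will differ from those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(seed=42)   # seed is an arbitrary choice
N, m = 20, 7                           # 20 firms, 7 markets (n_i = 1 for all i)

r = rng.uniform(1, 2, size=m)          # market capacities r_l in [1, 2]
Q = rng.uniform(1, 8, size=N)          # diagonal production-cost weights of Q_i
q = rng.uniform(1, 2, size=N)          # linear cost coefficients q_i
P_bar = rng.uniform(10, 20, size=m)    # price intercepts
chi = rng.uniform(1, 3, size=m)        # price slopes
X_max = rng.uniform(5, 10, size=N)     # local capacity bounds X_i
```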

5.2. Numerical Results

The experimental settings described above satisfy all assumptions presented in [21]. We selected the step size in Lemma 8 to fulfill all conditions required by Theorem 1. To compare the performance of Algorithm 1 and the algorithm in [21], we conducted experiments using the same random initial condition for both algorithms. As shown in Figure 2 and Figure 3, Algorithm 1 proposed in this paper converges faster than the algorithm in [21], which is referred to as Algorithm 2 in the following text.
Figure 2 compares the convergence of the two algorithms under the partial decision setting by plotting the relative error of their decisions. The results show that Algorithm 1 has smaller relative errors than Algorithm 2 (the algorithm in [21]) for the same number of iterations, indicating a faster convergence rate for Algorithm 1.
Figure 3 illustrates the trajectory of the total cost of all agents in the market corresponding to Algorithm 1 and Algorithm 2 (the algorithm in [21]), where the total cost is generated by i = 1 N J i x i , x i . The trajectory eventually converges to the same minimum value, indicating the correctness and accuracy of Algorithm 1. Furthermore, the convergence of the trajectory represents the effectiveness of the algorithm in terms of optimizing the total cost of all agents in the market.
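The two performance measures plotted in Figures 2 and 3 can be computed as below; here `cost_fns` is a hypothetical list of per-agent cost callables, each evaluating J_i at the full decision profile.

```python
import numpy as np

def relative_error(x_k, x_star):
    """||x_k - x*|| / ||x*||, the metric plotted in Figure 2."""
    return np.linalg.norm(x_k - x_star) / np.linalg.norm(x_star)

def total_cost(x, cost_fns):
    """sum_i J_i(x_i, x_{-i}), the quantity plotted in Figure 3."""
    return sum(J(x) for J in cost_fns)
```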
Figure 4 shows the trajectory of each agent’s decision in the Cournot market game solved by Algorithm 1. The decision trajectories of all agents converge, indicating that the GNE obtained by Algorithm 1 effectively minimizes the cost incurred by each agent. Since the convergence of decision trajectories is a standard criterion for evaluating such algorithms, Figure 4 suggests that Algorithm 1 is a viable approach for solving similar problems.
During the iterative process of Algorithm 1, each agent first updates its estimates of the other agents’ decisions according to x_{i,k+1}^i = (1/(1 + τ_i d_i)) ( x_{i,k}^i + τ_i Σ_{j=1}^N w_{i,j} x_{i,k}^j ). The estimated decision values are then used in the iterations of x, z, and λ. When the algorithm converges, i.e., when the GNE is obtained, each agent’s estimate of x_i equals the actual decision x_i, i.e., x_i^j = x_i^i for j = 1, …, N. Figure 5a shows the trajectory of the standard deviation of the agents’ estimates of x_i over the iterations, computed as sqrt( (1/N) Σ_{j=1}^N ( x_i^j − (1/N) Σ_{j=1}^N x_i^j )^2 ). The standard deviation of the estimates of each agent’s decision eventually converges to 0, confirming that the estimates coincide with the actual decisions. Figure 5b takes agent 3 as an example and plots all agents’ estimates of agent 3’s decision, where x_3^3 denotes agent 3’s actual decision. All trajectories converge to the same value, visually confirming the accuracy of Algorithm 1.
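A minimal sketch of this estimate-averaging step, assuming scalar decisions (n_i = 1) and our own matrix rearrangement of the per-agent formula: the estimates are stored in an N×N matrix `X` where `X[j, i]` is agent j's estimate of agent i's decision, and each agent mixes its estimate row with those of its neighbors.

```python
import numpy as np

def estimate_step(X, W, tau):
    """One averaging step of the estimates.
    X[j, i]: agent j's current estimate of agent i's (scalar) decision.
    W: weighted adjacency matrix of G_c; d_j = sum_l W[j, l].
    Row j becomes (X[j] + tau_j * sum_l W[j, l] X[l]) / (1 + tau_j d_j)."""
    d = W.sum(axis=1)
    return (X + tau[:, None] * (W @ X)) / (1.0 + tau * d)[:, None]
```

On a connected graph, repeating this step drives the column-wise standard deviation (the quantity plotted in Figure 5a) toward zero; on a complete graph with τ_i = 1 it reaches consensus in a single step.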

6. Conclusions

In this paper, we proposed a distributed algorithm for games with shared coupling constraints based on the preconditioned proximal-point algorithm under partial decision information. The algorithm converges with a fixed step size on arbitrarily connected graphs and was successfully applied to GNE computation in the Cournot market competition under partial decision information, with a relatively fast convergence rate. A possible direction for future work is to study partial decision information sets and their impact on the memory efficiency of the algorithm; in particular, we will explore predicting only a subset of agents’ decisions rather than all of them, and study the resulting convergence properties.

Author Contributions

Methodology, H.L. and Y.S.; Software, Z.W., M.C. and J.C.; Formal analysis, Z.W., H.L. and M.C.; Investigation, Z.W. and Y.S.; Resources, Z.W. and Y.S.; Data curation, Z.W., H.L. and J.T.; Writing—original draft, Z.W.; Supervision, Y.S.; Project administration, Z.W. and Y.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Saad, W.; Han, Z.; Poor, H.V.; Basar, T. Game-theoretic methods for the smart grid: An overview of microgrid systems, demand-side management, and smart grid communications. IEEE Signal Process. Mag. 2012, 29, 86–105. [Google Scholar] [CrossRef]
  2. Li, N.; Chen, L.; Dahleh, M.A. Demand response using linear supply function bidding. IEEE Trans. Smart Grid 2015, 6, 1827–1838. [Google Scholar] [CrossRef]
  3. Grammatico, S. Dynamic control of agents playing aggregative games with coupling constraints. IEEE Trans. Autom. Control 2017, 62, 4537–4548. [Google Scholar] [CrossRef]
  4. Debreu, G. A social equilibrium existence theorem. Proc. Natl. Acad. Sci. USA 1952, 38, 886–893. [Google Scholar] [CrossRef] [PubMed]
  5. Rosen, J.B. Existence and uniqueness of equilibrium points for concave n-person games. Econom. J. Econom. Soc. 1965, 33, 520–534. [Google Scholar] [CrossRef]
  6. Facchinei, F.; Kanzow, C. Generalized Nash equilibrium problems. Ann. Oper. Res. 2010, 175, 177–211. [Google Scholar] [CrossRef]
  7. Mastroeni, G.; Pappalardo, M.; Raciti, F. Generalized Nash equilibrium problems and variational inequalities in Lebesgue spaces. Minimax Theory Appl. 2020, 5, 47–64. [Google Scholar]
  8. Yin, H.; Shanbhag, U.V.; Mehta, P.G. Nash equilibrium problems with congestion costs and shared constraints. In Proceedings of the 48h IEEE Conference on Decision and Control (CDC) Held Jointly with 2009 28th Chinese Control Conference, Shanghai, China, 15–18 December 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 4649–4654. [Google Scholar]
  9. Zhu, M.; Frazzoli, E. Distributed robust adaptive equilibrium computation for generalized convex games. Automatica 2016, 63, 82–91. [Google Scholar] [CrossRef]
  10. Tatarenko, T.; Kamgarpour, M. Learning generalized Nash equilibria in a class of convex games. IEEE Trans. Autom. Control 2018, 64, 1426–1439. [Google Scholar] [CrossRef]
  11. Paccagnan, D.; Gentile, B.; Parise, F.; Kamgarpour, M.; Lygeros, J. Distributed computation of generalized Nash equilibria in quadratic aggregative games with affine coupling constraints. In Proceedings of the 2016 IEEE 55th Conference on Decision and Control (CDC), Las Vegas, NV, USA, 12–14 December 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 6123–6128. [Google Scholar]
  12. Belgioioso, G.; Grammatico, S. Semi-decentralized Nash equilibrium seeking in aggregative games with separable coupling constraints and non-differentiable cost functions. IEEE Control. Syst. Lett. 2017, 1, 400–405. [Google Scholar] [CrossRef]
  13. Belgioioso, G.; Grammatico, S. Projected-gradient algorithms for generalized equilibrium seeking in aggregative games are preconditioned forward-backward methods. In Proceedings of the 2018 European Control Conference (ECC), Limassol, Cyprus, 12–15 June 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 2188–2193. [Google Scholar]
  14. Parise, F.; Gentile, B.; Lygeros, J. A distributed algorithm for average aggregative games with coupling constraints. IEEE Trans. Control. Netw. Syst. 2020, 7, 770–782. [Google Scholar] [CrossRef]
  15. Yi, P.; Pavel, L. An operator splitting approach for distributed generalized Nash equilibria computation. Automatica 2019, 102, 111–121. [Google Scholar] [CrossRef]
  16. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: Berlin/Heidelberg, Germany, 2011; Volume 408. [Google Scholar]
  17. Yi, P.; Pavel, L. Asynchronous distributed algorithms for seeking generalized Nash equilibria under full and partial-decision information. IEEE Trans. Cybern. 2019, 50, 2514–2526. [Google Scholar] [CrossRef] [PubMed]
  18. Passacantando, M.; Raciti, F. A note on generalized Nash games played on networks. In Nonlinear Analysis, Differential Equations, and Applications; Springer: Berlin/Heidelberg, Germany, 2021; pp. 365–380. [Google Scholar]
  19. Bianchi, M.; Belgioioso, G.; Grammatico, S. Fast generalized Nash equilibrium seeking under partial-decision information. Automatica 2022, 136, 110080. [Google Scholar] [CrossRef]
  20. Yi, P.; Pavel, L. Distributed generalized Nash equilibria computation of monotone games via double-layer preconditioned proximal-point algorithms. IEEE Trans. Control. Netw. Syst. 2018, 6, 299–311. [Google Scholar] [CrossRef]
  21. Pavel, L. Distributed GNE seeking under partial-decision information over networks via a doubly-augmented operator splitting approach. IEEE Trans. Autom. Control. 2019, 65, 1584–1597. [Google Scholar] [CrossRef]
  22. Gadjov, D.; Pavel, L. A passivity-based approach to Nash equilibrium seeking over networks. IEEE Trans. Autom. Control. 2018, 64, 1077–1092. [Google Scholar] [CrossRef]
  23. Salehisadaghiani, F.; Shi, W.; Pavel, L. Distributed Nash equilibrium seeking under partial-decision information via the alternating direction method of multipliers. Automatica 2019, 103, 27–35. [Google Scholar] [CrossRef]
Figure 1. (a) Network Nash–Cournot game. (b) Communication graph G c .
Figure 2. Relative error x k x * / x * plot generated by Algorithm 1 and Algorithm 2 (the algorithm in [21]).
Figure 3. The total cost of all agents generated by Algorithm 1 and Algorithm 2 (the algorithm in [21]).
Figure 4. Trajectories of every agent’s decision x i , k generated by Algorithm 1.
Figure 5. (a) Trajectories of the standard deviation of agents’ estimations of x i , k generated by Algorithm 1. (b) Trajectories of agents’ estimations of x 3 , k generated by Algorithm 1.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Wang, Z.; Li, H.; Chen, M.; Tang, J.; Cheng, J.; Shi, Y. Distributed GNE-Seeking under Partial Information Based on Preconditioned Proximal-Point Algorithms. Appl. Sci. 2023, 13, 6405. https://doi.org/10.3390/app13116405

