Article

Reformulation and Enhancement of Distributed Robust Optimization Framework Incorporating Decision-Adaptive Uncertainty Sets

1 School of Mathematics, Liaoning Normal University, Dalian 116029, China
2 Department of Basic Courses Teaching, Dalian Polytechnic University, Dalian 116034, China
* Author to whom correspondence should be addressed.
Axioms 2024, 13(10), 699; https://doi.org/10.3390/axioms13100699
Submission received: 28 August 2024 / Revised: 6 October 2024 / Accepted: 7 October 2024 / Published: 8 October 2024

Abstract:
Distributionally robust optimization (DRO) is an advanced framework within the realm of optimization theory that addresses scenarios where the underlying probability distribution governing the data is uncertain or ambiguous. In this paper, we introduce a novel class of DRO challenges where the probability distribution of random variables is contingent upon the decision variables, and the ambiguity set is defined through parameterization involving the mean and a covariance matrix, which also depend on the decision variables. This dependency makes DRO difficult to solve directly; therefore, first, we demonstrate that under the condition of a full-space support set, the original problem can be reduced to a second-order cone programming (SOCP) problem. Subsequently, we solve this second-order cone programming problem using a projection differential equation approach. Compared with the traditional methods, the differential equation method offers advantages in providing continuous and smooth solutions, offering inherent stability analysis, and possessing a rich mathematical toolbox, which make the differential equation a powerful and versatile tool for addressing complex optimization challenges.

1. Introduction

Distributionally robust optimization (DRO) is a framework within the broader field of optimization that addresses decision-making problems under uncertainty, where the probability distribution of the uncertain parameters is itself not known exactly and robustness is incorporated to account for model inaccuracies, data uncertainties, or variability in the system parameters. In essence, DRO aims to find optimal solutions that not only minimize the cost function but also ensure a certain level of robust performance when facing a range of potential perturbations or uncertainties in the system. For the latest research progress on distributionally robust optimization problems, we refer the reader to [1,2,3,4,5,6]. The general DRO problem is defined as follows:
$$\min_{x\in X}\ \max_{P\in \mathcal{U}}\ \mathbb{E}_P\left[f(x,\xi)\right],$$
where $X$ is a closed set in $\mathbb{R}^n$, $\xi:\Omega\to\Xi$ is a random variable defined on the measurable space $(\Omega,\mathcal{F})$, its probability distribution $P$ is supported on $\Xi\subseteq\mathbb{R}^k$, $f:\mathbb{R}^n\times\mathbb{R}^k\to\mathbb{R}$, and $\mathcal{U}$ is a set of probability distributions containing the true distribution $P$ of the random variable $\xi$; this set may also be related to the decision-making. $\mathbb{E}_P[\cdot]$ denotes the mathematical expectation with respect to $P\in\mathcal{U}$. The key to this model is the probability distribution set $\mathcal{U}$, which we call the ambiguity set. In early research, scholars generally assumed that the true probability distribution of the random variables could not be obtained and that only partial moment information, independent of the decision variables, was available; ambiguity sets were then constructed from this information to solve the optimization problem. Bertsimas and Popescu [7], taking into account the mean and the covariance matrix of the random variable $\xi$, established an ambiguity set as follows:
$$\mathcal{U}_1=\left\{P\in\Gamma \;\middle|\; P(\xi\in\Xi)=1,\ \mathbb{E}_P[\xi]=\mu,\ \mathbb{E}_P\left[(\xi-\mu)(\xi-\mu)^T\right]=\Sigma\right\},$$
where $\Gamma$ is the set of all probability measures on $(\Omega,\mathcal{F})$; however, they also proved that a model under the ambiguity set $\mathcal{U}_1$ is difficult to handle. Wiesemann et al. [8] then relaxed the equality on $\Sigma$ in $\mathcal{U}_1$ to an upper bound and obtained a new ambiguity set:
$$\mathcal{U}_2=\left\{P\in\Gamma \;\middle|\; P(\xi\in\Xi)=1,\ \mathbb{E}_P[\xi]=\mu,\ \mathbb{E}_P\left[(\xi-\mu)(\xi-\mu)^T\right]\preceq\Sigma\right\}.$$
They proved that the linear robust optimization problem under the ambiguity set $\mathcal{U}_2$ is tractable. Delage and Ye [9] further improved on $\mathcal{U}_1$ by taking $r_1\ge 0$, $r_2\ge 1$, and $\delta>0$ as known parameters, resulting in an ambiguity set as follows:
$$\mathcal{U}_3=\left\{P\in\Gamma \;\middle|\; P(\xi\in\Xi)=1,\ \mathbb{E}_P\left[\xi^T\Sigma^{-1}\xi\right]-\mu^T\Sigma^{-1}\mu\le r_1,\ \mathbb{E}_P\left[(\xi-\mu)(\xi-\mu)^T\right]\preceq r_2\Sigma\right\}.$$
They proved that, under appropriate conditions, the true probability measure $P_0$ belongs to $\mathcal{U}_3$ with a confidence level of at least $1-\delta$. Based on the ambiguity set $\mathcal{U}_3$, Liu [10] discussed a type of DRO involving distributions based on first-order and second-order moment information. It is worth noting that the support set $\Xi$ has an important impact on the tractability of the reformulated problem in DRO. In these works, the ambiguity set of probability measures was independent of the decision variables.
However, in practice, there are many cases where the ambiguity sets in DRO problems are dependent on the decision variables, such as in facility location problems under demand uncertainty [11] and in renovation planning problems [12]. Therefore, studying DRO problems where the ambiguity set is dependent on the decision variables has important practical significance.
In this paper, we propose a new DRO model in which the ambiguity set depends on the decision variables:
$$\min_{x\in X}\ \max_{P\in \mathcal{U}(x)}\ \mathbb{E}_P\left[x^TQx+c^Tx+(Dx)^T\xi+\xi^T\beta\beta^T\xi\right],$$
where $X$ is a closed and convex set in $\mathbb{R}^n$, $Q$ is an $n\times n$ positive semidefinite matrix, $c\in\mathbb{R}^n$, $\beta\in\mathbb{R}^k$, $D\in\mathbb{R}^{k\times n}$, $\xi:\Omega\to\Xi$ is a random variable defined on the measurable space $(\Omega,\mathcal{F})$, and $\mathcal{U}(x)$ is an ambiguity set defined by
$$\mathcal{U}(x)=\left\{P\in\Gamma \;\middle|\; P(\xi\in\Xi)=1,\ \mathbb{E}_P[\xi]\in[Ax,Bx],\ \mathbb{E}_P\left[(\xi-\mu_0(x))(\xi-\mu_0(x))^T\right]\preceq\Sigma(x)\right\},$$
where $A$ and $B$ are $k\times n$ matrices, $\mu_0:\mathbb{R}^n\to\mathbb{R}^k$, and $\Sigma(\cdot)$ is a mapping from $\mathbb{R}^n$ to the space of $k\times k$ positive definite matrices. We assume that $\Sigma(x)$ is positive definite because it acts as an upper bound on the covariance matrix of the random variable $\xi$; since a covariance matrix is at least positive semidefinite, it is reasonable to take this upper bound to be positive definite. The bound $\Sigma(x)$ depends on the decision variables because the distribution of the random variable itself depends on the decision variables, and hence so does any bound on its covariance matrix. This positive definiteness assumption also provides a convenient condition for converting the DRO problem into a SOCP problem later. An illustrative instance of such a decision-dependent ambiguity set is given below.
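For intuition, consider a purely illustrative one-dimensional instance with $k=n=1$ and $x\ge 0$; the choices $A=0.8$, $B=1.2$, $\mu_0(x)=x$, and $\Sigma(x)=\sigma_0^2(1+x^2)$ below are hypothetical and serve only to show how the set moves with the decision:
$$\mathcal{U}(x)=\left\{P\in\Gamma \;\middle|\; P(\xi\in\mathbb{R})=1,\ \mathbb{E}_P[\xi]\in[0.8x,\,1.2x],\ \mathbb{E}_P\!\left[(\xi-x)^2\right]\le \sigma_0^2\left(1+x^2\right)\right\}.$$
Here $\xi$ could be an uncertain demand whose believed mean scales with an ordered quantity $x$ and whose dispersion bound grows with $x$; increasing $x$ both shifts and widens the admissible moment region, which is precisely the coupling between decision and ambiguity set that problem (5) has to handle.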
Recently, there have been some new advancements in DRO problems based on decision-dependent ambiguity sets. Zhang et al. [13] considered a class of DRO models in which the ambiguity set was dependent on the decision variables x and conducted a quantitative stability analysis. The ambiguity set is represented as
$$\mathcal{U}_4(x)=\left\{P\in\Gamma \;\middle|\; \mathbb{E}_P\left[\Psi(x,\xi)\right]\in K\right\}.$$
Here, $K$ represents a closed and convex cone, whereas $\Psi$ denotes a random mapping of a vector or matrix, characterized by measurable random components, and its value is contingent upon x. Lin et al. [14] considered conditions under which only the mean of the parameters could be estimated and assumed that the mean was decision-related, thus constructing a decision-related ambiguity set. This type of problem is widely used in practical applications. Basciftci et al. [11] address the facility location problem under decision-dependent stochastic demand. These authors propose a distributionally robust optimization model that considers the demand uncertainty, which is affected by decisions on the facility locations. They develop a mixed-integer linear programming reformulation and demonstrate the effectiveness of their model through computational studies. Noyan et al. [15] introduce a new class of DRO problems under decision-dependent ambiguity sets. They discuss the computational challenges and provide tractable formulations for applications in machine scheduling and humanitarian logistics. Doan [12] proposes a marginal-based DRO framework for integer stochastic optimization with decision-dependent discrete distributions; it is applied to a retrofitting planning problem, and an efficient constraint generation algorithm is developed to solve the problem. Song et al. [16] present a decision-dependent Distributionally Robust Markov Decision Process (DRMDP) approach to dynamic epidemic control problems. The DRMDP model allows for an ambiguous distribution of the transition dynamics and considers the worst-case distribution of these probabilities within a decision-dependent ambiguity set. Li et al. [17] investigate discrete approximation and quantitative analyses for DRO problems with decision-dependent ambiguity sets. The authors establish the local Lipschitz continuity of the decision-dependent ambiguity set and conduct a quantitative analysis for the optimal value and the optimal solution. However, most of the above works deal with specific practical DRO applications rather than general DRO problems with ambiguity sets of the form (6), and they do not explore tractable reformulations of such problems.
Since the DRO problem (5) is a min–max problem and its ambiguity set depends on the decision variables, it is difficult to solve directly. Therefore, in this paper, we first transform DRO (5) into a class of SOCP problems and then show how to solve this kind of SOCP problem through projected differential equations, providing a new way of thinking for solving this new type of DRO problem.
We transform DRO (5) into a SOCP problem for two reasons. First, the structure of this DRO model admits such a transformation; although the technique is common for general DRO problems, it has not been examined for DRO problems with decision-dependent ambiguity sets such as the one studied here, so this reformulation is also an exploration in its own right. Second, as conic programming problems, SOCP problems have relatively mature and diverse solution methods, so the transformation reduces the difficulty of solving the original problem; in effect, a two-level min–max problem is converted into a single-level problem.
Notice that there are many conventional methods for solving SOCP problems, such as the interior point method [18], sequential quadratic programming [19,20], and the augmented Lagrangian method [21,22]. However, these methods rely on iterative algorithms that may only find local optima and may struggle with non-convex problems. We adopt the differential equation method because it provides accurate solutions by directly simulating the dynamic behavior of the system, and it has the potential to find global optimal solutions rather than being limited to local optima. Finally, there are many ready-made toolboxes available for solving differential equations, and the differential equation method offers a continuous framework for solving DRO problems, which is helpful in dealing with nonlinear and non-convex issues. For the latest research progress on using the differential equation method to solve optimization problems or variational inequality problems, we refer the reader to [23,24,25,26,27,28,29,30,31,32].
The remainder of this article is organized as follows. Section 2 derives an equivalent constrained reformulation of problem (5), making the original problem easier to handle; Section 3 shows that when the support set Ξ is the entire space, the original problem can be transformed into a semidefinite programming problem and further into a SOCP problem; Section 4 solves the resulting SOCP problem by means of projection differential equations and reports numerical results; and Section 5 concludes the paper.
To enhance clarity, a table of the notations and parameters used in this paper is provided in the Nomenclature.

2. Reformulation of the Original Problem

This section presents a representation of problem (5) with ambiguity set (6).
Following duality theory, we next rewrite (5) with ambiguity set (6):
Proposition 1. 
Problem (5) can be rephrased as the following constrained optimization problem:
$$\begin{aligned} \min_{x,\alpha_0,\alpha,Y}\quad & \alpha_0+(Ax)^T\alpha+(Bx-Ax)^T\alpha_+ +\langle\Sigma(x),Y\rangle\\ \text{s.t.}\quad & g(x,\alpha_0,\alpha,Y)\le 0,\ x\in X,\ Y\succeq 0, \end{aligned}$$
where $\alpha_+=\max\{\alpha,0\}$ (taken componentwise) and
$$g(x,\alpha_0,\alpha,Y)=\sup_{\xi\in\Xi}\left\{x^TQx+c^Tx+(Dx)^T\xi+\xi^T\beta\beta^T\xi-\alpha_0-\alpha^T\xi-(\xi-\mu_0(x))^TY(\xi-\mu_0(x))\right\}.$$
Proof. 
The inner maximum problem for problem (5) is
$$p(x)=\max_{P\in\mathcal{U}(x)}\ \mathbb{E}_P\left[x^TQx+c^Tx+(Dx)^T\xi+\xi^T\beta\beta^T\xi\right].$$
Let
$$V(\mu)=\left\{P\in\Gamma \;\middle|\; P(\xi\in\Xi)=1,\ \mathbb{E}_P[\xi]=\mu,\ \mathbb{E}_P\left[(\xi-\mu_0(x))(\xi-\mu_0(x))^T\right]\preceq\Sigma(x)\right\}$$
and
$$\Psi(x,\mu)=\max_{P\in V(\mu)}\ \mathbb{E}_P\left[x^TQx+c^Tx+(Dx)^T\xi+\xi^T\beta\beta^T\xi\right].$$
We have
$$p(x)=\max_{\mu\in[Ax,Bx]}\ \Psi(x,\mu).$$
Based on the structure of ambiguity set (6), the inner maximum problem can be written as follows:
$$\begin{aligned} \max_{\mathrm{d}P(\xi)\ge 0}\quad & \int_\Xi \left[x^TQx+c^Tx+(Dx)^T\xi+\xi^T\beta\beta^T\xi\right]\mathrm{d}P(\xi)\\ \text{s.t.}\quad & \int_\Xi \mathrm{d}P(\xi)=1,\quad \int_\Xi \xi\,\mathrm{d}P(\xi)=\mu,\quad \int_\Xi (\xi-\mu_0(x))(\xi-\mu_0(x))^T\,\mathrm{d}P(\xi)\preceq\Sigma(x) \end{aligned}$$
and Ψ ( x , μ ) is the optimal value for problem (11). Furthermore, the Lagrange function of (11) is
$$\begin{aligned} L(\mathrm{d}P,\alpha_0,\alpha,Y) =\ & \int_\Xi \left[x^TQx+c^Tx+(Dx)^T\xi+\xi^T\beta\beta^T\xi\right]\mathrm{d}P(\xi) -\alpha_0\left(\int_\Xi \mathrm{d}P(\xi)-1\right)\\ & -\left(\int_\Xi \xi\,\mathrm{d}P(\xi)-\mu\right)^T\alpha -\left\langle \int_\Xi (\xi-\mu_0(x))(\xi-\mu_0(x))^T\,\mathrm{d}P(\xi)-\Sigma(x),\ Y\right\rangle\\ =\ & \int_\Xi \left[x^TQx+c^Tx+(Dx)^T\xi+\xi^T\beta\beta^T\xi-\alpha_0-\alpha^T\xi-(\xi-\mu_0(x))^TY(\xi-\mu_0(x))\right]\mathrm{d}P(\xi)\\ & +\alpha_0+\mu^T\alpha+\langle\Sigma(x),Y\rangle, \end{aligned}$$
where α 0 , α , and Y are the multipliers. According to Shapiro’s duality theory in [33], it can be concluded that
$$\begin{aligned} \Psi(x,\mu) &=\min_{\alpha_0,\alpha,Y\succeq 0}\ \max_{\mathrm{d}P(\xi)\ge 0}\ L(\mathrm{d}P,\alpha_0,\alpha,Y)\\ &=\min_{\alpha_0,\alpha,Y\succeq 0}\ \alpha_0+\mu^T\alpha+\langle\Sigma(x),Y\rangle\\ &\qquad \text{s.t.}\ x^TQx+c^Tx+(Dx)^T\xi+\xi^T\beta\beta^T\xi-\alpha_0-\alpha^T\xi-(\xi-\mu_0(x))^TY(\xi-\mu_0(x))\le 0,\quad \forall\,\xi\in\Xi. \end{aligned}$$
From the definition of g ( x , α 0 , α , Y ) , it can be inferred that
$$\begin{aligned} p(x)&=\max_{\mu}\ \Psi(x,\mu)\\ &=\max_{\mu\in[Ax,Bx]}\ \min_{\alpha_0,\alpha,Y\succeq 0}\ \left\{\alpha_0+\mu^T\alpha+\langle\Sigma(x),Y\rangle \;\middle|\; g(x,\alpha_0,\alpha,Y)\le 0\right\}\\ &=\min_{\alpha_0,\alpha,Y\succeq 0}\ \max_{\mu\in[Ax,Bx]}\ \left\{\alpha_0+\mu^T\alpha+\langle\Sigma(x),Y\rangle \;\middle|\; g(x,\alpha_0,\alpha,Y)\le 0\right\}\\ &=\min_{\alpha_0,\alpha,Y\succeq 0}\ \left\{\alpha_0+(Ax)^T\alpha+(Bx-Ax)^T\alpha_+ +\langle\Sigma(x),Y\rangle \;\middle|\; g(x,\alpha_0,\alpha,Y)\le 0\right\}. \end{aligned}$$
So, problem (5) is equivalent to min x X p ( x ) , and the original problem is equivalent to the constrained optimization problem (8). □

3. The Equivalence between the Original Problem and the SOCP Model

In this section, we consider the case where the support set $\Xi$ is the entire space. It can be proven that when $\Xi=\mathbb{R}^k$, the original problem can be transformed into a semidefinite programming problem. Let
$$H(x,\alpha_0,\alpha,Y)=\begin{pmatrix} -\Theta(x,\alpha_0,\alpha) & \left(\dfrac{\alpha-Dx}{2}-\beta\beta^T\mu_0(x)\right)^T\\[6pt] \dfrac{\alpha-Dx}{2}-\beta\beta^T\mu_0(x) & Y-\beta\beta^T \end{pmatrix},$$
where $\Theta(x,\alpha_0,\alpha)=x^TQx+c^Tx-\alpha_0+\mu_0(x)^T\beta\beta^T\mu_0(x)+(Dx)^T\mu_0(x)-\alpha^T\mu_0(x)$. The following theorem can be obtained:
Theorem 1. 
In the case where $\Xi=\mathbb{R}^k$, problem (5) can be transformed into the following semidefinite programming problem:
$$\begin{aligned} \min_{x,\alpha_0,\alpha,Y}\quad & \alpha_0+(Ax)^T\alpha+(Bx-Ax)^T\alpha_+ +\langle\Sigma(x),Y\rangle\\ \text{s.t.}\quad & H(x,\alpha_0,\alpha,Y)\succeq 0,\ x\in X,\ Y\succeq 0. \end{aligned}$$
Proof. 
Notice that the constraint of $\Psi(x,\mu)$, namely $x^TQx+c^Tx+(Dx)^T\xi+\xi^T\beta\beta^T\xi-\alpha_0-\alpha^T\xi-(\xi-\mu_0(x))^TY(\xi-\mu_0(x))\le 0$ for all $\xi\in\Xi$, can be written equivalently as an inequality related to $H(x,\alpha_0,\alpha,Y)$, i.e.,
$$\begin{pmatrix}1\\ \xi-\mu_0(x)\end{pmatrix}^T\begin{pmatrix} -\Theta(x,\alpha_0,\alpha) & \left(\dfrac{\alpha-Dx}{2}-\beta\beta^T\mu_0(x)\right)^T\\[6pt] \dfrac{\alpha-Dx}{2}-\beta\beta^T\mu_0(x) & Y-\beta\beta^T \end{pmatrix}\begin{pmatrix}1\\ \xi-\mu_0(x)\end{pmatrix}\ge 0,\quad \forall\,\xi\in\Xi.$$
Since $\Xi=\mathbb{R}^k$, this is exactly equivalent to $H(x,\alpha_0,\alpha,Y)\succeq 0$. The proof is completed. □
Next, we will transform problem (5) into a SOCP problem.
Theorem 2. 
Assume that X is a non-empty compact set in $\mathbb{R}^n$; then, (5) is equivalent to the following SOCP problem:
$$\begin{aligned} \min_{x,\alpha,q,p}\quad & l(x,\alpha)+\mathbf{1}_k^Tq+p\\ \text{s.t.}\quad & (Bx-Ax)\star\alpha-q\le 0,\\ & \left(p,\ \Sigma(x)^{1/2}\left(\alpha-Dx-2\beta\beta^T\mu_0(x)\right)\right)\in K^{k+1},\\ & q\ge 0,\ x\in X, \end{aligned}$$
where $l(x,\alpha)=(Ax)^T\alpha+c^Tx+x^TQx+\mu_0(x)^T\beta\beta^T\mu_0(x)+(Dx)^T\mu_0(x)-\alpha^T\mu_0(x)+\beta^T\Sigma(x)\beta$, and $K^{k+1}\subset\mathbb{R}^{k+1}$ is the second-order cone, defined as
$$K^{k+1}=\left\{(\omega_0,\omega)\in\mathbb{R}^{k+1} : \|\omega\|\le\omega_0\right\},$$
$\mathbf{1}_k=(1,\ldots,1)^T\in\mathbb{R}^k$, and $\star$ denotes the Hadamard (componentwise) product.
Proof. 
The Lagrange function of problem (15) is
$$L(x,\alpha_0,\alpha,Y,\Omega)=\alpha_0+(Ax)^T\alpha+(Bx-Ax)^T\alpha_+ +\langle\Sigma(x),Y\rangle-\langle\Omega,\ H(x,\alpha_0,\alpha,Y)\rangle,$$
where $\Omega=\begin{pmatrix} z_0 & z^T\\ z & M\end{pmatrix}$, with $z_0\in\mathbb{R}$, $z\in\mathbb{R}^k$, and $M\in S^{k\times k}$. So, $L(x,\alpha_0,\alpha,Y,\Omega)$ can be expressed in the following form:
$$\begin{aligned} L(x,\alpha_0,\alpha,Y,\Omega) =\ & \alpha_0+(Ax)^T\alpha+(Bx-Ax)^T\alpha_+ +\langle\Sigma(x),Y\rangle\\ & +z_0\Theta(x,\alpha_0,\alpha)+z^TDx-z^T\alpha+2z^T\beta\beta^T\mu_0(x)+\langle M,\ \beta\beta^T-Y\rangle\\ =\ & \alpha_0+(Ax)^T\alpha+(Bx-Ax)^T\alpha_+ +\left(x^TQx+c^Tx-\alpha_0\right)z_0+z^T(Dx-\alpha)+\langle \beta\beta^T-Y,\ M\rangle\\ & +\langle\Sigma(x),Y\rangle+z_0\mu_0(x)^T\beta\beta^T\mu_0(x)+z_0(Dx)^T\mu_0(x)-z_0\alpha^T\mu_0(x)+2z^T\beta\beta^T\mu_0(x). \end{aligned}$$
Considering that problem (15) satisfies the Slater condition, its optimal solution satisfies the following Karush–Kuhn–Tucker (KKT) condition:
$$\begin{aligned} & -\nabla_x L(x,\alpha_0,\alpha,Y,\Omega)\in N_X(x),\\ & \nabla_{\alpha_0} L(x,\alpha_0,\alpha,Y,\Omega)=1-z_0=0,\\ & \nabla_{\alpha} L(x,\alpha_0,\alpha,Y,\Omega)=0,\\ & \Sigma(x)-M=0,\\ & 0\preceq\Omega\perp H(x,\alpha_0,\alpha,Y)\succeq 0, \end{aligned}$$
where N X ( x ) is the normal cone of X at x in the sense of convex analysis. By using the second and fourth conditions in (17), the specific expression of Ω can be obtained:
$$\Omega=\begin{pmatrix} 1 & z^T\\ z & \Sigma(x)\end{pmatrix}.$$
Since, for a fixed value of x, $\Sigma(x)$ is a $k\times k$ positive definite matrix, it follows from the expression of $\Omega$ that the rank of $\Omega$ is k or k + 1. Moreover, the matrices $\Omega$ and $H(x,\alpha_0,\alpha,Y)$ can be simultaneously diagonalized due to the fact that $0\preceq\Omega\perp H(x,\alpha_0,\alpha,Y)\succeq 0$. Therefore, $H(x,\alpha_0,\alpha,Y)$ is either the zero matrix or a positive semidefinite matrix of rank 1. Next, we prove the results of the theorem in two cases.
Case 1.  H ( x , α 0 , α , Y ) is a zero matrix.
In this case, according to the definition of $H(x,\alpha_0,\alpha,Y)$ in (14), we have $x^TQx+c^Tx-\mu_0(x)^TY\mu_0(x)=\alpha_0$, $Dx+2Y\mu_0(x)=\alpha$, and $Y=\beta\beta^T$, which mean that the objective function of problem (15) can be rewritten as
$$x^TQx+c^Tx+\mu_0(x)^T\beta\beta^T\mu_0(x)+(Dx)^T\mu_0(x)-\alpha^T\mu_0(x)+(Ax)^T\alpha+(Bx-Ax)^T\alpha_+ +\beta^T\Sigma(x)\beta.$$
Combining H ( x , α 0 , α , Y ) = 0 , according to the definition of l ( x , α ) , and replacing α 0 in (15), we find that problem (15) can be rewritten as
$$\min_{x,\alpha}\ l(x,\alpha)+(Bx-Ax)^T\alpha_+ \quad \text{s.t.}\ x\in X.$$
Notice that the term $(Bx-Ax)^T\alpha_+$ in the objective function can be moved into the constraints by introducing auxiliary variables $q$ and $p$; then, problem (18) can be rewritten as
$$\begin{aligned} \min_{x,\alpha,q,p}\quad & l(x,\alpha)+\mathbf{1}_k^Tq+p\\ \text{s.t.}\quad & (Bx-Ax)\star\alpha-q\le 0,\ p\ge 0,\ q\ge 0,\ x\in X, \end{aligned}$$
which is just problem (16) in the case where D x + 2 β β T μ 0 ( x ) = α .
Case 2.  H ( x , α 0 , α , Y ) is a positive semidefinite matrix with rank 1.
In this case, there exist $r_0\in\mathbb{R}$ and $r\in\mathbb{R}^k$ with $(r_0,r^T)\neq 0$ such that
$$H(x,\alpha_0,\alpha,Y)=\begin{pmatrix} r_0\\ r\end{pmatrix}\begin{pmatrix} r_0\\ r\end{pmatrix}^T.$$
Suppose r 0 = 0 ; then,
$$H(x,\alpha_0,\alpha,Y)=\begin{pmatrix} r_0^2 & r_0r^T\\ r_0r & rr^T\end{pmatrix}=\begin{pmatrix} 0 & 0\\ 0 & rr^T\end{pmatrix},$$
and then it can be concluded that $Y-\beta\beta^T=rr^T$. Since $\langle H(x,\alpha_0,\alpha,Y),\Omega\rangle=0$, we have $r^T\Sigma(x)r=0$ and hence $r=0$, in contradiction with $(r_0,r^T)\neq 0$, meaning $r_0\neq 0$. Now, substituting (14) into (20) and using the elementary inequality $a+\frac{b^2}{4a}\ge b$ for $a>0$, the following relationship can be obtained:
$$\begin{aligned} \alpha_0+\langle\Sigma(x),Y\rangle &= r_0^2+c^Tx+x^TQx+\mu_0(x)^T\beta\beta^T\mu_0(x)+(Dx)^T\mu_0(x)-\alpha^T\mu_0(x)+\left\langle\Sigma(x),\ \beta\beta^T+rr^T\right\rangle\\ &= r_0^2+c^Tx+x^TQx+\mu_0(x)^T\beta\beta^T\mu_0(x)+(Dx)^T\mu_0(x)-\alpha^T\mu_0(x)+\beta^T\Sigma(x)\beta\\ &\qquad +\left(\frac{\alpha-Dx-2\beta\beta^T\mu_0(x)}{2r_0}\right)^T\Sigma(x)\left(\frac{\alpha-Dx-2\beta\beta^T\mu_0(x)}{2r_0}\right)\\ &\ge c^Tx+x^TQx+\mu_0(x)^T\beta\beta^T\mu_0(x)+(Dx)^T\mu_0(x)-\alpha^T\mu_0(x)+\beta^T\Sigma(x)\beta\\ &\qquad +\left\|\Sigma(x)^{1/2}\left(\alpha-Dx-2\beta\beta^T\mu_0(x)\right)\right\|. \end{aligned}$$
In this case, problem (15) can be transformed into the following problem:
$$\min_{x,\alpha}\ f(x,\alpha)\quad \text{s.t.}\ x\in X,$$
where $f(x,\alpha)=(Ax)^T\alpha+(Bx-Ax)^T\alpha_+ +c^Tx+x^TQx+\mu_0(x)^T\beta\beta^T\mu_0(x)+(Dx)^T\mu_0(x)-\alpha^T\mu_0(x)+\beta^T\Sigma(x)\beta+\left\|\Sigma(x)^{1/2}\left(\alpha-Dx-2\beta\beta^T\mu_0(x)\right)\right\|$. Next, introducing $p\ge 0$ and $q\ge 0$ such that
$$\left\|\Sigma(x)^{1/2}\left(\alpha-Dx-2\beta\beta^T\mu_0(x)\right)\right\|\le p$$
and
$$(Bx-Ax)\star\alpha-q\le 0,$$
we can simplify f ( x , α ) and rewrite (21) as (16).
All in all, we complete the proof. □

4. Solving Using Projection Differential Equations

In this section, we will use the projection differential equation method to solve problem (16).
To show the projection differential equation method, first, we introduce the relevant concepts for projection on a second-order cone. For a second-order cone K m , the metric projection Π K m ( x ) of x onto K m is defined as an optimal solution of the following minimization problem:
$$\min_{x'}\ \|x'-x\|\quad \text{s.t.}\ x'\in K^m.$$
As shown in [34], a Jordan algebra is a very powerful tool for calculating the metric projection onto the second-order cone K m . For any two vectors x = ( x 0 , x ¯ ) and y = ( y 0 , y ¯ ) in R × R m 1 , the Jordan product of x and y is represented as
$$x\circ y=\left(x_0y_0+\bar{x}^T\bar{y},\ x_0\bar{y}+y_0\bar{x}\right).$$
Under this product, $(\mathbb{R}\times\mathbb{R}^{m-1},\circ)$ forms a Jordan algebra with identity element $e=(1,0,\ldots,0)^T\in\mathbb{R}\times\mathbb{R}^{m-1}$. Any $x=(x_0,\bar{x})\in\mathbb{R}\times\mathbb{R}^{m-1}$ admits the following spectral decomposition:
$$x=\lambda_1(x)c_1(x)+\lambda_2(x)c_2(x),$$
where $\lambda_1(x)$ and $\lambda_2(x)$ are the spectral eigenvalues of x, given by
$$\lambda_i(x)=x_0+(-1)^i\|\bar{x}\|,\quad i=1,2,$$
and $c_1(x)$, $c_2(x)$ are the associated spectral vectors, given by
$$c_i(x)=\begin{cases}\dfrac{1}{2}\left(1,\ (-1)^i\dfrac{\bar{x}}{\|\bar{x}\|}\right), & \text{if } \bar{x}\neq 0,\\[8pt] \dfrac{1}{2}\left(1,\ (-1)^i\omega\right), & \text{if } \bar{x}=0.\end{cases}$$
Here, ω is any unit vector in R m 1 . Next, we show how to calculate the metric projection onto the second-order cone K m through a Jordan algebra. If the spectral decomposition of x is λ 1 ( x ) c 1 ( x ) + λ 2 ( x ) c 2 ( x ) , then the projection Π K m ( x ) of x onto K m is
$$\Pi_{K^m}(x)=\max\{0,\lambda_1(x)\}c_1(x)+\max\{0,\lambda_2(x)\}c_2(x).$$
Substituting the formula λ i ( x ) , c i ( x ) into the above equation, we obtain
$$\Pi_{K^m}(x)=\begin{cases}\dfrac{1}{2}\left(1+\dfrac{x_0}{\|\bar{x}\|}\right)\left(\|\bar{x}\|,\ \bar{x}\right), & \text{if } |x_0|<\|\bar{x}\|,\\[8pt] (x_0,\bar{x}), & \text{if } \|\bar{x}\|\le x_0,\\[4pt] 0, & \text{if } \|\bar{x}\|\le -x_0.\end{cases}$$
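As a quick illustration (not part of the original derivation), the following Python sketch evaluates this closed-form projection with NumPy; the function name proj_soc and the test vectors are our own choices.

```python
import numpy as np

def proj_soc(x):
    """Metric projection of x = (x0, x_bar) onto the second-order cone K^m."""
    x = np.asarray(x, dtype=float)
    x0, x_bar = x[0], x[1:]
    nrm = np.linalg.norm(x_bar)
    if nrm <= x0:                      # x already lies in the cone
        return x.copy()
    if nrm <= -x0:                     # -x lies in the cone, so the projection is the origin
        return np.zeros_like(x)
    coef = 0.5 * (1.0 + x0 / nrm)      # remaining case: |x0| < ||x_bar||
    return coef * np.concatenate(([nrm], x_bar))

# small sanity checks
print(proj_soc([2.0, 1.0, 0.5]))    # inside K^3: returned unchanged
print(proj_soc([-3.0, 1.0, 0.0]))   # -x in K^3: projected to the origin
print(proj_soc([0.0, 2.0, 0.0]))    # boundary case: gives (1, 1, 0)
```

In the dynamics considered below, a projection of exactly this type has to be evaluated at every step of the right-hand side of the differential equation.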
On this basis, we can use the differential equation method to solve the second-order cone program (16). For simplicity, let $\mu_0(x)=0$, $\beta=0$, and $\Sigma(x)=\Sigma$; then, (16) becomes
$$\begin{aligned} \min_{x,\alpha,q,p}\quad & c^Tx+(Ax)^T\alpha+\mathbf{1}_k^Tq+p\\ \text{s.t.}\quad & (Bx-Ax)\star\alpha-q\le 0,\\ & \left(p,\ \Sigma^{1/2}(\alpha-Dx)\right)\in K^{k+1},\\ & q\ge 0,\ x\in X. \end{aligned}$$
We write the Lagrange function for problem (22) as follows:
$$L(x,\alpha,q,p,u,(v_0,v),w)=c^Tx+(Ax)^T\alpha+\mathbf{1}_k^Tq+p+u^T\left[(Bx-Ax)\star\alpha-q\right]-(v_0,v)\cdot\left(p,\ \Sigma^{1/2}(\alpha-Dx)\right)-w^Tq,$$
where u, ( v 0 , v ) , and w are Lagrange multipliers. Then, it can be concluded that
$$\nabla_{x,\alpha,q,p}\,L(x,\alpha,q,p;u,(v_0,v),w)=\begin{pmatrix} c+A^T\alpha+(B-A)^T(\alpha\star u)+D^T\Sigma^{1/2}v\\ Ax+u\star(Bx-Ax)-\Sigma^{1/2}v\\ \mathbf{1}_k-u-w\\ 1-v_0 \end{pmatrix}.$$
According to [35], if $(x^*,\alpha^*,q^*,p^*)$ is an optimal solution to problem (22), then under a suitable constraint qualification there exist multipliers $u^*$, $(v_0^*,v^*)$, and $w^*$ such that the following KKT conditions hold:
$$\begin{aligned} & -\left[c+A^T\alpha^*+(B-A)^T(\alpha^*\star u^*)+D^T\Sigma^{1/2}v^*\right]\in N_X(x^*),\\ & Ax^*+u^*\star(Bx^*-Ax^*)-\Sigma^{1/2}v^*=0,\\ & \mathbf{1}_k-u^*-w^*=0,\\ & 1-v_0^*=0,\\ & 0\le u^*\perp \left[(Bx^*-Ax^*)\star\alpha^*-q^*\right]\le 0,\\ & K^{k+1}\ni (v_0^*,v^*)\perp \left(p^*,\ \Sigma^{1/2}(\alpha^*-Dx^*)\right)\in K^{k+1},\\ & 0\le w^*\perp q^*\ge 0. \end{aligned}$$
Then, solving the second-order cone problem (22) is transformed into solving the KKT system (23).
Notice that the sixth equation in (23) can be rewritten as
$$-\left(p^*,\ \Sigma^{1/2}(\alpha^*-Dx^*)\right)\in N_{K^{k+1}}\left((v_0^*,v^*)\right),$$
which, by Proposition 1.5.8 in [36], means that
$$\Pi_{K^{k+1}}\left[(v_0^*,v^*)-\mu\left(p^*,\ \Sigma^{1/2}(\alpha^*-Dx^*)\right)\right]-(v_0^*,v^*)=0$$
with μ > 0 . The fourth equation in (23) can be rewritten as
$$-(1-v_0^*)\in N_{\mathbb{R}}(p^*),$$
which can be written as
$$\Pi_{\mathbb{R}}\left[p^*-\mu(1-v_0^*)\right]-p^*=0.$$
In the same way, the remaining conditions in (23) can be described by the metric projection. Consequently, a point $y^*=(x^*,\alpha^*,q^*,p^*,u^*,(v_0^*,v^*),w^*)$ satisfies (23) if and only if $y^*$ satisfies the following projection equation:
$$\Phi_\mu(y):=\Pi_M\left[y-\mu F(y)\right]-y=0,$$
where λ = ( v 0 , v ) and
$$y=\begin{pmatrix} x\\ \alpha\\ q\\ p\\ u\\ \lambda\\ w\end{pmatrix},\qquad F(y)=\begin{pmatrix} c+A^T\alpha+(B-A)^T(\alpha\star u)+D^T\Sigma^{1/2}v\\ Ax+u\star(Bx-Ax)-\Sigma^{1/2}v\\ \mathbf{1}_k-u-w\\ 1-v_0\\ -(Bx-Ax)\star\alpha+q\\ \left(p,\ \Sigma^{1/2}(\alpha-Dx)\right)\\ q \end{pmatrix},$$
and $M=X\times\mathbb{R}^k\times\mathbb{R}^k\times\mathbb{R}\times\mathbb{R}_+^k\times K^{k+1}\times\mathbb{R}_+^k$, and $\mu$ is a positive parameter.
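To make the block structure above concrete, the following minimal NumPy sketch assembles $F(y)$ for given problem data; the data, dimensions, and function names are hypothetical and illustrative only, Sigma_half plays the role of $\Sigma^{1/2}$ (taken as the identity here), and this is not the authors' implementation.

```python
import numpy as np

def make_F(c, A, B, D, Sigma_half):
    """Return F(y) for y = (x, alpha, q, p, u, (v0, v), w), following the block layout above."""
    k, n = A.shape

    def F(y):
        x     = y[0:n]
        alpha = y[n:n+k]
        q     = y[n+k:n+2*k]
        p     = y[n+2*k]
        u     = y[n+2*k+1:n+3*k+1]
        v0    = y[n+3*k+1]
        v     = y[n+3*k+2:n+4*k+2]
        w     = y[n+4*k+2:n+5*k+2]
        gap = ((B - A) @ x) * alpha            # (Bx - Ax) * alpha, Hadamard product
        return np.concatenate([
            c + A.T @ alpha + (B - A).T @ (alpha * u) + D.T @ (Sigma_half @ v),
            A @ x + u * ((B - A) @ x) - Sigma_half @ v,
            np.ones(k) - u - w,
            [1.0 - v0],
            -gap + q,
            [p], Sigma_half @ (alpha - D @ x),
            q,
        ])
    return F

# hypothetical data with n = 2 decision variables and k = 2 random components
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2))
D = rng.standard_normal((2, 2))
c, Sigma_half = rng.standard_normal(2), np.eye(2)
F = make_F(c, A, B, D, Sigma_half)
y0 = np.zeros(2 + 2 + 2 + 1 + 2 + 3 + 2)       # total dimension n + 5k + 2 = 14
print(F(y0).shape)                              # (14,)
```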
Next, we obtain a projection differential equation, i.e.,
$$\frac{\mathrm{d}y(t)}{\mathrm{d}t}=\tau\,\Phi_\mu(y(t)),\qquad y(0)=y_0,$$
where $\tau>0$ is a scalar parameter. An analysis of the stability of the projection differential equation system (25) can be found in [37]. Next, we solve the SOCP problem (22) by solving the system of projection differential equations (25). Here, we solve a specific instance of (22), with $c=-1$, $A=2$, $B=2$, $D=1$, and $\Sigma=4$, that is,
$$\begin{aligned} \min_{x,\alpha,q,p}\quad & (2\alpha-1)x+q+p\\ \text{s.t.}\quad & (p,\ 2(\alpha-x))\in K^2,\ q\ge 0,\ x\in X. \end{aligned}$$
Let X be a set that contains the optimal solution as an interior point; then, the KKT conditions corresponding to problem (26) are
$$\begin{aligned} & -1+2\alpha+2v=0,\\ & 2x-2v=0,\\ & 0\le (1-w)\perp q\ge 0,\\ & K^2\ni (1,x)\perp \left(p,\ 2(\alpha-x)\right)\in K^2,\\ & 0\le w\perp q\ge 0. \end{aligned}$$
According to $0\le(1-w)\perp q\ge 0$ and $0\le w\perp q\ge 0$, it can be inferred that $w=q=0$. So, the above system can be simplified as
$$2\alpha+2v=1,\qquad x=v,\qquad K^2\ni (1,x)\perp \left(p,\ 2(\alpha-x)\right)\in K^2,$$
and
$$\alpha=\frac{1-2x}{2},\qquad v=x.$$
So, the above problem can be translated into a simple problem:
$$K^2\ni \begin{pmatrix}1\\ x\end{pmatrix}\perp \begin{pmatrix}p\\ 1-4x\end{pmatrix}\in K^2.$$
Next, we use the projection differential equation (25) to solve the SOCP, which is
$$\frac{\mathrm{d}}{\mathrm{d}t}\begin{pmatrix} s\\ x\end{pmatrix}=\tau\,\Pi_{K^2}\left[\begin{pmatrix}1\\ x\end{pmatrix}-\mu\begin{pmatrix} s\\ 1-4x\end{pmatrix}\right]-\tau\begin{pmatrix}1\\ x\end{pmatrix},\qquad s(t_0)=s_0,\ x(t_0)=x_0.$$
Many works in the literature have studied using the projection differential equation method to solve constrained optimization problems, such as [37,38]. Consider the following projection differential equation:
$$\frac{\mathrm{d}u}{\mathrm{d}t}=\Lambda\left\{\Pi_Z\left(G(u)-F(u)\right)-G(u)\right\},$$
where $u\in\mathbb{R}^n$ is the state vector, $\Lambda=\mathrm{diag}(\lambda_i)$ is a positive diagonal matrix, and $F(u)$ and $G(u)$ are continuously differentiable vector-valued functions from $\mathbb{R}^n$ to $\mathbb{R}^n$. A mapping $F$ is said to be G-monotone at $u^*$ if $(F(u)-F(u^*))^T(G(u)-G(u^*))\ge 0$ for all $u\in\mathbb{R}^n$.
We now recall the convergence and stability result for (28) from ([37], Theorem 1).
Theorem 3. 
Assume an equilibrium point u * of (28) exists such that
$$S^*=\left\{u\in\mathbb{R}^n : F(u)+G(u)=F(u^*)+G(u^*)\right\}$$
is bounded and $F(u)$ is G-monotone at $u^*$. If $J_F(u)+J_G(u)$ is symmetric and positive semidefinite on $\mathbb{R}^n$, then the differential equation (28) is stable in the Lyapunov sense and is globally convergent to an equilibrium point of (28).
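As a minimal numerical illustration of Theorem 3 (our own sketch, not the dde23 experiments reported below), take $\Lambda=I$, $Z=K^2$, $G(u)=u$, and the monotone map $F(u)=u-b$ for a fixed vector $b$; then $J_F+J_G=2I$ is symmetric positive semidefinite, $F$ is G-monotone, and the unique equilibrium of (28) is the projection $\Pi_{K^2}(b)$, which the trajectory should approach. The data, solver choice (SciPy's solve_ivp), and tolerances below are our own assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

def proj_k2(a):
    """Projection onto the second-order cone K^2 = {(a0, a1) : |a1| <= a0}."""
    a0, a1 = a
    if abs(a1) <= a0:
        return np.array([a0, a1])
    if abs(a1) <= -a0:
        return np.zeros(2)
    t = 0.5 * (a0 + abs(a1))
    return np.array([t, np.sign(a1) * t])

b = np.array([-0.5, 2.0])        # hypothetical data; equilibrium is Pi_{K^2}(b)
F = lambda u: u - b              # monotone map
G = lambda u: u                  # identity map

def rhs(t, u):
    # du/dt = Lambda { Pi_Z( G(u) - F(u) ) - G(u) } with Lambda = I and Z = K^2
    return proj_k2(G(u) - F(u)) - G(u)

sol = solve_ivp(rhs, (0.0, 20.0), y0=np.array([1.5, -1.0]), rtol=1e-8, atol=1e-10)
print(sol.y[:, -1])   # approximately [0.75, 0.75]
print(proj_k2(b))     # reference value Pi_{K^2}(b) = [0.75, 0.75]
```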
Next, we give some numerical test and simulation results. We used the dde23 solver in Matlab 2016 to solve the projection differential equation (27). The numerical experiments in this part were conducted on a DELL laptop with an Intel(R) Core(TM) i7-7500U CPU @ 2.70 GHz, using a 64-bit operating system.
The starting point is ( s 0 , x 0 ) = ( 0 , 1.5 ) . The solutions and the final steps for different values of μ are shown in Table 1.
We know from Table 1 that as the parameter μ increases, the solution of projection differential equation (27) approaches the equilibrium point ( 0.25 , 0 ) T , indicating that the differential equation method has good convergence. However, in order to obtain a more accurate solution, the final number of iterations required may continue to increase.
Next, Figure 1, Figure 2 and Figure 3 show the solution trajectories of projection differential equation (27) with $\mu=40$ and $\tau=0.5$, 1, and 10, respectively. The components $(x_1(t),x_2(t))$ in Figure 1, Figure 2 and Figure 3 are just $(s(t),x(t))$ in projection differential equation (27).
We know from Figure 1, Figure 2 and Figure 3 that all the trajectories converge to a corresponding static state. When τ takes different values, the convergence speed of the solution trajectory is different. It is found that in general, the larger the value of τ , the faster the convergence speed of the solution, and it will eventually converge to the same value, ( 0.25 , 0 ) T .
Overall, we can conclude that $x=\frac{1}{4}$ and $s=0$. So, the optimal solution to problem (26) is $(x^*,\alpha^*,q^*,p^*)=\left(\frac{1}{4},0,0,0\right)$.
Summarizing the numerical experiments in this section, the following conclusions can be obtained:
(1)
In this section, the SOCP problem was rewritten based on its KKT conditions, and solving the original problem with the constructed projection differential equation proved feasible and effective; the solution trajectory eventually converges to an equilibrium point.
(2)
In the process of solving the problem using projection differential equation (27), the scale factor τ has an effect on the time taken to converge to the equilibrium point; specifically, the larger the parameter τ , the shorter the time taken. However, it has no effect on the position of the equilibrium point. The parameter μ has an effect on the final number of steps and the accuracy of the solution in converging to the equilibrium point; specifically, the larger the parameter μ , the larger the final number of steps and the more accurate the solution obtained—that is, the smaller the error.

5. Conclusions

This paper investigates a class of distributionally robust optimization problems whose ambiguity set depends on the decision variables. By using duality theory, the DRO problem was first transformed into a semidefinite programming problem and then into an easily solvable second-order cone programming problem. After introducing the Jordan-algebra-based projection onto second-order cones, solving the second-order cone programming problem was reduced to solving its KKT system; integrating a projection differential equation for this KKT system then yields the final result. Such problems can also be solved using the method of delay differential equations, which has high theoretical value and practical significance.

Author Contributions

Conceptualization: S.L. and J.Z.; methodology: S.L.; software: Y.W.; validation: J.Z., S.L. and Y.W.; formal analysis: S.L.; investigation: J.Z.; writing—original draft preparation: S.L.; writing—review and editing: J.Z.; visualization: Y.W.; supervision: J.Z.; project administration: J.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Project Grant Nos. 12171219 and 61877032.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Nomenclature

Notations or Parameters
x: decision variable in $\mathbb{R}^n$
$x^T$: transpose of the vector x
$\Sigma_1\preceq\Sigma_2$: $\Sigma_2-\Sigma_1$ is a positive semidefinite matrix, for symmetric matrices $\Sigma_1$ and $\Sigma_2$
X: set of decision variables in $\mathbb{R}^n$
Q: an $n\times n$ positive semidefinite matrix
c: a vector in $\mathbb{R}^n$
$\beta$: a vector in $\mathbb{R}^k$
D: a $k\times n$ matrix
$(\Omega,\mathcal{F})$: measurable space
$\Xi$: support set in $\mathbb{R}^k$
$\xi$: random vector in $\Xi$ on $(\Omega,\mathcal{F})$
$\Gamma$: set of all probability measures on $(\Omega,\mathcal{F})$
A, B: $k\times n$ matrices
$\mu_0(\cdot)$: mapping from $\mathbb{R}^n$ to $\mathbb{R}^k$
$\Sigma(\cdot)$: mapping from $\mathbb{R}^n$ to the space of $k\times k$ positive definite matrices
P: probability distribution supported on $\Xi\subseteq\mathbb{R}^k$
$\mathcal{U}(x)$: ambiguity set of distributions containing the true probability distribution P of the random variable $\xi$
$\mathbb{E}_P[Z]$: expectation of a random variable Z under $P\in\mathcal{U}(x)$, defined by the integral $\int_\Xi Z(\xi)\,\mathrm{d}P(\xi)$
$\alpha_0$: multiplier corresponding to the first constraint of (11)
$\alpha$: multiplier corresponding to the second constraint of (11)
Y: multiplier corresponding to the third constraint of (11)
$\alpha_+$: $\max\{\alpha,0\}$
$\langle\Sigma_1,\Sigma_2\rangle$: trace of $\Sigma_1\Sigma_2$ for symmetric matrices $\Sigma_1$ and $\Sigma_2$
$K^{k+1}$: second-order cone defined as $\{(\omega_0,\omega)\in\mathbb{R}^{k+1}:\|\omega\|\le\omega_0\}$
$\star$: the Hadamard product
$\nabla f(x)$: gradient of $f:\mathbb{R}^n\to\mathbb{R}$ at x
$J_g(x)$: Jacobian of $g:\mathbb{R}^n\to\mathbb{R}^m$ at x
$a\perp b$: $a^Tb=0$ for vectors a, b
$\|\cdot\|$: norm
$S^{k\times k}$: space of $k\times k$ symmetric matrices

References

  1. Gao, R. Finite-sample guarantees for Wasserstein distributionally robust optimization: Breaking the curse of dimensionality. Oper. Res. 2023, 71, 2291–2306. [Google Scholar] [CrossRef]
  2. Kannan, R.; Bayraksan, G.; Luedtke, J.R. Residuals-based distributionally robust optimization with covariate information. Math. Program. 2024, 207, 369–425. [Google Scholar] [CrossRef]
  3. Arrigo, A.; Ordoudis, C.; Kazempour, J.; De Greve, Z.; Toubeau, J.F.; Vallee, F. Wasserstein distributionally robust chance-constrained optimization for energy and reserve dispatch: An exact and physically-bounded formulation. Eur. J. Oper. Res. 2022, 296, 304–322. [Google Scholar] [CrossRef]
  4. Lin, F.; Fang, X.; Gao, Z. Distributionally robust optimization: A review on theory and applications. Numer. Algebr. Control Optim. 2022, 12, 159–212. [Google Scholar] [CrossRef]
  5. Gao, R.; Kleywegt, A. Distributionally robust stochastic optimization with Wasserstein distance. Math. Oper. Res. 2023, 48, 603–655. [Google Scholar] [CrossRef]
  6. Gao, R.; Chen, X.; Kleywegt, A.J. Wasserstein distributionally robust optimization and variation regularization. Oper. Res. 2024, 72, 1177–1191. [Google Scholar] [CrossRef]
  7. Bertsimas, D.; Popescu, I. Optimal inequalities in probability theory: A convex optimization approach. SIAM J. Optim. 2005, 15, 780–804. [Google Scholar] [CrossRef]
  8. Wiesemann, W.; Kuhn, D.; Sim, M. Distributionally robust convex optimization. Oper. Res. 2014, 62, 1358–1376. [Google Scholar] [CrossRef]
  9. Delage, E.; Ye, Y. Distributionally robust optimization under moment uncertainty with application to data-driven problems. Oper. Res. 2010, 58, 595–612. [Google Scholar] [CrossRef]
  10. Liu, Q. Model and Stability Research on Distributionally Robust Optimization. Doctoral Thesis, Dalian University of Technology, Dalian, China, 2018. [Google Scholar]
  11. Basciftci, B.; Ahmed, S.; Shen, S. Distributionally robust facility location problem under decision-dependent stochastic demand. Eur. J. Oper. Res. 2019, 292, 548–561. [Google Scholar] [CrossRef]
  12. Doan, X. Distributionally robust optimization under endogenous uncertainty with an application in retrofitting planning. Eur. J. Oper. Res. 2022, 300, 73–84. [Google Scholar] [CrossRef]
  13. Zhang, J.; Xu, H.; Zhang, L. Quantitative stability analysis for distributionally robust optimization with moment constraints. SIAM J. Optim. 2016, 26, 1855–1882. [Google Scholar] [CrossRef]
  14. Lin, S.; Zhang, J.; Shi, N. An Alternating Iteration Algorithm for a Parameter-Dependent Distributionally Robust Optimization Model. Mathematics 2022, 10, 1175. [Google Scholar] [CrossRef]
  15. Noyan, N.; Rudolf, G.; Lejeune, M. Distributionally robust optimization under a decision-dependent ambiguity set with applications to machine scheduling and humanitarian logistics. INFORMS J. Comput. 2022, 34, 729–751. [Google Scholar] [CrossRef]
  16. Song, J.; Yang, W.; Zhao, C. Decision-dependent distributionally robust Markov decision process method in dynamic epidemic control. IISE Trans. 2024, 56, 458–470. [Google Scholar] [CrossRef]
  17. Li, M.; Tong, X.; Sun, H. Discretization and quantification for distributionally robust optimization with decision-dependent ambiguity sets. Optim. Methods Softw. 2024. [Google Scholar] [CrossRef]
  18. Kuo, Y.J.; Mittelmann, H.D. Interior point methods for second-order cone programming and OR applications. Comput. Optim. Appl. 2004, 28, 255–285. [Google Scholar] [CrossRef]
  19. Luo, X.; Wachter, A. A quadratically convergent sequential programming method for second-order cone programs capable of warm starts. SIAM J. Optim. 2024, 34, 2943–2972. [Google Scholar] [CrossRef]
  20. Andreani, R.; Haeser, G.; Mito, L.M.; Ramirez, C.H.; Silveira, T.P. Global convergence of algorithms under constant rank conditions for nonlinear second-order cone programming. J. Optim. Theory Appl. 2022, 195, 42–78. [Google Scholar] [CrossRef]
  21. Liang, L.; Sun, D.; Toh, K.C. An inexact augmented Lagrangian method for second-order cone programming with applications. SIAM J. Optim. 2021, 31, 1748–1773. [Google Scholar] [CrossRef]
  22. Sun, Y.; Wang, L.; Sun, J.; Wang, B.; Yuan, Y. An Implementable Augmented Lagrangian Method for Solving Second-Order Cone Constrained Variational Inequalities. Pac. J. Oper. Res. 2023, 40, 2250030. [Google Scholar] [CrossRef]
  23. Zhang, Y.; Liu, H. A new projection neural network for linear and convex quadratic second-order cone programming. J. Intell. Fuzzy Syst. 2022, 42, 2925–2937. [Google Scholar] [CrossRef]
  24. Liu, Y.; Mu, X. A new neural network based on smooth function for SOCCVI problems. J. Intell. Fuzzy Syst. 2023, 44, 1257–1268. [Google Scholar] [CrossRef]
  25. Wei, P.; Wang, X.; Wei, Y. Neural network models for time-varying tensor complementarity problems. Neurocomputing 2023, 523, 18–32. [Google Scholar] [CrossRef]
  26. Conchas, R.F.; Loukianov, A.G.; Sanchez, E.N.; Alanis, A.Y. Finite time convergent recurrent neural network for variational inequality problems subject to equality constraints. J. Frankl. Inst. 2024, 361, 583–597. [Google Scholar] [CrossRef]
  27. Wen, X.; Qin, S.; Feng, J. A novel projection neural network for solving a class of monotone variational inequalities. IEEE Trans. Syst. Man Cybern. Syst. 2023, 53, 5580–5590. [Google Scholar] [CrossRef]
  28. Ju, X.; Che, H.; Li, C.; He, X.; Feng, G. Exponential convergence of a proximal projection neural network for mixed variational inequalities and applications. Neurocomputing 2021, 454, 54–64. [Google Scholar] [CrossRef]
  29. Xu, H.K.; Dey, S.; Vetrivel, V. Notes on a neural network approach to inverse variational inequalities. Optimization 2021, 70, 901–910. [Google Scholar] [CrossRef]
  30. Vuong, P.T.; He, X.; Thong, D.V. Global exponential stability of a neural network for inverse variational inequalities. J. Optim. Theory Appl. 2021, 190, 915–930. [Google Scholar] [CrossRef]
  31. Yang, Y.; Li, W.; Song, B.; Zou, Y.; Pan, Y. Enhanced fault tolerant kinematic control of redundant robots with linear-variational-inequality based zeroing neural network. Eng. Appl. Artif. Intell. 2024, 133, 108068. [Google Scholar] [CrossRef]
  32. Wang, Y.; Lin, S.; Zhang, J.; Qiu, C. A Neural Network Based on a Nonsmooth Equation for a Box Constrained Variational Inequality Problem. J. Math. 2024, 1, 5511978. [Google Scholar]
  33. Shapiro, A. On Duality Theory of Conic Linear Problems. In Semi-Infinite Programming. Nonconvex Optimization and Its Applications; Goberna, M.A., López, M.A., Eds.; Springer: Boston, MA, USA, 2001; Volume 57. [Google Scholar]
  34. Faraut, J.; Korányi, A. Analysis on Symmetric Cones. In Oxford Mathematical Monographs; Oxford University Press: New York, NY, USA, 1994. [Google Scholar]
  35. Bonnans, J.; Shapiro, A. Perturbation Analysis of Optimization Problems; Springer: New York, NY, USA, 2000. [Google Scholar]
  36. Facchinei, F.; Pang, J.S. Finite-Dimensional Variational Inequalities and Complementarity Problems; Springer: New York, NY, USA, 2003. [Google Scholar]
  37. Xia, Y.; Wang, J. A general projection neural network for solving monotone variational inequalities and related optimization problems. IEEE Trans. Neural Netw. 2004, 15, 318–328. [Google Scholar] [CrossRef]
  38. Xia, Y.; Leung, H.; Wang, J. A projection neural network and its application to constrained optimization problems. IEEE Trans. Circuits Syst. I Fundam. Theory Appl. 2002, 49, 447–458. [Google Scholar]
Figure 1. Evolution of x(t) in the projection differential equation with τ = 0.5.
Figure 2. Evolution of x(t) in the projection differential equation with τ = 1.
Figure 3. Evolution of x(t) in the projection differential equation with τ = 10.
Table 1. The solution x and error of (27) when τ = 10 and (s0, x0) = (0, 1.5) with different values of μ.

Final Step i | μ | Solution x | Error of the Solution to the Equilibrium Point
60 | 10 | (0.2589, 0.0301)^T | 9.8522 × 10^−4
273 | 50 | (0.2512, 0.0062)^T | 3.9880 × 10^−5
521 | 100 | (0.2492, 0.0029)^T | 9.0500 × 10^−6
1022 | 200 | (0.2493, 0.0010)^T | 1.4900 × 10^−6
1522 | 300 | (0.2495, 0.0002)^T | 2.9000 × 10^−7
2390 | 500 | (0.2497, 0.0001)^T | 1.0000 × 10^−7