Article

Asymptotic Analysis for One-Stage Stochastic Linear Complementarity Problems and Applications

1. Department of Basic Courses Teaching, Dalian Polytechnic University, Dalian 116034, China
2. School of Mathematics, Liaoning Normal University, Dalian 116029, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(2), 482; https://doi.org/10.3390/math11020482
Submission received: 3 December 2022 / Revised: 8 January 2023 / Accepted: 13 January 2023 / Published: 16 January 2023

Abstract: The one-stage stochastic linear complementarity problem (SLCP) is a special case of the multi-stage stochastic linear complementarity problem and has important applications in economic engineering and operations management. In this paper, we establish asymptotic analysis results for a sample-average approximation (SAA) estimator of the SLCP. Asymptotic normality results for stochastic-constrained optimization problems are extended to the SLCP model, yielding conditions under which the SAA estimator of the SLCP converges in distribution to a multivariate normal with zero mean vector and an explicit covariance matrix. The results are then applied to estimate confidence regions of a solution of the SLCP.

1. Introduction

The finite-dimensional complementarity problem is a mature and fruitful topic in mathematical programming. To reflect uncertain factors arising in practice, stochastic complementarity problems have attracted extensive attention in the recent literature [1,2,3,4]. We investigate the following one-stage stochastic linear complementarity problem (SLCP): find $x \in \mathbb{R}^n$ such that
$$E[M(\xi(\omega))]x + E[q(\xi(\omega))] \ge 0,\quad x \ge 0,\quad \big(E[M(\xi(\omega))]x + E[q(\xi(\omega))]\big)^T x = 0, \qquad (1)$$
where $E$ denotes the mathematical expectation, $\xi:\Omega\to\Xi\subset\mathbb{R}^k$ is a random vector defined on a probability space $(\Omega,\mathcal{F},P)$, and $M:\mathbb{R}^k\to\mathbb{R}^{n\times n}$ and $q:\mathbb{R}^k\to\mathbb{R}^n$ are functions. Throughout the paper, $M(\xi(\omega))$ and $q(\xi(\omega))$ are measurable functions of $\omega$, and the condition
$$E\big[\|M(\xi(\omega))\|^2 + \|q(\xi(\omega))\|^2\big] < \infty$$
holds. To ease the notation, $\xi(\omega)$ will be written as $\xi$.
Problem (1) can be seen as a special case of the stochastic nonlinear complementarity problem (SNCP) in Gürkan et al. [5]. Examples of stochastic complementarity problems in operational research, finance, economics, and engineering can be found in [6,7].
Accurately calculating the expected values in (1) is often impossible or costly. The sample-average approximation (SAA) method [8,9,10] is an effective way to estimate them: one generates an independent identically distributed (iid) sample $\xi^1,\dots,\xi^N$ of $\xi$ and replaces each expectation by the corresponding sample average. Throughout the paper, the SLCP (1) will be approximated by
$$\hat M_N x + \hat q_N \ge 0,\quad x \ge 0,\quad (\hat M_N x + \hat q_N)^T x = 0, \qquad (2)$$
where
$$\hat M_N x + \hat q_N = \frac{1}{N}\sum_{j=1}^{N}\big(M(\xi^j)x + q(\xi^j)\big) \qquad (3)$$
is the sample-average mapping of $E[M(\xi(\omega))]x + E[q(\xi(\omega))]$. Problem (1) is called the true problem, and problem (2) the SAA problem of (1).
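As an illustration of the construction in (3), the sketch below (in Python; the helper names and the toy affine $M(\xi)$, $q(\xi)$ are our own assumptions, not the paper's) draws an iid sample and forms the sample-average data $\hat M_N$, $\hat q_N$:

```python
import numpy as np

def saa_lcp_data(M_of_xi, q_of_xi, samples):
    """Form the SAA data: M_hat_N and q_hat_N are the entrywise
    sample averages of M(xi^j) and q(xi^j) over the iid sample."""
    M_hat = np.mean([M_of_xi(xi) for xi in samples], axis=0)
    q_hat = np.mean([q_of_xi(xi) for xi in samples], axis=0)
    return M_hat, q_hat

# Toy data (an assumption for illustration): M(xi), q(xi) affine in a scalar xi.
rng = np.random.default_rng(0)
samples = rng.normal(1.0, 0.1, size=1000)          # iid xi^1, ..., xi^N
M_hat, q_hat = saa_lcp_data(
    lambda xi: np.array([[2.0, xi], [xi, 1.0]]),
    lambda xi: np.array([xi, -1.0]),
    samples,
)
# For large N, M_hat approaches [[2, 1], [1, 1]] and q_hat approaches (1, -1).
```

By the law of large numbers, the sample averages converge to the true expectations w.p.1, which is the basis of the consistency results below.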
In this paper, for an SAA solution $x_N$ and an almost sure cluster point $x^*$ of the sequence $\{x_N\}$, we are interested in the asymptotic behavior of the SAA estimator $\sqrt{N}(x_N - x^*)$, that is, in establishing conditions on $E[M(\xi(\omega))]$ such that, as $N\to\infty$,
$$\sqrt{N}(x_N - x^*) \xrightarrow{D} Z,$$
where $Z$ is a normal random variable in $\mathbb{R}^n$ and the symbol $D$ above the arrow denotes convergence in distribution.
The asymptotic behavior of SAA estimators has been discussed extensively in the literature [6,8,11,12] and the references therein. Most of those results concern convergence in distribution of SAA estimators for stochastic-constrained optimization problems or for normal map formulations of stochastic variational inequalities. In this paper, we first study the asymptotic behavior of SAA solutions of stochastic quadratic programming and then obtain conditions ensuring the asymptotic normality of SAA estimators of the SLCP. Finally, methods for estimating confidence regions of true solutions of the SLCP are provided.
The paper proceeds as follows: in Section 2, the asymptotic results are obtained under a nonsingularity or positive definiteness condition on $E[M(\xi(\omega))]$. The results are then applied in Section 3 to obtain confidence regions of an SLCP solution.

2. Main Results

Notice that if a solution of the SLCP (1) exists, then the SLCP (1) is equivalent to the following quadratic programming problem:
$$\min_{x\in\mathbb{R}^n}\; x^T E[M(\xi)]x + E[q(\xi)]^T x \quad \text{s.t.}\quad E[M(\xi)]x + E[q(\xi)] \ge 0,\; x \ge 0. \qquad (4)$$
In order to study the asymptotic behavior of the SAA SLCP, we first consider the following stochastic-constrained optimization (SCO) problem:
$$\min_x\; E[f(x,\xi(\omega))] \quad \text{s.t.}\quad E[g_i(x,\xi(\omega))] \le 0,\; i=1,\dots,p,\qquad E[h_j(x,\xi(\omega))] = 0,\; j=1,\dots,q, \qquad (5)$$
where $\xi$ is defined as in (1), and $f:\mathbb{R}^n\times\mathbb{R}^k\to\mathbb{R}$, $g_i:\mathbb{R}^n\times\mathbb{R}^k\to\mathbb{R}$, $i=1,\dots,p$, and $h_j:\mathbb{R}^n\times\mathbb{R}^k\to\mathbb{R}$, $j=1,\dots,q$, are random functions.
The definition of a Karush–Kuhn–Tucker (KKT) point of problem (5) is as follows:
Definition 1.
Suppose the expected value functions $E[f(\cdot,\xi(\omega))]$, $E[g(\cdot,\xi(\omega))]$, $E[h(\cdot,\xi(\omega))]$ are continuously differentiable. Then $(x,\lambda,\mu)\in\mathbb{R}^n\times\mathbb{R}^p\times\mathbb{R}^q$ is called a Karush–Kuhn–Tucker (KKT) point of problem (5) if it satisfies
$$0 = \nabla_x E[f(x,\xi(\omega))] + J_x E[g(x,\xi(\omega))]^T\lambda + J_x E[h(x,\xi(\omega))]^T\mu,$$
$$\lambda_i \ge 0,\quad \lambda_i\, E[g_i(x,\xi(\omega))] = 0,\quad i=1,\dots,p,$$
$$E[g_i(x,\xi(\omega))] \le 0,\quad i=1,\dots,p,$$
$$E[h_j(x,\xi(\omega))] = 0,\quad j=1,\dots,q,$$
where
$$g(x,\xi(\omega)) = (g_1(x,\xi(\omega)),\dots,g_p(x,\xi(\omega)))^T$$
and
$$h(x,\xi(\omega)) = (h_1(x,\xi(\omega)),\dots,h_q(x,\xi(\omega)))^T.$$
Let $\xi^1,\xi^2,\dots,\xi^N$ be an iid sample of $\xi$. Then the SAA problem of the SCO is
$$\min_x\; \hat f_N(x) \quad \text{s.t.}\quad \hat g_{iN}(x) \le 0,\; i=1,\dots,p,\qquad \hat h_{jN}(x) = 0,\; j=1,\dots,q, \qquad (6)$$
where
$$\hat f_N(x) = \frac1N\sum_{k=1}^N f(x,\xi^k),\qquad \hat g_{iN}(x) = \frac1N\sum_{k=1}^N g_i(x,\xi^k),\; i=1,\dots,p,$$
$$\hat h_{jN}(x) = \frac1N\sum_{k=1}^N h_j(x,\xi^k),\; j=1,\dots,q.$$
We make the following assumptions for the asymptotic analysis. Let $X$ be a nonempty compact subset of $\mathbb{R}^n$, and let $\sigma(x,\xi(\omega))$ be one of the elements of
$$\{f(\cdot,\xi(\omega)),\ \nabla f(\cdot,\xi(\omega)),\ \nabla^2 f(\cdot,\xi(\omega)),\ \nabla g_i(\cdot,\xi(\omega)),\ \nabla^2 g_i(\cdot,\xi(\omega)),\ i=1,\dots,p,\ \nabla h_j(\cdot,\xi(\omega)),\ \nabla^2 h_j(\cdot,\xi(\omega)),\ j=1,\dots,q\}.$$
Consider the following conditions.
(A1)
For each $x\in X$, $E[\sigma(x,\xi(\omega))]$ is well-defined and finite valued.
(A2)
There exists a positive valued random variable $C(\omega)$ with $E[C(\omega)] < +\infty$ such that for all $x_1,x_2\in X$ and almost every $\omega\in\Omega$,
$$\|\sigma(x_1,\xi(\omega)) - \sigma(x_2,\xi(\omega))\| \le C(\omega)\,\|x_1 - x_2\|.$$
(A3)
For any fixed $x\in X$, the functions $f(\cdot,\xi(\omega))$, $g_i(\cdot,\xi(\omega))$, $i=1,\dots,p$, and $h_j(\cdot,\xi(\omega))$, $j=1,\dots,q$, are twice continuously differentiable at $x$ for almost every $\omega\in\Omega$.
The above assumptions are commonly used in stochastic optimization. By Theorems 6.3.2 and 6.3.6 in [8], the functions
$$E[f(\cdot,\xi(\omega))],\quad E[g_i(\cdot,\xi(\omega))],\quad E[h_j(\cdot,\xi(\omega))],\quad i=1,\dots,p,\ j=1,\dots,q,$$
are twice continuously differentiable on $X$, and
$$\lim_{N\to\infty}\,\sup_{x\in X}\Big\|\frac1N\sum_{k=1}^N \sigma(x,\xi^k) - E[\sigma(x,\xi(\omega))]\Big\| = 0 \quad \text{w.p.1}.$$
Let $(x_0,\lambda_0,\mu_0)\in\mathbb{R}^n\times\mathbb{R}^p\times\mathbb{R}^q$. We need the following conditions:
(A4)
The second order sufficient condition (SOSC) holds at $(x_0,\lambda_0,\mu_0)$, i.e.,
$$d^T\,\nabla_x^2 E[L(x_0,\lambda_0,\mu_0,\xi(\omega))]\,d > 0$$
for every nonzero vector $d$ satisfying
$$\nabla_x E[g_i(x_0,\xi(\omega))]^T d \le 0,\ i\in I(x_0),\qquad \nabla_x E[h_j(x_0,\xi(\omega))]^T d = 0,\ j=1,\dots,q,$$
where
$$I(x_0) = \{i\in\{1,\dots,p\} : E[g_i(x_0,\xi)] = 0\},$$
$$L(x,\lambda,\mu,\xi(\omega)) = f(x,\xi) + g(x,\xi)^T\lambda + h(x,\xi)^T\mu.$$
(A5)
The Mangasarian–Fromovitz constraint qualification (MFCQ) holds at $x_0$, i.e., the gradients
$$\nabla_x E[h_j(x_0,\xi(\omega))],\quad j=1,\dots,q,$$
are linearly independent, and there exists $d$ satisfying
$$\nabla_x E[g_i(x_0,\xi(\omega))]^T d < 0,\ i\in I(x_0),\qquad \nabla_x E[h_j(x_0,\xi(\omega))]^T d = 0,\ j=1,\dots,q.$$
(A6)
The linear independence constraint qualification (LICQ) holds at $x_0$, i.e., the gradients
$$\nabla_x E[g_i(x_0,\xi(\omega))],\ i\in I(x_0),\qquad \nabla_x E[h_j(x_0,\xi(\omega))],\ j=1,\dots,q,$$
are linearly independent.
(A7)
The strict complementarity condition (SCC) holds, i.e.,
$$(\lambda_0)_i > 0,\quad \forall\, i\in I(x_0) =: J.$$
Suppose that $\hat L_N(x,\lambda,\mu)$ is the SAA function of $L(x,\lambda,\mu,\xi)$. Then the following propositions follow directly from Theorems 3.1 and 3.2 in [6].
Proposition 1.
Let $(x_0,\lambda_0,\mu_0)\in\mathbb{R}^n\times\mathbb{R}^p\times\mathbb{R}^q$ and suppose there exists a compact neighborhood $X$ of $x_0$ on which conditions (A1)–(A3) hold. If $(x_0,\lambda_0,\mu_0)$ is a KKT point of (5) and conditions (A4) and (A5) hold, then there exists $(x_N,\lambda_N,\mu_N)$ satisfying the KKT conditions of (6) with $(x_N,\lambda_N,\mu_N)\to(x_0,\lambda_0,\mu_0)$ w.p.1 as $N\to\infty$.
Proposition 2.
Suppose conditions (A1)–(A3) hold on $X$, where $X$ is a compact neighborhood of $x_0$. If a sequence of KKT points $(x_N,\lambda_N,\mu_N)$ of (6) converges to $(x_0,\lambda_0,\mu_0)$ almost surely, and (A4) and (A6) hold at $(x_0,\lambda_0,\mu_0)$, then
$$\sqrt{N}\big((x_N,(\lambda_N)_J,\mu_N) - (x_0,(\lambda_0)_J,\mu_0)\big) \xrightarrow{D} (u,v,w),$$
where $(u,v,w)\in\mathbb{R}^n\times\mathbb{R}^{|J|}\times\mathbb{R}^q$ is the KKT point of the random quadratic programming problem
$$\min_{d\in\mathbb{R}^n}\; c_1^T d + \tfrac12 d^T\,\nabla_x^2 E[L(x_0,\lambda_0,\mu_0,\xi)]\,d \quad \text{s.t.}\quad \big[J_x E[g(x_0,\xi)]d + c_2\big]_{J_+} = 0,\ \big[J_x E[g(x_0,\xi)]d + c_2\big]_{J_0} \le 0,\ J_x E[h(x_0,\xi)]d + c_3 = 0,$$
where $J_+ = \{i\in J : (\lambda_0)_i > 0\}$, $J_0 = \{i\in J : (\lambda_0)_i = 0\}$, and $c_1,c_2,c_3$ satisfy
$$\sqrt{N}\big(\nabla_x\hat L_N(x_0,\lambda_0,\mu_0) - \nabla_x E[L(x_0,\lambda_0,\mu_0,\xi)]\big) \xrightarrow{D} c_1,\quad \sqrt{N}\big(\hat g_N(x_0) - E[g(x_0,\xi)]\big) \xrightarrow{D} c_2,\quad \sqrt{N}\big(\hat h_N(x_0) - E[h(x_0,\xi)]\big) \xrightarrow{D} c_3.$$
Theorem 1.
Suppose the conditions of Proposition 2 hold. If a sequence of KKT points $(x_N,\lambda_N,\mu_N)$ of (6) converges to $(x_0,\lambda_0,\mu_0)$ almost surely and (A7) holds, then
$$\sqrt{N}\big((x_N,(\lambda_N)_J,\mu_N) - (x_0,(\lambda_0)_J,\mu_0)\big)$$
converges in distribution to a normal with mean 0 and covariance matrix
$$T = \begin{pmatrix} \Psi & G^T & H^T \\ G & 0 & 0 \\ H & 0 & 0 \end{pmatrix}^{-1} \begin{pmatrix} \Sigma_{11} & \Sigma_{12} & \Sigma_{13} \\ \Sigma_{21} & \Sigma_{22} & \Sigma_{23} \\ \Sigma_{31} & \Sigma_{32} & \Sigma_{33} \end{pmatrix} \begin{pmatrix} \Psi & G^T & H^T \\ G & 0 & 0 \\ H & 0 & 0 \end{pmatrix}^{-1},$$
where
$$\Psi = \nabla_x^2 E[L(x_0,\lambda_0,\mu_0,\xi)],\qquad G = \big(J_x E[g(x_0,\xi)]\big)_J,\qquad H = J_x E[h(x_0,\xi)],$$
$$\alpha_1(\xi) = \nabla_x L(x_0,\lambda_0,\mu_0,\xi) - \nabla_x E[L(x_0,\lambda_0,\mu_0,\xi)],\qquad \alpha_2(\xi) = \big[g(x_0,\xi) - E[g(x_0,\xi)]\big]_J,$$
$$\alpha_3(\xi) = h(x_0,\xi) - E[h(x_0,\xi)],\qquad \Sigma_{ij} = E[\alpha_i(\xi)\alpha_j(\xi)^T],\quad i,j = 1,2,3.$$
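The sandwich covariance $T$ of Theorem 1 has a direct numerical transcription; the sketch below (block sizes and the toy inputs are our own assumptions) assembles the KKT block matrix and forms $T$:

```python
import numpy as np

def sandwich_covariance(Psi, G, H, Sigma):
    """T = A^{-1} Sigma A^{-1}, where A = [[Psi, G^T, H^T], [G, 0, 0],
    [H, 0, 0]] is the symmetric KKT block matrix of Theorem 1."""
    p, q = G.shape[0], H.shape[0]
    A = np.block([
        [Psi, G.T, H.T],
        [G, np.zeros((p, p)), np.zeros((p, q))],
        [H, np.zeros((q, p)), np.zeros((q, q))],
    ])
    A_inv = np.linalg.inv(A)   # nonsingular under the SOSC and LICQ
    return A_inv @ Sigma @ A_inv

# Toy blocks (assumed data): n = 2, one active inequality, one equality.
Psi = np.array([[2.0, 0.0], [0.0, 3.0]])
G = np.array([[1.0, 0.0]])
H = np.array([[0.0, 1.0]])
T = sandwich_covariance(Psi, G, H, np.eye(4))
```

Since $A$ is symmetric, $T$ inherits the symmetry of $\Sigma$, as a covariance matrix must.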
Proof. 
By Proposition 2, under condition (A7) we know that
$$\sqrt{N}\big((x_N,(\lambda_N)_J,\mu_N) - (x_0,(\lambda_0)_J,\mu_0)\big) \xrightarrow{D} (u,v,w),$$
where $(u,v,w)$ is the KKT point of the stochastic quadratic programming problem
$$\min_{d\in\mathbb{R}^n}\; c_1^T d + \tfrac12 d^T\,\nabla_x^2 E[L(x_0,\lambda_0,\mu_0,\xi)]\,d \quad \text{s.t.}\quad \big[J_x E[g(x_0,\xi)]d + c_2\big]_J = 0,\quad J_x E[h(x_0,\xi)]d + c_3 = 0.$$
That is, $(u,v,w)$ satisfies
$$\begin{pmatrix} \nabla_x^2 E[L(x_0,\lambda_0,\mu_0,\xi)] & \big(J_x E[g(x_0,\xi)]\big)_J^T & J_x E[h(x_0,\xi)]^T \\ \big(J_x E[g(x_0,\xi)]\big)_J & 0 & 0 \\ J_x E[h(x_0,\xi)] & 0 & 0 \end{pmatrix} \begin{pmatrix} u \\ v \\ w \end{pmatrix} = -\begin{pmatrix} c_1 \\ (c_2)_J \\ c_3 \end{pmatrix}.$$
Notice that under (A4) the coefficient matrix above is nonsingular, so
$$\begin{pmatrix} u \\ v \\ w \end{pmatrix} = -\begin{pmatrix} \nabla_x^2 E[L(x_0,\lambda_0,\mu_0,\xi)] & \big(J_x E[g(x_0,\xi)]\big)_J^T & J_x E[h(x_0,\xi)]^T \\ \big(J_x E[g(x_0,\xi)]\big)_J & 0 & 0 \\ J_x E[h(x_0,\xi)] & 0 & 0 \end{pmatrix}^{-1} \begin{pmatrix} c_1 \\ (c_2)_J \\ c_3 \end{pmatrix}.$$
Then the conclusion holds. □
Remark 1.
Notice that in [8,11] (Section 5.2.2), the asymptotic analysis of the optimal solutions of SAA stochastic-constrained optimization is established for constraints that do not involve random vectors. Theorem 1 extends the results in [8,11] (Section 5.2.2) to stochastic-constrained optimization whose constraints contain random vectors.
We next apply the results above to the stochastic linear complementarity problem (1). We first state some conditions:
(A8)
$E[M(\xi)]$ is a symmetric matrix, and for all $d\in\mathbb{R}^n$,
$$d^T E[M(\xi)]\,d = 0 \;\Longrightarrow\; d = 0.$$
(A9)
Let $x_0$ be a solution of the SLCP (1); the nondegeneracy condition holds at $x_0$, i.e.,
$$(x_0)_i + \big(E[M(\xi)]x_0 + E[q(\xi)]\big)_i > 0 \quad \text{for } i = 1,\dots,n.$$
(A10)
$E[M(\xi)]$ is positive definite.
Notice that if (A10) holds, then (A8) holds. (A9) is a typical condition in the study of complementarity problems.
Let $\xi^1,\xi^2,\dots,\xi^N$ be an iid sample of $\xi$. The SAA problem of (4) is then
$$\min_{x\in\mathbb{R}^n}\; x^T\hat M_N x + x^T\hat q_N \quad \text{s.t.}\quad \hat M_N x + \hat q_N \ge 0,\; x \ge 0. \qquad (7)$$
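The SAA problem (7) can be handed to any nonlinear programming solver. The sketch below (assuming SciPy is available; the solver choice and starting point are our own assumptions) minimizes $x^T\hat M_N x + x^T\hat q_N$ over the feasible set; a minimizer with zero objective value solves the SAA complementarity problem (2):

```python
import numpy as np
from scipy.optimize import minimize

def solve_saa_lcp(M_hat, q_hat):
    """Solve the SAA quadratic program (7) with SLSQP:
    minimize x^T M_hat x + q_hat^T x  s.t.  M_hat x + q_hat >= 0, x >= 0.
    On the feasible set the objective equals x^T(M_hat x + q_hat) >= 0,
    so a zero-value minimizer is an SAA complementarity solution."""
    n = len(q_hat)
    res = minimize(
        lambda x: x @ M_hat @ x + q_hat @ x,
        x0=np.ones(n),
        constraints=[{"type": "ineq", "fun": lambda x: M_hat @ x + q_hat}],
        bounds=[(0.0, None)] * n,
        method="SLSQP",
    )
    return res.x

# Deterministic check with the expected data of Example 1 in Section 3,
# M = [[2, 1], [1, 1]], q = (1, -1), whose complementarity solution is (0, 1):
x = solve_saa_lcp(np.array([[2.0, 1.0], [1.0, 1.0]]), np.array([1.0, -1.0]))
```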
Next, we provide our results.
Theorem 2.
Suppose that $E[M(\xi)]$ and $E[q(\xi)]$ are well-defined and finite. Let $x_0$ be a solution of the SLCP (1), let $(x_0,\lambda_0,\mu_0)\in\mathbb{R}^n\times\mathbb{R}^n\times\mathbb{R}^n$ be a KKT point of (4), let (A9) hold at $x_0$, and let (A8) hold. Then
(i) 
for $N$ large enough, there exists $(x_N,\lambda_N,\mu_N)$ satisfying the KKT conditions of (7), and $(x_N,\lambda_N,\mu_N)\to(x_0,\lambda_0,\mu_0)$ w.p.1 as $N\to\infty$;
(ii) 
for the $(x_N,\lambda_N,\mu_N)$ in (i),
$$\sqrt{N}\big[(x_N,(\lambda_N)_J,(\mu_N)_I) - (x_0,(\lambda_0)_J,(\mu_0)_I)\big]$$
converges in distribution to a normal with mean 0 and covariance matrix $D^{-1}\Upsilon D^{-1}$, where
$$D = \begin{pmatrix} 2E[M(\xi)] & \big(E[M(\xi)]_J\big)^T & E_I \\ E[M(\xi)]_J & 0 & 0 \\ E_I^T & 0 & 0 \end{pmatrix},\qquad \Upsilon = \begin{pmatrix} \Upsilon_{11} & \Upsilon_{12} & 0 \\ \Upsilon_{21} & \Upsilon_{22} & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
with
$$J = \{i\in\{1,\dots,n\} : (E[M(\xi)]x_0 + E[q(\xi)])_i = 0\},\qquad I = \{i\in\{1,\dots,n\} : (x_0)_i = 0\},$$
$E$ the identity matrix, $E_I$ the submatrix of its columns indexed by $I$, $E[M(\xi)]_J$ the submatrix of rows of $E[M(\xi)]$ indexed by $J$,
$$\beta_1(\xi) = (M(\xi) - E[M(\xi)])x_0 + q(\xi) - E[q(\xi)],$$
$$\beta_2(\xi) = \big[(M(\xi) - E[M(\xi)])x_0 + q(\xi) - E[q(\xi)]\big]_J,$$
$$\Upsilon_{ij} = E[\beta_i(\xi)\beta_j(\xi)^T],\quad i,j = 1,2.$$
Proof. 
We only need to verify the conditions of Theorem 1. The proof proceeds in three steps.
Step 1. Notice that $x_0$ satisfies
$$E[M(\xi)]x_0 + E[q(\xi)] \ge 0,\quad x_0 \ge 0,\quad (E[M(\xi)]x_0 + E[q(\xi)])^T x_0 = 0,$$
and $(x_0,\lambda_0,\mu_0)$ satisfies the KKT conditions of (4), that is,
$$0 = 2E[M(\xi)]x_0 + E[q(\xi)] - E[M(\xi)]\lambda_0 - \mu_0, \qquad (8)$$
$$E[M(\xi)]x_0 + E[q(\xi)] \ge 0,\quad \lambda_0 \ge 0,\quad (E[M(\xi)]x_0 + E[q(\xi)])^T\lambda_0 = 0, \qquad (9)$$
$$x_0 \ge 0,\quad \mu_0 \ge 0,\quad x_0^T\mu_0 = 0.$$
Then we have
$$0 = x_0^T(2E[M(\xi)]x_0) + E[q(\xi)]^T x_0 - x_0^T E[M(\xi)]\lambda_0 - x_0^T\mu_0$$
and
$$0 = 2\lambda_0^T E[M(\xi)]x_0 + \lambda_0^T E[q(\xi)] - \lambda_0^T E[M(\xi)]\lambda_0 - \lambda_0^T\mu_0,$$
which, by (8) and (9), means that
$$0 = x_0^T E[M(\xi)]x_0 - x_0^T E[M(\xi)]\lambda_0 \qquad (10)$$
and
$$0 = \lambda_0^T E[M(\xi)]x_0 - \lambda_0^T E[M(\xi)]\lambda_0 - \lambda_0^T\mu_0. \qquad (11)$$
Note that $\lambda_0$ vanishes at the indices where $E[M(\xi)]x_0 + E[q(\xi)]$ is positive, and $\mu_0$ vanishes where $x_0$ is positive. Under condition (A9), whenever $(E[M(\xi)]x_0 + E[q(\xi)])_i = 0$ we have $(x_0)_i > 0$ and hence $(\mu_0)_i = 0$, so that
$$\lambda_0^T\mu_0 = 0.$$
Then, combining (10) and (11), we have
$$(x_0 - \lambda_0)^T E[M(\xi)]\,(x_0 - \lambda_0) = 0,$$
which by condition (A8) means that $x_0 = \lambda_0$. Consequently, by (8), we obtain
$$\mu_0 = E[M(\xi)]x_0 + E[q(\xi)].$$
Then the SCC in Theorem 1 holds for (4).
Step 2. Let
$$J = \{i\in\{1,\dots,n\} : (E[M(\xi)]x_0 + E[q(\xi)])_i = 0\}$$
and
$$I = \{i\in\{1,\dots,n\} : (x_0)_i = 0\}.$$
By the complementarity of $x_0$ and condition (A9), we have
$$J\cap I = \emptyset,\qquad J\cup I = \{1,\dots,n\}.$$
Let
$$E[M(\xi)] = \begin{pmatrix} E[m_1(\xi)]^T \\ \vdots \\ E[m_n(\xi)]^T \end{pmatrix},$$
where $m_i(\cdot):\mathbb{R}^k\to\mathbb{R}^n$, $i=1,\dots,n$, are random vectors. If $|J| = k$, without loss of generality we assume that
$$J = \{1,\dots,k\},\qquad I = \{k+1,\dots,n\}.$$
Next, we show that
$$E[m_1(\xi)],\dots,E[m_k(\xi)],\ e_{k+1},\dots,e_n$$
are linearly independent, where $e_i$ denotes the $i$-th unit vector. Indeed, let
$$\tilde M = \begin{pmatrix} E[m_1(\xi)]^T \\ \vdots \\ E[m_k(\xi)]^T \\ e_{k+1}^T \\ \vdots \\ e_n^T \end{pmatrix};$$
we only need to show that $\tilde M d = 0 \Rightarrow d = 0$. Suppose $d\in\mathbb{R}^n$ satisfies
$$\tilde M d = \big(E[m_1(\xi)]^T d,\dots,E[m_k(\xi)]^T d,\ e_{k+1}^T d,\dots,e_n^T d\big)^T = 0.$$
Then every term of the sum below vanishes, so
$$\sum_{i=1}^n (e_i^T d)\big(E[m_i(\xi)]^T d\big) = d^T E[M(\xi)]\,d = 0,$$
which by condition (A8) means that $d = 0$. So $\tilde M$ is nonsingular. Consequently, the LICQ for (4) holds at $x_0$.
Step 3. We now show that the SOSC for (4) holds at $(x_0,\lambda_0,\mu_0)$. Notice that under the LICQ, the SOSC requires
$$d^T E[M(\xi)]\,d > 0$$
for every nonzero vector $d\in C$, where
$$C = \big\{d\in\mathbb{R}^n : E[m_i(\xi)]^T d = 0 \ \text{for}\ (\lambda_0)_i > 0,\ i\in J;\quad e_j^T d = 0 \ \text{for}\ (\mu_0)_j > 0,\ j\in I\big\}.$$
Since $\lambda_0 = x_0$ and $\mu_0 = E[M(\xi)]x_0 + E[q(\xi)]$, condition (A9) gives
$$(\lambda_0)_i > 0,\ \forall\, i\in J,\qquad (\mu_0)_j > 0,\ \forall\, j\in I,$$
so that
$$C = \big\{d\in\mathbb{R}^n : E[m_i(\xi)]^T d = 0,\ i\in J;\quad e_j^T d = 0,\ j\in I\big\}.$$
Similarly to Step 2 above, we have
$$d^T E[M(\xi)]\,d = 0 \quad \text{for } d\in C.$$
Then $d = 0$ follows from condition (A8), so $C = \{0\}$ and the SOSC holds for (4) vacuously. This verifies the conditions of Theorem 1. □
Notice that under (A10), the SLCP (1) is equivalent to the following problem:
$$\min_{x\in\mathbb{R}^n}\; \tfrac12 x^T E[M(\xi)]x + E[q(\xi)]^T x \quad \text{s.t.}\quad x \ge 0. \qquad (12)$$
By Theorem 2, we obtain the following results.
Corollary 1.
Suppose $E[M(\xi)]$ and $E[q(\xi)]$ are well-defined and finite. Let $x_0$ be a solution of the SLCP (1), let $(x_0,\mu_0)\in\mathbb{R}^n\times\mathbb{R}^n$ be a KKT point of (12), let (A9) hold at $x_0$, and let (A10) hold. Then
(i) 
for $N$ large enough, there exists $(x_N,\mu_N)$ satisfying the KKT conditions of the SAA problem of (12), and
$$(x_N,\mu_N)\to(x_0,\mu_0)\quad \text{w.p.1 as } N\to\infty;$$
(ii) 
for the $(x_N,\mu_N)$ in (i),
$$\sqrt{N}\big[(x_N,(\mu_N)_I) - (x_0,(\mu_0)_I)\big]$$
converges in distribution to a normal with mean 0 and covariance matrix
$$\begin{pmatrix} E[M(\xi)] & E_I \\ E_I^T & 0 \end{pmatrix}^{-1} \begin{pmatrix} E[\gamma(\xi)\gamma(\xi)^T] & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} E[M(\xi)] & E_I \\ E_I^T & 0 \end{pmatrix}^{-1}$$
with
$$I = \{i\in\{1,\dots,n\} : (x_0)_i = 0\},$$
$$\gamma(\xi) = (M(\xi) - E[M(\xi)])x_0 + q(\xi) - E[q(\xi)].$$

3. Applications

In this section, we apply the results above to estimate confidence regions of solutions of the SLCP (1). Inspired by Theorem 4.2 in [12], we have the following theorem:
Theorem 3.
Suppose the conditions of Theorem 2 hold. Let $\{\Sigma_N\}$ be a sequence of matrices converging to $E[\gamma(\xi)\gamma(\xi)^T]$ with probability one, and let $\Sigma := E[\gamma(\xi)\gamma(\xi)^T]$ be nonsingular, where $\gamma(\xi) = (M(\xi) - E[M(\xi)])x_0 + q(\xi) - E[q(\xi)]$. Let
$$\tilde\Sigma_N = (2\hat M_N,\ \hat M_N,\ E)^T\,\Sigma_N^{-1}\,(2\hat M_N,\ \hat M_N,\ E),\qquad \tilde\Sigma = (2E[M(\xi)],\ E[M(\xi)],\ E)^T\,\Sigma^{-1}\,(2E[M(\xi)],\ E[M(\xi)],\ E).$$
Assume the eigenvalue decomposition of $\tilde\Sigma_N$ is
$$\tilde\Sigma_N = U_N^T C_N U_N = \big((U_N)_1^T,\ (U_N)_2^T\big)\begin{pmatrix} \Lambda_N & 0 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} (U_N)_1 \\ (U_N)_2 \end{pmatrix},$$
where $U_N\in\mathbb{R}^{3n\times 3n}$ is an orthogonal matrix, $(U_N)_1\in\mathbb{R}^{r_N\times 3n}$, $(U_N)_2\in\mathbb{R}^{(3n-r_N)\times 3n}$, $\Lambda_N\in\mathbb{R}^{r_N\times r_N}$ is a diagonal matrix with monotonically decreasing positive elements, and $r_N$ is the rank of $\tilde\Sigma_N$. Then, for $\alpha\in(0,1)$ and any $\varepsilon > 0$,
$$\lim_{N\to\infty}\,\text{Prob}\big((x_0,\lambda_0,\mu_0)\in\hat Q_{N,\varepsilon}\big) = 1 - \alpha,$$
where
$$\hat Q_{N,\varepsilon} = \Big\{z\in\mathbb{R}^{3n} : N(z - z_N)^T(U_N)_1^T\Lambda_N^{-1}(U_N)_1(z - z_N) \le \chi^2_{r_N}(\alpha),\ \sqrt{N}\,\big\|(U_N)_2(z - z_N)\big\| \le \varepsilon\Big\}$$
with $z = (x,\lambda,\mu)$ and $z_N = (x_N,\lambda_N,\mu_N)$.
Proof. 
We know from Theorem 2 that
$$\begin{pmatrix} 2E[M(\xi)] & \big(E[M(\xi)]_J\big)^T & E_I \\ E[M(\xi)]_J & 0 & 0 \\ E_I^T & 0 & 0 \end{pmatrix}\sqrt{N}\big((x_N,(\lambda_N)_J,(\mu_N)_I) - (x_0,(\lambda_0)_J,(\mu_0)_I)\big)$$
converges in distribution to a normal with mean 0 and covariance matrix
$$\begin{pmatrix} \Upsilon_{11} & \Upsilon_{12} & 0 \\ \Upsilon_{21} & \Upsilon_{22} & 0 \\ 0 & 0 & 0 \end{pmatrix},$$
which, by Theorem 5.1 in [13], means that
$$\sqrt{N}\big[2\hat M_N(x_N - x_0) + (\hat M_N)_{\cdot J}\big((\lambda_N)_J - (\lambda_0)_J\big) + E_I\big((\mu_N)_I - (\mu_0)_I\big)\big]$$
converges in distribution to a normal with mean 0 and covariance matrix $\Upsilon_{11} = E[\gamma(\xi)\gamma(\xi)^T]$, where $(\hat M_N)_{\cdot J}$ denotes the submatrix of columns of $\hat M_N$ indexed by $J$. Notice that under condition (A9), for $N$ large enough, $(\mu_N)_i - (\mu_0)_i = 0$ for $i\in\{1,\dots,n\}\setminus I$ and $(\lambda_N)_i - (\lambda_0)_i = 0$ for $i\in\{1,\dots,n\}\setminus J$. Hence
$$E(\mu_N - \mu_0) = E_I\big((\mu_N)_I - (\mu_0)_I\big)\qquad \text{and}\qquad \hat M_N(\lambda_N - \lambda_0) = (\hat M_N)_{\cdot J}\big((\lambda_N)_J - (\lambda_0)_J\big).$$
Therefore,
$$\sqrt{N}\big[2\hat M_N(x_N - x_0) + \hat M_N(\lambda_N - \lambda_0) + (\mu_N - \mu_0)\big]$$
converges in distribution to a normal with mean 0 and covariance matrix $E[\gamma(\xi)\gamma(\xi)^T]$. Since $E[\gamma(\xi)\gamma(\xi)^T]$ is nonsingular, for $N$ large enough $\Sigma_N$ is almost surely nonsingular. Thus,
$$N\big[2\hat M_N(x_N - x_0) + \hat M_N(\lambda_N - \lambda_0) + \mu_N - \mu_0\big]^T\,\Sigma_N^{-1}\,\big[2\hat M_N(x_N - x_0) + \hat M_N(\lambda_N - \lambda_0) + \mu_N - \mu_0\big]$$
converges weakly to a $\chi^2_n$ random variable. Consequently, the result follows directly from the proof of Theorem 4.2 in [12]. □
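Operationally, the region $\hat Q_{N,\varepsilon}$ of Theorem 3 can be built from an eigendecomposition of $\tilde\Sigma_N$. A sketch follows (the function and variable names are our own, and the $\chi^2$ quantile is passed in rather than computed):

```python
import numpy as np

def in_confidence_region(z, z_N, Sigma_tilde_N, N, chi2_quantile, eps, tol=1e-10):
    """Membership test for Q_hat_{N,eps}: the range of Sigma_tilde_N
    carries a chi-square ellipsoid, its null space an eps-band."""
    w, U = np.linalg.eigh(Sigma_tilde_N)
    order = np.argsort(w)[::-1]            # eigenvalues in decreasing order
    w, U = w[order], U[:, order]
    r = int(np.sum(w > tol))               # r_N = rank(Sigma_tilde_N)
    U1, U2 = U[:, :r].T, U[:, r:].T        # (U_N)_1 and (U_N)_2 as row blocks
    d = np.asarray(z, float) - np.asarray(z_N, float)
    in_ellipsoid = N * (U1 @ d) @ np.diag(1.0 / w[:r]) @ (U1 @ d) <= chi2_quantile
    in_band = np.sqrt(N) * np.linalg.norm(U2 @ d) <= eps
    return bool(in_ellipsoid and in_band)

# Rank-1 toy matrix (assumed data); 3.84 is roughly the chi-square(1) 95% quantile.
inside = in_confidence_region([0.1, 0.0], [0.0, 0.0],
                              np.diag([1.0, 0.0]), N=100,
                              chi2_quantile=3.84, eps=0.1)
# -> True: 100 * 0.1^2 = 1.0 <= 3.84, and the null-space part is 0.
```

The split into $(U_N)_1$ and $(U_N)_2$ mirrors the theorem: the singular directions of $\tilde\Sigma_N$ carry no distributional information and are only constrained by the $\varepsilon$-band.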
In practice, let
$$\Sigma_N = \frac1N\sum_{i=1}^N\Big[F(x_N,\xi^i) - \frac1N\sum_{i=1}^N F(x_N,\xi^i)\Big]\Big[F(x_N,\xi^i) - \frac1N\sum_{i=1}^N F(x_N,\xi^i)\Big]^T, \qquad (15)$$
where $F(x,\xi) = M(\xi)x + q(\xi)$. Then $\Sigma_N$ converges to $\Sigma$ almost surely as $N$ tends to infinity.
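The estimator (15) is simply the sample covariance of $F(x_N,\xi^i) = M(\xi^i)x_N + q(\xi^i)$; a sketch with made-up affine data (the toy model below is an assumption for illustration only):

```python
import numpy as np

def sigma_N(x_N, samples, M_of_xi, q_of_xi):
    """Plug-in estimator (15): sample covariance of F(x_N, xi) = M(xi) x_N + q(xi)."""
    F = np.array([M_of_xi(xi) @ x_N + q_of_xi(xi) for xi in samples])
    F_centered = F - F.mean(axis=0)
    return F_centered.T @ F_centered / len(samples)

# Assumed toy model: M(xi) = xi * I, q(xi) = 0, xi ~ N(1, 0.1), x_N = (1, 0);
# then F = (xi, 0) and Sigma_N should approach diag(0.01, 0).
rng = np.random.default_rng(2)
S = sigma_N(np.array([1.0, 0.0]),
            rng.normal(1.0, 0.1, size=4000),
            lambda xi: xi * np.eye(2),
            lambda xi: np.zeros(2))
```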
We next present three examples illustrating applications of the results above.
Example 1.
Consider an SLCP (1) with
$$M(\xi) = \begin{pmatrix} 2\xi_1 & \xi_2 \\ \xi_2 & \xi_3 \end{pmatrix},\qquad q(\xi) = \begin{pmatrix} \xi_1 \\ -\xi_2 \end{pmatrix},$$
where $\xi = (\xi_1,\xi_2,\xi_3)$ and $\xi_1,\xi_2,\xi_3$ are independent random variables, each with normal distribution $N(1,\,0.1)$. Since $E[M(\xi)]$ is positive definite, that is, condition (A10) holds, the corresponding true optimization problem (12) is
$$\min_{x\in\mathbb{R}^2}\; \tfrac12 x^T\begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}x + (1,\,-1)^T x \quad \text{s.t.}\quad x \ge 0. \qquad (16)$$
Generating an iid sample $\xi^1,\dots,\xi^N$ of $\xi$, the SAA problem is
$$\min_{x\in\mathbb{R}^2}\; \tfrac12 x^T\hat M_N x + \hat q_N^T x \quad \text{s.t.}\quad x \ge 0, \qquad (17)$$
where $\hat M_N$ and $\hat q_N$ are the sample-average approximations of $E[M(\xi)]$ and $E[q(\xi)]$, respectively.
Next, we verify the conditions of Theorem 3. By direct computation, the optimal solution of the true problem (16) is $x_0 = (0,1)^T$ with corresponding multiplier $\mu_0 = E[M(\xi)]x_0 + E[q(\xi)] = (2,0)^T$. Then
$$(x_0)_i + \big(E[M(\xi)]x_0 + E[q(\xi)]\big)_i > 0 \quad \text{for } i = 1,2,$$
so condition (A9) holds. Condition (A8) holds because (A10) does. The matrix
$$E[\gamma(\xi)\gamma(\xi)^T] = \begin{pmatrix} 0.02 & -0.01 \\ -0.01 & 0.02 \end{pmatrix}$$
is nonsingular. Then all the conditions of Theorem 3 hold. In practice, such conditions can be verified through the corresponding SAA estimators, owing to the robustness of these conditions.
Let $x_N$ and $\mu_N$ denote the SAA optimal solution and multiplier of (17), respectively. By Theorem 3, for fixed $\varepsilon$, the 95% confidence region is
$$\Big\{(x,\mu)\in\mathbb{R}^4 : N\begin{pmatrix} x - x_N \\ \mu - \mu_N \end{pmatrix}^T(U_N)_1^T\Lambda_N^{-1}(U_N)_1\begin{pmatrix} x - x_N \\ \mu - \mu_N \end{pmatrix} \le \chi^2_{r_N}(0.05),\ \sqrt{N}\,\Big\|(U_N)_2\begin{pmatrix} x - x_N \\ \mu - \mu_N \end{pmatrix}\Big\| \le \varepsilon\Big\},$$
where
$$\tilde\Sigma_N = (2\hat M_N,\ E)^T\,\Sigma_N^{-1}\,(2\hat M_N,\ E) = \big((U_N)_1^T,\ (U_N)_2^T\big)\begin{pmatrix} \Lambda_N & 0 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} (U_N)_1 \\ (U_N)_2 \end{pmatrix}$$
and $\Sigma_N$ is defined as in (15).
We next examine the performance of the method proposed in Theorem 3 by generating 100 confidence regions at the 95% level, with different sample sizes $N$ and parameters $\varepsilon$, and recording the coverage rates for $x_0$. The associated quadratic programs are solved with MATLAB's fmincon. The results are reported in Table 1.
Table 1 shows that a reasonable coverage rate for $x_0$ is obtained already for $N = 10$ and $\varepsilon = 0.1$, and that the coverage rates increase as the sample size $N$ and the parameter $\varepsilon$ increase.
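As a cross-check of the ingredients of Example 1, one can estimate $\Sigma = E[\gamma(\xi)\gamma(\xi)^T]$ by Monte Carlo. With $x_0 = (0,1)^T$, a short computation gives $\gamma(\xi) = (\xi_1 + \xi_2 - 2,\ \xi_3 - \xi_2)^T$, whose diagonal entries have variance $2\sigma^2 = 0.02$ and whose off-diagonal covariance is $-\mathrm{var}(\xi_2) = -0.01$. A sketch (the sample size is an arbitrary choice):

```python
import numpy as np

def gamma(xi, x0):
    """gamma(xi) = (M(xi) - E[M]) x0 + q(xi) - E[q] for Example 1."""
    xi1, xi2, xi3 = xi
    M = np.array([[2 * xi1, xi2], [xi2, xi3]])
    q = np.array([xi1, -xi2])
    EM = np.array([[2.0, 1.0], [1.0, 1.0]])
    Eq = np.array([1.0, -1.0])
    return (M - EM) @ x0 + (q - Eq)

rng = np.random.default_rng(3)
x0 = np.array([0.0, 1.0])
G = np.array([gamma(rng.normal(1.0, 0.1, size=3), x0) for _ in range(20000)])
Sigma_mc = G.T @ G / len(G)    # E[gamma] = 0, so no centering is needed
# Sigma_mc is close to [[0.02, -0.01], [-0.01, 0.02]] for this sample size.
```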
We next apply the results obtained to a two-stage stochastic linear complementarity problem (TSLCP), a modified version of Example 2.6 in [14].
Example 2.
Consider the following TSLCP: find $x\in\mathbb{R}^2$ such that
$$0 \le x \perp Ax + E[B(\xi)y(\xi)] + q_1 \ge 0, \qquad (18)$$
$$0 \le y(\xi) \perp N(\xi)x + M(\xi)y(\xi) + q_2(\xi) \ge 0 \quad \text{for a.e. } \omega\in\Omega, \qquad (19)$$
where "a.e." means "almost everywhere", "⊥" denotes the perpendicularity of two vectors,
$$A = \begin{pmatrix} 4 & 2 \\ 0 & 5 \end{pmatrix},\quad B(\xi) = \begin{pmatrix} -3+\xi & 0 \\ 2 & 1-\xi \end{pmatrix},\quad q_1 = \begin{pmatrix} 2 \\ 4 \end{pmatrix},$$
$$N(\xi) = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix},\quad M(\xi) = \begin{pmatrix} 1 & 3\xi \\ 0 & 1 \end{pmatrix},\quad q_2(\xi) = \begin{pmatrix} 0 \\ 1+\xi \end{pmatrix},$$
$y:\Xi\to\mathbb{R}^2$, and $\xi$ follows a uniform distribution over $[-\tfrac12,\tfrac12]$. For any $x\in\mathbb{R}^2_+$ and a.e. $\xi\in\Xi$, the second-stage problem (19) has the unique solution $y(\xi) = (x_1,0)^T$. Then (18) can be written as
$$0 \le \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \perp E\begin{pmatrix} 1+\xi & 2 \\ 2 & 5 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} 2 \\ 4 \end{pmatrix} \ge 0.$$
In a manner similar to Example 1, for $N = 100$ and $\varepsilon = 0.1$, we obtain the 95% confidence region of $x$ as $[-0.0012,\ 0.0021]\times[-0.0011,\ 0.0013]$.
The asymptotic analysis results can also be applied to a problem in engineering, namely the refinery production problem illustrated in [15,16].
Example 3.
In a refinery, there are two products: gasoline and fuel oil. Their output and demand depend on oil production and weather, respectively, in addition to other daily uncertainties. On the supply side, the problem is to minimize production costs under both technical and demand constraints. In order to balance supply and demand, an equilibrium condition is needed, which can be formulated as a stochastic linear complementarity problem; see [15] for a detailed description of this problem. According to Section 4 in [15], the expected value formulation of the refinery production problem is: find $x\in\mathbb{R}^5$ such that
$$0 \le x \perp E[M(\xi)x + q(\xi)] \ge 0,$$
where
$$M(\xi) = \begin{pmatrix} 0 & 0 & 1 & -(2+\xi_1) & -3 \\ 0 & 0 & 1 & -6 & -(3.4+\xi_2) \\ -1 & -1 & 0 & 0 & 0 \\ 2+\xi_1 & 6 & 0 & \xi_3 & \xi_3 \\ 3 & 3.4+\xi_2 & 0 & \xi_4 & \xi_4 \end{pmatrix},\qquad q(\xi) = \begin{pmatrix} 2 \\ 3 \\ 100 \\ -(180+\xi_3) \\ -(162+\xi_4) \end{pmatrix},$$
$2x_1 + 3x_2$ is the initial production cost, and the $\xi_i$, $i=1,2,3,4$, follow the distributions $\xi_1\sim U[-0.8,\ 0.8]$, $\xi_2\sim e(\lambda = 2.5)$, $\xi_3\sim N(0,\ 12)$, $\xi_4\sim N(0,\ 9)$. Similarly to Example 1, for $N = 100$ and $\varepsilon = 0.1$, we obtain the 95% confidence region of $x$ as $[35.9224,\ 36.0052]\times[17.9634,\ 18.0031]\times[-0.0012,\ 0.0065]\times[0.2488,\ 0.2502]\times[0.4987,\ 0.5014]$.

4. Conclusions

For the one-stage SLCP, asymptotic normality results for its SAA estimator were obtained in this paper. Under some typical conditions, we showed that the SAA estimator of a true solution of the SLCP converges in distribution to a multivariate normal with zero mean vector and an explicit covariance matrix. As a consequence, methods for estimating confidence regions of solutions of the SLCP were obtained.

Author Contributions

Conceptualization, S.L. and J.Z.; methodology, S.L.; software, C.Q.; validation, J.Z., S.L. and C.Q.; formal analysis, S.L.; investigation, J.Z.; writing—original draft preparation, S.L.; writing—review and editing, J.Z.; visualization, C.Q.; supervision, J.Z.; project administration, J.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Project Grant Nos. 12171219 and 61877032, the Liaoning Revitalization Talents Program No. XLYC2007113, and the Scientific Research Fund of Liaoning Provincial Education Department under Project No. LJKZ0961.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Rockafellar, R.T.; Wets, R.J.-B. Stochastic variational inequalities: Single-stage to multistage. Math. Program. 2017, 165, 291–330.
2. Chen, X.J.; Sun, H.; Xu, H. Discrete approximation of two-stage stochastic and distributionally robust linear complementarity problems. Math. Program. 2019, 177, 255–289.
3. Rockafellar, R.T.; Sun, J. Solving monotone stochastic variational inequalities and complementarity problems by progressive hedging. Math. Program. 2019, 174, 453–471.
4. Zhang, J.; Xu, H.; Zhang, L. Quantitative stability analysis of stochastic quasi-variational inequality problems and applications. Math. Program. 2017, 165, 433–470.
5. Gürkan, G.; Özge, A.Y.; Robinson, S.M. Sample-path solution of stochastic variational inequalities. Math. Program. 1999, 84, 313–333.
6. King, A.J.; Rockafellar, R.T. Asymptotic theory for solutions in statistical estimation and stochastic programming. Math. Oper. Res. 1993, 18, 148–162.
7. Jiang, H.; Xu, H. Stochastic approximation approaches to the stochastic variational inequality problem. IEEE Trans. Autom. Control 2008, 53, 1462–1475.
8. Shapiro, A.; Dentcheva, D.; Ruszczynski, A. Lectures on Stochastic Programming: Modeling and Theory; SIAM: Philadelphia, PA, USA, 2009.
9. Xu, H. Sample average approximation methods for a class of stochastic variational inequality problems. Asia-Pac. J. Oper. Res. 2010, 27, 103–119.
10. Zhang, J.; Zhang, L.; Pang, L. On the convergence of coderivative of SAA solution mapping for a parametric stochastic variational inequality. Set-Valued Var. Anal. 2012, 20, 75–109.
11. Shapiro, A. Simulation-based optimization: Convergence analysis and statistical inference. Commun. Stat. Stoch. Models 1996, 12, 425–454.
12. Lu, S. Symmetric confidence regions and confidence intervals for normal map formulations of stochastic variational inequalities. SIAM J. Optim. 2014, 24, 1458–1484.
13. Härdle, W.K.; Simar, L. Applied Multivariate Statistical Analysis; Springer: Berlin/Heidelberg, Germany, 2007.
14. Chen, L.; Liu, Y.; Yang, X.; Zhang, J. Stochastic approximation methods for the two-stage stochastic linear complementarity problem. SIAM J. Optim. 2022, 32, 2129–2155.
15. Chen, X.; Fukushima, M. Expected residual minimization method for stochastic linear complementarity problems. Math. Oper. Res. 2005, 30, 1022–1038.
16. Wang, Y.; Du, S.; Li, H.; Chen, M. Projected trust region method for stochastic linear complementarity problems. ScienceAsia 2018, 44, 453–460.
Table 1. Summary of coverage rates for $x_0$ with different $\varepsilon$ and $N$.

              N = 10   N = 30   N = 60   N = 100
ε = 0.1       90%      93%      95%      97%
ε = 0.3       91%      94%      96%      98%
ε = 0.5       93%      94%      98%      99%

Lin, S.; Zhang, J.; Qiu, C. Asymptotic Analysis for One-Stage Stochastic Linear Complementarity Problems and Applications. Mathematics 2023, 11, 482. https://doi.org/10.3390/math11020482
