1. Introduction
The finite-dimensional complementarity problem is a mature and fruitful topic in mathematical programming. To capture the uncertain factors arising in practice, stochastic complementarity problems have attracted extensive attention in the recent literature [1,2,3,4]. We investigate the following one-stage stochastic linear complementarity problem (SLCP): find
, such that
where
denotes the mathematical expectation,
is a random vector defined on a probability space
, and
and
are functions. Throughout the paper,
and
are measurable functions of
, and the condition
holds. To ease the notation,
will be written as
.
Problem (1) can be seen as a special case of the stochastic nonlinear complementarity problem (SNCP) in Gürkan et al. [5]. Some examples of stochastic complementarity problems in operational research, finance, economics, and engineering can be found in [6,7].
Accurately calculating the expected value in (1) is either impossible or prohibitively costly. The sample-average approximation (SAA) method [8,9,10] is an effective way to estimate it. By generating an independent and identically distributed (iid) sample
of
, the SAA method replaces the expectation with the corresponding sample average. Throughout the paper, the SLCP (1) will be approximated by
where
is the sample-average mapping of
. Equation (1) is called the true problem, and Equation (2) is the SAA problem associated with Equation (1).
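As a concrete sketch of the sample-average construction, the following Python snippet averages sampled matrices and vectors. The linear dependence on ξ, the data M0, q0, and the dimensions are purely illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def saa_mapping(sample_M, sample_q):
    """Sample-average approximations of E[M(xi)] and E[q(xi)].

    sample_M: array of shape (N, n, n), one matrix per realization of xi.
    sample_q: array of shape (N, n), one vector per realization of xi.
    """
    return sample_M.mean(axis=0), sample_q.mean(axis=0)

# Hypothetical 2-dimensional illustration: M(xi) = M0 + xi*I, q(xi) = q0 + xi*e.
N, n = 10_000, 2
xi = rng.normal(size=N)                      # iid sample of xi
M0 = np.array([[2.0, 0.5], [0.5, 2.0]])
q0 = np.array([-1.0, -1.0])
sample_M = M0 + xi[:, None, None] * np.eye(n)
sample_q = q0 + xi[:, None] * np.ones(n)

M_N, q_N = saa_mapping(sample_M, sample_q)   # SAA estimates of E[M(xi)], E[q(xi)]
```

Since ξ here has mean zero, the SAA estimates should be close to M0 and q0 for large N.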
In this paper, for an SAA solution
and its almost sure cluster point
, we are interested in the asymptotic behavior of the SAA estimator
, that is, establishing conditions on
such that for
,
where
Z in
is a normal random variable, and the symbol
above the arrow denotes convergence in distribution.
The asymptotic behavior of SAA estimators has been discussed extensively in the literature [6,8,11,12] and the references therein. Most of those results concern convergence in distribution of SAA estimators for stochastic-constrained optimization problems or for normal-map formulations of stochastic variational inequalities. In this paper, we first study the asymptotic behavior of the SAA solutions of stochastic quadratic programming, then derive conditions ensuring the asymptotic normality of the SAA estimators of the SLCP, and finally provide methods for estimating confidence regions of true solutions to the SLCP.
The paper proceeds as follows: in
Section 2, the asymptotic results are obtained under the nonsingularity condition or positive definiteness condition of
. We then apply these results in Section 3 to obtain confidence intervals for an SLCP solution.
2. Main Results
Notice that if a solution of the SLCP (1) exists, then the SLCP (1) is equivalent to the following quadratic programming problem:
In order to provide the asymptotic behavior of the SAA SLCP, we first consider the following stochastic-constrained optimization (SCO) problem:
where
is defined as in Equation (1), and
are random functions.
The definition of the Karush–Kuhn–Tucker (KKT) point of problem (5) is as follows:
Definition 1. Suppose the expectation value functions,
,
are continuously differentiable. Then is called a Karush–Kuhn–Tucker (KKT) point of problem (
5) if
satisfies where and Let
be an iid sample of
; then the SAA problem of the SCO is
where
We make the following assumptions for the subsequent asymptotic analysis. Let
X be a nonempty compact subset of
, and let
be one of the elements in the following:
Consider the following conditions.
- (A1)
For each , is finite-valued, and is well-defined.
- (A2)
There exists a positive-valued random variable
such that
and for all
, and almost every
, the following inequality holds:
- (A3)
For any fixed
,
is twice continuously differentiable at
x for almost every
.
The above assumptions are commonly used in stochastic optimization. By Theorems 6.3.2 and 6.3.6 in [8],
is twice continuously differentiable on
X, and
Let . We need the following conditions:
- (A4)
The second-order sufficient condition (SOSC) holds at
, i.e.,
for every nonzero vector
d satisfying
where
- (A5)
The Mangasarian–Fromovitz constraint qualification (MFCQ) holds at
, i.e.,
is linearly independent, and there exists
d satisfying
- (A6)
The linear independence constraint qualification (LICQ) holds at
, i.e.,
are linearly independent.
- (A7)
The strict complementarity condition (SCC) holds, i.e.,
Suppose that
is the SAA function of
. Then, the following propositions follow directly from Theorems 3.1 and 3.2 in [6].
Proposition 1. Let and suppose there exists a compact neighborhood X of such that conditions (A1)–(A3) hold. If is a KKT point for (5) and conditions (A4) and (A5) hold, then there exists satisfying the KKT condition of (6) and w.p.1 as .

Proposition 2. Suppose conditions (A1)–(A3) hold on X, where X is a compact neighborhood of . If a sequence of KKT points for (6) converges to almost surely, and (A4), (A6) hold at , then where is the KKT point for the random quadratic programming problem where ,
,
satisfy

Theorem 1. Suppose the conditions in Proposition 2 hold. If a sequence of KKT points for (6) converges to almost surely and (A7) holds, then converges in distribution to a normal random variable with mean 0 and covariance matrix where

Proof. By Proposition 2, under condition (A7), we know that
where
is the KKT point for the stochastic quadratic programming problem
That is,
satisfies
Notice that under (A4),
is nonsingular and
Then the conclusion holds. □
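The asymptotic normality in Theorem 1 ultimately rests on the central limit theorem for sample averages. The following numerical illustration uses a hypothetical scalar integrand with exponential ξ (an assumption made purely for demonstration, not data from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scalar illustration: take q(xi) = xi with xi ~ Exp(1),
# so E[q(xi)] = 1 and Var(q(xi)) = 1.  The CLT behind Theorem 1 gives
# sqrt(N) * (sample mean - 1) -> N(0, 1) in distribution.
N, replications = 2_000, 5_000
xi = rng.exponential(scale=1.0, size=(replications, N))
z = np.sqrt(N) * (xi.mean(axis=1) - 1.0)   # one SAA error per replication

# The empirical mean and standard deviation of z should be close to 0 and 1.
z_mean, z_std = z.mean(), z.std()
```

Histogramming `z` would show an approximately standard normal shape, matching the limiting distribution.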
Remark 1. Notice that in [8,11] (Section 5.2.2), the asymptotic analysis of the optimal solutions of SAA stochastic-constrained optimization is established when the constraints are independent of random vectors. Theorem 1 extends those results to stochastic-constrained optimization in which the constraints contain random vectors.

We next apply the results above to the stochastic linear complementarity problem (1). We first introduce some conditions:
- (A8)
is a symmetric matrix, and
- (A9)
Let
be a solution of SLCP (
1); the nondegenerate condition holds at
, i.e.,
- (A10)
is positive definite.
Notice that if (A10) holds, then (A8) holds. (A9) is a typical condition in the study of complementarity problems.
Let
be iid samples of
. Thus, the SAA problem of (4) is as follows:
Next, we provide our results.
Theorem 2. Suppose that and are well-defined and finite. Let be a solution of SLCP (1) and be a KKT point for (4), suppose (A9) holds at , and suppose (A8) holds. Then
- (i)
for N large enough, there exists satisfying the KKT condition of (7), and w.p.1 as .
- (ii)
for the in (i), converges in distribution to a normal random variable with mean 0 and covariance matrix , where with E the identity matrix,
Proof. We only need to verify the conditions in Theorem 1. The proof proceeds in the following three steps:
Step 1. Notice that
satisfies
and
satisfies the KKT condition of (
4), that is,
Then we have
and
which, by (
8) and (
9), means that
and
Since
and condition (A9) hold, we have
Then combining (
10) and (
11), we have
which by condition (A8) means that
Consequently, by (
8), we obtain
Then the SCC in Theorem 1 holds for (
4).
Step 2. Under condition (A8), we have
Let
where
are random vectors. If
without loss of generality, we assume that
Next, we show
are linearly independent. Indeed, let
we only need to show that
. If there exists
such that
then
Therefore, we have
which by condition (A8), means that
So
is nonsingular. Consequently, the LICQ for (
4) holds at
.
Step 3. Next we show the SOSC for (
4) holds at
. Notice that under the LICQ, the SOSC is as follows:
for every nonzero vector
d satisfying
, where
Similarly to Step 2 above, we have
Then
follows from condition (A8). Therefore, the SOSC holds for (4). As a result, all the conditions in Theorem 1 are verified. □
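The quadratic programming reformulation of the LCP used throughout this section can be solved numerically. A minimal sketch in Python, using scipy.optimize in place of the MATLAB "fmincon" routine used later in the paper; the data M, q are made-up for illustration:

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint

def solve_lcp_qp(M, q):
    """Solve min x'(Mx+q) s.t. Mx+q >= 0, x >= 0 -- the QP reformulation
    of the LCP with data (M, q).  At an LCP solution the optimal value is 0."""
    n = len(q)
    obj = lambda x: x @ (M @ x + q)
    grad = lambda x: (M + M.T) @ x + q
    cons = LinearConstraint(M, lb=-q)          # Mx + q >= 0  <=>  Mx >= -q
    bounds = [(0.0, None)] * n                 # x >= 0
    res = minimize(obj, x0=np.ones(n), jac=grad,
                   bounds=bounds, constraints=[cons], method="SLSQP")
    return res.x, res.fun

# Deterministic check with a positive definite M (condition (A10)):
M = np.array([[2.0, 0.0], [0.0, 2.0]])
q = np.array([-2.0, 1.0])
x, val = solve_lcp_qp(M, q)
```

For this data the LCP solution is x = (1, 0) with optimal value 0, which the solver recovers; in the SAA setting M and q would be replaced by their sample-average estimates.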
Notice that under (A10), the SLCP (1) is equivalent to the following problem.
By Theorem 2, we obtain the following results.
Corollary 1. Suppose and are well-defined and finite. Let be a solution of SLCP (1) and be a KKT point for (12), suppose (A9) holds at , and suppose (A10) holds. Then
- (i)
for N large enough, there exists satisfying the KKT condition of the SAA problem of (12), and
- (ii)
for in (i), converges in distribution to a normal random variable with mean 0 and covariance matrix with
3. Applications
In this section, we apply the results above to estimate confidence regions of solutions to the SLCP (1). Inspired by Theorem 4.2 in [12], we have the following theorem:
Theorem 3. Suppose the conditions in Theorem 2 hold. Let be a sequence converging to with probability one and be nonsingular, where Let Assume that the decomposition of is where is an orthogonal matrix, , and are diagonal matrices with monotonically decreasing positive elements, and is the rank of . Thus, for and any , where with

Proof. We know from Theorem 2 that
converges in distribution to a normal with 0 and covariance matrix
which, by Theorem 5.1 in [13], means that
converges in distribution to a normal with 0 and covariance matrix
. Notice that under condition (A9), for a large enough
N, for
and
,
, and
. We obtain the following:
Therefore,
converges in distribution to a normal with 0 and covariance matrix
. Since
is nonsingular, for a large enough
N,
is almost surely nonsingular. Thus, we have
weakly converging to an
random variable. Consequently, the result follows directly from the proof of Theorem 4.2 in [12]. □
In practice, let
where
Then,
converges to
almost surely as
N tends to infinity.
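In practice, membership of a candidate point in the resulting ellipsoidal confidence region reduces to comparing a quadratic form against a chi-square quantile. A hedged Python sketch; the numbers x_N, Sigma_hat, and N are hypothetical, and for simplicity the degrees of freedom are taken as the full dimension rather than the rank r from Theorem 3:

```python
import numpy as np
from scipy import stats

def in_confidence_region(x_candidate, x_N, Sigma_hat, N, alpha=0.05):
    """Ellipsoidal region from sqrt(N)(x_N - x*) -> N(0, Sigma):
    x* is covered iff N (x_N - x)' Sigma^{-1} (x_N - x) <= chi2 quantile."""
    d = x_N - x_candidate
    stat = N * (d @ np.linalg.solve(Sigma_hat, d))
    return stat <= stats.chi2.ppf(1.0 - alpha, df=len(x_N))

# Hypothetical numbers: an SAA solution, a plug-in covariance estimate, N = 500.
x_N = np.array([1.01, -0.02])
Sigma_hat = np.array([[0.8, 0.1], [0.1, 0.5]])
covered = in_confidence_region(np.array([1.0, 0.0]), x_N, Sigma_hat, N=500)
```

Here the quadratic form evaluates well below the 95% chi-square quantile, so the candidate point lies inside the region.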
We next present three examples to illustrate applications of the results above.
Example 1. Consider an SLCP (1) with where , are independent random variables, each normally distributed. Since is positive definite, that is, condition (A10) holds, the corresponding true optimization problem (12) is Generating an iid sample of ξ, the SAA problem is where and are sample-average mappings of and , respectively.

Next, we verify the conditions in Theorem 3. By a simple computation, the optimal solution to the true problem (
16) is
and the corresponding multiplier is
Then we have
which means condition (A9) holds. Condition (A8) holds due to the fact that (A10) holds. The matrix
is nonsingular. Then all the conditions in Theorem 3 hold. In practice, we can verify such conditions by the corresponding SAA estimators due to the robustness of those conditions.
We denote by
and
the corresponding SAA optimal solution and multiplier for (17), respectively. By Theorem 3, for fixed
, the
confidence region is
where
is defined as in (15).
We next examine the performance of the proposed method in Theorem 3 by generating 100 confidence intervals at the
level, with different sample sizes
N and parameters
. We show the coverage rates for
. The related quadratic programming problems are solved with “fmincon” in MATLAB. The results are illustrated in
Table 1.
We know from
Table 1 that when
and
, a reasonable coverage rate for
can be obtained. Furthermore, the coverage rates for
increase as the sample size and the parameters increase.
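A coverage experiment of this kind can be mimicked in miniature. The following Python sketch estimates the coverage rate of a nominal 95% asymptotic confidence interval for a scalar mean; it is only a stand-in, since the paper's intervals are built from SAA solutions of the quadratic program rather than a raw sample mean:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def coverage_rate(trials=100, N=1000, alpha=0.05, mu=1.0):
    """Fraction of trials whose asymptotic CI for the mean covers mu."""
    z = stats.norm.ppf(1.0 - alpha / 2.0)
    hits = 0
    for _ in range(trials):
        xi = rng.normal(loc=mu, scale=1.0, size=N)    # iid sample
        half = z * xi.std(ddof=1) / np.sqrt(N)        # CI half-width
        hits += abs(xi.mean() - mu) <= half
    return hits / trials

rate = coverage_rate()   # should be near the nominal level 0.95
```

With 100 trials the observed rate fluctuates around 0.95, mirroring the behavior reported for the coverage rates in Table 1.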
We next apply the results obtained to a two-stage stochastic linear complementarity problem (TSLCP), which is a modified version of Example 2.6 in [14].
Example 2. Consider a TSLCP as follows: find such that where “a.e.” means “almost everywhere”, “⊥” denotes the perpendicularity of two vectors, , and ξ follows a uniform distribution over . For any and a.e. , the second-stage TSLCP given by (18) has a unique . Then (19) can be written as In a manner similar to Example 1, for and , we obtain that the confidence region of is .
The asymptotic analysis results can also be applied to a problem in engineering, namely the refinery production problem, which is illustrated in [15,16].
Example 3. In a refinery, there are two products: gasoline and fuel oil. Their output and demand depend on oil production and weather, respectively, in addition to other daily uncertainties. On the supply side, the problem is to minimize production costs under both technical and demand constraints. In order to balance supply and demand, we need an equilibrium condition, which can be formulated as a stochastic linear complementarity problem. For a detailed description of this problem, see [15]. According to Section 4 in [15], the expected-value formulation of the refinery production problem is as follows: find such that where is the initial production cost and satisfy the distribution , . Similarly to Example 1, for and , we obtain that the confidence region of is