Deterministic Bi-Criteria Model for Solving Stochastic Mixed Vector Variational Inequality Problems

School of Mathematics and Statistics, Liaoning University, Shenyang 110036, China
*
Author to whom correspondence should be addressed.
Mathematics 2023, 11(15), 3376; https://doi.org/10.3390/math11153376
Submission received: 21 May 2023 / Revised: 26 July 2023 / Accepted: 28 July 2023 / Published: 2 August 2023
(This article belongs to the Section Computational and Applied Mathematics)

Abstract

In this paper, we consider stochastic mixed vector variational inequality problems (SMVVIP). First, we present an equivalent formulation of the SMVVIP. Second, we propose a deterministic bi-criteria model that yields a reasonable solution of the SMVVIP, and we construct an approximation problem for this model by combining a smoothing technique with the sample average approximation method. Third, we establish the convergence of the proposed approximation problem when the sample space is compact. Finally, when the sample space is not compact, we propose a compact approximation method and provide the corresponding convergence results.

1. Introduction

Let $D \subseteq \mathbb{R}^n$ be a nonempty, closed, and convex set, and let $F_j : \mathbb{R}^n \to \mathbb{R}^n$ ($j = 1, 2, \dots, m$) be continuously differentiable vector-valued functions. For every $j = 1, 2, \dots, m$, $g_j(\cdot) : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ is a continuously differentiable, proper, and convex function with closed effective domain. The mixed vector variational inequality problem (MVVIP) is to find $x \in D$ such that, for all $y \in D$,
$$(y - x)^T F(x) + g(y) - g(x) \notin -\operatorname{int} \mathbb{R}_+^m, \tag{1}$$
where $F(x) := (F_1(x), \dots, F_m(x))$ and $g(\cdot) := (g_1(\cdot), \dots, g_m(\cdot))$. Obviously, for every $y \in D$, problem (1) is equivalent to finding $x \in D$ satisfying
$$\big( (y - x)^T F_1(x), \dots, (y - x)^T F_m(x) \big) + \big( g_1(y), \dots, g_m(y) \big) - \big( g_1(x), \dots, g_m(x) \big) \notin -\operatorname{int} \mathbb{R}_+^m. \tag{2}$$
The above MVVIP can be used to deal with the following multiobjective optimization problems, which originated in the study of utility theory in economics and were first proposed in the context of economic equilibrium [1]:
$$\min \; P(x) = (p_1(x), \dots, p_m(x)) \quad \text{s.t.} \quad q_j(x) \le 0, \; j = 1, \dots, t, \tag{3}$$
where $p_j : \mathbb{R}^n \to \mathbb{R}$ ($j = 1, \dots, m$) and $q_j : \mathbb{R}^n \to \mathbb{R}$ ($j = 1, \dots, t$) are all continuously differentiable convex functions on $\mathbb{R}^n$. In [2], by using the exponential penalty method, the authors obtained the following penalty approximation problem of (3):
$$\min \; \Big( p_1(x) + \frac{1}{\rho_n} \sum_{j=1}^{t} \vartheta[\rho_n q_j(x)], \; \dots, \; p_m(x) + \frac{1}{\rho_n} \sum_{j=1}^{t} \vartheta[\rho_n q_j(x)] \Big), \tag{4}$$
where $\rho_n > 0$ is a penalty parameter satisfying $\lim_{n \to \infty} \rho_n = +\infty$, and $\vartheta(t) = \exp(t - 1)$ is an exponential function. It is also proved in [2] that an efficient solution of (3) can be approximated by computing a weakly efficient solution of problem (4). It is worth noting that problem (4) is an unconstrained multiobjective optimization problem and that the function $\frac{1}{\rho_n} \sum_{j=1}^{t} \vartheta[\rho_n q_j(x)]$ is convex.
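As a small numerical illustration (ours, not from the paper), the following sketch applies the exponential penalty $\vartheta(t) = \exp(t - 1)$ to a toy problem with one constraint; the functions `q` and the points below are made-up assumptions. At a strictly feasible point, the penalty term $\frac{1}{\rho} \sum_j \vartheta[\rho\, q_j(x)]$ vanishes as $\rho \to \infty$:

```python
import math

def penalty_term(qs, x, rho):
    # (1/rho) * sum_j exp(rho * q_j(x) - 1): the exponential penalty term of (4)
    return sum(math.exp(rho * q(x) - 1) for q in qs) / rho

# Hypothetical constraint q(x) <= 0, i.e. feasible iff x <= 2.
q = lambda x: x - 2.0

x_feas = 0.5                   # strictly feasible: q(x) = -1.5 < 0
terms = [penalty_term([q], x_feas, rho) for rho in (1.0, 10.0, 100.0)]

# The penalty shrinks toward 0 as rho grows at strictly feasible points.
assert terms[0] > terms[1] > terms[2]
assert terms[2] < 1e-10
```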
In fact, solving for a weakly efficient solution of problem (4) is equivalent to solving MVVIP (2) (see Theorem 1), which indicates that, for a sufficiently large penalty parameter, the solution of the MVVIP can be used to approximate an efficient solution of problem (3). Multiobjective optimization problems have applications in artificial intelligence, engineering, electrical engineering, and medicine [3,4,5,6], as well as in fields such as design optimization, manufacturing, structural health monitoring [7], and chemical engineering [8]. Therefore, the MVVIP also has a wide range of applications in production and daily life and is of genuine research interest.
Because many uncertain factors, such as price, output, and sales, arise in actual production and daily life, in this paper we mainly consider the MVVIP with uncertain factors, that is, the stochastic mixed vector variational inequality problem (SMVVIP): find $x \in D$ such that
$$(y - x)^T F(x, \xi(\omega)) + g(y, \xi(\omega)) - g(x, \xi(\omega)) \notin -\operatorname{int} \mathbb{R}_+^m, \quad \forall y \in D, \; a.s. \; \xi(\omega) \in \Xi, \tag{5}$$
where $\xi : \Omega \to \Xi \subseteq \mathbb{R}^b$ is a stochastic vector defined on the probability space $(\Omega, \mathcal{F}, P)$ and $\Xi$ is the support set of the probability measure. Under the given probability measure, "$a.s.$" abbreviates "almost surely". For each $j = 1, 2, \dots, m$, $F_j(x, \xi(\omega)) : \mathbb{R}^n \times \Xi \to \mathbb{R}^n$ is a continuously differentiable vector-valued function at $(x, \xi(\omega))$ and $g_j(x, \xi(\omega))$ is continuously differentiable on $\mathbb{R}^n \times \Xi$. Here, $F(x, \xi(\omega)) := (F_1(x, \xi(\omega)), \dots, F_m(x, \xi(\omega)))$ and $g(\cdot, \xi(\omega)) := (g_1(\cdot, \xi(\omega)), \dots, g_m(\cdot, \xi(\omega)))$. For simplicity, we use $\xi$ to denote either the stochastic vector $\xi(\omega)$ or an element of $\mathbb{R}^b$ throughout this paper.
To the best of our knowledge, the SMVVIP has not yet been studied. However, there are a few works on special cases of the MVVIP. For instance, Salahuddin et al. [9] and Xie et al. [10] discussed the MVVIP in topological vector spaces and established some existence theorems. Moreover, Irfan [11] considered a new class of exponential MVVIP involving multi-valued mappings in a fuzzy environment and proved some existence theorems for solutions of the exponential MVVIP. In addition, a gap function for a non-smooth mixed weak vector variational inequality problem was given by Jayswal et al. [12]. For more on the existence of MVVIP solutions and gap-function error bounds, we refer the reader to [13,14,15] and the references therein.
In fact, the SMVVIP is obviously a generalization of the stochastic variational inequality problem (SVIP): find $x \in D$ such that
$$(y - x)^T F(x, \xi(\omega)) \ge 0, \quad \forall y \in D, \; a.s. \; \omega \in \Omega,$$
where $\xi : \Omega \to \Xi \subseteq \mathbb{R}^b$ is a stochastic vector, $F(x, \xi(\omega)) : \mathbb{R}^n \times \Xi \to \mathbb{R}^n$ is a vector-valued function, and $D$ is a nonempty closed convex set. Based on Fukushima's work in [16], for a given parameter $\alpha > 0$, Luo [17] defined a regularized gap function as follows:
$$v(x, \xi(\omega)) := \max_{y \in D} \Big\{ (x - y)^T F(x, \xi(\omega)) - \frac{\alpha}{2} \|x - y\|^2 \Big\}.$$
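For intuition, here is a minimal numerical sketch (ours, not from the paper): when $D = \mathbb{R}^n_+$, the inner maximizer is the projection $y^* = \Pi_D(x - F(x, \xi)/\alpha)$, so $v$ has a closed form. The affine map `F` below is a made-up assumption chosen so that $x^* = (1, 0)$ solves the VI:

```python
import numpy as np

def regularized_gap(x, F_val, alpha=1.0):
    # v(x) = max_{y in D} (x - y)^T F(x) - (alpha/2)||x - y||^2, with D = R^n_+.
    # The maximizer is the projection of x - F(x)/alpha onto D.
    y_star = np.maximum(x - F_val / alpha, 0.0)
    d = x - y_star
    return d @ F_val - 0.5 * alpha * d @ d

# Hypothetical monotone affine map F(x) = A x + b on D = R^2_+.
A = np.array([[2.0, 0.0], [0.0, 2.0]])
b = np.array([-2.0, 1.0])
F = lambda x: A @ x + b

x_star = np.array([1.0, 0.0])   # F(x*) = (0, 1) >= 0 and x*^T F(x*) = 0
assert abs(regularized_gap(x_star, F(x_star))) < 1e-12
# At a non-solution the gap is strictly positive.
x0 = np.array([0.0, 0.0])
assert regularized_gap(x0, F(x0)) > 0.0
```

This reproduces the defining property of a gap function: nonnegative on $D$, and zero exactly at solutions of the VI.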
The SVIP has been considered in many articles. In particular, Shanbhag [18] considered applications of the SVIP in power markets and radio system design. Moreover, Luo et al. [17] formulated the SVIP as a minimization problem and presented the expected residual minimization (ERM) model for the SVIP by taking the gap function as the residual. In addition, Chen et al. [19] investigated the SVIP from the standpoint of minimizing the conditional value at risk (CVaR); that is, the authors employed the gap function to define a loss function and presented a deterministic CVaR model for solving the SVIP. For more research on the SVIP, interested readers may refer to [20,21,22,23,24].
The main contributions of this paper are as follows:
  • We present a reasonable deterministic bi-criteria model for solving SMVVIP, which can be regarded as a combination of the ERM model and CVaR model.
  • We employ the smoothing technique and the sample average approximation method to present approximation problems for solving the given deterministic model. In addition, we consider the convergence of the proposed approximation problem when the sample space is a compact set.
  • When the sample space is not compact, we also propose a compact approximation method and provide convergence results.
We adopt the following standard notation throughout this paper. For the cone $\mathbb{R}_+^m$, $\operatorname{int} \mathbb{R}_+^m$ denotes its interior. For a given vector-valued function $\varphi : \mathbb{R}^n \times \mathbb{R}^r \times \mathbb{R}^t \to \mathbb{R}^m$, $\nabla_x \varphi(x, y, z) \in \mathbb{R}^{m \times n}$ denotes its Jacobian matrix with respect to $x$; $\nabla_{(x,y)} \varphi(x, y, z) \in \mathbb{R}^{m \times (n + r)}$ and $\nabla_{(x,y,z)} \varphi(x, y, z) \in \mathbb{R}^{m \times (n + r + t)}$ denote the Jacobian matrices with respect to $(x, y)$ and $(x, y, z)$, respectively. For a given nonempty set $D \subseteq \mathbb{R}^n$, $N_D(x)$ denotes its normal cone operator at $x \in D$ and $\operatorname{conv} D$ stands for the convex hull of $D$. For a given real-valued function $G : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$, $\partial G(x)$ denotes its subdifferential at $x$. Moreover, $\|\cdot\|$ stands for the 2-norm, $[t]_+ := \max\{t, 0\}$, and $\Lambda := \{\lambda \in \mathbb{R}^m \mid \lambda_j \ge 0, \; \sum_{j=1}^m \lambda_j = 1\}$.

2. Preliminaries

Definition 1
([25]). The normal cone operator of a nonempty closed convex subset $D \subseteq \mathbb{R}^n$ at $x \in D$ is defined as
$$N_D(x) := \{ z \in \mathbb{R}^n \mid (y - x)^T z \le 0, \; \forall y \in D \}.$$
Definition 2
([25]). The subdifferential of a convex function $G : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ at $x$ is
$$\partial G(x) = \{ \tau \in \mathbb{R}^n \mid G(y) \ge G(x) + (y - x)^T \tau, \; \forall y \in \mathbb{R}^n \}.$$
Definition 3
([26]). For a function $G : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ and a parameter $\alpha > 0$, the proximal operator $P_D^\alpha : \mathbb{R}^n \to \mathbb{R}^n$ is defined as
$$P_D^\alpha(z) = \arg\min_{y \in D} \Big\{ G(y) + \frac{\alpha}{2} \|z - y\|^2 \Big\}, \quad \forall z \in \mathbb{R}^n,$$
where $D \subseteq \mathbb{R}^n$ is a nonempty closed convex subset.
According to reference [27], when $G$ is proper and convex, the proximal operator has many useful properties. For example, $P_D^\alpha(x)$ is single-valued [26] and the proximal operator satisfies the following non-expansiveness property:
$$\| P_D^\alpha(x) - P_D^\alpha(y) \| \le \| x - y \|. \tag{6}$$
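As a quick numerical sanity check (ours), take $G(y) = \lambda \|y\|_1$ with $D = \mathbb{R}^n$, for which the proximal operator is the classical soft-thresholding map; the parameter values are arbitrary choices. Non-expansiveness can then be verified on random pairs of points:

```python
import numpy as np

def prox_l1(z, lam=0.5, alpha=1.0):
    # Proximal operator of G(y) = lam*||y||_1 with D = R^n:
    # argmin_y { lam*||y||_1 + (alpha/2)||z - y||^2 } = soft-thresholding of z.
    t = lam / alpha
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

rng = np.random.default_rng(0)
for _ in range(100):
    x, y = rng.normal(size=3), rng.normal(size=3)
    # Non-expansiveness: ||P(x) - P(y)|| <= ||x - y||.
    assert np.linalg.norm(prox_l1(x) - prox_l1(y)) <= np.linalg.norm(x - y) + 1e-12
```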
We then recall the following inequalities, which will be used later.
Cauchy–Schwarz inequality. Let the stochastic variables $\xi_1, \xi_2$ take values in an interval $(a, b)$, $-\infty \le a < b \le +\infty$, with $E[|\xi_1|^2] < +\infty$ and $E[|\xi_2|^2] < +\infty$. Then we have
$$| E[\xi_1 \xi_2] |^2 \le E[|\xi_1|^2] \, E[|\xi_2|^2];$$
in particular, taking $\xi_2 = 1$, we have
$$| E[\xi_1] |^2 \le E[|\xi_1|^2].$$
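This inequality also holds exactly for sample averages (it is then the discrete Cauchy–Schwarz inequality), which the following sketch (ours; the distributions are arbitrary illustrative choices) checks:

```python
import numpy as np

# Empirical check of |E[xi1*xi2]|^2 <= E[|xi1|^2] * E[|xi2|^2]:
# replacing expectations by sample averages gives the discrete
# Cauchy-Schwarz inequality, which holds for any draw.
rng = np.random.default_rng(1)
xi1 = rng.normal(size=10_000)
xi2 = rng.uniform(-1.0, 3.0, size=10_000)
assert np.mean(xi1 * xi2) ** 2 <= np.mean(xi1**2) * np.mean(xi2**2)
# Special case xi2 = 1: |E[xi1]|^2 <= E[|xi1|^2].
assert np.mean(xi1) ** 2 <= np.mean(xi1**2)
```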
We now provide the proof of the statement in the Introduction that solving for a weakly efficient solution of problem (4) is equivalent to solving MVVIP (2).
Theorem 1.
Let $F_j(x) = \nabla_x p_j(x)$ and $g_j(x) = \frac{1}{\rho_n} \sum_{i=1}^{t} \vartheta[\rho_n q_i(x)]$ for $j = 1, \dots, m$. For a fixed penalty parameter $\rho_n > 0$, $x$ is a solution of the MVVIP if and only if $x$ is a weakly efficient solution of problem (4).
Proof. 
(i) We first prove the necessity. Suppose $x$ is not a weakly efficient solution of problem (4). Since $g_j(x) = \frac{1}{\rho_n} \sum_{i=1}^{t} \vartheta[\rho_n q_i(x)]$, there exists $y \in D$ such that $p_j(y) + g_j(y) - (p_j(x) + g_j(x)) < 0$ for every $j = 1, \dots, m$. Thus, we have
$$\big( p_1(y) - p_1(x), \dots, p_m(y) - p_m(x) \big) + \big( g_1(y) - g_1(x), \dots, g_m(y) - g_m(x) \big) \in -\operatorname{int} \mathbb{R}_+^m.$$
Since $p_j$ is a continuously differentiable convex function, for every $j = 1, \dots, m$ it holds that $p_j(y) - p_j(x) \ge (y - x)^T \nabla_x p_j(x)$. Therefore
$$\big( (y - x)^T \nabla_x p_1(x), \dots, (y - x)^T \nabla_x p_m(x) \big) + \big( g_1(y), \dots, g_m(y) \big) - \big( g_1(x), \dots, g_m(x) \big) \in -\operatorname{int} \mathbb{R}_+^m. \tag{7}$$
Since $F_j(x) = \nabla_x p_j(x)$, $j = 1, \dots, m$, relation (7) is equivalent to
$$\big( (y - x)^T F_1(x), \dots, (y - x)^T F_m(x) \big) + \big( g_1(y), \dots, g_m(y) \big) - \big( g_1(x), \dots, g_m(x) \big) \in -\operatorname{int} \mathbb{R}_+^m.$$
This contradicts the assumption that $x$ is a solution of the MVVIP. Therefore, the necessity is established.
(ii) We then prove the sufficiency. If $x$ is not a solution of the MVVIP, there exists $y \in D$ such that
$$\big( (y - x)^T F_1(x), \dots, (y - x)^T F_m(x) \big) + \big( g_1(y), \dots, g_m(y) \big) - \big( g_1(x), \dots, g_m(x) \big) \in -\operatorname{int} \mathbb{R}_+^m.$$
That is, for every $j = 1, \dots, m$, there holds $(y - x)^T F_j(x) + g_j(y) - g_j(x) < 0$. Because $g_j$ is a convex function, for every $j = 1, \dots, m$ we have $g_j(y) - g_j(x) \ge (y - x)^T \nabla_x g_j(x)$. Thus
$$(y - x)^T F_j(x) + (y - x)^T \nabla_x g_j(x) < 0, \quad j = 1, \dots, m. \tag{8}$$
On the other hand, let $x_\tau = x + \tau(y - x) \in D$. For any $j = 1, \dots, m$, the following holds:
$$p_j(x_\tau) - p_j(x) = \tau (y - x)^T \nabla_x p_j(x) + o(\tau) \tag{9}$$
and
$$g_j(x_\tau) - g_j(x) = \tau (y - x)^T \nabla_x g_j(x) + o(\tau), \tag{10}$$
where $\tau$ is a sufficiently small number in $(0, 1)$. Therefore, combining (8)–(10) with $F_j(x) = \nabla_x p_j(x)$, we obtain that there exists $\varpi > 0$ such that, for every $j = 1, \dots, m$ and $\tau \in (0, \varpi)$, we have
$$p_j(x_\tau) - p_j(x) + g_j(x_\tau) - g_j(x) < 0.$$
This contradicts the weak efficiency of $x$ for problem (4). Therefore, the conclusion holds. □
To derive the deterministic model of the SMVVIP, we first propose an equivalent formulation for the MVVIP in the following theorem, which we call the equivalent mixed scalar variational inequality problem.
Theorem 2.
Solving problem (1) is equivalent to finding $(x, \lambda) \in D \times \Lambda$ satisfying
$$(y - x)^T \sum_{j=1}^m \lambda_j F_j(x) + \sum_{j=1}^m \lambda_j g_j(y) - \sum_{j=1}^m \lambda_j g_j(x) \ge 0, \quad \forall y \in D.$$
Proof. 
(i) We first prove the necessity. Obviously, problem (2) is equivalent to the following form: find $x \in D$ such that
$$\max_{1 \le j \le m} \big\{ (y - x)^T F_j(x) + g_j(y) - g_j(x) \big\} \ge 0, \quad \forall y \in D.$$
Let $M(y) = \max_{1 \le j \le m} \{ (y - x)^T F_j(x) + g_j(y) - g_j(x) \}$. Then $y = x \in D$ is an optimal solution of the following problem:
$$\min \; M(y) := \max_{1 \le j \le m} \{ (y - x)^T F_j(x) + g_j(y) - g_j(x) \} \quad \text{s.t.} \; y \in D.$$
In addition, we observe that the first-order necessary and sufficient optimality condition of the above problem at $y = x$ is $0 \in \partial M(x) + N_D(x)$. By [25], it holds that
$$0 \in \operatorname{conv} \{ F_j(x) + \nabla g_j(x), \; j = 1, \dots, m \} + N_D(x).$$
Thus, there exists $\lambda \ge 0$ with $\sum_{j=1}^m \lambda_j = 1$ satisfying $0 \in \sum_{j=1}^m \lambda_j F_j(x) + \sum_{j=1}^m \lambda_j \nabla g_j(x) + N_D(x)$. By the definition of the normal cone, we can easily obtain that
$$(y - x)^T \sum_{j=1}^m \lambda_j F_j(x) + (y - x)^T \sum_{j=1}^m \lambda_j \nabla g_j(x) \ge 0. \tag{13}$$
Combining this with the gradient (subdifferential) inequality for the convex functions $g_j$, $j = 1, \dots, m$, we have
$$\sum_{j=1}^m \lambda_j g_j(y) - \sum_{j=1}^m \lambda_j g_j(x) \ge (y - x)^T \sum_{j=1}^m \lambda_j \nabla g_j(x). \tag{14}$$
Combining (13) and (14), we obtain
$$(y - x)^T \sum_{j=1}^m \lambda_j F_j(x) + \sum_{j=1}^m \lambda_j g_j(y) - \sum_{j=1}^m \lambda_j g_j(x) \ge 0, \quad \forall y \in D.$$
Therefore, the necessity is established.
(ii) Next, we prove the sufficiency. Suppose that there exists $(x, \lambda) \in D \times \Lambda$ satisfying
$$(y - x)^T \sum_{j=1}^m \lambda_j F_j(x) + \sum_{j=1}^m \lambda_j g_j(y) - \sum_{j=1}^m \lambda_j g_j(x) \ge 0, \quad \forall y \in D. \tag{15}$$
Since $\lambda \in \Lambda$, for each $y \in D$ there exists at least one index $j$ with $\lambda_j > 0$ such that
$$(y - x)^T \lambda_j F_j(x) + \lambda_j g_j(y) - \lambda_j g_j(x) \ge 0$$
(otherwise every term of the weighted sum in (15) would be negative), that is,
$$(y - x)^T F_j(x) + g_j(y) - g_j(x) \ge 0.$$
Hence, we have
$$\big( (y - x)^T F_1(x), \dots, (y - x)^T F_m(x) \big) + \big( g_1(y), \dots, g_m(y) \big) - \big( g_1(x), \dots, g_m(x) \big) \notin -\operatorname{int} \mathbb{R}_+^m,$$
which means that $x$ is a solution of the MVVIP. □
Now, in order to present the deterministic bi-criteria model for solving the SMVVIP, we introduce the following two deterministic models for solving the SVIP mentioned in Section 1:
  • The ERM model for solving the SVIP:
$$\min \; E[v(x, \xi)] \quad \text{s.t.} \; x \in D,$$
    where $E$ stands for the mathematical expectation and $v(x, \xi)$ is the regularized gap function for the SVIP.
  • The CVaR model for solving the SVIP:
    To present the CVaR model, we first introduce the value-at-risk (VaR) model. For any fixed $x$ and confidence level $\beta \in (0, 1)$, the regularized gap function $v(x, \xi)$ is taken as the loss function, and the VaR of the loss associated with $x$ is defined as
$$\operatorname{VaR}_\beta^v(x) = \min \{ u \mid P[v(x, \xi) \le u] \ge \beta \},$$
    where $P$ stands for probability. However, as a function of the decision variable, VaR is in general neither convex nor a coherent risk measure. For this reason, Rockafellar et al. [28] proposed CVaR, which has better properties than VaR and is defined as
$$\operatorname{CVaR}_\beta^v(x) = \frac{1}{1 - \beta} \int_{\{\xi \mid v(x, \xi) \ge \operatorname{VaR}_\beta^v(x)\}} v(x, \xi) \, d\rho(\xi),$$
    where $\rho(\xi)$ represents the distribution function of $\xi$.
Chen et al. [19] gave the CVaR model for solving the SVIP as follows:
$$\min_{x \in \mathbb{R}^n} \operatorname{CVaR}_\beta^v(x). \tag{16}$$
By Theorem 2 of [28], problem (16) is equivalent to
$$\min_{(x, u) \in \mathbb{R}^{n+1}} V(x, u) = u + \frac{1}{1 - \beta} E\big[ [v(x, \xi) - u]_+ \big]. \tag{17}$$
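The equivalence can be checked numerically on a discrete loss sample (our illustration; the loss values are made up). Minimizing $u + \frac{1}{1-\beta} E[v - u]_+$ over $u$ recovers the $\beta$-tail average of the losses:

```python
import numpy as np

# Check that min_u { u + E[(L - u)_+]/(1 - beta) } equals the beta-tail CVaR
# on a discrete sample of losses.
losses = np.arange(1.0, 101.0)          # illustrative losses 1, 2, ..., 100
beta = 0.9

# Direct tail average: mean of the worst 10% of losses = mean(91..100) = 95.5.
cvar_direct = losses[losses > np.quantile(losses, beta)].mean()

# Rockafellar-Uryasev reformulation: minimize over candidate thresholds u.
objective = lambda u: u + np.mean(np.maximum(losses - u, 0.0)) / (1.0 - beta)
cvar_ru = min(objective(u) for u in losses)

assert abs(cvar_ru - 95.5) < 1e-9
assert abs(cvar_direct - cvar_ru) < 1e-9
```

The minimizing threshold $u$ is the VaR of the sample, as the theory predicts.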
Because the above two models contain mathematical expectations that are usually difficult to calculate in practice, one usually applies the sample average approximation (SAA) method to handle the expected value. That is, for an integrable function $\psi$, if $\Xi_k := \{\xi^1, \dots, \xi^{N_k}\} \subseteq \Xi$ is a set of independent and identically distributed samples of the stochastic variable $\xi$, the sample average $\frac{1}{N_k} \sum_{\xi^i \in \Xi_k} \psi(\xi^i)$ can be used to approximate the expected value $E[\psi(\xi)]$, where $N_k \to +\infty$ as $k \to +\infty$. The limit below holds with probability one (abbreviated "w.p.1") by the strong law of large numbers; that is,
$$\lim_{k \to \infty} \frac{1}{N_k} \sum_{\xi^i \in \Xi_k} \psi(\xi^i) = E[\psi(\xi)], \quad w.p.1. \tag{18}$$
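A one-line SAA sketch (ours; the integrand and distribution are illustrative choices): for $\psi(\xi) = \xi^2$ with $\xi \sim \mathrm{Uniform}(0, 1)$, the exact expectation is $1/3$, and the sample average approaches it:

```python
import numpy as np

# SAA sketch: approximate E[psi(xi)] for psi(xi) = xi^2, xi ~ Uniform(0, 1),
# whose exact expectation is 1/3.
rng = np.random.default_rng(42)

def saa_estimate(n_samples):
    xi = rng.uniform(0.0, 1.0, size=n_samples)
    return np.mean(xi**2)

# By the strong law of large numbers the estimate approaches 1/3.
assert abs(saa_estimate(1_000_000) - 1.0 / 3.0) < 1e-2
```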

3. Deterministic Bi-Criteria Model for Solving SMVVIP

In this section, based on the works for solving the SVIP, we present a deterministic bi-criteria model for solving the SMVVIP. By Theorem 2, similarly to the equivalent form of the MVVIP, we can write the SMVVIP (5) as the following equivalent stochastic mixed scalar variational inequality problem: find $(x, \lambda) \in D \times \Lambda$ such that
$$(y - x)^T \sum_{j=1}^m \lambda_j F_j(x, \xi) + \sum_{j=1}^m \lambda_j g_j(y, \xi) - \sum_{j=1}^m \lambda_j g_j(x, \xi) \ge 0, \quad \forall y \in D, \; a.s. \; \xi \in \Xi. \tag{19}$$
The presence of stochastic factors means that problem (19) generally has no common solution for almost every $\xi \in \Xi$. Therefore, in order to obtain a reasonable solution, we need to give a reasonable deterministic model for (19) and treat the solution of this deterministic model as the solution of the SMVVIP. To this end, motivated by the work of Fukushima [16], we define the regularized gap function $h(x, \lambda, \xi) : \mathbb{R}^n \times \Lambda \times \Xi \to \mathbb{R}$ of (19) as follows:
$$h(x, \lambda, \xi) = \max_{y \in D} \Big\{ (x - y)^T \sum_{j=1}^m \lambda_j F_j(x, \xi) + \sum_{j=1}^m \lambda_j g_j(x, \xi) - \sum_{j=1}^m \lambda_j g_j(y, \xi) - \frac{\alpha}{2} \|x - y\|^2 \Big\}. \tag{20}$$
As stated in [29], for any $(x, \lambda) \in D \times \Lambda$ and $\xi \in \Xi$, we have
$$h(x, \lambda, \xi) = (x - H(x, \lambda, \xi))^T \sum_{j=1}^m \lambda_j F_j(x, \xi) + \sum_{j=1}^m \lambda_j g_j(x, \xi) - \sum_{j=1}^m \lambda_j g_j(H(x, \lambda, \xi), \xi) - \frac{\alpha}{2} \| x - H(x, \lambda, \xi) \|^2, \tag{21}$$
which can serve as a gap function of (19), where
$$H(x, \lambda, \xi) = P_D^\alpha \Big( x - \alpha^{-1} \sum_{j=1}^m \lambda_j F_j(x, \xi) \Big). \tag{22}$$
Obviously, based on Fukushima's work in [30], for each $(x, \lambda) \in D \times \Lambda$, the regularized gap function satisfies $h(x, \lambda, \xi) \ge 0$, $a.s.$ $\xi \in \Xi$. In addition, by Theorem 2.2 in [29], $h(x, \lambda, \xi) = 0$ for each $(x, \lambda) \in D \times \Lambda$ if and only if $(x, \lambda)$ solves (19). Hence, we can easily obtain that the following optimization problem is equivalent to (19):
$$\min \; h(x, \lambda, \xi) \quad \text{s.t.} \; (x, \lambda) \in D \times \Lambda, \; a.s. \; \xi \in \Xi.$$
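For intuition, the following sketch (ours, under simplifying assumptions: $g \equiv 0$ and $D = \mathbb{R}^2_+$, so that $P_D^\alpha$ reduces to the Euclidean projection) evaluates $h(x, \lambda, \xi)$ via the closed form above and checks its nonnegativity; the maps $F_1, F_2$ are made up, standing in for $F_j(x, \xi)$ at a fixed $\xi$:

```python
import numpy as np

def gap_h(x, lam, F_vals, alpha=1.0):
    # Closed form of h with g = 0 and D = R^n_+: H = proj_D(x - Fbar/alpha),
    # h = (x - H)^T Fbar - (alpha/2)||x - H||^2, where Fbar = sum_j lam_j F_j.
    Fbar = sum(l * Fv for l, Fv in zip(lam, F_vals))
    H = np.maximum(x - Fbar / alpha, 0.0)
    d = x - H
    return d @ Fbar - 0.5 * alpha * d @ d

# Hypothetical maps F_1, F_2.
F1 = lambda x: x + np.array([1.0, -1.0])
F2 = lambda x: 2.0 * x

rng = np.random.default_rng(3)
for _ in range(50):
    x = rng.uniform(0.0, 2.0, size=2)       # points of D
    t = rng.uniform()
    lam = np.array([t, 1.0 - t])            # lambda in the simplex Lambda
    # The regularized gap function is nonnegative on D x Lambda.
    assert gap_h(x, lam, [F1(x), F2(x)]) >= -1e-12
```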
According to reference [30], for a given function $\gamma : \mathbb{R}^n \to \mathbb{R}^n$, when the optimal solution $y(x)$ of $\max_{y \in D} \{ \gamma(x)^T (x - y) \}$ is unique, let
$$\Upsilon(x) = \max_{y \in D} \{ \gamma(x)^T (x - y) \};$$
then the gradient of $\Upsilon$ at $x$ is given by
$$\nabla_x \Upsilon(x) = \gamma(x) - \nabla_x \gamma(x)^T (y(x) - x).$$
In addition, by Theorem 3.2 in [16], since $F_j(x, \xi)$ and $g_j(\cdot)$ are continuously differentiable for each $j = 1, \dots, m$, the function $h(x, \lambda, \xi)$ is continuously differentiable at $(x, \lambda)$. Since the optimal solution of formulation (20) is unique, based on the above results we can obtain the gradient of $h(x, \lambda, \xi)$ at $(x, \lambda)$ as follows:
$$\nabla_{(x,\lambda)} h(x, \lambda, \xi) = \begin{pmatrix} \sum_{j=1}^m \lambda_j F_j(x, \xi) + \sum_{j=1}^m \lambda_j \nabla_x g_j(x, \xi) - \Big( \sum_{j=1}^m \lambda_j \nabla_x F_j(x, \xi)^T - \alpha I \Big) (H(x, \lambda, \xi) - x) \\ (x - H(x, \lambda, \xi))^T F_1(x, \xi) - g_1(H(x, \lambda, \xi), \xi) + g_1(x, \xi) \\ \vdots \\ (x - H(x, \lambda, \xi))^T F_m(x, \xi) - g_m(H(x, \lambda, \xi), \xi) + g_m(x, \xi) \end{pmatrix},$$
where $I$ denotes the identity matrix of appropriate dimension.
Next, combining the research on the ERM model [17,22] and the CVaR model [19], along with the bi-criteria model [31], for solving the SVIP, we give the deterministic bi-criteria model for problem (19) as follows:
$$\min_{(x, \lambda) \in D \times \Lambda} \; \big( E_\Xi[h(x, \lambda, \xi)], \; \operatorname{CVaR}_\beta^h(x, \lambda) \big), \tag{23}$$
where $E_\Xi[h(x, \lambda, \xi)] = \int_\Xi h(x, \lambda, \xi) \, d\xi$ and
$$\operatorname{CVaR}_\beta^h(x, \lambda) = \frac{1}{1 - \beta} \int_{\{\xi \mid h(x, \lambda, \xi) \ge \operatorname{VaR}_\beta^h(x, \lambda)\}} h(x, \lambda, \xi) \, d\rho(\xi).$$
We adopt the weighted-sum method of [32] to deal with the multiobjective optimization problem (23): given a weight coefficient $r \in [0, 1]$, which can model different degrees of risk aversion of decision-makers in the real world, we obtain the following weighted optimization problem:
$$\min_{(x, \lambda) \in D \times \Lambda} \; \theta(x, \lambda) := (1 - r) E_\Xi[h(x, \lambda, \xi)] + r \operatorname{CVaR}_\beta^h(x, \lambda). \tag{24}$$
Note that if $r = 0$, (24) becomes the ERM model for solving the SMVVIP. Because this model does not take into account the deviation between realized values of the regularized gap function and its expected residual, the ERM model is risk-neutral. If $r = 1$, (24) becomes the CVaR model for solving the SMVVIP, which is risk-averse; for some investors, though, it may be too conservative. The model above therefore makes a compromise between the expected residual of the regularized gap function and the robust worst-case value. More details about the deterministic bi-criteria model can be found in reference [31]. According to Theorem 2 of [28], for a given parameter $r \in [0, 1]$, we can easily obtain that problem (24) is equivalent to
$$\min_{(x, \lambda, u) \in S} \; \theta(x, \lambda, u) := (1 - r) E_\Xi[h(x, \lambda, \xi)] + r u + \frac{r}{1 - \beta} E_\Xi\big[ [h(x, \lambda, \xi) - u]_+ \big], \tag{25}$$
where $S := D \times \Lambda \times \mathbb{R}$ and $E_\Xi\big[ [h(x, \lambda, \xi) - u]_+ \big] = \int_\Xi [h(x, \lambda, \xi) - u]_+ \, d\xi$.
Note that, due to the non-smooth function $[h(x, \lambda, \xi) - u]_+$, problem (25) is a non-smooth optimization problem. To deal with this non-smoothness, as a special case of the smoothing function proposed by Peng in [33], for a given parameter $\epsilon > 0$ we can smooth $[t]_+ = \max\{t, 0\}$ as follows:
$$\phi(t, \epsilon) = \epsilon \ln \big( e^{t/\epsilon} + 1 \big). \tag{26}$$
In addition, $\phi(t, \epsilon)$ has the following properties [34]:
$$\lim_{\epsilon \to 0^+} \epsilon \ln \big( e^{t/\epsilon} + 1 \big) = [t]_+, \qquad 0 \le \epsilon \ln \big( e^{t/\epsilon} + 1 \big) - [t]_+ \le \epsilon \ln 2.$$
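The uniform error bound $0 \le \phi(t, \epsilon) - [t]_+ \le \epsilon \ln 2$ is easy to verify numerically (our sketch; `np.logaddexp` is used only for overflow-safe evaluation of $\ln(e^{t/\epsilon} + 1)$):

```python
import numpy as np

# Smoothing of [t]_+ = max(t, 0): phi(t, eps) = eps * ln(exp(t/eps) + 1).
# np.logaddexp(a, b) computes ln(exp(a) + exp(b)) without overflow.
def phi(t, eps):
    return eps * np.logaddexp(t / eps, 0.0)

t = np.linspace(-5.0, 5.0, 1001)
for eps in (1.0, 0.1, 0.01):
    gap = phi(t, eps) - np.maximum(t, 0.0)
    # Uniform error bound: 0 <= phi(t, eps) - [t]_+ <= eps * ln 2.
    assert np.all(gap >= -1e-12) and np.all(gap <= eps * np.log(2) + 1e-12)
```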
Thus, we give the smoothing approximation problem of problem (25) as follows:
$$\min_{(x, \lambda, u) \in S} \; \theta(x, \lambda, u, \epsilon) := (1 - r) E_\Xi[h(x, \lambda, \xi)] + r u + \frac{r}{1 - \beta} E_\Xi[l(x, \lambda, u, \xi, \epsilon)], \tag{27}$$
where $l(x, \lambda, u, \xi, \epsilon) = \epsilon \ln \big( e^{(h(x, \lambda, \xi) - u)/\epsilon} + 1 \big)$; we also write $l(x, \lambda, u, \xi) = [h(x, \lambda, \xi) - u]_+$.
Note that problem (27) still contains a mathematical expectation that is difficult to compute, so we apply the SAA method introduced in Section 2 to obtain the following smoothing SAA problem of (27):
$$\min_{(x, \lambda, u) \in S} \; \theta_k(x, \lambda, u, \epsilon) := \frac{1 - r}{N_k} \sum_{\xi^i \in \Xi_k} h(x, \lambda, \xi^i) + r u + \frac{r}{(1 - \beta) N_k} \sum_{\xi^i \in \Xi_k} l(x, \lambda, u, \xi^i, \epsilon). \tag{28}$$
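The smoothing SAA objective can be assembled directly from sampled gap-function values (our sketch; the array `h_vals` is a synthetic stand-in for $h(x, \lambda, \xi^i)$ at fixed $(x, \lambda)$). As $\epsilon \to 0$ it agrees with the non-smooth objective, with error at most $\frac{r}{1-\beta} \epsilon \ln 2$:

```python
import numpy as np

def theta_k(h_vals, u, r=0.5, beta=0.9, eps=1e-3):
    # Smoothing SAA objective: (1-r)*mean(h) + r*u
    #   + r/(1-beta) * mean of eps*ln(exp((h_i - u)/eps) + 1).
    smooth_plus = eps * np.logaddexp((h_vals - u) / eps, 0.0)
    return (1 - r) * h_vals.mean() + r * u + r * smooth_plus.mean() / (1 - beta)

def theta_nonsmooth(h_vals, u, r=0.5, beta=0.9):
    plus = np.maximum(h_vals - u, 0.0)
    return (1 - r) * h_vals.mean() + r * u + r * plus.mean() / (1 - beta)

rng = np.random.default_rng(7)
h_vals = rng.exponential(scale=1.0, size=1000)   # stand-in gap-function samples
u = 1.0
# 0 <= theta_k - theta <= r/(1-beta) * eps*ln2, so they agree as eps -> 0.
err = theta_k(h_vals, u, eps=1e-4) - theta_nonsmooth(h_vals, u)
assert 0.0 <= err <= 0.5 / 0.1 * 1e-4 * np.log(2) + 1e-12
```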
We will investigate the limiting behavior of these approximation problems in the following section.

4. Convergence Analysis

In this section, we first investigate the limiting behavior of global optimal solutions of problem (28). Then, when $\Xi \subseteq \mathbb{R}^b$ is not a compact set, we propose a compact approximation problem and prove its convergence.

4.1. Convergence Behavior of Global Optimal Solutions

We now assume that $\epsilon$ depends on $k$; we therefore write $\epsilon_k$ for the $\epsilon$ in problem (28), with $\epsilon_k \to 0$ as $k \to \infty$. From now on, $\bar{S}$ and $S_k$ denote the sets of optimal solutions of problems (25) and (28), respectively.
Theorem 3.
Suppose that $\Xi \subseteq \mathbb{R}^b$ is a nonempty compact set. For given parameters $r$, $\beta$ and for each $k$, let $(x_k(\epsilon_k), \lambda_k(\epsilon_k), u_k(\epsilon_k)) \in S_k$ and let $(\bar{x}, \bar{\lambda}, \bar{u})$ be an accumulation point of $\{(x_k(\epsilon_k), \lambda_k(\epsilon_k), u_k(\epsilon_k))\}$. Then we have $(\bar{x}, \bar{\lambda}, \bar{u}) \in \bar{S}$ w.p.1.
Proof. 
Taking a subsequence if necessary, we assume that $\lim_{k \to \infty} (x_k(\epsilon_k), \lambda_k(\epsilon_k), u_k(\epsilon_k)) = (\bar{x}, \bar{\lambda}, \bar{u})$. Let $D_1$, $D_2$, and $D_3$ be compact convex sets containing $\{(x_k(\epsilon_k), \lambda_k(\epsilon_k))\}$, $\{u_k(\epsilon_k)\}$, and $\{\epsilon_k\}$, respectively. By the continuous differentiability of $h(x, \lambda, \xi)$ on $D_1 \times \Xi$, there exists a constant $C_1 > 0$ satisfying
$$\| \nabla_{(x,\lambda)} h(x, \lambda, \xi) \| \le C_1. \tag{29}$$
In addition, since $l(x, \lambda, u, \xi, \epsilon)$ is continuously differentiable on $D_1 \times D_2 \times \Xi \times D_3$, for any $(x, \lambda, u, \xi, \epsilon) \in D_1 \times D_2 \times \Xi \times D_3$ there exists a constant $C_2 > 0$ satisfying
$$\| \nabla_{(x,\lambda,u)} l(x, \lambda, u, \xi, \epsilon) \| \le C_2. \tag{30}$$
It follows from the mean value theorem that, for each $(x_k(\epsilon_k), \lambda_k(\epsilon_k))$ and each $\xi^i$, there exists $(x_k^i, \lambda_k^i) = \alpha_k^i (x_k(\epsilon_k), \lambda_k(\epsilon_k)) + (1 - \alpha_k^i)(\bar{x}, \bar{\lambda})$ with $\alpha_k^i \in (0, 1)$ satisfying
$$h(x_k(\epsilon_k), \lambda_k(\epsilon_k), \xi^i) - h(\bar{x}, \bar{\lambda}, \xi^i) = \nabla_{(x,\lambda)} h(x_k^i, \lambda_k^i, \xi^i)^T \big( (x_k(\epsilon_k), \lambda_k(\epsilon_k)) - (\bar{x}, \bar{\lambda}) \big). \tag{31}$$
Similarly, for each $k$ and $\epsilon_k$, there exists $(x_k^i, \lambda_k^i, u_k^i) = \alpha_k^i (x_k(\epsilon_k), \lambda_k(\epsilon_k), u_k(\epsilon_k)) + (1 - \alpha_k^i)(\bar{x}, \bar{\lambda}, \bar{u})$ with $\alpha_k^i \in (0, 1)$ satisfying
$$l(x_k(\epsilon_k), \lambda_k(\epsilon_k), u_k(\epsilon_k), \xi^i, \epsilon_k) - l(\bar{x}, \bar{\lambda}, \bar{u}, \xi^i, \epsilon_k) = \nabla_{(x,\lambda,u)} l(x_k^i, \lambda_k^i, u_k^i, \xi^i, \epsilon_k)^T \big( (x_k(\epsilon_k), \lambda_k(\epsilon_k), u_k(\epsilon_k)) - (\bar{x}, \bar{\lambda}, \bar{u}) \big). \tag{32}$$
Thus, according to (29)–(32), we have
$$\begin{aligned} &| \theta_k(x_k(\epsilon_k), \lambda_k(\epsilon_k), u_k(\epsilon_k), \epsilon_k) - \theta_k(\bar{x}, \bar{\lambda}, \bar{u}, \epsilon_k) | \\ &\quad \le \frac{1 - r}{N_k} \sum_{\xi^i \in \Xi_k} | h(x_k(\epsilon_k), \lambda_k(\epsilon_k), \xi^i) - h(\bar{x}, \bar{\lambda}, \xi^i) | + r | u_k(\epsilon_k) - \bar{u} | \\ &\qquad + \frac{r}{(1 - \beta) N_k} \sum_{\xi^i \in \Xi_k} | l(x_k(\epsilon_k), \lambda_k(\epsilon_k), u_k(\epsilon_k), \xi^i, \epsilon_k) - l(\bar{x}, \bar{\lambda}, \bar{u}, \xi^i, \epsilon_k) | \\ &\quad \le r | u_k(\epsilon_k) - \bar{u} | + C_1 (1 - r) \big( \| x_k(\epsilon_k) - \bar{x} \| + \| \lambda_k(\epsilon_k) - \bar{\lambda} \| \big) \\ &\qquad + \frac{r C_2}{1 - \beta} \big( \| x_k(\epsilon_k) - \bar{x} \| + \| \lambda_k(\epsilon_k) - \bar{\lambda} \| + | u_k(\epsilon_k) - \bar{u} | \big) \\ &\quad \to 0 \quad \text{as } k \to \infty, \; w.p.1. \end{aligned} \tag{33}$$
On the other hand, according to (18), we know that
$$\lim_{k \to \infty} \frac{1}{N_k} \sum_{\xi^i \in \Xi_k} h(\bar{x}, \bar{\lambda}, \xi^i) = E_\Xi[h(\bar{x}, \bar{\lambda}, \xi)] \tag{34}$$
and
$$\lim_{k \to \infty} \frac{1}{N_k} \sum_{\xi^i \in \Xi_k} l(\bar{x}, \bar{\lambda}, \bar{u}, \xi^i) = E_\Xi[l(\bar{x}, \bar{\lambda}, \bar{u}, \xi)]. \tag{35}$$
We then have, from $0 \le l(x, \lambda, u, \xi, \epsilon_k) - l(x, \lambda, u, \xi) \le \epsilon_k \ln 2$, $\epsilon_k \to 0$ as $k \to \infty$, and (34), (35), that
$$\begin{aligned} &| \theta_k(\bar{x}, \bar{\lambda}, \bar{u}, \epsilon_k) - \theta(\bar{x}, \bar{\lambda}, \bar{u}) | \\ &\quad = \Big| \frac{1 - r}{N_k} \sum_{\xi^i \in \Xi_k} h(\bar{x}, \bar{\lambda}, \xi^i) + r \bar{u} + \frac{r}{(1 - \beta) N_k} \sum_{\xi^i \in \Xi_k} l(\bar{x}, \bar{\lambda}, \bar{u}, \xi^i, \epsilon_k) \\ &\qquad\quad - (1 - r) E_\Xi[h(\bar{x}, \bar{\lambda}, \xi)] - r \bar{u} - \frac{r}{1 - \beta} E_\Xi[l(\bar{x}, \bar{\lambda}, \bar{u}, \xi)] \Big| \\ &\quad \le (1 - r) \Big| \frac{1}{N_k} \sum_{\xi^i \in \Xi_k} h(\bar{x}, \bar{\lambda}, \xi^i) - E_\Xi[h(\bar{x}, \bar{\lambda}, \xi)] \Big| + \frac{r}{1 - \beta} \Big| \frac{1}{N_k} \sum_{\xi^i \in \Xi_k} l(\bar{x}, \bar{\lambda}, \bar{u}, \xi^i) - E_\Xi[l(\bar{x}, \bar{\lambda}, \bar{u}, \xi)] \Big| \\ &\qquad + \frac{r}{(1 - \beta) N_k} \sum_{\xi^i \in \Xi_k} | l(\bar{x}, \bar{\lambda}, \bar{u}, \xi^i, \epsilon_k) - l(\bar{x}, \bar{\lambda}, \bar{u}, \xi^i) | \\ &\quad \le (1 - r) \Big| \frac{1}{N_k} \sum_{\xi^i \in \Xi_k} h(\bar{x}, \bar{\lambda}, \xi^i) - E_\Xi[h(\bar{x}, \bar{\lambda}, \xi)] \Big| + \frac{r \epsilon_k \ln 2}{1 - \beta} + \frac{r}{1 - \beta} \Big| \frac{1}{N_k} \sum_{\xi^i \in \Xi_k} l(\bar{x}, \bar{\lambda}, \bar{u}, \xi^i) - E_\Xi[l(\bar{x}, \bar{\lambda}, \bar{u}, \xi)] \Big| \\ &\quad \to 0 \quad \text{as } k \to \infty, \; w.p.1. \end{aligned} \tag{36}$$
Similarly to (36), for any fixed $(x, \lambda, u) \in S$ we have
$$\lim_{k \to \infty} \theta_k(x, \lambda, u, \epsilon_k) = \theta(x, \lambda, u).$$
Since
$$| \theta_k(x_k(\epsilon_k), \lambda_k(\epsilon_k), u_k(\epsilon_k), \epsilon_k) - \theta(\bar{x}, \bar{\lambda}, \bar{u}) | \le | \theta_k(x_k(\epsilon_k), \lambda_k(\epsilon_k), u_k(\epsilon_k), \epsilon_k) - \theta_k(\bar{x}, \bar{\lambda}, \bar{u}, \epsilon_k) | + | \theta_k(\bar{x}, \bar{\lambda}, \bar{u}, \epsilon_k) - \theta(\bar{x}, \bar{\lambda}, \bar{u}) |,$$
it follows from (33) and (36) that
$$\lim_{k \to \infty} \theta_k(x_k(\epsilon_k), \lambda_k(\epsilon_k), u_k(\epsilon_k), \epsilon_k) = \theta(\bar{x}, \bar{\lambda}, \bar{u}), \quad w.p.1. \tag{37}$$
For all $k$, since $(x_k(\epsilon_k), \lambda_k(\epsilon_k), u_k(\epsilon_k)) \in S_k$, the following holds:
$$\theta_k(x_k(\epsilon_k), \lambda_k(\epsilon_k), u_k(\epsilon_k), \epsilon_k) \le \theta_k(x, \lambda, u, \epsilon_k), \quad \forall (x, \lambda, u) \in S. \tag{38}$$
Letting $k \to \infty$, by (37) and (38), for any $(x, \lambda, u) \in S$ we have
$$\theta(\bar{x}, \bar{\lambda}, \bar{u}) \le \theta(x, \lambda, u), \quad w.p.1,$$
which means that $(\bar{x}, \bar{\lambda}, \bar{u}) \in \bar{S}$ w.p.1. □
Remark 1.
In Theorem 3, we only show the convergence of the sequence of global optimal solutions of the smoothing SAA problem (28). In fact, when the function $E[\epsilon \ln(e^{(h(x, \lambda, \xi) - u)/\epsilon} + 1)]$ can be evaluated, as part of the proof of Theorem 3 we can easily obtain that the sequence of global optimal solutions of the smoothing approximation problem (27) converges to a global optimal solution of problem (25).
Remark 2.
Note that (28) is a non-convex problem, so it is in general difficult to obtain a sequence of global optimal solutions of problem (28). Therefore, it is essential to consider the convergence of stationary points. Following Theorem 2 of reference [35], we can similarly prove that the stationary points of the smoothing SAA problem (28) converge to stationary points of problem (25); we omit the proof.

4.2. Convergence of the Compact Approximation Problem

It is worth noting that, in Theorem 3, the sample space $\Xi \subseteq \mathbb{R}^b$ is a compact set. If $\Xi$ is not compact, we apply a compact approximation method. In order to give the compact approximation problem of (25), for a sufficiently large number $\nu$, we first define the compact approximation set of $\Xi$ as follows:
$$\Xi_\nu := \{ \xi \in \Xi \mid \| \xi \| \le \nu \}. \tag{39}$$
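The effect of this truncation can be sketched numerically (our illustration; the integrand $\psi(\xi) = \|\xi\|^2$ and the normal distribution are arbitrary choices): restricting the integral to the ball of radius $\nu$ underestimates the full expectation of a nonnegative integrand, and the error vanishes as $\nu$ grows.

```python
import numpy as np

# Compact approximation sketch: truncate an unbounded sample space to the
# ball of radius nu and compare truncated and full sample averages.
rng = np.random.default_rng(11)
xi = rng.normal(size=(200_000, 2))              # samples with Xi = R^2
psi = np.sum(xi**2, axis=1)                     # psi(xi) = ||xi||^2

full_mean = psi.mean()
for nu in (1.0, 2.0, 4.0, 8.0):
    inside = np.linalg.norm(xi, axis=1) <= nu   # the truncated set Xi_nu
    trunc_mean = np.where(inside, psi, 0.0).mean()
    # For psi >= 0, the truncated average never exceeds the full one.
    assert trunc_mean <= full_mean + 1e-12

err = abs(np.where(np.linalg.norm(xi, axis=1) <= 8.0, psi, 0.0).mean() - full_mean)
assert err < 1e-3   # the Gaussian tail beyond nu = 8 is negligible
```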
Then, we present the compact approximation problem of (25) as follows:
$$\min_{(x, \lambda, u) \in S} \; \theta^\nu(x, \lambda, u) := (1 - r) E_{\Xi_\nu}[h(x, \lambda, \xi)] + r u + \frac{r}{1 - \beta} E_{\Xi_\nu}\big[ [h(x, \lambda, \xi) - u]_+ \big], \tag{40}$$
where $S := D \times \Lambda \times \mathbb{R}$, $D$ is a nonempty closed convex set, $E_{\Xi_\nu}[h(x, \lambda, \xi)] = \int_{\Xi_\nu} h(x, \lambda, \xi) \, d\xi$, and $E_{\Xi_\nu}\big[ [h(x, \lambda, \xi) - u]_+ \big] = \int_{\Xi_\nu} [h(x, \lambda, \xi) - u]_+ \, d\xi$; note that $\Xi_\nu$ is a compact set. Next, we make some assumptions.
(A1)
For each $j = 1, \dots, m$, there exists an integrable function $\kappa_j(\xi)$ such that $E_\Xi[\kappa_j^2(\xi)] < +\infty$ and
$$\| F_j(y, \xi) - F_j(x, \xi) \| \le \kappa_j(\xi) \| y - x \|, \quad \forall x, y \in D.$$
Furthermore, for every $j = 1, \dots, m$, there exists $\check{x} \in D$ such that $E_\Xi[\| F_j(\check{x}, \xi) \|^2] < +\infty$.
(A2)
There exists an integrable function $\delta(\xi)$ such that $E_\Xi[\delta^2(\xi)] < +\infty$ and
$$\| g(y, \xi) - g(x, \xi) \| \le \delta(\xi) \| y - x \|, \quad \forall x, y \in D.$$
In addition, there exists $\acute{x} \in D$ such that $E_\Xi[\| g(\acute{x}, \xi) \|] < +\infty$.
Remark 3.
For every $x \in D$, by assumption (A1), for every $j = 1, \dots, m$ we can easily obtain
$$E_\Xi[\| F_j(x, \xi) \|^2] \le E_\Xi\big[ \big( \| F_j(x, \xi) - F_j(\check{x}, \xi) \| + \| F_j(\check{x}, \xi) \| \big)^2 \big] \le 2 \| x - \check{x} \|^2 E_\Xi[\kappa_j^2(\xi)] + 2 E_\Xi[\| F_j(\check{x}, \xi) \|^2] < +\infty. \tag{41}$$
Furthermore, for every $x \in D$, we can easily obtain
$$E_\Xi\Big[ \Big( \sum_{j=1}^m \kappa_j(\xi) \Big)^2 \Big] < +\infty, \qquad E_\Xi\Big[ \Big( \sum_{j=1}^m \| F_j(x, \xi) \| \Big)^2 \Big] < +\infty. \tag{42}$$
In addition, under conditions (A1)–(A2), by the Cauchy–Schwarz inequality and (41) and (42), for every $x \in D$ and $j = 1, \dots, m$ we can obtain
$$E_\Xi[\| F_j(x, \xi) \|] < +\infty, \qquad E_\Xi[\kappa_j(\xi)] < +\infty, \qquad E_\Xi[\delta(\xi)] < +\infty \tag{43}$$
and
$$E_\Xi\Big[ \sum_{j=1}^m \kappa_j(\xi) \sum_{j=1}^m \| F_j(x, \xi) \| \Big] \le \sqrt{ E_\Xi\Big[ \Big( \sum_{j=1}^m \kappa_j(\xi) \Big)^2 \Big] \, E_\Xi\Big[ \Big( \sum_{j=1}^m \| F_j(x, \xi) \| \Big)^2 \Big] } < +\infty. \tag{44}$$
Similarly to (44), we also have
$$E_\Xi\Big[ \delta(\xi) \sum_{j=1}^m \kappa_j(\xi) \Big] < +\infty, \qquad E_\Xi\Big[ \delta(\xi) \sum_{j=1}^m \| F_j(x, \xi) \| \Big] < +\infty. \tag{45}$$
Moreover, by (A2) and (43), we have
$$E_\Xi[\| g(x, \xi) \|] \le E_\Xi[\| g(x, \xi) - g(\acute{x}, \xi) \|] + E_\Xi[\| g(\acute{x}, \xi) \|] \le E_\Xi[\delta(\xi)] \| x - \acute{x} \| + E_\Xi[\| g(\acute{x}, \xi) \|] < +\infty. \tag{46}$$
Thus, for every $x \in D$ and $j = 1, \dots, m$, we have
$$E_\Xi[| g_j(x, \xi) |] \le E_\Xi[\| g(x, \xi) \|] < +\infty. \tag{47}$$
Lemma 1.
Let (A1)–(A2) hold. Then we have
$$E_\Xi[h(x, \lambda, \xi)] < +\infty, \quad \forall (x, \lambda) \in D \times \Lambda.$$
Proof. 
By (21), the facts that $h(x, \lambda, \xi) \ge 0$, $\lambda_j \ge 0$, $\sum_{j=1}^m \lambda_j = 1$, and (A2), there holds
$$\begin{aligned} \frac{\alpha}{2} \| x - H(x, \lambda, \xi) \|^2 &\le (x - H(x, \lambda, \xi))^T \sum_{j=1}^m \lambda_j F_j(x, \xi) + \sum_{j=1}^m \lambda_j g_j(x, \xi) - \sum_{j=1}^m \lambda_j g_j(H(x, \lambda, \xi), \xi) \\ &\le \| x - H(x, \lambda, \xi) \| \sum_{j=1}^m \| F_j(x, \xi) \| + m \| g(x, \xi) - g(H(x, \lambda, \xi), \xi) \| \\ &\le \| x - H(x, \lambda, \xi) \| \Big( \sum_{j=1}^m \| F_j(x, \xi) \| + m \delta(\xi) \Big). \end{aligned} \tag{48}$$
Therefore, we have
$$\| x - H(x, \lambda, \xi) \| \le \frac{2}{\alpha} \Big( \sum_{j=1}^m \| F_j(x, \xi) \| + m \delta(\xi) \Big). \tag{49}$$
In addition, we can obtain
$$\begin{aligned} E_\Xi[h(x, \lambda, \xi)] &= E_\Xi\Big[ (x - H(x, \lambda, \xi))^T \sum_{j=1}^m \lambda_j F_j(x, \xi) + \sum_{j=1}^m \lambda_j g_j(x, \xi) - \sum_{j=1}^m \lambda_j g_j(H(x, \lambda, \xi), \xi) - \frac{\alpha}{2} \| x - H(x, \lambda, \xi) \|^2 \Big] \\ &\le E_\Xi\Big[ \| x - H(x, \lambda, \xi) \| \sum_{j=1}^m \| F_j(x, \xi) \| + m \| g(x, \xi) - g(H(x, \lambda, \xi), \xi) \| + \frac{\alpha}{2} \| x - H(x, \lambda, \xi) \|^2 \Big] \\ &\le E_\Xi\Big[ \| x - H(x, \lambda, \xi) \| \sum_{j=1}^m \| F_j(x, \xi) \| + m \delta(\xi) \| x - H(x, \lambda, \xi) \| + \frac{\alpha}{2} \| x - H(x, \lambda, \xi) \|^2 \Big] \\ &\le E_\Xi\Big[ \frac{4}{\alpha} \Big( \sum_{j=1}^m \| F_j(x, \xi) \| + m \delta(\xi) \Big)^2 \Big] \\ &= \frac{4}{\alpha} E_\Xi\Big[ \Big( \sum_{j=1}^m \| F_j(x, \xi) \| \Big)^2 \Big] + \frac{8 m}{\alpha} E_\Xi\Big[ \delta(\xi) \sum_{j=1}^m \| F_j(x, \xi) \| \Big] + \frac{4 m^2}{\alpha} E_\Xi[\delta^2(\xi)] < +\infty, \end{aligned}$$
where the first inequality uses the Cauchy–Schwarz inequality and the inequality $\sum_{j=1}^m |t_j| \le m \| t \|$, the second inequality is derived from (A2), the third inequality is given by (49), and the last inequality follows from (A2), (42), and (45). Therefore, the conclusion holds. □
Lemma 2.
Suppose that (A1)–(A2) hold and $\lim_{\nu \to \infty} (x^\nu, \lambda^\nu) = (\tilde{x}, \tilde{\lambda})$. Then we have $\lim_{\nu \to \infty} E_{\Xi_\nu}[h(x^\nu, \lambda^\nu, \xi)] = E_\Xi[h(\tilde{x}, \tilde{\lambda}, \xi)]$ with probability one.
Proof. 
(i) We first prove that
$$\lim_{\nu \to \infty} | E_{\Xi_\nu}[h(x^\nu, \lambda^\nu, \xi)] - E_{\Xi_\nu}[h(\tilde{x}, \tilde{\lambda}, \xi)] | = 0, \quad w.p.1. \tag{50}$$
By the non-expansive property of P D λ , α (6), j = 1 m λ j 0 , j = 1 m λ j = 1 and (A1), we have
$$\begin{aligned}
\|H(x^{\nu},\lambda^{\nu},\xi)-H(\tilde{x},\tilde{\lambda},\xi)\| &\le \|x^{\nu}-\tilde{x}\| + \alpha^{-1}\Big\|\sum_{j=1}^m \lambda_j^{\nu}F_j(x^{\nu},\xi) - \sum_{j=1}^m \tilde{\lambda}_j F_j(\tilde{x},\xi)\Big\|\\
&\le \|x^{\nu}-\tilde{x}\| + \alpha^{-1}\sum_{j=1}^m \lambda_j^{\nu}\|F_j(x^{\nu},\xi)-F_j(\tilde{x},\xi)\| + \alpha^{-1}\sum_{j=1}^m |\lambda_j^{\nu}-\tilde{\lambda}_j|\,\|F_j(\tilde{x},\xi)\|\\
&\le \|x^{\nu}-\tilde{x}\| + \alpha^{-1}\|x^{\nu}-\tilde{x}\|\sum_{j=1}^m \kappa_j(\xi) + \alpha^{-1}\|\lambda^{\nu}-\tilde{\lambda}\|\sum_{j=1}^m \|F_j(\tilde{x},\xi)\|. \tag{51}
\end{aligned}$$
Consequently, by $\lambda_j \ge 0$ $(j=1,\dots,m)$, $\sum_{j=1}^m \lambda_j = 1$, (A1), (42)–(44), and (51), the following holds
$$\begin{aligned}
&\mathbb{E}_{\Xi}\Big|\big(x^{\nu}-\tilde{x}+H(\tilde{x},\tilde{\lambda},\xi)-H(x^{\nu},\lambda^{\nu},\xi)\big)^{T}\sum_{j=1}^m \lambda_j^{\nu}F_j(x^{\nu},\xi)\Big|\\
&\quad\le \mathbb{E}_{\Xi}\Big[\big(\|x^{\nu}-\tilde{x}\|+\|H(x^{\nu},\lambda^{\nu},\xi)-H(\tilde{x},\tilde{\lambda},\xi)\|\big)\Big(\sum_{j=1}^m\|F_j(\tilde{x},\xi)\|+\sum_{j=1}^m\|F_j(x^{\nu},\xi)-F_j(\tilde{x},\xi)\|\Big)\Big]\\
&\quad\le \mathbb{E}_{\Xi}\Big[\Big(2\|x^{\nu}-\tilde{x}\|+\alpha^{-1}\|x^{\nu}-\tilde{x}\|\sum_{j=1}^m\kappa_j(\xi)+\alpha^{-1}\|\lambda^{\nu}-\tilde{\lambda}\|\sum_{j=1}^m\|F_j(\tilde{x},\xi)\|\Big)\\
&\qquad\cdot\Big(\sum_{j=1}^m\|F_j(\tilde{x},\xi)\|+\|x^{\nu}-\tilde{x}\|\sum_{j=1}^m\kappa_j(\xi)\Big)\Big]\\
&\quad= 2\|x^{\nu}-\tilde{x}\|\,\mathbb{E}_{\Xi}\sum_{j=1}^m\|F_j(\tilde{x},\xi)\| + 2\|x^{\nu}-\tilde{x}\|^2\,\mathbb{E}_{\Xi}\sum_{j=1}^m\kappa_j(\xi)\\
&\qquad+ \alpha^{-1}\|x^{\nu}-\tilde{x}\|\,\mathbb{E}_{\Xi}\Big[\sum_{j=1}^m\kappa_j(\xi)\sum_{j=1}^m\|F_j(\tilde{x},\xi)\|\Big] + \alpha^{-1}\|x^{\nu}-\tilde{x}\|^2\,\mathbb{E}_{\Xi}\Big(\sum_{j=1}^m\kappa_j(\xi)\Big)^2\\
&\qquad+ \alpha^{-1}\|\lambda^{\nu}-\tilde{\lambda}\|\,\mathbb{E}_{\Xi}\Big(\sum_{j=1}^m\|F_j(\tilde{x},\xi)\|\Big)^2 + \alpha^{-1}\|x^{\nu}-\tilde{x}\|\,\|\lambda^{\nu}-\tilde{\lambda}\|\,\mathbb{E}_{\Xi}\Big[\sum_{j=1}^m\kappa_j(\xi)\sum_{j=1}^m\|F_j(\tilde{x},\xi)\|\Big]\\
&\quad\to 0, \quad \text{as } \nu\to+\infty,\ w.p.1. \tag{52}
\end{aligned}$$
Furthermore, by (42), (44), (45), (49), and the second part of the first inequality of (51), we have
$$\begin{aligned}
&\mathbb{E}_{\Xi}\Big|\big(\tilde{x}-H(\tilde{x},\tilde{\lambda},\xi)\big)^{T}\Big(\sum_{j=1}^m \lambda_j^{\nu}F_j(x^{\nu},\xi)-\sum_{j=1}^m \tilde{\lambda}_j F_j(\tilde{x},\xi)\Big)\Big|\\
&\quad\le \frac{2}{\alpha}\,\mathbb{E}_{\Xi}\Big[\Big(\sum_{j=1}^m\|F_j(\tilde{x},\xi)\|+m\,\delta(\xi)\Big)\Big(\|x^{\nu}-\tilde{x}\|\sum_{j=1}^m\kappa_j(\xi)+\|\lambda^{\nu}-\tilde{\lambda}\|\sum_{j=1}^m\|F_j(\tilde{x},\xi)\|\Big)\Big]\\
&\quad= \frac{2}{\alpha}\|x^{\nu}-\tilde{x}\|\,\mathbb{E}_{\Xi}\Big[\sum_{j=1}^m\kappa_j(\xi)\sum_{j=1}^m\|F_j(\tilde{x},\xi)\|\Big] + \frac{2}{\alpha}\|\lambda^{\nu}-\tilde{\lambda}\|\,\mathbb{E}_{\Xi}\Big(\sum_{j=1}^m\|F_j(\tilde{x},\xi)\|\Big)^2\\
&\qquad+ \frac{2m}{\alpha}\|x^{\nu}-\tilde{x}\|\,\mathbb{E}_{\Xi}\Big[\delta(\xi)\sum_{j=1}^m\kappa_j(\xi)\Big] + \frac{2m}{\alpha}\|\lambda^{\nu}-\tilde{\lambda}\|\,\mathbb{E}_{\Xi}\Big[\delta(\xi)\sum_{j=1}^m\|F_j(\tilde{x},\xi)\|\Big]\\
&\quad\to 0, \quad \text{as } \nu\to+\infty,\ w.p.1. \tag{53}
\end{aligned}$$
It then follows from ( A 1 ) , (42)–(45), (49), and (51) that
$$\begin{aligned}
&\frac{\alpha}{2}\,\mathbb{E}_{\Xi}\big|\|x^{\nu}-H(x^{\nu},\lambda^{\nu},\xi)\|^2-\|\tilde{x}-H(\tilde{x},\tilde{\lambda},\xi)\|^2\big|\\
&\quad\le \frac{\alpha}{2}\,\mathbb{E}_{\Xi}\Big[\big(\|x^{\nu}-H(x^{\nu},\lambda^{\nu},\xi)\|+\|\tilde{x}-H(\tilde{x},\tilde{\lambda},\xi)\|\big)\,\big\|x^{\nu}-H(x^{\nu},\lambda^{\nu},\xi)-\tilde{x}+H(\tilde{x},\tilde{\lambda},\xi)\big\|\Big]\\
&\quad\le \mathbb{E}_{\Xi}\Big[\Big(\sum_{j=1}^m\|F_j(x^{\nu},\xi)\|+\sum_{j=1}^m\|F_j(\tilde{x},\xi)\|+2m\,\delta(\xi)\Big)\\
&\qquad\cdot\Big(2\|x^{\nu}-\tilde{x}\|+\alpha^{-1}\|x^{\nu}-\tilde{x}\|\sum_{j=1}^m\kappa_j(\xi)+\alpha^{-1}\|\lambda^{\nu}-\tilde{\lambda}\|\sum_{j=1}^m\|F_j(\tilde{x},\xi)\|\Big)\Big]\\
&\quad\le \mathbb{E}_{\Xi}\Big[\Big(\sum_{j=1}^m\|F_j(\tilde{x},\xi)\|+\sum_{j=1}^m\|F_j(x^{\nu},\xi)-F_j(\tilde{x},\xi)\|+\sum_{j=1}^m\|F_j(\tilde{x},\xi)\|+2m\,\delta(\xi)\Big)\\
&\qquad\cdot\Big(2\|x^{\nu}-\tilde{x}\|+\alpha^{-1}\|x^{\nu}-\tilde{x}\|\sum_{j=1}^m\kappa_j(\xi)+\alpha^{-1}\|\lambda^{\nu}-\tilde{\lambda}\|\sum_{j=1}^m\|F_j(\tilde{x},\xi)\|\Big)\Big]\\
&\quad\le \mathbb{E}_{\Xi}\Big[\Big(2\sum_{j=1}^m\|F_j(\tilde{x},\xi)\|+\|x^{\nu}-\tilde{x}\|\sum_{j=1}^m\kappa_j(\xi)+2m\,\delta(\xi)\Big)\\
&\qquad\cdot\Big(2\|x^{\nu}-\tilde{x}\|+\alpha^{-1}\|x^{\nu}-\tilde{x}\|\sum_{j=1}^m\kappa_j(\xi)+\alpha^{-1}\|\lambda^{\nu}-\tilde{\lambda}\|\sum_{j=1}^m\|F_j(\tilde{x},\xi)\|\Big)\Big]\\
&\quad= 4\|x^{\nu}-\tilde{x}\|\,\mathbb{E}_{\Xi}\sum_{j=1}^m\|F_j(\tilde{x},\xi)\| + 2\|x^{\nu}-\tilde{x}\|^2\,\mathbb{E}_{\Xi}\sum_{j=1}^m\kappa_j(\xi) + 4m\|x^{\nu}-\tilde{x}\|\,\mathbb{E}_{\Xi}\delta(\xi)\\
&\qquad+ \frac{2}{\alpha}\|x^{\nu}-\tilde{x}\|\,\mathbb{E}_{\Xi}\Big[\sum_{j=1}^m\kappa_j(\xi)\sum_{j=1}^m\|F_j(\tilde{x},\xi)\|\Big] + \frac{1}{\alpha}\|x^{\nu}-\tilde{x}\|^2\,\mathbb{E}_{\Xi}\Big(\sum_{j=1}^m\kappa_j(\xi)\Big)^2\\
&\qquad+ \frac{2m}{\alpha}\|x^{\nu}-\tilde{x}\|\,\mathbb{E}_{\Xi}\Big[\delta(\xi)\sum_{j=1}^m\kappa_j(\xi)\Big] + \frac{2}{\alpha}\|\lambda^{\nu}-\tilde{\lambda}\|\,\mathbb{E}_{\Xi}\Big(\sum_{j=1}^m\|F_j(\tilde{x},\xi)\|\Big)^2\\
&\qquad+ \frac{1}{\alpha}\|x^{\nu}-\tilde{x}\|\,\|\lambda^{\nu}-\tilde{\lambda}\|\,\mathbb{E}_{\Xi}\Big[\sum_{j=1}^m\kappa_j(\xi)\sum_{j=1}^m\|F_j(\tilde{x},\xi)\|\Big] + \frac{2m}{\alpha}\|\lambda^{\nu}-\tilde{\lambda}\|\,\mathbb{E}_{\Xi}\Big[\delta(\xi)\sum_{j=1}^m\|F_j(\tilde{x},\xi)\|\Big]\\
&\quad\to 0, \quad \text{as } \nu\to+\infty,\ w.p.1. \tag{54}
\end{aligned}$$
Moreover, by $\lambda_j \ge 0$, $\sum_{j=1}^m \lambda_j = 1$, (A2), (43), and (47), we can obtain
$$\begin{aligned}
&\mathbb{E}_{\Xi}\Big|\sum_{j=1}^m\lambda_j^{\nu}g_j(x^{\nu},\xi)-\sum_{j=1}^m\tilde{\lambda}_j g_j(\tilde{x},\xi)\Big|\\
&\quad\le \mathbb{E}_{\Xi}\Big|\sum_{j=1}^m\lambda_j^{\nu}g_j(x^{\nu},\xi)-\sum_{j=1}^m\lambda_j^{\nu}g_j(\tilde{x},\xi)\Big| + \mathbb{E}_{\Xi}\Big|\sum_{j=1}^m\lambda_j^{\nu}g_j(\tilde{x},\xi)-\sum_{j=1}^m\tilde{\lambda}_j g_j(\tilde{x},\xi)\Big|\\
&\quad\le \mathbb{E}_{\Xi}\sum_{j=1}^m|g_j(x^{\nu},\xi)-g_j(\tilde{x},\xi)| + \|\lambda^{\nu}-\tilde{\lambda}\|\,\mathbb{E}_{\Xi}\sum_{j=1}^m|g_j(\tilde{x},\xi)|\\
&\quad\le m\|x^{\nu}-\tilde{x}\|\,\mathbb{E}_{\Xi}\delta(\xi) + \|\lambda^{\nu}-\tilde{\lambda}\|\,\mathbb{E}_{\Xi}\sum_{j=1}^m|g_j(\tilde{x},\xi)|\ \to 0, \quad \text{as } \nu\to+\infty,\ w.p.1. \tag{55}
\end{aligned}$$
Furthermore, similar to (55), we have
$$\begin{aligned}
&\mathbb{E}_{\Xi}\Big|\sum_{j=1}^m\lambda_j^{\nu}g_j(H(x^{\nu},\lambda^{\nu},\xi),\xi)-\sum_{j=1}^m\tilde{\lambda}_j g_j(H(\tilde{x},\tilde{\lambda},\xi),\xi)\Big|\\
&\quad\le m\,\mathbb{E}_{\Xi}\big[\delta(\xi)\|H(x^{\nu},\lambda^{\nu},\xi)-H(\tilde{x},\tilde{\lambda},\xi)\|\big] + \|\lambda^{\nu}-\tilde{\lambda}\|\,\mathbb{E}_{\Xi}\sum_{j=1}^m|g_j(H(\tilde{x},\tilde{\lambda},\xi),\xi)|\\
&\quad\le m\|x^{\nu}-\tilde{x}\|\,\mathbb{E}_{\Xi}\delta(\xi) + m\alpha^{-1}\|x^{\nu}-\tilde{x}\|\,\mathbb{E}_{\Xi}\Big[\delta(\xi)\sum_{j=1}^m\kappa_j(\xi)\Big]\\
&\qquad+ m\alpha^{-1}\|\lambda^{\nu}-\tilde{\lambda}\|\,\mathbb{E}_{\Xi}\Big[\delta(\xi)\sum_{j=1}^m\|F_j(\tilde{x},\xi)\|\Big] + \|\lambda^{\nu}-\tilde{\lambda}\|\,\mathbb{E}_{\Xi}\sum_{j=1}^m|g_j(H(\tilde{x},\tilde{\lambda},\xi),\xi)|\\
&\quad\to 0, \quad \text{as } \nu\to+\infty,\ w.p.1, \tag{56}
\end{aligned}$$
where the second inequality follows from (51), and the limit follows from (43), (45), and (47). By (39) and (52)–(56), the following holds
$$\begin{aligned}
\big|\mathbb{E}_{\Xi_{\nu}}[h(x^{\nu},\lambda^{\nu},\xi)-h(\tilde{x},\tilde{\lambda},\xi)]\big| &\le \mathbb{E}_{\Xi_{\nu}}\big[|h(x^{\nu},\lambda^{\nu},\xi)-h(\tilde{x},\tilde{\lambda},\xi)|\big] \le \mathbb{E}_{\Xi}\big[|h(x^{\nu},\lambda^{\nu},\xi)-h(\tilde{x},\tilde{\lambda},\xi)|\big]\\
&\le \mathbb{E}_{\Xi}\Big[\Big|\big(x^{\nu}-\tilde{x}+H(\tilde{x},\tilde{\lambda},\xi)-H(x^{\nu},\lambda^{\nu},\xi)\big)^{T}\sum_{j=1}^m\lambda_j^{\nu}F_j(x^{\nu},\xi)\Big|\Big]\\
&\quad+ \mathbb{E}_{\Xi}\Big[\Big|\big(\tilde{x}-H(\tilde{x},\tilde{\lambda},\xi)\big)^{T}\Big(\sum_{j=1}^m\lambda_j^{\nu}F_j(x^{\nu},\xi)-\sum_{j=1}^m\tilde{\lambda}_jF_j(\tilde{x},\xi)\Big)\Big|\Big]\\
&\quad+ \mathbb{E}_{\Xi}\Big[\Big|\sum_{j=1}^m\lambda_j^{\nu}g_j(x^{\nu},\xi)-\sum_{j=1}^m\tilde{\lambda}_jg_j(\tilde{x},\xi)\Big|\Big]\\
&\quad+ \mathbb{E}_{\Xi}\Big[\Big|\sum_{j=1}^m\lambda_j^{\nu}g_j(H(x^{\nu},\lambda^{\nu},\xi),\xi)-\sum_{j=1}^m\tilde{\lambda}_jg_j(H(\tilde{x},\tilde{\lambda},\xi),\xi)\Big|\Big]\\
&\quad+ \frac{\alpha}{2}\,\mathbb{E}_{\Xi}\big[\big|\|x^{\nu}-H(x^{\nu},\lambda^{\nu},\xi)\|^2-\|\tilde{x}-H(\tilde{x},\tilde{\lambda},\xi)\|^2\big|\big]\\
&\to 0, \quad \text{as } \nu\to+\infty,\ w.p.1.
\end{aligned}$$
Thus, the formulation (50) holds.
(ii) We next show that $\lim_{\nu\to\infty}\mathbb{E}_{\Xi_{\nu}}[h(x^{\nu},\lambda^{\nu},\xi)] = \mathbb{E}_{\Xi}[h(\tilde{x},\tilde{\lambda},\xi)]$. By Lemma 1 and (39), we can easily obtain that
$$\lim_{\nu\to\infty}\big|\mathbb{E}_{\Xi}[h(\tilde{x},\tilde{\lambda},\xi)]-\mathbb{E}_{\Xi_{\nu}}[h(\tilde{x},\tilde{\lambda},\xi)]\big| = 0, \quad w.p.1. \tag{57}$$
Hence, by (50) and (57), we have
$$\begin{aligned}
\big|\mathbb{E}_{\Xi_{\nu}}[h(x^{\nu},\lambda^{\nu},\xi)]-\mathbb{E}_{\Xi}[h(\tilde{x},\tilde{\lambda},\xi)]\big| &\le \big|\mathbb{E}_{\Xi_{\nu}}[h(x^{\nu},\lambda^{\nu},\xi)]-\mathbb{E}_{\Xi_{\nu}}[h(\tilde{x},\tilde{\lambda},\xi)]\big|\\
&\quad+ \big|\mathbb{E}_{\Xi_{\nu}}[h(\tilde{x},\tilde{\lambda},\xi)]-\mathbb{E}_{\Xi}[h(\tilde{x},\tilde{\lambda},\xi)]\big|\ \to 0, \quad \text{as } \nu\to+\infty,\ w.p.1.
\end{aligned}$$
Therefore, the conclusion holds. □
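The two ingredients of the proof — continuity of the expected value in $(x,\lambda)$ and convergence of the sample average at the limit point — can be illustrated with a short Monte Carlo sketch. The integrand below is a hypothetical stand-in for $h$ (not the paper's gap function): it is continuous in $(x,\lambda)$ and integrable in $\xi$, which is what the lemma's mechanism uses.

```python
import numpy as np

# Stand-in integrand (hypothetical): h(x, lam, xi) = lam_1 (x-xi)^2 + lam_2 |x-xi|.
rng = np.random.default_rng(1)

def h(x, lam, xi):
    return lam[0] * (x - xi) ** 2 + lam[1] * np.abs(x - xi)

x_t, lam_t = 1.0, np.array([0.4, 0.6])            # the limit point (x~, lam~)
# proxy for E_Xi[h(x~, lam~, xi)], xi ~ N(0, 1), via one very large sample
target = h(x_t, lam_t, rng.normal(size=2_000_000)).mean()

for nu in [10, 100, 10_000, 1_000_000]:
    x_nu = x_t + 1.0 / nu                         # (x^nu, lam^nu) -> (x~, lam~)
    lam_nu = lam_t + np.array([1.0, -1.0]) / nu   # stays in the simplex
    saa = h(x_nu, lam_nu, rng.normal(size=nu)).mean()  # E_{Xi_nu}[h(x^nu, lam^nu, .)]
    print(f"nu={nu:>8}: |E_nu[h(x^nu)] - E[h(x~)]| = {abs(saa - target):.4f}")
```

The printed gap combines both error sources bounded in parts (i) and (ii) of the proof, and shrinks as $\nu$ grows.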
Next, we present the main convergence result. Denote the sets of global optimal solutions of problems (25) and (40) by $\tilde{O}$ and $O^{\nu}$, respectively.
Theorem 4.
Suppose that conditions (A1)–(A2) hold. For each $\nu$, let $(x^{\nu},\lambda^{\nu},u^{\nu}) \in O^{\nu}$, and let $(\tilde{x},\tilde{\lambda},\tilde{u})$ be an accumulation point of $\{(x^{\nu},\lambda^{\nu},u^{\nu})\}$. Then, we have $(\tilde{x},\tilde{\lambda},\tilde{u}) \in \tilde{O}$ w.p.1.
Proof. 
Without loss of generality, we assume that $\lim_{\nu\to\infty}x^{\nu}=\tilde{x}$, $\lim_{\nu\to\infty}\lambda^{\nu}=\tilde{\lambda}$, $\lim_{\nu\to\infty}u^{\nu}=\tilde{u}$. By the inequality $|[s]_{+}-[t]_{+}| \le |s-t|$ given in reference [34] and Lemma 2, we have
$$\begin{aligned}
\big|\mathbb{E}_{\Xi_{\nu}}[[h(x^{\nu},\lambda^{\nu},\xi)-u^{\nu}]_{+}]-\mathbb{E}_{\Xi_{\nu}}[[h(\tilde{x},\tilde{\lambda},\xi)-\tilde{u}]_{+}]\big| &\le \mathbb{E}_{\Xi_{\nu}}\big|[h(x^{\nu},\lambda^{\nu},\xi)-u^{\nu}]_{+}-[h(\tilde{x},\tilde{\lambda},\xi)-\tilde{u}]_{+}\big|\\
&\le \mathbb{E}_{\Xi_{\nu}}\big|h(x^{\nu},\lambda^{\nu},\xi)-u^{\nu}-h(\tilde{x},\tilde{\lambda},\xi)+\tilde{u}\big|\\
&\le \mathbb{E}_{\Xi_{\nu}}\big|h(x^{\nu},\lambda^{\nu},\xi)-h(\tilde{x},\tilde{\lambda},\xi)\big|+|u^{\nu}-\tilde{u}|\\
&\to 0, \quad \text{as } \nu\to+\infty,\ w.p.1. \tag{58}
\end{aligned}$$
It follows from (58) and Lemma 2 that
$$\begin{aligned}
\big|\theta_{\nu}(x^{\nu},\lambda^{\nu},u^{\nu})-\theta_{\nu}(\tilde{x},\tilde{\lambda},\tilde{u})\big| &\le (1-r)\big|\mathbb{E}_{\Xi_{\nu}}[h(x^{\nu},\lambda^{\nu},\xi)]-\mathbb{E}_{\Xi_{\nu}}[h(\tilde{x},\tilde{\lambda},\xi)]\big| + r|u^{\nu}-\tilde{u}|\\
&\quad+ \frac{r}{1-\beta}\big|\mathbb{E}_{\Xi_{\nu}}[[h(x^{\nu},\lambda^{\nu},\xi)-u^{\nu}]_{+}]-\mathbb{E}_{\Xi_{\nu}}[[h(\tilde{x},\tilde{\lambda},\xi)-\tilde{u}]_{+}]\big|\\
&\to 0, \quad \text{as } \nu\to+\infty,\ w.p.1. \tag{59}
\end{aligned}$$
On the other hand, similar to (57), we know that
$$\lim_{\nu\to+\infty}\big|\mathbb{E}_{\Xi}[[h(\tilde{x},\tilde{\lambda},\xi)-\tilde{u}]_{+}]-\mathbb{E}_{\Xi_{\nu}}[[h(\tilde{x},\tilde{\lambda},\xi)-\tilde{u}]_{+}]\big| = 0, \quad w.p.1. \tag{60}$$
Consequently, it holds that
$$\begin{aligned}
\big|\theta_{\nu}(\tilde{x},\tilde{\lambda},\tilde{u})-\theta(\tilde{x},\tilde{\lambda},\tilde{u})\big| &\le (1-r)\big|\mathbb{E}_{\Xi}[h(\tilde{x},\tilde{\lambda},\xi)]-\mathbb{E}_{\Xi_{\nu}}[h(\tilde{x},\tilde{\lambda},\xi)]\big|\\
&\quad+ \frac{r}{1-\beta}\big|\mathbb{E}_{\Xi}[[h(\tilde{x},\tilde{\lambda},\xi)-\tilde{u}]_{+}]-\mathbb{E}_{\Xi_{\nu}}[[h(\tilde{x},\tilde{\lambda},\xi)-\tilde{u}]_{+}]\big|\\
&\to 0, \quad \text{as } \nu\to+\infty,\ w.p.1. \tag{61}
\end{aligned}$$
Moreover, by (59) and (61), it holds that
$$\big|\theta_{\nu}(x^{\nu},\lambda^{\nu},u^{\nu})-\theta(\tilde{x},\tilde{\lambda},\tilde{u})\big| \le \big|\theta_{\nu}(x^{\nu},\lambda^{\nu},u^{\nu})-\theta_{\nu}(\tilde{x},\tilde{\lambda},\tilde{u})\big| + \big|\theta_{\nu}(\tilde{x},\tilde{\lambda},\tilde{u})-\theta(\tilde{x},\tilde{\lambda},\tilde{u})\big| \to 0, \quad \text{as } \nu\to+\infty,\ w.p.1. \tag{62}$$
Notice that, for any $(x,\lambda,u) \in S$, $\theta_{\nu}(x,\lambda,u)-\theta(x,\lambda,u) \to 0$ w.p.1. Since $(x^{\nu},\lambda^{\nu},u^{\nu}) \in O^{\nu}$ for each $\nu$, we have
$$\theta_{\nu}(x^{\nu},\lambda^{\nu},u^{\nu}) \le \theta_{\nu}(x,\lambda,u) \to \theta(x,\lambda,u). \tag{63}$$
Letting $\nu \to +\infty$ in (63), according to (62), it holds that
$$\theta(\tilde{x},\tilde{\lambda},\tilde{u}) \le \theta(x,\lambda,u), \quad \forall (x,\lambda,u) \in S,$$
which means that $(\tilde{x},\tilde{\lambda},\tilde{u}) \in \tilde{O}$ w.p.1. □
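Theorem 4's conclusion — SAA minimizers of $\theta_{\nu}$ accumulate at minimizers of $\theta$ — can be illustrated on a toy problem. Everything below is a hypothetical stand-in (a scalar residual $h(x,\xi)=(x-\xi)^2$ with $\xi \sim N(0,1)$, and a grid search instead of the paper's smoothing approach); the inner minimization over the auxiliary variable $u$ is carried out by plugging in the $\beta$-quantile, as in Rockafellar and Uryasev [28].

```python
import numpy as np

# Bi-criteria objective on a toy stand-in problem (hypothetical):
#   theta(x, u) = (1 - r) E[h] + r ( u + E[(h - u)_+] / (1 - beta) ),
# with h(x, xi) = (x - xi)^2, xi ~ N(0, 1). The optimal u for fixed x is the
# beta-quantile of h, so only x is searched over a grid.
rng = np.random.default_rng(3)
r, beta = 0.5, 0.9
grid = np.linspace(-1.0, 1.0, 101)

def theta_saa(xi_sample):
    h = (grid[:, None] - xi_sample[None, :]) ** 2     # h(x, xi) on the grid
    q = np.quantile(h, beta, axis=1)                  # optimal u for each x
    cvar = q + np.maximum(h - q[:, None], 0.0).mean(axis=1) / (1 - beta)
    return (1 - r) * h.mean(axis=1) + r * cvar        # theta_nu on the grid

# By symmetry the true minimizer is x = 0; the SAA minimizers x^nu
# approach it as the sample size nu grows (w.p.1).
for nu in [50, 500, 50_000]:
    x_nu = grid[np.argmin(theta_saa(rng.normal(size=nu)))]
    print(f"nu={nu:>6}: SAA minimizer x^nu = {x_nu:+.3f}")
```

The grid search is only for illustration; the paper's approach instead smooths the plus function and solves the resulting approximation problem.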

5. Conclusions

We established the equivalence between SMVVIP and stochastic mixed scalar variational inequality problems. Based on this equivalence, we proposed a deterministic bi-criteria model (25) for solving SMVVIP, employed a smoothing technique to derive the smooth approximation problem (27) from (25), and applied the sample average approximation method to solve (27). We then analyzed the convergence of global optimal solutions under the assumption that the sample space Ξ is compact. When Ξ is not compact, we also established convergence results for the corresponding compact approximation problem.

Author Contributions

Funding acquisition, M.L.; supervision, M.L.; writing—original draft, M.L. and M.D.; writing—review and editing, M.L. and Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Natural Science Foundation General Project of the Liaoning Province Department of Science and Technology (No. 2021-MS-153).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Smith, A. An Inquiry into the Nature and Causes of the Wealth of Nations; University of Chicago Press: Chicago, IL, USA, 1977.
2. Liu, S.M.; Feng, E. The exponential penalty function method for multiobjective programming problems. Optim. Methods Softw. 2010, 25, 667–675.
3. Cerda-Flores, S.C.; Rojas-Punzo, A.A.; Nápoles-Rivera, F. Applications of Multi-Objective Optimization to Industrial Processes: A Literature Review. Processes 2022, 10, 133.
4. Han, S.; Zhu, K.; Zhou, M.C.; Cai, X. Information-Utilization-Method-Assisted Multimodal Multiobjective Optimization and Application to Credit Card Fraud Detection. IEEE Trans. Comput. Soc. Syst. 2021, 8, 856–869.
5. Lyons, M.A. Pretomanid dose selection for pulmonary tuberculosis: An application of multi-objective optimization to dosage regimen design. CPT Pharmacometrics Syst. Pharmacol. 2021, 10, 211–219.
6. Li, M.; Wang, L.; Wang, Y.; Chen, Z. Sizing optimization and energy management strategy for hybrid energy storage system using multiobjective optimization and random forests. IEEE Trans. Power Electron. 2021, 36, 11421–11430.
7. Pereira, J.L.J.; Oliver, G.A.; Francisco, M.B.; Cunha, S.S.; Gomes, G.F. A Review of Multi-objective Optimization: Methods and Algorithms in Mechanical Engineering Problems. Arch. Comput. Methods Eng. 2022, 29, 2285–2308.
8. Bhaskar, V.; Gupta, S.K.; Ray, A.K. Applications of multiobjective optimization in chemical engineering. Rev. Chem. Eng. 2000, 16, 1–54.
9. Salahuddin; Ahmad, M.K.; Verma, R.U. Existence solutions to mixed vector variational-type inequalities. Adv. Nonlinear Var. 2013, 16, 115–123.
10. Xie, P.D.; Salahuddin. Generalized mixed general vector variational-like inequalities in topological vector spaces. J. Nonlinear Anal. Optim. Theory Appl. 2013, 4, 163–172.
11. Irfan, S.S. Exponential Type Mixed Vector Variational Inequalities with Fuzzy Mappings. Panam. Math. J. 2020, 30, 11–26.
12. Jayswal, A.; Kumari, B.; Ahmad, I. A gap function and existence of solutions for a non-smooth vector variational inequality on Hadamard manifolds. Optimization 2021, 70, 1875–1889.
13. Chang, S.; Salahuddin; Wang, L.; Ma, Z.L. Error bounds for mixed set-valued vector inverse quasi-variational inequalities. J. Inequalities Appl. 2020, 1, 160.
14. Liu, D.Y.; Jiang, Y. Scalarization of Mixed Vector Variational Inequalities and Error Bounds of Gap Functions. Appl. Math. Mech. 2017, 38, 715–726.
15. Salahuddin. Set valued exponential type g-mixed vector variational inequality problems. Commun. Appl. Nonlinear Anal. 2017, 24, 59–72.
16. Fukushima, M. Equivalent differentiable optimization problems and descent methods for asymmetric variational inequality problems. Math. Program. 1992, 53, 99–110.
17. Luo, M.J.; Lin, G.H. Expected residual minimization method for stochastic variational inequality problems. J. Optim. Theory Appl. 2009, 140, 103–116.
18. Shanbhag, U.V. Stochastic Variational Inequality Problems: Applications, Analysis, and Algorithms. In Theory Driven by Influential Applications; Topaloglu, H., Smith, J.C., Greenberg, H.J., Eds.; INFORMS: Catonsville, MD, USA, 2013; pp. 71–107.
19. Chen, X.J.; Lin, G.H. CVaR-based formulation and approximation method for stochastic variational inequalities. Numer. Algebr. Control Optim. 2011, 1, 35–48.
20. Boţ, R.I.; Mertikopoulos, P.; Staudigl, M.; Vuong, P.T. Minibatch forward-backward-forward methods for solving stochastic variational inequalities. Stoch. Syst. 2021, 11, 112–139.
21. Jadamba, B.; Khan, A.A.; Sama, M.; Yang, Y. An iteratively regularized stochastic gradient method for estimating a random parameter in a stochastic PDE: A variational inequality approach. J. Nonlinear Var. Anal. 2021, 5, 865–880.
22. Luo, M.J.; Lin, G.H. Convergence results of the ERM method for nonlinear stochastic variational inequality problems. J. Optim. Theory Appl. 2009, 142, 569–581.
23. Wang, S.H.; Lin, R.G.; Shi, Y.T. A stochastic selective projection method for solving a constrained stochastic variational inequality problem. J. Appl. Numer. Optim. 2022, 4, 341–355.
24. Yang, Z.P.; Zhang, J.; Zhu, X.D.; Lin, G.H. Infeasible interior-point algorithms based on sampling average approximations for a class of stochastic complementarity problems and their applications. J. Comput. Appl. Math. 2019, 352, 382–400.
25. Fukushima, M. Fundamentals of Nonlinear Optimization; Asakura Shoten: Tokyo, Japan, 2001.
26. Solodov, M.V. Merit functions and error bounds for generalized variational inequalities. J. Math. Anal. Appl. 2003, 287, 405–414.
27. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: New York, NY, USA, 2011.
28. Rockafellar, R.T.; Uryasev, S. Optimization of conditional value-at-risk. J. Risk 2000, 2, 21–42.
29. Chen, L.; Lin, G.H.; Yang, X.M. Expected Residual Minimization Method for Stochastic Mixed Variational Inequality Problems. Pac. J. Optim. 2017, 14, 703–722.
30. Fukushima, M. Merit functions for variational inequality and complementarity problems. In Nonlinear Optimization and Applications; Di Pillo, G., Giannessi, F., Eds.; Plenum Press: New York, NY, USA, 1996; pp. 155–170.
31. Yang, X.M.; Zhao, Y.; Lin, G.H. Deterministic Bicriteria Model for Stochastic Variational Inequalities. J. Oper. Res. Soc. China 2018, 6, 507–527.
32. Ehrgott, M. Multicriteria Optimization; Springer Science & Business Media: Heidelberg, Germany, 2005.
33. Peng, J.M. A smoothing function and its applications. In Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods; Springer: Boston, MA, USA, 1998; pp. 293–316.
34. Chen, C.H.; Mangasarian, O.L. A class of smoothing functions for nonlinear and mixed complementarity problems. Comput. Optim. Appl. 1996, 5, 97–138.
35. Luo, M.J.; Zhang, K. Convergence Analysis of the Approximation Problems for Solving Stochastic Vector Variational Inequality Problems. Complexity 2020, 2020, 1203627.