Article

Optimality Conditions, Qualifications and Approximation Method for a Class of Non-Lipschitz Mathematical Programs with Switching Constraints

1 School of Mathematics and Statistics, Wuhan University, Wuhan 430072, China
2 Department of Mathematics, School of Sciences, Nanchang University, Nanchang 330031, China
* Authors to whom correspondence should be addressed.
Mathematics 2021, 9(22), 2915; https://doi.org/10.3390/math9222915
Submission received: 18 October 2021 / Revised: 12 November 2021 / Accepted: 13 November 2021 / Published: 16 November 2021

Abstract
In this paper, we consider a class of mathematical programs with switching constraints (MPSCs) whose objective involves a non-Lipschitz term. Due to the non-Lipschitz continuity of the objective function, the existing constraint qualifications for locally Lipschitz MPSCs are no longer sufficient to guarantee that necessary optimality conditions hold at a local minimizer. We therefore propose MPSC-tailored qualifications, related to both the constraints and the non-Lipschitz term, under which local minimizers satisfy the necessary optimality conditions. Moreover, we study weak, Mordukhovich, Bouligand, and strong (W-, M-, B-, S-) stationarity, identify which qualifications guarantee that local minimizers are (M-, B-, S-) stationary, and discuss the relationships between the proposed MPSC-tailored qualifications. Finally, an approximation method for solving the non-Lipschitz MPSCs is given, and we show that any accumulation point of the sequence generated by the method is S-stationary under a second-order necessary condition and the MPSC Mangasarian–Fromovitz (MF) qualification.

1. Introduction

The mathematical program with switching constraints (MPSC) is a class of optimization problems whose equality constraints involve products of pairs of functions. “Switching” refers to the fact that such a product equals 0 if and only if at least one of the two functions equals 0. Switching structures appear frequently in optimal control [1,2,3] and in optimization problems with either–or constraints [4]. In real-world applications, they arise naturally, for instance, when modelling bacteria that switch instantaneously between active and dormant states [5], or when controlling a finite string to the zero state in finite time through its two boundary points [6]. From the perspective of problem structure, mathematical programs with switching constraints are closely related to mathematical programs with complementarity constraints (MPCCs) [7,8], mathematical programs with vanishing constraints (MPVCs) [9,10], and mathematical programs with cardinality constraints [11,12]. Comparing the structures of MPCC and MPSC, any MPCC can be viewed as a special form of MPSC. MPCCs have been studied extensively; it is well known that standard constraint qualifications such as the linear independence constraint qualification (LICQ) and the Mangasarian–Fromovitz constraint qualification (MFCQ) fail at feasible points of MPCCs because of the complementarity constraints. Similarly, LICQ and MFCQ also fail at feasible points of MPSCs because of the switching constraints. To resolve this problem, Mehlitz [13] studied (W-, M-, S-) stationarity and introduced several MPSC-tailored constraint qualifications together with their relationships. Subsequently, further studies on optimality conditions for MPSCs appeared [14,15]. The solution methods currently available for MPSCs are relaxation methods [16] and penalty methods [15,17,18].
Sparse solutions of systems of equations are widely sought in regression, feature selection in machine learning, and compressed sensing [19,20,21,22]. Finding the sparsest solution means finding a solution with the smallest number of nonzero components. The number of nonzero components of a vector $z$, denoted by $\|z\|_0$ and usually called the $\ell_0$-norm, is a nonconvex discontinuous function and is commonly approximated by the $\ell_p$-norm; the resulting non-Lipschitz function is a better continuous approximation of the $\ell_0$-norm. Moreover, several smoothing methods [23,24,25,26] have been proposed for solving such problems. Guo and Chen [27] derived associated qualifications, necessary optimality conditions and an approximation method for MPCCs whose objective involves a non-Lipschitz term. However, since MPCCs form only a special class of MPSCs, the existing theory for MPCCs with a non-Lipschitz term is not applicable to MPSCs because of the switching constraints. To fill this gap, it is necessary to introduce stationarity concepts and compatible constraint qualifications for MPSCs whose objective involves a non-Lipschitz term. This is the central purpose of this paper. Our main contributions are summarized as follows.
  • We prove that the basic qualification (BQ) fails at any feasible point of problem (1) when D has full row rank.
  • We present the (W-, M-, B-, S-) stationarity conditions for problem (1), propose 12 MPSC-tailored qualifications, and identify which qualifications guarantee that M-stationarity and S-stationarity, respectively, are necessary for local optimality of problem (1). In addition, a diagram of the relations among the 12 qualifications and (M-, S-) stationarity is given.
  • We propose an approximation method for solving problem (1), in which a locally Lipschitz function is used to approximate the non-Lipschitz term. We show that any accumulation point of the sequence generated by our method is W-stationary under the MPSC-MF qualification, and is S-stationary under a second-order necessary condition together with the MPSC-MF qualification.
The rest of this paper is organized as follows. In Section 2, notation and preliminary definitions are given. In Section 3, we propose qualifications related to the constraints and the non-Lipschitz term which ensure that local minimizers satisfy the necessary optimality conditions, and we discuss the relations among the MPSC-tailored qualifications. In Section 4, a relaxation scheme for solving the non-Lipschitz MPSCs is presented; under the MPSC-MF qualification and a second-order necessary condition, accumulation points of the generated sequence are shown to be W-stationary and S-stationary, respectively. Conclusions are summarized in Section 5.

2. Notations and Preliminaries

In order to facilitate the reading of this article, some notations are given first:
$\mathbb{N}$: set of natural numbers
$\mathbb{R}$: set of real numbers
$\mathbb{R}^n$: set of real $n$-vectors
$\mathbb{R}^{m\times n}$: set of real $m\times n$ matrices
$\|\cdot\|$: a norm
$\|\cdot\|_p$: the $\ell_p$ norm
$D^T$: transpose of the matrix $D$
$\operatorname{span} D$: the span of the set $D$
$\langle\cdot,\cdot\rangle$: inner product
$\hat N_A(x)$: the regular normal cone of $A$ at $x$
$N_A(x)$: the limiting normal cone of $A$ at $x$
$T_A(x)$: the tangent cone of $A$ at $x$
$\hat\partial\psi(x)$: the regular subdifferential of $\psi$ at $x$
$\nabla\psi(x)$: the gradient of $\psi$ at $x$
$\partial\psi(x)$: the limiting subdifferential of $\psi$ at $x$
$\partial^\infty\psi(x)$: the horizon subdifferential of $\psi$ at $x$
$\nabla^2\psi(x)$: the Hessian matrix of $\psi$ at $x$
$\operatorname{supp}(A)$: index set of the non-zero elements of $A$
The following notation will be used throughout this paper. Let $A$ be a nonempty closed set and $x^*\in A$ a given point.
Definition 1.
(Regular normal cone) The regular normal cone of $A$ at $x^*$ is defined as
$$\hat N_A(x^*) := \big\{ d : \langle d, x - x^*\rangle \le o(\|x - x^*\|),\ \forall x\in A \big\},$$
where $o(\cdot)$ means that $o(\alpha)/\alpha\to 0$ as $\alpha\downarrow 0$.
Definition 2.
(Limiting normal cone) The limiting normal cone of $A$ at $x^*$ is defined as
$$N_A(x^*) := \big\{ d : \exists\, x^k\in A,\ x^k\to x^*,\ d^k\in\hat N_A(x^k)\ \text{s.t.}\ d^k\to d \big\}.$$
Definition 3.
(Tangent cone) The tangent cone of $A$ at $x^*$ is defined as
$$T_A(x^*) := \big\{ d : \exists\, x^k\in A,\ t_k\downarrow 0,\ x^k\to x^*,\ \text{and}\ (x^k - x^*)/t_k\to d \big\}.$$
Given a continuous function ψ : R s R and a point x ¯ R s , we recall the definitions of regular subdifferential, limiting subdifferential, and horizon subdifferential [28] (Definition 8.3).
Definition 4.
(Regular subdifferential) The regular subdifferential of $\psi$ at $\bar x$ is defined as
$$\hat\partial\psi(\bar x) := \big\{ \upsilon : \psi(x)\ge\psi(\bar x) + \upsilon^T(x-\bar x) + o(\|x-\bar x\|),\ \forall x\in\mathbb{R}^s \big\}.$$
Definition 5.
(Limiting subdifferential) The limiting subdifferential of $\psi$ at $\bar x$ is defined as
$$\partial\psi(\bar x) := \big\{ \upsilon : \exists\, x^k\to\bar x,\ \upsilon^k\in\hat\partial\psi(x^k)\ \text{s.t.}\ \upsilon^k\to\upsilon \big\}.$$
Definition 6.
(Horizon subdifferential) The horizon subdifferential of $\psi$ at $\bar x$ is defined as
$$\partial^\infty\psi(\bar x) := \big\{ \upsilon : \exists\, x^k\to\bar x,\ \upsilon^k\in\hat\partial\psi(x^k)\ \text{and}\ t_k\downarrow 0\ \text{s.t.}\ t_k\upsilon^k\to\upsilon \big\}.$$
It is worth noting that $\partial^\infty\psi(\bar x) = \{0\}$ if the function $\psi$ is strictly continuous (locally Lipschitz) at $\bar x$ [28] (Theorem 9.13), and that $\partial\psi(\bar x) = \{\nabla\psi(\bar x)\}$ if $\psi$ is continuously differentiable at $\bar x$ [28] (Exercise 8.8).
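To illustrate Definitions 4–6 on the kind of non-Lipschitz term studied below (this worked example is ours and not part of the original text), take the scalar function $\psi(t) = |t|^p$ with $0 < p < 1$. For $t\ne 0$, $\psi$ is smooth, so $\hat\partial\psi(t) = \partial\psi(t) = \{p|t|^{p-1}\operatorname{sign}(t)\}$ and $\partial^\infty\psi(t) = \{0\}$. At $t = 0$, since $|t|^p/|t|\to+\infty$, every $\upsilon\in\mathbb{R}$ satisfies $|t|^p\ge\upsilon t + o(|t|)$, hence $\hat\partial\psi(0) = \partial\psi(0) = \mathbb{R}$; moreover, taking $x^k\downarrow 0$, $\upsilon^k = p(x^k)^{p-1}\to+\infty$ and suitable $t_k\downarrow 0$ (and symmetrically $x^k\uparrow 0$) shows $\partial^\infty\psi(0) = \mathbb{R}$. In particular, $\psi$ is not strictly continuous at the origin, which is why the non-Lipschitz term requires special treatment in the qualifications below.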

3. The Non-Lipschitz MPSCs and Their Optimality Conditions

We mainly consider MPSCs whose objective function involves a non-Lipschitz term $\|\cdot\|_p^p$:
$$\begin{array}{ll}\min\limits_{x\in\mathbb{R}^n} & F(x) = f(x) + \|Dx\|_p^p\\ \text{s.t.} & g_i(x)\le 0,\quad i\in I = \{1,\dots,m\},\\ & h_j(x) = 0,\quad j\in J = \{1,\dots,q\},\\ & G_k(x)H_k(x) = 0,\quad k\in K = \{1,\dots,l\},\end{array}\tag{1}$$
where the functions $f, g_i, h_j, G_k, H_k:\mathbb{R}^n\to\mathbb{R}$ are continuously differentiable for all $i\in I$, $j\in J$, $k\in K$, $D$ is a given real matrix in $\mathbb{R}^{s\times n}$, and $\|x\|_p := (\sum_{i=1}^n|x_i|^p)^{1/p}$. We call $G_k(x)H_k(x)=0$ a switching constraint because at least one of the functions $G_k(x)$, $H_k(x)$ must vanish; that is, the last $l$ constraints in problem (1) force $G_k(x)=0$ or $H_k(x)=0$ for all $k=1,\dots,l$ at any feasible point. Denote the feasible set of problem (1) by $\Theta := \{x\in\mathbb{R}^n : g_i(x)\le 0,\ i\in I,\ h_j(x)=0,\ j\in J,\ G_k(x)H_k(x)=0,\ k\in K\}$. Compared with classical MPSCs, the difficulty of problem (1) lies in the term $\|Dx\|_p^p$, which is non-convex and non-Lipschitz when $0<p<1$. The basic qualification (BQ) of problem (1) holds at $\bar x\in\Theta$ if $\partial^\infty F(\bar x)\cap N_\Theta(\bar x) = \{0\}$. The BQ ensures the validity of the sum rule for the subdifferentials of the objective function $F$ and the indicator function of the feasible set, which plays an important role in deriving necessary optimality conditions for problem (1).
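To make the structure of problem (1) concrete, the following minimal Python sketch (our own illustration; the toy instance, helper names and data are hypothetical and not taken from the paper) evaluates the non-Lipschitz objective $f(x)+\|Dx\|_p^p$ and checks the switching constraints $G_k(x)H_k(x)=0$ for a small instance.

```python
import numpy as np

def objective(x, D, p=0.5):
    """Non-Lipschitz objective F(x) = f(x) + ||Dx||_p^p for a toy quadratic f."""
    f = 0.5 * np.sum((x - 1.0) ** 2)              # toy smooth part f(x)
    penalty = np.sum(np.abs(D @ x) ** p)          # ||Dx||_p^p, non-Lipschitz for 0 < p < 1
    return f + penalty

def is_feasible(x, G_list, H_list, tol=1e-10):
    """Check the switching constraints G_k(x) * H_k(x) = 0 (no g, h constraints in this toy)."""
    return all(abs(G(x) * H(x)) <= tol for G, H in zip(G_list, H_list))

# Toy data: D picks out the first two coordinates; one switching pair G(x) = x_1, H(x) = x_3.
D = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
G_list = [lambda x: x[0]]
H_list = [lambda x: x[2]]

x = np.array([0.0, 0.7, 0.3])                     # x_1 = 0, so the switching constraint holds
print(objective(x, D), is_feasible(x, G_list, H_list))
```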

3.1. Stationary Conditions and Qualifications

We derive several stationarity notions for problem (1) which are not stronger than the associated KKT conditions. We first introduce some index sets at a feasible point $x\in\Theta$ of problem (1), where $D_i$ denotes the transpose of the $i$th row of $D$:
$$\begin{array}{ll} I_{D^0}(x) := \{ i\in\{1,\dots,s\} : D_i^Tx = 0 \}, & I_D(x) := \{ i\in\{1,\dots,s\} : D_i^Tx\ne 0 \},\\ I_G(x) := \{ k\in K : G_k(x) = 0,\ H_k(x)\ne 0 \}, & I_H(x) := \{ k\in K : G_k(x)\ne 0,\ H_k(x) = 0 \},\\ I_{GH}(x) := \{ k\in K : G_k(x) = 0,\ H_k(x) = 0 \}, & I_g(x) := \{ i\in I : g_i(x) = 0 \}. \end{array}$$
If there is no ambiguity, we abbreviate the above sets to I D 0 , I D , I G , I H , I G H , I g .
Note that the index sets $I_G$, $I_H$, $I_{GH}$ form a partition of $K$, i.e., $I_G\cup I_H\cup I_{GH} = K$ and the three sets are pairwise disjoint. By [13] (Lemma 4.1), MFCQ and LICQ are violated at $\bar x$ whenever $I_{GH}(\bar x)\ne\emptyset$.
Denote $C := \{(a,b)\in\mathbb{R}^2 : ab = 0\}$. For any $(a,b)\in C$, the following formulas are valid [13] (Lemma 3.2):
$$\hat N_C(a,b) = \begin{cases} \mathbb{R}\times\{0\} & \text{if } a = 0,\ b\ne 0,\\ \{0\}\times\mathbb{R} & \text{if } a\ne 0,\ b = 0,\\ \{(0,0)\} & \text{if } a = 0,\ b = 0,\end{cases}\qquad N_C(a,b) = \begin{cases} \mathbb{R}\times\{0\} & \text{if } a = 0,\ b\ne 0,\\ \{0\}\times\mathbb{R} & \text{if } a\ne 0,\ b = 0,\\ C & \text{if } a = 0,\ b = 0.\end{cases}\tag{2}$$
Next, we would like to prove that the BQ fails for problem (1) at any nonzero feasible point, and the following lemma is crucial for the proof.
Lemma 1.
Let $x\in\Theta$. Then, for any $(\mu,\nu,\alpha,\beta)\in\mathbb{R}^{m+q+2l}$ with $\mu_i = 0$ for $i\in I\setminus I_g$, $\alpha_k = 0$ for $k\in I_H\cup I_{GH}$, and $\beta_k = 0$ for $k\in I_G\cup I_{GH}$, we have
$$\mu^T\nabla g(x) + \nu^T\nabla h(x) + \alpha^T\nabla G(x) + \beta^T\nabla H(x) \in N_\Theta(x).$$
Proof. 
For the normal cone, define the sets $\Omega := \{x : g_i(x)\le 0,\ i\in I,\ h_j(x) = 0,\ j\in J\}$ and $C := \{(a,b)\in\mathbb{R}^2 : ab = 0\}$; then $\Theta = \{x\in\Omega : (G_k(x),H_k(x))\in C,\ k\in K\}$. From [28] (Theorem 6.14) and the definition of the normal cone, we have
$$\alpha^T\nabla G(x) + \beta^T\nabla H(x) + z \in \hat N_\Theta(x)\subseteq N_\Theta(x),$$
where $(\alpha_k,\beta_k)\in\hat N_C(G_k(x),H_k(x))$ for each $k\in K$ and $z\in\hat N_\Omega(x)$. According to the formula for $\hat N_C(a,b)$ in Equation (2),
$$\hat N_C(G_k(x),H_k(x)) = \begin{cases} \{(\alpha_k,\beta_k) : \alpha_k\in\mathbb{R},\ \beta_k = 0\} & \text{if } k\in I_G,\\ \{(\alpha_k,\beta_k) : \alpha_k = 0,\ \beta_k\in\mathbb{R}\} & \text{if } k\in I_H,\\ \{(0,0)\} & \text{if } k\in I_{GH}.\end{cases}$$
We let $z = \sum_{i\in I_g}\mu_i\nabla g_i(x) + \sum_{j\in J}\nu_j\nabla h_j(x)$ with $\mu_i\ge 0$, $i\in I_g$. Then the result follows immediately from the above formulas. □
Proposition 1.
Let $\bar x\in\Theta$ and $I_{D^0}(\bar x)\ne\emptyset$. Assume that $D$ has full row rank. If there exists a vector $(\lambda,\mu,\nu,\alpha,\beta)\in\mathbb{R}^{|I_{D^0}|+m+q+2l}$ with $\lambda\ne 0$ such that
$$\sum_{i=1}^m\mu_i\nabla g_i(\bar x) + \sum_{j=1}^q\nu_j\nabla h_j(\bar x) + \sum_{k=1}^l\alpha_k\nabla G_k(\bar x) + \sum_{k=1}^l\beta_k\nabla H_k(\bar x) + \sum_{i\in I_{D^0}}\lambda_iD_i = 0,$$
$$\mu_i = 0,\ i\in I\setminus I_g,\qquad \alpha_k = 0,\ k\in I_H\cup I_{GH},\qquad \beta_k = 0,\ k\in I_G\cup I_{GH},\tag{3}$$
then the BQ fails at $\bar x$ for problem (1).
then the BQ fails at  x ¯  for problem (1).
Proof. 
From Lemma 1, for all $(\mu,\nu,\alpha,\beta)$ with $\mu_i = 0$, $i\in I\setminus I_g$, $\alpha_k = 0$, $k\in I_H\cup I_{GH}$, and $\beta_k = 0$, $k\in I_G\cup I_{GH}$, we have
$$\sum_{i=1}^m\mu_i\nabla g_i(\bar x) + \sum_{j=1}^q\nu_j\nabla h_j(\bar x) + \sum_{k=1}^l\alpha_k\nabla G_k(\bar x) + \sum_{k=1}^l\beta_k\nabla H_k(\bar x)\in N_\Theta(\bar x).$$
Combining this with assumption (3) and $\lambda\ne 0$, we obtain
$$-\sum_{i\in I_{D^0}}\lambda_iD_i\in N_\Theta(\bar x).\tag{4}$$
It is clear that $\sum_{i\in I_{D^0}}\lambda_iD_i\ne 0$ since $D$ has full row rank and $\lambda\ne 0$. Now let $\phi(\cdot) = \|\cdot\|_p^p$ and $S(x) = Dx$, so that $\|D\bar x\|_p^p = \phi(S(\bar x))$. By [28] (Exercise 10.7), we have
$$\partial^\infty(\|D\cdot\|_p^p)(\bar x) = \nabla S(\bar x)^T\partial^\infty\phi(S(\bar x)),$$
where, by [27] (Equation (2.2)),
$$\partial^\infty\phi(S(\bar x)) = \{ a\in\mathbb{R}^s : a_i = 0\ \text{if}\ S_i(\bar x)\ne 0 \}.$$
Moreover, since $f$ is continuously differentiable, by [28] (Exercise 8.8(c)) it follows that
$$\partial^\infty F(\bar x) = \partial^\infty(\|D\cdot\|_p^p)(\bar x) = \Big\{\textstyle\sum_{i\in I_{D^0}}a_iD_i : a_i\in\mathbb{R},\ i\in I_{D^0}\Big\}.\tag{5}$$
According to Equations (4) and (5),
$$0\ne -\sum_{i\in I_{D^0}}\lambda_iD_i\in\partial^\infty F(\bar x)\cap N_\Theta(\bar x).$$
Thus the BQ fails at point x ¯ and the proof is complete.  □

3.1.1. The Stationarity Conditions

We now give four stationarity conditions for problem (1). First, we introduce a linearized tangent cone and the sign function. The MPSC-tailored linearized tangent cone of problem (1) at $\bar x$ is defined as
$$\mathcal{L}_{MPSC}(\bar x) = \left\{ d \;:\; \begin{array}{ll} \nabla g_i(\bar x)^Td\le 0, & i\in I_g,\\ \nabla h_j(\bar x)^Td = 0, & j\in J,\\ \nabla G_k(\bar x)^Td = 0, & k\in I_G,\\ \nabla H_k(\bar x)^Td = 0, & k\in I_H,\\ (\nabla G_k(\bar x)^Td)(\nabla H_k(\bar x)^Td) = 0, & k\in I_{GH} \end{array}\right\},\tag{6}$$
and, for any $\theta\in\mathbb{R}$, the sign function is given by
$$\operatorname{sign}\theta = \begin{cases} 1 & \text{if } \theta > 0,\\ [-1,1] & \text{if } \theta = 0,\\ -1 & \text{if } \theta < 0.\end{cases}\tag{7}$$
Definition 7.
Let $\bar x\in\mathbb{R}^n$ be a feasible point of problem (1). Then $\bar x$ is called
  • a weakly stationary (W-stationary) point if there exist multipliers $(\lambda,\mu,\nu,\alpha,\beta)$ such that
    $$\nabla f(\bar x) + \sum_{r=1}^{s}\lambda_rD_r + \sum_{i\in I_g}\mu_i\nabla g_i(\bar x) + \sum_{j\in J}\nu_j\nabla h_j(\bar x) + \sum_{k\in K}\big[\alpha_k\nabla G_k(\bar x) + \beta_k\nabla H_k(\bar x)\big] = 0,$$
    $$\lambda_r = p|D_r^T\bar x|^{p-1}\operatorname{sign}(D_r^T\bar x),\ r\in I_D,\qquad \mu_i\ge 0,\ i\in I_g,\qquad \alpha_k = 0,\ k\in I_H,\qquad \beta_k = 0,\ k\in I_G;\tag{8}$$
  • a Mordukhovich-stationary (M-stationary) point if there exist multipliers $(\lambda,\mu,\nu,\alpha,\beta)$ such that Equation (8) holds together with
    $$\alpha_k\beta_k = 0,\quad k\in I_{GH};\tag{9}$$
  • a Bouligand-stationary (B-stationary) point if there exists $\zeta\in\partial F(\bar x)$ such that
    $$\zeta^Td\ge 0,\qquad \forall d\in\mathcal{L}_{MPSC}(\bar x);\tag{10}$$
  • a strongly stationary (S-stationary) point if there exist multipliers $(\lambda,\mu,\nu,\alpha,\beta)$ such that Equation (8) holds together with
    $$\alpha_k = 0,\ \beta_k = 0,\quad k\in I_{GH}.\tag{11}$$
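For fixed $\bar x$, the W-stationarity system (8) in Definition 7 is a linear feasibility problem in the multipliers. The following Python sketch (our own illustration; the data layout and helper name are hypothetical) checks whether such multipliers exist by solving a zero-objective linear program with SciPy.

```python
import numpy as np
from scipy.optimize import linprog

def w_stationarity_multipliers(fixed, free_cols, nonneg_cols):
    """Feasibility of: fixed + free_cols @ u + nonneg_cols @ v = 0 with v >= 0 (zero-objective LP).

    fixed       : (n,) vector  grad f(xbar) + sum_{r in I_D} lambda_r * D_r, lambda_r fixed by (8)
    free_cols   : (n, k1) columns D_r (r in I_D0), grad h_j, grad G_k (k in I_G u I_GH),
                  grad H_k (k in I_H u I_GH)  -- sign-free multipliers
    nonneg_cols : (n, k2) columns grad g_i (i in I_g)  -- multipliers mu_i >= 0
    Returns the multiplier vector if the system is feasible, otherwise None.
    """
    A_eq = np.hstack([free_cols, nonneg_cols])
    n_free, n_pos = free_cols.shape[1], nonneg_cols.shape[1]
    bounds = [(None, None)] * n_free + [(0.0, None)] * n_pos
    res = linprog(c=np.zeros(A_eq.shape[1]), A_eq=A_eq, b_eq=-fixed,
                  bounds=bounds, method="highs")
    return res.x if res.success else None
```

For S-stationarity one would simply omit the $\nabla G_k$, $\nabla H_k$ columns with $k\in I_{GH}$ from free_cols; for M-stationarity the additional bilinear condition $\alpha_k\beta_k = 0$ would still have to be checked on the returned multipliers.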
Theorem 1.
Let x ¯ Θ . (i) If x ¯ is an S-stationary point of problem (1), then x ¯ is a B-stationary point of problem (1). (ii) If x ¯ is a B-stationary point of problem (1), then x ¯ is an M-stationary point of problem (1).
Proof. 
(i) Let $\bar x$ be an S-stationary point of problem (1); then there exist multipliers $(\lambda,\mu,\nu,\alpha,\beta)$ satisfying Equations (8) and (11). Denote $\zeta := \nabla f(\bar x) + \sum_{r=1}^{s}\lambda_rD_r$. Then, for any $d\in\mathbb{R}^n$, we have
$$\zeta^Td + \sum_{i\in I_g}\mu_i\nabla g_i(\bar x)^Td + \sum_{j\in J}\nu_j\nabla h_j(\bar x)^Td + \sum_{k\in K}\big[\alpha_k\nabla G_k(\bar x)^Td + \beta_k\nabla H_k(\bar x)^Td\big] = 0.$$
Suppose, by contradiction, that $\bar x$ is not a B-stationary point of problem (1); then there exists $\hat d\in\mathcal{L}_{MPSC}(\bar x)$ such that $\zeta^T\hat d < 0$. From Equations (6) and (11), we get
$$\zeta^T\hat d + \sum_{i\in I_g}\mu_i\nabla g_i(\bar x)^T\hat d + \sum_{j\in J}\nu_j\nabla h_j(\bar x)^T\hat d + \sum_{k\in K}\big[\alpha_k\nabla G_k(\bar x)^T\hat d + \beta_k\nabla H_k(\bar x)^T\hat d\big] < 0,$$
which is a contradiction.
(ii) Let $\bar x$ be a B-stationary point of problem (1). Then there exists $\zeta\in\partial F(\bar x)$ such that $\zeta^Td\ge 0$ for all $d\in\mathcal{L}_{MPSC}(\bar x)$, which means that $d = 0$ is an optimal solution of
$$\min_{d\in\mathcal{L}_{MPSC}(\bar x)}\ \zeta^Td.\tag{12}$$
In fact, problem (12) is an MPSC whose constraints are all affine. By [13] (Corollary 5.2, Theorem 5.1), MPSC-ACQ is valid for problem (12) at $d = 0$, and hence $d = 0$ is an M-stationary point of problem (12). Therefore, there exist multipliers $(\mu,\nu,\alpha,\beta)$ such that
$$\zeta + \sum_{i\in I_g}\mu_i\nabla g_i(\bar x) + \sum_{j\in J}\nu_j\nabla h_j(\bar x) + \sum_{k\in K}\big[\alpha_k\nabla G_k(\bar x) + \beta_k\nabla H_k(\bar x)\big] = 0,$$
$$\mu_i\ge 0,\ i\in I_g,\qquad \alpha_k = 0,\ k\in I_H,\qquad \beta_k = 0,\ k\in I_G,\qquad \alpha_k\beta_k = 0,\ k\in I_{GH},$$
implying that x ¯ is an M-stationary point of problem (1).  □
By Theorem 1 and Definition 7, the following relations hold easily
S-stationary $\Rightarrow$ B-stationary $\Rightarrow$ M-stationary $\Rightarrow$ W-stationary.

3.1.2. The Qualifications

Notice that $\|Dx\|_p^p$ is non-convex and non-Lipschitz when $0 < p < 1$; hence, we need appropriate subdifferential calculus and constraint qualifications to ensure that a local minimizer $x^*$ of problem (1) is a stationary point.
Let $\mathcal{P}(I_{GH}(\bar x))$ denote the set of all disjoint bipartitions of $I_{GH}(\bar x)$. For a fixed $(\eta_1,\eta_2)\in\mathcal{P}(I_{GH}(\bar x))$, we reformulate problem (1) as
$$\begin{array}{ll}\min\limits_{x} & \varphi(x) = f(x) + \sum_{r\in I_D}|D_r^Tx|^p\\ \text{s.t.} & D_r^Tx = 0,\quad r\in I_{D^0},\\ & g_i(x)\le 0,\quad i\in I,\\ & h_j(x) = 0,\quad j\in J,\\ & G_k(x) = 0,\quad k\in I_G\cup\eta_1,\\ & H_k(x) = 0,\quad k\in I_H\cup\eta_2.\end{array}\tag{13}$$
Denote Ξ as the feasible set of problem (13).
Lemma 2.
The point $\bar x\in\Theta$ is a local minimizer of problem (1) if and only if, for every partition $(\eta_1,\eta_2)\in\mathcal{P}(I_{GH}(\bar x))$, the point $\bar x$ is a local minimizer of the corresponding problem (13).
Proof. 
Assume that $\bar x\in\Theta$ is a local minimizer of problem (1). If $I_{GH}(\bar x) = \emptyset$, problem (13) is equivalent to problem (1) near $\bar x$, and the lemma is clearly true. Otherwise, suppose to the contrary that there exists $(\eta_1,\eta_2)\in\mathcal{P}(I_{GH}(\bar x))$ such that $\bar x$ is not a local minimizer of the corresponding problem (13). Then there exists $x^*\in\Xi$ arbitrarily close to $\bar x$ such that $\varphi(x^*) < \varphi(\bar x)$. By the feasibility of $x^*$ for problem (13), we know that $x^*\in\Theta$ and $D_r^Tx^* = 0$, $r\in I_{D^0}(\bar x)$. Thus, $F(x^*) = \varphi(x^*) < \varphi(\bar x) = F(\bar x)$, which contradicts the local optimality of $\bar x$ for problem (1).
Conversely, assume that $\bar x$ is a local minimizer of problem (13) for every partition $(\eta_1,\eta_2)\in\mathcal{P}(I_{GH}(\bar x))$, and suppose that $\bar x$ is not a local minimizer of problem (1). Then there exists $x^*\in\Theta$ arbitrarily close to $\bar x$ such that $F(x^*) < F(\bar x)$. By the feasibility of $x^*$, we know that $G_k(x^*)H_k(x^*) = 0$, $k\in K$, which implies $G_k(x^*) = 0$ for $k\in I_G(x^*)\cup I_{GH}(x^*)$ and $H_k(x^*) = 0$ for $k\in I_H(x^*)\cup I_{GH}(x^*)$. Let $z^* = Dx^*$; then there exists a partition $(\eta_1,\eta_2)\in\mathcal{P}(I_{GH}(\bar x))$ such that $x^*$ is a feasible point of the corresponding problem (13) and $\varphi(x^*)\le F(x^*) < F(\bar x) = \varphi(\bar x)$, which contradicts the assumption that $\bar x$ is a local minimizer of problem (13). The proof is complete.  □
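Lemma 2 reduces problem (1) locally to finitely many smooth branch problems of the form (13). The following Python sketch (our own toy illustration; the instance and the heuristic use of BFGS on the branch subproblems are assumptions, not part of the paper) enumerates the two switching branches of a small non-Lipschitz MPSC and keeps the best branch solution; the branch objective is smooth away from the origin, which is why an off-the-shelf smooth solver can be applied per branch.

```python
import numpy as np
from scipy.optimize import minimize

# Toy non-Lipschitz MPSC (our own illustration):
#   min 1/2*||x - (1,1)||^2 + |x_1|^p + |x_2|^p   s.t.  x_1 * x_2 = 0,
# i.e. G(x) = x_1, H(x) = x_2, D = I.  In the spirit of Lemma 2, a local minimizer
# solves one of the smooth branch problems (13) in which x_1 = 0 or x_2 = 0 is enforced.
p = 0.5

def branch_solve(zero_idx, x0):
    """Solve the branch NLP with x_i = 0 fixed for i in zero_idx."""
    free = [i for i in range(2) if i not in zero_idx]
    def obj(z):
        x = np.zeros(2)
        x[free] = z
        return 0.5 * np.sum((x - 1.0) ** 2) + np.sum(np.abs(x[free]) ** p)
    res = minimize(obj, x0[free], method="BFGS")
    x = np.zeros(2)
    x[free] = res.x
    return x, res.fun

candidates = [branch_solve({0}, np.array([0.6, 0.6])),   # branch x_1 = 0
              branch_solve({1}, np.array([0.6, 0.6]))]   # branch x_2 = 0
x_best, f_best = min(candidates, key=lambda c: c[1])
print("best branch solution:", x_best, "objective value:", f_best)
```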
Here, we give some qualifications of problem (1).
Definition 8.
Let x ¯ Θ . We say
(a) 
MPSC linear independence (MPSC-LI) qualification holds at $\bar x$ for problem (1) if the following gradients are linearly independent:
{ D r , g i ( x ¯ ) , h j ( x ¯ ) , G k ( x ¯ ) , H k ( x ¯ ) | r I D 0 , i I g , j J , k I G I G H , k I H I G H } .
(b) 
MPSC linear constraint qualification (MPSC-LCQ) holds if all functions { g , h , G , H } in the constraints are affine.
(c) 
MPSC Mangasarian-Fromovitz (MPSC-MF) qualification holds at x ¯ iff there are no nonzero multipliers { λ , μ , ν , α , β } such that
$$\sum_{r\in I_{D^0}}\lambda_rD_r + \sum_{i\in I_g}\mu_i\nabla g_i(\bar x) + \sum_{j\in J}\nu_j\nabla h_j(\bar x) + \sum_{k\in K}\big(\alpha_k\nabla G_k(\bar x) + \beta_k\nabla H_k(\bar x)\big) = 0,$$
$$\mu_i\ge 0,\ i\in I_g,\qquad \mu^Tg(\bar x) = 0,\qquad \alpha_k = 0,\ k\in I_H,\qquad \beta_k = 0,\ k\in I_G.$$
(d) 
MPSC no nonzero abnormal multiplier (MPSC-NNAM) qualification holds at x ¯ iff there are no nonzero multipliers { λ , μ , ν , α , β } such that
$$\sum_{r\in I_{D^0}}\lambda_rD_r + \sum_{i\in I_g}\mu_i\nabla g_i(\bar x) + \sum_{j\in J}\nu_j\nabla h_j(\bar x) + \sum_{k\in K}\big(\alpha_k\nabla G_k(\bar x) + \beta_k\nabla H_k(\bar x)\big) = 0,$$
$$\mu_i\ge 0,\ i\in I_g,\qquad \mu^Tg(\bar x) = 0,\qquad \alpha_k = 0,\ k\in I_H,\qquad \beta_k = 0,\ k\in I_G,\qquad \alpha_k\beta_k = 0,\ k\in I_{GH}.$$
(e) 
MPSC constant rank (MPSC-CR) qualification holds at x ¯ if there exists δ > 0 such that, for any I 1 I D 0 , I 2 I g , I 3 J , I 4 I G I G H , and I 5 I H I G H , the gradients
{ D r , g i ( x ) , h j ( x ) , G k ( x ) , H k ( x ) : r I 1 , i I 2 , j I 3 , k I 4 , k I 5 }
has the same rank for each x B δ ( x ¯ ) : = { x : x x ¯ δ } .
(f) 
MPSC relaxed constant rank (MPSC-RCR) qualification holds at x ¯ if there exists δ > 0 such that, for any I 1 I D 0 , I 2 I g , I 3 , I 4 I G H , the gradients
{ D r , g i ( x ) , h j ( x ) , G k ( x ) , H k ( x ) : r I 1 , i I 2 , j J , k I G I 3 , k I H I 4 }
have the same rank for each x B δ ( x ¯ ) .
(g) 
MPSC constant positive linear dependent (MPSC-CPLD) holds at x ¯ if, for any I 1 I D 0 , I 2 I g , I 3 J , I 4 I G I G H , and I 5 I H I G H , whenever there exist multipliers { λ , μ , ν , α , β } , not all zero, with μ i 0 for each i I 2 and α i β i = 0 for each i I G H , such that
r I 1 λ r D r + i I 2 μ i g i ( x ¯ ) + j I 3 ν j h j ( x ¯ ) + k I 4 α k G k ( x ¯ ) + k I 5 β k H k ( x ¯ ) = 0 ,
there exists δ > 0 such that, for any x B δ ( x ¯ ) , the vectors
{ D r , g i ( x ) , h j ( x ) , G k ( x ) , H k ( x ) : r I 1 , i I 2 , j I 3 , k I 4 , k I 5 }
are linearly dependent.
(h) 
Let I 1 I D 0 , I 2 J , I 3 I G , and I 4 I H , be such that G ( x ¯ , I 1 , I 2 , I 3 , I 4 ) is a basis for span G ( x ¯ , I D 0 , J , I G , I H ) . MPSC relaxed constant positive linear dependence (MPSC-RCPLD) holds at x ¯ if there exists δ > 0 such that
  • G ( x , J , I D 0 , I G , I H ) has the same rank for x B δ ( x ¯ ) ;
  • for any I 5 I g , I 6 , I 7 I G H , if there exist multipliers { λ , μ , ν , α , β } , not all zero, with μ i 0 for each i I 5 and α i β i = 0 for each i I G H , such that
    r I 1 λ r D r + i I 5 μ i g i ( x ¯ ) + j I 2 ν j h j ( x ¯ ) + k I 3 I 6 α k G k ( x ¯ ) + k I 4 I 7 β k H k ( x ¯ ) = 0 ,
    then for any  x B δ ( x ¯ ) , the following vectors are linearly dependent:
    { D r , g i ( x ) , h j ( x ) , G k ( x ) , H k ( x ) : r I 1 , i I 5 , j I 2 , k I 3 I 6 , k I 4 I 7 } ,
    where G ( x ¯ , I 1 , I 2 , I 3 , I 4 ) : = { D r , h j ( x ) , G k ( x ) , H k ( x ) : r I 1 , j I 2 , k I 3 , k I 4 } .
(i) 
MPSC pseudonormality holds at x ¯ if there are no nonzero multipliers { λ , μ , ν , α , β } such that
  • r I D 0 λ r D r + i I g μ i g i ( x ¯ ) + j J ν j h j ( x ¯ ) + k K ( α k G k ( x ¯ ) + β k H k ( x ¯ ) ) = 0 ;
  • μ i 0 , μ T g ( x ¯ ) = 0 , α k = 0 , k I H , β k = 0 , k I G , and α k β k = 0 for k I G H ;
  • there exists a sequence $\{x^k\}\to\bar x$ such that, for each $k$,
    $$\sum_{r\in I_{D^0}}\lambda_rD_r^Tx^k + \sum_{i=1}^m\mu_ig_i(x^k) + \sum_{j\in J}\nu_jh_j(x^k) + \sum_{\ell\in K}\big[\alpha_\ell G_\ell(x^k) + \beta_\ell H_\ell(x^k)\big] > 0.$$
(j) 
MPSC quasinormality holds at x ¯ if there are no nonzero multipliers { λ , μ , ν , α , β } such that
  • r I D 0 λ r D r + i I g μ i g i ( x ¯ ) + j J ν j h j ( x ¯ ) + k K ( α k G k ( x ¯ ) + β k H k ( x ¯ ) ) = 0 ;
  • μ i 0 , μ T g ( x ¯ ) = 0 , α k = 0 , k I H , β k = 0 , k I G , and α k β k = 0 for k I G H ;
  • there exists a sequence { x k } x ¯ such that, for each k
    λ r > 0 λ r ( D r T x k ) > 0 , μ i > 0 μ i g i ( x k ) > 0 , ν j > 0 ν j h j ( x k ) > 0 , α k > 0 α k G k ( x k ) > 0 , β k > 0 β k H k ( x k ) > 0 .
(k) 
MPSC Abadie qualification holds at x ¯ if T Ξ ( x ¯ ) = L Ξ M P S C ( x ¯ ) , where
L Ξ M P S C ( x ¯ ) = d g i ( x ¯ ) T d 0 , i I g , D r T d = 0 , r I D 0 , h j ( x ¯ ) T d = 0 , j J , G k ( x ¯ ) T d = 0 , k I G , H k ( x ¯ ) T d = 0 , k I H , ( G k ( x ¯ ) T d ) ( H ( x ¯ ) T d ) = 0 , k I G H .
(l) 
MPSC Guignard qualification holds at $\bar x$ if $T_\Xi(\bar x)^\circ = \mathcal{L}_\Xi^{MPSC}(\bar x)^\circ$, i.e., the polar cones of the tangent cone and of the MPSC-linearized cone coincide.
In the following, we show that $\bar x$ is an S-stationary point of problem (1) when the MPSC-LI qualification holds at $\bar x$, and an M-stationary point of problem (1) when MPSC-RCPLD holds at $\bar x$.
Theorem 2.
Let x ¯ Θ be a locally optimal solution of problem (1). If MPSC-LI qualification holds at x ¯ , then x ¯ is an S-stationary point of problem (1).
Proof. 
Assume that $\bar x\in\Theta$ is a locally optimal solution of problem (1); by Lemma 2, $\bar x$ is a local minimizer of problem (13) for every partition $(\eta_1,\eta_2)\in\mathcal{P}(I_{GH}(\bar x))$. Note that $D_i^T\bar x\ne 0$ for all $i\in I_D$, so the objective function of problem (13) is continuously differentiable at $\bar x$. By assumption, the MPSC-LI qualification holds at $\bar x$, i.e., the gradients
{ D r , g i ( x ¯ ) , h j ( x ¯ ) , G k ( x ¯ ) , H k ( x ¯ ) : r I D 0 , i I g , j J , k I G I G H , k I H I G H } ,
are linearly independent, which implies that LICQ holds at $\bar x$ for problem (13) for any partition. Then, for $(\eta_1,\eta_2) = (I_{GH},\emptyset)$, there exist multipliers $(\lambda^1,\mu^1,\nu^1,\alpha^1,\beta^1)$ satisfying
f ( x ¯ ) + r I D ρ r D r + r I D 0 λ r 1 D r + i I g μ i 1 g i ( x ¯ ) + j J ν j 1 h j ( x ¯ ) + k I G I G H α k 1 G k ( x ¯ ) + k I H β k 1 H k ( x ¯ ) = 0 , ρ r = p | D r x ¯ | p 1 s i g n ( D r x ¯ ) , r I D , μ i 0 , i I g ,
and, for $(\eta_1,\eta_2) = (\emptyset,I_{GH})$, there exist multipliers $(\lambda^2,\mu^2,\nu^2,\alpha^2,\beta^2)$ satisfying
f ( x ¯ ) + r I D ρ r D r + r I D 0 λ r 2 D r + i I g μ i 2 g i ( x ¯ ) + j J ν j 2 h j ( x ¯ ) + k I G α k 2 G k ( x ¯ ) + k I H I G H β k 2 H k ( x ¯ ) = 0 , ρ r = p | D r x ¯ | p 1 s i g n ( D r x ¯ ) , r I D , μ i 0 , i I g .
Subtracting Equation (17) from Equation (16), we have
r I D 0 ( λ r 1 λ r 2 ) D r + i I g ( μ i 1 μ i 2 ) g i ( x ¯ ) + j J ( ν j 1 ν j 2 ) h j ( x ¯ ) + k I G ( α k 1 α k 2 ) G k ( x ¯ ) + k I H ( β k 1 β k 2 ) H k ( x ¯ ) + k I G H α k 1 G k ( x ¯ ) k I G H β k 2 H k ( x ¯ ) = 0 .
Due to the validity of the MPSC-LI qualification, we obtain $\lambda_r^1 = \lambda_r^2$, $r\in I_{D^0}$; $\mu_i^1 = \mu_i^2$, $i\in I_g$; $\nu_j^1 = \nu_j^2$, $j\in J$; $\alpha_k^1 = \alpha_k^2$, $k\in I_G$; $\beta_k^1 = \beta_k^2$, $k\in I_H$; and $\alpha_k^1 = \beta_k^2 = 0$, $k\in I_{GH}$. Furthermore, $I_{D^0}\cup I_D = \{1,\dots,s\}$ and $I_G\cup I_H\cup I_{GH} = K$; hence, Equation (16) (or Equation (17)) can be written in the form of Equations (8) and (11), and the proof is complete.  □
Before we prove the following theorem, we give two lemmas in [28,29].
Lemma 3.
[28] A vector d T A ( x ¯ ) iff there exists a smooth function ϕ such that ϕ ( x ¯ ) = d and arg min x A ϕ ( x ) = { x ¯ } .
Lemma 4.
[29] Let x = i = 1 m + p λ i υ i where { υ 1 , , υ m } is linearly independent and λ i 0 for each i = m + 1 , , m + p . Then there exist P { m + 1 , , m + p } and  λ ¯ i , i { 1 , , m } P , such that x = i { 1 , , m } P λ ¯ i υ i with λ i λ ¯ i > 0 for each i P and { υ i } i { 1 , , m } P is linearly independent.
Theorem 3.
Let $\bar x\in\Theta$ be a locally optimal solution of problem (1). If MPSC-RCPLD holds at $\bar x$, then $\bar x$ is an M-stationary point of problem (1).
Proof. 
Assume x ¯ Θ is a locally optimal solution of problem (1) and by Lemma 2, x ¯ is a local minimizer of problem (13). Let y ¯ : = G ( x ¯ ) , z ¯ : = H ( x ¯ ) . For each k N , we consider the following optimization problem
min F k ( x , y , z ) : = φ ( x ) + k 2 max { 0 , g ( x ) } 2 + k 2 D I D 0 T x 2 + k 2 h ( x ) 2 + k 2 G ( x ) y 2 + k 2 H ( x ) z 2 + 1 2 ( x , y , z ) ( x ¯ , y ¯ , z ¯ ) 2 s . t . ( x , y , z ) U Λ ,
where U : = R n × C and Λ : = { ( x , y , z ) : ( x , y , z ) ( x ¯ , y ¯ , z ¯ ) ε } . Since the set U Λ is compact and F k ( x , y , z ) is continuous, problem (18) has at least one optimal solution, denote as ( x k , y k , z k ) . Next, we show that ( x k , y k , z k ) ( x ¯ , y ¯ , z ¯ ) as k . By the optimality at point ( x k , y k , z k ) , we have F k ( x k , y k , z k ) F k ( x ¯ , y ¯ , z ¯ ) = φ ( x ¯ ) for all k N , that is
φ ( x k ) + k 2 max { 0 , g ( x k ) } 2 + k 2 D I D 0 T x k 2 + k 2 h ( x k ) 2 + k 2 G ( x k ) y k 2 + k 2 H ( x k ) z k 2 + 1 2 ( x k , y k , z k ) ( x ¯ , y ¯ , z ¯ ) 2 φ ( x ¯ ) .
By the compactness of U Λ , φ ( x ¯ ) is bounded, which implies that
lim k max { 0 , g ( x k ) } = 0 , lim k D I D 0 T x k = 0 , lim k h ( x k ) = 0 , lim k G ( x k ) y k = 0 , lim k H ( x k ) z k = 0 .
Assume ( x * , y * , z * ) is an accumulation point of { ( x k , y k , z k ) } . From Equations (19) and (20), we have
φ ( x * ) + 1 2 ( x * , y * , z * ) ( x ¯ , y ¯ , z ¯ ) 2 φ ( x ¯ ) , g ( x * ) 0 , D I D 0 T x * = 0 , h ( x * ) = 0 , G ( x * ) = y * , H ( x * ) = z * .
Note that φ ( x ¯ ) φ ( x * ) by the feasibility of x * for problem (13) and local optimality of x ¯ ; thus the whole sequence { x k , y k , z k } converges to ( x ¯ , y ¯ , z ¯ ) .
Without loss of generality, we assume that ( x k , y k , z k ) is an interior point of Λ for all k N . Due to Fermat’s rule, for k N
F k ( x k , y k , z k ) N ^ U ( x k , y k , z k ) ,
where
N ^ U ( x k , y k , z k ) = ( 0 , a , b ) : a i R , b i = 0 if y i = 0 , z i 0 , a i = 0 , b i R if y i 0 , z i = 0 , a i = 0 , b i = 0 if y i = 0 , z i = 0 ,
by Equation (2). On the basis of Equation (21), we obtain that
φ ( x k ) + r I D 0 λ r k D r + i = 1 m μ i k g i ( x k ) + j = 1 p ν j k h j ( x k ) + ( x k x ¯ ) + i I G I G H α i k G i ( x k ) + i I H I G H β i k H i ( x k ) = 0
where λ r k = k ( D r x k ) , r I D 0 , μ i k = k max { 0 , g i ( x k ) } , i I , ν j k = k h j ( x k ) , j J , α t k = k ( G t ( x k ) y t k ) , t I G I G H , β t k = k ( H t ( x k ) z i k ) , t I H I G H , and
y i k = 0 , z i k 0 β i k = z i k z ¯ i , y i k 0 , z i k = 0 α i k = y i k y ¯ i , y i k = 0 , z i k = 0 α i k = y i k y ¯ i , β i k = z i k z ¯ i .
From Equations (21) and (23), it is easy to obtain ( α i k , β i k ) N C ( y i k , z i k ) . For convenience, we define the index sets as follows
I D 0 k : = { i { 1 , , s } : D i x k = 0 } , I g k : = { i I : g i ( x k ) = 0 } , I G k : = { k K : G i ( x k ) = 0 , H k ( x k ) 0 } , I H k : = { k K : G i ( x k ) 0 , H k ( x k ) = 0 } , I G H k : = { k K : G i ( x k ) = 0 , H k ( x k ) = 0 } ,
and it is obvious that I G I G k and I H I H k for each k sufficiently large. Define s u p p ( a ) : = { i : a i 0 } , then by Equations (22) and (23), we have
0 = Υ k + r I D 0 λ r k D r + i s u p p ( μ k ) μ i k g i ( x k ) + j J ν j k h j ( x k ) + t I G a t k G t ( x k ) + t I H b t k H t ( x k ) + t I G k I G I G H k s u p p ( a t k ) a t k G t ( x k ) + t I H k I H I G H k s u p p ( b t k ) b t k H t ( x k ) ,
where
a t k = α t k ( y i k y ¯ i ) , b t k = β t k ( z i k z ¯ i ) , Υ k = φ ( x k ) + ( x k x ¯ ) + i I G I G H α i k G i ( x k ) + i I H I G H β i k H i ( x k ) .
Let I 1 I D 0 , I 2 J , I 3 I G , and I 4 I H be index sets such that G ( x ¯ , I 1 , I 2 , I 3 , I 4 ) is a basis for G ( x ¯ , I D 0 , J , I G , I H ) . Because x ¯ satisfies the MPSC-RCPLD, there exists a constant δ > 0 such that the rank of G ( x ¯ , I D 0 , J , I G , I H ) is constant for each x B δ ( x ¯ ) . Hence G ( x ¯ , I 1 , I 2 , I 3 , I 4 ) is a basis for span G ( x ¯ , I D 0 , J , I G , I H ) as k sufficiently large. From Lemma 4, there exist I 5 k s u p p ( μ k ) , I 6 k I G k I G I G H k s u p p ( a t k ) , I 7 k I H k I H I G H k s u p p ( b t k ) and ( λ ¯ k , μ ¯ k , ν ¯ k , a ¯ k , b ¯ k ) such that
0 = Υ k + r I 1 λ ¯ r k D r + j I 2 ν ¯ j k h j ( x k ) + t I 3 a ¯ t k G t ( x k ) + t I 4 b ¯ t k H t ( x k ) + i I 5 k μ ¯ i k g i ( x k ) + t I 6 k a ¯ t k G t ( x k ) + t I 7 k b ¯ t k H t ( x k ) ,
and the vectors { D r , g i ( x ) , h j ( x ) , G s ( x ) , H t ( x ) : r I 1 , i I 5 k , j I 2 , s I 3 I 6 k , t I 4 I 7 k } are linearly independent for sufficiently large k. Set { λ ¯ r k = μ ¯ i k = ν ¯ j k = a ¯ s k = b ¯ t k = 0 : r I 1 , i I 5 k , j I 2 , s I 3 I 6 k , t I 4 I 7 k } , and by Lemma 4 follows that μ ¯ i k 0 , i I 5 k and ( a ¯ i k , b ¯ i k ) N C ( y i k , z i k ) , i K . We assume that I 5 k I 5 , I 6 k I 6 , I 7 k I 7 for sufficiently large k; then the vectors
{ D r , g i ( x ) , h j ( x ) , G s ( x ) , H t ( x ) : r I 1 , i I 5 , j I 2 , s I 3 I 6 , t I 4 I 7 }
are linearly independent, and it is easy to obtain I 5 I g , I 6 , I 7 I G H with I G I H I G H = I G k I H k I G H k . Denote
M k : = max { λ ¯ r k , μ ¯ i k , ν ¯ j k , a ¯ s k , b ¯ t k : r I 1 , i I 5 , j I 2 , s I 3 I 6 , t I 4 I 7 } .
Now we show that $M_k$ is bounded. By contradiction, assume $M_k\to\infty$; then there exists a subsequence such that, for any $r\in I_1$, $i\in I_5$, $j\in I_2$, $s\in I_3\cup I_6$, $t\in I_4\cup I_7$,
( λ ¯ r k , μ ¯ i k , ν ¯ j k , a ¯ s k , b ¯ t k ) M k ( λ ¯ r * , μ ¯ i * , ν ¯ j * , a ¯ s * , b ¯ t * ) as k .
Dividing Equation (24) by M k , and taking the limit as k , we have
r I 1 λ ¯ r * D r + j I 2 ν ¯ j * h j ( x ¯ ) + t I 3 a ¯ t * G t ( x ¯ ) + t I 4 b ¯ t * H t ( x ¯ ) + i I 5 μ ¯ i * g i ( x ¯ ) + t I 6 a ¯ t * G t ( x ¯ ) + t I 7 b ¯ t * H t ( x ¯ ) = 0 .
Furthermore, $\bar\mu_i^*\ge 0$, $i\in I_5$, and $(\bar a_i^*,\bar b_i^*)\in N_C(\bar y_i,\bar z_i)$ follows from $(\bar a_i^k,\bar b_i^k)\in N_C(y_i^k,z_i^k)$, $i\in K$, and the outer semi-continuity of the limiting normal cone, which gives a contradiction with MPSC-RCPLD. Therefore, $M_k$ is bounded, and there exists a subsequence such that, for any $r\in I_1$, $i\in I_5$, $j\in I_2$, $s\in I_3\cup I_6$, $t\in I_4\cup I_7$,
( λ ¯ r k , μ ¯ i k , ν ¯ j k , a ¯ s k , b ¯ t k ) M k ( λ ¯ r , μ ¯ i , ν ¯ j , a ¯ s , b ¯ t ) as k ,
and from Equation (24) it follows that
$$\nabla\varphi(\bar x) + \sum_{r\in I_1}\bar\lambda_rD_r + \sum_{j\in I_2}\bar\nu_j\nabla h_j(\bar x) + \sum_{t\in I_3}\bar a_t\nabla G_t(\bar x) + \sum_{t\in I_4}\bar b_t\nabla H_t(\bar x) + \sum_{i\in I_5}\bar\mu_i\nabla g_i(\bar x) + \sum_{t\in I_6}\bar a_t\nabla G_t(\bar x) + \sum_{t\in I_7}\bar b_t\nabla H_t(\bar x) = 0.$$
Note that $\varphi(x) = f(x) + \sum_{r\in I_D}|D_r^Tx|^p$ and $(\bar a_i,\bar b_i)\in N_C(\bar y_i,\bar z_i)$, $i\in K$, which implies $\bar a_i\bar b_i = 0$, $i\in I_{GH}$. It is then easy to verify that $\bar x$ satisfies the definition of M-stationarity in Definition 7, and the proof is complete.  □
Theorem 4.
Let $\bar x\in\Theta$ be a locally optimal solution of problem (1) at which the MPSC Guignard qualification is valid. Then $\bar x$ is a B-stationary point of problem (1).
Proof. 
Assume x ¯ Θ is a locally optimal solution of problem (1) and by Lemma 2, x ¯ is a local minimizer of problem (13). It follows that
$$\nabla\varphi(\bar x)^Td\ge 0,\qquad \forall d\in T_\Xi(\bar x),$$
which implies that $-\nabla\varphi(\bar x)\in T_\Xi(\bar x)^\circ = \mathcal{L}_\Xi^{MPSC}(\bar x)^\circ$. Hence
$$\nabla\varphi(\bar x)^Td\ge 0,\qquad \forall d\in\mathcal{L}_\Xi^{MPSC}(\bar x).$$
Note that $\nabla\varphi(\bar x) = \nabla f(\bar x) + \sum_{r\in I_D}\lambda_rD_r$ with $\lambda_r = p|D_r^T\bar x|^{p-1}\operatorname{sign}(D_r^T\bar x)$; denote
$$\zeta = \nabla f(\bar x) + \sum_{r\in I_D}\lambda_rD_r + \sum_{r\in I_{D^0}}\lambda_rD_r\in\partial F(\bar x).$$
From D r T d = 0 , r I D 0 , for d L M P S C ( x ¯ ) , we have
ζ T d = ( f ( x ) + r I D λ r D r + r I D 0 λ r D r ) T d = φ ( x ¯ ) T d + r I D 0 λ r D r T d 0 .
Therefore, $\bar x$ is a B-stationary point of problem (1) and the proof is complete.  □

3.1.3. The Relation between Various Qualifications

By Definition 8, the following relations are obtained immediately,
MPSC-LI qualification ⇒ MPSC-MF qualification ⇒ MPSC-NNAM qualification ⇒ MPSC-pseudonormality ⇒ MPSC-quasinormality; MPSC-LI qualification ⇒ MPSC-CR qualification ⇒ MPSC-RCR qualification; MPSC-LCQ ⇒ MPSC-CR qualification; MPSC-CPLD ⇒ MPSC-RCPLD; MPSC-Abadie qualification ⇒ MPSC Guignard qualification.
Next, we show that the MPSC-CR qualification (respectively, the MPSC-RCR qualification) implies MPSC-CPLD (respectively, MPSC-RCPLD), and that MPSC-quasinormality implies the MPSC-Abadie qualification.
Theorem 5.
If the MPSC-CR qualification holds at  x ¯ Θ , then MPSC-CPLD holds at x ¯ . If the MPSC-RCR qualification holds at  x ¯ Θ , then MPSC-RCPLD holds at  x ¯ .
Proof. 
(i) Let the MPSC-CR qualification hold at x ¯ Θ . Let I 1 I D 0 , I 2 I g , I 3 J , I 4 I G I G H , and I 5 I H I G H . Assume that there exist multipliers { λ , μ , ν , α , β } , not all zero, with μ i 0 for each i I 2 and α i β i = 0 for each i I G H , such that
r I 1 λ r D r + i I 2 μ i g i ( x ¯ ) + j I 3 ν j h j ( x ¯ ) + k I 4 α k G k ( x ¯ ) + k I 5 β k H k ( x ¯ ) = 0 ,
which means that the vectors
{ D r , g i ( x ) , h j ( x ) , G k ( x ) , H k ( x ) | r I 1 , i I 2 , j I 3 , k I 4 , k I 5 }
are linearly dependent at $\bar x$. By the MPSC-CR qualification, there exists $\delta > 0$ such that, for any $x\in B_\delta(\bar x)$, the vectors in Equation (26) have the same rank, implying that they are linearly dependent for $x\in B_\delta(\bar x)$. Thus, MPSC-CPLD holds at $\bar x$.
(ii) Assume MPSC-RCR qualification holds at x ¯ Θ . Let I 1 I D 0 , I 2 J , I 3 I G and I 4 I H be such that G ( x ¯ , I 1 , I 2 , I 3 , I 4 ) is a basis for span G ( x ¯ , I D 0 , J , I G , I H ) . Let I 5 I g , I 6 , I 7 I G H be such that there exist multipliers { λ , μ , ν , α , β } , not all zero, with μ i 0 for each i I 5 and α i β i = 0 for each i I G H , such that
r I 1 λ r D r + i I 5 μ i g i ( x ¯ ) + j I 2 ν j h j ( x ¯ ) + k I 3 I 6 α k G k ( x ¯ ) + k I 4 I 7 β k H k ( x ¯ ) = 0 ,
which means that
{ D r , g i ( x ¯ ) , h j ( x ¯ ) , G k ( x ¯ ) , H k ( x ¯ ) | r I 1 , i I 5 , j I 2 , k I 3 I 6 , k I 4 I 7 }
are linearly dependent at $\bar x$. Note that $G(\bar x, I_1, I_2, I_3, I_4)$ and $G(\bar x, I_{D^0}, J, I_G, I_H)$ have the same rank. Combining this with the MPSC-RCR qualification, there exists $\delta > 0$ such that the vectors in Equation (27) have the same rank for $x\in B_\delta(\bar x)$, meaning that they are linearly dependent. Thus, MPSC-RCPLD holds at $\bar x$ and the proof is complete.  □
Theorem 6.
If the MPSC-quasinormality holds at x ¯ Θ , the MPSC-Abadie qualification holds at x ¯ .
Proof. 
Assume that MPSC-quasinormality holds at $\bar x\in\Theta$. Note that MPSC-quasinormality is the quasinormality condition for the nonlinear optimization problem (13) and the MPSC-Abadie qualification is the ACQ for problem (13). By [30] (Theorem 10.5), the result is true and the proof is complete.  □
Finally, we summarize the relations between the qualifications of non-Lipschitz MPSC in Figure 1.

4. The Approximation Method for the Non-Lipschitz MPSC Problem

The non-Lipschitz term and the switching constraints make problem (1) difficult to solve. Here, we propose the following relaxation of problem (1):
$$\begin{array}{ll}\min\limits_{x\in\mathbb{R}^n} & F(x) = f(x) + \varphi_\epsilon(x)\\ \text{s.t.} & g_i(x)\le 0,\quad i\in I,\\ & h_j(x) = 0,\quad j\in J,\\ & -t\le G_\ell(x)H_\ell(x)\le t,\quad \ell\in K.\end{array}\tag{28}$$
Here, the non-Lipschitz term $\|Dx\|_p^p$ is approximated by the locally Lipschitz function
$$\varphi_\epsilon(x) := \sum_{i=1}^{s}\big(|D_i^Tx| + \epsilon_i\big)^p,$$
where $\epsilon_i > 0$, $i = 1,\dots,s$. The feasible region of problem (28) is denoted by
$$\Theta_t := \{x : g_i(x)\le 0,\ i\in I,\ h_j(x) = 0,\ j\in J,\ -t\le G_k(x)H_k(x)\le t,\ k\in K\}.$$
This is the Scholtes-type global relaxation method (see [31]), which has been applied to mathematical programs with switching constraints in [16] (Section 3).
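As a computational illustration (our own toy instance; the paper does not prescribe a particular NLP solver, so the solver choice and data below are assumptions), the following Python sketch solves the relaxed problem (28) for a decreasing sequence of relaxation parameters $t_k$ using SciPy's SLSQP, with the smoothed term $\varphi_\epsilon$ in place of $\|Dx\|_p^p$.

```python
import numpy as np
from scipy.optimize import minimize

p, eps = 0.5, 1e-6
D = np.array([[1.0, 0.0], [0.0, 1.0]])          # toy data: D = identity, one switching pair
G = lambda x: x[0]                               # G_1(x) = x_1
H = lambda x: x[1]                               # H_1(x) = x_2

def objective(x):
    # f(x) + phi_eps(x), with a toy smooth part f(x) = 1/2 ||x - (1,1)||^2
    return 0.5 * np.sum((x - 1.0) ** 2) + np.sum((np.abs(D @ x) + eps) ** p)

x = np.array([0.5, 0.5])                         # starting point
for t in [1.0, 0.1, 0.01, 1e-4]:                 # relaxation parameters t_k -> 0
    cons = [{"type": "ineq", "fun": lambda x, t=t:  t - G(x) * H(x)},   #  G*H <= t
            {"type": "ineq", "fun": lambda x, t=t:  t + G(x) * H(x)}]   # -t <= G*H
    res = minimize(objective, x, method="SLSQP", constraints=cons)
    x = res.x                                    # warm start the next, tighter relaxation
    print(f"t = {t:8.1e}   x = {x}   G*H = {G(x)*H(x):.2e}")
```

As $t_k\downarrow 0$, the product $G(x)H(x)$ is driven to zero, mimicking the sequence of KKT points of (28) analyzed in Theorems 7 and 8 below.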
Theorem 7.
Let { t k } k N R + be a sequence of positive relaxation parameters converging to zero. For each  k N , let x k Θ t k be a KKT point of problem (28). Assume that the sequence { x k } k N converges to a point x ¯ Θ where MPSC-MF qualification holds. Then x ¯ is a W-stationary point of problem (1).
Proof. 
Noting that x k is a KKT point of problem (28), we can find multipliers μ k R m , ν k R p , and λ k R l which satisfy the conditions as follows
0 = f ( x k ) + p i = 1 s ( | D i T x | + ϵ i ) p 1 s i g n ( D i T x k ) D i + i I g ( x k ) μ i k g i ( x k ) + j J ν j k h j ( x k ) + K λ k [ H ( x k ) G ( x k ) + G ( x k ) H ( x k ) ] , i I g ( x k ) : μ i k 0 , i I I g ( x k ) : μ i k = 0 , I t + ( x k ) : λ k 0 , I t ( x k ) : λ k 0 , K I t + ( x k ) I t ( x k ) : λ k = 0 ,
where I t + ( x ) : = { K : G ( x ) H ( x ) = t } , and I t ( x ) : = { K : G ( x ) H ( x ) = t } . For each k N and K , we define multipliers α k , β k R as stated below:
α k : = λ k H ( x k ) I G I G H , 0 I H , β k : = λ k G ( x k ) I H I G H , 0 I G ,
and γ i k = p ( | D i T x k | + ϵ i ) p 1 s i g n ( D i T x k ) . Thus, we obtain
0 = f ( x k ) + i = 1 s γ i k D i + i I g ( x k ) μ i k g i ( x k ) + j J ν j k h j ( x k ) + K [ α k G ( x k ) + β k H ( x k ) ] + I H λ k H ( x k ) G ( x k ) + I G λ k G ( x k ) H ( x k ) .
We next show that the sequence { ( γ k , μ k , ν k , α k , β k , λ I G I H k ) } k N is bounded. It is worth noting that γ i k R for i I D 0 ( x ¯ ) , since I D 0 ( x k ) I D 0 ( x ¯ ) and s i g n ( D i T x k ) = [ 1 , 1 ] , and γ i k is bounded for i I D . Hence, we just show the boundness of { ( γ I D 0 k , μ k , ν k , α k , β k , λ I G I H k ) } k N . In contrast, we assume that ρ k : = ( γ I D 0 k , μ k , ν k , α k , β k , λ I G I H k ) when k . Thus, 1 ρ k ( γ I D 0 k , μ k , ν k , α k , β k , λ I G I H k ) ( γ I D 0 , μ , ν , α , β , λ I G I H ) and ( γ I D 0 , μ , ν , α , β , λ I G I H ) = 1 . Note that I g ( x k ) I g ( x ¯ ) by the continuity of g for sufficiently large k N . Dividing Equation (30) by ρ k and taking the limit k , we obtain
0 = i I D 0 γ i D i + i I g μ i g i ( x ¯ ) + j J ν j h j ( x ¯ ) + K [ α G ( x ¯ ) + β H ( x ¯ ) ] , i I g : μ i 0 , i I I g : μ i = 0 , I G : β = 0 , I H : α = 0 .
Therefore, we obtain $\gamma_{I_{D^0}} = 0$, $\mu = 0$, $\nu = 0$, $\alpha = 0$, and $\beta = 0$ by the assumption that the MPSC-MF qualification holds at $\bar x$, which is item (c) of Definition 8. Hence, $\lambda_{\ell_0}\ne 0$ holds for at least one index $\ell_0\in I_G\cup I_H$. Without loss of generality, we assume $\ell_0\in I_G$; then we have $\alpha_{\ell_0}^k = \lambda_{\ell_0}^kH_{\ell_0}(x^k)$. Besides, we have
α 0 = lim k α 0 k ρ k = lim k λ 0 k H 0 ( x k ) ρ k = λ 0 H 0 ( x ¯ ) 0 ,
which is a contradiction since α = 0 . As a consequence, the sequence { ( γ k , μ k , ν k , α k , β k , λ I G I H k ) } k N is bounded.
Thus, we may assume the sequence { ( γ k , μ k , ν k , α k , β k , λ I G I H k ) } k N converges to { ( γ ¯ , μ ¯ , ν ¯ , α ¯ , β ¯ , λ ¯ I G I H ) } . We take the limit in Equation (30) and obtain
0 = f ( x ¯ ) + i = 1 s γ ¯ i D i + i I g μ ¯ i g i ( x ¯ ) + j J ν ¯ j h j ( x ¯ ) + K [ α ¯ G ( x ¯ ) + β ¯ H ( x ¯ ) ] , i I D : γ ¯ i = p ( | D i T x ¯ | + ϵ i ) p 1 s i g n ( D i T x ¯ ) , i I g : μ ¯ i 0 , i I I g : μ ¯ i = 0 , I G : β ¯ = 0 , I H : α ¯ = 0 ,
which shows that x ¯ is a W-stationary point of problem (1).  □
In some cases, even if MPSC-LICQ is valid for the relaxation (28), the method may not produce anything better than a W-stationary point; this is illustrated by [16] (Example 3.3). Note that the second-order necessary condition is not valid at the stationary point $(\sqrt t,\sqrt t)$ of the corresponding relaxed problem. Consider the following relaxation scheme, similar to problem (28):
$$\min\ f(x) := \tfrac12(x_1-1)^2 + \tfrac12(x_2-1)^2\quad \text{s.t.}\quad g_1(x) := x_1x_2 - t\le 0,\qquad g_2(x) := -x_1x_2 - t\le 0.\tag{31}$$
The first-order necessary condition is
$$\nabla f(x) + \mu_1\nabla g_1(x) + \mu_2\nabla g_2(x) = 0,\qquad \mu_1,\mu_2\ge 0,\qquad \mu_1g_1(x) = 0,\ \mu_2g_2(x) = 0,$$
and the second-order necessary condition requires
$$d^T\big[\nabla^2 f(x) + \mu_1\nabla^2 g_1(x) + \mu_2\nabla^2 g_2(x)\big]d\ge 0$$
for all critical directions $d$, where
$$\nabla^2 f(x) = \begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix},\qquad \nabla^2 g_1(x) = \begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix}.$$
Problem (31) has the stationary points $(\tfrac12,\tfrac12)\pm\tfrac12\sqrt{1-4t}\,(1,-1)$ and $(\sqrt t,\sqrt t)$. In fact, the constraint $g_1(x)$ is active and $g_2(x)$ is inactive at the point $x^t := (\sqrt t,\sqrt t)$; hence, the corresponding multipliers are $\mu_1 = \tfrac{1}{\sqrt t} - 1$ and $\mu_2 = 0$. We now verify that the second-order necessary condition fails at $x^t$. For $d^t = (d_1^t,d_2^t)^T\in T_d(x^t) := \{d\in\mathbb{R}^2 : \nabla g_1(x^t)^Td = 0\}$, we have $d_1^t = -d_2^t$. Then
$$(d^t)^T\big[\nabla^2 f(x^t) + \mu_1\nabla^2 g_1(x^t)\big]d^t = (d_1^t)^2 + (d_2^t)^2 + 2\mu_1d_1^td_2^t = 2\Big(2 - \tfrac{1}{\sqrt t}\Big)(d_1^t)^2.$$
When $t$ is sufficiently small, $2 - \tfrac{1}{\sqrt t} < 0$, which implies that the second-order necessary condition is not valid at $x^t$.
However, at the points $(\tfrac12,\tfrac12)\pm\tfrac12\sqrt{1-4t}\,(1,-1)$ the corresponding multipliers are $\mu_1 = 1$ and $\mu_2 = 0$, and for all $d\in T_d(x)$ the second-order necessary condition holds:
$$d^T\big[\nabla^2 f(x) + \mu_1\nabla^2 g_1(x)\big]d = d_1^2 + d_2^2 + 2d_1d_2 = (d_1 + d_2)^2\ge 0.$$
Furthermore, the limit points of $(\tfrac12,\tfrac12)\pm\tfrac12\sqrt{1-4t}\,(1,-1)$ as $t\downarrow 0$ are $(1,0)$ and $(0,1)$, which are S-stationary for the problem in [16] (Example 3.3).
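The following short numpy check (our own illustration, not part of the paper) reproduces the computations above for a small $t$: it evaluates the multiplier $\mu_1$ at each stationary point of (31) and the curvature of the Lagrangian along the critical direction, confirming that the second-order necessary condition fails at $(\sqrt t,\sqrt t)$ but holds at the other two points.

```python
import numpy as np

t = 1e-3
hess_f  = np.eye(2)                              # Hessian of f
hess_g1 = np.array([[0.0, 1.0], [1.0, 0.0]])     # Hessian of g1(x) = x1*x2 - t

def curvature(x):
    mu1 = (1.0 - x[0]) / x[1]                    # KKT multiplier for the active constraint g1
    d = np.array([x[1], -x[0]])                  # critical direction: grad g1(x)^T d = 0
    H = hess_f + mu1 * hess_g1                   # Hessian of the Lagrangian
    return mu1, d @ H @ d

r = 0.5 * np.sqrt(1.0 - 4.0 * t)
for x in [np.array([np.sqrt(t), np.sqrt(t)]),    # symmetric stationary point
          np.array([0.5 + r, 0.5 - r]),          # asymmetric stationary points
          np.array([0.5 - r, 0.5 + r])]:
    mu1, c = curvature(x)
    print(f"x = {x},  mu1 = {mu1:.4f},  d^T (Hf + mu1*Hg1) d = {c:.4f}")
```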
Definition 9.
We say that the second-order necessary condition holds at $x^k$ for problem (28) if
$$d^T\big[\Phi(x^k) + M(x^k)\big]d\ge 0,\qquad \forall d\in T_d(x^k),$$
where
$$\Phi(x^k) := \nabla^2f(x^k) + p(p-1)\sum_{r=1}^{s}\big(|D_r^Tx^k| + \epsilon_r\big)^{p-2}D_rD_r^T,$$
$$M(x^k) := \sum_{i=1}^m\mu_i\nabla^2g_i(x^k) + \sum_{j=1}^q\nu_j\nabla^2h_j(x^k) + \sum_{\ell=1}^l\lambda_\ell\big[G_\ell(x^k)\nabla^2H_\ell(x^k) + H_\ell(x^k)\nabla^2G_\ell(x^k)\big] + \sum_{\ell=1}^l\lambda_\ell\big[\nabla G_\ell(x^k)\nabla H_\ell(x^k)^T + \nabla H_\ell(x^k)\nabla G_\ell(x^k)^T\big],$$
$$T_d(x^k) := \big\{d\in\mathbb{R}^n : D_r^Td = 0,\ r\in I_{D^0};\ \nabla g_i(x^k)^Td = 0,\ i\in I_g;\ \nabla h_j(x^k)^Td = 0,\ j\in J;\ \big[G_\ell(x^k)\nabla H_\ell(x^k) + H_\ell(x^k)\nabla G_\ell(x^k)\big]^Td = 0,\ \ell\in I_t^+(x^k)\cup I_t^-(x^k)\big\}.$$
Theorem 8.
Let { t k } k N R + be a sequence of positive relaxation parameters converging to zero. For each  k N , let x k Θ t k be a KKT point of problem (28), and the sequence { x k } k N converges to a point x ¯ Θ . Assume that the second-order necessary condition holds at x k for all large k and the MPSC-MF qualification holds at x ¯ . Then x ¯ is an S-stationary point of problem (1).
Proof. 
In the proof of Theorem 7, we have shown that x ¯ is a W-stationary point under the MPSC-MF qualification condition. Assume to the contrary that x ¯ is not an S-stationary point. There must exist 0 I G H such that α 0 k α ¯ 0 0 or β 0 k β ¯ 0 0 . Without loss of generality, we assume α ¯ 0 0 . Let
A k : = D i T i I D 0 g i ( x k ) T i I g h i ( x k ) T i J G i ( x k ) T i = 0 G i ( x k ) T i I G H { 0 } I G ( x k ) G i ( x k ) H i ( x k ) H i ( x k ) T + G i ( x k ) T i I G I G ( x k ) H i ( x k ) T i = 0 H i ( x k ) T i I G H { 0 } I H ( x k ) H i ( x k ) G i ( x k ) G i ( x k ) T + H i ( x k ) T i I H I H ( x k )
and
z k : = 0 i I D 0 0 i I g 0 i J 1 i = 0 0 i I G H { 0 } 0 i I G λ 0 H 0 2 ( x k ) G 0 ( x k ) i = 0 0 i I G H { 0 } 0 i I H
The index sets I G ( x k ) I G and I H ( x k ) I H for all k sufficiently large; then we have that ( I G H { 0 } I G ( x k ) ) ( I G I G ( x k ) ) = I G H I G { 0 } and ( I G H { 0 } I H ( x k ) ) ( I H I H ( x k ) ) = I G H I H { 0 } . By the MPSC-LI qualification at x ¯ , it follows that the limit of the matrix sequence { A k } has full row rank. Hence, A k has full rank for all k sufficiently large. Let d k : = A k T ( A k A k T ) 1 z k , which satisfies A k d k = z k . Furthermore, the sequence { z k } is convergent since λ 0 H 0 2 ( x k ) G 0 ( x k ) α ¯ 2 β ¯ with k by the definitions of α and β in the proof of Theorem 7, which means that { d k } is bounded. In the following, we show the second-order derivatives of the objective and constraints for problem (28). Let ψ ( x ) : = G ( x ) H ( x ) , for I t + ( x k ) I t ( x k ) , we know I G I G ( x k ) I H I H ( x k ) ; then we have
ψ ( x k ) T d k = G ( x k ) H ( x k ) T d k + H ( x k ) G ( x k ) T d k = G ( x k ) H ( x k ) G ( x k ) G ( x k ) T d k + H ( x k ) G ( x k ) T d k = 0 .
From A k d k = z k and Equation (32) we obtain that d k T d ( x k ) for all sufficiently large k.
We next show that ψ k ( x k ) : = ( d k ) T ( Φ k + M k ) d k as k , which contradicts the second-order necessary condition at x k when k is sufficiently large. Note that D r T d k = 0 for all r I D 0 , we have that
( d k ) T Φ ( x k ) d k = ( d k ) T 2 f ( x k ) d k + p ( p 1 ) r I D ( | D r T x k | + ϵ r ) p 2 ( D r T d k ) 2 .
Since { d k } , { x k } are bounded, and | D r T x k | | D r T x ¯ | > 0 for all r I D , it is easy to verify that the sequence { ( d k ) T Φ k d k } is bounded. For the constraints, we can obtain
( d k ) T M ( x k ) d k = i = 1 m μ i k ( d k ) T 2 g i ( x k ) d k + j = 1 p ν j k ( d k ) T 2 h j ( x k ) d k + = 1 l λ k [ G ( x k ) ( d k ) T 2 H ( x k ) d k + H ( x k ) ( d k ) T 2 G ( x k ) d k ] + 2 = 1 l λ k G ( x k ) T d k H ( x k ) T d k .
Note that λ k G ( x k ) = β k and λ k H ( x k ) = α k according to Equation (29). In the proof of Theorem 7, we proved that { μ k } , { ν k } , { α k } and { β k } are all bounded. Since { d k } is bounded, it is not hard to see that the top four items in Equation (33) are all bounded. At the same time, by A k d k = z k and the definitions of A k and z k , we have
= 1 l λ k G ( x k ) T d k H ( x k ) T d k = ( I G + I H + I G H { 0 } + = 0 ) ( λ k G ( x k ) T d k H ( x k ) T d k ) = I G λ k G ( x k ) H ( x k ) ( H ( x k ) T d k ) 2 I H λ k H ( x k ) G ( x k ) ( G ( x k ) T d k ) 2 + ( λ 0 k H 0 ( x k ) ) 2 G 0 ( x k ) .
Since $G_{\ell_0}(\bar x) = 0$ and $\lambda_{\ell_0}^kH_{\ell_0}(x^k) = \alpha_{\ell_0}^k\to\bar\alpha_{\ell_0}\ne 0$, the term $(\lambda_{\ell_0}^kH_{\ell_0}(x^k))^2/G_{\ell_0}(x^k)$ diverges as $k\to\infty$. Thus, $(d^k)^TM(x^k)d^k\to-\infty$ as $k\to\infty$, which, together with the boundedness of $\{(d^k)^T\Phi(x^k)d^k\}$, contradicts the second-order necessary condition. Thus, $\bar\alpha_{\ell_0} = 0$.
Similarly, we can show that $\bar\beta_{\ell_0} = 0$. Furthermore, the second-order necessary condition holding at $x^k$ implies that $\alpha_{\ell_0}^k$ and $\beta_{\ell_0}^k$ vanish at the same time. The proof is complete.  □

5. Conclusions

In this paper, we study optimality conditions and qualifications for non-Lipschitz MPSCs. Our theoretical investigations enrich the landscape of available qualifications for non-Lipschitz MPSCs. Moreover, we present an approximation method similar to Scholtes' relaxation scheme. Theoretically, any accumulation point of the sequence produced by our method is a stationary point under certain conditions; however, an efficient computational method for solving the resulting subproblems is still lacking. Among the existing algorithms, penalty methods for MPSCs do not handle the non-Lipschitz term, and smoothing methods for non-Lipschitz optimization do not take the switching constraints into account. Hence, combining the penalty method and the smoothing method to obtain an efficient algorithm for non-Lipschitz MPSCs will be interesting and meaningful future work.

Author Contributions

Conceptualization, methodology, writing—original draft preparation and writing—review and editing, J.L.; methodology, writing—review and editing, Z.P.; writing—original draft preparation, Z.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by the Natural Science Foundation of China (Grant No. 11871383, 12001260).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Liberzon, D. Switching in Systems and Control; Birkhäuser: Boston, MA, USA, 2003. [Google Scholar]
  2. Sager, S. Reformulations and algorithms for the optimization of switching decisions in nonlinear optimal control. J. Process Control 2009, 19, 1238–1247. [Google Scholar] [CrossRef] [Green Version]
  3. Wang, L.; Yan, Q. Time optimal controls of semilinear heat equation with switching control. J. Optim. Theory Appl. 2015, 165, 263–278. [Google Scholar] [CrossRef]
  4. Sarker, R.A.; Newton, C.S. Optimization Modelling; CRC Press: Boca Raton, FL, USA, 2008. [Google Scholar]
  5. Seidman, T.I. Optimal control of a diffusion/reaction/switching system. Evol. Equ. Control Theory 2013, 2, 723–731. [Google Scholar] [CrossRef]
  6. Gugat, M. Optimal switching boundary control of a string to rest in finite time. ZAMM J. Appl. Math. Mech. 2008, 88, 283–305. [Google Scholar] [CrossRef]
  7. Luo, Z.Q.; Pang, J.S.; Ralph, D. Mathematical Programs with Equilibrium Constraints; Cambridge University Press: Cambridge, UK, 1996. [Google Scholar]
  8. Outrata, J.V.; Kocvara, M.; Zowe, J. Nonsmooth Approach to Optimization Problems with Equilibrium Constraints: Theory, Applications and Numerical Results; Kluwer Academic Publishers: Boston, MA, USA, 1998. [Google Scholar]
  9. Achtziger, W.; Kanzow, C. Mathematical programs with vanishing constraints: Optimality conditions and constraint qualifications. Math. Program. 2008, 114, 69–99. [Google Scholar] [CrossRef]
  10. Sadeghieh, A.; Kanzi, N.; Caristi, G. On stationarity for nonsmooth multiobjective problems with vanishing constraints. J. Glob. Optim. 2021, 1–21. [Google Scholar] [CrossRef]
  11. Burdakov, O.P.; Kanzow, C.; Schwartz, A. Mathematical programs with cardinality constraints: Reformulation by complementarity-type conditions and a regularization method. SIAM J. Optim. 2016, 26, 397–425. [Google Scholar] [CrossRef] [Green Version]
  12. Červinka, M.; Kanzow, C.; Schwartz, A. Constraint qualifications and optimality conditions for optimization problemswith cardinality constraints. Math. Program. 2016, 160, 353–377. [Google Scholar] [CrossRef]
  13. Mehlitz, P. Stationarity conditions and constraint qualifications for mathematical programs with switching constraints. Math. Program. 2020, 181, 149–186. [Google Scholar] [CrossRef]
  14. Li, G.; Guo, L. Mordukhovich Stationarity for Mathematical Programs with Switching Constraints under Weak Constraint Qualifications. Available online: http://www.optimization-online.org/DB_FILE/2019/07/7288.pdf (accessed on 7 November 2021).
  15. Liang, Y.-C.; Ye, J.J. Optimality conditions and exact penalty for mathematical programs with switching constraints. J. Optim. Theory Appl. 2021, 190, 1–31. [Google Scholar] [CrossRef]
  16. Kanzow, C.; Mehlitz, P.; Steck, D. Relaxation schemes for mathematical programs with switching constraints. Optim. Methods Softw. 2019, 1–36. [Google Scholar] [CrossRef]
  17. Clason, C.; Rund, A.; Kunisch, K.; Barnard, R.C. A convex penalty for switching control of partial differential equations. Syst. Control Lett. 2016, 89, 66–73. [Google Scholar] [CrossRef] [Green Version]
  18. Clason, C.; Rund, A.; Kunisch, K. Nonconvex penalization of switching control of partial differential equations. Syst. Control Lett. 2017, 106, 1–8. [Google Scholar] [CrossRef] [Green Version]
  19. Brodie, J.; Daubechies, I.; De Mol, C.; Giannone, D.; Loris, I. Sparse and stable Markowitz portfolios. Proc. Nat. Acad. Sci. USA 2009, 106, 12267–12272. [Google Scholar] [CrossRef] [Green Version]
  20. Bruckstein, A.M.; Donoho, D.L.; Elad, M. From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Rev. 2009, 51, 34–81. [Google Scholar] [CrossRef] [Green Version]
  21. Bian, W.; Chen, X. Linearly constrained non-Lipschitz optimization for image restoration. SIAM J. Imaging Sci. 2015, 8, 2294–2322. [Google Scholar] [CrossRef]
  22. Zeng, C.; Wu, C.; Jia, R. Non-Lipschitz Models for Image Restoration with Impulse Noise Removal. SIAM J. Imaging Sci. 2019, 12, 420–458. [Google Scholar] [CrossRef]
  23. Chen, X.; Niu, L.; Yuan, Y. Optimality conditions and a smoothing trust region newton method for non-Lipschitz optimization. SIAM J. Optim. 2013, 23, 1528–1552. [Google Scholar] [CrossRef] [Green Version]
  24. Lu, Z. Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Math. Program. 2014, 147, 277–307. [Google Scholar] [CrossRef]
  25. Huang, Y.; Liu, H. Smoothing projected Barzilai-Borwein method for constrained non-Lipschitz optimization. Comput. Optim. Appl. 2016, 65, 671–698. [Google Scholar] [CrossRef]
  26. Liu, Y.; Ma, S.; Dai, Y.-H.; Zhang, S. A smoothing SQP framework for a class of composite lp minimization over polyhedron. Math. Program. 2016, 158, 467–500. [Google Scholar] [CrossRef]
  27. Guo, L.; Chen, X.J. Mathematical programs with complementarity constraints and a non-Lipschitz objective: Optimality and approximation. Math. Program. 2021, 185, 455–485. [Google Scholar] [CrossRef]
  28. Rockafellar, R.T.; Wets, R.J.-B. Variational Analysis; Springer: Berlin, Germany, 1998. [Google Scholar]
  29. Andreani, R.; Haeser, G.; Schuverdt, M.L.; Silva, P.J.S. A relaxed constant positive linear dependence constraint qualification and applications. Math. Program. 2012, 135, 255–273. [Google Scholar] [CrossRef]
  30. Hestenes, M.R. Optimization Theory: The Finite Dimensional Case; Wiley: New York, NY, USA, 1975. [Google Scholar]
  31. Scholtes, S. Convergence properties of a regularization scheme for mathematical programs with complementarity constraints. SIAM J. Optim. 2001, 11, 918–936. [Google Scholar] [CrossRef]
Figure 1. Relations between various qualifications.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
