1. Introduction
DC (difference of two convex functions) programming has been one of the most active areas in nonconvex optimization. It arises in various applications, such as digital communication systems [1], assignment and power allocation [2] and compressed sensing [3]. Up to now, most of the work has concentrated on DC programming with convex inequality constraints or convex cone constraints. By virtue of conjugate theory, epigraph techniques, subdifferentials and perturbation approaches, several duality results and optimality conditions for DC or composite DC programming have been established; see [4,5,6,7,8,9,10,11,12] and the references therein. In these fruitful developments, the convexity of the constraint system is vital. However, many practical problems take the form of DC problems with nonconvex constraints. For instance, the problem involving DC inequality and DC equality constraints is an interesting subject, whose constraint system is nonconvex and has a special structure. It is the focus of our research.
Over the years, various local search and global search methods for solving smooth and nonsmooth DC programming have been proposed; see, for example, [13,14]. It is worth noting that these methods are often based on global necessary optimality conditions. Therefore, how to establish optimality conditions that are easily verified provides one of our motivations. As is well known, the relaxed constant rank constraint qualification (RCRCQ for short) was introduced by Minchenko and Stakhovski [15] for continuously differentiable inequality and equality constraint functions. It has proved to be a suitable assumption for discussing optimality conditions for optimization problems involving inequality and equality constraints; see, for example, [16,17,18]. Accordingly, we extend the original RCRCQ to the nondifferentiable DC constraint system and use it to carry out our research.
Wolfe and Mond-Weir dualities, which are closely related to the optimality of the primal problem, are highly important in duality theory. To the best of our knowledge, Wolfe and Mond-Weir dualities have been discussed intensively for convex optimization problems; see [19] for more details. However, there are few papers on Wolfe and Mond-Weir type dual problems for DC programming with DC inequality and DC equality constraints, since neither the objective nor the constraint functions enjoy convexity. In this case, how to characterize the Wolfe and Mond-Weir dualities provides another motivation.
In this paper, we mainly investigate a class of nondifferentiable DC programming problems with DC inequality and DC equality constraints. We firstly extend RCRCQ by taking points from the subdifferential sets of the two convex functions in each DC constraint function. Then, we use the extended RCRCQ to deduce one necessary optimality condition. Moreover, we adopt the convexification technique to translate the DC constraints into convex constraints and establish another necessary condition. For the purpose of constructing Wolfe and Mond-Weir dual problems, if we directly use the first necessary optimality condition as a constraint and define the objective function in the traditional way, we find that even weak duality is not satisfied. For this reason, we adopt the convexification technique again, together with the Fenchel-Moreau theorem, to formulate an inner convex problem. By constructing the Wolfe and Mond-Weir dual problems of this inner convex problem, we obtain the corresponding Wolfe and Mond-Weir type dual problems of the primal problem and characterize the zero duality gap properties, respectively.
Compared with [8], DC equality constraints are added to our model. Moreover, the constraint systems considered in [4,7,10,11,12] are convex, whereas our model focuses on a DC constraint system, which is nonconvex. Finally, the Wolfe and Mond-Weir dualities for DC programming with a DC constraint system have not been discussed in any existing studies.
The rest of the paper is organized as follows. In Section 2, we recall some basic concepts and properties. In Section 3, we deduce two necessary optimality conditions for the DC programming problem. In Section 4, we characterize the zero duality gap properties between the pairs of Wolfe and Mond-Weir type primal-dual problems, respectively.
2. Preliminaries
Let $\mathbb{R}^d$ be a $d$-dimensional Euclidean space and $B(x,\delta)$ be a closed ball centered at $x$ with positive radius $\delta$. For any two vectors $x, y \in \mathbb{R}^d$, we denote by $\langle x, y \rangle$ their inner product and denote by $\|\cdot\|$ the Euclidean norm. For a nonempty set $A \subseteq \mathbb{R}^d$, $|A|$ and $\operatorname{cone} A$ denote the cardinality and conical hull of $A$, respectively. If $A$ is a convex set, the normal cone of $A$ at $\bar{x} \in A$ is given by
\[
N(\bar{x}, A) = \{ x^* \in \mathbb{R}^d : \langle x^*, x - \bar{x} \rangle \le 0, \ \forall x \in A \}.
\]
For a real-valued function $f$, the conjugate function of $f$ is defined by $f^*(x^*) = \sup_{x \in \mathbb{R}^d} \{ \langle x^*, x \rangle - f(x) \}$ with $x^* \in \mathbb{R}^d$. Similarly, the biconjugate function of $f$ is defined by $f^{**}(x) = \sup_{x^* \in \mathbb{R}^d} \{ \langle x^*, x \rangle - f^*(x^*) \}$ with $x \in \mathbb{R}^d$. We denote $\operatorname{dom} f = \{ x \in \mathbb{R}^d : f(x) < +\infty \}$.
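As a numerical aside (an illustrative sketch, not part of the original text; the grid-based conjugate below only approximates the supremum, and the function chosen is an arbitrary convex test case), one can tabulate $f^*$ on a grid and check the Young inequality recalled in Remark 1 below, together with the relation $f^{**} \le f$:

```python
import numpy as np

# Grid approximation of the Fenchel conjugate f*(v) = sup_x { v*x - f(x) } in one dimension.
xs = np.linspace(-5.0, 5.0, 2001)
vs = np.linspace(-7.0, 7.0, 281)

f = lambda x: 0.5 * x**2 + np.abs(x)                 # a convex test function

f_star = np.array([np.max(v * xs - f(xs)) for v in vs])      # approximate f*
f_bistar = np.array([np.max(vs * x - f_star) for x in xs])   # approximate f**

# Young inequality: f(x) + f*(v) >= <v, x> for every grid pair (x, v)
assert np.all(f(xs)[None, :] + f_star[:, None] >= np.outer(vs, xs) - 1e-9)
# Biconjugation never exceeds f, and for this convex f it essentially recovers it
assert np.all(f_bistar <= f(xs) + 1e-9)
print(np.max(np.abs(f_bistar - f(xs))))              # close to 0, up to grid/truncation error
```

For a proper lower semicontinuous convex $f$, the Fenchel-Moreau theorem gives $f^{**} = f$, which the printed gap reflects up to discretization error.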
For a multifunction $F: \mathbb{R}^d \rightrightarrows \mathbb{R}^d$, the expression
\[
\limsup_{x \to \bar{x}} F(x) = \{ x^* \in \mathbb{R}^d : \exists \, x_k \to \bar{x}, \ x_k^* \to x^* \ \text{with} \ x_k^* \in F(x_k) \ \text{for all} \ k \}
\]
signifies the sequential Painlevé-Kuratowski upper limit.
Here, we also recall the concepts of locally Lipschitz continuity and lower semicontinuity of a real-valued function. $f$ is said to be lower semicontinuous around $\bar{x}$ if for any $\varepsilon > 0$, there exists an open neighborhood $U$ of $\bar{x}$ such that $f(x) > f(\bar{x}) - \varepsilon$ for all $x \in U$ or, equivalently, $\liminf_{x \to \bar{x}} f(x) \ge f(\bar{x})$. $f$ is said to be locally Lipschitz continuous around $\bar{x}$ if there exists an open neighborhood $U$ of $\bar{x}$ and $L > 0$ such that
\[
|f(x) - f(y)| \le L \| x - y \|, \quad \forall x, y \in U.
\]
Definition 1. [20] Let $f: \mathbb{R}^d \to \mathbb{R} \cup \{ +\infty \}$ be a lower semicontinuous function.
- (i) The Fréchet subdifferential of $f$ at $\bar{x} \in \operatorname{dom} f$ is defined by
\[
\hat{\partial} f(\bar{x}) = \Big\{ x^* \in \mathbb{R}^d : \liminf_{x \to \bar{x}} \frac{f(x) - f(\bar{x}) - \langle x^*, x - \bar{x} \rangle}{\| x - \bar{x} \|} \ge 0 \Big\}.
\]
- (ii) The limiting subdifferential of $f$ at $\bar{x} \in \operatorname{dom} f$ is defined by
\[
\partial_L f(\bar{x}) = \limsup_{x \xrightarrow{f} \bar{x}} \hat{\partial} f(x),
\]
where $x \xrightarrow{f} \bar{x}$ means that $x \to \bar{x}$ with $f(x) \to f(\bar{x})$.
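As a quick standard illustration (not taken from the paper's own computations), for the convex function $f(x) = |x|$ on $\mathbb{R}$ the two subdifferentials coincide:
\[
\hat{\partial} f(0) = \partial_L f(0) = [-1, 1],
\qquad
\hat{\partial} f(x) = \partial_L f(x) = \{ \operatorname{sign}(x) \} \ \text{for } x \ne 0.
\]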
Remark 1. Note that $\hat{\partial} f(\bar{x}) \subseteq \partial_L f(\bar{x})$ and $\partial_L f(\bar{x}) \ne \emptyset$ if $f$ is locally Lipschitz continuous at $\bar{x}$. Moreover,
\[
\hat{\partial} f(\bar{x}) = \partial_L f(\bar{x}) = \{ x^* \in \mathbb{R}^d : f(x) - f(\bar{x}) \ge \langle x^*, x - \bar{x} \rangle, \ \forall x \in \mathbb{R}^d \} \tag{1}
\]
if $f$ is a convex function. In this case, we denote the rightmost set of (1) by $\partial f(\bar{x})$. The Young inequality and Young equality are expressed as follows, respectively,
\[
f(x) + f^*(x^*) \ge \langle x^*, x \rangle, \quad \forall x, x^* \in \mathbb{R}^d,
\qquad
f(x) + f^*(x^*) = \langle x^*, x \rangle \ \Longleftrightarrow \ x^* \in \partial f(x).
\]
Before proceeding further, let us remark on the functions involved in the next discussion. We denote by $I$ and $J$ the index sets. Suppose that $g_i$, $h_i$, $i \in I$, and $p_j$, $q_j$, $j \in J$, are convex functions from $\mathbb{R}^d$ to $\mathbb{R}$. Now, we consider a DC problem with DC inequality and DC equality constraints:
\[
(\mathrm{DC}) \qquad \min_{x \in \mathbb{R}^d} \ f(x) \quad \text{s.t.} \quad g_i(x) - h_i(x) \le 0, \ i \in I, \qquad p_j(x) - q_j(x) = 0, \ j \in J.
\]
Set $\Omega = \{ x \in \mathbb{R}^d : g_i(x) - h_i(x) \le 0, \ i \in I, \ p_j(x) - q_j(x) = 0, \ j \in J \}$. For $\bar{x} \in \Omega$, set $I(\bar{x}) = \{ i \in I : g_i(\bar{x}) - h_i(\bar{x}) = 0 \}$. $I(\bar{x})$ is usually called the active index set. We denote by $\mathrm{val}(\mathrm{DC})$ the optimal value of (DC).
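As an illustrative sketch only (a hypothetical toy instance with assumed names g, h, p, q, not one of the paper's examples), feasibility in Ω and the active index set can be checked directly from the DC decompositions:

```python
import numpy as np

# Toy DC-constrained instance in R^2 (hypothetical, for illustration only):
#   inequality constraint: g1(x) - h1(x) <= 0 with g1(x) = ||x||^2, h1(x) = 2|x_1|
#   equality constraint:   p1(x) - q1(x) = 0  with p1(x) = |x_2|,   q1(x) = x_1^2
g = [lambda x: x[0]**2 + x[1]**2]
h = [lambda x: 2.0 * abs(x[0])]
p = [lambda x: abs(x[1])]
q = [lambda x: x[0]**2]

def in_Omega(x, tol=1e-9):
    """Feasibility for the DC constraint system Omega."""
    ineq_ok = all(gi(x) - hi(x) <= tol for gi, hi in zip(g, h))
    eq_ok = all(abs(pj(x) - qj(x)) <= tol for pj, qj in zip(p, q))
    return ineq_ok and eq_ok

def active_index_set(x, tol=1e-9):
    """Indices i with g_i(x) - h_i(x) = 0 (the active inequality constraints)."""
    return [i for i, (gi, hi) in enumerate(zip(g, h)) if abs(gi(x) - hi(x)) <= tol]

x_bar = np.array([1.0, 1.0])          # g1-h1 = 2-2 = 0 and p1-q1 = 1-1 = 0
print(in_Omega(x_bar), active_index_set(x_bar))   # True, [0]
```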
Remark 2. The DC problems with convex inequality constraints and convex cone constraints are studied in [4,5,6,7,10]. Our model (DC) focuses on DC inequality and DC equality constraints, which can be regarded as a generalization of the convex case.
Let us close this section by recalling the classical relaxed constant rank constraint qualification, which is useful for the next section. Let $c_i: \mathbb{R}^d \to \mathbb{R}$, $i \in I \cup J$, be continuously differentiable and $C = \{ x \in \mathbb{R}^d : c_i(x) \le 0, \ i \in I, \ c_j(x) = 0, \ j \in J \}$.
Definition 2. [15] We say that $C$ satisfies the relaxed constant rank constraint qualification at $\bar{x} \in C$ if there exists a neighborhood $V$ of the point $\bar{x}$ such that for any index subset $K$, where $J \subseteq K \subseteq J \cup \{ i \in I : c_i(\bar{x}) = 0 \}$, the family of gradient vectors $\{ \nabla c_i(x) : i \in K \}$ has the same rank at all $x \in V$.
3. Necessary Optimality Conditions for (DC)
In this section, our main goal is to establish necessary optimality conditions for (DC). We firstly extend RCRCQ given in Definition 2 to the special constraint system Ω.
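Before stating the nonsmooth extension, the following sketch may help fix ideas about the smooth RCRCQ of Definition 2 (a hypothetical numerical probe, not part of the paper: it samples points near the reference point and compares ranks of gradient subfamilies; it can only falsify, not certify, the qualification):

```python
import itertools
import numpy as np

def rcrcq_check(grad_c, eq_idx, active_ineq_idx, x_bar, radius=1e-3, samples=200, seed=0):
    """Numerically probe the relaxed constant rank CQ at x_bar.

    grad_c: list of callables, grad_c[i](x) returns the gradient of c_i at x.
    eq_idx: indices of equality constraints (always included in the subset K).
    active_ineq_idx: indices of inequality constraints active at x_bar.
    Returns False as soon as some subset K changes rank near x_bar.
    """
    rng = np.random.default_rng(seed)
    d = len(x_bar)
    for r in range(len(active_ineq_idx) + 1):
        for subset in itertools.combinations(active_ineq_idx, r):
            K = list(eq_idx) + list(subset)
            if not K:
                continue
            rank_at_xbar = np.linalg.matrix_rank(np.array([grad_c[i](x_bar) for i in K]))
            for _ in range(samples):
                x = x_bar + radius * rng.standard_normal(d)
                rank_at_x = np.linalg.matrix_rank(np.array([grad_c[i](x) for i in K]))
                if rank_at_x != rank_at_xbar:
                    return False
    return True

# usage on a toy system: c_0 (inequality, active) and c_1 (equality), both linear
grads = [lambda x: np.array([1.0, 1.0]), lambda x: np.array([1.0, -1.0])]
print(rcrcq_check(grads, eq_idx=[1], active_ineq_idx=[0], x_bar=np.zeros(2)))  # True
```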
Definition 3. We say that Ω satisfies RCRCQ at if there exist , , and , , such that for any index set and all sufficiently large , where , , , , , for all sequences , , , and satisfying , , , , , , as .
Remark 3. When and are continuously differentiable for , Definition 3 collapses into Definition 1 of [15]. So, Definition 3 is regarded as a nonsmooth version of RCRCQ for the special constraint system Ω.
The following observation is considered as an extension of [18] (Proposition 3.4) to the nonsmooth case, which is useful for the derivation of necessary conditions of (DC).
Lemma 1. Suppose that Ω satisfies RCRCQ at . Then, there exist , , , and such that is linearly independent, where , , for all sequences , and satisfying , , , as .
Proof. If , is linearly independent, it is evident that the assertion is true. If , is linearly dependent, then there exists an index set such that is linearly independent. From the fact that satisfies RCRCQ at , we have for , where , , for all sequences , and satisfying , , , as . Thus, there exists an index set and such that is linearly independent. The proof is complete. □
Further, we introduce a function :
Obviously, is locally Lipschitz continuous on since convex functions are locally Lipschitz continuous in the Euclidean space (see [21] (Theorem 1.4.1)).
Lemma 2. For arbitrary , there exist , , , such that
where .
Proof. This conclusion is obtained immediately by [22] (Lemma 2.1) and [20] (Corollary 3.3). We omit the proof here. □
Theorem 1. Let . Suppose that Ω satisfies RCRCQ at . If is an optimal solution of (DC), then there exist , , , such that
Proof. Step 1. For every , let us first consider a penalty problem :
where are such that for every .
Without loss of generality, we may assume that is the optimal solution of and as since is a compact set and is continuous on . Then, one has
If , then and this contradicts with (3). Thus, and . Further, we have
which implies that . If not, . This is a contradiction with for every . Consequently, we infer that and as .
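Since the displayed penalty problem is not reproduced here, the following is only a generic illustration of the penalization idea in Step 1, with an assumed quadratic penalty and an assumed toy instance (not the paper's exact construction): as the penalty parameter grows, minimizers of the penalized objective over a compact ball approach the feasible region.

```python
import numpy as np
from scipy.optimize import minimize

# Toy instance (hypothetical): minimize f(x) = (x0 - 2)^2 + (x1 - 2)^2
# subject to the DC inequality x0^2 + x1^2 - 2|x0| <= 0 and equality |x1| - x0^2 = 0,
# restricted to the compact ball ||x|| <= 3.
f = lambda x: (x[0] - 2.0)**2 + (x[1] - 2.0)**2
ineq = lambda x: x[0]**2 + x[1]**2 - 2.0 * abs(x[0])   # should be <= 0
eq = lambda x: abs(x[1]) - x[0]**2                      # should be == 0

def penalized(x, rho):
    # generic quadratic penalty: constraint violations squared and weighted by rho
    return f(x) + rho * (max(ineq(x), 0.0)**2 + eq(x)**2)

x_k = np.zeros(2)
for rho in [1.0, 10.0, 100.0, 1000.0, 10000.0]:
    res = minimize(lambda x: penalized(x, rho), x_k,
                   constraints=[{"type": "ineq", "fun": lambda x: 9.0 - x @ x}],  # stay in the ball
                   method="SLSQP")
    x_k = res.x
    print(rho, x_k, ineq(x_k), eq(x_k))
# the reported constraint violations shrink as rho grows (up to solver tolerance)
```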
Step 2. By the necessary optimality condition in terms of limiting subdifferential and nonsmooth calculus rules, it follows that
According to [20] (Corollary 3.3) and Lemma 2 and , there exist , , , such that
Further, there exist , , , , , , , such that
It is easy to verify that for sufficiently large k. Let , . Then, (4) is replaced with
where , .
Since satisfies RCRCQ at , by Lemma 1, there exist , , , and such that is linearly independent, where , , satisfying , , as . Then, there exists , such that
Let . Combining (5) with (6), one has
Moreover, by [22] (Lemma 2.2), there exist , , such that for every , is linearly independent and
Together with (7), we obtain that
Due to the index set being finite for every large k, we may assume that . Hence, is linearly independent, and (8) can be rewritten as
Step 3. Finally, we have to show that is bounded. Let . To the contrary, suppose that is unbounded such that as and
It is easy to see that , . Without loss of generality, we assume that , , , , . Then, we have , , , , by [23] (Proposition 2.1.5). Dividing on both sides of (9) and taking limits with , one has
This means that is linearly dependent since . Because satisfies RCRCQ at and is linearly independent, we have the following relation,
This contradicts the fact that is linearly dependent. Thus, is bounded. We may assume that . Taking limits on both sides of (9), we get that
Let , . We can see that
where , . The proof is complete. □
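The stationarity condition obtained in Theorem 1 is of the type exploited by DC algorithms (cf. Remark 4 below). As an illustrative aside, here is a minimal sketch of the classical DCA iteration for an unconstrained DC function $f = g - h$ with $g, h$ convex, a standard scheme from the DC programming literature rather than a method proposed in this paper: at each step, $h$ is replaced by an affine minorant built from a subgradient at the current iterate and the resulting convex subproblem is solved.

```python
import numpy as np

# Unconstrained toy DC function on R: f(x) = g(x) - h(x), g(x) = x^2, h(x) = 2|x - 1|.
g = lambda x: x**2
h = lambda x: 2.0 * abs(x - 1.0)

def subgrad_h(x):
    # one element of the convex subdifferential of h at x
    return 2.0 * np.sign(x - 1.0) if x != 1.0 else 0.0

def dca(x0, iters=50):
    x = x0
    for _ in range(iters):
        v = subgrad_h(x)               # v_k in the subdifferential of h at x_k
        # convex subproblem: minimize g(x) - v*x; for g(x) = x^2 the minimizer is v/2
        x = v / 2.0
    return x

x_star = dca(x0=3.0)
print(x_star, g(x_star) - h(x_star))   # x* = -1 is a critical point (here the global minimizer)
```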
Remark 4. As shown in [13], the necessary optimality condition of an unconstrained DC problem is significant for algorithm design and convergence analysis. Our result (2) is expressed in terms of classical convex subdifferentials, which are easier to calculate than the Fréchet or limiting subdifferentials and may be helpful for research on DC algorithms with nonconvex constraints. The following example is given to verify the conclusion of Theorem 1.
Example 1. Suppose that , , , , , , , and . Then,
The feasible set . It is easy to see that the optimal solution of (DC) is and the optimal value is .
It is not difficult to verify that Ω satisfies RCRCQ at . Moreover, by calculating, we have
Taking and , we obtain that
and .
Next, we establish another necessary optimality condition by translating (DC) into a problem whose objective function is the same as that of (DC) and whose constraint functions are convex. Let . We take the following DC problem into consideration:
where , and , and and . Denote .
Remark 5. In fact, if we directly convexify Ω to yield the problem where , and , then we observe that has no relation with . For this reason, we first translate the DC equality constraints of (DC) into DC inequality constraints, and then deal with these DC inequality constraints by the convexification technique to formulate and obtain the next proposition.
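To illustrate the convexification step in one dimension (an illustrative sketch under assumed notation: a DC inequality constraint written as $g(x) - h(x) \le 0$ with $g, h$ convex; by the Young inequality $-h(x) \le h^*(v) - \langle v, x \rangle$, the convex constraint $g(x) - \langle v, x \rangle + h^*(v) \le 0$ implies the DC one, and by the Young equality a point $\bar{x}$ stays feasible whenever $v \in \partial h(\bar{x})$):

```python
import numpy as np

# DC inequality constraint g(x) - h(x) <= 0 with g(x) = x^2, h(x) = |x| (toy instance).
# The conjugate of h = |.| is the indicator of [-1, 1], i.e. h*(v) = 0 for |v| <= 1.
g = lambda x: x**2
h = lambda x: abs(x)

def convexified(x, v):
    """Convex surrogate constraint g(x) - v*x + h*(v), defined for |v| <= 1."""
    assert abs(v) <= 1.0
    return g(x) - v * x + 0.0          # h*(v) = 0 on [-1, 1]

xs = np.linspace(-2.0, 2.0, 4001)
for v in (-1.0, -0.5, 0.0, 0.5, 1.0):
    surrogate_feasible = xs[convexified(xs, v) <= 0]
    # every surrogate-feasible point satisfies the original DC constraint
    assert np.all(g(surrogate_feasible) - h(surrogate_feasible) <= 1e-12)

# Young equality: at x_bar = 0.5, the subgradient v = 1 of h keeps x_bar feasible
x_bar = 0.5
print(convexified(x_bar, 1.0) <= 0, g(x_bar) - h(x_bar) <= 0)   # True True
```

This mirrors the argument of Proposition 1 below: the convexified feasible set shrinks, but the reference point survives when the subgradients are taken at it.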
Proposition 1. If is an optimal solution of (DC), then is an optimal solution of for each .
Proof. For each and , by the Young inequality, we have
The last two inequalities imply that for all . Together with the first inequality, we obtain that . Moreover, for , it follows from the Young equality that . Since is an optimal solution of (DC), for any . Due to the previous conclusion , we see that is also an optimal solution of . □
Here, we recall the basic constraint qualification (BCQ for short) of a constraint set at a point.
Definition 4. [8] Let be convex for . For a set , we say that A satisfies BCQ at if
where .
Finally, we derive the necessary optimality condition of via .
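For a rough one-dimensional illustration of BCQ (a standard textbook-style example with assumed notation, not one of the paper's examples): Definition 4 asks, loosely, that the normal cone to $A$ at $\bar{x}$ be generated by subgradients of the active constraint functions. Take $A = \{ x \in \mathbb{R} : c(x) \le 0 \}$ with $c(x) = |x|$, so $A = \{0\}$. Then
\[
N(0, A) = \mathbb{R}, \qquad \operatorname{cone}\big( \partial c(0) \big) = \operatorname{cone}\big( [-1, 1] \big) = \mathbb{R},
\]
so BCQ holds at $0$. By contrast, with $c(x) = x^2$ one still has $A = \{0\}$, but $\operatorname{cone}(\partial c(0)) = \{0\} \ne \mathbb{R} = N(0, A)$, and BCQ fails.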
Theorem 2. Let . If is an optimal solution of and satisfies BCQ at for each , then for each and , there exist and such that
Proof. For each , it follows from Proposition 1 that is an optimal solution of . Since and are convex, by virtue of [20] (Theorem 3.1), one has
which means that for any . Because satisfies BCQ at , for each , and , we have
where and . That is, for each , and , there exist and such that
or, equivalently,
□
The following Example 2 explains that the necessary optimality result in Theorem 2 fails if the BCQ of at is not satisfied. In addition, Example 3 is given to verify the result of Theorem 2.
Example 2. Let us reconsider the problem of Example 1. Through some calculation in Example 1, we have , and . We also calculate . Then, we get the feasible set . Further, we obtain and Obviously, . Indeed, taking , we observe that for any ,
Example 3. Suppose that , , , , , , , and . Then, (DC) is given as:
The feasible set . It is easy to see that the optimal solution of (DC) is and the optimal value is . By a simple analysis, we have
Then, the feasible set . Obviously, namely, BCQ is satisfied. Indeed, for , taking , and , we have
The conclusion of Theorem 2 is verified. Simultaneously, we observe that the conclusion of Theorem 1 is true. That is,
Remark 6. Examples 1, 2, and 3 also illustrate that the results of Theorems 1 and 2 have no relation in general. The necessary optimality condition established in Theorem 2 will be important for the discussion of the next section.
4. Wolfe and Mond-Weir Type Dualities for (DC)
By the Fenchel-Moreau theorem, (DC) can be equivalently translated into
Moreover, by the convexification technique, (DC) can be associated with
where for some and is the same as the aforementioned one in Section 3. Motivated by this idea, for and , we consider the two problems below:
Firstly, we give the following corollary, which is helpful for the discussion of Wolfe and Mond-Weir type dualities of (DC) in the sequel.
Corollary 1. For some and , if is an optimal solution of and satisfies BCQ at for each , then there exist and such that
Proof. Applying Proposition 1 and Theorem 2 to , (12) is obtained immediately. □
Now, we construct the Wolfe and Mond-Weir type dual problems of via . Let and ,
The Wolfe and Mond-Weir dual problems of are constructed by, respectively,
Based on this, the Wolfe type and Mond-Weir type dual problems of are defined as, respectively,
Denote by (resp. ) the optimal values of (resp. ). The zero duality gap property holds between and (resp. ) if and only if (resp. ).
Remark 7. If we define
and present the Wolfe and Mond-Weir dual problems of in the traditional way:
Then, it is not easy to characterize the zero duality gap properties since some subdifferential calculus rules do not work for DC functions. Here, we observe that and are slightly different from the traditional Wolfe and Mond-Weir dual problems and , so we call them the Wolfe type and Mond-Weir type dual problems, respectively.
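For orientation, and under assumed generic notation rather than the paper's own symbols: for a convex problem $\min \{ \varphi(x) : \psi_i(x) \le 0, \ i \in I \}$ with $\varphi, \psi_i$ convex, the classical Wolfe and Mond-Weir duals read (in one common nonsmooth form)
\[
\mathrm{(W)} \quad \max_{y, \, \lambda \ge 0} \ \varphi(y) + \sum_{i \in I} \lambda_i \psi_i(y)
\quad \text{s.t.} \quad 0 \in \partial \varphi(y) + \sum_{i \in I} \lambda_i \, \partial \psi_i(y),
\]
\[
\mathrm{(MW)} \quad \max_{y, \, \lambda \ge 0} \ \varphi(y)
\quad \text{s.t.} \quad 0 \in \partial \varphi(y) + \sum_{i \in I} \lambda_i \, \partial \psi_i(y), \qquad \sum_{i \in I} \lambda_i \psi_i(y) \ge 0.
\]
The inner convex problems used above arise from the Fenchel-Moreau identity $h = h^{**}$, which lets a DC function be written as $g(x) - h(x) = \inf_{v \in \operatorname{dom} h^*} \{ g(x) - \langle v, x \rangle + h^*(v) \}$, an infimum of convex functions indexed by $v$; the constructions of this section apply the classical duals to these inner convex problems.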
We close this section by characterizing the zero duality gap properties between (DC) and , .
Theorem 3. For any , if has an optimal solution , then for each , . Furthermore, if satisfies BCQ at for each , then .
Proof. If there exists such that is empty, then is empty. In this case, . So, we suppose that is nonempty for any . Then, for each and , we have , and
If , then and
That is, . Due to the arbitrariness of , we get . Let for , (13) implies that
In particular, for , (14) also holds and means that
Considering the arbitrariness of and , we obtain that
Furthermore, if satisfies BCQ at for each , then by Corollary 1, there exist and such that
Then, we see that . Let for , we get . Hence, one has
Considering the arbitrariness of , we get . In conclusion, . □
Remark 8. There are no convexity or generalized convexity assumptions imposed on and for in Theorem 3, yet we still obtain the duality results. This is due to the structural characteristics of DC programming itself.
5. Conclusions
In this paper, under the extended RCRCQ given by Definition 3, we directly established one necessary optimality condition for (DC), see Theorem 1. Then, we derived another necessary optimality condition by virtue of convexification and BCQ, see Theorem 2. Finally, we constructed the Wolfe type and Mond-Weir type dual problems and characterized the zero duality gap properties between them and (DC), see Theorem 3. It is worth mentioning that possible future research directions are as follows: (i) If the objective function of the DC programming problem considered in our paper is replaced by a vector-valued function, namely, every component function is a DC function, then how do we study the necessary optimality conditions and the Wolfe and Mond-Weir dualities? (ii) As we know, if we study DC programming in Banach spaces, the lower semicontinuity of the DC functions plays a key role. Then, in the absence of the lower semicontinuity of DC functions, or even when the DC functions are replaced by the difference of two quasiconvex functions, how can we deduce the optimality conditions and various types of dualities (Wolfe, Mond-Weir, Lagrange, Fenchel-Lagrange dualities)?