Abstract
This paper deals with a strong approximate subdifferential formula for the difference of two vector convex mappings, expressed in terms of the star difference. The formula is obtained via a scalarization process, using the approximate subdifferential of the difference of two real convex functions established by Martinez-Legaz and Seeger, together with the concept of regular subdifferentiability. It allows us to establish approximate optimality conditions characterizing approximate strong efficient solutions of a general DC problem and of a multiobjective fractional programming problem.
MSC:
90C46; 58C20; 90C32
1. Introduction
It is well known that the theory of DC mathematical programming, dealing with functions expressed as a difference of two convex functions, is now very well developed, owing to its theoretical interest and its extensive range of practical applications in optimal control, mechanics, operations research, and other fields (see [1,2,3,4,5,6,7,8] and the references therein). This theory constitutes an important approach to nonconvex optimization problems. In machine learning, many important learning problems, such as training Boltzmann machines, can be formulated as DC programs (see [9]).
The overview paper [4] presents essential results on theory, applications, and solution methods for DC programming in the sense of global optimization. Significant advances have been made in the study of duality theory associated with constrained DC optimization problems (see [10,11,12,13,14,15]).
The motivation for this paper stems from the significant contributions of Martinez-Legaz and Seeger [16], who established a formula for the approximate subdifferential of the difference of two convex functions over a locally convex topological vector space. This formula is expressed in terms of the star difference of two subsets, and the authors provided an application for DC programming.
The aim of this work is to show how the formula established by Martinez-Legaz and Seeger can be used to obtain the approximate subdifferential of the difference of two vector convex mappings by using the vector strong subdifferential, the concept of subdifferential regularity [17], and a scalarization process. Two illustrations are given: the first deals with a constrained DC vector programming problem, and the second deals with a constrained multiobjective fractional programming problem. The rest of the work is organized as follows. In Section 2, we present some basic definitions and preliminary material. In Section 3, we recall the formula established by Martinez-Legaz and Seeger [16] and show how this formula can be used to obtain the approximate subdifferential of the difference of two vector convex mappings. In Section 4 and Section 5, we derive, from the obtained formula, optimality conditions for two vector cone-constrained programming problems. Finally, the paper ends with a conclusion and future work.
2. Preliminaries
In this paper, let E, F, and G be three real Hausdorff locally convex topological vector spaces. The space F (respectively, G) is endowed with a nonempty convex cone (respectively, ) inducing a partial preorder on F (respectively, on G) defined by: for
We adjoin to F (respectively to G) two abstract elements and , such that
The dual topological spaces of E and G are denoted respectively by and , and the duality pairing in G is denoted by , with and . The positive dual cone of is defined by
Let . The point is said to be a lower bound of S if for all . We denote by , if it exists, the greatest lower bound of S.
Let B and C be two nonempty subsets of F, and . The following operations will be used:
Let be a given mapping. The effective domain of H is denoted by
We say that H is proper when . The epigraph of the mapping H is denoted by , which is defined as follows:
H is called -convex if
A mapping is said to be -increasing if for all ,
The composed mapping is defined as follows:
Let us note that if K is -increasing and -convex and if H is -convex, then is -convex.
Following [18], whenever and , the strong -subdifferential of H at is defined by
where denotes the vector space of continuous linear mappings from E to F. For , we have the usual strong vector subdifferential
If , we set . Let us note that when , reduces to the usual subdifferential of convex analysis, denoted by
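Since some displayed formulas do not render above, we recall, in standard convex-analysis notation (ours, not the paper's own display), the classical scalar notion to which this reduces: for a proper convex function f on E and ε ≥ 0, the ε-subdifferential at a point of the domain is

```latex
\partial_{\varepsilon} f(\bar{x})
  = \bigl\{\, x^{*} \in E^{*} \;:\;
      f(x) \,\ge\, f(\bar{x}) + \langle x^{*},\, x - \bar{x} \rangle - \varepsilon
      \quad \text{for all } x \in E \,\bigr\}.
```

For ε = 0 this is the usual subdifferential of convex analysis.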
3. Approximate Subdifferential of the Difference of Two Vector Convex Mappings
In this section, we attempt to extend the formula of [16] for the difference of two vector-valued mappings. Let us recall this scalar formula [16] expressed by means of the star difference of two subsets of
Definition 1
([19]). The star difference between two subsets B and C of is given by
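In the finite case, the star difference (often called the Pontryagin difference) — the set of translations x such that x + C ⊆ B — can be computed directly. The following Python snippet is an illustrative sketch only; the helper name star_difference is ours, not the paper's:

```python
def star_difference(B, C):
    """Star (Pontryagin) difference of finite sets of 2-D lattice points:
    the set of translations x such that x + c lies in B for every c in C."""
    # Every admissible x can be written as b - c for some b in B and c in C,
    # so it suffices to test these finitely many candidate translations.
    candidates = {(b[0] - c[0], b[1] - c[1]) for b in B for c in C}
    return {x for x in candidates
            if all((x[0] + c[0], x[1] + c[1]) in B for c in C)}

# Example: B is the 5x5 lattice square [0,4]^2 and C the 3x3 square [-1,1]^2;
# the star difference "erodes" B by C, leaving the square [1,3]^2.
B = {(i, j) for i in range(5) for j in range(5)}
C = {(i, j) for i in range(-1, 2) for j in range(-1, 2)}
D = star_difference(B, C)
```

Here D consists of the nine points (i, j) with 1 ≤ i, j ≤ 3: shifting any of them by any element of C stays inside B, and no other translation has this property.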
Theorem 1
([16]). Let be two proper functions, , and . If H and K are lower semicontinuous and convex, then
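As usually stated (see [16]), and writing it here in our own generic notation since the paper's display is not reproduced above, the formula reads: for proper, convex, lower semicontinuous functions h and k, a point of the intersection of their domains, and ε ≥ 0, with ⊛ denoting the star difference,

```latex
\partial_{\varepsilon}(h - k)(\bar{x})
  \;=\; \bigcap_{\gamma \ge 0}
    \Bigl[\, \partial_{\varepsilon + \gamma} h(\bar{x})
      \;\circledast\; \partial_{\gamma} k(\bar{x}) \,\Bigr].
```

The inclusion ⊆ is the substantial part; the reverse inclusion holds for arbitrary functions.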
Let and . The scalar function is defined by
Let us note that, for any , and, if H is -convex, then is convex. In order to state our main result, we will need the following lemma.
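The scalarization used throughout is the standard one (written in generic notation, as the paper's display is not reproduced above): for a continuous linear functional λ in the positive dual cone of the ordering cone of F, one sets

```latex
(\lambda \circ H)(x) \;=\; \bigl\langle \lambda,\, H(x) \bigr\rangle ,
\qquad x \in E .
```

Since λ is linear and nonnegative on the ordering cone, the cone-convexity of H immediately yields the (scalar) convexity of λ∘H.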
Lemma 1.
- 1.
- If is closed and if there exists such that for all then ;
- 2.
- Let be a given -convex mapping, , and If is closed, then
- 3.
- If the topological interior of is nonempty and , then for any , we havewhere
Proof.
- 1.
- See ([20] Proposition 2.1).
- 2.
- See ([17] Theorem 3.2).
- 3.
- We have for any . For the reverse inclusion, let and . If , we obviously have Following ([20] Proposition 2.1), there exists some such that , and hence we write for any . Let us note that since , and is a cone.
□
We say that a vector-valued mapping is star -lower semicontinuous at if the function is lower semicontinuous at for any (see [21]). The mapping K is called weak regular -subdifferentiable at , where (see [17]), if
where and if
If , we say K is weak regular subdifferentiable at .
Theorem 2.
Let be two given mappings, and . Then
with equality if H and K are proper, -convex, and star -lower semicontinuous; K is weak regular γ-subdifferentiable at for all and the cone is closed; and .
Proof.
Let , i.e.,
Let . Then, for all , we have
Adding (2) and (3) term by term, and since is a convex cone, we obtain
i.e.,
which yields that, for any ,
i.e.,
and the direct inclusion is proved. For the reverse inclusion, let
then, for every , we have
i.e.,
Since H is -convex, it follows according to property of Lemma 1 that
and then
Let Since K is weak regular -subdifferentiable at for all then K is weak regular -subdifferentiable at for all i.e.,
with
Again, by applying the property of Lemma 1, we can write
which yields
Since H and K are proper -convex, star -lower semicontinuous at , then and are proper convex, lower semicontinuous functions and finite at ; hence, by applying Theorem 1, we obtain
i.e.,
By using the scalarization process of the strong subdifferential given by property of Lemma 1, we obtain
This completes the proof. □
By taking in Theorem 2, we obtain the formula of the exact subdifferential of the difference of two vector convex mappings.
Corollary 1.
Let be two given mappings and . Then
with equality if H and K are proper, -convex, and star -lower semicontinuous; K is weak regular γ-subdifferentiable at for all ; and the positive cone is closed and .
4. Application to DC Vector Programming Problems
Let be a mapping and . A point is called an -minimizer of H on C if
where . If , we have that is an -minimizer of H if and only if . The vector indicator mapping is defined by
The -normal set of C at in a vector sense is defined by
It is clear that if , then T is -increasing. By taking a -convex mapping , it follows that the composed mapping is -convex.
We will need the following result later.
Lemma 2
([22]). We suppose that the convex cone is closed. For every we have
- 1.
- If then
- 2.
- If then
In [18], Théra developed a calculus formula for the strong -subdifferential of the sum of two convex vector mappings. Recall that is said to be order-complete if exists for each nonempty subset that is order-bounded from below. We say that G is normal if there exists a basis of neighborhoods N of such that
Theorem 3
([18]). Let be two -convex mappings. If H is continuous at some point of and is normal order-complete, then for every and , we have
Consider the following constrained DC vector minimization problem,
where is convex and are two proper -convex mappings. By using the vector indicator mapping, the problem is equivalent to the following unconstrained problem:
Now, we establish necessary and sufficient optimality conditions for the minimization problem characterizing an -minimizer.
Theorem 4.
Let be two proper, -convex, and star -lower semicontinuous mappings, and let be convex and closed. Assume that H is continuous at some point of , that is normal order-complete, that K is weak regular γ-subdifferentiable at for all , and that the cone is closed and . Then is an ϵ-minimizer of the problem if and only if, for all ,
Proof.
We have that is an -minimizer of the problem if and only if
Since the subset C is convex and nonempty, the vector indicator mapping is -convex and proper. It is simple to observe that , for any , where is the scalar indicator function of the subset C. Given the fact that C is closed, it follows that is lower semicontinuous, and we deduce that is star -lower semicontinuous.
Since H and are -convex, star -lower semicontinuous, and , then is -convex, proper, and star -lower semicontinuous. As K is weak regular -subdifferentiable at for any , and the positive cone is closed and , then by virtue of Theorem 2, (8) becomes
i.e.,
As H and are -convex, H is continuous at some point of , and is normal order-complete; then, according to Theorem 3, (9) becomes
The proof is complete. □
By taking in (9) of the above proof, we deduce the following proposition.
Proposition 1.
Let be two proper, -convex, and star -lower semicontinuous mappings. If K is weak regular γ-subdifferentiable at for all , and the cone is closed and , then is an ϵ-minimizer of if and only if
In particular, is a minimizer of if and only if
Remark 1.
The above proposition generalizes a result due to Hiriart-Urruty [3] characterizing a global minimum for a scalar DC programming problem.
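In the scalar case, Hiriart-Urruty's criterion states that a point globally minimizes f − g if and only if ∂_ε g(x̄) ⊆ ∂_ε f(x̄) for every ε ≥ 0. The sketch below checks this inclusion on a grid of ε for f(t) = t² and g(t) = |t|, whose ε-subdifferentials are intervals with closed forms (derived by us for this example; the function names are ours):

```python
import math

def eps_subdiff_square(x, eps):
    # eps-subdifferential of f(t) = t^2 at x: s is an eps-subgradient iff
    # t^2 >= x^2 + s(t - x) - eps for all t, i.e. (s - 2x)^2 <= 4*eps,
    # giving the interval [2x - 2*sqrt(eps), 2x + 2*sqrt(eps)].
    r = 2.0 * math.sqrt(eps)
    return (2.0 * x - r, 2.0 * x + r)

def eps_subdiff_abs(x, eps):
    # eps-subdifferential of g(t) = |t| at x >= 0: always a subset of [-1, 1];
    # for x > 0 the lower endpoint relaxes to max(-1, 1 - eps/x).
    if x == 0.0:
        return (-1.0, 1.0)
    return (max(-1.0, 1.0 - eps / x), 1.0)

def hiriart_urruty_holds(x, eps_grid):
    # Check the inclusion  eps_subdiff_abs(x) <= eps_subdiff_square(x)
    # (as intervals) for every eps in the grid.
    for eps in eps_grid:
        lg, ug = eps_subdiff_abs(x, eps)
        lf, uf = eps_subdiff_square(x, eps)
        if not (lf <= lg and ug <= uf):
            return False
    return True

grid = [1e-3, 1e-2, 0.1, 0.5, 1.0, 2.0]
# x = 1/2 is a global minimizer of t^2 - |t| (value -1/4): inclusion holds.
# x = 0 is merely a critical point: the inclusion fails for small eps.
```

On this grid, hiriart_urruty_holds(0.5, grid) returns True while hiriart_urruty_holds(0.0, grid) returns False, matching the fact that t² − |t| attains its global minimum −1/4 at t = ±1/2 and not at 0.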
A point is said to be an -maximizer of H on C if
Consider the following constraint convex vector maximization problem:
The problem becomes equivalent to
Corollary 2.
Let be a proper, -convex, and star -lower semicontinuous mapping and be convex and closed. If K is weak regular γ-subdifferentiable at for all , and the cone is closed and , then is an ϵ-maximizer of the problem if and only if
Proof.
It suffices to take in Proposition 1. □
Let us now consider the following constrained vector minimization problem,
where are two -convex mappings and is a proper -convex mapping. By using the vector indicator mapping , the unconstrained minimization problem below is equivalent to the problem :
The following result will be required to state the necessary and sufficient approximate optimality conditions that characterize an -minimizer of problem .
Theorem 5
([23]). Let be a proper -convex mapping, be a proper, -convex, and -increasing mapping and be a proper and -convex mapping. If there exists such that K is continuous at the point , then
for any and .
Now, we are prepared to announce the approximate optimality conditions related to problem .
Theorem 6.
Let be two proper, -convex, and star -lower semicontinuous mappings, and let be a proper and -convex mapping. Assume that there exists some point , that is closed, that K is weak regular γ-subdifferentiable at for all , and that the cone is closed and . Then is an ϵ-minimizer of the problem if and only if, for any and for any , there exist and satisfying , and
Proof.
The point is an -minimizer of the problem if and only if
Let us recall that the vector indicator mapping is -increasing (see [20]) and -convex. Since L is -convex, it follows that is -convex. Since for any , it follows that
and as is closed, we deduce that is closed, which yields that is star -lower semicontinuous. Since H is -convex and star -lower semicontinuous and , it follows that is -convex, star -lower semicontinuous, and proper. We claim that is continuous on . Indeed, for any neighborhood V of , we have . As , it follows that is continuous at . Let us note that all the assumptions of Proposition 1 are satisfied; therefore, we obtain
Let us observe that all hypotheses of Theorem 5 are satisfied and therefore (11) becomes equivalent to
i.e.,
Therefore, by virtue of Lemma 2, we obtain that, for any and for any , there exist and satisfying , and . This completes the proof. □
5. Application to a Multiobjective Fractional Programming Problem
This section focuses on a general multiobjective fractional programming problem,
where the functions are convex such that , for any , and L is a proper -convex mapping. The following notation will be required:
The finite-dimensional space is equipped with its natural order induced by the positive cone
i.e.,
The following definition is equivalent to the one of an -minimizer.
Definition 2.
Let . We say that a point is an ϵ-minimizer of the problem if
By using a parametric approach, we can equivalently convert the multiobjective fractional programming problem into a DC vector nonfractional programming problem defined in the following way:
where and are two mappings defined for every by
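To make the parametric construction concrete, fix a feasible point x̄ and set v̄_i = f_i(x̄)/g_i(x̄). Under the usual sign assumptions of fractional programming (f_i ≥ 0 and g_i > 0 on the feasible set), the vector objective can then be split, in our notation, as a difference of two cone-convex mappings:

```latex
H(x) - K(x), \qquad
H(x) = \bigl( f_i(x) \bigr)_{1 \le i \le p}, \qquad
K(x) = \bigl( \bar{v}_i \, g_i(x) \bigr)_{1 \le i \le p},
\qquad \bar{v}_i = \frac{f_i(\bar{x})}{g_i(\bar{x})} \ge 0 .
```

This Dinkelbach-type splitting is only a sketch of the construction; the precise mappings H and K are those defined by the paper.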
In order to relate the fractional programming problem to the DC vector optimization problem (), we formulate the following lemma.
Lemma 3.
A point is an ϵ-minimizer of if and only if is an -minimizer of the problem
Proof.
Assume that is an -minimizer of From Definition 2, we have for each
Since , we deduce from (12) that , for any and . As , we write
i.e.,
which yields that is an -minimizer for the problem
By using similar arguments as above, we show easily that if is an -minimizer for the problem , then is an -minimizer for the problem
This completes the proof. □
The problem is reduced to the following unconstrained minimization problem:
Proposition 2.
Let be convex and lower semicontinuous functions such that and , for each and for any . Let be a proper -convex mapping. We assume that is closed and nonempty and that there exists some such that the functions are continuous at . Let , , and . Then is an ϵ-minimizer of the problem if and only if, for any and for any , there exist and satisfying , and
Proof.
Let , , and . By Lemma 3, we have that is an -minimizer of if and only if is an -minimizer of the problem i.e.,
Let us note that in this situation and , which is a closed convex cone, and ; hence, the -convexity of the mappings H and follows easily from the convexity of the functions and for For any , we have and, since is lower semicontinuous, we deduce that is lower semicontinuous, which yields that H is star -lower semicontinuous. Similarly, we show that is also star -lower semicontinuous.
Let ; by virtue of [17], the -weak subdifferential regularity of reduces exactly to a well-known chain rule of convex analysis, i.e., for any , we have
and this formula holds under the standard Moreau–Rockafellar qualification condition, i.e., the functions are convex and there exists some such that the functions are continuous at . For our purposes, this qualification condition is satisfied. Let us emphasize that all the assumptions of Theorem 6 are fulfilled; therefore, is an -minimizer of the problem if and only if, for any , there exist and satisfying and
The strong -subdifferential reduces to
The condition can be written as where . The composed mapping is defined by
Now, we can write with , and hence we obtain
The condition may be rewritten as for any Obviously, the condition is equivalent to , for any .
The proof is complete. □
6. Conclusions and Discussion
In this article, we extended to the setting of vector convex mappings a formula of [16] for the approximate subdifferential of the difference of two real convex functions. The extension was obtained through a scalarization process, using this scalar formula, the concept of regular subdifferentiability, and the star difference operation. The established result then allowed us to characterize approximate strong solutions of a constrained vector DC programming problem and of a constrained multiobjective fractional programming problem.
Let us note that a result similar to Proposition 1 was developed by Hiriart-Urruty [3] for an unconstrained scalar DC optimization problem, in terms of Fenchel approximate subdifferentials, characterizing a global (exact or approximate) solution. Additionally, in [5], a similar condition is established characterizing a weakly efficient solution for the difference of two vector mappings in finite- or infinite-dimensional preordered spaces.
In a forthcoming work, we will study a Pareto version (weak and proper) of the above formula, and we will also attempt to design efficient algorithms for solving this class of problems numerically.
We would like to express our gratitude to the referees for drawing our attention to three papers [24,25,26], with a view to elaborating a concrete application model that we will study using our results, and for their valuable comments and suggestions, which have certainly contributed to improving the quality of the paper.
Author Contributions
Methodology, A.A.; Validation, M.L.; Formal analysis, M.L.; Resources, A.E.-d.; Visualization, A.A.; Supervision, M.H. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Khazayel, B.; Farajzadeh, A. On the optimality conditions for DC vector optimization problems. Optimization 2022, 71, 2033–2045. [Google Scholar] [CrossRef]
- Shafie, A. Necessary and sufficient optimality conditions for DC vector optimization. Acta Univ. Apulensis Math. Inform. 2017, 51, 41–52. [Google Scholar]
- Hiriart-Urruty, J.B. From Convex Optimization to Nonconvex Optimization. Necessary and Sufficient Conditions for Global Optimality. In Nonsmooth Optimization and Related Topics; Clarke, F.H., Dem’yanov, V.F., Giannessi, F., Eds.; Ettore Majorana International Science Series; Springer: Boston, MA, USA, 1989; Volume 43. [Google Scholar]
- Horst, R.; Thoai, N.V. DC programming: Overview. J. Optim. Theory Appl. 1999, 103, 1–43. [Google Scholar] [CrossRef]
- El Maghri, M. (ϵ-)Efficiency in difference vector optimization. J. Glob. Optim. 2015, 61, 803–812. [Google Scholar] [CrossRef]
- Dolgopolik, M.V. New global optimality conditions for nonsmooth DC optimization problems. J. Glob. Optim. 2020, 76, 25–55. [Google Scholar] [CrossRef]
- Laghdir, M. Optimality conditions in DC-constrained optimization. Acta Math. Vietnam 2005, 30, 169–179. [Google Scholar]
- Amahroq, T.; Penot, J.P.; Syam, A. On the subdifferentiability of the difference of two functions and local minimization. Set-Valued Anal. 2008, 16, 413–427. [Google Scholar] [CrossRef]
- Nitanda, A.; Suzuki, T. Stochastic difference of convex algorithm and its application to training deep Boltzmann machines. In Proceedings of the Artificial Intelligence and Statistics, Fort Lauderdale, FL, USA, 20–22 April 2017; pp. 470–478. [Google Scholar]
- Volle, M. Duality principles for optimization problems dealing with the difference of vector-valued convex mappings. J. Optim. Theory Appl. 2002, 114, 223–241. [Google Scholar] [CrossRef]
- Laghdir, M.; Benkenza, N.; Najeh, N. Duality in DC-constrained programming via duality in reverse convex programming. J. Nonlinear Convex Anal. 2004, 5, 275–284. [Google Scholar]
- Li, G.; Zhang, L.; Liu, Z. The stable duality of DC programs for composite convex functions. J. Ind. Manag. Optim. 2016, 13, 63–79. [Google Scholar] [CrossRef]
- Xu, Y.; Li, S. Optimality and Duality for DC Programming with DC Inequality and DC Equality Constraints. Mathematics 2022, 10, 601. [Google Scholar] [CrossRef]
- Xu, Y.; Li, S. Duality for minimization of the difference of two Φc-convex functions. J. Ind. Manag. Optim. 2023, 19, 5045–5059. [Google Scholar] [CrossRef]
- Sun, X.; Long, X.J.; Li, M. Some characterizations of duality for DC optimization with composite functions. Optimization 2017, 66, 1425–1443. [Google Scholar] [CrossRef]
- Martinez-Legaz, J.E.; Seeger, A. A formula on the approximate subdifferential of the difference of two convex functions. Bull. Austral. Math. Soc. 1992, 45, 37–42. [Google Scholar] [CrossRef]
- El Maghri, M. Pareto-Fenchel ϵ-subdifferential sum rule and ϵ-efficiency. Optim. Lett. 2012, 6, 763–781. [Google Scholar] [CrossRef]
- Théra, M. Calcul ϵ-sous-différentiel des applications convexes. C. R. Acad. Sci. Paris 1980, 290, 549–551. [Google Scholar]
- Pontryagin, L.S. Linear differential games II. Soviet Math. Dokl. 1967, 8, 910–912. [Google Scholar]
- El Maghri, M.; Laghdir, M. Pareto subdifferential calculus for convex vector mappings and applications to vector optimization. SIAM J. Optim. 2009, 19, 1970–1994. [Google Scholar] [CrossRef]
- Boţ, R.I.; Grad, S.M.; Wanka, G. Duality in Vector Optimization; Springer Science & Business Media: Berlin, Germany, 2009. [Google Scholar]
- Moustaid, M.B.; Rikouane, A.; Dali, I.; Laghdir, M. Sequential approximate weak optimality conditions for multiobjective fractional programming problems via sequential calculus rules for the Brøndsted-Rockafellar approximate subdifferential. Rend. Circ. Mat. Palermo, II. Ser. 2022, 71, 737–754. [Google Scholar] [CrossRef]
- Laghdir, M.; Rikouane, A. A Note on Approximate Subdifferential of Composed Convex Operator. Appl. Math. Sci. 2014, 8, 2513–2523. [Google Scholar]
- Song, D.; Tang, L.; Liu, C.; Wu, J.; Song, X. A Novel Operation Optimization Method Based on Mechanism Analytics for the Quality of Molten Steel in the BOF Steelmaking Process. IEEE Trans. Autom. Sci. Eng. 2022, 20, 218–232. [Google Scholar] [CrossRef]
- Yang, L.; Sun, Q.; Zhang, N.; Li, Y. Indirect multi-energy transactions of energy internet with deep reinforcement learning approach. IEEE Trans. Power Syst. 2022, 37, 4067–4077. [Google Scholar] [CrossRef]
- Lai, X.; Zhang, P.; Wang, Y.; Chen, L.; Wu, M. Continuous State Feedback Control Based on Intelligent Optimization for First-Order Nonholonomic Systems. IEEE Trans. Syst. Man Cyber. Syst. 2020, 50, 2534–2540. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).