Article

Optimality Conditions of the Approximate Efficiency for Nonsmooth Robust Multiobjective Fractional Semi-Infinite Optimization Problems

Liu Gao, Guolin Yu and Wenyan Han
1 School of Mathematics and Information Science, North Minzu University, Yinchuan 750021, China
2 School of Mathematics and Statistics, Ningxia University, Yinchuan 750021, China
* Author to whom correspondence should be addressed.
Axioms 2023, 12(7), 635; https://doi.org/10.3390/axioms12070635
Submission received: 30 May 2023 / Revised: 25 June 2023 / Accepted: 26 June 2023 / Published: 27 June 2023

Abstract

This paper is devoted to the investigation of optimality conditions and saddle point theorems for robust approximate quasi-weak efficient solutions of a nonsmooth uncertain multiobjective fractional semi-infinite optimization problem (NUMFP). Firstly, a necessary optimality condition is established by using the properties of the Gerstewitz’s function. Furthermore, a kind of approximate pseudo/quasi-convex function is defined for the problem (NUMFP), and under this assumption a sufficient optimality condition is obtained. Finally, we introduce the notion of a robust approximate quasi-weak saddle point for the problem (NUMFP) and prove the corresponding saddle point theorems.

1. Introduction

Recently, much attention has been paid to semi-infinite optimization problems; see [1,2,3]. In particular, multiobjective semi-infinite optimization refers to finding values of decision variables that optimize more than one objective subject to infinitely many constraints, and many interesting results have been presented in [4] and the references therein. Moreover, fractional optimization deals with objectives given as ratios of two functions and is widely used in information technology, resource allocation and engineering design; see [5,6,7,8]. It is worth noting that in many practical problems the objective or constraint functions of optimization models are nonsmooth and are affected by various kinds of uncertain information. Therefore, it is meaningful to investigate nonsmooth uncertain optimization problems; see [9,10,11]. Robust optimization [12,13] is one of the most powerful tools for dealing with optimization problems with data uncertainty. The aim of the robust optimization approach is to find a worst-case solution, which is immunized against the data uncertainty of the optimization problem. Furthermore, most solutions obtained by numerical algorithms are only approximate solutions. In these situations, the study of approximate solutions is significant from both the theoretical and the practical point of view. This paper investigates the properties of the problem (NUMFP) with respect to approximate quasi-weak efficient solutions via the robust approach.
Optimality conditions and saddle point theorems are two important topics in nonsmooth optimization. The subdifferential is a powerful tool for characterizing optimality conditions. For a nonsmooth multiobjective optimization problem, Caristi et al. [14] and Kabgani et al. [15] investigated optimality conditions of weakly efficient solutions by using the Michel–Penot subdifferential and the convexificator, respectively. Chuong [5] obtained optimality theorems for robust efficient solutions to a nonsmooth multiobjective fractional optimization problem based on the Mordukhovich subdifferential. It is worth mentioning that the Clarke subdifferential has attracted much attention because of its good properties [16]. Fakhar et al. [17] established optimality conditions and saddle point theorems of robust efficient solutions for a nonsmooth multiobjective optimization problem by utilizing the Clarke subdifferential. The purpose of this article is to examine optimality conditions and saddle point theorems of robust approximate quasi-weak efficient solutions for the problem (NUMFP) in terms of the Clarke subdifferential. Moreover, Lee et al. [18] employed a separation theorem to establish necessary conditions for approximate solutions, and Chen et al. [19] used a generalized alternative theorem to obtain necessary optimality conditions for weakly robust efficient solutions. It is worth noting that the Gerstewitz’s function is an important nonlinear scalarization function, which plays a significant role in solving optimization problems because of its good properties, such as convexity, positive homogeneity and continuity; see [20,21]. This paper uses the Gerstewitz’s function to derive a necessary optimality condition of robust approximate quasi-weak efficient solutions for the problem (NUMFP).
In addition, convexity and its generalizations play an important role in establishing sufficient optimality conditions for optimization problems. In this paper, we define a class of approximate (pseudo/quasi-) convex functions for the objective and constraint functions of the problem (NUMFP) and, under this assumption, establish a sufficient optimality condition and saddle point theorems for robust approximate quasi-weak efficient solutions.
This paper is organized as follows. Section 2 provides some basic concepts and lemmas, which will be used in the subsequent sections. In Section 3, we establish optimality conditions for robust approximate quasi-weak efficient solutions to a problem (NUMFP). In Section 4, we introduce the concept of a robust approximate quasi-weak saddle point to a problem (NUMFP) and prove corresponding saddle point theorems.

2. Preliminaries

Throughout this paper, $\mathbb{N}$ and $\mathbb{R}^n$ stand for the set of natural numbers and the $n$-dimensional Euclidean space, respectively. $B(\bar{x}, r)$ denotes the open ball with center $\bar{x} \in \mathbb{R}^n$ and radius $r > 0$; $\mathbb{B}$ represents the closed unit ball of $\mathbb{R}^n$. The inner product in $\mathbb{R}^n$ is denoted by $\langle x, y \rangle$ for any $x, y \in \mathbb{R}^n$. We set
$$\mathbb{R}^n_+ = \{ x = (x_1, \ldots, x_n) \in \mathbb{R}^n \mid x_i \geq 0,\ i = 1, \ldots, n \},$$
$$\mathbb{R}^n_{++} = \{ x = (x_1, \ldots, x_n) \in \mathbb{R}^n \mid x_i > 0,\ i = 1, \ldots, n \},$$
and utilize the following symbols to represent order relations in $\mathbb{R}^n$:
$$x < y \Longleftrightarrow y - x \in \mathbb{R}^n_{++},$$
$$x \leq y \Longleftrightarrow y - x \in \mathbb{R}^n_+.$$
Let $C \subseteq \mathbb{R}^n$ be a nonempty subset and let $\operatorname{int} C$, $\operatorname{cl} C$ and $\operatorname{co} C$ stand for the interior, the closure and the convex hull of $C$, respectively. The Clarke contingent cone and the normal cone to $C$ at a point $\bar{x} \in \mathbb{R}^n$ are defined, respectively, by the following (see [16]):
$$T(C, \bar{x}) = \{ y \in \mathbb{R}^n \mid \forall x_n \in C,\ x_n \to \bar{x},\ \forall t_n \downarrow 0,\ \exists y_n \to y \ \text{s.t.}\ x_n + t_n y_n \in C,\ \forall n \in \mathbb{N} \},$$
$$N(C, \bar{x}) = \{ \xi \in \mathbb{R}^n \mid \langle \xi, y \rangle \leq 0,\ \forall y \in T(C, \bar{x}) \}.$$
The conical convex hull (see [22]) of the set $C$ is defined as
$$\operatorname{pos}(C) = \Big\{ y \in \mathbb{R}^n \;\Big|\; \exists\, l \in \mathbb{N} \ \text{s.t.}\ y = \sum_{i=1}^{l} \lambda_i y_i,\ \lambda_i \geq 0,\ y_i \in C,\ i = 1, \ldots, l \Big\}.$$
Let $F: \mathbb{R}^n \to \mathbb{R}^m$. $F$ is said to be locally Lipschitz at $\bar{x} \in \mathbb{R}^n$ if there exist constants $L > 0$ and $r > 0$ such that
$$\| F(x_1) - F(x_2) \| \leq L \| x_1 - x_2 \|, \quad \forall x_1, x_2 \in B(\bar{x}, r).$$
If $F$ is locally Lipschitz at $x$ for every $x \in \mathbb{R}^n$, then $F$ is called a locally Lipschitz mapping. In particular, for a real-valued locally Lipschitz function $f: \mathbb{R}^n \to \mathbb{R}$, the Clarke generalized directional derivative of $f$ at $\bar{x} \in \mathbb{R}^n$ in the direction $d \in \mathbb{R}^n$ is given by (see [16])
$$f^{\circ}(\bar{x}; d) = \limsup_{y \to \bar{x},\, t \to 0^{+}} \frac{f(y + t d) - f(y)}{t},$$
and the Clarke subdifferential (see [16]) of $f$ at $\bar{x}$ is defined by
$$\partial f(\bar{x}) = \{ \xi \in \mathbb{R}^n \mid f^{\circ}(\bar{x}; d) \geq \langle \xi, d \rangle,\ \forall d \in \mathbb{R}^n \}.$$
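To make the definitions above more concrete, the following Python sketch (an illustration added here, not part of the original paper) numerically approximates the Clarke generalized directional derivative by sampling the difference quotient near $\bar{x}$; the sampling radius, number of samples and example function are arbitrary choices.

```python
import numpy as np

def clarke_directional_derivative(f, x_bar, d, radius=1e-4, t_min=1e-6, samples=200):
    """Crude numerical estimate of the Clarke generalized directional derivative
    f°(x_bar; d) = limsup_{y -> x_bar, t -> 0+} (f(y + t*d) - f(y)) / t.
    The limsup is approximated by maximizing the difference quotient over
    random points y near x_bar and random small step sizes t."""
    rng = np.random.default_rng(0)
    best = -np.inf
    for _ in range(samples):
        y = x_bar + radius * rng.uniform(-1.0, 1.0, size=np.shape(x_bar))
        t = rng.uniform(t_min, radius)
        best = max(best, (f(y + t * d) - f(y)) / t)
    return best

# Example: f(x) = |x| at x_bar = 0; the Clarke subdifferential is [-1, 1],
# so f°(0; 1) = 1 and f°(0; -1) = 1.
f = lambda x: abs(x)
print(clarke_directional_derivative(f, 0.0, 1.0))   # approx 1
print(clarke_directional_derivative(f, 0.0, -1.0))  # approx 1
```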
Let $C \subseteq \mathbb{R}^n$ be a nonempty subset; the indicator function of $C$ is defined as
$$\delta_C(x) = \begin{cases} 0, & x \in C, \\ +\infty, & x \notin C. \end{cases}$$
It is pointed out in [16] that
$$\partial \delta_C(\bar{x}) = N(C, \bar{x}), \quad \forall \bar{x} \in C.$$
The following lemmas characterize some properties of the Clarke subdifferential.
Lemma 1
([16]). Let $f, h: \mathbb{R}^n \to \mathbb{R}$ be locally Lipschitz at $\bar{x} \in \mathbb{R}^n$. Then, the following applies:
(i) $\partial f(\bar{x})$ is nonempty, compact and convex;
(ii) for any $t \in \mathbb{R}$, $\partial (t f)(\bar{x}) = t\, \partial f(\bar{x})$;
(iii) $\partial (f + h)(\bar{x}) \subseteq \partial f(\bar{x}) + \partial h(\bar{x})$;
(iv) if $f$ attains a local minimum at $\bar{x}$, then $0 \in \partial f(\bar{x})$.
Lemma 2
([16]). Let $f, h: \mathbb{R}^n \to \mathbb{R}$ be locally Lipschitz at $\bar{x} \in \mathbb{R}^n$, and $h(\bar{x}) \neq 0$. Then, $\frac{f}{h}$ is also locally Lipschitz at $\bar{x}$, and
$$\partial \Big( \frac{f}{h} \Big)(\bar{x}) \subseteq \frac{h(\bar{x})\, \partial f(\bar{x}) - f(\bar{x})\, \partial h(\bar{x})}{(h(\bar{x}))^2}.$$
Lemma 3
([16]). Let $F: \mathbb{R}^n \to \mathbb{R}^m$ be locally Lipschitz at $\bar{x} \in \mathbb{R}^n$ and $f: \mathbb{R}^m \to \mathbb{R}$ be locally Lipschitz at $F(\bar{x})$. Then,
$$\partial (f \circ F)(\bar{x}) \subseteq \operatorname{cl} \Big( \operatorname{co} \Big( \bigcup_{\Lambda \in \partial f(F(\bar{x}))} \partial (\Lambda F)(\bar{x}) \Big) \Big).$$
Next, we give a scalar function, which will play an essential role in the proof of optimality conditions in Section 3.
Definition 1
([20]). Let $C \subseteq \mathbb{R}^n$ be a pointed closed convex cone and $\bar{e} \in \operatorname{int} C$. The Gerstewitz’s function $\Psi_{\bar{e}}: \mathbb{R}^n \to \mathbb{R}$ is defined as
$$\Psi_{\bar{e}}(y) = \inf \{ t \in \mathbb{R} \mid y \in t \bar{e} - C \}, \quad y \in \mathbb{R}^n.$$
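As a brief illustration (not from the original paper): for the special choice $C = \mathbb{R}^l_+$ and $\bar{e} \in \operatorname{int} \mathbb{R}^l_+$, the infimum in Definition 1 can be computed in closed form, namely $\Psi_{\bar{e}}(y) = \max_{1 \leq i \leq l} y_i / \bar{e}_i$. A minimal Python sketch under this assumption:

```python
import numpy as np

def gerstewitz(y, e_bar):
    """Gerstewitz scalarization Psi_e(y) = inf{ t in R : y in t*e_bar - C }
    for the special cone C = R^l_+ and e_bar in int C (all components > 0).
    Then t*e_bar - y must lie in R^l_+, i.e. t >= y_i / e_bar_i for every i,
    so the infimum equals max_i y_i / e_bar_i."""
    y, e_bar = np.asarray(y, float), np.asarray(e_bar, float)
    return np.max(y / e_bar)

# With e_bar = (1, 1): Psi(y) = max(y_1, y_2); it is <= 0 exactly when y in -R^2_+
# and < 0 exactly when y in -int R^2_+, matching Lemma 4 (ii)-(iii) below.
print(gerstewitz([-1.0, -2.0], [1.0, 1.0]))  # -1.0
print(gerstewitz([ 0.5, -2.0], [1.0, 1.0]))  #  0.5
```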
Some properties of the Gerstewitz’s function are summarized in the following Lemma 4.
Lemma 4
([20]). Let $C \subseteq \mathbb{R}^n$ be a pointed closed convex cone and $\bar{e} \in \operatorname{int} C$. Then, the following applies:
(i) $\Psi_{\bar{e}}$ is continuous and locally Lipschitz;
(ii) $\Psi_{\bar{e}}(y) \leq t \Longleftrightarrow y \in t \bar{e} - C$;
(iii) $\Psi_{\bar{e}}(y) < t \Longleftrightarrow y \in t \bar{e} - \operatorname{int} C$;
(iv) $\partial \Psi_{\bar{e}}(y) = \{ \lambda \in C^{*} \mid \langle \lambda, y \rangle = \Psi_{\bar{e}}(y) \}$;
(v) $\partial \Psi_{\bar{e}}(y) \subseteq C^{*} \setminus \{0\}$,
where $C^{*} = \{ \lambda \in \mathbb{R}^n \mid \langle \lambda, y \rangle \geq 0,\ \forall y \in C \}$ represents the dual cone of $C$.
Lemma 5
([22]). Let $\{ C_i \mid i \in I \}$ be an arbitrary collection of nonempty convex sets in $\mathbb{R}^n$ and let $K$ be the convex cone generated by the union of the collection. Then, every nonzero vector of $K$ can be expressed as a nonnegative linear combination of $n$ or fewer linearly independent vectors, each belonging to a different $C_i$.
Let $T$ be a nonempty and arbitrary index set. Consider the following nonsmooth uncertain multiobjective fractional semi-infinite optimization problem (NUMFP):
$$(\mathrm{NUMFP}) \quad \min \ \frac{f(x)}{h(x)} = \Big( \frac{f_1(x)}{h_1(x)}, \ldots, \frac{f_l(x)}{h_l(x)} \Big), \quad \text{s.t.} \ g_t(x, v_t) \leq 0, \ t \in T,$$
where $v_t$, $t \in T$, are uncertain parameters taking values in the uncertainty sets $V_t \subseteq \mathbb{R}^p$, $t \in T$; $f_i, h_i: \mathbb{R}^n \to \mathbb{R}$ with $h_i(x) > 0$, $i = 1, \ldots, l$, and $g_t: \mathbb{R}^n \times V_t \to \mathbb{R}$ are locally Lipschitz functions; and the uncertainty map $V: T \rightrightarrows \mathbb{R}^p$ is defined as $V(t) := V_t$.
We consider the robust counterpart of the problem (NUMFP) as follows:
$$(\mathrm{NRMFP}) \quad \min \ \frac{f(x)}{h(x)} = \Big( \frac{f_1(x)}{h_1(x)}, \ldots, \frac{f_l(x)}{h_l(x)} \Big), \quad \text{s.t.} \ g_t(x, v_t) \leq 0, \ \forall v_t \in V_t, \ t \in T,$$
where the feasible set of the problem (NRMFP) is denoted by
$$\mathcal{F} := \{ x \in \mathbb{R}^n \mid g_t(x, v_t) \leq 0, \ \forall v_t \in V_t, \ t \in T \}.$$
Definition 2.
Let $\varepsilon = (\varepsilon_1, \ldots, \varepsilon_l) \in \mathbb{R}^l_+$. $\bar{x} \in \mathcal{F}$ is called a robust quasi-weak $\varepsilon$-efficient solution of the problem (NUMFP) if $\bar{x}$ is a quasi-weak $\varepsilon$-efficient solution of the problem (NRMFP); that is,
$$\frac{f(x)}{h(x)} - \frac{f(\bar{x})}{h(\bar{x})} + \varepsilon \| x - \bar{x} \| \notin -\mathbb{R}^l_{++}, \quad \forall x \in \mathcal{F}.$$
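Definition 2 and the robust feasible set $\mathcal{F}$ can be checked numerically for simple instances. The sketch below is purely illustrative and not part of the paper: the uncertainty sets are discretized by finite grids (an approximation of the semi-infinite constraint system), and the one-dimensional data at the end are hypothetical.

```python
import numpy as np

def robust_feasible(x, g, T_grid, V_grid):
    """x is robust feasible if g_t(x, v) <= 0 for every t and every v in V_t.
    T_grid and V_grid(t) are finite discretizations of T and V_t."""
    return all(g(t, x, v) <= 0 for t in T_grid for v in V_grid(t))

def is_quasi_weak_eps_efficient(x_bar, ratio, eps, feasible_points):
    """Definition 2: no feasible x may satisfy
    f(x)/h(x) - f(x_bar)/h(x_bar) + eps*||x - x_bar|| in -R^l_++,
    i.e. all components strictly negative."""
    r_bar = ratio(x_bar)
    for x in feasible_points:
        gap = ratio(x) - r_bar + eps * np.abs(x - x_bar)
        if np.all(gap < 0):
            return False   # x improves on x_bar beyond the eps*||x - x_bar|| tolerance
    return True

# Hypothetical one-dimensional instance (illustrative data only):
ratio = lambda x: np.array([x**2 + x, x / (x**2 + 1.0)])
g = lambda t, x, v: t * x**2 - v * x
T_grid = np.linspace(0.0, 1.0, 11)
V_grid = lambda t: np.linspace(2.0 - t, 2.0 + t, 5)
grid = [x for x in np.linspace(-1.0, 1.0, 201) if robust_feasible(x, g, T_grid, V_grid)]
print(is_quasi_weak_eps_efficient(0.0, ratio, np.array([0.5, 0.5]), grid))
```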
Remark 1.
When $\varepsilon = 0$, the quasi-weak $\varepsilon$-efficient solution reduces to the weakly efficient solution considered in [8].

3. Optimality Conditions

In this section, we establish necessary and sufficient optimality conditions for quasi-weak $\varepsilon$-efficient solutions to the problem (NRMFP). We begin with the following constraint qualification.
Definition 3
([23]). For $x \in \mathbb{R}^n$, let $\Pi(x) := \{ (t, v) \in \operatorname{gph} V \mid g_t(x, v) = 0 \}$ and
$$\Delta(x) := \bigcup_{(t, v) \in \Pi(x)} \partial_x g_t(x, v),$$
where $\operatorname{gph} V := \{ (t, v) \in T \times \mathbb{R}^p \mid v \in V(t) \}$.
Definition 4
([16]). Let $\bar{x} \in \mathcal{F}$. The Basic Constraint Qualification (BCQ) holds at $\bar{x}$ if $N(\mathcal{F}, \bar{x}) \subseteq \operatorname{pos}(\Delta(\bar{x}))$.
Next, we present a necessary optimality condition for quasi-weak $\varepsilon$-efficient solutions to the problem (NRMFP) by using the properties of the Gerstewitz’s function.
Theorem 1.
In the problem (NRMFP), let $\varepsilon = (\varepsilon_1, \ldots, \varepsilon_l) \in \mathbb{R}^l_+$. If $\bar{x} \in \mathcal{F}$ is a quasi-weak $\varepsilon$-efficient solution of the problem (NRMFP) and the BCQ holds at $\bar{x}$, then there exist $\bar{\alpha} = (\bar{\alpha}_1, \ldots, \bar{\alpha}_l) \in \mathbb{R}^l_+ \setminus \{0\}$, $\bar{\lambda}_i = \frac{\bar{\alpha}_i}{h_i(\bar{x})}$, $i = 1, \ldots, l$, $\bar{\mu} = (\bar{\mu}_1, \ldots, \bar{\mu}_n) \in \mathbb{R}^n_+$ and $(\bar{t}_j, \bar{v}_j) \in \operatorname{gph} V$, $j = 1, \ldots, n$, such that
$$0 \in \sum_{i=1}^{l} \bar{\lambda}_i \Big( \partial f_i(\bar{x}) - \frac{f_i(\bar{x})}{h_i(\bar{x})} \partial h_i(\bar{x}) \Big) + \sum_{j=1}^{n} \bar{\mu}_j \partial_x g_{\bar{t}_j}(\bar{x}, \bar{v}_j) + \sum_{i=1}^{l} \bar{\alpha}_i \varepsilon_i \mathbb{B}, \tag{1}$$
$$\bar{\mu}_j g_{\bar{t}_j}(\bar{x}, \bar{v}_j) = 0, \quad j = 1, \ldots, n. \tag{2}$$
Proof. 
Since $\bar{x}$ is a quasi-weak $\varepsilon$-efficient solution of the problem (NRMFP), we obtain
$$\frac{f(x)}{h(x)} - \frac{f(\bar{x})}{h(\bar{x})} + \varepsilon \| x - \bar{x} \| \notin -\mathbb{R}^l_{++}, \quad \forall x \in \mathcal{F}.$$
Let $H(x) = \frac{f(x)}{h(x)} - \frac{f(\bar{x})}{h(\bar{x})} + \varepsilon \| x - \bar{x} \|$ and consider the Gerstewitz’s function with $C = \mathbb{R}^l_+$ and $\bar{e} \in \operatorname{int} \mathbb{R}^l_+$. By Lemma 4 (iii), we obtain
$$\Psi_{\bar{e}}(H(x)) \geq 0, \quad \forall x \in \mathcal{F}.$$
As $\bar{x} \in \mathcal{F}$, thus,
$$\Psi_{\bar{e}}(H(\bar{x})) \geq 0. \tag{3}$$
Note that $H(\bar{x}) = 0$, so $H(\bar{x}) \in -\mathbb{R}^l_+$, and it follows from Lemma 4 (ii) that
$$\Psi_{\bar{e}}(H(\bar{x})) \leq 0;$$
together with (3), we then have
$$\Psi_{\bar{e}}(H(\bar{x})) = 0.$$
Therefore,
$$\Psi_{\bar{e}}(H(x)) \geq \Psi_{\bar{e}}(H(\bar{x})) = 0, \quad \forall x \in \mathcal{F},$$
which means that $\bar{x}$ is a minimizer of $\Psi_{\bar{e}} \circ H$ on $\mathcal{F}$, and hence $\bar{x}$ is also a local minimizer of $\Psi_{\bar{e}} \circ H(x) + \delta_{\mathcal{F}}(x)$ on $\mathbb{R}^n$. By Lemma 1 (iii) and (iv), we obtain
$$0 \in \partial (\Psi_{\bar{e}} \circ H + \delta_{\mathcal{F}})(\bar{x}) \subseteq \partial (\Psi_{\bar{e}} \circ H)(\bar{x}) + \partial \delta_{\mathcal{F}}(\bar{x}). \tag{4}$$
Due to
$$\partial \delta_{\mathcal{F}}(\bar{x}) = N(\mathcal{F}; \bar{x}),$$
combined with (4), we arrive at
$$0 \in \partial (\Psi_{\bar{e}} \circ H)(\bar{x}) + N(\mathcal{F}; \bar{x}).$$
Since $H$ is locally Lipschitz at $\bar{x}$ and $\Psi_{\bar{e}}$ is locally Lipschitz at $H(\bar{x})$, it follows from Lemma 3 that
$$\partial (\Psi_{\bar{e}} \circ H)(\bar{x}) \subseteq \operatorname{cl} \Big( \operatorname{co} \Big( \bigcup_{\Lambda \in \partial \Psi_{\bar{e}}(H(\bar{x}))} \partial (\Lambda H)(\bar{x}) \Big) \Big).$$
According to Lemma 4 (v), there exists $\bar{\alpha} = (\bar{\alpha}_1, \ldots, \bar{\alpha}_l) \in \partial \Psi_{\bar{e}}(H(\bar{x})) \subseteq \mathbb{R}^l_+ \setminus \{0\}$ such that
$$0 \in \operatorname{cl} \big( \operatorname{co} \big( \partial (\bar{\alpha} H)(\bar{x}) \big) \big) + N(\mathcal{F}; \bar{x}) = \operatorname{cl} \Big( \operatorname{co} \Big( \partial \Big( \bar{\alpha} \Big( \frac{f(\cdot)}{h(\cdot)} - \frac{f(\bar{x})}{h(\bar{x})} + \varepsilon \| \cdot - \bar{x} \| \Big) \Big)(\bar{x}) \Big) \Big) + N(\mathcal{F}; \bar{x}) \subseteq \operatorname{cl} \Big( \operatorname{co} \Big( \partial \Big( \bar{\alpha} \frac{f}{h} \Big)(\bar{x}) + \langle \bar{\alpha}, \varepsilon \rangle \mathbb{B} \Big) \Big) + N(\mathcal{F}; \bar{x}).$$
From Lemma 1 (i), the set on the right-hand side is already closed and convex, which leads to
$$0 \in \partial \Big( \bar{\alpha} \frac{f}{h} \Big)(\bar{x}) + \langle \bar{\alpha}, \varepsilon \rangle \mathbb{B} + N(\mathcal{F}; \bar{x}) = \sum_{i=1}^{l} \bar{\alpha}_i \partial \Big( \frac{f_i}{h_i} \Big)(\bar{x}) + \sum_{i=1}^{l} \bar{\alpha}_i \varepsilon_i \mathbb{B} + N(\mathcal{F}; \bar{x}).$$
According to Lemma 2, we obtain
$$0 \in \sum_{i=1}^{l} \frac{\bar{\alpha}_i}{h_i(\bar{x})} \Big( \partial f_i(\bar{x}) - \frac{f_i(\bar{x})}{h_i(\bar{x})} \partial h_i(\bar{x}) \Big) + \sum_{i=1}^{l} \bar{\alpha}_i \varepsilon_i \mathbb{B} + N(\mathcal{F}; \bar{x}).$$
Since the BCQ holds at $\bar{x}$, we have $N(\mathcal{F}; \bar{x}) \subseteq \operatorname{pos}(\Delta(\bar{x}))$, and hence
$$0 \in \sum_{i=1}^{l} \frac{\bar{\alpha}_i}{h_i(\bar{x})} \Big( \partial f_i(\bar{x}) - \frac{f_i(\bar{x})}{h_i(\bar{x})} \partial h_i(\bar{x}) \Big) + \sum_{i=1}^{l} \bar{\alpha}_i \varepsilon_i \mathbb{B} + \operatorname{pos}(\Delta(\bar{x})).$$
By Lemma 5, there exist $p \leq n$, $\bar{\mu} = (\bar{\mu}_1, \ldots, \bar{\mu}_p) \in \mathbb{R}^p_+$ and $(\bar{t}_j, \bar{v}_j) \in \Pi(\bar{x})$, $j = 1, \ldots, p$, such that
$$0 \in \sum_{i=1}^{l} \frac{\bar{\alpha}_i}{h_i(\bar{x})} \Big( \partial f_i(\bar{x}) - \frac{f_i(\bar{x})}{h_i(\bar{x})} \partial h_i(\bar{x}) \Big) + \sum_{j=1}^{p} \bar{\mu}_j \partial_x g_{\bar{t}_j}(\bar{x}, \bar{v}_j) + \sum_{i=1}^{l} \bar{\alpha}_i \varepsilon_i \mathbb{B}. \tag{5}$$
Let $\bar{\lambda}_i = \frac{\bar{\alpha}_i}{h_i(\bar{x})}$. If $p = n$, then (1) and (2) hold due to $(\bar{t}_j, \bar{v}_j) \in \Pi(\bar{x})$, $j = 1, \ldots, p$. When $p < n$, we take the multipliers $\bar{\mu}_{p+1} = \cdots = \bar{\mu}_n = 0$ in (5) to obtain the desired result. This completes the proof. □
Remark 2.
In [18,19], necessary optimality conditions were obtained by utilizing a separation theorem and an alternative theorem, respectively. Differently from [18,19], the necessary optimality condition of Theorem 1 above is proved directly by using the properties of the Gerstewitz’s function. Moreover, if $\varepsilon = 0$ and $T$ is a finite index set, then Theorem 1 of this paper reduces to Theorem 1 in [8].
Before we establish a sufficient optimality condition for quasi-weak $\varepsilon$-efficient solutions of the problem (NRMFP), we introduce the following two kinds of generalized convexity for the objective and constraint functions of the problem (NRMFP).
Definition 5.
In the problem (NRMFP), $(f, h, g)$ is called an approximate convex function at $\bar{x} \in \mathcal{F}$ if for any $x \in \mathcal{F}$, $\xi_i \in \partial f_i(\bar{x})$, $\eta_i \in \partial h_i(\bar{x})$, $i = 1, \ldots, l$, $(t_j, v_j) \in \operatorname{gph} V$, and $\delta_j \in \partial_x g_{t_j}(\bar{x}, v_j)$, $j = 1, \ldots, n$, one has
$$\frac{f_i(x)}{h_i(x)} - \frac{f_i(\bar{x})}{h_i(\bar{x})} \geq \frac{1}{h_i(\bar{x})} \Big( \langle \xi_i, x - \bar{x} \rangle - \frac{f_i(\bar{x})}{h_i(\bar{x})} \langle \eta_i, x - \bar{x} \rangle \Big), \quad i = 1, \ldots, l,$$
$$g_{t_j}(x, v_j) - g_{t_j}(\bar{x}, v_j) \geq \langle \delta_j, x - \bar{x} \rangle, \quad j = 1, \ldots, n.$$
Here is an example of an approximate convex function.
Example 1.
In the problem (NRMFP), let $f_i, h_i: \mathbb{R} \to \mathbb{R}$, $i = 1, 2$, $g: \mathbb{R} \times V_t \to \mathbb{R}$, $t \in T = [0, 1]$, $v \in V_t = [2 - t, 2 + t]$, $(t, v) \in \operatorname{gph} V$, and
$$\frac{f(x)}{h(x)} = \Big( \frac{f_1(x)}{h_1(x)}, \frac{f_2(x)}{h_2(x)} \Big) = \Big( \frac{x^2 + 2x}{3}, \frac{2x^2}{x^2 + 1} \Big),$$
$$g_t(x, v) = 2 t x^2 - v x.$$
By a simple calculation, we obtain $\mathcal{F} = [0, \frac{1}{2}]$. Let $\bar{x} = 0 \in \mathcal{F}$; then we have $\partial f_1(\bar{x}) = \{2\}$, $\partial f_2(\bar{x}) = \{0\}$, $\partial h_1(\bar{x}) = \{0\}$, $\partial h_2(\bar{x}) = \{0\}$, and $\partial_x g_t(\bar{x}, v) = \{-v\}$. For any $x \in \mathcal{F}$, $\xi_i \in \partial f_i(\bar{x})$, $\eta_i \in \partial h_i(\bar{x})$, $i = 1, 2$, and $\delta \in \partial_x g_t(\bar{x}, v)$, since
$$\frac{f_1(x)}{h_1(x)} - \frac{f_1(\bar{x})}{h_1(\bar{x})} = \frac{x^2 + 2x}{3} \geq \frac{2x}{3} = \frac{1}{h_1(\bar{x})} \Big( \langle \xi_1, x - \bar{x} \rangle - \frac{f_1(\bar{x})}{h_1(\bar{x})} \langle \eta_1, x - \bar{x} \rangle \Big),$$
$$\frac{f_2(x)}{h_2(x)} - \frac{f_2(\bar{x})}{h_2(\bar{x})} = \frac{2x^2}{x^2 + 1} \geq 0 = \frac{1}{h_2(\bar{x})} \Big( \langle \xi_2, x - \bar{x} \rangle - \frac{f_2(\bar{x})}{h_2(\bar{x})} \langle \eta_2, x - \bar{x} \rangle \Big),$$
$$g_t(x, v) - g_t(\bar{x}, v) = 2 t x^2 - v x \geq -v x = \langle \delta, x - \bar{x} \rangle,$$
$(f, h, g)$ is an approximate convex function at $\bar{x} = 0$.
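A quick numerical spot-check of the three inequalities in Example 1, assuming the data as reconstructed above ($f_1(x) = x^2 + 2x$, $h_1(x) = 3$, $f_2(x) = 2x^2$, $h_2(x) = x^2 + 1$, $g_t(x, v) = 2tx^2 - vx$); the grids and tolerance are arbitrary choices for illustration:

```python
import numpy as np

# Data of Example 1 at x_bar = 0 (as reconstructed above):
x_bar = 0.0
xi = (2.0, 0.0)          # xi_i  in partial f_i(0)
eta = (0.0, 0.0)         # eta_i in partial h_i(0)
ratio = (lambda x: (x**2 + 2*x) / 3.0, lambda x: 2*x**2 / (x**2 + 1.0))
h_bar, r_bar = (3.0, 1.0), (0.0, 0.0)   # h_i(0) and f_i(0)/h_i(0)

ok = True
for x in np.linspace(0.0, 0.5, 51):          # feasible set F = [0, 1/2]
    for i in range(2):
        lhs = ratio[i](x) - r_bar[i]
        rhs = (xi[i] * (x - x_bar) - r_bar[i] * eta[i] * (x - x_bar)) / h_bar[i]
        ok &= lhs >= rhs - 1e-12
    for t in np.linspace(0.0, 1.0, 11):
        for v in np.linspace(2.0 - t, 2.0 + t, 5):
            g = lambda y: 2*t*y**2 - v*y
            ok &= g(x) - g(x_bar) >= -v * (x - x_bar) - 1e-12  # delta = -v
print(ok)  # True: the approximate convexity inequalities hold on F
```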
Definition 6.
In the problem (NRMFP), let $\varepsilon = (\varepsilon_1, \ldots, \varepsilon_l) \in \mathbb{R}^l_+$. $(f, h, g)$ is called an approximate pseudo/quasi-convex function at $\bar{x} \in \mathcal{F}$ if for any $x \in \mathcal{F}$, $\xi_i \in \partial f_i(\bar{x})$, $\eta_i \in \partial h_i(\bar{x})$, $i = 1, \ldots, l$, $(t_j, v_j) \in \operatorname{gph} V$, and $\delta_j \in \partial_x g_{t_j}(\bar{x}, v_j)$, $j = 1, \ldots, n$, one has
$$\frac{1}{h_i(\bar{x})} \Big( \langle \xi_i, x - \bar{x} \rangle - \frac{f_i(\bar{x})}{h_i(\bar{x})} \langle \eta_i, x - \bar{x} \rangle \Big) + \varepsilon_i \| x - \bar{x} \| \geq 0 \ \Longrightarrow \ \frac{f_i(x)}{h_i(x)} - \frac{f_i(\bar{x})}{h_i(\bar{x})} + \varepsilon_i \| x - \bar{x} \| \geq 0, \quad i = 1, \ldots, l,$$
$$g_{t_j}(x, v_j) - g_{t_j}(\bar{x}, v_j) \leq 0 \ \Longrightarrow \ \langle \delta_j, x - \bar{x} \rangle \leq 0, \quad j = 1, \ldots, n.$$
Remark 3.
Clearly, if $(f, h, g)$ is an approximate convex function at $\bar{x} \in \mathcal{F}$, then $(f, h, g)$ is an approximate pseudo/quasi-convex function at $\bar{x} \in \mathcal{F}$; the converse is not true (see the following Example 2).
Example 2.
In the problem (NRMFP), let $f_i, h_i: \mathbb{R} \to \mathbb{R}$, $i = 1, 2$,
$$\frac{f(x)}{h(x)} = \Big( \frac{f_1(x)}{h_1(x)}, \frac{f_2(x)}{h_2(x)} \Big),$$
where
$$f_1(x) = x^4 + x, \quad h_1(x) = 2, \quad f_2(x) = -x^2, \quad h_2(x) = x^4 + 1,$$
$g: \mathbb{R} \times V_t \to \mathbb{R}$, $t \in T = [1, 3]$, $v \in V_t = [1, 3 + t]$, $(t, v) \in \operatorname{gph} V$,
$$g_t(x, v) = -x^2 - 2 v t.$$
By a simple calculation, we obtain $\mathcal{F} = \mathbb{R}$. Taking $\bar{x} = 0 \in \mathcal{F}$ and $\varepsilon = (1, 2)$, we obtain $\partial f_1(\bar{x}) = \{1\}$, $\partial f_2(\bar{x}) = \{0\}$, $\partial h_1(\bar{x}) = \{0\}$, $\partial h_2(\bar{x}) = \{0\}$ and $\partial_x g_t(\bar{x}, v) = \{0\}$. For any $x \in \mathcal{F}$, $\xi_i \in \partial f_i(\bar{x})$, $\eta_i \in \partial h_i(\bar{x})$, $i = 1, 2$, and $\delta \in \partial_x g_t(\bar{x}, v)$, since
$$\frac{1}{h_1(\bar{x})} \Big( \langle \xi_1, x - \bar{x} \rangle - \frac{f_1(\bar{x})}{h_1(\bar{x})} \langle \eta_1, x - \bar{x} \rangle \Big) + \varepsilon_1 \| x - \bar{x} \| = \frac{x}{2} + |x| \geq 0 \ \Longrightarrow \ \frac{f_1(x)}{h_1(x)} - \frac{f_1(\bar{x})}{h_1(\bar{x})} + \varepsilon_1 \| x - \bar{x} \| = \frac{x^4 + x}{2} + |x| \geq 0,$$
$$\frac{1}{h_2(\bar{x})} \Big( \langle \xi_2, x - \bar{x} \rangle - \frac{f_2(\bar{x})}{h_2(\bar{x})} \langle \eta_2, x - \bar{x} \rangle \Big) + \varepsilon_2 \| x - \bar{x} \| = 2|x| \geq 0 \ \Longrightarrow \ \frac{f_2(x)}{h_2(x)} - \frac{f_2(\bar{x})}{h_2(\bar{x})} + \varepsilon_2 \| x - \bar{x} \| = -\frac{x^2}{x^4 + 1} + 2|x| \geq 0,$$
$$g_t(x, v) - g_t(\bar{x}, v) = -x^2 \leq 0 \ \Longrightarrow \ \langle \delta, x - \bar{x} \rangle = 0 \leq 0,$$
$(f, h, g)$ is an approximate pseudo/quasi-convex function at $\bar{x} = 0$. However, for $x \in \mathcal{F} \setminus \{0\}$ and $\delta \in \partial_x g_t(\bar{x}, v)$, one has
$$g_t(x, v) - g_t(\bar{x}, v) = -x^2 < 0 = \langle \delta, x - \bar{x} \rangle.$$
Therefore, $(f, h, g)$ is not an approximate convex function at $\bar{x} = 0$.
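Similarly, the implications of Definition 6 in Example 2 can be spot-checked numerically, assuming the data as reconstructed above ($f_1(x) = x^4 + x$, $h_1(x) = 2$, $f_2(x) = -x^2$, $h_2(x) = x^4 + 1$, $g_t(x, v) = -x^2 - 2vt$, $\varepsilon = (1, 2)$); the grid is an arbitrary choice:

```python
import numpy as np

# Example 2 data at x_bar = 0 (as reconstructed above), eps = (1, 2).
r1 = lambda x: (x**4 + x) / 2.0          # f1/h1
r2 = lambda x: -x**2 / (x**4 + 1.0)      # f2/h2
xi, eta, h_bar, r_bar, eps = (1.0, 0.0), (0.0, 0.0), (2.0, 1.0), (0.0, 0.0), (1.0, 2.0)

pseudo_ok, convex_fails = True, False
for x in np.linspace(-3.0, 3.0, 601):
    for i, r in enumerate((r1, r2)):
        lhs = (xi[i]*x - r_bar[i]*eta[i]*x) / h_bar[i] + eps[i]*abs(x)
        rhs = r(x) - r_bar[i] + eps[i]*abs(x)
        if lhs >= 0:                      # pseudo-convexity implication for f_i/h_i
            pseudo_ok &= rhs >= -1e-12
    # quasi-convexity for g: g(x)-g(0) = -x^2 <= 0 must imply <delta, x> = 0 <= 0 (it does);
    # approximate convexity would instead require -x^2 >= 0, which fails for x != 0.
    if x != 0.0:
        convex_fails |= (-x**2 < 0.0)
print(pseudo_ok, convex_fails)  # True True
```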
The following Theorem 2 presents a sufficient optimality condition for quasi-weak $\varepsilon$-efficient solutions to the problem (NRMFP).
Theorem 2.
In the problem (NRMFP), let $\varepsilon = (\varepsilon_1, \ldots, \varepsilon_l) \in \mathbb{R}^l_+$ and let $(f, h, g)$ be an approximate pseudo/quasi-convex function at $\bar{x} \in \mathcal{F}$. If there exist multipliers $\bar{\alpha} = (\bar{\alpha}_1, \ldots, \bar{\alpha}_l) \in \mathbb{R}^l_+ \setminus \{0\}$, $\bar{\mu} = (\bar{\mu}_1, \ldots, \bar{\mu}_n) \in \mathbb{R}^n_+$ and $(\bar{t}_j, \bar{v}_j) \in \operatorname{gph} V$, $j = 1, \ldots, n$, such that (1) and (2) hold, then $\bar{x}$ is a quasi-weak $\varepsilon$-efficient solution of the problem (NRMFP).
Proof. 
It follows from (1) that there exist $\bar{\xi}_i \in \partial f_i(\bar{x})$, $\bar{\eta}_i \in \partial h_i(\bar{x})$, $\bar{\delta}_j \in \partial_x g_{\bar{t}_j}(\bar{x}, \bar{v}_j)$ and $\bar{b}_i \in \mathbb{B}$ such that
$$0 = \sum_{i=1}^{l} \bar{\lambda}_i \Big( \bar{\xi}_i - \frac{f_i(\bar{x})}{h_i(\bar{x})} \bar{\eta}_i \Big) + \sum_{j=1}^{n} \bar{\mu}_j \bar{\delta}_j + \sum_{i=1}^{l} \bar{\alpha}_i \varepsilon_i \bar{b}_i.$$
Since $\bar{b}_i \in \mathbb{B}$, for any $x \in \mathcal{F}$ we have $\langle \bar{b}_i, x - \bar{x} \rangle \leq \| x - \bar{x} \|$, and therefore
$$\Big\langle \sum_{i=1}^{l} \bar{\lambda}_i \Big( \bar{\xi}_i - \frac{f_i(\bar{x})}{h_i(\bar{x})} \bar{\eta}_i \Big) + \sum_{j=1}^{n} \bar{\mu}_j \bar{\delta}_j, \, x - \bar{x} \Big\rangle + \sum_{i=1}^{l} \bar{\alpha}_i \varepsilon_i \| x - \bar{x} \| \geq 0. \tag{6}$$
This is equivalent to
$$\Big\langle \sum_{i=1}^{l} \bar{\lambda}_i \Big( \bar{\xi}_i - \frac{f_i(\bar{x})}{h_i(\bar{x})} \bar{\eta}_i \Big), \, x - \bar{x} \Big\rangle + \sum_{i=1}^{l} \bar{\alpha}_i \varepsilon_i \| x - \bar{x} \| \geq - \sum_{j=1}^{n} \bar{\mu}_j \langle \bar{\delta}_j, x - \bar{x} \rangle.$$
When $\bar{\mu}_j = 0$, $\sum_{j=1}^{n} \bar{\mu}_j \langle \bar{\delta}_j, x - \bar{x} \rangle = 0$. If $\bar{\mu}_j \neq 0$, from (2) we derive $g_{\bar{t}_j}(\bar{x}, \bar{v}_j) = 0$, and hence, for every $x \in \mathcal{F}$,
$$g_{\bar{t}_j}(x, \bar{v}_j) \leq g_{\bar{t}_j}(\bar{x}, \bar{v}_j) = 0.$$
Since $(f, h, g)$ is an approximate pseudo/quasi-convex function at $\bar{x}$, for $\bar{\delta}_j \in \partial_x g_{\bar{t}_j}(\bar{x}, \bar{v}_j)$ we have
$$\langle \bar{\delta}_j, x - \bar{x} \rangle \leq 0. \tag{7}$$
Combining (6) and (7), we obtain
$$\Big\langle \sum_{i=1}^{l} \bar{\lambda}_i \Big( \bar{\xi}_i - \frac{f_i(\bar{x})}{h_i(\bar{x})} \bar{\eta}_i \Big), \, x - \bar{x} \Big\rangle + \sum_{i=1}^{l} \bar{\alpha}_i \varepsilon_i \| x - \bar{x} \| \geq 0. \tag{8}$$
By contradiction, suppose that $\bar{x}$ is not a quasi-weak $\varepsilon$-efficient solution of the problem (NRMFP). Then, there exists $\hat{x} \in \mathcal{F}$ such that
$$\frac{f(\hat{x})}{h(\hat{x})} - \frac{f(\bar{x})}{h(\bar{x})} + \varepsilon \| \hat{x} - \bar{x} \| \in -\mathbb{R}^l_{++},$$
which implies that
$$\frac{f_i(\hat{x})}{h_i(\hat{x})} - \frac{f_i(\bar{x})}{h_i(\bar{x})} + \varepsilon_i \| \hat{x} - \bar{x} \| < 0, \quad i = 1, \ldots, l.$$
Since $(f, h, g)$ is approximate pseudo/quasi-convex at $\bar{x}$, for $\bar{\xi}_i \in \partial f_i(\bar{x})$ and $\bar{\eta}_i \in \partial h_i(\bar{x})$, $i = 1, \ldots, l$, we obtain
$$\frac{1}{h_i(\bar{x})} \Big( \langle \bar{\xi}_i, \hat{x} - \bar{x} \rangle - \frac{f_i(\bar{x})}{h_i(\bar{x})} \langle \bar{\eta}_i, \hat{x} - \bar{x} \rangle \Big) + \varepsilon_i \| \hat{x} - \bar{x} \| < 0, \quad i = 1, \ldots, l.$$
Noticing that $\bar{\alpha} \in \mathbb{R}^l_+ \setminus \{0\}$ and $\bar{\lambda}_i = \frac{\bar{\alpha}_i}{h_i(\bar{x})}$, $i = 1, \ldots, l$, we arrive at
$$\Big\langle \sum_{i=1}^{l} \bar{\lambda}_i \Big( \bar{\xi}_i - \frac{f_i(\bar{x})}{h_i(\bar{x})} \bar{\eta}_i \Big), \, \hat{x} - \bar{x} \Big\rangle + \sum_{i=1}^{l} \bar{\alpha}_i \varepsilon_i \| \hat{x} - \bar{x} \| < 0,$$
which contradicts (8). Hence, $\bar{x}$ is a quasi-weak $\varepsilon$-efficient solution to the problem (NRMFP). □

4. Saddle Point Theorems

In this section, we establish saddle point theorems of quasi-weak $\varepsilon$-efficiency. We first give the definition of a quasi $\varepsilon$-weak saddle point for the problem (NRMFP).
Let $\bar{x} \in \mathcal{F}$, $\bar{\alpha} = (\bar{\alpha}_1, \ldots, \bar{\alpha}_l) \in \mathbb{R}^l_+ \setminus \{0\}$, $e = (1, \ldots, 1) \in \mathbb{R}^l_+$, $\mu = (\mu_1, \ldots, \mu_n) \in \mathbb{R}^n_+$ and $\nu = ((t_1, v_1), \ldots, (t_n, v_n))$ with $(t_j, v_j) \in \operatorname{gph} V$, $j = 1, \ldots, n$. The Lagrangian function of the problem (NRMFP) is defined as
$$L(x, \bar{\alpha}, \mu, \nu) = \big( L_1(x, \bar{\alpha}, \mu, \nu), \ldots, L_l(x, \bar{\alpha}, \mu, \nu) \big),$$
where
$$L(x, \bar{\alpha}, \mu, \nu) = \bar{\alpha} \Big( \frac{f(x)}{h(x)} - \frac{f(\bar{x})}{h(\bar{x})} \Big) + \frac{1}{l} \sum_{j=1}^{n} \mu_j g_{t_j}(x, v_j)\, e, \tag{9}$$
$$L_i(x, \bar{\alpha}, \mu, \nu) = \bar{\alpha}_i \Big( \frac{f_i(x)}{h_i(x)} - \frac{f_i(\bar{x})}{h_i(\bar{x})} \Big) + \frac{1}{l} \sum_{j=1}^{n} \mu_j g_{t_j}(x, v_j), \quad i = 1, \ldots, l.$$
Definition 7.
In the problem (NRMFP), let $\varepsilon = (\varepsilon_1, \ldots, \varepsilon_l) \in \mathbb{R}^l_+$. $(\bar{x}, \bar{\alpha}, \bar{\mu}, \bar{\nu}) \in \mathcal{F} \times (\mathbb{R}^l_+ \setminus \{0\}) \times \mathbb{R}^n_+ \times \operatorname{gph} V$ is said to be a quasi $\varepsilon$-weak saddle point if
$$L(x, \bar{\alpha}, \bar{\mu}, \bar{\nu}) + \varepsilon \| x - \bar{x} \| - L(\bar{x}, \bar{\alpha}, \bar{\mu}, \bar{\nu}) \notin -\mathbb{R}^l_{++}, \quad \forall x \in \mathcal{F}, \tag{10}$$
$$L(\bar{x}, \bar{\alpha}, \mu, \nu) - \varepsilon \| \mu - \bar{\mu} \| \leq L(\bar{x}, \bar{\alpha}, \bar{\mu}, \bar{\nu}), \quad \forall \mu \in \mathbb{R}^n_+, \ \nu \in \operatorname{gph} V. \tag{11}$$
The following example is presented to illustrate Definition 7.
Example 3.
In the problem (NRMFP), let $f_i, h_i: \mathbb{R} \to \mathbb{R}$, $i = 1, 2$, $g: \mathbb{R} \times V_t \to \mathbb{R}$, $t \in T = [0, 1)$, and $v \in V_t = [2 - t, 2 + t]$. Define
$$\frac{f(x)}{h(x)} = \Big( \frac{f_1(x)}{h_1(x)}, \frac{f_2(x)}{h_2(x)} \Big) = \Big( \frac{x^2}{2}, \frac{2x}{x + 1} \Big),$$
$$g_t(x, v) = t x - v x.$$
Via calculation, we obtain $\mathcal{F} = [0, +\infty)$. For any $(x, \bar{\alpha}, \mu, \nu) \in \mathcal{F} \times (\mathbb{R}^2_+ \setminus \{0\}) \times \mathbb{R}_+ \times \operatorname{gph} V$, the Lagrangian function of the problem (NRMFP) is
$$L(x, \bar{\alpha}, \mu, \nu) = \Big( \bar{\alpha}_1 \Big( \frac{x^2}{2} - \frac{\bar{x}^2}{2} \Big) + \frac{1}{2} \mu (t x - v x), \ \bar{\alpha}_2 \Big( \frac{2x}{x + 1} - \frac{2\bar{x}}{\bar{x} + 1} \Big) + \frac{1}{2} \mu (t x - v x) \Big).$$
Let $\bar{x} = 0 \in \mathcal{F}$, $\varepsilon = (\varepsilon_1, \varepsilon_2) = (1, 1)$, $\bar{\alpha} = (\bar{\alpha}_1, \bar{\alpha}_2) = (\tfrac{1}{2}, \tfrac{1}{2})$, $\bar{\mu} = 1$ and $\bar{v} = 2 + \bar{t}$ (so that $g_{\bar{t}}(x, \bar{v}) = -2x$). It is easy to obtain
$$L(x, \bar{\alpha}, \bar{\mu}, \bar{\nu}) = \Big( \frac{x^2}{4} - x, \ \frac{x}{x + 1} - x \Big), \quad L(\bar{x}, \bar{\alpha}, \bar{\mu}, \bar{\nu}) = (0, 0),$$
$$L(\bar{x}, \bar{\alpha}, \mu, \nu) = (0, 0).$$
Hence,
$$L(x, \bar{\alpha}, \bar{\mu}, \bar{\nu}) + \varepsilon \| x - \bar{x} \| - L(\bar{x}, \bar{\alpha}, \bar{\mu}, \bar{\nu}) = \Big( \frac{x^2}{4}, \ \frac{x}{x + 1} \Big) \notin -\mathbb{R}^2_{++}, \quad \forall x \in \mathcal{F},$$
$$L(\bar{x}, \bar{\alpha}, \mu, \nu) - \varepsilon \| \mu - \bar{\mu} \| = -\varepsilon |\mu - 1| \leq L(\bar{x}, \bar{\alpha}, \bar{\mu}, \bar{\nu}), \quad \forall \mu \in \mathbb{R}_+, \ \nu \in \operatorname{gph} V.$$
This is illustrated in the following Figure 1 and Figure 2. Therefore, $(\bar{x}, \bar{\alpha}, \bar{\mu}, \bar{\nu})$ is a quasi $\varepsilon$-weak saddle point of the problem (NRMFP).
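The two inequalities of Definition 7 in Example 3 can also be verified on a grid. The sketch below is illustrative only and assumes the data as reconstructed above, in particular $g_{\bar{t}}(x, \bar{v}) = -2x$; the grids are arbitrary.

```python
import numpy as np

# Example 3 at x_bar = 0, eps = (1, 1), alpha_bar = (1/2, 1/2), mu_bar = 1,
# with g_{t_bar}(x, v_bar) = -2x (as reconstructed above).
ratio = lambda x: np.array([x**2 / 2.0, 2.0*x / (x + 1.0)])
L = lambda x, mu, gval: 0.5*(ratio(x) - ratio(0.0)) + 0.5*mu*gval*np.ones(2)
eps = np.array([1.0, 1.0])

# (10): L(x, ...) + eps*||x - x_bar|| - L(x_bar, ...) must never lie in -R^2_++.
cond10 = all(not np.all(L(x, 1.0, -2.0*x) + eps*x - L(0.0, 1.0, 0.0) < 0)
             for x in np.linspace(0.0, 10.0, 1001))
# (11): L(x_bar, alpha_bar, mu, nu) - eps*|mu - mu_bar| <= L(x_bar, alpha_bar, mu_bar, nu_bar)
# for every mu >= 0 and (t, v) in gph V; here g_t(x_bar, v) = t*0 - v*0 = 0.
cond11 = all(np.all(L(0.0, mu, t*0.0 - v*0.0) - eps*abs(mu - 1.0) <= L(0.0, 1.0, 0.0) + 1e-12)
             for mu in np.linspace(0.0, 5.0, 51)
             for t in np.linspace(0.0, 0.99, 10)
             for v in np.linspace(2.0 - t, 2.0 + t, 5))
print(cond10, cond11)  # True True
```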
Theorem 3.
In the problem (NRMFP), suppose that $\bar{x} \in \mathcal{F}$ is a quasi-weak $\varepsilon$-efficient solution of the problem (NRMFP) and that the assumptions of Theorem 1 are satisfied. If $(f, h, g)$ is an approximate convex function at $\bar{x}$, then $(\bar{x}, \bar{\alpha}, \bar{\mu}, \bar{\nu}) \in \mathcal{F} \times (\mathbb{R}^l_+ \setminus \{0\}) \times \mathbb{R}^n_+ \times \operatorname{gph} V$ is a quasi $\bar{\alpha}\varepsilon$-weak saddle point, where $\bar{\alpha}\varepsilon = (\bar{\alpha}_1 \varepsilon_1, \ldots, \bar{\alpha}_l \varepsilon_l)$.
Proof. 
Firstly, we verify that (10) holds (with $\varepsilon$ replaced by $\bar{\alpha}\varepsilon$). Since $\bar{x}$ is a quasi-weak $\varepsilon$-efficient solution of the problem (NRMFP), it follows from Theorem 1 that
$$0 \in \sum_{i=1}^{l} \bar{\lambda}_i \Big( \partial f_i(\bar{x}) - \frac{f_i(\bar{x})}{h_i(\bar{x})} \partial h_i(\bar{x}) \Big) + \sum_{j=1}^{n} \bar{\mu}_j \partial_x g_{\bar{t}_j}(\bar{x}, \bar{v}_j) + \sum_{i=1}^{l} \bar{\alpha}_i \varepsilon_i \mathbb{B},$$
$$\bar{\mu}_j g_{\bar{t}_j}(\bar{x}, \bar{v}_j) = 0, \quad j = 1, \ldots, n.$$
Therefore, there exist $\bar{\xi}_i \in \partial f_i(\bar{x})$, $\bar{\eta}_i \in \partial h_i(\bar{x})$, $\bar{\delta}_j \in \partial_x g_{\bar{t}_j}(\bar{x}, \bar{v}_j)$ and $\bar{b}_i \in \mathbb{B}$ such that
$$0 = \sum_{i=1}^{l} \bar{\lambda}_i \Big( \bar{\xi}_i - \frac{f_i(\bar{x})}{h_i(\bar{x})} \bar{\eta}_i \Big) + \sum_{j=1}^{n} \bar{\mu}_j \bar{\delta}_j + \sum_{i=1}^{l} \bar{\alpha}_i \varepsilon_i \bar{b}_i.$$
Since $\bar{b}_i \in \mathbb{B}$, for $x \in \mathcal{F}$ we have $\langle \bar{b}_i, x - \bar{x} \rangle \leq \| x - \bar{x} \|$, and hence
$$\Big\langle \sum_{i=1}^{l} \bar{\lambda}_i \Big( \bar{\xi}_i - \frac{f_i(\bar{x})}{h_i(\bar{x})} \bar{\eta}_i \Big) + \sum_{j=1}^{n} \bar{\mu}_j \bar{\delta}_j, \, x - \bar{x} \Big\rangle + \sum_{i=1}^{l} \bar{\alpha}_i \varepsilon_i \| x - \bar{x} \| \geq 0. \tag{12}$$
Suppose that $(\bar{x}, \bar{\alpha}, \bar{\mu}, \bar{\nu})$ is not a quasi $\bar{\alpha}\varepsilon$-weak saddle point of the problem (NRMFP); then, there exists $\hat{x} \in \mathcal{F}$ such that
$$L(\hat{x}, \bar{\alpha}, \bar{\mu}, \bar{\nu}) + \bar{\alpha}\varepsilon \| \hat{x} - \bar{x} \| - L(\bar{x}, \bar{\alpha}, \bar{\mu}, \bar{\nu}) \in -\mathbb{R}^l_{++}.$$
From (9), we deduce
$$\bar{\alpha} \Big( \frac{f(\hat{x})}{h(\hat{x})} - \frac{f(\bar{x})}{h(\bar{x})} \Big) + \frac{1}{l} \sum_{j=1}^{n} \bar{\mu}_j \big( g_{\bar{t}_j}(\hat{x}, \bar{v}_j) - g_{\bar{t}_j}(\bar{x}, \bar{v}_j) \big)\, e + \bar{\alpha}\varepsilon \| \hat{x} - \bar{x} \| \in -\mathbb{R}^l_{++}.$$
Thus,
$$\bar{\alpha}_i \Big( \frac{f_i(\hat{x})}{h_i(\hat{x})} - \frac{f_i(\bar{x})}{h_i(\bar{x})} \Big) + \frac{1}{l} \sum_{j=1}^{n} \bar{\mu}_j \big( g_{\bar{t}_j}(\hat{x}, \bar{v}_j) - g_{\bar{t}_j}(\bar{x}, \bar{v}_j) \big) + \bar{\alpha}_i \varepsilon_i \| \hat{x} - \bar{x} \| < 0, \quad i = 1, \ldots, l.$$
Since $(f, h, g)$ is an approximate convex function at $\bar{x}$, there exist $\bar{\xi}_i \in \partial f_i(\bar{x})$, $\bar{\eta}_i \in \partial h_i(\bar{x})$ and $\bar{\delta}_j \in \partial_x g_{\bar{t}_j}(\bar{x}, \bar{v}_j)$ such that
$$\frac{f_i(\hat{x})}{h_i(\hat{x})} - \frac{f_i(\bar{x})}{h_i(\bar{x})} \geq \frac{1}{h_i(\bar{x})} \Big( \langle \bar{\xi}_i, \hat{x} - \bar{x} \rangle - \frac{f_i(\bar{x})}{h_i(\bar{x})} \langle \bar{\eta}_i, \hat{x} - \bar{x} \rangle \Big), \quad i = 1, \ldots, l, \tag{13}$$
$$g_{\bar{t}_j}(\hat{x}, \bar{v}_j) - g_{\bar{t}_j}(\bar{x}, \bar{v}_j) \geq \langle \bar{\delta}_j, \hat{x} - \bar{x} \rangle, \quad j = 1, \ldots, n.$$
Recalling that $\bar{\lambda}_i = \frac{\bar{\alpha}_i}{h_i(\bar{x})}$, $i = 1, \ldots, l$, and combining with (13), we have
$$\Big\langle \bar{\lambda}_i \Big( \bar{\xi}_i - \frac{f_i(\bar{x})}{h_i(\bar{x})} \bar{\eta}_i \Big), \, \hat{x} - \bar{x} \Big\rangle + \frac{1}{l} \sum_{j=1}^{n} \bar{\mu}_j \langle \bar{\delta}_j, \hat{x} - \bar{x} \rangle + \bar{\alpha}_i \varepsilon_i \| \hat{x} - \bar{x} \| < 0, \quad i = 1, \ldots, l.$$
Summing over $i = 1, \ldots, l$, we obtain
$$\Big\langle \sum_{i=1}^{l} \bar{\lambda}_i \Big( \bar{\xi}_i - \frac{f_i(\bar{x})}{h_i(\bar{x})} \bar{\eta}_i \Big), \, \hat{x} - \bar{x} \Big\rangle + \sum_{j=1}^{n} \bar{\mu}_j \langle \bar{\delta}_j, \hat{x} - \bar{x} \rangle + \sum_{i=1}^{l} \bar{\alpha}_i \varepsilon_i \| \hat{x} - \bar{x} \| < 0,$$
which contradicts (12).
Next, we prove that (11) holds. Since $\bar{\mu}_j g_{\bar{t}_j}(\bar{x}, \bar{v}_j) = 0$ and, for any $\mu_j \geq 0$ and $(t_j, v_j) \in \operatorname{gph} V$, $j = 1, \ldots, n$, we have $\mu_j g_{t_j}(\bar{x}, v_j) \leq 0$, it follows that
$$\sum_{j=1}^{n} \bar{\mu}_j g_{\bar{t}_j}(\bar{x}, \bar{v}_j) - \sum_{j=1}^{n} \mu_j g_{t_j}(\bar{x}, v_j) \geq 0.$$
That is,
$$-\bar{\alpha}\varepsilon \| \mu - \bar{\mu} \| \leq (0, \ldots, 0) \leq \bar{\alpha} \Big( \frac{f(\bar{x})}{h(\bar{x})} - \frac{f(\bar{x})}{h(\bar{x})} \Big) + \frac{1}{l} \sum_{j=1}^{n} \bar{\mu}_j g_{\bar{t}_j}(\bar{x}, \bar{v}_j)\, e - \bar{\alpha} \Big( \frac{f(\bar{x})}{h(\bar{x})} - \frac{f(\bar{x})}{h(\bar{x})} \Big) - \frac{1}{l} \sum_{j=1}^{n} \mu_j g_{t_j}(\bar{x}, v_j)\, e,$$
which implies that
$$L(\bar{x}, \bar{\alpha}, \mu, \nu) - \bar{\alpha}\varepsilon \| \mu - \bar{\mu} \| \leq L(\bar{x}, \bar{\alpha}, \bar{\mu}, \bar{\nu}).$$
This completes the proof. □
The next Theorem 4 shows that a quasi $\bar{\alpha}\varepsilon$-weak saddle point is a quasi-weak $\varepsilon$-efficient solution of the problem (NRMFP).
Theorem 4.
In the problem (NRMFP), if $(\bar{x}, \bar{\alpha}, \bar{\mu}, \bar{\nu}) \in \mathcal{F} \times \mathbb{R}^l_{++} \times \mathbb{R}^n_+ \times \operatorname{gph} V$ is a quasi $\bar{\alpha}\varepsilon$-weak saddle point and $\bar{x}$ is an optimal solution of the problem $\max_{x \in \mathcal{F}} \sum_{j=1}^{n} \bar{\mu}_j g_{\bar{t}_j}(x, \bar{v}_j)$, then $\bar{x}$ is a quasi-weak $\varepsilon$-efficient solution, where $\bar{\alpha}\varepsilon = (\bar{\alpha}_1 \varepsilon_1, \ldots, \bar{\alpha}_l \varepsilon_l)$.
Proof.
Since $(\bar{x}, \bar{\alpha}, \bar{\mu}, \bar{\nu})$ is a quasi $\bar{\alpha}\varepsilon$-weak saddle point of the problem (NRMFP), it follows from (10) that
$$\bar{\alpha} \Big( \frac{f(x)}{h(x)} - \frac{f(\bar{x})}{h(\bar{x})} \Big) + \frac{1}{l} \sum_{j=1}^{n} \bar{\mu}_j g_{\bar{t}_j}(x, \bar{v}_j)\, e + \bar{\alpha}\varepsilon \| x - \bar{x} \| - \bar{\alpha} \Big( \frac{f(\bar{x})}{h(\bar{x})} - \frac{f(\bar{x})}{h(\bar{x})} \Big) - \frac{1}{l} \sum_{j=1}^{n} \bar{\mu}_j g_{\bar{t}_j}(\bar{x}, \bar{v}_j)\, e \notin -\mathbb{R}^l_{++}, \quad \forall x \in \mathcal{F}. \tag{14}$$
Because $\bar{x}$ is an optimal solution of the problem $\max_{x \in \mathcal{F}} \sum_{j=1}^{n} \bar{\mu}_j g_{\bar{t}_j}(x, \bar{v}_j)$, it holds that
$$\sum_{j=1}^{n} \bar{\mu}_j g_{\bar{t}_j}(x, \bar{v}_j) - \sum_{j=1}^{n} \bar{\mu}_j g_{\bar{t}_j}(\bar{x}, \bar{v}_j) \leq 0, \quad \forall x \in \mathcal{F}. \tag{15}$$
Together with (14) and (15), we obtain
$$\bar{\alpha} \Big( \frac{f(x)}{h(x)} - \frac{f(\bar{x})}{h(\bar{x})} \Big) + \bar{\alpha}\varepsilon \| x - \bar{x} \| \notin -\mathbb{R}^l_{++}, \quad \forall x \in \mathcal{F}.$$
Note that $\bar{\alpha} \in \mathbb{R}^l_{++}$; hence, we obtain
$$\frac{f(x)}{h(x)} - \frac{f(\bar{x})}{h(\bar{x})} + \varepsilon \| x - \bar{x} \| \notin -\mathbb{R}^l_{++}, \quad \forall x \in \mathcal{F}.$$
Hence, $\bar{x}$ is a quasi-weak $\varepsilon$-efficient solution of the problem (NRMFP). □

5. Conclusions

We have established a necessary condition for robust approximate quasi-weak efficient solutions of the problem (NUMFP) based on the properties of the Gerstewitz’s function. We have also introduced two kinds of generalized convex function pairs for the problem (NUMFP), and under their assumptions we have presented a sufficient condition and saddle point theorems for robust approximate quasi-weak efficient solutions.
It would be meaningful to further investigate properly efficient solutions, duality theorems and some special applications of the problem (NUMFP), such as multiobjective optimization problems and minimax optimization problems. Indeed, ref. [4] has discussed duality theorems and special applications for a nonsmooth semi-infinite multiobjective optimization problem. Therefore, such further work seems feasible.

Author Contributions

Conceptualization, L.G. and G.Y.; methodology, L.G., G.Y. and W.H.; writing—original draft, L.G.; writing—review and editing, L.G., G.Y. and W.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Fundamental Research Funds for the Central Universities (No. 2021KYQD23, No. 2022XYZSX03), in part by the Natural Science Foundation of Ningxia Province of China (No. 2022AAC03260), in part by the Key Research and Development Program of Ningxia (Introduction of Talents Project) (No. 2022BSB03046), in part by the Natural Science Foundation of China under Grant No. 11861002, the Key Project of North Minzu University under Grant No. ZDZX201804, and the Postgraduate Innovation Project of North Minzu University (YCX23081).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Khantree, C.; Wangkeeree, R. On quasi approximate solutions for nonsmooth robust semi-infinite optimization problems. Carpathian J. Math. 2019, 35, 417–426.
2. Kanzi, N.; Nobakhtian, S. Nonsmooth semi-infinite programming problems with mixed constraints. J. Math. Anal. Appl. 2009, 351, 170–181.
3. Sun, X.K.; Teo, K.L.; Zeng, J. Robust approximate optimal solutions for nonlinear semi-infinite programming with uncertainty. Optimization 2020, 69, 2109–2129.
4. Pham, T.H. On isolated/properly efficient solutions in nonsmooth robust semi-infinite multiobjective optimization. Bull. Malays. Math. Sci. Soc. 2023, 46, 73.
5. Chuong, T.D. Nondifferentiable fractional semi-infinite multiobjective optimization problems. Oper. Res. 2016, 44, 260–266.
6. Mishra, S.K.; Jaiswal, M. Optimality and duality for nonsmooth multiobjective fractional semi-infinite programming problem. Adv. Nonlinear Var. Inequalities 2013, 16, 69–83.
7. Antczak, T. Sufficient optimality conditions for semi-infinite multiobjective fractional programming under (Φ,ρ)-V-invexity and generalized (Φ,ρ)-V-invexity. Filomat 2017, 30, 3649–3665.
8. Pan, X.; Yu, G.L.; Gong, T.T. Optimality conditions for generalized convex nonsmooth uncertain multi-objective fractional programming. J. Oper. Res. Soc. 2022, 1, 1–18.
9. Dem’yanov, V.F.; Vasil’ev, L.V. Nondifferentiable Optimization; Optimization Software, Inc., Publications Division: New York, NY, USA, 1985.
10. Han, W.Y.; Yu, G.L.; Gong, T.T. Optimality conditions for a nonsmooth uncertain multiobjective programming problem. Complexity 2020, 2020, 1–8.
11. Long, X.J.; Huang, N.J.; Liu, Z.B. Optimality conditions, duality and saddle points for nondifferentiable multiobjective fractional programs. J. Ind. Manag. Optim. 2017, 4, 287–298.
12. Kuroiwa, D.; Lee, G.M. On robust multiobjective optimization. J. Nonlinear Convex Anal. 2012, 40, 305–317.
13. Ben-Tal, A.; Ghaoui, L.E.; Nemirovski, A. Robust Optimization; Princeton University Press: Princeton, NJ, USA, 2009; pp. 16–64.
14. Caristi, G.; Ferrara, M. Necessary conditions for nonsmooth multiobjective semi-infinite problems using Michel–Penot subdifferential. Decis. Econ. Financ. 2017, 40, 103–113.
15. Kabgani, A.; Soleimani-damaneh, M. Characterization of (weakly/properly/robust) efficient solutions in nonsmooth semi-infinite multiobjective optimization using convexificators. Optimization 2018, 67, 217–235.
16. Clarke, F.H. Optimization and Nonsmooth Analysis; Wiley: New York, NY, USA, 1983; pp. 46–78.
17. Fakhar, M.; Mahyarinia, M.R.; Zafarani, J. On nonsmooth robust multiobjective optimization under generalized convexity with applications to portfolio optimization. Eur. J. Oper. Res. 2018, 265, 39–48.
18. Lee, G.M.; Kim, G.S.; Dinh, N. Optimality conditions for approximate solutions of convex semi-infinite vector optimization problems. Recent Dev. Vector Optim. 2012, 1, 275–295.
19. Chen, J.W.; Köbis, E.; Yao, J.C. Optimality conditions and duality for robust nonsmooth multiobjective optimization problems with constraints. J. Optim. Theory Appl. 2019, 265, 411–436.
20. Gerth, C.; Weidner, P. Nonconvex separation theorems and some applications in vector optimization. J. Optim. Theory Appl. 1990, 67, 297–320.
21. Han, Y. Connectedness of the approximate solution sets for set optimization problems. Optimization 2022, 71, 4819–4834.
22. Rockafellar, R.T. Convex Analysis; Princeton University Press: Princeton, NJ, USA, 1997; pp. 10–32.
23. Tung, N.M.; Van Duy, M. Constraint qualifications and optimality conditions for robust nonsmooth semi-infinite multiobjective optimization problems. 4OR 2022, 21, 151–176.
Figure 1. The illustration of $L(x, \bar{\alpha}, \bar{\mu}, \bar{\nu}) + \varepsilon \| x - \bar{x} \| - L(\bar{x}, \bar{\alpha}, \bar{\mu}, \bar{\nu})$.
Figure 2. The illustration of $L(\bar{x}, \bar{\alpha}, \mu, \nu) - \varepsilon \| \mu - \bar{\mu} \|$.