Article

A Minimax-Program-Based Approach for Robust Fractional Multi-Objective Optimization

1 Academy for Advanced Interdisciplinary Studies, Northeast Normal University, Changchun 130024, China
2 Department of Mathematics, Yanbian University, Yanji 133002, China
3 Department of Applied Mathematics, Pukyong National University, Busan 48513, Republic of Korea
* Authors to whom correspondence should be addressed.
Mathematics 2024, 12(16), 2475; https://doi.org/10.3390/math12162475
Submission received: 4 July 2024 / Revised: 31 July 2024 / Accepted: 8 August 2024 / Published: 10 August 2024
(This article belongs to the Special Issue Applied Functional Analysis and Applications: 2nd Edition)

Abstract

In this paper, by making use of some advanced tools from variational analysis and generalized differentiation, we establish necessary optimality conditions for a class of robust fractional minimax programming problems. Sufficient optimality conditions for the considered problem are also obtained by means of generalized convex functions. Additionally, we formulate a dual problem to the primal one and examine duality relations between them. Finally, by applying the obtained results, we derive necessary and sufficient optimality conditions for a class of robust fractional multi-objective optimization problems.

1. Introduction

A fractional multi-objective optimization problem with locally Lipschitzian data in the face of data uncertainty in the constraints is of the form
$$\operatorname*{Min}_{\mathbb{R}^l_+}\{\, f(x) \mid g_j(x, v_j) \le 0,\ j \in J \,\},\tag{1}$$
where $x \in \mathbb{R}^n$ is the vector of decision variables, $f(x) := (f_1(x), \ldots, f_l(x))$ with $f_k := \frac{p_k}{q_k}$, $k \in K := \{1, \ldots, l\}$, and $v_j \in V_j$, $j \in J := \{1, \ldots, m\}$, are uncertain parameters whose uncertainty sets are $V_j \subset \mathbb{R}^q$, $j \in J$; the functions $p_k \colon \mathbb{R}^n \to \mathbb{R}$ and $q_k \colon \mathbb{R}^n \to \mathbb{R}$, $k \in K$, are locally Lipschitz, and $g_j \colon \mathbb{R}^n \times \mathbb{R}^q \to \mathbb{R}$, $j \in J$, are given functions.
We will treat the problem (1) via a robust optimization approach (see, e.g., [1,2,3,4,5,6,7,8,9]) by considering its robust counterpart as shown below,
$$\operatorname*{Min}_{\mathbb{R}^l_+}\{\, f(x) \mid x \in F_r \,\},\tag{2}$$
where $F_r$ is the feasible set of the problem (2), defined by
$$F_r := \{ x \in \mathbb{R}^n \mid g_j(x, v_j) \le 0,\ \forall v_j \in V_j,\ j \in J \}.\tag{3}$$
In what follows, for the sake of convenience, we assume that $q_k(x) > 0$, $k \in K$, for all $x \in \mathbb{R}^n$ and that $p_k(\bar{x}) \ge 0$, $k \in K$, for the reference point $\bar{x} \in F_r$. Furthermore, we denote $g := (g_1, \ldots, g_m)$ for simplicity.
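To make the semi-infinite constraint in (3) concrete, the following Python sketch tests membership in $F_r$ by enforcing each constraint over a finite sample of its uncertainty set. The constraint function and uncertainty set below are hypothetical toy data, and sampling only approximates the condition "for all $v_j \in V_j$".

```python
import numpy as np

def robust_feasible(x, g_list, V_samples, tol=1e-9):
    """Check x in F_r for sampled uncertainty sets: g_j(x, v) <= 0
    must hold for every sampled v in V_j and every j."""
    return all(
        max(g(x, v) for v in Vs) <= tol
        for g, Vs in zip(g_list, V_samples)
    )

# Toy data: one constraint g(x, v) = v*x - 1 with V = [0, 2],
# so F_r = (-inf, 1/2] on this instance.
g_list = [lambda x, v: v * x - 1.0]
V_samples = [np.linspace(0.0, 2.0, 21)]

print(robust_feasible(0.4, g_list, V_samples))  # worst case 2*0.4 - 1 = -0.2 -> True
print(robust_feasible(0.6, g_list, V_samples))  # worst case 2*0.6 - 1 =  0.2 -> False
```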
In this paper, we mainly focus on the study of optimality conditions for a weak Pareto solution of the problem (2) by taking the associated minimax optimization problem into account (see, e.g., [10,11,12,13,14,15,16,17,18,19,20,21] for more related works). Let us consider the following robust fractional minimax programming problem,
$$\min_{x \in F_r}\ \max_{k \in K}\ \frac{p_k(x)}{q_k(x)},\tag{4}$$
where $F_r$ is the same feasible set as defined for the problem (2).
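For intuition about the minimax objective in (4), the sketch below evaluates $\phi(x) = \max_{k} p_k(x)/q_k(x)$ on a hypothetical two-ratio instance (with $q_k > 0$ everywhere) and minimizes it by a crude grid search over a sampled feasible set; this is an illustration of the problem structure only, not a solution method.

```python
import numpy as np

# Hypothetical data: two ratios p_k/q_k with q_k > 0 on all of R.
p = [lambda x: x**2 + 1.0, lambda x: 2.0 * x + 3.0]
q = [lambda x: x**2 + 2.0, lambda x: x**2 + 1.0]

def phi(x):
    """phi(x) = max_k p_k(x)/q_k(x), the objective of problem (4)."""
    return max(pk(x) / qk(x) for pk, qk in zip(p, q))

# Crude grid search over a sampled feasible set F_r = [0, 1].
grid = np.linspace(0.0, 1.0, 1001)
x_best = min(grid, key=phi)
print(x_best, phi(x_best))  # on this instance the second ratio attains the max
```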
We first establish necessary optimality conditions for the problem (4) and then obtain sufficient optimality conditions by means of generalized convex functions. Next, we formulate a dual problem to the problem (4) and examine duality relations between them. Finally, by using the obtained results, we derive necessary and sufficient optimality conditions for the problem (2); this minimax-program-based approach differs from some earlier works in the literature, see, e.g., [22,23,24,25]. The main tools are from variational analysis and generalized differentiation; see, e.g., [26,27].

2. Preliminaries

In this section, we recall some notation and preliminary results that will be used in this paper; see, e.g., [26,27]. Let $\mathbb{R}^n$ denote the Euclidean space equipped with the usual Euclidean norm $\|\cdot\|$. The nonnegative orthant of $\mathbb{R}^n$ is denoted by $\mathbb{R}^n_+$. As usual, the polar cone of a set $\Omega \subset \mathbb{R}^n$ is defined by
$$\Omega^{\circ} := \{ y \in \mathbb{R}^n \mid \langle y, x \rangle \le 0 \ \text{for all}\ x \in \Omega \}.$$
A given set-valued mapping $F \colon \Omega \subset \mathbb{R}^n \rightrightarrows \mathbb{R}^m$ is said to be closed at $\bar{x} \in \Omega$ if for any sequence $\{x_k\} \subset \Omega$ with $x_k \to \bar{x}$ and any sequence $\{y_k\} \subset \mathbb{R}^m$ with $y_k \in F(x_k)$ and $y_k \to \bar{y}$, one has $\bar{y} \in F(\bar{x})$.
Given a multifunction $F \colon \mathbb{R}^n \rightrightarrows \mathbb{R}^m$ with values $F(x) \subset \mathbb{R}^m$ in the collection of all subsets of $\mathbb{R}^m$ (and similarly, of course, in infinite dimensions), the limiting construction
$$\operatorname*{Limsup}_{x \to \bar{x}} F(x) := \big\{ y \in \mathbb{R}^m \mid \exists\, x_k \to \bar{x},\ y_k \to y \ \text{with}\ y_k \in F(x_k) \ \text{for all}\ k \in \mathbb{N} := \{1, 2, \ldots\} \big\}$$
is known as the Painlevé–Kuratowski upper/outer limit of $F$ at $\bar{x}$.
Given $\Omega \subset \mathbb{R}^n$ and $\bar{x} \in \Omega$, the Fréchet/regular normal cone to $\Omega$ at $\bar{x}$ is defined by
$$\widehat{N}(\bar{x}; \Omega) = \widehat{N}_{\Omega}(\bar{x}) := \Big\{ v \in \mathbb{R}^n \ \Big|\ \limsup_{x \overset{\Omega}{\to} \bar{x}} \frac{\langle v, x - \bar{x} \rangle}{\|x - \bar{x}\|} \le 0 \Big\}.$$
If $\bar{x} \notin \Omega$, we put $\widehat{N}(\bar{x}; \Omega) := \emptyset$.
The Mordukhovich/limiting normal cone $N(\bar{x}; \Omega)$ to $\Omega$ at $\bar{x} \in \Omega \subset \mathbb{R}^n$ is obtained from regular normal cones by taking the Painlevé–Kuratowski upper limit:
$$N(\bar{x}; \Omega) := \operatorname*{Limsup}_{x \overset{\Omega}{\to} \bar{x}} \widehat{N}(x; \Omega).$$
If $\bar{x} \notin \Omega$, we put $N(\bar{x}; \Omega) := \emptyset$.
For an extended real-valued function $\phi \colon \mathbb{R}^n \to \overline{\mathbb{R}} := [-\infty, \infty]$, its domain is defined by
$$\operatorname{dom} \phi := \{ x \in \mathbb{R}^n \mid \phi(x) < \infty \},$$
and its epigraph is defined by
$$\operatorname{epi} \phi := \{ (x, \mu) \in \mathbb{R}^n \times \mathbb{R} \mid \mu \ge \phi(x) \}.$$
Let $\phi \colon \mathbb{R}^n \to \overline{\mathbb{R}}$ be finite at $\bar{x} \in \operatorname{dom} \phi$. Then, the collection of basic subgradients, or the (basic/Mordukhovich/limiting) subdifferential, of $\phi$ at $\bar{x}$ is defined by
$$\partial \phi(\bar{x}) := \big\{ v \in \mathbb{R}^n \mid (v, -1) \in N\big((\bar{x}, \phi(\bar{x})); \operatorname{epi} \phi\big) \big\}.$$
We say a function $\varphi \colon \mathbb{R}^n \to \overline{\mathbb{R}}$ is locally Lipschitz at $\bar{x} \in \mathbb{R}^n$ with rank $L > 0$ if there exists $\tau > 0$ such that
$$|\varphi(x_1) - \varphi(x_2)| \le L \|x_1 - x_2\|,\quad \forall x_1, x_2 \in B(\bar{x}, \tau),$$
where $B(\bar{x}, \tau)$ stands for the closed ball centered at $\bar{x}$ with radius $\tau > 0$. In this case, it also holds that [27] (Theorem 1.22)
$$\|v\| \le L,\quad \forall v \in \partial \varphi(\bar{x}).$$
The generalized Fermat rule is formulated as follows [27] (Proposition 1.30): let $\varphi \colon \mathbb{R}^n \to \overline{\mathbb{R}}$ be finite at $\bar{x}$. If $\bar{x}$ is a local minimizer of $\varphi$, then
$$0 \in \partial \varphi(\bar{x}).\tag{6}$$
To establish optimality conditions, the following lemmas, which are related to the Mordukhovich/limiting subdifferential calculus, are very useful.
Lemma 1
([27] (Corollary 2.21 and Theorem 4.10 (ii))).
(i)
Let $\phi_i \colon \mathbb{R}^n \to \overline{\mathbb{R}}$, $i = 1, 2, \ldots, m$, $m \ge 2$, be lower semicontinuous around $\bar{x} \in \mathbb{R}^n$, and let all but one of these functions be Lipschitz continuous around $\bar{x}$. Then,
$$\partial (\phi_1 + \phi_2 + \cdots + \phi_m)(\bar{x}) \subset \partial \phi_1(\bar{x}) + \partial \phi_2(\bar{x}) + \cdots + \partial \phi_m(\bar{x}).$$
(ii)
Let $\phi_i \colon \mathbb{R}^n \to \overline{\mathbb{R}}$, $i = 1, 2, \ldots, m$, $m \ge 2$, be Lipschitz continuous around $\bar{x}$. Then, we have the inclusion
$$\partial \big( \max_i \phi_i \big)(\bar{x}) \subset \Big\{ \sum_{i \in I_{\max}(\bar{x})} \lambda_i\, \partial \phi_i(\bar{x}) \ \Big|\ (\lambda_1, \ldots, \lambda_m) \in \Lambda(\bar{x}) \Big\},$$
where equality holds, and the maximum function is lower regular at $\bar{x}$, provided that each $\phi_i$ is lower regular at this point; the sets $I_{\max}(\bar{x})$ and $\Lambda(\bar{x})$ are defined as follows:
$$I_{\max}(\bar{x}) := \big\{ i \in \{1, \ldots, m\} \mid \phi_i(\bar{x}) = \big( \max_i \phi_i \big)(\bar{x}) \big\},$$
$$\Lambda(\bar{x}) := \Big\{ (\lambda_1, \ldots, \lambda_m) \ \Big|\ \lambda_i \ge 0,\ \sum_{i=1}^m \lambda_i = 1,\ \lambda_i \big[ \phi_i(\bar{x}) - \big( \max_i \phi_i \big)(\bar{x}) \big] = 0 \Big\}.$$
Lemma 2
([27] (Corollary 4.8)). Let $\phi_i \colon \mathbb{R}^n \to \overline{\mathbb{R}}$, $i = 1, 2$, be Lipschitz continuous around $\bar{x}$ with $\phi_2(\bar{x}) \neq 0$. Then, we have
$$\partial \Big( \frac{\phi_1}{\phi_2} \Big)(\bar{x}) \subset \frac{\partial \big( \phi_2(\bar{x})\, \phi_1 \big)(\bar{x}) - \partial \big( \phi_1(\bar{x})\, \phi_2 \big)(\bar{x})}{[\phi_2(\bar{x})]^2}.\tag{8}$$
Lemma 3
([27] (Mean Value Inequality, Corollary 4.14 (ii))). If $\varphi$ is Lipschitz continuous on an open set containing $[a, b] \subset \mathbb{R}^n$, then
$$\langle x^*, b - a \rangle \ge \varphi(b) - \varphi(a) \quad \text{for some}\ x^* \in \partial \varphi(c)\ \text{with}\ c \in [a, b),$$
where $[a, b] := \operatorname{co}\{a, b\}$ and $[a, b) := \operatorname{co}\{a, b\} \setminus \{b\}$.
For a function $\varphi \colon \mathbb{R}^n \to \overline{\mathbb{R}}$ that is locally Lipschitz continuous at $\bar{x}$, the generalized directional derivative (in the sense of Clarke) of $\varphi$ at $\bar{x}$ in the direction $v \in \mathbb{R}^n$ is defined as follows:
$$\varphi^{\circ}(\bar{x}; v) := \limsup_{x \to \bar{x},\ \lambda \downarrow 0} \frac{\varphi(x + \lambda v) - \varphi(x)}{\lambda}.$$
In this case, the convexified/Clarke subdifferential of $\varphi$ at $\bar{x}$ is the set
$$\partial_C \varphi(\bar{x}) := \big\{ x^* \in \mathbb{R}^n \mid \langle x^*, v \rangle \le \varphi^{\circ}(\bar{x}; v),\ \forall v \in \mathbb{R}^n \big\},$$
which is nonempty; moreover, the Clarke directional derivative is the support function of the Clarke subdifferential, that is,
$$\varphi^{\circ}(\bar{x}; v) = \max_{x^* \in \partial_C \varphi(\bar{x})} \langle x^*, v \rangle$$
for each $v \in \mathbb{R}^n$.
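As a sanity check on these definitions, one can numerically probe the Clarke directional derivative of a simple nonsmooth function such as $\varphi(x) = |x|$, whose Clarke subdifferential at the origin is $[-1, 1]$; the support-function identity then gives $\varphi^{\circ}(0; \pm 1) = 1$. The sketch below is a rough finite-difference illustration, not a certified computation.

```python
import numpy as np

def clarke_dd(phi, x_bar, v, radius=1e-3, n=200):
    """Rough numerical estimate of the Clarke directional derivative
    phi°(x_bar; v) = limsup_{x -> x_bar, t -> 0+} (phi(x + t*v) - phi(x)) / t,
    obtained by maximizing the difference quotient over base points x
    near x_bar and over small step sizes t."""
    xs = x_bar + np.linspace(-radius, radius, n)
    ts = np.logspace(-8, -4, 20)
    return max((phi(x + t * v) - phi(x)) / t for x in xs for t in ts)

phi = abs  # phi(x) = |x|: Lipschitz, but nonsmooth at the origin

print(clarke_dd(phi, 0.0, 1.0))   # approx 1.0
print(clarke_dd(phi, 0.0, -1.0))  # approx 1.0: the support function of [-1, 1] at v = -1
```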
It follows from [27] that the relationship between the above subdifferentials of $\varphi$ at $\bar{x} \in \mathbb{R}^n$ is as follows:
$$\partial \varphi(\bar{x}) \subset \partial_C \varphi(\bar{x}).$$

3. Optimality Conditions for Robust Fractional Minimax Programming Problems

In this section, we establish optimality conditions for the robust fractional minimax programming problem (4); these results will play a key role in Section 5.
Definition 1.
Let $\phi(x) := \max_{k \in K} f_k(x)$, $x \in \mathbb{R}^n$. A point $\bar{x} \in F_r$ is called a locally optimal solution to the problem (4) if and only if there is a neighborhood $U$ of $\bar{x}$ such that
$$\phi(\bar{x}) \le \phi(x),\quad \forall x \in U \cap F_r.$$
Moreover, we say x ¯ F r is a globally optimal solution to the problem (4) if the above inequality holds for all x F r .
Let us make some assumptions on the functions $g_j$, $j \in J$, given in (3). We refer the reader to [28] for more details.
(A1) For a fixed $\bar{x} \in \mathbb{R}^n$, there exists $\delta^{\bar{x}}_j > 0$ such that the function $V_j \ni v_j \mapsto g_j(x, v_j) \in \mathbb{R}$ is upper semicontinuous for each $x \in B(\bar{x}, \delta^{\bar{x}}_j)$, and the functions $g_j(\cdot, v_j)$, $v_j \in V_j$, are Lipschitz of a given rank $L_j > 0$ on $B(\bar{x}, \delta^{\bar{x}}_j)$, i.e.,
$$|g_j(x_1, v_j) - g_j(x_2, v_j)| \le L_j \|x_1 - x_2\|,\quad \forall x_1, x_2 \in B(\bar{x}, \delta^{\bar{x}}_j),\ \forall v_j \in V_j.$$
(A2) The multifunction $B(\bar{x}, \delta^{\bar{x}}_j) \times V_j \ni (x, v_j) \mapsto \partial_x g_j(x, v_j) \subset \mathbb{R}^n$ is closed at $(\bar{x}, \bar{v}_j)$ for each $\bar{v}_j \in V_j(\bar{x})$, where the symbol $\partial_x$ stands for the limiting subdifferential operation with respect to $x$, and the notation $V_j(\bar{x})$ signifies the set of active uncertain parameters in $V_j$ at $\bar{x}$, i.e.,
$$V_j(\bar{x}) := \big\{ v_j \in V_j \mid g_j(\bar{x}, v_j) = G_j(\bar{x}) \big\}\tag{10}$$
with $G_j(\bar{x}) := \sup_{v_j \in V_j} g_j(\bar{x}, v_j)$.
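On a concrete instance, the active-parameter set $V_j(\bar{x})$ of (10) can be approximated by maximizing $g_j(\bar{x}, \cdot)$ over a discretization of $V_j$. The data below are hypothetical, and the discretization only approximates the true supremum $G_j(\bar{x})$.

```python
import numpy as np

# Hypothetical constraint g(x, v) = v*x - 1 with uncertainty set V = [0, 2].
g = lambda x, v: v * x - 1.0
V = np.linspace(0.0, 2.0, 201)

def active_uncertain_params(x, tol=1e-9):
    """Sampled version of V(x) = {v in V : g(x, v) = G(x)},
    where G(x) = sup_{v in V} g(x, v)."""
    vals = np.array([g(x, v) for v in V])
    return V[vals >= vals.max() - tol]

print(active_uncertain_params(0.5))       # for x > 0 the unique maximizer is v = 2
print(len(active_uncertain_params(0.0)))  # at x = 0, g(0, v) = -1 for all v: every sampled v is active
```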
As usual, in order to obtain the necessary optimality condition of the Karush–Kuhn–Tucker type for a locally optimal solution to the problem (4), the following constraint qualification (see [28] (Definition 3.2)) is needed, which turns into the extended Mangasarian–Fromovitz constraint qualification (see e.g., [29] (p. 72)) in the smooth setting.
Definition 2
([28] (Definition 3.2)). Let $\bar{x} \in F_r$. Then, the constraint qualification (CQ) is fulfilled at $\bar{x}$ if
$$0 \notin \operatorname{co}\big\{ \partial_x g_j(\bar{x}, v_j) \mid v_j \in V_j(\bar{x}),\ j \in J \big\}.$$
Theorem 1.
Let the (CQ) be fulfilled at $\bar{x} \in F_r$. If $\bar{x}$ is a locally optimal solution to the problem (4), then there exist $\alpha := (\alpha_1, \ldots, \alpha_l) \in \mathbb{R}^l_+ \setminus \{0\}$ and $\lambda := (\lambda_1, \ldots, \lambda_m) \in \mathbb{R}^m_+$ such that
$$0 \in \sum_{k \in K} \alpha_k \Big[ \partial p_k(\bar{x}) - \frac{p_k(\bar{x})}{q_k(\bar{x})}\, \partial q_k(\bar{x}) \Big] + \sum_{j \in J} \lambda_j \operatorname{co}\big\{ \partial_x g_j(\bar{x}, v_j) \mid v_j \in V_j(\bar{x}) \big\},$$
$$\alpha_k \Big[ f_k(\bar{x}) - \max_{k \in K} f_k(\bar{x}) \Big] = 0,\ k \in K, \qquad \lambda_j \sup_{v_j \in V_j} g_j(\bar{x}, v_j) = 0,\ j \in J.\tag{11}$$
Proof. 
Let $\bar{x}$ be a locally optimal solution to the problem (4). For $j \in J$, let $\delta^{\bar{x}}_j$, $L_j$, $V_j(\bar{x})$, and $G_j(\bar{x})$ be as in assumptions (A1) and (A2). Along the lines of the proof of [28] (Theorem 3.3), one can show that the set
$$\operatorname{co}\big\{ \partial_x g_j(\bar{x}, v_j) \mid v_j \in V_j(\bar{x}) \big\}$$
is compact and that
$$\partial G_j(\bar{x}) \subset \operatorname{co}\big\{ \partial_x g_j(\bar{x}, v_j) \mid v_j \in V_j(\bar{x}) \big\}\tag{12}$$
with the help of Lemma 3 and the relationship between $\partial G_j(\bar{x})$ and $\partial_C G_j(\bar{x})$.
Set $\phi(x) := \max_{k \in K} f_k(x)$. Because $\bar{x}$ is a locally optimal solution to the problem (4), there exists a neighborhood $U$ of $\bar{x}$ such that
$$\phi(\bar{x}) \le \phi(x),\quad \forall x \in U \cap F_r.\tag{13}$$
We claim that $\bar{x}$ is also a local minimizer of the following problem:
$$\min\ \psi(x) \quad \text{subject to} \quad x \in U,$$
where $\psi(x) := \max_{j \in J} \big\{ \phi(x) - \phi(\bar{x}),\, G_j(x) \big\}$, $x \in \mathbb{R}^n$. Indeed, $\psi(\bar{x}) = 0$ because $\bar{x} \in F_r$. Let us justify that $\psi(\bar{x}) \le \psi(x)$ for all $x \in U$. If $x \in U \cap F_r$, then $\psi(x) \ge 0$; otherwise, $\psi(x) < 0$ would imply
$$\phi(x) - \phi(\bar{x}) < 0,$$
which contradicts (13). If $x \in U \setminus F_r$, then there exists $j_0 \in J$ such that $G_{j_0}(x) > 0$, which entails $\psi(x) > 0$. Therefore, $\bar{x}$ is a local minimizer of $\psi$.
Applying the nonsmooth version of Fermat’s rule (6), we obtain
$$0 \in \partial \psi(\bar{x}).$$
Moreover, applying the formula for the limiting subdifferential of maximum functions together with the sum rule for locally Lipschitz functions (Lemma 1), we have
$$0 \in \Big\{ \mu_0\, \partial \phi(\bar{x}) + \sum_{j \in J} \mu_j\, \partial G_j(\bar{x}) \ \Big|\ \mu_0 \ge 0,\ \mu_j \ge 0,\ \mu_j G_j(\bar{x}) = 0,\ j \in J,\ \mu_0 + \sum_{j \in J} \mu_j = 1 \Big\}.$$
From (12) together with (10), we derive
$$0 \in \Big\{ \mu_0\, \partial \phi(\bar{x}) + \sum_{j \in J} \mu_j \operatorname{co}\big\{ \partial_x g_j(\bar{x}, v_j) \mid v_j \in V_j(\bar{x}) \big\} \ \Big|\ \mu_0 \ge 0,\ \mu_j \ge 0,\ \mu_j \sup_{v_j \in V_j} g_j(\bar{x}, v_j) = 0,\ j \in J,\ \mu_0 + \sum_{j \in J} \mu_j = 1 \Big\}.\tag{14}$$
Applying the limiting subdifferential formula for maximum functions to $\phi$ at $\bar{x}$, we obtain
$$\partial \phi(\bar{x}) = \partial \Big( \max_{k \in K} \frac{p_k}{q_k} \Big)(\bar{x}) \subset \Big\{ \sum_{k \in K(\bar{x})} \beta_k\, \partial \Big( \frac{p_k}{q_k} \Big)(\bar{x}) \ \Big|\ \beta_k \ge 0,\ \beta_k \Big[ f_k(\bar{x}) - \big( \max_k f_k \big)(\bar{x}) \Big] = 0,\ k \in K(\bar{x}),\ \sum_{k \in K(\bar{x})} \beta_k = 1 \Big\},$$
where $K(\bar{x}) := \{ k \in K \mid f_k(\bar{x}) = \phi(\bar{x}) \}$. Taking (8) into account, we arrive at
$$\sum_{k \in K(\bar{x})} \beta_k\, \partial \Big( \frac{p_k}{q_k} \Big)(\bar{x}) \subset \sum_{k \in K(\bar{x})} \beta_k\, \frac{q_k(\bar{x})\, \partial p_k(\bar{x}) - p_k(\bar{x})\, \partial q_k(\bar{x})}{[q_k(\bar{x})]^2} = \sum_{k \in K(\bar{x})} \frac{\beta_k}{q_k(\bar{x})} \Big[ \partial p_k(\bar{x}) - \frac{p_k(\bar{x})}{q_k(\bar{x})}\, \partial q_k(\bar{x}) \Big].\tag{15}$$
Now, we put $\beta_k := 0$ for $k \in K \setminus K(\bar{x})$ and set $\alpha_k := \frac{\beta_k}{q_k(\bar{x})}$ for $k \in K$. Since the (CQ) is satisfied at $\bar{x}$, we have $\mu_0 \neq 0$; setting $\lambda_j := \frac{\mu_j}{\mu_0}$, $j \in J$, (14) together with (15) establishes (11), which completes the proof. □
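To see Theorem 1 at work, consider a smooth single-ratio toy instance where everything can be computed by hand; the data are hypothetical, the multiplier normalization $\alpha = 1$ is a choice, and the sketch simply verifies that condition (11) holds at the constrained minimizer with a nonnegative multiplier $\lambda$.

```python
# Hypothetical smooth instance of (4): minimize p(x)/q(x) with
# p(x) = (x - 1)^2 + 1 and q(x) = x^2 + 2 > 0 on R, subject to
# g(x, v) = v*x - 1 <= 0 for all v in [0, 2], i.e. F_r = (-inf, 1/2].
p,  q  = lambda x: (x - 1.0)**2 + 1.0, lambda x: x**2 + 2.0
dp, dq = lambda x: 2.0 * (x - 1.0),    lambda x: 2.0 * x

x_bar = 0.5   # the constrained minimizer, lying on the boundary of F_r

# Active data at x_bar: G(x_bar) = sup_v (v*x_bar - 1) = 0, attained at
# v = 2 only, so V(x_bar) = {2} and the constraint gradient is eta = 2.
eta = 2.0

# Condition (11) with alpha = 1 asks for lam >= 0 such that
# dp(x_bar) - (p/q)(x_bar) * dq(x_bar) + lam * eta = 0,
# together with complementarity lam * G(x_bar) = 0 (here G(x_bar) = 0).
grad = dp(x_bar) - (p(x_bar) / q(x_bar)) * dq(x_bar)
lam = -grad / eta
print(lam)  # 7/9 > 0, so the KKT-type condition (11) is satisfied
```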
In what follows, we formulate sufficient optimality conditions for the problem (4). For this, we recall the following definition of generalized convexity; see [30,31].
Definition 3.
We say that $(p, q; g)$ is generalized convex on $\mathbb{R}^n$ at $\bar{x} \in \mathbb{R}^n$ if, for any $x \in \mathbb{R}^n$, $\xi_k \in \partial p_k(\bar{x})$, $\zeta_k \in \partial q_k(\bar{x})$, $k \in K$, and $\eta_j \in \partial_x g_j(\bar{x}, v_j)$, $v_j \in V_j(\bar{x})$, $j \in J$, there exists $h \in \mathbb{R}^n$ such that
$$p_k(x) - p_k(\bar{x}) \ge \langle \xi_k, h \rangle,\ k \in K, \qquad q_k(x) - q_k(\bar{x}) \le \langle \zeta_k, h \rangle,\ k \in K,$$
$$g_j(x, v_j) - g_j(\bar{x}, v_j) \ge \langle \eta_j, h \rangle,\ v_j \in V_j(\bar{x}),\ j \in J,$$
where the sets $V_j(\bar{x})$, $j \in J$, are defined as in (10).
The following theorem provides sufficient optimality conditions for a globally optimal solution of (4).
Theorem 2.
Let the condition (11) be satisfied at $\bar{x} \in F_r$. If $(p, q; g)$ is generalized convex on $\mathbb{R}^n$ at $\bar{x}$, then $\bar{x}$ is a globally optimal solution to the problem (4).
Proof. 
Let $\phi(x) := \max_{k \in K} f_k(x)$. Since $\bar{x} \in F_r$ satisfies conditions (11), there exist $\alpha := (\alpha_1, \ldots, \alpha_l) \in \mathbb{R}^l_+ \setminus \{0\}$, $\xi_k \in \partial p_k(\bar{x})$, $\zeta_k \in \partial q_k(\bar{x})$, $k \in K$, and $\lambda_j \ge 0$, $j \in J$, $\lambda_{ji} \ge 0$, $\eta_{ji} \in \partial_x g_j(\bar{x}, v_{ji})$, $v_{ji} \in V_j(\bar{x})$, $i \in I_j := \{1, \ldots, i_j\}$, $i_j \in \mathbb{N}$, $\sum_{i \in I_j} \lambda_{ji} = 1$, such that
$$\sum_{k \in K} \alpha_k \Big[ \xi_k - \frac{p_k(\bar{x})}{q_k(\bar{x})}\, \zeta_k \Big] + \sum_{j \in J} \lambda_j \sum_{i \in I_j} \lambda_{ji}\, \eta_{ji} = 0,\tag{16}$$
$$\alpha_k \Big[ f_k(\bar{x}) - \max_{k \in K} f_k(\bar{x}) \Big] = 0,\ k \in K,\tag{17}$$
$$\lambda_j \sup_{v_j \in V_j} g_j(\bar{x}, v_j) = 0,\ j \in J.\tag{18}$$
Assume to the contrary that $\bar{x}$ is not a globally optimal solution to the problem (4); then, there is $\hat{x} \in F_r$ such that
$$\phi(\bar{x}) > \phi(\hat{x}).\tag{19}$$
By the generalized convexity on $\mathbb{R}^n$ at $\bar{x}$, we deduce from (16) that for such $\hat{x}$ there is $h \in \mathbb{R}^n$ such that
$$0 = \sum_{k \in K} \alpha_k \Big[ \langle \xi_k, h \rangle - \frac{p_k(\bar{x})}{q_k(\bar{x})} \langle \zeta_k, h \rangle \Big] + \sum_{j \in J} \lambda_j \sum_{i \in I_j} \lambda_{ji} \langle \eta_{ji}, h \rangle \le \sum_{k \in K} \alpha_k \Big[ p_k(\hat{x}) - p_k(\bar{x}) - \frac{p_k(\bar{x})}{q_k(\bar{x})} \big( q_k(\hat{x}) - q_k(\bar{x}) \big) \Big] + \sum_{j \in J} \lambda_j \sum_{i \in I_j} \lambda_{ji} \big[ g_j(\hat{x}, v_{ji}) - g_j(\bar{x}, v_{ji}) \big].$$
Hence,
$$- \sum_{j \in J} \lambda_j \sum_{i \in I_j} \lambda_{ji}\, g_j(\bar{x}, v_{ji}) \le \sum_{k \in K} \alpha_k \Big[ p_k(\hat{x}) - p_k(\bar{x}) - \frac{p_k(\bar{x})}{q_k(\bar{x})} \big( q_k(\hat{x}) - q_k(\bar{x}) \big) \Big] + \sum_{j \in J} \lambda_j \sum_{i \in I_j} \lambda_{ji}\, g_j(\hat{x}, v_{ji}).\tag{20}$$
Since $v_{ji} \in V_j(\bar{x})$,
$$g_j(\bar{x}, v_{ji}) = \sup_{v_j \in V_j} g_j(\bar{x}, v_j),\ j \in J,\ i \in I_j.$$
Thus, it follows from (18) that $\lambda_j\, g_j(\bar{x}, v_{ji}) = 0$ for $j \in J$ and $i \in I_j$. In addition, due to $\hat{x} \in F_r$, $\lambda_j\, g_j(\hat{x}, v_{ji}) \le 0$ for $j \in J$ and $i \in I_j$. So, we obtain by (20) that
$$0 \le \sum_{k \in K} \alpha_k \Big[ p_k(\hat{x}) - p_k(\bar{x}) - \frac{p_k(\bar{x})}{q_k(\bar{x})} \big( q_k(\hat{x}) - q_k(\bar{x}) \big) \Big] = \sum_{k \in K} q_k(\hat{x})\, \alpha_k \Big[ \frac{p_k(\hat{x})}{q_k(\hat{x})} - \frac{p_k(\bar{x})}{q_k(\bar{x})} \Big].$$
On the other hand, by (17), it holds that
$$\sum_{k \in K} q_k(\hat{x})\, \alpha_k\, \phi(\bar{x}) = \sum_{k \in K} q_k(\hat{x})\, \alpha_k\, f_k(\bar{x}) \le \sum_{k \in K} q_k(\hat{x})\, \alpha_k\, f_k(\hat{x}) \le \sum_{k \in K} q_k(\hat{x})\, \alpha_k\, \phi(\hat{x}).$$
This implies that
$$\phi(\bar{x}) \le \phi(\hat{x})\tag{21}$$
due to $\sum_{k \in K} q_k(\hat{x})\, \alpha_k > 0$. Obviously, (21) contradicts (19), which completes the proof. □

4. Duality for Robust Fractional Minimax Programming Problems

Let $z \in \mathbb{R}^n$, $\alpha := (\alpha_1, \ldots, \alpha_l) \in \mathbb{R}^l_+ \setminus \{0\}$, and
$$\lambda \in \mathbb{R}^N_+ := \Big\{ \lambda := (\lambda_j, \lambda_{ji}),\ j \in J,\ i \in I_j = \{1, \ldots, i_j\} \ \Big|\ i_j \in \mathbb{N},\ \lambda_j \ge 0,\ \lambda_{ji} \ge 0,\ \sum_{i \in I_j} \lambda_{ji} = 1 \Big\}.$$
In connection with the robust fractional minimax programming problem (4), we consider a dual problem in the following form:
$$\max_{(z, \alpha, \lambda) \in F_D}\ \tilde{\phi}(z, \alpha, \lambda) := \phi(z).\tag{22}$$
Here, we denote $\phi(z) := \max_{k \in K} f_k(z)$ and
$$F_D := \Big\{ (z, \alpha, \lambda) \in \mathbb{R}^n \times \big( \mathbb{R}^l_+ \setminus \{0\} \big) \times \mathbb{R}^N_+ \ \Big|\ 0 \in \sum_{k \in K} \alpha_k \Big[ \partial p_k(z) - \frac{p_k(z)}{q_k(z)}\, \partial q_k(z) \Big] + \sum_{j \in J} \lambda_j \sum_{i \in I_j} \lambda_{ji}\, \eta_{ji},\ \ \sum_{j \in J} \lambda_j \sup_{v_j \in V_j} g_j(z, v_j) \ge 0,\ \ \eta_{ji} \in \big\{ \partial_x g_j(z, v_{ji}) \mid v_{ji} \in V_j(z) \big\},\ \ \alpha_k \big[ f_k(z) - \phi(z) \big] = 0,\ k \in K \Big\},$$
where $V_j(z)$ is defined as in (10) with $\bar{x}$ replaced by $z$.
Definition 4.
We say that a point $(\bar{z}, \bar{\alpha}, \bar{\lambda}) \in F_D$ is a globally optimal solution to the problem (22) if and only if
$$\tilde{\phi}(\bar{z}, \bar{\alpha}, \bar{\lambda}) \ge \tilde{\phi}(z, \alpha, \lambda),\quad \forall (z, \alpha, \lambda) \in F_D.$$
Theorem 3
(weak duality). Let $x \in F_r$ and let $(z, \alpha, \lambda) \in F_D$. If $(p, q; g)$ is generalized convex on $\mathbb{R}^n$ at $z$, then
$$\phi(x) \ge \tilde{\phi}(z, \alpha, \lambda).$$
Proof. 
Since $(z, \alpha, \lambda) \in F_D$, there exist $\xi_k \in \partial p_k(z)$, $\zeta_k \in \partial q_k(z)$, $k \in K$, and $\eta_{ji} \in \partial_x g_j(z, v_{ji})$, $v_{ji} \in V_j(z)$, $i \in I_j = \{1, \ldots, i_j\}$, $i_j \in \mathbb{N}$, such that
$$\sum_{k \in K} \alpha_k \Big[ \xi_k - \frac{p_k(z)}{q_k(z)}\, \zeta_k \Big] + \sum_{j \in J} \lambda_j \sum_{i \in I_j} \lambda_{ji}\, \eta_{ji} = 0,\tag{23}$$
$$\alpha_k \big[ f_k(z) - \phi(z) \big] = 0,\ k \in K,\tag{24}$$
$$\sum_{j \in J} \lambda_j \sup_{v_j \in V_j} g_j(z, v_j) \ge 0.\tag{25}$$
By the generalized convexity on $\mathbb{R}^n$ at $z$, we deduce from (23) that for such $x$ there is $h \in \mathbb{R}^n$ such that
$$0 = \sum_{k \in K} \alpha_k \Big[ \langle \xi_k, h \rangle - \frac{p_k(z)}{q_k(z)} \langle \zeta_k, h \rangle \Big] + \sum_{j \in J} \lambda_j \sum_{i \in I_j} \lambda_{ji} \langle \eta_{ji}, h \rangle \le \sum_{k \in K} \alpha_k \Big[ p_k(x) - p_k(z) - \frac{p_k(z)}{q_k(z)} \big( q_k(x) - q_k(z) \big) \Big] + \sum_{j \in J} \lambda_j \sum_{i \in I_j} \lambda_{ji} \big[ g_j(x, v_{ji}) - g_j(z, v_{ji}) \big].$$
It stems from $x \in F_r$ that
$$\sum_{j \in J} \lambda_j \sum_{i \in I_j} \lambda_{ji}\, g_j(x, v_{ji}) \le 0.$$
We thus obtain
$$0 \le \sum_{k \in K} \alpha_k \Big[ p_k(x) - p_k(z) - \frac{p_k(z)}{q_k(z)} \big( q_k(x) - q_k(z) \big) \Big] - \sum_{j \in J} \lambda_j \sum_{i \in I_j} \lambda_{ji}\, g_j(z, v_{ji}) = \sum_{k \in K} q_k(x)\, \alpha_k \big[ f_k(x) - f_k(z) \big] - \sum_{j \in J} \lambda_j \sum_{i \in I_j} \lambda_{ji}\, g_j(z, v_{ji}).$$
Since $v_{ji} \in V_j(z)$,
$$g_j(z, v_{ji}) = \sup_{v_j \in V_j} g_j(z, v_j),\ j \in J,\ i \in I_j.$$
Thus, we obtain
$$0 \le \sum_{k \in K} q_k(x)\, \alpha_k \big[ f_k(x) - f_k(z) \big] - \sum_{j \in J} \lambda_j \sum_{i \in I_j} \lambda_{ji}\, g_j(z, v_{ji}) = \sum_{k \in K} q_k(x)\, \alpha_k \big[ f_k(x) - f_k(z) \big] - \sum_{j \in J} \lambda_j \Big( \sum_{i \in I_j} \lambda_{ji} \Big) \sup_{v_j \in V_j} g_j(z, v_j) = \sum_{k \in K} q_k(x)\, \alpha_k \big[ f_k(x) - f_k(z) \big] - \sum_{j \in J} \lambda_j \sup_{v_j \in V_j} g_j(z, v_j) \le \sum_{k \in K} q_k(x)\, \alpha_k \big[ f_k(x) - f_k(z) \big],\tag{26}$$
where the last inequality holds true due to (25). Combining now (26) and (24), we have
$$\sum_{k \in K} q_k(x)\, \alpha_k\, \phi(z) \le \sum_{k \in K} q_k(x)\, \alpha_k\, f_k(x) \le \sum_{k \in K} q_k(x)\, \alpha_k\, \phi(x).$$
This implies that
$$\tilde{\phi}(z, \alpha, \lambda) = \phi(z) \le \phi(x)$$
due to $\sum_{k \in K} q_k(x)\, \alpha_k > 0$. The proof is complete. □
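Weak duality can be observed numerically on a hypothetical smooth instance: take $p(x) = (x-1)^2 + 1$, $q(x) = x^2 + 2$, and one uncertain constraint $g(x, v) = vx - 1$ with $V = [0, 2]$, so $F_r = (-\infty, 1/2]$. One can check by hand that $(z, \alpha, \lambda) = (1/2, 1, 7/9)$ satisfies the conditions defining $F_D$ (the stationarity sum vanishes and $\lambda \sup_v g(z, v) = 0 \ge 0$); the sketch then confirms $\phi(x) \ge \tilde{\phi}(z, \alpha, \lambda)$ on a sample of $F_r$.

```python
import numpy as np

# Hypothetical data: phi(x) = p(x)/q(x) with p(x) = (x-1)^2 + 1 and
# q(x) = x^2 + 2 > 0; F_r = (-inf, 1/2] from v*x - 1 <= 0 for v in [0, 2].
phi = lambda x: ((x - 1.0)**2 + 1.0) / (x**2 + 2.0)

# Dual feasible point: z = 1/2 with alpha = 1, lam = 7/9 satisfies
# dp(z) - (p(z)/q(z)) * dq(z) + lam * 2 = -14/9 + 14/9 = 0.
z = 0.5

# Weak duality (Theorem 3): phi(x) >= phi_tilde(z, alpha, lam) = phi(z) on F_r.
xs = np.linspace(-50.0, 0.5, 5001)  # a sample of the feasible set F_r
print(all(phi(x) >= phi(z) - 1e-12 for x in xs))  # True
```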
Theorem 4
(strong duality). Let $\bar{x} \in F_r$ be a locally optimal solution to the problem (4) such that the (CQ) is satisfied at this point. Then, there exists $(\bar{\alpha}, \bar{\lambda}) \in \big( \mathbb{R}^l_+ \setminus \{0\} \big) \times \mathbb{R}^N_+$ such that $(\bar{x}, \bar{\alpha}, \bar{\lambda}) \in F_D$ and
$$\phi(\bar{x}) = \tilde{\phi}(\bar{x}, \bar{\alpha}, \bar{\lambda}).$$
Furthermore, if $(p, q; g)$ is generalized convex on $\mathbb{R}^n$ at every $z \in \mathbb{R}^n$, then $(\bar{x}, \bar{\alpha}, \bar{\lambda})$ is a globally optimal solution to the problem (22).
Proof. 
Thanks to Theorem 1, we can find $\alpha := (\alpha_1, \ldots, \alpha_l) \in \mathbb{R}^l_+ \setminus \{0\}$, $\xi_k \in \partial p_k(\bar{x})$, $\zeta_k \in \partial q_k(\bar{x})$, $k \in K$, and $\lambda_j \ge 0$, $j \in J$, $\lambda_{ji} \ge 0$, $\eta_{ji} \in \partial_x g_j(\bar{x}, v_{ji})$, $v_{ji} \in V_j(\bar{x})$, $i \in I_j = \{1, \ldots, i_j\}$, $i_j \in \mathbb{N}$, $\sum_{i \in I_j} \lambda_{ji} = 1$, such that
$$\sum_{k \in K} \alpha_k \Big[ \xi_k - \frac{p_k(\bar{x})}{q_k(\bar{x})}\, \zeta_k \Big] + \sum_{j \in J} \lambda_j \sum_{i \in I_j} \lambda_{ji}\, \eta_{ji} = 0, \qquad \alpha_k \Big[ f_k(\bar{x}) - \max_{k \in K} f_k(\bar{x}) \Big] = 0,\ k \in K,\tag{27}$$
$$\lambda_j \sup_{v_j \in V_j} g_j(\bar{x}, v_j) = 0,\ j \in J.\tag{28}$$
Since $v_{ji} \in V_j(\bar{x})$, we have $g_j(\bar{x}, v_{ji}) = \sup_{v_j \in V_j} g_j(\bar{x}, v_j)$ for $j \in J$ and $i \in I_j$. Thus, it follows from (28) that $\lambda_j\, g_j(\bar{x}, v_{ji}) = 0$ for $j \in J$ and $i \in I_j$. This entails that
$$\sum_{j \in J} \lambda_j \sum_{i \in I_j} \lambda_{ji}\, g_j(\bar{x}, v_{ji}) = \sum_{j \in J} \sum_{i \in I_j} \lambda_{ji}\, \lambda_j\, g_j(\bar{x}, v_{ji}) = 0.\tag{29}$$
Letting $\bar{\alpha} := (\alpha_1, \ldots, \alpha_l)$ and $\bar{\lambda} := (\lambda_j, \lambda_{ji})$, we have
$$(\bar{\alpha}, \bar{\lambda}) \in \big( \mathbb{R}^l_+ \setminus \{0\} \big) \times \mathbb{R}^N_+,$$
and so $(\bar{x}, \bar{\alpha}, \bar{\lambda}) \in F_D$ due to (27) and (29). Obviously,
$$\phi(\bar{x}) = \tilde{\phi}(\bar{x}, \bar{\alpha}, \bar{\lambda}).$$
Thus, when $(p, q; g)$ is generalized convex on $\mathbb{R}^n$ at every $z \in \mathbb{R}^n$, we can apply the weak duality result in Theorem 3 to conclude that
$$\tilde{\phi}(\bar{x}, \bar{\alpha}, \bar{\lambda}) = \phi(\bar{x}) \ge \tilde{\phi}(z, \alpha, \lambda)$$
for any $(z, \alpha, \lambda) \in F_D$. This means that $(\bar{x}, \bar{\alpha}, \bar{\lambda})$ is a globally optimal solution to the problem (22). □
Theorem 5
(converse-like duality). Let $(\bar{x}, \bar{\alpha}, \bar{\lambda}) \in F_D$ be such that $\phi(\bar{x}) = \tilde{\phi}(\bar{x}, \bar{\alpha}, \bar{\lambda})$. If $\bar{x} \in F_r$ and $(p, q; g)$ is generalized convex on $\mathbb{R}^n$ at $\bar{x}$, then $\bar{x}$ is a globally optimal solution to the problem (4).
Proof. 
Since $(\bar{x}, \bar{\alpha}, \bar{\lambda}) \in F_D$ with $\bar{\alpha} := (\bar{\alpha}_1, \ldots, \bar{\alpha}_l) \in \mathbb{R}^l_+ \setminus \{0\}$ and $\bar{\lambda} := (\lambda_j, \lambda_{ji}) \in \mathbb{R}^N_+$, there exist $\xi_k \in \partial p_k(\bar{x})$, $\zeta_k \in \partial q_k(\bar{x})$, $k \in K$, and $\eta_{ji} \in \partial_x g_j(\bar{x}, v_{ji})$, $v_{ji} \in V_j(\bar{x})$, $i \in I_j = \{1, \ldots, i_j\}$, $i_j \in \mathbb{N}$, such that
$$\sum_{k \in K} \bar{\alpha}_k \Big[ \xi_k - \frac{p_k(\bar{x})}{q_k(\bar{x})}\, \zeta_k \Big] + \sum_{j \in J} \lambda_j \sum_{i \in I_j} \lambda_{ji}\, \eta_{ji} = 0,\tag{30}$$
$$\bar{\alpha}_k \big[ f_k(\bar{x}) - \phi(\bar{x}) \big] = 0,\ k \in K,\tag{31}$$
$$\sum_{j \in J} \lambda_j \sup_{v_j \in V_j} g_j(\bar{x}, v_j) \ge 0.\tag{32}$$
Since $\bar{x} \in F_r$, we have $g_j(\bar{x}, v_j) \le 0$ for all $v_j \in V_j$, $j \in J$, and thus
$$\lambda_j \sup_{v_j \in V_j} g_j(\bar{x}, v_j) \le 0,\ j \in J.$$
This, together with (32), yields
$$\lambda_j \sup_{v_j \in V_j} g_j(\bar{x}, v_j) = 0,\ j \in J.$$
So, by virtue of (30) and (31), $\bar{x}$ satisfies condition (11). To finish the proof, it remains to apply Theorem 2. □

5. Application to Robust Fractional Multi-Objective Optimization Problems

In this section, we will apply the results from Section 3 to study optimality conditions for robust fractional multi-objective optimization problems.
Definition 5.
We say $\bar{x} \in F_r$ is a weak Pareto solution to the problem (2) if and only if
$$f(x) - f(\bar{x}) \notin -\operatorname{int} \mathbb{R}^l_+,\quad \forall x \in F_r.$$
Theorem 6
([31] (cf. Theorem 5.1)). If $\bar{x}$ is a weak Pareto solution to the problem (2), then $\bar{x}$ is an optimal solution of the following problem:
$$\min_{x \in F_r}\ \max_{k = 1, \ldots, l}\ \big\{ f_k(x) - f_k(\bar{x}) \big\}.\tag{33}$$
Proof. 
Let $\bar{x}$ be a weak Pareto solution to the problem (2). We set
$$\hat{f}_k(x) := f_k(x) - f_k(\bar{x}),\quad k = 1, \ldots, l,\ x \in \mathbb{R}^n,$$
and $\hat{\phi}(x) := \max_{k = 1, \ldots, l} \hat{f}_k(x)$.
On the contrary, assume that $\bar{x}$ is not an optimal solution of the problem (33); then there exists $x_0 \in F_r$ such that
$$\hat{\phi}(x_0) < \hat{\phi}(\bar{x}).$$
Since $\hat{\phi}(\bar{x}) = 0$, it holds that $\hat{\phi}(x_0) < 0$. Thus,
$$f(x_0) - f(\bar{x}) \in -\operatorname{int} \mathbb{R}^l_+,$$
which contradicts the assumption that $\bar{x}$ is a weak Pareto solution of the problem (2). Hence, $\bar{x}$ is an optimal solution of the problem (33). □
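Theorem 6's scalarization is easy to visualize numerically. In the hypothetical bi-objective instance below, every point of the feasible interval is weakly Pareto, and the sketch checks that a chosen $\bar{x}$ attains the minimal value $0$ of $\max_k \{ f_k(x) - f_k(\bar{x}) \}$ over a grid on $F_r$.

```python
import numpy as np

# Hypothetical bi-objective data on F_r = [0, 1]: f(x) = (x, 1 - x).
# Every point of [0, 1] is weak Pareto: lowering one component raises the other.
f = [lambda x: x, lambda x: 1.0 - x]

def scalarized(x, x_bar):
    """max_k (f_k(x) - f_k(x_bar)), the objective of problem (33)."""
    return max(fk(x) - fk(x_bar) for fk in f)

x_bar = 0.3
grid = np.linspace(0.0, 1.0, 1001)
vals = [scalarized(x, x_bar) for x in grid]
# Theorem 6: the weak Pareto point x_bar minimizes the scalarized max over F_r.
print(min(vals), scalarized(x_bar, x_bar))  # both (approximately) 0.0
```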
The following result provides Karush–Kuhn–Tucker (KKT)-type necessary conditions for weak Pareto solutions to the problem (2).
Theorem 7
(necessary condition). Let the (CQ) be fulfilled at $\bar{x} \in F_r$. If $\bar{x}$ is a weak Pareto solution of the problem (2), then there exist $\alpha := (\alpha_1, \ldots, \alpha_l) \in \mathbb{R}^l_+ \setminus \{0\}$ and $\lambda_j \ge 0$, $j \in J$, such that
$$0 \in \sum_{k \in K} \alpha_k \Big[ \partial p_k(\bar{x}) - \frac{p_k(\bar{x})}{q_k(\bar{x})}\, \partial q_k(\bar{x}) \Big] + \sum_{j \in J} \lambda_j \operatorname{co}\big\{ \partial_x g_j(\bar{x}, v_j) \mid v_j \in V_j(\bar{x}) \big\}, \qquad \lambda_j \sup_{v_j \in V_j} g_j(\bar{x}, v_j) = 0,\ j \in J.\tag{35}$$
Proof. 
Since $\bar{x}$ is a weak Pareto solution to the problem (2), Theorem 6 shows that $\bar{x}$ is an optimal solution to the following minimax fractional programming problem:
$$\min_{x \in F_r}\ \max_{k \in K}\ \hat{f}_k(x),\tag{36}$$
where $\hat{f}_k(x) := f_k(x) - f_k(\bar{x})$. So, we can employ the KKT conditions of Theorem 1 applied to the problem (36). Then, we find $\alpha := (\alpha_1, \ldots, \alpha_l) \in \mathbb{R}^l_+ \setminus \{0\}$ and $\lambda_j \ge 0$, $j \in J$, such that
$$0 \in \sum_{k \in K} \alpha_k \Big[ \partial p_k(\bar{x}) - \frac{p_k(\bar{x})}{q_k(\bar{x})}\, \partial q_k(\bar{x}) \Big] + \sum_{j \in J} \lambda_j \operatorname{co}\big\{ \partial_x g_j(\bar{x}, v_j) \mid v_j \in V_j(\bar{x}) \big\},$$
$$\alpha_k \Big[ \hat{f}_k(\bar{x}) - \max_{k \in K} \hat{f}_k(\bar{x}) \Big] = 0,\ k \in K, \qquad \lambda_j \sup_{v_j \in V_j} g_j(\bar{x}, v_j) = 0,\ j \in J.\tag{37}$$
It is now clear that (37) implies (35), and the proof is complete. □
Theorem 8
(sufficient condition). Let $\bar{x} \in F_r$ satisfy condition (35). If $(p, q; g)$ is generalized convex on $\mathbb{R}^n$ at $\bar{x}$, then $\bar{x}$ is a weak Pareto solution to the problem (2).
Proof. 
Let $\hat{\phi}(x) := \max_{k \in K} \hat{f}_k(x)$ with $\hat{f}_k(x) := f_k(x) - f_k(\bar{x})$, $k \in K$, $x \in \mathbb{R}^n$. Since $\bar{x} \in F_r$ satisfies condition (35), there exist $\alpha := (\alpha_1, \ldots, \alpha_l) \in \mathbb{R}^l_+ \setminus \{0\}$, $\xi_k \in \partial p_k(\bar{x})$, $\zeta_k \in \partial q_k(\bar{x})$, $k \in K$, and $\lambda_j \ge 0$, $j \in J$, $\lambda_{ji} \ge 0$, $\eta_{ji} \in \partial_x g_j(\bar{x}, v_{ji})$, $v_{ji} \in V_j(\bar{x})$, $i \in I_j = \{1, \ldots, i_j\}$, $i_j \in \mathbb{N}$, $\sum_{i \in I_j} \lambda_{ji} = 1$, such that
$$\sum_{k \in K} \alpha_k \Big[ \xi_k - \frac{p_k(\bar{x})}{q_k(\bar{x})}\, \zeta_k \Big] + \sum_{j \in J} \lambda_j \sum_{i \in I_j} \lambda_{ji}\, \eta_{ji} = 0, \qquad \lambda_j \sup_{v_j \in V_j} g_j(\bar{x}, v_j) = 0,\ j \in J.$$
Thanks to the proof of Theorem 2, we obtain
$$0 \le \sum_{k \in K} q_k(x)\, \alpha_k \Big[ \frac{p_k(x)}{q_k(x)} - \frac{p_k(\bar{x})}{q_k(\bar{x})} \Big],\quad \forall x \in F_r,$$
i.e.,
$$\sum_{k \in K} q_k(x)\, \alpha_k\, \frac{p_k(\bar{x})}{q_k(\bar{x})} \le \sum_{k \in K} q_k(x)\, \alpha_k\, \frac{p_k(x)}{q_k(x)},\quad \forall x \in F_r.$$
On the other hand, it holds that
$$\sum_{k \in K} q_k(x)\, \alpha_k\, \hat{\phi}(\bar{x}) = \sum_{k \in K} q_k(x)\, \alpha_k\, \hat{f}_k(\bar{x}) \le \sum_{k \in K} q_k(x)\, \alpha_k\, \hat{f}_k(x) \le \sum_{k \in K} q_k(x)\, \alpha_k\, \hat{\phi}(x).$$
Due to $\sum_{k \in K} q_k(x)\, \alpha_k > 0$, this gives
$$\hat{\phi}(\bar{x}) \le \hat{\phi}(x),\quad \forall x \in F_r.$$
In other words, we obtain
$$0 \le \max_{k \in K} \{ f_k(x) - f_k(\bar{x}) \},\quad \forall x \in F_r,$$
which entails that
$$f(x) - f(\bar{x}) \notin -\operatorname{int} \mathbb{R}^l_+,\quad \forall x \in F_r.$$
Hence, $\bar{x}$ is a weak Pareto solution to the problem (2). □

Author Contributions

Conceptualization, H.L. and Z.H.; methodology, Z.H. and D.S.K.; investigation, H.L. and Z.H.; resources, Z.H.; writing—original draft preparation, Z.H.; writing—review and editing, H.L. and D.S.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Fundamental Research Funds for the Central Universities (2412022QD024).

Data Availability Statement

No data were used for the research described in the paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ben-Tal, A.; Nemirovski, A. Robust convex optimization. Math. Oper. Res. 1998, 23, 769–805. [Google Scholar] [CrossRef]
  2. Ben-Tal, A.; Nemirovski, A. Selected topics in robust convex optimization. Math. Program. 2008, 112, 125–158. [Google Scholar] [CrossRef]
  3. Ben-Tal, A.; El Ghaoui, L.; Nemirovski, A. Robust Optimization; Princeton University Press: Princeton, NJ, USA, 2009. [Google Scholar]
  4. Beck, A.; Ben-Tal, A. Duality in robust optimization: Primal worst equals dual best. Oper. Res. Lett. 2009, 37, 1–6. [Google Scholar] [CrossRef]
  5. Ben-Tal, A.; den Hertog, D.; Vial, J. Deriving robust counterparts of nonlinear uncertain inequalities. Math. Program. 2015, 149, 265–299. [Google Scholar] [CrossRef]
  6. El Ghaoui, L.; Lebret, H. Robust solution to least-squares problems with uncertain data. SIAM J. Matrix Anal. Appl. 1997, 18, 1035–1064. [Google Scholar] [CrossRef]
  7. El Ghaoui, L.; Oustry, F.; Lebret, H. Robust solutions to uncertain semidefinite programs. SIAM J. Optim. 1998, 9, 33–52. [Google Scholar] [CrossRef]
  8. Kouvelis, P.; Yu, G. Robust Discrete Optimization and Its Applications; Springer: New York, NY, USA, 1997. [Google Scholar]
  9. Lee, G.M.; Son, P.T. On nonsmooth optimality theorems for robust optimization problems. Bull. Korean Math. Soc. 2014, 51, 287–301. [Google Scholar] [CrossRef]
  10. Antczak, T. Generalized fractional minimax programming with B-(p, r)-invexity. Comput. Math. Appl. 2008, 56, 1505–1525. [Google Scholar] [CrossRef]
  11. Antczak, T. Nonsmooth minimax programming under locally Lipschitz (Φ, ρ)-invexity. Appl. Math. Comput. 2008, 217, 9606–9624. [Google Scholar] [CrossRef]
  12. Antczak, T. Parametric approach for approximate efficiency of robust multiobjective fractional programming problems. Math. Methods Appl. Sci. 2021, 44, 11211–11230. [Google Scholar] [CrossRef]
  13. Bram, J. The Lagrange multiplier theorem for max–min with several constraints. SIAM J. Appl. Math. 1966, 14, 665–667. [Google Scholar] [CrossRef]
  14. Jayswal, A. Non-differentiable minimax fractional programming with generalized α-univexity. J. Comput. Appl. Math. 2008, 214, 121–135. [Google Scholar] [CrossRef]
  15. Jayswal, A.; Stancu-Minasian, I. Higher-order duality for nondifferentiable minimax programming problem with generalized convexity. Nonlinear Anal. 2011, 74, 616–625. [Google Scholar] [CrossRef]
  16. Liu, J.C.; Wu, C.S. On minimax fractional optimality conditions with invexity. J. Math. Anal. Appl. 1998, 219, 21–35. [Google Scholar] [CrossRef]
  17. Lai, H.C.; Huang, T.Y. Optimality conditions for a nondifferentiable minimax programming in complex spaces. Nonlinear Anal. 2009, 71, 1205–1212. [Google Scholar] [CrossRef]
  18. Lai, H.C.; Huang, T.Y. Nondifferentiable minimax fractional programming in complex spaces with parametric duality. J. Global Optim. 2012, 53, 243–254. [Google Scholar] [CrossRef]
  19. Mishra, S.K.; Rueda, N.G. Second-order duality for nondifferentiable minimax programming involving generalized type I functions. J. Optim. Theory Appl. 2006, 130, 477–486. [Google Scholar] [CrossRef]
  20. Tanimoto, S. Nondifferentiable mathematical programming and convex-concave functions. J. Optim. Theory Appl. 1980, 31, 331–342. [Google Scholar] [CrossRef]
  21. Zalmai, G.J. Optimality conditions and duality for constrained measurable subset selection problems with minmax objective functions. Optimization 1989, 20, 377–395. [Google Scholar]
  22. Hong, Z.; Jiao, L.G.; Kim, D.S. On a class of nonsmooth fractional robust multi-objective optimization problems. Part I: Optimality conditions. Appl. Set-Valued Anal. Optim. 2020, 2, 109–121. [Google Scholar]
  23. Husain, Z.; Jayswal, A.; Ahmad, I. Second order duality for nondifferentiable minimax programming problems with generalized convexity. J. Global Optim. 2009, 44, 593–608. [Google Scholar] [CrossRef]
  24. Manesh, S.S.; Saraj, M.; Alizadeh, M.; Momeni, M. On robust weakly ε-efficient solutions for multi-objective fractional programming problems under data uncertainty. AIMS Math. 2021, 7, 2331–2347. [Google Scholar] [CrossRef]
  25. Zalmai, G.J. Parameter-free sufficient optimality conditions and duality models for minmax fractional subset programming problems with generalized (F, ρ, θ)-convex functions. Comput. Math. Appl. 2003, 45, 1507–1535. [Google Scholar] [CrossRef]
  26. Mordukhovich, B.S. Variational Analysis and Generalized Differentiation, I: Basic Theory; Springer: New York, NY, USA, 2006. [Google Scholar]
  27. Mordukhovich, B.S. Variational Analysis and Applications; Springer: New York, NY, USA, 2018. [Google Scholar]
  28. Chuong, T.D. Optimality and duality for robust multiobjective optimization problems. Nonlinear Anal. 2016, 134, 127–143. [Google Scholar] [CrossRef]
  29. Bonnans, J.F.; Shapiro, A. Perturbation Analysis of Optimization Problems; Springer: New York, NY, USA, 2000. [Google Scholar]
  30. Chuong, T.D.; Kim, D.S. A class of nonsmooth fractional multiobjective optimization problems. Ann. Oper. Res. 2016, 244, 367–383. [Google Scholar] [CrossRef]
  31. Chuong, T.D.; Kim, D.S. Nondifferentiable minimax programming problems with applications. Ann. Oper. Res. 2017, 251, 73–87. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Li, H.; Hong, Z.; Kim, D.S. A Minimax-Program-Based Approach for Robust Fractional Multi-Objective Optimization. Mathematics 2024, 12, 2475. https://doi.org/10.3390/math12162475
