Article

Multiobjective Convex Optimization in Real Banach Space

by
Kin Keung Lai 1,*,†, Mohd Hassan 2,†, Jitendra Kumar Maurya 3,†, Sanjeev Kumar Singh 2,† and Shashi Kant Mishra 2,†
1 International Business School, Shaanxi Normal University, Xi’an 710119, China
2 Department of Mathematics, Institute of Science, Banaras Hindu University, Varanasi 221005, India
3 Kashi Naresh Government Postgraduate College, Bhadohi 221304, India
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Symmetry 2021, 13(11), 2148; https://doi.org/10.3390/sym13112148
Submission received: 8 October 2021 / Revised: 26 October 2021 / Accepted: 27 October 2021 / Published: 10 November 2021
(This article belongs to the Special Issue Symmetry in Mathematical Analysis and Functional Analysis)

Abstract: In this paper, we consider convex multiobjective optimization problems with equality and inequality constraints in a real Banach space. We establish saddle point necessary and sufficient Pareto optimality conditions for the considered problems under certain constraint qualifications. These results are motivated by the symmetric results obtained by Cobos Sánchez et al. (2021) on Pareto optimality for multiobjective optimization problems of continuous linear operators, and are also related to the second order symmetric duality for nonlinear multiobjective mixed integer programs over arbitrary cones due to Mishra and Wang (2005). Further, we establish Karush–Kuhn–Tucker optimality conditions using saddle point optimality conditions for the differentiable case and present some examples to illustrate our results. The study in this article can also be seen as a symmetric counterpart, and an extension, of the necessary and sufficient optimality conditions for vector equilibrium problems on Hadamard manifolds obtained by Ruiz-Garzón et al. (2019).

1. Introduction

Consider the general multiobjective optimization problem
$$\text{(MOP)} \qquad \min f(x) = (f_1(x), \ldots, f_p(x)), \quad \text{subject to } g(x) \leqq 0,\ h(x) = 0,$$
where $f \colon X \to \mathbb{R}^p$, $g \colon X \to \mathbb{R}^q$, and $h \colon X \to \mathbb{R}^r$ are real vector-valued functions and $X$ is a real Banach space.
A multiobjective optimization problem (MOP) arises when two or more objective functions are simultaneously optimized over a feasible region. Multiobjective optimization has been extensively analyzed and studied by many researchers; see, for instance, [1,2,3,4,5,6]. Multiobjective optimization problems play a crucial role in various fields such as economics, engineering, and management sciences [2,7,8,9,10,11], and in many other areas of daily life.
To deal with multiobjective optimization problems, we have to find Pareto optimal solutions, which are non-dominated by one another. A solution is called non-dominated, or Pareto optimal, if none of the objective values can be improved without worsening at least one of the others. One of the most effective techniques for multiobjective optimization problems is scalarization, in which the multiobjective problem is converted into a single objective problem. Wendell and Lee [12] developed a scalarization technique for multiobjective optimization problems and generalized the results on efficient points to nonlinear optimization problems.
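The weighted-sum scalarization idea can be sketched numerically. The following is our own illustration, not taken from [12]: the two objectives, the weights, and the grid are all hypothetical, and the continuous feasible region is replaced by a finite grid.

```python
# Weighted-sum scalarization: minimizing a nonnegative combination of the
# objectives over the feasible set yields a Pareto optimal point whenever
# all weights are strictly positive.
def scalarize(objectives, weights):
    return lambda x: sum(w * f(x) for w, f in zip(weights, objectives))

f1 = lambda x: (x - 1) ** 2
f2 = lambda x: (x + 1) ** 2
feasible = [i / 100 for i in range(-200, 201)]  # grid on [-2, 2]

phi = scalarize([f1, f2], [0.5, 0.5])
x_best = min(feasible, key=phi)
print(x_best)  # 0.0, the equal-weight compromise between the two minima
```

Varying the weights traces out different Pareto optimal points of the underlying vector problem.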
Saddle point optimality conditions are explained in detail in [13]. Rooyen et al. [14] constructed a Lagrangian function for the convex multiobjective problem and established a relationship between saddle point optimality conditions and Pareto optimal solutions. Cobos-Sánchez et al. [15] proposed Pareto optimality conditions for multioptimization problems of continuous linear operators. Recently, Treanţă [16] studied a robust saddle point criterion in second order partial differential equation and partial differential inequation constrained control problems.
Rooyen et al. [14] discussed necessary and sufficient optimality conditions for (MOP) without any constraint qualification in Euclidean space. Recently, Antczak and Abdulaleem [17] studied optimality and duality results for E-differentiable functions. Barbu and Precupanu [18] studied saddle point optimality conditions for convex optimization problems in real Banach spaces. Valyi [19] proposed the concept of approximate saddle point conditions for convex multiobjective optimization problems, and Rong and Wu [20] generalized the results of Valyi [19] to set-valued maps.
Karush–Kuhn–Tucker (KKT) optimality conditions [21] play a pivotal role in solving scalar optimization problems as well as multiobjective optimization problems. Recently, Lai et al. [3] discussed unconstrained multiobjective optimization problems. Further, Guu et al. [22] studied strong KKT type sufficient optimality conditions for semi-infinite programming problems.
Motivated by the work of Barbu and Precupanu [18], Rooyen et al. [14], and Wendell and Lee [12], we extend the results related to saddle point and Karush–Kuhn–Tucker optimality conditions from single objective functions to multiobjective functions with the help of Slater’s constraint qualification [13]. We also present some illustrative examples to support the theory.
The organization of this paper is as follows: In Section 2, we recall some preliminaries and basic results. In Section 3, results on saddle point and Karush–Kuhn–Tucker necessary optimality conditions for multiobjective optimization problems are extended. Further, we establish the relationship between the Pareto solution and the saddle point for the Lagrange function using Slater’s constraint qualification. The last section is dedicated to conclusions and future remarks.

2. Preliminaries

In this section, we recall some notions and preliminary results which will be used in this paper. $\mathbb{R}$ denotes the set of real numbers. Let $X$ be a real Banach space and $X^*$ its dual space. For $x, y \in \mathbb{R}^n$, the following conventions for inequalities are used:
$$x \geqq y \iff x_i \geq y_i,\ i = 1, \ldots, n; \qquad x \geq y \iff x \geqq y \text{ and } x \neq y; \qquad x > y \iff x_i > y_i,\ i = 1, \ldots, n; \qquad x = y \iff x_i = y_i,\ i = 1, \ldots, n.$$
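For illustration only, these componentwise orderings can be encoded as small predicates. This is our own sketch; the names `geqq`, `geq`, and `gt` are ours and are not the paper's notation.

```python
# Componentwise order relations on R^n, as used throughout the paper.
def geqq(x, y):
    """x >= y componentwise (the relation written x ≧ y)."""
    return all(xi >= yi for xi, yi in zip(x, y))

def geq(x, y):
    """x ≧ y and x != y (the relation written x ≥ y)."""
    return geqq(x, y) and x != y

def gt(x, y):
    """Strict componentwise inequality (the relation written x > y)."""
    return all(xi > yi for xi, yi in zip(x, y))

print(geqq((1, 2), (1, 2)))  # True
print(geq((1, 2), (1, 2)))   # False: the vectors are equal
print(gt((2, 3), (1, 2)))    # True
```

With these predicates, "$x$ dominates $y$" in the minimization sense is simply `geq(f(y), f(x))`.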
We denote the feasible region by
$$S = \{x \in X : g(x) \leqq 0,\ h(x) = 0\}.$$
To deal with the multiobjective optimization problems (MOP), we require some basic definitions.
Definition 1
(Ref. [2]). A decision vector $\bar{x} \in S$ is a global Pareto optimal solution (global efficient solution) if there does not exist another decision vector $x \in S$ such that
$$f(x) \leq f(\bar{x}).$$
Consider the following scalarized multiobjective optimization problem (SMOP) corresponding to (MOP):
$$\text{(SMOP)} \qquad \min \sum_{i=1}^{p} f_i(x) \quad \text{subject to} \quad f(x) \leqq f(\bar{x}),\ g(x) \leqq 0,\ h(x) = 0,$$
where $\bar{x}$ is any feasible point of (MOP).
Now, we recall a result from [2], which relates the solutions of (MOP) and (SMOP).
Theorem 1
(Ref. [7]). A feasible point x ¯ S is a Pareto optimal solution of the (MOP) if and only if x ¯ is an optimal solution of the (SMOP).
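Theorem 1 can be spot-checked by brute force on a discretized instance. The following is our own sketch, not part of the paper: the objectives and the grid are hypothetical, and the continuous problem is replaced by a finite one so that both sides of the equivalence can be enumerated.

```python
# Brute-force illustration of Theorem 1 on a discretized instance:
# a feasible point x_bar is Pareto optimal for (f1, f2) iff it solves
# the scalarized problem  min f1 + f2  subject to  f(x) <= f(x_bar).
f1 = lambda x: x ** 2
f2 = lambda x: (x - 2) ** 2
S = [i / 10 for i in range(-10, 31)]  # feasible grid on [-1, 3]

def is_pareto(xb):
    # No feasible x may weakly improve both objectives with a different value.
    return not any(f1(x) <= f1(xb) and f2(x) <= f2(xb)
                   and (f1(x), f2(x)) != (f1(xb), f2(xb)) for x in S)

def solves_smop(xb):
    # Feasible set of (SMOP): points dominating-or-matching x_bar.
    C = [x for x in S if f1(x) <= f1(xb) and f2(x) <= f2(xb)]
    return f1(xb) + f2(xb) == min(f1(x) + f2(x) for x in C)

print(all(is_pareto(x) == solves_smop(x) for x in S))  # True
```

On this grid the Pareto set is the segment $[0, 2]$, and the two characterizations agree at every feasible point.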
Definition 2
(Ref. [13]). A subset of the linear space $X$ is said to be convex if, for every pair of distinct points $x$ and $y$ in the subset, it contains $\lambda x + (1 - \lambda)y$ for all $\lambda \in [0, 1]$.
Definition 3
(Ref. [13]). A function $f$ is said to be convex on $X$ if the inequality
$$f(\lambda x + (1 - \lambda)y) \leq \lambda f(x) + (1 - \lambda) f(y)$$
holds for all $x, y \in X$ and for every $\lambda \in [0, 1]$.
Definition 4
(Ref. [18]). The function $f \colon X \to \overline{\mathbb{R}} = [-\infty, +\infty]$ is said to be proper convex if $f(x) > -\infty$ for all $x \in X$ and $f$ is not identically $+\infty$ (that is, $f \not\equiv +\infty$).
If $f$ is a convex function, $\mathrm{Dom}(f)$ denotes the effective domain of $f$, defined as
$$\mathrm{Dom}(f) = \{x \in X : f(x) < +\infty\}.$$
If $f$ is proper, then $f$ is finite on $\mathrm{Dom}(f)$. Conversely, if $A$ is a nonempty convex subset of $X$ and $f$ is a finite convex function on $A$, then one can obtain a proper convex function on $X$ by setting $f(x) = +\infty$ for $x \in X \setminus A$.
Definition 5
(Ref. [18]). The function $f \colon X \to \overline{\mathbb{R}}$ is called lower semicontinuous at $\bar{x}$ if
$$f(\bar{x}) \leq \liminf_{x \to \bar{x}} f(x).$$
Corollary 1
(Ref. [18]). If $A_1$ and $A_2$ are two nonempty disjoint convex sets of $\mathbb{R}^n$, there exists a nonzero element $c = (c_1, \ldots, c_n) \in \mathbb{R}^n \setminus \{0\}$ such that
$$\sum_{i=1}^{n} c_i u_i \leq \sum_{i=1}^{n} c_i v_i, \quad \forall\, u = (u_i) \in A_1,\ v = (v_i) \in A_2.$$
Definition 6
(Ref. [18]). Given a proper convex function $f \colon X \to \,]-\infty, +\infty]$, the subdifferential of such a function is the set-valued mapping $\partial f \colon X \to X^*$ defined by
$$\partial f(x) = \{x^* \in X^* : f(u) - f(x) \geq (u - x, x^*),\ \forall u \in X\},$$
where $X^*$ is the dual of $X$ and $(\cdot, \cdot)$ denotes the canonical pairing between $X$ and $X^*$. An element $x^* \in \partial f(x)$ is called a subgradient of $f$ at $x$.
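As a small numerical illustration of Definition 6 (ours, not from [18]), take $X = \mathbb{R}$, so the canonical pairing is ordinary multiplication, and test candidate subgradients of $f(x) = |x|$ at $0$ against the defining inequality on a grid:

```python
# Check the subgradient inequality  f(u) - f(x) >= x* (u - x)  for f = |.|
# at x = 0. The true subdifferential of |x| at 0 is the interval [-1, 1].
f = lambda u: abs(u)

def is_subgradient(xstar, x=0.0):
    grid = [i / 10 for i in range(-50, 51)]   # test points u in [-5, 5]
    return all(f(u) - f(x) >= xstar * (u - x) for u in grid)

print([s for s in (-1.5, -1.0, 0.0, 0.5, 1.0, 1.5) if is_subgradient(s)])
# [-1.0, 0.0, 0.5, 1.0] -- exactly the candidates lying in [-1, 1]
```

The check only samples finitely many $u$, so it can certify failure exactly but success only up to the grid; here the grid suffices because $|u| \geq su$ holds for all $u$ precisely when $|s| \leq 1$.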
Corollary 2
(Ref. [18]). If $f$ is a proper convex function on $X$, then the (global) minimum of $f$ over $X$ is attained at the point $\bar{x} \in X$ if and only if $0 \in \partial f(\bar{x})$.
Theorem 2
(Ref. [18]). If the functions $f_1$ and $f_2$ are finite at a point at which at least one of them is continuous, then
$$\partial(f_1 + f_2)(x) = \partial f_1(x) + \partial f_2(x), \quad \forall x \in X.$$
Definition 7 (Slater’s constraint qualification).
(1)
There exists a point $\bar{x} \in S$ such that $g_j(\bar{x}) < 0$, $j = 1, \ldots, q$; and
(2)
The equality constraints satisfy the interiority condition
$$0 \in \operatorname{int}\{(h_1(x), h_2(x), \ldots, h_r(x)) : x \in X_0\}.$$

3. Saddle Point and Karush–Kuhn–Tucker Optimality Conditions

In this section, we establish saddle point and Karush–Kuhn–Tucker type optimality conditions for the considered (MOP) in Banach spaces.
Theorem 3.
Let $f_1, \ldots, f_p, g_1, \ldots, g_q$ be proper convex functions and $h_1, \ldots, h_r$ be affine functions. If $\bar{x}$ is a Pareto optimal solution of (MOP), then there exist real numbers $\lambda_1^f, \ldots, \lambda_p^f, \lambda_1^g, \ldots, \lambda_q^g, \lambda_1^h, \ldots, \lambda_r^h$, not all zero, with the properties:
$$\sum_{i=1}^{p} \lambda_i^f f_i(\bar{x}) \leq \sum_{i=1}^{p} \lambda_i^f f_i(x) + \sum_{j=1}^{q} \lambda_j^g g_j(x) + \sum_{k=1}^{r} \lambda_k^h h_k(x), \quad \forall x \in X_0, \tag{2}$$
$$\lambda_i^f \geq 0\ (i = 1, \ldots, p), \quad \lambda_j^g \geq 0\ (j = 1, \ldots, q), \quad \lambda_j^g g_j(\bar{x}) = 0,$$
where $X_0 = \bigcap_{i=1}^{p} \mathrm{Dom}(f_i) \cap \bigcap_{j=1}^{q} \mathrm{Dom}(g_j)$.
Proof. 
Let $\bar{x}$ be a Pareto optimal solution of the consistent problem (MOP). Then, by Theorem 1, $\bar{x}$ is an optimal solution of the problem
$$\text{(SMOP)} \quad \min \sum_{i=1}^{p} f_i(x) \quad \text{subject to} \quad f_i(x) \leq f_i(\bar{x})\ (i = 1, \ldots, p), \quad g_j(x) \leq 0\ (j = 1, \ldots, q), \quad h_k(x) = 0\ (k = 1, \ldots, r).$$
Now, we consider the subset
$$B = \Big\{\Big(\textstyle\sum_{i=1}^{p} f_i(x) - \sum_{i=1}^{p} f_i(\bar{x}) + \alpha_0^f,\ f_1(x) - f_1(\bar{x}) + \alpha_1^f,\ \ldots,\ f_p(x) - f_p(\bar{x}) + \alpha_p^f,\ g_1(x) + \alpha_1^g,\ \ldots,\ g_q(x) + \alpha_q^g,\ h_1(x),\ \ldots,\ h_r(x)\Big) : x \in X_0,\ \alpha_i^f > 0\ \forall i,\ \alpha_j^g > 0\ \forall j\Big\}.$$
It is easy to see that $B$ is a nonvoid convex subset of $\mathbb{R}^{1+p+q+r}$ which does not contain the origin. Since $\{0\}$ and $B$ are disjoint nonempty convex sets, Corollary 1 provides a separating homogeneous hyperplane; that is, there exist $1 + p + q + r$ real numbers, not all zero, $\hat{\lambda}_0^f, \hat{\lambda}_1^f, \ldots, \hat{\lambda}_p^f, \lambda_1^g, \ldots, \lambda_q^g, \lambda_1^h, \ldots, \lambda_r^h$, such that
$$\hat{\lambda}_0^f \Big(\sum_{i=1}^{p} f_i(x) - \sum_{i=1}^{p} f_i(\bar{x}) + \alpha_0^f\Big) + \sum_{i=1}^{p} \hat{\lambda}_i^f \big(f_i(x) - f_i(\bar{x}) + \alpha_i^f\big) + \sum_{j=1}^{q} \lambda_j^g \big(g_j(x) + \alpha_j^g\big) + \sum_{k=1}^{r} \lambda_k^h h_k(x) \geq 0, \tag{4}$$
for all $x \in X_0$, $\alpha_i^f > 0$ $(i = 0, 1, \ldots, p)$, $\alpha_j^g > 0$ $(j = 1, \ldots, q)$. Taking $x = \bar{x}$, letting $\alpha_j^g \to 0$ for all $j$, $\alpha_i^f \to 0$ for $i \neq l$, and $\alpha_l^f \to \infty$, and then taking $x = \bar{x}$, $\alpha_i^f \to 0$ for all $i$, $\alpha_j^g \to 0$ for $j \neq l$, and $\alpha_l^g \to \infty$, we get
$$\hat{\lambda}_0^f \geq 0, \quad \hat{\lambda}_i^f \geq 0 \quad \text{and} \quad \lambda_j^g \geq 0.$$
Thus, letting all the $\alpha$'s tend to zero, relation (4) becomes
$$\hat{\lambda}_0^f \Big(\sum_{i=1}^{p} f_i(x) - \sum_{i=1}^{p} f_i(\bar{x})\Big) + \sum_{i=1}^{p} \hat{\lambda}_i^f \big(f_i(x) - f_i(\bar{x})\big) + \sum_{j=1}^{q} \lambda_j^g g_j(x) + \sum_{k=1}^{r} \lambda_k^h h_k(x) \geq 0,$$
that is,
$$\sum_{i=1}^{p} \big(\hat{\lambda}_0^f + \hat{\lambda}_i^f\big) f_i(x) + \sum_{j=1}^{q} \lambda_j^g g_j(x) + \sum_{k=1}^{r} \lambda_k^h h_k(x) \geq \sum_{i=1}^{p} \big(\hat{\lambda}_0^f + \hat{\lambda}_i^f\big) f_i(\bar{x}),$$
$$\sum_{i=1}^{p} \lambda_i^f f_i(x) + \sum_{j=1}^{q} \lambda_j^g g_j(x) + \sum_{k=1}^{r} \lambda_k^h h_k(x) \geq \sum_{i=1}^{p} \lambda_i^f f_i(\bar{x}), \tag{5}$$
where $\lambda_i^f = \hat{\lambda}_0^f + \hat{\lambda}_i^f$. Since $\bar{x}$ is feasible and $\lambda_j^g \geq 0$, we have
$$\lambda_j^g g_j(\bar{x}) \leq 0, \quad \forall j, \tag{6}$$
and substituting $x = \bar{x}$ in inequality (5), we get
$$\sum_{j=1}^{q} \lambda_j^g g_j(\bar{x}) \geq 0. \tag{7}$$
Now, from (6) and (7), we have $\lambda_j^g g_j(\bar{x}) = 0$, $j = 1, \ldots, q$, which completes the proof. □
Example 1.
Consider the problem
$$\min f(x) = (f_1(x), f_2(x)), \quad \text{subject to } g(x) \leq 0,$$
where
$$f_1(x) = \begin{cases} x_1^2, & \text{if } -3 \leq x_1, x_2 \leq 3,\\ +\infty, & \text{otherwise}, \end{cases} \qquad f_2(x) = \begin{cases} x_2^2, & \text{if } -3 \leq x_1, x_2 \leq 3,\\ +\infty, & \text{otherwise}, \end{cases}$$
$$g(x) = \begin{cases} (x_1 - 1)^2 + (x_2 - 1)^2 - 1, & \text{if } -3 \leq x_1, x_2 \leq 3,\\ +\infty, & \text{otherwise}. \end{cases}$$
Therefore, the feasible region is $S = \{(x_1, x_2) \in \mathbb{R}^2 : (x_1 - 1)^2 + (x_2 - 1)^2 \leq 1\}$ and the common effective domain is $X_0 = \bigcap_{i=1}^{2} \mathrm{Dom}(f_i) \cap \mathrm{Dom}(g) = \{(x_1, x_2) \in \mathbb{R}^2 : -3 \leq x_1, x_2 \leq 3\}$. Since $\bar{x} = (1, 0)$ is a Pareto optimal solution, for $\lambda_1^f = 0$, $\lambda_2^f > 0$, $\lambda^g = 0$ the following inequality is satisfied:
$$\lambda_1^f f_1(\bar{x}) + \lambda_2^f f_2(\bar{x}) = 0 \leq \lambda_1^f x_1^2 + \lambda_2^f x_2^2 + \lambda^g \big[(x_1 - 1)^2 + (x_2 - 1)^2 - 1\big] = \lambda_1^f f_1(x) + \lambda_2^f f_2(x) + \lambda^g g(x), \quad \forall x \in X_0.$$
Hence, the result is verified.
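The verification above can also be replayed numerically. This is a sketch of ours, not part of the paper: the grid step is arbitrary, and $\lambda_2^f = 1$ is one admissible positive choice.

```python
# Numeric spot-check of Example 1: with lambda1_f = 0, lambda2_f = 1 and
# lambda_g = 0, the multiplier inequality holds at every grid point of
# X0 = [-3, 3]^2, since the right-hand side reduces to x2^2 >= 0.
l1f, l2f, lg = 0.0, 1.0, 0.0
f1 = lambda x1, x2: x1 ** 2
f2 = lambda x1, x2: x2 ** 2
g = lambda x1, x2: (x1 - 1) ** 2 + (x2 - 1) ** 2 - 1

grid = [i / 2 for i in range(-6, 7)]            # [-3, 3] in steps of 0.5
lhs = l1f * f1(1, 0) + l2f * f2(1, 0)           # value at x_bar = (1, 0)
ok = all(lhs <= l1f * f1(a, b) + l2f * f2(a, b) + lg * g(a, b)
         for a in grid for b in grid)
print(ok)  # True
```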
Thus, it is natural to call the function
$$L(x, \lambda^f, \lambda^g, \lambda^h) = \sum_{i=1}^{p} \lambda_i^f f_i(x) + \sum_{j=1}^{q} \lambda_j^g g_j(x) + \sum_{k=1}^{r} \lambda_k^h h_k(x), \tag{8}$$
where $\lambda^f = (\lambda_i^f) \in \mathbb{R}_+^p$, $\lambda^g = (\lambda_j^g) \in \mathbb{R}_+^q$ and $\lambda^h = (\lambda_k^h) \in \mathbb{R}^r$, the Lagrange function of problem (MOP).
Remark 1.
The necessary conditions (2) with $\bar{x} \in S$ are equivalent to the fact that the point $(\bar{x}, \lambda^f, \lambda^g, \lambda^h)$ is a saddle point for the Lagrange function (8) on $X_0 \times \mathbb{R}_+^p \times \mathbb{R}_+^q \times \mathbb{R}^r$, with respect to minimization on $X_0$ and maximization on $\mathbb{R}_+^p \times \mathbb{R}_+^q \times \mathbb{R}^r$; that is,
$$\sum_{i=1}^{p} \lambda_i^f f_i(\bar{x}) + \sum_{j=1}^{q} \lambda_j^g g_j(\bar{x}) + \sum_{k=1}^{r} \lambda_k^h h_k(\bar{x}) \leq \sum_{i=1}^{p} \lambda_i^f f_i(x) + \sum_{j=1}^{q} \lambda_j^g g_j(x) + \sum_{k=1}^{r} \lambda_k^h h_k(x), \tag{9}$$
i.e., $L(\bar{x}, \lambda^f, \lambda^g, \lambda^h) \leq L(x, \lambda^f, \lambda^g, \lambda^h)$, for all $x \in X_0$ and for every $(\lambda^f, \lambda^g, \lambda^h) \in \mathbb{R}_+^p \times \mathbb{R}_+^q \times \mathbb{R}^r$.
Remark 2.
The necessary optimality conditions (2) with $\lambda^f \neq 0$ and $\bar{x} \in S$ are also sufficient for $\bar{x}$ to be a Pareto optimal solution of (MOP). If $\lambda^f = 0$, then the optimality conditions concern only the constraint functions, without giving any information about the functions being minimized.
Theorem 4.
Let $f_1, \ldots, f_p, g_1, \ldots, g_q$ be proper convex functions and let $h_1, \ldots, h_r$ be affine functions such that Slater’s constraint qualification is satisfied at a feasible point $\bar{x}$ of (MOP). Then, the point $\bar{x}$ is a Pareto optimal solution of (MOP) if and only if there exist $p + q + r$ real numbers $\lambda_1^f, \ldots, \lambda_p^f, \lambda_1^g, \ldots, \lambda_q^g, \lambda_1^h, \ldots, \lambda_r^h$ such that
$$\sum_{i=1}^{p} \lambda_i^f f_i(\bar{x}) \leq \sum_{i=1}^{p} \lambda_i^f f_i(x) + \sum_{j=1}^{q} \lambda_j^g g_j(x) + \sum_{k=1}^{r} \lambda_k^h h_k(x), \quad \forall x \in X_0, \tag{10}$$
and $\lambda^f \geq 0$, $\lambda^f \neq 0$, $\lambda^g \geq 0$, $\lambda_j^g g_j(\bar{x}) = 0$, $j = 1, \ldots, q$.
Proof. 
Let $\bar{x}$ be a Pareto optimal solution of (MOP). Then, from Theorem 3, there exist $\lambda_1^f, \ldots, \lambda_p^f, \lambda_1^g, \ldots, \lambda_q^g, \lambda_1^h, \ldots, \lambda_r^h$, not all zero, such that (2) holds. If we suppose $\lambda^f = 0$, then taking $x = \bar{x} \in S$ in (2) we get $\sum_{j=1}^{q} \lambda_j^g g_j(\bar{x}) \geq 0$. Since $\lambda_j^g \geq 0$ and $g_j(\bar{x}) < 0$ for all $j$, we must have $\lambda_j^g = 0$ for all $j$; therefore, from (2), we have
$$\sum_{k=1}^{r} \lambda_k^h h_k(x) \geq 0, \quad \forall x \in X_0,$$
with the components of $\lambda^h$ not all zero, which contradicts the interiority condition of Slater’s constraint qualification. Hence $\lambda^f \neq 0$; that is, some components of $\lambda^f$ are greater than zero.
Conversely, suppose $\bar{x}$ is not a Pareto optimal solution of (MOP). Then there exists $x^* (\neq \bar{x}) \in S$ such that
$$f(x^*) \leq f(\bar{x}). \tag{11}$$
Now, from relation (10) with $x = x^*$, since $g(x^*) \leqq 0$, $h(x^*) = 0$ and $\lambda^g \geq 0$, we have
$$\sum_{i=1}^{p} \lambda_i^f f_i(\bar{x}) \leq \sum_{i=1}^{p} \lambda_i^f f_i(x^*),$$
which contradicts inequality (11). Hence, $\bar{x}$ is a Pareto optimal solution of (MOP). Since each $f_i$ is a proper convex function, $f$ is necessarily finite on its effective domain. □
Theorem 5.
Under the assumptions of Theorem 4, $\bar{x} \in X$ is a Pareto optimal solution of (MOP) if and only if there exist $\lambda^f = (\lambda_1^f, \ldots, \lambda_p^f) \in \mathbb{R}_+^p$, $\lambda^g = (\lambda_1^g, \ldots, \lambda_q^g) \in \mathbb{R}_+^q$ and $\lambda^h = (\lambda_1^h, \ldots, \lambda_r^h) \in \mathbb{R}^r$ such that $(\bar{x}, \lambda^f, \lambda^g, \lambda^h)$ is a saddle point for the Lagrange function on $X_0 \times \mathbb{R}_+^p \times \mathbb{R}_+^q \times \mathbb{R}^r$, that is,
$$\sum_{i=1}^{p} \lambda_i^f f_i(\bar{x}) + \sum_{j=1}^{q} \lambda_j^g g_j(\bar{x}) + \sum_{k=1}^{r} \lambda_k^h h_k(\bar{x}) \leq \sum_{i=1}^{p} \lambda_i^f f_i(x) + \sum_{j=1}^{q} \lambda_j^g g_j(x) + \sum_{k=1}^{r} \lambda_k^h h_k(x) \tag{12}$$
for all $(x, \lambda^f, \lambda^g, \lambda^h) \in X_0 \times \mathbb{R}_+^p \times \mathbb{R}_+^q \times \mathbb{R}^r$.
Proof. 
The proof follows immediately from Theorem 4. □
Now, we establish optimality conditions in the subdifferentiable case. The following result extends the Karush–Kuhn–Tucker theorem to lower-semicontinuous multiobjective functions.
Theorem 6.
Under the hypotheses of Theorem 4, if we suppose that the functions $f_i$ are lower-semicontinuous and $g_j, h_k$ are continuous real functions, then the optimality conditions for $\bar{x} \in S$ are equivalent to the condition
$$0 \in \sum_{i=1}^{p} \lambda_i^f \partial f_i(\bar{x}) + \sum_{j=1}^{q} \lambda_j^g \partial g_j(\bar{x}) + \sum_{k=1}^{r} \lambda_k^h \nabla h_k(\bar{x}).$$
Proof. 
From (10), if $\bar{x} \in S$ is a minimum point, then
$$\sum_{i=1}^{p} \lambda_i^f f_i(\bar{x}) \leq \sum_{i=1}^{p} \lambda_i^f f_i(x) + \sum_{j=1}^{q} \lambda_j^g g_j(x) + \sum_{k=1}^{r} \lambda_k^h h_k(x). \tag{13}$$
Since $\lambda_j^g g_j(\bar{x}) = 0$ and $h_k(\bar{x}) = 0$, inequality (13) takes the form
$$\sum_{i=1}^{p} \lambda_i^f f_i(\bar{x}) + \sum_{j=1}^{q} \lambda_j^g g_j(\bar{x}) + \sum_{k=1}^{r} \lambda_k^h h_k(\bar{x}) \leq \sum_{i=1}^{p} \lambda_i^f f_i(x) + \sum_{j=1}^{q} \lambda_j^g g_j(x) + \sum_{k=1}^{r} \lambda_k^h h_k(x). \tag{14}$$
Now, from Corollary 2, the minimum point of the Lagrange function is a solution of the relation
$$0 \in \partial\Big(\sum_{i=1}^{p} \lambda_i^f f_i + \sum_{j=1}^{q} \lambda_j^g g_j + \sum_{k=1}^{r} \lambda_k^h h_k\Big)(\bar{x}).$$
Making use of the previous results and the additivity property of the subdifferential (Theorem 2), we get
$$0 \in \sum_{i=1}^{p} \lambda_i^f \partial f_i(\bar{x}) + \sum_{j=1}^{q} \lambda_j^g \partial g_j(\bar{x}) + \sum_{k=1}^{r} \lambda_k^h \partial h_k(\bar{x}).$$
Since $h_k$ is an affine function, we have
$$\partial h_k(\bar{x}) = \{\nabla h_k(\bar{x})\}.$$
Hence, we get the required result. □
Example 2.
Consider the following problem
$$\min f(x) = (f_1(x), f_2(x)), \quad \text{subject to } g(x) \leq 0,$$
at the feasible point $\bar{x} = (0, 0)$, where $f_1(x) = |x_1|$, $f_2(x) = |x_2|$, and $g(x) = |x_1| + |x_2| - 1$.
Since $\bar{x}$ is a Pareto optimal solution of the considered problem and satisfies Slater’s constraint qualification (because $g(\bar{x}) < 0$), we have $\lambda^f = (\lambda_1^f, \lambda_2^f) \geq 0$, $\lambda^f \neq 0$, and $\lambda^g g(\bar{x}) = 0 \Rightarrow \lambda^g = 0$. Now, from the definition of the subdifferential, we get
$$\partial f_1(\bar{x}) = \{(\xi, 0) \in \mathbb{R}^2 : -1 \leq \xi \leq 1\}, \qquad \partial f_2(\bar{x}) = \{(0, \xi) \in \mathbb{R}^2 : -1 \leq \xi \leq 1\},$$
which implies that
$$0 \in \lambda_1^f \partial f_1(\bar{x}) + \lambda_2^f \partial f_2(\bar{x}) + \lambda^g \partial g(\bar{x}).$$
Hence, the result is verified.
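The membership $0 \in \lambda_1^f \partial f_1(\bar{x}) + \lambda_2^f \partial f_2(\bar{x}) + \lambda^g \partial g(\bar{x})$ can also be checked by discretizing the two subdifferentials. This is our own illustration; with $\lambda^g = 0$ the last term drops out, and $\lambda_1^f = \lambda_2^f = 1$ is one admissible choice.

```python
# Example 2 check at x_bar = (0, 0):
#   ∂f1(x_bar) = {(xi, 0) : -1 <= xi <= 1},
#   ∂f2(x_bar) = {(0, xi) : -1 <= xi <= 1}.
# The Minkowski sum l1*∂f1 + l2*∂f2 consists of points (l1*a, l2*b),
# and contains the origin (choose a = b = 0), so the condition holds.
l1, l2 = 1.0, 1.0
xis = [i / 10 for i in range(-10, 11)]          # discretized [-1, 1]
minkowski = {(l1 * a, l2 * b) for a in xis for b in xis}
print((0.0, 0.0) in minkowski)  # True
```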
Remark 3.
Since the $h_k$ are affine, there exist a continuous linear functional $x_k^* \in X^*$ and a real number $\alpha_k \in \mathbb{R}$ such that $h_k = x_k^* + \alpha_k$; therefore $\partial h_k = \{x_k^*\}$, and the above condition becomes
$$0 \in \sum_{i=1}^{p} \lambda_i^f \partial f_i(\bar{x}) + \sum_{j=1}^{q} \lambda_j^g \partial g_j(\bar{x}) + \sum_{k=1}^{r} \lambda_k^h x_k^*.$$
Now, consider the case where only inequality constraints are present, that is,
$$S_1 = \{x \in X : g_j(x) \leq 0,\ j = 1, \ldots, q\}. \tag{15}$$
Then, Slater’s constraint qualification reads as follows: there exists a point $\bar{x} \in \bigcap_{i=1}^{p} \mathrm{Dom}(f_i)$ such that $g_j(\bar{x}) < 0$, $j = 1, \ldots, q$.
Theorem 7.
Let $f_1, \ldots, f_p$ be proper convex lower-semicontinuous functions and $g_1, \ldots, g_q$ be real convex continuous functions satisfying Slater’s constraint qualification at a feasible point $\bar{x}$. Then, the point $\bar{x} \in S_1$ is a Pareto optimal solution of (MOP) if and only if there exist $\lambda^f = (\lambda_1^f, \ldots, \lambda_p^f)$ and $\lambda^g = (\lambda_1^g, \ldots, \lambda_q^g)$ such that
$$0 \in \sum_{i=1}^{p} \lambda_i^f \partial f_i(\bar{x}) + \sum_{j=1}^{q} \lambda_j^g \partial g_j(\bar{x}), \tag{16}$$
$$\lambda^f \geq 0, \quad \lambda^f \neq 0, \quad \lambda_j^g \geq 0, \quad \lambda_j^g g_j(\bar{x}) = 0, \quad j = 1, \ldots, q.$$
Proof. 
Suppose $\bar{x} \in S_1$ is a Pareto optimal solution of problem (MOP). Then, as in (10),
$$\sum_{i=1}^{p} \lambda_i^f f_i(\bar{x}) \leq \sum_{i=1}^{p} \lambda_i^f f_i(x) + \sum_{j=1}^{q} \lambda_j^g g_j(x).$$
Since $\lambda_j^g g_j(\bar{x}) = 0$ for all $j$, it follows that
$$\sum_{i=1}^{p} \lambda_i^f f_i(\bar{x}) + \sum_{j=1}^{q} \lambda_j^g g_j(\bar{x}) \leq \sum_{i=1}^{p} \lambda_i^f f_i(x) + \sum_{j=1}^{q} \lambda_j^g g_j(x),$$
so $\bar{x}$ minimizes the Lagrange function. Now, from Corollary 2, the minimum point of the Lagrange function is a solution of the relation
$$0 \in \partial\Big(\sum_{i=1}^{p} \lambda_i^f f_i + \sum_{j=1}^{q} \lambda_j^g g_j\Big)(\bar{x}).$$
Using the additivity property of the subdifferential, we get
$$0 \in \sum_{i=1}^{p} \lambda_i^f \partial f_i(\bar{x}) + \sum_{j=1}^{q} \lambda_j^g \partial g_j(\bar{x}),$$
with $\lambda^f \geq 0$, $\lambda^f \neq 0$, $\lambda_j^g \geq 0$, $\lambda_j^g g_j(\bar{x}) = 0$, $j = 1, \ldots, q$.
Conversely, suppose $\bar{x}$ is not a Pareto optimal solution of (MOP). Then there exists $x^* (\neq \bar{x}) \in S_1$ such that
$$f(x^*) \leq f(\bar{x}). \tag{18}$$
Now, from relation (10) with $x = x^*$,
$$\sum_{i=1}^{p} \lambda_i^f f_i(\bar{x}) \leq \sum_{i=1}^{p} \lambda_i^f f_i(x^*),$$
which contradicts inequality (18). Hence, $\bar{x}$ is a Pareto optimal solution of (MOP). Since each $f_i$ is a proper convex function, $f$ is necessarily finite on its effective domain. □
Corollary 3.
Let $f_1, \ldots, f_p, g_1, \ldots, g_q$ be real convex and differentiable functions on $X$ satisfying Slater’s constraint qualification. Then, a feasible point $\bar{x}$ is a Pareto optimal solution of problem (MOP), with the feasible set given by (15), if and only if there exist real numbers $\lambda_1^f, \ldots, \lambda_p^f, \lambda_1^g, \ldots, \lambda_q^g$ such that
$$\sum_{i=1}^{p} \lambda_i^f \nabla f_i(\bar{x}) + \sum_{j=1}^{q} \lambda_j^g \nabla g_j(\bar{x}) = 0, \tag{19}$$
$$\lambda^f \geq 0, \quad \lambda^f \neq 0, \quad \lambda_j^g \geq 0, \quad \lambda_j^g g_j(\bar{x}) = 0, \quad j = 1, \ldots, q.$$
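For a smooth instance, Corollary 3 reduces to checking stationarity and complementarity directly. The following is a hypothetical example of ours, not from the paper: the objectives, constraint, point, and multipliers are all our own choices.

```python
# Differentiable KKT check for the instance
#   f1 = (x1-1)^2 + x2^2,  f2 = x1^2 + (x2-1)^2,  g = x1 + x2 - 2 <= 0,
# at the Pareto optimal point x_bar = (0.5, 0.5) with multipliers
# lambda^f = (1, 1), lambda^g = 0 (the constraint is inactive).
x1, x2 = 0.5, 0.5
grad_f1 = (2 * (x1 - 1), 2 * x2)        # = (-1.0,  1.0)
grad_f2 = (2 * x1, 2 * (x2 - 1))        # = ( 1.0, -1.0)
grad_g = (1.0, 1.0)
l1, l2, lg = 1.0, 1.0, 0.0

stationarity = tuple(l1 * a + l2 * b + lg * c
                     for a, b, c in zip(grad_f1, grad_f2, grad_g))
complementarity = lg * (x1 + x2 - 2)
print(stationarity, complementarity)  # (0.0, 0.0) 0.0
```

The two gradients cancel at $\bar{x}$, so (19) holds with $\lambda^f \neq 0$, and $\lambda^g g(\bar{x}) = 0$ holds trivially since the constraint is inactive.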

4. Conclusions

In this paper, we have established saddle point optimality conditions for convex multiobjective optimization problems in real Banach spaces. We recalled Slater’s constraint qualification from [18] and derived saddle point necessary and sufficient Pareto optimality conditions for the considered problem in which the multipliers of the objective functions never vanish simultaneously. We deduced Karush–Kuhn–Tucker optimality conditions from the saddle point optimality conditions for the subdifferentiable case and presented some examples to verify our results. Our characterization of saddle point optimality conditions for Pareto points of convex multiobjective problems in real Banach spaces is more general than, and uses a different proof technique from, that of Ehrgott and Wiecek [23]. Further, we deduced Karush–Kuhn–Tucker optimality conditions for the smooth and nonsmooth cases from the saddle point optimality conditions, which is new compared with Ehrgott and Wiecek [23]; the Karush–Kuhn–Tucker conditions we derive agree with those in Miettinen [2] and Haeser and Ramos [24]. These results can be extended to convex semi-infinite programming problems [25,26]. In the future, we may extend these results to interval-valued optimality conditions and deduce some applications, motivated by the recent article of Treanţă [27], and further extend them to vector equilibrium problems on Hadamard manifolds, motivated by Ruiz-Garzón et al. [28].

Author Contributions

Writing—original draft preparation, K.K.L., M.H., J.K.M., S.K.S. and S.K.M.; writing—review and editing, K.K.L., M.H., J.K.M., S.K.S. and S.K.M.; funding acquisition, K.K.L. All authors have read and agreed to the published version of the manuscript.

Funding

The second author is financially supported by CSIR-UGC JRF, New Delhi, India, through Reference no.: 1009/(CSIR-UGC NET JUNE 2018). The fourth author is financially supported by CSIR-UGC JRF, New Delhi, India, through Reference no.: 1272/(CSIR-UGC NET DEC.2016). The fifth author is financially supported by "Research Grant for Faculty" (IoE Scheme) under Dev. Scheme NO. 6031.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No data were used to support this study.

Acknowledgments

The authors are indebted to the anonymous reviewers for their valuable comments and remarks that helped to improve the presentation and quality of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Branke, J.; Deb, K.; Miettinen, K.; Słowiński, S. Multiobjective Optimization: Interactive and Evolutionary Approaches; Springer: Berlin/Heidelberg, Germany, 2008.
  2. Miettinen, K.M. Nonlinear Multiobjective Optimization; Kluwer Academic Publishers: Boston, MA, USA, 1999.
  3. Lai, K.K.; Mishra, S.K.; Panda, G.; Ansary, M.A.; Ram, B. On q-steepest descent method for unconstrained multiobjective optimization problems. AIMS Math. 2020, 5, 5521–5540.
  4. Tzou, J.; Wetton, B. Optimal covering points and curves. AIMS Math. 2019, 4, 1796–1804.
  5. Sawaragi, Y.; Nakayama, H.; Tanino, T. Theory of Multiobjective Optimization; Academic Press Inc.: Orlando, FL, USA, 1985.
  6. Mishra, S.K.; Wang, S.Y. Second order symmetric duality for nonlinear multiobjective mixed integer programming. Eur. J. Oper. Res. 2005, 161, 673–682.
  7. Ehrgott, M. Multicriteria Optimization; Springer: Berlin, Germany, 2005.
  8. Moreno-Pulido, S.; Garcia-Pacheco, F.J.; Cobos-Sanchez, C.; Sanchez-Alzola, A. Exact Solutions to the Maxmin Problem max ∥Ax∥ Subject to ∥Bx∥ ≤ 1. Mathematics 2020, 8, 85.
  9. Garcia-Pacheco, F.J.; Cobos-Sanchez, C.; Moreno-Pulido, S.; Sanchez-Alzola, A. Exact solutions to $\max_{\|x\|=1} \sum_{i=1}^{\infty} \|T_i(x)\|^2$ with applications to Physics, Bioengineering and Statistics. Commun. Nonlinear Sci. Numer. Simul. 2020, 82, 105054.
  10. Sánchez, C.C.; Garcia-Pacheco, F.J.; Guerrero-Rodriguez, J.M.; Garcia-Barrachina, L. Solving an IBEM with supporting vector analysis to design quiet TMS coils. Eng. Anal. Bound. Elem. 2020, 117, 1–12.
  11. Sánchez-Alzola, A.; García-Pacheco, F.J.; Naranjo-Guerra, E.; Moreno-Pulido, S. Supporting vectors for the $\ell^1$-norm and the $\ell^\infty$-norm and an application. Math. Sci. 2021, 15, 173–187.
  12. Wendell, R.E.; Lee, D.N. Efficiency in multiple objective optimization problems. Math. Program. 1977, 12, 406–414.
  13. Mangasarian, O.L. Nonlinear Programming; McGraw-Hill: New York, NY, USA, 1969.
  14. Van Rooyen, M.; Zhou, X.; Zlobec, S. A saddle-point characterization of Pareto optima. Math. Program. 1994, 67, 77–88.
  15. Cobos-Sánchez, C.; Vilchez-Membrilla, J.A.; Campos-Jiménez, A.; García-Pacheco, F.J. Pareto Optimality for Multioptimization of Continuous Linear Operators. Symmetry 2021, 13, 661.
  16. Treanţă, S. Robust saddle-point criterion in second-order partial differential equation and partial differential inequation constrained control problems. Int. J. Robust Nonlinear Control 2021.
  17. Antczak, T.; Abdulaleem, N. Optimality and duality results for E-differentiable multiobjective fractional programming problems under E-convexity. J. Inequal. Appl. 2019, 2019, 1–24.
  18. Barbu, V.; Precupanu, T. Convexity and Optimization in Banach Spaces; Springer: Dordrecht, The Netherlands, 2012.
  19. Valyi, I. Approximate saddle-point theorems in vector optimization. J. Optim. Theory Appl. 1987, 55, 435–448.
  20. Rong, W.D.; Wu, N.Y. ε-weak minimal solutions of vector optimization problems with set-valued maps. J. Optim. Theory Appl. 2000, 106, 569–579.
  21. Kuhn, H.W.; Tucker, A.W. Nonlinear Programming; University of California Press: Berkeley, CA, USA, 1951.
  22. Guu, S.M.; Singh, Y.; Mishra, S.K. On strong KKT type sufficient optimality conditions for multiobjective semi-infinite programming problems with vanishing constraints. J. Inequal. Appl. 2017, 2017, 1–9.
  23. Ehrgott, M.; Wiecek, M.M. Saddle points and Pareto points in multiple objective programming. J. Glob. Optim. 2005, 32, 11–33.
  24. Haeser, G.; Ramos, A. Constraint Qualifications for Karush–Kuhn–Tucker Conditions in Multiobjective Optimization. J. Optim. Theory Appl. 2020, 187, 469–487.
  25. Hettich, R.; Kortanek, K.O. Semi-Infinite Programming: Theory, Methods, and Applications. SIAM Rev. 1993, 35, 380–429.
  26. Li, W.; Nahak, C.; Singer, I. Constraint qualifications for semi-infinite systems of convex inequalities. SIAM J. Optim. 2000, 11, 31–52.
  27. Treanţă, S. LU-Optimality Conditions in Optimization Problems with Mechanical Work Objective Functionals. IEEE Trans. Neural Netw. Learn. Syst. 2021.
  28. Ruiz-Garzón, G.; Osuna-Gómez, R.; Ruiz-Zapatero, J. Necessary and sufficient optimality conditions for vector equilibrium problems on Hadamard manifolds. Symmetry 2019, 11, 1037.
