Article

On Sub Convexlike Optimization Problems

Mathematics Department, Saskatchewan Polytechnic, Saskatoon, SK S7L 4J7, Canada
Mathematics 2023, 11(13), 2928; https://doi.org/10.3390/math11132928
Submission received: 13 June 2023 / Revised: 26 June 2023 / Accepted: 27 June 2023 / Published: 29 June 2023
(This article belongs to the Special Issue Recent Trends in Convex Analysis and Mathematical Inequalities)

Abstract

In this paper, we show that the sub convexlikeness and the subconvexlikeness defined by V. Jeyakumar are equivalent in locally convex topological spaces. We also deal with set-valued vector optimization problems and obtain vector saddle-point theorems and vector Lagrangian theorems.

1. Introduction

Generalized convex optimization is a very well-studied branch of mathematics, and there are many meaningful and useful definitions of generalized convexity. Let X be a normed space and X+ a convex cone of X. K. Fan [1] introduced the definition of X+-convexlike functions. Jeyakumar [2] introduced the definition of X+-sub convexlike functions and defined X+-subconvexlike functions in [3]. There are many research articles discussing subconvexlike optimization problems, e.g., see [4,5,6,7,8]. In this paper, by using partial order relations and the absorbing property of bounded convex sets in locally convex topological spaces [9], we prove that the sub convexlikeness introduced in [2] and the subconvexlikeness introduced in [3] are equivalent in locally convex topological spaces (including normed linear spaces).
Most papers in set-valued optimization study problems with inequality and abstract constraints. In this paper, we consider set-valued optimization problems with not only inequality and abstract constraints but also equality constraints. The explicit statement of the equality constraint is very convenient in various applications. For example, mathematical programs with equilibrium constraints have recently received considerable attention from the optimization community. Mathematical programs with equilibrium constraints are a class of optimization problems with variational inequality constraints. By representing the variational inequality as a generalized equation, e.g., [10,11,12], a mathematical program with equilibrium constraints can be reformulated as an optimization problem with an equality constraint. This paper works with a set-valued optimization problem with inequality, equality, as well as abstract constraints. By using the separation theorem for convex sets, we extend or modify some results (theorems of the alternative, saddle-point theorems, and Lagrangian theorems) in [4,7,8,10,13,14,15] to vector optimization problems with weakened convexity.

2. Preliminaries

Let X be a real topological vector space; a subset X+ of X is said to be a convex cone if
αx1 + βx2 ∈ X+, ∀x1, x2 ∈ X+, ∀α, β ≥ 0.
We denote by 0X the zero element in the topological space X, and simply by 0 if there is no confusion.
A convex cone X+ of X is called a pointed cone if X+ ∩ (−X+) = {0}.
A real topological vector space X with a pointed cone is said to be an ordered topological linear space. We denote by int X+ the topological interior of X+. The partial order on X is defined by
x1 ≥X+ x2, if x1 − x2 ∈ X+,
x1 >X+ x2, if x1 − x2 ∈ int X+.
Alternatively, if there is no confusion, they may just be denoted by
x1 ≥ x2, if x1 − x2 ∈ X+,
x1 > x2, if x1 − x2 ∈ int X+.
If A, B ⊆ X, we write
A ≥X+ B, if x ≥X+ y for ∀x ∈ A, y ∈ B,
A >X+ B, if x >X+ y for ∀x ∈ A, y ∈ B.
Alternatively,
A ≥ B, if x ≥ y for ∀x ∈ A, y ∈ B,
A > B, if x > y for ∀x ∈ A, y ∈ B.
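For a concrete illustration (an addition to the original text), take X = R² and X+ = R²+, the nonnegative orthant: then x1 ≥X+ x2 means componentwise inequality, and x1 >X+ x2 means strict componentwise inequality. A minimal Python sketch of these order checks, with hypothetical helper names, is:

import numpy as np

# Example cone: the nonnegative orthant R^2_+ (so int X+ is the open positive orthant).
def geq_cone(x1, x2):
    """x1 >=_{X+} x2  iff  x1 - x2 lies in X+ (componentwise >= 0)."""
    return bool(np.all(np.asarray(x1) - np.asarray(x2) >= 0))

def gt_cone(x1, x2):
    """x1 >_{X+} x2  iff  x1 - x2 lies in int X+ (componentwise > 0)."""
    return bool(np.all(np.asarray(x1) - np.asarray(x2) > 0))

def set_geq_cone(A, B):
    """A >=_{X+} B  iff  x >=_{X+} y for every x in A and y in B."""
    return all(geq_cone(x, y) for x in A for y in B)

print(geq_cone([2, 3], [1, 3]))                           # True: (1, 0) lies in R^2_+
print(gt_cone([2, 3], [1, 3]))                            # False: (1, 0) is not interior
print(set_geq_cone([[2, 2], [3, 4]], [[1, 1], [0, 2]]))   # True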
A linear functional on X is a continuous linear function from X to R (the 1-dimensional Euclidean space). The set X* of all linear functionals on X is the dual space of X. The subset
X+* = {ξ ∈ X*: ⟨x, ξ⟩ ≥ 0, ∀x ∈ X+}
of X* is said to be the dual cone of the cone X+, where ⟨x, ξ⟩ = ξ(x).
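As a simple example we add here: for X = R^n with X+ = R^n_+, the dual cone is R^n_+ itself, since ⟨x, ξ⟩ ≥ 0 for every x ≥ 0 exactly when all coefficients of ξ are nonnegative. A small numerical sanity check under that assumption:

import numpy as np

rng = np.random.default_rng(0)

def in_dual_cone(xi, dim=3, n_samples=1000):
    """Check <x, xi> >= 0 on the basis vectors of X+ = R^dim_+ and on random samples of X+."""
    xi = np.asarray(xi, dtype=float)
    xs = np.vstack([np.eye(dim), rng.random((n_samples, dim))])
    return bool(np.all(xs @ xi >= -1e-12))

print(in_dual_cone([1.0, 0.5, 2.0]))    # True: all coefficients nonnegative
print(in_dual_cone([1.0, -0.1, 2.0]))   # False: the second basis vector gives -0.1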
Suppose that X and Y are two real topological vector spaces. Let f: X → 2^Y be a set-valued function, where 2^Y denotes the power set of Y.
Let D be a nonempty subset of X. We set f(D) = ∪x∈D f(x) and
⟨f(x), η⟩ = {⟨y, η⟩: y ∈ f(x)},
⟨f(D), η⟩ = ∪x∈D ⟨f(x), η⟩.
For x ∈ D, η ∈ Y*, we write
⟨f(x), η⟩ ≥ 0, if ⟨y, η⟩ ≥ 0, ∀y ∈ f(x),
⟨f(D), η⟩ ≥ 0, if ⟨f(x), η⟩ ≥ 0, ∀x ∈ D.
The following Definitions 1 and 2 can be found in [4].
Definition 1 (convex, balanced, and absorbing).
A subset M of X is said to be convex if x1, x2 ∈ M and 0 < α < 1 imply αx1 + (1 − α)x2 ∈ M; M is said to be balanced if x ∈ M and |α| ≤ 1 imply αx ∈ M; M is said to be absorbing if, for any given neighborhood U of 0, there exists a positive scalar β, such that β⁻¹M ⊆ U, where β⁻¹M = {x ∈ X: x = β⁻¹v, v ∈ M}.
Definition 2 (locally convex topological space).
A topological vector space X is called a locally convex topological space if any neighborhood of 0X contains a convex, balanced and absorbing open set.
From [9] (Theorem on p. 26 and Definition 1 on p. 33), a normed linear space is a locally convex topological space.
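To connect Definitions 1 and 2 with the normed-space case (an illustrative aside, not part of the original argument): in a normed linear space the open unit ball is convex, balanced, and absorbing, which is essentially why every normed space is locally convex. A rough numerical spot-check in R², with hypothetical helper names:

import numpy as np

rng = np.random.default_rng(1)
in_U = lambda x: np.linalg.norm(x) < 1.0          # the open unit ball U of R^2

pts = rng.uniform(-0.7, 0.7, size=(200, 2))       # sample points of U (all have norm < 1)
alphas = rng.uniform(0.0, 1.0, size=199)

# Convex: convex combinations of points of U stay in U.
convex_ok = all(in_U(a * x + (1 - a) * y)
                for a, x, y in zip(alphas, pts[:-1], pts[1:]))
# Balanced: alpha * x stays in U for x in U and |alpha| <= 1.
balanced_ok = all(in_U(a * x)
                  for a, x in zip(rng.uniform(-1.0, 1.0, size=200), pts))
# Absorbing: every point of the space is swallowed by a large enough dilation of U.
sample = rng.uniform(-5.0, 5.0, size=(100, 2))
absorbing_ok = all(in_U(x / (2.0 * np.linalg.norm(x) + 1e-12)) for x in sample)

print(convex_ok, balanced_ok, absorbing_ok)       # True True True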

3. The Sub Convexlikeness

This section shows that the definitions of sub convexlikeness and subconvexlikeness given by Jeyakumar [2,3] are in fact equivalent.
A set-valued function f: X → 2^Y is said to be Y+-convex on D if ∀x1, x2 ∈ D, ∀α ∈ [0, 1], one has
αf(x1) + (1 − α)f(x2) ≥Y+ f(αx1 + (1 − α)x2).
The following definition of convexlikeness was introduced by Ky Fan [1].
A set-valued function f: X → 2^Y is said to be Y+-convexlike on D if ∀x1, x2 ∈ D, ∀α ∈ [0, 1], ∃x3 ∈ D such that
αf(x1) + (1 − α)f(x2) ≥Y+ f(x3).
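A simple single-valued illustration (our addition, with Y = R and Y+ = [0, ∞)): any function that attains its minimum on D is R+-convexlike, because x3 can always be taken to be a minimizer; for instance, f(x) = sin x on D = [0, 2π] is convexlike although it is not convex. A small numerical spot-check of the convexlike inequality:

import numpy as np

rng = np.random.default_rng(2)
D = np.linspace(0.0, 2.0 * np.pi, 2001)   # a fine grid standing in for D = [0, 2*pi]
f = np.sin                                 # attains its minimum -1 at x = 3*pi/2
x3 = D[np.argmin(f(D))]                    # a grid minimizer, used as the witness x3

# Convexlike: for all x1, x2 in D and alpha in [0, 1], alpha*f(x1) + (1-alpha)*f(x2) >= f(x3).
ok = True
for _ in range(10000):
    x1, x2 = rng.choice(D, size=2)
    a = rng.random()
    ok &= a * f(x1) + (1.0 - a) * f(x2) >= f(x3) - 1e-12
print(bool(ok))   # True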
Jeyakumar [3] introduced the following subconvexlikeness.
Definition 3 (subconvexlike).
Let Y be a topological vector space, D ⊆ X a nonempty set, and Y+ a convex cone in Y. A set-valued map f: D → 2^Y is said to be Y+-subconvexlike on D if ∃θ ∈ int Y+, such that ∀x1, x2 ∈ D, ∀ε > 0, ∀α ∈ [0, 1], ∃x3 ∈ D, such that
εθ + αf(x1) + (1 − α)f(x2) ≥Y+ f(x3).
Lemma 1 is Lemma 2.3 in [14].
Lemma 1.
Let Y be a topological vector space, D ⊆ X a nonempty set, and Y+ a convex cone in Y. A set-valued map f: D → 2^Y is Y+-subconvexlike on D if, and only if, ∃θ ∈ int Y+, ∀x1, x2 ∈ D, ∀α ∈ [0, 1], ∃x3 ∈ D, such that
θ + αf(x1) + (1 − α)f(x2) ≥Y+ f(x3).
Bounded sets and bounded set-valued maps in a topological vector space can be defined as in the following Definition 4 (e.g., see Yosida [9]).
Definition 4 (bounded set-valued map).
A subset M of a real topological vector space Y is said to be a bounded subset if, for any given neighborhood U of 0, there exists a positive scalar β such that β⁻¹M ⊆ U, where β⁻¹M = {y ∈ Y: y = β⁻¹v, v ∈ M}. A set-valued map f: D → Y is said to be a bounded map if f(D) is a bounded subset of Y.
Jeyakumar [2] introduced the following sub convexlikeness.
Definition 5 (sub convexlike).
Let Y be a topological vector space, D ⊆ X a nonempty set, and Y+ a convex cone in Y. A set-valued map f: D → 2^Y is said to be Y+-sub convexlike on D if ∃ a bounded set-valued map u: D → Y, such that ∀x1, x2 ∈ D, ∀ε > 0, ∀α ∈ [0, 1], ∃x3 ∈ D, such that
εu + αf(x1) + (1 − α)f(x2) ≥Y+ f(x3).
Lemma 2.
Let Y be a locally convex topological space, D ⊆ X a nonempty set, and Y+ a convex cone in Y. A set-valued map f: D → 2^Y is Y+-subconvexlike on D if, and only if, f(D) + int Y+ is Y+-convex.
Theorem 1.
Let Y be a locally convex topological space, D ⊆ X a nonempty set, and Y+ a convex cone in Y. A set-valued map f: D → 2^Y is Y+-sub convexlike on D if, and only if, f(D) + int Y+ is Y+-convex.
Proof. 
The necessity.
Suppose that f is Y+-sub convexlike.
∀z1 = y1 + y+1, z2 = y2 + y+2 ∈ f(D) + int Y+, ∃x1, x2 ∈ D, such that y1 ∈ f(x1), y2 ∈ f(x2). For α ∈ [0, 1], let
y+0 = αy+1 + (1 − α)y+2.
Then, y+0 ∈ int Y+. Therefore, ∃ a neighborhood U of 0, such that y+0 + U is a neighborhood of y+0 and
y+0 + U ⊆ int Y+.
By Definition 2, we may assume that U is convex, balanced and absorbing.
From the assumption of sub convexlikeness, i.e., ∃ a bounded set-valued map u: D → Y, such that ∀x1, x2 ∈ D, ∀ε > 0, ∀α ∈ [0, 1], ∃x3 ∈ D, such that
εu + αf(x1) + (1 − α)f(x2) ⊆ f(x3) + Y+.
Therefore,
αz1 + (1 − α)z2 = αy1 + (1 − α)y2 + αy+1 + (1 − α)y+2 ∈ f(x3) − εu + Y+ + y+0.
Since U is convex, balanced and absorbing, we may take ε > 0 to be small enough, such that
εu ⊆ U.
Therefore,
−εu + y+0 ⊆ y+0 + U ⊆ int Y+.
Then,
αz1 + (1 − α)z2 = αy1 + (1 − α)y2 + αy+1 + (1 − α)y+2 ∈ f(x3) + int Y+ ⊆ f(D) + int Y+.
Hence,  f ( D ) + int Y + is a Y+-convex set.
The sufficiency.
If f(D) + int Y+ is Y+-convex then, by Lemma 2, f is Y+-subconvexlike. It is clear that Y+-subconvexlikeness implies Y+-sub convexlikeness. □
From Lemma 2 and Theorem 1, we obtain Theorem 2.
Theorem 2.
Let Y be a locally convex topological space, D ⊆ X a nonempty set, and Y+ a convex cone in Y. A set-valued map f: D → 2^Y is Y+-subconvexlike on D if, and only if, f is Y+-sub convexlike on D.
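The criterion in Lemma 2 and Theorems 1 and 2 is easy to probe numerically in simple cases. The sketch below is an illustration we add (with Y = R, Y+ = [0, ∞), and a deliberately non-convex single-valued f): for f(x) = sin x on [0, 2π], the set f(D) + int Y+ is the open half-line (−1, ∞), which is convex, so f is sub convexlike (equivalently, subconvexlike) even though it is not convex.

import numpy as np

rng = np.random.default_rng(3)
D = np.linspace(0.0, 2.0 * np.pi, 1001)
f = np.sin                                  # non-convex; Y = R, Y+ = [0, inf)
min_f = f(D).min()                          # here min f = -1 on the grid

# f(D) + int Y+ = { f(x) + c : x in D, c > 0 } = (min f, inf), an open half-line.
def in_f_D_plus_int_cone(y):
    return y > min_f

ok = True
for _ in range(10000):
    y1 = f(rng.choice(D)) + rng.exponential()   # an element f(x1) + c1 with c1 > 0
    y2 = f(rng.choice(D)) + rng.exponential()
    a = rng.random()
    ok &= in_f_D_plus_int_cone(a * y1 + (1.0 - a) * y2)
print(bool(ok))   # True: sampled convex combinations stay in the set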

4. Vector Saddle-Point Theorems

This section presents vector saddle-point theorems for set-valued optimization problems.
A set-valued map f: D → 2^Y is said to be affine on D if ∀x1, x2 ∈ D, ∀β ∈ R, there holds
βf(x1) + (1 − β)f(x2) = f(βx1 + (1 − β)x2).
We introduce the notion of sub affinelike maps as follows.
Definition 6 (sub affinelike).
A set-valued map f: D → 2^Y is said to be Y+-sub affinelike on D if ∀x1, x2 ∈ D, ∀α ∈ (0, 1), ∃v ∈ int Y+, ∃x3 ∈ D, such that
v + αf(x1) + (1 − α)f(x2) = f(x3).
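For intuition (an added example): every single-valued affine map is sub affinelike, and the notion is strictly weaker. For instance, with Y = R, Y+ = [0, ∞) and the non-convex domain D = (0, 1) ∪ (2, 3), the identity map f(x) = x is sub affinelike on D, since a witness x3 can always be chosen in (2, 3) above the convex combination, while the affine identity cannot even be formed for combinations that leave D. A hypothetical numerical spot-check:

import numpy as np

rng = np.random.default_rng(4)
f = lambda x: x                              # the identity map; Y = R, Y+ = [0, inf)

def sample_D():
    """Sample the non-convex domain D = (0, 1) union (2, 3)."""
    return rng.uniform(0.0, 1.0) if rng.random() < 0.5 else rng.uniform(2.0, 3.0)

# Sub affinelike witness: pick x3 in (2, 3) above the convex combination and let
# v = f(x3) - alpha*f(x1) - (1 - alpha)*f(x2) > 0, so that v lies in int Y+ and
# v + alpha*f(x1) + (1 - alpha)*f(x2) = f(x3).
ok = True
for _ in range(10000):
    x1, x2, a = sample_D(), sample_D(), rng.uniform(0.01, 0.99)
    combo = a * f(x1) + (1.0 - a) * f(x2)
    x3 = rng.uniform(max(combo, 2.0), 3.0)   # a valid witness point of D
    v = f(x3) - combo
    ok &= (v > 0.0) and np.isclose(v + combo, f(x3))
print(bool(ok))   # True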
Theorem 3.
Let X, Y, Z and W be real topological vector spaces, D ⊆ X, and let Y+, Z+ and W+ be pointed convex cones of Y, Z and W, respectively. Assume that the functions f: D → Y, g: D → Z, h: D → W satisfy:
(a)
f and g are sub convexlike maps on D, i.e., ∃u1 ∈ int Y+, ∃u2 ∈ int Z+, such that ∀α ∈ (0, 1), ∀x1, x2 ∈ D, ∃x′, x″ ∈ D, such that
u1 + αf(x1) + (1 − α)f(x2) ≥Y+ f(x′), u2 + αg(x1) + (1 − α)g(x2) ≥Z+ g(x″);
(b)
h is a sub affinelike map on D, i.e., ∀α ∈ (0, 1), ∀x1, x2 ∈ D, ∃x‴ ∈ D, ∃v ∈ int W+, such that
v + αh(x1) + (1 − α)h(x2) = h(x‴);
(c)
int h(D) ≠ ∅;
(i) and (ii) denote the following systems:
(i) 
∃x ∈ D, s.t. f(x) < 0, g(x) ≤ 0, h(x) = 0;
(ii) 
∃(ξ, η, ς) ∈ (Y+* × Z+* × W*) \ {(0Y*, 0Z*, 0W*)} such that
ξ(f(x)) + η(g(x)) + ς(h(x)) ≥ 0, ∀x ∈ D.
If (i) has no solutions, then (ii) has solutions.
Moreover, if (ii) has a solution (ξ, η, ς) with ξ ≠ 0Y*, then (i) has no solutions.
Proof. 
∀w1, w2 ∈ ∪t>0 t h(D) + int W+ and ∀α ∈ (0, 1), ∃x1, x2 ∈ D, b1, b2 ∈ int W+, t1, t2 > 0, such that
αw1 + (1 − α)w2 = αt1h(x1) + (1 − α)t2h(x2) + αb1 + (1 − α)b2 = (αt1 + (1 − α)t2)[(αt1/(αt1 + (1 − α)t2))h(x1) + ((1 − α)t2/(αt1 + (1 − α)t2))h(x2)] + αb1 + (1 − α)b2.
By the assumption (b), ∃x3 ∈ D, v ∈ int W+, ε > 0, such that
(αt1/(αt1 + (1 − α)t2))h(x1) + ((1 − α)t2/(αt1 + (1 − α)t2))h(x2) = h(x3) − εv.
Since αb1 + (1 − α)b2 ∈ int W+, ∃ a neighborhood U of 0 in W for which V = αb1 + (1 − α)b2 + U is a neighborhood of αb1 + (1 − α)b2 with V ⊆ int W+.
By Definition 2, we may take ε > 0 to be small enough, such that
ε(αt1 + (1 − α)t2)v ∈ U.
Then,
αb1 + (1 − α)b2 − ε(αt1 + (1 − α)t2)v ∈ V ⊆ int W+.
Therefore,
αw1 + (1 − α)w2 ∈ αt1h(x1) + (1 − α)t2h(x2) + αb1 + (1 − α)b2 = (αt1 + (1 − α)t2)[(αt1/(αt1 + (1 − α)t2))h(x1) + ((1 − α)t2/(αt1 + (1 − α)t2))h(x2)] + αb1 + (1 − α)b2 = (αt1 + (1 − α)t2)h(x3) + αb1 + (1 − α)b2 − ε(αt1 + (1 − α)t2)v ∈ ∪t>0 t h(D) + int W+.
So, ∪t>0 t h(D) + int W+ is a convex set.
Similarly, ∪t>0 t f(D) + int Y+ and ∪t>0 t g(D) + int Z+ are also convex. Therefore, the set
C = (∪t>0 t f(D) + int Y+) × (∪t>0 t g(D) + int Z+) × (∪t>0 t h(D) + int W+)
is convex.
From assumption (c), int C ≠ ∅. We also have (0Y, 0Z, 0W) ∉ C since (i) has no solutions. Therefore, according to the separation theorem for convex sets in topological vector spaces, ∃ a nonzero vector (ξ, η, ς) ∈ Y* × Z* × W*, such that
ξ(t1f(x) + y0) + η(t2g(x) + z0) + ς(t3h(x) + w0) ≥ 0,
for ∀t1, t2, t3 > 0, ∀x ∈ D, y0 ∈ int Y+, z0 ∈ int Z+, w0 ∈ int W+.
Since int Y+, int Z+ and int W+ are convex cones, we obtain
ξ(t1f(x) + λ1y0) + η(t2g(x) + λ2z0) + ς(t3h(x) + λ3w0) ≥ 0,
∀x ∈ D, y0 ∈ int Y+, z0 ∈ int Z+, w0 ∈ int W+, λi > 0 (i = 1, 2, 3), ti > 0 (i = 1, 2, 3).
Letting λi → 0 (i = 2, 3) and ti → 0 (i = 1, 2, 3), we obtain
ξ(y0) ≥ 0, ∀y0 ∈ int Y+.
Therefore, ξ(y) ≥ 0, ∀y ∈ Y+. Hence, ξ ∈ Y+*. Similarly, η ∈ Z+* and ς ∈ W*.
Thus,
(ξ, η, ς) ∈ Y+* × Z+* × W*.
Therefore,
ξ(f(x)) + η(g(x)) + ς(h(x)) ≥ 0, ∀x ∈ D,
which means that (ii) has solutions.
On the other hand, suppose that (ii) has a solution (ξ, η, ς) with ξ ≠ 0Y*, i.e.,
ξ(f(x)) + η(g(x)) + ς(h(x)) ≥ 0, ∀x ∈ D.
We are going to prove that (i) has no solution.
Otherwise, if (i) had a solution x̃ ∈ D, then f(x̃) < 0, g(x̃) ≤ 0, h(x̃) = 0. Hence, one would have
ξ(f(x̃)) + η(g(x̃)) + ς(h(x̃)) < 0,
which is a contradiction. The proof is completed. □
We consider the following optimization problem with set-valued maps:
(VP)   Y+-min  f (x)
s.t. gi(x) ∩ (−Zi+) ≠ ∅, i = 1, 2, …, m,
0 ∈ hj(x), j = 1, 2, …, n,
x ∈ D,
where f: X → 2^Y, gi: X → 2^Zi, hj: X → 2^Wj are set-valued maps, Zi+ is a closed convex cone in Zi, and D is a nonempty subset of X.
Definition 7 (weakly efficient solution).
A point x̄ ∈ F is said to be a weakly efficient solution of (VP) if there exists no x ∈ D satisfying f(x̄) > f(x), where
F := {x ∈ D: g(x) ≤ 0, h(x) = 0}.
Let
P min[A, Y+] = {y ∈ A: (y − A) ∩ int Y+ = ∅},
P max[A, Y+] = {y ∈ A: (A − y) ∩ int Y+ = ∅}.
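To make Definition 7 and the set P min concrete (an illustrative sketch we add, using single-valued data with Y = R², Y+ = R²+, one scalar inequality constraint and no equality constraint): on a discretized feasible set, a point is weakly efficient exactly when no other feasible point improves both objectives strictly. The problem data below are hypothetical.

import numpy as np

# A toy bi-objective instance on D = [0, 2] (discretized): f(x) = (x, (x - 1)^2),
# one inequality constraint g(x) = x - 1.5 <= 0 and no equality constraint.
D = np.linspace(0.0, 2.0, 401)
f = lambda x: np.array([x, (x - 1.0) ** 2])
feasible = [x for x in D if x - 1.5 <= 0]

def strictly_dominates(y1, y2):
    """y2 - y1 lies in int Y+ = int R^2_+ : y1 is strictly smaller in both components."""
    return bool(np.all(y1 < y2))

def weakly_efficient(xbar):
    """No feasible x gives f(x) strictly below f(xbar) in both objectives (Definition 7)."""
    ybar = f(xbar)
    return not any(strictly_dominates(f(x), ybar) for x in feasible)

eff = [x for x in feasible if weakly_efficient(x)]
print(round(min(eff), 3), round(max(eff), 3))   # the weakly efficient set is about [0, 1]

In the same spirit, P min[f(F), Y+] collects the objective vectors f(x) of these weakly efficient points that are not strictly dominated within f(F).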
In the sequel, B(W, Y) denotes the set of all continuous linear mappings T from W to Y, and B+(Z, Y) denotes the set of all non-negative continuous linear mappings S from Z to Y, where non-negative means that S(z) ∈ Y+, ∀z ∈ Z+. Write
L ( x ¯ , S ¯ , T ¯ ) = f ( x ¯ ) + S ¯ ( g ( x ¯ ) ) + T ¯ ( h ( x ¯ ) ) .
Definition 8 (vector saddle-point).
(x̄, S̄, T̄) ∈ X × B+(Z, Y) × B(W, Y) is said to be a vector saddle-point of L(x, S, T) if
L(x̄, S̄, T̄) ∩ P min[L(X, S̄, T̄), Y+] ∩ P max[L(x̄, B+(Z, Y), B(W, Y)), Y+] ≠ ∅,
where
P max[L(x̄, B+(Z, Y), B(W, Y)), Y+] = {μ: μ ∈ P max[L(x̄, S, T), Y+], (S, T) ∈ B+(Z, Y) × B(W, Y)}.
Theorem 4.
(x̄, S̄, T̄) ∈ X × B+(Z, Y) × B(W, Y) is a vector saddle-point of L(x, S, T) if and only if ∃ȳ ∈ f(x̄), z̄ ∈ g(x̄), such that
(i)
ȳ ∈ P min[L(X, S̄, T̄), Y+];
(ii)
g(x̄) ⊆ −Z+, h(x̄) = {0};
(iii)
(f(x̄) − ȳ − S̄(z̄)) ∩ int Y+ = ∅.
Proof. 
The sufficiency. Suppose that the conditions (i)–(iii) are satisfied. Note that g(x̄) ⊆ −Z+ and h(x̄) = {0} imply
S(g(x̄)) ⊆ −Y+, T(h(x̄)) = {0}, ∀(S, T) ∈ B+(Z, Y) × B(W, Y),
and the condition (i) states that
{ȳ − [f(X) + S̄(g(X)) + T̄(h(X))]} ∩ int Y+ = ∅.
So, Y+ + int Y+ ⊆ int Y+ and S̄(z̄) ∈ −Y+ imply
{ȳ + S̄(z̄) + T̄(w̄) − [f(X) + S̄(g(X)) + T̄(h(X))]} ∩ int Y+ = ∅.
Hence,
ȳ + S̄(z̄) + T̄(w̄) ∈ P min[L(X, S̄, T̄), Y+].
On the other hand, since (f(x̄) − [ȳ + S̄(z̄)]) ∩ int Y+ = ∅, from int Y+ + Y+ ⊆ int Y+, we conclude that
{∪(S,T)∈B+(Z,Y)×B(W,Y) [f(x̄) + S(g(x̄)) + T(h(x̄))] − [ȳ + S̄(z̄) + T̄(w̄)]} ∩ int Y+ = ∅.
Hence,
ȳ + S̄(z̄) + T̄(w̄) ∈ P max[L(x̄, B+(Z, Y), B(W, Y)), Y+].
Consequently,
L(x̄, S̄, T̄) ∩ P min[L(X, S̄, T̄), Y+] ∩ P max[L(x̄, B+(Z, Y), B(W, Y)), Y+] ≠ ∅.
Therefore, (x̄, S̄, T̄) ∈ X × B+(Z, Y) × B(W, Y) is a vector saddle-point of L(x, S, T).
The necessity. Assume that (x̄, S̄, T̄) ∈ X × B+(Z, Y) × B(W, Y) is a vector saddle-point of L(x, S, T). From Definition 8, one has
L(x̄, S̄, T̄) ∩ P min[L(X, S̄, T̄), Y+] ∩ P max[L(x̄, B+(Z, Y), B(W, Y)), Y+] ≠ ∅.
So, ∃ȳ ∈ f(x̄), z̄ ∈ g(x̄), w̄ ∈ h(x̄), i.e.,
ȳ + S̄(z̄) + T̄(w̄) ∈ L(x̄, S̄, T̄) = f(x̄) + S̄(g(x̄)) + T̄(h(x̄)),
such that
(f(x̄) + S(g(x̄)) + T(h(x̄)) − [ȳ + S̄(z̄) + T̄(w̄)]) ∩ int Y+ = ∅, ∀(S, T) ∈ B+(Z, Y) × B(W, Y),
and
(ȳ + S̄(z̄) + T̄(w̄) − [f(X) + S̄(g(X)) + T̄(h(X))]) ∩ int Y+ = ∅.
Taking T = T̄, we obtain
S(z) − S̄(z̄) ∉ int Y+, ∀z ∈ g(x̄), ∀S ∈ B+(Z, Y).
We aim to show that z̄ ∈ −Z+.
Otherwise, since 0 ∈ −Z+, if z̄ ∉ −Z+, we would have z̄ ≠ 0. Because Z+ is a closed convex set (hence so is −Z+), by the separation theorem, ∃η ∈ Z* \ {0} such that
η(z̄) > η(−tz+), ∀z+ ∈ Z+, ∀t > 0,
i.e.,
η(z+) > −(1/t)η(z̄), ∀z+ ∈ Z+, ∀t > 0.
Letting t → ∞, we obtain η(z+) ≥ 0, ∀z+ ∈ Z+, which means that η ∈ Z+* \ {0}. Meanwhile, 0 ∈ −Z+ yields η(z̄) > 0. Take z̃ ∈ int Y+ and let
S(z) = (η(z)/η(z̄)) z̃ + S̄(z).
Then, S ∈ B+(Z, Y) and
S(z̄) − S̄(z̄) = z̃ ∈ int Y+.
This is a contradiction. Therefore,
z̄ ∈ −Z+.
At this point, we aim to prove that g(x̄) ⊆ −Z+.
Otherwise, if g(x̄) ⊄ −Z+, then ∃z0 ∈ g(x̄) such that 0 ≠ z0 ∉ −Z+. Similar to the above, ∃η0 ∈ Z+* \ {0} such that η0(z0) > 0. Take z̃ ∈ int Y+ and let
S0(z) = (η0(z)/η0(z0)) z̃.
Then, S0 ∈ B+(Z, Y) and S0(z0) = z̃ ∈ int Y+. We proved that z̄ ∈ −Z+, so S̄(z̄) ∈ −Y+. Therefore,
S0(z0) − S̄(z̄) ∈ int Y+ + Y+ ⊆ int Y+.
Again, a contradiction.
Therefore, g(x̄) ⊆ −Z+. Similarly, one has h(x̄) ⊆ −W+. Taking S = S̄ above, we obtain
[T(h(x̄)) − T̄(w̄)] ∩ int Y+ = ∅.
Hence,
T(w̄) − T̄(w̄) ∉ int Y+, ∀T ∈ B(W, Y).
Similarly, we obtain
T(w) − T̄(w̄) ∉ int Y+, ∀w ∈ h(x̄), ∀T ∈ B(W, Y).
If w̄ ≠ 0 then, since h(x̄) ⊆ −W+ and W+ is a pointed cone, we have w̄ ∉ W+. Because W+ is a closed convex set, by the separation theorem, ∃ς ∈ W* such that
ς(w) < ς(w̄), ∀w ∈ W+.
So, ς(w̄) ≠ 0 since 0 ∈ W+. Take y0 ∈ int Y+ and define T0 ∈ B(W, Y) by
T0(w) = (ς(w)/ς(w̄)) y0 + T̄(w).
Then,
T0(w̄) − T̄(w̄) = y0 ∈ int Y+,
a contradiction. Therefore, w̄ = 0. Thus,
0 ∈ h(x̄).
At this point, we aim to prove h ( x ¯ ) = { 0 } .
Otherwise, if ∃w0 ∈ h(x̄) with w0 ≠ 0, then ∃ς0 ∈ W* such that ς0(w) < ς0(w0), ∀w ∈ W+. So, ς0(w0) ≠ 0. Take y0 ∈ int Y+ and define T0 ∈ B(W, Y) by
T0(w) = (ς0(w)/ς0(w0)) y0.
Then, T0(w0) = y0 ∈ int Y+. This contradiction implies that we must have
h ( x ¯ ) = { 0 } .
We conclude that
ȳ ∈ P min[L(X, S̄, T̄), Y+],
and
(f(x̄) − ȳ − S̄(z̄)) ∩ int Y+ = ∅.
We proved that, if (x̄, S̄, T̄) ∈ X × B+(Z, Y) × B(W, Y) is a vector saddle-point of L(x, S, T), then the conditions (i)–(iii) hold. □
Theorem 5.
If (x̄, S̄, T̄) ∈ X × B+(Z, Y) × B(W, Y) is a vector saddle-point of L(x, S, T), and if 0 ∈ S̄(g(x̄)), then x̄ is a weakly efficient solution of (VP).
Proof. 
Assume that (x̄, S̄, T̄) ∈ D × B+(Z, Y) × B(W, Y) is a vector saddle-point of L(x, S, T); from Theorem 4, we have
S̄(g(x̄)) ⊆ −Y+, h(x̄) = {0}.
So, x̄ ∈ F (i.e., x̄ is a feasible solution of (VP)), and ∃ȳ ∈ f(x̄) such that ȳ ∈ P min[L(X, S̄, T̄), Y+], i.e.,
(ȳ − [f(X) + S̄(g(X)) + T̄(h(X))]) ∩ int Y+ = ∅.
Thus,
(ȳ − [f(D) + S̄(g(x̄)) + T̄(h(x̄))]) ∩ int Y+ = ∅.
Since 0 ∈ S̄(g(x̄)), one has
(ȳ − f(D)) ∩ int Y+ = ∅.
Therefore,  x ¯  is a weakly efficient solution of (VP). □

5. Vector Lagrangian Theorems

Definition 9 (vector Lagrangian map).
The vector Lagrangian map L: X × B+(Z, Y) × B(W, Y) → 2^Y of (VP) is defined by the set-valued map
L(x, S, T) = f(x) + S(g(x)) + T(h(x)).
Given (S, T) ∈ B+(Z, Y) × B(W, Y), we consider the minimization problem induced by (VP):
(VPST)   Y+-min L(x, S, T), s.t. x ∈ D.
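For a scalarized illustration of Definition 9 and (VPST) (our addition, with single-valued data and Y = Z = W = R, Y+ = Z+ = W+ = [0, ∞)): fixing S(z) = s·z with s ≥ 0 and T(w) = t·w turns (VPST) into the unconstrained minimization of f(x) + s·g(x) + t·h(x) over D, i.e., an ordinary Lagrangian relaxation. A minimal numerical sketch with hypothetical data:

import numpy as np

# Toy single-valued data: minimize f(x) = (x - 2)^2 subject to g(x) = x - 1 <= 0 over
# D = [-3, 3]; the equality map h is taken identically zero, so T plays no role here.
D = np.linspace(-3.0, 3.0, 6001)
f = lambda x: (x - 2.0) ** 2
g = lambda x: x - 1.0
h = lambda x: 0.0 * x

def vpst_minimizer(s, t):
    """Minimize the Lagrangian map L(x, S, T) = f(x) + s*g(x) + t*h(x) over the grid D."""
    L = f(D) + s * g(D) + t * h(D)
    return D[int(np.argmin(L))]

# The constrained minimizer is x = 1 (the constraint x <= 1 is active at the solution).
print(round(vpst_minimizer(0.0, 0.0), 3))   # 2.0 : the unconstrained minimizer of f
print(round(vpst_minimizer(2.0, 0.0), 3))   # 1.0 : the multiplier s = 2 recovers x = 1

For this instance, the multiplier s = 2 makes the (VPST) minimizer coincide with the optimal solution x̄ = 1 of the constrained problem, in the spirit of Theorem 6.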
Definition 10 (Slater Constrained Qualification (SC)).
Let x̄ ∈ F. We say that (VP) satisfies the Slater Constrained Qualification at x̄ if the following conditions hold:
(1) 
∃x ∈ D, s.t. hj(x) = 0, gi(x) < 0;
(2) 
0 ∈ int hj(D) for all j.
According to the following Theorem 6, (VPST) can also be considered a dual problem of (VP).
Theorem 6.
Let x̄ ∈ D. Assume that f(x) − f(x̄), g(x), h(x) satisfy the generalized convexity condition (a), the generalized affineness condition (b), as well as the inner point condition (c), and that (VP) satisfies the Slater Constrained Qualification (SC). Then, x̄ ∈ D is a weakly efficient solution of (VP) if, and only if, ∃(S, T) ∈ B+(Z, Y) × B(W, Y), such that x̄ ∈ D is a weakly efficient solution of (VPST).
Proof. 
Assume that ∃(S, T) ∈ B+(Z, Y) × B(W, Y), such that x̄ ∈ D is a weakly efficient solution of (VPST). Then, there exist ȳ ∈ f(x̄), z̄ ∈ g(x̄), w̄ ∈ h(x̄), such that
(ȳ + S(z̄) + T(w̄) − [f(D) + S(g(D)) + T(h(D))]) ∩ int Y+ = ∅.
If (ȳ − f(D)) ∩ int Y+ ≠ ∅, then ∃y ∈ f(D) such that ȳ − y ∈ int Y+, i.e.,
ȳ + S(z̄) + T(w̄) − [y + S(z̄) + T(w̄)] ∈ int Y+.
This means that
(ȳ + S(z̄) + T(w̄) − [f(D) + S(g(D)) + T(h(D))]) ∩ int Y+ ≠ ∅,
which is a contradiction.
Therefore,
(ȳ − f(D)) ∩ int Y+ = ∅.
Hence, x̄ ∈ D is a weakly efficient solution of (VP).
Conversely, suppose that x̄ ∈ D is a weakly efficient solution of (VP). So, ∃ȳ ∈ f(x̄) such that there is not any x ∈ D for which f(x) − ȳ ∈ −int Y+. That is to say, there is not any x ∈ D such that
f(x) − ȳ ∈ −int Y+, g(x) ⊆ −Z+, 0W ∈ h(x).
By Theorem 3, ∃(ξ, η, ς) ∈ Y+* × Z+* × W* \ {(0Y*, 0Z*, 0W*)}, such that
ξ(f(x) − ȳ) + η(g(x)) + ς(h(x)) ≥ 0, ∀x ∈ D.  (1)
Since ȳ ∈ f(x̄) and 0W ∈ h(x̄), taking x = x̄ in (1), we obtain
η(g(x̄)) ≥ 0.
However, x̄ ∈ D and η ∈ Z+* imply that ∃z̄ ∈ g(x̄) ∩ (−Z+), for which
η(z̄) ≤ 0.
Hence, η(z̄) = 0, which means
0 ∈ η(g(x̄)).
Since x ∈ D implies 0W ∈ h(x), and g(x) ∩ (−Z+) ≠ ∅ implies ∃z ∈ g(x) ∩ (−Z+) with η(z) ≤ 0, we have
ξ(f(x) − ȳ) ≥ 0, ∀x ∈ D.
Because the Slater Constraint Qualification is satisfied, similar to the proof of Theorem 4, we have ξ ≠ 0Y*. So, we may take y0 ∈ int Y+ such that
ξ(y0) = 1.
Define the operators S: Z → Y and T: W → Y by
S(z) = η(z)y0, T(w) = ς(w)y0.
It is easy to see that
S ∈ B+(Z, Y), S(Z+) = η(Z+)y0 ⊆ Y+, T ∈ B(W, Y).
Therefore,
S(g(x̄)) = η(g(x̄))y0 ∋ 0 · y0 = 0Y.
Since x̄ ∈ D, we have 0W ∈ h(x̄). Hence,
0Y ∈ T(h(x̄)).
Therefore,
ȳ ∈ f(x̄) ⊆ f(x̄) + S(g(x̄)) + T(h(x̄)).
And then
ξ[f(x) + S(g(x)) + T(h(x))] = ξ(f(x)) + η(g(x))ξ(y0) + ς(h(x))ξ(y0) = ξ(f(x)) + η(g(x)) + ς(h(x)) ≥ ξ(ȳ), ∀x ∈ D,
i.e.,
ξ[f(x) − ȳ + S(g(x)) + T(h(x))] ≥ 0, ∀x ∈ D.
Taking F(x) = f(x) + S(g(x)) + T(h(x)), G(x) = {0Z} and H(x) = {0W} and applying Theorem 3 to the functions F(x) − ȳ, G(x), H(x), we have
(ȳ − [f(D) + S(g(D)) + T(h(D))]) ∩ int Y+ = ∅,
as well as
ȳ ∈ F(x̄) = f(x̄) + S(g(x̄)) + T(h(x̄)),
since 0Y ∈ S(g(x̄)), 0Y ∈ T(h(x̄)).
Consequently, x̄ ∈ D is a weakly efficient solution of (VPST).
This completes the proof. □
Definition 11 (NNAMCQ).
Let x̄ ∈ F. We say that (VP) satisfies the No Nonzero Abnormal Multiplier Constraint Qualification (NNAMCQ) at x̄ if there is no nonzero vector (η, ς) ∈ Z1* × … × Zm* × W1* × … × Wn* satisfying the system
min over x ∈ D ∩ U(x̄) of [η1g1(x) + … + ηmgm(x) + ς1h1(x) + … + ςnhn(x)] = 0 and η1g1(x̄) + … + ηmgm(x̄) = 0,
where  U ( x ¯ )  is some neighborhood of  x ¯ .
Similar to the proof of Theorem 6, we obtain Theorem 7.
Theorem 7.
Let x̄ ∈ D. Assume that f(x) − f(x̄), g(x), h(x) satisfy the generalized convexity condition (a), the generalized affineness condition (b), as well as the inner point condition (c). If x̄ is a weakly efficient solution of (VP), then there exists a vector Lagrangian multiplier (S, T) ∈ B+(Z, Y) × B(W, Y) such that x̄ ∈ D is a weakly efficient solution of (VPST). Conversely, if (NNAMCQ) holds at x̄ ∈ D, and if there exists a vector Lagrangian multiplier (S, T) ∈ B+(Z, Y) × B(W, Y) such that x̄ is a weakly efficient solution of (VPST), then x̄ is a weakly efficient solution of (VP).

6. Conclusions

Jeyakumar [2] introduced the following definition of sub convexlike functions for single-valued functions.
Let Y be a topological vector space and D ⊆ X a nonempty set. A set-valued map f: D → 2^Y is said to be Y+-sub convexlike on D if ∃ a bounded set-valued map u: D → Y, such that ∀x1, x2 ∈ D, ∀ε > 0, ∀α ∈ [0, 1], ∃x3 ∈ D, such that
εu + αf(x1) + (1 − α)f(x2) ≥Y+ f(x3),
where the partial order is induced by a convex cone Y + of Y.
Jeyakumar [3] introduced the following subconvexlikeness.
A set-valued map f: D → 2^Y is said to be Y+-subconvexlike on D if ∃θ ∈ int Y+, such that ∀x1, x2 ∈ D, ∀ε > 0, ∀α ∈ [0, 1], ∃x3 ∈ D, such that
εθ + αf(x1) + (1 − α)f(x2) ≥Y+ f(x3).
In this paper, we proved that the above two generalized convexities are equivalent in locally convex topological spaces; since Banach spaces (and, in particular, n-dimensional Euclidean spaces) are locally convex topological spaces, the two definitions coincide there as well. We then studied set-valued vector optimization problems and obtained vector saddle-point theorems and vector Lagrangian theorems. Our optimization problems have inequality, equality, as well as abstract constraints. Our inequality constraints are generalized convex maps, and the generalized convexities are defined by partial order relations.
A set-valued map f: D → 2^Y is said to be affine on D if ∀x1, x2 ∈ D, ∀β ∈ R, there holds
βf(x1) + (1 − β)f(x2) = f(βx1 + (1 − β)x2).
We defined the following sub affinelike maps in order to weaken the condition of the “equality constraints” for optimization problems.
A set-valued map f: D → 2^Y is said to be Y+-sub affinelike on D if ∀x1, x2 ∈ D, ∀α ∈ (0, 1), ∃v ∈ int Y+, ∃x3 ∈ D, there holds
v + αf(x1) + (1 − α)f(x2) = f(x3).
Then, we considered the following optimization problem with set-valued maps:
(VP)   Y+-min   f (x)
s.t. gi(x) ∩ (−Zi+) ≠ ∅, i = 1, 2, …, m,
0 ∈ hj(x), j = 1, 2, …, n,
x ∈ D,
where f: X → 2^Y and gi: X → 2^Zi are sub convexlike and hj: X → 2^Wj are sub affinelike.
For a single-valued situation, the above optimization problem (VP) may be written as follows.
Y+-min   f (x)
s.t. gi(x) ≤ 0, i = 1, 2, …, m,
hj(x) = 0, j = 1, 2, …, n,
x ∈ D.
We obtained vector saddle-point theorems and vector Lagrangian theorems for the set-valued optimization problem (VP). Our Theorem 3 is a generalization of the theorems of the alternative in [2,3] and a modification of the theorems of the alternative in [4,7,8,11,13]. Our saddle-point theorems (Theorems 4 and 5) are generalizations of the saddle-point theorem in [4,14] and modifications of the saddle-point theorems in [14,16]. Our Lagrangian theorems (Theorems 6 and 7) are generalizations of the Lagrangian theorems in [14] and modifications of those in [10,15]. We can also extend the results in [12] using the methods of this paper.

Funding

This research received no external funding.

Data Availability Statement

All relevant data are within the manuscript.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Fan, K. Minimax theorems. Proc. Natl. Acad. Sci. USA 1953, 39, 42–47.
2. Jeyakumar, V. Convexlike Alternative Theorems and Mathematical Programming. Optimization 1985, 16, 643–652.
3. Jeyakumar, V. A Generalization of a Minimax Theorem of Fan via a Theorem of the Alternative. J. Optim. Theory Appl. 1986, 48, 525–533.
4. Gutiérrez, C.; Huerga, L.; Novo, V. Scalarization and Saddle Points of Approximate Proper Solutions in Nearly Subconvexlike Vector Optimization Problems. J. Math. Anal. Appl. 2012, 389, 1046–1058.
5. Hern, E.; Novo, V. Weak and Proper Efficiency in Set-Valued Optimization on Real Linear Spaces. J. Convex Anal. 2007, 14, 275–296.
6. Xu, Y.D.; Li, S.J. Tightly Proper Efficiency in Vector Optimization with Nearly Cone-Subconvexlike Set-Valued Maps. J. Inequal. Appl. 2011, 2011, 839679.
7. Yuan, C.H.; Rong, W.D. ε-Properly Efficiency of Multiobjective Semidefinite Programming with Set-Valued Functions. Math. Probl. Eng. 2017, 2017, 5978130.
8. Zhou, Z.-A.; Peng, J.-W. A Generalized Alternative Theorem of Partial and Generalized Cone Subconvexlike Set-Valued Maps and Its Applications in Linear Spaces. J. Appl. Math. 2012, 2012, 370654.
9. Yosida, K. Functional Analysis; Springer: Berlin, Germany, 1978.
10. Chen, J.; Xu, Y.; Zhang, K. Approximate Weakly Efficient Solutions of Set-Valued Vector Equilibrium Problems. J. Inequal. Appl. 2018, 2018, 181.
11. Huang, Y.W. A Farkas-Makowski Type Alternative Theorem and Its Applications to Set-Valued Equilibrium Problems. J. Nonlinear Convex Anal. 2002, 3, 100–118.
12. Zhang, C.-L. Set-valued equilibrium problems based on the concept of null set with applications. Optimization 2022, 2022, 2109969.
13. Galán, M.R. A Theorem of the Alternative with an Arbitrary Number of Inequalities and Quadratic Programming. J. Glob. Optim. 2017, 69, 427–442.
14. Li, Z.F.; Wang, S.Y. Lagrange Multipliers and Saddle Points in Multiobjective Programming. J. Optim. Theory Appl. 1994, 83, 63–81.
15. Zhou, Y.Y.; Zhou, J.C.; Yang, X.Q. Existence of Augmented Lagrange Multipliers for Cone Constrained Optimization Problems. J. Glob. Optim. 2014, 58, 243–260.
16. Zeng, R. Generalized Gordon Alternative Theorem with Weakened Convexity and Its Applications. Optimization 2002, 51, 709–717.