Article

Interval-Valued Multiobjective Programming Problems Based on Convex Cones

Department of Mathematics, National Kaohsiung Normal University, Kaohsiung 804, Taiwan
Symmetry 2024, 16(9), 1120; https://doi.org/10.3390/sym16091120
Submission received: 21 July 2024 / Revised: 24 August 2024 / Accepted: 27 August 2024 / Published: 28 August 2024
(This article belongs to the Special Issue Advanced Optimization Methods and Their Applications)

Abstract

New solution concepts for interval-valued multiobjective optimization problems based on ordering cones are proposed in this paper. An equivalence relation is introduced to partition the collection of all bounded closed intervals into equivalence classes. The family of all equivalence classes is called a quotient set. This quotient set becomes a vector space under suitable definitions of vector addition and scalar multiplication. Since the notions of ordering cone and partial ordering on a vector space are essentially equivalent, an ordering on the quotient set can be defined in order to study Pareto optimal solutions of multiobjective optimization problems. In this paper, we consider multiobjective optimization problems whose coefficients are bounded closed intervals. With the help of convex cones, we study the Pareto optimal solutions of multiobjective optimization problems with interval-valued coefficients.

1. Introduction

The coefficients of objective functions and constraint functions in optimization problems are usually taken to be real numbers. In this case, the problems are categorized as deterministic optimization problems. When uncertainties are taken into account in optimization problems, the coefficients will be taken to be uncertain quantities. There are two kinds of uncertainties that can be considered, which are randomness and fuzziness.
When the randomness is taken into account, the coefficients of objective functions and constraint functions in optimization problems can be assumed to be random variables with known probability distributions. In this case, the so-called stochastic optimization problems can be studied using the probability theory. We can refer to the books written by Birge and Louveaux [1], Kall [2], Prékopa [3], Stancu-Minasian [4], and Vajda [5], which address the main stream of this topic and provide many useful techniques to solve the stochastic optimization problems.
When the fuzziness is taken into account, the coefficients of objective functions and constraint functions in optimization problems can be assumed to be fuzzy quantities. In this case, the so-called fuzzy optimization problems can be studied using the fuzzy set theory. We can refer to the books written by Słowiński [6] and Delgado et al. [7], which provide many interesting concepts and topics. On the other hand, the book edited by Słowiński and Teghem [8] presents the fusion of randomness and fuzziness in optimization problems. In particular, Inuiguchi and Ramík [9] provide a brief review of fuzzy optimization and a comparison with stochastic optimization in portfolio selection problem.
Without considering randomness and fuzziness, the bounded closed intervals in $\mathbb{R}$ can also be used to formulate the uncertainties. When the coefficients of objective functions and constraint functions in optimization problems are assumed to be bounded closed intervals in $\mathbb{R}$, the so-called interval-valued optimization problems can be studied using interval analysis by referring to Moore [10,11]. The probability distributions in stochastic optimization problems and the membership functions in fuzzy optimization problems are frequently determined by the decision-makers, which can be subjective, and it is sometimes very difficult to determine suitable probability distributions and membership functions. In this case, we can consider bounded closed intervals in optimization problems to take care of the uncertainties. Although the specification of bounded closed intervals may still be judged as a subjective viewpoint, we may argue that bounding the uncertain data, by determining bounded closed intervals that restrict the possible observed data, is easier to handle than specifying the probability distributions in stochastic optimization problems and the membership functions in fuzzy optimization problems.
Suppose that a company produces $p$ products and has $n$ objectives that should be minimized. The amounts of these $p$ products are denoted by $x_1, \ldots, x_p$. For the $i$th objective, the cost of producing $x_1, \ldots, x_p$ is
$$C_i(x_1, \ldots, x_p) = c_{i1}x_1 + \cdots + c_{ip}x_p,$$
where the quantities $c_{ij}$ for $i = 1, \ldots, n$ and $j = 1, \ldots, p$ are real numbers representing the demands for producing one item of these $p$ products. The purpose of this company is to minimize these $n$ objectives. However, owing to some unexpected situation, the exact quantities of $c_{ij}$ cannot be determined:
  • When the randomness is taken into account, the coefficients $c_{ij}$ can be taken to be random variables with known probability distributions. For example, we may assume that the random variables $c_{ij}$ have normal distributions.
  • When the fuzziness is taken into account, the coefficients $c_{ij}$ can be assumed to be fuzzy numbers with known membership functions. For example, the coefficients $c_{ij}$ can be taken to be trapezoidal fuzzy numbers or L-R fuzzy numbers.
However, determining suitable probability distribution functions and membership functions is not an easy task. Also, their mathematical forms may be so complicated that numerical simulation is difficult to perform. An easier way is to consider bounded closed intervals. In practical situations, owing to the fluctuation, the decision-makers may only know that $c_{ij}$ is located in the bounded closed interval $[a_{ij}^L, a_{ij}^U]$ for $i = 1, \ldots, n$ and $j = 1, \ldots, p$. In this case, this company needs to minimize the following objective functions
$$F_i(x_1, \ldots, x_p) = [a_{i1}^L, a_{i1}^U]\,x_1 \oplus \cdots \oplus [a_{ip}^L, a_{ip}^U]\,x_p \quad \text{for } i = 1, \ldots, n,$$
which are functions with interval-valued coefficients.
Ishibuchi and Tanaka [12] proposed three ordering relations on the space of all bounded and closed intervals in $\mathbb{R}$ to study interval-valued optimization problems. Jiang et al. [13] used the ordering relation proposed by Ishibuchi and Tanaka [12], together with the concept of possibility degree, to transform interval-valued optimization problems into a bi-objective optimization problem. Chanas and Kuchta [14] extended those ordering relations by considering the $t_0$ and $t_1$ cuts of intervals in $\mathbb{R}$, and also used them to propose solution concepts for interval-valued optimization problems.
Costa et al. [15] presented a family of preference ordering relations on the space of all bounded and closed intervals in $\mathbb{R}$ such that many existing relations in the literature can be regarded as particular cases. This family of preference order relations was also used to define solution concepts for interval-valued optimization problems.
The Karush–Kuhn–Tucker optimality conditions in interval-valued optimization problems were studied by Wu [16], in which the Hukuhara derivative of interval-valued functions was considered. This work has been generalized in two different directions. Chalco-Cano et al. [17] studied the Karush–Kuhn–Tucker optimality conditions in interval-valued optimization problems by considering the generalized Hukuhara derivative of interval-valued functions. Jayswal et al. [18] studied the Karush–Kuhn–Tucker optimality conditions in interval-valued optimization problems by considering generalized convexity (also called invexity). Osuna-Gómez et al. [19] also studied the necessary and sufficient optimality conditions for unconstrained interval-valued optimization problems.
Li and Tian [20,21] studied interval-valued quadratic programming problems in which the coefficients were taken to be bounded closed intervals. The solution concept of this kind of interval-valued optimization problem was not considered. They designed a numerical method to compute the upper and lower bounds of the uncertain objective values of the uncertain quadratic programming problems, in which the uncertainties were assumed to take all possible values from the corresponding bounded closed intervals. In other words, many counterparts of the quadratic programming problem were considered, in which the coefficients were taken from the bounded closed intervals. The purpose of their approach was to find the lower and upper bounds of the objective values of all possible counterparts.
Soyster [22,23,24], Thuente [25], and Falk [26] provided some properties of inexact linear programming problems, which are a kind of interval-valued optimization problem. However, Pomerol [27] pointed out some drawbacks of Soyster's results and also provided some mild conditions to improve them. The main difference between interval-valued optimization problems and inexact programming problems lies in the solution concepts imposed upon the objective functions. The solution concept of an inexact programming problem uses the conventional solution concept. However, the solution concept of interval-valued optimization problems follows the solution concept of multiobjective optimization problems.
In this paper, we consider a different solution concept by introducing an equivalence relation to divide the set of all bounded closed intervals into equivalence classes, so that we can consider the concept of a convex cone in the family of all bounded closed intervals in $\mathbb{R}$. The purpose is to define the solution concepts of interval-valued optimization problems using ordering cones, since an ordering cone can induce a partial ordering. In this case, the solution concepts of interval-valued optimization problems can be elicited by using a concept similar to that of the Pareto optimal solution in multiobjective optimization problems.
In Section 2, an equivalence relation is introduced to divide the collection of all bounded closed intervals in $\mathbb{R}$ into equivalence classes. After that, we introduce a vector structure on the family of equivalence classes such that it turns into a vector space. In this case, we can use techniques from vector optimization to study interval-valued multiobjective optimization problems. In Section 3, we introduce the optimality notions in the quotient set using ordering cones, where the quotient set consists of all equivalence classes. In Section 4, the interval-valued multiobjective optimization problems are formulated by using ordering cones such that their solution concepts can be reasonably realized. On the other hand, sufficient conditions for obtaining the Pareto optimal solutions are also provided. In Section 5, we take into account the multiobjective optimization problem in which the coefficients of the objective functions and constraint functions are taken to be bounded closed intervals in $\mathbb{R}$. The purpose is to show that the optimal solutions of its transformed (conventional) optimization problem are Pareto optimal solutions of the original multiobjective optimization problem with interval-valued coefficients. In this case, in order to solve the interval-valued multiobjective optimization problems, we can simply solve the corresponding transformed (conventional) optimization problem by using well-known techniques. In Section 6, we present some practical problems. We use the results obtained in Section 4 and Section 5 to interpret the ordering cone as a partial ordering, and provide some examples to clarify the notion of ordering cones.

2. Compact Intervals and Vector Spaces

The bounded closed intervals in $\mathbb{R}$ are also called compact intervals. Each real number $a \in \mathbb{R}$ can also be treated as a compact interval $A = [a, a]$, which is called a degenerated interval. Let $\mathcal{I}$ denote the family of all compact intervals. Given a compact interval $A$, we also write $A = [a^L, a^U]$. Given two compact intervals
$$A = [a^L, a^U] \quad \text{and} \quad B = [b^L, b^U],$$
the addition is defined by
$$A \oplus B = \{a + b : a \in A \text{ and } b \in B\} = [a^L + b^L,\; a^U + b^U].$$
Given any $k \in \mathbb{R}$ and $A \in \mathcal{I}$, the scalar multiplication is defined by
$$kA = \{ka : a \in A\} = \begin{cases} [ka^L, ka^U] & \text{if } k \geq 0 \\ [ka^U, ka^L] & \text{if } k < 0. \end{cases}$$
It is clear to see
$$-A = \{-a : a \in A\} = [-a^U, -a^L]$$
and
$$A \ominus B = A \oplus (-B) = [a^L - b^U,\; a^U - b^L].$$
Given any compact interval $A = [a^L, a^U]$, for convenience, we also write
$$A^L = a^L \quad \text{and} \quad A^U = a^U.$$
In this case, it is clear to see
$$(A \oplus B)^L = A^L + B^L = a^L + b^L \quad \text{and} \quad (A \oplus B)^U = A^U + B^U = a^U + b^U.$$
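The interval arithmetic above is easy to put into code. The following Python sketch is only an illustration (the helper names iv_add, iv_neg, iv_sub, and iv_scale are ours, not notation from the paper); it represents a compact interval $[a^L, a^U]$ as a pair (lo, hi) and implements the endpoint formulas just stated.

```python
def iv_add(a, b):
    # A ⊕ B = [aL + bL, aU + bU]
    return (a[0] + b[0], a[1] + b[1])

def iv_neg(a):
    # -A = [-aU, -aL]
    return (-a[1], -a[0])

def iv_sub(a, b):
    # A ⊖ B = A ⊕ (-B) = [aL - bU, aU - bL]
    return iv_add(a, iv_neg(b))

def iv_scale(k, a):
    # kA: [k aL, k aU] if k >= 0, and [k aU, k aL] if k < 0
    return (k * a[0], k * a[1]) if k >= 0 else (k * a[1], k * a[0])

A, B = (1.0, 3.0), (-2.0, 5.0)
print(iv_add(A, B))       # (-1.0, 8.0)
print(iv_sub(A, B))       # (-4.0, 5.0)
print(iv_scale(-2.0, A))  # (-6.0, -2.0)
```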
A nondegenerate compact interval $A \in \mathcal{I}$ does not have an additive inverse element. It says that the collection $\mathcal{I}$ of all compact intervals cannot form a vector space under the vector addition $A \oplus B$ and scalar multiplication $kA$ defined above. Therefore, the existing techniques of vector optimization are not directly applicable to interval-valued optimization problems. In order to overcome this difficulty, we introduce a scalar function $\eta : \mathcal{I} \to \mathbb{R}$ to scalarize $\mathcal{I}$ by assigning to each compact interval $A \in \mathcal{I}$ a real number $\eta(A) \in \mathbb{R}$.
Definition 1.
We say that the scalar function $\eta : \mathcal{I} \to \mathbb{R}$ is linear when the following conditions are satisfied:
  • $\eta(A \oplus B) = \eta(A) + \eta(B)$;
  • $\eta(\lambda A) = \lambda \cdot \eta(A)$ for $\lambda \in \mathbb{R}$;
  • $\eta([0, 0]) = 0$.
Example 1.
Given a compact interval $A = [a^L, a^U]$, we define a scalar function $\eta : \mathcal{I} \to \mathbb{R}$ by
$$\eta(A) = \frac{1}{2}(A^L + A^U) = \frac{1}{2}(a^L + a^U).$$
Let $B = [b^L, b^U]$ be another compact interval. Then, we have
$$\eta(A \oplus B) = \frac{1}{2}\left[(A \oplus B)^L + (A \oplus B)^U\right] = \frac{1}{2}(a^L + b^L + a^U + b^U) = \eta(A) + \eta(B)$$
and
$$\eta(\lambda A) = \frac{1}{2}\left[(\lambda A)^L + (\lambda A)^U\right] = \begin{cases} \frac{1}{2}\left(\lambda \cdot A^L + \lambda \cdot A^U\right) & \text{if } \lambda \geq 0 \\ \frac{1}{2}\left(\lambda \cdot A^U + \lambda \cdot A^L\right) & \text{if } \lambda < 0 \end{cases} \; = \lambda \cdot \eta(A).$$
It shows that the scalar function η is linear.
Using the scalar function $\eta$, we can define a binary relation as follows. Let "$\sim$" be a binary relation on $\mathcal{I}$ defined by
$$A \sim B \quad \text{if and only if} \quad \eta(A) = \eta(B).$$
It is clear to see that the binary relation "$\sim$" is an equivalence relation, which means that this binary relation is reflexive, symmetric, and transitive. In this case, this equivalence relation can induce a quotient set given by
$$\mathcal{I}/{\sim} \; = \{\langle A\rangle : A \in \mathcal{I}\},$$
where
$$\langle A\rangle = \{B : B \sim A \text{ and } B \in \mathcal{I}\}$$
is an equivalence class. We also adopt the notation $\mathcal{I}_\eta = \mathcal{I}/{\sim}$ to say that the family $\mathcal{I}_\eta$ of equivalence classes depends on the scalar function $\eta$.
The addition in $\mathcal{I}_\eta$ is defined by
$$\langle A\rangle + \langle B\rangle = \langle A \oplus B\rangle \quad \text{for } \langle A\rangle, \langle B\rangle \in \mathcal{I}_\eta.$$
We need to claim that the definition in (2) is well defined. In other words, given any $\bar{A} \in \langle A\rangle$ and $\bar{B} \in \langle B\rangle$, we need to show $\langle A \oplus B\rangle = \langle \bar{A} \oplus \bar{B}\rangle$. Now, we have $\eta(\bar{A}) = \eta(A)$ and $\eta(\bar{B}) = \eta(B)$. According to Definition 1, we have
$$\eta(\bar{A} \oplus \bar{B}) = \eta(\bar{A}) + \eta(\bar{B}) = \eta(A) + \eta(B) = \eta(A \oplus B),$$
which shows $\bar{A} \oplus \bar{B} \sim A \oplus B$. Therefore, we obtain $\langle A \oplus B\rangle = \langle \bar{A} \oplus \bar{B}\rangle$. This says that the definition in (2) is well defined.
For convenience, we write
$$\langle 0\rangle = \{A : A \sim [0, 0] \text{ and } A \in \mathcal{I}\} = \{A \in \mathcal{I} : \eta(A) = \eta([0, 0]) = 0\}.$$
Then, we have
$$\langle A\rangle + \langle 0\rangle = \langle A \oplus [0, 0]\rangle = \langle A\rangle = \langle 0\rangle + \langle A\rangle,$$
which says that $\langle 0\rangle$ is a two-sided zero element of $\mathcal{I}_\eta$.
Example 2.
Consider the scalar function $\eta$ in Example 1. We have
$$\langle 0\rangle = \left\{A : \eta(A) = \tfrac{1}{2}(a^L + a^U) = 0\right\} = \{A : A = [-a, a] \text{ for } a \geq 0\}.$$
Given any compact interval $A$, we cannot say that $-A$ is the inverse element of $A$, since $A \oplus (-A)$ is not a zero element of $\mathcal{I}$. This is the reason why $\mathcal{I}$ cannot form a vector space. However, we can show that $\langle -A\rangle$ is the inverse element of $\langle A\rangle$; that is to say, we can claim $\langle A\rangle + \langle -A\rangle = \langle 0\rangle$. Now, we have
$$\langle A\rangle + \langle -A\rangle = \langle A \oplus (-A)\rangle$$
and
$$\eta(A \oplus (-A)) = \eta(A) + \eta(-A) = \eta(A) - \eta(A) = 0$$
by Definition 1. This also says $A \oplus (-A) \sim [0, 0]$. Therefore, we obtain
$$\langle A\rangle + \langle -A\rangle = \langle A \oplus (-A)\rangle = \langle 0\rangle = \langle -A\rangle + \langle A\rangle,$$
which shows that $\langle -A\rangle$ is the inverse element of $\langle A\rangle$. In other words, we have $-\langle A\rangle = \langle -A\rangle$.
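As a quick numerical check of the construction, the following sketch (again an illustration; eta and equivalent are hypothetical helper names) implements the midpoint scalar function of Example 1 and verifies that $A \oplus (-A)$ is not $[0,0]$ but does fall into the zero class $\langle 0\rangle$.

```python
def eta(iv):
    # Midpoint scalar function of Example 1: eta([aL, aU]) = (aL + aU) / 2
    return 0.5 * (iv[0] + iv[1])

def equivalent(a, b, tol=1e-12):
    # A ~ B  if and only if  eta(A) = eta(B); the classes <A> form the quotient set
    return abs(eta(a) - eta(b)) <= tol

A = (2.0, 7.0)
neg_A = (-A[1], -A[0])                      # -A = [-aU, -aL]
S = (A[0] + neg_A[0], A[1] + neg_A[1])      # A ⊕ (-A) = [-5, 5], not [0, 0] ...
print(S, equivalent(S, (0.0, 0.0)))         # (-5.0, 5.0) True ... but it lies in <0>
```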
Given any $\lambda \in \mathbb{R}$ and $A \in \mathcal{I}$, the scalar multiplication in $\mathcal{I}_\eta$ is defined by
$$\lambda\langle A\rangle = \langle \lambda A\rangle.$$
We also need to claim that the definition in (3) is well defined. In other words, given any $\bar{A} \in \langle A\rangle$, we want to show $\langle \lambda A\rangle = \langle \lambda\bar{A}\rangle$. Now, we have $\eta(\bar{A}) = \eta(A)$. According to Definition 1, we also have
$$\eta(\lambda A) = \lambda \cdot \eta(A) = \lambda \cdot \eta(\bar{A}) = \eta(\lambda\bar{A}),$$
which says $\lambda\bar{A} \sim \lambda A$. Therefore, we obtain $\langle \lambda A\rangle = \langle \lambda\bar{A}\rangle$. This shows that the definition in (3) is well defined. Then, we have the following proposition.
Proposition 1.
Let η be a linear scalar function defined on  I . Then, the family  I η  is a vector space with vector addition and scalar multiplication defined by (2) and (3), respectively.
Proof. 
The basic axioms of vector space can be easily checked. □
Now, we consider the product space
$$\mathcal{I}_\eta^n = \mathcal{I}_\eta \times \mathcal{I}_\eta \times \cdots \times \mathcal{I}_\eta \quad (n \text{ times}).$$
It is clear to see that
$$\mathbf{A} \in \mathcal{I}_\eta^n \quad \text{means} \quad \mathbf{A} = (\langle A_1\rangle, \langle A_2\rangle, \ldots, \langle A_n\rangle),$$
where $\langle A_i\rangle \in \mathcal{I}_\eta$ for $i = 1, 2, \ldots, n$. The vector addition in $\mathcal{I}_\eta^n$ is defined by
$$\mathbf{A} + \mathbf{B} = (\langle A_1\rangle, \ldots, \langle A_n\rangle) + (\langle B_1\rangle, \ldots, \langle B_n\rangle) = (\langle A_1\rangle + \langle B_1\rangle, \ldots, \langle A_n\rangle + \langle B_n\rangle) = (\langle A_1 \oplus B_1\rangle, \ldots, \langle A_n \oplus B_n\rangle).$$
Given any $\lambda \in \mathbb{R}$, the scalar multiplication in $\mathcal{I}_\eta^n$ is defined by
$$\lambda\mathbf{A} = (\lambda\langle A_1\rangle, \lambda\langle A_2\rangle, \ldots, \lambda\langle A_n\rangle) = (\langle \lambda A_1\rangle, \langle \lambda A_2\rangle, \ldots, \langle \lambda A_n\rangle).$$
We write
$$\mathbf{0} = (\langle 0\rangle, \langle 0\rangle, \ldots, \langle 0\rangle).$$
It is clear to see that $\mathbf{0}$ is a two-sided zero element of the product space $\mathcal{I}_\eta^n$. Given any $\mathbf{A} \in \mathcal{I}_\eta^n$, we are going to claim that the inverse element of $\mathbf{A}$ is $-\mathbf{A} = (\langle -A_1\rangle, \ldots, \langle -A_n\rangle)$. Now, we have
$$\mathbf{A} + (-\mathbf{A}) = (\langle A_1\rangle, \ldots, \langle A_n\rangle) + (\langle -A_1\rangle, \ldots, \langle -A_n\rangle) = (\langle A_1\rangle + \langle -A_1\rangle, \ldots, \langle A_n\rangle + \langle -A_n\rangle) = (\langle 0\rangle, \ldots, \langle 0\rangle) = \mathbf{0},$$
which says that $-\mathbf{A}$ is an inverse element of $\mathbf{A}$. The following proposition is obvious.
Proposition 2.
Let η be a linear scalar function defined on  I . Then, the product space  I η n  is a vector space with vector addition and scalar multiplication defined by (4) and (5), respectively.

3. Ordering Cones and Optimality Notions

Each binary relation "⪯" on the vector space $\mathcal{I}_\eta^n$ is called a partial ordering on $\mathcal{I}_\eta^n$ when the following conditions are satisfied.
  • (Reflexivity). We have $\mathbf{A} \preceq \mathbf{A}$ for any $\mathbf{A} \in \mathcal{I}_\eta^n$.
  • (Transitivity). If $\mathbf{A} \preceq \mathbf{B}$ and $\mathbf{B} \preceq \mathbf{C}$, then $\mathbf{A} \preceq \mathbf{C}$ for any $\mathbf{A}, \mathbf{B}, \mathbf{C} \in \mathcal{I}_\eta^n$.
  • (Compatibility with vector addition). If $\mathbf{A} \preceq \mathbf{B}$ and $\mathbf{C} \preceq \mathbf{D}$, then $\mathbf{A} + \mathbf{C} \preceq \mathbf{B} + \mathbf{D}$ for any $\mathbf{A}, \mathbf{B}, \mathbf{C}, \mathbf{D} \in \mathcal{I}_\eta^n$.
  • (Compatibility with scalar multiplication). If $\mathbf{A} \preceq \mathbf{B}$ and $\lambda$ is a positive real number, then $\lambda\mathbf{A} \preceq \lambda\mathbf{B}$ for any $\mathbf{A}, \mathbf{B} \in \mathcal{I}_\eta^n$.
Suppose that "⪯" is a partial ordering on $\mathcal{I}_\eta^n$. We can show that the following set
$$\mathcal{C} = \{\mathbf{A} \in \mathcal{I}_\eta^n : \mathbf{0} \preceq \mathbf{A}\}$$
is a convex cone in the vector space $\mathcal{I}_\eta^n$. Conversely, let $\mathcal{C}$ be a convex cone in $\mathcal{I}_\eta^n$. Then, we can induce a binary relation "⪯" defined by
$$\mathbf{A} \preceq \mathbf{B} \quad \text{if and only if} \quad \mathbf{B} - \mathbf{A} \in \mathcal{C}.$$
We can show that this binary relation "⪯" is a partial ordering on $\mathcal{I}_\eta^n$. We also have
$$\mathbf{B} - \mathbf{A} = (\langle B_1\rangle + (-\langle A_1\rangle), \ldots, \langle B_n\rangle + (-\langle A_n\rangle)) = (\langle B_1\rangle + \langle -A_1\rangle, \ldots, \langle B_n\rangle + \langle -A_n\rangle) = (\langle B_1 \ominus A_1\rangle, \ldots, \langle B_n \ominus A_n\rangle).$$
A convex cone C that defines a partial ordering as described above in the vector space I η n is called an ordering cone. By referring to Jahn [28], we can consider the optimality notions in I η n based on the convex cone C .
Definition 2.
Let $S$ be a subset of $\mathcal{I}_\eta^n$, and let "⪯" be a partial ordering on $\mathcal{I}_\eta^n$.
  • An element $\mathbf{A}^* \in S$ is called a minimal element of $S$ when
    $\mathbf{A} \preceq \mathbf{A}^*$ for $\mathbf{A} \in S$ implies $\mathbf{A}^* \preceq \mathbf{A}$.
  • An element $\mathbf{A}^* \in S$ is called a maximal element of $S$ when
    $\mathbf{A}^* \preceq \mathbf{A}$ for $\mathbf{A} \in S$ implies $\mathbf{A} \preceq \mathbf{A}^*$.
Remark 1.
Suppose that the above binary relation "⪯" is also antisymmetric; that is,
$$\mathbf{A} \preceq \mathbf{B} \text{ and } \mathbf{B} \preceq \mathbf{A} \text{ implies } \mathbf{A} = \mathbf{B}.$$
Then, we have the following observations.
  • An element $\mathbf{A}^* \in S$ is a minimal element of $S$ when
    $\mathbf{A} \preceq \mathbf{A}^*$ for $\mathbf{A} \in S$ implies $\mathbf{A} = \mathbf{A}^*$.
  • An element $\mathbf{A}^* \in S$ is a maximal element of $S$ when
    $\mathbf{A}^* \preceq \mathbf{A}$ for $\mathbf{A} \in S$ implies $\mathbf{A} = \mathbf{A}^*$.
Let $S$ be a subset of the vector space $\mathcal{I}_\eta^n$, and let $\mathcal{C}$ be an ordering cone in $\mathcal{I}_\eta^n$. Then, we can obtain a corresponding partial ordering "⪯" on $\mathcal{I}_\eta^n$, which is induced from $\mathcal{C}$. More precisely, we have
$$\mathbf{A} \preceq \mathbf{B} \quad \text{if and only if} \quad \mathbf{B} - \mathbf{A} \in \mathcal{C},$$
where $\mathbf{B} - \mathbf{A} \in \mathcal{C}$ means $\mathbf{B} - \mathbf{A} = \mathbf{C}$ for some $\mathbf{C} \in \mathcal{C}$. By adding $\mathbf{A}$ and $-\mathbf{C}$ on both sides, we obtain $\mathbf{A} = \mathbf{B} + (-\mathbf{C})$. It shows
$$\mathbf{A} \in \{\mathbf{B}\} + (-\mathcal{C}),$$
where
$$-\mathcal{C} = \{-\mathbf{C} : \mathbf{C} \in \mathcal{C}\}.$$
Therefore, by referring to (7), we see that $\mathbf{A}^* \preceq \mathbf{A}$ means
$$\mathbf{A} \in \{\mathbf{A}^*\} + \mathcal{C}.$$
On the other hand, we see that $\mathbf{A} \preceq \mathbf{A}^*$ for $\mathbf{A} \in S$ means
$$\mathbf{A} \in (\{\mathbf{A}^*\} + (-\mathcal{C})) \cap S.$$
Using (9) and (8), Definition 2 says that $\mathbf{A}^* \in S$ is a minimal element of the set $S$ when the following inclusion is satisfied:
$$(\{\mathbf{A}^*\} + (-\mathcal{C})) \cap S \subseteq \{\mathbf{A}^*\} + \mathcal{C}.$$
Therefore, without considering the partial ordering “⪯”, using an ordering cone C in the vector space I η n , we propose the concepts of cone-extreme elements as follows.
Definition 3.
Let $S$ be a nonempty subset of the vector space $\mathcal{I}_\eta^n$, and let $\mathcal{C}$ be an ordering cone in $\mathcal{I}_\eta^n$.
  • An element $\mathbf{A}^* \in S$ is called a cone-minimal element of the set $S$ when
    $(\{\mathbf{A}^*\} + (-\mathcal{C})) \cap S \subseteq \{\mathbf{A}^*\} + \mathcal{C}$.
  • An element $\mathbf{A}^* \in S$ is called a cone-maximal element of the set $S$ when
    $(\{\mathbf{A}^*\} + \mathcal{C}) \cap S \subseteq \{\mathbf{A}^*\} + (-\mathcal{C})$.
The ordering cone $\mathcal{C}$ is called pointed when
$$\mathcal{C} \cap (-\mathcal{C}) = \{\mathbf{0}\}.$$
It is clear to see that if the ordering cone C is pointed, then the partial ordering “⪯” induced by C is antisymmetric.
Remark 2.
Suppose that the ordering cone $\mathcal{C}$ is pointed. Using Remark 1 and Definition 3, we see that $\mathbf{A}^* \in S$ is a cone-minimal element of the set $S$ when
$$(\{\mathbf{A}^*\} + (-\mathcal{C})) \cap S = \{\mathbf{A}^*\}.$$
We also see that $\mathbf{A}^* \in S$ is a cone-maximal element of the set $S$ when
$$(\{\mathbf{A}^*\} + \mathcal{C}) \cap S = \{\mathbf{A}^*\}.$$
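For the componentwise ordering cones used later in Section 6, each class is determined by its $\eta$-value, so a cone-minimal element of a finite set can be detected by comparing $\eta$-vectors. The sketch below is only an illustration under that special cone (the function name is ours): it applies the characterization of Remark 2 to a small finite set $S$ represented by midpoint vectors.

```python
def is_cone_minimal(candidate, S, tol=1e-12):
    # candidate, S: eta-vectors (tuples of midpoints) representing classes in the product space.
    # With the componentwise cone {C : eta(C_j) >= 0 for all j}, Remark 2 says that
    # candidate is cone-minimal iff no other point of S lies in {candidate} + (-cone).
    for other in S:
        if other == candidate:
            continue
        if all(o <= c + tol for o, c in zip(other, candidate)) and \
           any(o < c - tol for o, c in zip(other, candidate)):
            return False
    return True

S = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0), (3.0, 3.0)]
print([p for p in S if is_cone_minimal(p, S)])   # [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)]
```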
Definition 4.
Let $S$ be a subset of the vector space $\mathcal{I}_\eta^n$, and let "⪯" be a partial ordering on $\mathcal{I}_\eta^n$.
  • We say that $\mathbf{A}^* \in S$ is a strongly minimal element of $S$ when $\mathbf{A}^* \preceq \mathbf{A}$ for all $\mathbf{A} \in S$.
  • We say that $\mathbf{A}^* \in S$ is a strongly maximal element of $S$ when $\mathbf{A} \preceq \mathbf{A}^*$ for all $\mathbf{A} \in S$.
Equivalently, we see that $\mathbf{A}^*$ is a strongly minimal element when $\mathbf{A} \in S$ implies $\mathbf{A}^* \preceq \mathbf{A}$, which also means
$$S \subseteq \{\mathbf{A}^*\} + \mathcal{C}$$
by referring to (8). Therefore, we can also propose the concept of strongly cone-extreme elements based on the ordering cone $\mathcal{C}$ as follows.
Definition 5.
Let  S  be a nonempty subset of the vector space  I η n , and let  C  be an ordering cone in  I η n .
  • An element $\mathbf{A}^* \in S$ is called a strongly cone-minimal element of $S$ when
    $S \subseteq \{\mathbf{A}^*\} + \mathcal{C}$.
  • An element $\mathbf{A}^* \in S$ is called a strongly cone-maximal element of $S$ when
    $S \subseteq \{\mathbf{A}^*\} + (-\mathcal{C})$.
Next, we introduce the concept of weakly extreme elements. Let $S$ be a nonempty subset of the vector space $\mathcal{I}_\eta^n$. The following set
$$\mathrm{aint}(S) = \{\mathbf{A} \in S : \text{for each } \mathbf{B} \in \mathcal{I}_\eta^n \text{ there exists } \zeta > 0 \text{ satisfying } \mathbf{A} + \lambda\mathbf{B} \in S \text{ for all } \lambda \in [0, \zeta]\}$$
is called the algebraic interior of $S$.
Let $\mathcal{C}$ be an ordering cone. Recall that $\mathcal{C}$ can induce a partial ordering defined by
$$\mathbf{A} \preceq \mathbf{B} \quad \text{if and only if} \quad \mathbf{B} - \mathbf{A} \in \mathcal{C}.$$
Using the algebraic interior $\mathrm{aint}(\mathcal{C})$ in (10), we define
$$\mathbf{A} \prec \mathbf{B} \quad \text{if and only if} \quad \mathbf{B} - \mathbf{A} \in \mathrm{aint}(\mathcal{C}),$$
which can be used to define the concept of weakly extreme elements as follows.
Definition 6.
Let  C  be an ordering cone in the vector space  I η n  such that it has a nonempty algebraic interior  a i n t ( C ) .
  • An element $\mathbf{A}^* \in S$ is called a weakly minimal element of $S$ when there does not exist $\mathbf{A} \in S$ satisfying $\mathbf{A} \prec \mathbf{A}^*$.
  • An element $\mathbf{A}^* \in S$ is called a weakly maximal element of $S$ when there does not exist $\mathbf{A} \in S$ satisfying $\mathbf{A}^* \prec \mathbf{A}$.
Suppose that $\mathbf{A}^* \in S$ is a weakly minimal element of $S$. It means that there does not exist $\mathbf{A} \in S$ satisfying $\mathbf{A} \prec \mathbf{A}^*$, which says that there does not exist $\mathbf{A} \in S$ satisfying
$$\mathbf{A} \in \{\mathbf{A}^*\} + (-\mathrm{aint}(\mathcal{C}))$$
by referring to (11). Equivalently, we have
$$(\{\mathbf{A}^*\} + (-\mathrm{aint}(\mathcal{C}))) \cap S = \emptyset.$$
Therefore, without considering the partial ordering “⪯”, using the ordering cone, we propose the following concepts.
Definition 7.
Let  S  be a nonempty subset of the vector space  I η n , and let  C  be an ordering cone in  I η n  such that it has nonempty algebraic interior  a i n t ( C ) .
  • An element $\mathbf{A}^* \in S$ is called a weakly cone-minimal element of $S$ when
    $(\{\mathbf{A}^*\} + (-\mathrm{aint}(\mathcal{C}))) \cap S = \emptyset$.
  • An element $\mathbf{A}^* \in S$ is called a weakly cone-maximal element of $S$ when
    $(\{\mathbf{A}^*\} + \mathrm{aint}(\mathcal{C})) \cap S = \emptyset$.
In this paper, we are going to study the (strongly, weakly) cone-minimal and (strongly, weakly) cone-maximal solutions of the interval-valued multiobjective optimization problems.

4. Solution Concepts

We consider an interval-valued function $F : X \to \mathcal{I}$ defined on a vector space $X$. The range of $F$ is given by
$$F(X) = \{F(x) : x \in X\}.$$
The difference between the interval-valued functions and the functions with interval-valued coefficients can be realized from the following example.
Example 3.
The function $F : \mathbb{R}_+^2 \to \mathcal{I}$ defined by
$$F(x, y) = [x + y,\, 2x + 3y] \oplus [\min\{\sin x, \cos x\},\, \max\{\sin x, \cos x\}] \quad \text{for } x, y \geq 0$$
is realized as an interval-valued function. Moreover, we have
$$F(x, y) = [\,x + y + \min\{\sin x, \cos x\},\; 2x + 3y + \max\{\sin x, \cos x\}\,] \quad \text{for } x, y \geq 0.$$
The function $F : \mathbb{R}_+^2 \to \mathcal{I}$ defined by
$$F(x, y) = [1, 2]\,x^2 \oplus [2, 4]\,xy \oplus [1, 5]\,y^2 \quad \text{for } x, y \geq 0$$
can be realized as a function with interval-valued coefficients. It is clear to see that the functions with interval-valued coefficients are also interval-valued functions.
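The distinction in Example 3 can also be seen numerically. The following sketch (an illustration; the helper names are ours) evaluates both functions at a sample point: the first adds a nonlinear interval-valued term, while the second is assembled purely from fixed interval coefficients.

```python
import math

def iv_add(a, b):
    # A ⊕ B = [aL + bL, aU + bU]
    return (a[0] + b[0], a[1] + b[1])

def iv_scale(k, a):
    # kA for a scalar k and interval a
    return (k * a[0], k * a[1]) if k >= 0 else (k * a[1], k * a[0])

def F_interval_valued(x, y):
    # First function of Example 3: [x + y, 2x + 3y] ⊕ [min(sin x, cos x), max(sin x, cos x)]
    s, c = math.sin(x), math.cos(x)
    return iv_add((x + y, 2 * x + 3 * y), (min(s, c), max(s, c)))

def F_interval_coefficients(x, y):
    # Second function of Example 3: [1, 2] x^2 ⊕ [2, 4] xy ⊕ [1, 5] y^2 for x, y >= 0
    return iv_add(iv_add(iv_scale(x * x, (1, 2)), iv_scale(x * y, (2, 4))),
                  iv_scale(y * y, (1, 5)))

print(F_interval_valued(1.0, 2.0))        # roughly (3.54, 8.84)
print(F_interval_coefficients(1.0, 2.0))  # (9.0, 30.0)
```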
Let us recall that, given any $A, B \in \mathcal{I}$,
$$A \sim B \quad \text{if and only if} \quad \eta(A) = \eta(B),$$
where $\eta : \mathcal{I} \to \mathbb{R}$ is a scalar function. Let $F$ be an interval-valued function defined on a vector space $X$. Then, given any $x, y \in X$, we see that
$$F(x) \sim F(y) \quad \text{if and only if} \quad \eta(F(x)) = \eta(F(y)).$$
In this case, given any fixed $z \in X$, we can define a subset $\mathcal{I}(z)$ of $\mathcal{I}$ by
$$\mathcal{I}(z) = \{F(y) : F(y) \sim F(z) \text{ for } y \in X\} = \{F(y) : \eta(F(y)) = \eta(F(z)) \text{ for } y \in X\}.$$
This says that the range $F(X)$ can be partitioned into disjoint subsets $\mathcal{I}(z)$ such that each bounded closed interval in $\mathcal{I}(z)$ has the same scalar value under the scalar function $\eta$. In other words, we have
$$F(X) = \bigcup_{z \in X} \mathcal{I}(z) \quad \text{with} \quad \mathcal{I}(z_1) \cap \mathcal{I}(z_2) = \emptyset \text{ for } \mathcal{I}(z_1) \neq \mathcal{I}(z_2).$$
It is clear to see that
$$A \in \mathcal{I}(z) \quad \text{implies} \quad \mathcal{I}(z) \subseteq \langle A\rangle \in \mathcal{I}_\eta.$$
Example 4.
We consider the interval-valued function $F : \mathbb{R}_+^2 \to \mathcal{I}$ defined by
$$F(\mathbf{x}) = [x_1 + x_2,\, 2x_1 + 3x_2],$$
where $\mathbf{x} = (x_1, x_2) \in \mathbb{R}_+^2$. The scalar function $\eta$ is taken from Example 1. Then, we have
$$\eta(F(\mathbf{x})) = \frac{1}{2}\left[(x_1 + x_2) + (2x_1 + 3x_2)\right] = \frac{1}{2}(3x_1 + 4x_2).$$
In this case, we have
$$\mathcal{I}(\mathbf{z}) = \{F(\mathbf{y}) : \eta(F(\mathbf{y})) = \eta(F(\mathbf{z})) \text{ for } \mathbf{y} \in \mathbb{R}_+^2\}.$$
More precisely, given any fixed $\mathbf{z} = (z_1, z_2) \in \mathbb{R}_+^2$, we have
$$\mathcal{I}(\mathbf{z}) = \left\{F(\mathbf{y}) : \tfrac{1}{2}(3z_1 + 4z_2) = \tfrac{1}{2}(3y_1 + 4y_2) \text{ for } \mathbf{y} = (y_1, y_2) \in \mathbb{R}_+^2\right\}.$$
In other words, the image $F(\mathbb{R}_+^2)$ is given by
$$F(\mathbb{R}_+^2) = \bigcup_{\mathbf{z} \in \mathbb{R}_+^2} \mathcal{I}(\mathbf{z}) \quad \text{with} \quad \mathcal{I}(\mathbf{z}^{(1)}) \cap \mathcal{I}(\mathbf{z}^{(2)}) = \emptyset \text{ for } \mathcal{I}(\mathbf{z}^{(1)}) \neq \mathcal{I}(\mathbf{z}^{(2)}),$$
where $\mathbf{z}^{(1)} = (z_1^{(1)}, z_2^{(1)})$ and $\mathbf{z}^{(2)} = (z_1^{(2)}, z_2^{(2)})$.
Under the above settings, it is reasonable to define a function $\mathcal{F} : X \to \mathcal{I}_\eta$ by
$$\mathcal{F}(x) = \langle F(x)\rangle = \{A \in \mathcal{I} : A \sim F(x)\} = \{A \in \mathcal{I} : \eta(A) = \eta(F(x))\},$$
where $F(x) \in \mathcal{I}(z)$ for some $z \in X$ by referring to (12). If $F(x) \sim \bar{F}(x)$, then
$$\mathcal{F}(x) = \langle F(x)\rangle = \langle \bar{F}(x)\rangle.$$
Suppose that $x = y$. Then, we have $F(x) = F(y)$, which also says
$$\mathcal{F}(x) = \langle F(x)\rangle = \langle F(y)\rangle = \mathcal{F}(y).$$
Therefore, the function $\mathcal{F}$ corresponding to $F$ is well defined.
Example 5.
Continued from Example 4, by referring to (13), we can define a function $\mathcal{F} : \mathbb{R}_+^2 \to \mathcal{I}_\eta$ by
$$\mathcal{F}(\mathbf{x}) = \langle F(\mathbf{x})\rangle = \langle [x_1 + x_2,\, 2x_1 + 3x_2]\rangle = \{A = [a^L, a^U] \in \mathcal{I} : \eta(A) = \eta(F(\mathbf{x}))\} = \{A = [a^L, a^U] : a^L + a^U = 3x_1 + 4x_2 \text{ for } a^L, a^U \in \mathbb{R}\} \in \mathcal{I}_\eta.$$
The function $\mathcal{F}$ is well defined.
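Numerically, two inputs determine the same class exactly when $\eta(F(\cdot))$ agrees; continuing Examples 4 and 5, the sketch below (with hypothetical helper names) shows two distinct points of $\mathbb{R}_+^2$ whose interval values differ but whose classes in $\mathcal{I}_\eta$ coincide.

```python
def eta(iv):
    # Midpoint scalar function of Example 1
    return 0.5 * (iv[0] + iv[1])

def F(x1, x2):
    # Interval-valued function of Examples 4 and 5: F(x) = [x1 + x2, 2 x1 + 3 x2]
    return (x1 + x2, 2 * x1 + 3 * x2)

# eta(F(x)) = (3 x1 + 4 x2) / 2, so x = (4, 0) and y = (0, 3) determine the same
# class in the quotient set even though F(x) and F(y) are different intervals.
print(F(4, 0), F(0, 3))                # (4, 8) (3, 9)
print(eta(F(4, 0)) == eta(F(0, 3)))    # True (both equal 6.0)
```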
Now, we consider an interval-valued vector function $\mathbf{F} : X \to \mathcal{I}^n$ given by
$$\mathbf{F}(x) = (F_1(x), \ldots, F_n(x)),$$
where each $F_i : X \to \mathcal{I}$ is an interval-valued function defined on $X$ for $i = 1, \ldots, n$. Therefore, each $F_i$ has a corresponding function $\mathcal{F}_i : X \to \mathcal{I}_\eta$ given by $\mathcal{F}_i(x) = \langle F_i(x)\rangle$ for $i = 1, \ldots, n$. In this case, the interval-valued vector function $\mathbf{F}$ has a corresponding function $\boldsymbol{\mathcal{F}} : X \to \mathcal{I}_\eta^n$ given by
$$\boldsymbol{\mathcal{F}}(x) = (\mathcal{F}_1(x), \mathcal{F}_2(x), \ldots, \mathcal{F}_n(x)).$$
Given a partial ordering "⪯" on $\mathcal{I}_\eta$, we consider the following constrained interval-valued multiobjective programming problem
$$\begin{array}{rl} \min & \mathbf{F}(x) = (F_1(x), F_2(x), \ldots, F_n(x)) \\ \text{subject to} & G_i(x) \preceq [0,0],\ i = 1, 2, \ldots, m \\ & x \in X, \end{array}$$
where $G_i$ are interval-valued functions defined on $X$ for $i = 1, \ldots, m$. We need to interpret the meaning of $G_i(x) \preceq [0,0]$. Recall that the equivalence class $\langle 0\rangle$ is given by
$$\langle 0\rangle = \{A \in \mathcal{I} : A \sim [0,0]\}.$$
According to (13), let $\mathcal{G}_i : X \to \mathcal{I}_\eta$ be the function corresponding to $G_i$ given by $\mathcal{G}_i(x) = \langle G_i(x)\rangle$ for $i = 1, \ldots, m$. Then, each constraint $G_i(x) \preceq [0,0]$ is interpreted as $\mathcal{G}_i(x) \preceq \langle 0\rangle$. In this case, the constrained interval-valued multiobjective programming problem can be interpreted as follows
$$\begin{array}{rl} \min & \mathbf{F}(x) = (F_1(x), F_2(x), \ldots, F_n(x)) \\ \text{subject to} & \mathcal{G}_i(x) \preceq \langle 0\rangle,\ i = 1, 2, \ldots, m \\ & x \in X. \end{array}$$
It is clear to see that
$$C_0 = \{\langle A\rangle \in \mathcal{I}_\eta : \langle 0\rangle \preceq \langle A\rangle\}$$
is a convex cone in the vector space $\mathcal{I}_\eta$. We see that $-\langle A\rangle \in C_0$ means $\langle 0\rangle \preceq -\langle A\rangle$. Using the compatibility with vector addition of the partial ordering, we can add $\langle A\rangle$ on both sides to obtain
$$\langle A\rangle + \langle 0\rangle \preceq \langle A\rangle + (-\langle A\rangle), \quad \text{i.e.,} \quad \langle A\rangle \preceq \langle 0\rangle,$$
which says that the constraint $\mathcal{G}_i(x) \preceq \langle 0\rangle$ means $-\mathcal{G}_i(x) \in C_0$ for $i = 1, \ldots, m$. In this case, the constrained interval-valued multiobjective programming problem can now be interpreted as follows:
$$\begin{array}{rl} \min & \mathbf{F}(x) = (F_1(x), F_2(x), \ldots, F_n(x)) \\ \text{subject to} & -\mathcal{G}_i(x) \in C_0,\ i = 1, 2, \ldots, m \\ & x \in X, \end{array}$$
where $C_0$ given in (15) is a special kind of ordering cone in $\mathcal{I}_\eta$.
In the sequel, we shall consider a general ordering cone $C$ in $\mathcal{I}_\eta$ to study the following constrained interval-valued multiobjective programming problem:
$$(\mathrm{IMOP})\qquad \begin{array}{rl} \min & \mathbf{F}(x) = (F_1(x), F_2(x), \ldots, F_n(x)) \\ \text{subject to} & -G_i(x) \in C,\ i = 1, 2, \ldots, m \\ & x \in X. \end{array}$$
By referring to (14), the interval-valued vector function $\mathbf{F}$ has a corresponding function $\boldsymbol{\mathcal{F}}$. Therefore, we consider the following constrained interval-valued multiobjective programming problem
$$(\mathrm{IMOP})\qquad \begin{array}{rl} \min & \boldsymbol{\mathcal{F}}(x) = (\mathcal{F}_1(x), \mathcal{F}_2(x), \ldots, \mathcal{F}_n(x)) \\ \text{subject to} & -\mathcal{G}_i(x) \in C,\ i = 1, 2, \ldots, m \\ & x \in X. \end{array}$$
In order to introduce the solution concepts of problem (IMOP), we also need to consider another ordering cone in the vector space $\mathcal{I}_\eta^n$. Let $\mathcal{C}$ be an ordering cone in $\mathcal{I}_\eta^n$. This ordering cone $\mathcal{C}$ is used to rank the multiobjective function values $\boldsymbol{\mathcal{F}}(x)$. In this case, it is more convenient to say that the constrained interval-valued multiobjective programming problem (IMOP) is considered under the ordering cones $(\mathcal{C}, C)$.
Let
$$V = \{x \in X : -\mathcal{G}_i(x) \in C,\ i = 1, \ldots, m\}$$
be the feasible set of problem (IMOP), and let
$$S = \{\boldsymbol{\mathcal{F}}(x) = (\mathcal{F}_1(x), \mathcal{F}_2(x), \ldots, \mathcal{F}_n(x)) : x \in V\}$$
be the set of all objective values of problem (IMOP). Then, we propose the following solution concepts.
Definition 8.
Let the constrained interval-valued multiobjective programming problem (IMOP) be considered under the ordering cones $(\mathcal{C}, C)$, and let $S$ be the set defined in (16):
  • We say that $x^*$ is a complete optimal solution of problem (IMOP) when $\boldsymbol{\mathcal{F}}(x^*)$ is a strongly cone-minimal element of the set $S$ with respect to the ordering cone $\mathcal{C}$; that is to say,
    $S \subseteq \{\boldsymbol{\mathcal{F}}(x^*)\} + \mathcal{C}$.
  • We say that $x^*$ is a Pareto optimal solution of problem (IMOP) when $\boldsymbol{\mathcal{F}}(x^*)$ is a cone-minimal element of the set $S$ with respect to the ordering cone $\mathcal{C}$; that is to say,
    $(\{\boldsymbol{\mathcal{F}}(x^*)\} + (-\mathcal{C})) \cap S \subseteq \{\boldsymbol{\mathcal{F}}(x^*)\} + \mathcal{C}$.
  • Assume that $\mathrm{aint}(\mathcal{C})$ is nonempty. We say that $x^*$ is a weak Pareto optimal solution of problem (IMOP) when $\boldsymbol{\mathcal{F}}(x^*)$ is a weakly cone-minimal element of the set $S$ with respect to the ordering cone $\mathcal{C}$; that is to say,
    $(\{\boldsymbol{\mathcal{F}}(x^*)\} + (-\mathrm{aint}(\mathcal{C}))) \cap S = \emptyset$.
We denote by X C O , X P , and X W P the set of all complete optimal solutions, Pareto optimal solutions, and weak Pareto optimal solutions of problem (IMOP), respectively. Then, we have the following inclusions.
Proposition 3.
Let $\eta$ be a linear scalar function defined on $\mathcal{I}$, and let the constrained interval-valued multiobjective programming problem (IMOP) considered under the ordering cones $(\mathcal{C}, C)$ be feasible. Suppose that $\mathcal{C} \neq \mathcal{I}_\eta^n$ is pointed, satisfying $\mathrm{aint}(\mathcal{C}) \neq \emptyset$. Then, we have the following inclusions
$$X_{CO} \subseteq X_{P} \subseteq X_{WP}.$$
Proof. 
The feasibility of problem (IMOP) says that the set $S$ defined in (16) is nonempty. Using (17) and (18), it is clear to see
$$(\{\boldsymbol{\mathcal{F}}(x^*)\} + (-\mathcal{C})) \cap S \subseteq S \subseteq \{\boldsymbol{\mathcal{F}}(x^*)\} + \mathcal{C}.$$
Therefore, we obtain the inclusion $X_{CO} \subseteq X_{P}$. Since $\mathcal{C} \neq \mathcal{I}_\eta^n$ and $\mathrm{aint}(\mathcal{C}) \neq \emptyset$, we have
$$(\{\boldsymbol{\mathcal{F}}(x^*)\} + (-\mathrm{aint}(\mathcal{C}))) \cap S = (\{\boldsymbol{\mathcal{F}}(x^*)\} + (-\mathrm{aint}(\mathcal{C}))) \cap \left[(\{\boldsymbol{\mathcal{F}}(x^*)\} + (-\mathcal{C})) \cap S\right] \quad (\text{since } \mathrm{aint}(\mathcal{C}) \subseteq \mathcal{C})$$
$$\subseteq (\{\boldsymbol{\mathcal{F}}(x^*)\} + (-\mathrm{aint}(\mathcal{C}))) \cap \left[\{\boldsymbol{\mathcal{F}}(x^*)\} + \mathcal{C}\right] \quad (\text{using (18)})$$
$$= \emptyset \quad (\text{since } \mathrm{aint}(\mathcal{C}) \cap (-\mathcal{C}) = \emptyset \text{ by the pointedness of } \mathcal{C}).$$
Therefore, we obtain
$$(\{\boldsymbol{\mathcal{F}}(x^*)\} + (-\mathrm{aint}(\mathcal{C}))) \cap S = \emptyset,$$
which shows the inclusion $X_{P} \subseteq X_{WP}$. This completes the proof. □
We define $(\mathcal{I}_\eta^n)^*$ to be the set of all linear functionals from the vector space $\mathcal{I}_\eta^n$ into $\mathbb{R}$, which says that if $\phi \in (\mathcal{I}_\eta^n)^*$ then the functional $\phi : \mathcal{I}_\eta^n \to \mathbb{R}$ is linear. We can show that $(\mathcal{I}_\eta^n)^*$ is also a vector space with vector addition and scalar multiplication defined by
$$(\phi_1 + \phi_2)(\mathbf{A}) = \phi_1(\mathbf{A}) + \phi_2(\mathbf{A}) \quad \text{and} \quad (\lambda\phi)(\mathbf{A}) = \lambda \cdot \phi(\mathbf{A}),$$
respectively, for all $\phi_1, \phi_2, \phi \in (\mathcal{I}_\eta^n)^*$ and $\lambda \in \mathbb{R}$. We also define the following sets
$$\mathcal{C}^* = \{\phi \in (\mathcal{I}_\eta^n)^* : \phi(\mathbf{A}) \geq 0 \text{ for all } \mathbf{A} \in \mathcal{C}\}$$
and
$$\mathcal{C}^{\circledast} = \{\phi \in (\mathcal{I}_\eta^n)^* : \phi(\mathbf{A}) > 0 \text{ for all } \mathbf{A} \in \mathcal{C} \setminus \{\mathbf{0}\}\}.$$
Then, we have the following results.
Lemma 1.
Suppose that  C ( I η n )  and  aint ( C ) . Given any  A aint ( C )  and  ϕ C , we have  ϕ ( A ) > 0 .
Proof. 
Suppose that ϕ ( A ) 0 for all A I η n . Then, C = ( I η n ) . This contradiction says that there exists B I η satisfying ϕ ( B ) < 0 . Using (10), the nonempty of aint ( C ) says that there exists ζ > 0 satisfying
A + λ B C   for   all   λ [ 0 , ζ ] .
Therefore, we obtain
ϕ A + ζ B 0 .
The linearity of ϕ says
ϕ A ζ ϕ B > 0 .
This completes the proof. □
Theorem 1.
Let $\eta$ be a linear scalar function defined on $\mathcal{I}$, and let the constrained interval-valued multiobjective programming problem (IMOP) considered under the ordering cones $(\mathcal{C}, C)$ be feasible. Suppose that $\mathcal{C}^* \neq (\mathcal{I}_\eta^n)^*$ and $\mathrm{aint}(\mathcal{C}) \neq \emptyset$, and that there exist a linear functional $\phi \in \mathcal{C}^*$ and an element $x^* \in V$ satisfying
$$\phi(\boldsymbol{\mathcal{F}}(x^*)) \leq \phi(\boldsymbol{\mathcal{F}}(x)) \quad \text{for all } x \in V.$$
Then, x * is a weak Pareto optimal solution of problem (IMOP).
Proof. 
Suppose that $x^*$ is not a weak Pareto optimal solution of problem (IMOP). The definition says that $\boldsymbol{\mathcal{F}}(x^*)$ is not a weakly cone-minimal element of the set $S$ with respect to the ordering cone $\mathcal{C}$, which says
$$(\{\boldsymbol{\mathcal{F}}(x^*)\} + (-\mathrm{aint}(\mathcal{C}))) \cap S \neq \emptyset.$$
In this case, there exists $\bar{x} \in V$ satisfying
$$\boldsymbol{\mathcal{F}}(\bar{x}) \in S \quad \text{and} \quad \boldsymbol{\mathcal{F}}(\bar{x}) \in \{\boldsymbol{\mathcal{F}}(x^*)\} + (-\mathrm{aint}(\mathcal{C})).$$
Therefore, we obtain
$$\boldsymbol{\mathcal{F}}(x^*) - \boldsymbol{\mathcal{F}}(\bar{x}) \in \mathrm{aint}(\mathcal{C}).$$
Lemma 1 says
$$\phi(\boldsymbol{\mathcal{F}}(x^*) - \boldsymbol{\mathcal{F}}(\bar{x})) > 0.$$
Using the linearity of $\phi$, we also have $\phi(\boldsymbol{\mathcal{F}}(x^*)) > \phi(\boldsymbol{\mathcal{F}}(\bar{x}))$, which contradicts (20), and the proof is complete. □
Theorem 2.
Let $\eta$ be a linear scalar function defined on $\mathcal{I}$, and let the constrained interval-valued multiobjective programming problem (IMOP) considered under the ordering cones $(\mathcal{C}, C)$ be feasible. Suppose that the ordering cone $\mathcal{C}$ is pointed. Then, we have the following properties:
(i)
Suppose that there exist a linear functional $\phi \in \mathcal{C}^*$ and an element $x^* \in V$ satisfying
$$\phi(\boldsymbol{\mathcal{F}}(x^*)) < \phi(\boldsymbol{\mathcal{F}}(x)) \quad \text{for all } x \in V \setminus \{x^*\}.$$
Then, $x^*$ is a Pareto optimal solution of problem (IMOP).
(ii)
Suppose that there exist a linear functional $\phi \in \mathcal{C}^{\circledast}$ and an element $x^* \in V$ satisfying
$$\phi(\boldsymbol{\mathcal{F}}(x^*)) \leq \phi(\boldsymbol{\mathcal{F}}(x)) \quad \text{for all } x \in V.$$
Then,  x *  is a Pareto optimal solution of problem (IMOP).
Proof. 
Suppose that $x^*$ is not a Pareto optimal solution of problem (IMOP). The definition says that $\boldsymbol{\mathcal{F}}(x^*)$ is not a cone-minimal element of the set $S$ with respect to the ordering cone $\mathcal{C}$. Since $\mathcal{C}$ is pointed, using Remark 2, we have
$$(\{\boldsymbol{\mathcal{F}}(x^*)\} + (-\mathcal{C})) \cap S \neq \{\boldsymbol{\mathcal{F}}(x^*)\}.$$
In this case, there exists $\bar{x} \in V$ satisfying
$$\boldsymbol{\mathcal{F}}(\bar{x}) \in \{\boldsymbol{\mathcal{F}}(x^*)\} + (-\mathcal{C}) \quad \text{and} \quad \boldsymbol{\mathcal{F}}(\bar{x}) \neq \boldsymbol{\mathcal{F}}(x^*).$$
Therefore, we obtain
$$\boldsymbol{\mathcal{F}}(x^*) - \boldsymbol{\mathcal{F}}(\bar{x}) \in \mathcal{C} \quad \text{and} \quad \boldsymbol{\mathcal{F}}(x^*) - \boldsymbol{\mathcal{F}}(\bar{x}) \neq \mathbf{0}.$$
To prove part (i), since $\phi \in \mathcal{C}^*$ and $\boldsymbol{\mathcal{F}}(x^*) - \boldsymbol{\mathcal{F}}(\bar{x}) \in \mathcal{C}$ from (23), it follows that
$$\phi(\boldsymbol{\mathcal{F}}(x^*) - \boldsymbol{\mathcal{F}}(\bar{x})) \geq 0.$$
The linearity of $\phi$ says $\phi(\boldsymbol{\mathcal{F}}(x^*)) \geq \phi(\boldsymbol{\mathcal{F}}(\bar{x}))$, which contradicts (21).
To prove part (ii), since $\phi \in \mathcal{C}^{\circledast}$ and $\boldsymbol{\mathcal{F}}(x^*) - \boldsymbol{\mathcal{F}}(\bar{x}) \in \mathcal{C} \setminus \{\mathbf{0}\}$ from (23), it follows that
$$\phi(\boldsymbol{\mathcal{F}}(x^*) - \boldsymbol{\mathcal{F}}(\bar{x})) > 0.$$
The linearity of $\phi$ says $\phi(\boldsymbol{\mathcal{F}}(x^*)) > \phi(\boldsymbol{\mathcal{F}}(\bar{x}))$, which contradicts (22). This completes the proof. □

5. Multiobjective Programming Problems with Interval-Valued Coefficients

The difference between the interval-valued functions and the functions with interval-valued coefficients can be realized from Example 3. Let $\mathcal{C}$ be an ordering cone in the vector space $\mathcal{I}_\eta^n$, and let $C$ be an ordering cone in the vector space $\mathcal{I}_\eta$. Now, we consider the following multiobjective programming problem with interval-valued coefficients:
$$(\mathrm{IMOP1})\qquad \begin{array}{rl} \min & \mathbf{F}(x) = (F_1(x), F_2(x), \ldots, F_n(x)) \\ \text{subject to} & -G_i(x) \in C,\ i = 1, 2, \ldots, m \\ & x \in \mathbb{R}^p, \end{array}$$
where $F_j : \mathbb{R}^p \to \mathcal{I}$ and $G_i : \mathbb{R}^p \to \mathcal{I}$ are functions with interval-valued coefficients for $i = 1, \ldots, m$ and $j = 1, \ldots, n$.
Let $F : \mathbb{R}^p \to \mathcal{I}$ be a function with interval-valued coefficients. Suppose that $A \in \mathcal{I}$ is any coefficient of $F$. Then, there exists $\langle B\rangle \in \mathcal{I}_\eta$ satisfying $A \sim B$, i.e., $\langle A\rangle = \langle B\rangle$. In this case, the corresponding function $\mathcal{F}$ of $F$ can be defined as a function with coefficients $\langle A\rangle \in \mathcal{I}_\eta$, where $A \in \mathcal{I}$ is a coefficient of $F$.
Example 6.
We consider the function $F : \mathbb{R}_+^m \to \mathcal{I}$ defined by
$$F(\mathbf{x}) = A_1 x_1 \oplus \cdots \oplus A_m x_m \quad \text{for } \mathbf{x} = (x_1, \ldots, x_m) \in \mathbb{R}_+^m.$$
It is clear to see that $F$ is a linear function with interval-valued coefficients $A_j \in \mathcal{I}$ for $j = 1, \ldots, m$. Then, its corresponding function $\mathcal{F}$ is given by
$$\mathcal{F}(\mathbf{x}) = \langle A_1\rangle x_1 + \cdots + \langle A_m\rangle x_m,$$
where $\langle A_j\rangle \in \mathcal{I}_\eta$ for $j = 1, \ldots, m$.
Let $\mathbf{F} = (F_1, F_2, \ldots, F_n)$ be a vector-valued function with interval-valued coefficients, where each component $F_j$ is a function with interval-valued coefficients for $j = 1, \ldots, n$. The corresponding function $\boldsymbol{\mathcal{F}}$ of $\mathbf{F}$ is given by
$$\boldsymbol{\mathcal{F}} = (\mathcal{F}_1, \mathcal{F}_2, \ldots, \mathcal{F}_n).$$
Also, the corresponding functions $\mathcal{G}_i$ of $G_i$ can be similarly defined for $i = 1, \ldots, m$. The differences between problems (IMOP) and (IMOP1) are as follows:
  • The decision variables in problem (IMOP) lie in a vector space $X$, whereas the decision variables in problem (IMOP1) lie in the Euclidean space $\mathbb{R}^p$. Problem (IMOP) is therefore a more general formulation of the interval-valued multiobjective optimization problem.
  • The objective functions and constraint functions in problem (IMOP) are interval-valued functions, whereas the objective functions and constraint functions in problem (IMOP1) are functions with interval-valued coefficients.
Let $\eta$ be a linear scalar function defined on $\mathcal{I}$, and let $\psi$ be the functional defined on $\mathcal{I}_\eta$ by $\psi(\langle A\rangle) = \eta(A)$. Given any $B \in \langle A\rangle$, we have $\eta(A) = \eta(B)$. It says that $\psi$ is well defined. Since $\eta$ is linear, we have
$$\psi(\lambda\langle A\rangle + \langle B\rangle) = \psi(\langle \lambda A \oplus B\rangle) = \eta(\lambda A \oplus B) = \lambda \cdot \eta(A) + \eta(B) = \lambda \cdot \psi(\langle A\rangle) + \psi(\langle B\rangle).$$
It shows that $\psi$ is a linear functional on $\mathcal{I}_\eta$. The linearity of $\psi$ and $\eta$ implies
$$\psi(\mathcal{F}_j(x)) = \eta(F_j(x)) = f_j(x),$$
where $f_j$ is a real-valued function whose coefficients are the scalars obtained, via $\eta$, from the corresponding interval-valued coefficients of $F_j$. Similarly, regarding the constraints, we have the corresponding real-valued functions
$$g_i(x) = \psi(\mathcal{G}_i(x)) = \eta(G_i(x))$$
for $i = 1, \ldots, m$.
Example 7.
We consider the function $F_j : \mathbb{R}_+^2 \to \mathcal{I}$ with interval-valued coefficients given by
$$F_j(x_1, x_2) = [1, 2]\,x_1^2 \oplus [2, 4]\,x_1 x_2 \oplus [1, 5]\,x_2^2 \quad \text{for } x_1, x_2 \geq 0.$$
Then, we have the corresponding function
$$\mathcal{F}_j(x_1, x_2) = \langle[1, 2]\rangle\, x_1^2 + \langle[2, 4]\rangle\, x_1 x_2 + \langle[1, 5]\rangle\, x_2^2 \quad \text{for } x_1, x_2 \geq 0.$$
Applying the linear functional $\psi$, we obtain
$$\psi(\mathcal{F}_j(x_1, x_2)) = \psi(\langle[1, 2]\rangle)\, x_1^2 + \psi(\langle[2, 4]\rangle)\, x_1 x_2 + \psi(\langle[1, 5]\rangle)\, x_2^2 = \eta([1, 2])\, x_1^2 + \eta([2, 4])\, x_1 x_2 + \eta([1, 5])\, x_2^2 \equiv f_j(x_1, x_2),$$
which is a real-valued function. In particular, for $A = [a^L, a^U] \in \mathcal{I}$, we take
$$\eta(A) = \eta([a^L, a^U]) = \frac{a^L + a^U}{2}.$$
Then, we have
$$f_j(x_1, x_2) = \eta([1, 2])\, x_1^2 + \eta([2, 4])\, x_1 x_2 + \eta([1, 5])\, x_2^2 = \frac{3}{2}x_1^2 + 3x_1 x_2 + 3x_2^2.$$
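The reduction in Example 7 simply replaces every interval coefficient by its midpoint. A minimal sketch of this step, assuming the midpoint scalar function of Example 1 (the helper names are ours):

```python
def eta(iv):
    # Midpoint scalar function: eta([aL, aU]) = (aL + aU) / 2
    return 0.5 * (iv[0] + iv[1])

# Interval coefficients of F_j in Example 7
c1, c2, c3 = eta((1, 2)), eta((2, 4)), eta((1, 5))

def f_j(x1, x2):
    # Real-valued counterpart obtained by applying eta coefficientwise
    return c1 * x1**2 + c2 * x1 * x2 + c3 * x2**2   # = 1.5 x1^2 + 3 x1 x2 + 3 x2^2

print(c1, c2, c3)      # 1.5 3.0 3.0
print(f_j(1.0, 2.0))   # 19.5
```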
Definition 9.
Let $\eta$ be a scalar function defined on $\mathcal{I}$. We say that $\eta$ is canonical when $\eta(A) \geq 0$ for each $\langle A\rangle \in C$, where $C$ is the ordering cone in $\mathcal{I}_\eta$ used for the constraint functions $G_i$ in problem (IMOP1) for $i = 1, \ldots, m$.
The above definition of a canonical scalar function $\eta$ is well posed, since if $B \sim A$, then $\eta(B) = \eta(A) \geq 0$. Let $\eta$ be a linear and canonical scalar function defined on $\mathcal{I}$. Then, we have
$$-\langle A\rangle = \langle -A\rangle \in C \quad \text{if and only if} \quad \eta(-A) = -\eta(A) \geq 0.$$
It also says
$$-\langle A\rangle \in C \quad \text{if and only if} \quad \psi(\langle A\rangle) = \eta(A) \leq 0.$$
Therefore, we see that
$$-\mathcal{G}_i(x) \in C \quad \text{if and only if} \quad g_i(x) = \eta(G_i(x)) = \psi(\mathcal{G}_i(x)) \leq 0 \quad \text{for } i = 1, \ldots, m.$$
Using (24), a corresponding (usual) multiobjective programming problem (MOP) of problem (IMOP1) can be formulated below:
$$(\mathrm{MOP})\qquad \begin{array}{rl} \min & \mathbf{f}(x) = (f_1(x), f_2(x), \ldots, f_n(x)) \\ \text{subject to} & g_i(x) \leq 0,\ i = 1, 2, \ldots, m \\ & x \in \mathbb{R}^p, \end{array}$$
where $\psi$ is the linear functional defined on $\mathcal{I}_\eta$ by $\psi(\langle A\rangle) = \eta(A)$ and
$$f_j(x) = \eta(F_j(x)) = \psi(\mathcal{F}_j(x)) \quad \text{and} \quad g_i(x) = \eta(G_i(x)) = \psi(\mathcal{G}_i(x))$$
for i = 1 , , m and j = 1 , , n .
Let $\phi$ be a linear functional defined on the product space $\mathcal{I}_\eta^n$ by
$$\phi(\mathbf{A}) = \phi(\langle A_1\rangle, \langle A_2\rangle, \ldots, \langle A_n\rangle) = \sum_{j=1}^{n} w_j \cdot \psi(\langle A_j\rangle) = \sum_{j=1}^{n} w_j \cdot \eta(A_j),$$
where $w_j > 0$ are any fixed positive constants for $j = 1, 2, \ldots, n$. It is clear to see
$$\phi(\boldsymbol{\mathcal{F}}(x)) = \phi(\mathcal{F}_1(x), \ldots, \mathcal{F}_n(x)) = \sum_{j=1}^{n} w_j \cdot \psi(\mathcal{F}_j(x)) = \sum_{j=1}^{n} w_j \cdot f_j(x).$$
From the well-known scalarization technique in (conventional) multiobjective programming problems, we can consider a corresponding weighting problem (WP) of the multiobjective problem (MOP) as follows:
$$(\mathrm{WP})\qquad \begin{array}{rl} \min & f(x) = w_1 \cdot f_1(x) + \cdots + w_n \cdot f_n(x) \\ \text{subject to} & g_i(x) \leq 0,\ i = 1, 2, \ldots, m \\ & x \in \mathbb{R}^p, \end{array}$$
where the weights $w_j > 0$ for $j = 1, 2, \ldots, n$ are taken from (25).
Theorem 3
(Scalarization). Let $\eta$ be a linear and canonical scalar function defined on $\mathcal{I}$, and let $\phi$ be the functional defined on $\mathcal{I}_\eta^n$ by
$$\phi(\mathbf{A}) = \phi(\langle A_1\rangle, \langle A_2\rangle, \ldots, \langle A_n\rangle) = \sum_{j=1}^{n} w_j \cdot \eta(A_j).$$
The multiobjective optimization problem (IMOP1) with interval-valued coefficients is considered under the ordering cones $(\mathcal{C}, C)$ such that it is feasible. Assume that the ordering cone $\mathcal{C}$ is pointed. Then, we have the following properties:
(i)
Suppose that $\phi \in \mathcal{C}^*$, and that $x^* \in \mathbb{R}^p$ is the unique optimal solution of the corresponding weighting problem (WP). Then, $x^*$ is a Pareto optimal solution of the original problem (IMOP1).
(ii)
Suppose that $\phi \in \mathcal{C}^{\circledast}$, and that $x^* \in \mathbb{R}^p$ is an optimal solution of the corresponding weighting problem (WP). Then, $x^*$ is a Pareto optimal solution of the original problem (IMOP1).
Proof. 
From (24), we see that the feasible sets of problems (IMOP1) and (WP) are identical. To prove part (i), since $x^*$ is the unique optimal solution of problem (WP), it follows that $f(x^*) < f(x)$ for all feasible solutions $x \neq x^*$. Using (26), we have
$$\phi(\boldsymbol{\mathcal{F}}(x^*)) = \sum_{j=1}^{n} w_j \cdot f_j(x^*) = f(x^*) < f(x) = \sum_{j=1}^{n} w_j \cdot f_j(x) = \phi(\boldsymbol{\mathcal{F}}(x))$$
for all feasible solutions $x \neq x^*$, which shows that $x^*$ is a Pareto optimal solution of problem (IMOP1) by using part (i) of Theorem 2.
To prove part (ii), we also have
$$\phi(\boldsymbol{\mathcal{F}}(x^*)) = f(x^*) \leq f(x) = \phi(\boldsymbol{\mathcal{F}}(x))$$
for all feasible solutions x . Therefore, the desired result follows immediately from part (ii) of Theorem 2. This completes the proof. □

6. Practical Problems

In this section, we are going to present the practical interval-valued multiobjective optimization problems by separately interpreting the ordering cones as the partial orderings, and interpreting the partial orderings as the ordering cones. In this case, the meaning of interval-valued inequality constraints can be understood via the ordering cones or partial orderings separately.

6.1. Interpreting Ordering Cones as Partial Orderings

Let $\eta$ be a linear scalar function defined on $\mathcal{I}$. Regarding the constraints, we take a special kind of ordering cone $C_c$ in $\mathcal{I}_\eta$ as follows:
$$C_c = \{\langle A\rangle \in \mathcal{I}_\eta : \eta(A) \geq 0\}.$$
It is clear to see that $\eta$ is a canonical scalar function according to Definition 9. In this case, we can consider a multiobjective programming problem with interval-valued coefficients as follows:
$$(\mathrm{IMOP2})\qquad \begin{array}{rl} \min & \mathbf{F}(x) = (F_1(x), F_2(x), \ldots, F_n(x)) \\ \text{subject to} & -G_i(x) \in C_c,\ i = 1, 2, \ldots, m \\ & x \in \mathbb{R}^p, \end{array}$$
where $F_j : \mathbb{R}^p \to \mathcal{I}$ and $G_i : \mathbb{R}^p \to \mathcal{I}$ are functions with interval-valued coefficients for $i = 1, \ldots, m$ and $j = 1, \ldots, n$. Recall that the constraints $-G_i(x) \in C_c$ are interpreted as
$$-\mathcal{G}_i(x) \in C_c \quad \text{for } i = 1, \ldots, m.$$
We also see that problem (IMOP2) is a special case of problem (IMOP1) by taking the special kind of ordering cone $C_c$ given in (27).
The ordering cone $C_c$ can induce a partial ordering "⪯" on $\mathcal{I}_\eta$ given by
$$\langle A\rangle \preceq \langle B\rangle \quad \text{if and only if} \quad \langle B\rangle - \langle A\rangle \in C_c.$$
Therefore, we have
$$\langle A\rangle \preceq \langle 0\rangle \quad \text{if and only if} \quad \langle 0\rangle - \langle A\rangle \in C_c.$$
Suppose that the constraints $G_i(x) \preceq [0,0]$ for $i = 1, \ldots, m$ are interpreted as
$$\mathcal{G}_i(x) \preceq \langle 0\rangle \quad \text{for } i = 1, \ldots, m.$$
Then, using (28)–(30), problem (IMOP2) can be rewritten as
$$(\mathrm{IMOP2})\qquad \begin{array}{rl} \min & \mathbf{F}(x) = (F_1(x), F_2(x), \ldots, F_n(x)) \\ \text{subject to} & G_i(x) \preceq [0,0],\ i = 1, 2, \ldots, m \\ & x \in \mathbb{R}^p. \end{array}$$
Since
$$\langle 0\rangle - \langle A\rangle = \langle [0,0] \oplus (-A)\rangle = \langle [0,0] \ominus A\rangle \in C_c,$$
it follows that
$$\eta([0,0] \ominus A) \geq 0.$$
Using the linearity of $\eta$, we obtain
$$\eta(A) \leq \eta([0,0]) = 0.$$
Therefore, using (29), we see that
$$\langle A\rangle \preceq \langle 0\rangle \quad \text{if and only if} \quad \eta(A) \leq 0.$$
Using (30) and (31), problem (IMOP2) can also be rewritten as
$$(\mathrm{IMOP2})\qquad \begin{array}{rl} \min & \mathbf{F}(x) = (F_1(x), F_2(x), \ldots, F_n(x)) \\ \text{subject to} & \eta(G_i(x)) \leq 0,\ i = 1, 2, \ldots, m \\ & x \in \mathbb{R}^p. \end{array}$$
Now, we are going to consider another ordering cone $\mathcal{C}_o$ in the product space $\mathcal{I}_\eta^n$ regarding the multiple objective functions. We take a special kind of ordering cone $\mathcal{C}_o$ as follows:
$$\mathcal{C}_o = \{\mathbf{C} = (\langle C_1\rangle, \ldots, \langle C_n\rangle) \in \mathcal{I}_\eta^n : \eta(C_j) \geq 0 \text{ for all } j = 1, \ldots, n\}.$$
Let $\phi$ be a functional defined on $\mathcal{I}_\eta^n$ by
$$\phi(\mathbf{A}) = \phi(\langle A_1\rangle, \langle A_2\rangle, \ldots, \langle A_n\rangle) = \sum_{j=1}^{n} w_j \cdot \eta(A_j),$$
where $w_j > 0$ are any fixed constants for $j = 1, 2, \ldots, n$. Then, we have
$$\phi(\mathbf{A} + \mathbf{B}) = \phi(\langle A_1 \oplus B_1\rangle, \ldots, \langle A_n \oplus B_n\rangle) = \sum_{j=1}^{n} w_j \cdot \eta(A_j \oplus B_j) = \sum_{j=1}^{n} w_j \cdot \eta(A_j) + \sum_{j=1}^{n} w_j \cdot \eta(B_j) = \phi(\mathbf{A}) + \phi(\mathbf{B})$$
and
$$\phi(\lambda\mathbf{A}) = \phi(\langle \lambda A_1\rangle, \ldots, \langle \lambda A_n\rangle) = \lambda\sum_{j=1}^{n} w_j \cdot \eta(A_j) = \lambda\,\phi(\mathbf{A}).$$
This shows $\phi \in (\mathcal{I}_\eta^n)^*$.
Assume that the multiobjective programming problem (IMOP1) with interval-valued coefficients is considered under the ordering cones $(\mathcal{C}_o, C_c)$, where $C_c$ and $\mathcal{C}_o$ are given in (27) and (32), respectively. Then, problem (IMOP2) is a special case of problem (IMOP1), which says that Theorem 3 is applicable.
Theorem 4.
Let $\eta$ be a linear and canonical scalar function defined on $\mathcal{I}$, and let $\phi$ be the functional defined on $\mathcal{I}_\eta^n$ by
$$\phi(\mathbf{A}) = \phi(\langle A_1\rangle, \langle A_2\rangle, \ldots, \langle A_n\rangle) = \sum_{j=1}^{n} w_j \cdot \eta(A_j).$$
The multiobjective optimization problem (IMOP2) with interval-valued coefficients is considered under the ordering cones $(\mathcal{C}_o, C_c)$ such that it is feasible, where $C_c$ and $\mathcal{C}_o$ are given in (27) and (32), respectively. Suppose that $x^* \in \mathbb{R}^p$ is an optimal solution of the corresponding weighting problem (WP). Then, $x^*$ is a Pareto optimal solution of the original problem (IMOP2).
Proof. 
We first claim that the ordering cone $\mathcal{C}_o$ is pointed. Given any $\mathbf{C} \in \mathcal{C}_o \cap (-\mathcal{C}_o)$, we have $\eta(C_i) = 0$, since $\eta(C_i) \geq 0$ and $\eta(-C_i) = -\eta(C_i) \geq 0$ by the linearity of $\eta$. It says $C_i \sim [0,0]$ since $\eta([0,0]) = 0$. Therefore, we obtain $\langle C_i\rangle = \langle 0\rangle$ for all $i = 1, \ldots, n$, which says $\mathcal{C}_o \cap (-\mathcal{C}_o) = \{\mathbf{0}\}$. This shows that $\mathcal{C}_o$ is pointed.
Given any $\mathbf{A} \in \mathcal{C}_o$, we have $\eta(A_j) \geq 0$ for all $j = 1, \ldots, n$. Using (33), we also have $\phi(\mathbf{A}) \geq 0$, which says $\phi \in \mathcal{C}_o^*$. In order to use part (ii) of Theorem 3, we need to further claim $\phi \in \mathcal{C}_o^{\circledast}$.
Suppose that $\phi(\mathbf{A}) = 0$. From (33), we have
$$\phi(\mathbf{A}) = \phi(\langle A_1\rangle, \langle A_2\rangle, \ldots, \langle A_n\rangle) = \sum_{j=1}^{n} w_j \cdot \eta(A_j) = 0.$$
Since $w_j > 0$ and $\eta(A_j) \geq 0$ for all $j = 1, \ldots, n$, it follows that $\eta(A_j) = 0$ for all $j = 1, \ldots, n$. Therefore, we obtain $A_j \sim [0,0]$ for all $j = 1, \ldots, n$, i.e., $\langle A_j\rangle = \langle 0\rangle$ for all $j = 1, \ldots, n$. This shows that $\mathbf{A} \neq \mathbf{0}$ implies $\phi(\mathbf{A}) \neq 0$; hence $\phi(\mathbf{A}) > 0$ for all $\mathbf{A} \in \mathcal{C}_o \setminus \{\mathbf{0}\}$. Therefore, we conclude $\phi \in \mathcal{C}_o^{\circledast}$. The result follows immediately from part (ii) of Theorem 3. This completes the proof. □
Example 8.
We consider the scalar function $\eta$ in Example 1. The ordering cone $C_c$ regarding the constraints is taken to be
$$C_c = \{\langle A\rangle \in \mathcal{I}_\eta : \eta(A) \geq 0\} = \{\langle A\rangle \in \mathcal{I}_\eta : a^L + a^U \geq 0\}.$$
The ordering cone $\mathcal{C}_o$ regarding the multiple objective functions is taken to be
$$\mathcal{C}_o = \{\mathbf{C} = (\langle C_1\rangle, \ldots, \langle C_n\rangle) \in \mathcal{I}_\eta^n : \eta(C_j) \geq 0 \text{ for all } j = 1, \ldots, n\}.$$
Under the ordering cones $(\mathcal{C}_o, C_c)$, we consider the following multiobjective optimization problem with interval-valued coefficients
$$(\mathrm{IMOP2^*})\qquad \begin{array}{rl} \min & \mathbf{F}(x_1, x_2) = (F_1(x_1, x_2), F_2(x_1, x_2)) = ([-2, 0]x_1 \oplus [-3, -1]x_2,\; [2, 4]x_1 \oplus [1, 3]x_2) \\ \text{subject to} & -([1, 3]x_1 \oplus [5, 7]x_2 \oplus [-28, -26]) \in C_c \\ & -([7, 9]x_1 \oplus [5, 7]x_2 \oplus [-46, -44]) \in C_c \\ & -([2, 4]x_1 \oplus [0, 2]x_2 \oplus [-16, -14]) \in C_c \\ & x_1, x_2 \geq 0. \end{array}$$
Under the above settings, the corresponding multiobjective optimization problem of (IMOP2*) is given by
$$(\mathrm{MOP^*})\qquad \begin{array}{rl} \min & \mathbf{f}(x_1, x_2) = (f_1(x_1, x_2), f_2(x_1, x_2)) = (\eta([-2, 0])x_1 + \eta([-3, -1])x_2,\; \eta([2, 4])x_1 + \eta([1, 3])x_2) \\ \text{subject to} & \eta([1, 3])x_1 + \eta([5, 7])x_2 + \eta([-28, -26]) \leq 0 \\ & \eta([7, 9])x_1 + \eta([5, 7])x_2 + \eta([-46, -44]) \leq 0 \\ & \eta([2, 4])x_1 + \eta([0, 2])x_2 + \eta([-16, -14]) \leq 0 \\ & x_1, x_2 \geq 0. \end{array}$$
By applying the scalar function $\eta$ in Example 1 to the above problem, the corresponding weighting problem of (MOP*) is given by
$$(\mathrm{WP^*})\qquad \begin{array}{rl} \min & f(x_1, x_2) = w_1(-x_1 - 2x_2) + w_2(3x_1 + 2x_2) \\ \text{subject to} & 2x_1 + 6x_2 - 27 \leq 0 \\ & 8x_1 + 6x_2 - 45 \leq 0 \\ & 3x_1 + x_2 - 15 \leq 0 \\ & x_1, x_2 \geq 0, \end{array}$$
where $w_1 > 0$ and $w_2 > 0$. Suppose that we take $w_1 = 11/4$ and $w_2 = 1/4$. Then, the optimal solution of (WP*) is $(x_1^*, x_2^*) = (3, 3.5)$. Theorem 4 says that $(x_1^*, x_2^*) = (3, 3.5)$ is a Pareto optimal solution of the original problem (IMOP2*).
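Since (WP*) is an ordinary linear program, it can be solved with any off-the-shelf solver. The sketch below uses scipy.optimize.linprog (an assumed tool choice, not part of the paper) with the weights $w_1 = 11/4$ and $w_2 = 1/4$ and recovers the reported solution $(x_1^*, x_2^*) = (3, 3.5)$.

```python
from scipy.optimize import linprog

w1, w2 = 11 / 4, 1 / 4
# Objective of (WP*): w1 (-x1 - 2 x2) + w2 (3 x1 + 2 x2)
c = [-w1 + 3 * w2, -2 * w1 + 2 * w2]          # coefficients of x1 and x2

# Constraints of (WP*): A_ub @ x <= b_ub, together with x1, x2 >= 0
A_ub = [[2, 6], [8, 6], [3, 1]]
b_ub = [27, 45, 15]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)      # approximately [3.  3.5]
```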

6.2. Interpreting Partial Orderings as Ordering Cones

Regarding the constraints, we define a partial ordering "$\preceq_I$" on $\mathcal{I}_\eta$ given by
$$\langle A\rangle \preceq_I \langle B\rangle \quad \text{if and only if} \quad \eta(A) \leq \eta(B).$$
The above partial ordering is well defined, since $\bar{A} \in \langle A\rangle$ and $\bar{B} \in \langle B\rangle$ imply $\eta(\bar{A}) = \eta(A)$ and $\eta(\bar{B}) = \eta(B)$.
Regarding the multiple objective functions, for $\mathbf{A} = (\langle A_1\rangle, \ldots, \langle A_n\rangle)$ and $\mathbf{B} = (\langle B_1\rangle, \ldots, \langle B_n\rangle)$, we define a partial ordering "$\preceq_I^n$" on $\mathcal{I}_\eta^n$ by
$$\mathbf{A} \preceq_I^n \mathbf{B} \quad \text{if and only if} \quad \langle A_j\rangle \preceq_I \langle B_j\rangle \text{ for all } j = 1, 2, \ldots, n.$$
We consider a new multiobjective optimization problem with interval-valued coefficients as follows:
$$(\mathrm{IMOP3})\qquad \begin{array}{rl} \min & \mathbf{F}(x) = (F_1(x), F_2(x), \ldots, F_n(x)) \\ \text{subject to} & G_i(x) \preceq_I [0,0],\ i = 1, 2, \ldots, m \\ & x \in \mathbb{R}^p, \end{array}$$
where $F_j : \mathbb{R}^p \to \mathcal{I}$ and $G_i : \mathbb{R}^p \to \mathcal{I}$ are functions with interval-valued coefficients for $i = 1, \ldots, m$ and $j = 1, \ldots, n$. The constraints $G_i(x) \preceq_I [0,0]$ are interpreted as
$$\mathcal{G}_i(x) \preceq_I \langle 0\rangle \quad \text{for } i = 1, \ldots, m.$$
The partial ordering "$\preceq_I$" in (34) can induce an ordering cone given by
$$C_I = \{\langle A\rangle \in \mathcal{I}_\eta : \langle 0\rangle \preceq_I \langle A\rangle\}.$$
Since $\eta([0,0]) = 0$, we also have
$$C_I = \{\langle A\rangle \in \mathcal{I}_\eta : \eta(A) \geq 0\}.$$
We can show that this partial ordering "$\preceq_I$" is antisymmetric, which says that the ordering cone $C_I$ is pointed. Using (36) and (37), problem (IMOP3) can be rewritten as
$$(\mathrm{IMOP3})\qquad \begin{array}{rl} \min & \mathbf{F}(x) = (F_1(x), F_2(x), \ldots, F_n(x)) \\ \text{subject to} & -\mathcal{G}_i(x) \in C_I,\ i = 1, 2, \ldots, m \\ & x \in \mathbb{R}^p. \end{array}$$
On the other hand, the partial ordering "$\preceq_I^n$" in (35) can also induce an ordering cone $\mathcal{C}_I$ given by
$$\mathcal{C}_I = \{\mathbf{A} \in \mathcal{I}_\eta^n : \mathbf{0} \preceq_I^n \mathbf{A}\},$$
where $\mathbf{0} = (\langle 0\rangle, \ldots, \langle 0\rangle)$. Since
$$\mathbf{0} \preceq_I^n \mathbf{A} \quad \text{if and only if} \quad \langle 0\rangle \preceq_I \langle A_j\rangle \text{ for } j = 1, \ldots, n,$$
it follows that
$$\mathcal{C}_I = \{\mathbf{A} \in \mathcal{I}_\eta^n : \eta(A_j) \geq 0 \text{ for } j = 1, \ldots, n\}.$$
We also see that the partial ordering "$\preceq_I^n$" is antisymmetric, which says that the ordering cone $\mathcal{C}_I$ is pointed. Now, we assume that the multiobjective optimization problem (IMOP3) with interval-valued coefficients is considered under the ordering cones $(\mathcal{C}_I, C_I)$. Then, problem (IMOP3) is a special case of problem (IMOP1). We can similarly obtain the corresponding multiobjective optimization problem
$$(\mathrm{MOP})\qquad \begin{array}{rl} \min & \mathbf{f}(x) = (f_1(x), \ldots, f_n(x)) = (\eta(F_1(x)), \ldots, \eta(F_n(x))) \\ \text{subject to} & g_i(x) = \eta(G_i(x)) \leq 0,\ i = 1, \ldots, m \\ & x \in \mathbb{R}^p \end{array}$$
and the weighting problem
$$(\mathrm{WP})\qquad \begin{array}{rl} \min & f(x) = w_1 \cdot \eta(F_1(x)) + \cdots + w_n \cdot \eta(F_n(x)) \\ \text{subject to} & g_i(x) = \eta(G_i(x)) \leq 0,\ i = 1, \ldots, m \\ & x \in \mathbb{R}^p. \end{array}$$
In this case, Theorem 3 is applicable.
Theorem 5.
Let $\eta$ be a linear scalar function defined on $\mathcal{I}$, and let $\phi$ be the functional defined on $\mathcal{I}_\eta^n$ by
$$\phi(\mathbf{A}) = \phi(\langle A_1\rangle, \langle A_2\rangle, \ldots, \langle A_n\rangle) = \sum_{j=1}^{n} w_j \cdot \eta(A_j).$$
The multiobjective optimization problem (IMOP3) with interval-valued coefficients is considered under the partial orderings "$\preceq_I$" and "$\preceq_I^n$" given in (34) and (35), respectively, such that it is feasible. Suppose that $x^* \in \mathbb{R}^p$ is an optimal solution of the corresponding weighting problem (WP). Then, $x^*$ is a Pareto optimal solution of the original problem (IMOP3).
Proof. 
It is clear to see that $\phi$ is linear since $\eta$ is linear. Using (38), we also see that $\eta$ is a canonical scalar function. Now, we are going to claim that the partial ordering "$\preceq_I^n$" is antisymmetric. We have
$$\mathbf{A} \preceq_I^n \mathbf{B} \text{ and } \mathbf{B} \preceq_I^n \mathbf{A} \quad \text{if and only if} \quad \langle A_j\rangle \preceq_I \langle B_j\rangle \text{ and } \langle B_j\rangle \preceq_I \langle A_j\rangle \text{ for all } j = 1, \ldots, n,$$
which is equivalent to saying
$$\eta(A_j) = \eta(B_j) \quad \text{for all } j = 1, \ldots, n.$$
Therefore, we obtain $\langle A_j\rangle = \langle B_j\rangle$ for all $j = 1, \ldots, n$, i.e., $\mathbf{A} = \mathbf{B}$. This shows that the partial ordering "$\preceq_I^n$" is indeed antisymmetric, which also says that the ordering cone $\mathcal{C}_I$ is pointed.
Using (40) and (41), we have $\phi \in \mathcal{C}_I^*$. We are going to further claim $\phi \in \mathcal{C}_I^{\circledast}$. Suppose that $\phi(\mathbf{A}) = 0$. Using (41), we have
$$\phi(\mathbf{A}) = \phi(\langle A_1\rangle, \langle A_2\rangle, \ldots, \langle A_n\rangle) = \sum_{j=1}^{n} w_j \cdot \eta(A_j) = 0.$$
Since $w_j > 0$ and $\eta(A_j) \geq 0$ for all $j = 1, \ldots, n$, it follows that $\eta(A_j) = 0$ for all $j = 1, \ldots, n$. Therefore, we obtain $A_j \sim [0,0]$ for all $j = 1, \ldots, n$, i.e., $\langle A_j\rangle = \langle 0\rangle$ for all $j = 1, \ldots, n$. This shows that $\mathbf{A} \neq \mathbf{0}$ implies $\phi(\mathbf{A}) \neq 0$; hence $\phi \in \mathcal{C}_I^{\circledast}$. The result follows immediately from part (ii) of Theorem 3. This completes the proof. □
Example 9.
Suppose that a company produces two products and has two objectives that should be optimized. The amounts of these two products are denoted by  x 1  and  x 2 . For the first objective, the benefit of producing  x 1  and  x 2  gives
C 1 ( x 1 , x 2 ) = c 11 x 1 + c 12 x 2 .
For the second objective, the cost of producing x 1 and x 2 needs
C 2 ( x 1 , x 2 ) = c 21 x 1 + c 22 x 2 ,
where the quantities c i j for i , j = 1 , 2 are real numbers representing the demands for producing one item of these two products. The purpose of this company is to maximize the first objective and to minimize the second objective. However, owing to some unexpected situation, the exact quantities of c i j cannot be determined. The decision-makers can just know that c 11 is located in the interval [ 0 , 2 ] , that c 12 is located in the interval [ 1 , 3 ] , that c 21 is located in the interval [ 2 , 4 ] and that c 22 is located in the interval [ 1 , 3 ] . In this case, this company needs to maximize the following objective function
$$[0, 2]x_1 \oplus [1, 3]x_2$$
and to minimize the following objective function
$$[2, 4]x_1 \oplus [1, 3]x_2,$$
which also means that this company wants to minimize the following two objective functions:
$$F_1(x_1, x_2) = [-2, 0]x_1 \oplus [-3, -1]x_2 \quad \text{and} \quad F_2(x_1, x_2) = [2, 4]x_1 \oplus [1, 3]x_2,$$
which are functions with interval-valued coefficients. Under some conditions, the constraints have also been established. In the end, this company needs to solve the following multiobjective optimization problem with interval-valued coefficients
$$(\mathrm{IMOP3^*})\qquad \begin{array}{rl} \min & \mathbf{F}(x_1, x_2) = (F_1(x_1, x_2), F_2(x_1, x_2)) = ([-2, 0]x_1 \oplus [-3, -1]x_2,\; [2, 4]x_1 \oplus [1, 3]x_2) \\ \text{subject to} & [1, 3]x_1 \oplus [5, 7]x_2 \oplus [-28, -26] \preceq_I [0,0] \\ & [7, 9]x_1 \oplus [5, 7]x_2 \oplus [-46, -44] \preceq_I [0,0] \\ & [2, 4]x_1 \oplus [0, 2]x_2 \oplus [-16, -14] \preceq_I [0,0] \\ & x_1, x_2 \geq 0, \end{array}$$
where the partial ordering “ I ” is given in (34). Under the above settings, the corresponding multiobjective optimization problem of problem (IMOP3*) is given by
( M O P * ) min f ( x 1 , x 2 ) = f 1 ( x 1 , x 2 ) , f 2 ( x 1 , x 2 ) = η ( [ 2 , 0 ] ) x 1 + η ( [ 3 , 1 ] ) x 2 , η ( [ 2 , 4 ] ) x 1 + η ( [ 1 , 3 ] ) x 2 s u b j e c t   t o η ( [ 1 , 3 ] ) x 1 + η ( [ 5 , 7 ] ) x 2 + η ( [ 28 , 26 ] ) 0 η ( [ 7 , 9 ] ) x 1 + η ( [ 5 , 7 ] ) x 2 + η ( [ 46 , 44 ] ) 0 η ( [ 2 , 4 ] ) x 1 + η ( [ 0 , 2 ] ) x 2 + η ( [ 16 , 14 ] ) 0 x 1 , x 2 0 ,
Since $\eta(-A) = -\eta(A)$, the corresponding weighting problem of (MOP*) is given by
$$
(\mathrm{WP^*}) \quad
\begin{array}{ll}
\min & f(x_1, x_2) = w_1(-x_1 - 2x_2) + w_2(3x_1 + 2x_2) \\
\text{subject to} & 2x_1 + 6x_2 - 27 \leq 0 \\
& 8x_1 + 6x_2 - 45 \leq 0 \\
& 3x_1 + x_2 - 15 \leq 0 \\
& x_1, x_2 \geq 0,
\end{array}
$$
where w 1 > 0 and w 2 > 0 . Suppose that we take w 1 = 11 / 4 and w 2 = 1 / 4 . Then, the optimal solution of (WP*) is ( x 1 * , x 2 * ) = ( 3 , 3.5 ) . Theorem 5 says that ( x 1 * , x 2 * ) = ( 3 , 3.5 ) is a Pareto optimal solution of the original problem (IMOP3*).
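For readers who wish to verify the numbers, the weighting problem (WP*) with $w_1 = 11/4$ and $w_2 = 1/4$ is an ordinary linear program, so any LP solver recovers the reported solution. A minimal sketch using SciPy's linprog (assuming SciPy is available; the solver choice is incidental) is as follows:

```python
from scipy.optimize import linprog

w1, w2 = 11 / 4, 1 / 4

# Objective coefficients of w1*(-x1 - 2*x2) + w2*(3*x1 + 2*x2): here [-2, -5].
c = [w1 * (-1) + w2 * 3, w1 * (-2) + w2 * 2]

# Constraints 2x1 + 6x2 <= 27, 8x1 + 6x2 <= 45, 3x1 + x2 <= 15 with x1, x2 >= 0.
A_ub = [[2, 6], [8, 6], [3, 1]]
b_ub = [27, 45, 15]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)  # approximately [3.0, 3.5], matching the Pareto optimal solution above
```

At $(x_1^*, x_2^*) = (3, 3.5)$, the first two constraints are active and the weighted objective value is $-23.5$.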

7. Conclusions

It is well known that the collection of all bounded closed intervals in R cannot form a vector space. In order to apply the techniques of vector optimization, we need to transform the collection of all bounded closed intervals in R, in an equivalent way, into a vector space. Therefore, this paper introduces an equivalence relation that divides the collection of all bounded closed intervals in R into equivalence classes. The family of all equivalence classes is called a quotient set. After introducing suitable operations of vector addition and scalar multiplication on the quotient set, we show that it becomes a vector space. A partial ordering on the quotient set can then be defined using the notion of an ordering cone (convex cone). In a vector space, the concepts of ordering cone and partial ordering are essentially equivalent; in other words, we can consider the ordering cone and the partial ordering on this quotient set simultaneously.
The solution concepts of interval-valued multiobjective optimization problems are based on the ordering cones or partial orderings on the transformed quotient set. In this case, the concepts of the complete optimal solution, Pareto optimal solution, and weak Pareto optimal solution of the interval-valued multiobjective optimization problems are introduced by using the concepts of the strongly cone-minimal element, cone-minimal element, and weakly cone-minimal element, respectively. We denote by X C O , X P , and X W P the set of all complete optimal solutions, Pareto optimal solutions, and weak Pareto optimal solutions of problem (IMOP), respectively. Proposition 3 shows the following inclusions
$X_{CO} \subseteq X_P \subseteq X_{WP}.$
The difference between interval-valued functions and functions with interval-valued coefficients can be seen from Example 3. In practice, we frequently encounter functions with interval-valued coefficients. Theorem 3 presents a method for obtaining a Pareto optimal solution of the multiobjective optimization problem with interval-valued coefficients by solving the corresponding weighting problem, which is a conventional optimization problem that can be solved by existing numerical methods. Moreover, Theorems 4 and 5 present effective methods for solving practical problems by considering special kinds of ordering cones. Numerical examples are provided in Examples 8 and 9 to demonstrate the possible usefulness of the technique proposed in this paper.
In future research, on the theoretical side, we can study the optimality conditions of interval-valued multiobjective optimization problems, which may require more material from functional analysis. On the practical side, we can design more effective numerical methods for solving practical engineering and economic problems.

Funding

This research received no external funding.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Birge, J.R.; Louveaux, F. Introduction to Stochastic Programming; Physica-Verlag: New York, NY, USA, 1997.
  2. Kall, P. Stochastic Linear Programming; Springer: New York, NY, USA, 1976.
  3. Prékopa, A. Stochastic Programming; Kluwer Academic Publishers: Boston, MA, USA, 1995.
  4. Stancu-Minasian, I.M. Stochastic Programming with Multiple Objective Functions; D. Reidel Publishing Company: Dordrecht, The Netherlands, 1984.
  5. Vajda, S. Probabilistic Programming; Academic Press: New York, NY, USA, 1972.
  6. Słowiński, R. (Ed.) Fuzzy Sets in Decision Analysis, Operations Research and Statistics; Kluwer Academic Publishers: Boston, MA, USA, 1998.
  7. Delgado, M.; Kacprzyk, J.; Verdegay, J.-L.; Vila, M.A. (Eds.) Fuzzy Optimization: Recent Advances; Physica-Verlag: New York, NY, USA, 1994.
  8. Słowiński, R.; Teghem, J. (Eds.) Stochastic versus Fuzzy Approaches to Multiobjective Mathematical Programming under Uncertainty; Kluwer Academic Publishers: Boston, MA, USA, 1990.
  9. Inuiguchi, M.; Ramík, J. Possibilistic linear programming: A brief review of fuzzy mathematical programming and a comparison with stochastic programming in portfolio selection problem. Fuzzy Sets Syst. 2000, 111, 3–28.
  10. Moore, R.E. Interval Analysis; Prentice-Hall: Englewood Cliffs, NJ, USA, 1966.
  11. Moore, R.E. Methods and Applications of Interval Analysis; SIAM: Philadelphia, PA, USA, 1979.
  12. Ishibuchi, H.; Tanaka, H. Multiobjective Programming in Optimization of the Interval Objective Function. Eur. J. Oper. Res. 1990, 48, 219–225.
  13. Jiang, C.; Han, X.; Liu, G.R.; Liu, G.P. A Nonlinear Interval Number Programming Method for Uncertain Optimization Problems. Eur. J. Oper. Res. 2008, 188, 1–13.
  14. Chanas, S.; Kuchta, D. Multiobjective Programming in Optimization of Interval Objective Functions—A Generalized Approach. Eur. J. Oper. Res. 1996, 94, 594–598.
  15. Costa, T.M.; Chalco-Cano, Y.; Osuna-Gomez, R.; Lodwick, W.A. Interval order relationships based on automorphisms and their application to interval optimization. Inf. Sci. 2022, 615, 731–742.
  16. Wu, H.-C. The Karush-Kuhn-Tucker Optimality Conditions in an Optimization Problem with Interval-Valued Objective Function. Eur. J. Oper. Res. 2007, 176, 46–59.
  17. Chalco-Cano, Y.; Lodwick, W.A.; Rufian-Lizana, A. Optimality conditions of type KKT for optimization problem with interval-valued objective function via generalized derivative. Fuzzy Optim. Decis. Mak. 2013, 12, 305–322.
  18. Jayswal, A.; Stancu-Minasian, I.; Ahmad, I. On sufficiency and duality for a class of interval-valued programming problems. Appl. Math. Comput. 2011, 218, 4119–4127.
  19. Osuna-Gomez, R.; Chalco-Cano, Y.; Hernandez-Jimenez, B.; Ruiz-Garzon, G. Optimality conditions for generalized differentiable interval-valued functions. Inf. Sci. 2015, 321, 136–146.
  20. Li, W.; Tian, X. A numerical solution method to interval quadratic programming. Appl. Math. Comput. 2007, 189, 1274–1281.
  21. Li, W.; Tian, X. Numerical solution method for general interval quadratic programming. Appl. Math. Comput. 2008, 202, 589–595.
  22. Soyster, A.L. Convex Programming with Set-Inclusive Constraints and Applications to Inexact Linear Programming. Oper. Res. 1973, 21, 1154–1157.
  23. Soyster, A.L. A Duality Theory for Convex Programming with Set-Inclusive Constraints. Oper. Res. 1974, 22, 892–898; Erratum, 1279–1280.
  24. Soyster, A.L. Inexact Linear Programming with Generalized Resource Sets. Eur. J. Oper. Res. 1979, 3, 316–321.
  25. Thuente, D.J. Duality Theory for Generalized Linear Programs with Computational Methods. Oper. Res. 1980, 28, 1005–1011.
  26. Falk, J.E. Exact Solutions of Inexact Linear Programs. Oper. Res. 1976, 24, 783–787.
  27. Pomerol, J.C. Constraint Qualification for Inexact Linear Programs. Oper. Res. 1979, 27, 843–847.
  28. Jahn, J. Mathematical Vector Optimization in Partially Ordered Linear Spaces; Verlag Peter Lang GmbH: Frankfurt am Main, Germany, 1986.