Article

Using Genetic Algorithms and Core Values of Cooperative Games to Solve Fuzzy Multiobjective Optimization Problems

Department of Mathematics, National Kaohsiung Normal University, Kaohsiung 82444, Taiwan
Axioms 2024, 13(5), 298; https://doi.org/10.3390/axioms13050298
Submission received: 15 March 2024 / Revised: 15 April 2024 / Accepted: 27 April 2024 / Published: 29 April 2024

Abstract
A new methodology for solving fuzzy multiobjective optimization problems is proposed in this paper by fusing cooperative game theory and genetic algorithms. The original fuzzy multiobjective optimization problem is first transformed into a scalar optimization problem, which is a conventional optimization problem. Usually, the coefficients of this scalar optimization problem are assigned subjectively by the decision makers, and such assignments may introduce bias. Therefore, this paper proposes a mechanical procedure to avoid this subjective bias. We formulate a cooperative game using the α-level functions of the multiple fuzzy objective functions. Under this setting, the suitable coefficients can be determined mechanically from the core values of this cooperative game. We prove that the optimal solutions of the transformed scalar optimization problem are indeed nondominated solutions of the fuzzy multiobjective optimization problem. Since these core-nondominated solutions depend on the coefficients determined by the core values of the cooperative game, many core-nondominated solutions can be generated by varying the coefficients. In order to obtain the best core-nondominated solution, we invoke a genetic algorithm that evolves the coefficients.

1. Introduction

Bellman and Zadeh [1] initiated the research topic of fuzzy optimization. The main idea of their approach is to combine the fuzzy goals and the fuzzy decision space using aggregation operators. Tanaka et al. [2] and Zimmermann [3,4] proposed the concept of aspiration level to study linear programming problems and linear multiobjective programming problems. Herrera et al. [5] also used the concept of aspiration level and the triangular norm (t-norm) to aggregate fuzzy goals and fuzzy constraints. On the other hand, by mimicking the probability distribution in stochastic optimization, Buckley [6], Julien [7] and Luhandjula et al. [8] considered the concept of possibility distribution to study fuzzy optimization problems. Inuiguchi [9] also used the so-called possibility and necessity measures to study modality constrained optimization problems. Those approaches mainly fuzzified the crisp constraints and crisp objective functions. More precisely, given the following crisp constraints
$a_{i1} x_1 + a_{i2} x_2 + \cdots + a_{in} x_n \le b_i \quad \text{for } i = 1, \dots, m,$
where a i j and b i are real numbers, we can fuzzify the crisp constraints as follows
$a_{i1} x_1 + a_{i2} x_2 + \cdots + a_{in} x_n \;\widetilde{\le}\; b_i \quad \text{for } i = 1, \dots, m,$
where the membership functions are assigned using the aspiration level to describe the degree of violation for the original crisp constraints. Another method is to fuzzify the real numbers a i j and b i using the possibility distributions.
There is another interesting approach that does not rely on fuzzification. This kind of approach mainly concerns the coefficients of the optimization problems. Usually, those coefficients are assumed to be fuzzy intervals (fuzzy numbers). For instance, the fuzzy linear programming problem (FLP) is formulated as follows
$$(\mathrm{FLP}) \quad \begin{aligned} \text{maximize} \quad & \tilde{A}^{(1)} \otimes \tilde{1}_{\{x_1\}} \oplus \tilde{A}^{(2)} \otimes \tilde{1}_{\{x_2\}} \oplus \cdots \oplus \tilde{A}^{(n)} \otimes \tilde{1}_{\{x_n\}} \\ \text{subject to} \quad & b_{j1} x_1 + b_{j2} x_2 + \cdots + b_{jn} x_n \le \gamma_j \ \text{for } j = 1, \dots, m; \\ & x_i \ge 0 \ \text{for } i = 1, \dots, n, \end{aligned}$$
where the addition ⊕ and multiplication ⊗ of fuzzy intervals are involved and the coefficients are fuzzy intervals. Owing to unexpected fluctuation and turbulence in an uncertain environment, we sometimes cannot precisely measure the desired data. In this case, the corresponding optimization problems cannot be precisely formulated, since the data appear to be uncertain. Therefore, a reasonable way is to take fuzzy intervals or fuzzy numbers as the coefficients of these optimization problems. In other words, fuzzy optimization problems can be formulated in which the coefficients are assumed to be fuzzy intervals or fuzzy numbers. This kind of approach has become a mainstream of the topic of fuzzy optimization.
Regarding the fuzzy coefficients and using the Hukuhara derivative, Wu [10,11,12,13] studied the duality theorems and optimality conditions for fuzzy optimization problems. The so-called generalized Hukuhara derivative was adopted by Chalco-Cano et al. [14] to extend the Karush–Kuhn–Tucker optimality conditions for fuzzy optimization problems with fuzzy coefficients. The concept of generalized convexity was also considered by Li et al. [15] to study the optimality conditions of fuzzy optimization problems. On the other hand, regarding the issue of numerical methods, Chalco-Cano et al. [16] and Pirzada and Pathak [17] proposed the so-called Newton method to solve the fuzzy optimization problems.
Multiobjective programming problems with fuzzy objective functions were studied by Luhandjula [18], in which the approach of defuzzification was adopted. An interactive method was proposed by Yano [19] to solve multiobjective linear programming problems in which fuzzy coefficients appear in the objective functions. Regarding the applications of fuzzy multiobjective optimization problems, Ebenuwa et al. [20] proposed a multi-objective design optimization approach for the optimal analysis of buried pipes. Charles et al. [21] studied a probabilistic fuzzy goal multi-objective supply chain network problem. Roy et al. [22] studied a multiobjective multi-product solid transportation problem in which the system parameters are assumed to be rough fuzzy variables.
In this paper, we study the fuzzy multiobjective linear programming problem (FMLP) as follows
$$(\mathrm{FMLP}) \quad \begin{aligned} \text{maximize} \quad & \left( \tilde{H}^{(1)}\big(\tilde{A}^{(1)}, x\big), \dots, \tilde{H}^{(u)}\big(\tilde{A}^{(u)}, x\big) \right) \\ \text{subject to} \quad & b_{j1} x_1 + b_{j2} x_2 + \cdots + b_{jn} x_n \le \gamma_j \ \text{for } j = 1, \dots, m; \\ & x_i \ge 0 \ \text{for } i = 1, \dots, n, \end{aligned}$$
where the objective functions are fuzzy linear functions given by
$$\begin{aligned} \tilde{H}^{(1)}\big(\tilde{A}^{(1)}, x\big) &= \tilde{A}^{(11)} \otimes \tilde{1}_{\{x_1\}} \oplus \tilde{A}^{(12)} \otimes \tilde{1}_{\{x_2\}} \oplus \cdots \oplus \tilde{A}^{(1n)} \otimes \tilde{1}_{\{x_n\}} \\ \tilde{H}^{(2)}\big(\tilde{A}^{(2)}, x\big) &= \tilde{A}^{(21)} \otimes \tilde{1}_{\{x_1\}} \oplus \tilde{A}^{(22)} \otimes \tilde{1}_{\{x_2\}} \oplus \cdots \oplus \tilde{A}^{(2n)} \otimes \tilde{1}_{\{x_n\}} \\ &\;\;\vdots \\ \tilde{H}^{(u)}\big(\tilde{A}^{(u)}, x\big) &= \tilde{A}^{(u1)} \otimes \tilde{1}_{\{x_1\}} \oplus \tilde{A}^{(u2)} \otimes \tilde{1}_{\{x_2\}} \oplus \cdots \oplus \tilde{A}^{(un)} \otimes \tilde{1}_{\{x_n\}}. \end{aligned}$$
In order to introduce the concept of nondominated solutions, we need to propose an ordering relation on the set of all fuzzy intervals (fuzzy numbers). Using this ordering relation, the concept of nondominated solutions of fuzzy multiobjective optimization problems can be defined. One of the main approaches is to transform the original fuzzy multiobjective optimization problem into a scalar optimization problem by considering suitable weights; this scalar problem is a conventional optimization problem whose coefficients are real numbers. Under these settings, the important issue is to show that the optimal solution of the transformed scalar optimization problem is also a nondominated solution of the original fuzzy multiobjective optimization problem. This also says that it is sufficient to solve only the transformed scalar optimization problem. As we shall see later, the set of all nondominated solutions can be a large set, which depends on the weights determined in the transformation step. In order to find the best nondominated solution, we design a genetic algorithm by providing a suitable fitness function.
There are many ways to formulate the scalar optimization problem. The issue is to assign suitable weights to the fuzzy objective functions. Usually, the weights are determined subjectively by the decision makers. In order to avoid the possible bias caused by this subjectivity, this paper considers a cooperative game formulated from the fuzzy objective functions. The weights are then mechanically determined according to the core values of this formulated cooperative game. This kind of mechanical assignment rules out the bias caused by the intuition of the decision makers in determining the weights.
Game theory mainly concerns the cooperative or non-cooperative behavior of players whose decisions may affect each other. The pioneering work was initiated by von Neumann and Morgenstern [23]. Nash [24] proposed the concept of a two-person cooperative game, from which the concept of Nash equilibrium arose. Another solution concept, called monotonic solutions, was proposed by Young [25]. The monographs by Barron [26], Branzei et al. [27], Curiel [28], González-Díaz et al. [29] and Owen [30] also provide detailed concepts of game theory. On the other hand, Yu and Zhang [31] used the generalized triangular fuzzy number to study a cooperative game with fuzzy payoffs, where three solution concepts called fuzzy cores were defined using the fuzzy max order.
Jing et al. [32] studied a bi-objective optimization problem in which multi-benefit allocation constraints are modeled. The approach by Lokeshgupta and Sivasubramani [33] treated the two objective functions as a cooperative game with two players. Alternatively, the approach by Lee [34] treated the two objective functions as a non-cooperative game with two players and tried to obtain the Nash equilibrium. Meng and Xie [35] formulated a competitive–cooperative game method to obtain the optimal preference solutions. A three-objective optimization problem was studied by Li et al. [36], in which a three-player game was formulated. The approaches by Yu et al. [37] and Zhang et al. [38] also formulated three-player games: Zhang et al. [38] considered the sub-game perfect Nash equilibrium, and Yu et al. [37] incorporated a genetic algorithm to obtain the solutions. A four-objective optimization problem by Chai et al. [39] and a bi-objective optimization problem by Cao et al. [40] were solved using genetic algorithms in which non-cooperative game theory was adopted.
Solving multiobjective optimization problems via genetic algorithms has attracted attention for a long time. We may refer to the monographs by Deb [41], Osyczka [42], and Tan et al. [43] for more details on this topic. For the topic of fuzzy multiobjective optimization problems, we may refer to the monograph by Sakawa [44]. On the other hand, Tiwari et al. [45] studied a nonlinear optimization problem in which a genetic algorithm was proposed to solve the problem.
In Section 2, a multiobjective optimization problem with fuzzy coefficients is introduced, and its nondominated solutions are defined. In Section 3, the concept of core values in cooperative games is introduced. In Section 4, the multiple objective functions are formulated as a cooperative game. In Section 5, the suitable coefficients of the scalar optimization problem are determined using the core values of the formulated cooperative game. Different settings of coefficients can generate different core-nondominated solutions. Therefore, in Section 6, a genetic algorithm is designed to find the best core-nondominated solution by providing a suitable fitness function. Finally, a practical numerical example is provided in Section 7 to illustrate the possible usage of the methodology proposed in this paper.

2. Formulation

A fuzzy set $\tilde{A}$ in $\mathbb{R}$ is defined by a membership function $\xi_{\tilde{A}} : \mathbb{R} \to [0,1]$. The α-level set of $\tilde{A}$ is denoted and defined by
$\tilde{A}_\alpha = \{ x \in \mathbb{R} : \xi_{\tilde{A}}(x) \ge \alpha \}$
for all α ( 0 , 1 ] . According to the usual topology of R , the 0-level set A ˜ 0 is defined to be the closure of the support
$\{ x \in \mathbb{R} : \xi_{\tilde{A}}(x) > 0 \}.$
In other words, the 0-level set A ˜ 0 is defined by
$\tilde{A}_0 = \mathrm{cl}\{ x \in \mathbb{R} : \xi_{\tilde{A}}(x) > 0 \}.$
Then, we have $\tilde{A}_\alpha \subseteq \tilde{A}_\beta$ for $\alpha, \beta \in [0,1]$ with $\alpha > \beta$.
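For intuition, the α-level sets admit a closed form for simple membership functions. The following sketch (our own illustration; the function name and the triangular shape are not part of the paper, which treats general fuzzy intervals) computes the α-level set of a triangular fuzzy number and exhibits the nestedness property above.

```python
def triangular_alpha_cut(a, b, c, alpha):
    """Return the alpha-level set [A_L, A_U] of the triangular fuzzy
    number with support [a, c] and peak at b (a <= b <= c).

    Illustrative helper only: the simplest concrete fuzzy interval.
    """
    return (a + alpha * (b - a), c - alpha * (c - b))
```

For example, the 0-level set recovers the support and the 1-level set collapses to the peak, while cuts at higher α are nested inside cuts at lower α.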
Given a subset A of R , we can treat it as a fuzzy set 1 ˜ A in R with membership function defined by
$\xi_{\tilde{1}_A}(x) = \begin{cases} 1 & \text{for } x \in A \\ 0 & \text{for } x \notin A. \end{cases}$
In particular, if A is a singleton { a } , then we write 1 ˜ { a } . In other words, each real number a R can be identified with the membership function 1 ˜ { a } .
We say that A ˜ is a fuzzy interval when it is a fuzzy set in R satisfying the following conditions.
  • A ˜ is normal; that is, ξ A ˜ ( x ) = 1 for some x R .
  • The membership function ξ A ˜ is quasi-concave and upper semicontinuous.
  • The 0-level set A ˜ 0 is a closed and bounded subset of R .
Since A ˜ is normal, it says that the α -level sets A ˜ α are nonempty for all α [ 0 , 1 ] . We also see that the α -level sets A ˜ α of fuzzy interval A ˜ are bounded closed intervals given by
$\tilde{A}_\alpha = \big[ \tilde{A}_\alpha^L, \tilde{A}_\alpha^U \big] \quad \text{for all } \alpha \in [0,1].$
Let $\mathcal{F}_{\mathbb{R}}$ denote the family of all fuzzy intervals in $\mathbb{R}$. Fuzzy optimization considers fuzzy-valued functions $\tilde{H} : X \to \mathcal{F}_{\mathbb{R}}$ defined on a nonempty subset X of $\mathbb{R}^n$. This means that, for each x ∈ X, the function value $\tilde{H}(x)$ is a fuzzy interval. Now, given any fixed α ∈ [0,1], we can generate two real-valued functions
$\tilde{H}_\alpha^L : X \to \mathbb{R} \quad \text{and} \quad \tilde{H}_\alpha^U : X \to \mathbb{R},$
which are defined by
$\tilde{H}_\alpha^L(x) = \big(\tilde{H}(x)\big)_\alpha^L \quad \text{and} \quad \tilde{H}_\alpha^U(x) = \big(\tilde{H}(x)\big)_\alpha^U.$
Given any two fuzzy sets $\tilde{A}$ and $\tilde{B}$ in $\mathbb{R}$, according to the extension principle, the addition and multiplication of $\tilde{A}$ and $\tilde{B}$ are defined by
$\xi_{\tilde{A} \oplus \tilde{B}}(z) = \sup_{\{(x,y) \,:\, z = x + y\}} \min \big\{ \xi_{\tilde{A}}(x), \xi_{\tilde{B}}(y) \big\}$
and
$\xi_{\tilde{A} \otimes \tilde{B}}(z) = \sup_{\{(x,y) \,:\, z = x y\}} \min \big\{ \xi_{\tilde{A}}(x), \xi_{\tilde{B}}(y) \big\}.$
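For fuzzy intervals, a standard consequence of the extension principle is that these operations act levelwise as interval arithmetic on the α-level sets: $(\tilde{A} \oplus \tilde{B})_\alpha = [\tilde{A}_\alpha^L + \tilde{B}_\alpha^L,\; \tilde{A}_\alpha^U + \tilde{B}_\alpha^U]$, and the endpoints of $(\tilde{A} \otimes \tilde{B})_\alpha$ are the minimum and maximum of the four endpoint products. A minimal sketch of this levelwise arithmetic (function names are ours):

```python
def interval_add(I, J):
    """Levelwise addition: (A + B)_alpha = [A_L + B_L, A_U + B_U]."""
    return (I[0] + J[0], I[1] + J[1])

def interval_mul(I, J):
    """Levelwise multiplication: the endpoints are the min and max of
    the four endpoint products (handles intervals with negative parts)."""
    p = (I[0] * J[0], I[0] * J[1], I[1] * J[0], I[1] * J[1])
    return (min(p), max(p))
```

Applying these at every α-level of two fuzzy intervals produces the α-levels of their sum and product.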
In this paper, the following fuzzy multiobjective optimization problem (FMOP)
$$(\mathrm{FMOP}) \quad \begin{aligned} \text{maximize} \quad & \left( \tilde{H}^{(1)}\big(\tilde{A}^{(1)}, x\big), \dots, \tilde{H}^{(u)}\big(\tilde{A}^{(u)}, x\big) \right) \\ \text{subject to} \quad & x \in X \end{aligned}$$
is considered, where X, the feasible region, is a subset of $\mathbb{R}^n$, the $\tilde{A}^{(j)}$ are vectors of fuzzy intervals, and the $\tilde{H}^{(j)}(\tilde{A}^{(j)}, x)$ are the fuzzy objective functions of (FMOP) for $j = 1, \dots, u$. For example, we can take
$$\begin{aligned} \tilde{H}^{(1)}\big(\tilde{A}^{(1)}, x\big) &= \tilde{A}^{(11)} \otimes \tilde{1}_{\{x_1\}} \oplus \tilde{A}^{(12)} \otimes \tilde{1}_{\{x_2\}} \oplus \cdots \oplus \tilde{A}^{(1n)} \otimes \tilde{1}_{\{x_n\}} \\ \tilde{H}^{(2)}\big(\tilde{A}^{(2)}, x\big) &= \tilde{A}^{(21)} \otimes \tilde{1}_{\{x_1\}} \oplus \tilde{A}^{(22)} \otimes \tilde{1}_{\{x_2\}} \oplus \cdots \oplus \tilde{A}^{(2n)} \otimes \tilde{1}_{\{x_n\}} \\ &\;\;\vdots \\ \tilde{H}^{(u)}\big(\tilde{A}^{(u)}, x\big) &= \tilde{A}^{(u1)} \otimes \tilde{1}_{\{x_1\}} \oplus \tilde{A}^{(u2)} \otimes \tilde{1}_{\{x_2\}} \oplus \cdots \oplus \tilde{A}^{(un)} \otimes \tilde{1}_{\{x_n\}}, \end{aligned}$$
where the coefficients A ˜ r s for r = 1 , , u and s = 1 , , n are taken to be the fuzzy intervals. In particular, the fuzzy multiobjective linear programming problem (FMLP) is formulated below
( FMLP ) maximize H ˜ ( 1 ) A ˜ ( 1 ) , x , , H ˜ ( u ) A ˜ ( u ) , x subject   to b j 1 x 1 + b j 2 x 2 + + b j n x n γ j   for   j = 1 , , m ; x i 0   for   i = 1 , , n ,
where the objective functions are fuzzy-valued functions and the constraint functions are real-valued functions. The meaning of nondominated solution of problem (FMOP) should be defined based on an ordering relation among the set of all fuzzy intervals, which is shown below.
Definition 1.
Let A ˜ and B ˜ be two fuzzy intervals in R .
  • We define $\tilde{A} \preceq \tilde{B}$ when $\tilde{A}_\alpha^L \le \tilde{B}_\alpha^L$ and $\tilde{A}_\alpha^U \le \tilde{B}_\alpha^U$ for all $\alpha \in [0,1]$.
  • We define $\tilde{A} \prec \tilde{B}$ when $\tilde{A} \preceq \tilde{B}$ and there exists $\alpha^{*} \in [0,1]$ satisfying
    $\tilde{A}_\alpha^L < \tilde{B}_\alpha^L \ \text{for all } 0 \le \alpha \le \alpha^{*} \quad \text{or} \quad \tilde{A}_\alpha^U < \tilde{B}_\alpha^U \ \text{for all } \alpha^{*} \le \alpha \le 1.$
We see that $\tilde{A} \prec \tilde{B}$ implies $\tilde{A} \preceq \tilde{B}$. The ordering relation “≺” is transitive on $\mathcal{F}_{\mathbb{R}}$ in the sense that $\tilde{A} \prec \tilde{B}$ and $\tilde{B} \prec \tilde{C}$ imply $\tilde{A} \prec \tilde{C}$.
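On a finite grid of α-levels, this ordering can be checked componentwise. The sketch below (our own discretization; in particular, the strict part only requires a strict inequality at some grid point, which is the weaker condition exploited later in the proof of Theorem 1, not the full interval condition of Definition 1) represents a fuzzy interval by its list of α-cuts $(\tilde{A}_{\alpha_i}^L, \tilde{A}_{\alpha_i}^U)$.

```python
def weakly_dominated(A_cuts, B_cuts):
    """Discretized A <= B: at every common alpha level, both the lower
    and upper endpoints of A stay below those of B."""
    return all(aL <= bL and aU <= bU
               for (aL, aU), (bL, bU) in zip(A_cuts, B_cuts))

def strictly_dominated(A_cuts, B_cuts):
    """Discretized A < B: weak dominance plus a strict inequality at
    some grid point of the alpha partition."""
    return weakly_dominated(A_cuts, B_cuts) and any(
        aL < bL or aU < bU
        for (aL, aU), (bL, bU) in zip(A_cuts, B_cuts))
```

For instance, a fuzzy interval whose cuts coincide with another's except for one strictly smaller lower endpoint is strictly dominated, while no fuzzy interval strictly dominates itself.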
Definition 2.
Given a feasible solution $x^{*} \in X$, when there does not exist another feasible solution $\bar{x} \in X$ satisfying
$\tilde{H}^{(j)}\big(\tilde{A}^{(j)}, x^{*}\big) \preceq \tilde{H}^{(j)}\big(\tilde{A}^{(j)}, \bar{x}\big) \quad \text{for all } j = 1, \dots, u$
and
$\tilde{H}^{(j)}\big(\tilde{A}^{(j)}, x^{*}\big) \prec \tilde{H}^{(j)}\big(\tilde{A}^{(j)}, \bar{x}\big) \quad \text{for some } j \in \{1, \dots, u\},$
this feasible solution $x^{*}$ is said to be a nondominated solution of the fuzzy multiobjective optimization problem (FMOP).
For convenience, we write
$\tilde{H}_\alpha^{(jL)}\big(\tilde{A}^{(j)}, x\big) \equiv \Big(\tilde{H}^{(j)}\big(\tilde{A}^{(j)}, x\big)\Big)_\alpha^L \quad \text{and} \quad \tilde{H}_\alpha^{(jU)}\big(\tilde{A}^{(j)}, x\big) \equiv \Big(\tilde{H}^{(j)}\big(\tilde{A}^{(j)}, x\big)\Big)_\alpha^U$
for j = 1 , , u and α [ 0 , 1 ] . Let Γ = { 0 = α 1 , α 2 , , α m = 1 } be a partition of [ 0 , 1 ] . For j = 1 , , u , we define the following functions
$H_j\big(\tilde{A}^{(j)}, x\big) = w_{j1} \cdot \tilde{H}_{\alpha_1}^{(jL)}\big(\tilde{A}^{(j)}, x\big) + \cdots + w_{jm} \cdot \tilde{H}_{\alpha_m}^{(jL)}\big(\tilde{A}^{(j)}, x\big) + w_{j, m+1} \cdot \tilde{H}_{\alpha_1}^{(jU)}\big(\tilde{A}^{(j)}, x\big) + \cdots + w_{j, 2m} \cdot \tilde{H}_{\alpha_m}^{(jU)}\big(\tilde{A}^{(j)}, x\big)$
where w j k 0 for all j = 1 , , u and k = 1 , , 2 m satisfying
$\sum_{j=1}^{u} \sum_{k=1}^{2m} w_{jk} = 1.$
The following scalar optimization problem
$$(\mathrm{SOP}) \quad \begin{aligned} \text{maximize} \quad & H_1\big(\tilde{A}^{(1)}, x\big) + H_2\big(\tilde{A}^{(2)}, x\big) + \cdots + H_u\big(\tilde{A}^{(u)}, x\big) \\ \text{subject to} \quad & x \in X \end{aligned}$$
is considered. The nondominated solutions of problem (FMOP) can be obtained by following the theorem presented below.
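The scalarized value $H_j$ is just a weighted sum of the endpoint functions evaluated at the partition points. A minimal sketch of this evaluation (names are our own):

```python
def scalarized_value(HL, HU, w):
    """Compute H_j(x) for one objective j.

    HL[i] and HU[i] hold the values H^{(jL)}_{alpha_i}(x) and
    H^{(jU)}_{alpha_i}(x) at the m partition points, and w holds the
    2m nonnegative weights (w_{j,1}, ..., w_{j,2m}).
    """
    m = len(HL)
    return (sum(w[i] * HL[i] for i in range(m))
            + sum(w[m + i] * HU[i] for i in range(m)))
```

The objective of (SOP) is then the sum of these values over all u objectives.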
Theorem 1.
Suppose that $w_{jk} > 0$ for all $j = 1, \dots, u$ and $k = 1, \dots, 2m$. If $x^{*}$ is an optimal solution of problem (SOP), then it is also a nondominated solution of problem (FMOP).
Proof. 
Assume, to the contrary, that $x^{*}$ is not a nondominated solution of problem (FMOP). Then, there exists a feasible solution $\bar{x}$ satisfying
$\tilde{H}^{(j)}\big(\tilde{A}^{(j)}, x^{*}\big) \preceq \tilde{H}^{(j)}\big(\tilde{A}^{(j)}, \bar{x}\big) \quad \text{for all } j = 1, \dots, u$
and
$\tilde{H}^{(j^{*})}\big(\tilde{A}^{(j^{*})}, x^{*}\big) \prec \tilde{H}^{(j^{*})}\big(\tilde{A}^{(j^{*})}, \bar{x}\big) \quad \text{for some } j^{*} \in \{1, \dots, u\}.$
For $j = 1, \dots, u$ and $\alpha \in [0,1]$, by referring to (1), we have
$\tilde{H}_\alpha^{(jL)}\big(\tilde{A}^{(j)}, x^{*}\big) \le \tilde{H}_\alpha^{(jL)}\big(\tilde{A}^{(j)}, \bar{x}\big) \quad \text{and} \quad \tilde{H}_\alpha^{(jU)}\big(\tilde{A}^{(j)}, x^{*}\big) \le \tilde{H}_\alpha^{(jU)}\big(\tilde{A}^{(j)}, \bar{x}\big),$
which imply
$H_j\big(\tilde{A}^{(j)}, x^{*}\big) \le H_j\big(\tilde{A}^{(j)}, \bar{x}\big) \quad \text{for all } j = 1, \dots, u.$
Since $\tilde{H}^{(j^{*})}(\tilde{A}^{(j^{*})}, x^{*}) \prec \tilde{H}^{(j^{*})}(\tilde{A}^{(j^{*})}, \bar{x})$, the following conditions are satisfied.
  • For $\alpha \in [0,1]$, we have
    $\tilde{H}_\alpha^{(j^{*}L)}\big(\tilde{A}^{(j^{*})}, x^{*}\big) \le \tilde{H}_\alpha^{(j^{*}L)}\big(\tilde{A}^{(j^{*})}, \bar{x}\big) \quad \text{and} \quad \tilde{H}_\alpha^{(j^{*}U)}\big(\tilde{A}^{(j^{*})}, x^{*}\big) \le \tilde{H}_\alpha^{(j^{*}U)}\big(\tilde{A}^{(j^{*})}, \bar{x}\big).$
  • There exists $\alpha^{*} \in [0,1]$ satisfying
    $\tilde{H}_\alpha^{(j^{*}L)}\big(\tilde{A}^{(j^{*})}, x^{*}\big) < \tilde{H}_\alpha^{(j^{*}L)}\big(\tilde{A}^{(j^{*})}, \bar{x}\big) \quad \text{for all } 0 \le \alpha \le \alpha^{*}$
    or
    $\tilde{H}_\alpha^{(j^{*}U)}\big(\tilde{A}^{(j^{*})}, x^{*}\big) < \tilde{H}_\alpha^{(j^{*}U)}\big(\tilde{A}^{(j^{*})}, \bar{x}\big) \quad \text{for all } \alpha^{*} \le \alpha \le 1.$
We want to show that the following conditions are satisfied.
  • For $i = 1, \dots, m$, we have
    $\tilde{H}_{\alpha_i}^{(j^{*}L)}\big(\tilde{A}^{(j^{*})}, x^{*}\big) \le \tilde{H}_{\alpha_i}^{(j^{*}L)}\big(\tilde{A}^{(j^{*})}, \bar{x}\big) \quad \text{and} \quad \tilde{H}_{\alpha_i}^{(j^{*}U)}\big(\tilde{A}^{(j^{*})}, x^{*}\big) \le \tilde{H}_{\alpha_i}^{(j^{*}U)}\big(\tilde{A}^{(j^{*})}, \bar{x}\big).$
  • There exists $\alpha_r \in \Gamma$ satisfying
    $\tilde{H}_{\alpha_r}^{(j^{*}L)}\big(\tilde{A}^{(j^{*})}, x^{*}\big) < \tilde{H}_{\alpha_r}^{(j^{*}L)}\big(\tilde{A}^{(j^{*})}, \bar{x}\big) \quad \text{or} \quad \tilde{H}_{\alpha_r}^{(j^{*}U)}\big(\tilde{A}^{(j^{*})}, x^{*}\big) < \tilde{H}_{\alpha_r}^{(j^{*}U)}\big(\tilde{A}^{(j^{*})}, \bar{x}\big).$
Using (3), the first condition follows immediately for all $i = 1, \dots, m$. Since $\Gamma$ is a partition of $[0,1]$ with $\alpha_1 = 0$ and $\alpha_m = 1$, we consider the following cases.
  • Using (4), there exists $\alpha_r \in \Gamma$ satisfying $\alpha_r \le \alpha^{*}$ (for example, $\alpha_1 = 0$), which gives
    $\tilde{H}_{\alpha_r}^{(j^{*}L)}\big(\tilde{A}^{(j^{*})}, x^{*}\big) < \tilde{H}_{\alpha_r}^{(j^{*}L)}\big(\tilde{A}^{(j^{*})}, \bar{x}\big).$
  • Using (5), there exists $\alpha_r \in \Gamma$ satisfying $\alpha_r \ge \alpha^{*}$ (for example, $\alpha_m = 1$), which gives
    $\tilde{H}_{\alpha_r}^{(j^{*}U)}\big(\tilde{A}^{(j^{*})}, x^{*}\big) < \tilde{H}_{\alpha_r}^{(j^{*}U)}\big(\tilde{A}^{(j^{*})}, \bar{x}\big).$
Therefore, we conclude that there exists $\alpha_r \in \Gamma$ satisfying the second condition. Since each $w_{j^{*}k} > 0$, we have
$\sum_{i=1}^{m} w_{j^{*}i} \cdot \tilde{H}_{\alpha_i}^{(j^{*}L)}\big(\tilde{A}^{(j^{*})}, x^{*}\big) + \sum_{i=1}^{m} w_{j^{*}, m+i} \cdot \tilde{H}_{\alpha_i}^{(j^{*}U)}\big(\tilde{A}^{(j^{*})}, x^{*}\big) < \sum_{i=1}^{m} w_{j^{*}i} \cdot \tilde{H}_{\alpha_i}^{(j^{*}L)}\big(\tilde{A}^{(j^{*})}, \bar{x}\big) + \sum_{i=1}^{m} w_{j^{*}, m+i} \cdot \tilde{H}_{\alpha_i}^{(j^{*}U)}\big(\tilde{A}^{(j^{*})}, \bar{x}\big),$
which also says
$H_{j^{*}}\big(\tilde{A}^{(j^{*})}, x^{*}\big) < H_{j^{*}}\big(\tilde{A}^{(j^{*})}, \bar{x}\big).$
Using (2), we obtain
$H_1\big(\tilde{A}^{(1)}, x^{*}\big) + H_2\big(\tilde{A}^{(2)}, x^{*}\big) + \cdots + H_u\big(\tilde{A}^{(u)}, x^{*}\big) < H_1\big(\tilde{A}^{(1)}, \bar{x}\big) + H_2\big(\tilde{A}^{(2)}, \bar{x}\big) + \cdots + H_u\big(\tilde{A}^{(u)}, \bar{x}\big),$
which contradicts the optimality of $x^{*}$ for problem (SOP). This completes the proof. □
The assignment of the coefficients $w_{jk}$ for $j = 1, \dots, u$ and $k = 1, \dots, 2m$ is frequently determined by the decision makers via intuition. We may argue that this kind of determination can be subject to bias caused by the decision makers. Therefore, a mechanical way is suggested in this paper, following the solution concepts of game theory, to determine the coefficients $w_{jk}$. The main idea is that the objective functions $\tilde{H}_{\alpha_i}^{(jL)}$ and $\tilde{H}_{\alpha_i}^{(jU)}$ are treated as the payoffs of players.

3. Cores of Cooperative Games

Given a set $N = \{1, \dots, n\}$ of players, any nonempty subset S of N is called a coalition. We consider a function $v : 2^N \to \mathbb{R}$ satisfying $v(\emptyset) = 0$. A cooperative game is defined to be an ordered pair (N, v). Given any coalition S, the function value v(S) is interpreted as the worth of coalition S in the game (N, v).
Given a payoff vector (or an allocation) x R n , each x i represents the value received by player i for i = 1 , , n . Many concepts are defined below.
  • We say that the vector x R n is a pre-imputation when the following group rationality is satisfied
    v ( N ) = i N x i .
  • We say that the vector x R n is an imputation when it is a pre-imputation and satisfies the following individual rationality
$x_i \ge v(\{i\}) \quad \text{for } i \in N.$
The set of all imputations is denoted by $I(v)$, and the set of all pre-imputations is denoted by $I^{*}(v)$.
Given a coalition S and a payoff vector x , the excess of S with respect to x is defined by
$e(S, x) = v(S) - \sum_{i \in S} x_i.$
Now, the core of a game ( N , v ) is defined by
$C(v) = \left\{ x \in \mathbb{R}^n : \sum_{i \in N} x_i = v(N) \ \text{and} \ \sum_{i \in S} x_i \ge v(S) \ \text{for all } S \subseteq N \right\} = \left\{ x \in \mathbb{R}^n : \sum_{i \in N} x_i = v(N) \ \text{and} \ \sum_{i \in S} x_i \ge v(S) \ \text{for all } S \subseteq N \ \text{with} \ S \ne N \right\}.$
It is also clear to see
$C(v) = \{ x \in I(v) : e(S, x) \le 0 \ \text{for all } S \subseteq N \} = \{ x \in I(v) : e(S, x) \le 0 \ \text{for all } S \subseteq N \ \text{with} \ S \ne N \}.$
We can see that the core of a game (N, v) is the set of all imputations for which every excess is nonpositive.
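For a small game, core membership can be verified directly from this definition by checking group rationality and every proper coalition. A minimal sketch (our own representation: coalition worths stored in a dictionary keyed by frozensets):

```python
from itertools import combinations

def in_core(x, v, players, tol=1e-9):
    """Check whether the allocation x belongs to the core C(v).

    v maps frozenset coalitions to their worth; players is an ordered
    tuple so that x[i] is the payoff of players[i].
    """
    if abs(sum(x) - v[frozenset(players)]) > tol:   # group rationality
        return False
    index = {p: i for i, p in enumerate(players)}
    for r in range(1, len(players)):                # every proper coalition
        for S in combinations(players, r):
            if sum(x[index[p]] for p in S) < v[frozenset(S)] - tol:
                return False
    return True
```

For instance, in a three-player game where singletons earn 0, pairs earn 0.5 and the grand coalition earns 1, the equal split lies in the core, while a very unequal split violates some pair's rationality constraint.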

4. Formulation of Cooperative Game

Given any fixed j, the function H ˜ α i ( j L ) is treated as the payoff of player ( i , j ) for i = 1 , , m , and the function H ˜ α i ( j U ) is treated as the payoff of player ( m + i , j ) for i = 1 , , m . In this case, the player is taken to be an ordered pair ( i , j ) . Therefore, the set of all players is given by
N = ( i , j ) : i = 1 , , 2 m   and   j = 1 , , u .
We have in total 2mu players corresponding to the 2mu functions. Let
$d_{ij} = \sup_{x \in X} \tilde{H}_{\alpha_i}^{(jL)}(x) \quad \text{and} \quad d_{m+i, j} = \sup_{x \in X} \tilde{H}_{\alpha_i}^{(jU)}(x) \quad \text{for } i = 1, \dots, m,$
where d i j are regarded as the ideal payoffs for i = 1 , , 2 m and j = 1 , , u . Therefore, we must assume
$v(\{(i,j)\}) \le d_{ij} \quad \text{for all } i = 1, \dots, 2m \ \text{and} \ j = 1, \dots, u,$
which means that the true payoff v ( { ( i , j ) } ) of player ( i , j ) may not reach the ideal payoff d i j .
Given $S \subseteq N$ with $s = |S|$, we write this coalition as $S = \{i_1, i_2, \dots, i_s\}$. Intuitively, the payoff of coalition S must be at least the total payoff of the individual players in S, so that cooperation is meaningful. In other words, the following inequality
$v(S) \ge v(\{i_1\}) + v(\{i_2\}) + \cdots + v(\{i_s\})$
must be satisfied. Also, the payoff of coalition S cannot be larger than the total ideal payoffs on S, which means that the payoff of a coalition may not reach the total ideal payoffs. That is to say, the following inequalities must be satisfied:
$v(\{i_1\}) + v(\{i_2\}) + \cdots + v(\{i_s\}) \le v(S) \le d_{i_1} + d_{i_2} + \cdots + d_{i_s}.$
Now, the payoff of coalition S with | S | 2 is defined by
$v(S) = \sum_{k=1}^{s} v(\{i_k\}) + \frac{\gamma_s}{s} \sum_{k=1}^{s} v(\{i_k\}),$
where γ s is a non-negative constant. The second term
$\frac{\gamma_s}{s} \sum_{k=1}^{s} v(\{i_k\})$
can be treated as the extra payoff subject to cooperation by forming a coalition S with | S | = s . We assume that this extra payoff is obtained by taking a non-negative constant γ s that multiplies the average of individual payoffs. In this situation, the non-negative constant γ s should be independent of any coalition S with | S | = s .
According to the upper bound of v ( S ) given in (6), the constant γ s must satisfy
$\sum_{k=1}^{s} v(\{i_k\}) + \frac{\gamma_s}{s} \sum_{k=1}^{s} v(\{i_k\}) \le \sum_{k=1}^{s} d_{i_k}.$
Equivalently, we obtain
$0 \le \gamma_s \le \frac{s \cdot \sum_{k=1}^{s} \big[ d_{i_k} - v(\{i_k\}) \big]}{\sum_{k=1}^{s} v(\{i_k\})} = \frac{s \cdot \sum_{k=1}^{s} d_{i_k}}{\sum_{k=1}^{s} v(\{i_k\})} - s \equiv V_s(S)$
for s = 2 , , 2 m u . Now, we define
$V_s = \min \big\{ V_s(S) : S \subseteq N \ \text{with} \ |S| = s \big\}.$
Then, we have
$0 \le \gamma_s \le V_s$
for s = 2 , , 2 m u . We also define γ 1 = 0 for convenience.
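The coalition worth and the admissible range of $\gamma_s$ above are simple arithmetic on the singleton worths and ideal payoffs. A minimal sketch (function names are ours; it assumes positive singleton worths so the division is well defined):

```python
def coalition_worth(singleton_worths, gamma_s):
    """v(S): the sum of the individual worths plus an extra payoff equal
    to gamma_s times the average of the individual worths."""
    s = len(singleton_worths)
    return sum(singleton_worths) + gamma_s * sum(singleton_worths) / s

def gamma_upper_bound(singleton_worths, ideal_payoffs):
    """V_s(S): the largest gamma_s for which v(S) stays below the total
    ideal payoffs of the coalition."""
    s = len(singleton_worths)
    total_v = sum(singleton_worths)
    return s * (sum(ideal_payoffs) - total_v) / total_v
```

As a consistency check, taking $\gamma_s$ at this upper bound makes $v(S)$ exactly equal to the total ideal payoffs of the coalition.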

5. Solving the Problem Using Core Values

Recall that the core of cooperative game ( N , v ) is given by
$C(v) = \left\{ w \in \mathbb{R}^{2mu} : \sum_{i \in N} w_i = v(N) \ \text{and} \ \sum_{i \in S} w_i \ge v(S) \ \text{for all } S \subseteq N \ \text{with} \ S \ne N \right\}.$
Using (7), it follows that w C ( v ) if and only if w satisfies
$\sum_{i \in N} w_i = \sum_{k=1}^{2mu} v(\{i_k\}) + \frac{\gamma_{2mu}}{2mu} \sum_{k=1}^{2mu} v(\{i_k\})$
and
$\sum_{i \in S} w_i \ge \sum_{k=1}^{s} v(\{i_k\}) + \frac{\gamma_s}{s} \sum_{k=1}^{s} v(\{i_k\}) \quad \text{for all } S \subseteq N \ \text{with} \ S \ne N.$
Since the payoffs v ( S ) are non-negative for S N , the positive core C + ( v ) is defined by
$C^{+}(v) = C(v) \cap \mathbb{R}^{2mu}_{+} = \left\{ w \in \mathbb{R}^{2mu}_{+} : \sum_{i \in N} w_i = v(N) \ \text{and} \ \sum_{i \in S} w_i \ge v(S) \ \text{for all } S \subseteq N \ \text{with} \ S \ne N \right\}.$
We normalize the values of w i to be w ¯ i given by
$\bar{w}_i = \frac{w_i}{\sum_{j=1}^{2mu} w_j} \quad \text{for } i = 1, \dots, 2mu.$
In this case, we also write C ¯ + ( v ) to denote the set of all normalized values w ¯ obtained from w C + ( v ) .
Now, the following linear programming problem
$$(\mathrm{LP}) \quad \begin{aligned} \text{minimize} \quad & \sum_{i \in N} w_i \\ \text{subject to} \quad & \sum_{i \in S} w_i \ge v(S) \ \text{for all } S \subseteq N; \\ & w_i \ge 0 \ \text{for } i = 1, \dots, 2mu \end{aligned}$$
is considered. The following property is useful for further study.
Proposition 1.
Let $w^{*}$ be an optimal solution of problem (LP). If $v(N) \ge v(S)$ for all $S \subseteq N$, then $w^{*} \in C^{+}(v)$.
Proof. 
Since $w^{*}$ is a feasible solution of problem (LP), we have
$\sum_{i \in S} w_i^{*} \ge v(S) \quad \text{for all } S \subseteq N.$
We take $\bar{w}_1 = v(N)$ and $\bar{w}_i = 0$ for $i = 2, 3, \dots, 2mu$. Since $v(N) \ge v(S)$ for all $S \subseteq N$, it follows that
$\bar{w} = \big( \bar{w}_1, \dots, \bar{w}_{2mu} \big) = \big( v(N), 0, \dots, 0 \big)$
is a feasible solution of problem (LP) with objective value v(N), which implies
$\sum_{i \in N} w_i^{*} \le v(N).$
By taking S = N in (12), we also have $\sum_{i \in N} w_i^{*} \ge v(N)$, and hence
$\sum_{i \in N} w_i^{*} = v(N).$
Therefore, we obtain $w^{*} \in C^{+}(v)$. This completes the proof. □
From the payoff defined in (7), the formulation of cooperative game ( N , v ) is determined by the non-negative constants γ s for s = 1 , , 2 m u , which also says that the payoff function v must be determined by the vector γ = ( γ 1 , , γ 2 m u ) . We also write
C + ( v ) = C + ( γ )   and   C ¯ + ( v ) = C ¯ + ( γ ) .
Proposition 1 shows that the optimal solution $w^{*}$ is determined by the payoff function v. In this case, we can write $w^{*}(\gamma)$ in Proposition 1. Then, the normalized values $\bar{w}(\gamma) \in \bar{C}^{+}(\gamma)$ are given by
$\bar{w}_i(\gamma) = \frac{w_i^{*}(\gamma)}{\sum_{j=1}^{2mu} w_j^{*}(\gamma)} \quad \text{for } i = 1, \dots, 2mu.$
Now, we assign the normalized core values given in (13) to the coefficients of scalar optimization problem (SOP). In this case, according to Theorem 1, the optimal solution of problem (SOP) is called the core-nondominated solution of problem (FMOP), which means that the solution concept of core is involved. Therefore, we can solve the following scalar optimization problem
$$(\mathrm{SOP}) \quad \begin{aligned} \text{maximize} \quad & \sum_{j=1}^{u} \sum_{i=1}^{m} \bar{w}_i(\gamma) \cdot \tilde{H}_{\alpha_i}^{(jL)}\big(\tilde{A}^{(j)}, x\big) + \sum_{j=1}^{u} \sum_{i=1}^{m} \bar{w}_{m+i}(\gamma) \cdot \tilde{H}_{\alpha_i}^{(jU)}\big(\tilde{A}^{(j)}, x\big) \\ \text{subject to} \quad & x \in X \end{aligned}$$
to obtain the core-nondominated solution. Since the core-nondominated solution depends on the vector γ of non-negative constants, we also write $x^{*}(\gamma)$ for convenience.
Let P be the set of all core-nondominated solutions of problem (FMOP). From (10), we have
$P = \big\{ x^{*}(\gamma) : 0 \le \gamma_s \le V_s \ \text{for } s = 1, \dots, 2mu \big\}.$
Since P is a large set, we intend to find a best core-nondominated solution from P by using the genetic algorithm. In this case, we plan to maximize the following fitness function
$\eta(\gamma) = \sum_{j=1}^{u} \sum_{i=1}^{m} \bar{w}_i(\gamma) \cdot \tilde{H}_{\alpha_i}^{(jL)}\big(\tilde{A}^{(j)}, x^{*}(\gamma)\big) + \sum_{j=1}^{u} \sum_{i=1}^{m} \bar{w}_{m+i}(\gamma) \cdot \tilde{H}_{\alpha_i}^{(jU)}\big(\tilde{A}^{(j)}, x^{*}(\gamma)\big),$
where 0 γ s V s for s = 1 , , 2 m u .
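Evaluating this fitness function chains the two earlier steps: derive the normalized core weights from γ, then solve the induced problem (SOP). The sketch below only fixes this pipeline; `normalized_core` and `solve_sop` are hypothetical callbacks standing in for the LP of this section and a conventional (SOP) solver, not functions defined in the paper.

```python
def fitness(gamma, normalized_core, solve_sop):
    """eta(gamma): normalized core weights for this gamma, then the
    optimal value of the scalar problem (SOP) they induce.

    normalized_core(gamma) -> weight vector w_bar(gamma);
    solve_sop(w_bar) -> (x_star, optimal objective value).
    """
    w_bar = normalized_core(gamma)
    x_star, value = solve_sop(w_bar)
    return value
```

The genetic algorithm of the next section then maximizes this function over the box $0 \le \gamma_s \le V_s$.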

6. Genetic Algorithms

The purpose is to design a genetic algorithm such that a best core-nondominated solution can be obtained from the set P of all core-nondominated solutions of problem (FMOP). Therefore, we are going to maximize the following fitness function
$\eta(\gamma) = \sum_{j=1}^{u} \sum_{i=1}^{m} \bar{w}_i(\gamma) \cdot \tilde{H}_{\alpha_i}^{(jL)}\big(\tilde{A}^{(j)}, x^{*}(\gamma)\big) + \sum_{j=1}^{u} \sum_{i=1}^{m} \bar{w}_{m+i}(\gamma) \cdot \tilde{H}_{\alpha_i}^{(jU)}\big(\tilde{A}^{(j)}, x^{*}(\gamma)\big).$
We shall evolve the non-negative vector γ by performing crossover and mutation to find a best chromosome according to the fitness function given in (14).
From (10), the non-negative constants γ s must satisfy
$0 \le \gamma_s \le V_s \quad \text{for } s = 2, \dots, 2mu.$
In this algorithm, each non-negative constant γ s will be a random number from the closed interval [ 0 , V s ] for s = 2 , , 2 m u . The chromosome in this algorithm is defined to be a vector γ = ( 0 , γ 2 , , γ 2 m u ) in R 2 m u satisfying γ 1 = 0 and γ s [ 0 , V s ] for s = 2 , , 2 m u .
Two phases are performed in this algorithm. Since the scalar optimization problem (SOP) depends on the partition $\Gamma = \{0 = \alpha_1, \alpha_2, \dots, \alpha_m = 1\}$ of $[0,1]$, phase I obtains the approximated best core-nondominated solution for a fixed partition Γ. In phase II, we use finer partitions of $[0,1]$ to repeat the algorithm of phase I until the approximated best core-nondominated solution can no longer be improved.

6.1. Phase I

In phase I, given a fixed partition Γ = { 0 = α 1 , α 2 , , α m = 1 } of [ 0 , 1 ] , we are going to obtain the approximated best core-nondominated solution by solving the scalar optimization problem (SOP) through performing crossover and mutation operations.
Proposition 2.
(Crossover) Let γ ^ = ( 0 , γ ^ 2 , , γ ^ 2 m u ) and γ ¯ = ( 0 , γ ¯ 2 , , γ ¯ 2 m u ) be two chromosomes satisfying
$0 \le \hat{\gamma}_s \le V_s \quad \text{and} \quad 0 \le \bar{\gamma}_s \le V_s \quad \text{for } s = 2, \dots, 2mu.$
Given any λ ( 0 , 1 ) , we consider the following crossover operation
$\gamma = \lambda \hat{\gamma} + (1 - \lambda) \bar{\gamma},$
where the components are given by
$\gamma_1 = 0 \quad \text{and} \quad \gamma_s = \lambda \hat{\gamma}_s + (1 - \lambda) \bar{\gamma}_s \quad \text{for } s = 2, \dots, 2mu.$
Then, the new chromosome γ also satisfies $0 \le \gamma_s \le V_s$ for $s = 2, \dots, 2mu$.
Proof. 
Since $0 \le \hat{\gamma}_s \le V_s$ and $0 \le \bar{\gamma}_s \le V_s$, it is clear that the convex combination $\gamma_s = \lambda \hat{\gamma}_s + (1 - \lambda) \bar{\gamma}_s$ also satisfies $0 \le \gamma_s \le V_s$. □
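The crossover of Proposition 2 is a plain convex combination, so feasibility is inherited automatically. A minimal sketch (the function name is ours):

```python
def crossover(gamma_hat, gamma_bar, lam):
    """Convex-combination crossover of two chromosomes; since both
    parents satisfy 0 <= gamma_s <= V_s componentwise, so does the
    child, and the first component stays 0."""
    return tuple(lam * a + (1 - lam) * b
                 for a, b in zip(gamma_hat, gamma_bar))
```

No clipping step is needed here, in contrast to the mutation operation below, precisely because the feasible box is convex.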
Given a vector γ ¯ = ( 0 , γ ¯ 2 , , γ ¯ 2 m u ) , we shall perform the mutation to obtain γ from γ ¯ . Given a fixed s = 2 , , 2 m u , we first generate a random Gaussian number with mean zero and standard deviation σ s . Then, we assign
$\hat{\gamma}_s = \bar{\gamma}_s + N(0, \sigma_s^2) = \bar{\gamma}_s + \sigma_s \cdot N(0, 1).$
The new mutated chromosome γ is defined by
$\gamma_s = \begin{cases} \hat{\gamma}_s & \text{if } \hat{\gamma}_s \in [0, V_s] \\ V_s & \text{if } \hat{\gamma}_s > V_s \\ 0 & \text{if } \hat{\gamma}_s < 0, \end{cases}$
where V s is given in (9) for s = 2 , , 2 m u .
Proposition 3.
(Mutation) Suppose that γ ¯ = ( 0 , γ ¯ 2 , , γ ¯ 2 m u ) is a chromosome. We consider the mutation in the way of (15). Then, the new mutated chromosome γ satisfies
$0 \le \gamma_s \le V_s \quad \text{for } s = 2, \dots, 2mu.$
Proof. 
It is clear to see from (15). □
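The mutation of (15) perturbs each component by a Gaussian term and clips the result back into its box. A minimal sketch (the function name and the `rng` parameter are our own choices):

```python
import random

def mutate(gamma_bar, sigmas, V, rng=random):
    """Gaussian mutation followed by clipping into [0, V_s], as in
    Eq. (15); the first component gamma_1 is kept at 0."""
    new = [0.0]
    for g, sigma, v in zip(gamma_bar[1:], sigmas[1:], V[1:]):
        candidate = g + rng.gauss(0.0, sigma)       # gamma_s + sigma_s N(0,1)
        new.append(min(max(candidate, 0.0), v))     # clip to [0, V_s]
    return tuple(new)
```

Because of the clipping step, the mutated chromosome is feasible regardless of how large the Gaussian perturbation happens to be, which is exactly the content of Proposition 3.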

6.2. Phase II

In phase II, we use finer partitions of $[0,1]$ to repeat the algorithm proposed in phase I. Assume that the partition $\Gamma = \{0 = \alpha_1, \alpha_2, \dots, \alpha_m = 1\}$ of $[0,1]$ was considered in phase I. Now, we take a new partition $\bar{\Gamma} = \{0 = \alpha_1, \alpha_2, \dots, \alpha_n = 1\}$ of $[0,1]$ that is finer than Γ in the sense of $\Gamma \subseteq \bar{\Gamma}$, and perform the algorithm proposed in phase I using this new partition $\bar{\Gamma}$. In other words, we continue executing phase II with finer and finer partitions until the approximated best core-nondominated solution can no longer be improved. Two ways are suggested in this paper to obtain the finer partition $\bar{\Gamma}$, as follows.
For the first way, we can simply take a partition Γ̄ of [0, 1] that divides [0, 1] equally and satisfies Γ ⊆ Γ̄. The second way is to design another genetic algorithm that generates a new finer partition Γ̄ by evolving the old partition Γ. More precisely, a population P = {α_1, α_2, …, α_m} is taken from the old partition Γ. After performing the crossover and mutation operations on the old population P, we can generate new points β_1, …, β_r such that a new finer partition Γ̄ is obtained as follows
Γ̄ = Γ ∪ {β_1, …, β_r}.
For example, the crossover and mutation operations can be designed as follows.
  • (Crossover operation). Take two points α_s and α_t from the old population P. We perform the convex combination λα_s + (1 − λ)α_t for different λ ∈ (0, 1), which generates different new points.
  • (Mutation operation). Take α_s from P. We perform the mutation α_s + δ, where δ is a random number in [0, 1]. When α_s + δ is in [0, 1], the newly generated point is taken to be α_s + δ. When α_s + δ > 1, the newly generated point is taken to be α_s + δ − 1.
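The two operations above can be sketched as follows; the helper name `refine` and the choice of generating r new points per round are our assumptions.

```python
import random

def refine(partition, r, seed=None):
    # evolve the old partition: crossover (convex combination) or
    # wrap-around mutation, until r genuinely new points have been added
    rng = random.Random(seed)
    pts = sorted(partition)
    new = set(pts)
    while len(new) < len(pts) + r:
        if rng.random() < 0.5:                 # crossover operation
            a, b = rng.sample(pts, 2)
            lam = rng.random()
            new.add(lam * a + (1 - lam) * b)
        else:                                  # mutation with wrap-around
            a = rng.choice(pts)
            delta = rng.random()
            new.add(a + delta if a + delta <= 1 else a + delta - 1)
    return sorted(new)

gamma = [0.0, 0.5, 1.0]                 # the old partition
gamma_bar = refine(gamma, r=3, seed=7)
print(gamma_bar)                        # a finer partition containing gamma
```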
When a new finer partition Γ̄ is generated, the algorithm in phase I is performed again using this new partition Γ̄. In this case, we can obtain a new approximated best core-nondominated solution. This partition Γ̄ is then regarded as the old partition. Afterward, a new finer partition Γ̂ of [0, 1] is generated to satisfy Γ̄ ⊆ Γ̂, and we again perform the algorithm in phase I using this new finer partition Γ̂.

6.3. Computational Procedure

Given a fixed partition Γ = {0 = α_1, α_2, …, α_m = 1} of [0, 1], we present the detailed computational procedure of the genetic algorithm for phase I as follows.
  • Step 1 (Initialization). The population size of this algorithm is assumed to be p. The chromosomes in the initial population are determined by setting γ(k) = (γ_1(k), …, γ_{2mu}(k)), where γ_1(k) = 0 and the γ_s(k) are taken to be random numbers in [0, V_{2mu}] for all k = 1, …, p and s = 2, …, 2mu, where V_s is given in (9). For each γ(k), we solve the linear programming problem in (11) and use Proposition 1 to obtain the positive core w(γ(k)) ∈ C⁺(v) for k = 1, …, p. Using (13), we also calculate the normalized positive core w̄_i(γ(k)) for i = 1, …, 2mu and k = 1, …, p. For each chromosome γ(k), we assign its corresponding fitness value by calculating the following expression
    η(γ(k)) = Σ_{j=1}^{u} Σ_{i=1}^{m} w̄_i(γ(k)) · H̃_{α_i}^{jL}(Ã(j), x(γ(k))) + Σ_{j=1}^{u} Σ_{i=1}^{m} w̄_{m+i}(γ(k)) · H̃_{α_i}^{jU}(Ã(j), x(γ(k)))
    for k = 1, …, p. Then, the p chromosomes γ(k) for k = 1, …, p are ranked in descending order according to their fitness values η(γ(k)). The top one is saved as the (initial) best chromosome, and its fitness value is named η̄_0. We also save each γ(k) as an old elite γ̄(k) given by γ̄_s(k) ← γ_s(k) for s = 1, …, 2mu and k = 1, …, p.
  • Step 2 (Tolerance). We set the tolerance ϵ. We also set the number m of maximum times that the tolerance ϵ must be satisfied. We set t = 0 to denote the initial generation, and l = 1 to denote the first time that the tolerance ϵ is satisfied. This step is related to the stopping criterion, which is designed to avoid being trapped in a local optimum; it will become clearer in step 7.
  • Step 3 (Mutation). We set t t + 1 to mean the t-th generation. By referring to Proposition 3, each chromosome γ ( k ) is mutated, and is assigned to γ ( k + p ) using (15) for k = 1 , , p . For each s = 2 , , 2 m u , the random Gaussian numbers with mean zero and standard deviation σ s are generated in which σ s is taken by
    σ s = β s · η ( γ ( k ) ) + b s .
    The constant β_s is the proportionality constant that scales η(γ(k)), and the constant b_s represents the offset. Then, we assign
    γ̂_s(k) = γ_s(k) + σ_s · N(0, 1) = γ_s(k) + (β_s · η(γ(k)) + b_s) · N(0, 1).
    In this case, the mutated chromosome γ ( k + p ) is obtained in which the components are given by γ 1 ( k + p ) = 0 and
    γ_s(k+p) = γ̂_s(k) if γ̂_s(k) ∈ [0, V_s]; γ_s(k+p) = V_s if γ̂_s(k) > V_s; γ_s(k+p) = 0 if γ̂_s(k) < 0
    for s = 2 , , 2 m u and k = 1 , , p . After this mutation step, we shall have 2 p chromosomes γ ( k ) for k = 1 , , 2 p .
  • Step 4 (Crossover). By referring to Proposition 2, randomly select γ(k_1) and γ(k_2) for k_1, k_2 ∈ {1, …, 2p} with k_1 ≠ k_2. A random number λ ∈ (0, 1) is generated. In this case, the new chromosome is taken to be
    γ(2p+1) = λγ(k_1) + (1 − λ)γ(k_2),
    where the components are given by
    γ_s(2p+1) = λγ_s(k_1) + (1 − λ)γ_s(k_2) ∈ [0, V_s] for s = 1, …, 2mu.
    After this step, we shall have 2p + 1 chromosomes γ(k) for k = 1, …, 2p + 1.
  • Step 5 (Calculate New Fitness). Using Proposition 1 and (13), for each new chromosome γ ( k + p ) , we calculate the normalized positive core w ¯ i ( γ ( k + p ) ) by solving the linear programming problem in (11) for i = 1 , , 2 m u and k = 1 , , p + 1 . For each γ ( k + p ) , we assign its corresponding fitness value by calculating the following expression
    η(γ(k+p)) = Σ_{j=1}^{u} Σ_{i=1}^{m} w̄_i(γ(k+p)) · H̃_{α_i}^{jL}(Ã(j), x(γ(k+p))) + Σ_{j=1}^{u} Σ_{i=1}^{m} w̄_{m+i}(γ(k+p)) · H̃_{α_i}^{jU}(Ã(j), x(γ(k+p)))
    for k = 1 , , p + 1 .
  • Step 6 (Selection). The p old elites γ̄(k) for k = 1, …, p and the p + 1 new chromosomes γ(k) for k = p + 1, …, 2p + 1 obtained from Steps 3 and 4 are ranked in descending order according to their fitness values η(γ̄(k)) and η(γ(k)), respectively. The top p chromosomes are saved as the new elites γ̄(k) for k = 1, …, p. Also, the fitness value of the top one is saved as η̄_t, the best value for the t-th generation.
  • Step 7 (Stopping Criterion). After step 6, we may obtain η̄_{t−1} = η̄_t, which suggests that the algorithm is trapped in a local optimum. In order to escape this trap, we proceed with further iterations, m times in total as set in step 2, even though the criterion |η̄_t − η̄_{t−1}| < ϵ is satisfied. When the criterion |η̄_t − η̄_{t−1}| < ϵ has been satisfied m times, we stop the algorithm and return the solution for phase I. Otherwise, the new elites γ̄(k) for k = 1, …, p are copied as the next generation γ(k) for k = 1, …, p. Then, we set l ← l + 1 and proceed to step 3. We also remark that the number l counts the times that the tolerance |η̄_t − η̄_{t−1}| < ϵ has been satisfied.
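Steps 1–7 form an elitist evolutionary loop. The sketch below mirrors that structure with a placeholder fitness function standing in for the LP-based fitness η(γ); the chromosome length and the bounds V_s are illustrative values, and the tolerance bookkeeping of steps 2 and 7 is omitted.

```python
import random

LEN = 6                        # chromosome length 2mu (illustrative)
V = [0.0] + [1.0] * (LEN - 1)  # bounds V_s (illustrative); gamma_1 = 0 always
P = 20                         # population size p

def fitness(g):
    # placeholder standing in for the LP-based fitness eta(gamma)
    return sum(g)

def clip(x, s):
    return min(max(x, 0.0), V[s])

def mutate(g, sigma=0.1):
    # Gaussian mutation with clipping into [0, V_s], as in Proposition 3
    return [0.0] + [clip(g[s] + random.gauss(0, sigma), s) for s in range(1, LEN)]

def crossover(g1, g2):
    lam = random.random()      # convex combination, as in Proposition 2
    return [lam * a + (1 - lam) * b for a, b in zip(g1, g2)]

random.seed(0)
elites = [[0.0] + [random.uniform(0, V[s]) for s in range(1, LEN)] for _ in range(P)]
for t in range(50):            # generations
    pool = elites + [mutate(g) for g in elites]      # step 3: p mutations
    pool.append(crossover(*random.sample(pool, 2)))  # step 4: one crossover
    pool.sort(key=fitness, reverse=True)             # steps 5-6: rank by fitness
    elites = pool[:P]          # elitist selection keeps the top p
best = fitness(elites[0])
print(round(best, 3))          # approaches sum(V[1:]) = 5 as t grows
```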
After step 7, given a fixed partition Γ = {0 = α_1, α_2, …, α_m = 1} of [0, 1], an approximated best core-nondominated solution can be obtained, which is denoted by x(Γ). This also means that the solution is determined by the partition Γ. Then, the algorithm proceeds to phase II by considering finer partitions of [0, 1], as follows.
  • Step 1. A new finer partition Γ ¯ is generated to satisfy Γ Γ ¯ .
  • Step 2. By using this new finer partition Γ ¯ to perform the genetic algorithm in phase I, we can obtain a new approximated best core-nondominated solution x ( Γ ¯ ) .
  • Step 3. Given a pre-determined tolerance ϵ, once the criterion ∥x(Γ) − x(Γ̄)∥ < ϵ is reached, the algorithm halts and returns the final solution x(Γ̄). Otherwise, Γ̄ is set as the old partition Γ, and the algorithm proceeds to step 1 to generate a new finer partition.
Finally, after step 3, we obtain the approximated best core-nondominated solution. In other words, by referring to Theorem 1, an approximated nondominated solution of problem (FMOP) is obtained.

7. Numerical Example

The membership function of a triangular fuzzy interval Ã = (a^L, a, a^U) is defined by
ξ_Ã(r) = (r − a^L)/(a − a^L) if a^L ≤ r ≤ a; ξ_Ã(r) = (a^U − r)/(a^U − a) if a < r ≤ a^U; ξ_Ã(r) = 0 otherwise.
Then, its α -level set A ˜ α = [ A ˜ α L , A ˜ α U ] is given by
Ã_α^L = (1 − α)·a^L + α·a and Ã_α^U = (1 − α)·a^U + α·a.
In particular, we consider the triangular fuzzy intervals as follows
4 ˜ = ( 3.5 , 4 , 4.5 ) , 5 ˜ = ( 4 , 5 , 5.5 )   and   6 ˜ = ( 5 , 6 , 7 ) .
Then, their α -level sets are given by
4̃_α = [3.5 + 0.5α, 4.5 − 0.5α], 5̃_α = [4 + α, 5.5 − 0.5α], 6̃_α = [5 + α, 7 − α].
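These α-level endpoints follow directly from the formula above; a short sketch:

```python
def alpha_level(tri, alpha):
    # alpha-level interval of a triangular fuzzy interval (a_L, a, a_U)
    a_l, a, a_u = tri
    return ((1 - alpha) * a_l + alpha * a, (1 - alpha) * a_u + alpha * a)

four, five, six = (3.5, 4, 4.5), (4, 5, 5.5), (5, 6, 7)
print(alpha_level(five, 0.5))   # -> (4.5, 5.25)
print(alpha_level(six, 0.25))   # -> (5.25, 6.75)
```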
In this case, the following fuzzy linear programming problem
maximize 4̃ x_1 ⊕ 5̃ x_2 ⊕ 6̃ x_3
subject to x_1 − x_2 + x_3 ≤ 20
3x_1 + 2x_2 + 4x_3 ≤ 42
3x_1 + 2x_2 ≤ 30
x_1, x_2, x_3 ≥ 0
will be solved. According to the above formulation, we equally divide the unit closed interval [0, 1] by taking α_1 = 0, α_2 = 0.5, and α_3 = 1. Let Ã = (4̃, 5̃, 6̃). Using the above notations, we obtain
H̃_{α_1}^L(Ã, x) = H̃_0^L(Ã, x) = 3.5x_1 + 4x_2 + 5x_3
H̃_{α_2}^L(Ã, x) = H̃_{0.5}^L(Ã, x) = 3.75x_1 + 4.5x_2 + 5.5x_3
H̃_{α_3}^L(Ã, x) = H̃_1^L(Ã, x) = 4x_1 + 5x_2 + 6x_3
and
H̃_{α_1}^U(Ã, x) = H̃_0^U(Ã, x) = 4.5x_1 + 5.5x_2 + 7x_3
H̃_{α_2}^U(Ã, x) = H̃_{0.5}^U(Ã, x) = 4.25x_1 + 5.25x_2 + 6.5x_3
H̃_{α_3}^U(Ã, x) = H̃_1^U(Ã, x) = 4x_1 + 5x_2 + 6x_3.
We see that H ˜ α 3 L A ˜ , x = H ˜ α 3 U A ˜ , x . Next, we are going to solve the following scalar optimization problem
maximize w_1 H̃_{α_1}^L(Ã, x) + w_2 H̃_{α_2}^L(Ã, x) + w_3 H̃_{α_3}^L(Ã, x) + w_4 H̃_{α_1}^U(Ã, x) + w_5 H̃_{α_2}^U(Ã, x)
subject to (x_1, x_2, x_3) ∈ X,
where the feasible set X is given by
X = {(x_1, x_2, x_3) ∈ R_+^3 : x_1 − x_2 + x_3 ≤ 20, 3x_1 + 2x_2 + 4x_3 ≤ 42, and 3x_1 + 2x_2 ≤ 30}.
In order to formulate the corresponding cooperative game, we must obtain the ideal objective values d = ( d 1 , , d 5 ) given by
d_1 = sup_{x∈X} H̃_{α_1}^L(Ã, x) = 75
d_2 = sup_{x∈X} H̃_{α_2}^L(Ã, x) = 84
d_3 = sup_{x∈X} H̃_{α_3}^L(Ã, x) = 93
d_4 = sup_{x∈X} H̃_{α_1}^U(Ã, x) = 103.5
d_5 = sup_{x∈X} H̃_{α_2}^U(Ã, x) = 98.25
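These ideal values can be checked numerically. Since X is a small polyhedron, a pure-Python sketch can enumerate its vertices (each vertex solves three active constraints) and maximize each α-level objective over them; the helper names are ours, and we read the first constraint of X as x_1 − x_2 + x_3 ≤ 20.

```python
from itertools import combinations

# Constraints of X written as a.x <= b (nonnegativity included).
A = [
    [1, -1, 1],    # x1 - x2 + x3 <= 20
    [3, 2, 4],     # 3x1 + 2x2 + 4x3 <= 42
    [3, 2, 0],     # 3x1 + 2x2 <= 30
    [-1, 0, 0], [0, -1, 0], [0, 0, -1],   # x1, x2, x3 >= 0
]
b = [20, 42, 30, 0, 0, 0]

def solve3(M, rhs):
    # Gauss-Jordan elimination for a 3x3 system; returns None if singular
    M = [row[:] + [r] for row, r in zip(M, rhs)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        if abs(M[piv][col]) < 1e-12:
            return None
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

def vertices():
    # every vertex of X solves 3 active constraints and satisfies the rest
    for idx in combinations(range(6), 3):
        x = solve3([A[i] for i in idx], [b[i] for i in idx])
        if x is not None and all(
            sum(a * xi for a, xi in zip(row, x)) <= bi + 1e-9
            for row, bi in zip(A, b)
        ):
            yield x

# coefficient vectors of the five alpha-level objectives above
objs = [[3.5, 4, 5], [3.75, 4.5, 5.5], [4, 5, 6], [4.5, 5.5, 7], [4.25, 5.25, 6.5]]
vs = list(vertices())
d = [round(max(sum(c * xi for c, xi in zip(obj, v)) for v in vs), 6) for obj in objs]
print(d)  # -> [75.0, 84.0, 93.0, 103.5, 98.25]
```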
Therefore, we consider five players N = {1, 2, 3, 4, 5} such that the cooperative game (N, v) is defined by
v ( { 1 } ) = 0.5 · d 1 , v ( { 2 } ) = 0.6 · d 2   and   v ( { 3 } ) = 0.7 · d 3
and
v ( { 4 } ) = 0.5 · d 4   and   v ( { 5 } ) = 0.7 · d 5 .
By referring to (8), for | S | = 2 , we have
V_2({1,2}) = 2(d_1 + d_2)/(v({1}) + v({2})) − 2 = 2(d_1 + d_2)/(0.5·d_1 + 0.6·d_2) − 2 = 1.61774
V_2({1,3}) = 2(d_1 + d_3)/(v({1}) + v({3})) − 2 = 2(d_1 + d_3)/(0.5·d_1 + 0.7·d_3) − 2 = 1.27485
V_2({1,4}) = 2(d_1 + d_4)/(v({1}) + v({4})) − 2 = 2(d_1 + d_4)/(0.5·d_1 + 0.5·d_4) − 2 = 2
V_2({1,5}) = 2(d_1 + d_5)/(v({1}) + v({5})) − 2 = 2(d_1 + d_5)/(0.5·d_1 + 0.7·d_5) − 2 = 1.26041
V_2({2,3}) = 2(d_2 + d_3)/(v({2}) + v({3})) − 2 = 2(d_2 + d_3)/(0.6·d_2 + 0.7·d_3) − 2 = 1.06493
V_2({2,4}) = 2(d_2 + d_4)/(v({2}) + v({4})) − 2 = 2(d_2 + d_4)/(0.6·d_2 + 0.5·d_4) − 2 = 1.67107
V_2({2,5}) = 2(d_2 + d_5)/(v({2}) + v({5})) − 2 = 2(d_2 + d_5)/(0.6·d_2 + 0.7·d_5) − 2 = 1.05853
V_2({3,4}) = 2(d_3 + d_4)/(v({3}) + v({4})) − 2 = 2(d_3 + d_4)/(0.7·d_3 + 0.5·d_4) − 2 = 1.36329
V_2({3,5}) = 2(d_3 + d_5)/(v({3}) + v({5})) − 2 = 2(d_3 + d_5)/(0.7·d_3 + 0.7·d_5) − 2 = 0.85714
V_2({4,5}) = 2(d_4 + d_5)/(v({4}) + v({5})) − 2 = 2(d_4 + d_5)/(0.5·d_4 + 0.7·d_5) − 2 = 1.34785.
Using (9), we obtain
V_2 = min{V_2({1,2}), V_2({1,3}), V_2({1,4}), V_2({1,5}), V_2({2,3}), V_2({2,4}), V_2({2,5}), V_2({3,4}), V_2({3,5}), V_2({4,5})} = 0.85714.
For | S | = 3 , we have
V_3({1,2,3}) = 3(d_1 + d_2 + d_3)/(0.5·d_1 + 0.6·d_2 + 0.7·d_3) − 3 = 1.94117
V_3({1,2,4}) = 3(d_1 + d_2 + d_4)/(0.5·d_1 + 0.6·d_2 + 0.5·d_4) − 3 = 2.63909
V_3({1,2,5}) = 3(d_1 + d_2 + d_5)/(0.5·d_1 + 0.6·d_2 + 0.7·d_5) − 3 = 1.92580
V_3({1,3,4}) = 3(d_1 + d_3 + d_4)/(0.5·d_1 + 0.7·d_3 + 0.5·d_4) − 3 = 2.27697
V_3({1,3,5}) = 3(d_1 + d_3 + d_5)/(0.5·d_1 + 0.7·d_3 + 0.7·d_5) − 3 = 1.66083
V_3({1,4,5}) = 3(d_1 + d_4 + d_5)/(0.5·d_1 + 0.5·d_4 + 0.7·d_5) − 3 = 2.25391
V_3({2,3,4}) = 3(d_2 + d_3 + d_4)/(0.6·d_2 + 0.7·d_3 + 0.5·d_4) − 3 = 2.03139
V_3({2,3,5}) = 3(d_2 + d_3 + d_5)/(0.6·d_2 + 0.7·d_3 + 0.7·d_5) − 3 = 1.48107
V_3({2,4,5}) = 3(d_2 + d_4 + d_5)/(0.6·d_2 + 0.5·d_4 + 0.7·d_5) − 3 = 2.01536
V_3({3,4,5}) = 3(d_3 + d_4 + d_5)/(0.7·d_3 + 0.5·d_4 + 0.7·d_5) − 3 = 1.76363.
Therefore, we obtain
V_3 = min{V_3({1,2,3}), V_3({1,2,4}), V_3({1,2,5}), V_3({1,3,4}), V_3({1,3,5}), V_3({1,4,5}), V_3({2,3,4}), V_3({2,3,5}), V_3({2,4,5}), V_3({3,4,5})} = 1.48107.
For | S | = 4 , we have
V_4({1,2,3,4}) = 4(d_1 + d_2 + d_3 + d_4)/(0.5·d_1 + 0.6·d_2 + 0.7·d_3 + 0.5·d_4) − 4 = 2.94505
V_4({1,2,3,5}) = 4(d_1 + d_2 + d_3 + d_5)/(0.5·d_1 + 0.6·d_2 + 0.7·d_3 + 0.7·d_5) − 4 = 2.31721
V_4({1,2,4,5}) = 4(d_1 + d_2 + d_4 + d_5)/(0.5·d_1 + 0.6·d_2 + 0.5·d_4 + 0.7·d_5) − 4 = 2.92335
V_4({1,3,4,5}) = 4(d_1 + d_3 + d_4 + d_5)/(0.5·d_1 + 0.7·d_3 + 0.5·d_4 + 0.7·d_5) − 4 = 2.62857
V_4({2,3,4,5}) = 4(d_2 + d_3 + d_4 + d_5)/(0.6·d_2 + 0.7·d_3 + 0.5·d_4 + 0.7·d_5) − 4 = 2.41881.
Therefore, we obtain
V_4 = min{V_4({1,2,3,4}), V_4({1,2,3,5}), V_4({1,2,4,5}), V_4({1,3,4,5}), V_4({2,3,4,5})} = 2.31721.
Finally, we obtain
V_5 = V_5({1,2,3,4,5}) = 5(d_1 + d_2 + d_3 + d_4 + d_5)/(0.5·d_1 + 0.6·d_2 + 0.7·d_3 + 0.5·d_4 + 0.7·d_5) − 5 = 3.29449.
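All of the quantities V_s(S) above follow one formula from (8), namely V_s(S) = s·(Σ_{i∈S} d_i)/(Σ_{i∈S} v({i})) − s, so they can be checked mechanically; a sketch:

```python
from itertools import combinations

d = [75, 84, 93, 103.5, 98.25]     # ideal objective values d1..d5
c = [0.5, 0.6, 0.7, 0.5, 0.7]      # v({i}) = c_i * d_i for the five players

def V(S):
    # V_s(S) = s * (sum of d_i over S) / (sum of v({i}) over S) - s, with s = |S|
    s = len(S)
    return s * sum(d[i] for i in S) / sum(c[i] * d[i] for i in S) - s

Vs = {s: min(V(S) for S in combinations(range(5), s)) for s in (2, 3, 4, 5)}
print({s: round(val, 5) for s, val in Vs.items()})
# -> {2: 0.85714, 3: 1.48107, 4: 2.31721, 5: 3.29449}
```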
For s = | S | 2 , according to (7), we have
v(S) = Σ_{k=1}^{s} v({i_k}) + (γ_s/s) · Σ_{k=1}^{s} v({i_k}).
More precisely, for | S | = 2 , we have
v({1,2}) = (1 + γ_2/2)·[v({1}) + v({2})] = (1 + γ_2/2)·(0.5·d_1 + 0.6·d_2)
v({1,3}) = (1 + γ_2/2)·[v({1}) + v({3})] = (1 + γ_2/2)·(0.5·d_1 + 0.7·d_3)
v({1,4}) = (1 + γ_2/2)·[v({1}) + v({4})] = (1 + γ_2/2)·(0.5·d_1 + 0.5·d_4)
v({1,5}) = (1 + γ_2/2)·[v({1}) + v({5})] = (1 + γ_2/2)·(0.5·d_1 + 0.7·d_5)
v({2,3}) = (1 + γ_2/2)·[v({2}) + v({3})] = (1 + γ_2/2)·(0.6·d_2 + 0.7·d_3)
v({2,4}) = (1 + γ_2/2)·[v({2}) + v({4})] = (1 + γ_2/2)·(0.6·d_2 + 0.5·d_4)
v({2,5}) = (1 + γ_2/2)·[v({2}) + v({5})] = (1 + γ_2/2)·(0.6·d_2 + 0.7·d_5)
v({3,4}) = (1 + γ_2/2)·[v({3}) + v({4})] = (1 + γ_2/2)·(0.7·d_3 + 0.5·d_4)
v({3,5}) = (1 + γ_2/2)·[v({3}) + v({5})] = (1 + γ_2/2)·(0.7·d_3 + 0.7·d_5)
v({4,5}) = (1 + γ_2/2)·[v({4}) + v({5})] = (1 + γ_2/2)·(0.5·d_4 + 0.7·d_5).
For | S | = 3 , we have
v({1,2,3}) = (1 + γ_3/3)·[v({1}) + v({2}) + v({3})] = (1 + γ_3/3)·(0.5·d_1 + 0.6·d_2 + 0.7·d_3)
v({1,2,4}) = (1 + γ_3/3)·[v({1}) + v({2}) + v({4})] = (1 + γ_3/3)·(0.5·d_1 + 0.6·d_2 + 0.5·d_4)
v({1,2,5}) = (1 + γ_3/3)·[v({1}) + v({2}) + v({5})] = (1 + γ_3/3)·(0.5·d_1 + 0.6·d_2 + 0.7·d_5)
v({1,3,4}) = (1 + γ_3/3)·[v({1}) + v({3}) + v({4})] = (1 + γ_3/3)·(0.5·d_1 + 0.7·d_3 + 0.5·d_4)
v({1,3,5}) = (1 + γ_3/3)·[v({1}) + v({3}) + v({5})] = (1 + γ_3/3)·(0.5·d_1 + 0.7·d_3 + 0.7·d_5)
v({1,4,5}) = (1 + γ_3/3)·[v({1}) + v({4}) + v({5})] = (1 + γ_3/3)·(0.5·d_1 + 0.5·d_4 + 0.7·d_5)
v({2,3,4}) = (1 + γ_3/3)·[v({2}) + v({3}) + v({4})] = (1 + γ_3/3)·(0.6·d_2 + 0.7·d_3 + 0.5·d_4)
v({2,3,5}) = (1 + γ_3/3)·[v({2}) + v({3}) + v({5})] = (1 + γ_3/3)·(0.6·d_2 + 0.7·d_3 + 0.7·d_5)
v({2,4,5}) = (1 + γ_3/3)·[v({2}) + v({4}) + v({5})] = (1 + γ_3/3)·(0.6·d_2 + 0.5·d_4 + 0.7·d_5)
v({3,4,5}) = (1 + γ_3/3)·[v({3}) + v({4}) + v({5})] = (1 + γ_3/3)·(0.7·d_3 + 0.5·d_4 + 0.7·d_5).
For | S | = 4 , we have
v({1,2,3,4}) = (1 + γ_4/4)·(0.5·d_1 + 0.6·d_2 + 0.7·d_3 + 0.5·d_4)
v({1,2,3,5}) = (1 + γ_4/4)·(0.5·d_1 + 0.6·d_2 + 0.7·d_3 + 0.7·d_5)
v({1,2,4,5}) = (1 + γ_4/4)·(0.5·d_1 + 0.6·d_2 + 0.5·d_4 + 0.7·d_5)
v({1,3,4,5}) = (1 + γ_4/4)·(0.5·d_1 + 0.7·d_3 + 0.5·d_4 + 0.7·d_5)
v({2,3,4,5}) = (1 + γ_4/4)·(0.6·d_2 + 0.7·d_3 + 0.5·d_4 + 0.7·d_5).
Finally, for | S | = 5 , we have
v({1,2,3,4,5}) = (1 + γ_5/5)·(0.5·d_1 + 0.6·d_2 + 0.7·d_3 + 0.5·d_4 + 0.7·d_5).
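The characteristic function listed above can be evaluated for any chromosome γ; a sketch, where the chromosome values below are hypothetical:

```python
d = [75, 84, 93, 103.5, 98.25]   # ideal objective values d1..d5
c = [0.5, 0.6, 0.7, 0.5, 0.7]    # coefficients of the singleton values v({i})

def v(S, gamma):
    # characteristic function (7): v(S) = (1 + gamma_s/s) * sum of singleton
    # values over S, with s = |S| and gamma = (gamma_1, ..., gamma_5), gamma_1 = 0
    s = len(S)
    return (1 + gamma[s - 1] / s) * sum(c[i] * d[i] for i in S)

gamma = (0, 0.4, 0.9, 1.5, 2.0)        # hypothetical chromosome
print(round(v((0, 1), gamma), 2))      # v({1,2}) with gamma_2 = 0.4 -> 105.48
```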
Under the above settings, the computational procedure is presented below.
  • Step 1 (Initialization). The population size is taken to be p = 20. The initial population is given by γ(k) = (γ_1(k), γ_2(k), γ_3(k), γ_4(k), γ_5(k)) such that γ_1(k) = 0 and the γ_s(k) are random numbers in [0, V_5] = [0, 3.29449] for all s = 2, 3, 4, 5 and k = 1, …, 20. Given each chromosome γ(k), we solve the linear programming problem in (11), given by
(LP) minimize w_1 + w_2 + w_3 + w_4 + w_5
subject to w_1 ≥ v({1}), w_2 ≥ v({2}), w_3 ≥ v({3}), w_4 ≥ v({4}), w_5 ≥ v({5}),
w_1 + w_2 ≥ v({1,2}), w_1 + w_3 ≥ v({1,3}), w_1 + w_4 ≥ v({1,4}), w_1 + w_5 ≥ v({1,5}),
w_2 + w_3 ≥ v({2,3}), w_2 + w_4 ≥ v({2,4}), w_2 + w_5 ≥ v({2,5}),
w_3 + w_4 ≥ v({3,4}), w_3 + w_5 ≥ v({3,5}), w_4 + w_5 ≥ v({4,5}),
w_1 + w_2 + w_3 ≥ v({1,2,3}), w_1 + w_2 + w_4 ≥ v({1,2,4}), w_1 + w_2 + w_5 ≥ v({1,2,5}),
w_1 + w_3 + w_4 ≥ v({1,3,4}), w_1 + w_3 + w_5 ≥ v({1,3,5}), w_1 + w_4 + w_5 ≥ v({1,4,5}),
w_2 + w_3 + w_4 ≥ v({2,3,4}), w_2 + w_3 + w_5 ≥ v({2,3,5}), w_2 + w_4 + w_5 ≥ v({2,4,5}),
w_3 + w_4 + w_5 ≥ v({3,4,5}),
w_1 + w_2 + w_3 + w_4 ≥ v({1,2,3,4}), w_1 + w_2 + w_3 + w_5 ≥ v({1,2,3,5}),
w_1 + w_2 + w_4 + w_5 ≥ v({1,2,4,5}), w_1 + w_3 + w_4 + w_5 ≥ v({1,3,4,5}),
w_2 + w_3 + w_4 + w_5 ≥ v({2,3,4,5}),
w_1 + w_2 + w_3 + w_4 + w_5 ≥ v({1,2,3,4,5}),
w_1 ≥ 0, w_2 ≥ 0, w_3 ≥ 0, w_4 ≥ 0, w_5 ≥ 0.
    More precisely, we are going to solve the following linear programming problem
(LP) minimize w_1 + w_2 + w_3 + w_4 + w_5
subject to w_1 ≥ 0.5·d_1, w_2 ≥ 0.6·d_2, w_3 ≥ 0.7·d_3, w_4 ≥ 0.5·d_4, w_5 ≥ 0.7·d_5,
C12, C13, C14, C15, C23, C24, C25, C34, C35, C45,
C123, C124, C125, C134, C135, C145, C234, C235, C245, C345,
C1234, C1235, C1245, C1345, C2345, C12345,
w_1 ≥ 0, w_2 ≥ 0, w_3 ≥ 0, w_4 ≥ 0, w_5 ≥ 0,
    where the detailed constraints are given below
C12: w_1 + w_2 ≥ (1 + γ_2(k)/2)·(0.5·d_1 + 0.6·d_2)
C13: w_1 + w_3 ≥ (1 + γ_2(k)/2)·(0.5·d_1 + 0.7·d_3)
C14: w_1 + w_4 ≥ (1 + γ_2(k)/2)·(0.5·d_1 + 0.5·d_4)
C15: w_1 + w_5 ≥ (1 + γ_2(k)/2)·(0.5·d_1 + 0.7·d_5)
C23: w_2 + w_3 ≥ (1 + γ_2(k)/2)·(0.6·d_2 + 0.7·d_3)
C24: w_2 + w_4 ≥ (1 + γ_2(k)/2)·(0.6·d_2 + 0.5·d_4)
C25: w_2 + w_5 ≥ (1 + γ_2(k)/2)·(0.6·d_2 + 0.7·d_5)
C34: w_3 + w_4 ≥ (1 + γ_2(k)/2)·(0.7·d_3 + 0.5·d_4)
C35: w_3 + w_5 ≥ (1 + γ_2(k)/2)·(0.7·d_3 + 0.7·d_5)
C45: w_4 + w_5 ≥ (1 + γ_2(k)/2)·(0.5·d_4 + 0.7·d_5)
C123: w_1 + w_2 + w_3 ≥ (1 + γ_3(k)/3)·(0.5·d_1 + 0.6·d_2 + 0.7·d_3)
C124: w_1 + w_2 + w_4 ≥ (1 + γ_3(k)/3)·(0.5·d_1 + 0.6·d_2 + 0.5·d_4)
C125: w_1 + w_2 + w_5 ≥ (1 + γ_3(k)/3)·(0.5·d_1 + 0.6·d_2 + 0.7·d_5)
C134: w_1 + w_3 + w_4 ≥ (1 + γ_3(k)/3)·(0.5·d_1 + 0.7·d_3 + 0.5·d_4)
C135: w_1 + w_3 + w_5 ≥ (1 + γ_3(k)/3)·(0.5·d_1 + 0.7·d_3 + 0.7·d_5)
C145: w_1 + w_4 + w_5 ≥ (1 + γ_3(k)/3)·(0.5·d_1 + 0.5·d_4 + 0.7·d_5)
C234: w_2 + w_3 + w_4 ≥ (1 + γ_3(k)/3)·(0.6·d_2 + 0.7·d_3 + 0.5·d_4)
C235: w_2 + w_3 + w_5 ≥ (1 + γ_3(k)/3)·(0.6·d_2 + 0.7·d_3 + 0.7·d_5)
C245: w_2 + w_4 + w_5 ≥ (1 + γ_3(k)/3)·(0.6·d_2 + 0.5·d_4 + 0.7·d_5)
C345: w_3 + w_4 + w_5 ≥ (1 + γ_3(k)/3)·(0.7·d_3 + 0.5·d_4 + 0.7·d_5)
C1234: w_1 + w_2 + w_3 + w_4 ≥ (1 + γ_4(k)/4)·(0.5·d_1 + 0.6·d_2 + 0.7·d_3 + 0.5·d_4)
C1235: w_1 + w_2 + w_3 + w_5 ≥ (1 + γ_4(k)/4)·(0.5·d_1 + 0.6·d_2 + 0.7·d_3 + 0.7·d_5)
C1245: w_1 + w_2 + w_4 + w_5 ≥ (1 + γ_4(k)/4)·(0.5·d_1 + 0.6·d_2 + 0.5·d_4 + 0.7·d_5)
C1345: w_1 + w_3 + w_4 + w_5 ≥ (1 + γ_4(k)/4)·(0.5·d_1 + 0.7·d_3 + 0.5·d_4 + 0.7·d_5)
C2345: w_2 + w_3 + w_4 + w_5 ≥ (1 + γ_4(k)/4)·(0.6·d_2 + 0.7·d_3 + 0.5·d_4 + 0.7·d_5)
C12345: w_1 + w_2 + w_3 + w_4 + w_5 ≥ (1 + γ_5(k)/5)·(0.5·d_1 + 0.6·d_2 + 0.7·d_3 + 0.5·d_4 + 0.7·d_5).
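Whether a given weight vector w satisfies all of the coalition constraints above (i.e., is feasible for (LP)) can be checked directly; a self-contained sketch with helper names of our own. When γ = 0 the game is additive, so the vector of singleton values v({i}) is itself feasible.

```python
from itertools import combinations

d = [75, 84, 93, 103.5, 98.25]
c = [0.5, 0.6, 0.7, 0.5, 0.7]

def v(S, gamma):
    # characteristic function (7) of the example, with gamma_1 = 0
    s = len(S)
    return (1 + gamma[s - 1] / s) * sum(c[i] * d[i] for i in S)

def satisfies_constraints(w, gamma, tol=1e-9):
    # w is feasible for (LP) iff w(S) >= v(S) for every nonempty coalition S
    return all(
        sum(w[i] for i in S) + tol >= v(S, gamma)
        for s in range(1, 6)
        for S in combinations(range(5), s)
    )

gamma0 = (0, 0, 0, 0, 0)                    # additive game
w0 = [c[i] * d[i] for i in range(5)]        # w_i = v({i})
print(satisfies_constraints(w0, gamma0))                    # -> True
print(satisfies_constraints([x - 1 for x in w0], gamma0))   # -> False
```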
    Proposition 1 says that the optimal solution w ( γ ( k ) ) of problem (LP) is a positive core for k = 1 , , 20 . According to (13), we also calculate the normalized positive core as follows
    w̄_i(γ(k)) = w_i(γ(k)) / (w_1(γ(k)) + w_2(γ(k)) + w_3(γ(k)) + w_4(γ(k)) + w_5(γ(k)))
    for i = 1 , 2 , 3 , 4 , 5 and k = 1 , , 20 . Each chromosome γ ( k ) is assigned a fitness value given by
    η(γ(k)) = w̄_1(γ(k)) · H̃_{α_1}^L(x(γ(k))) + w̄_2(γ(k)) · H̃_{α_2}^L(x(γ(k))) + w̄_3(γ(k)) · H̃_{α_3}^L(x(γ(k))) + w̄_4(γ(k)) · H̃_{α_1}^U(x(γ(k))) + w̄_5(γ(k)) · H̃_{α_2}^U(x(γ(k)))
    for k = 1, …, 20. We rank the 20 chromosomes γ(k) in descending order of their corresponding fitness values η(γ(k)). The top one is saved as the (initial) best chromosome, whose fitness value is named η̄_0. We save each γ(k) as an elite γ̄(k) given by γ̄_s(k) ← γ_s(k) for s = 1, …, 5 and k = 1, …, 20.
  • Step 2 (Tolerance). Set t = 0 , m = 20 , l = 1 and the tolerance ϵ = 10 6 .
  • Step 3 (Mutation). We set t t + 1 to mean the t-th generation. For k = 1 , , 20 , each chromosome
    γ ( k ) = 0 , γ 2 ( k ) , γ 3 ( k ) , γ 4 ( k ) , γ 5 ( k )
    is mutated, and is assigned to
    γ ( k + 20 ) = 0 , γ 2 ( k + 20 ) , γ 3 ( k + 20 ) , γ 4 ( k + 20 ) , γ 5 ( k + 20 )
    by using (15). Generate the random Gaussian numbers with mean zero and standard deviation σ s , where σ s is taken to be the following form
    σ s = β s · η ( γ ( k ) ) + b s   for   s = 2 , 3 , 4 , 5 .
    The constant β_s is the proportionality constant that scales η(γ(k)), and the constant b_s represents the offset. In this example, we take
    β 2 = 0.02 , β 3 = 0.01 , β 4 = 0.02 , β 5 = 0.01
    and b s = 0 for s = 2 , 3 , 4 , 5 . Then, we assign
    γ̂_s(k) = γ_s(k) + σ_s · N(0, 1) = γ_s(k) + (β_s · η(γ(k)) + b_s) · N(0, 1).
    Therefore, we obtain the mutated chromosome γ(k+20) in which the components are given by γ_1(k+20) = 0 and
    γ_s(k+20) = γ̂_s(k) if γ̂_s(k) ∈ [0, V_s]; γ_s(k+20) = V_s if γ̂_s(k) > V_s; γ_s(k+20) = 0 if γ̂_s(k) < 0
    for s = 2 , 3 , 4 , 5 and k = 1 , , 20 . After this step, we shall have 40 chromosomes γ ( k ) for k = 1 , , 40 .
  • Step 4 (Crossover). We randomly select
    γ ( k 1 ) = 0 , γ 2 ( k 1 ) , γ 3 ( k 1 ) , γ 4 ( k 1 ) , γ 5 ( k 1 )   and   γ ( k 2 ) = 0 , γ 2 ( k 2 ) , γ 3 ( k 2 ) , γ 4 ( k 2 ) , γ 5 ( k 2 )
    for k_1, k_2 ∈ {1, …, 40} with k_1 ≠ k_2. We generate a random number λ ∈ (0, 1), and the new chromosome is given by
    γ(41) = λγ(k_1) + (1 − λ)γ(k_2)
    with components
    γ_s(41) = λγ_s(k_1) + (1 − λ)γ_s(k_2) for s = 2, 3, 4, 5.
    After this step, we shall have 41 chromosomes γ(k) for k = 1, …, 41.
  • Step 5 (Calculate New Fitness). For the new generated chromosomes
    γ ( k + 20 ) = 0 , γ 2 ( k + 20 ) , γ 3 ( k + 20 ) , γ 4 ( k + 20 ) , γ 5 ( k + 20 )
    for k = 1, …, 21, using Proposition 1 and (13), we calculate the normalized positive core w̄_i(γ(k+20)) by solving the linear programming problem in (11) for i = 1, 2, 3, 4, 5 and k = 1, …, 21. Each chromosome γ(k+20) is assigned a fitness value given by
    η ( γ ( k + 20 ) ) = w ¯ 1 ( γ ( k + 20 ) ) · H ˜ α 1 L x ( γ ( k + 20 ) ) + w ¯ 2 ( γ ( k + 20 ) ) · H ˜ α 2 L x ( γ ( k + 20 ) ) + w ¯ 3 ( γ ( k + 20 ) ) · H ˜ α 3 L x ( γ ( k + 20 ) ) + w ¯ 4 ( γ ( k + 20 ) ) · H ˜ α 1 U x ( γ ( k + 20 ) ) + w ¯ 5 ( γ ( k + 20 ) ) · H ˜ α 2 U x ( γ ( k + 20 ) ) .
    for k = 1, …, 21.
  • Step 6 (Selection). We rank the 20 old elites
    γ̄(k) = (0, γ̄_2(k), γ̄_3(k), γ̄_4(k), γ̄_5(k)) for k = 1, …, 20
    and the 21 new chromosomes
    γ(k) = (0, γ_2(k), γ_3(k), γ_4(k), γ_5(k)) for k = 21, …, 41
    obtained from Steps 3 and 4 in descending order of their corresponding fitness values η(γ̄(k)) and η(γ(k)). The top 20 chromosomes are saved to be the new elites
    γ ( k ) = 0 , γ 2 ( k ) , γ 3 ( k ) , γ 4 ( k ) , γ 5 ( k )   for   k = 1 , , 20 .
    Also, the fitness value of the top one is saved as η̄_t, the best value for the t-th generation.
  • Step 7 (Stopping Criterion). After step 6, we may obtain η̄_{t−1} = η̄_t, which suggests that the algorithm is trapped in a local optimum. In order to escape this trap, we proceed with further iterations, m = 20 times in total, even though the criterion |η̄_t − η̄_{t−1}| < ϵ is satisfied. When the criterion |η̄_t − η̄_{t−1}| < ϵ has been satisfied 20 times, we stop the algorithm and return the solution for phase I. Otherwise, the new elites
    γ̄(k) = (0, γ̄_2(k), γ̄_3(k), γ̄_4(k), γ̄_5(k)) for k = 1, …, 20
    must be copied to be the next generation
    γ(k) = (0, γ_2(k), γ_3(k), γ_4(k), γ_5(k)) for k = 1, …, 20.
    We set l l + 1 and the algorithm proceeds to step 3. Note that the number l counts the times for satisfying the tolerance η ¯ t η ¯ t 1 < ϵ .
The computer code is implemented using Microsoft Excel VBA. The best fitness value is 119.6879088 and the approximated core-nondominated solution is x ( Λ ) = ( x 1 , x 2 , x 3 ) = ( 0 , 15 , 3 ) .
Now, we consider finer partitions of [0, 1] according to the suggestion of phase II.
  • Step 1. From Section 6.2, we consider a new finer partition obtained by equally dividing the unit interval [0, 1]. Therefore, we take
    Λ̄ = {α_1 = 0, α_2 = 0.25, α_3 = 0.5, α_4 = 0.75, α_5 = 1}.
    It is clear that Λ ⊆ Λ̄.
  • Step 2. Using this new finer partition Λ ¯ from Step 1 and the genetic algorithm in phase I, a new approximated best core-nondominated solution x ( Λ ¯ ) = ( 0 , 15 , 3 ) can be obtained.
  • Step 3. Step 2 says x ( Λ ) = x ( Λ ¯ ) = ( 0 , 15 , 3 ) . Therefore, the final best core-nondominated solution is given by ( x 1 , x 2 , x 3 ) = ( 0 , 15 , 3 ) .
Finally, using Theorem 1, we obtain the approximated nondominated solution ( x 1 , x 2 , x 3 ) = ( 0 , 15 , 3 ) of the original fuzzy linear programming problem.

8. Conclusions

This paper proposes a new methodology that incorporates the core values of cooperative games and genetic algorithms to solve fuzzy multiobjective optimization problems, which is a new attempt at solving this kind of problem. Usually, the fuzzy multiobjective optimization problem is transformed into a conventional single-objective optimization problem in which suitable weights are determined by the decision makers. In order to avoid possibly biased assignments of weights, a mechanical procedure is proposed in this paper that assigns the core values of a cooperative game as the weights of this conventional single-objective optimization problem.
The purpose is to use popular numerical optimization methods to solve this conventional single-objective optimization problem. For example, a fuzzy multiobjective linear programming problem can be transformed into a conventional single-objective linear programming problem. In this case, the simplex method can be used to solve the desired problem. Frequently, the core-nondominated solutions form a large set. Therefore, the genetic algorithm is adopted to obtain the best core-nondominated solution from this large set of core-nondominated solutions. This paper does not intend to use the genetic algorithm to directly solve fuzzy multiobjective optimization problems; the genetic algorithm used in this paper only serves to obtain the best core-nondominated solution from a large set of core-nondominated solutions. The monograph by Sakawa [44] provides methods for using genetic algorithms to directly solve fuzzy multiobjective optimization problems. Although the genetic algorithm is adopted in this paper to obtain the best core-nondominated solution, other heuristic algorithms such as Particle Swarm Optimization, Scatter Search, Tabu Search, Ant Colony Optimization, Artificial Immune Systems, and Simulated Annealing could also be used to obtain the best core-nondominated solutions.
Although the core values of cooperative games are considered in this paper, many other solution concepts of cooperative games can also be adopted to set up the conventional single-objective optimization problem, which is left for future research. The theory of non-cooperative games may offer yet another way to set up the conventional single-objective optimization problem, which is also left for future research.

Funding

This research was funded by NSTC Taiwan with grant number 110-2221-E-017-008.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Bellman, R.E.; Zadeh, L.A. Decision-Making in a Fuzzy Environment. Manag. Sci. 1970, 17, 141–164. [Google Scholar] [CrossRef]
  2. Tanaka, H.; Okuda, T.; Asai, K. On Fuzzy-Mathematical Programming. J. Cybern. 1974, 3, 37–46. [Google Scholar] [CrossRef]
  3. Zimmermann, H.-J. Description and Optimization of Fuzzy Systems. Int. J. Gen. Syst. 1976, 2, 209–215. [Google Scholar] [CrossRef]
  4. Zimmermann, H.-J. Fuzzy Programming and Linear Programming with Several Objective Functions. Fuzzy Sets Syst. 1978, 1, 45–55. [Google Scholar] [CrossRef]
  5. Herrera, F.; Kovács, M.; Verdegay, J.L. Optimality for fuzzified mathematical programming problems: A parametric approach. Fuzzy Sets Syst. 1993, 54, 279–285. [Google Scholar] [CrossRef]
  6. Buckley, J.J. Joint Solution to Fuzzy Programming Problems. Fuzzy Sets Syst. 1995, 72, 215–220. [Google Scholar] [CrossRef]
  7. Julien, B. An extension to possibilistic linear programming. Fuzzy Sets Syst. 1994, 64, 195–206. [Google Scholar] [CrossRef]
  8. Luhandjula, M.K.; Ichihashi, H.; Inuiguchi, M. Fuzzy and semi-infinite mathematical programming. Inf. Sci. 1992, 61, 233–250. [Google Scholar] [CrossRef]
  9. Inuiguchi, M. Necessity Measure Optimization in Linear Programming Problems with Fuzzy Polytopes. Fuzzy Sets Syst. 2007, 158, 1882–1891. [Google Scholar] [CrossRef]
  10. Wu, H.-C. Duality Theory in Fuzzy Optimization Problems. Fuzzy Optim. Decis. Mak. 2004, 3, 345–365. [Google Scholar] [CrossRef]
  11. Wu, H.-C. Duality Theorems and Saddle Point Optimality Conditions in Fuzzy Nonlinear Programming Problems Based on Different Solution Concepts. Fuzzy Sets Syst. 2007, 158, 1588–1607. [Google Scholar] [CrossRef]
  12. Wu, H.-C. The Optimality Conditions for Optimization Problems with Fuzzy-Valued Objective Functions. Optimization 2008, 57, 473–489. [Google Scholar] [CrossRef]
  13. Wu, H.-C. The Karush-Kuhn-Tucker Optimality Conditions for Multi-objective Programming Problems with Fuzzy-Valued Objective Functions. Fuzzy Optim. Decis. Mak. 2009, 8, 1–28. [Google Scholar] [CrossRef]
  14. Chalco-Cano, Y.; Lodwick, W.A.; Osuna-Gómez, R.; Rufian-Lizana, A. The Karush-Kuhn-Tucker Optimality Conditions for Fuzzy Optimization Problems. Fuzzy Optim. Decis. Mak. 2016, 15, 57–73. [Google Scholar] [CrossRef]
  15. Li, L.; Liu, S.; Zhang, J. On fuzzy generalized convex mappings and optimality conditions for fuzzy weakly univex mappings. Fuzzy Sets Syst. 2015, 280, 107–132. [Google Scholar] [CrossRef]
  16. Chalco-Cano, Y.; Silva, G.N.; Rufian-Lizana, A. On the Newton method for solving fuzzy optimization problems. Fuzzy Sets Syst. 2015, 272, 60–69. [Google Scholar] [CrossRef]
  17. Pirzada, U.M.; Pathak, V.D. Newton method for solving the multi-variable fuzzy optimization problem. J. Optim. Theorey Appl. 2013, 156, 867–881. [Google Scholar] [CrossRef]
  18. Luhandjula, M.K. An Interval Approximation Approach for a Multiobjective Programming Model with Fuzzy Objective Functions. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 2015, 23, 845–860. [Google Scholar] [CrossRef]
  19. Yano, H. Multiobjective Programming Problems Involving Fuzzy Coefficients, Random Variable Coefficients and Fuzzy Random Variable Coefficients. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 2015, 23, 483–504. [Google Scholar] [CrossRef]
  20. Ebenuwa, A.U.; Tee, K.F.; Zhang, Y. Fuzzy-Based Multi-Objective Design Optimization of Buried Pipelines. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 2021, 29, 209–229. [Google Scholar] [CrossRef]
  21. Charles, V.; Gupta, S.; Ali, I. A Fuzzy Goal Programming Approach for Solving Multi-Objective Supply Chain Network Problems with Pareto-Distributed Random Variables. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 2019, 27, 559–593. [Google Scholar] [CrossRef]
  22. Roy, J.; Majumder, S.; Kar, S.; Adhikary, K. A Multiobjective Multi-Product Solid Transportation Model with Rough Fuzzy Coefficients. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 2019, 27, 719–753. [Google Scholar] [CrossRef]
  23. von Neumann, J.; Morgenstern, O. Theory of Games and Economic Behavior; Princeton University Press: Princeton, NJ, USA, 1944. [Google Scholar]
  24. Nash, J.F. Two-Person Cooperative Games. Econometrica 1953, 21, 128–140. [Google Scholar] [CrossRef]
  25. Young, H.P. Monotonic Solutions of Cooperative Games. Int. J. Game Theory 1985, 14, 65–72. [Google Scholar] [CrossRef]
  26. Barron, E.N. Game Theory: An Introduction; John Wiley & Sons: Hoboken, NJ, USA, 2013. [Google Scholar]
  27. Branzei, R.; Dimitrov, D.; Tijs, S. Models in Cooperative Game Theory; Springer: Berlin, Germany, 2008. [Google Scholar]
  28. Curiel, I. Cooperative Game Theory and Applications: Cooperative Games Arising from Combinatorial Optimization Problems; Kluwer Academic Publishers: New York, NY, USA, 1997. [Google Scholar]
  29. González-Díaz, J.; García-Jurado, I.; Fiestras-Janeiro, M.G. An Introductory Course on Mathematical Game Theory; American Mathematical Society: Providence, RI, USA, 2010. [Google Scholar]
  30. Owen, G. Game Theory, 3rd ed.; Academic Press: New York, NY, USA, 1995. [Google Scholar]
  31. Yu, X.; Zhang, Q. Core for Game with Fuzzy Generalized Triangular Payoff Value. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 2019, 27, 789–813. [Google Scholar] [CrossRef]
  32. Jing, R.; Wang, M.; Liang, H.; Wang, X.; Li, N.; Shah, N.; Zhao, Y. Multi-objective optimization of a neighborhood-level urban energy network: Considering Game-theory inspired multi-benefit allocation constraints. Appl. Energy 2018, 231, 534–548. [Google Scholar] [CrossRef]
  33. Lokeshgupta, B.; Sivasubramani, S. Cooperative game theory approach for multiobjective home energy management with renewable energy integration. IET Smart Grid 2019, 2, 34–41. [Google Scholar] [CrossRef]
  34. Lee, C.-S. Multi-objective game-theory models for conflict analysis in reservoir watershed management. Chemosphere 2012, 87, 608–613. [Google Scholar] [CrossRef]
  35. Meng, R.; Xie, N.-G. A Competitive-Cooperative Game Method for Multi-Objective Optimization Design of a Horizontal Axis Wind Turbine Blade. IEEE Access 2019, 7, 155748–155758. [Google Scholar] [CrossRef]
  36. Li, X.; Gao, L.; Li, W. Application of game theory based hybrid algorithm for multi-objective integrated process planning and scheduling. Expert Syst. Appl. 2012, 39, 288–297. [Google Scholar] [CrossRef]
  37. Yu, Y.; Zhao, R.; Zhang, J.; Yang, D.; Zhou, T. Multi-objective game theory optimization for balancing economic, social and ecological benefits in the Three Gorges Reservoir operation. Environ. Res. Lett. 2021, 16, 085007. [Google Scholar] [CrossRef]
  38. Zhang, Y.; Wang, J.; Liu, Y. Game theory based real-time multi-objective flexible job shop scheduling considering environmental impact. J. Clean. Prod. 2017, 167, 665–679. [Google Scholar] [CrossRef]
  39. Chai, R.; Savvaris, A.; Tsourdos, A.; Chai, S. Multi-objective trajectory optimization of Space Manoeuvre Vehicle using adaptive differential evolution and modified game theory. Acta Astronaut. 2017, 136, 273–280. [Google Scholar] [CrossRef]
  40. Cao, R.; Coit, D.W.; Hou, W.; Yang, Y. Game theory based solution selection for multi-objective redundancy allocation in interval-valued problem parameters. Reliab. Eng. Syst. Saf. 2020, 199, 106932. [Google Scholar] [CrossRef]
  41. Deb, K. Multiobjective Optimization Using Evolutionary Algorithms; John Wiley & Sons: New York, NY, USA, 2001. [Google Scholar]
  42. Osyczka, A. Evolutionary Algorithms for Single and Multicriteria Design Optimization; Studies in Fuzziness and Soft Computing 100; Physica: New York, NY, USA, 2002. [Google Scholar]
  43. Tan, K.C.; Khor, E.F.; Lee, T.H. Multiobjective Evolutionary Algorithms and Applications; Springer: New York, NY, USA, 2005. [Google Scholar]
  44. Sakawa, M. Genetic Algorithms and Fuzzy Multiobjective Optimization; Kluwer Academic Publishers: New York, NY, USA, 2002. [Google Scholar]
  45. Tiwari, V.L.; Thapar, A.; Bansal, R. A Genetic Algorithm for Solving Nonlinear Optimization Problem with Max-Archimedean Bipolar Fuzzy Relation Equations. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 2023, 31, 303–326. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Wu, H.-C. Using Genetic Algorithms and Core Values of Cooperative Games to Solve Fuzzy Multiobjective Optimization Problems. Axioms 2024, 13, 298. https://doi.org/10.3390/axioms13050298
