
An Investigation of the Optimistic Solution to the Linear Trilevel Programming Problem

Department of Mathematics, Shahid Chamran University of Ahvaz, Ahvaz 61357-83151, Iran
* Author to whom correspondence should be addressed.
Mathematics 2018, 6(10), 179; https://doi.org/10.3390/math6100179
Submission received: 29 July 2018 / Revised: 22 September 2018 / Accepted: 25 September 2018 / Published: 27 September 2018

Abstract

In this paper, we consider a general version of the linear trilevel programming problem. Three different types of optimistic optimal solutions have previously been proposed for a special trilevel programming problem. This paper presents the mathematical formulation of all three types of optimistic optimal solutions for the given linear trilevel programming problem. Moreover, some properties of the inducible region (the feasible region of the trilevel programming problem) corresponding to each optimistic optimal solution are investigated. Finally, a numerical example is presented to compare the different types of optimistic optimal solutions.

1. Introduction

Multi-level programming is a tool designed to model interactions in organizations with a hierarchical structure. Mathematically, in a multi-level programming problem, the constraints contain a sequence of parametric optimization problems that must be solved in a predetermined order, and higher-level variables are treated as parameters in the lower-level programming problems. Multi-level programming problems have many applications in areas such as supply chain management [1,2], network defense [3,4], planning [5], logistics [6] and economics [7]. Although most research on multi-level programming has focused on problems with only two levels (referred to as bilevel programming) [8,9], many multi-level programming problems involve more than two levels. For example, a decision-making problem involving the government at the first level, the private sector at the second level and the public at the third level can be modeled as a trilevel programming problem. Consequently, research into the properties of trilevel programming models and solution approaches for them has grown considerably. Bard [10] and Anandalingam [11] proposed methods based on the Kuhn-Tucker transformation to find the optimal solution of linear trilevel problems. In order to solve a special class of trilevel problems, a multi-parametric approach was presented by Faisca et al. [12]. A trilevel Kth-best algorithm was developed by Zhang et al. [13] to solve linear trilevel problems. In addition, some meta-heuristic approaches based on fuzzy and particle swarm optimization methods have been proposed for solving trilevel programming problems [14,15]. For a thorough bibliography of multi-level programming problems and their applications, see [9].
In most studies on multi-level programming problems, it is assumed that, for each decision made at the upper levels, the optimal solution of the lower-level problems is unique; however, this does not always hold. Any minimization (maximization) problem whose objective is not strictly convex (concave) may have multiple optimal solutions. In a multi-level problem, the choice among alternative optimal solutions at a certain level yields the same objective value for that level, but each alternative has a different impact on the overall problem. For this reason, how each level chooses among its alternative optimal solutions matters.
In bilevel programming problems, optimistic and pessimistic approaches have been proposed to resolve such ambiguities. In the optimistic approach [16], the decision maker of the lower level (the follower) is motivated to choose, among his/her alternative optimal solutions, the one that is best for the upper-level decision maker (the leader). In contrast, in the pessimistic approach [17], the second-level decision maker is assumed to choose the optimal solution that is worst for the first-level decision maker. Very few papers deal with optimistic and pessimistic approaches in trilevel programming problems [5,18]. Li et al. [18] examined optimality conditions for the pessimistic trilevel programming problem, and Florensa et al. [5] proposed three different types of optimistic definitions for a special trilevel programming problem. In this paper, we investigate some properties of a general version of an optimistic linear trilevel programming (LTLP) problem.
The paper is organized as follows. In the next section, we develop the mathematical formulation of all types of optimistic definitions given in [5] for a general version of the linear trilevel programming problem. In Section 3, we show that the inducible region containing sequentially optimistic and hierarchically optimistic feasible solutions is the union of some faces of the constraint region. In addition, we prove that sequentially optimistic and hierarchically optimistic optimal solutions occur at some extreme points of the constraint region. A numerical example showing that this result does not hold for the strategically optimistic optimal solution is presented in Section 4. The paper is concluded in Section 5.

2. Preliminaries

In this section, we state the mathematical formulation of the LTLP problem. Then, we redefine the concepts of sequentially optimistic, hierarchically optimistic and strategically optimistic optimal solutions for the LTLP problem.
The linear trilevel programming problem can be formulated in general as follows:
$$
\begin{aligned}
\min_{x_1 \in X_1} \quad & f_1(x_1, x_2, x_3) = \sum_{j=1}^{3} \alpha_{1j}^T x_j \\
\text{s.t.} \quad & \sum_{j=1}^{3} A_{1j} x_j \le b_1 \\
& \text{where } x_2, x_3 \text{ solve:} \\
& \quad \min_{x_2 \in X_2} \quad f_2(x_1, x_2, x_3) = \sum_{j=1}^{3} \alpha_{2j}^T x_j \\
& \quad \text{s.t.} \quad \sum_{j=1}^{3} A_{2j} x_j \le b_2 \\
& \qquad \text{where } x_3 \text{ solves:} \\
& \qquad \min_{x_3 \in X_3} \quad f_3(x_1, x_2, x_3) = \sum_{j=1}^{3} \alpha_{3j}^T x_j \\
& \qquad \text{s.t.} \quad \sum_{j=1}^{3} A_{3j} x_j \le b_3
\end{aligned}
\tag{1}
$$
where $X_k \subseteq \mathbb{R}_+^{n_k}$ for $k = 1, 2, 3$. The variables $x_1$, $x_2$ and $x_3$ are called the top-level, middle-level and bottom-level variables, and the functions $f_1, f_2, f_3 : X_1 \times X_2 \times X_3 \to \mathbb{R}$ are the top-level, middle-level and bottom-level objective functions, respectively. Here, $\alpha_{ij}$, $A_{ij}$ and $b_i$ are vectors and matrices of conformal dimensions. This decision-making problem consists of three optimization sub-problems arranged in a three-level hierarchy. Each level controls its own variables, but also takes the other levels' variables into account in its objective function and constraints [13].
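Although the paper itself contains no code, the nested structure of (1) can be made concrete with a small script. The following Python sketch (using SciPy's linprog; the matrices are hypothetical toy data, not taken from the paper) fixes the top- and middle-level decisions and solves the resulting bottom-level linear program, producing one element of the rational reaction set $\Psi_3(\bar{x}_1, \bar{x}_2)$ defined later in this section.

```python
# A minimal sketch (hypothetical data) of how the bottom level of problem (1)
# becomes an ordinary linear program once the top- and middle-level decisions
# x1_bar, x2_bar are fixed: its feasible set is Omega_3(x1_bar, x2_bar), and
# any optimal point lies in Psi_3(x1_bar, x2_bar).
import numpy as np
from scipy.optimize import linprog

# Hypothetical third-level data: f3 = alpha33 . x3 (the terms in x1, x2 are
# constant for fixed upper decisions), A31 x1 + A32 x2 + A33 x3 <= b3, x3 >= 0.
alpha33 = np.array([1.0, -1.0])
A31 = np.array([[1.0], [0.0]])
A32 = np.array([[1.0], [1.0]])
A33 = np.array([[1.0, -1.0], [-1.0, 1.0]])
b3 = np.array([6.0, -2.0])

x1_bar = np.array([1.0])   # fixed top-level decision
x2_bar = np.array([0.5])   # fixed middle-level decision

# Omega_3(x1_bar, x2_bar) = {x3 >= 0 : A33 x3 <= b3 - A31 x1_bar - A32 x2_bar}.
rhs = b3 - A31 @ x1_bar - A32 @ x2_bar
res = linprog(c=alpha33, A_ub=A33, b_ub=rhs,
              bounds=[(0, None)] * len(alpha33), method="highs")
print("an element of Psi_3(x1_bar, x2_bar):", res.x)
print("third-level optimal value (x3 part):", res.fun)
```

When the returned optimal solution is not unique, the definitions below specify which of the alternative optima the bottom level is assumed to report.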
Definition 1.
An optimal solution to the LTLP problem is called a sequentially optimistic optimal solution if the third-level decision maker is required to choose an optimal solution in favor of the second-level decision maker among his/her own alternative optimal solutions, and the second-level decision maker is required to choose an optimal solution in favor of the first-level decision maker among his/her own multiple optimal solutions.
Definition 2.
An optimal solution to the LTLP problem is called a hierarchically optimistic optimal solution if second-level and third-level decision makers are required to choose their own optimal solutions in favor of the first-level decision maker among their own alternative optimal solutions.
Definition 3.
An optimal solution to the LTLP problem is called a strategically optimistic optimal solution if the third-level decision maker is motivated by the first-level decision maker to choose, among his/her own alternative optimal solutions, the one that is most detrimental to the second-level decision maker. Simultaneously, the second-level decision maker reacts with any strategy that lets him/her attain a solution that is at least as good as the worst-case scenario towards which the bottom-level decision maker is pressuring him/her.
As a modeler, we have to be aware of the different interpretations of the different kinds of optimistic optimal solutions. For instance, in the case of market planning where the manufacturer, the seller and the customer are the top-level, the middle-level and the bottom-level decision makers, respectively, the sequentially optimistic model implies that the customer prefers to choose, among the products with equal prices, the one that is most profitable for the seller. In contrast, in the hierarchically optimistic model, the customer favors the manufacturer's benefit instead. Finally, the strategically optimistic model implies additional control of the manufacturer over the customer's decision. Actually, the manufacturer is able to oblige the customer to choose the product that is most profitable for the manufacturer, which works against the seller's benefit, and yet the seller is willing to react so as to reach a situation at least as good as the worst-case scenario to which the customer leads him/her (that is, $\forall \tilde{x}_2 \in \Phi(\bar{x}_1),\ \exists \tilde{x}_3 \in \Psi_3(\bar{x}_1, \tilde{x}_2)$ such that $f_2(\bar{x}_1, x_2, x_3) \le f_2(\bar{x}_1, \tilde{x}_2, \tilde{x}_3)$; the sets $\Phi$ and $\Psi_3$ are defined below).
To illustrate the above definitions mathematically, we need to define the following sets:
  • The trilevel constraint region:
    $S = \{(x_1, x_2, x_3) : \sum_{j=1}^{3} A_{ij} x_j \le b_i,\ x_i \in X_i,\ i = 1, 2, 3\}$.
  • Projection of $S$ on $X_1$:
    $S_{X_1} = \{x_1 \in X_1 : \exists (x_2, x_3) \in X_2 \times X_3 \text{ such that } (x_1, x_2, x_3) \in S\}$.
  • Projection of $S$ on $X_1 \times X_2$:
    $S_{X_1 \times X_2} = \{(x_1, x_2) \in X_1 \times X_2 : \exists x_3 \in X_3 \text{ such that } (x_1, x_2, x_3) \in S\}$.
  • Constraint region of the middle and bottom levels, for fixed $\bar{x}_1$:
    $S_2(\bar{x}_1) = \{(x_2, x_3) : \sum_{j=2}^{3} A_{ij} x_j \le b_i - A_{i1} \bar{x}_1,\ x_i \in X_i,\ i = 2, 3\}$.
  • Feasible set of the third level, for fixed $(\bar{x}_1, \bar{x}_2)$:
    $\Omega_3(\bar{x}_1, \bar{x}_2) = \{x_3 \in X_3 : A_{33} x_3 \le b_3 - \sum_{j=1}^{2} A_{3j} \bar{x}_j\}$.
  • Rational reaction set of the third level, for fixed $(\bar{x}_1, \bar{x}_2)$:
    $\Psi_3(\bar{x}_1, \bar{x}_2) = \operatorname{argmin}_{x_3} \{f_3(\bar{x}_1, \bar{x}_2, x_3) : x_3 \in \Omega_3(\bar{x}_1, \bar{x}_2)\}$.
The sets corresponding to the sequentially optimistic optimal solution:
  • Sequentially optimistic rational reaction set of the third level, for fixed $(\bar{x}_1, \bar{x}_2)$:
    $\Psi_3^{S.O}(\bar{x}_1, \bar{x}_2) = \operatorname{argmin}_{x_3} \{f_2(\bar{x}_1, \bar{x}_2, x_3) : x_3 \in \Psi_3(\bar{x}_1, \bar{x}_2)\}$.
  • Sequentially optimistic feasible set of the second level, for fixed $\bar{x}_1$:
    $\Omega_2^{S.O}(\bar{x}_1) = \{(x_2, x_3) \in S_2(\bar{x}_1) : x_3 \in \Psi_3^{S.O}(\bar{x}_1, x_2)\}$.
  • Sequentially optimistic rational reaction set of the second level, for fixed $\bar{x}_1$:
    $\Psi_2^{S.O}(\bar{x}_1) = \operatorname{argmin}_{x_2, x_3} \{f_1(\bar{x}_1, x_2, x_3) : (x_2, x_3) \in \operatorname{argmin}_{\hat{x}_2, \hat{x}_3} \{f_2(\bar{x}_1, \hat{x}_2, \hat{x}_3) : (\hat{x}_2, \hat{x}_3) \in \Omega_2^{S.O}(\bar{x}_1)\}\}$.
  • Sequentially optimistic inducible region:
    $IR^{S.O} = \{(x_1, x_2, x_3) \in S : (x_2, x_3) \in \Psi_2^{S.O}(x_1)\}$.
  • Sequentially optimistic optimal solution set:
    $S.O.S = \operatorname{argmin}_{x_1, x_2, x_3} \{f_1(x_1, x_2, x_3) : (x_1, x_2, x_3) \in IR^{S.O}\}$.
The sets corresponding to the hierarchically optimistic optimal solution:
  • Hierarchically optimistic rational reaction set of the third level, for fixed $(\bar{x}_1, \bar{x}_2)$:
    $\Psi_3^{H.O}(\bar{x}_1, \bar{x}_2) = \operatorname{argmin}_{x_3} \{f_1(\bar{x}_1, \bar{x}_2, x_3) : x_3 \in \Psi_3(\bar{x}_1, \bar{x}_2)\}$.
  • Hierarchically optimistic feasible set of the second level, for fixed $\bar{x}_1$:
    $\Omega_2^{H.O}(\bar{x}_1) = \{(x_2, x_3) \in S_2(\bar{x}_1) : x_3 \in \Psi_3^{H.O}(\bar{x}_1, x_2)\}$.
  • Hierarchically optimistic rational reaction set of the second level, for fixed $\bar{x}_1$:
    $\Psi_2^{H.O}(\bar{x}_1) = \operatorname{argmin}_{x_2, x_3} \{f_1(\bar{x}_1, x_2, x_3) : (x_2, x_3) \in \operatorname{argmin}_{\hat{x}_2, \hat{x}_3} \{f_2(\bar{x}_1, \hat{x}_2, \hat{x}_3) : (\hat{x}_2, \hat{x}_3) \in \Omega_2^{H.O}(\bar{x}_1)\}\}$.
  • Hierarchically optimistic inducible region:
    $IR^{H.O} = \{(x_1, x_2, x_3) \in S : (x_2, x_3) \in \Psi_2^{H.O}(x_1)\}$.
  • Hierarchically optimistic optimal solution set:
    $H.O.S = \operatorname{argmin}_{x_1, x_2, x_3} \{f_1(x_1, x_2, x_3) : (x_1, x_2, x_3) \in IR^{H.O}\}$.
The sets corresponding to the strategically optimistic optimal solution:
  • Strategically optimistic feasible set of the second level, for fixed $\bar{x}_1$:
    $\Omega_2^{ST.O}(\bar{x}_1) = \{(x_2, x_3) \in S_2(\bar{x}_1) : x_3 \in \Psi_3(\bar{x}_1, x_2)\}$.
  • Projection of $\Omega_2^{ST.O}(\bar{x}_1)$ on $X_2$, for fixed $\bar{x}_1$:
    $\Phi(\bar{x}_1) = \{x_2 \in X_2 : \exists x_3 \in \Psi_3(\bar{x}_1, x_2) \text{ such that } (x_2, x_3) \in \Omega_2^{ST.O}(\bar{x}_1)\}$.
  • Strategically optimistic rational reaction set of the second level, for fixed $\bar{x}_1$:
    $\Psi_2^{ST.O}(\bar{x}_1) = \{(x_2, x_3) \in \Omega_2^{ST.O}(\bar{x}_1) : \forall \tilde{x}_2 \in \Phi(\bar{x}_1),\ \exists \tilde{x}_3 \in \Psi_3(\bar{x}_1, \tilde{x}_2) \text{ such that } f_2(\bar{x}_1, x_2, x_3) \le f_2(\bar{x}_1, \tilde{x}_2, \tilde{x}_3)\}$.
  • Strategically optimistic inducible region:
    $IR^{ST.O} = \{(x_1, x_2, x_3) \in S : (x_2, x_3) \in \Psi_2^{ST.O}(x_1)\}$.
  • Strategically optimistic optimal solution set:
    $ST.O.S = \operatorname{argmin}_{x_1, x_2, x_3} \{f_1(x_1, x_2, x_3) : (x_1, x_2, x_3) \in IR^{ST.O}\}$.
For the convenience of the reader, we recall that the term $\operatorname{argmin}\{f(x) : x \in S\}$ denotes the set of all minimizers of the function $f$ over the set $S$.
Remark 1.
It is clear that if $\Psi_3(\bar{x}_1, \bar{x}_2)$ is single-valued, then $\Psi_3(\bar{x}_1, \bar{x}_2) = \Psi_3^{S.O}(\bar{x}_1, \bar{x}_2) = \Psi_3^{H.O}(\bar{x}_1, \bar{x}_2)$.
The following example is a simple case that shows that the various optimistic approaches may yield different optimal solutions.
Example 1.
Without loss of generality, let:
$$S = \{(\bar{x}, y_1, z_1),\ (\bar{x}, y_1, z_2),\ (\bar{x}, y_2, \bar{z}_1),\ (\bar{x}, y_2, \bar{z}_2)\}$$
and:
$$\Psi_3(\bar{x}, y_1) = \{z_1, z_2\}, \qquad \Psi_3(\bar{x}, y_2) = \{\bar{z}_1, \bar{z}_2\}.$$
The values of the top-level, middle-level and bottom-level objective functions are given in Table 1. Therefore, we get:
$$\Psi_3^{S.O}(\bar{x}, y_1) = \operatorname{argmin}_{x_3} \{f_2(\bar{x}, y_1, x_3) : x_3 \in \Psi_3(\bar{x}, y_1)\} = \{z_2\},$$
$$\Psi_3^{S.O}(\bar{x}, y_2) = \operatorname{argmin}_{x_3} \{f_2(\bar{x}, y_2, x_3) : x_3 \in \Psi_3(\bar{x}, y_2)\} = \{\bar{z}_1\},$$
and:
$$\Psi_3^{H.O}(\bar{x}, y_1) = \operatorname{argmin}_{x_3} \{f_1(\bar{x}, y_1, x_3) : x_3 \in \Psi_3(\bar{x}, y_1)\} = \{z_1\},$$
$$\Psi_3^{H.O}(\bar{x}, y_2) = \operatorname{argmin}_{x_3} \{f_1(\bar{x}, y_2, x_3) : x_3 \in \Psi_3(\bar{x}, y_2)\} = \{\bar{z}_2\},$$
and:
$$\Omega_2^{S.O}(\bar{x}) = \{(y_1, z_2), (y_2, \bar{z}_1)\}, \qquad \Omega_2^{H.O}(\bar{x}) = \{(y_1, z_1), (y_2, \bar{z}_2)\};$$
then,
$$\Psi_2^{S.O}(\bar{x}) = \{(y_1, z_2)\}, \qquad \Psi_2^{H.O}(\bar{x}) = \{(y_1, z_1)\}.$$
Moreover,
$$\Omega_2^{ST.O}(\bar{x}) = \{(y_1, z_1), (y_1, z_2), (y_2, \bar{z}_1), (y_2, \bar{z}_2)\}$$
and:
$$\Psi_2^{ST.O}(\bar{x}) = \{(y_1, z_1), (y_1, z_2), (y_2, \bar{z}_1)\}.$$
Consequently, the points $(\bar{x}, y_1, z_2)$, $(\bar{x}, y_1, z_1)$ and $(\bar{x}, y_2, \bar{z}_1)$ are the sequentially, hierarchically and strategically optimistic optimal solutions, respectively.
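Because all of the sets in Example 1 are finite, these computations can be checked by brute force. The following Python sketch (an illustration added here, not part of the original paper; the labels "zb1" and "zb2" stand for $\bar{z}_1$ and $\bar{z}_2$) applies the set definitions of this section directly to the objective values of Table 1 and reproduces the three optimal solutions stated above.

```python
# Brute-force check of Example 1 using the objective values of Table 1.
Y = {"y1": ["z1", "z2"], "y2": ["zb1", "zb2"]}          # Psi_3(xbar, y)
f = {  # (f1, f2, f3) for each full point (xbar, y, z), taken from Table 1
    ("y1", "z1"): (350, 350, 300),
    ("y1", "z2"): (450, 150, 300),
    ("y2", "zb1"): (300, 250, 200),
    ("y2", "zb2"): (250, 400, 200),
}

def argmin(candidates, key):
    # Set of all minimizers of `key` over a finite collection.
    best = min(key(c) for c in candidates)
    return [c for c in candidates if key(c) == best]

# Sequentially / hierarchically optimistic third-level reaction sets.
psi3_SO = {y: argmin(Y[y], key=lambda z: f[(y, z)][1]) for y in Y}   # min f2
psi3_HO = {y: argmin(Y[y], key=lambda z: f[(y, z)][0]) for y in Y}   # min f1

# Second-level feasible sets and reaction sets (x1 = xbar is fixed).
omega2_SO = [(y, z) for y in Y for z in psi3_SO[y]]
omega2_HO = [(y, z) for y in Y for z in psi3_HO[y]]
psi2_SO = argmin(argmin(omega2_SO, key=lambda p: f[p][1]), key=lambda p: f[p][0])
psi2_HO = argmin(argmin(omega2_HO, key=lambda p: f[p][1]), key=lambda p: f[p][0])

# Strategically optimistic reaction set: (y, z) survives if, for every
# alternative y~, some z~ in Psi_3(xbar, y~) is at least as bad for f2.
omega2_ST = [(y, z) for y in Y for z in Y[y]]
psi2_ST = [p for p in omega2_ST
           if all(any(f[p][1] <= f[(yt, zt)][1] for zt in Y[yt]) for yt in Y)]

print("S.O. optimum :", argmin(psi2_SO, key=lambda p: f[p][0]))   # (y1, z2)
print("H.O. optimum :", argmin(psi2_HO, key=lambda p: f[p][0]))   # (y1, z1)
print("ST.O. optimum:", argmin(psi2_ST, key=lambda p: f[p][0]))   # (y2, zb1)
```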

3. Geometric Properties

In this section, we investigate the geometric properties of the sequentially optimistic inducible region and hierarchically optimistic inducible region and their corresponding optimistic optimal solutions.
Note that $\Psi_3^{S.O}(\cdot,\cdot)$, $\Psi_3^{H.O}(\cdot,\cdot)$, $\Psi_2^{S.O}(\cdot)$, $\Psi_2^{H.O}(\cdot)$ and the other maps defined above can be regarded as point-to-set maps. We need to introduce some definitions and notation about point-to-set maps before stating the main results.
Definition 4.
A convex polyhedral set is the intersection of a finite number of half-spaces.
Definition 5.
Let $X \subseteq \mathbb{R}^n$, $Y \subseteq \mathbb{R}^m$ and let $\Psi : X \to P(Y)$ be a point-to-set map from $X$ into $P(Y)$. The mapping $\Psi(\cdot)$ is called polyhedral if its graph can be written as the union of a finite number of convex polyhedral sets.
Recall that $\operatorname{graph}(\Psi) := \{(x, y) \in X \times Y : y \in \Psi(x)\}$ and that $P(Y)$ denotes the set of all subsets of $Y$, i.e., the power set of $Y$.
In order to ensure that the LTLP problem is well-posed [10], the following assumptions are made.
Assumption 1.
$\Psi_3(x_1, x_2) \neq \emptyset$ for each $(x_1, x_2) \in S_{X_1 \times X_2}$, and it is bounded.
Assumption 2.
$\Psi_2^{S.O}(x_1) \neq \emptyset$ for each $x_1 \in S_{X_1}$, and it is continuous and bounded.
Assumption 3.
$\Psi_2^{H.O}(x_1) \neq \emptyset$ for each $x_1 \in S_{X_1}$, and it is continuous and bounded.
Remark 2.
By the definitions of $\Psi_3^{H.O}(x_1, x_2)$ and $\Psi_3^{S.O}(x_1, x_2)$, it can be concluded that if Assumption 1 is satisfied, then the mappings $\Psi_3^{H.O}(\cdot,\cdot)$ and $\Psi_3^{S.O}(\cdot,\cdot)$ are also non-empty-valued and bounded.
Now, we prove that $\Psi_3^{S.O}(\cdot,\cdot)$, $\Psi_2^{S.O}(\cdot)$, $\Psi_3^{H.O}(\cdot,\cdot)$ and $\Psi_2^{H.O}(\cdot)$ are polyhedral. The proofs for the maps related to the sequentially optimistic and hierarchically optimistic optimal solutions are alike, so we present only the former and omit the latter.
Theorem 1.
The point-to-set maps $\Psi_3^{S.O}(\cdot,\cdot)$ and $\Psi_3^{H.O}(\cdot,\cdot)$ are polyhedral.
Proof. 
Let $(x_1, x_2) \in S_{X_1 \times X_2}$ be arbitrary. By Assumption 1, we have $\Psi_3^{S.O}(x_1, x_2) \neq \emptyset$. Using the definition of $\Psi_3^{S.O}(x_1, x_2)$, we get:
$$\Psi_3^{S.O}(x_1, x_2) = \operatorname{argmin}_{x_3} \{f_2(x_1, x_2, x_3) : x_3 \in \Psi_3(x_1, x_2)\}.$$
We know that the point-to-set mapping $\Psi_3(\cdot,\cdot)$ is polyhedral [8]. Therefore, $\Psi_3(x_1, x_2) = \bigcup_{i=1}^{k} P_i(x_1, x_2)$, where $P_i(x_1, x_2)$ is a convex polyhedral set for $i = 1, \ldots, k$. Hence, $\Psi_3^{S.O}(x_1, x_2)$ can be written as follows:
$$\Psi_3^{S.O}(x_1, x_2) = \operatorname{argmin}_{x_3} \{f_2(x_1, x_2, x_3) : x_3 \in \bigcup_{i=1}^{k} P_i(x_1, x_2)\}.$$
Now, let $I \subseteq \{1, \ldots, k\}$ be the index set with the property that if $i \in I$, then:
$$\operatorname{argmin}_{x_3} \{f_2(x_1, x_2, x_3) : x_3 \in P_i(x_1, x_2)\} \subseteq \operatorname{argmin}_{x_3} \{f_2(x_1, x_2, x_3) : x_3 \in \Psi_3(x_1, x_2)\}.$$
It is clear that $\Psi_3^{S.O}(x_1, x_2) = \bigcup_{i \in I} \operatorname{argmin}_{x_3} \{f_2(x_1, x_2, x_3) : x_3 \in P_i(x_1, x_2)\}$, and $\operatorname{argmin}_{x_3} \{f_2(x_1, x_2, x_3) : x_3 \in P_i(x_1, x_2)\}$ is polyhedral for all $i \in I$ [8]. This fact asserts that $\Psi_3^{S.O}(\cdot,\cdot)$ is polyhedral as well. ☐
Theorem 2.
The point-to-set maps $\Omega_2^{S.O}(\cdot)$ and $\Omega_2^{H.O}(\cdot)$ are polyhedral.
Proof. 
Let $\bar{x}_1 \in S_{X_1}$ be arbitrary. In order to prove that $\Omega_2^{S.O}(\cdot)$ is polyhedral, we need to show that $\Omega_2^{S.O}(\bar{x}_1)$ is polyhedral; this is an immediate consequence of the fact that $\Psi_3^{S.O}(\cdot,\cdot)$ is polyhedral and that $\operatorname{graph}(\Omega_2^{S.O}(\cdot))$ is the intersection of $\operatorname{graph}(\Psi_3^{S.O}(\cdot,\cdot))$ and $S_2(\bar{x}_1)$, which are both polyhedral. ☐
Theorem 3.
The point-to-set mappings $\Psi_2^{S.O}(\cdot)$ and $\Psi_2^{H.O}(\cdot)$ are polyhedral.
Proof. 
Let $\bar{x}_1 \in S_{X_1}$ be arbitrary. By the definition of $\Psi_2^{S.O}(\bar{x}_1)$, we first have to prove that $\operatorname{argmin}_{\hat{x}_2, \hat{x}_3} \{f_2(\bar{x}_1, \hat{x}_2, \hat{x}_3) : (\hat{x}_2, \hat{x}_3) \in \Omega_2^{S.O}(\bar{x}_1)\}$ is polyhedral; the argument is quite similar to that of Theorem 1. Using this fact and applying the argument of Theorem 1 once more, we can prove that $\Psi_2^{S.O}(\bar{x}_1)$ is polyhedral. ☐
Theorem 4.
Let the trilevel constraint region $S$ be non-empty and compact, and let $IR^{S.O} \neq \emptyset$ ($IR^{H.O} \neq \emptyset$). Then, $IR^{S.O}$ ($IR^{H.O}$) is the union of a finite number of convex polyhedral sets.
Proof. 
The result is deduced from the fact that $\operatorname{graph}(\Psi_2^{S.O})$ is equal to the union of a finite number of convex polyhedral sets and $IR^{S.O}$ is the intersection of $S$ and $\operatorname{graph}(\Psi_2^{S.O})$. ☐
Corollary 1.
If the conditions of Theorem 4 are satisfied, then $IR^{S.O}$ and $IR^{H.O}$ are the union of some non-empty faces of $S$.
Corollary 2.
If the conditions of Theorem 4 are satisfied, then $S.O.S$ ($H.O.S$) is non-empty, and there exists an extreme point of $S$ that belongs to $S.O.S$ ($H.O.S$).
In the next section, an example is presented to show that the above statements are not necessarily true for strategically optimistic optimal solutions.

4. Numerical Examples

In order to illustrate the above statements more precisely, we present the following numerical example.
Example 2.
$$
\begin{aligned}
\min_{x \in \mathbb{R}_+} \quad & f_1 = 3x + 3y - z_1 + 2z_2 \\
\text{s.t.} \quad & 3x + 2y - 2z_1 + 2z_2 \le 30 \\
& \text{where } y, z_1, z_2 \text{ solve:} \\
& \quad \min_{y \in \mathbb{R}_+} \quad f_2 = 3x - y - z_2 \\
& \quad \text{s.t.} \quad -3x + y \le 2 \\
& \qquad \text{where } z_1, z_2 \text{ solve:} \\
& \qquad \min_{(z_1, z_2) \in \mathbb{R}_+^2} \quad f_3 = x + y + z_1 - z_2 \\
& \qquad \text{s.t.} \quad 2 \le x + y + z_1 - z_2 \le 6 \\
& \qquad \qquad 0 \le y, z_1, z_2 \le 6 \\
& \qquad \qquad 0 \le x \le 2
\end{aligned}
\tag{2}
$$
In order to find all types of optimistic optimal solutions, we first have to find $\Psi_3(x, y)$ for fixed $x, y$. To this end, the bottom-level optimization problem is recast as the following multi-parametric programming problem, in which $x, y$ are considered as parameters:
$$
\begin{aligned}
\min_{(z_1, z_2) \in \mathbb{R}_+^2} \quad & x + y + z_1 - z_2 \\
\text{s.t.} \quad & z_1 - z_2 \le \min\{6,\ 6 - x - y\} \\
& z_1 - z_2 \ge \max\{-6,\ 2 - x - y\} \\
& 0 \le y, z_1, z_2 \le 6 \\
& 0 \le x \le 2
\end{aligned}
\tag{3}
$$
By solving Problem (3) using the multi-parametric method [12], we obtain:
$$\Psi_3(x, y) = \{(z_1, z_2) : z_1 - z_2 = 2 - x - y,\ 0 \le z_1, z_2 \le 6\} \quad \text{for all } 0 \le x \le 2,\ 0 \le y \le 6.$$
It can be seen that Problem (3) has infinitely many optimal solutions, and hence $\Psi_3(x, y)$ is not single-valued. In the following steps, we find the sequentially, hierarchically and strategically optimistic optimal solutions of this problem; the sketch below illustrates the third-level reaction sets numerically at one sample point.
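As a numerical cross-check (an added illustration assuming SciPy is available; it is not part of the original solution procedure), the following sketch solves Problem (3) for one arbitrarily chosen pair $(x, y) = (1, 0.5)$ and then selects, on the optimal face $\Psi_3(x, y)$, the point preferred by the second level and the point preferred by the first level. The output agrees with the closed forms (5) and (7) derived in the following steps.

```python
# Third level of Example 2 at a fixed (x, y), plus the sequentially and
# hierarchically optimistic selections among its alternative optima.
from scipy.optimize import linprog

x, y = 1.0, 0.5                                   # fixed upper-level decisions

# Variables (z1, z2) with 0 <= z1, z2 <= 6.
bounds = [(0, 6), (0, 6)]
# Constraints of Problem (3):  2 - x - y <= z1 - z2 <= 6 - x - y (clipped at +-6).
A_ub = [[1, -1], [-1, 1]]
b_ub = [min(6, 6 - x - y), -max(-6, 2 - x - y)]

# Level 3: minimise f3 = x + y + z1 - z2 (the x + y part is constant).
res3 = linprog(c=[1, -1], A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
v3 = res3.fun                                     # optimal z1 - z2 = 2 - x - y

# Restrict to the optimal face Psi_3(x, y):  z1 - z2 = v3.
A_eq, b_eq = [[1, -1]], [v3]

# Sequentially optimistic choice: minimise f2 = 3x - y - z2  -> maximise z2.
so = linprog(c=[0, -1], A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
# Hierarchically optimistic choice: minimise f1 = 3x + 3y - z1 + 2 z2.
ho = linprog(c=[-1, 2], A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")

print("Psi_3^{S.O}(1, 0.5) ~", so.x)   # expected (6, 4 + x + y) = (6.0, 5.5)
print("Psi_3^{H.O}(1, 0.5) ~", ho.x)   # expected (2 - x - y, 0) = (0.5, 0.0)
```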
  • Finding the sequentially optimistic optimal solution
First, we have to find $\Psi_3^{S.O}(x, y)$ for each fixed $x, y$.
Step 1. In order to find $\Psi_3^{S.O}(x, y)$, we have to solve the following linear parametric programming problem, in which $x, y$ are considered as parameters:
$$
\begin{aligned}
\min_{(z_1, z_2) \in \mathbb{R}_+^2} \quad & 3x - y - z_2 \\
\text{s.t.} \quad & z_1 - z_2 = 2 - x - y \\
& 0 \le y, z_1, z_2 \le 6 \\
& 0 \le x \le 2
\end{aligned}
\tag{4}
$$
By solving Problem (4), we obtain that:
$$
\Psi_3^{S.O}(x, y) =
\begin{cases}
(6,\ 4 + x + y) & \text{if } 0 \le x + y \le 2,\ x, y \ge 0, \\
(8 - x - y,\ 6) & \text{if } 2 \le x + y \le 8,\ 0 \le y \le 6,\ 0 \le x \le 2.
\end{cases}
\tag{5}
$$
Step 2. By incorporating the resulting rational reaction set (5) into the definition of $\Omega_2^{S.O}(x)$, for all $0 \le x \le 2$, we get:
$$
\begin{aligned}
\Omega_2^{S.O}(x) = {} & \{(y, z_1, z_2) : -3x + y \le 2,\ 0 \le x + y \le 2,\ z_1 = 6,\ z_2 = 4 + x + y,\ y \ge 0\} \\
& \cup \{(y, z_1, z_2) : -3x + y \le 2,\ 2 \le x + y \le 8,\ z_1 = 8 - x - y,\ z_2 = 6,\ 0 \le y \le 6\}.
\end{aligned}
$$
Step 3. Considering the definition of $\Psi_2^{S.O}(x)$, we infer that:
$$
\Psi_2^{S.O}(x) =
\begin{cases}
(2 + 3x,\ 6 - 4x,\ 6) & \text{if } 0 \le x \le \tfrac{4}{3}, \\
(6,\ 2 - x,\ 6) & \text{if } \tfrac{4}{3} \le x \le 2.
\end{cases}
$$
Step 4. Consequently, $IR^{S.O}$ can be obtained as follows:
$$
IR^{S.O} = \{(x,\ 2 + 3x,\ 6 - 4x,\ 6) : 0 \le x \le \tfrac{4}{3}\} \cup \{(x,\ 6,\ 2 - x,\ 6) : \tfrac{4}{3} \le x \le 2\}.
$$
Step 5. On the first branch of $IR^{S.O}$, $f_1 = 3x + 3(2 + 3x) - (6 - 4x) + 12 = 16x + 12$, and on the second branch $f_1 = 4x + 28$; both are increasing in $x$, so the minimum over $IR^{S.O}$ is attained at $x = 0$ on the first branch. Hence, the point $(0, 2, 6, 6)$ is obtained as the sequentially optimistic optimal solution, and the sequentially optimistic optimal value of the linear trilevel programming problem (2) is $(f_1, f_2, f_3) = (12, -8, 2)$.
  • Finding the hierarchically optimistic optimal solution
Step 1. In order to find the hierarchically optimistic optimal solution, we have to find $\Psi_3^{H.O}(x, y)$ for each fixed $x, y$; that is, the optimal solution set of the following parametric linear programming problem with $x, y$ as parameters:
$$
\begin{aligned}
\min_{(z_1, z_2) \in \mathbb{R}_+^2} \quad & 3x + 3y - z_1 + 2z_2 \\
\text{s.t.} \quad & z_1 - z_2 = 2 - x - y \\
& 0 \le y, z_1, z_2 \le 6 \\
& 0 \le x \le 2
\end{aligned}
\tag{6}
$$
By solving the above problem, we deduce that:
$$
\Psi_3^{H.O}(x, y) =
\begin{cases}
(2 - x - y,\ 0) & \text{if } 0 \le x + y \le 2,\ x, y \ge 0, \\
(0,\ x + y - 2) & \text{if } 2 \le x + y \le 8,\ 0 \le x \le 2,\ 0 \le y \le 6.
\end{cases}
\tag{7}
$$
Step 2. By using the resulting rational reaction set (7), $\Omega_2^{H.O}(x)$ for each $0 \le x \le 2$ is obtained as follows:
$$
\begin{aligned}
\Omega_2^{H.O}(x) = {} & \{(y, z_1, z_2) : -3x + y \le 2,\ 0 \le x + y \le 2,\ z_1 = 2 - x - y,\ z_2 = 0,\ y \ge 0\} \\
& \cup \{(y, z_1, z_2) : -3x + y \le 2,\ 2 \le x + y \le 8,\ z_1 = 0,\ z_2 = x + y - 2,\ 0 \le y \le 6\}.
\end{aligned}
$$
Step 3. Using the definition of $\Psi_2^{H.O}(x)$ for each $0 \le x \le 2$, it follows that:
$$
\Psi_2^{H.O}(x) =
\begin{cases}
(2 + 3x,\ 0,\ 4x) & \text{if } 0 \le x \le \tfrac{4}{3}, \\
(6,\ 0,\ 4 + x) & \text{if } \tfrac{4}{3} \le x \le 2.
\end{cases}
$$
By considering the constraint of the first level, $IR^{H.O}$ can be written as follows:
$$
IR^{H.O} = \{(x,\ 2 + 3x,\ 0,\ 4x) : 0 \le x \le \tfrac{4}{3}\} \cup \{(x,\ 6,\ 0,\ 4 + x) : \tfrac{4}{3} \le x \le 2\}.
$$
Step 4. By using $IR^{H.O}$ and solving the resulting upper-level programming problem (on the first branch $f_1 = 3x + 3(2 + 3x) + 2(4x) = 20x + 6$ and on the second branch $f_1 = 5x + 26$, both increasing in $x$), the point $(0, 2, 0, 0)$ is obtained as the hierarchically optimistic optimal solution. The hierarchically optimistic optimal value of the linear trilevel programming problem (2) associated with this point is $(f_1, f_2, f_3) = (6, -2, 2)$.
  • Finding the strategically optimistic optimal solution
Step 1. In the first step, we find $\Omega_2^{ST.O}(x)$ and $\Phi(x)$ for each $0 \le x \le 2$:
$$
\Omega_2^{ST.O}(x) = \{(y, z_1, z_2) : -3x + y \le 2,\ z_1 - z_2 = 2 - x - y,\ 0 \le y, z_1, z_2 \le 6\}
$$
and:
$$
\Phi(x) =
\begin{cases}
\{y : 0 \le y \le 2 + 3x\} & \text{if } 0 \le x \le \tfrac{4}{3}, \\
\{y : 0 \le y \le 6\} & \text{if } \tfrac{4}{3} \le x \le 2.
\end{cases}
$$
(The upper bound $2 + 3x$ comes from the second-level constraint $-3x + y \le 2$; the breakpoint $x = \tfrac{4}{3}$ is where $2 + 3x = 6$.)
Step 2. Using the definition of $\Psi_2^{ST.O}(x)$, for each $0 \le x \le 2$, it follows that:
$$
\Psi_2^{ST.O}(x) = \{(y, z_1, z_2) \in \Omega_2^{ST.O}(x) : \forall \tilde{y} \in \Phi(x),\ \exists (\tilde{z}_1, \tilde{z}_2) \text{ with } \tilde{z}_1 - \tilde{z}_2 = 2 - x - \tilde{y} \text{ such that } y + z_2 \ge \tilde{y} + \tilde{z}_2\}.
\tag{8}
$$
Using Equation (8), we infer that:
$$
\Psi_2^{ST.O}(x) =
\begin{cases}
\{(y, z_1, z_2) : y \le 2 + 3x,\ z_1 - z_2 = 2 - x - y,\ y + z_2 \ge 2 + 7x,\ 0 \le y, z_1, z_2 \le 6\} & \text{if } 0 \le x \le \tfrac{4}{3}, \\
\{(y, z_1, z_2) : y \le 2 + 3x,\ z_1 - z_2 = 2 - x - y,\ y + z_2 \ge 10 + x,\ 0 \le y, z_1, z_2 \le 6\} & \text{if } \tfrac{4}{3} \le x \le 2.
\end{cases}
$$
Step 3. Eventually, applying the definition of $IR^{ST.O}$ and minimizing $f_1$ over it (substituting $z_1 = 2 - x - y + z_2$ gives $f_1 = 4x + 4y + z_2 - 2 \ge 11x$, with equality at $x = y = 0$, $z_2 = 2$), we find the point $(0, 0, 4, 2)$ as the strategically optimistic optimal solution, with corresponding objective function values $(f_1, f_2, f_3) = (0, -2, 2)$. Notably, this point achieves the best top-level objective value of the three approaches, yet it is not an extreme point of the constraint region. A short numerical check of all three reported solutions is given below.
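The following short Python check (an added illustration; the reading of the second-level constraint as $-3x + y \le 2$ follows the reconstruction of Problem (2) above) verifies that the three reported points are feasible for Problem (2) and yield the stated objective values.

```python
# Verify feasibility and objective values of the three reported optima of
# Problem (2): (12, -8, 2), (6, -2, 2) and (0, -2, 2).
def objectives(x, y, z1, z2):
    f1 = 3 * x + 3 * y - z1 + 2 * z2
    f2 = 3 * x - y - z2
    f3 = x + y + z1 - z2
    return f1, f2, f3

def feasible(x, y, z1, z2):
    return (3 * x + 2 * y - 2 * z1 + 2 * z2 <= 30   # first-level constraint
            and -3 * x + y <= 2                      # second-level constraint
            and 2 <= x + y + z1 - z2 <= 6            # third-level constraints
            and 0 <= x <= 2
            and all(0 <= v <= 6 for v in (y, z1, z2)))

points = {
    "sequentially optimistic": (0, 2, 6, 6),
    "hierarchically optimistic": (0, 2, 0, 0),
    "strategically optimistic": (0, 0, 4, 2),
}
for name, p in points.items():
    print(name, "feasible:", feasible(*p), "objectives:", objectives(*p))
```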

5. Conclusions

This paper considered linear trilevel programming problems in which the middle-level and bottom-level decision makers may have multiple optimal solutions. The structure of the inducible region corresponding to each kind of optimistic feasible solution was then investigated. We showed that at least one sequentially optimistic and at least one hierarchically optimistic optimal solution occur at extreme points of the constraint region. Finally, a numerical example indicated that the strategically optimistic optimal solution need not be an extreme point of the constraint region.

Author Contributions

Both authors contributed equally to this research. The research was carried out by both authors, and the manuscript was subsequently prepared together. Both authors read and approved the final manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
LTLP   Linear Trilevel Programming
IR     Inducible Region

References

  1. Sana, S.S. A production-inventory model of imperfect quality products in a three-layer supply chain. Decis. Support Syst. 2011, 50, 539–547.
  2. Xu, X.; Meng, Z.; Shen, R. A tri-level programming model based on conditional value-at-risk for three-stage supply chain management. Comput. Ind. Eng. 2013, 66, 470–475.
  3. Alguacil, N.; Delgadillo, A.; Arroyo, J.M. A trilevel programming approach for electric grid defense planning. Comput. Oper. Res. 2014, 41, 282–290.
  4. Yao, Y.; Edmunds, T.; Papageorgiou, D.; Alvarez, R. Trilevel optimization in power network defense. IEEE Trans. Syst. Man Cybern. 2007, 37, 712–718.
  5. Florensa, C.; Garcia-Herreros, P.; Misra, P.; Arslan, E.; Mehta, S.; Grossmann, I.E. Capacity planning with competitive decision-makers: Trilevel MILP formulation, degeneracy, and solution approaches. Eur. J. Oper. Res. 2017, 262, 449–463.
  6. Safaei, A.S.; Farsad, S.; Paydar, M.M. Robust bi-level optimization of relief logistics operations. Appl. Math. Model. 2018, 56, 359–380.
  7. Ke, G.Y.; Bookbinder, J.H. Coordinating the discount policies for retailer, wholesaler, and less-than-truckload carrier under price-sensitive demand: A trilevel optimization approach. Int. J. Prod. Econ. 2018, 196, 82–100.
  8. Dempe, S. Foundations of Bilevel Programming; Kluwer Academic Publishers: Dordrecht, The Netherlands, 2002.
  9. Lu, J.; Han, J.; Hu, Y.; Zhang, G. Multilevel decision-making: A survey. Inf. Sci. 2016, 346–347, 463–487.
  10. Bard, J.F. An investigation of the linear three level programming problem. IEEE Trans. Syst. Man Cybern. 1984, 14, 711–717.
  11. Anandalingam, G. A mathematical programming model of decentralized multi-level systems. J. Oper. Res. Soc. 1988, 39, 1021–1033.
  12. Faisca, N.P.; Saraiva, P.M.; Rustem, B.; Pistikopoulos, E.N. A multi-parametric programming approach for multilevel hierarchical and decentralised optimisation problems. Comput. Manag. Sci. 2009, 6, 377–397.
  13. Zhang, G.; Lu, J.; Montero, J.; Zeng, Y. Model, solution concept, and Kth-best algorithm for linear trilevel programming. Inf. Sci. 2010, 180, 481–492.
  14. Sakawa, M.; Nishizaki, I. Interactive fuzzy programming for multi-level programming problems: A review. Int. J. Multicrit. Decis. Making 2012, 2, 241–266.
  15. Han, J.; Zhang, G.; Hu, Y.; Lu, J. Solving tri-level programming problems using a particle swarm optimization algorithm. In Proceedings of the 10th IEEE Conference on Industrial Electronics and Applications, Auckland, New Zealand, 15–17 June 2015; pp. 569–574.
  16. Dempe, S.; Pilecka, M. Necessary optimality conditions for optimistic bilevel programming problems using set-valued programming. J. Glob. Optim. 2015, 61, 769–788.
  17. Wiesemann, W.; Tsoukalas, A.; Kleniati, P.M.; Rustem, B. Pessimistic bilevel optimization. SIAM J. Optim. 2013, 23, 353–380.
  18. Li, G.; Wan, Z.; Chen, J.; Zhao, X. Optimality conditions for pessimistic trilevel problem with middle-level problem being pessimistic. J. Nonlinear Sci. Appl. 2016, 9, 3864–3878.
Table 1. Top-level, middle-level and bottom-level objective function values of Example 1.

$(x, y, z)$   $(\bar{x}, y_1, z_1)$   $(\bar{x}, y_1, z_2)$   $(\bar{x}, y_2, \bar{z}_1)$   $(\bar{x}, y_2, \bar{z}_2)$
$f_1$         350                     450                     300                           250
$f_2$         350                     150                     250                           400
$f_3$         300                     300                     200                           200
