Article

A New Global Optimization Algorithm for a Class of Linear Fractional Programming

1 School of Mathematics and Statistics, Ningxia University, Yinchuan 750021, China
2 Ningxia Province Cooperative Innovation Center of Scientific Computing and Intelligent Information Processing, North Minzu University, Yinchuan 750021, China
3 Ningxia Province Key Laboratory of Intelligent Information and Data Processing, North Minzu University, Yinchuan 750021, China
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(9), 867; https://doi.org/10.3390/math7090867
Submission received: 26 August 2019 / Revised: 14 September 2019 / Accepted: 16 September 2019 / Published: 19 September 2019
(This article belongs to the Special Issue Fixed Point, Optimization, and Applications)

Abstract

In this paper, we propose a new global optimization algorithm that can better solve a class of large-scale linear fractional programming problems. First, the original problem is transformed into an equivalent nonlinear programming problem by introducing $p$ auxiliary variables and adding $p$ new nonlinear equality constraints. By classifying the signs of the coefficients of all linear functions in the objective function of the original problem, four index sets are obtained, namely $I_i^+$, $I_i^-$, $J_i^+$ and $J_i^-$. Combined with the multiplication rules of real arithmetic, the objective function and the constraints of the equivalent problem are linearized into a lower-bounding linear relaxation programming problem. Our lower-bounding method only requires $e_i^T x + f_i \neq 0$, so for some special problems there is no need to convert the numerators into non-negative form in advance. An output-space branch-and-bound algorithm based on solving linear programming problems is proposed, and its convergence is proved. Finally, to illustrate the feasibility and effectiveness of the algorithm, we carried out a series of numerical experiments, and we use the numerical results to show the advantages and disadvantages of our algorithm.

1. Introduction

Fractional programming is an important branch of nonlinear optimization and has attracted interest from researchers for several decades. The sum-of-linear-ratios problem is a special class of fractional programming with wide applications, such as transportation planning, as well as applications in economics [1], investment and production control [2,3,4], and multi-objective portfolios [5]. The primary challenges in solving linear fractional programming (LFP) arise from the lack of useful properties (convexity or otherwise) and from the number of ratios and the dimension of the decision space. Theoretically, the problem is NP-hard [6,7]. In addition, an LFP problem may have several local optimal solutions [8], which interferes with finding the global optimal solution and increases the difficulty of the problem. It is therefore worthwhile to study this kind of problem. In this paper, we investigate the following linear fractional programming problem:
$$(\mathrm{LFP}):\quad \min\ f(x)=\sum_{i=1}^{p}\frac{c_i^T x+d_i}{e_i^T x+f_i} \qquad \mathrm{s.t.}\quad Ax\le b,\quad x\ge 0,$$
where the feasible domain $X=\{x\in\mathbb{R}^n \mid Ax\le b,\ x\ge 0\}$ is $n$-dimensional, nonempty, and bounded; $p\ge 2$, $A\in\mathbb{R}^{m\times n}$, $b\in\mathbb{R}^m$, $c_i\in\mathbb{R}^n$, $d_i\in\mathbb{R}$, $e_i\in\mathbb{R}^n$, $f_i\in\mathbb{R}$, and $e_i^T x+f_i\neq 0$.
In practical applications, $p$ usually does not exceed 10. At present, many algorithms have been proposed to solve the LFP problem with a limited number of ratios. For instance, in 1962, Charnes and Cooper gave an effective elementary simplex method for the case $p=1$ [9]. For $p=2$, Konno et al. proposed a similar parametric elementary simplex method on the basis of [9], which can be used to solve large-scale problems [10]. For $p=3$, Konno and Abe constructed an effective heuristic algorithm by developing the parametric simplex algorithm [11]. For $p>3$, Shen and Wang reduced the original nonconvex programming problem to a series of linear programming problems by using equivalent transformation and linearization techniques, in order to solve the sum-of-linear-ratios problem with coefficients [12]. Nguyen and Tuy considered a unified monotonic approach to generalized linear fractional programming [13]. Benson presented a simplicial branch-and-bound duality-bounds algorithm by applying Lagrangian duality theory [6]. Jiao et al. gave a new range-reduction branch-and-bound algorithm for globally solving the sum-of-affine-ratios problem over the denominator outcome space [14]. By exploring a well-defined nonuniform mesh, Shen et al. solved an equivalent optimization problem and proposed a fully polynomial time approximation algorithm [15]. In the same year, Hu et al. proposed a new branch-and-bound algorithm for solving low-dimensional linear fractional programming [16]. Shen and Lu introduced a practicable regional division and reduction algorithm for minimizing the sum of linear fractional functions over a polyhedron [17]. Through a suitable transformation and linearization technique, Zhang and Wang proposed a new branch-and-bound algorithm with two reducing techniques to solve generalized linear fractional programming [18]. By adopting an exponent transformation technique, Jiao et al. proposed a three-level linear relaxation branch-and-bound algorithm to solve the generalized polynomial ratios problem with coefficients [19]. Based on an image space in which the objective function is easy to handle in a certain direction, Falk and Palocsay transformed the problem into an "image space" by introducing new variables, and then analyzed and solved the linear fractional program there [20]. Gao and Jin transformed the original problem into an equivalent bilinear programming problem, used the convex and concave envelopes of bilinear functions to determine a lower bound of the optimal value of the original problem, and then proposed a branch-and-bound algorithm [21]. By dividing the box where the decision variables are located and relaxing the denominator on each box, Ji et al. proposed a new deterministic global optimization algorithm [22]. Furthermore, according to [23,24], there are other algorithms that can be used to solve the LFP problem.
In this article, a new branch-and-bound algorithm based on branching the output-space is proposed for globally solving the LFP problem. To this end, an equivalent optimization problem (EOP) is presented. Next, the objective function and constraint functions of the equivalent problem are relaxed using four index sets (i.e., $I_i^+$, $I_i^-$, $J_i^+$, $J_i^-$) and the multiplication rules of real arithmetic. Based on this operation, a linear relaxation programming problem that provides a reliable lower bound for the original problem is constructed. Finally, a new branch-and-bound algorithm for the LFP problem is designed. Compared with the methods mentioned above (e.g., [9,10,11,12,13,14,15,17,18,23,24]), the contribution of this research is three-fold. First, the lower bound of the subproblem at each node can be obtained easily, solely by solving linear programs. Second, the performance of the algorithm depends on the difference between the number of decision variables $n$ and the number of ratios $p$. Third, the problem considered in this article is more general than those considered in [14,17,18], since we only require $e_i^T x+f_i\neq 0$ and do not need to convert $c_i^T x+d_i<0$ into $c_i^T x+d_i\ge 0$ for each $i$. However, the problems solved by our model must have non-negative decision variables, which is a limitation of the problem we study. Finally, computational results for problems with a large number of ratio terms are reported below to illustrate the feasibility and validity of the proposed algorithm.
This paper is organized as follows. In Section 2, the LFP problem is converted into the equivalent non-convex programming problem EOP. Section 3 shows how to construct a linear relaxation problem of LFP. In Section 4, we give the branching rules on a hyper-rectangle. In Section 5, an output-space branch-and-bound algorithm is presented and its convergence is established. Section 6 introduces test examples from the literature and gives the computational results and numerical analysis. Finally, the method of this paper is briefly reviewed, and its extension to multi-objective fractional programming is discussed.

2. The Equivalence Problem of LFP

In order to establish the equivalent problem, we introduce $p$ auxiliary variables and let $t_i=\frac{1}{e_i^T x+f_i}$, $i=1,2,\dots,p$. The upper and lower bounds of $t_i$ are denoted by $\overline{t}_i$ and $\underline{t}_i$, respectively. They are obtained by solving the following linear programming problems:
$$\underline{m}_i=\min_{x\in X}\ e_i^T x+f_i, \qquad \overline{m}_i=\max_{x\in X}\ e_i^T x+f_i.$$
So, we have
$$\frac{1}{\overline{m}_i}=\underline{t}_i \le t_i=\frac{1}{e_i^T x+f_i} \le \overline{t}_i=\frac{1}{\underline{m}_i}.$$
The hyper-rectangle of t can be denoted as follows:
$$H=[\underline{t},\overline{t}], \qquad \underline{t}=(\underline{t}_1,\underline{t}_2,\dots,\underline{t}_p)^T, \qquad \overline{t}=(\overline{t}_1,\overline{t}_2,\dots,\overline{t}_p)^T.$$
Similarly, for the sub-hyper-rectangle $H^k \subseteq H$ that will be used below, the following definitions are given:
$$H^k=[\underline{t}^k,\overline{t}^k], \qquad \underline{t}^k=(\underline{t}_1^k,\underline{t}_2^k,\dots,\underline{t}_p^k)^T, \qquad \overline{t}^k=(\overline{t}_1^k,\overline{t}_2^k,\dots,\overline{t}_p^k)^T.$$
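To make the initialization concrete, the following minimal Python sketch (the paper's own implementation is in MATLAB; scipy's linprog stands in for MATLAB's) computes $\underline{m}_i$ and $\overline{m}_i$ by the $2p$ linear programs above and assembles the initial rectangle $H$. It assumes every denominator is strictly positive over $X$, the case in which the bound $1/\overline{m}_i \le t_i \le 1/\underline{m}_i$ applies directly; the function name and data layout are illustrative choices, not from the paper.

```python
# Sketch of the Section 2 initialization: bound each denominator over X
# by two linear programs, then invert the bounds to get t_low, t_up.
import numpy as np
from scipy.optimize import linprog

def initial_rectangle(E, f, A, b):
    """E: p-by-n denominator coefficients, f: length-p constants;
    X = {x >= 0 : A x <= b}. Assumes e_i^T x + f_i > 0 over X."""
    p = E.shape[0]
    t_low, t_up = np.empty(p), np.empty(p)
    for i in range(p):
        # m_low_i = min_{x in X} e_i^T x + f_i  (linprog minimizes)
        lo = linprog(E[i], A_ub=A, b_ub=b, bounds=(0, None))
        # m_up_i = max_{x in X} e_i^T x + f_i = -min_{x in X} (-e_i^T x) + f_i
        hi = linprog(-E[i], A_ub=A, b_ub=b, bounds=(0, None))
        m_low, m_up = lo.fun + f[i], -hi.fun + f[i]
        t_low[i], t_up[i] = 1.0 / m_up, 1.0 / m_low
    return t_low, t_up
```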
Finally, the LFP problem can be translated into the following equivalent optimization problem:
$$(\mathrm{EOP}):\quad \min\ f(x,t)=\sum_{i=1}^{p}(c_i^T x+d_i)t_i \qquad \mathrm{s.t.}\quad (e_i^T x+f_i)t_i=1,\ i=1,2,\dots,p, \quad x\in X=\{x\in\mathbb{R}^n \mid Ax\le b,\ x\ge 0\}, \quad t\in H.$$
Theorem 1.
A feasible solution $x^*$ is a global optimal solution of the LFP problem if and only if the EOP problem attains its global optimal solution at $(x^*,t^*)$, where $t_i^*=\frac{1}{e_i^T x^*+f_i}$ for every $i=1,2,\dots,p$.
Proof of Theorem 1.
Suppose $x^*$ is a global optimal solution of problem LFP, and set $t_i^*=\frac{1}{e_i^T x^*+f_i}$, $i=1,2,\dots,p$. Then $(x^*,t^*)$ is a feasible solution of EOP with objective value $f(x^*)$. Let $(x,t)$ be any feasible solution of problem EOP. We have
$$t_i=\frac{1}{e_i^T x+f_i}, \quad i=1,2,\dots,p,$$
which means
$$f(x^*)=\sum_{i=1}^{p}\frac{c_i^T x^*+d_i}{e_i^T x^*+f_i}=\sum_{i=1}^{p}(c_i^T x^*+d_i)t_i^*.$$
Using the optimality of $x^*$,
$$\sum_{i=1}^{p}(c_i^T x^*+d_i)t_i^*=f(x^*,t^*)\le f(x,t)=\sum_{i=1}^{p}(c_i^T x+d_i)t_i.$$
Hence, since $x^*\in X$ and $t_i^*=\frac{1}{e_i^T x^*+f_i}$, $(x^*,t^*)$ is a global optimal solution of problem EOP.
On the other hand, suppose problem EOP has been solved and its global optimal solution $(x^*,t^*)$ obtained. By feasibility,
$$t_i^*=\frac{1}{e_i^T x^*+f_i}, \quad i=1,2,\dots,p,$$
and then we have
$$f(x^*)=\sum_{i=1}^{p}\frac{c_i^T x^*+d_i}{e_i^T x^*+f_i}=\sum_{i=1}^{p}(c_i^T x^*+d_i)t_i^*.$$
Let $t_i=\frac{1}{e_i^T x+f_i}$, $i=1,2,\dots,p$, for any feasible solution $x$ of LFP. Then $(x,t)$ is a feasible solution of problem EOP with objective value $f(x)$. According to the optimality of $(x^*,t^*)$ and the feasibility of $x$, we have
$$f(x)\ge\sum_{i=1}^{p}(c_i^T x^*+d_i)t_i^*=f(x^*).$$
Since $x^*\in X$, the above inequality shows that $x^*$ is a global optimal solution of problem LFP. Thus, the LFP problem is equivalent to EOP.  □

3. A New Linear Relaxation Technique

In this section, we show how to construct a linear relaxation programming problem (LRP) for problem LFP. In the following, for convenience of expression, denote:
$$I_i^+=\{j \mid e_{ij}>0,\ j=1,2,\dots,n\}, \qquad J_i^+=\{j \mid c_{ij}>0,\ j=1,2,\dots,n\},$$
$$I_i^-=\{j \mid e_{ij}<0,\ j=1,2,\dots,n\}, \qquad J_i^-=\{j \mid c_{ij}<0,\ j=1,2,\dots,n\},$$
$$\Phi_i(x,t)=(e_i^T x+f_i)t_i=\sum_{j=1}^{n}e_{ij}t_i x_j+f_i t_i, \qquad \Psi_i(x,t)=(c_i^T x+d_i)t_i=\sum_{j=1}^{n}c_{ij}t_i x_j+d_i t_i.$$
Then, based on the above discussion, we have
$$\Psi_i(x,t)=t_i\Big(\sum_{j=1}^{n}c_{ij}x_j+d_i\Big)=\sum_{j\in J_i^+}c_{ij}t_i x_j+\sum_{j\in J_i^-}c_{ij}t_i x_j+d_i t_i \ \ge\ \sum_{j\in J_i^+}c_{ij}\underline{t}_i x_j+\sum_{j\in J_i^-}c_{ij}\overline{t}_i x_j+d_i t_i=\underline{\Psi}_i(x,t), \qquad (1)$$
$$\Phi_i(x,t)=t_i\Big(\sum_{j=1}^{n}e_{ij}x_j+f_i\Big)=\sum_{j\in I_i^+}e_{ij}t_i x_j+\sum_{j\in I_i^-}e_{ij}t_i x_j+f_i t_i \ \ge\ \sum_{j\in I_i^+}e_{ij}\underline{t}_i x_j+\sum_{j\in I_i^-}e_{ij}\overline{t}_i x_j+f_i t_i=\underline{\Phi}_i(x,t),$$
$$\Phi_i(x,t)=t_i\Big(\sum_{j=1}^{n}e_{ij}x_j+f_i\Big)=\sum_{j\in I_i^+}e_{ij}t_i x_j+\sum_{j\in I_i^-}e_{ij}t_i x_j+f_i t_i \ \le\ \sum_{j\in I_i^+}e_{ij}\overline{t}_i x_j+\sum_{j\in I_i^-}e_{ij}\underline{t}_i x_j+f_i t_i=\overline{\Phi}_i(x,t). \qquad (2)$$
Obviously, $f(x,t)=\sum_{i=1}^{p}\Psi_i(x,t)\ge\sum_{i=1}^{p}\underline{\Psi}_i(x,t)=\underline{f}(x,t)$, so $\underline{f}(x,t)$ is a lower-bounding function of $f(x,t)$.
Finally, we obtain a linear relaxation programming problem LRP of problem EOP by loosening the feasible region of the equivalent problem:
$$(\mathrm{LRP}):\quad \min\ \underline{f}(x,t)=\sum_{i=1}^{p}\underline{\Psi}_i(x,t) \qquad \mathrm{s.t.}\quad \underline{\Phi}_i(x,t)\le 1,\ i=1,2,\dots,p, \quad \overline{\Phi}_i(x,t)\ge 1,\ i=1,2,\dots,p, \quad t\in H=\{t\in\mathbb{R}^p \mid \underline{t}_i\le t_i\le\overline{t}_i,\ i=1,2,\dots,p\}, \quad x\in X=\{x\in\mathbb{R}^n \mid Ax\le b,\ x\ge 0\}.$$
At the same time, the linear relaxation subproblem $\mathrm{LRP}^k$ of problem EOP on the sub-hyper-rectangle $H^k\subseteq H$ is:
$$(\mathrm{LRP}^k):\quad \min\ \underline{f}(x,t)=\sum_{i=1}^{p}\underline{\Psi}_i(x,t) \qquad \mathrm{s.t.}\quad \underline{\Phi}_i(x,t)\le 1,\ i=1,2,\dots,p, \quad \overline{\Phi}_i(x,t)\ge 1,\ i=1,2,\dots,p, \quad t\in H^k=\{t\in\mathbb{R}^p \mid \underline{t}_i^k\le t_i\le\overline{t}_i^k,\ i=1,2,\dots,p\}, \quad x\in X=\{x\in\mathbb{R}^n \mid Ax\le b,\ x\ge 0\}.$$
According to the above, when the algorithm reaches iteration $k$ we only need to solve problem $\mathrm{LRP}^k$, whose optimal value $v(\mathrm{LRP}^k)$ is a lower bound of the global optimal value $v(\mathrm{EOP}^k)$ of problem EOP on the rectangle $H^k\subseteq H$. The optimal value $v(\mathrm{LRP}^k)$ is also a valid lower bound of the global optimal value $v(\mathrm{LFP}^k)$ of the original problem LFP on $H^k$, i.e., $v(\mathrm{LRP}^k)\le v(\mathrm{EOP}^k)=v(\mathrm{LFP}^k)$.
Therefore, by solving problem $\mathrm{LRP}^k$ we obtain its optimal value, which is a lower bound of the global optimum of problem LFP on the rectangle $H^k$. The method for updating the upper bound is explained in detail in Remark 4.
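The following Python sketch assembles and solves the relaxation $\mathrm{LRP}^k$ exactly as constructed above, stacking the variables as $z=(x,t)\in\mathbb{R}^{n+p}$; the index sets $I_i^\pm$, $J_i^\pm$ enter implicitly through sign tests on the coefficient matrices. It is a schematic rendering under the same positive-denominator assumption as before, not the authors' MATLAB code.

```python
# Sketch of LRP^k: a single linear program in z = (x, t) over a box [t_low, t_up].
import numpy as np
from scipy.optimize import linprog

def solve_lrp(C, d, E, f, A, b, t_low, t_up):
    """C, E: p-by-n numerator/denominator coefficients; d, f: their constants."""
    p, n = C.shape
    m = A.shape[0]
    # objective sum_i underline{Psi}_i: t_low on J_i^+ (c_ij > 0), t_up on J_i^-
    obj_x = np.where(C > 0, C * t_low[:, None], C * t_up[:, None]).sum(axis=0)
    obj = np.concatenate([obj_x, d])          # d_i multiplies the variable t_i
    # underline{Phi}_i(x,t) <= 1 and overline{Phi}_i(x,t) >= 1 (negated below)
    Phi_lo = np.where(E > 0, E * t_low[:, None], E * t_up[:, None])
    Phi_up = np.where(E > 0, E * t_up[:, None], E * t_low[:, None])
    A_ub = np.block([
        [Phi_lo,  np.diag(f)],
        [-Phi_up, -np.diag(f)],
        [A,       np.zeros((m, p))],
    ])
    b_ub = np.concatenate([np.ones(p), -np.ones(p), b])
    bounds = [(0, None)] * n + list(zip(t_low, t_up))
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res   # res.fun = L(H^k); res.x[:n], res.x[n:] are (x^k, t^k)
```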

4. Branching Process

In order to facilitate branching, we adopt the idea of bisection and give an adaptive hyper-rectangle partition method that depends on $\omega$. Let $H^k=[\underline{t}^k,\overline{t}^k]\subseteq H^0=H$ denote the current hyper-rectangle to be divided; the corresponding optimal solution of problem $\mathrm{LFP}^k$ is denoted by $x^k$, and the corresponding optimal solution of the linear relaxation problem $\mathrm{LRP}^k$ by $(x^k,t^k)$. Obviously $x^k\in X$ and $t^k\in H^k$. The following dissection is performed on $H^k=[\underline{t}^k,\overline{t}^k]$:
(i): Calculate $\omega=\max\{(t_i^k-\underline{t}_i^k)(\overline{t}_i^k-t_i^k): i=1,2,\dots,p\}$. If $\omega=0$, choose $\mu$ with $\overline{t}_\mu^k-\underline{t}_\mu^k=\max\{\overline{t}_i^k-\underline{t}_i^k: i=1,2,\dots,p\}$ and set $t_\mu^k=\frac{\underline{t}_\mu^k+\overline{t}_\mu^k}{2}$; otherwise, find the first $t_j^k\in\arg\max\omega$ and let $t_\mu^k=t_j^k$.
(ii): Let $\hat{t}=(t_1^k,\dots,t_{\mu-1}^k,t_\mu^k,t_{\mu+1}^k,\dots,t_p^k)^T$. Using the point $\hat{t}$, the rectangle $H^k$ is divided into two sub-hyper-rectangles $H^{k_1}=[\underline{t}^{k_1},\overline{t}^{k_1}]$ and $H^{k_2}=[\underline{t}^{k_2},\overline{t}^{k_2}]$, which are, respectively:
$$H^{k_1}=\prod_{i=1}^{\mu-1}[\underline{t}_i^k,\overline{t}_i^k]\times[\underline{t}_\mu^k,t_\mu^k]\times\prod_{i=\mu+1}^{p}[\underline{t}_i^k,\overline{t}_i^k], \qquad H^{k_2}=\prod_{i=1}^{\mu-1}[\underline{t}_i^k,\overline{t}_i^k]\times[t_\mu^k,\overline{t}_\mu^k]\times\prod_{i=\mu+1}^{p}[\underline{t}_i^k,\overline{t}_i^k].$$
Remark 1.
When $\omega=0$, $t^k$ is located at a vertex of the hyper-rectangle $H^k$ (such as its lower-left or upper-right vertex), and we bisect the longest edge of the rectangle uniformly. When $\omega\neq 0$, the division depends on the position of $t^k$ in the hyper-rectangle $H^k$: the selected edge $\mu$ is as wide as possible, and $t_\mu^k$ is as close to the midpoint of the edge as possible.
Remark 2.
The advantage of this way of dividing the hyper-rectangle is that it increases the diversity of the segmentation, but to some extent it adds extra computation. Other rules may perform better.
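Reading selection rule (i) in the form consistent with Remark 1 (products $(t_i^k-\underline{t}_i^k)(\overline{t}_i^k-t_i^k)\ge 0$, so $\omega=0$ exactly when $t^k$ is a vertex), the branching step can be sketched as follows; the function name is an illustrative choice.

```python
# Sketch of the Section 4 branching rule: split H^k = [t_low, t_up] at the
# point selected through omega, where t_k is the t-part of the LRP^k solution.
import numpy as np

def branch(t_low, t_up, t_k):
    prod = (t_k - t_low) * (t_up - t_k)      # zero where t_k touches a face
    if prod.max() == 0.0:                    # omega = 0: t_k is a vertex of H^k
        mu = int(np.argmax(t_up - t_low))    # bisect the longest edge uniformly
        cut = 0.5 * (t_low[mu] + t_up[mu])
    else:                                    # omega != 0: cut through t_k itself
        mu = int(np.argmax(prod))            # first index attaining omega
        cut = t_k[mu]
    left_up, right_low = t_up.copy(), t_low.copy()
    left_up[mu] = cut
    right_low[mu] = cut
    return (t_low.copy(), left_up), (right_low, t_up.copy())
```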

5. Output-Space Branch-and-Bound Algorithm and Its Convergence

To allow a full description of the algorithm, when the algorithm reaches iteration $k$ we use the following notation: $H^k$ is the hyper-rectangle to be refined at the current iteration; $Q$ is a set of feasible solutions of LFP; $\Omega$ is the set of hyper-rectangles remaining after pruning; $U^k$ is the upper bound of the global optimal value of the LFP problem at iteration $k$; $L^k$ is the lower bound of the global optimal value of the LFP problem at iteration $k$; $L(H^k)$ denotes the optimal value of problem $\mathrm{LRP}^k$ on $H^k$, and $(x^k,t^k)$ is its corresponding optimal solution.
Using the above, a description of the output-space branch-and-bound algorithm for solving the problem LFP is as follows.
Step 1. Set the tolerance $\epsilon>0$. Construct the initial hyper-rectangle $H^0=H=[\underline{t},\overline{t}]$. Solve the linear programming problem $\mathrm{LRP}^0$ on the hyper-rectangle $H^0$; the corresponding optimal solution and optimal value are recorded as $(x^0,t^0)$ and $L(H^0)$, respectively. Then $L^0=L(H^0)$ is the initial lower bound of the global optimal value of LFP, and the initial upper bound is $U^0=f(x^0)$. If $U^0-L^0\le\epsilon$, then stop: an $\epsilon$-global optimal solution $x^*=x^0$ of problem LFP has been found. Otherwise, set $\Omega=\{H^0\}$, $F=\emptyset$, set the initial iteration number $k=1$, and go to Step 2.
Step 2. If $U^k-L^k\le\epsilon$, then stop the iteration and output the current global optimal solution $x^*$ of the LFP problem and the global optimal value $f(x^*)$; otherwise, go to Step 3.
Step 3. Select, in $\Omega$, the hyper-rectangle $H^k$ that corresponds to the current lower bound $L^k$, i.e., $L^k=L(H^k)$.
Step 4. Using the branching process of Section 4, divide $H^k$ into two sub-rectangles $H^{k_1}$ and $H^{k_2}$ satisfying $H^{k_1}\cup H^{k_2}=H^k$, and solve the corresponding subproblems $\mathrm{LRP}^{k_i}$. For every $L(H^{k_i})<U^k$, set $F=F\cup\{H^{k_i}\}$ and $Q=Q\cup\{x^i\}$ ($i\in\{1,2\}$). If $F=\emptyset$, go to Step 3. Otherwise, set $\Omega=(\Omega\setminus\{H^k\})\cup F$ and continue.
Step 5. Let $U^k=\min\{U^k,\min\{f(x): x\in Q\}\}$. If $U^k=\min\{f(x): x\in Q\}$, the current best solution is $x^*\in\arg\min\{f(x): x\in Q\}$. Let $L^k=\min\{L(H): H\in\Omega\}$. Set $k:=k+1$, $F=\emptyset$, $Q=\emptyset$, and go to Step 2.
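Putting the pieces together, a condensed Python sketch of Steps 1–5 follows. It reuses the helper functions sketched in Sections 2–4 (initial_rectangle, solve_lrp, branch), keeps $\Omega$ as a plain list of nodes, and treats $Q$ implicitly through the upper-bound update; it is a simplified illustration of the scheme, not the authors' MATLAB implementation.

```python
# A condensed sketch of the OSBBA loop (Steps 1-5).
import numpy as np

def f_val(C, d, E, f, x):
    return float(np.sum((C @ x + d) / (E @ x + f)))   # original LFP objective

def osbba(C, d, E, f, A, b, eps=1e-6, max_iter=100000):
    n = C.shape[1]
    t_low, t_up = initial_rectangle(E, f, A, b)       # Step 1: H^0 and LRP^0
    res = solve_lrp(C, d, E, f, A, b, t_low, t_up)
    x_best = res.x[:n]
    U = f_val(C, d, E, f, x_best)                     # initial upper bound
    nodes = [(res.fun, t_low, t_up, res.x[n:])]       # (L(H), box, t^k)
    for _ in range(max_iter):
        nodes.sort(key=lambda nd: nd[0])              # Step 3: smallest lower bound
        L, lo, up, t_k = nodes.pop(0)
        if U - L <= eps:                              # Step 2: eps-optimal, stop
            break
        for box in branch(lo, up, t_k):               # Step 4: bisect H^k
            res = solve_lrp(C, d, E, f, A, b, *box)
            if res.success and res.fun < U:           # prune boxes with L(H) >= U
                fx = f_val(C, d, E, f, res.x[:n])     # Step 5: update upper bound
                if fx < U:
                    U, x_best = fx, res.x[:n]
                nodes.append((res.fun, box[0], box[1], res.x[n:]))
        if not nodes:
            break
    return x_best, U
```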
Remark 3.
The branching target of our branch-and-bound algorithm is the $p$-dimensional output-space, so our algorithm is called OSBBA.
Remark 4.
It can be seen from Steps 4 and 5 that the number of elements in $Q$ does not exceed two at each iteration, so only two function values are computed in Step 5 to update the upper bound.
Remark 5.
In Step 4, after each branching we save into $\Omega$ only those hyper-rectangles $H^{k_i}$ with $L(H^{k_i})<U^k$, which implements the pruning operation of the branch-and-bound algorithm.
Remark 6.
The convergence rate of the algorithm OSBBA is related to the optimality tolerance and the initial hyper-rectangle $H^0$. It can be seen from Theorem 5 below that the convergence rate of OSBBA is proportional to the tolerance $\epsilon$ and inversely proportional to the diameter of the initial hyper-rectangle $H^0$. In general, the tolerance is given in advance, and the convergence rate mainly depends on the diameter of the initial hyper-rectangle $H^0$.
Theorem 2.
Let $\varepsilon_i=\overline{t}_i-\underline{t}_i$ for each $i\in\{1,2,\dots,p\}$. For $x\in X$, $t\in H$, if $\varepsilon_i\to 0$, we have $\Phi_i(x,t)-\underline{\Phi}_i(x,t)\to 0$, $\overline{\Phi}_i(x,t)-\Phi_i(x,t)\to 0$, $\Psi_i(x,t)-\underline{\Psi}_i(x,t)\to 0$, and $f(x)-\underline{f}(x,t)\to 0$.
Proof of Theorem 2.
For every $x\in X$, $t\in H$, by merging (1) and (2) we have:
$$\begin{aligned} \Phi_i(x,t)-\underline{\Phi}_i(x,t) &= \sum_{j=1}^{n}e_{ij}t_i x_j+f_i t_i-\Big(\sum_{j\in I_i^+}e_{ij}\underline{t}_i x_j+\sum_{j\in I_i^-}e_{ij}\overline{t}_i x_j+f_i t_i\Big) \\ &= \sum_{j=1}^{n}e_{ij}t_i x_j-\Big(\sum_{j\in I_i^+}e_{ij}\underline{t}_i x_j+\sum_{j\in I_i^-}e_{ij}\overline{t}_i x_j\Big) \\ &\le \sum_{j\in I_i^+}e_{ij}|t_i-\underline{t}_i|x_j+\sum_{j\in I_i^-}|e_{ij}||\overline{t}_i-t_i|x_j \\ &\le \sum_{j\in I_i^+}e_{ij}|\overline{t}_i-\underline{t}_i|x_j+\sum_{j\in I_i^-}|e_{ij}||\overline{t}_i-\underline{t}_i|x_j = |\overline{t}_i-\underline{t}_i|\cdot\sum_{j=1}^{n}|e_{ij}|x_j \le N_i\cdot|\overline{t}_i-\underline{t}_i|, \end{aligned}$$
$$\begin{aligned} \overline{\Phi}_i(x,t)-\Phi_i(x,t) &= \sum_{j\in I_i^-}e_{ij}\underline{t}_i x_j+\sum_{j\in I_i^+}e_{ij}\overline{t}_i x_j+f_i t_i-\Big(\sum_{j=1}^{n}e_{ij}t_i x_j+f_i t_i\Big) \\ &\le \sum_{j\in I_i^-}|e_{ij}||t_i-\underline{t}_i|x_j+\sum_{j\in I_i^+}e_{ij}|\overline{t}_i-t_i|x_j \\ &\le |\overline{t}_i-\underline{t}_i|\cdot\sum_{j=1}^{n}|e_{ij}|x_j \le N_i\cdot|\overline{t}_i-\underline{t}_i|, \end{aligned}$$
and
$$\begin{aligned} \Psi_i(x,t)-\underline{\Psi}_i(x,t) &= \sum_{j=1}^{n}c_{ij}t_i x_j+d_i t_i-\Big(\sum_{j\in J_i^+}c_{ij}\underline{t}_i x_j+\sum_{j\in J_i^-}c_{ij}\overline{t}_i x_j+d_i t_i\Big) \\ &\le \sum_{j\in J_i^+}c_{ij}|t_i-\underline{t}_i|x_j+\sum_{j\in J_i^-}|c_{ij}||\overline{t}_i-t_i|x_j \\ &\le |\overline{t}_i-\underline{t}_i|\cdot\sum_{j=1}^{n}|c_{ij}|x_j \le N_i\cdot|\overline{t}_i-\underline{t}_i|, \end{aligned} \qquad (3)$$
where $N_i=\max\{\sum_{j=1}^{n}|e_{ij}|\overline{x}_j,\ \sum_{j=1}^{n}|c_{ij}|\overline{x}_j\}$, $i=1,\dots,p$. On the one hand, $X$ is bounded and $0\le x_j\le\overline{x}_j$ for each $j=1,2,\dots,n$. Thus, as $\varepsilon_i\to 0$, $N_i\cdot|\overline{t}_i-\underline{t}_i|\to 0$, and therefore $\Phi_i(x,t)-\underline{\Phi}_i(x,t)\to 0$, $\overline{\Phi}_i(x,t)-\Phi_i(x,t)\to 0$, and $\Psi_i(x,t)-\underline{\Psi}_i(x,t)\to 0$.
On the other hand,
$$f(x)-\underline{f}(x,t)=\sum_{i=1}^{p}\big[\Psi_i(x,t)-\underline{\Psi}_i(x,t)\big]. \qquad (4)$$
Combining Inequalities (3) and Equation (4), $f(x)-\underline{f}(x,t)\to 0$. Therefore, the theorem holds.  □
According to Theorem 2, as the algorithm proceeds, the hyper-rectangle of $t$ is gradually refined and the relaxed feasible region progressively approaches the original feasible region.
Theorem 3.
(a) If the algorithm terminates within finitely many iterations, a global optimal solution of LFP is found.
(b) If the algorithm generates an infinite sequence in the iterative process, then any accumulation point of the infinite sequence $\{x^k\}$ is a global optimal solution of problem LFP.
Proof of Theorem 3.
(a) If the algorithm is finite, assume it stops at the $k$th iteration, $k>1$. From the termination rule of Step 2, we know that $U^k-L^k\le\epsilon$, which implies that
$$f(x^k)-L^k\le\epsilon. \qquad (5)$$
Assuming that the global optimal solution is $x^*$, we know that
$$U^k=f(x^k)\ge f(x^*)\ge L^k. \qquad (6)$$
Hence, combining Inequalities (5) and (6), we have
$$f(x^k)+\epsilon\ge f(x^*)+\epsilon\ge L^k+\epsilon\ge f(x^k). \qquad (7)$$
That is, $f(x^*)\le f(x^k)\le f(x^*)+\epsilon$, so $x^k$ is an $\epsilon$-global optimal solution, and part (a) is proven.
(b) Suppose the iteration of the algorithm is infinite. In this process, an infinite sequence $\{x^k\}$ of feasible solutions of problem LFP is generated by solving the problems $\mathrm{LRP}^k$; the corresponding sequence of feasible solutions of the linear relaxation problems is $\{(x^k,t^k)\}$. According to Steps 3–5 of the algorithm, we have
$$L^k=\underline{f}(x^k,t^k)\le f(x^*)\le f(x^k)=U^k, \quad k=1,2,\dots. \qquad (8)$$
Because the sequence $\{L^k=\underline{f}(x^k,t^k)\}$ is nondecreasing and bounded above, and $\{U^k=f(x^k)\}$ is nonincreasing and bounded below, both sequences converge. Taking limits on both sides of (8), we have
$$\lim_{k\to\infty}\underline{f}(x^k,t^k)\le f(x^*)\le\lim_{k\to\infty}f(x^k). \qquad (9)$$
Let $L=\lim_{k\to\infty}\underline{f}(x^k,t^k)$ and $U=\lim_{k\to\infty}f(x^k)$; then Formula (9) becomes
$$L\le f(x^*)\le U. \qquad (10)$$
Without loss of generality, assume that the rectangle sequence $\{H^k=[\underline{t}^k,\overline{t}^k]\}$ satisfies $t^k\in H^k$ and $H^{k+1}\subseteq H^k$. In our algorithm, the rectangles are divided continually into two parts, so $\lim_{k\to\infty}H^k=\{t^*\}$; in this process, a sequence $\{t^k\}$ is generated with $\lim_{k\to\infty}t^k=t^*$, together with a sequence $\{x^k\}$ satisfying $\lim_{k\to\infty}x^k=x^*$. By the continuity of the function $f(x)$ and Formula (10), any accumulation point $x^*$ of the sequence $\{x^k\}$ is a global optimal solution of the LFP problem.  □
From Theorem 3, we know that the algorithm in this paper is convergent. Next, we use Theorems 4 and 5 to show that the convergence rate of our algorithm is related to the size of $p$. For the detailed proof of Theorem 4, see [25]; the other concepts in the theorem are also taken from [25], and we encourage readers to consult [25] for details.
As the sub-hyper-rectangles obtained by our branching method are not necessarily congruent, take $\delta(H)=\max\{\delta(H^l): l\in\{1,2,\dots,s\}\}$, where $s$ is defined below. The definition of $\delta(H^l)$ is the same as that of Notation 1 in [25]; it denotes the diameter of the hyper-rectangle $H^l$. Therefore, $\delta(H)$ is the maximum diameter of the $s$ hyper-rectangles. In order to connect with the content of this paper, we adjust the relevant symbols and restate the result.
Theorem 4.
Consider the big cube small cube algorithm with a bounding operation that has a rate of convergence $q\ge 1$. Furthermore, assume a feasible hyper-rectangle $H$ and constants $\epsilon$, $C>0$ as before. Moreover, assume a branching process that splits the selected hyper-rectangle along each side, i.e., into $s=2^r$ smaller hyper-rectangles. Then the worst-case number of iterations of the big cube small cube method can be bounded from above by
$$\sum_{v=0}^{z}2^{r\cdot v}, \quad \text{where } z=\Big\lceil\log_2\frac{\delta(H)}{(\epsilon/C)^{1/q}}\Big\rceil, \quad \delta(H)=\max\{\delta(H^l): l\in\{1,2,\dots,s\}\}. \qquad (11)$$
Proof of Theorem 4.
The proof is similar to that of Theorem 2 in [25] and is thus omitted.  □
In Theorem 4, $r$ represents the spatial dimension of the hyper-rectangle to be divided. At the same time, Tables 1 and 2 in [25] show that $q=1$ is the worst case, that is, the case in which the algorithm must subdivide hyper-rectangles the most times during the iteration. For convenience of discussion, we assume $q=1$ and give Theorem 5 to show that the convergence rate of our algorithm is related to the size of $p$.
Theorem 5.
For the algorithm OSBBA, assume a feasible hyper-rectangle $H_p$, a fixed positive constant $C_p$, and an accuracy $\epsilon$. In addition, assume that the branching process eventually divides the hyper-rectangle into $s=2^p$ small hyper-rectangles. Then, in the worst case, the number of iterations of the OSBBA algorithm when dividing the hyper-rectangle $H_p$ can be expressed by the following formula:
$$\sum_{v=0}^{z_p}2^{p\cdot v}, \quad \text{where } z_p=\Big\lceil\log_2\frac{C_p\cdot\delta(H_p)}{\epsilon}\Big\rceil, \quad \delta(H_p)=\max\{\delta(H_p^l): l\in\{1,2,\dots,s\}\}. \qquad (12)$$
We denote the convergence rate of the algorithm OSBBA by $O(p)$.
Proof of Theorem 5.
Setting $r=p$, $C=C_p$, $q=1$, $z=z_p$ and $H=H_p$ in Theorem 4, the proof is similar to that of Theorem 4; the reader may refer to [25].  □
In addition, the algorithms in [18,26,27,28,29] subdivide the $n$-dimensional hyper-rectangle $H_n$. Similar to Theorem 4, when they divide the hyper-rectangle $H_n$, the worst-case number of iterations can also be expressed by the following formula:
$$\sum_{v=0}^{z_n}2^{n\cdot v}, \quad \text{where } z_n=\Big\lceil\log_2\frac{\delta(H_n)}{(\epsilon/C_n)^{1/q_n}}\Big\rceil, \quad \delta(H_n)=\max\{\delta(H_n^l): l\in\{1,2,\dots,s\}\}, \qquad (13)$$
where $n$, $C_n$, $q_n$, $z_n$ and $H_n$ correspond to $r$, $C$, $q$, $z$ and $H$ in (11). We record the convergence rate of the algorithms in [18,26,27,28,29] as $O(n)$.
By means of Formulas (12) and (13), when $p\ll n$ the following conclusions are drawn:
(i): If $z_p\le z_n$, then $\sum_{v=0}^{z_p}2^{p\cdot v}\le\sum_{v=0}^{z_n}2^{p\cdot v}\le\sum_{v=0}^{z_n}2^{n\cdot v}$.
(ii): If $z_p>z_n$, there must be a positive number $N\ge\frac{z_p-z_n}{p}+1$ such that $p<\frac{z_p-z_n}{p}<N$ holds, which means that when $N\le n$ we have $p<\frac{z_p-z_n}{p}<N\le n$, and $p z_p\le n z_n$ also holds; then:
$$\sum_{v=0}^{z_n}2^{n\cdot v}-\sum_{v=0}^{z_p}2^{p\cdot v}=\frac{2^{n(z_n+1)}-1}{2^n-1}-\frac{2^{p(z_p+1)}-1}{2^p-1}=\frac{(2^{n(z_n+1)}-1)(2^p-1)-(2^{p(z_p+1)}-1)(2^n-1)}{(2^n-1)(2^p-1)}=\frac{2(2^{n+p-1}-1)(2^{n z_n}-2^{p z_p})+2^n-2^p}{(2^n-1)(2^p-1)}\ge 0.$$
Both conclusions (i) and (ii) show that when $p\ll n$, the following holds: $\sum_{v=0}^{z_p}2^{p\cdot v}\le\sum_{v=0}^{z_n}2^{n\cdot v}$.
Remark 7.
In Formula (12) of Theorem 5, $q=1$, while $q_n$ in Formula (13) is left unspecified, which means that $O(p)$ is compared with $O(n)$ in the case of slowest convergence; moreover, when $p\ll n$, one of cases (i) and (ii) always applies, which indicates clearly that $O(p)\le O(n)$.
Both $O(p)$ and $O(n)$ grow exponentially, but $p$ in $O(p)$ is generally not more than 10 and $p\ll n$, so our algorithm OSBBA has an advantage in solving large-scale problems in the case $p\le 10$ and $p\ll n$. The experimental analysis of several large-scale random examples below will illustrate this again.
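A toy computation makes the gap tangible. Evaluating the worst-case bounds (12) and (13) with the same value $z_p=z_n=3$ (an arbitrary, purely illustrative choice, not a value from the paper) already separates branching in $\mathbb{R}^p$ from branching in $\mathbb{R}^n$ by roughly forty orders of magnitude:

```python
# Worst-case node counts sum_{v=0}^{z} 2^{r v} from (12)/(13) for r = p vs r = n.
def worst_case_iters(r, z):
    return sum(2 ** (r * v) for v in range(z + 1))

print(worst_case_iters(r=5, z=3))    # p = 5 ratios:  33825
print(worst_case_iters(r=50, z=3))   # n = 50 variables: about 1.4e45
```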

6. Numerical Examples

We now give several examples and random instances to demonstrate the validity of the branch-and-bound algorithm of this paper.
We coded the algorithm in MATLAB 2017a and ran the tests on a computer with an Intel(R) Core(TM) i7-4790S 3.20 GHz processor and 4 GB of RAM under the Microsoft Windows 7 operating system. The LRPs were solved with the simplex method of the linprog command in MATLAB 2017a.
In Tables 1–9, the symbols in the table headers are: $x^*$, the optimal solution of the LFP problem; $f(x^*)$, the optimal value of the objective function; Iter, the number of iterations for Problems 1–11; Ave.Iter, the average number of iterations for Problems 12–13; $\epsilon$, the tolerance; Time, the CPU running time for Problems 1–11; Ave.Time, the average CPU running time for Problems 12–13; $p$, the number of linear ratios in the objective function; $m$, the number of linear constraints; $n$, the dimension of the decision variable; SR, the success rate of the algorithm. When Ave.Time or Ave.Iter shows "–", the algorithm failed on that problem.
Example 1 ([15,18]).
$$\min\ \frac{-x_1+2x_2+2}{3x_1-4x_2+5}+\frac{4x_1-3x_2+4}{2x_1+x_2+3} \qquad \mathrm{s.t.}\quad x_1+x_2\le 1.5,\quad x_1-x_2\le 0,\quad 0\le x_1\le 1,\quad 0\le x_2\le 1.$$
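For illustration, Example 1 can be fed to the osbba sketch from Section 5 in the data layout used there (a hypothetical run; the box constraints $0\le x_1,x_2\le 1$ are folded into $Ax\le b$):

```python
# Example 1 in the (C, d, E, f, A, b) layout of the earlier sketches.
import numpy as np

C = np.array([[-1.0, 2.0], [4.0, -3.0]]); d = np.array([2.0, 4.0])
E = np.array([[3.0, -4.0], [2.0, 1.0]]);  f = np.array([5.0, 3.0])
A = np.array([[1.0, 1.0], [1.0, -1.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([1.5, 0.0, 1.0, 1.0])

x_star, f_star = osbba(C, d, E, f, A, b, eps=1e-8)
print(x_star, f_star)   # expected: roughly (0.0, 0.2839) and 1.6232
```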
Example 2 ([18]).
$$\min\ 0.9\times\frac{-x_1+2x_2+2}{3x_1-4x_2+5}+(-0.1)\times\frac{4x_1-3x_2+4}{2x_1+x_2+3} \qquad \mathrm{s.t.}\quad x_1+x_2\le 1.5,\quad x_1-x_2\le 0,\quad 0\le x_1\le 1,\quad 0\le x_2\le 1.$$
Example 3 ([17,21]).
$$\min\ \frac{3x_1+5x_2+3x_3+50}{3x_1+4x_2+5x_3+50}+\frac{3x_1+4x_2+50}{4x_1+3x_2+2x_3+50}+\frac{4x_1+2x_2+4x_3+50}{5x_1+4x_2+3x_3+50} \qquad \mathrm{s.t.}\quad 2x_1+x_2+5x_3\le 10,\quad x_1+6x_2+2x_3\le 10,\quad 9x_1+7x_2+3x_3\ge 10,\quad x_1,x_2,x_3\ge 0.$$
Example 4 ([17,22]).
$$\min\ -\Big(\frac{4x_1+3x_2+3x_3+50}{3x_2+3x_3+50}+\frac{3x_1+4x_3+50}{4x_1+4x_2+5x_3+50}+\frac{x_1+2x_2+5x_3+50}{x_1+5x_2+5x_3+50}+\frac{x_1+2x_2+4x_3+50}{5x_2+4x_3+50}\Big) \qquad \mathrm{s.t.}\quad 2x_1+x_2+5x_3\le 10,\quad x_1+6x_2+3x_3\le 10,\quad 5x_1+9x_2+2x_3\le 10,\quad 9x_1+7x_2+3x_3\le 10,\quad x_1\ge 0,\ x_2\ge 0,\ x_3\ge 0.$$
Example 5 ([17,21,24]).
$$\min\ \frac{4x_1+3x_2+3x_3+50}{3x_2+3x_3+50}+\frac{3x_1+4x_3+50}{4x_1+4x_2+5x_3+50}+\frac{x_1+2x_2+4x_3+50}{x_1+5x_2+5x_3+50}+\frac{x_1+2x_2+4x_3+50}{5x_2+4x_3+50} \qquad \mathrm{s.t.}\quad 2x_1+x_2+5x_3\le 10,\quad x_1+6x_2+2x_3\le 10,\quad 9x_1+7x_2+3x_3\ge 10,\quad x_1,x_2,x_3\ge 0.$$
Example 6 ([17,22]).
$$\min\ -\Big(\frac{3x_1+5x_2+3x_3+50}{3x_1+4x_2+5x_3+50}+\frac{3x_1+4x_2+50}{4x_1+3x_2+2x_3+50}+\frac{4x_1+2x_2+4x_3+50}{5x_1+4x_2+3x_3+50}\Big) \qquad \mathrm{s.t.}\quad 6x_1+3x_2+3x_3\le 10,\quad 10x_1+3x_2+8x_3\le 10,\quad x_1,x_2,x_3\ge 0.$$
Example 7 ([17,21]).
$$\min\ \frac{37x_1+73x_2+13}{13x_1+13x_2+13}+\frac{63x_1-18x_2+39}{13x_1+26x_2+13} \qquad \mathrm{s.t.}\quad 5x_1-3x_2=3,\quad 1.5\le x_1\le 3.$$
Example 8 ([12,14]).
$$\max\ \frac{4x_1+3x_2+3x_3+50}{3x_2+3x_3+50}+\frac{3x_1+4x_2+50}{4x_1+4x_2+5x_3+50}+\frac{x_1+2x_2+5x_3+50}{x_1+5x_2+5x_3+50}+\frac{x_1+2x_2+4x_3+50}{5x_2+4x_3+50} \qquad \mathrm{s.t.}\quad 2x_1+x_2+5x_3\le 10,\quad x_1+6x_2+3x_3\le 10,\quad 5x_1+9x_2+2x_3\le 10,\quad 9x_1+7x_2+3x_3\le 10,\quad x_1\ge 0,\ x_2\ge 0,\ x_3\ge 0.$$
Example 9 ([12,14,23]).
$$\max\ \frac{37x_1+73x_2+13}{13x_1+13x_2+13}+\frac{63x_1-18x_2+39}{-13x_1-26x_2-13}+\frac{13x_1+13x_2+13}{63x_1-18x_2+39}+\frac{13x_1+26x_2+13}{-37x_1-73x_2-13} \qquad \mathrm{s.t.}\quad 5x_1-3x_2=3,\quad 1.5\le x_1\le 3.$$
Example 10 ([14,23,24]).
$$\max\ \frac{4x_1+3x_2+3x_3+50}{3x_2+3x_3+50}+\frac{3x_1+4x_3+50}{4x_1+4x_2+5x_3+50}+\frac{x_1+2x_2+5x_3+50}{x_1+5x_2+5x_3+50}+\frac{x_1+2x_2+4x_3+50}{5x_2+4x_3+50} \qquad \mathrm{s.t.}\quad 2x_1+x_2+5x_3\le 10,\quad x_1+6x_2+2x_3\le 10,\quad 9x_1+7x_2+3x_3\ge 10,\quad x_1\ge 0,\ x_2\ge 0,\ x_3\ge 0.$$
Example 11 ([18]).
$$\max\ \frac{3x_1+4x_2+50}{3x_1+5x_2+4x_3+50}-\frac{3x_1+5x_2+3x_3+50}{5x_1+5x_2+4x_3+50}-\frac{x_1+2x_2+4x_3+50}{5x_2+4x_3+50}-\frac{4x_1+3x_2+3x_3+50}{3x_2+3x_3+50} \qquad \mathrm{s.t.}\quad 6x_1+3x_2+3x_3\le 10,\quad 10x_1+3x_2+8x_3\le 10,\quad x_1\ge 0,\ x_2\ge 0,\ x_3\ge 0.$$
As can be seen from Table 1, our algorithm accurately obtains the global optimal solutions of these 11 low-dimensional examples, which shows the effectiveness and feasibility of the algorithm. However, compared with the other algorithms in the literature, the performance of our algorithm on these examples is relatively poor. This is because our method of constructing the lower bound is simple and easy to implement, the branching operation is performed in the $p$-dimensional output-space, and the algorithm has no hyper-rectangle reduction technique, which makes the approximation weaker when solving low-dimensional problems. We also note that in these 11 examples the number of ratios $p$ is not much smaller than the dimension $n$ of the decision variable, while our algorithm is designed for the case in which $p$ is much smaller than $n$; this is why our algorithm is not competitive on these examples. With the continuing progress of computers, the gap between our algorithm and other methods on these 11 low-dimensional examples can be bridged, and the needs of practice mainly concern high-dimensional problems with $p\ll n$. Therefore, Examples 1–11 serve only to illustrate the effectiveness, feasibility, and convergence of our algorithm. When our algorithm is applied to higher dimensional problems, its performance gradually improves, as can be seen from the numerical results for Examples 12 and 13 in Tables 2–9.
Example 12.
$$\min\ \sum_{i=1}^{p}\frac{\sum_{j=1}^{n}c_{ij}x_j+d_i}{\sum_{j=1}^{n}e_{ij}x_j+f_i} \qquad \mathrm{s.t.}\quad \sum_{j=1}^{n}a_{qj}x_j\le b_q,\ q=1,2,\dots,m,\quad x_j\ge 0,\ j=1,2,\dots,n,$$
where $p$ is a positive integer; $c_{ij}$, $e_{ij}$, $a_{qj}$ are randomly selected in the interval [0,1]; and $b_q=1$ for all $q$. All constant terms of the denominators and numerators are the same number, randomly generated in [1,100]. This agrees with the way random instances are generated in [18]. First, when the dimension is not more than 500, we generate 10 random instances for each group $(p,m,n)$, solve each instance with both the algorithm OSBBA and the algorithm in [18], and record the average number of iterations and the average CPU running time of the 10 instances in Table 2. Second, when the dimension $n$ is not less than 1000, note that the algorithm in [18] needs to solve $2n$ linear programming problems when determining the initial hyper-rectangle, and the search space of each linear program has at least a thousand dimensions, which is very time-consuming; already at $(p,m,n)=(10,200,500)$ its CPU time is close to 1200 s. Therefore, when the dimension $n$ is not less than 1000, we generate only five random instances and declare an algorithm to have failed when its computing time exceeds 1200 s. Besides the average number of iterations and the average CPU running time, the success rate over the five high-dimensional instances is also recorded, as presented in Table 3.
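A sketch of this instance generator, under the stated conventions ($c_{ij},e_{ij},a_{qj}\sim U[0,1]$, $b_q=1$, one shared constant from [1,100] for all $d_i$ and $f_i$); the function name and the seeding scheme are illustrative choices:

```python
# Random Example 12 instance in the (C, d, E, f, A, b) layout used above.
import numpy as np

def random_example_12(p, m, n, seed=0):
    rng = np.random.default_rng(seed)
    C = rng.uniform(0.0, 1.0, (p, n))
    E = rng.uniform(0.0, 1.0, (p, n))
    A = rng.uniform(0.0, 1.0, (m, n))
    b = np.ones(m)
    const = rng.uniform(1.0, 100.0)   # same constant for all d_i and f_i
    d = np.full(p, const)
    f = np.full(p, const)
    return C, d, E, f, A, b
```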
First, comparing the lower-bounding subproblem in [18] with that of our algorithm, the lower-bounding subproblem of the algorithm OSBBA only uses the information of the upper and lower bounds of the denominators of the $p$ ratios, whereas the construction in [18] also uses the upper and lower bounds of the decision variables, which requires solving $2n$ linear programming problems at the initial iterative step. Compared with the method in [18], the algorithm OSBBA only needs to solve $2p$ linear programming problems in the initialization stage and does not need to compute bounds on any decision variable. It can be seen that when $p$ is much less than $n$, the method in [18] spends a lot of time on these $2n$ linear programming problems. Moreover, its number of branches is often particularly large once the number of iterations exceeds 1, so that a large number of child nodes are produced on the branch-and-bound tree, which occupies a large amount of computer memory and takes a lot of time. The behavior of our algorithm OSBBA is the opposite. In real life, the size of $p$ is usually not greater than 10; therefore, the number of subproblems to be solved during the branching iterations is usually small, and, compared with the method in [18], the amount of computer memory occupied is not very large.
Second, from the results in Table 2, we can see that the computational performance of [18] in solving small-scale problems is better than that of our algorithm OSBBA. However, when the dimension of the problem rises above 100, its computational performance gradually weakens and falls behind that of OSBBA. The computational performance of the algorithm OSBBA is closely related to the size of $p$: the smaller $p$ is, the shorter the computing time. For the algorithm in [18], performance depends strongly on the size of $n$: the larger $n$, the more time is consumed. It is undeniable that the method in [18] has some advantages in solving small-scale problems. However, for large-scale problems, Table 3 shows that the algorithm OSBBA is always superior to the algorithm in [18]. In particular, when the dimension exceeds 500, the success rate of our algorithm in finding the global optimal solution within 1200 s is higher than that of [18]. This is the advantage of the algorithm OSBBA.
In addition, for Example 12, we also use the same test method as [18] to compare the algorithm OSBBA with the internal solver BMIBNB of the MATLAB toolbox YALMIP [26]; here we only record the average CPU running time and the success rate of the two algorithms, displayed in Tables 4 and 5.
As can be seen from Tables 4 and 5, BMIBNB is more efficient than OSBBA in solving small-scale problems, but it is sensitive to the size $n$ of the decision variable; in particular, when $n$ exceeds 100, its CPU time increases sharply. The algorithm OSBBA is less affected by $n$, but for small-scale problems its computational performance is very sensitive to the number $p$ of linear ratios. For large-scale problems, Table 5 shows results similar to those of Table 3.
Example 13.
$$\min\ \sum_{i=1}^{p}\frac{\sum_{j=1}^{n}c_{ij}x_j+d_i}{\sum_{j=1}^{n}e_{ij}x_j+f_i} \qquad \mathrm{s.t.}\quad \sum_{j=1}^{n}a_{qj}x_j\le b_q,\ q=1,2,\dots,m,\quad 0\le x_j\le\overline{x}_j,\ j=1,2,\dots,n.$$
To further illustrate the advantages of the algorithm OSBBA, a large number of numerical experiments were carried out for Example 13, comparing the algorithm OSBBA with the commercial software package BARON [27]. From what is known about BARON, its branching operation is also carried out in the $n$-dimensional space; similar to [18], we can predict that BARON becomes quite time-consuming as the dimension increases. To simplify the comparison, the constant terms of the numerators and denominators are set to 100 (i.e., $d_i=f_i=100$) so that BARON runs successfully. Next, we give an upper bound $\overline{x}_j$ for each decision variable and randomly select it, together with $c_{ij}$, $e_{ij}$, $a_{qj}$ and $b_q$, from the interval [0,10] to form a random instance of Example 13.
For fairness, we set the tolerance of both the algorithm OSBBA and BARON to $10^{-3}$ (this is because the accuracy of the internal setting of the package BARON is $10^{-3}$ and we are unable to adjust it). For each group $(m,n,p)$, we randomly generate 10 instances, solve each instance with the algorithm OSBBA and the commercial software package BARON, and record the average number of iterations and the average CPU running time of the 10 instances in Tables 6–9.
As we can see from Table 6, when $n$ is less than 100, the CPU running time (Ave.Time) and the iteration count (Ave.Iter) of our algorithm are not as good as those of BARON. In the case $p=2,3$ and $n=100$, the average CPU running time and the average iteration count of BARON are less than those of our algorithm OSBBA. For $(m,n)=(10,100)$ and $p=4,5$, the algorithm OSBBA performs better than BARON. In the case $n\ge 200$, if $p\le 5$, the average CPU running time of the algorithm OSBBA is less than that of BARON, while for $p>5$ the relation is reversed.
According to Tables 7–9, we can also conclude that for $p<5$, the algorithm OSBBA takes significantly less time than BARON. In Tables 8 and 9, in the case $p=8$, if $n=300,500$, the computing time of the algorithm OSBBA is significantly more than that of BARON, while if $n=700,900,1000,2000,3000$, BARON takes more time than the algorithm OSBBA. At the same time, some "–" entries can be seen in Tables 7 and 9, meaning that BARON failed in all 10 runs; this indicates that the success rate of BARON in solving high-dimensional problems is close to zero, whereas our algorithm can still obtain the global optimal solution of the problem within finitely many steps, with an overall time of no more than 420 s.
In general, when $p\ll n$, our algorithm shows good computational performance. In practical applications, the size of $p$ generally does not exceed 10. The results in Table 6 show that our algorithm is not as effective as BARON in solving small problems, but it can be seen from Tables 6–9 that this algorithm has obvious advantages in solving large-scale and high-dimensional problems. At the same time, compared with BARON, this algorithm can also solve higher dimensional problems.
The nonlinear programming method in the commercial software package BARON comes from [29]. It is a branch-and-reduce method based on the $n$-dimensional space in which the decision variable $x$ is located, as can be seen from the two examples in Section 6 of [29]. In Section 5 of [29], many feasible-domain reduction techniques, including polyhedral cutting techniques, are also proposed. Although BARON combines many feasible-domain reduction techniques with this method, the experimental results in this paper show that BARON is more effective than our OSBBA method only in solving small-scale linear fractional programming problems. Per division, BARON branches on a maximum of $2^n$ nodes, which is exponential in $n$, while our algorithm potentially branches on a maximum of $2^p$ nodes, a much smaller number. Even where these feasible-domain reductions remain valid, running the reduction procedures also increases time consumption and space storage to a large extent. For this special class of nonlinear programs, linear fractional programming problems, the proposed global optimization algorithm branches hyper-rectangles in $p$-dimensional space; because $p$ is much less than $n$ and $p$ is not more than 10, the proposed algorithm is suitable for solving large-scale problems. The variables on which the algorithms branch lie in different spatial dimensions: BARON branches in the $n$-dimensional decision space, while the branching process of the algorithm OSBBA is carried out in the $p$-dimensional output-space. It can also be seen from Tables 6–9 that when the number of ratio terms $p$ is much smaller than the dimension $n$ of the decision variable, the algorithm OSBBA computes better than BARON. This is because, for higher dimensional problems, BARON needs to solve more and higher dimensional subproblems during initialization, while the algorithm OSBBA only needs to solve $2p$ $n$-dimensional subproblems, which greatly reduces the amount of computation. This is why our algorithm OSBBA branches in $p$-dimensional space.
In summary, in terms of the demands of real problems, the number of ratios $p$ does not exceed 10 in the linear fractional programming problems to be solved, and $p$ is much smaller than the dimension $n$ of the decision variable. In the branching process, the numbers of vertices of the rectangles to be divided are $2^p$ and $2^n$, respectively. In the case $p\ll n$, the branching of our algorithm OSBBA can always be completed quickly, whereas the methods in the software package BARON and in [18] incur much greater branching complexity; therefore, the computation required by branch search in $\mathbb{R}^p$ is more economical than that in $\mathbb{R}^n$. In the case $p\ll n$, our method is more effective for large-scale problems than the method in [18] and the software package BARON. It is also noted that, when the results of OSBBA and BMIBNB are compared, the latter is sensitive to the size of $n$, which once again illustrates the characteristics of the algorithm in this paper.

7. Conclusions

In this paper, a deterministic method is proposed for linear fractional programming problems. It is based on a linear relaxation problem constructed from the positive and negative coefficients of the linear functions, and the corresponding branch-and-bound algorithm OSBBA is given. In Section 6, the feasibility and effectiveness of the algorithm for solving linear fractional programming problems are fully illustrated by numerical experiments, which also show that our algorithm OSBBA is more effective than BARON, BMIBNB, and the method in [18] when applied to high-dimensional problems with $p\ll n$. In recent years, the development of multi-objective programming has become increasingly rapid. We may solve multi-objective linear fractional programming by combining the ideas and methods of this paper with other approaches, and we will consider this problem in future research.

Author Contributions

X.L. and Y.G. conceived of and designed the study. B.Z. and X.L. performed the experiments. X.L. wrote the paper. Y.G. and F.T. reviewed and edited the manuscript. All authors read and approved the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China under Grant 11961001, the Construction Project of First-Class Disciplines in Ningxia Higher Education (NXYLXK2017B09), and the Major Proprietary Funded Project of North Minzu University (ZDZX201901).

Acknowledgments

The authors are grateful to the responsible editor and the anonymous referees for their valuable comments and suggestions, which have greatly improved the earlier version of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:

LFP  linear fractional programming
EOP  equivalent nonlinear programming problem
LRP  linear relaxation programming
T    transpose of a vector or matrix

References

  1. Schaible, S. Fractional programming. In Handbook of Global Optimization; Horst, R., Pardalos, P.M., Eds.; Kluwer: Dordrecht, The Netherlands, 1995; pp. 495–608. [Google Scholar]
  2. Falk, J.E.; Palocsay, S.W. Optimizing the Sum of Linear Fractional Functions. In Recent Advances in Global Optimization; Princeton University Press: Princeton, NJ, USA, 1992; pp. 221–258. [Google Scholar]
  3. Konno, H.; Inori, M. Bond portfolio optimization by bilinear fractional programming. J. Oper. Res. Soc. Jpn. 1989, 32, 143–158. [Google Scholar] [CrossRef]
  4. Colantoni, C.S.; Manes, R.P.; Whinston, A. Programming, profit rates and pricing decisions. Account. Rev. 1969, 44, 467–481. [Google Scholar]
  5. Sawik, B. Downside risk approach for multi-objective portfolio optimization. In Operations Research Proceedings 2011; Springer: Berlin/Heidelberg, Germany, 2012; pp. 191–196. [Google Scholar]
  6. Benson, H.P. A simplicial branch and bound duality-bounds algorithm for the linear sum-of-ratios problem. Eur. J. Oper. Res. 2007, 182, 597–611. [Google Scholar] [CrossRef]
  7. Freund, R.W.; Jarre, F. Solving the Sum-of-Ratios Problem by an Interior-Point Method. J. Glob. Optim. 2001, 19, 83–102. [Google Scholar] [CrossRef]
  8. Matsui, T. NP-hardness of linear multiplicative programming and related problems. J. Glob. Optim. 1996, 9, 113–119. [Google Scholar] [CrossRef] [Green Version]
  9. Charnes, A.; Cooper, W.W. Programming with linear fractional functionals. Naval Res. Logist. Q. 1962, 9, 181–186. [Google Scholar] [CrossRef]
  10. Konno, H.; Yajima, Y.; Matsui, T. Parametric simplex algorithms for solving a special class of nonconvex minimization problems. J. Glob. Optim. 1991, 1, 65–81. [Google Scholar] [CrossRef]
  11. Konno, H.; Abe, N. Minimization of the sum of three linear fractional functions. J. Glob. Optim. 1999, 15, 419–432. [Google Scholar] [CrossRef]
  12. Shen, P.P.; Wang, C.F. Global optimization for sum of linear ratios problem with coefficients. Appl. Math. Comput. 2006, 176, 219–229. [Google Scholar] [CrossRef]
  13. Nguyen, T.H.P.; Tuy, H. A Unified Monotonic Approach to Generalized Linear Fractional Programming. J. Glob. Optim. 2003, 26, 229–259. [Google Scholar]
  14. Jiao, H.; Liu, S.; Yin, J.; Zhao, Y. Outcome space range reduction method for global optimization of sum of affine ratios problem. Open Math. 2016, 14, 736–746. [Google Scholar] [CrossRef] [Green Version]
  15. Shen, P.P.; Zhang, T.L.; Wang, C.F. Solving a class of generalized fractional programming problems using the feasibility of linear programs. J. Inequal. Appl. 2017, 2017, 147. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Hu, Y.; Chen, G.; Meng, F. Efficient Global Optimization Algorithm for Linear-fractional-programming with Lower Dimension. Sci. Mosaic 2017, 1, 11–16. [Google Scholar]
  17. Shen, P.P.; Lu, T. Regional division and reduction algorithm for minimizing the sum of linear fractional functions. J. Inequal. Appl. 2018, 2018, 63. [Google Scholar] [CrossRef] [PubMed]
  18. Zhang, Y.H.; Wang, C.F. A New Branch and Reduce Approach for Solving Generalized Linear Fractional Programming. Eng. Lett. 2017, 25, 262–267. [Google Scholar]
  19. Jiao, H.W.; Wang, Z.K.; Chen, Y.Q. Global optimization algorithm for sum of generalized polynomial ratiosproblem. Appl. Math. Model. 2013, 37, 187–197. [Google Scholar] [CrossRef]
  20. Falk, J.E.; Palocsay, S.W. Image space analysis of generalized fractional programs. J. Glob. Optim. 1994, 4, 63–88. [Google Scholar] [CrossRef]
  21. Gao, Y.; Jin, S. A global optimization algorithm for sum of linear ratios problem. Math. Appl. 2013, 2013, 785–790. [Google Scholar] [CrossRef]
  22. Ji, Y.; Zhang, K.C.; Qu, S.J. A deterministic global optimization algorithm. Appl. Math. Comput. 2007, 185, 382–387. [Google Scholar] [CrossRef]
  23. Shi, Y. Global Optimization for Sum of Ratios Problems. Master’s Thesis, Henan Normal University, Xinxiang, China, 2011. [Google Scholar]
  24. Wang, C.F.; Shen, P.P. A global optimization algorithm for linear fractional programming. Appl. Math. Comput. 2008, 204, 281–287. [Google Scholar] [CrossRef]
  25. Schöbel, A.; Scholz, D. The theoretical and empirical rate of convergence for geometric branch-and-bound methods. J. Glob. Optim. 2010, 48, 473–495. [Google Scholar]
  26. Lofberg, J. YALMIP: A toolbox for modeling and optimization in MATLAB. IEEE Int. Symp. Comput. Aided Control Syst. Des. 2004, 3, 282–289. [Google Scholar]
  27. Sahinidis, N. BARON User Manual v.17.8.9[EB/OL]. Available online: http://minlp.com (accessed on 18 June 2019).
  28. Sahinidis, N.V. BARON 14.3.1: Global Optimization of Mixed-Integer Nonlinear Programs. User’s Manual. 2014. Available online: http://archimedes.cheme.cmu.edu/?q=baron (accessed on 13 September 2019).
  29. Tawarmalani, M.; Sahinidis, N.V. A polyhedral branch-and-cut approach to global optimization. Math. Program. 2005, 103, 225–249. [Google Scholar] [CrossRef]
Table 1. Comparison of results in Examples 1–11.

Example | Methods | x* | f(x*) | Iter | Time | ϵ
1 | [15] | (0.00, 0.2817) | 1.6232 | 43 | 4.3217 | $10^{-8}$
1 | [18] | (0.00, 0.2839) | 1.6232 | 65 | 0.9524 | $10^{-8}$
1 | OSBBA | (0.00, 0.2839) | 1.6232 | 1983 | 40.3528 | $10^{-8}$
2 | [18] | (0.0, 1.0) | 3.5750 | 1 | 0.0561 | $10^{-9}$
2 | OSBBA | (0.0, 1.0) | 3.5750 | 12 | 0.1626 | $10^{-9}$
3 | [17] | (5.0000, 0.0000, 0.0000) | 2.8619 | 16 | 0.1250 | $10^{-3}$
3 | [21] | (5.0000, 0.0000, 0.0000) | 2.8619 | 12 | 28.2943 | $10^{-4}$
3 | OSBBA | (5.0000, 0.0000, 0.0000) | 2.8619 | 379 | 8.4163 | $10^{-4}$
4 | [17] | (1.1111, 0.0000, 0.0000) | −4.0907 | 185 | 3.2510 | $10^{-2}$
4 | [22] | (1.0715, 0, 0) | −4.0874 | 17 | – | $10^{-6}$
4 | OSBBA | (1.11111111, 0, 0) | −4.0907 | 70 | 1.8196 | $10^{-6}$
5 | [17] | (0.0000, 1.66666667, 0.0000) | 3.7109 | 8 | 0.1830 | $10^{-3}$
5 | [21] | (0.0000, 1.6667, 0.0000) | 3.7087 | 5 | 4.1903 | $10^{-4}$
5 | [24] | (0.0000, 0.625, 1.875) | 4.0000 | 58 | 2.9686 | $10^{-4}$
5 | OSBBA | (0.0000, 1.66666667, 0.0000) | 3.7109 | 169 | 4.2429 | $10^{-6}$
6 | [17] | (0, 0.333333, 0) | −3.0029 | 17 | 0.1290 | $10^{-3}$
6 | [22] | (0, 0.33329, 0) | −3.0000 | 30 | – | $10^{-6}$
6 | OSBBA | (0, 0.333333, 0) | −3.0029 | 2090 | 50.8226 | $10^{-6}$
7 | [17] | (1.5000, 1.5000) | 4.9125 | 56 | 1.0870 | $10^{-3}$
7 | [21] | (1.5000, 1.5000) | 4.9125 | 113 | 201.6260 | $10^{-4}$
7 | OSBBA | (1.5000, 1.5000) | 4.9125 | 460 | 8.7944 | $10^{-4}$
8 | [12] | (1.1111, 0, 3.33067 × 10⁻⁵) | 4.0907 | 3 | 0.0000 | $10^{-6}$
8 | [14] | (1.11111, 0.00000, 0.00000) | 4.0907 | 2 | 0.0020 | $10^{-6}$
8 | OSBBA | (1.11111, 0.00000, 0.00000) | 4.0907 | 42 | 1.1433 | $10^{-6}$
9 | [12] | (3.0, 4.0) | 3.2916 | 9 | 0.0000 | $10^{-6}$
9 | [14] | (3.0, 4.0) | 3.2916 | 2 | 0.0017 | $10^{-6}$
9 | [23] | (3.0, 4.0) | 3.2916 | 78 | 0.0000 | $10^{-6}$
9 | OSBBA | (3.0, 4.0) | 3.2916 | 693 | 16.5359 | $10^{-6}$
10 | [14] | (5.0, 0.0, 0.0) | 4.4285 | 2 | 0.0018 | $10^{-6}$
10 | [23] | (5.0, 0.0, 0.0) | 4.4285 | 35 | 0.0000 | $10^{-4}$
10 | [24] | (0.0, 0.625, 1.875) | 4.0000 | 58 | 2.9686 | $10^{-4}$
10 | OSBBA | (5.0, 0.0, 0.0) | 4.4285 | 61 | 1.6153 | $10^{-6}$
11 | [18] | (0.0, 3.3333, 0.0) | 1.9000 | 8 | 0.1389 | $10^{-6}$
11 | OSBBA | (0.0, 3.3333, 0.0) | 1.9000 | 402 | 6.8145 | $10^{-6}$
Table 2. The results of random calculations for Example 12. Ave.Iter, the average number of iterations; Ave.Time, the average CPU running time; SR, the success rate of the algorithm.

p | m | n | OSBBA Ave.Iter | OSBBA Ave.Time | OSBBA SR | [18] Ave.Iter | [18] Ave.Time | [18] SR
5 | 30 | 30 | 3.3 | 0.1636 | 100% | 2.5 | 0.0866 | 100%
5 | 50 | 50 | 4.2 | 0.2090 | 100% | 2.7 | 0.1818 | 100%
5 | 100 | 100 | 4.9 | 0.3075 | 100% | 3.3 | 1.7679 | 100%
10 | 30 | 30 | 7.7 | 0.4194 | 100% | 2.8 | 0.1440 | 100%
10 | 50 | 50 | 8.2 | 0.6288 | 100% | 4.7 | 0.3852 | 100%
10 | 100 | 100 | 12.8 | 0.8931 | 100% | 6.3 | 2.7120 | 100%
5 | 50 | 100 | 5.1 | 0.2191 | 100% | 2.8 | 0.4551 | 100%
5 | 100 | 300 | 18.4 | 2.3497 | 100% | 72.5 | 49.1801 | 100%
5 | 200 | 500 | 16.7 | 11.1795 | 100% | 19.9 | 328.1308 | 100%
10 | 50 | 100 | 480.7 | 11.4817 | 100% | 50.6 | 17.0826 | 100%
10 | 100 | 300 | 818.5 | 95.7513 | 100% | 1064.4 | 836.2154 | 100%
10 | 200 | 500 | 435.8 | 281.8541 | 100% | 1784.5 | 1126.2541 | 100%
Table 3. The results of random calculations for Example 12.

p | m | n | OSBBA Ave.Iter | OSBBA Ave.Time | OSBBA SR | [18] Ave.Iter | [18] Ave.Time | [18] SR
5 | 100 | 1000 | 7.5 | 5.7765 | 80% | – | – | 0%
5 | 200 | 2000 | 14.0 | 96.2968 | 80% | – | – | 0%
5 | 300 | 3000 | 18.5 | 555.5875 | 80% | – | – | 0%
10 | 100 | 1000 | 19.2 | 127.5658 | 60% | – | – | 0%
10 | 200 | 2000 | 32.1 | 649.6260 | 60% | – | – | 0%
10 | 300 | 3000 | 39.4 | 964.1816 | 60% | – | – | 0%
5 | 300 | 5000 | 6.1 | 594.2337 | 60% | – | – | 0%
5 | 400 | 8000 | 2.3 | 785.0338 | 60% | – | – | 0%
5 | 500 | 10,000 | 1.4 | 913.8774 | 40% | – | – | 0%
10 | 300 | 5000 | 2.1 | 216.3840 | 40% | – | – | 0%
10 | 400 | 8000 | 2.4 | 1095.3918 | 40% | – | – | 0%
10 | 500 | 10,000 | 2.3 | 1180.6531 | 20% | – | – | 0%
Table 4. The results of random calculations for Example 12.

p | m | n | OSBBA Ave.Time | OSBBA SR | BMIBNB Ave.Time | BMIBNB SR
5 | 30 | 30 | 0.1911 | 100% | 0.0840 | 100%
5 | 50 | 50 | 0.2106 | 100% | 0.1117 | 100%
5 | 100 | 100 | 0.9178 | 100% | 0.7412 | 100%
10 | 30 | 30 | 4.7010 | 100% | 2.2465 | 100%
10 | 50 | 50 | 5.8519 | 100% | 3.9418 | 100%
10 | 100 | 100 | 33.9510 | 100% | 11.5741 | 100%
5 | 50 | 100 | 0.5447 | 100% | 6.9531 | 100%
5 | 100 | 300 | 4.1197 | 100% | 78.3507 | 100%
5 | 200 | 500 | 69.9101 | 100% | 403.4631 | 100%
10 | 50 | 100 | 108.1924 | 100% | 10.4275 | 100%
10 | 100 | 300 | 115.5843 | 100% | 149.4162 | 100%
10 | 200 | 500 | 238.9797 | 100% | 806.4061 | 100%
Table 5. The results of random calculations for Example 12.

p | m | n | OSBBA Ave.Time | OSBBA SR | BMIBNB Ave.Time | BMIBNB SR
5 | 100 | 1000 | 38.6369 | 100% | – | 0%
5 | 200 | 2000 | 161.0308 | 80% | – | 0%
5 | 300 | 3000 | 250.7251 | 80% | – | 0%
10 | 100 | 1000 | 167.5179 | 80% | – | 0%
10 | 200 | 2000 | 369.3976 | 60% | – | 0%
10 | 300 | 3000 | 603.8334 | 20% | – | 0%
5 | 300 | 5000 | 484.2731 | 60% | – | 0%
5 | 400 | 8000 | 697.9532 | 60% | – | 0%
5 | 500 | 10,000 | 907.1948 | 40% | – | 0%
10 | 300 | 5000 | 716.1021 | 40% | – | 0%
10 | 400 | 8000 | 1105.1437 | 40% | – | 0%
10 | 500 | 10,000 | 1177.9841 | 20% | – | 0%
Table 6. The results of random calculations for Example 13.

m | n | p | OSBBA Ave.Iter | OSBBA Ave.Time | BARON Ave.Iter | BARON Ave.Time
5 | 10 | 2 | 36 | 1.1612 | 2 | 0.7473
5 | 20 | 2 | 79 | 1.9147 | 4 | 0.6075
5 | 30 | 2 | 114.5 | 2.7861 | 9 | 0.6482
5 | 50 | 2 | 75 | 1.9192 | 5 | 0.6511
5 | 70 | 2 | 286 | 5.9812 | 14 | 0.7949
5 | 90 | 2 | 67.5 | 1.6454 | 13 | 0.8312
5 | 100 | 2 | 156.5 | 3.0462 | 6 | 0.7616
5 | 10 | 3 | 203.5 | 5.1221 | 7 | 0.5021
5 | 20 | 3 | 624.5 | 14.6705 | 9 | 0.6231
5 | 30 | 3 | 246 | 6.9570 | 58 | 0.9305
5 | 50 | 3 | 466 | 11.2603 | 16 | 0.9116
5 | 70 | 3 | 97 | 4.0403 | 4 | 0.7513
5 | 90 | 3 | 815.4 | 15.5686 | 638.8 | 3.5839
5 | 100 | 3 | 753 | 13.8953 | 26.2 | 0.9445
5 | 10 | 4 | 896 | 18.8903 | 197 | 0.7966
5 | 10 | 5 | 138.5 | 3.4184 | 1 | 0.5202
5 | 10 | 6 | 245.5 | 5.6823 | 3 | 0.4332
5 | 20 | 4 | 472.5 | 9.0087 | 4 | 0.4629
5 | 20 | 5 | 269.5 | 5.8828 | 4 | 0.6038
5 | 20 | 6 | 4482 | 96.0794 | 90 | 0.8938
5 | 30 | 4 | 140.5 | 3.2252 | 3 | 0.4684
5 | 30 | 5 | 205 | 4.7040 | 3 | 0.4701
5 | 30 | 6 | 1404.5 | 31.7456 | 18 | 0.6425
10 | 30 | 5 | 2401 | 50.5017 | 32 | 0.6387
10 | 50 | 5 | 444 | 9.6959 | 3 | 0.5758
10 | 70 | 5 | 777 | 16.9510 | 36 | 0.9000
10 | 90 | 5 | 2187.5 | 49.8752 | 595 | 5.9809
10 | 100 | 2 | 64.4 | 1.2717 | 7.2 | 0.7844
10 | 100 | 3 | 223.1 | 4.4559 | 14.2 | 0.9466
10 | 100 | 4 | 594.2 | 12.1786 | 9651.2 | 53.4707
10 | 200 | 4 | 2009.2 | 44.0950 | 4363 | 49.0320
10 | 100 | 5 | 4414 | 88.7048 | 30,905.4 | 205.4919
10 | 200 | 5 | 3899 | 91.1614 | 35,481 | 502.3583
10 | 300 | 5 | 6591 | 176.2488 | 22,469 | 505.1200
20 | 300 | 5 | 1668.2 | 45.0485 | 14,140.1 | 307.2198
20 | 500 | 5 | 5091.8 | 165.9364 | 4878.2 | 264.4288
20 | 500 | 6 | 14,975.1 | 577.1660 | 6854.1 | 416.4063
20 | 500 | 7 | 62,930.1 | 3012.5028 | 7133.7 | 487.2844
20 | 500 | 8 | 3570 | 179.9689 | 366 | 2.7567
20 | 500 | 9 | 75,222 | 3909.7296 | 6740.5 | 637.1516
20 | 500 | 10 | 30,274 | 1504.8824 | 327 | 4.1234
Table 7. The results of random calculations for Example 13.

m | n | p | OSBBA Ave.Iter | OSBBA Ave.Time | BARON Ave.Iter | BARON Ave.Time
50 | 300 | 2 | 20 | 1.1057 | 5 | 4.8298
50 | 300 | 3 | 27 | 2.9405 | 5 | 5.6102
50 | 300 | 4 | 105 | 5.2798 | 25 | 28.8512
50 | 500 | 2 | 16 | 1.8027 | 3 | 12.2832
50 | 500 | 3 | 51 | 2.6929 | 17 | 23.8195
50 | 500 | 4 | 68 | 8.9815 | 438 | 1356.2157
50 | 700 | 2 | 37 | 3.8667 | 11 | 67.7368
50 | 700 | 3 | 58 | 4.9608 | 7738 | 1002.5904
50 | 700 | 4 | 41 | 12.7235 | 3 | 51.0155
50 | 900 | 2 | 36 | 5.3660 | 7 | 108.5627
50 | 900 | 3 | 34 | 7.2731 | 23 | 142.4374
50 | 900 | 4 | 40 | 14.9773 | 3 | 178.2181
50 | 1000 | 2 | 46 | 10.6561 | 1 | 131.5696
50 | 1000 | 3 | 82 | 22.0415 | 33 | 162.2937
50 | 1000 | 4 | 636 | 105.3631 | 19 | 315.2514
50 | 2000 | 2 | 50 | 36.0024 | 1 | 779.5139
50 | 2000 | 3 | 169 | 67.2509 | 6 | 1007.5297
50 | 2000 | 4 | 436 | 113.1053 | 29 | 1008.2303
50 | 3000 | 2 | 77 | 82.6399 | 3 | 304.8984
50 | 3000 | 3 | 123 | 96.3323 | – | –
50 | 4000 | 2 | 42 | 85.5768 | – | –
50 | 4000 | 3 | 156 | 129.1936 | – | –
50 | 5000 | 2 | 81 | 93.9734 | – | –
50 | 2000 | 5 | 370 | 215.7799 | – | –
50 | 2000 | 6 | 1274 | 283.4462 | – | –
50 | 2000 | 7 | 2867 | 366.5079 | – | –
50 | 2000 | 8 | 3032 | 414.5711 | – | –
Table 8. The results of random calculations for Example 13.

m | n | p | OSBBA Ave.Iter | OSBBA Ave.Time | BARON Ave.Iter | BARON Ave.Time
100 | 300 | 3 | 19.3 | 4.1248 | 2 | 6.7865
100 | 300 | 5 | 105 | 6.6735 | 3.8 | 11.3750
100 | 300 | 8 | 676.4 | 41.5368 | 6.8 | 14.4418
100 | 500 | 3 | 21.9 | 5.9263 | 3.4 | 20.4406
100 | 500 | 5 | 162.3 | 16.4179 | 6.8 | 35.2466
100 | 500 | 8 | 2812.1 | 235.0307 | 13.4 | 69.6685
150 | 500 | 3 | 15.9 | 7.3081 | 2.4 | 21.4519
150 | 500 | 5 | 58.1 | 8.3569 | 3 | 35.9739
150 | 500 | 8 | 694.4 | 80.6513 | 7.2 | 51.4611
200 | 700 | 3 | 9.3 | 2.7830 | 2 | 47.8042
200 | 700 | 5 | 45.2 | 10.8633 | 3.2 | 84.6023
200 | 700 | 8 | 118.7 | 40.3079 | 4.2 | 122.5232
300 | 700 | 3 | 4.6 | 1.8944 | 1.6 | 60.0400
300 | 700 | 5 | 10.5 | 5.3870 | 11.8 | 78.3503
300 | 700 | 8 | 91.3 | 28.5988 | 4 | 119.6601
300 | 900 | 3 | 2.1 | 1.8728 | 1.2 | 93.1878
300 | 900 | 5 | 24.5 | 10.8376 | 3.2 | 158.5425
300 | 900 | 8 | 549 | 175.6600 | 3.8 | 231.1125
300 | 1000 | 3 | 9.1 | 6.8319 | 2.2 | 131.6551
300 | 1000 | 5 | 19.1 | 15.6941 | 2.2 | 144.4356
300 | 1000 | 8 | 98.3 | 85.0639 | 3.2 | 220.8366
Table 9. The results of random calculations for Example 13.

m | n | p | OSBBA Ave.Iter | OSBBA Ave.Time | BARON Ave.Iter | BARON Ave.Time
500 | 1000 | 3 | 1.5 | 3.4675 | 1 | 99.9415
500 | 1000 | 5 | 7.2 | 9.7535 | 2.2 | 147.3751
500 | 1000 | 8 | 24.7 | 30.0108 | 2.8 | 187.4586
500 | 2000 | 3 | 3 | 16.0995 | 1.7 | 496.0448
500 | 2000 | 5 | 6.9 | 23.9408 | 2.6 | 741.5934
500 | 2000 | 8 | 22.3 | 93.7039 | 2.7 | 861.7061
500 | 3000 | 3 | 3.5 | 26.4755 | 3.0 | 920.6775
500 | 3000 | 5 | 4.5 | 56.1823 | – | –
500 | 3000 | 8 | 30 | 125.0527 | – | –
500 | 4000 | 3 | 2.4 | 54.7006 | – | –
500 | 4000 | 5 | 9 | 136.5505 | – | –
500 | 4000 | 8 | 10.8 | 196.5768 | – | –
500 | 5000 | 3 | 2 | 71.7622 | – | –
500 | 5000 | 5 | 5 | 126.6783 | – | –
500 | 5000 | 8 | 17.3 | 330.9163 | – | –
500 | 6000 | 3 | 1.8 | 72.6898 | – | –
500 | 7000 | 3 | 2.6 | 238.4842 | – | –
500 | 9000 | 3 | 2.3 | 209.8106 | – | –
500 | 10,000 | 3 | 3.7 | 319.7487 | – | –
