
An Iterative Algorithm for the Nonlinear MC2 Model with Variational Inequality Method

1 School of Science, Southwest Petroleum University, Chengdu 610500, China
2 Institute for Artificial Intelligence, Southwest Petroleum University, Chengdu 610500, China
3 State Key Laboratory of Oil and Gas Reservoir Geology and Exploitation, Southwest Petroleum University, Chengdu 610500, China
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(6), 514; https://doi.org/10.3390/math7060514
Submission received: 25 April 2019 / Revised: 29 May 2019 / Accepted: 31 May 2019 / Published: 5 June 2019
(This article belongs to the Section Engineering Mathematics)

Abstract

The multiple criteria and multiple constraint level (MC2) model is a useful tool for decision programming problems that involve multiple decision makers and uncertain resource constraint levels. In this paper, by regarding nonlinear MC2 problems as a class of mixed implicit variational inequalities, we develop an iterative algorithm for nonlinear MC2 problems based on the resolvent operator technique. The convergence of the generated iterative sequence is analyzed and illustrated by a calculation example, and the stability of Algorithm 1 is verified by an error-propagation argument. Compared with two other MC2 algorithms, Algorithm 1 performs well in terms of the number of iterations and computational cost.

1. Introduction

In practical applications, multi-criteria decision-making has become a research hotspot. In [1], design choices for multi-criteria data acquisition schemes that determine the specific parameters of an attack were discussed, and a new attack type was proposed. The optimal mapping of hybrid energy systems based on wind and PV (photovoltaic) sources was presented in [2], where the mixed energy system was obtained using the HOMER (Hybrid Optimization Model for Multiple Energy Resources) software (National Renewable Energy Laboratory (NREL), Boulder, CO, USA) and the TOPSIS multi-criteria algorithm. In [3], a nonlinear programming (NP) model based on the technique for order preference by similarity to ideal solution (TOPSIS) was developed to solve decision-making problems. Suzdaltsev et al. [4] developed genetic, ant colony and bee algorithms for solving multi-criteria optimization problems in printed circuit board (PCB) design.
As an extension of linear programming (LP), multiple criteria and multiple constraint level linear programming (MC2LP) is a useful tool for decision problems with multiple decision makers and multiple resource constraint levels [5], which arise in many economic situations [6]. The concept of MC2LP is attractive to practitioners and has been widely applied in fields such as transportation [7], data mining [8,9], finance [10], telecommunication management [11], management information systems [12,13], and production planning [14,15]. Specifically, the MC2 branch-and-partition algorithm and the MC2 branch-and-bound algorithm were presented to solve MC2 integer linear programs in [16]. Chen et al. [17] showed that the MC2-simplex method was generated by the classical LP simplex method and that the MC2-interior point method was driven by the MC-interior point method, and reviewed the current status and application areas of MC2LP. A new MC2LP model was proposed based on the structure of MC2LP to correct two types of errors in [18]. Nonlinear programming, an important branch of operations research, is mathematical programming with nonlinear constraints or objective functions; it can be stated as
$$\min f(x) \quad \text{s.t.}\quad g_i(x) = 0,\ i \in I_1, \qquad g_i(x) \le 0,\ i \in I_2,$$
where each g_i(x) is a mapping from R^n to R. Traditional methods for solving nonlinear programming problems include the steepest descent method, Newton's method, the feasible direction method, function approximation methods and trust region methods. In addition, the augmented Lagrangian method solves the problem by replacing the original constrained problem with a series of unconstrained sub-problems; [19] proposed an algorithm for infeasible constrained nonlinear programming problems based on a large-scale augmented Lagrangian function and analyzed its global convergence, taking the possibility of infeasibility into account. Sequential quadratic programming (SQP) generates steps by solving quadratic subproblems and can be applied to small and large problems, as well as to problems with significant nonlinearity [20]. Feasible SQP algorithms were designed by Lawrence and Tits [21] to solve optimization problems with nonlinear constraints. In [22], the original problem was reduced to a bound-constrained nonlinear optimization problem, with reduced gradient algorithms used instead of the penalty method. Despite the maturity of MC2LP theory and the extensive study of general nonlinear programming problems, there has not yet been much progress on nonlinear MC2 models. In this paper, we introduce a novel method for MC2 nonlinear programming (MC2NLP) problems.
It is well known that variational inequality theory is a powerful tool for studying problems arising in nonlinear programming. Toyasaki et al. [23] gave mathematical conditions, including a constraint qualification and convexity of the feasible set, which allow the economic problem to be characterized by a variational inequality formulation. An iterative algorithm based on the resolvent operator technique was suggested to compute approximate solutions of a system of nonlinear set-valued variational inclusions [24]. Affine variational inequality problems and polynomial complementarity problems, which extend the results of [24], were discussed in [25]; the authors then applied their results to the existence of solutions of weakly homogeneous nonlinear equations whose domains are closed convex cones. Motivated and inspired by [26,27], the purpose of this paper is to develop a new iterative algorithm for MC2NLP problems by employing the theory of variational inequalities and the resolvent operator technique. Considering the accuracy of solutions of MC2NLP problems, the convergence and stability of the new algorithm are also discussed. The result of this paper generalizes Theorem 2A.8 (the Lagrange multiplier rule) in [28].

2. Preliminaries

The MC2LP model can be formulated in the following form:
$$\min \lambda^{T} C x \quad \text{s.t.}\quad Ax \in \mathrm{Conv}\{D\},\ x \ge 0,$$
where x ∈ R^n is the decision vector, C ∈ R^{q×n} is the criteria matrix, λ ∈ R^q is the vector of weight parameters, A ∈ R^{m×n} is the resource consumption matrix, and D ∈ R^{m×p} stands for the multiple resource availability levels; each column of D indicates a constraint level. The constraint "Conv{D}" means that x is feasible if Ax is contained in the convex set generated by the column vectors of D; we write Conv{D} simply as D in what follows. This model can then be extended to the nonlinear case:
$$\min f(x) \quad \text{s.t.}\quad g(x) \in D,\ x \ge 0,$$
where f : R^n → R is a nonlinear function and g : R^n → R^m is a nonlinear mapping. Since D is a nonempty, closed and convex set, if both f and g are continuously differentiable, the Lagrange multiplier rule holds for problem (1), as listed in the book of Dontchev and Rockafellar [28].
Theorem 1.
Let the following constraint qualification condition be fulfilled: for a given x ∈ C := {x ∈ R^n : g(x) ∈ D}, there is no y ∈ N_D(g(x)), y ≠ 0, such that −y∇g(x) ∈ N_{R^n}(x) = {0}. If f has a local minimum relative to C at x, then there exists y ∈ N_D(g(x)) such that

$$\nabla f(x) + y\,\nabla g(x) = 0.$$

Here, ∇g(x) is the gradient of g at x, and N_D(u) = {v ∈ R^m : ⟨v, x − u⟩ ≤ 0, ∀x ∈ D} is the normal cone to D at u ∈ D.
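The multiplier rule can be made concrete on a one-dimensional instance (our own illustration, not taken from [28]): take f(x) = x², g(x) = x and D = [1, +∞), whose constrained minimum sits at x* = 1. A short check in code:

```python
# Toy instance of the multiplier rule: minimize f(x) = x^2 subject to
# g(x) = x in D = [1, +inf). The constrained minimum is x* = 1, and the
# normal cone to D at g(x*) = 1 is (-inf, 0].

def f_prime(x):
    return 2.0 * x

def g_prime(x):
    return 1.0

x_star = 1.0
y = -f_prime(x_star) / g_prime(x_star)   # multiplier: y = -2

assert y <= 0.0                                            # y lies in N_D(g(x*))
assert abs(f_prime(x_star) + y * g_prime(x_star)) < 1e-12  # stationarity holds
```

Here y = −2 belongs to the normal cone (−∞, 0] and makes ∇f(x*) + y∇g(x*) vanish, exactly as Theorem 1 asserts.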
Remark 1.
We should notice that the condition above is only necessary, and the differentiability assumptions on f and g are not always satisfied, so the Lagrange multiplier rule may not always apply. Thus we need a more general method.
First, we should notice that problem (1) is a special case of the following problem when f is differentiable: find u ∈ R^n such that

$$\langle \nabla f(u),\, v - u\rangle \ge 0, \quad \forall v \in C := \{x \in R^n : g(x) \in D\},$$

which means that any v in the feasible set always lies "in the front direction of" u (the direction in which the value of f increases). Here, ⟨·,·⟩ is the scalar product.
Then, let T : R^n → R^n be a continuous linear operator (such as the gradient or the Gateaux derivative of f: ∇f, Df, etc.) and g : R^n → R^m be a single-valued mapping. We can see that problem (1) is generalized to the implicit variational inequality problem: for the given D ⊂ R^m, find u ∈ R^n such that

$$g(u) \in D, \qquad \langle Tu,\, v - u\rangle \ge 0, \quad \forall v \in C := \{x \in R^n : g(x) \in D\}.$$
Here, we can transform problem (2) into the more general problem of finding u ∈ R^n such that

$$g(u) \in \mathrm{dom}(\partial\varphi), \quad \text{and} \quad \langle Tu,\, v - u\rangle \ge \varphi(g(u)) - \varphi(g(v)) \ \ \text{for all } g(v) \in \mathrm{dom}(\partial\varphi),$$

where ∂φ denotes the subdifferential of φ. If φ = δ_D, where δ_D denotes the indicator function of a nonempty closed convex set D, then problem (3) is equivalent to problem (2).
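In the indicator-function case, (I + ρ∂δ_D)^{−1} is simply the metric projection onto D, independently of ρ, and the projection u of a point v satisfies ⟨u − v, w − u⟩ ≥ 0 for every w ∈ D. A minimal sketch, with D an interval of our own choosing:

```python
# For phi = delta_D, (I + rho * d(phi))^{-1} is the metric projection onto D,
# whatever rho is. Here D = [-1, 2] in R^1, so the projection is a clip.

def proj(v, lo=-1.0, hi=2.0):
    return min(max(v, lo), hi)

for v in (-3.0, 0.5, 7.0):
    u = proj(v)
    # variational characterization of the projection: <u - v, w - u> >= 0, w in D
    for w in (-1.0, 0.0, 1.5, 2.0):
        assert (u - v) * (w - u) >= -1e-12
```

The clip is exactly the projection here because D is a box; for a general closed convex D the projection has no closed form but the same inequality characterizes it.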
Definition 1.
A mapping T : R^n → R^n is said to be Lipschitz continuous if there exists a constant l > 0 such that

$$\| T(u) - T(v) \| \le l\, \| u - v \|, \quad \forall u, v \in R^n.$$
Definition 2.
A mapping T : R^n → R^n is said to be monotone if

$$\langle T(u) - T(v),\, u - v\rangle \ge 0, \quad \forall u, v \in R^n.$$
Definition 3.
Let φ : R^n → R ∪ {+∞} be a proper convex lower semicontinuous function. For a constant ρ > 0, the resolvent operator J_φ^ρ : R^n → R^n is defined by

$$J_{\varphi}^{\rho}(v) = (I + \rho\,\partial\varphi)^{-1}(v), \quad \forall v \in R^n.$$

By the definition of J_φ^ρ, u = J_φ^ρ(z) means that (1/ρ)(z − u) ∈ ∂φ(u). Combining this with the definition of the subdifferential ∂φ, we see that ⟨(1/ρ)(z − u), v − u⟩ ≤ φ(v) − φ(u) for all v ∈ R^n; then we have the following lemma.
Lemma 1.
For given points u, z ∈ R^n [29], u = J_φ^ρ(z) if and only if

$$\langle u - z,\, v - u\rangle \ge \rho\varphi(u) - \rho\varphi(v), \quad \forall v \in R^n.$$
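As a quick sanity check of Lemma 1, take φ(u) = ½‖u‖² (our example, not from the text), for which ∂φ(u) = {u} and hence J_φ^ρ(z) = z/(1+ρ); the characterizing inequality can then be tested numerically at random points:

```python
import random

# For phi(u) = 0.5 * ||u||^2 we have d(phi)(u) = {u}, hence
# J_phi^rho(z) = (I + rho*I)^{-1} z = z / (1 + rho).
rho = 0.5

def phi(u):
    return 0.5 * sum(x * x for x in u)

def resolvent(z):
    return [x / (1.0 + rho) for x in z]

random.seed(0)
z = [random.uniform(-1, 1) for _ in range(4)]
u = resolvent(z)

# Lemma 1: <u - z, v - u> >= rho*phi(u) - rho*phi(v) for all v
for _ in range(100):
    v = [random.uniform(-5, 5) for _ in range(4)]
    lhs = sum((ui - zi) * (vi - ui) for ui, zi, vi in zip(u, z, v))
    rhs = rho * phi(u) - rho * phi(v)
    assert lhs >= rhs - 1e-9
```

For this φ the gap between the two sides is exactly (ρ/2)‖u − v‖² ≥ 0, so the inequality holds with equality only at v = u.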

3. Main Results

In this section, an iterative algorithm for problem (3) will be presented, and the related properties will be discussed.
Let ρ > 0 be a constant, T : R^n → R^n a single-valued mapping, and φ : R^m → R ∪ {+∞} a proper monotone convex lower semicontinuous function. Suppose g = (g_1, g_2, …, g_m) : R^n → R^m is a monotone and continuous mapping, and each component g_i of g is a monotone continuous convex function. Then the composition φ∘g is also proper convex lower semicontinuous.
Set

$$e(u, \rho) = u - J_{\varphi\circ g}^{\rho}(u - \rho T u), \qquad d(u, \rho) = e(u, \rho) - \rho\, T e(u, \rho) = e(u, \rho) - \rho\big(T u - T(J_{\varphi\circ g}^{\rho}(u - \rho T u))\big).$$
Then, we have the following proposition.
Proposition 1.
The problem (3) has a solution u ∈ R^n if and only if e(u, ρ) = 0.
Proof of Proposition 1.
If e(u, ρ) = 0, then

$$u - J_{\varphi\circ g}^{\rho}(u - \rho T u) = 0.$$

By Lemma 1, for any v ∈ R^n,

$$\langle u - (u - \rho T u),\, v - u\rangle \ge \rho\varphi(g(u)) - \rho\varphi(g(v)),$$

that is, ⟨Tu, v − u⟩ ≥ φ(g(u)) − φ(g(v)).
Thus, the sufficiency is proved. From the above routine, the necessity is obvious. □
We can give the algorithm for problem (3) as follows:
Algorithm 1.
For any u_0 ∈ R^n, define the iterative sequence {u_k} by

$$u_{k+1} = u_k - \frac{1}{\rho}\, d(u_k, \rho), \quad k = 0, 1, 2, \ldots$$
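Algorithm 1 can be sketched in a few lines once the operator T and the resolvent J = (I + ρ∂(φ∘g))^{−1} are supplied by the caller; the function and parameter names below are ours, and the stopping rule uses Proposition 1:

```python
# A direct sketch of Algorithm 1. The caller supplies the operator T and the
# resolvent J = (I + rho * d(phi o g))^{-1}; the names here are ours.

def algorithm1(u0, T, J, rho, tol=1e-10, max_iter=1000):
    """Iterate u_{k+1} = u_k - (1/rho) * d(u_k, rho) until e(u_k, rho) is small."""
    u = list(u0)
    for _ in range(max_iter):
        Tu = T(u)
        w = J([ui - rho * ti for ui, ti in zip(u, Tu)])   # J(u - rho * T u)
        e = [ui - wi for ui, wi in zip(u, w)]             # e(u, rho)
        if max(abs(ei) for ei in e) < tol:                # Proposition 1 test
            break
        Tw = T(w)
        d = [ei - rho * (ti - si) for ei, ti, si in zip(e, Tu, Tw)]
        u = [ui - di / rho for ui, di in zip(u, d)]
    return u
```

For instance, with T(u) = u/3 and J(z) = z/(1 + ρ) (the data of Example 1 below) and ρ = 0.7, the iterates are driven to the zero solution.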
To verify the convergence of { u k } in Algorithm 1, we first introduce the following lemma.
Lemma 2.
Let T, φ and g satisfy the conditions stated at the beginning of this section, and suppose in addition that T is monotone. Then, for any solution u* of problem (3), the following inequality holds:

$$\langle u - u^{*},\, d(u, \rho)\rangle \ge \langle e(u, \rho),\, d(u, \rho)\rangle.$$
Proof of Lemma 2.
Let u* be the solution of problem (3); then, for any u ∈ R^n,

$$\langle Tu^{*},\, J_{\varphi\circ g}^{\rho}(u - \rho Tu) - u^{*}\rangle \ge \varphi\circ g(u^{*}) - \varphi\circ g(J_{\varphi\circ g}^{\rho}(u - \rho Tu)).$$

Moreover, by Lemma 1, we have for all u ∈ R^n,

$$\langle u - \rho Tu - J_{\varphi\circ g}^{\rho}(u - \rho Tu),\; J_{\varphi\circ g}^{\rho}(u - \rho Tu) - u^{*}\rangle \ge \rho\,\varphi\circ g(J_{\varphi\circ g}^{\rho}(u - \rho Tu)) - \rho\,\varphi\circ g(u^{*}),$$

that is,

$$\Big\langle \frac{1}{\rho}\, e(u, \rho) - Tu,\; J_{\varphi\circ g}^{\rho}(u - \rho Tu) - u^{*}\Big\rangle \ge \varphi\circ g(J_{\varphi\circ g}^{\rho}(u - \rho Tu)) - \varphi\circ g(u^{*}).$$

By the monotonicity of T, we also have

$$\langle T(J_{\varphi\circ g}^{\rho}(u - \rho Tu)) - Tu^{*},\; J_{\varphi\circ g}^{\rho}(u - \rho Tu) - u^{*}\rangle \ge 0.$$

Adding (5), (6) and (7), then

$$\Big\langle T(J_{\varphi\circ g}^{\rho}(u - \rho Tu)) - Tu + \frac{1}{\rho}\, e(u, \rho),\; J_{\varphi\circ g}^{\rho}(u - \rho Tu) - u^{*}\Big\rangle \ge 0,$$

i.e.,

$$\big\langle d(u, \rho),\; \big(J_{\varphi\circ g}^{\rho}(u - \rho Tu) - u\big) + u - u^{*}\big\rangle = \big\langle d(u, \rho),\; -e(u, \rho) + u - u^{*}\big\rangle \ge 0.$$
This completes the proof. □
By Lemma 2, we can get the following convergence theorem.
Theorem 2.
Let T : R^n → R^n be a monotone and Lipschitz continuous mapping with Lipschitz constant l > 0 and let {u_k} be the iterative sequence generated by Algorithm 1. Then, for the solution u* of problem (3), the following inequality holds:

$$\| u_{k+1} - u^{*} \|^2 \le \| u_k - u^{*} \|^2 - \Big( \frac{2}{\rho} - \frac{1}{\rho^2} - l^2 - 2l \Big) \| e(u_k, \rho) \|^2.$$
Proof of Theorem 2.
It follows from Algorithm 1 that

$$\| u_{k+1} - u^{*} \|^2 = \Big\| u_k - u^{*} - \frac{1}{\rho}\, d(u_k, \rho) \Big\|^2 = \| u_k - u^{*} \|^2 + \frac{1}{\rho^2}\, \| d(u_k, \rho) \|^2 - \frac{2}{\rho}\, \langle u_k - u^{*},\, d(u_k, \rho)\rangle.$$
Since T is monotone and Lipschitz continuous,

$$\begin{aligned} \| d(u_k, \rho) \|^2 &= \big\| e(u_k, \rho) - \rho\big(Tu_k - T(J_{\varphi\circ g}^{\rho}(u_k - \rho Tu_k))\big) \big\|^2 \\ &= \| e(u_k, \rho) \|^2 + \rho^2 \big\| Tu_k - T(J_{\varphi\circ g}^{\rho}(u_k - \rho Tu_k)) \big\|^2 - 2\rho\, \big\langle e(u_k, \rho),\; Tu_k - T(J_{\varphi\circ g}^{\rho}(u_k - \rho Tu_k))\big\rangle \\ &\le \| e(u_k, \rho) \|^2 + \rho^2 l^2 \big\| u_k - J_{\varphi\circ g}^{\rho}(u_k - \rho Tu_k) \big\|^2 \\ &\le (1 + l^2 \rho^2)\, \| e(u_k, \rho) \|^2.\end{aligned}$$
By Lemma 2, we have

$$\langle u_k - u^{*},\, d(u_k, \rho)\rangle \ge \langle e(u_k, \rho),\, d(u_k, \rho)\rangle.$$
Moreover,

$$\begin{aligned} \langle e(u_k, \rho),\, d(u_k, \rho)\rangle &= \big\langle e(u_k, \rho),\; e(u_k, \rho) - \rho\big(Tu_k - T(J_{\varphi\circ g}^{\rho}(u_k - \rho Tu_k))\big)\big\rangle \\ &= \| e(u_k, \rho) \|^2 - \rho\, \big\langle u_k - J_{\varphi\circ g}^{\rho}(u_k - \rho Tu_k),\; Tu_k - T(J_{\varphi\circ g}^{\rho}(u_k - \rho Tu_k))\big\rangle \\ &\ge \| e(u_k, \rho) \|^2 - \rho\, \big\| u_k - J_{\varphi\circ g}^{\rho}(u_k - \rho Tu_k) \big\| \cdot \big\| Tu_k - T(J_{\varphi\circ g}^{\rho}(u_k - \rho Tu_k)) \big\| \\ &\ge \| e(u_k, \rho) \|^2 - \rho l\, \big\| u_k - J_{\varphi\circ g}^{\rho}(u_k - \rho Tu_k) \big\|^2 = (1 - \rho l)\, \| e(u_k, \rho) \|^2.\end{aligned}$$
It then follows from (9)–(11) that

$$\| u_{k+1} - u^{*} \|^2 \le \| u_k - u^{*} \|^2 - \Big( \frac{2}{\rho} - \frac{1}{\rho^2} - l^2 - 2l \Big) \| e(u_k, \rho) \|^2.$$
 □
When l < 1 and $\rho > \frac{1}{1 + \sqrt{2 - (l+1)^2}}$, we know that {u_k} is bounded and e(u_k, ρ) → 0, which means that {u_k} converges to a solution of problem (3). Thus, we get the following corollary of Theorem 2.
Corollary 1.
Let T : R^n → R^n be a monotone and Lipschitz continuous mapping with Lipschitz constant 0 < l < 1 and let {u_k} be the iterative sequence generated by Algorithm 1. When $\rho > \frac{1}{1 + \sqrt{2 - (l+1)^2}}$, the sequence {u_k} converges to a solution of problem (3).
To verify the above results, next we will construct a simple example.
Example 1.
Let u = (x_1, x_2, …, x_n)^T be an n-dimensional variable, f(u) = (1/6)(x_1² + x_2² + ⋯ + x_n²), g_1(u) = x_1, g_2(u) = x_2, ⋯, g_n(u) = x_n and φ(g_1, g_2, …, g_n) = (1/2)∑_{i=1}^{n} g_i², so that φ∘g(u) = (1/2)∑_{i=1}^{n} x_i², which satisfies all the conditions of Theorem 2. One can easily see that the minimizer of f is (0, 0, …, 0)^T. We now use Algorithm 1 to construct a convergent sequence.
Let T be the operator ∇f; then T(u) = (1/3)(x_1, x_2, …, x_n)^T, which is Lipschitzian with Lipschitz constant l = 1/3 < 1. For a given ρ, we can see that

$$J_{\varphi\circ g}^{\rho}(u) = (I + \rho\,\partial(\varphi\circ g))^{-1}(u) = \frac{1}{1+\rho}\, u,$$

thus

$$J_{\varphi\circ g}^{\rho}(u - \rho T u) = \frac{1}{1+\rho}\Big(u - \frac{1}{3}\rho u\Big) = \frac{1 - \frac{1}{3}\rho}{1+\rho}\, u.$$

Then, by the algorithm, we have

$$e(u, \rho) = \frac{\frac{4}{3}\rho}{1+\rho}\, u, \qquad d(u, \rho) = e(u, \rho) - \rho\, T e(u, \rho) = \frac{\frac{4}{3}\rho}{1+\rho}\, u - \rho\cdot\frac{1}{3}\cdot\frac{\frac{4}{3}\rho}{1+\rho}\, u = \frac{\frac{4}{3}\rho - \frac{4}{9}\rho^2}{1+\rho}\, u.$$

Thus, the sequence {u_k} is constructed as

$$u_{k+1} = u_k - \frac{1}{\rho}\, d(u_k, \rho) = \frac{\frac{13}{9}\rho - \frac{1}{3}}{1+\rho}\, u_k.$$
When $\left|\frac{\frac{13}{9}\rho - \frac{1}{3}}{1+\rho}\right| < 1$, the sequence {u_k} always converges to the minimizer (0, 0, …, 0)^T of f, and the smaller ρ is, the higher the convergence speed. By the condition $\rho > \frac{1}{1+\sqrt{2-(l+1)^2}}$, we can take the smallest admissible ρ to get the highest speed of the algorithm.
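Example 1 can be checked numerically by implementing e, d and the update directly from their definitions: for this data, each step amounts to multiplying u_k by a fixed scalar, which works out to ((13/9)ρ − 1/3)/(1+ρ), and the iterates vanish. A sketch with n = 2 and ρ = 0.8 (our choices; ρ = 0.8 satisfies the bound of Corollary 1 for l = 1/3):

```python
# Example 1 with n = 2: f(u) = (x1^2 + x2^2)/6, T = grad f = u/3,
# phi(g(u)) = (x1^2 + x2^2)/2, so J(z) = z/(1+rho).
rho = 0.8
coef = (13.0 / 9.0 * rho - 1.0 / 3.0) / (1.0 + rho)   # closed-form one-step factor

u = [6.0, -3.0]
for _ in range(50):
    w = [(1.0 - rho / 3.0) / (1.0 + rho) * x for x in u]           # J(u - rho*T u)
    e = [x - wx for x, wx in zip(u, w)]                            # e(u, rho)
    d = [ex - rho * (x - wx) / 3.0 for ex, x, wx in zip(e, u, w)]  # d(u, rho)
    u_next = [x - dx / rho for x, dx in zip(u, d)]
    # each step is exactly multiplication by the closed-form factor
    for xn, x in zip(u_next, u):
        assert abs(xn - coef * x) < 1e-12
    u = u_next

# the iterates converge to the minimizer (0, 0)
assert max(abs(x) for x in u) < 1e-6
```

At ρ = 0.8 the factor is about 0.457, so fifty iterations reduce the initial point by many orders of magnitude.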
Remark 2.
Problem (2) is the special case of problem (3) with φ = δ_D; thus we can also obtain an iterative sequence {u_k} converging to a solution of problem (2) by Algorithm 1.
Then, we consider the stability of Algorithm 1.
Theorem 3.
Let T : R^n → R^n be a monotone and Lipschitz continuous mapping with Lipschitz constant 0 < l < 1 and let {u_k} be the iterative sequence generated by Algorithm 1 for problem (3). Then the stability of Algorithm 1 can be guaranteed.
Proof of Theorem 3.
We first prove that the compound resolvent operator J_{φ∘g}^ρ is Lipschitz continuous with constant 1 (nonexpansive). Let u, v be any given points in R^n; it follows from Definition 3 that

$$J_{\varphi\circ g}^{\rho}(u) = (I + \rho\,\partial(\varphi\circ g))^{-1}(u) \quad \text{and} \quad J_{\varphi\circ g}^{\rho}(v) = (I + \rho\,\partial(\varphi\circ g))^{-1}(v),$$

which implies that

$$\frac{1}{\rho}\big[u - J_{\varphi\circ g}^{\rho}(u)\big] \in \partial(\varphi\circ g)(J_{\varphi\circ g}^{\rho}(u)) \quad \text{and} \quad \frac{1}{\rho}\big[v - J_{\varphi\circ g}^{\rho}(v)\big] \in \partial(\varphi\circ g)(J_{\varphi\circ g}^{\rho}(v)).$$

Then, by the monotonicity of the subdifferential,

$$\Big\langle \frac{1}{\rho}\big[u - J_{\varphi\circ g}^{\rho}(u)\big] - \frac{1}{\rho}\big[v - J_{\varphi\circ g}^{\rho}(v)\big],\; J_{\varphi\circ g}^{\rho}(u) - J_{\varphi\circ g}^{\rho}(v)\Big\rangle = \frac{1}{\rho}\Big\langle u - v - \big[J_{\varphi\circ g}^{\rho}(u) - J_{\varphi\circ g}^{\rho}(v)\big],\; J_{\varphi\circ g}^{\rho}(u) - J_{\varphi\circ g}^{\rho}(v)\Big\rangle \ge 0.$$

It follows that

$$\| u - v \| \cdot \big\| J_{\varphi\circ g}^{\rho}(u) - J_{\varphi\circ g}^{\rho}(v) \big\| \ge \big\langle u - v,\; J_{\varphi\circ g}^{\rho}(u) - J_{\varphi\circ g}^{\rho}(v)\big\rangle \ge \big\| J_{\varphi\circ g}^{\rho}(u) - J_{\varphi\circ g}^{\rho}(v) \big\|^2,$$

so

$$\big\| J_{\varphi\circ g}^{\rho}(u) - J_{\varphi\circ g}^{\rho}(v) \big\| \le \| u - v \|.$$
Let u_k be the iteration value and û_k the accurate value. Then we have

$$u_{k+1} = u_k - \frac{1}{\rho}\, d(u_k, \rho) \quad \text{and} \quad \hat{u}_{k+1} = \hat{u}_k - \frac{1}{\rho}\, d(\hat{u}_k, \rho).$$

Writing ε_k = u_k − û_k for the error between the iteration value and the exact value at the kth iteration, there holds

$$\varepsilon_{k+1} = \varepsilon_k - \frac{1}{\rho}\big(d(u_k, \rho) - d(\hat{u}_k, \rho)\big).$$
From Equation (4), we have

$$d(u_k, \rho) - d(\hat{u}_k, \rho) = e(u_k, \rho) - e(\hat{u}_k, \rho) - \rho\Big[Tu_k - T\hat{u}_k - \big(T(J_{\varphi\circ g}^{\rho}(u_k - \rho Tu_k)) - T(J_{\varphi\circ g}^{\rho}(\hat{u}_k - \rho T\hat{u}_k))\big)\Big].$$

Here,

$$\| e(u_k, \rho) - e(\hat{u}_k, \rho) \| = \Big\| \varepsilon_k - \big(J_{\varphi\circ g}^{\rho}(u_k - \rho Tu_k) - J_{\varphi\circ g}^{\rho}(\hat{u}_k - \rho T\hat{u}_k)\big)\Big\| \le \rho\, \| Tu_k - T\hat{u}_k \| \le \rho l\, \| \varepsilon_k \|,$$

and, noting that T is linear,

$$\Big\| Tu_k - T\hat{u}_k - \big(T(J_{\varphi\circ g}^{\rho}(u_k - \rho Tu_k)) - T(J_{\varphi\circ g}^{\rho}(\hat{u}_k - \rho T\hat{u}_k))\big)\Big\| \le l\, \| e(u_k, \rho) - e(\hat{u}_k, \rho) \| \le \rho l^2\, \| \varepsilon_k \|,$$

so

$$\rho l\, \| \varepsilon_k \| - \rho^2 l^2\, \| \varepsilon_k \| \le \| d(u_k, \rho) - d(\hat{u}_k, \rho) \| \le \rho l\, \| \varepsilon_k \| + \rho^2 l^2\, \| \varepsilon_k \|.$$
According to (12) and (17), we can get

$$\| \varepsilon_{k+1} \| \le \| \varepsilon_k \| - \frac{1}{\rho}\big| \rho l - \rho^2 l^2 \big|\, \| \varepsilon_k \| = \big| 1 - l + \rho l^2 \big|\, \| \varepsilon_k \| \le \cdots \le \big| 1 - l + \rho l^2 \big|^{k}\, \| \varepsilon_1 \|.$$

When 0 < l < 1 and ρ < 1/l, we have |1 − l + ρl²| < 1, so the effect of the error ε_k on later iterates is attenuated; thus the stability of Algorithm 1 is guaranteed. □
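The stability statement can also be probed empirically (our own experiment): running the Example 1 iteration from an exact and a slightly perturbed starting point, the gap between the two trajectories should shrink at every step, and at least as fast as the factor |1 − l + ρl²| from the proof suggests:

```python
rho, l = 0.8, 1.0 / 3.0
assert rho < 1.0 / l                   # stability condition used in Theorem 3
bound = abs(1.0 - l + rho * l * l)     # theoretical per-step damping factor

def step(u):
    w = [(1.0 - rho / 3.0) / (1.0 + rho) * x for x in u]   # J(u - rho*T u)
    e = [x - wx for x, wx in zip(u, w)]
    d = [ex - rho * (x - wx) / 3.0 for ex, x, wx in zip(e, u, w)]
    return [x - dx / rho for x, dx in zip(u, d)]

u = [2.0, -1.0]
u_hat = [2.0 + 1e-3, -1.0 + 1e-3]      # perturbed trajectory

gaps = []
for _ in range(20):
    gaps.append(max(abs(a - b) for a, b in zip(u, u_hat)))
    u, u_hat = step(u), step(u_hat)

# the gap shrinks at every step, and no slower than the theoretical factor
assert all(g2 < g1 for g1, g2 in zip(gaps, gaps[1:]))
assert all(g2 <= bound * g1 + 1e-15 for g1, g2 in zip(gaps, gaps[1:]))
```

In this linear example the observed damping (about 0.457 per step) is even stronger than the bound 1 − l + ρl² ≈ 0.756, consistent with the theorem giving only an upper estimate.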

4. Application and Comparison

It is well known that linear problems can be seen as special cases of nonlinear problems, so the following application to oilfield production distribution optimization can be handled with our results.
The historical data of eight sub-measures and the corresponding influencing factors of an oilfield in its mid-to-late development stage from 2000 to 2006 are presented in Table 1. These historical data reflect, to some extent, the dynamics of the oilfield's development [30].
The measure cost c_i per unit of output and the coefficient of resource consumption a_{3i} per unit of output in the planning year are estimated as follows:

$$c_i = \frac{\text{Cost of the } i\text{th measure in the planning year } (10^4\ \text{yuan})}{\text{Oil production of the } i\text{th measure in the planning year } (10^4\ \text{t})} \approx \frac{\text{Total cost of the } i\text{th measure before the planning year } (10^4\ \text{yuan})}{\text{Total oil production of the } i\text{th measure before the planning year } (10^4\ \text{t})},$$

$$a_{3i} \approx \frac{\text{Total workload of the } i\text{th measure before the planning year } (10^4\ \text{yuan})}{\text{Total oil production of the } i\text{th measure before the planning year } (10^4\ \text{t})}.$$
Supposing that the forecast annual oil price is 2760 × 10^4 yuan per 10^4 t, the measure cost c_i per unit of output (Table 2) and the resource consumption coefficient a_{3i} per unit of output (Table 3) in the planning year can be calculated from Table 1 and Formulas (14) and (15).
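As a quick consistency check on the tabulated data (an observation of ours, not stated in the paper): with the forecast price of 2760 × 10^4 yuan per 10^4 t, the objective coefficients of Table 2 appear to be exactly the per-unit margins, i.e. price minus the Table 3 consumption coefficients, up to rounding:

```python
# Objective coefficients c_i (Table 2) and unit consumption a_3i (Table 3)
c = [2119.74, 2003.78, 2546.59, 2081.86, 2160.20, 2148.73, 2067.41, 2017.49]
a3 = [640.3, 756.2, 213.4, 678.1, 599.8, 611.3, 692.6, 742.5]
price = 2760.0   # forecast oil price (10^4 yuan per 10^4 t)

# up to rounding in the tables, c_i + a_3i equals the forecast price
for ci, ai in zip(c, a3):
    assert abs(ci + ai - price) < 0.1
```

All eight pairs sum to 2760 within 0.05, which links Tables 2 and 3 through the assumed price.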
According to the overall historical oil production, investment and workload from 2000 to 2006, as well as the country’s demand for crude oil and the technical level of oilfields, the group headquarters and the oilfield branch determined the resource constraint levels for 2007 in Table 4.
Here, d_{j1} (j = 1, 2, 3) represents the corresponding constraints as high-level constraints, and d_{j2} (j = 1, 2, 3) represents the corresponding constraints as low-level constraints. The model of the output distribution of the oilfield in 2007 is

$$\max\; CX, \quad \text{s.t.}\quad AX \le b,\; X \ge 0.$$

Here,

$$C = \begin{pmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 2119.74 & 2003.78 & 2546.59 & 2081.86 & 2160.20 & 2148.73 & 2067.41 & 2017.49 \end{pmatrix}, \qquad X = (x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8)^{T},$$

$$A = \begin{pmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 640.3 & 756.2 & 213.4 & 678.1 & 599.8 & 611.3 & 692.6 & 742.5 \end{pmatrix}, \qquad b = (90,\; 35{,}000,\; 1450,\; 140,\; 49{,}000,\; 1760)^{T}, \qquad x_i \ge 0\ (i = 1, 2, \ldots, 8).$$
The corresponding relaxation form of this problem is

$$\max\; \lambda^{T} CX, \quad \text{s.t.}\quad AX \le b\gamma,\; X \ge 0,$$

where λ = (λ_1, λ_2)^T and γ = (γ_1, γ_2)^T.
By Algorithm 1, we have

$$T(u) = \nabla f = (\lambda_1 + 2119.74\lambda_2,\ \lambda_1 + 2003.78\lambda_2,\ \lambda_1 + 2546.59\lambda_2,\ \lambda_1 + 2081.86\lambda_2,\ \lambda_1 + 2160.20\lambda_2,\ \lambda_1 + 2148.73\lambda_2,\ \lambda_1 + 2067.41\lambda_2,\ \lambda_1 + 2017.49\lambda_2)^{T}.$$
Here, we set A := A^T A and B := A^T b to make problem (18) easier to solve. Then

$$J_{\varphi\circ g}^{\rho}(u_k - Tu_k) = (I + \rho\,\partial(\varphi\circ g))^{-1}(u_k - Tu_k) = (I + A)^{-1}(u_k - Tu_k),$$

$$e(u_k, \rho) = u_k - J_{\varphi\circ g}^{\rho}(u_k - Tu_k) = u_k - (I + A)^{-1}(u_k - Tu_k),$$

$$d(u_k, \rho) = e(u_k, \rho) - \rho\big[Tu_k - T(J_{\varphi\circ g}^{\rho}(u_k - Tu_k))\big] = u_k - (I + A)^{-1}(u_k - Tu_k) - \rho\big[Tu_k - T(J_{\varphi\circ g}^{\rho}(u_k - Tu_k))\big].$$
Taking λ_1 = λ_2 = 0.5, γ_1 = 0.4 and γ_2 = 0.6, the measure production and the total production in the planning year can be calculated with Formulas (18)–(20). Table 5 shows the results of measure production and total production, and Figure 1 shows the convergence of the solution for each measure.
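One structural consequence of the linear objective is worth noting: T(u) here is a constant vector, so Tu_k − T(J(·)) vanishes and d(u_k, ρ) = e(u_k, ρ), making every step of Algorithm 1 a pure resolvent step. A minimal sketch (the box projection below stands in for the resolvent (I + A)^{−1} built from the constraint data and is our own simplification; λ and the first two coefficients are from this section):

```python
# With the linear objective, T(u) = grad(lambda^T C X) is constant in u.
lam1 = lam2 = 0.5
c = [2119.74, 2003.78]                    # first two Table 2 coefficients
t = [lam1 + lam2 * ci for ci in c]        # constant gradient vector

rho = 1.0

def T(u):
    return t                              # constant mapping

def J(z):
    # stand-in resolvent: projection onto the box [0, 100]^2 (our choice; the
    # paper instead builds (I + A^T A)^{-1} from the constraint data)
    return [min(max(zi, 0.0), 100.0) for zi in z]

u = [10.0, 20.0]
w = J([ui - rho * ti for ui, ti in zip(u, T(u))])
e = [ui - wi for ui, wi in zip(u, w)]
d = [ei - rho * (ti - si) for ei, ti, si in zip(e, T(u), T(w))]
assert d == e    # the correction term vanishes for constant T
```

This explains why the linear application converges in very few iterations (Table 6): the correction term in d contributes nothing, and only the resolvent does work.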
By comparing with the MC2-simplex method and the MC2-interior point method, some differences can be found in the number of iterations and the computational cost.
Table 6 shows that the computational cost of the three algorithms differs only slightly on this problem, because the numbers of decision variables, objective functions and constraint levels involved are relatively small. For large-scale problems, however, clear differences in the number of iterations and computational cost can be expected. Although the numbers of iterations of Algorithm 1 and the MC2-simplex method are the same, we believe that our method remains efficient when the model becomes nonlinear.

5. Conclusions

This paper proposed Algorithm 1 for MC2NLP problems; its novelty lies in the application of the theory of variational inequalities and the resolvent operator technique. The convergence of the generated iterative sequence was analyzed via Example 1, and the stability of Algorithm 1 was verified by an error-propagation argument. In the analysis, we found that the Lipschitz constant l and the parameter ρ are the key factors for Algorithm 1 to maintain convergence and stability. In Section 4, Algorithm 1 was utilized to solve the application problem (17); the comparison with two other algorithms showed that this method performs efficiently on nonlinear problems.
Multi-objective, multi-criteria programming is an important research topic in operations research and management science, not only because of the multi-criteria and multi-constraint-level nature of most real-world decision problems, but also because many open questions remain in this area. Algorithm 1, as a new tool for multi-criteria and multi-objective problems in decision systems, provides algorithmic support for decision makers. The basic structural ideas of this algorithm can be extended to practical linear and nonlinear problems such as project evaluation, program decision-making, engineering, industrial sector development sequencing and rational allocation of resources, so as to consume fewer resources. In addition, the proposed algorithm will be improved in actual applications.

Author Contributions

Conceptualization, C.M.; Writing—original draft, C.M.; Methodology, C.M.; Writing—review and editing, F.F.; Validation, F.F.; Formal analysis, Z.Y. and X.L.

Funding

This work was supported by the National Natural Science Foundation of China (11526173, 11601451) and the Scientific Research Starting Project of Southwest Petroleum University (2015QHZ028).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Murat, T.A.; Alper, B. Robustness analysis of multi-criteria collaborative filtering algorithms against shilling attacks. Expert Syst. Appl. 2018, 115, 386–402.
2. Diemuodeke, E.O.; Addo, A.; Oko, C.O.; Mulugetta, Y.; Ojapah, M.M. Optimal mapping of hybrid renewable energy systems for locations using multi-criteria decision-making algorithm. Renew. Energy 2019, 134, 461–477.
3. Garg, H.; Nancy. Non-linear programming method for multi-criteria decision making problems under interval neutrosophic set environment. Appl. Intell. 2017, 48, 2199–2213.
4. Suzdaltsev, I.V.; Chermoshentsev, S.F.; Bogula, N.Y. Bionic algorithms for multi-criteria design of electronic equipment printed circuit board. IEEE Int. Conf. SCM 2017, 394–396.
5. Yong, S. MC2 linear programming. In Multiple Criteria Multiple Constraint-Level Linear Programming: Concepts, Techniques and Applications; World Scientific: Singapore, 2001; pp. 68–91.
6. Seiford, L.; Yu, P.L. Potential solutions of linear systems: The multi-criteria multiple constraint levels program. J. Math. Anal. Appl. 1979, 6, 283–303.
7. Shi, Y. A transportation model with multiple criteria and multiple constraint levels. Math. Comput. Model. 1995, 21, 13–28.
8. Shi, Y.; Wise, M.; Luo, M.; Lin, Y. Data mining in credit card portfolio management: A multiple criteria decision making approach. In Multiple Criteria Decision Making in the New Millennium; Köksalan, M., Zionts, S., Eds.; Springer: Berlin, Germany, 2001; pp. 420–436.
9. Li, G.Y.; Tang, X.M.; Zhang, C.Y.; Gao, X. Multi-criteria constraint algorithm for selecting ICESat/GLAS data as elevation control points. J. Remote Sens. 2017, 21, 96–104.
10. Kwak, W.; Shi, Y.; Eldridge, S.W.; Kou, G. Bankruptcy prediction for Japanese firms: Using multiple criteria linear programming data mining approach. Adv. Inv. Anal. Portf. Manag. 2006, 1, 27–49.
11. Zhang, J.L.; Shi, Y.; Zhang, P. Several multi-criteria programming methods for classification. Comput. Oper. Res. 2009, 36, 823–836.
12. Bellare, M.; Namprempre, C.; Neven, G. Lecture Notes in Computer Science. J. Cryptol. 2004, 22, 1–61.
13. Costa Batista, A.; Batista, L.S. Demand side management using a multi-criteria ϵ-constraint based exact approach. Expert Syst. Appl. 2018, 99, 180–192.
14. Kou, G.; Peng, Y.; Zhang, C.; Shi, Y. Multiple criteria mathematical programming for multi-class classification and application in network intrusion detection. Inf. Sci. 2009, 179, 371–381.
15. Meghwani, S.S.; Thakur, M. Multi-criteria algorithms for portfolio optimization under practical constraints. Swarm Evol. Comput. 2017, 37, 104–125.
16. Shi, Y.; He, J.; Wang, L.; Fan, W. Computer-based algorithms for multiple criteria and multiple constraint level integer linear programming. Comput. Math. Appl. 2005, 49, 903–921.
17. Chen, D.; Zhong, Y.; Liao, Y.; Li, L. Review of multiple criteria and multiple constraint-level linear programming. Procedia Comput. Sci. 2013, 17, 158–165.
18. Wang, B.; Wang, Y. A multiple-criteria and multiple-constraint levels linear programming based error correction classification model. Procedia Comput. Sci. 2013, 17, 1073–1082.
19. Gonçalves, M.L.N.; Melo, J.G. Augmented Lagrangian methods for nonlinear programming with possible infeasibility. J. Glob. Optim. 2015, 63, 297–318.
20. Gill, P.E.; Wong, E. Sequential quadratic programming methods. In Mixed Integer Nonlinear Programming; Lee, J., Leyffer, S., Eds.; Springer: New York, NY, USA, 2012; pp. 147–225.
21. Lawrence, C.T.; Tits, A.L. Nonlinear equality constraints in feasible sequential quadratic programming. Optim. Methods Softw. 2007, 6, 262–285.
22. Lasdon, L.S.; Fox, R.L.; Ratner, M.W. Nonlinear optimization using the generalized reduced gradient method. Revue Française d'Automatique, Informatique, Recherche Opérationnelle 1974, 8, 73–103.
23. Toyasaki, F.; Daniele, P.; Wakolbinger, T. A variational inequality formulation of equilibrium models for end-of-life products with nonlinear constraints. Eur. J. Oper. Res. 2014, 236, 340–350.
24. Tang, Y.K.; Chang, S.S.; Salahuddin, S. A system of nonlinear set valued variational inclusions. SpringerPlus 2014, 3, 1–10.
25. Gowda, M.S.; Sossa, D. Weakly homogeneous variational inequalities and solvability of nonlinear equations over cones. Math. Program. 2018, 1–23.
26. He, B.S. A class of projection and contraction methods for monotone variational inequalities. Appl. Math. Optim. 1997, 35, 69–76.
27. Huang, N.J.; Fang, Y.P. A new iterative algorithm for a class of nonlinear mixed implicit variational inequalities. Appl. Math. Lett. 2003, 16, 507–511.
28. Dontchev, A.L.; Rockafellar, R.T. Solution mappings for variational problems; Set-valued analysis of solution mappings. In Implicit Functions and Solution Mappings; Springer: New York, NY, USA, 2009; pp. 69–212.
29. Webb, J.R.L. Opérateurs Maximaux Monotones et Semi-groupes de Contractions dans les Espaces de Hilbert. Bull. Lond. Math. Soc. 1974, 6, 221–222.
30. Zhong, Y.H.; Zhao, L.; Zhang, G.L. The oilfield measure structure optimization model of the multi criteria and multi constraint levels and its application. J. Oil Gas Technol. 2008, 30, 124–128.
Figure 1. Convergence result of Algorithm 1.
Table 1. The historical data.

Measure          Parameter                2000    2001    2002    2003    2004    2005    2006
Fracturing       oil production/10^4 t   6.348   7.211   3.203   2.186   3.012   4.386   8.711
                 number of wells         55      60      44      41      33      53      91
                 cost/10^4 yuan          3366    4671    2144    1472    1968    3038    5364
Acidification    oil production/10^4 t   3.65    2.26    4.59    4.24    4.02    4.08    5.34
                 number of wells         54      61      83      74      61      72      85
                 cost/10^4 yuan          2847    1599    3374    3282    3039    3224    4013
Fill holes       oil production/10^4 t   69.58   71.06   86.82   72.03   71.05   67.87   53.65
                 number of wells         883     893     961     812     811     920     735
                 cost/10^4 yuan          15,277  15,352  18,647  15,644  18,311  11,128  10,977
Turn extraction  oil production/10^4 t   4.23    4.12    3.83    2.47    2.03    2.57    2.39
                 number of wells         56      46      54      47      44      47      42
                 cost/10^4 yuan          2965    3050    2886    1522    1192    1720    1623
Large pumps      oil production/10^4 t   5.73    9.83    8.69    5.42    8.00    5.98    5.80
                 number of wells         120     147     144     116     129     114     87
                 cost/10^4 yuan          4975    5222    4582    2852    5209    3304    3145
Water plugging   oil production/10^4 t   11.02   10.52   11.03   11.48   11.00   10.51   17.11
                 number of wells         262     253     255     243     239     216     303
                 cost/10^4 yuan          6958    6717    7046    7394    7182    5904    8760
Overhaul         oil production/10^4 t   4.97    5.63    5.58    2.08    3.00    2.27    1.73
                 number of wells         65      73      69      32      44      40      35
                 cost/10^4 yuan          3826    4036    3666    1325    2233    1454    1183
Other measures   oil production/10^4 t   8.85    10.13   6.23    5.73    4.00    8.29    7.24
                 number of wells         176     184     132     119     107     187     146
                 cost/10^4 yuan          7215    7559    4402    4658    2515    5877    5632
Table 2. Objective coefficient (yuan/t).

c_1       c_2       c_3       c_4       c_5       c_6       c_7       c_8
2119.74   2003.78   2546.59   2081.86   2160.20   2148.73   2067.41   2017.49
Table 3. Unit resource consumption coefficient (yuan/t).

a_31    a_32    a_33    a_34    a_35    a_36    a_37    a_38
640.3   756.2   213.4   678.1   599.8   611.3   692.6   742.5
Table 4. Resource constraint levels (10^4 yuan).

d_11   d_12   d_21     d_22     d_31   d_32
90     140    35,000   49,000   1450   1760
Table 5. Result of measure production and total production (10^4 t).

Measure      Fracturing       Acidification   Fill holes       Turn extraction   Large pumps
Production   5.9625           0.4221          81.4449          2.3421            5.4038

Measure      Water plugging   Overhaul        Other measures   Total
Production   13.8675          2.8115          6.1651           118.4195
Table 6. Comparison study of the speed of three algorithms.

Comparison factors           Algorithm 1   MC2-simplex method   MC2-interior point method
Number of iterations         2             2                    6
Computation complexity (s)   0.003291      0.002884             0.010653

Share and Cite

Min, C.; Fan, F.; Yang, Z.; Li, X. An Iterative Algorithm for the Nonlinear MC2 Model with Variational Inequality Method. Mathematics 2019, 7, 514. https://doi.org/10.3390/math7060514