Article

A Full-Newton Step Interior-Point Method for Weighted Quadratic Programming Based on the Algebraic Equivalent Transformation

Yongsheng Rao, Jianwei Su and Behrouz Kheirfam
1 Institute of Computing Science and Technology, Guangzhou University, Guangzhou 510006, China
2 Department of Mathematics, Azarbaijan Shahid Madani University, Tabriz 5375171379, Iran
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(7), 1104; https://doi.org/10.3390/math12071104
Submission received: 28 February 2024 / Revised: 3 April 2024 / Accepted: 4 April 2024 / Published: 7 April 2024
(This article belongs to the Section Computational and Applied Mathematics)

Abstract

In this paper, a new full-Newton step feasible interior-point method for convex quadratic programming is presented and analyzed. The idea behind this method is to replace the complementarity condition with a non-negative weight vector and use the algebraic equivalent transformation for the obtained equation. Under the selection of appropriate parameters, the quadratic rate of convergence of the new algorithm is established. In addition, the iteration complexity of the algorithm is obtained. Finally, some numerical results are presented to demonstrate the practical performance of the proposed algorithm.

1. Introduction

Since Karmarkar’s seminal paper [1], a large amount of research has been devoted to the study of interior-point methods (IPMs). IPMs are one of the most efficient methods for solving linear optimization (LO). At the same time, these methods have been extended to other optimization problems, including convex quadratic programming (CQP), semidefinite optimization (SDO), etc.
Full-Newton step IPMs for solving LO were initiated by Roos et al. [2]. The main advantage of these methods is that they use only full-Newton steps, so no line searches are needed. Furthermore, under some mild assumptions, the iterates always lie in the quadratic convergence neighborhood. In 2003, Darvay [3] proposed an algebraic equivalent transformation (AET) technique to determine search directions in IPMs for LO. He applied a continuously differentiable and monotone function $\psi : [0, \infty) \to \mathbb{R}$ to both sides of the centering equation of the central path and then used Newton's method to derive the search directions. In addition, he introduced a full-Newton step primal–dual path-following interior-point algorithm for LO using the square root function in the AET technique. Later on, Achache [4], Wang and Bai [5,6,7] and Wang et al. [8] extended Darvay's algorithm for LO to CQP; to second-order cone optimization (SOCO), SDO and symmetric cone optimization (SCO); and to the $P_*(\kappa)$ linear complementarity problem ($P_*(\kappa)$-LCP), respectively. Boudjellal et al. [9,10] proposed primal–dual interior-point algorithms for solving convex quadratic programming (CQP) problems based on parametric kernel functions with exponential and polynomial barrier terms.
The weighted linear complementarity problem (WLCP) was introduced by Potra [11]. In that paper, Potra defined a smooth central path for the WLCP and proposed two interior-point algorithms to solve the WLCP, both of which followed the smooth central path. Asadi et al. [12] extended the full-Newton step IPM introduced in [2] to the monotone WLCP and proved the quadratic rate of convergence to the target points on the smooth central path. Recently, Kheirfam [13] extended the full-Newton step IPM based on the $\psi(z) = \sqrt{z}$ function in the AET technique to the monotone WLCP. Based on the function $\psi(z) = \sqrt{z}$ in the AET technique, Zhang et al. [14] defined a predictor–corrector interior-point algorithm for the $P_*(\kappa)$-WLCP. Very recently, Boudjellal and Benterki [15] extended the full-Newton step feasible IPM to solve CQP problems based on replacing the complementarity condition by a non-negative weight vector.
Inspired by the works mentioned above, we extend the full-Newton step IPM to CQP. We replace the complementarity condition with a non-negative weight vector and then use the $\psi(z) = \sqrt{z}$ function in the AET technique. We apply Newton's method to the system defining the weighted central path to obtain search directions and take full steps along these directions. We prove the feasibility of the full steps and the quadratic rate of convergence to the target points on the weighted central path. By choosing appropriate values for the parameters, we obtain an iteration bound for WCQP with the same complexity as the one previously obtained for this type of problem. The novelty of the proposed method lies in the use of the AET to solve the convex QP; the method presented in [15] is the special case of our method corresponding to $\psi(z) = z$.
The paper is organized as follows. In Section 2, we recall the primal–dual pair of CQP problems and then state the weighted central path for the CQP problem. In Section 3, we describe the AET technique on the weighted central path and define a norm-based proximity measure. A generic framework of the algorithm is presented. Section 4 is devoted to the analysis of the algorithm. In Section 5, we derive an iteration bound for the proposed algorithm. Some numerical results are presented in Section 6. Concluding remarks are given in Section 7.

2. CQP and Its Weighted Central Path

Consider the primal–dual CQP problem pair in the following standard form:
$$\min\left\{c^Tx + \tfrac{1}{2}x^TQx \;:\; Ax = b,\ x \ge 0\right\}, \qquad \max\left\{b^Ty - \tfrac{1}{2}x^TQx \;:\; A^Ty + s - Qx = c,\ s \ge 0\right\}, \tag{1}$$
where $Q \in \mathbb{R}^{n\times n}$ is a symmetric and positive semidefinite matrix, $A \in \mathbb{R}^{m\times n}$ is a full-row-rank matrix, $c \in \mathbb{R}^n$, and $b \in \mathbb{R}^m$. The vectors $x \in \mathbb{R}^n$, $s \in \mathbb{R}^n$ and $y \in \mathbb{R}^m$ are the decision variables. Let $\mathcal{F}^0$ denote the set of strictly feasible solutions of the primal–dual pair (1), i.e.,
$$\mathcal{F}^0 = \left\{(x, y, s) : Ax = b,\ A^Ty + s - Qx = c,\ x > 0,\ s > 0\right\}.$$
It is well known that finding an optimal solution of the primal–dual pair (1) is equivalent to solving the following Karush–Kuhn–Tucker (KKT) optimality conditions:
$$Ax = b, \quad x \ge 0, \qquad A^Ty + s - Qx = c, \quad s \ge 0, \qquad xs = 0, \tag{2}$$
where $xs = [x_1 s_1, \ldots, x_n s_n]^T$ denotes the coordinate-wise (Hadamard) product of the vectors $x$ and $s$. We consider the same central path as introduced in [15]. Let an initial point $(x^0, y^0, s^0) \in \mathcal{F}^0$ be given; we define
$$t_0 := \frac{(x^0)^T s^0}{n}, \quad cc := x^0 s^0, \quad \gamma := \frac{\min(cc)}{t_0}, \quad w(t) := \Big(1 - \frac{t}{t_0}\Big)w^+ + \frac{t}{t_0}\,cc, \quad w^+ := (1-\theta)w, \tag{3}$$
where $t \in (0, t_0]$, $w \in \mathbb{R}^n_+$ and $\theta \in (0, 1)$. The basic idea of IPMs for weighted problems is to replace the complementarity condition $xs = 0$ in (2) by the parameterized equation $xs = w(t)$. In this way, we obtain the following perturbed system
$$Ax = b, \quad x \ge 0, \qquad A^Ty + s - Qx = c, \quad s \ge 0, \qquad xs = w(t). \tag{4}$$
Under our assumptions, it is proved in [16] that the system (4) has a unique solution for each $t \in (0, t_0]$, denoted by $(x(w(t)), y(w(t)), s(w(t)))$. The set of all these solutions forms the so-called weighted path for (1). If $t \to 0$ and $w \to 0$, then $w(t) \to 0$, the limit of the weighted path exists, and the limit point satisfies the complementarity condition. Therefore, the limit gives an optimal solution of (1).
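As an illustration of the quantities in (3), the following NumPy sketch computes $t_0$, $cc$, $\gamma$ and the target $w(t)$ from a strictly feasible starting point. It is our own illustrative translation of the definitions; the function and variable names are not from the paper.

```python
import numpy as np

def weighted_path_data(x0, s0, w, theta):
    """Return t0, cc, gamma and a callable t -> w(t), following (3) (illustrative sketch)."""
    t0 = x0 @ s0 / x0.size          # t0 := (x0)^T s0 / n
    cc = x0 * s0                    # cc := x0 s0 (componentwise product)
    gamma = cc.min() / t0           # gamma := min(cc) / t0
    w_plus = (1.0 - theta) * w      # w^+ := (1 - theta) w

    def w_of_t(t):
        # w(t) := (1 - t/t0) w^+ + (t/t0) cc
        return (1.0 - t / t0) * w_plus + (t / t0) * cc

    return t0, cc, gamma, w_of_t
```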

3. New Search Direction and Algorithm

According to the idea of algebraic equivalent transformation presented by Darvay [3], we write the system (4) in the following equivalent form:
$$Ax = b, \quad x \ge 0, \qquad A^Ty + s - Qx = c, \quad s \ge 0, \qquad \psi(xs) = \psi(w(t)), \tag{5}$$
where $\psi : [\xi, \infty) \to \mathbb{R}$ is a continuously differentiable function with $\psi'(z) > 0$ for all $z \in [\xi, \infty)$ and $\xi \in [0, 1]$. It is worth noting that the transformed system (5) does not change the weighted path and only specifies different search directions depending on the function $\psi$.
Let us be at a point $(x, y, s) \in \mathcal{F}^0$; then, by applying Newton's method to system (5), the search direction $(\Delta x, \Delta y, \Delta s)$ is the solution of the following linear system:
$$A\Delta x = 0, \qquad A^T\Delta y + \Delta s - Q\Delta x = 0, \qquad s\Delta x + x\Delta s = \frac{\psi(w(t)) - \psi(xs)}{\psi'(xs)}. \tag{6}$$
Considering the function $\psi(z) = \sqrt{z}$ for system (6), we obtain the following system
$$A\Delta x = 0, \qquad A^T\Delta y + \Delta s - Q\Delta x = 0, \qquad s\Delta x + x\Delta s = 2\sqrt{xs}\left(\sqrt{w(t)} - \sqrt{xs}\right). \tag{7}$$
The new iterates are then given by
$$x^+ := x + \Delta x, \qquad y^+ := y + \Delta y, \qquad s^+ := s + \Delta s; \tag{8}$$
that is, a full-Newton step is taken along the search directions.
Remark 1.
Note that for $\psi(z) = z$, we obtain the search directions introduced in [15].
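To make the construction of the search directions concrete, the Newton system (7) can be assembled and solved as one dense block linear system. The sketch below is an illustrative NumPy implementation under that reading; the helper name newton_direction and the dense solve strategy are our own choices, not part of the paper.

```python
import numpy as np

def newton_direction(A, Q, x, s, w_t):
    """Solve system (7) for the directions (dx, dy, ds) (illustrative dense sketch).

    Assumes x > 0, s > 0 and w_t = w(t) >= 0; suitable only for small examples.
    """
    m, n = A.shape
    rhs_c = 2.0 * np.sqrt(x * s) * (np.sqrt(w_t) - np.sqrt(x * s))
    K = np.zeros((m + 2 * n, m + 2 * n))
    K[:m, :n] = A                       # A dx = 0
    K[m:m + n, :n] = -Q                 # -Q dx + A^T dy + ds = 0
    K[m:m + n, n:n + m] = A.T
    K[m:m + n, n + m:] = np.eye(n)
    K[m + n:, :n] = np.diag(s)          # s dx + x ds = 2 sqrt(xs)(sqrt(w(t)) - sqrt(xs))
    K[m + n:, n + m:] = np.diag(x)
    rhs = np.concatenate([np.zeros(m + n), rhs_c])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:n + m], sol[n + m:]
```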
For ease of analysis, we consider a scaled version of (7). For this purpose, we introduce the vector $v$ and the scaled search directions $d_x$ and $d_s$ as follows:
$$v := \sqrt{\frac{xs}{t}}, \qquad d_x := \frac{v\,\Delta x}{x}, \qquad d_s := \frac{v\,\Delta s}{s}.$$
With these notations, one easily checks that the system (7) can be rewritten as follows:
$$\bar A\,d_x = 0, \qquad \bar A^T\,\frac{\Delta y}{t} + d_s - \bar Q\,d_x = 0, \qquad d_x + d_s = p_v, \tag{9}$$
where $\bar A := \sqrt{t}\,AD$, $\bar Q := DQD$, $D := \mathrm{diag}\left(\sqrt{\frac{x}{s}}\right) = \mathrm{diag}\left(\sqrt{\frac{x_1}{s_1}}, \ldots, \sqrt{\frac{x_n}{s_n}}\right)$, and $p_v := 2\left(\sqrt{\frac{w(t)}{t}} - v\right)$.
We define the norm-based proximity $\delta(v)$ to measure the distance between the current iterate $(x, y, s)$ and the weighted center $(x(w(t)), y(w(t)), s(w(t)))$ for a given $t > 0$, as follows:
$$\delta(v) := \delta(x, s; t) = \frac{1}{2}\|p_v\| = \left\|\sqrt{\frac{w(t)}{t}} - v\right\|. \tag{10}$$
Note that
$$xs = w(t) \iff v^2 = \frac{w(t)}{t} \iff v = \sqrt{\frac{w(t)}{t}} \iff \delta(v) = 0,$$
where $v^2 = vv$ is the componentwise square of $v$. Let $q_v := d_x - d_s$. Then, by the third equation of (9), we have
$$d_x = \frac{p_v + q_v}{2}, \qquad d_s = \frac{p_v - q_v}{2}, \qquad d_x d_s = \frac{p_v^2 - q_v^2}{4}. \tag{11}$$
Furthermore, we have
$$\|q_v\|^2 = \|d_x - d_s\|^2 = \|d_x + d_s\|^2 - 4\,d_x^Td_s \le \|p_v\|^2 = 4\delta^2, \tag{12}$$
where the inequality comes from the fact that
$$d_x^Td_s = d_x^T\left(\bar Q\,d_x - \bar A^T\,\frac{\Delta y}{t}\right) = d_x^T\bar Q\,d_x \ge 0,$$
where the first equality is due to the second equation of (9), and the second equality follows from the first equation of (9).
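The scaled quantities and the proximity measure (10) translate directly into a few lines of code. The following helper (our own naming, given purely for illustration) can be used to monitor how close the iterates stay to the weighted center.

```python
import numpy as np

def proximity(x, s, t, w_t):
    """Return v, p_v and delta(v) as defined in Section 3 (illustrative sketch)."""
    v = np.sqrt(x * s / t)                 # v := sqrt(xs / t)
    p_v = 2.0 * (np.sqrt(w_t / t) - v)     # p_v := 2 (sqrt(w(t)/t) - v)
    delta = 0.5 * np.linalg.norm(p_v)      # delta(v) = ||p_v|| / 2
    return v, p_v, delta
```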
We now give a generic framework of our new weighted interior-point algorithm (Algorithm 1).  
Algorithm 1: Full-Newton step IPM for WCQP
Input:
$A \in \mathbb{R}^{m\times n}$, $Q \in \mathbb{R}^{n\times n}$ (symmetric and positive semidefinite), $c \in \mathbb{R}^n$, $b \in \mathbb{R}^m$;
the accuracy parameter $\varepsilon > 0$;
the threshold parameter $\tau \in (0, 1)$;
the barrier update parameter $\theta \in (0, 1)$;
an initial point $(x^0, y^0, s^0) \in \mathcal{F}^0$ with $\delta(x^0, s^0; t_0) \le \tau$, where $t_0 = \frac{(x^0)^Ts^0}{n}$;
$cc = x^0 s^0$, $w^0 \ge cc$;
begin
    $x = x^0$, $y = y^0$, $s = s^0$, $t = t_0$, $w = w^0$;
    while $\|w - xs\| > \varepsilon$ do
        Set $t = (1-\theta)t$, $w = (1-\theta)w$, $w(t) = \left(1 - \frac{t}{t_0}\right)w + \frac{t}{t_0}\,cc$;
        Determine $(\Delta x, \Delta y, \Delta s)$ according to (7);
        Set $(x, y, s) = (x, y, s) + (\Delta x, \Delta y, \Delta s)$;
    end
end.
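The generic framework above can be sketched as follows in NumPy. This is an illustrative re-implementation, not the authors' MATLAB code, and it relies on the hypothetical helper newton_direction introduced earlier.

```python
import numpy as np

def weighted_ipm(A, Q, x0, y0, s0, w0, eps=1e-4, theta=0.2):
    """Illustrative sketch of Algorithm 1 (full-Newton step IPM for WCQP)."""
    x, y, s, w = x0.copy(), y0.copy(), s0.copy(), w0.copy()
    t0 = x0 @ s0 / x0.size
    cc = x0 * s0
    t, k = t0, 0
    while np.linalg.norm(w - x * s) > eps:
        # update t, the weight vector w and the target w(t)
        t *= 1.0 - theta
        w *= 1.0 - theta
        w_t = (1.0 - t / t0) * w + (t / t0) * cc
        # full-Newton step along the directions from system (7)
        dx, dy, ds = newton_direction(A, Q, x, s, w_t)
        x, y, s = x + dx, y + dy, s + ds
        k += 1
    return x, y, s, k
```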

4. Analysis of the Algorithm

In the next lemma, we present a sufficient condition that guarantees that the full-Newton step is strictly feasible.
Lemma 1.
Let $(x, y, s)$ be a strictly feasible point. The new iterate $(x^+, y^+, s^+)$ is strictly feasible if
$$\delta := \delta(v) < \sqrt{\gamma},$$
where $\gamma$ is defined in (3).
Proof. 
We introduce a step length $\alpha \in [0, 1]$ and define
$$x(\alpha) := x + \alpha\,\Delta x, \qquad s(\alpha) := s + \alpha\,\Delta s.$$
Therefore, by using (8), (11) and the third equation of (9), we have
$$\begin{aligned}
\frac{x(\alpha)s(\alpha)}{t} &= \frac{xs}{t} + \alpha\,\frac{x\Delta s + s\Delta x}{t} + \alpha^2\,\frac{\Delta x\,\Delta s}{t} = v^2 + \alpha\,v(d_x + d_s) + \alpha^2\,d_x d_s\\
&= v^2 + \alpha\,v\,p_v + \alpha^2\,\frac{p_v^2 - q_v^2}{4} = (1-\alpha)v^2 + \alpha\,(v^2 + v\,p_v) + \alpha^2\,\frac{p_v^2 - q_v^2}{4}\\
&= (1-\alpha)v^2 + \alpha\left[\frac{w(t)}{t} - (1-\alpha)\frac{p_v^2}{4} - \alpha\,\frac{q_v^2}{4}\right],
\end{aligned}\tag{13}$$
where the last equality is obtained from the following
$$v^2 + v\,p_v = v^2 + 2v\left(\sqrt{\frac{w(t)}{t}} - v\right) = 2v\sqrt{\frac{w(t)}{t}} - v^2 = \frac{w(t)}{t} - \frac{p_v^2}{4}.$$
Furthermore, for $\alpha \in [0, 1]$, we have
$$\left\|(1-\alpha)\frac{p_v^2}{4} + \alpha\frac{q_v^2}{4}\right\|_\infty \le (1-\alpha)\left\|\frac{p_v^2}{4}\right\|_\infty + \alpha\left\|\frac{q_v^2}{4}\right\|_\infty \le (1-\alpha)\frac{\|p_v\|^2}{4} + \alpha\frac{\|q_v\|^2}{4} \le (1-\alpha)\delta^2 + \alpha\,\delta^2 = \delta^2 < \gamma, \tag{14}$$
where the first inequality is due to the triangle inequality, the second inequality follows from the fact that $\|x\|_\infty \le \|x\|$ for $x \in \mathbb{R}^n$, the third inequality is due to (12) and the last inequality comes from the assumption $\delta < \sqrt{\gamma}$. Moreover, we have
$$\min\left(\frac{w(t)}{t} - (1-\alpha)\frac{p_v^2}{4} - \alpha\frac{q_v^2}{4}\right) \ge \min\left(\frac{w(t)}{t}\right) - \left\|(1-\alpha)\frac{p_v^2}{4} + \alpha\frac{q_v^2}{4}\right\|_\infty \ge \min\left(\frac{cc}{t_0}\right) - \left\|(1-\alpha)\frac{p_v^2}{4} + \alpha\frac{q_v^2}{4}\right\|_\infty = \gamma - \left\|(1-\alpha)\frac{p_v^2}{4} + \alpha\frac{q_v^2}{4}\right\|_\infty > 0,$$
where the second inequality is due to $w(t) = \left(1 - \frac{t}{t_0}\right)w^+ + \frac{t}{t_0}cc \ge \frac{t}{t_0}cc$, the equality follows from (3) and the last inequality is due to (14). Thus, for all $\alpha \in [0, 1]$, we have
$$(1-\alpha)v^2 + \alpha\left[\frac{w(t)}{t} - (1-\alpha)\frac{p_v^2}{4} - \alpha\frac{q_v^2}{4}\right] > 0,$$
which, by (13), implies that $x(\alpha)s(\alpha) > 0$ for all $\alpha \in [0, 1]$. Hence, none of the entries of $x(\alpha)$ and $s(\alpha)$ vanish for $\alpha \in [0, 1]$. Since $x > 0$ and $s > 0$, and $x(\alpha)$ and $s(\alpha)$ are linear functions of $\alpha$, this implies that $x(\alpha) > 0$ and $s(\alpha) > 0$ for all $\alpha \in [0, 1]$. Hence, $x(1) = x^+ > 0$ and $s(1) = s^+ > 0$. This completes the proof. □
Lemma 2.
Let δ : = δ ( v ) . After a full-Newton step
$$\min_i\,(v_i^+) \ge \sqrt{\gamma - \delta^2}, \quad \text{where } v^+ = \sqrt{\frac{x^+s^+}{t}},$$
and γ is defined as in (3).
Proof. 
From (13) with α = 1 , it follows that
$$\frac{x^+s^+}{t} = \frac{w(t)}{t} - \frac{q_v^2}{4}. \tag{15}$$
Now, from (15) and the definition of v + , we have
$$\min_i\,(v_i^+)^2 = \min_i\left(\frac{w_i(t)}{t} - \frac{(q_v^2)_i}{4}\right) \ge \min_i\frac{w_i(t)}{t} - \left\|\frac{q_v^2}{4}\right\|_\infty \ge \min_i\frac{cc_i}{t_0} - \frac{\|q_v\|^2}{4} \ge \gamma - \delta^2,$$
where the second inequality follows from $w(t) \ge \frac{t}{t_0}cc$ and $\|q_v^2\|_\infty \le \|q_v\|_\infty^2 \le \|q_v\|^2$, and the last inequality is due to (3) and (12). By taking the square root of both sides of the above inequality, the desired inequality in the lemma is obtained, and the proof is complete. □
The next lemma shows the effect of a full-Newton step on the proximity measure.
Lemma 3.
Let $\delta := \delta(v) < \sqrt{\gamma}$, where $\gamma$ is defined as in (3). Then, we have
$$\delta^+ := \delta(v^+) \le \frac{\delta^2}{\sqrt{\gamma} + \sqrt{\gamma - \delta^2}}.$$
Thus, $\delta^+ \le \frac{1}{\sqrt{\gamma}}\,\delta^2$, which shows the quadratic convergence of the algorithm.
Proof. 
We have
$$\begin{aligned}
(\delta^+)^2 &= \left\|\sqrt{\frac{w(t)}{t}} - v^+\right\|^2 = \sum_{i=1}^{n}\left(\sqrt{\frac{w_i(t)}{t}} - v_i^+\right)^2 = \sum_{i=1}^{n}\left(\frac{\frac{w_i(t)}{t} - (v_i^+)^2}{\sqrt{\frac{w_i(t)}{t}} + v_i^+}\right)^2\\
&\le \frac{1}{\left(\min_i\sqrt{\frac{cc_i}{t_0}} + \min_i(v_i^+)\right)^2}\,\sum_{i=1}^{n}\left(\frac{w_i(t)}{t} - (v_i^+)^2\right)^2 \le \frac{1}{\left(\sqrt{\gamma} + \sqrt{\gamma - \delta^2}\right)^2}\left\|\frac{w(t)}{t} - (v^+)^2\right\|^2\\
&= \frac{1}{\left(\sqrt{\gamma} + \sqrt{\gamma - \delta^2}\right)^2}\left\|\frac{q_v^2}{4}\right\|^2 \le \frac{1}{\left(\sqrt{\gamma} + \sqrt{\gamma - \delta^2}\right)^2}\left(\frac{\|q_v\|^2}{4}\right)^2 \le \left(\frac{\delta^2}{\sqrt{\gamma} + \sqrt{\gamma - \delta^2}}\right)^2,
\end{aligned}$$
where the second inequality follows from the fact that $\frac{w_i(t)}{t} \ge \frac{cc_i}{t_0} \ge \min_i\frac{cc_i}{t_0} = \gamma$ together with Lemma 2, the fourth equality is due to (15), the third inequality results from the fact that $\|x^2\| \le \|x\|^2$ for $x \in \mathbb{R}^n$, and the last inequality is due to (12). By taking the square root of both sides of the above inequality, the proof is complete. □
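As a numerical illustration of the quadratic decrease (under our reading of the bound above), take $\gamma = 1$ and $\delta = 0.5 < \sqrt{\gamma}$. Then
$$\delta^+ \le \frac{0.25}{1 + \sqrt{0.75}} \approx 0.134 \qquad \text{and} \qquad \frac{\delta^2}{\sqrt{\gamma}} = 0.25,$$
and a further full-Newton step reduces the proximity to at most about $0.134^2 \approx 0.018$.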
In the following lemma, we give an upper bound on the duality gap after a full-Newton step.
Lemma 4.
After a full-Newton step, we have
$$(x^+)^Ts^+ \le e^Tw(t),$$
where $e = [1, \ldots, 1]^T$.
Proof. 
From (15), we obtain
$$\frac{(x^+)^Ts^+}{t} = e^T\frac{w(t)}{t} - e^T\frac{q_v^2}{4} = e^T\frac{w(t)}{t} - \frac{1}{4}\|q_v\|^2 \le \frac{1}{t}\,e^Tw(t),$$
which proves the lemma. □
When t is updated at each iteration, we estimate an upper bound on the value of the proximity measure.
Lemma 5.
Let $\delta := \delta(x, s; t) < \sqrt{\gamma}$ and $t^+ := (1-\theta)t$, where $\theta \in (0, 1)$. Then, we have
$$\delta(x^+, s^+; t^+) \le \frac{\delta^2 + \frac{\theta}{t_0}\|w - cc\| + \frac{\theta^2}{t_0}\|w\|}{\sqrt{1-\theta}\left(\sqrt{\gamma - \frac{\theta}{t_0}\|w - cc\| - \frac{\theta^2}{t_0}\|w\|} + \sqrt{\gamma - \delta^2}\right)},$$
where $\gamma$ and $cc$ are defined as in (3).
Proof. 
Let $\bar v^+ := \sqrt{\frac{x^+s^+}{t^+}}$. Furthermore, from the definition of $w(t)$, we have
$$w(t^+) = w(t) + \theta\,\frac{t}{t_0}(w^+ - cc) \ge \frac{t}{t_0}cc + \theta\,\frac{t}{t_0}(w^+ - cc) = (1-\theta)\frac{t}{t_0}cc + \theta\,\frac{t}{t_0}\,w^+ > 0.$$
Therefore, we have
$$\begin{aligned}
\delta^2(x^+, s^+; t^+) &= \left\|\sqrt{\frac{w(t^+)}{t^+}} - \bar v^+\right\|^2 = \left\|\sqrt{\frac{w(t) + \theta\frac{t}{t_0}(w^+ - cc)}{(1-\theta)t}} - \frac{v^+}{\sqrt{1-\theta}}\right\|^2 = \frac{1}{1-\theta}\left\|\sqrt{\frac{w(t)}{t} + \frac{\theta}{t_0}(w^+ - cc)} - v^+\right\|^2\\
&\le \frac{1}{1-\theta}\sum_{i=1}^{n}\left(\frac{1}{\sqrt{\frac{w_i(t)}{t} + \frac{\theta}{t_0}(w_i^+ - cc_i)} + v_i^+}\right)^2\left(\frac{w_i(t)}{t} + \frac{\theta}{t_0}(w_i^+ - cc_i) - (v_i^+)^2\right)^2\\
&\le \frac{1}{(1-\theta)\left(\sqrt{\gamma - \frac{\theta}{t_0}\|w^+ - cc\|} + \sqrt{\gamma - \delta^2}\right)^2}\left(\left\|\frac{w(t)}{t} - (v^+)^2\right\| + \frac{\theta}{t_0}\|w^+ - cc\|\right)^2\\
&\le \frac{1}{(1-\theta)\left(\sqrt{\gamma - \frac{\theta}{t_0}\|w^+ - cc\|} + \sqrt{\gamma - \delta^2}\right)^2}\left(\left\|\frac{q_v^2}{4}\right\| + \frac{\theta}{t_0}\|w^+ - cc\|\right)^2\\
&\le \frac{1}{(1-\theta)\left(\sqrt{\gamma - \frac{\theta}{t_0}\|w^+ - cc\|} + \sqrt{\gamma - \delta^2}\right)^2}\left(\delta^2 + \frac{\theta}{t_0}\|w^+ - cc\|\right)^2\\
&\le \frac{1}{(1-\theta)\left(\sqrt{\gamma - \frac{\theta}{t_0}\|w - cc\| - \frac{\theta^2}{t_0}\|w\|} + \sqrt{\gamma - \delta^2}\right)^2}\left(\delta^2 + \frac{\theta}{t_0}\|w - cc\| + \frac{\theta^2}{t_0}\|w\|\right)^2.
\end{aligned}$$
The square root of both sides of the above inequality gives the desired inequality. The proof is complete. □

5. Iteration Bound

We obtain an upper bound on the number of iterations of the proposed algorithm. Before doing this, we determine the values of the barrier parameter $\theta$ and the threshold parameter $\tau$ that guarantee that the iterates are located in the $\tau$-neighborhood of the central path, i.e., if $\delta := \delta(x, s; t) \le \tau$, then $\delta(x^+, s^+; t^+) \le \tau$. By Lemma 5, we have
$$\delta(x^+, s^+; t^+) \le \frac{\delta^2 + \frac{\theta}{t_0}\|w - cc\| + \frac{\theta^2}{t_0}\|w\|}{\sqrt{1-\theta}\left(\sqrt{\gamma - \frac{\theta}{t_0}\|w - cc\| - \frac{\theta^2}{t_0}\|w\|} + \sqrt{\gamma - \delta^2}\right)},$$
and because $\delta \le \tau$, we have
$$\delta(x^+, s^+; t^+) \le \frac{\tau^2 + \frac{\theta}{t_0}\|w - cc\| + \frac{\theta^2}{t_0}\|w\|}{\sqrt{1-\theta}\left(\sqrt{\gamma - \frac{\theta}{t_0}\|w - cc\| - \frac{\theta^2}{t_0}\|w\|} + \sqrt{\gamma - \tau^2}\right)}.$$
Substituting $t_0 = \frac{\min(cc)}{\gamma}$ in the latter inequality, we obtain
$$\delta(x^+, s^+; t^+) \le \frac{\tau^2 + \frac{\gamma\theta}{\min(cc)}\|w - cc\| + \frac{\gamma\theta^2}{\min(cc)}\|w\|}{\sqrt{1-\theta}\left(\sqrt{\gamma - \frac{\gamma\theta}{\min(cc)}\|w - cc\| - \frac{\gamma\theta^2}{\min(cc)}\|w\|} + \sqrt{\gamma - \tau^2}\right)}.$$
If we take $\tau = \sqrt{\frac{\gamma}{2}}$, we obtain
$$\delta(x^+, s^+; t^+) \le \frac{\tau^2 + \frac{2\tau^2\theta}{\min(cc)}\|w - cc\| + \frac{2\tau^2\theta^2}{\min(cc)}\|w\|}{\sqrt{1-\theta}\left(\sqrt{2\tau^2 - \frac{2\tau^2\theta}{\min(cc)}\|w - cc\| - \frac{2\tau^2\theta^2}{\min(cc)}\|w\|} + \tau\right)}.$$
Therefore, the condition $\delta(x^+, s^+; t^+) \le \tau$ holds if
$$\frac{\tau^2 + \frac{2\tau^2\theta}{\min(cc)}\|w - cc\| + \frac{2\tau^2\theta^2}{\min(cc)}\|w\|}{\sqrt{1-\theta}\left(\sqrt{2\tau^2 - \frac{2\tau^2\theta}{\min(cc)}\|w - cc\| - \frac{2\tau^2\theta^2}{\min(cc)}\|w\|} + \tau\right)} \le \tau,$$
or
$$\frac{1 + \frac{2\theta}{\min(cc)}\|w - cc\| + \frac{2\theta^2}{\min(cc)}\|w\|}{\sqrt{1-\theta}\left(\sqrt{2 - \frac{2\theta}{\min(cc)}\|w - cc\| - \frac{2\theta^2}{\min(cc)}\|w\|} + 1\right)} \le 1.$$
If we take
$$\theta = \frac{\min(cc)}{4\left(\min(cc) + \|w\|\right)}, \quad \text{and} \quad w > cc,$$
we obtain $\theta \le \frac{1}{4}$, which yields $\frac{1}{\sqrt{1-\theta}} \le \frac{2}{\sqrt{3}}$ and
$$\frac{1 + \frac{2\theta}{\min(cc)}\|w - cc\| + \frac{2\theta^2}{\min(cc)}\|w\|}{\sqrt{1-\theta}\left(\sqrt{2 - \frac{2\theta}{\min(cc)}\|w - cc\| - \frac{2\theta^2}{\min(cc)}\|w\|} + 1\right)} \le \frac{2}{\sqrt{3}}\cdot\frac{6}{6 + \sqrt{53}}\cdot\frac{55}{36} \approx 0.7970 < 1.$$
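The numerical constant in the last bound can be checked directly (under our reconstruction of the displayed expression):
$$\frac{2}{\sqrt{3}} \approx 1.1547, \qquad \frac{6}{6 + \sqrt{53}} \approx 0.4518, \qquad \frac{55}{36} \approx 1.5278, \qquad 1.1547 \times 0.4518 \times 1.5278 \approx 0.7970.$$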
The following theorem gives the main result of the paper.
Theorem 1.
If $\theta = \frac{\min(cc)}{4\left(\min(cc) + \|w\|\right)}$ and $\tau = \sqrt{\frac{\gamma}{2}}$, then the algorithm yields an $\varepsilon$-approximate solution $(x, y, s) \in \mathcal{F}^0$, i.e., $\|w - xs\| \le \varepsilon$, after at most
$$8\,\frac{\min(cc) + \|w\|}{\min(cc)}\;\log\frac{\max\left\{\sqrt{\frac{\min(cc)}{2}} + \left\|\frac{cc - w}{2}\right\|\,\|w + e\|,\ \|w\|\right\}}{\varepsilon}$$
iterations.
Proof. 
Using the triangle inequality, we have
$$\|w - xs\| \le \|w(t) - xs\| + \|w(t) - w\| = t\left\|\sqrt{\frac{w(t)}{t}} - v\right\| + \|w(t) - w\| = t\,\delta + \|w(t) - w\| \le \sqrt{\frac{t\,\gamma}{2}} + \|w(t) - w\|, \tag{16}$$
where the second equality is due to (10), and the last inequality follows from $\delta \le \tau = \sqrt{\frac{\gamma}{2}}$.
On the other hand, from the definition of $w(t)$ in (3), we have
$$\|w(t) - w\| = \left\|w^+ + \frac{t}{t_0}(cc - w^+) - w\right\| \le \frac{t}{t_0}\left\|\frac{cc - w}{2}\right\|\,\|w + e\|.$$
Substituting this bound into (16) yields, after $k$ iterations,
$$\|w - xs\| \le \left(\sqrt{\frac{t_0\,\gamma}{2}} + \left\|\frac{cc - w}{2}\right\|\,\|w + e\|\right)\sqrt{\frac{t}{t_0}} \le \left(\sqrt{\frac{t_0\,\gamma}{2}} + \left\|\frac{cc - w}{2}\right\|\,\|w + e\|\right)(1-\theta)^{\frac{k}{2}}.$$
Using the definition of $\gamma$ in (3), we deduce that $\|w - xs\| \le \varepsilon$ is satisfied if
$$\left(\sqrt{\frac{\min(cc)}{2}} + \left\|\frac{cc - w}{2}\right\|\,\|w + e\|\right)(1-\theta)^{\frac{k}{2}} \le \varepsilon.$$
Taking the logarithms of both sides and using the inequality $\log(1+\theta) \le \theta$ for $\theta > -1$, we obtain
$$k \ge 8\,\frac{\min(cc) + \|w\|}{\min(cc)}\;\log\frac{\sqrt{\frac{\min(cc)}{2}} + \left\|\frac{cc - w}{2}\right\|\,\|w + e\|}{\varepsilon}.$$
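To spell out this step under our reconstruction: writing $C := \sqrt{\frac{\min(cc)}{2}} + \left\|\frac{cc - w}{2}\right\|\,\|w + e\|$, the requirement $(1-\theta)^{\frac{k}{2}}\,C \le \varepsilon$ is implied by
$$\frac{k}{2}\,\log\frac{1}{1-\theta} \ge \log\frac{C}{\varepsilon}, \qquad \log\frac{1}{1-\theta} = -\log(1-\theta) \ge \theta = \frac{\min(cc)}{4\left(\min(cc) + \|w\|\right)},$$
so it suffices that $k \ge \frac{2}{\theta}\log\frac{C}{\varepsilon} = 8\,\frac{\min(cc) + \|w\|}{\min(cc)}\log\frac{C}{\varepsilon}$.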
Moreover, since at each iteration the norm of the w vector is also reduced by the factor 1 θ , the result is obtained. The proof is completed. □

6. Numerical Results

In this section, we present computational results, obtained in the MATLAB environment, comparing the proposed algorithm with the algorithm presented in [15]. We used the accuracy parameter $\varepsilon = 10^{-4}$. In the implementation, we took different values of the weight vector $w^0$, namely $w^0 \in \{cc + 10^{-3},\ 3cc,\ \sqrt{3}\,cc,\ n\,cc\}$. We reduced the value of the parameter $t$ and the weight vector $w$ by the factor $1 - \theta$ with $\theta = 0.2$. Table 1 shows the number of iterations needed by the proposed algorithm (denoted by iter) and by the algorithm in [15] (denoted by iter [15]) to obtain the optimal solution. The optimal values of the primal and dual objective functions are denoted by “pri” and “dua”, respectively. In the following, we give the standard CQP test problems from [15].
Example 1.
$$A = \begin{pmatrix} -1 & 1 & 0\\ 1 & 1 & 1 \end{pmatrix}, \qquad b = \begin{pmatrix} 1\\ 2 \end{pmatrix}, \qquad c = \begin{pmatrix} -2\\ -4\\ 0 \end{pmatrix}, \qquad Q = \begin{pmatrix} 2 & 0 & 0\\ 0 & 2 & 0\\ 0 & 0 & 0 \end{pmatrix}.$$
The initial primal–dual interior point is:
$$x^0 = (0.3262,\ 1.3261,\ 0.3477)^T, \qquad y^0 = (0,\ -2.0721)^T, \qquad s^0 = (0.7247,\ 0.7247,\ 2.0722)^T.$$
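For illustration, the data of Example 1 can be passed to the sketch of Algorithm 1 given in Section 3; the signs of $A$, $c$ and $y^0$ below follow our reading of the problem (recovered from feasibility of the stated initial point), and the call assumes the hypothetical helpers defined earlier.

```python
import numpy as np

# Example 1 data under our reading (signs inferred from feasibility of the initial point).
A = np.array([[-1.0, 1.0, 0.0],
              [ 1.0, 1.0, 1.0]])
b = np.array([1.0, 2.0])
c = np.array([-2.0, -4.0, 0.0])
Q = np.diag([2.0, 2.0, 0.0])

x0 = np.array([0.3262, 1.3261, 0.3477])
y0 = np.array([0.0, -2.0721])
s0 = np.array([0.7247, 0.7247, 2.0722])
w0 = x0 * s0 + 1e-3                     # one of the tested choices: w0 = cc + 10^{-3}

x, y, s, iters = weighted_ipm(A, Q, x0, y0, s0, w0, eps=1e-4, theta=0.2)
print(iters, c @ x + 0.5 * x @ Q @ x)   # iteration count and primal objective value
```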
Example 2.
$$A = \begin{pmatrix} 1 & 1 & 1 & 0\\ 1 & 5 & 0 & 1 \end{pmatrix}, \qquad c = \begin{pmatrix} -4\\ -6\\ 0\\ 0 \end{pmatrix}, \qquad Q = \begin{pmatrix} 4 & -2 & 0 & 0\\ -2 & 4 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \end{pmatrix}, \qquad b = \begin{pmatrix} 2\\ 5 \end{pmatrix}.$$
The initial primal–dual interior point is:
$$x^0 = (0.9683,\ 0.5775,\ 0.4543,\ 1.1444)^T, \qquad y^0 = (-0.9184,\ -1.1244)^T, \qquad s^0 = (0.7612,\ 0.9141,\ 0.9185,\ 1.1244)^T.$$
Example 3.
$$A = \begin{pmatrix} 1 & 1.2 & 1 & 1.8 & 0\\ 3 & -1 & 1.5 & -2 & 1\\ -1 & 2 & -3 & 4 & 2 \end{pmatrix}, \qquad c = \begin{pmatrix} 1\\ 1.5\\ 2\\ 1.5\\ 3 \end{pmatrix}, \qquad b = \begin{pmatrix} 9.31\\ 5.45\\ 6.60 \end{pmatrix}, \qquad Q = \begin{pmatrix} 20 & 1.2 & 0.5 & 0 & 1\\ 1.2 & 32 & 1 & 1 & 1\\ 0.5 & 1 & 14 & 1 & 1\\ 0.5 & 1 & 1 & 15 & 1\\ 1 & 1 & 1 & 1 & 16 \end{pmatrix}.$$
The initial primal–dual interior point is:
$$x^0 = (2.4539,\ 0.7875,\ 1.5838,\ 2.4038,\ 1.3074)^T, \qquad y^0 = (20.5435,\ 9.4781,\ 4.3927)^T, \qquad s^0 = (7.1215,\ 7.9763,\ 8.3150,\ 6.8686,\ 7.9750)^T.$$

7. Concluding Remarks

In this paper, we presented a full-Newton step IPM based on the AET for weighted convex quadratic programming. We used the square root function in order to obtain a new search direction. By appropriately choosing the barrier parameter $\theta$ and the threshold parameter $\tau$, we showed that the proposed algorithm has an iteration bound of
$$8\,\frac{\min(cc) + \|w\|}{\min(cc)}\;\log\frac{\max\left\{\sqrt{\frac{\min(cc)}{2}} + \left\|\frac{cc - w}{2}\right\|\,\|w + e\|,\ \|w\|\right\}}{\varepsilon}.$$
Numerical results indicated that the proposed algorithm performed well on a small set of test problems.
An interesting question for further research is to investigate an infeasible version of the proposed method to avoid the difficulty of finding an initial point in F 0 .

Author Contributions

Conceptualization, B.K. and Y.R.; methodology, B.K.; software, B.K.; validation, Y.R., J.S. and B.K.; formal analysis, B.K. and Y.R.; investigation, Y.R., J.S. and B.K.; resources, B.K.; data curation, J.S.; writing—original draft preparation, B.K.; writing—review and editing, Y.R.; visualization, B.K.; supervision, Y.R.; project administration, J.S.; funding acquisition, Y.R. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (No. 62172116).

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Karmarkar, N.K. A new polynomial-time algorithm for linear programming. Combinatorica 1984, 4, 375–395.
2. Roos, C.; Terlaky, T.; Vial, J.-P. Theory and Algorithms for Linear Optimization. An Interior-Point Approach; John Wiley & Sons: Chichester, UK, 1997.
3. Darvay, Z. New interior-point algorithm in linear programming. Adv. Model. Optim. 2003, 5, 51–92.
4. Achache, M. Complexity analysis and numerical implementation of a short-step primal-dual algorithm for linear complementarity problems. Appl. Math. Comput. 2010, 216, 1889–1895.
5. Wang, G.Q.; Bai, Y. A new primal-dual path-following interior-point algorithm for semidefinite optimization. J. Math. Anal. Appl. 2009, 353, 339–349.
6. Wang, G.Q.; Bai, Y. A primal-dual interior-point algorithm for second-order cone optimization with full Nesterov-Todd step. Appl. Math. Comput. 2009, 215, 1047–1061.
7. Wang, G.Q.; Bai, Y. A new full Nesterov-Todd step primal-dual path-following interior-point algorithm for symmetric optimization. J. Optim. Theory Appl. 2012, 154, 966–985.
8. Wang, G.Q.; Fan, X.J.; Zhu, D.T.; Wang, D.Z. New complexity analysis of a full-Newton step feasible interior-point algorithm for P*(κ)-LCP. Optim. Lett. 2015, 9, 1105–1119.
9. Boudjellal, N.; Roumili, H.; Benterki, D. A primal-dual interior point algorithm for convex quadratic programming based on a new parametric kernel function. Optimization 2020, 70, 1703–1724.
10. Boudjellal, N.; Roumili, H.; Benterki, D. Complexity analysis of interior point methods for convex quadratic programming based on a parameterized kernel function. Bol. Soc. Parana. Mat. 2020, 40, 1–16.
11. Potra, F.A. Weighted complementarity problems—a new paradigm for computing equilibria. SIAM J. Optim. 2012, 2, 1634–1654.
12. Asadi, S.; Darvay, Z.; Lesaja, G.; Mahdavi-Amiri, N.; Potra, F.A. A full-Newton step interior-point method for monotone weighted linear complementarity problems. J. Optim. Theory Appl. 2020, 186, 864–878.
13. Kheirfam, B. Complexity analysis of a full-Newton step interior-point method for monotone weighted linear complementarity problems. J. Optim. Theory Appl. 2022.
14. Zhang, L.; Chi, X.; Zhang, S.; Yang, Y. A predictor-corrector interior-point algorithm for P*(κ)-weighted linear complementarity problems. AIMS Math. 2023, 8, 9212–9229.
15. Boudjellal, N.; Benterki, D. A new full-Newton step feasible interior point method for convex quadratic programming. Optimization 2023.
16. Potra, F.A. Sufficient weighted complementarity problems. Comput. Optim. Appl. 2016, 64, 467–488.
Table 1. The numerical results of Examples 1, 2 and 3 with different values of $w^0$.

Exam.      $w^0$              iter    iter [15]    pri         dua
Exam. 1    $cc + 10^{-3}$     43      55           −4.4999     −4.4995
Exam. 1    $3cc$              48      64           −4.4999     −4.4994
Exam. 1    $\sqrt{3}\,cc$     45      54           −4.4999     −4.4994
Exam. 1    $(n+1)cc$          46      78           −4.4999     −4.4994
Exam. 2    $cc + 10^{-3}$     44      51           −7.1614     −7.1610
Exam. 2    $3cc$              49      58           −7.1614     −7.1609
Exam. 2    $\sqrt{3}\,cc$     46      52           −7.1614     −7.1610
Exam. 2    $n\,cc$            50      83           −7.1614     −7.1609
Exam. 3    $cc + 10^{-3}$     57      64           172.7165    172.7169
Exam. 3    $3cc$              62      78           172.7165    172.7170
Exam. 3    $\sqrt{3}\,cc$     59      67           172.7165    172.7169
Exam. 3    $n\,cc$            61      89           172.7165    172.7170

Note: The table shows that $w^0$ close to $cc$ gives better iteration numbers.

