Article

A Lifting-Penalty Method for Quadratic Programming with a Quadratic Matrix Inequality Constraint

Wei Liu, Li Yang and Bo Yu *

1 School of Mathematical Sciences, Dalian University of Technology, Dalian 116025, China
2 School of Applied Mathematics, Beijing Normal University, Zhuhai 519087, China
3 School of Mathematics and Physics Science, Dalian University of Technology, Panjin 124221, China
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(2), 153; https://doi.org/10.3390/math8020153
Submission received: 23 December 2019 / Revised: 12 January 2020 / Accepted: 15 January 2020 / Published: 21 January 2020

Abstract

In this paper, a lifting-penalty method is proposed for solving quadratic programs with a quadratic matrix inequality constraint. Additional variables are introduced to represent the quadratic terms. The quadratic program is reformulated as a minimization problem with a linear objective function, linear conic constraints and a single quadratic equality constraint. A majorization–minimization method is then used to solve an $\ell_1$ penalty reformulation of this minimization problem; the subproblems arising in the method can be solved with current semidefinite programming software packages. Global convergence of the method is proven under suitable assumptions. Examples and numerical results are given to show that the proposed method is feasible and efficient.

1. Introduction

In this paper, we consider quadratic programming with a quadratic matrix inequality constraint of the following form:
$$\min_x \; x^T Q x + q^T x \quad \text{s.t.} \quad \sum_{i,j=1}^{d} x_i x_j A_{ij} + \sum_{i=1}^{n} x_i A_{i0} + A_{00} \succeq 0, \quad \mathcal{A}x - b \in K, \tag{1}$$
where $q, x \in \mathbb{R}^n$, $Q \in S^n$, $A_{ij} \in S^m$, $\mathcal{A}$ is a linear operator, the cone $K$ is a product of semidefinite cones, second-order cones, nonnegative orthants and Euclidean spaces, $S^m$ is the space of $m \times m$ symmetric matrices, and $A \succeq 0$ indicates that $A$ is positive semidefinite.
Results in [1,2,3,4,5,6] have shown that problem (1) has important applications in many areas, including control theory and robust optimization. Moreover, it contains a wide range of optimization problems: the well-known linear and quadratic programming problems [7,8,9,10,11,12,13,14] and the bilinear matrix inequality (BMI) feasibility problem [15] are special cases (see the embedding sketched below). Quadratic programming [16,17] and the problem of checking the solvability of a general BMI [18] are NP-hard, so problem (1) is not easy to solve computationally. It can be categorized as a special case of the more general nonlinear semidefinite programming (NSDP) problem. First- and second-order optimality conditions and numerical methods for NSDP problems have been developed; see, for example, [19,20,21,22,23,24,25].
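To make the embedding of the BMI feasibility problem concrete, stack the two variable groups $u \in \mathbb{R}^p$ and $v \in \mathbb{R}^q$ of a BMI into $x = (u, v) \in \mathbb{R}^{p+q}$; then a BMI constraint
$$\sum_{i=1}^{p} \sum_{j=1}^{q} u_i v_j B_{ij} + \sum_{i=1}^{p} u_i B_{i0} + \sum_{j=1}^{q} v_j B_{0j} + B_{00} \succeq 0$$
takes the form of the quadratic matrix inequality in (1) with $d = p + q$ after setting $A_{i,\,p+j} = A_{p+j,\,i} = \tfrac{1}{2} B_{ij}$ for $1 \le i \le p$, $1 \le j \le q$, all other quadratic coefficient matrices to zero, $A_{i0} = B_{i0}$ for $i \le p$, and $A_{(p+j)0} = B_{0j}$.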
Nonsmooth optimization methods suited for eigenvalue optimization were used to compute locally optimal solutions of the BMI problem in [26]. A linearized convex-concave semidefinite programming algorithm for solving the following problem:
$$\min_x \; f(x) \quad \text{s.t.} \quad A(x) - B(x) \preceq 0, \quad x \in X,$$
was proposed in [1], where $f(x)$ is convex, $X$ is a nonempty closed convex set, and $A(x)$ and $B(x)$ are positive semidefinite convex mappings. Based on the fact that the bilinear form $X^T Y + Y^T X$ can be decomposed as a difference between two positive semidefinite convex mappings, the method was applied to BMI optimization formulations of static state/output feedback controller design problems. Following the same line as the work in [1], an iterative procedure for finding a local optimum of a more general nonconvex problem was developed in [27], based on a positive semidefinite convex overestimate of a nonconvex positive semidefinite matrix mapping. These local methods may fail to find a global optimum.
It is easy to see that, by introducing additional variables $y_{ij}$ representing the quadratic terms $x_i x_j$, problem (1) is equivalent to a rank-one constrained optimization problem. Rank-constrained optimization problems are considered in [28,29]. In [30], polynomial-time checkable sufficient conditions are given which guarantee that the semidefinite relaxations of quadratically constrained quadratic programs are exact.
To attain a globally optimal solution, several global optimization approaches have been proposed. Based on the generalized Benders decomposition, a global approach for problems with BMI constraints is proposed in [31]. A slightly more general formulation of the BMI feasibility problem, known as the following BMI eigenvalue problem (BMIEP):
$$\min_{x, y, \lambda} \; \lambda \quad \text{s.t.} \quad \lambda I - \mathcal{A}(x, y) \succeq 0, \quad l \le x \le u, \quad l \le y \le u,$$
was dealt with in [32], where $\mathcal{A}(x, y) = \sum_i \sum_j x_i y_j A_{ij} + \sum_i x_i A_{i0} + \sum_j y_j A_{0j} + A_{00}$. The authors proposed robust branch-and-cut algorithms, which improve the first implemented branch-and-bound algorithm [33] for the BMIEP. For other branch-and-bound algorithms, see [34,35]. Though the approaches based on the generalized Benders decomposition and on branch-and-bound are global methods, they are in general impractical for large-scale problems. A solution method based on reduction of variables is proposed for BMI problems in system and control designs in [36]. It consists of a principle of variable classification, a procedure for problem transformation and a hybrid algorithm, and it can address feasibility, single-objective and multiobjective problems with BMI constraints. However, it can fail in two circumstances.
The aim of this paper is to present a lifting-penalty method for the solution of problem (1). We reformulate the quadratic matrix inequality constraint as linear matrix inequality constraints plus a single quadratic equality constraint, rather than a rank-one constraint or a quadratic matrix equality constraint. Then, we use a majorization–minimization method to solve an $\ell_1$ penalty reformulation of the resulting minimization problem. For a fixed penalty parameter, global convergence of the majorization–minimization method to a Karush–Kuhn–Tucker (KKT) point is proven. The organization of this paper is as follows. In Section 2, a lifting-penalty method, which can obtain an $\epsilon$-optimal solution of problem (1), and its global convergence are given. In Section 3, numerical results are reported to show that the proposed method is efficient.
Throughout this paper, we use the notations in Table 1.

2. The Lifting-Penalty Method

This section concerns a lifting-penalty method for solving problem (1). Its motivation is simple to describe: rather than solving problem (1) directly, we reformulate it as a minimization problem with a linear objective function, linear conic constraints and a quadratic equality constraint by lifting the products $x_i x_j$ into a symmetric matrix $Y$, and then we use the $\ell_1$ penalty method to solve this reformulation of (1). For a given value of the penalty parameter, the majorization–minimization (MM) method is used to solve the penalty problem. Global convergence of the MM method is proven under strict feasibility and boundedness of the original problem (1). An $\epsilon$-optimal solution to problem (1) is obtained by solving the penalized problem, provided the penalty parameter is chosen appropriately.
Letting $Y = xx^T$, problem (1) can be stated equivalently as the following minimization problem:
$$\min_{x, Y} \; f(x, Y) = \mathrm{tr}(QY) + q^T x \quad \text{s.t.} \quad \sum_{i,j=1}^{d} y_{ij} A_{ij} + \sum_{i=1}^{n} x_i A_{i0} + A_{00} \succeq 0, \quad Y = xx^T, \quad \mathcal{A}x - b \in K. \tag{3}$$
By the Schur complement theorem and the equivalence between the equality $Y = xx^T$ and the following system
$$Y - xx^T \succeq 0, \qquad \mathrm{tr}(Y - xx^T) = \mathrm{tr}(Y) - x^T x = 0, \tag{4}$$
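(For completeness, the Schur complement step used here is the one-line equivalence below; the (1,1) block of the bordered matrix is the scalar $1 > 0$:
$$\begin{pmatrix} 1 & x^T \\ x & Y \end{pmatrix} \succeq 0 \iff Y - x \cdot 1^{-1} \cdot x^T = Y - x x^T \succeq 0.)$$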
we can state problem (3) equivalently as follows:
$$\min_{x, Y} \; f(x, Y) \quad \text{s.t.} \quad \sum_{i,j=1}^{d} y_{ij} A_{ij} + \sum_{i=1}^{n} x_i A_{i0} + A_{00} \succeq 0, \quad \begin{pmatrix} 1 & x^T \\ x & Y \end{pmatrix} \succeq 0, \quad \mathrm{tr}(Y) - x^T x = 0, \quad \mathcal{A}x - b \in K. \tag{5}$$
We assume throughout this paper that the feasible set of (1) is bounded. Hence, we may assume without loss of generality that $x^T x \le \kappa/2$ for some $\kappa > 0$ and all feasible $x$. By using a penalty function similar to the $\ell_1$ penalty function defined in [37], we consider the following penalized problem:
$$\begin{aligned} \min_{x, Y} \;\; & f_\mu(x, Y) = f(x, Y) + \mu\,|\mathrm{tr}(Y) - x^T x| = f(x, Y) + \mu\,(\mathrm{tr}(Y) - x^T x) \\ \text{s.t.} \;\; & \sum_{i,j=1}^{d} y_{ij} A_{ij} + \sum_{i=1}^{n} x_i A_{i0} + A_{00} \succeq 0, \quad \begin{pmatrix} 1 & x^T \\ x & Y \end{pmatrix} \succeq 0, \quad \mathrm{tr}(Y) \le \kappa, \quad \mathcal{A}x - b \in K, \end{aligned} \tag{6}$$
where $\mu > 0$ is the penalty parameter; the absolute value can be dropped because the second constraint implies $\mathrm{tr}(Y) \ge x^T x$. To simplify the notation, we denote $(b, -\kappa)$ and $K \times \mathbb{R}_+$ by $\tilde b$ and $\tilde K$, respectively, and define the linear operator $\tilde{\mathcal{A}} : (x, Y) \mapsto (\mathcal{A}x, -\mathrm{tr}(Y))$. Hence, we can rewrite the constraints $\mathrm{tr}(Y) \le \kappa$ and $\mathcal{A}x - b \in K$ more compactly as $\tilde{\mathcal{A}}(x, Y) - \tilde b \in \tilde K$.
Let $\Omega$ denote the feasible set of problem (5). Unfortunately, strict feasibility of $\Omega$ (i.e., Slater's constraint qualification) fails: the constraints force $Y = xx^T$, so the bordered matrix $\begin{pmatrix} 1 & x^T \\ x & Y \end{pmatrix}$ has rank one and can never be positive definite. Therefore, we cannot establish a result similar to the exactness of the $\ell_1$ penalty function (Theorem 4.4 in [37]). However, it can be proven that an $\epsilon$-optimal solution to problem (1) is obtained by solving the penalized problem as long as the penalty parameter $\mu$ is sufficiently large.
Let $\Omega_c$ denote the feasible set of (6), let $(x_f, Y_f)$ be a feasible solution of (5), let $(x_\mu, Y_\mu)$ and $(x^*, Y^*)$ be optimal solutions of (6) and (5), respectively, and let $(x_c, Y_c)$ be an optimal solution of the following problem:
$$\min_{x, Y} \; \{ f(x, Y) : (x, Y) \in \Omega_c \}.$$
Theorem 1.
Let $\epsilon > 0$ be a given constant, and assume that $\mu \ge (f(x_f, Y_f) - f(x_c, Y_c))/\epsilon$. Then
$$\mathrm{tr}(Y_\mu) - x_\mu^T x_\mu \le \epsilon, \qquad f(x_\mu, Y_\mu) \le f(x^*, Y^*) - \mu\,(\mathrm{tr}(Y_\mu) - x_\mu^T x_\mu) \le f(x^*, Y^*).$$
Proof. 
By noting that $(x_f, Y_f)$ is a feasible solution of (5), we have that $\mathrm{tr}(Y_f) - x_f^T x_f = 0$. Therefore,
$$f(x_f, Y_f) = f(x_f, Y_f) + \mu\,(\mathrm{tr}(Y_f) - x_f^T x_f) = f_\mu(x_f, Y_f) \ge f_\mu(x_\mu, Y_\mu) = f(x_\mu, Y_\mu) + \mu\,(\mathrm{tr}(Y_\mu) - x_\mu^T x_\mu) \ge f(x_c, Y_c) + \mu\,(\mathrm{tr}(Y_\mu) - x_\mu^T x_\mu). \tag{8}$$
It follows from (8) and the assumption on $\mu$ that
$$\mathrm{tr}(Y_\mu) - x_\mu^T x_\mu \le (f(x_f, Y_f) - f(x_c, Y_c))/\mu \le \epsilon.$$
Because $(x^*, Y^*)$ is an optimal solution of (5), replacing $(x_f, Y_f)$ in (8) by $(x^*, Y^*)$ yields
$$f(x^*, Y^*) \ge f_\mu(x_\mu, Y_\mu) = f(x_\mu, Y_\mu) + \mu\,(\mathrm{tr}(Y_\mu) - x_\mu^T x_\mu).$$
Hence,
$$f(x_\mu, Y_\mu) \le f(x^*, Y^*) - \mu\,(\mathrm{tr}(Y_\mu) - x_\mu^T x_\mu) \le f(x^*, Y^*).$$
 □
Based on Theorem 1, we give the following Algorithm 1 for finding an $\epsilon$-optimal solution of problem (1).
Algorithm 1: The lifting-penalty method for solving (1)
Step 1. Choose an initial point $(x_f, Y_f) \in \Omega$, $\mu_1 > 0$, $s_1 > s_2 > 0$ and $\epsilon_1 > 0$. Set $(x_1, Y_1) = (x_f, Y_f)$ and $j = 1$.
Step 2. Taking $(x_j, Y_j)$ as the initial point, compute an optimal solution $(x_{j+1}, Y_{j+1})$ of (6) with $\mu = \mu_j$.
Step 3. Set $\mu_{j+1} = \begin{cases} s_1 \mu_j, & \text{if } \mathrm{tr}(Y_j) - x_j^T x_j > 0.1; \\ s_2 \mu_j, & \text{otherwise}. \end{cases}$
Step 4. If $\mathrm{tr}(Y_{j+1}) - x_{j+1}^T x_{j+1} \le \epsilon_1$, stop; else, set $j = j + 1$ and go to Step 2.
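A minimal sketch of this outer loop in Python is given below. It is an illustration rather than the authors' implementation: `solve_penalized` is a hypothetical subroutine that returns an optimal solution of (6) for the given $\mu$ (e.g., Algorithm 2 below), and the parameter defaults follow the settings reported in Section 3.

```python
import numpy as np

def lifting_penalty(solve_penalized, x0, Y0, mu=0.25, s1=4.0, s2=1.4,
                    eps1=1e-6, max_iter=100):
    """Outer loop of Algorithm 1 (sketch).

    solve_penalized(x, Y, mu) is assumed to return an optimal solution of
    the penalized problem (6) for the given mu, warm-started at (x, Y).
    """
    x, Y = x0, Y0
    for _ in range(max_iter):
        x_new, Y_new = solve_penalized(x, Y, mu)   # Step 2
        gap = np.trace(Y) - x @ x                  # tr(Y_j) - x_j^T x_j
        mu *= s1 if gap > 0.1 else s2              # Step 3: enlarge mu
        x, Y = x_new, Y_new
        if np.trace(Y) - x @ x <= eps1:            # Step 4: eps-feasible
            return x, Y
    return x, Y
```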
In Step 1, to obtain a feasible point of problem (5), we use the MM method to solve the following problem:
$$\min_{x, Y} \; \mathrm{tr}(Y) - x^T x \quad \text{s.t.} \quad (x, Y) \in \Omega_c.$$
More detailed discussions of the MM method can be found in [38,39,40,41]. In Step 2, we use the MM method to solve problem (6) with $\mu = \mu_j$. Let
$$\hat f_\mu(x, Y, z) = f(x, Y) + \mu\,\mathrm{tr}(Y) - \mu z^T z - 2\mu z^T (x - z),$$
where $z \in \mathbb{R}^n$. From the definition of $\hat f_\mu(x, Y, z)$, we have that
$$\hat f_\mu(x, Y, z) \ge f_\mu(x, Y) \ \text{ for any } z \in \mathbb{R}^n, \qquad \hat f_\mu(x, Y, x) = f_\mu(x, Y).$$
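Both relations can be checked directly by completing the square:
$$\hat f_\mu(x, Y, z) - f_\mu(x, Y) = \mu \left( x^T x - 2 z^T x + z^T z \right) = \mu\, \| x - z \|^2 \ge 0,$$
with equality if and only if $z = x$.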
That is, $\hat f_\mu(x, Y, z)$ is a majorization function of $f_\mu(x, Y)$ at $z$. Therefore, we can apply the following Algorithm 2 to problem (6) with $\mu = \mu_j$.
Algorithm 2: The MM method for (6) with a fixed value of the parameter μ
Step 1. Choose an initial point $(x^1, Y^1) \in \Omega_c$ and $\epsilon_2 > 0$; set $k = 1$.
Step 2. Taking $(x^k, Y^k)$ as the initial point, compute an optimal solution $(x^{k+1}, Y^{k+1})$ of the following problem:
$$\min_{x, Y} \; \hat f_\mu(x, Y, x^k) \quad \text{s.t.} \quad (x, Y) \in \Omega_c. \tag{10}$$
Step 3. If $\|(x^{k+1}, Y^{k+1}) - (x^k, Y^k)\| \le \epsilon_2$, stop; else, set $k = k + 1$ and go to Step 2.
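Before turning to solvers and convergence, here is a minimal sketch of one iteration of Algorithm 2 in Python using CVXPY, with a generic conic solver (SCS) standing in for the packages named below. It is our illustration, not the authors' code: the data layout of `A` (with $d = n$) is an assumption, and the extra conic constraint $\mathcal{A}x - b \in K$ is omitted for brevity.

```python
import cvxpy as cp
import numpy as np

def mm_step(Q, q, A, kappa, mu, x_k, m):
    """One MM step: solve the linear conic subproblem (10) (a sketch).

    Assumed data layout: A[0][0] = A_00, A[i][0] = A_i0 and A[i][j] = A_ij
    for i, j = 1..n, all symmetric m x m numpy arrays (d = n).
    """
    n = len(q)
    # Bordered variable Z = [[1, x^T], [x, Y]]; Z >> 0 encodes Y - x x^T >= 0.
    Z = cp.Variable((n + 1, n + 1), symmetric=True)
    x, Y = Z[1:, 0], Z[1:, 1:]

    # Lifted quadratic matrix inequality G(x, Y) >= 0, linear in (x, Y).
    G = cp.Variable((m, m), symmetric=True)
    lifted_qmi = G == (A[0][0]
                       + sum(x[i - 1] * A[i][0] for i in range(1, n + 1))
                       + sum(Y[i - 1, j - 1] * A[i][j]
                             for i in range(1, n + 1)
                             for j in range(1, n + 1)))

    constraints = [Z >> 0, Z[0, 0] == 1, cp.trace(Y) <= kappa,
                   lifted_qmi, G >> 0]

    # Majorant with its constant terms dropped:
    # tr((Q + mu*I) Y) + (q - 2*mu*x_k)^T x.
    objective = cp.Minimize(cp.trace((Q + mu * np.eye(n)) @ Y)
                            + (q - 2 * mu * x_k) @ x)
    cp.Problem(objective, constraints).solve(solver=cp.SCS)
    return x.value, Y.value
```

In the authors' experiments, SDPT3 plays the role that SCS plays in this sketch; warm starts only affect the majorization point $x^k$, since (10) is solved to optimality at every step.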
In Algorithm 2, the linear conic programming problem (10) can be solved by the software packages SeDuMi [42], SDPT3 [43] or SDPNAL [44]. The MM method has a global convergence property: for any given $\mu$, all cluster points of $\{(x^k, Y^k)\}$ are KKT points of (6). To prove it, we need a constraint qualification. We first give the following lemma.
Lemma 1.
If strict feasibility of the problem (1) holds, then strict feasibility of the problem (6) is also ensured.
Proof. 
If problem (1) is strictly feasible, then there exists a point $\tilde x \in \mathbb{R}^n$ such that
$$\sum_{i,j=1}^{d} \tilde x_i \tilde x_j A_{ij} + \sum_{i=1}^{n} \tilde x_i A_{i0} + A_{00} \succ 0 \quad \text{and} \quad \mathcal{A}\tilde x - b \in \mathrm{ri}\,K,$$
where $\mathrm{ri}\,K$ is the relative interior of the cone $K$.
Taking $\bar Y = \tilde x \tilde x^T$, we know that $(\tilde x, \bar Y)$ satisfies
$$G(\tilde x, \bar Y) = \sum_{i,j=1}^{d} \bar y_{ij} A_{ij} + \sum_{i=1}^{n} \tilde x_i A_{i0} + A_{00} \succ 0, \quad \tilde{\mathcal{A}}(\tilde x, \bar Y) - \tilde b \in \mathrm{ri}\,\tilde K, \quad \begin{pmatrix} 1 & \tilde x^T \\ \tilde x & \bar Y \end{pmatrix} \succeq 0, \quad \begin{pmatrix} 1 & \tilde x^T \\ \tilde x & \bar Y \end{pmatrix} \nsucc 0.$$
Denote by $\lambda(Z)$ the vector of all eigenvalues of $Z \in S^m$ arranged in non-increasing order. By the definition of $G(\tilde x, \bar Y)$ and Mirsky's theorem ([45], Cor. 4.12), we have that, for any
$$\varepsilon \in \Big( 0, \; \lambda_{\min}(G(\tilde x, \bar Y)) \Big/ \Big( 2\, \big\| \lambda\big( \textstyle\sum_{i=1}^{d} A_{ii} \big) \big\| \Big) \Big),$$
$$\Big\| \lambda(G(\tilde x, \bar Y + \varepsilon I)) - \lambda(G(\tilde x, \bar Y)) \Big\| = \Big\| \lambda\Big( G(\tilde x, \bar Y) + \varepsilon \sum_{i=1}^{d} A_{ii} \Big) - \lambda(G(\tilde x, \bar Y)) \Big\| \le \Big\| \lambda\Big( \varepsilon \sum_{i=1}^{d} A_{ii} \Big) \Big\| = \varepsilon\, \Big\| \lambda\Big( \sum_{i=1}^{d} A_{ii} \Big) \Big\| \le \lambda_{\min}(G(\tilde x, \bar Y))/2.$$
Therefore, $\lambda_{\min}(G(\tilde x, \bar Y + \varepsilon I)) \ge \lambda_{\min}(G(\tilde x, \bar Y))/2 > 0$. Taking $\tilde Y = \bar Y + \varepsilon I$, we have that
$$G(\tilde x, \tilde Y) \succ 0, \qquad \tilde Y - \tilde x \tilde x^T = \varepsilon I \succ 0. \tag{13}$$
From the Schur complement theorem and (13), we know that $(\tilde x, \tilde Y)$ is a strictly feasible point of (6). □
Lemma 2.
If the feasible set of the problem (1) is nonempty and bounded, then, for any given μ > 0 , Ω c and the solution sets of the problems (6) and (10) are nonempty and compact.
Proof. 
Because $(x, xx^T) \in \Omega_c$ for every feasible point $x$ of problem (1), $\Omega_c$ is nonempty. It is trivial to show that $\Omega_c$ is closed. We now prove that $\Omega_c$ is bounded. For any $(x, Y) \in \Omega_c$, $Y \succeq 0$ together with $\mathrm{tr}(Y) \le \kappa$ implies that $Y$ is bounded, and $x^T x \le \mathrm{tr}(Y) \le \kappa$ implies that $x$ is bounded. As a result, $\Omega_c$ is nonempty and compact.
From the continuity of f μ ( · ) and f ^ μ ( · ) , we know that the solution sets of the problems (6) and (10) are also nonempty and compact. □
By endowing $\mathbb{R}^n \times S^n$ with the induced inner product
$$\langle (x_1, Y_1), (x_2, Y_2) \rangle = \langle x_1, x_2 \rangle + \langle Y_1, Y_2 \rangle = x_1^T x_2 + \mathrm{tr}(Y_1 Y_2),$$
for any fixed $z$ and any direction $(d, D) \in \mathbb{R}^n \times S^n$, we calculate the derivative of $\hat f_\mu(x, Y, z)$ with respect to $(x, Y)$ as follows:
$$\hat f_\mu'(x, Y, z)(d, D) = \langle q - 2\mu z, d \rangle + \langle Q + \mu I, D \rangle.$$
The gradient $\nabla \hat f_\mu(x, Y, z)$ of $\hat f_\mu(x, Y, z)$ with respect to $(x, Y)$ may therefore be identified with the pair
$$\nabla \hat f_\mu(x, Y, z) = (q - 2\mu z, \; Q + \mu I).$$
Theorem 2.
Suppose that Slater's constraint qualification holds for problem (1) and that the feasible set of (1) is bounded. For any given $\mu > 0$, let $\{(x^k, Y^k)\}$ be the sequence generated by Algorithm 2. Then $\lim_{k \to \infty} f_\mu(x^k, Y^k) = \inf_k f_\mu(x^k, Y^k)$, and any cluster point of $\{(x^k, Y^k)\}$ is a KKT point of (6).
Proof. 
From the boundedness of the feasible set of problem (1) and Lemma 2, we know that problem (10) has a nonempty and compact solution set, so there exists a point $(x^{k+1}, Y^{k+1}) \in \arg\min_{(x, Y) \in \Omega_c} \hat f_\mu(x, Y, x^k)$.
By noting that $f_\mu(\cdot)$ is continuous and $\{(x^k, Y^k)\}$ is bounded, we know that $\inf_k f_\mu(x^k, Y^k)$ is finite. It follows from the definitions of $f_\mu(x, Y)$ and $\hat f_\mu(x, Y, z)$ that
$$f_\mu(x^{k+1}, Y^{k+1}) \le \hat f_\mu(x^{k+1}, Y^{k+1}, x^k) \le \hat f_\mu(x^k, Y^k, x^k) = f_\mu(x^k, Y^k).$$
That is, $\{f_\mu(x^k, Y^k)\}$ is a monotonically decreasing sequence. Hence,
$$\lim_{k \to \infty} f_\mu(x^k, Y^k) = \inf_k f_\mu(x^k, Y^k).$$
Because $(x^{k+1}, Y^{k+1})$ is an optimal solution of (10), we have that
$$0 \in (q - 2\mu x^k, \; Q + \mu I) + N_{\Omega_c}(x^{k+1}, Y^{k+1}). \tag{16}$$
We first consider the case $(x^{k+1}, Y^{k+1}) = (x^k, Y^k)$ for some integer $k \ge 1$. In this case, it follows from (16) that
$$0 \in (q - 2\mu x^{k+1}, \; Q + \mu I) + N_{\Omega_c}(x^{k+1}, Y^{k+1}). \tag{17}$$
Denote $G_1(x, Y) = \sum_{i,j=1}^{d} y_{ij} A_{ij} + \sum_{i=1}^{n} x_i A_{i0} + A_{00}$, $G_2(x, Y) = \begin{pmatrix} 1 & x^T \\ x & Y \end{pmatrix}$, $G_3(x, Y) = \tilde{\mathcal{A}}(x, Y) - \tilde b$ and $G_i^k = G_i(x^k, Y^k)$. From Lemma 1, we know that Slater's constraint qualification holds for (6). This, together with convexity, implies that
$$N_{\Omega_c}(x^{k+1}, Y^{k+1}) = \big((G_1^{k+1})'\big)^* N_{S_+^m}(G_1^{k+1}) + \big((G_2^{k+1})'\big)^* N_{S_+^{n+1}}(G_2^{k+1}) + \tilde{\mathcal{A}}^* N_{\tilde K}(G_3^{k+1}), \tag{18}$$
where $((G_i^{k+1})')^*$ and $\tilde{\mathcal{A}}^*$ are the adjoint operators of the derivatives $(G_i^{k+1})'$ and of $\tilde{\mathcal{A}}$, respectively. It follows from (17) and (18) that $(x^{k+1}, Y^{k+1})$ is a KKT point of problem (6).
Next, we assume that $(x^{k+1}, Y^{k+1}) \ne (x^k, Y^k)$ for all $k \ge 1$. Because $\{(x^k, Y^k)\}$ is bounded, it has cluster points, and any cluster point $(\bar x, \bar Y)$ belongs to $\Omega_c$. By the equality $\lim_{k \to \infty} f_\mu(x^k, Y^k) = \inf_k f_\mu(x^k, Y^k)$, we have that $f_\mu(\bar x, \bar Y) = \inf_k f_\mu(x^k, Y^k)$. Now, we prove that $(\bar x, \bar Y)$ is a KKT point of (6). For any $k \ge 1$, we have that
$$\begin{aligned} f_\mu(x^{k+1}, Y^{k+1}) - f_\mu(x^k, Y^k) &= \mathrm{tr}\big((Q + \mu I)(Y^{k+1} - Y^k)\big) + q^T (x^{k+1} - x^k) - \mu (x^{k+1})^T x^{k+1} + \mu (x^k)^T x^k \\ &= \mathrm{tr}\big((Q + \mu I)(Y^{k+1} - Y^k)\big) + (q - 2\mu x^k)^T (x^{k+1} - x^k) + 2\mu (x^k)^T (x^{k+1} - x^k) - \mu (x^{k+1})^T x^{k+1} + \mu (x^k)^T x^k \\ &= \big\langle (q - 2\mu x^k, \; Q + \mu I), \, (x^{k+1} - x^k, \; Y^{k+1} - Y^k) \big\rangle - \mu (x^{k+1} - x^k)^T (x^{k+1} - x^k). \end{aligned} \tag{19}$$
It follows from (16) that $-(q - 2\mu x^k, \; Q + \mu I) \in N_{\Omega_c}(x^{k+1}, Y^{k+1})$, which implies that
$$\big\langle (q - 2\mu x^k, \; Q + \mu I), \, (x^{k+1} - x^k, \; Y^{k+1} - Y^k) \big\rangle = \big\langle -(q - 2\mu x^k, \; Q + \mu I), \, (x^k - x^{k+1}, \; Y^k - Y^{k+1}) \big\rangle \le 0. \tag{20}$$
From (19) and (20), we have that
$$f_\mu(x^{k+1}, Y^{k+1}) - f_\mu(x^k, Y^k) \le -\mu\, \| x^{k+1} - x^k \|^2,$$
which implies that
$$\mu \lim_{l \to \infty} \sum_{k=1}^{l} \| x^{k+1} - x^k \|^2 \le f_\mu(x^1, Y^1) - \lim_{l \to \infty} f_\mu(x^{l+1}, Y^{l+1}) < +\infty.$$
Hence, $\lim_{k \to \infty} (x^{k+1} - x^k) = 0$. Let $\{k_j\}$ be a subsequence such that $x^{k_j} \to \bar x$ and $Y^{k_j} \to \bar Y$; then
$$\lim_{j \to \infty} x^{k_j} = \lim_{j \to \infty} x^{k_j - 1} = \bar x. \tag{22}$$
Because $(x^{k_j}, Y^{k_j})$ is an optimal solution of (10) with initial point $x^{k_j - 1}$, there exists $D^{k_j} \in N_{\Omega_c}(x^{k_j}, Y^{k_j})$ such that
$$D^{k_j} = -(q - 2\mu x^{k_j - 1}, \; Q + \mu I)$$
for any $k_j > 1$. Hence, for any $(\tilde x, \tilde Y) \in \Omega_c$,
$$\big\langle (q - 2\mu x^{k_j - 1}, \; Q + \mu I), \, (\tilde x - x^{k_j}, \; \tilde Y - Y^{k_j}) \big\rangle = -\big\langle D^{k_j}, \, (\tilde x - x^{k_j}, \; \tilde Y - Y^{k_j}) \big\rangle \ge 0,$$
which, from the convergence of the subsequence $\{(x^{k_j}, Y^{k_j})\}$ and (22), gives rise to
$$\big\langle (q - 2\mu \bar x, \; Q + \mu I), \, (\tilde x - \bar x, \; \tilde Y - \bar Y) \big\rangle \ge 0,$$
which implies that $-(q - 2\mu \bar x, \; Q + \mu I) \in N_{\Omega_c}(\bar x, \bar Y)$. That is, $0 \in \nabla f_\mu(\bar x, \bar Y) + N_{\Omega_c}(\bar x, \bar Y)$, which, together with Slater's constraint qualification, implies that $(\bar x, \bar Y)$ is a KKT point of problem (6). □

3. Numerical Experiments

In this section, examples and some preliminary numerical results obtained with the lifting-penalty method (LPM) and with a modified augmented Lagrangian method (MALM) [46] are given.
All numerical experiments were run in MATLAB® 2016 on a notebook PC with an Intel® Core™ i7-4810MQ CPU (2.8 GHz) and 16 GB RAM. The linear conic optimization problems in our method are solved by the SDPT3 solver, and the optimization subproblems in MALM are solved by the subroutine fmincon. The parameters in the algorithms are set as follows:
$$\text{LPM}: \; \mu_1 = 0.25, \; s_1 = 4, \; s_2 = 1.4, \; \epsilon_1 = \epsilon_2 = 10^{-6}; \qquad \text{MALM}: \; \tau = 0.25, \; \gamma = 10, \; c_1 = 1.$$
Example 1
(HE1 in $\mathcal{D}$ in Section V of [3]). Find a matrix $K$ such that $A + BKC$ is Hurwitz, i.e., the eigenvalues of the matrix $A + BKC$ all belong to the left half plane $\mathcal{D} = \{ t \in \mathbb{C} : t + \bar t < 0 \}$ of the complex plane, where $\bar t$ is the conjugate of $t$, and
$$A = \begin{pmatrix} -0.0366 & 0.0271 & 0.0188 & -0.4555 \\ 0.0482 & -1.01 & 0.0024 & -4.0208 \\ 0.1002 & 0.3681 & -0.7070 & 1.42 \\ 0 & 0 & 1 & 0 \end{pmatrix}, \qquad B = \begin{pmatrix} 0.4422 & 0.1761 \\ 3.5446 & -7.5922 \\ -5.52 & 4.49 \\ 0 & 0 \end{pmatrix},$$
$$C = \begin{pmatrix} 0 & 1 & 0 & 0 \end{pmatrix}.$$
The problem amounts to solving a nonconvex feasibility problem $S(K) \prec 0$ (see [3,47,48]). We solved it by solving the following non-strict optimization problem:
$$\min_{K, \lambda} \; \lambda \quad \text{s.t.} \quad -\lambda I + S(K) \preceq 0, \quad -50 \le K_{ij} \le 50, \quad -1 \le \lambda \le 0.$$
From Figure 2 in [3], we know that the problem is nonconvex. For the starting point $K^{(0)} = (0, 0)^T$, our method ended with the final solution $K = (0.4293, 0.5514)^T$.
Example 2.
Let $x \in \mathbb{R}^n$ and $\lambda \in \mathbb{R}$, and consider
$$\max_{x, \lambda} \; \lambda \quad \text{s.t.} \quad -\lambda I + \sum_{i,j=1}^{d} x_i x_j A_{ij} + \sum_{i=1}^{n} x_i A_{i0} + A_{00} \succeq 0, \quad x_i \in [-1, 1], \; i = 1, \ldots, n,$$
where the $A_{ij} \in S^m$ are generated by the MATLAB function sprandn(m,m,r). We symmetrized the matrices by copying the upper triangular part to the lower one after creation.
In our experiments, 20 problem instances are randomly generated for each value of $(n, m, d)$, with sparsity $r = 0.2$; the additional variable $Y$ is a $d \times d$ symmetric matrix. Table 2 lists $n$, $m$, $d$ and the average CPU time in seconds.
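For concreteness, a random instance of this form can be generated as follows (a Python sketch mirroring the description above, not the authors' MATLAB script; the dense masking stands in for sprandn):

```python
import numpy as np

def random_instance(n, d, m, r=0.2, seed=0):
    """Random data matrices for Example 2 (a sketch).

    Each matrix mimics MATLAB's sprandn(m, m, r): standard-normal entries
    with approximate density r, symmetrized by copying the upper
    triangular part to the lower one.
    """
    rng = np.random.default_rng(seed)

    def sprandn_sym():
        M = rng.standard_normal((m, m)) * (rng.random((m, m)) < r)
        U = np.triu(M)                        # keep the upper triangle
        return U + U.T - np.diag(np.diag(U))  # mirror it, diagonal once

    A_ij = [[sprandn_sym() for _ in range(d)] for _ in range(d)]  # quadratic
    A_i0 = [sprandn_sym() for _ in range(n)]                      # linear
    A_00 = sprandn_sym()                                          # constant
    return A_ij, A_i0, A_00
```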
From the results listed in Table 2, we find that our method requires less time than the MALM for most instances. Moreover, the computing time seems to be more sensitive to $d$ than to $n$ and $m$. Preliminary computational experience shows that our method is competitive with the MALM.

4. Conclusions

In this paper, a lifting-penalty method for solving a quadratic optimization problem with a quadratic matrix inequality constraint is introduced. By introducing additional variables, we reformulate the quadratic matrix inequality constraint as linear matrix inequality constraints plus a single quadratic equality constraint, rather than a rank-one constraint or a quadratic matrix equality constraint. A global convergence result is given under mild assumptions. The method was applied to a feasibility problem and to a problem of maximizing the smallest eigenvalue of a symmetric matrix; the numerical results show that the proposed method is competitive with the MALM. Note, however, that the linear conic optimization subproblems arising in our method all share the same feasible set, so the development of an efficient method for solving a family of linear conic optimization problems with a common feasible set is left as future work.

Author Contributions

B.Y. supervised the research and helped W.L. at every step, especially framework building, analysis of the results, and writing of the manuscript. W.L. contributed the idea, framework building, implementation of the results, and writing of the manuscript. L.Y. helped with analyses of the introduction, results, and literature review. All authors have read and agreed to the published version of the manuscript.

Funding

The work was supported by the National Natural Science Foundation of China (11301050, 11171051, 11971092), and the Fundamental Research Funds for the Central Universities (DUT17RC(4)38).

Acknowledgments

The authors thank the anonymous referees, whose comments and suggestions led to an improved version of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Dinh, Q.T.; Gumussoy, S.; Michiels, W.; Diehl, M. Combining Convex–Concave Decompositions and Linearization Approaches for Solving BMIs, with Application to Static Output Feedback. IEEE Trans. Autom. Control 2012, 57, 1377–1390.
2. Goh, K.C.; Safonov, M.; Ly, J. Robust synthesis via bilinear matrix inequalities. Int. J. Robust Nonlinear Control 1996, 6, 1079–1095.
3. Henrion, D.; Löfberg, J.; Kočvara, M.; Stingl, M. Solving polynomial static output feedback problems with PENBMI. In Proceedings of the 44th IEEE Conference on Decision and Control, Sevilla, Spain, 15 December 2005; Volume 1, pp. 7581–7586.
4. VanAntwerp, J.G.; Braatz, R.D. A tutorial on linear and bilinear matrix inequalities. J. Process Control 2000, 10, 363–385.
5. Lim, Y.; Oh, K.; Ahn, H. Stability and Stabilization of Fractional-Order Linear Systems Subject to Input Saturation. IEEE Trans. Autom. Control 2013, 58, 1062–1067.
6. Fazelnia, G.; Madani, R.; Kalbat, A.; Lavaei, J. Convex Relaxation for Optimal Distributed Control Problems. IEEE Trans. Autom. Control 2017, 62, 206–221.
7. Dikin, I. Iterative solutions of problems of linear and quadratic programming. Sov. Math. Dokl. 1967, 8, 674–675.
8. Ye, Y.; Tse, E. An extension of Karmarkar's projective algorithm for convex quadratic programming. Math. Program. 1989, 44, 157–179.
9. Ye, Y. On affine scaling algorithms for nonconvex quadratic programming. Math. Program. 1992, 56, 285–300.
10. Zorkaltsev, V.; Mokryi, I. Interior Point Algorithms in Linear Optimization. J. Appl. Ind. Math. 2018, 12, 191–199.
11. Anstreicher, K.; Wolkowicz, H. On Lagrangian relaxation of quadratic matrix constraints. SIAM J. Matrix Anal. Appl. 2000, 22, 41–55.
12. Beck, A. Quadratic matrix programming. SIAM J. Optim. 2007, 17, 1224–1238.
13. Ding, Y.; Ge, D.; Wolkowicz, H. On equivalence of semidefinite relaxations for quadratic matrix programming. Math. Oper. Res. 2011, 36, 88–104.
14. Leibfritz, F.; Mostafa, E.M.E. An interior point constrained trust region method for a special class of nonlinear semidefinite programming problems. SIAM J. Optim. 2002, 12, 1048–1074.
15. Mesbahi, M.; Papavassilopoulos, G.P. A cone programming approach to the bilinear matrix inequality problem and its geometry. Math. Program. 1997, 77, 247–272.
16. Pardalos, P.M.; Vavasis, S.A. Quadratic programming with one negative eigenvalue is NP-hard. J. Glob. Optim. 1991, 1, 15–22.
17. Sahni, S. Computationally related problems. SIAM J. Comput. 1974, 3, 262–279.
18. Toker, O.; Özbay, H. On the NP-hardness of solving Bilinear Matrix Inequalities and simultaneous stabilization with static output feedback. In Proceedings of the 1995 American Control Conference (ACC'95), Seattle, WA, USA, 21–23 June 1995; pp. 2525–2526.
19. Fares, B.; Noll, D.; Apkarian, P. Robust control via sequential semidefinite programming. SIAM J. Control Optim. 2002, 40, 1791–1820.
20. Gómez, W.; Ramírez, H. A filter algorithm for nonlinear semidefinite programming. Comput. Appl. Math. 2010, 29, 297–328.
21. Kanzow, C.; Nagel, C.; Kato, H.; Fukushima, M. Successive linearization methods for nonlinear semidefinite programs. Comput. Optim. Appl. 2005, 31, 251–273.
22. Shapiro, A. First and second order analysis of nonlinear semidefinite programs. Math. Program. 1997, 77, 301–320.
23. Sun, D.; Sun, J.; Zhang, L. The rate of convergence of the augmented Lagrangian method for nonlinear semidefinite programming. Math. Program. 2008, 114, 349–391.
24. Wolkowicz, H.; Saigal, R.; Vandenberghe, L. (Eds.) Handbook of Semidefinite Programming; International Series in Operations Research & Management Science, 27; Kluwer Academic Publishers: Boston, MA, USA, 2000.
25. Yamashita, H.; Yabe, H.; Harada, K. A primal-dual interior point method for nonlinear semidefinite programming. Math. Program. 2011, 135, 89–121.
26. Thevenet, J.; Noll, D.; Apkarian, P. Nonsmooth methods for large bilinear matrix inequalities: Applications to feedback control. Optim. Methods Softw. 2005, 200, 1–24.
27. Tran Dinh, Q.; Michiels, W.; Gros, S.; Diehl, M. An inner convex approximation algorithm for BMI optimization and applications in control. In Proceedings of the 51st IEEE Conference on Decision and Control, Maui, HI, USA, 10–13 December 2012; pp. 3576–3581.
28. Kim, S.J.; Moon, Y.H. Structurally constrained $H_2$ and $H_\infty$ control: A rank-constrained LMI approach. Automatica 2006, 42, 1583–1588.
29. Sun, C.; Dai, R. Rank-constrained optimization and its applications. Automatica 2017, 82, 128–136.
30. Burer, S.; Ye, Y. Exact semidefinite formulations for a class of (random and non-random) nonconvex quadratic programs. Math. Program. 2019.
31. Beran, E.; Vandenberghe, L.; Boyd, S. A global BMI algorithm based on the generalized Benders decomposition. In Proceedings of the 1997 European Control Conference (ECC), Brussels, Belgium, 1–7 July 1997; pp. 3741–3746.
32. Fukuda, M.; Kojima, M. Branch-and-Cut Algorithms for the Bilinear Matrix Inequality Eigenvalue Problem. Comput. Optim. Appl. 2001, 19, 79–105.
33. Goh, K.C.; Safonov, M.G.; Papavassilopoulos, G.P. Global optimization for the biaffine matrix inequality problem. J. Glob. Optim. 1995, 7, 365–380.
34. Hisaya, F.; Kohta, H. Bounds for the BMI Eigenvalue Problem. Trans. Soc. Instrum. Control Eng. 1997, 33, 616–621.
35. VanAntwerp, J.G.; Braatz, R.D.; Sahinidis, N.V. Globally optimal robust control for systems with time-varying nonlinear perturbations. Comput. Chem. Eng. 1997, 21, S125–S130.
36. Chiu, W. Method of Reduction of Variables for Bilinear Matrix Inequality Problems in System and Control Designs. IEEE Trans. Syst. Man Cybern. Syst. 2017, 47, 1241–1256.
37. Han, S.P.; Mangasarian, O.L. Exact penalty functions in nonlinear programming. Math. Program. 1979, 17, 251–269.
38. De Leeuw, J. Convergence of the majorization method for multidimensional scaling. J. Classif. 1988, 5, 163–180.
39. De Leeuw, J.; Heiser, W.J. Convergence of correction matrix algorithms for multidimensional scaling. In Geometric Representations of Relational Data; Mathesis Press: Ann Arbor, MI, USA, 1977; pp. 735–752.
40. Figueiredo, M.A.; Bioucas-Dias, J.M.; Nowak, R.D. Majorization–minimization algorithms for wavelet-based image restoration. IEEE Trans. Image Process. 2007, 16, 2980–2991.
41. Hunter, D.R.; Lange, K. A tutorial on MM algorithms. Am. Stat. 2004, 58, 30–37.
42. Sturm, J.F. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optim. Methods Softw. 1999, 11/12, 625–653.
43. Tütüncü, R.H.; Toh, K.C.; Todd, M.J. Solving semidefinite-quadratic-linear programs using SDPT3. Math. Program. 2003, 95, 189–217.
44. Zhao, X.Y.; Sun, D.; Toh, K.C. A Newton-CG augmented Lagrangian method for semidefinite programming. SIAM J. Optim. 2010, 20, 1737–1765.
45. Stewart, G.W.; Sun, J.G. Matrix Perturbation Theory; Academic Press: Cambridge, MA, USA, 1990.
46. Wu, H.; Luo, H.; Ding, X.; Chen, G. Global convergence of modified augmented Lagrangian methods for nonlinear semidefinite programming. Comput. Optim. Appl. 2013, 56, 531–558.
47. Henrion, D.; Peaucelle, D.; Arzelier, D.; Šebek, M. Ellipsoidal approximation of the stability domain of a polynomial. IEEE Trans. Autom. Control 2003, 48, 2255–2259.
48. Jury, E.I. Inners and Stability of Dynamic Systems; Wiley-Interscience (John Wiley & Sons): New York, NY, USA, 1974.
Table 1. Mathematical symbols and their meaning.

| Symbol | Meaning |
|---|---|
| $\phi'(x)$ | The derivative of $\phi$ with respect to $x$ |
| $\nabla \phi(x)$ | The gradient of $\phi$ at $x$ |
| $S_+^m$ | The set of symmetric positive semidefinite matrices of dimension $m \times m$ |
| $S_{++}^m$ | The set of symmetric positive definite matrices of dimension $m \times m$ |
| $I$ | The identity matrix of appropriate dimension |
| $A \succeq B$ | $A - B$ belongs to $S_+^m$ |
| $A \succ B$ | $A - B$ belongs to $S_{++}^m$ |
| $\langle A, B \rangle$ | The inner product of $A$ and $B$ |
| $\mathrm{tr}(A)$ | The trace (sum of diagonal elements) of the matrix $A$ |
| $\lambda_{\min}(A)$ | The minimum eigenvalue of the matrix $A$ |
| $N_{\Omega}(x)$ | The normal cone to the set $\Omega$ at the point $x \in \Omega$ |
Table 2. Numerical results of Example 2 (all entries are average CPU times in seconds).

| n | m | LPM (d = 6) | MALM (d = 6) | LPM (d = 10) | MALM (d = 10) | LPM (d = 15) | MALM (d = 15) |
|---|---|---|---|---|---|---|---|
| 30 | 10 | 1.71 | 4.07 | 4.62 | 6.86 | 5.97 | 11.91 |
| 40 | 10 | 2.19 | 6.22 | 2.99 | 8.47 | 11.28 | 13.23 |
| 50 | 10 | 2.10 | 6.46 | 9.12 | 9.42 | 12.29 | 13.91 |
| 30 | 15 | 1.91 | 4.42 | 3.46 | 6.46 | 13.90 | 10.04 |
| 40 | 15 | 1.92 | 6.75 | 8.31 | 9.36 | 12.32 | 14.89 |
| 50 | 15 | 2.25 | 7.75 | 6.78 | 10.32 | 13.63 | 17.07 |
| 30 | 20 | 1.85 | 12.93 | 2.42 | 17.79 | 27.19 | 34.87 |
| 40 | 20 | 2.12 | 18.71 | 3.32 | 28.41 | 24.83 | 43.37 |
| 50 | 20 | 2.51 | 23.90 | 9.02 | 35.78 | 28.70 | 51.18 |
| 30 | 30 | 2.21 | 14.65 | 2.89 | 23.25 | 26.64 | 39.39 |
| 40 | 30 | 2.47 | 20.68 | 3.14 | 30.03 | 42.43 | 46.73 |
| 50 | 30 | 2.57 | 28.45 | 5.00 | 37.70 | 46.62 | 55.09 |
