**1. Introduction**

Optimization problems generally fall into two categories: those with continuous variables and those with discrete variables. The latter category is called combinatorial optimization, an active field in applied mathematics [1]. Common problems include the maximum flow problem, the traveling salesman problem, the matching problem, and the knapsack problem. Among these well-known combinatorial optimization problems, zero-one quadratic programming (01QP), whose variables can only take the values 0 or 1 [2], is particularly important and has attracted considerable attention.

Zero-one quadratic programming, which can be divided into constrained (01CQP) and unconstrained (01UQP) variants, is a combinatorial optimization problem with practical significance. For example, it can be applied in circuit design [3], pattern recognition [4], capital budgeting [5], and portfolio optimization [6]. Beyond these well-known applications, 01QP also has potential applications in nonlinear-control-related fields [7–11]. Among these applications, phasor measurement unit (PMU) placement has been widely studied. Phasor measurement units can be used in dynamic monitoring, system protection, and system analysis and prediction, so the placement of PMUs has become an important issue. As the scale of the electric grid grows, the PMU placement problem becomes more difficult and must be addressed under various requirements. In [12], a modified bisecting search combined with a simulated annealing method was proposed, where randomly selected arrays were used to test the observability of the system. Considering incomplete observability of the system, a calculation method based on graph theory was proposed in [13]; this method is time-consuming, and as the dimension increases, the computational load becomes too heavy. An integer linear programming method [14] and an improved algorithm [15] were put forward, considering system redundancy as well as full and incomplete observability. Researchers in [16] proposed a binary programming method considering the joint placement of conventional measurement units and phasor measurement units.

Zero-one quadratic programming also has theoretical significance. Many classical problems, such as the max-cut problem [17,18] and the max-bisection problem [19], can be converted to zero-one quadratic programming. Therefore, designing algorithms that solve 01QP effectively and efficiently is meaningful in both practical and theoretical fields.

However, zero-one quadratic programming is in general a well-known NP-hard problem. The common approaches are exact algorithms and heuristic algorithms. Exact algorithms can guarantee the global optimum. In [20], the branch-and-bound method was used to solve zero-one quadratic programming problems without constraints. In [21,22], exact methods were proposed by means of geometric properties. Penalty parameters were introduced to solve constrained zero-one quadratic programming problems in [23]. However, exact algorithms are very time-consuming and are suitable only for small-scale problems. In contrast, heuristic algorithms, such as simulated annealing [24], genetic algorithms [25], neural networks [26], and ant colony algorithms [27], can generally solve medium- and large-scale problems quickly, but most of them can only find a local optimum.

Therefore, identifying polynomially solvable subclasses of zero-one quadratic programming problems, together with corresponding algorithms, is a promising way both to balance these two approaches and to offer theoretical insight into the complicated nature of the problem. In our past studies [28,29], the problem of five-diagonal matrix quadratic programming with linear constraints was solved effectively. In [30], an algorithm for solving 01QP with a seven-diagonal matrix *Q* was presented. However, each of these algorithms applies only to a very specific problem, which narrows their applications. We then proposed an algorithm for unconstrained problems with a *k*-diagonal matrix in [31]. In this paper, based on our previous results, we further propose an algorithm for the general linearly constrained zero-one quadratic programming problem with a *k*-diagonal matrix by combining the basic algorithm with the dynamic programming method. In the proposed algorithm, the value of *k* is changeable, which means the algorithm covers different subclasses of the problem. Theoretical analysis and experimental results show that the proposed algorithm is effective and efficient. We also apply the algorithm to real-world applications and verify its feasibility.

The main contributions of this paper are as follows: (1) The previous algorithm targeted a fixed value of *k* in the matrix *Q*, whereas the algorithm in this paper targets the general problem with changeable *k*. (2) We analyze the time complexity and prove the rationality of the algorithm. (3) We apply the algorithm to phasor measurement unit placement in a real-world application.

This paper is organized as follows: in Section 2, we review the algorithm for solving unconstrained zero-one *k*-diagonal matrix quadratic programming. In Section 3, an algorithm for constrained zero-one *k*-diagonal matrix quadratic programming is proposed, together with its proof. The application of zero-one quadratic programming to phasor measurement unit placement is presented in Section 4. Experimental results and discussion are given in Section 5. We draw our conclusions and outline future work in Section 6.

#### **2. Basic Algorithm to 01UQP**

The following Equation (1) shows the form of the *k*-diagonal matrix zero-one quadratic programming problem. Its special feature is the form of the matrix *Q*, called the *k*-diagonal matrix, where *k* = 2*m* + 1 (*m* = 0, 1, 2, . . . , *n* − 1).

$$\min\_{\mathbf{x}\in\{0,1\}^{n}} f(\mathbf{x}) = \frac{1}{2}\mathbf{x}^{T}\mathbf{Q}\mathbf{x} + \mathbf{c}^{T}\mathbf{x} \tag{1}$$

where *Q* = (*q*<sub>ij</sub>)<sub>n×n</sub> with *q*<sub>ij</sub> = *q*<sub>ji</sub> (*i*, *j* = 1, 2, ... , *n*), i.e., *Q* is symmetric. All entries of the matrix are zero except *q*<sub>ij</sub> and *q*<sub>ji</sub> (*i* = 1, 2, . . . , *n* − 1, *j* = *i* + 1, *i* + 2, . . . , *i* + *m*).
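To make the structure concrete, the following Python sketch (our own illustration; the function names `k_diagonal_q` and `objective` are ours, not from the paper) builds a random symmetric *k*-diagonal matrix with zero diagonal and evaluates the objective of Equation (1):

```python
import numpy as np

def k_diagonal_q(n, m, seed=None):
    """Random symmetric k-diagonal matrix Q (k = 2m + 1) with zero diagonal:
    entries may be nonzero only when 0 < |i - j| <= m."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, min(i + m, n - 1) + 1):
            Q[i, j] = Q[j, i] = float(rng.integers(-9, 10))
    return Q

def objective(Q, c, x):
    """f(x) = 1/2 x^T Q x + c^T x for a binary vector x."""
    x = np.asarray(x, dtype=float)
    return 0.5 * x @ Q @ x + c @ x
```

Since the diagonal of *Q* is zero, the quadratic part reduces to the sum of *q*<sub>ij</sub>*x*<sub>i</sub>*x*<sub>j</sub> over pairs *i* < *j* within the band.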


Based on past works [30,31], we have the following algorithm for solving 01UQP. The feasibility and performance of the algorithm are shown in [31]. Figure 1 shows the process of Algorithm 1 intuitively.

**Figure 1.** Flow chart of the zero-one unconstrained quadratic programming (01UQP) algorithm.

#### **Algorithm 1** Process of solving 01UQP

**Step 1:** Assign *x*<sub>n−m+1</sub>, . . . , *x*<sub>n</sub> to 0 or 1.

(1) Adjacent terms are the combinations of *x*<sub>n−m+1</sub>, . . . , *x*<sub>n</sub> whose values are the same except for the value of *x*<sub>n</sub>.

(2) Get the corresponding *f*(*x*).

(3) Label these states *state*0, *state*1, . . . , *state*(2<sup>m</sup> − 1).

(4) Compare *f*(*x*) in the two adjacent terms (only the coefficient of *x*<sub>n−m</sub> and the constant term are different).

**Step 2:** Change the value of *x*<sub>n−m−i</sub> to 0 and 1 (*i* = 0, 1, . . . , *n* − *m* − 1).

(1) Compare every two adjacent states and take the result with the smaller constant term as the new state.

(2) Update all 2<sup>m</sup> states.

**Step 3:** Get the optimal solution *x*.

(1) Update the states based on **Step 2** until only the constant term remains in *f*(*x*).

(2) The optimal value is the minimal one.

(3) Trace back to get the optimal solution *x*.
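As one possible concrete rendering of these steps, the following Python sketch (our own, under the paper's assumptions that *Q* is symmetric with zero diagonal and bandwidth *m*) keeps one best partial objective per assignment of the *m* most recently fixed variables and sweeps from *x*<sub>n</sub> down to *x*<sub>1</sub>:

```python
import numpy as np

def solve_01uqp(Q, c, m):
    """Sketch: minimize f(x) = 1/2 x^T Q x + c^T x over x in {0,1}^n, for Q
    symmetric with zero diagonal and q_ij = 0 whenever |i - j| > m."""
    n = len(c)
    # state (x_{i+1}, ..., x_{i+m}) -> (best cost of all terms that involve
    # only indices > i, the assignment x_{i+1..n} achieving that cost)
    dp = {(): (0.0, ())}
    for i in range(n - 1, -1, -1):          # fix x_i, sweeping backwards
        new = {}
        for state, (cost, assign) in dp.items():
            for xi in (0, 1):
                add = c[i] * xi
                if xi:                      # add the q_{i,j} x_i x_j terms
                    for t, xj in enumerate(state):
                        add += Q[i][i + 1 + t] * xj
                ns = ((xi,) + state)[:m]    # keep the m most recent bits
                cand = (cost + add, (xi,) + assign)
                if ns not in new or cand[0] < new[ns][0]:
                    new[ns] = cand          # smaller constant term wins
        dp = new
    best_cost, best_x = min(dp.values(), key=lambda v: v[0])
    return best_cost, list(best_x)
```

Each sweep step touches at most 2<sup>m</sup> states twice, in line with the state-merging described in Steps 2 and 3.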

#### **3. Algorithm for 01CQP**

#### *3.1. 01CQP Algorithm Description*

Consider the constrained *k*-diagonal matrix zero-one quadratic programming problem:

$$\begin{array}{ll}\underset{\mathbf{x}\in\{0,1\}^{n}}{\min} & \frac{1}{2}\mathbf{x}^{T}Q\mathbf{x} + c^{T}\mathbf{x} \\ \text{s.t.} & a^{T}\mathbf{x}\leq b \end{array} \tag{2}$$

where *Q* has the same meaning as in the 01UQP formulation of Section 2, *a* ∈ Z<sup>n</sup><sub>+</sub>, and *b* ∈ Z<sub>+</sub>.

In this section, we utilize the dynamic programming method to solve the 01CQP problem. To apply dynamic programming, we introduce a state variable *s*<sub>k</sub> (*s*<sub>k</sub> ∈ Z) and a stage variable *k* (0 < *k* ≤ *n*), which satisfy the following iteration; *s*<sub>k+1</sub> can be expressed as:

$$s\_{k+1} = s\_k + a\_{k+1} x\_{k+1} \quad (k = 1, \ldots, n-1).$$

We only need to consider the integer points of the state space since *a* ∈ Z<sup>n</sup><sub>+</sub> and *b* ∈ Z<sub>+</sub>. Since *s*<sub>k</sub> satisfies 0 ≤ *s*<sub>k</sub> ≤ *b*, we define the set *S*<sub>k</sub> = {*s*<sub>k</sub> | 0 ≤ *s*<sub>k</sub> ≤ *b*, *s*<sub>k</sub> ∈ Z}.
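For illustration, the reachable state sets generated by this iteration can be computed directly (a small Python sketch of our own; the function name is ours):

```python
def reachable_states(a, b):
    """Sets of reachable states s_k = a_1 x_1 + ... + a_k x_k with s_k <= b.
    states[k] is the set after deciding x_1, ..., x_k (states[0] = {0})."""
    states = [{0}]
    for ak in a:
        states.append({s + ak * x
                       for s in states[-1] for x in (0, 1)
                       if s + ak * x <= b})
    return states
```

Every stage stays within {0, . . . , *b*}, which is exactly why only the *b* + 1 integer points of each *S*<sub>k</sub> need to be tabulated.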

Algorithms 2 and 3 show the detailed calculation process.

**Algorithm 2** Calculation Method of *f*(*sk*)

**Case 1:** When *k* = 1, there are two cases.

**(1a)** If *s*<sub>1</sub> < *a*<sub>1</sub>, then *x*<sub>1</sub> = 0: set *x*<sub>1</sub><sup>∗</sup> = 0 and *f*(*s*<sub>1</sub>) = *f*(0, . . . , *x*<sub>n</sub>).

**(1b)** If *s*<sub>1</sub> ≥ *a*<sub>1</sub>, then *x*<sub>1</sub> = 1: set *x*<sub>1</sub><sup>∗</sup> = 1 and *f*(*s*<sub>1</sub>) = *f*(1, . . . , *x*<sub>n</sub>).

We will get a series of functions *f*(*x*) after executing Case 1.

**Case 2:** When *k* ≥ 2, there are also two cases.

**(2a)** If *s*<sub>k</sub> < *a*<sub>k</sub>, then *x*<sub>k</sub> must be 0 to satisfy *s*<sub>k−1</sub> = *s*<sub>k</sub> − *a*<sub>k</sub>*x*<sub>k</sub>. In this case *s*<sub>k</sub> = *s*<sub>k−1</sub>, from which we obtain *f*(*s*<sub>k</sub>) = *f*(*s*<sub>k−1</sub>)|<sub>x<sub>k</sub>=0</sub>.

**(2b)** If *s*<sub>k</sub> ≥ *a*<sub>k</sub>, then *x*<sub>k</sub> can be either 0 or 1, which generates two further situations: 1) if *x*<sub>k</sub><sup>∗</sup> = 0 and *s*<sub>k</sub> = *s*<sub>k−1</sub>, we get *f*(*s*<sub>k</sub>) = *f*(*s*<sub>k−1</sub>)|<sub>x<sub>k</sub>=0</sub>; 2) if *x*<sub>k</sub><sup>∗</sup> = 1 and *s*<sub>k−1</sub> = *s*<sub>k</sub> − *a*<sub>k</sub>*x*<sub>k</sub>, we get *f*(*s*<sub>k</sub>) = *f*(*s*<sub>k−1</sub>)|<sub>x<sub>k</sub>=1</sub>.

We can see that there is only one function *f*(*s*<sub>k</sub>) corresponding to each state *s*<sub>k</sub> when *k* = 1, whereas there are several *f*(*s*<sub>k</sub>) for each state *s*<sub>k</sub> when *k* > 1. To save storage and computation time, *f*(*s*<sub>k</sub>) should be selected carefully in the next step. We need to find the optimal *f*(*s*<sub>k</sub>) = *f*(*s*<sub>k−1</sub>)|<sub>x<sub>k</sub>=0</sub> and *f*(*s*<sub>k</sub>) = *f*(*s*<sub>k−1</sub>)|<sub>x<sub>k</sub>=1</sub>; the selection process is given in Algorithm 3.

#### **Algorithm 3** *state*0 and *state*1

*state*0 (*x*<sub>k</sub> = 0):

Case 1: There is only one *f*(*s*<sub>k</sub>) = *f*(*s*<sub>k−1</sub>)|<sub>x<sub>k</sub>=0</sub>: set *x*<sub>k</sub><sup>∗</sup> = 0 and *f*(*s*<sub>k</sub>) = *f*(*s*<sub>k−1</sub>)|<sub>x<sub>k</sub>=0</sub>.

Case 2: There is more than one *f*(*s*<sub>k</sub>) = *f*(*s*<sub>k−1</sub>)|<sub>x<sub>k</sub>=0</sub>:

1) Compare the functions *f*(*s*<sub>k−1</sub>)|<sub>x<sub>k</sub>=0</sub>, which are almost the same except for the constant term, and pick the smallest one.

2) Set *x*<sub>k</sub><sup>∗</sup> = 0 and *f*(*s*<sub>k</sub>) = *f*(*s*<sub>k−1</sub>)|<sub>x<sub>k</sub>=0</sub>.

*state*1 (*x*<sub>k</sub> = 1):

Case 1: There is only one *f*(*s*<sub>k</sub>) = *f*(*s*<sub>k−1</sub>)|<sub>x<sub>k</sub>=1</sub>: set *x*<sub>k</sub><sup>∗</sup> = 1 and *f*(*s*<sub>k</sub>) = *f*(*s*<sub>k−1</sub>)|<sub>x<sub>k</sub>=1</sub>.

Case 2: There is more than one *f*(*s*<sub>k</sub>) = *f*(*s*<sub>k−1</sub>)|<sub>x<sub>k</sub>=1</sub>:

1) Find the *f*(*x*) using the same approach as in *state*0 Case 2.

2) Set *x*<sub>k</sub><sup>∗</sup> = 1 and *f*(*s*<sub>k</sub>) = *f*(*s*<sub>k−1</sub>)|<sub>x<sub>k</sub>=1</sub>.

According to the above algorithm, the maximum number of functions per *State* is shown in Table 1. Most of the time is spent generating the state table, and the number of core-step computations is determined by the state counts in Table 1. In total, we need to update the state table *n* times and the number of states is *b*, so the time complexity is *O*(2<sup>m−1</sup> × *n* × *b*).
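A compact way to realize this bookkeeping is a dynamic program whose key couples the budget level *s* with the last *m* assigned bits; the Python sketch below is our own rendering, not the authors' implementation, and it keeps, per key, only the candidate with the smaller cost, mirroring the selection in Algorithm 3:

```python
import numpy as np

def solve_01cqp(Q, c, a, b, m):
    """Sketch: minimize 1/2 x^T Q x + c^T x s.t. a^T x <= b, x in {0,1}^n,
    where Q is symmetric with zero diagonal and bandwidth m (k = 2m + 1).
    DP key = (budget used s, tuple of the last m assigned bits)."""
    n = len(c)
    dp = {(0, ()): (0.0, ())}           # key -> (cost so far, x_1 .. x_k)
    for k in range(n):                   # stage k assigns variable x_{k+1}
        new = {}
        for (s, bits), (cost, assign) in dp.items():
            for xk in (0, 1):
                s2 = s + a[k] * xk
                if s2 > b:               # would violate a^T x <= b
                    continue
                add = c[k] * xk
                if xk:                   # couplings q_{j,k} x_j x_k with the
                    for t, xj in enumerate(bits):   # last len(bits) variables
                        add += Q[k - len(bits) + t][k] * xj
                nb = (bits + (xk,))[-m:] if m > 0 else ()
                key = (s2, nb)
                cand = (cost + add, assign + (xk,))
                if key not in new or cand[0] < new[key][0]:
                    new[key] = cand      # keep the smaller constant term
        dp = new
    best_cost, best_x = min(dp.values(), key=lambda v: v[0])
    return best_cost, list(best_x)
```

The number of keys is at most (*b* + 1) × 2<sup>m</sup> per stage and there are *n* stages, consistent with the polynomial complexity claimed above for fixed *m*.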

**Table 1.** Maximum number of functions per *State*.


#### *3.2. Analysis on the Effectiveness and Rationality of 01CQP Algorithm*

In this section, we analyze the properties of the polynomial time algorithm for 01CQP. To analyze the algorithm, we need to demonstrate the rationality of Algorithms 2 and 3. Suppose *f*(*x*) has the form:

$$\begin{aligned} f(\mathbf{x}) &= q\_{12}x\_1x\_2 + q\_{13}x\_1x\_3 + \dots + q\_{1,1+m}x\_1x\_{1+m} + \dots + q\_{n-m,n-m+1}x\_{n-m}x\_{n-m+1} \\ &+ \dots + q\_{n-m,n}x\_{n-m}x\_n + \dots + q\_{n-1,n}x\_{n-1}x\_n + c\_1x\_1 + \dots + c\_nx\_n. \end{aligned}$$

#### **Step 1:** Set *k* = 1

(1) *s*1 < *a*1, *x*1 = 0

$$\begin{aligned} f\_1(\mathbf{s}\_1) &= f(\mathbf{x})|\_{\mathbf{x}\_1=0} = q\_{i,i+1}\mathbf{x}\_i\mathbf{x}\_{i+1} + q\_{i,i+2}\mathbf{x}\_i\mathbf{x}\_{i+2} + \dots + q\_{i,i+m}\mathbf{x}\_i\mathbf{x}\_{i+m} \\ &+ \dots + q\_{n-m,n-m+1}\mathbf{x}\_{n-m}\mathbf{x}\_{n-m+1} + \dots + q\_{n-m,n}\mathbf{x}\_{n-m}\mathbf{x}\_n \\ &+ \dots + q\_{n-1,n}\mathbf{x}\_{n-1}\mathbf{x}\_n + c\_2\mathbf{x}\_2 + \dots + c\_n\mathbf{x}\_n(i = 2, \dots, n-m-1). \end{aligned} \tag{3}$$

(2) *s*1 ≥ *a*1, *x*1 = 1

$$\begin{aligned} f\_1(\mathbf{s}\_1) &= f(\mathbf{x})|\_{\mathbf{x}\_1=1} = q\_{i,i+1}\mathbf{x}\_i\mathbf{x}\_{i+1} + q\_{i,i+2}\mathbf{x}\_i\mathbf{x}\_{i+2} + \dots + q\_{i,i+m}\mathbf{x}\_i\mathbf{x}\_{i+m} \\ &+ \dots + q\_{n-m,n-m+1}\mathbf{x}\_{n-m}\mathbf{x}\_{n-m+1} + \dots + q\_{n-m,n}\mathbf{x}\_{n-m}\mathbf{x}\_n \\ &+ \dots + q\_{n-1,n}\mathbf{x}\_{n-1}\mathbf{x}\_n + \widehat{c}\_2\mathbf{x}\_2 + \dots + \widehat{c}\_{1+m}\mathbf{x}\_{1+m} + c\_{m+2}\mathbf{x}\_{m+2} \\ &+ \dots + c\_n\mathbf{x}\_n + c\_1. \end{aligned} \tag{4}$$

Clearly, when *k* = 1, each state *s*<sub>1</sub> has only one function *f*<sub>1</sub>(*s*<sub>1</sub>). The functions differ only in the coefficients of *x*<sub>2</sub>, ... , *x*<sub>m+1</sub> and in the constant terms.

#### **Step 2:** Set *k* = 2, 3, ..., *m*

(1) If *sk* < *ak*, execute *state*0.

If *s*<sub>k−1</sub> < *a*<sub>k−1</sub>, *f*<sub>k</sub>(*s*<sub>k</sub>) has the same form as function (3):

$$\begin{aligned} f(s\_k) &= q\_{k,k+1} \mathbf{x}\_k \mathbf{x}\_{k+1} + q\_{k,k+2} \mathbf{x}\_k \mathbf{x}\_{k+2} + \dots + q\_{k,k+m} \mathbf{x}\_k \mathbf{x}\_{k+m} \\ &+ \dots + q\_{i,i+1} \mathbf{x}\_i \mathbf{x}\_{i+1} + q\_{i,i+2} \mathbf{x}\_i \mathbf{x}\_{i+2} + \dots + q\_{i,i+m} \mathbf{x}\_i \mathbf{x}\_{i+m} + \dots + \\ q\_{n-m,n-m+1} \mathbf{x}\_{n-m} \mathbf{x}\_{n-m+1} &+ \dots + q\_{n-m,n} \mathbf{x}\_{n-m} \mathbf{x}\_n + \dots + \\ q\_{n-1,n} \mathbf{x}\_{n-1} \mathbf{x}\_n + c\_{k+1} \mathbf{x}\_{k+1} + \dots + c\_n \mathbf{x}\_n + const. \end{aligned} \tag{5}$$

(*const* represents the constant term of *f*(*s*<sub>k−1</sub>).) If *s*<sub>k−1</sub> ≥ *a*<sub>k−1</sub>, *f*<sub>k</sub>(*s*<sub>k</sub>) has the same form as function (4):

$$\begin{aligned} f(s\_{k-1}) &= q\_{k,k+1}x\_kx\_{k+1} + q\_{k,k+2}x\_kx\_{k+2} + \cdots + q\_{k,k+m}x\_kx\_{k+m} \\ &+ \cdots + q\_{i,i+1}x\_ix\_{i+1} + q\_{i,i+2}x\_ix\_{i+2} + \cdots + q\_{i,i+m}x\_ix\_{i+m} + \cdots \\ &+ q\_{n-m,n-m+1}x\_{n-m}x\_{n-m+1} + \cdots + q\_{n-m,n}x\_{n-m}x\_n + \cdots \\ &+ q\_{n-1,n}x\_{n-1}x\_n + \widehat{c}\_{k+1}x\_{k+1} + \cdots + \widehat{c}\_{k+m}x\_{k+m} + c\_{k+m+1}x\_{k+m+1} \\ &+ \cdots + c\_nx\_n + const + c\_k. \end{aligned} \tag{6}$$

Then, *f*<sub>k</sub>(*s*<sub>k</sub>) can be expressed as:

$$\begin{aligned} f\_k(s\_k) &= f\_{k-1}(s\_{k-1})|\_{x\_k=0} = q\_{k+1,k+2}x\_{k+1}x\_{k+2} + q\_{k+1,k+3}x\_{k+1}x\_{k+3} + \cdots \\ &+ q\_{k+1,k+m+1}x\_{k+1}x\_{k+m+1} + \cdots + q\_{i,i+1}x\_ix\_{i+1} + q\_{i,i+2}x\_ix\_{i+2} + \cdots \\ &+ q\_{i,i+m}x\_ix\_{i+m} + \cdots + q\_{n-m,n-m+1}x\_{n-m}x\_{n-m+1} + \cdots + q\_{n-m,n}x\_{n-m}x\_n \\ &+ \cdots + q\_{n-1,n}x\_{n-1}x\_n + \widehat{c}\_{k+1}x\_{k+1} + \cdots + \widehat{c}\_{k+m}x\_{k+m} + c\_{k+m+1}x\_{k+m+1} \\ &+ \cdots + c\_nx\_n + \widehat{const}. \end{aligned} \tag{7}$$

At this point, the maximum number of *f*<sub>k</sub>(*s*<sub>k</sub>) corresponding to each *s*<sub>k</sub> is 2<sup>k−2</sup>.

(2) If *s*<sub>k</sub> ≥ *a*<sub>k</sub>, execute *state*0 and *state*1.

Here, *state*0 is the same as in Case 1, so we do not repeat it. Consider *state*1: *x*<sub>k</sub> = 1, *f*<sub>k</sub>(*s*<sub>k</sub>) = *f*<sub>k−1</sub>(*s*<sub>k</sub> − *a*<sub>k</sub>*x*<sub>k</sub>), and *s*<sub>k−1</sub> = *s*<sub>k</sub> − *a*<sub>k</sub>*x*<sub>k</sub> = *s*<sub>k</sub> − *a*<sub>k</sub>. If *s*<sub>k−1</sub> < *a*<sub>k−1</sub>, *f*(*s*<sub>k−1</sub>) has the form of function (5). If *s*<sub>k−1</sub> ≥ *a*<sub>k−1</sub>, *f*(*s*<sub>k−1</sub>) has the form of function (6). Then, *f*<sub>k</sub>(*s*<sub>k</sub>) can be expressed as:

$$\begin{aligned} f\_k(s\_k) &= f\_{k-1}(s\_{k-1})|\_{x\_k=1} = q\_{k+1,k+2}x\_{k+1}x\_{k+2} + q\_{k+1,k+3}x\_{k+1}x\_{k+3} + \cdots \\ &+ q\_{k+1,k+m+1}x\_{k+1}x\_{k+m+1} + \cdots + q\_{i,i+1}x\_ix\_{i+1} + q\_{i,i+2}x\_ix\_{i+2} + \cdots \\ &+ q\_{i,i+m}x\_ix\_{i+m} + \cdots + q\_{n-m,n-m+1}x\_{n-m}x\_{n-m+1} + \cdots + q\_{n-m,n}x\_{n-m}x\_n \\ &+ \cdots + q\_{n-1,n}x\_{n-1}x\_n + \widehat{c}\_{k+1}'x\_{k+1} + \cdots + \widehat{c}\_{k+m}'x\_{k+m} + c\_{k+m+1}x\_{k+m+1} \\ &+ \cdots + c\_nx\_n + \widehat{const}'. \end{aligned} \tag{8}$$

All *f*<sub>k</sub>(*s*<sub>k</sub>) differ only in the constant terms and in the coefficients of *x*<sub>k+1</sub>, ..., *x*<sub>k+m</sub>.

**Step 3:** *k* = *n* − *m* − 1

Consider the most complex case, *s*<sub>k</sub> ≥ *a*<sub>k</sub>: the state variable *s*<sub>k</sub> corresponds to 2<sup>m−1</sup> functions *f*<sub>k</sub>(*s*<sub>k</sub>) = *f*<sub>k−1</sub>(*s*<sub>k</sub>) and 2<sup>m−1</sup> functions *f*<sub>k</sub>(*s*<sub>k</sub>) = *f*<sub>k−1</sub>(*s*<sub>k</sub> − *a*<sub>k</sub>), respectively. Therefore, we need to execute *state*0 first and then *state*1.

(1) *state*0:

Since *f*<sub>k</sub>(*s*<sub>k</sub>) has the form of function (8), *state*0 is executed for each *f*<sub>k</sub>(*s*<sub>k</sub>) respectively, and the following expression is obtained:

$$\begin{aligned} f\_k(s\_k) &= f\_{k-1}(s\_k)|\_{x\_k=0} = q\_{n-m,n-m+1}x\_{n-m}x\_{n-m+1} + \cdots + q\_{n-m,n}x\_{n-m}x\_n \\ &+ \cdots + q\_{n-1,n}x\_{n-1}x\_n + \widehat{c}\_{n-m}'x\_{n-m} + c\_{n-m+1}x\_{n-m+1} \\ &+ \cdots + c\_nx\_n + \widehat{const}'. \end{aligned} \tag{9}$$

Clearly, these *f*<sub>k</sub>(*s*<sub>k</sub>) differ only in the constant terms and the coefficients of *x*<sub>n−m</sub>.

(2) *state*1:

It is possible to obtain *fk*(*sk*), which has the following form:

$$\begin{aligned} f\_k(s\_k) &= f\_{k-1}(s\_{k-1})|\_{x\_k=1} = q\_{n-m,n-m+1}x\_{n-m}x\_{n-m+1} + \cdots + q\_{n-m,n}x\_{n-m}x\_n \\ &+ \cdots + q\_{n-1,n}x\_{n-1}x\_n + \widehat{c}\_{n-m}''x\_{n-m} + c\_{n-m+1}x\_{n-m+1} \\ &+ \cdots + c\_nx\_n + \widehat{const}''. \end{aligned} \tag{10}$$

In both cases (1) and (2), the functions differ only in the constant terms and the coefficients of *x*<sub>k+1</sub>.

**Step 4:** *k* = *m* + 2, ..., *n*

Based on Algorithm 2 Case 2, we calculate the functions *f*<sub>k</sub>(*s*<sub>k</sub>); the core is the execution of *state*0 and *state*1. When these are executed, any two *f*<sub>k</sub>(*s*<sub>k</sub>) differ only in the constant term and the coefficient of *x*<sub>k</sub>. Therefore, when executing *state*0 we only need to compare the constant terms, and when executing *state*1 we only need to compare the sums of the constants and the coefficients of *x*<sub>k</sub>. In this way, we avoid computing redundant *f*<sub>k</sub>(*s*<sub>k</sub>).

The form of the functions *f*<sub>k</sub>(*s*<sub>k</sub>) and the maximum number of *f*<sub>k</sub>(*s*<sub>k</sub>) in each stage *k* are similar to those in stage *k* = *n* − *m*, which ensures the repeatability of the algorithm. The algorithm is illustrated below with a concrete example and numerical simulations.

#### *3.3. Calculation Example of 01CQP*

For example, the parameters *Q*, *c*, *a*, and *b* are:

$$Q = \begin{pmatrix} 0 & 23 & -37 & -56 & 0 & 0 & 0 & 0 \\ 23 & 0 & 41 & 16 & -34 & 0 & 0 & 0 \\ -37 & 41 & 0 & -62 & -27 & 76 & 0 & 0 \\ -56 & 16 & -62 & 0 & -81 & 14 & -58 & 0 \\ 0 & -34 & -27 & -81 & 0 & 90 & 25 & -42 \\ 0 & 0 & 76 & 14 & 90 & 0 & -12 & 31 \\ 0 & 0 & 0 & -58 & 25 & -12 & 0 & -94 \\ 0 & 0 & 0 & 0 & -42 & 31 & -94 & 0 \end{pmatrix}$$

$$c = \left( \begin{array}{cccccccc} 24, & -54, & -17, & 36, & -72, & 63, & 46, & -18 \end{array} \right)^T$$

$$a = \left( \begin{array}{cccc} 1, & 2, & 3, & 2, & 4, & 2, & 3, & 2 \\ \end{array} \right)^T$$

$$b = 6.$$

In this example, we will omit some of the states.

(1) Set *k* = 1, *s*<sub>1</sub> = *a*<sub>1</sub>*x*<sub>1</sub>, *a*<sub>1</sub> = 1

We can set *s*<sub>1</sub> = 0, 1, . . . , 6. According to Algorithm 2 Case 1, we obtain Table 2. When *s*<sub>1</sub> = 0, we calculate *x*<sub>1</sub><sup>∗</sup> and *f*(*s*<sub>1</sub>) through Algorithm 2 Case (1a), since *s*<sub>1</sub> < *a*<sub>1</sub>. When *s*<sub>1</sub> = 1, . . . , 6, we calculate them through Algorithm 2 Case (1b), since *s*<sub>1</sub> ≥ *a*<sub>1</sub>. We omit *state*3, . . . , *state*6, which are analogous to *state*1 and *state*2.

(2) Set *k* = 2, 3, with *s*<sub>2</sub> = *a*<sub>1</sub>*x*<sub>1</sub> + *a*<sub>2</sub>*x*<sub>2</sub> = *s*<sub>1</sub> + *a*<sub>2</sub>*x*<sub>2</sub> and *s*<sub>3</sub> = *s*<sub>2</sub> + *a*<sub>3</sub>*x*<sub>3</sub>

We can set *s*<sub>3</sub> = 0, 1, ... , 6. According to Algorithm 2 Case 2, we obtain Table 3. The case of *s*<sub>2</sub> = 0, 1, . . . , 6 is omitted.

(3) Set *k* = 4, 5, 6, 7

According to Algorithm 2 Case 2 and Algorithm 3, we can obtain *f*<sub>k</sub>(*s*<sub>k</sub>) for *k* = 4, ..., 7. Tables 4 and 5 show part of the calculation process. From the tables, we can see that only the coefficient of *x*<sub>k+1</sub> and the constant terms differ between adjacent items.

(4) Set *k* = 8

Finally, we obtain Table 6, which contains only constant terms. From it, we can see that *x*<sub>8</sub><sup>∗</sup> = 0 and *f*(*x*<sup>∗</sup>) = −160, and by the backtracking method we find the optimal solution *x*<sup>∗</sup> = (0, 1, 0, 0, 1, 0, 0, 0).
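Because *n* = 8 here, the result can be cross-checked by exhaustive enumeration. The following self-contained Python snippet is our own check, not part of the original method; note that we take the (5,4) entry of the printed matrix as −81 so that *Q* is symmetric, as the problem statement requires (the optimum does not involve that entry, since *x*<sub>4</sub> = 0):

```python
import itertools

import numpy as np

# Worked-example data, with Q symmetrized (entry (5,4) set to -81).
Q = np.array([
    [  0,  23, -37, -56,   0,   0,   0,   0],
    [ 23,   0,  41,  16, -34,   0,   0,   0],
    [-37,  41,   0, -62, -27,  76,   0,   0],
    [-56,  16, -62,   0, -81,  14, -58,   0],
    [  0, -34, -27, -81,   0,  90,  25, -42],
    [  0,   0,  76,  14,  90,   0, -12,  31],
    [  0,   0,   0, -58,  25, -12,   0, -94],
    [  0,   0,   0,   0, -42,  31, -94,   0]], dtype=float)
c = np.array([24, -54, -17, 36, -72, 63, 46, -18], dtype=float)
a = np.array([1, 2, 3, 2, 4, 2, 3, 2])
b = 6

# Enumerate all 2^8 binary vectors, keep the feasible ones, take the best.
best_val, best_x = min(
    (0.5 * np.array(x, float) @ Q @ np.array(x, float) + c @ np.array(x, float), x)
    for x in itertools.product((0, 1), repeat=8)
    if a @ np.array(x) <= b)
print(best_val, best_x)   # -160.0 (0, 1, 0, 0, 1, 0, 0, 0)
```

The enumeration confirms the tabulated optimum: the binding constraint is *a*<sup>T</sup>*x* = 2 + 4 = 6 ≤ *b*, and *f*(*x*<sup>∗</sup>) = *q*<sub>25</sub> + *c*<sub>2</sub> + *c*<sub>5</sub> = −34 − 54 − 72 = −160.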


**Table 2.** Functions *f*1(*<sup>s</sup>*1).




**Table 4.** Functions *f*4(*<sup>s</sup>*4).





#### **4. Application of Zero-One Quadratic Programming in Phasor Measurement Units Placement**

Determining the installation locations and the number of units to install is the focus of the phasor measurement unit placement problem. The matrix in the PMU placement problem differs from the previous *k*-diagonal matrix in the elements of the main diagonal. However, since each *x*<sub>i</sub> is either 0 or 1, we can convert it into a *k*-diagonal matrix. We can then take advantage of the algorithms designed above to solve the PMU placement problem, provided that not too many additional constraints and practical requirements are imposed.
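The conversion relies on the identity *x*<sub>i</sub><sup>2</sup> = *x*<sub>i</sub> for binary variables, which lets the main diagonal of *Q* be folded into the linear term. A minimal sketch of this preprocessing (our own illustration; the function name is ours):

```python
import numpy as np

def absorb_diagonal(Q, c):
    """For binary x, x_i^2 = x_i, so 1/2 x^T Q x + c^T x equals
    1/2 x^T Q0 x + (c + diag(Q)/2)^T x, where Q0 is Q with its
    main diagonal set to zero."""
    d = np.diag(Q).copy()
    Q0 = Q - np.diag(d)
    return Q0, c + 0.5 * d
```

After this step, the resulting matrix has a zero diagonal, so the banded algorithms of Sections 2 and 3 apply unchanged.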
