#### *4.1. Modelling*

For a power system composed of *n* nodes, the PMU placement is represented by the *n*-dimensional vector $X = (x_1, x_2, \ldots, x_n)^T$, where, for $i = 1, 2, \ldots, n$,

$$x_i = \begin{cases} 1 & \text{a PMU is installed at bus } i,\\ 0 & \text{otherwise.} \end{cases}$$

The matrix *H* represents the network graph structure,

$$h\_{ij} = \begin{cases} 1 & i = j \\ 1 & i \text{ is connected to } j \\ 0 & \text{otherwise,} \end{cases}$$
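As an illustration, *H* can be assembled directly from the list of transmission lines; the helper below is a minimal sketch (the function name and the line-list format are our own, not from the text):

```python
def build_h(n, lines):
    """Connectivity matrix H: h_ij = 1 if i == j or buses i and j share a line."""
    H = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    for i, j in lines:  # 1-based bus indices
        H[i - 1][j - 1] = 1
        H[j - 1][i - 1] = 1
    return H

# Line list matching the 6-bus example of Section 4.2
H = build_h(6, [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5), (5, 6)])
```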

and the objective function is

$$V(X) = \lambda (N - HX)^T R (N - HX) + X^T Q X \tag{11}$$

where *λ* is the weight, $N \in \mathbb{R}^n$ is the vector of upper bounds on the measurement redundancy of each bus, and $R \in \mathbb{R}^{n \times n}$ and $Q \in \mathbb{R}^{n \times n}$ are diagonal matrices representing the importance of each bus and the cost of placing a PMU at the bus, respectively.

We expand Equation (11),

$$\begin{aligned} V(X) &= \lambda (N - HX)^T R (N - HX) + X^T Q X \\ &= \lambda \left[ N^T R N - N^T R H X - X^T H^T R N + X^T H^T R H X \right] + X^T Q X \\ &= \lambda N^T R N - 2\lambda N^T R H X + \lambda X^T H^T R H X + X^T Q X \\ &= \frac{1}{2} X^T (2\lambda H^T R H + 2Q) X + (-2\lambda N^T R H) X + \lambda N^T R N. \end{aligned} \tag{12}$$

Then, Equation (12) can be expressed as the integer quadratic program

$$\begin{array}{ll}\min & \frac{1}{2} X^T G X + f^T X \\ \text{s.t.} & M(X) = 0, \\ & X \in \{0, 1\}^n \end{array} \tag{13}$$

where $G = 2\lambda H^T R H + 2Q$ and $f = (-2\lambda N^T R H)^T$. $M(X)$ is the column vector with entries $m_i(X) = (1 - x_i) \prod_{j \in A_i} (1 - x_j)$, $i \in \Omega$, where $A_i$ and $\Omega$ denote the set of nodes adjacent to bus *i* and the set of all buses, respectively. The constraint requires at least one PMU to be placed at each bus or among its adjacent nodes, which ensures that every bus is observable. The above problem can be converted into an unconstrained one by the weighted least squares method:

$$\begin{array}{ll}\min & \frac{1}{2} X^T G X + f^T X + M(X)^T V M(X) \\ & = \frac{1}{2} X^T G X + f^T X + \sum_{i=1}^n v_i (1 - x_i)^2 \Big( \prod_{j \in A_i} (1 - x_j) \Big)^2 \\ \text{s.t.} & X \in \{0, 1\}^n \end{array} \tag{14}$$

where $V = \mathrm{diag}(v_1, \ldots, v_n)$ is the diagonal matrix of penalty weights.
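For concreteness, the penalized objective of Equation (14) can be evaluated as in the sketch below; the data at the bottom (a toy 2-bus system with its weights) are illustrative only, not the example of Section 4.2:

```python
def penalty_objective(x, G, f, v, adj):
    """0.5*x^T G x + f^T x + sum_i v_i*(1-x_i)^2*(prod_{j in A_i}(1-x_j))^2, Eq. (14)."""
    n = len(x)
    quad = 0.5 * sum(G[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
    lin = sum(f[i] * x[i] for i in range(n))
    pen = 0.0
    for i in range(n):
        prod = 1
        for j in adj[i]:
            prod *= 1 - x[j]          # zero as soon as a neighbour hosts a PMU
        pen += v[i] * ((1 - x[i]) * prod) ** 2
    return quad + lin + pen

# Toy 2-bus system: buses 1 and 2 connected; a PMU at either covers both
G = [[0, 2], [2, 0]]
f = [-1, -1]
v = [10, 10]
adj = [[1], [0]]                      # 0-based neighbour lists
```

Placing one PMU zeroes the penalty term, while placing none leaves both buses unobserved and the penalty dominates.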

#### *4.2. Example and Experimental Result*

Suppose the coefficient matrix *H* of a power system is

$$H = \begin{pmatrix} 1 & 1 & 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 1 & 0 & 0 \\ 1 & 0 & 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 & 1 & 1 \end{pmatrix}.$$

Set *Q* as the identity matrix, *λ* = 0.5,

$$R = \begin{pmatrix} 12 & 0 & 0 & 0 & 0 & 0 \\ 0 & 128 & 0 & 0 & 0 & 0 \\ 0 & 0 & 50 & 0 & 0 & 0 \\ 0 & 0 & 0 & 140 & 0 & 0 \\ 0 & 0 & 0 & 0 & 72 & 0 \\ 0 & 0 & 0 & 0 & 0 & 10 \end{pmatrix}, \qquad N = \begin{pmatrix} 2, & 2, & 2, & 3, & 2, & 1 \end{pmatrix}^T.$$

Then, we can have

$$G = \begin{pmatrix} 192 & 140 & 62 & 178 & 0 & 0 \\ 140 & 282 & 152 & 268 & 140 & 0 \\ 62 & 152 & 204 & 190 & 140 & 0 \\ 178 & 268 & 190 & 392 & 212 & 72 \\ 0 & 140 & 140 & 212 & 224 & 82 \\ 0 & 0 & 0 & 72 & 82 & 84 \end{pmatrix},$$

$$f = \begin{pmatrix} -380, & -700, & -544, & -920, & -574, & -154 \end{pmatrix}^T.$$

The main diagonal of *G* for this problem is nonzero, while the definition of the *k*-diagonal matrix in Section 2 requires the main diagonal elements to be zero. Since $x \in \{0, 1\}^n$ implies $x_i^2 = x_i$, the diagonal of *G* can be absorbed into the linear term: each $f_i$ is replaced by $f_i + \frac{1}{2} g_{ii}$, the diagonal of *G* becomes zero, and *G* can then be treated as a seven-diagonal matrix. The updated *f* is given in Equation (15). Using the algorithm in Section 2, we find the optimal solution (0, 1, 1, 1, 0, 1) without considering the constraints, i.e., four PMUs are installed at buses 2, 3, 4, and 6. By inspection, this configuration makes the system observable, so the constraint is satisfied.

$$f = \begin{pmatrix} -284, & -559, & -442, & -724, & -462, & -112 \end{pmatrix}^T. \tag{15}$$
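The numbers above can be checked by brute force over all $2^6$ placements; the script below is a verification sketch (not the algorithm of Section 2), rebuilding *G* and *f* from *H*, *R*, *N*, *λ*, and *Q* = *I*:

```python
from itertools import product

H = [[1, 1, 1, 0, 0, 0], [1, 1, 0, 1, 0, 0], [1, 0, 1, 1, 0, 0],
     [0, 1, 1, 1, 1, 0], [0, 0, 0, 1, 1, 1], [0, 0, 0, 0, 1, 1]]
R = [12, 128, 50, 140, 72, 10]   # diagonal of R
N = [2, 2, 2, 3, 2, 1]
lam, n = 0.5, 6

# G = 2*lam*H^T R H + 2Q with Q = I;  f = (-2*lam*N^T R H)^T
G = [[2 * lam * sum(R[k] * H[k][i] * H[k][j] for k in range(n))
      + (2 if i == j else 0) for j in range(n)] for i in range(n)]
f = [-2 * lam * sum(N[k] * R[k] * H[k][i] for k in range(n)) for i in range(n)]

# Absorb the diagonal of G into f (x_i^2 = x_i for binary x), as in Eq. (15)
f_lin = [f[i] + G[i][i] / 2 for i in range(n)]

def objective(x):
    return 0.5 * sum(G[i][j] * x[i] * x[j] for i in range(n) for j in range(n)) \
           + sum(f[i] * x[i] for i in range(n))

best = min(product((0, 1), repeat=n), key=objective)
```

Enumerating all 64 binary vectors confirms the minimizer (0, 1, 1, 1, 0, 1) and the updated *f* of Equation (15).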

The above problem considers PMU placement under the condition that the system is completely observable. When the PMU placement problem is considered under the *N* − 1 principle, the constraints $2x_i + 2\sum_{j \in P_i^2} x_j + \sum_{j \in P_i^1} x_j \ge 2$, $i \in \mathcal{N}$, are added based on the definition of single-node observability. Here $\mathcal{N}$ is the set of nodes required to remain observable when a fault occurs in the system, and $P_i^2$ and $P_i^1$ denote the sets of nodes connected to node *i* by two lines and by one line, respectively.

If the line between nodes 5 and 6 breaks down, we set *H*(5, 6) and *H*(6, 5) to zero. Then, the following constraint is added:

$$\begin{pmatrix} 0 & 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix} X \ge \begin{pmatrix} 1 \\ 1 \end{pmatrix}.$$

We can get the optimal solution, which is the same as that in the former example. Even if the connection between nodes 5 and 6 fails, the system with this PMU placement does not lose observability. We only consider the algorithm for solving the constrained *k*-diagonal zero-one quadratic program and show that it can be applied to some forms of the PMU placement problem. Some practical conditions on PMU placement have not been taken into account here and remain to be studied by specialized engineers.
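As a quick sanity check, the placement (0, 1, 1, 1, 0, 1) can be tested against the two added constraint rows and against observability after the line outage; the adjacency lists below encode the example network with line 5–6 removed, and the variable names are ours:

```python
x = (0, 1, 1, 1, 0, 1)   # PMUs at buses 2, 3, 4, 6

# The two rows added for the N-1 case: x4 + x5 >= 1 and x6 >= 1
A = [(0, 0, 0, 1, 1, 0), (0, 0, 0, 0, 0, 1)]
feasible = all(sum(a * xi for a, xi in zip(row, x)) >= 1 for row in A)

# Observability after removing line 5-6: each bus needs a PMU at itself or a neighbour
adj = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3, 5], 5: [4], 6: []}
observable = all(x[i - 1] == 1 or any(x[j - 1] == 1 for j in adj[i]) for i in adj)
```

Both checks pass: the placement satisfies the added rows and keeps every bus observable despite the outage.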

#### **5. 01CQP Algorithm Simulation and Discussion**

#### *5.1. Algorithm Experimental Simulation*

Now, experimental results are provided to illustrate the performance of the algorithm. We implement the algorithm in C++ and run it on an Intel(R) Core(TM) i7-8550U CPU. For the problems tested, the dimension of the matrix *Q* ranges from 10 to 100 and *k* takes the values 5, 7, . . . , 25. All simulation data (*Q*, *c*, *a*, *b*) are generated randomly: *Q* and *c* are set up as in Section 2 with entries ranging from −100 to 100, and *a*, *b* range between 0 and 20. Table 7 shows the detailed computation time for problems of different dimensions and different numbers of diagonals.

**Table 7.** Corresponding computation time in different diagonal numbers with dimension *n* = 10 to 100 of 01CQP.


#### *5.2. Experimental Results Discussion*

Here, based on the experimental results, we investigate the influence of *n* and *k* on the algorithm and discuss the importance and implications of our findings. Firstly, we fix *k* and increase *n*. Figure 2 shows the results when the matrix dimension *n* changes from 10 to 100 with *k* fixed at 9, 13, 19, and 23. As can be seen, the calculation time increases only slightly with the dimension. Secondly, we fix *n* and increase *k*. Figure 3 shows the results when *k* changes from 5 to 25 with *n* fixed at 20, 40, 60, and 80. It can be clearly observed that the time grows significantly with the number of diagonals. Finally, we increase *n* and *k* simultaneously, as illustrated in Figure 4. It is evident that *k* has the dominant influence on the calculation speed. All these observations coincide with the time complexity derived in Section 3. This means that if the number of diagonals is within an appropriate range, our algorithm performs effectively and efficiently even for very large scale problems, and such problems cover a large portion of the whole problem set. Therefore, the proposed algorithm has great potential in many real-world applications.

**Figure 2.** Calculation time with different dimensions when the diagonal number *k* = 9, 13, 19, 23.

**Figure 3.** Calculation time with different diagonal numbers when the dimension *n* = 20, 40, 60, 80.

**Figure 4.** The variation of calculation time with different dimensions *n* and diagonal number *k*.
