**5. Engineering Application**

In this chapter, the proposed CSHHO is applied to model the reactive power output of a synchronous condenser. With the large number of UHV DC transmission projects in the power system, and to meet DC power consumption and peak-shaving demand, the scale of conventional units on the receiving-end AC grid is reduced, which weakens the dynamic reactive power support capability of the system and lowers the voltage stability margin [81]. Dynamic reactive power compensation devices are therefore required to provide instantaneous reactive power support in case of system failure, and the synchronous condenser must have reactive power output characteristics that meet the dynamic reactive power compensation requirements of the grid [82]. Modeling the reactive power support capability of a synchronous condenser is of great theoretical significance and practical value for the reactive power control of converter stations in high-voltage DC transmission systems with synchronous condensers.

The existing methods for modeling the reactive power output of a synchronous condenser are the mathematical analytical model calculation method and the experimental result fitting method [83–86]; both require large computational effort and have low accuracy, and few papers have studied the application of LSSVM to modeling the reactive power output of a synchronous condenser. The advantages of the least squares support vector machine (LSSVM) are that it is less likely to fall into local minima and has high generalization ability [87]. Researchers have used various intelligent optimization algorithms to find the optimal kernel function parameters and regularization parameters, including the GA [88], Particle Swarm Optimization (PSO) [89], Free Search (FS) [90], Ant Colony Optimization (ACO) [91], the ABC algorithm [92], the GWO algorithm [93], and the Backtracking Search Optimization Algorithm (BSA) [94]. However, traditional swarm optimization algorithms are prone to falling into local optima and to low convergence accuracy during the search. According to the results above, the CSHHO not only reduces the probability of falling into a local optimum and improves convergence accuracy, but also retains the advantages of the basic Harris Hawks optimization algorithm: 1. the steadiness of the searching cores; 2. the fruitfulness in the initial iterations; 3. the progressive selection scheme [5].

This paper proposes a CSHHO-LSSVM-based reactive power modeling method that exploits the numerical characteristics and global search capability of CSHHO. CSHHO is used to find the optimal values of the penalty parameter, kernel function parameter, and loss function parameter of the LSSVM, building the CSHHO-LSSVM model of the reactive power output of the synchronous condenser.

#### *5.1. Principle of LSSVM*

Support Vector Machine (SVM) is an ML method based on statistical learning theory, with the kernel function as its core: it implicitly maps the data in the original space to a high-dimensional feature space and then finds a linear relationship in that feature space [87].

LSSVM is a regression algorithm that extends the basic SVM. Compared with the SVM algorithm, LSSVM requires fewer parameters and is more stable. LSSVM replaces the complex inequality constraints of SVM with equality constraints, which makes the improved SVM more capable of handling data. Moreover, by setting the error sum of squares as the loss function of the algorithm, LSSVM enhances the performance of regression prediction and improves the prediction accuracy. Simultaneously, the complexity of the algorithm is reduced, which shortens the processing time and provides more flexibility. LSSVM uses a nonlinear model on the basis of SVM:

$$f(\mathbf{x}) = \omega^{T} \phi(\mathbf{x}) + b \tag{28}$$

The input data were $(\mathbf{x}_i, y_i)$, $i = 1, \cdots, l$, where $\mathbf{x}_i \in R^d$ denoted the input samples, $d$ the dimension, $y_i \in R$ the expected output value, and $l$ the total number of inputs. $\phi(\mathbf{x})$ denoted the mapping function. In summary, the LSSVM optimization objective was:

$$\min \frac{1}{2} \|\omega\|^2 + \frac{1}{2} \gamma \sum\_{i=1}^{l} e\_i^2 \tag{29}$$

$$\text{s.t.} \quad \omega^{T} \phi(\mathbf{x}\_i) + b + e\_i = y\_i, \quad i = 1, \cdots, l$$

where $e_i$ denoted the error, whose magnitude determined the prediction accuracy; $\mathbf{e} \in R^{l \times 1}$ denoted the error vector, and $\gamma$ denoted the regularization parameter, which weighted the magnitude of the error. Introducing the Lagrange multiplier vector $\lambda \in R^{l \times 1}$ into Equation (29), Equation (30) was expressed as:

$$\text{min}f = \frac{1}{2}||\omega||^2 + \frac{1}{2}\gamma \sum\_{i=1}^{l} e\_i^2 - \sum\_{i=1}^{l} \lambda\_i \left(\omega^T \phi(\mathbf{x}\_i) + b + e\_i - y\_i\right) \tag{30}$$

From the KKT condition, we obtained:

$$\begin{cases} \frac{\partial f}{\partial \omega} = 0 \to \omega = \sum\_{i=1}^{l} \lambda\_i \phi(\mathbf{x}\_i) \\ \frac{\partial f}{\partial b} = 0 \to \sum\_{i=1}^{l} \lambda\_i = 0 \\ \frac{\partial f}{\partial e\_i} = 0 \to \lambda\_i = \gamma e\_i, \; i = 1, 2, \cdots, l \\ \frac{\partial f}{\partial \lambda\_i} = 0 \to \omega^{T} \phi(\mathbf{x}\_i) + b + e\_i - y\_i = 0, \; i = 1, 2, \cdots, l \end{cases} \tag{31}$$

By eliminating the slack variables *ei* and weight vectors *ω*, the optimization problem was linearized: 

$$
\begin{bmatrix} 0 & \mathbf{Q}^{\mathrm{T}} \\ \mathbf{Q} & \mathbf{K} + \frac{1}{\gamma} \mathbf{I} \end{bmatrix} \begin{bmatrix} b \\ \mathbf{A} \end{bmatrix} = \begin{bmatrix} 0 \\ \mathbf{Y} \end{bmatrix} \tag{32}
$$

where $\mathbf{A} = [\lambda_1, \lambda_2, \cdots, \lambda_l]^{T}$, $\mathbf{Q} = [1, 1, \cdots, 1]^{T}$ was an $l \times 1$ column vector, and $\mathbf{Y} = [y_1, y_2, \cdots, y_l]^{T}$. According to the Mercer condition, $\mathbf{K}$ denoted the kernel matrix with entries $K(\mathbf{x}_i, \mathbf{x}_j) = \phi(\mathbf{x}_i)^{T} \phi(\mathbf{x}_j)$, $i, j = 1, 2, \cdots, l$. The Radial Basis Function (RBF) kernel was chosen for the model:

$$k(\mathbf{x}\_i, \mathbf{x}\_j) = \exp\left(-\frac{\|\mathbf{x}\_i - \mathbf{x}\_j\|^2}{2\sigma^2}\right), \quad \sigma > 0 \tag{33}$$
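As a concrete illustration, the RBF kernel of Equation (33) can be evaluated for all sample pairs at once. The following is a minimal NumPy sketch; the function name is our illustrative choice, not part of the paper's implementation:

```python
import numpy as np

def rbf_kernel(X1, X2, sigma):
    """Gram matrix of the RBF kernel in Equation (33):
    k(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 * sigma^2))."""
    # Pairwise squared Euclidean distances between rows of X1 and X2.
    sq = (np.sum(X1 ** 2, axis=1)[:, None]
          + np.sum(X2 ** 2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)
    # Clip tiny negative values caused by floating-point cancellation.
    return np.exp(-np.maximum(sq, 0.0) / (2.0 * sigma ** 2))
```

The vectorized distance expansion avoids an explicit double loop, which matters once the training set grows beyond a handful of samples.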

Therefore, the nonlinear prediction model was expressed by Equation (34):

$$y(\mathbf{x}) = \sum\_{i=1}^{l} \lambda\_i K(\mathbf{x}\_i, \mathbf{x}) + b \tag{34}$$

When predicting with the least squares support vector regression model, the penalty factor $\gamma$ and the radial basis kernel parameter $\sigma$ were the two parameters to be solved.
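Training and prediction with this model reduce to solving the linear system of Equation (32) once and then evaluating Equation (34). The following is a minimal NumPy sketch under those equations; the function names and the dense direct solver are our illustrative choices:

```python
import numpy as np

def lssvm_fit(X, y, gamma, sigma):
    """Solve the LSSVM linear system of Equation (32) for (b, lambda).

    gamma is the regularization parameter and sigma the RBF kernel width;
    a dense direct solve is adequate for small training sets."""
    l = X.shape[0]
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2.0 * sigma ** 2))
    # Assemble [[0, Q^T], [Q, K + I/gamma]] [b; lambda] = [0; Y].
    M = np.zeros((l + 1, l + 1))
    M[0, 1:] = 1.0
    M[1:, 0] = 1.0
    M[1:, 1:] = K + np.eye(l) / gamma
    sol = np.linalg.solve(M, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]  # bias b, multiplier vector lambda

def lssvm_predict(X_train, b, lam, sigma, X_new):
    """Evaluate Equation (34): y(x) = sum_i lambda_i k(x_i, x) + b."""
    sq = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2)) @ lam + b
```

Note that the second KKT condition in Equation (31), $\sum_i \lambda_i = 0$, is enforced exactly by the first row of the assembled system.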

#### *5.2. Simulation and Verification*

The reactive power regulation results of a synchronous condenser based on PSCAD/EMTDC simulation software were used as training samples and test samples. Table 11 shows the main parameters. The data with serial numbers 9, 14, 26 and 35 in Table 11 were taken as the test samples, and the rest were the training samples.

First, the data were preprocessed, and the LSSVM was trained by using CSHHO to find the optimal penalty parameter, kernel function parameter, and loss function parameter (*γ*, *σ*, *S*); the test samples were then applied to the trained LSSVM, and the regression-fitted prediction model was output. The algorithm flow is shown in Figure 6. Figure 7 compares the outputs of the LSSVM model and the CSHHO-LSSVM model on the test samples, together with the errors of the two models. The output regression of the CSHHO-LSSVM model was better fitted and had higher accuracy.

**Figure 6.** CSHHO-optimized LSSVM for synchronous condenser reactive power output.

**Figure 7.** Output and error comparison diagram of LSSVM model and CSHHO-LSSVM model.
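The tuning loop described above can be sketched as follows. Since the CSHHO update rules are defined in the earlier sections and not reproduced here, plain random search stands in for CSHHO as the optimizer, and the log-uniform search ranges for (γ, σ) are illustrative assumptions:

```python
import numpy as np

def _lssvm_fit_predict(X_tr, y_tr, X_new, gamma, sigma):
    """Train an LSSVM via the linear system of Equation (32) and
    evaluate Equation (34) on X_new."""
    def rbf(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2.0 * sigma ** 2))
    l = X_tr.shape[0]
    M = np.zeros((l + 1, l + 1))
    M[0, 1:] = 1.0
    M[1:, 0] = 1.0
    M[1:, 1:] = rbf(X_tr, X_tr) + np.eye(l) / gamma
    sol = np.linalg.solve(M, np.concatenate(([0.0], y_tr)))
    b, lam = sol[0], sol[1:]
    return rbf(X_new, X_tr) @ lam + b

def tune_lssvm(X_tr, y_tr, X_val, y_val, n_trials=200, seed=0):
    """Pick (gamma, sigma) by validation RMSE. Random search is used here
    as a stand-in for CSHHO; the search ranges are assumptions."""
    rng = np.random.default_rng(seed)
    best_rmse, best_params = np.inf, None
    for _ in range(n_trials):
        gamma = 10.0 ** rng.uniform(-1, 4)    # regularization parameter
        sigma = 10.0 ** rng.uniform(-1.5, 1)  # RBF kernel width
        pred = _lssvm_fit_predict(X_tr, y_tr, X_val, gamma, sigma)
        rmse = float(np.sqrt(np.mean((pred - y_val) ** 2)))
        if rmse < best_rmse:
            best_rmse, best_params = rmse, (gamma, sigma)
    return best_params, best_rmse
```

Replacing the random draw with CSHHO's position-update step turns the same fitness evaluation into the paper's CSHHO-LSSVM training flow.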

To verify the generalization ability of the CSHHO-LSSVM model, it was evaluated by absolute deviation, as shown in Table 13. The absolute deviation of the CSHHO-LSSVM model ranged from 0.0123 to 0.989, indicating that the accuracy of the model was high.


**Table 13.** Training samples and test samples.

From the reactive power and system voltage simulation results of the synchronous condenser in Tables 14–16, the maximum absolute error of the reactive power simulated by the CSHHO-LSSVM model was 0.989 Mvar, and the maximum absolute error of the simulated system voltage was 0.0415 kV; both were smaller than those of the LSSVM model, indicating that the CSHHO-LSSVM model had higher accuracy and better regression fitting performance.


**Table 14.** CSHHO-LSSVM model generalization ability verification.

**Table 15.** Reactive power simulation results comparison.


**Table 16.** System voltage simulation results comparison.

