*Article* **Intelligent Sensors for dc Fault Location Scheme Based on Optimized Intelligent Architecture for HVdc Systems**

**Muhammad Zain Yousaf 1,\*, Muhammad Faizan Tahir 2, Ali Raza 3, Muhammad Ahmad Khan <sup>4</sup> and Fazal Badshah <sup>1</sup>**


**Abstract:** We develop a probabilistic model for locating dc-link faults in MT-HVdc networks using discrete wavelet transforms (DWTs), Bayesian optimization (BO), and multilayer artificial neural networks (ANNs) based on local information. Specifically, feedforward neural networks (FFNNs) are trained using the Levenberg–Marquardt backpropagation (LMBP) method, whose hyperparameters are tuned for efficiency by a multi-stage BO. During training, the feature vectors at the sending terminal of the dc link are selected based on the norm values of the observed waveforms in various frequency bands. The multilayer ANN is trained on a comprehensive set of offline data that takes the denoising scheme into account. This choice not only reduces the computational load but also improves accuracy. An overall percentage error of 0.5144% is observed for the proposed algorithm when tested against fault resistances ranging from 10 to 485 Ω. The simulation results show that the proposed method accurately estimates the fault site for fault resistances of up to 485 Ω and is more robust than its counterparts.

**Keywords:** Levenberg–Marquardt backpropagation; protection sensor; Bayesian optimization; modular multilevel converter

#### **1. Introduction**

To date, China has completed 30,000 km of ultra-high-voltage lines connecting six regional grids with a total transmission capacity of close to 150 GW [1]. However, power engineers struggle to manage and regulate the impact of dc-link faults in hybrid ac/dc systems [2]. Suppose the 8 GW dc link from Gansu experiences an unexpected fault and the protection algorithm cannot locate it. The outage might start a chain reaction, resulting in widespread blackouts throughout Hunan and beyond. Ensuring accurate fault location therefore minimizes the threat of such failures and is a prerequisite for the successful and safe operation of dc transmission systems [3]. Furthermore, accurate fault location estimation is important for maintaining the voltage stability of the power system [4] and operating the electricity market efficiently [5].

The prediction of correct fault sites in dc transmission systems has been shown to be reliable by frequency extraction, fault signal analysis, and travelling-wave (TW) approaches in previous studies [2,3,6]. Currently, TW methods based on the concept of travelling-wave reflections are preferred in dc transmission projects since they are highly accurate and reliable and tolerate high fault resistances [7]. The advancement of TW theory has led to the development of several signal processing techniques, such as the wavelet transform (WT) [8], the S-transform and Hilbert–Huang transform [6,9], and empirical mode decomposition (EMD) [10]. In the WT, a waveform's characteristics are analyzed using approximation or detail coefficients to predict fault locations. However, conventional TW methods require a very high sampling frequency to accurately predict fault location, which leads to expensive computation in the power grid [11].

**Citation:** Yousaf, M.Z.; Tahir, M.F.; Raza, A.; Khan, M.A.; Badshah, F. Intelligent Sensors for dc Fault Location Scheme Based on Optimized Intelligent Architecture for HVdc Systems. *Sensors* **2022**, *22*, 9936. https://doi.org/10.3390/s22249936

Academic Editor: Arshad Arshad

Received: 7 November 2022; Accepted: 13 December 2022; Published: 16 December 2022

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

Researchers have begun exploring intelligent algorithms to reduce the computational burden and improve noise-handling capability [12]. Using the alienation coefficient and the Wigner distribution function [13], an effective transmission line protection mechanism for underground cables is proposed in [14] and implemented for a renewable-energy-based grid. In addition, machine-learning tools such as the radial basis function neural network (RBFNN) [15], support vector machine (SVM) [16,17], extreme learning machine [18], and k-means clustering [19] are also utilized. These algorithms can self-learn and modify weights and thresholds while training on historical data. Therefore, they are suitable for complex networks such as modular-multilevel-converter-based multiterminal HVdc (MMC-MT-HVdc) systems, where constructing a feasible intelligent algorithm can be challenging.

Because of the growth of the network structure and the extraordinary increase in the number of learning parameters in MMC-MT-HVdc networks, these intelligent algorithms suffer from slow convergence and a heavy computational burden [20]. Downsampling the learning parameters may resolve this issue but may also eliminate valuable features. It is, therefore, imperative to find a fault location algorithm with high accuracy, minimal computational burden, and low sensitivity to noise [21,22]. With this in mind, this work combines the discrete wavelet transform (DWT) [23], Bayesian optimization (BO) [24], and a multilayer feedforward neural network (FFNN) to locate dc-link faults.

A multilayer FFNN model based on BO is trained and evaluated using the selected features. BO is a powerful technique for optimizing black-box functions when closed-form expressions or surrogate models are unavailable [24]. Studies have found that BO provides a higher convergence rate than standard tuning methods when adjusting neural networks [25,26]. The multi-stage BO introduced in this work reduces the computational burden by reducing the number of simulations needed to find the optimal design for a given neural network. As a result, it provides a well-tuned multilayer FFNN that achieves an improved response with better accuracy.

To simulate numerous fault scenarios, a four-terminal MT-HVdc system is developed in PSCAD/EMTDC, and the proposed model is investigated in a Matlab® environment. Meanwhile, the proposed algorithm is compared to intelligent adversaries such as backpropagation neural networks (BP-NNs) and conventional FFNNs. The test results show that the suggested model performs with the highest fault localization accuracy.

Under the circumstances above, the main contribution of this study is:


The remainder of the paper is organized in the following way.

Section 2 derives the mathematical model, from the basic backpropagation algorithm to the improved backpropagation algorithm, and discusses the implementation of the proposed framework. Section 3 introduces the conditions and properties of the chosen system model, which is developed to capture fault data under dynamic fault scenarios. Section 4 covers the methodologies used to analyze the input features extracted under dynamic fault scenarios, including the denoising scheme applied to the features before training and the data preprocessing. Section 5 presents comparisons and analyses against adversaries. Finally, Section 6 concludes with a summary of the proposed algorithm. An overview of the proposed method is presented in the next section.

#### **2. Proposed Framework**

Figure 1 illustrates the architecture of the proposed methodology. During fault localization, the proposed method has two stages. In the first stage, the captured fault window, set to 10 ms, is filtered using discrete wavelets to address the noise issue. After denoising, the measured time-domain segment is transformed into the time-frequency domain by splitting it into low- and high-frequency components with a DWT-based multi-resolution analysis (MRA) technique.

**Figure 1.** Proposed architecture.
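
As a rough illustration of the MRA step described above, the sketch below decomposes a signal window into detail (high-frequency) and approximation (low-frequency) bands and takes the norm of each band as a feature, as used later for training. The Haar wavelet and the four-level depth are illustrative assumptions; the paper does not commit to either here.

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar DWT: split a signal into (approximation, detail) bands."""
    x = x[: len(x) // 2 * 2]                      # truncate to even length
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-frequency band
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-frequency band
    return approx, detail

def mra_features(signal, levels=4):
    """Norms of the detail bands (fine to coarse) plus the final approximation,
    serving as a compact feature vector for the neural network."""
    feats, a = [], np.asarray(signal, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats.append(np.linalg.norm(d))           # detail-band norm
    feats.append(np.linalg.norm(a))               # final approximation norm
    return feats
```

For the 10 ms fault window, `signal` would be the denoised voltage or current samples; `mra_features` then yields one norm per frequency band.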

The multilayer neural network receives decluttered information from voltage and current signals as inputs. It then estimates the fault site using the activation function. Levenberg–Marquardt backpropagation (LMBP) is implemented instead of a standard backpropagation algorithm with a performance function, which compares the ANN regression output against the ground truth of the fault sites. The Levenberg–Marquardt method is used to update the weights and biases: the Jacobian matrix of the performance function with respect to the weight and bias variables is calculated via the proposed backpropagation algorithm. After updating the weights and biases, the multilayer ANN is applied to determine the fault site. The multi-stage BO procedure is conducted prior to the update, aiming to increase accuracy during training and to provide an optimal multilayer FFNN by optimizing hyperparameters. Unlike internal parameters (weights, biases, etc.), hyperparameters are set before the neural network is trained, and they influence the neural network's performance. Regulating them via trial and error lengthens the training set-up time and may reduce accuracy. Hence, optimizing these hyperparameters enhances the accuracy and convergence speed [24]. In the following sub-section, the detailed architecture of the proposed algorithm is described.

#### *2.1. Feedforward Neural Network (FFNN)*

This study uses a feedforward neural network with a single hidden layer (i.e., one input layer, one hidden layer, and one output layer), since a network with a single hidden layer can approximate even highly complex functions [27]. In a multilayer FFNN, the basic building block is a neuron that mimics a biological neuron's functions and behavior [27]. The schematic structure based on the neuron is shown in Figure 2.

**Figure 2.** Structure of the multi-input neuron.

Usually, a neuron has multiple inputs. Each element of the input vector *p* = [*p*1, *p*2, . . . , *pR*] is weighted by the elements *w*1, *w*2, . . . , *wR* of the weight matrix *W*. Next, the bias of each neuron is summed with the weighted inputs to form the net input *n*, expressed as:

$$n = \sum\_{j=1}^{R} w\_j p\_j + b = \mathbf{W}p + b. \tag{1}$$

Following that, net-input *n* is sent via an activation function *f*, which results in the neuron's output *a*. Mathematically expressed as:

$$a = f(n) \tag{2}$$

In this work, the activation function is based on the hyperbolic tangent sigmoid transfer function. The following equation presents it.

$$f(\mathbf{x}) = \frac{2}{1 + e^{-2\mathbf{x}}} - 1 \tag{3}$$

With reference to Figure 2, the multi-input FFNN executes the following equation:

$$a^2 = f^2\left(\sum\_{i=1}^{S} w\_{1,i}^2 f^1\left(\sum\_{j=1}^{R} w\_{i,j}^1 p\_j + b\_i^1\right) + b^2\right) \tag{4}$$

The output of the neural network is represented by *a*<sup>2</sup>. *R* stands for the number of inputs, *S* denotes the number of neurons in the hidden layer, and *p<sub>j</sub>* represents the *j*th input. The activation functions of the output and hidden layers are represented by *f*<sup>2</sup> and *f*<sup>1</sup>, respectively. The bias of the *i*th hidden neuron is defined by *b*<sup>1</sup><sub>*i*</sub>, whereas the bias of the neuron in the output layer is represented by *b*<sup>2</sup>. The weight *w*<sup>1</sup><sub>*i*,*j*</sub> represents the connection between the *j*th input and the *i*th neuron of the hidden layer. Meanwhile, the weight connecting the *i*th hidden neuron to the output layer neuron is denoted by *w*<sup>2</sup><sub>1,*i*</sub>.
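
Equations (1)–(4) can be sketched as a minimal forward pass. The hidden layer uses the tansig activation of Equation (3); a linear output layer is assumed here for regression, since Equation (4) leaves *f*<sup>2</sup> generic, and the layer sizes are illustrative, not from the paper.

```python
import numpy as np

def tansig(n):
    """Hyperbolic tangent sigmoid of Eq. (3); mathematically equal to tanh(n)."""
    return 2.0 / (1.0 + np.exp(-2.0 * n)) - 1.0

def ffnn_forward(p, W1, b1, W2, b2):
    """Forward pass of Eq. (4): R inputs -> S hidden (tansig) -> 1 output."""
    a1 = tansig(W1 @ p + b1)   # hidden-layer output, shape (S,)
    return W2 @ a1 + b2        # network output a^2, shape (1,)

rng = np.random.default_rng(0)
R, S = 6, 10                   # illustrative sizes (assumed)
W1, b1 = rng.normal(size=(S, R)), rng.normal(size=S)
W2, b2 = rng.normal(size=(1, S)), rng.normal(size=1)
a2 = ffnn_forward(rng.normal(size=R), W1, b1, W2, b2)
```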

#### *2.2. Backpropagation Algorithm*

Following the definition of the FFNN, the next step is to create an algorithm for training such networks. To train the established multilayer FFNN, an error backpropagation algorithm based on the steepest descent technique is typically utilized [28]. For the proposed three-layer FFNN, we now express the function that represents the output of unit *i* in layer *m* + 1 as:

$$a^{m+1}(i) = f^{m+1}\left(n^{m+1}(i)\right) \tag{5}$$

Then, to propagate the function and generate the net input *n*<sup>*m*+1</sup>(*i*) to unit *i*, the neuron in the first layer receives the extracted features from the MT-HVdc system, providing the initial condition for Equation (5):

$$a^0 = \; p \tag{6}$$

Equation (5) is further expressed in matrix form for a neural network with *M* layers as:

$$a^{m+1} = f^{m+1}\left(W^{m+1} a^m + b^{m+1}\right), \quad m = 0, 1, \dots, M-1. \tag{7}$$

where *a*<sup>*m*+1</sup> and *a*<sup>*m*</sup> are the outputs of the network's (*m*+1)th and *m*th layers, and *b*<sup>*m*+1</sup> is the bias vector of the (*m*+1)th layer. With the external inputs passing through the network via Equation (7), the overall network output equals the output of the neurons in the last layer:

$$a = \; a^M \tag{8}$$

The objective of this study is to locate the dc-link faults. Therefore, the proposed multilayer FFNN requires a set of input–output pairs that characterize the behavior of an MT-HVdc system under faulty settings. Mathematically expressed as:

$$\left[\left(p\_1, t\_1\right), \left(p\_2, t\_2\right), \left(p\_3, t\_3\right), \dots, \left(p\_Q, t\_Q\right)\right] \tag{9}$$

where *pq* is an input and *tq* is the corresponding target used for training.

After each input propagates through the multilayer FFNN during training, the network output is compared to the target. While doing so, the performance index for the backpropagation algorithm is the mean-square error (MSE), which is to be reduced by modifying the network parameters, given as:

$$F(\mathbf{x}) \;=\; E[(e^2)]\;=\; E[(t-a)^2]\tag{10}$$

In the FFNN, *x* is the vector containing the network weights and biases. However, the proposed network has multiple outputs. Therefore, Equation (10) is generalized to:

$$F(\mathbf{x}) \;= \; E\left[\left(e^T e\right)\right] \;= \; E[\left(t - a\right)^T \left(t - a\right)] \tag{11}$$

Since the steepest descent rule is utilized for the standard backpropagation algorithm, the performance index *F*(*x*) can be approximated as follows:

$$F^{\wedge}(\mathbf{x}) = \left(t(k) - a(k)\right)^T\left(t(k) - a(k)\right) = e^T(k)e(k) \tag{12}$$

The squared error replaces the expectation of the squared error in Equation (11) at iteration step *k*. The steepest (gradient) descent algorithm for the estimated MSE is then:

$$w\_{i,j}^{m}(k+1) = w\_{i,j}^{m}(k) - \alpha \frac{dF^{\wedge}}{dw\_{i,j}^{m}} \tag{13}$$

$$b\_i^{m}(k+1) = b\_i^{m}(k) - \alpha \frac{dF^{\wedge}}{db\_i^{m}} \tag{14}$$

*α* is the learning rate; like the number of neurons *S*, it is also a hyperparameter. We define:

$$s\_i^m = \frac{dF^\wedge}{dn\_i^m} \tag{15}$$

as the sensitivity *s*<sup>*m*</sup><sub>*i*</sub> of the performance index *F*<sup>∧</sup>, which measures the change in *F*<sup>∧</sup> with respect to the net input of the *i*th element in layer *m*. Next, based on the chain rule, the derivatives in Equations (13) and (14) can be simplified using Equations (5), (12), and (15) as:

$$\frac{dF^{\wedge}}{dw\_{i,j}^{m}} = \frac{dF^{\wedge}}{dn\_i^m} \ast \frac{dn\_i^m}{dw\_{i,j}^m} = s\_i^m \ast a\_j^{m-1} \tag{16}$$

$$\frac{dF^{\wedge}}{db\_i^m} = \frac{dF^{\wedge}}{dn\_i^m} \ast \frac{dn\_i^m}{db\_i^m} = s\_i^m \tag{17}$$

Now with the definition of gradient, the steepest descent algorithm is approximated as:

$$w\_{i,j}^{m}(k+1) = w\_{i,j}^{m}(k) - \alpha \ast s\_i^{m} \ast a\_j^{m-1} \tag{18}$$

$$b\_i^{m}(k+1) = b\_i^{m}(k) - \alpha \ast s\_i^{m} \tag{19}$$

The following recurrence relation in matrix form can be satisfied by the sensitivity [29,30]:

$$\mathbf{s}^{m} = \overline{\mathbf{F}}^{m}(\mathbf{n}^{m})\left(\mathbf{W}^{m+1}\right)^{T}\mathbf{s}^{m+1}, \quad m = M-1, \dots, 2, 1. \tag{20}$$

Equation (20) expresses the step used to propagate the sensitivities backward through a neural network. Mathematically, the sensitivities propagate backward across the network as:

$$\mathbf{s}^{M} \to \mathbf{s}^{M-1} \to \cdots \to \mathbf{s}^{2} \to \mathbf{s}^{1} \tag{21}$$

where

$$\overline{\mathbf{F}}^{m}(\mathbf{n}^{m}) = \begin{bmatrix} \overline{f}^{m}(n\_{1}^{m}) & 0 & \cdots & 0 \\ 0 & \overline{f}^{m}(n\_{2}^{m}) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \overline{f}^{m}(n\_{S^m}^{m}) \end{bmatrix} \tag{22}$$

and

$$\overline{f}^{m}\left(n\_{j}^{m}\right) = \frac{df^{m}\left(n\_{j}^{m}\right)}{dn\_{j}^{m}} \tag{23}$$

The recurrence relation is initialized at the final layer as:

$$\mathbf{s}^{M} = -2\overline{\mathbf{F}}^{M}\left(\mathbf{n}^{M}\right)(t - a) \tag{24}$$

We can now summarize the overall backpropagation (BP) algorithm based on steepest descent: (1) propagate the input forward through the network using Equations (6)–(8); (2) backpropagate the sensitivities using Equations (20) and (24); and (3) update the weights and biases using Equations (18) and (19).
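
The three steps above can be sketched for the two-layer case as a single steepest-descent training step. A tansig hidden layer and a linear output layer are assumed; the function name and array sizes are illustrative, not from the paper.

```python
import numpy as np

def train_step(p, t, W1, b1, W2, b2, alpha=0.05):
    """One steepest-descent BP step: forward pass (Eqs. (6)-(8)), sensitivity
    backpropagation (Eqs. (24) and (20)), and updates (Eqs. (18)-(19)).
    Returns the squared error measured before the update."""
    a1 = np.tanh(W1 @ p + b1)            # hidden layer (tansig)
    a2 = W2 @ a1 + b2                    # linear output layer (assumed)
    e = t - a2
    s2 = -2.0 * e                        # Eq. (24); derivative is 1 for a linear output
    s1 = (1.0 - a1 ** 2) * (W2.T @ s2)   # Eq. (20); tansig derivative is 1 - a^2
    W2 -= alpha * np.outer(s2, a1)       # Eq. (18)
    b2 -= alpha * s2                     # Eq. (19)
    W1 -= alpha * np.outer(s1, p)
    b1 -= alpha * s1
    return float(e @ e)
```

Calling `train_step` repeatedly on a training pair drives the squared error toward zero, illustrating the asymptotic convergence discussed in the next sub-section.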

#### *2.3. Levenberg–Marquardt Backpropagation*

The backpropagation algorithm exhibits asymptotic convergence properties while training the multilayer FFNN, which causes a slow convergence rate due to minor weight changes around the solution. Meanwhile, Levenberg–Marquardt (LM) backpropagation [29] is a variant of Newton's method, which inherits the stability of the steepest descent algorithm and the speed of the Gauss–Newton algorithm [27,29,30]. Now, suppose we want to optimize performance index *F*(*x*); then, Newton's method is:

$$\mathbf{x}\_{k+1} \;=\; \mathbf{x}\_k - A\_k^{-1} \; \mathbf{g}\_k. \tag{25}$$

where *A<sub>k</sub>* ≅ ∇<sup>2</sup>*F*(*x*)|<sub>*x* = *x<sub>k</sub>*</sub> and *g<sub>k</sub>* ≅ ∇*F*(*x*)|<sub>*x* = *x<sub>k</sub>*</sub>. Note that ∇<sup>2</sup>*F*(*x*) represents the Hessian matrix, and ∇*F*(*x*) denotes the gradient. Let us assume that *F*(*x*) is a sum-of-squares function; then:

$$F(\mathbf{x}) \;=\sum\_{i=1}^{N} v\_i^2(\mathbf{x})\;=\;\mathbf{v}^T(\mathbf{x})\mathbf{v}(\mathbf{x}).\tag{26}$$

Then the gradient and Hessian matrix are expressed in matrix form as:

$$
\nabla F(\mathbf{x}) = \mathbf{2} \, \mathbf{J}^T(\mathbf{x}) \mathbf{v}(\mathbf{x}).\tag{27}
$$

$$
\nabla^2 F(\mathbf{x}) = 2\mathbf{J}^T(\mathbf{x})\mathbf{J}(\mathbf{x}) + 2\,\mathbf{S}(\mathbf{x}).\tag{28}
$$

**J**(*x*) denotes the Jacobian matrix as:

$$\mathbf{J}(\mathbf{x}) = \begin{bmatrix} \frac{dv\_1(\mathbf{x})}{d\mathbf{x}\_1} & \frac{dv\_1(\mathbf{x})}{d\mathbf{x}\_2} & \cdots & \frac{dv\_1(\mathbf{x})}{d\mathbf{x}\_n} \\ \frac{dv\_2(\mathbf{x})}{d\mathbf{x}\_1} & \frac{dv\_2(\mathbf{x})}{d\mathbf{x}\_2} & \cdots & \frac{dv\_2(\mathbf{x})}{d\mathbf{x}\_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{dv\_N(\mathbf{x})}{d\mathbf{x}\_1} & \frac{dv\_N(\mathbf{x})}{d\mathbf{x}\_2} & \cdots & \frac{dv\_N(\mathbf{x})}{d\mathbf{x}\_n} \end{bmatrix} \tag{29}$$

$$\mathbf{S}(\mathbf{x}) = \sum\_{i=1}^{N} v\_i(\mathbf{x}) \nabla^2 v\_i(\mathbf{x}) \tag{30}$$

Assume that **S**(*x*) ≈ 0; then the Hessian matrix of Equation (28) can be approximated as ∇<sup>2</sup>*F*(*x*) ≅ 2**J**<sup>*T*</sup>(*x*)**J**(*x*). Next, Equation (25) is updated by substituting Equation (27) and the approximation of Equation (28):

$$
\Delta \mathbf{x}\_k = \mathbf{x}\_{k+1} - \mathbf{x}\_k = -\left[\mathbf{J}^T(\mathbf{x}\_k)\mathbf{J}(\mathbf{x}\_k)\right]^{-1} \ast \mathbf{J}^T(\mathbf{x}\_k)\mathbf{v}(\mathbf{x}\_k). \tag{31}
$$

The matrix **H** = **J**<sup>*T*</sup>**J** may not be invertible using the Gauss–Newton method. This issue can be fixed by the following modification to the approximate Hessian matrix:

$$\mathbf{G} = \mathbf{H} + \mu \mathbf{I} \tag{32}$$

This modification to the Gauss–Newton method eventually leads to the LM algorithm [29]:

$$
\Delta \mathbf{x}\_k = -\left[\mathbf{J}^T(\mathbf{x}\_k)\mathbf{J}(\mathbf{x}\_k) + \mu\_k \mathbf{I}\right]^{-1} \mathbf{J}^T(\mathbf{x}\_k)\mathbf{v}(\mathbf{x}\_k). \tag{33}
$$

Now, using the Δ*x<sub>k</sub>* direction, recalculate the approximated *F*(*x*). If a smaller value of *F*(*x*) is obtained, then the computation procedure is repeated with the parameter *μ<sub>k</sub>* divided by a factor *α* > 1. If the value of *F*(*x*) does not decrease, then *μ<sub>k</sub>* is multiplied by *α* for the next iteration.
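
The μ-adaptation rule above, combined with the update of Equation (33), can be sketched as follows. The `residual` and `jacobian` callables, the retry cap, and the factor of 10 are illustrative assumptions, not from the paper.

```python
import numpy as np

def lm_step(x, residual, jacobian, mu, alpha=10.0, max_tries=20):
    """One Levenberg-Marquardt update per Eq. (33) with mu adaptation."""
    v = residual(x)
    F = float(v @ v)                 # current sum-of-squares, Eq. (26)
    J = jacobian(x)
    for _ in range(max_tries):
        # Eq. (33): solve (J^T J + mu I) dx = -J^T v
        dx = np.linalg.solve(J.T @ J + mu * np.eye(x.size), -J.T @ v)
        x_new = x + dx
        v_new = residual(x_new)
        if float(v_new @ v_new) < F:
            return x_new, mu / alpha  # improvement: accept step, shrink mu
        mu *= alpha                   # no improvement: enlarge mu, retry
    return x, mu                      # give up after max_tries retries
```

For small μ the step approaches Gauss–Newton; for large μ it approaches a short steepest-descent step, which is exactly the stability/speed tradeoff described above.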

The calculation of the Jacobian matrix is an essential step in the LM method. The elements of the Jacobian matrix are calculated using a slight modification of the BP algorithm to address the NN mapping difficulty [29]. For better understanding, similar to Equation (12) for the BP algorithm, Equation (26) is the performance index for the mapping problem in the LM algorithm, where the error vector is **v**<sup>*T*</sup> = [*v*<sub>1</sub> *v*<sub>2</sub> . . . *v<sub>N</sub>*] = [*e*<sub>1,1</sub> *e*<sub>2,1</sub> . . . *e*<sub>*S<sup>M</sup>*,1</sub> *e*<sub>1,2</sub> . . . *e*<sub>*S<sup>M</sup>*,*Q*</sub>], and the parameter vector is **x**<sup>*T*</sup> = [*x*<sub>1</sub> *x*<sub>2</sub> . . . *x<sub>n</sub>*] = [*w*<sup>1</sup><sub>1,1</sub>, *w*<sup>1</sup><sub>1,2</sub>, . . . , *w*<sup>1</sup><sub>*S*<sup>1</sup>,*R*</sub>, *b*<sup>1</sup><sub>1</sub>, . . . , *b*<sup>1</sup><sub>*S*<sup>1</sup></sub>, *w*<sup>2</sup><sub>1,1</sub>, . . . , *b*<sup>*M*</sup><sub>*S<sup>M</sup>*</sub>], with the subscript *N* defined as *N* = *Q* × *S<sup>M</sup>*.

Similarly, the subscript *n* in the Jacobian matrix is defined as *n* = *S*<sup>1</sup>(*R* + 1) + *S*<sup>2</sup>(*S*<sup>1</sup> + 1) + . . . + *S<sup>M</sup>*(*S*<sup>*M*−1</sup> + 1). Making all these substitutions in Equation (29) gives the Jacobian matrix:

$$\mathbf{J}(\mathbf{x}) = \begin{bmatrix} \frac{de\_{1,1}}{dw\_{1,1}^{1}} & \frac{de\_{1,1}}{dw\_{1,2}^{1}} & \cdots & \frac{de\_{1,1}}{dw\_{S^1,R}^{1}} & \frac{de\_{1,1}}{db\_{1}^{1}} & \cdots \\ \frac{de\_{2,1}}{dw\_{1,1}^{1}} & \frac{de\_{2,1}}{dw\_{1,2}^{1}} & \cdots & \frac{de\_{2,1}}{dw\_{S^1,R}^{1}} & \frac{de\_{2,1}}{db\_{1}^{1}} & \cdots \\ \vdots & \vdots & & \vdots & \vdots & \\ \frac{de\_{S^M,1}}{dw\_{1,1}^{1}} & \frac{de\_{S^M,1}}{dw\_{1,2}^{1}} & \cdots & \frac{de\_{S^M,1}}{dw\_{S^1,R}^{1}} & \frac{de\_{S^M,1}}{db\_{1}^{1}} & \cdots \\ \frac{de\_{1,2}}{dw\_{1,1}^{1}} & \frac{de\_{1,2}}{dw\_{1,2}^{1}} & \cdots & \frac{de\_{1,2}}{dw\_{S^1,R}^{1}} & \frac{de\_{1,2}}{db\_{1}^{1}} & \cdots \\ \vdots & \vdots & & \vdots & \vdots & \end{bmatrix} \tag{34}$$

Until now, the standard BP algorithm has been used to calculate the Jacobian matrix terms as follows:

$$\frac{dF^{\wedge}(\mathbf{x})}{d\mathbf{x}\_{l}} = \frac{d\mathbf{e}\_{q}^{T}\mathbf{e}\_{q}}{d\mathbf{x}\_{l}} \tag{35}$$

Meanwhile, in the LM algorithm, the terms for the elements of the Jacobian matrix can be calculated using the following:

$$\left[\mathbf{J}\right]\_{h,l} = \frac{dv\_h}{dx\_l} = \frac{de\_{k,q}}{dw\_{i,j}^{m}} \tag{36}$$

Thus, rather than computing the derivatives of the squared errors as in standard backpropagation, we are calculating the derivatives of the errors in this modified Levenberg– Marquardt algorithm. Similar to the concept for standard backpropagation sensitivities, a new Marquardt sensitivity is defined as follows:

$$\left[\mathbf{J}\right]\_{h,l} = \frac{de\_{k,q}}{dw\_{i,j}^{m}} = \frac{de\_{k,q}}{dn\_{i,q}^{m}} \ast \frac{dn\_{i,q}^{m}}{dw\_{i,j}^{m}} = \widetilde{s}\_{i,h}^{m} \ast a\_{j,q}^{m-1} \tag{37}$$

If *x<sub>l</sub>* is a bias,

$$\left[\mathbf{J}\right]\_{h,l} = \frac{de\_{k,q}}{db\_{i}^{m}} = \frac{de\_{k,q}}{dn\_{i,q}^{m}} \ast \frac{dn\_{i,q}^{m}}{db\_{i}^{m}} = \widetilde{s}\_{i,h}^{m} \tag{38}$$

As previously stated, the Marquardt sensitivity can be determined using the same recurrence relation as the standard sensitivities. However, toward the conclusion of the final layer, there is only one modification for calculating the new Marquardt sensitivity:

$$\widetilde{s}\_{i,h}^{M} = \frac{de\_{k,q}}{dn\_{i,q}^{M}} = \frac{d\left(t\_{k,q} - a\_{k,q}^{M}\right)}{dn\_{i,q}^{M}} = -\frac{da\_{k,q}^{M}}{dn\_{i,q}^{M}} \tag{39}$$

For *i* = *k*, it is:

$$\widetilde{s}\_{i,h}^{M} = -\overline{f}^{M}\left(n\_{i,q}^{M}\right) \tag{40}$$

whereas for *i* ≠ *k*, it is equal to zero. Note that *f*<sup>*M*</sup> and its matrix can be defined with the help of Equations (22) and (23). In the proposed model, when the features extracted from the MT-HVdc network are applied to the multilayer FFNN as an input *p<sub>q</sub>* and the corresponding output *a*<sup>*M*</sup><sub>*q*</sub> is computed, the LMBP algorithm is initialized with the following:

$$\widetilde{\mathbf{S}}\_{q}^{M} = -\overline{\mathbf{F}}^{M}\left(\mathbf{n}\_{q}^{M}\right) \tag{41}$$

Each column of the matrix in Equation (41) is a sensitivity vector that must propagate back through the network to generate one row of the Jacobian matrix. The columns are propagated backward as follows:

$$\widetilde{\mathbf{S}}\_{q}^{m} = \overline{\mathbf{F}}^{m}\left(\mathbf{n}\_{q}^{m}\right)\left(\mathbf{W}^{m+1}\right)^{T}\widetilde{\mathbf{S}}\_{q}^{m+1} \tag{42}$$

The augmentation that follows then obtains all of the Marquardt sensitivity matrices for the overall layers.

$$\widetilde{\mathbf{S}}^{m} = \left[\widetilde{\mathbf{S}}\_{1}^{m} \mid \widetilde{\mathbf{S}}\_{2}^{m} \mid \widetilde{\mathbf{S}}\_{3}^{m} \mid \cdots \mid \widetilde{\mathbf{S}}\_{Q}^{m}\right] \tag{43}$$

For clarity, the proposed fault location algorithm based on Levenberg–Marquardt backpropagation is summarized in Table 1.

**Table 1.** LMBP algorithm.

#### **LMBP Algorithm for the Fault Location Process**


#### *2.4. Parameter Optimization*

Hyperparameters should be distinguished from internal parameters such as weights and biases that are taken into account by the Levenberg–Marquardt backpropagation algorithm in the FFNN model. However, finding values for hyperparameters is a nonconvex optimization process for optimal fitting. This is because, like the MT-HVdc system, most existing systems do not have linear responses to their control parameters. From the standpoint of optimization, the problem can be presented as follows:

$$\min\_{\mathbf{x}\in\mathcal{X}} f(\mathbf{x}), \quad \text{where } \mathcal{X} \subset \mathbb{R}^d \tag{44}$$

*x* is the input vector (control parameters) of dimension *d*. *f*(*x*) is an objective function that depicts a multiscale system with high dimensional control parameters functioning under high-speed channels, such as an FFNN-based relaying model under dynamic conditions to protect the MT-HVdc grid. It is not a simple task to create a precise and accurate model of such systems in this situation. As a result, it is necessary to approach the problem in Equation (44) using the black-box settings shown in Figure 3.

**Figure 3.** Optimization of black-box systems.

#### 2.4.1. Black-Box Settings

In most black-box systems, including relaying models of MT-HVdc grids, it is not easy to acquire gradient information for *f*(*x*) at an arbitrary value of *x*. However, gradient information is not required when employing BO based on Gaussian processes (GPs) [31]. As a result, BO is a promising and appropriate candidate for black-box optimization. While optimizing, BO is an active learning method that chooses the next observation to maximize the reward in solving Equation (44). Its foundation is Bayes' theorem:

$$P(f|D\_{1:t}) \propto P(D\_{1:t}|f)P(f) \tag{45}$$

*P*(*f*), *P*(*f* |*D*1:*t*), and *P*(*D*1:*t*| *f*) are the prior, posterior, and likelihood probabilities, respectively, based on the current observations *D*1:*t* = [(*x*1, *y*1), (*x*2, *y*2), . . . , (*xt*, *yt*)]. Various predictive and distributional models can be used as priors in BO, but the GP is preferred due to its practical and theoretical advantages [31].

#### 2.4.2. Gaussian Process (GP)

In the GP, the surrogate model replicates the behaviors of the expensive underlying function. While doing this, the underlying function *f*(*x*) that requires optimization is represented in BO as a collaborative and multidimensional Gaussian process. The mean (*μ*) and covariance (K) functions are calculated using:

$$f\_{1:t} \sim \mathcal{N}(\mu(\mathbf{x}\_{1:t}), \mathcal{K}(\mathbf{x}\_{1:t})) \tag{46}$$

In BO, Equation (46) illustrates the process in which the predictive GP is trained. It is worth noting that, unlike other machine-learning algorithms, the goal of BO is to properly forecast where global extrema are situated in the sample space based on previous observations rather than to develop predictors that cover the entire sample space. Furthermore, the problem in Equation (44) is solved using black-box settings, implying that we do not have any prior information about the underlying function. Therefore, to improve the regression quality of the GP, we use a popular kernel/covariance function called the automatic relevance determination Matern 5/2 function in conjunction with a zero-mean GP for *P*(*f*), given as:

$$K(\mathbf{x}) = \begin{bmatrix} k(\mathbf{x}\_1, \mathbf{x}\_1) & \cdots & k(\mathbf{x}\_1, \mathbf{x}\_t) \\ \vdots & \ddots & \vdots \\ k(\mathbf{x}\_t, \mathbf{x}\_1) & \cdots & k(\mathbf{x}\_t, \mathbf{x}\_t) \end{bmatrix} \tag{47}$$

$$k(x\_i, x\_j) = \sigma\_f^2\left(1 + \sqrt{5}\,r + \frac{5}{3}r^2\right)e^{-\sqrt{5}\,r} \tag{48}$$

where *r* = (∑<sup>*D*</sup><sub>*d* = 1</sub>(*x*<sub>*i*,*d*</sub> − *x*<sub>*j*,*d*</sub>)<sup>2</sup>/*σ*<sup>2</sup><sub>*d*</sub>)<sup>1/2</sup>; *σ<sub>f</sub>* and *σ<sub>d</sub>* are hyperparameters of *K*(*x*). These hyperparameters are modified throughout the training phase to reduce the GP's negative-log marginal likelihood using a global or local method. Each parameter in an ARD-type kernel has a scaling parameter that must be set. If the *σ<sub>d</sub>* of one parameter is larger than the others after the GP-based predictive model has been trained, then it can be assumed that a change in this parameter has less influence on the prediction. Furthermore, if a certain parameter has a greater effect, then the proposed solution in BO will alter the training process to reduce the *σ<sub>d</sub>* of that parameter in comparison to the others. These advantages make the underlying function more interpretable and serve as an implicit sensitivity analysis.
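
Equation (48), with the ARD distance *r* defined above, can be written directly; the function name and argument layout are illustrative.

```python
import numpy as np

def matern52_ard(xi, xj, sigma_f, lengthscales):
    """ARD Matern 5/2 covariance of Eq. (48): one lengthscale sigma_d per input
    dimension, so less relevant dimensions are scaled away automatically."""
    r = np.sqrt(np.sum((xi - xj) ** 2 / lengthscales ** 2))
    s = np.sqrt(5.0) * r
    # (5/3) r^2 equals s^2 / 3, since s^2 = 5 r^2
    return sigma_f ** 2 * (1.0 + s + s ** 2 / 3.0) * np.exp(-s)
```

At *r* = 0 the kernel returns *σ<sub>f</sub>*<sup>2</sup>, and it decays monotonically as the scaled distance grows, which is the behavior the sensitivity discussion above relies on.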

#### 2.4.3. Acquisition Function

Since the original function *f*(*x*) is hard to estimate, based on a predefined strategy and auxiliary optimization, an acquisition function *u*(*x*) is obtained to find the next point *xt*+<sup>1</sup> of the solution. It is worth noting that *u*(*x*) does not require any additional points; instead, it relies on past sample knowledge to make predictions at candidate points.

$$
\mu(\mathbf{x}\_{t+1}) = k^T \mathcal{K}^{-1} f\_{1:t} \tag{49}
$$

$$
\sigma^2(\mathbf{x}\_{t+1}) = k(\mathbf{x}\_{t+1}, \mathbf{x}\_{t+1}) - k^T \mathcal{K}^{-1} k \tag{50}
$$

Then predictive distribution at the next point is given as:

$$P(f\_{t+1}|D\_{1:t}, \mathbf{x}\_{t+1}) \sim \mathcal{N}\left(\mu(\mathbf{x}\_{t+1}), \sigma^2(\mathbf{x}\_{t+1})\right) \tag{51}$$
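
Equations (49)–(51) can be sketched as a zero-mean GP posterior prediction. The kernel is passed in as a callable, and the small jitter term is an assumption (not in the paper) that guards against numerical singularity of *K*.

```python
import numpy as np

def gp_posterior(X, f, x_new, kernel, jitter=1e-10):
    """Predictive mean and variance at x_new per Eqs. (49)-(50), zero-mean GP."""
    K = np.array([[kernel(a, b) for b in X] for a in X])   # Eq. (47)
    k = np.array([kernel(a, x_new) for a in X])
    Kj = K + jitter * np.eye(len(X))                       # numerical safeguard
    mu = k @ np.linalg.solve(Kj, f)                        # Eq. (49)
    var = kernel(x_new, x_new) - k @ np.linalg.solve(Kj, k)  # Eq. (50)
    return mu, var
```

At an already-observed point the posterior mean reproduces the observation and the variance collapses toward zero, which is why BO concentrates its uncertainty between observations.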

The most prominent acquisition functions in BO are the probability of improvement, the upper confidence bound, and expected improvement per second. In this paper, however, we adopt expected improvement per second-plus. It allows for faster model building and optimization, and the 'plus' term prevents the overexploitation of a region (encouraging a broader search for the global minimum). Expected improvement (EI) is given as:

$$
\mu(EI) = \left(\mu(\mathbf{x}) - f^{\ast} - \zeta\right)\Phi(Z) + \sigma(\mathbf{x})\phi(Z) \tag{52}
$$

where *f*<sup>∗</sup> is the best objective value observed so far, *ζ* is a hyperparameter of *μ*(*EI*), *Z* = (*μ*(*x*) − *f*<sup>∗</sup> − *ζ*)/*σ*(*x*), and *φ*(·) and Φ(·) are the probability density function and cumulative distribution function of the standard normal distribution, respectively. This is further extended to EI per second (*EIpS*) as:

$$EIpS(\mathbf{x}) = \mu(EI) / \mu\_{S}(\mathbf{x})\tag{53}$$

where *μ<sub>S</sub>*(*x*) is the posterior mean of the timing Gaussian process model. The next sampling point *x*<sub>*t*+1</sub> is found by minimizing the expected improvement per second-plus acquisition function *EIpSp*(*x*).

$$\mathbf{x}\_{t+1} = \arg\min \, EIpSp\left( \mu(\mathbf{x}\_{t+1}), \sigma^2(\mathbf{x}\_{t+1}) \right) \tag{54}$$

In doing so, the proposed acquisition function escapes local minima of the objective function and searches for a global minimum by setting *σ<sub>F</sub>*(*x*) to be the standard deviation of the posterior objective function (*P*(*f*|*D*<sub>1:*t*</sub>)) at point *x*. Let *σ<sub>NP</sub>* be the additive noise posterior standard deviation, so that *σ<sub>Q</sub>*<sup>2</sup>(*x*) = *σ<sub>F</sub>*<sup>2</sup>(*x*) + *σ<sub>NP</sub>*<sup>2</sup>. The positive exploration ratio is denoted by *t<sub>σNP</sub>*. After each iteration, the acquisition function checks whether the next point *x* satisfies *σ<sub>F</sub>*(*x*) < *t<sub>σNP</sub>σ<sub>NP</sub>*. If so, the acquisition function declares that *x* is overexploiting and adjusts its kernel function by multiplying *θ* by the number of iterations [32]. Compared with *EIpS*(*x*), this adjustment increases the variance *σ<sub>Q</sub>* for points between observations. It then generates a new point using the newly fitted kernel function. If the new point *x* is still overexploiting, the function multiplies *θ* by a factor of ten and tries again, repeating up to five times with the goal of generating a point *x* that is not overexploited. The accepted *x* becomes the next sampling point. As a result, the acquisition function manages the tradeoff between exploring new points in search of a better global solution and exploiting already investigated regions. The whole process optimizes the FFNN structure faster and with a reduced computational burden.
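As a concrete illustration of Eq. (52), a plain-Python EI evaluation is sketched below; this is a minimal sketch under the paper's sign convention, the overexploitation safeguard described above is omitted, and the function names are ours:

```python
import math

def norm_pdf(z):
    """Standard normal probability density phi(z)."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    """Standard normal cumulative distribution Phi(z)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement(mu, sigma, f_best, zeta=0.0):
    """Eq. (52): EI = (mu - f_best - zeta) * Phi(Z) + sigma * phi(Z),
    with Z = (mu - f_best - zeta) / sigma; EI -> 0 as sigma -> 0."""
    if sigma == 0.0:
        return 0.0
    imp = mu - f_best - zeta
    z = imp / sigma
    return imp * norm_cdf(z) + sigma * norm_pdf(z)
```

Per Eq. (53), *EIpS* then divides this value by the timing model's posterior mean *μ<sub>S</sub>*(*x*).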

#### 2.4.4. Implementation of Proposed Framework

The steps to train the FFNN model with the LMBP algorithm and optimize network hyperparameters with the Bayesian algorithm are demonstrated in Figure 4.

**Figure 4.** Proposed Framework.

In step 1, fault location and impedance are varied across multiple simulations to create the training and testing datasets. Data events are then labeled and normalized to improve the training process. The ANN hyperparameters are determined by feeding the training dataset into BO's surrogate model, which is updated at each iteration until the maximum number of iterations is reached. In step 2, BO selects the optimal ANN hyperparameters that give the minimum root-mean-square error (RMSE), and the FFNN is trained on the training data with the LMBP algorithm. In step 3, the trained ANN model is evaluated on a testing dataset disjoint from the training dataset.

To prevent overfitting, K-fold cross-validation with K = 5 was used during the assessment. The error metric is RMSE = $\sqrt{(1/R\_z)\sum\_{n=1}^{R\_z}(y\_n - \hat{y}\_n)^2}$, where *R<sub>z</sub>* is the data size, *y<sub>n</sub>* the actual output, and *ŷ<sub>n</sub>* the predicted output. The proposed framework can now be implemented; the next section presents a system model that enables the collection and analysis of input features for fault types, matching the theoretical foundation to real-world fault scenarios, and using intelligent computation to train and evaluate the framework's effectiveness.
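The assessment loop can be sketched as follows; this is a minimal stand-in (fold splitting and metric only, with names of our choosing), not the paper's Matlab implementation:

```python
import math

def rmse(actual, predicted):
    """Root-mean-square error: sqrt((1/Rz) * sum_n (y_n - yhat_n)^2)."""
    rz = len(actual)
    return math.sqrt(sum((y - p) ** 2 for y, p in zip(actual, predicted)) / rz)

def kfold_indices(n, k=5):
    """Partition n sample indices into k contiguous folds; each fold serves
    once as the validation set while the remaining folds are used for training."""
    base, rem = divmod(n, k)
    folds, start = [], 0
    for i in range(k):
        size = base + (1 if i < rem else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds
```

Training on K − 1 folds and averaging the held-out RMSE over the K rotations gives the cross-validated score that BO minimizes.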

#### **3. System Model**

The electrical power from two offshore wind farms is transferred to two onshore converters through dc transmission, as shown in Figure 5 [33]. A boundary is defined by installing current-limiting inductors at the ends of the dc lines. Other test grid settings and MMC parameters are provided in Tables 2 and 3, and the cable specifications in Table 4. The scheme is single-ended, meaning that measurements are gathered locally, near the circuit breakers and line inductors.

**Figure 5.** Configuration of MMC-based dc grid.

**Table 2.** Converter parameters.



**Table 3.** AC/dc System parameters.

**Table 4.** Cable parameters.


#### *Model Output*

As shown in Table 5, the examined system model has several outputs that can be used to determine fault distance from a relay contact point. It also lists the fault resistances and fault types, along with a total of 714 dc-link fault scenarios (*k*) for training. The dc-link faults are categorized into pole-to-pole (PTP) and pole-to-ground (PTG) faults. It is important to note that a dc-link fault is an internal fault, so the criterion (*dV<sub>dc</sub>*/*dt*) is applied when an internal failure occurs. Once this criterion is triggered, the trained algorithm begins sampling the relevant values over a 10 ms time window and estimating the fault distance. Note that the fault detection strategy is selective in nature.

**Table 5.** Internal fault scenarios for training data.



#### **4. Data Processing**

The fact that the initial travelling waves of the voltage and current induced by the dc-link faults from the system above contain helpful information about fault distance is exploited in this study [7]. However, noise interference is expected, considering the dynamic disturbances associated with the MT-HVdc system. Therefore, the following sub-section discusses the noise suppression mechanism before processing data for the regression model.

#### *4.1. Signal Processing*

The implementation of the DWT to suppress noises from a measured signal is shown in Figure 6 [34].

**Figure 6.** Signal denoising.

#### 4.1.1. Setting Numbers of Decomposition Layers

Decomposing the signal into more DWT layers helps separate noise from the original signal, resulting in better filtering. We chose eight levels to balance the signal processing burden against robustness to noise, corresponding to the frequency band of 195.3–390.6 Hz at a sampling frequency of 50 kHz.

#### 4.1.2. Selection of Mother Wavelet Function

The next critical step in the denoising scheme is choosing a mother wavelet. A literature review and practical results presented in previous studies show that Daubechies (db) wavelets are appropriate for analyzing fault signals [35]. In this study, the Pearson correlation coefficient between the cable fault signal and its Daubechies-denoised counterpart is used to select the best mother wavelet function. The correlation coefficient is written as follows:

$$\rho = \sum \left( X - \overline{X} \right) \left( Y - \overline{Y} \right) \Big/ \sqrt{\sum \left( X - \overline{X} \right)^2 \sum \left( Y - \overline{Y} \right)^2} \tag{55}$$

where *X* is the original fault signal, $\overline{X}$ its average, *Y* the noise-eliminated fault signal, and $\overline{Y}$ its average.
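Eq. (55) is the ordinary Pearson correlation; a direct transcription (with a name of our choosing) is:

```python
import math

def pearson(x, y):
    """Pearson correlation between the original fault signal x and the
    denoised signal y, per Eq. (55); used to rank candidate db wavelets."""
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den
```

The candidate db wavelet whose denoised output correlates most strongly with the measured fault signal would be retained.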

#### 4.1.3. Set the Threshold and Filter the Signal

After selecting the mother wavelet, the noise in the fault signal can be filtered out. After wavelet decomposition, the threshold of each decomposition layer is set automatically from that layer's median via the universal threshold rule, as expressed:

$$
\lambda\_j = \frac{\sigma\_j}{0.6745} \ast \sqrt{2 \log n\_j} \tag{56}
$$

where *λ<sub>j</sub>* is the threshold of the *j*th decomposition layer, *σ<sub>j</sub>* is the median of the *j*th decomposition layer, and *n<sub>j</sub>* is the signal length of the *j*th decomposition layer. After setting the threshold, the noise is filtered out through a thresholding process, which usually uses either a soft or a hard threshold [35]. In this study, a hard threshold is used to filter out the noise.

$$
\delta\_{\lambda}{}^{Hard} = \begin{cases}
x(t), & \text{if } |x(t)| > \lambda \\
0, & \text{otherwise}
\end{cases} \tag{57}
$$

This equation demonstrates that the hard threshold retains the larger wavelet coefficients while coefficients below the threshold are set to zero. Finally, using the inverse DWT (IDWT), the hard-thresholded coefficients are reconstructed layer by layer into a noise-free signal. The implementation of the proposed denoising approach with a 20 dB signal-to-noise ratio (SNR) is shown in Figure 7.
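Eqs. (56) and (57) can be sketched as below, interpreting *σ<sub>j</sub>* as the median absolute coefficient of the layer (the usual noise estimate); the helper names are ours:

```python
import math
import statistics

def universal_threshold(coeffs):
    """Eq. (56): lambda_j = (sigma_j / 0.6745) * sqrt(2 * ln n_j), taking
    sigma_j as the median absolute coefficient of the decomposition layer."""
    sigma = statistics.median(abs(c) for c in coeffs)
    return (sigma / 0.6745) * math.sqrt(2.0 * math.log(len(coeffs)))

def hard_threshold(coeffs, lam):
    """Eq. (57): keep coefficients whose magnitude exceeds lambda, zero the rest."""
    return [c if abs(c) > lam else 0.0 for c in coeffs]
```

Applied per layer, this keeps the few large (signal-bearing) coefficients and zeroes the small noise-dominated ones before IDWT reconstruction.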

**Figure 7.** Effect of the denoised solution on the contaminated signal of 20 dB signal-to-noise ratio (SNR).

#### *4.2. Feature Extraction Set-Up*

After selecting and denoising the signal, the feature extraction stage is critical for data-driven-based fault detection and location estimation problems. Extracted features are measurable data taken from the transient of the current- and voltage-filtered signals to create a feature vector. This feature vector should be dimensionally compact to successfully implement the learning and generalization processes in the estimation algorithms for fault location. The feature extraction stage is divided into two sub-stages. The first stage involves decomposing all generated samples for each fault location up to eight levels using DWT-MRA to obtain wavelet coefficients. The wavelet coefficients are *Aj* approximation and *Dj* detail levels. For each type of fault location, vectors of D1–D8 and A8 coefficients are obtained. The second stage of feature extraction involves providing effective and appropriate statistical parameters for feature vector creation to reduce the collected data and improve estimation performance.

#### 4.2.1. Feature Extraction Results

When a large number of high-frequency components of voltage and current signals are fed for training, several learning tools face problems due to a limitation on the input space dimension. These learning tools lack the capability to provide suitable learning patterns with a large number of features. This is due to the enlargement of the structure and an extreme increase in the number of learning parameters [11]. The regression model used in this study is designed to train with the second norm (referred to as the norm) of the wavelet coefficients. In general, the decomposed signal's norm for wavelet coefficients is determined as follows:

$$norm\_{D\_{\vec{i}}} = \sqrt{\sum\_{i=1}^{n} \left| \left| D\_{i,\vec{j}} \right|^{2}} \tag{58}$$

$$norm\_{A\_j} = \sqrt{\sum\_{i=1}^{n} \left| \left| A\_{i,j} \right| \right|^2} \tag{59}$$

*j* denotes the decomposition level, and the maximum level of decomposition is *N*. The detail and approximation coefficients have *n* values at level *j*. Overall, the proposed energy vector obtained from the MRA-based DWT for any current or voltage signal in a given time window is represented as

$$\mathbf{x} = \begin{bmatrix} norm\_{D\_1}, & norm\_{D\_2}, & \dots, & norm\_{D\_8}, & norm\_{A\_8} \end{bmatrix} \tag{60}$$
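Eqs. (58)–(60) reduce each decomposed signal to nine scalars; a minimal sketch (names are ours):

```python
import math

def coeff_norm(coeffs):
    """Second (Euclidean) norm of a wavelet coefficient vector, Eqs. (58)-(59)."""
    return math.sqrt(sum(c * c for c in coeffs))

def feature_vector(details, approx):
    """Eq. (60): x = [norm_D1, ..., norm_D8, norm_A8] for one decomposed signal."""
    return [coeff_norm(d) for d in details] + [coeff_norm(approx)]
```

Given the D1–D8 detail vectors and the A8 approximation from the eight-level DWT, `feature_vector` yields the nine norms used per signal.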

Using the MRA-based DWT, norm values of the current signal for ground faults at various sites are calculated and presented in Figure 8. There is a distinct difference between the fault locations in the norms at levels D6 through D8. These differences indicate that the obtained features contain distinct fingerprints for estimating ground faults at various places. Figure 9 shows the obtained features of the voltage signal for ground faults at locations between 40 and 200 km.

**Figure 8.** Feature vector extracted for ground fault at various locations of the current signal.

**Figure 9.** Feature vector extracted for ground fault at various locations of the voltage signal.

In Figure 9, the norm values for each location are significantly different in the dominant frequency band between D5 and D8 and can be used as input vectors to establish fault estimation rules. Similarly, as illustrated in Figure 10, a unique signature of the pole-to-pole fault may be derived at different frequency bands. A schematic diagram for the feature vector development process is shown in Figure 11.

**Figure 10.** Feature vector extracted for the PTP fault at various locations of the voltage signal.

**Figure 11.** Flow chart for the development of the feature vector.

#### 4.2.2. Training Set-Up

Following the preprocessing strategies, these extracted features are standardized for computational simplification. Once the feature vectors have been determined, the decluttered training dataset is applied to the BO-based AI model to find the appropriate hyperparameters for the FFNN. The input vector **p** = (x1, x2, x3, x4) over a 10 ms window is designed for the FFNN input; two inputs (x1, x2) represent the second norms of the transient dc current from the positive and negative poles, while the others (x3, x4) are the second norms of the dc voltage from the positive and negative poles. Since each signal contributes nine norm values (D1–D8 and A8), this corresponds to 36 inputs per training sample (total training samples *k* = 714). BO's surrogate model is updated at each iteration until the maximum number of iterations is reached. BO then selects the FFNN hyperparameters that result in the lowest RMSE, and the FFNN is trained using the LMBP algorithm. The final RMSE obtained is 0.0132, with a total evaluation time of 39.3428 s over 30 iterations. Some key hyperparameters of the multilayer FFNN model obtained via BO are presented in Table 6.


**Table 6.** Optimized parameters.

#### **5. Simulation Results and Discussions**

#### *Metric for Evaluation and Testing Set-Up*

During validation, the selected model's average estimation accuracy was 98.94%. For further investigation, however, we tested our method using the case studies given in Table 7. For verification and more in-depth analysis, a performance index based on percentage error was used as follows:

$$Percentage\ error = \frac{\text{Actual Location} - \text{Prediction location}}{\text{Total length of transmission line}} \times 100\tag{61}$$
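Eq. (61) can be transcribed directly; the absolute value is taken so that errors in either direction are positive, consistent with the absolute errors reported later, and the function name is ours:

```python
def percentage_error(actual_km, predicted_km, line_length_km):
    """Eq. (61): fault-location error as a percentage of the total line length."""
    return abs(actual_km - predicted_km) / line_length_km * 100.0
```

For instance, with the 200 km line assumed here, an actual fault at 185 km predicted at 183.63147 km gives 0.684 percent, matching the Case 2 figures.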

**Table 7.** Testing fault scenarios for testing data.


#### *5.1. Case 1 (Fault Location)*

In Case 1 (varying fault locations and fault resistances), the functionality of the proposed technique was tested using the scenarios given in Table 7. After thorough training, fault analyses were carried out with varying fault distances and resistances. Table 8 shows the absolute and percentage errors over 800 test samples for the two types of dc-link faults, PTP and PTG. The percentage error on the testing dataset was 0.4927% for PTP faults and 0.5361% for PTG faults. The proposed technique's total percentage error was 0.5144%, demonstrating that the estimation error was well within acceptable bounds.

**Table 8.** Fault location estimation errors.


In addition, Figure 12 depicts the percentage error of the proposed technique in locating PTP faults on line 13 with fault distances ranging from 5 km to 200 km. With a maximum percentage error of 1.3174% at 175 km and a minimum of 0.00103% at 15 km, the findings revealed that variance in fault distance had no major impact on the proposed algorithm. Therefore, the proposed approach is suitable for locating both close-in and far-away faults.

**Figure 12.** Accuracy of the proposed technique.

#### *5.2. Case 2 (Fault Resistance)*

Apart from fault location, it is important to note that the characteristics and amplitude of fault signals, such as the voltage and current measured at the local terminal, are also determined by fault parameters such as fault resistance. It is therefore crucial to assess the proposed approach's performance under diverse fault resistances. This section analyzes the proposed algorithm's performance for fault resistances ranging from 10 to 385 Ω, with the results given in Table 9. Notably, for a high fault resistance such as 385 Ω with an actual fault distance of 185 km, the energy of the travelling waves tends to be low, bringing the system closer to the steady state. Nevertheless, the proposed algorithm, with its selected features, extracted even the most minute voltage and current information. For example, the predicted fault distances for PTP and PTG faults at 385 Ω were 183.63147 km and 186.93141 km, respectively, and the associated percentage errors of 0.68427% and 0.96571% were well within acceptable limits.

**Table 9.** Fault location results under various fault resistances.
#### *5.3. Case 3 (Noisy Events)*

In this case, white Gaussian noise was added to the testing signals to examine the proposed fault-locating scheme under various noisy conditions. Signals with SNRs ranging from 20 to 45 dB were employed to assess fault location performance. Table 10 indicates that the proposed scheme could locate all types of faults with a reasonable mean percentage error for close-in, mid-line, and far-end faults. For 45 dB noise added at the far end of the dc-link (155 km), the total mean percentage error was 0.72424% for PTP faults and 0.83147% for PTG faults. It is worth noting that the proposed method was noise-resistant because of the denoising process with well-chosen threshold settings and functions. This preserved estimation accuracy even at the high noise level of 20 dB, with overall mean percentage errors of 0.9411% and 0.8561% for PTP and PTG faults, respectively.
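The noisy test signals can be generated as sketched below, where the noise power is set from the target SNR; this is a standard construction and the function is ours, not the paper's:

```python
import math
import random

def add_awgn(signal, snr_db, seed=0):
    """Add white Gaussian noise so that the result has (approximately)
    the requested signal-to-noise ratio in dB:
    noise power = signal power / 10^(SNR/10)."""
    rng = random.Random(seed)
    p_signal = sum(s * s for s in signal) / len(signal)
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))
    sd = math.sqrt(p_noise)
    return [s + rng.gauss(0.0, sd) for s in signal]
```

Sweeping `snr_db` from 20 to 45 reproduces the range of noisy conditions used in this case.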


**Table 10.** Results under different noisy events.

#### *5.4. Case 4 (Comparison with Existing Methods)*

To further validate the proposed scheme's robustness, Figure 13 compares it against intelligent benchmarks, namely a conventional FFNN and a BP-NN fed the original current signal as input, under the testing conditions listed in Table 7.

**Figure 13.** Comparative analysis.

On a dual-core 2.9 GHz Intel Core i7 with 16 GB RAM, the current version of the algorithm implemented in Matlab® R2020a took 39.3428 s to run; thirty ANN models were selected, trained, and validated within this runtime. This was approximately five times faster than a conventional FFNN configured manually with hyperparameters. The results showed that the proposed algorithm outperformed the BP-NN and had the lowest percentage errors (0.49%, 0.54%, and 0.51% overall) across fault types. The conventional FFNN, with 15 hidden-layer neurons and a learning rate of 0.01, gave an average percentage error of 0.56%. This shows that the efficient features and well-regulated parameters of the proposed algorithm help increase the interpretability of the wavelet-generated spectrum.

#### **6. Comparison and Analysis**

This section compares the proposed methodology with existing fault estimation schemes for the MT-HVdc grid.

#### *6.1. Non-AI-Based Methods*

The fault location method of [36] applies a continuous wavelet transform to dc line current signals in the MT-HVdc network. The technique is quite efficient; however, it requires a high sampling frequency of 200 kHz and time-synchronized measurements, and its evaluation under high fault resistance has not been thoroughly investigated. Another work used time-stamped measurements to locate faults at a 200 kHz sampling frequency [37]. That model is robust against measurement noise, but the high sampling frequency and synchronized measurements could be a barrier to practical application. The single-ended TW-based fault location model of [38] has no synchronization issue but requires a high sampling frequency (100 kHz) [39]. In another example, modal voltage and current measurements are sampled at 1 MHz to develop a single-ended fault location model [40]; however, it has only been tested up to 100 Ω fault resistance. All the aforementioned TW-based fault location models require a high sampling frequency for good accuracy, which is frequently considered a drawback. In comparison, the proposed single-ended fault location approach operates at a reasonable sampling frequency and is tested against fault resistances as high as 485 Ω.

#### *6.2. AI-Based Methods*

Among fault location approaches, learning-based techniques form a distinct category. Although such methods are commonly utilized in ac systems for fault localization, few papers discuss their relevance to MT-HVdc networks. For example, an extreme learning machine was proposed to locate faults in the MT-HVdc network [41]. Voltage and current measurements were captured at a 500 kHz sampling frequency during the learning phase, with the wavelet transform and S-transform used for feature extraction; however, the scheme was only tested for fault resistances up to 100 Ω. Similarly, in [42], voltage and current measurements are sampled at 200 kHz, and highly resistive faults are not investigated. Another method applied a traditional two-ended TW-based fault location algorithm to current measurements sampled at 5 kHz [43]; the distance inaccuracy caused by the moderate sampling frequency was subsequently reduced using a machine-learning approach, but the multiple distributed sensors required on long transmission lines add cost to the method. With the help of the ANN, the real-time implementation of the proposed method is quite efficient, and it has been proven to have a low execution time on low-spec machines [44]. Further, none of the aforementioned models discusses optimization of the machine-learning model, whereas the proposed approach optimizes the pre-training set-up with the help of Bayesian optimization.

#### **7. Conclusions**

In this paper, a novel AI-based dc fault location scheme for a meshed dc grid is proposed. The BO-based FFNN model with the DWT is used to determine the best hyperparameters, improving the selected model's performance while keeping the RMSE low. Levenberg–Marquardt backpropagation is used to adjust the weights and biases of the chosen multilayer FFNN model during training. The contributions of this work are summarized as follows:


In future work, variable time windows will be used to consider the effect of the fault location, fault resistance, and computational burden. This work provides an analysis of the fault location estimation method for HVdc cable grids that can be applied to hybrid cable–overhead line systems as well.

**Author Contributions:** Conceptualization, M.Z.Y.; methodology, M.Z.Y. and M.F.T.; software, M.Z.Y.; validation, M.A.K. and M.Z.Y.; mathematical analysis, M.A.K. and M.Z.Y.; investigation, M.F.T.; resources, M.Z.Y. and A.R.; writing original draft preparation, M.Z.Y.; writing review and editing, M.Z.Y. and A.R.; visualization, M.Z.Y. and A.R.; supervision, A.R.; project administration, M.A.K., F.B., M.F.T. and M.Z.Y.; funding acquisition, M.Z.Y. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The authors extend their appreciation for the support provided by the Hubei University of Automotive Technology (Shiyan, China) under a long-term innovation project (Project name: BK202211).

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**

