Article

Fault Diagnosis of an Excitation System Using a Fuzzy Neural Network Optimized by a Novel Adaptive Grey Wolf Optimizer

1 School of Electrical Engineering, Southeast University, Nanjing 210096, China
2 State Grid Electric Power Research Institute Company, Nanjing 211106, China
3 School of Automation, Nanjing University of Science and Technology, Nanjing 210094, China
* Author to whom correspondence should be addressed.
Processes 2024, 12(9), 2032; https://doi.org/10.3390/pr12092032
Submission received: 26 August 2024 / Revised: 17 September 2024 / Accepted: 19 September 2024 / Published: 20 September 2024

Abstract
As the excitation system is the core control component of a synchronous condenser system, its fault diagnosis is crucial for maximizing the reactive power compensation capability of the synchronous condenser. To achieve accurate diagnosis of excitation system faults, this paper proposes a novel adaptive grey wolf optimizer (AGWO) to optimize the initial weights and biases of a fuzzy neural network (FNN), thereby enhancing the diagnostic performance of the FNN model. Firstly, an improved nonlinear convergence factor is introduced to balance the algorithm’s global exploration and local exploitation capabilities. Secondly, a new adaptive position update strategy that enhances the interaction of position information is proposed to improve the algorithm’s ability to escape local optima and to accelerate convergence. In addition, the proposed AGWO algorithm is proven to have global convergence. Case validation on real fault waveforms of the excitation system shows that the proposed AGWO has better convergence performance than the grey wolf optimizer (GWO), particle swarm optimization (PSO), the whale optimization algorithm (WOA), and the marine predator algorithm (MPA). Specifically, compared to the FNN and GWO-FNN models, the AGWO-FNN model improves the average diagnostic accuracy on the test set by 4.2% and 2.5%, respectively. Therefore, the proposed AGWO-FNN effectively enhances the accuracy of fault diagnosis in the excitation system and exhibits stronger diagnostic capability.

1. Introduction

With the rapid development of ultra-high-voltage direct current (UHVDC) transmission, issues such as insufficient dynamic reactive power reserve at the sending and receiving ends, as well as weak voltage support capability, have become increasingly prominent [1]. In this context, new large-capacity synchronous condensers, compared to power electronic reactive power compensation devices, have been widely used due to their strong voltage support capability, wide reactive power regulation range, and large short-circuit capacity [2], which can satisfy the system’s dynamic reactive power demands and maintain grid voltage stability. As the excitation system is the core control component of the synchronous condenser system [3], it is of great significance to study its fault diagnosis methods to achieve rapid and accurate fault location and timely troubleshooting to maximize the reactive power compensation capability [4] of the synchronous condenser.
Currently, fault diagnosis methods for excitation systems fall into three main categories: model-based, signal analysis-based, and knowledge-based methods, all of which require feature extraction and rely on threshold judgments. Model-based methods establish a reference model of the excitation control system [5], apply the same input signals as the actual system, compare the output signals of the reference model with those of the actual system, and calculate residuals for fault diagnosis. However, these methods require high modeling accuracy, which limits their practical application. Signal analysis-based methods process and analyze collected voltage and current signals to extract time–frequency components that contain fault information [6,7]. Nevertheless, these methods are mostly applied to the fault diagnosis of power unit thyristors in excitation systems and require high signal sampling frequencies.
Knowledge-based methods rely on expert experience and historical data, utilizing artificial intelligence techniques to map fault features to fault types for diagnosis. Fault diagnosis methods based on expert experience include fault tree analysis [8] and expert systems [9]. Expert systems convert qualitative expert knowledge into executable logic rules, offering rapid diagnosis and strong interpretability, and have therefore been widely applied in excitation system fault diagnosis [9]. However, these methods require continuous maintenance and updating of the knowledge base. Fault diagnosis methods based on historical data include neural networks [10,11,12,13] and deep learning [14]. Lv et al. [10] simulated demagnetization faults of the excitation system and used a back propagation neural network (BPNN) for fault diagnosis, although no experimental validation was performed. Liao et al. [14] utilized a 1D convolutional neural network (1D-CNN) and a gated recurrent unit (GRU) to achieve automatic feature extraction and fault diagnosis from sequence data; however, deep learning methods require a large amount of fault data. The fuzzy neural network (FNN) combines the advantages of fuzzy inference and neural networks [11,12], effectively handling uncertain and fuzzy information while automatically adjusting its internal parameters through training to achieve nonlinear mappings. By adding a fuzzification layer to the neural network [13], the FNN enriches the feature extraction of the input variables using membership functions and achieves more complex mappings, which not only improves diagnostic efficiency but also enhances the interpretability of the diagnostic results; the FNN is therefore widely used in fault diagnosis.
In FNN training, the selection of the initial weights and biases significantly impacts the model’s classification accuracy. Conventional methods such as zero initialization and random initialization often lead to vanishing or exploding gradients. To address these problems, researchers have proposed rule-based weight initialization methods, such as Xavier initialization [15] and He initialization [16], which optimize the initial parameters by setting the range and variance of the weights. However, these methods lack the flexibility to adapt to specific datasets, potentially resulting in suboptimal training performance. As a solution, intelligent optimization algorithms have been proposed to optimize the initial network parameters. In recent years, emerging algorithms such as the marine predator algorithm (MPA) [17] and war strategy optimization (WSO) [18] have shown superior performance in benchmark function tests. However, these algorithms still face limitations on complex optimization problems: balancing global exploration against local exploitation is difficult, leading to slow convergence and premature trapping in local optima, and they are sensitive to key parameters. The grey wolf optimizer (GWO) [19] encounters similar issues, but its simple implementation and few parameter settings have led to its widespread application across various fields and good versatility. Compared to the genetic algorithm [20] and particle swarm optimization [21], the GWO exhibits stronger optimization ability. Nonetheless, the original GWO struggles to reach the global optimum [22], primarily because its linearly decreasing convergence factor tends to cause premature convergence and because the search strategy dominated by the head wolves limits the algorithm’s ability to escape a local optimum. To address these issues, researchers have improved the GWO by introducing a square decreasing convergence factor to balance exploration and exploitation [23] and by combining the Lévy flight strategy with the differential evolution strategy for position updating [24], thereby enhancing the algorithm’s global search ability.
Based on the above analysis, this paper proposes a novel Adaptive Grey Wolf Optimizer (AGWO) to optimize the initial weights and biases of the FNN, addressing the issue of FNN diagnostic accuracy being highly sensitive to initial network parameters in excitation system fault diagnosis. Firstly, an improved nonlinear convergence factor is introduced to balance the algorithm’s global exploration and local exploitation capabilities. Secondly, a new adaptive position update strategy that enhances the interaction capability of the position information is proposed to improve the algorithm’s ability to escape a local optimum and accelerate the convergence speed. In addition, it is demonstrated that the proposed AGWO algorithm exhibits global convergence. Finally, a sample set constructed with real fault data from the excitation system is used to conduct comparative experiments between the AGWO-FNN model and other fault diagnosis models, verifying the effectiveness of the proposed method.
The rest of the paper is organized as follows. Section 2 introduces common fault types and the overall diagnostic process of the excitation system. Section 3 focuses on specific improvement strategies for the proposed AGWO and provides a theoretical analysis of its convergence. Section 4 details the construction of the AGWO-FNN model. Section 5 verifies the effectiveness and superiority of the proposed algorithm using real fault cases. Finally, Section 6 summarizes the research findings of this paper.

2. Fault Types and Overall Diagnostic Process of the Excitation System

2.1. Types of Faults in the Excitation System

The synchronous condenser adopts the self-shunt excitation mode, and the basic structure of the self-shunt excitation control system is shown in Figure 1, including the synchronous condenser, excitation transformer, excitation regulator, power unit, and demagnetization unit. In Figure 1, SC represents the synchronous condenser, PT represents the potential transformer, and CT represents the current transformer.
According to the fault source, the common fault types of the excitation system [5,25,26] are listed in Table 1. Measurement faults include PT disconnection and CT sampling anomalies, while a digital input anomaly refers to a situation in which the excitation increase or decrease commands received by the regulator do not match the actual operating requirements of the excitation system; both of these fault types constitute controller input abnormalities. Controller faults include software program errors and missing trigger pulses, which can produce incorrect control outputs. Power unit faults mainly comprise thyristor open-circuit faults, and demagnetization circuit faults include an unclosed demagnetization switch and disconnection of the excitation winding; faults in the power part of the excitation system lead to a decrease in the field current.

2.2. Overall Diagnostic Process of the Excitation System

The overall diagnostic process of the excitation system is shown in Figure 2, which consists of three parts: data preprocessing, feature selection, and fault diagnosis. The input data include the raw measurement point data of the excitation system, such as voltage, current, power, trigger angle, and other monitoring variables. In the data preprocessing part, the raw data are cleaned by removing outliers, missing value interpolation, applying moving average filtering, and performing standardization. These steps help reduce the influence of outliers, noise, and different orders of magnitude on the diagnostic results, thereby providing a reliable data foundation for subsequent analysis.
Following preprocessing, feature selection aims to extract key feature quantities for fault diagnosis. The variance threshold and Spearman correlation coefficient methods [27] are used to eliminate irrelevant and redundant feature quantities. The ReliefF method [28] is then utilized to select the key feature quantities with the top-ranked feature weight scores, enhancing the efficiency of the fault diagnosis.
In the fault diagnosis part, the fuzzification layer of the FNN extracts fuzzy features from the selected key feature quantities. By adjusting the network weights and biases through gradient descent, the complex mapping between fault features and fault states is achieved, leading to the successful fault diagnosis of the excitation system. To enhance the diagnostic performance of the FNN, the proposed AGWO algorithm is used to optimize the initial network parameters of the FNN. The optimal solution obtained from the AGWO iterative process serves as the initial weights and biases for the FNN, thereby avoiding the issue of the FNN getting trapped in a local optimum during training.

3. Proposed Novel AGWO

To address the problem that the original GWO converges slowly and easily falls into local optima, an AGWO algorithm with an improved nonlinear convergence factor and a new adaptive position update strategy is proposed, and the improved algorithm is proven to have global convergence.

3.1. The Original GWO

Inspired by the social hierarchy of grey wolves, Mirjalili et al. [19] divided the pack into four levels: the head wolves α, β, and δ, and the bottom wolves ω, which correspond to the optimal solution, the suboptimal solution, the third-best solution, and the remaining candidate solutions of the current iteration of the GWO algorithm. Assuming a pack of size N, the position of the ith grey wolf in the D-dimensional search space is represented by \(X_i = (X_i^1, X_i^2, \ldots, X_i^D)\), where i = 1, 2, …, N. Based on the hunting behavior of the pack, it is assumed that the head wolves are closer to the prey and guide the bottom wolves ω to update their positions. The position movement of the wolves can be represented by Equations (1) and (2).
\[
\begin{cases}
X_{i,\alpha}^{d}(k) = X_{\alpha}^{d}(k) - A_{1}^{d}\left|C_{1}^{d}X_{\alpha}^{d}(k) - X_{i}^{d}(k)\right| \\
X_{i,\beta}^{d}(k) = X_{\beta}^{d}(k) - A_{2}^{d}\left|C_{2}^{d}X_{\beta}^{d}(k) - X_{i}^{d}(k)\right| \\
X_{i,\delta}^{d}(k) = X_{\delta}^{d}(k) - A_{3}^{d}\left|C_{3}^{d}X_{\delta}^{d}(k) - X_{i}^{d}(k)\right|
\end{cases}
\tag{1}
\]
\[
X_{i}^{d}(k+1) = \frac{X_{i,\alpha}^{d}(k) + X_{i,\beta}^{d}(k) + X_{i,\delta}^{d}(k)}{3}
\tag{2}
\]
where d = 1, 2, …, D; k denotes the current iteration; \(X_i^d(k)\) and \(X_i^d(k+1)\) represent the dth dimensional position of the ith grey wolf at the kth and (k + 1)th iterations; \(X_\alpha^d(k)\), \(X_\beta^d(k)\), and \(X_\delta^d(k)\) are the dth dimensional positions of the head wolves α, β, and δ at the kth iteration; and \(X_{i,\alpha}^d(k)\), \(X_{i,\beta}^d(k)\), and \(X_{i,\delta}^d(k)\) are the dth dimensional target positions of the ith grey wolf moving toward the head wolves α, β, and δ, respectively. The random parameters \(A_j^d\) and \(C_j^d\) are given by:
\[
\begin{cases}
A_{j}^{d} = 2a \cdot r_{1} - a \\
C_{j}^{d} = 2r_{2}
\end{cases}
\tag{3}
\]
where j = 1, 2, 3; \(r_1\) and \(r_2\) are random numbers within the interval [0, 1]; and the convergence factor a decreases linearly from 2 to 0 as the number of iterations increases. The parameter vector \(\mathbf{A}_j = (A_j^1, A_j^2, \ldots, A_j^D)\) determines the search range of the GWO algorithm. When \(|\mathbf{A}_j| < 1\), the grey wolf individuals approach the positions of the head wolves for local exploitation. Conversely, when \(|\mathbf{A}_j| > 1\), the grey wolf individuals explore new regions, corresponding to the global exploration stage.
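To make the update concrete, the following minimal Python sketch implements Equations (1)–(3) for a minimization problem. It is an illustrative reconstruction, not the authors’ implementation (the paper’s experiments were run in MATLAB), and all names are ours.

```python
import numpy as np

def gwo_step(X, fitness, a, rng):
    """One iteration of the original GWO update, Equations (1)-(3).
    X: (N, D) wolf positions; fitness: objective to minimize; a: convergence factor."""
    scores = np.apply_along_axis(fitness, 1, X)
    leaders = X[np.argsort(scores)[:3]]              # alpha, beta, delta
    N, D = X.shape
    X_new = np.empty_like(X)
    for i in range(N):
        targets = np.empty((3, D))
        for j, X_lead in enumerate(leaders):
            A = 2 * a * rng.random(D) - a            # Equation (3)
            C = 2 * rng.random(D)
            targets[j] = X_lead - A * np.abs(C * X_lead - X[i])   # Equation (1)
        X_new[i] = targets.mean(axis=0)              # Equation (2)
    return X_new

rng = np.random.default_rng(0)
X = rng.uniform(-100, 100, size=(30, 5))
X = gwo_step(X, lambda x: np.sum(x ** 2), a=2.0, rng=rng)
```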

3.2. Nonlinear Decreasing Strategy for the Convergence Factor

From Equation (3), the magnitude of \(A_j^d\) depends on the convergence factor a, taking values in the range [−a, a]. When the GWO algorithm is applied to complex optimization problems, the linearly decreasing convergence factor a can easily cause the algorithm to become trapped in a local optimum during the iteration process. To address this issue, a nonlinear decreasing strategy for the convergence factor is proposed, as shown in Equation (4).
\[
a = 1 + \sin\!\left(\frac{\pi}{2} + \pi\left(\frac{k}{k_{\max}}\right)^{\lambda_{1}}\right)
\tag{4}
\]
where \(\lambda_1\) represents the adjustment coefficient and \(k_{\max}\) denotes the maximum number of iterations. The variation of the different convergence factors with the number of iterations is shown in Figure 3, where the squared decreasing convergence factor \(a^{(1)}\) is given by \(a^{(1)} = 2 - 2(k/k_{\max})^2\) [23].
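For reference, a short sketch (ours, not from the paper) compares the three convergence factor schedules; it confirms that Equation (4) also decreases from 2 at k = 0 to 0 at k = k_max:

```python
import numpy as np

def a_linear(k, k_max):                      # original GWO
    return 2 * (1 - k / k_max)

def a_squared(k, k_max):                     # a(1) from [23]
    return 2 - 2 * (k / k_max) ** 2

def a_proposed(k, k_max, lam1=1.5):          # Equation (4)
    return 1 + np.sin(np.pi / 2 + np.pi * (k / k_max) ** lam1)

for k in (0, 250, 500):
    print(k, a_linear(k, 500), a_squared(k, 500), a_proposed(k, 500))
# a_proposed stays above a_linear early (more exploration) and below it late.
```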
To verify the effectiveness of the proposed nonlinear convergence factor strategy, four benchmark functions are selected for testing, and the optimization results of the original GWO and the improved GWO using different nonlinear convergence factors are compared and analyzed. The specific benchmark functions are shown in Table 2, where the function types include unimodal and multimodal, Dim represents the dimension of the function, Range represents the boundary of the search space, and fmin represents the theoretical optimum.
The simulation platform is Windows 10 (64-bit), equipped with a 2.50 GHz Intel processor and 16 GB RAM, and MATLAB 2022a is used for the programming tests. The population size of each algorithm is set to 30, and the maximum number of iterations is 500. To reduce the influence of chance on the results, each algorithm is run 20 times. The average value reflects the convergence accuracy of the algorithm, and the standard deviation reflects its stability. The statistical results are shown in Table 3, where AGWO1 refers to the AGWO variant that adopts only the nonlinear convergence factor introduced in Section 3.2, SQGWO [23] refers to the GWO using the square decreasing convergence factor, and Std denotes the standard deviation.
As can be seen from Table 3, the convergence accuracies of the improved GWO variants with nonlinear convergence factors are superior to that of the original GWO, and AGWO1 with \(\lambda_1 = 1.5\) achieves the best convergence accuracy. To show the convergence performance of the proposed algorithms more intuitively, the iterative convergence curves of the different algorithms on the benchmark functions f1 and f4 are given in Figure 4.
As can be seen from Figure 4, the convergence speed of the algorithms with nonlinear convergence factors is slow in the early stages of iteration. This is due to the slow decrease of the nonlinear convergence factor, which increases the probability of wolves searching for new regions, improving the algorithm’s global exploration but weakening its local exploitation. However, in the later stages of iteration, the AGWO1 with λ 1 = 1.5 achieves optimal convergence accuracy, as it maintains a smaller value than the linearly decreasing convergence factor, thereby strengthening the algorithm’s local exploitation ability. Therefore, λ 1 = 1.5 is chosen as the adjustment coefficient of the nonlinear convergence factor.

3.3. New Adaptive Position Update Strategy

From Equations (1) and (2), it is evident that the GWO algorithm updates positions using only the position information of the three head wolves, without facilitating information sharing among the other individuals in the pack. To maintain the diversity of the wolf pack and accelerate convergence, an adaptive position update strategy that enhances position information interaction is proposed; here, interaction refers to the sharing and mutual influence of position information among individuals. The position of a grey wolf is updated by integrating position information from the head wolves, its own historical optimum, and a randomly selected bottom wolf. Specifically, for a grey wolf with better fitness, the position is updated by incorporating its individual historical optimal position. For a grey wolf with poorer fitness, a randomly selected ω wolf guides the position update together with the head wolves, preventing the pack from converging solely toward the head wolves and falling into a local optimum prematurely. Additionally, a fitness-based dynamic weight adjustment strategy is proposed, in which the weights of the head wolves are adjusted in each iteration according to their fitness values to emphasize the leadership role of the α wolf. The specific expressions are given in Equations (5) and (6).
\[
w_{j} = \frac{1/f_{j}}{1/f_{\alpha} + 1/f_{\beta} + 1/f_{\delta}}
\tag{5}
\]
\[
X_{i}^{d}(k+1) =
\begin{cases}
w_{\alpha}X_{i,\alpha}^{d}(k) + w_{\beta}X_{i,\beta}^{d}(k) + w_{\delta}X_{i,\delta}^{d}(k) + \left(1-\dfrac{k}{k_{\max}}\right)^{\lambda_{2}} r_{3}\left(X_{\mathrm{rand}}^{d}(k) - X_{i}^{d}(k)\right), & f_{i} > f_{\mathrm{med}} \\[2ex]
w_{\alpha}X_{i,\alpha}^{d}(k) + w_{\beta}X_{i,\beta}^{d}(k) + w_{\delta}X_{i,\delta}^{d}(k) + \left(1-\dfrac{k}{k_{\max}}\right) r_{3}\left(X_{i\mathrm{best}}^{d}(k) - X_{i}^{d}(k)\right), & f_{i} \le f_{\mathrm{med}}
\end{cases}
\tag{6}
\]
where \(w_j\) and \(f_j\) (j = α, β, δ) represent the position update weights and fitness values of the head wolves α, β, and δ, respectively, and \(r_3\) is a random number within the interval [0, 1]. \(X_{\mathrm{rand}}^d(k)\) is the dth dimensional position of the ω wolf randomly selected during the kth iteration, while \(X_{i\mathrm{best}}^d(k)\) is the dth dimensional historical best position of the ith grey wolf. \(\lambda_2\) is an adjustment coefficient, set to \(\lambda_2 = 0.2\); its value indicates the degree of influence of the randomly selected ω wolf on the position update of the ith grey wolf [20]. \(f_{\mathrm{med}}\) and \(f_i\) are the median fitness of the individual wolves and the fitness value of the ith grey wolf in the current iteration, respectively.
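A minimal Python sketch of Equations (5) and (6) follows. It is an illustrative reconstruction under our own naming: it assumes a minimization problem with positive fitness values (as with a cross-entropy loss) and applies the random number \(r_3\) per dimension.

```python
import numpy as np

def agwo_update(X, targets, scores, X_best, k, k_max, lam2=0.2, rng=None):
    """Adaptive position update, Equations (5)-(6).
    targets: (N, 3, D) per-wolf target positions toward alpha/beta/delta (Eq. (1));
    scores: (N,) current fitness values; X_best: (N, D) per-wolf historical best."""
    rng = rng or np.random.default_rng()
    f_abd = np.sort(scores)[:3]                       # fitness of alpha, beta, delta
    w = (1 / f_abd) / np.sum(1 / f_abd)               # Equation (5)
    f_med = np.median(scores)
    decay = 1 - k / k_max
    N, D = X.shape
    X_new = np.empty_like(X)
    for i in range(N):
        base = np.tensordot(w, targets[i], axes=1)    # weighted head-wolf guidance
        r3 = rng.random(D)
        if scores[i] > f_med:                         # poorer fitness: random omega wolf
            X_rand = X[rng.integers(N)]
            X_new[i] = base + decay ** lam2 * r3 * (X_rand - X[i])
        else:                                         # better fitness: own historical best
            X_new[i] = base + decay * r3 * (X_best[i] - X[i])
    return X_new
```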

3.4. Convergence Analysis of the Proposed AGWO

To illustrate the effectiveness of the AGWO in solving complex optimization problems, a theoretical analysis of its convergence is conducted. When \(|\mathbf{A}_j| < 1\), the AGWO enters the local search stage; the convergence of the algorithm in this stage is discussed first. For an individual i in the wolf pack with relatively better fitness, the position update formula is obtained by substituting Equation (1) into Equation (6):
\[
\begin{aligned}
X_{i}^{d}(k+1) &= \sum_{j=\alpha,\beta,\delta} w_{j}\left[X_{j}^{d}(k) - (2a_{k}\cdot r_{1j} - a_{k})\left|2r_{2j}X_{j}^{d}(k) - X_{i}^{d}(k)\right|\right] + \left(1-\frac{k}{k_{\max}}\right) r_{3}\left(X_{i\mathrm{best}}^{d}(k) - X_{i}^{d}(k)\right) \\
&= w_{\alpha}X_{\alpha}^{d}(k) + w_{\beta}X_{\beta}^{d}(k) + w_{\delta}X_{\delta}^{d}(k) - a_{k}\sum_{j=\alpha,\beta,\delta}(2r_{1j}-1)w_{j}\left|2r_{2j}X_{j}^{d}(k) - X_{i}^{d}(k)\right| \\
&\quad + \left(1-\frac{k}{k_{\max}}\right) r_{3}\left(X_{i\mathrm{best}}^{d}(k) - X_{i}^{d}(k)\right)
\end{aligned}
\tag{7}
\]
where \(a_k\) represents the value of the convergence factor a at the kth iteration, and \(r_{1j}\) and \(r_{2j}\) (j = α, β, δ) are random numbers within the interval [0, 1], so \((2r_{1j} - 1)\) takes random values within the interval [−1, 1]. As the number of iterations k approaches \(k_{\max}\), \(a_k\) gradually converges to 0, and the influence of the second and third terms in Equation (7) on the position of grey wolf i can be ignored. Similarly, for an individual i in the wolf pack with poorer fitness, the position update formula is as follows:
\[
X_{i}^{d}(k+1) = w_{\alpha}X_{\alpha}^{d}(k) + w_{\beta}X_{\beta}^{d}(k) + w_{\delta}X_{\delta}^{d}(k) - a_{k}\sum_{j=\alpha,\beta,\delta}(2r_{1j}-1)w_{j}\left|2r_{2j}X_{j}^{d}(k) - X_{i}^{d}(k)\right| + \left(1-\frac{k}{k_{\max}}\right)^{\lambda_{2}} r_{3}\left(X_{\mathrm{rand}}^{d}(k) - X_{i}^{d}(k)\right)
\tag{8}
\]
Assuming the positions of the head wolves remain unchanged in later iterations, \(X_i^d(k+1)\) can be expressed as shown in Equation (9).
\[
\lim_{k \to k_{\max}} X_{i}^{d}(k+1) = w_{\alpha}X_{\alpha}^{d}(k) + w_{\beta}X_{\beta}^{d}(k) + w_{\delta}X_{\delta}^{d}(k)
\tag{9}
\]
From Equation (9), it can be observed that the AGWO algorithm exhibits convergence during the local search stage.
To validate the global convergence of the AGWO, the methods used by Feng et al. [29] are referenced and appropriately adapted. Assuming that grey wolf i has fallen into the local optimum g(k) before the kth iteration, the probability that it remains at the local optimum after the kth iteration is as follows:
\[
\begin{aligned}
&P\left\{X_{i}^{d}(k+1) = g^{d}(k+1) \,\middle|\, X_{i}^{d}(k) = g^{d}(k)\right\} \\
&\quad = P\left\{\sum_{j=\alpha,\beta,\delta} w_{j}\left[g^{d}(k) - A_{j}^{d}\left|C_{j}^{d}g^{d}(k) - X_{i}^{d}(k)\right|\right] + r_{3}\,\Delta X_{i,i\mathrm{best}}^{d}(k) = g^{d}(k+1) \,\middle|\, X_{i}^{d}(k) = g^{d}(k)\right\} \\
&\quad =
\begin{cases}
1, & \left(A_{j}^{d} = 0 \ \text{or} \ C_{j}^{d} = 1\right) \ \text{and} \ r_{3} = 0 \\
0, & \text{otherwise}
\end{cases}
\end{aligned}
\tag{10}
\]
where \(\Delta X_{i,i\mathrm{best}}^{d}(k) = \left(1 - \frac{k}{k_{\max}}\right)\left(X_{i\mathrm{best}}^{d}(k) - X_{i}^{d}(k)\right)\) and \(g^d(k)\) represents the dth dimensional position of the current local optimum. The parameters \(A_j^d\), \(C_j^d\), and \(r_3\) are random numbers, so the probability of the AGWO remaining at a local optimum is low. According to Equations (2) and (6), compared to the GWO, the AGWO enhances its ability to escape from a local optimum by incorporating an additional term with the random number \(r_3\) into its position update formula.
Assuming that the region around the globally optimal solution in the D-dimensional search space is \(O^{*}\), the probability that the head wolf α has not yet reached \(O^{*}\) before the kth iteration is
\[
P\{X_{\alpha}(k) \notin O^{*}\} = P\{X_{\alpha}(k) \notin O^{*} \mid X_{\alpha}(k-1) \in O^{*}\} \cdot P\{X_{\alpha}(k-1) \in O^{*}\} + P\{X_{\alpha}(k) \notin O^{*} \mid X_{\alpha}(k-1) \notin O^{*}\} \cdot P\{X_{\alpha}(k-1) \notin O^{*}\}
\tag{11}
\]
AGWO employs an elitism preservation strategy [22] to ensure that the optimal position of the head wolf from previous iterations is retained, leading to the following formula:
\[
P\{X_{\alpha}(k) \notin O^{*} \mid X_{\alpha}(k-1) \in O^{*}\} \cdot P\{X_{\alpha}(k-1) \in O^{*}\} = 0
\tag{12}
\]
From Equation (10), it can be seen that the AGWO algorithm is able to escape from a local optimum and reach the globally optimal region with a certain probability within the finite search space, which gives
\[
0 < \varepsilon \le P\{X_{\alpha}(k) \in O^{*} \mid X_{\alpha}(k-1) \notin O^{*}\} < 1
\tag{13}
\]
Equation (11) can be rewritten as
\[
\begin{aligned}
P\{X_{\alpha}(k) \notin O^{*}\} &= P\{X_{\alpha}(k) \notin O^{*} \mid X_{\alpha}(k-1) \notin O^{*}\} \cdot P\{X_{\alpha}(k-1) \notin O^{*}\} \\
&= \left\{1 - P\{X_{\alpha}(k) \in O^{*} \mid X_{\alpha}(k-1) \notin O^{*}\}\right\} \cdot P\{X_{\alpha}(k-1) \notin O^{*}\} \\
&= \prod_{t=2}^{k}\left\{1 - P\{X_{\alpha}(t) \in O^{*} \mid X_{\alpha}(t-1) \notin O^{*}\}\right\} \cdot P\{X_{\alpha}(1) \notin O^{*}\} \\
&\le (1-\varepsilon)^{k-1} \cdot P\{X_{\alpha}(1) \notin O^{*}\}
\end{aligned}
\tag{14}
\]
From Equation (14), it follows that \(\lim_{k \to \infty} P\{X_{\alpha}(k) \in O^{*}\} = 1\), which indicates that, after a sufficient number of iterations, the AGWO algorithm is guaranteed to converge to the globally optimal region. In practical applications, however, the AGWO cannot perform an infinite number of iterations. The proposed improvement strategies therefore aim to increase the probability that the AGWO reaches the globally optimal solution within a finite number of iterations.

4. AGWO-FNN Model Construction

A diagnostic model is proposed that uses AGWO to optimize the initial weights and biases of the FNN. The optimal initial parameters of the network are obtained and assigned through iterative learning with AGWO, thereby improving the diagnostic accuracy of the FNN.

4.1. Basic Structure of Fuzzy Neural Network

The serial-type FNN is selected as the fault diagnosis model. This model utilizes two types of membership functions to extract fuzzy features from the input variables, which are then used as inputs to the BPNN. The structure of the serial-type FNN includes an input layer, a fuzzification layer, a hidden layer, and an output layer, as shown in Figure 5. In this structure, Np, Nh, and No represent the number of nodes in the input, hidden, and output layers, respectively.
The Gaussian function is selected to evaluate the membership degree \(\mu_{p_1,q}^{G}\) of the feature \(x_{p_1}\) with respect to the three fuzzy features “High”, “Normal”, and “Low”. The expression is as follows:
\[
\mu_{p_{1},q}^{G} = e^{-\frac{\left(x_{p_{1}} - c_{p_{1},q}^{G}\right)^{2}}{2\left(\sigma_{p_{1},q}^{G}\right)^{2}}}
\tag{15}
\]
where \(p_1 \in \{1, 2, \ldots, N_p\}\) and q = 1, 2, 3. The parameters \(c_{p_1,q}^{G}\) and \(\sigma_{p_1,q}^{G}\) represent the center and width of the Gaussian function, respectively. The S-type function is used to evaluate the membership degree \(\mu_{p_2}^{S}\) of the feature \(x_{p_2}\) with respect to the fuzzy feature “Abnormal Fluctuation”. The expression is as follows:
\[
\mu_{p_{2}}^{S} = \frac{1}{1 + e^{-\sigma_{p_{2}}^{S}\left(x_{p_{2}} - c_{p_{2}}^{S}\right)}}
\tag{16}
\]
where \(p_2 \in \{1, 2, \ldots, N_p\}\) and \(p_2 \ne p_1\); the parameters \(c_{p_2}^{S}\) and \(\sigma_{p_2}^{S}\) determine the center and width of the unsaturated region of the S-type function. The parameters of the two types of membership functions are pre-set based on expert experience.
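The two membership functions are straightforward to evaluate. The sketch below (ours, illustrative only) uses the parameter values later given in Section 5.3:

```python
import numpy as np

def mu_gaussian(x, c, sigma):
    """Equation (15): membership in 'High'/'Normal'/'Low'."""
    return np.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def mu_s_type(x, c, sigma):
    """Equation (16): membership in 'Abnormal Fluctuation'."""
    return 1 / (1 + np.exp(-sigma * (x - c)))

print(mu_gaussian(1.0, c=1.0, sigma=0.05))   # 'Normal' peaks at 1.0 for x = 1
print(mu_s_type(0.10, c=0.03, sigma=100))    # large variance -> membership ~0.999
```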
The activation functions used are the Sigmoid function for the hidden layer and the Softmax function [14] for the output layer. To avoid the vanishing gradient phenomenon during training, cross entropy (CE) is selected as the loss function for the FNN. The expression is as follows:
\[
L_{\mathrm{CE}} = -\sum_{n=1}^{N_{o}} y_{n}\ln(o_{n})
\tag{17}
\]
where n = 1, 2, …, N o , and y n and o n represent the true probability and predicted probability of the sample belonging to the nth class, respectively.
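A worked instance of Equation (17) with illustrative numbers only: for a sample of class 3 out of \(N_o = 7\) classes,

```python
import numpy as np

def cross_entropy(y_true, o_pred, eps=1e-12):
    """Equation (17): CE between the one-hot label y and the softmax output o."""
    return -np.sum(y_true * np.log(o_pred + eps))

y = np.zeros(7); y[2] = 1.0              # true class: label 3
o = np.full(7, 0.05); o[2] = 0.70        # hypothetical softmax output
print(cross_entropy(y, o))               # -ln(0.70) ~= 0.357
```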

4.2. Flowchart of the AGWO-Optimized FNN Algorithm

The proposed AGWO is used to optimize the initial weights and biases of the FNN, with the algorithm flowchart illustrated in Figure 6. Specifically, the network weights and biases of the FNN are selected as the optimization variables, and the initialization of the wolf population is completed. The fitness function of the AGWO is defined as the cross-entropy loss function of the FNN after training, as shown in Equation (17). Finally, the optimal solution obtained through the iterative process of the AGWO is used as the optimal initial parameter for the FNN.
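The sketch below illustrates, under our own naming, how a 293-dimensional AGWO solution vector can be decoded into the FNN parameters of Section 5.3 and scored; train_fnn stands in for the gradient-descent training routine and is a placeholder, not an API from the paper:

```python
import numpy as np

def decode(theta, sizes=(14, 13, 7)):
    """Unpack one AGWO individual into fuzzification->hidden and hidden->output
    weights and biases (14*13 + 13 + 13*7 + 7 = 293 values)."""
    n_in, n_h, n_out = sizes
    i = 0
    W1 = theta[i:i + n_in * n_h].reshape(n_in, n_h); i += n_in * n_h
    b1 = theta[i:i + n_h]; i += n_h
    W2 = theta[i:i + n_h * n_out].reshape(n_h, n_out); i += n_h * n_out
    b2 = theta[i:i + n_out]
    return W1, b1, W2, b2

def agwo_fitness(theta, fuzzy_X, Y, train_fnn):
    """AGWO fitness: the cross-entropy loss of the FNN trained from theta."""
    return train_fnn(decode(theta), fuzzy_X, Y)   # returns final CE loss
```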
The computational complexity of the AGWO-FNN is analyzed as follows. Regarding time complexity, the cost of the GWO depends on the population size N, the individual dimension D, the maximum number of iterations \(k_{\max}\), and the per-individual fitness evaluation time f(D). The initialization phase of the GWO has time complexity \(O(N \cdot D + N \cdot f(D))\), and the iterative process has time complexity \(O(N \cdot k_{\max} \cdot (D + f(D)))\); the total time complexity of the GWO is therefore \(T_{\mathrm{GWO}} = O(N \cdot k_{\max} \cdot (D + f(D)))\). The proposed AGWO introduces additional cost through the median calculation of the individual fitness values during the position update, which adds \(O(k_{\max} \cdot N \log N)\); consequently, the total time complexity of the AGWO is \(T_{\mathrm{AGWO}} = O(N \cdot k_{\max} \cdot (D + f(D) + \log N))\). Since the running time is dominated by f(D), the running time of the AGWO-FNN is comparable to that of the GWO-FNN. The space complexity of the AGWO-FNN is mainly determined by the population, the fitness values, and temporary variables. The population requires \(O(N \cdot D)\) space, the fitness values require O(N), and the temporary variables, which store individual optima and some random numbers, require O(N). The space complexity of the AGWO-FNN is therefore \(O(N \cdot D + 2N)\).

5. Case Study

To verify the diagnostic performance of the AGWO-FNN model, the data in the case study are obtained from the dynamic characteristics verification experiment of the Zarut synchronous condenser [4] and from typical fault waveforms of the excitation system provided by the power grid company. After data preprocessing and feature selection, the effectiveness and superiority of the proposed method are validated by comparing the diagnostic results of the AGWO-FNN with those of other diagnostic models.

5.1. Data Preprocessing

The quality and dimensional differences of the dataset can affect the diagnostic performance of the model, making it necessary to preprocess the raw data. Outliers are removed, missing values are interpolated, and moving average filtering is applied to the raw waveform data to minimize the influence of spikes and noise on the diagnostic results. Additionally, significant differences in magnitude exist between different variables. To prevent these differences from affecting model performance and to facilitate fuzzification by the FNN, a new standardization method is proposed with the following equation:
\[
x_{\mathrm{st}}(t) = \frac{x(t)}{x_{\mathrm{base}}}
\tag{18}
\]
where x(t) and \(x_{\mathrm{st}}(t)\) represent the original value and the standardized value of the data at time t, respectively, and \(x_{\mathrm{base}}\) is the average value of measurement point x during normal operation.
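A small illustration of Equation (18) with hypothetical field current samples (the values are ours, not from the dataset):

```python
import numpy as np

def standardize(x, x_base):
    """Equation (18): scale a measurement series by its normal-operation mean."""
    return np.asarray(x) / x_base

field_current = [1510.0, 1495.0, 640.0]           # hypothetical raw samples (A)
print(standardize(field_current, x_base=1500.0))  # [1.007 0.997 0.427] (approx.)
```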

5.2. Feature Selection

The excitation system measurement point data involve multiple variables, and irrelevant or redundant feature quantities can degrade the performance of the diagnostic model. Therefore, key feature quantities that contain fault information need to be extracted as inputs to the AGWO-FNN. To eliminate feature quantities that do not contain fault information, the variance threshold method is applied, eliminating measurement points where the variance of the waveform data before and after the fault is smaller than the set threshold.
The ReliefF method [28] is commonly used to select feature quantities for multi-category data, but it cannot eliminate redundant features. Therefore, a combination of the Spearman correlation coefficient [27] and the ReliefF method is used to select the key feature quantities. Redundant features with correlation coefficients greater than 0.95 are removed using Spearman correlation analysis. The raw excitation system measurement point data have 47 dimensions; after eliminating irrelevant and redundant features, 12 key feature quantities containing fault information are obtained, as shown in Table 4, where voltage, current, and power refer to their root mean square (RMS) values.
The feature weights of the key feature quantities listed in Table 4 are calculated using the ReliefF method and ranked from highest to lowest, as illustrated in Figure 7. The top four feature quantities—stator voltage, trigger angle, reactive power, and field current—are selected as the input features for the FNN.
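The two screening steps that precede the ReliefF ranking can be sketched as follows (a rough reconstruction; the variance threshold value is an assumption, and the ReliefF ranking of the survivors is omitted):

```python
import numpy as np
from scipy.stats import spearmanr

def screen_features(X, names, var_thresh=1e-3, corr_thresh=0.95):
    """Variance-threshold filter, then Spearman redundancy removal (Section 5.2)."""
    keep = np.var(X, axis=0) > var_thresh                  # drop near-constant points
    X = X[:, keep]
    names = [n for n, k in zip(names, keep) if k]
    rho, _ = spearmanr(X)                                  # pairwise Spearman matrix
    rho = np.abs(rho)
    drop = set()
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if j not in drop and rho[i, j] > corr_thresh:  # redundant pair
                drop.add(j)                                # keep the earlier feature
    kept = [i for i in range(len(names)) if i not in drop]
    return X[:, kept], [names[i] for i in kept]
```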

5.3. Sample Set Construction and FNN Network Structure Determination

The experimental data encompass seven fault states, including normal operation, external faults, and five types of excitation system faults, each corresponding to a distinct category label. The measurement signals consist of the stator voltage, trigger angle, reactive power, and field current, all sampled at a frequency of 300 Hz. Each sample consists of the sampled values of the measurement signals at the same moment, together with the variances of the trigger angle and reactive power waveforms over the preceding second. For each fault state, 200 samples are selected and divided into training and test sets at a 7:3 ratio, as shown in Table 5. A portion of the sample data is provided in Table 6, where “var” denotes the variance.
Each sample contains six variables, corresponding to six nodes in the input layer of the FNN. The fuzzy feature set has 14 elements in total, corresponding to 14 nodes in the fuzzification layer. The membership function parameters are set as follows: \(c_{p_1,1}^G = 0\), \(\sigma_{p_1,1}^G = 0.4\), \(c_{p_1,2}^G = 1\), \(\sigma_{p_1,2}^G = 0.05\), \(c_{p_1,3}^G = 2\), \(\sigma_{p_1,3}^G = 0.4\), \(c_{p_2}^S = 0.03\), \(\sigma_{p_2}^S = 100\), where \(p_1 = 1, 2, 3, 4\) and \(p_2 = 1, 2\). Based on the category labels, the number of nodes in the output layer is determined to be 7. After several trials, the lowest network training loss is obtained with 13 hidden-layer nodes, so the structure of the FNN network is set to 6-14-13-7. The search dimension of the optimization algorithm can then be calculated as 14 × 13 + 13 + 13 × 7 + 7 = 293.
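The search dimension follows directly from the 6-14-13-7 structure, since only the fuzzification-to-hidden and hidden-to-output weights and biases are optimized:

```python
n_fuzz, n_hidden, n_out = 14, 13, 7
D = n_fuzz * n_hidden + n_hidden + n_hidden * n_out + n_out
print(D)   # 293
```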

5.4. Results and Discussion

In order to evaluate the diagnostic performance of the model, the diagnostic accuracy Acc is selected as the evaluation index, which is calculated as follows:
\[
\mathrm{Acc} = \frac{T_{\mathrm{sum}}}{S_{\mathrm{sum}}}
\tag{19}
\]
where \(T_{\mathrm{sum}}\) is the number of correctly classified samples and \(S_{\mathrm{sum}}\) is the total number of samples. The algorithm implementation platform is the same as in Section 3.2.
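Equation (19) amounts to the following one-liner (ours, for completeness):

```python
import numpy as np

def acc(y_true, y_pred):
    """Equation (19): share of correctly classified samples."""
    return np.mean(np.asarray(y_true) == np.asarray(y_pred))
```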

5.4.1. Validation of the Effectiveness of Improvement Strategies

To verify the effectiveness of the improved nonlinear convergence factor and the adaptive position update strategy in the AGWO, a comparison is conducted among the AGWO, the original GWO, AGWO1 (with only the nonlinear convergence factor modified), and AGWO2 (with only the adaptive position update strategy applied) in terms of their impact on FNN diagnostic performance. In AGWO1, the adjustment coefficient \(\lambda_1\) controls the value of the convergence factor a during the iteration process and thus influences the optimization performance of the algorithm, so the diagnostic results of the AGWO1-FNN with different values of \(\lambda_1\) must be compared. The population size of each optimization algorithm is set to 30, the maximum number of iterations to 100, the search dimension D to 293, and the network parameter optimization range to [−2, 2]. For AGWO1, \(\lambda_1\) is set to {1, 1.5, 2, 3}; for the AGWO, \(\lambda_1\) is set to 1.5. The test set samples are input into the trained diagnostic models, the experiments are repeated 10 times, and the mean and variance of the accuracy are calculated; the diagnostic results are shown in Table 7.
As shown in Table 7, compared to the original GWO, all improved strategies (AGWO, AGWO1, AGWO2) enhance the average diagnostic accuracy of the FNN. For the AGWO1 algorithm, an excessively large adjustment coefficient keeps the algorithm in the global search stage for too long and weakens its local exploitation ability, which degrades diagnostic performance. When the adjustment coefficient \(\lambda_1\) is set to 1.5, AGWO1 effectively coordinates global exploration and local exploitation, and the corresponding diagnostic model performs best, with an average accuracy 1.02% higher than that of the GWO-FNN. The AGWO2-FNN improves the average accuracy by 1.88% over the GWO-FNN, indicating that the adaptive position update strategy avoids local optima more effectively. The AGWO-FNN with both improved strategies has the best diagnostic performance, with an average accuracy of 96.10%, 2.5% higher than the GWO-FNN. Meanwhile, compared with the other diagnostic models in Table 7, the AGWO-FNN has the smallest variance in diagnostic accuracy, indicating good stability. These results demonstrate the effectiveness of the proposed improvement strategies.

5.4.2. Comparison of Different Optimization Algorithms

To verify the advantages of the proposed AGWO in optimization, the parameters of the FNN network are optimized using the AGWO, GWO, PSO, the whale optimization algorithm (WOA) [30], and the MPA, and the optimization curves of these algorithms are compared. The parameter settings common to all algorithms are the same as in Section 5.4.1, and the parameter settings unique to each optimization algorithm are listed in Table 8. The fitness iteration curves of the optimization algorithms are shown in Figure 8, where the fitness function of each algorithm is the cross-entropy loss function of the FNN.
As can be seen from Figure 8, PSO and GWO become trapped in a local optimum at the 17th and 46th iterations, respectively, while the AGWO, WOA, and MPA can all escape from local optima and obtain better feasible solutions. Although the MPA converges faster initially, the AGWO escapes from local optima several times during the iteration process and achieves a lower final fitness value, indicating that the improved strategies of the AGWO help maintain population diversity and improve the quality of the final solution. Based on the above analysis, the AGWO algorithm exhibits better global search capability and convergence performance, making it highly effective for optimizing the initial parameters of FNN networks.

5.4.3. Diagnostic Results of Different Diagnostic Models

To further verify the effectiveness and superiority of the AGWO-FNN, the diagnostic results of various models, including the BPNN, FNN, GWO-FNN, AGWO-FNN, PSO-FNN, WOA-FNN, and MPA-FNN, are compared and analyzed. The BPNN network structure is set to 6-8-7, and its activation functions and loss function are the same as those used in the FNN. The learning rate of the neural networks is set to 0.01, the maximum number of training epochs is 1000, the termination threshold is 0.001, and the elastic gradient descent method is used to train the networks. The sample dataset is input into the different diagnostic models for training and testing. The experiment is repeated 10 times, with the mean and standard deviation of the Acc recorded. The results are shown in Figure 9.
As can be seen from Figure 9, the accuracy of the FNN is substantially improved over the BPNN, with the average accuracy on the training set and the test set improved by 3.4% and 4.7%, respectively. Additionally, the standard deviation of the accuracy for the FNN is lower than that of the BPNN, which indicates that the fuzzification layer of the FNN efficiently extracts discriminative fuzzy features from the input data and realizes more complex mapping relationships, thereby enhancing diagnostic accuracy. The FNN models optimized by the optimization algorithms all exhibit better diagnostic results than the original FNN. Among them, the AGWO-FNN achieves the best diagnostic effect, and its average diagnostic accuracy is optimal on both the test set and the training set, which is consistent with the results of the fitness iteration curves. The average accuracy of the AGWO-FNN on the test set is 4.2%, 2.5%, 3.2%, 1.4%, and 0.7% higher than that of the FNN, GWO-FNN, PSO-FNN, WOA-FNN, and MPA-FNN, respectively, indicating superior diagnostic accuracy. Moreover, compared with the other diagnostic models, the average test set accuracy of the AGWO-FNN is closest to its training set accuracy, with only a 1.4% difference, suggesting that the AGWO-FNN exhibits neither overfitting nor underfitting and demonstrates better generalization ability. Additionally, the AGWO-FNN shows the lowest standard deviation of accuracy on both the test and training sets, further confirming its stability in handling classification tasks.
In order to analyze the diagnostic results of each fault state more intuitively, the confusion matrix is used to illustrate the classification outcomes. The diagnostic results of the BPNN, FNN, GWO-FNN, and AGWO-FNN are presented in Figure 10. These results are representative samples randomly selected from one of the ten repetitions of the experiment.
As can be seen in Figure 10, the BPNN has the lowest diagnostic accuracy, with a total of 47 classification errors; its performance is particularly poor for the measurement fault (label 7), with 18 classification errors and an error rate of 30%. In contrast, the proposed AGWO-FNN has the highest diagnostic accuracy, with only 15 classification errors, and the external fault (label 2), demagnetization circuit fault (label 3), and power unit fault (label 4) reach 100% diagnostic accuracy, indicating that the AGWO-FNN has superior fault diagnostic capability compared to the other models.

6. Conclusions

Aiming at the fault diagnosis of the excitation system, this paper proposes an excitation system fault diagnosis model based on an AGWO-optimized FNN. The proposed AGWO is theoretically analyzed and proven to have global convergence. Verification through examples using real fault waveforms of excitation systems demonstrates that the AGWO has faster convergence speed and higher convergence accuracy compared to the GWO, PSO, WOA, and MPA. Compared with the diagnostic model using the above optimization algorithms, the diagnostic accuracy of the proposed AGWO-FNN is optimal in the test set and the training set. Specifically, the average diagnostic accuracy of the AGWO-FNN in the test set is improved by 4.2%, 2.5%, 3.2%, 1.4%, and 0.7% compared with that of the FNN, GWO-FNN, PSO-FNN, WOA-FNN, and MPA-FNN, respectively. Meanwhile, compared to other diagnostic models, the average accuracy of the AGWO-FNN test set is closest to the training set, with only a 1.4% difference in accuracy, indicating that the AGWO-FNN has a better generalization ability.
Therefore, the proposed AGWO algorithm can significantly improve the diagnostic accuracy of the FNN model, and the AGWO-FNN has clear advantages in excitation system fault diagnosis applications. In future work, the proposed AGWO will be used to further optimize the parameters of the FNN membership functions, eliminating the reliance on expert experience and expanding the method’s range of application.

Author Contributions

Conceptualization, X.F.; methodology, X.F. and D.G.; software, D.G. and H.Z.; validation, K.H. and D.X.; writing—original draft preparation, D.G.; writing—review and editing, X.F. and D.X.; supervision, X.F., W.C. and D.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (52377038), and the State Grid Technology Project (5500-202240497A-3-0-ZZ).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

Authors Kai Hou and Hongchao Zhu were employed by the State Grid Electric Power Research Institute Company. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The State Grid Electric Power Research Institute Company had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Wang, J.; Huang, M.; Fu, C.; Li, H.; Xu, S.; Li, X. A New Recovery Strategy of HVDC System During AC Faults. IEEE Trans. Ind. Electron. 2019, 34, 486–495. [Google Scholar] [CrossRef]
  2. Jia, J.; Yang, G.; Nielsen, A.; Gevorgian, V. Investigation on the Combined Effect of VSC-Based Sources and Synchronous Condensers Under Grid Unbalanced Faults. IEEE Trans. Power Deliv. 2019, 34, 1898–1908. [Google Scholar] [CrossRef]
  3. Hadavi, S.; Rathnayake, D.; Jayasinghe, G.; Mehrizi-Sani, A.; Bahrani, B. A Robust Exciter Controller Design for Synchronous Condensers in Weak Grids. IEEE Trans. Power Syst. 2022, 37, 1857–1867. [Google Scholar] [CrossRef]
  4. Li, Z.; Zhong, Z.; Huang, J. Test Verification of Dynamic Reactive Power Characteristics of Fast Dynamic Response Synchronous Condenser. Proc. CSEE 2019, 39, 6877–7101. [Google Scholar]
  5. Aldeen, M.; Saha, S. Decentralised Fault Detection, Identification, and Mitigation in Excitation Control Systems. Int. J. Electr. Power Energy Syst. 2016, 77, 302–313. [Google Scholar] [CrossRef]
  6. Chen, J.; Hao, L.; Li, H.; Zhang, L. Time–Frequency Characteristics Analysis and Diagnosis of Rotating Rectifier Faults in Multiphase Annular Brushless System. IEEE Trans. Ind. Electron. 2023, 70, 3233–3244. [Google Scholar] [CrossRef]
  7. Sun, C.; Liu, W.; Wei, Z.; Jiao, N.; Pang, J. Open-Circuit Fault Diagnosis of Rotating Rectifier by Analyzing the Exciter Armature Current. IEEE Trans. Power Electron. 2020, 35, 6373–6385. [Google Scholar] [CrossRef]
  8. Wang, H.; Li, W.; Wang, H. Fuzzy Fault Tree Analysis of Synchronous Generator Excitation Equipment used in Power Station Generation. In Proceedings of the 2011 International Conference on Advanced Power System Automation and Protection (APAP), Beijing, China, 16–20 October 2011; pp. 633–636. [Google Scholar]
  9. Vlad, M.; Popovici, R.; Popescu, C.L.; Popescu, M. Expert System for Diagnosis of Electrical Equipment. Case study—Generator Excitation System. In Proceedings of the 2011 7th International Symposium on Advanced Topics in Electrical Engineering (ATEE), Bucharest, Romania, 12–14 May 2011; pp. 1–6. [Google Scholar]
  10. Lv, Y.; Gao, Y.; Zhang, J.; Deng, C.; Hou, S. Symmetrical Loss of Excitation Fault Diagnosis in an Asynchronized High-Voltage Generator. Energies 2018, 11, 3054. [Google Scholar] [CrossRef]
  11. Mohammed, M.; Lim, C. An Enhanced Fuzzy Min–Max Neural Network for Pattern Classification. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 417–429. [Google Scholar] [CrossRef]
  12. Xiao, Y.; Deng, S.; Han, F.; Wang, X.; Jiang, Y.; Peng, K. Intelligent Health Diagnosis of Lithium Battery Pole Double Rolling Equipment Driven by Hybrid BP Neural Network and Expert System. IEEE Access 2022, 10, 80208–80224. [Google Scholar] [CrossRef]
  13. Kudelina, K.; Raja, H. Neuro-Fuzzy Framework for Fault Prediction in Electrical Machines via Vibration Analysis. Energies 2024, 17, 2818. [Google Scholar] [CrossRef]
  14. Liao, G.; Gao, W.; Yang, G.; Guo, M. Hydroelectric Generating Unit Fault Diagnosis Using 1-D Convolutional Neural Network and Gated Recurrent Unit in Small Hydro. IEEE Sens. J. 2019, 19, 9352–9363. [Google Scholar] [CrossRef]
  15. Glorot, X.; Bengio, Y. Understanding the Difficulty of Training Deep Feedforward Neural Networks. In Proceedings of the 13th International Conference on Artificial Intelligence and Statistics (AISTATS), Sardinia, Italy, 13–15 May 2010; pp. 249–256. [Google Scholar]
  16. He, K.; Zhang, X.; Ren, S.; Sun, J. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1026–1034. [Google Scholar]
  17. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A. Marine Predators Algorithm: A Nature-Inspired Metaheuristic. Expert Syst. Appl. 2020, 152, 113377. [Google Scholar] [CrossRef]
  18. Ayyarao, T.; Ramakrishna, N.; Elavarasan, R.; Polumahanthi, N.; Rambabu, M.; Saini, G.; Khan, B.; Alatas, B. War Strategy Optimization Algorithm: A New Effective Metaheuristic Algorithm for Global Optimization. IEEE Access 2022, 10, 25073–25105. [Google Scholar] [CrossRef]
  19. Mirjalili, S.; Mirjalili, S.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  20. Raj, B.; Ahmedy, I.; Idris, M.; Noor, R. A Hybrid Sperm Swarm Optimization and Genetic Algorithm for Unimodal and Multimodal Optimization Problems. IEEE Access. 2022, 10, 109580–109596. [Google Scholar] [CrossRef]
  21. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the 1995 International Conference on Neural Networks (ICNN), Perth, Australia, 27 November–1 December 1995; pp. 1942–1948. [Google Scholar]
  22. Makhadmeh, S.; Al-Betar, M.; Doush, I.; Awadallah, M.; Kassaymeh, S.; Mirjalili, S.; Zitar, R. Recent Advances in Grey Wolf Optimizer, its Versions and Applications: Review. IEEE Access 2024, 12, 22991–23028. [Google Scholar] [CrossRef]
  23. Mittal, N.; Singh, U.; Sohi, B. Modified Grey Wolf Optimizer for Global Engineering Optimization. Appl. Comput. Intell. Soft Comput. 2016, 2016, 7950348. [Google Scholar] [CrossRef]
  24. Heidari, A.; Pahlavani, P. An Efficient Modified Grey Wolf Optimizer with Lévy Flight for Optimization Tasks. Appl. Soft. Comput. 2017, 60, 115–134. [Google Scholar] [CrossRef]
  25. Guo, X.; Huang, J.; Fu, W.; Li, L.; Lan, Q.; Liu, C. Excitation System Failure Analysis of Fast Dynamic Response Synchronous Condenser. Power Syst. Technol. 2021, 45, 4205–4211. [Google Scholar]
  26. Liu, W.; Huang, X.; Liu, Z.; Huang, D. Intelligent Fault Diagnosis and Fault Tolerant Control on Nonlinear Excitation System. In Proceedings of the 2006 6th World Congress on Intelligent Control and Automation (WCICA), Dalian, China, 21–23 June 2006; pp. 5787–5790. [Google Scholar]
  27. Shaikh, M.; Barbé, K. Wiener–Hammerstein System Identification: A Fast Approach Through Spearman Correlation. IEEE Trans. Instrum. Meas. 2019, 68, 1628–1636. [Google Scholar] [CrossRef]
  28. Fan, H.; Xue, L.; Song, Y.; Li, M. A repetitive feature selection method based on improved ReliefF for missing data. Appl. Intell. 2022, 52, 16265–16280. [Google Scholar] [CrossRef]
  29. Feng, W.; Deng, B. Global Convergence Analysis and Research on Parameter Selection of Whale Optimization Algorithm. Control Theory Appl. 2021, 38, 641–651. [Google Scholar]
  30. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
Figure 1. Basic structure of self-shunt excitation control system for synchronous condenser.
Figure 2. The overall diagnostic process of the excitation system.
Figure 3. Variation curves of different convergence factors.
Figure 4. Iterative convergence curves of GWO algorithm with different convergence factors.
Figure 5. The structure of the serial-type FNN.
Figure 6. Flowchart of the AGWO-optimized FNN algorithm.
Figure 7. ReliefF method feature weight ordering.
Figure 8. Fitness iteration curves of different optimization algorithms.
Figure 9. Comparison of diagnostic accuracy of different diagnostic models.
Figure 10. Comparison of confusion matrices for different diagnostic models.
Table 1. Types of faults in the excitation system.

| Fault Source | Fault Type |
| --- | --- |
| Excitation regulator | Measurement fault |
| | Digital input anomaly |
| | Controller fault |
| Power unit | Power unit fault |
| Demagnetization unit | Demagnetization circuit fault |
Table 2. Benchmark functions.

| Type of Function | Function | Dim | Range | fmin |
| --- | --- | --- | --- | --- |
| Unimodal | \(F_1(x) = \sum_{i=1}^{n} x_i^2\) | 30 | [−100, 100] | 0 |
| Unimodal | \(F_2(x) = \sum_{i=1}^{n} \lvert x_i \rvert + \prod_{i=1}^{n} \lvert x_i \rvert\) | 30 | [−10, 10] | 0 |
| Multimodal | \(F_3(x) = \sum_{i=1}^{n} \left[x_i^2 - 10\cos(2\pi x_i) + 10\right]\) | 30 | [−5.12, 5.12] | 0 |
| Multimodal | \(F_4(x) = -20\exp\!\left(-0.2\sqrt{\tfrac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\!\left(\tfrac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right) + 20 + e\) | 30 | [−32, 32] | 0 |
Table 3. Statistical results of GWO algorithm with different convergence factors.

| Function | Index | GWO | AGWO1 (λ1 = 1) | AGWO1 (λ1 = 1.5) | AGWO1 (λ1 = 3) | SQGWO |
| --- | --- | --- | --- | --- | --- | --- |
| F1 | Mean | 1.96 × 10−27 | 1.11 × 10−28 | 1.74 × 10−37 | 1.68 × 10−34 | 6.15 × 10−36 |
| | Std | 2.37 × 10−27 | 1.76 × 10−28 | 1.95 × 10−37 | 3.22 × 10−34 | 9.54 × 10−36 |
| F2 | Mean | 8.94 × 10−17 | 2.41 × 10−17 | 2.18 × 10−22 | 1.07 × 10−20 | 1.58 × 10−21 |
| | Std | 4.12 × 10−17 | 2.61 × 10−17 | 1.13 × 10−22 | 8.57 × 10−21 | 1.36 × 10−21 |
| F3 | Mean | 2.80 | 0.51 | 2.84 × 10−14 | 8.52 × 10−14 | 3.41 × 10−14 |
| | Std | 3.77 | 1.63 | 2.99 × 10−14 | 2.69 × 10−13 | 1.07 × 10−13 |
| F4 | Mean | 1.03 × 10−13 | 6.90 × 10−14 | 1.57 × 10−14 | 1.64 × 10−14 | 2.12 × 10−14 |
| | Std | 1.54 × 10−14 | 1.36 × 10−14 | 2.48 × 10−15 | 2.21 × 10−14 | 4.92 × 10−15 |
Table 4. Key feature quantities of excitation system.

| Feature Number | Feature Quantity | Feature Number | Feature Quantity |
| --- | --- | --- | --- |
| Q1 | Stator voltage | Q7 | PSS_P channel output |
| Q2 | Field current | Q8 | PSS_W channel output |
| Q3 | Trigger angle | Q9 | PSS acceleration power |
| Q4 | Active power | Q10 | PSS output |
| Q5 | Reactive power | Q11 | Stator voltage negative sequence percentile |
| Q6 | Stator current | Q12 | Stator voltage negative sequence percentile |
Table 5. Sample categories and sample set division.

| Category Label | Fault State | Number of Training Sets | Number of Test Sets |
| --- | --- | --- | --- |
| 1 | Normal | 140 | 60 |
| 2 | External fault | 140 | 60 |
| 3 | Demagnetization circuit fault | 140 | 60 |
| 4 | Power unit fault | 140 | 60 |
| 5 | Controller fault | 140 | 60 |
| 6 | Digital input anomaly | 140 | 60 |
| 7 | Measurement fault | 140 | 60 |
Table 6. Part of the sample data.

| Fault State | Stator Voltage | Field Current | Trigger Angle | Reactive Power | Var of Trigger Angle | Var of Reactive Power |
| --- | --- | --- | --- | --- | --- | --- |
| Normal | 1.022 | 1.000 | 0.805 | 1.001 | 0.000 | 0.000 |
| Normal | 0.997 | 0.963 | 1.069 | 0.948 | 0.041 | 0.009 |
| External fault | 0.343 | 1.750 | 0.017 | 0.666 | 0.105 | 0.307 |
| External fault | 0.377 | 1.744 | 0.017 | 0.560 | 0.102 | 0.309 |
| Demagnetization circuit fault | 0.980 | 0.052 | 0.301 | 0.603 | 0.120 | 0.108 |
| Demagnetization circuit fault | 0.960 | 0.082 | 0.316 | 0.489 | 0.116 | 0.182 |
| Power unit fault | 0.947 | 0.819 | 0.645 | 0.196 | 0.000 | 0.049 |
| Power unit fault | 0.945 | 0.817 | 0.644 | 0.169 | 0.001 | 0.048 |
| Controller fault | 1.006 | 1.073 | 0.979 | 1.169 | 0.060 | 0.025 |
| Controller fault | 1.003 | 1.016 | 1.055 | 1.093 | 0.047 | 0.005 |
| Digital input anomaly | 1.050 | 1.138 | 1.007 | 1.259 | 0.001 | 0.018 |
| Digital input anomaly | 1.058 | 1.224 | 0.997 | 1.377 | 0.001 | 0.018 |
| Measurement fault | 0.033 | 1.868 | 0.020 | 1.000 | 0.000 | 0.000 |
| Measurement fault | 1.246 | 0.008 | 0.934 | 1.405 | 0.002 | 0.103 |
Table 7. Diagnostic results of the AGWO-FNN with different improvement strategies.

| Diagnostic Model | Mean of Acc (%) | Variance of Acc |
| --- | --- | --- |
| GWO-FNN | 93.60 | 1.70 |
| AGWO1-FNN (λ1 = 1) | 93.65 | 1.69 |
| AGWO1-FNN (λ1 = 1.5) | 94.62 | 1.42 |
| AGWO1-FNN (λ1 = 2) | 94.36 | 1.75 |
| AGWO1-FNN (λ1 = 3) | 93.81 | 1.69 |
| AGWO2-FNN | 95.48 | 1.78 |
| AGWO-FNN | 96.10 | 1.41 |
Table 8. Parameter settings for different optimization algorithms.

| Optimization Algorithm | Parameter Settings |
| --- | --- |
| PSO | acceleration factors c1 = 2, c2 = 2; inertia weight w = 0.5 |
| GWO | convergence factor a decreases linearly from 2 to 0 |
| AGWO | adjustment coefficients λ1 = 1.5, λ2 = 0.2 |
| WOA | convergence factor a decreases linearly from 2 to 0; b = 1 |
| MPA | p = 0.5, FADs = 0.2 |