*Article* **A Dual-Optimization Fault Diagnosis Method for Rolling Bearings Based on Hierarchical Slope Entropy and SVM Synergized with Shark Optimization Algorithm**

**Yuxing Li 1,2,\* , Bingzhao Tang <sup>1</sup> , Bo Huang <sup>1</sup> and Xiaohui Xue <sup>1</sup>**


**Abstract:** Slope entropy (SlopEn) has been widely applied in fault diagnosis and has exhibited excellent performance, but it suffers from the problem of threshold selection. To further enhance the identification capability of SlopEn in fault diagnosis, the concept of hierarchy is introduced on the basis of SlopEn, and a new complexity feature, namely hierarchical slope entropy (HSlopEn), is proposed. Meanwhile, to address the threshold selection problem of HSlopEn and the parameter selection problem of the support vector machine (SVM), the white shark optimizer (WSO) is applied to optimize both HSlopEn and the SVM, and WSO-HSlopEn and WSO-SVM are proposed, respectively. Then, a dual-optimization fault diagnosis method for rolling bearings based on WSO-HSlopEn and WSO-SVM is put forward. We conducted experiments on measured signals in single- and multi-feature scenarios, and the experimental results demonstrated that, whether single-feature or multi-feature, the WSO-HSlopEn and WSO-SVM fault diagnosis method has the highest recognition rate compared to other hierarchical entropies; moreover, under multi-features, the recognition rates are all higher than 97.5%, and the more features we select, the better the recognition effect. When five nodes are selected, the highest recognition rate reaches 100%.

**Keywords:** fault diagnosis; hierarchical slope entropy; white shark optimizer; optimized support vector machine; bearing signals

#### **1. Introduction**

Rolling bearings, as a key component in rotating machinery, serve a very significant role in modern industry. However, because of the increasingly sophisticated and complex structure of bearings and their common use in harsh working environments, rolling bearings are very prone to failures, which can lead to economic losses and even endanger personal safety [1–3]. Therefore, aiming to ensure the normal work of rotating machinery and reduce maintenance costs, it is of great importance to carry out fault diagnoses of rolling bearings [4–6].

Since bearing vibration signals contain rich state information about the bearing during operation, vibration analysis methods are broadly applied to rolling bearing fault diagnosis [7,8]. In general, such a method mainly consists of two steps: feature extraction and fault classification, in which valid feature extraction is crucial for accurate fault diagnosis. As the bearing vibration signal has nonlinear dynamic characteristics, traditional feature extraction methods based on the Fourier transform and statistical analysis only characterize features in the time domain or frequency domain; they cannot detect potential faults through changes in the complexity of the system and therefore cannot achieve effective and accurate extraction of fault features [9,10].

**Citation:** Li, Y.; Tang, B.; Huang, B.; Xue, X. A Dual-Optimization Fault Diagnosis Method for Rolling Bearings Based on Hierarchical Slope Entropy and SVM Synergized with Shark Optimization Algorithm. *Sensors* **2023**, *23*, 5630. https://doi.org/10.3390/s23125630

Academic Editor: Jongmyon Kim

Received: 2 June 2023 Revised: 11 June 2023 Accepted: 13 June 2023 Published: 16 June 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

In recent years, nonlinear dynamic methods, such as sample entropy (SE) [11], permutation entropy (PE) [12], and dispersion entropy (DE) [13], have been widely used in the feature extraction of bearing signals and have presented superior performance. Han et al. used SE to extract bearing fault feature information effectively [14], but the SE calculation is complicated and not suitable for real-time monitoring. As PE has the strengths of fast calculation and good stability, Xue et al. proposed a bearing fault diagnosis method based on PE and further improved the effectiveness of fault diagnosis [15]. Since PE does not consider amplitude information, Dhandapani et al. applied DE to the feature extraction of rolling bearing faults and thereby accounted for the amplitude information of bearing signals [16]. Unlike the above entropies, slope entropy (SlopEn) is a new entropy estimator based on symbolic patterns and magnitude information [17] and has been applied many times in the underwater acoustic and medical fields [18–21]. In 2022, SlopEn was introduced into the field of bearing fault diagnosis for the first time, and experimental results showed that, compared with PE and DE, SlopEn could better extract fault information [22]. However, all the above-mentioned entropy-based bearing fault diagnosis methods suffer from two defects: (i) they extract only the fault information of the low-frequency component of the bearing signal, and (ii) SlopEn has the problem of threshold selection, and the thresholds usually need to be optimized using an optimization algorithm.

Aiming to extract bearing fault information more comprehensively, some scholars have proposed fault diagnosis methods based on hierarchical entropy [23–25]. The authors of [26] proposed the concept of hierarchical permutation entropy (HPE) and employed it successfully for fault diagnosis. Moreover, Ref. [27] used hierarchical dispersion entropy (HDE) to extract the fault information in both high- and low-frequency components. Diagnosis methods based on hierarchical entropy have proved that they can obtain diagnosis-related information over the whole frequency band and have strong noise resistance and stability; however, no scholars have yet introduced the concept of hierarchy to SlopEn or used optimization algorithms to optimize its thresholds.

After feature extraction, the next step is fault classification. Commonly used fault classification methods mainly include k-nearest neighbors (KNN) [28], random forest (RF) [29], and the support vector machine (SVM) [30]. The SVM has been widely used in fault diagnosis because of its suitability for small-sample classification and its simple structure [31]. Yet, since the penalty factor and kernel function parameters of SVM models have an impact on fault diagnosis, some existing optimization algorithms have been used to optimize the parameters of SVMs and improve the performance of fault classification, such as the genetic algorithm (GA) [32], particle swarm optimization (PSO) [33], and the whale optimization algorithm (WOA) [34]. Compared to these common optimization algorithms, the white shark optimizer (WSO) is a new meta-heuristic optimization algorithm based on deep-sea foraging by great white sharks, proposed in 2022 for solving optimization problems on continuous search spaces [35]; the results of benchmark function tests show that the WSO outperforms common optimization algorithms, and it has not yet been applied to optimize SVMs.

Based on the analysis above, a dual-optimization fault diagnosis method for rolling bearings is put forward, and the main novelties and contributions of this paper are presented as follows:


(3) Targeting the application of bearing fault diagnosis under different operating conditions, this paper proposes a dual-optimization fault diagnosis method for rolling bearings based on HSlopEn and an SVM synergized with the WSO.

The remaining parts of this paper are structured as follows. Section 2 presents the basic concepts of the algorithms. Section 3 introduces the steps of the proposed fault diagnosis method. Section 4 carries out the single-feature and multi-feature extraction experiments for bearing signals, and Section 5 summarizes the conclusions of this study.

#### **2. Methodology**

#### *2.1. Slope Entropy*

Slope entropy (SlopEn) is an algorithm proposed in 2019 to calculate the complexity of time series based on symbolic patterns and magnitude information. The main calculation steps are listed below:

(1) For a given time series *X* = {*x*1, *x*2, · · · , *x<sup>N</sup>*} of length *N*, subsequences of length *m* are extracted according to the embedding dimension *m* and the time delay.

(2) The slope between each pair of adjacent elements in every subsequence is computed and mapped to one of five symbols according to the two thresholds *γ* and *δ*, as illustrated in Figure 1.

(3) The relative frequency *P<sup>i</sup>* of each symbolic pattern is counted, and SlopEn is obtained as the Shannon entropy of these frequencies, as given in Equation (1).


**Figure 1.** The symbol allocation of SlopEn.


$$SlopEn(X, m, \gamma, \delta) = -\sum\_{i=1}^{r} P\_i \ln P\_i \tag{1}$$
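To make the computation concrete, the steps above can be sketched in Python. This is a minimal illustration assuming the standard five-symbol slope allocation (symbols ±2, ±1, 0 split by the thresholds *γ* and *δ*) and a time delay of 1; the function and variable names are ours, not the authors' exact implementation.

```python
import math
from collections import Counter

def slope_entropy(x, m=3, gamma=0.1, delta=0.001):
    """Minimal SlopEn sketch: symbolize the slope between consecutive
    samples of every length-m subsequence, then take the Shannon
    entropy of the symbolic-pattern frequencies."""
    def symbol(d):
        # Five-symbol allocation controlled by the thresholds (cf. Figure 1).
        if d > gamma:
            return 2        # steep positive slope
        if d > delta:
            return 1        # mild positive slope
        if d >= -delta:
            return 0        # approximately flat
        if d >= -gamma:
            return -1       # mild negative slope
        return -2           # steep negative slope

    patterns = [
        tuple(symbol(x[i + j + 1] - x[i + j]) for j in range(m - 1))
        for i in range(len(x) - m + 1)
    ]
    n = len(patterns)
    return -sum((c / n) * math.log(c / n) for c in Counter(patterns).values())

# A constant series has a single pattern, hence zero entropy; a square
# wave alternates between two equally likely patterns, giving ln(2).
print(slope_entropy([0.0] * 64), slope_entropy([0.0, 1.0] * 64))
```

Because the slopes are symbolized rather than ranked, the thresholds *γ* and *δ* directly control how many distinct patterns arise, which is why their selection matters so much in what follows.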

#### *2.2. Hierarchical Slope Entropy*

Since SlopEn only considers the low-frequency components of the time series, aiming to describe the time series more comprehensively, on the basis of SlopEn and combined with the concept of hierarchy, this paper proposes a new complexity feature, namely hierarchical slope entropy (HSlopEn). The specific process of HSlopEn is as follows:

(1) First, given a time series *X* = {*x*(*i*), *i* = 0, 1, · · · , *N* − 1} of length *N* = 2*<sup>n</sup>*, define an average operator *Q*0(*x*) and a difference operator *Q*1(*x*), which can be expressed as

$$Q\_0(x) = \frac{x(2j) + x(2j+1)}{2}, \; j = 0, 1, 2, \cdots, 2^{n-1} - 1 \tag{2}$$

$$Q\_1(x) = \frac{x(2j) - x(2j+1)}{2}, \; j = 0, 1, 2, \cdots, 2^{n-1} - 1 \tag{3}$$

where the *Q*0(*x*) and *Q*1(*x*) operators are the low-frequency part and high-frequency part, respectively, of the original given time series after hierarchical decomposition and *n* is a positive integer.

(2) The operator *Q<sup>j</sup>* (*j* = 0 or 1) in matrix form is defined as

$$Q\_j = \begin{bmatrix} \frac{1}{2} & \left(\frac{-1}{2}\right)^j & 0 & 0 & \cdots & 0 & 0\\ 0 & 0 & \frac{1}{2} & \left(\frac{-1}{2}\right)^j & \cdots & 0 & 0\\ 0 & 0 & 0 & 0 & \cdots & \frac{1}{2} & \left(\frac{-1}{2}\right)^j \end{bmatrix}\_{2^{n-1}\times 2^n} \tag{4}$$

(3) The *l*-dimension vectors [*u*1, *u*2, . . . , *u<sup>l</sup>* ] ∈ {0, 1} (*l* ∈ **N**) are constructed, and an integer *e* can be expressed as

$$e = \sum\_{j=1}^{l} u\_j 2^{l-j} \tag{5}$$

where, for each non-negative integer *e*, there is a unique *l*-dimension vector [*u*1, *u*2, . . . , *u<sup>l</sup>* ] ∈ {0, 1}, and *e* represents the sequence number of the node at each layer, where 0 ≤ *e* ≤ 2*<sup>l</sup>* − 1.

(4) The hierarchical decomposition of a given time series *X* yields a hierarchical component corresponding to the node *e* at the *K*th level, defined as

$$X\_{K,e} = Q\_{u\_l} \cdot Q\_{u\_{l-1}} \cdot \dots \cdot Q\_{u\_1} \cdot X \tag{6}$$

(5) By calculating the SlopEn of nodes on different layers, the HSlopEn can be expressed as

$$HSlopEn(X, m, K, \gamma, \delta) = SlopEn(X\_{K,e}, m, \gamma, \delta) \tag{7}$$

where *K* is the number of layers of decomposition, *γ* and *δ* are the two thresholds of SlopEn, and *m* is the embedding dimension.

As displayed in Figure 2, the hierarchical decomposition structure diagram when *K* = 3 is shown. SlopEn is calculated on each node after the hierarchical decomposition.

In Figure 2, *X* indicates the original time series, *x*1,1 is the first node of the first layer, *x*2,1 is the first node of the second layer, and so on.
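For illustration, the recursive application of the operators in Equations (2) and (3) can be sketched as follows; the function name and the list-of-layers representation are our own choices, not from the paper:

```python
import numpy as np

def hierarchical_components(x, K=3):
    """Hierarchical decomposition: at each layer, apply the averaging
    operator Q0 (low-frequency part) and the difference operator Q1
    (high-frequency part) to every node of the previous layer."""
    x = np.asarray(x, dtype=float)
    layers = [[x]]                       # layer 0 is the original series
    for _ in range(K):
        nodes = []
        for node in layers[-1]:
            nodes.append((node[0::2] + node[1::2]) / 2.0)  # Q0: average
            nodes.append((node[0::2] - node[1::2]) / 2.0)  # Q1: difference
        layers.append(nodes)
    return layers                        # layers[k] holds the 2**k nodes of layer k

layers = hierarchical_components(np.arange(8.0), K=3)
print(len(layers[3]))                    # 8 nodes at layer 3, as in Figure 2
```

The HSlopEn of node *e* at layer *K* is then simply the SlopEn of `layers[K][e]`, matching Equation (7).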


**Figure 2.** Hierarchical decomposition structure diagram when *K* = 3.

#### *2.3. Analysis of the Parameters for HSlopEn*

The main parameters of HSlopEn include the number of decomposition layers *K*, the embedding dimension *m*, the two threshold parameters *γ* and *δ*, and the time delay *d*. First, the number of decomposition layers *K* determines the number of nodes in the hierarchical decomposition. When *K* is too large, the number of decomposed nodes is too large, resulting in a large number of SlopEn calculations over all nodes; when *K* is too small, there are too few decomposed nodes and insufficient frequency bands for the given time series. Referring to other references, the default number of decomposition layers *K* is 3 in this paper. Then, the embedding dimension *m* is used to extract the subsequences of a given time series. If it is too small, it is difficult to determine the dynamic changes of the time series; if it is too large, it is difficult to capture the subtle changes in the time series. After that, the two threshold parameters *γ* and *δ* are used to divide the symbol pattern of a given subsequence, which affects the change in entropy value. Lastly, the default time delay *d* is 1, as important frequency information may be lost if *d* > 1. The effect of the embedding dimension and thresholds on the performance of HSlopEn is investigated below by analyzing noisy signals.

To investigate the effect of the embedding dimension on the entropy value of hierarchical slope entropy, 50 sets of white Gaussian noise (WGN) of signal length 2048 are used, with the embedding dimension *m* varying from 2 to 5 and the two threshold parameters *γ* and *δ* defaulting to 0.1 and 0.001, respectively. Figure 3 shows the mean and standard deviation (SD) of the HSlopEn values for different embedding dimensions in every node.

**Figure 3.** The mean and standard deviation (SD) of HSlopEn values for different embedding dimensions in every node.

As shown in Figure 3, as the embedding dimension *m* becomes larger, the entropy value of HSlopEn also becomes larger; however, the entropy values of the nodes remain close to one another across different embedding dimensions, and the difference between the mean and SD is small. This indicates that a change in the embedding dimension affects the size of the entropy value, but the stability of HSlopEn hardly changes. The embedding dimension *m* is set to 3 in this paper.

In addition, to further study the effect of the thresholds *γ* and *δ* on the entropy of HSlopEn, 50 independent pink noise (PN) and WGN signals are selected, where each noise signal has a length of 2048 points and the embedding dimension *m* is 3. Three sets of thresholds (*γ*, *δ*) for HSlopEn are manually set, namely (0.1, 0.001), (0.3, 0.1), and (0.8, 0.3), and the mean and standard deviation (SD) of the HSlopEn values for the three sets of thresholds in every node are displayed in Figure 4.


**Figure 4.** The mean and SD of HSlopEn values for three sets of thresholds in every node.

It can be seen in Figure 4 that, as the thresholds change, the entropy values of the two types of noise signals change; at the same time, the ability to discriminate between the noise signals changes constantly, so the thresholds have a significant effect on the entropy of HSlopEn. The WSO is used in this paper to optimize the thresholds to avoid choosing values based on artificial experience and to further improve the fault diagnosis.

#### *2.4. WSO-HSlopEn and WSO-SVM*

Following the principle of the HSlopEn algorithm, the two threshold parameters *γ* and *δ* of HSlopEn are used to divide the symbol pattern of a given subsequence of the time series. Thus, the two threshold parameters have a great influence on the HSlopEn value. At the same time, the classification effect of the support vector machine (SVM) mainly depends on the selection of the penalty factor (*C*) and kernel function parameter (*g*), and it is generally difficult to choose these values based on manual experience. Hence, the selection of an appropriate penalty factor and kernel function parameter is also particularly important for the classification and recognition accuracy of the SVM.

To enhance the fault diagnosis performance, in this paper, taking the recognition rate as the fitness function, the white shark optimizer (WSO) is used to optimize the parameters of HSlopEn and the SVM, and WSO-HSlopEn and WSO-SVM are proposed, respectively; the WSO is a new meta-heuristic optimization algorithm based on deep-sea foraging by great white sharks, proposed in 2022 for solving optimization problems on continuous search spaces. The main process of optimizing the parameters of HSlopEn and the SVM is shown in Figure 5, and the specific process is as follows:

(1) Set the initial parameter ranges of HSlopEn (*γ*, *δ*) and the SVM (*C*, *g*).

(2) Initialize the WSO parameters, such as the population size, number of iterations, and positions and speeds of the white sharks.

(3) Calculate the fitness function, and update the white sharks' positions and speeds.

(4) Evaluate the fitness function, and update the optimal white shark position.

(5) Update the position and speed of each white shark.

(6) Judge whether the current iteration number reaches the maximum iteration number. If not, return to update the speed and position of the white sharks and repeat the above steps; otherwise, output the best-optimized parameters (*γ*, *δ*) and (*C*, *g*).

**Figure 5.** The main process of optimizing the parameters of the HSlopEn and SVM.
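As a rough illustration of this loop, the sketch below implements a simplified population-based search, not the actual WSO update equations (which model wavy motion and schooling behavior); the fitness function, bounds, and move rule are all illustrative assumptions of ours.

```python
import random

def optimize_parameters(fitness, bounds, pop_size=20, n_iter=50, seed=0):
    """Simplified population-based search: candidates repeatedly move
    toward the best position found so far, with random perturbation.
    `bounds` holds (low, high) ranges, e.g. for (gamma, delta, C, g)."""
    rng = random.Random(seed)
    # Set parameter ranges and initialize the population.
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best, best_fit = list(pop[0]), float("-inf")
    for _ in range(n_iter):
        # Evaluate the fitness and keep the best position seen so far.
        for cand in pop:
            f = fitness(cand)
            if f > best_fit:
                best, best_fit = list(cand), f
        # Move each candidate toward the best, with clamped noise.
        for d, (lo, hi) in enumerate(bounds):
            for cand in pop:
                step = rng.uniform(0.0, 1.0) * (best[d] - cand[d])
                noise = rng.gauss(0.0, 0.05 * (hi - lo))
                cand[d] = min(hi, max(lo, cand[d] + step + noise))
    # Iteration budget exhausted: return the best parameters found.
    return best, best_fit

# A toy fitness with a known optimum at 0.3 stands in for the recognition rate.
params, fit = optimize_parameters(lambda p: -(p[0] - 0.3) ** 2, [(0.0, 1.0)])
print(params, fit)
```

In the actual method, `fitness` would train the SVM on features extracted with the candidate (*γ*, *δ*) and return the recognition rate on the validation samples.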


#### **3. The Proposed Method for Fault Diagnosis of Rolling Bearing**

Combining the concept of hierarchical structure, the new complexity feature HSlopEn is proposed; the parameters of both HSlopEn and the SVM are optimized using the WSO algorithm, yielding WSO-HSlopEn and WSO-SVM, respectively. Then, a dual-optimization fault diagnosis method for rolling bearings based on WSO-HSlopEn and WSO-SVM is proposed. Figure 6 presents the flowchart of the proposed fault diagnosis method, and the method mainly includes the following steps:

(1) The different bearing signals are input. In this paper, each type of bearing signal has 100 samples with 1024 data points.

(2) The WSO algorithm is applied to optimize the parameters of HSlopEn (*γ*, *δ*) and the SVM (*C*, *g*) by taking the final recognition rate as the fitness function, and the optimized parameters are obtained. At the same time, other optimization algorithms, including SO, the marine predator algorithm (MPA), and the sparrow search algorithm (SSA), are used for comparison.

(3) Different types of bearing signals are decomposed into several layers, and the nodes are obtained. In this paper, bearing signals are decomposed into three layers.

(4) The nodes of WSO-HSlopEn are calculated, and then single-feature and multi-feature extraction experiments for bearing signals are carried out. Meanwhile, comparisons with some classical hierarchical entropies, such as HFE, HPE, HSE, and HRDE, are conducted.

(5) WSO-SVM is applied to classify the bearing signals, and the recognition results are output. In this paper, for each type, 25 sample signals are selected as training samples and 75 sample signals as test samples.

**Figure 6.** The flowchart of the proposed dual-optimization fault diagnosis method for rolling bearings, based on WSO-HSlopEn and WSO-SVM.
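The classification stage of the pipeline can be illustrated with a generic SVM classifier. The sketch below uses scikit-learn with random stand-in features — in the real method, the inputs would be the WSO-HSlopEn node values, and *C* and *g* would come from the WSO rather than being fixed; all names here are ours.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_classes, per_class, n_features = 10, 100, 5  # 10 bearing conditions
# Random stand-in features: one cluster per condition.
X = np.vstack([rng.normal(loc=k, scale=0.3, size=(per_class, n_features))
               for k in range(n_classes)])
y = np.repeat(np.arange(n_classes), per_class)

# 25 training and 75 test samples per class, as in step (5).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.25,
                                          stratify=y, random_state=0)
clf = SVC(C=10.0, gamma=0.5, kernel="rbf")  # (C, g) placeholders for WSO output
clf.fit(X_tr, y_tr)
print(f"recognition rate: {100 * clf.score(X_te, y_te):.1f}%")
```

The recognition rate reported by `score` is exactly the quantity the WSO uses as its fitness function during the dual optimization.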


#### **4. Experiments and Results**

In this chapter, two comparative experiments are implemented to examine the effectiveness of the proposed method in fault diagnosis: (1) In optimizing both HSlopEn and SVM parameters using the WSO, we compare different optimization algorithms, including SSA, MPA, and SO. (2) In extracting the WSO-HSlopEn of nodes, we compare classical hierarchical entropy metrics, including HPE, HSE, HFE, and HRDE.

#### *4.1. Fault Diagnosis of Rolling Bearing Signal*


The dataset used in this section was derived from the Bearing Data Center of Case Western Reserve University [36], which is an internationally recognized standard dataset for the fault diagnosis of rolling bearings. The schematic of the test rig (Cleveland, USA) is shown in Figure 7.



**Figure 7.** The schematic of the test rig.

As shown in Figure 7, the test rig consisted of an induction motor, a drive-end bearing, a self-aligning coupling, a dynamometer, and an accelerometer. The accelerometer was installed on the base of the motor and was used to detect the vibration acceleration of the faulty bearing at a sampling frequency of 12 kHz. The dataset divides the fault data into four categories: normal data (NOR), ball faults (BFs), outer race faults (ORFs), and inner race faults (IRFs). Among them, BFs, ORFs, and IRFs were faults simulated by introducing single-point damage with an electric spark. The damage diameters were 0.007, 0.014, and 0.021 inches. The processed faulty bearings were then reloaded into the test motor, and the vibration acceleration signal data were recorded under load working conditions of 0, 1, 2, and 3 horsepower.

In this section, bearing signals with ten conditions were collected from the drive-end bearings, including rolling bearings in normal condition and those with damage to the inner race, the outer race, and the ball element. Bearings with various damage diameters were considered under a speed of 1730 rpm with a load of 3 horsepower. Table 1 illustrates the fault diagnosis sample collection of bearing signals. Each fault signal was divided into three types according to the fault diameter. We sampled from point 1001, and each condition had 100 samples with 1024 sampling points. Time-domain waveforms of the bearing signals in each state are displayed in Figure 8.
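The segmentation described above can be sketched as follows, assuming consecutive non-overlapping windows (the paper states the starting point and sample size but not the window spacing); the function and variable names are ours.

```python
import numpy as np

def make_samples(signal, start=1000, n_samples=100, length=1024):
    """Slice one vibration record into fixed-length samples, starting
    from point 1001 (zero-based index 1000), non-overlapping."""
    return np.array([signal[start + i * length: start + (i + 1) * length]
                     for i in range(n_samples)])

record = np.arange(1000 + 100 * 1024)   # synthetic stand-in for one record
samples = make_samples(record)
print(samples.shape)                     # (100, 1024)
```

Applying this to each of the ten conditions yields the 10 × 100 sample collection summarized in Table 1.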


**Table 1.** Fault diagnosis sample collection of bearing signals.

| Condition | Damage Diameter (inch) | Label |
|---|---|---|
| NOR | — | 1 |
| IRF1 | 0.007 | 2 |
| IRF2 | 0.014 | 3 |
| IRF3 | 0.021 | 4 |
| BF1 | 0.007 | 5 |
| BF2 | 0.014 | 6 |
| BF3 | 0.021 | 7 |
| ORF1 | 0.007 | 8 |
| ORF2 | 0.014 | 9 |
| ORF3 | 0.021 | 10 |

**Figure 8.** Time-domain waveforms of the bearing signals in each state.

#### *4.2. Comparison of Different Optimization Algorithms*

Designed to verify the performance advantages of the WSO in optimizing HSlopEn and the SVM, this section introduces different optimization algorithms to optimize the parameters of HSlopEn and the SVM and compares the recognition rates of single-feature and multi-feature extraction with those of the other optimization algorithms [37–39]. In this experiment, the 10 different bearing signal conditions were each sampled starting from the 1001st point, and 100 samples were selected; each sample had 1024 data points. First, the parameters of HSlopEn were set as follows: hierarchical layer *K* = 3 and embedding dimension *m* = 3, while the threshold parameters *γ* and *δ* were adaptively determined using the different optimization algorithms. HSlopEn with optimized parameters was then extracted from the bearing signals. Next, the sample set was divided into a training set and a test set, and the selected single feature or multi-features were input to optimize the SVM; the penalty factor and kernel function parameter of the SVM were also adaptively determined using the WSO algorithm. Figure 9 presents the fitness iteration curves of the different optimization algorithms used to optimize HSlopEn and the SVM in the case of extracting five nodes.
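For readers unfamiliar with the base feature, the role of the threshold parameters *γ* and *δ* can be sketched with a minimal NumPy implementation of SlopEn: consecutive slopes are symbolized into a five-symbol alphabet using the two thresholds, and the Shannon entropy of the resulting symbol patterns is returned. This is an illustrative sketch of the published SlopEn definition, not the authors' implementation; the function name and default values are assumptions.

```python
import numpy as np
from collections import Counter

def slope_entropy(x, m=3, gamma=1.0, delta=0.001):
    """Slope entropy (SlopEn) sketch: symbolize consecutive slopes with two
    thresholds and take the Shannon entropy of the symbol patterns."""
    x = np.asarray(x, dtype=float)
    diffs = np.diff(x)  # slopes between consecutive samples
    # 5-symbol alphabet: |slope| <= delta -> 0; delta < |slope| <= gamma -> +/-1;
    # |slope| > gamma -> +/-2
    symbols = np.where(np.abs(diffs) <= delta, 0,
              np.where(diffs > gamma, 2,
              np.where(diffs > 0, 1,
              np.where(diffs < -gamma, -2, -1))))
    # each embedding vector of length m contributes a pattern of m - 1 slopes
    patterns = [tuple(symbols[i:i + m - 1]) for i in range(len(symbols) - m + 2)]
    counts = np.array(list(Counter(patterns).values()), dtype=float)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)))  # Shannon entropy in nats
```

A constant signal produces a single repeated pattern and hence zero entropy, while irregular signals yield larger values; the optimizers in this section search over (*γ*, *δ*) to make this feature maximally discriminative.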


**Figure 9.** The fitness iteration curves of different optimization algorithms.


It can be seen in Figure 9 that, when extracting five nodes, the highest recognition rate for these ten types of bearing signals reached 100% using the WSO to optimize HSlopEn. At the same time, the convergence speed of the WSO was faster than that of the other optimization algorithms; its early convergence is quick, and its fitness curve eventually converged to a larger value. To further demonstrate the significant advantages of the WSO, we calculated the recognition rate of the bearing signals using the different optimization algorithms to optimize HSlopEn and the SVM when extracting a single feature and multi-features. The recognition rates of HSlopEn for all single-feature nodes are shown in Tables 2 and 3, and the highest recognition rates of HSlopEn for multi-features for the four types of optimization algorithms are shown in Table 4.

**Table 2.** The recognition rate of HSlopEn for each single-feature node (nodes 1–7).

| Optimization Algorithm | Node 1 | Node 2 | Node 3 | Node 4 | Node 5 | Node 6 | Node 7 |
|---|---|---|---|---|---|---|---|
| WSO | 78.40 | 79.20 | 70.13 | 78.00 | 73.47 | 79.33 | 45.73 |
| SO | 77.47 | 74.80 | 67.87 | 76.32 | 72.27 | 75.43 | 43.28 |
| MPA | 74.00 | 75.87 | 62.93 | 71.20 | 66.93 | 76.53 | 40.40 |
| SSA | 74.67 | 73.07 | 68.53 | 71.87 | 70.67 | 75.60 | 45.07 |

**Table 3.** The recognition rate of HSlopEn for each single-feature node (nodes 8–14).

| Optimization Algorithm | Node 8 | Node 9 | Node 10 | Node 11 | Node 12 | Node 13 | Node 14 |
|---|---|---|---|---|---|---|---|
| WSO | 51.60 | 72.27 | 59.60 | 66.93 | 62.80 | 69.07 | 70.93 |
| SO | 48.37 | 70.26 | 58.23 | 65.27 | 60.80 | 68.27 | 69.73 |
| MPA | 39.87 | 70.80 | 56.53 | 60.13 | 56.13 | 67.87 | 65.47 |
| SSA | 50.40 | 69.73 | 57.73 | 63.87 | 60.93 | 68.42 | 69.57 |

**Table 4.** The highest recognition rate (%) of HSlopEn for multi-features for the four types of optimization algorithms.

| Optimization Algorithm | 2 Features | 3 Features | 4 Features | 5 Features |
|---|---|---|---|---|
| WSO | 97.87 | 99.60 | 99.87 | 100 |
| SO | 87.60 | 90.80 | 93.87 | 96.20 |
| MPA | 67.87 | 84.37 | 86.67 | 89.47 |
| SSA | 64.13 | 70.40 | 74.67 | 83.27 |
According to the recognition rates for the different types of bearing signals, the advantage of the WSO algorithm is obvious regardless of how many features are extracted. In the case of a single feature, the recognition rate of the WSO reaches 79.33% on node 6, much higher than that of the other optimization algorithms. When extracting multi-features, the recognition rate improves as the number of selected nodes increases, and when five features are selected, all samples are identified correctly; the highest recognition rates of the other optimization algorithms, SO, MPA, and SSA, are, respectively, 3.8%, 10.53%, and 17.73% lower than that of the WSO. These results demonstrate that using the WSO to optimize HSlopEn and the SVM is feasible; therefore, in this paper, the WSO is used to optimize the HSlopEn and SVM parameters.
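The dual-optimization loop can be sketched generically: a population of candidate parameter vectors (e.g., the thresholds *γ* and *δ*, or the SVM's penalty factor and kernel parameter) is iteratively moved toward the best solution found, with the recognition rate as the fitness. The snippet below is a deliberately simplified population-based search standing in for the WSO (the real WSO update rules are more elaborate); all names and step sizes are illustrative assumptions.

```python
import numpy as np

def optimize_params(fitness, bounds, n_agents=8, n_iter=20, seed=0):
    """Population-based search sketch standing in for the WSO.
    `fitness` maps a parameter vector to a score to maximize (here it would be
    the recognition rate obtained with candidate HSlopEn/SVM parameters);
    `bounds` is a list of (low, high) pairs, one per parameter."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(n_agents, len(lo)))
    best = max(pop, key=fitness)
    for _ in range(n_iter):
        # each agent takes a randomized step toward the current best (exploitation)
        # plus small Gaussian noise (exploration), then is clipped into bounds
        step = rng.uniform(0.0, 1.0, pop.shape) * (best - pop)
        noise = 0.05 * (hi - lo) * rng.standard_normal(pop.shape)
        pop = np.clip(pop + step + noise, lo, hi)
        cand = max(pop, key=fitness)
        if fitness(cand) > fitness(best):
            best = cand
    return best
```

In the paper's setting the fitness evaluation is expensive (feature extraction plus SVM training per candidate), which is why the convergence speed compared in Figure 9 matters in practice.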

#### *4.3. Comparison of Different Hierarchical Entropies*

Aiming to demonstrate the superiority of WSO-HSlopEn in fault diagnosis, we compared it to other classical hierarchical entropies, including HSE, HFE, HPE, and HRDE.

The single-feature approach was first used to extract the fault feature and compare it with HFE, HSE, HPE, and HRDE. The parameters of HSlopEn were as follows: hierarchical layer *K* = 3, embedding dimension *m* = 3, and time delay *d* = 1, while the threshold parameters *γ* and *δ* were adaptively determined using the WSO algorithm. For a fair comparison, the parameter settings of the other hierarchical entropies were the same as those in the HSlopEn method; among them, the similarity tolerances of HSE and HFE were set as *r* = 0.2, and the category number of HRDE was set as *c* = 3. The entropy distributions of the optimal node for the single-feature extraction of bearing signals are shown in Figure 10.
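The hierarchical structure behind the 14 nodes in Tables 2 and 3 can be sketched as follows: each layer splits every component into a low-frequency (averaging) and a high-frequency (difference) child of half the length, so *K* = 3 layers yield 2 + 4 + 8 = 14 components, and the entropy (here, SlopEn) is then computed on each node. This uses a common averaging/difference formulation of the hierarchical decomposition; the operators used by the authors may differ in normalization.

```python
import numpy as np

def hierarchical_nodes(x, k=3):
    """Hierarchical decomposition sketch: k layers of averaging/difference
    operators produce 2 + 4 + ... + 2**k components (14 nodes for k = 3)."""
    x = np.asarray(x, dtype=float)
    layer, nodes = [x], []
    for _ in range(k):
        nxt = []
        for comp in layer:
            comp = comp[: len(comp) // 2 * 2]        # truncate to even length
            low = (comp[0::2] + comp[1::2]) / 2.0    # low-frequency (averaging)
            high = (comp[0::2] - comp[1::2]) / 2.0   # high-frequency (difference)
            nxt += [low, high]
        nodes += nxt
        layer = nxt
    return nodes  # nodes 1 .. 2**(k+1) - 2, in layer order
```

For a 1024-point sample, the layer-1 nodes have 512 points and the layer-3 nodes have 128 points, which is why a sufficiently long sample is needed for stable entropy estimates on the deepest nodes.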





**Figure 10.** Entropy distribution of the optimal node for single-feature extraction of bearing signals.

Figure 10a shows that, in the entropy distribution of WSO-HSlopEn, there was no aliasing between the features of the three fault types NOR, ORF3, and IRF1 and the other fault types; ORF2 had only a few samples with entropy values close to those of BF2, while the entropy values of the samples of the other fault types overlap severely. In the single-feature extractions shown in Figure 10b–e, the aliasing of the WSO-HSlopEn distribution is not as serious as that of the other hierarchical entropies: the entropy values of several types of samples in the ten bearing signal conditions are quite different from those of the other types. Relatively speaking, the distance between the various types of samples in the entropy distribution of WSO-HSlopEn is relatively large.



After extracting WSO-HSlopEn as the fault feature of the bearing signals, the sample set was divided into a training set and a test set; the training set was input into the WSO-SVM to train the model, and the test set was then input into the trained model to complete the fault diagnosis. The Gaussian kernel was selected as the kernel function of the SVM, and the penalty factor and kernel function parameter of the SVM were adaptively determined by the WSO algorithm. The recognition rates of single features for the five types of hierarchical entropies are displayed in Tables 5 and 6.
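The Gaussian-kernel decision stage can be illustrated without an SVM library. The snippet below is a minimal Parzen-style stand-in for the RBF-SVM classifier (no margin maximization or penalty factor): a test sample receives the label whose training samples are most similar on average under the Gaussian kernel. It is a sketch for intuition only; in the paper both the penalty factor and the kernel parameter of the actual SVM are tuned by the WSO.

```python
import numpy as np

def rbf_kernel(A, B, g):
    """Gaussian (RBF) kernel matrix K[i, j] = exp(-g * ||A_i - B_j||^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-g * d2)

def kernel_classify(X_train, y_train, X_test, g=1.0):
    """Assign each test sample the class whose training samples have the
    highest mean RBF similarity to it (a Parzen-window decision rule)."""
    K = rbf_kernel(np.asarray(X_test, float), np.asarray(X_train, float), g)
    classes = np.unique(y_train)
    scores = np.stack([K[:, y_train == c].mean(axis=1) for c in classes], axis=1)
    return classes[np.argmax(scores, axis=1)]
```

As with the SVM's kernel parameter, the bandwidth `g` controls how quickly similarity decays with distance in feature space, which is exactly the kind of sensitivity that motivates tuning it with an optimizer rather than fixing it by hand.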

**Table 5.** Recognition rates of single features for the five types of hierarchical entropies (nodes 1–7).

| Entropy | Node 1 | Node 2 | Node 3 | Node 4 | Node 5 | Node 6 | Node 7 |
|---|---|---|---|---|---|---|---|
| WSO-HSlopEn | 78.40 | 79.20 | 70.13 | 78.00 | 73.47 | 79.33 | 45.73 |
| HFE | 57.07 | 55.06 | 37.73 | 55.33 | 42.67 | 59.73 | 32.53 |
| HPE | 43.73 | 52.53 | 24.80 | 40.93 | 38.13 | 46.53 | 18.80 |
| HSE | 39.07 | 47.60 | 23.87 | 36.27 | 21.07 | 46.67 | 18.40 |
| HRDE | 55.73 | 65.47 | 33.87 | 53.87 | 42.40 | 54.53 | 30.67 |

**Table 6.** Recognition rates of single features for the five types of hierarchical entropies (nodes 8–14).

| Entropy | Node 8 | Node 9 | Node 10 | Node 11 | Node 12 | Node 13 | Node 14 |
|---|---|---|---|---|---|---|---|
| WSO-HSlopEn | 51.60 | 72.27 | 59.60 | 66.93 | 62.80 | 69.07 | 70.93 |
| HFE | 27.07 | 43.73 | 36.40 | 38.93 | 36.40 | 45.73 | 40.27 |
| HPE | 14.80 | 29.60 | 22.27 | 29.07 | 20.80 | 32.27 | 26.40 |
| HSE | 20.40 | 28.67 | 20.40 | 21.60 | 20.53 | 37.33 | 25.87 |
| HRDE | 25.07 | 45.60 | 37.87 | 42.93 | 37.07 | 50.13 | 48.93 |
Tables 5 and 6 illustrate that, when using WSO-HSlopEn, the recognition rate of node 6 was the highest, which was 79.33%. Compared with other hierarchical entropies, under each node, the recognition rate based on WSO-HSlopEn was always the highest, which shows the effectiveness of WSO-HSlopEn as a fault diagnosis feature of bearing signals.

Through observation, when single-feature extraction is used to extract the fault feature, there is still overlap between the features of different conditions of the bearing signals. Furthermore, the recognition rate of the best node was low, and there were many misclassified samples based on single-feature extraction. Aiming to further improve the recognition rate of different conditions of the bearing signals, double features were used to extract the bearing signals. All parameters used in the experiments were the same as those listed in the single-feature extraction. The entropy distribution on the optimal node for double-feature extractions of bearing signals is shown in Figure 11, where the abscissa and ordinate are the entropy values of the two nodes, respectively. For example, in Figure 11a, the abscissa is the SlopEn of node 1, and the ordinate is the SlopEn of node 5.

**Figure 11.** Double-feature distributions of ten types of bearing signals.

As can be observed from Figure 11, in the case of double-feature extraction, the WSO-HSlopEn distribution of sample signals belonging to the same type is relatively concentrated compared to other hierarchical entropies; for the other four types of hierarchical entropies, the bearing signals within each type are more divergent, and the entropy values of different types of bearing signals are very close.



To further improve the recognition performance, triple features were used to extract bearing fault features on the various hierarchical entropies. The parameters for calculating the various hierarchical entropies were the same as those for double features. Figure 12 presents the triple-feature distributions of ten types of bearing signals for the different hierarchical entropies.

(**a**) WSO-HSlopEn; (**b**) HFE; (**c**) HPE; (**d**) HSE; (**e**) HRDE

**Figure 12.** Triple-feature distributions of ten types of bearing signals for different entropies.

It can be seen from Figure 12 that there is almost no overlap based on WSO-HSlopEn, although the feature distributions of the BF2 and IRF2 samples are relatively weakly clustered; for the other hierarchical entropies, the clustering of the sample feature distributions is very poor because their entropy distributions are similar. Nevertheless, the entropy distribution of WSO-HSlopEn is more dispersed, and the WSO-HSlopEn values of the different fault types are quite different, which effectively verifies the validity of WSO-HSlopEn as a feature extraction method for the ten types of bearing signals.

Next, WSO-SVM is used to construct a fault diagnosis model. The highest recognition rate is calculated for the five types of hierarchical entropies under multi-feature extraction, as shown in Table 7, where (1,5) indicates that the combination of nodes with the highest recognition rate for two features is node 1 and node 5, (1,5,6) indicates that the combination of nodes with the highest recognition rate for three features is node 1, node 5, and node 6, and so on.

**Table 7.** Highest recognition rate for the five types of hierarchical entropies for multi-features.

| Entropy | Parameter | 2 Nodes | 3 Nodes | 4 Nodes | 5 Nodes |
|---|---|---|---|---|---|
| WSO-HSlopEn | Highest recognition rate (%) | 97.87 | 99.60 | 99.87 | 100 |
| | Chosen nodes | (1,5) | (1,5,6) | (1,5,6,7) | (1,5,6,7,11) |
| HFE | Highest recognition rate (%) | 87.60 | 90.80 | 93.87 | 96.20 |
| | Chosen nodes | (2,6) | (2,5,13) | (1,3,4,6) | (2,5,6,12,14) |
| HPE | Highest recognition rate (%) | 67.87 | 84.37 | 86.67 | 89.47 |
| | Chosen nodes | (2,4) | (1,2,14) | (2,3,5,10) | (1,5,7,12,14) |
| HSE | Highest recognition rate (%) | 64.13 | 70.40 | 74.67 | 83.27 |
| | Chosen nodes | (2,5) | (1,2,6) | (1,2,3,6) | (2,3,7,12,14) |
| HRDE | Highest recognition rate (%) | 80.60 | 89.87 | 93.87 | 95.87 |
| | Chosen nodes | (1,2) | (1,2,3) | (1,2,4,6) | (2,5,6,11,12) |

Table 7 shows that, no matter how many features are extracted, the recognition rate for these ten types of bearing signals using WSO-HSlopEn is higher than that of the other hierarchical entropies, and the more features we select, the better the recognition effect. With multi-features, the recognition rates of WSO-HSlopEn are all higher than 97.5%, whereas the highest recognition rates of the other hierarchical entropies are all significantly below 97.5%. For WSO-HSlopEn, when five nodes are selected, that is, nodes (1,5,6,7,11), the highest recognition rate for these ten types of bearing signals reaches 100%, while the highest recognition rates of the other entropies are, respectively, 3.80%, 10.53%, 16.73%, and 4.13% lower than that of WSO-HSlopEn. Through the above comparisons, we can clearly see the significant advantages of the proposed method based on WSO-HSlopEn, whose recognition results in the fault diagnosis of rolling bearings are higher than those of the classic methods.

#### **5. Conclusions**

This paper puts forward a dual-optimization fault diagnosis method for rolling bearings based on WSO-HSlopEn and WSO-SVM. The effectiveness of the proposed methods is verified by comparing them with the classical methods. The main innovations and conclusions are as follows:


The proposed WSO-HSlopEn and WSO-SVM solve the problem of dependent parameter settings for SlopEn and the SVM, respectively, and their superiority has been confirmed in fault diagnosis. Therefore, WSO-HSlopEn and WSO-SVM are expected to be applied to other fields in future work, such as underwater acoustic signal processing and medical signal classification.

**Author Contributions:** Conceptualization, Y.L.; Methodology, Y.L.; Validation, B.T.; Data curation, B.H.; Writing—original draft, B.T.; Visualization, B.T., B.H. and X.X.; Supervision, Y.L. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by [Natural Science Foundation of Shaanxi Province] grant number [2022JM-337], [Xi'an University of Technology Excellent Seed Fund] grant number [252082220].

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** The data used to support the findings of this study are available from the corresponding author upon request.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
