*Article* **Test Strategy Optimization Based on Soft Sensing and Ensemble Belief Measurement**

**Wenjuan Mei <sup>1</sup>, Zhen Liu <sup>1,</sup>\*, Lei Tang <sup>2</sup> and Yuanzhang Su <sup>1,3</sup>**


**Abstract:** Owing to short production cycles and the rapid development of design technology, traditional prognostic and health management (PHM) approaches have become impractical and fail to match the requirements of systems with structural and functional complexity. Among all PHM design tasks, testability design and maintainability design face critical difficulties. First, testability design requires substantial labor and knowledge preparation, and wastes the information contained in sensor recordings. Second, maintainability design suffers from the adverse influence of improper testability design. We propose a test strategy optimization based on soft sensing and ensemble belief measurement to overcome these problems. Instead of a serial PHM design, the proposed method constructs a closed loop between testability and maintenance design to generate an adaptive fault diagnostic tree with soft-sensor nodes. The generated diagnostic tree ensures high efficiency and flexibility by taking advantage of the extreme learning machine (ELM) and affinity propagation (AP). The experimental results show that our method achieves the best performance among state-of-the-art methods. Additionally, the proposed method enlarges diagnostic flexibility and saves considerable human labor in testability design.

**Keywords:** prognostic and health management; extreme learning machine; soft sensors

#### **1. Introduction**

With the increasing use of electric devices, prognostic and health management engineering (PHM engineering) has played an extremely significant role in product lifetime management over recent decades [1]. PHM engineering ensures the healthy lifetime operation of electric devices and provides appropriate resource assignment for product management [2]. In recent years, the production cycle has shortened because circuit technology and system design have developed rapidly [3–6]. System structures have become more complicated, more integrated, more intelligent, and highly intensive [7]. Additionally, the number of potential test procedures and fault cases grows exponentially. As a result, PHM engineering faces active demand and new challenges. Practical conditional maintenance (CM) solutions have become difficult to generate for modern system applications. Moreover, CM must be flexible enough to match the structural and functional complexity of the system. Hence, efficient PHM engineering solutions for modern devices have become an urgent problem for academic researchers and industrial engineers.

Under the CM design, testability design and maintainability design are two essential projects to determine supportability, enhance reliability, and guarantee safety during lifetime device management [8]. Testability design analyzes the system's internal structure, selects test projects and arranges test procedures, estimates the system operation condition, and locates failure modes. The key difficulty in testability design is properly balancing the system's high structural complexity against solution efficiency. Classical testability design approaches use dynamic programming (DP) to find optimal solutions [9,10]. However, classic methods suffer from high time complexity when either the number of test projects or the number of failures is larger than 12.

**Citation:** Mei, W.; Liu, Z.; Tang, L.; Su, Y. Test Strategy Optimization Based on Soft Sensing and Ensemble Belief Measurement. *Sensors* **2022**, *22*, 2138. https://doi.org/10.3390/s22062138

Academic Editor: Simon Tomažič

Received: 12 February 2022; Accepted: 4 March 2022; Published: 10 March 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

In recent years, the AO\* method [11], a sequential testing generation method, has balanced the generation complexity of test solutions against the detection performance of diagnostic procedures using heuristic searching and AND/OR graph topology. Thus, the AO\* method became the most popular testability design technique. To match the growing complexity of electric systems, AO\* has been improved with optimized searching mechanisms based on information theory [12], evolutionary algorithms [13,14], dynamic design [15], etc. Additionally, advanced research has simplified the searching procedures with a rollout strategy [16] and a bottom-up decision tree [17] to achieve practical testability solutions for large systems. Despite good application results, these testability design approaches need to assign logical relationships between test procedures and failure modes. Hence, all these methods assume that dependent single-signal operations and sequential procedures can make all diagnostic decisions. However, under many scenarios, modern devices contain highly complicated logical relationships [18] between test procedures and potential failure modes. Consequently, preparing the prior knowledge consumes too much human power and hurts the efficiency of testability design, especially under short production cycles. Furthermore, the testability design wastes the sensor recording information entailed by the test procedures, as the existing methods rely on human-selected binary information.

The maintainability estimation model provides real-time operation condition diagnosis and realizes system health management along with the testability design. In general, existing maintainability design approaches can be divided into physics-of-failure (PoF) approaches [19–22] and data-driven (DD) approaches [23–25]. PoF approaches use rules from physics or chemical dynamics to estimate electric system failure conditions [26]. With accelerated-aging experimental records and prior modeling knowledge, PoF approaches generate an accurate dynamic model under specific stress influences such as thermal, electrical, and humidity stress. However, most PoF approaches are only suitable under a single stress function, and the methods face severe limitations in real applications.

Unlike PoF methods, DD approaches rely on historical sensor information and build maps from sensor recordings to failure modes. Hence, DD approaches are more flexible than PoF methods. Classical DD approaches use statistical techniques such as stochastic methods, regression methods, distance estimation, and similarity estimation [27–30]. These methods require large samples to ensure unbiased estimation and model robustness, and are constrained to off-line modeling. Therefore, statistical approaches have limitations in maintainability design with few sampling records and short time constraints. In recent years, machine learning (ML) methods [31,32] have attracted research attention because of their high accuracy, strong adaptive ability, powerful robustness, and fast computation time. As a result, ML methods, such as neural networks (NN) [33], support vector machines (SVM) [34,35], k-nearest neighbors (KNN) [36], K-means clustering [37], and particle filters (PF) [38], have been used for maintenance design and tuned to successfully extract hidden rules from failure modes and recording information [39]. However, these methods require prior information selection and the preprocessing of system degradation features. Due to the complicated system structure and the complex relationships among recording information, the preprocessed model inputs may in turn affect testability and maintainability performance.

In this paper, we propose a test strategy optimization based on soft sensing and ensemble belief measurement to overcome the weaknesses described above. We suggest a closed-loop framework for PHM design to replace the sequential framework between testability and maintenance design. Instead of experienced knowledge, our method uses ensemble learning based on direct sensor recordings to obtain the system state estimation. Additionally, we connect the testability and maintenance design through the minimum conditional entropy criterion to optimize the test strategy. Consequently, the proposed method improves the flexibility of PHM design, saves labor, and enhances diagnostic efficiency. The contributions of our work follow.


The experiments prove that our method has better diagnostic accuracy and lower false alarm ratios than other state-of-the-art diagnostic methods. Additionally, our diagnostic strategies take only a few tests with little test assignment consumption. For each fault state, the diagnostic procedures provide one efficient test sequence. Thus, the diagnostic procedure enjoys high efficiency in applications. Finally, affinity propagation enlarges the diagnostic flexibility and significantly reduces the human labor used in testability design.

The rest of this paper is organized as follows. We introduce the PHM design problem and provide the general framework in the next section. Section 3 presents the details of the algorithms and Section 4 provides the experiment results and discussions. Finally, conclusions are drawn in Section 5.

#### **2. Problem Formulation and General Framework**

PHM engineering estimates the degradation processes and recognizes failure modes over the product's full lifetime. Based on experimental research and application surveys, PHM engineering provides fault analysis and maintenance advice to prevent failure occurrence. PHM engineering elements and the corresponding relationships are presented in Figure 1.

**Figure 1.** Element and the corresponding relationship of PHM engineering.

To study the target systems, engineers analyze potential failure modes and find unsafe and unreliable features. Thus, the target system's safety and reliability, reflected by its failure features, direct attention to the physical characteristics of system failures. For higher reliability and safety, the PHM platform provides maintenance procedures and diagnostic approaches for the system's supportability. The diagnostic approach depends on testability, and the maintenance procedures rely on maintainability.

Testability is the design characteristic that allows the health condition to be detected accurately and failure modes to be located successfully. Meanwhile, maintainability is the ability to repair and recover the system under certain conditions within a certain time. With proper maintenance and testability design, the PHM services can ensure healthy system operation and achieve the purpose of PHM engineering. In general, supportability is the key purpose of PHM engineering and is influenced by testability and maintainability; thus, the PHM design must also meet environmental stability needs. The physical system structure directly influences safety and reliability, while the system elements reflect them critically. High-quality testability and maintainability improve system reliability and safety through well-designed maintenance procedures and diagnostic approaches. Hence, testability design and maintenance design are two significant parts of the PHM platform.

Since testability and maintenance are important, various techniques provide practical testability and maintenance design. To the best of our knowledge, existing methods build a sequential approach to generate the PHM service. As Figure 2 shows, the traditional framework arranges the test procedures to assess the sensor recordings and creates the maintenance design with the assigned sensor records. The framework is applicable to many large systems. However, the testability design relies on binary information derived from human experience, such as information flow charts, dependency matrices, and AND/OR graphs. Therefore, the testability design suffers from extended time and financial burden with modern complex systems, ignores the coupling effects between test procedures, and wastes valuable sensor recording information. As a result, the maintenance design yields poor performance and low efficiency because of poor information usage and low knowledge-transmission efficiency between testability design and maintenance design.

**Figure 2.** Traditional framework of testability and maintenance design.

We introduce a closed-loop test strategy optimization method based on soft-sensor information and ensemble learning to overcome these weaknesses. Similar to traditional PHM design approaches, the proposed method aims to generate a fault diagnostic tree and cut the fault set with testability sensor information and maintenance signal processing. In contrast, our fault tree grows directly from the sensor recording information along with the processing module and extends with basic probability assignments. On the other hand, the PHM design process contains a cooperative closed loop between testability and maintenance design to improve information usage and transmission efficiency during fault detection.

The general framework of the proposed method is presented in Figure 3. For maintenance design, the soft sensor integrates the recording information from the assigned sensors with a signal processing module, such as a statistical method, support vector machine, or learning machine, to extract suitable test features. Considering the requirement of fast diagnosis and detection, we use the extreme learning machine (ELM) [34], a noniterative single-hidden-layer learning machine, to generate fast and accurate processing modules for the soft-sensing nodes. The details are introduced in the next section.

**Figure 3.** General framework of test strategy optimization based on soft sensing and ensemble belief measurement.

Suppose there exist *N* possible sensors in the PHM system design; the potential test procedure set is then denoted as *Tpotential* = {*t*1, *t*2,..., *ti*,..., *tN*−1, *tN*}, where *ti* denotes the test procedure with the *i*-th sensor. As each sensor contains a vector of information *xi* = {*xi*,1, *xi*,2,..., *xi*,*K*}, *i* = 1, 2, 3, ... , *N*, with *M* existing samples from the targeted system, the training information is regarded as:

$$\mathbf{X}\_{train} = \begin{bmatrix} \mathbf{x}\_{1,1} & \mathbf{x}\_{1,2} & \dots & \mathbf{x}\_{1,N} \\ \mathbf{x}\_{2,1} & \mathbf{x}\_{2,2} & \dots & \mathbf{x}\_{2,N} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{x}\_{M,1} & \mathbf{x}\_{M,2} & \dots & \mathbf{x}\_{M,N} \end{bmatrix} \tag{1}$$

where *xij* is the recording information vector from the *j*-th sensor for *i*-th sample, *j* = 1,2,3, ... , *N*, and *i* = 1,2,3, . . . , *M*.
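For concreteness, the assembly of the training matrix in Equation (1) can be sketched with NumPy, where each row concatenates the per-sensor recording vectors of one sample; all sizes, names, and data here are hypothetical.

```python
import numpy as np

# Illustrative sketch: M samples, N sensors, each sensor recording a
# K_rec-dimensional information vector (all names hypothetical).
M, N, K_rec = 5, 3, 4
rng = np.random.default_rng(0)

# x[i][j] is the recording vector of the j-th sensor for the i-th sample.
x = [[rng.normal(size=K_rec) for _ in range(N)] for _ in range(M)]

# Stack the per-sensor vectors row-wise into the training matrix X_train,
# mirroring Equation (1): row i concatenates sensors 1..N of sample i.
X_train = np.array([np.concatenate(row) for row in x])
print(X_train.shape)  # (M, N * K_rec)
```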

With the sampling information, the potential fault tree's soft-sensor node structure is denoted as follows:

$$\text{Node} = \left\{ \mathbf{S}\_{\text{father}}, \mathbf{S}\_{\text{son}}, \mathbf{T}\_{\text{node}}, \mathbf{X}\_{\text{node}}, \mathbf{Y}\_{\text{node}}, \operatorname{ELM}\_{\text{node}}, \mathfrak{m}\_{\text{exemplar}} \right\} \tag{2}$$

where *Tnode* is the set of test procedures assigned by previously selected soft-sensor nodes together with the currently selected test procedure, and *Xnode* is the training-sample sensor recording information to be detected under the node, as follows:

$$X_{node} = \begin{bmatrix} \mathbf{x}_{1,T_{node}}, \mathbf{x}_{2,T_{node}}, \dots, \mathbf{x}_{M_{node},T_{node}} \end{bmatrix}^T \tag{3}$$

$$\mathbf{x}\_{i,T\_{\text{node}}} = \left\{ \mathbf{x}\_{i,j} \middle| t\_j \in T\_{\text{node}} \right\} \tag{4}$$

where *Mnode* is the number of sampling data. *Ynode* provides the corresponding sample failure conditions. Suppose the whole set of failure modes is *S* = {*s*1,*s*2,...,*sK*}, where *K* is the number of fault modes considered; then *Ynode* is determined from the actual condition of the training samples as follows:

$$Y_{node} = \begin{bmatrix} y_1, y_2, \dots, y_{M_{node}} \end{bmatrix}^T \tag{5}$$

$$y_i = [y_{i,1}, y_{i,2}, \dots, y_{i,K}] \tag{6}$$

$$y_{i,j} = \begin{cases} 1 & ss_i = s_j \\ 0 & ss_i \neq s_j \end{cases} \tag{7}$$

where *ssi* is the actual fault mode of the *i*-th sample. Additionally, *Sfather* is the fault mode set of the soft-sensor node, represented as follows:
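The one-hot label construction of Equations (5)–(7) can be sketched as follows; the fault mode names and sample assignments are hypothetical.

```python
import numpy as np

# Hypothetical illustration of Equations (5)-(7): encode each sample's
# actual fault mode ss_i as a one-hot row y_i over K fault modes.
fault_modes = ["s1", "s2", "s3"]            # S = {s_1, ..., s_K}, K = 3
ss = ["s2", "s1", "s2", "s3"]               # actual mode ss_i of each sample

K = len(fault_modes)
index = {s: j for j, s in enumerate(fault_modes)}

# Y_node[i, j] = 1 iff ss_i == s_j, else 0 (Equation (7)).
Y_node = np.zeros((len(ss), K))
for i, s in enumerate(ss):
    Y_node[i, index[s]] = 1.0

print(Y_node.sum(axis=1))  # each row sums to 1: exactly one true mode
```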

$$S_{father} = \left\{ s_o \mid \exists\, \mathbf{x}_i \in X_{node},\ s_o \in S,\ ss_i = s_o \right\} \tag{8}$$

With *Xnode* and *Ynode*, the ELM serves as the soft-sensor signal processing part and aims to characterize the fuzzy set of fault states as accurately as possible. To achieve this goal, the ELM builds a map $f: \mathbb{R}^{1\times N} \to \mathbb{R}^{1\times K}$ from the training sample signals to the estimated fault states and provides the training-sample prediction *Mnode* as follows:

$$\mathbf{M}_{node} = \left[ f(\mathbf{x}_{1,T_{node}} \mid ELM_{node}), f(\mathbf{x}_{2,T_{node}} \mid ELM_{node}), \dots, f(\mathbf{x}_{M_{node},T_{node}} \mid ELM_{node}) \right]^T \tag{9}$$

With *Mnode*, the ELM estimates the training sample failure modes. To determine the performance of the processing part, *Ynode* is used as the expected output for the fault detection process. From the view of the detection process, two indexes, the fault detection rate (FDR) and the false alarm rate (FAR), play essential roles in evaluating accuracy.

The FDR is defined as the ratio between the probability of failure modes successfully detected by the ELM and the total failure mode probability. Here, we assume the historical samples follow the general failure probability distribution of real applications. Thus, the statistical characteristics of the training samples reflect the total failure mode probability, and the training-sample detection performance depicts the detection probability. From the above, the FDR for the node is presented as follows:

$$FDR_{node} = \frac{\sum_{i=1}^{M_{node}} \sum_{j=1}^{K} P\left(f_j(\mathbf{x}_{i,T_{node}} \mid ELM_{node}) \ge 1 - \varepsilon,\ y_{i,j} = 1\right)}{\sum_{i=1}^{M_{node}} \sum_{j=1}^{K} y_{i,j}} \tag{10}$$

where *ε* is the detection margin. From the generation process, $f_j(\mathbf{x}_{i,T_{node}} \mid ELM_{node})$ and *yi*,*j* are independent of each other. Additionally, since the ELM provides a continuous probability estimation, the loss function with respect to *FDRnode* is computed as follows:

$$L_{FDR_{node}} = \frac{\sum_{i=1}^{M_{node}} \sum_{j=1}^{K} \left( y_{i,j} - f_j(\mathbf{x}_{i,T_{node}} \mid ELM_{node}) \right)^2 P\left(y_{i,j} = 1\right)}{\sum_{i=1}^{M_{node}} \sum_{j=1}^{K} y_{i,j}} \tag{11}$$

Along with the FDR, the FAR presents the ratio between the false-alarm failure mode probability and the total failure detection probability. Taking $f_j(\mathbf{x}_{i,T_{node}} \mid ELM_{node})$ into consideration, the FAR is computed as follows:

$$FAR_{node} = \frac{\sum_{i=1}^{M_{node}} \sum_{j=1}^{K} P\left(f_j(\mathbf{x}_{i,T_{node}} \mid ELM_{node}) \ge 1 - \varepsilon,\ y_{i,j} = 0\right)}{\sum_{i=1}^{M_{node}} \sum_{j=1}^{K} P\left(f_j(\mathbf{x}_{i,T_{node}} \mid ELM_{node}) \ge 1 - \varepsilon\right)} \tag{12}$$
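As an illustration, the node-level FDR and FAR of Equations (10) and (12) can be estimated empirically from a batch of continuous ELM outputs. This is a minimal sketch assuming one-hot labels and a detection margin ε; the function name and toy data are hypothetical.

```python
import numpy as np

# Hedged sketch of Equations (10) and (12): empirical FDR and FAR for a
# node, given continuous ELM outputs F (M_node x K), one-hot labels Y,
# and a detection margin eps. All variable names are illustrative.
def node_fdr_far(F, Y, eps=0.4):
    detected = F >= 1.0 - eps                       # f_j(x) >= 1 - eps
    # FDR: detected true faults / total true faults
    fdr = np.sum(detected & (Y == 1)) / np.sum(Y == 1)
    # FAR: detections of non-faulty states / total detections
    far = np.sum(detected & (Y == 0)) / max(np.sum(detected), 1)
    return fdr, far

Y = np.array([[1, 0], [0, 1], [1, 0]])
F = np.array([[0.9, 0.1], [0.2, 0.7], [0.3, 0.8]])
fdr, far = node_fdr_far(F, Y, eps=0.4)
print(fdr, far)
```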

Here, it is assumed that the model is accurate enough that the estimated failures and the actual failure conditions occur with approximately equal frequency. Thus, $\sum_{i=1}^{M_{node}} \sum_{j=1}^{K} P\left(f_j(\mathbf{x}_{i,T_{node}} \mid ELM_{node}) \ge 1 - \varepsilon\right)$ is approximately equal to $\sum_{i=1}^{M_{node}} \sum_{j=1}^{K} y_{i,j}$. Therefore, combining the FDR loss and the FAR loss, the loss function of the node is denoted as:

$$\begin{aligned} L_{node} &= \frac{\sum_{i=1}^{M_{node}} \sum_{j=1}^{K} \left( y_{i,j} - f_j(\mathbf{x}_{i,T_{node}} \mid ELM_{node}) \right)^2 P\left(y_{i,j} = 1\right)}{\sum_{i=1}^{M_{node}} \sum_{j=1}^{K} y_{i,j}} \\ &\quad + \frac{\sum_{i=1}^{M_{node}} \sum_{j=1}^{K} \left( y_{i,j} - f_j(\mathbf{x}_{i,T_{node}} \mid ELM_{node}) \right)^2 P\left(y_{i,j} = 0\right)}{\sum_{i=1}^{M_{node}} \sum_{j=1}^{K} y_{i,j}} \\ &= \frac{\sum_{i=1}^{M_{node}} \sum_{j=1}^{K} \left( y_{i,j} - f_j(\mathbf{x}_{i,T_{node}} \mid ELM_{node}) \right)^2}{\sum_{i=1}^{M_{node}} \sum_{j=1}^{K} y_{i,j}} \end{aligned} \tag{13}$$

As $\sum_{i=1}^{M_{node}} \sum_{j=1}^{K} y_{i,j}$ is independent of the soft-sensor construction, the task of the ELM is to minimize $\sum_{i=1}^{M_{node}} \sum_{j=1}^{K} \left( y_{i,j} - f_j(\mathbf{x}_{i,T_{node}} \mid ELM_{node}) \right)^2$, which is the difference between *Mnode* and *Ynode*, expressed as follows:

$$ELM = \arg\min \left\{ \sum_{i=1}^{M_{node}} \left\| y_i - f(\mathbf{x}_{i,T_{node}} \mid ELM_{node}) \right\|^2 \right\} \tag{14}$$

Since the maintenance design of the proposed method directly relies on the recording sensor signals, the physical system knowledge is largely preserved and the information usage is highly enhanced.

Based on the soft-sensor node construction, the testability design process adds the soft-sensor node with the best performance and builds a fault tree, taking the potential soft-sensor nodes into consideration under the minimum conditional entropy criterion. Hence, the assigned soft-sensor nodes decrease the diagnostic uncertainty and improve the detection efficiency. Besides, affinity propagation (AP) is adopted to separate the fuzzy set of fault modes and generate subnodes for the diagnostic model with the exemplar probability estimation *mexemplar* and the basic probability assignment *BPAnode*. The subnodes are denoted by the subsets of failure modes $S_{son} = \{S_{son,1}, S_{son,2}, \dots, S_{son,K_{node}}\}$, which satisfy the condition $\cup_{i=1}^{K_{node}} S_{son,i} = S_{father}$.

After adding the soft-sensor nodes and extending the subsets of fault modes, the information of the assigned nodes *Tnode* is regarded as prior knowledge, serving as feedback from testability design to maintenance design, and the fault tree is extended until reaching the minimum fault condition set.

With the cooperation between testability design and maintenance design, a PHM model based on soft-sensor information is generated, and the sensors for PHM maintenance are assigned based on the selected test set of the PHM model, as follows:

$$T_{PHM} = \{ t_i : \exists\, T_{node},\ t_i \in T_{node} \} \tag{15}$$

To locate the fault condition when starting from the maximum set of failure modes, the corresponding sensor recordings are collected and used to compute the potential basic probability assignment. Then, the subset of potential modes is determined from the existing samples with a nearest-neighbor strategy, and the detection process continues until the minimum failure set is found and the failure detection is obtained. For each test sample, denoted as *casei*, the PHM detection procedure generates a test sequence corresponding to the assigned sensor recordings and the estimations from the signal processing along a branch of assigned soft-sensor nodes, as follows:

$$S_{node} = \left\{ Node_j : \exists\, m_{exemplar,k} \in \mathfrak{m}_{exemplar},\ \left\| f\left(\mathbf{x}_{case,T_{node}} \mid ELM_{node}\right) - m_{exemplar,k} \right\| = d_{Node_j} \right\} \tag{16}$$

where $d_{Node_j}$ is the minimum distance from the failure condition estimation vector $f(\mathbf{x}_{case,T_{node}} \mid ELM_{Node_j})$ to all exemplars of the soft-sensor nodes sharing the same father node as *Nodej*. The detection process ends when the procedure reaches the terminal node of *Snode*, denoted as *Nodeterminal*,*case*. Then, the estimated failure condition vector is computed as:
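The branch-selection rule of Equation (16) amounts to routing a case to the child node whose exemplar is nearest to the current state estimate. A minimal sketch, with hypothetical names and toy data:

```python
import numpy as np

# Minimal sketch of the branch-selection rule in Equation (16): route a
# case to the child node whose exemplar is nearest (Euclidean distance)
# to the current state estimate f(x_case | ELM_node). Names hypothetical.
def select_branch(estimate, exemplars):
    """exemplars: dict mapping child-node id -> exemplar vector."""
    dists = {node: np.linalg.norm(estimate - m) for node, m in exemplars.items()}
    return min(dists, key=dists.get)  # node at the minimum distance d_node

exemplars = {"node_a": np.array([1.0, 0.0]), "node_b": np.array([0.0, 1.0])}
print(select_branch(np.array([0.8, 0.3]), exemplars))  # closest exemplar wins
```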

$$\hat{y}_{case,j} = \begin{cases} 1 & s_j \in S_{node_{terminal,case}} \\ 0 & s_j \notin S_{node_{terminal,case}} \end{cases} \tag{17}$$

According to the definitions of FDR, FAR, and detection accuracy, the test performance indexes of the detection procedures are computed as follows:

$$FDR\_{test} = \sum\_{X\_{case}} \sum\_{j=1}^{K} P(\hat{y}\_{case,j} = 1 | y\_{case,j} = 1) \tag{18}$$

$$FAR_{test} = \sum_{X_{case}} \sum_{j=1}^{K} P(\hat{y}_{case,j} = 1 | y_{case,j} = 0) \tag{19}$$

$$Accuracy\_{test} = \sum\_{X\_{case}} \sum\_{j=1}^{K} P(\hat{y}\_{case,j} = 1 | y\_{case,j} = 1) + P(\hat{y}\_{case,j} = 0 | y\_{case,j} = 0) \tag{20}$$

From these, the test strategy optimization aims to select *TPHM* from *Tpotential* and generate the diagnostic tree with soft sensors and AP. For each case, AP determines the next procedure and the corresponding soft sensors from the previous estimations. Thus, for each case, the diagnostic tree provides an adaptive test sequence and leads to the final evaluation at the terminal node. Similar to Equation (13), the objective function is the combination of the FAR loss and the FDR loss, as follows:

$$L_{tree} = \frac{\sum_{i=1}^{M} \sum_{j=1}^{K} \left( y_{i,j} - F_j\left(\mathbf{x}_{i,T_{seq,i}} \mid T_{seq,i} \subseteq T_{PHM},\ ELM_{node} \in tree\right) \right)^2}{\sum_{i=1}^{M} \sum_{j=1}^{K} y_{i,j}} \tag{21}$$

where *Tseq*,*i* is the required test procedure with respect to the diagnostic tree, and $F_j\left(\mathbf{x}_{i,T_{seq,i}} \mid T_{seq,i} \subseteq T_{PHM},\ ELM_{node} \in tree\right)$ is the state estimation from the terminal node with respect to the test procedure of the *i*-th case.

### **3. Test Strategy Optimization Based on Soft Sensing and Ensemble Belief Measurement**

*3.1. Construct Soft-Sensor Node with Extreme Learning Machine*

As mentioned in the previous section, each soft sensor contains the recording information from the assigned sensors, the artificial intelligence signal processing module, and the probability estimation parameters for the isolation of fault states. During the construction process, the maintenance design produces soft-sensor nodes from candidate test procedures, selects candidate soft-sensor nodes with high performance, and generates the fault tree. For each candidate node, the sensor recording input is created as follows:

$$X_{candidate\ node} = \left\{ \mathbf{x}_{s_i,T_i^*} : \mathbf{x}_i \in X_{node} \right\} \tag{22}$$

$$\mathbf{x}_{s_i,T_i^*} = \{ \mathbf{x}_{s_i,t} \mid t \in T_i^* \} \tag{23}$$

$$T_i^* = \left\{ T_{sequence}^*,\ t^* \right\}, \quad \text{where } \{t^*\} \cap T_{sequence}^* = \varnothing \tag{24}$$

where *T*∗ *sequence* integrates the test information assigned before the candidate node and makes full use of the sensor recording knowledge.

At the same time, we use the ELM to generate the artificial intelligence signal processing modules for fast training and high generalization ability. As shown in Figure 4, the ELM is a noniterative three-layer neural network and contains the parameters of a fully connected hidden layer and a linearly combined output layer with an activation function, as follows:

$$ELM_{candidate\ node} = \{ (\mathbf{W}_{node}, \mathbf{b}_{node}), \boldsymbol{\beta}_{node}, f_h(\cdot) \} \tag{25}$$

$$\mathbf{W}\_{node} = \begin{bmatrix} w\_{1,1} & \dots & w\_{1,N\_T} \\ w\_{2,1} & \dots & w\_{2,N\_T} \\ \vdots & \ddots & \vdots \\ w\_{L,1} & \dots & w\_{L,N\_T} \end{bmatrix} \tag{26}$$

$$\mathbf{b}_{node} = [b_{node,1}, b_{node,2}, \dots, b_{node,L}] \tag{27}$$

$$\boldsymbol{\beta}_{node} = \begin{bmatrix} \beta_{1,1} & \dots & \beta_{1,K} \\ \beta_{2,1} & \dots & \beta_{2,K} \\ \vdots & \ddots & \vdots \\ \beta_{L,1} & \dots & \beta_{L,K} \end{bmatrix} \tag{28}$$

where *Wnode* is the weight matrix of the hidden layer, *bnode* is the hidden layer bias, and *L* is the number of hidden nodes. *βnode* is the output layer weight, and *fh*(.) determines the activation function from the sensor input to the hidden output. Here, the sigmoid function is taken as the activation function for all soft-sensor nodes. With respect to *Xcandidate node*, the hidden outputs of the training samples are produced as follows:

$$\mathbf{H}_{node} = \begin{bmatrix} f_h(w_1 \mathbf{x}_{1,T^*} + b_{node,1}) & \dots & f_h(w_L \mathbf{x}_{1,T^*} + b_{node,L}) \\ f_h(w_1 \mathbf{x}_{2,T^*} + b_{node,1}) & \dots & f_h(w_L \mathbf{x}_{2,T^*} + b_{node,L}) \\ \vdots & \ddots & \vdots \\ f_h(w_1 \mathbf{x}_{M_{node},T^*} + b_{node,1}) & \dots & f_h(w_L \mathbf{x}_{M_{node},T^*} + b_{node,L}) \end{bmatrix} \tag{29}$$

As the ELM is a noniterative learning machine, *Wnode* and *bnode* can be assigned randomly with respect to an arbitrary probability distribution, and the output of the model is computed as a linear combination of the hidden outputs with the trained *βnode*, as follows:

$$\hat{\mathbf{Y}}_{candidate\ node} = \mathbf{H}_{node} \boldsymbol{\beta}_{node} \tag{30}$$

To estimate the failure situation as accurately as possible, $\hat{Y}_{candidate\ node}$ is supposed to be consistent with the actual failure states *Ynode* defined by Equations (5)–(7). According to Equation (14), the loss function of the candidate node is computed as follows:

$$Loss_{candidate\ node} = \left\| \mathbf{Y}_{node} - \hat{\mathbf{Y}}_{candidate\ node} \right\|^2 = \left\| \mathbf{Y}_{node} - \mathbf{H}_{node}\boldsymbol{\beta}_{node} \right\|^2 \tag{31}$$

Taking the derivative of *Losscandidate node* with respect to *βnode* and setting it to zero, the trained output weight is obtained as follows:

$$\boldsymbol{\beta}_{node} = \left(\mathbf{H}_{node}^T \mathbf{H}_{node} + \lambda \mathbf{I}\right)^{-1} \mathbf{H}_{node}^T \mathbf{Y}_{node} \tag{32}$$

Based on the proper assignment of ELM parameters, the soft-sensor nodes gain knowledge from the training samples and obtain accurate condition estimation with the test samples.

From the above, considering the candidate node with sensor recording inputs *Xcandidate node*, the previous test sequence *T*∗ *sequence*, and the candidate test point *t*∗, the procedure to generate *ELMnode* is as follows:

Step 1: Assign the candidate node with Equation (2), where *Sfather* = {*si* | *scandidate node* = *si*, *si* ∈ *S*}. Meanwhile, *Tnode* is generated as in Equation (24), and *Ynode* is assigned with Equations (5)–(7);

Step 2: Initialize ELM parameters (*Wnode*, *bnode*) randomly in [−1,1];

Step 3: Calculate the hidden output with respect to *Xcandidate node* as in Equation (29);

Step 4: Train the output weights *βnode* with Equation (32);

Step 5: Obtain the estimation of the candidate set $\hat{Y}_{candidate\ node}$ with Equation (30).
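Steps 1–5 can be sketched as a ridge-regularized ELM with a random sigmoid hidden layer, following Equations (29)–(32). This is an illustrative implementation under the stated assumptions, not the authors' code; the function names and toy data are hypothetical.

```python
import numpy as np

# Sketch of Steps 1-5: a ridge-regularized ELM with a random hidden
# layer, following Equations (29)-(32). All names are illustrative.
def elm_fit(X, Y, L=20, lam=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, size=(L, X.shape[1]))  # Step 2: random weights
    b = rng.uniform(-1.0, 1.0, size=L)                # Step 2: random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))          # Step 3: sigmoid hidden output
    # Step 4: ridge solution beta = (H^T H + lam I)^(-1) H^T Y
    beta = np.linalg.solve(H.T @ H + lam * np.eye(L), H.T @ Y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))
    return H @ beta                                   # Step 5: Y_hat = H beta

# Toy usage: two well-separated fault modes with one-hot labels.
X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]])
Y = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)
W, b, beta = elm_fit(X, Y, L=10)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
print(pred)  # expect [0 0 1 1] on this toy set
```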

**Figure 4.** Fault tree of test strategy optimization based on soft sensing and ensemble belief measurement.

#### *3.2. Separate the Fault Set Based on Affinity Propagation*

With the ELM-based soft-sensor nodes, the conditions of the training and test samples can be estimated with high efficiency. Meanwhile, owing to the knowledge limitation of individual sensor recordings, the ELM condition evaluation has a vague part with unrelated failure modes. Thus, the fault set of the corresponding node *Sfather* is divided into several fuzzy sets $S_{son} = \{S_{son,1}, S_{son,2}, \dots, S_{son,K_{node}}\}$ based on the fault state evaluation values $\hat{Y}_{candidate\ node}$. In traditional diagnostic tree construction and fault analysis processes, the failure mode subset is divided by comparing the fault state evaluation against a reference or calibration value of the failure mode. However, these strategies are only applicable to systems with small structures or known mechanisms, and the diversity and validity of the historical samples may be ignored. Hence, in the proposed method, we introduce a new dividing strategy based on affinity propagation (AP) that cuts the fuzzy set and samples with a similarity measurement between pairs of evaluation data points. With AP, *Sson* is constituted based on the similarity between condition estimates, and the flexibility of fault tree generation is enhanced.

Instead of assigning engineering-experience reference information, AP generates clusters based on the evaluation values of the whole training set, *Ŷ*<sub>candidate,node</sub> = {*ŷ*<sub>1</sub>, *ŷ*<sub>2</sub>, ..., *ŷ*<sub>*M*<sub>node</sub></sub>}. The clustering method treats each training sample as one data point and transmits two real-valued messages, the responsibility value *r*(*y<sub>i</sub>*, *y<sub>j</sub>*) and the availability value *a*(*y<sub>i</sub>*, *y<sub>j</sub>*), to realize communication between data points until a good set of exemplars and corresponding clusters emerges. The responsibility value *r*(*y<sub>i</sub>*, *y<sub>j</sub>*) indicates the accumulated evidence for how well suited the historical data point *y<sub>j</sub>* is to serve as the exemplar for the historical data point *y<sub>i</sub>*. In addition, the availability value *a*(*y<sub>i</sub>*, *y<sub>j</sub>*) represents how appropriate it would be for the historical data point *y<sub>i</sub>* to choose the historical data point *y<sub>j</sub>* as its exemplar. For initialization, the similarity between the historical points *y<sub>i</sub>*, *y<sub>j</sub>* is calculated based on the Euclidean distance, as follows:

$$d\left(y\_i, y\_j\right) = -\left\|y\_i - y\_j\right\|^2\tag{33}$$

where *d*(*y<sub>i</sub>*, *y<sub>j</sub>*) reflects how well the historical data point *y<sub>j</sub>* is suited to be the exemplar for the historical data point *y<sub>i</sub>*. AP aims to provide a clustering solution in which historical data points with larger similarity values are more likely to serve as exemplars. To achieve this purpose, the clustering method recursively conducts the following updating process, sending messages between the data points.

First, the responsibility value *r*(*y<sub>i</sub>*, *y<sub>j</sub>*) is computed in the following data-driven manner:

$$r\left(y\_i, y\_j\right) = d\left(y\_i, y\_j\right) - \max\_{k\ \mathrm{s.t.}\ k\neq j} \{a(y\_i, y\_k) + d(y\_i, y\_k)\}\tag{34}$$

As the availability values *a*(*y<sub>i</sub>*, *y<sub>j</sub>*) are initially set to 0, the responsibility value *r*(*y<sub>i</sub>*, *y<sub>j</sub>*) is initialized as the input similarity *d*(*y<sub>i</sub>*, *y<sub>j</sub>*) minus the largest similarity between *y<sub>i</sub>* and the other candidate exemplars. Hence, the first update does not consider how many other points favor each candidate exemplar. In later iterations, if some points are effectively assigned to other exemplars, the corresponding availability drops below 0 as *a*(*y<sub>i</sub>*, *y<sub>j</sub>*) is updated. A negative *a*(*y<sub>i</sub>*, *y<sub>j</sub>*) then decreases the effective similarity *d*(*y<sub>i</sub>*, *y<sub>j</sub>*) in Equation (34) and removes the corresponding candidate exemplar from competition. In particular, the self-responsibility *r*(*y<sub>i</sub>*, *y<sub>i</sub>*) is set to the input preference that the training data point *y<sub>i</sub>* becomes one of the exemplars, and it reflects the accumulated evidence that *y<sub>i</sub>* is an exemplar based on its input preference, tempered by how ill-suited it is for assignment to another exemplar.

After calculating the responsibility values, the availability value is updated to gather evidence from the training data points as to whether each candidate exemplar would make a good exemplar, as follows:

$$a\left(y\_i, y\_j\right) = \min\left\{0,\ r\left(y\_j, y\_j\right) + \sum\_{k\ \mathrm{s.t.}\ k \notin \{i,j\}} \max\{0, r(y\_k, y\_j)\}\right\}\tag{35}$$

In addition, the self-availability value *a*(*y<sub>j</sub>*, *y<sub>j</sub>*) is updated as follows:

$$a\left(y\_j, y\_j\right) = \sum\_{k\ \mathrm{s.t.}\ k \neq j} \max\{0, r(y\_k, y\_j)\}\tag{36}$$

where *a*(*y<sub>j</sub>*, *y<sub>j</sub>*) reflects the accumulated evidence that the training data point *y<sub>j</sub>* is an exemplar. For each data point *y<sub>i</sub>*, the data point *y<sub>j</sub>* that maximizes *a*(*y<sub>i</sub>*, *y<sub>j</sub>*) + *r*(*y<sub>i</sub>*, *y<sub>j</sub>*) is chosen as its exemplar. If *i* = *j*, the data point *y<sub>i</sub>* is identified as an exemplar itself, and its estimation value is assigned as the exemplar value *m*<sub>exemplar</sub>. The set of all *m*<sub>exemplar</sub> is denoted as *M*<sub>exemplar</sub>. Based on each exemplar, one subset of *S*<sub>father</sub> satisfies:

$$\forall m\_{exemplar} \in M\_{exemplar}\ \exists S\_{son,k} \subset S\_{father}\ \mathrm{s.t.}\ S\_{son,k} = \left\{ s\_i : \exists x\_j \in X\_{node},\ a\left(\hat{y}\_j, m\_{exemplar}\right) + r\left(\hat{y}\_j, m\_{exemplar}\right) = \max\_{y\_k}\left\{ a\left(\hat{y}\_j, y\_k\right) + r\left(\hat{y}\_j, y\_k\right) \right\} \right\}\tag{37}$$

From above, the AP process with respect to the candidate node is conducted as follows:

Step 1: For *ŷ<sub>i</sub>*, *ŷ<sub>j</sub>* ∈ *Ŷ*<sub>candidate,node</sub>, initialize the responsibility value *r*(*ŷ<sub>i</sub>*, *ŷ<sub>j</sub>*) as the similarity *d*(*ŷ<sub>i</sub>*, *ŷ<sub>j</sub>*) from Equation (33), and set the availability value *a*(*ŷ<sub>i</sub>*, *ŷ<sub>j</sub>*) to zero;

Step 2: Update the responsibility values *r*(*ŷ<sub>i</sub>*, *ŷ<sub>j</sub>*) with Equation (34);

Step 3: Update the availability values *a*(*ŷ<sub>i</sub>*, *ŷ<sub>j</sub>*) with Equations (35) and (36);

Step 4: If *r*(*ŷ<sub>i</sub>*, *ŷ<sub>j</sub>*) and *a*(*ŷ<sub>i</sub>*, *ŷ<sub>j</sub>*) have become stable, conduct Step 5; otherwise, return to Step 2;

Step 5: For each sample *ŷ<sub>i</sub>* ∈ *Ŷ*<sub>candidate,node</sub>, assign the data point that maximizes *a*(*ŷ<sub>i</sub>*, *ŷ<sub>j</sub>*) + *r*(*ŷ<sub>i</sub>*, *ŷ<sub>j</sub>*) as its exemplar *m*<sub>exemplar</sub>, and generate the exemplar set *M*<sub>exemplar</sub>;

Step 6: Separate *S*<sub>father</sub> with Equation (37).
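The message-passing loop of these steps can be sketched compactly. The snippet below is our own minimal, vectorized illustration of standard affinity propagation, not the authors' code; the median-similarity preference and the damping factor are common defaults that the text does not specify.

```python
import numpy as np

def affinity_propagation(Y, damping=0.5, max_iter=200):
    """Cluster condition estimates Y (shape (M, d)); returns each
    sample's exemplar index."""
    n = Y.shape[0]
    # Similarity: negative squared Euclidean distance (Equation (33))
    S = -np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=2)
    # Self-similarity (preference): median similarity, a common default
    np.fill_diagonal(S, np.median(S))
    R = np.zeros((n, n))
    A = np.zeros((n, n))
    idx = np.arange(n)
    for _ in range(max_iter):
        # Responsibility update (Equation (34))
        AS = A + S
        best = AS.argmax(axis=1)
        first = AS[idx, best].copy()
        AS[idx, best] = -np.inf
        second = AS.max(axis=1)
        R_new = S - first[:, None]
        R_new[idx, best] = S[idx, best] - second
        R = damping * R + (1.0 - damping) * R_new
        # Availability update (Equations (35) and (36))
        Rp = np.maximum(R, 0.0)
        np.fill_diagonal(Rp, R.diagonal())
        col = Rp.sum(axis=0)
        A_new = np.minimum(0.0, col[None, :] - Rp)
        np.fill_diagonal(A_new, col - R.diagonal())
        A = damping * A + (1.0 - damping) * A_new
    # Each sample chooses the exemplar maximizing a + r (Step 5)
    return (A + R).argmax(axis=1)
```

Damping averages each new message with the previous one, which is the usual safeguard against oscillation in affinity propagation.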

#### *3.3. Generate the Fault Diagnostic Tree under Minimum Conditional Criterion*

Based on soft-sensor construction and subset division, the fault state of the target system can be located by cutting the set of potential failure modes with a sequence of soft-sensor nodes. In this section, we introduce how to generate the fault tree from the potential soft-sensor nodes under a heuristic strategy based on the minimum conditional criterion. For an assigned failure set *S*<sub>father</sub>, there are numbers of potential soft-sensor nodes corresponding to the candidate test procedures. To choose the soft sensor for fault tree construction, the conditional entropy *H*(*Y*<sub>node</sub> | *Ŷ*<sub>node</sub>, *ELM*<sub>candidate</sub>) is introduced as follows:

$$H\left(\mathbf{Y}\_{node} \middle| \hat{\mathbf{Y}}\_{node}, \mathrm{ELM}\_{candidate}\right) = -\sum\_{\mathbf{y} \in \mathbf{Y}\_{node}} p\left(\mathbf{y}, \hat{\mathbf{y}} \middle| \mathrm{ELM}\_{candidate}\right) \log p\left(\mathbf{y} \middle| \hat{\mathbf{y}}, \mathrm{ELM}\_{candidate}\right) \tag{38}$$

Since the soft-sensor model is data-driven, the estimation value of the conditional entropy is computed as follows:

$$\hat{H}\left(\mathbf{Y}\_{node} \middle| \hat{\mathbf{Y}}\_{node}, \mathrm{ELM}\_{candidate}\right) = -\sum\_{\mathbf{y} \in \mathbf{Y}\_{node}} \log p(\mathbf{y}|\hat{\mathbf{y}}, \mathrm{ELM}\_{candidate}) \tag{39}$$

Assuming that the data (*Y*<sub>node</sub>, *Ŷ*<sub>node</sub>) follow a Gaussian distribution, *Ĥ*(*Y*<sub>node</sub> | *Ŷ*<sub>node</sub>, *ELM*<sub>candidate</sub>) is simplified as follows:

$$\hat{H}\left(\mathbf{Y}\_{node} \middle| \hat{\mathbf{Y}}\_{node}, \mathrm{ELM}\_{candidate}\right) = \sum\_{\mathbf{y} \in \mathbf{Y}\_{node}} \left\|\mathbf{y} - \hat{\mathbf{y}}\right\|^2 \tag{40}$$

For all candidate soft-sensor nodes, the node with the lowest conditional entropy is selected to build the fault diagnostic tree. From above, the process to construct the fault diagnostic tree follows:


Step 5. For each subset *S<sub>i</sub>* in *S*<sub>son</sub>, construct the extending node *Node*<sub>*S<sub>i</sub>*</sub>. For each extending node *Node*<sub>*S<sub>i</sub>*</sub>, the father set is assigned as *S<sub>i</sub>* and the data set is constructed based on the AP result. *T*<sup>∗</sup><sub>sequence</sub> is initialized as (*T*<sup>∗</sup><sub>sequence</sub>, *t*<sub>opt</sub>).

Step 6. Generate the subset nodes by repeating **Steps 2** to **5** until the minimal subsets of failure modes are reached. The construction process is complete when all the subnodes of the fault tree have been constructed.

With the minimum conditional criterion, the fault diagnostic tree is generated by data-driven mechanisms and requires little engineering experience. Thus, the generating process is applicable to complex systems with insufficient knowledge about their structures, functions, and mechanisms.
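Under the Gaussian simplification of Equation (40), the minimum conditional criterion reduces node selection to comparing sums of squared estimation errors across the candidate soft-sensor nodes. A minimal sketch of this selection (our own illustration; the function name and dictionary layout are assumptions):

```python
import numpy as np

def select_candidate_node(Y_node, estimates):
    """Choose the candidate test point whose ELM estimate minimizes the
    Gaussian-simplified conditional entropy, i.e. the sum of squared
    estimation errors (Equation (40)).

    Y_node    : (M, c) true fault-state values for the node.
    estimates : dict mapping candidate test point -> (M, c) ELM estimate.
    """
    return min(estimates,
               key=lambda t: float(np.sum((Y_node - estimates[t]) ** 2)))
```

The candidate with the smallest residual carries the most information about the node's fault set, so it becomes the next soft-sensor node in the tree.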

With the diagnostic tree generated, the diagnostic process of the target system is implemented as follows:


#### **4. Experiment**

In this section, we use the analog circuit in [40] to evaluate the detection performance of the proposed method against state-of-the-art methods. As Figure 5 shows, the target system contains four second-order filters and one adding device. The details of the system are presented in Table 1. The tolerance of R1–R8 is ±10%, while the tolerance of R9–R11 is ±1%. For the capacitances, the tolerance is set to ±5%. Under healthy operation, the transmission gains of Av1, Av2, Av3, and Av4 are within a range of ±1%.

Here, the failures caused by different gain changes of the amplifiers are considered in failure detection. The failure modes are defined based on the range of transmission gain of Av1, Av2, Av3, and Av4, as shown in Table 2. Since 80% of failures in real applications are single-failure modes, we only consider detection of single failure modes. For example, the failure condition of Av1 is divided into five phases with different ranges of transmission gain, while the outputs of the amplifiers are collected with input signals at four different frequencies (10 Hz, 100 Hz, 10 kHz, and 100 kHz). The details are shown in Table 3. The voltage outputs of Av1, Av2, Av3, and Av4 are regarded as the potential test points for failure detection. In total, there are 16 candidate test points and 17 potential fault states, including the healthy state.

**Figure 5.** Target analog circuit.



**Table 2.** Denotation of fault states.

We apply Monte Carlo simulation in PSpice to generate 20 samples for each failure mode.
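The tolerance sampling behind such a Monte Carlo run can be pictured as drawing each component value uniformly within its tolerance band. The sketch below is ours, not part of the paper's toolchain (the actual waveforms come from PSpice circuit simulation); the nominal component values are hypothetical, while the tolerance figures follow the text (R1–R8 at 10%, R9–R11 at 1%, capacitors at 5%).

```python
import numpy as np

def sample_components(nominal, tolerance, n_samples=20, seed=0):
    """Draw each component value uniformly within its tolerance band.

    nominal   : dict, component name -> nominal value
    tolerance : dict, component name -> relative tolerance (0.10 = +/-10%)
    """
    rng = np.random.default_rng(seed)
    return [
        {name: value * (1.0 + rng.uniform(-tolerance[name], tolerance[name]))
         for name, value in nominal.items()}
        for _ in range(n_samples)
    ]

# Hypothetical nominal values; the tolerances follow the paper.
nominal = {"R1": 10e3, "R9": 1e3, "C1": 100e-9}
tolerance = {"R1": 0.10, "R9": 0.01, "C1": 0.05}
runs = sample_components(nominal, tolerance)  # 20 component sets per mode
```

Each sampled component set would then be handed to the circuit simulator to produce one training sample for the corresponding failure mode.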

According to the traditional PHM framework shown in Figure 3, the optimization process requires binary fault marks based on human experience and detection circuits designed for each fault state. Since there are four fault states for each second-order filter, the detection imposes a large burden on circuit design. Testability design may also fail to capture the relationships between the binary estimations of different test procedures. On the other hand, the maintenance design suffers low estimation efficiency because the testability design does not consider the detailed information of the sensor recordings.

Unlike the sequential framework, the proposed method considers the direct sensor information in both testability and maintenance design. With the cooperative procedures in Figure 4, the proposed method generates the candidate soft-sensor nodes for maintenance. At the same time, the testability design uses the minimum conditional criterion to generate the optimized diagnostic strategy. The minimum conditional criterion enhances the flexibility of testability design by considering the soft-sensor estimation in the maintenance phases. On the other hand, the maintenance performance with full sensor recordings increases the efficiency of information usage. By avoiding human-experience processing, the proposed method saves considerable cost during PHM design. To evaluate diagnostic performance, we compared our method with the hidden Markov model (HMM), support vector machine (SVM), and radial basis function (RBF) network by using all recordings of the 16 test points as input information. To estimate the feature extraction performance and learning machine function, we also included HMM and SVM with principal component analysis (PCA), as well as the extreme learning machine (ELM), in the comparison. For each method, we used 70% of the samples in each fault state as training samples to construct the model and the other 30% as the sensor information of the target systems. We assigned 100 kernels or hidden nodes to the SVM, RBF, and ELM models and to our soft-sensor nodes. Each method was run 30 times to obtain the average performance.


**Table 3.** Performance comparison.

*Sensors* **2022**, *22*, 2138


Table 3 shows the FAR, FDR, and accuracy of each method. HMM has high diagnostic accuracy without feature extraction, especially for S0, because HMM has a stronger statistical analysis ability than SVM and RBF. However, HMM gives a poorer FAR than the other methods. Unlike HMM and RBF, SVM and ELM have lower FARs and are more sensitive to false-negative samples owing to the advantages of the learning machine. Comparing HMM, SVM, PCA–HMM, and PCA–SVM, PCA improves the HMM diagnosis for S5 and S8, showing that proper feature extraction can benefit diagnostic performance. Our method has the lowest FAR, the highest FDR, and the highest accuracy among all compared methods. Based on the same sensor recordings, or even less information, the generated strategy provides an accurate location of all test samples for all 16 fault states. Hence, with ensemble learning based on soft sensors, the functionality of the sensor recordings is largely improved.

Figure 6 shows the diagnostic tree of our method. All fault conditions are separated and recognized with 13 individual testing sequences within the tree structure. Each testing sequence takes fewer than 5 test procedures, and the whole diagnostic tree contains only 9 of the 16 potential test points. In other words, the fault state of the target system is located within 2–5 test procedures instead of collecting all 16 sensor recordings. Hence, the diagnostic tree has higher efficiency and lower testing cost than the full-test diagnostic strategies built with SVM, HMM, and ELM, as well as the constrained diagnostic test strategies required by the PCA-based methods. Additionally, unlike traditional diagnostic trees with binary structures, our method separates the fuzzy sets using the evaluation results of the constructing and dividing modules. As a result, our testability design extends the diagnostic flexibility and improves the diagnostic accuracy of each fault mode.

Based on the soft-sensor construction, the potential test accuracies of the diagnostic tree root node are compared for different test procedures. As shown in Figure 7, different test procedures differ in diagnostic accuracy, especially for S0–S5. Therefore, the fault tree construction based on the minimum conditional criterion can efficiently select proper soft-sensor nodes into the diagnostic tree and ensure that the diagnostic FAR, FDR, and accuracy are greatly improved during the constructing process.

The affinity propagation result of the diagnostic tree root node is presented in Figure 8. As the root node evaluates all samples, the affinity results are generated with fault mode evaluations of all 17 fault conditions (from S0 to S16). Here, we depict the affinity propagation results for three fault states that occur in the same place, such as Av1 failures (S0, S1, and S2), Av2 failures (S5, S7, and S14), and Av4 failures (S12, S13, and S15). Additionally, we present the affinity propagation results for three failure modes in different places: S2 in Av1, S7 in Av2, and S14 in Av4.

From Figure 8, the soft-sensor nodes provide distinguishable BPA evaluations for each data point. The affinity propagation generates sample clusters adaptively in different dimensions with the data point similarity measurements. Most data points are clustered into topologically closer clusters; others may differ in other dimensions. Compared with traditional clustering strategies for diagnosis, affinity propagation provides practical, automatic fault-state division and saves much human labor in engineering applications.

Finally, the sequential testing performance for the fault states is shown in Figure 9. Here, we present the test sequences of S0, S3, S8, and S15. These four test sequences achieve their best diagnostic accuracy within three to five test procedures, and the test efficiency is much higher than that of traditional maintenance methods. The test accuracy grows steadily for all test sequences as test nodes are added to the test procedures. Especially for the test sequence of S0, the diagnostic accuracy is less than 60% in the first test procedure because the corresponding fuzzy state set contains many members. However, the accuracy grows quickly as more nodes are added to the sequence and the set of potential states becomes smaller. Thus, the ensemble function of the soft-sensor nodes improves the diagnostic performance with high efficiency.

**Figure 6.** Analog circuit diagnostic tree.

**Figure 7.** Analog circuit diagnostic tree potential ELM model accuracy comparison.

**Figure 8.** Affinity propagation results of the diagnostic tree root node.

**Figure 9.** Test sequence accuracy comparison: (**a**) S0 test sequence, (**b**) S3 test sequence, (**c**) S8 test sequence, and (**d**) S15 test sequence.

From above, our method achieves better diagnostic accuracy and lower FAR than other state-of-the-art diagnostic methods. Additionally, our diagnostic strategy takes only 9 of the 16 test points and saves considerable test assignment cost. For each fault state, the diagnostic procedure provides one test sequence within 5 test procedures. Thus, the diagnostic procedure enjoys high efficiency in applications. Finally, the affinity propagation enlarges the diagnostic flexibility and saves much human labor in testability design.

#### **5. Conclusions**

Along with short production cycles and the rapid development of design technology, existing PHM techniques have become impractical and fail to match systems' structural and functional complexity. Prior knowledge preparation costs too much human labor, and binary decision-making strategies waste the detailed sensor recordings, especially for large, complicated systems.

We propose a test strategy optimization based on soft sensing and ensemble belief measurement to overcome these weaknesses. The proposed method constructs a closed loop between testability design and maintenance design, generating an efficient fault diagnostic tree with ELM-based soft-sensor nodes. Unlike traditional diagnostic approaches, our diagnostic tree adaptively separates the fault sets by affinity propagation, and the soft-sensor nodes are assigned with the minimum conditional criterion. Thus, our method achieves high efficiency and flexibility in the diagnostic process.

The experimental results show that our method has the minimum FAR and maximum accuracy in fault diagnosis among state-of-the-art methods. Additionally, our method requires fewer test procedures and increases the test efficiency compared with the other methods. Because the construction processes are based on ELM and AP, the PHM design saves much human labor and becomes more flexible than traditional PHM approaches. Hence, the proposed method performs well in test strategy design. However, the proposed method uses an offline construction technique for the diagnostic tree. As a result, the diagnostic performance depends only on the assigned fault set, and the recordings of online operations do not contribute to the PHM design. Therefore, the online updating of the diagnostic strategy should be investigated further.

**Author Contributions:** Conceptualization, W.M. and Y.S.; methodology, W.M.; software, W.M.; validation, Y.S., L.T. and Z.L.; formal analysis, L.T.; investigation, Z.L.; resources, Z.L.; data curation, W.M.; writing—original draft preparation, W.M.; writing—review and editing, W.M.; visualization, W.M.; supervision, Y.S.; project administration, L.T.; funding acquisition, Z.L. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was supported by the National Natural Science Foundation of China under Grant No. U1830133(NSFC) and the Project of Sichuan Youth Science and Technology Innovation Team, China (Grant No. 2020 JDTD0008).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** The data that support the findings of this study are available on request from the authors.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**

