**5. Experimental Research**

In this section, the bus of control system and the engine are taken as examples to illustrate the validity of the proposed model. The health assessment of the bus of control system is introduced in Section 5.1. In Section 5.2, the health assessment of the engine is conducted. The result analysis is presented in Section 5.3.

### *5.1. Example 1—Health Assessment of the Bus of Control System*

### 5.1.1. Background Description

The bus of control system, which controls the transmission of test data and control instructions between the bus controller (BC) and remote terminals (RT), is widely applied in rockets, missiles, and other aerospace fields [24]. With the demand for rapid information transmission rates and large bandwidth, optical fiber communication technology is widely used in the bus of control system. To demonstrate the effectiveness of the proposed model, a type of bus of control system based on a passive optical network (PON) is taken as an example. Passive optical networks are so named because they contain a large number of passive optical devices, such as optical fibers, optical fiber connectors, and optical splitters. Because the passive optical devices in a PON are easily influenced by severe operating environments, the health status of the bus of control system can degrade, resulting in degradation of the communication quality. Therefore, it is crucial to assess the health status of the bus of control system.

In this experiment, due to the shortage of test data, the topology of the bus of control system is simulated in the OptiSystem software, as shown in Figure 5. According to the real status and fault mode analysis of the bus of control system, different degrees of fault are simulated in the simulation model. The Q factor (Q) of the eye diagram and the received optical power (O) are selected as health status indicators [25]. The eye diagram is used to measure the signal-to-noise ratio of the signal, and the Q factor is one of the important parameters of the eye diagram [26]. The received optical power denotes the optical power at the optical receiver. When the received optical power is lower than the minimum received optical power of the optical receiver, the optical signal cannot be transmitted.
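For reference, the Q factor of an eye diagram is commonly computed from the mean levels and noise standard deviations of the logical "1" and "0" rails at the sampling instant:

$$Q = \frac{\mu\_1 - \mu\_0}{\sigma\_1 + \sigma\_0}$$

A larger Q corresponds to a wider eye opening and hence a lower bit error rate, which is why it serves as a natural health status indicator here.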

**Figure 5.** Simulation model of bus of control system.

As shown in Figure 6, the value of O ranges from −22.65 dBm to −17.84 dBm, while Q ranges from 2.81 to 7.53. The curves of both O and Q decline from high to low. As shown in Figure 7, the health status grades are denoted by the *y*-axis. Meanwhile, as the failure degree of the bus of control system gradually increases, its health status passes through four stages in sequence: "Health" at first, followed by "Subhealth" and "Slight fault", and finally "Severe fault".

**Figure 6.** Observation data of O and Q.

**Figure 7.** The health status of the bus of control system.

### 5.1.2. The Procedures of Health Assessment

In this subsection, the implementation of the proposed model is carried out in the following steps:

### *Step 1: Transformation of input information*

According to the real health status of the bus of control system, the assessment result grades can be defined as four parts: *D* = {*D*1, *D*2, *D*3, *D*4} = {*Health*, *Subhealth*, *Slight fault*, *Severe fault*}. However, because "Subhealth" is a vague and random status deduced from the conjunction of multiple indicators, its reference value cannot be found in any individual indicator, resulting in a mismatch between the input and output grades. Thus, the input indicator reference grades are introduced as three parts: *H* = {*H*1, *H*2, *H*3} = {*High*, *Medium*, *Low*} = {*H*, *M*, *L*}. The reference values corresponding to the reference grades are determined by experts, as shown in Table 1.

**Table 1.** The reference values of input indicators.


**Remark 4.** *In Table 1, the expert gives the intervals of reference values corresponding to the reference grades, and the initial reference values are selected from these intervals. The reference values need to be optimized within the intervals*.

Once the antecedent and consequent parameters of the rule are determined, the transformation matrices can be constructed based on Formula (3), as shown in Table 2. It should be noted that there is no ignorance in the transformation matrices.


**Table 2.** The parameters of transformation matrices.

Based on Table 2, the values of the transformation matrices *A*1 and *A*2 can be introduced as follows.


Based on Formulas (5)–(9), the input information can be translated into initial evidence, as shown in Figures 8 and 9.
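As a sketch of this step, assuming Formulas (5)–(9) take the standard rule-based form of linear interpolation between adjacent reference values (the reference values below are illustrative midpoints of the Table 1 intervals, not the paper's exact choices):

```python
def to_belief(x, refs):
    """Transform an indicator reading x into a belief distribution over
    the reference grades (High, Medium, Low) by linear interpolation
    between adjacent reference values; refs is sorted from High to Low."""
    n = len(refs)
    beliefs = [0.0] * n
    if x >= refs[0]:                 # at or beyond the "High" reference
        beliefs[0] = 1.0
    elif x <= refs[-1]:              # at or below the "Low" reference
        beliefs[-1] = 1.0
    else:
        for i in range(n - 1):
            if refs[i] >= x >= refs[i + 1]:
                beliefs[i] = (x - refs[i + 1]) / (refs[i] - refs[i + 1])
                beliefs[i + 1] = 1.0 - beliefs[i]
                break
    return beliefs

# hypothetical reference values for Q drawn from the Table 1 intervals
print(to_belief(6.0, [9.0, 4.5, 1.5]))   # about [0.333, 0.667, 0.0]
```

Any reading between two reference values thus splits its belief between the two neighbouring grades, which is why the "Subhealth" grade cannot be anchored to a single indicator value.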

**Figure 8.** Belief distribution of O.

**Figure 9.** Belief distribution of Q.

It can be seen from Figures 8 and 9 that the belief distributions of the two indicators are transformed from the input information. The belief degrees of both indicators for "Health" are over 0.5 on the 0–100 sets of data, gradually decreasing to zero as the fault degree increases. The belief degree transformed to "Subhealth" is small for both O and Q, because the belief degree allocated to "Subhealth" by the expert in Table 2 is small. The belief degrees of "Slight fault" and "Severe fault" increase as the fault continues to aggravate, eventually reaching their maximum. Overall, in both figures, the declining trend of the health status conforms to the real status in Figure 7.

### *Step 2: Calculation of model parameters*

The evidence weights of the two indicators are set as 0.75 and 0.95, respectively. The static reliabilities of the two indicators are determined as 0.7 and 0.8, respectively, based on industry standards. Based on Formulas (11)–(14), the dynamic reliabilities are calculated as 0.4 and 0.5, respectively. The weighting factors *δ* are set to 0.8 and 0.9, respectively. Then, the reliabilities of the two indicators are 0.68 and 0.86, respectively.

### *Step 3: Aggregation of two indicators*

Based on Formulas (14)–(17), the ER rule can be used to aggregate the initial evidence, and the distributed health assessment results can be obtained, as shown in Figure 10.
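A minimal sketch of this aggregation, using the analytical evidential reasoning combination of weighted belief distributions (the reliability discounting of Formulas (11)–(14) is omitted for brevity, and the belief distributions below are illustrative, not taken from Figures 8 and 9):

```python
def er_aggregate(belief_list, weights):
    """Combine belief distributions from several indicators with the
    analytical evidential reasoning algorithm.  Each distribution is
    discounted by its evidence weight; the weight-induced unassigned
    mass is normalised out at the end."""
    n_grades = len(belief_list[0])
    m = [weights[0] * b for b in belief_list[0]]   # basic probability masses
    m_h = 1.0 - weights[0]                         # unassigned mass
    for bel, w in zip(belief_list[1:], weights[1:]):
        mi = [w * b for b in bel]
        mi_h = 1.0 - w
        conflict = sum(m[t] * mi[j]
                       for t in range(n_grades)
                       for j in range(n_grades) if t != j)
        k = 1.0 / (1.0 - conflict)                 # conflict normalisation
        m = [k * (m[t] * mi[t] + m[t] * mi_h + m_h * mi[t])
             for t in range(n_grades)]
        m_h = k * m_h * mi_h
    return [mt / (1.0 - m_h) for mt in m]

# illustrative belief distributions for O and Q at one sampling point,
# with the evidence weights 0.75 and 0.95 given in Step 2
out = er_aggregate([[0.6, 0.3, 0.1], [0.7, 0.2, 0.1]], [0.75, 0.95])
```

Because both pieces of evidence favour the first grade, the aggregated distribution concentrates further on it, mirroring how the combined "Health" belief in Figure 10 tracks the agreement between O and Q.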

**Figure 10.** The aggregated health status.

It can be seen in Figure 10 that the belief degree of "Health" is clearly divided into four stages. At first, the belief degree is close to 1, then it floats around 0.5 and 0.2, and finally approaches 0. Because the belief degrees of "Subhealth" for O and Q are small in Figures 8 and 9, the aggregated belief degree is near 0. The belief degrees of "Slight fault" and "Severe fault" increase, which is caused by the belief distributions of O and Q.

By introducing the expected utility, the belief distribution can be transformed into a numerical output. Define the utilities of the assessment result grades *D*1, *D*2, *D*3, *D*4 as *u*(*D*1) = 12, *u*(*D*2) = 7, *u*(*D*3) = 1, *u*(*D*4) = 0, respectively. Then, the assessment result of the initial model is shown in Figure 11. It shows that the simulated status fluctuates near the real status of the bus of control system in the first three stages and deviates from the real health status in the fourth stage. This basically matches the distributed assessment results in Figure 10. In fact, the deviation from the real status partly reflects the uncertainty of the observation data and the limitations of the expert's knowledge. Therefore, the initial assessment model needs to be optimized based on quantitative data.
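The utility transformation can be sketched as follows (the grade utilities are the initial values defined above; the belief distribution in the example is hypothetical):

```python
def expected_utility(beliefs, utilities):
    """Collapse a distributed assessment {(D_i, beta_i)} into a crisp
    health score via the expected utility sum(beta_i * u(D_i))."""
    return sum(b * u for b, u in zip(beliefs, utilities))

u = [12, 7, 1, 0]                                  # u(D1)..u(D4) from the text
score = expected_utility([0.8, 0.15, 0.05, 0.0], u)
print(score)                                       # 10.7, up to float rounding
```

The crisp score is what Figure 11 plots against the real status curve.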

**Figure 11.** The comparison between initial and real status.

### 5.1.3. Parameter Optimization and Comparative Study

The optimization model is constructed based on Formula (24), as follows:

$$\min(\text{RMSE}(\mathbf{Y})) \tag{30}$$

To ensure high accuracy while maintaining the interpretability of the assessment results, the constraints of the model parameters are determined by the expert as follows. The constraints of the indicator reference values are given as:

$$\begin{cases} -18.5 < H\_{1,Power} < -17\\ -21.5 < H\_{2,Power} < -19.5\\ -26 < H\_{3,Power} < -21.5\\ 8 < H\_{1,q} < 10\\ 3 < H\_{2,q} < 6\\ 0 < H\_{3,q} < 3 \end{cases} \tag{31}$$

The constraints of weight are given as:

$$\begin{cases} 0.5 < \omega\_{Power} < 0.8\\ 0.8 < \omega\_{q} < 1 \end{cases} \tag{32}$$

The constraints of expected utility are given as:

$$\begin{cases} 10 < u(D\_1) < 15 \\ 7 < u(D\_2) < 10 \\ 3 < u(D\_3) < 5 \\ 0 < u(D\_4) < 1 \end{cases} \tag{33}$$

The constraints of the transformation matrixes are given as:

$$\begin{cases} 0 \le a\_{i,j} \le 1\\ a\_{i-1,j} < a\_{i,j} < a\_{i+1,j}\\ \sum\_{j=1}^{N} a\_{i,j} = 1 \end{cases} \tag{34}$$

The above model can be optimized by the fmincon algorithm, which finds the minimum value of the objective function under nonlinear constraints. A total of 200 sets of training data are selected alternately from the 400 sets of data, and all 400 sets of data are used as test data. The optimized parameters are obtained, as shown in Tables 3 and 4.
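fmincon is a MATLAB routine; as a rough, self-contained stand-in, the sketch below minimises the RMSE objective over the expected-utility parameters only, by random search inside the box constraints of Formula (33). The belief distributions and "real status" values are hypothetical, and the full problem also carries the nonlinear constraints of Formula (34):

```python
import random

def rmse(pred, real):
    """Root mean square error between predicted and real status."""
    return (sum((p - r) ** 2 for p, r in zip(pred, real)) / len(real)) ** 0.5

# hypothetical distributed assessments and the real status they should match
beliefs = [[0.9, 0.1, 0.0, 0.0],
           [0.2, 0.6, 0.2, 0.0],
           [0.0, 0.1, 0.2, 0.7]]
real = [11.0, 8.0, 2.0]

def objective(u):
    # predicted status = expected utility of each belief distribution
    pred = [sum(b * ui for b, ui in zip(bel, u)) for bel in beliefs]
    return rmse(pred, real)

bounds = [(10, 15), (7, 10), (3, 5), (0, 1)]   # box constraints of Formula (33)
random.seed(0)
best_u, best_f = None, float("inf")
for _ in range(20000):
    u = [random.uniform(lo, hi) for lo, hi in bounds]
    f = objective(u)
    if f < best_f:
        best_u, best_f = u, f
```

In practice a gradient-based constrained solver such as fmincon (or `scipy.optimize.minimize` with SLSQP) would also enforce the sum-to-one constraints on the transformation matrices; the random search above only illustrates the box-constrained part.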


**Table 3.** The optimized transformation matrices.

**Table 4.** The optimized expected utility.


**Remark 5.** *By carefully comparing Tables 2 and 3, it can be found that the reference values of the indicators are not significantly changed. There are two reasons for this phenomenon:*


The optimized model is compared with the initial model as follows. It is shown in Figure 12 that the optimized simulated status is closer to the real status than the initial simulated status, especially in the "Severe fault" status, where the optimized simulated status fluctuates less than the initial one. To further illustrate the effectiveness of the proposed method, the following comparative studies are conducted.

**Figure 12.** The comparison between the optimized model and initial model.

(1) The comparison under the ER rule framework

In this part, the traditional ER rule (model 1) and DS evidence theory (model 2) under the ER rule framework are employed for comparison with the proposed model. The consistency between the input and output grades needs to be guaranteed in model 1 and model 2. Therefore, first, an indicator reference grade is added to make the reference grades consistent with the assessment result grades. The initial reference value of "Subhealth" is given as the average of the "Health" and "Slight fault" values. The reference values of both model 1 and model 2 are given in Table 5. In model 1, the reliabilities are set the same as in the proposed model.

**Table 5.** The reference values of input indicators.


The constraints of the indicator reference values are given as Formula (35), and the constraints of the weights and expected utilities are set the same as in Formulas (32) and (33). The constraints are the same in model 1 and model 2, except that the weights and reliabilities are set to 1 in model 2. The same training data are used to optimize the model parameters, and all 400 sets of data are employed as test data.

$$\begin{cases} -18.5 < H\_{1,Power} < -15\\ -20.5 < H\_{2,Power} < -19.5\\ -21.5 < H\_{3,Power} < -20.5\\ -26 < H\_{4,Power} < -21.5\\ 8 < H\_{1,q} < 10\\ 6 < H\_{2,q} < 8\\ 3 < H\_{3,q} < 6\\ 1 < H\_{4,q} < 3 \end{cases} \tag{35}$$

The comparison between the actual and simulated results is shown in Figure 13. It can be seen that the proposed model fluctuates less and is much closer to the real status than model 1 and model 2, especially in the "Health" status. To further compare the accuracy of the different models, the root mean square error is calculated, as shown in Table 6. As can be seen from Table 6, the assessment accuracy of the proposed model is the highest: compared with model 1 and model 2, it is improved by 23.13% and 27.48%, respectively. In view of the above analysis, it can be concluded that the proposed model is more accurate than the other methods under the ER rule framework.
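The quoted percentage improvements follow the usual relative-RMSE form; with hypothetical RMSE values for illustration (the actual Table 6 values are not reproduced here):

```python
def improvement(rmse_other, rmse_proposed):
    """Relative accuracy improvement of the proposed model, in percent:
    how much lower its RMSE is than a competing model's RMSE."""
    return (rmse_other - rmse_proposed) / rmse_other * 100.0

# hypothetical RMSE values for a competing model and the proposed model
print(round(improvement(0.80, 0.60), 2))   # 25.0
```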

**Figure 13.** The comparison under ER rule framework.


**Table 6.** Comparison of assessment accuracy under ER framework.

(2) The comparison with data-based models

In this part, a comparative study is implemented by adopting data-based methods, including a backpropagation (BP) neural network and support vector regression (SVR). Some details of the BP model parameters are shown in Table 7. The same training data and test data are utilized. The comparison results between the simulated and actual status are shown in Figure 14.
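A minimal sketch of the two baselines with scikit-learn (the network size, kernel, and the synthetic data below are assumptions for illustration; Table 7 holds the paper's actual BP settings):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

# synthetic stand-in data: two indicators (O in dBm, Q factor) -> health score
rng = np.random.default_rng(0)
X = rng.uniform(low=[-26.0, 0.0], high=[-17.0, 10.0], size=(200, 2))
y = 0.5 * (X[:, 0] + 26.0) + 0.8 * X[:, 1]        # toy ground-truth score

# BP network stand-in: a small MLP trained with backpropagation
bp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000,
                  random_state=0).fit(X, y)
# support vector regression baseline
svr = SVR(kernel="rbf", C=10.0).fit(X, y)

print(bp.predict(X[:1]), svr.predict(X[:1]))
```

Both baselines learn the indicator-to-status mapping purely from data, with no expert knowledge, which is the contrast the comparison in Table 8 is designed to draw out.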

**Table 7.** The parameters of the BP models.


**Figure 14.** The comparison of data-based models.

As can be seen in Figure 14 and Table 8, the proposed model has high accuracy, second only to the BP model, and its accuracy is improved by 6.52% compared with the SVR.

**Table 8.** Comparison of assessment accuracy of data-based model.


At the same time, to further compare the proposed model and the BP model, 10%, 25%, 50%, and 60% of the whole data set are randomly selected as the training set, and the whole data set is used as the test set. The comparative accuracy of the proposed model is calculated as follows.

As shown in Table 9, when the training set is a randomly selected 10% or 25% of the data set, the accuracy of the proposed model is higher than that of the BP model. When the training set is 50% or 60% of the data set, the accuracy of the ER model is worse than that of the BP model. This shows that the proposed model can achieve accurate health assessment by aggregating expert knowledge and observation data in the case of limited observation data and sufficient prior knowledge.


**Table 9.** Comparative accuracy of proposed model and BP model.

(3) The comparison with knowledge-based models

Belief rule base (BRB) and fuzzy reasoning (FR) are typical qualitative knowledge-based methods. In this part, BRB and FR are implemented for comparison with the proposed model. The same training data and test data are selected. The initial parameters of BRB are determined by expert knowledge, as shown in Table 10, and some parameters of the fuzzy reasoning model are given in Table 11.



**Table 10.** The initial parameters of the BRB model.

**Table 11.** The parameters of the FR model.


It is shown in Figure 15 that the assessment results of FR and BRB are relatively scattered and far from the real status. Compared with BRB and FR, the accuracy of the proposed model is improved by 19.28% and 16.25%, respectively, as shown in Table 12. It can be concluded that the proposed model is more accurate than the qualitative knowledge-based models.

**Figure 15.** Comparison of qualitative knowledge-based models.

**Table 12.** Comparison of assessment accuracy of knowledge-based model.


### *5.2. Example 2—Health Assessment of Engine*

In this subsection, the WD615 engine is taken as a case to verify the effectiveness of the proposed model for complex systems. The background is described in Section 5.2.1. The implementation of the proposed model is carried out in Section 5.2.2. In Section 5.2.3, the comparative study is conducted.
