*3.2. Models*

The general architecture of an aeronautical EMA is extremely complex, being characterized by multibody interactions and multiple nonlinearities. It can be summed up as follows: a controller, an electric motor (usually a PMSM or a BLDC motor), a gearbox, and an intricate network of sensors measuring currents, vibrations, voltages, temperatures, etc. As a result, the system is made up of multiple interconnected hardware and software components. Their cross-interactions must also be investigated, so that the control system can appropriately fulfill its functions of monitoring, fault detection, and assessment of a possible divergence route.

As stated before, two models have been assembled in Simulink and then validated against experimental test rigs. The models are physics-based: each real-life component is modelled through the appropriate equations. In other words, each block encloses the mathematical formulation of the corresponding physics (e.g., electromagnetic equations for the motor, dynamic equations for the mechanical systems, etc.).

While the RM has been built with a very high degree of precision, the MM has been conceived with approximations and lumped parameters, so that an almost real-time use becomes feasible; the computational cost is, hence, reduced. A thorough description of the models is beyond the scope of this work; the interested reader may refer to [36,37] and [50,51], as far as the RM and the MM are concerned, respectively. Furthermore, as explained in [36,37,50,51], the models have been built to represent an existing experimental test bench located in the Politecnico di Torino laboratories. In order to match the models and the experimental results, the test bench structure has been reproduced in Simulink through simulation blocks. The experimental test bench has been pivotal in the model development: in fact, validation is essential to assess model simulation performance [52].

The main parameters used in the models are, hence, taken from component datasheets and technical documents. The hardware configuration is based on the "S120 AC/AC Trainer Package" by Siemens. The motor parameters are reported in Table 2. The PID controller parameters have been tuned through experimental tests to match the real and simulated behaviours.

For reasons of clarity, a simple logical scheme of the RM model is shown in Figure 6, while its four main blocks are briefly analysed below:

**Figure 6.** RM model structure as taken from [41].

1. The calculation of the counter-electromotive force coefficient *cj* for each phase. This is achieved by multiplying the back-EMF coefficients (obtained through experimental test campaigns) by three sine waves 120° out of phase from each other.
2. The implementation of the motor resistive-inductive circuit. A set of mathematical equations (Equation (5)) modelling the three star-connected RL branches is solved, and the phase currents (*ij*) are, hence, calculated. The resistance *Rj* and inductance *Lj* of the motor are taken from equipment data sheets.

$$\begin{aligned} \sum_{j=1}^{3} i_j &= 0\\ V_j - c_j \omega &= R_j i_j + L_j \frac{\mathrm{d}i_j}{\mathrm{d}t} \end{aligned} \tag{5}$$

where *ij* and *Vj* are the current and voltage across a single *j*-th phase, *cj* is the corresponding back-EMF coefficient, and *ω* is the motor angular speed.

3. The calculation of the available motor torque. The three electromotive coefficients are used, along with the relative phase currents, to calculate the motor torque:

$$T_m = \sum_{j=1}^{3} i_j c_j, \tag{6}$$

4. The motor transmission dynamics: this final block compares the available torque with the external requested torque and solves a second-order dynamical system (Equation (7)), comprehensive of multiple nonlinearities such as dry friction and backlash [52]. The outputs of this block are the motor position and speed, which are looped back to the controller.

$$T_m - T_l = J_m \frac{\mathrm{d}^2\theta_m}{\mathrm{d}t^2} + C_m \frac{\mathrm{d}\theta_m}{\mathrm{d}t}, \tag{7}$$

where *Tm* represents the motor torque and *Tl* the external load torque; *Jm* represents the assembly inertia, *Cm* is the viscous friction coefficient, and *θm* is the motor position.
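To make the signal flow of Figure 6 concrete, the following minimal Python sketch chains the four blocks with explicit-Euler integration. All numerical parameters, the crude six-step commutation law, and the function names are illustrative assumptions for this sketch, not the validated Simulink implementation or the Siemens datasheet values.

```python
# Minimal numerical sketch of the RM motor blocks (Equations (5)-(7)).
# All parameter values below are illustrative placeholders.
import numpy as np

R, L = 1.2, 6.5e-3         # phase resistance [Ohm] and inductance [H] (assumed)
J, C = 1e-4, 5e-4          # assembly inertia [kg m^2], viscous friction [N m s/rad] (assumed)
C_EMF, P = 0.3, 4          # back-EMF constant [V s/rad] and pole pairs (assumed)
DT = 1e-5                  # integration step [s]

def back_emf_coeffs(theta):
    """Block 1: the three back-EMF coefficients c_j, 120 deg out of phase."""
    return C_EMF * np.sin(P * theta - np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3]))

def step(theta, omega, i, v, t_load):
    """One explicit-Euler step through blocks 2-4."""
    c = back_emf_coeffs(theta)
    di = (v - c * omega - R * i) / L            # Equation (5): RL phase circuits
    t_m = np.sum(i * c)                         # Equation (6): available torque
    domega = (t_m - t_load - C * omega) / J     # Equation (7): transmission dynamics
    return theta + omega * DT, omega + domega * DT, i + di * DT

theta, omega, i = 0.0, 0.0, np.zeros(3)
for _ in range(10_000):                         # 0.1 s of simulated time
    v = 24.0 * np.sign(back_emf_coeffs(theta))  # crude six-step commutation (assumed)
    theta, omega, i = step(theta, omega, i, v, t_load=0.05)
print(f"motor speed after 0.1 s: {omega:.1f} rad/s")
```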

**Table 2.** PMSM motor parameters. The motor part number is 1FK7060-2AC71-1CA0, provided by Siemens. The values 60 K and 100 K refer to winding overtemperature values of 60 K and 100 K, respectively.


Following the results of a detailed EMA failure mode, effects, and criticality analysis (FMECA) found in the literature [9], it was decided to take into account five distinct failures. These failures show a medium-high or high probability and/or criticality for the overall EMA: dry friction, backlash, short circuit, eccentricity, and proportional gain drift. On top of that, they show a quite slow propagation rate, which is desirable when designing failure detection and identification methodologies. For each failure mode, one or more *ki* components have been assigned, and two different magnitude levels have been simulated: a high magnitude (*ki* = 0.75) and a low one (*ki* = 0.25). In order to complete the analysis, a condition with multiple failures affecting the EMA has also been taken into account: the $\vec{k}$ vector used in this case is the one shown in Equation (8).

$$
\vec{k} = [0.0133 \ 0.05 \ 0.003 \ 0 \ 0 \ 0.012 \ 0.5 \ 0.35], \tag{8}
$$

A more in-depth explanation of the failure implementation and of the modelling of each failure mode can be found in [40,41].
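The exact component ordering of $\vec{k}$ is documented in [40,41]; the sketch below reflects one plausible reading, consistent with Equations (8) and (9) (dry friction, backlash, three per-phase short-circuit fractions, eccentricity magnitude and phase, proportional gain drift), and should be treated as an assumption rather than the reference implementation.

```python
# Hedged sketch of how the fault vector k could parameterize the models.
# The component ordering is our reading of Equations (8)-(9), not a
# verbatim reproduction of the implementation in [40,41].
from dataclasses import dataclass

@dataclass
class FaultVector:
    k_friction: float    # dry friction growth (0 = nominal)
    k_backlash: float    # backlash growth
    k_short_a: float     # fraction of shorted turns, phase A (assumed ordering)
    k_short_b: float     # phase B
    k_short_c: float     # phase C
    k_ecc: float         # static eccentricity magnitude
    k_ecc_phase: float   # eccentricity angular position
    k_gain: float        # proportional gain drift

    def scaled_parameters(self, nominal_friction, nominal_backlash, nominal_gain):
        """Illustrative mapping from fault components to degraded parameters."""
        return (nominal_friction * (1.0 + self.k_friction),
                nominal_backlash * (1.0 + self.k_backlash),
                nominal_gain * (1.0 - self.k_gain))

# The multiple failure condition of Equation (8):
k_multi = FaultVector(0.0133, 0.05, 0.003, 0, 0, 0.012, 0.5, 0.35)
```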

#### **4. Results**

In this section, the mean percentage error and the mean computational cost for each single failure and for the multiple failure situation are presented (Figures 7 and 8). The error has been calculated following Equation (9).

$$Err[\%] = \frac{100}{\sqrt{6.5}} \cdot \sqrt{\sum_{i=1}^{6} (k_i - k_{i,\mathrm{RM}})^2 + k_{6,\mathrm{RM}} (k_7 - k_{7,\mathrm{RM}})^2 + (k_8 - k_{8,\mathrm{RM}})^2}, \tag{9}$$

As stated before, for each failure, a low magnitude condition (*ki* = 0.25) and a high magnitude condition (*ki* = 0.75) have been considered.
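Equation (9) translates directly into code. In the sketch below we assume the 8-component fault vectors of Equation (8) and read the *k*6,RM factor as a deliberate weight on the eccentricity phase term (a phase error is meaningless when the eccentricity magnitude is zero); this interpretation is ours.

```python
# Equation (9) as code: percentage error between the estimated fault
# vector k and the RM reference vector k_rm (both 8 components, indexed
# from zero). The weighting of the phase term follows our reading of (9).
import numpy as np

def err_percent(k, k_rm):
    k, k_rm = np.asarray(k, float), np.asarray(k_rm, float)
    s = np.sum((k[:6] - k_rm[:6]) ** 2)        # components k_1 ... k_6
    s += k_rm[5] * (k[6] - k_rm[6]) ** 2       # eccentricity phase, weighted by k_6,RM
    s += (k[7] - k_rm[7]) ** 2                 # proportional gain component
    return 100.0 / np.sqrt(6.5) * np.sqrt(s)

# Example: a perfect identification yields 0% error.
k_ref = [0.0133, 0.05, 0.003, 0, 0, 0.012, 0.5, 0.35]   # Equation (8)
print(err_percent(k_ref, k_ref))                        # -> 0.0
```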

These data have been obtained through ten runs of each algorithm, for a total of 300 runs, to ensure a minimal statistical relevance. The employed computer is based on an Intel Core i7-8750H CPU @ 2.2 GHz, with 16 GB of RAM and a dedicated NVIDIA GeForce GTX 1050 GPU. DE shows a slightly lower error than the other algorithms as far as the low intensity failures are concerned. On the other hand, GWO seems to be the most precise solution for the high level failures (e.g., short circuit and proportional gain). Finally, the PSO algorithm turns out to be the most accurate for backlash faults in general, high static eccentricity, and low proportional gain. Apparently, no algorithm is able to outperform the others in every situation; this was expected, as metaheuristic algorithms, despite being very versatile, are not a panacea for every problem. However, looking at the computational cost, it is clear at first glance that PSO is the most efficient algorithm. In fact, apart from the static low eccentricity failure, this optimization technique shows a computational burden almost halved with respect to the other algorithms, with an average of 25 min to detect and identify the failures. Considering the two remaining algorithms, GWO is slightly faster than DE, but the latter seems to provide more stable and repeatable results.
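For reproducibility, the benchmarking protocol described above (ten runs per algorithm and failure case, 300 runs in total) can be summarized by the following sketch; `run_identification` is a hypothetical placeholder for one complete MM-versus-RM optimization run, not an actual routine from this work.

```python
# Sketch of the benchmarking protocol: mean error and mean wall-clock
# time over ten runs per (algorithm, failure case) pair.
import random
import statistics
import time

def run_identification(algorithm, failure_case):
    """Hypothetical placeholder: a real run would launch the optimizer
    against the RM and return its percentage error (Equation (9))."""
    return random.uniform(1.0, 10.0)

def benchmark(algorithms, failure_cases, runs=10):
    results = {}
    for alg in algorithms:
        for case in failure_cases:
            errs, times = [], []
            for _ in range(runs):
                t0 = time.perf_counter()
                errs.append(run_identification(alg, case))
                times.append(time.perf_counter() - t0)
            results[(alg, case)] = (statistics.mean(errs), statistics.mean(times))
    return results  # 3 algorithms x 10 cases x 10 runs = 300 runs, as in the text
```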

**Figure 7.** Comparison between different algorithms: mean percentage error [%]. For each failure mode and each algorithm, the failure magnitudes are selected as follows: high magnitude (*ki* = 0.75) and low magnitude (*ki* = 0.25).

**Figure 8.** Comparison between different algorithms: computational cost [s]. For each failure mode and each algorithm, the failure magnitudes are selected as follows: high magnitude (*ki* = 0.75) and low magnitude (*ki* = 0.25).

The authors then used a predefined performance coefficient (PC), first defined in [38] and reported in Equation (10), to take into account both the accuracy (mean percentage error) and the computational cost in a single parameter, thus providing an objective criterion for choosing the best solution overall.

$$PC_i = 100 \cdot \left(1 - \frac{t_i \cdot err_i}{\sum_{j=1}^{3} t_j \cdot err_j} \right), \tag{10}$$

In Equation (10), *PCi* is the performance coefficient of the *i*-th algorithm, *ti* is the average computational time of the *i*-th algorithm for the considered fault, and *erri* is its average percentage error. The rationale behind the formula is as follows: the denominator makes the result non-dimensional, while the multiplication and the subtraction transform it into a percentage value. The results of this further investigation are reported in Table 3, along with other data summing up the analysis. For reasons of brevity, the reported values of average percentage error and computational cost are obtained by averaging over the two failure magnitudes. As expected from combining the best results in terms of accuracy and efficiency, PSO shows the highest figures for every failure. This happens not so much because of the error as because of the significantly lower computational cost of the PSO-based solution.
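A minimal transcription of Equation (10) follows; the numerical values in the usage line are placeholders chosen for illustration, not the figures of Table 3.

```python
# Equation (10) as code: performance coefficients for the three
# algorithms, from mean error [%] and mean computational time [s].
import numpy as np

def performance_coefficients(t, err):
    """PC_i = 100 * (1 - t_i * err_i / sum_j t_j * err_j)."""
    cost = np.asarray(t, float) * np.asarray(err, float)
    return 100.0 * (1.0 - cost / cost.sum())

# Illustrative usage for (PSO, GWO, DE): the lowest time-error product
# yields the highest PC.
print(performance_coefficients(t=[1500, 2800, 3100], err=[3.0, 4.0, 3.5]))
```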

At the opposite end of the ranking stands the DE algorithm, mainly due to its very high computational cost.

**Table 3.** Outcomes of the different optimization algorithms with single failures [10]. The values related to the best-performing algorithm (PSO) are highlighted in bold, along with the relative PCs.


The multiple failure condition results are reported in Table 4. Once again, it is clear that PSO is the leading algorithm, both in terms of efficiency and accuracy. Interestingly, the multiple failure condition requires less time than the single failure case to reach a result. This is probably due to the stochastic operating principles of the algorithms: as the $\vec{k}$ vector contains random values (Equation (8)), rather than just one component dissimilar from its nominal value, it is believed that the algorithms are able to approach the error tolerance more quickly.

**Table 4.** Outcomes of the different optimization algorithms with multiple failures [10]. The values related to the best-performing algorithm (PSO) are highlighted in bold, along with the relative PCs.


#### **5. Discussion**

By looking at the results, PSO is the leading algorithm, providing the best results for both the single and the multiple failure conditions. It is deemed that this algorithm is able to outperform the others due to its main constructive principle: unlike GWO, there is a strong collaboration and sharing of information between particles. Each particle contributes to the creation of the aforementioned collective intelligence by sharing the information on the best position it has found in the solution space; hence, the likelihood of finding ever better solutions rises drastically. This does not happen with GWO, which follows a strong hierarchical law. DE showed markedly worse results, and its implementation is quite challenging, too. It has to be noted that, even though the algorithms are metaheuristic and pose few assumptions on the application, they are very sensitive to the problem statement; thus, an algorithm could behave very badly in a specific situation, but could provide excellent results when approaching an even slightly different problem.
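The information-sharing mechanism credited above for PSO's advantage is visible in its core update rule, sketched below on a toy objective; the hyperparameters are common textbook defaults, not the tuned values used in this study.

```python
# Minimal PSO update loop: every particle's velocity is attracted both
# to its own best position and to the swarm's shared global best.
import numpy as np

rng = np.random.default_rng(0)
W, C1, C2 = 0.7, 1.5, 1.5          # inertia, cognitive and social weights (assumed)

def objective(x):                  # toy stand-in for the MM-vs-RM error of Equation (9)
    return np.sum(x ** 2, axis=1)

x = rng.uniform(-1.0, 1.0, (30, 8))   # 30 particles in an 8-dimensional fault space
v = np.zeros_like(x)
p_best, p_val = x.copy(), objective(x)
for _ in range(100):
    g_best = p_best[np.argmin(p_val)]           # the shared collective knowledge
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = W * v + C1 * r1 * (p_best - x) + C2 * r2 * (g_best - x)
    x = x + v
    f = objective(x)
    better = f < p_val
    p_best[better], p_val[better] = x[better], f[better]
print(f"best objective after 100 iterations: {p_val.min():.2e}")
```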

The most challenging failure to detect and identify is the proportional gain drift. This can be traced back to the fact that a controller issue affects every single aspect of the actuation system; hence, the prognostic system struggles to isolate the specific problem. Percentage errors for this failure mode, albeit higher than those relative to the other failures, remain around 8%, which is indeed a satisfactory result.

After checking the results of the different algorithms, the authors envisioned a wider and more comprehensive concept of operations that could provide a practical application of the prognostic checks. The proposed monitoring framework could be easily implemented in a check routine run both when the aircraft is on the ground and during the flight to assess the EMA subsystem health status. For instance, a monitoring procedure can be run while the aircraft is at the gate waiting for passengers, during maintenance checks, or even during 24-h or walk-around checks. Additionally, throughout the flying phase, a test procedure can be carried out at predetermined intervals. In this way, the EMA health status is assessed and mission safety is guaranteed. As seen before, the proposed checks cannot be run in real time, due to the required number of simulation runs; on the other hand, a real-time monitoring capability is deemed superfluous, as failures usually have a progression curve that can be monitored. The proposed prognostic strategy is applicable to a wide range of actuators and does not require the installation of any additional sensors, which is a highly sought-after requirement. A prognostic and health management computer (PHMC) in the avionic bay is needed to perform the calculations.
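As a closing illustration, the envisioned concept of operations could be organized as a simple state-gated routine on the PHMC; the states, threshold, and function names below are illustrative assumptions, not a specification from this work.

```python
# Hedged sketch of the envisioned PHMC check routine: run the
# (non-real-time) identification only when the aircraft state allows it.
from enum import Enum, auto

class AircraftState(Enum):
    AT_GATE = auto()
    MAINTENANCE_CHECK = auto()
    CRUISE = auto()
    HIGH_WORKLOAD_PHASE = auto()   # e.g., takeoff or landing: no checks here

CHECK_ALLOWED = {AircraftState.AT_GATE,
                 AircraftState.MAINTENANCE_CHECK,
                 AircraftState.CRUISE}

def phm_check(state, identify_faults, alert_threshold=0.2):
    """Run one EMA health check if the current state allows it.

    `identify_faults` stands in for the PSO-based identification
    (about 25 min per run); the threshold is a placeholder.
    """
    if state not in CHECK_ALLOWED:
        return None
    k_estimate = identify_faults()
    degraded = {i: k for i, k in enumerate(k_estimate) if k > alert_threshold}
    return degraded or None        # a non-empty dict would raise a maintenance alert
```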
