Article

Flush Airdata System on a Flying Wing Based on Machine Learning Algorithms

Yibin Wang, Yijia Xiao, Lili Zhang, Ning Zhao and Chunling Zhu
1 College of Aerospace Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210095, China
2 AVIC Chengdu Aircraft Industrial (Group) Co., Ltd., Chengdu 610092, China
* Author to whom correspondence should be addressed.
Aerospace 2023, 10(2), 132; https://doi.org/10.3390/aerospace10020132
Submission received: 20 November 2022 / Revised: 10 January 2023 / Accepted: 26 January 2023 / Published: 31 January 2023

Abstract
The flush airdata sensing (FADS) system, which uses an array of pressure sensors distributed on the surface of an aircraft to measure the pressure at each port, is widely applied in many modern aircraft and unmanned aerial vehicles (UAVs). Normally, the pressure transducers of an FADS system are mounted on the leading edge of the aircraft, where they are sensitive to changes in pressure. For UAVs, however, the leading edges of the nose and wing may not be available for pressure transducers. In addition, the number of transducers is limited to 8–10, making it difficult to maintain accuracy with the conventional FADS methods. In this study, an FADS system model for an unmanned flying wing was developed, with all pressure transducers located outside the leading-edge regions. The locations of the transducers were selected using the mean impact value (MIV), and ensemble neural networks were developed to predict the airdata with a very limited number of transducers. Furthermore, an error detection method was developed based on artificial neural networks and random forests. The FADS system model can accurately detect a malfunctioning port and use the correct pressure combination to predict the Mach number, angle of attack, and angle of sideslip with high accuracy.

1. Introduction

Intrusive booms are widely used on aircraft to measure their airdata, even on some unmanned aerial vehicles (UAVs) [1,2,3]. However, at a high angle of attack or during dynamic maneuvering, intrusive booms may lead to inferior handling qualities [4]. Furthermore, due to their intrusive nature, vibrations caused by aerodynamic forces, poor alignment, and physical damage during operation or maintenance may significantly degrade the accuracy of the protruding probes, restricting their application to certain cutting-edge aircraft. To overcome these problems, in the 1960s, the National Aeronautics and Space Administration (NASA) began work on an embedded airdata sensing system. The prototype of the flush airdata sensing (FADS) system first appeared on the X-15. This prototype had a very complex mechanical structure, but its performance was poor; thus, it was not feasible to use this mechanical structure to estimate the airdata.
In the Shuttle Entry Air Data System (SEADS) project, Siemers et al. proposed using pressure sensor arrays distributed over the nose of a spacecraft to measure pressure values and predict airdata from those pressures [5]. In the 1980s, NASA demonstrated the feasibility of this approach through wind tunnel experiments. In the early 1990s, NASA installed the high-angle-of-attack flush airdata sensing system (HI-FADS) on the F-18 High Alpha Research Vehicle (HARV). Whitmore et al. [6] described the design and development of the whole HI-FADS system, including the system hardware, the airdata calculation algorithm, and the system calibration. Later, in order to improve the accuracy, Whitmore et al. [7] proposed a "Triples Algorithm" to calculate the angle of attack and the sideslip angle, where three ports on the vertical midline are used to calculate the angle of attack, and the sideslip angle can be calculated using any combination of pressure-measuring ports other than those located on the vertical midline. Therefore, for the Triples Algorithm, the locations of the pressure ports are subject to some constraints. Although the Triples Algorithm used on the X-33 is more secure and stable than the nonlinear regression method developed previously, it still has some problems; for instance, the convergence and the stability of the method depend on the ratio of dynamic pressure (q_c) to static pressure (P), and some parameters also need to be calibrated. The errors of the angle of attack and the sideslip angle calculated by the FADS algorithm on the X-33 are within ±0.5°. Recently, Jiang et al. [8] combined Triples-Algorithm-based coarse estimation with least-squares-based precise estimation to establish a mathematical model for the FADS system of a Mars entry vehicle [9]. However, although the method could accurately predict the angle of attack and the sideslip angle, the errors of the Mach number and static pressure were relatively large. In recent years, Karlgaard et al. [10] proposed the use of an inertial navigation system (INS) to assist the FADS system in obtaining airdata. They proposed an algorithm that more closely couples the INS with the FADS, and the results were compared with those of the traditional FADS algorithm and another INS-assisted FADS algorithm [11]. It was found that the FADS algorithm could predict the Mach number more accurately with the help of the INS. In 2017, Millman improved the Triples Algorithm based on the potential flow model and strategic differencing [12], making the arrangement of the pressure measurement locations in the FADS system far less restrictive than before. However, this method still has some constraints on the geometry of the nose and the locations of the pressure transducers.
Since the 1980s, artificial neural networks (ANNs) have been widely applied to construct surrogate models. Artificial neural networks have also been applied to FADS systems [13], where the pressure value at each pressure measurement port is taken as the input, the airdata parameters of interest (e.g., angle of attack, sideslip angle, Mach number) are taken as the output, and the neural network is used to establish a mapping relationship between the input and output vectors. Typically, this input–output relationship is very complex and highly nonlinear. A neural network used in an FADS system has good real-time performance, good stability, and geometric independence, but it requires a large amount of training data; furthermore, as the amount of data increases, the required training time also increases. In the 1990s, Rohloff and Catton studied the feasibility of applying artificial neural networks to flush airdata sensing, and it was demonstrated that the mapping relationship between pressure values and airdata can be established by artificial neural networks [14]. Shortly thereafter, in 1998, Rohloff et al. put this idea into practice by constructing artificial neural networks to calculate static and dynamic pressures [15]. Their accuracy was comparable to that of the real-time FADS (RT-FADS) system developed by Whitmore et al. [4], with lower maximum and average errors. However, the fault-tolerance capability of the system was insufficient and needed to be improved. In 1999, therefore, Rohloff et al. proposed a fault-tolerant artificial neural network algorithm [16]. This algorithm combines the aerodynamics model and 20 artificial neural networks to estimate the static and dynamic pressure data separately according to different Mach ranges, and another artificial neural network is used to eliminate the errors of the angle of attack and sideslip angle. Meanwhile, it also gives the system a certain redundancy and fault management ability, and even if some of the transducers are out of order or damaged, the pressure combination containing the faulty pressure signal can be easily eliminated. The results show that the accuracy of the system is comparable to that of RT-FADS [4]. Moreover, it has better stability than the FADS system based on the aerodynamics model. In 2015, Rohloff et al. [17] also proposed a fault detection algorithm for the previously developed neural network flush airdata sensing (NNFADS) algorithm. The fault detection algorithm uses chi-squared tests to filter the signals and detects the port combinations containing abnormal pressure signals. In addition, it was found that the new NNFADS system shows good generalization ability, and its accuracy can meet the necessary requirements, meaning that neural network technology can be successfully applied to the airdata prediction of FADS.
Many traditional FADS algorithms, such as the Triples Algorithm, need to calibrate the angle of attack and sideslip angle. Crowther et al. proposed the use of artificial neural networks to correct these parameters [18]. They also studied the stability of the neural network when a single transducer malfunction occurs (where the measured pressure drops to zero), finding that the root mean square error increases when there is a transducer malfunction. In addition, the effects of different fault-handling methods were compared. Recently, the FADS system has been applied and studied not only in fighter and hypersonic aircraft but also in small unmanned aerial vehicles (sUAVs), and its fault tolerance and fault robustness have been studied and tested. Two methods for tackling faulty ports were studied: the first was to replace the faulty port with adjacent ports, while the other was to replace the wrong port readings with the corresponding outputs of an autocorrelated neural network. It was found that both methods can effectively reduce the root mean square error (RMSE) in predicting airdata. Recently, Jia et al. [19] proposed an error detection method for FADS systems by combining interval-valued neutrosophic sets (IVNSs), a belief rule base (BRB), and Dempster–Shafer (D–S) evidence reasoning. However, these methods generally rely on the aerodynamic model, which limits the locations of the transducers.
Normally, the pressure transducers of the FADS system are installed on the leading edge of the aircraft, where they are sensitive to pressure changes when the Mach number, angle of attack, or sideslip angle varies. However, the leading edge of an aircraft may host other devices, such as radar or deicing systems; thus, it may not be possible to install the pressure transducers in this region. For an unmanned aerial vehicle with a limited surface area, especially when pressure transducers cannot be installed on the leading edges of the wing and nose and the number of transducers is limited, it is difficult to directly use the aforementioned methods to design the FADS system. In this study, an FADS system for an unmanned flying wing was developed, and the pressure transducers were all located outside the leading-edge area. The locations of the transducers were selected using the mean impact value (MIV), and ensemble neural networks were developed to predict the airdata. Furthermore, a fault port detection method was developed based on artificial neural networks and random forests. The FADS system could accurately detect the malfunctioning port and use the correct pressure combination to predict the Mach number, angle of attack, and sideslip angle with high efficiency and accuracy.

2. Port Location Selection

The positioning of the sensors is a key issue for the prediction accuracy of the numerical model, which needs to be tackled first. Normally, the leading edge of the wing and of the nose is preferable; however, for a flying wing, these regions may have already been taken up—for instance, the nose may host a radar system. Thus, a method for determining the positions of these sensors is required. Two methods are compared in this paper: one is the cost function proposed by Laurence III et al. [20], and the other is the mean impact value (MIV) [21]. Laurence III et al. suggested that high-priority locations should have a large range in pressure (the sensor noise/bias would therefore have less effect on the estimation of α and β), and that the pressure should respond smoothly to variations in α and β. Thus, two cost functions to select the positions of the sensors for an sUAV were proposed. The first cost function, which selects the preferred locations for the prediction of α, is as follows:
J_1 = RMSE / P_r
where RMSE is the root mean square error of the fit between pressure, α, and β, while P_r is the difference between the maximum and minimum pressure (for a specified grid point). The second cost function, which selects the preferred locations for the prediction of β, is as follows:
J_2 = [(RMSE − RMSE_l) / (RMSE_u − RMSE_l)] × 0.2 + [(P_rβ − P_rβ,min) / (P_rβ,max − P_rβ,min)] × 0.8
where P_rβ is the average range of pressure across the sideslips (averaged over all β for a given α), P_rβ,min and P_rβ,max are the minimum and maximum average sideslip ranges, respectively, and RMSE_l and RMSE_u are the lower and upper limits of acceptable RMSE values, respectively.
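For illustration, the two cost functions can be evaluated as in the following minimal Python sketch; the function and argument names (j1, j2, fit_rmse, the normalization bounds, etc.) are placeholders introduced here rather than notation taken from [20].

```python
import numpy as np

def j1(pressures, fit_rmse):
    """Cost J1 for one candidate port: fit RMSE divided by the pressure range P_r.
    `pressures` holds the port's pressure over the (alpha, beta) grid; `fit_rmse`
    is the RMSE of the fit between pressure, alpha, and beta."""
    p_r = np.max(pressures) - np.min(pressures)
    return fit_rmse / p_r

def j2(fit_rmse, rmse_l, rmse_u, p_rb, p_rb_min, p_rb_max):
    """Cost J2 for one candidate port: weighted sum of the normalized fit RMSE
    and the normalized average sideslip pressure range."""
    rmse_term = (fit_rmse - rmse_l) / (rmse_u - rmse_l)
    range_term = (p_rb - p_rb_min) / (p_rb_max - p_rb_min)
    return 0.2 * rmse_term + 0.8 * range_term
```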
The mean impact value (MIV) is a good index for evaluating the relationships between variables in a neural network [22,23], especially for reflecting the degree of influence of the input variables on the output variables. The sign and the absolute value of the MIV indicate the direction of correlation and the degree of influence, respectively. In this study, the MIV was used to find the locations of the sensors. Two neural networks with 100 candidate pressure ports as inputs were used to calculate the MIV. The outputs of the two neural networks were the angle of attack and the sideslip angle, respectively. Each neural network has 15 neurons in the hidden layer. In particular, the two networks should be well trained using the method discussed in Section 3. The calculation of the MIV based on a BP neural network is as follows (a minimal sketch in code is given after the list):
  • The artificial neural network is trained with the original sample set S, and the prediction is evaluated;
  • Each variable is increased and decreased by 10% to generate two new sample sets, S1 and S2;
  • Taking S1 and S2 as inputs, the evaluation results of the trained artificial neural network are defined as M_i and D_i, respectively, where i denotes the ith evaluation;
  • The impact value W_i indicates the degree of influence of each independent variable on the output variable, and it can be calculated as follows:
    W_i = M_i − D_i
  • Finally, the MIV is calculated by averaging the impact values:
    MIV = (W_1 + W_2 + ⋯ + W_N) / N
where N represents the number of evaluations.
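As referenced above, a minimal sketch of the MIV calculation in Python is given below, assuming `model` is any trained single-output regressor with a `predict` method standing in for the BP networks described above, and `S` is the matrix of pressure samples; the trailing comment shows how the ranking would be used for port selection.

```python
import numpy as np

def mean_impact_values(model, S):
    """MIV of every input of a trained network: perturb each input by +/-10%,
    evaluate the network on both perturbed sets, and average the differences.
    `S` is an (N_samples, N_ports) array of pressure samples."""
    n_ports = S.shape[1]
    miv = np.zeros(n_ports)
    for i in range(n_ports):
        S1, S2 = S.copy(), S.copy()
        S1[:, i] *= 1.10                              # input increased by 10%
        S2[:, i] *= 0.90                              # input decreased by 10%
        W = model.predict(S1) - model.predict(S2)     # impact values W_i per sample
        miv[i] = W.mean()
    return miv

# Rank the candidate ports by |MIV| and keep the most influential ones, e.g.:
# order = np.argsort(-np.abs(miv)); selected_ports = order[:10]
```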
Since there are 100 candidate pressure ports, 100 calculations are needed to determine the MIV of each input. The MIVs can then be sorted in descending order of absolute value, with the first few independent variables having the greatest influence on the outputs of the artificial neural network. To avoid the leading-edge locations where the deicing system or radar is installed, 100 candidate locations near the leading edge were selected on half of the aircraft, as shown in Figure 1. The first 10 port locations selected by each of the two methods were then used to train artificial neural networks, each taking the 10 selected ports as inputs and using 15 neurons in the hidden layer. The outputs are the angle of attack α and the sideslip angle β, where α ranges from −4° to 30° and β ranges from −10° to 10°. The prediction errors are compared in Figure 2 and Figure 3, and the averaged and maximum errors are listed in Table 1. As can be seen in the figures and the table, both methods can select proper ports for the FADS system with high prediction accuracy; however, the error of the artificial neural network with ports selected by the MIV is almost half that of the other method.

3. Flush Airdata System Based on Artificial Neural Networks

A mathematical model for a flush airdata system incorporating error detection was developed. The model mainly consists of two parts: one is the error detection module, which is discussed in the next section; the other comprises a series of artificial neural networks to predict the airdata. The major architecture of the flush airdata system is outlined in Figure 4. The pressure readings from the sensors are first passed to the error detection module, which judges whether the combination of the pressures is correct. If it is correct, then the output is 0, and the artificial neural network ANN0 is activated to roughly predict the Mach number (Ma) and the angle of attack (α) with low accuracy; if not, then the output is the number of the faulty port. For instance, an output of 2 means that the pressure read from Sensor 2 is incorrect, so the artificial neural network ANN2 is invoked, and the pressure data from Sensor 2 are eliminated from the inputs. Based on the Ma and the absolute value of the angle of attack, another artificial neural network is then invoked to calculate α, β, and Ma with high precision.
As shown in Figure 4, there are two types of artificial neural networks: one is named ANNx (x = 0, 1, …, n, where n is the number of ports); the other is named ANNx.y (x = 0, 1, …, n; y = 1, …, 4). An ANNx roughly estimates the Ma and α with lower accuracy in order to select the proper artificial neural network ANNx.y, while an ANNx.y predicts the airdata α, β, and Ma with high accuracy. The reason for this arrangement is that the flow separation characteristics are inconsistent at different altitudes and, thus, the pressure distributions are also different. As a result, using a single artificial neural network to predict the airdata requires more neurons in the hidden layer, which causes a huge computational cost in the training stage and lower efficiency in the prediction process.
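The selection logic of Figure 4 might be sketched as follows; `detect_fault`, `coarse_nets`, `fine_nets`, and `select_regime` are hypothetical stand-ins for the trained error-detection module, the ANNx networks, the ANNx.y networks, and the (Ma, |α|) regime split, whose exact thresholds are not given in the text.

```python
def select_regime(ma, abs_alpha):
    # Hypothetical split into the four regimes y = 1..4; the actual thresholds
    # on Ma and |alpha| used in the paper are not given.
    return (1 if ma < 0.4 else 2) + (0 if abs_alpha < 15.0 else 2)

def predict_airdata(pressures, detect_fault, coarse_nets, fine_nets):
    """Two-stage prediction following the architecture of Figure 4 (sketch)."""
    x = detect_fault(pressures)               # 0 = all ports correct, else faulty port ID
    if x == 0:
        inputs = list(pressures)
    else:                                     # drop the reading of the faulty port
        inputs = [p for i, p in enumerate(pressures, start=1) if i != x]
    ma_rough, alpha_rough = coarse_nets[x](inputs)      # ANNx: rough Ma and alpha
    y = select_regime(ma_rough, abs(alpha_rough))       # choose the flight regime
    return fine_nets[(x, y)](inputs)                    # ANNx.y: accurate alpha, beta, Ma
```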
The training of the BP ANNs is gradient-based [24,25]; therefore, once the initial weights are determined, most of these methods make the network converge to a local optimum rather than the global optimum. In order to mitigate this issue, two optimization methods, the genetic algorithm and particle swarm optimization, were applied in this study to find better initial weights and biases for a high-accuracy artificial neural network. Three fitness functions were defined in this study, namely, the maximum error f1 = E_max, the total absolute error f2 = Σ_i |E_i|, and the sum of the maximum error and the average absolute error f3 = E_max + (1/n) Σ_i |E_i|, in order to find the globally optimal network.
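For clarity, the three fitness functions might be written as below, assuming `errors` is the vector of prediction errors E_i of a candidate network on the training samples:

```python
import numpy as np

def fitness_f1(errors):
    """Maximum error E_max."""
    return np.max(np.abs(errors))

def fitness_f2(errors):
    """Total absolute error, summed over all samples."""
    return np.sum(np.abs(errors))

def fitness_f3(errors):
    """Maximum error plus average absolute error."""
    e = np.abs(errors)
    return np.max(e) + np.mean(e)
```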

3.1. Genetic Algorithm

The genetic algorithm (GA) [26] proposed by Holland is a parallel optimization algorithm based on genetic mechanisms and the theory of biological evolution, following the natural principle of "survival of the fittest". When solving a problem with a genetic algorithm, every set of parameters is encoded as a "chromosome", or individual. According to the fitness function, and by means of selection, crossover, mutation, and other operations, the parameters with better adaptability are retained, and by eliminating the individuals with poor fitness, the new population not only inherits the information of the previous generation but also obtains better results. After evolution over a finite number of generations, the parameters may converge to the global optimum.
To improve the prediction accuracy of the neural network, the initial weights and biases were taken as the genes to be optimized. The total number of generations was 100, and the population size was 1000 for the GA. The general procedure of the training (as shown in Figure 5) is as follows, with a minimal code sketch given after the list:
(a)
Design the structure of the network, i.e., the number of neurons in the hidden layer;
(b)
Encode the weights into chromosomes (real values), and initialize the weight of the network;
(c)
Define the fitness function;
(d)
Evaluate the fitness function;
(e)
Check whether the optimal solution meets the target; if it does, then go to Step (j);
(f)
Check the generation; if it reaches the maximum generation, then go to Step (j);
(g)
Selection using the roulette method, that is, a selection strategy based on the fitness proportion, where the selection probability p_i of individual i is
f_i = 1 / F_i
p_i = f_i / Σ_{j=1}^{N} f_j
where F_i is the fitness of individual i, and N is the population size;
(h)
Crossover of the real-valued chromosomes, that is, a crossover strategy for the kth chromosome a_k and the lth chromosome a_l at the jth gene, as follows:
a_kj = a_kj (1 − b) + a_lj b
a_lj = a_lj (1 − b) + a_kj b
where b is a random number in [0, 1];
(i)
Mutate the chromosomes, and then go to Step (d). The mutation of the ith chromosome at the jth gene is
a_ij = a_ij + (a_ij − a_max) f(g),  if r > 0.5
a_ij = a_ij + (a_min − a_ij) f(g),  if r ≤ 0.5
where a_max and a_min are the upper and lower thresholds of the gene a_ij, respectively, r is a random number in [0, 1], f(g) = r_2 (1 − g/G_max)^2, r_2 is a random number, g is the gth generation, and G_max is the maximum number of generations;
(j)
Output the optimal weights and biases and use the Levenberg–Marquardt method to train the network until it is fully converged.
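As referenced above, the following is a minimal Python sketch of steps (a) through (j); the gene bounds and the crossover/mutation probabilities are placeholders not specified in the text, and `fitness` stands for one of the fitness functions f1 to f3 evaluated for a network initialized with the candidate weights.

```python
import numpy as np

def ga_optimize(fitness, n_genes, pop_size=1000, n_gen=100,
                a_min=-1.0, a_max=1.0, p_cross=0.8, p_mut=0.05, rng=None):
    """Real-coded GA for the initial weights/biases (steps (a)-(j) above).
    `fitness(ind)` returns the value to be minimized for one chromosome."""
    rng = np.random.default_rng() if rng is None else rng
    pop = rng.uniform(a_min, a_max, size=(pop_size, n_genes))      # step (b)
    for g in range(n_gen):
        fit = np.array([fitness(ind) for ind in pop])              # step (d)
        # Step (g): roulette selection with probability proportional to 1/F_i.
        p = 1.0 / (fit + 1e-12)
        p /= p.sum()
        pop = pop[rng.choice(pop_size, size=pop_size, p=p)].copy()
        # Step (h): arithmetic crossover of chromosome pairs at a random gene.
        for k in range(0, pop_size - 1, 2):
            if rng.random() < p_cross:
                j, b = rng.integers(n_genes), rng.random()
                akj, alj = pop[k, j], pop[k + 1, j]
                pop[k, j] = akj * (1 - b) + alj * b
                pop[k + 1, j] = alj * (1 - b) + akj * b
        # Step (i): mutation toward the gene bounds, shrinking with the generation.
        for i in range(pop_size):
            if rng.random() < p_mut:
                j = rng.integers(n_genes)
                f_g = rng.random() * (1 - g / n_gen) ** 2
                if rng.random() > 0.5:
                    pop[i, j] += (pop[i, j] - a_max) * f_g
                else:
                    pop[i, j] += (a_min - pop[i, j]) * f_g
    # Step (j): the best chromosome is used as the initial weights for
    # Levenberg-Marquardt training of the network.
    return pop[np.argmin([fitness(ind) for ind in pop])]
```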

3.2. Particle Swarm Optimization

Eberhart and Kennedy [27] proposed the particle swarm optimization (PSO) method, an evolutionary computation technique inspired by the study of bird foraging behavior. In the process of searching for food, individual birds gather near the food's location through the transmission and exchange of location information, and the process of searching for the food's location can be regarded as the search for the global optimal solution; the final gathering of the flock corresponds to finding the optimal solution.
Particle swarm optimization simulates a flock of birds by designing massless particles that have only two attributes: velocity and position. The velocity represents the speed of movement, while the position represents a candidate solution; the motion of the particles in the solution space is similar to that of birds searching for food. Each bird is abstracted as a particle with no mass or volume in n-dimensional space, where the position of particle i is represented by the vector X_i = (x_1, x_2, …, x_N) and its velocity by the vector V_i = (v_1, v_2, …, v_N). Each particle is evaluated by the fitness function, which is defined based on the objective function. For each particle, the best position found so far is saved and regarded as the bird's own experience. The particle's best fitness value is taken as the individual optimum, while the best of all the individual optima is taken as the global optimum. Each particle (bird) also knows the best position found by the whole population so far, which is regarded as the peer experience. A particle's velocity is therefore determined by its own experience and the experience of its companions as it approaches the true global optimum in the solution space. The initial weights and biases of the neural network were optimized with 1000 particles. The general procedure of the training (as shown in Figure 6) is as follows, with a minimal code sketch given after the list:
(a)
Design the structure of the network, i.e., the number of neurons in the hidden layer;
(b)
Initialize the weight of the network;
(c)
Define the fitness function;
(d)
Evaluate the fitness function;
(e)
Check whether the optimal solution meets the target; if so, then go to Step (i);
(f)
Check the generation; if it reaches the maximum generation, then go to Step (i);
(g)
Update the individual optimal value and the global optimal value:
pbest_i = fit_i,  if fit_i < pbest_i
gbest = fit_i,  if fit_i < gbest
where fit_i is the new fitness value of particle i, pbest_i is the individual optimum, and gbest is the global optimum;
(h)
Update the position and velocity of each particle as follows:
V_id^(k+1) = ω V_id^k + c_1 r_1 (P_id^k − X_id^k) + c_2 r_2 (P_gd^k − X_id^k)
X_id^(k+1) = X_id^k + V_id^(k+1)
where ω = 1 is the inertia weight, d = 1, 2, …, n, k is the current iteration, V_id is the particle velocity, P_id and P_gd are the individual and global best positions, c_1 = c_2 = 1.49445 are acceleration constants, and r_1 and r_2 are random numbers in [0, 1]. Then, go to Step (d);
(i)
Output the optimal weights and biases, and use the Levenberg–Marquardt method to train the network until it is fully converged.
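As referenced above, the following is a minimal Python sketch of steps (a) through (i); the search bounds are placeholders, ω, c1, and c2 follow the values stated in the text, and `fitness` again stands for one of f1 to f3.

```python
import numpy as np

def pso_optimize(fitness, n_dims, n_particles=1000, n_iter=100,
                 x_min=-1.0, x_max=1.0, omega=1.0, c1=1.49445, c2=1.49445, rng=None):
    """PSO for the initial weights/biases (steps (a)-(i) above).
    `fitness(x)` returns the value to be minimized for one particle position."""
    rng = np.random.default_rng() if rng is None else rng
    X = rng.uniform(x_min, x_max, size=(n_particles, n_dims))   # positions
    V = np.zeros_like(X)                                        # velocities
    pbest_x = X.copy()
    pbest_f = np.array([fitness(x) for x in X])
    gbest_x = pbest_x[np.argmin(pbest_f)].copy()
    gbest_f = pbest_f.min()
    for _ in range(n_iter):
        r1 = rng.random((n_particles, n_dims))
        r2 = rng.random((n_particles, n_dims))
        # Step (h): velocity and position updates.
        V = omega * V + c1 * r1 * (pbest_x - X) + c2 * r2 * (gbest_x - X)
        X = X + V
        f = np.array([fitness(x) for x in X])                   # step (d)
        # Step (g): update the individual and global optima.
        improved = f < pbest_f
        pbest_x[improved] = X[improved]
        pbest_f[improved] = f[improved]
        if f.min() < gbest_f:
            gbest_x, gbest_f = X[np.argmin(f)].copy(), f.min()
    # Step (i): the global best is used as the initial weights for
    # Levenberg-Marquardt training of the network.
    return gbest_x
```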

3.3. Tests and Comparisons

The artificial neural networks optimized by particle swarm optimization (PSO) and the genetic algorithm (GA) used 20 neurons in the hidden layer for the angle of attack and the sideslip angle and 10 neurons for the Mach number, and all of the networks had nine pressure inputs. According to Whitmore's Triples Algorithm, at least five ports are needed; in this study, considering possible sensor malfunctions, another four ports were added. The training sets included ~220,000 samples, of which 80% were used for training, 10% for validation, and 10% for testing. The data were partially derived from wind tunnel tests and CFD simulations, and Kriging interpolation was used to augment the dataset. The GA and PSO were each run with the three fitness functions, and the accuracy of the resulting networks was compared, as shown in Figure 7, Figure 8 and Figure 9. A neural network with random initial weights and biases trained using the Levenberg–Marquardt method was also included in the comparisons, referred to as the "original network" in the figures and tables. In Figure 7, the errors of the predicted angle of attack for 3023 cases (not included in the training sets) by different artificial neural networks (ANNs) are compared, and in Table 2, the average and maximum errors of the 3023 cases for the different ANNs are compared. All of the optimized artificial neural networks had a lower average error than the original network; however, the two networks optimized with fitness function f2 had a larger maximum error than the original one. The artificial neural network trained with PSO and fitness function f1 showed the best accuracy, with a maximum error of less than 0.1°.
In Figure 8, the errors of the predicted sideslip angle for the 3023 cases by different artificial neural networks are compared, and in Table 3, the average and maximum errors for the different ANNs are listed. All of the optimized artificial neural networks showed better accuracy than the original network, except for the network optimized with the GA and fitness function f3, which had a larger maximum error than the original network. For the sideslip angle, the two artificial neural networks trained with fitness function f2 showed the best accuracy; however, the artificial neural network trained with PSO and fitness function f1 ranked third overall. For these three artificial neural networks, the maximum errors were less than 0.1°.
In Figure 9, the errors of the predicted Mach number for the 3023 cases by different artificial neural networks are compared, and in Table 4, the average and maximum errors for the different ANNs are listed. Similar to the prediction of the angle of attack, all of the optimized artificial neural networks had a lower average error than the original network; however, the two networks optimized with fitness function f2 had a larger maximum error than the original network. For the Mach number, the artificial neural network trained with PSO and fitness function f3 showed the best accuracy, while the artificial neural network trained with PSO and fitness function f1 showed the second-best accuracy overall. For both artificial neural networks, the maximum errors were less than 0.005.
Generally, optimization with the GA and PSO methods can significantly decrease the average error; however, an improper fitness function may increase the maximum error. The PSO method with the fitness function f1 showed the best performance overall; thus, the PSO method was used to train all of the remaining artificial neural networks, which had only eight inputs (i.e., one sensor malfunctioning).

4. Error Detection Method

The pressure transducers mounted on the surface of the aircraft may malfunction in flight due to contamination, icing, or other electronic problems. Therefore, the faulty sensor needs to be detected and then eliminated from the input in order to avoid degrading the prediction accuracy. The pressures from all of the ports are not independent; it is therefore possible to develop a classification method to find the single pressure that is inconsistent with the remaining pressures. In this section, four methods are proposed to detect the faulty port, namely, decision tree, random forest, artificial neural network with decision tree, and artificial neural network with random forest.

4.1. Decision Tree

The decision tree is a supervised machine learning method that can solve classification and regression problems, making it possible to distinguish erroneous pressure combinations from correct ones. It is a hierarchical structure composed of nodes and directed branches. There are three types of nodes in the tree: the root node, the internal nodes, and the leaf nodes. The decision tree contains only one root node; each split in the tree is made based on the entropy, and each leaf node is a class to which a sample may belong. The decision tree divides the search space into several non-overlapping regions; at each internal node, it compares the attribute values and selects the corresponding child node, and it finally obtains the decision result at a leaf node of the tree. There are three main decision tree algorithms, namely, the ID3 algorithm, the C4.5 algorithm, and the CART algorithm. In this study, the C4.5 algorithm was used to train the decision tree, while the CART algorithm was applied to generate the decision trees of the random forest.
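For illustration, a decision-tree classifier for the faulty-port label could be built as below; note that scikit-learn implements the CART algorithm rather than the C4.5 algorithm used in the paper, and the pressure samples and labels here are random placeholders for the real training set.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 9))          # placeholder: 9 port pressures per sample
y = rng.integers(0, 10, size=1000)      # placeholder labels: 0 = no fault, 1-9 = faulty port ID

tree = DecisionTreeClassifier(criterion="entropy")   # entropy-based splitting, as described above
tree.fit(X, y)
print(tree.predict(X[:5]))              # predicted class for the first five samples
```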

4.2. Random Forest

The random forest algorithm is composed of several decision trees that vote for the final decision. The training sets of each decision tree are subsets of the original training set and are generated using the bootstrap method. By using the bootstrap method, training sets S_1, S_2, …, S_T are generated randomly. Then, with each training set, the corresponding decision tree C_1, C_2, …, C_T is trained. Each decision tree is fully grown without pruning. In the prediction stage, for a sample X, each decision tree gives its result; thus, the classification results C_1(X), C_2(X), …, C_T(X) are obtained. Finally, by voting, the category receiving the most votes among the T decision trees is the category to which sample X belongs.
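A minimal sketch of this bootstrap-and-vote procedure, using fully grown CART trees as the base learners; the data and the number of trees are placeholders.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def random_forest_vote(X_train, y_train, X_test, n_trees=2, rng=None):
    """Train `n_trees` CART trees on bootstrap samples and classify X_test by
    majority vote; y_train must hold non-negative integer class labels
    (e.g., 0 = no fault, 1-9 = faulty-port ID)."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(X_train)
    votes = []
    for _ in range(n_trees):
        idx = rng.integers(0, n, size=n)             # bootstrap training set S_t
        tree = DecisionTreeClassifier()              # fully grown, no pruning
        tree.fit(X_train[idx], y_train[idx])
        votes.append(tree.predict(X_test))
    votes = np.stack(votes)                          # shape (n_trees, n_test)
    # Majority vote over the trees for every test sample.
    return np.array([np.bincount(col.astype(int)).argmax() for col in votes.T])
```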

4.3. ANN and Decision Tree/Random Forest

To further increase the accuracy of the classification, an artificial neural network was used to assist the decision tree and random forest in finding the faulty port. Firstly, a vector T = (t_1, t_2, t_3, …, t_n) was defined, where n is the total number of ports (n = 9 in this paper) and t_i is the malfunction probability of port i. Therefore, when the pressure from port i is faulty, t_i = 1, and when the pressure from port i is 100% correct, t_i = 0. Then, an artificial neural network with the pressures from all ports as the input and T as the output was trained (Figure 10). In this study, 20 neurons were used in the hidden layer.
In the diagnosis process (as shown in Figure 11), after reading the pressure data, the artificial neural network outputs the fault probability of each port. When the fault probabilities of all ports are less than 40%, no fault is reported and the output is 0; if the fault probability of only one port exceeds 60%, this port is deemed faulty, and the output is its ID. If the fault probability of any port lies between 40% and 60%, the random forest or decision tree is used to judge again; likewise, if the fault probabilities of several ports exceed 60%, the random forest is used to judge again.
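The threshold logic of Figure 11 might be sketched as follows; `ann_prob` and `backup_classifier` are placeholders for the trained detection network and the decision tree or random forest, and the handling of the ambiguous cases is a simplification of the rules above.

```python
import numpy as np

def diagnose(pressures, ann_prob, backup_classifier):
    """Return 0 if no port is deemed faulty, otherwise the ID of the faulty port.
    `ann_prob(pressures)` gives the fault-probability vector T; `backup_classifier`
    is the decision tree or random forest used for the ambiguous cases."""
    t = np.asarray(ann_prob(pressures))
    if np.all(t < 0.40):                              # every port clearly healthy
        return 0
    high = np.flatnonzero(t > 0.60)
    if len(high) == 1 and np.all(np.delete(t, high) < 0.40):
        return int(high[0]) + 1                       # single clearly faulty port
    # Probabilities in the 40-60% band, or several ports above 60%:
    return int(backup_classifier(pressures))          # let the tree/forest decide
```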

4.4. Error Definition

In order to generate the error samples in the training set, the meaning of a pressure error must first be defined. Normally, the relative precision of a pressure sensor is 0.1%, which in this study corresponds to about 100 Pa. Therefore, we randomly chose one port and added or subtracted 100, 200, 300, 400, or 500 Pa; the prediction results are shown in Figure 12 and Figure 13. As can be seen in the figures, when the pressure error was less than 500 Pa, the prediction errors of the angle of attack and the sideslip angle were both less than 0.5°, which is within the accuracy requirement of the flush airdata sensing system. Accordingly, if the pressure deviation from the real value is more than 500 Pa, the port is deemed to be faulty.
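Under this definition, faulty training samples might be generated from correct pressure combinations as in the following sketch; the 500 to 1000 Pa range follows the test setup described in Section 5, and the array names are placeholders.

```python
import numpy as np

def make_fault_samples(P, min_err=500.0, max_err=1000.0, rng=None):
    """Copy the correct pressure combinations P (shape: samples x ports) and add a
    random deviation above the 500 Pa fault threshold to one random port per sample.
    Returns the faulty samples and the 1-based ID of the faulty port for each."""
    rng = np.random.default_rng() if rng is None else rng
    n_samples, n_ports = P.shape
    P_fault = P.copy()
    ports = rng.integers(0, n_ports, size=n_samples)      # faulty port per sample
    err = rng.uniform(min_err, max_err, size=n_samples)
    sign = rng.choice([-1.0, 1.0], size=n_samples)
    P_fault[np.arange(n_samples), ports] += sign * err
    return P_fault, ports + 1
```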

4.5. Tests and Comparisons

Four fault diagnosis methods for the FADS system using different machine learning methods were tested. In total, 6020 correct pressure combinations and 84,280 incorrect pressure combinations (generated by adding a random error larger than 500 Pa to a single port) were used. Of these combinations, 90,000 were randomly selected as training data, and the remaining 300 were used as test data. As can be seen in Table 5, when using two decision trees to form a random forest, there were 49 incorrect reports, and when the number of decision trees was increased to 500, the number of incorrect reports decreased to 1. However, 500 decision trees require a relatively high computational cost to maintain real-time operation, which is not feasible for the flush airdata system on an aircraft. When using the aforementioned artificial neural network, there were only five incorrect reports; upon further combining it with the decision tree, the number of incorrect reports decreased to one, which is the same accuracy as the random forest method with 500 decision trees, but with a much lower computational cost. When replacing the decision tree with a random forest of only two decision trees, the incorrect reports disappeared entirely.

5. Results and Discussion

The port locations for the FADS system are shown in Figure 14; nine ports were selected using the MIV (one port was located on the centerline, while the other eight ports were located on both sides; due to the symmetry, only four ports are illustrated in the figure). The nine ports provide redundancy, so the ensemble neural networks developed in this study can still correctly predict the airdata even with one malfunctioning sensor. The whole architecture of the ensemble neural networks is shown in Figure 4. As described in Section 3, the ensemble networks with nine inputs (ANN0 and the ANN0.y networks) were trained and tested. The remaining neural networks with eight inputs (as shown in Figure 4) were trained using the PSO method with the fitness function f1. The whole system was tested using the same 3023 test cases described in Section 3.3, with a random error (larger than 500 Pa but less than 1000 Pa) added to a single sensor (input pressure) in each sample.
The prediction results with and without error detection are compared in Figure 15, Figure 16 and Figure 17 and Table 6. In Figure 15, the prediction of α is compared; as can be seen in the figure, without the error detection, the maximum error reaches 0.46°, although all of the errors remain below the 0.5° requirement. With the error detection, by removing the faulty pressure value, the prediction error is an order of magnitude lower. Compared to the networks with nine inputs (see Table 2), the ensemble neural network with error detection maintains similar accuracy.
The prediction of β is compared in Figure 16. Without the error detection, the maximum error reached 5.8°, which is unacceptable. After eliminating the faulty pressure, however, the maximum error was 0.17°, while the average error was 0.01°. Compared to the networks with nine inputs (see Table 3), the accuracy of the ensemble neural network with error detection was only slightly degraded, but it was still within an acceptable range.
The prediction results for the Mach number are compared in Figure 17. By adding an error larger than 500 Pa but less than 1000 Pa, the prediction error of the Mach number did not seem to increase excessively, and even without the error detection, the maximal error was within the allowed range.

6. Conclusions

A mathematical model of an FADS system for an unmanned flying wing was developed in this study. To find the best locations outside the leading-edge area for the pressure transducers, the mean impact value (MIV) was used; compared with the cost function method [20], it yielded lower average and maximum errors. Ensemble neural networks were developed to predict the airdata with limited inputs. It was found that the artificial neural networks trained by the PSO method with the maximum error as the fitness function achieved the best overall prediction accuracy. Furthermore, a fault port detection method was developed based on artificial neural networks and random forests. The FADS model can accurately detect the malfunctioning port and use the correct pressure combination to predict the Mach number, angle of attack, and sideslip angle with high accuracy.

Author Contributions

Methodology, Y.W., Y.X. and L.Z.; writing—original draft preparation, Y.W.; writing—review and editing, Y.W. and Y.X.; supervision, N.Z. and C.Z. All authors have read and agreed to the published version of the manuscript.

Funding

Fundamental Research Funds for the Central Universities (No. NS2019006) and the National Science Foundation (No. 12032011).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors acknowledge the support of the Fundamental Research Funds for the Central Universities (No. NS2019006) and the National Science Foundation (No. 12032011).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Houston, A.L.; Argrow, B.; Elston, J.; Lahowetz, J.; Frew, E.W. The collaborative Colorado—Nebraska unmanned aircraft system experiment. Bull. Am. Meteor. Soc. 2012, 93, 39–54. [Google Scholar] [CrossRef] [Green Version]
  2. Kocer, G.; Mansour, M.; Chokani, N.; Abhari, R.; Müller, M. Full-scale wind turbine near-wake measurements using an instrumented uninhabited aerial vehicle. J. Sol. Energy Eng. 2011, 133, 041011. [Google Scholar] [CrossRef]
  3. van den Kroonenberg, A.; Martin, T.; Buschmann, M.; Bange, J.; Vorsmann, P. Measuring the wind vector using the autonomous mini aerial vehicle M2 AV. J. Atmos. Oceanic Technol. 2008, 25, 1969–1982. [Google Scholar] [CrossRef]
  4. Whitmore, S.A.; Davis, R.J.; Fife, J. In-flight demonstration of a real-time flush airdata sensing system. J. Aircr. 1996, 33, 5. [Google Scholar] [CrossRef]
  5. Siemers, P.M.I.; Wolf, H.; Henry, M.W. Shuttle entry air data system (SEADS)-flight verification of an advanced air data system concept. In Proceedings of the 15th AIAA Aerodynamics Testing Conference, San Diego, CA, USA, 18–20 May 1988. [Google Scholar]
  6. Whitmore, S.A.; Moes, T.R.; Larson, T.J. High angle-of-attack flush airdata sensing system. J. Aircr. 1992, 29, 915–919. [Google Scholar] [CrossRef]
  7. Whitmore, S.A.; Cobleigh, B.R.; Haering, E.A. Design and Calibration of the x-33 Flush Airdata Sensing (FADS) System; NASA/TM-1998-206540; Dryden Flight Research Center, NASA: Edwards, CA, USA, 1998; pp. 1–30. [Google Scholar]
  8. Jiang, X.; Li, S.; Gu, L.; Li, M.; Ji, Y. FADS based aerodynamic parameters estimation for mars entry considering fault detection and tolerance. Acta Astronaut. 2021, 180, 243–259. [Google Scholar] [CrossRef]
  9. Jiang, X.; Li, S.; Huang, X. Radio/FADS/IMU integrated navigation for Mars entry. Adv. Space Res. 2018, 61, 1342–1358. [Google Scholar] [CrossRef] [Green Version]
  10. Karlgaard, C.D.; Kutty, P.; Schoenenberger, M. Coupled inertial navigation and flush air data sensing algorithm for atmosphere estimation. J. Spacecr. Rocket. 2015, 54, 128–140. [Google Scholar] [CrossRef]
  11. Ellsworth, J.C.; Whitmore, S.A. Simulation of a flush air-data system for transatmospheric vehicles. J. Spacecr. Rocket. 2012, 45, 716–732. [Google Scholar] [CrossRef]
  12. Millman, D.R. A modified triples algorithm for flush air data systems that allows a variety of pressure port configurations. In Proceedings of the AIAA Atmospheric Flight Mechanics Conference, Denver, CO, USA, 5–9 June 2017. [Google Scholar]
  13. Samy, I.; Postlethwaite, I.; Gu, D.W. Neural-network-based flush air data sensing system demonstrated on a mini air vehicle. J. Aircr. 2010, 47, 18–31. [Google Scholar] [CrossRef]
  14. Rohloff, T.J.; Catton, I. Development of a neural network flush airdata sensing system. In Proceedings of the 1996 ASME International Mechanical Engineering Congress and Exposition, American Society of Mechanical Engineers, Atlanta, GA, USA, 17–22 November 1996; pp. 39–43. [Google Scholar]
  15. Rohloff, T.J.; Angeles, L. Air data sensing from surface pressure measurements using a neural network method. AIAA J. 1998, 36, 2094–2101. [Google Scholar] [CrossRef]
  16. Rohloff, T.J.; Whitmore, S.A.; Catton, I. Fault-tolerant neural network algorithm for flush air data sensing. J. Aircr. 1999, 36, 541–549. [Google Scholar] [CrossRef]
  17. Rohloff, T.J.; Catton, I. Fault tolerance and extrapolation stability of a neural network air-data estimator. J. Aircr. 2015, 36, 571–576. [Google Scholar] [CrossRef]
  18. Crowther, W.J.; Lamont, P.J. A neural network approach to the calibration of a flush air data system. Aeronaut. J. 2001, 105, 85–95. [Google Scholar] [CrossRef]
  19. Jia, Q.; Hu, J.; Zhang, W. A fault detection method for FADS system based on interval-valued neutrosophic sets, belief rule base, and D-S evidence reasoning. Aerosp. Sci. Technol. 2021, 114, 106758. [Google Scholar] [CrossRef]
  20. Laurence, R.J., III; Argrow, B.M.; Frew, E.W. Wind Tunnel results for a distributed flush airdata system. J. Atmos. Ocean. Technol. 2017, 34, 1519–1528. [Google Scholar] [CrossRef]
  21. Chorowski, J.; Wang, J.; Zurada, J.M. Review and performance comparison of SVM- and ELM based classifiers. Neurocomputing 2014, 128, 507–516. [Google Scholar] [CrossRef]
  22. Zhang, Z.L.; Yang, J.G. Narrow density fraction prediction of coarse coal by image analysis and MIV-SVM. Int. J. Oil Gas Coal Technol. 2016, 11, 279–289. [Google Scholar] [CrossRef]
  23. Jiang, J.L.; Su, X.; Zhang, H.; Zhang, X.H.; Yuan, Y.J. A novel approach to active compounds identification based on support vector regression model and mean impact value. Chem. Biol. Drug Des. 2013, 81, 650. [Google Scholar] [CrossRef]
  24. Hagan, M.T.; Menhaj, M.B. Training feedforward networks with the Marquardt algorithm. IEEE Trans. Neural Netw. 1994, 5, 989–993. [Google Scholar] [CrossRef]
  25. Moller, M.F. A scaled conjugate gradient algorithm for fast supervised learning. Neural Netw. 1993, 6, 525–533. [Google Scholar] [CrossRef]
  26. Holland, J. Adaptation in Natural and Artificial Systems; University of Michigan Press: Ann Arbor, MI, USA, 1975. [Google Scholar]
  27. Eberhart, R.; Kennedy, J. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995. [Google Scholar]
Figure 1. The illustration of the candidate locations.
Figure 2. Comparison of the α prediction.
Figure 3. Comparison of the β prediction.
Figure 4. Major architecture of the flush airdata system.
Figure 5. Flowchart for the genetic algorithm.
Figure 6. Flowchart for PSO.
Figure 7. Comparisons of the prediction of the angle of attack. (a) Histogram of the prediction error of α for the network optimized by the GA with f1. (b) Histogram of the prediction error of α for the network optimized by the GA with f2. (c) Histogram of the prediction error of α for the network optimized by the GA with f3. (d) Histogram of the prediction error of α for the network optimized by PSO with f1. (e) Histogram of the prediction error of α for the network optimized by PSO with f2. (f) Histogram of the prediction error of α for the network optimized by PSO with f3. (g) Histogram of the prediction error of α for the network without optimization.
Figure 8. Comparison of the β prediction. (a) Histogram of the prediction error of β for the network optimized by the GA with f1. (b) Histogram of the prediction error of β for the network optimized by the GA with f2. (c) Histogram of the prediction error of β for the network optimized by the GA with f3. (d) Histogram of the prediction error of β for the network optimized by PSO with f1. (e) Histogram of the prediction error of β for the network optimized by PSO with f2. (f) Histogram of the prediction error of β for the network optimized by PSO with f3. (g) Histogram of the prediction error of β for the network without optimization.
Figure 9. Comparison of the Mach number prediction. (a) Histogram of the prediction error of Ma for the network optimized by the GA with f1. (b) Histogram of the prediction error of Ma for the network optimized by the GA with f2. (c) Histogram of the prediction error of Ma for the network optimized by the GA with f3. (d) Histogram of the prediction error of Ma for the network optimized by PSO with f1. (e) Histogram of the prediction error of Ma for the network optimized by PSO with f2. (f) Histogram of the prediction error of Ma for the network optimized by PSO with f3. (g) Histogram of the prediction error of Ma for the network without optimization.
Figure 10. An artificial neural network for error detection.
Figure 11. The flowchart of the diagnosis procedures.
Figure 12. Prediction results of the angle of attack with fault pressure combinations.
Figure 13. Prediction results of the sideslip angle with fault pressure combinations.
Figure 14. Port locations.
Figure 15. Comparisons of angle of attack.
Figure 16. Comparisons of sideslip angle.
Figure 17. Comparison of Mach numbers.
Table 1. The comparisons of the prediction error.

                   J1 for α    J2 for β    MIV for α    MIV for β
Averaged error     0.0357      0.0432      0.0119       0.0341
Maximum error      0.4033      0.6927      0.1670       0.2218
Table 2. Error comparison for the angle of attack prediction.

            Average Error (°)    Maximum Error (°)
Original    3.4382               31.772
PSO + f1    0.0034               0.0703
PSO + f2    0.0078               0.2989
PSO + f3    0.0046               0.0840
GA + f1     0.0048               0.0555
GA + f2     0.0041               0.1919
GA + f3     0.0043               0.0629
Table 3. Error comparison for the sideslip angle prediction.

            Average Error (°)    Maximum Error (°)
Original    7.0023               14.4927
PSO + f1    0.0030               0.0708
PSO + f2    0.0020               0.0583
PSO + f3    0.0014               0.1872
GA + f1     0.0070               0.2604
GA + f2     0.0024               0.0669
GA + f3     0.0080               0.3168
Table 4. Error comparison for Mach number prediction.

            Average Error    Maximum Error
Original    5.9744           0.5631
PSO + f1    0.0003           0.0048
PSO + f2    0.0006           0.0160
PSO + f3    0.0003           0.0025
GA + f1     0.0004           0.0050
GA + f2     0.0004           0.0266
GA + f3     0.0004           0.0071
Table 5. Error diagnosis comparison for different methods.

                     Random Forest        Random Forest          ANN    ANN and          ANN and Random Forest
                     (2 Decision Trees)   (500 Decision Trees)          Decision Tree    (2 Decision Trees)
Correct reports      251                  299                    295    299              300
Incorrect reports    49                   1                      5      1                0
Table 6. Comparisons of the prediction results with and without error detection.

                 α (Without          α (With             β (Without          β (With             Ma (Without         Ma (With
                 Error Detection)    Error Detection)    Error Detection)    Error Detection)    Error Detection)    Error Detection)
Average error    0.1199              0.0041              3.4530              0.0091              0.0032              0.0003
Maximal error    0.4554              0.0454              5.7833              0.1694              0.0133              0.0046