Article

Aircraft Wake Recognition Based on Improved ParNet Convolutional Neural Network

1 College of Electronic Information and Automation, Civil Aviation University of China, Tianjin 300300, China
2 Hong Kong Observatory, Hong Kong, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(6), 3560; https://doi.org/10.3390/app13063560
Submission received: 20 January 2023 / Revised: 2 March 2023 / Accepted: 8 March 2023 / Published: 10 March 2023
(This article belongs to the Section Aerospace Science and Engineering)

Abstract

The occurrence of wake can pose a threat to the flight safety of aircraft and affect the runway capacity and airport operation efficiency. To effectively identify aircraft wake, this paper proposes a novel convolutional neural network (CNN) method of aircraft wake recognition based on the improved parallel network (ParNet). Depthwise separable convolution (DSC) was introduced into the ParNet to make the wake recognition model lightweight. In addition, the convolutional block attention module (CBAM) was introduced into the wake recognition model to improve the capacity of the model to extract the spatial features of the wind field. The proposed aircraft wake recognition method was used to identify the aircraft wake based on the lidar wind field scanning image of Hong Kong International Airport. The best wake recognition effect was obtained with a recognition accuracy of 98.91% and an F1 value of 98.90%. As the number of parameters of the model was only 0.46 M, the aircraft wake could be identified on an ordinary computer. Thus, the proposed method can effectively identify aircraft wake.

1. Introduction

Aircraft wake refers to a pair of reverse spiral turbulent vortices formed behind aircraft wings owing to the pressure difference between the upper and lower surfaces of the wing during the flight. It accompanies the aircraft throughout the duration of the flight [1]. When an aircraft enters the wake area of another aircraft, its aerodynamic performance may be affected; this can significantly threaten its flight safety [2]. Wake is an important factor that affects the capacity and efficiency of airport runways with intensive take-off and landing. Therefore, the recognition of aircraft wake is of considerable significance in ensuring flight safety and improving airport operation efficiency.
As regards numerical simulations, Kovalev et al. analysed the radar characteristics of wake by employing a 3D large-eddy simulation [3]. They described the settings of the large-eddy simulation and explained the evolution of wake under three simulation conditions. Xu et al. studied the impact of crosswind on wake attenuation and the acceleration mechanism of active flow control on wake attenuation [4]. The simulation results showed that the evident impact of the crosswind on the wake development is caused by the separation vortex generated near the crosswind field rather than the crosswind itself. Junduo et al. used an adaptive-grid large-eddy simulation technique to numerically study the near-ground evolution process of wake under crosswind conditions [5]. The results showed that the evolution of upstream and downstream vortices is asymmetric. The downstream vortex attenuates faster but moves farther. Moreover, the stronger the crosswind, the faster the wake vortex attenuates. Holzäpfel et al. proposed a new probabilistic two-phase wake vortex decay model (P2P), which has become the most classic wake evolution model [6]. The model entails a set of functions based on aircraft and environmental parameters, which can predict the probability of wake behaviour in real-time.
Light detection and ranging (lidar) is the most effective technical means of detecting the near-ground wind field. In recent years, many researchers have studied wake phenomena using the wind field data detected by wind lidar. Dolfi-Bouteyre et al. demonstrated the feasibility of using a 1.5 μm pulsed fibre lidar for wake detection at an airport [7]. Yoshikawa et al. proposed an inversion method for an aircraft’s wake vortex, which can estimate the horizontal and vertical positions of the wake and the mean circulation radius from the Doppler spectrum measured by the lidar [8]. Yihua et al. proposed that aircraft wake can be identified by the main features of the wake Doppler spectrum [9]. However, this recognition method was limited by the sample size of the Doppler spectrum features, and the recognition process was redundant. Xiaoye et al. proposed a fast wake recognition method based on the coherent Doppler lidar (CDL) spectrum width and radial wind speed [10]. However, this method cannot adaptively adjust the recognition threshold of the Doppler spectrum features, and its recognition efficiency was limited. Weijun et al. proposed a recognition method based on the k-nearest neighbour (KNN) algorithm [11]; however, its recognition accuracy required improvement. Xuan et al. proposed a method for aircraft wake recognition based on the support vector machine (SVM) [12]. However, extracting only the velocity’s extremum features filtered out the local information of the wake, limiting the recognition accuracy of the model on the measured data.
In recent years, with the improvement and application of convolutional neural networks (CNNs) in computer vision, CNNs have increasingly been applied to studies of aircraft wake recognition. Weijun et al. classified the wake velocity nephogram based on AlexNet to achieve wake recognition, with its accuracy reaching 91.30% [13]. However, the AlexNet model required many parameters and calculations. A large amount of data was required to train the model, and it also consumed extensive computer resources. Thus, it cannot be executed on an ordinary computer.
In this study, a CNN method based on an improved ParNet was devised to identify aircraft wake. Depthwise separable convolution (DSC) was introduced into the ParNet for the first time to make the wake recognition model lightweight. In addition, the convolutional block attention module (CBAM) was introduced into the wake recognition model to improve the recognition capability for different shapes of the wake. The proposed method was used to identify the aircraft wake based on the lidar wind field scanning images. The results showed that the introduction of DSC and the CBAM effectively improves the wake recognition efficiency and effect of the model. The proposed wake recognition model involves a relatively small number of parameters, thus making it possible to realise aircraft wake recognition on embedded devices and obtain the best-reported aircraft wake recognition effect.

2. Establishment of Dataset

2.1. Acquisition of Wind Field Data

Lidar can accurately detect the velocity of the atmospheric wind field and can better capture the wind field structure of the wake [14,15]. To obtain real aircraft wake data, this experiment used the wind field data of Hong Kong International Airport. For field detection, the Leosphere Windcube200S lidar was used [16,17,18]. The main parameters are shown in Table 1. The lidar has been successfully deployed in many locations globally. The radar was deployed at a vertical distance of approximately 1400 m from the south runway of Hong Kong International Airport, as shown in Figure 1. On the basis of the evolution characteristics and the aerodynamic structure of the wake, the lidar used the range height indicator (RHI) scanning mode to detect the wake, as shown in Figure 2.
The lidar scanned the target airspace periodically with a certain frequency, obtaining the wind field data of the aircraft wake evolution under different weather conditions. Considering the scanning data from 03:15:23 to 03:16:37 on 1 February 2019 as an example, the lidar scanned the airspace above the runway at the airport nine times. The detected wind speed data were visualised, as shown in Figure 3, in which the evolution of the aircraft wake can be observed. In the early stage (Figure 3a–c), under the induction of the ambient wind and the mutual induction of the two vortices, the velocity of the two vortices in the wake decreased gradually; the wake intensity decreased while its shape expanded. In the middle stage (Figure 3d–f), under the influence of the crosswind and the ground effect, the distance between the left and right vortices in the wake kept increasing [5]. In the later stage (Figure 3g–i), the symmetrical structure started to disappear, the left vortex merged with the ambient wind, and the right vortex persisted for a long time.

2.2. Establishment of Dataset

The wind field data obtained by lidar are one-dimensional data. To enable the recognition model to better extract the spatial characteristics of the wake and facilitate its processing, the obtained radial velocity of the wind field was imaged, as shown in Figure 4. First, the data were reconstructed according to the working mode set by the lidar, and the radial velocity matrix of the wind field was obtained. According to the statistical characteristics of the historical wind speed, the radial velocity matrix of the wind field was linearly transformed as follows:
$$I = f(V) = 255 \times \frac{V + 5}{10},$$
where V stands for the radial velocity matrix of the wind field captured by the lidar. The parameters in the equation were determined by the maximum and minimum of the radial wind velocity. Further, the wind field matrix was expanded by bicubic interpolation, and greyscale images with 72 × 158 pixels were then generated, as shown in Figure 4. In Figure 4, red, orange, and yellow indicate that the radar radial velocity is positive, while blue and green indicate that it is negative; the sign expresses the direction of the wind relative to the laser beam’s emission direction.
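The linear mapping above can be sketched in Python; the ±5 m/s extrema are assumptions inferred from the constants in the formula, and NumPy is used for illustration:

```python
import numpy as np

def velocity_to_grey(v, v_min=-5.0, v_max=5.0):
    """Map radial velocities (m/s) to 8-bit grey levels via
    I = 255 * (V + 5) / 10; the +/-5 m/s extrema are assumed
    from the constants in the paper's formula."""
    grey = 255.0 * (v - v_min) / (v_max - v_min)
    return np.clip(grey, 0, 255).astype(np.uint8)

# a hypothetical radial-velocity sample from one RHI scan
v = np.array([[-5.0, 0.0, 5.0]])
out = velocity_to_grey(v)  # grey levels 0, 127, 255
```

The subsequent bicubic expansion to 72 × 158 pixels could then be performed with, e.g., `scipy.ndimage.zoom(img, factor, order=3)`; the authors' interpolation routine is not specified, so this is an assumption.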
A total of 6352 wind speed images were obtained, including 3169 images with a wake and 3083 images without a wake. These images cover ordinary atmospheric conditions, e.g., calm wind, downwind, and headwind. In the experiment, 1500 images were selected as the training dataset, and 320 images were selected for each of the validation and test sets.
The images with wake were marked as positive, while the images without wake were marked as negative. The positive and negative images are shown in Figure 5. Figure 5a,c,e shows the positive samples under downwind, calm wind, and headwind. It is seen that the wake may shift, affected by the ambient wind. Figure 5b,d,f shows the negative samples under downwind, calm wind, and headwind.
Limited sample quantity may lead to overfitting of the CNN model. The data augmentation technique was used to overcome this problem [19]. In the experiment, data augmentation was used to expand the training set. The data were augmented by scaling and rotating the images: the images were scaled by a random ratio within the range of 0.8–1.2 and rotated by a random angle within the range of −5° to +5°. The resulting images were cropped or padded with pixels to maintain the original image size. The data augmentation technique produced images with wakes of different intensities, which increased the diversity of the data samples. The scaling of the images imitated the wakes produced by different aircraft, and the rotation of the images imitated the wakes affected by the ambient wind.
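A minimal sketch of one augmentation pass, assuming SciPy's `ndimage` routines for the resampling (the authors' implementation and interpolation order are not specified):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(seed=0)

def augment(img, out_shape=(72, 158)):
    """Randomly scale (0.8-1.2x) and rotate (-5 to +5 degrees) one
    wind-field image, then crop or zero-pad back to the original size."""
    scale = rng.uniform(0.8, 1.2)    # imitates wakes of different aircraft
    angle = rng.uniform(-5.0, 5.0)   # imitates tilting by the ambient wind
    out = ndimage.zoom(img, scale, order=1)
    out = ndimage.rotate(out, angle, reshape=False, order=1)
    out = out[:out_shape[0], :out_shape[1]]                    # crop if enlarged
    pad = [(0, max(s - o, 0)) for s, o in zip(out_shape, out.shape)]
    return np.pad(out, pad)                                    # pad if shrunk

augmented = augment(np.ones((72, 158), dtype=np.float32))
```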

3. Improved ParNet

Although the CNN performs well on various large image datasets, certain problems occur when the traditional CNN is directly used for wake recognition. First, a remarkable difference exists between the low-resolution lidar single-channel image and the common optical image. Second, the implementation of the neural network algorithm needs to consider the hardware’s computing power and storage capacity. The ParNet has a simple network structure. Through the addition of the stream structure, the model can obtain good performance and be easily improved. On the basis of the characteristics of wake recognition, this study improved the ParNet so as to achieve effective recognition of the aircraft’s wake.

3.1. ParNet Model

This study used the ParNet to identify wake for the first time. This model uses a parallel subnet framework rather than increasing the depth of the neural network [20]. The structure of the ParNet model is illustrated in Figure 6. The ParNet contains three parallel streams to process feature maps of different scales, which are conducive to the extraction of the local features of various wakes. The feature maps processed by different streams are fused at the backend of the model and used for downstream tasks. In addition to the ParNet block, the ParNet includes downsampling and fusion blocks. The downsampling block reduces the resolution of the feature map and increases its width to achieve multi-scale processing. The fusion block integrates information from multiple streams into a stable multimodal fusion feature, which is conducive to stable recognition performance in the downstream classifier.
As shown in Figure 7, the ParNet block uses the skip–squeeze–extraction (SSE) attention mechanism. The SSE attention mechanism is a lightweight improvement over the SE attention mechanism. The SiLU activation function is adopted in the model.

3.2. Improved ParNet

On the basis of the characteristics of the wake identification task, the following improvements were made to the ParNet block, as shown in Figure 8: (1) Compared with ordinary optical images, the greyscale images generated in Section 2.2 had low resolution, and their grey levels varied within a very limited range. However, the ordinary ParNet has a large number of parameters, which makes it difficult to apply in the field of wake recognition. Therefore, to reduce the network size, the standard 3 × 3 convolution of the ParNet block in the parallel subnetwork was replaced by DSC. Reducing the model’s parameters shortens the training time and significantly lowers the model’s hardware requirements. Concurrently, the residual structure was retained to avoid gradient vanishing and explosion. (2) The SSE channel attention module of the ParNet block cannot effectively extract spatial features, while wake recognition requires more attention to the spatial features of the wake samples. Therefore, introducing a spatial attention module is important. The improved ParNet block removes the original SSE module and adds the CBAM after the addition layer. Because the CBAM contains a sigmoid activation function, the SiLU activation function was no longer necessary in the improved ParNet.
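The DSC substitution in improvement (1) can be sketched with Keras layers. The 1 × 1 branch, batch normalisation, and filter count below are assumptions, since the paper gives only the block diagram:

```python
import tensorflow as tf
from tensorflow.keras import layers

def improved_parnet_branch(x, filters):
    """ParNet-block branch with the standard 3x3 convolution replaced
    by a depthwise-separable convolution; the 1x1 branch and the
    residual-style addition are retained. The CBAM (Section 3.2.2)
    would be applied to the returned tensor."""
    b3 = layers.SeparableConv2D(filters, 3, padding="same")(x)  # DSC branch
    b1 = layers.Conv2D(filters, 1, padding="same")(x)           # 1x1 branch
    return layers.BatchNormalization()(layers.Add()([b3, b1]))

x = tf.random.normal((1, 72, 158, 16))
y = improved_parnet_branch(x, 16)  # spatial shape and channels preserved
```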
In addition, owing to the small number of samples, unreasonably large model parameters lead to overfitting of the model. In this study, the number of convolution kernels of the ParNet was appropriately optimised. The number of parameters of the ParNet and the improved ParNet was 7.72 M and 0.46 M, respectively.

3.2.1. Depthwise Separable Convolution

To solve the problem of the computing capacity of neural networks, Google used DSC on MobileNet in 2017. DSC significantly reduces the calculation amount of the neural network and essentially guarantees the stability of its performance; this is of considerable significance in practical applications [21].
DSC can be divided into two processes: depthwise convolution (DC) and pointwise convolution (PC), as shown in Figure 9. For a C × M × H input, a G-channel output is obtained using unpadded (valid) convolution. An ordinary convolutional layer uses G convolutions with N × N × C kernels. In the DSC, by contrast, the DC uses C convolutions with N × N × 1 kernels, each channel being convolved by only one kernel, and the PC then uses G pointwise convolutions with 1 × 1 × C kernels to produce the G-channel output. The number of multiplications of the DSC, W1, and the number of multiplications of the standard convolution, W2, are calculated as
$$W_1 = (N \times N + G) \times (M - N + 1) \times (H - N + 1) \times C,$$
$$W_2 = N \times N \times (M - N + 1) \times (H - N + 1) \times C \times G.$$
The ratio is
$$\frac{W_1}{W_2} = \frac{N \times N + G}{N \times N \times G} = \frac{1}{G} + \frac{1}{N \times N}.$$
It is seen that DSC consumes much fewer computation resources than standard convolution.
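The ratio can be checked numerically; the kernel and channel sizes below are illustrative, not taken from the paper:

```python
def dsc_mults(N, C, G, M, H):
    """W1: multiplications of a depthwise-separable convolution on a
    C x M x H input with valid padding (the paper's formula)."""
    return (N * N + G) * (M - N + 1) * (H - N + 1) * C

def std_mults(N, C, G, M, H):
    """W2: multiplications of a standard convolution with G NxNxC kernels."""
    return N * N * (M - N + 1) * (H - N + 1) * C * G

# illustrative sizes: 3x3 kernels, 32 input channels, 64 output channels
N, C, G, M, H = 3, 32, 64, 72, 158
ratio = dsc_mults(N, C, G, M, H) / std_mults(N, C, G, M, H)
assert abs(ratio - (1 / G + 1 / (N * N))) < 1e-12  # spatial terms cancel
```

For these sizes the DSC needs only about 12.7% of the multiplications of the standard convolution.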

3.2.2. CBAM

The attention module is widely used in deep learning applications [22]. It can focus on important features and suppress unnecessary features. The aircraft wake recognition model can better learn the main features of the wake through the attention module.
The CBAM includes the channel attention sub-module and the spatial attention sub-module, as shown in Figure 10. Compared with other attention modules, the CBAM focuses not only on channel features but also on spatial features, which gives the model a better feature extraction capability [23].
In the channel attention sub-module, the input feature map, F, was pooled through global maximum pooling and global average pooling. The two resulting descriptors were sent to a shared fully connected network and then summed, and the sum was passed through a sigmoid activation. The generated channel attention map, M_C(F), served as the channel attention descriptor. Finally, the channel-refined feature map, F′, was obtained by multiplying M_C(F) with the input feature map, F.
In the spatial attention sub-module, the feature map, F′, was pooled through channel-wise average pooling and maximum pooling to obtain two single-channel feature maps, which were then concatenated. After a convolution and sigmoid activation, the generated spatial attention map, M_S(F′), served as the spatial attention descriptor. Finally, the spatial attention feature map, F″, was obtained by multiplying M_S(F′) with the input of the module, F′.
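A NumPy sketch of the two sub-modules' forward pass. The shared-MLP weights are random stand-ins, and the learned 7 × 7 convolution of the spatial sub-module is replaced here by a simple average of the two maps for brevity; both are assumptions for illustration only:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(f, w1, w2):
    """M_C(F): shared MLP over global avg- and max-pooled descriptors,
    summed and passed through a sigmoid. f has shape (H, W, C)."""
    mlp = lambda v: np.maximum(v @ w1, 0.0) @ w2   # ReLU hidden layer
    return sigmoid(mlp(f.mean(axis=(0, 1))) + mlp(f.max(axis=(0, 1))))

def spatial_attention(f):
    """M_S(F'): channel-wise avg and max maps combined and squashed.
    (The learned 7x7 convolution is replaced by their mean here.)"""
    return sigmoid((f.mean(axis=-1) + f.max(axis=-1)) / 2.0)

def cbam(f, w1, w2):
    f1 = f * channel_attention(f, w1, w2)[None, None, :]  # F' = M_C(F) * F
    return f1 * spatial_attention(f1)[:, :, None]         # F'' = M_S(F') * F'

rng = np.random.default_rng(0)
f = rng.normal(size=(4, 5, 8))
w1, w2 = rng.normal(size=(8, 2)), rng.normal(size=(2, 8))
out = cbam(f, w1, w2)  # same shape as f; attention only rescales features
```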

4. Aircraft Wake Identification Results and Discussion

4.1. Evaluation Indicators of Identification Effect

To evaluate the effectiveness of the improved ParNet proposed herein for wake identification, the accuracy, precision, recall rate, and F1 value were selected as evaluation indicators. These four indicators are standard in the field of pattern recognition. Accuracy is the most important one for evaluating the overall performance of the model. Precision is closely related to the probability of false alarms: high precision corresponds to a small probability of a false alarm. The recall rate is related to the probability of missed detection: a high recall rate corresponds to a small probability of a missed detection. F1 is the harmonic mean of the precision and recall rate. The four indicators are calculated as
$$P_{\mathrm{Accuracy}} = \frac{TP + TN}{TP + TN + FP + FN},$$
$$P_{\mathrm{Recall}} = \frac{TP}{TP + FN},$$
$$P_{\mathrm{Precision}} = \frac{TP}{TP + FP},$$
$$P_{F1} = \frac{2 \times P_{\mathrm{Precision}} \times P_{\mathrm{Recall}}}{P_{\mathrm{Precision}} + P_{\mathrm{Recall}}}.$$
Samples with a wake were labelled positive, and samples without a wake were labelled negative. TP denotes the number of samples identified as positive with positive truth values. FP indicates the number of samples identified as positive with negative truth values. FN denotes the number of samples identified as negative with positive truth values. TN refers to the number of samples identified as negative with negative truth values.
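The four indicators follow directly from the confusion-matrix counts; the counts below are illustrative, not the paper's results:

```python
def wake_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# illustrative counts for a balanced wake / non-wake test set
acc, prec, rec, f1 = wake_metrics(tp=317, fp=3, fn=4, tn=316)
```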
Additionally, we used floating point operations (FLOPs) to evaluate the computation complexity of the models in aircraft wake recognition. The million floating point operations (MFLOPs) are normally used.

4.2. Experimental Environment and Settings

The experimental environment is based on a Windows 10 system. The main hardware includes an Intel Core i7-10700 processor, an NVIDIA Quadro P400 4 GB graphics card, and 512 GB of memory. The experiment was performed using the TensorFlow 2.1 deep learning framework. During training, the batch size was set to 32, and the stochastic gradient descent (SGD) optimiser was used. The initial learning rate was 0.01, which was multiplied by 0.95 after each epoch. A total of 70 epochs were trained, and the model weights were saved once every 10 epochs.
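The learning-rate schedule can be expressed as a simple per-epoch function; wiring it into training via `tf.keras.callbacks.LearningRateScheduler` is an assumption about the authors' setup:

```python
def learning_rate(epoch, lr0=0.01, decay=0.95):
    """LR schedule from the paper: start at 0.01 and multiply by 0.95
    after each completed epoch."""
    return lr0 * decay ** epoch

# 70 training epochs, weights saved every 10 epochs
checkpoint_epochs = [e for e in range(1, 71) if e % 10 == 0]
```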

4.3. Ablation Study

In the ablation experiment, the ParNet was used as the benchmark. A ParNet with depthwise separable convolution (DSC-ParNet), a ParNet with the CBAM (CBAM-ParNet), and a ParNet with both DSC and the CBAM (DSC-CBAM-ParNet) were respectively used to complete the wake recognition. The recognition effects were compared.
During the training in the ablation experiment, the changes in the validation set’s precision and validation loss are shown in Figure 11. Compared with the ParNet, the performance of the DSC-ParNet was improved slightly, and the loss after the convergence was relatively stable. The improvement in the CBAM-ParNet’s performance was more evident. The performance of the DSC-CBAM-ParNet model was significantly improved, and the loss was the lowest when it was stable. After the convolution in the ParNet model was replaced by the DSC, the reduced parameters made the convergence effect of the model obvious, and the model’s performance was partially improved. The CBAM can significantly improve the performance of the model.
The recognition effect of each model on aircraft wake is shown in Table 2. Overall, the accuracy, precision, recall rate, and F1 value of the ParNet method for aircraft wake recognition were all above 95%, with a good recognition effect. Various improved models based on the ParNet had limited improvement in the recognition effect. The DSC-CBAM-ParNet model proposed herein maximised the recognition effect of the model on aircraft wake, and the recognition accuracy, precision, recall rate, and F1 value were all above 98%. It is also seen from the MFLOPs that DSC considerably reduces the computation complexity in wake recognition. Concurrently, the CBAM, as one of the spatial attention modules, increases the accuracy of the recognition. Consequently, the improved ParNet has evident advantages over the ordinary ParNet in terms of both recognition performance and computation complexity.

4.4. Comparison Experiments of Different Attention Modules

To compare the difference between the CBAM and other attention modules in improving the wake recognition performance of the model, four attention modules of the ECA, SE, CA, and SSE were introduced into the recognition model [24,25,26]. During the training, the changes in verification accuracy are shown in Figure 12.
Figure 12 shows that the SE and SSE modules, which use only channel attention, had the worst effect. The CA module had a certain spatial representation capability, which improved its verification accuracy over the original model. The ECA module had good lightweight characteristics, resulting in a good balance between the calculation amount and the feature extraction capability of the model; its verification accuracy was similar to that of the CBAM. Because of the dedicated spatial attention of the CBAM, its verification accuracy increased significantly, achieving the highest value among all of the attention modules. Therefore, the CBAM was used in the improved ParNet to effectively improve the wake recognition performance.
The effects of wake recognition using the improved models with different attention modules are shown in Table 3. The CBAM adopted in this study reached the optimal value on all four evaluation indicators, yielding the best recognition effect.

4.5. Visualisation of Identification Results

The improved ParNet classifies and visualises some images of the test set, as shown in Figure 13. The class is zero when the wake is identified, and the closer to zero the score is, the better the recognition performance is. The class is one when the wake is not identified, and the closer to one the score is, the better the recognition performance is. We purposely selected the representative samples with different time periods, different flights, and different background winds.
It is seen in Figure 13 that although both the ParNet and the improved ParNet output correct recognition results, the scores of the improved ParNet are closer to the intended values, which shows that the improved ParNet can effectively identify aircraft wake. It is also noted that when the left and right vortices separated and dissipated along with the evolution of the atmospheric environment, the model had better generalisation capability because the attention module was able to learn the characteristics of the wake at different stages.
The confusion matrices of the wake recognition using the ordinary ParNet and improved ParNet are demonstrated in Figure 14. It is clearly seen that the improved ParNet improves the recognition accuracy for the wake and non-wake, respectively. Concurrently, the improved ParNet considerably decreases the probabilities of false alarms and miss detection. The improved ParNet appears to have good performance in recognizing the aircraft wake.

4.6. Comparison Experiment of Different Recognition Models

To verify the superiority of the proposed model with regard to wake recognition, the existing SVM, KNN, CNN models, and the deep learning recognition model, VGG16, were compared with the improved ParNet model [11,12,27,28]. VGG16 inherits the advantages of AlexNet. Moreover, with small convolution kernels and deep network structures, its recognition performance is better than that of AlexNet.
In this study, the parameters of the SVM and KNN were optimised using the grid search algorithm. The CNN’s hyperparameters were set according to the neural network setting method in meta-learning. VGG16 used the transfer learning method to fine-tune the parameters for this recognition task. The wake identification effect of each model is shown in Table 4.
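The grid search used to tune the SVM and KNN can be sketched generically; the parameter ranges and the scoring callback below are illustrative assumptions, not the authors' settings:

```python
from itertools import product

def grid_search(evaluate, grid):
    """Exhaustive grid search: try every parameter combination and
    keep the highest-scoring one."""
    best_params, best_score = None, float("-inf")
    for values in product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        score = evaluate(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# illustrative SVM grid; `evaluate` would train the SVM and score it
# on the validation set
svm_grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}
best, score = grid_search(lambda p: -abs(p["C"] - 1) - abs(p["gamma"] - 0.1),
                          svm_grid)
```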
As indicated by Table 4, the accuracy, precision, recall, and F1 value of the improved ParNet model are 98.91%, 99.06%, 98.90%, and 98.90%, respectively, which are superior to those of all other methods. Compared with the SVM, KNN, and traditional CNN, the aircraft wake recognition performance of the proposed model significantly improved. Compared with the transfer-learning method that uses VGG16 as the backbone network with fine-tuning, the proposed method achieved similar recognition performance with fewer parameters and less computation. It also reduced the hardware requirements and made the model more economical.

5. Discussion

The CNN seems promising in the application of aircraft wake recognition, and our work shows that a CNN based on the improved ParNet excels in this application. However, the method can be improved in the future. Owing to the limitation of the data samples, the recognition performance may degrade in extreme weather. Although wake seldom appears in extreme weather, the study of wake recognition under extreme weather conditions is especially important for flight safety. Correspondingly, it is noted that the proposed method may efficiently recognise wake with conventional spatial structures; however, missed detections may remain when the spatial structure of the wake is heavily spoiled by a severe ambient atmosphere. Hence, the wake recognition performance can be improved if atmospheric conditions are considered in the design of the CNN configuration.
On the other hand, the datasets used are wind field images captured by the lidar installed at the airport. Therefore, the diversity of the data samples is affected by the lidar wind field scanning mode, in particular, the scope of the scanning angle and the location of the lidar. Correspondingly, a wake recognition method oriented to multiple lidars operating simultaneously for wind field scanning remains to be studied. The consequent results may better serve air traffic management.

6. Conclusions

This paper proposes a new method of aircraft wake recognition based on the improved ParNet. In the improved ParNet, DSC is introduced to significantly reduce the model’s parameters. Concurrently, to improve the model’s capability to extract spatial features, the CBAM replaces the parallel attention module in the original model. Considering the wind field data of Hong Kong International Airport obtained by lidar as the dataset, this paper discusses the recognition effects of the ParNet and its different improved versions on wake. The DSC and the CBAM are experimentally shown to be able to considerably improve the recognition performance of the model. Further experimental results show that the performance of the proposed aircraft wake recognition model is optimal compared with that of the SVM, KNN, traditional CNN, and VGG16 in terms of recognition effect and required software and hardware costs. The method based on the improved ParNet proposed herein can realise the accurate identification of aircraft wakes in complex environments.

Author Contributions

Conceptualization, Y.M.; methodology, Y.M. and J.Z.; software, J.Z. and H.H.; supervision, P.-w.C. and X.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number U1833111.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used to support the findings of the study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hallock, J.N.; Holzapfel, F. A review of recent wake vortex research for increasing airport capacity. Prog. Aerosp. Sci. 2018, 98, 27–36. [Google Scholar] [CrossRef]
  2. Kaden, A.; Luckner, R. Impact of Wake Vortex Deformation on Aircraft Encounter Hazard. J. Aircr. 2019, 56, 800–811. [Google Scholar] [CrossRef]
  3. Kovalev, D.A.; Vanhoenacker-Janvier, D.; Billuart, P.; Duponcheel, M.; Winckelmans, G. Modeling of Wake Vortex Radar Detection in Clear Air Using Large-Eddy Simulation. J. Atmos. Ocean. Technol. 2019, 36, 2045–2062. [Google Scholar] [CrossRef]
  4. Xu, Z.; Li, D.; An, B.; Pan, W. Enhancement of wake vortex decay by air blowing from the ground. Aerosp. Sci. Technol. 2021, 118, 107029. [Google Scholar] [CrossRef]
  5. Zhang, J.; Zuo, Q.; Lin, M.; Huang, W.; Pan, W.; Cui, G. Numerical simulation on near-field evolution of wake vortices of ARJ21 plane with crosswind. Hangkong Xuebao/Acta Aeronaut. Astronaut. Sin. 2022, 43, 125043. [Google Scholar]
  6. Holzapfel, F. Probabilistic two-phase wake vortex decay and transport model. J. Aircr. 2003, 40, 323–331. [Google Scholar] [CrossRef]
  7. Dolfi-Bouteyre, A.; Canat, G.; Valla, M.; Augere, B.; Besson, C.; Goular, D.; Lombard, L.; Cariou, J.; Durecu, A.; Fleury, D.; et al. Pulsed 1.5-μm lidar for axial aircraft wake vortex detection based on high-brightness large-core fiber amplifier. IEEE J. Sel. Top. Quantum Electron. 2009, 15, 441–450. [Google Scholar] [CrossRef]
  8. Yoshikawa, E.; Matayoshi, N. Aircraft Wake Vortex Retrieval Method on Lidar Lateral Range-Height Indicator Observation. AIAA J. 2017, 55, 2269–2278. [Google Scholar] [CrossRef]
  9. Xu, S.; Hu, Y.; Wu, Y. Identification of aircraft wake vortex based on Doppler spectrum features. Guangdianzi Jiguang/J. Optoelectron. Laser 2011, 22, 1826–1830. [Google Scholar]
  10. Wang, X.; Wu, S.; Liu, X.; Yin, J.; Pan, W.; Wang, X. Observation of Aircraft Wake Vortex Based on Coherent Doppler Lidar. Acta Opt. Sin. 2021, 41, 901001–901018. [Google Scholar]
  11. Weijun, P.; Zhengyuan, W.; Zhang, X.; Achiam, J. Identification of aircraft wake vortex based on k-nearest neighbor. Laser Technol. 2020, 44, 471–477. [Google Scholar]
  12. Weijun, P.; Zhengyuan, W.; Xiaolei, Z. Identification of Aircraft Wake Vortex Based on SVM. Math. Probl. Eng. 2020, 2020, 9314164. [Google Scholar]
  13. Weijun, P.; Yingjie, D.; Qiang, Z.; Zhengyuan, W.; Haochen, L. Research on aircraft wake vortex recognition using AlexNet. Opto-Electron. Eng. 2019, 46, 121–128. [Google Scholar]
  14. Murphy, B.; O’Callaghan, J.; Fox, M.; Ilcewicz, L.; Starnes, J.H., Jr. Overview of the structures investigation for the American Airlines flight 587 investigation. In Collection of Technical Papers, Proceedings of the AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Austin, TX, USA, 18–21 April 2005; American Institute of Aeronautics and Astronautics: Reston, VA, USA, 2005. [Google Scholar]
  15. Shen, C.; Gao, H.; Wang, X.; Li, J. Aircraft wake vortex parameter-retrieval system based on lidar. J. Radars 2020, 9, 1032–1044. [Google Scholar]
  16. Royer, P.; Boquet, M.; Cariou, J.P.; Sauvage, L.; Parmentier, R. Aerosol/Cloud Measurements Using Coherent Wind Doppler Lidars. EPJ Web Conf. 2016, 119, 11002. [Google Scholar] [CrossRef] [Green Version]
  17. Shu, Y.; Petersen, G.N.; von Lowis, S.; Preissler, J.; Finger, D.C. Determination of eddy dissipation rate by Doppler lidar in Reykjavik, Iceland. Meteorol. Appl. 2020, 27, e1913–e1951. [Google Scholar]
  18. Hon, K.K.; Chan, P.W.; Chim, K.C.Y.; Visscher, I.D.; Thobois, L.; Troiville, A.; Rooseleer, F. Wake vortex measurements at the Hong Kong International Airport. In Proceedings of the AIAA Science and Technology Forum and Exposition, AIAA SciTech Forum 2022, San Diego, CA, USA, 3–7 January 2022; American Institute of Aeronautics and Astronautics Inc., AIAA: Reston, VA, USA, 2022. [Google Scholar]
  19. Liu, X.; Pedersen, M.; Wang, R. Survey of natural image enhancement techniques: Classification, evaluation, challenges, and perspectives. Digit. Signal Prog. 2022, 127, 103547. [Google Scholar] [CrossRef]
  20. Goyal, A.; Bochkovskiy, A.; Deng, J.; Koltun, V. Non-deep networks. arXiv 2021, arXiv:2110.07641. [Google Scholar]
  21. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
  22. Liu, J.; Liu, J.; Luo, X. Research progress in attention mechanism in deep learning. Gongcheng Kexue Xuebao/Chin. J. Eng. 2021, 43, 1499–1511. [Google Scholar]
  23. Sanghyun, W.; Jongchan, P.; Joon-Young, L.; In, S.K. CBAM: Convolutional Block Attention Module. arXiv 2018, arXiv:1807.06521. [Google Scholar]
  24. Qilong, W.; Banggu, W.; Pengfei, Z.; Peihua, L.; Wangmeng, Z.; Qinghua, H. ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–18 June 2020; IEEE Computer Society: Los Alamitos, CA, USA, 2020. [Google Scholar]
  25. Hu, J.; Shen, L.; Albanie, S.; Sun, G.; Wu, E. Squeeze-and-Excitation Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 2011–2023. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  26. Qibin, H.; Daquan, Z.; Jiashi, F. Coordinate Attention for Efficient Mobile Network Design. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual, 19–25 June 2021; IEEE: Piscataway, NJ, USA, 2021. [Google Scholar]
  27. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, 7–9 May 2015; International Conference on Learning Representations, ICLR: San Diego, CA, USA, 2015. [Google Scholar]
  28. Nichol, A.; Achiam, J.; Schulman, J. On first-order meta-learning algorithms. arXiv 2018, arXiv:1803.02999. [Google Scholar]
Figure 1. Lidar (red circle) at the Hong Kong International Airport.
Figure 2. The lidar working in RHI mode for aircraft wake recognition.
Figure 3. Lidar maps demonstrating the aircraft wake evolution. (a–c) Early stage; (d–f) middle stage; (g–i) late stage.
Figure 4. Lidar image conversion.
Figure 5. Wind field images with and without a wake. (a) Positive downwind; (b) negative downwind; (c) positive calm wind; (d) negative calm wind; (e) positive headwind; (f) negative headwind.
Figure 6. Structure of ParNet.
Figure 7. Structure of ParNet block.
Figure 8. Structure of improved ParNet block.
Figure 9. Depthwise separable convolution.
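Figure 9 illustrates the depthwise separable convolution (DSC) used to make the model lightweight: a per-channel spatial convolution followed by a 1 × 1 pointwise convolution. A minimal sketch of the parameter-count saving this factorization yields (weight counts only, biases ignored; the channel sizes are illustrative, not the paper's actual layer dimensions):

```python
def standard_conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution: one k x k x c_in filter per output channel."""
    return k * k * c_in * c_out

def dsc_params(k, c_in, c_out):
    """Weights in a depthwise separable convolution: one k x k filter per input
    channel (depthwise), then a 1 x 1 pointwise convolution mixing channels."""
    return k * k * c_in + c_in * c_out

# Illustrative example: a 3x3 convolution with 64 input and 64 output channels.
std = standard_conv_params(3, 64, 64)   # 36864 weights
dsc = dsc_params(3, 64, 64)             # 576 + 4096 = 4672 weights
print(std, dsc, round(std / dsc, 1))    # roughly an 8x reduction
```

The same factorization underlies MobileNets [21]; the reduction factor approaches 1/c_out + 1/k² of the standard cost as the channel count grows.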
Figure 10. Structure of CBAM.
Figure 11. Verification accuracy and loss curves of ablation experiment. (a) Verification accuracy; (b) verification loss.
Figure 12. Verification accuracy of attention module comparison experiment.
Figure 13. Visualization of wake recognition. (a,b,e,f,i,j) with wake; (c,d,g,h,k,l) without wake; (a,c,e,g,i,k) by ParNet; (b,d,f,h,j,l) by improved ParNet.
Figure 14. Confusion matrix. (a) ParNet; (b) improved ParNet.
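The accuracy, precision, recall, and F1 values reported in Tables 2–4 follow from the binary confusion-matrix counts of Figure 14. A minimal sketch of those standard definitions, using hypothetical counts for illustration (not the paper's actual confusion-matrix entries):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from binary confusion-matrix counts
    (tp = true positives, fp = false positives, fn = false negatives, tn = true negatives)."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

# Hypothetical counts for a balanced 640-image test set.
acc, p, r, f1 = classification_metrics(tp=316, fp=3, fn=4, tn=317)
print(f"accuracy={acc:.2%} precision={p:.2%} recall={r:.2%} F1={f1:.2%}")
```

Since F1 is the harmonic mean of precision and recall, it rewards models that keep both high simultaneously rather than trading one for the other.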
Table 1. Lidar parameters.

Parameter               Value
Radar wavelength        1.54 μm
Pulse width             200 ns
Pulse repetition rate   20 kHz
Detection range         50–6000 m
Range gate width        25 m
Angular resolution      0.5°
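Two of the parameters in Table 1 can be sanity-checked against basic pulsed-lidar physics: the pulse repetition rate bounds the maximum unambiguous range, and the pulse width bounds the pulse-limited range resolution. A short sketch (standard radar/lidar relations, not taken from the paper):

```python
C = 299_792_458.0  # speed of light, m/s

def max_unambiguous_range(prf_hz):
    """Farthest range a pulse can travel out and back before the next pulse is emitted."""
    return C / (2.0 * prf_hz)

def pulse_range_resolution(pulse_width_s):
    """Range resolution limited by the transmitted pulse length (c * tau / 2)."""
    return C * pulse_width_s / 2.0

# ~7495 m at a 20 kHz PRF, consistent with the 6000 m detection range in Table 1.
print(max_unambiguous_range(20e3))
# ~30 m for a 200 ns pulse, on the order of the 25 m range gate width.
print(pulse_range_resolution(200e-9))
```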
Table 2. Recognition effect of ablation experiment.

Method            Accuracy (%)   Precision (%)   Recall (%)   F1 Value (%)   MFLOPs
ParNet            96.88          98.07           95.63        96.84          1.79
DSC-ParNet        97.34          98.10           96.56        97.32          0.84
CBAM-ParNet       97.81          97.51           98.12        97.82          1.70
DSC-CBAM-ParNet   98.91          99.06           98.90        98.90          0.75
Table 3. Recognition results of attention comparison experiment.

Attention Module   Accuracy (%)   Precision (%)   Recall (%)   F1 Value (%)   MFLOPs
SSE                96.88          92.64           96.92        94.74          0.84
ECA                98.28          99.04           97.50        98.26          0.73
SE                 95.62          98.02           93.12        95.51          0.74
CA                 97.19          98.07           95.63        96.84          0.75
CBAM               98.91          99.06           98.90        98.90          0.75
Table 4. Recognition effect of different recognition models.

Method            Accuracy (%)   Precision (%)   Recall (%)   F1 Value (%)   MFLOPs
SVM               93.59          94.85           92.19        93.50          —
KNN               92.65          94.17           90.94        92.53          —
CNN               94.84          96.14           93.44        94.77          0.12
VGG16             98.44          98.74           98.13        98.43          15.00
Improved ParNet   98.91          99.06           98.90        98.90          0.75
Share and Cite

Ma, Y.; Zhao, J.; Han, H.; Chan, P.-w.; Xiong, X. Aircraft Wake Recognition Based on Improved ParNet Convolutional Neural Network. Appl. Sci. 2023, 13, 3560. https://doi.org/10.3390/app13063560