Vision-Based Automatic Collection of Nodes of In/Off Block and Docking/Undocking in Aircraft Turnaround
Abstract
1. Introduction
2. Related Works
2.1. Collection of KMNs
2.2. Preprocessing for the Collection of KMNs
3. Dataset Collection
3.1. Dataset Comprising Single Images
3.2. Dataset Comprising Video Sequences
4. Preprocessing Module
4.1. Detection and Recognition
4.2. Position Prediction
4.3. Association
5. Collection of KMNs
5.1. Aircraft Confirmation at the Current Position
Algorithm 1. Aircraft confirmation at the current position
Inputs: bounding boxes of the detected aircraft, i = 1, 2, …, n. W and H are the width and height of the input image, respectively.
for i = 1:n
  if wi > W/2 or hi > H/2
    Si = wi × hi
  else
    Si = 0
  end if
end for
Outputs: the bounding box of the aircraft at the current position.
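As a minimal sketch (not the authors' code), the rule above can be written in Python, assuming detections arrive as (x, y, w, h) boxes; selecting the candidate with the largest non-zero score Si is an assumption consistent with how Si is defined:

```python
# Sketch of Algorithm 1, assuming (x, y, w, h) boxes. A box only counts
# as the stand's aircraft if it spans more than half the image in width
# or height; among such candidates the largest area S_i = w_i * h_i wins
# (the tie-breaking rule is an assumption).
def confirm_current_aircraft(boxes, W, H):
    best, best_area = None, 0
    for (x, y, w, h) in boxes:
        s = w * h if (w > W / 2 or h > H / 2) else 0  # S_i in Algorithm 1
        if s > best_area:
            best, best_area = (x, y, w, h), s
    return best  # None if no detection dominates the frame
```

For example, with a 1280 × 720 frame, a 900 × 400 aircraft box is confirmed while a 100 × 80 vehicle box is ignored.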
5.2. Motion State Estimation for a Single Target
5.3. Collection of KMNs Based on a Single Target
Algorithm 2. In-block and off-block node collection
Inputs: for a sequence of num frames, the bounding box of the aircraft in each frame, i = 1, 2, …, num. Thc and ThIoU of Equation (10) are initialized (manually determined).
for t = r + 2 : num − r − 1
  Compute C1(t − 1), C1(t), and C1(t + 1) using Condition 1 of Equation (10).
  if C1(t − 1) = 0 and C1(t) = 1 and C1(t + 1) = 1
    The in-block node is t.
  else if C1(t − 1) = 1 and C1(t) = 0 and C1(t + 1) = 0
    The off-block node is t.
  end if
end for
Outputs: the in-block and off-block nodes.
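Because Equation (10) is not reproduced in this excerpt, the following Python sketch approximates Condition 1: the aircraft counts as stationary at frame t (C1(t) = 1) when the centre displacement between the boxes r frames before and after t stays below Thc and their IoU stays above ThIoU. All function names and threshold values are illustrative assumptions:

```python
# Approximate Condition 1 and the node-collection loop of Algorithm 2.
# Boxes are (x, y, w, h); the exact form of Equation (10) is assumed.
def iou(a, b):
    ix = max(0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def c1(boxes, t, r, th_c, th_iou):
    # C1(t) = 1 when the box barely moves over the window [t - r, t + r].
    a, b = boxes[t - r], boxes[t + r]
    dc = ((a[0] + a[2] / 2 - b[0] - b[2] / 2) ** 2 +
          (a[1] + a[3] / 2 - b[1] - b[3] / 2) ** 2) ** 0.5
    return 1 if dc < th_c and iou(a, b) > th_iou else 0

def block_nodes(boxes, r=1, th_c=5.0, th_iou=0.8):
    nodes = {}
    for t in range(r + 1, len(boxes) - r - 1):
        prev, cur, nxt = (c1(boxes, t - 1, r, th_c, th_iou),
                          c1(boxes, t, r, th_c, th_iou),
                          c1(boxes, t + 1, r, th_c, th_iou))
        if (prev, cur, nxt) == (0, 1, 1):      # moving -> stationary
            nodes.setdefault("in_block", t)
        elif (prev, cur, nxt) == (1, 0, 0):    # stationary -> moving
            nodes.setdefault("off_block", t)
    return nodes
```

Checking the triplet (0, 1, 1) rather than a single frame suppresses one-frame jitter in the stationarity test, which matches the algorithm's use of three consecutive values.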
5.4. Collection of KMNs Based on Multi-Object Interaction
Algorithm 3. Docking and undocking stairs node detection
Inputs: for a sequence of num frames, the bounding boxes of the aircraft and of the mobile aircraft landing stairs in each frame, i = 1, 2, …, num. Thc and ThIoU are the thresholds of σc and σIoU of Condition 1, respectively.
for t = r + 2 : num − r − 1
  Compute Condition 2, C2(t), using the aircraft and stairs bounding boxes.
  if C2(t) = 1
    Use the bounding boxes of the mobile aircraft landing stairs and Condition 1 to compute C1(t − 1), C1(t), and C1(t + 1).
    if C1(t − 1) = 0 and C1(t) = 1 and C1(t + 1) = 1
      The docking node is t.
    else if C1(t − 1) = 1 and C1(t) = 0 and C1(t + 1) = 0
      The undocking node is t.
    end if
  end if
end for
Outputs: the docking and undocking stairs nodes.
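A self-contained companion sketch for Algorithm 3. Condition 2 is approximated here as "the aircraft box is stationary" (docking and undocking are only meaningful while the aircraft is in-block); the paper's exact Condition 2 may differ, and the stationarity test is the same IoU-based assumption as for Condition 1:

```python
# Sketch of Algorithm 3. C2 gates the search to frames where the aircraft
# is parked; C1 is then evaluated on the stairs' boxes. Boxes are
# (x, y, w, h); the exact conditions follow Equation (10) and are
# approximated here.
def _iou(a, b):
    ix = max(0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def _stationary(boxes, t, r, th_iou):
    # 1 when the boxes r frames before/after t overlap strongly.
    return 1 if _iou(boxes[t - r], boxes[t + r]) > th_iou else 0

def stairs_nodes(aircraft, stairs, r=1, th_iou=0.8):
    nodes = {}
    for t in range(r + 1, len(stairs) - r - 1):
        if not _stationary(aircraft, t, r, th_iou):   # C2: aircraft parked
            continue
        c = [_stationary(stairs, t + d, r, th_iou) for d in (-1, 0, 1)]
        if c == [0, 1, 1]:
            nodes.setdefault("docking", t)
        elif c == [1, 0, 0]:
            nodes.setdefault("undocking", t)
    return nodes
```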
6. Experimental Results and Analysis
6.1. Experimental Results of Detection and Recognition
- (1) PrecisionC
- (2) RecallC
- (3) Mean Average Precision (mAP)
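These three detection metrics can be illustrated with a short sketch. Matching of detections to ground truth is assumed to have been done already; AP is approximated by a simple Riemann sum over the precision-recall curve rather than the interpolated form used in some benchmarks:

```python
# Per-class precision/recall from matched counts, and a simple
# (non-interpolated) average precision over ranked detections.
def precision_recall(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def average_precision(scored_matches, num_gt):
    """scored_matches: (confidence, is_true_positive) pairs; num_gt: #GT boxes."""
    scored_matches = sorted(scored_matches, reverse=True)
    tp = fp = 0
    points = []
    for _, is_tp in scored_matches:
        tp += is_tp
        fp += not is_tp
        points.append((tp / num_gt, tp / (tp + fp)))  # (recall, precision)
    ap, prev_r = 0.0, 0.0
    for r, p in points:
        ap += (r - prev_r) * p
        prev_r = r
    return ap
```

mAP is then the mean of the per-class AP values (here: aircraft and mobile aircraft landing stairs).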
6.2. Experimental Results of the Prediction and Association
6.3. Experimental Results of the Collection of KMNs
7. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Li, B.; Wang, L.; Xing, Z.; Luo, Q. Performance Evaluation of Multiflight Ground Handling Process. Aerospace 2022, 9, 273.
- A-CDM Milestones, Mainly the Target Off-Block Time (TOBT). Available online: http://www.eurocontrol.int/articles/air-portcollaborative-decision-making-cdm (accessed on 3 March 2023).
- More, D.; Sharma, R. The turnaround time of an aircraft: A competitive weapon for an airline company. Decision 2014, 41, 489–497.
- Airport-Collaborative Decision Making (A-CDM): IATA Recommendations. Available online: https://www.iata.org/contentassets/5c1a116a6120415f87f3dadfa38859d2/iata-acdm-recommendations-v1.pdf (accessed on 5 March 2023).
- Wei, K.J.; Vikrant, V.; Alexandre, J. Airline timetable development and fleet assignment incorporating passenger choice. Transp. Sci. 2020, 54, 139–163.
- Tian, Y.; Liu, H.; Feng, H.; Wu, B.; Wu, G. Virtual simulation-based evaluation of ground handling for future aircraft concepts. J. Aerosp. Inf. Syst. 2013, 10, 218–228.
- Perl, E. Review of Airport Surface Movement Radar Technology. IEEE Aerosp. Electron. Syst. Mag. 2006, 21, 24–27.
- Xiong, Z.; Li, M.; Ma, Y.; Wu, X. Vehicle Re-Identification with Image Processing and Car-Following Model Using Multiple Surveillance Cameras from Urban Arterials. IEEE Trans. Intell. Transp. Syst. 2020, 22, 7619–7630.
- Zhang, C.; Li, F.; Ou, J.; Xie, P.; Sheng, W. A New Cellular Vehicle-to-Everything Application: Daytime Visibility Detection and Prewarning on Expressways. IEEE Intell. Transp. Syst. Mag. 2022, 15, 85–98.
- Besada, J.A.; Garcia, J.; Portillo, J.; Molina, J.M.; Varona, A.; Gonzalez, G. Airport Surface Surveillance Based on Video Images. IEEE Trans. Intell. Transp. Syst. 2005, 41, 1075–1082.
- Thirde, D.; Borg, M.; Ferryman, J. A real-time scene understanding system for airport apron monitoring. In Proceedings of the IEEE International Conference on Computer Vision Systems, New York, NY, USA, 4–7 January 2006.
- Zhang, X.; Qiao, Y. A video surveillance network for airport ground moving targets. In Proceedings of the International Conference on Mobile Networks and Management, Chiba, Japan, 10–12 November 2020; pp. 229–237.
- Netto, O.; Silva, J.; Baltazar, M. The airport A-CDM operational implementation description and challenges. J. Airl. Airpt. Manag. 2020, 10, 14–30.
- Simaiakis, I.; Balakrishnan, H. A queuing model of the airport departure process. Transp. Sci. 2016, 50, 94–109.
- Voulgarellis, P.G.; Christodoulou, M.A.; Boutalis, Y.S. A MATLAB based simulation language for aircraft ground handling operations at hub airports (SLAGOM). In Proceedings of the 2005 IEEE International Symposium on Intelligent Control and Mediterrean Conference on Control and Automation, Limassol, Cyprus, 27–29 June 2005; pp. 334–339.
- Wu, C.L. Monitoring aircraft turnaround operations: Framework development, application and implications for airline operations. Transp. Plan. Technol. 2008, 31, 215–228.
- Lu, H.L.; Vaddi, S.; Cheng, V.V.; Tsai, J. Airport Gate Operation Monitoring Using Computer Vision Techniques. In Proceedings of the 16th AIAA Aviation Technology, Integration, and Operations Conference, Washington, DC, USA, 13–17 June 2016.
- Felzenszwalb, P.F.; Girshick, R.B.; McAllester, D.; Ramanan, D. Object detection with discriminatively trained part-based models. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 1627–1645.
- Thai, P.; Alam, S.; Lilith, N.; Phu, T.N.; Nguyen, B.T. Aircraft Push-back Prediction and Turnaround Monitoring by Vision-based Object Detection and Activity Identification. In Proceedings of the 10th SESAR Innovation Days, Online, 7–10 December 2020.
- Thai, P.; Alam, S.; Lilith, N.; Nguyen, B.T. A computer vision framework using Convolutional Neural Networks for airport-airside surveillance. Transp. Res. Part C Emerg. Technol. 2022, 137, 103590.
- Yıldız, S.; Aydemir, O.; Memiş, A.; Varlı, S. A turnaround control system to automatically detect and monitor the time stamps of ground service actions in airports: A deep learning and computer vision based approach. Eng. Appl. Artif. Intell. 2022, 114, 105032.
- Aircraft Turnaround Management Using Computer Vision. Available online: https://medium.com/@michaelgorkow/aircraft-turnaround-management-using-computer-vision-4bec29838c08 (accessed on 6 March 2023).
- Zaidi, S.S.A.; Ansari, M.S.; Aslam, A.; Kanwal, N.; Asghar, M.; Lee, B. A survey of modern deep learning based object detection models. Digit. Signal Process. 2022, 126, 103514.
- Dalal, N.; Triggs, B. Histograms of Oriented Gradients for Human Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20–25 June 2005; pp. 886–893.
- Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Commun. ACM 2017, 60, 84–90.
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149.
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
- Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767.
- Bochkovskiy, A.; Wang, C.-Y.; Liao, H.-Y.M. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv 2020, arXiv:2004.10934.
- Kasper-Eulaers, M.; Hahn, N.; Berger, S.; Sebulonsen, T.; Myrland, Ø.; Kummervold, P.E. Detecting heavy goods vehicles in rest areas in winter conditions using YOLOv5. Algorithms 2021, 14, 114.
- Ge, Z.; Liu, S.; Wang, F.; Li, Z.; Sun, J. YOLOX: Exceeding YOLO Series in 2021. arXiv 2021, arXiv:2107.08430.
- Wu, H.; Du, C.; Ji, Z.; Gao, M.; He, Z. SORT-YM: An Algorithm of Multi-Object Tracking with YOLOv4-Tiny and Motion Prediction. Electronics 2021, 10, 2319.
- Kalman, R.E. A New Approach to Linear Filtering and Prediction Problems. J. Basic Eng. 1960, 82, 35–45.
- Kuhn, H.W. The Hungarian method for the assignment problem. Nav. Res. Logist. Q. 1955, 2, 83–97.
- Davis, J.; Goadrich, M. The relationship between Precision-Recall and ROC curves. In Proceedings of the 23rd International Conference on Machine Learning, Pittsburgh, PA, USA, 25–29 June 2006; pp. 233–240.
Related Work | Detection | Tracking | KMNs Collection |
---|---|---|---|
Thai et al. 2022 [21] | AirNet (architecturally, a one-stage detector with a bidirectional feature pyramid network module) | The same object is matched across adjacent frames by comparing the positions and classes of object bounding boxes between the previous and current frames. | Activities are identified from the relationship between the involved objects and the aircraft, the speed of the object, and the Intersection over Union (IoU) of the object and the aircraft. |
Yıldız et al. 2022 [22] | YOLO v3 | The MOSSE (Minimum Output Sum of Squared Error) tracker | Ground services are recognized from the motion status (stopping and moving) of the vehicles providing the services. |
Ours | Improved YOLO v5 to obtain more precise bounding boxes. | Position prediction and IoU-based association of bounding boxes in adjacent frames. | Two methods for collecting KMNs are proposed: one for nodes based on a single target and one for nodes based on the interaction of two targets. |
Category | No. 734 | No. 939 | Total |
---|---|---|---|
In-block | 14 | 2 | 16 |
Off-block | 13 | 3 | 16 |
Docking of mobile aircraft landing stairs | 3 | 4 | 7 |
Undocking of mobile aircraft landing stairs | 2 | 1 | 3 |
Total | 32 | 10 | 42 |
Class/Metric | Precisionc | Recallc | mAP |
---|---|---|---|
Aircraft | 93% | 90.1% | 94% |
Mobile aircraft landing stairs | 89.3% | 90.1% | 91.9% |
Lreg | Precisionc | Recallc | mAP |
---|---|---|---|
IoU-loss | 91.7% | 90.3% | 94.2% |
GIoU-loss | 91.1% | 93.1% | 94.3% |
DIoU-loss | 93.5% | 90.8% | 93.6% |
CIoU-loss | 93.6% | 91.6% | 94.7% |
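For reference, the first two regression losses compared in the table can be sketched for axis-aligned boxes in (x1, y1, x2, y2) form. GIoU-loss additionally penalises the empty area of the smallest enclosing box; DIoU and CIoU add centre-distance and aspect-ratio penalty terms and are omitted here for brevity:

```python
# IoU-loss = 1 - IoU; GIoU-loss = 1 - (IoU - (C - U) / C), where C is the
# area of the smallest enclosing box and U the union area.
def iou_loss(a, b):
    iw = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return 1 - inter / union

def giou_loss(a, b):
    iw = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    enclose = (max(a[2], b[2]) - min(a[0], b[0])) * (max(a[3], b[3]) - min(a[1], b[1]))
    return 1 - (inter / union - (enclose - union) / enclose)
```

Unlike the plain IoU-loss, the GIoU-loss still provides a gradient when the predicted and ground-truth boxes do not overlap, which is why the enclosing-box term matters for regression.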
Nodes | num | IDSW | MOTA | MOTP |
---|---|---|---|---|
Off-block 1 | 323 | 3 | 94.25% | 90.56% |
Off-block 2 | 323 | 13 | 89.79% | 91.25% |
In-block 1 | 292 | 0 | 96.61% | 95.63% |
In-block 2 | 324 | 12 | 92.35% | 94.56% |
Docking stairs 1 | 239 | 0 | 97.34% | 94.4% |
Docking stairs 2 | 874 | 0 | 99.54% | 91.3% |
Undocking stairs 1 | 252 | 5 | 96.52% | 91.23% |
Undocking stairs 2 | 421 | 0 | 94.29% | 92.1% |
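The MOTA and MOTP scores reported above follow the standard CLEAR MOT conventions; a hedged sketch of their usual definitions (the paper's exact matching procedure is not reproduced here):

```python
# CLEAR MOT scores: MOTA penalises misses, false positives, and identity
# switches (IDSW) relative to the number of ground-truth objects; MOTP is
# the mean localisation quality (here: mean IoU) of matched pairs.
def mota(misses, false_positives, id_switches, num_gt):
    return 1 - (misses + false_positives + id_switches) / num_gt

def motp(matched_overlaps):
    return sum(matched_overlaps) / len(matched_overlaps)
```

With these definitions, a sequence with 100 ground-truth instances, 2 misses, 3 false positives, and 1 identity switch scores a MOTA of 94%, comparable in magnitude to the table's entries.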
Position No. | Nodes | L | FE (Unit: Frames) | TE (Unit: Seconds)
---|---|---|---|---
No. 939 | In-block | 2 | 51 | 10.2
 | Off-block | 3 | 9 | 1.8
 | Docking stairs | 4 | 27 | 5.4
 | Undocking stairs | 1 | 29 | 5.8
No. 734 | In-block | 14 | 63 | 12.6
 | Off-block | 13 | 8 | 1.6
 | Docking stairs | 3 | 46 | 9.2
 | Undocking stairs | 2 | 36 | 7.2
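Every TE entry in the table equals FE divided by 5, which suggests the sequences were analysed at about 5 frames per second; the frame-rate value is inferred from the table rows, not stated explicitly in this excerpt. The conversion is simply:

```python
# Convert a frame error (FE) to a time error (TE) in seconds.
# fps = 5 is inferred from the table (e.g. 51 frames -> 10.2 s),
# not stated explicitly in this excerpt.
def frame_error_to_seconds(fe_frames, fps=5):
    return fe_frames / fps
```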
Share and Cite
Xu, J.; Ding, M.; Zhang, Z.-Z.; Xu, Y.-B.; Wang, X.-H.; Zhao, F. Vision-Based Automatic Collection of Nodes of In/Off Block and Docking/Undocking in Aircraft Turnaround. Appl. Sci. 2023, 13, 7832. https://doi.org/10.3390/app13137832