Article

A Vision-Based Method for Determining Aircraft State during Spin Recovery

1 Department of Computer and Control Engineering, Faculty of Electrical and Computer Engineering, Rzeszow University of Technology, W. Pola 2, 35-959 Rzeszow, Poland
2 Department of Avionics and Control Systems, Faculty of Mechanical Engineering and Aeronautics, Rzeszow University of Technology, Aleja Powstancow Warszawy 12, 35-959 Rzeszow, Poland
3 Department of Aerodynamics and Fluid Mechanics, Faculty of Mechanical Engineering and Aeronautics, Rzeszow University of Technology, Aleja Powstancow Warszawy 12, 35-959 Rzeszow, Poland
* Author to whom correspondence should be addressed.
Sensors 2020, 20(8), 2401; https://doi.org/10.3390/s20082401
Submission received: 5 March 2020 / Revised: 30 March 2020 / Accepted: 21 April 2020 / Published: 23 April 2020
(This article belongs to the Section Remote Sensors)

Abstract

This article proposes a vision-based method of determining which of the three states defined in the spin recovery process an aircraft is in. The correct identification of this state is necessary to make the right decisions during the spin recovery maneuver. The proposed solution is based on the analysis of keypoint displacements in consecutive frames taken from the on-board camera. The idea of voting on the instantaneous location of the rotation axis and the dominant displacement direction is used. The decision about the state is made based on a proposed set of rules employing the histogram spread measure. To validate the method, experiments on flight simulator videos, recorded at varying altitudes and in different lighting, background, and visibility conditions, were carried out. For the selected conditions, the first flight tests were also performed. Qualitative and quantitative assessments were conducted using a multimedia data annotation tool and the Jaccard index, respectively. The proposed approach could be the basis for a solution supporting the pilot in the process of aircraft spin recovery and, in the future, for the development of an autonomous method.

1. Introduction

An aircraft spin is a specific flight condition that occurs in all types of aviation. In this state, the trajectory has the characteristic form of a spiral line. Specific actions are required to recover from the spin and to avoid a crash. There are two types of spin: flat and steep. In the flat one, the plane’s pitch angle is less than 45 degrees, whereas in the steep one it is between 45 and 90 degrees. This article concerns the steep case, in which the pilot sees the image of the rotating earth and can return to normal flight by appropriate actions.
When recovering from a spin, three phases of flight are defined: rotation, diving, and recovery (see Figure 1).
Each of them requires different pilot actions. This paper proposes a vision-based method for identifying which of these phases an aircraft is in.
The presented solution could be part of a pilot assistance system and, in the future, the basis of a method for automatic spin recovery. The spin recovery procedure does not seem difficult, but the problem is the spatial disorientation of the pilot [1,2,3]. According to the Air Safety Institute of the Aircraft Owners and Pilots Association (AOPA), in the years 2000–2014, 30 percent of stall-related accidents in commercial flights caused fatalities [4]. Therefore, such a solution could significantly improve safety.
Research on the spin phenomenon, conducted since the beginning of the 20th century, concerns the dynamics of aircraft in this state [5,6,7,8,9] or recovery procedures [10,11,12,13,14,15]. Control algorithms are also being developed to enable automatic spin recovery, but mainly for military or experimental applications [16,17,18,19]. However, these methods assume that the instantaneous state of the aircraft can be determined precisely. Proposed solutions are mainly based on inertial sensors, which measure the aircraft state indirectly, for example, through the analysis of angular velocities. Such analysis may sometimes lead to ambiguous results. Therefore, direct measurement using a vision sensor is a desirable and innovative solution.
Vision systems are increasingly used in aviation to detect threats from intruder objects appearing in the operating space [20,21,22] or in navigation [23,24,25]. There are also known solutions that use cameras for spin analysis. Aircraft models are observed in specially designed wind tunnels [26,27]. However, these are solutions in which the view from the perspective of an external observer is used, and they are designed to determine how different structural elements of the aircraft influence the spin character.
Several works regarding the estimation of a flying object’s state can be found in the literature. In [28], the attitude of an aircraft model placed in a vertical wind tunnel is measured using a stereo vision method. To achieve high robustness, markers are attached to the surface of the plane. A vision system for six-degrees-of-freedom pose estimation of a helicopter model is proposed in [29]. It uses a pan/tilt/zoom ground camera and another small onboard imager. The algorithm is based on tracking five colored blobs placed on the aircraft and a single marker attached to the ground camera. A system for roll and pitch estimation of precision projectiles, interpreting data from a strapped-down, forward-facing imager, is described in [30]. The solution is based on a horizon detection algorithm employing the Hough transform and an intensity standard deviation method. Robust, real-time state estimation of micro air vehicles is proposed in [31]. The method is based on tracking features such as points, lines, and planes, and on an implicit extended Kalman filter. According to the authors, vision-based estimation is an attractive option, especially in urban environments. In [32], a vision-based method of aircraft approach angle estimation is presented. Several sequential images are used to determine the horizon and the focus of expansion, and then to derive the angle value. A glider control system with vision-based feedback is presented in [33]. The proposed navigation algorithm allows reaching a predetermined location. The position of the target in the image is determined by integrating the pixel intensities across the image and performing a cascade of feature matching functions. Then a Kalman filter is used to estimate attitude and glideslope.
The approach proposed in this work is original. To the authors’ best knowledge, no other studies have been published in which images from an on-board camera are used to determine the state of an aircraft in a spin. An additional advantage of the vision-based method is passive measurement. It also does not require significant modification of the aircraft structure.
The main novelty and contributions of this paper are: (i) a unique application based on the vision sensor only; (ii) a proposal of mappings from the image sequence (space-time domain) to the parameter space to determine the rotation axis and the movement direction by a voting technique and maxima detection in the accumulator matrices; (iii) an analysis of the accumulator matrices using the histogram spread measure; (iv) a proposed set of rules to estimate the aircraft spinning state; (v) the creation of a unique dataset, annotated by a human expert, containing various simulation data as well as preliminary flight recordings, and making it available to the research community for fair comparisons; (vi) experimental verification of the method using data from the simulator and real recordings from in-flight tests; and (vii) an original application of the multimedia data annotation package ELAN for the qualitative analysis of results.
The structure of this paper is as follows. Section 1 defines the problem and gives the research background and relevant references. Section 2 describes the proposed method. Experiments are presented in Section 3. Section 4 concludes the paper and indicates further work.

2. Method

The general idea of the proposed solution is to search for corresponding keypoints in successive video frames and to infer the instantaneous state of the aircraft during spin recovery from the analysis of the displacements of these points (see Figure 2). Details of the method are presented in Section 2.1, Section 2.2, Section 2.3 and Section 2.4.

2.1. Keypoints Detection and Matching

The concept of keypoints is widely used in computer vision for tasks such as object recognition, image registration, or 3D reconstruction. These points are related to local image features that persist over some period. Each keypoint is associated with a so-called descriptor: a set of distinctive features that can be used to search for corresponding points in different images. Several keypoint detectors and descriptors have been developed, e.g., the scale-invariant feature transform (SIFT) [34], the gradient location and orientation histogram (GLOH) [35], speeded up robust features (SURF) [36], or the local energy-based shape histogram (LESH) [37].
During spin recovery, the sizes of the objects visible in images from an on-board camera change quickly and randomly. Therefore, a scale-independent detector was considered, and finally SURF was selected because it is faster than SIFT. The SURF keypoints are robust against different image transformations. Their descriptors ensure repeatability and distinctiveness [38]. Interest points are found at different scales using the multi-resolution pyramid technique; therefore, they are rotation- and scale-invariant, which is essential in the considered problem.
Feature matching can be done by calculating the pairwise distance between descriptors. However, to speed up the processing, an approximate nearest neighbor search was applied [39]. Let $P_{t-1}$ and $P_t$ denote sets of $N$ corresponding keypoints detected at the moments $t-1$ and $t$:

$$P_{t-1} = \{ p_{t-1} = (x_{t-1}, y_{t-1});\ p_{t-1} \equiv p_t \}, \qquad P_t = \{ p_t = (x_t, y_t);\ p_{t-1} \equiv p_t \} \qquad (1)$$
where ≡ denotes the correspondence of points. The corresponding keypoints detected in two successive images acquired during spin recovery are shown in Figure 3.
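As an illustration, the following MATLAB sketch shows how this step could be implemented with the Computer Vision Toolbox functions referenced above (detectSURFFeatures, extractFeatures, matchFeatures); the frame variables and the parameter values are placeholders rather than the exact configuration used in the experiments (the selected values are listed later in Table 3).

```matlab
% Minimal sketch: SURF keypoint detection, description and matching between
% two consecutive grayscale frames (parameter values are illustrative).
I1 = rgb2gray(framePrev);                % frame at t-1
I2 = rgb2gray(frameCurr);                % frame at t

pts1 = detectSURFFeatures(I1, 'MetricThreshold', 100, 'NumOctaves', 4, 'NumScaleLevels', 6);
pts2 = detectSURFFeatures(I2, 'MetricThreshold', 100, 'NumOctaves', 4, 'NumScaleLevels', 6);

[f1, vpts1] = extractFeatures(I1, pts1, 'FeatureSize', 64);
[f2, vpts2] = extractFeatures(I2, pts2, 'FeatureSize', 64);

% Approximate nearest neighbor matching of the descriptors.
idxPairs = matchFeatures(f1, f2, 'Method', 'Approximate', ...
                         'MatchThreshold', 10, 'Metric', 'SSD');

% Corresponding keypoint coordinates P_{t-1} and P_t (N x 2 matrices of [x y]).
Pprev = vpts1(idxPairs(:, 1)).Location;
Pcurr = vpts2(idxPairs(:, 2)).Location;
```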

2.2. Removing Faulty Keypoint Matchings

As shown in Figure 3, some displacements diverge from the others. They are much longer, and their directions “do not match” the visible change trend. They result from incorrect keypoint matching and may adversely affect further analysis. Therefore, we filter out the points that do not satisfy the following criterion:
$$|r_t - \mu_t| \le k\,\sigma_t \qquad (2)$$

where $r_t = \sqrt{(x_t - x_{t-1})^2 + (y_t - y_{t-1})^2}$, $\mu_t = \frac{1}{N}\sum_{i=1}^{N} r_{t,i}$, $\sigma_t = \sqrt{\frac{1}{N}\sum_{i=1}^{N} (r_{t,i} - \mu_t)^2}$, and $k$ is a parameter of the method. This procedure is applied to increase the robustness of the method. The corresponding keypoints after removing faulty matchings are shown in Figure 4.
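A possible MATLAB sketch of this filtering step is given below; Pprev and Pcurr are assumed to be the N × 2 coordinate matrices of the matched keypoints from Section 2.1, and k is the parameter from Equation (2).

```matlab
% Minimal sketch of the faulty-match filter (Equation (2)).
d     = Pcurr - Pprev;                 % displacement vectors
r     = sqrt(sum(d.^2, 2));            % displacement lengths r_t
mu    = mean(r);                       % mean length mu_t
sigma = std(r, 1);                     % population standard deviation sigma_t

keep  = abs(r - mu) <= k * sigma;      % keep points satisfying |r_t - mu_t| <= k*sigma_t
Pprev = Pprev(keep, :);
Pcurr = Pcurr(keep, :);
```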

2.3. Voting

To find the instantaneous position of the rotation axis and the dominant direction of the shift vector, a voting scheme similar to that used in the Hough transform was applied [40]. Two so-called accumulator matrices, $A_R$ ($\dim A_R = W \times H$) and $A_T$ ($\dim A_T = 1 \times 360$), were created and filled with zeros. $W$ and $H$ denote the image width and height, respectively (Figure 5).
Each vector $\vec{r}_t = p_t - p_{t-1}$ “votes” for all possible positions of the hypothetical rotation axis, according to the idea presented in Figure 6. The cells of $A_R$ through which the perpendicular bisector of the segment $\overline{p_{t-1} p_t}$ passes are incremented:

$$\forall \vec{r}_t: \quad A_R(x,y) = A_R(x,y) + 1 \iff \left| (x_t - x_{t-1})\,x + (y_t - y_{t-1})\,y - \frac{x_t^2 - x_{t-1}^2}{2} - \frac{y_t^2 - y_{t-1}^2}{2} \right| \le \epsilon \qquad (3)$$
Each vector $\vec{r}_t$ also “votes” for one direction in the $A_T$ matrix:

$$\forall \vec{r}_t: \quad A_T(\beta_t) = A_T(\beta_t) + 1 \qquad (4)$$

where $\beta_t = \lceil \alpha_t \frac{180}{\pi} \rceil + 180$, $\alpha_t = \mathrm{atan2}(y_t - y_{t-1},\, x_t - x_{t-1})$, $\mathrm{atan2}$ denotes the four-quadrant inverse tangent, and $\lceil \cdot \rceil$ is the ceiling function.
In the ideal rotation case, all bisectors should intersect at one point in $A_R$ (see Figure 7). Due to noise, spatial quantization, and the inaccuracy of vision-based measurement, we instead get a two-dimensional histogram with a maximum. Making the decision by voting reduces the risk that minor faulty matchings not eliminated by the statistical analysis will influence the results.
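The voting step could be sketched in MATLAB as follows; W and H are the image width and height, and epsBand (the tolerance ε in Equation (3)) is a placeholder value.

```matlab
% Minimal sketch of the voting step (Equations (3) and (4)).
AR = zeros(H, W);              % accumulator for the hypothetical rotation axis
AT = zeros(1, 360);            % accumulator for quantized displacement directions
[X, Y] = meshgrid(1:W, 1:H);   % pixel coordinate grids

for i = 1:size(Pcurr, 1)
    x0 = Pprev(i, 1);  y0 = Pprev(i, 2);   % p_{t-1}
    x1 = Pcurr(i, 1);  y1 = Pcurr(i, 2);   % p_t

    % Vote in A_R: increment the cells lying on the perpendicular bisector
    % of the segment connecting the two corresponding keypoints.
    onBisector = abs((x1 - x0) * X + (y1 - y0) * Y ...
                     - (x1^2 - x0^2) / 2 - (y1^2 - y0^2) / 2) <= epsBand;
    AR(onBisector) = AR(onBisector) + 1;

    % Vote in A_T: one vote for the quantized direction of the displacement.
    alpha = atan2(y1 - y0, x1 - x0);             % angle in [-pi, pi]
    beta  = ceil(alpha * 180 / pi) + 180;        % bin index in 0..360
    beta  = min(max(beta, 1), 360);              % clamp to a valid MATLAB index
    AT(beta) = AT(beta) + 1;
end
```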

2.4. Set of Rules

Figure 8 shows the $A_R$ and $A_T$ accumulator matrices in the rotation (first row) and recovery (second row) phases. The two-dimensional $A_R$ matrix is visualized as an intensity image.
$A_R$ becomes more “flat” when the rotation of the camera relative to the observed scene decreases, and more compact when the rotation is stronger (compare Figure 8a,c). When the observed scene is dominated by progressive movement (the majority of keypoints moves in one direction), a clear maximum should be visible in the $A_T$ histogram (compare Figure 8b,d).
Therefore, it was proposed to use the histogram spread ($HS$) measure to determine the plane state [41]:

$$HS = \frac{Q_3 - Q_1}{R} \qquad (5)$$

where $Q_1$ and $Q_3$ are the 1st and the 3rd quartiles of the histogram and $R$ denotes the possible range of histogram values. The 1st and 3rd quartiles are the histogram bins at which the cumulative histogram reaches 25% and 75% of its maximum.
The following set of rules was proposed:
$$Rotation = \begin{cases} 1 & \text{if } HS_{A_R} \le T_R \\ 0 & \text{if } HS_{A_R} \ge T_R + \Delta_R \\ previous & \text{if } T_R < HS_{A_R} < T_R + \Delta_R \end{cases} \qquad (6)$$

$$Recovery = \begin{cases} 1 & \text{if } HS_{A_T} \le T_T \ \wedge\ |\arg\max A_T - F| \le \epsilon \\ 0 & \text{if } HS_{A_T} \ge T_T + \Delta_T \\ previous & \text{if } T_T < HS_{A_T} < T_T + \Delta_T \end{cases} \qquad (7)$$

$$Diving = \lnot Rotation \ \wedge\ \lnot Recovery \qquad (8)$$

where $T_R$, $T_T$ are threshold values, $\Delta_R$, $\Delta_T$ are deadbands, $F$ is the angular value corresponding to the downward movement of keypoints, and $\epsilon$ is the permitted deviation from the downward movement. The introduction of deadbands ($\Delta_R$, $\Delta_T$) prevents short-term state changes when the values $HS_{A_R}$ and $HS_{A_T}$ oscillate near the threshold values. The method uses a set of rules defined in the parameter domain; that is why it is resistant to local changes in the density of keypoints. Peaks in the accumulator matrices also appear when selected parts of the image are devoid of keypoints. This is analogous to the Hough transform, which can find an analytical description of a curve even in the case of significant edge discontinuities.
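A minimal MATLAB sketch of the decision step is given below. The histogram spread is computed over the accumulator bins, which is one possible reading of Equation (5); the thresholds TR, dR, TT, dT, the downward direction F, the permitted deviation epsDir, and the previous-state variables rotation and recovery are placeholders.

```matlab
% Minimal sketch of the state decision (Equations (5)-(8)).
HS_AR = histogramSpread(AR(:));        % spread of the rotation-axis accumulator
HS_AT = histogramSpread(AT(:));        % spread of the direction accumulator

% Rotation rule with deadband (Equation (6)); outside both conditions
% the previous value of "rotation" is kept.
if HS_AR <= TR
    rotation = true;
elseif HS_AR >= TR + dR
    rotation = false;
end

% Recovery rule (Equation (7)): compact direction histogram whose peak is
% close to the downward direction F.
[~, peakDir] = max(AT);
if HS_AT <= TT && abs(peakDir - F) <= epsDir
    recovery = true;
elseif HS_AT >= TT + dT
    recovery = false;
end

% Diving is declared when neither rotation nor recovery holds (Equation (8)).
diving = ~rotation && ~recovery;

function hs = histogramSpread(h)
% Histogram spread (Equation (5)): distance between the bin indices at which
% the cumulative histogram reaches 25% and 75% of its maximum, divided by the
% number of bins (taken here as the possible range of histogram values).
    c  = cumsum(h) / sum(h);
    q1 = find(c >= 0.25, 1, 'first');
    q3 = find(c >= 0.75, 1, 'first');
    hs = (q3 - q1) / numel(h);
end
```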

3. Experiments

Spin is a dangerous phenomenon. Deliberately performing the spin-entry procedure when testing an experimental method, especially at lower altitudes and with poor visibility, would be extremely risky. Performing some experiments in flight is also impossible because Polish aviation law prohibits aerobatic flights over settlements and other population centers. Therefore, the evaluation of the new approach began with simulation tests, which additionally ensure repeatability of weather conditions.

3.1. Laboratory Setup

The X-Plane 10 professional flight simulator was used [42,43]. The simulator operates based on an analytical model of aircraft dynamics and provides images from a virtual camera, taking into account the geographical location, terrain diversity, time of year and day, cruising altitude, and atmospheric conditions, including visibility. Obtaining data of such diversity under real conditions, apart from safety issues, would also be very expensive. The camera was attached close to the aircraft bow. The experiments were carried out using two computers with the following parameters: Intel Core i7-6700K @ 4 GHz, 64 GB RAM, Nvidia GTX 750 Ti. The simulator was launched on one of them and the MATLAB/Simulink computing environment on the other. For the selected conditions, the first flight tests were also performed. The test videos used in the experiments are available at http://vision.kia.prz.edu.pl/.

3.2. Dataset

The dataset consists of 72 test videos divided into four groups (Table 1, Figure 9, Figure 10, Figure 11 and Figure 12). Three recordings were made for each condition.

3.3. Results Evaluation Methods

Each frame of the manually extracted video fragment corresponding to the entire spin recovery procedure was processed. Manual annotations created by an expert in ELAN, a popular annotation tool, were used as the ground truth [44,45]. Results returned by the described method, implemented in MATLAB, were automatically saved in the ELAN file using the annotation API [46]. A qualitative assessment of the results was made by visual comparison of both annotation layers (Figure 13).
For the quantitative assessment, the Jaccard index was used, defined as the length of the intersection divided by the length of the union of the ‘human expert’ and ‘our method’ layers:

$$J(A,B) = \frac{|A \cap B|}{|A \cup B|} = \frac{|A \cap B|}{|A| + |B| - |A \cap B|} \qquad (9)$$

where $A$ is the ground truth (‘human expert’ layer), $B$ is the prediction (‘our method’ layer), $|A \cap B|$ is the length of the layers’ overlap, and $|A \cup B|$ is the length of the layers’ union.
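A minimal MATLAB sketch of one frame-wise reading of Equation (9) is shown below; it assumes that each annotation layer has been exported as a vector of per-frame state labels, which is an assumption about the data format rather than the exact pipeline used here.

```matlab
% Minimal sketch: Jaccard index between two annotation layers represented as
% equally long vectors of per-frame state labels
% (e.g., 1 = rotation, 2 = diving, 3 = recovery).
function j = jaccardLayers(gtLabels, predLabels)
    % Treat each layer as a set of (frame, label) pairs covering the same frames.
    inter = sum(gtLabels(:) == predLabels(:));            % |A intersect B|
    uni   = numel(gtLabels) + numel(predLabels) - inter;  % |A| + |B| - |A intersect B|
    j     = inter / uni;
end
```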

3.4. Parameter Selection

The developed method has several parameters characterized in Table 2.
The fixed step size random search (FSSRS) [50], with the fitness function equal to the average Jaccard index calculated for the entire dataset, was used for parameter selection. The following formula was minimized:

$$f(x) = 1 - \frac{1}{M}\sum_{i=1}^{M} J_i(x) \qquad (10)$$

where $J_i$ is the Jaccard index (see Equation (9)) estimated for test video $i$, $M = 72$ is the number of test videos, and $x$ is the vector of decision variables composed of the method parameters (see Table 2). The initial decision vector $x_0$ (a first approximation of the method parameters) was selected randomly from the set of allowable values defined in the third column of Table 2. For the first five parameters, related to the SURF algorithm, this set was defined based on suggestions given in the MathWorks documentation [47,48,49]. For the remaining ones, it was determined experimentally by a trial-and-error approach. A limit of 100 steps was used as the termination criterion. The lowest value of the minimized formula corresponded to an average Jaccard index of 0.85, obtained for the set of parameter values $x_{min}$ given in Table 3.
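A minimal sketch of such a discrete random search is given below; evaluateFitness stands for the computation of Equation (10) over the whole dataset, and the single-parameter neighborhood step is a simplifying assumption rather than the exact FSSRS variant used in this work.

```matlab
% Minimal sketch of a fixed step size random search over a discrete parameter grid.
% allowedValues{j} lists the admissible values of the j-th (numeric) parameter;
% evaluateFitness(x) returns the value of the minimized formula (Equation (10)).
function [xBest, fBest] = fssrs(allowedValues, evaluateFitness, maxSteps)
    nParams = numel(allowedValues);

    % Random initial decision vector x0.
    xBest = cellfun(@(v) v(randi(numel(v))), allowedValues);
    fBest = evaluateFitness(xBest);

    for step = 1:maxSteps
        % Fixed-size step: move one randomly chosen parameter to a
        % neighboring value on its grid.
        x    = xBest;
        j    = randi(nParams);
        idx  = find(allowedValues{j} == x(j), 1);
        idx  = min(max(idx + 2 * randi(2) - 3, 1), numel(allowedValues{j}));  % -1 or +1
        x(j) = allowedValues{j}(idx);

        f = evaluateFitness(x);
        if f < fBest                  % accept only if the minimized fitness improves
            xBest = x;
            fBest = f;
        end
    end
end
```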

3.5. Results

The results obtained for the selected parameters are shown in Table 4.
The graphs obtained for the selected test video are shown in Figure 14. In Figure 14a,b, the red lines show the histogram spread measure for $A_R$ and $A_T$, respectively. The green lines correspond to the selected threshold values $T_R$ and $T_T$. The blue ones show the thresholds increased by the deadbands, $T_R + \Delta_R$ and $T_T + \Delta_T$. Figure 14c shows the state of the aircraft during spin recovery.
For the first group, the Jaccard index was greater than or equal to 0.90 in 11 out of 18 cases. This result is promising, given the range of changes in image brightness (compare Figure 9a–f). Moreover, surprisingly good results were obtained for the videos recorded at 21:00, because street and square lighting had a positive effect on the number of keypoints detected. However, the results may be worse for areas with less variation in background brightness.
The results obtained for the second group confirm this hypothesis. It turns out that the method depends on the diversity of the scene. If the spin occurs over areas with a homogeneous structure and small variations in brightness, we get smooth, texture-free images. In such cases, the number of detected and matched keypoints is significantly lower (see Figure 15a). Therefore, the number of votes for the possible rotation center and the dominant displacement direction is also lower. As a result, the method does not accurately infer the real tendency occurring in the processed video. In 3 of 11 cases, the results were weaker (Jaccard index lower than 0.80). For video sequences recorded over a smooth ocean surface, it was impossible to reliably determine the aircraft state due to the small number of matched keypoints. Spin recovery in such background conditions is also problematic for the pilot.
In the third group, some regularity can be seen. The results are weaker for low and high altitudes. At 2000 feet, the observed objects become quite large. The edges and corners between them move apart (see Figure 11a). Because the keypoints are associated with high-frequency elements of the image, their density becomes markedly lower, which results in a lower number of votes (see Figure 15b). At altitudes of 10,000 and 12,000 feet, the edges and corners are so close together that they begin to “merge” into aggregate objects, which also adversely affects the number of detected keypoints. A solution to this problem could be the use of a camera with a fast-changing zoom, controlled adaptively depending on the altitude of the aircraft. At such high altitudes, the results can also be affected by the transparency of the atmosphere through which the light beam passes before it reaches the camera lens (see Figure 11e,f).
Changes in visibility are particularly severe in the last phase of the spin recovery process when the aircraft is in a position close to horizontal (see Figure 16).
The differences in results observed for group 4 are the effect of different lengths of this phase in individual test videos.
Preliminary experiments on test videos recorded during glider flights were also performed. Flight tests were carried out in September, from 17:00 to 19:00, over an agricultural and forest area, at an altitude of 1500–500 m AGL (Above Ground Level), in CAVOK (Ceiling and Visibility OK) meteorological conditions. The camera was attached to the bow. Its optical axis was approximately parallel to the longitudinal axis of the glider (Figure 17).
The flights were made just before sunset. The sun was low above the horizon, which resulted in rapid changes in image brightness depending on the spatial orientation of the aircraft, reflections in the lens, and the presence of underexposed areas on the ground due to long shadows. The recorded videos were used for preliminary testing of the method’s robustness in adverse lighting conditions.
Individual rows of Figure 18 show selected frames from consecutive phases of five spin executions.
Table 5 summarizes the results obtained for these videos.
The results obtained for the demanding real images recorded in flight are promising. It turned out that for the spins executed during the glider flights, the radius of the spiral line circled by the aircraft is larger. The position of the instantaneous rotation axis determined by the algorithm was therefore often outside the image, so the size of the $A_R$ matrix was doubled. It was also observed that, due to the nonuniform scene illumination, the number of keypoints in some parts of the image was too small. The problem was solved by setting the MetricThreshold parameter value to 1. The worse results for the first two videos result from unwanted glare appearing in the lens when it is in full sunlight (see the second row of Figure 18). Perhaps this problem can be solved by using adaptive image processing algorithms.
In our tests, the single-frame processing time was 250 ms (MATLAB) and 30 ms (C++ implementation) for Full HD (1920 × 1080) frames scaled down four times. It is possible to further speed up the calculations by a parallel implementation or the use of an embedded computing system dedicated to vision applications.

4. Conclusions

A video-based method to determine the state of the aircraft during the spin recovery process was proposed. It uses the analysis of keypoint shifts in subsequent video frames and the idea of voting on the instantaneous location of the rotation axis and the dominant displacement direction. The decision is based on a set of rules employing the histogram spread measure. Qualitative and quantitative assessments were conducted using a multimedia data annotation tool and the Jaccard index, respectively. The method was validated on flight simulator videos, recorded at varying altitudes and in different lighting, background, and visibility conditions, as well as on videos acquired during the preliminary flight tests. To the authors’ best knowledge, this is the first vision-based approach to this problem. The obtained results are promising and could be applied in a system supporting the pilot, but further work is needed to achieve the efficiency that would allow the development of a reliable method of automatic spin recovery using vision-based feedback. The following further works are planned: (i) application of adaptive image processing techniques to compensate for non-uniform illumination, (ii) a real-time implementation that would enable online testing during the flight, (iii) tests above the cloud ceiling, (iv) integration of the prepared solution with a horizon detection algorithm, and (v) development and testing of the automatic control system.

Author Contributions

Conceptualization, P.S., T.K. and T.R.; methodology, P.S., T.K. and Z.S.; software, T.K. and P.S.; validation, P.S., T.K., P.R. and Z.S.; resources, P.S., T.R. and P.R.; data curation, T.K., P.S., Z.S., T.R. and P.R.; writing–original draft preparation, P.S. and T.K.; writing–review and editing, T.K., P.S., T.R., P.R. and Z.S.; visualization, P.S. and T.K.; supervision, T.R. and P.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AOPA: Aircraft Owners and Pilots Association
SIFT: Scale-Invariant Feature Transform
GLOH: Gradient Location and Orientation Histogram
SURF: Speeded Up Robust Features
LESH: Local Energy-based Shape Histogram
HS: Histogram Spread
ft: Feet
NM: Nautical Mile
FSSRS: Fixed Step Size Random Search
AGL: Above Ground Level
CAVOK: Ceiling and Visibility OK

References

1. Silver, B.W. Statistical Analysis of General Aviation Stall Spin Accidents; Technical Report; SAE Technical Paper; SAE International: Warrendale, PA, USA, 1976.
2. de Voogt, A.J.; van Doorn, R.R. Accidents associated with aerobatic maneuvers in US aviation. Aviat. Space Environ. Med. 2009, 80, 732–733.
3. Council, G.A.S. A Study of Fatal Stall or Spin Accidents to UK Registered Light Aeroplanes 1980 to 2008; GASCo Publication: Kent, UK, 2010.
4. Stall and Spin Accidents: Keep the Wings Flying. Available online: https://www.aopa.org/-/media/files/aopa/home/pilot-resources/safety-and-proficiency/accident-analysis/special-reports/stall_spin.pdf (accessed on 16 February 2020).
5. Young, J.; Adams, W., Jr. Analytic prediction of aircraft spin characteristics and analysis of spin recovery. In Proceedings of the 2nd Atmospheric Flight Mechanics Conference, Palo Alto, CA, USA, 11–13 September 1972; p. 985.
6. Hill, S.; Martin, C. A Flight Dynamic Model of Aircraft Spinning; Technical Report; Aeronautical Research Labs: Melbourne, Australia, 1990.
7. Kefalas, P. Aircraft Spin Dynamics Models Design. Ph.D. Thesis, Cranfield University, School of Engineering, College of Aeronautics, Cranfield, UK, 2001.
8. Bennett, C.J.; Lawson, N.; Gautrey, J.; Cooke, A. The aircraft spin - a mathematical approach and comparison to flight test. In Proceedings of the AIAA Flight Testing Conference, Denver, CO, USA, 5–9 June 2017; p. 3652.
9. Cichocka, E. Research on light aircraft spinning properties. Aircr. Eng. Aerosp. Technol. 2017, 89, 730–746.
10. Martin, C.; Hill, S. Prediction of aircraft spin recovery. In Proceedings of the 16th Atmospheric Flight Mechanics Conference, Boston, MA, USA, 14–16 August 1989; p. 3363.
11. Lee, D.C.; Nagati, M.G. Momentum Vector Control for Spin Recovery. J. Aircr. 2004, 41, 1414–1423.
12. Raghavendra, P.K.; Sahai, T.; Kumar, P.A.; Chauhan, M.; Ananthkrishnan, N. Aircraft Spin Recovery, with and without Thrust Vectoring, Using Nonlinear Dynamic Inversion. J. Aircr. 2005, 42, 1492–1503.
13. Cvetković, D.; Radaković, D.; Časlav, M.; Bengin, A. Spin and Spin Recovery. In Mechanical Engineering; Gokcek, M., Ed.; IntechOpen: Rijeka, Croatia, 2012; Chapter 9.
14. Bennett, C.; Lawson, N. On the development of flight-test equipment in relation to the aircraft spin. Prog. Aerosp. Sci. 2018, 102, 47–59.
15. Rao, D.V.; Go, T.H. Optimization of aircraft spin recovery maneuvers. Aerosp. Sci. Technol. 2019, 90, 222–232.
16. Lay, L.W.; Nagati, M.G.; Steck, J.E. The Application of Neural Networks for Spin Avoidance and Recovery; Technical Report; SAE Technical Paper; SAE International: Warrendale, PA, USA, 1999.
17. Rao, D.V.; Sinha, N.K. Aircraft spin recovery using a sliding-mode controller. J. Guid. Control. Dyn. 2010, 33, 1675–1679.
18. Liu, K.; Zhu, J.H.; Fan, Y. Aircraft control for spin-recovery with rate saturation in actuators. Control Theory Appl. 2012, 29, 549–554.
19. Bunge, R.; Kroo, I. Automatic Spin Recovery with Minimal Altitude Loss. In Proceedings of the 2018 AIAA Guidance, Navigation, and Control Conference, Kissimmee, FL, USA, 8–12 January 2018; p. 1866.
20. Carnie, R.; Walker, R.; Corke, P. Image processing algorithms for UAV “sense and avoid”. In Proceedings of the 2006 IEEE International Conference on Robotics and Automation (ICRA 2006), Orlando, FL, USA, 15–19 May 2006; pp. 2848–2853.
21. Bratanov, D.; Mejias, L.; Ford, J.J. A vision-based sense-and-avoid system tested on a ScanEagle UAV. In Proceedings of the 2017 International Conference on Unmanned Aircraft Systems (ICUAS), Miami, FL, USA, 13–16 June 2017; pp. 1134–1142.
22. Bauer, P.; Hiba, A.; Bokor, J.; Zarandy, A. Three Dimensional Intruder Closest Point of Approach Estimation Based-on Monocular Image Parameters in Aircraft Sense and Avoid. J. Intell. Robot. Syst. 2019, 93, 261–276.
23. Johnson, E.N.; Calise, A.J.; Watanabe, Y.; Ha, J.; Neidhoefer, J.C. Real-time vision-based relative aircraft navigation. J. Aerosp. Comput. Inform. Commun. 2007, 4, 707–738.
24. Tang, J.; Zhu, W.; Bi, Y. A Computer Vision-Based Navigation and Localization Method for Station-Moving Aircraft Transport Platform with Dual Cameras. Sensors 2020, 20, 279.
25. Krammer, C.; Mishra, C.; Holzapfel, F. Testing and Evaluation of a Vision-Augmented Navigation System for Automatic Landings of General Aviation Aircraft. In AIAA Scitech 2020 Forum; AIAA: Reston, VA, USA, 2020; p. 1083.
26. Shortis, M.R.; Snow, W.L. Videometric Tracking of Wind Tunnel Aerospace Models at NASA Langley Research Center. Photogramm. Rec. 1997, 15, 673–689.
27. Sohi, N. Modeling of spin modes of supersonic aircraft in horizontal wind tunnel. In Proceedings of the 24th International Congress of the Aeronautical Sciences, Yokohama, Japan, 29 August–3 September 2004.
28. Li, P.; Luo, W.S.; Li, G.Z. Spin Attitude Measure Based on Stereo Vision. J. Natl. Univ. Def. Technol. 2008, 30, 107.
29. Altug, E.; Taylor, C. Vision-based pose estimation and control of a model helicopter. In Proceedings of the IEEE International Conference on Mechatronics (ICM ’04), Istanbul, Turkey, 5 June 2004; pp. 316–321.
30. Fairfax, L.; Allik, B. Vision-Based Roll and Pitch Estimation in Precision Projectiles. In Proceedings of the AIAA Guidance, Navigation, and Control Conference, Minneapolis, MN, USA, 13–16 August 2012; p. 4900.
31. Webb, T.P.; Prazenica, R.J.; Kurdila, A.J.; Lind, R. Vision-Based State Estimation for Autonomous Micro Air Vehicles. J. Guid. Control. Dyn. 2007, 30, 816–826.
32. Zhu, H.J.; Wu, F.; Hu, Z.-Y. A Vision Based Method for Aircraft Approach Angle Estimation. J. Softw. 2006, 17, 959–967.
33. De Wagter, C.; Proctor, A.A.; Johnson, E.N. Vision-only aircraft flight control. In Proceedings of the 22nd Digital Avionics Systems Conference (DASC ’03), Indianapolis, IN, USA, 12–16 October 2003; Volume 2, p. 8.B.2–01–11.
34. Lowe, D.G. Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; Volume 2, pp. 1150–1157.
35. Mikolajczyk, K.; Schmid, C. A performance evaluation of local descriptors. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 1615–1630.
36. Bay, H.; Tuytelaars, T.; Van Gool, L. SURF: Speeded Up Robust Features. In Computer Vision—ECCV 2006; Leonardis, A., Bischof, H., Pinz, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; pp. 404–417.
37. Sarfraz, M.S.; Hellwich, O. Head Pose Estimation in Face Recognition Across Pose Scenarios. In Proceedings of the VISAPP 2008 International Conference on Computer Vision Theory and Applications, Funchal, Portugal, 22–25 January 2008.
38. Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. Speeded-up robust features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359.
39. Muja, M.; Lowe, D.G. Fast approximate nearest neighbors with automatic algorithm configuration. In Proceedings of the VISAPP International Conference on Computer Vision Theory and Applications, Lisboa, Portugal, 5–8 February 2009; pp. 331–340.
40. Illingworth, J.; Kittler, J. A survey of the Hough transform. Comput. Vis. Graph. Image Process. 1988, 44, 87–116.
41. Tripathi, A.K.; Mukhopadhyay, S.; Dhara, A.K. Performance metrics for image contrast. In Proceedings of the 2011 International Conference on Image Information Processing, Shimla, India, 3–5 November 2011; pp. 1–4.
42. X-Plane 11 Flight Simulator. Available online: https://www.x-plane.com/ (accessed on 21 January 2020).
43. Jalovecký, R.; Bystřický, R. On-line analysis of data from the simulator X-Plane in MATLAB. In Proceedings of the 2017 IEEE International Conference on Military Technologies (ICMT), Brno, Czech Republic, 31 May–2 June 2017; pp. 592–597.
44. Wittenburg, P.; Brugman, H.; Russel, A.; Klassmann, A.; Sloetjes, H. ELAN: A Professional Framework for Multimodality Research. In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06), Genoa, Italy, 22–28 May 2006; European Language Resources Association (ELRA): Genoa, Italy, 2006.
45. ELAN—The Language Archive, Max Planck Institute for Psycholinguistics, The Language Archive, Nijmegen, The Netherlands. Available online: https://tla.mpi.nl/tools/tla-tools/elan/ (accessed on 7 January 2020).
46. Kapuściński, T.; Warchoł, D. A suite of tools supporting data streams annotation and its use in experiments with hand gesture recognition. Studia Inform. 2017, 38, 89–107.
47. MATLAB Docs on DetectSURFFeatures. Available online: https://www.mathworks.com/help/vision/ref/detectsurffeatures.html (accessed on 7 January 2020).
48. MATLAB Docs on ExtractFeatures. Available online: https://www.mathworks.com/help/vision/ref/extractfeatures.html (accessed on 12 January 2020).
49. MATLAB Docs on MatchFeatures. Available online: https://www.mathworks.com/help/vision/ref/matchfeatures.html (accessed on 12 January 2020).
50. Rastrigin, L. The convergence of the random search method in the extremal control of a many parameter system. Autom. Remote Control 1963, 24, 1337–1342.
Figure 1. Phases of flight during spin recovery.
Figure 2. Keypoint displacements in different phases.
Figure 3. SURF keypoints determined at $t-1$ (red circles) and $t$ (green crosses) connected by displacements (yellow segments), plotted on the $t-1$ and $t$ frames superimposed using alpha blending, for: (a) spinning phase, (b) diving phase, (c) recovery phase.
Figure 4. SURF keypoints from Figure 3 after removing faulty matchings for: (a) spinning phase, (b) diving phase, (c) recovery phase.
Figure 5. Accumulator matrices.
Figure 6. $A_R$ and $A_T$ after taking into account the “votes” of one pair of corresponding points.
Figure 7. $A_R$ and $A_T$ after taking into account the “votes” of two pairs of corresponding points.
Figure 8. Accumulators: (a) $A_R$ in rotation, (b) $A_T$ in rotation, (c) $A_R$ in recovery, (d) $A_T$ in recovery.
Figure 9. Selected frames from test videos in group 1: (a) 6:00, (b) 9:00, (c) 12:00, (d) 15:00, (e) 18:00, (f) 21:00.
Figure 10. Selected frames from test videos in group 2: (a) ocean, (b) desert, (c) mountains, (d) arctic, (e) urban1, (f) urban2.
Figure 11. Selected frames from test videos in group 3: (a) 2000 ft, (b) 4000 ft, (c) 6000 ft, (d) 8000 ft, (e) 10,000 ft, (f) 12,000 ft.
Figure 12. Selected frames from test videos in group 4: (a) 0.3 NM, (b) 0.4 NM, (c) 0.5 NM, (d) 3–5 NM, (e) 5–10 NM, (f) >10 NM.
Figure 13. Qualitative evaluation of results in ELAN after automatic insertion of the “our method” layer.
Figure 14. Graphs obtained for the selected test video: (a) $HS_{A_R} = f(t)$, (b) $HS_{A_T} = f(t)$, (c) plane state.
Figure 15. The average number of matched keypoints for: (a) different scenes, (b) different altitudes.
Figure 16. Selected frames from the last phase of spin recovery for test videos in group 4: (a) 0.3 NM, (b) 0.4 NM, (c) 0.5 NM, (d) 3–5 NM, (e) 5–10 NM, (f) >10 NM.
Figure 17. Glider bow with the mounted camera used in the experiments.
Figure 18. Selected frames from five spin executions: (a–c) 1st spin, (d–f) 2nd spin, (g–i) 3rd spin, (j–l) 4th spin, (m–o) 5th spin; columns correspond to spinning, diving and recovery.
Table 1. Characteristics of the dataset.

Group 1. Description: change of time of day. Goal: examination of the dependence on the level and direction of lighting. Time of day: 6:00, 9:00, 12:00, 15:00, 18:00, 21:00.
Group 2. Description: change of location. Goal: examination of the dependence on the number of detected keypoints. Location: ocean, desert, mountains, arctic, urban1, urban2.
Group 3. Description: change of altitude. Goal: examination of the dependence on the number and density of detected keypoints. Altitude [ft]: 2000, 4000, 6000, 8000, 10,000, 12,000.
Group 4. Description: change of visibility. Goal: examination of the dependence on the visibility. Visibility [NM]: 0.3, 0.4, 0.5, 3–5, 5–10, >10.
Table 2. Method parameters.

Parameters used while detecting, extracting and matching SURF keypoints:
MetricThreshold: strongest feature threshold [47]; tested values: 100, 200, ..., 1500.
NumOctaves: number of octaves [47]; tested values: 1, 2, 3, 4.
NumScaleLevels: number of scale levels per octave [47]; tested values: 3, 4, 5, 6.
FeatureSize: length of feature vector [48]; tested values: 64, 128.
MatchThreshold: matching threshold [49]; tested values: 1, 10, 20, ..., 100.
Metric: feature matching metric [49]; tested values: SAD, SSD.
Parameters used while removing faulty matches:
k: faulty match rejection threshold (Section 2.2, Equation (2)); tested values: 1, 2, ..., 6.
Parameters used while determining the aircraft state:
$T_R$: threshold value for $HS_{A_R}$ (Section 2.4, Equation (6)); tested values: 0.05, 0.06, ..., 0.15.
$\Delta_R$: deadzone width for $HS_{A_R}$ (Section 2.4, Equation (6)); tested values: 0.01, 0.02, ..., 0.05.
$T_T$: threshold value for $HS_{A_T}$ (Section 2.4, Equation (7)); tested values: 0.01, 0.02, ..., 0.10.
$\Delta_T$: deadzone width for $HS_{A_T}$ (Section 2.4, Equation (7)); tested values: 0.01, 0.02, ..., 0.05.
$\epsilon$: permitted deviation from the keypoints’ downward movement (Section 2.4, Equation (7)); tested values: 0, 5, ..., 45.
Table 3. Selected parameters.

MetricThreshold = 100, NumOctaves = 4, NumScaleLevels = 6, FeatureSize = 64, MatchThreshold = 10, Metric = SSD;
k = 1, $T_R$ = 0.13, $\Delta_R$ = 0.02, $T_T$ = 0.10, $\Delta_T$ = 0.03, $\epsilon$ = 40.
Table 4. The results (Jaccard indices) for X-Plane videos.

Group 1 (time of day):      6:00    9:00    12:00   15:00   18:00   21:00
  video 1:                  0.93    0.88    0.91    0.83    0.91    0.96
  video 2:                  0.91    0.92    0.94    0.86    0.78    0.91
  video 3:                  0.96    0.85    0.80    0.93    0.88    0.91
Group 2 (location):         ocean   desert  mountains arctic urban1 urban2
  video 1:                  -       0.74    0.99    0.82    0.92    0.91
  video 2:                  -       0.97    0.91    0.71    0.79    0.96
  video 3:                  -       0.89    0.99    0.85    0.97    0.80
Group 3 (altitude [ft]):    2000    4000    6000    8000    10,000  12,000
  video 1:                  0.82    0.99    0.94    0.92    0.91    0.75
  video 2:                  0.79    0.87    0.97    0.90    0.93    0.84
  video 3:                  0.74    0.90    0.91    0.90    0.77    0.90
Group 4 (visibility [NM]):  0.3     0.4     0.5     3–5     5–10    >10
  video 1:                  0.89    0.82    0.91    0.92    0.82    0.88
  video 2:                  0.86    0.90    0.88    0.90    0.82    0.93
  video 3:                  0.76    0.95    0.94    0.89    0.92    0.80
Table 5. Results for flight-recorded videos.

Flight video:    1      2      3      4      5
Jaccard index:   0.85   0.89   0.92   0.93   0.94

Citation: Kapuscinski, T.; Szczerba, P.; Rogalski, T.; Rzucidlo, P.; Szczerba, Z. A Vision-Based Method for Determining Aircraft State during Spin Recovery. Sensors 2020, 20, 2401. https://doi.org/10.3390/s20082401
