Article

A Ground-Based Near Infrared Camera Array System for UAV Auto-Landing in GPS-Denied Environment

Tao Yang, Guangpo Li, Jing Li, Yanning Zhang, Xiaoqiang Zhang, Zhuoyue Zhang and Zhi Li
1 ShaanXi Provincial Key Laboratory of Speech and Image Information Processing, School of Computer Science, Northwestern Polytechnical University, Xi’an 710129, China
2 School of Telecommunications Engineering, Xidian University, Xi’an 710071, China
* Author to whom correspondence should be addressed.
Sensors 2016, 16(9), 1393; https://doi.org/10.3390/s16091393
Submission received: 17 June 2016 / Revised: 19 August 2016 / Accepted: 25 August 2016 / Published: 30 August 2016
(This article belongs to the Special Issue Infrared and THz Sensing and Imaging)

Abstract

This paper proposes a novel infrared camera array guidance system capable of tracking a fixed-wing unmanned air vehicle (UAV) and providing its position and speed in real time during the landing process. The system comprises three novel parts: (1) a cooperative long-range optical imaging module based on an infrared camera array and a near infrared laser lamp; (2) a large-scale outdoor camera array calibration module; and (3) a laser marker detection and 3D tracking module. Extensive automatic landing experiments with a fixed-wing aircraft demonstrate that our infrared camera array system can guide the UAV to land safely and accurately in real time. Moreover, the measurement and control distance of the system exceeds 1000 m. The experimental results also show that the system can be used for accurate automatic UAV landing in Global Positioning System (GPS)-denied environments.

1. Introduction

Unmanned air vehicles (UAVs) have become increasingly prevalent in recent years. However, one of the most difficult challenges for both manned and unmanned aircraft is safe landing. A significant number of accidents happen during the landing phase due to pilot inexperience or sudden changes in the weather, so automatic landing systems are required to land UAVs safely [1]. Developing autonomous landing systems has become a hot topic in current UAV research, and it remains challenging because of the high requirements on reliability and accuracy.
Several methods for UAV control have been developed, such as PID control [2], backstepping [3,4,5], H∞ control [6], sliding mode control [7], fuzzy control [8], and model-based fault-tolerant control [9]. The traditional onboard navigation systems used for landing mainly include the Inertial Navigation System (INS), the Global Positioning System (GPS), INS/GPS integrated navigation, the Global Navigation Satellite System (GNSS), and so on [10]. One of the most commonly used methods is GPS/INS integrated navigation, but GPS signals are easily blocked and provide low height accuracy [11], and INS tends to drift because its errors accumulate over time [12].
As described above, the height measurement from GPS is usually inaccurate, which can easily lead to a crash, so additional sensors such as a radar altimeter may be needed. Most importantly, GPS signals are not always available, so automatic landing may not be possible in many remote regions or GPS-denied environments. In such cases, the advantages of vision-based automatic landing methods become particularly important.
In recent years, measurement systems built around visual sensors have been applied more and more widely and have been extended to UAV automatic landing [10]. Guo et al. [12] proposed a vision-aided landing navigation system based on a fixed-waveband guidance illuminant using a single camera. Ming et al. [13] adopted a vision-aided INS method to implement a UAV auto-landing navigation system. Vladimir et al. [14] presented a robust real-time line tracking algorithm for fixed-wing aircraft landing. Abu-Jbara [15] and Cao et al. [16] studied airport runways in natural scenes, while Sereewattana et al. [17] provided a method to find the runway by adding four marks of different colors. Zhuang et al. [18] used the two edge lines on both sides of the main runway and the front edge line of the airport to estimate the attitude and position parameters. Li et al. [19] extracted three runway lines in the image using the Hough transform to estimate the pose of the UAV. Barber et al. [20] used a visual marker to estimate the roll and pitch for flight control. Huh et al. [11] proposed a vision-based automatic landing method that uses a monotone hemispherical airbag as a marker. Lange et al. [21] adopted an adaptive thresholding technique to detect a landing pad for multirotor UAV automatic landing. Miller et al. [22] proposed a navigation algorithm based on image registration. Daquan et al. [23] estimated the aircraft’s pose using an extended Kalman filter (EKF).
Most of the papers mentioned above rely on a downward-looking camera to recognize the airport runway, artificial markers, or natural markers in the image. However, stabilizing controllers based on a monocular camera are subject to drift over time [24], the field of view can be temporarily occluded, and the illumination conditions might change drastically within a distance of a few meters [25]. Moreover, extracting symbols such as the runway and the performance of image-matching algorithms are strongly influenced by the imaging conditions; it is hard to extract the symbols correctly, and the image-matching performance may be unsatisfactory in complicated situations such as rain, fog and night [10]. Compared with onboard navigators, a ground-based system possesses stronger computation resources and enlarges the search field of view. Wang et al. [26] used a ground USB camera to track a square marker patched on a micro-aircraft. Martinez et al. [27] designed a trinocular system, composed of three FireWire cameras fixed on the ground, to estimate the vehicle’s position and orientation by tracking color landmarks on the UAV. Researchers at Chiba University [28] designed a ground-based stereo vision system to estimate the three-dimensional position of a quadrotor, using the continuously adaptive mean shift algorithm to track the color-based object. Abbeel et al. [29] achieved autonomous aerobatic flights of an instrumented helicopter using two Point Grey cameras with known positions and orientations on the ground; the accuracy of the estimates obtained was about 25 cm at about 40 m distance from the cameras.
There is one similar automatic landing system called OPATS (Object Position and Tracking System), which was developed by RUAG for the Swiss Air Force in 1999 [30]. However, our system differs from OPATS in several aspects. First of all, the underlying theories are different. OPATS is a laser-based automatic UAV landing system that continuously measures the dynamic position of the object of interest using a single laser sensor on a tripod, whereas the proposed landing system is based on stereo vision using an infrared camera array. Their localization principles therefore differ: OPATS relies on an infrared laser beam reflected by a retroreflector on the UAV, while our infrared camera array system is based on binocular positioning. Secondly, the equipment is different. The ground equipment of OPATS mainly includes a standalone laser sensor, an electronics unit and a battery, whereas the ground equipment of our landing system mainly includes an infrared laser lamp, an infrared camera array, camera lenses and optical filters. The equipment fixed on the UAV is a passive optical retroreflector for OPATS and a near infrared laser lamp for our landing system. Most importantly, OPATS can only guide one UAV landing at a time, whereas the landing system proposed in this paper covers a wide field of regard and has the ability to guide several UAVs landing at the same time. This problem can be described as "multi-object tracking", which is becoming increasingly important as UAVs develop. A pan-tilt unit (PTU) is employed to actuate the vision system in [31]. Although that system can be used under all weather conditions and around the clock, its limited baseline results in a short detection distance. To achieve long-range detection and cover a wide field of regard, a newly developed system was designed in [32], which is the work most similar to ours. Two separate PTUs, each integrated with a visible light camera, are mounted on both sides of the runway instead of the previous stereo vision system, and the system is able to detect the UAV at around 600 m. However, the detection results become unsatisfactory when the background is cluttered.
In order to land UAVs safely in GPS-denied environments, a novel ground-based infrared camera array system is proposed in this paper, as shown in Figure 1. Direct detection of the UAV airframe is limited in range, so a near infrared laser lamp is fixed on the nose to mark the position of the UAV, which simplifies the problem and improves the robustness of the system. Two infrared cameras are located on the two sides of the runway to capture images of the flying aircraft, which are processed by the navigation computer. After processing, the real-time position and speed of the UAV are sent to the UAV control center, and the detection results, tracking trajectory and three-dimensional localization results are displayed in real time. The infrared camera array system, cooperating with the laser lamp, can effectively suppress light interference and can be employed around the clock under all weather conditions.
In this paper, we present the design and construction of hardware and software components to accomplish autonomous accurate landing of the UAV. Our work mainly makes three contributions:
  • We propose a novel infrared camera array and near infrared laser lamp based cooperative optical imaging method, which greatly improves the detection range of the UAV.
  • We design a wide baseline camera array calibration method. The method presented could achieve high precision calibration and localization results.
  • We develop a robust detection and tracking method for near infrared laser marker.
The proposed system is verified using a medium-sized fixed-wing UAV. The experiments demonstrate that the detection range is greatly improved, to more than 1000 m, and that a high localization accuracy is achieved. The system has also been validated in a GPS-denied environment, in which the UAV was guided to land safely.
The rest of the paper is organized as follows. Section 2 describes the design and methodology of our landing system. Section 3 presents the experimental results, and Section 4 concludes the paper.

2. Landing System

This paper focuses on the content of the vision-based landing navigation. Figure 2 shows the system framework and the complete experimental setup is shown in Figure 3.
We adopt a vision-based method, so obtaining clear images of the target at long distance is our first task. In our system, a light source carried on the nose marks the position of the UAV. After many tests and analyses, we finally chose a near infrared laser lamp and an infrared camera array to form the optical imaging module.
The accuracy of the camera array parameters directly determines the localization accuracy, but the accuracy of traditional binocular calibration methods degrades severely at long distance. In this paper, a calibration method for the infrared camera array is proposed which achieves high calibration accuracy.
When the UAV approaches from a long distance, the laser marker fixed on the UAV appears as a small target in the image. Besides, the target may be influenced by strong sunlight, signal noise and other uncertain factors in practice. In this paper, these problems are discussed in detail and solved effectively.

2.1. Optical Imaging Module

For vision-based UAV automatic landing systems, one of the basic steps is to construct the optical imaging module. A good optical imaging module makes the target prominent in the image, simplifies the algorithm, and guarantees the stability of the system. The main components of our optical imaging module are a near infrared laser lamp, an infrared camera array and optical filters.
Direct detection of the UAV airframe is usually not robust, especially at long distance, because it is greatly affected by the background, so the detection range is usually limited. To improve the detection range, we carefully design the vision system by introducing a light source, which plays an important role in the optical imaging module: it directly affects the quality of the image and thereby the performance of the system. In our system, the light source is carried on the nose to mark the position of the UAV. The function of the light source is to obtain a clear image with high contrast. One required characteristic of the selected light source is insensitivity to visible light. Taking wind into consideration, the light source should also be robust to viewing-angle changes. After extensive comparative tests of different light sources, we finally chose the near infrared laser lamp, whose parameters are shown in Table 1. The illumination distance of the near infrared laser lamp is more than 1000 m, which guarantees the detection range. Its wavelength is 808 ± 5 nm, and its weight is only 470 g, which makes it suitable to be fixed on the UAV.
Our ground-based system is mainly designed for fixed-wing UAVs, which have a high flight speed and landing height. The function of the cameras is to capture images of the near infrared laser lamp fixed on the UAV, so the cameras need a high sampling rate for the dynamic target and enough resolution for the target to remain clearly visible at long distance. Considering these requirements, we chose the Point Grey GS3-U3-41C6NIR-C infrared camera with 2048 × 2048 pixels. To ensure the camera resolution and spatial localization accuracy, we selected the Myutron HF3514V camera lens with a focal length of 35 mm. The cameras are fixed on each side of the runway, as shown in Figure 3, giving a wide baseline. The camera parameters are shown in Table 2 and the camera lens parameters in Table 3. The maximum frame rate of the camera is 90 fps at 2048 × 2048, which meets the needs of the proposed system. The maximum magnification of the camera lens is 0.3× and its TV distortion is −0.027%.
To increase the anti-interference ability of the optical imaging module, a near-IR interference bandpass filter is adopted. The center wavelength of the optical filter is 808 nm, at which the signal attenuation is small. The filter is fixed in front of the camera lens, so the camera is only sensitive to light near 808 nm, which matches the 808 nm emission wavelength of the near infrared laser lamp fixed on the UAV. The filter is a component of the infrared camera array. The cooperation of the infrared camera array and the light source guarantees distinct imaging of the near infrared laser lamp and effectively avoids interference from complicated backgrounds, so both the robustness and the detection range of the system are greatly improved. As shown in Figure 4, the filter removes almost all interfering signals. The optical filter parameters are shown in Table 4.

2.2. Large Scale Outdoor Camera Array Calibration

The process of calibration is to estimate the intrinsic parameters and extrinsic parameters of the cameras in the array system.
To obtain high localization accuracy, precise camera parameters are needed. Classical camera calibration methods include Weng’s [33], Zhang’s [34], etc. For these traditional methods, the reference points or lines must be rationally distributed in space or in the calibration image, which is easy to arrange indoors or when the field of view is not large. To obtain precise camera parameters in a large-scale outdoor scene, a new camera array calibration method is presented.
A chessboard pattern is adopted to obtain the intrinsic parameters. As described before, the two infrared cameras are located on both sides of the airport runway to enlarge the baseline, which improves the localization accuracy but makes the calibration of the external parameters difficult. Thus a novel external parameter calibration method based on an electronic total station is presented here. The parameters of the electronic total station used in our system are shown in Table 5: its measuring distance reaches 2000 m, its ranging accuracy is ±(2 mm + 2 ppm), and its angle measurement accuracy is ±2″, which ensures a high measurement accuracy. To obtain precise external parameters, ten near infrared laser lamps are also placed on both sides of the runway as control points, as shown in Figure 1, which presents one example of the placement and spacing of the lamps. Six of them are located close to the ground, and four of them are located about 2.0 m above the ground. The external parameter calibration method mainly includes the following steps:
(1) Establish a world coordinate system.
(2) Measure the precise world coordinates of the control points using the electronic total station.
(3) Extract the projections of the control points from the two calibration images.
(4) Accurately estimate the spot center coordinates of the control points using bilinear interpolation.
(5) Obtain the initial external parameters by the DLT (Direct Linear Transform) algorithm [35].
(6) Generate the final calibration results by the LM (Levenberg-Marquardt) algorithm [35].
In step 4, the spot centroids must be extracted after the spot regions are determined. To improve accuracy and stability, bilinear interpolation is applied before calculating the coordinates of the spot center:
$$\begin{aligned} g(i+x, j) &= g(i,j) + x\,[g(i+1,j) - g(i,j)] \\ g(i, j+y) &= g(i,j) + y\,[g(i,j+1) - g(i,j)] \\ g(i+x, j+y) &= g(i,j) + x\,[g(i+1,j) - g(i,j)] + y\,[g(i,j+1) - g(i,j)] \\ &\quad + x y\,[g(i+1,j+1) + g(i,j) - g(i+1,j) - g(i,j+1)], \end{aligned}$$
where $g(i,j)$ is the gray value of the pixel $(i,j)$ and $x, y \in (0,1)$. The subpixel spot center $(x_c, y_c)$ around the initial center $(x_0, y_0)$ is then calculated by:
$$x_c = \sum_{x_i = x_b}^{x_e} x_i \cdot w(x_i, y_i) \Big/ \sum_{x_i = x_b}^{x_e} w(x_i, y_i), \qquad y_c = \sum_{y_i = y_b}^{y_e} y_i \cdot w(x_i, y_i) \Big/ \sum_{y_i = y_b}^{y_e} w(x_i, y_i),$$
and
$$w(x_i, y_i) = \begin{cases} g(x_i, y_i), & g(x_i, y_i) \ge T_0 \\ 0, & g(x_i, y_i) < T_0 \end{cases}, \qquad x_b = x_0 - T_1, \quad x_e = x_0 + T_1, \quad y_b = y_0 - T_1, \quad y_e = y_0 + T_1,$$
where $x_i$ and $y_i$ are the horizontal and vertical coordinates of the point $(x_i, y_i)$, and $T_0$ and $T_1$ are thresholds.
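A minimal Python sketch of the weighted spot-center computation may help clarify the procedure. The function name and the default values of `T0` and `T1` are illustrative assumptions, and the bilinear up-sampling step is omitted for brevity; only the thresholded, intensity-weighted centroid rule above is shown.

```python
import numpy as np

def spot_center_subpixel(img, x0, y0, T0=30, T1=5):
    """Refine an integer spot center (x0, y0) to subpixel accuracy.

    Pixels inside a (2*T1 + 1) window whose gray value is at least T0
    contribute to an intensity-weighted centroid, following the weighting
    rule above. `img` is a 2D grayscale array indexed as img[y, x].
    The default values of T0 and T1 are illustrative, not the paper's.
    """
    h, w = img.shape
    xb, xe = max(x0 - T1, 0), min(x0 + T1, w - 1)
    yb, ye = max(y0 - T1, 0), min(y0 + T1, h - 1)

    window = img[yb:ye + 1, xb:xe + 1].astype(np.float64)
    weights = np.where(window >= T0, window, 0.0)   # w(x_i, y_i)
    total = weights.sum()
    if total == 0:
        return float(x0), float(y0)                 # no bright pixels: keep the initial center

    ys, xs = np.mgrid[yb:ye + 1, xb:xe + 1]         # pixel coordinates of the window
    xc = (xs * weights).sum() / total
    yc = (ys * weights).sum() / total
    return xc, yc
```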

2.3. Target Detection, Localization and Tracking

Target Detection: Because the grayscale of the target differs markedly from that of the background, we directly acquire a foreground image of the candidate targets after simple morphological pre-processing, and then foreground clustering is performed to obtain the image coordinates of the candidate targets. If the pixel distance $f_{pd}(p_i, p_j)$ is less than the foreground clustering window $J$, the pixels are clustered into the same class $x_i$ ($i \ge 0$). We regard the image centroid of each cluster as the coordinate of the corresponding candidate target. The pixel distance is defined as:
$$f_{pd}(p_i, p_j) = \sqrt{(p_{ix} - p_{jx})^2 + (p_{iy} - p_{jy})^2},$$
where $p_i$ and $p_j$ are image pixels, and $(p_{ix}, p_{iy})$ and $(p_{jx}, p_{jy})$ are the pixel coordinates of $p_i$ and $p_j$, respectively.
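The clustering rule above can be realized in several ways; the following Python sketch uses thresholding plus morphological dilation and connected-component labelling as a fast approximation of the pixel-distance rule. The function name and the parameter values are illustrative assumptions, not settings from the paper.

```python
import numpy as np
from scipy import ndimage

def detect_candidates(img, gray_threshold=200, cluster_window=5):
    """Extract candidate laser-spot coordinates from one frame.

    Thresholding gives the foreground mask; foreground pixels closer than
    `cluster_window` are merged into one cluster by dilating the mask and
    taking connected components, and the centroid of each cluster is
    reported as a candidate target. Both parameters are illustrative.
    """
    mask = img >= gray_threshold
    # Dilation by the clustering window merges nearby foreground pixels into
    # one connected component (an approximation of the rule f_pd < J).
    structure = np.ones((cluster_window, cluster_window), dtype=bool)
    merged = ndimage.binary_dilation(mask, structure=structure)
    labels, num = ndimage.label(merged)
    labels = labels * mask                          # keep labels only on true foreground pixels
    centroids = ndimage.center_of_mass(mask, labels, range(1, num + 1))
    # center_of_mass returns (row, col); convert to (x, y) image coordinates.
    return [(c, r) for r, c in centroids]
```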
To determine the correspondence between candidate targets and remove false targets, the epipolar geometry constraint between the two cameras is used. The epipolar geometry between the two cameras refers to the inherent projective geometry between the views; it depends only on the camera intrinsic parameters and the relative pose of the two cameras. Thus, after the target is detected in the two cameras independently, the epipolar constraint can be used to obtain the data association results. In this way, the correspondence between candidate targets is confirmed and some of the false targets are removed.
Define $I_1 = \{x_1^1, x_2^1, \ldots, x_m^1\}$ and $I_2 = \{x_1^2, x_2^2, \ldots, x_n^2\}$ as the detection results of the first and second camera. The task of data association is to find the correspondence between $x_i^1$ and $x_j^2$. The distance measure is the symmetric transfer error between $x_i^1$ ($i = 1, 2, \ldots, m$) and $x_j^2$ ($j = 1, 2, \ldots, n$), defined as:
$$d(x_i^1, x_j^2) = d(x_i^1, F^T x_j^2) + d(x_j^2, F x_i^1),$$
where $F$ is the fundamental matrix between the two cameras. The matching matrix between the two images is:
$$D = \begin{bmatrix} d(x_1^1, x_1^2) & d(x_1^1, x_2^2) & \cdots & d(x_1^1, x_n^2) \\ d(x_2^1, x_1^2) & d(x_2^1, x_2^2) & \cdots & d(x_2^1, x_n^2) \\ \vdots & \vdots & \ddots & \vdots \\ d(x_m^1, x_1^2) & d(x_m^1, x_2^2) & \cdots & d(x_m^1, x_n^2) \end{bmatrix}.$$
The global optimal matching result is obtained by solving the matching matrix $D$ with the Hungarian algorithm [36], and this result is taken as the final detection result.
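As a concrete illustration, the following Python sketch builds the matching matrix from the symmetric transfer error and solves it with the Hungarian algorithm via `scipy.optimize.linear_sum_assignment`. The helper names and the gating threshold `max_error` are assumptions for illustration, not values from the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def point_line_distance(pt, line):
    """Distance from an image point (x, y) to a homogeneous line (a, b, c)."""
    a, b, c = line
    x, y = pt
    return abs(a * x + b * y + c) / np.hypot(a, b)

def match_detections(pts1, pts2, F, max_error=5.0):
    """Associate detections from the two cameras with the Hungarian algorithm.

    pts1, pts2 : lists of (x, y) detections from camera 1 and camera 2.
    F          : 3x3 fundamental matrix (camera-1 points to camera-2 epipolar lines).
    The cost is the symmetric transfer error defined above; `max_error`
    (in pixels) is an illustrative gate for rejecting false pairs.
    """
    D = np.zeros((len(pts1), len(pts2)))
    for i, x1 in enumerate(pts1):
        for j, x2 in enumerate(pts2):
            l2 = F @ np.array([x1[0], x1[1], 1.0])     # epipolar line of x1 in image 2
            l1 = F.T @ np.array([x2[0], x2[1], 1.0])   # epipolar line of x2 in image 1
            D[i, j] = point_line_distance(x1, l1) + point_line_distance(x2, l2)

    rows, cols = linear_sum_assignment(D)              # global optimum over the matching matrix
    return [(i, j) for i, j in zip(rows, cols) if D[i, j] < max_error]
```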
Target Localization: Suppose the world coordinate of the laser marker is $X$, the two camera projection matrices are $P$ and $P'$, and the image coordinates of the laser marker in the two detection images are $x$ and $x'$. Due to measurement error, no point satisfies the equations $x = PX$ and $x' = P'X$ exactly (up to scale), and the image points do not satisfy the epipolar constraint $x'^T F x = 0$.
A projective-invariant binocular localization method that minimizes the re-projection error is presented here. The method finds the solution that best satisfies the epipolar constraint while minimizing the re-projection error. Since the whole process only involves the projection of space points and distances between 2D image points, the method is projective invariant, meaning that the solution is independent of the specific projective space.
In the corresponding images of the two cameras, the observation points are $x$ and $x'$, respectively. Suppose the points near $x$ and $x'$ that exactly satisfy the epipolar constraint are $\hat{x}$ and $\hat{x}'$. The maximum likelihood estimate minimizes the following objective function:
$$C(\hat{x}, \hat{x}') = d(x, \hat{x}) + d(x', \hat{x}'),$$
subject to $\hat{x}'^T F \hat{x} = 0$, where $d(\cdot, \cdot)$ is the Euclidean distance between image points.
We first obtain an initial value of $X$ by the DLT algorithm. Since $x \sim PX$ and $x' \sim P'X$, the homogeneous relations give the cross-product equations $x \times (PX) = 0$ and $x' \times (P'X) = 0$. Expanding these equations, we get:
$$\begin{aligned} x_1 (p^{3T} X) - (p^{1T} X) &= 0, & y_1 (p^{3T} X) - (p^{2T} X) &= 0, & x_1 (p^{2T} X) - y_1 (p^{1T} X) &= 0, \\ x_2 (p'^{3T} X) - (p'^{1T} X) &= 0, & y_2 (p'^{3T} X) - (p'^{2T} X) &= 0, & x_2 (p'^{2T} X) - y_2 (p'^{1T} X) &= 0, \end{aligned}$$
where $p^{iT}$ is the $i$-th row of the matrix $P$ and $p'^{jT}$ is the $j$-th row of the matrix $P'$, and the homogeneous image coordinates are $x = (x_1, y_1, 1)^T$ and $x' = (x_2, y_2, 1)^T$. The linear equations in $X$ can be written as $AX = 0$. Although each point corresponds to three equations, only two of them are linearly independent, so each point provides two equations in $X$ and the third equation is usually omitted. Thus $A$ can be written as:
$$A = \begin{bmatrix} x_1 p^{3T} - p^{1T} \\ y_1 p^{3T} - p^{2T} \\ x_2 p'^{3T} - p'^{1T} \\ y_2 p'^{3T} - p'^{2T} \end{bmatrix}.$$
Since $X$ is a homogeneous coordinate, it has only three degrees of freedom up to scale. The linear system $AX = 0$ contains four equations, so it is over-determined. To obtain an approximate solution of $X$, the system $AX = 0$ is converted into the following optimization problem:
$$\min_X \|AX\| \quad \text{subject to} \quad \|X\| = 1.$$
After the initial value $X_0$ of $X$ is obtained from the above formula, the LM algorithm is used for iterative optimization to yield the final localization result.
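To make the localization pipeline concrete, here is a Python sketch of the DLT initialization (solving $\min \|AX\|$ subject to $\|X\| = 1$ via SVD) followed by a Levenberg-Marquardt refinement of the reprojection error using `scipy.optimize.least_squares`. The function names are hypothetical; `P1`, `P2` are the 3 × 4 projection matrices and `x1`, `x2` the matched image points.

```python
import numpy as np
from scipy.optimize import least_squares

def triangulate_dlt(P1, P2, x1, x2):
    """Initial 3D point from two views: solve min ||AX|| subject to ||X|| = 1."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # right singular vector of the smallest singular value
    return X / X[3]                 # de-homogenize

def refine_reprojection(P1, P2, x1, x2, X0):
    """Levenberg-Marquardt refinement of the reprojection error (sketch of the LM step)."""
    def residuals(Xw):
        X = np.append(Xw, 1.0)
        p1, p2 = P1 @ X, P2 @ X
        return np.concatenate([p1[:2] / p1[2] - x1, p2[:2] / p2[2] - x2])

    return least_squares(residuals, X0[:3], method="lm").x
```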
Target Tracking: The Euclidean distance is used as the distance measure in 3D space. Denote the historical target tracking results by $T_i^t$ ($i = 1, 2, \ldots, p$) and the current localization results by $X_j^{t+1}$ ($j = 1, 2, \ldots, q$); the distance between them is computed by:
$$d(T_i^t, X_j^{t+1}) = \sqrt{(x_i^t - x_j^{t+1})^2 + (y_i^t - y_j^{t+1})^2 + (z_i^t - z_j^{t+1})^2},$$
where $(x_i^t, y_i^t, z_i^t)$ and $(x_j^{t+1}, y_j^{t+1}, z_j^{t+1})$ are the space coordinates of $T_i^t$ and $X_j^{t+1}$. The matching matrix between them is computed by:
$$D_t^{t+1} = \begin{bmatrix} d(T_1^t, X_1^{t+1}) & d(T_1^t, X_2^{t+1}) & \cdots & d(T_1^t, X_q^{t+1}) \\ d(T_2^t, X_1^{t+1}) & d(T_2^t, X_2^{t+1}) & \cdots & d(T_2^t, X_q^{t+1}) \\ \vdots & \vdots & \ddots & \vdots \\ d(T_p^t, X_1^{t+1}) & d(T_p^t, X_2^{t+1}) & \cdots & d(T_p^t, X_q^{t+1}) \end{bmatrix}.$$
The Hungarian algorithm is then used to obtain the target tracking results from $D_t^{t+1}$.

3. Experiments and Discussion

3.1. Optical Imaging Experiments

We have compared several kinds of light sources, such as a strong-light flashlight, a high intensity discharge lamp and a halogen lamp. Because those light sources are sensitive to visible light and have a short irradiation range, we finally chose the near infrared laser lamp. In this section, we present the comparison between the near infrared laser lamp and the strong-light flashlight.
We first compare the imaging quality at different distances. In this experiment, the near infrared laser lamp and the flashlight are placed at the same position at distances from 80 m to 650 m. Figure 5 shows that the light spots of the strong-light flashlight are hard to find in the images beyond 400 m, while the light spots of the near infrared laser lamp can still be seen clearly at 650 m.
The system also needs a certain tolerance to angle changes. In this experiment, we first place the light sources at the same position facing the same direction, and then adjust the horizontal rotation angle of the light source. The experiment was conducted at 150 m. As shown in Figure 6, the near infrared laser lamp can be detected robustly from 0 degrees to 45 degrees, while the strong-light flashlight cannot be seen clearly when the angle is greater than 10 degrees.
From the above experiments, we can see that the near infrared laser lamp meets the needs of the landing system well. Combined with the infrared camera array and optical filter, a robust optical imaging system with a long detection range is established.

3.2. Infrared Camera Array Calibration Experiments

The precision of the camera array parameters directly determines the localization accuracy. To verify it, five reference points are selected near the center line of the runway to simulate UAV positions. Their space coordinates are measured by the electronic total station as the ground truth. Then laser markers are placed at the positions of the reference points, and their space coordinates are calculated based on the calibration results. In fact, this experiment can also be considered a localization accuracy verification for the UAV on the ground.
The experimental results are shown in Table 6. The errors in the X element are much larger than those in Y and Z, while the Y and Z errors remain below a limited threshold. The measurement errors in the X axis gradually decrease from far to near, and the precision reaches the centimeter level within 200 m. Limited by the length of the runway, the maximum experimental distance is about 400 m. In this experiment, the accuracy in the Y and Z axes is kept within the centimeter level. More importantly, a high localization accuracy is attained over the last 200 m, which fully meets the needs of the UAV automatic landing system.
A practical experiment based on control points has also been conducted to verify the calibration results, as shown in Figure 7. The calibration images taken by the two infrared cameras are shown in Figure 7a,b. As described previously, the positions of the control points are measured by the electronic total station. The red circles in Figure 7c,d are the real positions of the control points, marked by our detection algorithm. The world coordinates of the control points are re-projected into image coordinates based on the calibration parameters and marked by yellow crosses in Figure 7c,d. The red circles and yellow crosses essentially coincide, which effectively demonstrates the calibration accuracy of the intrinsic and external parameters.

3.3. Target Detection and Localization Experiments

In order to improve the stability and robustness of the ground-based system, the landing system needs to be able to remove false targets effectively. The removal of false targets occurs in three stages. In the multi-camera collaborative detection based on the epipolar constraint, false targets are removed by the symmetric transfer error; in the multi-camera stereo localization, false targets are removed by constraints on the space motion track of the UAV; and in target tracking, false targets are removed by analyzing the motion directions and velocities of the candidate targets. In this way, the target can be detected correctly.
Figure 8 and Figure 9 show detection experiments under sunlight; even with the smear effect in Figure 9, the targets are detected correctly in both cases.
The detection accuracy of the near infrared laser lamp fixed on the UAV directly determines the accuracy of the spatial localization. In this part, we analyze the effect of the detection error on the localization accuracy. In this simulation experiment, we assume that the camera parameters are already known. During the landing phase, the point is projected into the image through the camera matrix, and zero-mean Gaussian random noise is added to the projected point; we then compute the localization result and analyze its accuracy. Table 7 gives the average error over 1000 simulation runs for different standard deviations of the Gaussian noise.
In Table 7, the standard deviation of the Gaussian noise is set to 0.1, 0.2 and 0.5 pixels in turn. The errors in all three axes decrease as the distance decreases. The error in the X axis is the largest, while the errors in the Y and Z axes remain small. When the standard deviation of the Gaussian noise is 0.5 pixels, the errors in all three axes are the largest; however, over the last 100 m the error in the X axis is less than 0.5 m and the errors in the Y and Z axes are within the centimeter level. Therefore, when the target detection error is below 0.5 pixels, the localization accuracy meets the requirements of the landing system.
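For reference, a minimal Python sketch of this Monte Carlo procedure is given below; it reuses the hypothetical `triangulate_dlt` function from the localization sketch in Section 2.3, and the camera matrices, test point and noise levels are placeholders supplied by the caller.

```python
import numpy as np

def simulate_localization_error(P1, P2, X_true, sigma, trials=1000, seed=None):
    """Monte Carlo estimate of the per-axis localization error for a noise level.

    X_true : true 3D point in world coordinates (metres).
    sigma  : standard deviation of the Gaussian pixel noise on each projection.
    Reuses the triangulate_dlt sketch from Section 2.3.
    """
    rng = np.random.default_rng(seed)
    X_true = np.asarray(X_true, dtype=float)
    Xh = np.append(X_true, 1.0)
    x1 = (P1 @ Xh)[:2] / (P1 @ Xh)[2]                  # noise-free projections
    x2 = (P2 @ Xh)[:2] / (P2 @ Xh)[2]

    errors = np.zeros((trials, 3))
    for t in range(trials):
        n1 = x1 + rng.normal(0.0, sigma, size=2)       # perturbed detections
        n2 = x2 + rng.normal(0.0, sigma, size=2)
        X_est = triangulate_dlt(P1, P2, n1, n2)[:3]
        errors[t] = np.abs(X_est - X_true)
    return errors.mean(axis=0)                         # average absolute error per axis
```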
We have successfully performed extensive automatic landing experiments at several sites. In order to enlarge the field of view, we usually choose a runway whose width is more than 15 m; the field of view is mainly determined by the width and length of the runway. As described previously, the two infrared cameras are located on the two sides of the runway to capture images of the flying aircraft. One of the basic principles is that the common field of view should cover the landing area of the UAV. In the actual experiments, the landing area is usually known in advance, so it is easy to make the infrared camera array cover it; to ensure accuracy, the image of the landing area should be close to the image center, especially for the last 200 m. The detection range changes with the baseline: with a baseline of 20 m, the minimum detection range is about 25 m and the maximum detection range is over 1000 m. The UAV takes off and cruises at an altitude of 200 m using the DGPS system. Once the UAV is detected and the error is acceptable, the UAV is guided to land by our ground-based vision system.

3.4. Real-Time Automatic Landing Experiments

It is important to ensure the safety and reliability of the UAV automatic landing system; therefore, verification of the real-time UAV localization accuracy is necessary. Thus, we compared the localization accuracy with DGPS measurements. The DGPS data are produced by a SPAN-CPT module, whose parameters are shown in Table 8. The SPAN-CPT is a compact, single-enclosure GNSS + INS receiver with a variety of positioning modes to ensure accuracy. The IMU components within the SPAN-CPT enclosure comprise fiber optic gyros (FOG) and micro-electromechanical system (MEMS) accelerometers. The tight coupling of the GNSS and IMU measurements delivers the most satellite observations and the most accurate, continuous solution possible. In our experiments, we chose the RT-2 module, whose horizontal position accuracy is 1 cm + 1 ppm. During landing, the localization results of the ground-based system are uploaded to the UAV control center through a wireless data link, and the received data and the current DGPS measurements are saved simultaneously by the UAV control center, whose maximum data update rate is 200 Hz. By analyzing the stored localization data after the UAV lands, the localization accuracy can be verified.
Airborne DGPS measurement data are defined in the geographic coordinate system, while the vision measurement data are defined in the ground world coordinate system. To analyze the errors between them, a conversion between the DGPS coordinates and the world coordinates is necessary. To ensure the accuracy of the coordinate conversion, three points are selected: one is the origin of the world coordinate system, and the other two are far along the runway (e.g., 200 m). Their longitude, latitude and altitude are measured by the DGPS module, and their world coordinates are measured by the electronic total station. The direction of the runway is obtained from the far points; combined with the origin coordinate, the conversion between the two coordinate systems can then be determined.
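One way to realize this conversion, under the assumption that both frames are gravity-aligned so that only a heading rotation and a translation are needed, is sketched below in Python. It assumes the DGPS longitude/latitude/altitude values have already been converted to a local metric ENU frame; the function name and interface are illustrative, and the sketch uses the origin and one far point, whereas the paper measures three points.

```python
import numpy as np

def fit_runway_alignment(origin_enu, far_enu, origin_world, far_world):
    """Align a DGPS-derived local ENU frame with the ground world frame.

    Assumes both frames share the same vertical (gravity) axis, so the
    conversion is a rotation about Z (heading) plus a translation, which is
    what the origin point and a point far along the runway determine.
    Returns (R, t) such that p_world ≈ R @ p_enu + t.
    """
    origin_enu, far_enu = np.asarray(origin_enu, float), np.asarray(far_enu, float)
    origin_world, far_world = np.asarray(origin_world, float), np.asarray(far_world, float)

    # Heading of the runway in each frame (ignore the vertical component).
    yaw_enu = np.arctan2(far_enu[1] - origin_enu[1], far_enu[0] - origin_enu[0])
    yaw_world = np.arctan2(far_world[1] - origin_world[1], far_world[0] - origin_world[0])
    dyaw = yaw_world - yaw_enu

    c, s = np.cos(dyaw), np.sin(dyaw)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    t = origin_world - R @ origin_enu
    return R, t
```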
We compare the localization results with DGPS in Figure 10a. The detection range is over 1000 m and the data generated by our system coincide with the DGPS data. The accuracy in the Z axis is the most important factor for real UAV automatic landing; in contrast, the accuracy in the X axis has the smallest influence because of the long runway, while the limited width of the runway requires a high localization accuracy in the Y axis. In this experiment, we take the DGPS data as the ground truth and evaluate the absolute errors in X, Y and Z, as shown in Figure 10e–g. The errors in X are the largest compared with Y and Z; however, the errors in X and Y gradually decrease during the landing phase. Over the last 200 m, the location errors in the X and Y coordinates drop below 5 m and 0.5 m, respectively, both of which represent a high accuracy, and over the last 100 m the localization results in X and Y are nearly the same as the DGPS results. To avoid a crash, a high-precision estimate of altitude must be guaranteed; we achieve an impressive localization result in the Z axis, with an error of less than 0.22 m during the whole landing process. The measurement precision over the whole landing process completely meets the requirements of the UAV control center.
Figure 11 shows one of the landing trajectories generated by our system, together with several landing poses of the UAV. Under the control of our system, the pose of the UAV remained steady during the whole descent. When the UAV was controlled by our ground-based system, the GPS jammer was turned on; thus, in this experiment, the UAV was controlled from 820 m away in a GPS-denied environment and landed successfully, and the trajectory is smooth and complete.

4. Conclusions

This paper described a novel infrared camera array guidance system for UAV automatic landing in GPS-denied environments, which overcomes the shortcomings of traditional GPS-based methods, such as signal blockage. After the optical imaging system is designed, a high-precision calibration method for large scenes based on an electronic total station is provided. The feasibility and accuracy of the system have been verified through real-time flight experiments without GPS, and the results show that the control distance of our system is over 1000 m and a high landing accuracy is achieved.

Supplementary Materials

The following are available online at https://www.mdpi.com/1424-8220/16/9/1393/s1.

Acknowledgments

This work is supported by the ShenZhen Science and Technology Foundation (JCYJ20160229172932237), National Natural Science Foundation of China (No. 61672429, No. 61502364, No. 61272288, No. 61231016), Northwestern Polytechnical University (NPU) New AoXiang Star (No. G2015KY0301), Fundamental Research Funds for the Central Universities (No. 3102015AX007), NPU New People and Direction (No. 13GH014604). And the authors would like to thank Bin Xiao, SiBing Wang, Rui Yu, XiWen Wang, LingYan Ran, Ting Chen and Tao Zhuo who supplied help on the algorithm design and experiments.

Author Contributions

Tao Yang and Guangpo Li designed the algorithm and wrote the source code and the manuscript together; Jing Li and Yanning Zhang contributed to the algorithm design and to the writing and revision of the paper; Xiaoqiang Zhang, Zhuoyue Zhang and Zhi Li helped with the experiments and paper revision.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shaker, M.; Smith, M.N.; Yue, S.; Duckett, T. Vision-based landing of a simulated unmanned aerial vehicle with fast reinforcement learning. In Proceedings of the 2010 International Conference on Emerging Security Technologies (EST), Canterbury, UK, 6–7 September 2010; pp. 183–188.
  2. Erginer, B.; Altug, E. Modeling and PD control of a quadrotor VTOL vehicle. In Proceedings of the 2007 IEEE Intelligent Vehicles Symposium, Istanbul, Turkey, 13–15 June 2007; pp. 894–899.
  3. Ahmed, B.; Pota, H.R. Backstepping-based landing control of a RUAV using tether incorporating flapping correction dynamics. In Proceedings of the 2008 American Control Conference, Seattle, WA, USA, 11–13 June 2008; pp. 2728–2733.
  4. Gavilan, F.; Acosta, J.; Vazquez, R. Control of the longitudinal flight dynamics of an UAV using adaptive backstepping. IFAC Proc. Vol. 2011, 44, 1892–1897. [Google Scholar] [CrossRef]
  5. Yoon, S.; Kim, Y.; Park, S. Constrained adaptive backstepping controller design for aircraft landing in wind disturbance and actuator stuck. Int. J. Aeronaut. Space Sci. 2012, 13, 74–89. [Google Scholar] [CrossRef]
  6. Ferreira, H.C.; Baptista, R.S.; Ishihara, J.Y.; Borges, G.A. Disturbance rejection in a fixed wing UAV using nonlinear H∞ state feedback. In Proceedings of the 9th IEEE International Conference on Control and Automation (ICCA), Santiago, Chile, 19–21 December 2011; pp. 386–391.
  7. Rao, D.V.; Go, T.H. Automatic landing system design using sliding mode control. Aerosp. Sci. Technol. 2014, 32, 180–187. [Google Scholar]
  8. Olivares-Méndez, M.A.; Mondragón, I.F.; Campoy, P.; Martínez, C. Fuzzy controller for uav-landing task using 3d-position visual estimation. In Proceedings of the 2010 IEEE International Conference on Fuzzy Systems (FUZZ), Barcelona, Spain, 18–23 July 2010; pp. 1–8.
  9. Liao, F.; Wang, J.L.; Poh, E.K.; Li, D. Fault-tolerant robust automatic landing control design. J. Guid. Control Dyn. 2005, 28, 854–871. [Google Scholar] [CrossRef]
  10. Gui, Y.; Guo, P.; Zhang, H.; Lei, Z.; Zhou, X.; Du, J.; Yu, Q. Airborne vision-based navigation method for uav accuracy landing using infrared lamps. J. Intell. Robot. Syst. 2013, 72, 197–218. [Google Scholar] [CrossRef]
  11. Huh, S.; Shim, D.H. A vision-based automatic landing method for fixed-wing UAVs. J. Intell. Robot. Syst. 2010, 57, 217–231. [Google Scholar] [CrossRef]
  12. Guo, P.; Li, X.; Gui, Y.; Zhou, X.; Zhang, H.; Zhang, X. Airborne vision-aided landing navigation system for fixed-wing UAV. In Proceedings of the 12th International Conference on Signal Processing (ICSP), Hangzhou, China, 26–30 October 2014; pp. 1215–1220.
  13. Ming, C.; Xiu-Xia, S.; Song, X.; Xi, L. Vision aided INS for UAV auto landing navigation using SR-UKF based on two-view homography. In Proceedings of the 2014 IEEE Chinese Guidance, Navigation and Control Conference (CGNCC), Yantai, China, 8–10 August 2014; pp. 518–522.
  14. Vladimir, T.; Jeon, D.; Kim, D.H.; Chang, C.H.; Kim, J. Experimental feasibility analysis of roi-based hough transform for real-time line tracking in auto-landing of UAV. In Proceedings of the 15th IEEE International Symposium on Object/Component/Service-Oriented Real-Time Distributed Computing Workshops (ISORCW), Shenzhen, China, 11 April 2012; pp. 130–135.
  15. Abu-Jbara, K.; Alheadary, W.; Sundaramorthi, G.; Claudel, C. A robust vision-based runway detection and tracking algorithm for automatic UAV landing. In Proceedings of the 2015 International Conference on Unmanned Aircraft Systems (ICUAS), Denver, CO, USA, 9–12 June 2015; pp. 1148–1157.
  16. Cao, Y.; Ding, M.; Zhuang, L.; Cao, Y.; Shen, S.; Wang, B. Vision-based guidance, navigation and control for Unmanned Aerial Vehicle landing. In Proceedings of the 9th IEEE International Bhurban Conference on Applied Sciences & Technology (IBCAST), Islamabad, Pakistan, 9–12 January 2012; pp. 87–91.
  17. Sereewattana, M.; Ruchanurucks, M.; Thainimit, S.; Kongkaew, S.; Siddhichai, S.; Hasegawa, S. Color marker detection with various imaging conditions and occlusion for UAV automatic landing control. In Proceedings of the 2015 Asian Conference on IEEE Defence Technology (ACDT), Hua Hin, Thailand, 23–25 April 2015; pp. 138–142.
  18. Zhuang, L.; Han, Y.; Fan, Y.; Cao, Y.; Wang, B.; Zhang, Q. Method of pose estimation for UAV landing. Chin. Opt. Lett. 2012, 10, S20401. [Google Scholar] [CrossRef]
  19. Hong, L.; Haoyu, Z.; Jiaxiong, P. Application of cubic spline in navigation for aircraft landing. J. HuaZhong Univ. Sci. Technol. (Nat. Sci. Ed.) 2006, 34, 22. [Google Scholar]
  20. Barber, B.; McLain, T.; Edwards, B. Vision-based landing of fixed-wing miniature air vehicles. J. Aerosp. Comp. Inf. Commun. 2009, 6, 207–226. [Google Scholar] [CrossRef]
  21. Lange, S.; Sunderhauf, N.; Protzel, P. A vision based onboard approach for landing and position control of an autonomous multirotor UAV in GPS-denied environments. In Proceedings of the International Conference on Advanced Robotics, Munich, Germany, 22–26 June 2009; pp. 1–6.
  22. Miller, A.; Shah, M.; Harper, D. Landing a UAV on a runway using image registration. In Proceedings of the IEEE International Conference on Robotics and Automation, Pasadena, CA, USA, 19–23 May 2008; pp. 182–187.
  23. Daquan, T.; Hongyue, Z. Vision based navigation algorithm for autonomic landing of UAV without heading & attitude sensors. In Proceedings of the Third International IEEE Conference on Signal-Image Technologies and Internet-Based System, Shanghai, China, 16–19 December 2007; pp. 972–978.
  24. Zingg, S.; Scaramuzza, D.; Weiss, S.; Siegwart, R. MAV navigation through indoor corridors using optical flow. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation (ICRA), Anchorage, AK, USA, 3–8 May 2010; pp. 3361–3368.
  25. Gautam, A.; Sujit, P.; Saripalli, S. A survey of autonomous landing techniques for UAVs. In Proceedings of the 2014 International Conference on Unmanned Aircraft Systems (ICUAS), Orlando, FL, USA, 27–30 May 2014; pp. 1210–1218.
  26. Wang, W.; Song, G.; Nonami, K.; Hirata, M.; Miyazawa, O. Autonomous control for micro-flying robot and small wireless helicopter X.R.B. In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–15 October 2006; pp. 2906–2911.
  27. Martínez, C.; Campoy, P.; Mondragón, I.; Olivares Mendez, M.A. Trinocular ground system to control UAVs. In Proceedings of the 2009 IEEE-RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 11–15 October 2009; pp. 3361–3367.
  28. Pebrianti, D.; Kendoul, F.; Azrad, S.; Wang, W.; Nonami, K. Autonomous hovering and landing of a quad-rotor micro aerial vehicle by means of on ground stereo vision system. J. Syst. Des. Dynam. 2010, 4, 269–284. [Google Scholar] [CrossRef]
  29. Abbeel, P.; Coates, A.; Ng, A.Y. Autonomous helicopter aerobatics through apprenticeship learning. Int. J. Robot. Res. 2010. [Google Scholar] [CrossRef]
  30. OPATS Laser based landing aid for Unmanned Aerial Vehicles, RUAG—Aviation Products. 2016. Available online: http://www.ruag.com/aviation (accessed on 27 July 2016).
  31. Kong, W.; Zhang, D.; Wang, X.; Xian, Z.; Zhang, J. Autonomous landing of an UAV with a ground-based actuated infrared stereo vision system. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 2963–2970.
  32. Kong, W.; Zhou, D.; Zhang, Y.; Zhang, D.; Wang, X.; Zhao, B.; Yan, C.; Shen, L.; Zhang, J. A ground-based optical system for autonomous landing of a fixed wing UAV. In Proceedings of the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, USA, 14–18 September 2014; pp. 4797–4804.
  33. Weng, J.; Cohen, P.; Herniou, M. Camera calibration with distortion models and accuracy evaluation. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 965–980. [Google Scholar] [CrossRef]
  34. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef]
  35. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
  36. Edmonds, J. Paths, trees, and flowers. Can. J. Math. 1965, 17, 449–467. [Google Scholar] [CrossRef]
Figure 1. Infrared camera array system.
Figure 2. The framework of the infrared camera array system.
Figure 3. Ground-based landing system.
Figure 4. The ability to resist interference in complex environments (650 m).
Figure 5. The comparison of the imaging at different distances.
Figure 6. The comparison of the imaging from different angles at the distance of 150 m. (a) The near infrared laser lamp: from 0 degrees to 45 degrees; (b) The strong light flashlight: from 0 degrees to 10 degrees.
Figure 7. Verification of calibration results (red circles in (c) and (d) are the real positions of the control points; the positions of the yellow crosses are calculated by re-projection based on the calibration parameters).
Figure 8. The detection results under sunlight.
Figure 9. The detection results under sunlight with smear effect.
Figure 10. Comparison of DGPS and vision data. (a) The UAV trajectories of the vision-based method and the DGPS method; (b–d) The location results in X, Y and Z coordinates; (e–g) The location errors in X, Y and Z coordinates.
Figure 11. Landing trajectory from 800 m away.
Table 1. The near infrared laser lamp parameters.
wavelength: 808 ± 5 nm
illumination distance: more than 1000 m
weight: 470 g
working voltage: DC 12 V ± 10%
maximum power consumption: 25 W
operating temperature: 0 °C to 50 °C
Table 2. The camera parameters.
sensor: CMOSIS CMV4000-3E12
maximum resolution: 2048 × 2048
maximum frame rate: 2048 × 2048 at 90 fps
interface: USB 3.0
maximum power consumption: 4.5 W
operating temperature: −20 °C to 50 °C
Table 3. The camera lens parameters.
focal length: 35 mm
F No.: 1.4
range of WD: 110 mm to ∞
maximum magnification: 0.3×
TV distortion: −0.027%
filter pitch: M46 P = 0.75
maximum compatible sensor: 1.1
Table 4. The Near-IR interference bandpass filter parameters.
useful range: 798∼820 nm
FWHM: 35 nm
tolerance: ±5 nm
peak transmission: ≥85%
Table 5. The electronic total station parameters.
field of view: 1°30′
measuring distance: 2000 m
ranging accuracy: ±(2 mm + 2 ppm)
angle measurement accuracy: ±2″
ranging time: 1.2 s
operating temperature: −20 °C to 50 °C
Table 6. The calibration accuracy analysis.
Serial Number | Reference Points X, Y, Z (m) | Localization Results X, Y, Z (m) | Errors ΔX, ΔY, ΔZ (m)
1 | 64.363, 0.012, 2.072 | 64.274811, 0.012149, 2.060729 | −0.088188, 0.000149, −0.011271
2 | 102.898, −0.068, 2.185 | 102.961128, −0.066252, 2.164158 | 0.063126, 0.001748, −0.020842
3 | 198.018, −0.141, 2.615 | 197.970352, −0.113468, 2.613832 | −0.047653, 0.027532, −0.001168
4 | 303.228, −0.324, 3.049 | 300.387817, −0.283089, 2.991707 | −2.840179, 0.040911, −0.057293
5 | 395.121, −0.567, 3.427 | 396.371521, −0.573597, 3.456094 | 1.250519, −0.006597, 0.029094
Table 7. The effect of the detection error on localization accuracy.
Distance (m) | 0.5 pixels: X, Y, Z (m) | 0.2 pixels: X, Y, Z (m) | 0.1 pixels: X, Y, Z (m)
10 | 0.0905, 0.0092, 0.0091 | 0.0367, 0.0037, 0.0037 | 0.0180, 0.0018, 0.0018
20 | 0.1173, 0.0105, 0.0104 | 0.0479, 0.0042, 0.0042 | 0.0225, 0.0021, 0.0021
30 | 0.1430, 0.0113, 0.0123 | 0.0550, 0.0047, 0.0049 | 0.0285, 0.0024, 0.0024
40 | 0.1783, 0.0128, 0.0137 | 0.0714, 0.0051, 0.0056 | 0.0339, 0.0026, 0.0027
50 | 0.2078, 0.0139, 0.0163 | 0.0810, 0.0056, 0.0060 | 0.0393, 0.0027, 0.0031
100 | 0.4099, 0.0198, 0.0288 | 0.1709, 0.0080, 0.0120 | 0.0839, 0.0039, 0.0062
150 | 0.7081, 0.0248, 0.0503 | 0.2846, 0.0102, 0.0205 | 0.1418, 0.0053, 0.0101
200 | 1.1051, 0.0305, 0.0790 | 0.4307, 0.0129, 0.0302 | 0.2099, 0.0065, 0.0155
300 | 2.0075, 0.0422, 0.1501 | 0.7960, 0.0167, 0.0591 | 0.3989, 0.0087, 0.0294
400 | 3.1424, 0.0549, 0.2423 | 1.3165, 0.0217, 0.1013 | 0.6523, 0.0105, 0.0497
Table 8. The SPAN-CPT parameters.
horizontal position accuracy (RT-2 module): 1 cm + 1 ppm
horizontal position accuracy (single point): 1.2 m
heading accuracy: 0.03°
gyroscope bias: ±20°/h
gyroscope bias stability: ±1°/h
accelerometer bias: ±50 mg
accelerometer bias stability: ±0.75 mg
speed accuracy: 0.02 m/s
weight: 2.28 kg
