Correction published on 21 August 2019, see Sensors 2019, 19(17), 3639.
Article

A Novel System for Correction of Relative Angular Displacement between Airborne Platform and UAV in Target Localization

1 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Sensors 2017, 17(3), 510; https://doi.org/10.3390/s17030510
Submission received: 30 December 2016 / Revised: 16 February 2017 / Accepted: 27 February 2017 / Published: 4 March 2017
(This article belongs to the Special Issue UAV-Based Remote Sensing)

Abstract

This paper presents a system and method for correcting the relative angular displacements between an Unmanned Aerial Vehicle (UAV) and its onboard strap-down photoelectric platform in order to improve target localization accuracy. Because these angular displacements affect the final accuracy, a measuring system attached to the platform collects texture images of the platform base bulkhead in real time. Through image registration, the displacement vector of the platform relative to the bulkhead is calculated and used to determine the angular displacements. After being decomposed and superposed on the three attitude angles of the UAV, the angular displacements reduce the coordinate transformation errors and thus improve the localization accuracy. Even this simple method can improve the localization accuracy by 14.3%.

1. Introduction

Currently, reconnaissance of enemy positions, target localization, directing and adjusting artillery fire, and other auxiliary functions remain the main UAV applications. With very low safety risk [1], an operator can remotely fly a UAV toward the target area to acquire real-time images and location information of the target, which are sent to the control center for analysis and decision making by intelligence analysts and commanders. In several recent wars, UAVs have played a key role in real-time battlefield reconnaissance, in collecting and providing intelligence, and in providing accurate target information to direct fire. In civil applications such as search and rescue [2], target localization is also an important UAV function.
The basic principle of current target localization is the R-θ (range-angle) method [3,4,5,6,7], in which the distance (R) of the target relative to the UAV is determined by a laser rangefinder and the angle (θ) is determined by a series of sensors. Based on the UAV's own location (usually expressed in an Earth coordinate system), the position of a target in the geodetic coordinate system can be obtained after a series of coordinate transformations. The common target localization process involves transformations among at least five coordinate systems: the camera coordinate system C, the UAV coordinate system B, the UAV geographic coordinate system V, the Earth-Centered Earth-Fixed (ECEF) coordinate system E (in line with the WGS-84 standard) and the geodetic coordinate system G (in line with the WGS-84 standard). The platform moves independently of the UAV, detects the target through a rotating search, and outputs the azimuth and pitch of the detected target relative to the UAV through continuous tracking [8]. These data are then combined with the INS data of the UAV and converted into the same geodetic coordinate system used by GPS.
Based on the combination of a camera and a laser range finder, this paper proposes a method to improve the accuracy of the directing angles, which in turn improves the accuracy of single-point localization in real time. As the operating frequency of the laser rangefinder in a single range-based localization task is quite low (usually at the Hz level), the system designed in this paper can be considered real-time provided that it completes its correction within one range measurement period. In addition, the system occupies little space and does not significantly increase cost. It can effectively improve the accuracy of one-shot single-station localization and does not interfere with most multi-pass, multi-station methods, so it is quite universal. Researchers have used various methods to analyze the localization error and improve localization accuracy. Some study how to improve single-aircraft single-point localization accuracy, while others use methods such as flight course planning and multi-aircraft localization. Most existing work fails to consider the deviation angle of the platform relative to the UAV; where it is considered at all, only a parallel translation is added or the error influence is analyzed, and it is seldom quantified. Redding [3] considered the influence of wind resistance, but the angular displacements caused by UAV attitude change were left out of consideration. The system in this paper can correct the angular displacements between the UAV and its platform without needing to know the cause of the angular displacement error. The method proposed by Pachter et al. [9] also needs to know the target elevation in advance. Chiang et al. [10] improved the localization accuracy by setting ground control points. Yue et al. [11] proposed a height-based steepest descent method for single-aircraft localization optimization, which, however, takes time and fails to meet the real-time requirement. While studying single-aircraft localization, some researchers have begun to explore multi-aircraft localization. Morbidi et al. [12] proposed active target tracking and joint localization based on a UAV fleet, estimating the target position with a Kalman filter. Qu et al. [13] proposed a joint localization method based on the azimuth angles among multiple UAVs. These methods, which are based on sensor data fusion, involve the complex issue of fleet route control [14,15,16] and need coordination among multiple UAVs, resulting in higher hardware cost, more task time in most cases, and greater risk in an emergency. Apart from these, methods based on Kalman filters, Recursive Least Squares (RLS) filters and nonlinear filters [3,6,17,18], and methods based on video sequences [19,20,21,22,23,24], have also been proposed to estimate the target location. Most of these use the same aircraft at different times to improve accuracy, but the optimization process requires time and cannot meet the real-time requirement. On the other hand, self-localization methods that integrate an IMU into the platform face a size restriction: a smaller IMU cannot easily meet the accuracy requirement, so a larger additional onboard IMU is needed, which increases the load.
This paper is structured as follows: first, we build the target localization model. Then the principle and working process of the system designed in this paper are introduced in detail, followed by effectiveness analysis, experiments, simulation and verification, analysis of verification results, and finally a summary.

2. Methodology

The core process of the target localization method adopted in this paper is shown in Figure 1.
Compared with most target localization models [25], the main advantage of this method is the separation of the platform coordinate system from the UAV coordinate system [26]. The added transformation matrix effectively reduces the directional error, which is more evident during high-altitude, large-slant-range reconnaissance. Take a large UAV with a service ceiling of 8 km as an example. In the vertical down-view case, the localization error caused by 1 mrad of angle error is only 8 m, but during oblique-view reconnaissance the slant range can easily reach 30 km, where the same angle error produces a localization error as large as 30 m. As shown in Figure 2, the larger the slant range from the platform to the target, the more stringent the pointing accuracy requirement. The specific localization process is as follows.

2.1. Transformation from Camera Coordinate System C to Platform Coordinate System P

The relationship between the two coordinate systems is illustrated in Figure 3.
The homogeneous coordinates of a target in the camera coordinate system are:
$$[x_c\ \ y_c\ \ z_c\ \ 1]^T = [u\ \ v\ \ f\ \ 1]^T,$$
where u and v are the target's coordinates in the image (in pixels), and f is the current focal length of the camera. Usually, when the UAV is detecting a target, the photoelectric platform locks the detected target at the Field of View (FOV) center to within a few pixels, so the target can be considered to be at the image center. When the error inside the camera coordinate system is ignored, the homogeneous coordinates of the target can be expressed as [0 0 f 1]T. The photoelectric platform carries the camera; it outputs the angles a and e between the Line of Sight (LOS) and the zero positions of the two platform axes, and measures the target distance R with a laser range finder. Since the platform uses a polar coordinate system, the coordinate transformation below is needed:
$$[x_p\ \ y_p\ \ z_p\ \ 1]^T = R \times Q_{pc}\,[0\ \ 0\ \ f\ \ 1]^T,$$
where Qpc is the conversion matrix from the camera coordinate system C to the platform coordinate system P.
$$Q_{pc} = \begin{bmatrix} \cos a & 0 & \sin a & 0 \\ 0 & 1 & 0 & 0 \\ -\sin a & 0 & \cos a & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos e & \sin e & 0 \\ 0 & -\sin e & \cos e & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$
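For readers who wish to experiment with this chain numerically, the sketch below assembles Qpc from the platform angles a and e and applies the transformation above; it is a minimal numpy illustration with hypothetical function names (rot_x, rot_y, camera_to_platform), not part of the authors' onboard software, and it assumes the standard right-handed sign convention used in the reconstructed matrices.

```python
import numpy as np

def rot_y(t):
    """4x4 homogeneous rotation about the y axis (azimuth angle a)."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s, 0],
                     [0, 1, 0, 0],
                     [-s, 0, c, 0],
                     [0, 0, 0, 1.0]])

def rot_x(t):
    """4x4 homogeneous rotation about the x axis (elevation angle e)."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0, 0],
                     [0, c, s, 0],
                     [0, -s, c, 0],
                     [0, 0, 0, 1.0]])

def camera_to_platform(a, e, f, R):
    """Apply the relation above: the locked target [0 0 f 1]^T is rotated by
    Q_pc = Ry(a) Rx(e) and scaled by the measured range R."""
    Q_pc = rot_y(a) @ rot_x(e)
    return R * (Q_pc @ np.array([0.0, 0.0, f, 1.0]))
```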

2.2. Transformation from Platform Coordinate System P to UAV Coordinate System B

This transformation process is the core of the localization accuracy improvement over other methods [27]. Considering the demand for quick disconnection, the airborne platform is usually attached to the UAV in a strap-down manner. Most platforms have a shafting structure so that the onboard imaging systems (such as cameras and IR thermal imagers) can expand the reconnaissance field through rotation. Therefore, most platforms can be divided into two parts: the base and the rotating part. The base is fixed to the UAV; the rotating part is linked to the base through shafting, putting the imaging system in motion to search for the target and locking the target on the LOS via the servo system [28]. To improve the reconnaissance imaging quality, the base of an airborne platform often needs to be fixed to the UAV through shock absorbers, which isolate part of the high-frequency vibration and enhance the stability of the platform itself, as shown in Figure 4. However, the damping element of most shock absorbers is a flexible material, so in actual flight the platform will be displaced relative to the UAV under the influence of factors such as engine vibration, wind resistance, UAV attitude change and platform motion. These displacements include monolithic translation, angular displacement and mixed displacement.
The target is usually locked at the center of the image during flight (just as you turn your head toward a target and then turn your eyes to it so that you can look at it attentively). To determine the target angle we need to know the angle of the detected target relative to the platform, as output by the platform, and the angle of the platform itself relative to the reference azimuth; the required angle can then be obtained through transformation. For example, if a stone lies on the ground east-northeast of you, you need to know both the angle of the stone relative to your body and the angle of your body relative to the East or North.
An Inertial Measurement Unit (IMU) determines the UAV attitude angles. As stated above, a high-precision IMU is relatively large. Accordingly, in most cases the IMU is installed not on the platform but in another UAV compartment through a rigid connection, owing to the limitation on platform size; otherwise, the platform would be too large to install.
Through adjustment of the airborne IMU angle during initial assembly, the reference 0° direction of the platform is considered consistent with the 0° direction of the UAV, but once the platform is displaced, the 0° directions of the platform may deflect. In this case, the 0° lines of the platform will move and/or rotate. Among all types of motion, only the translation along the pointing line has a negligible influence on localization accuracy; non-parallel translation and rotation in any other direction, including rotation around the zero line, have a great influence on the localization result. When the deflection angle is non-zero, the solution based on the pitch and azimuth angles output by the platform is actually inaccurate [29]; that is, an error exists in the obtained direction of the target relative to the reference direction. For example, you may think your eyes are pointing 30° to the left in front of your body when the angle is actually 29°; in that case you need an error compensation of 1° to correct it. The transformation from the platform coordinate system P to the UAV coordinate system B is shown in Figure 5.
To determine the transformation relation, the direct relative angular displacements Δ1 and Δ2 between the platform and the UAV must be determined first; the concrete method is given in the system description below. Since this is a transformation from the platform to the UAV, converting the measured angular displacements directly into the UAV coordinate system effectively reduces the workload. The angular displacements can then be expressed as Δα, Δβ and Δγ, and the transformation relation as:
$$[x_b\ \ y_b\ \ z_b\ \ 1]^T = Q_{bp}\,[x_p\ \ y_p\ \ z_p\ \ 1]^T = R \times Q_{bp} Q_{pc}\,[0\ \ 0\ \ f\ \ 1]^T,$$
$$Q_{bp} = \begin{bmatrix} \cos\Delta\alpha & 0 & \sin\Delta\alpha & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\Delta\alpha & 0 & \cos\Delta\alpha & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\Delta\beta & \sin\Delta\beta & 0 \\ 0 & -\sin\Delta\beta & \cos\Delta\beta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\Delta\gamma & \sin\Delta\gamma & 0 & 0 \\ -\sin\Delta\gamma & \cos\Delta\gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix},$$
where Qbp is the conversion matrix from the platform coordinate system P to the UAV coordinate system B.

2.3. Transformation from UAV Coordinate System B to UAV Geographic Coordinate System V

The relationship between UAV coordinate system B and UAV geographic coordinate system V is illustrated in Figure 6.
The UAV geographic coordinate system V in this paper is defined as North-East-Down (NED); its three axes point toward due north, due east and the Earth's center, respectively. The transformation relation in this process is:
$$[x_v\ \ y_v\ \ z_v\ \ 1]^T = Q_{vb}\,[x_b\ \ y_b\ \ z_b\ \ 1]^T,$$
$$Q_{vb} = \begin{bmatrix} \cos\gamma & \sin\gamma & 0 & 0 \\ -\sin\gamma & \cos\gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\beta & \sin\beta & 0 \\ 0 & -\sin\beta & \cos\beta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\alpha & 0 & \sin\alpha & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\alpha & 0 & \cos\alpha & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix},$$
where Qvb is the conversion matrix from the UAV coordinate system B to UAV geographic coordinate system V.

2.4. Transformation from UAV Geographic Coordinate System V to ECEF System E

The relationship between UAV geographic coordinate system V and ECEF system E is illustrated in Figure 7.
In the ECEF system, the origin is the center of mass of the Earth, the axis ze points along the Earth's spin axis toward the North Pole, the axis xe points to the intersection of the prime meridian and the equator, and the three axes xe, ye and ze constitute a right-handed coordinate system. The transformation relation in this process is:
$$[x_e\ \ y_e\ \ z_e\ \ 1]^T = Q_{ev}\,[x_v\ \ y_v\ \ z_v\ \ 1]^T,$$
$$Q_{ev} = \begin{bmatrix} \cos\gamma & \sin\gamma & 0 & 0 \\ -\sin\gamma & \cos\gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & N+h \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\alpha & 0 & \sin\alpha & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\alpha & 0 & \cos\alpha & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix},$$
where Qev is the conversion matrix from the UAV geographic coordinate system V to the ECEF system E, and N is the radius of curvature in the prime vertical of the Earth. The semi-major axis of the ellipsoid is a = 6,378,137 m and the semi-minor axis is b = 6,356,752 m. The first eccentricity of the spheroid is:
$$e = \frac{\sqrt{a^2 - b^2}}{a},$$
and the second eccentricity of the spheroid is:
$$e' = \frac{\sqrt{a^2 - b^2}}{b},$$
By combining them with the current latitude, we can obtain:
$$N = \frac{a}{\sqrt{1 - e^2 \sin^2 M}}.$$
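As a quick numerical check of these ellipsoid quantities, the hedged snippet below evaluates the eccentricities and the prime vertical radius N with the WGS-84 constants quoted above (a plain numpy sketch; the variable names are ours, not the authors').

```python
import numpy as np

a = 6_378_137.0   # WGS-84 semi-major axis, m
b = 6_356_752.0   # semi-minor axis, m (rounded as in the text)

e1 = np.sqrt(a**2 - b**2) / a   # first eccentricity
e2 = np.sqrt(a**2 - b**2) / b   # second eccentricity

def prime_vertical_radius(lat_deg):
    """Radius of curvature in the prime vertical: N = a / sqrt(1 - e^2 sin^2 M)."""
    M = np.radians(lat_deg)
    return a / np.sqrt(1.0 - e1**2 * np.sin(M)**2)

# At the simulation latitude of 43.54 deg N this gives roughly 6.388e6 m.
print(e1, e2, prime_vertical_radius(43.54))
```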

2.5. Transformation from ECEF Frame E to Geodetic Coordinate System G

The relationship between the ECEF frame E and geodetic coordinate system G is shown in Figure 8.
The geodetic coordinate system is an ellipsoidal coordinate system. Its origin is also the center of mass of the Earth, the axis ze points along the Earth's spin axis toward the North Pole, the axis xe points to the intersection of the prime meridian and the equator, and the three axes xe, ye and ze constitute a right-handed coordinate system. The transformation relation in this process is:
$$[x_g\ \ y_g\ \ z_g\ \ 1]^T = Q_{ge}\,[x_e\ \ y_e\ \ z_e\ \ 1]^T + [0\ \ 0\ \ N e^2 \sin M\ \ 0]^T,$$
$$Q_{ge} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos M & \sin M & 0 \\ 0 & -\sin M & \cos M & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos L & 0 & \sin L & 0 \\ 0 & 1 & 0 & 0 \\ -\sin L & 0 & \cos L & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & N+h \\ 0 & 0 & 0 & 1 \end{bmatrix},$$
where Qge is the conversion matrix from the ECEF system E to the geodetic coordinate system G.
A simpler calculation method is to obtain the ECEF coordinates first and then calculate the geodetic coordinates directly using the following equations:
$$\begin{cases} M = \arctan\!\left(\dfrac{z_e + b\,e'^2 \sin^3 U}{\sqrt{x_e^2 + y_e^2} - a\,e^2 \cos^3 U}\right) \\[2ex] L = \arctan\!\left(\dfrac{y_e}{x_e}\right) \\[2ex] H = \dfrac{\sqrt{x_e^2 + y_e^2}}{\cos M} - \dfrac{a}{\sqrt{1 - e^2 \sin^2 M}} \end{cases},$$
$$U = \arctan\!\left(\frac{a\,z_e}{b\sqrt{x_e^2 + y_e^2}}\right).$$
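The two equations above are the familiar closed-form (Bowring-style) ECEF-to-geodetic conversion; a compact numpy sketch of the same computation is given below, with the caveat that the function name and argument layout are illustrative rather than taken from the authors' code.

```python
import numpy as np

a, b = 6_378_137.0, 6_356_752.0
e2  = (a**2 - b**2) / a**2    # first eccentricity squared
ep2 = (a**2 - b**2) / b**2    # second eccentricity squared

def ecef_to_geodetic(xe, ye, ze):
    """ECEF coordinates -> geodetic latitude M, longitude L, height H,
    following the closed-form equations of Section 2.5."""
    p = np.hypot(xe, ye)
    U = np.arctan2(a * ze, b * p)                 # auxiliary angle
    M = np.arctan2(ze + b * ep2 * np.sin(U)**3,
                   p - a * e2 * np.cos(U)**3)     # geodetic latitude
    L = np.arctan2(ye, xe)                        # longitude
    N = a / np.sqrt(1.0 - e2 * np.sin(M)**2)
    H = p / np.cos(M) - N                         # height above the ellipsoid
    return np.degrees(M), np.degrees(L), H
```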
Thus, the target position (M, L, H) in the geodetic coordinate system G can be obtained. During the transformation, different kinds of errors are possible, which are summarized in Table 1.
These errors have been studied by many scholars [18,30]. This paper only discusses the angular displacements between the platform and the UAV, while the other errors are handled in a general way in the subsequent simulation.

3. System Composition and Working Principle

In this paper, the method for correcting the relative angular displacement between the platform and the carrier UAV is to separate the platform coordinate system from the UAV coordinate system by adding a transformation matrix, thereby reducing the angular displacement arising from inconsistent deformation of the shock absorbers. Decomposing the angular displacements and superposing them on the attitude angles of the UAV reduces the error in the coordinate conversion and thus improves the localization accuracy [6].
To reduce the hardware requirements on the UAV, this paper assumes that the calculation is carried out by a command station on the ground. The aircraft only needs to transmit its own position and the measured target angles, and the system designed in this paper is only used to measure the angular displacement between the platform and the UAV. The localization process (including the coordinate transformations) is carried out on the ground using the data sent by the UAV. The advantage is twofold: the hardware consumption on the UAV is much smaller, so the payload weight can be reduced, and computers on the ground can achieve higher accuracy and faster speed.

3.1. System Composition

A combination of image registration and coordinate transformation is used to obtain the deflection angle of the platform relative to the UAV. The schematic diagram of the system is shown in Figure 9. For convenience of reading, only the front and rear bulkheads are shown, while the left and right bulkheads are omitted.
The system for correcting the relative angular displacements between the UAV and its onboard platform includes a cruciform main frame, movable probes in the frame, an LED lighting system and a CMOS high-speed imaging system on each probe, a position measurement system, and an information processing system on the central PCB. The cruciform frame is fixed to the top of the airborne platform base through a rigid connection, and at its center is a PCB for processing information in real time.
All probes lie in the same plane, and the outer end of each probe carries an LED illumination system and a CMOS imaging system. Since the bulkhead area is semi-enclosed and dimly lit, the LED is used to illuminate the bulkhead area at which each probe is aimed. The imaging system performs real-time, high-frequency imaging of the bulkhead area.
The position measuring system measures the probe position in each arm and sends the position information to the information processing system, which calculates the total length of each arm from the probe position information and, at the same time, registers and compares the series of bulkhead images obtained by the imaging system against the initial image texture. The displacement vectors in the registered images are then obtained by comparing the centers of the two images, which have been encoded in a special way. The displacement of the probe relative to its initial position in the plane of the bulkhead is obtained using a scale factor calculated as described below.
Through comprehensive analysis, the angular displacement of the cruciform frame relative to the UAV can be determined. Because the frame and the platform are rigidly connected, this angular displacement can be regarded as the angular displacement of the platform relative to the UAV. In the subsequent coordinate conversion, the platform coordinate system P is separated from the UAV coordinate system B, and a rotation matrix of the platform relative to the UAV is added. The coefficients of this matrix correct the directional error of the target, thus improving the accuracy of target localization.

3.2. The Specific Work Process

3.2.1. Step 1: Calibrate the Initial Position of the Probes

The pattern is pre-printed on the four bulkheads of the payload capsule. After the zero installation line of the airborne platform is adjusted to coincide with the zero attitude angle of the UAV, we fix the platform to the UAV and lift or lower it to its actual working position. The center of the frame is made to coincide vertically with the geometric center of the polygon formed by the connection lines of the shock absorbers, and the cruciform frame is then fixed to the platform base. As the IMU and the platform have the same zero-line direction, it can be considered that only a parallel translation exists between the platform coordinate system P and the UAV coordinate system B at this time. This translation can be measured on the ground through side projection, but it can be neglected because the IMU is generally installed close to the platform, so its impact on the final localization result is small (on the order of centimetres). Each probe is adjusted to be close to its side of the bulkhead while keeping a small clearance. The distance between the front and rear probes is denoted l12, and the distance between the left and right probes l34. The four illumination systems are then lit to illuminate the bulkhead walls facing the probes. The four CMOS imaging systems collect the image texture of the bulkhead walls and store it in the flash memory on the PCB; the four images captured at this time are referred to as reference images. Viewed from above along the flight direction, the probes are defined as the front, rear, left and right probes. Taking the center of the reference image of each bulkhead wall as the origin, the vertical direction as the y axis and the horizontal direction as the x axis, four 2D rectangular coordinate systems are set up in the planes of the bulkhead walls, defined as x1O1y1, x2O2y2, x3O3y3 and x4O4y4. The significance of these bulkhead coordinate systems is that the transformation matrix is obtained from them by differential calculation; they do not appear in the subsequent coordinate transformations.

3.2.2. Step 2: Real Time Acquisition and Analysis of Displacement

During flight, due to turning, bumps, air resistance and other factors, the platform and the cruciform frame fixed on it are displaced relative to the UAV. Each CMOS imaging system photographs the corresponding bulkhead wall at high frequency. The DSP chip in the information processing system matches and compares the texture of the current image with that of the reference image and gives the displacement vector (in pixels) of the current image center relative to its original position. We then obtain the displacement vector of every probe projection in the wall plane relative to its own reference origin through conversion by a factor λ. By combining the obtained displacement vector with the length of the probe arm, the angular displacements of the frame relative to the UAV can be determined; they include the displacements of the pitch, roll and azimuth angles.

3.2.3. Step 3: Error Correction

The angular displacement of the frame relative to the UAV, as obtained in Step 2, is transmitted to the receiver on the ground along with other data. The transformation matrix Qbp is generated, and error correction is carried out during the localization process.

4. Validity Analysis

4.1. The Shock Absorber

The shock absorber used is shown in Figure 10; the black part is made from rubber. Figure 11 shows side and top views of the displacement of the shock absorber. The radius of the model shock absorber is 1.25 cm. Its deformation in a plane will not exceed its radius; otherwise the shock absorber would be damaged. Here we take a limit value of 1 cm. As the platform is fixed to four shock absorbers, the final angular displacement will not be large. The side length of the square mounting structure is L = 25 cm. After projection in the plane, with |X1| ≤ 1 cm and |X2| ≤ 1 cm, the limit value of the angular displacement is:
$$\theta_{\max} = \arctan\frac{|X_1| + |X_2|}{L} \approx 4.57°.$$
Of course, this is a very extreme situation. With the four shock absorbers working together, no single shock absorber will deform to such a large extent.
We now verify the deformation and angular displacement experimentally. Figure 12 shows the change in the length of the absorber, comparing the length before and after the platform is placed on an inclined plane; a deformation is indeed present. A change in length of 0.56 mm can be measured when the inclination angle of the plane is merely 5°. With the 25 cm baseline defined above, this corresponds to an angular displacement of 0.1283°. It is easy to deduce that an angular displacement of 0.1° is common during flight.
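As a sanity check on these numbers, the short script below reproduces both the extreme-case limit and the inclined-plane measurement (assuming the 25 cm baseline stated above; plain Python arithmetic, not part of the measurement system).

```python
import numpy as np

L = 0.25   # side length of the square mounting structure, m

# Extreme case: two opposite shock absorbers displaced by the 1 cm limit each
x1, x2 = 0.01, 0.01
theta_max = np.degrees(np.arctan((abs(x1) + abs(x2)) / L))   # ~4.57 deg

# Inclined-plane test: 0.56 mm length change over the same 25 cm baseline
theta_test = np.degrees(np.arctan(0.56e-3 / L))              # ~0.128 deg

print(theta_max, theta_test)
```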

4.2. Image Registration

In practical operation, the image taken by the CMOS imaging system is sent to the information processing system. A DSP chip of the C64XX series [31] and the SIFT algorithm [32] are used to calculate the pixel shift between the reference image and the real-time image. Then, according to the distance between the imaging system and the bulkhead, the factor λ between pixels and actual distance is calculated.
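For illustration, a desktop equivalent of this registration step can be sketched with OpenCV's SIFT implementation (the paper runs the algorithm on a C64XX DSP; the function below is a hypothetical stand-in and assumes an OpenCV build with SIFT support):

```python
import cv2
import numpy as np

def pixel_shift(ref_img, cur_img, ratio=0.7):
    """Estimate the (dx, dy) pixel shift of the current bulkhead image
    relative to the reference image via SIFT feature matching."""
    sift = cv2.SIFT_create()
    kp_r, des_r = sift.detectAndCompute(ref_img, None)
    kp_c, des_c = sift.detectAndCompute(cur_img, None)
    matches = cv2.BFMatcher().knnMatch(des_r, des_c, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    src = np.float32([kp_r[m.queryIdx].pt for m in good])
    dst = np.float32([kp_c[m.trainIdx].pt for m in good])
    return np.median(dst - src, axis=0)   # robust shift estimate, in pixels
```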
Since we adopt a pre-designed pattern, the size and kind of pattern on the bulkhead can be controlled. Given that the deformation of the shock absorber is not large, a pattern of 5 cm × 5 cm is sufficient. As the lens is close to the bulkhead, the distortion near the edge of the image caused by perspective projection is large, so a small area in the center of the image is selected as the effective region to match, which improves the registration speed and accuracy. The pattern is pasted or printed on the bulkhead area facing each probe. We choose patterns with simple features, such as common geometric shapes and stripes of differing directions (see Figure 13), to improve the efficiency of registration.
The pattern also ensures that every small area has a unique registration result and is not mistakenly registered to another area (for example, an area containing both acute and right angles would not be matched to an area containing both rounded and right angles).

4.3. The Transformation Matrix

The offset obtained by image registration is given in pixels; a scale factor λ is needed to convert it to the actual displacement m on the bulkhead. A schematic diagram of the pixel and actual displacement is shown in Figure 14. The conversion formula is:
$$\frac{l}{m} = \frac{f}{d},$$
where l is the distance on the CMOS sensor, m is the actual displacement on the bulkhead, f is the focal length, and d is the distance from the lens on the probe to the bulkhead.
Take a pixel size of 5 μm as an example. If the focal length of the lens is 10 mm and the lens is 50 mm away from the wall, then an offset of one pixel corresponds to a displacement of 25 μm. The actual displacement is therefore:
$$m = 25\ \mu\mathrm{m} \times n,$$
where n is the number of pixels offset.
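The pixel-to-displacement conversion can be written as a one-line helper; the default values below simply restate the worked example in the text (5 μm pixel, 10 mm focal length, 50 mm lens-to-wall distance) and are not prescriptive:

```python
def pixel_to_displacement(n_pixels, pixel_size=5e-6, f=10e-3, d=50e-3):
    """Convert a registration offset in pixels to the displacement m on the
    bulkhead using l/m = f/d, i.e. m = n * pixel_size * d / f (metres)."""
    return n_pixels * pixel_size * d / f

# One pixel -> 25 um; a 4-pixel registration accuracy -> 0.1 mm.
print(pixel_to_displacement(1), pixel_to_displacement(4))
```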
Subsequent image offset calculations can continue to use this scale factor. As long as the accuracy of image registration is 4 pixels, a displacement of 0.1 mm can be measured. The initially captured image is saved as a reference image, with its image center as the origin of the coordinates. After another image is taken with displacement, the two images are registered and an offset in the image is apparent; this means the lens has moved relative to its original position. A schematic diagram is shown in Figure 15.
When the imaging system moves along the direction of the probe, the real-time image is not at the same scale as the reference image. The registration process adjusts the image through zooming and rotation, and the final result is referred to the reference image according to the principle of the Scale Invariant Feature Transform (SIFT). Extracting the shape features and scale in the moving image as invariants ensures that the pixel-to-real-distance scaling relationship remains valid [33].
After obtaining the displacement vector of the probe projection in the plane of the bulkhead, a differential calculation is carried out to obtain the compensation matrix for correcting the error.
As defined above, the arm length between the front and rear probes is l12, and the arm length between the left and right probes is l34. The schematic for calculating the angular displacement is shown in Figure 16. The pitch angle error is:
$$\Delta\alpha = \arctan\frac{x_1 - x_2}{l_{12}}.$$
Using the same method, we can calculate the roll angle error:
$$\Delta\beta = \arctan\frac{x_3 - x_4}{l_{34}},$$
and yaw angle error:
$$\Delta\gamma = \arctan\frac{y_1 - y_2}{l_{12}}.$$
The transformation matrix used to compensate the offset angular displacement is then obtained:
$$Q_{bp} = \begin{bmatrix} \cos\Delta\alpha & 0 & \sin\Delta\alpha & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\Delta\alpha & 0 & \cos\Delta\alpha & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\Delta\beta & \sin\Delta\beta & 0 \\ 0 & -\sin\Delta\beta & \cos\Delta\beta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\Delta\gamma & \sin\Delta\gamma & 0 & 0 \\ -\sin\Delta\gamma & \cos\Delta\gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$
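Putting the pieces of this subsection together, a compact sketch of the angle computation and the resulting compensation matrix might look as follows (numpy, with hypothetical function names; only the 3×3 rotational part of Qbp is formed):

```python
import numpy as np

def displacement_to_angles(p1, p2, p3, p4, l12, l34):
    """Convert the probe displacement vectors (x, y), measured on the front (1),
    rear (2), left (3) and right (4) bulkheads, into the pitch, roll and yaw
    angular displacements defined above."""
    d_alpha = np.arctan((p1[0] - p2[0]) / l12)   # pitch
    d_beta  = np.arctan((p3[0] - p4[0]) / l34)   # roll
    d_gamma = np.arctan((p1[1] - p2[1]) / l12)   # yaw
    return d_alpha, d_beta, d_gamma

def q_bp(d_alpha, d_beta, d_gamma):
    """Compensation rotation Q_bp = Ry(d_alpha) Rx(d_beta) Rz(d_gamma)."""
    ca, sa = np.cos(d_alpha), np.sin(d_alpha)
    cb, sb = np.cos(d_beta), np.sin(d_beta)
    cg, sg = np.cos(d_gamma), np.sin(d_gamma)
    Ry = np.array([[ca, 0, sa], [0, 1, 0], [-sa, 0, ca]])
    Rx = np.array([[1, 0, 0], [0, cb, sb], [0, -sb, cb]])
    Rz = np.array([[cg, sg, 0], [-sg, cg, 0], [0, 0, 1]])
    return Ry @ Rx @ Rz
```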

5. Experiments

5.1. The System

Limited by the experimental conditions and funding, we verified the system through a laboratory simulation of an actual flight. The platform was fixed on the swing table shown in Figure 17 to simulate attitude changes during flight.
We used a multi-direction measuring turntable to measure the angles under various attitudes. After the vertical axis of the platform is tilted at various small angles with respect to the horizontal plane, the error due to the deformation of the shock absorbers can be measured, because the center of gravity of the platform is far from the base in the vertical direction. The angular displacement can reach 0.1° in the vertical direction, while an upper limit of 0.05° can be set for the pitch angle in the horizontal direction because the tangential deformation is small. These parameters are set as the original errors without optimization. As described above, the result of the image registration determines the measurement precision of the probe displacement vector; at the same time, the registration time should be as short as possible to meet the real-time requirement [34]. In this paper, owing to the use of a simple pattern, the registration algorithm was verified to take on the order of 100 ms. Since the laser range finder operates at only a few Hz, the solution can be considered real-time. By using the pre-printed pattern, the image registration accuracy reaches 2 pixels, which, following the analysis above, corresponds to a displacement of 0.05 mm. That means we can measure an angular error of 0.01146°, i.e., a measurement accuracy of about 42″. Compared with the original 0.1° error, the accuracy is greatly improved.

5.2. Simulation

A simulation of the localization process was carried out to prove that the system can improve the localization accuracy effectively. We used the Monte Carlo method to simulate and analyze the target localization accuracy with a simplified model. Taking the actual localization process into account, the parameters measured by the other onboard sensors also contain errors, so the simulation retains these errors except for the reference value. The parameters of each item are shown in Table 2. The errors were generated from a standard normal distribution.
At first, the angular displacement errors presented in this paper were not considered. The Monte Carlo method was used to generate 500 sets of localization parameters at random; the localization results are shown in Table 3 and Figure 18. The reference point was obtained with zero angular displacement (errors set to 0°). To reduce the effect of randomness, we generated 500 sets of data and took the Root Mean Square Errors (RMSE) as the final result. Note that the processing was done on a ground computer, because the onboard system only needs to output the angular displacement, which conforms to the actual operating environment of the whole UAV system. As mentioned earlier, using the computer on the ground is faster and more accurate.
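The structure of such a Monte Carlo run can be illustrated with a deliberately simplified model (a toy local-frame projection standing in for the full coordinate chain of Section 2, with an error budget loosely following Table 2; this is not the authors' simulation code):

```python
import numpy as np

rng = np.random.default_rng(0)
N_RUNS = 500

def locate(R, az, el):
    """Toy single-point localization: project the slant range into a local frame."""
    return np.array([R * np.cos(el) * np.sin(az),
                     R * np.cos(el) * np.cos(az),
                     R * np.sin(el)])

# Reference geometry: 12,000 m slant range, -40 deg elevation (cf. Table 2)
R0, az0, el0 = 12_000.0, 0.0, np.radians(-40.0)
truth = locate(R0, az0, el0)

# 1-sigma errors: 10 m range noise plus an uncorrected 0.1 deg angular displacement
sigma_R, sigma_ang = 10.0, np.radians(0.1)

err = np.array([locate(R0 + rng.normal(0, sigma_R),
                       az0 + rng.normal(0, sigma_ang),
                       el0 + rng.normal(0, sigma_ang)) - truth
                for _ in range(N_RUNS)])
rmse = np.sqrt((err**2).mean(axis=0))   # per-axis RMSE, metres
print(rmse)
```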
When the angular displacement errors between the platform and the carrier shown in Table 4 are taken into account, the accuracy decreases significantly with the other parameters unchanged. The results are shown in Table 5 and Figure 19.
This is the case for a large angle to the horizontal direction; the angular displacement has a greater impact when the angle is smaller [3]. The influence of the angular displacement on the final localization results can be understood by comparison. When the UAV hovers above the target to reconnoiter it, the roll angle of the UAV is much larger. In that situation, differing degrees of deformation of the shock absorbers greatly increase the inconsistency of the four distances, and the center of gravity of the platform is offset from its geometric center; as a result, the angular displacement would be much greater than 0.1°. We adopted a conservative value in the simulation in this paper. After correction of the relative angular displacements between the UAV and its platform, we obtain a series of more accurate localization results. As shown above, we can measure an error of 0.0115°, so we set the angle errors to 0.0115°, as shown in Table 6.
The results are shown in Table 7 and Figure 20.
Significant improvements can be seen by comparing the results. The platform was assumed to have an elevation angle of −40° in the simulation above (i.e., the elevation angle of the platform LOS relative to the target). The influence of the angular displacement on the final results can be seen by comparison. When the UAV reconnoiters a target at a small elevation angle (0° being the horizontal plane), the target is almost directly in front of the UAV; in this case the directional (angular) errors affect the final localization accuracy more significantly.
We then carried out another simulation to analyze the effect of the system at different angles. We set the elevation angle from −60° to −20° as shown in Table 8 (0° being the horizontal plane), with the remaining parameters the same as in Table 2. After obtaining the results without correction, we compared them with the corrected results. The improvement is computed as:
$$I = \frac{|R - r|}{R},$$
where I represents the improvement, R represents the result without correction, and r represents the corrected result.
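For reference, applying this formula to the latitude RMSEs of Tables 5 and 7 gives roughly a 28% reduction (a simple arithmetic check, not an additional result of the paper):

```python
def improvement(rmse_uncorrected, rmse_corrected):
    """Relative improvement I = |R - r| / R."""
    return abs(rmse_uncorrected - rmse_corrected) / rmse_uncorrected

# Latitude RMSE without correction (Table 5) vs. with correction (Table 7)
print(improvement(0.000184640982639463, 0.000132910371952061))   # ~0.28
```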
Figure 21 shows the contribution of the corrected angular displacement errors between the platform and the carrier UAV to the final latitude errors (RMSE). The vertical axis represents the final latitude errors in units of 10⁻⁵°.
Figure 22 shows what percentage of the final latitude errors the angular displacement errors account for.

6. Discussion

We can conclude from Figures 21 and 22 that the effectiveness declines as the magnitude of the elevation angle decreases, which is consistent with Figure 2. The percentage of the corrected errors in the final errors declines as the influence of the other directing-angle errors on the final result increases. Although the final localization accuracy declines significantly overall, we still achieve an accuracy improvement of 14.34%, as shown in Table 8. How to improve the overall accuracy at small elevation angles should be considered in the future.
The simulation results show that the method proposed in this paper can effectively reduce the influence of angular displacement on the final localization results. However, the exploration must continue, and improving the accuracy of the system remains the most important task. The design of the system still has much room for improvement.
Firstly, in the displacement measurement of Equation (17), the measurement resolution m can be made smaller by reducing the distance d between the probe and the bulkhead wall and by increasing the focal length f, because the pixel size l is limited by the CMOS sensor and is unlikely to improve in the short term. However, reducing d makes it easy for the probe to run into the bulkhead wall and be damaged, and increasing f sharply increases the cost of the lens. The specific control of the d/f ratio should be studied in practice; it must not be reduced blindly.
Secondly, the registration accuracy of the system is currently about 2 pixels. If an accuracy of one pixel can be achieved, the measurement accuracy can be doubled. However, to meet the real-time requirement, the registration algorithm cannot be too complex; otherwise the computation time will be too long, the calculation burden will increase, and data-flow alternation errors may even occur. Taking a 2 Hz laser range finder as an example, the system needs to keep the pattern registration time within 500 ms, and preferably within 200 ms considering clock-signal synchronization, data transmission and other issues. Much effort is still needed to design a better and faster algorithm for this system, or to find the best pre-designed pattern, in order to improve the registration speed and accuracy.

7. Conclusions

The positive effect of the system lies in the fact that, through measurement and calculation, a real-time angular displacement is obtained and sent to the receiver, where it serves as an error compensation term superposed on the current UAV attitude angles to obtain a more accurate LOS direction and thus improve the localization accuracy. The system has small demands on space and hardware resources; the whole structure is compact and low-cost, and can be installed in various configurations. In terms of chip resources, the software has a small calculation load, high real-time performance and effective correction. The earlier calibration and alignment can be completed in tens of minutes without adding too much workload. Through image registration, the displacement in pixels is obtained, proportionally converted into an actual displacement, and then used to calculate the angular displacement error between the platform and the UAV. With this system, real-time measurement can be performed in flight to improve the single-aircraft single-point localization accuracy, laying a good foundation for other improvement methods.

Acknowledgments

We thank the Academic Editor for his careful revision of the language and grammatical structure of this article.

Author Contributions

Chenglong Liu, Jinghong Liu and Yueming Song conceived and designed the experiments; Chenglong Liu and Huaidan Liang performed the experiments; Jinghong Liu and Yueming Song contributed analysis tools; Chenglong Liu and Huaidan Liang wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. United States Government Office of the Secretary of Defense. Unmanned Aerial Vehicles Roadmap 2002–2027; Office of the Secretary of Defense—Unmanned Aircraft & Drones: Washington, DC, USA, 2002.
  2. Sun, J.; Li, B.; Jiang, Y.; Wen, C. A Camera-Based Target Detection and Positioning UAV System for Search and Rescue (SAR) Purposes. Sensors 2016, 16, 1778. [Google Scholar] [CrossRef] [PubMed]
  3. Barber, D.B.; Redding, J.D.; McLain, T.W.; Beard, R.W.; Taylor, C.N. Vision-based Target Geo-location using a Fixed-wing Miniature Air Vehicle. J. Intell. Robot. Syst. 2006, 47, 361–382. [Google Scholar] [CrossRef]
  4. Dobrokhodov, V.N.; Kaminer, I.I.; Jones, K.D.; Ghabcheloo, R. Vision-based tracking and motion estimation for moving targets using small uavs. In Proceedings of the 2006 American Control Conference, Minneapolis, MN, USA, 14–16 June 2006.
  5. Monda, M.J.; Woolsey, C.A.; Reddy, C.K. Ground target localization and tracking in a riverine environment from a UAV with a gimbaled camera. In Proceedings of the AIAA Guidance, Navigation and Control Conference, Hilton Head, SC, USA, 18–21 August 2007; pp. 6747–6750.
  6. Redding, J.; McLain, T.W.; Beard, R.W.; Taylor, C. Vision-based target localization from a fixed-wing miniature air vehicle. In Proceedings of the 2006 American Control Conference, Minneapolis, MN, USA, 14–16 June 2006; pp. 2862–2867.
  7. Whang, H.; Dobrokhodov, V.N.; Kaminer, I.I.; Jones, K.D. On vision-based target tracking and range estimation for small UAVs. In Proceedings of the AIAA Guidance, Navigation and Control Conference, San Francisco, CA, USA, 15–18 August 2005.
  8. Ma, Y.; Soatto, S.; Kosecka, J.; Sastry, S.S. An Invitation to 3-D Vision from Images to Geometric Models; Springer: New York, NY, USA, 2012. [Google Scholar]
  9. Pachter, M.; Ceccarelli, N.; Chandler, P.R. Vision-Based Target Geolocation Using Micro Air Vehicles. J. Guid. Control Dyn. 2008, 31, 597–615. [Google Scholar] [CrossRef]
  10. Chiang, K.-W.; Tsai, M.-L.; Chu, C.-H. The Development of an UAV Borne Direct Georeferenced Photogrammetric Platform for Ground Control Point Free Applications. Sensors 2012, 12, 9161–9180. [Google Scholar] [CrossRef] [PubMed]
  11. Yue, L.; Yang, C.Q.; Sheng, X.; Xi, H.Z. A fast target localization method with multi-point observation for a single UAV. In Proceedings of the 2016 28th Chinese Control and Decision Conference (CCDC), Yinchuan, China, 28–30 May 2016.
  12. Morbidi, F.; Mariottini, G.L. Active target tracking and cooperative localization for teams of aerial vehicles. IEEE Trans. Control Syst. Technol. 2013, 21, 1694–1707. [Google Scholar] [CrossRef]
  13. Qu, Y.; Wu, J.; Zhang, Y. Cooperative localization based on the azimuth angles among multiple UAVs. In Proceedings of the 2013 International Conference on Unmanned Aircraft Systems (ICUAS), Atlanta, GA, USA, 28–31 May 2013; pp. 818–823.
  14. Sarunic, P.W.; Evans, R.J. Trajectory control of autonomous fixed-wing aircraft performing multiple target passive detection and tracking. In Proceedings of the 2010 Sixth International Conference on Intelligent Sensors, Networks and Information Processing (ISSNIP), Brisbane, Australia, 7–10 December 2010; pp. 169–174.
  15. Rysdyk, R. UAV path following for constant line-of-sight. In Proceedings of the 2nd AIAA Unmanned Unlimited Systems, Technologies and Operations Aerospace, Land and Sea Conference, San Diego, CA, USA, September 2003.
  16. Frew, E.; Rock, S. Trajectory Generation for Monocular-Vision Based Tracking of a Constant-Velocity Target. In Proceedings of the 2003 IEEE International Conference on Robotics and Automation, Taipei, Taiwan, 14–19 September 2003.
  17. Grewal, M.S.; Henderson, V.D.; Miyasako, R.S. Application of Kalman filtering to the calibration and alignment of inertial navigation systems. IEEE Trans. Autom. Control. 1991, 36, 4–12. [Google Scholar] [CrossRef]
  18. Dmitriyev, S.P.; Stepanov, O.A.; Shepel, S.V. Nonlinear filter methods application in INS alignment. IEEE Trans. Aerosp. Electron. Syst. 1997, 33, 260–271. [Google Scholar] [CrossRef]
  19. Han, K.; DeSouza, G.N. Multiple Targets Geolocation using SIFT and Stereo Vision on Airborne Video Sequences. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 11–15 October 2009.
  20. Conte, G.; Doherty, P. An Integrated UAV Navigation System Based on Aerial Image Matching. In Proceedings of the 2008 IEEE Aerospace Conference, 1–8 March 2008; pp. 1–10.
  21. Schultz, H.; Hanson, A.; Riseman, E.; Stolle, F.; Zhu, Z. A system for real-time generation of geo-referenced terrain mod-els. In Proceedings of the SPIE Enabling Technologies for Law Enforcement, Boston, MA, USA, 6 November 2000.
  22. Whitacre, W.; Campbell, M.; Wheeler, M.; Stevenson, D. Flight results from tracking ground targets using seascan UAVs with gimballing cameras. In Proceedings of the 2007 American Control Conference, New York, NY, USA, 9–13 July 2007.
  23. Zhan, F.; Shen, H.; Wang, P.; Zhang, C. Precise ground target location of subsonic UAV by compensating delay of navigation information. Opt. Precision Eng. 2015, 23, 2506–2507. [Google Scholar] [CrossRef]
  24. Cai, G.; Chen, B.M.; Lee, T.H. Unmanned Rotorcraft Systems; Springer: London, UK, 2011; pp. 223–254. [Google Scholar]
  25. Zhou, Q.; Liu, J.; Wang, X. Automatic Correction of Geometric Distortion in Aerial Zoom Squint Imaging. Opt. Precision Eng. 2015, 23, 2927–2942. [Google Scholar] [CrossRef]
  26. Wang, X.; Liu, J.; Zhou, Q. Real-Time Multi-Target Localization from Unmanned Aerial Vehicles. Sensors 2017, 17, 33. [Google Scholar] [CrossRef] [PubMed]
  27. Miller, J.L.; Way, S.; Ellison, B.; June, C.A. Design challenges regarding high-definition electro-optic/infrared stabilized imaging systems. Opt. Eng. 2013, 52, 061310. [Google Scholar] [CrossRef]
  28. Chen, V.C.; Miceli, W.J. The effect of roll, pitch and yaw motions on ISAR imaging. In Proceedings of the Radar Processing, Technology, and Applications IV, Denver, CO, USA, 18 July 1999; pp. 149–158.
  29. Maier, A.; Kiesel, S.; Trommer, G.F. Performance analysis of federated filter for SAR/TRN/GPS/INS integration. Gyroscopy Navig. 2011, 2, 293–300. [Google Scholar] [CrossRef]
  30. DeLima, P.; York, G.; Pack, D. Localization of ground targets using a flying sensor network. In Proceedings of the IEEE International Conference on Sensor Networks, Ubiquitous, and Trustworthy Computing, Taichung, Taiwan, 5–7 June 2006; pp. 194–199.
  31. Texas Instruments Incorporated. TMS320C6000 CPU and Instruction Set Reference Guide; Texas Instruments Incorporated: Dallas, TX, USA, 1999. [Google Scholar]
  32. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  33. Hu, M. Visual-pattern recognition by moment invariants. IRE Trans. Inf. Theory 1962, 8, 179–187. [Google Scholar]
  34. Huang, D.; Wu, Z.; Liang, M.; Dong, Y. The Application of TMS320C64x DSP Assembly Language in Correlation Tracking Algorithms. In Proceedings of the 2010 3rd International Congress on Image and Signal Processing (CISP2010), Yantai, China, 16–18 October 2010.
Figure 1. Localization process.
Figure 2. Different localization errors resulting from the same angle error.
Figure 3. Relationship between camera coordinate system C and platform coordinate system P.
Figure 4. (a) Position of shock absorber; (b) Deformation of shock absorber.
Figure 5. Relationship between platform coordinate system P and UAV coordinate system B.
Figure 6. Relationship between UAV coordinate system B and UAV geographic coordinate system V.
Figure 7. Relationship between UAV geographic coordinate system V and ECEF system E.
Figure 8. Relationship between ECEF system E and geodetic coordinate system G.
Figure 9. (a) System installation position; (b) Working principle diagram, top view.
Figure 10. (a) Real shock absorber; (b) Structure of the shock absorber.
Figure 11. (a) Side view of the shock absorber deformation; (b) Top view of the shock absorber deformation.
Figure 12. (a) Length of absorber before inclination; (b) Length of absorber after inclination.
Figure 13. Pre-designed patterns.
Figure 14. Schematic diagram of pixel and actual displacement.
Figure 15. (a) Reference image; (b) Displaced image; (c) Image after registration.
Figure 16. Schematic diagram of actual displacement conversion.
Figure 17. (a) The swing table; (b) Turntable; (c) The pattern picture pasted on the swing table; (d) Position of each part; (e) The camera; (f) The DSP chip C64XX.
Figure 18. (a) The localization results in the plane without angular displacement errors; (b) the localization errors in the plane; (c) the localization errors in the elevation.
Figure 19. (a) The localization results in the plane with initial angular displacement errors; (b) the localization errors in the plane; (c) the localization errors in the elevation.
Figure 20. (a) The localization results in the plane with correction; (b) the localization errors in the plane; (c) the localization errors in the elevation.
Figure 21. The contribution of the angular displacement errors to final latitude errors.
Figure 22. The percentage of the angular displacement errors in final latitude errors.
Table 1. Sources of localization error.

Type of Error | Subsystem | Influence Factor
Directional error of photoelectric platform | Optical system | Parallelism and conformance of optical axis
 | Mechanical frame | Error in design and installation
 | Control system | Error in stabilization and tracking system
 | Others | Deformation, vibration, electromagnetic interference, wear and tear, etc.
Alignment error of photoelectric platform and INS | Installation alignment | Initial directional and horizontal leveling
 | Shock absorber | High-frequency angular vibration and low-frequency shaking
Error in UAV (INS) motion parameter | Attitude measurement (INS) | UAV attitude measurement error
 | Position measurement | UAV localization error
Range error | Laser range finder | Measurement error of range finder
Coordinate transformation error |  | Error in the transformations among different geodetic coordinate systems
Table 2. Initial parameters and their error range.

Type | Focus | Laser Range | Azimuth Angle of Platform | Elevation Angle of Platform | Pitch | Roll | Yaw | Longitude | Latitude | Elevation of UAV
Reference | 100 mm | 12,000 m | — | −40° | — | — | — | 125.19° | 43.54° | 8000 m
Error range | 5% | 10 m | 0.02865° | 0.02865° | 0.03° | 0.03° | 0.06° | 0.0001° | 0.0001° | 10 m
Table 3. Localization accuracy without angular displacement errors.

Configure | Longitude | Latitude | Elevation
Localization results | 125.199925840984° E | 43.6224190056630° N | 293.18 m
Error (RMSE) | 0.000170295986306348° | 0.000119554074008099° | 12.4911 m
Table 4. The angular displacement errors without correction.

Angular Displacement Errors | Pitch | Roll | Yaw
Range | 0.1° | 0.1° | 0.05°
Table 5. Results without correction.

Configure | Longitude | Latitude | Elevation
Error (RMSE) | 0.000249456444498644° | 0.000184640982639463° | 21.53 m
Table 6. The angular displacement errors with correction.

Angular Displacement Errors | Pitch | Roll | Yaw
Range | 0.0115° | 0.0115° | 0.0115°
Table 7. Results with correction.

Configure | Longitude | Latitude | Elevation
Error (RMSE) | 0.000174500400122541° | 0.000132910371952061° | 14.3658 m
Table 8. Improvement made by the system at different angles.

Current Angle | −60° | −50° | −40° | −30° | −25° | −20°
Improvement | 39.12% | 33.24% | 31.13% | 24.63% | 18.56% | 14.34%
