Article

Improved Spatial Registration and Target Tracking Method for Sensors on Multiple Missiles

Institute of Precision Guidance and Control, Northwestern Polytechnical University, Xi’an 710072, China
* Author to whom correspondence should be addressed.
Sensors 2018, 18(6), 1723; https://doi.org/10.3390/s18061723
Submission received: 10 April 2018 / Revised: 15 May 2018 / Accepted: 23 May 2018 / Published: 27 May 2018
(This article belongs to the Section Physical Sensors)

Abstract

Motivated by the problem that current spatial registration methods are unsuitable for three-dimensional (3-D) sensors on high-dynamic platforms, this paper focuses on estimating the registration errors of cooperative missiles and the motion states of a maneuvering target. Two types of errors are discussed: sensor measurement biases and attitude biases. First, an improved Kalman filter in Earth-Centered Earth-Fixed coordinates (ECEF-KF) is proposed to estimate the deviations mentioned above, and its outcomes are further compensated into the error terms. Second, the Pseudo Linear Kalman Filter (PLKF) and a nonlinear scheme, the Unscented Kalman Filter (UKF), with modified inputs are employed for target tracking. The convergence of the filtering results is monitored by a position-judgement logic, and a low-pass first-order filter is selectively introduced before compensation to inhibit jitter in the estimates. In the simulation, the ECEF-KF enhancement is shown to improve the accuracy and robustness of the spatial alignment, while the conditional-compensation-based PLKF method is demonstrated to deliver the best target-tracking performance.

1. Introduction

Spatial registration is vital for current and near-future cooperative combat missions, in which sensor errors are estimated and compensated through the vehicle network by measuring a common target [1,2,3].
Depending on the dimension of the error model, registration algorithms are usually divided into two-dimensional (2-D) and three-dimensional (3-D) systems [4]. Many typical 2-D algorithms have been proposed in the literature, e.g., Real Time Quality Control (RTQC) [5], Least Squares (LS) [6], Generalized Least Squares (GLS) [7], and Exact Maximum Likelihood (EML) [8]. Compared with 3-D algorithms, these have relatively lower computational complexity at the expense of accuracy. Meanwhile, a number of online registration algorithms for 3-D sensors have been put forward. Unfortunately, few approaches take both measurement errors and orientation errors into consideration. The traditional Earth-Centered Earth-Fixed coordinate (ECEF) method [9], which is capable of solving the spatial registration problem, concentrates on the measurement biases yet ignores the attitude errors of the sensors themselves when the measured states are transformed from separate reference frames to a common system. The neglect of measurement deviations in the Kalman-filter-based method [10] proposed for attitude biases makes it deficient as well. Although the algorithm in [11] combines the two types of errors for estimation, it only fits the situation where the attitude angles are small and the sensors are close to each other.
Besides, another branch of research integrates registration with target tracking. The paper [12] presents an improved method based on the state value and spatial deviation of federated filtering of the unscented Kalman filter and the standard Kalman filter, conducting registration and tracking simultaneously. However, this method is thought to require more computation [13]. For this reason, the two processes are usually performed separately in current investigations. Moreover, most of the methods referred to above are only suitable for sensors on a stationary base, which can be treated as points in the relative motion between them and the targets. Their failure to adapt to sensors having their own attitude and position changes (e.g., a radar seeker on a high-speed moving and turning missile) is a significant deficiency.
Target tracking, based on sensor measurement information, is a technique for estimating the state (position, velocity) of a moving target at the current or a future time [14,15]. Filtering is the cornerstone of any tracking algorithm in the presence of system and measurement noise. The typical filters have been widely adopted for more than half a century: the Kalman Filter (KF) was first exploited by [16] to solve the tracking problem for linear systems. The Extended Kalman Filter (EKF) [17] linearizes the nonlinear model and then applies the KF to estimate the states, but it easily diverges under strong nonlinearity. On this basis, [18] put forward the Unscented Kalman Filter (UKF), which is apt for both linear and nonlinear tracking but is compromised by heavy computation. To make up for the disadvantages of the methods above, the Pseudo Linear Kalman Filter (PLKF) [19], which takes the models in [20], preserves the nonlinearity in the system. With the development of computers and data links during the last ten years, tracking performance has been improved through the design and implementation of systems using data collected by a network of spatially distributed sensors. Consequently, spatial registration has gradually become a prerequisite for information processing and data fusion for multiple sensors.
In the case of two missiles cooperatively tracking a target, this paper employs an improved Kalman Filter in Earth-Centered Earth-Fixed coordinates (ECEF-KF), which transforms the measurements of the common target from the body systems into the ECEF system so as to isolate the motion of the sensors. Taking advantage of the difference between the transformed values, the systematic biases and the attitude errors are estimated simultaneously. Subsequently, the results of the registration algorithm are fed back to compensate the inputs of the linear PLKF and the nonlinear UKF used for target tracking. It is found in this paper that the convergence of the ECEF-KF and the effectiveness of the compensation depend on the quality of the 3-D measurement data, which can be adversely affected by the movement of the missiles and the target. More specifically, the 3-D information may lose one degree of freedom along some dimension and degenerate to 2-D as the vehicles approach certain geometries, ruining the registration results. To guard against this possible divergence, a Low-pass First-Order Filter (LFOF) with a position-judgement condition is introduced to detect and inhibit the divergent trend, further improving the accuracy of the target tracking. The paper is a theoretical study of these information processing methods, which further contributes to the application of such sensors in engineering problems.
The main contributions are listed below:
  • A new spatial registration algorithm is proposed for sensors on high-speed moving vehicles, realizing the simultaneous estimation of system and attitude biases, which are then compensated into the biased measurements of the tracking schemes. The accuracy and robustness of the estimation of the target state are effectively improved.
  • Inspired by the idea of an integral controller, a low-pass filter is applied when the position relationship between the missiles and the target meets a special condition, in order to inhibit jitter in the estimates. This technique improves the adaptability of the tracking system without the time delay caused by common integral controllers.
This paper is organized as follows. The definition of each coordinate system and their transformation relationships are given in Section 2. Section 3 develops the model of the original and improved ECEF-KF algorithms. The compensation-based PLKF and UKF methods with strategies for divergence detection and inhibition are introduced in Section 4. Section 5 presents the performance of the improved ECEF-KF under two cases of missile motion and compares the PLKF and UKF methods. Concluding remarks are given in Section 6.

2. Coordinate Definition and Transformation

Before the model development, the reference frames will be introduced in this section.

2.1. Definition of Coordinate System

In the spatial alignment of multiple missiles, each member has its own reference frame or coordinate system. The measurements of the sensors on a platform are obtained in the body coordinate system, while the attitude angles and the position are usually described in the Local East, North, Up system and the ECEF coordinate system, respectively. The definitions of these coordinate systems are listed as follows (see in Figure 1).
(1) ECEF Coordinate O X e Y e Z e : As defined in the WGS-84 coordinate system [21], the origin is located at the earth’s Center of Mass (CM). The Z e axis points in the direction of the Conventional Terrestrial Pole (CTP). The X e axis points to the intersection of the zero-meridian plane and the CTP equator defined by the Bureau International de l’Heure (BIH) 1984 [22]. The Y e axis is perpendicular to the X e and Z e axes, completing the right-hand coordinate system.
(2) Local East, North and Up (LENU) Coordinate O X d Y d Z d : The origin is the projection of the platform’s CM onto the surface of the earth; the X d , Y d and Z d axes point to the east, north and up, respectively [23].
(3) Body Coordinate O X b Y b Z b : The origin is located at the CM of the platform. The X b axis coincides with the longitudinal axis of the platform, positive toward the head. The Y b axis coincides with the vertical axis of the platform, positive upward. The Z b axis is determined by the right-hand rule.

2.2. Transformation between Reference Frames

(1) ECEF and LENU Coordinate: Let the column vectors r e = [ x e y e z e ] T and r d = [ x d y d z d ] T (the superscript T denotes matrix transposition) represent the ECEF coordinates and the LENU coordinates, respectively. The transformation from the latter to the former is given by
r e = C d e r d
where C d e is the 3 × 3 orthogonal matrix given by [24]
$$C_d^e = \begin{bmatrix} -\sin\lambda & -\sin\varphi\cos\lambda & \cos\varphi\cos\lambda \\ \cos\lambda & -\sin\varphi\sin\lambda & \cos\varphi\sin\lambda \\ 0 & \cos\varphi & \sin\varphi \end{bmatrix}$$
λ and φ denote the longitude and latitude of platform. Besides, if the longitude λ , latitude φ and height h are known, its ECEF coordinate can be calculated by [25]
$$x_e = (C+h)\cos\varphi\cos\lambda,\qquad y_e = (C+h)\cos\varphi\sin\lambda,\qquad z_e = \bigl(C(1-e^2)+h\bigr)\sin\varphi,\qquad C = \frac{R_e}{\sqrt{1-e^2\sin^2\varphi}}$$
where R e is the equatorial radius, e is the eccentricity of the earth.
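As a concrete illustration of the geodetic-to-ECEF conversion above, the formulas can be sketched in a few lines of Python (the WGS-84 constants below are standard published values, not quantities taken from this paper):

```python
import math

# WGS-84 constants (standard values, not quoted from the paper)
R_E = 6378137.0          # equatorial radius R_e [m]
E2 = 6.69437999014e-3    # squared eccentricity e^2

def geodetic_to_ecef(lon_deg, lat_deg, h):
    """Convert longitude/latitude [deg] and height [m] to ECEF coordinates [m]."""
    lam, phi = math.radians(lon_deg), math.radians(lat_deg)
    C = R_E / math.sqrt(1.0 - E2 * math.sin(phi) ** 2)  # prime-vertical radius
    x_e = (C + h) * math.cos(phi) * math.cos(lam)
    y_e = (C + h) * math.cos(phi) * math.sin(lam)
    z_e = (C * (1.0 - E2) + h) * math.sin(phi)
    return x_e, y_e, z_e
```

At the equator this returns a point at one equatorial radius on the X axis, and at the pole the polar radius appears on the Z axis, which makes a convenient sanity check.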
(2) LENU and Body Coordinate: Let the column vector r b = [ x b y b z b ] T represent the body coordinates. Likewise, the transformation from the body coordinate system to the LENU coordinate system is represented by the matrix C b d in [26]
$$C_b^d = \begin{bmatrix} \cos\theta\cos\psi & \sin\theta & -\cos\theta\sin\psi \\ -\sin\theta\cos\psi\cos\gamma+\sin\psi\sin\gamma & \cos\theta\cos\gamma & \sin\theta\sin\psi\cos\gamma+\cos\psi\sin\gamma \\ \sin\theta\sin\gamma\cos\psi+\cos\gamma\sin\psi & -\cos\theta\sin\gamma & -\sin\theta\sin\psi\sin\gamma+\cos\gamma\cos\psi \end{bmatrix}$$
where ψ , θ and γ are the yaw, pitch and roll angle of the platform respectively.
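Both rotation matrices can be written down directly; the sketch below (a plain-Python illustration with angle arguments in radians, not code from the paper) builds $C_d^e$ and $C_b^d$ as nested lists:

```python
import math

def rot_lenu_to_ecef(lon, lat):
    """C_d^e: LENU -> ECEF rotation (lon, lat in radians)."""
    sl, cl, sp, cp = math.sin(lon), math.cos(lon), math.sin(lat), math.cos(lat)
    return [[-sl, -sp * cl, cp * cl],
            [ cl, -sp * sl, cp * sl],
            [0.0,       cp,      sp]]

def rot_body_to_lenu(psi, theta, gamma):
    """C_b^d: body -> LENU rotation from yaw psi, pitch theta, roll gamma."""
    s, c = math.sin, math.cos
    return [[c(theta) * c(psi), s(theta), -c(theta) * s(psi)],
            [-s(theta) * c(psi) * c(gamma) + s(psi) * s(gamma),
             c(theta) * c(gamma),
             s(theta) * s(psi) * c(gamma) + c(psi) * s(gamma)],
            [s(theta) * s(gamma) * c(psi) + c(gamma) * s(psi),
             -c(theta) * s(gamma),
             -s(theta) * s(psi) * s(gamma) + c(gamma) * c(psi)]]
```

Since both are rotation matrices, each should satisfy $MM^T = I$, which is an easy numerical check of the transcription.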

3. Registration Algorithm

The algorithm is based on the following assumptions:
  • The attitude biases and the measured deviations are considered as constant and the attitude biases are assumed as small values.
  • The coupling between biases is ignored.
  • The position errors of the sensors are not considered; that is, the positions are assumed to be known exactly from an auxiliary device (e.g., GPS).

3.1. Attitude and Sensor Measurement Errors

The ECEF-KF algorithm is introduced to estimate attitude and sensor measurement biases which are modeled as additive constant biases to the reported values. That is
$$\tilde\psi_k = \psi_k + \Delta\psi_k,\qquad \tilde\theta_k = \theta_k + \Delta\theta_k,\qquad \tilde\gamma_k = \gamma_k + \Delta\gamma_k$$
where ψ ˜ k , θ ˜ k , γ ˜ k are the reported values of the k th sensor’s yaw, pitch and roll angles, ψ k , θ k , γ k are the true values, and Δ ψ k , Δ θ k , Δ γ k are the bias errors of these angles. Likewise, the measurement errors are modeled as
$$\tilde r_k = r_k + \Delta r_k,\qquad \tilde\alpha_k = \alpha_k + \Delta\alpha_k,\qquad \tilde\beta_k = \beta_k + \Delta\beta_k$$
where r k , α k , β k represent the true values of the kth sensor’s slope distance (range), the elevation and azimuth angle of line of sight respectively. It is notable that the measurement errors include constant biases and random noises, that is
$$\Delta r_k = \Delta r_{km} + \Delta r_{kn},\qquad \Delta\alpha_k = \Delta\alpha_{km} + \Delta\alpha_{kn},\qquad \Delta\beta_k = \Delta\beta_{km} + \Delta\beta_{kn}$$
where Δ r k m , Δ α k m , Δ β k m are the constant biases, and Δ r k n , Δ α k n , Δ β k n denote the random noises.

3.2. Traditional ECEF-KF Algorithm

Given the true measurements of range r , elevation α and azimuth β of the radar sensor, which are defined in the body system, the nonlinear relationship between the target state ( x b , y b , z b ) and the measurements is (see in Figure 2)
$$\alpha = \arcsin\frac{y_b}{\sqrt{x_b^2+y_b^2+z_b^2}},\qquad \beta = -\arctan\frac{z_b}{x_b},\qquad r = \sqrt{x_b^2+y_b^2+z_b^2}$$
then the coordinate of target can be written as
$$x_b = r\cos\alpha\cos\beta,\qquad y_b = r\sin\alpha,\qquad z_b = -r\cos\alpha\sin\beta$$
Substituting (6) into (9) gives

$$x_b = (\tilde r-\Delta r)\cos(\tilde\alpha-\Delta\alpha)\cos(\tilde\beta-\Delta\beta),\quad y_b = (\tilde r-\Delta r)\sin(\tilde\alpha-\Delta\alpha),\quad z_b = -(\tilde r-\Delta r)\cos(\tilde\alpha-\Delta\alpha)\sin(\tilde\beta-\Delta\beta)$$
Let X b = [ x b y b z b ] T , ξ = [ Δ r Δ α Δ β ] T . Since the measurement errors are small values as assumed, Equation (10) can be expanded to first order in the errors
X b = X ˜ b + J ξ
where X ˜ b is the reported coordinate of target, J is the Jacobian matrix of X b with respect to ξ given by
$$J = \begin{bmatrix} \dfrac{\partial x_b}{\partial\Delta r} & \dfrac{\partial x_b}{\partial\Delta\alpha} & \dfrac{\partial x_b}{\partial\Delta\beta} \\ \dfrac{\partial y_b}{\partial\Delta r} & \dfrac{\partial y_b}{\partial\Delta\alpha} & \dfrac{\partial y_b}{\partial\Delta\beta} \\ \dfrac{\partial z_b}{\partial\Delta r} & \dfrac{\partial z_b}{\partial\Delta\alpha} & \dfrac{\partial z_b}{\partial\Delta\beta} \end{bmatrix}$$
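The measurement-to-body mapping (9) and its inverse (8) are easy to check numerically; the following sketch (illustrative code, not from the paper) implements both and verifies the round trip:

```python
import math

def meas_to_body(r, alpha, beta):
    """Eq. (9): range/elevation/azimuth -> body-frame coordinates."""
    x_b = r * math.cos(alpha) * math.cos(beta)
    y_b = r * math.sin(alpha)
    z_b = -r * math.cos(alpha) * math.sin(beta)
    return x_b, y_b, z_b

def body_to_meas(x_b, y_b, z_b):
    """Eq. (8): body-frame coordinates -> range/elevation/azimuth."""
    r = math.sqrt(x_b ** 2 + y_b ** 2 + z_b ** 2)
    alpha = math.asin(y_b / r)
    beta = -math.atan2(z_b, x_b)   # atan2 keeps the correct quadrant
    return r, alpha, beta
```

A round trip through both functions should reproduce the original measurement to machine precision, which also exercises the sign convention of the azimuth in (8).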
Now consider two sensors A and B. ( λ a , φ a , h a ) and ( λ b , φ b , h b ) are their longitude, latitude and height coordinates. Let X e a = [ x e a y e a z e a ] T and X e b = [ x e b y e b z e b ] T be the ECEF coordinates of the two sensors, and let X e t = [ x e t y e t z e t ] T represent the ECEF coordinates of the target. X b a = [ x b a y b a z b a ] T and X b b = [ x b b y b b z b b ] T are the body coordinates of the target from the two sensors, respectively. Thus, the following relationships hold.
$$\begin{bmatrix} x_{et} \\ y_{et} \\ z_{et} \end{bmatrix} = \begin{bmatrix} x_{ea} \\ y_{ea} \\ z_{ea} \end{bmatrix} + \begin{bmatrix} -\sin\lambda_a & -\sin\varphi_a\cos\lambda_a & \cos\varphi_a\cos\lambda_a \\ \cos\lambda_a & -\sin\varphi_a\sin\lambda_a & \cos\varphi_a\sin\lambda_a \\ 0 & \cos\varphi_a & \sin\varphi_a \end{bmatrix}\begin{bmatrix} x_{ba} \\ y_{ba} \\ z_{ba} \end{bmatrix}$$

$$\begin{bmatrix} x_{et} \\ y_{et} \\ z_{et} \end{bmatrix} = \begin{bmatrix} x_{eb} \\ y_{eb} \\ z_{eb} \end{bmatrix} + \begin{bmatrix} -\sin\lambda_b & -\sin\varphi_b\cos\lambda_b & \cos\varphi_b\cos\lambda_b \\ \cos\lambda_b & -\sin\varphi_b\sin\lambda_b & \cos\varphi_b\sin\lambda_b \\ 0 & \cos\varphi_b & \sin\varphi_b \end{bmatrix}\begin{bmatrix} x_{bb} \\ y_{bb} \\ z_{bb} \end{bmatrix}$$
From the two equations above, we obtain the following equality
$$X_{ea} + R_a X_{ba} = X_{eb} + R_b X_{bb}$$
where R a and R b denote the rotational matrices in (13) and (14). X b a and X b b can be expanded to the linear form like (11), that is
$$X_{ea} + R_a\tilde X_{ba} + R_a J_a\xi_a = X_{eb} + R_b\tilde X_{bb} + R_b J_b\xi_b \;\Longrightarrow\; \tilde X_{ea} - \tilde X_{eb} = -R_a J_a\xi_a + R_b J_b\xi_b$$
where X ˜ e a and X ˜ e b represent the ECEF coordinates of target reported by sensors A and B. In order to estimate the measurement errors of sensors A and B, the state vector is developed as
$$\chi_k = \begin{bmatrix} \Delta r_{am} & \Delta\alpha_{am} & \Delta\beta_{am} & \Delta r_{bm} & \Delta\alpha_{bm} & \Delta\beta_{bm} \end{bmatrix}^T$$
where Δ r a m , Δ α a m , Δ β a m are the measurement biases of sensor A, and Δ r b m , Δ α b m , Δ β b m denote the measurement biases of sensor B. Since these terms are constant, the discrete state equation can be written as
$$\chi_k = F_k\chi_{k-1} + W_k,\qquad F_k = I_{6\times 6}$$
where W k is zero mean white Gaussian process noise with covariance Q k . F k is the state transition matrix. In (16), let Z = X ˜ e a - X ˜ e b , H 1 = - R a J a , H 2 = R b J b , H = H 1 H 2 . Therefore, the measurement equation can be expressed as
Z k = H k χ k + V k
where V k is zero-mean white Gaussian measurement noise with covariance R k . Kalman filtering techniques [27] using (18) and (19) can be applied to estimate the sensor errors online.
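Because the state is constant ($F_k = I$), the resulting filter is particularly simple. A minimal generic sketch (with hypothetical dimensions; in the paper the matrices $H(k)$ would be assembled from (16)):

```python
import numpy as np

def kf_constant_bias(zs, hs, Q, R, x0, P0):
    """Kalman filter for a constant bias state, per Eqs. (17)-(19): F(k) = I."""
    x, P = x0.astype(float).copy(), P0.astype(float).copy()
    for z, H in zip(zs, hs):
        P = P + Q                              # predict: the state is constant
        S = H @ P @ H.T + R                    # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + K @ (z - H @ x)                # state update
        P = (np.eye(len(x)) - K @ H) @ P       # covariance update
    return x, P
```

With noiseless synthetic measurements the estimate converges to the true bias within a few iterations, which is a useful unit test before feeding in real registration data.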

3.3. Improved ECEF-KF Algorithm

From the above derivation, it can be seen that the traditional ECEF-KF method only fits sensors on a stationary base without attitude errors. For sensors on high-speed moving missiles, changes in the position and attitude of the missiles will influence the actual measurements, and the attitude biases of each partner, which may cause misalignment of the tracking or even loss of the target, must not be ignored. Considering this situation, an improved method making minor modifications to the original algorithm is proposed to realize the simultaneous estimation of attitude and measurement errors.
Again assume two missiles A and B. Considering the attitude change, (13) and (14) are modified as
$$X_{et} = X_{ea} + C_{d,a}^{e}C_{b,a}^{d}X_{ba}$$
$$X_{et} = X_{eb} + C_{d,b}^{e}C_{b,b}^{d}X_{bb}$$
where C d , a e and C d , b e , equal to R a and R b in (15), are the rotation matrices from the LENU coordinate system to the ECEF coordinate system (2) of missiles A and B, and C b , a d and C b , b d represent the rotation matrices from the body coordinate system to the LENU coordinate system (4). Combining the two equations above gives
$$\Delta X_e = X_{eb} - X_{ea} = C_{d,a}^{e}C_{b,a}^{d}X_{ba} - C_{d,b}^{e}C_{b,b}^{d}X_{bb}$$
Since a rotation matrix is orthogonal [28] (for an orthogonal matrix M , M − 1 = M T ), (22) is transformed to
$$X_{ba} = \bigl(C_{b,a}^{d}\bigr)^T\bigl(C_{d,a}^{e}\bigr)^T\bigl(C_{d,b}^{e}C_{b,b}^{d}X_{bb} + \Delta X_e\bigr)$$
When attitude errors are small values, the rotation matrix from body coordinate system to LENU system can be written as [29]
$$\tilde C_b^d = (I-\phi)C_b^d,\qquad \phi = (\delta\times) = \begin{bmatrix} 0 & -\Delta\psi & \Delta\theta \\ \Delta\psi & 0 & -\Delta\gamma \\ -\Delta\theta & \Delta\gamma & 0 \end{bmatrix},\qquad \delta = \begin{bmatrix} \Delta\psi & \Delta\theta & \Delta\gamma \end{bmatrix}^T$$
Thus
$$C_{b,a}^{d} = (I-\phi_a)^T\tilde C_{b,a}^{d},\qquad C_{b,b}^{d} = (I-\phi_b)^T\tilde C_{b,b}^{d}$$
Substituting (25) into (23) and ignoring the product of ϕ a and ϕ b gives

$$X_{ba} \approx \tilde C_{d,a}^{b}(I-\phi_a)\bigl(C_{e,a}^{d}C_{d,b}^{e} - C_{e,a}^{d}C_{d,b}^{e}\phi_b^T\bigr)\tilde C_{b,b}^{d}X_{bb} + \tilde C_{d,a}^{b}(I-\phi_a)C_{e,a}^{d}\Delta X_e$$

where $\tilde C_{d,a}^{b} = (\tilde C_{b,a}^{d})^T$ and $C_{e,a}^{d} = (C_{d,a}^{e})^T$.
Multiplying both sides of (26) by $C_{d,a}^{e}\tilde C_{b,a}^{d}$ gives

$$C_{d,a}^{e}\tilde C_{b,a}^{d}X_{ba} = \bigl(C_{d,b}^{e} - C_{d,a}^{e}\phi_a C_{e,a}^{d}C_{d,b}^{e} - C_{d,b}^{e}\phi_b^T\bigr)\tilde C_{b,b}^{d}X_{bb} + C_{d,a}^{e}(I-\phi_a)C_{e,a}^{d}\Delta X_e$$
Expanding $X_{ba}$ and $X_{bb}$ into first-order linear form as in (11), and ignoring products of the small quantities $\phi$ and $\xi$, (27) becomes

$$C_{d,a}^{e}\tilde C_{b,a}^{d}\tilde X_{ba} - C_{d,b}^{e}\tilde C_{b,b}^{d}\tilde X_{bb} - \Delta X_e \approx C_{d,b}^{e}\tilde C_{b,b}^{d}J_b\xi_b - C_{d,a}^{e}\tilde C_{b,a}^{d}J_a\xi_a - \bigl(C_{d,a}^{e}\phi_a C_{e,a}^{d} + C_{d,b}^{e}\phi_b^T C_{e,b}^{d}\bigr)C_{d,b}^{e}\tilde C_{b,b}^{d}\tilde X_{bb}$$
In the above formula, the left part of the equation is labeled $Z^*$. Let $H_1^* = -C_{d,a}^{e}\tilde C_{b,a}^{d}J_a$ and $H_2^* = C_{d,b}^{e}\tilde C_{b,b}^{d}J_b$, and denote the last term on the right side by $P$; we arrive at

$$Z^* = H_1^*\xi_a + H_2^*\xi_b + P$$
The remaining work is to transform P into a linear form in the attitude errors. The matrix P is written as follows
$$P = -\bigl(A\phi_a A^T + B\phi_b^T B^T\bigr)M$$
where
$$A = C_{d,a}^{e} = \begin{bmatrix} a_1 & a_2 & a_3 \end{bmatrix},\qquad B = C_{d,b}^{e} = \begin{bmatrix} b_1 & b_2 & b_3 \end{bmatrix},\qquad M = B\tilde C_{b,b}^{d}\tilde X_{bb}$$
a i and b i are the column vectors of A and B respectively.
Remark 1.
The following equality holds:
$$A\phi_a A^T M = A\begin{bmatrix} -a_2 & a_1 & 0 \\ a_3 & 0 & -a_1 \\ 0 & -a_3 & a_2 \end{bmatrix}^T \operatorname{diag}(M,M,M)\begin{bmatrix} \Delta\psi_a \\ \Delta\theta_a \\ \Delta\gamma_a \end{bmatrix}$$
Proof of Remark 1.
$$\begin{aligned}
\mathrm{Left} &= \begin{bmatrix} a_1 & a_2 & a_3 \end{bmatrix}\begin{bmatrix} 0 & -\Delta\psi_a & \Delta\theta_a \\ \Delta\psi_a & 0 & -\Delta\gamma_a \\ -\Delta\theta_a & \Delta\gamma_a & 0 \end{bmatrix}\begin{bmatrix} a_1^T \\ a_2^T \\ a_3^T \end{bmatrix}M
= \begin{bmatrix} a_1 & a_2 & a_3 \end{bmatrix}\begin{bmatrix} -\Delta\psi_a a_2^T + \Delta\theta_a a_3^T \\ \Delta\psi_a a_1^T - \Delta\gamma_a a_3^T \\ -\Delta\theta_a a_1^T + \Delta\gamma_a a_2^T \end{bmatrix}M \\
&= \bigl[(a_2 a_1^T - a_1 a_2^T)\Delta\psi_a + (a_1 a_3^T - a_3 a_1^T)\Delta\theta_a + (a_3 a_2^T - a_2 a_3^T)\Delta\gamma_a\bigr]M \\
\mathrm{Right} &= \begin{bmatrix} a_1 & a_2 & a_3 \end{bmatrix}\begin{bmatrix} -a_2^T & a_3^T & 0 \\ a_1^T & 0 & -a_3^T \\ 0 & -a_1^T & a_2^T \end{bmatrix}\begin{bmatrix} \Delta\psi_a M \\ \Delta\theta_a M \\ \Delta\gamma_a M \end{bmatrix}
= \begin{bmatrix} a_2 a_1^T - a_1 a_2^T & a_1 a_3^T - a_3 a_1^T & a_3 a_2^T - a_2 a_3^T \end{bmatrix}\begin{bmatrix} \Delta\psi_a M \\ \Delta\theta_a M \\ \Delta\gamma_a M \end{bmatrix} = \mathrm{Left}
\end{aligned}$$
Thus, Remark 1 holds. ☐
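The equality in Remark 1 is also easy to verify numerically. The following check (illustrative code with randomly chosen values, not from the paper) compares a direct evaluation of $A\phi_a A^T M$ with the expanded form from the proof:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.linalg.qr(rng.standard_normal((3, 3)))[0]   # a random orthogonal matrix
M = rng.standard_normal(3)
dpsi, dth, dgam = 0.3, -0.2, 0.5                   # arbitrary attitude errors

# phi = (delta x) arranged as in Eq. (24)
phi = np.array([[0.0, -dpsi, dth],
                [dpsi, 0.0, -dgam],
                [-dth, dgam, 0.0]])
left = A @ phi @ A.T @ M

a1, a2, a3 = A[:, 0], A[:, 1], A[:, 2]             # columns of A
right = ((np.outer(a2, a1) - np.outer(a1, a2)) * dpsi
         + (np.outer(a1, a3) - np.outer(a3, a1)) * dth
         + (np.outer(a3, a2) - np.outer(a2, a3)) * dgam) @ M
```

Note the identity is exact (it is linear in the angle errors, with no small-angle approximation involved), so the two sides agree to machine precision.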
Likewise
$$B\phi_b^T B^T M = -B\phi_b B^T M = -B\begin{bmatrix} -b_2 & b_1 & 0 \\ b_3 & 0 & -b_1 \\ 0 & -b_3 & b_2 \end{bmatrix}^T \operatorname{diag}(M,M,M)\begin{bmatrix} \Delta\psi_b \\ \Delta\theta_b \\ \Delta\gamma_b \end{bmatrix}$$
Let
$$H_3^* = -A\begin{bmatrix} -a_2 & a_1 & 0 \\ a_3 & 0 & -a_1 \\ 0 & -a_3 & a_2 \end{bmatrix}^T \operatorname{diag}(M,M,M),\qquad H_4^* = B\begin{bmatrix} -b_2 & b_1 & 0 \\ b_3 & 0 & -b_1 \\ 0 & -b_3 & b_2 \end{bmatrix}^T \operatorname{diag}(M,M,M)$$
$$\delta_a = \begin{bmatrix} \Delta\psi_a & \Delta\theta_a & \Delta\gamma_a \end{bmatrix}^T,\qquad \delta_b = \begin{bmatrix} \Delta\psi_b & \Delta\theta_b & \Delta\gamma_b \end{bmatrix}^T$$
Substituting (31)∼(33) in (30), P is finally expanded as
$$P = H_3^*\delta_a + H_4^*\delta_b$$
Substituting (34) into (29) gives

$$Z^* = H_1^*\xi_a + H_2^*\xi_b + H_3^*\delta_a + H_4^*\delta_b = H_1^*\begin{bmatrix} \Delta r_{am}+\Delta r_{an} \\ \Delta\alpha_{am}+\Delta\alpha_{an} \\ \Delta\beta_{am}+\Delta\beta_{an} \end{bmatrix} + H_2^*\begin{bmatrix} \Delta r_{bm}+\Delta r_{bn} \\ \Delta\alpha_{bm}+\Delta\alpha_{bn} \\ \Delta\beta_{bm}+\Delta\beta_{bn} \end{bmatrix} + H_3^*\begin{bmatrix} \Delta\psi_a \\ \Delta\theta_a \\ \Delta\gamma_a \end{bmatrix} + H_4^*\begin{bmatrix} \Delta\psi_b \\ \Delta\theta_b \\ \Delta\gamma_b \end{bmatrix}$$
where Δ r a m , Δ α a m , Δ β a m , Δ r b m , Δ α b m , Δ β b m are the constant measurement biases of missiles A and B, respectively, and Δ r a n , Δ α a n , Δ β a n , Δ r b n , Δ α b n , Δ β b n are the random noises added to these measurements.
Considering the terms of attitude errors, we redefine
$$\chi_k^* = \begin{bmatrix} \Delta r_{am} & \Delta\alpha_{am} & \Delta\beta_{am} & \Delta r_{bm} & \Delta\alpha_{bm} & \Delta\beta_{bm} & \Delta\psi_a & \Delta\theta_a & \Delta\gamma_a & \Delta\psi_b & \Delta\theta_b & \Delta\gamma_b \end{bmatrix}^T$$
to be the 12-dimensional state vector. The corrected discrete equation of the state is
$$\chi_k^* = F_k^*\chi_{k-1}^* + W_k^*$$

Since the attitude and measurement biases are all considered constant,

$$F_k^* = I_{12\times 12}$$
Denoting $H^* = \begin{bmatrix} H_1^* & H_2^* & H_3^* & H_4^* \end{bmatrix}$ as in (35), the new measurement equation is
Z * k = H * k χ * k + V * k
where V * ( k ) is again zero-mean white Gaussian measurement noise with covariance R * ( k ) . Kalman filtering techniques using (37) and (38) can be applied here as well, so the details are omitted.

4. Target Tracking with Error Compensation

4.1. Compensation PLKF Algorithm

In Section 3, the coordinate of target in the body coordinate system has been given in (10):
$$x_b = (\tilde r-\Delta r)\cos(\tilde\alpha-\Delta\alpha)\cos(\tilde\beta-\Delta\beta),\quad y_b = (\tilde r-\Delta r)\sin(\tilde\alpha-\Delta\alpha),\quad z_b = -(\tilde r-\Delta r)\cos(\tilde\alpha-\Delta\alpha)\sin(\tilde\beta-\Delta\beta)$$
where Δ r = Δ r m + Δ r n , Δ α = Δ α m + Δ α n , Δ β = Δ β m + Δ β n .
In order to improve the accuracy of target tracking, an error compensation method is introduced. The estimates of the registration errors, including the range Δ r m * , elevation angle Δ α m * and azimuth angle Δ β m * biases and the attitude errors Δ ψ * , Δ θ * , Δ γ * , are obtained through the ECEF-KF. These estimated values are compensated into the error terms of the equation above and the rotation matrix C ˜ b d in (24), that is
$$\Delta r_m \leftarrow \Delta r_m^*,\qquad \Delta\alpha_m \leftarrow \Delta\alpha_m^*,\qquad \Delta\beta_m \leftarrow \Delta\beta_m^*$$
$$C_b^{d*} = (I-\phi^*)^T\tilde C_b^d,\qquad \phi^* = \begin{bmatrix} 0 & -\Delta\psi^* & \Delta\theta^* \\ \Delta\psi^* & 0 & -\Delta\gamma^* \\ -\Delta\theta^* & \Delta\gamma^* & 0 \end{bmatrix}$$
Let r = r ˜ - Δ r m * , α = α ˜ - Δ α m * , β = β ˜ - Δ β m * , (39) becomes
$$x_b = (r-\Delta r_n)\cos(\alpha-\Delta\alpha_n)\cos(\beta-\Delta\beta_n),\quad y_b = (r-\Delta r_n)\sin(\alpha-\Delta\alpha_n),\quad z_b = -(r-\Delta r_n)\cos(\alpha-\Delta\alpha_n)\sin(\beta-\Delta\beta_n)$$
Considering that Δ r n , Δ α n , Δ β n are small values, cos Δ α n ≈ 1 , cos Δ β n ≈ 1 , sin Δ α n ≈ Δ α n , sin Δ β n ≈ Δ β n . Meanwhile, the higher-order small quantities are ignored, i.e., Δ α n Δ β n ≈ 0 , Δ α n Δ r n ≈ 0 , Δ r n Δ β n ≈ 0 . Then, the equation above turns into the following pseudo measurements:
$$l_x \triangleq r\cos\alpha\cos\beta = x_b - n_x,\qquad l_y \triangleq r\sin\alpha = y_b - n_y,\qquad l_z \triangleq -r\cos\alpha\sin\beta = z_b - n_z$$
where n x , n y , n z are the pseudo measurement noises:
$$\begin{aligned}
n_x &= \Delta\beta_n\,r\cos\alpha\sin\beta + \Delta\alpha_n\,r\sin\alpha\cos\beta - \Delta r_n\cos\alpha\cos\beta \\
n_y &= -\Delta\alpha_n\,r\cos\alpha - \Delta r_n\sin\alpha \\
n_z &= \Delta\beta_n\,r\cos\alpha\cos\beta - \Delta\alpha_n\,r\sin\alpha\sin\beta + \Delta r_n\cos\alpha\sin\beta
\end{aligned}$$
The covariance of these measurement noises is
$$R_n = \begin{bmatrix} \mathrm{Var}(n_x) & \mathrm{Cov}(n_x,n_y) & \mathrm{Cov}(n_x,n_z) \\ \mathrm{Cov}(n_y,n_x) & \mathrm{Var}(n_y) & \mathrm{Cov}(n_y,n_z) \\ \mathrm{Cov}(n_z,n_x) & \mathrm{Cov}(n_z,n_y) & \mathrm{Var}(n_z) \end{bmatrix}$$
where
$$\begin{aligned}
&\sigma_r^2 = \mathrm{Var}(\Delta r_n),\qquad \sigma_\alpha^2 = \mathrm{Var}(\Delta\alpha_n),\qquad \sigma_\beta^2 = \mathrm{Var}(\Delta\beta_n) \\
&\mathrm{Var}(n_x) = (\sigma_\beta r\cos\alpha\sin\beta)^2 + (\sigma_\alpha r\sin\alpha\cos\beta)^2 + (\sigma_r\cos\alpha\cos\beta)^2 \\
&\mathrm{Var}(n_y) = (\sigma_\alpha r\cos\alpha)^2 + (\sigma_r\sin\alpha)^2 \\
&\mathrm{Var}(n_z) = (\sigma_\beta r\cos\alpha\cos\beta)^2 + (\sigma_\alpha r\sin\alpha\sin\beta)^2 + (\sigma_r\cos\alpha\sin\beta)^2 \\
&\mathrm{Cov}(n_x,n_y) = \mathrm{Cov}(n_y,n_x) = \cos\alpha\sin\alpha\cos\beta\,\bigl(\sigma_r^2 - (r\sigma_\alpha)^2\bigr) \\
&\mathrm{Cov}(n_x,n_z) = \mathrm{Cov}(n_z,n_x) = \cos\beta\sin\beta\,\bigl((r\cos\alpha\,\sigma_\beta)^2 - (r\sin\alpha\,\sigma_\alpha)^2 - (\cos\alpha\,\sigma_r)^2\bigr) \\
&\mathrm{Cov}(n_y,n_z) = \mathrm{Cov}(n_z,n_y) = \cos\alpha\sin\alpha\sin\beta\,\bigl((r\sigma_\alpha)^2 - \sigma_r^2\bigr)
\end{aligned}$$
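The covariance $R_n$ can be assembled directly from the measured $r$, $\alpha$, $\beta$ and the noise standard deviations. A sketch (illustrative helper; the function name is our own):

```python
import math
import numpy as np

def pseudo_meas_cov(r, alpha, beta, sig_r, sig_a, sig_b):
    """R_n of the pseudo-measurement noise (n_x, n_y, n_z)."""
    ca, sa = math.cos(alpha), math.sin(alpha)
    cb, sb = math.cos(beta), math.sin(beta)
    var_x = (sig_b*r*ca*sb)**2 + (sig_a*r*sa*cb)**2 + (sig_r*ca*cb)**2
    var_y = (sig_a*r*ca)**2 + (sig_r*sa)**2
    var_z = (sig_b*r*ca*cb)**2 + (sig_a*r*sa*sb)**2 + (sig_r*ca*sb)**2
    cxy = ca*sa*cb * (sig_r**2 - (r*sig_a)**2)
    cxz = cb*sb * ((r*ca*sig_b)**2 - (r*sa*sig_a)**2 - (ca*sig_r)**2)
    cyz = ca*sa*sb * ((r*sig_a)**2 - sig_r**2)
    return np.array([[var_x, cxy, cxz],
                     [cxy, var_y, cyz],
                     [cxz, cyz, var_z]])
```

Being the covariance of a linear combination of independent noises, the result must be symmetric and positive semi-definite, which provides a quick sanity check of the formulas.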
Representing U = x b y b z b x ˙ b y ˙ b z ˙ b T as the state vector, the continuous equation is
U ˙ = SU + G ω
where ω = ω x ω y ω z T is zero-mean white noise with covariance Σ , and S is the transition matrix. The CV (Constant Velocity) model [30] is chosen as the target motion model, that is
$$S = \begin{bmatrix} 0_3 & I_3 \\ 0_3 & 0_3 \end{bmatrix},\qquad G = \begin{bmatrix} 0_3 \\ I_3 \end{bmatrix}$$
The discrete-time state function is
U k = T k U k - 1 + Λ k
where Λ ( k ) is discrete-time process noise, and the state transition matrix T k = T t k , t k + 1 at time t k over time interval t Δ = t k + 1 - t k is given by
$$T(k) = e^{S t_\Delta} = I + t_\Delta S$$

since the exponential series terminates exactly ( $S^2 = 0$ ).
The covariance of the discrete-time process noise is
$$Q(k) = E\bigl[\Lambda(k)\Lambda(k)^T\bigr] = \int_{t_k}^{t_{k+1}} T(t_k,\tau)\,G\,\Sigma\,G^T\,T(t_k,\tau)^T\,d\tau$$
Let L = [ l x l y l z ] T denote the measurement vector. The measurement equation is thereby transformed into a linear function of the target state:

$$L(k) = Y(k)U(k) + \mu(k),\qquad Y(k) = \begin{bmatrix} I_3 & 0_3 \end{bmatrix}$$
where μ k is zero-mean Gaussian noise.
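The discrete transition matrix and process noise of the CV model above have closed forms; a sketch (assuming a 3 × 3 acceleration-noise covariance Σ; the block form of Q follows by carrying out the integral with T(τ) = I + τS):

```python
import numpy as np

def cv_discrete(t_delta, sigma):
    """Discrete T(k) and Q(k) for the constant-velocity (CV) model.

    sigma: 3x3 covariance of the white acceleration noise omega.
    """
    I3, Z3 = np.eye(3), np.zeros((3, 3))
    S = np.block([[Z3, I3], [Z3, Z3]])
    T = np.eye(6) + t_delta * S            # exact: S @ S = 0, series terminates
    Q = np.block([[t_delta**3 / 3 * sigma, t_delta**2 / 2 * sigma],
                  [t_delta**2 / 2 * sigma, t_delta * sigma]])
    return T, Q
```

With the paper's 0.1 s sampling interval, the position-velocity coupling term of T is simply 0.1 on the corresponding off-diagonal entries.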

4.2. Compensation UKF Algorithm

If we directly take the compensated inputs L * = [ r α β ] T as the measurements of the filter, then the measurement equation changes to the nonlinear form (8)
$$L^*(k) = h\bigl(U(k)\bigr) + \mu^*(k)$$
where μ * ( k ) is measurement noise with variance Ω ( k ) . Combined with (46) and (48), the UKF algorithm for nonlinear system state estimation is required. The formulas of the UKF are given directly as follows.
Initializing:
$$\hat U(0) = E\bigl[U(0)\bigr],\qquad P(0) = E\bigl[(U(0)-\hat U(0))(U(0)-\hat U(0))^T\bigr]$$
Computing sigma point set:
$$\mathcal{X}(k) = \Bigl[\hat U(k)\quad \hat U(k)+\sqrt{(n+\lambda)P(k)}\quad \hat U(k)-\sqrt{(n+\lambda)P(k)}\Bigr]$$
Prediction updating:
$$\begin{aligned}
\mathcal{X}(k+1|k) &= T(k)\,\mathcal{X}(k) \\
\hat U(k+1|k) &= \sum_{i=0}^{2n} W_i^m\,\mathcal{X}_i(k+1|k) \\
P(k+1|k) &= \sum_{i=0}^{2n} W_i^c\bigl[\mathcal{X}_i(k+1|k)-\hat U(k+1|k)\bigr]\bigl[\mathcal{X}_i(k+1|k)-\hat U(k+1|k)\bigr]^T + Q(k+1) \\
\hat L_i^*(k+1|k) &= h\bigl(\mathcal{X}_i(k+1|k)\bigr) \\
\hat L^*(k+1|k) &= \sum_{i=0}^{2n} W_i^m\,\hat L_i^*(k+1|k)
\end{aligned}$$
Measurement updating:
$$\begin{aligned}
P_{LL}(k+1) &= \sum_{i=0}^{2n} W_i^c\bigl[\hat L_i^*(k+1|k)-\hat L^*(k+1|k)\bigr]\bigl[\hat L_i^*(k+1|k)-\hat L^*(k+1|k)\bigr]^T + \Omega(k+1) \\
P_{UL}(k+1) &= \sum_{i=0}^{2n} W_i^c\bigl[\mathcal{X}_i(k+1|k)-\hat U(k+1|k)\bigr]\bigl[\hat L_i^*(k+1|k)-\hat L^*(k+1|k)\bigr]^T \\
K(k+1) &= P_{UL}(k+1)\,P_{LL}(k+1)^{-1} \\
\hat U(k+1) &= \hat U(k+1|k) + K(k+1)\bigl[L^*(k+1)-\hat L^*(k+1|k)\bigr] \\
P(k+1) &= P(k+1|k) - K(k+1)\,P_{LL}(k+1)\,K(k+1)^T
\end{aligned}$$
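For completeness, one predict-update cycle of the recursion above can be sketched as follows. This is a generic UKF step: reusing the propagated sigma points for the measurement update is a common simplification not spelled out in the paper, and the scaling parameters are typical defaults rather than values from the paper.

```python
import numpy as np

def ukf_step(x, P, z, f, h, Q, R, alpha=1e-3, beta=2.0, kappa=0.0):
    """One predict + update cycle of an unscented Kalman filter (sketch)."""
    n = len(x)
    lam = alpha**2 * (n + kappa) - n
    Wm = np.full(2 * n + 1, 0.5 / (n + lam))        # mean weights
    Wc = Wm.copy()                                  # covariance weights
    Wm[0] = lam / (n + lam)
    Wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    # sigma point set
    Sr = np.linalg.cholesky((n + lam) * P)
    sig = np.vstack([x[None, :], x + Sr.T, x - Sr.T])
    # prediction update
    sig_f = np.array([f(s) for s in sig])
    x_pred = Wm @ sig_f
    P_pred = Q + sum(Wc[i] * np.outer(sig_f[i] - x_pred, sig_f[i] - x_pred)
                     for i in range(2 * n + 1))
    # measurement update
    sig_h = np.array([h(s) for s in sig_f])
    z_pred = Wm @ sig_h
    Pzz = R + sum(Wc[i] * np.outer(sig_h[i] - z_pred, sig_h[i] - z_pred)
                  for i in range(2 * n + 1))
    Pxz = sum(Wc[i] * np.outer(sig_f[i] - x_pred, sig_h[i] - z_pred)
              for i in range(2 * n + 1))
    K = Pxz @ np.linalg.inv(Pzz)
    return x_pred + K @ (z - z_pred), P_pred - K @ Pzz @ K.T
```

In the paper's setting, f would be multiplication by T(k) and h the nonlinear measurement map of (8).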

4.3. Compensation Condition and Strategy

Remark 2.
Consider the situation where the two missiles and the target are on the same latitude or longitude plane during their flight; the 3-D tracking problem will then become 2-D.
As shown in Figure 3, the O X Y Z coordinate system is the longitude, latitude and height coordinate system; the axes O X , O Y , O Z denote longitude, height and latitude, respectively. At this moment, the two missiles and the target are on the same latitude plane Z = Z 0 and the track plane A B T is parallel to the plane O X Y , which means the spatial registration loses its measurements along the O Z axis and the tracking problem in 3-D space reduces to the plane O X Y . If the two missiles keep flying in this plane for some time, the estimates of the errors may diverge, which will worsen the compensation effect.
A judging condition is presented here: assume Γ is the normal vector of the tracking plane A B T , r i is the unit vector parallel to the i th axis, and ϑ i is the angle between Γ and r i .

$$\left|\,\frac{\left|r_i\cdot\Gamma\right|}{\|r_i\|\,\|\Gamma\|} - 1\,\right| \le \varepsilon$$

where ε is a sufficiently small value and ∥ · ∥ is the Euclidean norm of a vector [31]. If the inequality is satisfied, the situation in Remark 2 occurs.
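The test amounts to checking whether the normal of the plane through A, B and the target is (nearly) parallel to one of the coordinate axes. An illustrative implementation (names are our own):

```python
import numpy as np

def plane_degenerate(p_a, p_b, p_t, eps=1e-3):
    """True if the tracking plane ABT is nearly parallel to a coordinate
    plane, i.e. its normal Gamma is nearly parallel to some axis r_i."""
    gamma = np.cross(p_b - p_a, p_t - p_a)       # normal of the plane ABT
    gamma = gamma / np.linalg.norm(gamma)
    for r_i in np.eye(3):                        # unit vectors of the axes
        if abs(abs(r_i @ gamma) - 1.0) <= eps:   # the judging inequality
            return True
    return False
```

For three points sharing the same third coordinate, the plane is parallel to a coordinate plane and the test fires; tilting one point makes it pass.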
To avoid degrading the precision of the compensated tracking, the estimates of the registration errors are passed through an LFOF when (53) holds, that is

$$y = \frac{1}{\kappa s + 1}u \quad\Longleftrightarrow\quad \dot y = -\frac{1}{\kappa}y + \frac{1}{\kappa}u$$

where u is the error estimate, y is the output of the filter, and κ is the proportionality coefficient. The fourth-order Runge-Kutta algorithm [32,33] can be used to solve the above differential equation.
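A single fourth-order Runge-Kutta step of the LFOF can be written compactly (u is held constant over the step, a common zero-order-hold assumption not stated explicitly in the paper):

```python
def lfof_step(y, u, kappa, dt):
    """Advance the low-pass first-order filter dy/dt = (u - y) / kappa
    by one step dt using classical fourth-order Runge-Kutta."""
    f = lambda v: (u - v) / kappa
    k1 = f(y)
    k2 = f(y + 0.5 * dt * k1)
    k3 = f(y + 0.5 * dt * k2)
    k4 = f(y + dt * k3)
    return y + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```

For a constant input u, the output approaches u with time constant κ, which is exactly the smoothing behavior used to suppress jumps in the error estimates.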
Figure 4 shows the flow of the whole algorithm. Before target tracking, the ECEF-KF is introduced to obtain the registration errors of the missiles’ measurements. The acquired errors are further introduced to the tracking system through conditional compensation. When the tracking plane is parallel to one of the three coordinate planes, the LFOF is applied to inhibit the divergence of the error estimates.
The compensation PLKF method with the LFOF is called CCPLKF (Conditional Compensation Pseudo Linear Kalman Filter); without it, UCPLKF (Unconditional Compensation Pseudo Linear Kalman Filter). Accordingly, the compensation UKF method is divided into CCUKF and UCUKF.

5. Simulation

In this section, the performance of the proposed spatial registration algorithm is demonstrated in comparison with the traditional ECEF-KF, and the target tracking scheme is shown to be effective through a comparison of linear and nonlinear filters.
In the simulation, a ground target in curvilinear motion with variable velocity is considered. The velocity of the target is (N 10 sin(2πt/T) m/s, E 5 m/s), where t is the current time and T is the total simulation time.
The initial position of the target is (longitude 108.5°, latitude 34.01°, height 0 m). The corresponding values for missiles 1 and 2 are (longitude 108°, latitude 34°, height 300 m) and (longitude 108°, latitude 34.02°, height 200 m), respectively. Two motion situations of the missiles are considered:
Case 1: Both missiles make uniform linear flight to the east; the velocities are (N 0 m/s, E 120 m/s) and (N 0 m/s, E 100 m/s), respectively.
Case 2: The missiles make uniform linear flight as well; the velocities are (N 5 m/s, E 120 m/s) and (N −7 m/s, E 100 m/s), respectively. Figure 5 shows the absolute trajectories of the missiles and the target in the longitude, latitude and height coordinate system under the cases above.
The offset error of the first sensor is (range 200 m, elevation angle 0.5°, azimuth angle 0.2°) and that of the second sensor is (range 300 m, elevation angle 0.3°, azimuth angle 0.2°). Both sensors’ standard deviations of measurement noise are (5 m, 0.01°, 0.1°). All the measurement errors and noises are added to the ideal measurements to create the reported values. Both platforms’ initial attitudes are (yaw 1°, pitch 0°, roll 3°), and the offset errors of these angles are 0.01°, which are added to the initial attitudes to create the deviated values. The sampling interval is 0.1 s. All the simulation results are obtained over 100 Monte Carlo trials. The performance of the improved ECEF-KF algorithm proposed in this paper is compared with the method based on ECEF-LS in [24] and with the standard Kalman filter, called the traditional ECEF-KF.
The estimates of the sensor errors are presented in Figure 6, wherein the left column is for missile 1 and the right column presents the outcome for missile 2. The results of the two cases are compared in one figure. It is obvious that the curves of case 2 jitter severely during 100∼200 s, which does not appear in case 1. As mentioned in Section 4.3, the jitter is caused by the approach of the latitudes of the two missiles and the target (see in Figure 5b). At 150 s, the two missiles move to nearly the same latitude (about 34.01°) while the latitude of the target is about 34.0186°, so they are almost on the same latitude plane. That means the measurements of the ECEF-KF become two-dimensional, which causes divergence during this period; the estimates converge gradually again with the continued motion of the missiles.
Figure 7 shows the estimates of the attitude biases. The divergence during 100∼200 s appears similarly under case 2. Besides, it is found in Figure 7c,f that the roll error increases after 200 s. This is because the longitudes of the missiles come close to the target’s at the end of the track, which leads to the same phenomenon as the approach of latitude. Since the measurement equation of the ECEF-KF is a first-order linear function of the state, the estimates converge quickly from the beginning (about 2 s in Figure 7a) and jitter near the truth.
To demonstrate the superiority of the proposed registration method for sensors on moving platforms, Figure 8 illustrates the bias estimates of the traditional ECEF-KF and the improved one under case 1. Overlooking the attitudes of the oriented frames makes the former deviate from the truth and leaves it unable to estimate the alignment errors. On the contrary, the latter follows the truth closely and gives better performance. The reason for the failure of the traditional method is that the attitudes of the missiles lead to noncoincidence of the LENU system and the missile body system and influence the measurements obtained from the sensors. For instance, a positive yaw value would increase the measured azimuth angle, which is actually invariant in the LENU system. However, the term C b d of the improved method transforms the measurements from the body system to the LENU system, isolating the disturbance of the frame angles.
The RMSE (Root Mean Square Error) of the estimations of the two methods is listed in Table 1. The improved ECEF-KF outperforms the original strategy in every term, especially in azimuth because of the relatively large yaw angle, which demonstrates that the proposed algorithm is better suited to spatial registration for mobile sensors.
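The RMSE values in Table 1 follow the standard definition; a minimal helper for reference:

```python
import numpy as np

def rmse(estimates, truths):
    """Root mean square error between an estimate series and the truth,
    as used for the registration comparison in Table 1."""
    e = np.asarray(estimates, dtype=float) - np.asarray(truths, dtype=float)
    return float(np.sqrt(np.mean(e ** 2)))
```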
Taking case 2 as an example, the target tracking is conducted on missile 1. Figure 9 compares the estimations of the different schemes, with the linear filters in the left column and the nonlinear filters in the right column. The trajectory of the target is estimated in the LENU coordinate system of missile 1 (see Figure 5b). The parameters are set to ε = 0.001 and κ = 10.
Remark 3.
It is notable that ε is the limit in (53) and κ is the scale factor of the integrator in (54). The former influences the duration of error compensation (the larger ε is, the longer the LFOF is engaged), while the latter enhances the stability of the error estimations (the larger κ is, the smoother the estimation passed through the LFOF).
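A discrete sketch of the conditional low-pass first-order filter may make Remark 3 concrete. The exact forms of (53) and (54) are not reproduced in this excerpt, so the first-order-lag update and the jitter test on successive inputs below are assumptions consistent with the remark: larger κ slows (smooths) the response, and the filter is bypassed when the estimate is already steady, so no delay is added.

```python
class LowPassFirstOrderFilter:
    """Illustrative conditional LFOF: a first-order lag engaged only
    while successive bias estimates change by more than epsilon."""

    def __init__(self, epsilon=0.001, kappa=10.0, dt=0.01):
        self.epsilon = epsilon  # engagement limit, cf. (53)
        self.kappa = kappa      # integrator scale factor, cf. (54)
        self.dt = dt
        self.y = None
        self.prev_x = None

    def step(self, x):
        if self.y is None:                      # first sample initializes
            self.y = self.prev_x = x
            return self.y
        jittering = abs(x - self.prev_x) > self.epsilon
        self.prev_x = x
        if jittering:
            # first-order lag: dy/dt = (x - y) / kappa
            self.y += (x - self.y) * self.dt / self.kappa
        else:
            self.y = x                          # pass through when steady
        return self.y
```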
As seen in the left column of Figure 9, the accuracy of estimation improves dramatically after compensation (the estimation of UCPLKF is much closer to the truth than that of PLKF), because the constant biases are subtracted from the input of UCPLKF. During 100–200 s, however, the estimations of these biases deviate from the truth, which degrades the performance of UCPLKF in this period. The CCPLKF, which effectively inhibits the jitter of the curves through a condition-based LFOF, outperforms UCPLKF. Since the LFOF only operates when condition (53) is satisfied, it introduces no time delay into the overall tracking process. In the right column, likewise, the CCUKF is the best of the nonlinear tracking schemes.
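The compensation step itself is simple to sketch. The (range, elevation, azimuth) tuple layout and the pass-through behaviour when the bias estimate has not yet been vetted are assumptions based on the description above, not the paper's exact implementation.

```python
def compensate_measurement(z_raw, bias_estimate, bias_vetted):
    """Subtract estimated sensor biases from a raw (range, elevation,
    azimuth) measurement before it enters the tracking filter. In the
    conditional variants (CCPLKF/CCUKF) the correction is applied only
    after the LFOF-based logic has vetted the bias estimate."""
    if not bias_vetted:
        return z_raw
    r, el, az = z_raw
    dr, de, da = bias_estimate
    return (r - dr, el - de, az - da)
```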
The RMSE of the estimation of the target state is listed in Table 2. The accuracy of the tracking system is greatly improved by compensation, and the conditional-compensation methods (CCPLKF and CCUKF) have nearly the same, and the smallest, errors. Although the CCUKF has a slight edge over the CCPLKF in RMSEx and RMSEy, it suffers more error in RMSEz, which turns the advantage into a disadvantage in the joint distance. This result can be explained by comparing Figure 9c,f. As mentioned, the registration errors jitter severely during 100–200 s; they are compensated directly into the measurement inputs of the UKF but only indirectly introduced into the PLKF. The latter therefore tolerates input jitter better, as Figure 10 also confirms.
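Because each sample's position error satisfies e_r² = e_x² + e_y² + e_z², the joint-distance RMSE is the root-sum-square of the per-axis values, which allows the RMSEr column of Table 2 to be cross-checked:

```python
import math

def joint_rmse(rmse_x, rmse_y, rmse_z):
    """Joint-distance RMSE (RMSEr) combining per-axis RMSE values via
    the root-sum-square relation underlying Table 2."""
    return math.sqrt(rmse_x ** 2 + rmse_y ** 2 + rmse_z ** 2)
```

Applied to the CCPLKF row this gives about 30.81 m, and to the CCUKF row about 30.97 m, matching the conclusion that CCUKF's larger z-axis error outweighs its small x/y advantage.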
The estimations of CCPLKF and CCUKF are compared in Figure 10. Although the local magnifications in Figure 10a,b show that the curves of CCUKF are smoother than those of CCPLKF, Figure 10c exposes its flaw when the inputs jitter more severely. Another serious weakness of CCUKF is its relatively heavy computation: in the simulation, CCPLKF takes 2.32 s of run time, whereas CCUKF takes 5.62 s. In summary, the CCPLKF method gives the best overall performance in target tracking.

6. Conclusions

In this paper, a novel spatial registration algorithm is proposed to estimate the biases of measurements and orientation angles simultaneously. Linear and nonlinear filtering algorithms are introduced to solve the three-dimensional maneuvering-target tracking problem, with the estimates of the registration method used to modify the inputs of the tracking scheme. In addition, a low-pass first-order filter is selectively applied to inhibit the jitter of the estimations without introducing time delay. Simulations show the good performance and robustness of the registration approach and the advantage of the linear tracking method with modified inputs. The results of this paper can also be extended to the registration and tracking of a dynamic network consisting of a large number of sensors and multiple maneuvering targets [34,35].

Author Contributions

X.L. and Y.X. conceived and designed the experiments; Y.X. derived the mathematical expressions. All the authors contributed to the writing of the paper.

Acknowledgments

This work was supported by “the Fundamental Research Funds for the Central Universities” No. 61703339 and No. 3102016ZB021. The authors would like to thank the associate editor and anonymous reviewers for their valuable comments and suggestions to improve this paper.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. Li, M.; Jing, Z.; Hu, M.; Dong, P. Exact Joint Estimation of Registration Error and Target States Based on GLMB Filter. In Proceedings of the 20th International Conference on Information Fusion, Xi’an, China, 10–13 July 2017. [Google Scholar]
  2. Xiu, J.; Dong, K.; He, Y. Systematic error real-time registration based on modified input estimation. Syst. Eng. Electron. 2016, 27, 986–992. [Google Scholar] [CrossRef]
  3. Zhang, F.; Christian, B.; Alois, K. Multiple Vehicle Cooperative Localization with Spatial Registration Based on a Probability Hypothesis Density Filter. Sensors 2014, 14, 995–1009. [Google Scholar] [CrossRef] [PubMed]
  4. Song, W. Research progress of spatial registration algorithms for sensor data. Trans. Microsyst. Technol. 2012, 8, 5–8. [Google Scholar]
  5. He, Y.; Xiu, J.; Guan, X. Radar Network Error Registration Algorithm. Radar Data Processing with Applications; John Wiley & Sons Singapore Pte. Ltd.: Singapore, 2016. [Google Scholar]
  6. Zheng, Z.; Zhu, Y. New least squares registration algorithm for data fusion. IEEE Trans. Aerosp. Electron. Syst. 2004, 40, 1410–1416. [Google Scholar] [CrossRef]
  7. Dong, Y.; He, Y.; Wang, G. A Generalized Least Squares Registration Algorithm with Earth-Centered Earth-Fixed (ECEF) Coordinate System [Radar Sensor Fusion Applications]. In Proceedings of the International Conference on Computational Electromagnetics and ITS Applications, Beijing, China, 1–4 November 2004. [Google Scholar]
  8. Yang, X.; Wang, Y.; Shan, X. Ill-Condition Controlled Maximum Likelihood Registration for Multiple Sensors. Adv. Mater. Res. 2014, 898, 797–801. [Google Scholar]
  9. Han, C.; Zhu, H.; Duan, Z. Time and Spatial Registration. In Multi-Source Information Fusion, 2nd ed.; Tsinghua University Press: Beijing, China, 2010; pp. 198–202. [Google Scholar]
  10. Liu, Y.; Yang, Z.; Han, C. A study on space registration algorithm of sensor attitude bias. Mod. Radar 2009, 31, 29–31. [Google Scholar]
  11. Helmick, R.E.; Rice, T.R. Removal of alignment errors in an integrated system of two 3-d sensors. IEEE Trans. Aerosp. Electron. Syst. Aes. 1993, 29, 1333–1343. [Google Scholar] [CrossRef]
  12. Bo, Y.; Chen, Z.; Yin, M.; Wang, T. Improved different dimensional sensors combined space registration algorithm. Math. Prob. Eng. 2015, 2015, 1–9. [Google Scholar] [CrossRef]
  13. Van Doorn, B.A.; Blom, H.A. Systematic Error Estimation in Multisensor Fusion Systems. In Proceedings of the SPIE—Conference on Signal and Data Processing of Small Targets, Orlando, FL, USA, 22 October 1993; pp. 450–461. [Google Scholar]
  14. Li, S. Research on the Method of Locating and Tracking Based on Multi-Source Information Fusion Technology. Master’s Degree, Henan University, Kaifeng, China, May 2014. [Google Scholar]
  15. Doulamis, A. Dynamic tracking re-adjustment: A method for automatic tracking recovery in complex visual environments. Multimed. Tool. Appl. 2010, 50, 49–73. [Google Scholar] [CrossRef]
  16. Kalman, R.E. A New Approach to Linear Filtering and Prediction Problems. J. Basic Eng. 1960, 82, 35–45. [Google Scholar] [CrossRef]
  17. Bucy, R.S.; Senne, K.D. Digital Synthesis of Non-linear Filters. Automatica 1971, 7, 287–298. [Google Scholar] [CrossRef]
  18. Julier, S.J.; Uhlmann, J.K. New Extension of the Kalman Filter to Nonlinear Systems. Int. Soc. Opt. Photonics 1997, 182–193. [Google Scholar] [CrossRef]
  19. Pathiranage, C.D.; Watanabe, K.; Jayasekara, B.; Izumi, K. Simultaneous Localization and Mapping: A Pseudolinear Kalman Filter (PLKF) Approach. In Proceedings of the International Conference on Information and Automation for Sustainability, Colombo, Sri Lanka, 12–14 December 2008; pp. 61–66. [Google Scholar]
  20. Li, X.R.; Jikov, V.P. A Survey of Maneuvering Target Tracking-part III Measurement Models. In Proceedings of the Conference on Signal and Date Processing of Small Targets, San Diego, CA, USA, 26 November 2001. [Google Scholar]
  21. Li, W.; Peng, W.; Peng, L.; Li, Y. Semidefinite programming algorithm with TDOA and FDOA measurements based on WGS-84 earth model. Acta Aeronaut. ET Astronaut. Sin. 2017, 38, 283–290. [Google Scholar]
  22. Guinot, B.; Feissel, M. Bureau International de l’Heure. Annual Report 1984; Observatoire de Paris: Paris, France, 1985; p. 22. [Google Scholar]
  23. Crespi, M.; Baiocchi, V.; De Vendictis, L.; Giannone, F. A New Rigorous Model for the Orthorectification of Syncronous and Asyncronous High Resolution Imagery. In Proceedings of the 24th EARSeL Symposium, Dubrovnik, Croatia, 25–27 May 2004; pp. 461–468. [Google Scholar]
  24. Zhou, Y.; Leung, H.; Blanchette, M. Sensor alignment with Earth-centered Earth-fixed (ECEF) coordinate system. IEEE Trans. Aerosp. Electron. Syst. 2002, 35, 410–418. [Google Scholar] [CrossRef]
  25. Carlson, G.E.; Bott, M.E. Tilt-Table Alignment for Inertial-Platform Maintenance without a Surveyed Site. IEEE Trans. Aerosp. Electron. Syst. 1973, AES-9(3), 406–411. [Google Scholar] [CrossRef]
  26. Lo, J.; Ma, W.; Yuan, J. Inertial Navigation. In Principle and Application of Integrated Navigation, 1st ed.; NWPU Press: Xi’an, China, 2012; p. 61. [Google Scholar]
  27. Qin, Y.; Zhang, H.; Wang, S. Discrete Kalman Filter. In Kalman Filter and Integrated Navigation Principle, 2nd ed.; NWPU Press: Xi’an, China, 2012; p. 34. [Google Scholar]
  28. Xu, Y. The Concept and Some Conclusion of Rotation Matrix. J. Jiangsu Open Univ. 1997, 1997, 81–83. [Google Scholar]
  29. Qin, Y. Strapdown inertial navigation system. In Inertial Navigation, 1st ed.; Science Press: Beijing, China, 2006; p. 356. [Google Scholar]
  30. Liu, C. Study on Motion Model and Tracking Algorithm of Radar Maneuvering Target. Ph.D. Thesis, Xidian University, Xi’an, China, April 2014. [Google Scholar]
  31. Guo, Z.; Zhou, J.; Guo, J.; Cieslak, J.; Chang, J. Coupling characterization-based robust attitude control scheme for hypersonic vehicles. IEEE Trans. Ind. Electron. 2017, 64, 6350–6361. [Google Scholar] [CrossRef]
  32. Yang, H.; Gao, Y. GPS Satellite Orbit Prediction at User End for Real-Time PPP System. Sensors 2017, 17, 1981. [Google Scholar] [CrossRef] [PubMed]
  33. Abadi, M.A.H.; Cao, B.Y. Solving First Order Fuzzy Initial Value Problem by Fourth Order Runge-Kutta Method Based on Different Means. Fuzzy Inf. Eng. Decis. 2018. [Google Scholar] [CrossRef]
  34. Nillius, P.; Sullivan, J.; Carlsson, S. Multi-Target Tracking-Linking Identities Using Bayesian Network Inference. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, New York, NY, USA, 17–22 June 2006; pp. 2187–2194. [Google Scholar]
  35. Andriyenko, A.; Schindler, K.; Roth, S. Discrete-Continuous Optimization for Multi-Target Tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, 16–21 June 2012; pp. 1926–1933. [Google Scholar]
Figure 1. The definition of coordinate systems.
Figure 2. Measurement of target in body coordinate system.
Figure 3. Judgement of the spatial relationship.
Figure 4. Architecture of registration and tracking algorithm.
Figure 5. (a) Trajectories of missiles and target under case 1; (b) Trajectories of missiles and target under case 2.
Figure 6. (a) The estimation of range error of missile 1; (b) The estimation of elevation error of missile 1; (c) The estimation of azimuth error of missile 1; (d) The estimation of range error of missile 2; (e) The estimation of elevation error of missile 2; (f) The estimation of azimuth error of missile 2.
Figure 7. (a) The estimation of yaw bias of missile 1; (b) The estimation of pitch bias of missile 1; (c) The estimation of roll bias of missile 1; (d) The estimation of yaw bias of missile 2; (e) The estimation of pitch bias of missile 2; (f) The estimation of roll bias of missile 2.
Figure 8. (a) The comparison of range error of missile 1; (b) The comparison of elevation error of missile 1; (c) The comparison of azimuth error of missile 1; (d) The comparison of range error of missile 2; (e) The comparison of elevation error of missile 2; (f) The comparison of azimuth error of missile 2.
Figure 9. (a) Linear estimation of x coordinate of target; (b) Linear estimation of y coordinate of target; (c) Linear estimation of z coordinate of target; (d) Nonlinear estimation of x coordinate of target; (e) Nonlinear estimation of y coordinate of target; (f) Nonlinear estimation of z coordinate of target.
Figure 10. (a) Estimation error of x coordinate of target; (b) Estimation error of y coordinate of target; (c) Estimation error of z coordinate of target.
Table 1. RMSE of registration errors.

Method               Platform    Range (m)   Elevation (°)   Azimuth (°)
Improved ECEF-KF     Missile 1   0.1878      0.0126          0.0327
Improved ECEF-KF     Missile 2   0.1888      0.0057          0.0255
Traditional ECEF-KF  Missile 1   0.2667      0.0522          0.5557
Traditional ECEF-KF  Missile 2   0.2688      0.0475          1.0674
Table 2. RMSE of tracking errors.

Method    RMSEx (m)   RMSEy (m)   RMSEz (m)   RMSEr (m)
PLKF      191.6083    269.8833    88.9778     342.7357
UCPLKF    6.3140      20.5581     24.1859     32.3645
CCPLKF    6.2862      18.8728     23.5337     30.8145
UKF       191.3337    269.8901    89.1703     342.6377
UCUKF     6.2568      20.5293     24.3977     32.4938
CCUKF     6.2366      18.8422     23.7803     30.9746

Share and Cite

MDPI and ACS Style

Lu, X.; Xie, Y.; Zhou, J. Improved Spatial Registration and Target Tracking Method for Sensors on Multiple Missiles. Sensors 2018, 18, 1723. https://doi.org/10.3390/s18061723
