Article

Velocity Estimation for Space Infrared Dim Targets Based on Multi-Satellite Observation and Robust Locally Weighted Regression

1 Key Laboratory of Intelligent Infrared Perception, Chinese Academy of Sciences, Shanghai 200083, China
2 Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai 200083, China
3 University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(11), 2767; https://doi.org/10.3390/rs15112767
Submission received: 17 April 2023 / Revised: 18 May 2023 / Accepted: 24 May 2023 / Published: 26 May 2023

Abstract:
Velocity estimation for space moving targets is a key part of space situational awareness. However, most existing methods do not consider the satellite observation process, and their performance depends mainly on a preset target motion state, which greatly limits their applicability. To accurately obtain the motion characteristics of space infrared dim targets in space-based infrared detection, a velocity estimation method based on multi-satellite observation and robust locally weighted regression is proposed. Firstly, according to parameters such as the satellite position, satellite attitude angles, and sensor line of sight, an overall target observation model from the sensor coordinate frame to the Earth-centered inertial coordinate frame is established, and the pixel coordinates of the target imaging point are extracted using the gray-weighted centroid method. Then, combined with the least squares criterion, the position sequence of the space target is obtained. Finally, robust locally weighted regression is performed on the target position sequence to estimate the velocity. The feasibility of the proposed method was verified through simulation examples, with the results showing that its Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) were only 0.0733 m/s and 1.6640 m/s without measurement error. Moreover, its velocity estimation accuracy was better than that of other methods in most scenarios. In addition, the estimation accuracy under various measurement errors was analyzed, and the pixel coordinate extraction error was found to have the greatest impact on velocity estimation accuracy. The proposed method provides a technical basis for the recognition of space infrared dim moving targets.

Graphical Abstract

1. Introduction

The space-based infrared (IR) detection system is not restricted by geography or atmosphere and has strong concealment, so it can effectively detect and identify space targets such as missiles and launch vehicles [1,2,3]. The high-precision extraction of velocity features is a key technology of space-based IR detection systems and an important prerequisite for space target recognition. Research on the velocity features of space IR targets has garnered extensive attention in various application areas, including space debris monitoring [4], space kill assessment [5], and target tracking [6]. Owing to factors such as design cost and weight control, different targets exhibit relative differences in speed and direction before and after release, and this difference provides a reference for target classification and recognition. Because of the long detection distance and the low signal-to-noise ratio of targets, detecting targets on the image plane is difficult. Therefore, researchers have conducted many theoretical analyses and simulations on the detection and recognition of dim moving targets in deep-space backgrounds [7,8,9,10,11]. Satellites located in specific orbits can use the passive observation of IR sensors to obtain Line-of-Sight (LOS) measurement information from the sensor to the target. Combined with the coordinate transformation relationship, the position and velocity information of the observed target can be obtained [12].
The commonly used velocity estimation method mainly combines prior knowledge of the space target dynamics model and adopts the batch processing algorithm or the filtering algorithm to estimate the target motion information. The basic premise of batch processing methods is to linearize nonlinear functions around the initial estimate using Taylor series expansion and then iteratively solve them [13]. However, the solving process of this method is unstable and the computation is complex. The most classic filtering algorithm is the Extended Kalman Filter (EKF). Sandip [14] used the EKF to estimate the motion position and velocity of the target and found that the EKF algorithm had a significant advantage in calculating velocity. The EKF can use the linearization method to overcome the nonlinear filtering problem with relatively low computational complexity. However, it is prone to divergence problems in situations where nonlinearity is strong and the amount of data is limited. Therefore, the Unscented Kalman Filter (UKF) [15,16] and the Cubature Kalman Filter (CKF) [17,18] have been proposed. Huang et al. introduced the variance statistical function in UKF to calculate the residual weight in real time, which improved the sensitivity of the system to target maneuvering [19]. Cui et al. [20] proposed an adaptive high-order CKF algorithm to solve the problem in which the accuracy of CKF velocity estimation decreases when the system state changes abruptly. In summary, the above-mentioned correlation filtering methods are based on the assumption that the motion model of space objects is known, and aim to balance the calculation efficiency and estimation accuracy according to this motion model. The effectiveness of these algorithms depends largely on the accuracy of the established motion model and the selection of the initial state, and the accuracy of the model greatly influences the final estimation result.
During actual detection, unknown targets are usually uncooperative, and there is no prior information about their structure or motion parameters. In this case, the aforementioned velocity estimation methods are not applicable. In recent years, some scholars have used geometric methods [21,22,23] to locate space targets. These methods do not need to establish a motion model for the space target but only focus on the rotation relationship between the space target and its IR imaging position, and their computational cost is low, making them easy to run on board the satellite. Combining the relationships among position, velocity, and time, it is also feasible to use such algorithms to estimate the velocity. Limited by detector performance, long-distance objects appear as small dots on the imaging plane of the detector. In the absence of prior information, only angular information can be obtained under single-satellite observation; the distance between the target and the detector, and hence the accurate position and velocity of the target, cannot be obtained. Therefore, the geometric method usually requires two or more satellites to observe the target simultaneously.
There are many error sources in a space-based detection system, and they seriously affect the target positioning results [12,24]. How accurately the positioning information is fitted directly determines the accuracy of velocity estimation. Commonly used fitting methods are the Weighted Moving Average (WMA) [25], Least Mean Square (LMS) [26], Savitzky–Golay filter [27], locally weighted regression (LWR) [28], and their variants. WMA and its variants are used in various fields to estimate models and predict future events [29,30], but they respond slowly to sudden, fast-changing data, are sensitive to weight selection, and have a poor smoothing effect. LMS gradually adjusts the filter weights in an iterative manner to minimize the mean square of the prediction error [26]. However, LMS converges slowly and requires high signal correlation. The effectiveness of the classic Savitzky–Golay filter depends on the correct choice of polynomial degree; a high-order polynomial fit may lead to overfitting or loss of detail. Ochieng et al. proposed an improved Adaptive Savitzky–Golay filter (ASG), which automatically adjusts its parameters by dynamically modifying the smoothing results of the filter to ensure high-precision feature extraction [31]. The LWR method can adapt to the distribution and density of different data points and retain the local characteristics and nonlinear trends of the data. It is widely used in fields such as automatic driving [32], software engineering effort estimation [33], and computed tomography [34].
Building on the previous research, this study utilized different IR detectors mounted on multiple satellites at a certain distance, combined with robust locally weighted regression, to extract the velocity characteristics of dim space targets. This method does not need to assume the motion state of the space object, and it is more accurate and stable than other methods. There are three main contributions of this study:
(1) A multi-satellite observation model under space-based IR conditions is established, which solves the problem of insufficient information obtained by single-satellite observation.
(2) A robust locally weighted regression method is proposed for velocity estimation, which can accurately and quickly calculate the target velocity.
(3) The impacts of satellite position measurement error, satellite attitude angle measurement error, sensor pointing error, and pixel coordinate extraction error on velocity estimation accuracy are analyzed.
The remainder of this paper is organized as follows: In Section 2, the space-based IR multi-satellite observation model is established in detail; Section 3 presents a velocity estimation method based on robust locally weighted regression; Section 4 provides the simulation experiment results and an analysis of the impact factors; and, finally, Section 5 provides the conclusions.

2. Space-Based IR Multi-Satellite Observation Model

When a single IR detector observes a target, only the angular information of the target can be obtained by combining the coordinates of the imaging point and the optical system parameters. Thus, the greatest estimation uncertainty always exists along the LOS direction, and the target position and velocity information cannot be accurately obtained [35]. Moreover, due to the limitations of single-satellite observations, the continuity of the observation is usually unsatisfactory. Using multiple IR detectors (mounted on different satellites) can overcome the above limitation in extracting the velocity characteristics of space targets [36]. This section presents the complete derivation process for the space-based multi-satellite observation model. Firstly, the overall space target observation model is derived, followed by the extraction of the target IR imaging point position using the gray-weighted centroid method. Finally, the multi-satellite cross-positioning model is obtained.

2.1. Space Target Observation Model

The observation of space targets is used to solve the projection problem of the target in three-dimensional space by discerning the position information of the target on the two-dimensional plane and the full-link projection transformation relationship between the target and the sensor [37]. In the process of solving the target position, it is necessary to establish and clarify multiple coordinate frames and their conversion relationships. As shown in Figure 1, the process involves four coordinate transformations of the target position vector. The specific transformations are as follows.
Assume that the position vectors of the space target and the satellite in the Earth-Centered Inertial (ECI) coordinate frame ($O_e X_e Y_e Z_e$) are $\mathbf{r}_T$ and $\mathbf{r}_s$, the latitude angle is $u$, the orbital inclination angle is $i$, and the Right Ascension of the Ascending Node (RAAN) is $\omega$. According to the coordinate rotation adjustment relationship, the position vector $\mathbf{r}_{orb}$ of the space target in the satellite orbital coordinate frame ($O_o X_o Y_o Z_o$) can be calculated as
$$\mathbf{r}_{orb} = B R_z(u) R_x(i) R_z(\omega)\,\mathbf{r}_T + \mathbf{c} = R_{eo}\,\mathbf{r}_T + \mathbf{c} \quad (1)$$
where $\mathbf{c} = \big(0,\, 0,\, -\lVert \mathbf{r}_s \rVert\big)^{\mathrm{T}}$, with $\lVert\cdot\rVert$ denoting the vector 2-norm; $R_{eo}$ represents the transition matrix from the ECI coordinate frame to the satellite orbital coordinate frame; $R_{(\cdot)}$ denotes a rotation matrix; and $B$ refers to the coordinate-axis adjustment matrix. These matrices are expressed as follows.
$$R_x(\theta) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & \sin\theta \\ 0 & -\sin\theta & \cos\theta \end{bmatrix} \quad (2)$$
$$R_y(\theta) = \begin{bmatrix} \cos\theta & 0 & -\sin\theta \\ 0 & 1 & 0 \\ \sin\theta & 0 & \cos\theta \end{bmatrix} \quad (3)$$
$$R_z(\theta) = \begin{bmatrix} \cos\theta & \sin\theta & 0 \\ -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad (4)$$
$$B = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix} \quad (5)$$
The three-axis attitude angles of the observation satellite relative to the satellite orbital coordinate frame are the roll angle $\zeta$, pitch angle $\xi$, and yaw angle $\varphi$. According to the rotation of the satellite attitude, the target position vector $\mathbf{r}_{sat}$ in the satellite body coordinate frame ($O_b X_b Y_b Z_b$) can be obtained as
$$\mathbf{r}_{sat} = R_x(\zeta) R_y(\xi) R_z(\varphi)\,\mathbf{r}_{orb} = R_{ob}\,\mathbf{r}_{orb} \quad (6)$$
where $R_{ob}$ is the transition matrix from the satellite orbital coordinate frame to the satellite body coordinate frame. When the three attitude angles are all 0, $\mathbf{r}_{sat} = \mathbf{r}_{orb}$.
Upon rotating the position vector $\mathbf{r}_{sat}$ by the azimuth angle $\rho$ around the $Z_b$ axis, and then by the elevation angle $\epsilon$ around the $Y_b$ axis, the position vector $\mathbf{r}_{sen}$ in the sensor coordinate frame ($O_s X_s Y_s Z_s$) is obtained:
$$\mathbf{r}_{sen} = R_y(\pi/2 - \epsilon)\, R_z(\rho)\,\mathbf{r}_{sat} = R_{bs}\,\mathbf{r}_{sat} \quad (7)$$
where $R_{bs}$ is the transition matrix from the satellite body coordinate frame to the sensor coordinate frame.
The Z-axes of the sensor coordinate frame and the image plane coordinate frame coincide. Assume that the components of the target position along the coordinate axes of the sensor coordinate frame are $(x_{sen}, y_{sen}, z_{sen})$. According to the principle of lens imaging, the object and image are conjugate. Therefore, the imaging point coordinates are
$$(x_{foc},\, y_{foc}) = \frac{f_{IFOV}}{z_{sen}}\,(x_{sen},\, y_{sen}) \quad (8)$$
where $f_{IFOV}$ is the focal length and $z_{sen}$ is the component of the target position along the $Z_s$ axis. The imaging point coordinates are rounded and translated to obtain the target pixel coordinates $\mathbf{p}$:
$$\mathbf{p} = (p_x,\, p_y) = \left( \left\lfloor \frac{x_{foc}}{d} \right\rfloor,\, \left\lfloor \frac{y_{foc}}{d} \right\rfloor \right) + \left( \frac{N_r}{2},\, \frac{N_c}{2} \right) \quad (9)$$
where $d$ represents the pixel size, $\lfloor\cdot\rfloor$ is the rounding operation, and $N_r$ and $N_c$ denote the number of pixels in the rows and columns of the image plane, respectively.
By combining Equations (1) to (9), the overall observation model projecting the target from three-dimensional space to its image plane position can be obtained:
$$\mathbf{p} = f\big( R_{bs} \cdot R_{ob} \cdot (R_{eo}\,\mathbf{r}_T + \mathbf{c}) \big) \quad (10)$$
where $f(\cdot)$ is the transformation from the sensor coordinate frame to the image plane coordinate frame ($O_f X_f Y_f$).
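The full projection chain of Equations (1)-(10) can be sketched in NumPy as follows. The function name, argument layout, and the sanity-check values are illustrative assumptions, not the authors' code (the paper's experiments were implemented in MATLAB):

```python
import numpy as np

# Frame-rotation (coordinate transformation) matrices.
def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def Ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, -s], [0, 1, 0], [s, 0, c]])

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

B = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]])  # axis adjustment, Eq. (5)

def project_to_pixel(r_T, r_s, u, i, raan, att, los, f, d, Nr, Nc):
    """Project an ECI target position r_T to pixel coordinates.

    r_s: satellite ECI position; (u, i, raan): latitude angle, orbital
    inclination, RAAN; att = (roll, pitch, yaw); los = (azimuth,
    elevation); f: focal length; d: pixel size; Nr, Nc: image size.
    """
    R_eo = B @ Rz(u) @ Rx(i) @ Rz(raan)                    # ECI -> orbital frame
    c = -R_eo @ r_s                                        # puts the satellite at the origin
    r_orb = R_eo @ r_T + c                                 # Eq. (1)
    r_sat = Rx(att[0]) @ Ry(att[1]) @ Rz(att[2]) @ r_orb   # Eq. (6)
    r_sen = Ry(np.pi / 2 - los[1]) @ Rz(los[0]) @ r_sat    # Eq. (7)
    x_foc, y_foc = f / r_sen[2] * r_sen[:2]                # pinhole projection, Eq. (8)
    px = np.floor(x_foc / d) + Nr / 2                      # Eq. (9)
    py = np.floor(y_foc / d) + Nc / 2
    return px, py
```

With zero attitude angles and an elevation of π/2, the chain collapses to the axis-adjustment matrix followed by the pinhole projection, which makes the sketch easy to sanity-check by hand.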

2.2. Target IR Image Plane Extraction

Ideally, the target is imaged as a dispersed spot on the IR sensor. However, in the case of long exposure, the space target moves too fast and is imaged as a strip in different directions. The angle-dependent anisotropic Gaussian spread function [38] is introduced as follows:
$$g(x,y) = A \cdot \exp\left( -\frac{\big[ (x - x')\cos\theta + (y - y')\sin\theta \big]^2}{2\sigma_x^2} - \frac{\big[ (x - x')\sin\theta - (y - y')\cos\theta \big]^2}{2\sigma_y^2} \right) \quad (11)$$
where $(x', y')$ and $(x, y)$ are the positions of the target center and the distributed pixels, respectively; $\theta$ represents the rotation angle of the strip target image; $A$ refers to the maximal energy; and $\sigma_x$ and $\sigma_y$ are the horizontal and vertical spread parameters of the dispersed spot, respectively. The imaging of an ideal target and a space moving target is shown in Figure 2.
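A minimal sketch of Eq. (11) for generating a synthetic streak image; the function name and grid layout are illustrative, and the cross term is written with the opposite sign, which is equivalent after squaring:

```python
import numpy as np

def streak_psf(shape, center, A, theta, sigma_x, sigma_y):
    """Anisotropic Gaussian streak, cf. Eq. (11): theta rotates the
    major axis; (sigma_x, sigma_y) are the spread parameters."""
    yy, xx = np.indices(shape)
    dx, dy = xx - center[0], yy - center[1]
    u = dx * np.cos(theta) + dy * np.sin(theta)     # along-streak offset
    v = -dx * np.sin(theta) + dy * np.cos(theta)    # across-streak offset
    return A * np.exp(-u**2 / (2 * sigma_x**2) - v**2 / (2 * sigma_y**2))
```

With `sigma_x > sigma_y`, the energy spreads along the rotated x-direction, reproducing the strip-shaped image of a fast-moving target.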
Due to point spread and other factors, the energy of the target imaging point is dispersed to the surrounding pixels, which affects the accurate pixel coordinates of the target imaging point [39,40]. The target imaging region is divided into rectangular regions. Assuming that the X-axis coordinate range of a region is $[m, n]$ and the Y-axis coordinate range is $[u, v]$, the pixel coordinates of the target imaging point $\mathbf{p}$ are expressed as
$$\mathbf{p} = (p_x,\, p_y) = \frac{\sum_{x=m}^{n} \sum_{y=u}^{v} G(x,y) \cdot (x,y)}{\sum_{x=m}^{n} \sum_{y=u}^{v} G(x,y)}. \quad (12)$$
When the centroid method is used, $G(x,y) = 1$ if $g(x,y) > \epsilon$ and $G(x,y) = 0$ otherwise, where $g(x,y)$ is the gray value of the point $(x,y)$ in the rectangular region and $\epsilon$ represents the average gray value of the background and noise. When the gray-weighted centroid method is used, $G(x,y) = g(x,y)$. When the threshold centroid method is employed, $G(x,y) = g(x,y) - \gamma$, where $\gamma$ is the set threshold. The gray-weighted centroid method is widely used in engineering because of its simple calculation and high positioning accuracy, and the fact that no new parameters are introduced. Therefore, this study used the gray-weighted centroid method to extract the pixel coordinates of the target imaging point.
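A sketch of the gray-weighted centroid of Eq. (12) with $G(x,y) = g(x,y)$; the function name and window convention are illustrative assumptions:

```python
import numpy as np

def gray_weighted_centroid(img, window):
    """Sub-pixel target position via the gray-weighted centroid of
    Eq. (12), with G(x, y) = g(x, y) over a rectangular window.
    window = (m, n, u, v): inclusive x-range [m, n] and y-range [u, v]."""
    m, n, u, v = window
    patch = img[u:v + 1, m:n + 1].astype(float)   # rows index y, columns index x
    total = patch.sum()
    ys, xs = np.indices(patch.shape)
    p_x = (patch * (xs + m)).sum() / total
    p_y = (patch * (ys + u)).sum() / total
    return p_x, p_y
```

For a symmetric dispersed spot the centroid coincides with the peak pixel; for an asymmetric streak it returns the energy-weighted sub-pixel center.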

2.3. Multi-Satellite Cross-Positioning

If the space target is at the satellite position ($\mathbf{r}_T = \mathbf{r}_s$), Equation (1) reduces to $\mathbf{0} = R_{eo}\,\mathbf{r}_s + \mathbf{c}$, that is, $\mathbf{c} = -R_{eo}\,\mathbf{r}_s$. Combined with the formulae in Section 2.1, the target position vector $\mathbf{r}_{sen}$ in the sensor coordinate frame satisfies
$$\mathbf{r}_{sen} = R_{bs} \cdot R_{ob} \cdot (R_{eo}\,\mathbf{r}_T + \mathbf{c}) = R_{bs} \cdot R_{ob} \cdot R_{eo}\,(\mathbf{r}_T - \mathbf{r}_s). \quad (13)$$
Let $I = (R_{bs} \cdot R_{ob} \cdot R_{eo})^{-1}$; then $I$ is the overall transition matrix from the sensor coordinate frame to the ECI coordinate frame. Combined with the results of the image plane position extraction and Equation (9), the target coordinates in the image plane coordinate frame can be obtained:
$$(x_{foc},\, y_{foc}) = \left( p_x - \frac{N_r}{2},\; p_y - \frac{N_c}{2} \right) \times d \quad (14)$$
The target unit direction vector in the sensor coordinate frame is
$$\mathbf{d}_m = \frac{(x_{foc},\, y_{foc},\, f_{IFOV})}{\lVert (x_{foc},\, y_{foc},\, f_{IFOV}) \rVert_2}. \quad (15)$$
The unit direction vector is converted to the ECI coordinate frame via $\mathbf{U} = I\,\mathbf{d}_m$, where $\mathbf{U}$ is the unit LOS vector of the target in the ECI coordinate frame.
Single-satellite observation can only provide space target angular information; to accurately determine the position of the target, at least two or more satellites are needed. In this study, the principle of multi-satellite cross-positioning was used to determine the three-dimensional position information of space targets under only angle information, as shown in Figure 3. According to the geometric relationship between the target and the IR detector in the ECI coordinate frame, the following equations are established:
$$\mathbf{r}_T = \mathbf{r}_{i,s} + \lVert \mathbf{r}_T - \mathbf{r}_{i,s} \rVert\, \mathbf{U}_i, \quad i = 1, 2, \ldots, N \quad (16)$$
where $N$ is the number of satellites that simultaneously observe the target, and $\lVert \mathbf{r}_T - \mathbf{r}_{i,s} \rVert$ represents the distance between the target position vector $\mathbf{r}_T$ and the satellite position vector $\mathbf{r}_{i,s}$. Taking the cross product of both sides with $\mathbf{U}_i$ gives
$$\mathbf{r}_T \times \mathbf{U}_i = \mathbf{r}_{i,s} \times \mathbf{U}_i. \quad (17)$$
The above equations are expanded in three dimensions:
$$\begin{bmatrix} 0 & U_{i,z} & -U_{i,y} \\ -U_{i,z} & 0 & U_{i,x} \\ U_{i,y} & -U_{i,x} & 0 \end{bmatrix} \cdot \begin{bmatrix} r_{T,x} \\ r_{T,y} \\ r_{T,z} \end{bmatrix} = \begin{bmatrix} U_{i,z}\, r_{i,s,y} - U_{i,y}\, r_{i,s,z} \\ U_{i,x}\, r_{i,s,z} - U_{i,z}\, r_{i,s,x} \\ U_{i,y}\, r_{i,s,x} - U_{i,x}\, r_{i,s,y} \end{bmatrix}. \quad (18)$$
Equation (18) is the target positioning equation. By stacking the positioning equations of the $N$ satellites, the system can be written as
$$\begin{bmatrix} 0 & U_{1,z} & -U_{1,y} \\ -U_{1,z} & 0 & U_{1,x} \\ U_{1,y} & -U_{1,x} & 0 \\ \vdots & \vdots & \vdots \\ 0 & U_{N,z} & -U_{N,y} \\ -U_{N,z} & 0 & U_{N,x} \\ U_{N,y} & -U_{N,x} & 0 \end{bmatrix} \cdot \begin{bmatrix} r_{T,x} \\ r_{T,y} \\ r_{T,z} \end{bmatrix} = \begin{bmatrix} U_{1,z}\, r_{1,s,y} - U_{1,y}\, r_{1,s,z} \\ U_{1,x}\, r_{1,s,z} - U_{1,z}\, r_{1,s,x} \\ U_{1,y}\, r_{1,s,x} - U_{1,x}\, r_{1,s,y} \\ \vdots \\ U_{N,z}\, r_{N,s,y} - U_{N,y}\, r_{N,s,z} \\ U_{N,x}\, r_{N,s,z} - U_{N,z}\, r_{N,s,x} \\ U_{N,y}\, r_{N,s,x} - U_{N,x}\, r_{N,s,y} \end{bmatrix}. \quad (19)$$
The above system is abbreviated as
$$P_{3N \times 3}\, \mathbf{r}_{3 \times 1} = Q_{3N \times 1}. \quad (20)$$
Since there are 3 unknowns and $3N$ equations, the system is overdetermined for $N \geq 2$ and has no exact analytical solution. Estimating the optimal solution with the least squares method yields the target spatial position $\mathbf{r}_T$ in the ECI coordinate frame:
$$\mathbf{r}_T = (P^{\mathrm{T}} P)^{-1} P^{\mathrm{T}} Q \quad (21)$$
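The cross-positioning of Eqs. (16)-(21) amounts to stacking one skew-symmetric block per satellite and solving the overdetermined system by least squares. A sketch, with illustrative function name and inputs; `np.linalg.lstsq` is used in place of the explicit normal-equation form of Eq. (21) for numerical stability:

```python
import numpy as np

def cross_position(sat_positions, los_units):
    """Least-squares target position from N >= 2 satellites.

    sat_positions: ECI satellite position vectors r_{i,s};
    los_units: unit LOS vectors U_i in the ECI frame."""
    rows, rhs = [], []
    for r_s, U in zip(sat_positions, los_units):
        # Linear form of r_T x U_i = r_{i,s} x U_i (Eqs. (17)-(18)).
        M = np.array([[0.0, U[2], -U[1]],
                      [-U[2], 0.0, U[0]],
                      [U[1], -U[0], 0.0]])
        rows.append(M)
        rhs.append(np.cross(r_s, U))
    P = np.vstack(rows)                            # (3N, 3), Eq. (20)
    Q = np.concatenate(rhs)                        # (3N,)
    r_T, *_ = np.linalg.lstsq(P, Q, rcond=None)    # Eq. (21)
    return r_T
```

Each 3×3 block has rank 2 (the component along $\mathbf{U}_i$ is unobservable), so at least two satellites with non-parallel LOS vectors are needed for the stacked $P$ to reach full rank, matching the observation above.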

3. Velocity Estimation Based on Robust Locally Weighted Regression

If the spatial position $\mathbf{r}_T$ obtained under multi-satellite observation is used directly to solve for the velocity, large errors occur, for two reasons: (1) the least squares method estimates each sampling-point position independently, so the estimates of consecutive sampling points fluctuate, which seriously affects the calculation of instantaneous velocity; (2) the many error sources affecting target positioning accuracy in a space-based IR surveillance system produce numerous outliers in the target position sequence. This section fully considers these factors, proposes a velocity estimation method based on robust locally weighted regression, and analyzes the velocity estimation accuracy.

3.1. Locally Weighted Regression

Locally weighted regression (LWR) is a non-parametric regression method that assigns a weight to each observation and uses weighted least squares to perform polynomial regression fitting [28]. Unlike ordinary linear regression, LWR does not fit the entire data set globally but fits the data around each point to be predicted, thereby capturing local features and variations more accurately. After obtaining the target position for multiple frames of images, the three-axis position sequences $r_{T_x}$, $r_{T_y}$, and $r_{T_z}$ are formed. The LWR method is used to perform regression on each position sequence, yielding $\hat{r}_{T_x}$, $\hat{r}_{T_y}$, and $\hat{r}_{T_z}$.
Take $r_{T_x}$ as an example. Select an appropriate window length $l$, and take the serial-number point $x_j$ ($j = 1, 2, \ldots, n$) corresponding to $r_{T_x,j}$ as the center to form the serial-number segment $[x_{j-d}, x_{j+d}]$, where $d = l/2$. Standardize the serial numbers within the window:
$$z = \frac{x_k - x_j}{d}, \quad k = j - d, \ldots, j + d \quad (22)$$
The weights within the window are determined by the following tricube weighting function:
$$w_k(z) = \big(1 - |z|^3\big)^3. \quad (23)$$
Use the least squares method to calculate the estimated regression coefficients $\alpha(x_j)$ for each observation point $(x_j, r_{T_x,j})$ with weights $w_k(x_j)$. The obtained $\hat{r}_{T_x,j}$ is the fitted value at $x_j$, and the regression equation is
$$\hat{r}_{T_x,j} = \alpha_0(x_j) + \alpha_1(x_j)\, x_j + \cdots + \alpha_p(x_j)\, x_j^p. \quad (24)$$
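A sketch of the LWR pass of Eqs. (22)-(24) for a uniformly sampled position sequence; the function name and the use of `np.polyfit` (whose `w` argument weights residuals, hence the square root) are illustrative choices:

```python
import numpy as np

def lwr_fit(y, l=50, p=1):
    """Locally weighted polynomial regression, Eqs. (22)-(24).

    y: uniformly sampled position sequence; l: window length;
    p: polynomial degree."""
    n = len(y)
    d = l // 2
    x = np.arange(n, dtype=float)
    y_hat = np.empty(n)
    for j in range(n):
        k = np.arange(max(0, j - d), min(n, j + d + 1))
        z = (x[k] - x[j]) / d                    # Eq. (22)
        w = (1 - np.abs(z) ** 3) ** 3            # tricube weights, Eq. (23)
        # np.polyfit applies w to the residuals, so pass sqrt(w) to
        # minimize sum_k w_k * (y_k - fit_k)^2.
        coef = np.polyfit(x[k], y[k], p, w=np.sqrt(w))
        y_hat[j] = np.polyval(coef, x[j])        # fitted value, Eq. (24)
    return y_hat
```

For exactly linear data the degree-1 local fits reproduce the input, which makes the sketch easy to verify.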

3.2. Robust Locally Weighted Regression

Robust locally weighted regression (RLWR) is an LWR-based method designed to handle data sets that contain outliers or abnormal points. For such data sets, RLWR fits the data more accurately and improves the robustness of the regression. Unlike the LWR method in Section 3.1, the weight function used in RLWR is relatively insensitive to outliers, and its weighting function is defined as
$$B(z) = \begin{cases} 0, & |z| > 1 \\ \big(1 - z^2\big)^2, & |z| \leq 1 \end{cases} \quad (25)$$
Unlike the tricube weight function in Equation (23), the quadratic power in Equation (25) reduces the surrounding weights more slowly and has a more balanced impact on adjacent data. During the robust iterations, this reduces volatility and provides more stable results.
After obtaining the regression Equation (24) using the LWR method, the robustness enhancement process begins. Let $e_j = r_{T_x,j} - \hat{r}_{T_x,j}$ be the residual of the fitted value, and let $M$ be the median of the absolute residuals $|e_j|$. The weight correction coefficient is defined as
$$\delta_k = B\!\left( \frac{e_k}{6M} \right). \quad (26)$$
We use $\delta_k w_k(x_j)$ to replace the original weight $w_k(x_j)$ at the points $(x_j, r_{T_x,j})$, and use the least squares method to perform $p$-order polynomial fitting to calculate the new $\hat{r}_{T_x,j}$. Repeat the robust enhancement process $R$ times; the final $\hat{r}_{T_x,j}$ is the robust locally weighted fitting value.
Traditional LWR is sensitive to outliers because it gives higher weights to neighboring points that are closer to the target point, and outliers therefore cause significant deviations in the fitting results. RLWR introduces the correction coefficient $\delta_k$ for the weights. During the $R$ iterations, the weights are continuously corrected by updating the correction coefficients, which improves the robustness of the algorithm.
All the above steps are performed for all the position points to obtain the processed $\hat{r}_{T_x}$. In this study, the window length was 50, the polynomial degree was 1, and the number of iterations was 2. A detailed parameter analysis is given in Section 4.3.1.
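The robust enhancement of Eqs. (25)-(26) can be sketched by wrapping the local fits in residual-driven reweighting; the function name, parameter defaults, and the guard for a zero median are illustrative assumptions:

```python
import numpy as np

def rlwr_fit(y, l=50, p=1, iters=2):
    """Robust locally weighted regression: an initial LWR pass followed
    by `iters` robust iterations in which bisquare coefficients delta_k
    (Eqs. (25)-(26)) down-weight outlying points."""
    n = len(y)
    d = l // 2
    x = np.arange(n, dtype=float)
    delta = np.ones(n)                   # correction coefficients
    for _ in range(iters + 1):           # first pass = plain LWR
        y_hat = np.empty(n)
        for j in range(n):
            k = np.arange(max(0, j - d), min(n, j + d + 1))
            z = (x[k] - x[j]) / d
            w = (1 - np.abs(z) ** 3) ** 3 * delta[k]   # delta_k * w_k
            coef = np.polyfit(x[k], y[k], p, w=np.sqrt(w))
            y_hat[j] = np.polyval(coef, x[j])
        e = y - y_hat                    # residuals
        M = np.median(np.abs(e))
        u = np.abs(e) / (6 * M) if M > 0 else np.zeros(n)
        delta = np.where(u < 1, (1 - u ** 2) ** 2, 0.0)  # bisquare B, Eq. (25)
    return y_hat
```

A single injected outlier receives $\delta_k \approx 0$ after the first pass and no longer distorts the local fits, which is the behavior exploited in the comparison of Section 4.2.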

3.3. Velocity Estimation and Accuracy Analysis

Taking the velocity calculation interval as Δ t , the velocity of the target is
$$\mathbf{v}_T = \frac{\hat{\mathbf{r}}_T(t + \Delta t) - \hat{\mathbf{r}}_T(t)}{\Delta t}. \quad (27)$$
The flowchart of the proposed space IR target velocity estimation method is shown in Figure 4. The whole technical process is mainly divided into two parts: (1) Establish the space target observation model, extract the pixel coordinates of the target on different satellite IR image planes, and use multi-satellite cross-positioning to solve the target spatial position; (2) Perform locally weighted regressions of multiple robust enhancement iterations on the target spatial position and, finally, solve for the velocity.
The estimation accuracy for velocity is mainly affected by the measurement error for the target. The target measurement error is composed of the satellite location error, satellite attitude angle error, sensor pointing error, and target pixel coordinate extraction error [12,41], which can be seen from Figure 4. Let us assume that the measurement errors of the satellite position, satellite attitude angles, sensor pointing, and pixel coordinate extraction all follow normal distributions with a mean of 0 and standard deviations of σ l , σ a , σ s , and σ p , respectively. First, these error values are added to the true values, and then, the proposed method is used to calculate the target velocity. Finally, the calculated velocity is compared with the true velocity to obtain the velocity estimation error. The velocity estimation error model obtained using the Monte Carlo method is as follows:
$$\big( \Delta V_x^i,\, \Delta V_y^i,\, \Delta V_z^i \big) = \hat{V}_i - V_i, \quad i = 1, 2, \ldots, M \quad (28)$$
where $V_i$ is the real velocity of the target, $\hat{V}_i$ represents the velocity estimation result after adding random errors to each input, and $M$ denotes the number of Monte Carlo simulations.
The velocity estimation accuracy during the observation phase is evaluated using the Mean Absolute Error (MAE) and Root Mean Square Error (RMSE). The estimation results of the velocity components in the X-axis, Y-axis, and Z-axis directions and the overall velocity are separately evaluated. The calculation formulas are as follows:
$$MAE = \frac{\sum_{j=1}^{M} \sum_{i=1}^{N} \big| \hat{V}_{i,j} - V_{i,j} \big|}{N \cdot M} \quad (29)$$
$$RMSE = \frac{1}{M} \sum_{j=1}^{M} \sqrt{ \frac{\sum_{i=1}^{N} \big( \hat{V}_{i,j} - V_{i,j} \big)^2}{N} } \quad (30)$$
where N represents the number of observation points.
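The differencing of Eq. (27) and the metrics of Eqs. (29)-(30) can be sketched as follows; the function names and the (M, N) array layout (Monte Carlo runs by observation points) are illustrative:

```python
import numpy as np

def velocity(r_hat, dt):
    """Forward-difference velocity from a fitted position sequence, Eq. (27)."""
    return np.diff(r_hat, axis=0) / dt

def mae_rmse(v_est, v_true):
    """MAE and RMSE of Eqs. (29)-(30) over arrays shaped (M, N):
    M Monte Carlo runs, each with N observation points."""
    err = np.abs(v_est - v_true)
    mae = err.mean()                                   # Eq. (29)
    rmse = np.sqrt((err ** 2).mean(axis=1)).mean()     # Eq. (30)
    return mae, rmse
```

Note that the RMSE of Eq. (30) averages the per-run root mean square errors, which is not the same as a single RMSE over all $N \cdot M$ samples.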

4. Experiment and Analysis

4.1. Simulation Parameter Settings

Let us assume that, on 21 March 2021 at time 20:10:00.000 (UTCG), three satellites were tracking and monitoring the target in real time. The parameter settings of the satellites and detectors are listed in Table 1. Figure 5 illustrates the orbits of the three observational satellites. In our experiments, all satellites had the same IR sensor performance parameters and satellite position measurement errors. The measurement error settings in different scenarios are shown in Table 2. The number of Monte Carlo simulations for all experiments described in this section is 1000.

4.2. Comparative Experiment

This paper proposes a velocity estimation method for space IR dim targets that combines multi-satellite positioning (MSP) and robust locally weighted regression. To validate its effectiveness, this study compared it against directly solving for velocity with MSP, MSP combined with the LMS method [26] (MSP-LMS), MSP combined with the LWR method [28] (MSP-LWR), and MSP combined with the ASG method [31] (MSP-ASG). All the methods were implemented in MATLAB R2021b and run on a PC (Intel Core i7-11800H, 16 GB DDR4).
Table 3 shows the MAE and RMSE of the overall velocity estimation, with the best values in each scenario highlighted in bold. When all four measurement error sources were 0, the direct MSP solution was more accurate, but such an ideal situation does not exist in reality. As the measurement error increased, the error of the direct solution exceeded that of the other methods by more than a factor of 100. Therefore, it was necessary to perform regression on the values obtained from the multi-satellite observations. In addition, as the error intensity increased, the velocity errors of all methods increased, but the results of the proposed method remained better than those of the comparison methods. We also noticed that the gap between the proposed method and MSP-LWR gradually widened as the error increased, which indicates that the robust enhancement process described in Section 3.2 is effective. This can be seen more clearly in Figure 6.
Figure 6 displays the MAE of the velocity estimation for each component axis under different scenarios. When all four error sources were absent, our method achieved an MAE of 0.57 m/s in the X-axis component, 0.03 m/s in the Y-axis component, 0.38 m/s in the Z-axis component, and 0.07 m/s in the overall velocity, outperforming the compared methods.
In space situational awareness, time cost is particularly important. Based on six scenarios and one thousand Monte Carlo runs per scenario, the average running times of the four velocity estimation methods are shown in Table 4. The MSP method, which directly uses the positioning results to solve for the velocity, has the lowest computational complexity and, thus, the shortest running time; however, its huge estimation error in complex scenarios rules it out. The MSP-LMS method runs the longest at 0.2925 s, and MSP-ASG runs for 0.1782 s. The proposed method takes 0.1136 s longer than MSP-LWR, because it performs the robust enhancement operation on top of MSP-LWR. Given that it achieves the highest estimation accuracy, the running time of the proposed method (0.1517 s) is still acceptable.
Figure 7 shows the instantaneous velocity estimation results under the error data of Scenario 4, where only the results in the range of 40–60 s and the four methods with relatively small errors are displayed. The velocity values obtained using the proposed method are more consistent with the real velocity values and have a smoother velocity change, whether in each axis or in the overall velocity, while the velocity values derived using the other three comparative algorithms show a dramatic change.

4.3. Analysis of Impact Factors

4.3.1. Parameter Analysis for Proposed Method

As noted in Section 3.2, three parameters affect the proposed method: the number of iterations, the polynomial degree, and the window size. We used the method of controlling variables to explore the parameter settings, with default values of 2, 1, and 50, respectively. Figure 8 shows the MAE of the overall velocity estimation under different scenarios and parameters. To clearly display the error changes in Scenario 0, we used a dual Y-axis plot.
Figure 8a shows the MAE results for different iterations. When the iteration number was 0, the proposed method was equivalent to the LWR without robust enhancement. At Iteration 1, the MAE converged for Scenarios 0, 1, and 2 with small errors. At Iteration 2 or higher, the remaining scenarios also started to converge. Interestingly, in the error-free Scenario 0, increasing robustness during the iteration negatively affected data fitting, leading to an increased MAE. This suggests that the LWR can already fit the data well when the scenario error is small enough, and increasing robustness during iteration can introduce additional noise.
Figure 8b shows the MAE results for different polynomial degrees. When the degree is 0, which means directly using local constant fitting, the MAE is slightly higher than that of degree 1 in Scenarios 1 to 5. When the degree is 1, which is linear fitting, the MAE of velocity estimation is the smallest. When the degree is 2 or higher, the MAE increases. This is because the target motion in the local window is close to uniform motion, so the linear fitting of degree 1 is more appropriate. Scenario 0 has a different trend compared with the other scenarios. When the degree is 3, the MAE drops to the lowest, and there is no fluctuation thereafter.
Figure 8c shows the MAE results for different window sizes. In Scenarios 1 to 5, the wider the window, the lower the MAE, until the window size exceeds 300, after which the MAE starts to increase. In Scenario 0, by contrast, the MAE grows as the window widens. This behavior is governed by the motion characteristics of the target during the observation time and by the measurement errors: in local fitting, the smaller the measurement error and the narrower the window, the more precise the fit, but when the error is large, a narrow window is prone to severe misestimation because outliers form too large a fraction of it. For complex space target velocity changes, the parameters should therefore be selected according to the actual situation.
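The robust locally weighted regression step that these three parameters control can be sketched as follows. This is a minimal illustration of the technique for one position component, assuming tricube distance weights and bisquare robustness weights (the classic LOWESS choices); the function name `rlwr_velocity` and these exact weighting choices are ours, not necessarily those of the paper's implementation.

```python
import numpy as np

def rlwr_velocity(t, x, window=50, degree=1, iterations=2):
    """Velocity of one position component by robust locally weighted regression.

    t, x : time stamps (s) and one target position component (m).
    For each sample, a polynomial of `degree` is fitted to the `window`
    nearest samples using tricube distance weights; `iterations` extra
    passes refine bisquare robustness weights that suppress outliers.
    The velocity estimate is the derivative of the local fit at the sample.
    """
    t, x = np.asarray(t, float), np.asarray(x, float)
    v = np.empty(len(t))
    for i in range(len(t)):
        idx = np.argsort(np.abs(t - t[i]))[:window]        # local window
        ti, xi = t[idx] - t[i], x[idx]                     # centre times at 0
        d = np.abs(ti)
        w = (1.0 - (d / max(d.max(), 1e-12)) ** 3) ** 3    # tricube weights
        delta = np.ones_like(w)                            # robustness weights
        for _ in range(iterations + 1):                    # pass 0 = plain LWR
            coeff = np.polyfit(ti, xi, degree, w=np.sqrt(w * delta))
            resid = xi - np.polyval(coeff, ti)
            s = np.median(np.abs(resid))
            if s < 1e-9:                                   # near-exact fit already
                break
            delta = np.clip(1.0 - (resid / (6.0 * s)) ** 2, 0.0, None) ** 2
        v[i] = coeff[-2] if degree >= 1 else 0.0           # d/dt of fit at t[i]
    return v
```

With iterations = 0 the robustness weights are never updated, reproducing plain LWR, which matches the behavior described for Figure 8a.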

4.3.2. Measurement Error Analysis

Measurement errors in the satellite position (E1), satellite attitude (E2), sensor pointing (E3), and target pixel coordinate extraction (E4) produce different velocity estimation errors. Table 5 shows the impact of each individual factor and of their combination on the velocity estimation accuracy, obtained by toggling each of the four error sources in turn in every scenario.
The results in Table 5 show that different error sources cause different target velocity estimation errors. For example, in Scenario 5, a satellite position measurement error of 50 m results in an MAE and RMSE of approximately 4.24 m/s and 7.95 m/s, i.e., contribution rates to the MAE and RMSE of 0.084 1/s and 0.159 1/s, respectively. A satellite attitude measurement error of 50 μrad results in an MAE and RMSE of approximately 21.79 m/s and 42.84 m/s, with contribution rates of 0.435 and 0.856 m/s/μrad, respectively. The contribution rates of the sensor pointing measurement error to the MAE and RMSE are 0.426 and 0.682 m/s/μrad, while those of the pixel coordinate extraction error are approximately 120.44 and 249.02 m/s/pixel. On average, the pixel coordinate extraction error has the greatest impact, accounting for 86.29% and 83.36% of the overall velocity estimation MAE and RMSE, respectively. The next most influential sources are the satellite attitude and sensor pointing measurement errors; their impacts are similar, and they can be considered together since both are angular measurement errors. The satellite position measurement error has the smallest impact.
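The contribution rates and percentage shares quoted above follow directly from the Table 5 values; a small worked computation (our own illustration, using the Scenario 1–5 rows):

```python
import numpy as np

# Scenario 1-5 MAEs (m/s) from Table 5: pixel-extraction error alone (E4)
# and all four errors combined (Overall).
mae_e4      = np.array([14.01, 24.10, 41.26, 56.64, 60.22])
mae_overall = np.array([15.17, 29.82, 45.65, 59.97, 81.98])

# Contribution rate = MAE induced by a source / injected error size.
# Scenario 5 injects a 0.5-pixel extraction error (Table 2):
rate_e4 = mae_e4[-1] / 0.5                 # 120.44 m/s per pixel

# Average share of the overall MAE attributed to E4 across scenarios:
share_e4 = np.mean(mae_e4 / mae_overall)   # about 0.8629, i.e. 86.29 %
```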
Figure 9 shows the impact of each measurement error factor on velocity estimation accuracy more clearly. After linear fitting, the magnitudes of the four error sources exhibit an approximately linear relationship with the final velocity estimation error.
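This approximately linear relationship can be checked numerically, e.g. for the attitude error E2 (error sizes from Table 2, MAEs from Table 5; the least-squares line here is our own illustration of the fit shown in Figure 9):

```python
import numpy as np

sigma_a = np.array([10.0, 20.0, 30.0, 40.0, 50.0])     # urad, Table 2
mae_e2  = np.array([5.31, 9.04, 15.04, 16.83, 21.79])  # m/s, Table 5 (E2 alone)

# Fit MAE = slope * sigma_a + intercept; the slope approximates the
# per-urad contribution rate, and small residuals indicate near-linearity.
slope, intercept = np.polyfit(sigma_a, mae_e2, 1)
resid = mae_e2 - (slope * sigma_a + intercept)
```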

4.3.3. Satellite Number Analysis

In multi-satellite observations, the parameters of the satellites (detectors) that simultaneously observe the target and participate in velocity estimation also affect the estimation accuracy. To limit experimental complexity, this study only analyzed the impact of the number of satellites on space target velocity estimation. The number of satellites observing the target at the same time varies with the specific satellite layout and working status, and may differ due to factors such as satellite orbit, occlusion, and target distance. Accordingly, we added a fourth satellite to the configuration in Table 1, with the following parameters: latitude angle = 62.21°; orbit inclination angle = 54.97°; RAAN = 191.19°. We considered three cases for velocity estimation: (1) dual satellite: satellites 1 + 2; (2) triple satellite: satellites 1 + 2 + 3; (3) quadruple satellite: satellites 1 + 2 + 3 + 4. The experimental results are shown in Figure 10.
As can be seen from Figure 10, in Scenarios 1 to 5, the greater the number of satellites involved in the velocity measurement, the higher the accuracy. In Scenario 0 without measurement errors, the MAE and RMSE of the velocity estimation in the three cases are equal, and the accuracy of velocity estimation is independent of the number of satellites.
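A minimal sketch of the least-squares cross-positioning that underlies these multi-satellite cases: each satellite contributes one line of sight, and the target position is the point minimizing the summed squared perpendicular distances to all sight lines. The function below is our illustration of that standard formulation; the paper's exact least-squares setup may differ.

```python
import numpy as np

def cross_position(sat_pos, los_dirs):
    """Least-squares intersection of several lines of sight.

    sat_pos  : (n, 3) satellite positions in the ECI frame (m)
    los_dirs : (n, 3) line-of-sight direction vectors towards the target
    Returns the point minimising the sum of squared perpendicular
    distances to all n sight lines; each extra satellite adds one term.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for s, u in zip(np.asarray(sat_pos, float), np.asarray(los_dirs, float)):
        u = u / np.linalg.norm(u)
        P = np.eye(3) - np.outer(u, u)   # projector orthogonal to this line
        A += P
        b += P @ s
    return np.linalg.solve(A, b)
```

With noise-free sight lines, adding a consistent third or fourth satellite leaves the solution unchanged; with noisy lines, the extra observations average the error down, consistent with the trend in Figure 10.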

5. Discussion

The proposed method significantly improves the velocity estimation accuracy of space IR dim targets across different scenarios, which we attribute to two characteristics.
(1) Multi-satellite observation. Satellite position and attitude are known quantities that are easily obtained in real time in space-based IR surveillance systems, and they form an important physical basis for velocity estimation. Multi-satellite cooperation enables long-term, wide-area observation of moving dim targets. As shown in Section 4.3.3, increasing the number of observation satellites effectively mitigates the loss of estimation accuracy caused by measurement errors.
(2) Robust locally weighted regression. The target position sequence obtained by multi-satellite cross-positioning contains many outliers. During robust enhancement, a weighting function reduces the influence of distant sampling points, so mildly abnormal points are smoothed out, and iteratively updating the weights further improves the robustness of the algorithm. In addition, local fitting preserves the nonlinear relationships in the data, better capturing its underlying structure, which benefits the modeling of complex motion.
This research focuses on the velocity estimation of space IR dim targets and has achieved promising results. However, some shortcomings remain:
(1) Our research object is the space IR dim moving target. However, collecting enough real-world data for comprehensive experiments and analyses is very challenging, involving many limitations and high costs. Therefore, we adopted a Monte Carlo (MC)-based random error generation method in the experimental design. This method can simulate the noise, uncertainty, and variability of real environments with high flexibility and controllability. By using the MC method to generate a series of different error cases, we could comprehensively evaluate the performance of the proposed method. It should be noted that simulated data have inherent limitations: despite our efforts to generate realistic data, fully reproducing complex real-world environments remains difficult. In future research, we will further consider validation on real-world data.
(2) Although the proposed method performs well, its running time leaves room for improvement. Follow-up research should seek optimizations that reduce computing resource consumption for deployment on board satellites.
(3) This paper only briefly discusses the influence of the number of satellites on velocity estimation; however, the varying orbits and complex parameters of satellites in orbit will inevitably affect it as well. The influence of satellite parameters therefore needs further exploration to meet the application requirements of more scenarios.
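The MC-based error generation described in point (1) can be sketched as follows. Gaussian, uncorrelated noise with the Table 2 standard deviations is our assumption about the generator, and the function and argument names are illustrative rather than the paper's.

```python
import numpy as np

def inject_errors(sat_pos, att, los, pix, sigma, rng):
    """Perturb the four measured quantities for one Monte Carlo trial.

    sigma = (sigma_l [m], sigma_a [urad], sigma_s [urad], sigma_p [pixel]),
    e.g. Scenario 3 in Table 2 is (30, 30, 30, 0.3). Angles are assumed
    to be stored in radians, so the urad sigmas are scaled by 1e-6.
    """
    s_l, s_a, s_s, s_p = sigma
    return (
        sat_pos + rng.normal(0.0, s_l, sat_pos.shape),   # position, m
        att + rng.normal(0.0, s_a * 1e-6, att.shape),    # attitude, rad
        los + rng.normal(0.0, s_s * 1e-6, los.shape),    # pointing, rad
        pix + rng.normal(0.0, s_p, pix.shape),           # pixel coordinates
    )
```

Repeating the trial many times with a fresh `rng` draw each time yields the error distributions over which the MAE and RMSE in Tables 3 and 5 would be evaluated.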

6. Conclusions

In this study, we propose a space IR dim target velocity estimation method that combines multi-satellite positioning and robust locally weighted regression under space-based observations. Simulation experiments validated the effectiveness of the proposed method: under ideal, error-free conditions, it achieved an MAE of 0.0733 m/s and an RMSE of 1.6640 m/s for the overall target velocity. In addition, we analyzed the method's parameters and investigated the impact of four types of measurement errors and of the number of satellites on the estimation accuracy in multi-satellite observations. Our study provides valuable insights into the velocity measurement of space IR dim moving targets, with potential applications in fields such as aerospace engineering and geophysics.

Author Contributions

Conceptualization, S.Z. and H.Z.; methodology, H.Z.; software, S.Z.; validation, S.Z., H.Z. and X.C.; formal analysis, X.C.; investigation, H.Z.; resources, P.R.; data curation, S.Z.; writing—original draft preparation, S.Z.; writing—review and editing, P.R.; visualization, S.Z.; supervision, X.C.; project administration, P.R.; funding acquisition, P.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 62175251.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Schematic of the target position vector transformation process. (The red font is the position vector in the corresponding coordinate frame, and the blue font is the transition matrix.)
Figure 2. Imaging of an ideal target (a) and a space moving target (b).
Figure 3. Schematic of multi-satellite cross-positioning.
Figure 4. Flowchart of the proposed space IR targets velocity estimation method.
Figure 5. Schematic of the orbits of the three observation satellites.
Figure 6. MAE of velocity estimation under different scenarios: (a) X-axis; (b) Y-axis; (c) Z-axis; (d) Overall velocity.
Figure 7. Display of instantaneous velocity estimation within 40–60 s: (a) X-axis; (b) Y-axis; (c) Z-axis; (d) Overall velocity.
Figure 8. MAE of overall velocity estimation under different parameters: (a) Iteration; (b) Polynomial degree; (c) Window size.
Figure 9. The impact of different measurement error factors on velocity estimation accuracy: (a) MAE; (b) RMSE.
Figure 10. The impact of different numbers of satellites on velocity estimation accuracy: (a) MAE; (b) RMSE.
Table 1. Simulation parameter settings of the three satellites and detectors.
Parameter                  Satellite 1   Satellite 2   Satellite 3
Latitude angle             102.70°       23.94°        72.21°
Orbit inclination angle    54.88°        55.03°        55.12°
RAAN                       252.97°       12.82°        71.90°
Focal plane size           512 × 512 (all satellites)
Pixel size                 30 μm
Aperture                   20 mm
Focal length               40 mm
Frame frequency            10 Hz
Observation time           100 s
Table 2. Measurement error settings in different scenarios.
Scenario   σ_l (m)   σ_a (μrad)   σ_s (μrad)   σ_p (pixel)
0          0         0            0            0
1          10        10           10           0.1
2          20        20           20           0.2
3          30        30           30           0.3
4          40        40           40           0.4
5          50        50           50           0.5
Table 3. MAE and RMSE of overall velocity estimation.
Scenario   MSP                        MSP-LMS               MSP-LWR               MSP-ASG                Proposed Method
           MAE          RMSE          MAE        RMSE       MAE       RMSE        MAE        RMSE        MAE       RMSE
0          0.0331       0.4223        1.4303     2.9179     0.0663    1.4507      0.0354     0.4280      0.0733    1.6640
1          1939.4755    3811.0791     17.7091    59.4185    20.3960   43.1666     21.5942    47.8943     15.1737   35.4483
2          5242.1767    8056.9157     41.0581    102.0510   33.9207   74.9259     44.6317    91.6023     29.8266   63.2735
3          8582.6232    11,823.3344   63.7244    145.7770   50.8750   119.5350    58.8795    132.0521    45.6544   101.3857
4          11,795.9802  15,510.8994   87.3195    192.7804   68.7979   144.9957    80.4788    189.4000    59.9750   116.5446
5          15,424.1788  19,591.5688   108.3511   239.5455   95.8185   194.1914    104.6386   232.1873    81.9898   161.8585
Table 4. Average running time.
            MSP      MSP-LMS   MSP-LWR   MSP-ASG   Proposed Method
Time (s)    0.0115   0.2925    0.0381    0.1782    0.1517
Table 5. Velocity estimation errors under different measurement errors.
Scenario   MAE (m/s)                                 RMSE (m/s)
           E1     E2      E3      E4      Overall    E1     E2      E3      E4       Overall
1          0.75   5.31    4.25    14.01   15.17      2.18   9.26    7.04    28.22    35.44
2          1.21   9.04    9.96    24.10   29.82      3.32   17.63   15.14   52.17    63.27
3          1.88   15.04   13.42   41.26   45.65      4.34   26.40   20.81   82.91    101.38
4          2.98   16.83   19.07   56.64   59.97      6.10   31.77   28.66   111.92   116.54
5          4.24   21.79   21.30   60.22   81.98      7.95   42.84   34.13   124.51   161.85


