1. Introduction
Accurate trajectory prediction for space targets is crucial in aerospace defense, object collision avoidance, and space situational awareness. The ability to precisely estimate and track such objects under dynamic and noisy conditions plays a significant role in threat assessment and strategic planning. However, predicting the trajectory of high-speed maneuvering targets remains challenging due to complex nonlinear motion dynamics, sensor measurement noise, and environmental uncertainties.
Conventional trajectory tracking and prediction methods typically rely on filtering techniques or motion equations. For example, the Kalman Filter (KF) has been widely used in trajectory prediction tasks, such as aircraft motion tracking, where optimized initialization parameters enhance prediction efficiency [1]. Other adaptations include infrared target tracking, where confidence-based KF enhancements improve real-time performance [2]. Additionally, integrating filtering techniques with image processing has shown promise for more accurate motion data extraction [3]. The empirical modeling of system parameters has also been employed to stabilize trajectory predictions [4]. However, these approaches generally assume linear or weakly nonlinear dynamics, and they often suffer from reduced accuracy in highly maneuverable or nonlinear motion scenarios, which are common for space targets.
Despite their effectiveness in some scenarios, traditional filtering methods face significant challenges in high-speed space target tracking due to the highly dynamic nature of these targets. Their rapid maneuvers and nonlinear motion characteristics render purely linear models inadequate. Consequently, conventional methods often struggle with prediction accuracy in real-world conditions.
To address these challenges, recent research has increasingly leveraged neural network-based approaches [5]. Recurrent neural networks (RNNs), particularly long short-term memory (LSTM) networks, have demonstrated superior performance in time-series and complex trajectory forecasting, frequently outperforming traditional filtering methods. The Attention-Enhanced Convolutional LSTM (AC-LSTM), which incorporates convolutional layers and attention mechanisms, has further enhanced prediction accuracy in complex regression tasks [6]. Nonetheless, these models typically require large-scale labeled datasets, and their performance may degrade in phases of motion that exhibit quasi-linear behavior, where simpler models such as the KF could suffice.
Several studies have refined LSTM-based trajectory prediction. Ruiping Ji et al. [7] proposed a deep LSTM network for online trajectory prediction during the ascent phase of a high-speed vehicle, targeting the challenge of complex aerodynamic forces and the parameter-uncertainty limitations of traditional models. The method achieved prediction errors within several kilometers under noise-free conditions during the boost phase (i.e., the close-range segment), with an online runtime of 0.5 s per prediction. However, this model did not consider generalization across different scenarios, and it lacked robustness testing under noisy conditions.
Jihuan Ren et al. [8] introduced a Context-Enhanced LSTM (CE-LSTM) that improves on traditional Gaussian-based models and physical equation solvers by redesigning the LSTM's internal units. Their method achieved prediction errors within tens of meters over trajectory distances ranging from hundreds to thousands of meters (under noise-free conditions), with average computation times of tens of milliseconds. However, its reliance on hyperparameter tuning and lack of noise-resilience evaluation limit its practical robustness in real-time applications.
Jiatong Liang et al. [9] developed a Bidirectional LSTM with Attention Mechanism (BiLSTM-AM), designed to reduce the computational burden of traditional extrapolation methods and improve real-time responsiveness. Their experiments reported high accuracy, with prediction errors below 10 m in short-range scenarios and fast inference times of a few milliseconds per run. However, the model depends on pre-filtered input data and similarly lacks robustness testing under noisy conditions.
Although these methods achieve improvements over basic RNNs, their performance in mixed-motion scenarios—where both nonlinear and linear phases coexist—is still limited. In contrast to these methods—which focus either on nonlinear modeling via deep networks or linear extrapolation via motion equations—our approach introduces a hybrid framework that integrates both nonlinear (neural network-based) and linear (Kalman filter-based) components. This enables the model to adapt to different phases of trajectory dynamics, including both highly nonlinear and quasi-linear segments. Furthermore, our method explicitly incorporates sensor noise into its design and maintains superior prediction performance under noisy conditions, thereby improving both robustness and generalizability.
In addition to end-to-end neural network models, hybrid approaches have been explored for trajectory prediction. Yaoshuai Wu and Jian Chen [10,11] combined feedforward networks with an extended Kalman filter (EKF) for indoor target localization, although these methods often fail to fully utilize the nonlinear capabilities of neural networks. Licai Dai et al. [12] introduced a Kalman filter-enhanced LSTM model for trajectory estimation, but its master–slave architecture limits the network's adaptability. Other studies have explored clustering-based methods, such as Density-Based Spatial Clustering of Applications with Noise (DBSCAN) integrated with gated recurrent units (GRUs), which reduce computational overhead but are constrained to 2D trajectory predictions [13]. Overall, while hybrid methods show promise, they often lack a principled mechanism to balance the contributions of each module during inference.
Despite these advancements, many existing methods are not universally applicable to space-target trajectory prediction due to the unique dynamic characteristics of such targets. The motion of a high-speed space vehicle is typically modeled as a nonlinear dynamic system governed by complex physical equations. However, under certain conditions, the trajectory can be locally approximated as linear, especially when decomposed into distinct phases such as ascent, free flight, and reentry. As a result, existing methods often struggle to simultaneously handle the nonlinear and quasi-linear components of trajectory dynamics.
Given the strong regression capabilities of neural networks for nonlinear systems and the established performance of KF for linear estimation, a hybrid approach that combines both methodologies presents a promising solution. To this end, we propose a novel algorithm for predicting three-dimensional space target trajectories that integrates a dual-confidence AC-LSTM network with a Kalman Filter. The proposed method capitalizes on the complementary strengths of both models through a dynamically weighted fusion mechanism, enhancing overall prediction accuracy. Specifically, our contributions are as follows:
Confidence-based hybrid model: We propose a novel framework that combines an AC-LSTM network employing multi-task learning techniques [14,15,16] for nonlinear trajectory prediction with a multi-channel Kalman Filter for linear motion estimation, integrating both modules through confidence-based fusion. A dual-confidence approach is also introduced, in which the AC-LSTM estimates confidence based on signal-to-noise ratio (SNR) variations, while the KF confidence is derived from real-time residual monitoring.
Simulation-based evaluation: A synthetic dataset of 1600 trajectories is generated using a minimum-energy trajectory model [17]. Comprehensive experiments, including ablation studies and comparisons with existing baseline methods, validate the effectiveness of the proposed method.
2. Methodology
The proposed confidence-based dual-model fusion framework is a parallel algorithm that integrates an AC-LSTM network [18] with a linear Kalman filter. The model is specifically designed to analyze 3D space target trajectory data. Both components underwent structural and input-output optimization, enabling them to perform regression tasks on 3D trajectory data effectively. Additionally, a redesigned output structure provides confidence levels for the predictions, facilitating the dynamic weighted fusion of the two algorithms. By incorporating the AC-LSTM neural network into the linear fitting framework, the model enhances its nonlinear fitting capability while preserving the linear regression characteristics. This approach ensures that the proposed model is theoretically robust and practically executable.
Figure 1 shows a diagrammatic representation of the dual-confidence AC-LSTM and KF fusion prediction models. The model comprises three principal components: a neural network prediction module, a KF prediction module, and a fusion module.
2.1. AC-LSTM Prediction Module
We implemented an AC-LSTM architecture for the network-prediction module. This hybrid model synergistically integrates convolutional neural networks (CNNs) and LSTM networks, leveraging the complementary strengths of both architectures. Specifically, the CNN component performs deep spatial feature extraction to capture trajectory motion patterns, whereas the LSTM module subsequently conducts temporal regression analysis on the processed sequential data. Compared with conventional LSTM implementations, our framework demonstrates a superior capability for modeling complex spatiotemporal relationships. To further enhance the representational power of the model, we incorporated an attention mechanism that dynamically prioritizes informative temporal states. The mathematical formulation of this architecture is described below.
For convolutional operations, we employed a parallelized one-dimensional convolution scheme that simultaneously processed multi-dimensional input features. This design enables the efficient extraction of spatial correlations across different trajectory parameters while maintaining temporal coherence. The convolution operation is mathematically expressed as follows:
$$s_t = \mathrm{ReLU}\left(W \ast x_t + b\right)$$
where $s_t$ denotes the output of the convolutional sequence (specifically serving as the LSTM input when a single convolutional layer is deployed), $x_t$ represents the input feature vector at timestep $t$, and $W$ and $b$ denote the learnable convolution kernel weights and bias term, respectively, with the ReLU serving as the nonlinear activation function.
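As an illustrative sketch of this parallel one-dimensional convolution over the trajectory channels, a PyTorch module might look as follows; the channel count, filter count, and kernel size are assumptions for illustration rather than the paper's settings.

```python
import torch
import torch.nn as nn

class ConvFeatureExtractor(nn.Module):
    """1-D convolution applied in parallel across trajectory channels.

    Assumed shapes: input (batch, 6, T), with 6 = 3D position + 3D velocity;
    the filter count and kernel size are illustrative choices.
    """
    def __init__(self, in_channels: int = 6, out_channels: int = 32, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(in_channels, out_channels, kernel_size, padding=kernel_size // 2)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 6, T) -> s: (batch, out_channels, T), i.e. ReLU(W * x + b)
        return self.act(self.conv(x))

# Example: a batch of 4 windows with 10 timesteps each (matching the 1 x 6 x 10 input described later)
s = ConvFeatureExtractor()(torch.randn(4, 6, 10))
print(s.shape)  # torch.Size([4, 32, 10])
```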
In our LSTM network implementation, we employed a stacked multi-layer LSTM architecture to process sequential feature data. LSTM computational operations primarily consist of three fundamental gating mechanisms (forget gate, input gate, and output gate) coupled with dynamic cell-state updates. A detailed network schematic (Figure 2) and the corresponding mathematical formulation are presented below.
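As a point of reference, the textbook LSTM gating equations consistent with the variable definitions that follow are reproduced here; this assumes the standard formulation in which the gates receive the previous hidden state $h_{t-1}$ and the convolutional feature $s_t$, and the exact input indexing may differ in the paper's variant.

$$\begin{aligned}
f_t &= \sigma\left(W_f\,[h_{t-1},\, s_t] + b_f\right), & i_t &= \sigma\left(W_i\,[h_{t-1},\, s_t] + b_i\right),\\
\tilde{c}_t &= \tanh\left(W_c\,[h_{t-1},\, s_t] + b_c\right), & c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t,\\
o_t &= \sigma\left(W_o\,[h_{t-1},\, s_t] + b_o\right), & h_t &= o_t \odot \tanh(c_t),
\end{aligned}$$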
where $f_t$, $i_t$, and $o_t$ represent the outputs of the forget, input, and output gates, respectively, and $c_t$ and $h_t$ denote the current cell state and hidden state, respectively.
The forget gate regulates historical information retention through the following components: the sigmoid activation function governs the gate's activation intensity, $h_{t-1}$ indicates the preceding hidden state, the convolutional feature map from the previous timestep serves as the gate's second input, and $W_f$ and $b_f$ are the trainable weight matrix and bias term, respectively. The output $f_t$ determines the proportion of the previous cell-state information to be preserved.
The input gate modulates new feature integration via $W_i$ and $b_i$ as its parametric weights and bias, respectively, where the output $i_t$ specifies the assimilation rate of novel information into the current cell state.
The output gate manages state visibility: $W_o$ and $b_o$ constitute its learnable parameters, with the output $o_t$ scaling the contribution of the cell state to the updated hidden state.
Cell state updating combines gated operations from both the forget and input gates, utilizing the hyperbolic tangent (tanh) activation function to constrain state values within a predefined numerical range, where $W_c$ and $b_c$ parameterize the candidate state transformation. Subsequent hidden-state generation depends on the refined cell state filtered through the output gate.
In the attention mechanism architecture, the query (Q), key (K), and value (V) vectors are derived through linear projections of the input embedding vectors (or the outputs of the preceding layer) via distinct learnable weight matrices. The attention operation is computed as
$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V,$$
where the scaling factor $\sqrt{d_k}$, with $d_k$ denoting the dimensionality of the key vectors, is incorporated to regulate the magnitude of the dot-product computations, thereby maintaining gradient stability during backpropagation. The attention weights are normalized using the softmax activation to form a probabilistic distribution. The final network output $y_t$ synthesizes contextual dependencies across sequential positions through the linear transformation parameters ($W_y$, $b_y$) operating on the attention-modulated hidden state $\tilde{h}_t$.
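A minimal PyTorch sketch of the scaled dot-product attention described above, applied to the LSTM hidden-state sequence, is shown below; the projection dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalAttention(nn.Module):
    """Scaled dot-product attention over LSTM hidden states.

    Q, K, V are linear projections of the hidden-state sequence; the
    hidden size (64) is an illustrative assumption.
    """
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.wq = nn.Linear(hidden, hidden)
        self.wk = nn.Linear(hidden, hidden)
        self.wv = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, hidden)   # plays the role of (W_y, b_y) in the text

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, T, hidden) hidden-state sequence from the LSTM
        q, k, v = self.wq(h), self.wk(h), self.wv(h)
        scores = q @ k.transpose(-2, -1) / (k.size(-1) ** 0.5)  # Q K^T / sqrt(d_k)
        weights = F.softmax(scores, dim=-1)                      # probabilistic attention weights
        context = weights @ v                                    # attention-modulated hidden states
        return self.out(context)                                 # final linear transformation

y = TemporalAttention()(torch.randn(4, 10, 64))
print(y.shape)  # torch.Size([4, 10, 64])
```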
To enable the synchronous output of trajectory predictions and confidence estimates from the network prediction module, we implemented two task-specific fully connected branches in the output layer, one dedicated to trajectory regression and the other to confidence score estimation. The network training framework employs a multi-task learning paradigm in which distinct loss functions are strategically assigned to each subtask to ensure proper gradient flow during backpropagation [19,20]. This multi-objective optimization was formalized using a composite loss function:
$$L = \lambda_{\mathrm{reg}} L_{\mathrm{reg}} + \lambda_{\mathrm{conf}} L_{\mathrm{conf}}$$
where $L$ represents the joint loss function, and $L_{\mathrm{reg}}$ and $L_{\mathrm{conf}}$ represent the loss functions for the regression task and the confidence level assessment, respectively. The coefficient $\lambda_{\mathrm{reg}}$ should be greater than $\lambda_{\mathrm{conf}}$ to ensure the dominance of the regression task.
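A sketch of the two task-specific output branches and the composite loss is given below; the use of MSE terms for both subtasks and the specific loss weights are assumptions for illustration, not the paper's exact choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualHead(nn.Module):
    """Two fully connected branches: trajectory regression and confidence estimation."""
    def __init__(self, hidden: int = 64, out_dim: int = 6):
        super().__init__()
        self.traj = nn.Linear(hidden, out_dim)                                # 3D position + 3D velocity
        self.conf = nn.Sequential(nn.Linear(hidden, out_dim), nn.Sigmoid())   # confidence in [0, 1]

    def forward(self, feat: torch.Tensor):
        return self.traj(feat), self.conf(feat)

def composite_loss(pred, conf, target, conf_target, lam_reg: float = 1.0, lam_conf: float = 0.1):
    """L = lam_reg * L_reg + lam_conf * L_conf, with lam_reg > lam_conf (illustrative weights)."""
    l_reg = F.mse_loss(pred, target)
    l_conf = F.mse_loss(conf, conf_target)
    return lam_reg * l_reg + lam_conf * l_conf

# Usage example on dummy features and labels
pred, conf = DualHead()(torch.randn(4, 64))
loss = composite_loss(pred, conf, torch.zeros(4, 6), torch.ones(4, 6))
```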
To enhance the initially defined confidence levels in the neural network module, we explicitly incorporated the confidence level estimation as a dedicated component of the network’s final output. Considering the distinctive characteristics of space targets—particularly their high velocity, long-range trajectories, and relatively low data noise—we adopted SNR as the classification criterion for confidence levels in the dataset. These predefined confidence levels were then used to train the confidence output layer of the network. The confidence level formulation for the AC-LSTM prediction module is expressed as
where $k$ and $C$ are the coefficient and bias of the sigmoid function, respectively, which were determined by empirically fitting noise-added data at different noise levels from the training set.
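As an illustration of how such SNR-derived confidence labels might be generated for training, the sketch below maps SNR to a value in (0, 1) with a sigmoid; the functional form and the values of k and C are assumptions, since only the roles of these parameters are stated above.

```python
import numpy as np

def snr_confidence(snr_db: np.ndarray, k: float = 0.25, c: float = -5.0) -> np.ndarray:
    """Map SNR (dB) to a confidence label in (0, 1) via a sigmoid.

    k (slope) and c (bias, the C in the text) are hypothetical values; in practice
    they would be fitted to noise-augmented training trajectories.
    """
    return 1.0 / (1.0 + np.exp(-(k * snr_db + c)))

print(snr_confidence(np.array([10.0, 20.0, 40.0])))  # higher SNR -> confidence closer to 1
```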
2.2. KF Prediction Module
A two-channel Kalman filter prediction model was employed to mitigate the impact of fitting interference in the 3D space target trajectory data. This approach allows for the independent handling of the position and velocity data, thereby addressing the magnitude differences between these dimensions, which can impede accurate fitting. The state transition matrix $F$ for both channels is initialized identically to ensure that the relationship between position and velocity remains consistent throughout the prediction process. The observation matrix $H$ employs distinct diagonal matrices to modify the mapping relationship between position and velocity during the update process. The fundamental principles underlying the KF algorithm's operational steps are summarized as follows [21].
$$\begin{aligned}
\hat{x}_{k|k-1} &= F\hat{x}_{k-1|k-1} + Bu_{k},\\
P_{k|k-1} &= FP_{k-1|k-1}F^{T} + Q,\\
K_{k} &= P_{k|k-1}H^{T}\left(HP_{k|k-1}H^{T} + R\right)^{-1},\\
\hat{x}_{k|k} &= \hat{x}_{k|k-1} + K_{k}\left(z_{k} - H\hat{x}_{k|k-1}\right),\\
P_{k|k} &= \left(I - K_{k}H\right)P_{k|k-1}.
\end{aligned}$$
Here, $\hat{x}_{k|k-1}$ and $\hat{x}_{k|k}$ represent the system states before and after the update, respectively, while $P_{k|k-1}$ and $P_{k|k}$ denote the estimation error covariance matrices before and after the update, respectively; $B$ is the control input matrix, $u_{k}$ is the external control input, $Q$ is the process noise covariance matrix, $R$ is the measurement noise covariance matrix, $K_{k}$ is the Kalman gain, and $z_{k}$ is the measurement vector. These are all the process variables of the Kalman filter algorithm.
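A minimal NumPy sketch of the textbook predict/update recursion summarized above is given below; the constant-velocity example model and the noise settings are illustrative assumptions, not the matrices used in the paper.

```python
import numpy as np

class LinearKF:
    """Textbook Kalman filter: predict with F (and optional B, u), update with H and z."""
    def __init__(self, F, H, Q, R, x0, P0):
        self.F, self.H, self.Q, self.R = F, H, Q, R
        self.x, self.P = x0, P0

    def predict(self, B=None, u=None):
        self.x = self.F @ self.x + (B @ u if B is not None else 0.0)
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x

    def update(self, z):
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)            # Kalman gain
        self.x = self.x + K @ (z - self.H @ self.x)         # innovation-weighted correction
        self.P = (np.eye(len(self.x)) - K @ self.H) @ self.P
        return self.x

# Example: 1-D constant-velocity state (position, velocity) with illustrative noise levels
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.eye(2)
kf = LinearKF(F, H, Q=1e-3 * np.eye(2), R=1e-1 * np.eye(2), x0=np.zeros(2), P0=np.eye(2))
kf.predict()
kf.update(np.array([1.0, 0.9]))
```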
To further clarify the implementation details of the dual-channel Kalman filtering mechanism in this module, we provide the shared state transition matrix $F$ and the distinct observation matrices $H_{1}$ and $H_{2}$ corresponding to the two prediction channels. These matrices define the system dynamics and measurement relationships used in each filter branch. The specific forms are given as follows:
Here, $I_{3}$ represents a 3 × 3 identity matrix, and $\tau$ is the discretization parameter. In the observation matrices, $\varepsilon_{p}$ and $\varepsilon_{v}$ denote the micro-coupling coefficients for position and velocity, respectively. $G$ stands for the Jacobian matrix of acceleration with respect to position, which can be expressed as follows:
Here, the position vector is denoted as $\mathbf{r}$, $\mathbf{a}$ represents the target acceleration, $\mu$ is the gravitational constant (taken as a fixed value in this context), and $r$ is the distance from the target to the origin.
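If the modeled acceleration is restricted to point-mass gravity, the Jacobian of acceleration with respect to position can be evaluated as in the following sketch; this two-body form and the Earth gravitational-parameter value are our assumptions and may omit terms used in the paper.

```python
import numpy as np

MU_EARTH = 3.986004418e14  # m^3/s^2, standard Earth gravitational parameter (assumed value)

def gravity_jacobian(r_vec: np.ndarray, mu: float = MU_EARTH) -> np.ndarray:
    """Jacobian d(a)/d(r) of the two-body acceleration a = -mu * r / |r|^3.

    Returns the 3x3 gravity-gradient matrix mu * (3 r r^T - |r|^2 I) / |r|^5.
    """
    r = np.linalg.norm(r_vec)
    return mu * (3.0 * np.outer(r_vec, r_vec) - r**2 * np.eye(3)) / r**5

J = gravity_jacobian(np.array([7.0e6, 0.0, 0.0]))  # example position roughly at low-orbit altitude
```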
The confidence metric $C_{\mathrm{KF}}$ of the KF prediction module follows a definition framework analogous to that of the AC-LSTM confidence formulation. The critical distinction stems from the KF's inherently stepwise, recursive nature: its prediction reliability quantification utilizes the accumulated multi-step forecast errors relative to ground-truth observations. This error-driven confidence measure is built from the position confidence $C_{p}$ and the velocity confidence $C_{v}$, which are defined as follows:
We employed linear and quadratic functions to model the position and velocity confidence metrics, respectively, where the coefficients $a$, $b$, and $c$ represent the optimizable parameters of the corresponding regression functions. The prediction uncertainty quantification is implemented through residual analysis over $N$ steps, with $S_{p}$ and $S_{v}$ denoting the standard deviations of the position and velocity prediction residuals, respectively, mathematically defined as
$$e_{i} = z_{i} - \hat{z}_{i}, \qquad S = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(e_{i} - \bar{e}\right)^{2}},$$
where $z_{i}$, $\hat{z}_{i}$, and $e_{i}$ are the measurement, the prediction, and their residual at step $i$, respectively, and $S$ is the standard deviation of the prediction residuals over a total of $N$ steps.
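The multi-step residual statistics and their mapping to confidences could be computed along the following lines; the specific linear and quadratic coefficient values are hypothetical placeholders consistent with, but not taken from, the description above.

```python
import numpy as np

def residual_std(z: np.ndarray, z_hat: np.ndarray) -> float:
    """Standard deviation S of the prediction residuals e_i = z_i - z_hat_i over N steps."""
    return float(np.std(z - z_hat))

def kf_confidence(s_pos: float, s_vel: float,
                  a: float = 1e-5, b: float = 1e-9, c: float = 1.0):
    """Map residual spreads to confidences: linear in s_pos, quadratic in s_vel.

    Coefficients a, b, c are hypothetical; a larger residual spread yields a lower confidence.
    """
    c_pos = float(np.clip(c - a * s_pos, 0.0, 1.0))
    c_vel = float(np.clip(c - b * s_vel**2, 0.0, 1.0))
    return c_pos, c_vel
```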
2.3. Confidence-Based Fusion Module
In the fusion module, the predictions and confidence levels from the AC-LSTM and KF prediction modules were dynamically weighted to obtain the final 3D trajectory vector prediction result $P$, where $C_{\mathrm{net}}$ and $C_{\mathrm{KF}}$ are the confidence levels of the two modules, and $P_{\mathrm{net}}$ and $P_{\mathrm{KF}}$ are the corresponding prediction outputs from the AC-LSTM and KF modules, respectively. The final outputs are six-dimensional vectors consisting of the three-dimensional position and velocity coordinates.
Finally, the fused predicted trajectory, including both 3D position coordinates and 3D velocity vectors, was used as the model’s output, with a total weighted confidence score representing the reliability of the prediction result.
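One way to realize this dynamic weighting is a normalized confidence-weighted average, sketched below; the normalization and the aggregate confidence score are our assumptions rather than the paper's exact formulation.

```python
import numpy as np

def fuse(p_net: np.ndarray, c_net: np.ndarray,
         p_kf: np.ndarray, c_kf: np.ndarray, eps: float = 1e-8):
    """Confidence-weighted fusion of the AC-LSTM and KF predictions.

    p_net, p_kf: 6-D vectors (3D position + 3D velocity); c_net, c_kf: confidences
    (per-dimension or scalar). Returns the fused prediction and an overall score.
    """
    w = c_net + c_kf + eps
    p = (c_net * p_net + c_kf * p_kf) / w                    # element-wise weighted average
    overall_conf = float(np.mean(np.maximum(c_net, c_kf)))   # illustrative aggregate score
    return p, overall_conf
```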
To summarize, the proposed framework incorporates noise-awareness at multiple levels to enhance robustness in uncertain environments. The AC-LSTM module leverages multi-task training on datasets with varying SNRs and utilizes a confidence-guided loss function to reflect prediction uncertainty. The Kalman filter module introduces a dual-channel linear filtering structure, where confidence is estimated through multi-step residual analysis. Finally, the fusion module adaptively integrates outputs from both branches based on their confidence levels, achieving a balanced and reliable prediction outcome even under significant noise, as demonstrated in the experiments presented in Section 4.
4. Performance Evaluation
4.1. Ablation Study on Model Components
To systematically validate the efficacy of our framework, we conducted a three-branch ablation study: (1) full architecture (AC-LSTM+KF)—the proposed confidence-based joint prediction algorithm; (2) AC-LSTM standalone—pure deep learning implementation without confidence-guided KF integration; and (3) KF standalone—the conventional filtering approach excluding neural network enhancements.
Through a comparative analysis of the prediction accuracy across different configurations, we further examined the synergistic role of AC-LSTM and the Kalman filter in trajectory prediction, highlighting the joint model's advantages in handling both linear and nonlinear trajectories. The results of the ablation experiments are listed in Table 4.
Ablation studies revealed distinct performance patterns across model configurations. The integrated AC-LSTM+KF architecture achieves superior precision, demonstrating an 82% improvement over the standalone AC-LSTM and a 96% enhancement over the KF-only implementation. This validates the complementary strengths of the two modules: AC-LSTM's nonlinear modeling capacity effectively captures complex trajectory patterns, whereas the KF's linear estimation capability provides noise-resistant stabilization. This significant performance gap highlights the critical need for hybrid approaches in high-noise dynamic trajectory scenarios.
To validate the confidence estimation accuracy and training effectiveness of our neural component, we conducted isolated testing with pre-labeled confidence datasets. Figure 5 quantitatively compares the input confidence levels against the network-derived confidence estimates across different noise levels; the Spearman correlation coefficients for the X, Y, and Z confidence dimensions are 0.987, 0.989, and 0.990, respectively.
As shown in Figure 5, the predicted confidence tracks data reliability: under low-SNR conditions the confidence range expands significantly, indicating reduced reliability, whereas high-SNR scenarios yield tightly clustered confidence values approaching 1.0, consistent with statistical expectations.
4.2. Prediction Accuracy Under Different SNR
To rigorously evaluate the efficacy and practical viability of our proposed methodology, we conducted comprehensive benchmarking against six established baseline approaches prior to controlled experimentation. The comparator models included the following: (1) a linear Kalman filter (KF), (2) a particle filter (PF) [23], (3) a multi-layer CNN, (4) an LSTM, (5) a gated recurrent unit (GRU) [24], and (6) a transformer–encoder architecture [25].
All models were trained on identical input tensors with dimensions of 1 × 6 × 10, maintaining a consistent spatial–temporal resolution. Notably, conventional architectures generate output tensors of size 1 × 6 × 1, whereas our dual-branch framework produces enhanced 1 × 6 × 2 outputs through the channel-wise concatenation of prediction-confidence tensor pairs.
The simulation employed an identical methodology on test data with the following parameter configurations: origin point coordinates (125° W longitude, 45.0° N latitude, and 50 m altitude), destination point coordinates (115° E longitude, 35.0° N latitude, and 10 m altitude), and a trajectory apogee of 850 km. To simulate real-world sensing conditions, controlled Gaussian noise was injected at an SNR of 20 dB. The simulated test data are shown in Figure 6.
Standard experimental protocols typically mandate noise reduction and signal filtering during data preprocessing to enhance algorithmic prediction accuracy. To rigorously evaluate the noise robustness of our method, we deliberately excluded preprocessing stages and directly utilized raw sensor data with 20 dB SNR for validation testing.
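For context, additive Gaussian noise at a prescribed SNR can be generated as in the following generic sketch; this is a standard recipe, not necessarily the exact procedure used to produce the test data.

```python
import numpy as np

def add_awgn(signal: np.ndarray, snr_db: float, rng=None) -> np.ndarray:
    """Add zero-mean Gaussian noise so that the signal-to-noise ratio equals snr_db."""
    rng = np.random.default_rng() if rng is None else rng
    p_signal = np.mean(signal**2)                       # average signal power
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))      # noise power for the target SNR
    return signal + rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)

noisy = add_awgn(np.linspace(0.0, 1e6, 100), snr_db=20.0)
```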
The benchmark evaluation measured three-dimensional trajectory prediction performance across comparative algorithms using mean absolute error (MAE) metrics. Our analysis specifically highlights both instantaneous error fluctuations and aggregate performance through mean error quantification, with comprehensive comparative results detailed in Figure 7.
The figure above illustrates the prediction error fluctuations of the proposed algorithm and comparison algorithms on the test data. The x-axis corresponds to the trajectory sampling points, and the y-axis quantifies the discrepancy between the predicted and true values. The colored markers along the y-axis display the overall mean errors for each of the seven algorithms.
As shown in the figure, the traditional convolutional neural network demonstrates limited efficacy in regressing high-speed aerospace trajectories in complex environments. In contrast, the proposed method reduces the prediction error by a factor of 2.8 relative to this conventional approach. When configured with appropriate parameters, the baseline algorithms achieved relatively accurate predictions; however, our method showed statistically significant improvements: a 20.9% mean error reduction over Kalman filtering, 7.5% versus particle filtering, 28.3% against LSTM, 21% relative to GRU, and 26.9% compared to transformer architectures.
These experimental results conclusively demonstrate the dual advantage of the proposed algorithm for 3D trajectory prediction: optimal absolute accuracy coupled with minimal error fluctuations across operational scenarios.
To rigorously assess the noise robustness of the algorithm, we conducted comprehensive trajectory prediction experiments employing datasets corrupted with varying levels of noise (10–40 dB SNR). The quantitative evaluation employed three complementary metrics: the MAE, the root mean squared error (RMSE), and the coefficient of determination ($R^2$), systematically measuring precision loss, error magnitude, and goodness of fit across signal degradation conditions. The experimental results are shown in Figure 8.
Figure 8a,b show that the proposed algorithm consistently achieved the lowest MAE and RMSE values under different SNR conditions. In the experiment with an SNR of 10 dB, the prediction error metrics of the proposed algorithm were reduced by 38% and 41%, respectively, compared with the best-performing baseline algorithm.
Figure 8c shows that the proposed algorithm achieved the highest $R^2$ value in the model evaluation. In the SNR experiment at 10 dB, the $R^2$ metric of the proposed algorithm is 13.6% higher than that of the best baseline model.
5. Comparison with Baseline Methods
The proposed confidence-aware fusion mechanism demonstrates superior prediction accuracy compared to conventional 3D trajectory predictors. Through the progressive refinement of prediction errors across trajectory phases, our method effectively integrates linear Kalman filtering with nonlinear neural dynamics. The baseline models used for comparison include both classical integration methods and recent deep learning approaches. Specifically, RK4 represents a physics-based baseline grounded in orbital motion equations, while DeepLSTM, CE-LSTM, and BiLSTM-AM are drawn from recent works targeting trajectory prediction under noisy conditions. These models provide diverse reference points for evaluating the effectiveness and robustness of the proposed hybrid AC-LSTM and Kalman filter framework.
The benchmark evaluation includes the following: (1) BiLSTM-AM networks; (2) CE-LSTM networks; and (3) deep recurrent LSTM networks, with traditional trajectory equations serving as the reference baseline (based on fourth-order Runge-Kutta integration [26]). Experimental validation utilized the dataset described in Section 3.2, partitioned into 70% training, 20% validation, and 10% testing subsets.
Five metrics evaluate the prediction fidelity; their formal definitions are provided below, followed by a computational sketch.
Mean absolute error (MAE):
$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|y_{i} - \hat{y}_{i}\right|$$
MAE measures the mean absolute deviation between the predicted and actual values and is suitable for assessing the magnitude of the absolute error.
Root mean square error (RMSE):
$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_{i} - \hat{y}_{i}\right)^{2}}$$
RMSE is the square root of the mean squared prediction error; it reflects the overall error magnitude, and larger errors are amplified.
Mean absolute percentage error (MAPE):
$$\mathrm{MAPE} = \frac{100\%}{n}\sum_{i=1}^{n}\left|\frac{y_{i} - \hat{y}_{i}}{y_{i}}\right|$$
MAPE expresses the prediction error as a percentage of the actual value and is often used to assess the relative error of the model, making it informative when the data span markedly different scales.
Mean squared percentage error (MSPE):
$$\mathrm{MSPE} = \frac{100\%}{n}\sum_{i=1}^{n}\left(\frac{y_{i} - \hat{y}_{i}}{y_{i}}\right)^{2}$$
MSPE expresses the squared prediction error as a percentage of the actual value; it penalizes large errors more heavily and is therefore sensitive to outliers.
Coefficient of determination ($R^2$):
$$R^{2} = 1 - \frac{\sum_{i=1}^{n}\left(y_{i} - \hat{y}_{i}\right)^{2}}{\sum_{i=1}^{n}\left(y_{i} - \bar{y}\right)^{2}}$$
$R^2$ measures how well the model fits the data, i.e., how well the independent variables explain the dependent variable. The closer the $R^2$ value is to 1, the better the model; a value close to zero indicates a poor model fit.
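A compact implementation of the five metrics (computed per prediction dimension) is sketched below, mirroring the standard definitions given above.

```python
import numpy as np

def metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """MAE, RMSE, MAPE (%), MSPE (%), and R^2 for one prediction dimension."""
    err = y_pred - y_true
    rel = err / y_true                       # assumes y_true contains no zeros
    return {
        "MAE": float(np.mean(np.abs(err))),
        "RMSE": float(np.sqrt(np.mean(err**2))),
        "MAPE": float(100.0 * np.mean(np.abs(rel))),
        "MSPE": float(100.0 * np.mean(rel**2)),
        "R2": float(1.0 - np.sum(err**2) / np.sum((y_true - y_true.mean())**2)),
    }
```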
Our experimental validation incorporates realistic noise simulation by introducing zero-mean Gaussian perturbations ($\sigma_{x}$ = 35,000 m, $\sigma_{y}$ = 20,000 m, $\sigma_{z}$ = 60,000 m) to the radar measurements, and the SNR of these data was calculated to be approximately 40 dB.
Figure 9 and Table 5 compare the multi-dimensional prediction errors (X/Y/Z axes) and metrics of the five methods.
As shown in Figure 9 and Table 5, the proposed method demonstrates superior performance across all three spatial dimensions compared to the classical RK4 and several deep learning baselines (CE-LSTM, BiLSTM-AM, and Deep LSTM). Notably, in the Y-axis direction, which corresponds to the linear component of the trajectory, our method achieves a substantial performance advantage. Specifically, it records the lowest MAE (24,135 m), RMSE (30,813 m), and MAPE (0.93%) among all methods, indicating highly accurate and stable predictions. This significant improvement is attributed to the proposed model's strong capability in capturing linear motion trends and effectively leveraging the temporal dependencies of the data.
It is also important to note that the MAPE and MSPE values for the Y-axis are generally higher across all models. This does not necessarily indicate poor model performance. Rather, the Y-axis component exhibits relatively small variation ranges (as shown in Figure 6b), which amplifies relative error metrics such as MAPE and MSPE. Consequently, even small absolute deviations can result in large percentage-based errors. Despite this, our method still achieves the lowest MAPE and MSPE in the Y direction, underscoring its superior performance and stability in modeling linear trajectories under challenging evaluation conditions.
In contrast, the X and Z axes exhibit more nonlinear and oscillatory behavior, making the prediction task more challenging. Even so, our method still outperforms all baseline models in these two directions. For instance, it achieves an MAE of 32,446 m and RMSE of 40,022 m in the X-axis, and 52,659 m and 66,397 m in the Z-axis, respectively. Although the performance margins in these directions are slightly narrower due to the inherent complexity of nonlinear trajectories, the proposed method consistently delivers the best results. This indicates that our model can generalize well across both linear and nonlinear dynamic patterns.
In terms of overall prediction quality, our method also exhibits the highest coefficient of determination ($R^2$), exceeding 97.66% in all three directions and reaching up to 99.98% on the Y-axis, which confirms its strong goodness of fit. Moreover, the mean squared percentage error (MSPE) remains below 0.01% in all cases, showcasing the model's remarkable robustness to noise. Taken together, the experimental results clearly validate that the proposed dual-confidence fusion model not only improves prediction accuracy across multiple dimensions but also enhances robustness under noisy conditions, thereby making it highly suitable for practical applications in 3D trajectory forecasting.
Table 6 provides a comparison of memory occupancy and inference efficiency across several methods. The results indicate that the traditional method, which relies on nested loops, exhibits slower performance. Although the inference time of the method proposed in this study is marginally longer than that of the other approaches, the model incorporates a greater number of neurons, resulting in the highest inference efficiency among the evaluated methods.
6. Conclusions
The experimental results on a simulated three-dimensional space target dataset confirm that the proposed method not only achieves high prediction accuracy but also demonstrates exceptional robustness in complex environments. Specifically, our confidence-based dual-model fusion framework, which separately processes linear and nonlinear trajectory components, significantly improves prediction performance. Compared to related algorithms, our method reduces the error in space target trajectory prediction, with the MAE reduced by at least 11.1% and the RMSE reduced by at least 12%. Furthermore, the proposed method maintains higher operational efficiency within the neural network module, ensuring that computational requirements are well balanced with performance gains. These results underscore the effectiveness of our dual-confidence fusion strategy, which combines the strengths of the AC-LSTM network for nonlinear motion prediction and the Kalman Filter for quasi-linear motion modeling. This approach not only enhances prediction accuracy but also ensures reliability in real-world aerospace defense applications. The proposed method therefore holds significant potential for aerospace defense systems, where rapid and precise target prediction is crucial for operational success in the complex and dynamic environments typical of aerospace missions.
Future work could explore several promising avenues. First, the model could be extended to a broader range of defense scenarios, such as those encountered in aerospace environments, where rapid changes in target motion and diverse threat profiles are prevalent. Testing the model with different datasets, including real-world projectile trajectory data, would enhance its generalizability and ensure its effectiveness in the varying operational conditions encountered in aerospace defense. Second, investigating more advanced feature extraction techniques and confidence estimation methods would allow further refinement of the model's predictive accuracy, which is critical for space situational awareness systems, where precision is paramount. Finally, integrating additional sensor data from sources such as radar, infrared, and satellite systems would strengthen the model's robustness, providing a more comprehensive and adaptable solution for aerospace applications. These advancements would not only improve prediction accuracy but also ensure that the model can be effectively deployed in real-time aerospace missions, where quick, reliable, and accurate target tracking is vital for mission success.