The motivation for introducing Equation (12) is to explicitly link the feasibility of achieving a specific impact time to the physical constraints imposed by the aircraft’s maneuverability. The acceleration capacity, defined as the maximum allowable acceleration, directly limits the feasible range of control inputs, which subsequently determines the possible trajectories the aircraft can follow to achieve the desired impact time. This underscores the critical role of acceleration capacity in ensuring that the aircraft can reach the target within a specified time frame while adhering to physical limitations.
In short, Equation (12) is essential for predicting the desired impact time range because it stems directly from the physical and operational constraints on aircraft control. The FOV constraint likewise plays a critical role in the ITCG law.
3.1. Binary Search
- (1) Monotonicity description
From Equations (1)–(6), it can be concluded that the feasibility of achieving a desired impact time is influenced by several key parameters, such as the acceleration command curve and the miss distance. To formulate this engagement problem properly, an approach similar to that described in [22] is adopted: the dynamic behavior of the aircraft, combined with the control constraints, is analyzed to determine whether a specific impact time is feasible. By analyzing these equations, it can be assessed whether the desired impact time is consistent with the physical constraints of the system. However, the feasible range of the desired impact time cannot be determined directly, and an accurate determination of this range is crucial for ensuring sample quality. The traditional method uses a fixed step size: a constant increment is repeatedly added to (or subtracted from) a known feasible desired impact time until the desired impact time becomes infeasible, which determines the boundary of the feasible range. A larger step size reduces the computation time but compromises the precision of the boundary, whereas a smaller step size improves the precision but significantly extends the computation time. As a result, the fixed-step method is not well suited to determining the feasible range of the desired impact time.
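For illustration, the fixed-step scan and its step-size trade-off can be sketched as follows. This is a minimal sketch, not the method used in this paper: `is_feasible` is a hypothetical stand-in for a full trajectory simulation (which would check the acceleration command for divergence), mocked here with a simple threshold at 38.7 s.

```python
# Sketch of the traditional fixed-step scan for the upper boundary of the
# feasible desired-impact-time range. `is_feasible` stands in for a full
# trajectory simulation; here it is mocked with a simple threshold.

def fixed_step_upper_bound(is_feasible, t_start, step, t_max):
    """Add a constant step to td until infeasible; return the last feasible td."""
    td = t_start
    while td + step <= t_max and is_feasible(td + step):
        td += step
    return td

mock = lambda td: td <= 38.7  # hypothetical feasibility boundary at 38.7 s

# Coarse step: fast but imprecise (9 feasibility checks, ~0.7 s error).
coarse = fixed_step_upper_bound(mock, 30.0, 1.0, 60.0)   # -> 38.0
# Fine step: precise but slow (~870 feasibility checks).
fine = fixed_step_upper_bound(mock, 30.0, 0.01, 60.0)    # -> about 38.70
```

Each feasibility check corresponds to one full guidance simulation, so the fine-step cost grows linearly with the required precision; this is the trade-off that motivates the binary search below.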
Clearly, this problem exhibits monotonicity, for which the binary search algorithm offers significant advantages; it can therefore be employed to determine the feasible range of the desired impact time. The monotonicity is examined with the control variate method: one variable is varied while the other parameters are held fixed. To verify this hypothesis, numerical simulations were conducted across a range of initial conditions, with one parameter selected as the variable at a time and the remaining parameters fixed at the values presented in Table 2.
For the first angle parameter, the interval [40°, 50°] is selected, with values chosen every 2°; for the second, the interval [10°, 20°], with values chosen every 2°. Taking sup(td) as an example, the resulting trend is illustrated in Figure 4.
Figure 4 illustrates the variation in td with respect to the changes in the initial conditions. The results indicate that td decreases in a near-linear manner as the two parameters increase, supporting the argument for a monotonic relationship. The consistency of this trend across multiple scenarios further highlights the robustness of the proposed guidance strategy.
- (2) Binary search solution
According to Figure 2, when the selected impact time falls outside the feasible range, a divergence phenomenon occurs in the later stage of the acceleration command curve, whereas this phenomenon does not occur within the boundaries. Based on this observation, sup(td) and inf(td) can be determined. Although traditional fixed-step approaches can also obtain the boundary values, they involve a significant computational workload and long processing times. The binary search technique was selected for its simplicity and robust convergence properties. For this problem, where the objective is to determine the feasible impact time boundary within a bounded solution space, binary search narrows down the solution range with logarithmic complexity. This ensures a balanced trade-off between precision and computational cost, making it highly suitable for our application.
Compared with traditional fixed-step methods, the binary search algorithm reduces the computational complexity and accelerates convergence. It is particularly effective at identifying the desired impact time boundary within a predefined interval, as it systematically narrows the candidate range through successive halving, leading to efficient and accurate results. The primary difficulty in applying the binary search lies in determining the starting point, the ending point, and the search direction. The acceleration command curve indicates that when the desired impact time is infeasible, a divergence phenomenon occurs in the later stage of the curve, characterized by oscillations between the maximum acceleration commands; due to the nature of the guidance law, this phenomenon does not occur otherwise. Thus, checking whether the acceleration command saturates at its maximum value provides the basis for judging feasibility.
Taking the upper limit of the feasible range as an example: if divergence occurs at the median value, the upper limit is less than the median, and the search continues to the left; if no divergence occurs, the upper limit is greater than the median, and the search continues to the right. This process repeats until the upper limit is identified. Regarding the selection of the starting and ending points, the starting point can be any feasible desired impact time, while the ending point can be set to a sufficiently large value; the choice of the ending point affects the search time to some extent. The flow chart of the binary search algorithm is shown in Figure 5.
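The search procedure above can be sketched as follows. This is a minimal illustration, not the implementation behind Figure 5: `mock_simulate`, the saturation tolerance, and the 41.25 s boundary are invented stand-ins for a full guidance simulation.

```python
def command_diverges(a_cmd, a_max, tail_frac=0.2, tol=1e-6):
    """Flag divergence when the late-stage acceleration command oscillates
    at its saturation limit a_max (the criterion described in the text)."""
    tail = a_cmd[int((1.0 - tail_frac) * len(a_cmd)):]
    return all(abs(a) >= a_max - tol for a in tail)

def mock_simulate(td, a_max=160.0, n=200):
    """Stand-in for the guidance simulation: beyond an assumed feasible
    limit of 41.25 s, the command history saturates at +/- a_max."""
    if td > 41.25:
        return [a_max if i % 2 else -a_max for i in range(n)]
    return [0.3 * a_max] * n  # well below saturation: feasible

def sup_td(feasible, lo, hi, tol=1e-3):
    """Binary search for the upper limit: `lo` is any feasible desired
    impact time, `hi` a deliberately large (infeasible) end point."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            lo = mid   # no divergence at mid: the limit lies to the right
        else:
            hi = mid   # divergence at mid: the limit lies to the left
    return lo

feasible = lambda td: not command_diverges(mock_simulate(td), a_max=160.0)
upper = sup_td(feasible, 30.0, 60.0)
print(round(upper, 2))  # 41.25
```

The lower limit inf(td) is found symmetrically, with the search direction reversed; each halving replaces one full simulation sweep of the fixed-step method, giving the logarithmic cost noted above.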
3.2. Sensitivity Analysis
Sensitivity analysis is frequently employed to examine the stability of the optimal solution when the original data are inaccurate or when model parameters or system input constraints change. Specifically, sensitivity analysis helps assess how the optimal solution responds to variations in input parameters, such as changes in the boundary conditions, initial states, or acceleration capacity. In this paper, sensitivity analysis is conducted to evaluate how the variations in input parameters influence the impact time range. This analysis ensures that the model prioritizes the most critical parameters, thereby improving the efficiency and reducing the computational costs.
In the previous section, it was determined that the problem exhibits monotonicity, but the influence of individual parameters on the impact time range was not quantified. Therefore, this section focuses on how parameter changes in Equation (12) affect the impact time range. From the derivation above, an analytical solution for the impact time range is difficult to obtain; the sensitivity of each parameter is therefore evaluated through simulation verification, by examining the influence of variations in each parameter on the impact time range.
Before conducting the simulation verification, the boundary of each dimension is first specified; parameters that are excessively large or small may deviate from actual conditions and would only increase the number of simulations without significantly affecting the results. Second, the parameters are grouped into four categories: distance parameters, control-capability parameters, initial-angle parameters, and FOV-constraint parameters. The advantage of this categorization is that only one parameter from each category needs to be examined in the sensitivity analysis, as similar parameters have comparable influence levels. Finally, the database is established. The parameter ranges are (5400, 7200) m for the distances, (10, 20) deg and (40, 50) deg for the angles, and (70, 160) m/s² for the accelerations.
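The one-at-a-time (control variate) evaluation over three fixed-parameter levels can be sketched as below. The level values follow the lower-limit/median/upper-limit pattern used in this section, while `infimum_td` is a hypothetical surrogate for the binary-search evaluation of inf(td) and the parameter names are illustrative.

```python
# One-at-a-time (control variate) sensitivity sweep over levels ISV1-ISV3.
LEVELS = {  # (lower limit, median, upper limit) per category
    "distance_m": (5400.0, 6200.0, 7200.0),
    "init_angle_deg": (10.0, 14.0, 20.0),
    "fov_angle_deg": (40.0, 44.0, 50.0),
    "accel_mps2": (70.0, 110.0, 160.0),
}

def infimum_td(p):
    # Hypothetical surrogate; in reality this would run the Section 3.1
    # binary search over a full guidance simulation.
    return 20.0 + 0.002 * p["distance_m"] - 0.05 * p["accel_mps2"]

def sweep(varied, n_points=5):
    """Vary one parameter over its range while fixing the others at each level."""
    lo, _, hi = LEVELS[varied]
    grid = [lo + (hi - lo) * k / (n_points - 1) for k in range(n_points)]
    results = {}
    for level in range(3):  # ISV1, ISV2, ISV3
        fixed = {k: v[level] for k, v in LEVELS.items() if k != varied}
        results[f"ISV{level + 1}"] = [infimum_td({**fixed, varied: g}) for g in grid]
    return grid, results

grid, res = sweep("distance_m")  # three inf(td) curves, one per level
```

Comparing the spread of the three curves for each varied parameter is what distinguishes high-sensitivity from low-sensitivity dimensions in the analyses that follow.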
First, the sensitivity of the distance parameter is analyzed. Taking r as an example, the remaining dimensions are divided into three levels, labeled ISV1, ISV2, and ISV3, with parameter values (10°, 10°, 40°, 70 m/s², 70 m/s²), (14°, 14°, 44°, 110 m/s², 110 m/s²), and (20°, 20°, 50°, 160 m/s², 160 m/s²), respectively. These three groups of parameters represent the lower limits, medians, and upper limits. Taking inf(td) as an example, simulations are performed for these three cases, and the results are shown in Figure 6.
As shown in Figure 6, regardless of the level, changes in r have a relatively significant impact on inf(td), indicating high sensitivity. Although the change trend is relatively smooth, the high sensitivity means the sample selection interval should be small.
Second, the sensitivity of the control-capability parameter is analyzed. Taking the acceleration capacity as an example, the remaining dimensions are divided into three levels, labeled ISV1, ISV2, and ISV3, with parameter values (5400 m, 5400 m, 10°, 40°, 40°), (6200 m, 6200 m, 14°, 44°, 44°), and (7200 m, 7200 m, 20°, 50°, 50°), respectively. These three groups of parameters represent the lower limits, medians, and upper limits. Taking inf(td) as an example, simulations are performed for these three cases, and the results are shown in Figure 7.
As shown in Figure 7, regardless of whether it is the lower limit, median, or upper limit, the influence of changes in the acceleration capacity on inf(td) is relatively small, indicating low sensitivity. Since the change trend is relatively smooth, the sample selection interval should be large.
Third, the sensitivity of the initial-angle parameter is analyzed. The remaining dimensions are divided into three levels, labeled ISV1, ISV2, and ISV3, with parameter values (5400 m, 5400 m, 40°, 70 m/s², 70 m/s²), (6200 m, 6200 m, 44°, 110 m/s², 110 m/s²), and (7200 m, 7200 m, 50°, 160 m/s², 160 m/s²), respectively. These three groups of parameters represent the lower limits, medians, and upper limits. Taking inf(td) as an example, simulations are performed for these three cases, and the results are shown in Figure 8.
As shown in Figure 8, regardless of whether it is the lower limit, median, or upper limit, the influence of changes in the initial angle on inf(td) is relatively small, indicating low sensitivity. Since the change trend is relatively smooth, the sample selection interval should be large.
Finally, the sensitivity of the FOV-constraint parameter is analyzed. The remaining dimensions are divided into three levels, labeled ISV1, ISV2, and ISV3, with parameter values (5400 m, 5400 m, 10°, 70 m/s², 70 m/s²), (6200 m, 6200 m, 14°, 110 m/s², 110 m/s²), and (7200 m, 7200 m, 20°, 160 m/s², 160 m/s²), respectively. These three groups of parameters represent the lower limits, medians, and upper limits. Taking inf(td) as an example, simulations are performed for these three cases, and the results are shown in Figure 9.
As shown in Figure 9, regardless of whether it is the lower limit, median, or upper limit, changes in the FOV-related angle have a relatively significant impact on inf(td), indicating high sensitivity. Although the change trend is relatively smooth, the high sensitivity means the sample selection interval should be small.
In conclusion, the sensitivities of the acceleration-capacity and initial-angle parameters are low, whereas those of the distance r and the FOV-related angle are high. The analysis above provides the basis for establishing sample distribution equilibrium. Based on these results, representative data can be selected as inputs, which ensures accuracy while reducing the sample size, thereby saving training time. Therefore, in this paper, the distances are sampled every 100 m, the initial angles every 1°, the accelerations every 10 m/s², and the FOV-related angle every 0.5°.
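Assuming the step sizes map to the categories as stated (fine steps for the high-sensitivity distance and FOV angle, coarse steps for the low-sensitivity initial angle and acceleration), the resulting sample grid can be sketched as follows; parameter names and ranges are illustrative.

```python
# Sensitivity-informed sample grid: high-sensitivity axes get fine steps,
# low-sensitivity axes get coarse steps.

def arange_incl(lo, hi, step):
    """Inclusive arithmetic grid from lo to hi."""
    n = int(round((hi - lo) / step))
    return [lo + i * step for i in range(n + 1)]

grid_axes = {
    "distance_m": arange_incl(5400, 7200, 100),   # high sensitivity -> 100 m
    "init_angle_deg": arange_incl(10, 20, 1),     # low sensitivity  -> 1 deg
    "accel_mps2": arange_incl(70, 160, 10),       # low sensitivity  -> 10 m/s^2
    "fov_angle_deg": arange_incl(40, 50, 0.5),    # high sensitivity -> 0.5 deg
}

sizes = {k: len(v) for k, v in grid_axes.items()}
total = 1
for s in sizes.values():
    total *= s
print(sizes, total)  # 19 * 11 * 10 * 21 = 43890 samples
```

Uniform fine sampling of every axis would multiply the database size; coarsening only the low-sensitivity axes keeps the sample count tractable without degrading coverage where inf(td) varies most.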