Given the growing demand for remote-sensing satellite mission scheduling and its critical role in ensuring optimal satellite operations, our study addresses the complexity of onboard mission queue scheduling. These challenges stem from the dynamic nature of mission prioritization and call for a versatile, adaptive solution capable of responding to changing conditions. While traditional static or semi-static mission scheduling methods offer a baseline, an efficient, robust, and adaptable approach is essential. To address the NP-hard nature of the satellite task scheduling problem, we develop a novel scheduling algorithm based on the BnB method. The algorithm efficiently explores the solution space by systematically evaluating the feasibility of potential schedules and pruning suboptimal paths early in the computation. Its dynamic design allows real-time adjustment of task prioritization and resource allocation in rapid response to unforeseen changes in satellite state or task requirements. By combining feedback mechanisms with adaptive strategies, the algorithm ensures that resource allocation reflects not only the initial plan but also the current operational environment, thereby improving scheduling flexibility and mission readiness. This paper therefore leverages the BnB method to address the complex challenges posed by onboard task queue scheduling in a dynamic task prioritization context.
4.1. Dynamic Priority Satellite Task Scheduling Model
Given the complex nature of satellite task scheduling, especially with changing priorities, a robust mathematical model is indispensable. Let’s derive this based on our defined notations:
The total number of tasks is represented by n.
The priority of each task is given by $p_i$ for $i = 1, \dots, n$.
The start time, end time, and running time of the $i$-th task are $s_i$, $e_i$, and $d_i$, respectively.
Our goal is to maximize the weighted sum of tasks based on their priorities and running times, which can be represented as $\sum_{i=1}^{n} p_i \cdot d_i$. Another aspect to consider is the time difference between the start and end times of each task, $e_i - s_i$, which serves as a denominator in our objective function. Combining the above, our objective function is as follows:
$$\max \ F = \frac{F' - F_{\min}}{F_{\max} - F_{\min}}, \qquad F' = \sum_{i=1}^{n} \frac{p_i \cdot d_i}{e_i - s_i}, \tag{22}$$
where $F_{\max}$ and $F_{\min}$ are the maximum and minimum possible values of our objective, respectively.
The constraints of our model include the following. Each task must finish within its allowed time window:
$$s_i + d_i \le e_i, \quad \forall i \in \{1, \dots, n\}. \tag{23}$$
Each task should be scheduled:
$$x_i = 1, \quad \forall i \in \{1, \dots, n\}, \tag{24}$$
where $x_i$ is the binary scheduling indicator of task $i$. Overlapping of tasks is not allowed:
$$s_j \ge s_i + d_i \ \ \text{or} \ \ s_i \ge s_j + d_j, \quad \forall i \ne j. \tag{25}$$
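To make the model concrete, the following Python sketch encodes a task as the tuple $(p_i, s_i, e_i, d_i)$, checks the time-window constraint, and evaluates the priority/duration objective. The `Task` class and both helper functions are illustrative assumptions, not part of the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class Task:
    p: float  # priority p_i
    s: float  # start of allowed window, s_i
    e: float  # end of allowed window, e_i
    d: float  # running time d_i

def feasible(tasks):
    """Check the time-window constraint s_i + d_i <= e_i for every task."""
    return all(t.s + t.d <= t.e for t in tasks)

def objective(tasks):
    """Priority * duration for each task, normalized by the window length e_i - s_i."""
    return sum(t.p * t.d / (t.e - t.s) for t in tasks)

tasks = [Task(p=3.0, s=0.0, e=10.0, d=4.0), Task(p=1.0, s=10.0, e=18.0, d=5.0)]
print(feasible(tasks))                # both tasks fit their windows
print(round(objective(tasks), 3))     # 3*4/10 + 1*5/8 = 1.825
```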
In our model, Equation (22) defines the objective function, which maximizes the priority- and duration-weighted sum of tasks. Equation (23) ensures that each task is completed within its allowed time window. Equation (24) requires that the scheduling indicator of every task equal 1, i.e., that every task be scheduled; this initial condition ensures that all tasks are considered equally and initialized correctly in the scheduling model. Equation (25) ensures that there is no time overlap between tasks. Together, these formulas form a powerful mathematical model for the dynamic priority adjustment problem in satellite mission scheduling.
Axiom 1 (Uniqueness of Task Execution). Every task is executed at most once within its allowed time window.
Explanation: This foundational principle ensures that no task in the system is executed multiple times, ensuring resource efficiency and avoiding redundancy.
Having established the above axiom, a natural question arises: how do tasks interact with each other regarding timing and execution? This leads us to a fundamental theorem.
Theorem 1 (Task Overlapping Prohibition). Given two tasks $T_i$ and $T_j$, if $T_i$ starts before $T_j$ and ends after $T_j$ starts, then $T_j$ cannot start until $T_i$ is complete.
Proof. Consider the situation where $T_i$ is already running when $T_j$ is set to begin. From our model constraints, we have as follows:
$$s_i + d_i \le e_i, \tag{26}$$
$$s_j \ge s_i + d_i. \tag{27}$$
From Equation (26), $T_i$ will end at time $s_i + d_i$. For $T_j$ to start before $T_i$ ends would mean $s_j < s_i + d_i$. However, combining this with Equation (27) leads to a contradiction, since $s_j \ge s_i + d_i$, which violates the non-overlapping constraint. So, $T_j$ cannot start until $T_i$ has been completed. □
This theorem is deeply rooted in our mathematical model and provides clear insights into task execution patterns. It ensures that tasks do not overlap and that each task obtains the resources it needs without competing for them.
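Theorem 1 translates directly into a pairwise check on scheduled tasks. The `no_overlap` validator below, operating on hypothetical (start, duration) pairs, is an illustrative sketch of the constraint rather than code from the paper.

```python
def no_overlap(schedule):
    """Theorem 1 check: with tasks sorted by start time, each task must
    finish (start + duration) no later than the next task starts."""
    ordered = sorted(schedule)  # (start, duration) pairs
    return all(s1 + d1 <= s2 for (s1, d1), (s2, _) in zip(ordered, ordered[1:]))

print(no_overlap([(0, 4), (4, 3), (8, 2)]))  # back-to-back tasks: allowed
print(no_overlap([(0, 4), (3, 3)]))          # second starts before first ends: forbidden
```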
We are now poised to address the central challenge of this research: devising a solution strategy for the onboard task queue scheduling problem, particularly in the context of dynamically changing task priorities. The problem can be concisely defined as follows:
Given a set of tasks (each with a dynamic priority, start time, end time, and execution duration), determine an optimal scheduling sequence that maximizes the system’s utility while adhering to the constraints derived from the previous mathematical model and hypotheses.
Mathematically, we aim to solve:
$$\max \ \frac{F' - F_{\min}}{F_{\max} - F_{\min}}, \qquad F' = \sum_{i=1}^{n} \frac{p_i \cdot d_i}{e_i - s_i}, \tag{28}$$
subject to the constraints defined by Axiom 1 and Theorem 1.
This problem encapsulates the essence of the satellite scheduling challenge under dynamic priorities. In the subsequent sections, we will delve into potential solution strategies, algorithmic frameworks, and a holistic solution approach to address this challenge.
One of the most prominent strategies for solving combinatorial optimization problems, as we define them, is BnB. Rooted in the principles of integer programming and discrete optimization [41,42], this method provides a structured way to explore the solution space while discarding suboptimal solutions, ensuring computational efficiency and accuracy.
Essence of the Branch and Bound method:
The core concept of the BnB approach lies in two primary operations:
Branching: This involves decomposing the problem into smaller subproblems or branches. For our task scheduling problem, this translates into exploring different permutations of the task sequence.
Bounding: By evaluating the potential solution quality of branches, we can discard (or “bound”) certain branches if they do not lead to a better solution than the best solution found so far.
Before exploring the mathematical complexity, it is crucial to understand the basic concepts that govern BnB methods. Our goal is to fully and efficiently explore the solution space, which often requires us to make informed decisions to select branches that should be explored further and discard those that are unlikely to provide advantages. This decision-making process depends on the specific properties and characteristics of the problem. Combinatorial optimization problems often involve the concept of suboptimality, which is a characteristic of our task scheduling problem.
Theorem 2 (Suboptimality Principle). If a branch cannot produce a solution that is better than the current best solution, then neither can any of its descendants in the solution tree.
Proof. Suppose we have a branch B which has an upper bound greater than or equal to the value of the best known solution. Let S be any solution that belongs to the sub-tree rooted at B. By the definition of an upper bound, the value of S is no better than the upper bound of B. Therefore, S is also no better than the best-known solution. This holds true for any solution in the sub-tree rooted at B, which proves our claim. □
During the branch and bound process, we compute an upper bound (usually from a relaxed version of the problem) and a lower bound (based on the current best solution) at each node or branch point. If the upper bound of a branch is no better than the current best solution, the branch can be pruned.
The pruning strategy is a vital component of BnB, as shown in Figure 3. Pruning helps to efficiently navigate the solution space by eliminating suboptimal branches. The described strategy ensures that our method remains computationally efficient by focusing only on promising branches.
Algorithm 1 provides a structured representation of this method. The pseudo-code encapsulates the mathematical essence of the BnB method. At each step, we evaluate the potential of a node (or task scheduling sequence) using the objective function derived from Equation (28). If the potential is promising, we explore it further by branching; otherwise, we prune the branch. This careful balance between exploration (branching) and exploitation (bounding) ensures that the algorithm efficiently navigates the solution space toward the optimal task scheduling sequence. We therefore propose Algorithm 1 to derive an optimal task scheduling sequence. Initially, best_solution is initialized to infinity, setting a reference point for the quantity the algorithm seeks to minimize (line 1, as highlighted by Equation (28)). The starting point of the BnB method is determined by creating an initial_node, which represents the initial task scheduling node (line 2). When the algorithm encounters a leaf node, it evaluates the current solution against the best solution found so far; if it is more desirable, best_solution is updated accordingly (lines 3–5). For nodes with branching potential, a new sequence of possible tasks is expanded from the current node (line 7). Subsequently, by applying its bounding rule, the algorithm discards nodes considered not beneficial to the solution space (line 8). Each node surviving pruning undergoes a recursive application of the BnB method, allowing the algorithm to delve deeper into the potential solution terrain (lines 9–11). This mechanism ensures that the algorithm efficiently and thoroughly searches for the optimal task scheduling order.
Algorithm 1: Branch and Bound Algorithm for Problem Solving
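Since the listing itself is not reproduced here, the following Python sketch mirrors the line-by-line description above: best_solution initialized to infinity (line 1), leaf evaluation (lines 3–5), branching (line 7), bound-based pruning (line 8), and recursion (lines 9–11). The cost model (sum of completion times of back-to-back tasks) and all function names are illustrative assumptions, not the paper's exact formulation.

```python
import math
from itertools import accumulate

def cost(durations):
    """Total completion time when the given tasks run back to back."""
    return sum(accumulate(durations))

def branch_and_bound(durations, chosen=(), best=None):
    """Explore task orderings; prune any branch whose prefix cost already
    matches or exceeds the incumbent best_solution (a valid lower bound,
    since appending tasks can only add positive completion times)."""
    if best is None:
        best = {"cost": math.inf, "order": None}                # line 1: best_solution = infinity
    if len(chosen) == len(durations):                           # leaf node: evaluate (lines 3-5)
        c = cost([durations[i] for i in chosen])
        if c < best["cost"]:
            best["cost"], best["order"] = c, chosen
        return best
    for i in range(len(durations)):                             # branching (line 7)
        if i in chosen:
            continue
        cand = chosen + (i,)
        if cost([durations[j] for j in cand]) >= best["cost"]:  # pruning (line 8)
            continue
        branch_and_bound(durations, cand, best)                 # recursion (lines 9-11)
    return best

print(branch_and_bound([3, 1, 2]))  # shortest-first ordering minimizes total completion time
```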
In the context of satellite mission scheduling planning, the schematic shown in Figure 4 illustrates a typical scheduling scenario. This helps to grasp the complexity inherent in onboard mission queue scheduling.
Figure 4 is an illustrative guide for integrating remote-sensing satellite mission scheduling with BnB methods, visually representing the complex processes involved. The figure serves as a critical roadmap for navigating the complexity of satellite mission scheduling, where each step implies an interplay between system mission planning and the adaptive resolution of subproblems. It depicts how the entire scheduling task (which may be partitioned into smaller, more manageable sequences) is optimized by a Branch and Bound algorithm that methodically explores the decision tree, prunes suboptimal paths, and focuses on promising solutions. It conveys the meticulous choreography of satellite operations, where each task is closely linked to the overall mission objectives, ensuring that the satellite makes the best use of its valuable operational time as it collects and transmits critical Earth observation data.
Figure 4 shows the remote-sensing satellite mission scheduling process in detail.
4.2. Constructing an Algorithmic Framework
Given the system model developed in the previous sections and the understanding of the optimal solution algorithm, we now present a comprehensive algorithmic framework to optimize the task scheduling problem. The framework is designed based on the mathematical relationships and constraints previously outlined in Equations (20) through (25) and is intended to refine the solution space further and provide enhanced performance.
Iterative Refinement through Sub-problems:
Often, a promising approach in combinatorial optimization schemes is to decompose the overall problem into smaller, more manageable subproblems. By solving these subproblems and combining their solutions, we can iteratively refine our answer to the main issue. In our task scheduling context, this involves breaking down the entire task sequence into more minor sequences, optimizing them individually, and then combining them.
Mathematical Expression for Sub-problem Decomposition:
Let us denote the entire task sequence by $T$. We can break it down into $k$ smaller sequences $T_1, T_2, \dots, T_k$ such that:
$$T = T_1 \cup T_2 \cup \dots \cup T_k, \tag{29}$$
where $T_j$ represents the $j$-th sub-sequence. The optimization objective for each $T_j$ is to minimize the completion time, which can be represented as:
$$\min \ C(T_j), \tag{30}$$
where $C(T_j)$ denotes the completion time of sequence $T_j$, as defined in Equation (21).
Merging Sub-problem Solutions:
Once we have optimized solutions for each sub-sequence $T_j$, the next step is to merge them in a manner that results in an optimized solution for the entire sequence $T$.
The challenge lies in ordering these sub-sequences to achieve global optimization.
To address the problem of efficient task scheduling, we propose Algorithm 2, which generates an optimized sequence of tasks. The task sequence $T$ is initially decomposed into multiple sub-sequences $T_1, \dots, T_k$, as outlined in Equation (29) (line 1). For each of these sub-sequences $T_j$, the algorithm embarks on its optimization (lines 2–4). This refinement is achieved through the OptimizeSubSequence function, leveraging BnB in alignment with the relation presented in Equation (30), ensuring each sub-sequence $T_j$ is meticulously optimized (lines 8–9). After all sub-sequences are optimized, the algorithm merges them to form the comprehensive optimized task sequence $T^*$, as described in Equation (31) (line 5). In summary, Algorithm 2 delivers an optimized sequence, $T^*$, that undergoes segment-based refinement followed by a cohesive merger, ensuring overall efficiency.
Algorithm 2: Optimize Task Scheduling Algorithm
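As a minimal sketch of Algorithm 2 (the listing is not reproduced here), the decomposition, per-sub-sequence optimization, and merge steps might look as follows in Python. The contiguous-chunk `decompose` and the shortest-processing-time-first stand-in for the BnB optimizer are illustrative assumptions, not the paper's implementation.

```python
def decompose(tasks, k):
    """Split the task sequence T into k contiguous sub-sequences (Eq. (29))."""
    size = -(-len(tasks) // k)  # ceiling division
    return [tasks[i:i + size] for i in range(0, len(tasks), size)]

def optimize_subsequence(sub):
    """Stand-in for the BnB optimizer of Eq. (30): shortest-processing-time
    first, which minimizes total completion time within a sub-sequence."""
    return sorted(sub)

def optimize_task_scheduling(tasks, k):
    """Algorithm 2 sketch: decompose (line 1), optimize each sub-sequence
    (lines 2-4), then merge into the full sequence T* (line 5)."""
    optimized = [optimize_subsequence(s) for s in decompose(tasks, k)]
    return [t for s in optimized for t in s]

print(optimize_task_scheduling([4, 1, 3, 2, 6, 5], k=2))  # [1, 3, 4, 2, 5, 6]
```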
These individual sub-sequences are optimized using BnB, a well-known combinatorial optimization technique. The method systematically enumerates candidate solutions through a state-space search: the set of candidate solutions forms a rooted tree, where the complete set is located at the root. The algorithm explores the branches of this tree, which represent subsets of the solution set. Before enumerating the candidate solutions of a branch, the branch is checked against the estimated upper and lower bounds of the optimal solution. If it does not produce a better solution than the algorithm’s best solution, it is discarded.
Once each sub-sequence has been optimized, the merging phase begins. This phase is paramount, as how the sub-sequences are combined determines the efficiency of the resulting task sequence $T^*$. The algorithm relies on the additivity of the sub-sequence completion times, as shown in Equation (31). However, a salient point to consider is the ordering of these sub-sequences, which can significantly impact the global optimization of $T$. The ordering should be carefully planned to ensure that the merged sequence achieves the overall goal of the shortest completion time.
4.3. Advancing to a Comprehensive Solution
We now propose a comprehensive solution to the task scheduling problem. The solution will use the previously discussed iterative optimization techniques but with additional optimizations to improve performance and ensure accuracy.
Incorporating Priority into Sub-problem Decomposition:
Given our objective to minimize the overall completion time, prioritizing tasks based on specific criteria, such as their complexities or dependencies, is crucial. Let us introduce a priority function $P(T_j)$ for each sub-sequence $T_j$:
$$P(T_j) = \frac{\sum_{t \in T_j} c_t}{|T_j|}, \tag{32}$$
where $\sum_{t \in T_j} c_t$ is the sum of the complexities of all tasks in the sub-sequence $T_j$ and $|T_j|$ is the number of tasks contained in the sub-sequence $T_j$. Higher values of $P(T_j)$ indicate that the sub-sequence $T_j$ has tasks with higher average complexity and should be scheduled earlier.
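The priority function amounts to an average-complexity score per sub-sequence. The sketch below assumes a hypothetical per-task complexity map, which the paper does not specify.

```python
def priority(sub_sequence, complexity):
    """P(T_j): sum of task complexities divided by the number of tasks."""
    return sum(complexity[t] for t in sub_sequence) / len(sub_sequence)

complexity = {"t1": 2.0, "t2": 6.0, "t3": 4.0}
print(priority(["t1", "t2"], complexity))  # (2.0 + 6.0) / 2 = 4.0
print(priority(["t2", "t3"], complexity))  # (6.0 + 4.0) / 2 = 5.0, so this group runs first
```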
Optimized Merging of Sub-problem Solutions:
The main insight of this section is that when merging sub-sequences, we should consider their priority values. This ensures that sub-sequences with tasks of higher average complexity are scheduled first, thus potentially reducing the overall completion time.
To holistically tackle the challenge of task scheduling, we introduce Algorithm 3, designed to deliver a fully optimized task sequence, $T^*$. In the initial phase, the algorithm leverages the DecomposeSubSequences function to split the task sequence $T$ into multiple sub-sequences $T_1, \dots, T_k$, as dictated by Equation (29) (line 8). Upon decomposition, the algorithm performs a two-fold operation on each of these sub-sequences. It first invokes the CalculateP function to compute $P(T_j)$, following the scheme presented in Equation (32) (line 9). After this calculation, the OptimizeSubSequence function steps in, utilizing BnB to refine each $T_j$ (line 10). With all sub-sequences thoroughly optimized, the next strategic move is to arrange them in a particular order. The SortSubSequences function is tasked with this responsibility, ensuring that the sub-sequences are arrayed based on descending values of their computed $P(T_j)$ (line 11). Concluding the procedure, these sorted sub-sequences are merged by the MergeSortedSubSequences function to form the fully optimized task sequence $T^*$, as elaborated in Equation (33) (line 12), where the merge applies a permutation function $\sigma$ that orders the sub-sequences based on decreasing values of $P(T_j)$. In essence, Algorithm 3 offers a comprehensive solution, emphasizing both the granular refinement of sub-sequences and their strategic aggregation, converging to produce a high-quality task scheduling outcome.
Algorithm 3: Total Task Scheduling Solution
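The pipeline of Algorithm 3 can be sketched end to end in Python. Every helper here (the contiguous decomposition, the sort-by-complexity stand-in for BnB, and the per-task complexity map) is an illustrative assumption matching the narrative above, not the paper's code.

```python
def total_task_scheduling(tasks, complexity, k):
    """Algorithm 3 sketch: decompose (line 8), compute P for each
    sub-sequence (line 9), optimize each sub-sequence (line 10),
    sort by descending P (line 11), and merge (line 12)."""
    size = -(-len(tasks) // k)
    subs = [tasks[i:i + size] for i in range(0, len(tasks), size)]        # DecomposeSubSequences
    scored = [(sum(complexity[t] for t in s) / len(s), s) for s in subs]  # CalculateP
    scored = [(p, sorted(s, key=complexity.get)) for p, s in scored]      # OptimizeSubSequence stand-in
    scored.sort(key=lambda ps: ps[0], reverse=True)                       # SortSubSequences
    return [t for _, s in scored for t in s]                              # MergeSortedSubSequences

complexity = {"a": 1, "b": 5, "c": 3, "d": 7}
# ["c", "d"] has average complexity 5.0 > 3.0 for ["a", "b"], so it is scheduled first.
print(total_task_scheduling(["a", "b", "c", "d"], complexity, k=2))
```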
As described above, Algorithm 3, the Total Task Scheduling Solution, takes a deeper look at the task scheduling problem by incorporating an intelligent prioritization mechanism. It combines the advantages of iterative refinement with priority-based decomposition to provide a robust and efficient scheduling solution.
By integrating prioritization into the iterative optimization approach, the method presented in Algorithm 3 provides a nuanced and holistic solution for task scheduling. The strategy balances granularity and global optimization, positioning it as a reliable solution for complex scheduling schemes.