In this section, we first discuss the limitation of vanilla EDF in meeting the job deadlines of non-preemptive tasks when no information about the future job release patterns of tasks is available. We then explain the design principles behind LCEDF, and finally develop the LCEDF scheduling algorithm.
4.1. Limitation of Vanilla EDF
In this subsection, we discuss the limitation of vanilla EDF. To this end, we first explain the scheduling algorithm of vanilla (global non-preemptive) EDF. Vanilla EDF manages a ready queue, in which jobs are sorted such that a job with an earlier deadline has a higher priority. Whenever there exists an unoccupied processor, the highest-priority job in the ready queue starts its execution on that processor; once a job starts its execution, it cannot be preempted by any other job until it finishes.
As many studies have pointed out [5], non-preemptive EDF is not effective in meeting job deadlines if the information about future job release patterns is not available (i.e., the system is non-clairvoyant). The following example demonstrates this ineffectiveness of EDF in meeting job deadlines.
Example 1. Consider a task set τ with the following two tasks, executed on a uniprocessor platform: , , . Consider the following scenario: (i) the interval of interest is ; and (ii) is released at and is released at . We show the schedule under vanilla EDF. At , vanilla EDF schedules , because no job of is released at ; then, occupies the processor . Suppose that is released at ; then, misses its deadline without having any chance to compete for an unoccupied processor until , as shown in Figure 1a. What if we know at that will be released at ? Then, by idling the processor in , we can make both and schedulable, as shown in Figure 1b. However, without the information about the future release time of , we cannot idle the processor; this is because, if is not released until , eventually misses its deadline. Note that this task set is schedulable by LCEDF, to be presented in Example 3 in Section 4.3.

A similar phenomenon occurs on a symmetric multiprocessor platform, as demonstrated in the following example.
Example 2. Consider a task set τ with the following three tasks on a two-processor platform: , and . Consider the following scenario: (i) the interval of interest is ; and (ii) is released at , and and are released at and , respectively. We show the schedule under vanilla EDF. At , vanilla EDF schedules , because and are not released at ; then, occupies the processor . Suppose that and are released at and , respectively; then, misses its deadline without having any chance to compete for unoccupied processors until , as shown in Figure 2a. If we know that and will be released at and , respectively, we can make schedulable by idling a processor in , as shown in Figure 2b. However, without the information about the future release time of , we cannot idle the processor; this is because, if is not released until , eventually misses its deadline. Note that this task set is schedulable by LCEDF, to be presented in Example 4 in Section 4.3.

As shown in Examples 1 and 2, there may exist a release pattern that yields a deadline miss of a job of of interest under vanilla EDF, if of is smaller than the worst-case execution time of some other tasks (i.e., ). The following observation records this property formally. (Observation 1 is already implicitly incorporated into the existing schedulability analysis for vanilla EDF [11,12].)
Observation 1. Suppose that we do not know any future job release pattern. Then, there exists a release pattern that yields a deadline miss of a job of of interest, if there exist at least m tasks whose worst-case execution time (i.e., ) is larger than .
The observation holds as follows. Suppose that jobs of m tasks whose worst-case execution time is larger than are released at . If the m jobs start their execution at , then a job of released at misses its deadline. If, instead, any of the m jobs never starts its execution, that job eventually misses its own deadline. Therefore, the observation holds.
This observation is important because, if such a task exists, the task set including it is unschedulable by vanilla EDF. Therefore, we design LCEDF so as to carefully handle the tasks identified in Observation 1, as detailed in the next subsection.
4.2. Design Principle for LCEDF
Motivated by Observation 1, we would like to avoid job deadline misses of the tasks which belong to in Observation 1, by using limited information about the future job release patterns. To this end, we classify tasks offline as follows (a small code sketch of this classification appears right after the list):
 , the set of tasks for which there exist at least m other tasks whose worst-case execution time (i.e., ) is larger than , and
 , the set of tasks which do not belong to .
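Since the set names and the exact threshold are elided in the extracted text above, the following small Python sketch illustrates one plausible reading of this offline classification. It assumes the threshold compared against the other tasks' worst-case execution times is the task's slack (relative deadline minus its own worst-case execution time), as suggested by the blocking argument behind Observation 1; the identifiers Task, classify, critical and non_critical are ours and not the paper's notation.

    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        wcet: float      # worst-case execution time
        deadline: float  # relative deadline
        period: float    # minimum inter-release separation

    def classify(tasks, m):
        # A task goes into 'critical' if at least m other tasks have a WCET
        # larger than its slack (assumed here to be deadline - wcet), i.e.,
        # Observation 1 identifies a release pattern under vanilla
        # non-preemptive EDF in which one of its jobs misses a deadline.
        critical, non_critical = [], []
        for task in tasks:
            slack = task.deadline - task.wcet  # assumption: threshold is the slack
            blockers = sum(1 for other in tasks
                           if other is not task and other.wcet > slack)
            (critical if blockers >= m else non_critical).append(task)
        return critical, non_critical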
We would like to avoid the deadline miss situations shown in Figure 1a and Figure 2a, in which a job of a task in misses its deadline without having any chance to compete for an unoccupied processor until , which is the last time instant at which the job should start its execution to avoid a deadline miss. As illustrated in Figure 1b and Figure 2b, such situations can be avoided using knowledge of future job release patterns. In this paper, we aim at developing a systematic way to avoid such deadline miss situations with only limited information about future job release patterns (explained in Section 3). To this end, we manage a critical queue CQ, in which jobs of tasks in are sorted by their , the last time instant at which each job should start its execution to avoid a deadline miss. Whenever a job of a task in is able to start its execution, we check whether executing that job will jeopardize the timely execution of jobs in CQ by causing the deadline miss situations of Figure 1a and Figure 2a. If so, the job of the task in is postponed by idling the processor on which it would execute under vanilla EDF. Note that although we assume knowledge of the next job release time of all tasks in Section 3, we actually need to know the next job release times of tasks in only.
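As a rough illustration of how CQ might be maintained, here is a minimal Python sketch (reusing the hypothetical Task fields from the classification sketch above): upcoming jobs of the protected tasks are kept ordered by their latest start time, i.e., the last instant at which a non-preemptive job can start and still finish by its deadline. The class name and methods are assumptions, not the paper's implementation.

    import heapq

    class CriticalQueue:
        # Jobs of protected tasks that will be released after the current time,
        # ordered by their latest start time (release + deadline - wcet).
        def __init__(self):
            self._heap = []

        def push(self, release_time, task):
            latest_start = release_time + task.deadline - task.wcet
            heapq.heappush(self._heap, (latest_start, release_time, id(task), task))

        def update_release(self, task, new_release_time):
            # Replace the task's entry once its current job has started
            # (mirrors the release-information update in Step 1 of Algorithm 1).
            self._heap = [entry for entry in self._heap if entry[3] is not task]
            heapq.heapify(self._heap)
            self.push(new_release_time, task)

        def earliest(self):
            # The most urgent upcoming job: (latest_start, release_time, _, task).
            return self._heap[0] if self._heap else None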
As mentioned above, the LCEDF algorithm avoids the deadline miss situations of jobs of tasks in shown in Figure 1a and Figure 2a by postponing jobs of tasks in . The main problem is then to decide when jobs of tasks in should postpone their execution for the timely execution of jobs of tasks in . More postponing yields a higher chance of timely execution for jobs of tasks in and a lower chance for jobs of tasks in ; conversely, less postponing results in a lower chance for jobs of tasks in and a higher chance for jobs of tasks in . Therefore, we need to minimize the postponing of jobs of tasks in while guaranteeing the timely execution of jobs of tasks in . We classify the situations in which a job of in does or does not have at least one chance to compete for unoccupied processors until into the four situations shown in Figure 3.
We now discuss the four situations at t in Figure 3. Suppose that at t, three of the four processors in the system are unoccupied, three jobs ( , and ) of tasks belonging to start their execution at t, and one job ( ) of a task belonging to continues an execution started before t. We are interested in the timely execution of of a task in . If all four jobs (i.e., , , and ) finish their execution after , then misses its deadline without having any chance to compete for an unoccupied processor, as shown in Figure 3a. However, if at least one of the three jobs ( , and ) which start their execution at t finishes no later than , then does not miss its deadline, as shown in Figure 3b. Similarly, even though there is another job of a task in , does not miss its deadline as long as finishes its execution before , as shown in Figure 3c. There is one more case in which does not miss its deadline: the case where the job of that continues an execution started before t finishes before , as shown in Figure 3d.
Therefore, if the current situation does not belong to any of the situations illustrated in Figure 3b–d, we judge that a job deadline miss will occur. Once we judge that such a deadline-miss situation exists, we choose the lowest-priority job among the jobs of tasks in that are supposed to start their execution at t, and withhold that job's execution by intentionally idling a processor.
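The check described above can be summarized, under the same assumptions as the earlier sketches, as follows: a job waiting in CQ remains safe if at least one processor is guaranteed to become free no later than its latest start time, via any of the three situations of Figure 3b–d. The function below is only a sketch of that test; its signature and precomputed inputs are ours.

    def cq_job_is_safe(t, latest_start, starting_wcets,
                       other_cq_earliest_finishes, running_remaining):
        # Case 1 (Figure 3b): a job about to start at t frees its processor in time.
        case1 = any(t + wcet <= latest_start for wcet in starting_wcets)
        # Case 2 (Figure 3c): another CQ job (earliest finish = release + wcet)
        # would free a processor in time.
        case2 = any(finish <= latest_start for finish in other_cq_earliest_finishes)
        # Case 3 (Figure 3d): a job already running before t finishes in time.
        case3 = any(t + remaining <= latest_start for remaining in running_remaining)
        return case1 or case2 or case3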
4.3. Algorithm Details and Examples
Based on the design principle explained in Section 4.2, we detail the LCEDF algorithm in this subsection using pseudocode and examples.

As shown in Algorithm 1, the input components of LCEDF at t are the ready queue RQ, the critical queue CQ, the number of unoccupied processors , and the running job set RJ. Here, RQ is the set of ready jobs at t, and CQ is the set of jobs which will be released after t, invoked by .
Algorithm 1 The LCEDF algorithm
Input: the ready queue RQ, the critical queue CQ, the number of unoccupied processors , and the running job set RJ, at t
 1: // Step 1: Check the priority of jobs of in RQ
 2: for every job in RQ do
 3:   if Priority of then
 4:     Remove the job from RQ; start its execution; decrease the number of unoccupied processors by 1
 5:     Update the job release information of (from to ) in CQ
 6:   end if
 7: end for
 8: // Step 2: Check whether every job of in CQ does not miss its deadline
 9: for every job in CQ do
10:   if the number of jobs in RQ is strictly smaller than the number of unoccupied processors then
11:     Decrease the number of unoccupied processors by 1; Continue the for statement
12:   end if
13:   if there is no unoccupied processor then
14:     Exit the for statement
15:   end if
16:   IsFeasible ← Case-0
17:   for high-priority jobs of tasks in in RQ do
18:     if the job would finish its execution no later than the latest start time of in CQ then
19:       IsFeasible ← Case-1
20:       Exit the for statement
21:     end if
22:   end for
23:   if IsFeasible = Case-0 then
24:     for every job in CQ and do
25:       if the earliest finishing time of the job is no later than the latest start time of then
26:         IsFeasible ← Case-2
27:         Exit the for statement
28:       end if
29:     end for
30:     for every job in RJ do
31:       if the job finishes its remaining execution no later than the latest start time of then
32:         IsFeasible ← Case-3
33:         Exit the for statement
34:       end if
35:     end for
36:   end if
37:   if IsFeasible = Case-1 then
38:     Remove the highest-priority job that satisfies the condition in Line 18 from RQ; start its execution; decrease the number of unoccupied processors by 1
39:   else if IsFeasible = Case-2 or Case-3 then
40:     Remove the highest-priority job from RQ; start its execution; decrease the number of unoccupied processors by 1
41:   else // if IsFeasible = Case-0
42:     Decrease the number of unoccupied processors by 1
43:   end if
44: end for
45: // Step 3: Execute remaining jobs of in RQ
46: for the highest-priority jobs of in RQ do
47:   Remove from RQ; start its execution
48: end for
Step 1 in Algorithm 1 assigns jobs of tasks in belonging to RQ to unoccupied processors. Since we postpone the execution of jobs of tasks in in order to guarantee the timely execution of jobs of tasks in , the highest-priority jobs of tasks in belonging to RQ can be executed, which is the same as under vanilla EDF. To this end, we first find the jobs of tasks in belonging to RQ whose execution starts at t on unoccupied processors (Lines 2–3). Such a job starts its execution and is removed from RQ (Line 4). Also, whenever a job starts its execution on an unoccupied processor, we decrease the number of unoccupied processors by 1 (Line 4). Then, we update the release information of the task which invokes the job starting its execution (Line 5).
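A minimal sketch of Step 1 under the assumptions of the earlier sketches (jobs carry their task, absolute deadline, and release time); since the exact priority test of Line 3 is elided in this extraction, the sketch simply dispatches the ready jobs of protected tasks in EDF order onto unoccupied processors.

    def step1_dispatch_protected(t, rq, cq, unoccupied, running):
        # Ready jobs of protected (critical-set) tasks, earliest deadline first.
        ready_protected = sorted((job for job in rq if job.task_is_protected),
                                 key=lambda job: job.abs_deadline)
        for job in ready_protected:
            if unoccupied == 0:
                break
            rq.remove(job)          # Line 4: remove from RQ and start execution
            running.append(job)
            unoccupied -= 1
            # Line 5: record the task's next release in CQ.
            cq.update_release(job.task, job.release + job.task.period)
        return unoccupied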
In Step 1 we start the execution of higher-priority jobs of tasks in belonging to RQ, and therefore we are ready to start the execution of higher-priority jobs of tasks in belonging to RQ on the remaining unoccupied processors. In Step 2, we decide whether each job of a task in belonging to RQ starts or postpones its execution, according to whether it is possible to guarantee the timely execution of jobs of tasks in belonging to CQ. First, we investigate whether it is possible to guarantee the schedulability of each job of tasks in belonging to CQ (Line 9). If the number of jobs in RQ is strictly smaller than the number of unoccupied processors, we can assign an unoccupied processor to even if all jobs in RQ start their execution on unoccupied processors. Since we reserve an unoccupied processor for , we decrease the number of unoccupied processors by 1 and then continue the for statement (Lines 10–11). If there is no more unoccupied processor, we stop this process because we cannot start any further job execution (Lines 13–15). We set IsFeasible to Case-0 (Line 16), and investigate whether the current situation belongs to one of the three cases in Figure 3b–d in which the timely execution of a job of a task in is guaranteed; based on these cases, we change IsFeasible to either Case-1, Case-2 or Case-3 (Lines 17–36).
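The guards at the top of Step 2 (Lines 10–15) can be sketched as a small helper returning what should happen for the CQ job under consideration; the return labels are ours.

    def step2_guard(num_ready_jobs, unoccupied):
        if num_ready_jobs < unoccupied:
            return 'reserve'   # Lines 10-11: keep a processor aside, continue loop
        if unoccupied == 0:
            return 'stop'      # Lines 13-15: no processor left to hand out
        return 'check'         # Line 16 onwards: run the Case-1/2/3 tests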
The for statement in Lines 17–22 checks whether the execution of the highest-priority jobs in RQ compromises the schedulability of any job of tasks in in CQ. We tentatively assign the highest-priority job of a task in to an unoccupied processor (Line 17). If the finishing time of the job's execution (i.e., ) is no later than , which is the last instant at which in CQ can start its execution without missing its deadline, we set IsFeasible to Case-1 (Line 19), which corresponds to Figure 3b.
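Under the same assumptions, the Case-1 test of Lines 17–22 amounts to asking whether some high-priority ready job, if started at t, would free its processor no later than the CQ job's latest start time:

    def case1_holds(t, ready_wcets, latest_start):
        # Lines 17-22 / Figure 3b: a job started at t finishes by latest_start.
        return any(t + wcet <= latest_start for wcet in ready_wcets)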
The for statements in Lines 24–29 and Lines 30–35 are performed only when IsFeasible is equal to Case-0, which means that the timely execution of the job in CQ is not yet guaranteed. In the for statement in Lines 24–29, we check whether the finishing time of a job in CQ (i.e., ) is no later than , which corresponds to Figure 3c; since cannot start its execution until t because is later than t, we calculate the earliest finishing time of the job as (Line 25). If so, we set IsFeasible to Case-2 (Line 26). In the for statement in Lines 30–35, we check whether the finishing time of a job in RJ which started its execution before t (i.e., ) is no later than , which corresponds to Figure 3d, where denotes the remaining execution time of at t; this is because started its execution before t (Line 31). If so, we set IsFeasible to Case-3 (Line 32).
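The Case-2 and Case-3 tests of Lines 24–35 can be sketched analogously; the job fields (release, wcet, remaining) are assumptions consistent with the earlier sketches.

    def case2_holds(other_cq_jobs, latest_start):
        # Lines 24-29 / Figure 3c: another CQ job, which cannot start before its
        # release time, would still finish by latest_start.
        return any(job.release + job.wcet <= latest_start for job in other_cq_jobs)

    def case3_holds(t, running_jobs, latest_start):
        # Lines 30-35 / Figure 3d: a job running since before t finishes, given
        # its remaining execution time, by latest_start.
        return any(t + job.remaining <= latest_start for job in running_jobs)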
In Lines 37–43, if IsFeasible is set to Case-1, we remove the highest-priority job in RQ that satisfies and start to execute it; we also decrease the number of unoccupied processors by 1 (Line 38). If IsFeasible is set to Case-2 or Case-3, we remove the highest-priority job in RQ and start to execute it; again, we decrease the number of unoccupied processors by 1 (Line 40). Otherwise (i.e., IsFeasible equals Case-0), we only decrease the number of unoccupied processors by 1, meaning that we postpone a job of a task in belonging to RQ (Line 42).
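Putting the decision of Lines 37–43 into the same sketch style (fits_case1 is a hypothetical predicate selecting the ready jobs that satisfy the Case-1 condition):

    def apply_decision(is_feasible, rq, running, unoccupied, fits_case1):
        if is_feasible == 'Case-1':
            # Line 38: highest-priority (earliest-deadline) job meeting the
            # Case-1 condition starts now (one exists whenever Case-1 was set).
            job = min((j for j in rq if fits_case1(j)), key=lambda j: j.abs_deadline)
        elif is_feasible in ('Case-2', 'Case-3'):
            # Line 40: the highest-priority ready job starts now.
            job = min(rq, key=lambda j: j.abs_deadline)
        else:
            # Line 42 (Case-0): postpone by keeping the processor idle.
            return unoccupied - 1
        rq.remove(job)
        running.append(job)
        return unoccupied - 1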
Since Steps 1 and 2 already guarantee the timely execution of jobs of tasks belonging to in CQ, the remaining unoccupied processors can serve jobs of tasks in in RQ. Therefore, in Step 3 we start to execute the highest-priority jobs in RQ (Lines 46–48).
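Finally, Step 3 hands the processors that are still unoccupied to the highest-priority remaining ready jobs; a sketch under the same assumptions:

    def step3_dispatch_remaining(rq, running, unoccupied):
        # Lines 46-48: earliest-deadline-first over whatever is left in RQ.
        for job in sorted(rq, key=lambda j: j.abs_deadline)[:unoccupied]:
            rq.remove(job)
            running.append(job)
            unoccupied -= 1
        return unoccupied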
In the following examples, we show that the task sets associated with the processor platforms in Examples 1 and 2 are schedulable by the LCEDF algorithm.
Example 3. Consider the task set and processor platform of Example 1; that is, and are scheduled on a uniprocessor platform. We first categorize each task into or . When we calculate of , we get 17; since there is one task whose execution time is larger than 17 (namely ), belongs to . Similarly, of is 79, which is no smaller than ; therefore, belongs to .
Consider the following scenario, which is the same as that of Example 1: (i) the interval of interest is ; and (ii) is released at and is released at . Since is categorized into , we know at that will be released at . At , only is in the ready queue; according to Step 2, we examine whether an unoccupied processor will be available for in the critical queue. This is done by checking , resulting in , which does not hold. We conclude that no processor will be unoccupied for after . Hence we postpone and execute at ; as a result, the task set is schedulable by LCEDF.
Example 4. Consider the task set and processor platform of Example 2; that is, , , and are scheduled on a two-processor platform. We first categorize each task into or . When we calculate of , we get 8; since there are two tasks whose execution time is larger than 8 (namely and ), belongs to . Similarly, and belong to .
Consider the following scenario, which is the same as that of Example 2: (i) the interval of interest is ; and (ii) is released at , and and are released at and , respectively. Since is categorized into , we know at that will be released at . At , only is in the ready queue; according to Step 2, we examine whether an unoccupied processor will be available for in the critical queue. Since there are two unoccupied processors and only one job in the ready queue, we do not postpone the execution of . At , only is in the ready queue; according to Step 2, we examine whether an unoccupied processor will be available for in the critical queue. This is done by checking , resulting in , which does not hold. We conclude that no processor will be unoccupied for after . Hence we postpone and execute at ; as a result, the task set is schedulable by LCEDF.
We now discuss the time complexity of the LCEDF algorithm itself. Algorithm 1 takes , where n is the number of tasks in . That is, the number of jobs in is upper-bounded by n (see the number of iterations in Line 10), and the number of iterations of Lines 17, 24 and 30 is also upper-bounded by n. Also, it takes to sort jobs according to EDF. Therefore, . Note that LCEDF performs Algorithm 1 only upon a job release, a job completion, or processor idling.