1. Introduction
Real-time systems take inputs and produce outputs in a time-bounded manner. Meeting deadlines is the core requirement of a real-time system: missing a deadline may cause the whole system to fail. Real-time systems have safety-critical uses such as aircraft command systems, which are so highly critical that a single failure can cause a major accident. Similarly, real-time systems are employed in satellite receivers for collecting highly important information, where failures can misguide operations and result in a major collapse [1]. Everyday home appliances such as microwaves, air conditioners, electric power systems, and refrigerators can also employ real-time systems.
In a real-time system, the term mixed-criticality means that high-criticality tasks must meet their deadlines even at the cost of missing deadlines for certain low-criticality tasks. Mixed-criticality can therefore be used as a tool for assuring the different levels of failure tolerance needed by different components. In the literature, criticality is identified as mission-criticality and LO-criticality (low criticality). Mission-criticality (hard real-time) failures can cause major damage, such as loss of flight control, receiving wrong information via a radar system, or misguided satellite data. LO-criticality (soft real-time), on the other hand, is relaxed and can be considered less destructive, such that deadlines can occasionally be violated.
A mixed-criticality system is characterized by executing in one of two modes, a high-criticality and a low-criticality mode [2]. Each task is described by its minimum inter-arrival time (period, denoted by P), its deadline (denoted by D), and its worst-case execution times, one per criticality level, denoted by C^LO and C^HI. In the basic model, the system begins in LO-criticality mode and stays in that mode as long as all jobs complete within their low-criticality computation times C^LO. If any job executes for its C^LO execution time without signaling completion, the system immediately moves to high-criticality (HI) mode. In HI-criticality mode, LO-criticality jobs need not be executed, but some level of service should be maintained if at all possible, since LO-criticality tasks are still of some importance.
Guan, Emberson, and Pedro [3,4,5] consider a simple protocol for the mode-switch situation, controlling when the mode may be changed back to low-criticality: wait until the CPU is idle, at which point the switch can safely be made. Santy [6] extends this approach into a somewhat more efficient scheme that can be applied to globally scheduled multiprocessor systems, in which the CPU may never reach an idle instant. For a dual-criticality system that has just switched into HI-criticality mode, so that no LO-criticality tasks are executed, the protocol first waits until the HI-criticality task has completed within its high computation time, then waits for the next-highest-priority task, and so on until the lowest-priority job is inactive; at that point it is safe to reintroduce all LO-criticality jobs. If there is further misbehavior of the low computation bound, i.e., any job executes for more than its C^LO value, the protocol again drops all LO-criticality jobs.
Dynamic voltage and frequency scaling (DVFS) is a commonly used technique for reducing overall energy consumption, for example in large-scale data-processing environments. The technique adjusts two parameters, the processor voltage and the processor frequency, to reduce power consumption: decreasing the operating frequency level of the processor lowers its power draw. However, scaling down the CPU frequency delays task completion. Much of the literature focuses on reducing power consumption in embedded systems. A related technique, real-time dynamic voltage and frequency scaling (RT-DVFS), studies reducing power consumption for periodic and aperiodic tasks. In RT-DVFS, slack time is used as the parameter for adjusting the processor speed such that task deadlines are still guaranteed.
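To make the RT-DVFS idea concrete, the following minimal Python sketch picks the lowest discrete speed level that still fits a job's remaining work into the time left before its deadline. The function name, signature, and the discrete level set are illustrative assumptions, not the scheme of any particular cited work.

def pick_speed(remaining_wcet, time_now, deadline, levels=(0.25, 0.5, 0.75, 1.0)):
    """Return the lowest relative speed in `levels` that still meets the deadline."""
    window = deadline - time_now
    if window <= 0:
        return max(levels)                    # no time left: run at full speed
    required = remaining_wcet / window        # minimum relative speed needed
    for s in sorted(levels):
        if s >= required:
            return s                          # lowest sufficient discrete level
    return max(levels)                        # demand exceeds capacity

# Example: 3 time units of work with the deadline 10 time units away needs at
# least speed 0.3, so the level 0.5 is selected from the discrete set.
print(pick_speed(3, 0, 10))                   # -> 0.5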
In the proposed work, we schedule tasks on a single processor that supports variable frequency and voltage scaling. Our aim is to choose CPU speeds such that all jobs meet their deadlines while energy consumption is minimized. Little research has been done on minimizing energy in mixed-criticality (MC) real-time systems. In [7], a CPU speed-degradation algorithm is provided for given mixed-criticality aperiodic real-time tasks; the authors characterize power consumption in MC real-time systems as an optimization problem under the derived frequency scaling, and every job is then executed at the statically derived frequency. We enhance this with a dynamic approach in which the frequency level is adapted at run time below the derived frequency scaling for further power reduction. The main contribution of this work is that we reduce energy in HI-criticality mode dynamically.
2. Related Work and Problem Description
The MC system model was first considered for scheduling by Vestal [8], and it has since gained increasing interest in real-time scheduling. Baruah and Ekberg [9] consider the mixed-criticality system in a way that all LO-criticality jobs are discarded when the system mode switches to HI-criticality [10,11,12]. In [13], it was shown that Vestal's scheme is optimal for fixed-priority scheduling systems. In [14], a response-time analysis of mixed-criticality tasks was provided in order to increase the schedulability of fixed-priority tasks. In [10], a heuristic scheduling algorithm based on the Audsley priority assignment strategy was provided for efficient scheduling.
The Audsley approach [15] assigns priorities from the lowest level to the highest. At each priority level, a job from the low-criticality task set is tried first at the lowest remaining priority; if it is schedulable there, the search moves up to the next priority level. If no job is schedulable at the current level, the search is abandoned and the task set is declared unschedulable. In [16], the authors considered how such time-triggered scheduling tables can be produced by first simulating the system.
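The lowest-priority-first search just described can be sketched in a few lines of Python. The callback schedulable_at_lowest is an assumed placeholder for whatever response-time or demand test the chosen analysis provides; it is not a real API.

def audsley_assign(tasks, schedulable_at_lowest):
    """Return tasks ordered from lowest to highest priority, or None if unschedulable."""
    unassigned = list(tasks)
    order_low_to_high = []
    while unassigned:
        for t in unassigned:
            others = [u for u in unassigned if u is not t]
            if schedulable_at_lowest(t, others):   # t can take the current lowest level
                order_low_to_high.append(t)
                unassigned.remove(t)
                break
        else:                                      # no task fits this level
            return None                            # the task set is unschedulable
    return order_low_to_high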
Techniques for minimizing the energy consumption of a processor are generally classified as static or dynamic, according to whether the frequency is adjusted at run time. They are also classified into continuous or discrete frequency-level schemes according to the assumption of frequency continuity. Yao et al. [17] and Aydin et al. [18] proposed static (offline) scheduling methods to minimize energy consumption in real-time systems. Jejurikar and Gupta [19] study the energy saving of periodic real-time jobs. Gruian [20] proposed using stochastic data to derive energy-efficient schedules. In [21], minimum-power scheduling of periodic tasks is provided for systems with discrete frequency levels. In contrast, dynamic scheduling schemes adjust the CPU frequency or speed level depending on the current system load in order to fully utilize the CPU slack time.
In [15], the Audsley scheme assigns priorities to mixed-criticality jobs based on their criticality level, assigning priorities from the lowest level upwards so that the lowest priorities are fixed first. The complexity of scheduling MC real-time systems was investigated by Baruah [9], who proves that the problem is NP-complete even when all jobs are released at the same time. That work also investigated optimal scheduling algorithms for MC systems that perform well in practice.
The own-criticality-based priority (OCBP) scheme was applied to MC sporadic jobs by Li and Baruah [22]; it considers criticality for priority assignment, and when a new job arrives in the system, a new priority is assigned to it. In [3], a scheduling scheme known as priority-list reuse scheduling was presented, based on the OCBP scheduler. In [23], a likewise realistic energy model was assumed and an optimal static scheme was presented for minimizing the energy of multiple components by adjusting the individual frequencies of the main memory, the processor, and the system bus.
The connection between the multiple-choice knapsack problem (MCKP) and dynamic voltage scaling (DVS) for periodic tasks and energy optimization was first shown by Mejia-Alvarez and Mosse [24]. Aydin et al. [18] consider a dynamic voltage and frequency scaling scheme for periodic jobs that complete before their worst-case execution times (WCETs). In [25], elastic scheduling was proposed for utilizing a CPU with discrete frequency levels. In [26], a dynamic slack allocation algorithm for real-time tasks was presented that considers both energy minimization and the frequency-scaling overhead. The cycle-conserving approach was proposed by Mei et al. [27], who suggested a novel power-aware scheduling scheme named cycle-conserving DVFS for sporadic jobs. Pillai and Shin [28] proposed real-time DVS algorithms integrated with the OS's real-time scheduler and task management services to minimize power consumption while guaranteeing that deadlines are always met.
More recently, research on power-aware mixed-criticality real-time systems has been presented in [7,29]. In [7], only a finite set of jobs, with no periodic tasks, is considered, and the possible degraded CPU speeds for MC jobs are determined. In [29], the authors minimize the energy of power-aware mixed-criticality real-time scheduling for periodic jobs under continuous frequency scaling. The earliest deadline first with virtual deadlines (EDF-VD) algorithm [11] provides the most favorable virtual deadline (VD) setting and frequency scaling of jobs, but does not adjust the derived frequency levels of jobs at run time. In [30], when a high-criticality job does not finish within its low computation time, all low-criticality jobs are terminated and the system frequency level is set to the maximum; that work reduces the frequency only in low-criticality mode.
In our work, we provide an efficient power-aware scheduling algorithm for MC real-time systems that adjusts the frequency level of high-criticality mode. To the best of our knowledge, this is the first work that addresses optimal energy consumption in the high-criticality mode of a mixed-criticality real-time system. The main contribution of our scheme is that we minimize energy in high-criticality mode dynamically, and we show experimental results from simulations.
3. System Model
3.1. Task Model
In this subsection, we provide an overview of the task model. In mixed-criticality real-time systems, a low-criticality periodic task releases a sequence of jobs only in low-criticality mode, while high-criticality tasks release their jobs in both high- and low-criticality mode. Thus a mixed-criticality task τ_i consists of four parameters: period T_i, computation time of low-criticality jobs C_i^LO, computation time of high-criticality jobs C_i^HI, and task criticality level L_i, as follows:
T_i: The task period. The task releases a job every T_i time units (minimum inter-arrival time);
C_i^LO: The worst-case execution time in low-criticality mode. The task requires C_i^LO time units in low-criticality mode;
C_i^HI: The worst-case execution time in high-criticality mode. The task requires C_i^HI time units in high-criticality mode;
L_i: The criticality level of the task, either high-criticality (HI) or low-criticality (LO).
Each task is a periodic real-time task, so its jobs are released every T_i time units; the j-th instance or job of task τ_i is denoted J_{i,j}. In the mixed-criticality system, tasks are categorized into low-criticality and high-criticality tasks, and the system mode is likewise divided into low-criticality and high-criticality mode. In low-criticality mode, all tasks release their jobs, and each job requires the worst-case execution time C_i^LO. In contrast, in high-criticality mode, only the high-criticality tasks release their jobs, with execution time C_i^HI. Thus, each task has its criticality level L_i.
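The task model above can be written as a small data type. The sketch below is illustrative: the field names mirror the four parameters (T, C^LO, C^HI, L) in Python rather than the paper's notation, and the three instances correspond to the example of Figure 1 discussed below, written as (C^LO, C^HI, T, L).

from dataclasses import dataclass

@dataclass
class MCTask:
    period: int        # T: minimum inter-arrival time of jobs
    c_lo: int          # C^LO: worst-case execution time in low-criticality mode
    c_hi: int          # C^HI: worst-case execution time in high-criticality mode
    level: str         # L: "LO" or "HI"

    def wcet(self, system_mode: str) -> int:
        """Execution budget of a job of this task in the given system mode."""
        return self.c_lo if system_mode == "LO" else self.c_hi

tau1 = MCTask(period=5, c_lo=2, c_hi=2, level="LO")
tau2 = MCTask(period=6, c_lo=1, c_hi=3, level="HI")
tau3 = MCTask(period=8, c_lo=2, c_hi=3, level="HI")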
The mixed-criticality system is an integrated suite of hardware, middleware services, operating system, and application software that supports the execution of non-critical, mission-critical, and safety-critical functions. The system starts in low-criticality mode. However, if there is a possibility that a low-criticality job interferes with the execution time of high-criticality jobs, the system criticality mode changes, and all low-criticality tasks are dropped from the system. In mixed-criticality systems, such a possibility arises when a high-criticality job does not complete within its low-criticality computation time; this is the condition for switching from low-criticality mode to high-criticality mode.
Conversely, the system returns to low-criticality mode when there is no possibility of overrun: while high-criticality tasks are executed in high-criticality mode, the system changes its criticality back to low mode as soon as there is no high-criticality task ready in the queue [29].
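The two mode-transition rules just described can be expressed as two small predicates. This is a minimal sketch with illustrative names, not the paper's interface: the switch to HI mode fires when a HI-criticality job exhausts its C^LO budget without finishing, and the switch back is allowed once no HI-criticality job remains ready.

def should_switch_to_hi(level, executed, c_lo, finished):
    """True if a HI-criticality job has used up its C^LO budget without completing."""
    return level == "HI" and executed >= c_lo and not finished

def can_switch_back_to_lo(ready_levels):
    """True if no high-criticality job remains in the ready queue."""
    return all(lvl != "HI" for lvl in ready_levels)

# Example: a HI job that has run for its full C^LO = 1 without finishing triggers
# the switch; once the queue holds only LO jobs (or is empty), LO mode may resume.
print(should_switch_to_hi("HI", executed=1, c_lo=1, finished=False))  # -> True
print(can_switch_back_to_lo([]))                                      # -> True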
For example, Figure 1 shows three mixed-criticality tasks τ1 = (2, 2, 5, LO), τ2 = (1, 3, 6, HI), and τ3 = (2, 3, 8, HI), where each task is written as (C^LO, C^HI, T, L). The system starts in low-criticality mode, where each task requires its C^LO execution time. Each task releases a job every T time units. The scheduling algorithm used in Figure 1 is EDF (earliest deadline first).
Let us assume that a high-criticality job does not complete its execution at time 19. Then, the system changes its criticality mode to high. From then on, the system executes only the high-criticality tasks (τ2 and τ3) with their high-criticality execution times, which become 3 each. When the system is in high-criticality mode, all low-criticality jobs are ignored or removed from the queue. For instance, the τ1 job released at time 20 is removed from the scheduling queue since it is a low-criticality job.
The system returns to low-criticality mode when there are no high-criticality jobs waiting in the scheduling queue. In this example, the system returns to low-criticality mode at time 23 because no jobs are available. After that, the system executes low-criticality jobs again as before.
3.2. Power Model
In this paper, we assume a DVFS-enabled CPU whose frequency can be adjusted dynamically at run time. The number of discrete frequency levels is m, and the available frequency levels are given as a set F.
Let us assume that a task requires t execution time on the CPU at its maximum frequency level. For a given frequency level f of the CPU, the relative speed level is defined as s = f / f_max, where f_max is the maximum frequency level. Then, the task execution time becomes t / s.
Since dynamic power consumption dominates the power consumption of such systems, we consider dynamic power consumption in this paper. Generally, the dynamic power is proportional to the square or cube of the frequency (f^2 or f^3) for a frequency level f. We use Equation (1) to model the power consumption of a task with t execution time at the relative speed level s [31], where α is a coefficient; in this paper we assume α = 1 for the sake of simplicity.
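As a concrete reading of Equation (1), consistent with the cubic power model just stated (this exact form is an assumption used for illustration, not a literal restatement of the paper's equation): taking the power at relative speed s as $P(s) = \alpha s^{3}$, the energy of executing a workload of t time units (measured at full speed) at speed s is

$$E(s, t) = P(s)\cdot\frac{t}{s} = \alpha s^{3}\cdot\frac{t}{s} = \alpha s^{2} t,$$

which with α = 1 reduces to $E = s^{2} t$: lowering s reduces energy quadratically while stretching the execution time to t/s.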
Figure 2 shows a DVFS scheme for real-time task scheduling. For example, a real-time task requires 3 time units of execution, while its deadline is 10 time units away (Figure 2a). If there is no other task, the system has 7 time units of slack before the task deadline. Thus, the task can be executed at the relative speed level of 0.3, as shown in Figure 2b. At the reduced CPU speed level, the system reduces power consumption without violating the task deadline.
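Under the energy model assumed above (E = s^2 · t with α = 1), this example yields a concrete saving: running 3 time units at full speed costs 1.0^2 × 3 = 3 energy units, whereas running the stretched 10 time units at speed 0.3 costs 0.3^2 × 10 = 0.9 units, a reduction of 70% while still finishing exactly at the deadline.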
5. The Proposed Scheme
5.1. Dynamic Power-Aware Scheme for MC Jobs
The proposed scheme dynamically adjusts the CPU frequency level depending on both the system mode and the task criticality. The baseline values are derived from static analysis, so that the virtual-deadline factor x and the frequency levels f_LC, f_HC, and f_HM (the relative frequencies for low-criticality jobs in low-criticality mode, for high-criticality jobs in low-criticality mode, and for high-criticality mode, respectively) are obtained before run time. These values are computed in the initial step by solving the optimization problem below.
The power consumption, considering both high- and low-criticality modes, is defined by the following three equations. The unit-time power consumption in low-criticality mode is derived by Equation (6), where H is the least common multiple of all task periods (the hyperperiod). In Equation (6), the total power consumption during H is computed by adding the power consumption of each task τ_i in low mode using Equation (1); the number of τ_i's jobs within H is H/T_i. The unit-time power consumption is then obtained by dividing this total by H.
Similarly, the unit-time power consumption in high-criticality mode is defined by Equation (7). Thus, the average unit-time power consumption can be obtained as the expected value over the two modes, as in Equation (8), where p_LO and p_HI denote the probabilities of the system being in low- and high-criticality mode, respectively.
For the given probabilities p_LO and p_HI, the problem of deciding the optimal frequency levels and the virtual-deadline factor x of EDF-VD is to minimize the average unit-time power consumption, stated as Equation (9), subject to the schedulability constraints of Equations (10) and (11).
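For concreteness, the optimization has the following general shape under the assumed energy model; the speed-scaled EDF-VD utilization tests written below are an illustrative assumption rather than a literal restatement of Equations (9)–(11). Here $U^{LO}_{LO}$, $U^{LO}_{HI}$, and $U^{HI}_{HI}$ denote the total utilizations of LO tasks in LO mode, HI tasks in LO mode, and HI tasks in HI mode, respectively.

$$\begin{aligned}
\min_{x,\ f_{LC},\ f_{HC},\ f_{HM}}\quad & p_{LO}\,P_{LO} + p_{HI}\,P_{HI} \\
\text{s.t.}\quad & \frac{U^{LO}_{LO}}{f_{LC}} + \frac{U^{LO}_{HI}}{x\,f_{HC}} \le 1,
\qquad x\,\frac{U^{LO}_{LO}}{f_{LC}} + \frac{U^{HI}_{HI}}{f_{HM}} \le 1 .
\end{aligned}$$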
The scheduling flow in low-criticality mode is shown in Figure 4a. Each task releases jobs with C^LO execution time every period. Since we use EDF-VD, the virtual deadline of a high-criticality job released at time t is given by t + x·T_i, while the deadline of a low-criticality job is set as t + T_i. These newly released jobs wait in the ready queue.
Jobs are scheduled by earliest deadline first, so the job with the earliest deadline is dispatched first. At the time of dispatching a high-criticality job, the CPU frequency level is set to f_HC; for a low-criticality job, the frequency level is adjusted to f_LC.
When a high-criticality job does not complete within its low-mode execution time, the system switches to high-criticality mode. At that time, all low-criticality jobs are dropped in order to guarantee the high-criticality tasks, as shown in Figure 4b. However, the system can switch back to low-criticality mode at any time when there is no pending task.
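The deadline assignment just described can be captured in a few lines. The sketch below is an executable illustration of the EDF-VD rule stated above; the function signature is an assumption, not the paper's interface.

def edf_vd_deadline(release_time, period, level, x, system_mode="LO"):
    """Deadline used for EDF ordering of a job released at `release_time`."""
    if level == "HI" and system_mode == "LO":
        return release_time + x * period     # virtual (shortened) deadline t + x*T
    return release_time + period             # real deadline t + T

# With x = 0.8, a HI task with period 6 released at t = 12 is ordered by the
# virtual deadline 12 + 0.8*6 = 16.8 rather than the real deadline 18.
print(edf_vd_deadline(12, 6, "HI", 0.8))     # -> 16.8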
5.2. DVFS Scheduling
The notation for the scheduling algorithm is shown in Table 2. The utilization of task τ_i is denoted U_i. Each job in the waiting queue, denoted J_i, is defined by the pair (C_i, D_i), meaning that the job requires C_i execution time by its deadline D_i. These values are determined at the time of job release.
The proposed scheme is defined by functions that are called on specific events. The algorithms are given as pseudo-code in Algorithms 1 and 2.
Job-Release (): Called every period when a task releases a job;
Job-Finish (): Called when a job completes its execution or overruns its execution time;
Power-aware Schedule (): Called at the time of a job release or completion to re-schedule the jobs in the queue;
Frequency-Adjust (): Called at the time of job allocation to the CPU to adjust the CPU frequency.
When a job is released in low-criticality mode, the job is inserted into the ready queue and the task utilization is updated. Since the frequency level of a LO-criticality task is f_LC, the task utilization is updated by the equation in line 5 of Algorithm 1; for a high-criticality job, the utilization is updated by the equation in line 7. If the current system mode is high and the released job is of low criticality, the job is terminated or ignored; a high-criticality job is inserted into the ready queue (line 14). After the job has been handled, the scheduling function is called (line 17).
When a job finishes its computation and the current system mode is low, nothing needs to be done for a low-criticality job; we only check the case L_i = HI. For a high-criticality job there are two cases: if the job finishes completely, the system remains in low-criticality mode; if it does not complete within its low-mode execution time, the system mode becomes high. When the system is in high-criticality mode and the ready queue is empty, i.e., there is no high-criticality job waiting, the system mode is changed from high to low (lines 29–31).
The function Power-aware Schedule() dispatches jobs using EDF (lines 38–43 of Algorithm 1). At each scheduling event, the Frequency-Adjust() function is called so as to adjust the CPU frequency dynamically. As shown in Algorithm 2, if the system is in high-criticality mode, the frequency is set to the minimized high-criticality frequency f_HM. Otherwise, the frequency level is set to the lowest level sufficient to schedule the current jobs; that is, the relative speed of the chosen frequency level is greater than or equal to the current utilization.
Algorithm 1 Energy-consumption minimization for mixed-criticality tasks.
1:  function Job-Release(J_i)
2:      if the current system mode is Low then
3:          Insert job J_i into the ready queue Q
4:          if L_i = Low then                              ▹ Low-criticality job
5:              Update the task utilization U_i using the low-mode demand at frequency f_LC
6:          else
7:              Update the task utilization U_i using the low-mode demand at frequency f_HC
8:          end if
9:      else                                               ▹ The current system mode is High
10:         if L_i = Low then
11:             Set the job's execution requirement to 0   ▹ Drop the low-criticality job
12:         else                                           ▹ High
13:             Update the task utilization U_i for high-criticality mode
14:             Insert job J_i into Q
15:         end if
16:     end if
17:     Power-aware Schedule()
18: end function

19: function Job-Finish(J_i)
20:     if the current system mode is Low then
21:         if L_i = High then                             ▹ High-criticality job
22:             if J_i finished completely then
23:                 Remain in Low mode
24:             else
25:                 The system mode is changed to High     ▹ Mode switch to HI
26:             end if
27:         end if
28:     else                                               ▹ The current system mode is High
29:         if Q is empty then
30:             The system mode is changed from High to Low   ▹ Mode switch back to LO
31:         end if
32:     end if
33:     Power-aware Schedule()
34: end function

35: function Power-aware Schedule()
36:     if Q is not empty then
37:         J_e ← the job with the earliest deadline in Q
38:         if the CPU is idle then                        ▹ CPU idle
39:             Dispatch J_e to the CPU
40:         else if J_e has an earlier deadline than the running job then   ▹ Preemption by EDF
41:             The running job is preempted and re-inserted into Q
42:             Dispatch J_e to the CPU
43:         end if
44:         Frequency-Adjust()
45:     end if
46: end function
Algorithm 2 Selecting the frequency level.
1:  function Frequency-Adjust()
2:      if the system is in High mode then
3:          The frequency is set as f_HM
4:      else                                               ▹ The system is in Low mode
5:          u ← 0
6:          if L_i = LO then
7:              u ← u + (the utilization demand of the job at frequency f_LC)
8:          else
9:              u ← u + (the utilization demand of the job at frequency f_HC)
10:         end if
11:         freq ← the minimum level in F whose relative speed is ≥ u
12:         The frequency is set as freq
13:     end if
14: end function
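The following Python sketch is a hedged, executable interpretation of Algorithm 2 based on the prose of Section 5.2: in high-criticality mode the minimized speed is used, and in low-criticality mode the lowest discrete speed level whose relative speed covers the current utilization is chosen. The exact per-class utilization update of lines 5–10 is collapsed into a single sum here, and all names (including the task fields, which mirror the illustrative MCTask of Section 3.1) are assumptions rather than the authors' exact rule.

def frequency_adjust(system_mode, pending_tasks, speed_levels, s_HM):
    """Return the relative CPU speed (f / f_max) for the next dispatch."""
    if system_mode == "HI":
        return s_HM                                          # minimized HI-mode speed
    u = sum(t.c_lo / t.period for t in pending_tasks)        # current LO-mode utilization
    for s in sorted(speed_levels):                           # discrete levels, ascending
        if s >= u:
            return s                                         # lowest sufficient level
    return max(speed_levels)

# Example: with pending tasks like tau2 (c_lo=1, period=6) and tau3 (c_lo=2,
# period=8) and levels {0.25, 0.5, 0.75, 1.0}, u = 1/6 + 2/8 ≈ 0.42, so the
# speed 0.5 is selected; in HI mode the call simply returns s_HM (e.g., 0.9).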
5.3. Example
Let us consider the task set in Table 1 as an example. The previous work derives the optimal values of f_LC and f_HC as 0.6 and 0.8, respectively, and uses the maximum frequency level in high-criticality mode. The proposed work, in contrast, derives the optimal frequency levels by solving Equation (9) with the two constraints of Equations (10) and (11). Table 3 shows these values for given probabilities of high- and low-criticality mode.
For example, for a given probability p_HI, the optimal frequency levels f_LC, f_HC, and f_HM are 0.7, 0.8, and 0.9, respectively. The scheduling of the task set of Table 1, in the same scenario as Figure 3, is shown in Figure 5. The frequency level in high-criticality mode is set to 0.9, not 1.0. As shown in Table 3, the proposed work saves more energy as the probability of high-criticality mode increases.