1. Introduction
In the era of telecommunication and interconnectivity, the Internet of Things (IoT) is a promising technology for connecting the intelligent objects that surround us into a large network, often distributed over large geographic regions [1,2]. Real-time and non-real-time, critical and non-critical systems coexist with increasingly complex and sometimes conflicting demands on the network. Thus, new concepts for the provisioning, management, and monitoring of these systems and their components must be developed. The development of such concepts has led to an entirely new field, the Real-Time Internet of Things (RT-IoT) [3], which promises better connectivity and efficient use of next-generation embedded devices. As the field of real-time systems has developed in recent years, it is natural to integrate new concepts from RT systems into RT-IoT. One such concept, currently gaining attention in classical RT systems, is that of Mixed-Criticality Systems (MCSs): systems running real-time tasks of different criticality levels on a shared platform. As many critical real-time applications have already been implemented using distributed heterogeneous architectures [4], the MCS concept can also be applied to RT-IoT, leading to a new scheduling paradigm called MC-IoT (for a full list of abbreviations, please see Appendix B).
The need for comprehensive solutions that integrate systems with strict timing constraints/requirements into IoT architectures is not new and has been extensively discussed in [5]. This need is also reflected in the design and implementation of real-time/time-triggered Ethernet protocols with mixed-criticality support [4]. However, only a few examples of real-time systems in their classical form or mixed-criticality systems have been integrated into IoT architectures. In [1], the placement of services and data in fog devices is discussed with regard to the optimization of the system. The challenges of both soft and hard real-time systems are addressed in [3] by introducing the term RT-IoT. Papers such as [6] and [7] present examples of autonomous IoT devices and applications, while [8] presents an example of a collaborative homogeneous IoT system, and [9] a system development platform for reducing the complexity of developing IoT-enabled applications with different criticalities. In [10], a deep learning approach is used to schedule real-time edge services, while Jin et al. [11] proposed a mechanism for data aggregation in mobile privacy-aware crowdsourcing systems. Furthermore, BEES, a real-time system that uses approximate image sharing and energy-aware adaptation to achieve higher bandwidth and energy efficiency, was introduced in [12].
From our perspective, MC-IoT can effectively address the problem of resource management at the edge level of IoT platforms running a variety of applications with different criticalities and different time requirements, such as sensor fusion for automated driving applications [13], smart buildings [14], and healthcare mixed-criticality applications [15]. The scheduling problem in real-time distributed heterogeneous systems is not trivial and has been insufficiently addressed, despite its potential [16]. A few papers addressing this problem have been published [17,18,19,20].
The heterogeneous nature of the IoT architecture has a direct influence on applications that run on these different hardware components.
The application code execution time is strongly influenced by the target system running the application.
Similarly, the power consumed by executing the same code may differ between target j and target k.
Therefore, new application models are required for MC-IoT applications. These models need to consider, in addition to the temporal and, where relevant, criticality aspects, the hardware particularities of the targets.
In RT systems in general, and in RT-IoT in particular, temporal behavior is a significant aspect, as important as the correctness and accuracy of the provided result [21]. Thus, the application models for MC-IoT, as a particular case of RT-IoT, should consider the influence of the hardware particularities.
In this study, we formalize the basic concepts of MC scheduling in the IoT domain and define the MC-IoT paradigm. We also provide a methodology for scheduling MC applications in MC-IoT.
The main contributions of this research are:
- the definition and mathematical formalization of the Mixed-Criticality Internet of Things (MC-IoT) scheduling concept;
- the mathematical formulation of the scheduling problem in MC-IoT and RT distributed systems;
- a novel hardware-aware real-time task model for distributed mixed-criticality systems such as MC-IoT;
- a methodology for mapping tasks on heterogeneous targets, considering both timing and hardware particularities.
The evolution of real-time scheduling models is presented in Section 2, reviewing the key points in their development. We formalize the scheduling problem mathematically in Section 3 and adapt the classical MC task model to the MC-IoT paradigm in Section 4. The methodology for scheduling tasks in MC-IoT is proposed in Section 5, together with example algorithms for setting task parameters and distributing tasks on different IoT nodes. The proposed task distribution function is evaluated and compared with the classical mapping functions in Section 6. The article ends with conclusions and future perspectives in Section 7.
2. Related Work
Tasks are basic execution units of an application. Depending on their activation patterns, they can be periodic or sporadic. Periodic tasks are activated at a constant rate [22], whereas sporadic tasks have a minimum inter-arrival time [23,24]. In the mixed-criticality paradigm, each task has an assigned criticality level and a set of properties [25,26] (for a full list of symbols, please see Appendix B):
$$\tau_i = (T_i, D_i, L_i, C_i), \quad L_i \in \{1, \ldots, l\}, \quad C_i = (C_{i,1}, \ldots, C_{i,L_i}) \qquad (1)$$

where l represents the number of criticality levels; Ti is the period for periodic tasks or the minimum arrival interval between two consecutive jobs of the same task i; Di is the time by which any job execution needs to be completed relative to its release time; Li is the criticality level (1 being the lowest level); and Ci is the computation time vector, with one value Ci,j per criticality level j ≤ Li, expressing the worst-case execution time at each criticality level.
A task consists of a series of jobs, with each job inheriting the set of parameters of the task (Ti, Di, Li), to which it adds its own parameters [27]. Thus, the k-th job of task τi is characterized as:

$$J_{i,k} = (a_{i,k}, d_{i,k}, c_{i,k})$$

where ai,k represents the arrival time, with ai,k+1 − ai,k ≥ Ti; di,k represents the absolute deadline (di,k = ai,k + Di); ci,k represents the execution time, which depends on the criticality mode of the system (for mode Lj, ci,k = Ci,j); and Ti, Di, Lj have the same meaning as in the task model.
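To make the notation concrete, the following sketch (illustrative Python, not taken from the original paper; field names are our own) encodes the classical task tuple (1) and the job parameters:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MCTask:
    """Classical mixed-criticality task, Eq. (1): tau_i = (T_i, D_i, L_i, C_i)."""
    period: float          # T_i: period or minimum inter-arrival time
    deadline: float        # D_i: relative deadline
    criticality: int       # L_i: 1 is the lowest criticality level
    wcet: List[float]      # C_i,j for j = 1..L_i (one WCET per criticality level)

@dataclass
class MCJob:
    """k-th job of a task; it inherits the task parameters (T_i, D_i, L_i)."""
    task: MCTask
    arrival: float         # a_i,k

    @property
    def abs_deadline(self) -> float:
        return self.arrival + self.task.deadline   # d_i,k = a_i,k + D_i

    def exec_budget(self, mode: int) -> float:
        """c_i,k assumed in criticality mode L_j (requires mode <= L_i)."""
        return self.task.wcet[mode - 1]
```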
The classical model has been the source of other simplified models, such as those introduced by Burns [28]. In [29], it was proven that Vestal’s model is a generalization of Burns’ model. Another classical simplification of the model considers only two levels of criticality (LO and HI) [30].
Over time, other parameters were transformed from a simple scalar value into a vector of values depending on the criticality level, such as the period in [31]. In the same manner, the worst-case memory access time was considered and added to the general model as a vector of values in [32], and a vector of values describing the QoS was added in [33]. In [34], a new model called Elastic Mixed-Criticality (E-MC) was introduced, based on the idea of variable periods for low-criticality tasks.
In addition to general mixed-criticality models derived from hard real-time task models, a task model for cyber-physical mixed-criticality systems was proposed in [35]. This model can be considered a translation of the model presented in [36] for real-time tasks into an MCS task model, expressing the number of tolerable deadline misses as a function of the criticality level instead of a constant value, as in the classical real-time case.
On the one hand, we have the task-level models presented in the previous paragraphs, which are mostly used for sets of independent tasks and are insufficient for sets of tasks with precedence constraints. On the other hand, we have models derived from graph-based real-time task models [37], such as those proposed in [38]. The Mode-Switching Digraph Real-Time (MS-DRT) task model, proposed by Ekberg and Yi in [39], is one such example. These types of models are more complex and describe, in addition to temporal behavior, functional dependencies between tasks. Another graph-based model, which specifies the allowed interference between tasks and is called the Interference Constraint Graph (ICG), was proposed in [40]. The task parameters are the same as those in the Vestal model; the main difference lies in the fact that the graph models the relationship between tasks, namely task interference.
Among the graph-based models, we could not identify a single central influential model; instead, we encountered several variants, among which we mention those of Ekberg and Yi (2016) and Huang et al. (2013).
Figure 1 summarizes the above paragraphs, presenting the main task models for RTS (real-time systems) and MCS (mixed-criticality systems), their main differences, and their evolution. For uniformity reasons, the parameter representing the criticality level has been marked with L even if in some of the original articles other symbols were used. Vestal’s model (2007) is the most influential model, which was developed from the sporadic task model proposed by Mok in 1983 and was further customized into several simplified models, among which we consider worth mentioning the Burns and Baruah model (2013). In addition to general task models, there are models specifically for firm/soft real-time tasks with criticality levels (Lee and Shin, 2017).
Although there is a relatively large number of task models, only a few core models gave rise to the different variations. Most of the variations are compatible with one another, but they focus on different aspects, generalizing or simplifying certain behaviors/parameters and thus differing in their assumptions regarding the task parameters (see Appendix A).
3. Application Execution Model and Scheduling Problem Formulation
With the development of 6G networks, the IoT applications running at the edge level are integrating more and more real-time and mixed-criticality functionalities [33].
Mixed-Criticality Systems represent a special class of real-time systems in which application functions with different criticalities share the same resources in terms of computation and/or communication [12]. The MCS concept offers important advantages when integrated into different types of systems that directly interact with the environment [12]. IoT is also a type of system that interacts directly with the user and/or environment. As IoT systems are rapidly evolving towards real-time IoT systems, where tasks with different time constraints and criticality levels coexist, application scheduling in such systems becomes increasingly complex, and the need to integrate the concept of mixed criticality becomes pressing.
Adapting the MCS concept to IoT architectures gives rise to a new scheduling paradigm called MC-IoT, for which we propose the following definition.
Definition 1. Mixed Criticality-Internet of Things (MC-IoT) is a type of system that runs real-time tasks of different criticalities at the edge level of IoT architectures.
In MCSs, as in any real-time system, applications are split into basic units of execution called tasks. The scheduling of these tasks on multiprocessors and distributed systems has become prominent in RT-IoT systems such as 6G networks [33].
Tasks can be independent of one another or have a precedence dependence on different tasks. Each task or set of dependent tasks is allocated to a specific node in a process called global level scheduling, and each set of independent or dependent tasks is scheduled at the device level in a process called local scheduling.
Thus, the scheduling problem at the edge level can be divided into two sub-problems:
- (1)
Task mapping (at an intermediate/Fog level): Each task must be allocated or mapped to a processing element.
- (2)
Local task scheduling (at the device/Edge node level): all tasks allocated to a certain processing element must be schedulable.
Until now, the task mapping problem has not been approached taking hardware heterogeneity into consideration. Thus, we further propose a hardware-aware solution to this problem.
In this paper, we assume the following hypotheses:
The system is represented by an IoT layered architecture with heterogeneous edge nodes. The architecture contains three main computation levels (Edge, Fog, Cloud) [41].
The applications are implemented as sets of periodic/sporadic independent mixed-criticality tasks.
The tasks have stringent timing requirements (i.e., they are hard real-time tasks).
The worst-case execution time (WCET) is determined via static analysis for high criticality levels and estimated through measurements for low criticality levels.
Each task is statically assigned to a processing element (PE) and cannot migrate at runtime.
Each processing element starts in the lowest criticality mode, L1, and remains in that mode as long as all jobs are executed within their criticality-level computation times Ci,1. If any job executes for more than Ci,1 without signaling completion, the system immediately switches to L2, and so on [25] (a minimal sketch of this mode-switch rule is given after this list).
For simplicity, we consider that, in each criticality mode Lj, tasks with a criticality lower than Lj are dropped [25]. Still, the models and partitioning algorithms proposed in this article remain valid even if more sophisticated local task scheduling algorithms are used, which do not drop lower-criticality tasks but schedule them in processor slack time.
The system is heterogeneous, which means that the task execution time varies depending on the PEs.
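As referenced above, the following sketch (illustrative Python with our own naming, not code from the paper) shows one way the per-PE criticality mode-switch rule from the hypotheses could be expressed:

```python
class CriticalityModeMonitor:
    """Tracks the criticality mode of one processing element.

    The PE starts in the lowest mode L1 and switches to the next mode
    whenever a job overruns its budget C_i,mode without signaling completion.
    """

    def __init__(self, num_levels: int):
        self.num_levels = num_levels
        self.mode = 1  # L1, the lowest criticality mode

    def on_budget_overrun(self) -> int:
        """Called when a job exceeds C_i,mode; raises the PE mode by one level."""
        if self.mode < self.num_levels:
            self.mode += 1
        return self.mode

    def is_admitted(self, task_criticality: int) -> bool:
        """Under the simplifying hypothesis, tasks below the current mode are dropped."""
        return task_criticality >= self.mode
```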
4. The Proposed MC-IoT Task Model
As presented in Section 2, current MC task models do not take the hardware heterogeneity of the system into consideration.
We begin with the classical MCS model proposed by Vestal in 2007 [25] and consider the parameters from Equation (1). In addition to the temporal behavior already included in this model, we propose an extension consisting of a new parameter, the affinity score Ai, defined as a vector of values, one per processing element (host), representing the affinity of the task for each host. The affinity score is an integer between zero and p, where p is the number of processing elements. A higher value indicates higher affinity, and zero indicates no affinity.
Moreover, we propose to express the computation time not only as a function of the criticality level (Li) but also as a function of the processing hardware element running the task (PEq). Thus, task model (1) becomes:

$$\tau_i = (T_i, D_i, L_i, C_i, A_i)$$

where Ci is a matrix of size p × l (p represents the number of PEs, and l is the number of criticality levels).
The task affinity score can be set statically by the task creator or computed using an algorithm based on the resources required by the task and the task computation time for different processing elements. If a task cannot be scheduled on a particular PE, the WCET value can be set to infinity, which translates to an affinity score of zero.
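As an illustration only (our own Python naming, not from the paper), the extended task model can be represented with a WCET matrix indexed by PE and criticality level plus an affinity vector:

```python
from dataclasses import dataclass
from typing import List
import math

@dataclass
class MCIoTTask:
    """Hardware-aware MC-IoT task: (T_i, D_i, L_i, C_i, A_i)."""
    period: float               # T_i
    deadline: float             # D_i
    criticality: int            # L_i (1 = lowest)
    wcet: List[List[float]]     # C_i: p x l matrix, wcet[q][j-1] = C_i,j,q on PE q
    affinity: List[int]         # A_i: one score per PE, 0 = no affinity, p = highest

    def utilization(self, pe: int, level: int) -> float:
        """U_i,j,q = C_i,j,q / T_i; an infinite WCET encodes 'cannot run on this PE'."""
        c = self.wcet[pe][level - 1]
        return math.inf if math.isinf(c) else c / self.period
```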
In the following analysis, we consider a dual criticality system. However, these algorithms can also be extended to platforms with multiple criticality levels. In criticality-aware heuristics, it makes more sense to use utilization in the high criticality (Hi) mode on a certain processing element q during the assignment of Hi-criticality tasks and utilization in the low criticality (Lo) mode during the assignment of Lo-criticality tasks. Following this principle, the affinity score is set for Hi-criticality tasks using their Hi-criticality WCET and for Lo-criticality tasks using their Lo-criticality WCET.
5. A Methodology for Mapping Tasks on Different Processing Elements
Having defined the MC-IoT task model and formalized the scheduling problem in Section 4, we propose a hardware-aware methodology for mapping tasks onto our proposed MC-IoT architecture. The methodology comprises methods for setting the newly introduced task parameter, namely the affinity score, and a suitable mapping function that respects both application and resource constraints.
5.1. Setting Affinity Score
While the task parameters inherited from the classical MCS model are assigned considering only the temporal behavior [25], the affinity score is computed by considering the particularities of the processing elements. This can be achieved using the following methodology.
5.1.1. Set Affinity Score Based on Computation Time
The affinity score can be set based on the computation time using the algorithm presented in Algorithm 1.
Algorithm 1: SetAffinity_WCET (pseudocode listing; the procedure is described in the steps below).
Step 1: Extract the task computation times for the highest criticality level (2 in the case of a dual-criticality system) on each PE (the second column of the generated computation time matrix Ci,j,q) by copying them into an array of structures Xi,2,q. Each structure holds a computation time value Xi,2,q and a PE index q, where i is the (fixed) task number and 2 denotes the highest criticality level. Index q varies from one to p.
Step 2: Find the index q of the maximum value Xi,2,q in the array, with q varying from one to p.
Step 3: In the affinity array, set the affinity score Ai,q to 1 for the PE with the highest Xi,2,q value. Then set that element to 0, assign affinity 2 to the PE with the next highest Xi,2,q, and so on, until all PEs have received a score (q ≤ p). In this way, the PE with the lowest WCET receives the highest affinity score, p.
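The following short Python sketch (our own implementation of the steps above, with hypothetical names) illustrates how the WCET-based affinity assignment could look:

```python
from typing import List

def set_affinity_wcet(wcet_row_highest: List[float]) -> List[int]:
    """Assign affinity scores from the WCETs of one task at its highest
    criticality level, one value per PE (index q = 0..p-1).

    The PE with the largest WCET gets score 1, the next largest gets 2, ...,
    and the PE with the smallest WCET gets the highest score p.
    """
    p = len(wcet_row_highest)
    # Sort PE indices by WCET, largest first (Steps 2-3 of Algorithm 1).
    order = sorted(range(p), key=lambda q: wcet_row_highest[q], reverse=True)
    affinity = [0] * p
    for rank, q in enumerate(order, start=1):
        affinity[q] = rank
    return affinity

# Example: three PEs; PE 1 is the fastest, so it receives the highest score (3).
print(set_affinity_wcet([8.0, 3.0, 5.0]))   # -> [1, 3, 2]
```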
5.1.2. Set Affinity Score Based on Criticality Level
To distribute tasks on different processing elements considering the criticality level of the task, we can employ carefully selected algorithms, such as those described in [42].
An alternative method, which sets the affinity score based on the task criticality level, is proposed as Algorithm 2.
Algorithm 2: SetAffinity_criticality (p > l) (pseudocode listing; the procedure is described in the steps below).
Step 1: Build an array of structures PEq,j. Each structure holds a PE index q and a criticality level Lj representing the expected criticality level of the tasks to be partitioned onto PEq. Index q varies from 1 to p, and Lj is computed as Lj = q mod l, where l is the number of criticality levels and mod is the modulo operation. If the result is zero, we consider Lj = l.
Step 2: Assign affinity scores to each task τi according to its criticality level Li. The PEs form two subsets: PEs whose expected criticality PEq,j equals Li, and PEs whose expected criticality does not equal Li. The affinity scores Ai,q take the highest values for the first subset of PEs. Within each subset, the affinity scores are assigned according to the computation time, as in Algorithm 1.
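A possible realization of these two steps (illustrative Python under our own naming assumptions, for the case with more PEs than criticality levels) is sketched below:

```python
from typing import List

def set_affinity_criticality_p_gt_l(task_level: int,
                                    wcet_row_highest: List[float],
                                    num_levels: int) -> List[int]:
    """Affinity by expected PE criticality (Step 1), then by WCET (Step 2)."""
    p = len(wcet_row_highest)
    # Step 1: expected criticality of each PE (1-based q): L_j = q mod l, with 0 -> l.
    expected = [(q % num_levels) or num_levels for q in range(1, p + 1)]
    # Step 2: non-matching PEs get the lower scores, matching PEs the higher ones;
    # inside each group, the slowest PE (largest WCET) gets the lowest score.
    matching = sorted((q for q in range(p) if expected[q] == task_level),
                      key=lambda q: wcet_row_highest[q], reverse=True)
    others = sorted((q for q in range(p) if expected[q] != task_level),
                    key=lambda q: wcet_row_highest[q], reverse=True)
    affinity = [0] * p
    for rank, q in enumerate(others + matching, start=1):
        affinity[q] = rank
    return affinity

# Example: dual-criticality system, 3 PEs; a Hi (level 2) task prefers the PE
# whose expected criticality is 2.
print(set_affinity_criticality_p_gt_l(2, [8.0, 3.0, 5.0], 2))   # -> [1, 3, 2]
```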
If the number of criticality levels exceeds the number of PEs, the affinity score is set according to Algorithm 3.
Algorithm 3: SetAffinity_criticality (l > p) (pseudocode listing; the procedure is described in the steps below).
Step 1: For each task τi, obtain the PE on which it is expected to run by computing q = Li mod p, where p is the number of PEs and mod is the modulo operation. If the result is 0, we consider q = p. Next, we set the affinity score Ai,q for this PE to p (the highest value).
Step 2: For each task τi, set the affinity scores of the remaining PEs according to the computation time (as in Algorithm 1), assigning scores from 1 to p − 1.
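For completeness, a corresponding sketch for the case with more criticality levels than PEs (again illustrative Python with our own names) could be:

```python
from typing import List

def set_affinity_criticality_l_gt_p(task_level: int,
                                    wcet_row_highest: List[float]) -> List[int]:
    """Affinity when there are more criticality levels than PEs (Algorithm 3).

    Step 1: the 'expected' PE, q = L_i mod p (with 0 -> p), gets the highest score p.
    Step 2: the remaining PEs are ranked by WCET as in Algorithm 1 (scores 1..p-1).
    """
    p = len(wcet_row_highest)
    preferred = (task_level % p) or p          # 1-based PE index
    affinity = [0] * p
    affinity[preferred - 1] = p
    remaining = [q for q in range(p) if q != preferred - 1]
    remaining.sort(key=lambda q: wcet_row_highest[q], reverse=True)  # slowest first
    for rank, q in enumerate(remaining, start=1):
        affinity[q] = rank
    return affinity

# Example: 4 criticality levels, 3 PEs; a level-4 task maps to PE q = 4 mod 3 = 1.
print(set_affinity_criticality_l_gt_p(4, [8.0, 3.0, 5.0]))   # -> [3, 2, 1]
```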
5.2. Task Mapping
As highlighted in the literature [33], the partitioned scheduling approach can achieve better schedulability results than the global scheduling approach. Thus, we also propose a partitioned approach.
Given a set of independent MC tasks and processing elements, the problem involves determining the following function:

$$M : \tau \rightarrow P$$

where τ is the set of tasks in the system, P is the set of processing elements, and M is a morphism. Considering the working hypotheses presented above, tasks cannot migrate during runtime; therefore, each task is assigned to a single PE, and M(τi) represents the PE on which τi runs.
Figure 2 illustrates the assignment of tasks to PEs, where p is the total number of PEs and n is the number of tasks in the system.
After each task is assigned to a processing element, the tasks allocated to PEq form a task subset defined as Ψq [27]. If the task set τ is successfully partitioned, then

$$\bigcup_{q=1}^{p} \Psi_q = \tau$$

where Ψq can also be an empty subset {∅}.
The goal is to find a suitable mapping function M that satisfies the following requirements, listed in priority order:
- (1)
Suitable subsets of tasks Ψq are created, such that each subset is schedulable by the local scheduling algorithm running on its assigned PEq.
- (2)
Respect the task affinity scores.
- (3)
Optimize resource usage.
Owing to its increased complexity, the mapping function should be implemented at the intermediate (Fog) level.
5.2.1. Local Task Scheduling
According to Vestal [25], a task set is schedulable if the deadline of each task τi is greater than or equal to its worst-case response time:

$$R_i \leq D_i \qquad (6)$$

where the worst-case response time Ri is the maximum duration between the activation and the completion of any job of task τi [43]. The response time is composed of the task's own worst-case execution time and the interference from higher-priority tasks:

$$R_i = C_i + I_i \qquad (7)$$

where Ii is the interference from higher-priority tasks.
The value of Ri depends on the scheduling algorithm and can be computed iteratively as the least fixed point of (8), where hp(i) is the subset of tasks with a priority higher than or equal to that of task τi, not containing τi, Ci is the worst-case execution time, and Ti is the task period [25]:

$$R_i = C_i + \sum_{\tau_j \in hp(i)} \left\lceil \frac{R_i}{T_j} \right\rceil C_j \qquad (8)$$
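As an illustration (not from the paper), the least fixed point of recurrence (8) can be computed by simple iteration; the sketch below assumes tasks are indexed in decreasing priority order:

```python
import math
from typing import List, Tuple

def response_time(i: int, tasks: List[Tuple[float, float]], deadline: float) -> float:
    """Least fixed point of Eq. (8) for task i.

    tasks[j] = (C_j, T_j), ordered so that tasks 0..i-1 have higher priority.
    Iteration stops when R_i converges or exceeds the deadline (unschedulable).
    """
    c_i = tasks[i][0]
    r = c_i
    while True:
        interference = sum(math.ceil(r / t_j) * c_j for c_j, t_j in tasks[:i])
        r_next = c_i + interference
        if r_next == r or r_next > deadline:
            return r_next
        r = r_next

# Example: three tasks (C, T) in decreasing priority order.
tasks = [(1.0, 4.0), (2.0, 6.0), (3.0, 12.0)]
print(response_time(2, tasks, deadline=12.0))   # worst-case response time of task 3
```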
Scheduling in mixed-criticality systems is feasible if the following two conditions are satisfied [44].
Condition 1. If all jobs run no more than their Ci,j values for the current Lj criticality mode, then all jobs with a criticality level Li higher than or equal to Lj must be completed before their deadline.
Condition 2. If at least one job exceeds its Ci,j execution time value for the current Lj criticality mode, then the current criticality mode changes to the next criticality value (Lj+1), for which Condition 1 must also be true.
Thus, Condition 1 must be satisfied for all the criticality modes available in the system.
For criticality mode k (Lk) on processing element PEq, Equation (8) becomes:

$$R_{i,k,q} = C_{i,k,q} + \sum_{\tau_j \in hp(i)} \left\lceil \frac{R_{i,k,q}}{T_j} \right\rceil C_{j,k,q}$$
Checking Condition 1 for the Lk criticality mode reduces to verifying Condition (6) for the Lk criticality mode on the PEq processing element, that is,

$$R_{i,k,q} \leq D_i$$

must hold for all the tasks in the task set.
Moreover, for all deadlines to be satisfied, the processing units must not be overloaded with tasks. Thus, a necessary (but not sufficient) bound on the load metric of any processing unit is given by [44]:

$$U_{\Psi_q, L_j, q} \leq 1$$
For periodic/sporadic tasks, the total processor utilization of subset Ψq in execution mode Lj is obtained by summing the utilizations Ci,k,q/Ti of the tasks in hc(Lj), where hc(Lj) is the subset of tasks from Ψq with criticality higher than or equal to Lj, l is the number of criticality levels, Lk with k = j…l denotes the criticality levels higher than or equal to Lj, and Ci,k,q is the WCET of task i when running in criticality mode k on processing element PEq.
Moreover, many local task-scheduling algorithms have sufficient schedulability conditions; for example, a sufficient condition for the local task scheduling algorithm EDF-VD in systems with two criticality levels has been proven in [27].
5.2.2. Affinity Score Deviation Function
The task affinity score can be set statically by the task creator or computed using an algorithm similar to those proposed in Section 5.1, based on the resources needed by the task and the task computation time on different processing elements. The mapping function should also consider the affinity score of each task, provided that the local feasibility conditions are not violated. To evaluate this, we propose a function representing the total affinity-score deviation Adτ of a particular mapping, as follows:

$$Ad_\tau = \sum_{i=1}^{n} \left( p - A_{i,q} \right)$$

where q is the index of the subset Ψ containing task i and p is the number of PEs (p is also equal to the highest affinity score value).
Our goal is to minimize Adτ subject to the feasibility conditions imposed by the local scheduler (Condition 1).
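A small helper in the same illustrative Python style (our own names; it assumes the deviation is the sum of p minus the achieved affinity of each task, as reconstructed above) shows how a candidate mapping could be scored:

```python
from typing import Dict, List

def affinity_deviation(mapping: Dict[int, int], affinities: List[List[int]]) -> int:
    """Total affinity-score deviation Ad_tau of a mapping.

    mapping[i] = index q of the PE (subset Psi_q) hosting task i;
    affinities[i][q] = A_i,q; p = number of PEs = highest possible score.
    """
    p = len(affinities[0])
    return sum(p - affinities[i][q] for i, q in mapping.items())

# Example: two tasks on three PEs; task 0 gets its best PE, task 1 its second best.
print(affinity_deviation({0: 1, 1: 2}, [[1, 3, 2], [1, 3, 2]]))   # -> 0 + 1 = 1
```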
5.2.3. Proposed Mapping Function—Best Affinity Fit
Based on the affinity-score setting methods introduced earlier in this section, we developed a mapping algorithm (see Algorithm 4) that considers the affinity value when partitioning tasks to PEs, namely Best Affinity Fit (BAF). This algorithm is evaluated in the experimental section.
Algorithm 4: Best Affinity Fit (BAF) (pseudocode listing; the procedure is described in the steps below).
For each task τi:
Step 1: First, we assume that the task can be assigned to a PE and therefore set the variable assigned to 1. We then look for the PE with the highest affinity score for task i that has enough spare capacity to accept task τi (the PE utilization must respect Equation (14)).
Step 2: If a certain PEindex does not have sufficient space for task i, its affinity score Ai,index is set to 0. If no PE can host task i, the variable assigned is set to 0.
Step 3: If a PEindex that can accept task τi has been found, task i is added to subset Ψindex and the PE utilization is updated.
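The steps above could be realized as in the following sketch (illustrative Python with our own data layout; the simple utilization test stands in for the bound referenced as Equation (14)):

```python
from typing import Dict, List, Optional

def best_affinity_fit(affinities: List[List[int]],
                      utilizations: List[List[float]],
                      capacity: float = 1.0) -> Optional[Dict[int, int]]:
    """Partition tasks to PEs by decreasing affinity (Best Affinity Fit).

    affinities[i][q] = A_i,q; utilizations[i][q] = utilization of task i on PE q.
    Returns a mapping task -> PE, or None if some task cannot be placed.
    """
    p = len(affinities[0])
    load = [0.0] * p                    # current utilization of each PE
    mapping: Dict[int, int] = {}
    for i, aff in enumerate(affinities):
        aff = list(aff)                 # local copy; scores are zeroed when a PE is full
        while True:
            q = max(range(p), key=lambda k: aff[k])
            if aff[q] == 0:             # Step 2: no PE left that can host task i
                return None
            if load[q] + utilizations[i][q] <= capacity:
                mapping[i] = q          # Step 3: assign and update the PE load
                load[q] += utilizations[i][q]
                break
            aff[q] = 0                  # this PE has no room; try the next best
    return mapping

# Example: two tasks, two PEs; the second task overflows PE 0 and falls back to PE 1.
print(best_affinity_fit([[2, 1], [2, 1]], [[0.7, 0.9], [0.6, 0.5]]))   # -> {0: 0, 1: 1}
```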
6. Examples and Comparative Evaluation
We conducted a series of simulation experiments to evaluate the effectiveness of our mapping technique, Best Affinity Fit. It was compared against two other relevant mapping methods, Best Fit Decreasing Utilization (BFDU) and Best Fit Decreasing Criticality (BFDC), which are among the most frequently used algorithms [45]. BAF dispatches tasks to processors according to the affinity value, whereas in BFDU and BFDC the tasks are first ordered by decreasing utilization (BFDU) or by decreasing criticality (BFDC) and then assigned to the processors, which are themselves ordered by decreasing utilization. Two affinity assignment strategies were also evaluated: one allocates values according to the WCET, whereas the other depends on the criticality level.
6.1. Random Task Set Generation
All tasks were randomly generated in Matlab (R2018b) using the task set generation algorithm introduced in [46], which is a slight modification of the workload generation algorithm proposed by Guan et al. [47]. We consider a dual-criticality system {Lo, Hi}. Each new task τi was generated as follows:
Task criticality level: Li = Hi with probability PHi; otherwise, Li = Lo.
Task period: Ti is a randomly generated value drawn from a uniform distribution [10, 100].
Task deadline: Di = Ti, because of the implicit deadline constraint.
The utilization of each task Ui,j,q is a p × l matrix, where p is the number of PEs in the system and l represents the number of criticality levels. To generate the utilization values, we considered five input parameters [46]:
The target utilization bound of the task set, where π is the task set and Hi(π) is the subset of π containing only Hi-criticality tasks.
[UL, UU]: task utilizations are generated uniformly from this range.
[ZL, ZU]: the ratio between the Hi-criticality utilization of a task and its Lo-criticality utilization, drawn from this range.
WCET for criticality level Lo: derived from the generated Lo-criticality utilization and the task period.
WCET for criticality level Hi: derived from the generated Hi-criticality utilization and the task period.
The affinity values were assigned using one of the two methods described in the previous section.
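For illustration, a simplified generator along these lines (our own Python approximation of the cited Matlab procedure; parameter names are assumptions) might look as follows:

```python
import random
from typing import Dict, Tuple

def generate_task(p_hi: float,
                  u_range: Tuple[float, float],
                  z_range: Tuple[float, float]) -> Dict[str, object]:
    """Generate one dual-criticality task in the spirit of [46,47].

    WCETs follow from the drawn utilizations via C = U * T.
    """
    level = 'Hi' if random.random() < p_hi else 'Lo'
    period = random.uniform(10, 100)        # T_i drawn uniformly from [10, 100]
    u_lo = random.uniform(*u_range)         # Lo-criticality utilization
    z = random.uniform(*z_range)            # ratio U_Hi / U_Lo
    u_hi = z * u_lo if level == 'Hi' else u_lo   # Lo tasks keep U_Lo (simplification)
    return {'T': period, 'D': period,       # implicit deadline: D_i = T_i
            'L': level,
            'C_lo': u_lo * period,
            'C_hi': u_hi * period}

random.seed(1)
print(generate_task(p_hi=0.5, u_range=(0.05, 0.2), z_range=(1.0, 4.0)))
```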
6.2. Best Affinity Fit Performance Evaluation Results
The parameters used to generate the task sets are provided in the graph captions. For each plot, one parameter was varied while the others were fixed. The utilization of each processor must satisfy the necessary bound on the load metric for any m-processing-unit system (9). Each data point was determined by randomly generating 100 task sets.
6.2.1. Proposed MC Task Model vs. Classical MC Task Model
The main differences between our proposed model and the classical MC task model are as follows: a new parameter called affinity is introduced, and the computation time, in the form of worst-case execution time, becomes a bi-dimensional array (with one dimension being the criticality level and one the processing element).
Table 1 briefly illustrates the differences.
Next, we consider an example of an application model using both the proposed and the classical models. We consider a sensor node modeled as a dual-criticality system with three processing elements; the sensor node runs an IoT sensing application consisting of four tasks. The affinity score for each task is computed in Table 2 according to the two algorithms presented previously, Algorithms 1 and 2 (because the number of PEs exceeds the number of criticality levels).
The tasks are mapped on processing elements using the following methods: on the one hand, BAF (computing the affinity score according to Algorithm 1), and, on the other hand, BFDU and BFDC using the classical MC task model, considering the worst-case execution time vector (the line with the highest values from the Ci,j,q matrix). The task-mapping results are presented in Table 3.
For a system with four criticality levels, {Lo, M1, M2, Hi}, and three processing elements, the affinity score is computed in Table 4 according to Algorithms 1 and 3 (because the number of PEs is lower than the number of criticality levels).
The tasks were mapped using the same partitioning heuristics as those listed in Table 3. The mapping results are presented in Table 5.
In conclusion, by considering a matrix of computation times instead of a vector, we can model the computation time not only as a function of the criticality level but also as a function of the hardware particularities of the processing elements. Moreover, the introduction of the new affinity parameter offers a certain degree of versatility to the task model, because this parameter can be computed according to any optimization function. Our proposed model offers better resource management by achieving lower total utilization, as discussed in the following paragraphs.
6.2.2. BAF vs. BFDU
In the following paragraphs, we compare the results of different mapping functions in terms of total utilization. To reduce the gap between the two MC task models for all mapping functions, the computation time was considered as a matrix, as in our proposed model. The only difference between the task models used for BAF and BFDU is that BAF also uses the affinity parameter, whereas BFDU and BFDC do not. For the BAF, affinity values were assigned according to the WCET in the Hi mode for Hi-criticality tasks and the WCET in the Lo mode for Lo-criticality tasks.
In Figure 3a, the task set utilization bound (x-axis) ranges from 0.4 to 1.0 times the number of processors in steps of 0.1; in Figure 3b, the number of processors (x-axis) ranges from 2 to 12 in steps of 2. In Figure 3c, the number of tasks (x-axis) ranges from 10 to 24 in steps of 2, and in Figure 3d, the percentage of Hi-criticality tasks (x-axis) ranges from 0.2 to 0.6 in steps of 0.1. The average total utilization is depicted on the y-axis.
From the simulation results presented in Figure 3, we obtain the following: by varying the number of processors, BAF shows better results, with an average reduction in total utilization of 1.64 (almost 60%), and by varying the number of tasks, BAF reduces the total utilization compared to BFDU by 1.29 on average (about 40%).
6.2.3. BAF vs. BFDC
In this case, affinity values were assigned according to the criticality level of each task. Four experiments were conducted (Figure 4), similar to the first set. Each data point was determined by randomly generating 100 task sets. The average affinity deviations for the task sets used in Figure 4a are depicted in Figure 5, based on Equation (14).
In Figure 4, by varying the number of processors, BAF achieves better results, with an average reduction in total utilization of 1.2 (about 43%), and by varying the number of tasks, BAF reduces the total utilization compared to BFDC by 0.51 on average (about 15%).
In conclusion, the average total utilization curves for BFDU and BFDC have a growth tendency similar to that of BAF, with BAF achieving better results by an almost constant margin (between 0.5 and 1 for BFDU, and between 0.2 and 0.7 for BFDC). BAF's advantage becomes significantly larger as the number of processors/processing units increases.
7. Conclusions and Future Perspectives
With the continuous improvement of IoT technology, increasingly complex applications, some of them with strict timing constraints, are expected to be developed. Under these circumstances, the implementation of new and effective RT-IoT architectures may offer much-needed technological support. From our perspective, combining the underlying mechanisms of mixed-criticality systems with the Internet of Things offers a huge opportunity for the research and development of complex distributed applications.
In this study, we mathematically formalized the MC-IoT concept. We also proposed an effective methodology for scheduling mixed-criticality applications on IoT nodes. A novel hardware-aware extension of the classical MCS task model was proposed to support this methodology, along with a new task mapping function, Best Affinity Fit.
The performance evaluations show that BAF achieves significantly better results than the existing BFDU and BFDC task-mapping techniques in terms of total task utilization. The performance difference increases with the number of processing elements. These results underline the suitability of the proposed partitioning algorithm for IoT scenarios, in which considerably more processing elements are used than in classic MCSs.
Moreover, a hardware-aware scheduling approach increases the potential for developing distributed heterogeneous systems running real-time mixed-criticality applications, such as 6G networks and their applications.
The proposed model can be further extended to include other aspects besides hardware affinity. Depending on which other system resources need to be used more efficiently, additional functions for setting the affinity score can be developed.
Author Contributions
Conceptualization, C.S.S., E.A.C., M.V.M. and D.I.C.; methodology, C.S.S., E.A.C. and V.S.; software, E.A.C.; validation, C.S.S., E.A.C. and V.S.; formal analysis, C.S.S., E.A.C., M.V.M. and D.I.C.; investigation, C.S.S. and E.A.C.; resources, M.V.M. and D.I.C.; writing—original draft preparation, C.S.S. and E.A.C.; writing—review and editing, M.V.M. and D.I.C.; visualization, C.S.S., E.A.C., M.V.M., V.S. and D.I.C.; supervision, M.V.M. and D.I.C. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
Appendix A
Model Class | Task Model | Authors | Year | Number of Parameters | Main Task Parameters | Criticality Levels |
RTS | Periodic | Liu & Layland | 1973 | 2 | C—computation time, T—period | - |
RTS | DAG | Baruah | 1998 | 2 | e—execution requirement, d—deadline | - |
RTS | Sporadic | Mok | 1983 | 3 | c—computation time, p—period, d-deadline | - |
RTS | Periodic | Hamdaoui & Ramanathan | 1995 | 4 | c—computation time, d—deadline, m—tolerable deadline misses from total k deadlines | - |
RTS | DAG | Stigge et al. | 2011 | 3 | e—execution requirement, d—deadline, r—release time | - |
MCS | Generic | Vestal | 2007 | 4 | C—computation time (function of L), T—period, D—deadline, L—criticality level | 4 |
MCS | Periodic/ Sporadic | Lee & Shin | 2017 | 5 | C—computation time (function of L), T—period, D—deadline, L—criticality level, m—tolerable deadline misses | n |
MCS | DAG | Huang et al. | 2013 | 4 | C—computation time (function of L), T—period, D—deadline, L—criticality level | n |
MCS | DAG | Ekberg & Yi | 2016 | 3 | e—execution requirement, d—deadline, m—mode of the corresponding job type | 4 |
MCS | Periodic/ Sporadic | Su and Zhu | 2013 | 6 | C—computation time (function of L), T—period, D—deadline, L—criticality level, p_max—maximum period, P_er—early release points | 2 |
MCS | Periodic/Sporadic | Burns | 2015 | 4 | C—computation time (function of L), T—period, D—deadline, L—criticality level | 2 |
MCS | Periodic/ Sporadic | Burns & Baruah | 2013 | 4 | C—computation time (function of L), T—period, D—deadline, L—criticality level | 2 |
MCS | Periodic/ Sporadic | Li and He | 2017 | 6 | L—criticality level, E—execution time (function of L), M—memory access time (function of L), C—computation time (function of L), T—period, D—deadline | 2 |
MCS | Periodic/ Sporadic | Baruah & Chattopadhyay | 2013 | 4 | C—computation time (function of L), T—period (function of L), D—deadline, L—criticality level | 2 |
MCS | Distributed | Wang et al. | 2021 | 5 | C—computation time (function of L), T—period, D—deadline, L—criticality level, V—value of task (function of L) | 2 |
For uniformity reasons, the criticality level is represented by L, even if in the original articles there might be other symbols used for it.
Appendix B
Abbreviation | Description |
RT-IoT | Real-Time Internet of Things |
MC-IoT | Mixed Criticality Internet of Things |
MCS | Mixed Criticality System |
E-MC | Elastic Mixed Criticality |
MS-DRT | Mode-Switching Digraph Real-Time |
ICG | Interference Constraint Graph |
WCET | Worst Case Execution Time |
PE | Processing Element |
BAF | Best Affinity Fit |
BFDU | Best Fit Decreasing Utilization |
BFDC | Best Fit Decreasing Criticality |
Symbol | Description |
τi | Task i |
Ti | Period of task i |
Di | Deadline of task i |
Li | Criticality of task i |
Ci,j | Computation time in terms of WCET of task i for criticality level j |
Ji,k | Job k of task i |
ai,k | Arrival time of job k of task i |
ci,k | Computation time of job k of task i |
Ci,j,q | Computation time in terms of WCET of task i for criticality level j on processing element q |
Ai,q | Affinity of task i for processing element q |
M | Mapping function |
Ψq | Subset of tasks running on processing element q |
Ri | Response time of task i |
Ii | Interference from higher priority tasks for task i |
Ri,k,q | Response time of task i in criticality mode k on processing element q |
UΨq,Lj,q | Processor utilization of the Ψq subset of tasks of level Lj on processing element q |
Ui,Lo,q | Processor utilization of task i for criticality level Lo on processor q |
Ui,Hi,q | Processor utilization of task i for criticality level Hi on processor q |
Ui,j,q | Processor utilization of task i for criticality level j on processor q |
Adτ | Total affinity score deviation |
 | Processor utilization bound |
References
- Velasquez, K.; Abreu, D.P.; Assis, M.R.M.; Senna, C.; Aranha, D.F.; Bittencourt, L.F.; Laranjeiro, N.; Curado, M.; Vieira, M.; Monteiro, E.; et al. Fog orchestration for the Internet of Everything: State-of-the-art and research challenges. J. Internet Serv. Appl. 2018, 9, 14. [Google Scholar] [CrossRef] [Green Version]
- Viel, F.; Silva, L.A.; Leithardt, V.R.Q.; Santana, J.F.D.P.; Teive, R.C.G.; Zeferino, C.A. An Efficient Interface for the Integration of IoT Devices with Smart Grids. Sensors 2020, 20, 2849. [Google Scholar] [CrossRef] [PubMed]
- Chen, C.Y.; Hasan, M.; Mohan, S. Securing real-time internet-of-things. Sensors 2018, 18, 4356. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Tămaş–Selicean, D.; Pop, P.; Steiner, W. Design optimization of TTEthernet-based distributed real-time systems. Real-Time Syst. 2015, 51, 1–35. [Google Scholar] [CrossRef]
- Calvaresi, D.; Marinoni, M.; Sturm, A.; Schumacher, M.; Buttazzo, G. The challenge of real-time multi-agent systems for enabling IoT and CPS. In Proceedings of the International Conference on Web Intelligence, Leipzig, Germany, 23–26 August 2017; ACM: New York, NY, USA; pp. 356–364. [Google Scholar]
- Carpenter, T.; Hatcliff, J.; Vasserman, E.Y. A reference separation architecture for mixed-criticality medical and IoT devices. In Proceedings of the 1st ACM Workshop on the Internet of Safe Things, Delft, The Netherlands, 5 November 2017; ACM: New York, NY, USA; pp. 14–19. [Google Scholar]
- Moratelli, C.; Johann, S.; Neves, M.; Hessel, F. Embedded virtualization for the design of secure IoT applications. In Proceedings of the 27th International Symposium on Rapid System Prototyping: Shortening the Path from Specification to Prototype, 6–7 October 2016; ACM: New York, NY, USA; pp. 2–6. [Google Scholar]
- Yang, Y.; Wang, K.; Zhang, G.; Chen, X.; Luo, X.; Zhou, M.T. MEETS: Maximal energy-efficient task scheduling in homogeneous fog networks. IEEE Internet Things J. 2018, 5, 4076–4087. [Google Scholar] [CrossRef]
- Kamienski, C.; Jentsch, M.; Eisenhauer, M.; Kiljander, J.; Ferrera, E.; Rosengren, P.; Thestrup, J.; Souto, E.; Andrade, W.S.; Sadok, D. Application development for the Internet of Things: A context-aware mixed-criticality systems development platform. Comput. Commun. 2017, 104, 1–16. [Google Scholar] [CrossRef]
- Yao, S.; Hao, Y.; Zhao, Y.; Shao, H.; Liu, D.; Liu, S.; Wang, T.; Li, J.; Abdelzaher, T. Scheduling real-time deep learning services as imprecise computations. In Proceedings of the 2020 IEEE 26th International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA), Gangnueng, Korea, 19–21 August 2020; pp. 1–10. [Google Scholar]
- Jin, H.; Su, L.; Xiao, H.; Nahrstedt, K. Incentive mechanism for privacy-aware data aggregation in mobile crowd sensing systems. IEEE/ACM Trans. Netw. 2018, 26, 2019–2032. [Google Scholar] [CrossRef]
- Zuo, P.; Hua, Y.; Liu, X.; Feng, D.; Xia, W.; Cao, S.; Wu, J.; Sun, Y.; Guo, Y. BEES: Bandwidth-and energy-efficient image sharing for real-time situation awareness. In Proceedings of the 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS), Atlanta, GA, USA, 5–8 June 2017; pp. 1510–1520. [Google Scholar]
- Steinbaeck, J.; Tengg, A.; Holweg, G.; Druml, N. A 3D time-of-flight mixed-criticality system for environment perception. In Proceedings of the 2017 Euromicro Conference on Digital System Design (DSD), Vienna, Austria, 30 August–1 September 2017; pp. 368–374. [Google Scholar]
- Dimopoulos, A.C.; Bravos, G.; Dimitrakopoulos, G.; Nikolaidou, M.; Nikolopoulos, V.; Anagnostopoulos, D. A multi-core context-aware management architecture for mixed-criticality smart building applications. In Proceedings of the 2016 11th System of Systems Engineering Conference (SoSE), Kongsberg, Norway, 12–16 June 2016; pp. 1–6. [Google Scholar]
- Bravos, G.; Dimitrakopoulos, G.; Anagnostopoulos, D.; Nikolaidou, M.; Kotronis, C.; Politi, E.; Amira, A.; Bensaali, F. Embedded Intelligence in IoT-Based Mixed-Criticality Connected Healthcare Applications: Requirements, Research Achievements and Challenges. Preprints 2018, 2018100216. [Google Scholar] [CrossRef]
- Capota, E.A.; Stangaciu, C.S.; Micea, M.V.; Curiac, D.I. Towards Mixed Criticality Task Scheduling in Cyber Physical Systems: Challenges and Perspectives. J. Syst. Softw. 2019, 156, 204–216. [Google Scholar] [CrossRef]
- Gaur, P.; Tahiliani, M.P. Operating systems for IoT devices: A critical survey. In Proceedings of the 2015 IEEE Region 10 Symposium, Ahmedabad, India, 13–15 May 2015; pp. 33–36. [Google Scholar]
- Kim, J.E.; Abdelzaher, T.; Sha, L.; Bar-Noy, A.; Hobbs, R.; Dron, W. On maximizing quality of information for the internet of things: A real-time scheduling perspective. In Proceedings of the 2016 IEEE 22nd International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA), Daegu, Korea, 17–19 August 2016; pp. 202–211. [Google Scholar]
- Zhang, T.; Gong, T.; Gu, C.; Ji, H.; Han, S.; Deng, Q.; Hu, X.S. Distributed dynamic packet scheduling for handling disturbances in real-time wireless networks. In Proceedings of the 2017 IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS), Pittsburgh, PA, USA, 18–21 April 2017; pp. 261–272.
- Sukumaran Nair, A.; Colaco, L.M.; Raveendran, B.; Punnekkat, S. TaskMUSTER: A comprehensive analysis of task parameters for mixed criticality automotive systems. Sādhanā 2022, 47, 1–23. [Google Scholar] [CrossRef]
- Buttazzo, G.C. Hard Real-Time Computing Systems: Predictable Scheduling Algorithms and Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2011; Volume 24. [Google Scholar]
- Capota, E.A.; Stangaciu, C.S.; Micea, M.V.; Curiac, D.I. Towards Fully Jitterless Applications: Periodic Scheduling in Multiprocessor MCSs Using a Table-Driven Approach. Appl. Sci. 2020, 10, 6702. [Google Scholar] [CrossRef]
- Jeffay, K.; Stanat, D.F.; Martel, C.U. On non-preemptive scheduling of periodic and sporadic tasks. In Proceedings of the Twelfth Real-Time Systems Symposium, San Antonio, TX, USA, 4–6 December 1991; pp. 129–139. [Google Scholar]
- Mok, A.K.L. Fundamental Design Problems of Distributed Systems for the Hard-Real-Time Environment. Diploma Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 1983. [Google Scholar]
- Vestal, S. Preemptive scheduling of multi-criticality systems with varying degrees of execution time assurance. In Proceedings of the 28th IEEE International Real-Time Systems Symposium (RTSS 2007), Tucson, AZ, USA, 3–6 December 2007; pp. 239–243. [Google Scholar]
- Baruah, S.; Li, H.; Stougie, L. Towards the design of certifiable mixed-criticality systems. In Proceedings of the 2010 16th IEEE Real-Time and Embedded Technology and Applications Symposium, Stockholm, Sweden, 12–15 April 2010; pp. 13–22. [Google Scholar]
- Zeng, L.; Xu, C.; Li, R. Partition and Scheduling of the Mixed-Criticality Tasks Based on Probability. IEEE Access 2019, 7, 87837–87848. [Google Scholar] [CrossRef]
- Burns, A. An augmented model for mixed criticality. In Mixed Criticality on Multicore/Manycore Platforms (Dagstuhl Seminar 15121); Baruah, S.K., Cucu-Grosjean, L., Davis, R.I., Maiza, C., Eds.; Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik: Dagstuhl, Germany, 2015; Volume 5. [Google Scholar]
- Baruah, S.; Guo, Z. Mixed-criticality job models: A comparison. In Proceedings of the Workshop on Mixed-Criticality Systems (WMC’15), San Antonio, TX, USA, 1 December 2015. [Google Scholar]
- Burns, A.; Baruah, S. Towards a more practical model for mixed criticality systems. In Proceedings of the Workshop on Mixed-Criticality Systems (Colocated with RTSS), Berlin, Germany, 1–4 December 2020. [Google Scholar]
- Baruah, S.; Chattopadhyay, B. Response-time analysis of mixed criticality systems with pessimistic frequency specification. In Proceedings of the 2013 IEEE 19th International Conference on Embedded and Real-Time Computing Systems and Applications, Taipei, Taiwan, 19–21 August 2013; pp. 237–246. [Google Scholar]
- Li, Z.; He, S. Fixed-priority scheduling for two-phase mixed-criticality systems. ACM Trans. Embed. Comput. Syst. 2017, 17, 1–20. [Google Scholar] [CrossRef]
- Wang, W.; Mao, C.; Zhao, S.; Cao, Y.; Yi, Y.; Chen, S.; Liu, Q. A smart semipartitioned real-time scheduling strategy for mixed-criticality systems in 6G-based edge computing. Wirel. Commun. Mob. Comput. 2021, 2021, 1–11. [Google Scholar] [CrossRef]
- Su, H.; Zhu, D. An elastic mixed-criticality task model and its scheduling algorithm. In Proceedings of the Conference on Design, Automation and Test in Europe, Grenoble, France, 18–22 March 2013; pp. 147–152. [Google Scholar]
- Lee, J.; Shin, K.G. Development and use of a new task model for cyber-physical systems: A real-time scheduling perspective. J. Syst. Softw. 2017, 126, 45–56. [Google Scholar] [CrossRef] [Green Version]
- Hamdaoui, M.; Ramanathan, P. A dynamic priority assignment technique for streams with (m, k)-firm deadlines. IEEE Trans. Comput. 1995, 44, 1443–1451. [Google Scholar] [CrossRef] [Green Version]
- Baruah, S.K. A general model for recurring real-time tasks. In Proceedings of the 19th IEEE Real-Time Systems Symposium (Cat. No. 98CB36279), Madrid, Spain, 4 December 1998; pp. 114–122. [Google Scholar]
- Stigge, M.; Ekberg, P.; Guan, N.; Yi, W. The digraph real-time task model. In Proceedings of the 2011 17th IEEE Real-Time and Embedded Technology and Applications Symposium, Chicago, IL, USA, 11–14 April 2011; pp. 71–80. [Google Scholar]
- Ekberg, P.; Yi, W. Schedulability analysis of a graph-based task model for mixed-criticality systems. Real-Time Syst. 2016, 52, 1–37. [Google Scholar] [CrossRef]
- Huang, P.; Kumar, P.; Stoimenov, N.; Thiele, L. Interference Constraint Graph—A new specification for mixed-criticality systems. In Proceedings of the 2013 IEEE 18th Conference on Emerging Technologies & Factory Automation (ETFA), Cagliari, Italy, 10–13 September 2013; pp. 1–8. [Google Scholar]
- Alimi, I.A.; Patel, R.K.; Zaouga, A.; Muga, N.J.; Xin, Q.; Pinto, A.N.; Monteiro, P.P. Trends in Cloud Computing Paradigms: Fundamental Issues, Recent Advances, and Research Directions toward 6G Fog Networks. In Moving Broadband Mobile Communications Forward: Intelligent Technologies for 5G and Beyond; IntechOpen: London, UK, 2021; p. 3. [Google Scholar]
- Kelly, O.R.; Aydin, H.; Zhao, B. On partitioned scheduling of fixed-priority mixed-criticality task sets. In Proceedings of the 2011 IEEE 10th International Conference on Trust, Security and Privacy in Computing and Communications, Changsha, China, 16–18 November 2011; pp. 1051–1059. [Google Scholar]
- Santy, F.; George, L.; Thierry, P.; Goossens, J. Relaxing mixed-criticality scheduling strictness for task sets scheduled with fp. In Proceedings of the 2012 24th Euromicro Conference on Real-Time Systems, Pisa, Italy, 11–13 July 2012; pp. 155–165. [Google Scholar]
- Socci, D. Scheduling of Certifiable Mixed-Criticality Systems. Diploma Thesis, L’ Ecole Doctorale Mathematiques, Sciences et Technologies de l’Information, Informatique Universite de Grenoble Alpes, Grenoble, France, 2016. [Google Scholar]
- Lupu, I.; Courbin, P.; George, L.; Goossens, J. Multi-criteria evaluation of partitioning schemes for real-time systems. In Proceedings of the 2010 IEEE 15th Conference on Emerging Technologies & Factory Automation (ETFA 2010), Bilbao, Spain, 13–16 September 2010; pp. 1–8. [Google Scholar]
- Li, H.; Baruah, S. Outstanding paper award: Global mixed-criticality scheduling on multiprocessors. In Proceedings of the 2012 24th Euromicro Conference on Real-Time Systems, Pisa, Italy, 11–13 July 2012; pp. 166–175. [Google Scholar]
- Guan, N.; Ekberg, P.; Stigge, M.; Yi, W. Improving the Scheduling of Certifiable Mixed-Criticality Sporadic Task Systems; Technical Report 2013-008; Uppsala University: Uppsala, Sweden, 2013; pp. 1–12. [Google Scholar]
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).