Article

Schedulability Analysis in Fixed-Priority Real-Time Multicore Systems with Contention

Instituto de Automática e Informática Industrial (ai2), Universitat Politècnica de València, 46022 Valencia, Spain
* Author to whom correspondence should be addressed.
All authors contributed equally to this work.
Appl. Sci. 2024, 14(10), 4033; https://doi.org/10.3390/app14104033
Submission received: 18 March 2024 / Revised: 22 April 2024 / Accepted: 7 May 2024 / Published: 9 May 2024

Abstract

In the scheduling of hard real-time systems on multicore platforms, significant unpredictability arises from interference caused by shared hardware resources. The objective of this paper is to offer a schedulability analysis for such systems by assuming a general model that introduces interference as a time parameter for each task. The analysis assumes constrained deadlines and is provided for fixed priorities. It is based on worst-case response time analysis, which exists in the literature for monocore systems. We demonstrate that the worst-case response time is an upper bound, and we evaluate our proposal with synthetic loads and execution on a real platform.

1. Introduction

Multicore platforms are growing in importance in critical systems, such as automotive or avionic applications. In these systems, failure to meet any requirement may lead to catastrophic consequences. Therefore, compliance with deadlines in hard real-time multicore systems is a crucial aspect.
The scheduling problem in multicore systems in turn requires solving two problems: (1) deciding which tasks are executed on each core (the allocation problem) and (2) executing the tasks in each core so that all deadlines are met (the scheduling problem). In partitioned multicore systems, where migration is not allowed, each core can be analyzed independently, so monocore scheduling theory can be applied. However, this is not possible when shared hardware resources exist. When a task runs in parallel with tasks on other cores, it may suffer delays due to shared hardware resources, such as caches, buses, and memories [1,2]. In this case, interference among cores arises, causing unpredictability, which is undesirable in critical systems.
Modeling interference is not an easy task, and it has been a hot topic in recent years. There are two main trends for taking interference into account in the temporal model. On the one hand, some methods focus on the modeling and mitigation of interference for specific hardware resources [3]. These methods are only valid for those types of resources, but they can characterize the interference produced very accurately. On the other hand, other methods analyze contention for multiple resources and its integration into schedulability analysis. When multiple shared resources are considered, the contention model must cope with any of the resources, so a general contention model is required [4]. The drawback is that the interference produced is overestimated, but sometimes this is the only feasible approach when the chip vendor does not provide details about hardware behavior.
In addition to modeling interference, it is also necessary to act in both the allocation and scheduling stages to reduce the overall interference produced. The allocation problem is known to be NP-hard in the strong sense [5]. In multicore systems, the way tasks are allocated to cores largely determines the global interference [6]. Some of the main allocation techniques are heuristics based on bin packing, such as First Fit and Worst Fit [7,8]. These techniques map tasks to cores according to their loads (or utilizations), without considering interference. In contrast, the allocation strategy Wmin, presented in [4], reduces overall interference, as it aims to group tasks with interference in the same core whenever possible. That work assumes a general model for interference where this parameter is added to the traditional parameters of a task.
If the allocation strategy greatly reduces interference, as is the case with Wmin, traditional scheduling algorithms such as fixed or dynamic priorities can be adopted in the scheduling phase. Because the temporal task model is modified by the inclusion of interference due to contention, it is necessary to provide new schedulability tests for these models. The work in [9] presents schedulability tests for dynamic priorities under the temporal model of [4].
When it comes to schedulability tests for fixed priorities, they are based on the worst-case response time (WCRT), i.e., the length of the longest interval from a task's release until its completion [10]. Existing tests do not consider interference, so this work fills this gap. Furthermore, when interference is taken into account, the worst-case scenario may occur at any point during the entire execution window, not necessarily upon the first activation of the task [9]. Extending the concept of WCRT to capture this is also challenging.
Regarding scheduling strategies to reduce interference, the recent work [6] proposes the Rolling Horizon MILP Algorithm (RHMA), which not only achieves a feasible schedule but also minimizes the interference generated by shared hardware resources within the context of hard real-time multicore systems.

Contribution

This paper proposes a schedulability test for fixed priorities in hard real-time multicore systems. We assume the temporal model defined in [4], which considers the interference generated by shared hardware resources. We extend the work presented in [9] to consider fixed priorities in hard real-time systems. In particular, we propose a schedulability test based on WCRT. The novelty of the contribution is that both the task model and worst-case response time equation incorporate interference due to shared hardware resources.
The rest of the paper is organized as follows: Section 2 presents the main contributions in the related research area. In Section 3, we present the system model for constrained-deadline systems that is used throughout this paper. Section 4 reviews the classical schedulability analysis for multicore systems with fixed priorities. Then, we propose an upper bound for the classical worst-case response time analysis, considering contention due to shared hardware resources, together with the corresponding schedulability test. The experimental evaluation in Section 5 demonstrates the validity of our schedulability test and assesses different allocation techniques by comparing how closely the proposed bound matches the real value. These results are confirmed on a real platform. Finally, Section 6 presents the main conclusions and further lines of research.

2. Related Works

The evolution from single-core to multicore computing systems has been a gradual process. Beginning with initial investigations into hard real-time multicore scheduling in the late 1960s, the field has seen a steady stream of research developments aimed at adapting traditional single-core scheduling techniques to multicore environments. Various approaches have been proposed and explored over the years, each contributing to the evolving landscape of multicore scheduling methodologies.
A comprehensive survey of the existing literature on multicore scheduling, as compiled in [11], serves as a valuable resource for understanding the breadth and depth of research in this area. This compilation encapsulates the spectrum of techniques and findings relevant to multicore scheduling, providing insights into the progress made and the challenges that lie ahead.
In recent years, there has been a surge in research focused on reducing interference in multicore systems. A notable contribution to this field is the recent survey by Lugo et al. [12], which highlights key advancements in interference reduction techniques within multicore systems over the past five years. This survey provides a comprehensive overview of the latest research efforts and their contributions to mitigating interference in multicore environments.
Various interference models in the literature target specific sources of interference within computer systems. Some models concentrate on interference stemming from the main memory [13,14], while others focus on interference generated by cache memory [15,16], and others yet address interference originating from the memory bus [17,18]. Each of these models incorporates the impacts of interference into schedulability analysis, typically focusing on a single shared hardware resource at a time.
Some of the works that consider a general model and are closer to our work are detailed hereunder. Altmeyer et al. [19] presented the Multicore Response Time Analysis (MRTA) framework, designed to integrate task demands on various shared resources with the resources’ supply and to incorporate the resulting interference explicitly into response time analysis. Notably, they eschewed the concept of worst-case execution time (WCET) and instead assumed fixed-priority preemptive scheduling.
The research described in [20] introduced a response time analysis method tailored for real-time partitioned multiprocessor systems. These systems consist of a set of independent tasks with fixed priorities, which can be activated arbitrarily and share secondary resources. In the above system model, the worst-case response time (WCRT) of a task may exceed its period, allowing the task to potentially re-arrive before the completion of its preceding job. Additionally, both local and global shared resources are accounted for, and a cap on the maximum number of requests any task can make to a shared resource is imposed.
The study outlined in [21] adopted the concept of superblocks to represent real-time tasks. The WCRT of tasks is deduced from the worst-case completion time of a superblock, determined by the upper bound on access requests to shared memory and the maximum required computation time. The authors investigated various hardware access models and their impact on schedulability.
Choi et al. [22] presented a response time analysis method tailored for synchronous dataflow programs mapped to multiple parallel dependent tasks executing on a computer cluster consisting of many cores. This technique extends the MRTA framework [19] to suit a specific architecture. The WCRT analysis module determines the upper limit of demand for shared resources based on the event stream model of each cluster in each processing element.
The authors in [23] examined fixed-priority partitioned multiprocessor scheduling. They demonstrated that for task systems with arbitrary deadlines, a greedy mapping strategy achieves a speedup factor that holds true for both polynomial-time and exponential-time schedulability tests. They provided schedulability tests specifically tailored for Deadline Monotonic (DM) scheduling, encompassing implicit, constrained, or arbitrary deadline models. However, none of these tests incorporate shared hardware resources into their definitions.
Chen et al. [24] introduced a schedulability condition initially designed for uniprocessor systems, later extending it to derive an upper limit on the number of cores necessary for a periodic, non-preemptive task set. Additionally, they proposed a comprehensive formula to determine valid start time offsets for a new task to collaborate with existing tasks on the same processor. Consequently, offsets and non-preemption are incorporated, which deviates from our temporal model, and the proposed schedulability condition overlooks interference considerations.
Huang et al. [25] introduced an asymmetric analysis method for evaluating the response time of task processing and access to shared resources, considering both the execution and suspension intervals of tasks. Their model incorporates an upper limit on the execution time of shared resource access for each task and the maximum count of segments for accessing resources per task, with each segment potentially comprising multiple consecutive data requests to access the shared resource. In contrast, our approach offers greater flexibility by not constraining the amount of interference or the number of access times in the model.
Choi et al. [26] proposed a fundamental technique for estimating the WCRT of synchronous dataflow (SDF) applications in multicore systems, accounting for task dependency and execution time variations. Their approach integrates schedule time bound analysis, which accounts for interference between task instances within the same SDF graph, and response time analysis, which addresses interference from other real-time tasks.
While both studies consider interference from other real-time tasks in response time analysis, their work focuses on SDFs, whereas ours does not involve dataflows. In [27], the allocation problem and the exact computation of WCRT are addressed by using a compositional framework based on timed automata.
Andersson et al. [28] introduced a schedulability test for the fixed-priority preemptive partitioned scheduling of sporadic, constrained-deadline tasks. Their approach accounts for varying the execution times of tasks based on specific co-runners, which are determined through static analysis or measurements. In a similar vein, Al-Bayati et al. [29] concentrated on resource-aware partitioning approaches within the context of fixed-priority partitioned scheduling. They proposed an Integer Linear Programming formulation to efficiently assign tasks to cores, assign priorities to the tasks, and select resource protection mechanisms.
This work follows the general system model in [4], which introduced a scheduling algorithm that precisely computes total interference and an allocator that aims to minimize this interference. However, in the above work, the authors did not propose any schedulability test to ensure the system’s feasibility. For this reason, [9] proposed two schedulability tests for dynamic-priority, real-time systems with contention. Therefore, our work covers the gap of the schedulability test in [4], but it is focused on fixed-priority, real-time systems with contention.

3. System Task Model

We assume the same task model as the one presented in [4]. We suppose a multicore system with m homogeneous cores (M_0, M_1, …, M_{m−1}) in which a task set τ of n independent periodic or sporadic tasks must be statically allocated. Each task τ_i is represented by the tuple

τ_i = (C_i, D_i, T_i, I_i)        (1)

where C_i is the WCET, D_i is the relative deadline, T_i is the period, and I_i is the interference. We assume constrained deadlines, so D_i ≤ T_i.
The I_i parameter was first introduced in [4]; then, it was also used in [9]. Briefly, the interference parameter I_i is the worst-case time that it takes τ_i to access shared hardware resources, and it means a "delay" in other tasks executing at the same time on all other cores due to contention. I_i is the worst-case time that τ_i spends on memory read and write operations. This is depicted in Figure 1, where we can see that all the time that τ_0 dedicates to these operations is included in its interference parameter I_0. Task execution times are represented by solid rectangles, while interference is sketched in dashed rectangles. From the viewpoint of other tasks, I_i is the additional delay that τ_i causes in tasks concurrently executing on other cores due to contention. For a more comprehensive understanding of the interference parameter, please refer to the aforementioned articles.
Although our work introduces the interference parameter in the general model, there are two commonly used approaches for modeling interference: one involves utilizing a model tailored to the particular type of shared hardware, while the other proposes a general model that is applicable to any type of hardware. The former approach provides a more accurate estimation of interference but is limited to the specific hardware for which it was developed. Additionally, this interference value is typically incorporated into WCET, resulting in a highly pessimistic solution. On the other hand, the latter approach yields a higher interference value, but by independently incorporating this parameter into the temporal model, it is possible to derive a less pessimistic model.
The hyperperiod of τ, H, is the time after which the pattern of job release times starts to repeat; it is the least common multiple of the periods of all the tasks in the set. N_i is the number of activations that task τ_i has throughout the hyperperiod (N_i = H/T_i).
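As a quick illustration, H and the N_i values can be computed in a few lines of Python (a minimal sketch; the helper name `hyperperiod` is ours, not the paper's):

```python
import math

def hyperperiod(periods):
    """Hyperperiod H: least common multiple of all task periods."""
    return math.lcm(*periods)

# Periods T_i of the three tasks used in the Section 4 examples
T = [3, 5, 5]
H = hyperperiod(T)                # H = 15
N = [H // Ti for Ti in T]         # activations per hyperperiod: [5, 3, 3]
```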
Throughout this text, we talk about tasks that belong or do not belong to a core. When we refer to M_{τ_i}, we mean the core to which τ_i is allocated. Moreover, we denote by τ^{M_k} the set of tasks in τ that belong to core M_k. Therefore, ⋃_{k=0}^{m−1} τ^{M_k} = τ.
Also from [4], a task is designated as a receiving task when it accesses shared hardware resources, resulting in an increase in its computation time caused by interference from other tasks allocated to different cores. Conversely, a task is classified as a broadcasting task when its access to shared hardware resources triggers an increment in computation time in other tasks allocated to different cores due to contention. If the interference parameter of task τ_i is equal to zero (I_i = 0), then τ_i is neither a broadcasting nor a receiving task. If I_i > 0 and there is at least one task τ_j in another core with I_j > 0, then τ_i is both a broadcasting and a receiving task.
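This classification can be made concrete with a small sketch (the function name and the (C, D, T, I) tuple layout are our own illustration):

```python
def is_broadcasting_and_receiving(i, tasks, core_of):
    """A task with I_i = 0 is neither broadcasting nor receiving; a task
    with I_i > 0 is both roles as soon as some task on a *different*
    core also has positive interference (i.e., contention can occur)."""
    I_i = tasks[i][3]                      # tuple layout: (C, D, T, I)
    if I_i == 0:
        return False
    return any(tasks[j][3] > 0 and core_of[j] != core_of[i]
               for j in range(len(tasks)) if j != i)

# Example from Table 1: tau_0, tau_1 on core M_0 and tau_2 on core M_1
tasks = [(1, 2, 3, 0), (2, 4, 5, 1), (1, 3, 5, 1)]
core_of = [0, 0, 1]
roles = [is_broadcasting_and_receiving(i, tasks, core_of) for i in range(3)]
# roles == [False, True, True]
```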
Once the model is defined, let us introduce the concept of interference in the schedulability analysis of real-time multicore systems based on fixed priorities.

4. Interference-Aware Schedulability Analysis for Fixed Priorities

In this section, first, we present a well-known fixed-priority schedulability test for constrained-deadline task models. Traditional models overlook the interference arising from the concurrent execution of different tasks across multiple cores. Then, we contribute to the field in this sense by including interference in the schedulability test.

4.1. Deadline Monotonic Schedulability Analysis

Fixed-priority schedulers assign each task a priority that remains constant throughout execution. Deadline Monotonic (DM) scheduling [30] considers task sets with deadlines no larger than periods and assigns the highest priority to the task with the shortest deadline. An exact schedulability test (a necessary and sufficient condition) for fixed priorities is based on calculating the WCRT of each task.
The calculation of the WCRT of  τ i in a monocore system is presented in Equation (2) [31]:
WCRT_i = C_i + Σ_{τ_j ∈ hp(i) ∧ τ_j ∈ τ^{M_{τ_i}}} ⌈WCRT_i / T_j⌉ · C_j        (2)
The second term of the preceding equation accounts for the execution of higher-priority tasks in the worst case, that is, assuming the synchronous activation of tasks. Equation (2) is solved by an iterative method. The stop conditions are the violation of a deadline (WCRT_i > D_i for any task i) or convergence (WCRT^(k+1) = WCRT^(k)).
Then, the task set is schedulable if and only if
WCRT_i ≤ D_i   ∀ τ_i ∈ τ        (3)
The outcome of this test not only determines whether the system is schedulable or not but also provides the worst-case response time (WCRT) of each task. Additionally, it identifies which tasks are implicated in deadline misses, if any occur.
Without interference considerations, the WCRT of all tasks in the task set is obtained when all tasks are activated simultaneously. It corresponds to the first activation of the tasks, so the response time is not calculated for all the activations but only the first.
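The iterative method for Equation (2) can be sketched as follows for a single core (a minimal illustration; the (C, D, T) tuple layout and priority ordering are our assumptions):

```python
import math

def classical_wcrt(tasks, i):
    """Iterative fixed point of Equation (2) on one core.
    tasks: list of (C, D, T) sorted by decreasing priority, so hp(i)
    are the indices below i.  Returns None on deadline violation."""
    C, D, _ = tasks[i]
    r = C
    while True:
        r_next = C + sum(math.ceil(r / tasks[j][2]) * tasks[j][0]
                         for j in range(i))
        if r_next > D:
            return None          # stop condition: deadline violation
        if r_next == r:
            return r             # stop condition: convergence
        r = r_next

# Toy two-task set (our own illustration, not from the paper)
demo = [(1, 3, 3), (2, 5, 5)]
# classical_wcrt(demo, 1) -> 3  (one preemption by the higher-priority task)
```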

4.2. Interference-Aware Schedulability Analysis for Deadline Monotonic Scheduling

Given the monocore classical test presented above, this section proposes a schedulability test that extends this test to multicore systems, considering the interference delay due to contention.
First, we present an upper bound of WCRT, when interference is considered. This is an upper bound because it assumes that the maximum interference is always produced. Then, we provide the corresponding schedulability test. As we define an upper bound of the response time, the test that we provide is sufficient but not necessary.
Let us start with some considerations to be taken into account. When we extend the analysis to multiple cores, would it be sufficient to consider that the worst-case response time of a task only depends on the computation times of higher-priority tasks of its core? In multicore systems with contention, the response time of a task depends on the following:
  • Its own computation time.
  • The interference it is affected by (if it is a receiving task).
  • The computation time of higher-priority tasks allocated to its core. And this, in turn, depends on the interference they are affected by (if they are receiving tasks).
Therefore, these aspects must be taken into account when a schedulability analysis is proposed.
As stated before, the WCRT of synchronous tasks executed in a monocore system corresponds to the first activation of the tasks. In multicore systems, when interference is considered, WCRT does not necessarily have to be relative to the first activation. Let us consider an example to report this common misconception.
Let us take task set τ = [τ_0, τ_1, τ_2], with τ_0 = (1, 2, 3, 0), τ_1 = (2, 4, 5, 1), and τ_2 = (1, 3, 5, 1), allocated to a dual-core platform (Table 1). τ_0 and τ_1 are allocated to core M_0, and τ_2 to M_1. After allocating tasks to cores, the DM algorithm schedules the tasks in each core. The execution chronogram of the task set is shown in Figure 2.
Task τ_0 possesses the highest priority, as it is the task with the shortest deadline. As I_0 = 0, it is not a receiving task, so it does not receive any interference. τ_1 and τ_2 are receiving/broadcasting tasks (I_1, I_2 > 0). Every time τ_1 is active at the same time as τ_2, interference appears. As τ_0 has the highest priority, τ_1 starts its execution at time 1, when τ_2 has just finished its execution, so no interference appears. Therefore, the response time of τ_1 at its first activation is 3. However, at time 5, τ_1 and τ_2 are released and coincide in execution. Due to this interference and to the preemption caused by the release of τ_0, the response time of τ_1 at its second activation is 4. The same happens at the third activation. Hence, the WCRT of τ_1 does not correspond to its first activation. The same happens with τ_2.
With this example, we can conclude that when interference is considered, the WCRT of a task may refer to any activation, not necessarily the first activation. For this reason, in multicore systems with contention, all activations of all tasks must be studied in order to ensure that the test is correct.

4.2.1. Previous Definitions

Below, let us derive an upper bound of the response time of all activations of all tasks allocated to a multicore system with contention. For the subsequent analysis, we need some definitions.
Definition 1.
Let v_{j→i} be the activation pattern from a broadcasting task τ_j to a receiving task τ_i [9].
We require this information, represented as an array, to compute the interference that a broadcasting task τ_j can induce on a receiving task τ_i. This array denotes the number of activations of τ_j that occur within each activation of τ_i. Array v_{j→i} is calculated as a relation of the periods of the broadcasting and receiving tasks.
This expression is valid for implicit deadline task sets. However, if this array is used to calculate the interference between tasks when D i T i , the interference obtained is highly overestimated. The reason is that there are activations of a task that fall within an activation of another that do not provoke interference because the deadline is not taken into account.
Then, let us propose an improved definition of the activation pattern for constrained deadlines, denoted as v′_{j→i}.
Definition 2.
Let v′_{j→i} be the improved activation pattern from a broadcasting task τ_j to a receiving task τ_i.
This definition is similar to the one presented in [9] but considers the deadlines to count the maximum number of overlaps among activations. In this sense, if the broadcasting task is released when the deadline of the receiving task has expired and the receiving task has not yet been released again (and vice versa), this overlap is not counted. Note that if τ_i is not a receiving task, v′_{j→i}[a] = 0, ∀ τ_j ∈ τ, ∀ a ∈ N_i.
We use an example to show the differences of both parameters.
Let us consider a system with two tasks, τ = [τ_0, τ_1], with τ_0 = (1, 2, 3, 1) and τ_1 = (1, 6, 7, 1), allocated to a dual-core platform. τ_0 is allocated to core M_0, and τ_1 to M_1 (Table 2). For simplicity, computation times are not shown.
As seen in Figure 3a, v_{1→0} = [1, 1, 2, 1, 2, 1, 1] from [9]. Equivalently, v_{0→1} = [3, 3, 3]. However, from Figure 3b, v′_{1→0} = [1, 1, 1, 1, 1, 1, 1] and v′_{0→1} = [2, 3, 2]. Then, v_{1→0}[2] = 2, while v′_{1→0}[2] = 1. This difference is due to the fact that the first activation of τ_1 does not overlap with the third activation of τ_0 when deadlines are considered, as D_1 expires exactly when τ_0 is released. However, as v_{1→0} considers periods rather than deadlines, this overlap is counted.
The next theorem proves that v′_{j→i} can be used to calculate an upper bound of interference.
Theorem 1.
The maximum number of activations of broadcasting task τ_j that fall within the a-th activation of τ_i and may provoke interference is

v′_{j→i}[a] = K + Σ_{t = a·T_i + 1}^{a·T_i + D_i − 1} g(t)        (4)

where

K = 1 if a·T_i ∈ [n·T_j, n·T_j + D_j) for some n ∈ N_j; 0 otherwise        (5)

and

g(t) = 1 if t − ⌊t/T_j⌋·T_j = 0; 0 otherwise        (6)
Proof. 
Let us suppose that the a-th activation of τ_i does not fall in any interval of the form [n·T_j, n·T_j + D_j). Then, n·T_j + D_j ≤ a·T_i < (n+1)·T_j for some n. In this case, K = 0, which means that the receiving task is released when the deadline of the broadcasting task has expired and the broadcasting task has not yet been released again. So, there is no interference at the release instant. On the contrary, if the receiving task is released after the broadcasting task has been released and before its deadline has expired, there may be interference.
Let us now assume that there exists t such that t = α·T_j and a·T_i < t < a·T_i + D_i.
In this case,

t − ⌊t/T_j⌋·T_j = α·T_j − ⌊α·T_j/T_j⌋·T_j = α·T_j − α·T_j = 0

thus g(t) = 1.
Thus, g(t) equals 1 only when broadcasting task τ_j is released within the interval (a·T_i, a·T_i + D_i). By evaluating g(t) throughout this interval, we determine the number of releases that occur within activation a of τ_i, where "activation" denotes the window from the release to the deadline.
The sum of g(t) cannot exceed the total number of activations. Therefore, Equation (4) computes the maximum number of activations occurring within the interval.    □
Listing 1 presents the pseudocode (Python-like) for calculating v′_{j→i}.
Listing 1. Maximum interference array algorithm.

# variable definition and initialization
v_ji = [None] * N_i
L = [range(n * T_j, n * T_j + D_j) for n in range(N_j)]
if I_i != 0 and I_j != 0:
    for a in range(N_i):
        K, g = 0, 0
        if any(a * T_i in r for r in L):
            K = 1
        for t in range(a * T_i + 1, a * T_i + D_i):
            if t % T_j == 0:
                g = g + 1
        v_ji[a] = K + g
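As a cross-check, the listing can be packaged as a self-contained Python function (the function name and parameter order are ours) and run on the Table 2 example, reproducing v′_{0→1} = [2, 3, 2] and v′_{1→0} = [1, 1, 1, 1, 1, 1, 1] from Figure 3b:

```python
import math

def improved_activation_pattern(Tj, Dj, Ij, Ti, Di, Ii):
    """Compute v'_{j->i} per Theorem 1: the maximum number of activations
    of broadcasting task j overlapping (release to deadline) each
    activation of receiving task i.  For a task pair, the pairwise lcm
    of the periods plays the role of the hyperperiod."""
    H = math.lcm(Ti, Tj)
    Ni, Nj = H // Ti, H // Tj
    v = [0] * Ni
    if Ii == 0 or Ij == 0:               # no contention between this pair
        return v
    for a in range(Ni):
        K = 1 if any(n * Tj <= a * Ti < n * Tj + Dj for n in range(Nj)) else 0
        g = sum(1 for t in range(a * Ti + 1, a * Ti + Di) if t % Tj == 0)
        v[a] = K + g
    return v

# Table 2 example: tau_0 = (1, 2, 3, 1), tau_1 = (1, 6, 7, 1)
print(improved_activation_pattern(3, 2, 1, 7, 6, 1))  # v'_{0->1}: [2, 3, 2]
print(improved_activation_pattern(7, 6, 1, 3, 2, 1))  # v'_{1->0}: all ones
```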
Once vector v′_{j→i} is defined, it is used to derive the upper bound of the WCRT of task activations. This vector tells us whether a receiving task is affected by interference from tasks in other cores. Apart from this vector, we need to know the "delay" that tasks suffer due to higher-priority tasks in the same core. For this reason, we need to identify which activations of the higher-priority tasks overlap with the activations of the task under study.
To do so, we need the following definition and properties.
Definition 3.
Let A_{ij} be a binary matrix of size N_i × N_j. The value of A_{ij}[m][n] indicates whether the m-th activation of τ_i falls within the n-th activation of τ_j, in the following way:
  • A_{ij}[m][n] = 1: the m-th activation of τ_i overlaps with the n-th activation of τ_j.
  • A_{ij}[m][n] = 0: the m-th activation of τ_i does not overlap with the n-th activation of τ_j.
This matrix serves as the basis to establish when an activation of task τ i is subject to the influence of activations of tasks with higher priority running in the same core.
Property 1.
If a pair of tasks τ_i and τ_j are allocated to different cores, M_{τ_i} ≠ M_{τ_j}, then A_{ij}[m][n] = 0 ∀ m ∈ N_i, ∀ n ∈ N_j.
Property 2.
A_{ij} = A_{ji}^T ∀ τ_i, τ_j ∈ τ.
Property 3.
A_{ij} = O if i = j.
We shall demonstrate the behavior of A with an example. We consider task set τ = [τ_0, τ_1, τ_2], with τ_0 = (1, 2, 3, 1), τ_1 = (2, 5, 5, 0), and τ_2 = (1, 3, 5, 1), allocated to a dual-core system, where τ_0 and τ_1 are allocated to M_0, and τ_2 to M_1 (Table 3).
Figure 4 shows how tasks are allocated to cores and the relation of deadlines and periods. For simplicity, neither computation times nor interferences are depicted. Let us deduce the values of A_{01}, A_{02}, A_{10}, A_{12}, A_{20}, and A_{21}. If tasks are not allocated to the same core, the corresponding A_{ij} are null matrices. Then, A_{02} = A_{12} = A_{20} = A_{21} = O. Let us now calculate A_{10}.
A_{10} is the matrix that relates τ_1 and τ_0 and is composed of N_1 rows and N_0 columns, where N_1 = H/T_1 = 3 and N_0 = H/T_0 = 5. For each element A_{10}[i][j], we evaluate whether the i-th activation of τ_1 overlaps with the j-th activation of τ_0, which may be deduced by observing Figure 4. For example, activation 0 of τ_1 overlaps with activation 1 of τ_0; then, A_{10}[0][1] = 1. But activation 1 of τ_1 does not overlap with activation 1 of τ_0; then, A_{10}[1][1] = 0. By evaluating the overlap among all activations of both tasks, A_{10} is obtained.

A_{10} =
  1 1 0 0 0
  0 0 1 1 0
  0 0 0 1 1
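Reading each "activation" as the release-to-deadline window [m·T_i, m·T_i + D_i), the overlap matrix can be sketched in Python and checked against the Table 3 example (the function name and tuple layout are ours, not the paper's):

```python
import math

def overlap_matrix(task_i, task_j, same_core=True):
    """A_ij per Definition 3: A[m][n] = 1 iff the window
    [m*T_i, m*T_i + D_i) of task i intersects the window
    [n*T_j, n*T_j + D_j) of task j.  Null matrix when the tasks
    are on different cores (Property 1).  Layout: (C, D, T, I)."""
    _, Di, Ti, _ = task_i
    _, Dj, Tj, _ = task_j
    H = math.lcm(Ti, Tj)            # pairwise window length
    Ni, Nj = H // Ti, H // Tj
    if not same_core:
        return [[0] * Nj for _ in range(Ni)]
    # half-open intervals [a, b) and [c, d) intersect iff a < d and c < b
    return [[1 if m * Ti < n * Tj + Dj and n * Tj < m * Ti + Di else 0
             for n in range(Nj)]
            for m in range(Ni)]

# Table 3 example: A_10 relates tau_1 = (2, 5, 5, 0) and tau_0 = (1, 2, 3, 1)
A10 = overlap_matrix((2, 5, 5, 0), (1, 2, 3, 1))
# A10 == [[1, 1, 0, 0, 0], [0, 0, 1, 1, 0], [0, 0, 0, 1, 1]]
```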
The next section presents the definition of WCRT considering the interference produced in multicore systems, making use of the above definitions.

4.2.2. Worst-Case Response Time with Interference Considerations

In this section, we present a schedulability test based on WCRT by using the definitions of the activation pattern array (v′_{j→i}) and the binary matrix (A_{ij}).
As stated at the beginning of Section 4.2, the WCRT of an activation of a task depends on its own C_i, on the interference it receives from broadcasting tasks allocated to other cores (if it is a receiving task), and on the computation times of, and the interference received by, the higher-priority tasks allocated to its core. With these three factors, Equation (7) is formulated.
Definition 4.
The upper bound of the WCRT of activation k of task τ_i, considering multicore system interference, is

WCRT_i^k(ub) = C_i + Σ_{τ_z ∉ τ^{M_{τ_i}}, I_i ≠ 0} v′_{z→i}[k] · I_z + Σ_{τ_j ∈ hp(i) ∧ τ_j ∈ τ^{M_{τ_i}}} Σ_{a = ⌊k·T_i/T_j⌋}^{⌈(k·T_i + D_i)/T_j⌉ − 1} ( C_j + Σ_{τ_l ∉ τ^{M_{τ_j}}, I_j ≠ 0} v′_{l→j}[a] · I_l ) · A_{ij}[k][a]        (7)
As this is an upper bound, it assumes that whenever interference can occur, it does occur. In reality, it is possible that, even if two tasks on different cores have overlapping activation windows, one of them has already finished its activation when the other starts. That is why the above expression is an upper bound. It is also important to note that in systems without interference, this definition still yields an upper bound rather than the exact result of Equation (2).
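Under our reading of Equation (7), the bound can be sketched as a self-contained Python function; the inner sum over activations a of each higher-priority task is applied through the A_{ij} mask, which zeroes the non-overlapping terms. Names and the (C, D, T, I) layout are our own illustration:

```python
import math

def wcrt_upper_bounds(tasks, core_of, i):
    """Upper-bound response time of every activation k of task i,
    following Definition 4.  DM priorities: shorter deadline = higher."""
    H = math.lcm(*(t[2] for t in tasks))
    N = [H // t[2] for t in tasks]

    def v_prime(j, i):                       # v'_{j->i}, Theorem 1
        _, Dj, Tj, Ij = tasks[j]
        _, Di, Ti, Ii = tasks[i]
        if Ii == 0 or Ij == 0:               # not a contending pair
            return [0] * N[i]
        return [(1 if any(n * Tj <= a * Ti < n * Tj + Dj
                          for n in range(N[j])) else 0)
                + sum(1 for t in range(a * Ti + 1, a * Ti + Di)
                      if t % Tj == 0)
                for a in range(N[i])]

    def A(i, j):                             # overlap matrix A_ij
        _, Di, Ti, _ = tasks[i]
        _, Dj, Tj, _ = tasks[j]
        return [[1 if m * Ti < n * Tj + Dj and n * Tj < m * Ti + Di else 0
                 for n in range(N[j])] for m in range(N[i])]

    def recv(i, k):                          # interference received at activation k
        return sum(v_prime(z, i)[k] * tasks[z][3]
                   for z in range(len(tasks)) if core_of[z] != core_of[i])

    Ci, Di = tasks[i][0], tasks[i][1]
    hp = [j for j in range(len(tasks))       # higher priority, same core
          if j != i and core_of[j] == core_of[i] and tasks[j][1] < Di]
    ub = []
    for k in range(N[i]):
        total = Ci + recv(i, k)
        for j in hp:
            Aij = A(i, j)
            for a in range(N[j]):
                if Aij[k][a]:
                    total += tasks[j][0] + recv(j, a)
        ub.append(total)
    return ub

# Table 3 example: tau_0, tau_1 on M_0; tau_2 on M_1
tasks = [(1, 2, 3, 1), (2, 5, 5, 0), (1, 3, 5, 1)]
core = [0, 0, 1]
# tau_0 has no higher-priority task, so its bound is C_0 plus the
# interference received from tau_2 at each activation: [2, 1, 2, 2, 2]
```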
Once interference is considered in the response time to provide an upper bound, the schedulability test presented next applies.
Theorem 2.
A task set τ is schedulable by fixed priorities if
$WCRT_i^{ub} = \max_{k \in N_i} \left( WCRT_i^{k(ub)} \right) \leq D_i \quad \forall \tau_i \in \tau$
Proof. 
As stated before, the WCRT of a task in multicore systems with interference depends on the following three parameters, which are represented in the expression of $WCRT_i^{k(ub)}$:
  • The task’s computation time: $C_i$.
  • The interference the task is affected by: $\sum_{\tau_z \notin M_{\tau_i},\, I_z \neq 0} v_z^i[k] \cdot I_z$.
  • The computation time of higher-priority tasks allocated to the task’s core and the interference they are affected by:
    $\sum_{\tau_j \in hp(i),\, \tau_j \in M_{\tau_i}} \; \sum_{a = \lfloor k T_i / T_j \rfloor}^{\lceil (k T_i + D_i)/T_j \rceil - 1} \left( C_j + \sum_{\tau_l \notin M_{\tau_j},\, I_l \neq 0} v_l^j[a] \cdot I_l \right) \cdot a_{ij}^{ka}$.
The last two terms are upper bounds on the real interference since they count the maximum number of possible overlaps of execution among tasks. There is no possibility of further interference, as this would imply a higher number of overlaps and the maximum is already calculated in both expressions.
Therefore, since $WCRT_i^{ub}$ is an upper bound on $WCRT_i$, it follows that $WCRT_i^{ub} \geq WCRT_i^k \ \forall k \in N_i$.
In general, if $WCRT_i \leq D_i \ \forall \tau_i \in \tau$, then the system is schedulable. So, if $WCRT_i^{ub} \leq D_i$, the system is schedulable, as all the deadlines are met. □
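Once the per-activation upper bounds are available, Theorem 2 reduces to a direct check. A minimal sketch (function and variable names are ours):

```python
def passes_test(wcrt_ubs, deadlines):
    """Sufficient fixed-priority schedulability test (Theorem 2): every
    task's largest per-activation WCRT upper bound must not exceed its
    deadline. wcrt_ubs[i] is the list of WCRT_i^{k(ub)} over k."""
    return all(max(ubs) <= deadlines[i] for i, ubs in wcrt_ubs.items())

# Upper bounds and deadlines from the worked example in this section:
ubs = {0: [2, 1, 2, 2, 2], 1: [5, 6, 6], 2: [2, 2, 3]}
D = {0: 2, 1: 5, 2: 3}
verdict = passes_test(ubs, D)  # False: tau_1's bound of 6 exceeds D_1 = 5
```

As the test is sufficient but not necessary, a `False` verdict (as here) does not imply the system is unschedulable.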

Example

Let us show the usage of Equation (7) to determine the upper bound of the WCRT of all the activations involved in the previously defined system $\tau = [\tau_0, \tau_1, \tau_2]$, with $\tau_0 = (1, 2, 3, 1)$, $\tau_1 = (2, 5, 5, 0)$, and $\tau_2 = (1, 3, 5, 1)$, allocated to a dual-core system, where $\tau_0$ and $\tau_1$ are allocated to $M_0$, and $\tau_2$, to $M_1$. Considering that tasks are scheduled according to the DM algorithm, the scheduling plan is shown in Figure 5.
From the scheduling plan, we can deduce the actual WCRT of all activations of all tasks: $WCRT_0 = (2, 1, 1, 1, 1)$, $WCRT_1 = (5, 3, 2)$, and $WCRT_2 = (2, 1, 1)$. (Note that the WCRT array details the WCRT of each activation. For example, $WCRT_1 = (5, 3, 2)$ means that the WCRT of the first activation of $\tau_1$ is 5, that of the second activation is 3, and that of the third activation is 2.)
In order to calculate the upper bound of the WCRT, we first calculate $v_j^i$ and binary matrix $A_{ij}$. By applying Theorem 1, $v_2^0 = (1, 0, 1, 1, 1)$ and $v_0^2 = (1, 1, 2)$. $\tau_1$ is not a broadcasting task, so $v_j^1 = v_1^i = 0$. Matrix $A_{10}$ was calculated at the end of Section 4.2.1 (we omit the calculation of the other matrices because they play no role in the WCRT formula) as
$A_{10} = \begin{pmatrix} 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 & 1 \end{pmatrix}$
With the above concepts, we can now apply Equation (7) to calculate $WCRT_i^{k(ub)} \ \forall k, i$:
$WCRT_0^{k(ub)} = C_0 + v_2^0[k] \cdot I_2$ (as the highest-priority task on $M_0$, $\tau_0$ receives no higher-priority term)
$WCRT_1^{k(ub)} = C_1 + \sum_{a = \lfloor 5k/3 \rfloor}^{\lceil (5k+5)/3 \rceil - 1} \left( C_0 + v_2^0[a] \cdot I_2 \right) \cdot a_{10}^{ka}$
$WCRT_2^{k(ub)} = C_2 + v_0^2[k] \cdot I_0$ (as the only task on $M_1$, $\tau_2$ has no higher-priority term)
Therefore, W C R T 0 u b = ( 2 , 1 , 2 , 2 , 2 ) , W C R T 1 u b = ( 5 , 6 , 6 ) , and W C R T 2 u b = ( 2 , 2 , 3 ) .
It can be observed that $WCRT_i^{ub} \geq WCRT_i$ for all tasks and activations. However, $\tau_1$ does not pass the schedulability test of Theorem 2, as $WCRT_1^{1(ub)} = WCRT_1^{2(ub)} = 6 > D_1 = 5$, but the system is schedulable. So, the condition of schedulability is sufficient but not necessary.

5. Evaluation

In this section, we evaluate the proposed schedulability test in an experimental environment, first by means of a synthetic load generator and then by means of a real platform.

5.1. Simulations with Synthetic Workload

In this section, we present the evaluation of the proposed schedulability test with a synthetic load in an experimental environment, as depicted in Figure 6. First, the load was generated. Then, for each task set, the scheduling problem was solved. Finally, the results were validated to evaluate some parameters. In parallel, the proposed schedulability test was applied to the task sets, and all results were compared.
Let us explain each of these steps.
We used the same synthetic task generator as in [9] with the experimental parameters defined in Table 4.
With the total system utilization and a specified number of tasks for each set, the utilization was distributed among the tasks by using the UUniFast discard algorithm from [32]. Periods were randomly generated within the range [20, 1000], and computation times were determined based on system utilization. Deadlines were constrained to be less than or equal to periods, and they were set to D i [ 0.5 T i , T i ] .
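As a reference, UUniFast-Discard can be sketched as follows; this is the standard algorithm from [32] with a discard loop for multicore utilizations (a total above 1 must not produce any single task with utilization above 1), not the exact generator used in the experiments.

```python
import random

def uunifast_discard(n, u_total, u_max=1.0, rng=random):
    """Draw n task utilizations summing to u_total, discarding any draw
    in which a single utilization exceeds u_max (UUniFast-Discard)."""
    while True:
        utils, s = [], u_total
        for i in range(1, n):
            # classic UUniFast step: split off one utilization at a time
            next_s = s * rng.random() ** (1.0 / (n - i))
            utils.append(s - next_s)
            s = next_s
        utils.append(s)
        if all(u <= u_max for u in utils):
            return utils
```

Periods, computation times ($C_i = U_i \cdot T_i$), and deadlines in $[0.5\,T_i, T_i]$ would then be drawn per task as described above.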
The theoretical utilization ranges from 50% to 75% of the system’s maximum potential load. For instance, the maximum load of a system with eight cores is 8; so, for evaluation purposes, the initial utilization was set in the range from 4.1 (≈50%) to 6 (75%).
The number of broadcasting tasks was configured to be 25% of the total number of tasks, except for scenarios 1–6 (two cores), where it was set to 50%. In the latter case, if only one task were a broadcasting task, no interference would be generated. We evaluated every combination of core count and utilization by testing interference levels of 10%, 20%, and 30% relative to the WCET. It is important to note that not all tasks in a task set have the same interference value, but they all experience the same percentage of interference relative to their WCET.
Once the load was generated, to solve the scheduling problem, tasks were allocated to cores by different algorithms. On the one hand, we considered bin-packing algorithms such as Worst Fit (WF) and First Fit (FF) [7,8] to allocate the tasks to cores. Tasks are ordered in descending order of utilization (DU). On the other hand, the interference-aware Wmin algorithm [4] assigns tasks to cores aiming to reduce contention.
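The WFDU/FFDU allocation step can be sketched as follows, assuming a per-core utilization capacity of 1.0; function and variable names are ours, and the interference-aware Wmin allocator from [4] is not reproduced here.

```python
def allocate(tasks, n_cores, policy='WF'):
    """Bin-packing allocation sketch: sort tasks by decreasing utilization
    (DU), then place each one by Worst Fit (least-loaded core that fits) or
    First Fit (lowest-index core that fits). `tasks` maps name -> utilization.
    Returns a task -> core map, or None if some task does not fit."""
    EPS = 1e-12  # tolerance against floating-point rounding at the capacity
    loads = [0.0] * n_cores
    assign = {}
    for name, u in sorted(tasks.items(), key=lambda kv: -kv[1]):
        candidates = [c for c in range(n_cores) if loads[c] + u <= 1.0 + EPS]
        if not candidates:
            return None  # allocation failed under this heuristic
        core = (min(candidates, key=lambda c: loads[c]) if policy == 'WF'
                else candidates[0])
        loads[core] += u
        assign[name] = core
    return assign
```

For example, for utilizations {0.6, 0.5, 0.4} on two cores, WF spreads the load (placing 0.4 on the lighter core), while FF packs it onto the first core that still fits.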
Following the task allocation phase, we now move to the task scheduling phase. For this purpose, we employed the contention-aware scheduling algorithm, as also proposed in [4]. Specifically, we chose Deadline Monotonic (DM) as the priority-based algorithm serving as the basis for this scheduling approach.
Next, the obtained scheduling plans were evaluated, and different parameters were measured (validation phase). In this case, we focused on measuring the response times of the activations of the tasks.
Independently of the above, the schedulability test stated in Theorem 2 was applied to the task sets. This was compared with the results obtained in the validation phase.
Figure 7 shows the actual schedulability rates measured in the validation phase. As expected, the percentage of schedulable sets decreased as the number of cores and broadcasting tasks increased. In fact, under higher loads, fixed-priority scheduling techniques cannot guarantee all the deadlines, unlike other algorithms such as Earliest Deadline First [30,33]. The WFDU allocator provided better schedulability rates than any of the other proposed algorithms, while the FFDU allocator in combination with DM yielded the worst rates. The peaks of the function correspond to scenarios with lower interference factors, where schedulability rates are higher.
Figure 8 depicts the percentage of tasks that passed the schedulability test among those that were feasibly scheduled. Obviously, if a task set is not schedulable, it makes no sense to apply the schedulability test. As the figure shows, a greater number of tasks fail the schedulability test as the number of cores increases. This occurs because $WCRT^{k(ub)}$ is an upper bound, and the overestimation grows as the number of broadcasting tasks and the interference coefficient increase. For FF scenarios, as the schedulability ratio was zero for systems with four cores (from scenario 4 onwards), the schedulability test could not be applied (shown as 0 in Figure 8). The WF allocator is the one for which the test provided the best results. It is important to note that no set passes the schedulability test while being unschedulable; however, there are task sets that fail the test yet are schedulable. Hence, the proposed test is sufficient but not necessary.

5.2. Simulations on a Real Platform

Besides the simulation environment, a real platform was used to test the proposed contributions. We used a Zybo Z7 (Digilent, Pullman, WA, USA), which integrates a dual-core ARM Cortex-A9 processor.
We implemented four real-time tasks in Ada language, which is a programming language used for critical software. In particular, we used the Ravenscar profile [34] to restrict the use of many tasking facilities so that the execution of the program was predictable. The parameters of the four tasks are detailed in Table 5 and were created based on a two-core scenario due to the board characteristics. In order to introduce interference, 50% of the tasks were broadcasting tasks.
The code executed by the tasks consisted of loops performing mathematical operations whose results were read from and stored in a data array until the task’s execution time was consumed.
These tasks were allocated to the Zybo dual-core platform by the previously mentioned algorithm, WFDU (Figure 9).
For this task set, the hyperperiod was $H = 1200$. Then, we calculated the $WCRT$ measured on the platform and the $WCRT^{ub}$ of each activation $k$. The results are shown in Table 6.
From these results, we can deduce that the proposed upper bound is, in all cases, greater than the actual values measured on the platform. Note that when all system parameters are integers, we may assume without loss of generality that all preemptions occur at integer time values; throughout this paper, we have therefore assumed integer parameters. On a real platform, however, we rely on the Ada package Ada.Real_Time, which provides a reliable clock for real-time applications.

6. Conclusions

This paper proposes a schedulability test utilizing worst-case response time analysis for hard real-time multicore systems. The analysis assumes fixed-priority scheduling and constrained-deadline task models. We assume a general model in which interference is a temporal parameter of each task. A schedulability analysis for this model was previously proposed for dynamic priorities, but none existed for fixed priorities; with this work, we complete the schedulability analysis of the general interference model proposed in [4]. The proposed test is sufficient, as it provides an upper bound on the response times of task sets allocated to multicore systems, where tasks suffer contention from other tasks running simultaneously on other cores. The test was evaluated through simulations with synthetic workloads and experiments on a real platform.
As future work, we would like to extend this work toward power consumption and interference reduction. To tackle the consumption problem, frequency reduction during idle intervals might be a feasible approach. Also, incorporating a model of each task’s consumption into the scheduling phase could lead to interesting results. To lessen the effect of interference, AI techniques, mainly based on mathematical models, might lead to a breakthrough in this aspect of scheduling.

Author Contributions

Conceptualization, L.O., A.G., P.B., J.S. and A.C.; methodology, L.O., A.G., P.B., J.S. and A.C.; software, L.O., A.G. and P.B.; validation, L.O., A.G. and P.B.; formal analysis, L.O., A.G., P.B., J.S. and A.C.; investigation, L.O., A.G., P.B., J.S. and A.C.; resources, L.O., A.G., P.B., J.S. and A.C.; data curation, L.O., A.G., P.B., J.S. and A.C.; writing—original draft preparation, L.O., A.G. and P.B.; writing—review and editing, L.O., A.G., P.B., J.S. and A.C.; visualization, L.O., A.G., P.B., J.S. and A.C.; supervision, P.B., J.S. and A.C.; project administration, A.G.; funding acquisition, P.B., J.S. and A.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by MCIN/AEI/10.13039/501100011033/ grant PID2021-124502OB-C41 (PRESECREL), and by PAID-10-20 (Universitat Politècnica de València).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets generated during and/or analyzed during the current study are available from the corresponding author upon reasonable request. The data are not publicly available due to privacy.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
WCRT	worst-case response time
WCET	worst-case execution time
DM	Deadline Monotonic
WF	Worst Fit
FF	First Fit
WFDU	Worst-Fit Decreasing Utilization
FFDU	First-Fit Decreasing Utilization

References

  1. Dasari, D.; Akesson, B.; Nélis, V.; Awan, M.A.; Petters, S.M. Identifying the sources of unpredictability in COTS-based multicore systems. In Proceedings of the 2013 8th IEEE International Symposium on Industrial Embedded Systems (SIES), Porto, Portugal, 19–21 June 2013; pp. 39–48. [Google Scholar] [CrossRef]
  2. Karuppiah, N. The Impact of Interference due to Resource Contention in Multicore Platform for Safety-critical Avionics Systems. Int. J. Res. Eng. Appl. Manag. (IJREAM) 2016, 2, 39–48. [Google Scholar]
  3. Kim, H.; de Niz, D.; Andersson, B.; Klein, M.; Mutlu, O.; Rajkumar, R. Bounding memory interference delay in COTS-based multi-core systems. In Proceedings of the 2014 IEEE 19th Real-Time and Embedded Technology and Applications Symposium (RTAS), St. Louis, MO, USA, 22–24 April 2014; pp. 145–154. [Google Scholar] [CrossRef]
  4. Aceituno, J.M.; Guasque, A.; Balbastre, P.; Simó, J.; Crespo, A. Hardware resources contention-aware scheduling of hard real-time multiprocessor systems. J. Syst. Archit. 2021, 118, 102223. [Google Scholar] [CrossRef]
  5. Johnson, D.S. Near-Optimal Bin Packing Algorithms. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 1973. [Google Scholar]
  6. Aceituno, J.M.; Guasque, A.; Balbastre, P.; Blanes, F.; Pomante, L. Optimized Scheduling of Periodic Hard Real-Time Multicore Systems. IEEE Access 2023, 11, 30027–30039. [Google Scholar] [CrossRef]
  7. Oh, Y.; Son, S.H. Allocating Fixed-Priority Periodic Tasks on Multiprocessor Systems. Real-Time Syst. 1995, 9, 207–239. [Google Scholar] [CrossRef]
  8. Coffman, E.G.; Garey, M.R.; Johnson, D.S. Approximation Algorithms for Bin Packing: A Survey. In Approximation Algorithms for NP-Hard Problems; PWS Publishing Co.: Boston, MA, USA, 1996; pp. 46–93. [Google Scholar]
  9. Guasque, A.; Aceituno, J.M.; Balbastre, P.; Simó, J.; Crespo, A. Schedulability Analysis of Dynamic Priority Real-Time Systems with Contention. J. Supercomput. 2022, 78, 14703–14725. [Google Scholar] [CrossRef]
  10. Bril, R.; Lukkien, J.; Verhaegh, W. Worst-Case Response Time Analysis of Real-Time Tasks under Fixed-Priority Scheduling with Deferred Preemption Revisited. Real-Time Syst. 2007, 42, 269–279. [Google Scholar]
  11. Davis, R.I.; Burns, A. A Survey of Hard Real-Time Scheduling for Multiprocessor Systems. ACM Comput. Surv. 2011, 43, 1–44. [Google Scholar] [CrossRef]
  12. Lugo, T.; Lozano, S.; Fernández, J.; Carretero, J. A Survey of Techniques for Reducing Interference in Real-Time Applications on Multicore Platforms. IEEE Access 2022, 10, 21853–21882. [Google Scholar] [CrossRef]
  13. Pan, X.; Mueller, F. NUMA-aware memory coloring for multicore real-time systems. J. Syst. Archit. 2021, 118, 102188. [Google Scholar] [CrossRef]
  14. Hassan, M. Reduced latency DRAM for multi-core safety-critical real-time systems. Real-Time Syst. 2019, 56, 171–206. [Google Scholar] [CrossRef]
  15. Mancuso, R.; Dudko, R.; Betti, E.; Cesati, M.; Caccamo, M.; Pellizzoni, R. Real-time cache management framework for multi-core architectures. In Proceedings of the 2013 IEEE 19th Real-Time and Embedded Technology and Applications Symposium (RTAS), Philadelphia, PA, USA, 9–11 April 2013; pp. 45–54. [Google Scholar] [CrossRef]
  16. Sun, G.; Shen, J.; Veidenbaum, A.V. Combining prefetch control and cache partitioning to improve multicore performance. In Proceedings of the 2019 IEEE International Parallel and Distributed Processing Symposium (IPDPS), Rio de Janeiro, Brazil, 20–24 May 2019; pp. 953–962. [Google Scholar] [CrossRef]
  17. Yun, H.; Yao, G.; Pellizzoni, R.; Caccamo, M.; Sha, L. Memory Bandwidth Management for Efficient Performance Isolation in Multi-Core Platforms. IEEE Trans. Comput. 2015, 65, 562–576. [Google Scholar] [CrossRef]
  18. Xu, M.; Gifford, R.; Phan, L.T.X. Holistic multi-resource allocation for multicore real-time virtualization. In Proceedings of the 56th Annual Design Automation Conference, Las Vegas, NV, USA, 2–6 June 2019; pp. 1–6. [Google Scholar] [CrossRef]
  19. Altmeyer, S.; Davis, R.I.; Indrusiak, L.; Maiza, C.; Nelis, V.; Reineke, J. A generic and compositional framework for multicore response time analysis. In Proceedings of the 23rd International Conference on Real Time and Networks Systems, Dortmund, Germany, 7–8 June 2015; pp. 129–138. [Google Scholar] [CrossRef]
  20. Negrean, M.; Schliecker, S.; Ernst, R. Response-time analysis of arbitrarily activated tasks in multiprocessor systems with shared resources. In Proceedings of the 2009 Design, Automation & Test in Europe Conference & Exhibition, Nice, France, 20–24 April 2009; pp. 524–529. [Google Scholar] [CrossRef]
  21. Schranzhofer, A.; Pellizzoni, R.; Chen, J.J.; Thiele, L.; Caccamo, M. Worst-case response time analysis of resource access models in multi-core systems. In Proceedings of the Design Automation Conference, Anaheim, CA, USA, 13–18 July 2010; pp. 332–337. [Google Scholar] [CrossRef]
  22. Choi, J.; Kang, D.; Ha, S. Conservative modeling of shared resource contention for dependent tasks in partitioned multi-core systems. In Proceedings of the 2016 Design, Automation & Test in Europe Conference & Exhibition (DATE), Dresden, Germany, 14–18 March 2016; pp. 181–186. [Google Scholar] [CrossRef]
  23. Chen, J.J. Partitioned multiprocessor fixed-priority scheduling of sporadic real-time tasks. In Proceedings of the 2016 28th Euromicro Conference on Real-Time Systems (ECRTS), Toulouse, France, 5–8 July 2016; pp. 251–261. [Google Scholar] [CrossRef]
  24. Chen, J.; Du, C.; Xie, F.; Yang, Z. Schedulability Analysis of Non-Preemptive Strictly Periodic Tasks in Multi-Core Real-Time Systems. Real-Time Syst. 2016, 52, 239–271. [Google Scholar] [CrossRef]
  25. Huang, W.H.; Chen, J.J.; Reineke, J. MIRROR: Symmetric timing analysis for real-time tasks on multicore platforms with shared resources. In Proceedings of the 2016 53rd ACM/EDAC/IEEE Design Automation Conference (DAC), Austin, TX, USA, 5–9 July 2016; pp. 1–6. [Google Scholar] [CrossRef]
  26. Choi, J.; Ha, S. Worst-Case Response Time Analysis of a Synchronous Dataflow Graph in a Multiprocessor System with Real-Time Tasks. ACM Trans. Des. Autom. Electron. Syst. 2017, 22, 1–26. [Google Scholar] [CrossRef]
  27. Foughali, M.; Hladik, P.E.; Zuepke, A. Compositional verification of embedded real-time systems. J. Syst. Archit. 2023, 142, 102928. [Google Scholar] [CrossRef]
  28. Andersson, B.; Kim, H.; Niz, D.D.; Klein, M.; Rajkumar, R.R.; Lehoczky, J. Schedulability Analysis of Tasks with Corunner-Dependent Execution Times. ACM Trans. Embed. Comput. Syst. 2018, 17, 1–29. [Google Scholar] [CrossRef]
  29. Al-bayati, Z.; Sun, Y.; Zeng, H.; Natale, M.D.; Zhu, Q.; Meyer, B.H. Partitioning and Selection of Data Consistency Mechanisms for Multicore Real-Time Systems. ACM Trans. Embed. Comput. Syst. 2019, 18, 1–28. [Google Scholar] [CrossRef]
  30. Liu, C.L.; Layland, J.W. Scheduling Algorithms for Multiprogramming in a Hard-Real-Time Environment. J. ACM 1973, 20, 46–61. [Google Scholar] [CrossRef]
  31. Joseph, M.; Pandya, P. Finding Response Times in a Real-Time System. Comput. J. 1986, 29, 390–395. [Google Scholar] [CrossRef]
  32. Davis, R.I.; Burns, A. Priority assignment for global fixed priority pre-emptive scheduling in multiprocessor real-time systems. In Proceedings of the 2009 30th IEEE Real-Time Systems Symposium, Washington, DC, USA, 1–4 December 2009; pp. 398–409. [Google Scholar] [CrossRef]
  33. Buttazzo, G.C. Rate Monotonic vs. EDF: Judgment Day. Real-Time Syst. 2003, 29, 5–26. [Google Scholar] [CrossRef]
  34. Burns, A.; Dobbing, B.; Vardanega, T. Guide for the Use of the Ada Ravenscar Profile in High Integrity Systems. Ada Lett. 2004, 24, 1–74. [Google Scholar] [CrossRef]
Figure 1. Example of task interference.
Figure 2. WCRT of tasks in multicore systems with contention.
Figure 3. Example (a,b) of $v_j^i$ for task set $\tau = [\tau_0, \tau_1]$ allocated to a dual-core platform.
Figure 4. Representation of deadlines and periods of system $\tau = [\tau_0, \tau_1, \tau_2]$ allocated to a dual-core platform to derive matrices $A_{ij}$.
Figure 5. Scheduling plan of system $\tau = [\tau_0, \tau_1, \tau_2]$ according to the DM algorithm.
Figure 6. Experimental simulation.
Figure 7. Schedulability ratio with DM and different allocators.
Figure 8. Percentage of cases that pass the proposed schedulability test.
Figure 9. WFDU allocation applied to the task set defined in Table 5.
Table 1. System with task set τ.

Task τ | C | D | T | I | Core M
0 | 1 | 2 | 3 | 0 | 0
1 | 2 | 4 | 5 | 1 | 0
2 | 1 | 3 | 5 | 1 | 1

Table 2. System with task set τ.

Task τ | C | D | T | I | Core M
0 | 1 | 2 | 3 | 1 | 0
1 | 1 | 6 | 7 | 1 | 1

Table 3. System with task set τ.

Task τ | C | D | T | I | Core M
0 | 1 | 2 | 3 | 1 | 0
1 | 2 | 5 | 5 | 0 | 0
2 | 1 | 3 | 5 | 1 | 1
Table 4. Definition of experimental scenarios.

Number of Cores | Utilization | Tasks | Broadcasting Tasks | Interference with WCET (%) | Scenarios
2 | 1.1 | 4 | 2 | 10/20/30 | 1–3
2 | 1.5 | 4 | 2 | 10/20/30 | 4–6
4 | 2.4 | 12 | 3 | 10/20/30 | 7–9
4 | 3 | 12 | 3 | 10/20/30 | 10–12
6 | 3.1 | 16 | 4 | 10/20/30 | 13–15
6 | 4.5 | 16 | 4 | 10/20/30 | 16–18
8 | 4.1 | 20 | 5 | 10/20/30 | 19–21
8 | 6 | 20 | 5 | 10/20/30 | 22–24
10 | 5.1 | 28 | 7 | 10/20/30 | 25–27
10 | 7.5 | 28 | 7 | 10/20/30 | 28–30
Table 5. Task set parameters (ms) for the experimental use case.

Id | C | D | T | I
0 | 52 | 300 | 300 | 14
1 | 11 | 300 | 300 | 0
2 | 52 | 400 | 400 | 5
3 | 11 | 400 | 400 | 0
Table 6. $WCRT$ measured on the platform and $WCRT^{ub}$ of each activation $k$ when the task set is allocated with the WFDU algorithm (Figure 9).

(a) Measured WCRT

Task | k = 0 | k = 1 | k = 2 | k = 3
τ0 | 55.707 | 59.031 | 55.704 | 55.713
τ1 | 10.0023 | 10.00231 | 10.00231 | 10.00231
τ2 | 60.688 | 50.4036 | 50.3931 | -
τ3 | 81.0023 | 81.0021 | 81.0021 | -

(b) $WCRT^{ub}$

Task | k = 0 | k = 1 | k = 2 | k = 3
τ0 | 57 | 62 | 62 | 57
τ1 | 11 | 11 | 11 | 11
τ2 | 102 | 102 | 102 | -
τ3 | 130 | 135 | 130 | -
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Ortiz, L.; Guasque, A.; Balbastre, P.; Simó, J.; Crespo, A. Schedulability Analysis in Fixed-Priority Real-Time Multicore Systems with Contention. Appl. Sci. 2024, 14, 4033. https://doi.org/10.3390/app14104033
