Article

Adaptive Computation Offloading with Task Scheduling Minimizing Reallocation in VANETs

1 Department of Electrical and Computer Engineering, University of Seoul, Seoul 02504, Korea
2 School of Computer Engineering and Science, Pusan National University, Busan 46241, Korea
3 Department of Computer Science and Engineering, University of Seoul, Seoul 02504, Korea
* Author to whom correspondence should be addressed.
Electronics 2022, 11(7), 1106; https://doi.org/10.3390/electronics11071106
Submission received: 8 February 2022 / Revised: 11 March 2022 / Accepted: 29 March 2022 / Published: 31 March 2022
(This article belongs to the Special Issue Wireless Communication Technology in Intelligent Transport Systems)

Abstract: Computation offloading (CO) can accelerate application execution through parallel processing. This paper proposes a method called vehicular adaptive offloading (VAO), in which vehicles in vehicular ad-hoc networks (VANETs) offload computationally intensive tasks to nearby vehicles, taking into account the dynamic topology changes of VANETs. In CO, task scheduling has a huge impact on overall performance. After representing the precedence relationships between tasks with a directed acyclic graph (DAG), VAO in the CO-requesting vehicle assigns tasks to neighbors so as to minimize the probability of task reallocation caused by connection disruptions between vehicles. The simulation results showed that the proposed method decreases the number of reallocations by 45.4% compared with the well-known task scheduling algorithms HEFT and Max-Min. Accordingly, the schedule length of the entire set of tasks belonging to one application is shortened by 14.4% on average.

1. Introduction

As artificial intelligence (AI) has attracted much attention, many recent applications in vehicular ad-hoc networks (VANETs) also contain AI algorithms with resource-intensive computational tasks. However, not all vehicles have enough computing resources to run those computationally intensive tasks, so computation offloading (CO) [1] has been investigated as a promising technique to overcome this resource limitation.
Historically, CO was proposed to lighten the computation burden of resource-constrained devices by transferring computational tasks to resource-abundant servers such as the cloud. However, since cloud servers are usually located far away, the long data transfer time has hindered CO from being widely adopted in VANETs. Security concerns along the long path from a requesting vehicle to a cloud server are another problem. This is why edge computing [2] and fog computing [3], which perform computation at edge or fog servers near the requesting vehicles, were suggested.
However, edge or fog servers are in fixed locations, while vehicles move fast, so vehicles might leave the communication range of edge servers even before computation at the edge servers is complete. As a result, requesting vehicles should often receive computation results through multi-hop communication via road side units (RSUs), not directly from the edge servers. This multi-hop communication latency, although it is not as much as the latency from cloud servers, may be intolerable for recent time-sensitive applications, such as autonomous driving.
Therefore, this paper focuses only on offloading to nearby vehicles, instead of to fixed remote servers. CO to nearby vehicles has several advantages as compared with CO to edge servers. First, the distance to other vehicles is usually shorter than the distance to edge servers, which can reduce the bit error rate (BER) during the transmission. Second, the number of neighbor vehicles is usually greater than the number of neighbor edge servers, so more tasks can be offloaded at one time, leveraging the advantage of parallel processing.
However, there are two important issues to be solved in CO to other vehicles: (i) efficient task scheduling and adaptive allocation to service vehicles, (ii) reallocation of the tasks interrupted by topology changes.
In CO to cloud or edge servers, all computational work is statically assigned to a few servers. In CO to other vehicles, on the other hand, there can be many candidate vehicles that can provide services, so tasks should be efficiently scheduled and adaptively distributed among them. Task scheduling has a huge impact on the overall performance of CO, but finding the optimal schedule is a well-known NP-hard problem [4,5]. Thus, heuristic scheduling methods must be designed.
When designing a task scheduling method, we must first consider the relationships between tasks. For example, if task B needs the results of task A, the two tasks should be assigned so that task A is completed before task B. However, the precedence relationship between tasks may not be so explicit. Assuming there is a task C that needs the result of task B above, task C must also be executed after task A finishes. This kind of predecessor and descendant relationship may not be visible at first glance.
We adopt a directed acyclic graph (DAG) to clearly show the precedence relationships between tasks. In a DAG, nodes represent task computation costs, and edges represent the precedence relationships between tasks as well as the data transfer costs. After constructing a DAG from the tasks belonging to one application, a CO-requesting vehicle (RV) assigns each task to a nearby vehicle so that it may be completed before the required finish time. The actual finish time depends on the computing power of the CO service vehicles (SVs), so information on the neighbors should be collected before task allocation. Here, our proposed method, vehicular adaptive offloading (VAO), is used. VAO in an RV first collects nearby vehicle information in real time, then allocates tasks to some of the neighbors so that the overall application can be completed as quickly as possible. Usually, the most computationally intensive task is assigned to the vehicle with the best computation power.
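The DAG bookkeeping described above can be sketched in a few lines of Python. This is an illustrative example only; the task names, costs, and helper functions are our own and not taken from the paper.

```python
# Illustrative DAG bookkeeping (hypothetical task names and costs).
# Node weights model computation costs; edge weights model transfer costs.
task_cost = {"A": 40, "B": 25, "C": 30}
transfer_cost = {("A", "B"): 5, ("B", "C"): 8}   # predecessor -> successor

def predecessors(task):
    """Immediate predecessors of `task` according to the edge set."""
    return [u for (u, v) in transfer_cost if v == task]

def all_ancestors(task):
    """Every task that must finish (directly or transitively) before `task`."""
    seen, stack = set(), list(predecessors(task))
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(predecessors(t))
    return seen
```

With the A/B/C chain from the previous paragraph, `all_ancestors("C")` returns both A and B, exposing exactly the indirect dependency that "may not be seen at first glance".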
The allocation of tasks is not the end of the VAO operation. If an SV moves at a different speed, or in a different direction, from the RV, the SV may leave the communication range of the RV before completing its allocated task. In this case, the allocated task must be retrieved and reassigned to another vehicle. The more task reallocations occur, the longer the total execution time of the application usually becomes, so VAO performs task allocation in a way that reduces the chance of reallocation.
For performance evaluation, we conducted a series of simulations using the Veins simulator [6]. In terms of the schedule length (or the completion time of the application), VAO outperformed the existing task scheduling methods, HEFT [7] and Max-Min [8], by 14.4%. The task reallocation of VAO was also reduced by 45.4%.
The main contributions of this paper are as follows:
To the best of our knowledge, VAO is the first CO method to apply DAG scheduling to dynamic vehicular environments.
The proposed VAO can smoothly reallocate partial tasks for the completion of the whole workflow when an SV moves out of the range of an RV. While HEFT and Max-Min are used only for static servers or processors, VAO extends such scheduling to deal with expected disconnections caused by velocity differences.
Before moving on to the next section, let us answer a question that many people ask about vehicle-to-vehicle (V2V) computation offloading: why would an SV provide a service to an RV? First, a single vehicle may not be able to run the latest VANET applications alone. For example, object recognition-based driving assistance software can reduce blind spots and help with changing lanes, joining lanes, and keeping a safe distance [9,10,11], but it requires excessive computation from a single vehicle. If this kind of software can be run successfully through cooperation between vehicles and the object recognition results are shared between RVs and SVs, driving safety can be greatly improved. Second, SVs might receive a direct reward for providing services. For instance, blockchain or non-fungible token (NFT) technology can be used to create non-manipulable incentives that reward SVs. Since further discussion of blockchain and NFT is beyond the scope of this paper, we simply assume that every vehicle in this paper is willing to cooperate with other vehicles.
The rest of the paper is organized as follows. In Section 2, related works on computation offloading and task scheduling are briefly overviewed. Section 3 presents the system model and Section 4 covers the numerical method to find the near-optimal task schedule in a static environment. The proposed VAO for a dynamic circumstance like VANET is described in detail in Section 5, and the simulation results are given in Section 6. Section 7 concludes this paper.

2. Related Works

This section overviews the offloading mechanism first, then explains famous task scheduling algorithms based on the DAG.

2.1. Computation Offloading

The authors of [1] proposed a deep Q-learning [12] based offloading mechanism for the mobile fog computing environment. In their framework, an application is decomposed into basic blocks and offloaded to many devices. They evaluated latency and energy consumption for varying computation demands and numbers of devices. MAUI [13], ThinkAir [14], and CloneCloud [15] are frameworks for offloading to the mobile edge cloud; in these frameworks, the scheduling algorithm decides whether to offload tasks and deploys them according to the policy of the cloud. AVE [16] is an ant colony optimization (ACO) [17] based task offloading framework for mobile devices connected with each other by dedicated short-range communications (DSRC). It assumes that tasks are context-free, although the tasks of an application are actually tightly coupled. Note that the precedence relationships and data transfer costs between tasks must be considered to optimize the completion time of an application. However, the works mentioned above model an application as one task or multiple context-free tasks.
CO to other devices incurs data transfer overhead. This overhead is particularly important when devices are connected wirelessly, as in the vehicular environment. In the worst case, the data transfer time might overwhelm the task processing time, so several studies have focused on reducing this overhead. For example, the authors of [18] proposed a method using federated learning (FL), in which local devices compute a deep learning (DL) model and a data center gathers the model updates. In [19], the authors suggested reducing the data transfer overhead of the FL method by introducing a random client selection mechanism; for that, federated distillation, based on knowledge distillation [20] and data augmentation [21], was adopted. However, the offloading schemes in [18,19,20,21] are very specific and only for deep-learning algorithms, so they cannot be applied to general applications.
OpenVDAP [22] classifies CO-possible services into four categories, (1) real-time diagnosis services, (2) driving assistance services, (3) in-vehicle infotainment services, and (4) third-party application services. It is very important, especially for real-time diagnosis and driving assistance services, to minimize the latency. The total latency of a service is related to the schedule length, but there has not been much work done on shortening the schedule length.

2.2. Task Scheduling

Task scheduling is one of the major factors affecting the performance of CO because the length of the schedule determines the total completion time of all tasks. To the best of our knowledge, there have been no task scheduling methods that consider the precedence relationships between tasks in a dynamic topology like a VANET. Thus, we instead introduce two methods, HEFT [7] and Max-Min [8], which are used in static environments like parallel processing servers.
First, HEFT is a representative scheduling algorithm that considers the relationships between tasks in the form of a DAG, but it is basically designed for multi-processor computing environments with very low communication cost between static processors. In HEFT, the earliest finish time (EFT) of each task on every processor is calculated. $EFT(t_i, p_j)$, the EFT of task $t_i$ on processor $p_j$, is computed as:
$$EFT(t_i, p_j) = w_{ij} + \max\left\{ avail[j],\ \max_{t_m \in pred(t_i)} \left( AFT(t_m) + c_{mi} \right) \right\}$$
where $w_{ij}$ is the expected execution time of task $t_i$ on processor $p_j$; $avail[j]$ is the earliest time at which $p_j$ is ready to execute $t_i$; $pred(t_i)$ is the set of predecessor tasks of $t_i$; $AFT(t_m)$ is the actual finish time (AFT) of $t_m$; and $c_{mi}$ is the data transfer cost from $t_m$ to $t_i$. The schedule length is defined as the largest among the AFTs of all tasks:
$$schedule\ length = \max_i \{ AFT(t_i) \}$$
HEFT schedules a set of tasks $T$ in a DAG $D$ on a given set of heterogeneous processors $P$. HEFT computes the average processing cost $\bar{w}_i$ of a task $t_i$ and the average communication cost $\bar{c}_{ij}$ between tasks $t_i$ and $t_j$ in the given DAG. Then, it calculates the upward rank of every task in $D$ in reverse order, starting from an exit task $t_{exit}$. The upward rank $rank_u(t_i)$ is the total cost from task $t_i$ to the exit task, defined as:
$$rank_u(t_i) = \bar{w}_i + \max_{t_h \in isu(t_i)} \left( \bar{c}_{ih} + rank_u(t_h) \right)$$
where $isu(t_i)$ is the set of the immediate successors of $t_i$. HEFT sorts the tasks in $T$ in non-increasing order of $rank_u$ into a sorted list $T'$. It then takes the first task (i.e., the task with the highest rank) $t_i$ from $T'$ and allocates it to the minimum-EFT processor (i.e., the processor that can finish the task the earliest) using insertion-based scheduling, which inserts $t_i$ into the first time slot of a processor $p_k$ in which the task can be executed. Finally, $t_i$ is allocated to the minimum-EFT processor $p_{min}$.
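The upward-rank recursion above is compact enough to sketch directly in code. The DAG and cost values below are illustrative, not taken from the paper.

```python
# Illustrative sketch of HEFT's upward rank (hypothetical DAG and costs).
avg_cost = {"A": 10, "B": 20, "C": 15}                    # average processing cost
avg_comm = {("A", "B"): 3, ("A", "C"): 5, ("B", "C"): 2}  # average communication cost

def successors(t):
    """Immediate successors of task t according to the edge set."""
    return [v for (u, v) in avg_comm if u == t]

def rank_u(t):
    """Upward rank: cost of the longest path from t down to the exit task."""
    succ = successors(t)
    if not succ:                       # exit task: rank is its own cost
        return avg_cost[t]
    return avg_cost[t] + max(avg_comm[(t, h)] + rank_u(h) for h in succ)

# HEFT then schedules tasks in non-increasing rank_u order.
order = sorted(avg_cost, key=rank_u, reverse=True)
```

On this toy DAG the entry task A gets the highest rank (its own cost plus the most expensive path to the exit), so it is scheduled first, as HEFT requires.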
Another well-known scheduling method, Max-Min [8], is very simple, considering only the computation power of the processors. Max-Min calculates the estimated finish time $K_{ij}$ of each task $t_i$ on each processor $p_j$. Then, it allocates the task with the largest $K_{ij}$ to the processor that can execute it the fastest. This allocation is repeated until every task is scheduled.
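As described, Max-Min repeatedly picks the task with the largest estimated finish time and hands it to the processor that can finish it earliest. A minimal sketch, with task sizes and processor speeds of our own choosing:

```python
# Illustrative Max-Min scheduling sketch (hypothetical sizes and speeds).
tasks = {"t1": 3000, "t2": 1200, "t3": 500}   # million instructions per task
mips = {"p1": 100, "p2": 150}                  # processor speeds in MIPS

ready = {p: 0.0 for p in mips}                 # time at which each processor frees up
schedule = {}

while tasks:
    # Estimated finish time K[t, p] of each remaining task on each processor.
    K = {(t, p): ready[p] + size / mips[p]
         for t, size in tasks.items() for p in mips}
    # Pick the task with the largest estimated finish time ...
    t_max = max(tasks, key=lambda t: max(K[(t, p)] for p in mips))
    # ... and give it to the processor that can finish it the earliest.
    p_best = min(mips, key=lambda p: K[(t_max, p)])
    schedule[t_max] = p_best
    ready[p_best] = K[(t_max, p_best)]
    del tasks[t_max]
```

Note that, unlike HEFT, nothing here looks at precedence or communication costs, which is exactly why Max-Min needs no DAG.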
Several recent works [23,24] utilize duplication-based DAG scheduling. The Task Duplication-based Clustering Algorithm (TDCA) [23] improves performance through accurate parameter definition, enhanced clustering, and consideration of the chain reaction. Lachesis [24], a task duplication-based learning algorithm, uses a convolutional network to parse a DAG and a heuristic search algorithm to allocate selected tasks to selected nodes. Although duplication-based scheduling is effective in reducing the schedule length by maximizing computing resource usage, it cannot work on resource-constrained devices, such as mobile phones and vehicles, since it has to copy tasks to idle processors. Table 1 shows a comparison of the related works.

3. System Model

Offloading tasks to clouds or other devices allows mobile devices to overcome their limited computation resources. However, offloading to a remote cloud is not suitable for real-time applications because of the inevitably long latency. Even with edge/fog computing, a fast-moving vehicle can easily travel far from the edge/fog server in a short time, resulting in increased latency. According to current vehicular technology trends, we may expect that vehicles will be equipped with computational resources, such as CPUs, memory, and communication capabilities, and will be able to execute applications using computational resources managed by virtual machine techniques [25,26]. In this environment, a vehicle can request other vehicles to handle its tasks. In modern computer systems, various types of high-performance computing processors, such as CPUs, graphics processing units (GPUs), neural processing units (NPUs), and field programmable gate arrays (FPGAs) utilizing CUDA [27] or OpenCL [28], can be used to process code. In this paper, regardless of the processor type, we model the computing power of a vehicle in MIPS (million instructions per second).
Vehicles in a VANET communicate with each other via V2V communications, such as DSRC and cellular vehicle-to-everything (C-V2X). After searching for nearby vehicles, an RV requests SVs to execute its tasks. The set of SVs, $V_S$, is a subset of the set of searched vehicles $V$ ($V_S \subseteq V$). Assuming that there is no conflict or fading on the wireless channels between vehicles, we consider only the amount of data that should be transferred on each channel. The communication overhead of a wireless channel is usually larger than that of a wired channel, so the transfer cost should be compensated for by reducing the processing cost through an optimized task schedule.
Figure 1 compares the efficiency of the task schedules made by HEFT and Max-Min. Suppose that an application consists of five tasks and their precedence relationships are represented by the DAG in Figure 1a. In this graph, nodes and edges correspond to tasks and the dependencies between two tasks, respectively. If a task $t_i$ in a DAG needs the result of another task $t_k$, $t_k$ must be finished before the execution of $t_i$. In this case, $t_k$ is the predecessor of $t_i$, and $t_i$ is the successor of $t_k$. An edge of a DAG points from a predecessor to a successor.
Figure 1b,c show the Gantt charts of task executions by Max-Min and HEFT. Here, $v_1$ is an RV and the other two vehicles are SVs; the computation powers of $v_1$, $v_2$, and $v_3$ are 100, 150, and 200 MIPS, respectively. Max-Min and HEFT in $v_1$ allocate tasks according to their algorithms. As can be seen, HEFT produced a shorter schedule than Max-Min by overlapping data transfer time with execution time on the SVs. In our experiment, the schedule length of HEFT was shorter than that of Max-Min by 15.3% on average. This is because HEFT computes an explicit rank for each task, that is, the order of executing each task, in an upward direction from an exit task based on a DAG, while Max-Min does not use a DAG. Therefore, we also adopt the DAG for the proposed method.
In this example, the task allocation is decided offline before actual execution, since HEFT and Max-Min were designed for static environments, such as parallel processing servers. However, in VANET, task reallocation must be handled frequently, since the associated SVs may leave the communication range of the RV without returning the computation result.

4. Problem Formulation

As mentioned earlier, the optimal task allocation problem is NP-hard. However, we can get a near-optimal solution using linear programming.
First, the allocation of $t_i$ is denoted by a binary variable $x_{ij}$, which is set to 1 when $t_i$ is allocated to vehicle $v_j$, and 0 otherwise:
$$x_{ij} = \begin{cases} 1 & \text{if } t_i \text{ is allocated to } v_j \\ 0 & \text{otherwise} \end{cases}$$
A task schedule $\chi$ is represented by a matrix consisting of the $x_{ij}$s:
$$\chi = \begin{bmatrix} x_{11} & \cdots & x_{1 n_v} \\ \vdots & \ddots & \vdots \\ x_{n_t 1} & \cdots & x_{n_t n_v} \end{bmatrix}$$
where $n_v$ is the number of SVs and $n_t$ is the number of tasks.
For the sake of simplicity, we assume that a vehicle executes only one task at a time and that a task cannot be allocated to more than one vehicle simultaneously. To guarantee that a task is allocated to only one vehicle, the following condition must be satisfied:
$$x_{i1} + x_{i2} + \cdots + x_{i n_v} = 1, \quad \forall i \in \{1, 2, \ldots, n_t\}$$
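The single-allocation constraint above is easy to check mechanically: every row of $\chi$ must sum to exactly 1. A minimal sketch with an illustrative 3-task, 3-vehicle matrix of our own:

```python
# Illustrative schedule matrix: chi[i][j] = 1 iff task i runs on vehicle j.
chi = [
    [1, 0, 0],   # task 1 -> vehicle 1
    [0, 0, 1],   # task 2 -> vehicle 3
    [0, 1, 0],   # task 3 -> vehicle 2
]

def valid_schedule(chi):
    """True iff every task is allocated to exactly one vehicle."""
    return all(sum(row) == 1 for row in chi)
```

A row summing to 0 (unallocated task) or to 2 or more (duplicated task) both violate the constraint.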
A task without any predecessor is called an entry task, and a task without any successor is called an exit task. Only the applications with one entry task are considered in this paper.
For executing task $t_i$ on an SV $v_j$, all of its predecessors must be finished beforehand, and the results from the predecessors must be received by $v_j$. Therefore, $ST(t_i)$, the start time of $t_i$, must satisfy the following condition:
$$ST(t_i) > \max_{t_m \in pred(t_i)} \left( AFT(t_m) + c_{mi} \right)$$
where $pred(t_i)$ is the set of the immediate predecessors of $t_i$ and $c_{mi}$ is the data transfer cost from $t_m$ to $t_i$. Furthermore, because an SV is assumed not to execute more than one task at the same time, $ST(t_i)$ must satisfy the following condition:
$$ST(t_i) < ST(t_k) \ \text{or} \ ST(t_i) > AFT(t_k), \quad \forall t_k \in alloc(j)$$
where $AFT(t_k) = ST(t_k) + E_{kj}$ and $alloc(j)$ is the set of tasks allocated (or offloaded) to $v_j$. The value $E_{kj}$ is the execution time of $t_k$ on $v_j$ and can be computed as $E_{kj} = O(k) \cdot ipb(k) / mips(j)$, where $O(k)$ is the data size of $t_k$, $ipb(k)$ is the number of instructions needed to process one byte of data in $t_k$, and $mips(j)$ is the computing power of processor $j$ in MIPS.
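The execution-time model $E_{kj} = O(k) \cdot ipb(k) / mips(j)$ translates directly into code. The numbers below are illustrative, not from the paper:

```python
def exec_time(data_size_bytes, instr_per_byte, mips):
    """Execution time in seconds: total instructions / (MIPS * 10^6)."""
    return data_size_bytes * instr_per_byte / (mips * 1e6)

# A task over 2 MB of data needing 50 instructions per byte,
# on a 400-MIPS vehicle: 1e8 instructions / 4e8 instr./s = 0.25 s.
t = exec_time(2_000_000, 50, 400)
```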
Assuming that there is no task reallocation, the objective function is:
$$\text{minimize} \ \max_{t_i \in T} \{ AFT(t_i) \} \quad \text{s.t.} \ (4){\sim}(8)$$
However, a fast-moving SV can easily move out of the communication range of the RV, so unfinished tasks that had been allocated to that SV have to be reallocated to other vehicles. If the set of tasks that should be reallocated is $RT$, the objective function becomes:
$$\text{minimize} \ \max_{t_i \in RT} \{ AFT(t_i) \} \quad \text{s.t.} \ (4){\sim}(8)$$
In the dynamic vehicular environment, the vehicles in the coverage area of a vehicle tend to change frequently. After offloading tasks to SVs, the RV has to continuously track the SVs for possible task reallocation events.

5. Vehicular Adaptive Offloading (VAO)

While linear optimization of the task schedule is possible in a static environment, a dynamic situation like a VANET requires a heuristic method, since even the information on the service processors is not given in advance. This section first overviews the operation of our VAO method, then explains the proposed task scheduling and task reallocation schemes in detail.

5.1. Operation Overview

Figure 2 shows the operation of VAO. First, an RV $v_r$ searches for nearby vehicles and collects their information, such as computing power and route, by broadcasting a message. Then $v_r$ assigns computational tasks to nearby vehicles according to the task schedule it builds from the neighbor vehicle information. After that, it periodically checks whether any SV will soon be out of its communication range; in that case, the unfinished tasks on that SV are reallocated to other vehicles. This procedure is depicted in Algorithm 1. Before delving into the algorithm, see Table 2 for the notations.
Algorithm 1: VAO in an RV v_r
Input: a given DAG D, a set of tasks T in D
  1  initialize a set of target tasks T′ to be the same as T
  2  let V be a set of searched vehicles
  3  let V_S be a set of SVs in V (V_S ⊆ V)
  4  set V ← ∅
  5  repeat
  6      search vehicles
  7      add every searched vehicle to V
  8      χ, V_S ← task_schedule(T′, V)
  9      for every task t_i in T′
  10         transfer t_i to the vehicles in V_S according to χ
  11         remove t_i from T′
  12     end for
  13     start/resume execution of the application by transferring the input data
  14     T′ ← check_realloc(V_S)
  15 until all SVs complete the allocated tasks
Algorithm 1 describes the operation of VAO in an RV v_r. After initialization, v_r conducts a search by broadcasting a vehicle information request (VIRQ) message, and each vehicle receiving this message sends a vehicle information reply (VIRP) with information on its route and computing resources (Line 6). The searched vehicle information is stored in V (Line 7), then v_r invokes the task scheduling algorithm to determine χ based on the information on the set of tasks T′ and the computing power of the vehicles in V (Line 8). This task scheduling algorithm is described in Algorithm 2 in the next section. Then, v_r transfers the tasks in T′ to the vehicles in V_S according to χ (Lines 9 to 12). Finally, in Line 13, v_r starts executing the application by transferring input data to the SVs that hold entry tasks.
After completing its allocated tasks, an SV transmits the results to the SVs that hold the successor tasks. However, in a VANET where vehicles move fast, SVs may frequently leave the communication range of the RV before completing their work, which is a serious problem for the CO system because successor tasks cannot be executed until they receive the results of their predecessor tasks. Therefore, the tasks of any SV that has gone out of range without completing them must be reallocated. If an SV that has not finished its assigned tasks moves too far away from v_r, those tasks are reallocated to other nearby vehicles. This check and reallocation are performed periodically until all offloaded tasks are complete. The detailed procedure is explained in Section 5.3.

5.2. Task Scheduling

The proposed task scheduling method has two important features: first, it utilizes the DAG to consider the precedence relationships between tasks; second, it selects as SVs only nearby vehicles whose routes are similar to the route of the RV, in order to reduce the chance of task reallocation. As mentioned earlier, the task schedule χ is the most important factor affecting performance, since a long task schedule means a long execution time. In general, the more reallocations occur, the longer the task schedule becomes.
Algorithm 2: task_schedule(T, V)
Input: a given DAG D, a set of tasks T in D, a set of searched vehicles V
Output: task schedule χ
  1  compute the average processing and communication costs
  2  calculate rank_u of every task in D in an upward direction, starting from an exit task
  3  let T′ be the list of tasks in T, sorted in non-increasing order by rank_u
  4  V_f ← { v_i | v_i ∈ V and its route overlaps the route of v_r by more than α blocks }
  5  while there exist tasks in T′ do
  6      t_i ← the first task in T′
  7      for v_k in V_f do
  8          calculate EFT(t_i, v_k) with insertion-based scheduling
  9      end for
  10     allocate t_i to the minimum-EFT vehicle v_min
  11     remove t_i from T′
  12 end while
Lines 1 and 2 compute the average costs over the searched vehicles and the rank_u of every task in an upward direction from an exit task. Line 3 sorts the tasks in non-increasing order by rank_u. Then, in Line 4, out of the searched vehicles, only those whose routes overlap the route of v_r by more than α blocks are filtered into a new set of vehicles V_f. Only the vehicles in V_f are considered as candidate SVs that can provide the minimum EFT for each task in the subsequent lines. Line 6 takes the first task from the sorted list T′ into t_i. In Lines 7 to 9, v_r calculates the EFT of t_i on every v_k in V_f with insertion-based scheduling. Line 10 allocates t_i to the vehicle that can provide the minimum EFT. Lines 6 to 11 are repeated while any unallocated tasks remain.
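The candidate filter in Line 4 depends on measuring route overlap. One plausible sketch, under our own assumptions that a route is an ordered list of road blocks and that overlap is counted as the longest common contiguous block sequence (the paper does not specify the exact overlap measure):

```python
def overlap_blocks(route_a, route_b):
    """Length of the longest common contiguous run of blocks in two routes."""
    best = 0
    for i in range(len(route_a)):
        for j in range(len(route_b)):
            k = 0
            while (i + k < len(route_a) and j + k < len(route_b)
                   and route_a[i + k] == route_b[j + k]):
                k += 1
            best = max(best, k)
    return best

def filter_candidates(route_rv, neighbor_routes, alpha=3):
    """Keep vehicles whose route overlaps the RV's by more than alpha blocks."""
    return [v for v, route in neighbor_routes.items()
            if overlap_blocks(route_rv, route) > alpha]
```

For example, with α = 3, a neighbor sharing four consecutive blocks with the RV stays in V_f, while one sharing only scattered single blocks is filtered out.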

5.3. Check Reallocation

Although our scheduling method tries its best to avoid task reallocation, reallocation cannot be avoided completely because of the dynamic topology changes in VANETs. Algorithm 3 shows how VAO handles task reallocation caused by the connection disruptions that topology changes produce.
Algorithm 3: check_realloc(V_S)
Input: a set of SVs V_S
Output: a set of unfinished tasks T
  1  periodically broadcast the RV's location information
  2  if a message is received indicating that an SV v_s is expected to leave the communication range
  3      inform all SVs of the reallocation event so that they stop data transfer to v_s
  4      let T be the set of unfinished tasks allocated to v_s
  5      request v_s to transfer the stored data for its tasks
  6  end if
  7  return T
In Line 1, an RV v_r periodically broadcasts its location information to the SVs; SVs that expect to move out of communication range then send report messages to v_r. In Lines 2 to 6, all SVs are informed of a task reallocation event if any v_s is expected to go out of range. All SVs stop data transfer to v_s, and v_r requests v_s to transfer the stored data for its tasks. After that, the algorithm returns the set of unfinished tasks so that Algorithm 1 can reallocate them.
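The report message in Line 2 requires each SV to predict its own disconnection. A simple kinematic sketch, under our assumption of constant relative velocity over a short horizon (the prediction model, parameter names, and values are illustrative, not from the paper):

```python
import math

def leaves_range_soon(pos_rv, vel_rv, pos_sv, vel_sv,
                      comm_range=300.0, horizon=5.0):
    """True if the predicted RV-SV distance exceeds comm_range within horizon seconds."""
    # Extrapolate the relative position by the relative velocity.
    dx = (pos_sv[0] - pos_rv[0]) + (vel_sv[0] - vel_rv[0]) * horizon
    dy = (pos_sv[1] - pos_rv[1]) + (vel_sv[1] - vel_rv[1]) * horizon
    return math.hypot(dx, dy) > comm_range
```

For example, an SV 100 m ahead of a stationary RV and pulling away at 50 m/s would be 350 m away after 5 s, so it reports before leaving a 300 m communication range.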
To filter out vehicles, Algorithm 2 needs to compare the route of every searched vehicle with that of the RV. The time complexity of comparing the routes is O(α|V|) when the number of searched vehicles is |V|. The time complexity of finding the first idle time slot is O(|T|^2|V|), so the total complexity of VAO is O(|T|^2|V|).

6. Performance Evaluation

Using the Veins network simulator [6], we compared the performance of VAO, HEFT, and Max-Min in terms of schedule length, reallocation probability, and data transfer time. Since neither HEFT nor Max-Min includes a way to reallocate tasks, the VIRQ/VIRP process of VAO was adopted for HEFT and Max-Min task reallocation, so the message overheads of the three methods are naturally the same. In VAO, α was set to 3 empirically; that is, only neighbor vehicles whose routes overlapped the route of the RV by more than 3 blocks were selected as candidate SVs. When α was greater than 3, performance dropped because of the scarcity of candidate SVs. When α was smaller than 3, the chance of task reallocation rarely decreased compared with HEFT.
We first compared the schedule length of each algorithm and observed the effects of data transfer time and the number of SVs on the overall performance. Then, the reallocation probability of each algorithm was investigated. Finally, the effect of the depth and width of a DAG on performance was studied.
Figure 3 shows the DAG representation of an offloaded application used for the simulation, with both depth and width of 4. The depth and width mean the total number of levels in a DAG and the maximum number of nodes in one level, respectively.
The road structure is shown in Figure 4. A total of 100~300 vehicles (proportional to the simulation time) with a maximum speed of 30 km/h were randomly deployed on the roads. Each vehicle followed one of 30 randomly generated routes, and its processing power followed a uniform distribution between 200 and 600 MIPS.
Figure 5 compares the average schedule length over 1000 simulations for an application represented by a DAG with both depth and width of 4. As mentioned in Section 3, the schedule length of HEFT was 15.3% shorter than that of Max-Min, since HEFT builds the schedule considering the relationships between the tasks of an application, which are represented in a DAG. On the other hand, VAO created schedules 11.7% longer than HEFT, although VAO also utilizes a DAG. This is because VAO filters some searched vehicles out of the set of candidate SVs based on how long their routes overlap the route of the RV. The smaller the number of candidate SVs, the smaller the number of tasks that can be executed in parallel. Furthermore, if some vehicles with high computing power are excluded from the candidate SVs' set, the total computing capability of the offloading system is degraded. Further experiments were conducted to better understand these results.
We performed additional simulations 1000 times with various vehicle densities so that the number of searched vehicles would differ. Figure 6a depicts the schedule length against the number of searched vehicles |V|. When |V| was 0, tasks could not be offloaded (no CO happened); when |V| became 1, the schedule lengths of HEFT and Max-Min decreased by 39.4%. As |V| increased, the schedule length typically decreased, but after |V| exceeded 4, the schedule lengths of HEFT and Max-Min stabilized around 41.7% and 49.1% of the no-CO case, respectively. This is because the width of the DAG for our application was 4: since up to four tasks can be executed simultaneously in one level, many more than four vehicles do not help much to improve parallel computing efficiency. In the case of VAO, however, about twice as many searched vehicles were needed to stabilize the performance, because some of the searched vehicles were excluded from the candidate SVs' set due to routes different from the RV's. Figure 6b shows the schedule length when the number of searched vehicles |V| is sufficient. When |V| is large enough, the schedule lengths of HEFT and Max-Min do not change much, while the schedule length of VAO continuously decreases as |V| increases.
Figure 6c shows the number of SVs against the number of searched vehicles. The number of vehicles that actually provide computation services in VAO became equal to the number of SVs in HEFT once the number of searched vehicles reached 8, twice the width of the DAG. This means that, given sufficiently many vehicles, VAO can deliver good performance.
The accumulated data transfer times during the execution of an application are presented in Figure 6d. While the data transfer time of Max-Min increased continuously, the data transfer times of HEFT and VAO barely increased once stable. This is because Max-Min considers only the computation power of vehicles when selecting SVs, which results in tasks being offloaded to an unnecessarily large number of SVs as | V | increases. In contrast, HEFT and VAO select SVs considering data transfer time as well as computation power, as in (1) and (3). Thus, even if a neighboring vehicle has slightly higher computation power, an RV does not offload tasks to it when the data transfer cost is too high.
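The selection difference described above can be sketched in a few lines. This is a simplified stand-in for the finish-time criterion of (1) and (3), not the paper's exact formulas: each candidate is scored by estimated transfer time plus computation time, and the minimizer wins. The vehicle names, rates, and task size below are made-up illustration values:

```python
def pick_sv(task_cycles, data_bytes, vehicles):
    """Choose the SV minimizing estimated finish time
    (transfer time + computation time), in the spirit of HEFT/VAO,
    rather than simply the fastest vehicle, as Max-Min does.
    `vehicles` maps id -> (cycles_per_sec, bytes_per_sec)."""
    def finish_time(v):
        cps, bps = vehicles[v]
        return data_bytes / bps + task_cycles / cps
    return min(vehicles, key=finish_time)

# A vehicle with more compute power but a poor link loses to a
# slightly slower vehicle with a good link (hypothetical numbers).
vehicles = {"fast_far": (2.0e9, 1e6), "slow_near": (1.5e9, 1e7)}
print(pick_sv(task_cycles=3e9, data_bytes=8e6, vehicles=vehicles))
# slow_near
```

Here "fast_far" would finish in 9.5 s (8 s transfer + 1.5 s compute) against 2.8 s for "slow_near", reproducing the behavior the paragraph describes.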
As seen in Figure 5, the initial schedule length of VAO is slightly longer than that of HEFT when there are not enough vehicles around an RV, because some neighboring vehicles are excluded from the candidate SV set. In terms of reallocation overhead, however, VAO outperforms the others.
Figure 7 compares the task reallocation probability of the three methods, using the same simulations as in Figure 6. Only 15.2% of tasks were reallocated in VAO, whereas 60.6% and 86.2% of tasks had to be reallocated in HEFT and Max-Min, respectively. As a result, the schedule length of VAO including reallocated tasks was 14.4% shorter than that of HEFT, even though VAO's initial schedule length was 11.7% longer.
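Why a longer initial schedule can still win overall is simple arithmetic. The toy model below assumes each reallocated task stretches the schedule by a fixed fraction of the initial length; the penalty fraction is an assumption for illustration (the paper measures the actual reallocation cost in simulation), while the probabilities and the 11.7% initial gap are the reported values:

```python
def expected_length(initial, p_realloc, penalty_frac):
    """Toy model: expected schedule length when a fraction p_realloc
    of tasks is reallocated, each stretching the schedule by
    penalty_frac of the initial length. penalty_frac is assumed."""
    return initial * (1.0 + p_realloc * penalty_frac)

# Initial HEFT length normalized to 100; VAO starts 11.7% longer.
heft = expected_length(initial=100.0, p_realloc=0.606, penalty_frac=0.6)
vao = expected_length(initial=111.7, p_realloc=0.152, penalty_frac=0.6)
print(round(heft, 1), round(vao, 1))  # 136.4 121.9
```

Under this (assumed) penalty, VAO's far lower reallocation probability more than compensates for its longer initial schedule, consistent with the direction of the measured 14.4% gap.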
The performance of a CO scheme can differ according to the depth and width of the DAG. Figure 8a,b illustrate the initial schedule length with varying DAG depth and width; while one of the two was varied, the other was fixed at 4. In both graphs, the schedule length naturally increases with depth or width, since the total number of tasks increases.
Comparing a large depth with a large width, a large width produced shorter schedules because more tasks can be executed in parallel on more SVs. Figure 8b shows that Max-Min is also a good option when the width is large. However, the width of a DAG can grow only when tasks have few dependencies on each other; if tasks have strict precedence relationships, only the depth can increase. Figure 8a shows that the schedule length gap between Max-Min and the other methods widened as the DAG depth increased. When the depth was 8, the difference between HEFT and Max-Min was 14.8 s, 7.7 times larger than the difference at depth 2. This substantiates the view that DAG scheduling is efficient for applications with a large depth.
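Experiments like Figure 8 require synthetic DAGs of a chosen depth and width. The paper does not specify its generator, so the sketch below is one plausible construction: a layered DAG where edges only connect adjacent levels, each candidate edge is kept with probability `p_edge`, and every non-entry task is guaranteed at least one predecessor so the precedence structure stays connected:

```python
import random

def layered_dag(depth, width, p_edge=0.5, seed=0):
    """Generate a random layered DAG with `depth` levels of `width`
    tasks each. Task ids are level*width + position. Edges only go
    from one level to the next; each non-entry task gets at least
    one predecessor. A hypothetical generator, not the paper's."""
    rng = random.Random(seed)
    edges = []
    for lvl in range(depth - 1):
        for pos in range(width):
            child = (lvl + 1) * width + pos
            preds = [u for u in range(width) if rng.random() < p_edge]
            if not preds:  # guarantee a precedence constraint
                preds = [rng.randrange(width)]
            for u in preds:
                edges.append((lvl * width + u, child))
    return edges

edges = layered_dag(depth=4, width=4)
```

Widening such a DAG (larger `width`) loosens dependencies and favors Max-Min, while deepening it (larger `depth`) lengthens the critical path and favors DAG-aware scheduling, matching the trends in Figure 8.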
Figure 9 presents the data transfer time with varying depth and width. Figure 9a shows that the data transfer time of Max-Min was 18–20% longer than that of HEFT and VAO. Meanwhile, Figure 9b shows that the data transfer times of HEFT and Max-Min converged as the width increased, with a difference of just 5.8% at width 8. In both graphs, VAO outperformed the other two methods, since it selects a relatively small number of SVs, taking both their routes and data transfer time into account.

7. Conclusions

We proposed vehicular adaptive offloading (VAO) for dynamic vehicular environments, which uses DAG scheduling to optimize task execution. Unlike the well-known scheduling methods HEFT and Max-Min, VAO focuses on reducing task reallocation by taking the dynamic topology of VANETs into account. To that end, VAO does not offload tasks to neighbors whose routes diverge substantially from the RV's route. The simulation results showed that VAO decreases the probability of task reallocation by 45.4% compared with HEFT. Our simulations also revealed that more SVs than the width of the DAG are required to optimize CO, but that many more vehicles than the width do not help. When the number of searched vehicles is sufficient, the schedule length of VAO, including all reallocated tasks, is 14.4% shorter than that of HEFT.
Two directions remain for future work. First, during the simulations, we occasionally observed that the schedule length of a task set increased when CO was used, compared with executing all tasks in one vehicle. This is mainly due to unexpected task reallocation events; if the chance of task reallocation could be predicted, such inefficient CO could be avoided. We believe that AI methods such as LSTM (Long Short-Term Memory), which can analyze time-series data on the number and distribution of vehicles, may help predict task reallocation events. Second, we did not consider the situation in which multiple vehicles request CO at the same time, potentially causing conflicts at shared nearby vehicles. This might be solved by having a nearby vehicle inform each requesting vehicle of the computing power it can provide exclusively to that RV.
Table 3 summarizes the characteristics of the algorithms observed in this paper.

Author Contributions

Conceptualization, M.G. and S.A.; methodology, M.G.; software, M.G.; validation, M.G., Y.Y. and S.A.; formal analysis, Y.Y.; investigation, M.G.; resources, M.G.; data curation, M.G.; writing—original draft preparation, M.G.; writing—review and editing, Y.Y.; visualization, M.G.; supervision, S.A.; project administration, S.A.; funding acquisition, S.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1A2C1011184).

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Alam, M.G.R.; Hassan, M.M.; Uddin, M.Z.; Almogren, A.; Fortino, G. Autonomic Computation Offloading in Mobile Edge for IoT Applications. Future Gener. Comput. Syst. 2019, 90, 149–157.
  2. Patel, M.; Naughton, B.; Chan, C.; Sprecher, N.; Abeta, S.; Neal, A. Mobile-Edge Computing Introductory Technical White Paper. White Pap. Mob. Edge Comput. (MEC) Ind. Initiat. 2014, 29, 854–864.
  3. Bonomi, F.; Milito, R.; Zhu, J.; Addepalli, S. Fog Computing and Its Role in the Internet of Things. In Proceedings of the First Edition of the MCC Workshop on Mobile Cloud Computing, Helsinki, Finland, 17 August 2012.
  4. Fernández-Baca, D. Allocating Modules to Processors in a Distributed System. IEEE Trans. Softw. Eng. 1989, 15, 1427–1436.
  5. Garey, M.R.; Johnson, D.S. Computers and Intractability; Freeman: San Francisco, CA, USA, 1979.
  6. Sommer, C.; German, R.; Dressler, F. Bidirectionally Coupled Network and Road Traffic Simulation for Improved IVC Analysis. IEEE Trans. Mob. Comput. 2010, 10, 3–15.
  7. Topcuoglu, H.; Hariri, S.; Wu, M.-Y. Performance-Effective and Low-Complexity Task Scheduling for Heterogeneous Computing. IEEE Trans. Parallel Distrib. Syst. 2002, 13, 260–274.
  8. Etminani, K.; Naghibzadeh, M. A Min-Min Max-Min Selective Algorithm for Grid Task Scheduling. In Proceedings of the 3rd IEEE/IFIP International Conference in Central Asia on Internet, Tashkent, Uzbekistan, 26–28 September 2007.
  9. Llatser, I.; Michalke, T.; Dolgov, M.; Wildschütte, F.; Fuchs, H. Cooperative Automated Driving Use Cases for 5G V2X Communication. In Proceedings of the IEEE 2nd 5G World Forum (5GWF), Dresden, Germany, 30 September–2 October 2019; pp. 120–125.
  10. Chen, Q.; Ma, X.; Tang, S.; Guo, J.; Yang, Q.; Fu, S. F-Cooper: Feature Based Cooperative Perception for Autonomous Vehicle Edge Computing System Using 3D Point Clouds. In Proceedings of the 4th ACM/IEEE Symposium on Edge Computing, Arlington, VA, USA, 7–9 November 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 88–100.
  11. Aoki, S.; Higuchi, T.; Altintas, O. Cooperative Perception with Deep Reinforcement Learning for Connected Vehicles. In Proceedings of the IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 20–23 October 2020.
  12. Mnih, V.; Kavukcuoglu, K.; Silver, D.; Rusu, A.A.; Veness, J.; Bellemare, M.G.; Graves, A.; Riedmiller, M.; Fidjeland, A.K.; Ostrovski, G.; et al. Human-Level Control through Deep Reinforcement Learning. Nature 2015, 518, 529–533.
  13. Cuervo, E.; Balasubramanian, A.; Cho, D.; Wolman, A.; Saroiu, S.; Chandra, R.; Bahl, P. MAUI: Making Smartphones Last Longer with Code Offload. In Proceedings of the 8th International Conference on Mobile Systems, Applications, and Services, San Francisco, CA, USA, 15–18 June 2010; pp. 49–62.
  14. Kosta, S.; Aucinas, A.; Hui, P.; Mortier, R.; Zhang, X. ThinkAir: Dynamic Resource Allocation and Parallel Execution in the Cloud for Mobile Code Offloading. In Proceedings of the IEEE Infocom, Orlando, FL, USA, 25–30 March 2012; pp. 945–953.
  15. Chun, B.-G.; Ihm, S.; Maniatis, P.; Naik, M.; Patti, A. CloneCloud: Elastic Execution between Mobile Device and Cloud. In Proceedings of the Sixth Conference on Computer Systems, Salzburg, Austria, 10–13 April 2011; pp. 301–314.
  16. Feng, J.; Liu, Z.; Wu, C.; Ji, Y. AVE: Autonomous Vehicular Edge Computing Framework with ACO-Based Scheduling. IEEE Trans. Veh. Technol. 2017, 66, 10660–10675.
  17. Dorigo, M.; Birattari, M.; Stutzle, T. Ant Colony Optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39.
  18. McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; y Arcas, B.A. Communication-Efficient Learning of Deep Networks from Decentralized Data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, Fort Lauderdale, FL, USA, 20–22 April 2017; pp. 1273–1282.
  19. Nishio, T.; Yonetani, R. Client Selection for Federated Learning with Heterogeneous Resources in Mobile Edge. In Proceedings of the ICC 2019–2019 IEEE International Conference on Communications (ICC), Shanghai, China, 20–24 May 2019.
  20. Hinton, G.; Vinyals, O.; Dean, J. Distilling the Knowledge in a Neural Network. arXiv 2015, arXiv:1503.02531.
  21. Jeong, E.; Oh, S.; Kim, H.; Park, J.; Bennis, M.; Kim, S.-L. Communication-Efficient on-Device Machine Learning: Federated Distillation and Augmentation under Non-IID Private Data. arXiv 2018, arXiv:1811.11479.
  22. Zhang, Q.; Wang, Y.; Zhang, X.; Liu, L.; Wu, X.; Shi, W.; Zhong, H. OpenVDAP: An Open Vehicular Data Analytics Platform for CAVs. In Proceedings of the 2018 IEEE 38th International Conference on Distributed Computing Systems (ICDCS), Vienna, Austria, 2–5 July 2018; pp. 1310–1320.
  23. He, K.; Meng, X.; Pan, Z.; Yuan, L.; Zhou, P. A Novel Task-Duplication Based Clustering Algorithm for Heterogeneous Computing Environments. IEEE Trans. Parallel Distrib. Syst. 2018, 30, 2–14.
  24. Luo, J.; Li, X.; Yuan, M.; Yao, J.; Zeng, J. Learning to Optimize DAG Scheduling in Heterogeneous Environment. arXiv 2021, arXiv:2103.06980.
  25. Merkel, D. Docker: Lightweight Linux Containers for Consistent Development and Deployment. Linux J. 2014, 2014, 2.
  26. Kubernetes Documentation. Available online: https://kubernetes.io/docs/home/ (accessed on 27 January 2022).
  27. Sanders, J.; Kandrot, E. CUDA by Example: An Introduction to General-Purpose GPU Programming; Addison-Wesley Professional: Boston, MA, USA, 2010.
  28. Stone, J.E.; Gohara, D.; Shi, G. OpenCL: A Parallel Programming Standard for Heterogeneous Computing Systems. Comput. Sci. Eng. 2010, 12, 66.
Figure 1. An example of task scheduling: (a) DAG representation of the application to be offloaded; (b) The Gantt chart of task executions offloaded by the Max-Min; (c) The Gantt chart of task executions offloaded by the HEFT.
Figure 2. The operation of the VAO mechanism.
Figure 3. An example of DAG of depth 4, width 4.
Figure 4. The road environment of the Veins simulation. The black line is the road and the red triangles are vehicles.
Figure 5. Initial schedule length without reallocation for a depth 4 and width 4 DAG application.
Figure 6. Schedule length and data transfer time for a depth 4 and width 4 DAG application: (a) Schedule length against the number of searched vehicles; (b) Schedule length when the number of searched vehicles is sufficiently large; (c) The number of SVs against the number of searched vehicles; (d) Data transfer time against the number of searched vehicles.
Figure 7. Probability of task reallocation.
Figure 8. Initial schedule length according to the shape of the DAG: (a) Schedule length with varying depth of DAG; (b) Schedule length with varying width of DAG.
Figure 9. Data transfer time with varying depth and width of DAG: (a) Data transfer time against the depth of DAG; (b) Data transfer time against the width of DAG.
Table 1. Comparison of Related Works.
| Solutions | Task Granularity | Network Topology | Computation Provider |
| --- | --- | --- | --- |
| AVE [16] | Individual | static | edge |
| OpenVDAP [22] | Individual | static | edge/vehicular |
| HEFT [7] | DAG | static | processor |
| Max-Min [8] | Individual | static | processor |
| TDCA [23] | DAG | static | processor |
| Lachesis [24] | DAG | static | processor |
| VAO [this work] | DAG | dynamic | vehicular |
Table 2. The notations and meanings.
| Notation | Meaning |
| --- | --- |
| D | a directed acyclic graph |
| V | a set of searched vehicles |
| V_s | a set of service vehicles |
| v_r | a requesting vehicle (RV) |
| v_s | a service vehicle (SV) |
| v_min | an SV that can provide the minimum EFT for a task |
| T | a set of the tasks |
| t_i | a task |
| t_exit | an exit task |
| χ | a matrix of a task schedule |
Table 3. Characteristics of task scheduling algorithms.
| Algorithm | Characteristics |
| --- | --- |
| HEFT | DAG scheduling; good performance in a static environment |
| Max-Min | Considers only computation power; high data transfer overhead; good performance if tasks in an application are independent |
| VAO | DAG scheduling; lower reallocation probability; good performance in a dynamic environment |

Share and Cite

MDPI and ACS Style

Gong, M.; Yoo, Y.; Ahn, S. Adaptive Computation Offloading with Task Scheduling Minimizing Reallocation in VANETs. Electronics 2022, 11, 1106. https://doi.org/10.3390/electronics11071106
