Article

Leveraging Time-Critical Computation and AI Techniques for Task Offloading in Internet of Vehicles Network Applications

School of Computer Engineering, Jiangsu University of Technology, Changzhou 213001, China
*
Authors to whom correspondence should be addressed.
Electronics 2024, 13(16), 3334; https://doi.org/10.3390/electronics13163334
Submission received: 15 July 2024 / Revised: 16 August 2024 / Accepted: 19 August 2024 / Published: 22 August 2024
(This article belongs to the Special Issue AI in Information Processing and Real-Time Communication)

Abstract

Vehicular fog computing (VFC) is an innovative computing paradigm with an exceptional ability to improve vehicles’ capacity to manage computation-intensive applications with both low latency and low energy consumption. Moreover, more and more Artificial Intelligence (AI) technologies are being applied to task offloading in the Internet of Vehicles (IoV). Focusing on the problems of computing latency and energy consumption, in this paper we propose an AI-based Vehicle-to-Everything (V2X) task- and resource-offloading model for an IoV network, which ensures reliable low-latency communication and efficient task offloading in the IoV network by using a Software-Defined Vehicular-based Fog Computing (SDV-F) architecture. To fit time-critical data transmission task distribution, the proposed model reduces unnecessary task allocation at the fog computing layer by introducing an AI-based task-allocation algorithm in the IoV layer that implements the task allocation of each vehicle. By applying AI technologies such as reinforcement learning (RL), the Markov decision process (MDP), and deep learning (DL), the proposed model intelligently makes decisions that maximize resource utilization at the fog layer and minimize the average end-to-end delay of time-critical IoV applications. The experiments demonstrate that the proposed model efficiently distributes the fog layer tasks while minimizing the delay.

1. Introduction

Currently, with the development of machine learning, deep learning, computer vision, and 5G mobile communication technologies, Internet of Things (IoT) techniques [1,2] have made tremendous progress and become a closely related part of people’s lives. Consequently, intelligent transportation technology, represented by Intelligent Transportation Systems (ITSs) and autonomous driving, has developed rapidly due to the growing demand for smart cities and IoT technologies in modern society. As one of the most important applications of IoT technologies, the Internet of Vehicles (IoV) [3,4] has become an essential data transmission and resource scheduling framework in ITSs and has attracted the attention of many researchers. Although IoT and IoV technologies have become very active research fields and achieved tremendous development, they face challenges stemming from known limitations, such as restricted storage, applicability in real-time critical scenarios, load balancing, and energy consumption. Artificial Intelligence (AI) [5] technologies such as machine learning (ML), deep learning [6], and deep neural networks, which are popular and have shown significant influence, are applied more and more in IoT and IoV fields and dramatically improve the effectiveness of IoT devices [7,8].
Generally, the IoV is regarded as a data transmission platform that provides information exchange services between vehicles, or between a vehicle and other surrounding devices, through different communication media [9]. Through deep integration with ITSs, the IoV builds an intelligent network that provides essential functions for transportation systems, such as intelligent traffic management, dynamic information services, and intelligent vehicle control, among others [10]. The architecture of the IoV shown in Figure 1 is composed of three fundamental components: the inter-vehicular network (V2V), the intra-vehicular network (V2I), and the vehicular mobile Internet. Every vehicle in the IoV is connected with other vehicles and devices through the mobile Internet at all times. The IoV creates an interconnected network for all vehicles to enable the exchange of information about passengers, drivers, sensors, and electric actuators with the Internet by using advanced communication techniques, such as IEEE 802.11p, cellular data networks (4G/5G), directional medium access control (DMAC), vehicular cooperative media access control (VC-MAC), and others.
However, this early IoV architecture faces the challenge of real-time critical requirements. Specifically, the IoV may be subject to significant latency when storing or retrieving data; for example, when multiple vehicles access data simultaneously, the data storage and processing capabilities of the vehicular cloud layer become the bottleneck, resulting in significant latency. In this situation, network congestion often appears in IoV networks, which can cause many issues that affect the operation of the network, such as a reduction in QoS and long delays in data transmission [11]. From this point of view, IoV networks are time-critical systems [1,12].
The main challenge of time-critical systems is to ensure that tasks with real-time constraints meet their respective deadlines. This problem presents an astonishing number of challenges and research opportunities for modern computing systems, which are not specifically designed to support time criticality [12]. Significant latency is the main challenge of current IoV architectures. In a cloud computing-based IoV architecture, the data sources are often far away from the data processing and storage servers, which is the main cause of long latency and slow response times [13]. Therefore, these features limit the application of such frameworks to cases with less stringent requirements in terms of real-time or timely intervention, thereby limiting the scope of vehicle services that cloud-based IoT frameworks can provide [13,14,15]. Over the last decade, many researchers have presented various architectural configurations for IoV services. The main target of these paradigms is to reduce end-to-end delay by applying advanced technologies or proposing novel architectures.
Based on fog computing, ref. [16] proposed a new paradigm for the IoV called vehicular fog computing (VFC), in which the end-to-end latency was deeply investigated. Based on the early architecture shown in Figure 1, a special layer called fog computing was introduced to reduce the delays and improve QoS. The fog layer consisted of a large number of interconnected fog nodes which were data processing servers. Many IoV cloud-like services are provided by fog nodes using the FC technique. However, the study of VFC in the IoV is still in its early stage and there are several problems to address, such as congestion avoidance, guaranteed end-to-end delay, resources and tasks’ offloading, fault tolerance, security, and so on [17,18].
To reduce time delay and reach the time-critical requirement, some proposed architectures [19,20,21,22] have applied some effective methods by reducing energy consumption, end-to-end delay, and communication resources in IoV networks. However, there are several important issues such as dynamic computational costs for load balancing, minimizing IoV networks’ delay, and dynamic IoV topologies that are still at an early stage of research and require more in-depth studies [16].
Focusing on these issues, in this paper we propose a new IoV architecture by combining the advantages of an AI-based time-critical system, deep learning approaches, and edge (fog)/cloud-based IoT technologies. Benefiting from the advantages of AI and a fog computing-based vehicle network, the proposed architecture guarantees reliable and low-latency communication in a highly dynamic environment. As edge nodes, the processors in each vehicle need to process large volumes of data collected from sensors and execute many tasks. In this paper, we propose a task allocation and offloading algorithm based on the SDV-F framework that applies a deep learning technique to distribute tasks and computation resources efficiently and minimize the end-to-end delay.
The main contributions of this article are as follows:
  • We propose an AI-based task-offloading algorithm for an IoV network based on the SDV-F framework, which can help to minimize end-to-end delay in data transmission.
  • We propose an AI-based time-critical task-allocation approach for an IoV network, in which AI algorithms such as DL and RL are applied to implement task offloading and resource allocation.
  • We propose a deep network-based reinforcement learning framework for resource allocation and task offloading in an IoV network.
The rest of this paper is organized as follows. Section 2 introduces the background and challenges of this study. Section 3 describes the framework of the proposed model. The evaluation criteria and simulation results are presented in Section 4. Finally, Section 5 gives the conclusions.

2. Background and Challenges

Over the last decade, many researchers have presented various architectural configurations for IoV services. The main targets of these paradigms are to reduce end-to-end delay by applying advanced technologies or proposing novel architectures.
An early IoV service architecture was proposed in [23,24], the basic structure of which is shown in Figure 1. In [23,24], the researchers combined IoT-aware techniques with ML and cloud computing techniques for the IoV architecture. In these kinds of early architectures, vehicle devices accessed the Internet from anywhere and sent data to a cloud computing server, which provided data processing, storage, transmission, and other facilities. To ensure seamless connectivity, the communication layer needed to apply advanced wireless communication technologies such as GSM, WiFi, 3G/4G mobile networks, or others. Although various technological alternatives have been adopted to meet specific needs or application scenarios and improve the efficiency and reliability of cloud-based IoV infrastructure, the limitations of a cloud-based architecture are its privacy and latency issues. These are mainly related to the centralized cloud server location and the network infrastructure for communication and data transmission. As a result, when responding to speed, accuracy, and reliability requirements, these kinds of approaches cannot achieve a satisfactory time-critical solution.
As the mobile communication networks used in SDV-F become more intelligent and efficient, one effective way to overcome the shortcomings of time-critical systems is to apply advanced data transmission technologies or propose efficient IoV architectures or approaches. With the development and widespread application of 5G cellular mobile communication, Huang et al. [25] proposed a 5G-enabled SDV network (5G-SDVN) to provide communication services in the IoV. To improve the performance of data transmission in dynamic vehicular networking environments, [26] proposed a novel approach within the framework of an SDN-based medium access control (MAC) protocol. Further work was carried out in [27], in which the authors proposed the Mobile Cloud Hybrid (MCH) framework, which is often applied to decrease the power consumption of mobile terminals or robotic devices. At the same time, Chen et al. [19] proposed another cloud computing framework for mobile systems. By means of these approaches, each mobile user’s independent task can be processed locally at the Computing Access Point or on a remote cloud server.
Nowadays, applying fog computing is a great improvement for decreasing the time delay in data processing and transmission. Edge and fog computing-based IoV application frameworks offer better response times and privacy preservation [16] by moving data processing and storage to the fog or edge layer, reducing distances and the resulting latency. In building edge/fog computing infrastructures, popular AI techniques including ML, DL, and reinforcement learning (RL) [28] are widely applied, which makes intelligent data processing possible at the edge of the network. This technique of edge/fog computing greatly reduces the application latency and improves the privacy offered to each vehicle. In this regard, Multi-access Edge Computing (MEC) complements cloud computing and enables users to reduce latency and save energy by offloading computation towards the edge servers [29,30].
Recently, much work has been conducted to merge MEC technology into vehicular networks in academic and industrial fields. Specifically, vehicular fog computing (VFC) is MEC technology applied to vehicular networks. VFC is extremely useful for carrying out computation-intensive and time-constrained tasks in vehicular networks [31]. By offloading complex computational tasks to VFC servers, the computing delay and energy consumption of vehicular applications can be drastically reduced while mitigating the chance of network congestion. However, sometimes it is not feasible to offload tasks to edge servers, as doing so uses extra energy and consumes more time [32]. The challenge is therefore to make the offloading decision while taking overall computation and communication costs into account. On the other hand, although vehicles are capable of executing more computational tasks, they face certain constraints, including inadequate computing capacity and high energy consumption [33].
Undoubtedly, recent research offers good models and approaches to minimize latency and build time-critical systems that manage task-offloading issues in the SDV-F architecture. However, most of these works are limited to multi-agent systems and horizontal fog-layer resource pooling, which have been demonstrated to substantially decrease the data response latency [7]. Focusing on time-critical computation in SDN-based IoV service architectures, in this article we present an AI-based hierarchical framework for SDV-F. We propose an AI-based time-critical system to manage task allocation and offloading to meet real-time requirements. We also present an AI (ML, DL)-based fog computing network supporting fog-node-to-fog-node, vehicle-to-fog-node, and fog-layer-to-cloud-layer task and traffic offloading to attain intelligent resource allocation and minimize the end-to-end latency.

3. Proposed System Architecture

Based on the analysis reported in the literature review and inspired by the solutions proposed in the previous papers, in this section, we present the architecture for an IoV service system that incorporates advanced AI-based time-critical technologies, fog computing, and deep learning approaches. The proposed architecture contains three main layers: an Intelligent Data Acquisition layer or IoV Layer, a fog computing layer, and a data visualization and AI-based SDN controller layer, as illustrated in Figure 2.

3.1. Layers of the System Architecture

The Intelligent Data Acquisition layer, also called the IoV layer, includes a large number of IoV devices. Each vehicle contains a complex computer system which processes a large volume of data from many sensors and implements and allocates multiple tasks. The vehicles are the edge nodes of the edge computing network and communicate with a Base Station (BS) using 5G mobile communications.
The fog computing layer is a fog computing network which consists of many fog computing servers, also called fog nodes, providing network communication, data storage, data processing, and computing services to IoV devices. In real-world applications, vehicles in motion constantly generate a large volume of data representing their real-time status. The main function of the fog nodes is to process and upload this large volume of data to the control servers. Moreover, the fog nodes have to meet real-time-critical and low-latency requirements, which makes it very difficult for them to complete these tasks successfully. Therefore, it is most important for IoV systems to be able to perform distributed computing and implement a load-balancing technique to control the load and reduce latency. More specifically, besides the fog computing-based network architecture, an efficient AI-based algorithm for task and resource offloading is also essential. Another aim of this paper is to propose such a task and resource offloading approach by applying AI algorithms.
At the highest level of the architecture is a cloud computing layer which provides AI-based SDN control and data visualization functions. In designing the AI-based SDN controller, we adopted a two-layer structure, i.e., the data processing unit was separated from the control unit. This structure helped the system evolve and facilitated network management. The intelligent unit implemented big-data analysis and processing and made decisions. It consisted of three intelligent modules: an intelligent agent module, a big-data analysis module, and a deep learning module. By taking into account the available computing resources and combining the data analysis results provided by the big-data analysis module, the deep learning module offered the best model for each fog node to execute. By using these intelligent techniques, the AI unit could make intelligent decisions adaptively.

3.2. Intelligent Data Acquisition Layer (IoV Layer)

The Intelligent Data Acquisition layer includes a large number of IoV devices. Each node of the IoV layer is a complex computer system which is also divided into three layers: advanced sensors and sub-system layer; a computational, storage, and processing unit; and an Artificial Intelligence module, as shown in Figure 3.

3.2.1. AI-Based Task-Allocation Algorithm in IoV Nodes

Advanced sensing technologies are used to collect data related to the real-time status of vehicles in motion. Multi-sensor techniques allow the system to collect important correlated data that can be used by the AI module to make meaningful decisions, ensuring the IoV frameworks are more robust and trustworthy. The computation and data processing unit is a key part of this system. To achieve a real-time response and forward processed data to upper layers, the AI-based algorithm needs to run on a central processing unit (CPU). By applying AI techniques such as deep learning, machine learning, or reinforcement learning, the AI-based algorithms can make intelligent decisions based on data analysis. In this subsection, we propose a reinforcement learning (RL) [34] and deep neural network (DNN)-based algorithm to perform task offloading and allocation for IoV networks.
The CPU manages the many tasks of multiple sub-systems and makes decisions to provide distributed and efficient resource management. The corresponding algorithm is based on the framework of a Markov decision process (MDP) [35], RL, and an embedded deep neural network, enabling the servers to make effective decisions adaptively.

3.2.2. MDP-Based Reinforcement Learning

In this on-board system, we assume the central CPU is the main agent, and the other sub-systems are general agents. The RL algorithm requires three components: states, actions, and rewards.
According to the principle of RL shown in Figure 4, the CPUs of the servers are regarded as agents (the central CPU being the primary agent) that output actions to the environment based on the perceived states. The environment represents the task allocation or offloading system, which evaluates the current actions and outputs the reward function and states. Based on this evaluation, a value function records the value difference between the current and previous state–action pairs. Consequently, the long-term rewards represent the total rewards that the CPUs (i.e., the agents) can expect to accumulate over time for each environmental state. Following this process, the RL model provides a long-term value for future states based on their corresponding rewards. With the previous reward and the value function, the system finally evaluates the current action to obtain the best reward and value of the next state.
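To make the interaction of Figure 4 concrete, the following minimal Python sketch loops an agent over a toy offloading environment. The environment class, its queue-based state, and the random placeholder policy are hypothetical illustrations, not the actual on-board system.

```python
import random

class ToyOffloadEnv:
    """Hypothetical stand-in for the task allocation/offloading environment:
    the state is the local queue length, the action is 'local' or 'offload'."""
    def __init__(self):
        self.queue = 0

    def step(self, action):
        self.queue += random.randint(0, 2)           # new tasks arrive
        if action == "offload":
            self.queue = max(0, self.queue - 3)      # offloading drains the queue faster
            reward = 1.0 - 0.1 * self.queue
        else:
            self.queue = max(0, self.queue - 1)
            reward = 0.5 - 0.1 * self.queue
        return self.queue, reward

env = ToyOffloadEnv()
state = env.queue
for t in range(5):                                   # agent-environment loop of Figure 4
    action = random.choice(["local", "offload"])     # placeholder policy
    next_state, reward = env.step(action)
    print(t, state, action, round(reward, 2))
    state = next_state
```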
The theory of Markov decision processes provides the mathematical foundation for the proposed system. We represent a Markov decision process with a tuple $(s, a, P, r)$, where $s \in S$, $a \in A$, and $r \in R$. The symbols $S$, $A$, and $R$ represent the sets of states, actions, and rewards, respectively. Additionally, the symbol $P$ is the transition probability, which represents the probability of the cyclic process in which the current state $s$ produces the next state $s'$ under the condition of the current action; the value of $P$ is between 0 and 1. Accordingly, $P(s' \mid s, a)$ is the probability of a new state of the environment, generated from the environment represented by state $s$ and the chosen action $a$. $R(s, a, s')$ is the reward function of the new state reached from the current state, which is generated by the environment after the action. Future rewards are weighted by the discount factor $\gamma$, where $0 < \gamma < 1$.
  • Action Selection Policy
The policy is a necessary component in RL and defines the behavior of agents. A policy π is a distribution over action a given state s:
$\pi(a \mid s) = P(A_t = a \mid S_t = s)$  (1)
In RL, an agent attempts to find the optimal policy $\pi^*$ by maximizing the sum of rewards, called the utility. The utility function can be represented as follows:
$U_h([s_0, s_1, \ldots, s_n]) = \sum_{i=0}^{n} \gamma^i R(s_i)$  (2)
where $\gamma$ is the discount factor, and $R(\cdot)$ is the reward function.
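As a minimal numeric illustration of Equation (2), the snippet below accumulates discounted rewards over a short state trajectory; the reward values are made up for the example.

```python
def discounted_utility(rewards, gamma=0.9):
    # U_h([s_0, ..., s_n]) = sum_i gamma**i * R(s_i), i.e., Equation (2)
    return sum((gamma ** i) * r for i, r in enumerate(rewards))

# Hypothetical per-state rewards for a three-state trajectory
print(discounted_utility([1.0, 0.5, 0.2]))  # 1.0 + 0.9*0.5 + 0.81*0.2 = 1.612
```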
  • State–Action Quality Function
In an MDP, dynamic programming (DP) is applied to solve for $P$ and $R$. DP is an optimization technique which seeks the best choices by using an optimal value function. To perform RL using the MDP model, three functions need to be optimized, namely the value of state $s$, $U^*(s)$, and the state–action value $Q^*(s,a)$. $Q(s_t, a_t)$ represents the value of the action that the agent takes at the current state $s_t$. According to the principle of RL shown in Figure 4, the agent must choose a new action for the current state $s_t$ based on the rewards generated by the environment. Specifically, before selecting a new action, the agent computes $Q(s_t, a_t)$ for each possible action and then decides the new action of the current state $s_t$ according to the optimal policy. In the optimization, $\pi^*(s)$ is the optimal policy, defined as the optimal action selection from state $s$ to a new state $s'$. In this paper, we applied the Bellman equations as the optimization target [16]; in other words, the optimal action has to satisfy the Bellman Equation (3):
$U_h^*(s) = \max_{a \in A(s)} Q^*(s_t, a_t)$  (3)
where
$Q^*(s_t, a_t) = \sum_{t=0}^{T} P(s' \mid s_t, a_t)\left[R(s', a_t) + \gamma^t U_h^*(s', a_t)\right]$  (4)
and
$U_h^*(s_t, a_t) = \max_{a \in A(s)} \sum_{t=0}^{T} P(s' \mid s_t, a_t)\left[R(s' \mid s_t, a_t) + \gamma^t U^*(s', a_t)\right]$  (5)
where $(s_t, a_t)$ represents the state–action pair at time $t$, and $T$ is the time limit of the agents’ optimization problem in the proposed model.
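As an illustration of how the Bellman optimality conditions of Equations (3)–(5) can be solved offline, the sketch below runs standard value iteration on a tiny hypothetical MDP. The transition and reward matrices are made-up numbers, and the generic solver is for illustration only, not the controller deployed in the vehicles.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-6):
    """Solve the Bellman optimality conditions by value iteration.
    P[a][s, s2] is the transition probability P(s2 | s, a);
    R[a][s, s2] is the reward for reaching s2 from s under action a."""
    n_actions, n_states = len(P), P[0].shape[0]
    U = np.zeros(n_states)
    while True:
        # Q(s, a) = sum_s2 P(s2 | s, a) * (R(s, a, s2) + gamma * U(s2))
        Q = np.array([(P[a] * (R[a] + gamma * U)).sum(axis=1) for a in range(n_actions)])
        U_new = Q.max(axis=0)                 # U*(s) = max_a Q*(s, a)
        if np.abs(U_new - U).max() < tol:
            return U_new, Q.argmax(axis=0)    # optimal values and greedy policy
        U = U_new

# Toy 2-state, 2-action MDP; all numbers are hypothetical
P = [np.array([[0.8, 0.2], [0.1, 0.9]]), np.array([[0.5, 0.5], [0.6, 0.4]])]
R = [np.array([[1.0, 0.0], [0.0, 2.0]]), np.array([[0.5, 0.5], [1.0, 0.0]])]
print(value_iteration(P, R))
```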

3.2.3. Deep Q-Function Learning for Task Allocation

On the basis of RL, we applied a deep Q-network (DQN) to predict the value of $(s_t, a_t)$. In this model, a deep neural network (DNN) was used to learn the agent’s Q-function $Q(s,a)$:
$Q_\lambda^*(s_t, a_t) = \mathbb{E}\left[R(s_t, a_t) + \gamma \max_{a' \in A_{s'}} Q_\lambda^*(s', a')\right]$  (6)
where $(s', a')$ is the state–action pair at the next time slot, and $A_{s'}$ is the set of actions available in the next state $s'$. Figure 5 illustrates the basic framework of the DQN, in which a convolutional neural network (CNN) is used as the DNN. The symbol $\lambda$ represents the parameters of the CNN.
For training the model, the mean-squared error (MSE) was applied as the loss function of the proposed model with the CNN parameters λ , which is defined as follows:
$Lo(\lambda) = \mathbb{E}\left[\left(q - Q_\lambda(s, a)\right)^2\right]$  (7)
where
$q = R(s, a) + \gamma \max_{a' \in A_{s'}} Q^*(s', a')$  (8)
is the maximal sum of the future reward for the agents’ task allocation process.
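To show how the loss of Equation (7) is evaluated in practice, here is a small NumPy sketch that builds the TD targets of Equation (8) and the MSE loss for a batch of transitions. The array shapes and toy numbers are assumptions for illustration, and the gradient update of the network parameters $\lambda$ is omitted.

```python
import numpy as np

def dqn_targets_and_loss(q_values, q_next_values, actions, rewards, gamma=0.9):
    """Build the TD targets of Equation (8) and the MSE loss of Equation (7).
    q_values:      (batch, n_actions) Q_lambda(s, a) for the current states
    q_next_values: (batch, n_actions) Q(s', a') for the next states
    actions:       (batch,) indices of the actions actually taken
    rewards:       (batch,) observed rewards R(s, a)"""
    targets = rewards + gamma * q_next_values.max(axis=1)      # Equation (8)
    predicted = q_values[np.arange(len(actions)), actions]
    loss = np.mean((targets - predicted) ** 2)                 # Equation (7)
    return targets, loss

# Toy batch of 3 transitions with 2 candidate actions (hypothetical numbers)
q = np.array([[0.2, 0.5], [0.1, 0.4], [0.3, 0.3]])
q_next = np.array([[0.6, 0.1], [0.2, 0.2], [0.0, 0.5]])
print(dqn_targets_and_loss(q, q_next, actions=np.array([1, 0, 0]),
                           rewards=np.array([1.0, 0.5, 0.0])))
```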

3.3. Fog Computing Layer

The fog computing layer consists of a large number of fog nodes, which continuously process and upload the real-time data generated by vehicles in motion to the SDN controller. To control the load and reduce latency, an efficient algorithm is essential for realizing distributed computing and load balancing.

AI-Based Task-Offloading Algorithm in Fog Node Layer

The proposed task-offloading algorithm is also based on the RL framework introduced in the previous subsections [16], but we improved it by proposing a novel DNN architecture, reward function, and Q-function.
  • Task-Offloading Model for Fog Nodes
The aim of the approach was to optimize the offloading operations of each agent to achieve maximum utility under the condition of minimizing time latency and optimizing the allocation of IoV tasks. Therefore, we applied the reward function defined in [16]:
$R(s, a) = U(s, a) - \left(P_l(s, a) + D_L(s, a)\right)$  (9)
where $P_l(s, a)$ represents the traffic load probability function of fog node $f_j$, and $D_L(s, a)$ denotes the end-to-end delay function. Unlike Equation (2), the utility function is defined as:
$U(s, a) = r_u \log(1 + t_o)$  (10)
where $t_o$ is the number of tasks offloaded to fog node $f_j$, and $r_u$ is the utility reward.
We applied the reward function of Equation (9) to obtain the appropriate reward value of the fog node selected for the task computation and the next state $s'$ in the proposed algorithm. Since the tasks to be assigned to the next fog node arrive randomly and the task sizes after each node’s processing are also random, Poisson random variables were used in the proposed model [36].
Following Equation (9), the probability function $P_l(s, a)$ of a fog node $f_n$ is computed as follows:
$P_l(s, a) = W_t \, \dfrac{t_c P_c + t_o P_o}{t_c + t_o}$  (11)
where the probability $P_i$ ($i = c, o$) is modeled by a Poisson process as follows:
$P_i = \dfrac{\max\left(0,\; r_{ar} - \left(\max(Q_i) - Q_i\right)\right)}{r_{ar}}$  (12)
where $W_t$ represents the weight of the traffic load, $t_c$ is the number of currently processing tasks, and $r_{ar}$ denotes the task arrival rate at fog node $f_i$. The symbol $Q_i$ represents the next estimated queue state of fog node $f_i$ for a given state $s$ and action $a$:
$Q_i(s, a) = \min\left(\max(0, Q_i) + k_i,\; \max(Q_i)\right)$  (13)
The end-to-end delay $D_L(s, a)$ of a task is very important in the proposed model. We computed it using the following equation:
$D_L(s, a) = W_d \, \dfrac{t_{de} + t_{dq} + t_{dt}}{t_p + t_o}$  (14)
where $W_d$ is the delay weight, $t_{de}$ is the operation delay, $t_{dq}$ is the queuing delay, and $t_{dt}$ is the data transmission delay. In the proposed model, $t_{dq}$ represents the waiting time of the current node $f_i$ in the queue, and $t_{de}$ depends on the running speed of the processor in $f_i$.
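The sketch below evaluates the reward of Equations (9)–(14) for one candidate offloading action, assuming the reconstruction above in which the utility is penalized by the load and delay terms; all numeric inputs are hypothetical.

```python
import math

def traffic_load_probability(W_t, t_c, t_o, P_c, P_o):
    # P_l(s, a) = W_t * (t_c * P_c + t_o * P_o) / (t_c + t_o), Equation (11)
    return W_t * (t_c * P_c + t_o * P_o) / (t_c + t_o)

def end_to_end_delay(W_d, t_de, t_dq, t_dt, t_p, t_o):
    # D_L(s, a) = W_d * (t_de + t_dq + t_dt) / (t_p + t_o), Equation (14)
    return W_d * (t_de + t_dq + t_dt) / (t_p + t_o)

def fog_node_reward(r_u, t_o, p_l, d_l):
    # U(s, a) = r_u * log(1 + t_o), Equation (10);
    # R(s, a) = U(s, a) - (P_l(s, a) + D_L(s, a)), Equation (9)
    return r_u * math.log(1 + t_o) - (p_l + d_l)

# Hypothetical values for one fog node and one candidate offloading action
p_l = traffic_load_probability(W_t=0.5, t_c=4, t_o=6, P_c=0.3, P_o=0.2)
d_l = end_to_end_delay(W_d=0.5, t_de=0.05, t_dq=0.02, t_dt=0.08, t_p=4, t_o=6)
print(fog_node_reward(r_u=1.0, t_o=6, p_l=p_l, d_l=d_l))
```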
  • Optimizing Task-Offloading Algorithm
Due to the dynamic nature of the IoV network, it is hard for the controller to predict $R$ and $P$. Assuming that the reward and probability distributions are stationary, we also applied a DNN-based RL technique; the base framework is shown in Figure 5. We used the U-Network (U-Net) [6] shown in Figure 6 as the DNN to learn the Q-function $Q(s, a)$.
The architecture of our U-Net was an autoencoder with an attention module, which allowed the learning process to focus on important agents quickly and maximize the foreseen reward function. The network consisted of three downsampling layers, each of which included two convolutional layers and a max-pooling layer with 2 × 2 pooling. Accordingly, there were three upsampling layers on the other side of the network, each consisting of two convolutional layers with 2 × 2 upsampling. Before each upsampling layer, the attention module was applied to fuse the features and compute a single scalar attention value. To reduce the delay and meet the real-time requirement, we used a 1 × 1 convolutional kernel in the attention module, as shown in Figure 7.
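A minimal PyTorch sketch of such a three-level attention U-Net is given below. The channel widths, the use of transposed convolutions for the 2 × 2 upsampling, the way the scalar gate is applied to the skip connections, and the single-channel output head are our assumptions; only the layer counts, pooling sizes, and the 1 × 1 attention kernel come from the description above.

```python
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: one encoder/decoder stage
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class ScalarAttention(nn.Module):
    # 1x1 convolution reduced to a single scalar gate, applied to each skip
    # connection before the corresponding upsampling stage
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(channels, 1, 1),
                                  nn.AdaptiveAvgPool2d(1), nn.Sigmoid())
    def forward(self, x):
        return x * self.gate(x)

class AttentionUNetQ(nn.Module):
    # Three-level U-Net autoencoder used as the Q-function approximator
    def __init__(self, in_ch=1, base=16):
        super().__init__()
        chs = [base, base * 2, base * 4]
        self.down = nn.ModuleList([double_conv(in_ch, chs[0]),
                                   double_conv(chs[0], chs[1]),
                                   double_conv(chs[1], chs[2])])
        self.pool = nn.MaxPool2d(2)                        # 2x2 max pooling
        self.bottleneck = double_conv(chs[2], chs[2] * 2)
        self.att = nn.ModuleList([ScalarAttention(c) for c in reversed(chs)])
        self.up = nn.ModuleList([                          # 2x2 upsampling
            nn.ConvTranspose2d(chs[2] * 2, chs[2], 2, stride=2),
            nn.ConvTranspose2d(chs[2], chs[1], 2, stride=2),
            nn.ConvTranspose2d(chs[1], chs[0], 2, stride=2)])
        self.up_conv = nn.ModuleList([double_conv(chs[2] * 2, chs[2]),
                                      double_conv(chs[1] * 2, chs[1]),
                                      double_conv(chs[0] * 2, chs[0])])
        self.head = nn.Conv2d(chs[0], 1, 1)                # assumed Q-value output head

    def forward(self, x):
        skips = []
        for block in self.down:
            x = block(x)
            skips.append(x)
            x = self.pool(x)
        x = self.bottleneck(x)
        for i, (up, conv) in enumerate(zip(self.up, self.up_conv)):
            x = up(x)
            skip = self.att[i](skips[-(i + 1)])            # gated skip connection
            x = conv(torch.cat([x, skip], dim=1))
        return self.head(x)

# Input height/width must be divisible by 8 for the three pooling stages
print(AttentionUNetQ()(torch.zeros(1, 1, 32, 32)).shape)
```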
In the optimization, the aim of the model was to select the optimal policy of $(s, a)$ in the system. Specifically, based on the current policy of $(s, a)$, the system foresaw the new state $s'$ and the reward by using Q-learning. Q-learning is a continuous optimization in which the Q-function is updated at each iteration to make the best decision for the new task. The updating equation of the Q-function is as follows:
$Q^*(s, a) = (1 - \delta)\, Q^*(s, a) + \delta \left[R(s, a) + \gamma \max_{a' \in A_{s'}} Q^*(s', a')\right]$  (15)
where $\delta$ is the learning rate, a factor between 0 and 1 ($0 < \delta < 1$), and $\gamma$ is the discount factor. During the optimization, the reward function $R$ is modified based on the new learning rate.
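A tabular sketch of the update in Equation (15) is shown below; the dictionary-backed Q-table and the example states and actions are placeholders for illustration only.

```python
def q_learning_update(Q, s, a, r, s_next, actions_next, delta=0.1, gamma=0.9):
    """One update of Equation (15):
    Q(s,a) <- (1 - delta) * Q(s,a) + delta * (R(s,a) + gamma * max_a' Q(s',a'))."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions_next)
    Q[(s, a)] = (1 - delta) * Q.get((s, a), 0.0) + delta * (r + gamma * best_next)
    return Q

# Toy usage: states are coarse queue levels, actions are candidate fog nodes
Q = {}
Q = q_learning_update(Q, s="queue_low", a="fog_1", r=0.8,
                      s_next="queue_mid", actions_next=["fog_1", "fog_2"])
print(Q)
```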

4. Experiments

4.1. Experimental Setup

To evaluate the optimization performance, we simulated a macro IoV environment based on the SDV-F framework covering a road of about 30 km. We set 60 RSUs and a BS as fog nodes in the simulated network and used five cars as moving vehicles with speeds from 30 to 80 km/h. We set the transmission parameters as follows: the CPU requirement for each content item was $R_{cpu} = 1$ cycle/bit, the transmission power was 100 mW, the maximal delay was 0.2 s, and the bandwidth was 100 MB/s.
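For reference, these parameters can be collected into a single configuration object; the values come from the text above, while the field names and the dictionary layout are only an illustrative way to organize them.

```python
# Simulation parameters of Section 4.1 (illustrative layout, values from the text)
SIM_CONFIG = {
    "road_length_km": 30,
    "num_rsu_fog_nodes": 60,
    "num_base_stations": 1,
    "num_vehicles": 5,
    "vehicle_speed_kmh": (30, 80),   # min/max speed of the moving vehicles
    "cpu_cycles_per_bit": 1,         # R_cpu
    "transmission_power_mw": 100,
    "max_delay_s": 0.2,
    "bandwidth_mb_per_s": 100,
}
```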
We first discuss the simulation results for the task allocation of IoV nodes, i.e., the task processing system of the CPU in each vehicle. We conducted this experiment in a laboratory using an embedded experimental platform. To evaluate the performance of the proposed network, we considered three baseline models: ARTNet [16], eDors [37], and the Energy-Constrained Signaling Reduction (ECSR) model [38]. ARTNet is an AI-based resource-allocation and task-offloading model for IoV networks, providing reliable and short-delay communications. eDors is an energy-based offloading scheme that combines resource optimization and dynamic offloading to reduce the energy consumption and completion time of collaborative applications. The ECSR model imposes an average restriction on the RSU in each time slot to satisfy the long-term energy constraint.

4.2. Results of Task-Allocation Model

As introduced in Section 3.2, the proposed task-allocation model used a DNN as the Q-learning architecture. To investigate the performance of different neural networks, we designed three network architectures: a large-scale autoencoder, a small-scale autoencoder, and a general CNN. The large-scale autoencoder (L-En) consisted of four hidden layers, each of which included more nodes than those of the small-scale autoencoder; its architecture was 256 × 512 × 256 × 128. The small-scale autoencoder (S-En) consisted of only two hidden layers with a 64 × 128 architecture. The CNN consisted of two convolutional layers and a fully connected layer; in the convolutional layers, we applied 3 × 3 filter kernels and the max-pooling technique. In the simulations, we compared the energy consumption and computation delay against the number of tasks.
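A hedged PyTorch sketch of the three Q-network variants follows; the state dimension, action count, activation functions, and the CNN channel widths are assumptions, since only the hidden-layer sizes and the kernel/pooling choices are given above.

```python
import torch.nn as nn

# Input/output dimensions are placeholders; only the hidden-layer sizes and the
# kernel/pooling choices below are taken from the text.
STATE_DIM, N_ACTIONS = 32, 8

large_autoencoder = nn.Sequential(      # L-En: 256 x 512 x 256 x 128
    nn.Linear(STATE_DIM, 256), nn.ReLU(),
    nn.Linear(256, 512), nn.ReLU(),
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, N_ACTIONS))

small_autoencoder = nn.Sequential(      # S-En: 64 x 128
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, N_ACTIONS))

cnn_baseline = nn.Sequential(           # CNN: two 3x3 conv layers + FC head
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.LazyLinear(N_ACTIONS))
```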
Figure 8 shows a comparison of the energy consumption and computation delay of the different DNN architectures during the Q-learning process. From the figure, we see that for a given task, the energy consumption increases as the architecture becomes more complex, whereas the computation delay changes in the opposite direction. The reason is that a more complex DNN consists of more nodes and requires more computation, but it provides better learning performance, which leads to a smaller time delay. We also find a further interesting result in Figure 8: as the number of tasks increases, the task-allocation model decreases the energy consumption and computation delay. This is because the AI-based algorithm makes reasonable optimization arrangements as the task count grows.

4.3. Results of Proposed Task-Offloading Model

Similar to [16], we evaluated the performance of the proposed model under different time slots, different numbers of IoV nodes, and vehicles moving at different speeds.

4.3.1. Performance under Different Time Slots

The comparison results are shown in Figure 9. By comprehensively analyzing these three sub-figures, we can conclude that the performance of our proposed model is much better than that of the other baseline models.
Figure 9a shows the energy shortfall results for all models. Overall, our model performs better in this respect. At first, ECSR has a smaller energy shortfall than the other models, but once the time slot exceeds 15, our proposed model has a smaller energy shortfall than ECSR. The performance of ARTNet is good, since its energy shortfall drops to zero when the time slot increases above 55. However, the energy shortfall of our model is smaller than that of ARTNet at all times and decreases to zero after time slot 45. In conclusion, its performance is much better than that of ARTNet, because our proposed model applies an AI technique throughout its architecture.
From Figure 9b, we find that the average latency of our model is much lower than that of the other baseline models. In a communication system, the average latency is the most important evaluation metric, and the results in this figure demonstrate the outstanding performance of our proposed model. However, in terms of energy consumption, our proposed model does not have an advantage: when the time slot increases above 30, it requires the most energy. The reason is that our proposed model uses a DNN, which makes the processor require more computation and thus a larger energy consumption. In conclusion, although the proposed model consumed more energy, it succeeded in achieving a lower latency and a smaller energy shortfall, which stemmed from its intelligent distribution of tasks in the offloading algorithm at the fog layer.

4.3.2. Performance under Different Numbers of IoV Nodes

To evaluate the carrying capacity of the proposed model, we implemented it and the compared models with varying numbers of IoV nodes. In these simulations, we designed a simulation program in Python which simulates many IoV nodes by sending task requests to the IoV network. We evaluated the models on the following metrics: latency, energy consumption, energy shortfall, and overload probability, and report how they vary with the number of IoV nodes. The comparison results are shown in Figure 10.
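The text above mentions a Python load generator; the sketch below is one hypothetical way to produce Poisson task arrivals for a sweep over the number of IoV nodes, with the arrival rate and slot count chosen arbitrarily.

```python
import math
import random

def poisson_sample(rng, lam):
    # Knuth's method for drawing one Poisson(lam) sample
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= threshold:
            return k - 1

def simulate_task_requests(num_iov_nodes, arrival_rate=3.0, time_slots=100, seed=0):
    """Each IoV node issues a Poisson-distributed number of task requests per slot;
    returns the total number of requests the network receives in every slot."""
    rng = random.Random(seed)
    return [sum(poisson_sample(rng, arrival_rate) for _ in range(num_iov_nodes))
            for _ in range(time_slots)]

# Sweep the number of IoV nodes from 50 to 600, as in Figure 10
for n in range(50, 601, 50):
    load = simulate_task_requests(n)
    print(n, sum(load) / len(load))
```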
From the results shown in Figure 10, we see that our proposed model achieves better performance in terms of average latency, average energy shortfall, and average overload probability when the number of IoV nodes varies from 50 to 600, but it obtains similar results for energy consumption. The average energy consumption of all compared models increases linearly with the increasing number of IoV nodes. From the results, the average energy consumption of the proposed model is larger than that of ECSR and ARTNet. The reason is related to the complexity of the deep network in the proposed model: when its architecture includes more layers and nodes, it requires more computation and energy. From the viewpoint of the time-critical requirement, the average latency is the most important target in a time-critical system. As shown in Figure 10b, the proposed model achieves the best performance in average latency, i.e., the lowest latency. At the same time, the proposed model obtains a smaller average shortfall than the other compared models when the number of IoV nodes increases beyond 250. The shortfall is a performance measure that evaluates the stability of system operation: the smaller the shortfall, the more stable the system. Therefore, Figure 10a indicates that the proposed model achieves outstanding stability when running the system. The average overload probability is shown in Figure 10d. When the number of IoV nodes is above 500, the overload probability of eDors and ECSR exceeds 50%, and that of ARTNet exceeds 40%. Under the same conditions, the average overload probability of the proposed model is much lower than that of the other models; in other words, the proposed model distributes the tasks efficiently.
In conclusion, the proposed model effectively decreases the latency without excessively increasing the average energy consumption, and the AI-based task allocation in the IoV nodes increases the performance of the proposed IoV network.

4.3.3. Computation Efficiency

To evaluate the impact of communication performance on computation efficiency with a varying number of vehicles, we carried out efficiency experiments and compared the results. Figure 11 shows the computation efficiency versus the number of vehicles at different speeds. To show the difference, we set the vehicle speed to 30 km/h, 60 km/h, and 80 km/h, and for convenience we set an equal task size for the different vehicles. Comparing the behaviors of the different schemes, we observed that the proposed model obtained the highest computation efficiency at all the given speeds.
From the results, we can also observe that the computation efficiency of all schemes decreases rapidly as the vehicles’ speed increases, because the communication latency grows at higher speeds. The aim of this paper was mainly to propose an AI-based algorithm to optimize task offloading and allocation; however, the performance of the software is limited by many practical conditions, such as communication latency and the structural optimization of vehicle networking. In the future, this work can be extended to more complex scenarios in which precise information about the channel and vehicle states is unknown, and advanced communication technologies can be combined to achieve better performance in IoV networks.

5. Conclusions

In this paper, we proposed a novel framework for an IoV network by considering the applications and requirements of a time-critical system. Based on the requirements of time-critical applications, we first studied the problems of reliable low-latency communication and task offloading in dynamic IoV environments. Focusing on these problems, we proposed an AI-based task- and resource-offloading model for IoV networks, which ensures reliable low-latency communication and efficient task offloading by using an SDV-F architecture. By applying AI technologies such as RL, the Markov decision process, and deep learning, the proposed model intelligently distributes the fog layer’s traffic load according to the computational power and load of each fog node. By introducing an AI-based task-allocation algorithm in the IoV layer, the proposed model effectively reduces unnecessary task allocation at the fog computing layer, thereby improving the efficiency of the distribution of tasks and resources and reducing the time delay. On the other hand, this work consisted of simulations with a small number of vehicles in an experimental environment. In the future, we will extend our work to more complex scenarios in which precise information about the channel and vehicle states is unknown. To this end, we will examine how to integrate more machine learning techniques with SDV-F communications to improve long-term delay performance and further strengthen the task-offloading process.

Author Contributions

Methodology, P.L.; Investigation, H.F.; Resources, W.C.; Data curation, H.Z.; Writing—original draft, P.L.; Supervision, P.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Jiangsu Province Double Innovation Doctoral Funding Project [grant no. JSSCBS20221131].

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jan, M.A.; Zakarya, M.; Khan, M.; Mastorakis, S.; Menon, V.G.; Balasubramanian, V.; Rehman, A.U. An AI-enabled lightweight data fusion and load optimization approach for Internet of Things. Future Gener. Comput. Syst. 2021, 122, 40–51. [Google Scholar] [CrossRef] [PubMed]
  2. Liang, P.; Yang, L.; Xiong, Z.; Zhang, X.; Liu, G. Multilevel Intrusion Detection Based on Transformer and Wavelet Transform for IoT Data Security. IEEE Internet Things J. 2024, 11, 25613–25624. [Google Scholar] [CrossRef]
  3. Wang, X.; Ning, Z.; Hu, X.; Wang, L.; Guo, L.; Hu, B.; Wu, X. Future communications and energy management in the Internet of vehicles: Toward intelligent energy-harvesting. IEEE Wirel. Commun. 2019, 26, 87–93. [Google Scholar] [CrossRef]
  4. Qureshi, K.N.; Din, S.; Jeon, G.; Piccialli, F. Internet of vehicles: Key technologies, network model, solutions and challenges with future aspects. IEEE Trans. Intell. Transp. Syst. 2020, 22, 1777–1786. [Google Scholar] [CrossRef]
  5. Yadav, S.P.; Mahato, D.P.; Linh, N.T.D. Distributed Artificial Intelligence: A Modern Approach; CRC Press: Boca Raton, FL, USA, 2020. [Google Scholar]
  6. Liang, P.; Liu, G.; Xiong, Z.; Fan, H.; Zhu, H.; Zhang, X. A facial geometry based detection model for face manipulation using CNN-LSTM architecture. Inf. Sci. 2023, 633, 370–383. [Google Scholar] [CrossRef]
  7. Ibrar, M.; Wang, L.; Muntean, G.; Chen, J.; Shah, N.; Akbar, A. IHSF: An intelligent solution for improved performance of reliable and time-sensitive flows in hybrid SDN-based FC IoT systems. IEEE Internet Things J. 2020, 8, 3130–3142. [Google Scholar] [CrossRef]
  8. Liang, P.; Liu, G.; Xiong, Z.; Fan, H.; Zhu, H.; Zhang, X. A fault detection model for edge computing security using imbalanced classification. J. Syst. Archit. 2022, 133, 102779. [Google Scholar] [CrossRef]
  9. Contreras-Castillo, J.; Zeadally, S.; Guerrero-Ibañez, J.A. Internet of vehicles: Architecture, protocols, and security. IEEE Internet Things J. 2017, 5, 3701–3709. [Google Scholar] [CrossRef]
  10. Guerrero-Ibanez, J.A.; Zeadally, S.; Contreras-Castillo, J. Integration challenges of intelligent transportation systems with connected vehicle, cloud computing, and internet of things technologies. IEEE Wirel. Commun. 2015, 22, 122–128. [Google Scholar] [CrossRef]
  11. Mukherjee, M.; Matam, R.; Shu, L.; Maglaras, L.; Ferrag, M.A.; Choudhury, N.; Kumar, V. Security and privacy in fog computing: Challenges. IEEE Access 2017, 5, 19293–19304. [Google Scholar] [CrossRef]
  12. Mitra, T.; Teich, J.; Thiele, L. Time-critical systems design: A survey. IEEE Des. Test 2018, 35, 8–26. [Google Scholar] [CrossRef]
  13. Shumba, A.; Montanaro, T.; Sergi, I.; Fachechi, L.; De Vittorio, M.; Patrono, L. Leveraging IOT-aware technologies and AI techniques for real-time critical healthcare applications. Sensors 2022, 22, 7675. [Google Scholar] [CrossRef] [PubMed]
  14. Merenda, M.; Porcaro, C.; Iero, D. Edge machine learning for ai-enabled iot devices: A review. Sensors 2020, 20, 2533. [Google Scholar] [CrossRef]
  15. Erhan, L.; Ndubuaku, M.; Di Mauro, M.; Song, W.; Chen, M.; Fortino, G.; Bagdasar, O.; Liotta, A. Smart anomaly detection in sensor systems: A multi-perspective review. Inf. Fusion 2021, 67, 64–79. [Google Scholar] [CrossRef]
  16. Ibrar, M.; Akbar, A.; Jan, S.R.U.; Jan, M.A.; Wang, L.; Song, H.; Shah, N. Artnet: Ai-based resource allocation and task offloading in a reconfigurable internet of vehicular networks. IEEE Trans. Netw. Sci. Eng. 2020, 9, 67–77. [Google Scholar] [CrossRef]
  17. Kadhim, A.J.; Seno, S.A.H. Maximizing the utilization of fog computing in internet of vehicle using SDN. IEEE Commun. Lett. 2018, 23, 140–143. [Google Scholar] [CrossRef]
  18. Xiong, Z.; Li, X.; Zhang, X.; Zhu, S.; Xu, F.; Zhao, X.; Wu, Y.; Zeng, M. A service pricing-based two-stage incentive algorithm for socially aware networks. J. Signal Process. Syst. 2022, 94, 1227–1242. [Google Scholar]
  19. Chen, M.; Liang, B.; Dong, M. Joint offloading and resource allocation for computation and communication in mobile cloud with computing access point. In Proceedings of the IEEE INFOCOM 2017-IEEE Conference on Computer Communications, Atlanta, GA, USA, 1–4 May 2017; pp. 1–9. [Google Scholar]
  20. Whaiduzzaman, M.; Naveed, A.; Gani, A. MobiCoRE: Mobile Device Based Cloudlet Resource Enhancement for Optimal Task Response. IEEE Trans. Serv. Comput. 2018, 11, 144–154. [Google Scholar] [CrossRef]
  21. Shuja, J.; Gani, A.; Ko, K.; So, K.; Mustafa, S.; Madani, S.A.; Khan, M.K. SIMDOM: A framework for SIMD instruction translation and offloading in heterogeneous mobile architectures. Trans. Emerg. Telecommun. Technol. 2018, 29, e3174. [Google Scholar] [CrossRef]
  22. Zeng, Y.; Pan, M.; Just, H.A.; Lyu, L.; Qiu, M.; Jia, R. Narcissus: A practical clean-label backdoor attack with limited information. arXiv 2022, arXiv:2204.05255. [Google Scholar]
  23. Miche, M.; Bohnert, T.M. The internet of vehicles or the second generation of telematic services. Ercim News 2009, 77, 43–45. [Google Scholar]
  24. Bao, J.; Chen, D.; Wen, F.; Li, H.; Hua, G. Towards open-set identity preserving face synthesis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2018), Salt Lake City, UT, USA, 18–23 June 2018; pp. 6713–6722. [Google Scholar]
  25. Huang, X.; Yu, R.; Kang, J.; He, Y.; Zhang, Y. Exploring mobile edge computing for 5G-enabled software defined vehicular networks. IEEE Wirel. Commun. 2017, 24, 55–63. [Google Scholar] [CrossRef]
  26. Dai, P.; Liu, K.; Wu, X.; Yu, Z.; Xing, H.; Lee, V.C.S. Cooperative temporal data dissemination in SDN-based heterogeneous vehicular networks. IEEE Internet Things J. 2018, 6, 72–83. [Google Scholar] [CrossRef]
  27. Akbar, A.; Lewis, P.R. Towards the optimization of power and bandwidth consumption in mobile-cloud hybrid applications. In Proceedings of the 2017 Second International Conference on Fog and Mobile Edge Computing (FMEC), Valencia, Spain, 8–11 May 2017; pp. 213–218. [Google Scholar]
  28. Ling, C.; Jiang, J.; Wang, J.; Thai, M.T.; Xue, R.; Song, J.; Qiu, M.; Zhao, L. Deep graph representation learning and optimization for influence maximization. In Proceedings of the 40th International Conference on Machine Learning 2023, PMLR, Honolulu, HI, USA, 23–29 July 2023; pp. 21350–21361. [Google Scholar]
  29. Chen, S.; Li, Q.; Zhou, M.; Abusorrah, A. Recent advances in collaborative scheduling of computing tasks in an edge computing paradigm. Sensors 2021, 21, 779. [Google Scholar] [CrossRef]
  30. Deng, T.; Chen, Y.; Chen, G.; Yang, M.; Du, L. Task offloading based on edge collaboration in MEC-enabled IoV networks. J. Commun. Netw. 2023, 25, 197–207. [Google Scholar] [CrossRef]
  31. Zhang, X.; Zhang, J.; Liu, Z.; Cui, Q.; Tao, X.; Wang, S. MDP-based task offloading for vehicular edge computing under certain and uncertain transition probabilities. IEEE Trans. Veh. Technol. 2020, 69, 3296–3309. [Google Scholar] [CrossRef]
  32. Silva, L.; Magaia, N.; Sousa, B.; Kobusińska, A.; Casimiro, A.; Mavromoustakis, C.X.; Mastorakis, G.; De Albuquerque, V.H.C. Computing paradigms in emerging vehicular environments: A review. IEEE/CAA J. Autom. Sin. 2021, 8, 491–511. [Google Scholar] [CrossRef]
  33. Raza, S.; Wang, S.; Ahmed, M.; Anwar, M.R.; Mirza, M.A.; Khan, W.U. Task offloading and resource allocation for IoV using 5G NR-V2X communication. IEEE Internet Things J. 2021, 9, 10397–10410. [Google Scholar] [CrossRef]
  34. Arulkumaran, K.; Deisenroth, M.P.; Brundage, M.; Bharath, A.A. Deep reinforcement learning: A brief survey. IEEE Signal Process. Mag. 2017, 34, 26–38. [Google Scholar] [CrossRef]
  35. Puterman, M.L. Markov decision processes. In Handbooks in Operations Research and Management Science; Elsevier: Amsterdam, The Netherlands, 1990; Volume 2, pp. 331–434. [Google Scholar]
  36. Kingman, J.F.C. Poisson Processes; Clarendon Press: Oxford, UK, 1992; Volume 3. [Google Scholar]
  37. Guo, S.; Xiao, B.; Yang, Y.; Yang, Y. Energy-efficient dynamic offloading and resource scheduling in mobile cloud computing. In Proceedings of the 35th Annual IEEE International Conference on Computer Communications(INFOCOM 2016), San Francisco, CA, USA, 10–14 April 2016; pp. 1–9. [Google Scholar]
  38. Liao, Q.; Aziz, D. Modeling of mobility-aware RRC state transition for energy-constrained signaling reduction. In Proceedings of the 2016 IEEE Global Communications Conference (GLOBECOM), Washington, DC, USA, 4–8 December 2016; pp. 1–7. [Google Scholar]
Figure 1. The IoV’s layered architectures in [9].
Figure 2. Base architecture of the proposed system.
Figure 3. Illustration of the Intelligent Data Acquisition layer.
Figure 4. Framework of reinforcement learning.
Figure 5. Basic framework of the DQN.
Figure 6. Architecture of the U-Net used in the DQN.
Figure 7. Attention module of the U-Net.
Figure 8. Comparison of energy consumption and computation delay for different DNN architectures. (a) Energy consumption. (b) Computation delay.
Figure 9. Comparison of the performance of our proposed model and other baseline models in different time slots. (a) Energy shortfall. (b) Average latency. (c) Energy consumption.
Figure 10. Comparison of the performance of the proposed model and other baseline models with different number of IoVs. (a) Energy shortfall. (b) Average latency. (c) Energy consumption. (d) Average overload probability.
Figure 11. Comparison of the computation efficiency with different speeds when varying the number of vehicles. (a) Speed of 30 km/h. (b) Speed of 60 km/h. (c) Speed of 80 km/h.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
