1. Introduction
In recent years, the rapid development of Internet of Things (IoT) technology, artificial intelligence, and big data has made people’s requirements for travel services more complex and diverse [1,2]. According to statistics, the number of people using all types of vehicles is expected to reach 2 billion by 2035 [3]. In response to this trend, technologies such as intelligent mobility, autonomous driving, and the Internet of Vehicles (IoV) have emerged [4,5]. Autonomous vehicle technology allows multiple intelligent applications to run, enabling vehicles to be driven more safely and efficiently [6]. Advanced communication and information technologies are integrated into the IoV, which helps solve various traffic and driving problems and thus improves service quality, including passenger safety. Vehicular communication, in-vehicle communication, and vehicular mobile internet are the three main communication components of the IoV [7]. Latency in collecting, processing, and analyzing data in real time is intolerable for the IoV. Cloud computing, fog computing, and Mobile Edge Computing (MEC) are intelligent computing platforms for big data analytics that can speed up data processing and effectively reduce latency [8,9,10,11,12,13].
Moreover, some vehicles lack sufficient storage and computing capacity due to the limitations of their terminals [14]. Cloud computing is one way to process the huge amount of data collected through the IoV efficiently [15]. A user in the IoV offloads task requirements to the cloud computing center, which passes the processed results back to the user side. However, the distance between the cloud computing center and the users is long, and multiple users releasing service requests to the cloud computing center at the same time can cause a large delay, making it impossible for users to receive the results promptly. Moreover, vehicles are mostly on the move, which places even higher demands on latency. Once the response latency exceeds the minimum requirement of a service request, it is likely to cause safety problems and increase the risk of road accidents [16]. As a result, the bandwidth and latency of cloud computing are not suitable for processing real-time tasks in vehicles. Mobile edge computing is more promising than cloud computing, as it solves the problem of insufficient computing power at vehicle terminals and allows real-time data processing. Deploying MEC servers at Road Side Units (RSUs) brings computing resources closer to the connected-car user, thus reducing latency to meet the changing needs of vehicle users [17,18].
Recently, researchers have started using emerging techniques such as artificial intelligence, machine learning, and reinforcement learning to solve computational offloading and resource optimization problems [19,20]. A great deal of research has shown that deep reinforcement learning-based approaches are widely used in many research areas and are more stable and efficient in solving dynamic computational offloading and resource optimization problems.
In the IoV mobile edge computing environment, the network environment, the needs of mobile vehicle users, and system computing resources are all constantly changing. In a resource-constrained environment, the distributed nature of mobile edge computing can provide users with low-latency, low-energy services more stably and reduce the system’s total cost. Service offloading in current IoV mobile edge computing therefore remains a significant challenge, and most existing research studies computing task offloading based only on the combination of edge and end. For this reason, this paper proposes a computing offloading method for the Internet of Vehicles based on Edge-Cloud collaboration. The contributions of this paper are as follows:
(1) In order to solve the problem of limited MEC resources, a task offloading overhead model based on Edge-Cloud collaboration is designed for multiple mobile vehicle users and MEC servers. The model has a three-layer architecture: the cloud service layer, the edge service layer, and the terminal layer. The method deployed on the MEC server is used to make real-time offloading decisions, and the service offloading method is partially optimized. The cloud center is responsible for coordinating the resources of the entire link.
(2) Within the system model, we formulate reducing the total cost of processing tasks as an optimization problem, transforming the task offloading and resource optimization problem into an optimization problem based on deep reinforcement learning. Drawing on the ideas of deep reinforcement learning, and combining the experience replay of DQN with the characteristics of the target network, an Edge-Cloud collaborative dynamic computing offloading method based on a deep deterministic policy network (ECDDPG) is proposed.
(3) The simulation results show that the ECDDPG method can effectively reduce the cost of executing system tasks. Even when the number of users reaches 35, it saves 7.9–46.8% of the total cost compared with the baseline methods.
The remainder of this paper is organized as follows. Section 2 summarizes the related work. Section 3 introduces the system model, communication model, and related computing models. Section 4 introduces the MEC offloading method in the Internet of Vehicles environment in detail. Section 5 presents the simulation experiment design and the analysis of the results. Section 6 concludes with a comprehensive summary of this paper.
2. Related Work
Nowadays, smart transportation is constantly evolving, and the scope of the Internet of Things is becoming ever more widespread, which continues to drive the growth of users’ demand for computing [21]. With the advent of the MEC technique, the issues of efficiency and latency in data processing have again received increasing attention, and the problem of computational offloading in MEC has been widely studied.
In order to reduce the energy consumption of MEC systems when executing tasks, a heterogeneous two-layer computing offloading framework was proposed in [22], which formulates a joint offloading and multi-user association problem for multi-user MEC systems and effectively reduces the energy consumption of system processing tasks. Reference [23] proposes a new adaptive offloading algorithm, considering that a large number of mobile device resources are underutilized, as well as the spatiotemporal dynamics of devices, the uncertainty of service request volume, and the changes in the communication environment within the MEC system. Reference [24] builds a multi-user MEC system model under channel interference for continuous task execution and data-partition-oriented applications, and on this basis proposes and solves the corresponding energy consumption minimization problem.
Recently, some scholars have studied the MEC computation offloading problem with the goal of reducing system execution delay. The literature [25] considers the problems of offloading decisions, collaborative relay selection, and resource allocation among multiple users, and proposes a joint iterative algorithm based on Lagrangian dual analysis and monotonic optimization to minimize the execution delay of all tasks. Reference [26] considers the device-to-device MEC scenario, in which the computing tasks generated by a device can be offloaded to the edge server or to nearby devices. In order to effectively reduce system delay, taking into account the energy consumption required to perform tasks, partial offloading, and resource allocation constraints, a new scheme based on joint partial offloading and resource allocation is proposed.
The above research works consider only the energy consumption or the execution delay of system tasks. Reference [27] studies a secure mobile-edge offloading framework for Unmanned Aerial Vehicles (UAVs) in a MEC network, describes it as a multi-objective optimization problem, and proposes a multi-objective optimization strategy based on the DQN algorithm, which can better reduce both delay and energy consumption. Reference [28] studies a distributed machine learning method for a multi-user MEC network in a cognitive eavesdropping environment, proposes three optimization criteria, and uses a federated learning method to solve the combinatorial optimization problems. Reference [29] proposes an adaptive task offloading and resource allocation method in the MEC environment, using deep reinforcement learning to select appropriate task computing nodes for mobile users, which can optimize the average response delay of tasks and the total energy consumption of the system.
In the IoV environment, MEC can meet the low-latency and diverse requirements of mobile vehicle users, and in recent years the application of MEC in the IoV has received extensive attention and research. Reference [30] designs a model framework combining MEC and IoV, in which all vehicle users and MEC servers can act as offloading nodes, and proposes a task offloading method with task classification and mobile offloading nodes. In the literature [31], a layered architecture is constructed in the IoV environment to minimize the system’s total time delay, a hybrid nonlinear programming optimization problem is established, and an online multi-decision optimization algorithm based on Lyapunov optimization is proposed. Reference [32] solves the service request problem in in-vehicle networks by jointly optimizing task offloading decisions and resource allocation, proposing a cloud-edge coordinated computational offloading scheme, designing a distributed optimization method based on this scheme, and finally obtaining the optimal solution.
In addition, regarding the challenges of computing offloading and resource optimization in in-vehicle edge computing, some researchers use deep reinforcement learning to solve the MEC computational offloading problem. The literature [33] models task offloading and proposes a mobility-oriented computing offloading retrieval protocol for in-vehicle edge multiple access to improve service quality. The literature [34] proposes a deep reinforcement learning-based offloading method in a multi-cell vehicular network scenario, which is used to optimize communication and computational resources and to reduce the energy consumption and latency of the system in performing computational tasks. Reference [35] considers a system model with multiple mobile users and multiple MEC servers in vehicular networking, combines deep reinforcement learning methods to improve the traditional Q-learning approach, and demonstrates through simulation experiments that the approach can reduce energy consumption under different wireless bandwidths.
The above research has made progress in reducing energy consumption and system execution delay in the MEC environment of the IoV. However, most current work considers a single objective, and the problem of limited MEC resources is exposed in scenarios with multiple vehicle users: a single MEC server cannot meet the task requests of multiple vehicle users simultaneously. Therefore, providing reliable service for multiple vehicle users still faces significant challenges in the IoV mobile edge computing environment.
To address the above issues, in an IoV environment in which vehicle users move at high speed, this paper designs a three-layer task offloading overhead model based on Edge-Cloud collaboration with multiple vehicle users and multiple MEC servers. Using the powerful computing capability of cloud services to coordinate computation across the full link can effectively alleviate the limited computing resources of MEC servers. Taking the differences between devices into account, computing devices at different levels undertake task calculations with different computing power requirements. Combined with the ideas of deep reinforcement learning, an Edge-Cloud collaborative dynamic computing offloading method based on a deep deterministic policy network (ECDDPG) is proposed. The method can make quick offloading decisions for task requests and effectively meet users’ real-time requirements for task processing. The simulation results show that this method achieves the expected goals well.
3. System Model
In the IoV environment, this section establishes a collaborative Edge-Cloud task offloading overhead model based on multiple mobile vehicle users and multiple edge servers, with a system consisting of a cloud computing center, multiple roadside units equipped with edge servers, and individual mobile vehicle users. On this basis, the communication model of the system is first established by mathematical modeling to ensure the reliability of data transmission within the system. Then, the computational offloading model is introduced in terms of local computing, MEC server computing, and cloud server computing. The task offloading problem for vehicle users in IoV mobile edge computing is abstracted as a complex optimization problem. The task offloading overhead model for Edge-Cloud collaboration is shown in Figure 1.
3.1. Edge-Cloud Collaborative Mobile Edge Computing System Model
We consider a system model with multiple mobile users and multiple MEC servers, where the locations of the cloud computing center and the MEC computing nodes are fixed and each mobile vehicle user moves arbitrarily. Each mobile vehicle user has access to each MEC computing node, and the users are mutually independent. The system model is divided into three layers: the cloud service layer, the edge service layer, and the terminal layer. The cloud service layer makes use of the powerful computing and storage capabilities of the cloud computing center, coordinates the computational power and intelligent resources on the entire link, exploits the advantages of different devices, and sends transmission instructions to the edge servers when necessary, comprehensively providing services for end-vehicle users. The edge service layer consists mainly of MEC servers, which may be base stations, wireless access points, or lightweight servers, together with roadside units responsible for collecting information from vehicle users. There are multiple MEC nodes within the system, and the most appropriate node is selected to provide computing services to a vehicle user based on the request location. The terminal layer mainly consists of mobile vehicle users generating task requests or performing a limited number of task calculations. Considering the uncertainty of the wireless channel and the competition for server resources, minimizing the total system cost also requires solving the system communication problem and rationally allocating server computing resources.
There is one cloud center server in the system. The set of MEC servers can be represented as $\mathcal{M}=\{1,2,\dots,M\}$, each MEC server $m$ has computing power $f_m$, and the mobile vehicle users connected to the MEC servers can be represented as $\mathcal{N}=\{1,2,\dots,N\}$. The system time is defined as $\mathcal{T}=\{1,2,\dots,T\}$. Define the task generated by mobile vehicle user $i$ as $R_i=(c_i, d_i, t_i^{\max})$, where $c_i$ indicates the number of CPU cycles required to complete the calculation task, $d_i$ indicates the amount of data generated by the vehicle user, and $t_i^{\max}$ indicates the maximum time delay allowed for processing the task.
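As a concrete illustration, the task tuple defined above can be held in a small data structure; the field names below are illustrative stand-ins for the paper’s notation (required CPU cycles, generated data size, and maximum tolerated delay).

```python
from dataclasses import dataclass

@dataclass
class Task:
    """Task generated by a mobile vehicle user (illustrative field names)."""
    cycles: float      # CPU cycles required to complete the calculation task
    data_bits: float   # amount of data generated by the vehicle user, in bits
    max_delay: float   # maximum delay allowed for processing the task, in seconds

# Example: a task needing 10^9 cycles on 4 Mbit of data within 0.5 s
task = Task(cycles=1e9, data_bits=4e6, max_delay=0.5)
```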
3.2. Communication Model
In the IoV, each vehicle user generates multiple independent tasks, which are divided into multiple sub-tasks with data dependencies. Based on the actual needs of the users, the tasks can either be computed locally, offloaded to the MEC server at the edge service layer for computation, or offloaded to a cloud server to be processed using a cloud center. Therefore, the costs under these three different computation methods are to be considered.
In this paper, the communication between mobile vehicle user $i$ and MEC computing node $m$ takes place over the wireless network. In order to better realize bidirectional data transmission between users and edge nodes, users and MEC computing nodes adopt orthogonal multiple access with frequency-division multiplexing, and the bandwidth of each sub-channel is $B$. In a dynamic time-varying ideal channel state, the communication between each user and the MEC node is not disturbed. Then, the maximum data transmission rate between mobile vehicle user $i$ and the corresponding MEC node $m$ can be expressed as:

$$r_{i,m} = n_i B \log_2\left(1 + \frac{p_i h_{i,m}}{\sigma^2}\right) \tag{1}$$

where $n_i$ indicates the number of wireless sub-channels allocated by the system to vehicle user $i$, $\sigma^2$ represents the background noise power of the Internet of Vehicles environment, $p_i$ represents the transmit signal power of vehicle user $i$, and $h_{i,m}$ represents the channel gain between vehicle user $i$ and MEC node $m$.
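Under the standard Shannon-capacity form this communication model describes, the uplink rate can be sketched as follows; all parameter names are assumptions for illustration, not the paper’s symbols.

```python
import math

def uplink_rate(num_subchannels: int, subchannel_bw_hz: float,
                tx_power_w: float, channel_gain: float,
                noise_power_w: float) -> float:
    """Maximum data rate (bit/s) between a vehicle user and its MEC node,
    assuming FDM sub-channels and an interference-free channel."""
    snr = tx_power_w * channel_gain / noise_power_w
    return num_subchannels * subchannel_bw_hz * math.log2(1.0 + snr)

# Two 1 MHz sub-channels at an SNR of 3 give 2 * 1e6 * log2(4) = 4 Mbit/s
rate = uplink_rate(2, 1e6, 3.0, 1.0, 1.0)
```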
3.3. Computational Offloading Model
For the computing-intensive task request of each vehicle user in the Internet of Vehicles, the task can be computed locally, offloaded to a MEC server, or offloaded to the cloud computing center. Therefore, the vehicle user’s choice of an appropriate offloading strategy plays a crucial role in the system. The purpose of the offloading strategy is to face the dynamic offloading of multiple mobile vehicle users, reduce the delay and energy consumption of computing tasks, improve the computing speed, use resources reasonably, and better meet the needs of users. For the offloading decision parameters, we use 0–1 variables, denoted as $x_i \in \{0,1\}$ and $y_i \in \{0,1\}$. When $x_i = 0$, the mobile vehicle user selects its own CPU to process the generated data. When $x_i = 1$, the task generated by the mobile vehicle user sends an offloading request to the edge server, which makes the appropriate offloading decision. If $y_i = 0$, the task data is offloaded to the MEC server for calculation; if $y_i = 1$, the task data is offloaded to the cloud server for computational processing. This is represented as follows:

$$x_i, y_i \in \{0,1\}, \quad i \in \mathcal{N} \tag{2}$$
The tasks generated by mobile vehicle users can be processed locally or offloaded to edge servers or cloud servers for processing. Therefore, we discuss the latency, energy consumption, and total cost arising from local computing, edge server computing, and cloud server computing.
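The two-stage 0–1 decision described above can be sketched as a simple dispatch; the argument names here are hypothetical.

```python
def execution_target(offload: int, to_cloud: int) -> str:
    """Map the 0-1 offloading decision pair to an execution location:
    offload = 0 keeps the task on the vehicle's own CPU; otherwise the
    second variable selects between the MEC server and the cloud server."""
    if offload == 0:
        return "local"
    return "cloud" if to_cloud == 1 else "mec"

assert execution_target(0, 0) == "local"
assert execution_target(1, 0) == "mec"
assert execution_target(1, 1) == "cloud"
```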
3.3.1. Local Computing
The task data generated by the mobile vehicle user is calculated locally, and the calculation process is related only to the CPU computing capability of the vehicle user. In our scenario, for tasks performed by local computation, let $f_i^{l}$ denote the computing power of the $i$-th end vehicle user.
In heterogeneous IoV, there is no transmission delay for local computation. Therefore, the delay caused by mobile vehicle user $i$ performing the task locally can be expressed as:

$$T_i^{l} = \frac{c_i}{f_i^{l}} \tag{3}$$
According to the literature [22], let the energy consumption per CPU cycle be $\kappa (f_i^{l})^{2}$; then the energy consumption generated by mobile vehicle user $i$ processing computing tasks at the terminal layer is expressed as follows:

$$E_i^{l} = \kappa (f_i^{l})^{2} c_i \tag{4}$$

where $\kappa$ is the effective capacitance coefficient, which depends on the chip architecture of the terminal device in the system. According to Formulas (3) and (4), we can obtain the total cost required to execute the task data locally:

$$C_i^{l} = \lambda_t T_i^{l} + \lambda_e E_i^{l} \tag{5}$$

The coefficients $\lambda_t$ and $\lambda_e$ in Formula (5) represent the time weight and the energy consumption weight of the user performing the calculation task locally, and their sum is 1.
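The local delay, energy, and weighted cost just described can be combined in one small function; this is a sketch under the assumed energy-per-cycle model, with hypothetical parameter names and values.

```python
def local_cost(cycles: float, f_local: float, kappa: float,
               w_time: float, w_energy: float) -> float:
    """Weighted cost of executing a task on the vehicle's own CPU.
    Delay is cycles / f_local; energy follows the assumed model
    kappa * f_local**2 per cycle; w_time + w_energy = 1."""
    delay = cycles / f_local
    energy = kappa * f_local ** 2 * cycles
    return w_time * delay + w_energy * energy

# 1e9 cycles on a 1 GHz CPU with an assumed kappa = 1e-27: delay = 1 s, energy = 1 J
cost = local_cost(1e9, 1e9, 1e-27, 0.5, 0.5)
```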
3.3.2. MEC Server Computing
For tasks executed on the edge server, additional transmission latency and energy overhead are incurred because vehicle users must wirelessly transmit data to the edge. The total delay generated by mobile vehicle users performing computing tasks on the edge server comprises four parts: transmission delay, calculation delay, waiting delay, and result return delay. According to the relevant literature, the delay caused by returning the result to the terminal layer is slight, and it is ignored in this paper. Let $f_m^{e}$ denote the computing power of the $m$-th edge server. Therefore, in the heterogeneous IoV, the delay generated by executing the tasks of different vehicle users on the edge server can be expressed as:

$$T_{i,m}^{exe} = \frac{c_i}{f_m^{e}} \tag{6}$$
According to Formula (1), the time delay consumed by transmitting the task data generated by mobile vehicle user $i$ from the terminal layer to the edge server through the wireless channel is expressed as:

$$T_{i,m}^{tra} = \frac{d_i}{r_{i,m}} \tag{7}$$

Therefore, if the data task request of a mobile vehicle user is offloaded to the edge server for computation, the resulting total delay can be expressed as:

$$T_i^{e} = T_{i,m}^{tra} + T_{i,m}^{wait} + T_{i,m}^{exe} \tag{8}$$

Among them, $T_{i,m}^{wait}$ is the queuing delay of the user’s task request. The queuing delay of any user is the time interval from when the user sends a request to the system to when the task begins to be executed:

$$T_{i,m}^{wait} = t_i^{exe} - t_i^{req} \tag{9}$$

where $t_i^{req}$ denotes the start time of the task request and $t_i^{exe}$ denotes the time when the task request starts to be executed.
When the data task request of mobile vehicle user $i$ is offloaded to the edge server for calculation, the energy consumption is determined by the data transmission in the channel, the waiting time of the system, and the energy generated by the calculation at the edge server. The total energy consumption is expressed as:

$$E_i^{e} = p_i T_{i,m}^{tra} + p_i^{w} T_{i,m}^{wait} + \kappa_m (f_m^{e})^{2} c_i \tag{10}$$

where $p_i^{w}$ is the waiting power of the vehicle user and $\kappa_m$ depends on the chip architecture of the edge server. According to the above formulas, the total cost of vehicle users performing computing tasks on the edge server is:

$$C_i^{e} = \lambda_t T_i^{e} + \lambda_e E_i^{e} \tag{11}$$

The coefficients $\lambda_t$ and $\lambda_e$ in Formula (11) represent the time weight and the energy consumption weight of performing the calculation task on the MEC server, and their sum is 1.
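The edge-side cost can be sketched the same way; the waiting-power term from the text is omitted here for brevity, and all parameter names and the energy model are assumptions.

```python
def edge_cost(cycles: float, data_bits: float, rate_bps: float,
              f_edge: float, wait_s: float, tx_power_w: float,
              kappa_edge: float, w_time: float, w_energy: float) -> float:
    """Weighted cost of offloading a task to the MEC server:
    transmission + waiting + execution delay, plus transmission and
    edge-computation energy (waiting energy omitted for brevity)."""
    t_tx = data_bits / rate_bps          # upload delay over the wireless channel
    t_exec = cycles / f_edge             # computation delay on the MEC server
    delay = t_tx + wait_s + t_exec
    energy = tx_power_w * t_tx + kappa_edge * f_edge ** 2 * cycles
    return w_time * delay + w_energy * energy
```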
3.3.3. Cloud Server Computing
When the MEC server cannot process the data tasks generated by the mobile vehicle users quickly enough, the tasks are offloaded to the cloud server for processing through the wireless network. In this case, the total delay generated by the cloud server processing the computing task is mainly composed of the transmission delay of passing the task to the cloud service layer, the delay of the cloud server processing the computing task, the waiting delay of the mobile vehicle user, and the delay of returning the task result to the mobile vehicle user. The delay caused by returning the cloud processing results to the terminal layer is small and can be ignored. The overall latency generated by cloud server computing can be expressed as:

$$T_i^{c} = T_{i,c}^{tra} + T_{i}^{wait} + T_{i,c}^{exe} \tag{12}$$

The transmission delay is expressed as:

$$T_{i,c}^{tra} = \frac{d_i}{r_{i,m}} + \frac{d_i}{r_{m,c}} \tag{13}$$

where $r_{m,c}$ is the transmission rate of the backhaul link between the MEC server and the cloud server. The delay of the data tasks generated by mobile vehicle user $i$ in cloud server computing is expressed as:

$$T_{i,c}^{exe} = \frac{c_i}{f_c} \tag{14}$$

where $f_c$ is the computing power of the cloud server. When the data task request of a mobile vehicle user is offloaded to the cloud server for computing, the energy consumption is generated by the data transmission in the channel, the waiting time of the system, and the computing in the cloud server. Thus, the total energy consumption incurred in processing task data at the cloud server is expressed as:

$$E_i^{c} = p_i T_{i,c}^{tra} + p_i^{w} T_i^{wait} + \kappa_c f_c^{2} c_i \tag{15}$$

From the above formulas, the overall cost of vehicle users performing computing tasks on the cloud server can be expressed as:

$$C_i^{c} = \lambda_t T_i^{c} + \lambda_e E_i^{c} \tag{16}$$

The coefficients $\lambda_t$ and $\lambda_e$ in Formula (16) represent the time weight and the energy consumption weight of performing the calculation task on the cloud server, and their sum is 1.
3.4. Problem Formulation
The purpose of the offloading decision is to reduce, as much as possible, the system’s total cost of completing mobile vehicle users’ task requests. If the task requests of all mobile vehicle users are executed locally, the MEC servers and the cloud server are idle; if the task requests are all offloaded to the MEC servers, the computing resources of the vehicle users and the cloud server are left idle.
According to the above analysis, the minimum cost problem of the system can be expressed as:

$$\min_{x_i, y_i} \sum_{i=1}^{N} C_i$$
$$\text{s.t.}\quad \text{(3a)}\; T_i^{l} \le t_i^{\max}, \quad \text{(3b)}\; T_i^{e} \le t_i^{\max}, \quad \text{(3c)}\; x_i, y_i \in \{0,1\},$$
$$\text{(3d)}\; \lambda_t + \lambda_e = 1, \quad \text{(3e)}\; C_i = (1-x_i)C_i^{l} + x_i(1-y_i)C_i^{e} + x_i y_i C_i^{c}$$

Among them, constraints (3a) and (3b) state that the delay generated by executing the task locally or on the MEC server cannot exceed the maximum delay allowed by the system; (3c) indicates the relationship between the offloading decision parameters; (3d) indicates the relationship between the two weight coefficients; and (3e), determined by the system’s calculation model, gives the total cost incurred by the system to perform a computing task.
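As a baseline view of this optimization, for a single user the feasible minimum can be found by simply comparing the three candidate costs under the delay constraints; the cloud is assumed always feasible in this sketch, and all numeric values are placeholders.

```python
def min_cost_decision(c_local: float, c_mec: float, c_cloud: float,
                      t_local: float, t_mec: float, t_max: float):
    """Return (cost, mode) for the cheapest execution mode whose delay
    satisfies the task's maximum-delay constraint."""
    candidates = [(c_cloud, "cloud")]  # cloud assumed always feasible here
    if t_local <= t_max:
        candidates.append((c_local, "local"))
    if t_mec <= t_max:
        candidates.append((c_mec, "mec"))
    return min(candidates)

cost, mode = min_cost_decision(3.0, 2.0, 2.5, t_local=0.4, t_mec=0.2, t_max=0.5)
assert mode == "mec" and cost == 2.0
```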
Since wireless channels and vehicle users change and move dynamically, traditional optimization methods cannot solve this resource optimization problem well. Resource allocation and offloading decisions are related to the current system utility and affect the state of subsequent system processing tasks. To this end, we combine deep reinforcement learning methods to solve the above problems; see the next section for details.
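Two of the deep reinforcement learning ingredients invoked in this paper, DQN-style experience replay and target networks with soft updates, can be sketched minimally as follows; the actor/critic networks and the ECDDPG specifics are omitted, so this is only an illustrative skeleton.

```python
import random
from collections import deque

class ReplayBuffer:
    """DQN-style experience replay: store transitions, sample random minibatches
    to break the temporal correlation of consecutive samples."""
    def __init__(self, capacity: int):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size: int):
        return random.sample(self.buffer, batch_size)

def soft_update(target_params, online_params, tau: float):
    """DDPG target-network soft update: theta' <- tau*theta + (1 - tau)*theta'."""
    return [tau * o + (1.0 - tau) * t for t, o in zip(target_params, online_params)]

buf = ReplayBuffer(capacity=1000)
for step in range(8):
    buf.push(step, 0, 1.0, step + 1)   # toy transitions
batch = buf.sample(4)
new_target = soft_update([0.0, 0.0], [1.0, 1.0], tau=0.1)  # -> [0.1, 0.1]
```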
6. Conclusions
In the mobile edge computing environment of the Internet of Vehicles, in order to minimize the cost of system task execution, we designed a task offloading overhead model for an “Edge-Cloud” collaborative MEC system based on multiple mobile vehicle users and multiple edge servers, and introduced the communication model, the computational offloading model, and the problem formulation. We formulated reducing the total cost of system processing tasks as an optimization problem over vehicle users, MEC servers, and the cloud center, and transformed the task offloading and resource optimization problem into a combinatorial optimization problem based on deep reinforcement learning. Combined with the ideas of deep reinforcement learning, a dynamic computing offloading method based on Edge-Cloud collaboration and a deep deterministic policy network (ECDDPG) was proposed. The simulation results show that the method in this paper effectively reduces the system execution cost.
There are still some shortcomings in this paper’s research: for example, the resources of vehicles parked on the roadside are not fully utilized, the task data is not divided into parts, and only complete offloading is considered. In the future, we will study the task offloading problem of in-vehicle networks, divide tasks into multiple parts, and make full and reasonable use of the resources of vehicles parked on the roadside.