Article

A Parallelizable Task Offloading Model with Trajectory-Prediction for Mobile Edge Networks

1 School of Computer and Artificial Intelligence, Zhengzhou University, Zhengzhou 450001, China
2 National Supercomputing Center in Zhengzhou, Zhengzhou University, Zhengzhou 450000, China
3 Nanyang Institute of Technology, No.80, Changjiang Road, Nanyang 473000, China
4 School of Informatics, University of Leicester, Leicester LE1 7RH, UK
5 College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China
6 Department of Information Management, Chaoyang University of Technology, Taichung 413310, Taiwan
* Author to whom correspondence should be addressed.
Entropy 2022, 24(10), 1464; https://doi.org/10.3390/e24101464
Submission received: 3 August 2022 / Revised: 30 September 2022 / Accepted: 10 October 2022 / Published: 14 October 2022
(This article belongs to the Special Issue Wireless Sensor Networks and Their Applications)

Abstract

As an emerging computing model, edge computing greatly expands the collaboration capabilities of servers. It makes full use of the available resources around users to quickly complete task requests coming from terminal devices. Task offloading is a common solution for improving the efficiency of task execution in edge networks. However, the peculiarities of edge networks, especially the random access of mobile devices, bring unpredictable challenges to task offloading in a mobile edge network. In this paper, we propose a trajectory prediction model for moving targets in edge networks that does not require users' historical paths, which would represent their habitual movement trajectories. We also put forward a mobility-aware parallelizable task offloading strategy based on the trajectory prediction model and the parallel mechanisms of tasks. In our experiments, we compared the hit ratio of the prediction model, the network bandwidth, and the task execution efficiency of the edge networks using the EUA data set. Experimental results showed that our model performs much better than the random strategy, the parallel strategy without position prediction, and the non-parallel strategy based on position prediction. The task offloading hit rate is closely related to the user's moving speed: when the speed is less than 12.96 m/s, the hit rate can reach more than 80%. Meanwhile, we also found that the bandwidth occupancy is significantly related to the degree of task parallelism and the number of services running on the servers in the network. As the number of parallel activities grows, the parallel strategy can boost network bandwidth utilization by more than eight times compared to a non-parallel policy.

1. Introduction

With the popularization and widespread promotion of handheld mobile devices in the Internet of Things (IoT), the mobile computing model has become one of the most important forms of computing under the current hybrid Internet communication model [1]. In a mobile network, people can submit various computing tasks at any time with the nomadic devices in their hands [2]. Unlike the traditional resource-centered cloud architecture model [3,4], mobile edge computing is a kind of distributed computing model centered on edge services [5]. It can resolve problems such as edge network delay [6], task overload [7], and network congestion [8] by maximizing the utilization of edge computing resources [9].
Due to the limitation of edge network resources in a mobile edge network (MEN), how to optimize the offloading of services and the allocation of resources has always been a challenging problem [10]. Although the promotion of 5G technology and devices has greatly improved the data transmission and service delay of the MEN, the random access of network devices, the variety of available network resources, and user mobility may limit the advantages of the MEN. Many researchers have proposed that providing services to users at the nearest location can reduce network delay and improve the Quality of Service (QoS) [7,11,12]. J. Xu et al. [13] proposed a service proximity caching strategy, which responds to the user's service request on the nearest edge server and shortens the waiting time for service requests. Other approaches split multiple users' tasks into many parts and distribute them to terminal devices, edge servers, and the cloud to share the computing resources of the MEN [14,15,16].
The user task request and offloading scheme in the MEN are closely related to the access location of the user equipment (UE); thus, it is becoming a trend to introduce user mobility into the task offloading strategies of edge computing services [17]. Because of the randomness and unpredictability of user behavior in the MEN [18], as shown in Figure 1, the demand for the resources provided by the network is not determined when the UEs pass through the area covered by the edge servers. It is the real-time changes in network resources caused by this random behavior that affect the performance of task offloading strategies. Nadembega et al. [19] discovered that the movement of users in an edge network has a great impact on the user experience and the QoS of the network; they also proposed a method to improve the efficiency of data transmission for edge network tasks. Therefore, it is worth considering behavioral factors jointly when exploring task offloading solutions in the MEN.
Figure 1 shows some moving UEs requesting services from edge networks (ENs). If no policy intervention is added during task assignment, the terminal users have to wait for their expected results [20]. This can be improved by effectively utilizing edge devices to estimate the user's location before task offloading [21,22]. This kind of method, which introduces trajectory prediction based on prior data about users' previous locations into task offloading in the MEN, indeed improves the effective utilization of the resources near the user. However, it is challenging to obtain the user's personal location information.
In this paper, we propose an approach to distribute tasks in the MEN based on a non-prior trajectory prediction model (NPTP), which can enhance the predictive ability of task offloading decisions. In other words, we construct a dynamic user-centering edge network (UCEN) based on the user's movement trajectory, and infer the next position of the users from only their current information in the coverage area of the MEN. Then, a task offloading model can automatically adapt to the topology changes of UCENs caused by the user's movement. In addition, the offloading model distributes the sub-tasks of one service to different computing devices for concurrent execution, and the simulation experiments on service parallelization showed that the model has obvious advantages in task execution efficiency and equipment occupation rate. In this article, we analyze current trajectory prediction methods and task assignment strategies in the MEN, and make some improvements in reducing the dependence of trajectory prediction on historical information and in sub-task offloading among parallelizable tasks. In short, our contributions are the following:
(1) A non-prior trajectory prediction model (NPTP).
NPTP is based on the random walk model, which forecasts the user's next position from their current location and moving direction. It provides an important reference for edge servers to optimize service offloading.
(2) A dynamic construction mechanism for mobile edge networks.
According to the user's trajectory, UEs construct a mobile cloud dynamically to serve themselves, which is a mobile user-centering edge network (MUCEN) architecture. The MUCEN is constructed provisionally along the user's movement path. Therefore, the available computing devices, prepared for the user during their movement, are dynamically adjusted in the domain of the MUCEN.
(3) Parallelizable Task Offloading Strategy Based Position Prediction (PTOSBPP) in MEN.
Due to the concurrent and asynchronous execution of sub-tasks, we propose a service offloading strategy for the MEN that can adapt to the user's dynamic location.
The rest of this paper is organized as follows: Section 2 introduces related work on task offloading in the MEN. Section 3 illustrates how the NPTP model predicts the next position of the UEs and formulates the cost of task offloading. Then, the algorithms of NPTP and PTOSBPP are described in detail in Section 4. Finally, we show the simulation results, analyze the performance of the algorithms, and conclude this study in Section 5.

2. Related Works

Edge computing is a new computing model after cloud computing and fog computing [9,11]. Its purpose is to relieve the computing pressure of the cloud with the idle and scattered resources around the UEs. Task offloading is a well-established approach to improving edge computing performance [1,23]. However, the increasing number of portable devices in the MEN makes task offloading and resource allocation more complicated. The deployment of MEN services while tracking user mobility is a challenging issue [24]. In order to achieve better QoS and Quality of Experience (QoE) in the MEN, many task offloading strategies have been proposed by researchers studying how to distribute the limited network resources to end users in a rational way. The traditional optimization techniques for these distribution schemes can be broadly divided into two categories: decision optimization and architecture or device optimization.
Decision optimization mainly analyzes parameters such as bandwidth, server speed, and available memory, evaluates the cost function, and then determines a resource allocation strategy [25]. For instance, Chen, Xu et al. [26] designed a novel D2D Crowd framework to assist in the sharing of edge network resources. The authors of [27] proposed an adaptive computation offloading approach for saving the power of portable computers. Wolski et al. [28] presented a framework for making computation offloading decisions based on network bandwidth. The methods proposed in [29,30] tried to explore as many computing resources as possible around the base station (BS) to meet the computing needs of edge services. As mentioned in [1], discovering and using potential resources as much as possible is theoretically conducive to task execution efficiency in the MEN. In most cases, these solutions are restricted by the number of free resources in UCENs, and they are weak in adapting to the real-time changes of the network topology caused by random access. When the edge resources are insufficient, the cloud servers still need to provide a large amount of computing resources to assist in completing the computing tasks. Optimizing the network service architecture also addresses the issue of the ideal distribution of edge services. The UEs receive responses from the MEN in a short time through novel network devices [31,32] or virtualization technologies [25]. Femtolet, proposed by Mukherjee et al. [32], can offer communication and computation offloading facilities simultaneously at low power and low latency. In addition, Chun et al. [33] designed an architecture to extend the edge server with the cloud. There are many articles studying the methods mentioned above. We summarize them in Table 1 and analyze the shortcomings of the various approaches.
Arbitrary access to computational resources is a relatively prevalent occurrence in the MEN, which complicates task offloading. Thus, if the destination server is sensed in advance for a moving user, it is beneficial for task offloading. As predicting the target edge server is related to the user's spatio-temporal location, it is necessary to study the mobile behavior of users. There are large numbers of mobile devices with random access in the MEN, and it is this way of connecting that brings huge challenges to task offloading. Currently, this issue has attracted much attention from the authors of [9,14,19,21,22,41]. A deep learning method in [37] is used to predict the trajectory of mobile edge network users for task distribution among the ESs. O. Tao et al. [24] used the Markov model to predict the user's movement trajectory, and achieved the optimal goal of long-term service in the application and release of network resources. Aissioui et al. [42] also proposed an FMeC-based framework for automated driving, which addresses the increased cost of network service migration caused by vehicle location updates. The methods proposed in [38,39] inferred the UEs' next position based on the terminal devices' tracks. In some special scenarios, the service offloading decision is built on a Lagrange trajectory interpolation method [40]. These studies indicate that the mobility of users has a certain impact on the efficiency of task execution in the MEN, and that predicting the user's movement trajectory is suitable for improving task offloading performance. However, all of these methods have a common feature: they need historical data for position prediction. We call them prior displacement prediction models (PDPM). In Section 1, we have already stated that it is difficult, and even illegal, to acquire such past private data. Thus, PDPM seems to have some limitations in practice.
To overcome the shortage of PDPM in historical data collection, we propose a non-prior trajectory prediction model to predict the path of moving users in this paper. In addition, the techniques of parallelization and fragmentation can expand the number of edge devices that participate in task offloading and reduce network latency; the methods in [15,43] show that parallelization is one of the effective and efficient ways to improve the offloading of edge tasks in the MEN. Therefore, our experiments also incorporate a sub-task offloading policy.

3. Problem Statement and System Model

In this section, we formulate the mobile user-centering edge network architecture, introduce the model of the NPTP, and then discuss the optimal cost calculation method of the parallel offloading strategy.

3.1. Mobile User-Centering Edge Network Architecture

Generally, the terminal devices carried by users are centered on the edge server, which dominates the task assignment. All of the service requests are triggered by them. Hardware resources, such as processors, storage, etc., can also participate in the scheduling and allocation of resources. However, these resources are monopolized by the user device for security.
Different from the traditional MEN, the user-centering mobile edge network is a dynamic network architecture that follows the path of moving UEs. As shown in Figure 2, the wireless mobile devices carried by users connect randomly to the edge network, which leads to the rebuilding of a new network topology. Thus, the edge server needs to constantly adjust the assignment of tasks.
As the user's location changes in the ENs, so does the serving ES (Figure 2). As a result, the resources available to a user may differ from one moment to the next. In a mobile user-centric edge network, the resources available around the user can be adjusted in a timely manner to cope with random changes in the user's location.
Given that U = {U_i | i = 1, 2, 3, …, M} represents the set of users in the network, where M is the number of users; Dev = {Dev_i | i = 1, 2, 3, …, D} represents the set of devices in the network, where D is the number of devices; and ES = {ES_i | i = 1, 2, 3, …, S} represents the set of edge servers, where S is the number of edge servers in the network, the user-centered edge service network can be expressed as a triple (U, Dev, ES).
EN = {EN_i | i = 1, 2, 3, …, EC} represents the set of ENs, where EC is the number of ENs in the network. Each edge network can be represented by a quintuple EN_i = (UQ_i, RQ_i, TQ_i, RS_cur_i, RS_max_i), where UQ represents the user queue requesting the computing devices in the MEN, RQ represents the available resource queue, TQ represents the task request queue in the EN, RS_cur represents the current usage of edge network resources, and RS_max represents the maximum load capacity of the EN (including the upper limit of the connection number, CPU occupancy rate, bandwidth, etc.). All sets above must obey the constraints in Equation (1).
$$\sum_{i=1}^{S} |UQ_i| \le M, \qquad |RQ_i| \le |Dev_i| \ (1 \le i \le S), \qquad \sum_{i=1}^{S} RS\_cur_i \le \sum_{i=1}^{S} RS\_max_i \tag{1}$$
Since the number of resources and the network performance vary over time, we set EN_k(t) = (UQ_k(t), RQ_k(t), TQ_k, RS_cur_k(t), RS_max_k) to represent EN_k at time t, where RS_max_k is a constant independent of time, and Dev_i^k(t) represents the set of devices in EN_k at t.
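To make the quintuple concrete, the EN structure and the Equation (1) constraints can be sketched in Python. This is a minimal illustration; the field types and the `satisfies_constraints` helper are our own assumptions, not part of the paper's implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EdgeNetwork:
    """Sketch of the quintuple EN_i = (UQ, RQ, TQ, RS_cur, RS_max)."""
    UQ: List[int] = field(default_factory=list)   # user queue (user ids)
    RQ: List[int] = field(default_factory=list)   # available resource queue (device ids)
    TQ: List[int] = field(default_factory=list)   # task request queue (task ids)
    RS_cur: float = 0.0                           # current resource usage
    RS_max: float = 100.0                         # maximum load capacity (constant)

def satisfies_constraints(ens, num_users, devices_per_en):
    """Check the Equation (1) constraints over all edge networks:
    total queued users <= M, |RQ_i| <= |Dev_i| for each EN,
    and total current load <= total maximum load."""
    total_users = sum(len(en.UQ) for en in ens)
    per_en_ok = all(len(en.RQ) <= len(devs)
                    for en, devs in zip(ens, devices_per_en))
    load_ok = sum(en.RS_cur for en in ens) <= sum(en.RS_max for en in ens)
    return total_users <= num_users and per_en_ok and load_ok
```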

3.2. User Trajectory Prediction Model

Compared with conventional prediction methods, the user trajectory prediction model proposed in this paper is a non-prior prediction solution. Based only on the current motion state of the user, we estimate the possible position of the user in the next time slice. The user's movement follows the random walk model, in which users can change direction and speed randomly.
Assume that the speed of user movement varies with time and is denoted by v(t) at time t. For every fixed time slice T, the edge server needs to update the location of the user. Then, the moving distance d of the user in a single time slice can be expressed as d = T · v(t), which directly affects the selection of the target device for offloading the current request task.
When T approaches a very small value ε, we can regard the UE as moving in a straight line with constant direction and speed. Therefore, we can deduce a circular region with radius r according to Equation (2) (red notation in Figure 3), which is the user's activity range in the next time period.
$$r(t) = \lim_{T \to \varepsilon} T \cdot v(t), \tag{2}$$
where ε is an extremely small value, and T is the time interval for system polling.
According to the radius r determined by Equation (2), there is an unambiguous circular region for the moving UEs. In Figure 3, α, β, γ and θ are the vector angles between the velocity vector v and the different adjacent communication ESs, which represent the ranges of service areas that the user is about to reach for different base stations. These angles divide the entire circular region into several fan-shaped regions. We choose the sector areas corresponding to the two smallest angles as the range with the greatest probability of containing the user's next location, which is the region corresponding to α and β in Figure 3. Therefore, the position of the moving user at the next time instant is most likely to fall within the domain delimited by Equation (3).
$$Area = Area(\beta, r) + Area(\alpha, r), \quad \text{where } Area(\beta, r) = \frac{\pi}{360}\,\beta\, r^2,\ \ Area(\alpha, r) = \frac{\pi}{360}\,\alpha\, r^2 \tag{3}$$
The user activity range prediction model described in Figure 3 can outline the next location based on the current user's status, which is an important basis for target device selection when the subsequent terminal device task is offloaded. This model can effectively narrow down the search scope for available devices in the offloading algorithm.
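The radius and sector-area computations of Equations (2) and (3) can be sketched as follows. This is a minimal illustration assuming angles are given in degrees; the function names are ours, not the authors'.

```python
import math

def prediction_radius(speed, T):
    """Equation (2): for a small polling interval T, the reachable
    radius is approximately r = T * v(t)."""
    return T * speed

def sector_area(angle_deg, r):
    """Sector area with central angle in degrees: (pi/360) * angle * r^2,
    matching the paper's form, which equals (angle/360) * pi * r^2."""
    return math.pi / 360.0 * angle_deg * r ** 2

def predicted_area(alpha_deg, beta_deg, speed, T):
    """Equation (3): the most probable next-location region is the union
    of the two sectors with the smallest angles alpha and beta."""
    r = prediction_radius(speed, T)
    return sector_area(alpha_deg, r) + sector_area(beta_deg, r)
```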

3.3. Parallelizable Task Offloading Cost Model

In Figure 3, the candidate ES is determined by the size of the overlap between the sector area and the coverage area of the BS. The resources of the current EN and the alternative ES will participate in the offloading of the network task. This section mainly introduces the offloading cost evaluation model in the parallelizable task offloading strategy.
BS = {BS_j | j = 1, 2, 3, …, BN} represents the set of edge server BSs, where BN is the number of BSs; Task = {task_i | i = 1, 2, 3, …, N} represents the set of tasks in the MEN, and N is the total number of tasks that can be requested in the network. Furthermore, each task has a parallelizable sub-task sequence sub_task_i^k, so it can be represented by a k-tuple task_i = (sub_task_i^1, sub_task_i^2, …, sub_task_i^k).
For simplicity, each BS is represented by a quadruple BS_j = (EN, CU, BW, TQ), where EN is the edge network to which the BS belongs, containing the available resources in the network; CU represents the CPU occupancy; BW represents the bandwidth usage of the edge server; and TQ represents the task queue on the BS. Assuming that all BS servers in the MEN possess the same performance, CPU_MAX, Bandwidth_MAX and BS_Task_MAX respectively represent the upper limits of the server's computing power, bandwidth, and number of carried tasks.
Each sub-task is represented by sub_task_i^k = (CPU_i^k, Data_i^k, Tolerance_max_i^k), where CPU represents the number of CPU clock cycles of the task; Data is the amount of data generated by each task during execution; and Tolerance_max is the maximum tolerable execution delay of each sub-task. In this paper, we mainly consider the CPU clock cycles consumed by the task on the computing device and the data exchanged in the network to evaluate the offloading strategy.

3.3.1. Energy Consumption Calculation Model of Task Offloading

All resources around moving UEs will be used as targets for offloading each task, and the feasibility of this solution has been demonstrated in [44,45,46]. In Equation (4), ξ_c, ξ_e and ξ_t are the rated energy consumptions per CPU clock cycle on remote cloud servers (CS), edge servers (ES), and terminal devices (TD); task_i can be executed in parallel with K sub-tasks which have been divided by the designers of the task in advance, where K_1, K_2 and K_3 represent the numbers of sub-tasks assigned to the CS, ES and TD. According to the offloading strategy proposed in this article, the energy consumption required for the execution of task_i is shown in Equation (4).
$$Energy\_CPU_{task_i} = \xi_c \sum_{\delta=1}^{K_1} CPU_{sub\_task_i^{\delta}} + \xi_e \sum_{\omega=1}^{K_2} CPU_{sub\_task_i^{\omega}} + \xi_t \sum_{\chi=1}^{K_3} CPU_{sub\_task_i^{\chi}}, \quad \text{where } K_1 + K_2 + K_3 = |task_i| \tag{4}$$
In Equation (4), $\xi_c \sum_{\delta=1}^{K_1} CPU_{sub\_task_i^{\delta}}$, $\xi_e \sum_{\omega=1}^{K_2} CPU_{sub\_task_i^{\omega}}$ and $\xi_t \sum_{\chi=1}^{K_3} CPU_{sub\_task_i^{\chi}}$ represent the energy consumption of the CPU cycles of tasks on the cloud, edge, and end devices, respectively.
Generally, the edge network described in this paper is a random network with multi-task and multi-user access, and the same task may be requested by multiple users at the same point in time. So, we assume that cnt_i^t represents the total number of requests for task_i at time t; then the CPU energy consumption for the entire network is shown in Equation (5).
$$Energy\_CPU(t) = \sum_{i=1}^{|Task|} Energy\_CPU_{task_i} \cdot cnt_i^t \tag{5}$$
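Equations (4) and (5) can be illustrated with a short sketch. The per-tier cycle lists and function names are our assumptions for illustration, not the paper's implementation.

```python
def task_energy(xi_c, xi_e, xi_t, cpu_cloud, cpu_edge, cpu_term):
    """Equation (4): energy of one task whose sub-tasks are split across
    cloud (K1), edge (K2) and terminal (K3) devices. The cpu_* lists hold
    the CPU cycles of the sub-tasks assigned to each tier."""
    return (xi_c * sum(cpu_cloud)
            + xi_e * sum(cpu_edge)
            + xi_t * sum(cpu_term))

def network_energy(task_energies, request_counts):
    """Equation (5): total CPU energy at time t, weighting each task's
    energy by its request count cnt_i^t."""
    return sum(e * c for e, c in zip(task_energies, request_counts))
```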

3.3.2. Network Delay Calculation Model

Network delay is another important indicator for evaluating the offloading strategy. It includes the execution time of the task itself, the synchronization waiting time between sub-tasks, and the time for data transmission. However, transmission is the main cause of excessive network latency [11]. Equation (6) gives the total data amount of task_i, where sub_D_{task_i^k} is the data amount of the k-th sub-task of task_i.
$$D_{task_i} = \sum_{k=1}^{K} sub\_D_{task_i^k}, \quad \text{where } K = |task_i| \tag{6}$$
Commonly, the data in the network can be divided into upload data and download data, and sub_D_{task_i^k} in Equation (6) can be expressed as in Equation (7).
$$sub\_D_{task_i^k} = up\_D_{task_i^k} + down\_D_{task_i^k} \tag{7}$$
In Equation (7), up_D_{task_i^k} and down_D_{task_i^k} represent the sub-task upload and download data. If the communication rate is represented by B, the execution time of task_i in the network can be represented as:
$$T\_delay_{task_i} = \sum_{k=1}^{|task_i|} \left( up\_D_{task_i^k}/B + down\_D_{task_i^k}/B \right) + sys\_delay_{task_i} \tag{8}$$
In Equation (8), sys_delay_{task_i} is the time delay caused by the EN itself. With cnt_i^t representing the number of requests for task_i, the network delay of the whole network at t is given by Equation (9).
$$T\_delay(t) = \sum_{i=1}^{|Task|} T\_delay_{task_i} \cdot cnt_i^t = \sum_{i=1}^{|Task|} \left( \sum_{k=1}^{|task_i|} \left( up\_D_{task_i^k}/B + down\_D_{task_i^k}/B \right) + sys\_delay_{task_i} \right) cnt_i^t \tag{9}$$
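Equations (8) and (9) can likewise be sketched in a few lines; the argument layout is our own assumption.

```python
def task_delay(up_data, down_data, B, sys_delay):
    """Equation (8): transmission time of all sub-tasks at rate B plus
    the delay introduced by the edge network itself."""
    return sum((u + d) / B for u, d in zip(up_data, down_data)) + sys_delay

def network_delay(tasks, request_counts):
    """Equation (9): total delay at time t; `tasks` is a list of
    (up_data, down_data, B, sys_delay) tuples, one per task type,
    weighted by the request counts cnt_i^t."""
    return sum(task_delay(*args) * cnt
               for args, cnt in zip(tasks, request_counts))
```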

3.4. Problem Statement

In this section, we study the impact of user mobility behavior on task offloading in a mobile edge network, and propose a task offloading solution for parallel tasks based on the user trajectory prediction model. We transform this issue into the problem of finding an optimal set of devices with the least cost for task offloading in a mobile user-centering network, which is represented as the multi-objective optimization function in Equation (10).
$$Cost_{task_i} = T\_delay_{task_i} + Energy\_CPU_{task_i} \tag{10}$$
where Cost_{task_i} is the cost function of offloading task_i. Assuming A is the set of all task assignments at t, the objective function of the offloading cost (Equation (10)) can be transformed into Equation (11), which is regarded as a function of t and A. Thus, the execution time of the tasks and the computational resources consumed become the optimization targets.
$$Cost(A, t) = T\_delay(A, t) + Energy\_CPU(A, t) \tag{11}$$
To put it clearly, we need to convert Energy_CPU(A, t) into computer clock cycles, which are measured in time; the units of T_delay(A, t) and Energy_CPU(A, t) are thereby unified. The assumptions and conversion rules are listed in the experimental section. Considering the parallelization of tasks, each term in Equation (11) is non-negative. The minimum offloading cost is obtained when both T_delay(A, t) and Energy_CPU(A, t) reach their minimum values. Therefore, the process of finding the optimal solution of Cost(A, t) can be converted into two minimization functions (Equation (12)).
$$\begin{cases} \min\ T\_delay(A, t) \\ \min\ Energy\_CPU(A, t) \end{cases} \tag{12}$$
Based on the constraint of Equation (12), we try to seek an optimal A for all tasks at t. However, it is hard to find a solution of Cost(A, t) that satisfies both conditions in Equation (12). We assume that A′ is the subset of A that minimizes T_delay(A, t), and B′ is the subset of A that minimizes Energy_CPU(A, t). The optimal solution of Equation (11) is therefore divided into two cases: if A′ ∩ B′ ≠ Φ, there is an optimal solution for Cost(A, t); otherwise, Cost(A, t) only has a suboptimal solution.
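The two-case analysis above can be sketched as a brute-force search over a small finite candidate set of assignments. This is a simplification for illustration only; the paper's search space is far larger, and the callables standing in for Equations (9) and (5) are our assumptions.

```python
def optimal_assignments(assignments, delay_fn, energy_fn):
    """Find the subsets of candidate assignments minimizing T_delay and
    Energy_CPU; if they intersect, an optimal solution of Cost(A, t)
    exists (both objectives minimized), otherwise only a suboptimal one
    minimizing the combined cost of Equation (11)."""
    min_d = min(delay_fn(a) for a in assignments)
    min_e = min(energy_fn(a) for a in assignments)
    A_star = {a for a in assignments if delay_fn(a) == min_d}
    B_star = {a for a in assignments if energy_fn(a) == min_e}
    both = A_star & B_star
    if both:  # optimal case: one assignment minimizes both objectives
        return min(both, key=lambda a: delay_fn(a) + energy_fn(a)), True
    # suboptimal case: minimize the combined cost instead
    return min(assignments, key=lambda a: delay_fn(a) + energy_fn(a)), False
```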

4. Mobility-Aware Parallelizable Task Offloading Strategy (MPTOS)

Based on the models discussed in the previous section, this section describes the processes of NPTP and PTOSBPP in detail. A mobility-aware parallelizable task offloading strategy is proposed here, in which the task offloading decision is based on the NPTP.

4.1. Overview of the MPTOS

As the UEs move through the EN, the task requests of moving UEs are irregular. The ESs perceive the user's location according to the task applications from the end devices and passively provide services for them. In order to solve this NP-hard problem [30,47,48], the task offloading strategy must be able to adapt to the changes of the EN caused by the user's movement. MPTOS can dynamically modify the offloading policy as the EN topology changes. MPTOS includes a user trajectory prediction algorithm without prior information and a task offloading algorithm for parallelizable tasks. To demonstrate the task offloading of MPTOS more clearly, we present a complete description of the task offloading and decision-making process in Algorithms 1 and 2 in this section.

4.2. Non-Prior Trajectory Prediction Algorithm

The non-prior information trajectory prediction algorithm is the basis for the implementation of the subsequent service offloading strategy. The algorithm predicts the location of the UEs in the next time slice from the instantaneous position and current speed of the UEs. Then, PTOSBPP offloads the request service received by the current device to the target edge device in advance, according to the feedback results of NPTP at t, which facilitates terminal user service requests at t + 1.
According to the model in Figure 3, the algorithm quantifies the position angles between the motion vector v and the edge servers, which determine the next-hop ES. Then, the offloading tactic works on the current available device queue, including the next target ES. The NPTP algorithm is shown in Algorithm 1.
Algorithm 1 Non-prior Trajectory Prediction algorithm (NPTP)
Require:
  cur_dev_coordinate: Current coordinate of the user's device
  cur_BS_coordinate: Current coordinate of the servicing BS
  speed: Current user's speed, including a direction and a magnitude
  R: Radius of the BS coverage area
Ensure:
  Next_BS: Candidate device for the task offloading
  1:   Mobile_r ← speed · T
  2:   Neighbor_BS ← get_neighborBS()
  3:   dis ← distance(cur_dev_coordinate, cur_BS_coordinate)
  4:   Mobile_area ← pi · Mobile_r · Mobile_r
  5:   if dis ≤ R − Mobile_r then
  6:  Next_BS ← cur_BS_coordinate
  7:   else
  8:  // Get the angle between the velocity vector and each adjacent BS, and save the result in Neighbor_angle
  9:  for i ∈ (0, len(Neighbor_BS)) do
10:    temp_v ← make_vector(Neighbor_BS[i], cur_dev_coordinate)
11:    Neighbor_angle[i] ← get_angle(speed, temp_v)
12:  end for
13:  min_angle ← Min(Neighbor_angle)
14:  Next_BS ← getBS_from_angle(min_angle)
15:   end if
16:   return Next_BS
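A minimal Python sketch of Algorithm 1 might look as follows, assuming 2D coordinates and interpreting `get_angle` as the angle between the velocity vector and the device-to-BS vector. The helper names and signatures are ours, not the authors'.

```python
import math

def npt_predict(cur_dev, cur_bs, neighbor_bs, speed_vec, T, R):
    """Sketch of NPTP: cur_dev/cur_bs are (x, y) coordinates,
    neighbor_bs is a list of BS coordinates, speed_vec the velocity
    vector (vx, vy), T the polling interval, R the BS coverage radius."""
    vx, vy = speed_vec
    mobile_r = math.hypot(vx, vy) * T            # reachable radius in one slice
    dis = math.dist(cur_dev, cur_bs)
    if dis <= R - mobile_r:                      # user stays inside the current cell
        return cur_bs
    # otherwise pick the neighbor BS whose direction is closest to the velocity
    def angle_to(bs):
        wx, wy = bs[0] - cur_dev[0], bs[1] - cur_dev[1]
        dot = vx * wx + vy * wy
        norms = math.hypot(vx, vy) * math.hypot(wx, wy)
        return math.acos(max(-1.0, min(1.0, dot / norms)))
    return min(neighbor_bs, key=angle_to)
```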

4.3. Parallelizable Task Offloading Strategy Based Position Prediction

In order to adapt to the movement of UEs, the PTOSBPP algorithm, combined with the characteristics of parallel tasks in ENs, has to select the optimal offloading target device from the available device queues in each time slice according to Equation (12) in Section 3.4. Thus, the strategy of PTOSBPP conforms to the following principles for distributing tasks in ENs:
(1) The MEN does its best to satisfy users' resource requests. In addition, the time dependence of edge tasks on UEs' displacement should be considered comprehensively when the MEN provides services.
(2) Large tasks that take up more CPU cycles are preferentially distributed to ESs with better bandwidth and sufficient resources. This can shorten the execution time of a single task.
(3) The number of tasks offloaded on ESs is restricted by increasing the parallelization of sub-tasks. In other words, when the network delay caused by mobility cannot be avoided, cloud server and end device collaboration ought to be utilized to its full potential.
Algorithm 2 explains the scheme of PTOSBPP in detail. The UEs send different types of task requests to different edge servers according to the user's trajectory at each time slice, and PTOSBPP runs on each ES to offload the tasks to the different candidate devices.
Algorithm 2 Parallelizable Task Offloading Strategy Based Position Prediction (PTOSBPP)
Require:
  M,N: Number of users and subtask of per t a s k i
  subtask[M][N]: Subtasks list of t a s k i
  cur_dev,cur_BS,cloud: Current user’s device, servicing BS and cloud servers
  pre_BS[M]: Candidate BSs for M users selected in the Algorithm 1
Ensure:
  A[M][N]: Device assignment list of each subtask contained in subtask[M][N]
  1:   Initialization of the information of UEs, edge servers, cloud and tasks
  2:   Make array of subtask randomly
  3:   for i ∈ (0, M) do
  4:   k ← {et | exe_Time(subtask[i][et]) > T}
  5:   t ← {et | exe_Time(subtask[i][et]) ≤ T}
  6:  if pre_BS[i] == cur_BS then
  7:    if len(cur_dev.TQ) ≤ MAX then
  8:     A[i][t] ← cur_dev
  9:    else
10:     A[i][t] ← cur_BS
11:    end if
12:    if len(cur_BS.TQ) ≤ BS_Task_MAX then
13:     A[i][k] ← cur_BS
14:    else
15:     A[i][k] ← cloud
16:    end if
17:  end if
18:  for j ∈ (0, N) do
19:    if len(cur_BS.TQ)≤ BS_Task_MAX then
20:     A[i][t] ← cur_BS
21:    else
22:     A[i][t] ← cur_dev
23:    end if
24:    if len(pre_BS[i].RQ) ≤ BS_Task_MAX then
25:     A[i][k] ← pre_BS
26:    else
27:     A[i][k] ← cloud
28:    end if
29:  end for
30:   end for
31:   return A
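For one user, the branching of Algorithm 2 can be sketched in Python as follows. This is a hedged reading, not the paper's implementation: we interpret the second loop (lines 18–29) as the case where the predicted BS differs from the current one, and all names, queue lengths, and thresholds are illustrative.

```python
def ptosbpp_assign(subtasks, exe_time, T, cur_dev, cur_bs, pre_bs, cloud,
                   dev_max, bs_max, queue_len):
    # split subtasks by whether they can finish within one time slice T
    short = [s for s in subtasks if exe_time[s] <= T]
    long_ = [s for s in subtasks if exe_time[s] > T]
    assign = {}
    if pre_bs == cur_bs:
        # user predicted to stay: short subtasks go to the device if its
        # queue allows, otherwise to the current BS; long ones to BS or cloud
        for s in short:
            assign[s] = cur_dev if queue_len[cur_dev] <= dev_max else cur_bs
        for s in long_:
            assign[s] = cur_bs if queue_len[cur_bs] <= bs_max else cloud
    else:
        # user predicted to move: short subtasks stay near the current BS,
        # long ones are placed ahead of time on the predicted BS or the cloud
        for s in short:
            assign[s] = cur_bs if queue_len[cur_bs] <= bs_max else cur_dev
        for s in long_:
            assign[s] = pre_bs if queue_len[pre_bs] <= bs_max else cloud
    return assign
```

With empty queues, a user predicted to stay keeps its short subtask locally and sends the long one to the serving BS, while a moving user has the long subtask pre-placed on the predicted BS.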

5. Experiment and Discussion

In this section, we evaluate the proposed approaches through a series of experiments. All experiments are conducted on a Windows machine equipped with an Intel Core i5-8500 and 16 GB RAM. All algorithms are implemented in Python 3.7 with the PyCharm IDE. We use part of the EUA data set, which contains the location information of 125 base station sites and 816 users in a Central Business District (CBD). The EUA data set is publicly released to facilitate research in edge computing and can be obtained freely from the internet (https://www.opensourceagenda.com/projects/eua-dataset (accessed on 12 October 2022)). In addition, we construct and use a task data set with 8 task types, where each task is divided into several sub-tasks. Following the random walk model in Section 3, the users move within the CBD area and stochastically send different task requests to the edge servers in each time slice.
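One step of such a random-walk movement can be simulated along these lines; this is a minimal sketch under our own assumptions (uniform heading, uniform speed in a range), while the paper's exact mobility model is the one defined in Section 3.

```python
import math
import random

def random_walk_step(pos, speed_range, T, rng):
    # one time slice of a random walk: draw a uniform speed and heading,
    # then move for T seconds from the current (x, y) position
    speed = rng.uniform(*speed_range)
    theta = rng.uniform(0.0, 2.0 * math.pi)
    return (pos[0] + speed * T * math.cos(theta),
            pos[1] + speed * T * math.sin(theta))
```

The displacement per slice is then bounded by speed_range scaled by the slice length T, matching the speed bands used later in the evaluation.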

5.1. Benchmark Approaches

As there is no directly comparable experimental scenario for the issue discussed in this paper, we abstract three experimental schemes from the previous literature [9,14,19,21,24,35] to compare the performance of our methods. The task offloading method proposed in this paper is based on user trajectory prediction and introduces a parallel execution strategy for sub-tasks. By combining the random walk model with the EUA data set, we have designed three experimental scenarios to verify the effectiveness of our method. The following benchmark experiments are different combinations of NPTP and the parallelizable task offloading strategy, and the details are shown in Table 2.
(1) Random strategy (RS)
RS randomly assigns the undivided task to the devices surrounding the UEs. In RS, since tasks are not split into multiple sub-tasks, the UEs, ESs and cloud servers cannot execute them in parallel. If a task can be completed within one time slice, it is offloaded to the neighboring ES; otherwise, the UE requesting the task is responsible for completing it.
(2) Non-position Prediction Parallel Strategy (NPPS)
In the non-position prediction parallel strategy, each task is split into several parallel sub-tasks; that is, a task can be divided into multiple sub-tasks that run in parallel across the UEs, ESs and cloud servers. This strategy may improve the execution efficiency of tasks within a single time slice. However, to avoid repeated requests while UEs move, and the failure to feed results back to the terminal devices, the sub-tasks whose execution time exceeds one time slice are offloaded to the UEs first.
(3) Non-parallel Strategy Based Position Prediction (NPSBPP)
NPSBPP introduces the user trajectory prediction model described in Section 3, and the edge resources predicted by the model are also considered candidate devices for offloading the current task. Meanwhile, a task that runs for more than one time slice is offloaded in advance to the cloud or to the ES of the next slice.

5.2. Experimental Setting and Evaluation Standard

To improve the universality of our approaches, our experiments select 5 groups with a fixed number of users from the 816 users in the EUA data set. These users move at different speeds according to the random walk model proposed in Section 3, and randomly request different types of tasks in each time slice. The simulation experiment is carried out under the four different task offloading strategies. To ensure the consistency of the experiment, the parameters in the simulation environment are set as shown in Table 3.
To comprehensively analyze the performance of our approaches, we choose a series of comparison indicators, namely the task offloading prediction hit rate, network bandwidth, and task execution efficiency. The task offloading prediction hit rate equals the accuracy of the user position prediction; bandwidth mainly measures the burden imposed by the parallel task strategy; and task execution efficiency is quantified as the task acceleration ratio between parallel and serial modes.
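The two headline metrics can be computed along these lines; this is a sketch with our own variable names, assuming per-slice BS labels and per-task execution times are logged during the simulation.

```python
def hit_rate(predicted_bs, actual_bs):
    # fraction of time slices where the predicted serving BS equals the actual one
    hits = sum(p == a for p, a in zip(predicted_bs, actual_bs))
    return hits / len(predicted_bs)

def acceleration_ratio(actual_times, theoretical_times):
    # mean ratio of actual to theoretical execution time; 1.0 means no slowdown
    ratios = [a / t for a, t in zip(actual_times, theoretical_times)]
    return sum(ratios) / len(ratios)
```

A ratio close to 1.0 indicates that offloading and parallelization recovered almost all of the theoretically available speedup.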

5.3. Results and Discussion

In this section, we present the experimental results in terms of the predicted hit rate for mobile users, bandwidth, and task execution efficiency, and compare them with RS, NPPS and NPSBPP.

5.3.1. Task Offloading Hit Rate

Among the four strategies, NPSBPP and PTOSBPP adopt the position prediction method proposed in Section 3 to offload tasks, so the task offloading hit rate and its comparison are obtained from these two experiments. In each experimental scenario, we simulate the users' trajectories with the random walk model and collect the track information of 400 randomly selected users in 5 groups. According to the prediction model in Figure 3, we compare the location of each user at time slices t and t + 1 by counting the number of times the two positions are the same. Furthermore, we group these data into 4 speed ranges to observe the impact of speed on the prediction accuracy. Figure 4a,b present the prediction hit rates of NPSBPP and PTOSBPP for the different user sets. Since no prediction strategy is used in RS and NPPS, there is no need to compare this item in those two cases.
Each line in Figure 4 displays the hit ratio trend for a different user data set. As shown in Figure 4, the accuracy gradually decreases as the user's moving speed increases, but it still remains above 75%. In other words, without sufficient additional information, the faster the user moves, the harder it is to predict the user's location.

5.3.2. Bandwidth

Any task offloading solution, especially a collaboration strategy among UEs, ESs and clouds, has a certain impact on the EN. NPPS and PTOSBPP are offloading strategies for parallel tasks. In the simulation experiments, we calculated the bandwidth occupancy of the different offloading strategies at each time slot. Since more sub-tasks in the parallel offloading strategies may be assigned to different edge devices for execution, as shown in Figure 5a, it can be seen from Figure 5b that the average bandwidth utilization of NPSBPP and PTOSBPP is higher than that of RS and NPPS. Nevertheless, none of the curves in Figure 5b reaches half of the total bandwidth.
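Per-slice bandwidth occupancy can be estimated roughly as follows, assuming the slice length and link capacity from Table 3 (T = 2 s, 100 MB/s end-to-edge); this is an illustrative sketch, not the paper's exact accounting.

```python
def bandwidth_occupancy(offloaded_data_mb, slice_s, bw_mb_per_s):
    # data moved off-device during one time slice, expressed as a
    # fraction of the link capacity (capped at full occupancy)
    needed = sum(offloaded_data_mb) / slice_s   # MB/s required this slice
    return min(1.0, needed / bw_mb_per_s)
```

Averaging this quantity over all slices of a run gives the curves plotted in Figure 5b.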

5.3.3. Execution Efficiency of Task

Eight different types of task requests are randomly generated in each time slice in the simulation experiments, and each task can be divided into several sub-tasks. Under the different strategies, we collected the task execution times of 400 random users and calculated the average ratio of the actual execution time to the theoretical time of all users' requested tasks. Figure 6a presents the average number of tasks or sub-tasks that need to be offloaded in the MEN at each point in time, and Figure 6b shows the average ratio of the actual task execution time to the theoretical execution time under the different offloading schemes, with the horizontal axis being the UEs' moving speed. The actual task execution time of RS and NPPS is more than four times the theoretical value, while NPSBPP and PTOSBPP stay basically close to the theoretical time.
As can be seen from Figure 6, the execution efficiency of programs in PTOSBPP and NPSBPP has obvious advantages over RS and NPPS. Owing to the parallel mechanism, the actual execution time of all tasks in PTOSBPP is close to the theoretical time, while the execution efficiency of NPSBPP is slightly higher than that of PTOSBPP. These results suggest that predicting UEs' positions during task offloading can effectively improve the execution efficiency of tasks in the MEN, and that the parallel division of tasks can shorten task execution time as well.

6. Conclusions and Future Work

In this paper, we identified the importance of predicting user movement trajectories for task offloading in MENs. We proposed a task offloading strategy for parallel edge networks based on a non-prior trajectory prediction model. The time complexity of the two algorithms is advantageous to some extent. For example, the time complexity of Algorithm 1 is O(n), but n is the number of neighboring edge servers, which is not excessive. Meanwhile, the time complexity of Algorithm 2 is O(M × N), where M and N are the numbers of UEs and subtasks. Owing to the data structure of the device assignment list, the temporal cost of this algorithm still needs improvement in the future.
Experimental results show that predicting the user's location not only improves the hit rate of the target edge server for task offloading in parallel networks, but also effectively improves the execution efficiency of tasks and the utilization of network resources. They also demonstrate that the user mobility rate is the key factor affecting this performance. However, we found that the hit rate of task offloading prediction decreases as the user moves faster. Therefore, how to improve the hit rate of task offloading in fast-moving scenarios is one of our key research challenges for the future.
Additionally, the mobile user-centered architecture is a type of dynamic networking approach in MENs and suits the way users access the MEN. Such networks can provide a personalized list of available devices for each end user and increase the equipment utilization of the edge network. Since user devices participate in managing the resources of edge networks, network security is an inevitable issue, which is also a potential area for our future research.

Author Contributions

Conceptualization, P.H. and B.Y.; methodology and experiment, P.H. and L.H.; software and data curation, J.-S.P. and J.S.; writing—original draft preparation, P.H.; writing—review and editing, J.-S.P. and J.S.; funding acquisition, J.S. and L.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the project of Research on Key Technologies for water and sediment simulation and intelligent decision of Yellow River (Grant No.201400211000).

Institutional Review Board Statement

Not applicable for studies not involving humans or animals.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Acknowledgments

The corresponding author (J. Shang) was supported by the project of Research on Key Technologies for the Construction and Service of Yellow River Simulators for Suppercomputing funded by Henan Provincial Department of Science and Technology (Grant No.201400210900).

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Abbreviations

The following abbreviations are used in this manuscript:
IoT: Internet of Things
MEN: Mobile edge network
UE: User equipment
QoS: Quality of service
QoE: Quality of experience
EN: Edge network
UCEN: User-centering edge network
NPTP: Non-prior trajectory prediction model
MUCEN: Mobile user-centering edge network
CS: Cloud server
BS: Base station
ES: Edge server
TD: Terminal device
PDPM: Prior displacement prediction model
MPTOS: Mobility-aware parallelizable task offloading strategy
PTOSBPP: Parallelizable task offloading strategy based position prediction
RS: Random strategy
NPPS: Non-position prediction parallel strategy
NPSBPP: Non-parallel strategy based position prediction

References

  1. Akherfi, K.; Gerndt, M.; Harroud, H. Mobile cloud computing for computation offloading: Issues and challenges. Appl. Comput. Inform. 2018, 14, 1–16. [Google Scholar] [CrossRef]
  2. Mao, Y.; You, C.; Zhang, J.; Huang, K.; Letaief, K.B. A survey on mobile edge computing: The communication perspective. IEEE Commun. Surv. Tutor. 2017, 19, 2322–2358. [Google Scholar] [CrossRef] [Green Version]
  3. Odun-Ayo, I.; Ananya, M.; Agono, F.; Goddy-Worlu, R. Cloud computing architecture: A critical analysis. In Proceedings of the 2018 18th International Conference on Computational Science and Applications (ICCSA), Melbourne, VIC, Australia, 2–5 July 2018; pp. 1–7. [Google Scholar]
  4. Zhang, A.N.; Chu, S.C.; Song, P.C.; Wang, H.; Pan, J.S. Task Scheduling in Cloud Computing Environment Using Advanced Phasmatodea Population Evolution Algorithms. Electronics 2022, 11, 1451. [Google Scholar] [CrossRef]
  5. Sabella, D.; Vaillant, A.; Kuure, P.; Rauschenbach, U.; Giust, F. Mobile-edge computing architecture: The role of MEC in the Internet of Things. IEEE Consum. Electron. Mag. 2016, 5, 84–91. [Google Scholar] [CrossRef]
  6. Nguyen, T.T.; Pan, J.S.; Dao, T.K.; Chu, S.C. Load balancing for mitigating hotspot problem in wireless sensor network based on enhanced diversity pollen. J. Inf. Telecommun. 2018, 2, 91–106. [Google Scholar] [CrossRef]
  7. Hu, Y.C.; Patel, M.; Sabella, D.; Sprecher, N.; Young, V. Mobile edge computing—A key technology towards 5G. ETSI White Pap. 2015, 11, 1–16. [Google Scholar]
  8. Pan, J.S.; Li, G.C.; Li, J.; Gao, M.; Chu, S.C. Application of the Novel Parallel QUasi-Affine TRansformation Evolution in WSN Coverage Optimization. In Advances in Intelligent Systems and Computing; Springer: Berlin/Heidelberg, Germany, 2022; pp. 241–251. [Google Scholar]
  9. Shakarami, A.; Ghobaei-Arani, M.; Shahidinejad, A. A survey on the computation offloading approaches in mobile edge computing: A machine learning-based perspective. Comput. Netw. 2020, 182, 107496. [Google Scholar] [CrossRef]
  10. Mach, P.; Becvar, Z. Mobile edge computing: A survey on architecture and computation offloading. IEEE Commun. Surv. Tutor. 2017, 19, 1628–1656. [Google Scholar] [CrossRef] [Green Version]
  11. Khan, W.Z.; Ahmed, E.; Hakak, S.; Yaqoob, I.; Ahmed, A. Edge computing: A survey. Future Gener. Comput. Syst. 2019, 97, 219–235. [Google Scholar] [CrossRef]
  12. Shahzadi, S.; Iqbal, M.; Dagiuklas, T.; Qayyum, Z.U. Multi-access edge computing: Open issues, challenges and future perspectives. J. Cloud Comput. 2017, 6, 30. [Google Scholar] [CrossRef]
  13. Xu, J.; Chen, L.; Zhou, P. Joint service caching and task offloading for mobile edge computing in dense networks. In Proceedings of the IEEE INFOCOM 2018-IEEE Conference on Computer Communications, Honolulu, HI, USA, 15–19 April 2018; pp. 207–215. [Google Scholar]
  14. Atanasov, I.; Pencheva, E.; Nametkov, A.; Trfonov, V. Provisioning of UE Behavior Prognostic by Multiaccess Edge Computing. In Proceedings of the 2019 International Symposium on Networks, Computers and Communications (ISNCC), Istanbul, Turkey, 18–20 June 2019; pp. 1–6. [Google Scholar]
  15. Dong, L.; Satpute, M.N.; Shan, J.; Liu, B.; Yu, Y.; Yan, T. Computation offloading for mobile-edge computing with multi-user. In Proceedings of the 2019 IEEE 39th international conference on distributed computing systems (ICDCS), Dallas, TX, USA, 7–9 July 2019; pp. 841–850. [Google Scholar]
  16. Ou, S.; Yang, K.; Liotta, A. An adaptive multi-constraint partitioning algorithm for offloading in pervasive systems. In Proceedings of the Fourth Annual IEEE International Conference on Pervasive Computing and Communications (PERCOM’06), Pisa, Italy, 13–17 March 2006; pp. 10–125. [Google Scholar]
  17. Pan, J.S.; Fan, F.; Chu, S.C.; Du, Z.; Zhao, H. A node location method in wireless sensor networks based on a hybrid optimization algorithm. Wirel. Commun. Mob. Comput. 2020, 1–14. [Google Scholar] [CrossRef]
  18. Nasrin, W.; Xie, J. SharedMEC: Sharing clouds to support user mobility in mobile edge computing. In Proceedings of the 2018 IEEE International Conference on Communications (ICC), Kansas City, MO, USA, 20–24 May 2018; pp. 1–6. [Google Scholar]
  19. Nadembega, A.; Hafid, A.S.; Brisebois, R. Mobility prediction model-based service migration procedure for follow me cloud to support QoS and QoE. In Proceedings of the 2016 IEEE International Conference on Communications (ICC), Kuala Lumpur, Malaysia, 22–27 May 2016; pp. 1–6. [Google Scholar]
  20. Elgazzar, K.; Martin, P.; Hassanein, H.S. Cloud-assisted computation offloading to support mobile services. IEEE Trans. Cloud Comput. 2014, 4, 279–292. [Google Scholar] [CrossRef]
  21. Nadembega, A.; Taleb, T.; Hafid, A. A destination prediction model based on historical data, contextual knowledge and spatial conceptual maps. In Proceedings of the 2012 IEEE International Conference on Communications (ICC), Ottawa, ON, Canada, 10–15 June 2012; pp. 1416–1420. [Google Scholar]
  22. Nadembega, A.; Hafid, A.; Taleb, T. A path prediction model to support mobile multimedia streaming. In Proceedings of the 2012 IEEE International Conference on Communications (ICC), Ottawa, ON, Canada, 10–15 June 2012; pp. 2001–2005. [Google Scholar]
  23. Li, W.; Wang, F.; Pan, Y.; Zhang, L.; Liu, J. Computing Cost Optimization for Multi-BS in MEC by Offloading. Mob. Netw. Appl. 2020, 27, 1–13. [Google Scholar] [CrossRef]
  24. Ouyang, T.; Zhou, Z.; Chen, X. Follow me at the edge: Mobility-aware dynamic service placement for mobile edge computing. IEEE J. Sel. Areas Commun. 2018, 36, 2333–2345. [Google Scholar] [CrossRef] [Green Version]
  25. Kumar, K.; Liu, J.; Lu, Y.H.; Bhargava, B. A survey of computation offloading for mobile systems. Mob. Netw. Appl. 2013, 18, 129–140. [Google Scholar] [CrossRef]
  26. Chen, X.; Pu, L.; Gao, L.; Wu, W.; Wu, D. Exploiting massive D2D collaboration for energy-efficient mobile edge computing. IEEE Wirel. Commun. 2017, 24, 64–71. [Google Scholar] [CrossRef] [Green Version]
  27. Xian, C.; Lu, Y.H.; Li, Z. Adaptive computation offloading for energy conservation on battery-powered systems. In Proceedings of the 2007 International Conference on Parallel and Distributed Systems, Hsinchu, Taiwan, 5–7 December 2007; pp. 1–8. [Google Scholar]
  28. Wolski, R.; Gurun, S.; Krintz, C.; Nurmi, D. Using bandwidth data to make computation offloading decisions. In Proceedings of the 2008 IEEE International Symposium on Parallel and Distributed Processing, Miami, FL, USA, 14–18 April 2008; pp. 1–8. [Google Scholar]
  29. Ateya, A.A.; Muthanna, A.; Vybornova, A.; Darya, P.; Koucheryavy, A. Energy-aware offloading algorithm for multi-level cloud based 5G system. In Internet of Things, Smart Spaces, and Next Generation Networks and Systems; Springer: Berlin/Heidelberg, Germany, 2018; pp. 355–370. [Google Scholar]
  30. Chen, X.; Jiao, L.; Li, W.; Fu, X. Efficient multi-user computation offloading for mobile-edge cloud computing. IEEE/ACM Trans. Netw. 2015, 24, 2795–2808. [Google Scholar] [CrossRef] [Green Version]
  31. Hirsch, M.; Mateos, C.; Zunino, A. Augmenting computing capabilities at the edge by jointly exploiting mobile devices: A survey. Future Gener. Comput. Syst. 2018, 88, 644–662. [Google Scholar] [CrossRef]
  32. Mukherjee, A.; De, D. Femtolet: A novel fifth generation network device for green mobile cloud computing. Simul. Model. Pract. Theory 2016, 62, 68–87. [Google Scholar] [CrossRef]
  33. Chun, B.G.; Maniatis, P. Augmented smartphone applications through clone cloud execution. In Proceedings of the HotOS, Ann Arbor, MI, USA, 1–3 June 2009; Volume 9, pp. 8–11. [Google Scholar]
  34. ur Rehman, M.H.; Chee, S.L.; Wah, T.Y.; Iqbal, A.; Jayaraman, P.P. Opportunistic computation offloading in mobile edge cloud computing environments. In Proceedings of the 2016 17th IEEE International Conference on Mobile Data Management (MDM), Porto, Portugal, 13–16 June 2016; Volume 1, pp. 208–213. [Google Scholar]
  35. Liu, L.; Zhao, M.; Yu, M.; Jan, M.A.; Lan, D.; Taherkordi, A. Mobility-aware multi-hop task offloading for autonomous driving in vehicular edge computing and networks. IEEE Trans. Intell. Transp. Syst. 2022, 1–14. [Google Scholar] [CrossRef]
  36. Long, T.; Ma, Y.; Xia, Y.; Xiao, X.; Peng, Q.; Zhao, J. A Mobility-Aware and Fault-Tolerant Service Offloading Method in Mobile Edge Computing. In Proceedings of the 2022 IEEE International Conference on Web Services (ICWS), Barcelona, Spain, 10–16 July 2022; pp. 67–72. [Google Scholar]
  37. Wu, C.; Peng, Q.; Xia, Y.; Lee, J. Mobility-aware tasks offloading in mobile edge computing environment. In Proceedings of the 2019 Seventh International Symposium on Computing and Networking (CANDAR), Nagasaki, Japan, 25–28 November 2019; pp. 204–210. [Google Scholar]
  38. Prabhala, B.; La Porta, T. Spatial and temporal considerations in next place predictions. In Proceedings of the 2015 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), Hong Kong, China, 26 April–1 May 2015; pp. 390–395. [Google Scholar]
  39. Prabhala, B.; La Porta, T. Next place predictions based on user mobility traces. In Proceedings of the 2015 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), Hong Kong, China, 26 April–1 May 2015; pp. 93–94. [Google Scholar]
  40. Wang, Z.; Zhao, Z.; Min, G.; Huang, X.; Ni, Q.; Wang, R. User mobility aware task assignment for mobile edge computing. Future Gener. Comput. Syst. 2018, 85, 1–8. [Google Scholar] [CrossRef]
  41. Hui, G.; Rui, L.L.; Gao, Z.P. V2V Task Offloading Algorithm with LSTM-based Spatiotemporal Trajectory Prediction Model in SVCNs. IEEE Trans. Veh. Technol. 2022, 1–16. [Google Scholar] [CrossRef]
  42. Aissioui, A.; Ksentini, A.; Gueroui, A.M.; Taleb, T. On enabling 5G automotive systems using follow me edge-cloud concept. IEEE Trans. Veh. Technol. 2018, 67, 5302–5316. [Google Scholar] [CrossRef] [Green Version]
  43. Sun, M.; Zhou, Z.; Wang, J.; Du, C.; Gaaloul, W. Energy-efficient IoT service composition for concurrent timed applications. Future Gener. Comput. Syst. 2019, 100, 1017–1030. [Google Scholar] [CrossRef]
  44. Yang, L.; Cao, J.; Cheng, H.; Ji, Y. Multi-user computation partitioning for latency sensitive mobile cloud applications. IEEE Trans. Comput. 2014, 64, 2253–2266. [Google Scholar] [CrossRef]
  45. Deng, S.; Huang, L.; Taheri, J.; Zomaya, A.Y. Computation offloading for service workflow in mobile cloud computing. IEEE Trans. Parallel Distrib. Syst. 2014, 26, 3317–3329. [Google Scholar] [CrossRef]
  46. Du, J.; Zhao, L.; Feng, J.; Chu, X. Computation offloading and resource allocation in mixed fog/cloud computing systems with min-max fairness guarantee. IEEE Trans. Commun. 2017, 66, 1594–1608. [Google Scholar] [CrossRef] [Green Version]
  47. Mehrabi, M.; You, D.; Latzko, V.; Salah, H.; Reisslein, M.; Fitzek, F.H. Device-enhanced MEC: Multi-access edge computing (MEC) aided by end device computation and caching: A survey. IEEE Access 2019, 7, 166079–166108. [Google Scholar] [CrossRef]
  48. Huang, M.; Liu, W.; Wang, T.; Liu, A.; Zhang, S. A cloud–MEC collaborative task offloading scheme with service orchestration. IEEE Internet Things J. 2019, 7, 5792–5805. [Google Scholar] [CrossRef]
Figure 1. Moving targets in the MEN.
Figure 2. Architecture of User-centered edge mobile network.
Figure 3. Prediction Model of Mobile user trajectory.
Figure 4. Task Offloading Hit Rate of NPPS and PTOSBPP. (a) Task Offloading Hit Rate of NPPS; (b) Task Offloading Hit Rate of PTOSBPP.
Figure 5. Bandwidth occupancy rate of different strategies. (a) The Number of Tasks Running on ESs or Cloud; (b) Bandwidth Occupancy Rate at Different Time.
Figure 6. Average Execution Efficiency of Task. (a) Task Requests at Different Time Slice; (b) Average Execution Efficiency of Tasks among Different Cases.
Table 1. Categories of Traditional Task Offloading Research in Edge Networks.
Decision / Offloading based on constant resources: Assigning computational tasks to a user's neighbouring devices based on the resources in the current area [1,4,13,16,23,26,27,28,29,30,34]. Shortcoming: the offloading strategy is limited by the number of available resources.
Decision / Offloading based on extended resources: Expanding the set of available resources based on the user's current area and the predicted target area, then offloading the tasks [19,21,22,35,36,37,38,39,40]. Shortcoming: predictive modeling needs historical movement trajectories, and such personal data are difficult to obtain.
Architecture or Devices: Optimizing the architecture of the edge network or updating devices to improve the performance of task offloading [3,5,18,20,31,32,33]. Shortcoming: it requires additional investment to upgrade and renovate the network.
Table 2. Detail of Different Experiments.
Experimental Scheme / Position Prediction / Parallel Strategy
Random strategy (RS): no position prediction, no parallel strategy
Non-position Prediction Parallel Strategy (NPPS): no position prediction, parallel strategy
Non-parallel Strategy Based Position Prediction (NPSBPP): position prediction, no parallel strategy
Parallelizable Task Offloading Strategy Based Position Prediction (PTOSBPP): position prediction, parallel strategy
Table 3. Experimental Parameters Setting.
Object / Parameter Setting
End-users: 80 users are randomly selected as a group from the 816 users in the EUA data set, which ensures the users are randomly distributed within the CBD; each experimental scenario contains 5 groups of users.
Task: 8 different types of task; each task is represented by a subtask set {subtask_i | subtask_i = (CPU, Data, Tolerance_max), 0 ≤ i ≤ N}, where N is the total number of sub-tasks. We also assume that each task contains 2 to 5 subtasks.
Cloud/MEN Servers: There are 125 edge servers and one cloud server in the CBD area, and we assume the resources of the cloud are sufficient.
Coverage Radius of BS (m): 150 ≤ R ≤ 400
Bandwidth: We assume that the bandwidth between the end, the edge and the cloud is adequate: BW_end_to_edge = 100 MB/s; BW_edge_to_cloud = 1000 MB/s.
Time Slot: T is a constant; we set T = 2 s, and the number of task requests generated by each user in each time slice is in the range [0, 3].
Rated Cycle of CPU: cloud δ_c = 10; MEN servers δ_e = 40; terminal devices δ_t = 100; with δ_c ≤ δ_e ≤ δ_t.
Moving speed (m/s): ϑ_1: 2 ≤ ϑ_1 ≤ 6; ϑ_2: 6 ≤ ϑ_2 ≤ 10; ϑ_3: 10 ≤ ϑ_3 ≤ 13; ϑ_4: 13 ≤ ϑ_4 ≤ 16, where ϑ_i is the speed of the different targets and ϑ_1 < ϑ_2 < ϑ_3 < ϑ_4.

Share and Cite

MDPI and ACS Style

Han, P.; Han, L.; Yuan, B.; Pan, J.-S.; Shang, J. A Parallelizable Task Offloading Model with Trajectory-Prediction for Mobile Edge Networks. Entropy 2022, 24, 1464. https://doi.org/10.3390/e24101464
