Article

Task Offloading and Resource Allocation for Tasks with Varied Requirements in Mobile Edge Computing Networks

School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100101, China
*
Authors to whom correspondence should be addressed.
Electronics 2023, 12(2), 366; https://doi.org/10.3390/electronics12020366
Submission received: 6 December 2022 / Revised: 5 January 2023 / Accepted: 7 January 2023 / Published: 10 January 2023
(This article belongs to the Special Issue Resource Allocation in Cloud–Edge–End Cooperation Networks)

Abstract

Edge computing enables devices with insufficient computing resources to offload their tasks to the edge for computing, to improve the service experience. Some existing work has noticed that the data size of offloaded tasks played a role in resource allocation shares but has not delved further into how the data size of an offloaded task affects resource allocation. Among offloaded tasks, those with larger data sizes often consume a larger share of system resources, potentially even monopolizing system resources if the data size is large enough. As a result, tasks with small or regular sizes lose the opportunity to be offloaded to the edge due to their limited data size. To address this issue, we introduce the concept of an emergency factor to penalize tasks with immense sizes for monopolizing system resources, while supporting tasks with small sizes to contend for system resources. The joint offloading decision and resource allocation problem is formulated as a mixed-integer nonlinear programming (MINLP) problem and further decomposed into an offloading decision subproblem and a resource allocation subproblem. Using the KKT conditions, we design a bisection search-based algorithm to find the optimal resource allocation scheme. Additionally, we propose a linear-search-based coordinate descent (CD) algorithm to identify the optimal offloading decision. Numerical results show that our proposed algorithm converges to the optimal scheme (for the minimal delay) when tasks are of regular size. Moreover, when tasks of immense, small and regular sizes coexist in the system, our scheme can exclude tasks of immense size from edge resource allocation, while still enabling tasks of small size to be offloaded.

1. Introduction

Mobile edge computing (MEC) has been prevailing in recent years for deploying computing resources at the network edge in proximity to end-user devices [1,2]. End users request a task offloading to improve service experiences [3]. However, the limited resources deployed at the edge can be overwhelmed by the ever-increasing number of user devices (UDs). Furthermore, the data size of different tasks ranges from tens of kilobytes to hundreds of megabytes, and the satisfactory completion time of these tasks can range from tens of milliseconds to several seconds. Therefore, an important research topic is how to effectively utilize the limited resources at the edge to provide satisfactory service quality for tasks with varied requirements.
Task offloading combined with resource allocation has garnered significant research attention in recent years [4]. Ensuring that critical tasks can be processed in a timely manner in delay-sensitive scenarios [5,6], such as automated driving [7], industrial manufacturing [8], and smart cities [9], is of paramount importance. As such, the allocation of bandwidth and computing resources should be biased towards tasks with higher requirements and/or importance. While previous research has focused on minimizing task execution time [10,11,12,13] and energy consumption [14], there have been relatively few studies that focus on resource allocation among tasks with significant differences in data size. Naouri et al. [15] differentiated tasks into high-computation and high-communication tasks and proposed processing high-communication tasks at the edge or nearby peer devices, while offloading high-computation tasks to the cloud. Some prior work [10,11,16] formulated the closed-form solution for bandwidth and computing resource allocation in time-division multiple access (TDMA) MEC systems, indicating that the share of bandwidth allocated to an offloaded task was proportional to its data size. However, these studies have not thoroughly examined the impact of significant differences in data size on resource allocation, or how to address this issue if necessary.
While some articles [10,11] have attempted to differentiate the weights of mobile devices (tasks) to emphasize the differences in task requirements, to our knowledge, they, like other existing works, have overlooked the fact that tasks with small data sizes may be crowded out of resource allocation by tasks with immense sizes, thereby losing the opportunity to be offloaded. In this paper, we investigate the offloading decision and resource allocation mechanism among tasks with significant differences in data size, and we propose a scheme to prevent tasks with immense sizes from monopolizing system resources, while still allowing tasks with small sizes to contend for system resources. The main contributions of this paper are as follows:
  • To address the issue of tasks with immense sizes monopolizing system resources, we introduce the concept of an emergency factor to support tasks with small sizes in contending for system resources. The joint optimization of offloading decisions and edge resource allocation among tasks with significant differences in data size is formulated as a mixed-integer nonlinear programming problem.
  • We decompose the MINLP problem into two subproblems and propose a linear-search-based coordinate descent method and a bisection-search-based resource allocation algorithm to address the offloading decision and resource allocation subproblems, respectively.
  • Simulation results demonstrate the effectiveness of our proposed scheme in regulating offloading decisions and resource allocation when there is a significant difference in the data size of the offloaded tasks. When the tasks are of regular size, our scheme obtains the same minimum delay as the compared baseline scheme.
The remainder of this paper is organized as follows. Section 2 discusses the related work. Section 3 shows the details of the proposed system model. Section 4 introduces the optimal solution based on the KKT conditions and CD. Finally, Section 5 presents the simulation results and analyses, and we conclude our work in Section 6.

2. Related Work

Existing research on task offloading and resource allocation has focused on various objectives. Some studies aim to minimize task completion time in the system. Ren et al. [11] designed a subgradient-based algorithm to reduce latency for mobile devices with divisible compression tasks. Xing et al. [17] minimized task execution time with the help of helpers in a TDMA system, using relaxation-based and decoupling-based approaches to obtain a suboptimal solution. Zhao et al. [18] jointly optimized beamforming and resource allocation to minimize the maximal delay encountered by users in the mmWave MEC system. Ning et al. [19] incorporated cloud and mobile edge computing and formulated a computation delay minimization problem with limited bandwidth resources. Li et al. [20] minimized service delay with a user-mobility prediction model in heterogeneous networks. Chen and Hao [21] minimized total task duration in software-defined ultradense networks. Tang and Wong [22] proposed a deep reinforcement learning (DRL) method to decide on the task offloading issue and introduced computation and transmission queues to model delays encountered in the MEC system. Edge computing resources were equally allocated for tasks at edge nodes, which implied that the computing resources allotted to current tasks would be reduced with the arrival of new tasks.
In addition, part of the current literature focuses on designs that minimize energy consumption in MEC systems. You et al. [23] studied an energy-efficient wireless resource allocation policy for computation offloading in both TDMA and orthogonal frequency-division multiple access (OFDMA) systems. Chen et al. [24] jointly optimized bandwidth and computation resource allocation to minimize UDs’ expected energy consumption, considering caching. The initial problem was formulated as an MINLP, and the caching decision subproblem was decoupled and solved by a learning-based deep neural network. Dai et al. [25] designed a DRL method to learn a joint offloading decision and edge-computing resource allocation policy to minimize energy consumption. Chen et al. [26] incorporated the Monte Carlo tree search (MCTS) algorithm with a deep neural network to learn the optimal bandwidth and computing resource allocation policy. Yan et al. [27] investigated the offloading and resource allocation problem for tasks under the general dependency model. An actor–critic-based DRL method was proposed to generate the offloading actions.
Furthermore, there have been several efforts to design the task offloading and resource allocation schemes based on other optimization goals. Chen et al. [28] established a Stackelberg game based incentive mechanism to motivate BS to allocate resources more reasonably. Bi and Zhang [16] modeled the computation rate maximization problem in wireless-powered TDMA edge networks as an MINLP problem, which was further decoupled and solved with the ADMM-based method and coordinate descent (CD) method. Huang et al. [29] decoupled the computation rate maximization problem into a computation offloading decision subproblem and a wireless resource allocation subproblem. They solved the offloading decision subproblem with a DNN method and the wireless resource allocation subproblem with a one-dimensional bisection search method. Furthermore, Bi et al. [30] adopted the Lyapunov optimization theory to decompose the maximization problem of the long-term weighted sum of the computation rate of all devices into a single-step optimization problem solved with an actor–critic-based deep reinforcement learning method. While some existing research has considered caching [2,24,31] in edge networks and user mobility issues [13,20,32], it falls outside the scope of this paper.
The characteristics of part of the discussed works are summarized in Table 1. However, they all ignore that, in resource allocation, the allocation share for small-data-volume tasks can be crowded out by large-data-volume tasks. Therefore, this paper reveals how this happens and presents our solution to eliminate this effect. Since this paper focuses on tasks with significant differences in data size, the resulting explosive state space would pose a substantial challenge to model training for deep reinforcement learning methods using neural networks. Therefore, deep reinforcement learning algorithms are not considered in this paper.

3. System Model

As shown in Figure 1, the system works in an OFDMA manner and consists of a base station (BS) serving M UDs from a set $\mathcal{M} = \{1, 2, \ldots, M\}$. The BS is endowed with a bandwidth $B$ (in hertz) and connected with an edge server whose computing capacity is denoted as $F_e$ (in CPU cycles per second). AP, BS and the edge are used interchangeably in the remainder of this article. Multiple UDs undertaking $F$ types of computation tasks contend for resources to shorten task completion time. In this paper, we classified computation tasks into four categories (i.e., $F = 4$) based on their data size, i.e., tasks of small size, tasks of regular size, tasks of large size and tasks of immense size.
A computation task is denoted as a quadruplet $o_m = (\rho_m^f, l_m^f, e_m^f, T_m^f)$. $\rho_m^f$ denotes the number of CPU cycles required to process one bit of task $t_m^f$ ($f \in \{1, 2, \ldots, F\}$) on UD $m$. $l_m^f$ (in bits) represents the data size of task $t_m^f$. $e_m^f$ is the emergency factor of task $t_m^f$, which can be utilized to regulate resource allocation and offloading decisions. $T_m^f$ indicates the maximum acceptable processing delay of $t_m^f$. It is worth mentioning that although the emergency factor is described as an inherent part of the task, it can also be defined as a configurable parameter managed by the BS. $\mathbf{o} = \{o_m\}_{m \in \mathcal{M}}$ contains all the task information from UDs requesting a task offloading. Instead of the arbitrarily divisible task processing model, the binary task processing model was considered in this paper. It was assumed that a task was either completed locally ($x_m = 0$) or at the AP ($x_m = 1$), where $x_m$ is the task offloading decision of UD $m$. Once $x_m = 1$, the AP has to allocate $\alpha_m \in (0, 1]$ of its wireless bandwidth and $\beta_m \in (0, 1]$ of its computing resources to task $t_m^f$. All the resources allocated to UDs should not exceed the AP's capacity,

$$\sum_{m=1}^{M} x_m \alpha_m \le 1,$$

$$\sum_{m=1}^{M} x_m \beta_m \le 1.$$
In this study, we focused on the offloading decision and resource allocation within a scheduling slot. It was assumed that each user device (UD) had at most one task to process and the channel status between each UD and the base station was assumed to be quasi-static.

3.1. Local Computing

Once $t_m^f$ has to be processed locally, i.e., $x_m = 0$, UD $m$ exploits $f_m$ of its computing resources to process the task. $f_m$ should not violate the capacity constraint,

$$f_m \le Fc_m, \quad \forall m \in \mathcal{M},$$

where $Fc_m$ is the maximum computing speed in CPU cycles per second and $\mathbf{Fc} = (Fc_1, Fc_2, \ldots, Fc_M)$ denotes the computing capacity of the UDs in the system. Then, the local processing delay can be written as

$$t_m^{loc} = \frac{\rho_m^f l_m^f}{f_m}, \quad \forall m \in \mathcal{M}.$$

3.2. Edge Computing

UD $m$ utilizes the allocated bandwidth $\alpha_m B$ to upload task data for edge computing. Hence, the maximum achievable transmission rate can be calculated by [10]

$$r_m^{up} = \alpha_m B \log_2\left(1 + \frac{p_m h_m^2}{\sigma^2}\right) = \alpha_m R_m,$$

where $p_m$ represents $m$'s transmit power, $h_m$ denotes the channel gain between $m$ and the AP, and $\sigma^2$ indicates the background noise power. Accordingly, the corresponding transmission delay can be expressed as

$$t_m^{up} = \frac{l_m^f}{r_m^{up}} = \frac{l_m^f}{\alpha_m B \log_2\left(1 + \frac{p_m h_m^2}{\sigma^2}\right)}.$$

The BS allocates $\beta_m F_e$ of its computing resources to process $t_m^f$ after the transmission. In this case, the corresponding computation delay can be denoted as

$$t_m^{com} = \frac{\rho_m^f l_m^f}{\beta_m F_e}.$$
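To make the delay model concrete, the three delays above can be evaluated directly; the following is a minimal sketch in which all numeric values (task size, cycles per bit, channel parameters) are illustrative assumptions, not the paper's settings.

```python
import math

def local_delay(rho, l, f_m):
    """Local processing delay: rho * l / f_m."""
    return rho * l / f_m

def upload_delay(l, alpha, B, p, h, sigma2):
    """Transmission delay: l divided by the achievable uplink rate."""
    rate = alpha * B * math.log2(1 + p * h**2 / sigma2)
    return l / rate

def edge_delay(rho, l, beta, F_e):
    """Edge computation delay: rho * l / (beta * F_e)."""
    return rho * l / (beta * F_e)

# Illustrative task: 1 megabit, 500 cycles/bit.
l, rho = 1e6, 500
t_loc = local_delay(rho, l, f_m=1e8)             # 5.0 s locally
t_up = upload_delay(l, alpha=0.2, B=2e6, p=0.2, h=1e-3, sigma2=1e-9)
t_com = edge_delay(rho, l, beta=0.2, F_e=1e10)   # 0.25 s of edge computation
```

Even with a modest 20% share of bandwidth and edge CPU, offloading beats the 5 s local delay in this toy configuration.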

3.3. Problem Formulation

We aim to maximize the processing time gain harvested from the task offloading. Hereafter, the terms revenue and reward are used interchangeably to denote this objective. The joint task offloading and resource allocation problem at the edge with constrained bandwidth and computing resources is formulated as a mixed-integer nonlinear programming (MINLP) problem (P0), denoted as
$$\underset{\mathbf{x}, \mathbf{f}, \mathbf{p}, \boldsymbol{\alpha}, \boldsymbol{\beta}}{\text{Maximize}}: \quad \sum_{m=1}^{M} x_m e_m^f \left(T_m^f - t_m^{up} - t_m^{com}\right) + (1 - x_m) e_m^f \left(T_m^f - t_m^{loc}\right)$$

$$\text{s.t.} \quad C1: \sum_{m=1}^{M} x_m \alpha_m \le 1, \quad C2: \sum_{m=1}^{M} x_m \beta_m \le 1, \quad C3: f_m \le Fc_m, \ \forall m \in \mathcal{M}, \quad C4: p_m \le P_m^{max}, \ \forall m \in \mathcal{M}, \quad C5: x_m \in \{0, 1\}.$$
$C1$ is the bandwidth allocation constraint, and $C2$ is the computing resource allocation constraint. $C3$ reveals the maximum local computing speed, while $C4$ shows a UD's maximal transmit power.
The formulated problem (P0) is intractable due to the coupling of the variables $\mathbf{x} = (x_1, x_2, \ldots, x_M)$, $\boldsymbol{\alpha} = (\alpha_1, \alpha_2, \ldots, \alpha_M)$ and $\boldsymbol{\beta} = (\beta_1, \beta_2, \ldots, \beta_M)$. However, once $\mathbf{x}$ is determined, (P0) reduces to a convex optimization problem.
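Given the per-task delays, the objective of (P0) is a straightforward weighted sum; the sketch below (with made-up inputs) shows how a candidate decision $\mathbf{x}$ would be scored.

```python
def system_reward(x, e, T, t_loc, t_up, t_com):
    """Objective of (P0): offloaded tasks (x_m = 1) earn e*(T - t_up - t_com),
    locally processed tasks earn e*(T - t_loc)."""
    total = 0.0
    for m in range(len(x)):
        if x[m] == 1:
            total += e[m] * (T[m] - t_up[m] - t_com[m])
        else:
            total += e[m] * (T[m] - t_loc[m])
    return total

# Two UDs: UD 0 offloads, UD 1 computes locally (illustrative numbers).
r = system_reward(x=[1, 0], e=[1.0, 1.0], T=[1.0, 1.0],
                  t_loc=[5.0, 0.4], t_up=[0.3, 0.0], t_com=[0.25, 0.0])
# r = (1 - 0.3 - 0.25) + (1 - 0.4) = 1.05
```

Note that a task whose delay exceeds its deadline $T_m^f$ contributes a negative term, which is exactly why a timed-out local execution hurts the total revenue.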

4. Decoupled Computation Offloading and Resource Allocation with Coordinate Descent (CD)

Inspired by [17], we adopted the CD method [16] to obtain the offloading scheme $\mathbf{x} = (x_1, x_2, \ldots, x_M)$, where $x_m \in \{0, 1\}$ indicates whether UD $m$ offloads or not. The core idea of the CD-based scheme is to iteratively fix $\mathbf{x}_{-m}^i = (x_1^i, \ldots, x_{m-1}^i, x_{m+1}^i, \ldots, x_M^i)$ (that is, to use the values from the $i$th iteration) and find the local optimum over $x_m^{i+1}$. With the generated offloading scheme, the initial problem (P0) can be divided into two parts, i.e., a local processing part (P1) for $\mathcal{M}_0 = \{n \mid x_n = 0, n \in \mathcal{M}\}$ and an edge resource allocation part (P2) for $\mathcal{M}_1 = \{m \mid x_m = 1, m \in \mathcal{M}\}$. The whole procedure is summarized in Algorithm 1.
Algorithm 1: Linear CD-Aided Optimal Resource Allocation
Input: ϑ = ( ϑ 1 , ϑ 2 , , ϑ M ) in ascending order
Output: offloading decision x * and corresponding resource allocation scheme osch ;
  • x 0 ( 0 , 0 , , 0 ) ;
  • for $i$ in $\{1, 2, \ldots, M\}$ do:
  •     $\mathbf{x}^i \leftarrow \mathbf{x}^{i-1}$ and $x_i^i = 1$;
  •     get ( α i , β i , p i ) with x i as the input of Algorithm 2 and calculate r i with (8);
  •     record r i , x i and ( α i , β i , p i ) ;
  •  
  • find the max r i and corresponding offloading scheme x i and resource allocation scheme ( α i , β i , p i ) , which is recorded as maxR , x * and osch , respectively;
  •  
  • while True do:
  •     for  i { 1 , 2 , 3 , , M }  do:
  •          $\mathbf{x} \leftarrow \mathbf{x}^*$, and $x_i \leftarrow x_i \oplus 1$ ($\oplus$ is the XOR operator);
  •         obtain $(\boldsymbol{\alpha}, \boldsymbol{\beta}, \mathbf{p})$ with Algorithm 2 ($\mathbf{x}$ as the input) and calculate $r$ with (8);
  •         record r, x and ( α , β , p ) ;
  •     end for
  •     if the maximum of the recorded $r > maxR$ then
  •          $maxR \leftarrow r_{max}$, $\mathbf{x}^* \leftarrow \mathbf{x}_{max}$ and $\mathbf{osch} \leftarrow \mathbf{osch}_{max}$;
  •     else
  •         break;
  • return scheme x * , osch ;
For each $\mathbf{x}$, we solve the corresponding (P1) and (P2) and obtain a feasible solution to (P0). The computation complexity of our proposed bisection-search-based resource allocation scheme in Algorithm 2 is $O(M)$ [16]. In the worst case, the CD method solves (P1) and (P2) $M^2$ times with Algorithm 2 to search for the offloading decision scheme with the maximal system gain. Therefore, the overall complexity of our proposed scheme to solve (P0) is $O(M^3)$. For comparison, we used the brute-force search method as a baseline for our CD-based algorithm. The brute-force method enumerates all $2^M$ offloading schemes and solves the corresponding (P1) and (P2), resulting in a complexity of $O(M^2 2^M)$. It is far from a time-friendly solution because its computation time grows exponentially with $M$, becoming impractical beyond a handful of UDs (e.g., 8).
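The linear-search CD loop of Algorithm 1 can be outlined as follows; here `evaluate(x)` is a hypothetical stand-in for solving (P1) and (P2) with Algorithm 2 and returning the system reward, so this is an illustrative sketch rather than the paper's exact procedure.

```python
def coordinate_descent(M, evaluate):
    """Linear-search coordinate descent over binary offloading decisions.
    `evaluate(x)` returns the system reward for the offloading vector x."""
    # Initialization pass: try offloading each UD on its own.
    best_x, best_r = None, float("-inf")
    for i in range(M):
        x = [0] * M
        x[i] = 1
        r = evaluate(x)
        if r > best_r:
            best_x, best_r = x, r
    # Iteratively flip one coordinate at a time while the reward improves.
    improved = True
    while improved:
        improved = False
        for i in range(M):
            x = best_x.copy()
            x[i] ^= 1                      # XOR flip of coordinate i
            r = evaluate(x)
            if r > best_r:
                best_x, best_r, improved = x, r, True
    return best_x, best_r

# Toy evaluate: reward peaks at a known target decision (illustrative only).
target = [1, 0, 1]
best_x, best_r = coordinate_descent(3, lambda x: -sum(a != b for a, b in zip(x, target)))
# best_x == [1, 0, 1]
```

Like the paper's Algorithm 1, this is a greedy local search: it terminates when no single-coordinate flip improves the reward, which is why it costs at most $O(M^2)$ calls to the inner solver instead of the brute-force $2^M$.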
Algorithm 2: Bisection-Search-Based Resource Allocation
Input:  x 0 = ( x 1 , x 2 , , x m , , x M ) ;
Output: the optimal ( α * , β * , p * ) ;
  • $\mathcal{M}_1 = \{m \mid x_m = 1, x_m \in \mathbf{x}^0\}$ for offloading UDs;
  • $\delta = 1 \times 10^{-6}$, $Lower_{\lambda_1} = 0$, $Lower_{\lambda_2} = 0$; $Upper_{\lambda_1}$ and $Upper_{\lambda_2}$ are sufficiently large;
  • while $|Upper_{\lambda_1} - Lower_{\lambda_1}| \ge \delta$ do
  •      $\lambda_1 = \frac{Lower_{\lambda_1} + Upper_{\lambda_1}}{2}$;
  •     if $U_1(\lambda_1) > 0$ then
  •          $Upper_{\lambda_1} = \lambda_1$;
  •     else
  •          $Lower_{\lambda_1} = \lambda_1$;
  • while $|Upper_{\lambda_2} - Lower_{\lambda_2}| \ge \delta$ do
  •      $\lambda_2 = \frac{Lower_{\lambda_2} + Upper_{\lambda_2}}{2}$;
  •     if $U_2(\lambda_2) > 0$ then
  •          $Upper_{\lambda_2} = \lambda_2$;
  •     else
  •          $Lower_{\lambda_2} = \lambda_2$;
  • for $m \in \mathcal{M}_1$ do
  •      $\alpha_m = \sqrt{\vartheta_m / \lambda_1}$, $\beta_m = \sqrt{\zeta_m / \lambda_2}$, $p_m = P_m^{max}$;
  • return $(\boldsymbol{\alpha}^*, \boldsymbol{\beta}^*, \mathbf{p}^*)$;

4.1. Local Processing Part

Once the offloading decision is determined, tasks processed locally can be extracted and further expressed as:
$$\underset{f_n}{\text{Maximize}}: \quad \sum_{n \in \mathcal{M}_0} e_n^f \left(T_n^f - \frac{\rho_n^f l_n^f}{f_n}\right) \qquad \text{s.t.} \quad C1: f_n \le Fc_n, \ \forall n \in \mathcal{M}_0.$$
$C1$ represents the local processing capacity constraint. It is intuitive to infer from (P1) that a UD will greedily utilize all of its computing resources to process a task locally, so the best local computing resource allocation for UD $n$ ($n \in \mathcal{M}_0$) is $f_n = Fc_n$. ($\mathcal{M}_0$ and $\mathcal{M}_1$ represent the local processing UD set and the offloading UD set, respectively.) Thus, given the decision $\mathbf{x}$, (P1) can be solved by setting $f_n = Fc_n, \forall n \in \mathcal{M}_0$. The remaining problem is how to solve (P2), which is addressed in the next subsection.

4.2. Edge Processing Part

For tasks offloaded to the edge, the BS allocates its available resources to accommodate these requests. The optimal resource allocation problem between offloading UDs can be denoted as:
$$\underset{p_m, \alpha_m, \beta_m}{\text{Maximize}}: \quad H = \sum_{m \in \mathcal{M}_1} e_m^f \left(T_m^f - \frac{l_m^f}{\alpha_m B \log_2\left(1 + \frac{p_m h_m^2}{\sigma^2}\right)} - \frac{\rho_m^f l_m^f}{\beta_m F_e}\right)$$

$$\text{s.t.} \quad C1: \sum_{m \in \mathcal{M}_1} \alpha_m \le 1, \quad C2: \sum_{m \in \mathcal{M}_1} \beta_m \le 1, \quad C3: p_m \le P_m^{max}, \ \forall m \in \mathcal{M}_1.$$
Corollary 1.
(P2) is a convex optimization problem in $(p_m, \alpha_m, \beta_m)$, $m \in \mathcal{M}_1$, for a given $\mathcal{M}_1$.
Proof. 
Please see the detailed proof in Appendix A. □
To obtain the optimal allocation scheme for (P2), the Lagrange multipliers $\lambda_1$, $\lambda_2$ and $\boldsymbol{\omega} = \{\omega_m\}_{m \in \mathcal{M}_1}$ are introduced, and the Lagrangian function is formulated as:

$$\mathcal{L}(\mathbf{p}, \boldsymbol{\alpha}, \boldsymbol{\beta}, \lambda_1, \lambda_2, \boldsymbol{\omega}) = \sum_{m \in \mathcal{M}_1} e_m^f \left(T_m^f - \frac{l_m^f}{\alpha_m B \log_2\left(1 + \frac{p_m h_m^2}{\sigma^2}\right)} - \frac{\rho_m^f l_m^f}{\beta_m F_e}\right) + \lambda_1 \left(1 - \sum_{m \in \mathcal{M}_1} \alpha_m\right) + \lambda_2 \left(1 - \sum_{m \in \mathcal{M}_1} \beta_m\right) + \sum_{m \in \mathcal{M}_1} \omega_m \left(P_m^{max} - p_m\right);$$
the Karush–Kuhn–Tucker (KKT) conditions are denoted as:
$$\frac{\partial \mathcal{L}}{\partial \alpha_m} = \frac{e_m^f l_m^f}{\alpha_m^2 B \log_2\left(1 + \frac{p_m h_m^2}{\sigma^2}\right)} - \lambda_1 = 0$$

$$\frac{\partial \mathcal{L}}{\partial \beta_m} = \frac{e_m^f \rho_m^f l_m^f}{\beta_m^2 F_e} - \lambda_2 = 0$$

$$\frac{\partial \mathcal{L}}{\partial p_m} = \frac{e_m^f l_m^f h_m^2}{\alpha_m B \sigma^2 \ln 2 \left(1 + \frac{p_m h_m^2}{\sigma^2}\right) \left[\log_2\left(1 + \frac{p_m h_m^2}{\sigma^2}\right)\right]^2} - \omega_m = 0$$

$$\lambda_1 \left(1 - \sum_{m \in \mathcal{M}_1} \alpha_m\right) = 0$$

$$\lambda_2 \left(1 - \sum_{m \in \mathcal{M}_1} \beta_m\right) = 0$$

$$\omega_m \left(P_m^{max} - p_m\right) = 0$$

$$1 - \sum_{m \in \mathcal{M}_1} \alpha_m \ge 0$$

$$1 - \sum_{m \in \mathcal{M}_1} \beta_m \ge 0$$

$$P_m^{max} - p_m \ge 0, \ \forall m \in \mathcal{M}_1$$

$$\lambda_1 \ge 0, \quad \lambda_2 \ge 0, \quad \omega_m \ge 0, \ \forall m \in \mathcal{M}_1.$$
Corollary 2.
At the optimum, the resources are exhausted: all vacant resources are allocated among the UDs in $\mathcal{M}_1$. The optimal allocation scheme for $\mathcal{M}_1$ under the optimal $\lambda_1^*$ and $\lambda_2^*$ is:
$$\left(\alpha_m^*, \beta_m^*, p_m^*\right) = \left(\sqrt{\frac{\vartheta_m}{\lambda_1^*}}, \sqrt{\frac{\zeta_m}{\lambda_2^*}}, P_m^{max}\right),$$

where $\vartheta_m = \frac{e_m^f l_m^f}{B \log_2\left(1 + \frac{P_m^{max} h_m^2}{\sigma^2}\right)}$ and $\zeta_m = \frac{e_m^f \rho_m^f l_m^f}{F_e}$.
Proof. 
Please see the detailed proof in Appendix B. □
For the sake of illustration, the auxiliary functions $U_1(\lambda_1)$ and $U_2(\lambda_2)$ used to obtain $\lambda_1^*$ and $\lambda_2^*$ are introduced and denoted as

$$U_1(\lambda_1) = 1 - \sum_{m \in \mathcal{M}_1} \alpha_m = 1 - \sum_{m \in \mathcal{M}_1} \sqrt{\frac{\vartheta_m}{\lambda_1}},$$

$$U_2(\lambda_2) = 1 - \sum_{m \in \mathcal{M}_1} \beta_m = 1 - \sum_{m \in \mathcal{M}_1} \sqrt{\frac{\zeta_m}{\lambda_2}}.$$
$U_1(\lambda_1)$ and $U_2(\lambda_2)$ are monotonically increasing in $\lambda_1$ and $\lambda_2$, respectively (the allocated shares shrink as the multipliers grow), so each has a unique root. Thus, the optimal $\lambda_1^*$ and $\lambda_2^*$ can be obtained by a bisection search on the auxiliary functions $U_1(\lambda_1)$ and $U_2(\lambda_2)$. Accordingly, the proposed resource allocation scheme is summarized in Algorithm 2.
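A minimal sketch of this bisection search, assuming the closed-form shares $\sqrt{\vartheta_m/\lambda}$ from Corollary 2; the tolerance, the initial upper bound and the `theta` values are illustrative.

```python
import math

def bisect_multiplier(theta, delta=1e-6, upper=1e12):
    """Find lambda* such that U(lambda) = 1 - sum(sqrt(theta_m/lambda)) = 0.
    U increases with lambda, so U > 0 means lambda is too large."""
    lower = delta  # keep lambda strictly positive
    while upper - lower > delta:
        lam = (lower + upper) / 2
        if 1 - sum(math.sqrt(t / lam) for t in theta) > 0:
            upper = lam   # shares sum to less than 1: shrink lambda
        else:
            lower = lam
    return (lower + upper) / 2

# Three offloading UDs with illustrative theta values.
theta = [0.05, 0.1, 0.2]
lam_star = bisect_multiplier(theta)
shares = [math.sqrt(t / lam_star) for t in theta]
# The optimal shares exhaust the resource: sum(shares) is approximately 1.
```

For this particular form of $U$, the root can even be checked in closed form, $\lambda^* = (\sum_m \sqrt{\vartheta_m})^2$, which makes the bisection easy to validate.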

5. Simulation and Results

In this section, we compare the performance of our proposed linear CD-based algorithm (LCD) with existing schemes and demonstrate the role of the emergency factor in offloading decisions and resource allocation. Additionally, we compare our approach to a DRL-based scheme [29], where the data size of a task was drawn from a distribution with probability $p$ on regular size ($(8, 10)$ megabits) and $1 - p$ on small size ($(2, 4)$ megabits), large size ($(16, 25)$ megabits) and immense size ($(70, 80)$ megabits). Our scheme penalized tasks of immense size by setting their emergency factor to $\frac{\bar{L}_{re}}{\epsilon_p L_{im}}$, where $\bar{L}_{re}$ is the average regular size, $\epsilon_p$ is the penalty coefficient and $L_{im}$ is the data volume of the task with an immense data size. Conversely, we supported tasks of small size by setting their emergency factor to $\frac{\epsilon_e \bar{L}_{re}}{L_{sm}}$, where $\epsilon_e$ is the enhancement coefficient and $L_{sm}$ is the data volume of the task with a small data size. The baseline schemes used in this paper included:
  • All offload (AO): all tasks are processed at the edge server.
  • All local (AL): all tasks are processed locally.
  • Random offload (RO): the offloading decision is randomly generated and the resource allocation decisions are obtained with Algorithm 2.
  • Brute-force search method (BF): searches all the 2 M offloading schemes and selects the one with the highest reward as the final solution.
  • Naive coordinate descent (NCD): directly goes into the “while loop” [16] with the randomly initialized x 0 in Algorithm 1.
  • Deep-reinforcement-learning-based scheme (DRL): uses channel conditions and task data size to make offloading decisions and utilizes the critic module to get the resource allocation scheme with minimum delay, which is slightly different from [29].
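The size-dependent emergency-factor assignment described above can be sketched as follows; the size thresholds, the mean regular size of 9 Mbit and the unit coefficients are illustrative assumptions, not the paper's exact configuration.

```python
def emergency_factor(size_mbit, mean_regular=9.0, eps_p=1.0, eps_e=1.0):
    """Penalize immense tasks and boost small tasks; others keep factor 1.
    Thresholds mirror the sampling ranges above (assumed, not normative)."""
    if size_mbit > 70:                 # immense size: (70, 80) Mbit
        return mean_regular / (eps_p * size_mbit)
    if size_mbit < 4:                  # small size: (2, 4) Mbit
        return eps_e * mean_regular / size_mbit
    return 1.0

# An immense task is heavily penalized, a small task is boosted:
print(emergency_factor(75))   # 0.12
print(emergency_factor(3))    # 3.0
```

Because the factor scales inversely with the data size at both extremes, an immense task loses its built-in advantage in the allocation shares while a small task gains one.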

5.1. Simulation Setting

By default, there were $M = 10$ UDs in our system. The channel gain of the large-scale fading model in this paper was $A_d \left(\frac{3 \times 10^8}{4 \pi f_0 d_m}\right)^{d_0} \chi$ [30], where $A_d$ denotes the antenna gain of a UD, $f_0$ represents the carrier frequency, $d_m$ is the distance between UD $m$ and the BS in meters, the path-loss exponent was $d_0 = 2.6$ and $\chi$ followed a Rayleigh distribution with unit variance. The BS had a bandwidth of $B = 2$ MHz and a computing capacity of $F_e = 1 \times 10^{10}$ cycles/second by default. The maximal transmission power was $P_m^{max} = 0.2$ W. The UDs' local computing capacity $\mathbf{Fc}$ ranged from $Fc_m = 1 \times 10^8$ cycles/second to $1 \times 10^9$ cycles/second, $\forall m \in \mathcal{M}$. The value of $e^f$ was set to 1, and $T^f = 1$ s was the maximal acceptable service delay. We considered tasks of $F = 4$ categories, and $\rho^f$ was randomly taken from $\{100, 1000\}$ cycles/bit.
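Under these settings, the channel gain and the resulting uplink rate can be computed as below; the antenna gain $A_d = 4.11$, the carrier frequency $f_0 = 915$ MHz, the noise power and the fading draw are assumed values, not taken from the paper.

```python
import math

A_d, f0, d0 = 4.11, 915e6, 2.6       # antenna gain and carrier frequency: assumptions
B, sigma2, p_max = 2e6, 1e-10, 0.2   # bandwidth, noise power (assumed), max power

def channel_gain(d_m, chi=1.0):
    """Large-scale gain h_m^2 = A_d * (3e8 / (4*pi*f0*d_m))**d0 * chi."""
    return A_d * (3e8 / (4 * math.pi * f0 * d_m)) ** d0 * chi

def uplink_rate(alpha, d_m):
    """Achievable rate on a fraction alpha of the bandwidth B."""
    return alpha * B * math.log2(1 + p_max * channel_gain(d_m) / sigma2)
```

As expected from the path-loss model, the achievable rate decreases with distance, e.g., `uplink_rate(0.5, 50) > uplink_rate(0.5, 100)`.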

5.2. Result Discussion

In Figure 2, we varied $\mathbf{Fc}$ from $Fc_m = 1 \times 10^8, \forall m \in \mathcal{M}$, to $Fc_m = 5 \times 10^7, \forall m \in \mathcal{M}$. This caused tasks processed locally, as shown in Figure 2b, to time out. The results in Figure 2a demonstrate that our proposed LCD algorithm could effectively converge to the optimal scheme (the results from BF), while the NCD method deviated slightly from the optimal solution. Furthermore, the resources deployed at the edge could support the simultaneous task offloading of six to eight UDs (with a data size of one megabit); beyond that threshold, the marginal benefit of offloading additional tasks vanished. However, in Figure 2a, the overall rewards remained unchanged and even increased slightly as the number of UDs increased. This was because the local computing resources were sufficient to process the tasks locally without incurring negative rewards; the system could even enhance revenue by offloading tasks from UDs with more competitive conditions (e.g., better channel conditions). This was no longer the case in Figure 2b for $Fc_m = 5 \times 10^7, \forall m \in \mathcal{M}$, where the revenue declined as the number of UDs increased. In this scenario, processing tasks locally resulted in negative rewards due to the timeouts caused by an inadequate local processing capacity.
Figure 3, Figure 4, Figure 5, Figure 6 and Figure 7 show how the offloading decision and resource allocation for all tasks varied with the emergency factor $e_1^f$. We tested the emergency factor of a randomly selected task (task $t_1^f$ was selected) with a group of values ($\{2^{-4}, 2^{-3}, 2^{-2}, 2^{-1}, 1, 2, 2^2, 2^4, 2^8\}$) while keeping the other factors constant. We can see that, for tasks that failed in the task offloading competition, setting a higher $e^f$ value not only improved the likelihood of task offloading but also increased their share in the resource allocation phase (if they were offloaded to the edge).
According to Figure 3, the optimal offloading decision under the default settings was to process tasks $t_1^f$, $t_4^f$ and $t_9^f$ locally and to offload the tasks of the other seven UDs to the edge for processing. When the emergency factor $e_1^f$ of a locally processed task took on a small value, nothing happened except that $\vartheta$ changed correspondingly. We can see that, when $e_1^f$ took on the values $\{2^{-4}, 2^{-3}, 2^{-2}, 2^{-1}\}$ successively, UD 1 still processed task $t_1^f$ locally, and the bandwidth allocation (shown in Figure 4) and edge computing resource allocation (shown in Figure 5) for UD 2, UD 3, UD 5, UD 6, UD 7, UD 8 and UD 10 remained unchanged. However, once $e_1^f$ became large enough ($e_1^f = 2$), UD 1 started to offload task $t_1^f$ and was allocated some bandwidth and computing resources. Meanwhile, UD 3 and UD 10 were crowded out of resources and processed their tasks locally. As $e_1^f$ continued to increase, more and more devices started to process their tasks locally. When $e_1^f$ became extremely large, UD 1 monopolized all of the resources in the system.
It is noteworthy that when $e_1^f$ shifted from one to two, not all of the bandwidth released by UD 3 and UD 10 was allocated to UD 1. This can be explained by Figure 6. We know from Equation (21) that the bandwidth allocation share $\alpha_m$ grows with $e_m^f$ (through $\vartheta_m$). When $e_1^f$ took the value two, the corresponding $\vartheta_1$ was 1.180 (normalized), and $\vartheta_1 < \vartheta_3 + \vartheta_{10}$. Therefore, the released bandwidth from UD 3 and UD 10 was reallocated to UD 1 and the remaining offloading UDs (UD 2, UD 5, UD 6, UD 7 and UD 8). It is also worth noting that, as $e_1^f$ became larger, the offloading UDs with larger $\vartheta_m$ were the first to fall back to local processing. UD 9, with the largest $\vartheta_9$, processed its task locally all along, and UD 1, with the smallest $\vartheta_1$, could initially only process its task locally. However, once UD 1 obtained a larger $\vartheta_1$ ($\vartheta_1 > \vartheta_9$) due to a larger $e_1^f$, UD 1 could not only offload task $t_1^f$ to the edge for processing but also obtain a large share of the resources. This indicates that $e_m^f$ can effectively regulate resource allocation and offloading decisions among UDs.
Figure 7 shows the processing delay of each task in the system and the total revenue when $e_1^f$ takes different values. When UD 1 began to offload its task for edge processing (the emergency factor of UD 1 took the value two), both the total delay of the tasks in the system and the system revenue increased. This was because a larger $e_1^f$ indicated that the system favored UD 1 in the resource allocation and received a larger reward for prioritizing UD 1. As a result, other UDs lost the opportunity to offload their tasks to the edge for processing. When $e_1^f = 256$, the completion time of all of the other UDs reached its maximum because their tasks were processed locally. Although the reward increased significantly when $e_1^f$ varied from 16 to 256, the total delay of all UDs increased because the edge resources were exclusively occupied by UD 1.
The results in Figure 7 and Figure 8 share the same offloading decision, bandwidth allocation and edge computing resource allocation schemes. The distinction is that $e_m^f, \forall m \in \mathcal{M}$, was set to the default value for all tasks, which means that $e_1^f$ remained unchanged in this situation. The system rewards and processing delays of the tasks were obtained as $l_1^f$ shifted from $2^{-4}$ to 256 times the default data size (1 megabit). With equal emergency factors, i.e., $e_m^f = 1, \forall m \in \mathcal{M}$, tasks of large data size were offloaded in preference to tasks of small data size, even monopolizing the edge resources ($l_1^f = 256$ Mb). Tasks of large data size held the advantage in the offloading decision: when all tasks were of the same data size, task $t_1^f$ could only be processed locally, but as $l_1^f$ grew, the system preferred to process task $t_1^f$ (the task with a large quantity of data). This is what takes place in existing research works. From Figure 8, we can conclude that tasks of extremely large size will be offloaded to the edge if no restrictions are imposed. This is not what we want to see, because it stops UDs with limited computing resources from offloading their tasks to the edge. Fortunately, we can prevent a task of extremely large size from monopolizing the edge resources by setting a sufficiently small $e_m^f$ for the data-intensive computing task $t_m^f$.
Figure 9 illustrates how the emergency factor impacts data-intensive tasks. We randomly sampled from $\{80, 90, 100, 110, 120\}$ megabits and set the result as the data size of a randomly selected task ($t_2^f$ was selected, and $l_2^f = 100$ Mb in this experiment). The data size of the other tasks remained at the default value. Setting a sufficiently small emergency factor for the task with a large data size prevented that task from monopolizing the system resources. When $e_2^f$ took the default value, like the others ($e_m^f = 1, \forall m \in \mathcal{M}$), only $t_2^f$ was offloaded to the edge, and the processing delay of $t_2^f$ was less than 10 s. As we set smaller values of $e_2^f$, more and more UDs could offload their tasks to the edge for processing (UD 9 and UD 10 for $e_2^f = 0.5$; UD 3, UD 9 and UD 10 for $e_2^f = 0.25$). When $e_2^f$ took the value 0.008, task $t_2^f$ started to be processed locally. We can conclude that when the emergency factor of a task with a large data volume is small enough, the task loses its advantage in task offloading.
In Figure 10, we compare the performance of "DRL" [29], "LCD", "RO" and "AL" (results are organized in this order) under different sampling probabilities. We tested four types of tasks of different data sizes: regular, small, large and immense. Both our LCD scheme and the DRL scheme achieved the minimum delay when tasks were of regular size (sampling probability p = 1). However, when tasks of immense size (the task from UD 8) and regular size coexisted (p = 0.9), our scheme penalized the immense tasks by setting sufficiently small emergency factors for them, which in turn prevented our scheme from obtaining the minimum delay. Similarly, when tasks of small size (tasks from UD 2 and UD 8) emerged (p = 0.8), our scheme failed to obtain the minimum delay as well. However, our scheme succeeded in excluding tasks of immense size from a monopoly on edge resources and in supporting tasks of small size in contending for edge resources. For example, tasks from UD 2 and UD 8 with p = 0.8 and tasks from UD 2, UD 5 and UD 10 with p = 0.9 obtained shorter delays compared with the DRL scheme.

6. Conclusions

Current task-offloading schemes targeting minimum delay tend to prioritize tasks of large data size, which prevents tasks of small data size from being offloaded: when coexisting with tasks of large data size, small tasks may lose the opportunity to be offloaded to the edge for processing. In this paper, we introduced the emergency factor to penalize tasks of immense size for monopolizing system resources and to support tasks of small size in contending for system resources. The joint task offloading and resource allocation problem was formulated as an MINLP problem that aimed to maximize the processing-time reward. A bisection-search-based resource allocation algorithm combined with a CD-based method was proposed to solve it. Simulation results validated the effectiveness of our proposed scheme in regulating offloading decisions and resource allocation when there was a significant difference in the data sizes of the offloaded tasks.
In future work, we will study resource allocation based on a more fine-grained task classification scheme and explore the use of state-of-the-art deep reinforcement learning methods [29,33] for efficiency. We may also consider schemes for different objectives, such as profit [28] and QoS, as well as deploying caching [24,31] at the edge.

Author Contributions

Conceptualization, L.D. and H.Y.; methodology, H.Y.; software, L.D.; validation, L.D. and W.H.; formal analysis, L.D.; investigation, L.D.; resources, H.Y.; data curation, L.D. and W.H.; writing—original draft preparation, L.D.; writing—review and editing, L.D. and W.H.; visualization, L.D.; supervision, H.Y.; project administration, H.Y.; funding acquisition, H.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Proof. 
The first terms in the stationarity Equations (10)–(12) are positive, which results in the positiveness of the Lagrangian multipliers λ_1, λ_2 and ω. Furthermore, the complementary slackness Equations (13)–(15) hold only when
1 − Σ_{m ∈ M_1} α_m = 0, (A1)
1 − Σ_{m ∈ M_1} β_m = 0, (A2)
P_m^{max} − p_m = 0. (A3)
Then, going back to (10) with a fixed λ_1, α_m can be derived as α_m = √(ϑ_m/λ_1). Similarly, β_m = √(ζ_m/λ_2) follows from Equation (11), and p_m = P_m^{max} from Equation (A3). □
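The bisection step over λ_1 can be sketched as follows. This is a hedged reconstruction, not the paper's exact algorithm: it assumes the stationarity condition yields allocations of the form α_m = √(ϑ_m/λ_1), so the residual Σ_m α_m − 1 is strictly decreasing in λ_1 and plain bisection recovers the allocation satisfying (A1); the function name and tolerances are our own.

```python
import math

def bandwidth_allocation(theta, tol=1e-10):
    """Bisection on lambda_1 so that sum_m sqrt(theta_m / lambda_1) = 1.

    The residual g(lam) = sum_m sqrt(theta_m / lam) - 1 is strictly
    decreasing in lam, so plain bisection converges.
    """
    lo, hi = tol, 1.0
    # Grow the upper bound until the residual becomes non-positive.
    while sum(math.sqrt(t / hi) for t in theta) > 1:
        hi *= 2
    while hi - lo > tol:
        lam = (lo + hi) / 2
        if sum(math.sqrt(t / lam) for t in theta) > 1:
            lo = lam   # allocations sum to more than 1 -> raise lambda_1
        else:
            hi = lam   # allocations sum to less than 1 -> lower lambda_1
    lam = (lo + hi) / 2
    return [math.sqrt(t / lam) for t in theta]

# Hypothetical weights theta_m: the first (heavier) task gets the larger share.
alpha = bandwidth_allocation([0.4, 0.1, 0.1])
```

Under this assumed form a closed-form solution λ_1 = (Σ_m √ϑ_m)² also exists; bisection is shown because it generalizes to the coupled case solved in the paper.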

Appendix B

Proof. 
(C1), (C2) and (C3) are affine functions with respect to the corresponding optimization variables in (P2). Most importantly, the objective function is concave: its second partial derivatives with respect to p_m, α_m and β_m are strictly less than zero,
∂²H/∂p_m² = − e_m^f h_m^4 l_m^f / (B σ^4 α_m G_m^2 (log_2 G_m)^2) − 2 e_m^f h_m^4 l_m^f / (B^2 σ^8 α_m^2 G_m^4 (log_2 G_m)^3), (A4)
∂²H/∂α_m² = − 2 e_m^f l_m^f / (B α_m^3 log_2 G_m), (A5)
∂²H/∂β_m² = − 2 e_m^f l_m^f ρ_m^f / (F_e β_m^3), (A6)
where G_m = 1 + p_m h_m^2/σ^2. For m ∈ M_1, p_m > 0 guarantees G_m > 1. Since p_m, α_m, β_m, e_m^f, h_m and l_m^f are all positive, the second derivatives in (A4)–(A6) are always negative, which proves the concavity of (9). Hence, (P2) is a convex optimization problem. □
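As a quick numerical cross-check of the sign argument above (a sanity check, not a proof; the parameter values are arbitrary), one can verify by finite differences that the transmission-delay term l/(α B log₂ G) is convex in α, so its negative contribution to the reward H has a negative second derivative, consistent with (A5):

```python
import math

def delay(alpha, l=1.0, B=1.0, G=4.0):
    # Transmission-delay term l / (alpha * B * log2(G)); G > 1 as in Appendix B.
    return l / (alpha * B * math.log2(G))

h = 1e-4
curvatures = []
for a in (0.2, 0.5, 0.8):
    # Central second difference approximates the second derivative in alpha.
    curvatures.append((delay(a + h) - 2 * delay(a) + delay(a - h)) / h**2)

# All curvatures are positive: the delay term is convex in alpha, so its
# negated contribution to H is concave, matching the sign of (A5).
```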

References

1. Mach, P.; Becvar, Z. Mobile edge computing: A survey on architecture and computation offloading. IEEE Commun. Surv. Tutor. 2017, 19, 1628–1656.
2. Wang, X.; Han, Y.; Wang, C.; Zhao, Q.; Chen, X.; Chen, M. In-edge AI: Intelligentizing mobile edge computing, caching and communication by federated learning. IEEE Netw. 2019, 33, 156–165.
3. Chen, Y.; Zhang, N.; Zhang, Y.; Chen, X. Dynamic computation offloading in edge computing for Internet of Things. IEEE Internet Things J. 2018, 6, 4242–4251.
4. Wu, Y.; Ni, K.; Zhang, C.; Qian, L.P.; Tsang, D.H. NOMA-assisted multi-access mobile edge computing: A joint optimization of computation offloading and time allocation. IEEE Trans. Veh. Technol. 2018, 67, 12244–12258.
5. Raza, S.; Wang, S.; Ahmed, M.; Anwar, M.R.; Mirza, M.A.; Khan, W.U. Task offloading and resource allocation for IoV using 5G NR-V2X communication. IEEE Internet Things J. 2021, 9, 10397–10410.
6. Yousefpour, A.; Ishigaki, G.; Gour, R.; Jue, J.P. On reducing IoT service delay via fog offloading. IEEE Internet Things J. 2018, 5, 998–1010.
7. Yang, B.; Cao, X.; Xiong, K.; Yuen, C.; Guan, Y.L.; Leng, S.; Qian, L.; Han, Z. Edge intelligence for autonomous driving in 6G wireless system: Design challenges and solutions. IEEE Wirel. Commun. 2021, 28, 40–47.
8. Qiu, T.; Chi, J.; Zhou, X.; Ning, Z.; Atiquzzaman, M.; Wu, D.O. Edge computing in industrial Internet of Things: Architecture, advances and challenges. IEEE Commun. Surv. Tutor. 2020, 22, 2462–2488.
9. Peng, K.; Huang, H.; Liu, P.; Xu, X.; Leung, V.C. Joint optimization of energy conservation and privacy preservation for intelligent task offloading in MEC-enabled smart cities. IEEE Trans. Green Commun. Netw. 2022, 6, 1671–1682.
10. Ren, J.; Yu, G.; He, Y.; Li, G.Y. Collaborative cloud and edge computing for latency minimization. IEEE Trans. Veh. Technol. 2019, 68, 5031–5044.
11. Ren, J.; Yu, G.; Cai, Y.; He, Y. Latency optimization for resource allocation in mobile-edge computation offloading. IEEE Trans. Wirel. Commun. 2018, 17, 5506–5519.
12. Kai, C.; Zhou, H.; Yi, Y.; Huang, W. Collaborative cloud-edge-end task offloading in mobile-edge computing networks with limited communication capability. IEEE Trans. Cogn. Commun. Netw. 2020, 7, 624–634.
13. Saleem, U.; Liu, Y.; Jangsher, S.; Li, Y.; Jiang, T. Mobility-aware joint task scheduling and resource allocation for cooperative mobile edge computing. IEEE Trans. Wirel. Commun. 2020, 20, 360–374.
14. El Haber, E.; Nguyen, T.M.; Assi, C. Joint optimization of computational cost and devices energy for task offloading in multi-tier edge-clouds. IEEE Trans. Commun. 2019, 67, 3407–3421.
15. Naouri, A.; Wu, H.; Nouri, N.A.; Dhelim, S.; Ning, H. A novel framework for mobile-edge computing by optimizing task offloading. IEEE Internet Things J. 2021, 8, 13065–13076.
16. Bi, S.; Zhang, Y.J. Computation rate maximization for wireless powered mobile-edge computing with binary computation offloading. IEEE Trans. Wirel. Commun. 2018, 17, 4177–4190.
17. Xing, H.; Liu, L.; Xu, J.; Nallanathan, A. Joint task assignment and resource allocation for D2D-enabled mobile-edge computing. IEEE Trans. Commun. 2019, 67, 4193–4207.
18. Zhao, C.; Cai, Y.; Liu, A.; Zhao, M.; Hanzo, L. Mobile edge computing meets mmWave communications: Joint beamforming and resource allocation for system delay minimization. IEEE Trans. Wirel. Commun. 2020, 19, 2382–2396.
19. Ning, Z.; Dong, P.; Kong, X.; Xia, F. A cooperative partial computation offloading scheme for mobile edge computing enabled Internet of Things. IEEE Internet Things J. 2018, 6, 4804–4814.
20. Li, J.; Zhang, X.; Zhang, J.; Wu, J.; Sun, Q.; Xie, Y. Deep reinforcement learning-based mobility-aware robust proactive resource allocation in heterogeneous networks. IEEE Trans. Cogn. Commun. Netw. 2019, 6, 408–421.
21. Chen, M.; Hao, Y. Task offloading for mobile edge computing in software defined ultra-dense network. IEEE J. Sel. Areas Commun. 2018, 36, 587–597.
22. Tang, M.; Wong, V.W. Deep reinforcement learning for task offloading in mobile edge computing systems. IEEE Trans. Mob. Comput. 2020, 21, 1985–1997.
23. You, C.; Huang, K.; Chae, H.; Kim, B.H. Energy-efficient resource allocation for mobile-edge computation offloading. IEEE Trans. Wirel. Commun. 2016, 16, 1397–1411.
24. Chen, J.; Xing, H.; Lin, X.; Nallanathan, A.; Bi, S. Joint resource allocation and cache placement for location-aware multi-user mobile edge computing. IEEE Internet Things J. 2022, 9, 25698–25714.
25. Dai, Y.; Zhang, K.; Maharjan, S.; Zhang, Y. Edge intelligence for energy-efficient computation offloading and resource allocation in 5G beyond. IEEE Trans. Veh. Technol. 2020, 69, 12175–12186.
26. Chen, J.; Chen, S.; Wang, Q.; Cao, B.; Feng, G.; Hu, J. iRAF: A deep reinforcement learning approach for collaborative mobile edge computing IoT networks. IEEE Internet Things J. 2019, 6, 7011–7024.
27. Yan, J.; Bi, S.; Zhang, Y.J.A. Offloading and resource allocation with general task graph in mobile edge computing: A deep reinforcement learning approach. IEEE Trans. Wirel. Commun. 2020, 19, 5404–5419.
28. Chen, Y.; Li, Z.; Yang, B.; Nai, K.; Li, K. A Stackelberg game approach to multiple resources allocation and pricing in mobile edge computing. Future Gener. Comput. Syst. 2020, 108, 273–287.
29. Huang, L.; Bi, S.; Zhang, Y.J.A. Deep reinforcement learning for online computation offloading in wireless powered mobile-edge computing networks. IEEE Trans. Mob. Comput. 2019, 19, 2581–2593.
30. Bi, S.; Huang, L.; Wang, H.; Zhang, Y.J.A. Lyapunov-guided deep reinforcement learning for stable online computation offloading in mobile-edge computing networks. IEEE Trans. Wirel. Commun. 2021, 20, 7519–7537.
31. Fang, C.; Liu, C.; Wang, Z.; Sun, Y.; Ni, W.; Li, P.; Guo, S. Cache-assisted content delivery in wireless networks: A new game theoretic model. IEEE Syst. J. 2020, 15, 2653–2664.
32. Fang, C.; Yao, H.; Wang, Z.; Wu, W.; Jin, X.; Yu, F.R. A survey of mobile information-centric networking: Research issues and challenges. IEEE Commun. Surv. Tutor. 2018, 20, 2353–2371.
33. Fang, C.; Xu, H.; Yang, Y.; Hu, Z.; Tu, S.; Ota, K.; Yang, Z.; Dong, M.; Han, Z.; Yu, F.R.; et al. Deep-reinforcement-learning-based resource allocation for content distribution in fog radio access networks. IEEE Internet Things J. 2022, 9, 16874–16883.
Figure 1. System model.
Figure 2. Rewards versus UDs in the system. (a) Default setup, where the maximal local computing frequency F_c = 1 × 10^8 cycles/s. (b) The same as the default, except F_c = 5 × 10^7 cycles/s.
Figure 3. Offloading decisions of UDs versus e_1^f.
Figure 4. Bandwidth allocation of UDs versus e_1^f.
Figure 5. Computing resource allocation of UDs versus e_1^f.
Figure 6. ϑ of UDs versus e_1^f.
Figure 7. Delay and rewards versus e_1^f.
Figure 8. Delay and rewards versus l_1^f.
Figure 9. Delay and rewards versus e_2^f.
Figure 10. Delay versus the sampling probability.
Table 1. Summary of part of the discussed works.

| Work | Offloading Mode | Optimization Variables | Objective | Methodology |
|------|-----------------|------------------------|-----------|-------------|
| [10] | Partial | λ, x, b, α | D | Decomposition and Karush–Kuhn–Tucker conditions |
| [11] | Partial | λ ⁸, b, α | D | Lagrange multiplier method |
| [12] | Partial | λ, α, p ⁹ | D | Successive convex approximation |
| [14] | Binary | y, p, b, α | E | Branch-and-bound |
| [16] | Binary | x, b, α | R ⁶ | The alternating direction method of multipliers and CD |
| [25] | Binary | x, y ⁷, α | E | Deep deterministic policy gradient (DDPG) |
| [26] | Binary | x ¹, b ², α ³ | D ⁴, E ⁵ | Monte Carlo tree search, DNN and replay memory |
| [27] | Binary | x, α | D+E | Actor–critic-based DRL |
| [30] | Binary | x, b, α | R ⁶ | Lyapunov optimization and DRL |
| Our work | Binary | x, b, α, p | Revenue maximization | CD and Lagrange multiplier method |

1: x denotes the offloading decision vector; 2: b denotes the communication resource allocation vector; 3: α denotes the edge-computing resource allocation vector; 4: D stands for latency/delay minimization; 5: E stands for energy consumption minimization; 6: R stands for computation rate maximization; 7: y denotes the computation node selection vector; 8: λ denotes the splitting ratio; 9: p denotes the transmission power.

Dong, L.; He, W.; Yao, H. Task Offloading and Resource Allocation for Tasks with Varied Requirements in Mobile Edge Computing Networks. Electronics 2023, 12, 366. https://doi.org/10.3390/electronics12020366
