Article

Efficient Computation Offloading in Multi-Tier Multi-Access Edge Computing Systems: A Particle Swarm Optimization Approach

1 Department of Computer Science and Engineering, Kyung Hee University, Yongin 17104, Korea
2 Research Institute of Computers, Information and Communication, Pusan National University, Busan 46241, Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(1), 203; https://doi.org/10.3390/app10010203
Submission received: 1 December 2019 / Revised: 21 December 2019 / Accepted: 22 December 2019 / Published: 26 December 2019
(This article belongs to the Section Electrical, Electronics and Communications Engineering)

Abstract

In recent years, multi-access edge computing (MEC) has become a promising technology for 5G networks, owing to its ability to offload computational tasks from mobile devices (MDs) to edge servers in order to address MD-specific limitations. Despite considerable research on computation offloading in 5G networks, offloading in multi-tier, multi-MEC-server systems continues to attract attention. Here, we investigated a two-tier computation-offloading strategy for multi-user multi-MEC-server heterogeneous networks. For this scenario, we formulated a joint resource-allocation and computation-offloading decision strategy to minimize the total computing overhead of MDs, including completion time and energy consumption. The optimization problem was formulated as a mixed-integer nonlinear programming problem of NP-hard complexity. Under complex optimization and various application constraints, we divided the original problem into two subproblems: resource allocation and computation-offloading decisions. Using particle swarm optimization, a high-level heuristic (i.e., meta-heuristic) that performs well on challenging optimization problems, we developed an efficient, low-complexity algorithm that obtains high-quality solutions with guaranteed convergence. Simulation results indicated that the proposed algorithm significantly reduced the total computing overhead of MDs relative to several baseline methods while guaranteeing convergence to stable solutions.

1. Introduction

With the tremendous growth of the Internet of Things (IoT), massive numbers of mobile devices (MDs), such as smart devices and virtual reality (VR) glasses, connect to networks and generate large amounts of data on communication networks. Moreover, many new IoT applications, such as smart healthcare, surveillance, real-time online gaming, and augmented reality/VR, require increased energy efficiency and higher computing capacity [1,2]; however, MDs often have limited computing power and low-capacity batteries, and these limitations represent obstacles to network deployment and to the emergence of new applications and services.
To address this, multi-access edge computing (MEC) represents a promising solution for enhancing the computing capabilities and limited battery power of MDs. Computation offloading to MEC servers allows the computational tasks of MDs to be processed at the edge of mobile cellular networks over wireless links, which addresses both the latency problems between cloud servers and MDs and the MD-specific limitations in computational power and storage. MEC brings cloud resources (e.g., MEC servers) to the network edge, thereby enabling heavy computing tasks to be performed close to the end user [2,3,4,5]. However, due to the resource limitations of MEC servers, computation offloading remains challenging. In particular, the rapid increase in compute-intensive tasks from devices (such as face recognition, augmented reality, and autonomous driving) will cause resource bottlenecks at the MEC server and significantly affect task-execution latency. Therefore, computation-offloading strategies and resource-allocation optimization (e.g., of the computing resources at the MEC server and the transmit power) are needed to improve the performance of computation offloading.
Exponential increases in data traffic have been addressed through the use of heterogeneous networks (HetNets), in which macro-base stations (MBSs) with high transmission power are overlaid with low-power small-cell base stations (SBSs) (e.g., femto-cells, pico-cells, and micro-cells) to meet the data-rate requirements of next-generation networks [6,7,8]. Therefore, integrating MEC into 5G networks in order to increase network performance, reduce MD power consumption, and utilize both macro and small cells is of considerable interest in academia and industry. Recent studies have focused on computation offloading in 5G networks [5], with other research focusing on single-tier MEC servers used in heterogeneous scenarios [9,10,11,12,13,14,15,16]; however, achieving effective offloading decisions with respect to computation-completion time and energy consumption in multi-tier MEC-server systems remains an optimization problem that attracts considerable attention [17,18,19,20]. Moreover, several MEC-related studies have addressed its use in vehicular networks [21,22,23,24] and video processing [25,26].
In Reference [7], we proposed a two-tier computation-offloading framework for multi-user multi-server MEC in 5G HetNets. Unlike our previous work, which focused only on optimizing the offloading decisions to minimize energy consumption, in this paper we considered a joint resource-allocation and computation-offloading problem, aiming to minimize the total computing overhead of MDs, including completion time and energy consumption. Under complex optimization and various application constraints, we developed an efficient, low-complexity algorithm capable of obtaining high-quality solutions with guaranteed convergence. We found that a high-level heuristic (i.e., a meta-heuristic) based on particle swarm optimization (PSO) [27,28] achieved satisfactory performance in solving this challenging optimization problem. Our main contributions include the following:
  • We investigated a two-tier computation-offloading strategy using multi-user multi-server MEC in 5G HetNets.
  • We addressed joint resource allocation and computation-offloading decisions in order to minimize the total computing overhead of MDs, including completion time and energy consumption. The optimization problem was formulated as a mixed-integer nonlinear program (MINLP) problem of NP-hard complexity based on the inclusion of both continuous (resource allocation) and discrete (offloading decision) variables. This problem is highly complex and difficult to solve optimally in polynomial time. To address this, we divided the original problem into two subproblems: computational resource allocation and computation-offloading decisions.
  • We developed an algorithm using a meta-heuristic based on PSO to solve the optimization problem. PSO can solve large-scale NP problems with polynomial time complexity while ensuring convergence within specified limits [27,28,29].
  • Numerical simulations showed that the proposed algorithm achieved outstanding performance in terms of total computing overhead relative to several baseline schemes, while also guaranteeing convergence of the solution.
The remainder of the manuscript is structured as follows. In Section 2, we review the relevant literature. In Section 3, we describe the system model for two-tier computation offloading in a multi-user multi-server MEC 5G HetNet. In Section 4, we formulate the problem of minimizing the total computing overhead of MDs, including completion time and energy consumption. Our proposed PSO-based algorithm is presented in Section 5, and the numerical simulations are described in Section 6. The conclusion is presented in Section 7.

2. Related Work

There have been numerous studies addressing offloading decisions in MEC systems, most focusing on computation offloading from multiple users to a single- or multi-server MEC. Multi-user single-server MEC architectures involve a single MEC server integrated into an MBS or SBS. We further classify existing research as addressing either single-tier [9,10,11,12] or multi-tier [13,14,15,16] offloading of a computational task. Chen et al. [9] used a game-theoretic approach to solve the problem of computing overhead, including completion time and energy consumption, whereas another study [10] used system utility as a quality-of-experience measure based on device power consumption and completion time and solved the optimization problem by jointly addressing computational resources and offloading decisions. Hao et al. [11] studied offloading and task caching to minimize the power consumption of user equipment, and another study [12] used the same scenario, given that MEC servers are attached to only one wireless access station. In Reference [12], the authors used Karush–Kuhn–Tucker (KKT) conditions to solve the problem of optimizing MD energy consumption. Similar single-MEC-server studies include References [30,31,32,33]. For multi-tier offloading of computational tasks [13,14,15,16], there can be one or more SBSs overlaid by an MBS, with the SBSs connected to the MBS by a wireless or wired network. In such systems, only the MBS is integrated with MEC servers. Zhang et al. [13] investigated energy-efficient computation offloading in 5G HetNets comprising one MBS and one SBS by formulating an optimization problem to minimize the power consumption of the system while meeting latency limitations. The computing and communication demands on SBSs are smaller than those on an MBS. A previous study [14] optimized wireless backhaul bandwidth using resource allocation and computation-offloading decisions in order to minimize system computing overhead. Moreover, other studies [15,16] considered a HetNet scenario created by multiple small cells and one macro cell, with Wang et al. [15] assuming that each SBS was connected to only one user and proposing optimization of the time and energy consumption required to complete the computational tasks by combining offloading decisions, interference management, and resource allocation. Zhang et al. [16] proposed an algorithmic trade-off between energy consumption and completion time by combining offloading decisions and computational resource allocation in MDs. In these cases, installing a MEC server at the MBS could save monetary costs but could potentially increase latency. Additionally, an IoT system involves a large number of computation-intensive tasks that can lead to resource bottlenecks at a single MEC server.
In the case of multi-user multi-server MEC architectures, most studies focus on offloading onto single-tier multi-server MEC architectures [34,35,36,37,38,39,40]. In SBS and MBS systems equipped with MEC servers, the SBS and MBS work independently and are considered equally important (the SBS is not connected to the MBS), enabling MDs to choose either the SBS or the MBS for offloading. A previous study [34] investigated the use of a matching game to solve the optimization problem for total computing overhead, including energy consumption and completion time, by combining resource allocation and offloading decisions, and Tran et al. [35] maximized the system-offloading utility by combining the computation-offloading decision and resource allocation. Chen et al. [36] proposed a framework for offloading computational tasks to MEC servers in a software-defined ultra-dense network, optimizing the completion time of tasks executed locally and on the edge cloud. Guo et al. [37,38] investigated offloading tasks in ultra-dense networks with multiple SBSs and one MBS. In one study [37], the authors developed a game-theoretic algorithm to solve the optimization problem for overall computing overhead considering various types of computational tasks through computation-offloading decisions, whereas in the other study [38], they optimized the overall cost of the computing overhead, in terms of power consumption and completion time, by jointly optimizing the offloading decision and resource allocation (CPU cycles and transmit power) of the MDs. Pham et al. [39] studied task offloading to a MEC server whose resource capacity is extended by hiring resources from cloud computing and vehicular nodes, in order to minimize the total computing overhead, including the latency and the monetary cost of using computing resources. Yang et al. [40] considered a two-tier small-cell network comprising a set of relay base stations connected to a set of micro-base stations, where only the micro-base stations are integrated with MEC servers to perform computational tasks from the users. In that study, they used an artificial fish swarm algorithm to optimize the overall energy consumption of the system according to computation-offloading decisions. Ateya et al. [41] investigated two levels of offloading decisions between two micro-cloud units to achieve high energy efficiency while satisfying latency requirements. Wan et al. [42] studied computation-offloading strategies for in-vehicle applications using edge nodes, considering offloading cost, offloading latency, and load balancing among edge nodes. Lee et al. [17] proposed a hierarchical MEC architecture, with MEC servers arranged according to different processing performances configured by the network operator in order to increase network performance. In that scenario, MDs that request different actions are handled by different levels of MEC servers. A previous study [18] introduced a hybrid fiber–wireless (FiWi) network to provide support between cloud computing and MEC but only considered MDs connected to a wireless base station and applied a game-theoretic approach to optimize the power consumption of user equipment. Liu et al. [19] optimized the power consumption of IoT sensors in a system model that included a MEC server and a remote cloud server by addressing inter-task dependency and the completion time of an IoT service. However, these studies only considered multi-tier architectures that included cloud computing and one MEC server.
Computation offloading in such multi-tier multi-server architectures is more difficult. In the present study, we considered multi-tier multi-server MEC systems for 5G HetNets and proposed an efficient, low-complexity algorithm to obtain optimal solutions.

3. Network Setting and Computational Model

3.1. Scenario Description

We considered multi-access edge computing in a 5G HetNet comprising a set of SBSs connected to one MBS (Figure 1). MDs with limited computational capacity are randomly located, allowing them to connect directly to the SBSs to offload their computational task(s), with the SBSs connected to the MBS through fiber-optic links [13,16]. The SBSs and the MBS are integrated with MEC servers that can serve the computational task(s) of the MDs. We refer to the MEC servers co-located with the MBS and the SBSs as the mMEC and sMECs, respectively. In this scenario, the MDs connect to the SBSs, which reduces latency and increases data rates; for this reason, we did not consider MDs directly connected to the MBS [16]. We assume that the computing capability of the MEC server attached to each SBS is limited, so it may become overloaded when receiving many task-offloading requests; therefore, the computational task of an MD is first offloaded to the sMEC, but if the sMEC's computing resources are exhausted, the task is offloaded to the mMEC, where more powerful computing resources are available [38].
We denote the set of MDs as $\mathcal{M} = \{1, 2, \ldots, M\}$ and the set of SBSs as $\mathcal{N} = \{1, 2, \ldots, N\}$. Each MD has a computation-intensive task to be accomplished; here, we assume that each MD has one task, which is executed either on the MD, on an sMEC, or on the mMEC. Each MD can offload its task to an sMEC via a wireless link, and the task can be further offloaded to the mMEC via a fiber-optic link. The computational task of each MD is atomic and cannot be divided. The computational task of MD m is represented as $I_m = \{D_m, C_m\}$, $m \in \mathcal{M}$, where $D_m$ is the data size of the computational task to be delivered to the edge cloud for computation and $C_m$ is the total computational workload required to complete the task, measured in CPU cycles.

3.2. Communication Model

The computational task of an MD can either be performed locally on the MD or offloaded to a MEC server. Therefore, we present the communication model for the case in which the computational task is offloaded to an sMEC through a wireless link or further offloaded to the mMEC via a fiber-optic link. In this study, we used orthogonal frequency division multiple access for communication between MDs and SBSs. To reuse the spectrum, we assumed that the spectrum is used in an overlaid manner. The transmission rate of the computational task $I_m$ from MD m to SBS n through the wireless channel is expressed as follows:
$$R_{mn} = W \log_2\left(1 + \frac{p_m h_{mn}}{n_0 + Q_{mn}}\right), \qquad (1)$$
where W is the channel bandwidth, $p_m$ is the fixed transmit power of MD m, $h_{mn}$ is the channel gain between MD m and SBS n, $n_0$ is the noise power, and $Q_{mn}$ is the inter-cell interference.
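As a concrete illustration of Equation (1), the following Python sketch computes the uplink rate for one MD-SBS pair using the path-loss model and parameter values listed later in Section 6.1; the 50 m MD-SBS distance, the absence of interference, and the function name are our assumptions.

```python
import math

def uplink_rate(W_hz, p_tx_dbm, pathloss_db, noise_dbm, interference_mw=0.0):
    """Transmission rate R_mn of Equation (1): W * log2(1 + p_m*h_mn / (n0 + Q_mn))."""
    p_rx_mw = 10 ** ((p_tx_dbm - pathloss_db) / 10.0)   # received power p_m * h_mn in mW
    noise_mw = 10 ** (noise_dbm / 10.0)                  # noise power n_0 in mW
    return W_hz * math.log2(1.0 + p_rx_mw / (noise_mw + interference_mw))

# Parameter values from Section 6.1; the 50 m MD-SBS distance is an assumed example.
pathloss_db = 140.7 + 36.7 * math.log10(0.05)            # path-loss model, d in km
rate_bps = uplink_rate(W_hz=3e6, p_tx_dbm=23, pathloss_db=pathloss_db,
                       noise_dbm=-100, interference_mw=0.0)
print(f"R_mn ~ {rate_bps / 1e6:.1f} Mbit/s")
```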

3.3. Local Execution Model

We introduce the local execution model in terms of energy consumption and completion time. The local completion time for MD m to accomplish task $I_m$ is
$$T_m^l = \frac{C_m}{f_m^l}, \qquad (2)$$
where $f_m^l$ is the local computing capability of MD m. Let $\alpha$ denote a coefficient associated with the chip architecture [34]. The energy consumption $E_m^l$ of MD m when task $I_m$ is computed locally is then given as follows:
$$E_m^l = \alpha C_m (f_m^l)^2. \qquad (3)$$
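The local cost model of Equations (2) and (3) can be checked numerically with a few lines of Python; the helper names are ours, and the values reproduce the worked example given later in Section 6.2.

```python
ALPHA = 5e-27  # chip-architecture coefficient alpha from Section 6.1

def local_time_s(C_cycles, f_local_hz):
    """Local completion time T_m^l = C_m / f_m^l, Equation (2)."""
    return C_cycles / f_local_hz

def local_energy_j(C_cycles, f_local_hz, alpha=ALPHA):
    """Local energy consumption E_m^l = alpha * C_m * (f_m^l)^2, Equation (3)."""
    return alpha * C_cycles * f_local_hz ** 2

# Values matching the worked example in Section 6.2: C_m = 800 Megacycles, f_m^l = 1 GHz.
print(local_time_s(800e6, 1e9))    # 0.8 s
print(local_energy_j(800e6, 1e9))  # 4.0 J
```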

3.4. Computation-Offloading Model

This section presents the model for computational task offloading. The computational task can be offloaded to the SBS for execution at the sMEC through a wireless link or to the MBS (from the SBS to the MBS) for execution at the mMEC via fiber-optic links. Therefore, we classified this activity into two computation-offloading schemes: (1) through the sMEC or (2) through the mMEC. The total energy consumption for completing a computational task comprises the energy consumed for the uplink transmission, for task execution on the MEC servers, and for returning the output data. In this study, we considered optimization from the user perspective; therefore, we ignored the energy consumption associated with execution on the MEC server and only considered that required for the uplink transmission [16]. This assumption is reasonable because it has been adopted in existing studies [16,35,36] and because the MEC server is usually powered by electricity supplied from the grid. Moreover, the power required to transmit the result back to the MD is small compared with that required to transmit the original computational task [18,34,38].

3.4.1. sMEC Offloading

For offloading to an sMEC, the completion time $T_{mn}^{sbs}$ of task $I_m$ comprises two components: the transmission time $T_{mn}^t$ and the remote computation time $T_{mn}^{es}$. The transmission time through the wireless access link when MD m offloads its task to sMEC n is given by
$$T_{mn}^t = \frac{D_m}{R_{mn}}. \qquad (4)$$
Let $f_{mn}$ denote the computing resources of sMEC n allocated to MD m. The computation time required to execute the computational task on sMEC n can be calculated as follows:
$$T_{mn}^{es} = \frac{C_m}{f_{mn}}. \qquad (5)$$
According to Equations (4) and (5), the completion time $T_{mn}^{sbs}$ can be obtained as follows:
$$T_{mn}^{sbs} = T_{mn}^t + T_{mn}^{es} = \frac{D_m}{R_{mn}} + \frac{C_m}{f_{mn}}. \qquad (6)$$
As noted, the energy consumption required to offload computational task $I_m$ for execution at sMEC n (denoted by $E_{mn}^{sbs}$) is equivalent to the energy consumption required to transmit task $I_m$ from MD m to SBS n, as follows:
$$E_{mn}^{sbs} = E_{mn}^t = p_m T_{mn}^t = \frac{p_m D_m}{R_{mn}}. \qquad (7)$$

3.4.2. mMEC Offloading

Computational tasks can be executed on the mMEC following initial transmission to the SBS through the wireless access link (from MD to SBS) and subsequent migration to the mMEC through a fiber-optic link (from SBS to MBS). We denote the transmission data rate of the fiber-optic link between each SBS and the MBS as $\beta$. The transmission time when the computational task $I_m$ migrates from SBS n to the mMEC via the fiber-optic link is calculated as follows:
$$T_{mn}^{tm} = \frac{D_m}{\beta}. \qquad (8)$$
Let $f_0$ denote the computing resources allocated to each offloading MD by the mMEC, which is assumed to be constant and identical for all offloading MDs [10,16]. The computation time of the computational task on the mMEC is given by
$$T_{mn}^{em} = \frac{C_m}{f_0}. \qquad (9)$$
The completion time $T_{mn}^{mbs}$ when computational task $I_m$ is executed on the mMEC is the sum of (1) the transmission time between MD m and SBS n over the wireless access link according to Equation (4), (2) the transmission time $T_{mn}^{tm}$ from SBS n to the mMEC through the fiber-optic link, and (3) the computation time $T_{mn}^{em}$ at the mMEC:
$$T_{mn}^{mbs} = T_{mn}^t + T_{mn}^{tm} + T_{mn}^{em} = \frac{D_m}{R_{mn}} + \frac{D_m}{\beta} + \frac{C_m}{f_0}. \qquad (10)$$
Similarly, the energy consumption when task $I_m$ is offloaded to the mMEC (denoted by $E_{mn}^{mbs}$) is equal to the transmission energy consumption $E_{mn}^t$ of Equation (7):
$$E_{mn}^{mbs} = E_{mn}^t = \frac{p_m D_m}{R_{mn}}. \qquad (11)$$
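The two offloading cost models of Equations (6)-(7) and (10)-(11) can be summarized in a short Python sketch; the function names are ours, and the task size, uplink rate, and sMEC allocation used in the example call are assumed illustrative values (the fiber rate, mMEC allocation, and transmit power follow Section 6.1).

```python
def smec_cost(D_bits, C_cycles, R_bps, f_mn_hz, p_tx_w):
    """Completion time (Eq. (6)) and energy (Eq. (7)) for sMEC offloading."""
    t_up = D_bits / R_bps            # wireless transmission time, Eq. (4)
    t_exec = C_cycles / f_mn_hz      # execution time at the sMEC, Eq. (5)
    return t_up + t_exec, p_tx_w * t_up

def mmec_cost(D_bits, C_cycles, R_bps, beta_bps, f0_hz, p_tx_w):
    """Completion time (Eq. (10)) and energy (Eq. (11)) for mMEC offloading."""
    t_up = D_bits / R_bps            # MD -> SBS over the wireless access link
    t_fiber = D_bits / beta_bps      # SBS -> MBS over the fiber link, Eq. (8)
    t_exec = C_cycles / f0_hz        # execution time at the mMEC, Eq. (9)
    return t_up + t_fiber + t_exec, p_tx_w * t_up

# Illustrative task of 900 KB and 800 Megacycles; R_bps and f_mn_hz are assumed values,
# while beta, f0, and the 23 dBm (~0.2 W) transmit power follow Section 6.1.
D, C, p_tx = 900e3 * 8, 800e6, 0.2
print(smec_cost(D, C, R_bps=30e6, f_mn_hz=2e9, p_tx_w=p_tx))
print(mmec_cost(D, C, R_bps=30e6, beta_bps=1e9, f0_hz=3e9, p_tx_w=p_tx))
```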

4. Problem Formulation and Analysis

The offloading decision determines the execution location of each computational task (i.e., sMEC offloading or mMEC offloading). Therefore, we divided the computation-offloading decision into two phases. The first determines whether a task is offloaded to a MEC server via an SBS, and the second determines whether the task is executed at the sMEC or at the mMEC (migration from the SBS to the mMEC through the fiber-optic link). Let $\mathbf{Y} = \{y_{mn} \mid m \in \mathcal{M}, n \in \mathcal{N}\}$ denote the offloading-decision matrix, where each element $y_{mn}$ is given as follows:
$$y_{mn} = \begin{cases} 1, & \text{if MD } m \text{ offloads its computational task to sMEC } n, \\ 0, & \text{otherwise}. \end{cases} \qquad (12)$$
Because a task is offloaded to at most one MEC server, the following constraint must be satisfied: $\sum_{n=1}^{N} y_{mn} \le 1, \ \forall m \in \mathcal{M}$. Let $\mathbf{Z} = \{z_{mn} \mid m \in \mathcal{M}, n \in \mathcal{N}\}$ represent the matrix of offloading places for task $I_m$:
$$z_{mn} = \begin{cases} 1, & \text{if task } I_m \text{ is executed on sMEC } n, \\ 0, & \text{if task } I_m \text{ is executed on the mMEC}. \end{cases} \qquad (13)$$
The following constraint ensures that each task can be executed by at most one MEC server: $\sum_{n=1}^{N} z_{mn} \le 1, \ \forall m \in \mathcal{M}$. Additionally, we denote $\mathcal{M}^{\mathrm{off}} = \{m \in \mathcal{M}, n \in \mathcal{N} \mid y_{mn} = 1, z_{mn} = 1\}$ as the set of MDs that offload their tasks to sMECs and $\mathcal{M}_n = \{m \in \mathcal{M} \mid y_{mn} = 1, z_{mn} = 1\}$ as the set of MDs that offload their tasks to sMEC n. Finally, we denote the computational-resource matrix for the sMECs as $\mathbf{F} = \{f_{mn} \mid m \in \mathcal{M}, n \in \mathcal{N}\}$, where $f_{mn} > 0$ is the resource allocation of sMEC n to computational task $I_m$ offloaded from MD $m \in \mathcal{M}_n$. Typically, the amount of computing resources at an sMEC is limited; therefore, the following constraint must be satisfied: $\sum_{m \in \mathcal{M}_n} f_{mn} \le f_n^{\max}, \ \forall n \in \mathcal{N}$, where $f_n^{\max}$ denotes the maximum computing capability of sMEC n. Moreover, $f_{mn} = 0$ if $m \notin \mathcal{M}_n$, indicating that sMEC n does not allocate any computing resources to MD m.
Here, we describe the joint optimization of resource allocation and computation-offloading decision strategies to minimize the total computing overhead of the MDs, including completion time and energy consumption. In particular, the total time necessary to accomplish computational task $I_m$ is computed as follows:
$$T_m = \sum_{n=1}^{N} y_{mn}\left[ T_{mn}^t + z_{mn} T_{mn}^{es} + (1 - z_{mn})\left(T_{mn}^{tm} + T_{mn}^{em}\right)\right] + \left(1 - \sum_{n=1}^{N} y_{mn}\right) T_m^l. \qquad (14)$$
The total energy consumption required to complete computational task $I_m$ is computed as follows:
$$E_m = \sum_{n=1}^{N} y_{mn}\left[ z_{mn} E_{mn}^{sbs} + (1 - z_{mn}) E_{mn}^{mbs}\right] + \left(1 - \sum_{n=1}^{N} y_{mn}\right) E_m^l. \qquad (15)$$
Therefore, the optimization problem can be expressed as follows:
$$\begin{aligned} \underset{\mathbf{Y}, \mathbf{Z}, \mathbf{F}}{\text{minimize}} \quad & \sum_{m=1}^{M} \left(\lambda_m^e E_m + \lambda_m^t T_m\right) \\ \text{subject to} \quad & \text{C1: } y_{mn}, z_{mn} \in \{0, 1\}, \ \forall m \in \mathcal{M}, n \in \mathcal{N}, \\ & \text{C2: } \sum_{n=1}^{N} y_{mn} \le 1, \ \forall m \in \mathcal{M}, \\ & \text{C3: } \sum_{n=1}^{N} z_{mn} \le 1, \ \forall m \in \mathcal{M}, \\ & \text{C4: } \sum_{m=1}^{M} y_{mn} \le H, \ \forall n \in \mathcal{N}, \\ & \text{C5: } f_{mn} > 0, \ \forall m \in \mathcal{M}^{\mathrm{off}}, n \in \mathcal{N}, \\ & \text{C6: } \sum_{m \in \mathcal{M}_n} f_{mn} \le f_n^{\max}, \ \forall n \in \mathcal{N}, \end{aligned} \qquad (16)$$
where $\lambda_m^e, \lambda_m^t \in [0, 1]$ and $\lambda_m^e + \lambda_m^t = 1$ denote the weights associated with energy consumption and completion time for MD m, and H is the number of subcarriers (denoting the number of MDs allowed to offload their computational tasks simultaneously). In Equation (16), constraint C1 represents the binary offloading decisions, and constraints C2 and C3 ensure that each MD m can select at most one SBS. Constraint C4 implies that the number of tasks offloaded via each SBS cannot exceed the total number of subcarriers, and constraints C5 and C6 indicate that the total computing resources assigned to the offloading MDs by each sMEC cannot exceed its maximum computing capability (note that $f_{mn} = 0$ if $m \notin \mathcal{M}_n$).
Equation (16) is an MINLP problem of NP-hard complexity and is difficult to solve optimally in polynomial time [10,34]. Although global optimization could be used to obtain an optimal solution to this problem, its worst-case exponential computational complexity limits its applicability to 5G wireless networks, given the massive level of connectivity, heterogeneous quality-of-service requirements, and highly dynamic wireless channels. In this study, we developed an efficient, low-complexity algorithm capable of obtaining a high-quality solution with guaranteed convergence using PSO.

5. Joint Optimization of Resource Allocation and Computation-Offloading Decisions

Here, we describe the algorithm used for joint optimization of resource allocation and computation-offloading decisions using PSO to address the problem in Equation (16). In particular, Equation (16) is an MINLP problem of NP-hard complexity [10,34] based on the presence of both continuous (resource allocation profile F ) and discrete (offloading decision profiles Y and Z ) variables. To address this problem, we divided it into two subproblems: (1) computational resource allocation, which is a convex problem for a given offloading decision and solved by KKT optimality conditions, and (2) computation-offloading decision, which is solved by binary PSO. PSO has the advantages of easy implementation and fast convergence, as well as the ability to escape from local optima and to converge to an approximate global optimum [27,28]. Therefore, to solve Equation (16), we developed a PSO-based algorithm to jointly optimize resource allocation and computation-offloading decisions (JROPSO) in order to minimize total computing overhead of MDs.

5.1. Computational Resource Allocation

For a given task-offloading decision $\mathbf{Y} = \mathbf{Y}_0, \mathbf{Z} = \mathbf{Z}_0$, Equation (16) for the MDs offloading to the sMECs can be rewritten as follows:
$$\underset{\mathbf{F}}{\text{minimize}} \ \sum_{n \in \mathcal{N}} \sum_{m \in \mathcal{M}_n} \frac{\lambda_m^t C_m}{f_{mn}} \quad \text{subject to C5 and C6}. \qquad (17)$$
Theorem 1.
The optimization problem in Equation (17) is a convex optimization problem.
Proof of Theorem 1 
We define $G(\mathbf{F})$ as the objective function in Equation (17) and obtain the Hessian matrix of $G(\mathbf{F})$ with respect to $\mathbf{F}$ as follows:
$$\frac{\partial^2 G}{\partial f_{mn}^2} = \frac{2 \lambda_m^t C_m}{f_{mn}^3}, \ \forall m \in \mathcal{M}^{\mathrm{off}}, n \in \mathcal{N}, \qquad (18)$$
$$\frac{\partial^2 G}{\partial f_{mn} \partial f_{ij}} = 0, \ \forall (m, n) \neq (i, j). \qquad (19)$$
We can verify that $\frac{\partial^2 G}{\partial f_{mn}^2} \ge 0$; therefore, the Hessian matrix is positive semi-definite. According to the theorems in Reference [43], we conclude that the objective function $G(\mathbf{F})$ is convex. Because the solution space of Equation (17) is also convex, Equation (17) is a convex optimization problem. □
Because Equation (17) is convex, it can be solved using the KKT optimality conditions. The optimal resource allocation $\mathbf{F}^*$ is computed as follows:
$$f_{mn}^* = \frac{f_n^{\max} \sqrt{\lambda_m^t C_m}}{\sum_{i \in \mathcal{M}_n} \sqrt{\lambda_i^t C_i}}. \qquad (20)$$
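The closed form of Equation (20) is straightforward to evaluate; the following Python sketch allocates one sMEC's capacity among the MDs in $\mathcal{M}_n$ in proportion to $\sqrt{\lambda_m^t C_m}$. The function name and the per-task values in the example are our assumptions; only the 6 GHz capacity follows Section 6.1.

```python
import math

def smec_allocation(f_n_max_hz, tasks):
    """Closed-form allocation of Equation (20): f_mn* proportional to sqrt(lambda_m^t * C_m).

    tasks: list of (lambda_t, C_cycles) pairs for the MDs in M_n served by sMEC n.
    """
    weights = [math.sqrt(lam_t * C) for lam_t, C in tasks]
    total = sum(weights)
    return [f_n_max_hz * w / total for w in weights]

# Three MDs sharing one sMEC with f_n^max = 6 GHz (Section 6.1); the per-task weights
# and cycle counts are assumed example values.
alloc = smec_allocation(6e9, [(0.5, 500e6), (0.5, 800e6), (0.5, 1000e6)])
print([f"{f / 1e9:.2f} GHz" for f in alloc])
```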

5.2. Computation-Offloading Decision

When the computational resource allocation is given (i.e., $\mathbf{F} = \mathbf{F}^*$), Equation (16) can be simplified as follows:
$$\underset{\mathbf{Y}, \mathbf{Z}}{\text{minimize}} \ \sum_{m=1}^{M} \left(\lambda_m^e E_m + \lambda_m^t T_m\right) \quad \text{subject to C1, C2, C3, and C4}. \qquad (21)$$
Here, we applied PSO to solve the subproblem in Equation (21) in order to obtain the offloading decisions $\mathbf{Y}$ and $\mathbf{Z}$. In particular, the binary version of PSO (BPSO) is applied to solve the offloading-decision problem in Equation (21). First, we introduce the standard continuous PSO algorithm and the BPSO algorithm. PSO is a bio-inspired optimization method based on the flocking behavior of birds [44], in which each particle (individual) in a swarm searches for food in a search space. An effective way to locate food is to follow the bird closest to the food. By mimicking this foraging behavior, the PSO algorithm can solve a variety of optimization problems with highly competitive performance, including the joint power-control and channel-allocation problem in device-to-device communications [28,29].
In the PSO algorithm, a fitness function is defined to assess the quality of candidate solutions. The algorithm is initialized with a population of random candidate solutions (particles), with each candidate solution represented by the position of a particle in the search space. In each iteration, the position of each particle in the swarm changes within the search space based on the best position found by that particle and the best position found by the whole swarm. After a number of iterations, the best solution found corresponds to the best position over all particles. Applying the PSO algorithm requires mapping the problem solution onto the particle space, which directly affects feasibility and performance. We considered a swarm of K particles and represented the position of the kth particle by $\mathbf{X}_k = (X_k^1, X_k^2, \ldots, X_k^m, \ldots, X_k^M)$ and its velocity by $\mathbf{V}_k = (V_k^1, V_k^2, \ldots, V_k^m, \ldots, V_k^M)$, where M is the total number of MDs. According to Reference [44], each particle updates its velocity and moves to a new position via the following two equations:
$$\mathbf{V}_k = w \mathbf{V}_k + c_1 r_1 (\mathbf{L}_k - \mathbf{X}_k) + c_2 r_2 (\mathbf{G} - \mathbf{X}_k), \qquad (22)$$
$$\mathbf{X}_k = \mathbf{X}_k + \mathbf{V}_k, \qquad (23)$$
where $\mathbf{L}_k$ and $\mathbf{G}$ represent the best individual position of the kth particle and the global best position, respectively; w denotes a constant inertia weight; $c_1$ and $c_2$ are two acceleration coefficients; and $r_1$ and $r_2$ are two random variables in (0, 1).
Because the offloading decision and offloading place are discrete (i.e., either 0 or 1), the problem cannot be solved using the standard continuous PSO algorithm; therefore, we used BPSO [45]. The main differences between PSO and BPSO involve the position-updating procedure and the transfer function [45]. The position $X_k^m$ toggles between the values 1 and 0, with the velocity of the particle used to update the position. Therefore, we converted the velocity values into probabilities within the interval [0, 1] using the sigmoid function, as follows [45]:
$$\mathrm{Sig}(V_k^m) = \frac{1}{1 + e^{-V_k^m}}, \qquad (24)$$
where $V_k^m$ is the velocity of the kth particle at the mth element, with the position of the particle updated to a binary value according to the following rule:
$$X_k^m = \begin{cases} 0, & \text{if } rand < \mathrm{Sig}(V_k^m), \\ 1, & \text{otherwise}, \end{cases} \qquad (25)$$
where $rand$ is a uniformly distributed random number within [0, 1]. Algorithm 1 shows the pseudo-code of the standard BPSO, which is similar to the standard continuous PSO algorithm except that each particle position is updated using Equation (25). $\mathrm{Fit}(\mathbf{X}_k)$ is defined as the fitness value at particle position $\mathbf{X}_k$.
According to a previous study [27], a modified version of the BPSO can avoid local optima more effectively than the original version; therefore, we apply a new transfer function in place of the sigmoid function used in the rule in Equation (25), as follows [27]:
$$\mathrm{Sig}(V_k^m) = \left| \frac{2}{\pi} \arctan\!\left(\frac{\pi}{2} V_k^m\right) \right|. \qquad (26)$$
Each candidate solution of the subproblem in Equation (21) (i.e., the offloading decision $\mathbf{Y}$ and the offloading place $\mathbf{Z}$ for all MDs) is encoded as the position of a particle, and the particle velocities determine how the solutions evolve. We established a D-dimensional search space and mapped $\{\mathbf{Y}, \mathbf{Z}\}$ onto the positions of the particles. We denote $\mathbf{X} = \{\mathbf{Y}, \mathbf{Z}\}$ as the set of computation-offloading decisions, comprising the matrices of offloading decisions and offloading places. Therefore, the position of the kth particle is defined by $\mathbf{X}_k$, and its velocity is denoted by $\mathbf{V}_k$. To solve Equation (21), the fitness of a position is computed as follows:
$$\mathrm{Fit} = \sum_{m=1}^{M} \left(\lambda_m^e E_m + \lambda_m^t T_m\right) + P, \qquad (27)$$
where $P = \sum_{n=1}^{N} \theta_n \max\left(0, \sum_{m=1}^{M} y_{mn} - H\right)$ is the penalty function, which incorporates the inequality constraint C4 of Equation (21), and $\theta_n$ is the penalty factor. The other constraints are guaranteed during initialization and the particle-update operations. The penalty function plays an important role in driving particles out of infeasible regions as quickly as possible, or at least keeping them as close to the feasible region as possible.
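To make Equation (27) concrete, the following Python sketch evaluates the fitness of a candidate decision pair (Y, Z), given precomputed cost matrices from Equations (6)-(7) and (10)-(11) and local costs from Equations (2)-(3). The function signature and the default penalty factor are our assumptions, since the paper does not specify a value for the penalty factor; this evaluator can be plugged into the BPSO loop sketched after Algorithm 1.

```python
import numpy as np

def fitness(Y, Z, T_sbs, E_sbs, T_mbs, E_mbs, T_loc, E_loc, lam_t, lam_e, H, theta=1e3):
    """Fitness of Equation (27): weighted overhead of Eqs. (14)-(15) plus the C4 penalty.

    Y, Z: M x N binary decision matrices; T_sbs/E_sbs and T_mbs/E_mbs: per-pair offloading
    costs from Eqs. (6)-(7) and (10)-(11); T_loc/E_loc: local costs from Eqs. (2)-(3);
    lam_t/lam_e: length-M weight vectors; theta: assumed penalty factor (not given in the paper).
    """
    off = Y.sum(axis=1)                                                        # 1 if MD m offloads
    T_m = (Y * (Z * T_sbs + (1 - Z) * T_mbs)).sum(axis=1) + (1 - off) * T_loc  # Eq. (14)
    E_m = (Y * (Z * E_sbs + (1 - Z) * E_mbs)).sum(axis=1) + (1 - off) * E_loc  # Eq. (15)
    overhead = np.sum(lam_e * E_m + lam_t * T_m)
    penalty = theta * np.maximum(0, Y.sum(axis=0) - H).sum()                   # subcarrier limit C4
    return overhead + penalty
```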
Algorithm 1: The standard binary particle swarm optimization (BPSO) algorithm.
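A minimal Python sketch of a BPSO loop of the kind Algorithm 1 describes is given below, using the velocity update of Equation (22), the arctan transfer function of Equation (26), and the position rule of Equation (25) exactly as stated above; the default parameters follow Section 6.1, while the toy objective in the usage lines is purely illustrative.

```python
import numpy as np

def bpso(fitness_fn, dim, n_particles=30, max_iter=1000, w=2.0, c1=2.0, c2=2.0, seed=0):
    """Minimal BPSO loop: binary positions, velocities mapped to probabilities by Eq. (26),
    positions updated with the rule of Eq. (25) as stated in the text."""
    rng = np.random.default_rng(seed)
    X = rng.integers(0, 2, size=(n_particles, dim)).astype(float)   # random binary positions
    V = rng.uniform(-1.0, 1.0, size=(n_particles, dim))
    pbest = X.copy()
    pbest_fit = np.array([fitness_fn(x) for x in X])
    gbest = pbest[pbest_fit.argmin()].copy()
    for _ in range(max_iter):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)   # velocity update, Eq. (22)
        prob = np.abs(2.0 / np.pi * np.arctan(np.pi / 2.0 * V))     # arctan transfer, Eq. (26)
        X = np.where(rng.random(X.shape) < prob, 0.0, 1.0)          # position rule, Eq. (25)
        fit = np.array([fitness_fn(x) for x in X])
        improved = fit < pbest_fit
        pbest[improved], pbest_fit[improved] = X[improved], fit[improved]
        gbest = pbest[pbest_fit.argmin()].copy()
    return gbest, pbest_fit.min()

# Toy usage: minimize the number of 1-bits in a 20-bit string (optimum: the all-zero vector).
best, best_fit = bpso(lambda x: x.sum(), dim=20, max_iter=200)
print(best_fit)
```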

5.3. The Proposed Algorithm JROPSO

As noted, the proposed algorithm JROPSO uses PSO to jointly find optimal solutions to problems concerning computation-offloading decisions in Equation (21) and computational resource allocation in Equation (20). The JROPSO algorithm is illustrated in Algorithm 2.
Algorithm 2: PSO-based joint resource allocation and computation-offloading decision algorithm (JROPSO).
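The following Python sketch outlines the JROPSO loop of Algorithm 2 under simplifying assumptions of ours: each MD is pre-associated with the SBS offering its highest rate, so a particle encodes only the offload and sMEC/mMEC choices (the paper's search space also covers the SBS choice), and the bpso() helper sketched above is assumed to be in scope. Inside the fitness, the sMEC resources are set by the closed form of Equation (20).

```python
import numpy as np

def jropso(D, C, R, T_loc, E_loc, lam_t, lam_e, f_n_max, f0, beta, p_tx, H,
           theta=1e3, **bpso_kwargs):
    """Sketch of the JROPSO outer loop: BPSO searches the offloading decisions while the
    sMEC allocation is set inside the fitness by the closed form of Equation (20).

    D, C: per-MD task sizes (bits) and cycle counts; R: M x N uplink rates (Eq. (1));
    lam_t, lam_e: length-M weight vectors. Requires the bpso() helper sketched above.
    """
    M, N = R.shape
    sbs = R.argmax(axis=1)                    # assumed association: best-rate SBS per MD
    r = R[np.arange(M), sbs]
    t_up, e_up = D / r, p_tx * D / r          # Eqs. (4) and (7)/(11)
    t_mbs = t_up + D / beta + C / f0          # Eq. (10)

    def fit(x):
        y, z = x[:M].astype(bool), x[M:].astype(bool)
        T = np.where(y, 0.0, T_loc)           # offloaded entries are filled in below
        E = np.where(y, e_up, E_loc)
        for n in range(N):                    # closed-form allocation per sMEC, Eq. (20)
            idx = np.where(y & z & (sbs == n))[0]
            if idx.size:
                w = np.sqrt(lam_t[idx] * C[idx])
                f = f_n_max * w / w.sum()
                T[idx] = t_up[idx] + C[idx] / f            # Eq. (6)
        T[y & ~z] = t_mbs[y & ~z]                          # tasks sent on to the mMEC
        penalty = theta * sum(max(0, int((y & (sbs == n)).sum()) - H) for n in range(N))
        return np.sum(lam_e * E + lam_t * T) + penalty     # Eq. (27)

    best, best_fit = bpso(fit, dim=2 * M, **bpso_kwargs)
    return best[:M], best[M:], best_fit
```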

6. Performance Evaluation

6.1. Simulation Settings

Here, we present simulation results demonstrating the performance of the JROPSO algorithm. Unless otherwise noted, we used the following simulation scenario. We considered a HetNet comprising M = 50 MDs within a 250 × 250 m$^2$ coverage area containing six SBSs and one MBS connected to all SBSs. The channel gains were assumed to follow the path-loss model 140.7 + 36.7 log10(d) [35], where d (km) is the distance between the MD and the SBS. For the communication model, the noise power was set to $n_0 = -100$ dBm [34], and the transmit power of an MD for task offloading to the SBS was set to $p_m = 23$ dBm. The data rate of the fiber-optic link was $\beta = 1$ Gbps [18], the number of subcarriers was H = 12, and the bandwidth of each sub-channel was W = 3 MHz. Additionally, there was no inter-cell interference in the wireless fronthaul. For the computational tasks, the input data size was uniformly distributed within the range [600, 1200] KB, and the number of CPU cycles required to complete a task was randomly generated within $C_m = [500, 1000]$ Megacycles. In terms of computing resources, the CPU speed allocated by the mMEC to each offloading MD was $f_0 = 3$ GHz, the maximum CPU computing capacity of each sMEC was $f_n^{\max} = 6$ GHz, the local computing power of each MD was randomly assigned from the set {0.5, 0.8, 1} GHz [34,39], and $\alpha = 5 \times 10^{-27}$ [34,35]. The weighting parameters of the computational tasks were set to 0.5 (i.e., $\lambda_m^e = \lambda_m^t = 0.5, \ \forall m \in \mathcal{M}$). The PSO parameters were set according to Reference [27]: briefly, we set w = 2, $c_1 = 2$, $c_2 = 2$, the size of the particle population (K) to 30, and the maximum number of iterations ($T_{\max}$) to 1000. The simulation results were averaged over a number of Monte Carlo runs, with the MD and SBS locations randomly and uniformly distributed.
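For reference, the following snippet generates a random problem instance with the parameters of Section 6.1 and runs the jropso() sketch from Section 5.3 on it; the MD-SBS distances and the random seed are assumed values, and the snippet assumes the bpso() and jropso() sketches above are in scope.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 50, 6                                              # 50 MDs and six SBSs (Section 6.1)
D = rng.uniform(600e3, 1200e3, M) * 8                     # task sizes in bits ([600, 1200] KB)
C = rng.uniform(500e6, 1000e6, M)                         # required CPU cycles
f_loc = rng.choice([0.5e9, 0.8e9, 1e9], M)                # local CPU speeds
T_loc, E_loc = C / f_loc, 5e-27 * C * f_loc ** 2          # Eqs. (2)-(3)
lam_t = lam_e = np.full(M, 0.5)
d = rng.uniform(0.01, 0.125, (M, N))                      # assumed MD-SBS distances (km)
snr_db = 23 - (140.7 + 36.7 * np.log10(d)) + 100          # 23 dBm transmit, -100 dBm noise
R = 3e6 * np.log2(1 + 10 ** (snr_db / 10))                # Eq. (1), no interference

y, z, total = jropso(D, C, R, T_loc, E_loc, lam_t, lam_e,
                     f_n_max=6e9, f0=3e9, beta=1e9, p_tx=0.2, H=12, max_iter=200)
print(f"offloading MDs: {int(y.sum())}/{M}, total computing overhead: {total:.2f}")
```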

6.2. Simulation Results

To evaluate the performance of the JROPSO algorithm, we compared it with the following baseline schemes:
  • Local Computing Only (LCO): all MDs execute their computational tasks locally, without computation offloading.
  • Without MBS (WMBS): the proposed JROPSO algorithm is implemented without the MBS.
  • Offloading only in the two-tier system (OF): all tasks are offloaded to the sMECs or the mMEC (no local computing), with the computational resource allocation of the sMECs optimized.
In the first experiment, we verified the convergence of the proposed algorithm in terms of the total computing overhead using the numerical results in Figure 2. This experiment considered three cases: 10, 30, and 50 MDs. Figure 2 shows that the proposed algorithm converges to a stable solution after a finite number of iterations. In addition, the average number of iterations required for convergence grows approximately linearly with the number of MDs.
In the second experiment, we determined the percentage of offloading MDs as the total number of MDs increased (Figure 3). Initially, when the number of MDs was small, the percentage of offloading MDs was relatively high; however, as the number of MDs increased, the percentage of offloading MDs decreased. The reason is that each MD tends to offload its computational task to a MEC server to save energy consumption and/or completion time, but a larger number of MDs leads to stronger competition for computing resources at the MEC servers. Consequently, fewer MDs benefit from remote computing, and some tasks must be executed locally to minimize the total computing overhead.
In the third experiment, we evaluated the proposed JROPSO algorithm in terms of the total computing overhead of the MDs as the number of MDs increased (Figure 4). The results indicated that the total computing overhead of all schemes increased with the number of MDs, because the total computing overhead is linearly related to the number of MDs. We also observed that the proposed JROPSO algorithm reduced the total computing overhead of the MDs; specifically, it achieved the lowest total computing overhead among all schemes. This is because JROPSO jointly optimizes the resource-allocation and computation-offloading decisions so that tasks are offloaded only when offloading is beneficial.
In the fourth experiment, we evaluated algorithm performance under different weighting parameters (Figure 5). Specifically, we increased the energy weight $\lambda_m^e$ from 0.1 to 0.9 in steps of 0.1, with the completion-time weight set accordingly to $\lambda_m^t = 1 - \lambda_m^e$. The number of MDs was fixed at 50. The results indicated that the proposed JROPSO algorithm was superior to the other solutions regardless of the weighting parameters. The total computing overhead of the LCO algorithm consistently increased with $\lambda_m^e$ because local execution of computational tasks by MDs results in large energy consumption compared with the execution time. For example, an MD with $f_m^l = 1$ GHz and $C_m = 800$ Megacycles yields $T_m^l = (800 \times 10^6)/(1 \times 10^9) = 0.8$ s and $E_m^l = 5 \times 10^{-27} \times 800 \times 10^6 \times (1 \times 10^9)^2 = 4$ J, i.e., for local computing, the completion time is 5-fold smaller than the energy consumption. By contrast, the OF algorithm offloads the computational tasks to the MEC servers in order to save the energy consumption of the MDs. Therefore, increasing $\lambda_m^e$ decreased the total computing overhead, and a sufficiently large $\lambda_m^e$ resulted in decreases in the total computing overhead of the OF, WMBS, and JROPSO algorithms. This reflects the fact that offloading yields energy savings rather than reductions in computing time. These results indicate that achieving optimal performance requires choosing appropriate weighting parameters.
Figure 6 shows the effect of data size on the JROPSO algorithm. The results indicated that increasing the data size did not change the total computing overhead of the LCO algorithm, because data size has no impact on completion time or energy consumption when tasks are executed locally. By contrast, the computing overhead of the OF algorithm increased rapidly due to the long time required to transmit tasks to the MEC servers. For large data sizes, the computing overhead of the JROPSO algorithm approached that of the WMBS algorithm, because some tasks are not executed at the mMEC due to the considerable time required to transfer them there. These findings suggest that offloading is most beneficial for tasks with small amounts of data.

7. Conclusions

In this study, we described the development of an efficient optimization method for determining computation-offloading decisions in multi-tier multi-MEC-server architectures within 5G HetNets. We addressed joint resource-allocation and computation-offloading decision strategies to minimize the total computing overhead, including completion time and energy consumption, and developed the JROPSO algorithm to obtain high-quality solutions. Our simulations confirmed the efficiency of the algorithm and its superior performance relative to other methods. In addition, this paper showed that the PSO algorithm can solve the computation-offloading optimization problem with highly competitive performance. In future work, comparing the proposed algorithm with recent meta-heuristic algorithms, such as Ant Colony Optimization (ACO), the Whale Optimization Algorithm (WOA), and Harris Hawks Optimization (HHO), is also promising. Moreover, we will investigate PSO applications in non-orthogonal multiple-access-assisted MEC systems.

Author Contributions

E.-N.H. supervised the study; L.N.T.H. proposed the conceptualization; L.N.T.H. developed the methodology; L.N.T.H. and Q.-V.P. analyzed the algorithm; L.N.T.H. programmed the software; L.N.T.H. wrote the original draft of the manuscript; L.N.T.H., Q.-V.P., X.-Q.P., T.D.T.N., M.D.H., and E.-N.H. participated in manuscript review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

This work was supported by Institute for Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2017-0-00294, Service mobility support distributed cloud technology).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mach, P.; Becvar, Z. Mobile Edge Computing: A Survey on Architecture and Computation Offloading. IEEE Commun. Surv. Tutor. 2017, 19, 1628–1656.
  2. Mao, Y.; You, C.; Zhang, J.; Huang, K.; Letaief, K.B. A Survey on Mobile Edge Computing: The Communication Perspective. IEEE Commun. Surv. Tutor. 2017, 19, 2322–2358.
  3. Y.3508: Cloud Computing—Overview and High-Level Requirements of Distributed Cloud; Recommendation Y.3508; ITU Publications: Geneva, Switzerland, 2019.
  4. Pham, Q.; Nguyen, T.H.; Han, Z.; Hwang, W. Coalitional Games for Computation Offloading in NOMA-Enabled Multi-Access Edge Computing. IEEE Trans. Veh. Technol. 2019.
  5. Pham, Q.V.; Fang, F.; Vu, H.; Le, M.; Ding, Z.; Le, L.B.; Hwang, W. A Survey of Multi-Access Edge Computing in 5G and Beyond: Fundamentals, Technology Integration, and State-of-the-Art. arXiv 2019, arXiv:1906.08452.
  6. Ramazanali, H.; Mesodiakaki, A.; Vinel, A.; Verikoukis, C. Survey of user association in 5G HetNets. In Proceedings of the 2016 8th IEEE Latin-American Conference on Communications (LATINCOM), Medellin, Colombia, 15–17 November 2016; pp. 1–6.
  7. Huynh, L.N.T.; Pham, Q.V.; Nguyen, Q.D.; Pham, X.Q.; Nguyen, V.; Huh, E.N. Energy-Efficient Computation Offloading with Multi-MEC Servers in 5G Two-Tier Heterogeneous Networks. In Proceedings of the 13th International Conference on Ubiquitous Information Management and Communication (IMCOM), Phuket, Thailand, 4–6 January 2019; Springer International Publishing: Cham, Switzerland, 2019; pp. 120–129.
  8. Hu, Y.C.; Patel, M.; Sabella, D.; Sprecher, N.; Young, V. Mobile edge computing—A key technology towards 5G. ETSI White Pap. 2015, 11, 1–16.
  9. Chen, X.; Jiao, L.; Li, W.; Fu, X. Efficient Multi-User Computation Offloading for Mobile-Edge Cloud Computing. IEEE/ACM Trans. Netw. 2016, 24, 2795–2808.
  10. Lyu, X.; Tian, H.; Sengul, C.; Zhang, P. Multiuser Joint Task Offloading and Resource Optimization in Proximate Clouds. IEEE Trans. Veh. Technol. 2017, 66, 3435–3447.
  11. Hao, Y.; Chen, M.; Hu, L.; Hossain, M.S.; Ghoneim, A. Energy Efficient Task Caching and Offloading for Mobile Edge Computing. IEEE Access 2018, 6, 11365–11373.
  12. Tao, X.; Ota, K.; Dong, M.; Qi, H.; Li, K. Performance Guaranteed Computation Offloading for Mobile-Edge Cloud Computing. IEEE Wirel. Commun. Lett. 2017, 6, 774–777.
  13. Zhang, K.; Mao, Y.; Leng, S.; Zhao, Q.; Li, L.; Peng, X.; Pan, L.; Maharjan, S.; Zhang, Y. Energy-Efficient Offloading for Mobile Edge Computing in 5G Heterogeneous Networks. IEEE Access 2016, 4, 5896–5907.
  14. Pham, Q.V.; Le, L.B.; Chung, S.H.; Hwang, W.J. Mobile Edge Computing With Wireless Backhaul: Joint Task Offloading and Resource Allocation. IEEE Access 2019, 7, 16444–16459.
  15. Wang, C.; Yu, F.R.; Liang, C.; Chen, Q.; Tang, L. Joint Computation Offloading and Interference Management in Wireless Cellular Networks with Mobile Edge Computing. IEEE Trans. Veh. Technol. 2017, 66, 7432–7445.
  16. Zhang, J.; Hu, X.; Ning, Z.; Ngai, E.C.; Zhou, L.; Wei, J.; Cheng, J.; Hu, B. Energy-Latency Tradeoff for Energy-Aware Offloading in Mobile Edge Computing Networks. IEEE Internet Things J. 2018, 5, 2633–2645.
  17. Lee, J.; Lee, J. Hierarchical Mobile Edge Computing Architecture Based on Context Awareness. Appl. Sci. 2018, 8, 1160.
  18. Guo, H.; Liu, J. Collaborative Computation Offloading for Multiaccess Edge Computing Over Fiber–Wireless Networks. IEEE Trans. Veh. Technol. 2018, 67, 4514–4526.
  19. Liu, F.; Huang, Z.; Wang, L. Energy-Efficient Collaborative Task Computation Offloading in Cloud-Assisted Edge Computing for IoT Sensors. Sensors 2019, 19, 1105.
  20. Ryu, J.W.; Pham, Q.V.; Luan, H.N.T.; Hwang, W.J.; Kim, J.D.; Lee, J.T. Multi-Access Edge Computing Empowered Heterogeneous Networks: A Novel Architecture and Potential Works. Symmetry 2019, 11, 842.
  21. Zhao, J.; Li, Q.; Gong, Y.; Zhang, K. Computation Offloading and Resource Allocation for Cloud Assisted Mobile Edge Computing in Vehicular Networks. IEEE Trans. Veh. Technol. 2019, 68, 7944–7956.
  22. Wang, J.; Feng, D.; Zhang, S.; Tang, J.; Quek, T.Q.S. Computation Offloading for Mobile Edge Computing Enabled Vehicular Networks. IEEE Access 2019, 7, 62624–62632.
  23. Fan, X.; Cui, T.; Cao, C.; Chen, Q.; Kwak, K.S. Minimum-Cost Offloading for Collaborative Task Execution of MEC-Assisted Platooning. Sensors 2019, 19, 847.
  24. Lamb, Z.W.; Agrawal, D.P. Analysis of Mobile Edge Computing for Vehicular Networks. Sensors 2019, 19, 1303.
  25. Tran, T.X.; Pompili, D. Adaptive Bitrate Video Caching and Processing in Mobile-Edge Computing Networks. IEEE Trans. Mob. Comput. 2019, 18, 1965–1978.
  26. Long, C.; Cao, Y.; Jiang, T.; Zhang, Q. Edge Computing Framework for Cooperative Video Processing in Multimedia IoT Systems. IEEE Trans. Multimed. 2018, 20, 1126–1139.
  27. Mirjalili, S.; Lewis, A. S-shaped versus V-shaped transfer functions for binary Particle Swarm Optimization. Swarm Evol. Comput. 2013, 9, 1–14.
  28. Xu, J.; Guo, C.; Zhang, H. Joint channel allocation and power control based on PSO for cellular networks with D2D communications. Comput. Netw. 2018, 133, 104–119.
  29. Girmay, G.G.; Pham, Q.; Hwang, W. Joint channel and Power Allocation for Device-to-Device Communication on Licensed and Unlicensed Band. IEEE Access 2019, 7, 22196–22205.
  30. Mao, Y.; Zhang, J.; Song, S.H.; Letaief, K.B. Power-Delay Tradeoff in Multi-User Mobile-Edge Computing Systems. In Proceedings of the 2016 IEEE Global Communications Conference (GLOBECOM), Washington, DC, USA, 4–8 December 2016; pp. 1–6.
  31. Deng, M.; Tian, H.; Lyu, X. Adaptive sequential offloading game for multi-cell Mobile Edge Computing. In Proceedings of the 2016 23rd International Conference on Telecommunications (ICT), Thessaloniki, Greece, 16–18 May 2016; pp. 1–5.
  32. Al-Shuwaili, A.; Simeone, O. Energy-Efficient Resource Allocation for Mobile Edge Computing-Based Augmented Reality Applications. IEEE Wirel. Commun. Lett. 2017, 6, 398–401.
  33. Kan, T.; Chiang, Y.; Wei, H. Task offloading and resource allocation in mobile-edge computing system. In Proceedings of the 2018 27th Wireless and Optical Communication Conference (WOCC), Hualien, Taiwan, 30 April–1 May 2018; pp. 1–4.
  34. Pham, Q.V.; Anh, T.L.; Tran, N.H.; Park, B.J.; Hong, C.S. Decentralized Computation Offloading and Resource Allocation for Mobile-Edge Computing: A Matching Game Approach. IEEE Access 2018, 6, 75868–75885.
  35. Tran, T.X.; Pompili, D. Joint Task Offloading and Resource Allocation for Multi-Server Mobile-Edge Computing Networks. IEEE Trans. Veh. Technol. 2019, 68, 856–868.
  36. Chen, M.; Hao, Y. Task Offloading for Mobile Edge Computing in Software Defined Ultra-Dense Network. IEEE J. Sel. Areas Commun. 2018, 36, 587–597.
  37. Guo, H.; Liu, J.; Zhang, J.; Sun, W.; Kato, N. Mobile-Edge Computation Offloading for Ultradense IoT Networks. IEEE Internet Things J. 2018, 5, 4977–4988.
  38. Guo, H.; Zhang, J.; Liu, J.; Zhang, H. Energy-Aware Computation Offloading and Transmit Power Allocation in Ultradense IoT Networks. IEEE Internet Things J. 2019, 6, 4317–4329.
  39. Pham, X.Q.; Nguyen, T.D.; Nguyen, V.; Huh, E.N. Joint Node Selection and Resource Allocation for Task Offloading in Scalable Vehicle-Assisted Multi-Access Edge Computing. Symmetry 2019, 11, 58.
  40. Yang, L.; Zhang, H.; Li, M.; Guo, J.; Ji, H. Mobile Edge Computing Empowered Energy Efficient Task Offloading in 5G. IEEE Trans. Veh. Technol. 2018, 67, 6398–6409.
  41. Ateya, A.A.; Muthanna, A.; Vybornova, A.; Darya, P.; Koucheryavy, A. Energy-Aware Offloading Algorithm for Multi-level Cloud Based 5G System. In Internet of Things, Smart Spaces, and Next Generation Networks and Systems; Springer International Publishing: Cham, Switzerland, 2018; pp. 355–370.
  42. Wan, S.; Li, X.; Xue, Y.; Lin, W.; Xu, X. Efficient computation offloading for Internet of Vehicles in edge computing-assisted 5G networks. J. Supercomput. 2019, 1–30.
  43. Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004.
  44. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the International Conference on Neural Networks (ICNN’95), Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948.
  45. Kennedy, J.; Eberhart, R. A discrete binary version of the particle swarm algorithm. In Proceedings of the 1997 IEEE International Conference on Systems, Man, and Cybernetics. Computational Cybernetics and Simulation, Orlando, FL, USA, 12–15 October 1997; Volume 5, pp. 4104–4108.
Figure 1. Multi-tier computation-offloading strategy in HetNets.
Figure 2. JROPSO convergence in terms of total computing overhead according to the number of iterations.
Figure 3. Percentage of offloading mobile devices (MDs) according to the number of MDs.
Figure 4. Comparison of the total computing overhead according to the number of MDs.
Figure 5. Performance evaluation under different weighting parameters.
Figure 6. The effect of data size on task offloading.
