Article

Delay Optimization for Wireless Powered Mobile Edge Computing with Computation Offloading via Deep Learning

1 School of Computer Science and Technology, Shaanxi Normal University, Xi’an 710119, China
2 Department of Computer Science & Technology, Xi’an Jiaotong University, Xi’an 710049, China
3 Xi’an Aeronautics Computing Technique Research Institute, AVIC, Xi’an 710068, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(16), 7190; https://doi.org/10.3390/app14167190
Submission received: 16 June 2024 / Revised: 13 August 2024 / Accepted: 13 August 2024 / Published: 15 August 2024

Abstract

Mobile edge computing (MEC), specifically wireless powered mobile edge computing (WPMEC), can achieve superior real-time data analysis and intelligent processing. In WPMEC, different user nodes (UNs) harvest significantly different amounts of energy, which results in longer delays for lower-energy UNs when data are offloaded to MEC servers. This study quantifies the delays in energy harvesting and task offloading to edge servers in WPMEC with user cooperation. In this paper, a method for transferring the tasks that need to be offloaded to edge servers as quickly as possible is investigated. The problem is formulated as an optimization model to minimize the delay, including the time required for energy harvesting and task offloading. Because the problem is non-deterministic polynomial-time hard (NP-hard), a delay-optimal approximation algorithm (DOPA) is proposed. Finally, with training data generated based on the DOPA, a deep learning-based online offloading (DLOO) framework is designed for predicting the transmission power of each UN. After each UN’s transmission power is obtained, the original model is converted to a linear programming problem, which substantially reduces the computational complexity relative to solving the mixed-integer linear programming problem with the DOPA, especially in large-scale networks. The numerical results show that compared with non-cooperative methods for WPMEC, the proposed algorithm significantly reduces the total delay. Additionally, in the delay optimization process at a scale of six UNs, the average computation time of the DLOO is only 0.2% that of the DOPA.

1. Introduction

At present, due to resource constraints, including limitations regarding computing and storage, user nodes (UNs) with miniaturized designs cannot locally process large amounts of perceptual data. In a cloud architecture, the original data needs to be transmitted to a remote cloud computing platform for calculation. However, transferring large volumes of raw data to the remote platform results in significant energy consumption overhead for UNs. As an auxiliary technology for cloud computing, edge computing can effectively address the energy constraints of UNs by optimizing information processing and transmission. Consequently, cloudlets [1] and edge computing [2] have been extensively researched.
Although edge computing can decrease the energy consumption rate of UNs, it cannot solve the fundamental problem of the limited UN battery lifetime. Currently, batteries are the primary power source for UNs. However, these batteries have limited energy capacities and can be difficult to replace in certain environments. To address the issue of energy limitations, there has been an increasing amount of research into wireless power supplies. In terms of energy source selection, traditional sources like solar, wind, and other energy resources are significantly affected by geographic and environmental factors. Extensive research has been conducted on the collection of energy from wireless spectrum signals, as these signals represent a controllable and predictable energy source [3]. Additionally, wireless signals can serve both as a source of energy and as a carrier for information transmission. In the process of wireless energy harvesting, the amount of energy collected by UNs depends on the channel quality. More energy can be collected when the UN is closer to an access point (AP), enabling it to transmit more data. Conversely, a UN farther from the AP collects less energy. As a result, tasks that need to be offloaded to a mobile edge computing (MEC) server may experience delays because of insufficient energy. This phenomenon is known as the “near–far” effect. By integrating an energy supply network into MEC, computationally intensive tasks can be offloaded to nearby edge servers or UNs, reducing power consumption and computational latency [4,5].
This paper studies a wireless powered mobile edge computing (WPMEC) scenario in which UNs complete latency-critical, computation-intensive tasks by offloading them to an edge server through cooperative communication, with all UN energy supplied entirely by the AP. A method is designed to determine the data forwarding path, the amount of data to forward, and the energy harvesting time so that all UN tasks reach the AP as quickly as possible, minimizing the system delay. First, the delay minimization problem is formalized as a mixed-integer nonlinear programming (MINLP) problem. Given that this problem is NP-hard, a delay-optimal approximation algorithm (DOPA) is proposed. Finally, a deep learning-based online offloading (DLOO) framework is designed to predict the transmission power of each UN in the WPMEC. Compared to existing methods for solving the WPMEC optimization problem, this paper makes the following contributions:
(1)
To eliminate the “near–far” effect in the WPMEC, a UN cooperative transmission method is introduced. This method improves the data offloading delay of “far” UNs by enabling UN cooperation, while also eliminating the deployment cost of relay nodes.
(2)
In the case of energy-constrained edge computing, a proposal is made to transfer tasks that need to be offloaded to the MEC server as quickly as possible. A mathematical model aimed at minimum delay is established and solved using the proposed DOPA method to reduce the data offloading delay.
(3)
The proposed DLOO framework draws lessons from historical experience under various wireless channel conditions and automatically predicts the transmission power of the UNs. This approach significantly reduces the calculation delay while ensuring accurate predictions.

The remainder of this paper is organized as follows. Section 2 reviews the related work. The system model and problem formulation are described in Section 3. The detailed designs of the DOPA and the DLOO framework are introduced in Section 4. The numerical results are presented in Section 5. Finally, the paper is concluded in Section 6.

2. Related Work

Several studies related to WPMEC have been conducted. The European Telecommunications Standards Institute clearly defined the concept of MEC in Ref. [6]. Shi et al. [7] discussed the transition from a cloud architecture to an edge computing architecture and outlined six major challenges for future edge computing, including how UNs with limited energy can achieve low-latency data offloading.
Wireless power transfer (WPT), particularly in the form of a wireless powered communication network (WPCN) [8,9,10], has been recognized as an important method for providing continuous power for mobile communications. The authors of Ref. [11] first applied a wireless energy supply to a cloud computing architecture. UNs first harvested energy and then transmitted data to the cloud. Under energy and delay constraints, an appropriate computing mode (local, or offloading to the cloud server) was selected to maximize the probability of successful computing. Wang et al. [12] proposed a unified MEC-WPT design by considering a wireless powered multiuser MEC system, where an AP integrated with an MEC server was deployed to broadcast wireless power to charge UNs, receive offloaded data from UNs, and execute computation tasks. This system reduced the transmission energy UNs spent sending data to the cloud. Based on this system, Wang et al. [13] proposed an online learning algorithm for computation offloading executed in a distributed manner in multi-user, multiple-AP scenarios. Maray et al. [14] proposed a joint DL-based framework that effectively initiates a worthy offloading solution in orthogonal frequency-division multiple access (OFDMA) WPMEC. Mustafa et al. [15] aimed to generate an effective offloading choice between local and remote computation in a real-time environment. However, these measures still did not resolve the “near–far” effect, which impacts network performance [16,17]. In effect, UNs that are far from the AP harvest less energy but have a longer communication distance than those closer to the AP. Consequently, the “far” UNs require more time than the “near” UNs to transmit their data.
An effective way to solve this problem is to establish cooperation among UNs and to design routing, power control, or time allocation mechanisms. The authors of Ref. [18] considered the “near–far” effect of the WPMEC. By employing joint cooperative transmission and the allocation of computing resources, the AP’s transmit power minimization problem was effectively solved, and energy efficiency was improved. In Ref. [19], a wireless-powered non-orthogonal multiple access (NOMA)-aided MEC framework was proposed, in which joint communication and computation cooperation was considered to overcome the “near–far” effect. Based on this framework, the authors minimized energy consumption by jointly optimizing the transmission power while satisfying the maximum tolerable delay. However, that research considered only scenarios with two UNs, and the results cannot be applied to large-scale networks. In Ref. [13], the authors proposed an online learning algorithm for computation offloading in large-scale WPMECs executed in a distributed manner. The algorithm assumes that the transmission power of UNs is fixed, an assumption that does not hold in practical scenarios where the power of UNs is variable. In Ref. [20], the authors minimized the total energy consumption of the system using helper node cooperation by jointly considering energy beamforming, timeslot assignment, computation-task allocation, and power allocation. This system required additional helper nodes with sufficient resources to guarantee data forwarding, which introduced extra overhead. Therefore, it is important to identify a method for minimizing delay based on UN cooperation that is suitable for multi-UN scenarios.

3. System Model and Problem Formulation

As shown in Figure 1, this study considers a WPMEC system that consists of an AP, an MEC server, and $N$ fixed UNs denoted by the set $\mathcal{N} = \{1, 2, \ldots, N\}$. The AP supplies energy to the UNs, which then use the harvested energy for data offloading. Each UN may possess computation-intensive, latency-critical tasks to offload to the AP. To reduce interference from information transmission between UNs, a block-based time-division multiple access (TDMA) structure is employed. In each time block, the AP provides energy to all UNs in the downlink, and then the UNs use the harvested energy to offload tasks to the AP in the uplink. Once the MEC server completes the tasks, it sends the results back to the UNs. Since the results are typically small, the return time is generally negligible. Consequently, the harvest-then-transmit protocol proposed in Ref. [21] is used and is referred to as the harvest-then-offload protocol in WPMEC. In the UN cooperative transmission mode, the AP determines the data transmission path of each UN based on its energy status, assisting “far” UNs in transmitting information. Multi-hop transmission is employed to aid “far” UNs in offloading tasks. The objective is to minimize the total time, which includes both the energy harvesting time and the task offloading time. Some notations used in this paper are shown in Table 1.
Each UN has a computation-intensive task $I_i$ ($i \in \mathcal{N}$) and needs to offload it to the MEC server to meet its critical latency requirement. Considering the impact of the “near–far” effect in WPMEC, UNs use a UN cooperation model to offload their data, which helps to improve the latency performance. $L_{ij}$ denotes the link formed between UN i and UN j. The uplink and downlink channel power gains of link $L_{ij}$ are expressed as $g_{ij}$ and $h_{ij}$, respectively. For any single block, a time division structure is used, as shown in Figure 2. Data transmission is the main factor affecting latency; compared to the transmission time, the computation time and the result feedback time are negligible [18,19].
At the downlink energy harvesting time  t 0 , the AP uses transmission power P0 to broadcast wireless power to all UNs in the downlink, assuming that all UNs have sufficient battery capacity to store the harvested energy. The energy harvested by each UN during the WPT is expressed as follows:
$E_i = \zeta P_0 h_{Ai} t_0, \quad (1)$
where $\zeta$ represents the energy conversion efficiency of each UN, with $0 < \zeta \leq 1$. Each UN begins transmitting information after the energy harvesting period has ended. Self-interference would occur if the AP transmitted energy and received information simultaneously. To avoid this issue, the AP does not participate in the wireless energy supply during the uplink time.
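For concreteness, the harvested-energy expression above can be evaluated with a small helper. All parameter values here are illustrative assumptions, not values taken from the paper:

```python
# Energy harvested by UN i during the WPT phase: E_i = zeta * P0 * h_Ai * t0.
# All parameter values below are illustrative assumptions, not from the paper.
def harvested_energy(zeta, p0, h_ai, t0):
    assert 0.0 < zeta <= 1.0, "energy conversion efficiency must be in (0, 1]"
    return zeta * p0 * h_ai * t0

# AP transmits 1 W for 0.5 s over a downlink with power gain 1e-3, zeta = 0.5:
e_i = harvested_energy(0.5, 1.0, 1e-3, 0.5)
print(e_i)  # 2.5e-4 J
```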
During the uplink information transmission time, UN i may select link Lij, if it exhibits a better link quality compared to that of link LiA. Otherwise, link Lij is not active, and UN i may directly transmit information to the AP. Links that can potentially be activated for data offloading are termed feasible links, and L represents the set of these links. A binary variable Bij is introduced to represent the status of the feasible links during the uplink transmission. If Lij is activated for information transmission, Bij equals 1. Otherwise, it is 0. Due to the unidirectional nature of the links, the following formula applies:
$1 \geq B_{ji} + B_{ij}, \quad \forall L_{ij} \in L. \quad (2)$
In the uplink time, the energy consumption of the UNs arises mainly from two actions: transmitting information and receiving information. The energy consumed for transmitting information includes both the energy required for offloading data and that consumed by the non-ideal circuit. Here, $P_c$ represents the power loss caused by the extra electronic devices during the UN transmission time [22]. The transmission energy consumption of UN i is expressed as follows:
$E_{ti} = \sum_{L_{ij} \in L} (P_c + P_i) t_{ij}, \quad i \in \mathcal{N}, \quad (3)$
where $P_i$ is the transmission power of UN i, taking values in the interval $[0, P_{max}]$. In the receiving phase, UNs consume energy to receive data. The amount of information received by UN i through link $L_{ui}$ is expressed as $r_{ui}$, so the total amount of information received by UN i is $\sum_{L_{ui} \in L} r_{ui}$. The energy consumption of the UNs for receiving information is mainly attributed to the electronic devices that receive signals. The energy required to receive 1 bit of information is denoted as $E_{elec}$ nJ/bit. Therefore, the total energy consumption of UN i during the reception of information is expressed as follows [23]:
$E_{ri} = E_{elec} \sum_{L_{ui} \in L} r_{ui}, \quad i \in \mathcal{N}. \quad (4)$
According to Equations (1)–(4), UNs must abide by the energy constraint for self-sustaining operation over the entire time block. In other words, the total energy consumption of UN i should not exceed its harvested energy. This can be expressed using the following formula:
$E_{ri} + E_{ti} \leq E_i. \quad (5)$
The energy constraints ensure that the energy consumption of the UNs does not exceed the harvested energy. However, as the communication volume increases, the energy harvesting time and information transmission time in the solutions increase significantly. Under TDMA, the channel state of each active link depends only on its corresponding signal-to-noise ratio (SNR). Here, $SNR_{ij}$ denotes the SNR of link $L_{ij}$. Once the transmission power of UN i is determined, the capacity $C_{ij}$ of link $L_{ij}$ can be calculated using the following formula:
$C_{ij} = W \log_2 (1 + SNR_{ij}), \quad (6)$
where $SNR_{ij} = P_i g_{ij} / (W \eta)$, $W$ represents the bandwidth, and $\eta$ represents the spectral noise power. The amount of information transmitted by the UNs through link $L_{ij}$ is limited by the link capacity $C_{ij}$. The amount of transmitted information on link $L_{ij}$ is denoted by $r_{ij}$. Thus, the link capacity constraint is as follows:
$r_{ij} \leq C_{ij} t_{ij}, \quad (7)$
where $t_{ij}$ is the active time of link $L_{ij}$. After receiving data, each UN must transmit all of its generated and forwarded data through its own output links. Let $I_i$ denote the amount of data generated by UN i. Then, the traffic constraint that UN i must satisfy is as follows:
$\sum_{L_{ij} \in L} r_{ij} - \sum_{L_{ui} \in L} r_{ui} \geq I_i, \quad i \in \mathcal{N}. \quad (8)$
After all UNs complete the information transmission, the AP needs to completely receive all of the information before the end of the time block. Otherwise, information may be lost. Thus, the AP must meet the following flow conservation constraint:
$\sum_{i \in \mathcal{N}} I_i = I_A. \quad (9)$
Let  T off  denote the data offloading time from the UNs to the AP, resulting in the following:
$\sum_{L_{ij} \in L} t_{ij} = T_{off}. \quad (10)$
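Constraints (6) and (7) can be checked numerically, reading the SNR as $P_i g_{ij}/(W\eta)$ per the definition above. The sketch below uses illustrative values for the noise power, transmission power, and channel gain (none of these numbers come from the paper):

```python
import math

W = 1e6      # bandwidth (Hz), matching the 1 MHz used later in the simulations
eta = 1e-17  # assumed spectral noise power (W/Hz); illustrative only

def link_capacity(p_i, g_ij):
    """Equation (6): C_ij = W * log2(1 + SNR_ij) with SNR_ij = P_i*g_ij/(W*eta)."""
    snr = p_i * g_ij / (W * eta)
    return W * math.log2(1.0 + snr)

def offload_feasible(r_ij, p_i, g_ij, t_ij):
    """Constraint (7): data sent on link L_ij cannot exceed capacity x active time."""
    return r_ij <= link_capacity(p_i, g_ij) * t_ij

c = link_capacity(0.01, 1e-6)              # 10 mW over a gain-1e-6 link
print(c)                                   # roughly 1e7 bit/s here
print(offload_feasible(1e4, 0.01, 1e-6, 0.01))
```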
The minimum delay problem can be modeled as the minimizing delay (MD) problem, as follows:
MD: $\min \; t_0 + T_{off}$
s.t. Constraints (1)–(10),
$B_{ij} \in \{0, 1\}, \quad r_{ij} > 0, \quad P_i > 0, \quad t_0 > 0, \quad t_{ij} > 0.$
The objective function of the MD model is to minimize the sum of the total time for energy harvesting and data offloading, subject to various constraints, including those related to energy, flow conservation, and link capacity. The MD model is a mixed-integer nonlinear programming (MINLP) model. The problem is NP-hard and cannot be directly solved using existing methods [24]. Analyzing the MD model reveals that the main challenges in solving the problem are as follows:
(1)
the product relationship between the nonlinear log function and the linear variable, as seen in the right-hand side of Constraint (6).
(2)
the presence of bilinear terms, such as the $P_i t_{ij}$ term in Equation (3).
The next section shows how a piecewise linearization (PWL) method is employed to convert the model into a mixed-integer linear programming (MILP) model, thereby mitigating the effects of these two challenges on the solution process.

4. DOPA for Solving the Optimal Delay Problem

In this system, the MEC server first obtains user and AP information. It then solves the problem using the proposed method to obtain feasible solutions for time allocation, power control, and forwarding paths. Figure 3 summarizes the main steps for solving the problem.

4.1. Conversion of Nonlinear Function to Piecewise Linear Function Using the PWL Method

PWL is a method used to analyze nonlinear functions by approximating them with piecewise linear segments. In this method, the nonlinear characteristic curve is divided into several intervals of varying sizes. Within each interval, the nonlinear curve is approximated by a straight line segment. This process effectively replaces the nonlinear curve with a series of linear segments.
In the MD model, Constraint (6) contains log functions, which complicates problem solving. The PWL method is used to convert Constraint (6) into a linear constraint, as shown in Figure 4. First, a continuous piecewise linear function is employed to approximate the log function. To ensure that the approximate error between the piecewise function and the log function remains within acceptable limits, an error threshold is introduced. The error between the fitted piecewise function across all intervals and the log function does not exceed this threshold. Shorter intervals yield a more accurate fit. For the sake of discussion, the following constraints are used in place of Constraint (6):
$C_{ij} = \frac{W}{\ln 2} \ln(1 + SNR_{ij}). \quad (12)$
From Equation (12), it is clear that the ln function of link $L_{ij}$ should be segmented; in other words, the interval $[0, P_{max} g_{ij} / (W \eta)]$ is segmented. Let $S_{ij}$ denote the number of segments, where the qth interval is expressed as $((SNR_{ij}^q)_L, (SNR_{ij}^q)_H)$, $q \in S_{ij}$, and $v_{ij}^q$ represents the slope of the qth segment, which can be expressed as follows:
$v_{ij}^q = \frac{\ln(1 + (SNR_{ij}^q)_H) - \ln(1 + (SNR_{ij}^q)_L)}{(SNR_{ij}^q)_H - (SNR_{ij}^q)_L}. \quad (13)$
The value of  S ij  directly determines the fitting accuracy of the ln function of the corresponding variable. In Algorithm 1,  S ij  is determined using the PWL method, and the error is guaranteed not to exceed ε.
Algorithm 1. PWL method
Inputs: $q = 0$, $S_{ij} = 0$, $(SNR_{ij}^q)_L = 0$.
Outputs: $S_{ij}$, $(SNR_{ij}^q)_L$, $(SNR_{ij}^q)_H$.
1: Solve the following equation using the Newton iterative method to obtain slope $v_{ij}^q$:
$-\ln(v_{ij}^q) + v_{ij}^q \left(1 + (SNR_{ij}^q)_L\right) - 1 - \ln\left(1 + (SNR_{ij}^q)_L\right) = \varepsilon. \quad (14)$
2: Solve Equation (13) to obtain $(SNR_{ij}^q)_H$. If $(SNR_{ij}^q)_H \geq P_{max} g_{ij} / (W \eta)$, then stop: set $(SNR_{ij}^q)_H = P_{max} g_{ij} / (W \eta)$ and $S_{ij} = q$. Otherwise, continue to the next step.
3: Update $(SNR_{ij}^q)_L = (SNR_{ij}^q)_H$, set $q = q + 1$, and return to step 1.
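Algorithm 1 can be sketched in Python as follows. This is a hedged reimplementation: it uses bisection in place of the Newton iteration named in step 1 (both solve the same scalar equations (13) and (14)), and `snr_max` stands for the interval endpoint determined by $P_{max}$ and the channel parameters.

```python
import math

def pwl_segments(snr_max, eps):
    """Piecewise-linear fit of ln(1+x) on [0, snr_max] with max error <= eps.
    Returns a list of (s_L, s_H, slope) segments; a sketch of Algorithm 1
    using bisection instead of Newton iteration."""
    segs = []
    s_L = 0.0
    while s_L < snr_max:
        # Step 1: solve -ln(v) + v*(1+s_L) - 1 - ln(1+s_L) = eps for slope v,
        # i.e., the max gap between the chord from s_L with slope v and ln(1+x).
        lo, hi = 1e-12, 1.0 / (1.0 + s_L)    # f(lo) > 0 > f(hi) = -eps
        for _ in range(200):                  # bisection on a decreasing f
            v = 0.5 * (lo + hi)
            f = -math.log(v) + v * (1.0 + s_L) - 1.0 - math.log(1.0 + s_L) - eps
            lo, hi = (v, hi) if f > 0 else (lo, v)
        v = 0.5 * (lo + hi)
        # Step 2: solve Eq. (13) for the segment's upper end s_H > s_L:
        # ln(1+s_H) - ln(1+s_L) = v * (s_H - s_L).
        lo, hi = s_L + 1e-12, s_L + 1.0
        while math.log(1.0 + hi) - math.log(1.0 + s_L) > v * (hi - s_L):
            hi = s_L + 2.0 * (hi - s_L)       # grow until the chord crosses back
        for _ in range(200):
            x = 0.5 * (lo + hi)
            if math.log(1.0 + x) - math.log(1.0 + s_L) > v * (x - s_L):
                lo = x
            else:
                hi = x
        s_H = min(0.5 * (lo + hi), snr_max)   # clamp at the interval endpoint
        segs.append((s_L, s_H, v))
        s_L = s_H
    return segs
```

Because ln is concave, each chord lies below the curve, so the linear surrogate is conservative and the gap never exceeds the chosen ε.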
As shown in Algorithm 1, the PWL method starts from the first interval and determines the slope according to the error requirements. In the second step, the upper bound of the interval is calculated using the known slope and the lower bound of the interval. Through continuous iteration, a piecewise linear function is generated over the entire interval. The log function transformation rule is then applied to convert Constraint (7) into the following constraint:
$r_{ij} \leq \frac{W}{\ln 2} \left[ v_{ij}^q \left( \frac{P_i g_{ij}}{W \eta} - (SNR_{ij}^q)_L \right) + \ln\left(1 + (SNR_{ij}^q)_L\right) \right], \quad \forall L_{ij} \in L, \; q \in S_{ij}. \quad (15)$
By converting Constraint (7) to Constraint (15), all the constraints of the MD model are converted to linear terms. The MD is rewritten as follows:
MD1: $\min \; t_0 + T_{off}$
s.t.
$\sum_{L_{ij} \in L} (P_c + P_i) t_{ij} + E_{elec} \sum_{L_{ui} \in L} r_{ui} \leq \zeta P_0 h_{Ai} t_0, \quad i \in \mathcal{N},$
$\sum_{L_{ij} \in L} t_{ij} = T_{off},$
$1 \geq B_{ji} + B_{ij}, \quad \forall L_{ij} \in L,$
$\sum_{L_{ij} \in L} r_{ij} - \sum_{L_{ui} \in L} r_{ui} \geq I_i, \quad i \in \mathcal{N},$
$\sum_{i \in \mathcal{N}} I_i = I_A,$
$r_{ij} \leq \frac{W}{\ln 2} \left[ v_{ij}^q \left( \frac{P_i g_{ij}}{W \eta} - (SNR_{ij}^q)_L \right) + \ln\left(1 + (SNR_{ij}^q)_L\right) \right], \quad \forall L_{ij} \in L, \; q \in S_{ij},$
$B_{ij} \in \{0, 1\}, \quad r_{ij} > 0, \quad P_i > 0, \quad t_0 > 0, \quad t_{ij} > 0.$
The MD1 model is an MILP model. It can be solved using existing optimization software. The proposed DOPA can be found in Algorithm 2. First, the error threshold is determined according to requirement r. Then, the PWL method is used to transform the model. Finally, the approximate optimal solution of the problem is obtained by solving the MD1 model.
Algorithm 2. DOPA for solving MD1
Inputs: r
Outputs: Optimal solution
1: Determine the error threshold ε according to requirement r.
2: Use Algorithm 1 to convert the MD model into the MILP model MD1.
3: Solve MD1 using a mathematical optimization tool to obtain an approximate optimal solution.

4.2. DLOO Framework for Predicting the Transmission Power

The transmission power of each UN is crucial for solving the MD1 problem. In this section, we aim to develop a DL framework that can quickly generate the transmission power for each UN at the beginning of each timeframe. The proposed DLOO framework incrementally updates the weights based on accumulated experience.
The DLOO framework mainly includes three aspects. First, the deep belief network (DBN) is trained offline to ensure that the neural network can perform satisfactorily using past historical data. The DBN training algorithm used in this study is provided in Algorithm 3.
Algorithm 3. Training of DBN
Inputs: Channel state information
Outputs: Trained DBN network
1: Build the training set {($g_{ij}$, $P_i$)} by solving the MD1 model;
2: Train the DBN in an unsupervised way;
3: Add an output layer on the top layer, and use the BP method to adjust the top-layer weights;
4: Use BP to fine-tune the weights of all the layers.
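The supervised fine-tuning of steps 3–4 can be illustrated with a minimal stand-in: a one-hidden-layer network trained by backpropagation on synthetic (channel gain, power) pairs. This sketch omits the unsupervised RBM pretraining of the DBN, and all dimensions, data, and the learning rate are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the BP fine-tuning step: a one-hidden-layer MLP trained to
# map channel-gain vectors to transmission powers. Synthetic data only.
X = rng.uniform(0.0, 1.0, size=(256, 36))   # channel state vectors (N=6 -> 36 gains)
true_W = rng.normal(size=(36, 6))
Y = np.tanh(X @ true_W)                     # synthetic power targets

W1 = rng.normal(scale=0.1, size=(36, 20)); b1 = np.zeros(20)
W2 = rng.normal(scale=0.1, size=(20, 6));  b2 = np.zeros(6)
lr = 0.05

losses = []
for _ in range(500):
    H = np.tanh(X @ W1 + b1)                # hidden activations
    P = H @ W2 + b2                         # predicted powers
    err = P - Y                             # dL/dP for mean-squared-error loss
    losses.append(float((err ** 2).mean()))
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H ** 2)        # backprop through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

The training loss decreases over the iterations, mimicking how the BP pass adjusts the top-layer and then all-layer weights in Algorithm 3.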
Then, the trained DBN is used online to predict the transmission power of each UN. As shown in Algorithm 4, for a new UN, a new corresponding channel state is first generated. The input channel parameters are sent to the MEC server, which then runs the DLOO algorithm. The trained DBN takes the channel state information as the input and gives the output, which is the transmission power of the UNs. Based on these predicted transmission powers, the required results can be quickly obtained by solving the MD1 model. Once calculated, this dataset may be stored in the computer memory as a set of sample data to update the DBN module. Finally, as the dataset size increases, the DBN module is updated with the new dataset.
The experiments showed that when the newly added sample data made up 1/5th of the total dataset, the network’s prediction accuracy decreased, indicating that the DBN module in the DLOO needed updating. During the update phase, a batch of training samples was extracted from the memory, and the BP algorithm was used to directly adjust the weight of the DBN module. The DBN module then updated its parameters. The updated DBN could accurately predict each UN’s transmission power, according to the new weight. Subsequent observations of new channel states prompted repeated iterations and continuous updates of the DBN module.
Algorithm 4. DLOO framework for power prediction
Inputs: DBN module, channel state information
Outputs: Each UN’s transmission power
1: Enter the channel state information vector into the DBN module to obtain each UN’s transmission power.
2: Based on the given transmission power, transform MD1 into a linear programming problem.
3: Solve the linear programming problem. The channel state information and the transmission power ($g_{ij}$, $P_i$) make up the new data.
4: Add the data to the memory. If the dataset size has increased by 1/5, update the DBN module using Algorithm 3.
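Step 2 above, in which fixing every $P_i$ collapses MD1 to a linear program, can be sketched for a toy two-UN, direct-transmission case. All numeric values are illustrative assumptions, and `scipy` stands in for whatever solver is used in practice:

```python
from scipy.optimize import linprog

# Toy two-UN instance after fixing the transmission powers (as the DBN would
# supply them): both UNs offload directly to the AP, so only the times
# t0, t1, t2 remain and the problem is linear. All numbers are assumptions.
zeta, P0 = 0.5, 1.0                 # energy conversion efficiency, AP power (W)
h = [1e-3, 5e-4]                    # downlink energy channel gains
Pc, P = 1e-3, [0.01, 0.01]          # circuit power and fixed transmission powers (W)
C = [1e6, 5e5]                      # link capacities (bit/s) at the fixed powers
I = [1e4, 1e4]                      # task sizes: 10 kb each

# Variables x = [t0, t1, t2]; objective: minimize total delay t0 + t1 + t2.
c = [1.0, 1.0, 1.0]
A_ub = [
    [-zeta * P0 * h[0], Pc + P[0], 0.0],   # energy: (Pc+P1)*t1 <= zeta*P0*h1*t0
    [-zeta * P0 * h[1], 0.0, Pc + P[1]],   # energy: (Pc+P2)*t2 <= zeta*P0*h2*t0
    [0.0, -C[0], 0.0],                     # capacity/traffic: I1 <= C1*t1
    [0.0, 0.0, -C[1]],                     # capacity/traffic: I2 <= C2*t2
]
b_ub = [0.0, 0.0, -I[0], -I[1]]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(res.x, res.fun)  # t_i sit at I_i / C_i; t0 is set by the tighter energy constraint
```

In this instance the optimum puts each offloading time at its capacity bound ($t_1 = 0.01$, $t_2 = 0.02$) and the harvesting time at the binding energy constraint of the farther UN.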

5. Experimental Results and Analysis

5.1. Simulation Setup

In WPMEC, it is assumed that the bandwidth is 1 MHz and the noise power $\eta$ is −90 dBm [8]. The energy harvesting efficiency $\zeta$ is 0.5 [15]. The channel attenuation follows a Rayleigh distribution. The power gains of the uplink and downlink channels are $g_{ij} = 10^{-3} \rho_{ij}^2 d_{ij}^{-\alpha}$ and $h_{Ai} = 10^{-3} \rho_{Ai}^2 d_{Ai}^{-\alpha}$, respectively, where $\rho_{ij}$ follows a Rayleigh distribution, $d_{ij}$ represents the distance between UN i and UN j, $d_{Ai}$ is the distance between UN i and the AP, and $\alpha$ represents the path attenuation coefficient [13]. The average signal attenuation at a reference distance of 1 m is assumed to be 30 dB. Here, r and $I_i$ are set as 0.1 and 10 kb, respectively, for all $i \in \mathcal{N}$. All the algorithms are implemented in C++20 with CPLEX 12.9, TensorFlow 2.1, and Python 3.6.
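The fading model above can be sampled as follows. The exponential draw for $\rho^2$ (the squared Rayleigh amplitude, normalized to unit mean) and the seed are implementation assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def channel_gain(d, alpha=3.0, n=1):
    """Sample the Rayleigh-faded power gain g = 1e-3 * rho^2 * d^(-alpha).
    With a Rayleigh-distributed amplitude rho, the power rho^2 is exponentially
    distributed; normalized to mean 1 here, so the average gain at the 1 m
    reference distance is 1e-3, i.e., the assumed 30 dB attenuation."""
    rho2 = rng.exponential(1.0, size=n)
    return 1e-3 * rho2 * d ** (-alpha)

g_ref = channel_gain(1.0, n=100_000)
print(10 * np.log10(g_ref.mean()))  # close to -30 dB
```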
“CT-t0” and “CT-t” denote the energy harvesting time and the total time obtained using the DOPA. The total time includes both the downlink energy harvesting time and the uplink transmission time. In addition, this section also evaluates the delay performance when the UNs transmit information directly to the AP; this is known as the non-cooperative direct transmission method. “DT-t0” and “DT-t” denote the energy harvesting time and the total time obtained without UN cooperation.

5.2. Simulation Results

5.2.1. DOPA Performance Analysis

It is assumed that six UNs are deployed in a square area of 10 m × 10 m, as shown in Figure 5. In Figure 5, the triangle in the center represents the AP, and the red dots represent the UNs. Here, $P_0 = 30$ dBm and $\alpha = 3$.
Figure 5 shows the results for the data forwarding paths obtained with and without the cooperation of the UNs. The number on the straight line indicates the proportion of each UN’s data that is forwarded to its relay UNs. For example, the proportions of UN1’s data forwarded to AP, UN0, and UN2 are 0.113, 0.874, and 0.003, respectively. The weight value is calculated based on the DOPA. In the non-cooperative transmission method, each weight is 1, indicating direct transmission to the AP.
As shown in Figure 5a, the proposed method effectively balances the relationship between the transmitted information and the energy consumption. Since UN0 and UN2 are closer to the AP, they exhibit better channel quality and can harvest more energy. Therefore, UN1 and UN3 forward their information through UN0 and UN2, respectively. Since UN4 is farther from the AP, part of its data is forwarded by UN5, while the rest is transmitted directly to the AP.
Figure 5b does not consider cooperative transmission. The tasks offloaded by UN 1, UN 3, and UN 4 need to be transmitted to the AP over a long distance, which causes significant signal attenuation. The data offloading path for UN 1, UN 3, and UN 4 is not optimized, which increases the latency. The long delays for UN 1, UN 3, and UN 4 have become the bottleneck of delay optimization. The data transmission delay for “far” UNs can be optimized through UN cooperation.
The delays in Figure 5a,b are CT-t0 = 0.3046, CT-t = 0.7031, DT-t0 = 0.3046, and DT-t = 0.7392, respectively. Compared with the non-cooperative method, the DOPA effectively reduces the data offloading delay while maintaining the same energy harvesting time.
Figure 6 illustrates the delay variation for different AP transmission power values using  α = 3  and N = 6. The results show that the delay of the proposed DOPA is lower compared to that of the non-cooperative method. When  P 0  is very small, the delay of both the non-cooperative method and the DOPA are similar because UNs harvest less energy, resulting in minimal data being forwarded by the relay UNs. As  P 0  increases, the UNs can use the energy harvested to forward more information. Consequently, the delay gap between the DOPA and the non-cooperative method gradually increases. As AP transmission power  P 0  continues to rise, the DOPA allows the UNs to offload more data with enhanced energy channel gains, thereby improving the overall delay performance of the system.
Figure 7 shows the delay under different path attenuation coefficients with  P 0 = 30   dBm .  As the path attenuation coefficient decreases, the gap in the delay between the DOPA and non-cooperative method increases. Lower path attenuation improves the channel quality for both energy and information transmission. With better channel quality, the DOPA can utilize more of the available energy, leading to a greater reduction in delay compared to that of the non-cooperative method. As the path attenuation coefficient increases, the delay performance of both decreases. This occurs because as the channel quality deteriorates, the UNs are unable to harvest sufficient energy to both receive and forward information effectively. With reduced energy harvesting, the UNs can only sustain the transmission of their own information and are unable to effectively participate in cooperative data forwarding.
Changes in the delay are observed for different numbers of UNs. As shown in Figure 8, both the DOPA and non-cooperative method exhibit increases in energy harvesting time and system delays when there is a higher number of UNs. As the number of feasible links formed between UNs increases, the UNs can select the most appropriate relay node for cooperative information transmission. Consequently, the delay gap between the cooperative and non-cooperative modes becomes more pronounced with an increasing number of UNs. Therefore, with a large number of UNs, the DOPA proves to be more effective in reducing latency.

5.2.2. DLOO Performance Analysis

This section details simulations conducted to evaluate the performance of the proposed DLOO framework. The DLOO framework is implemented in Python 3.7.1 with TensorFlow 1.0.0 on a desktop with an Intel Core i5-4590 3.3 GHz CPU and 16 GB of memory. In the proposed DLOO framework, a DBN consisting of one input layer, two hidden layers, and one output layer is employed. The first hidden layer contains 20 neurons, and the second hidden layer comprises 10 neurons. The input to the network is a set of channel coefficients, with the input size varying according to the channel state information. With N = 6, P0 = 30 dBm, and α = 3, the number of input neurons is 36, and the output is the UN transmission power, represented by 6 neurons.
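The layer sizes given above (36 channel coefficients in, hidden layers of 20 and 10 neurons, 6 transmit powers out) can be sketched as a simple forward pass. This is a minimal NumPy illustration with random weights, not the trained DBN from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Layer sizes from the text: 36 inputs, 20 and 10 hidden neurons,
# 6 outputs (one transmission power per UN).
sizes = [36, 20, 10, 6]
weights = [rng.standard_normal((m, n)) * 0.1 for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def predict_power(channels):
    """Forward pass: channel coefficients -> per-UN transmission powers."""
    a = channels
    for w, b in zip(weights, biases):
        a = sigmoid(a @ w + b)   # sigmoid keeps each output in (0, 1) W
    return a

powers = predict_power(rng.random(36))
print(powers.shape)   # (6,)
```

The sigmoid output range matches the power values reported in Table 2, which all fall between 0 and 1 W.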
The DLOO framework predicts the transmission power using a DBN trained offline, which reduces the complexity of solving the optimization problem under new channel conditions. First, different UN distribution scenarios are simulated, and the existing model is used to generate training data; the dataset is further expanded by solving problem instances with varying numbers of deployed UNs. Each training sample is obtained by solving the optimization problem using Algorithm 4.
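The offline data-generation stage can be sketched as follows. Here `dopa_solve` is a hypothetical placeholder standing in for Algorithm 4 (whose MILP solution is not reproduced here); each sample pairs one random channel realization with the corresponding power labels:

```python
import numpy as np

rng = np.random.default_rng(1)

def dopa_solve(channels):
    """Placeholder for Algorithm 4 (the DOPA). A real implementation
    would solve the MILP in the paper; here we return dummy labels
    purely to illustrate the dataset layout."""
    return np.clip(channels[:6], 0.0, 1.0)

def build_dataset(num_samples):
    """Pair random channel realizations (inputs) with DOPA solutions (labels)."""
    X, y = [], []
    for _ in range(num_samples):
        channels = rng.random(36)     # one random six-UN deployment
        X.append(channels)
        y.append(dopa_solve(channels))
    return np.array(X), np.array(y)

X, y = build_dataset(100)
print(X.shape, y.shape)   # (100, 36) (100, 6)
```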
The experiment simulates a random distribution of six UNs, and the DLOO framework is used for power prediction. The experiment is conducted five times, and the learning results are compared with the actual results obtained by the DOPA. As can be seen from Table 2, the prediction error of the DLOO relative to the DOPA is within approximately 3%, demonstrating that the DLOO framework can accurately predict each UN's transmission power. Therefore, the DLOO framework can predict each UN's transmission power and link selection in WPMEC, eliminating redundant links, reducing the complexity of the network model and thus the latency, and satisfying the real-time requirements of edge computing.
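As a quick check of this claim, the mean relative deviation of the DLOO predictions from the DOPA results can be computed directly from the UN 0 entries of Table 2:

```python
# Per-trial transmission powers for UN 0, taken from Table 2 (W).
dopa = [0.312, 0.918, 0.756, 0.502, 0.736]
dloo = [0.319, 0.907, 0.723, 0.513, 0.699]

# Relative deviation of each DLOO prediction from the DOPA result.
rel_err = [abs(p - q) / q for p, q in zip(dloo, dopa)]
mean_err = sum(rel_err) / len(rel_err)
print(f"mean relative error: {mean_err:.1%}")   # ≈ 3.0%
```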
Figure 9 compares the computation time of the DLOO with that of the DOPA over 60 different UN deployments. The calculation time of the DLOO framework is essentially stable at 0.001 s, which is well within the range acceptable to the UNs, whereas the calculation time of the DOPA is between 0.5 and 0.6 s, which may not be satisfactory for UNs that must handle urgent tasks. The DLOO predicts each UN's transmission power and thereby directly converts the original problem into a linear programming problem; thus, the calculation time is substantially reduced compared with that of the DOPA.
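Why fixing the powers removes most of the complexity can be seen in a toy sketch: once each UN's power P is constant, each link rate is constant, and the remaining time allocation is linear in the data volumes. The TDMA-style schedule and all numerical values below are illustrative assumptions, not the paper's model:

```python
import math

# Illustrative, assumed values (not from the paper).
W = 1e6                    # bandwidth (Hz)
eta = 1e-9                 # noise power (W)
g = [1e-3, 5e-4, 2e-4]     # uplink gains of three UNs
P = [0.3, 0.5, 0.8]        # powers predicted by the DLOO (W)
D = [2e6, 1e6, 3e6]        # bits each UN must offload

# With P fixed, each Shannon rate is a constant ...
rates = [W * math.log2(1.0 + p * gi / eta) for p, gi in zip(P, g)]
# ... so the minimal slot lengths in a TDMA schedule are linear in D,
# and the total offloading time is just their sum.
t = [d / r for d, r in zip(D, rates)]
print(sum(t))   # total offloading time (s)
```

With the integer link-selection variables resolved and the powers fixed, only such linear relations remain, which is why the residual problem can be solved in milliseconds.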

6. Conclusions

This paper proposed a delay-optimal approximation algorithm (DOPA) based on the UN cooperation method to solve the problem of the “near–far” effect and to improve the delay performance in WPMEC. The DOPA made it possible to offload UN tasks to the edge server for execution, returning the completed tasks as soon as possible, thereby enhancing the UN service experience, particularly for UNs located farther from the server. The computational complexity of the DOPA was relatively high because its solution process involved solving a mixed-integer programming problem that combines time and power allocation. To reduce the computation time of the DOPA, a DLOO framework was proposed, which could provide an efficient solution for predicting the transmission power values of the UNs. Simulation results showed that the proposed scheme outperformed the non-cooperation method, regardless of channel conditions or network size. Furthermore, the DLOO framework achieved high-precision predictions, while effectively reducing the computation time overhead of the DOPA.
At present, the DOPA is limited to providing delay optimization solutions for single-AP scenarios and cannot be applied to multi-AP scenarios, in which the collaboration between APs to optimize energy harvesting and system delay becomes more complex. Additionally, generating training data with the DOPA in large-scale networks requires a significant amount of computing time, leading to a prolonged training phase for the DLOO framework. Therefore, in future work, we will optimize the DOPA to reduce its computational time overhead in large-scale scenarios, and we will investigate delay optimization methods for multi-AP scenarios based on the DOPA. In these scenarios, we will also explore delay optimization with local computation, balancing the data volumes between local computation and remote offloading; if the communication energy cost is too high, tasks could be executed locally.

Author Contributions

M.L.: conceptualization, methodology, supervision, writing—original draft, and writing—review and editing; Z.F.: conceptualization, writing—original draft, methodology, investigation, formal analysis; B.Y.: formal analysis and investigation. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the National Natural Science Foundation of China (62102239), the Xi’an Science and Technology Plan Project (22GXFW0023), and the Natural Science Basic Research Program of Shaanxi (2021JQ-314).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
DOPA: Delay-optimal approximation algorithm
MEC: Mobile edge computing
WPMEC: Wireless powered mobile edge computing
UNs: User nodes
DLOO: Deep learning-based online offloading
MILP: Mixed-integer linear programming
MINLP: Mixed-integer nonlinear programming
AP: Access point
PWL: Piecewise linear
WPT: Wireless power transfer
TDMA: Time-division multiple access
OFDMA: Orthogonal frequency-division multiple access
NOMA: Non-orthogonal multiple access
SNR: Signal-to-noise ratio
DL: Deep learning
DBN: Deep belief network

Figure 1. A WPMEC system using UN cooperation.
Figure 2. Harvest-then-offload transmission protocol.
Figure 3. Main steps for solving the problem.
Figure 4. Linearization of ln function at qth interval of SNRij.
Figure 5. Routing results for two methods: (a) with cooperation of UNs and (b) without cooperation of UNs.
Figure 6. Delay variation for different AP transmission power values.
Figure 7. Delay variation under different path attenuation coefficients.
Figure 8. Delay variation with different numbers of UNs.
Figure 9. Comparison of the model solution times of the DLOO and DOPA.
Table 1. List of model notations.

Ii: Data generated by UN i
t0: Downlink energy harvesting time
Lij: Link formed between UN i and UN j
Bij: Status of the feasible links during the uplink transmission
gij, hij: Uplink and downlink transmission power gains of link Lij
Pc: Power loss caused by the extra electronic device
P0: Transmission power of the AP
Pi: Transmission power of UN i
ζ: Energy conversion efficiency
Eelec: Energy required to receive 1 bit of information
rui: Information received by UN i through links Lui
W: Bandwidth
η: Noise power
Toff: Data offloading time from UNs to the AP
IA: Data received by the AP
tij: Active time of link Lij
Table 2. The DLOO framework and the DOPA for power control. All values are the transmission power of each UN in W.

UN  Times  DOPA   DLOO
0   1      0.312  0.319
0   2      0.918  0.907
0   3      0.756  0.723
0   4      0.502  0.513
0   5      0.736  0.699
1   1      0.246  0.199
1   2      0.342  0.357
1   3      0.543  0.558
1   4      0.391  0.387
1   5      0.685  0.671
2   1      0.872  0.896
2   2      0.537  0.551
2   3      0.390  0.412
2   4      0.918  0.927
2   5      0.771  0.759
3   1      0.897  0.879
3   2      0.921  0.924
3   3      0.917  0.926
3   4      0.791  0.788
3   5      0.472  0.448
4   1      0.502  0.499
4   2      0.901  0.922
4   3      0.916  0.889
4   4      0.428  0.409
4   5      0.056  0.102
5   1      0.781  0.782
5   2      0.933  0.935
5   3      0.502  0.519
5   4      0.102  0.101
5   5      0.632  0.621
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Lei, M.; Fu, Z.; Yu, B. Delay Optimization for Wireless Powered Mobile Edge Computing with Computation Offloading via Deep Learning. Appl. Sci. 2024, 14, 7190. https://doi.org/10.3390/app14167190
