Article

A Time-Driven Cloudlet Placement Strategy for Workflow Applications in Wireless Metropolitan Area Networks

1 College of Computer and Data Science, Fuzhou University, Fuzhou 350108, China
2 Fujian Provincial Key Laboratory of Network Computing and Intelligent Information Processing, Fuzhou University, Fuzhou 350108, China
3 Department of Computer Science and Information Engineering, Asia University, Taichung 413, Taiwan
* Author to whom correspondence should be addressed.
Sensors 2022, 22(9), 3422; https://doi.org/10.3390/s22093422
Submission received: 15 March 2022 / Revised: 23 April 2022 / Accepted: 26 April 2022 / Published: 29 April 2022

Abstract

With the rapid development of mobile technology, mobile applications have increasing requirements for computational resources, and mobile devices can no longer meet these requirements. Mobile edge computing (MEC) has emerged in this context and has brought innovation to the working mode of traditional cloud computing. By placing edge servers near users, the computing power of the cloud center is distributed to the edge of the network. The abundant computational resources of edge servers compensate for the limitations of mobile devices and shorten the communication delay between servers and users. As a specific form of edge server, cloudlets have been widely studied in academia and industry in recent years. However, existing studies have mainly focused on computation offloading for general computing tasks under fixed cloudlet placement positions; they ignore the impact of the cloudlet placement positions and of the data dependencies among mobile application components on the offloading results. In this paper, we study the cloudlet placement problem for workflow applications (WAs) in wireless metropolitan area networks (WMANs). We devise a cloudlet placement strategy based on a particle swarm optimization algorithm using genetic algorithm operators with an encoding library updating mode (PGEL), which places the cloudlets in appropriate positions. The simulation results show that the proposed strategy can obtain a near-optimal cloudlet placement scheme. Compared with other classic algorithms, it can reduce the execution time of WAs by 15.04–44.99%.

1. Introduction

With the emergence of more advanced mobile hardware and a new generation of mobile communication technology that integrates wireless communication and modern Internet technologies, many delay-sensitive and computation-intensive mobile applications, such as voice recognition, online games, webcasts, and augmented reality, have appeared on mobile devices. These applications pose new challenges to mobile devices. Although mobile hardware has made great progress, the restrictions of mobile devices in size, weight, battery life, and heat dissipation mean that the gap between their limited computational resources and the requirements of computation-intensive applications keeps widening [1]. Mobile cloud computing (MCC) is a computing paradigm that integrates cloud computing and mobile computing to enhance the computation performance of mobile devices [2]. By offloading a portion or all of the components of a mobile application to a cloud server, the performance of the application can be significantly improved [3]. However, modern mobile applications generally have strict instantaneity requirements, while the traditional central cloud is usually remotely located and distant from users, so offloading to the central cloud may not be optimal for them. To overcome these disadvantages, mobile edge computing (MEC) is an efficient solution that can better compensate for the insufficient resources of mobile devices. As a new computing paradigm proposed after MCC, MEC has brought innovation to the working mode of traditional cloud computing. When edge servers are placed at the edge of the network, the computing power of the central cloud is distributed to the network edge, and the rich computational resources of the edge servers compensate for the limitations of mobile devices. In particular, cloudlets, a specific form of edge server, have been widely studied by academia and industry in recent years. The problem of cloudlet placement is the key to the efficient utilization of edge resources in the network [4,5,6]. A cloudlet is composed of computer clusters with abundant computational and storage resources; it resides at an access point (AP) in the network to provide edge computing services for mobile devices. Mobile applications enter the network through the APs and are ultimately offloaded to the cloudlets in the network for processing. Compared with the traditional central cloud, cloudlets are spatially much closer to mobile devices, which increases the stability and speed of the network connection between them while reducing the time that mobile devices spend obtaining additional computational resources.
Existing research has mainly focused on the problem of computation offloading of general tasks under fixed cloudlet placement positions [7,8,9,10]. In such studies, the cloudlet placement positions are given directly as part of the network environment, ignoring the impact of the cloudlet placement positions and of the data dependence among mobile application components on the offloading results. In fact, the cloudlet placement positions in the network are critical to the execution time of the computing tasks offloaded from mobile devices and to the resource utilization of the cloudlets, especially in large-scale wireless metropolitan area networks (WMANs) with thousands of APs. A WMAN provides network services for mobile users in a large metropolitan area. Such networks are usually public infrastructure operated by the local government [11], which brings the following benefits: (1) the metropolitan area covered by the WMAN has a high population density, so cloudlets can provide computing services for more mobile devices, improving cloudlet utilization; (2) given the large scale of WMANs, service providers can take advantage of economies of scale to reduce the price of using cloudlets, making cloudlet services more easily accepted by the public. However, placing cloudlets correctly in a WMAN is a major challenge, and the placement positions are highly important for the user experience of mobile applications: if cloudlets are placed poorly, computation offloading cannot effectively reduce the execution time of mobile applications, which negatively affects the user experience. This paper focuses on solving the K cloudlet placement problem in large-scale WMANs and fully considers the data transmission time and execution time of the sub-tasks included in workflow applications (WAs). The objective is to find a cloudlet placement scheme that minimizes the execution time of the WAs. The main contributions of this paper are as follows:
We propose an abstract model of a cloudlet placement system in the WMAN. In this system model, mobile applications are WAs with complex internal dependencies. With the objective of minimizing the execution time of WAs, a detailed mathematical analysis and modeling of the K cloudlet placement problem in a WMAN is performed.
The particle encoding and location update mode processes in the traditional particle swarm optimization (PSO) algorithm are improved, and the cloudlet placement strategy based on the PSO algorithm using genetic algorithm (GA) operators with the encoding library updating mode (PGEL) is proposed. By introducing the update operator of the GA and the encoding library update mode in the particle update process, this strategy solves the problems of easily falling into local optima and redundant operations during the particle update process that are encountered in the traditional PSO algorithm.
To verify the effectiveness and superiority of the proposed strategy, we conducted sufficient simulation experiments. The experimental results show that the proposed strategy not only obtains a near-optimal cloudlet placement scheme in typical WMANs, but also maintains excellent performance when the WMAN changes. This approach also has a major advantage when evaluated against several classic algorithms.
The rest of the paper is organized as follows. In Section 2, we discuss works related to this paper. In Section 3, we describe the system models and define the problem to be studied. In Section 4, we introduce the cloudlet placement strategy based on PGEL. In Section 5, we present our experimental setup, evaluation, and analysis of the results. Finally, Section 6 concludes the paper.

2. Related Work

Limited by hardware technology, battery life, and other factors, mobile devices have limited computational resources and often need the help of remote servers to complete computing tasks efficiently. Research on related problems has also attracted the attention of academia and industry [7,12,13]. During execution, a computing task is encapsulated in a virtual machine and offloaded to a remote server [14]. Due to the mature development of cloud service technology, the destination of most computation offloading is the remote cloud [7,15,16]. However, driven by its many advantages, the cloudlet has gradually replaced cloud servers as the offloading destination in many scenarios. Compared with a cloud server far away from the user, a cloudlet with certain computational resources in the network has a smaller spatial distance from the user, which greatly reduces the transmission delay involved in the task offloading process and effectively improves the user experience of delay-sensitive applications. For example, the Odessa [17] system is designed to support interactive mobile applications. By offloading some application components to the cloudlet instead of the central cloud, the data transmission time and the execution time of computing tasks are reduced so that the application can meet strict response time requirements. Taking into account the quality of service (QoS) requirements of mobile users, Hoang et al. [18] proposed a linear programming solution that offloads computing tasks to an appropriate cloudlet to maximize the revenue of service providers. In [19], the authors proposed a novel MEC-based mobility-aware offloading model to solve the intra-cloudlet offloading scheduling issue and the inter-cloudlet load-aware heterogeneous resource allocation issue while considering offloading execution efficiency, task processing time constraints, and energy efficiency. Chen et al. [20] proposed an innovative framework that uses distributed decision making and effectively achieves cooperative load balancing among multiple edges based on reinforcement learning in Industrial IoT environments.
Although the study of cloudlets as a computation offloading destination has received much attention, the impact of the cloudlet placement location on the offloading results is often ignored. Some existing studies assume that cloudlets are used in small network environments such as campuses, companies, and factory parks [21,22,23]. In such network environments, the spatial distance between cloudlets and mobile devices is minimal, so the cloudlet placement has little impact on network efficiency. However, cloudlet placement in a WMAN consisting of thousands of APs becomes extremely important and complex. To improve the execution efficiency of mobile applications, it is imperative to optimize the cloudlet placement location, and many related studies have focused on the cloudlet placement problem in large-scale networks. For example, some authors reassigned mobile users while placing the cloudlets so as to balance the workload of each cloudlet, thus minimizing the response time of the system. Bhatta et al. [24] formulated the cloudlet placement problem as a multi-objective integer programming model and showed that it is computationally NP-hard; they then proposed a bifactor approximation cloudlet placement (ACP) algorithm to tackle its intractability. In [25], a dynamic cloudlet placement method based on a clustering algorithm (DCDM-CA) was proposed to solve the problem of deploying mobile cloudlets for mobile applications; after determining the placement location of the cloudlets, the authors also optimized the computation offloading to minimize the system response latency. In [26], the authors proposed an application development method for the Internet of Things (IoT) based on a runtime model; they were the first to manage various IoT devices through a runtime-software-based architecture. Guo et al. [27] formulated the edge cloud placement problem as a multi-objective optimization problem to balance the workload between edge clouds and minimize the service communication delay of mobile users. In [28], the authors studied the cloudlet deployment problem to optimize deployment cost and network latency; once the cloudlets had been deployed in the network, they proposed a fault-tolerant cloudlet deployment scheme to ensure acceptable QoS. Zhu et al. [29] investigated a joint cloudlet deployment and task offloading problem with the objectives of minimizing the energy consumption, the task response delay of users, and the number of deployed cloudlets; after formulating this problem as a mixed-integer nonlinear program and proving its NP-completeness, they proposed a modified guided population archive whale optimization algorithm to solve it. In [30], the placement problem of edge servers in the Internet of Vehicles (IoV) was studied, and a six-objective edge server deployment optimization model was constructed that simultaneously considers transmission delay, workload balancing, energy consumption, deployment cost, network reliability, and the number of edge servers. In [31], the authors utilized particle swarm optimization (PSO) to reallocate the virtual machines (VMs) in overloaded physical machines (PMs) and to consolidate underloaded PMs for energy savings. In [32], the authors utilized PSO to allocate more kinds of resources and to consolidate VMs across multiple cloud data centers.
Tseng et al. [33] formulated a multiobjective resource allocation optimization problem that considers the CPU and memory utilization of VMs and PMs and the energy consumption of the data center; they proposed a multiobjective genetic algorithm (GA) to dynamically forecast resource utilization and energy consumption. In [34], the authors defined the network-aware VM placement optimization (NAVMPO) problem based on integer linear programming and proposed the service-oriented physical machine selection (SOPMS) algorithm and the link-aware VM placement (LAVMP) algorithm to solve it. The methods and frameworks proposed in the above work are oriented toward simple task scenarios: they do not consider the complex dependencies within computing tasks, which limits the further division of those tasks. A comparative analysis of the previous work is given in Table 1. In fact, a fine-grained division of computing tasks can effectively improve the utilization efficiency of computational resources.

3. System Model and Problem Formulation

In this section, we first describe the WMAN and WA system models. We then define the key time points in the execution process of WAs. Finally, we define the problem precisely. For ease of reference, we list the key notations of our system model in Table 2.

3.1. System Model

3.1.1. WMAN Model

As shown in Figure 1, we consider a WMAN composed of several APs. The APs are connected through a wired network to form a connected graph structure. It is assumed that each AP in the WMAN receives a WA in the considered time slot, and these WAs are jointly executed by several cloudlets placed on the APs. The cloudlet placement scheme and the offloading strategy of the WAs will affect the execution time of the WAs. A WMAN can be represented by a graph $G = (V, E)$, where each node in $V = \{v_1, v_2, \ldots, v_n\}$ represents an AP in the WMAN. $E$ is the set of edges between APs, and each edge $(v_i, v_j) \in E$ is weighted by the unit data transmission delay between $v_i$ and $v_j$. In particular, when two APs are not directly connected, the unit data transmission delay is obtained by summing the unit data transmission delays of the edges on the shortest path between them (in a given AP network topology, the shortest path between APs can be obtained by Dijkstra's algorithm). We define a matrix $D \in \mathbb{R}^{n \times n}$, where $D_{i,j} \in D$ represents the unit data transmission delay between $v_i$ and $v_j$.
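To make the precomputation of $D$ concrete, the following Python sketch builds the unit-delay matrix with Dijkstra's algorithm over the AP graph; the function name and data layout are illustrative only and not part of the system model.

import heapq

def unit_delay_matrix(n, edges):
    """Precompute D[i][j]: the unit data transmission delay between APs v_i and v_j.

    n     : number of APs
    edges : dict {(i, j): delay} of undirected wired links between adjacent APs
    """
    adj = [[] for _ in range(n)]
    for (i, j), d in edges.items():
        adj[i].append((j, d))
        adj[j].append((i, d))
    D = [[float("inf")] * n for _ in range(n)]
    for src in range(n):
        dist = [float("inf")] * n            # Dijkstra from src over per-edge delays
        dist[src] = 0.0
        pq = [(0.0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue
            for v, w in adj[u]:
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(pq, (dist[v], v))
        D[src] = dist
    return D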

3.1.2. WA Model

In a WMAN, the WA received by an AP consists of several sub-tasks with data dependencies between them. The set of WAs received by all APs is denoted as $G^W = \{G_1^W, G_2^W, \ldots, G_n^W\}$, and each WA can be represented by a directed acyclic graph $G_i^W = (L_i, E_i)$. As shown in Figure 2, $L_i = \{l_i^1, l_i^2, \ldots, l_i^s\}$ represents the $s$ sub-tasks included in $G_i^W$, and the computational requirement of $l_i^j$ is denoted by $\theta_i^j$. $E_i = \{e_i^{j,k} \mid j, k \in \{1, \ldots, s\}\}$ captures the data dependence between the sub-tasks of $G_i^W$: when $e_i^{j,k} > 0$, there is a data dependency relationship between $l_i^j$ and $l_i^k$ (with $e_i^{j,k}$ the amount of dependent data), and when $e_i^{j,k} = 0$, there is no data dependency relationship between them.
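For illustration only, a WA $G_i^W = (L_i, E_i)$ could be held in a small container such as the Python sketch below; the class and field names are ours and simply mirror the notation $\theta_i^j$ and $e_i^{j,k}$.

from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class WorkflowApp:
    """A WA G_i^W = (L_i, E_i) received by one AP (illustrative container only)."""
    theta: Dict[int, float]               # theta_i^j: computational requirement of sub-task j
    edges: Dict[Tuple[int, int], float]   # e_i^{j,k} > 0: data that sub-task j sends to sub-task k

    def predecessors(self, k):
        """Sub-tasks whose output sub-task k depends on."""
        return [j for (j, kk), e in self.edges.items() if kk == k and e > 0]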

3.1.3. Cloudlet Placement and Sub-Task Offloading

We use a set $C = \{c_1, c_2, \ldots, c_K\}$ to represent the $K$ cloudlets to be placed in a WMAN, where the computing power of cloudlet $c_i$ is denoted by $\eta_i$. The $K$ cloudlet placement scheme in the WMAN can be defined as $\omega = \{\omega_{i,j} \mid i \in \{1, \ldots, K\},\ j \in \{1, \ldots, n\}\}$. When cloudlet $c_i$ is placed at AP $v_j$, $\omega_{i,j} = 1$; otherwise, $\omega_{i,j} = 0$.
Because our work focuses on cloudlet placement in a WMAN, we do not optimize the sub-task offloading strategy itself. Once the cloudlet placement positions are determined, we divide the whole WMAN evenly into several areas. In each area, a greedy offloading algorithm similar to that in [35] is used to offload the sub-tasks; that is, the WAs received in each area are jointly executed by the cloudlets placed in that area. The execution order of the sub-tasks included in the same WA is determined by the breadth-first traversal (BFT) of the graph [7]. When offloading a sub-task, the greedy algorithm minimizes its total time cost, namely the sum of its execution time and transmission time: according to the execution locations of its precursor sub-tasks, the sub-task is offloaded to the most suitable cloudlet. The resulting global sub-task offloading strategy is denoted as $M = \bigcup_{i=1}^{|G^W|} \{(l_i^j, c_k) \mid l_i^j \in L_i,\ c_k \in C\}$.
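The greedy offloading step can be pictured with the simplified Python sketch below, which reuses the WorkflowApp container and the unit-delay matrix from the earlier sketches. It visits sub-tasks in BFT order and assigns each one to the cloudlet in the area that minimizes its finish time (execution plus dependent-data transmission). This is only a rough approximation of the algorithm in [35]: it ignores, for example, queuing when several sub-tasks compete for the same cloudlet.

from collections import deque

def bft_order(wa):
    """Breadth-first traversal (BFT) of the WA DAG, starting from sub-tasks with no predecessors."""
    indeg = {j: len(wa.predecessors(j)) for j in wa.theta}
    queue = deque(j for j, d in indeg.items() if d == 0)
    order = []
    while queue:
        j = queue.popleft()
        order.append(j)
        for (p, k), e in wa.edges.items():
            if p == j and e > 0:
                indeg[k] -= 1
                if indeg[k] == 0:
                    queue.append(k)
    return order

def greedy_offload(wa, cloudlets, placement, D):
    """Assign each sub-task to the cloudlet that minimizes its finish time.

    cloudlets : {c: eta_c} computing power of each cloudlet in the area
    placement : {c: ap}    AP at which each cloudlet is placed
    D         : unit-delay matrix between APs (see Section 3.1.1)
    """
    assign, end_time = {}, {}
    for j in bft_order(wa):
        best = None
        for c, eta in cloudlets.items():
            exe = wa.theta[j] / eta
            # Earliest start: every predecessor must finish and ship its data to c.
            start = max((end_time[p] + wa.edges[(p, j)] * D[placement[assign[p]]][placement[c]]
                         for p in wa.predecessors(j)), default=0.0)
            if best is None or start + exe < best[0]:
                best = (start + exe, c)
        end_time[j], assign[j] = best
    return assign, end_time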

3.1.4. WA Execution Time

Given the cloudlet placement scheme $\omega$ and sub-task offloading strategy $M$, the following key time points are defined.
The start execution time of the sub-task. The start time of sub-task $l_i^j$ on cloudlet $c_k$ is determined by the execution end time and the dependent data transmission time of all its predecessor sub-tasks. It is quantified as
$t_{start}(l_i^j, c_k) = \max_{e_i^{p,j} > 0,\ e_i^{p,j} \in E_i} \left\{ t_{end}(l_i^p, c_w) + t_{trs}(l_i^p, l_i^j, c_w, c_k) \right\}, \quad \forall (l_i^j, c_k), (l_i^p, c_w) \in M,$  (1)
where $t_{end}(l_i^p, c_w)$ and $t_{trs}(l_i^p, l_i^j, c_w, c_k)$ are the execution end time of sub-task $l_i^p$ and the dependent data transmission time between sub-tasks $l_i^p$ and $l_i^j$, respectively. Their calculation equations will be given in Equations (4) and (3), respectively.
The execution time of the sub-task. The execution time of sub-task $l_i^j$ is determined by its computational requirement and the computing power of the cloudlet executing it. It is quantified as
$t_{exe}(l_i^j, c_k) = \theta_i^j / \eta_k, \quad (l_i^j, c_k) \in M.$  (2)
The dependent data transmission time between the sub-tasks. The dependent data transmission time between sub-tasks $l_i^p$ and $l_i^j$ is determined by the size of the dependent data between the two sub-tasks and the unit data transmission delay between the APs at which the corresponding cloudlets are placed. It is quantified as
$t_{trs}(l_i^p, l_i^j, c_w, c_k) = e_i^{p,j} \cdot D_{w',k'}, \quad (l_i^j, c_k), (l_i^p, c_w) \in M,\ \omega_{w,w'} = \omega_{k,k'} = 1,$  (3)
where $v_{w'}$ and $v_{k'}$ are the APs at which cloudlets $c_w$ and $c_k$ are placed.
The execution end time of the sub-task. The execution end time of sub-task $l_i^j$ on cloudlet $c_k$ is determined by its start execution time and its execution time on cloudlet $c_k$. It is quantified as
$t_{end}(l_i^j, c_k) = t_{start}(l_i^j, c_k) + t_{exe}(l_i^j, c_k), \quad (l_i^j, c_k) \in M.$  (4)
The execution end time of the WA. The execution end time of WA $G_i^W$ is the maximum of the execution end times of its sub-tasks. It is quantified as
$t_{end}(G_i^W) = \max_{l_i^j \in L_i} \{ t_{end}(l_i^j, c_k) \}, \quad (l_i^j, c_k) \in M.$  (5)
The execution end time of all WAs. The end time of all WAs in the WMAN is the maximum of the execution end times of all the WAs. It is quantified as
$t_{end}(G^W) = \max_{G_i^W \in G^W} \{ t_{end}(G_i^W) \}.$  (6)
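As a toy illustration of Equations (1)–(6) (all numbers are invented for this example only): suppose a WA has three sub-tasks with dependencies $l^1 \to l^3$ and $l^2 \to l^3$, computational requirements $\theta^1 = 4$, $\theta^2 = 6$, $\theta^3 = 2$ giga-cycles, dependent data $e^{1,3} = 1$ and $e^{2,3} = 2$ units, and $M$ places $l^1$ and $l^3$ on cloudlet $c_1$ ($\eta_1 = 2$ GHz) and $l^2$ on $c_2$ ($\eta_2 = 3$ GHz), with a unit delay of 0.5 s between the two hosting APs. Then $t_{exe}(l^1, c_1) = 4/2 = 2$ and $t_{exe}(l^2, c_2) = 6/3 = 2$; both entry sub-tasks start at time 0 and end at time 2. Sub-task $l^3$ must wait for its predecessors' data: the transfer from $l^1$ costs $1 \times 0 = 0$ (same cloudlet) and the transfer from $l^2$ costs $2 \times 0.5 = 1$, so $t_{start}(l^3, c_1) = \max\{2 + 0, 2 + 1\} = 3$ and $t_{end}(l^3, c_1) = 3 + 2/2 = 4$; by Equations (5) and (6), the WA (and, if it is the only one, the whole workload) finishes at time 4.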

3.2. Problem Formulation

The K cloudlet placement based on the WA problem (KCPWP) in a WMAN is defined as follows. Given an integer K > 1 and system model parameters ( G , D , G W ) , the problem is to find ω such that the execution time of WAs in Equation (6) is minimized, i.e.,
$\min_{\omega} \ t_{end}(G^W)$  (7)
s.t. $\sum_{j=1}^{n} \omega_{i,j} = 1, \ \forall i \in \{1, \ldots, K\}$  (C1)
$\sum_{i=1}^{K} \omega_{i,j} \le 1, \ \forall j \in \{1, \ldots, n\}$  (C2)
Constraint (C1) indicates that each cloudlet to be placed has only one placement position, and Constraint (C2) indicates that each AP has at most one cloudlet to be placed.

4. Cloudlet Placement Strategy Based on PGEL

In this section, we first introduce the traditional PSO algorithm and then introduce the cloudlet placement strategy based on PGEL in detail.

4.1. Traditional PSO

The PSO algorithm is a population-based stochastic optimization technique proposed by Eberhart and Kennedy in 1995 [36]. In nature, animals that belong to the same population cooperate with each other in certain ways, and each member changes its behavior by learning from its own experience and that of others. The PSO algorithm solves optimization problems by imitating this clustering behavior. In the PSO algorithm, a particle represents a candidate solution of the optimization problem, and all particles can move in the whole solution space. In each search step, a particle moves at a certain velocity, which is affected by three factors: the current situation of the particle, the best position the particle itself has found, and the historical best position found by the whole swarm. Although the traditional PSO algorithm has good robustness and converges easily, it is also prone to premature convergence and to becoming trapped in local optima. In this paper, by introducing the crossover and mutation operators of the GA into the traditional PSO algorithm, the PGEL cloudlet placement algorithm is proposed to compensate for these defects, strengthen the search ability, and better solve the K cloudlet placement problem in WMANs.

4.2. PGEL

4.2.1. Problem Encoding

To solve the KCPWP in WMANs, we use the cloudlet position sequence encoding strategy to encode the particles. Each particle in the particle swarm composed of Ω particles represents a placement scheme of K cloudlets in a WMAN. The state of the i-th particle in the t-th iteration is as follows:
$P_i^t = (p_{i,1}^t, p_{i,2}^t, \ldots, p_{i,K}^t),$  (8)
where $p_{i,j}^t \in \{1, 2, \ldots, n\}$ represents the placement position of the j-th cloudlet mapped by particle $i$ in the t-th iteration.

4.2.2. Fitness Function

To judge the cloudlet placement scheme corresponding to each particle, a fitness function is introduced. Our purpose is to obtain a cloudlet placement scheme that can minimize the execution time of WAs in a WMAN. Therefore, the particle with a smaller execution time corresponding to the mapped cloudlet placement scheme can be simply regarded as a better particle. The fitness function of particle P i t can be defined as
$fitness(P_i^t) = Time(P_i^t),$  (9)
where $Time(P_i^t)$ represents the execution time of the WAs calculated by Equation (6) when the placement scheme corresponding to $P_i^t$ is adopted. The particle with a smaller fitness obviously corresponds to the cloudlet placement scheme with a shorter execution time for the WAs.
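Putting the encoding and the fitness function together, a particle can be represented as in the brief Python sketch below; here evaluate_makespan is an assumed placeholder for any routine that applies the offloading and time model of Section 3 (Equations (1)–(6)) to the mapped placement scheme.

import random

def init_particle(K, n):
    """Encode one particle: K distinct AP indices, one placement position per cloudlet,
    which automatically satisfies constraints (C1) and (C2)."""
    return random.sample(range(1, n + 1), K)

def fitness(particle, evaluate_makespan):
    """fitness(P_i^t) = Time(P_i^t): the WA execution time under the mapped placement."""
    return evaluate_makespan(particle)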

4.2.3. Update Strategy

In our previous work [37], the update strategy of the traditional PSO algorithm was introduced in detail. The PSO includes three core parts: inertia, personal cognition, and social cognition. In the iterative process of the algorithm, the update of each particle is affected by its personal optimal position and the current global optimal position. To avoid the tendency of the traditional PSO algorithm falling into the local optimum prematurely and to enhance the search ability of the algorithm, we introduce the crossover operator in the personal cognitive and social cognitive domains and the mutation operator in the inertia part to compensate for the defects of the traditional PSO algorithm. The update strategy of particle i in the ( t + 1 ) -th iteration is as follows:
$P_i^{t+1} = SC\big(PC\big(IT(P_i^t, w^{t+1}, \mu),\ pBest_i^t,\ c_1^{t+1}\big),\ gBest^t,\ c_2^{t+1}\big),$  (10)
where P C ( x , y , z ) and S C ( x , y , z ) are the personal cognition update operation and the social cognition update operation, respectively, and I T ( x , y ) is the inertia update operation.
The crossover operator of the GA is introduced into the personal cognitive update operation and social cognitive update operation. The results of the personal cognition update operation and social cognition update operation are quantified as
$PC(A_i^{t+1}, pBest_i^t, c_1^{t+1}) = \begin{cases} CO(A_i^{t+1}, pBest_i^t), & \text{if } r_1 \le c_1^{t+1} \\ A_i^{t+1}, & \text{if } r_1 > c_1^{t+1} \end{cases}$  (11)
$SC(B_i^{t+1}, gBest^t, c_2^{t+1}) = \begin{cases} CO(B_i^{t+1}, gBest^t), & \text{if } r_2 \le c_2^{t+1} \\ B_i^{t+1}, & \text{if } r_2 > c_2^{t+1} \end{cases}$  (12)
respectively, where $r_1$ and $r_2$ are random numbers between 0 and 1 and $CO(x, y)$ represents the crossover operator of the GA. The crossover operator randomly selects an encoded segment of particle $x$ to be updated and replaces it with the corresponding encoded segment of particle $y$. The crossover operator in the personal (or social) cognitive update operation is shown in Figure 3.
The mutation operator of the GA is introduced into the inertia update operation. The result of the inertia update operation is as follows:
$IT(P_i^t, w^{t+1}, \mu) = \begin{cases} MU(P_i^t, \mu), & \text{if } r_3 \le w^{t+1} \\ P_i^t, & \text{if } r_3 > w^{t+1} \end{cases}$  (13)
where $r_3$ is a random number between 0 and 1 and $MU(x, \mu)$ represents the mutation operator of the GA. The mutation operator randomly selects $\mu$ encodings of particle $x$ and randomly changes their values. The mutation operator in the inertia update operation is shown in Figure 4. When $\mu = 2$, the encodings $ind_1$ and $ind_2$ are selected; that is, the placement positions of the corresponding two cloudlets are changed.
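The two GA operators can be sketched in Python as below; this plain version deliberately ignores encoding conflicts, which is exactly the issue that the two updating modes described next address. Segment boundaries and random choices are illustrative.

import random

def crossover(x, y):
    """CO(x, y): replace a randomly chosen encoded segment of particle x with the
    corresponding segment of particle y (both are lists of AP indices)."""
    a, b = sorted(random.sample(range(len(x)), 2))
    return x[:a] + y[a:b + 1] + x[b + 1:]

def mutate(x, mu, n):
    """MU(x, mu): randomly re-draw mu encodings of particle x from the n candidate APs."""
    x = list(x)
    for idx in random.sample(range(len(x)), mu):
        x[idx] = random.randint(1, n)
    return x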
A. 
Traditional updating mode.
In our previous work [37], we used the traditional updating mode to update the encodings of the particle. The traditional updating mode is as follows:
(1)
Initialization: Randomly encode all encoding bits of the particle and ensure that the encodings are not equal.
(2)
Crossover: If the particle's encoding obtained by the crossover operator conflicts with an original encoding, it violates the constraint that an AP hosts at most one cloudlet, so the encoding must be adjusted according to certain rules (such as the most-equivalent replacement method) to ensure that no two encodings are equal. As shown in Figure 5, the crossed 2 and 4 conflict with the original encoding and need to be adjusted to avoid the conflict.
(3)
Mutation: Similarly, if the particle's encoding obtained by the mutation operator conflicts with an original encoding, the mutated encoding needs to be adjusted to ensure that no two encodings are equal. As shown in Figure 6, the 5 in the original encoding changes to 1, which conflicts with an existing encoding; one possible adjustment is to change the pre-existing 1 to 5.
After the crossover and mutation operators are executed in the traditional updating mode, the encodings in the same particle may conflict and require an additional adjustment process, which adds extra execution time to the algorithm.
B. 
Encoding library updating mode.
In PGEL, we use the encoding library updating mode to prevent encoding conflicts and thereby avoid the extra time caused by the adjustment process. We give each particle its own encoding library and update it in the following way:
(1)
Initialization: When the particles are initialized, the encodings are removed from the encoding library one by one so that they do not conflict.
(2)
Crossover: In the process of executing the crossover operator, first add the replaced encodings of the particle back to its own encoding library, and then check the encodings obtained by the crossover one by one. If an encoding exists in the encoding library, delete it from the library; if it does not exist in the library, the closest encoding in the library is selected to replace it. Since the label of an AP is determined by its spatial location, the AP represented by the closest value is also the closest in the network, so this operation retains the original placement scheme of the particle to the greatest extent.
(3)
Mutation: When the mutation operator needs to be executed, an encoding is randomly selected from the encoding library to replace the original encoding, and then, the original encoding is added to the encoding library.
The encoding library update mode can ensure that the encodings in the particles do not conflict and effectively shorten the execution time of the crossover operator and the mutation operator.
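A minimal sketch of the encoding-library bookkeeping is given below, assuming each particle carries a set named library of currently unused AP indices; the conflict rule picks the numerically closest free AP, mirroring the "closest encoding" rule described above.

import random

def library_crossover(x, y, library):
    """Crossover with the encoding library: replaced encodings return to the library,
    and a conflicting new encoding is swapped for the closest free one."""
    a, b = sorted(random.sample(range(len(x)), 2))
    new = list(x)
    library.update(x[a:b + 1])                 # give the replaced encodings back
    for idx in range(a, b + 1):
        cand = y[idx]
        if cand in library:                    # free: take it out of the library
            library.remove(cand)
            new[idx] = cand
        else:                                  # conflict: use the closest free AP instead
            closest = min(library, key=lambda ap: abs(ap - cand))
            library.remove(closest)
            new[idx] = closest
    return new

def library_mutation(x, idx, library):
    """Mutation with the encoding library: swap one encoding with a random free AP."""
    new = list(x)
    replacement = random.choice(tuple(library))
    library.remove(replacement)
    library.add(new[idx])                      # the original encoding returns to the library
    new[idx] = replacement
    return new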

4.2.4. Mapping from a Particle to a Cloudlet Placement Scheme

The mapping algorithm of the particle-to-cloudlet placement scheme is shown in Algorithm 1.   
Algorithm 1: Mapping a particle to a cloudlet placement scheme
   Input: Particle $P_i^t$.
   Output: Cloudlet placement scheme $\omega$.
1  begin
2    foreach $\omega_{i,j} \in \omega$ do
3      $\omega_{i,j} = 0$;
4    end
5    foreach $p_{i,j}^t \in P_i^t$ do
6      $\omega_{j,\,p_{i,j}^t} = 1$;
7    end
8    return $\omega$
9  end
The input of Algorithm 1 is particle P i t , and the output is cloudlet placement scheme ω . First, cloudlet placement scheme ω is initialized (Lines 2–4); second, the placement position of each cloudlet is determined according to each encoding of the particle (Lines 5–7); lastly, cloudlet placement scheme ω is output (Line 8).

4.2.5. Parameter Settings

The inertia weight factor $w$ determines the convergence and search ability of PGEL, so its setting is very important. According to Equation (13), when $w$ is small, the probability of particle mutation is small and the current state of the particle has a great influence on the update of the next state, so PGEL has a strong local search ability. In contrast, when $w$ is large, the probability of particle mutation is large and the current state of the particle has little influence on the next state update, so PGEL has a strong global search ability. In the early stage of the algorithm, the emphasis should be on the global search ability; as the search deepens, the focus should shift to the local search ability. Therefore, $w$ should evolve with the population. In PGEL, an adaptive inertia weight adjustment strategy is used, which adjusts $w$ according to how good the current particle is. In the t-th iteration, the value of the inertia weight factor is
$w^t = w_{\max} - (w_{\max} - w_{\min}) \times \exp\!\left(\frac{d(P^{t-1})}{d(P^{t-1}) - 1.01\,n}\right),$  (14)
where $w_{\max}$ and $w_{\min}$ are the maximum and minimum values of the inertia weight factor, respectively, and $d(P^{t-1})$ is the number of encoding positions at which particle $P^{t-1}$ differs from the global best particle.
In addition, the personal cognitive factor $c_1$ and the social cognitive factor $c_2$ follow a linearly increasing strategy and a linearly decreasing strategy over the iterations [38], respectively. They are quantified as
$c_1^t = c_1^{\min} + t \times \frac{c_1^{\max} - c_1^{\min}}{t_{\max}},$  (15)
$c_2^t = c_2^{\max} - t \times \frac{c_2^{\max} - c_2^{\min}}{t_{\max}},$  (16)
respectively, where $c_1^{\max}$ and $c_2^{\max}$ are the maximum values of the personal cognitive factor and the social cognitive factor, respectively; $c_1^{\min}$ and $c_2^{\min}$ are the corresponding minimum values; and $t_{\max}$ is the maximum number of iterations of the algorithm.
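The adaptive schedule of Equations (14)–(16) translates directly into code; the sketch below uses our reading of Equation (14) (with d the number of encoding positions at which a particle differs from the global best and n the number of APs) and the default values reported later in Section 5.1.

import math

def inertia_weight(d, n, w_max=0.9, w_min=0.4):
    """Equation (14): particles far from gBest get a large w (global search),
    particles close to gBest get a small w (local search)."""
    return w_max - (w_max - w_min) * math.exp(d / (d - 1.01 * n))

def cognitive_factors(t, t_max, c1_min=0.2, c1_max=0.9, c2_min=0.4, c2_max=0.9):
    """Equations (15) and (16): c1 increases linearly, c2 decreases linearly with t."""
    c1 = c1_min + t * (c1_max - c1_min) / t_max
    c2 = c2_max - t * (c2_max - c2_min) / t_max
    return c1, c2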

4.2.6. Algorithm Flow

The main steps of PGEL are as follows:
Step 1: The parameters of PGEL are initialized, and then, the initial population is generated.
Step 2: According to the mapping between cloudlets and APs, the fitness of each particle is calculated according to Equation (9). The personal best state of each particle is initialized by its initial state, and the particle with the least fitness in the initial particle swarm is set as the global best particle.
Step 3: The particles are updated one by one according to Equation (10), and the fitness of the particles is calculated after updating.
Step 4: If the fitness of the updated particle is less than that of its personal best, it is set as its personal best. Otherwise, go to Step 6.
Step 5: If the fitness of the updated particle is less than that of the global best particle, it is set as the global best particle.
Step 6: Verify that the stop condition is met. If the stop condition is not satisfied, return to Step 3; otherwise, the algorithm will terminate.
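Steps 1–6 correspond to a standard PSO-style main loop; a compressed Python sketch follows, where update_particle stands for Equation (10) (inertia, personal cognition, and social cognition with the GA operators), evaluate for the fitness of Equation (9), and init_particle for the encoding sketched in Section 4.2.1. All three helpers are assumptions and are not shown here in full.

def pgel(n, K, evaluate, update_particle, init_particle, t_max=1000, pop_size=100):
    """Skeleton of the PGEL main loop (Steps 1-6); helper routines are assumed."""
    swarm = [init_particle(K, n) for _ in range(pop_size)]               # Step 1
    p_best = list(swarm)
    p_best_fit = [evaluate(p) for p in swarm]                            # Step 2
    g_idx = min(range(pop_size), key=lambda i: p_best_fit[i])
    g_best, g_best_fit = p_best[g_idx], p_best_fit[g_idx]

    for t in range(1, t_max + 1):                                        # Step 6: stop after t_max iterations
        for i in range(pop_size):
            swarm[i] = update_particle(swarm[i], p_best[i], g_best, t)   # Step 3
            fit = evaluate(swarm[i])
            if fit < p_best_fit[i]:                                      # Step 4
                p_best[i], p_best_fit[i] = swarm[i], fit
                if fit < g_best_fit:                                     # Step 5
                    g_best, g_best_fit = swarm[i], fit
    return g_best, g_best_fit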

4.3. Time Complexity

In each iteration of PGEL, all particles are updated and their fitness is recalculated. In one iteration, the number of particle updates is jointly determined by the population size $\Omega$ and the particle dimension $\Phi$. The time complexity of the fitness calculation is determined by the number of cloudlets to be placed, K. Since the particle dimension equals the number of cloudlets, i.e., $\Phi = K$, the time complexity of each iteration of PGEL is $O(\Omega \cdot K)$.

5. Performance Evaluation

In this section, to verify the effectiveness of the proposed placement strategy based on PGEL in solving the KCPWP in WMANs, the experimental evaluation was carried out in a simulation environment. In particular, the following research questions (RQs) were verified by simulation experiments:
RQ1: Can PGEL obtain a near-optimal cloudlet placement scheme in typical WMANs? (Section 5.3)
RQ2: What is the impact of changes in the WMAN on the performance of PGEL? (Section 5.4)
RQ3: In typical WMANs, compared with several classic algorithms, does PGEL have performance advantages when solving KCPWP? (Section 5.5)
For RQ1, the experimental results show that PGEL can obtain a near-optimal cloudlet placement scheme in typical WMANs. For RQ2, it can be seen from the experimental results that regardless of how WMANs change, the performance of PGEL is not affected, and a near-optimal cloudlet placement scheme can be obtained. For RQ3, the experimental results show that PGEL is superior to other algorithms in typical WMANs.

5.1. Experimental Settings

All simulation experiments in this section were carried out on a PC equipped with an i5-8500 CPU and 32 GB of RAM. The operating system was Windows 10 (version 2004). PGEL and all classic algorithms were implemented in Python 3.7. The relevant parameters of PGEL were set with reference to [37] as $t_{\max} = 1000$, $\Omega = 100$, $w_{\max} = 0.9$, $w_{\min} = 0.4$, $c_1^{\max} = 0.9$, $c_1^{\min} = 0.2$, $c_2^{\max} = 0.9$, $c_2^{\min} = 0.4$, and $\mu = 0.1K$.
We selected the regions from the dataset of the Shanghai Telecom base station [39] to simulate the WMANs according to a certain longitude and latitude span. Different WMANs include different numbers and connection topologies for APs. The unit data transmission delay of each edge in the WMAN was generated randomly from 5 ms to 50 ms [4]. During the process of sub-task offloading, the WMAN was divided into 2 × 2 areas.

5.2. Classic Algorithms

To verify the advantages of PGEL, we introduced the following classic algorithms in the simulation experiments:
Optimal placement algorithm (OPT): This algorithm traverses all possible cloudlet placement schemes and selects the scheme with the minimum execution time of WAs as the optimal cloudlet placement scheme (a brute-force enumeration sketch is given after this list).
PSO: This traditional PSO algorithm has the same encoding and parameter settings as PGEL.
GA: According to the update strategy of the GA, it uses the elite retention strategy, binary tournament selection operator, two-point crossover operator, and exchange mutation operator to update the chromosome and takes the final elite solution as the optimal solution. The crossover and mutation probability were set as 0.7 and 0.1, respectively.
Random cloudlet placement algorithm (RAN): This algorithm randomly selects K APs from all APs of the WMAN and randomly places the K to-be-placed cloudlets on them. The reported result is the average over 100 repetitions.
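For reference, OPT amounts to brute-force enumeration over all ways of assigning the K cloudlets to distinct APs, as in the sketch below (only practical for small n and K; evaluate_makespan is again an assumed placeholder for the time model of Section 3).

from itertools import permutations

def opt_placement(n, K, evaluate_makespan):
    """Enumerate every assignment of the K (heterogeneous) cloudlets to distinct APs."""
    best_scheme, best_time = None, float("inf")
    for scheme in permutations(range(1, n + 1), K):   # scheme[j] = AP hosting cloudlet j+1
        t = evaluate_makespan(scheme)
        if t < best_time:
            best_scheme, best_time = scheme, t
    return best_scheme, best_time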

5.3. RQ1. PGEL Can Obtain a Near-Optimal Cloudlet Placement Scheme in Typical WMANs

According to the administrative divisions of Shanghai, we selected regions with spans of 8 and 4 points in latitude and longitude, respectively, from Huangpu, Xuhui, Minhang, and Pudong as typical WMANs. The distribution and topology of APs in the WMAN were simulated by the data of the Telecom base station in [39]. It was assumed that the number of to-be-placed cloudlets K in the WMAN is determined by the number of APs n; that is, K = κ · n , κ ( 0 , 1 ) . According to the population density of the above administrative divisions, the κ values of the selected regions from Huangpu, Xuhui, Minhang, and Pudong were set to be 0.35, 0.3, 0.25, and 0.2, respectively. In the above regions, the WAs received by APs were randomly generated from various network architectures, namely AlexNet, Visual Geometry Group network (VGGNet), GoogLeNet, and residual network (ResNet) [40]. In addition, the clock frequency of the CPU equipped with the to-be-placed cloudlets in the same WMAN satisfied a uniform distribution of 2 GHz to 3 GHz.
By running OPT and PGEL in different typical WMANs, we obtained the local cloudlet placement results shown in Figure 7. Although the cloudlet placement scheme obtained by PGEL in each typical WMAN is not identical to the optimal cloudlet placement scheme, only 1–2 cloudlet placement positions differ locally; the other cloudlet placement positions are the same, and the overall placement schemes are close.
By calculating the execution time of the WAs corresponding to the cloudlet placement schemes shown in Figure 7, the histogram shown in Figure 8 can be obtained. The results show that in the four typical WMANs (Huangpu, Xuhui, Minhang, and Pudong), the execution times of the WAs corresponding to the cloudlet placement scheme obtained by PGEL were 7.51%, 11.91%, 2.06%, and 0.83% larger than those corresponding to the optimal placement scheme, respectively. The combined insights of Figure 7 and Figure 8 indicate that although the cloudlet placement schemes obtained by PGEL somewhat differed from the optimal placement schemes, the difference was very small: PGEL can obtain an approximately optimal solution of the KCPWP in typical WMANs. Moreover, since obtaining the optimal placement scheme by brute force becomes infeasible as the network scale increases, PGEL has clear practical value.

5.4. RQ2. The Changes in the WMAN Have Almost No Impact on the Performance of PGEL

From Section 5.3, we know that PGEL can very closely approach the optimal cloudlet placement scheme in typical WMANs. However, a WMAN is not fixed; it varies in both time and space. Next, we discuss the impact of WMAN changes on the performance of PGEL.

5.4.1. The Impact of Changes in AP Topology on the Performance of PGEL

We used the regions selected from different administrative divisions of Shanghai as described in Section 5.3 to simulate changes in AP topology. In this part, it is assumed that the number of to-be-placed cloudlets in each region and the clock frequency of the CPU equipped with the to-be-placed cloudlets are the same, and other settings were the same as those in Section 5.3. In addition, the WAs received by APs were simulated by GoogLeNet, the structure of which is shown in [41].
As shown in Figure 9a, in four WMANs with different AP topologies, although the number of to-be-placed cloudlets and their CPU clock frequencies were the same, the execution times of the identical number and type of WAs were different. This is because the AP topology affects the placement positions of the cloudlets, resulting in different sub-task offloading schemes and data transmission times. From the experimental results, it can be concluded that in the four WMANs with different AP topologies (Huangpu, Xuhui, Minhang, and Pudong), the execution times of the WAs corresponding to the cloudlet placement scheme obtained by PGEL were 5.80%, 7.51%, 3.04%, and 3.13% larger than those corresponding to the optimal placement scheme, respectively. Although there was a gap between them, it is almost negligible. On the basis of these results, we can conclude that changes in the AP topology have little impact on the performance of PGEL, and PGEL can still obtain a near-optimal cloudlet placement scheme.

5.4.2. The Impact of the Changes in the To-Be-Placed Cloudlet CPU Clock Frequency on the Performance of PGEL

In Section 5.3, we assume that the CPU clock frequency of to-be-placed cloudlets meets the uniform distribution within a certain range. In this part, we simulated the changes in the CPU clock frequency of the to-be-placed cloudlets by changing the maximum and minimum values of the uniform distribution. In addition, the first typical WMAN (i.e., Huangpu) of Section 5.3 was adopted. The WAs received by APs were simulated by implementing GoogLeNet.
The results shown in Figure 9b indicate that the execution time of the WAs was negatively correlated with the CPU clock frequency of the to-be-placed cloudlets. This is because the CPU clock frequency determines the execution time of the sub-tasks: the higher the CPU clock frequency, the shorter the execution time of the sub-tasks, and the execution time of the sub-tasks largely determines the execution time of the WAs. In addition, the experimental results indicate that the execution times of the WAs corresponding to the cloudlet placement scheme obtained by PGEL were 3.94%, 3.76%, 2.03%, and 6.41% larger than those of the optimal placement scheme under the four to-be-placed cloudlet CPU clock frequencies, respectively. Although there was a gap between the values, it is almost negligible. From these results, we can conclude that changes in the CPU clock frequency of the to-be-placed cloudlets have little impact on the performance of PGEL, and PGEL can still obtain a near-optimal cloudlet placement scheme.

5.4.3. The Impact of the Changes in WAs on the Performance of PGEL

In Section 5.3, we assumed that each WA is randomly selected from four typical network architectures. In this part, we simulated changes in the WAs by changing the type of WA. In addition, the relevant settings of the first typical WMAN of Section 5.3 were adopted, and all WAs were of the same type.
As shown in Figure 9c, when the types of WAs were different, the execution times of the WAs were different. This is because different types of WAs have different computational requirements and data transmission rates, so the execution time was different. From the experimental results, when the WA types were AlexNet, VGGNet, GoogLeNet, and ResNet, the execution times of the WAs corresponding to the cloudlet placement scheme obtained by PGEL were 4.27%, 2.06%, 0.15%, and 0.95% larger, respectively, than those corresponding to the optimal placement scheme. Although there was a gap between the values, it can almost be ignored. From these results, we can conclude that the changes in the WAs have little impact on the performance of PGEL, and PGEL can still obtain a near-optimal cloudlet placement scheme.
In summary, we can conclude that changes in the WMANs have almost no impact on the performance of PGEL, and PGEL can still obtain a near-optimal cloudlet placement scheme.

5.5. RQ3. PGEL Has Greater Performance Advantages than Several Classic Algorithms in Typical WMANs

Finally, we used experiments to test the performance advantages of the proposed PGEL over several classic algorithms when solving the KCPWP in typical WMANs. As shown in Figure 10, in the four typical WMANs discussed in Section 5.3, PGEL reduced the execution time of WAs by 15.04%, 31.32%, and 44.99% on average compared with those of the PSO, GA, and RAN, respectively. PGEL adaptively adjusts the search capability according to the current situation and iteratively updates from a global perspective. In the four different WMANs, PGEL performed better than several classic algorithms did. Although the PSO algorithm has a certain search ability, it easily falls into a local optimum, so its performance is limited. The GA performs only a partial search during each iteration, so its performance is poor, and RAN can reflect only the average level of all placement schemes. In summary, it can be concluded that PGEL has a greater performance advantage than do several classic algorithms in solving the KCPWP in WMANs.

6. Conclusions

For mobile users who seek to overcome the problem of insufficient resources through remote servers, cloudlet technology is essential; however, the problem of cloudlet placement has been largely ignored. To solve this problem, this paper proposed a time-driven cloudlet placement strategy based on PGEL, which aims to reduce the execution time of WAs in WMANs. We set up a reasonable simulation environment and conducted comprehensive simulation experiments. The results show that, compared with the existing solutions, our proposed strategy performs better and comes very close to the optimal placement scheme.
In the future, we will consider the impact of environmental fluctuations (e.g., network delays, bandwidth fluctuations, and cloudlet failures) on cloudlet placement and improve the existing strategy to adapt to dynamic environments. In addition, we will fully consider the characteristics of the sub-tasks included in a WA and the placement cost of a cloudlet at different positions, and design a more scientific and complete system cost model.

Author Contributions

J.Z. and M.L. drafted the original manuscript and designed the experiments. X.Z. provided ideas and suggestions. C.-H.H. provided a critical review and helped to draft the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partly supported by the Specific Research Fund of the Innovation Platform for Academicians of Hainan Province: YSPTZX202145, the Key-Area Research and Development Program of Guangdong Province: 2020B0101090005, and the Natural Science Foundation of Fujian Province under Grant: 2019J01286.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data in this paper are available from the corresponding authors upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Liu, F.; Shu, P.; Jin, H.; Ding, L.; Yu, J.; Niu, D.; Li, B. Gearing resource-poor mobile devices with powerful clouds: Architectures, challenges, and applications. IEEE Wirel. Commun. 2013, 20, 14–22. [Google Scholar]
  2. Huang, D.; Xing, T.; Wu, H. Mobile cloud computing service models: A user-centric approach. IEEE Netw. 2013, 27, 6–11. [Google Scholar] [CrossRef]
  3. Jararweh, Y.; Doulat, A.; AlQudah, O.; Ahmed, E.; Al-Ayyoub, M.; Benkhelifa, E. The future of mobile cloud computing: Integrating cloudlets and mobile edge computing. In Proceedings of the International Conference on Telecommunications (ICT), Thessaloniki, Greece, 16–18 May 2016. [Google Scholar]
  4. Xu, Z.; Liang, W.; Xu, W.; Jia, M.; Guo, S. Efficient algorithms for capacitated cloudlet placements. IEEE Trans. Parallel Distrib. Syst. 2015, 27, 2866–2880. [Google Scholar] [CrossRef]
  5. Xu, Z.; Liang, W.; Xu, W.; Jia, M.; Guo, S. Capacitated cloudlet placements in wireless metropolitan area networks. In Proceedings of the IEEE 40th Conference on Local Computer Networks (LCN), Clearwater Beach, FL, USA, 26–29 October 2015. [Google Scholar]
  6. Jia, M.; Cao, J.; Liang, W. Optimal cloudlet placement and user to cloudlet allocation in wireless metropolitan area networks. IEEE Trans. Cloud Comput. 2015, 5, 725–737. [Google Scholar] [CrossRef]
  7. Chen, M.; Guo, S.; Liu, K.; Liao, X.; Xiao, B. Robust computation offloading and resource scheduling in cloudlet-based mobile cloud computing. IEEE Trans. Mob. Comput. 2021, 20, 2025–2040. [Google Scholar] [CrossRef]
  8. Chun, B.G.; Ihm, S.; Maniatis, P.; Naik, M.; Patti, A. Clonecloud: Elastic execution between mobile device and cloud. In Proceedings of the Sixth Conference on Computer Systems (CCS), Salzburg, Austria, 10–13 April 2011. [Google Scholar]
  9. Cuervo, E.; Balasubramanian, A.; Cho, D.k.; Wolman, A.; Saroiu, S.; Chandra, R.; Bahl, P. MAUI: Making smartphones last longer with code offload. In Proceedings of the 8th International Conference on Mobile Systems, Applications, and Services (ICMAS), San Francisco, CA, USA, 15–18 June 2010. [Google Scholar]
  10. Kosta, S.; Aucinas, A.; Hui, P.; Mortier, R.; Zhang, X. Thinkair: Dynamic resource allocation and parallel execution in the cloud for mobile code offloading. In Proceedings of the IEEE International Conference on Computer Communications (INFOCOM), Orlando, FL, USA, 25–30 March 2012. [Google Scholar]
  11. LAN/MAN Standards Committee. IEEE Standard for Local and Metropolitan Area Networks: Overview and Architecture (En línea); The Institute of Electrical and Electronics Engineers Inc.: New York, NY, USA, 2002. [Google Scholar]
  12. Luo, Q.; Li, C.; Luan, T.H.; Shi, W.; Wu, W. Self-learning based computation offloading for internet of vehicles: Model and algorithm. IEEE Trans. Wirel. Commun. 2021, 20, 5913–5925. [Google Scholar] [CrossRef]
  13. Tan, J.; Chang, T.H.; Guo, K.; Quek, T.Q. Robust computation offloading in fog radio access network with fronthaul compression. IEEE Trans. Wirel. Commun. 2021, 20, 6506–6521. [Google Scholar] [CrossRef]
  14. Ha, K.; Pillai, P.; Richter, W.; Abe, Y.; Satyanarayanan, M. Just-in-time provisioning for cyber foraging. In Proceedings of the 11th Annual International Conference on Mobile Systems, Applications, and Services (ICMAS), Taipei, Taiwan, 25–28 June 2013. [Google Scholar]
  15. Zhang, Y.; Liu, H.; Jiao, L.; Fu, X. To offload or not to offload: An efficient code partition algorithm for mobile cloud computing. In Proceedings of the IEEE 1st International Conference on Cloud Networking (CLOUDNET), Paris, France, 28–30 November 2012. [Google Scholar]
  16. Zhang, Y.; Huang, G.; Liu, X.; Zhang, W.; Mei, H.; Yang, S. Refactoring android Java code for on-demand computation offloading. In Proceedings of the Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA), Tucson, AZ, USA, 21–25 October 2012. [Google Scholar]
  17. Ra, M.R.; Sheth, A.; Mummert, L.; Pillai, P.; Wetherall, D.; Govindan, R. Odessa: Enabling interactive perception applications on mobile devices. In Proceedings of the The 9th International Conference on Mobile Systems, Applications, and Services (ICMAS), Bethesda MD, USA, 28 June–1 July 2011. [Google Scholar]
  18. Hoang, D.T.; Niyato, D.; Wang, P. Optimal admission control policy for mobile cloud computing hotspot with cloudlet. In Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC), Paris, France, 1–4 April 2012. [Google Scholar]
  19. Guan, S.; Boukerche, A. A novel mobility-aware offloading management scheme in sustainable multi-access edge computing. IEEE Trans. Sustain. Comput. 2022, 7, 1–13. [Google Scholar] [CrossRef]
  20. Chen, X.; Hu, J.; Chen, Z.; Lin, B.; Xiong, N.; Min, G. A reinforcement learning-empowered feedback control system for industrial internet of things. IEEE Trans. Ind. Inform. 2022, 18, 2724–2733. [Google Scholar] [CrossRef]
  21. Zhang, Y.; Niyato, D.; Wang, P. Offloading in mobile cloudlet systems with intermittent connectivity. IEEE Trans. Mob. Comput. 2015, 14, 2516–2529. [Google Scholar] [CrossRef]
  22. Gai, K.; Qiu, M.; Zhao, H.; Tao, L.; Zong, Z. Dynamic energy-aware cloudlet-based mobile cloud computing model for green computing. J. Netw. Comput. Appl. 2016, 59, 46–54. [Google Scholar] [CrossRef]
  23. Mukherjee, A.; De, D.; Roy, D.G. A power and latency aware cloudlet selection strategy for multi-cloudlet environment. IEEE Trans. Cloud Comput. 2016, 7, 141–154. [Google Scholar] [CrossRef]
  24. Bhatta, D.; Mashayekhy, L. A bifactor approximation algorithm for cloudlet placement in edge computing. IEEE Trans. Parallel Distrib. Syst. 2021, 33, 1787–1798. [Google Scholar] [CrossRef]
  25. Jin, X.; Gao, F.; Wang, Z.; Chen, Y. Optimal deployment of mobile cloudlets for mobile applications in edge computing. J. Supercomput. 2022, 78, 7888–7907. [Google Scholar] [CrossRef]
  26. Chen, X.; Li, A.; Zeng, X.E.; Guo, W.; Huang, G. Runtime model based approach to IoT application development. Front. Comput. Sci. 2015, 9, 540–553. [Google Scholar] [CrossRef]
  27. Guo, Y.; Wang, S.; Zhou, A.; Xu, J.; Yuan, J.; Hsu, C.H. User allocation-aware edge cloud placement in mobile edge computing. Softw. Pract. Exp. 2020, 50, 489–502. [Google Scholar] [CrossRef]
  28. Wang, Z.; Gao, F.; Jin, X. Optimal deployment of cloudlets based on cost and latency in Internet of Things networks. Wirel. Netw. 2020, 26, 6077–6093. [Google Scholar] [CrossRef]
  29. Zhu, X.; Zhou, M. Multiobjective optimized cloudlet deployment and task offloading for mobile-edge computing. IEEE Internet Things J. 2021, 8, 15582–15595. [Google Scholar] [CrossRef]
  30. Cao, B.; Fan, S.; Zhao, J.; Tian, S.; Zheng, Z.; Yan, Y.; Yang, P. Large-scale many-objective deployment optimization of edge servers. IEEE Trans. Intell. Transp. Syst. 2021, 22, 3841–3849. [Google Scholar] [CrossRef]
  31. Dashti, S.E.; Rahmani, A.M. Dynamic VMs placement for energy efficiency by PSO in cloud computing. J. Exp. Theor. Artif. Intell. 2016, 28, 97–112. [Google Scholar] [CrossRef]
  32. Chou, L.D.; Chen, H.F.; Tseng, F.H.; Chao, H.C.; Chang, Y.J. DPRA: Dynamic Power-Saving Resource Allocation for Cloud Data Center Using Particle Swarm Optimization. IEEE Syst. J. 2018, 12, 1554–1565. [Google Scholar] [CrossRef]
  33. Tseng, F.H.; Wang, X.; Chou, L.D.; Chao, H.C.; Leung, V.C.M. Dynamic Resource Prediction and Allocation for Cloud Data Center Using the Multiobjective Genetic Algorithm. IEEE Syst. J. 2018, 12, 1688–1699. [Google Scholar] [CrossRef]
  34. Tseng, F.H.; Jheng, Y.M.; Chou, L.D.; Chao, H.C.; Leung, V.C. Link-Aware Virtual Machine Placement for Cloud Services based on Service-Oriented Architecture. IEEE Trans. Cloud Comput. 2020, 8, 989–1002. [Google Scholar] [CrossRef]
  35. Chen, Z.; Hu, J.; Chen, X.; Hu, J.; Zheng, X.; Min, G. Computation offloading and task scheduling for DNN-based applications in cloud-edge computing. IEEE Access 2020, 8, 115537–115547. [Google Scholar] [CrossRef]
  36. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the International Conference on Neural Networks (ICNN), Perth, WA, Australia, 27 November–1 December 1995. [Google Scholar]
  37. Guo, W.; Lin, B.; Chen, G.; Chen, Y.; Liang, F. Cost-driven scheduling for deadline-based workflow across multiple clouds. IEEE Trans. Netw. Serv. Manag. 2018, 15, 1571–1585. [Google Scholar] [CrossRef]
  38. Shi, Y.; Eberhart, R. A modified particle swarm optimizer. In Proceedings of the IEEE International Conference on Evolutionary Computation Proceedings (ICECP), Anchorage, AK, USA, 4–9 May 1998. [Google Scholar]
  39. Wang, S.; Zhao, Y.; Huang, L.; Xu, J.; Hsu, C.H. QoS prediction for service recommendations in mobile edge computing. J. Parallel Distrib. Comput. 2019, 127, 134–144. [Google Scholar] [CrossRef]
  40. Chavan, T.R.; Nandedkar, A.V. A hybrid deep neural network for online learning. In Proceedings of the International Conference on Advances in Pattern Recognition (ICAPR), Bangalore, India, 27–30 December 2017. [Google Scholar]
  41. Zhang, J. GitHub. Available online: https://github.com/JamesZJS/dataset (accessed on 1 January 2022).
Figure 1. Cloudlet placement and WA offloading in a WMAN.
Figure 2. Diagram of the WA.
Figure 3. Crossover operator in the personal cognitive update operation and the social cognitive update operation.
Figure 4. Mutation operator in the inertial update operation.
Figure 5. Traditional crossover operator. (The red parts indicate the crossover and adjustment, and the arrows indicate the crossover and adjustment operator.)
Figure 6. Traditional mutation operator. (The red parts indicate the mutation and adjustment, and the arrows indicate the mutation and adjustment operator.)
Figure 7. Comparison of local cloudlet placement results in typical WMANs. (a) Huangpu. (b) Xuhui. (c) Minhang. (d) Pudong.
Figure 8. Comparison of execution times of WAs in typical WMANs.
Figure 9. The impact of WMAN changes on the performance of PGEL. (a) APs' topology changes. (b) CPU clock frequency changes. (c) WA changes.
Figure 10. Comparison of cloudlet placement. (a) Huangpu. (b) Xuhui. (c) Minhang. (d) Pudong.
Table 1. The comparative analysis of different work ("+": involved; "−": not involved).
Reference | Infrastructure (Cloud, Edge, Terminal) | Server (Single, Multiple) | Task Type (Jobs, Workflow) | Constraint (Energy, Deadline) | Object (Time, Energy, Workload, Other)
Our work+++++++
[24]+++++
[25]+++++
[27]+++++++
[28]+++++++
[29]++++++
[30]+++++
[31]++++++
[34]++++++++
Table 2. Summary of key notations.
Notation | Description
$G = (V, E)$ | a WMAN
$v_i \in V$ | AP with index $i$ in the WMAN
$D_{i,j} \in D$ | unit data transmission delay between $v_i$ and $v_j$
$G^W = \{G_1^W, \ldots, G_n^W\}$ | set of WAs received by all APs
$G_i^W = (L_i, E_i)$ | WA with index $i$
$L_i = \{l_i^1, \ldots, l_i^s\}$ | sub-tasks included in WA $G_i^W$
$E_i = \{e_i^{j,k}\}$ | data dependence between sub-tasks included in WA $G_i^W$
$\theta_i^j$ | computational requirement of sub-task $l_i^j$
$C = \{c_1, \ldots, c_K\}$ | set of cloudlets to be placed in a WMAN
$\eta_i$ | computing power of cloudlet $c_i$
$\omega = \{\omega_{i,j}\}$ | cloudlet placement scheme
$M = \bigcup_{i=1}^{|G^W|}\{(l_i^j, c_k)\}$ | global sub-task offloading strategy
$t_{start}(l_i^j, c_k)$ | start time of sub-task $l_i^j$ on cloudlet $c_k$
$t_{exe}(l_i^j, c_k)$ | execution time of sub-task $l_i^j$
$t_{trs}(l_i^p, l_i^j, c_w, c_k)$ | dependent data transmission time between sub-tasks $l_i^p$ and $l_i^j$
$t_{end}(l_i^j, c_k)$ | execution end time of sub-task $l_i^j$ on cloudlet $c_k$
$t_{end}(G_i^W)$ | execution end time of WA $G_i^W$
$t_{end}(G^W)$ | end time of all WAs in the WMAN