Article

A Hybrid Whale Optimization Algorithm for Quality of Service-Aware Manufacturing Cloud Service Composition

College of Engineering, South China Agricultural University, Guangzhou 510642, China
*
Author to whom correspondence should be addressed.
Symmetry 2024, 16(1), 46; https://doi.org/10.3390/sym16010046
Submission received: 24 October 2023 / Revised: 30 November 2023 / Accepted: 21 December 2023 / Published: 29 December 2023
(This article belongs to the Special Issue Meta-Heuristics for Manufacturing Systems Optimization Ⅱ)

Abstract

Cloud manufacturing (CMfg) has attracted considerable attention from scholars and practitioners. The purpose of quality of service (QoS)-aware manufacturing cloud service composition (MCSC), one of the key issues in CMfg, is to combine available manufacturing cloud services (MCSs) to generate an optimized MCSC that can meet the diverse requirements of customers. However, many MCSs deployed in the CMfg platform have the same function but different QoS attributes, so achieving an optimal MCSC with high QoS is a great challenge. In order to obtain better optimization results efficiently for QoS-MCSC problems, a whale optimization algorithm (WOA) with adaptive weight, Lévy flight, and adaptive crossover strategies (ASWOA) is proposed. In the proposed ASWOA, an adaptive crossover inspired by the genetic algorithm is developed to balance exploration and exploitation. The Lévy flight is designed to expand the search space of the WOA and accelerate the convergence of the WOA with adaptive crossover. The adaptive weight is developed to extend the search scale of the exploitation phase. Simulation and comparison experiments are conducted on various benchmark functions and different scale QoS-MCSC problems. The QoS attributes of the problems are randomly and symmetrically generated. The experimental results demonstrate that the proposed ASWOA outperforms other compared cutting-edge algorithms.

1. Introduction

Cloud manufacturing (CMfg) is an advanced service-oriented manufacturing model that uses advanced internet technologies to integrate virtualized manufacturing resource services [1,2]. The topic has attracted considerable attention from scholars and practitioners. In CMfg, various lifecycle-oriented manufacturing resources and capabilities, including the hardware and software capabilities for product design, production, simulation, transportation, and so on, are virtualized and encapsulated into the CMfg platform [3]. The characteristics of each manufacturing cloud service (MCS) fall into two categories: functional and non-functional attributes [4]. The non-functional attributes are generally called QoS. The MCSs deployed in the CMfg platform allow customers to select proper MCSs according to their requirements and QoS to complete manufacturing tasks [5]. In detail, a complex manufacturing task can be split into different subtasks, each of which can be completed by selecting an MCS from the candidate manufacturing cloud service set (CMCSS) deployed in the CMfg platform; the selected MCSs are integrated to form a manufacturing cloud service composition (MCSC).
The large number of MCSs deployed in the CMfg platform, together with their rapid growth, brings great challenges to selecting optimal MCSs. Numerous candidate MCSs provide the same or analogous functions but have different QoS attributes, such as time, product performance, manufacturing capacity, and so on. It is difficult to optimize several QoS attributes at the same time because one attribute may conflict with another. For example, an MCS may have a longer execution time but a larger manufacturing capacity, whereas another MCS may have a shorter execution time but a smaller manufacturing capacity [5]. Meanwhile, service correlations in the composition process must also be considered, as they can influence the global QoS of the MCSC [6]. These particular characteristics of CMfg increase the difficulty of selecting MCSs to be composed into an MCSC, and the problem remains an arduous task that has attracted many researchers [7].
In the QoS-MCSC problem, a suitable MCS is selected for each subtask of a cloud task, and the selected MCSs are aggregated in sequence to generate an MCSC. The MCSC must meet both the functional requirements and the optimal QoS expectations of customers. Finding the optimal composite path among the feasible solutions distributed in a discrete space for QoS-MCSC is known to be an NP-hard problem. Taking a complex task with N subtasks and M candidate MCSs for each subtask as an example, the solution space reaches up to $M^N$. The solutions of the QoS-MCSC problem are thus distributed in a large discrete space. Various intelligent optimization algorithms have been developed to explore the optimal composite path for this problem, such as the genetic algorithm (GA) [8], the differential evolution algorithm (DE) [9], particle swarm optimization (PSO) [10], the artificial bee colony algorithm (ABC) [11], and the whale optimization algorithm (WOA) [12]. Khanouche et al. [13] constructed a clustering-based search tree to improve global search capability for the QoS-MCSC problem. Li et al. [14] proposed a hybrid PSO (AIWPSO) that utilizes adaptive inertia weights to enhance global search capabilities. Deng et al. [15] developed a hybrid DE with neighborhood mutation operators and opposition-based learning (NOBLDE). Rao et al. [16] developed teaching–learning-based optimization (TLBO) for non-linear large-scale problems. When solving the QoS-MCSC problem, there may be multiple local optimal solutions in the search space, and the above-mentioned approaches are easily trapped in them owing to their randomized or stochastic strategies. Achieving globally optimal solutions is still a great challenge.
The WOA, a popular bionic algorithm, was proposed by Mirjalili [12]. The principle of the WOA is to simulate the behavior of humpback whales in hunting prey, including encircling prey, bubble-net attacking, and searching for prey. Recently, the WOA has aroused the interest of many researchers and practitioners, and it has been employed or modified to handle diverse practical engineering problems, such as multilevel threshold image segmentation [17], permutation flow shop scheduling [18], microgrid operations planning [19], and so on. Experimental tests demonstrate that the WOA can achieve competitive or better results compared to other heuristic algorithms. For instance, the WOA outperforms the DE and grey wolf optimization (GWO) when solving reactive power planning problems [20]. However, one disadvantage of the WOA is that it may easily fall into local optima in later iterations, especially when the number of evolutions exceeds 600 [21]. The reason is that the probability related to exploration attenuates along with the iterations, so the exploration ability of the WOA for the global optimal solution gradually decreases while the exploitation ability gradually increases. Some existing algorithms also lack strong exploration ability in later iterations, which might lead the approach to be trapped in local optimal solutions. Generally speaking, the stronger the exploration ability of an algorithm, the higher the solution accuracy, especially for NP-hard problems.
Lévy flight is a type of generalized random walk that imitates the trajectory of biological activity [22], in which the direction of each step is completely random. A random-direction search facilitates the exploration of the global optimal solution but is not conducive to algorithm convergence. Therefore, Lévy flight has often been integrated into other intelligent algorithms to improve global search capability. For example, Liu et al. [23] advanced a hybrid approach combining quantum particle swarm optimization with Lévy flight and a straight-flight strategy to solve engineering design optimization problems. Zhou et al. [24] utilized Lévy flight to enhance the global optimization capability of the ABC for the MCSC problem. Thus, Lévy flight is employed in the WOA in this study to enlarge the search space and increase the exploration capability.
Crossover is one of the essential operators used to preserve population diversity in the GA. Traditional crossover alters only a few genes of each individual, which differs from the WOA, which updates all whale positions at the same time. This mechanism hinders the fast convergence of the GA and causes the algorithm to easily fall into local optimal solutions because traditional crossover is more inclined to generate similar individuals in later iterations [25]. Some studies have reported that the WOA outperforms the GA with traditional crossover when solving MCSC problems [26]. Thus, different adaptive crossover strategies have been developed to balance the exploration and exploitation abilities, and more competitive results have been achieved, such as an adaptive genetic algorithm for environment monitoring data acquisition [27], a genetic algorithm adaptive homogeneous approach for identifying wall crack problems [28], an adaptive dimensionality reduction GA for high-dimensional large-scale problems [29], and so on. Inspired by these ideas, an adaptive crossover strategy is employed to balance the exploration and exploitation of the WOA.
Adaptive weighting strategies are often developed to preserve population diversity and increase the search space of algorithms. In the exploitation phase of the standard WOA, whales can only surround prey in a small area, which causes them to easily fall into local optimal solutions [21]. Recently, more and more scholars have used adaptive weights to improve algorithms. For example, Li et al. [14] introduced the AIWPSO, which has excellent global search capabilities. In order to classify underwater sonar images, Wang et al. [30] introduced a novel deep learning model that combines adaptive weights with a convolutional neural network (AW-CNN). Cao et al. [31] introduced an image classification algorithm based on adaptive feature weights to address the low classification accuracy of single-feature and multi-feature fusion. Inspired by these algorithms, an adaptive weight strategy is developed to extend the search scale of the exploitation phase.
The approach developed in this research combines the WOA with adaptive crossover, adaptive weight, and Lévy flight strategies to improve the exploration and exploitation abilities cost-effectively. The WOA performs well in exploitation with a high convergence speed [21]. The crossover strategies of the GA have been widely adopted to preserve population diversity in real- and integer-coded optimization problems. Accordingly, an adaptive crossover with three crossover strategies and single-point mutation is utilized to enhance the algorithm's performance and accelerate the convergence of the WOA, while Lévy flight is designed to enhance the exploration of the WOA by expanding the search space. Finally, an adaptive weight strategy is used to enhance the speed at which the whale approaches the prey. The main contributions of this study are as follows:
  • A novel WOA with an adaptive crossover, adaptive weight, and Lévy flight strategy (ASWOA) is proposed;
  • The Lévy flight strategy expands the solution space and increases the exploration ability for global search;
  • The adaptive crossover balances the exploration and exploitation of the WOA at different iterations and enhances the WOA to escape local optimal at the later iteration;
  • The adaptive weights are developed to accelerate the speed of approaching prey;
  • Simulation and comparison experiments were conducted on different scale QoS-MCSC problems, which prove the superiority of the proposed ASWOA compared to the standard WOA.
The rest of this paper is arranged as follows. The background of QoS-MCSC and the approaches to it are summarized in Section 2. The model of QoS-MCSC is introduced based on aggregation formulas in Section 3. Section 4 presents the proposed ASWOA and the related techniques, including the WOA, adaptive crossover, Lévy flight, and adaptive weight. Section 5 reports the simulation and comparison experiments conducted on various benchmark functions. Section 6 demonstrates the applicability of the ASWOA to different scale QoS-MCSC problems. Finally, Section 7 provides a summary and future research directions.

2. Related Work

CMfg is a popular research topic, and relevant scholars have carried out a lot of studies on CMfg service modeling and description [32], cloud architecture design [33], cloud service standards [34], and so on. In our previous study, a correlation-aware MCSC model was proposed. This model can describe the QoS dependency between different services [6].
In recent years, cloud computing and big data have advanced by leaps and bounds, and many manufacturing resources have been virtualized and encapsulated for provision on network platforms, thereby leading to a rapidly and constantly expanding CMfg system. As the number of MCSs increases, how to efficiently select appropriate MCSs to fulfill the functional requirements of corresponding manufacturing tasks and how to integrate these MCSs into an MCSC with optimal QoS are promising research issues [35]. Many novel approaches have been developed to handle the problem of the optimal selection of MCSCs. There are three main categories of methods for solving MCSC: scalarization-based, Pareto-based, and other approaches.
The MCSC problem is considered a multi-objective problem (MOP) [36]. The scalarization method can convert an MOP into a single-objective problem (SOP). At present, there are two scalarization methods: the fraction-based fitness technique and the simple additive weighting (SAW) technique [37]. Based on the fraction-based fitness technique, Canfora et al. [36] utilized a GA to settle the MCSC problem. Based on the SAW, Zhou [38] proposed a hybrid TLBO for the MCSC problem. Mardukhi et al. [39] proposed a new model, which can decompose global constraints into multiple local constraints. Gavvala et al. [40] combined the WOA with the eagle strategy for the QoS-MCSC problem. Ma et al. [41] developed a hybrid GA based on population diversity and relational matrix coding.
In addition to the meta-heuristic algorithms mentioned above, non-heuristic and heuristic algorithms have also been used for MCSC problems. Liu et al. [42] proposed an adaptive MCSC approach based on deep reinforcement learning. Jiang et al. [43] introduced a top-k query mechanism and proposed a Key Path-Based Loose (KPL) algorithm. However, meta-heuristic algorithms offer the most competitive performance for MCSC problems [44].
Pareto-based methods solve the MOP directly by optimizing multiple QoS parameters at the same time to acquire the Pareto optimal solutions [45]. Several well-known MOP methods exist. For example, Wahid et al. [46] utilized the Strength Pareto Evolutionary Algorithm (SPEA-II) to solve the MOP problem. Deb et al. [25] proposed the Non-dominated Sorting Genetic Algorithm II (NSGA-II) for MOP problems. Xiang et al. [47] proposed a new MOP algorithm based on the idea of Pareto solutions, which was developed to address the SCOS problem. Rudziński [48] presented an application of the generalized Strength Pareto Evolutionary Algorithm (SPEA) with an original multi-objective optimization technique to the logistic facility location problem; the proposed approach seeks a set of non-dominated solutions with high spread and well-balanced distribution in a specific solution space. Xie et al. [49] introduced a new algorithm that uses the differential evolution mutation operator as directional guidance and combines it with the NSGA-II algorithm to improve the distribution of the solution population. Napoli et al. [50] proposed a trade-off negotiation strategy that can process multiple QoS properties at the same time. Kashyap et al. [51] developed a Non-dominated Sorting GA (NSGA-II) for service composition problems in IoT. Suciu et al. [52] introduced an adaptive MOEA/D algorithm for QoS-MCSC problems.
When multiple objectives need to be optimized, the optimization problem becomes more complex, and the efficiency of MOEAs decreases [53]. During algorithm execution, owing to conflicts between different objectives, multiple objectives cannot be optimized at the same time: when one objective is strengthened, another is inevitably weakened. At the same time, the computational cost of the above Pareto-based methods is much larger than that of the scalarization method. Moreover, the above Pareto-based methods cannot balance exploration and exploitation well.
Apart from the above two methods, many scholars use other methods to resolve this problem. Teixeira et al. [54] introduced a new service-oriented model that can be conducted without necessarily implementing the real system. This can accomplish QoS tasks at a lower cost. Ping et al. [55] proposed a new vague information decision model that alleviates the bias of existing approaches through the improved fuzzy ranking index. Zhang et al. [56] proposed an intuitionistic fuzzy entropy weight BBO algorithm for QoS-MCSC problems. Hu et al. [57] introduced a game theoretic power control mechanism based on the hidden Markov model (HMM).
In summary, using the above new models or Pareto-based methods to solve the MCSC problem entails high computational complexity, and even with the increased computational cost, the globally optimal QoS may not be obtained. Therefore, this article adopts the scalarization method to solve the QoS-MCSC problem.

3. Problem Formulation of QoS-Aware MCSC

The composition of manufacturing services can be divided into three stages: task decomposition, service discovery, and optimal service selection. This process is illustrated in Figure 1.
Task decomposition: the complex manufacturing task of the MCSC can be decomposed into multiple subtasks, such as Task = {ST1, ST2, ..., STi, ..., STn}, where STi represents subtask i and n is the total number of subtasks.
Service discovery: each subtask STi has a candidate service set $CMCSS_i = \{MCS_{i,1}, MCS_{i,2}, MCS_{i,3}, \ldots, MCS_{i,j}, \ldots, MCS_{i,m_i}\}$, where $MCS_{i,j}$ represents the jth candidate service that can satisfy the functional and QoS constraints of subtask STi and $m_i$ represents the total number of MCSs for STi.
Generate composite paths: a single MCS or a composition of multiple MCSs is selected for each subtask from the CMCSS and connected to form an executable path of the MCSC. $P_j = \{MCS_{1,k_1}, MCS_{2,k_2}, MCS_{3,k_3}, \ldots, MCS_{i,k_i}, \ldots, MCS_{n,k_n}\}$ is taken as the jth executable path, where $MCS_{i,k_i}$ represents the $k_i$th candidate service of STi. Let $P = \{P_1, P_2, \ldots, P_i, \ldots, P_{l_{path}}\}$ represent the executable path space for the task, with $l_{path} = \prod_{i=1}^{n} m_i$. QoS-aware MCSC chooses an optimal path from P with a high QoS performance.
QoS, as the non-functional attribute of the MCSC, is used to evaluate the performance of services. There are more than twenty QoS metrics in practical applications; the four most widely used ones, namely time (T), cost (C), reliability (R), and availability (A), are taken to construct the QoS evaluation model for MCSs in this study. These four metrics balance the efficiency, economy, effectiveness, and stability of service that customers care about most. The QoS metrics of each MCS can be represented as $Q_{MCS_{i,j}} = \{T(MCS_{i,j}), C(MCS_{i,j}), R(MCS_{i,j}), A(MCS_{i,j})\}$, where $MCS_{i,j}$ denotes the jth candidate MCS for the ith subtask.
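For illustration, the QoS profile of one candidate MCS can be held in a small record, as in the following Python sketch; the class and field names are hypothetical and not part of the paper's model.

from dataclasses import dataclass

@dataclass
class MCS:
    """QoS profile of one candidate manufacturing cloud service (illustrative)."""
    time: float          # T: execution time, negative attribute (lower is better)
    cost: float          # C: cost, negative attribute
    reliability: float   # R: reliability, positive attribute (higher is better)
    availability: float  # A: availability, positive attribute

# hypothetical candidate set CMCSS_1 for subtask ST1
cmcss_1 = [MCS(0.82, 0.75, 0.91, 0.88), MCS(0.70, 0.93, 0.85, 0.79)]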
An MCSC is composed of four types of composite structures: sequential, parallel, selective, and circular. However, the parallel, selective, and circular composite structures are not convenient for QoS value calculation. Thus, it is necessary to convert these three composite structures into the sequential structure, after which the QoS value of the MCSC can be calculated with the sequential structure formulas [56]. The formulas for the four structures are given in Table 1.
The purpose of QoS-MCSC is to select the optimal combined path, and the global QoS value of each MCSC is taken as the optimization goal. The QoS attributes are categorized into positive attributes ($Q^+$) and negative attributes ($Q^-$). The optimization of QoS-MCSC tries to simultaneously achieve high values of positive QoS attributes, such as availability and reliability, and low values of negative QoS attributes, such as time and cost. The SAW is employed to convert multiple QoS attributes into a single value: the values of the QoS attributes are first normalized to the same scale [0, 1], and then a weighted sum of the scaled QoS values is computed for aggregation. The SAW-based QoS value of an MCSC can be defined with the following formula:
$$Q(MCSC_m) = \sum_{q_t \in Q^-} \frac{Q_{t,max} - q_t(MCSC_m)}{Q_{t,max} - Q_{t,min}} \times \omega_t + \sum_{q_t \in Q^+} \frac{q_t(MCSC_m) - Q_{t,min}}{Q_{t,max} - Q_{t,min}} \times \omega_t \qquad (1)$$
where $Q_{t,max}$ and $Q_{t,min}$ indicate the maximum and minimum values of the tth QoS attribute, respectively, and $\omega_t$ is the weight of each QoS attribute with $\sum_t \omega_t = 1$; the weights can be determined by the preference of customers or the CMfg platform.
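As an illustration of Equation (1), the following Python sketch applies min-max scaling to each aggregated attribute and forms the weighted sum; the dictionary-based interface, attribute names, and sample values are assumptions made for this example rather than the authors' implementation.

def saw_qos(values, q_min, q_max, weights, positive):
    """SAW-based global QoS of one MCSC, an illustrative sketch of Equation (1).

    values, q_min, q_max, weights: dicts keyed by attribute name.
    positive: set of attributes where larger is better (Q+); the rest are treated as Q-.
    """
    total = 0.0
    for attr, v in values.items():
        span = q_max[attr] - q_min[attr]
        if span == 0:                       # all candidates equal: treat the scaled value as 1
            scaled = 1.0
        elif attr in positive:              # positive attribute: (v - min) / (max - min)
            scaled = (v - q_min[attr]) / span
        else:                               # negative attribute: (max - v) / (max - min)
            scaled = (q_max[attr] - v) / span
        total += weights[attr] * scaled
    return total

# example call with the weight setting used later in the paper (0.35/0.35/0.15/0.15)
q = saw_qos({"time": 3.2, "cost": 5.1, "reliability": 0.92, "availability": 0.88},
            q_min={"time": 2.0, "cost": 4.0, "reliability": 0.80, "availability": 0.80},
            q_max={"time": 6.0, "cost": 9.0, "reliability": 0.99, "availability": 0.99},
            weights={"time": 0.35, "cost": 0.35, "reliability": 0.15, "availability": 0.15},
            positive={"reliability", "availability"})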
It is difficult to find a global solution for QoS-MCSC because of its large solution space. Taking a complex task with N subtasks and M candidate MCSs for each subtask as an example, the solution space reaches up to $M^N$. Thus, a WOA with adaptive crossover, adaptive weight, and Lévy flight strategies is developed to optimize this challenging problem in this study.

4. The Proposed ASWOA for QoS-Aware MCSC Problems

4.1. Encoding QoS-Aware MCSC

An n-dimensional integer vector X = [x1, x2, ..., xi, ..., xn] is used to represent a solution for QoS-aware MCSC with n subtasks, where xi is the index of the MCS in the CMCSS for subtask i. The value of xi is bound to the discrete range [1, mi], where mi is the number of available MCSs for the ith subtask.
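A minimal sketch of this encoding is given below: each whale is an integer vector whose ith entry indexes a candidate service for subtask i, and positions produced by the continuous update rules are rounded and clipped back into [1, mi]. The helper names and the repair rule are illustrative assumptions.

import numpy as np

def random_solution(m, rng):
    """Random whale position: one candidate index per subtask; m[i] candidates for subtask i."""
    return np.array([rng.integers(1, mi + 1) for mi in m])

def repair(x, m):
    """Round a real-valued position and clip each component into the discrete range [1, m_i]."""
    return np.clip(np.rint(x), 1, m).astype(int)

rng = np.random.default_rng(0)
m = np.array([50, 100, 150])     # candidate set sizes for three subtasks (illustrative)
x = random_solution(m, rng)      # one integer index per subtask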

4.2. Whale Optimization Algorithm

The WOA is a swarm intelligence algorithm that mimics whale hunting. Its hunting behavior includes three foraging behaviors: surrounding prey, bubble net attack, and hunting prey randomly [12]. The mathematical model characterizing the three imitation operators is discussed in detail in the following subsections.
Surround prey: Humpback whales can identify the location of nearby target prey and assume the location of the target prey as the best position among the current whales, and then humpback whales approach the prey by continuously updating their position. The WOA presumes that the generated feasible solutions are ‘whales’ and takes the current best candidate solution or local optimal solution as the ‘best position for prey encircling’. The operator of the WOA that simulates the encircling prey is shown as follows:
$$\vec{X}(t+1) = \vec{X}_{best}(t) - \vec{A} \cdot |\vec{C} \cdot \vec{X}_{best}(t) - \vec{X}(t)| \qquad (2)$$
where $\cdot$ denotes element-by-element multiplication, $\vec{X}_{best}(t)$ is the current best whale position in the tth iteration, $\vec{X}$ is the currently selected search whale, $|\vec{C} \cdot \vec{X}_{best}(t) - \vec{X}(t)|$ denotes the distance between $\vec{C} \cdot \vec{X}_{best}(t)$ and $\vec{X}(t)$, and the coefficient vectors $\vec{A}$ and $\vec{C}$ are dynamic variables that can be updated by Equations (3) and (4), respectively:
$$\vec{A} = 2 \times \vec{a} \times \vec{r} - \vec{a} \qquad (3)$$
$$\vec{C} = 2 \times \vec{r} \qquad (4)$$
where $a$ decreases from 2 to 0 according to $a = 2 - 2t/t_{max}$, with $t_{max}$ the maximum number of iterations, and $\vec{r}$ is a random vector in [0, 1]. The random vector $\vec{r}$ limits $\vec{A}$ to the range $[-a, a]$. It is noteworthy that the random vectors $\vec{A}$ and $\vec{C}$ facilitate the whale to update its position toward optimal solutions.
Bubble net attacking: Humpback whales use bubble nets to push prey to the surface to catch them. The spiral bubble net attacking process formulas are as follows:
$$\vec{X}(t+1) = |\vec{X}_{best}(t) - \vec{X}(t)| \cdot e^{bl}\cos(2\pi l) + \vec{X}_{best}(t) \qquad (5)$$
where b is a constant used to characterize the logarithmic spiral shape and l is a random number in [−1, 1].
Hunting prey randomly: Humpbacks randomly select a whale position and swim toward the position to explore new target prey while searching for prey. The WOA simulates the process of a global search using the following formula:
$$\vec{X}(t+1) = \vec{X}_{rand}(t) - \vec{A} \cdot |\vec{C} \cdot \vec{X}_{rand}(t) - \vec{X}(t)| \qquad (6)$$
where $\vec{X}_{rand}$ is the position of a randomly selected whale.
The selection among the three operators is determined by a random switch control parameter p in [0, 1], and the vector $\vec{A}$ determines the hunting method of the whale. The whale is assumed to have a 50% probability of selecting bubble-net attacking for its position update during solution exploitation, and the choice between searching for prey and encircling prey is further determined by the adaptive variation of the vector $\vec{A}$. The mathematical model for the operator selection can be defined as follows:
$$\vec{X}(t+1) = \begin{cases} \vec{X}_{best}(t) - \vec{A} \cdot |\vec{C} \cdot \vec{X}_{best}(t) - \vec{X}(t)|, & \text{if } p < 0.5 \text{ and } |\vec{A}| < 1 \\ \vec{X}_{rand}(t) - \vec{A} \cdot |\vec{C} \cdot \vec{X}_{rand}(t) - \vec{X}(t)|, & \text{if } p < 0.5 \text{ and } |\vec{A}| \geq 1 \\ |\vec{X}_{best}(t) - \vec{X}(t)| \cdot e^{bl}\cos(2\pi l) + \vec{X}_{best}(t), & \text{if } p \geq 0.5 \end{cases} \qquad (7)$$
The WOA takes $\vec{A}$ as a switch for the transition between exploration ($|\vec{A}| \geq 1$) and exploitation ($|\vec{A}| < 1$). However, the exploration probability gradually decreases as the number of iterations increases because $|\vec{A}|$ decreases overall, according to its definition in Equation (3), which may lead the algorithm to be trapped in local optima [23].
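The branching of Equation (7) can be sketched as follows; for brevity, A and C are drawn as per-whale scalars rather than vectors, and the function name and interface are assumptions rather than the authors' code.

import numpy as np

def woa_update(X, X_best, a, rng, b=1.0):
    """One standard-WOA position update for every whale, following Equation (7)."""
    n, dim = X.shape
    X_new = X.copy()
    for i in range(n):
        A = 2 * a * rng.random() - a              # Equation (3), scalar simplification
        C = 2 * rng.random()                      # Equation (4)
        p = rng.random()
        if p < 0.5:
            if abs(A) < 1:                        # exploitation: encircle the current best whale
                X_new[i] = X_best - A * np.abs(C * X_best - X[i])
            else:                                 # exploration: move relative to a random whale
                X_rand = X[rng.integers(n)]
                X_new[i] = X_rand - A * np.abs(C * X_rand - X[i])
        else:                                     # spiral bubble-net attack, Equation (5)
            l = rng.uniform(-1, 1)
            X_new[i] = np.abs(X_best - X[i]) * np.exp(b * l) * np.cos(2 * np.pi * l) + X_best
    return X_new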

4.3. The WOA Enhanced by Lévy Flight

In the exploration phase, the standard WOA updates the position of each individual according to another randomly selected individual within a small range based on Equation (6), which limits the exploration space. Introducing the random walk mechanism of Lévy flight [58] into Equation (6), so that positions are updated with occasional long-distance leaps, can expand the search scope and strengthen the global search capability. The global search expression enhanced by Lévy flight, used to update the positions of humpback whales, can be described as follows:
$$\vec{X}(t+1) = \vec{X}_{rand}(t) + \alpha_0 \cdot |\vec{X}_{rand}(t) - \vec{X}(t)| \cdot \mathrm{sign}\left[\mathrm{rand} - \tfrac{1}{2}\right] \cdot Levy(s) \qquad (8)$$
where $\mathrm{sign}[\mathrm{rand} - \frac{1}{2}]$ is a symbolic function with three possible values: −1, 0, or 1; $\alpha_0$ is a step parameter for the distance $|\vec{X}_{rand}(t) - \vec{X}(t)|$ and is set to 0.05 in this study; and $Levy(s)$ is the Lévy distribution used to characterize the non-Gaussian random process, which can be expressed by the following formula:
$$L\acute{e}vy(s) \sim s^{-1-\beta}, \quad 0 < \beta \leq 2 \qquad (9)$$
The parameters s and β are the step length of the Lévy flight and the Lévy index, respectively. s can be generated from two normal distributions according to Mantegna's algorithm, with the following formula:
$$s = \frac{\mu}{|\upsilon|^{1/\beta}}, \quad \mu \sim N(0, \sigma_\mu^2), \quad \upsilon \sim N(0, \sigma_\upsilon^2) \qquad (10)$$
where β is set to 1.5 in this study, $\sigma_\upsilon = 1$, and:
$$\sigma_\mu = \left[\frac{\Gamma(1+\beta) \cdot \sin(\pi\beta/2)}{\Gamma\left(\frac{1+\beta}{2}\right) \cdot \beta \cdot 2^{(\beta-1)/2}}\right]^{1/\beta} \qquad (11)$$
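The Mantegna procedure of Equations (9)-(11) and the Lévy-enhanced exploration step of Equation (8) can be sketched as follows; β = 1.5 and α0 = 0.05 follow the values stated above, while the function names and array handling are illustrative assumptions.

import numpy as np
from math import gamma, sin, pi

def levy_step(dim, rng, beta=1.5):
    """Draw one Lévy-distributed step vector using Mantegna's algorithm (Equations (9)-(11))."""
    sigma_mu = (gamma(1 + beta) * sin(pi * beta / 2) /
                (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    mu = rng.normal(0.0, sigma_mu, dim)           # numerator sample, sigma_mu from Equation (11)
    nu = rng.normal(0.0, 1.0, dim)                # denominator sample, sigma_nu = 1
    return mu / np.abs(nu) ** (1 / beta)          # step length s, Equation (10)

def levy_exploration(X_i, X_rand, rng, alpha0=0.05):
    """Lévy-enhanced global search of Equation (8)."""
    direction = np.sign(rng.random(X_i.shape) - 0.5)     # sign[rand - 1/2]: -1, 0, or 1
    return X_rand + alpha0 * np.abs(X_rand - X_i) * direction * levy_step(X_i.size, rng)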

4.4. An Improved WOA Enhanced by Adaptive Crossover Strategies

The decrease in $|\vec{A}|$ in Equation (7) as the number of iterations increases is not conducive to global search at the later iteration stage, which makes it difficult for the standard WOA to escape local optima during global optimal solution exploration. Therefore, an adaptive crossover with three position adjustment strategies is embedded in the WOA. Most whales update their positions based on the adaptive parameter, which improves the position diversity of the whale population at a later stage. The adaptive crossover strategies increase information sharing among the whale population and strengthen the capability of global search in the later stage.
The three crossover strategies adopted to adaptively enhance the WOA are suitable for the integer representation; thus, they can operate directly on the whale position vectors denoting the feasible solutions of the standard WOA given in Section 4.1. The three crossover strategies are the multipoint crossover with one intersection point (MCOIP), the multipoint crossover with two intersection points (MCTIP), and the single-point crossover (SPC).
For the MCOIP, one intersection point is generated randomly for two search whale position vectors P1 and P2; the components behind the intersection point on P1 are exchanged with the corresponding components on P2. The MCTIP generates two intersection points randomly for P1 and P2, and the components between the two intersection points on P1 are exchanged with the corresponding components on P2, whereas the SPC only exchanges the component at the intersection point for P1 and P2. The MCOIP and MCTIP exchange many components of the two selected search whales, which means changing the candidate services of more subtasks; thus, they are more suitable for preventing the whale population from becoming too similar at the later iteration stage. The SPC exchanges only one component of the selected search whales with a small disturbance for each individual; thus, it is more suitable for whales with quite diverse positions at the early iteration stage. Therefore, a switch control parameter Ap is designed to guide the algorithm to select the proper crossover strategy adaptively. The adaptive parameter Ap can be formulated as follows:
$$A_p = e^{-(maxiter - t)/maxiter} \qquad (12)$$
where t denotes the current number of iterations, maxiter is the maximum number of iterations, and the value of Ap is in the range $[e^{-1}, 1]$. The selection of the crossover strategy (Cs) based on the adaptive Ap is defined by the following formula:
$$C_s = \begin{cases} SPC, & A_p < 0.5 \\ MCOIP, & A_p \geq 0.5 \text{ and } rand > 0.5 \\ MCTIP, & A_p \geq 0.5 \text{ and } rand \leq 0.5 \end{cases} \qquad (13)$$
At the early stage of the iterations, the value of Ap is small and the value of $|\vec{A}|$ is large; thus, the ASWOA is more inclined to conduct a global search with Equation (6), and the SPC has high priority to update whale positions. At the later stage, the ASWOA tends to conduct a local search using Equations (2) and (5), whereas Ap increases and guides the algorithm to select the MCOIP or MCTIP strategy for exploration. The random number generated in (0, 1) for the selection between MCOIP and MCTIP is designed to further improve the randomness and diversity of the whales. The different crossover strategies are shown in Figure 2.
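A compact sketch of the three crossover operators and the adaptive selection rule of Equations (12) and (13) is given below; the operators act directly on the integer position vectors of Section 4.1, and the function names and the tie-breaking at Ap = 0.5 are illustrative assumptions.

import numpy as np

def spc(p1, p2, rng):
    """Single-point crossover: swap only the component at one random index."""
    c1, c2, k = p1.copy(), p2.copy(), rng.integers(len(p1))
    c1[k], c2[k] = p2[k], p1[k]
    return c1, c2

def mcoip(p1, p2, rng):
    """Multipoint crossover with one intersection point: swap all components behind it."""
    c1, c2, k = p1.copy(), p2.copy(), rng.integers(1, len(p1))
    c1[k:], c2[k:] = p2[k:], p1[k:]
    return c1, c2

def mctip(p1, p2, rng):
    """Multipoint crossover with two intersection points: swap the segment between them."""
    c1, c2 = p1.copy(), p2.copy()
    i, j = sorted(rng.choice(len(p1), size=2, replace=False))
    c1[i:j], c2[i:j] = p2[i:j], p1[i:j]
    return c1, c2

def adaptive_crossover(p1, p2, t, maxiter, rng):
    """Pick a crossover strategy from the adaptive parameter Ap (Equations (12) and (13))."""
    Ap = np.exp(-(maxiter - t) / maxiter)         # Equation (12): grows from e^-1 toward 1
    if Ap < 0.5:
        return spc(p1, p2, rng)
    return mcoip(p1, p2, rng) if rng.random() > 0.5 else mctip(p1, p2, rng)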

4.5. An Improved WOA Enhanced by Adaptive Weight Strategies

The adaptive weight strategy is applied to preserve population diversity [14]. At the same time, it can also strengthen the local search ability of the WOA [59]. Therefore, based on the above idea, this paper uses an adaptive weight strategy to increase the hunting range of the exploitation phase. The adaptive weight w is given by:
$$w = \frac{maxiter^3 - t^3}{maxiter^3} \qquad (14)$$
where maxiter is the maximum number of iterations of the algorithm and t is the current iteration number. The value of w lies in the range [0, 1] and decreases from 1 to 0 as the iterations progress. In the exploitation phase of the WOA, the adaptive weight strategy is used to accelerate the speed at which the whales approach the prey so as to enhance the exploitation ability of the algorithm. In addition, the adaptive weight can also accelerate the convergence speed. Based on Equation (2), the WOA uses the following formula to update the position:
$$\vec{X}(t+1) = \vec{X}_{best}(t) - w \cdot \vec{A} \cdot |\vec{C} \cdot \vec{X}_{best}(t) - \vec{X}(t)| \qquad (15)$$
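A minimal sketch of Equations (14) and (15): the weight w shrinks the encircling step toward the best whale as the iterations progress; the function name and argument layout are assumptions for illustration.

import numpy as np

def weighted_encircle(X_i, X_best, A, C, t, maxiter):
    """Exploitation step with the adaptive weight (Equations (14) and (15))."""
    w = (maxiter ** 3 - t ** 3) / maxiter ** 3    # Equation (14): decays from 1 to 0
    return X_best - w * A * np.abs(C * X_best - X_i)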

4.6. The Proposed ASWOA

The Lévy flight strategy is introduced to expand the search space and increase the exploration ability for the global search. The adaptive crossover is applied to balance the exploration and exploitation of the WOA; at the same time, it enhances the ability of the WOA to jump out of local optima in late iterations. The adaptive weight strategy is developed to expand the hunting range of the whale bubble net. The pseudo-code of the ASWOA is given in Algorithm 1; the formulas and notations used in Algorithm 1 are defined in the sections above.
Algorithm 1: The WOA enhanced with adaptive weight, Lévy flight, and adaptive crossover (ASWOA)
 1:  Initial population Xi (i = 1, 2, ..., n), initialize crossover probability pc, flag = 0
 2:  Calculate the fitness of all individuals according to Equation (1)
 3:  Store the best solution as Xbest
 4:  while (t < tmax)
 5:   for each search whale Xi in population
 6:     Update a, C, A, p
 7:   if1 (p < 0.5)
 8:    if2 (|A| < 1)
 9:      // WOA enhanced by adaptive weight (Presented in Section 4.5)
 10:      Update Xi by Equation (15)
 11:     else if2
 12:        // WOA enhanced by Lévy flight (Presented in Section 4.3)
 13:      Update Xi by Equation (8)
 14:     end if2
 15:    else if1 (p ≥ 0.5)
 16:     // Bubble-net attacking (Presented in Section 4.2)
 17:     Update Xi by Equation (5)
 18:    end if1
 19:   end for
 20:   flag = flag + 1
 21:   // Adaptive crossover phase (Presented in Section 4.4)
 22:   if1 flag > population size/2 and rand > pc
 23:   Update adaptive parameters Ap by Equation (12)
 24:    for i = 1: population size; i = i + 2
 25:     if2 Ap > 0.5
 26:      if3 rand > 0.5
 27:        Conduct the MCOIP to update Xi and Xi+1
 28:      else if3
 29:        Conduct the MCTIP to update Xi and Xi+1
 30:      end if3
 31:     else if2
 32:      Conduct the SPC to update Xi and Xi+1
 33:     end if2
 34:   end for
 35:   flag = 0
 36: end if1
 37: Amend the updated positions that go beyond the search space
 38: Calculate the fitness of all individuals according to Equation (1)
 39: Update Xbest if there is a better solution
 40: t = t + 1
 41: end while
 42: output Xbest
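To show how the pieces of Algorithm 1 fit together, the following sketch wires the earlier illustrative helpers (random_solution, repair, weighted_encircle, levy_exploration, and adaptive_crossover) into one maximization loop; it is an illustrative reading of the pseudo-code under the assumptions stated above, not the authors' implementation.

import numpy as np

def aswoa(fitness, m, pop_size=30, maxiter=1000, pc=0.5, seed=0):
    """Illustrative ASWOA main loop following Algorithm 1 (maximizing `fitness`)."""
    rng = np.random.default_rng(seed)
    X = np.array([random_solution(m, rng) for _ in range(pop_size)], dtype=float)
    fit = np.array([fitness(repair(x, m)) for x in X])
    X_best, flag = X[np.argmax(fit)].copy(), 0
    for t in range(maxiter):
        a = 2.0 - 2.0 * t / maxiter
        for i in range(pop_size):
            A, C, p = 2 * a * rng.random() - a, 2 * rng.random(), rng.random()
            if p < 0.5 and abs(A) < 1:            # exploitation with adaptive weight, Equation (15)
                X[i] = weighted_encircle(X[i], X_best, A, C, t, maxiter)
            elif p < 0.5:                         # Lévy-flight exploration, Equation (8)
                X[i] = levy_exploration(X[i], X[rng.integers(pop_size)], rng)
            else:                                 # bubble-net attack, Equation (5) with b = 1
                l = rng.uniform(-1, 1)
                X[i] = np.abs(X_best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + X_best
        flag += 1
        if flag > pop_size / 2 and rng.random() > pc:     # adaptive crossover phase
            for i in range(0, pop_size - 1, 2):
                X[i], X[i + 1] = adaptive_crossover(X[i].astype(int), X[i + 1].astype(int),
                                                    t, maxiter, rng)
            flag = 0
        X = np.array([repair(x, m) for x in X], dtype=float)   # amend out-of-range positions
        fit = np.array([fitness(x.astype(int)) for x in X])
        if fit.max() > fitness(X_best.astype(int)):
            X_best = X[np.argmax(fit)].copy()
    return X_best.astype(int)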

5. Experiment Results

5.1. Benchmark Functions

In this study, the reliability of the experimental results is verified using 23 standard benchmark functions. There are three categories of benchmark functions: unimodal, multimodal, and fixed-dimension multimodal. Detailed descriptions are presented in Table 2, Table 3 and Table 4, respectively. Note that V_no is the number of design variables, Range is the range of each variable, and fmin is the theoretical optimum. Each function F1~F7 given in Table 2 is unimodal since it has only one global optimum and no local optima; thus, these functions can be taken as benchmarks to evaluate the convergence speed and exploitation capability of different algorithms. Multimodal functions F8~F13, presented in Table 3, are characterized by many local optima and have been widely utilized to evaluate the exploration capability of optimization algorithms. Fixed-dimension multimodal functions F14~F23, described in Table 4, have fewer local optima compared to the multimodal functions, which makes them suitable for testing both the exploitation and exploration capabilities of the algorithms.

5.2. Implementation Details and Comparative Approaches

The proposed ASWOA is compared to four cutting-edge algorithms, including the standard WOA [12], AIWPSO [14], NOBLDE [15], and TLBO [16], based on benchmark problems. The AIWPSO [14] is modified from the standard PSO, in which adaptive weight parameters and a mutation threshold have been introduced to increase the diversity of the population. NOBLDE [15], as an improved DE, utilizes the neighborhood mutation operator and opposition-based learning to improve the exploration capability. TLBO [16] is a swarm intelligence algorithm that simulates the traditional classroom teaching process, including a teacher stage and learner stage.
The parameter settings of the proposed ASWOA and the four comparative algorithms are presented in Table 5, in which the WOA [12], AIWPSO [14], NOBLDE [15], and TLBO [16] follow the original settings in the referenced articles. For all approaches, the population size is 30 and the maximum number of iterations is 1000. For each test function, the proposed ASWOA and the comparative algorithms are independently run 30 times, and each run is initialized with a randomly generated population. Statistical results are reported in Table 6, in which 'Mean' is the average of the outputs of the 30 executions and 'Std' is the corresponding standard deviation. The experiments are implemented on a PC with a 64-bit macOS operating system, an Intel i7-6500U 2.50 GHz CPU, 16 GB of RAM, and MATLAB R2017a.

5.3. Comparison Results of Different Algorithms

The results for the unimodal functions F1~F7 are listed in Table 6. As Figure 3 and Table 6 show, although the ASWOA fails to obtain the global optimum for functions F1 and F3, it provides a better exploitation ability than the WOA and TLBO. For function F1, the ASWOA converges faster than the WOA, TLBO, and NOBLDE. For functions F5 and F6, although the ASWOA can find the global optimum, it converges more slowly than the WOA, AIWPSO, NOBLDE, and TLBO. For function F7, the ASWOA provides a better exploitation ability than the AIWPSO and NOBLDE. In summary, the ASWOA can achieve good precision for unimodal functions.
For the multimodal functions F8~F13, whose results are presented in Table 6, the ASWOA is very competitive with the standard WOA, AIWPSO, NOBLDE, and TLBO. The ASWOA is the most efficient optimizer for functions F8, F10, F12, and F13, which shows that it has strong robustness in searching the entire solution space for functions F10, F12, and F13. It can be clearly seen for functions F10, F11, F12, and F13 in Figure 3 that the ASWOA has a strong ability to jump out of local optima in the later stages of the iteration. Compared with the other optimization algorithms, the ASWOA can effectively avoid being trapped in local optima for multimodal functions.
The averages and standard deviations for functions F14~F23 are shown in Table 6. The ASWOA and TLBO can achieve the global optimum solution for function F14. For functions F15 and F20, the solutions of all algorithms are almost the same, except for the AIWPSO. For function F16, the solutions of all algorithms are almost the same, except for NOBLDE. For functions F17~F19, all algorithms perform similarly. The ASWOA achieves better results than the other algorithms on functions F21~F22. For function F23, although the ASWOA cannot obtain the global optimum solution, it converges faster than the WOA, NOBLDE, and TLBO. As shown in Figure 3, the ASWOA has a strong ability to avoid being trapped in local optima for functions F14, F15, and F21.
In summary, the advanced exploration of the ASWOA is due to the adaptive parameter Ap controlling the crossover strategy: the algorithm can select different crossover strategies according to Ap. This not only strengthens the global search ability but also preserves population diversity, which prevents the algorithm from falling into local optima. The Lévy distribution is applied to improve the exploration ability by expanding the search space, and the adaptive weight is developed to expand the local search range. Consequently, the proposed ASWOA can acquire the best solution in most test experiments.

6. Application to QoS-MCSC Problems

In this section, the solution searching ability of the proposed ASWOA on QoS-aware MCSC problems is verified in a virtual application and compared with that of the four cutting-edge algorithms: the WOA [12], AIWPSO [14], NOBLDE [15], and TLBO [16]. The parameter settings and operating environment in this experiment are the same as those in Section 5.
Four QoS attributes, including time, cost, reliability, and availability, are considered for each MCSC. The values of the four attributes are randomly and symmetrically generated in the interval [0.7, 0.95]. It is assumed that customers care more about time and cost, so the weights of the different QoS attributes are set as wtime = 0.35, wcost = 0.35, wreliability = 0.15, and wavailability = 0.15, according to the preference of customers. The MCS correlation is 40%.
In this section, 16 experiments with different service scales were designed. The subtask sizes are 20, 30, 40, and 50, and the candidate service sizes of each subtask are 50, 100, 150, and 200, respectively. For example, T-30-100 indicates that the subtask scale is 30 and the candidate service scale is 100. The proposed ASWOA and the comparative algorithms are independently run 30 times, and each run is initialized with a randomly generated population. The experiment outputs are the averages of the best solutions of the 30 executions.
The results of the WOA, AIWPSO, NOBLDE, TLBO, and ASWOA on the 16 test problems are given in Table 7. Please note that 'Mean', 'Std', and 'Best' indicate the average result, the corresponding standard deviation, and the best result of the 30 executions, each of which takes the best solution found in that run as its output. It can be found that the ASWOA outperforms the other compared algorithms on all test problems according to the average QoS fitness values. Meanwhile, the ASWOA obtains the best solutions in all cases based on the best QoS fitness values. The ASWOA also shows better robustness, with a lower 'Std' than the WOA, AIWPSO, NOBLDE, and TLBO, except for T-20-100, T-20-150, T-20-200, T-30-50, T-30-200, T-50-50, and T-50-100. The ASWOA has a stronger ability to escape local optimal solutions due to the adaptive parameter Ap controlling the crossover strategy, so it can find solutions that are closer to the optimal solution, especially in large-scale problems. However, the randomness of the crossover strategy is relatively large; therefore, in some test cases, such as T-20-100, T-20-150, and T-50-100, the 'Std' of the ASWOA is not better than that of the compared algorithms.
The optimization results for the different scale QoS-MCSC problems are shown in Figure 4. It can be seen that the average optimization results obtained by the ASWOA are better than those of the WOA, TLBO, AIWPSO, and NOBLDE on the 16 QoS-MCSC test problems. This means that the ASWOA has stronger robustness and can solve QoS-MCSC problems of different scales well. Although the convergence rate of the ASWOA is lower than that of the WOA, TLBO, AIWPSO, and NOBLDE, the ASWOA demonstrates stronger global search capabilities, even in the middle and later stages of the iteration. This means that the ASWOA can effectively maintain population diversity and search robustness during the iteration process using the adaptive Lévy flight and crossover strategies. The experimental results show that the ASWOA achieves excellent results in solving symmetric QoS-MCSC problems.
The QoS-MCSC problem is a kind of combinatorial NP-hard problem. When the ASWOA is utilized to solve this kind of problem, it can effectively balance local search and global search to find better optimal solutions. The adaptive crossover strategies and Lévy flight are utilized to enhance the ASWOA's global search capability, the crossover probability pc is utilized to control the crossover frequency to ensure the efficiency of the algorithm, and the adaptive weight is utilized to expand the local search range. Through the above methods, the ASWOA achieves better convergence accuracy than the WOA, regardless of the convergence speed.
The Wilcoxon rank sum test is utilized to ascertain the significance of the differences observed between the ASWOA and the other algorithms. Each algorithm is run independently 30 times on the 16 test problems, and the best results of the 30 executions for each problem are used as samples. The Wilcoxon rank sum test is performed on the samples at a significance level of 5% to obtain a p-value; when the p-value is less than 0.05, there is a significant difference between the two samples. Table 8 gives the Wilcoxon rank sum test results. It can be seen that the p-values of the Wilcoxon rank sum test for the WOA, TLBO, AIWPSO, and NOBLDE are all less than 0.05. From a statistical perspective, it can be concluded that the ASWOA is significantly superior to the WOA, TLBO, AIWPSO, and NOBLDE on the tested QoS-MCSC problems.
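For reference, the described significance test can be reproduced with SciPy's rank-sum test, as sketched below; the arrays are placeholders standing in for the 30 best-of-run results of two algorithms on one test problem.

from scipy.stats import ranksums

# best QoS fitness of each independent run (placeholder values; 30 per algorithm in the paper)
aswoa_best = [0.89, 0.91, 0.90]
woa_best = [0.85, 0.86, 0.84]

stat, p_value = ranksums(aswoa_best, woa_best)
significant = p_value < 0.05      # 5% significance level, as used for Table 8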
The execution times of the algorithms are shown in Table 9. It can be seen that the solution time of the ASWOA is longer than that of the WOA. This is because the ASWOA strengthens the later global search capability through crossover and mutation, which reduces operational efficiency. Although the ASWOA takes more execution time, it can find better solutions for QoS-aware MCSC problems. Moreover, the execution time spent by the algorithm is insignificant compared to the hundreds of hours required to complete the manufacturing task. It is therefore worthwhile to spend slightly more computing time to find better solutions for manufacturing tasks, which offers higher cost performance for enterprises.
In summary, the ASWOA can effectively balance local search and global search. The advanced exploration of the ASWOA is due to the adaptive parameter Ap controlling the crossover strategy: the algorithm can select different crossover strategies according to Ap. This not only strengthens the global search ability but also preserves population diversity and enhances search robustness, which prevents the algorithm from falling into local optima. The Lévy distribution is applied to improve the exploration ability by expanding the search space, and the adaptive weight is developed to expand the local search range. The experimental outcomes show that the ASWOA performs well in solving large-scale QoS-MCSC problems.

7. Conclusions

In order to resolve the QoS-MCSC problem, one of the key issues in CMfg, the ASWOA has been proposed. In this paper, the adaptive crossover, adaptive weight, and Lévy flight strategies were introduced to better balance the global search and local search for QoS-MCSC problems. In the proposed method, global exploration is improved by Lévy flight, which can expand the search space of the whales. The adaptive crossover is used to strengthen the exploration ability, preserving the diversity of the population and enhancing the search robustness of the proposed algorithm. At the same time, the adaptive weight is used to enhance the exploitation ability in the bubble-net stage. The ASWOA is compared with other frontier algorithms on 23 benchmark functions and different scale QoS-MCSC problems to verify its performance. The test results illustrate that the ASWOA outperforms the compared cutting-edge algorithms.
In the future, to enhance the efficiency of the proposed algorithm, other versions of the ASWOA will be developed to solve QoS-MCSC problems by combining reinforcement learning. We will also utilize the proposed algorithm to solve other types of problems, such as permutation flow shop scheduling, microgrid operations planning, and so on. In this study, the QoS information is assumed to be known to users; however, the QoS information of some manufacturing services is unknown to users. Future work will establish a QoS prediction model for MCSs to predict the missing QoS values.

Author Contributions

Conceptualization, H.J. and C.J.; methodology, H.J.; software, H.J. and C.J.; validation, H.J., C.J. and S.L.; formal analysis, S.L.; investigation, H.J.; data curation, C.J.; writing—original draft preparation, H.J. and C.J.; writing—review and editing, H.J. and S.L.; visualization, S.L.; project administration, H.J.; funding acquisition, H.J. and S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Science and Technology Program of Guangzhou, China, grant number 2023A04J2006, and the National Natural Science Foundation of China, grant number 52275487.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Li, B.H.; Zhang, L.; Wang, S.L. Cloud manufacturing: A new service-oriented networked manufacturing model. Comput. Integr. Manuf. Syst. 2010, 16, 1–7+16. [Google Scholar]
  2. Tao, F.; Zhang, L.; Venkatesh, V.C.; Luo, Y.; Cheng, Y. Cloud manufacturing: A computing and service- oriented manufacturing model. Proc. Inst. Mech. Eng. Part B J. Eng. Manuf. 2011, 225, 1969–1976. [Google Scholar] [CrossRef]
  3. Wan, C.; Zheng, H.; Guo, L.; Xu, X.; Zhong, R.Y.; Yan, F. Cloud manufacturing in china: A review. Int. J. Comput. Integr. Manuf. 2020, 33, 229–251. [Google Scholar] [CrossRef]
  4. Talhi, A.; Fortineau, V.; Huet, J.C.; Lamouri, S. Ontology for cloud manufacturing based product lifecycle management. J. Intell. Manuf. 2017, 30, 2171–2192. [Google Scholar] [CrossRef]
  5. Tao, F.; Cheng, Y.; Xu, L.D.; Zhang, L.; Li, B.H. CCIoT-CMfg: Cloud computing and internet of things-based cloud manufacturing service system. J. Intell. Manuf. 2014, 10, 1435–1442. [Google Scholar]
  6. Jin, H.; Yao, X.F.; Chen, Y. Correlation-aware QoS modeling and manufacturing cloud service composition. J. Intell. Manuf. 2017, 28, 1947–1960. [Google Scholar] [CrossRef]
  7. Yu, C.; Zhang, W.; Xu, X.; Ji, Y.J.; Yu, S.Q. Data mining based multi-level aggregate service planning for cloud manufacturing. J. Intell. Manuf. 2018, 29, 1351–1361. [Google Scholar] [CrossRef]
  8. Pu, G.Q.; Yi, L.L.; Zhang, L.; Hu, W.S. Genetic algorithm-based fast real-time automatic mode-locked fiber laser. IEEE Photonics Technol. Lett. 2019, 32, 7–10. [Google Scholar] [CrossRef]
  9. Wu, L.H.; Wang, Y.N.; Yuan, X.F.; Chen, Z.L. Multiobjective optimization of HEV fuel economy and emissions using the self-adaptive differential evolution algorithm. IEEE T. Veh. Technol. 2011, 60, 2458–2470. [Google Scholar] [CrossRef]
  10. Koh, B.I.; George, A.D.; Haftka, R.T.; Fregly, B.J. Parallel asynchronous particle swarm optimization. Int. J. Numer. Methods Eng. 2006, 67, 578–595. [Google Scholar] [CrossRef]
  11. Karaboga, D.; Akay, B. A comparative study of artificial bee colony algorithm. Appl. Math. Comput. 2009, 214, 108–132. [Google Scholar] [CrossRef]
  12. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  13. Khanouche, M.E.; Attal, F.; Amirat, Y.; Chibani, A.; Kerkar, M. Clustering-based and QoS-aware services composition algorithm for ambient intelligence. Inf. Sci. 2019, 482, 419–439. [Google Scholar] [CrossRef]
  14. Li, M.; Chen, H.; Wang, X.D.; Zhong, N.; Lu, S.F. An improved particle swarm optimization algorithm with adaptive inertia weights. Int. J. Inf. Technol. Decis. Mak. 2019, 18, 833–866. [Google Scholar] [CrossRef]
  15. Deng, W.; Shang, S.F.; Cai, X.; Zhao, H.M.; Song, Y.J.; Xu, J.J. An improved differential evolution algorithm and its application in optimization problem. Soft Comput. 2021, 25, 5277–5298. [Google Scholar] [CrossRef]
  16. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–learning-based optimization: An optimization method for continuous non-linear large scale problems. Inf. Sci. 2012, 183, 1–15. [Google Scholar] [CrossRef]
  17. Wang, J.Q.; Bei, J.L.; Song, H.H.; Zhang, H.Y.; Zhang, P.L. A whale optimization algorithm with combined mutation and removing similarity for global optimization and multilevel thresholding image segmentation. Appl. Soft. Comput. 2023, 137, 110130. [Google Scholar] [CrossRef]
  18. Li, S.H.; Luo, X.H.; Wu, L.Z. An improved whale optimization algorithm for locating critical slip surface of slopes. Adv. Eng. Softw. 2021, 157, 103009. [Google Scholar] [CrossRef]
  19. Liu, Y.; Yang, S.; Li, D.; Zhang, S. Improved whale optimization algorithm for solving microgrid operations planning problems. Symmetry 2023, 15, 36. [Google Scholar] [CrossRef]
  20. Raj, S.; Bhattacharyya, B. Optimal placement of TCSC and SVC for reactive power planning using whale optimization algorithm. Swarm Evol. Comput. 2017, 40, 131–143. [Google Scholar] [CrossRef]
  21. Luo, J.; Shi, B. A hybrid whale optimization algorithm based on modified differential evolution for global optimization problems. Appl. Intel. 2019, 49, 1982–2000. [Google Scholar] [CrossRef]
  22. Edwards, A.M.; Phillips, R.A.; Watkins, N.W.; Freeman, M.P.; Murphy, E.J.; Afanasyev, V.; Buldyrev, S.V.; da Luz, M.G.E.; Raposo, E.F.; Stanley, H.E.; et al. Revisiting Lévy flight search patterns of wandering albatrosses, bumblebees and deer. Nature 2007, 449, 1044. [Google Scholar] [CrossRef] [PubMed]
  23. Liu, X.Y.; Wang, G.G.; Wang, L. LSFQPSO: Quantum particle swarm optimization with optimal guided Lévy flight and straight flight for solving optimization problems. Eng. Comput. 2022, 38, 4651–4682. [Google Scholar] [CrossRef]
  24. Zhou, J.J.; Yao, X.F. Multi-objective hybrid artificial bee colony algorithm enhanced with Lévy flight and self-adaption for cloud manufacturing service composition. Appl. Intel. 2017, 47, 721–742. [Google Scholar] [CrossRef]
  25. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef]
  26. Seghir, F.; Khababa, A. A hybrid approach using genetic and fruit fly optimization algorithms for QoS-aware cloud service composition. J. Intell. Manuf. 2017, 29, 1773–1792. [Google Scholar] [CrossRef]
  27. Chen, F.; Xu, S.Z.; Zhao, Y.; Zhang, H. An adaptive genetic algorithm of adjusting sensor acquisition frequency. Sensors 2020, 20, 990. [Google Scholar] [CrossRef]
  28. Tiberti, S.; Grillanda, N.; Mallardo, V.; Milani, G. A genetic algorithm adaptive homogeneous approach for evaluating settlement-induced cracks in masonry walls. Eng. Struct. 2020, 221, 111073. [Google Scholar] [CrossRef]
  29. Kuang, T.; Hu, Z.; Xu, M. A genetic optimization algorithm based on adaptive dimensionality reduction. Math. Probl. Eng. 2020, 2020, 8598543. [Google Scholar] [CrossRef]
  30. Wang, X.; Jiao, J.; Yin, J.W.; Zhao, W.S.; Han, X.; Sun, B.X. Underwater sonar image classification using adaptive weights convolutional neural network. Appl. Acoust. 2019, 146, 145–154. [Google Scholar] [CrossRef]
  31. Cao, J.F.; Wang, M.; Li, Y.F.; Zhang, Q. Improved support vector machine classification algorithm based on adaptive feature weight updating in the Hadoop cluster environment. PLoS ONE 2019, 14, e0215136. [Google Scholar] [CrossRef] [PubMed]
  32. Li, X.B.; Zhuang, P.; Yin, C. A metadata based manufacturing resource ontology modeling in cloud manufacturing systems. J. Ambient Intell. Humaniz. Comput. 2019, 10, 1039–1047. [Google Scholar] [CrossRef]
  33. Petrillo, A.; Caiazzo, B.; Piccirillo, G.; Santini, S.; Murino, T. An IoT-based and cloud-assisted AI-driven monitoring platform for smart manufacturing: Design architecture and experimental validation. J. Manuf. Technol. Manag. 2023, 34, 507–534. [Google Scholar]
  34. Hert, P.; Papakonstantinou, V.; Kamaraa, I. The cloud computing standard ISO/IEC 27018 through the lens of the EU legislation on data protection. Comput. Law Secur. Rev. 2016, 32, 16–30. [Google Scholar] [CrossRef]
  35. Tao, F.; Zhao, D.M.; Hu, Y.F.; Zhao, Z.D. Correlation-aware resource service composition and optimal-selection in manufacturing grid. Eur. J. Oper. Res. 2010, 201, 129–143. [Google Scholar] [CrossRef]
  36. Canfora, G.; Penta, M.D.; Esposito, R.; Villani, M.L. An Approach for QoS-Aware Service Composition Based on Genetic Algorithms. In Proceedings of the 7th Annual Conference on Genetic and Evolutionary Computation, Washington, DC, USA, 25–29 June 2005; pp. 1069–1075. [Google Scholar]
  37. Zeng, L.Z.; Benatallah, B.; Ngu, A.H.H.; Dumas, M.; Kalagnanam, J.; Chang, H. QoS-aware middleware for web services composition. IEEE Trans. Softw. Eng. 2004, 30, 311–327. [Google Scholar] [CrossRef]
  38. Zhou, J.J.; Yao, X.F. Hybrid teaching–learning-based optimization of correlation-aware service composition in cloud manufacturing. Int. J. Adv. Manuf. Technol. 2017, 9, 3515–3533. [Google Scholar] [CrossRef]
  39. Mardukhi, F.; Nematbakhsh, N.; Zamanifar, K.; Barati, A. QoS decomposition for service composition using genetic algorithm. Appl. Soft. Comput. 2013, 13, 3409–3421. [Google Scholar] [CrossRef]
  40. Gavvala, S.K.; Jatoth, C.; Gangadharan, G.R.; Buyya, R. QoS-aware cloud service composition using eagle strategy. Futur. Gener. Comp. Syst. 2019, 90, 273–290. [Google Scholar] [CrossRef]
  41. Ma, Y.; Zhang, C.W. Quick convergence of genetic algorithm for QoS-driven web service selection. Comput. Netw. 2008, 52, 1093–1104. [Google Scholar] [CrossRef]
  42. Liu, J.W.; Hu, L.Q.; Cai, Z.Q.; Xing, L.N.; Tan, X. Large-scale and adaptive service composition based on deep reinforcement learning. J. Vis. Commun. Image Represent. 2019, 65, 102687–102692. [Google Scholar] [CrossRef]
  43. Jiang, W.; Hu, S.; Liu, Z. Top k query for QoS-aware automatic service composition. IEEE Trans. Serv. Comput. 2014, 7, 681–695. [Google Scholar] [CrossRef]
  44. Jatoth, C.; Gangadharan, G.R.; Buyya, R. Computational Intelligence based QoS-aware web service composition: A systematic literature review. IEEE Trans. Serv. Comput. 2017, 10, 475–492. [Google Scholar] [CrossRef]
  45. Burugari, V.K.; Periasamy, P.S. Multi QoS constrained data sharing using hybridized pareto-glowworm swarm optimization. Cluster Comput. 2019, 22, S9727–S9735. [Google Scholar] [CrossRef]
  46. Wahid, A.; Gao, X.; Andreae, P. Multi-Objective Clustering Ensemble for High-Dimensional Data Based on Strength Pareto Evolutionary Algorithm (SPEA-II). In Proceedings of the 2015 IEEE International Conference on Data Science & Advanced Analytics (DSAA), Paris, France, 19–21 October 2015; pp. 1–9. [Google Scholar]
  47. Xiang, F.; Hu, Y.F.; Yu, Y.R.; Wu, H.C. QoS and energy consumption aware service composition and optimal-selection based on Pareto group leader algorithm in cloud manufacturing system. Cent. Europ. J. Oper. Res. 2014, 22, 663–685. [Google Scholar] [CrossRef]
  48. Rudziński, F. An Application of Generalized Strength Pareto Evolutionary Algorithm for Finding a Set of Non-Dominated Solutions with High-Spread and Well-Balanced Distribution in the Logistics Facility Location Problem. In Proceedings of the International Conference on Artificial Intelligence and Soft Computing, Zakopane, Poland, 11–15 June 2017; pp. 439–450. [Google Scholar]
  49. Gadhvi, B.; Savsani, V.; Patel, V. Multi-objective optimization of vehicle passive suspension system using NSGA-II, SPEA2 and PESA-II. Procedia Technol. 2016, 23, 361–368. [Google Scholar] [CrossRef]
  50. Napoli, C.D.; Rossi, S.A. Trade-off negotiation strategy for pareto-optimal service composition with additive QoS-constraints. Group Decis. Negot. 2021, 30, 119–141. [Google Scholar] [CrossRef]
  51. Kashyap, N.; Kumari, A.C.; Chhikara, R. Multi-objective optimization using NSGA II for service composition in IoT. Procedia Comput. Sci. 2020, 167, 1928–1933. [Google Scholar] [CrossRef]
  52. Suciu, M.; Pallez, D.; Cremene, M.; Dumitrescu, D. Adaptive MOEA/D for QoS-Based Web Service Composition. In Proceedings of the European Conference on Evolutionary Computation in Combinatorial Optimization, Vienna, Austria, 3–5 April 2013; pp. 73–84. [Google Scholar]
  53. Ramirez, A.; Parejo, J.A.; Romero, J.R.; Segura, S.; Ruiz-Cortes, A. Evolutionary composition of QoS-aware web services: A many-objective perspective. Expert Syst. Appl. 2017, 72, 357–370. [Google Scholar] [CrossRef]
  54. Teixeira, M.; Ribeiro, R.; Oliveira, C.; Massa, R. A quality-driven approach for resources planning in service-oriented architectures. Expert Syst. Appl. 2015, 42, 5366–5379. [Google Scholar] [CrossRef]
  55. Wang, P. QoS-aware web services selection with intuitionistic fuzzy set under consumer’s vague perception. Expert Syst. Appl. 2009, 36, 4460–4466. [Google Scholar]
  56. Zhang, S.; Xu, S.; Zhang, W.Y.; Yu, D.J.; Chen, K. A hybrid approach combining an extended BBO algorithm with an intuitionistic fuzzy entropy weight method for QoS-aware manufacturing service supply chain optimization. Neurocomputing 2017, 272, 439–452. [Google Scholar] [CrossRef]
  57. Hu, Z.; Yan, H.; Yan, T.; Geng, H.J.; Liu, G.Q. Evaluating QoE in VoIP networks with QoS mapping and machine learning algorithms. Neurocomputing 2020, 386, 63–83. [Google Scholar] [CrossRef]
  58. Zhou, Y.; Ling, Y.; Luo, Q. Lévy flight trajectory-based whale optimization algorithm for global optimization. IEEE Access 2017, 5, 6168–6186. [Google Scholar]
  59. Hu, H.P.; Bai, Y.P.; Xu, T. A whale optimization algorithm with inertia weight. WSEAS Trans. Comput. 2016, 15, 319–326. [Google Scholar]
Figure 1. Framework of the QoS-MCSC model.
Figure 2. Adaptive crossover strategies.
Figure 3. The convergence curves of 5 algorithms for benchmark functions.
Figure 4. The convergence curves of 5 algorithms for the different scale QoS-MCSC problems.
Table 1. QoS aggregation formulas for the four basic structures.
Attribute | Sequence | Parallel | Selective | Loop
Time | $TT=\sum_{i=1}^{n}T(MCS_i)$ | $TT=\max_i\{T(MCS_i)\}$ | $TT=\sum_{i=1}^{n}\alpha_i\,T(MCS_i)$ | $TT=k\sum_{i=1}^{n}T(MCS_i)$
Cost | $TC=\sum_{i=1}^{n}C(MCS_i)$ | $TC=\sum_{i=1}^{n}C(MCS_i)$ | $TC=\sum_{i=1}^{n}\alpha_i\,C(MCS_i)$ | $TC=k\sum_{i=1}^{n}C(MCS_i)$
Reliability | $TR=\prod_{i=1}^{n}R(MCS_i)$ | $TR=\min_i\{R(MCS_i)\}$ | $TR=\sum_{i=1}^{n}\alpha_i\,R(MCS_i)$ | $TR=\prod_{i=1}^{n}R(MCS_i)$
Availability | $TA=\prod_{i=1}^{n}A(MCS_i)$ | $TA=\prod_{i=1}^{n}A(MCS_i)$ | $TA=\sum_{i=1}^{n}\alpha_i\,A(MCS_i)$ | $TA=\prod_{i=1}^{n}A(MCS_i)$
Note: k is the number of loop cycles for the MCSC, n is the number of MCSs in a structure, and αi is the probability that the corresponding candidate service is selected.
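As a quick illustration of how the sequence-structure formulas in Table 1 are applied, the following sketch aggregates time, cost, reliability, and availability over a chain of selected MCSs. The `MCS` record type, its field names, and the example values are illustrative assumptions for this sketch, not artifacts of the original model.

```python
from dataclasses import dataclass
from math import prod

@dataclass
class MCS:
    # Illustrative QoS record for one selected manufacturing cloud service.
    time: float          # T(MCS_i), e.g., hours
    cost: float          # C(MCS_i), e.g., currency units
    reliability: float   # R(MCS_i) in (0, 1]
    availability: float  # A(MCS_i) in (0, 1]

def aggregate_sequence(services):
    """Aggregate the QoS of a sequence structure following Table 1:
    time and cost are summed; reliability and availability are multiplied."""
    return {
        "time": sum(s.time for s in services),
        "cost": sum(s.cost for s in services),
        "reliability": prod(s.reliability for s in services),
        "availability": prod(s.availability for s in services),
    }

if __name__ == "__main__":
    chain = [MCS(2.0, 30.0, 0.98, 0.99), MCS(1.5, 20.0, 0.95, 0.97)]
    print(aggregate_sequence(chain))
    # e.g. time 3.5, cost 50.0, reliability ≈ 0.931, availability ≈ 0.9603
```

The parallel, selective, and loop structures follow the same pattern with max/min, αi-weighted sums, and the cycle count k, respectively.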
Table 2. Unimodal test functions.
Function | V_no | Range | fmin
$F_1(x)=\sum_{i=1}^{Dim}x_i^2$ | 30 | [−100, 100] | 0
$F_2(x)=\sum_{i=1}^{Dim}|x_i|+\prod_{i=1}^{Dim}|x_i|$ | 30 | [−10, 10] | 0
$F_3(x)=\sum_{i=1}^{Dim}\left(\sum_{j=1}^{i}x_j\right)^2$ | 30 | [−100, 100] | 0
$F_4(x)=\max_i\{|x_i|,\ 1\le i\le Dim\}$ | 30 | [−100, 100] | 0
$F_5(x)=\sum_{i=1}^{Dim-1}\left[100(x_{i+1}-x_i^2)^2+(x_i-1)^2\right]$ | 30 | [−30, 30] | 0
$F_6(x)=\sum_{i=1}^{Dim}(\lfloor x_i+0.5\rfloor)^2$ | 30 | [−100, 100] | 0
$F_7(x)=\sum_{i=1}^{Dim}i\,x_i^4+\mathrm{random}[0,1)$ | 30 | [−1.28, 1.28] | 0
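The unimodal functions above are standard closed-form benchmarks and are easy to reproduce. The short check below, a NumPy sketch rather than code from the paper, evaluates the sphere function F1 and the Rosenbrock function F5 at their known optima to confirm the fmin column.

```python
import numpy as np

def f1(x):
    # F1, sphere function: sum of squared components.
    return np.sum(x ** 2)

def f5(x):
    # F5, Rosenbrock function: sum of 100*(x_{i+1} - x_i^2)^2 + (x_i - 1)^2.
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)

dim = 30
print(f1(np.zeros(dim)))  # 0.0 at x = (0, ..., 0)
print(f5(np.ones(dim)))   # 0.0 at x = (1, ..., 1)
```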
Table 3. Multimodal functions.
Function | V_no | Range | fmin
$F_8(x)=\sum_{i=1}^{Dim}-x_i\sin\left(\sqrt{|x_i|}\right)$ | 30 | [−500, 500] | −12,569.5
$F_9(x)=\sum_{i=1}^{Dim}\left[x_i^2-10\cos(2\pi x_i)+10\right]$ | 30 | [−5.12, 5.12] | 0
$F_{10}(x)=-20\exp\left(-0.2\sqrt{\frac{1}{Dim}\sum_{i=1}^{Dim}x_i^2}\right)-\exp\left(\frac{1}{Dim}\sum_{i=1}^{Dim}\cos(2\pi x_i)\right)+20+e$ | 30 | [−32, 32] | 0
$F_{11}(x)=\frac{1}{4000}\sum_{i=1}^{Dim}x_i^2-\prod_{i=1}^{Dim}\cos\left(\frac{x_i}{\sqrt{i}}\right)+1$ | 30 | [−600, 600] | 0
$F_{12}(x)=\frac{\pi}{Dim}\left\{10\sin^2(\pi y_1)+\sum_{i=1}^{Dim-1}(y_i-1)^2\left[1+10\sin^2(\pi y_{i+1})\right]+(y_{Dim}-1)^2\right\}+\sum_{i=1}^{Dim}u(x_i,10,100,4)$, with $y_i=1+\frac{x_i+1}{4}$ | 30 | [−50, 50] | 0
$F_{13}(x)=0.1\left\{\sin^2(3\pi x_1)+\sum_{i=1}^{Dim}(x_i-1)^2\left[1+\sin^2(3\pi x_{i+1})\right]+(x_{Dim}-1)^2\left[1+\sin^2(3\pi x_{Dim})\right]\right\}+\sum_{i=1}^{Dim}u(x_i,5,100,4)$ | 30 | [−50, 50] | 0
Table 4. Fixed-dimension multimodal functions.
Function | V_no | Range | fmin
$F_{14}(x)=\left(\frac{1}{500}+\sum_{j=1}^{25}\frac{1}{j+\sum_{i=1}^{2}(x_i-a_{ij})^6}\right)^{-1}$ | 2 | [−65, 65] | 0.998
$F_{15}(x)=\sum_{i=1}^{11}\left[a_i-\frac{x_1(b_i^2+b_i x_2)}{b_i^2+b_i x_3+x_4}\right]^2$ | 4 | [−5, 5] | 3.075 × 10−4
$F_{16}(x)=4x_1^2-2.1x_1^4+\frac{1}{3}x_1^6+x_1x_2-4x_2^2+4x_2^4$ | 2 | [−5, 5] | −1.0316
$F_{17}(x)=\left(x_2-\frac{5.1}{4\pi^2}x_1^2+\frac{5}{\pi}x_1-6\right)^2+10\left(1-\frac{1}{8\pi}\right)\cos x_1+10$ | 2 | [−5, 5] | 0.398
$F_{18}(x)=\left[1+(x_1+x_2+1)^2\left(19-14x_1+3x_1^2-14x_2+6x_1x_2+3x_2^2\right)\right]\left[30+(2x_1-3x_2)^2\left(18-32x_1+12x_1^2+48x_2-36x_1x_2+27x_2^2\right)\right]$ | 2 | [−2, 2] | 3
$F_{19}(x)=-\sum_{i=1}^{4}c_i\exp\left(-\sum_{j=1}^{3}a_{ij}(x_j-p_{ij})^2\right)$ | 3 | [1, 3] | −3.86
$F_{20}(x)=-\sum_{i=1}^{4}c_i\exp\left(-\sum_{j=1}^{6}a_{ij}(x_j-p_{ij})^2\right)$ | 6 | [0, 1] | −3.322
$F_{21}(x)=-\sum_{i=1}^{5}\left[(X-a_i)(X-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10.1532
$F_{22}(x)=-\sum_{i=1}^{7}\left[(X-a_i)(X-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10.4028
$F_{23}(x)=-\sum_{i=1}^{10}\left[(X-a_i)(X-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10.5363
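Because most fixed-dimension functions depend on tabulated coefficient matrices (a_ij, b_i, c_i, p_ij) that are not reproduced here, only the coefficient-free six-hump camel back function F16 is checked below. The minimizer (±0.0898, ∓0.7126) is a well-known reference point, and the snippet is an illustrative sketch rather than the paper's test harness.

```python
def f16(x1, x2):
    # F16, six-hump camel back function from Table 4.
    return 4*x1**2 - 2.1*x1**4 + x1**6 / 3 + x1*x2 - 4*x2**2 + 4*x2**4

# Both global minimizers give approximately the tabulated fmin of -1.0316.
print(round(f16(0.0898, -0.7126), 4))  # -1.0316
print(round(f16(-0.0898, 0.7126), 4))  # -1.0316
```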
Table 5. Parameter settings.
Algorithm | Parameter | Value
WOA | switch control parameter p1 | random number from 0 to 1
AIWPSO | acceleration factor c1 | 1.494
AIWPSO | acceleration factor c2 | 1.494
AIWPSO | inertia weight aw | related to the fitness values
NOBLDE | mutation factor F | 0.4
NOBLDE | cross-factor CR | 0.9
NOBLDE | opposition-based learning rate Jr | 0.3
ASWOA | switch control parameter p2 | random number from 0 to 1
ASWOA | switch probability Ap | exponential increase from e^−1 to 1
ASWOA | crossover probability pc | 0.2
ASWOA | adaptive weight w | related to iteration
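For reproduction purposes, the ASWOA settings in Table 5 can be gathered into a single configuration sketch. The exponential schedule used for Ap below (rising from e^−1 at the first iteration to 1 at the last) and the linearly decreasing adaptive weight w are plausible assumptions consistent with the table's descriptions; the exact update formulas are given in the methodology section, not here.

```python
import math

ASWOA_CONFIG = {
    "p2": "uniform random number in [0, 1] drawn each iteration",  # switch control parameter
    "pc": 0.2,                                                     # crossover probability
}

def switch_probability(t, t_max):
    # Assumed schedule: Ap grows exponentially from e^-1 (t = 0) to 1 (t = t_max).
    return math.exp(t / t_max - 1.0)

def adaptive_weight(t, t_max):
    # Assumed iteration-dependent weight; Table 5 only states that w is
    # "related to iteration", so a linear decay is used here as a placeholder.
    return 1.0 - t / t_max

for t in (0, 50, 100):
    print(t, round(switch_probability(t, 100), 4), round(adaptive_weight(t, 100), 4))
# 0   0.3679  1.0
# 50  0.6065  0.5
# 100 1.0     0.0
```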
Table 6. Comparison of the optimization results obtained for the three categories of benchmark functions.
Function | WOA (Mean/Std) | AIWPSO (Mean/Std) | NOBLDE (Mean/Std) | TLBO (Mean/Std) | ASWOA (Mean/Std)
F1 | 3.40 × 10−162 / 3.14 × 10−162 | 6.3640 / 1.7706 | 101.3706 / 43.0359 | 4.86 × 10−244 / 0 | 1.35 × 10−16 / 9.93 × 10−16
F2 | 9.99 × 10−109 / 1.39 × 10−109 | 6.0488 / 0.6607 | 7.3038 / 0.1031 | 3.17 × 10−122 / 1.29 × 10−122 | 4.90 × 10−14 / 1.09 × 10−14
F3 | 2.64 × 104 / 1.52 × 104 | 68.5764 / 18.6035 | 996.2803 / 111.2001 | 0 / 0 | 2.543 × 103 / 850.3463
F4 | 20.8265 / 10.3947 | 2.6526 / 0.3364 | 11.7412 / 1.1974 | 0 / 0 | 17.3966 / 1.7561
F5 | 27.1379 / 0.1974 | 213.7185 / 21.3637 | 794.9922 / 37.3637 | 28.9421 / 0.0230 | 14.8562 / 0.5037
F6 | 0.1663 / 0.0855 | 12.4782 / 6.8142 | 30.7177 / 0.6477 | 5.9124 / 0.2376 | 5.05 × 10−6 / 1.46 × 10−6
F7 | 0.0028 / 0.0021 | 11.8136 / 1.3533 | 0.1548 / 0.0524 | 0.0096 / 5.68 × 10−5 | 0.0625 / 0.0159
F8 | −9.00 × 103 / 2.2047 | 1.65 × 103 / 322.4827 | −1.65 × 103 / 322.4827 | −4.63 × 103 / 46.3830 | −1.23 × 104 / 59.1171
F9 | 0 / 0 | 137.7129 / 10.2544 | 14.0605 / 8.5556 | 0 / 0 | 5.9779 / 0.9966
F10 | 4.44 × 10−15 / 2.50 × 10−15 | 3.6330 / 0.2303 | 3.0020 / 0.0489 | 2.66 × 10−15 / 1.776 × 10−15 | 8.88 × 10−16 / 0
F11 | 0.0247 / 0.0247 | 1.5484 / 0.1219 | 1.4095 / 0.0328 | 0 / 0 | 5.60 × 10−15 / 5.27 × 10−15
F12 | 0.0033 / 0.0018 | 0.0220 / 0.0091 | 0.4212 / 0.0268 | 0.4506 / 0.0158 | 5.07 × 10−6 / 1.82 × 10−7
F13 | 0.1810 / 0.0223 | 0.5688 / 0.0838 | 3.0858 / 0.4690 | 2.7567 / 0.0057 | 9.10 × 10−5 / 4.09 × 10−5
F14 | 5.8806 / 4.8826 | 12.6705 / 2.62 × 10−10 | 12.6705 / 0 | 0.9980 / 0 | 0.9980 / 0
F15 | 6.92 × 10−4 / 3.64 × 10−5 | 0.1175 / 0.0091 | 3.07 × 10−4 / 1.38 × 10−19 | 3.07 × 10−4 / 1.62 × 10−19 | 3.07 × 10−4 / 2.77 × 10−7
F16 | −1.0316 / 1.10 × 10−11 | −0.9993 / 3.79 × 10−4 | −0.7657 / 0 | −1.0316 / 0 | −1.0316 / 0
F17 | 0.39788 / 3.52 × 10−7 | 0.39788 / 3.31 × 10−7 | 0.39788 / 5.62 × 10−8 | 0.39788 / 0 | 0.39788 / 4.13 × 10−8
F18 | 3.0000 / 1.77 × 10−5 | 3.0076 / 0.0062 | 3.0000 / 3.28 × 10−13 | 3.0000 / 0 | 3.0000 / 8.66 × 10−15
F19 | −3.8586 / 0.0039 | −3.4194 / 0.1678 | −3.8628 / 1.00 × 10−12 | −3.8628 / 0 | −3.8628 / 3.14 × 10−16
F20 | −3.2273 / 0.0947 | −1.0267 / 0.0047 | −3.2624 / 0.0596 | −3.3054 / 0.0111 | −3.3220 / 1.14 × 10−11
F21 | −7.6037 / 2.5485 | −5.0552 / 7.62 × 10−6 | −6.4180 / 3.7352 | −7.6042 / 2.5490 | −10.1532 / 2.67 × 10−11
F22 | −4.4059 / 0.6818 | −5.0875 / 1.05 × 10−4 | −10.4029 / 3.63 × 10−5 | −7.7453 / 2.6576 | −10.4029 / 4.32 × 10−12
F23 | −7.8307 / 2.7022 | −5.1283 / 7.56 × 10−5 | −10.5357 / 4.93 × 10−4 | −7.8324 / 2.7040 | −6.4791 / 4.0573
Table 7. The results of the WOA, AIWPSO, NOBLDE, TLBO, and ASWOA in 16 test problems.
Problem | Index | WOA | AIWPSO | NOBLDE | TLBO | ASWOA
T-20-50 | Mean | 0.5041 | 0.4550 | 0.4616 | 0.5582 | 0.5663
T-20-50 | Std | 0.0108 | 0.0074 | 0.0031 | 0.0107 | 0.0017
T-20-50 | Best | 0.5129 | 0.4696 | 0.4646 | 0.5663 | 0.5684
T-20-100 | Mean | 0.4932 | 0.4540 | 0.4648 | 0.4693 | 0.5823
T-20-100 | Std | 0.0085 | 0.0086 | 0.0070 | 0.0044 | 0.0142
T-20-100 | Best | 0.5023 | 0.4660 | 0.4736 | 0.4754 | 0.5949
T-20-150 | Mean | 0.4966 | 0.4523 | 0.4592 | 0.4696 | 0.5713
T-20-150 | Std | 0.0021 | 0.0038 | 0.0025 | 0.0048 | 0.0124
T-20-150 | Best | 0.4995 | 0.4570 | 0.4610 | 0.4757 | 0.5851
T-20-200 | Mean | 0.4939 | 0.4580 | 0.4696 | 0.5069 | 0.5758
T-20-200 | Std | 0.0065 | 0.0075 | 0.0055 | 0.0129 | 0.0121
T-20-200 | Best | 0.5003 | 0.4639 | 0.4773 | 0.5169 | 0.5878
T-30-50 | Mean | 0.4899 | 0.4216 | 0.4328 | 0.5129 | 0.5442
T-30-50 | Std | 0.0085 | 0.0057 | 0.0086 | 0.0126 | 0.0046
T-30-50 | Best | 0.5167 | 0.4240 | 0.4449 | 0.5242 | 0.5507
T-30-100 | Mean | 0.4731 | 0.4218 | 0.4319 | 0.4720 | 0.5521
T-30-100 | Std | 0.0041 | 0.0048 | 0.0071 | 0.0301 | 0.0038
T-30-100 | Best | 0.4867 | 0.4384 | 0.4406 | 0.5173 | 0.5686
T-30-150 | Mean | 0.4664 | 0.4247 | 0.4269 | 0.4905 | 0.5492
T-30-150 | Std | 0.0090 | 0.0068 | 0.0024 | 0.0109 | 0.0019
T-30-150 | Best | 0.4769 | 0.4382 | 0.4394 | 0.5135 | 0.5543
T-30-200 | Mean | 0.4581 | 0.4310 | 0.4301 | 0.4805 | 0.5389
T-30-200 | Std | 0.0063 | 0.0023 | 0.0042 | 0.0084 | 0.0065
T-30-200 | Best | 0.4666 | 0.4342 | 0.4459 | 0.5020 | 0.5476
T-40-50 | Mean | 0.4573 | 0.4125 | 0.4129 | 0.4552 | 0.5318
T-40-50 | Std | 0.0091 | 0.0038 | 0.0074 | 0.0097 | 0.0031
T-40-50 | Best | 0.4676 | 0.4131 | 0.4228 | 0.4636 | 0.5562
T-40-100 | Mean | 0.4498 | 0.4033 | 0.4102 | 0.5064 | 0.5255
T-40-100 | Std | 0.0059 | 0.0026 | 0.0030 | 0.0042 | 0.0019
T-40-100 | Best | 0.4572 | 0.4264 | 0.4240 | 0.5197 | 0.5391
T-40-150 | Mean | 0.4525 | 0.4110 | 0.4116 | 0.4888 | 0.5275
T-40-150 | Std | 0.0096 | 0.0023 | 0.0014 | 0.0053 | 0.0010
T-40-150 | Best | 0.4729 | 0.4228 | 0.4235 | 0.5035 | 0.5389
T-40-200 | Mean | 0.4569 | 0.4097 | 0.4169 | 0.4693 | 0.5336
T-40-200 | Std | 0.0139 | 0.0041 | 0.0086 | 0.0123 | 0.0015
T-40-200 | Best | 0.4766 | 0.4155 | 0.4376 | 0.4833 | 0.5454
T-50-50 | Mean | 0.4642 | 0.3929 | 0.3962 | 0.4322 | 0.5086
T-50-50 | Std | 0.0127 | 0.0051 | 0.0041 | 0.0132 | 0.0062
T-50-50 | Best | 0.4774 | 0.0042 | 0.4121 | 0.4470 | 0.5137
T-50-100 | Mean | 0.4525 | 0.3959 | 0.4013 | 0.4933 | 0.5174
T-50-100 | Std | 0.0047 | 0.0031 | 0.0016 | 0.0098 | 0.0116
T-50-100 | Best | 0.4580 | 0.3995 | 0.4126 | 0.5156 | 0.5308
T-50-150 | Mean | 0.4340 | 0.3989 | 0.4057 | 0.4577 | 0.5095
T-50-150 | Std | 0.0070 | 0.0036 | 0.0035 | 0.0115 | 0.0016
T-50-150 | Best | 0.4534 | 0.4037 | 0.4104 | 0.4766 | 0.5109
T-50-200 | Mean | 0.4363 | 0.4003 | 0.4011 | 0.4640 | 0.5141
T-50-200 | Std | 0.0096 | 0.0075 | 0.0092 | 0.0336 | 0.0066
T-50-200 | Best | 0.4534 | 0.4137 | 0.4179 | 0.5103 | 0.5204
Table 8. Wilcoxon rank sum test results on the 16 test problems.
Problem | WOA | AIWPSO | NOBLDE | TLBO
T-20-50 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11
T-20-100 | 1.83 × 10−4 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11
T-20-150 | 4.12 × 10−6 | 3.02 × 10−11 | 3.02 × 10−11 | 2.03 × 10−7
T-20-200 | 2.61 × 10−10 | 3.02 × 10−11 | 3.02 × 10−11 | 3.34 × 10−11
T-30-50 | 4.50 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.34 × 10−11
T-30-100 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11
T-30-150 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11
T-30-200 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11
T-40-50 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11
T-40-100 | 1.96 × 10−10 | 3.02 × 10−11 | 3.02 × 10−11 | 3.34 × 10−11
T-40-150 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 7.39 × 10−11
T-40-200 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11
T-50-50 | 5.57 × 10−10 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11
T-50-100 | 4.50 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 7.39 × 10−11
T-50-150 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11
T-50-200 | 9.92 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 2.87 × 10−10
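The entries in Table 8 are consistent with two-sided Wilcoxon rank sum p-values comparing the result distribution of ASWOA with that of each other algorithm; values around 3.02 × 10−11 typically arise when every run of one algorithm dominates every run of the other for samples of roughly 30 independent runs. The snippet below shows how such a comparison could be computed with SciPy, using synthetic fitness samples as a stand-in for the paper's raw run data, which are not reproduced here.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
# Synthetic stand-ins for 30 best-fitness values per algorithm on one problem.
aswoa_runs = rng.normal(loc=0.566, scale=0.002, size=30)
woa_runs = rng.normal(loc=0.504, scale=0.011, size=30)

stat, p_value = ranksums(aswoa_runs, woa_runs)
print(f"statistic = {stat:.3f}, p-value = {p_value:.3e}")
# A p-value below the usual 0.05 threshold indicates a statistically
# significant difference between the two result distributions.
```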
Table 9. Algorithm average time (s).
Problem | WOA | AIWPSO | NOBLDE | TLBO | ASWOA
QoS_20_200 | 0.1726 | 0.6208 | 0.1618 | 0.3735 | 0.2714
QoS_30_200 | 0.2287 | 0.9159 | 0.2037 | 0.72456 | 0.6029
QoS_40_200 | 0.2705 | 1.1985 | 0.2921 | 0.6529 | 0.4575
QoS_50_200 | 0.4823 | 1.4607 | 0.3482 | 0.7426 | 1.7334