Article

Improving Performance and Robustness with Two Strategies in Self-Adaptive Differential Evolution Algorithms for Planning Sustainable Multi-Agent Cyber–Physical Production Systems

Department of Computer Science and Information Engineering, Chaoyang University of Technology, Taichung 413310, Taiwan
Appl. Sci. 2025, 15(18), 10266; https://doi.org/10.3390/app151810266
Submission received: 19 August 2025 / Revised: 14 September 2025 / Accepted: 17 September 2025 / Published: 21 September 2025

Abstract

In the real world, it is common to form a team of two or more people to solve a problem collaboratively, taking advantage of the complementary values and skills of the team members. This idea can be used to develop more effective hybrid solution algorithms by combining different solution strategies. In the realm of metaheuristic optimization, many hybrid metaheuristic algorithms have been developed by combining different metaheuristic solution approaches. An interesting question is whether arbitrarily combining two different strategies can lead to a more effective solution approach for tackling complex problems. To evaluate whether a hybrid solution algorithm created by combining two different strategies is effective, we studied whether the hybrid algorithm can improve performance and robustness by comparing its solutions with those obtained by the corresponding two original single-strategy algorithms. More specifically, we studied whether arbitrarily combining two different DE strategies selected from four standard DE strategies can lead to a more effective solution approach for planning sustainable Cyber–Physical Production Systems (CPPSs) modeled with multi-agent systems (MASs) in terms of performance and robustness. Ten test cases for planning sustainable processes in CPPSs, with up to 20 operations and up to 40 resources, were used in the experiments. We conducted experiments by applying 13 algorithms, including 6 hybrid DE algorithms and 7 existing algorithms (the 4 standard DE algorithms, the NSDE algorithm, PSO, and SaNSDE), to find solutions for 10 discrete optimization planning problems with various types of constraints. The results of the experiments show that each self-adaptive hybrid DE algorithm either outperforms or performs as well as the four standard DE algorithms, the NSDE algorithm, and the PSO algorithm in most test cases in terms of performance and robustness for population sizes of 30 and 50. The rankings generated through the Friedman test based on the results of the experiments also show that the six hybrid DE algorithms rank better than most of the other seven existing algorithms, with only one exception. The rankings generated via the Friedman test indicate that the top three among the 13 algorithms are hybrid DE algorithms. The results of this study provide a simple rule for developing a more effective hybrid DE algorithm by combining two DE strategies.

1. Introduction

Metaheuristic optimization provides a practical and popular approach to solving complex optimization problems [1]. It refers to problem-solving techniques that aim to find good, yet not necessarily optimal, solutions for complex optimization problems. Many solution algorithms developed based on the metaheuristic optimization approach have been proposed for a wide variety of problem domains. Well-known examples include (1) evolutionary algorithms (EAs) such as genetic algorithms (GAs) [2] and differential evolution (DE) algorithms [3], which are inspired by Darwin’s evolutionary theory; (2) swarm intelligence algorithms such as the Bat algorithm [4], Particle Swarm Optimization (PSO) [5], the Grey Wolf Optimizer (GWO) [6], the Firefly Algorithm (FA) [7], and the Whale Optimization Algorithm (WOA) [8], which rely on information sharing among agents to influence the movement of each agent based on the social behaviors of the swarm; (3) algorithms based on physical rules, such as Simulated Annealing (SA) [9] and the Gravitational Search Algorithm (GSA) [10]; and (4) human-based algorithms such as Teaching–Learning-Based Optimization (TLBO) [11]. Although these metaheuristic algorithms have been applied to solve optimization problems in a wide variety of domains, many complex emerging real-world problems call for the development of more effective solvers. Different approaches have been proposed to improve the performance of existing metaheuristic algorithms. One of these approaches is to combine different existing metaheuristic algorithms to obtain more effective metaheuristic algorithms, called hybrid algorithms [12]. Combining different solution approaches or algorithms to obtain new solvers is called hybridization.
An interesting issue is whether a hybrid algorithm is more effective than the original algorithms that are combined in it. The effectiveness of a metaheuristic algorithm is assessed based on a performance metric and a robustness metric. The performance of a hybrid algorithm for a problem is characterized by the mean fitness value of the solutions obtained. The robustness of a hybrid algorithm for a problem is characterized by the standard deviation of the fitness function values of the solutions obtained. A hybrid algorithm is more effective if it outperforms the original algorithms combined in it in terms of both the performance metric and the robustness metric. However, finding an “adequate” combination of complementary solution approaches that improves performance and robustness is a challenge, and this challenge motivated the present study.
In this study, we focused on the research question of whether a self-adaptive hybrid DE algorithm obtained by combining two original standard DE algorithms is more effective than the two original standard DE algorithms. The Sustainable Development Goals (SDGs) [13] have significant and varied impacts across different sectors in smart cities, including the transport sector [14] and the manufacturing sector [15]. Achieving sustainable development requires the development of effective strategies and solution approaches. To support the development of effective methods to achieve the SDGs [13] in the manufacturing sector, we studied hybridization based on DE strategies to develop problem solvers for planning a class of sustainable Cyber–Physical Production Systems (CPPSs) [16]. Cyber-physical systems (CPSs) refer to networked systems with entities in cyber space and physical space operating based on the collaboration of these entities through computation, communications, and control technology. CPPSs are a class of CPSs applied in manufacturing systems. Machines, robots, actuators, and workpieces or parts being processed are entities in the physical space of CPPSs. Computational elements refer to entities related to production processes, manufacturing resources, product information, and requirements represented by proper Cyber World models of CPPSs. CPPSs can be modeled as multi-agent systems (MASs) [17] with different types of agents, such as process agents, resource agents, and optimization agents. These agents work cooperatively and autonomously to achieve the goals of production. The sustainable development of self-adaptive CPPSs [18] is one important trend. Although the CPPS paradigm bears significant potential for improving the economic and environmental performance of production, it poses challenges in the development of sustainable CPPSs [19]. Optimization of sustainable CPPSs is a challenging issue due to the discrete solution space and complex constraints, and it relies on the development of effective problem solvers. We will consider several combinations of mutation mechanisms in DE, develop self-adaptive hybrid DE algorithms, and assess the effectiveness and robustness of these self-adaptive hybrid DE algorithms in planning a class of sustainable CPPSs modeled with MASs.
The structure of the rest of this paper is as follows. We will provide a literature review in Section 2. The discrete constrained optimization problem for planning sustainable CPPSs to be addressed is formulated in Section 3. The fitness function to be used in the self-adaptive hybrid DE algorithms and the way to hybridize standard DE algorithms are briefly introduced in Section 4. The experimental design and analysis of the results for studying the effectiveness and robustness of the self-adaptive hybrid DE algorithms developed in this paper are presented in Section 5. A discussion of the results is given in Section 6. Section 7 concludes this paper.

2. Literature Review

In the literature, several hybridization approaches for metaheuristics have been studied [20]. These include (1) hybridizing metaheuristics with (meta-)heuristics, (2) hybridizing metaheuristics with constraint programming, (3) hybridizing metaheuristics with tree search techniques, (4) hybridizing metaheuristics with problem relaxation, and (5) hybridizing metaheuristics with dynamic programming. For example, combining evolutionary algorithms with local search leads to “memetic algorithms” (MAs) [21,22]. Combining differential evolution with particle swarm optimization creates hybrid differential evolution and particle swarm optimization algorithms [23,24,25]. Combining ant colony optimization with differential evolution produces hybrid differential evolution and ant colony optimization algorithms [26,27,28]. Combining the whale optimization algorithm with simulated annealing [29] or ant colony optimization [30] also shows advantages in terms of improved performance and efficiency. Hybridizing the firefly algorithm with differential evolution [31] and hybridizing the firefly algorithm with particle swarm optimization [32] yield enhanced performance and search efficiency. The hybrid metaheuristic algorithms in the studies mentioned above are developed based on the hybridization of different metaheuristic approaches. However, hybridization based on different metaheuristic approaches is not the only way to develop hybrid metaheuristic algorithms. Hybridization can also be based on different variants of a specific metaheuristic approach. For example, hybridization based on variants of differential evolution approaches [33,34] can create more effective hybrid differential evolution algorithms.
Hybridization of a combination of different optimization approaches or different variants of a specific metaheuristic approach to achieve superior performance relies on exploiting the complementary character of different approaches. Arbitrarily combining different metaheuristic algorithms might not lead to a better solver. Reference [35] focuses on the effectiveness of two hybrid metaheuristic algorithms for solving ridesharing problems by (1) hybridizing FA with PSO and (2) hybridizing FA with DE. The results show that, for the ridesharing recommendation problem, hybridizing FA with PSO creates a more efficient algorithm, whereas hybridizing FA with DE does not. Therefore, choosing an “adequate” combination of complementary solution approaches can be the key to benefitting from the synergy in the hybridization approach [20]. However, finding an “adequate” combination of complementary solution approaches that can work effectively to improve performance and robustness is a difficult task that relies on expertise from different optimization approaches and problem domains.
According to the “No Free Lunch” theorem [36], no single algorithm or method is universally superior to all others across all possible problems [37]. It follows that a hybridization scheme that works well for a specific problem might perform poorly for other ones. Therefore, a hybrid metaheuristic algorithm must be tested and compared with others to study its effectiveness for solving a problem.
The effectiveness of a hybrid metaheuristic algorithm for solving a problem is assessed based on a performance metric and a robustness metric. A commonly used performance metric for stochastic optimization methods is the mean fitness value of the solutions obtained. Robustness is another important property of stochastic optimization algorithms in the context of metaheuristic algorithms [38,39], swarm intelligence algorithms [40,41], and differential evolution [42,43,44]. The robustness of a hybrid metaheuristic algorithm for a problem is characterized by the standard deviation of the fitness function values of the solutions. A hybrid metaheuristic algorithm is more effective if it outperforms the original algorithms used in the hybrid algorithm in terms of both the performance metric and the robustness metric. Therefore, the research issues of hybridization approaches in the development of hybrid metaheuristic algorithms can be divided into two parts: (1) determination of the combinations of the metaheuristic mechanisms to be hybridized, and (2) verification of the effectiveness of the hybrid metaheuristic algorithms for the problem of interest based on the experimental results of a set of test cases.
In this study, we focused on the research question of whether a self-adaptive hybrid DE algorithm obtained by combining two original standard DE algorithms is more effective than the two original standard DE algorithms. To characterize the effectiveness of a hybrid metaheuristic algorithm quantitatively, a self-adaptive hybrid DE algorithm, obtained by hybridizing two original metaheuristic algorithms for a specific problem, is said to outperform the two originals if its mean fitness function value is better than those of the individual algorithms. A self-adaptive hybrid DE algorithm, obtained by hybridizing two metaheuristic algorithms, is said to be more robust than the two originals if the standard deviation of its fitness function values is lower than those of the original algorithms.
In this paper, we focus on hybridization based on differential evolution to develop problem solvers for planning a class of sustainable Cyber–Physical Production Systems (CPPSs) [16]. Cyber-Physical Systems (CPSs) [45] refer to networked systems with entities in cyber space and physical space that operate based on the collaboration of these entities through computation, communications, and control technology. Cyber–Physical Production Systems (CPPSs) are a class of CPSs applied in manufacturing environments to perform production-related tasks [16]. Multi-Agent Systems (MASs) [46] provide a paradigm to model the operation and interaction of autonomous, cooperative, and intelligent agents in CPPSs [47,48]. The global trend toward pursuing the Sustainable Development Goals (SDGs) underscores the need to develop sustainable CPPSs. Although the CPPS paradigm bears significant potential to improve the economic and environmental performance of production, it poses challenges in the development of sustainable CPPSs [19]. The optimization of sustainable CPPSs is a challenging issue due to the discrete solution space and complex constraints, and it relies on the development of effective problem solvers. The architecture of a CPS can be divided into five levels: connection, conversion, cyber, cognition, and configuration [49]. Planning and scheduling are at the configuration level in CPS design [50]. Formulation of the optimization problem for planning sustainable CPPSs requires the construction of the Cyber World models for CPPSs. Petri nets provide a tool to construct Cyber World models for CPPSs [51].
In [52], planning of sustainable CPPSs is formulated as a discrete constrained optimization problem based on Cyber World models of CPPSs in an MAS architecture with process agents, resource agents, and optimization agents. A self-adaptive metaheuristic algorithm based on the DE approach was proposed in [52] for planning processes of sustainable CPPSs modeled by Discrete Timed Petri Nets (DTPNs). The algorithm proposed in [52] is obtained by combining the mechanisms of two standard DE algorithms and the method proposed in [53] to handle constraints. Whether other ways to combine standard DE algorithms are effective is an interesting research question.
In the DE literature, many well-known self-adaptive variants of DE algorithms have been proposed to improve the original DE algorithm. These include SaDE [54], JADE [55], SHADE [56], and L-SHADE [57]. In SaDE, the learning strategy and the two control parameters are gradually self-adapted according to the learning experience. JADE improves performance via adaptive updating of the control parameters and a new mutation strategy with an optional external archive. SHADE adapts the control parameters in DE based on a historical memory of successful control parameter settings. L-SHADE continually decreases the population size according to a linear function to improve the performance of SHADE. All the self-adaptive variants of DE algorithms mentioned above focus on improving the adaptation of control parameters to improve performance. The effect of hybridizing two mutation strategies in self-adaptive DE algorithms on performance and robustness is a research gap that has rarely been explored in the DE literature. This paper focuses on the issue of improving performance and robustness through the hybridization of two standard DE strategies.
In this paper, we consider several combinations of mutation strategies in differential evolution, develop self-adaptive hybrid DE algorithms, and assess the effectiveness and robustness of these self-adaptive hybrid DE algorithms in solving the planning problem of a class of sustainable CPPSs. There are three steps in a standard DE algorithm after initialization: mutation, crossover, and selection. Therefore, the mechanisms used in initialization, mutation, crossover, and selection of a DE approach define a specific DE algorithm. In this paper, a DE algorithm that uses the original mechanisms, which are well defined in the literature for initialization, mutation, crossover, and selection, is called a standard DE algorithm. We focus on the hybridization of two standard differential evolution algorithms based on four mutation strategies defined in the literature: (1) DE/rand/1, (2) DE/best/1, (3) DE/rand/2, and (4) DE/best/2 [58]. We hybridize each combination of two of the four mutation strategies to create self-adaptive hybrid DE algorithms. Each self-adaptive hybrid DE algorithm adaptively favors whichever of its two mutation strategies has the higher historical success rate in improving solutions. We design experiments to study the performance and robustness of each self-adaptive hybrid DE algorithm created. The results show that each self-adaptive hybrid DE algorithm created in this study is more effective than the two original standard DE algorithms in terms of performance and robustness for most test cases. The six self-adaptive hybrid DE algorithms also either outperform or perform as well as the NSDE algorithm and the PSO algorithm for most test cases in the experiments.
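For reference, the classical forms of these four mutation strategies, as commonly defined in the DE literature, can be sketched as follows. This is a minimal Java sketch under the assumption of a real-valued population stored as double arrays; the class and method names are illustrative and do not correspond to the implementation used in the experiments.

```java
import java.util.Random;

/** Minimal sketch of the four standard DE mutation strategies: DE/rand/1, DE/best/1,
 *  DE/rand/2, and DE/best/2. Class and method names are illustrative only. */
public class DeMutation {
    private static final Random RNG = new Random();

    /** Returns the mutant vector for individual i of the population using the given strategy (1-4). */
    static double[] mutate(int strategy, double[][] pop, double[] best, int i, double F) {
        int D = pop[i].length;
        int[] r = distinctIndices(pop.length, i, 5);   // five distinct indices, all different from i
        double[] v = new double[D];
        for (int d = 0; d < D; d++) {
            switch (strategy) {
                case 1:  // DE/rand/1
                    v[d] = pop[r[0]][d] + F * (pop[r[1]][d] - pop[r[2]][d]);
                    break;
                case 2:  // DE/best/1
                    v[d] = best[d] + F * (pop[r[0]][d] - pop[r[1]][d]);
                    break;
                case 3:  // DE/rand/2
                    v[d] = pop[r[0]][d] + F * (pop[r[1]][d] - pop[r[2]][d])
                                        + F * (pop[r[3]][d] - pop[r[4]][d]);
                    break;
                default: // DE/best/2
                    v[d] = best[d] + F * (pop[r[0]][d] - pop[r[1]][d])
                                   + F * (pop[r[2]][d] - pop[r[3]][d]);
                    break;
            }
        }
        return v;
    }

    /** Draws 'count' distinct indices in [0, n) that differ from 'exclude' (assumes n > count). */
    static int[] distinctIndices(int n, int exclude, int count) {
        int[] out = new int[count];
        boolean[] used = new boolean[n];
        used[exclude] = true;
        for (int k = 0; k < count; k++) {
            int idx;
            do { idx = RNG.nextInt(n); } while (used[idx]);
            used[idx] = true;
            out[k] = idx;
        }
        return out;
    }
}
```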
In a previous study [52], a hybrid DE algorithm that uses two fixed strategies was proposed, and its performance was demonstrated. This paper generalizes the results of [52] by proposing a systematic approach to developing self-adaptive hybrid DE algorithms based on the hybridization of any two strategies selected from four candidate strategies, defining metrics to assess the performance and robustness of these self-adaptive hybrid DE algorithms, and experimentally verifying their performance and robustness. The contributions of this paper are summarized as follows.
We provide a potential method to develop effective self-adaptive hybrid DE algorithms based on the hybridization of two DE strategies selected from a set of four candidate DE strategies.
We illustrate that each self-adaptive hybrid DE algorithm created in this study either outperforms or performs as well as the two corresponding DE algorithms and the other three existing algorithms for most test cases in terms of performance and robustness.
We provide rigorous statistical evidence that the observed performance differences are significant by ranking the six self-adaptive hybrid DE algorithms and the other seven existing algorithms based on the Friedman test and show that the average rankings of the six self-adaptive hybrid algorithms are better than those of most of the other seven existing algorithms, with only one exception. The average rankings generated based on the Friedman test indicate that the top three among the 13 algorithms are the self-adaptive hybrid algorithms.

3. Optimization Problem Formulation for Sustainable CPPSs

Cyber–Physical Production Systems (CPPSs) consist of different types of entities: physical components, computational elements, and communication infrastructure. Machines, robots, actuators, and work pieces or parts being processed are physical components in CPPSs. Computational elements refer to entities related to production processes, manufacturing resources, product information, and requirements represented by proper Cyber World models of CPPSs. Communication infrastructure enables real-time communication between physical components and computational elements through the exchange of data. Multi-Agent Systems (MASs) provide an architecture to capture the operations of CPPSs based on the agents’ characteristics of autonomy and cooperation for analysis and optimization of performance. Entities in CPPSs such as manufacturing resources, processes, and process planners can be represented by different types of agents in MAS models of CPPSs. These include process agents, resource agents, task agents, and optimization agents. A process agent represents a production process described by a Cyber World Process model. A resource agent represents a manufacturing resource. The capability of a resource agent is specified by a Cyber World model. A task agent represents a task with given time requirements and sustainability requirements. Figure 1 shows an example of an MAS for a CPPS.
To describe CPPSs and formulate the problem, we define variables, models, and symbols in Table 1.
Operations are the elements that represent the production activities in CPPSs. To build the Cyber World models for a CPPS, proper Cyber World models for operations must be constructed first. In this paper, we construct the Cyber World models of operations based on Discrete Timed Petri Nets (DTPNs). A DTPN $PN = (P, T, F, m_0, \mu)$ is defined in Table 1. The operations in CPPSs are represented by the Cyber World models for operations. The Cyber World model for an operation $k$ is denoted by DTPN $O_k = (P_k, T_k, F_k, m_{k0}, \mu_k)$ and is defined in Table 1, where $O_k$ is an abbreviation for $(P_k, T_k, F_k, m_{k0}, \mu_k)$. A process typically consists of a set of operations. We define a composition operation $\oplus$ to combine the Cyber World models of the operations required for a process. The Cyber World model of a process agent that requires the operations $k$ with $d_k = 1$ is $O = \bigoplus_{k \in \{ k \in \{1, 2, \ldots, K\} \mid d_k = 1 \}} O_k$, which is an acyclic DTPN.
Let us use examples to illustrate the Cyber World models mentioned above. Consider three operations, $O_1$, $O_2$, and $O_3$. The Cyber World models for operations $O_1$, $O_2$, and $O_3$ are $O_1 = (P_1, T_1, F_1, m_{10}, \mu_1)$, $O_2 = (P_2, T_2, F_2, m_{20}, \mu_2)$, and $O_3 = (P_3, T_3, F_3, m_{30}, \mu_3)$, respectively. Suppose the workflow of a process agent requires operations $O_1$, $O_2$, and $O_3$ to be performed. The Cyber World model of the process agent is then $O = O_1 \oplus O_2 \oplus O_3$. Figure 2 shows an example of the Cyber World model for a process agent. The Cyber World models for operations $O_1$, $O_2$, and $O_3$ and the Cyber World model of the process agent $O = O_1 \oplus O_2 \oplus O_3$ shown in Figure 2 are used to capture the operations required for a specific process. The composition operation $\oplus$ combines multiple DTPNs into a new DTPN by merging common places, transitions, or arcs in the different DTPNs. The Cyber World models for the three operations $O_1$, $O_2$, and $O_3$ on the left side of Figure 2 are combined, and the Cyber World model $O$ for the process agent is obtained, by merging the common transitions $t_2$ and $t_3$, the common places $p_1$, $p_2$, and $p_3$, and the common arcs connecting the transitions to the places or the places to the transitions.
In this paper, the requirements of process agent $n$ are denoted by $eq$, which consists of two parts: the operations in the production process and the total processing time. $eq = (d_1, d_2, d_3, \ldots, d_K, \omega)$, where $d_k$ is equal to 1 if operation $k$ is required to be performed in the requirements of the given process agent and $d_k$ is equal to 0 otherwise, and the overall processing time must be no greater than $\omega$.
Let $R$ denote the set of indices of resource agents in the CPPS. A resource agent $a$, where $a \in R$, is an entity that may autonomously submit bids according to its capabilities. Let $J_a$ denote the number of bids submitted by resource agent $a$. Let $B_{aj} = (o_{aj1}, o_{aj2}, o_{aj3}, \ldots, o_{ajK}, \tau_{aj}, e_{aj})$ denote the $j$-th bid submitted by resource agent $a$, where $o_{ajk} = 1$ if operation $k$ can be performed by resource agent $a$ and $o_{ajk} = 0$ otherwise, $\tau_{aj}$ is the overall processing time for performing the specified operations in the bid, and $e_{aj}$ is the overall energy consumption required to perform the specified operations in $B_{aj}$.
$A_{aj} = (P_{aj}, T_{aj}, F_{aj}, m_{aj0}, \mu_{aj})$ is the DTPN representing the activity that resource agent $a$ performs for the operations specified in the bid $B_{aj}$. $A_{aj}$ represents the capability of resource agent $a$ to perform the operations specified in $B_{aj}$. For example, the three operations in Figure 2 can be performed by different resource agents in various ways, subject to the capabilities of the resource agents. The Cyber World models are used to represent the capabilities of resource agents, that is, the different ways in which resource agents may perform the operations. An activity is a specific way in which a resource agent performs one or more operations. Suppose there are four resource agents; that is, $R = \{1, 2, 3, 4\}$. Suppose resource agent 1 is able to perform operations $O_1$ and $O_3$ together; this is represented by $A_{11}$ in Figure 3. The case in which resource agent 1 performs operation $O_1$ only is represented by $A_{12}$ in Figure 3, and the case in which resource agent 1 performs operation $O_3$ only is represented by $A_{13}$ in Figure 3. Figure 3 shows the Cyber World models for the four resource agents 1, 2, 3, and 4 performing various operations.
Let $x_{aj}$ be a variable that specifies whether $B_{aj}$ is selected to perform the operations required by the process $O$ of a process agent: $x_{aj} = 1$ if $B_{aj}$ is selected to perform the operations in $O$ and $x_{aj} = 0$ otherwise.
A configuration is defined by the process of a given process agent $n$ and a set of selected bids of resource agents. That is, the process $O$ of a process agent and the set of selected bids of resource agents specified by $\{ B_{aj} \mid x_{aj} = 1, j \in \{1, 2, \ldots, J_a\}, a \in R \}$ jointly form a configuration $c(O, x, B)$.
The problem is to determine the set of selected bids that achieve the time requirements and sustainability requirements.
The Cyber World model $\Psi = (P, T, F, m_0, \mu)$ corresponding to a configuration $c(O, x, B)$ for process agent $n$ is defined by the Cyber World model $O$ of process agent $n$ and the Cyber World models $A_{aj}$, with $x_{aj} = 1$, $j \in \{1, 2, \ldots, J_a\}$, $a \in R$, of the activities of resource agents. More specifically, the Cyber World model is represented by the DTPN $\Psi = (P, T, F, m_0, \mu) = O \oplus \bigoplus_{\{(a, j) \mid x_{aj} = 1, \ j \in \{1, 2, \ldots, J_a\}, \ a \in R\}} A_{aj}$.
Figure 4 shows two configurations for performing the three operations mentioned above. An arbitrary configuration might not be able to complete all the operations required for a production process. A configuration is called a feasible configuration if it can complete all the operations required for a production process $O$. If some of the operations in $O$ cannot be performed by the resource activities in a configuration, the configuration is called infeasible. Figure 5 shows two infeasible configurations for performing the three operations. Figure 5a is an infeasible configuration, as operation $O_3$ is not performed by any resource. Figure 5b is an infeasible configuration, as operation $O_2$ is not performed by any resource.
Although all the operations specified in a production process $O$ can be performed by resources in a feasible configuration, other requirements must be satisfied for a feasible configuration to meet the goals of production. These include the requirements on the operations required, the time requirements, and other types of requirements. The problem is to find a feasible configuration that optimizes some objective function to achieve the goals of production while meeting the above-mentioned requirements. The objective function $H(x)$ is related to the time requirements and the energy consumption associated with the configuration. We define $\Gamma(O, x, B)$ as a function that calculates the total processing time of a configuration $c(O, x, B)$ of process agent $n$, and $Eng(O, x, B)$ as a function that calculates the energy consumption of a configuration $c(O, x, B)$ of process agent $n$. $H(x)$ is defined by Equation (1). To formulate the optimization problem, we denote the constraints on the operations required by $eq$ by $(O, x, B, eq)$, the constraints on the time requirements by $\mathrm{T}(O, x, B)$, and the other types of constraints by $\Pi(O, x, B)$. The optimization problem is formulated in Equations (1)–(5).
$\max\ H(x) = H(\Gamma(O, x, B), Eng(O, x, B))$  (1)
s.t. the operation constraints $(O, x, B, eq)$  (2)
the time constraints $\mathrm{T}(O, x, B)$  (3)
the other constraints $\Pi(O, x, B)$  (4)
$x_{aj} \in \{0, 1\} \quad \forall a \in R, \ j \in \{1, 2, 3, \ldots, J_a\}$  (5)
The objective function $H(\Gamma(O, x, B), Eng(O, x, B))$ is increasing with respect to $\omega - \Gamma(O, x, B)$ and decreasing with respect to $Eng(O, x, B)$. Pursuing efficiency and sustainability are typically conflicting objectives. The trade-off between efficiency and sustainability is modeled by the weighting factors $w_1$ and $w_2$, which are associated with the total processing time and the energy consumption of a configuration, respectively. For example, we define the objective function $H(\Gamma(O, x, B), Eng(O, x, B)) = w_1(\omega - \Gamma(O, x, B)) - w_2 Eng(O, x, B)$ in Equation (6) in the following problem formulation to maximize the objective function. The setting of $w_1$ and $w_2$ depends on the goal of process planning in terms of efficiency and sustainability.
When applying the above mathematical formulation to find a solution, it is necessary to derive the constraints $(O, x, B, eq)$, $\mathrm{T}(O, x, B)$, and $\Pi(O, x, B)$ according to the characteristics of the specific type of production process of interest. For the case of sequential production, suppose three requirements must be satisfied: (i) the operation constraints $(O, x, B, eq)$: each operation in process $O$ must be performed by a resource agent; (ii) the time constraints $\mathrm{T}(O, x, B)$: the overall processing time of process $O$ must be less than or equal to $\omega$; and (iii) the other constraints $\Pi(O, x, B)$: the number of times operation $k$ can be performed by each resource agent cannot exceed $q_{ak}$.
If $O$ is a sequential process, the constraints $(O, x, B, eq)$ can be represented by Equation (7), i.e., $\sum_{a \in R} \sum_{j=1}^{J_a} x_{aj} o_{ajk} \ge d_k \ \forall k \in \{1, \ldots, K\}$. If $O$ is a sequential process, $\Gamma(O, x, B) = \sum_{a \in R} \sum_{j=1}^{J_a} x_{aj} \tau_{aj}$, and $\mathrm{T}(O, x, B)$ can be represented by Equation (8). Let $q_{ak}$ be the maximum number of times that operation $k$ can be performed by resource agent $a$. The constraints $\Pi(O, x, B)$ can be represented by Equation (9). The function $Eng(O, x, B)$ used to calculate the energy consumption of a configuration $c(O, x, B)$ is defined as $Eng(O, x, B) = \sum_{a \in R} \sum_{j=1}^{J_a} x_{aj} e_{aj}$. The problem of planning a sequential process is formulated in Equations (6)–(10).
$\max\ H(x) = H(\Gamma(O, x, B), Eng(O, x, B)) = w_1(\omega - \Gamma(O, x, B)) - w_2 Eng(O, x, B)$  (6)
s.t. $\sum_{a \in R} \sum_{j=1}^{J_a} x_{aj} o_{ajk} \ge d_k \quad \forall k \in \{1, \ldots, K\}$  (7)
$\omega \ge \sum_{a \in R} \sum_{j=1}^{J_a} x_{aj} \tau_{aj}$  (8)
$\sum_{j=1}^{J_a} x_{aj} o_{ajk} \le q_{ak} \quad \forall a \in R, \ k \in \{1, \ldots, K\}$  (9)
$x_{aj} \in \{0, 1\} \quad \forall a \in R, \ j \in \{1, 2, 3, \ldots, J_a\}$  (10)
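To make the formulation concrete, the following sketch evaluates a candidate bid-selection vector against the objective in Equation (6) and the constraints in Equations (7)–(9). It is a hypothetical Java helper written for this exposition under the stated sequential-process assumptions; the flattened data layout and field names are illustrative rather than those of the actual implementation.

```java
/** Sketch: evaluation of a bid-selection vector x for the sequential planning problem
 *  in Equations (6)-(10). All bids of all resource agents are flattened into one index b. */
public class SequentialPlanEvaluator {
    int[][] o;            // o[b][k] = 1 if bid b covers operation k, 0 otherwise
    double[] tau;         // tau[b]  = processing time of bid b
    double[] e;           // e[b]    = energy consumption of bid b
    int[] agentOfBid;     // resource agent that submitted bid b
    int[] d;              // d[k] = 1 if operation k is required by the process agent
    int[][] q;            // q[a][k] = maximum number of times agent a may perform operation k
    double omega, w1, w2; // time limit and weighting factors

    double totalTime(int[] x) {                      // Gamma(O, x, B)
        double t = 0.0;
        for (int b = 0; b < x.length; b++) t += x[b] * tau[b];
        return t;
    }

    double totalEnergy(int[] x) {                    // Eng(O, x, B)
        double g = 0.0;
        for (int b = 0; b < x.length; b++) g += x[b] * e[b];
        return g;
    }

    double objective(int[] x) {                      // Equation (6)
        return w1 * (omega - totalTime(x)) - w2 * totalEnergy(x);
    }

    boolean feasible(int[] x) {
        int K = d.length;
        for (int k = 0; k < K; k++) {                // Equation (7): every required operation is covered
            int covered = 0;
            for (int b = 0; b < x.length; b++) covered += x[b] * o[b][k];
            if (covered < d[k]) return false;
        }
        if (totalTime(x) > omega) return false;      // Equation (8): total processing time within omega
        for (int a = 0; a < q.length; a++)           // Equation (9): per-agent operation caps
            for (int k = 0; k < K; k++) {
                int cnt = 0;
                for (int b = 0; b < x.length; b++)
                    if (agentOfBid[b] == a) cnt += x[b] * o[b][k];
                if (cnt > q[a][k]) return false;
            }
        return true;
    }
}
```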

4. Fitness Function and Self-Adaptive Hybrid DE Algorithm

4.1. A Fitness Function Based on an Objective Function, Biasing Feasible Solutions over Infeasible Solutions

Evolutionary algorithms require a properly designed fitness function to iteratively move toward the feasible solution space and improve the quality of solutions. The quality of a solution is determined by the value of the fitness function and violations of constraints associated with the solution. The information regarding violations of constraints associated with a solution can be used to efficiently move toward a feasible solution space by biasing feasible solutions over infeasible solutions. The method proposed in [53] is adopted in this paper to bias feasible solutions over infeasible solutions.
Let us define the fitness function to be used in the self-adaptive hybrid DE algorithm with two strategies presented in the next subsection. In this paper, the set of all feasible solutions in the current population is denoted by $S_f$. For the planning problem defined by Equations (1)–(5), let $x$ denote a solution. If $x$ is a feasible solution, the fitness function $H_1(x)$ is the same as the objective function $H(x)$. If the solution $x$ is infeasible, the objective function value of the best feasible solution in the current population and the constraint violations are used to define the fitness function $U(x)$ for $x$ in Equation (11):
$U(x) = S_f^{\max} + U_1(x) + U_2(x) + U_3(x)$  (11)
where $S_f^{\max} = \max_{x \in S_f} H(x)$, $U_1(x)$ is the penalty for violating the operation constraints, $U_2(x)$ penalizes violation of the time constraints $\mathrm{T}(x, B)$, and $U_3(x)$ penalizes violation of the other constraints $\Pi(x, B)$. The fitness function is defined in Equation (12) based on Equation (11) as follows:
$H_1(x) = \begin{cases} H(x) & \text{if } x \text{ satisfies (2)–(5)} \\ U(x) & \text{otherwise} \end{cases}$  (12)
For the sequential process planning problem defined by Equations (6)–(10), the fitness function is defined in Equation (18), based on Equations (13)–(17), as follows:
$S_f^{\max} = \max_{x \in S_f} H(x)$  (13)
$U_1(x) = \sum_{k=1}^{K} \min\left( \sum_{a \in R} \sum_{j=1}^{J_a} x_{aj} o_{ajk} - d_k, \ 0.0 \right)$  (14)
$U_2(x) = \min\left( \omega - \Gamma(x, B), \ 0.0 \right)$  (15)
$U_3(x) = -\sum_{a \in R} \sum_{k=1}^{K} \max\left( \sum_{j=1}^{J_a} x_{aj} o_{ajk} - q_{ak}, \ 0.0 \right)$  (16)
$U(x) = S_f^{\max} + U_1(x) + U_2(x) + U_3(x)$  (17)
$H_1(x) = \begin{cases} H(x) & \text{if } x \text{ satisfies (6)–(10)} \\ U(x) & \text{otherwise} \end{cases}$  (18)
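A minimal sketch of how the penalty-based fitness in Equations (13)–(18) can be evaluated in code is given below. It is hypothetical Java that builds on the evaluator sketch after Equation (10); the sign conventions follow the penalty construction described above, and the best feasible objective value $S_f^{\max}$ of the current population is passed in as an argument.

```java
/** Sketch of the penalty-based fitness H1(x) in Equations (13)-(18),
 *  built on top of the SequentialPlanEvaluator sketch above. */
public class PenaltyFitness {
    SequentialPlanEvaluator p;                      // problem data and helper functions

    /** sfMax is S_f^max, the best objective value among feasible solutions in the current population. */
    double fitness(int[] x, double sfMax) {
        if (p.feasible(x)) return p.objective(x);   // Equation (18), feasible branch
        int K = p.d.length, A = p.q.length;
        double u1 = 0.0, u2, u3 = 0.0;
        for (int k = 0; k < K; k++) {               // Equation (14): penalty for uncovered operations
            int covered = 0;
            for (int b = 0; b < x.length; b++) covered += x[b] * p.o[b][k];
            u1 += Math.min(covered - p.d[k], 0.0);
        }
        u2 = Math.min(p.omega - p.totalTime(x), 0.0);   // Equation (15): time-limit violation
        for (int a = 0; a < A; a++)                 // Equation (16): per-agent operation-cap violation
            for (int k = 0; k < K; k++) {
                int cnt = 0;
                for (int b = 0; b < x.length; b++)
                    if (p.agentOfBid[b] == a) cnt += x[b] * p.o[b][k];
                u3 -= Math.max(cnt - p.q[a][k], 0.0);
            }
        return sfMax + u1 + u2 + u3;                // Equation (17)
    }
}
```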

4.2. A Self-Adaptive Hybrid DE Algorithm with Two Strategies

One of the goals of this paper is to propose a systematic approach to the development of self-adaptive hybrid DE algorithms. The notations used in the algorithm are listed in Table 2. Our approach is to develop self-adaptive metaheuristic algorithms based on the hybridization of two strategies in DE. Table 3 summarizes the four well-known DE strategies used in hybridization. In a standard DE approach, a trial vector is calculated based on a mutant vector to find potentially better solutions iteratively. A mutant vector is calculated according to the DE strategy used. Table 3 shows Formulae (19)–(22), which are used to calculate the mutant vectors for the four different DE strategies. For each DE strategy $s$ in Table 3, there is a specific formula to calculate the mutant vector element $v_{id}$ for the $d$-th dimension of each individual $z_i$ in the population in each generation.
In our approach, two different DE strategies are selected from Table 3 for hybridization. The probability of selecting one of the two strategies to evolve a solution in the hybrid algorithm depends on the historical success rates of the two strategies. The historical success rate of each strategy is calculated according to the total number of times the strategy has successfully improved a solution and is updated in each generation. Algorithm 1 provides the pseudo-code of the self-adaptive hybrid DE algorithm based on the hybridization of two DE strategies, $s_1$ and $s_2$.
Algorithm 1 first sets the two strategies, $s_1$ and $s_2$, to be hybridized in Step 1; sets the algorithmic parameters, including the population size $NP$ and the number of generations $G$, in Step 2; and sets the self-adaptive algorithmic parameters, including the learning period $LP$, the probability $fp$, and the crossover probability $cr$, in Step 3. Algorithm 1 creates the initial population randomly and computes the fitness function values in Step 4. Finally, Algorithm 1 enters a nested loop in Step 5 to evolve each solution (individual) in the population iteratively. Step 5 is divided into five steps. Step 5-1 calculates the scale factor $F_i$ for the $i$-th individual in the population, Step 5-2 calculates the mutant vector $v_i$, Step 5-3 computes the trial vector $u_i$ for the $i$-th individual in the population, Step 5-4 updates the counters depending on whether the trial vector improves the solution, and Step 5-5 calculates the values of the probability variable $fp$ and the crossover probability variable $cr$. Note that the success event counter $SC_s$ and the failure event counter $FC_s$ of strategy $s$ are cumulative. These counters are updated in Step 5-4 and are used to update $fp$ at the end of each learning period in Step 5-5.
In Step 5-3, the self-adaptive hybrid DE algorithm computes the trial vector $u_i$ for the $i$-th individual in the population and converts each dimension of $u_i$ to binary ($\bar{u}_i$) by calling the $Binary$ function below. The binary vector $\bar{u}_i$ is used in Step 5-4 to compute the fitness function value and determine whether the trial vector improves the solution.
Function $Binary(u, V_{\max})$
Input: $u$
Output: $\bar{u}$
Step 1: Calculate $s(v) = \frac{1}{1 + \exp(-v)}$, where $v = \begin{cases} V_{\max} & \text{if } u > V_{\max} \\ u & \text{if } -V_{\max} \le u \le V_{\max} \\ -V_{\max} & \text{if } u < -V_{\max} \end{cases}$
Step 2: Calculate $\bar{u} = \begin{cases} 1 & \text{if } r_{sid} < s(v) \\ 0 & \text{otherwise} \end{cases}$, where $r_{sid}$ is randomly generated from the uniform distribution $U(0, 1)$
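The Binary function maps a real-valued trial-vector component to a binary value by clamping it to $[-V_{\max}, V_{\max}]$, applying the sigmoid, and sampling a bit. A direct Java transcription is sketched below; the class and method names are illustrative.

```java
import java.util.Random;

/** Sketch of the Binary(u, Vmax) function: clamp, apply the sigmoid, and sample a bit. */
public class BinaryConverter {
    private static final Random RNG = new Random();

    static int toBinary(double u, double vMax) {
        double v = Math.max(-vMax, Math.min(vMax, u));   // Step 1: clamp u to [-Vmax, Vmax]
        double s = 1.0 / (1.0 + Math.exp(-v));           // sigmoid s(v)
        return RNG.nextDouble() < s ? 1 : 0;             // Step 2: output 1 with probability s(v)
    }
}
```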
Algorithm 1: A Self-Adaptive Hybrid DE Algorithm with Two Strategies
Step 1: Set the two strategies, $s_1$ and $s_2$, to be hybridized, where $s_1 \in S$, $s_2 \in S$, and $s_1 \ne s_2$
Step 2: Set algorithmic parameters
Step 2-1: Set the population size $NP$
Step 2-2: Set the number of generations $G$
Step 3: Set self-adaptive algorithmic parameters
Step 3-1: Set the learning period $LP$
Step 3-2: Set the probability $fp = 0.5$
Step 3-3: Set the crossover probability $cr = 0.5$
Step 3-4: Set $\beta$
Step 4: Create the initial population randomly and compute fitness function values
Step 4-1: Generate a population with $NP$ individuals randomly
Step 4-2: Calculate the fitness function value for each individual
Step 5: Evolve solutions iteratively for each individual in the population
   For $g = 1$ to $G$
    For $i = 1$ to $NP$
Step 5-1: Calculate the scale factor $F_i$ for the $i$-th individual
    Generate a random number $r$ from $U(0, 1)$
      $F_i = \begin{cases} r_1, \text{ where } r_1 \text{ is generated from the normal distribution } N(0, 1), & \text{if } r < fp \\ r_2, \text{ where } r_2 \text{ is generated from } U(0, 1), & \text{otherwise} \end{cases}$
Step 5-2: Calculate the mutant vector $v_i$
    Generate $r$ randomly from $U(0, 1)$
     $s = \begin{cases} s_1 & \text{if } r < fp \\ s_2 & \text{otherwise} \end{cases}$
    For $d \in \{1, 2, \ldots, D\}$
     Calculate $v_{id}$ according to strategy $s$ based on the formula defined in Table 3.
    End For
Step 5-3: Compute the trial vector $u_i$ for the $i$-th individual
    Generate $\alpha$ randomly from the normal distribution $N(0, 1)$
     $cr_i = \alpha \beta + cr$
    For $d \in \{1, 2, \ldots, D\}$
      Generate $r$ randomly from $U(0, 1)$
       $u_{id} = \begin{cases} v_{id} & \text{if } r < cr_i \\ z_{id} & \text{otherwise} \end{cases}$
      Convert the $d$-th dimension of the trial vector into binary
       $\bar{u}_{id} \leftarrow Binary(u_{id}, V_{\max})$
    End For
Step 5-4: Calculate the fitness function value for the trial vector and update the counters
    If the trial vector improves the solution, update the counter $SC_s$; otherwise, update the counter $FC_s$
    If $H_1(\bar{u}_i) \ge H_1(z_i)$
      $z_i = u_i$
      $L \leftarrow L \cup \{cr_i\}$
      $SC_s = SC_s + 1$
    Else
      $FC_s = FC_s + 1$
    End If
    End For
Step 5-5: Calculate the values of the probability variable $fp$ and the crossover probability variable $cr$
    If $g \bmod LP = 0$
      $w_1 = SC_{s_1} / (SC_{s_1} + FC_{s_1})$
      $w_2 = SC_{s_2} / (SC_{s_2} + FC_{s_2})$
      $fp = w_1 / (w_1 + w_2)$
      $cr = \frac{\sum_{k \in \{1, 2, \ldots, |L|\}} L(k)}{|L|}$
    End If
    End For
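The bookkeeping in Steps 5-4 and 5-5 can be summarized by the following sketch of the cumulative success/failure counters and the end-of-learning-period update of $fp$ and $cr$. This is hypothetical Java written for illustration; the field names are not those of the actual implementation, and a small guard against division by zero is added that is not part of Algorithm 1.

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of the self-adaptive bookkeeping in Steps 5-4 and 5-5 of Algorithm 1. */
public class StrategyAdapter {
    long[] sc = new long[2];                         // cumulative success counts of strategies s1, s2
    long[] fc = new long[2];                         // cumulative failure counts of strategies s1, s2
    double fp = 0.5;                                 // probability of selecting strategy s1
    double cr = 0.5;                                 // crossover probability
    List<Double> successfulCr = new ArrayList<>();   // list L of crossover rates of successful trials

    /** Step 5-4: record whether strategy s (0 or 1) improved the solution using crossover rate cri. */
    void record(int s, boolean improved, double cri) {
        if (improved) { sc[s]++; successfulCr.add(cri); } else { fc[s]++; }
    }

    /** Step 5-5: at the end of each learning period, update fp and cr from the counters. */
    void endOfLearningPeriod() {
        double w1 = sc[0] / (double) Math.max(1L, sc[0] + fc[0]);   // success rate of s1
        double w2 = sc[1] / (double) Math.max(1L, sc[1] + fc[1]);   // success rate of s2
        if (w1 + w2 > 0) fp = w1 / (w1 + w2);                       // favor the more successful strategy
        if (!successfulCr.isEmpty()) {                              // cr = mean of recorded crossover rates
            double sum = 0.0;
            for (double c : successfulCr) sum += c;
            cr = sum / successfulCr.size();
        }
    }
}
```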

5. Results

In this section, we introduce six self-adaptive hybrid DE algorithms created based on hybridization of two DE strategies, the metrics to assess these algorithms, the experiments used to verify these algorithms, comparison of the performance metric and robustness metric based on the experimental results, and comparison with existing algorithms.

5.1. Self-Adaptive Hybrid DE Algorithms and Metrics to Assess the Algorithms

To verify the effectiveness of hybridizing two different strategies, several self-adaptive hybrid DE algorithms are created. Each self-adaptive hybrid DE algorithm is created by hybridizing two different strategies selected from four candidate strategies defined in (19), (20), (21), and (22) of Table 3. This leads to the six self-adaptive hybrid DE algorithms listed in Table 4. In Table 4, the self-adaptive hybrid DE algorithm created based on the hybridization of strategy m and strategy n is denoted by SaNSDE(m,n), where m is different from n. In this study, we use DE-m and DE-n to refer to the two original standard DE algorithms based on strategy m and strategy n, respectively.
To assess the self-adaptive hybrid DE algorithms, we use two metrics to characterize their performance and robustness properties. The performance of a self-adaptive hybrid DE algorithm is characterized by the mean fitness value of the solutions obtained for a problem. The robustness of a self-adaptive hybrid DE algorithm for a problem is characterized by the standard deviation of the fitness function values of the solutions obtained. The performance and robustness metrics are also used to characterize the properties of standard DE algorithms. We apply each self-adaptive hybrid DE algorithm, SaNSDE(m,n), and the two original standard DE algorithms, DE-m and DE-n, to find solutions for several test cases, then we calculate the mean fitness value and the standard deviation of the fitness values based on the solutions obtained by SaNSDE(m,n), DE-m, and DE-n. We compare the performance and robustness of each self-adaptive hybrid DE algorithm and the corresponding two standard DE algorithms. Finally, we compare the performance and robustness of each self-adaptive hybrid DE algorithm with the existing self-adaptive hybrid DE algorithm proposed in [52], the NSDE algorithm, and the PSO algorithm.
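Both metrics are computed directly from the fitness values collected over repeated runs of an algorithm on a test case, as in the short hypothetical Java helper below.

```java
/** Sketch: performance (mean) and robustness (standard deviation) metrics computed
 *  from the fitness values of repeated runs of one algorithm on one test case. */
public class RunMetrics {
    static double mean(double[] fitness) {
        double sum = 0.0;
        for (double f : fitness) sum += f;
        return sum / fitness.length;
    }

    static double stdDev(double[] fitness) {
        double m = mean(fitness), sq = 0.0;
        for (double f : fitness) sq += (f - m) * (f - m);
        return Math.sqrt(sq / fitness.length);   // population standard deviation over the runs
    }
}
```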

5.2. Test Cases and Parameters Used in the Experiments and an Illustrative Example

To study the performance and robustness of the self-adaptive hybrid DE algorithms and compare them with the existing self-adaptive hybrid DE algorithm proposed in [52], the NSDE algorithm, and the PSO algorithm, we used the same set of test cases that was used in [52] as a benchmark. The process planning problem parameters include the number of operations, $K$, and the number of resources, $I$. The ten test cases created in [52] for sequential processes with up to 20 operations and up to 40 resources in CPPSs were used in the experiments to test the process planning algorithms in this study. The complete set of test cases is available in [Planning_CPS] at [https://drive.google.com/drive/folders/1_5bMvhjvVhTDN0yunbFNQ4HasaICu8b4?usp=sharing] (accessed on 29 June 2024).
The study of the influence of the $LP$ parameter on performance in [52] indicates that the self-adaptive hybrid algorithm is not sensitive to the $LP$ parameter, provided $LP$ is large enough to achieve good performance. The value of 1000 was obtained based on the sensitivity analysis in [52]. The value of $\beta$ is set to the same value as in [52], which was obtained via parameter tuning. The initial value of $cr$ is adapted at the end of each learning period. Parameter tuning of NSDE and PSO was performed by adjusting one parameter at a time while keeping the others constant and running the algorithm multiple times to record the performance metric, so as to identify good parameter values. The parameters used in the algorithms are listed in Table 5. For all algorithms, the parameter $V_{\max}$ was set to 4 in all experiments.
All the experiments were performed on a Windows 10 laptop with an Intel(R) Core(TM) i7 CPU and 16 GB of on-board memory at a base clock speed of 2.6 GHz. The algorithms were implemented in Java using the Java Development Kit version 8 or a newer version.
An illustrative example is given below.
We first apply one self-adaptive hybrid DE algorithm, SaNSDE(1,2), to find a solution for planning the sustainable sequential CPPS described in Figure 2 and Figure 3. For the CPPS with four resource agents, suppose the resource agents submit bids based on their supported activities; that is, resource agent 1 submits three bids, $B_{11}$ = [1 0 1 68 6], $B_{12}$ = [1 0 0 40 4], and $B_{13}$ = [0 0 1 50 5], based on activities $A_{11}$, $A_{12}$, and $A_{13}$, respectively. Resource agent 2 submits three bids, $B_{21}$ = [0 0 1 22 1], $B_{22}$ = [1 0 0 90 9], and $B_{23}$ = [1 0 1 100 10], based on activities $A_{21}$, $A_{22}$, and $A_{23}$, respectively. Resource agent 3 submits one bid, $B_{31}$ = [0 1 0 10 1], based on $A_{31}$, and resource agent 4 submits one bid, $B_{41}$ = [0 1 0 30 3], based on $A_{41}$. Table 6 summarizes all the bids submitted by the four resource agents in this example.
The objective function used is $H(x) = H(\Gamma(x, B), Eng(x, B)) = w_1(\omega - \Gamma(x, B)) - w_2 Eng(x, B)$, where $w_1 = 1$, $w_2 = 1$, and $\omega = 400$.
By applying the self-adaptive hybrid DE algorithm SaNSDE(1,2) to solve the optimization problem in the above example, we obtained the following solution: $x_{11} = 0$, $x_{12} = 1$, $x_{13} = 0$, $x_{21} = 1$, $x_{22} = 0$, $x_{23} = 0$, $x_{31} = 1$, $x_{41} = 0$. The configuration corresponding to this solution is depicted in Figure 4a.
Therefore, $\Gamma(x, B) = 0 \times 68 + 1 \times 40 + 0 \times 50 + 1 \times 22 + 0 \times 90 + 0 \times 100 + 1 \times 10 + 0 \times 30 = 72$.
$Eng(x, B) = 0 \times 6 + 1 \times 4 + 0 \times 5 + 1 \times 2 + 0 \times 9 + 0 \times 10 + 1 \times 1 + 0 \times 3 = 7$.
As $\omega = 400$, $w_1 = 1$, and $w_2 = 1$, $H(x) = H(\Gamma(x, B), Eng(x, B)) = w_1(\omega - \Gamma(x, B)) - w_2 Eng(x, B) = 400 - 72 - 7 = 321$.

5.3. Comparison of Performance Metric Based on Results of Experiments

In this subsection, we illustrate the advantage of the six self-adaptive hybrid DE algorithms in Table 4 by comparing their performance with that of the two original DE algorithms. For each self-adaptive hybrid DE algorithm SaNSDE(m,n), where m is different from n, the average fitness value of the solutions obtained by SaNSDE(m,n) was compared with those of DE-m and DE-n. As SaNSDE(m,n), DE-m, and DE-n are stochastic optimization algorithms, each of the ten test cases was solved by SaNSDE(m,n), DE-m, and DE-n ten times. The average fitness values of the solutions obtained by SaNSDE(m,n), DE-m, and DE-n were calculated and compared. The details of the results regarding the performance metric for each self-adaptive hybrid DE algorithm in Table 4 and the corresponding two original DE algorithms are presented below.
Table 7 shows the mean fitness function values and mean number of generations of SaNSDE(1,2), SaNSDE(1,3), and the related DE algorithms, DE-1, DE-2, and DE-3, for a population size of 30. The results indicate that the mean fitness function values obtained by SaNSDE(1,2) are either equal to or better than those obtained by the two corresponding DE algorithms, DE-1 and DE-2, for most test cases except test case 5. The mean fitness function values obtained by SaNSDE(1,3) are either the same as or better than those obtained by the two corresponding DE algorithms, DE-1 and DE-3, for most test cases, with the exception of test case 5, test case 6, and test case 7. Figure 6 and Figure 7 show the mean fitness function values of SaNSDE(1,2), DE-1, and DE-2 as well as SaNSDE(1,3), DE-1, and DE-3 as bar charts for a population size of 30.
The results of SaNSDE(1,4), SaNSDE(2,3), and the corresponding DE algorithms, DE-1, DE-4, DE-2, and DE-3, used in the hybridization for a population size of 30 are listed in Table 8. The results for a population size of 30 in Table 8 indicate that the mean fitness function values obtained by SaNSDE(1,4) are either equal to or better than those obtained by DE-1 and DE-4 for most test cases, except for test case 5 and test case 7, and the mean fitness function values obtained by SaNSDE(2,3) are either equal to or better than those obtained by DE-2 and DE-3 for most test cases, except for test case 5. Figure 8 and Figure 9 show bar charts for side-by-side comparisons of SaNSDE(1,4), DE-1, and DE-4, as well as SaNSDE(2,3), DE-2, and DE-3, respectively, for a population size of 30.
The results of SaNSDE(2,4), DE-2, and DE-4, as well as SaNSDE(3,4), DE-3, and DE-4, for a population size of 30 are shown in Table 9. The outcomes of the experiments for a population size of 30 provide evidence that SaNSDE(2,4) is able to improve performance, as the mean fitness function values obtained by SaNSDE(2,4) are either equal to or better than those obtained by the two corresponding DE algorithms, DE-2 and DE-4, for all test cases. They also provide evidence that SaNSDE(3,4) is able to improve performance, as the mean fitness function values obtained by SaNSDE(3,4) are either equal to or better than those obtained by the two corresponding DE algorithms, DE-3 and DE-4, for most test cases, except test case 5. Figure 10 and Figure 11 show the results for a population size of 30 as bar charts for comparison.
In summary, the results presented above show that the mean fitness values of the solutions obtained by SaNSDE(m,n) are either equal to or better than those obtained by DE-m and DE-n for most test cases for a population size of 30.
The results for a population size of 50 in the SaNSDE(1,2), DE-1, and DE-2 columns in Table 10 indicate that the mean fitness function values obtained by SaNSDE(1,2) are either equal to or better than those obtained by DE-1 and DE-2 for all test cases. The results for a population size of 50 in the SaNSDE(1,3), DE-1, and DE-3 columns in Table 10 indicate that the mean fitness function values obtained by SaNSDE(1,3) are either the same as or better than those obtained by DE-1 and DE-3 for most test cases, with the exception of test case 5, test case 6, and test case 7. For convenience of comparison, the results for a population size of 50 are presented as bar charts in Figure 12 and Figure 13.
We obtained similar results in Table 11 for SaNSDE(1,4) and the two corresponding DE algorithms, DE-1 and DE-4, for a population size of 50. That is, the performance of SaNSDE(1,4) was either the same as or better than that of the two corresponding DE algorithms, DE-1 and DE-4, for most test cases, with the exception of test case 6 and test case 7. When the population size is 50, the outcomes of the experiments in Table 11 for SaNSDE(2,3) and the two corresponding DE algorithms, DE-2 and DE-3, are similar; that is, the performance of SaNSDE(2,3) is either the same as or better than that of the two corresponding DE algorithms, DE-2 and DE-3, for most test cases, except for test case 5 and test case 6. The results for a population size of 50 are presented as bar charts in Figure 14 and Figure 15.
Table 12 shows that the performance of SaNSDE(2,4) is either the same as or better than that of the two corresponding DE algorithms, DE-2 and DE-4, for most test cases, with the exception of test case 6, and the performance of SaNSDE(3,4) is either as good as or better than that of DE-3 and DE-4 for most test cases, except test case 5. For convenience of comparison, the results for a population size of 50 are presented as bar charts in Figure 16 and Figure 17.
In short, the above analyses for population sizes of 30 and 50 indicate that the performance of SaNSDE(m,n) is either as good as or better than that of DE-m and DE-n for most test cases.

5.4. Comparison of Robustness Metric Based on Results of Experiments

In this subsection, we illustrate the advantage of the self-adaptive hybrid DE algorithms by comparing the robustness metric of each self-adaptive hybrid DE algorithm in Table 4 with those of the corresponding two standard DE algorithms. For each self-adaptive hybrid DE algorithm SaNSDE(m,n), where m is different from n, the standard deviation of the fitness function values of the solutions obtained by SaNSDE(m,n) was compared with those of DE-m and DE-n. As SaNSDE(m,n), DE-m, and DE-n are stochastic optimization algorithms, each of the ten test cases was solved by SaNSDE(m,n), DE-m, and DE-n ten times. The standard deviations of the fitness function values of the solutions obtained by SaNSDE(m,n), DE-m, and DE-n were calculated and compared. The results of the robustness metric for all self-adaptive hybrid DE algorithms in Table 4 and the two corresponding standard DE algorithms are presented as follows.
Table 13 shows the standard deviation of the fitness function values obtained by SaNSDE(1,2) and the two corresponding DE algorithms, DE-1 and DE-2, for a population size of 30. The results indicate that the standard deviation of the fitness function values obtained by SaNSDE(1,2) either equals or is smaller than those obtained by the two corresponding DE algorithms, DE-1 and DE-2, for all test cases. The results for SaNSDE(1,3) and the two corresponding DE algorithms, DE-1 and DE-3, are similar; that is, SaNSDE(1,3) is either as robust as or more robust than DE-1 and DE-3 for all test cases for a population size of 30.
The standard deviations of the fitness function values of SaNSDE(1,4) and the two corresponding DE algorithms, DE-1 and DE-4, for a population size of 30 are shown in Table 14. The results in Table 14 indicate that the standard deviation of the fitness function values obtained by SaNSDE(1,4) is either equal to or smaller than those obtained by the two corresponding DE algorithms, DE-1 and DE-4, for all test cases. For a population size of 30, SaNSDE(1,4) is as robust as or more robust than DE-1 and DE-4 for all test cases. Similarly, Table 14 also shows that, for a population size of 30, SaNSDE(2,3) is at least as robust as DE-2 and DE-3 for all test cases.
Similar results hold for SaNSDE(2,4), SaNSDE(3,4), and the corresponding DE algorithms for a population size of 30 in Table 15, where SaNSDE(2,4) is as robust as or more robust than DE-2 and DE-4, and SaNSDE(3,4) is as robust as or more robust than DE-3 and DE-4 for all test cases.
The results for a population size of 50 are reported in Table 16, Table 17 and Table 18. The results in Table 16 show that SaNSDE(1,2) is as robust as or more robust than DE-1 and DE-2, and SaNSDE(1,3) is as robust as or more robust than DE-1 and DE-3 for all test cases. The results in Table 17 show that SaNSDE(1,4) is as robust as or more robust than DE-1 and DE-4, and SaNSDE(2,3) is as robust as or more robust than DE-2 and DE-3 for all test cases. The results in Table 18 show that SaNSDE(2,4) is as robust as or more robust than DE-2 and DE-4, and SaNSDE(3,4) is as robust as or more robust than DE-3 and DE-4 for all test cases.
In summary, the results for population sizes of 30 and 50 indicate that the robustness of SaNSDE(m,n) is either as good as or better than DE-m and DE-n for most test cases.

5.5. Comparison with Other Existing Algorithms

In this subsection, we compare the performance metric and robustness metric of the six self-adaptive hybrid DE algorithms in Table 4 with those of the self-adaptive hybrid DE algorithm called SaNSDE proposed in [52], the NSDE algorithm, and the PSO algorithm. Table 19 shows the mean fitness function values obtained by these nine algorithms for a population size of 30. The results indicate that all six self-adaptive hybrid DE algorithms are comparable to the SaNSDE algorithm in [52] in terms of performance for most test cases. In addition, the six self-adaptive hybrid DE algorithms also either outperform or perform as well as the NSDE algorithm and the PSO algorithm in most test cases for a population size of 30.
The results of the mean fitness function values of the nine algorithms for a population size of 50 are shown in Table 20, where all six self-adaptive hybrid DE algorithms are comparable with respect to the SaNSDE algorithm in [52] in terms of performance for most test cases. In addition, each of the six self-adaptive hybrid DE algorithms either outperforms or performs as well as the NSDE algorithm and PSO algorithm in most test cases for a population size of 50.
Table 21 shows the standard deviation of the fitness function values of these nine algorithms for a population size of 30. The results indicate that all six self-adaptive hybrid DE algorithms are comparable with respect to the SaNSDE algorithm in [52] in terms of robustness. In addition, the standard deviation of the fitness function values obtained by each of the six self-adaptive hybrid DE algorithms is either equal to or smaller than that of the PSO algorithm for all test cases for a population size of 30. The standard deviation of the fitness function values obtained by each of the six self-adaptive hybrid DE algorithms is either equal to or smaller than that of the NSDE algorithm for a population size of 30. For a population size of 50, we obtained the results presented in Table 22, where the six self-adaptive hybrid DE algorithms are comparable with respect to the SaNSDE algorithm in [52], the NSDE algorithm, and PSO algorithm in terms of robustness for most test cases.
To study the statistical significance of the experimental results, we applied the Friedman test to rank the 13 metaheuristic algorithms; in this test, an algorithm with a lower average rank performs better. The second column of Table 23 shows the average rankings generated by the Friedman test for the 13 algorithms for a population size of 30, for which the top three algorithms are SaNSDE(1,2), SaNSDE(1,4), and SaNSDE [52]. The last column of Table 23 shows the average rankings for a population size of 50, for which the top three algorithms are SaNSDE(1,2), SaNSDE [52], and SaNSDE(1,4). Note that the average rankings of all six self-adaptive hybrid DE algorithms are better than those of the other algorithms, with the single exception of the DE-1 algorithm. The Friedman test indicates significant differences among the algorithms, and the gap between the best and worst average ranks is clear for both population sizes of 30 and 50; moreover, the ranking results are not sensitive to the population size. Although the average rankings generated by the Friedman test provide useful guidance for users in choosing an algorithm, the results of the experiments with the ten test cases are consistent with the “No Free Lunch” theorem: no single one of the thirteen algorithms tested was universally superior to the others across all test cases. For example, for test case 10 with a population size of 30, the top-ranked SaNSDE(1,2) performed worse than several other algorithms, including SaNSDE(1,3), SaNSDE(1,4), and SaNSDE [52].
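For readers who wish to reproduce this type of ranking, the sketch below illustrates the procedure under the assumption that a higher mean fitness value is better; it uses SciPy's friedmanchisquare function together with per-test-case average ranks. The data matrix holds placeholder values only and does not reproduce the results reported in the tables.

```python
# Illustrative Friedman-test ranking (placeholder data, not the reported results).
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# Rows = test cases, columns = algorithms; higher mean fitness assumed to be better.
mean_fitness = np.array([
    [10.0,  9.5,  9.8],
    [20.0, 20.0, 18.7],
    [15.2, 14.9, 15.2],
    [30.1, 29.4, 28.8],
])

# Friedman test: each argument is one algorithm's measurements across the test cases.
statistic, p_value = friedmanchisquare(*mean_fitness.T)

# Average ranks (rank 1 = best on a test case); ties receive the average of the tied ranks.
ranks = np.vstack([rankdata(-row) for row in mean_fitness])
average_rank = ranks.mean(axis=0)
print(statistic, p_value, average_rank)
```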
Table 24 and Table 25 show the average computation time of all algorithms for population sizes of 30 and 50, respectively.

6. Discussion

In this paper, we attempt to answer several research questions regarding the effectiveness of algorithms created through the hybridization of different DE strategies. These research questions are as follows:
Question 1. Is the performance of a hybrid DE algorithm, SaNSDE(m,n), with the collaboration of two DE strategies, DE-m and DE-n, better than that of each of the two original DE strategies for solving the same problem?
Question 2. Is the hybrid DE algorithm more robust than each of the two original solution algorithms using only one DE strategy?
Question 3. Does arbitrarily combining two different DE strategies selected from the set of four strategies defined in this paper lead to a more effective solution approach in terms of performance and robustness?
To answer these questions, six self-adaptive hybrid DE algorithms were created and evaluated experimentally. The results presented in Section 5.3 indicate that the mean fitness function values obtained by SaNSDE(m,n) are either equal to or better than those obtained by the two corresponding DE algorithms, DE-m and DE-n, for most test cases. Therefore, the answer to Question 1 is ‘yes’ for most test cases. The results of Section 5.4 show that the standard deviation of the fitness function values obtained by SaNSDE(m,n) is either equal to or smaller than that obtained by DE-m and DE-n for all test cases. Therefore, the answer to Question 2 is ‘yes’ for all test cases. Because the answer to Question 1 is ‘yes’ for most test cases and the answer to Question 2 is ‘yes’ for all test cases, arbitrarily combining two different DE strategies selected from the set of four strategies leads to a more effective solution approach for most test cases in terms of performance and robustness; that is, the answer to Question 3 is ‘yes’ for most test cases for the four DE strategies considered. In summary, the experimental results confirm that SaNSDE(m,n) improves on the two corresponding DE algorithms, DE-m and DE-n, for most test cases in terms of performance and robustness. In addition to answering the three questions above, the results in Section 5.5 show that the six self-adaptive hybrid DE algorithms are comparable to the self-adaptive hybrid DE algorithm proposed in [52] in terms of performance and robustness and either outperform or perform as well as the NSDE algorithm and the PSO algorithm for most test cases.
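The decision logic behind these answers can be expressed compactly. The following sketch is illustrative only and assumes that the fitness function is maximized; given the fitness values of the repeated runs of a hybrid algorithm and of its two parent strategies on one test case, it reports whether the hybrid matches or improves on both parents in terms of the mean (performance) and the standard deviation (robustness).

```python
# Illustrative per-test-case check for Questions 1 and 2 (fitness assumed to be maximized).
from statistics import mean, pstdev

def hybrid_improves(hybrid_runs, parent_m_runs, parent_n_runs):
    """Compare a hybrid's repeated-run fitness values against its two parent strategies."""
    better_performance = (mean(hybrid_runs) >= mean(parent_m_runs)
                          and mean(hybrid_runs) >= mean(parent_n_runs))
    better_robustness = (pstdev(hybrid_runs) <= pstdev(parent_m_runs)
                         and pstdev(hybrid_runs) <= pstdev(parent_n_runs))
    return {"performance": better_performance, "robustness": better_robustness}
```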

7. Conclusions

Many problems in the real world are so complex that finding the best or good-quality solutions relies on the development of effective solution algorithms. The problem of planning sustainable processes in multi-agent Cyber–Physical Production Systems (CPPSs) with different types of agents, such as process agents, resource agents, and optimization agents, is an example of such a complex optimization problem. Due to its computational complexity, it is difficult to find the optimal solution. Metaheuristic algorithms provide a viable approach for finding quality solutions for complex optimization problems. When a single metaheuristic algorithm cannot effectively solve a problem, different metaheuristic algorithms may be combined to create a more effective hybrid algorithm, much as a team is formed to solve a problem. When a hybrid algorithm is created based on the hybridization of two original algorithms or strategies, an interesting issue is whether the hybrid algorithm is more effective than the two original algorithms or strategies.
In this study, we focused on the research question of whether a self-adaptive hybrid DE algorithm, created by combining two standard DE strategies, is more effective than the two original standard DE algorithms for planning sustainable processes in multi-agent CPPSs. The effectiveness of a metaheuristic algorithm was assessed using a performance metric and a robustness metric: the performance of a self-adaptive hybrid DE algorithm on a problem is characterized by the mean fitness value of the solutions obtained, and its robustness is characterized by the standard deviation of the fitness function values of the solutions obtained. We used a set of four DE strategies and arbitrarily selected two of the four strategies to create six self-adaptive hybrid DE algorithms. The preliminary results of this study show that arbitrarily hybridizing two standard DE strategies selected from this set can create a more effective self-adaptive hybrid DE algorithm for most of the test cases in terms of performance and robustness. Among the 13 algorithms in this comparative study, the top 3 performers according to the average rankings of the Friedman test were self-adaptive hybrid DE algorithms. In addition, the average rankings of the self-adaptive hybrid DE algorithms were better than those of the other seven existing algorithms, with only one exception (the DE-1 algorithm).
In the literature, there are other DE strategies that can be hybridized, and the effectiveness of hybridization based on them is unknown. An interesting research question is to study the effectiveness of the algorithms obtained by hybridizing other combinations of DE strategies not covered in this study. As there are several hundred metaheuristic algorithms in the literature, another interesting future research direction would be to study which combinations of two different metaheuristics work effectively and result in improvements in performance and robustness. The effectiveness of applying self-adaptive hybrid algorithms with two strategies to planning CPPSs with multiple objectives is a further research issue. In addition, the study of more advanced constraint-handling methods to improve the computational efficiency of the self-adaptive hybrid algorithms is also an interesting issue.

Funding

This research was supported in part by the National Science and Technology Council, Taiwan, under Grant NSTC 111-2410-H-324-003.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data are contained within the article. The original data presented in this study are available in [Planning_CPS] at [https://drive.google.com/drive/folders/1_5bMvhjvVhTDN0yunbFNQ4HasaICu8b4?usp=sharing] (accessed on 29 June 2024).

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Du, K.L.; Swamy, M.N.S. Search and Optimization by Metaheuristics; Springer International Publishing AG: Cham, Switzerland, 2016. [Google Scholar]
  2. Srinivas, M.; Patnaik, L.M. Genetic algorithms: A survey. Computer 1994, 27, 17–26. [Google Scholar] [CrossRef]
  3. Pant, M.; Zaheer, H.; Garcia-Hernandez, L.L.; Abraham, A. Differential Evolution: A review of more than two decades of research. Eng. Appl. Artif. Intell. 2020, 90, 103479. [Google Scholar] [CrossRef]
  4. Yang, X.S.; He, X. Bat algorithm: Literature review and applications. Int. J. Bio-Inspired Comput. 2013, 5, 141–149. [Google Scholar] [CrossRef]
  5. Shami, T.M.; El-Saleh, A.A.; Alswaitti, M.; Al-Tashi, Q.; Summakieh, M.A.; Mirjalili, S. Particle swarm optimization: A comprehensive survey. IEEE Access 2022, 10, 10031–10061. [Google Scholar] [CrossRef]
  6. Faris, H.; Aljarah, I.; Al-Betar, M.A.; Mirjalili, S. Grey wolf optimizer: A review of recent variants and applications. Neural Comput. Appl. 2018, 30, 413–435. [Google Scholar] [CrossRef]
  7. Fister, I.; Fister, I., Jr.; Yang, X.S.; Brest, J. A comprehensive review of firefly algorithms. Swarm Evol. Comput. 2013, 13, 34–46. [Google Scholar] [CrossRef]
  8. Gharehchopogh, F.S.; Gholizadeh, H. A comprehensive survey: Whale Optimization Algorithm and its applications. Swarm Evol. Comput. 2019, 48, 1–24. [Google Scholar] [CrossRef]
  9. Bertsimas, D.; Tsitsiklis, J. Simulated annealing. Stat. Sci. 1993, 8, 10–15. [Google Scholar] [CrossRef]
  10. Rashedi, E.; Rashedi, E.; Nezamabadi-Pour, H. A comprehensive survey on gravitational search algorithm. Swarm Evol. Comput. 2018, 41, 141–158. [Google Scholar] [CrossRef]
  11. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput.-Aided Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  12. Ting, T.O.; Yang, X.S.; Cheng, S.; Huang, K. Hybrid metaheuristic algorithms: Past, present, and future. In Recent Advances in Swarm Intelligence and Evolutionary Computation; Springer International Publishing: Cham, Switzerland, 2015; pp. 71–83. [Google Scholar]
  13. Transforming Our World: The 2030 Agenda for Sustainable Development. Available online: https://sdgs.un.org/2030agenda (accessed on 20 June 2024).
  14. Hsieh, F.-S. Emerging Research Issues and Directions on MaaS, Sustainability and Shared Mobility in Smart Cities with Multi-Modal Transport Systems. Appl. Sci. 2025, 15, 5709. [Google Scholar] [CrossRef]
  15. Scharmer, V.M.; Vernim, S.; Horsthofer-Rauch, J.; Jordan, P.; Maier, M.; Paul, M.; Schneider, D.; Woerle, M.; Schulz, J.; Zaeh, M.F. Sustainable Manufacturing: A Review and Framework Derivation. Sustainability 2024, 16, 119. [Google Scholar] [CrossRef]
  16. Monostori, L. Cyber-physical production systems: Roots, expectations and R&D challenges. Procedia CIRP 2014, 17, 9–13. [Google Scholar]
  17. Ferber, J. Multi-Agent Systems: An Introduction to Distributed Artificial Intelligence; Addison Wesley: Reading, MA, USA, 1999. [Google Scholar]
  18. Restrepo, L.; Aguilar, J.; Toro, M.; Suescún, E. A sustainable-development approach for self-adaptive cyber–physical system’s life cycle: A systematic mapping study. J. Syst. Softw. 2021, 180, 111010. [Google Scholar] [CrossRef]
  19. Thiede, S. Environmental sustainability of cyber physical production systems. Procedia CIRP 2018, 69, 644–649. [Google Scholar] [CrossRef]
  20. Blum, C.; Puchinger, J.; Raidl, G.R.; Roli, A. Hybrid metaheuristics in combinatorial optimization: A survey. Appl. Soft Comput. 2011, 11, 4135–4151. [Google Scholar] [CrossRef]
  21. Moscato, P. Memetic Algorithms: A Short Introduction; New Ideas in Optimization; Corne, D., Dorigo, M., Glover, F., Eds.; McGraw-Hill: New York, NY, USA, 1999. [Google Scholar]
  22. Krasnogor, N.; Smith, J. A tutorial for competent memetic algorithms: Model, taxonomy, and design issues. IEEE Trans. Evol. Comput. 2005, 9, 474–488. [Google Scholar] [CrossRef]
  23. Punyakum, V.; Sethanan, K.; Nitisiri, K.; Pitakaso, R.; Gen, M. Hybrid differential evolution and particle swarm optimization for Multi-visit and Multi-period workforce scheduling and routing problems. Comput. Electron. Agric. 2022, 197, 106929. [Google Scholar] [CrossRef]
  24. Mao, B.; Xie, Z.; Wang, Y.; Handroos, H.; Wu, H.; Shi, S. A hybrid differential evolution and particle swarm optimization algorithm for numerical kinematics solution of remote maintenance manipulators. Fusion Eng. Des. 2017, 124, 587–590. [Google Scholar] [CrossRef]
  25. Xu, H.; Deng, Q.; Zhang, Z.; Lin, S. A hybrid differential evolution particle swarm optimization algorithm based on dynamic strategies. Sci. Rep. 2025, 15, 4518. [Google Scholar] [CrossRef]
  26. Zhang, X.; Duan, H.; Jin, J. DEACO: Hybrid ant colony optimization with differential evolution. In Proceedings of the 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), Hong Kong, China, 1–6 June 2008; IEEE: New York, NY, USA, 2008; pp. 921–927. [Google Scholar]
  27. Lahiri, S.K.; Khalfe, N. Improve shell and tube heat exchangers design by hybrid differential evolution and ant colony optimization technique. Asia-Pac. J. Chem. Eng. 2014, 9, 431–448. [Google Scholar] [CrossRef]
  28. Ali, M.; Pant, M.; Abraham, A. A hybrid ant colony differential evolution and its application to water resources problems. In Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), Coimbatore, India, 9–11 December 2009; pp. 1133–1138. [Google Scholar]
  29. Mafarja, M.M.; Mirjalili, S. Hybrid whale optimization algorithm with simulated annealing for feature selection. Neurocomputing 2017, 260, 302–312. [Google Scholar] [CrossRef]
  30. Kumar, N.; Singh, K.; Lloret, J. WAOA: A hybrid whale-ant optimization algorithm for energy-efficient routing in wireless sensor networks. Comput. Netw. 2024, 254, 110845. [Google Scholar] [CrossRef]
  31. Zhang, L.; Liu, L.; Yang, X.S.; Dai, Y. A novel hybrid firefly algorithm for global optimization. PLoS ONE 2016, 11, e0163230. [Google Scholar] [CrossRef]
  32. Pitchaimanickam, B.; Murugaboopathi, G. A hybrid firefly algorithm with particle swarm optimization for energy efficient optimal cluster head selection in wireless sensor networks. Neural Comput. Appl. 2020, 32, 7709–7723. [Google Scholar] [CrossRef]
  33. Hsieh, F.S. Applying “Two Heads Are Better Than One” Human Intelligence to Develop Self-Adaptive Algorithms for Ridesharing Recommendation Systems. Electronics 2024, 13, 2241. [Google Scholar] [CrossRef]
  34. Hsieh, F.S. Creating Effective Self-Adaptive Differential Evolution Algorithms to Solve the Discount-Guaranteed Ridesharing Problem Based on a Saying. Appl. Sci. 2025, 15, 3144. [Google Scholar] [CrossRef]
  35. Hsieh, F.S. Comparison of a hybrid firefly–Particle swarm optimization algorithm with six hybrid firefly–Differential evolution algorithms and an effective cost-saving allocation method for ridesharing recommendation systems. Electronics 2024, 13, 324. [Google Scholar] [CrossRef]
  36. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  37. Ho, Y.C.; Pepyne, D.L. Simple explanation of the no-free-lunch theorem and its implications. J. Optim. Theory Appl. 2002, 115, 549–570. [Google Scholar] [CrossRef]
  38. Alberdi, R.; Khandelwal, K. Comparison of robustness of metaheuristic algorithms for steel frame optimization. Eng. Struct. 2015, 102, 40–60. [Google Scholar] [CrossRef]
  39. Arandhakar, S.; Chaudhary, N.; Depuru, S.R.; Dubey, R.K.; Bhukya, M.N. Analysis and implementation of robust metaheuristic algorithm to extract essential parameters of solar cell. IEEE Access 2021, 10, 40079–40092. [Google Scholar] [CrossRef]
  40. Hughes, M.; Goerigk, M.; Dokka, T. Particle swarm metaheuristics for robust optimisation with implementation uncertainty. Comput. Oper. Res. 2020, 122, 104998. [Google Scholar] [CrossRef]
  41. Zeb, A.; Din, F.; Fayaz, M.; Mehmood, G.; Zamli, K.Z. A systematic literature review on robust swarm intelligence algorithms in search-based software engineering. Complexity 2023, 2023, 4577581. [Google Scholar] [CrossRef]
  42. Gong, W.; Cai, Z.; Ling, C.X. ODE: A fast and robust differential evolution based on orthogonal design. In Australasian Joint Conference on Artificial Intelligence; Springer: Berlin/Heidelberg, Germany, 2006; pp. 709–718. [Google Scholar]
  43. Guzman-Gaspar, J.Y.; Mezura-Montes, E. Differential evolution variants in robust optimization over time. In Proceedings of the 2019 International Conference on Electronics, Communications and Computers, Cholula, Mexico, 27 February–1 March 2019; pp. 164–169. [Google Scholar]
  44. Souza, I.P.; Boeres, M.C.S.; Moraes, R.E.N. A robust algorithm based on differential evolution with local search for the capacitated vehicle routing problem. Swarm Evol. Comput. 2023, 77, 101245. [Google Scholar] [CrossRef]
  45. Liu, Y.; Peng, Y.; Wang, B.; Yao, S.; Liu, Z. Review on cyber-physical systems. IEEE/CAA J. Autom. Sin. 2017, 4, 27–40. [Google Scholar] [CrossRef]
  46. Nilsson, N.J. Artificial Intelligence: A New Synthesis; Morgan Kaufmann: San Francisco, CA, USA, 1998. [Google Scholar]
  47. Vogel-Heuser, B.; Diedrich, C.; Pantforder, D.; Gohner, P. Coupling heterogeneous production systems by a multi-agent based cyber-physical production system. In Proceedings of the 2014 12th IEEE International Conference on Industrial Informatics (INDIN), Porto Alegre, Brazil, 27–30 July 2014; pp. 713–719. [Google Scholar]
  48. Latsou, C.; Farsi, M.; Erkoyuncu, J.A.; Morris, G. Digital twin integration in multi-agent cyber physical manufacturing systems. IFAC-Pap. 2021, 54, 811–816. [Google Scholar] [CrossRef]
  49. Lee, J.; Bagheri, B.; Kao, H.A. A Cyber-Physical Systems Architecture for Industry 4.0-Based Manufacturing Systems. Manuf. Lett. 2015, 3, 18–23. [Google Scholar] [CrossRef]
  50. Rossit, D.A.; Tohmé, F.; Frutos, M. Production planning and scheduling in Cyber-Physical Production Systems: A review. Int. J. Comput. Integr. Manuf. 2019, 32, 385–395. [Google Scholar] [CrossRef]
  51. Murata, T. Petri nets: Properties, analysis and applications. Proc. IEEE 1989, 77, 541–580. [Google Scholar] [CrossRef]
  52. Hsieh, F.S. A Self-Adaptive Neighborhood Search Differential Evolution Algorithm for Planning Sustainable Sequential Cyber–Physical Production Systems. Appl. Sci. 2024, 14, 8044. [Google Scholar] [CrossRef]
  53. Deb, K. An efficient constraint handling method for genetic algorithms. Comput. Methods Appl. Mech. Eng. 2000, 186, 311–338. [Google Scholar] [CrossRef]
  54. Qin, A.K.; Suganthan, P.N. Self-adaptive differential evolution algorithm for numerical optimization. In Proceedings of the 2005 IEEE Congress on Evolutionary Computation, Edinburgh, UK, 2–5 September 2005; Volume 2, pp. 1785–1791. [Google Scholar]
  55. Zhang, J.; Sanderson, A.C. JADE: Adaptive differential evolution with optional external archive. IEEE Trans. Evol. Comput. 2009, 13, 945–958. [Google Scholar] [CrossRef]
  56. Tanabe, R.; Fukunaga, A. Success-history based parameter adaptation for differential evolution. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; pp. 71–78. [Google Scholar]
  57. Tanabe, R.; Fukunaga, A.S. Improving the search performance of SHADE using linear population size reduction. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; pp. 1658–1665. [Google Scholar]
  58. Opara, K.; Arabas, J. Comparison of mutation strategies in differential evolution–a probabilistic perspective. Swarm Evol. Comput. 2018, 39, 53–69. [Google Scholar] [CrossRef]
Figure 1. A Multi-agent system architecture for a CPPS with N process agents and I resource agents.
Figure 2. Cyber World models for three operations, O 1 , O 2 , O 3 , and the Cyber World model O for a process agent with the three operations.
Figure 3. Cyber World models for activities A 11 , A 12 , A 13 , A 21 , A 22 , A 23 , A 31 , A 41 of four resource agents.
Figure 4. Two configurations of a CPPS: (a) the Cyber World model for Configuration 1 is Ψ 1 = O A 12 A 21 A 31 ; (b) the Cyber World model for Configuration 2 is Ψ 2 = O A 11 A 41 .
Figure 5. Two infeasible configurations of a CPPS, where the Cyber World model for (a) Configuration 3 is Ψ 1 = O A 12 A 21 and the Cyber World model for (b) Configuration 4 is Ψ 2 = O A 11 .
Figure 6. Comparison of the mean fitness function values of SaNSDE(1,2), DE-1, and DE-2 for a population size of 30.
Figure 7. Comparison of mean fitness function values of SaNSDE(1,3), DE-1, and DE-3 for a population size of 30.
Figure 8. Comparison of mean fitness function values of SaNSDE(1,4), DE-1, and DE-4 for a population size of 30.
Figure 9. Comparison of mean fitness function values of SaNSDE(2,3), DE-2, and DE-3 for a population size of 30.
Figure 10. Comparison of mean fitness function values of SaNSDE(2,4), DE-2, and DE-4 for a population size of 30.
Figure 11. Comparison of mean fitness function values of SaNSDE(3,4), DE-3, and DE-4 for a population size of 30.
Figure 12. Comparison of mean fitness function values of SaNSDE(1,2), DE-1, and DE-2 for a population size of 50.
Figure 13. Comparison of mean fitness function values of SaNSDE(1,3), DE-1, and DE-3 for a population size of 50.
Figure 14. Comparison of mean fitness function values of SaNSDE(1,4), DE-1, and DE-4 for a population size of 50.
Figure 15. Comparison of mean fitness function values of SaNSDE(2,3), DE-2, and DE-3 for a population size of 50.
Figure 16. Comparison of mean fitness function values of SaNSDE(2,4), DE-2, and DE-4 for a population size of 50.
Figure 17. Comparison of mean fitness function values of SaNSDE(3,4), DE-3, and DE-4 for a population size of 50.
Table 1. Notation list for modeling and formulating the problem.
Variable/Model/Symbol   Meaning
K The total number of operations.
k The index of an operation, where k { 1 , 2 , 3 , .... , K }
W The set of process agents
n The index of a process agent, where n W
ω An upper bound on total processing time
d k If operation k is required in the requirements of process agent n , d k set to 1. Otherwise, d k is set to 0.
e q e q denotes the requirements of the process agent n . e q = ( d 1 , d 2 , d 3 , , d K , ω ) , where d k is equal to 1 if operation k is required to be performed, d k is equal to 0 otherwise, and the overall processing time must be less than or equal to ω .
P N = ( P , T , F , m 0 , μ ) A Discrete Timed Petri Net (DTPN) P N = ( P , T , F , m 0 , μ ) , abbreviated as P N , is five tuples with a set of places, P , a set of transitions, T , a set of flow relations, F ( P × T ) ( T × P ) an initial marking, m 0 , and a function μ : T Z defining the firing time for each transition in T , where Z is a set of nonnegative integers.
O k = ( P k , T k , F k , m k 0 , μ k ) O k = ( P k , T k , F k , m k 0 , μ k ) is a DTPN model for operation k , abbreviated as O k , where T k = , T k = { t s k , t e k } , P k =   { p k b } , t s k is the start transition, t e k is the end transition, p k b is the busy place of operation k , and μ k is a function defining the firing time of transitions in T k .
A composition operation to combine two DTPNs, P N 1 =   ( P 1 ,   T 1 ,   F 1 ,   m 10 ,   μ 1 )   and   P N 2 =   ( P 2 ,   T 2 ,   F 2 ,   m 20 ,   μ 2 ) , into one DTPN, P N 1 P N 2 = ( P , T , F , m 0 , μ ), where P = P 1 P 2 ,   T = T 1 T 2 ,   F ( p , t ) = F 1 ( p , t )   i f   p P 1   a n d   t T 1 F 2 ( p , t )   i f   p P 2   a n d   t T 2   ,   F ( t , p ) = F 1 ( t , p )   i f   p P 1   a n d   t T 1 F 2 ( t , p )   i f   p P 2   a n d   t T 2   and   m 0 ( p ) = m 10 ( p )   i f   p P 1 m 20 ( p )   i f   p P 2 .
O = k { k { 1 , 2 , , K } d k = 1 } O k O = k { k { 1 , 2 , , K } d k = 1 } O k is an acyclic DTPN that represents the Cyber World model of process agent n .
I The number of resource agents
R The set of indices of resource agents, where R = { 1 , 2 , 3 , .... , I }
a The index of a resource agent, where a R
A a j = ( P a j , T a j , F a j , m a j 0 , μ a j ) A a j = ( P a j , T a j , F a j , m a j 0 , μ a j ) is a DTPN to represent the Cyber World model of the j -th activity of resource agent a . The initial marking m a j 0 ( r a ) specifies the number of available resources in the idle state place r a , where a R . There is no common transition between A a j and A a k j for j j .
j The index of a resource agent’s bid
J a The number of bids submitted by resource agent a
o a j k o a j k is one if operation k can be performed by agent a in the j t h bid, o a j k is zero otherwise.
B a j The j t h bid submitted by agent a with B a j =   ( o a j 1 , o a j 2 , o a j 3 , , o a j K , τ a j , e a j ) , where o a j k = 1   if   operation     k   can   be   performed     by   resource       agent     a 0   otherwise
τ a j is the overall processing time for the specified operations performed in the bid, and e a j is the overall energy consumption required to perform the operations specified in B a j .
B Notation to represent the set of all bids, B a j , where a R and j { 1 , 2 , 3 , .... , J a }
q a k The maximum number of times that operation k can be performed by agent a
x a j A decision variable, where a R and j { 1 , 2 , 3 , .... , J a } . The value of x a j is one if the j t h bid of agent a is accepted and is zero otherwise.
Γ ( O , x , B ) Γ ( O , x , B ) is a function to calculate the total processing time of a configuration c ( O , x , B ) of process agent n .
Ψ = ( P , T , F , m 0 , μ ) The Cyber World model for a configuration of process agent n ; Ψ = ( P , T , F , m 0 , μ ) = O x a j = 1   j { 1 , 2 , , J a } a R   A a j
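As a data-structure illustration of the composition operation defined in Table 1, the sketch below composes two DTPNs represented as Python dictionaries by taking the union of their places and transitions and merging their flow relations, initial markings, and firing-time functions. It is a simplified view under these assumptions, not the modeling code used in the paper.

```python
# Illustrative composition of two Discrete Timed Petri Nets (DTPNs).
# Each net is a dict with places "P" (set), transitions "T" (set), flow relation "F"
# (dict mapping (node, node) arcs to weights), initial marking "m0" (dict), and
# firing times "mu" (dict), mirroring the definition in Table 1.
def compose(pn1, pn2):
    return {
        "P": pn1["P"] | pn2["P"],
        "T": pn1["T"] | pn2["T"],
        "F": {**pn1["F"], **pn2["F"]},     # arcs inherited from the net they belong to
        "m0": {**pn1["m0"], **pn2["m0"]},  # marking inherited from the net of each place
        "mu": {**pn1["mu"], **pn2["mu"]},  # firing time inherited from the net of each transition
    }
```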
Table 2. Notations used in the algorithm.
Parameter/Variable   Meaning
D D = R J a , which is the problem dimension (the total number of decision variables x a j ,   where   a R   and   j { 1 , 2 , 3 , .... , J a }
G The total number of generations
g The g - th   generation ,   where   g { 1 , 2 , 3 , , G }
L P The learning period
f p A   probability   parameter   for   selecting   the   type   of   probability   distribution   used   to   generate   scale   factor   F i and select the mutation strategy used to calculate the mutant vector
c r i The crossover probability for the i -th individual
β A weighting factor to update the crossover probability
L An   array   list   for   recording   the   crossover   probability   c r i of the i -th individual with which the trial vector has successfully replaced the i -th individual
N P The population size
z i A binary vector with dimension D to represent the i -th individual in the population, with z i d representing the element in the d -th dimension of z i
z b The best individual in the current population
r 1 ,   r 2 ,   r 3   and   r 4 Four distinct random integers generated between 1 and N P .
v i Mutant vector corresponding to z i , with v i d representing the element in the d -th dimension of v i
u i Trial vector, with u i d representing the element in the d -th dimension of u i
u ¯ i The binary trial vector corresponding to u i
s The index of a strategy, where s S
S The set of the indices of all strategies used in hybridization
S C s The success event counter of strategy s
F C s The failure event counter of strategy s
V max The boundary used in the binary transformation function
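Because the decision variables are binary, the real-valued trial vector u_i must be mapped to the binary trial vector listed above. The sketch below shows one common sigmoid-style binarization with the trial values clamped to the boundary V_max; the exact transformation used in the paper may differ, so this is an assumption-laden illustration rather than the paper's implementation.

```python
# Hedged sketch of a sigmoid-based binary transformation clamped at V_max
# (the default boundary value below is arbitrary and only for illustration).
import math
import random

def binarize(u, v_max=4.0, rand=random.random):
    """Map a real-valued trial vector u to a binary trial vector."""
    binary = []
    for value in u:
        value = max(-v_max, min(v_max, value))   # clamp to [-V_max, V_max]
        prob = 1.0 / (1.0 + math.exp(-value))    # sigmoid gives the probability of a 1
        binary.append(1 if rand() < prob else 0)
    return binary
```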
Table 3. Candidate DE strategies for creating mutant vectors.
DE Strategy   Formula to Calculate Mutant Vector
s = 1   $v_i^d = z_{r_1}^d + F_i (z_{r_2}^d - z_{r_3}^d)$   (19)
s = 2   $v_i^d = z_b^d + F_i (z_{r_2}^d - z_{r_3}^d)$   (20)
s = 3   $v_i^d = z_{r_1}^d + F_i (z_{r_2}^d - z_{r_3}^d) + F_i (z_{r_4}^d - z_{r_5}^d)$   (21)
s = 4   $v_i^d = z_b^d + F_i (z_{r_1}^d - z_{r_2}^d) + F_i (z_{r_3}^d - z_{r_4}^d)$   (22)
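To make the four candidate strategies concrete, the following sketch implements Equations (19)–(22) for a whole mutant vector at once using NumPy. The population z, the index of the best individual, and the handling of the random indices are illustrative assumptions rather than the paper's code.

```python
# Sketch of the four DE mutation strategies in Table 3 (Equations (19)-(22)).
import numpy as np

def mutant_vector(strategy, z, best_idx, i, F, rng=np.random.default_rng()):
    """Return the mutant vector v_i for individual i using the selected strategy."""
    # Draw five mutually distinct random indices different from i.
    candidates = [j for j in range(len(z)) if j != i]
    r1, r2, r3, r4, r5 = rng.choice(candidates, size=5, replace=False)
    zb = z[best_idx]
    if strategy == 1:    # Equation (19), commonly written as DE/rand/1
        return z[r1] + F * (z[r2] - z[r3])
    if strategy == 2:    # Equation (20), commonly written as DE/best/1
        return zb + F * (z[r2] - z[r3])
    if strategy == 3:    # Equation (21), commonly written as DE/rand/2
        return z[r1] + F * (z[r2] - z[r3]) + F * (z[r4] - z[r5])
    if strategy == 4:    # Equation (22), commonly written as DE/best/2
        return zb + F * (z[r1] - z[r2]) + F * (z[r3] - z[r4])
    raise ValueError("strategy must be in {1, 2, 3, 4}")
```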
Table 4. Six self-adaptive hybrid DE algorithms based on two strategies.
Self-Adaptive Hybrid DE Algorithm   Strategy s1   Strategy s2
SaNSDE(1,2)   1   2
SaNSDE(1,3)   1   3
SaNSDE(1,4)   1   4
SaNSDE(2,3)   2   3
SaNSDE(2,4)   2   4
SaNSDE(3,4)   3   4
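The six hybrid algorithms in Table 4 are exactly the unordered pairs of distinct strategies drawn from the four candidates, which the short snippet below reproduces.

```python
# The six strategy pairs (s1, s2) used to construct SaNSDE(s1, s2) in Table 4.
from itertools import combinations

strategy_pairs = list(combinations([1, 2, 3, 4], 2))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```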
Table 5. Parameters used in the SaNSDE(s1, s2), DE(s1), DE(s2), NSDE, and PSO algorithms.
Algorithm   Setting 1 of Parameters   Setting 2 of Parameters
SaNSDE(s1, s2)   NP = 30, G = 10,000, LP = 1000, β = 0.1   NP = 50, G = 10,000, LP = 1000, β = 0.1
DE(s1)   NP = 30, G = 10,000, cr = 0.5, F_i generated from U(0, 2)   NP = 50, G = 10,000, cr = 0.5, F_i generated from U(0, 2)
DE(s2)   NP = 30, G = 10,000, cr = 0.5, F_i generated from U(0, 2)   NP = 50, G = 10,000, cr = 0.5, F_i generated from U(0, 2)
NSDE   NP = 30, G = 10,000, cr = 0.5, F_i = 0.5 r1 + 0.5, where r1 is generated from the Gaussian distribution N(0, 1)   NP = 50, G = 10,000, cr = 0.5, F_i = 0.5 r1 + 0.5, where r1 is generated from the Gaussian distribution N(0, 1)
PSO   NP = 30, G = 10,000, c1 = 0.4, c2 = 0.6, ω = 0.4   NP = 50, G = 10,000, c1 = 0.4, c2 = 0.6, ω = 0.4
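The scale-factor settings in Table 5 can be restated as two small samplers: the standard DE variants draw F_i from the uniform distribution U(0, 2), while NSDE sets F_i = 0.5·r1 + 0.5 with r1 drawn from the standard Gaussian distribution N(0, 1). The snippet below only restates these settings for clarity.

```python
# Scale-factor sampling corresponding to the settings in Table 5.
import random

def scale_factor_de():
    """F_i for the standard DE variants: drawn from U(0, 2)."""
    return random.uniform(0.0, 2.0)

def scale_factor_nsde():
    """F_i for NSDE: 0.5 * r1 + 0.5 with r1 ~ N(0, 1)."""
    return 0.5 * random.gauss(0.0, 1.0) + 0.5
```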
Table 7. Comparison of mean fitness function values/mean number of generations of SaNSDE(1,2) and SaNSDE(1,3) with the corresponding DE algorithms for a population size of 30.
Test Case   I   K   SaNSDE(1,2)   DE-1   DE-2   SaNSDE(1,3)   DE-1   DE-3
143321/5.3321/10.6321/653321/7.2321/10.6321/88.6
25542/10.777842/8.442/341.542/8.777842/8.441.5/19.6
35555/42.222255/71.949/876.555/415.333355/71.950.5/1837.1
41010542.2/9.6542.2/9.6542.2/333.2542.2/21.4542.2/9.6540.6/24.9
51010730.9/5403.5731.05/3521.3641.42/4200.1711.67/5612.4731.05/3521.3724.24/1625.1
610201749/3855.31724.9/30531705.4/3283.61713.1/4589.61724.9/30531699.6/6563
720201676.9/2590.31645.5/4361.91370.3/3447.41514.9/48661645.5/4361.91509.3/7857
83010731/2039.2731/977484.1/1186.4731/361.3731/977724.9/2582.3
93020802/3118779.4/289.9741.4/290802/349.7779.4/289.9802/1128.9
104010792.3/2806.3792.3/1407.1739.6/3086.4797/1667.6792.3/1407.1783.7/2071
Table 8. Comparison of mean fitness function values/mean number of generations of SaNSDE(1,4) and SaNSDE(2,3) with the corresponding DE algorithms for a population size of 30.
Test Case   I   K   SaNSDE(1,4)   DE-1   DE-4   SaNSDE(2,3)   DE-2   DE-3
143321/7.3321/10.6320.4/1126321/7.7321/653321/88.6
25542/8.666742/8.441.5/684.842/9.22242/341.541.5/19.6
35555/125.333355/71.951/1645.855/299.666749/876.550.5/1837.1
41010542.2/7.6542.2/9.6542.2/397.7542.2/19.1542.2/333.2540.6/24.9
51010725.19/3807731.05/3521.3705.95/2988.4717.63/3833.8641.42/4200.1724.24/1625.1
610201733.2/2687.21724.9/30531620.8/3037.61727.2/3073.71705.4/3283.61699.6/6563
720201554.2/3601.81645.5/4361.91454.9/4874.91542.1/958.11370.3/3447.41509.3/7857
83010731/855.3731/977584.6/1425.7728.8/1253.3484.1/1186.4724.9/2582.3
93020802/339.1779.4/289.9710.3/1926802/432.4741.4/290802/1128.9
104010797/1754.4792.3/1407.1759.8/1151.1797/733.2739.6/3086.4783.7/2071
Table 9. Comparison of mean fitness function values/mean number of generations of SaNSDE(2,4) and SaNSDE(3,4) with the corresponding DE algorithms for a population size of 30.
Test Case   I   K   SaNSDE(2,4)   DE-2   DE-4   SaNSDE(3,4)   DE-3   DE-4
143321/336.5321/653320.4/1126321/6.8321/88.6320.4/1126
25542/10,444442/341.541.5/684.842/71.555641.5/19.641.5/684.8
35555/752.777849/876.551/1645.855/397.222250.5/1837.151/1645.8
41010542.2/123.2542.2/333.2542.2/397.7542.2/11.3540.6/24.9542.2/397.7
51010713.04/4824.7641.42/4200.1705.95/2988.4716.63/4309724.24/1625.1705.95/2988.4
610201722.5/4752.91705.4/3283.61620.8/3037.61719.2/4500.21699.6/65631620.8/3037.6
720201500.8/4482.51370.3/3447.41454.9/4874.91522.9/749.41509.3/78571454.9/4874.9
83010731/3943.8484.1/1186.4584.6/1425.7731/943.4724.9/2582.3584.6/1425.7
93020802/1679741.4/290710.3/1926802/605.4802/1128.9710.3/1926
104010797/4613.8739.6/3086.4759.8/1151.1794.8/2157.2783.7/2071759.8/1151.1
Table 10. Comparison of mean fitness function values/mean number of generations of SaNSDE(1,2) and SaNSDE(1,3) with the corresponding DE algorithms for a population size of 50.
Test Case   I   K   SaNSDE(1,2)   DE-1   DE-2   SaNSDE(1,3)   DE-1   DE-3
143321/4.6321/7.7321/138.8321/5.4321/7.7321/17.8
25542/14.742/9.741.5/526.442/9.242/9.742/743.2
35555/278.455/13347.5/4147.655/66.455/13349.9/1808.3
41010542.2/11.1542.2/342.3540.6/41.6542.2/12.3542.2/342.3530.12/907.3
51010730.67/4172.1715.36/3430.6627.7/3291.1715.73/5473.8715.36/3430.6727.83/3778.6
610201759.9/3700.81734.1/3210.31726.6/2628.11719.6/4704.71734.1/3210.31693/5468.9
720201684.2/3757.31646.1/4503.71300.7/3689.61492.7/4965.41646.1/4503.71328.9/7119.9
83010731/1031.1720.8/460.1541.9/1235.6731/537.1720.8/460.1723.1/2999.5
93020802/993.6777.7/622.8745.4/2565.8802/229.5777.7/622.8802/707.8
104010797/2434.3787.9/976722.1/902797/849787.9/976774.3/3339.9
Table 11. Comparison of mean fitness function values/mean number of generations of SaNSDE(1,4) and SaNSDE(2,3) with the corresponding DE algorithms for a population size of 50.
Test Case   I   K   SaNSDE(1,4)   DE-1   DE-4   SaNSDE(2,3)   DE-2   DE-3
143321/3.7321/7.7321/1107.2321/4.9321/138.8321/17.8
25542/8.342/9.742/192.542/6.741.5/526.442/743.2
35555/103.255/13344.9/1376.255/44.647.5/4147.649.9/1808.3
41010542.2/5.7542.2/342.3535.66/976542.2/7.9540.6/41.6530.12/907.3
51010722.99/3059.3715.36/3430.6648.74/2632.6715.48/3210.7627.7/3291.1727.83/3778.6
610201727.6/1864.71734.1/3210.31711.7/4546.31722.1/3864.11726.6/2628.11693/5468.9
720201596.7/1930.31646.1/4503.71382/4348.71530.3/858.11300.7/3689.61328.9/7119.9
83010731/353.7720.8/460.1714.6/3235731/616.2541.9/1235.6723.1/2999.5
93020802/141.1777.7/622.8774.9/879.5802/410.9745.4/2565.8802/707.8
104010797/1024.2787.9/976771.8/3505.2797/1572.4722.1/902774.3/3339.9
Table 12. Comparison of mean fitness function values/mean number of generations of SaNSDE(2,4) and SaNSDE(3,4) with the corresponding DE algorithms for a population size of 50.
Test Case   I   K   SaNSDE(2,4)   DE-2   DE-4   SaNSDE(3,4)   DE-3   DE-4
143321/3.9321/138.8321/1107.2321/24.7321/17.8321/1107.2
25542/8.841.5/526.442/192.542/6.242/743.242/192.5
35555/509.847.5/4147.644.9/1376.255/118.349.9/1808.344.9/1376.2
41010542.2/25.7540.6/41.6535.66/976542.2/11.3530.12/907.3535.66/976
51010707.39/3115.7627.7/3291.1648.74/2632.6720.54/2908.9727.83/3778.6648.74/2632.6
610201716.2/3512.11726.6/2628.11711.7/4546.31718.7/4544.51693/5468.91711.7/4546.3
720201525.9/4475.31300.7/3689.61382/4348.71439.1/454.11328.9/7119.91382/4348.7
83010728.8/4703.3541.9/1235.6714.6/3235728.8/716723.1/2999.5714.6/3235
93020802/2054.1745.4/2565.8774.9/879.5802/223.5802/707.8774.9/879.5
104010797/3403.5722.1/902771.8/3505.2797/2018.4774.3/3339.9771.8/3505.2
Table 13. Comparison of standard deviation of fitness function values of SaNSDE(1,2) and SaNSDE(1,3) with the corresponding DE algorithms for a population size of 30.
Test Case   I   K   SaNSDE(1,2)   DE-1   DE-2   SaNSDE(1,3)   DE-1   DE-3
143000000
255000001.5811
355005.1639005.986
41010000005.0596
510107.5062922.1143112.88199.0375822.114316.5798
6102011.925664.545396.486212.368864.545381.4278
7202029.2363219.7454607.285247.9176219.7454244.3599
8301000339.57530013.2199
93020037.830650.2354037.83060
10401014.862714.962785.3661014.962721.5254
Table 19. Average fitness values and average number of generations obtained by the six self-adaptive hybrid DE algorithms and three existing algorithms for a population size of 30.
Test Case   I   K   SaNSDE(1,2)   SaNSDE(1,3)   SaNSDE(1,4)   SaNSDE(2,3)   SaNSDE(2,4)   SaNSDE(3,4)   SaNSDE [52]   NSDE   PSO
143321/5.3321/7.2321/7.3321/7.7321/336.5321/6.8321/6321/16.3321/14.1
25542/10.777842/8.777842/8.666742/9.22242/10,444442/71.555642/10.142/740.942/24
35555/42.222255/415.333355/125.333355/299.666755/752.777855/397.222255/3754/304.355/1134.7
41010542.2/9.6542.2/21.4542.2/7.6542.2/19.1542.2/123.2542.2/11.3542.2/13.9533.05/43.8542.2/45.9
51010730.9/5403.5711.67/5612.4725.19/3807717.63/3833.8713.04/4824.7716.63/4309721.56/1551.2620.56/163.1495.24/4441.8
610201749/3855.31713.1/4589.61733.2/2687.21727.2/3073.71722.5/4752.91719.2/4500.21728.4/2322.61648.8/45891371.2/5215.7
720201676.9/2590.31514.9/48661554.2/3601.81542.1/958.11500.8/4482.51522.9/749.41598.3/1268.31214.1/4732326.3/5634.2
83010731/2039.2731/361.3731/855.3728.8/1253.3731/3943.8731/943.4731/558.9731/2373.5594/4288.1
93020802/3118802/349.7802/339.1802/432.4802/1679802/605.4802/519.9802/379.1486.2/5297.2
104010792.3/2806.3797/1667.6797/1754.4797/733.2797/4613.8794.8/2157.2797/1379.6797/2694.4383.4/5878.5
Table 20. Average fitness values and average number of generations obtained by the six self-adaptive hybrid DE algorithms and three existing algorithms for a population size of 50.
Test Case   I   K   SaNSDE(1,2)   SaNSDE(1,3)   SaNSDE(1,4)   SaNSDE(2,3)   SaNSDE(2,4)   SaNSDE(3,4)   SaNSDE [52]   NSDE   PSO
143321/4.6321/5.4321/3.7321/4.9321/3.9321/24.7321/6.2320.4/23.9321/8.2
25542/14.742/9.242/8.342/6.742/8.842/6.242/4.341/61.742/14.9
35555/278.455/66.455/103.255/44.655/509.855/118.355/7452/119.255/1371
41010542.2/11.1542.2/12.3542.2/5.7542.2/7.9542.2/25.7542.2/11.3542.2/11.3542.2/25.5542.2/34.6
51010730.67/4172.1715.73/5473.8722.99/3059.3715.48/3210.7707.39/3115.7720.54/2908.9724.02/1794.6612.07/156.6494.62/6664.5
610201759.9/3700.81719.6/4704.71727.6/1864.71722.1/3864.11716.2/3512.11718.7/4544.51730.5/2311.21637/6304.81381.1/5476.9
720201684.2/3757.31492.7/4965.41596.7/1930.31530.3/858.11525.9/4475.31439.1/454.11576.6/1088.11203.9/4685.2359.7/5671.6
83010731/1031.1731/537.1731/353.7731/616.2728.8/4703.3728.8/716731/279.2731/2092.8586.2/3114.5
93020802/993.6802/229.5802/141.1802/410.9802/2054.1802/223.5802/375.9802/240.3495.9/5033.8
104010797/2434.3797/849797/1024.2797/1572.4797/3403.5797/2018.4797/663.4797/2420.8357.1/4258.6
Table 21. Standard deviation of fitness function values obtained by the six self-adaptive hybrid DE algorithms and three existing algorithms for a population size of 30.
Test Case   I   K   SaNSDE(1,2)   SaNSDE(1,3)   SaNSDE(1,4)   SaNSDE(2,3)   SaNSDE(2,4)   SaNSDE(3,4)   SaNSDE [52]   NSDE   PSO
143000000000
255000000000
35500000003.16220
41010000000021.39870
510107.506299.0375812.0823513.4459416.1106013.7738910.332324.673720.0383
6102011.925612.368818.689216.369320.641129.502527.056316.903635.6426
7202029.236347.917646.913149.787298.133495.109840.724144.388365.0658
830100006.957000051.3917
930200000000089.8749
10401014.862700006.95700111.3943
Table 6. Information regarding bids submitted by four resource agents.
Agent (a)   j   Bid (B_aj)   o_aj1   o_aj2   o_aj3   Processing Time (τ_aj)   Energy Consumption (e_aj)
1   1   B_11   1   0   1   68   6
1   2   B_12   1   0   0   40   4
1   3   B_13   0   0   1   50   5
2   1   B_21   0   0   1   22   2
2   2   B_22   1   0   0   90   9
2   3   B_23   1   0   1   100   10
3   1   B_31   0   1   0   10   1
4   1   B_41   0   1   0   30   3
Table 14. Comparison of standard deviation of fitness function values of SaNSDE(1,4) and SaNSDE(2,3) with the corresponding DE algorithms for a population size of 30.
Test Case   I   K   SaNSDE(1,4)   DE-1   DE-4   SaNSDE(2,3)   DE-2   DE-3
143001.8973000
255001.5811001.5811
355005.163905.16395.986
41010000005.0596
5101012.0823522.114330.630613.44594112.881916.5798
6102018.689264.5453169.274716.369396.486281.4278
7202046.9131219.7454404.564949.7872607.2852244.3599
8301000308.63826.957339.575313.2199
93020037.8306182.6812050.23540
104010014.962753.2015085.366121.5254
Table 15. Comparison of standard deviation of fitness function values of SaNSDE(2,4) and SaNSDE(3,4) with the corresponding DE algorithms for a population size of 30.
Test Case   I   K   SaNSDE(2,4)   DE-2   DE-4   SaNSDE(3,4)   DE-3   DE-4
143001.8973001.8973
255001.581101.58111.5811
35505.16395.163905.9865.1639
4101000005.05960
5101016.11060112.881930.630613.7738916.579830.6306
6102020.641196.4862169.274729.502581.4278169.2747
7202098.1334607.2852404.564995.1098244.3599404.5649
830100339.5753308.6382013.2199308.6382
93020050.2354182.681200182.6812
104010085.366153.20156.95721.525453.2015
Table 16. Comparison of standard deviation of fitness function values of SaNSDE(1,2) and SaNSDE(1,3) with the corresponding DE algorithms for a population size of 50.
Test Case   I   K   SaNSDE(1,2)   DE-1   DE-2   SaNSDE(1,3)   DE-1   DE-3
143000000
255001.5811000
35503.162212.158603.16227.3098
41010005.05960025.5757
510105.582335.32398.82499.818535.32316.0407
6102018.864754.11856.107411.067454.11869.6333
7202045.6333202.1124486.409740.5957202.1124361.1803
83010024.7512292.6231024.751218.0027
93020041.317373.7039041.31730
104010016.1551107.1078016.155121.7411
Table 17. Comparison of standard deviation of fitness function values of SaNSDE(1,4) and SaNSDE(2,3) with the corresponding DE algorithms for a population size of 50.
Test Case   I   K   SaNSDE(1,4)   DE-1   DE-4   SaNSDE(2,3)   DE-2   DE-3
143000000
25500001.58110
35503.162217.1298012.15867.3098
410100020.681205.059625.5757
510108.984335.32392.144116.735198.824916.0407
6102024.748554.11880.896419.404756.107469.6333
7202055.9484202.1124510.466448.9967486.4097361.1803
83010024.751224.7080292.623118.0027
93020041.317347.8294073.70390
104010016.155126.86920107.107821.7411
Table 18. Comparison of standard deviation of fitness function values of SaNSDE(2,4) and SaNSDE(3,4) with the corresponding DE algorithms for a population size of 50.
Test Case   I   K   SaNSDE(2,4)   DE-2   DE-4   SaNSDE(3,4)   DE-3   DE-4
143000000
25501.58110000
355012.158617.129807.309817.1298
4101005.059620.6812025.575720.6812
5101015.422898.824992.144112.735616.040792.1441
6102032.189756.107480.896426.549969.633380.8964
7202049.1719486.4097510.466465.1109361.1803510.4664
830106.957292.623124.7086.95718.002724.708
93020073.703947.82940047.8294
1040100107.107826.8692021.741126.8692
Table 22. Standard deviation of fitness function values obtained by the six self-adaptive hybrid DE algorithms and three existing algorithms for a population size of 50.
Test Case   I   K   SaNSDE(1,2)   SaNSDE(1,3)   SaNSDE(1,4)   SaNSDE(2,3)   SaNSDE(2,4)   SaNSDE(3,4)   SaNSDE [52]   NSDE   PSO
14300000001.89730
25500000002.10810
35500000004.83040
41010000000000
510105.58239.81858.984316.735115.422812.735615.329117.806313.9583
6102018.864711.067424.748519.404732.189726.549929.24718.767527.3229
7202045.633340.595755.948448.996749.171965.110976.495739.767274.9163
8301000006.9576.9570032.1102
9302000000000115.8959
1040100000000090.0067
Table 23. Average rankings of the 13 algorithms obtained by the Friedman test for NP = 30 and NP = 50.
Algorithm (NP = 30)   Ranking   Algorithm (NP = 50)   Ranking
SaNSDE(1,2)   4.55   SaNSDE(1,2)   4.55
SaNSDE(1,4)   4.55   SaNSDE(1,4)   4.55
SaNSDE [52]   4.75   SaNSDE [52]   4.75
DE-1   5.45   DE-1   5.45
SaNSDE(2,3)   5.6   SaNSDE(2,3)   5.6
SaNSDE(2,4)   5.95   SaNSDE(2,4)   5.95
SaNSDE(3,4)   6   SaNSDE(3,4)   6
SaNSDE(1,3)   6.05   SaNSDE(1,3)   6.05
NSDE   8.35   NSDE   8.35
DE-3   9   DE-3   9
DE-2   9.85   DE-2   9.85
PSO   9.95   PSO   9.95
DE-4   10.95   DE-4   10.95
Table 24. Average computation time of the self-adaptive hybrid DE, NSDE, PSO, and DE algorithms for a population size of 30.
Test Case   SaNSDE(1,2)   SaNSDE(1,3)   SaNSDE(1,4)   SaNSDE(2,3)   SaNSDE(2,4)   SaNSDE(3,4)   SaNSDE [52]   NSDE   PSO   DE-1   DE-2   DE-3   DE-4
15173.75184.85180.35159.75139.65125.55330.94740.14662.85390.64750.34755.45390.6
25363.1536053305350.65381.34819.75243.84449.34582.55268.94578.14505.35268.9
36287.16338.26323.45749.26327.26301.86499.65219.75148.66555.85249.35168.56555.8
45900.85924.25949.65987.45945.35905.15839.832.348885801.34814.14715.65801.3
512,482.312,73412,540.9125,77.812,971.312,467.213,112.7117.79915.512,840.58176.87953.812,840.5
615,351.415,338.715,404.715,168.515,284.915,24715,715.627,25910,212.715,412.221,77022,296.515,412.2
725,503.225,846.526,028.426,636.226,163.326,443.425,382.648,832.417,183.925,594.640,499.438,285.725,594.6
810,077.710,287.910,213.910,187.910,131.410,249.810,140.216,24467,69.8950213,127.213,842.99502
912,114.512,598.412,440.412,625.712,282.412,558.912,420.820,359.38059.411,634.514,607.917,361.911,634.5
1011,009.911,305.111,180.811,384.911,188.711,23111,179.719,520.47757.811,154.115,526.115,423.211,154.1
Table 25. Average computation time of the self-adaptive hybrid DE, NSDE, PSO, and DE algorithms for a population size of 50.
Test Case   SaNSDE(1,2)   SaNSDE(1,3)   SaNSDE(1,4)   SaNSDE(2,3)   SaNSDE(2,4)   SaNSDE(3,4)   SaNSDE [52]   NSDE   PSO   DE-1   DE-2   DE-3   DE-4
16059.96014.66027.46013.46008.46046.75846.64404.94427.65124.54451.44468.35124.5
26350.36342.56251.16386.56323.26417.76094.74359.44413.75147.94402.944075147.9
379078082.38066.47957.87917.579737769.24939.44941.26133.54927.94878.66133.5
47229.472607245.57345.57368.47313.37039.426.24806.45877.64722.44870.45877.6
517,340.317,868.717,51617,58817,399.817,567.416,945.1111.19235.612,403.68305.77660.312,403.6
621,97922,014.621,928.421,821.521,716.821,58221,503.226,170.39739.515,020.221,273.821,288.815,020.2
739,251.239,056.838,897.640,669.639,131.640,978.338,440.349,080.717,525.426,038.739,992.240,907.526,038.7
813,681.514,00614,026.813,981.914,046.713,96013,848.716,257.96705.99395.113,274.413,954.99395.1
916,751.91770117,734.117,775.917,62517,748.117,439.920,095.88013.711,226.315,608.617,875.211,226.3
1014,963.115,601.915,466.816,065.215,681.616,018.815,463.219,231.87732.310,821.114,798.816,089.310,821.1
