Article

A Multipopulation Dynamic Adaptive Coevolutionary Strategy for Large-Scale Complex Optimization Problems

1 Faculty of Mechanical and Electrical Engineering, Kunming University of Science and Technology, Kunming 650500, China
2 College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(5), 1999; https://doi.org/10.3390/s22051999
Submission received: 18 January 2022 / Revised: 22 February 2022 / Accepted: 28 February 2022 / Published: 4 March 2022

Abstract: In this paper, a multipopulation dynamic adaptive coevolutionary strategy is proposed for large-scale optimization problems, which can dynamically and adaptively adjust the connections between population particles according to the characteristics of the optimization problem. Based on an analysis of the network evolution characteristics of collaborative search between particles, a dynamic adaptive evolutionary network (DAEN) model with multiple interconnection couplings is established in this algorithm. In the model, the swarm type is divided according to the judgment threshold of particle types, and the dynamic evolution of the collaborative topology during the evolutionary process is adaptively completed according to the coupling connection strength between different particle types, which enhances the algorithm's global and local searching capability and optimization accuracy. On this basis, the evolution rules of the particle swarm dynamic cooperative search network are established, the search algorithm is designed, and adaptive coevolution between particles in different optimization environments is achieved. Simulation results reveal that the proposed algorithm exhibits high optimization accuracy and a fast convergence rate for high-dimensional and large-scale complex optimization problems.

1. Introduction

Many scientific and engineering problems are complex multi-objective optimization problems involving many decision variables and optimization objectives, such as the management and optimal distribution of energy resources [1], short-term load forecasting for power systems [2], reducing the solution time of the joint energy-reserve market clearing problem [3], and wind signal prediction [4]. However, given the hybrid nature of the data in complex problems, it is difficult for model-driven methods to establish accurate models based on prior knowledge, which is an essential limitation. At the same time, traditional methods struggle to adapt to uncertain changes in the search environment and in the problem itself while solving complex optimization problems. In particular, as the dimension of the optimization problem increases, the search space expands exponentially and the probability of finding the optimal solution decreases exponentially, which causes the performance of the algorithm to deteriorate sharply. For example, in [5], trust-tech methods, consensus-based PSO (particle swarm optimization), and local optimization methods are integrated to compute small-dimension benchmark optimization problems. In [6], quasi-opposition-based learning (QOBL) and chaotic local search (CLS) strategies are integrated with SOS (symbiotic organisms search) to solve global optimization problems with higher-quality solutions and faster convergence. The authors of [7] proposed using the repulsive force rule from mimicry physics to maintain particle diversity and improve the global search ability of the algorithm. In short, traditional algorithms mainly enhance global search ability by improving the diversity of population particles, but they struggle with high-dimensional complex optimization problems. Consequently, large-scale optimization algorithms have become a research focus in science and engineering.
In recent years, scholars at home and abroad have mainly pursued two lines of research on large-scale optimization algorithms. The first decomposes large-scale complex problems into lower-dimensional simple problems in order to obtain a good solution in a reasonable time. The second is a nongrouping strategy, which relies on new evolutionary algorithms or on adding local search and tabu search strategies to an existing algorithm, based on the characteristics of large-scale complex problems. Han et al. proposed a dynamic coevolutionary strategy, which integrates the dynamic coevolution mechanism of two probability models and a best-individual inheritance strategy into the compact genetic algorithm [8]. For the nongrouping strategy, Aminbakhsh et al. [9] utilized an adaptive differential evolution operator to solve the local optimization of subproblems and introduced random search mechanisms based on simulated annealing to improve the global searching capability of the algorithm. Liang et al. [10] reported a random dynamic coevolutionary strategy, which was introduced into a dynamic multi-swarm PSO algorithm to realize the dual grouping of population particles and decision variables. Yao et al. [11] made use of repulsive force rules from pseudo-physics to keep the particles diverse and improve the algorithm's global searching capability: when the population enters the global optimal solution region, the gravitational effect is enhanced and the repulsive effect is reduced, so that the gravitational pull of particles with better fitness improves the algorithm's local searching capability. Harrison et al. [12] proposed a parameter-free PSO algorithm based on a prediction model built by machine learning, in which a dynamic grouping strategy and dynamic topology evolution are used to solve large-scale optimization problems. In more recent work [13], a stochastic dynamic coevolution strategy was added to a dynamic multipopulation particle swarm optimization algorithm to realize the double grouping of population particles and decision variables, thereby improving the local search ability and population diversity of the algorithm. The authors of [14] proposed a hybrid topology mixing a fully connected topology with a ring topology, which gives the particles both stronger exploration ability and a fast convergence rate. However, the above methods rarely target high-dimensional complex optimization problems. In this paper, the cooperation relationships and strengths between particles are adjusted adaptively according to the coupling connection strength between different types of particles, in order to improve the algorithm's adaptability to complex and variable optimization environments and thereby overcome the huge space–time cost of solving large-scale complex optimization problems.
Thus, this paper studies a dynamic adaptive coevolution strategy for high-dimensional complex optimization problems, in which particles are divided into model particles, which guide the whole population to evolve toward the optimal value, and ordinary particles, which guide the population to explore new search directions. According to the cooperation weight between particles and the response degree of connecting nodes, the two kinds of particles continuously adjust their node connection strength in order to complete the population's evolution from a fully-connected topology at the early stage of evolution to a ring-like topology at the later stage. Based on an analysis of the network evolution characteristics of collaborative search between particles, a dynamic adaptive evolutionary network (DAEN) model with multiple interconnection couplings is established. In the model, the swarm type is divided according to the judgment threshold of particle types, and the dynamic evolution of the collaborative topology during the evolutionary process is adaptively completed according to the coupling connection strength between different particle types, which enhances the algorithm's global and local searching capability and optimization accuracy.
The contribution of the paper is the adaptive adjustment of the evolutionary topology according to particle node connection strength at different stages of evolution. In particular, the rich evolution rules are beneficial for large-scale complex optimization problems such as sensor network node deployment, dynamic compensation of acceleration sensors, optimal sensor configuration, and so on. For example, particle swarm optimization can be applied to the layout of sensor network nodes, using its global optimization ability to maximize network coverage. It can also be used for dynamic compensation of an acceleration sensor, expanding its frequency range to meet the needs of dynamic measurement. In addition, to improve the accuracy of results in dynamic testing, particle swarm optimization can optimize the configuration of sensors, determining the optimal number of sensors and placing them in the optimal positions.
This paper is organized as follows: the problem to be studied is stated in Section 2. The DAEN model and the undirected weighted DAEN evolution rules for coevolutionary particle swarm optimization are defined in Section 3. Large-scale complex optimization experiments are given in Section 4, and conclusions are drawn in Section 5.

2. Description of Large-Scale Complex Optimization Problems

Large-scale complex optimization problems are often nondifferentiable and nonlinear, and the curse of dimensionality is likely to be encountered when solving them by continuous iteration [15]. To avoid a huge computational time and space cost in solving such problems, an algorithm's optimization accuracy, convergence speed, and solution success rate must be improved. The large-scale complex optimization problem is expressed by the following formula:
$$\begin{cases} \min f(x) \; / \; \max f(x) \\ X_i = (x_{i1}, x_{i2}, \ldots, x_{iD}) \\ \text{s.t.} \;\; \Omega \end{cases}$$
where $\min f(x) / \max f(x)$ is the objective function of the optimization problem; in the single-objective case it can be understood as a real-valued continuous nonlinear objective function mapping from the D-dimensional space to a one-dimensional fitness value. $X_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$ is a candidate solution subject to the boundary constraint $\Omega$, and D is the number of decision variables, that is, the dimension of the optimization problem. In the large-scale setting, D is generally greater than 100, often reaching 1000 dimensions or more, and each $x_{id}$ is a decision variable.
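As a concrete illustration of this formulation, the following minimal Python sketch sets up a 1000-dimensional box-constrained objective; the sphere function and the bound values are placeholder assumptions for illustration, not part of the paper:

```python
import numpy as np

D = 1000                      # number of decision variables (dimension)
lower, upper = -100.0, 100.0  # boundary constraint Omega, per variable

def f(x):
    # Placeholder objective: real-valued mapping from R^D to a fitness value.
    return np.sum(x ** 2)

# One candidate solution X_i = (x_i1, ..., x_iD) sampled inside the bounds.
x_i = np.random.uniform(lower, upper, size=D)
print(f(x_i))
```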

3. Coevolutionary Particle Swarm Optimization Algorithm Based on DAEN

As the dimension of a large-scale optimization problem increases, its time-varying behavior exhibits multiscale characteristics. If a fully connected scheme is adopted, the search falls into local optima more easily and the performance of the particle swarm optimization algorithm degrades rapidly, so it is difficult to apply this method directly to large-scale complex optimization problems. To solve this problem, we need to improve and extend the population optimization model and establish information interaction and association rules between different search tasks and cooperating populations.
Specifically, the coupling degree of evolution within and between communities can be reduced through the coevolution mechanism, with node strength representing the cooperation strength among communities. In particular, collaborative rules are used to trigger the multicommunity collaborative search process, and the scalability and adaptability of the algorithm are improved through the dynamic reorganization of cooperation relationships. On the other hand, a parallel implementation mechanism is adopted to set up a storage area for the global optimal locations of community members, which allows each search process to iterate asynchronously. Moreover, the iterative results of each step are broadcast to the other processes, which reduces interprocess communication and effectively improves the optimization efficiency of the algorithm. Consequently, the DAEN model and the undirected weighted DAEN evolution rules for coevolutionary particle swarm optimization are presented in this section. The coevolutionary flowchart is shown in Figure 1.
  • Dynamic adaptive evolutionary network model based on topological connection strength. It is well-known that a network can be regarded as the combination of vertex set and edge set. Thus, we have used the edge to represent the connection between particles, which can describe the cooperative search relationship between particles, and analyze its adaptive cooperative evolution law. Following this idea, the particles were divided into model particles and ordinary particles according to the threshold value of particle type, where the model particles have strong local optimization ability, and ordinary particles have strong global exploration ability. On this basis, the topological connection relationship between different particles was established, and the cooperation and optimization ability of particles were comprehensively evaluated by the distance vector and connection strength between particles, where the evolution rules of topological connection among particles were formulated to form a self-adaptive evolutionary network model that adapts to the environmental changes of large-scale complex optimization problems.
  • Algorithm execution model. During the algorithm execution, the topology connection relationship among particles was adaptively adjusted according to the complex search environment, and the current optimal location and global optimal location storage area was set. Thus, the new global optimal position obtained was sent to other processes in the form of broadcast of the asynchronous iteration process, which was calculated as the current generation global optimal value. Consequently, the process communication could be reduced, and the optimization efficiency of the algorithm was improved while conforming to the biological mechanism of particle swarm optimization.

3.1. Standard Particle Swarm Algorithm

The PSO algorithm was inspired by social animals such as flocks of birds and schools of fish. PSO is initialized with a set of random solutions and searches for the optimal solution through generational updates [16].
Consider m particles in a D-dimensional search space. The position of particle i (i = 1, 2, …, m) is $X_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$. The best position particle i has experienced is recorded as $P_i = (p_{i1}, p_{i2}, \ldots, p_{iD})$, also known as the particle extremum (pbest). The best position experienced by all particles in the population is $P_g = (p_{g1}, p_{g2}, \ldots, p_{gD})$, also known as the global extremum (gbest). Particle velocity is expressed as $V_i = (v_{i1}, v_{i2}, \ldots, v_{iD})$. In each generation, the particles update themselves by tracking these two extrema; that is, the particles evolve according to the following formulas.
$$v_{id}^{t+1} = \omega \cdot v_{id}^{t} + c_1 \cdot rand_1(\,) \cdot (P_{id}^{t} - x_{id}^{t}) + c_2 \cdot rand_2(\,) \cdot (P_{gd}^{t} - x_{id}^{t})$$
$$x_{id}^{t+1} = x_{id}^{t} + v_{id}^{t+1}$$
The formulas above describe how, in each iteration, every member particle changes its state according to the position and velocity update rules, continuously improving itself by tracking the particle's historical optimum and the community's global optimum, where t and t + 1 index the iterations, $\omega$ is the inertia weight, $c_1$ and $c_2$ are acceleration constants, and $rand_1(\,)$ and $rand_2(\,)$ are random functions varying in the range [0, 1]. The first term is the particle's search velocity, reflecting the particle's memory. The second term is the "cognition" term, reflecting the particle's own thinking and affirmation. The third term is the "society" term, reflecting information sharing and cooperation among particles. Significantly, each search agent is checked for leaving the search space and amended: if $x_{id}^{t+1}$ exceeds the upper boundary of the search space, it is set to the upper boundary; if it falls below the lower boundary, it is set to the lower boundary.
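A minimal NumPy sketch of these update rules, including the boundary clamping, might look as follows; the sphere objective and the parameter values (w, c1, c2, swarm size, iteration count) are illustrative assumptions rather than the paper's settings:

```python
import numpy as np

m, D = 30, 1000                        # swarm size, problem dimension
w, c1, c2 = 0.7, 2.0, 2.0              # inertia weight, acceleration constants
lo, hi = -100.0, 100.0                 # search-space boundaries

rng = np.random.default_rng(0)
f = lambda x: np.sum(x ** 2, axis=-1)  # placeholder objective (sphere)

X = rng.uniform(lo, hi, (m, D))        # positions x_id
V = np.zeros((m, D))                   # velocities v_id
P = X.copy()                           # personal bests (pbest)
fP = f(P)
g = P[np.argmin(fP)].copy()            # global best (gbest)

for t in range(100):
    r1, r2 = rng.random((m, D)), rng.random((m, D))
    V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)
    X = np.clip(X + V, lo, hi)         # amend out-of-bounds positions
    fX = f(X)
    better = fX < fP                   # update personal bests
    P[better], fP[better] = X[better], fX[better]
    g = P[np.argmin(fP)].copy()        # update the global best
```

The `np.clip` call implements the boundary amendment described in the last sentences above.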

3.2. DAEN Model

The standard particle swarm algorithm is a global optimization model based on the optimal particles, whose neighborhood structure is equivalent to a fully-connected network and which can therefore converge to the optimal value quickly. However, a fully-connected network is ill-suited to high-dimensional data: for more complex high-dimensional problems, the fully-connected scheme falls into local optima more easily. Based on the six-degrees-of-separation theory [17] and small-world networks [18], the DAEN, with a fast convergence speed and strong global searching capability, is formed by combining the fully-connected topology with the ring topology, as shown in Figure 2. In this topology, there are cooperative relations between the different types of particles: between model particles and other model particles, between model particles and ordinary particles, and between ordinary particles and other ordinary particles.
From a mathematical perspective, the network can be regarded as a combination of vertex sets and edge sets. In order to better describe DAEN and establish its evolution model, the following definitions are given.
Definition 1.
In the DAEN structure, edges between nodes are undirected and have connection strength. Connections between particles can be represented by the undirected weighted graph $G(P, R)$, as shown in Figure 3, where $P = (p_1, p_2, \ldots, p_i, \ldots, p_n)$ is the set of all particles in the population, and $R = (r(p_1, p_2), r(p_1, p_3), \ldots, r(p_i, p_j), \ldots)$ is the set of connection relations among particles, with $r_s(p_i, p_j) \in R$, $s = 1, 2$. Here $r_1$ denotes the ring-topology connections built in the first step of initializing the topology, and $r_2$ denotes the connections between model particles built in the second step. $|R_i|$ denotes the modulus of the connection-relation set $R_i$ of particle $p_i$, i.e., there are $|R_i|$ edges directly connected to $p_i$.
Definition 2.
Particle type determination threshold F.
$$F = \frac{\sum_{i=1}^{n} f_i}{n}$$
where fi is the fitness value of particle pi and n is the total number of particles in the population.
According to the particle type determination threshold F, the particles in the population can be divided into model particles $p_m$ and ordinary particles $p_o$. If the fitness value $F_i$ of a particle satisfies $F_i < F$, the particle has a better fitness value and is classed as a model particle, guiding the whole population to evolve toward the optimal value. Conversely, if $F_i \ge F$, the particle's fitness is poor and it is classed as an ordinary particle, guiding the population to explore new search directions.
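In code, this threshold split is straightforward; the sketch below (with an arbitrary example fitness vector, assuming minimization) follows Definition 2:

```python
import numpy as np

# Example fitness values f_i for a small swarm (minimization: lower is better).
fitness = np.array([3.2, 0.8, 1.5, 9.1, 0.2])
F = fitness.mean()                        # threshold F = (sum_i f_i) / n

model = np.flatnonzero(fitness < F)       # model particles: F_i < F
ordinary = np.flatnonzero(fitness >= F)   # ordinary particles: F_i >= F
print(F, model, ordinary)                 # 2.96 [1 2 4] [0 3]
```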
In order to determine whether DAEN needs to add connections or continue to reduce them, an evaluation index relating the population's global optimum $gbest$ to the individual optima is introduced: the distance vector H.
Definition 3.
Distance vector. The differences between the global optimal value $gbest$ of the population and the individual optimal positions $pbest_i$ of the m particles at iteration t = n are calculated and their absolute values taken, giving the population's distance vector H at the current iteration count:
$$H = (h_1, h_2, \ldots, h_m)$$
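One plausible reading of Definition 3 in code, assuming each component $h_i$ is the absolute gap between a particle's personal-best fitness and the global-best fitness (the paper leaves the exact distance measure implicit), is:

```python
import numpy as np

def distance_vector(f_pbest, f_gbest):
    # f_pbest: personal-best fitness of the m particles; f_gbest: global-best fitness.
    return np.abs(f_pbest - f_gbest)      # H = (h_1, ..., h_m)

def h_stalled(H_prev, H_next, gamma_H=1e-3):
    # Relative change of the module of H, as used by the termination rule
    # (Rule 4 in Section 3.3): stop reducing connections when H stabilizes.
    return np.linalg.norm(H_prev - H_next) / np.linalg.norm(H_prev) < gamma_H
```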
Definition 4.
Particle connection strength. In DAEN, the connection strength of two particles is defined as the weight $v_{ij}$ of the undirected weighted graph, and it is calculated from the fitness values of the two currently connected particles:
$$v_{ij} = \begin{cases} 1 - \dfrac{|f_i - f_j|}{f_{best}}, & F(r(p_i, p_j)) < F \\ 0, & F(r(p_i, p_j)) \ge F \end{cases}$$
Suppose that the undirected weighted DAEN contains n particles that have connection relation r with particle $p_i$; then the local aggregation coefficient of particle $p_i$ is:
$$\mu_i = \frac{\sum_{j,k}^{n} r_{jk}}{n(n-1)}$$
On the basis of Equation (2), the particle connection strength matrix can be expressed as C. E is the nth order unit matrix, and matrix C can be expressed as:
$$C = v(r(p_i, p_j)) \times E$$
According to Equations (2) and (3), the particle undirected weighted DAEN model can be expressed as follows:
$$M = (B, C)_{n \times m}, \quad m = n + 1$$
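The following sketch computes the pairwise connection strengths of Definition 4 and the local aggregation coefficient for one particle; the adjacency-matrix encoding and the assumption $f_{best} \neq 0$ are ours, not the paper's:

```python
import numpy as np

def connection_strength(fitness, A):
    # v_ij = 1 - |f_i - f_j| / f_best on existing edges, 0 elsewhere.
    # Assumes f_best (the best, i.e., smallest, fitness) is nonzero.
    f_best = fitness.min()
    diff = np.abs(fitness[:, None] - fitness[None, :])
    V = 1.0 - diff / f_best
    return np.where(A > 0, V, 0.0)        # A: 0/1 adjacency matrix

def local_aggregation(A, i):
    # mu_i: ordered neighbour-pair links over n(n - 1), where n is the
    # number of particles connected to p_i.
    nbrs = np.flatnonzero(A[i])
    n = len(nbrs)
    if n < 2:
        return 0.0
    links = A[np.ix_(nbrs, nbrs)].sum()   # counts each undirected edge twice
    return links / (n * (n - 1))
```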

3.3. Undirected Weighted DAEN Evolution Rules

The reduced-connection and added-connection rules for the undirected edges are computed from particle connection strength and particle fitness values. The intent is that pairs of particles with high connection strength obtain more reliable connections, and particles with good fitness values obtain more connections. After each iteration, the particles' fitness values are recalculated and the particle types are re-judged, and the reduced-connection or added-connection operations are carried out. Figure 4 shows the evolution process.
  • Initialize the topology: initialize particle swarm, set the fitness value threshold, calculate each particle’s fitness value, judge whether the particles’ fitness value reaches the threshold value, and define the particles that reach the threshold value as model particles and those that do not as ordinary particles. That is, the topology is initialized as a ring topology, and the connections between the model particles are fully connected in order to build an initial fully-connected topology.
  • Reduced-connection rule: the fully connected topology searches quickly at first but falls into local optima easily. Therefore, in order to make the algorithm jump out of local optima and seek the global optimal solution, a reduced-connection operation is performed according to the edge's reduced-connection rule at every evolution step. In this paper, two kinds of reduced-connection rules are designed.
    Rule 1: If $F_i \ge F$, then $r_2(p_i) = 0$;
    Rule 2: If $F_i < F$ and $v_{ij}(|R_i| = |R_{max}|,\; f_i = f_{max}) \ne 0$, then $r_2(p(v_{min})) = 0$;
  • Reduced-connection termination rule: according to the connection relationship r between particles and the distance vector, two kinds of reduced-connection termination rules are designed:
    Rule 3: If $|r_2| = 0$, end;
    When the change of the distance vector’s module for the particle is less than the designed threshold value, the reduced connection is stopped:
    Rule 4: If $|H_n - H_{n+1}| / |H_n| < \gamma_H$, end;
  • Added-connection rule: according to the number of $r_2$ edges of the model particle $p_i$, i.e., the size of $|r_2|$, and the local aggregation coefficient, added-connection rules are designed to improve different particles' adaptability and balance the particles' global and local searching capability. Two kinds of added-connection rules are designed.
    When DAEN is a ring topology and $|r_2| = 0$, the model particle $p_j$ farthest from $p_i$ is selected to establish the connection:
    Rule 5: If $|r_2| = 0$, $p_i = p_{min}$ and $p_j = p_m$, then $r_2(i, j+1) = 1$, $j = i + N/2 + n \; (n = 0, 1, 2, 3)$;
    When $|r_2| \ne 0$, the local aggregation coefficient $\mu$ of all model particles is calculated, and the model particle with the smallest $\mu$ is selected to establish a connection with the model particle farthest away from it in the population:
    Rule 6: If $|r_2| \ne 0$, $\mu_i = \mu_{min}$ and $p_j = p_m$, then $r_2(i, j+1) = 1$, $j = i + N/2 + n \; (n = 0, 1, 2, 3)$.
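As a compact illustration of how these rules might act on an adjacency matrix, the sketch below applies Rule 1 and one added-connection step on the $r_2$ layer; the matrix encoding and the modular index $j = (i + N/2) \bmod N$ are our assumptions for an N-particle ring:

```python
import numpy as np

def evolve_r2(A_r2, fitness, F, mu):
    # A_r2: 0/1 matrix of model-particle (r2) links; fitness, mu: per particle.
    N = len(fitness)
    # Rule 1: every ordinary particle (F_i >= F) drops all of its r2 edges.
    for i in np.flatnonzero(fitness >= F):
        A_r2[i, :] = 0
        A_r2[:, i] = 0
    # Rule 5/6 flavour: the model particle with the smallest aggregation
    # coefficient connects to the model particle "opposite" it on the ring.
    models = np.flatnonzero(fitness < F)
    if models.size > 0:
        i = models[np.argmin(mu[models])]
        j = (i + N // 2) % N
        A_r2[i, j] = A_r2[j, i] = 1
    return A_r2
```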

3.4. Algorithm Execution Steps

To improve search speed and searching capability, each particle is given subjective initiative: the diversity of evolutionary behavior exhibited by particles with different individual attributes is considered, and resource sharing among community members and information interaction between communities are fully utilized. Based on the particles' fitness values and connection strength, the added-connection and reduced-connection rules for edges are designed and the corresponding operations performed, as shown in Figure 5. Then, to improve search efficiency in the algorithm's early stage and enhance the local searching capability in the later stage, the DAEMPSO algorithm is proposed, using the DAEN model's evolution to combine the fully-connected topology with the ring topology.
Based on this parallel idea, the specific pseudo-code for the DAEMPSO algorithm (Algorithm 1) is given below; an illustrative Python sketch follows the pseudo-code.
Algorithm 1: DAEMPSO.
  • procedure DAEMPSO
  • for each particle i:
  • Initialize velocity Vi and position Xi for particle i. Evaluate particle i and set pBesti = Xi
  • end for
  • gBest = min{pBesti}
  • for i = 1 to neighborhood size
  • $F = \frac{\sum_{i=1}^{n} fitness_i}{n}$
  • if |r2| ≠ 0 and |Hn − Hn+1|/|Hn| > γH
  • if Fi > F  // Ordinary particle
  • Update neighborhood  // Reduce edge
  • Update the velocity and position of particle i. Evaluate particle i
  • if fitness(Xi) < fitness(pBesti), pBesti = Xi  // Update individual optimal value
  • if Fi < F  // Model particle
  • if vij(|Ri| = |Rmax|, fi = fmax) ≠ 0
  • repeat steps 10–12  // Reduce edge
  • if |Hn − Hn+1|/|Hn| < γH
  • if μi = μmin and pj = pm
  • repeat steps 10–12  // Increase edge
  • if |r2| = 0
  • if i = imin and j = jm
  • repeat steps 10–12  // Increase edge
  • if fitness(pBesti) < fitness(gBest), gBest = pBesti  // Update neighborhood optimum
  • end for
  • print gBest
  • end procedure
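For concreteness, an illustrative Python skeleton of this loop is given below. It follows the pseudo-code's control flow under simplifying assumptions of our own (sphere objective, a plain ring as the r1 layer, fully connected r2 links among model particles, and a neighborhood-best velocity update); it is a sketch, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
N, D, T = 30, 100, 200                       # swarm size, dimension, iterations
w, c1, c2, lo, hi = 0.7, 2.0, 2.0, -100.0, 100.0
f = lambda x: np.sum(x ** 2, axis=-1)        # placeholder objective

X = rng.uniform(lo, hi, (N, D))
V = np.zeros((N, D))
P, fP = X.copy(), f(X)                       # personal bests
ring = np.zeros((N, N), dtype=int)           # r1 layer: base ring topology
for i in range(N):
    ring[i, (i - 1) % N] = ring[i, (i + 1) % N] = 1

for t in range(T):
    F = fP.mean()                            # particle-type threshold (Def. 2)
    model = fP < F
    r2 = np.zeros((N, N), dtype=int)         # r2 layer: model-particle links
    r2[np.ix_(model, model)] = 1             # connect model particles; ordinary
    np.fill_diagonal(r2, 0)                  # particles get no r2 edges (Rule 1)
    A = np.clip(ring + r2, 0, 1)
    for i in range(N):                       # neighborhood-best update
        nbrs = np.flatnonzero(A[i])
        nb = nbrs[np.argmin(fP[nbrs])]       # best pbest in i's neighborhood
        u1, u2 = rng.random(D), rng.random(D)
        V[i] = w * V[i] + c1 * u1 * (P[i] - X[i]) + c2 * u2 * (P[nb] - X[i])
        X[i] = np.clip(X[i] + V[i], lo, hi)  # amend out-of-bounds positions
    fX = f(X)
    better = fX < fP
    P[better], fP[better] = X[better], fX[better]

print(fP.min())                              # gBest fitness after T iterations
```

A faithful implementation would additionally track the distance vector H and apply Rules 2–6 to rewire the r2 layer incrementally instead of rebuilding it each iteration.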

4. Analysis of Simulation Results

4.1. Test Function and Experimental Environment

In order to analyze the DAEMPSO algorithm's adaptability, execution efficiency, and calculation accuracy in solving high-dimensional complex problems, 13 high-dimensional complex functions from the virtual simulation library are used for simulation analysis. These functions include unimodal and multimodal functions, and their variable dimensions can be set. The 13 test functions' main characteristics are shown in Table 1 and Table 2; they are classical benchmark functions from the optimization literature [19,20,21,22].

4.2. Simulations

In the experiments, GWO [23], BOA [24], MPA [25], and COOT [26] were selected for comparison with DAEMPSO in order to verify the effectiveness of the new strategy. GWO has achieved good results in large-scale global optimization, and BOA, MPA, and COOT are three recently proposed large-scale optimization algorithms; comparison against them allows the effectiveness of the coevolution-based DAEMPSO to be verified. Specifically, the GWO algorithm mimics the leadership hierarchy and hunting mechanism of grey wolves in nature, employing four types of grey wolves (alpha, beta, delta, and omega) to simulate the leadership hierarchy. BOA is based on the foraging strategy of butterflies, imitating their sense of smell to locate the optimal value of the function. MPA gives the predator an optimal motion strategy, depending on the motion type and velocity of predator and prey, to maximize the encounter rate with the prey. The COOT algorithm imitates two movement patterns of coots on the water surface: in the first phase the movement of the birds is irregular, and in the second phase it is regular; the colony moves toward a group of leaders to obtain food, with the tail of the colony forming a chain in which each coot moves behind the coots in front of it.
The parameters of the five algorithms are set as follows: the dimensions are 500, 800, and 1000, and the maximum number of iterations is 500. Each algorithm is run independently 25 times, and the optimal value, average optimal value, and success rate are recorded. Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8 show the test results.
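A small harness reflecting this protocol might look as follows; the success criterion (final error below a tolerance eps) is our assumption, since the paper does not state its threshold here:

```python
import numpy as np

def benchmark(run_algorithm, n_runs=25, eps=1e-6):
    # run_algorithm: callable returning the best fitness found in one run.
    results = np.array([run_algorithm() for _ in range(n_runs)])
    return {
        "best": results.min(),              # optimal value over the runs
        "average": results.mean(),          # average optimal value
        "std": results.std(),               # standard deviation
        "success_rate": float(np.mean(results < eps)),
    }
```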
Comparing Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8, when the dimension is set to 500, 800, and 1000 for the high-dimensional complex optimization functions, each optimization algorithm adapts well to the peak-shape changes of F1 and F6 as the search area increases, but adapts poorly to multimodal functions such as F7, F12, and F13. That is, the number of peaks of the function has a great impact on algorithm convergence. GWO shows local convergence on the high-dimensional functions. In particular, BOA shows worse local convergence for F4, F5, F7, F12, and F13. A likely reason is that BOA does not consider the typical characteristics of large-scale optimization problems: although BOA can diversify the search path during the search process, it is difficult for it to jump out of the multiple local optima of high-dimensional multimodal or unimodal functions, which leads to its poor performance in solving large-scale optimization problems. Because the search process in MPA uses a staged strategy in which the search stages cannot be divided dynamically, it performs poorly on the high-dimensional multimodal functions. COOT has no velocity-memory parameter: the location of each search agent is updated according to the location of the current search agent and the locations of several other search agents. The proposed algorithm, in contrast, updates positions based on topological-link motion and random motion in different directions, and it converges to the optimal value in most cases. Notably, DAEMPSO can adaptively adjust the evolutionary topology according to particle node connection strength at different stages of evolution, evolving between the fully-connected topology and the ring topology via the evolutionary rules for different optimization environments. Consequently, the optimization accuracy of DAEMPSO is significantly higher than that of the above four optimization algorithms.
The test set is a large-scale global optimization benchmark containing both unimodal and multimodal functions. From the results shown in Figure 6 and Figure 7, we can see the effectiveness of the DAEMPSO algorithm in solving large-scale optimization problems, which stems from its dynamic topology connections based on the performance evaluation of particle collaboration. It divides the population into model and ordinary particles, and the two kinds of particles continuously adjust their node connection strength in order to complete the population's evolution from a fully-connected topology at the early stage of evolution to a ring-like topology at the later stage. Noticeably, on test functions F4, F12, and F13, both BOA and MPA use a population-grouping coevolutionary strategy whose dynamic multigroup mechanism has strong local search ability at the sacrifice of global search ability, so their convergence is weak and their test results are poor. Moreover, COOT converges on several test functions but fails to converge on F7, F12, and F13. Generally, when the dimension is 1000, the algorithms' convergence performance is similar for F1, F6, F9, and F10, and the convergence speed is faster for F7, F9, and F10. For F5, F7, F12, and F13, however, the performance of DAEMPSO is obviously better than that of the other algorithms, since its evolutionary topology is adjusted adaptively according to the connection strength of particle nodes, and rich evolution rules tailored to the characteristics of large-scale complex optimization problems are applied during execution.

4.3. Statistical Analysis of DAEMPSO

This section uses the Bonferroni–Dunn test to analyze the competitiveness of DAEMPSO with respect to its competitors. In order to obtain a reliable test, this study categorized the inspection data into three groups, corresponding to the basic test functions of the different algorithms in 500, 800, and 1000 dimensions, ranked according to the convergence averages of the runs. The test indicates a significant difference in performance between two algorithms if the difference in their average rankings is greater than the critical difference (CD). Figure 8 shows the average ranking of the methods in different dimensions at a significance level of 0.1. DAEMPSO significantly outperforms those algorithms whose average ranking lies above the threshold line shown in the figure for 500 and 800 dimensions; the threshold line of each group is identified by its color. As is observable from the figure, DAEMPSO is ranked first and has significant advantages over the other algorithms.
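For reference, the average ranks and the Bonferroni–Dunn critical difference can be computed as sketched below; the value $q_{0.1} \approx 2.241$ for k = 5 methods follows Demšar's table of critical values, and the data layout is an assumption:

```python
import numpy as np
from scipy.stats import rankdata

def average_ranks(scores):
    # scores: (n_functions, n_algorithms) convergence averages; lower is better.
    ranks = np.apply_along_axis(rankdata, 1, scores)
    return ranks.mean(axis=0)

def critical_difference(k, n, q_alpha=2.241):
    # Bonferroni-Dunn: CD = q_alpha * sqrt(k (k + 1) / (6 n)),
    # k methods compared over n functions; q_0.1 = 2.241 for k = 5.
    return q_alpha * np.sqrt(k * (k + 1) / (6.0 * n))
```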

4.4. Result Analysis

From the aforementioned simulation results, one can observe that the DAEMPSO algorithm shows significantly superior convergence performance on the multidimensional functions F1–F6 and F7–F13 in comparison with the other methods (GWO, BOA, MPA, and COOT). On the other hand, the other methods may fail to retain their convergence speed as the dimension increases. In particular, as shown in Figure 6 and Figure 7, the presented method guarantees a well-balanced performance between exploratory and exploitative propensities on high-dimensional problem topographies. Moreover, the comparative results show that GWO, BOA, MPA, and COOT degrade further as the dimension grows, while DAEMPSO still finds high-quality solutions. Consequently, dynamic coevolution behaviors are of great importance for high-dimensional problems. The results in Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8 reflect the benefit of adaptive dynamic topology evolution at different evolution stages: DAEMPSO is validated to adjust node connection strength so as to adapt the evolutionary topology across dimensions. The results also support the superior exploratory strengths of DAEMPSO on multimodal and hybrid composition landscapes. Moreover, the results for the 1000-dimensional functions in Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8 disclose that the proposed algorithm achieves improved convergence performance in comparison with the other conventional methods.
The following features are provided to demonstrate the efficacy of the proposed methods:
  • Division of superior and inferior populations with regard to the average location of particles can encourage the exploratory behavior of DAEMPSO in the initial iterations.
  • Node connection strength has a dynamic randomized time-varying nature to guarantee the adaptive adjustment of DAEMPSO exploration and exploitation patterns.
  • Different topological evolution patterns according to the connection strength of particle nodes enhance the exploitative behaviors of DAEMPSO when performing a local search.
  • The progressive topological coevolution scheme can be used to drive the model particles to find the optimal position step by step, so as to improve the quality of the solution and enhance the iterative ability of the algorithm.
  • A series of adaptive adjustment strategies, based on H and C for the DAEN model can inspire particles to select the best topological link relationship. Such ability also has a constructive impact on the exploitation potential of the algorithm.

5. Conclusions

In this paper, a dynamic adaptive coevolutionary strategy is proposed for large-scale complex optimization problems, in which particles are divided into model particles and ordinary particles: the model particles guide the whole population to evolve toward the optimal value, while the ordinary particles guide the population to explore new search directions. According to the cooperation ability between particles, the two kinds of particles continuously adjust their node connection strength in order to complete the population's evolution from a fully-connected topology at the early stage of evolution to a ring-like topology at the later stage. The contribution of this paper is the adaptive adjustment of the evolutionary topology at different evolution stages according to the connection strength of particle nodes. The dynamic evolution of the connection topology addresses the problems of numerous decision variables and correlations among variables, while dynamic population grouping addresses multimodality and the tendency of the algorithm to converge too fast and fall into a local optimum. Finally, the proposed algorithm is compared with other algorithms on the benchmark function set to verify the effectiveness of the results. However, some problems remain for future work.
  • Parameter adjustment: this paper does not discuss parameter adjustment; adding an adaptive mechanism for the parameters could reduce the complexity of the algorithm.
  • Practical application: the algorithm proposed in this paper performs well on the test platform, but its results in practical applications have not been verified, so its effectiveness on practical optimization problems, such as collaborative operation of large-scale production lines, remains to be validated.

Author Contributions

Conceptualization, L.W.; methodology, Y.Y. and L.Z.; analysis and validation, Y.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by The National Natural Science Foundation of China, grant number 52065033.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Test function source: http://www.sfu.ca/~ssurjano/index.html (accessed on 4 May 2021).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Aghajani, G.; Ghadimi, N. Multi-objective energy management in a micro-grid. Energy Rep. 2018, 4, 218–225. [Google Scholar] [CrossRef]
  2. Liu, Y.; Wang, W.; Ghadimi, N. Electricity load forecasting by an improved forecast engine for building level consumers. Energy 2017, 139, 18–30. [Google Scholar] [CrossRef]
  3. Hamian, M.; Darvishan, A.; Hosseinzadeh, M.; Lariche, M.J.; Ghadimi, N.; Nouri, A. A framework to expedite joint energy-reserve payment cost minimization using a custom-designed method based on Mixed Integer Genetic Algorithm. Eng. Appl. Artif. Intell. 2018, 72, 203–212. [Google Scholar] [CrossRef]
  4. Leng, H.; Li, X.; Zhu, J.; Tang, H.; Zhang, Z.; Ghadimi, N. A new wind power prediction method based on ridgelet transforms, hybrid feature selection and closed-loop forecasting. Adv. Eng. Inform. 2018, 36, 20–30. [Google Scholar] [CrossRef]
  5. Zhang, Y.F.; Chiang, H.D. A Novel Consensus-Based Particle Swarm Optimization-Assisted Trust-Tech Methodology for Large-Scale Global Optimization. IEEE Trans. Cybern. 2017, 47, 2717–2729. [Google Scholar] [CrossRef]
  6. Truong, K.H.; Nallagownden, P.; Elamvazuthi, I.; Vo, D.N. A Quasi-Oppositional-Chaotic Symbiotic Organisms Search algorithm for global optimization problems. Appl. Soft Comput. 2019, 77, 567–583. [Google Scholar] [CrossRef]
  7. Yang, Q.; Chen, W.N.; Da Deng, J.; Li, Y.; Gu, T.; Zhang, J. A Level-based Learning Swarm Optimizer for Large Scale Optimization. IEEE Trans. Evol. Comput. 2018, 22, 578–594. [Google Scholar] [CrossRef]
  8. Han, Z.; Zhu, Y.; Lin, S. A dynamic co-evolution compact genetic algorithm for E/T problem. IFAC Pap. 2015, 48, 1439–1443. [Google Scholar] [CrossRef]
  9. Aminbakhsh, S.; Sonmez, R. Discrete Particle Swarm Optimization Method for the Large-Scale Discrete Time-Cost Trade-Off Problem. Expert Syst. Appl. 2016, 51, 177–185. [Google Scholar] [CrossRef]
  10. Liang, J.; Liu, R.; Yu, K.J.; Qu, B.Y. Dynamic Multi-Swarm Particle Swarm Optimization with Cooperative Coevolution for Large Scale Global Optimization. Ruan Jian Xue Bao/J. Softw. 2018, 29, 2595–2605. Available online: http://www.jos.org.cn/1000-9825/5398.htm (accessed on 20 May 2021). (In Chinese).
  11. Yao, C.Y.; Wang, B.; Chen, D.N.; Zhang, R.X. Hybrid particle interactive particle swarm optimization algorithm. J. Mech. Eng. 2015, 51, 198–207. [Google Scholar] [CrossRef]
  12. Harrison, K.R.; Ombuki-Berman, B.M.; Engelbrecht, A.P. A parameter-free particle swarm optimization algorithm using performance classifiers. Inf. Sci. 2019, 503, 381–400. [Google Scholar] [CrossRef]
  13. Sabar, N.R.; Abawajy, J.; Yearwood, J. Heterogeneous cooperative co-evolution memetic differential evolution algorithm for big data optimization problems. IEEE Trans. Evol. Comput. 2017, 21, 315–327. [Google Scholar] [CrossRef]
  14. Wang, C.; Liu, Y.; Zhao, Y.; Chen, Y. A hybrid topology scale-free Gaussian-dynamic particle swarm optimization algorithm applied to real power loss minimization. Eng. Appl. Artif. Intell. 2014, 32, 63–75. [Google Scholar] [CrossRef]
  15. Liu, K.; Bellet, A. Escaping the curse of dimensionality in similarity learning: Efficient Frank-Wolfe algorithm and generalization bounds. Neurocomputing 2019, 333, 185–199. [Google Scholar] [CrossRef] [Green Version]
  16. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, Australia, 26 November–1 December 1995. [Google Scholar]
  17. Shu, W.; Chuang, Y.H. The perceived benefits of six-degree-separation social networks. Internet Res. 2011, 21, 26–45. [Google Scholar] [CrossRef]
  18. Porter, M.A. Small-world network. Scholarpedia 2012, 7, 1739. [Google Scholar] [CrossRef]
  19. Yao, X.; Liu, Y.; Lin, G. Evolutionary programming made faster. IEEE Trans. Evol. Comput. 1999, 3, 82–102. [Google Scholar]
  20. Digalakis, J.G.; Margaritis, K.G. On benchmarking functions for genetic algorithms. Int. J. Comput. Math. 2001, 77, 481–506. [Google Scholar] [CrossRef]
  21. Molga, M.; Smutnicki, C. Test Functions for Optimization NEEDS. 2005. Available online: http://www.robertmarks.org/Classes/ENGR5358/Papers/functions.pdf (accessed on 20 June 2021).
  22. Yang, X.-S. Firefly algorithm, stochastic test functions and design optimisation. Int. J. Bio-Inspired Comput. 2010, 2, 78–84. [Google Scholar] [CrossRef]
  23. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  24. Arora, S.; Singh, S. Butterfly optimization algorithm: A novel approach for global optimization. Soft Comput. 2019, 23, 715–734. [Google Scholar] [CrossRef]
  25. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A Nature-inspired Metaheuristic. Expert Syst. Appl. 2020, 152, 113377. [Google Scholar] [CrossRef]
  26. Naruei, I.; Keynia, F. A new optimization method based on COOT bird natural life model. Expert Syst. Appl. 2021, 183, 115352. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the algorithm.
Figure 2. DAEN model.
Figure 3. Undirected weighted DAEN model.
Figure 4. Topological evolution steps.
Figure 5. DAEN evolution.
Figure 6. Convergence curves of the five optimization algorithms for functions F1–F13 (1000 dimensions).
Figure 7. Convergence curves of the five optimization algorithms for functions F1–F13 (1000 dimensions).
Figure 8. Bonferroni–Dunn test for different methods and groups with α = 0.1.
Table 1. High-dimensional unimodal benchmark test functions.

No. | Function Name | Function | Dimensions | Search Space | Theoretical Optimum
F1 | SPHERE FUNCTION | $f(x) = \sum_{i=1}^{n} x_i^2$ | 1000 | [−100, 100] | 0
F2 | ROTATED HYPER-ELLIPSOID FUNCTION | $f(x) = \sum_{i=1}^{n} \sum_{j=1}^{i} x_j^2$ | 1000 | [−100, 100] | 0
F3 | SCHWEFEL'S PROBLEM | $f(x) = \max_i \{ |x_i^2|,\; 1 \le i \le n \}$ | 1000 | [−100, 100] | 0
F4 | ROSENBROCK FUNCTION | $f(x) = \sum_{i=1}^{n-1} [100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2]$ | 1000 | [−30, 30] | 0
F5 | STEP FUNCTION | $f(x) = \sum_{i=1}^{n} ([x_i + 0.5])^2$ | 1000 | [−100, 100] | 0
F6 | QUARTIC FUNCTION | $f(x) = \sum_{i=1}^{n} i x_i^4 + random[0, 1]$ | 1000 | [−1.28, 1.28] | 0
Table 2. High-dimensional multimodal benchmark test functions.

No. | Function Name | Function | Dimensions | Search Space | Theoretical Optimum
F7 | SCHWEFEL FUNCTION | $f(x) = 418.9829 d - \sum_{i=1}^{d} x_i \sin(\sqrt{|x_i|})$ | 1000 | [−500, 500] | 0
F8 | RASTRIGIN FUNCTION | $f(x) = \sum_{i=1}^{n} [x_i^2 - 10 \cos(2 \pi x_i) + 10]$ | 1000 | [−5.12, 5.12] | 0
F9 | ACKLEY FUNCTION | $f(x) = -20 \exp\left(-0.2 \sqrt{\frac{1}{n} \sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n} \sum_{i=1}^{n} \cos(2 \pi x_i)\right) + 20 + e$ | 1000 | [−32, 32] | 0
F10 | GRIEWANK FUNCTION | $f(x) = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$ | 1000 | [−600, 600] | 0
F11 | GENERALIZED PENALIZED FUNCTION 1 | $f(x) = \frac{\pi}{n} \left\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 [1 + 10 \sin^2(\pi y_{i+1})] + (y_n - 1)^2 \right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$, where $y_i = 1 + \frac{x_i + 1}{4}$ and $u(x_i, a, k, m) = \begin{cases} k (x_i - a)^m, & x_i > a \\ 0, & -a \le x_i \le a \\ k (-x_i - a)^m, & x_i < -a \end{cases}$ | 1000 | [−50, 50] | 0
F12 | GENERALIZED PENALIZED FUNCTION 2 | $f(x) = 0.1 \left\{ \sin^2(3 \pi x_1) + \sum_{i=1}^{n} (x_i - 1)^2 [1 + \sin^2(3 \pi x_{i+1})] + (x_n - 1)^2 [1 + \sin^2(2 \pi x_n)] \right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$ | 1000 | [−5, 5] | 0
F13 | LEVY FUNCTION | $f(x) = \sin^2(\pi \omega_1) + \sum_{i=1}^{d-1} (\omega_i - 1)^2 [1 + 10 \sin^2(\pi \omega_i + 1)] + (\omega_d - 1)^2 [1 + \sin^2(2 \pi \omega_d)]$ | 1000 | [−10, 10] | 0
Table 3. Comparison of the optimization results of five algorithms for six functions (500 dimensions).

Algorithm | Metric | F1 | F2 | F3 | F4 | F5 | F6
GWO | Obtained best solution | 2.68 × 10^−9 | 4.42 × 10^4 | 7.13 × 10^−4 | 1.22 × 10^−6 | 1.87 × 10^−7 | 3.66 × 10^−8
GWO | Average | 1.33 × 10^−5 | 1.02 × 10^5 | 6.25 × 10 | 4.97 × 10^2 | 7.82 × 10 | 1.12 × 10^−2
GWO | Standard deviation | 1.21 × 10^−5 | 8.34 × 10^4 | 2.54 × 10 | 9.85 × 10 | 3.65 × 10 | 1.01 × 10^−2
GWO | Success rate | 100% | 0 | 68% | 52% | 76% | 96%
BOA | Obtained best solution | 4.83 × 10^−15 | 8.46 × 10^−17 | 1.15 × 10^−18 | 6.18 × 10^−6 | 2.47 × 10^−11 | 1.07 × 10^−12
BOA | Average | 1.28 × 10^−11 | 1.27 × 10^−11 | 6.26 × 10^−9 | 4.98 × 10^2 | 1.22 × 10^2 | 6.48 × 10^−4
BOA | Standard deviation | 3.04 × 10^−11 | 2.33 × 10^−11 | 5.32 × 10^−9 | 1.88 × 10^2 | 7.48 × 10 | 4.78 × 10^−4
BOA | Success rate | 100% | 100% | 100% | 48% | 44% | 92%
MPA | Obtained best solution | 1.01 × 10^−19 | 1.41 × 10^−7 | 5.51 × 10^−8 | 5.25 × 10^−6 | 1.70 × 10^−7 | 5.33 × 10^−10
MPA | Average | 5.64 × 10^−16 | 2.42 × 10^3 | 2.41 × 10^−5 | 4.96 × 10^2 | 5.91 × 10 | 1.13 × 10^−3
MPA | Standard deviation | 4.23 × 10^−16 | 5.35 × 10^2 | 4.29 × 10^−5 | 3.12 × 10 | 8.99 | 3.89 × 10^−4
MPA | Success rate | 100% | 36% | 100% | 82% | 88% | 96%
COOT | Obtained best solution | 5.82 × 10^−46 | 2.10 × 10^−54 | 8.47 × 10^−22 | 5.97 × 10^−9 | 1.86 × 10^−8 | 5.94 × 10^−7
COOT | Average | 4.33 × 10^−44 | 1.45 × 10^−42 | 3.42 × 10^−18 | 4.98 × 10^2 | 7.42 × 10 | 1.86 × 10^−3
COOT | Standard deviation | 3.89 × 10^−44 | 2.19 × 10^−42 | 3.13 × 10^−18 | 8.48 × 10 | 2.67 × 10 | 6.64 × 10^−3
COOT | Success rate | 100% | 100% | 100% | 88% | 92% | 96%
DAEMPSO | Obtained best solution | 1.11 × 10^−95 | 6.67 × 10^−73 | 2.00 × 10^−45 | 7.73 × 10^−10 | 7.90 × 10^−9 | 6.66 × 10^−14
DAEMPSO | Average | 6.51 × 10^−87 | 4.79 × 10^−66 | 5.35 × 10^−41 | 2.75 × 10^−2 | 1.91 × 10^−1 | 5.58 × 10^−5
DAEMPSO | Standard deviation | 4.45 × 10^−87 | 6.33 × 10^−66 | 7.33 × 10^−41 | 5.34 × 10^−2 | 1.56 × 10^−1 | 5.33 × 10^−5
DAEMPSO | Success rate | 100% | 100% | 100% | 88% | 92% | 96%
Table 4. Comparison of the optimization results of five algorithms for seven test functions (500 dimensions).

Algorithm | Metric | F7 | F8 | F9 | F10 | F11 | F12 | F13
GWO | Obtained best solution | 8.38 × 10^4 | 1.77 × 10^−8 | 6.22 × 10^−9 | 9.25 × 10^−12 | 2.81 × 10^−10 | 9.17 × 10^−7 | 8.19 × 10^−8
GWO | Average | 1.41 × 10^5 | 3.31 × 10 | 1.55 × 10^−4 | 1.13 × 10^−6 | 6.29 × 10^−1 | 4.23 × 10 | 3.79 × 10
GWO | Standard deviation | 5.99 × 10^4 | 2.59 × 10 | 3.34 × 10^−4 | 5.34 × 10^−7 | 2.45 × 10^−1 | 9.38 | 1.42 × 10
GWO | Success rate | 0 | 84% | 92% | 100% | 92% | 76% | 80%
BOA | Obtained best solution | 9.96 × 10^4 | 1.82 × 10^−18 | 5.12 × 10^−13 | 7.02 × 10^−18 | 1.16 × 10^−12 | 9.89 × 10^−11 | 9.12 × 10^−8
BOA | Average | 1.90 × 10^5 | 9.09 × 10^−13 | 5.47 × 10^−9 | 1.46 × 10^−11 | 1.14 | 4.99 × 10 | 4.58 × 10
BOA | Standard deviation | 6.34 × 10^4 | 7.16 × 10^−13 | 3.67 × 10^−9 | 6.55 × 10^−11 | 1.06 | 1.94 × 10 | 1.05 × 10
BOA | Success rate | 0 | 100% | 100% | 100% | 84% | 88% | 88%
MPA | Obtained best solution | 7.45 × 10^4 | 1.82 × 10^−16 | 2.53 × 10^−15 | 7.34 × 10^−20 | 8.48 × 10^−9 | 9.68 × 10^−12 | 7.63 × 10^−12
MPA | Average | 1.17 × 10^5 | 9.09 × 10^−13 | 1.53 × 10^−9 | 1.11 × 10^−16 | 2.18 × 10^−1 | 4.56 × 10 | 3.20 × 10
MPA | Standard deviation | 6.44 × 10^4 | 3.48 × 10^−13 | 1.70 × 10^−9 | 7.09 × 10^−16 | 3.85 × 10^−1 | 1.76 × 10 | 9.16
MPA | Success rate | 0 | 100% | 100% | 100% | 92% | 88% | 92%
COOT | Obtained best solution | 1.15 × 10^5 | 3.82 × 10^−14 | 9.32 × 10^−19 | 1.69 × 10^−14 | 3.54 × 10^−10 | 1.19 × 10^−12 | 8.78 × 10^−11
COOT | Average | 1.35 × 10^5 | 1.45 × 10^−11 | 8.88 × 10^−16 | 7.21 × 10^−11 | 2.17 × 10^−1 | 5.50 × 10 | 3.96 × 10
COOT | Standard deviation | 4.93 × 10^4 | 2.76 × 10^−11 | 4.27 × 10^−16 | 2.71 × 10^−11 | 9.80 × 10^−2 | 3.24 × 10 | 3.32 × 10
COOT | Success rate | 0 | 100% | 100% | 100% | 96% | 88% | 92%
DAEMPSO | Obtained best solution | 6.69 × 10^−13 | 1.82 × 10^−17 | 4.85 × 10^−23 | 2.62 × 10^−19 | 5.86 × 10^−14 | 4.31 × 10^−16 | 5.12 × 10^−7
DAEMPSO | Average | 2.31 × 10 | 9.09 × 10^−13 | 8.88 × 10^−16 | 7.77 × 10^−16 | 1.75 × 10^−4 | 6.21 × 10^−7 | 3.84 × 10^−4
DAEMPSO | Standard deviation | 1.03 × 10 | 1.85 × 10^−13 | 6.58 × 10^−16 | 4.72 × 10^−16 | 5.70 × 10^−4 | 1.38 × 10^−7 | 7.81 × 10^−4
DAEMPSO | Success rate | 36% | 100% | 100% | 100% | 96% | 100% | 96%
Table 5. Comparison of the optimization results of five algorithms for six test functions (800 dimensions).

Algorithm | Metric | F1 | F2 | F3 | F4 | F5 | F6
GWO | Obtained best solution | 6.18 × 10^−9 | 1.02 × 10^5 | 7.13 × 10^−7 | 5.11 × 10^−12 | 7.25 × 10^−8 | 8.20 × 10^−8
GWO | Average | 8.36 × 10^−5 | 2.70 × 10^5 | 6.70 × 10 | 7.97 × 10^2 | 1.44 × 10^2 | 2.49 × 10^−2
GWO | Standard deviation | 1.36 × 10^−5 | 3.59 × 10^3 | 8.24 | 1.35 × 10 | 2.97 × 10 | 1.12 × 10^−2
GWO | Success rate | 100% | 0 | 60% | 52% | 64% | 92%
BOA | Obtained best solution | 3.90 × 10^−15 | 6.21 × 10^−17 | 5.06 × 10^−15 | 1.68 × 10^−12 | 4.27 × 10^−11 | 7.81 × 10^−9
BOA | Average | 1.28 × 10^−11 | 1.28 × 10^−11 | 5.68 × 10^−9 | 7.98 × 10^2 | 1.97 × 10^2 | 6.88 × 10^−4
BOA | Standard deviation | 1.21 × 10^−11 | 6.02 × 10^−11 | 5.35 × 10^−9 | 1.62 × 10^2 | 8.89 × 10 | 8.23 × 10^−4
BOA | Success rate | 100% | 100% | 100% | 68% | 80% | 96%
MPA | Obtained best solution | 6.04 × 10^−18 | 4.11 × 10^−6 | 1.55 × 10^−14 | 5.99 × 10^−12 | 2.10 × 10^−11 | 1.52 × 10^−8
MPA | Average | 5.01 × 10^−15 | 5.08 × 10^3 | 4.02 × 10^−5 | 7.95 × 10^2 | 1.24 × 10^2 | 1.40 × 10^−3
MPA | Standard deviation | 9.73 × 10^−15 | 4.62 × 10^3 | 6.76 × 10^−5 | 5.73 × 10^2 | 9.73 × 10 | 9.79 × 10^−3
MPA | Success rate | 100% | 60% | 88% | 64% | 56% | 92%
COOT | Obtained best solution | 2.58 × 10^−56 | 2.10 × 10^−64 | 4.78 × 10^−22 | 7.59 × 10^−12 | 6.81 × 10^−8 | 4.59 × 10^−14
COOT | Average | 5.92 × 10^−51 | 1.27 × 10^−53 | 1.85 × 10^−17 | 6.36 × 10^3 | 1.46 × 10^2 | 2.91 × 10^−3
COOT | Standard deviation | 4.40 × 10^−50 | 6.48 × 10^−51 | 9.56 × 10^−17 | 2.74 × 10^3 | 1.07 × 10^2 | 6.30 × 10^−3
COOT | Success rate | 100% | 100% | 100% | 48% | 68% | 96%
DAEMPSO | Obtained best solution | 1.88 × 10^−97 | 3.55 × 10^−67 | 2.63 × 10^−45 | 5.13 × 10^−11 | 9.70 × 10^−13 | 3.66 × 10^−9
DAEMPSO | Average | 4.37 × 10^−90 | 2.41 × 10^−61 | 1.44 × 10^−36 | 1.05 × 10^−1 | 1.11 × 10^−4 | 4.11 × 10^−5
DAEMPSO | Standard deviation | 8.94 × 10^−88 | 8.15 × 10^−60 | 3.90 × 10^−36 | 9.52 × 10^−2 | 2.63 × 10^−5 | 2.23 × 10^−5
DAEMPSO | Success rate | 100% | 100% | 100% | 72% | 96% | 96%
Table 6. Comparison of the optimization results of five algorithms for seven test functions (800 dimensions).

Algorithm | Metric | F7 | F8 | F9 | F10 | F11 | F12 | F13
GWO | Obtained best solution | 1.92 × 10^5 | 1.13 × 10^−8 | 8.81 × 10^−9 | 2.95 × 10^−12 | 6.81 × 10^−8 | 1.79 × 10^−7 | 9.18 × 10^−8
GWO | Average | 2.45 × 10^5 | 7.15 × 10 | 1.30 × 10^−3 | 4.23 × 10^−2 | 7.02 × 10^−1 | 7.17 × 10 | 6.41 × 10
GWO | Standard deviation | 7.09 × 10^4 | 4.58 × 10 | 6.19 × 10^−4 | 9.58 × 10^−3 | 5.19 × 10^−1 | 3.55 × 10 | 2.99 × 10
GWO | Success rate | 0 | 80% | 96% | 96% | 92% | 88% | 84%
BOA | Obtained best solution | 2.06 × 10^5 | 2.18 × 10^−16 | 2.15 × 10^−19 | 7.52 × 10^−16 | 6.11 × 10^−8 | 4.55 × 10^−11 | 9.55 × 10^−11
BOA | Average | 3.15 × 10^5 | 9.09 × 10^−13 | 2.22 × 10^−14 | 1.47 × 10^−11 | 1.14 | 7.99 × 10 | 7.31 × 10
BOA | Standard deviation | 2.01 × 10^5 | 4.48 × 10^−13 | 5.20 × 10^−14 | 3.83 × 10^−11 | 8.05 × 10^−1 | 5.62 × 10 | 6.74 × 10
BOA | Success rate | 0 | 100% | 100% | 100% | 92% | 76% | 84%
MPA | Obtained best solution | 1.65 × 10^5 | 3.52 × 10^−15 | 1.50 × 10^−15 | 4.69 × 10^−22 | 2.48 × 10^−9 | 6.89 × 10^−9 | 8.18 × 10^−8
MPA | Average | 2.09 × 10^5 | 1.82 × 10^−12 | 1.81 × 10^−9 | 1.12 × 10^−16 | 3.60 × 10^−1 | 7.68 × 10 | 6.01 × 10
MPA | Standard deviation | 1.95 × 10^5 | 6.33 × 10^−13 | 5.05 × 10^−9 | 9.27 × 10^−17 | 2.23 × 10^−1 | 3.78 × 10 | 4.13 × 10
MPA | Success rate | 0 | 100% | 100% | 100% | 92% | 80% | 84%
COOT | Obtained best solution | 1.98 × 10^5 | 3.51 × 10^−17 | 5.48 × 10^−18 | 4.12 × 10^−19 | 8.15 × 10^−9 | 2.26 × 10^−12 | 8.25 × 10^−11
COOT | Average | 2.54 × 10^5 | 9.09 × 10^−13 | 2.22 × 10^−14 | 5.66 × 10^−15 | 5.66 × 10^−1 | 7.98 × 10 | 6.74 × 10
COOT | Standard deviation | 9.25 × 10^4 | 7.88 × 10^−13 | 5.91 × 10^−14 | 3.75 × 10^−15 | 4.89 × 10^−1 | 5.09 × 10 | 4.90 × 10
COOT | Success rate | 0 | 100% | 100% | 100% | 92% | 84% | 88%
DAEMPSO | Obtained best solution | 6.94 × 10^−8 | 7.50 × 10^−17 | 5.51 × 10^−21 | 7.25 × 10^−18 | 3.65 × 10^−13 | 2.35 × 10^−12 | 3.28 × 10^−9
DAEMPSO | Average | 3.48 | 9.09 × 10^−13 | 8.88 × 10^−16 | 3.33 × 10^−16 | 2.99 × 10^−7 | 6.42 × 10^−6 | 4.50 × 10^−2
DAEMPSO | Standard deviation | 2.84 | 9.24 × 10^−13 | 7.41 × 10^−16 | 9.13 × 10^−16 | 1.35 × 10^−7 | 4.36 × 10^−6 | 2.03 × 10^−2
DAEMPSO | Success rate | 44% | 100% | 100% | 92% | 100% | 100% | 96%
Table 7. Comparison of the optimization results of five algorithms for six test functions (1000 dimensions).

Algorithm | Metric | F1 | F2 | F3 | F4 | F5 | F6
GWO | Obtained best solution | 2.75 × 10^−6 | 5.93 × 10^4 | 3.55 × 10^−2 | 2.61 × 10^−2 | 3.43 × 10^−3 | 8.25 × 10^−6
GWO | Average | 6.68 × 10^−4 | 1.02 × 10^5 | 7.13 | 5.97 × 10^2 | 1.87 × 10^2 | 3.03 × 10^−2
GWO | Standard deviation | 5.39 × 10^−4 | 9.48 × 10^4 | 5.87 | 2.54 × 10^2 | 1.15 × 10^2 | 5.02 × 10^−2
GWO | Success rate | 100% | 0% | 88% | 36% | 44% | 88%
BOA | Obtained best solution | 7.15 × 10^−13 | 4.45 × 10^−12 | 8.12 × 10^−11 | 5.51 × 10^−5 | 1.15 × 10^−4 | 4.65 × 10^−6
BOA | Average | 1.29 × 10^−11 | 1.26 × 10^−11 | 6.05 × 10^−9 | 6.18 × 10^2 | 2.47 × 10^2 | 1.83 × 10^−1
BOA | Standard deviation | 8.01 × 10^−12 | 1.30 × 10^−12 | 5.83 × 10^−9 | 3.90 × 10^2 | 8.94 × 10 | 1.10 × 10^−1
BOA | Success rate | 100% | 100% | 100% | 32% | 44% | 84%
MPA | Obtained best solution | 6.15 × 10^−18 | 7.64 × 10^−8 | 9.76 × 10^−7 | 3.65 × 10^−6 | 4.38 × 10^−4 | 2.19 × 10^−8
MPA | Average | 1.01 × 10^−14 | 1.41 × 10^−4 | 5.51 × 10^−4 | 5.95 × 10^2 | 1.70 × 10^2 | 1.36 × 10^−1
MPA | Standard deviation | 5.71 × 10^−14 | 7.95 × 10^−5 | 1.46 × 10^−4 | 2.41 × 10^2 | 1.25 × 10^2 | 5.29 × 10^−1
MPA | Success rate | 100% | 100% | 100% | 44% | 48% | 92%
COOT | Obtained best solution | 9.69 × 10^−36 | 7.52 × 10^−39 | 2.66 × 10^−29 | 2.41 × 10^−7 | 7.27 × 10^−9 | 3.65 × 10^−10
COOT | Average | 5.82 × 10^−26 | 2.10 × 10^−34 | 8.47 × 10^−22 | 5.97 × 10^2 | 1.86 × 10^2 | 5.94 × 10^−4
COOT | Standard deviation | 5.59 × 10^−26 | 6.84 × 10^−34 | 3.36 × 10^−22 | 2.86 × 10^2 | 1.37 × 10^2 | 1.44 × 10^−4
COOT | Success rate | 100% | 100% | 100% | 48% | 48% | 96%
DAEMPSO | Obtained best solution | 1.26 × 10^−82 | 3.92 × 10^−62 | 1.34 × 10^−48 | 6.16 × 10^−9 | 9.38 × 10^−10 | 7.46 × 10^−6
DAEMPSO | Average | 1.11 × 10^−77 | 6.67 × 10^−57 | 2.00 × 10^−35 | 3.15 × 10^−1 | 7.90 × 10^−1 | 6.66 × 10^−2
DAEMPSO | Standard deviation | 7.46 × 10^−76 | 6.10 × 10^−57 | 2.76 × 10^−35 | 1.43 × 10^−1 | 4.95 × 10^−1 | 3.11 × 10^−2
DAEMPSO | Success rate | 100% | 100% | 100% | 84% | 88% | 92%
Table 8. Comparison of the optimization results of five algorithms for seven test functions (1000 dimensions).

Algorithm | Metric | F7 | F8 | F9 | F10 | F11 | F12 | F13
GWO | Obtained best solution | 2.26 × 10^5 | 3.46 × 10^−6 | 2.72 × 10^−9 | 9.47 × 10^−7 | 4.71 × 10^−8 | 4.79 × 10^−8 | 4.68 × 10^−11
GWO | Average | 3.13 × 10^5 | 1.33 × 10^2 | 3.13 × 10^−3 | 9.25 × 10^−2 | 8.13 × 10^−1 | 9.17 × 10 | 8.19 × 10
GWO | Standard deviation | 1.96 × 10^5 | 1.12 × 10^2 | 2.23 × 10^−3 | 5.63 × 10^−2 | 5.05 × 10^−1 | 7.15 × 10 | 5.36 × 10
GWO | Success rate | 0 | 32% | 88% | 92% | 96% | 76% | 80%
BOA | Obtained best solution | 1.37 × 10^4 | 7.16 × 10^−15 | 9.49 × 10^−19 | 5.68 × 10^−14 | 5.19 × 10^−5 | 2.29 × 10^−6 | 8.83 × 10^−8
BOA | Average | 3.97 × 10^5 | 1.82 × 10^−12 | 5.12 × 10^−9 | 1.41 × 10^−11 | 1.16 | 9.89 × 10 | 9.12 × 10
BOA | Standard deviation | 1.84 × 10^5 | 4.45 × 10^−12 | 3.10 × 10^−9 | 1.65 × 10^−11 | 9.49 × 10^−1 | 4.12 × 10 | 2.34 × 10
BOA | Success rate | 0 | 100% | 100% | 100% | 96% | 88% | 88%
MPA | Obtained best solution | 1.01 × 10^5 | 5.99 × 10^−19 | 8.11 × 10^−15 | 7.00 × 10^−18 | 3.31 × 10^−10 | 6.39 × 10^−8 | 9.77 × 10^−5
MPA | Average | 2.86 × 10^5 | 1.32 × 10^−13 | 2.53 × 10^−9 | 1.11 × 10^−16 | 4.48 × 10^−1 | 9.68 × 10 | 7.63 × 10
MPA | Standard deviation | 1.34 × 10^5 | 6.73 × 10^−13 | 1.06 × 10^−9 | 4.95 × 10^−16 | 3.34 × 10^−1 | 6.38 × 10 | 6.06 × 10
MPA | Success rate | 0 | 100% | 100% | 100% | 84% | 88% | 88%
COOT | Obtained best solution | 2.88 × 10^5 | 8.50 × 10^−18 | 4.92 × 10^−18 | 6.50 × 10^−15 | 6.68 × 10^−5 | 6.72 × 10^−6 | 1.82 × 10^−11
COOT | Average | 3.40 × 10^5 | 3.82 × 10^−11 | 9.32 × 10^−14 | 1.69 × 10^−14 | 5.54 × 10^−1 | 1.19 × 10^2 | 8.78 × 10
COOT | Standard deviation | – | – | – | – | – | – | –
COOT | Success rate | 0 | 100% | 100% | 100% | 88% | 72% | 84%
DAEMPSO | Obtained best solution | 8.85 × 10^−7 | 4.90 × 10^−15 | 1.99 × 10^−25 | 8.34 × 10^−22 | 5.54 × 10^−10 | 1.29 × 10^−9 | 7.65 × 10^−4
DAEMPSO | Average | 1.14 × 10 | 1.82 × 10^−12 | 8.88 × 10^−16 | 2.22 × 10^−2 | 5.86 × 10^−7 | 4.31 × 10^−2 | 5.12 × 10^−3
DAEMPSO | Standard deviation | 1.04 × 10 | 7.96 × 10^−13 | 6.04 × 10^−16 | 1.92 × 10^−2 | 4.99 × 10^−7 | 3.67 × 10^−2 | 4.98 × 10^−3
DAEMPSO | Success rate | 32% | 100% | 100% | 92% | 100% | 92% | 96%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
