1. Introduction
High-dimensional optimization problems generally refer to problems whose dimensionality exceeds 100; they are typically non-linear and highly complex. Many real-life problems can be expressed as high-dimensional optimization problems, such as large-scale job-shop-scheduling problems [1], vehicle-routing problems [2], feature selection [3], satellite autonomous observation mission planning [4], economic environmental dispatch [5], and parameter estimation. As the dimension of such problems increases, the performance of optimization algorithms degrades sharply, making it extremely difficult to obtain the global optimal solution and posing a technical challenge for many practical applications. Therefore, the study of high-dimensional optimization problems has important theoretical and practical significance [6,7].
Meta-heuristic optimization algorithms are a class of random search algorithms proposed by simulating biological intelligence in nature [8], and they have been successfully applied in various fields, such as the Internet of Things [9], network information systems [10,11], multi-robot space exploration [12], and so on. At present, hundreds of such algorithms have emerged, including the particle swarm optimization (PSO) algorithm, the artificial bee colony (ABC) algorithm, the artificial fish swarm algorithm (AFSA), the bacterial foraging algorithm (BFA), the grey wolf optimizer (GWO), and the sine cosine algorithm (SCA) [13]. These algorithms have become effective methods for solving high-dimensional optimization problems because of their simple structure and strong exploration and exploitation abilities. For example, Huang et al. proposed a hybrid optimization algorithm that combines the frog-leaping optimization algorithm with the GWO algorithm and verified its performance on 10 high-dimensional complex functions [14]. Gu et al. proposed a hybrid genetic grey wolf algorithm for high-dimensional complex functions by combining the genetic algorithm with GWO and verified its performance on 10 high-dimensional complex test functions and 13 standard test functions [15]. Wang et al. improved the grasshopper optimization algorithm by introducing a nonlinear inertia weight and used it to solve high-dimensional complex function optimization problems; experiments on nine benchmark test functions show that the improved algorithm converges significantly faster and more accurately [16].
The chicken swarm optimization (CSO) algorithm is a meta-heuristic optimization algorithm proposed by Meng et al. in 2014 that simulates the foraging behavior of chickens in nature [17]. The algorithm realizes rapid optimization through information interaction and collaborative sharing among roosters, hens, and chicks. Because of its good solution accuracy and robustness, it has been widely used in network engineering [18,19], image processing [20,21,22], power systems [23,24], parameter estimation [25,26], and other fields. For example, Kumar et al. utilized the CSO algorithm to select the best peer in a P2P network and proposed an optimal load-balancing strategy; the experimental results show that it balances load better than other methods [18]. Cristin et al. applied the CSO algorithm to classify brain tumor severity in magnetic resonance imaging (MRI) images and proposed a brain-tumor image-classification method based on the fractional CSO algorithm; experimental results show that this method performs well in terms of accuracy, sensitivity, and other metrics [20]. Liu et al. developed an improved CSO–extreme-learning-machine model by improving the CSO algorithm, applied it to predict the photovoltaic power of a power system, and obtained satisfactory results [23]. Alisan applied the CSO algorithm to the parameter estimation of a proton exchange membrane fuel cell model, where it exhibited particularly good performance [25].
Although the CSO algorithm has been successfully applied in various fields and has solved many practical problems, the above application examples are all aimed at low-dimensional optimization problems. As the dimension of the optimization problem increases, the CSO algorithm is prone to premature convergence. Therefore, for high-dimensional complex function optimization, Yang et al. constructed a genetic CSO algorithm by introducing the idea of a genetic algorithm into the CSO algorithm and verified the performance of the proposed algorithm on 10 benchmark functions [27]. Although its convergence speed and stability were improved, its solution accuracy is still unsatisfactory. Gu et al. addressed high-dimensional complex function optimization by removing the chicks from the chicken swarm and introducing an inverted S-shaped inertia weight to construct an adaptive simplified CSO algorithm [28]. Although the proposed algorithm is significantly more accurate than several other algorithms, there is still room for improvement in convergence speed. By introducing a dissipative structure and a differential mutation operation into the basic CSO algorithm, Han constructed a hybrid CSO algorithm to avoid premature convergence on high-dimensional complex problems and verified its performance on 18 standard functions [29]. Although its convergence performance was improved, its solution accuracy should be further enhanced.
To address the aforementioned issues, we propose an adaptive dual-population collaborative CSO (ADPCCSO) algorithm in this paper. The algorithm solves high-dimensional complex problems by using an adaptive adjustment strategy for parameter G, an improvement strategy for foraging behaviors, and a dual-population collaborative optimization strategy. Specifically, the main technical features and originality of this paper are given below.
(1) The value of parameter G is given using an adaptive dynamic adjustment method, so as to balance the breadth and depth of the search abilities of the algorithm.
(2) To improve the solution accuracy and depth optimization ability of the CSO algorithm, an improvement strategy for foraging behaviors is proposed by introducing an improvement factor and adding a kind of chick’s foraging behavior near the optimal value.
(3) A dual-population collaborative optimization strategy based on the chicken swarm and artificial fish swarm is constructed to enhance the global search ability of the whole algorithm.
The simulation experiments on the selected standard test functions and the parameter estimation problem of the Richards model show that the ADPCCSO algorithm is better than some other meta-heuristic optimization algorithms in terms of solution accuracy, convergence performance, etc.
The rest of this paper is arranged as follows. Section 2 briefly introduces the principle and characteristics of the standard CSO algorithm. Section 3 describes the proposed ADPCCSO algorithm in detail; its improvement strategies and main implementation steps are presented in that section. Section 4 presents simulation experiments and analysis to verify the performance of the proposed ADPCCSO algorithm. Finally, Section 5 concludes the paper.
2. The Basic CSO Algorithm
The CSO algorithm is a class of random search algorithms based on the collective intelligent behavior of chicken swarms during foraging. In this algorithm, several randomly generated positions within the search range are regarded as chickens, and the fitness function values of the chickens are regarded as food sources. According to the fitness function values, the whole chicken swarm is divided into roosters, hens, and chicks, where roosters have the best fitness values, hens take second place, and chicks have the worst fitness values. The algorithm relies on the roosters, hens, and chicks to constantly conduct information interaction and cooperative sharing and finally finds the best food source [30,31]. Its characteristics are as follows:
(1) The whole chicken swarm is divided into several subgroups, and each subgroup is composed of a rooster, at least one hen, and several chicks. The hens and chicks look for food under the leadership of the roosters in their subgroups, and they will also obtain food from other subgroups.
(2) In the basic CSO algorithm, once the hierarchical relationship and dominance relationship between roosters, hens, and chicks are determined, they will remain unchanged for a certain period until the role update condition is met. In this way, they achieve information interaction and find the best food source.
(3) The whole algorithm realizes parallel optimization through the cooperation between roosters, hens, and chicks. The formulas corresponding to their foraging behaviors are as follows:
The roosters’ foraging behavior:

$$x_{i,j}^{t+1} = x_{i,j}^{t} \times \left( 1 + \mathrm{Randn}\left(0, \sigma^{2}\right) \right), \quad j \in [1, Dim],$$

$$\sigma^{2} = \begin{cases} 1, & f_{i} \le f_{k}, \\ \exp\left( \dfrac{f_{k} - f_{i}}{\left| f_{i} \right| + \varepsilon} \right), & \text{otherwise}, \end{cases} \qquad k \in [1, N_{r}],\ k \ne i,$$

where $x_{i,j}^{t}$ stands for the position of the ith rooster at iteration t, Dim is the dimension of the problem to be solved, $\mathrm{Randn}(0, \sigma^{2})$ is a random number matrix with a mean value of 0 and a variance of $\sigma^{2}$, $\varepsilon$ is the smallest positive normalized floating-point number in IEEE double precision, $f_{i}$ and $f_{k}$ are the fitness function values of the ith rooster and of a randomly selected rooster k, respectively, and $N_{r}$ is the number of roosters.
The hens’ foraging behavior is described by

$$x_{i,j}^{t+1} = x_{i,j}^{t} + S1 \times \mathrm{Rand} \times \left( x_{r1,j}^{t} - x_{i,j}^{t} \right) + S2 \times \mathrm{Rand} \times \left( x_{r2,j}^{t} - x_{i,j}^{t} \right),$$

$$S1 = \exp\left( \frac{f_{i} - f_{r1}}{\left| f_{i} \right| + \varepsilon} \right), \qquad S2 = \exp\left( f_{r2} - f_{i} \right),$$

where $x_{i,j}^{t}$ is the individual position of the ith hen, $x_{r1,j}^{t}$ is the position of the group-mate rooster of the ith hen, $x_{r2,j}^{t}$ is the position of a randomly selected chicken (rooster or hen) in the swarm, Rand is a uniform random number in [0, 1], and $r1 \ne r2$.
The chicks’ foraging behavior is described by

$$x_{i,j}^{t+1} = x_{i,j}^{t} + FL \times \left( x_{m,j}^{t} - x_{i,j}^{t} \right),$$

where i is the index of the chick, m is the index of the ith chick’s mother, and FL (FL ∈ [0, 2]) is a follow coefficient.
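As an illustration of the three update rules above, the following minimal Python sketch performs one CSO foraging step. The array layout, the `roles` dictionary, and the omission of boundary handling are simplifying assumptions of this sketch, not part of the original algorithm description.

```python
import numpy as np

def cso_step(pos, fit, roles, eps=np.finfo(float).tiny, fl_range=(0.0, 2.0)):
    """One foraging step of the basic CSO update rules.

    pos   : (N, Dim) array of chicken positions
    fit   : (N,) array of fitness values (smaller is better)
    roles : dict of index arrays 'roosters', 'hens', 'chicks', plus
            'mate' (hen -> its group rooster) and 'mother' (chick -> its hen)
    """
    new_pos = pos.copy()
    roosters, hens, chicks = roles["roosters"], roles["hens"], roles["chicks"]

    # Roosters: Gaussian perturbation whose variance depends on a random rival rooster k.
    for i in roosters:
        k = np.random.choice(roosters[roosters != i])
        sigma2 = 1.0 if fit[i] <= fit[k] else np.exp((fit[k] - fit[i]) / (abs(fit[i]) + eps))
        new_pos[i] = pos[i] * (1.0 + np.random.randn(pos.shape[1]) * np.sqrt(sigma2))

    # Hens: follow the group-mate rooster r1 and a random other chicken r2.
    for i in hens:
        r1 = roles["mate"][i]
        r2 = np.random.choice(np.concatenate([roosters, hens]))
        s1 = np.exp((fit[i] - fit[r1]) / (abs(fit[i]) + eps))
        s2 = np.exp(fit[r2] - fit[i])
        new_pos[i] = (pos[i]
                      + s1 * np.random.rand() * (pos[r1] - pos[i])
                      + s2 * np.random.rand() * (pos[r2] - pos[i]))

    # Chicks: follow the mother hen with a random follow coefficient FL in [0, 2].
    for i in chicks:
        m = roles["mother"][i]
        fl = np.random.uniform(*fl_range)
        new_pos[i] = pos[i] + fl * (pos[m] - pos[i])

    return new_pos
```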
3. ADPCCSO Algorithm
To address the premature convergence of the basic CSO algorithm in solving high-dimensional optimization problems, an ADPCCSO algorithm is proposed. First, to balance the breadth and depth search abilities of the basic CSO algorithm, an S-shaped function is utilized to adaptively adjust the value of parameter G. Then, in order to improve the solution accuracy of the algorithm, inspired by the literature [32], an improvement factor is used to dynamically adjust the foraging behaviors of the chickens. At the same time, when the role-update condition is met, the chicks are directed to search for food near the global optimal value, which enhances the depth-optimization ability of the algorithm. Finally, because the AFSA has unique behavior-pattern characteristics that allow an algorithm to quickly escape local optima in high-dimensional optimization problems, it is integrated into the CSO algorithm to form a dual-population collaborative optimization strategy based on the chicken swarm and the artificial fish swarm, which enhances the global search ability and thus enables rapid optimization.
3.1. The Improvement Strategy for Parameter G
In the basic CSO algorithm, the parameter G determines how often the hierarchical relationship and role assignment of the chicken swarm are updated. Setting an appropriate value of G plays a crucial role in balancing the breadth and depth search abilities of the algorithm. Too large a value of G makes the information interaction between individuals slow, which is not conducive to improving the breadth search ability of the algorithm; too small a value of G makes the information interaction between individuals too frequent, which is not beneficial to enhancing the depth-optimization ability of the algorithm. Since the value of G is a constant in the basic CSO algorithm, it cannot balance the breadth and depth search abilities. We therefore use Equation (7) to adaptively adjust the value of G: in the early stage of the iteration, G takes a smaller value to enhance the breadth-optimization ability of the algorithm, and in the late stage of the iteration, G takes a larger value to enhance the depth-optimization ability.
where t represents the current iteration number and round() is a rounding function that rounds an element to the nearest integer.
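As a purely illustrative sketch (the exact form of Equation (7) is defined in the paper and not reproduced here), the snippet below shows one way an S-shaped, rounded schedule could ramp G from a small value to a larger one over the iterations; G_min, G_max, and the steepness a are hypothetical parameters.

```python
import numpy as np

def adaptive_G(t, M, G_min=2, G_max=20, a=10.0):
    """Hypothetical S-shaped schedule for the role-update period G.

    t : current iteration (1..M); M : maximum number of iterations.
    Early iterations -> small G (frequent role updates, breadth search);
    late iterations  -> large G (rare role updates, depth search).
    """
    s = 1.0 / (1.0 + np.exp(-a * (t / M - 0.5)))   # logistic ramp in (0, 1)
    return int(np.round(G_min + (G_max - G_min) * s))
```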
3.2. The Improvement Strategy for Foraging Behaviors
To improve the solution accuracy and depth-optimization ability of the algorithm, we construct an improvement strategy for foraging behaviors in this section; that is, an improvement factor is introduced into the position-update formulas of the chickens. At the same time, to further improve the depth-optimization ability of the CSO algorithm, a chicks’ foraging behavior near the optimal value is added.
3.2.1. Improvement Factor
To enhance the optimization ability of the algorithm, a learning factor was integrated into the foraging formula of the roosters in Reference [32], which can be expressed as follows:
where M is the maximum number of iterations, and the maximum and minimum values of the learning factor are 0.9 and 0.4, respectively.
The method in Reference [32] improved the optimization ability of the algorithm to a certain degree, but it only modified the position-update formula of the roosters, which limits further improvement. Therefore, we slightly modified the learning factor in Reference [32] and named it the improvement factor; that is, through trial and error, we set the maximum and minimum values of the improvement factor to 0.7 and 0.1, respectively, and applied it in the foraging formulas of the roosters, hens, and chicks. The experimental results show that the solution accuracy and convergence performance are significantly improved. The modified foraging formulas for roosters, hens, and chicks are shown in Equations (10)–(12):
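Since Equations (8)–(12) are not reproduced above, the following sketch only illustrates the general idea under an assumed linear decay of the improvement factor from 0.7 to 0.1; the exact functional form used in the paper may differ.

```python
def improvement_factor(t, M, w_max=0.7, w_min=0.1):
    """Assumed linearly decreasing improvement factor (illustrative only)."""
    return w_max - (w_max - w_min) * t / M

# Illustrative use: scale the step taken toward the guiding individual in each
# foraging rule (roosters, hens, and chicks alike), so steps shrink over time:
# new_pos[i] = pos[i] + improvement_factor(t, M) * step_of_basic_rule
```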
3.2.2. Chicks’ Foraging Behavior near the Optimal Value
To enhance the depth-optimization ability of the CSO algorithm, when the role-update condition is met, the chicks are allowed to search for food directly near the current optimal value. The corresponding formulas are as follows:
where $x_{best}^{t}$ is the global optimal individual position at iteration t, and lb and ub are the lower and upper bounds of an interval set near the current optimal value.
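Equations (13)–(15) are not reproduced above; as a rough, hypothetical illustration of the idea, chicks could be resampled uniformly inside a small interval [lb, ub] built around the current global best:

```python
import numpy as np

def chicks_near_best(best_pos, n_chicks, radius=0.1):
    """Hypothetical resampling of chicks around the global best position.

    best_pos : (Dim,) current global best; radius : half-width of the local interval.
    Returns an (n_chicks, Dim) array of new chick positions inside [lb, ub].
    """
    lb = best_pos - radius * np.abs(best_pos)   # assumed lower bound near the optimum
    ub = best_pos + radius * np.abs(best_pos)   # assumed upper bound near the optimum
    return lb + np.random.rand(n_chicks, best_pos.size) * (ub - lb)
```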
3.3. The Dual-Population Collaborative Optimization Strategy
To help the algorithm jump out of local extrema more quickly and thus converge rapidly to the global optimal value, the AFSA, with its good robustness and global search ability, is introduced to construct a dual-population collaborative optimization strategy based on the chicken swarm and the artificial fish swarm. With this strategy, the best individuals and several random individuals are exchanged between the two populations to break the equilibrium state within each population, so that the algorithm escapes local extrema. The flow chart of the dual-population collaborative optimization strategy is shown in Figure 1.
The main steps are as follows:
- (1)
Population initialization. Randomly generate two initial populations, each with a population size of N: the chicken swarm and the artificial fish swarm.
- (2)
Chicken swarm optimization. Calculate the fitness function values of the entire chicken swarm and record the optimal value.
- (a)
Update the position of chickens.
- (b)
Update the optimal value of the current chicken swarm.
- (3)
Artificial fish swarm optimization. Calculate the fitness function values of the entire artificial fish swarm and record the optimal value.
- (i)
Update the positions of the artificial fish swarm; that is, simulate the fish behaviors of preying, swarming, and following, compare the resulting fitness function values, and execute the best behavior. The corresponding formulas are as follows.
The preying behavior: a candidate state $X_{j} = X_{i} + Visual \times \mathrm{Rand}$ is randomly selected within the visual field of the ith artificial fish; if its food concentration is better than that of the current position, the fish moves one step toward it:

$$X_{i}^{t+1} = X_{i}^{t} + \mathrm{Rand} \times Step \times \frac{X_{j} - X_{i}^{t}}{\left\lVert X_{j} - X_{i}^{t} \right\rVert},$$

where $X_{i}$ is the position of the ith artificial fish, and Step and Visual represent the step length and visual field of an artificial fish, respectively.
The swarming behavior:

$$X_{i}^{t+1} = X_{i}^{t} + \mathrm{Rand} \times Step \times \frac{X_{c} - X_{i}^{t}}{\left\lVert X_{c} - X_{i}^{t} \right\rVert},$$

where nf represents the number of partners within the visual field of the artificial fish and $X_{c}$ is the center position of these partners.
The following behavior:

$$X_{i}^{t+1} = X_{i}^{t} + \mathrm{Rand} \times Step \times \frac{X_{max} - X_{i}^{t}}{\left\lVert X_{max} - X_{i}^{t} \right\rVert},$$

where $X_{max}$ is the position of the artificial fish with the optimal food concentration within the current artificial fish’s visual field.
- (ii)
Update the optimal value of the current artificial fish swarm.
- (4)
Interaction. To realize information interaction and break the equilibrium state within each population, first exchange the optimal individuals of the chicken swarm and the artificial fish swarm, and then exchange Num (Num < N) randomly selected individuals between the two populations (a minimal sketch of this exchange is given after this list).
- (5)
Repeat steps (2)–(4) until the specified maximum number of iterations is reached and the optimal value is output.
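As referenced in step (4), the following minimal Python sketch illustrates the exchange under an assumed array representation of the two populations: the best individuals are swapped first, followed by Num randomly selected individuals (here the hypothetical parameter `num`).

```python
import numpy as np

def exchange(chickens, chicken_fit, fish, fish_fit, num):
    """Swap the best individuals and `num` random individuals between populations.

    chickens, fish : (N, Dim) position arrays; chicken_fit, fish_fit : (N,) fitness
    arrays (smaller is better). Arrays are modified in place.
    """
    # 1) Swap the current best individual of each population.
    bc, bf = np.argmin(chicken_fit), np.argmin(fish_fit)
    chickens[bc], fish[bf] = fish[bf].copy(), chickens[bc].copy()
    chicken_fit[bc], fish_fit[bf] = fish_fit[bf], chicken_fit[bc]

    # 2) Swap `num` randomly selected individuals (num < N).
    idx_c = np.random.choice(len(chickens), size=num, replace=False)
    idx_f = np.random.choice(len(fish), size=num, replace=False)
    chickens[idx_c], fish[idx_f] = fish[idx_f].copy(), chickens[idx_c].copy()
    chicken_fit[idx_c], fish_fit[idx_f] = fish_fit[idx_f].copy(), chicken_fit[idx_c].copy()
```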
3.4. The Design and Implementation of the ADPCCSO Algorithm
To address the premature convergence issue encountered by the basic CSO algorithm in solving high-dimensional optimization problems, the ADPCCSO algorithm is proposed. First, the algorithm adjusts the parameter G adaptively and dynamically to balance its breadth and depth search abilities. Then, the solution accuracy and depth-optimization ability of the algorithm are enhanced by using the improvement strategy for foraging behaviors described in Section 3.2. Finally, the dual-population collaborative optimization strategy is introduced to help the algorithm jump out of local extrema more quickly. The specific process is as follows (a skeleton of the main loop is sketched after the steps):
- (1)
Parameter initialization. The numbers of roosters, hens, and chicks are 0.2 × N, 0.6 × N, and N − 0.2 × N − 0.6 × N, respectively.
- (2)
Population initialization. Initialize the two populations according to the method described in Section 3.3.
- (3)
Chicken swarm optimization. Calculate the fitness function values of chickens and record the optimal value of the current population.
- (4)
Conditional judgment. If t = 1, go to step (c); otherwise, execute step (a).
- (a)
Judgment of the information interaction condition in the chicken swarm. If t%G = 1, execute step (b); otherwise, go to step (d).
- (b)
Chicks’ foraging behavior near the optimal value. Chicks search for food according to Equations (13)–(15) in Section 3.2.2.
- (c)
Information interaction. In light of the current fitness function values of the entire chicken swarm, the dominance relationship and hierarchical relationship of the whole population are updated to achieve information interaction.
- (d)
Foraging behavior. The chickens with different roles search for food according to Equations (10)–(12).
- (e)
Modification of the optimal value in the chicken swarm: after each iteration, the optimal value of the whole chicken swarm is updated.
- (5)
Artificial fish swarm optimization. Calculate the fitness function values of the artificial fish swarm and record the optimal value of the current population.
- (i)
In the artificial fish swarm, behaviors of swarming, following, preying, and random movement are executed to find the optimal food.
- (ii)
Update the optimal value of the whole artificial fish swarm.
- (6)
Exchange. This includes the exchange of the optimal individuals and the exchange of several other individuals in the two populations.
- (7)
Judgment of the ending condition of the algorithm. If the specified maximum number of iterations is reached, the optimal value is output and the program terminates; otherwise, go to step (3).
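To make the control flow concrete, the following hedged Python skeleton mirrors steps (1)–(7); `assign_roles` and `afsa_step` are assumed helpers (not defined in the text reproduced here), and the earlier sketches `adaptive_G`, `cso_step`, `chicks_near_best`, and `exchange` are reused. Details such as boundary handling and the exact update formulas are deliberately omitted.

```python
import numpy as np

def adpccso(fitness, dim, N=50, M=1000, num_swap=5, lower=-100.0, upper=100.0):
    """Skeleton of the ADPCCSO main loop (illustrative, not the exact implementation)."""
    # (1)-(2) Parameter and population initialization: 0.2N roosters, 0.6N hens, rest chicks.
    chickens = lower + np.random.rand(N, dim) * (upper - lower)
    fish = lower + np.random.rand(N, dim) * (upper - lower)
    best_pos, best_val = None, np.inf

    for t in range(1, M + 1):
        chicken_fit = np.apply_along_axis(fitness, 1, chickens)
        G = adaptive_G(t, M)                       # Section 3.1 (hypothetical schedule)

        # (4) Role update / information interaction every G iterations.
        if t == 1 or t % G == 1:
            roles = assign_roles(chicken_fit)      # assumed helper: sort into roosters/hens/chicks
            if t > 1:                              # (b) chicks search near the current best
                chickens[roles["chicks"]] = chicks_near_best(best_pos, len(roles["chicks"]))

        # (3)/(d) Foraging of roosters, hens, and chicks (Equations (10)-(12)).
        chickens = cso_step(chickens, chicken_fit, roles)

        # (5) Artificial fish swarm optimization (preying / swarming / following).
        fish, fish_fit = afsa_step(fish, fitness)  # assumed helper

        # (e)/(ii) Track the global best over both populations.
        chicken_fit = np.apply_along_axis(fitness, 1, chickens)
        for pop, fit in ((chickens, chicken_fit), (fish, fish_fit)):
            i = np.argmin(fit)
            if fit[i] < best_val:
                best_val, best_pos = fit[i], pop[i].copy()

        # (6) Exchange the best and num_swap random individuals between populations.
        exchange(chickens, chicken_fit, fish, fish_fit, num_swap)

    # (7) Return the best solution found.
    return best_pos, best_val
```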
3.5. The Time Complexity Analysis of the ADPCCSO Algorithm
In the standard CSO algorithm, assume that the population size of the chicken swarm is N, the dimension of the solution space is d, the number of iterations of the entire algorithm is M, and the hierarchical relationship of the chicken swarm is updated every G iterations. The numbers of roosters, hens, and chicks in the chicken swarm are Nr, Nh, and Nc, respectively; that is, Nr + Nh + Nc = N. The calculation time of the fitness function value of each chicken is tf. The time complexity of the CSO algorithm then consists of two stages, namely, the initialization stage and the iteration stage [30,32].
In the initialization stage (including parameter initialization and population initialization), assume that the setting time of the parameters is t1, the time required to generate a random number is t2, and the sorting time of the fitness function values is t3. Then, the time complexity of the initialization stage is T1 = t1 + N × d × t2 + t3 + N × tf = O(N × d + N × tf).
In the iteration stage, let the time for each rooster, hen, and chick to update its position in each dimension be tr, th, and tc, respectively; the time to compare the fitness function values of two individuals be t4; and the time for the chickens to carry out information interaction be t5. The time complexity of this stage is then as follows.
Therefore, the time complexity of the standard CSO algorithm is as follows.
On the basis of the standard CSO algorithm, the ADPCCSO algorithm adds the improvement factor to the position-update formulas of the chicken swarm, the chicks’ foraging behavior near the optimal value, and the artificial fish swarm optimization strategy. It is assumed that the population size of the artificial fish swarm is N and that the number of attempts made when performing the preying behavior is try_number. In the swarming and following behaviors, friend_number counting operations are needed to calculate the values of nf and Xmax. The time to calculate the improvement factor is t6, and the times taken to perform the preying, swarming, and following behaviors are t7, t8, and t9, respectively.
Therefore, the time complexity of adding the improvement factor to the position-update formulas is T3 = M × N × t6 = O(M × N). The chicks’ foraging behavior near the optimal value is executed about M/G times (once per role update), so its time complexity is T4 = (M/G) × d × Nc × tc = O((M/G) × d × Nc).
The time complexity of the artificial fish swarm optimization strategy mainly consists of three parts: the preying behavior, the swarming behavior, and the following behavior. Its time complexity is as follows [33].
Therefore, the time complexity of the ADPCCSO algorithm is as follows.
It can be seen that the time complexities of the ADPCCSO and standard CSO algorithms remain of the same order of magnitude.
5. Conclusions
In view of the premature convergence that the basic CSO algorithm is prone to when solving high-dimensional complex optimization problems, an ADPCCSO algorithm is proposed in this paper. The algorithm first uses an adaptive dynamic adjustment method to set the value of parameter G, so as to balance the algorithm’s depth and breadth search abilities. Then, the solution accuracy and depth-optimization ability of the algorithm are improved by using a foraging-behavior improvement strategy. Finally, a dual-population collaborative optimization strategy is constructed to improve the algorithm’s global search ability. The experimental results preliminarily show that the proposed algorithm has obvious advantages over the comparison algorithms in terms of solution accuracy and convergence performance, which provides new ideas for the study of high-dimensional optimization problems.
However, although the proposed algorithm achieves obvious advantages over the comparison algorithms on most of the given benchmark test functions, there is still a gap between the solutions obtained on several functions and their theoretical optima. How to further improve the performance of the algorithm so that it can better solve more complex large-scale optimization problems therefore requires further research. Moreover, in future work, it would also be worthwhile to apply this algorithm to other fields, such as constrained optimization, multi-objective optimization, and vehicle-routing problems.