1. Introduction
With the rapid advance of science and technology, there is an increasing demand to solve various engineering optimization problems. These problems are becoming increasingly complex, calling for suitable solution methods. Metaheuristic algorithms (MAs), a class of optimization algorithms inspired by natural phenomena, can be categorized into nine groups [1]. Among them, bionics-based swarm intelligence optimization algorithms have gained popularity among scholars from different fields due to their simple structure and low time complexity. These algorithms reach the optimization goal by simulating the group behavior of various animals, exploiting information exchange and cooperation among individuals, and relying on simple and effective interactions with experienced, well-performing individuals. Currently, MAs have been widely applied to solve complex optimization problems in areas such as medicine and computer science [2,3,4,5].
With the emergence of intelligent optimization algorithms, researchers have proposed new algorithms by observing the social behavior of other organisms. For example, Mirjalili [6] proposed the Grey Wolf Optimizer (GWO) in 2014. Mirjalili and Lewis [7] analyzed the behavior of whales rounding up their prey and proposed the Whale Optimization Algorithm (WOA) in 2016. Mirjalili [8] proposed the Salp Swarm Algorithm (SSA) in 2017, inspired by the leader–follower feeding behavior of salp swarms. Zervoudakis and Tsafarakis [9], inspired by the mating attraction behavior of mayflies, proposed the Mayfly Optimization Algorithm (MFA) in 2020. These algorithms have provided new ideas for solving complex problems. Although MAs play a significant role in optimization across various fields, the no-free-lunch (NFL) theorem [10] has demonstrated that no single MA can solve all optimization problems. Each MA has its own advantages and limitations and is effective only for certain problems. As a result, many scholars have been motivated to propose novel or improved MAs to solve various practical optimization problems.
The Flamingo Search Algorithm (FSA) is an intelligent optimization algorithm proposed by Wang et al. [11] in 2021, based on flamingo migration and foraging behavior. The researchers observed that flamingos have characteristics that distinguish them from other organisms: a flamingo's long neck can rotate 360 degrees while foraging, which helps the individual search for food more effectively. Additionally, an individual flamingo that finds food sings to communicate its current location to other individuals in the population, which increases the probability that the population locates food-rich areas.
Due to the excellent performance of the FSA, many researchers have conducted related studies using this algorithm. Some have focused on applications of the Flamingo Search Algorithm. Mahdi et al. [12] used the Flamingo Search Algorithm to optimize feature selection and classify COVID-19 patients from clinical texts. Durham et al. [13] proposed a quasi-opposition-based Flamingo Search Algorithm and integrated a generalized cyclic crossover model to achieve more effective feature selection. Abraham et al. [14] proposed a Flamingo Search Algorithm-based energy-efficient cluster head selection method to improve the energy efficiency of wireless sensor networks. Srinivasarao et al. [15] proposed an effective Flamingo Search Algorithm based on a multi-objective cost model to optimize materialized view selection in data warehouse management. However, these studies focus on applying the Flamingo Search Algorithm, and little research has been conducted on optimizing the algorithm itself. Other researchers have combined the FSA with other intelligent optimization algorithms to improve its ability to escape local optima. For example, Fernisha et al. [16] proposed a residual low-light image enhancement network optimized by a hybrid of the Particle Swarm Optimization and Flamingo Search Algorithms to improve image resolution. Arivubrakan et al. [17] proposed a multi-objective hybrid woodpecker–flamingo search optimization algorithm to find the optimal cluster-head-based energy-aware routing protocol in IoT environments. Kumar et al. [18] combined the Flamingo Search Algorithm with fuzzy decision strategies to reduce the probability of the algorithm falling into local optima and used the proposed method to solve antenna optimization problems. Raamesh et al. [19] proposed a hybrid Random Shepherd–Flamingo Search Algorithm, which integrates the Random Shepherd Optimization algorithm and the Flamingo Search Algorithm to improve the quality of generated software test cases. Hussain et al. [20] combined the Flamingo Search Algorithm with genetic algorithms to reduce computational complexity and achieve better cloud computing task scheduling. These hybrid methods improve the Flamingo Search Algorithm's ability to escape local optima by incorporating the strong local search ability of other swarm intelligence algorithms, but they still suffer from low accuracy and slow convergence. In summary, while researchers have successfully applied the FSA in different fields, further optimization and improvement of the algorithm is still necessary for better performance.
In this article, we propose a multi-strategy improved Flamingo Search Algorithm (IFSA). The algorithm combines cubic chaotic mapping, an information feedback model dynamically adjusted according to fitness, random opposition-based learning, and an elite position greedy selection strategy. The initial population is generated by the cubic chaotic mapping strategy to improve its diversity. The fitness-adjusted information feedback model promotes information exchange among individuals of the population and enhances the algorithm's local exploitation and global exploration abilities. Random opposition-based learning and the elite position greedy selection strategy are introduced to improve the algorithm's ability to escape local optima.
The remaining parts of this study are organized as follows. Section 2 describes the inspiration and mathematical model of the FSA. The detailed design of the three improvement strategies is described in Section 3. Section 4 evaluates the performance of the IFSA using two different sets of numerical experiments and Wilcoxon signed-rank tests. Section 5 provides some concluding observations and several future research directions.
3. Improving Flamingo Search Algorithm
3.1. Cubic Chaos Initialization Population
In the FSA, the initial positions of individuals are generated through random initialization, which can lead to an uneven distribution of the initial population and ultimately reduce solution accuracy. Chaotic sequences based on chaos theory, however, possess characteristics such as randomness and boundedness. Among the various chaotic mappings available, the cubic chaotic mapping generates a more uniformly distributed and ergodic chaotic sequence, which helps to improve population diversity [21,22]. Therefore, this study introduces the cubic chaotic mapping, with its better ergodicity, to initialize the FSA, so that the flamingo population is more evenly distributed in the search space during the initialization stage. After the chaotic sequence is generated, the chaotic space is mapped to the solution space of the optimization problem according to the range of the variables to be optimized, as follows.
The steps to initialize the flamingo population using cubic chaos are:
(1) Obtain the first individual: generate a random $d$-dimensional vector
$$y_1 = (y_{1,1}, y_{1,2}, \ldots, y_{1,d}),$$
which is the first flamingo individual, where $y_{1,j} \in (-1, 1)$ and $y_{1,j} \neq 0$.
(2) Obtain the remaining $P-1$ individuals by iterating the cubic chaotic map over each dimension of $y_i$ using the following equation:
$$y_{i+1,j} = 4y_{i,j}^{3} - 3y_{i,j}, \qquad (8)$$
where $i = 1, 2, \ldots, P-1$ and $j = 1, 2, \ldots, d$.
(3) After generating the chaotic sequence, the chaotic sequence is mapped into the search space according to the range of values of the variables to be optimized, and the mapping equation is
$$x_{i,j} = lb_j + \frac{(y_{i,j} + 1)(ub_j - lb_j)}{2},$$
where $x_{i,j}$ is the $j$th-dimensional position of the $i$th individual flamingo in the search space, $ub_j$ is the upper bound of the $j$th dimension of the search space, and $lb_j$ is the lower bound of the $j$th dimension of the search space. $y_{i,j}$ is the $j$th-dimensional coordinate of the $i$th individual flamingo obtained from Equation (8).
3.2. Information Feedback Model
Wang et al. [23] found that in most metaheuristic algorithms, the update process fails to utilize the information available to individuals in previous iterations. They therefore proposed an information feedback model that incorporates useful information from previous iterations into the update process, resulting in a significant improvement in solution quality.
The information feedback model essentially generates a new individual by combining the information from several previous generations of individuals through weighted summation. There are two operational modes for the information feedback model: random mode and fixed mode [24,25]. To prevent a significant increase in algorithm complexity resulting from the retention of too many generations of population information, the number of previous generations retained is usually no more than three [23]. In this study, we focus only on the case where the number of predecessors is 1. In this case, the information feedback model is expressed according to the following equations:
$$x_i^{t+1} = \alpha\, y_i^{t+1} + \beta\, x_k^{t},$$
$$\alpha = \frac{f_k^{t}}{f_i^{t+1} + f_k^{t}}, \qquad \beta = \frac{f_i^{t+1}}{f_i^{t+1} + f_k^{t}}, \qquad \alpha + \beta = 1.$$
Assume the current generation is $t+1$; then, $x_k^{t}$ refers to the position of the $k$th flamingo of the previous generation, $y_i^{t+1}$ is the intermediate individual derived using the FSA update, $x_i^{t+1}$ is the position of the $i$th flamingo of the current generation, and $f$ denotes the fitness of the corresponding individual. $x_i^{t+1}$ is derived from the weighted sum of the intermediate and predecessor individuals, and the weight coefficients $\alpha$ and $\beta$ are determined by the fitness values. There are two choices for the value of $k$. The first is $k = i$, which is the fixed mode. The second is choosing $k$ at random from $\{1, 2, \ldots, P\}$, which is the random mode. Different choices of $k$ have different facilitating effects on the information feedback model.
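A minimal Python sketch of the one-predecessor update follows. It assumes the standard weighted-sum form from Wang et al. [23] with fitness-based coefficients, and strictly positive fitness values so the weights are well defined; variable names are illustrative.

```python
import numpy as np

def feedback_update(y_new, f_new, x_prev, f_prev, mode="fixed", rng=None):
    """One-predecessor information feedback update (sketch).

    y_new, f_new: positions/fitness produced by the base FSA step at t+1.
    x_prev, f_prev: positions/fitness of generation t.
    """
    if rng is None:
        rng = np.random.default_rng()
    n = len(y_new)
    if mode == "fixed":
        k = np.arange(n)                 # each individual uses its own predecessor
    else:
        k = rng.integers(0, n, size=n)   # a random predecessor per individual
    fi, fk = f_new, f_prev[k]
    a = fk / (fi + fk)                   # weight of the intermediate individual
    b = fi / (fi + fk)                   # weight of the predecessor
    return a[:, None] * y_new + b[:, None] * x_prev[k]
```

When an individual's new fitness is much better (smaller) than its predecessor's, the weight a approaches 1, so the intermediate individual dominates; equal fitness gives an even blend.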
Different choices of the parameter $k$ result in different types of information feedback models. Each of the two strategies for selecting $k$ has its own advantages and disadvantages, and neither is necessarily superior to the other. The fixed mode can effectively improve the convergence speed of the algorithm when the algorithm is not trapped in a local optimum; its disadvantage is an insufficient ability to escape local optima. The random mode has good randomness and increases the probability of learning from excellent individuals in the population, thereby enhancing the global search capability of the algorithm. Nevertheless, compared with the facilitating effect of the fixed mode, the random mode can weaken convergence speed and precision before the algorithm falls into a local optimum.
Based on the analysis above, this study proposes an information feedback model based on adaptive fitness adjustment and applies it to the FSA algorithm. The model dynamically selects the appropriate information feedback mode based on the current fitness value. By combining the strengths of two different models, the performance of the algorithm is jointly improved.
The following formula is designed as a threshold for dynamically adjusting the model based on fitness:
$$\delta_i^{t} = \frac{f_i^{t-1} - f_{\min}^{t-1}}{f_{\max}^{t-1} - f_{\min}^{t-1}}. \qquad (12)$$
Assume the current generation is $t$, where $x_i^{t}$ represents the position of the $i$th flamingo of the current generation, $f_i^{t-1}$ represents the fitness value of the $i$th flamingo of the previous generation, and $f_{\min}^{t-1}$ and $f_{\max}^{t-1}$ represent the minimum and maximum fitness values of the previous generation, respectively.
In the beginning stage of the algorithm, individuals tend to be dispersed. Therefore, the feedback model is set to the fixed mode to enhance the algorithm's early optimization ability. As the algorithm iterates, when the value calculated from Formula (12) falls below the critical value set in advance (10^−4; in Appendix A of this paper, this parameter is determined by comparing the results of 30 independent runs of 150 iterations each on three unimodal and three multimodal functions), the algorithm is likely to be trapped in a local optimum. To avoid this, the feedback model is switched to the random mode, in which individuals in the population learn at random from the previous generation, which is beneficial for discovering potentially optimal locations. Throughout the updating process, the magnitude of this value is crucial in selecting the appropriate mode, and the critical value is typically determined by comparing experimental results on different test functions.
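The switching rule can be sketched as follows. The threshold expression here is an assumption (each individual's previous-generation fitness normalized by that generation's fitness range); the exact form is the one given by Formula (12), and 10^−4 is the critical value reported in the paper.

```python
import numpy as np

EPSILON = 1e-4  # critical value reported in the paper (tuned in its Appendix A)

def select_feedback_mode(f_prev):
    """Choose the 'fixed' or 'random' feedback mode per individual (sketch).

    A normalized value below EPSILON suggests the individual has converged
    toward the current best, so the random mode is used to help it escape
    a possible local optimum; otherwise the fixed mode is kept.
    """
    f_prev = np.asarray(f_prev, float)
    spread = f_prev.max() - f_prev.min()
    if spread == 0.0:                    # fully converged population
        return np.full(f_prev.shape, "random")
    delta = (f_prev - f_prev.min()) / spread
    return np.where(delta < EPSILON, "random", "fixed")
```

In a full implementation, the result would decide, per individual, which predecessor index k the feedback update uses.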
3.3. Random Opposition-Based Learning Strategy and Elite Positions Greedy Selection
Opposition-Based Learning (OBL) is an optimization strategy that improves the ability of intelligent algorithms to escape local optima [26,27,28]. Random Opposition-Based Learning (ROBL) is an improved version of OBL.
It is defined as follows:
$$\hat{x}_j = lb_j + ub_j - r \cdot x_j, \qquad r \in (0, 1),$$
where $\hat{x}_j$ denotes the random opposition solution, $lb_j$ and $ub_j$ are the lower and upper search space limits of the $j$th dimension of the solution space, and $r$ is a random number within (0, 1). The movement of flamingo individuals in the FSA is mainly influenced by the optimal individuals, so if the population cannot escape a local optimum, the quality of the final solution is often poor. The opposition solution calculated by the above equation is more stochastic than the one derived by the original OBL, which helps the algorithm reduce the probability of falling into a local optimum. Therefore, using the ROBL strategy in the exploitation stage can improve the local search capability of the algorithm.
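A minimal sketch of the ROBL step, assuming the common form x̂ = lb + ub − r·x with r drawn per dimension from U(0, 1); clipping the result back into bounds is an added safeguard, not necessarily part of the paper's formulation.

```python
import numpy as np

def random_opposition(x, lb, ub, rng=None):
    """Generate a random opposition-based solution for position x (sketch)."""
    if rng is None:
        rng = np.random.default_rng()
    x = np.asarray(x, float)
    r = rng.uniform(0.0, 1.0, size=x.shape)  # fresh random factor per dimension
    x_hat = lb + ub - r * x
    # Keep the opposite solution feasible (assumption: clip to the bounds).
    return np.clip(x_hat, lb, ub)
```

Unlike plain OBL (x̂ = lb + ub − x), the random factor r scatters the opposite point, which is what gives ROBL its extra stochasticity.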
Since the movement of the vast majority of flamingo individuals in the Flamingo Search Algorithm is mainly influenced by elite individuals, greedy selection applied to elite flamingo individuals can enhance population diversity while accelerating convergence. At each iteration, the fitness value of an elite individual's new position is compared with that of its original position; if the new position's fitness is better, the elite individual's position is replaced by the new position in that generation. Otherwise, the flamingo's position does not change. The specific design is as follows:
$$x_i^{t+1} = \begin{cases} x_{\mathrm{new},i}^{t}, & f(x_{\mathrm{new},i}^{t}) < f(x_i^{t}), \\ x_i^{t}, & \text{otherwise}, \end{cases}$$
where $f(x_{\mathrm{new},i}^{t})$ and $f(x_i^{t})$ denote the fitness values of the elite flamingo individual in its new and original positions, respectively, $x_{\mathrm{new},i}^{t}$ denotes the newly generated position of the $i$th flamingo, $x_i^{t}$ represents the original position of the $i$th flamingo, and $x_i^{t+1}$ denotes the position of the $i$th flamingo in the $(t+1)$th iteration.
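Assuming a minimization problem (lower fitness is better), the greedy replacement of elite positions can be sketched as:

```python
import numpy as np

def greedy_elite_select(x_old, f_old, x_new, f_new):
    """Keep each elite's new position only if its fitness improves (sketch).

    Returns the selected positions and their fitness values.
    """
    better = f_new < f_old                        # which new positions improve
    x = np.where(better[:, None], x_new, x_old)   # row-wise position choice
    f = np.where(better, f_new, f_old)
    return x, f
```

Because fitness never worsens under this rule, the elite positions are monotonically non-degrading across iterations.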
3.4. Algorithm Implementation Flow
The flow chart of the IFSA is shown in Figure 1.
Step 1: Initialize the population parameters. The proportion of flamingos migrating first during each iteration is MPb (consistent with the FSA [11], with a value of 0.1).
Step 2: Initialize the flamingo population location using cubic chaos mapping.
Step 3: The flamingo population positions are updated. The foraging quantity for the tth iteration is MPr = rand[0, 1] × P × (1 − MPb). Each iteration includes two migrations: the number of flamingos in the first migration is MPo = MPb × P, and the number in the second migration is MPt = P − MPo − MPr.
Step 4: The fitness value of each flamingo is calculated, and the flamingo population is ranked according to fitness. The best- and worst-adapted individuals perform migration operations, and the rest perform foraging operations.
Step 5: Migrating and foraging flamingos update their positions according to Equations (5) and (6), respectively. The threshold value from Formula (12) is calculated and used to select the information feedback mode, and the greedy selection and random opposition-based learning strategies are applied to elite individuals.
Step 6: The flamingo individuals that cross the boundary are processed.
Step 7: If the maximum number of iterations is reached, go to Step 8; otherwise, go to Step 3.
Step 8: Output the optimal solution and the optimal value.
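The population split in Step 3 can be sketched as follows; rounding the fractional counts to integers is an implementation detail the steps above do not spell out.

```python
import numpy as np

def migration_counts(P, MPb=0.1, rng=None):
    """Split a population of size P for one IFSA iteration (sketch).

    MPo: flamingos in the first migration (proportion MPb),
    MPr: foraging flamingos, MPr = rand[0, 1] * P * (1 - MPb),
    MPt: flamingos in the second migration (the remainder).
    """
    if rng is None:
        rng = np.random.default_rng()
    MPo = int(round(MPb * P))                          # first migration
    MPr = int(rng.uniform(0.0, 1.0) * P * (1 - MPb))   # foragers
    MPt = P - MPo - MPr                                # second migration
    return MPo, MPr, MPt
```

Since MPr is at most P × (1 − MPb), the three counts always sum to P and MPt stays non-negative.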
5. Conclusions and Future Work
This paper proposes a multi-strategy improved Flamingo Search Algorithm (IFSA) to address the issues of local optima and low solution accuracy. The proposed approach combines three strategies to enhance the algorithm's performance: cubic chaotic mapping; an information feedback model dynamically adjusted based on fitness; and random opposition-based learning with elite position greedy selection. Cubic chaotic mapping is utilized to improve the diversity of the initial population and the quality of the initial solutions. The fitness-adjusted information feedback model effectively enhances both the global search and local exploitation abilities of the algorithm. Additionally, the random opposition-based learning and elite position greedy selection strategies enable the algorithm to escape local optima while retaining advantageous individuals, accelerating convergence. The improved algorithm was assessed on 23 test functions. The results of the first comparative experiment demonstrated that each strategy improved the FSA significantly. When all three strategies were applied simultaneously, there was a notable increase in the convergence speed and accuracy of the algorithm on most test functions, along with improved stability. The second comparative experiment indicated that the IFSA outperformed the PSO, GWO, WOA, TSA, ARO, and AHA algorithms on the majority of the tested functions. The Wilcoxon signed-rank test indicated that the IFSA differed significantly from the other algorithms. Although the IFSA exhibited remarkable convergence speed, its ability to break away from local optima on some functions remained weak, calling for further optimization.
In the next stage of our research, we aim to further enhance the algorithm’s ability to overcome local optima by implementing additional strategies. Moreover, we plan to apply IFSA to clustering analysis to evaluate its performance in practical applications.