Article

Multi-Strategy Improved Flamingo Search Algorithm for Global Optimization

1
School of Information Engineering, Tianjin University of Commerce, Tianjin 300134, China
2
School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(9), 5612; https://doi.org/10.3390/app13095612
Submission received: 27 March 2023 / Revised: 27 April 2023 / Accepted: 27 April 2023 / Published: 1 May 2023
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract
To overcome the limitations of the Flamingo Search Algorithm (FSA), such as its tendency to converge on local optima and its limited solution accuracy, we present an improved algorithm known as the Multi-Strategy Improved Flamingo Search Algorithm (IFSA). The IFSA utilizes a cubic chaotic mapping strategy to generate the initial population, which enhances the quality of the initial solution set. Moreover, the information feedback model strategy is improved to dynamically adjust the model based on the current fitness value, which enhances the information exchange within the population and the search capability of the algorithm itself. In addition, we introduce the Random Opposition Learning and Elite Position Greedy Selection strategies to consistently retain superior individuals while reducing the probability of the algorithm falling into a local optimum, thereby further enhancing its convergence. We evaluate the performance of the IFSA using 23 benchmark functions and verify its improvements using the Wilcoxon rank-sum test. The comparative experimental results indicate that the proposed IFSA obtains higher convergence accuracy and better exploration abilities. It also provides a new optimization algorithm for solving complex optimization problems.

1. Introduction

With the rapid advance of science and technology, there is an increasing demand to solve various engineering optimization problems. These problems are becoming more and more complex, calling for suitable solution methods. Metaheuristic algorithms (MAs), a type of optimization algorithm inspired by natural phenomena, can be categorized into nine groups [1]. Among them, bionics-based swarm intelligence optimization algorithms have gained popularity among scholars from different fields due to their simple structure and low time complexity. These algorithms reach the optimization goal by simulating the group behavior of various animals, using information exchange and cooperation among individuals, and through simple and effective interaction with experienced individuals. Currently, MAs have been widely applied to solve complex optimization problems in areas such as medicine and computer science [2,3,4,5].
With the emergence of intelligent optimization algorithms, researchers have proposed new algorithms by observing the social behavior of other organisms. For example, Mirjalili [6] proposed the Gray Wolf Optimizer (GWO) in 2014. Seyedali [7] analyzed the behavior of whales rounding up their prey and proposed the Whale Optimization Algorithm (WOA) in 2016. Mirjalili [8] proposed the Salp Swarm Algorithm (SSA) in 2017, inspired by the feeding behavior of leaders guiding followers in the salp swarm. Konstantinos [9] was inspired by the mating attraction behavior of mayflies and, in 2020, proposed the Mayfly Optimization Algorithm (MFA). These algorithms have provided new ideas for solving complex problems. Although these MAs play a significant role in optimization in various fields, the no-free-lunch (NFL) theorem [10] has demonstrated that no single MA can solve all optimization problems. Each MA has its own advantages and limitations and is effective only for certain problems. As a result, many scholars have been motivated to propose novel or improved MAs to solve various practical optimization problems.
The Flamingo Search Algorithm (FSA) is an intelligent optimization algorithm proposed by Wang and others [11] in 2021 based on flamingo migration and foraging behavior. The researchers found that flamingos have different characteristics compared to other organisms: the long neck of flamingos can rotate 360 degrees and forage for food, which helps individual flamingos search for food better. Additionally, individual flamingos that find food will sing to communicate their current location to other individuals in the population, which increases the probability of finding food-rich areas in flamingo populations.
Due to the excellent performance of FSA, many researchers have conducted relevant research using this algorithm. Some researchers have focused on the application of the Flamingo Search Algorithm, such as Mahdi et al. [12], who used the Flamingo Search Algorithm to optimize feature selection and classify COVID-19 patients from clinical texts. Durham et al. [13] proposed a quasi-opposition-based Flamingo Search Algorithm and integrated a generalized cyclic crossover model to achieve more effective feature selection. Abraham et al. [14] proposed an energy-efficient cluster head selection for wireless sensor networks based on the Flamingo Search Algorithm to improve the energy efficiency of wireless sensor networks. Srinivasarao et al. [15] proposed an effective Flamingo Search Algorithm based on a multi-objective cost model to optimize the efficiency of instantiation view selection in data warehouse management. However, the aforementioned studies focus on the application study of the Flamingo Search Algorithm, and not much research has been conducted on the optimization of the algorithm. Other researchers have combined FSA with other intelligent optimization algorithms to improve the algorithm’s ability to jump out of local optima. For example, Fernisha et al. [16] proposed a residual low-light image enhancement network optimized based on the hybrid Particle Swarm Algorithm and Flamingo Search Algorithm to improve image resolution. Arivubrakan et al. [17] proposed a multi-objective hybrid search optimization algorithm for woodpeckers and flamingos to find the optimal cluster head based energy-aware routing protocol in IoT environments. Kumar et al. [18] combined the Flamingo Search Algorithm with fuzzy decision strategies to reduce the probability of the algorithm falling into local optima and used the proposed method to solve antenna optimization problems. Raamesh et al. 
[19] proposed a hybrid Random Shepherd–Flamingo Search Algorithm, which integrates the Random Shepherd Optimization algorithm and the Flamingo Search Algorithm to improve the quality of generated software test cases. Hussain and others [20] combined the Flamingo Search Algorithm with genetic algorithms to reduce the computational complexity of the algorithm and achieve better cloud computing task scheduling. The above methods improve the Flamingo Search Algorithm's ability to jump out of local optima by incorporating the strong local search ability of other swarm intelligence algorithms, but they still suffer from low accuracy and slow convergence speed. In summary, while researchers have successfully applied the FSA to different fields, further optimization and improvement of this algorithm is still necessary for better performance.
In this article, we propose a multi-strategy improved Flamingo Search Algorithm. The algorithm combines cubic chaotic mapping, information feedback model dynamically adjusted according to fitness, Random Opposition Learning and Elite Position Greedy Selection Strategy. The initial population is initialized by a cubic chaotic map strategy to improve the diversity of the initial population. The information feedback model dynamically adjusted according to fitness can promote the information exchange among individuals of the population and enhance the ability of local mining and global exploration of the algorithm. Random Opposition Learning and the Elite Positions Greedy Selection Strategy are introduced to improve the ability of the algorithm to jump out of the local optimum.
The remaining parts of this study are organized as follows.
Section 2 describes the inspiration and mathematical model of the FSA. The detailed design of the three improvement strategies is described in Section 3. Section 4 evaluates the performance of the IFSA using two different sets of numerical experiments and Wilcoxon rank-sum tests. Section 5 provides some concluding observations and several future research directions.

2. Flamingo Search Algorithm

The Flamingo Search Algorithm is divided into the foraging and migration behavior of a flamingo population, which explores the search space through inter-population information exchange and fixed location movement rules, striving to find the optimal solution. Each phase of the Flamingo Search Algorithm is described in detail as follows:

2.1. Foraging Behavior

Flamingo foraging behavior is influenced by two distinct behavioral traits, which are beak scanning and bipedal locomotion. These traits affect the way that flamingos search for food and move around in their environment.

2.1.1. Beak Scanning Behavior

During foraging, flamingos utilize their beaks as a large sieve. With their beaks facing down and swinging in all directions, they collect food and expel excess residue. The distance $s$ of the flamingo's beak scanning behavior during foraging can be expressed using the following formula:
$$s = \left| G_1 \times x_{bj} + \varepsilon_2 \times x_{ij} \right| \tag{1}$$
where $\varepsilon_2$ is −1 or 1, and $G_1$ is a random number drawn from the standard normal distribution $N(0, 1)$. $x_{ij}$ represents the position of the $i$th flamingo in the population's $j$th dimension, and the most abundant food location in the population is denoted by $x_{bj}$. In order to simulate the scanning range $d_s$ of the flamingo's beak scanning behavior, we introduce a normal distribution with the following mathematical model:
$$d_s = G_2 \times s \tag{2}$$
where $G_2$ represents a random number that conforms to a standard normal distribution.

2.1.2. Bipedal Movement Behavior

During foraging, flamingos move bipedally toward the most abundant food source in the population, and the bipedal movement distance $d_f$ can be quantified by the following formula:
$$d_f = \varepsilon_1 \times x_{bj} \tag{3}$$
where $\varepsilon_1$ is −1 or 1, which is mainly used to quantify individual differences in selection.

In summary, the distance traveled by flamingos in each iteration of foraging behavior includes the range of their beak scans as well as their bipedal movement distance. The combined movement of flamingo foraging behavior can be represented as follows:
$$b_{ij}^{t} = \varepsilon_1 \times x_{bj}^{t} + G_2 \times \left| G_1 \times x_{bj}^{t} + \varepsilon_2 \times x_{ij}^{t} \right| \tag{4}$$
The equation for the location of the flamingo foraging behavior after the update is
$$x_{ij}^{t+1} = \left( x_{ij}^{t} + \varepsilon_1 \times x_{bj}^{t} + G_2 \times \left| G_1 \times x_{bj}^{t} + \varepsilon_2 \times x_{ij}^{t} \right| \right) / K \tag{5}$$
where $x_{ij}^{t+1}$ represents the position of the $i$th flamingo in the $j$th dimension of the population during the $(t+1)$th iteration, and $K = K(n)$ is a diffusion factor that follows a chi-square distribution with $n$ degrees of freedom.
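As a rough illustration, the foraging position update of Equation (5) might be sketched in Python as follows; the function name, the use of NumPy, and the choice of n = 4 degrees of freedom for K are our own illustrative assumptions, not details from the paper.

```python
import numpy as np

def foraging_update(x_i, x_b, rng):
    """One foraging step for a single flamingo (Equations (1)-(5)).

    x_i : current position vector of the i-th flamingo
    x_b : position of the best (most food-rich) flamingo
    rng : numpy random Generator

    Helper names and n = 4 degrees of freedom are illustrative.
    """
    d = x_i.shape[0]
    eps1 = rng.choice([-1.0, 1.0], size=d)   # epsilon_1 in {-1, 1}
    eps2 = rng.choice([-1.0, 1.0], size=d)   # epsilon_2 in {-1, 1}
    g1 = rng.standard_normal(d)              # G1 ~ N(0, 1)
    g2 = rng.standard_normal(d)              # G2 ~ N(0, 1)
    k = rng.chisquare(4)                     # diffusion factor K = K(n)
    # Equation (5): bipedal movement plus beak-scan range, scaled by K
    return (x_i + eps1 * x_b + g2 * np.abs(g1 * x_b + eps2 * x_i)) / k

rng = np.random.default_rng(0)
x_new = foraging_update(np.zeros(5), np.ones(5), rng)
```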

2.2. Migratory Behavior

When the food in the foraging area is not sufficient for the survival of the flamingos, the flamingos will search for and migrate to the next location where food is more abundant. Assuming that the food-rich location is $x_{bj}$, the equation for flamingo population migration can be expressed as follows:
$$x_{ij}^{t+1} = x_{ij}^{t} + \omega \times \left( x_{bj}^{t} - x_{ij}^{t} \right) \tag{6}$$
In the above equation, $\omega$ is a random number drawn from the same distribution as $G_1$, used to simulate the random behavior of flamingos during migration.
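A minimal sketch of the migration update of Equation (6), under the same illustrative naming assumptions as before:

```python
import numpy as np

def migration_update(x_i, x_b, rng):
    """Migration step (Equation (6)): move toward the food-rich location x_b.

    omega is drawn from N(0, 1), matching the distribution of G1.
    """
    omega = rng.standard_normal(x_i.shape[0])
    return x_i + omega * (x_b - x_i)

rng = np.random.default_rng(1)
pos = migration_update(np.full(4, 2.0), np.full(4, 5.0), rng)
```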

3. Improving Flamingo Search Algorithm

3.1. Cubic Chaos Initialization Population

In the FSA, the initial position of individuals is generated through random initialization, which can lead to an uneven distribution of the initial population and ultimately results in reduced solution accuracy. However, chaotic sequences based on chaos theory possess characteristics such as randomness and boundedness. Among the various chaotic mappings available, the cubic chaotic mapping can generate a more uniformly distributed and traversed chaotic sequence, which helps to improve the population’s diversity [21,22]. Therefore, this study introduces the cubic chaotic mapping with better traversability to initialize FSA, which enables the flamingo to have a more evenly distributed population in the search space during the initialization stage. After generating the chaotic sequence, the chaotic space is then mapped to the solution space of the optimization problem according to the range of variables to be optimized, and the mapping process is as follows.
The steps to initialize the flamingo population using cubic chaos are:
(1) Obtain the first individual: generate a random $d$-dimensional vector as the first flamingo individual.
$$Y = \{ y_1, y_2, \ldots, y_d \} \tag{7}$$
where $y_i \in [-1, 1]$, $1 \le i \le d$.

(2) Obtain the remaining $N-1$ individuals: the chaotic sequence is generated by iterating over each dimension of $Y$ using the following equation.
$$y_{n+1} = 4 y_n^3 - 3 y_n \tag{8}$$
where $-1 \le y_n \le 1$, $y_n \ne 0$, and $n = 1, 2, \ldots, d$.

(3) After generating the chaotic sequence, map it into the search space according to the range of the variables to be optimized, using the mapping equation
$$x_{id} = L_d + (1 + y_{id}) \times \frac{U_d - L_d}{2} \tag{9}$$
where $x_{id}$ is the $d$th-dimensional position of the $i$th flamingo individual in the search space, $U_d$ and $L_d$ are the upper and lower bounds of the $d$th dimension of the search space, and $y_{id}$ is the $d$th-dimensional coordinate of the $i$th flamingo individual obtained from Equation (8).
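The three initialization steps above can be sketched as follows; the function and variable names are illustrative, and the handling of the $y_n = 0$ seed case is our own choice:

```python
import numpy as np

def cubic_chaos_init(n_pop, dim, lower, upper, rng):
    """Initialize a flamingo population with the cubic chaotic map.

    Implements Equations (7)-(9): a random seed vector in [-1, 1],
    iterated with y_{n+1} = 4*y_n**3 - 3*y_n, then mapped into
    [lower, upper] per dimension.
    """
    y = rng.uniform(-1.0, 1.0, size=dim)
    y[y == 0.0] = 0.1                     # the map requires y_n != 0
    pop = np.empty((n_pop, dim))
    for i in range(n_pop):
        pop[i] = lower + (1.0 + y) * (upper - lower) / 2.0  # Equation (9)
        y = 4.0 * y**3 - 3.0 * y                            # Equation (8)
    return pop

rng = np.random.default_rng(42)
pop = cubic_chaos_init(30, 10, -100.0, 100.0, rng)
```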

3.2. Information Feedback Model

Wang et al. [23] found that in most metaheuristic algorithms, the update process fails to utilize the information available to the individuals in previous iterations. Therefore, an information feedback model was proposed to incorporate useful information from previous iterations into the update process, resulting in a significant improvement in the solution quality.
The information feedback model essentially generates a new individual by combining the information from several previous generations of individuals through weighted summation. There are two operational modes for the information feedback model: random mode and fixed mode [24,25]. To prevent a significant increase in algorithm complexity resulting from the retention of too many generations of population information, the number of previous individuals selected is usually no more than three [23]. In this study, we focus only on the case where the number of predecessors is 1. In this case, the information feedback model is expressed by the following equations.
$$x_i^{t+1} = \alpha x_j^{t} + \beta y_i^{t+1} \tag{10}$$
$$\alpha = \frac{f_i^{t+1}}{f_i^{t+1} + f_j^{t}}, \quad \beta = \frac{f_j^{t}}{f_i^{t+1} + f_j^{t}} \tag{11}$$
Assume the current generation is $t+1$; then $x_j^{t}$ refers to the position of the $j$th flamingo of the previous generation, $y_i^{t+1}$ is the intermediate individual derived using the FSA update, $x_i^{t+1}$ is the position of the $i$th flamingo of the current generation, and $f$ denotes the fitness of the corresponding individual. $x_i^{t+1}$ is derived from the weighted sum of the intermediate and predecessor individuals, and the weight coefficients $\alpha$ and $\beta$ are determined by the fitness values. There are two choices for the value of $j$: the first is $j = i$, a fixed approach; the second is $j = \mathrm{rand}(1, 2, \ldots, N)$, a randomized approach. Different choices of $j$ have different facilitating effects on the information feedback model.
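A minimal sketch of the one-predecessor feedback combination of Equations (10) and (11); the caller is assumed to have already chosen the predecessor index $j$ (fixed or random), and all names are illustrative:

```python
def feedback_combine(x_prev, f_prev, y_new, f_new):
    """Information feedback model with one predecessor (Equations (10)-(11)).

    x_prev, f_prev : position and fitness of the chosen previous-generation
                     individual (j = i in fixed mode, random j otherwise)
    y_new, f_new   : intermediate individual from the plain FSA update
                     and its fitness
    """
    alpha = f_new / (f_new + f_prev)   # weight on the predecessor
    beta = f_prev / (f_new + f_prev)   # weight on the intermediate individual
    return [alpha * xp + beta * yn for xp, yn in zip(x_prev, y_new)]

x_next = feedback_combine([1.0, 1.0], 2.0, [3.0, 3.0], 2.0)
```

With equal fitness values, the result is the simple midpoint of the two positions.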
Different choices of the parameter j will result in different types of information feedback models. Each of the two strategies for selecting j has its own advantages and disadvantages, and one approach is not necessarily superior to the other. The fixed model in the information feedback model can effectively improve the convergence speed of the algorithm when the algorithm is not trapped in a local optimum. However, the information feedback model in fixed mode has the disadvantage of insufficient ability to jump out of the local optimum. The random mode in the information feedback model has good randomness and can increase the learning probability for excellent individuals in the population, thereby enhancing the global search capability of the algorithm. Nevertheless, in contrast to the facilitation effect of the fixed mode, the random mode can result in weakened convergence speed and precision before the algorithm reaches a local optimum.
Based on the analysis above, this study proposes an information feedback model based on adaptive fitness adjustment and applies it to the FSA algorithm. The model dynamically selects the appropriate information feedback mode based on the current fitness value. By combining the strengths of two different models, the performance of the algorithm is jointly improved.
The following formula is designed as a threshold for dynamically adjusting the model based on fitness.
$$\sigma = \left| \frac{f_i^{t+1} - f_i^{t}}{f_{\max}^{t} - f_{\min}^{t}} \right| \tag{12}$$
Assume the current generation is $t+1$, where $f_i^{t+1}$ represents the fitness value of the $i$th flamingo of the current generation, $f_i^{t}$ represents the fitness value of the $i$th flamingo of the previous generation, and $f_{\min}^{t}$ and $f_{\max}^{t}$ represent the minimum and maximum fitness values of the previous generation, respectively.

In the beginning stage of the algorithm, individuals tend to have a dispersed distribution, so the feedback model is set to the fixed mode to enhance the algorithm's early optimization ability. As the algorithm iterates, when the value calculated from Formula (12) falls below a preset critical value of $10^{-4}$ (in Appendix A, this parameter was tuned by comparing the results of 30 independent runs with 150 iterations on three unimodal and three multimodal functions), the algorithm may be trapped in a local optimum. To avoid this, the feedback model is switched to the random mode, in which individuals in the population randomly learn from the previous generation, which is beneficial for discovering potentially optimal locations. Throughout the updating process, the magnitude of this value is crucial for selecting an appropriate mode, and the parameter is typically determined by comparing experimental results on different test functions.
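The mode-switching rule might look like the following sketch; the function name and return values are illustrative, and the 1e-4 default mirrors the threshold discussed above:

```python
def select_feedback_mode(f_curr, f_prev, f_min_prev, f_max_prev, threshold=1e-4):
    """Choose the feedback mode from the fitness-change ratio (Equation (12)).

    Returns "fixed" while fitness is still changing noticeably, and
    "random" once sigma drops below the threshold (a sign that the
    search may be stagnating in a local optimum).
    """
    sigma = abs((f_curr - f_prev) / (f_max_prev - f_min_prev))
    return "fixed" if sigma >= threshold else "random"

mode = select_feedback_mode(f_curr=0.500, f_prev=0.499, f_min_prev=0.0, f_max_prev=1.0)
```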

3.3. Random Opposition-Based Learning Strategy and Elite Positions Greedy Selection

Opposition-Based Learning (OBL) is an optimization strategy that improves the ability of intelligent algorithms to jump out of local optima [26,27,28]. Random Opposition-Based Learning (ROBL) is an improved version of OBL.
It is defined as follows:
$$\hat{x}_j = l_j + u_j - \mathrm{rand} \times x_j, \quad j = 1, 2, \ldots, n \tag{13}$$
where $\hat{x}_j$ denotes the random opposition solution, $l_j$ and $u_j$ are the lower and upper limits of the $j$th dimension of the solution space, and $\mathrm{rand}$ is a random number within (0, 1). Since the movement of flamingo individuals in the FSA is mainly influenced by the optimal individuals, if they cannot escape a local optimum, the quality of the final solution is often poor. The opposition solution calculated by the above equation is more stochastic than the opposition solution derived from the original OBL, which helps the algorithm reduce the probability of falling into a local optimum. Therefore, using the ROBL strategy in the exploitation stage can improve the local search capability of the algorithm.
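A minimal sketch of the ROBL update of Equation (13); names are illustrative:

```python
import random

def random_opposition(x, lower, upper, rng=random):
    """Random Opposition-Based Learning (Equation (13)).

    For each dimension j: x_hat_j = l_j + u_j - rand * x_j,
    with a fresh rand in (0, 1) per dimension.
    """
    return [lo + up - rng.random() * xj for xj, lo, up in zip(x, lower, upper)]

random.seed(0)
x_hat = random_opposition([2.0, -3.0], [-10.0, -10.0], [10.0, 10.0])
```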
Since the movement of most flamingo individuals in the Flamingo Search Algorithm is mainly influenced by elite individuals, greedy selection of elite flamingo individuals can enhance population diversity while accelerating convergence. At each iteration, the fitness value of the elite individual's new position is compared with that of its original position; if the new position's fitness is better, the elite individual's position is replaced by the new position in that generation. Otherwise, the flamingo's position does not change. The specific design is as follows.
$$x_i^{t+1} = \begin{cases} x_i^{new}, & \text{if } f_i^{new} < f_i^{old} \\ x_i^{old}, & \text{otherwise} \end{cases} \tag{14}$$
where $f_i^{new}$ and $f_i^{old}$ denote the fitness values of the elite flamingo individual at its new and original positions, respectively, $x_i^{new}$ denotes the newly generated position of the $i$th flamingo, $x_i^{old}$ represents the original position of the $i$th flamingo, and $x_i^{t+1}$ denotes the position of the $i$th flamingo in the $(t+1)$th iteration.
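The greedy selection of Equation (14) is a one-line comparison; this sketch assumes a minimization problem, with illustrative names:

```python
def elite_greedy_select(x_new, f_new, x_old, f_old):
    """Greedy selection for an elite flamingo (Equation (14)).

    Keep the new position only if its fitness improves (minimization);
    otherwise retain the original position.
    """
    return (x_new, f_new) if f_new < f_old else (x_old, f_old)

pos, fit = elite_greedy_select([1.0], 0.5, [2.0], 0.9)
```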

3.4. Algorithm Implementation Flow

The flow chart of IFSA is shown in Figure 1.
Combining the improved strategies described in Section 3.1, Section 3.2 and Section 3.3, the steps of the proposed IFSA algorithm are as follows.
Step 1: Initialize the population parameters. The proportion of flamingos migrating first during each iteration is MPb (consistent with the FSA [11], with a value of 0.1).
Step 2: Initialize the flamingo population location using cubic chaos mapping.
Step 3: The flamingo population location is updated, and the foraging quantity for the i th iteration is MPr = rand[0, 1] × P × (1 − MPb). Each iteration is divided into two migrations, where the number of the first migration in this iteration is MPo = MPb × P, and the number of the second migration is MPt = P − MPo − MPr.
Step 4: Fitness values are calculated for each flamingo, and the flamingo population is ranked according to fitness. The best and worst individuals execute the migration operation, and the rest perform the foraging operation.
Step 5: Migrating and foraging flamingos perform updates according to Equations (6) and (5), respectively. σ is calculated, the information feedback model mode is selected according to its value, and the greedy selection and Random Opposition-Based Learning strategies are applied to elite individuals.
Step 6: The flamingo individuals that cross the boundary are processed.
Step 7: If the maximum number of iterations is reached, go to Step 8; otherwise, go to Step 3.
Step 8: Output the optimal solution and the optimal value.
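As a rough, much-simplified sketch of the overall flow of Steps 1–8, the following compresses the foraging/migration split into a single migration-style update followed by elite greedy selection; it illustrates the loop structure only and is not the authors' implementation:

```python
import numpy as np

def ifsa_sketch(objective, n_pop, dim, lower, upper, max_iter, seed=0):
    """High-level sketch of the IFSA main loop (Steps 1-8), simplified.

    All names are illustrative; the feedback model and the separate
    foraging/migration counts are omitted for brevity.
    """
    rng = np.random.default_rng(seed)
    # Step 2: cubic chaotic initialization (Equations (7)-(9))
    y = rng.uniform(-1.0, 1.0, size=dim)
    pop = np.empty((n_pop, dim))
    for i in range(n_pop):
        pop[i] = lower + (1.0 + y) * (upper - lower) / 2.0
        y = 4.0 * y**3 - 3.0 * y
    fit = np.array([objective(x) for x in pop])
    for _ in range(max_iter):                       # Steps 3-7
        best = pop[fit.argmin()]
        # migration-style move toward the best individual (Equation (6))
        cand = pop + rng.standard_normal((n_pop, dim)) * (best - pop)
        cand = np.clip(cand, lower, upper)          # Step 6: boundary handling
        cand_fit = np.array([objective(x) for x in cand])
        improved = cand_fit < fit                   # greedy selection (Eq. (14))
        pop[improved] = cand[improved]
        fit[improved] = cand_fit[improved]
    i_best = fit.argmin()
    return pop[i_best], fit[i_best]                 # Step 8

best_x, best_f = ifsa_sketch(lambda x: float(np.sum(x**2)), 30, 5, -10.0, 10.0, 100)
```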

4. Experiments and Analysis

4.1. Test Functions and Evaluation Criteria

The experimental environment of this paper is a Windows 10 64-bit operating system, an Intel Core i5-6300HQ CPU, 8 GB of memory, the Python programming language, and MATLAB R2018a for graphing. To evaluate the performance of the algorithm, three sets of test experiments were conducted. These included the use of 23 general benchmark functions [11] for performance testing, which effectively verified the merit-seeking ability of IFSA. Table 1 shows that the tested functions can be classified into single-peaked functions (F1–F9), multi-peaked functions (F10–F16), and fixed-dimension functions (F17–F23). The algorithm's performance was comprehensively measured using different types of test functions. To ensure fairness, the population size was uniformly set to 50, the termination condition was set as 300 iterations, and each benchmark problem was run independently 30 times; the performance evaluation criteria were the mean, standard deviation, and optimal value of the results of the 30 runs. The mean value reflects the algorithm's ability to find the optimum, the standard deviation reflects the algorithm's robustness, and the optimal value reflects the algorithm's best search accuracy. The optimal results are bolded in the experiment section.
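The evaluation criteria (mean, standard deviation, and best value over independent runs) can be computed as in this small sketch; the function name is illustrative and minimization is assumed:

```python
import statistics

def summarize_runs(results):
    """Evaluation criteria from Section 4.1 over independent runs:
    mean (optimization ability), standard deviation (robustness),
    and best value (best search accuracy).
    """
    return {
        "mean": statistics.mean(results),
        "std": statistics.stdev(results),
        "best": min(results),
    }

stats = summarize_runs([3.0, 1.0, 2.0])
```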

4.2. Experimental Design and Other Algorithm Parameter Settings

To test the performance of IFSA and ensure the fairness and comparability of the experiments, this paper designed two sets of comparative experiments. The first set of experiments tests the effectiveness of the three improvement strategies. The algorithm that introduces cubic chaotic mapping in the population initialization phase is named FSA1, the algorithm that introduces the Random Opposition-Based Learning strategy in the population update phase is named FSA2, the algorithm that introduces the improved fitness-based dynamically adjusted information feedback model in the population update phase is named FSA3, and the algorithm that incorporates all three improvement strategies is named IFSA. In the second set of experiments, seven different optimization algorithms were selected for comparison: PSO [29], GWO [6], WOA [7], TSA [30], ARO [31], AHA [32] and IFSA. These algorithms have demonstrated good optimization performance. Among them, PSO is a widely influential swarm intelligence algorithm. GWO and WOA are chosen because they are classical swarm intelligence algorithms proposed in recent years with strong optimization capabilities that have been widely used in engineering design problems. TSA is chosen because it is a new algorithm proposed in the last two years.
The ARO and AHA algorithms are two new optimization algorithms published in high-level journals in 2022, whose optimization performance can represent the latest progress of current research on swarm intelligence algorithms, and a comparison with them can reflect the optimization capability of IFSA. Table 2 presents the relevant parameters of the other algorithms, while the evaluation metrics are consistent with those described in Section 4.1.

4.3. Results and Analysis of Comparison Experiments between Two Groups

Table 3 presents the test results of five distinct algorithms executed 30 times on test functions F1–F23. As the table shows, for the single-peaked functions F1–F9, except for F6, the IFSA algorithm achieves better search accuracy than the other four algorithms. Among these functions, IFSA found the theoretical optimal value on F1, F3, and F9. Although the search accuracy of IFSA on F6 is not the highest among the five algorithms, it still performs better than the FSA algorithm. Comparing the data above, we can conclude that the improved information feedback model strategy substantially improves the search capability of FSA and outperforms the other two improved strategies in terms of robustness. For the multi-peaked functions, the IFSA algorithm demonstrates superior search accuracy on the test functions F10–F16 compared to the other four algorithms, except for F15. Notably, for F11 and F13, the experimental results of all five algorithms are equally optimal. Comparing the data for F15 and F16, FSA2 outperformed FSA3 in both mean and standard deviation, and the optimal value of FSA2 outperformed the other four algorithms. These results indicate that the Random Opposition-Based Learning and Elite Position Greedy Selection strategies in IFSA are effective at improving the algorithm's ability to escape from local optima in multi-peaked functions. In addition, the search accuracy of IFSA on F17–F23 is better than that of the other four algorithms, except for the F23 function. Among them, for F17, only the optimal values, means and standard deviations of FSA3 and IFSA reached the optimum, indicating that the improved information feedback model improved the search capability of FSA.
For F18, FSA1 demonstrated better standard deviation compared to the other four algorithms, suggesting that incorporating chaotic mapping for the initial population can enhance the robustness of the algorithm. Overall, IFSA outperforms the other algorithms for the fixed-dimensional test function.
To summarize, IFSA demonstrated better convergence accuracy and stability compared to FSA for most of the tested functions. All three improvement strategies showed some degree of enhancement effect on the basic FSA algorithm.
Table 4 shows that for the single-peak test functions F1–F4, the IFSA algorithm performs the best and outperforms the other six algorithms. For F5 and F6, ARO performs the best and clearly outperforms the other algorithms, indicating that IFSA needs further optimization. However, for F7, F8, and F9, the mean and standard deviation of IFSA outperform the other six algorithms, with the standard deviations on F8 and F9 being 0, indicating that IFSA exhibits excellent stability and search accuracy for these functions. Overall, for the single-peak test functions, IFSA shows better search capability than the other algorithms. For the multi-peak test functions F10–F16, the IFSA algorithm achieves the best results on F11, F13 and F16, indicating that it has a significant advantage over the other six algorithms in solving these functions. For F10, the ARO and AHA algorithms outperform IFSA, with the AHA algorithm performing the best and IFSA performing moderately. For F14, the ARO algorithm achieves better results than the other algorithms. For F12 and F15, IFSA is close to the optimal result. Overall, for the multi-peak test functions, IFSA shows good performance in finding the optimal value, but it still deserves further optimization. For the fixed-dimension test functions F18–F22, IFSA achieves the optimal results on all of them. In addition, for F19–F22, the standard deviations of IFSA are all zero, indicating that IFSA has good stability and search accuracy in solving these functions. For F17, WOA performs the best, and IFSA differs the least from WOA in terms of standard deviation and mean value. For F23, the AHA algorithm performs the best, and IFSA is second only to the optimal result. For the test functions with fixed dimensions, IFSA exhibits better overall performance in finding the best solutions. In summary, IFSA performs well overall among the seven compared algorithms, with good stability and optimization ability.

4.4. Comparative Analysis of Convergence Curves

Figure 2 presents the convergence curves of IFSA and the other algorithms in solving the 23 benchmark functions. As can be seen in Figure 2, the IFSA algorithm converges faster than all six other algorithms on the single-peak test functions. For F1–F4 and F7–F9, IFSA has the highest search accuracy. For F5 and F6, the ARO algorithm performed the best and the IFSA algorithm performed moderately, indicating the need for further optimization. Among the multi-peak test functions, the IFSA algorithm performs moderately on F10, F12, and F14, best on F11, F13 and F16, and close to the optimal result on F15. Among the fixed-dimension test functions, except for F17 and F23, the IFSA algorithm outperforms the other six algorithms in terms of search accuracy and convergence speed. Its performance on F17 is second only to the optimal result, and its search accuracy on F23 is similar to the optimal result.
In summary, the IFSA algorithm shows better convergence accuracy and faster search efficiency in most of the tested functions, allowing the optimization process to find the convergence direction quickly.

4.5. Wilcoxon Rank-Sum Test

Due to the high level of randomness in optimization algorithms, any advantage shown in the data may be a small-probability event. Therefore, this section introduces the Wilcoxon rank-sum test [33], a non-parametric statistical test, to assess whether the performance differences between algorithms are statistically significant. The test is applied at the 5% significance level to the results of 30 independent runs of five algorithms: PSO, GWO, WOA, TSA and FSA. Here, NaN indicates that the compared algorithms have similar optimization results, so significance cannot be judged. The symbols "+", "−" and "=" indicate better than, worse than and equivalent to the compared algorithm, respectively. The results are shown in Table 5. From this, it can be concluded that most of the results of IFSA are "+" compared to the other algorithms, which indicates that IFSA has a significant advantage in optimization performance.
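For illustration, a self-contained normal-approximation version of the rank-sum test might look as follows; in practice a library routine (e.g., scipy.stats.ranksums) would normally be used, and all names here are our own:

```python
from math import sqrt
from statistics import NormalDist

def rank_sum_test(a, b):
    """Minimal two-sided Wilcoxon rank-sum test (normal approximation).

    Returns the p-value; ties receive averaged ranks. A small p (e.g.,
    below 0.05) indicates a significant difference between the samples.
    """
    pooled = list(a) + list(b)
    order = sorted(range(len(pooled)), key=lambda i: pooled[i])
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(order):                 # assign 1-based average ranks
        j = i
        while j < len(order) and pooled[order[j]] == pooled[order[i]]:
            j += 1
        avg = (i + j + 1) / 2.0
        for k in range(i, j):
            ranks[order[k]] = avg
        i = j
    n1, n2 = len(a), len(b)
    w = sum(ranks[:n1])                   # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2.0
    sd = sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sd
    return 2.0 * (1.0 - NormalDist().cdf(abs(z)))

p = rank_sum_test(list(range(1, 11)), list(range(11, 21)))
```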

5. Conclusions and Future Work

This paper proposes a multi-strategy improved Flamingo Search Algorithm (IFSA) to address the issues of local optima and low solution accuracy. The proposed approach combines three strategies: Cubic Chaotic Mapping; an information feedback model dynamically adjusted according to the current fitness value; and Random Opposition Learning with Elite Position Greedy Selection. Cubic Chaotic Mapping is utilized to improve the diversity of the initial population and the quality of the initial solutions. The fitness-adjusted information feedback model effectively enhances both the global search and local exploration abilities of the algorithm. Additionally, the Random Opposition Learning and Elite Position Greedy Selection strategies enable the algorithm to escape from local optima while retaining advantageous individuals, accelerating convergence. The improved algorithm was assessed on 23 test functions. The first comparative experiment demonstrated that each strategy improved FSA significantly; when all three strategies were applied simultaneously, the convergence speed, accuracy, and stability of the algorithm increased notably for most test functions. The second comparative experiment indicated that IFSA outperformed the PSO, GWO, WOA, TSA, ARO, and AHA algorithms on the majority of the tested functions, and the Wilcoxon rank-sum test confirmed that IFSA differed significantly from the other algorithms. Although IFSA exhibited remarkable convergence speed, its ability to break away from local optima remained weak for some functions, calling for further optimization.
In the next stage of our research, we aim to further enhance the algorithm’s ability to overcome local optima by implementing additional strategies. Moreover, we plan to apply IFSA to clustering analysis to evaluate its performance in practical applications.
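To make two of the strategies above concrete, the sketch below illustrates one plausible form of cubic chaotic initialization and of Random Opposition Learning combined with greedy selection. The map form (y ← 4y³ − 3y), the parameter choices, and all names are our own illustrative assumptions, not the paper's exact implementation.

```python
import random

LB, UB, DIM = -100.0, 100.0, 20    # illustrative bounds and dimension

def cubic_chaos_init(pop_size, dim, lb, ub, seed=1):
    """Initial population from a cubic chaotic map: one common form,
    y <- 4y^3 - 3y on (-1, 1), scaled to [lb, ub]."""
    rng = random.Random(seed)
    pop = []
    for _ in range(pop_size):
        y = rng.uniform(-1.0, 1.0) or 0.3      # avoid the fixed point y = 0
        individual = []
        for _ in range(dim):
            y = 4.0 * y ** 3 - 3.0 * y         # chaotic iteration, stays in [-1, 1]
            individual.append(lb + (y + 1.0) / 2.0 * (ub - lb))
        pop.append(individual)
    return pop

def random_opposition(ind, lb, ub, rng):
    """Random opposition-based learning: x' = lb + ub - r * x with r ~ U(0, 1)."""
    return [lb + ub - rng.random() * x for x in ind]

def greedy_keep(ind, fitness, lb, ub, rng):
    """Greedy selection: keep the better of an individual and its random opposite."""
    opp = random_opposition(ind, lb, ub, rng)
    return ind if fitness(ind) <= fitness(opp) else opp

rng = random.Random(7)
sphere = lambda x: sum(v * v for v in x)       # stand-in fitness function
pop = cubic_chaos_init(30, DIM, LB, UB)
pop = [greedy_keep(ind, sphere, LB, UB, rng) for ind in pop]
print(all(LB <= v <= UB for ind in pop for v in ind))  # True
```

A full IFSA would interleave such steps with the FSA position-update equations; the snippet only shows how the retained individual is never worse than the original.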

Author Contributions

Conceptualization, S.J.; Data Management, Y.Z.; Formal Analysis, J.S.; Fund acquisition, S.J.; Investigation, J.G.; Methodology, S.J.; Project Management, S.J.; Resources, J.G.; Software, J.S.; Supervision, S.J.; Verification, J.S.; Visualization, Y.Z.; Writing—Original manuscript, J.S.; Writing—Review and Editing, Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the Tianjin Postgraduate Research Innovation Project “Alleviating the Information Cocoon Phenomenon: Research on Diversity-based News Recommendation Method” (2021YJSS274) and the Tianjin Research Innovation Project for Postgraduate Students (2022SKYZ314).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Test results of parameter σ.

Function | 1 × 10−2 | 1 × 10−3 | 1 × 10−4
F1 | 1.46 × 10−250 | 1.83 × 10−255 | 5.27 × 10−257
F2 | 1.45 × 10−128 | 6.12 × 10−123 | 7.03 × 10−130
F3 | 4.62 × 10−241 | 6.36 × 10−239 | 2.01 × 10−242
F12 | 5.02 | 5.08 | 4.94
F14 | 1.44 | 1.32 | 1.18
F15 | 4.29 × 10−02 | 1.36 × 10−02 | 1.02 × 10−02

References

  1. Akyol, S.; Alatas, B. Plant intelligence based metaheuristic optimization algorithms. Artif. Intell. Rev. 2017, 47, 417–462. [Google Scholar] [CrossRef]
  2. Jiang, S.; Ding, J.; Zhang, L. A Personalized Recommendation Algorithm Based on Weighted Information Entropy and Particle Swarm Optimization. Mob. Inf. Syst. 2021, 2021, 3209140. [Google Scholar] [CrossRef]
  3. Jiang, S.; Zhao, H.; Li, Z. A recommendation algorithm based on modified similarity and text content to optimise aggregate diversity. Int. J. Ad Hoc Ubiquitous Comput. 2021, 38, 151–157. [Google Scholar] [CrossRef]
  4. Forestiero, A. Bio-inspired algorithm for outliers detection. Multimedia Tools Appl. 2017, 76, 25659–25677. [Google Scholar] [CrossRef]
  5. Forestiero, A.; Mastroianni, C.; Papuzzo, G.; Spezzano, G. A Proximity-Based Self-Organizing Framework for Service Composition and Discovery. In Proceedings of the 2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing, Washington, DC, USA, 17–20 May 2010; pp. 428–437. [Google Scholar]
  6. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  7. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  8. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  9. Zervoudakis, K.; Tsafarakis, S. A mayfly optimization algorithm. Comput. Ind. Eng. 2020, 145, 106559. [Google Scholar] [CrossRef]
  10. Moniz, N.; Monteiro, H. No Free Lunch in imbalanced learning. Knowl. Based Syst. 2021, 227, 107222. [Google Scholar] [CrossRef]
  11. Zhiheng, W.; Jianhua, L. Flamingo Search Algorithm: A New Swarm Intelligence Optimization Algorithm. IEEE Access 2021, 9, 88564–88582. [Google Scholar] [CrossRef]
  12. Mahdi, A.Y.; Yuhaniz, S.S. Optimal feature selection using novel flamingo search algorithm for classification of COVID-19 patients from clinical text. Math. Biosci. Eng. 2023, 20, 5268–5297. [Google Scholar] [CrossRef]
  13. Durgam, R.; Devarakonda, N. A Quasi-Oppositional Based Flamingo Search Algorithm Integrated with Generalized Ring Crossover for Effective Feature Selection. IETE J. Res. 2023, 1–17. [Google Scholar] [CrossRef]
  14. Abraham, R.; Vadivel, M. An Energy Efficient Wireless Sensor Network with Flamingo Search Algorithm Based Cluster Head Selection. Wirel. Pers. Commun. 2023, 1–23. [Google Scholar] [CrossRef]
  15. Srinivasarao, P.; Satish, A.R. Multi-objective materialized view selection using flamingo search optimization algorithm. Softw. Pract. Exp. 2023, 53, 988–1012. [Google Scholar] [CrossRef]
  16. Fernisha, S.R.; Christopher, C.S.; Lyernisha, S.R. Slender Swarm Flamingo optimization-based residual low-light image enhancement network. Imaging Sci. J. 2023, 69, 391–406. [Google Scholar] [CrossRef]
  17. Arivubrakan, P.; Ramasubramanian, K. Multi-Objective Cluster Head based Energy Aware Routing Protocol using Hybrid Woodpecker and Flamingo Search Optimization Algorithm for Internet of Things Environment. Int. J. Inf. Technol. Decis. Mak. 2023, 1–20. [Google Scholar] [CrossRef]
  18. Kumar, T.R.; Madhavan, M. Design and Optimization of Wearable Microstrip Patch Antenna using Hybrid Fuzzy Flamingo Swarm Optimization Algorithm for RF Energy Harvesting. Iran. J. Sci. Technol. Trans. Electr. Eng. 2022, 1–20. [Google Scholar] [CrossRef]
  19. Raamesh, L.; Radhika, S.; Jothi, S. Generating Optimal Test Case Generation Using Shuffled Shepherd Flamingo Search Model. Neural Process. Lett. 2022, 54, 5393–5413. [Google Scholar] [CrossRef]
  20. Hussain, S.M.; Begh, G.R. Hybrid heuristic algorithm for cost-efficient QoS aware task scheduling in fog–cloud environment. J. Comput. Sci. 2022, 64, 101828. [Google Scholar] [CrossRef]
  21. Bingol, H.; Alatas, B. Chaos based optics inspired optimization algorithms as global solution search approach. Chaos Solitons Fractals 2020, 141, 110434. [Google Scholar] [CrossRef]
  22. Yang, X.; Liu, J.; Liu, Y.; Xu, P.; Yu, L.; Zhu, L.; Chen, H.; Deng, W. A Novel Adaptive Sparrow Search Algorithm Based on Chaotic Mapping and T-Distribution Mutation. Appl. Sci. 2021, 11, 11192. [Google Scholar] [CrossRef]
  23. Wang, G.-G.; Tan, Y. Improving Metaheuristic Algorithms with Information Feedback Models. IEEE Trans. Cybern. 2017, 49, 542–555. [Google Scholar] [CrossRef] [PubMed]
  24. Zhang, Y.; Wang, G.-G.; Li, K.; Yeh, W.-C.; Jian, M.; Dong, J. Enhancing MOEA/D with information feedback models for large-scale many-objective optimization. Inf. Sci. 2020, 522, 1–16. [Google Scholar] [CrossRef]
  25. Gu, Z.-M.; Wang, G.-G. Improving NSGA-III algorithms with information feedback models for large-scale many-objective optimization. Futur. Gener. Comput. Syst. 2020, 107, 49–69. [Google Scholar] [CrossRef]
  26. Yuan, Y.; Mu, X.; Shao, X.; Ren, J.; Zhao, Y.; Wang, Z. Optimization of an auto drum fashioned brake using the elite opposition-based learning and chaotic k-best gravitational search strategy based grey wolf optimizer algorithm. Appl. Soft Comput. 2022, 123, 108947. [Google Scholar] [CrossRef]
  27. Zhong, C.; Li, G.; Meng, Z.; He, W. Opposition-based learning equilibrium optimizer with Levy flight and evolutionary population dynamics for high-dimensional global optimization problems. Expert Syst. Appl. 2023, 215, 119303. [Google Scholar] [CrossRef]
  28. Si, T.; Miranda, P.B.; Bhattacharya, D. Novel enhanced Salp Swarm Algorithms using opposition-based learning schemes for global optimization problems. Expert Syst. Appl. 2022, 207, 117961. [Google Scholar] [CrossRef]
  29. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November 1995; pp. 1942–1948. [Google Scholar]
  30. Kaur, S.; Awasthi, L.K.; Sangal, A.; Dhiman, G. Tunicate Swarm Algorithm: A new bio-inspired based metaheuristic paradigm for global optimization. Eng. Appl. Artif. Intell. 2020, 90, 103541. [Google Scholar] [CrossRef]
  31. Wang, L.; Cao, Q.; Zhang, Z.; Mirjalili, S.; Zhao, W. Artificial rabbits optimization: A new bio-inspired meta-heuristic algorithm for solving engineering optimization problems. Eng. Appl. Artif. Intell. 2022, 114, 105082. [Google Scholar] [CrossRef]
  32. Zhao, W.; Wang, L.; Mirjalili, S. Artificial hummingbird algorithm: A new bio-inspired optimizer with its engineering applications. Comput. Methods Appl. Mech. Eng. 2022, 388, 114194. [Google Scholar] [CrossRef]
  33. Saplıoğlu, K.; Güçlü, Y.S. Combination of Wilcoxon test and scatter diagram for trend analysis of hydrological data. J. Hydrol. 2022, 612, 128132. [Google Scholar] [CrossRef]
Figure 1. Flow chart of IFSA.
Figure 2. Comparison of convergence curves.
Table 1. Benchmark functions.

Function formula | Region of search | Best | Dimensionality
$F_1(x)=\sum_{i=1}^{n} x_i^2$ | [−100, 100] | 0 | 20
$F_2(x)=\sum_{i=1}^{n}|x_i|+\prod_{i=1}^{n}|x_i|$ | [−10, 10] | 0 | 20
$F_3(x)=\sum_{i=1}^{n}\left(\sum_{j=1}^{i}x_j\right)^2$ | [−100, 100] | 0 | 20
$F_4(x)=\max_i\{|x_i|,\ 1\le i\le n\}$ | [−100, 100] | 0 | 20
$F_5(x)=\sum_{i=1}^{n-1}\left[100(x_{i+1}-x_i^2)^2+(x_i-1)^2\right]$ | [−2.048, 2.048] | 0 | 20
$F_6(x)=\sum_{i=1}^{n}(\lfloor x_i+0.5\rfloor)^2$ | [−100, 100] | 0 | 20
$F_7(x)=\sum_{i=1}^{n} i\,x_i^4+\mathrm{random}[0,1)$ | [−1.28, 1.28] | 0 | 20
$F_8(x)=\sum_{i=1}^{n}|x_i|^{\,i+1}$ | [−1, 1] | 0 | 20
$F_9(x)=\sum_{i=1}^{n} i\,x_i^2$ | [−10, 10] | 0 | 20
$F_{10}(x)=-\sum_{i=1}^{n}x_i\sin\left(\sqrt{|x_i|}\right)$ | [−500, 500] | −418.9829 × n | 20
$F_{11}(x)=\sum_{i=1}^{n}\left[x_i^2-10\cos(2\pi x_i)+10\right]$ | [−5.12, 5.12] | 0 | 20
$F_{12}(x)=-20\exp\left(-0.2\sqrt{\tfrac{1}{n}\sum_{i=1}^{n}x_i^2}\right)-\exp\left(\tfrac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right)+20+e$ | [−32, 32] | 0 | 20
$F_{13}(x)=\tfrac{1}{4000}\sum_{i=1}^{n}x_i^2-\prod_{i=1}^{n}\cos\left(\tfrac{x_i}{\sqrt{i}}\right)+1$ | [−600, 600] | 0 | 20
$F_{14}(x)=\sum_{i=1}^{d-1}(\omega_i-1)^2\left[1+10\sin^2(\pi\omega_i+1)\right]+\sin^2(\pi\omega_1)+(\omega_d-1)^2\left[1+\sin^2(2\pi\omega_d)\right],\ \omega_i=1+\tfrac{x_i-1}{4}$ | [−10, 10] | 0 | 20
$F_{15}(x)=418.9829\,d-\sum_{i=1}^{n}x_i\sin\left(\sqrt{|x_i|}\right)$ | [−500, 500] | 0 | 2
$F_{16}(x)=0.5+\left[\sin^2\left(\sqrt{\sum_{i=1}^{D}x_i^2}\right)-0.5\right]\Big/\left[1+0.001\left(\sum_{i=1}^{D}x_i^2\right)\right]^2$ | [−10, 10] | 0 | 20
$F_{17}(x)=100\sqrt{|x_2-0.01x_1^2|}+0.01|x_1+10|$ | [−10, 1] | 0 | 2
$F_{18}(x)=-0.0001\left(\left|\sin(x_1)\sin(x_2)\exp\left(\left|100-\sqrt{x_1^2+x_2^2}/\pi\right|\right)\right|+1\right)^{0.1}$ | [−10, 10] | −2.06261 | 2
$F_{19}(x)=-\left(1+\cos\left(12\sqrt{x_1^2+x_2^2}\right)\right)\Big/\left(0.5(x_1^2+x_2^2)+2\right)$ | [−5.12, 5.12] | −1 | 2
$F_{20}(x)=0.5+\left[\sin^2(x_1^2-x_2^2)-0.5\right]\Big/\left[1+0.001(x_1^2+x_2^2)\right]^2$ | [−100, 100] | 0 | 2
$F_{21}(x)=0.26(x_1^2+x_2^2)-0.48x_1x_2$ | [−10, 10] | 0 | 2
$F_{22}(x)=2x_1^2-1.05x_1^4+x_1^6/6+x_1x_2+x_2^2$ | [−5, 5] | 0 | 2
$F_{23}(x)=100(x_1^2-x_2)^2+(x_1-1)^2+(x_3-1)^2+90(x_3^2-x_4)^2+10.1\left((x_2-1)^2+(x_4-1)^2\right)+19.8(x_2-1)(x_4-1)$ | [−10, 10] | 0 | 4
Table 2. Parameter settings for the comparison algorithms.

Algorithm | Parameter | Value
PSO | w | 0.9
PSO | c1 | 2
PSO | c2 | 2
WOA | a1 | [2, 0]
WOA | a2 | [−1, −2]
GWO | a | [2, 0]
TSA | Pmin | 1
TSA | Pmax | 4
FSA | MPb | 0.1
FSA | G | G0, 1.2, K(8)
ARO | – | –
AHA | M | 2n
Table 3. Comparison results of the FSA with different strategies.

Function | ATR | FSA | FSA1 | FSA2 | FSA3 | IFSA
F1 | Best | 1.98 × 10−261 | 1.45 × 10−262 | 0 | 0 | 0
F1 | Std | 0 | 0 | 0 | 0 | 0
F1 | Mean | 1.40 × 10−238 | 8.60 × 10−238 | 1.75 × 10−291 | 0 | 0
F2 | Best | 3.44 × 10−131 | 2.73 × 10−132 | 6.97 × 10−161 | 1.17 × 10−201 | 3.47 × 10−225
F2 | Std | 1.12 × 10−120 | 7.25 × 10−122 | 1.82 × 10−146 | 0 | 0
F2 | Mean | 3.07 × 10−121 | 2.71 × 10−122 | 4.89 × 10−147 | 2.04 × 10−168 | 1.49 × 10−216
F3 | Best | 2.92 × 10−244 | 2.47 × 10−242 | 0 | 0 | 0
F3 | Std | 0 | 0 | 0 | 0 | 0
F3 | Mean | 1.96 × 10−218 | 1.61 × 10−218 | 6.80 × 10−284 | 0 | 0
F4 | Best | 2.92 × 10−131 | 2.57 × 10−130 | 2.55 × 10−158 | 8.73 × 10−197 | 1.45 × 10−228
F4 | Std | 3.77 × 10−112 | 2.22 × 10−116 | 2.38 × 10−143 | 0 | 0
F4 | Mean | 7.00 × 10−113 | 4.20 × 10−117 | 5.39 × 10−144 | 3.73 × 10−192 | 1.96 × 10−215
F5 | Best | 1.74 × 10 | 1.74 × 10 | 1.73 × 10 | 1.73 × 10 | 1.69 × 10
F5 | Std | 4.39 × 10−1 | 4.86 × 10−1 | 4.17 × 10−1 | 4.63 × 10−1 | 4.61 × 10−1
F5 | Mean | 1.81 × 10 | 1.81 × 10 | 1.81 × 10 | 1.81 × 10 | 1.80 × 10
F6 | Best | 2.04 | 1.72 | 1.98 | 2.11 | 1.86
F6 | Std | 2.41 × 10−1 | 3.26 × 10−1 | 2.44 × 10−1 | 2.80 × 10−1 | 3.22 × 10−1
F6 | Mean | 2.56 | 2.49 | 2.46 | 2.77 | 2.70
F7 | Best | 4.24 × 10−6 | 9.63 × 10−7 | 8.43 × 10−6 | 1.36 × 10−6 | 6.40 × 10−7
F7 | Std | 7.04 × 10−5 | 5.93 × 10−5 | 6.06 × 10−5 | 7.60 × 10−5 | 8.49 × 10−5
F7 | Mean | 7.47 × 10−5 | 7.07 × 10−5 | 7.56 × 10−5 | 7.50 × 10−5 | 6.43 × 10−5
F8 | Best | 2.78 × 10−183 | 1.51 × 10−185 | 1.82 × 10−195 | 0 | 8.38 × 10−302
F8 | Std | 0 | 0 | 0 | 0 | 0
F8 | Mean | 1.57 × 10−173 | 2.02 × 10−171 | 1.28 × 10−180 | 7.80 × 10−263 | 4.63 × 10−283
F9 | Best | 1.53 × 10−265 | 7.02 × 10−258 | 0 | 0 | 0
F9 | Std | 0 | 0 | 0 | 0 | 0
F9 | Mean | 2.85 × 10−238 | 4.51 × 10−237 | 2.53 × 10−287 | 0 | 0
F10 | Best | −5.48 × 103 | −5.87 × 103 | −6.47 × 103 | −6.14 × 103 | −6.43 × 103
F10 | Std | 2.99 × 102 | 3.42 × 102 | 5.23 × 102 | 5.81 × 102 | 5.83 × 102
F10 | Mean | −4.87 × 103 | −4.98 × 103 | −5.36 × 103 | −5.06 × 103 | −5.67 × 103
F11 | Best | 0 | 0 | 0 | 0 | 0
F11 | Std | 0 | 0 | 0 | 0 | 0
F11 | Mean | 0 | 0 | 0 | 0 | 0
F12 | Best | 4.73 | 4.76 | 4.72 | 4.76 | 4.75
F12 | Std | 4.77 × 10−2 | 4.35 × 10−2 | 4.09 × 10−2 | 4.15 × 10−2 | 3.61 × 10−2
F12 | Mean | 4.86 | 4.87 | 4.84 | 4.88 | 4.84
F13 | Best | 0 | 0 | 0 | 0 | 0
F13 | Std | 0 | 0 | 0 | 0 | 0
F13 | Mean | 0 | 0 | 0 | 0 | 0
F14 | Best | 1.04 | 1.08 | 9.72 × 10−1 | 1.03 | 1.01
F14 | Std | 8.47 × 10−2 | 7.49 × 10−2 | 9.90 × 10−2 | 9.04 × 10−2 | 9.13 × 10−2
F14 | Mean | 1.22 | 1.23 | 1.23 | 1.22 | 1.17
F15 | Best | 2.55 × 10−5 | 2.55 × 10−5 | 2.55 × 10−5 | 2.55 × 10−5 | 2.55 × 10−5
F15 | Std | 1.61 × 10−6 | 6.21 × 10−5 | 7.74 × 10−7 | 3.73 × 10−5 | 4.50 × 10−5
F15 | Mean | 2.60 × 10−5 | 3.77 × 10−5 | 2.57 × 10−5 | 3.27 × 10−5 | 2.71 × 10−5
F16 | Best | 0 | 0 | 0 | 0 | 0
F16 | Std | 8.32 × 10−4 | 8.33 × 10−4 | 0 | 1.08 × 10−3 | 0
F16 | Mean | 2.12 × 10−3 | 2.12 × 10−3 | 0 | 1.80 × 10−3 | 0
F17 | Best | 0 | 0 | 0 | 0 | 0
F17 | Std | 3.00 × 10−2 | 3.00 × 10−2 | 3.00 × 10−2 | 2.49 × 10−2 | 2.49 × 10−2
F17 | Mean | 1.00 × 10−2 | 1.00 × 10−2 | 1.00 × 10−2 | 6.67 × 10−3 | 6.67 × 10−3
F18 | Best | −2.06 | −2.06 | −2.06 | −2.06 | −2.06
F18 | Std | 7.11 × 10−11 | 4.34 × 10−12 | 1.89 × 10−10 | 4.29 × 10−9 | 1.14 × 10−9
F18 | Mean | −2.06 | −2.06 | −2.06 | −2.06 | −2.06
F19 | Best | −1 | −1 | −1 | −1 | −1
F19 | Std | 0 | 0 | 0 | 0 | 0
F19 | Mean | −1 | −1 | −1 | −1 | −1
F20 | Best | 0 | 0 | 0 | 0 | 0
F20 | Std | 0 | 0 | 0 | 0 | 0
F20 | Mean | 0 | 0 | 0 | 0 | 0
F21 | Best | 0 | 0 | 0 | 0 | 0
F21 | Std | 0 | 0 | 0 | 0 | 0
F21 | Mean | 5.30 × 10−304 | 1.05 × 10−302 | 0 | 0 | 0
F22 | Best | 0 | 0 | 0 | 0 | 0
F22 | Std | 0 | 0 | 0 | 0 | 0
F22 | Mean | 0 | 1.51 × 10−305 | 0 | 0 | 0
F23 | Best | 6.05 × 10−2 | 2.24 × 10−1 | 1.97 × 10−1 | 1.88 × 10−1 | 2.24 × 10−1
F23 | Std | 7.73 × 10−1 | 1.27 | 1.40 | 1.17 | 9.20 × 10−1
F23 | Mean | 6.70 × 10−1 | 9.41 × 10−1 | 1.08 | 1.06 | 7.60 × 10−1
Table 4. The comparison results of the seven algorithms.

Function | ATR | PSO | GWO | WOA | TSA | ARO | AHA | IFSA
F1 | Best | 1.75 × 10−1 | 1.71 × 10−29 | 3.89 × 10−20 | 3.03 × 10−71 | 7.46 × 10−45 | 2.93 × 10−107 | 0
F1 | Std | 3.18 × 10−1 | 3.20 × 10−27 | 4.21 × 10−17 | 2.73 × 10−70 | 8.70 × 10−32 | 5.95 × 10−80 | 0
F1 | Mean | 6.95 × 10−1 | 1.82 × 10−27 | 1.76 × 10−17 | 2.58 × 10−70 | 2.62 × 10−32 | 1.11 × 10−80 | 0
F2 | Best | 1.90 | 2.27 × 10−17 | 1.84 × 10−13 | 2.34 × 10−37 | 5.32 × 10−23 | 8.67 × 10−51 | 3.47 × 10−225
F2 | Std | 6.79 × 10−1 | 1.09 × 10−16 | 3.58 × 10−12 | 3.14 × 10−37 | 5.05 × 10−19 | 6.71 × 10−42 | 0
F2 | Mean | 3.36 | 1.48 × 10−16 | 3.20 × 10−12 | 7.29 × 10−37 | 2.24 × 10−19 | 1.49 × 10−42 | 1.49 × 10−216
F3 | Best | 3.21 × 102 | 1.48 × 102 | 3.65 | 4.32 × 10−49 | 2.49 × 10−33 | 6.94 × 10−84 | 0
F3 | Std | 4.12 × 102 | 8.98 × 102 | 2.37 × 103 | 1.10 × 10−34 | 6.95 × 10−9 | 3.33 × 10−62 | 0
F3 | Mean | 8.94 × 102 | 1.01 × 103 | 6.87 × 102 | 2.06 × 10−35 | 1.29 × 10−9 | 6.20 × 10−63 | 0
F4 | Best | 5.99 × 10−1 | 1.47 × 10−8 | 1.86 × 10−5 | 8.67 × 10−33 | 8.88 × 10−18 | 1.81 × 10−44 | 1.45 × 10−228
F4 | Std | 4.19 × 10−1 | 1.23 × 10−7 | 1.25 × 10−3 | 1.39 × 10−32 | 1.22 × 10−13 | 1.08 × 10−38 | 0
F4 | Mean | 1.09 | 1.46 × 10−7 | 8.65 × 10−4 | 2.78 × 10−32 | 4.36 × 10−14 | 3.25 × 10−39 | 1.96 × 10−215
F5 | Best | 4.12 × 10 | 1.61 × 10 | 1.57 × 10 | 1.80 × 10 | 3.65 × 10−3 | 1.60 × 10 | 1.69 × 10
F5 | Std | 2.47 × 10 | 5.18 × 10−1 | 7.60 × 10−1 | 3.77 × 10−1 | 4.22 | 5.69 × 10−1 | 4.61 × 10−1
F5 | Mean | 8.00 × 10 | 1.68 × 10 | 1.71 × 10 | 1.87 × 10 | 1.26 | 1.68 × 10 | 1.80 × 10
F6 | Best | 3.08 × 10−1 | 7.66 × 10−6 | 5.51 × 10−3 | 3.05 | 2.02 × 10−4 | 7.93 × 10−5 | 1.86
F6 | Std | 2.87 × 10−1 | 1.25 × 10−1 | 3.00 × 10−1 | 3.05 × 10−1 | 3.82 × 10−4 | 3.64 × 10−3 | 3.22 × 10−1
F6 | Mean | 7.72 × 10−1 | 5.90 × 10−2 | 4.73 × 10−1 | 3.86 | 6.41 × 10−4 | 2.57 × 10−3 | 2.70
F7 | Best | 1.16 × 10−1 | 5.81 × 10−4 | 2.57 × 10−5 | 1.48 × 10−6 | 5.69 × 10−5 | 2.91 × 10−5 | 6.40 × 10−7
F7 | Std | 1.45 × 10−1 | 7.72 × 10−4 | 5.21 × 10−4 | 7.41 × 10−5 | 7.52 × 10−4 | 2.99 × 10−4 | 8.49 × 10−5
F7 | Mean | 3.17 × 10−1 | 1.44 × 10−3 | 5.98 × 10−4 | 9.81 × 10−5 | 9.16 × 10−4 | 3.29 × 10−4 | 6.43 × 10−5
F8 | Best | 1.08 × 10−3 | 4.87 × 10−67 | 7.09 × 10−52 | 1.38 × 10−95 | 5.41 × 10−48 | 1.34 × 10−75 | 8.38 × 10−302
F8 | Std | 9.75 × 10−3 | 3.71 × 10−60 | 6.27 × 10−40 | 2.28 × 10−68 | 2.85 × 10−42 | 4.33 × 10−66 | 0
F8 | Mean | 1.37 × 10−2 | 9.90 × 10−61 | 2.43 × 10−40 | 4.36 × 10−69 | 8.91 × 10−43 | 8.15 × 10−67 | 4.63 × 10−283
F9 | Best | 2.32 | 1.24 × 10−31 | 1.93 × 10−22 | 9.27 × 10−73 | 7.43 × 10−44 | 2.57 × 10−99 | 0
F9 | Std | 2.49 | 2.04 × 10−29 | 5.00 × 10−19 | 1.31 × 10−71 | 1.33 × 10−34 | 1.46 × 10−83 | 0
F9 | Mean | 6.44 | 1.06 × 10−29 | 2.82 × 10−19 | 8.69 × 10−72 | 3.41 × 10−35 | 2.90 × 10−84 | 0
F10 | Best | −5.20 × 103 | −6.08 × 103 | −5.34 × 103 | −2.99 × 103 | −8.02 × 103 | −8.32 × 103 | −6.43 × 103
F10 | Std | 7.27 × 102 | 7.16 × 102 | 2.32 × 102 | 3.89 × 102 | 4.14 × 102 | 1.94 × 102 | 5.83 × 102
F10 | Mean | −3.89 × 103 | −4.59 × 103 | −4.80 × 103 | −2.08 × 103 | −7.19 × 103 | −8.00 × 103 | −5.67 × 103
F11 | Best | 4.99 × 10 | 5.68 × 10−14 | 0 | 0 | 0 | 0 | 0
F11 | Std | 2.04 × 10 | 3.86 | 3.44 × 10−11 | 0 | 0 | 0 | 0
F11 | Mean | 9.09 × 10 | 3.78 | 8.45 × 10−12 | 0 | 0 | 0 | 0
F12 | Best | 4.66 | 4.61 | 4.65 | 4.72 | 4.61 | 4.61 | 4.75
F12 | Std | 6.52 × 10−2 | 1.48 × 10−2 | 3.77 × 10−2 | 3.99 × 10−2 | 1.42 × 10−4 | 1.85 × 10−4 | 3.61 × 10−2
F12 | Mean | 4.76 | 4.63 | 4.73 | 4.79 | 4.61 | 4.61 | 4.84
F13 | Best | 9.12 × 10−2 | 0 | 0 | 0 | 0 | 0 | 0
F13 | Std | 5.27 × 10−2 | 8.74 × 10−3 | 8.33 × 10−3 | 3.46 × 10−3 | 0 | 0 | 0
F13 | Mean | 1.85 × 10−1 | 5.79 × 10−3 | 2.71 × 10−3 | 1.10 × 10−3 | 0 | 0 | 0
F14 | Best | 3.59 × 10−1 | 1.81 × 10−1 | 4.59 × 10−1 | 9.52 × 10−1 | 1.77 × 10−4 | 6.28 × 10−4 | 1.01
F14 | Std | 1.82 | 1.58 × 10−1 | 2.28 × 10−1 | 2.64 × 10−1 | 6.13 × 10−2 | 1.54 × 10−1 | 9.13 × 10−2
F14 | Mean | 2.08 | 4.29 × 10−1 | 8.31 × 10−1 | 1.48 | 2.48 × 10−2 | 3.25 × 10−1 | 1.17
F15 | Best | 2.59 × 10−5 | 2.85 × 10−5 | 6.70 × 10−5 | 4.19 × 10−2 | 2.55 × 10−5 | 2.55 × 10−5 | 2.55 × 10−5
F15 | Std | 7.16 × 10 | 5.91 × 10 | 9.62 | 9.08 × 10 | 1.08 × 10−6 | 0 | 4.50 × 10−5
F15 | Mean | 5.89 × 10 | 5.53 × 10 | 4.08 | 1.12 × 102 | 2.57 × 10−5 | 2.55 × 10−5 | 2.71 × 10−5
F16 | Best | 9.58 × 10−3 | 2.45 × 10−3 | 2.45 × 10−3 | 2.45 × 10−3 | 0 | 0 | 0
F16 | Std | 5.61 × 10−3 | 3.36 × 10−3 | 1.28 × 10−3 | 3.38 × 10−6 | 6.11 × 10−4 | 0 | 0
F16 | Mean | 1.52 × 10−2 | 7.21 × 10−3 | 2.69 × 10−3 | 2.45 × 10−3 | 2.29 × 10−3 | 0 | 0
F17 | Best | 1.53 × 10−1 | 0 | 0 | 9.83 × 10−2 | 1.05 × 10−2 | 1.00 × 10−1 | 0
F17 | Std | 5.95 × 10−1 | 5.14 × 10−2 | 1.82 × 10−2 | 4.72 × 10−2 | 4.66 × 10−2 | 2.78 × 10−17 | 2.49 × 10−2
F17 | Mean | 9.75 × 10−1 | 2.26 × 10−2 | 3.37 × 10−3 | 1.34 × 10−1 | 8.70 × 10−2 | 1.00 × 10−1 | 6.67 × 10−3
F18 | Best | −2.06 | −2.06 | −2.06 | −2.06 | −2.06 | −2.06 | −2.06
F18 | Std | 9.58 × 10−7 | 6.75 × 10−9 | 1.47 × 10−8 | 1.24 × 10−5 | 2.90 × 10−10 | 5.40 × 10−11 | 1.14 × 10−9
F18 | Mean | −2.06 | −2.06 | −2.06 | −2.06 | −2.06 | −2.06 | −2.06
F19 | Best | −1 | −1 | −1 | −1 | −1 | −1 | −1
F19 | Std | 6.51 × 10−4 | 0 | 0 | 0 | 3.12 × 10−2 | 0 | 0
F19 | Mean | −1 | −1 | −1 | −1 | −9.62 × 10−1 | −1 | −1
F20 | Best | 2.70 × 10−11 | 0 | 0 | 0 | 0 | 0 | 0
F20 | Std | 2.75 × 10−9 | 0 | 0 | 0 | 1.15 × 10−3 | 0 | 0
F20 | Mean | 2.50 × 10−9 | 0 | 0 | 0 | 5.23 × 10−4 | 0 | 0
F21 | Best | 2.47 × 10−10 | 6.32 × 10−112 | 2.92 × 10−141 | 3.49 × 10−110 | 2.82 × 10−31 | 2.35 × 10−81 | 0
F21 | Std | 2.39 × 10−7 | 9.59 × 10−96 | 8.15 × 10−116 | 1.86 × 10−99 | 7.38 × 10−12 | 2.58 × 10−70 | 0
F21 | Mean | 1.65 × 10−7 | 2.49 × 10−96 | 1.52 × 10−116 | 7.42 × 10−100 | 2.37 × 10−12 | 6.51 × 10−71 | 0
F22 | Best | 2.08 × 10−8 | 9.81 × 10−207 | 4.23 × 10−150 | 8.63 × 10−147 | 3.03 × 10−55 | 2.52 × 10−100 | 0
F22 | Std | 4.93 × 10−6 | 0 | 1.76 × 10−116 | 1.90 × 10−109 | 7.62 × 10−46 | 1.14 × 10−85 | 0
F22 | Mean | 3.27 × 10−6 | 2.33 × 10−164 | 5.00 × 10−117 | 3.87 × 10−110 | 2.65 × 10−46 | 2.12 × 10−86 | 0
F23 | Best | 4.88 × 10−3 | 5.14 × 10−5 | 4.99 × 10−4 | 1.45 | 2.24 × 10−8 | 7.99 × 10−12 | 2.24 × 10−1
F23 | Std | 1.47 | 2.19 | 1.27 | 3.76 | 1.93 | 1.07 × 10−1 | 9.20 × 10−1
F23 | Mean | 8.23 × 10−1 | 1.66 | 8.79 × 10−1 | 5.43 | 9.30 × 10−1 | 1.39 × 10−1 | 7.60 × 10−1
Table 5. p-values of the Wilcoxon rank-sum test.

Function | PSO | GWO | WOA | TSA | FSA
F1 | 1.21 × 10−12 (+) | 1.21 × 10−12 (+) | 1.21 × 10−12 (+) | 1.21 × 10−12 (+) | 1.21 × 10−12 (+)
F2 | 3.02 × 10−11 (+) | 3.02 × 10−11 (+) | 3.02 × 10−11 (+) | 3.02 × 10−11 (+) | 3.02 × 10−11 (+)
F3 | 1.21 × 10−12 (+) | 1.21 × 10−12 (+) | 1.21 × 10−12 (+) | 1.21 × 10−12 (+) | 1.21 × 10−12 (+)
F4 | 3.02 × 10−11 (+) | 3.02 × 10−11 (+) | 3.02 × 10−11 (+) | 3.02 × 10−11 (+) | 3.02 × 10−11 (+)
F5 | 3.02 × 10−11 (+) | 5.57 × 10−10 (+) | 4.42 × 10−06 (+) | 5.53 × 10−08 (+) | 5.59 × 10−01 (−)
F6 | 3.02 × 10−11 (+) | 3.02 × 10−11 (+) | 3.02 × 10−11 (+) | 4.98 × 10−11 (+) | 3.03 × 10−02 (+)
F7 | 3.02 × 10−11 (+) | 3.02 × 10−11 (+) | 1.55 × 10−09 (+) | 1.70 × 10−02 (+) | 2.64 × 10−01 (−)
F8 | 3.02 × 10−11 (+) | 3.02 × 10−11 (+) | 3.02 × 10−11 (+) | 3.02 × 10−11 (+) | 3.02 × 10−11 (+)
F9 | 1.21 × 10−12 (+) | 1.21 × 10−12 (+) | 1.21 × 10−12 (+) | 1.21 × 10−12 (+) | 1.21 × 10−12 (+)
F10 | 4.62 × 10−10 (+) | 6.05 × 10−07 (+) | 1.87 × 10−07 (+) | 3.02 × 10−11 (+) | 1.29 × 10−06 (+)
F11 | 1.21 × 10−12 (+) | 1.20 × 10−12 (+) | 1.74 × 10−09 (+) | NaN (=) | NaN (=)
F12 | 8.20 × 10−7 (+) | 3.02 × 10−11 (+) | 1.61 × 10−10 (+) | 3.57 × 10−06 (+) | 3.71 × 10−01 (−)
F13 | 1.21 × 10−12 (+) | 1.46 × 10−04 (+) | 6.15 × 10−10 (+) | 8.15 × 10−02 (−) | NaN (=)
F14 | 2.64 × 10−1 (−) | 3.02 × 10−11 (+) | 1.43 × 10−08 (+) | 3.96 × 10−08 (+) | 4.36 × 10−02 (+)
F15 | 5.57 × 10−10 (+) | 4.08 × 10−11 (+) | 3.02 × 10−11 (+) | 3.02 × 10−11 (+) | 8.60 × 10−05 (+)
F16 | 1.21 × 10−12 (+) | 5.51 × 10−13 (+) | 8.70 × 10−14 (+) | 1.21 × 10−12 (+) | 1.91 × 10−10 (+)
F17 | 2.36 × 10−12 (+) | 1.89 × 10−01 (−) | 5.97 × 10−01 (−) | 4.60 × 10−12 (+) | 6.54 × 10−01 (−)
F18 | 6.43 × 10−12 (+) | 2.90 × 10−10 (+) | 2.44 × 10−10 (+) | 6.43 × 10−12 (+) | 4.51 × 10−02 (+)
F19 | 1.21 × 10−12 (+) | NaN (=) | NaN (=) | NaN (=) | NaN (=)
F20 | 1.21 × 10−12 (+) | NaN (=) | NaN (=) | NaN (=) | NaN (=)
F21 | 1.21 × 10−12 (+) | 1.21 × 10−12 (+) | 1.21 × 10−12 (+) | 1.21 × 10−12 (+) | 6.62 × 10−04 (+)
F22 | 1.21 × 10−12 (+) | 1.21 × 10−12 (+) | 1.21 × 10−12 (+) | 1.21 × 10−12 (+) | 1.61 × 10−01 (−)
F23 | 5.75 × 10−2 (−) | 5.30 × 10−01 (−) | 9.47 × 10−01 (−) | 4.18 × 10−09 (+) | 6.63 × 10−01 (−)
+/−/= | 21/2/0 | 19/2/2 | 19/2/2 | 19/1/3 | 13/6/4