Article

An Enhanced Team-Oriented Swarm Optimization Algorithm (ETOSO) for Robust and Efficient High-Dimensional Search

by
Adel BenAbdennour
College of Engineering, Islamic University of Madinah, Madinah 42351, Saudi Arabia
Biomimetics 2025, 10(4), 222; https://doi.org/10.3390/biomimetics10040222
Submission received: 9 February 2025 / Revised: 23 March 2025 / Accepted: 26 March 2025 / Published: 3 April 2025

Abstract

This paper introduces the Enhanced Team-Oriented Swarm Optimization (ETOSO) algorithm, a novel refinement of the Team-Oriented Swarm Optimization (TOSO) algorithm aimed at addressing the stagnation problem commonly encountered in nature-inspired optimization approaches. ETOSO enhances TOSO by integrating innovative strategies for exploration and exploitation, resulting in a simplified algorithm that demonstrates superior performance across a broad spectrum of benchmark functions, particularly in high-dimensional search spaces. A comprehensive comparative evaluation and statistical tests against 26 established nature-inspired optimization algorithms (NIOAs) across 15 benchmark functions and dimensions D = 2, 5, 10, 30, 50, 100, and 200 confirm ETOSO's superiority in terms of solution accuracy, convergence speed, computational complexity, and consistency.

1. Introduction

Optimization plays a crucial role across diverse scientific and engineering disciplines. From machine learning and artificial intelligence to logistics and finance, the ability to find optimal solutions efficiently is essential. However, many real-world optimization problems are characterized by high dimensionality and complex landscapes, posing significant challenges to traditional optimization techniques. Nature-inspired optimization algorithms (NIOAs) have emerged as a powerful paradigm for tackling such complex optimization problems. Inspired by natural processes such as swarm behavior and physical phenomena, NIOAs offer robust and adaptable search strategies [1,2,3]. Algorithms like Particle Swarm Optimization (PSO) have achieved notable success across a wide range of applications, often outperforming traditional gradient-based methods in complex, non-convex search spaces. Their inherent parallelism, stochastic nature, and ability to escape local optima make them particularly well suited for challenging optimization tasks.
PSO, inspired by the social foraging behavior of birds and fish, has gained widespread popularity due to its simplicity and effectiveness. It simulates a swarm of particles exploring the search space, with each particle adjusting its trajectory based on its own best-found solution and the best solution found by the entire swarm. While PSO has proven effective in numerous applications, it is susceptible to suboptimal convergence, particularly in complex, multimodal landscapes, where the algorithm can become trapped in local optima, hindering the discovery of the true global optimum.
To address the limitations of the standard PSO, the Team-Oriented Swarm Optimization algorithm (TOSO) was introduced [4]. TOSO incorporates a team-based approach, dividing the swarm into two distinct teams: explorers and exploiters. The explorers navigate a wider area within the search space while the exploiters refine the current best position. This way, the diversity of the population and the robustness of the search process are maintained. This division of labor promotes a more balanced exploration–exploitation trade-off, diminishing the risk of early stagnation and improving the overall search efficiency.
While TOSO represents a significant advancement over the standard PSO, with remarkable performance in optimizing a wide range of benchmark functions, it faces challenges in certain scenarios, particularly in parameter tuning, which can significantly influence performance. This limitation motivates the development of an Enhanced Team-Oriented Swarm Optimization algorithm (ETOSO). ETOSO is a novel enhancement of TOSO that incorporates several key improvements to further enhance exploration, exploitation, simplicity, and overall optimization performance. ETOSO builds upon the team-based structure of TOSO but introduces mechanisms to make the algorithm parameter-free, simplifying implementation and enhancing robustness. These enhancements are designed to address the limitations of TOSO and achieve superior performance in complex optimization problems. The main contributions of this paper are as follows:
  • The development of ETOSO, an enhanced version of TOSO with improved exploration and exploitation strategies and a parameter-free design;
  • A comprehensive experimental evaluation of ETOSO on a diverse suite of 15 benchmark functions, demonstrating its superior performance compared to TOSO and 25 state-of-the-art NIOAs;
  • A detailed computational and statistical analysis of the ETOSO algorithm, providing insights into its behavior and performance characteristics.
The paper is organized as follows: Section 2 reviews the related work. Section 3 provides a detailed background on TOSO, including its mathematical formulations and underlying principles. Section 4 describes the proposed ETOSO algorithm, highlighting the key enhancements and their rationale. Section 5 presents the experimental setup and benchmark functions used for evaluation. Section 6 compares ETOSO with TOSO and other state-of-the-art algorithms. It also provides a statistical and computational complexity analysis. The discussion and limitations are presented in Section 7. Finally, Section 8 concludes the paper and outlines future research directions.

2. Related Work

NIOAs have emerged as powerful paradigms for addressing complex optimization challenges, drawing inspiration from natural phenomena to navigate complex solution landscapes. These algorithms offer a dynamic and adaptable approach, proving particularly valuable in tackling nonlinear and high-dimensional problems where traditional optimization techniques often fail. However, while NIOAs present a compelling alternative, their original formulations are not without inherent limitations. Refining NIOAs’ applicability requires understanding the limitations related to parameter sensitivity, exploration–exploitation balance, constraint handling, scalability, and generalizability.
One of the most common challenges across a spectrum of NIOAs is the strong dependence on precise parameter tuning. Algorithms such as the Bat Algorithm (BAT) [5,6], Bees Algorithm (BEE) [7], Flower Pollination Algorithm (FPA) [8], Sine Cosine Algorithm (SCA) [9], Whale Optimization Algorithm (WOA) [10], and Cuckoo Search (CS) [11] can sometimes be particularly susceptible to this issue. The performance of these algorithms may vary dramatically with slight changes in control parameters, demanding careful adjustments to achieve optimal results. This sensitivity may complicate their application in real-world scenarios and may raise questions about their robustness and generalizability.
Furthermore, a significant proportion of original NIOAs may exhibit an inherent difficulty in maintaining a balanced equilibrium between exploration and exploitation. Algorithms like the Butterfly Optimization Algorithm (BOA) [12], Elephant Herding Optimization (EHO) [13], Firefly Algorithm (FA) [14,15], Grasshopper Optimization Algorithm (GOA) [16], Harris Hawks Optimization (HHO) [17], Moth–Flame Optimization (MFO) [18], Slime Mold Algorithm (SMA) [19], and Salp Swarm Algorithm (SSA) [20,21] can frequently struggle to navigate this trade-off effectively in some contexts. Over-emphasis on exploration can lead to inefficient searches, while excessive exploitation can result in premature convergence and entrapment in local optima. This underscores the need for more sophisticated search strategies that dynamically adapt to the problem landscape.
Moreover, the practical deployment of a number of NIOAs is frequently restricted by limitations in constraint handling and scalability. Algorithms such as the Crow Search Algorithm (CSA) [22], Grey Wolf Optimizer (GWO) [23], Teaching–Learning-Based Optimization (TLBO) [24], Flow Direction Algorithm (FDA) [25], Gravity Search Algorithm (GSA) [26], Raven Roost Optimization (RRO) [27,28], Monkey King Algorithm (MKA) [29,30,31,32], and Remora Optimization Algorithm (ROA) [33,34] may demonstrate difficulties in effectively managing constraints and scaling to high-dimensional problems in certain applications. These limitations can pose significant challenges in real-world applications where constraints are prevalent, and problem dimensions are large, often leading to increased computational complexity and diminished solution quality.
Finally, a subset of NIOAs faces concerns regarding their theoretical foundations and generalizability. Differential Evolution (DE) [35], for instance, lacks comprehensive theoretical insights into its convergence properties, while the Prairie Dog Optimization (PDO) algorithm [36] and the Seagull Optimization Algorithm (SOA) [37] require more extensive comparative analyses to validate their performance across diverse problem domains. This lack of robust theoretical studies and empirical validation may limit their broader applicability.
In response to the various limitations inherent to some original NIOAs, researchers have dedicated considerable effort to developing enhancements and hybridizations, aiming to augment performance and expand applicability. To address parameter sensitivity, a prevalent strategy has been the implementation of adaptive parameter control. Adaptive versions of the Bat Algorithm and FPA, for example, dynamically adjust parameters based on search progress, thereby enhancing robustness and accelerating convergence. Specific implementations, such as the dynamic adaptation of the Levy flight step size in CS and the variable step size in FA [38], demonstrate this approach. These adaptive mechanisms seek to reduce the reliance on manual parameter tuning, fostering greater autonomy and adaptability within the algorithms, though their efficacy remains dependent on the specific problem landscape.
To address the persistent challenges in achieving a balanced equilibrium between exploration and exploitation, hybrid approaches have gained significant traction. These methods combine the strengths of other algorithms to optimize search efficacy. Example instances of such an approach include hybridizations of SSA with PSO [39], SCA with TLBO [40], and GWO with CS [41]. Furthermore, enhancements to FPA [42] and TLBO [43] have focused on refining the balance between exploration and exploitation, thereby improving convergence and solution quality. However, the increased complexity of these hybrid approaches can raise concerns about computational efficiency.
Advancements in constraint handling and scalability have been realized through various modifications. Enhanced versions of CSA and TLBO have incorporated strategies to manage infeasible solutions more effectively, thereby improving performance in constrained optimization problems. For high-dimensional problems, enhancements to HHO [44,45,46,47] and CSA [48] have yielded promising results through multi-strategy enhancements and improved search mechanisms. Similarly, enhanced BEE algorithms have integrated Deb’s rules [49] to improve constraint-handling capabilities. These modifications demonstrate a commitment to expanding the applicability of NIOAs to real-world problems.
Furthermore, efforts have also been directed toward enhancing the theoretical foundations and generalizability of NIOAs. Modifications to MFO [50], SCA [51], SOA [52], and GWO [53,54] have focused on improving convergence, accuracy, and general applicability across diverse optimization landscapes. These enhancements aim to solidify the theoretical underpinnings of NIOAs and broaden their practical deployment, though standardized benchmarking remains a critical need for rigorous performance evaluation.
Despite these advancements, challenges are still being investigated. The increasing complexity of hybrid approaches raises concerns regarding computational efficiency and practical applicability. Consequently, future research should prioritize the development of robust, efficient, and generalizable NIOAs characterized by reduced parameter sensitivity and transparent tuning methodologies. There remains a pressing need for algorithms that exhibit diminished reliance on precise parameter adjustments or at least provide a clear and methodical tuning approach while delivering superior performance across a spectrum of optimization problems, and this is an area of ongoing research.

3. The TOSO Algorithm

The original PSO is a simple population-based optimization algorithm inspired by the social behavior of bird flocks. In a D-dimensional search space, the position and velocity of the ith particle in the dth dimension are updated at each iteration using the following equations [55]:
v_{i,d} = w × v_{i,d} + c_1 × r_1 × (pbest_{i,d} − x_{i,d}) + c_2 × r_2 × (gbest_d − x_{i,d})    (1)
x_{i,d} = x_{i,d} + v_{i,d}    (2)
where
  • x_{i,d}: Position of the ith particle in the dth dimension;
  • v_{i,d}: Velocity of the ith particle in the dth dimension;
  • w: Inertia weight, controlling the influence of the previous velocity;
  • c_1, c_2: Acceleration constants, influencing the attraction toward the personal and global best positions;
  • pbest_{i,d}: Personal best position of the ith particle in the dth dimension;
  • gbest_d: Global best position of the swarm in the dth dimension;
  • r_1, r_2: Uniform random numbers in [0, 1].
PSO balances exploration and exploitation through the careful selection of the acceleration constants (c_1 and c_2) and the inertia weight (w).
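To make the update concrete, a single PSO step can be sketched as follows. This is an illustrative sketch only: the parameter values (w = 0.7, c1 = c2 = 1.5) are common textbook choices, not settings taken from this paper, and the variable names simply mirror the notation above.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO iteration (Eqs. (1)-(2)) for a swarm of shape (ps, D)."""
    ps, D = x.shape
    r1 = rng.random((ps, D))  # fresh uniform randoms per particle and dimension
    r2 = rng.random((ps, D))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v

# Minimal usage: 30 particles in a 10-dimensional box [-5, 5]^10
x = rng.uniform(-5, 5, (30, 10))
v = np.zeros_like(x)
x, v = pso_step(x, v, pbest=x.copy(), gbest=x[0])
print(x.shape)  # (30, 10)
```

In a full optimizer this step would be repeated, with pbest and gbest refreshed from the fitness values after each move.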
TOSO is an innovative enhanced version of PSO that aims to improve its performance by decoupling exploration and exploitation. In PSO, a single swarm explores the solution space. In contrast, TOSO divides the swarm into two teams: an exploration team responsible for discovering new areas of the search space and an exploitation team dedicated to refining potential solutions. The only information shared between the teams is the current best position. This decoupling allows TOSO to balance exploration and exploitation more effectively, resulting in performance enhancement in navigating complex search spaces, particularly in multimodal problems where multiple optimal solutions may exist. This enhancement allowed for a much-improved performance over a large number of benchmarks, even for very high dimensions [4].

3.1. TOSO Exploration Team

The exploration team aims to discover promising new regions within the search space. The explorers are guided by the local best (lbest) model instead of the PSO global (gbest) model. This enhances exploration by focusing on the best local positions, preventing rapid convergence, and maintaining diversity within the team. TOSO also uses the ring topology for neighbor selection instead of the PSO star topology for the same reason. Moreover, TOSO does not use a velocity update, and the new position is directly determined from the previous one. Given these modifications, the position of each explorer is, hence, updated according to the following equation:
x_{i,d} = α × r_{1,d} × (lbest_{i,d} − x_{i,d})    (3)
where
  • x_{i,d} represents the new position of the ith explorer;
  • lbest is the best position found by the explorer's local neighbors within a ring topology;
  • r_{1,d} is a uniform random number between 0 and 1, different for each dimension.
The exploration factor (α) is dynamically adjusted to balance exploration and exploitation. It is calculated as follows:
α = min(fitness) / (1 + max(fitness))    (4)
where fitness is the vector containing the fitness values of all explorers. This dynamic adjustment of the exploration factor ensures that the team maintains a balance between exploiting promising regions and exploring new areas of the search space. To prevent premature convergence and maintain diversity within the exploration team, TOSO applies a random mutation operator to explorers with a predefined probability pm. The rebirth is achieved using the following:
x_{i,d} = R_{min,d} + r_{2,d} × (R_{max,d} − R_{min,d})    (5)
where
  • Rmin and Rmax represent the lower and upper bounds of the search space, respectively;
  • r2,d is a uniform random number between 0 and 1, different for each dimension.
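The exploration-team rules above can be sketched in one function. This is a hedged reading of Eqs. (3)-(5): the ring neighborhood is taken as each explorer's previous, current, and next indices as the text states, while details such as how fitness ties are broken are our assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

def explore_step(x, fitness, r_min, r_max, pm=0.1):
    """One TOSO exploration-team update (Eqs. (3)-(5)), minimization assumed."""
    ps, D = x.shape
    # lbest via ring topology: each explorer considers itself and its
    # immediate index neighbours (previous and next, wrapping around)
    lbest = np.empty_like(x)
    for i in range(ps):
        nbrs = [(i - 1) % ps, i, (i + 1) % ps]
        lbest[i] = x[nbrs[int(np.argmin(fitness[nbrs]))]]
    # dynamic exploration factor, Eq. (4)
    alpha = fitness.min() / (1.0 + fitness.max())
    # position update, Eq. (3): fresh uniform random per dimension
    x_new = alpha * rng.random((ps, D)) * (lbest - x)
    # rebirth (random mutation) with probability pm, Eq. (5)
    reborn = rng.random(ps) < pm
    x_new[reborn] = r_min + rng.random((int(reborn.sum()), D)) * (r_max - r_min)
    return x_new

xs = rng.uniform(-5, 5, (10, 3))
fit = (xs ** 2).sum(axis=1)
print(explore_step(xs, fit, -5.0, 5.0).shape)  # (10, 3)
```

Note that ETOSO, described in Section 4, drops the rebirth branch entirely.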

3.2. TOSO Exploitation Team

The exploitation team refines the search around the current best solution. Instead of moving toward the best position, each exploiter is initialized at the best solution and then undergoes small, controlled movements within a localized region. The magnitude of these movements is determined by an exploitation factor (βj), which is individually assigned to each exploiter based on its relative performance within the exploitation team. The position update for each exploiter is given by the following:
x_{j,d} = x_{j,d} + M_{j,d}    (6)
M_{j,d} = β_j × r_3 × (R_{max,d} − R_{min,d})    (7)
where r_3 is a random number drawn from a standard normal distribution (to favor exploiting closer proximities) and M_{j,d} is a small perturbation motion for dimension d. Denoting by k the particle's relative position within the swarm, determined by its fitness level, and by ps the swarm size, the exploitation function is given by the following:
β_k = γ_1 + γ_2 × (e^{γ_3(k−1)/(0.5·ps−1)} − 1) / (e^{γ_3} − 1)    (8)
where γ_1, γ_2, and γ_3 are constants that control the search radius of the exploiters. TOSO makes the best particle exploit within 0.01% of the range and the worst one within 50% of the range, relative to their current position. Under these conditions, the parameters are determined to be γ_1 = 0.001, γ_2 = 0.499, and γ_3 = 2 [55].
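A small sketch makes Eq. (8) and its endpoints easy to check: with the stated constants, the best-ranked exploiter (k = 1) gets β = 0.001 and the mid-swarm rank (k = ps/2) gets β = 0.5. The helper names below are ours, not the paper's.

```python
import math

import numpy as np

def beta_toso(k, ps, g1=0.001, g2=0.499, g3=2.0):
    """Exponential exploitation factor of TOSO, Eq. (8)."""
    return g1 + g2 * (math.exp(g3 * (k - 1) / (0.5 * ps - 1)) - 1) / (math.exp(g3) - 1)

def exploit_move(x_best, k, ps, r_min, r_max, rng):
    """One exploiter perturbation around the current best, Eqs. (6)-(7)."""
    beta = beta_toso(k, ps)
    # r_3 is standard-normal, so small moves near the best are favored
    return x_best + beta * rng.standard_normal(x_best.shape) * (r_max - r_min)

ps = 30
print(round(beta_toso(1, ps), 3))        # 0.001 -> best exploiter searches tightly
print(round(beta_toso(ps // 2, ps), 3))  # 0.5   -> worst exploiter searches widely
```

Plugging k = ps/2 into the exponent gives exactly γ_3, so the fraction evaluates to 1 and β = γ_1 + γ_2, which is how the constants encode the 50%-of-range upper radius.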

4. ETOSO Algorithm

ETOSO incorporates several key enhancements to TOSO that are aimed at improving its exploration and exploitation capabilities. These enhancements include a linear weight increment for exploitation, indirect refined neighbor selection, the complete removal of the explorers' rebirth step, and the elimination of any need for parameter settings. With these enhancements, ETOSO becomes a simple and parameter-free algorithm while maintaining a competitive performance.

4.1. Linear Weight Increment for Exploitation

TOSO employs an exponential weight increment for exploiters, governed by the three parameters in (8). Besides the inconvenience of requiring three parameters, the exponential weighting strategy can lead to significant disadvantages. While the weights guide exploiters toward refining the global best solution, the nonlinear nature of exponential weighting makes the algorithm overly sensitive to changes in fitness rankings, resulting in abrupt alterations in the weighted influence applied to the exploiters' positions. This behavior may prompt exploiters to converge too quickly to local optima, thereby restricting their capacity to adaptively navigate the search space and explore alternative promising solutions.
Alternatively, ETOSO proposes a much simpler linear weighting strategy for the exploitation function given by the following:
β_k = γ_1 × (k − 1) + γ_2    (9)
The slope and bias are selected so that the starting and ending weights of ETOSO are similar to those of TOSO, as shown in Figure 1 for a population of 30. For any population size ps, the values of γ_1 and γ_2 are given by the following:
γ_1 = 0.996 / (ps − 2),  γ_2 = 0.001    (10)
The linear behavior, aside from being simple and less computationally demanding, allows for gradual and consistent adjustments to the weights, promoting smoother movements and reducing potential oscillations among exploiters. While both strategies utilize particle rankings, the linear approach ensures that exploiters can exploit good solutions without the erratic fluctuations that can affect convergence. A linear increase in weights provides a more controlled and progressive shift and ensures that the exploitation influence grows gradually, allowing the algorithm to explore the search space effectively while progressively focusing on promising regions. This approach enables ETOSO to maintain a balanced and effective search process, allowing for better adaptation to changes in the fitness landscape and enhancing the algorithm’s overall robustness and performance.
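The linear rule of Eqs. (9)-(10) is short enough to verify directly: for any swarm size it starts at 0.001 for the best-ranked exploiter and grows linearly to roughly 0.5 at the mid-swarm rank, matching TOSO's endpoints without an exponential. The function name is ours.

```python
def beta_etoso(k, ps):
    """Linear exploitation factor of ETOSO, Eqs. (9)-(10)."""
    g1 = 0.996 / (ps - 2)  # slope chosen to mirror TOSO's ending weight
    g2 = 0.001             # bias equal to TOSO's starting weight
    return g1 * (k - 1) + g2

ps = 30
print(beta_etoso(1, ps))                  # 0.001 for the best-ranked exploiter
print(round(beta_etoso(ps // 2, ps), 3))  # 0.499, close to TOSO's 0.5
```

Unlike Eq. (8), a one-place change in rank here always shifts β by the same amount, which is the "gradual and consistent adjustment" the text describes.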

4.2. Dynamic Exploration Enhancement

TOSO incorporates randomization through a mutation probability pm. While this introduces some degree of exploration, the level of exploration is heavily dependent on the predetermined value of pm, which may not be optimal across different problem contexts. The authors of TOSO themselves acknowledged the lack of justification behind their choice of the value of pm. In reality, a fixed mutation probability may not adequately serve all optimization scenarios, as higher mutation rates might be required to effectively escape local optima, while lower rates could help maintain focus on promising regions.
However, after examining the effect of the explorers' mutation, it was found that it does not make any useful contribution to the process. In fact, it adds to the complexity of TOSO, increases the average execution time, and introduces an additional variable to tune without significant benefit. Therefore, ETOSO employs only the basic exploration given by (3), with no rebirth. This simplifies the algorithm while offering potential improvements in performance, speed, and consistency.

4.3. Indirect Refined Neighbor Selection

While both TOSO and ETOSO engage in neighbor selection for each explorer based on their indices (including previous, current, and next explorers), ETOSO’s incorporation of a linear weight increment for exploitation can indirectly enhance the influence of neighbor interactions. The gradual adjustment of weights provides a more stable and predictable movement pattern for exploiters, leading to a smoother transition between exploration and exploitation phases. Consequently, this consistency may facilitate more effective utilization of information from neighboring explorers during the exploration process. By capitalizing on knowledge from nearby particle performances, ETOSO can achieve more informed search directions, resulting in improved exploration efficiency within the optimization task. The pseudocode for the ETOSO algorithm is presented in Figure 2.
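As a speculative end-to-end sketch, one ETOSO iteration can be assembled from the pieces described in Sections 3 and 4: half the swarm explores with the lbest/ring rule and no rebirth, and half exploits the current best with the linear factor. All function and variable names here are ours, the team split and clipping policy are assumptions, and Figure 2 remains the authoritative pseudocode.

```python
import numpy as np

def etoso_iteration(f, x, gbest, r_min, r_max, rng):
    """One assumed ETOSO iteration for minimizing f over [r_min, r_max]^D."""
    ps, D = x.shape
    half = ps // 2
    fit = np.apply_along_axis(f, 1, x)
    # --- exploration team (first half): Eqs. (3)-(4), no rebirth ---
    ex = x[:half]
    lbest = np.empty_like(ex)
    for i in range(half):
        nbrs = [(i - 1) % half, i, (i + 1) % half]
        lbest[i] = ex[nbrs[int(np.argmin(fit[:half][nbrs]))]]
    alpha = fit[:half].min() / (1.0 + fit[:half].max())
    x[:half] = alpha * rng.random((half, D)) * (lbest - ex)
    # --- exploitation team (second half): Eqs. (6)-(7) with linear (9)-(10) ---
    order = np.argsort(fit[half:])  # rank exploiters, best first
    for rank, j in enumerate(order, start=1):
        beta = 0.996 / (ps - 2) * (rank - 1) + 0.001
        x[half + j] = gbest + beta * rng.standard_normal(D) * (r_max - r_min)
    np.clip(x, r_min, r_max, out=x)
    # the only shared information between teams: the current best position
    fit = np.apply_along_axis(f, 1, x)
    best = int(np.argmin(fit))
    if f(gbest) > fit[best]:
        gbest = x[best].copy()
    return x, gbest

rng = np.random.default_rng(2)
sphere = lambda z: float(np.sum(z * z))
x = rng.uniform(-5, 5, (30, 10))
gbest = x[int(np.argmin([sphere(r) for r in x]))].copy()
f0 = sphere(gbest)
for _ in range(200):
    x, gbest = etoso_iteration(sphere, x, gbest, -5.0, 5.0, rng)
print(sphere(gbest) <= f0)  # True: the shared best never worsens
```

Note how no parameter beyond the population size and FE budget appears, which is the sense in which ETOSO is parameter-free.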

5. Experimental Setup

The experimental setup was designed to rigorously evaluate the performance of the proposed algorithm. This section describes the benchmark functions, the comparative algorithms, and the test configuration used in the study.

5.1. Benchmark Functions

A diverse set of benchmark functions was selected in accordance with CEC guidelines [56,57] to validate the proposed algorithm. These functions include unimodal functions (f1: Sphere, f2: Zakharov, f3: Sum Squares), multimodal functions (f4: Schwefel’s Problem 2.22, f5: Schwefel’s Problem 2.26, f6: Rosenbrock, f7: Ackley, f8: Rastrigin, f9: Griewank, f10: Bent Cigar), shifted functions (f11: Shifted Ackley, f12: Shifted Rosenbrock, f13: Shifted Rastrigin, f14: Shifted Griewank), and rotated functions (f15: Rotated Griewank). The equations for all benchmark functions are provided in Table 1. Unimodal functions (f1–f3) are primarily used to evaluate the exploitation capability of the algorithm, while multimodal functions (f4–f10) assess its exploration effectiveness and ability to avoid local optima. The shifted (f11–f14) and rotated (f15) functions introduce additional complexity by altering the location and orientation of the global optimum, making them more challenging and realistic for evaluating algorithm performance.
To provide a visual understanding of the functions, 3D plots of all 15 benchmark functions are included in Figure 3. These plots illustrate the unique characteristics of each function, such as the number of local optima, symmetry, and overall landscape complexity.
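For readers who want to reproduce the test bed, three of the listed benchmarks are shown below using their standard textbook formulas; the exact shift vectors and rotation matrices used for f11-f15 are defined in the paper's Table 1 and are not reproduced here.

```python
import numpy as np

def sphere(x):
    """f1 (unimodal): global minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def rastrigin(x):
    """f8 (highly multimodal): global minimum 0 at the origin."""
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

def griewank(x):
    """f9 (multimodal): global minimum 0 at the origin."""
    i = np.arange(1, x.size + 1)
    return float(1 + np.sum(x ** 2) / 4000 - np.prod(np.cos(x / np.sqrt(i))))

z = np.zeros(30)
print(sphere(z), rastrigin(z), griewank(z))  # 0.0 0.0 0.0
```

A shifted variant such as f13 is then simply `rastrigin(x - shift)` for the chosen shift vector, which moves the optimum away from the origin without changing the landscape.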

5.2. Comparative Algorithms and Parameter Settings

ETOSO was compared against a wide range of state-of-the-art optimization algorithms, namely BAT, BEE, BOA, CSA, CS, DE, EHO, FA, FDA, FPA, GOA, GSA, GWO, HHO, MFO, MKA, PDO, ROA, RRO, SCA, SOA, SMA, SSA, TLBO, and WOA. These algorithms were selected to provide a comprehensive comparison across different optimization paradigms, including swarm intelligence, evolutionary algorithms, and physics-inspired methods. The optimal parameter settings for each algorithm, obtained from the literature, are summarized in Table 2. These settings were carefully chosen to ensure fair and accurate comparisons.

5.3. Test Configuration

The experiments were conducted using MATLAB R2024a on a system equipped with an Intel Core Ultra 9 185H processor (2.30 GHz), 32.0 GB RAM, and a 64-bit operating system. All algorithms used a population size of 30 individuals (except for MKA, which uses 10 due to its many additional internal iterations). The maximum number of function evaluations (FEs) for each dimension (D) was set to 5000 × D, following standard CEC guidelines [56,57]. All results were averaged over 25 independent runs for each algorithm and benchmark function to ensure statistical robustness. Each benchmark function was evaluated across multiple dimensions (D = 2, 5, 10, 30, 50, 100, and 200). To facilitate reproduction of the results, the ETOSO algorithm code is available at https://github.com/adel468/ETOSO.

6. Experimental Results

In this section, we evaluate the performance of the ETOSO algorithm through four complementary analyses. The first compares ETOSO with TOSO to identify the improvements ETOSO provides. The second expands this analysis by contrasting ETOSO with 25 other algorithms, providing insights into its relative strengths and possible applications. The third analyzes statistical significance and sensitivity to population size. Finally, the fourth examines computational complexity. Together, these evaluations assess ETOSO's effectiveness and its significance within the broader context of algorithmic approaches.

6.1. Comparative Analysis of ETOSO and TOSO

Figure 4 shows the convergence behavior of the two algorithms for sample benchmark functions with D = 30. Table 3 and Table 4 present the results of the experimental evaluation for dimensions D = 2, 5, 10, 30, 50, 100, and 200. Each table provides the best performance, mean performance, performance standard deviation, and average speed (over 25 replications).
The results show that both algorithms demonstrate strong performance in simpler unimodal functions, consistently achieving optimal values of zero across various dimensions. This indicates their capacity for effective exploration and exploitation in landscapes characterized by a single peak. In contrast, the performance in multimodal functions reveals that while both algorithms maintain a comparable level of effectiveness, ETOSO occasionally outperforms TOSO in speed, especially with increasing dimensions. This suggests that ETOSO may possess enhanced adaptability to complex solution spaces with multiple optima.
Notably, in challenging functions such as the shifted and rotated benchmarks, both algorithms demonstrate their capacity to handle various complexities. However, ETOSO shows significantly lower variability in its results across higher dimensions, indicating greater stability and reliability compared to TOSO. The analysis underscores ETOSO’s performance in tackling complex problems with multiple local optima and deceptive features, notably in higher dimensions (D = 30, D = 50, D = 100, and D = 200). It highlights relatively faster convergence rates and robustness, particularly in navigating rugged landscapes, while maintaining a lower standard deviation compared to TOSO. Overall, while both algorithms managed simpler problems, ETOSO emerges as a robust and versatile option for complex, high-dimensional optimization tasks, balancing considerations of speed, stability, and adaptability in diverse solution landscapes exceptionally well.
The key takeaway from this first comparative test is that ETOSO performs as well as TOSO and often outperforms it, with greater consistency, in various scenarios. This enhanced performance is attained through a simpler structure and without the necessity for parameter tuning. Such improvements position ETOSO as a strong contender for optimization problems, which can be validated by benchmarking it against recently developed algorithms.

6.2. Benchmarking ETOSO Against Other Algorithms

This subsection presents the results of the benchmarking study. Figure 5 displays the convergence plots for the 25 algorithms across 25 replications, specifically for dimensionality D = 30. This figure highlights the sample performance dynamics of each algorithm for one multimodal function (f7) and one rotated function (f15). It is important to note that the FDA algorithm has been excluded from this figure due to its considerably slower convergence, which would distort the comparative analysis of the other algorithms.
Table 5 and Table 6 provide comprehensive results for all algorithms for D = 5, evaluated over 25 replications with a total of 5000 FEs per dimension. Table 5 focuses on functions f1 through f8, while Table 6 covers functions f9 through f15. Furthermore, Table 7 ranks all algorithms based on their average performances for dimensionalities D = 2, D = 5, and D = 10, with a focus on identifying the top-performing algorithms for further analysis, particularly in higher-dimensional scenarios. This ranking aims to distill key insights regarding the strengths of ETOSO and its competitors, thus guiding subsequent investigations into their behaviors under more complex conditions. Together, these results will provide a clearer understanding of ETOSO’s capabilities and its positioning within the broader algorithmic landscape.
A significant observation was the performance degradation of many algorithms as the dimensionality of the problem increased. This highlights the inherent challenge of effectively exploring and exploiting the search space in higher-dimensional problems. The search space expands exponentially with increasing dimensionality, making it more difficult for algorithms to locate the global optimum. Based on the results, the 26 algorithms (ETOSO and its 25 competitors) can be grouped into three tiers: top performers, average performers, and low performers.
The first tier includes ETOSO, FPA, DE, GWO, TLBO, ROA, and SCA. This tier encompasses algorithms that consistently demonstrate high-ranking performance across all dimensions. These algorithms exhibited a strong balance of exploration and exploitation capabilities, effectively navigating the search space and converging toward optimal or near-optimal solutions. ETOSO, in particular, consistently achieved top rankings, highlighting its robustness and adaptability to varying problem complexities.
The second tier includes WOA, MFO, HHO, MKA, FDA, PDO, BAT, EHO, RRO, BEE, BOA, FA, and GOA. This tier includes algorithms that exhibited moderate performance across different dimensions. While they achieved reasonable results, their performance might degrade in higher-dimensional problems, suggesting potential limitations in their capacity to thoroughly explore and exploit the search space in more complex scenarios.
The third tier includes CSA, CS, SSA, GSA, SMA, and SOA. This tier comprises algorithms that consistently ranked among the low performers across all dimensions, indicating significant limitations in their exploration and exploitation capabilities. These algorithms often struggled to escape local optima, leading to suboptimal solutions or even divergence, particularly in higher-dimensional problems.
Based on the rankings in Table 7, we have selected the top 10 performing algorithms for a detailed evaluation of how ETOSO performs against them in more complex scenarios. This assessment will specifically explore ETOSO’s effectiveness and examine any limitations in higher-dimensional and multimodal optimization problems. By comparing its performance to these established algorithms, we aim to highlight ETOSO’s strengths and identify areas for improvement, providing a comprehensive understanding of its competitive position in challenging optimization landscapes.
Table 8 details the evaluation of these algorithms for functions f1 through f8, with a dimensionality of D = 200, offering insights based on 25 replications and 5000D FEs. Similarly, Table 9 presents the evaluation for functions f9 through f15 under the same conditions. These tables collectively provide a deep dive into the capabilities and behavior of the algorithms in high-dimensional settings. Subsequently, Table 10 ranks the top algorithms in terms of their overall performance (P), speed (S), and consistency (C) for dimension D = 30, 50, 100, and 200.
As seen in Table 10, high-dimensional scenarios posed significant challenges for most algorithms, negatively impacting their performance. The complexity associated with these high dimensions emphasizes the need for robust optimization strategies. As the dimensionality of the problems increased, most algorithms exhibited a general decline in performance, with FPA and MFO being particularly vulnerable in these scenarios.
Based on the evaluation, the order of the algorithms from the top performer down is ETOSO, GWO, HHO, ROA, DE, SCA, TLBO, WOA, FPA, MFO, and PDO. ETOSO emerged as the best overall performer, demonstrating exceptional speed and consistency across all dimensions. GWO followed closely with strong performance and good speed, while HHO was noted as the fastest algorithm. ROA exhibited satisfactory performance along with reliable consistency. DE and SCA showed moderate performance levels, with TLBO being slightly less competitive in higher dimensions. WOA also demonstrated reasonable performance, though not as strong as the top contenders. FPA and MFO struggled in complex scenarios, particularly in higher dimensions, and PDO consistently displayed lower performance with reliability issues.

6.3. Analysis of Statistical Significance and Population Size Sensitivity

A comprehensive statistical analysis was undertaken to evaluate the performance of ETOSO in comparison to the 10 top-performing algorithms across a range of benchmark functions (f1–f15), particularly focusing on the impact of significantly reducing the population size from 30 to 10 in a high-dimensional problem space (D = 50). The analysis incorporated the Wilcoxon signed-rank test, p-values, and Cliff’s Delta to provide a robust assessment of statistical significance and effect size for 25 replications.
The Wilcoxon signed-rank test, a non-parametric statistical hypothesis test, was employed to determine whether there were statistically significant differences in the performance of ETOSO compared to the other algorithms. The test assesses whether pairs of observations (in this case, the performance metrics of two algorithms on the same function) are drawn from the same distribution. The resulting p-values, presented in Table 11, indicate the probability of observing the obtained results (or more extreme results) if there were no actual difference between the algorithms. A p-value at or below the predetermined significance level (α = 0.05) means that the null hypothesis (no difference) can be rejected, indicating a statistically significant difference in performance. In this study, statistically significant differences (p ≤ 0.05) were observed in 74% of the comparisons between ETOSO and the other algorithms across the benchmark functions, indicating that ETOSO’s performance is not equivalent to that of the other algorithms in the majority of the tested cases. In particular, statistically significant differences were observed between ETOSO and DE, FPA, HHO, MFO, and TLBO across all 15 benchmark functions; the very low p-values strongly suggest that ETOSO’s performance differs significantly from that of these algorithms. The Wilcoxon significance results in Table 12 reinforce this, with a value of 1 appearing consistently for these five algorithms when compared to ETOSO, confirming the statistical significance.
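For concreteness, a paired comparison of this kind can be sketched as follows. This is an illustration only: the data are hypothetical stand-ins for 25 replications of two algorithms on one benchmark function, not the study’s actual results.

```python
# Sketch of one paired Wilcoxon signed-rank comparison, as reported in Table 11.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(42)
etoso_runs = rng.uniform(0.0, 1e-6, 25)   # hypothetical near-optimal results
rival_runs = rng.uniform(0.5, 2.0, 25)    # hypothetical inferior results

# Paired, two-sided test across the 25 replications (lower fitness is better).
stat, p_value = wilcoxon(etoso_runs, rival_runs)
significant = p_value <= 0.05             # reject H0 at the alpha = 0.05 level
print(f"p = {p_value:.2e}, significant = {significant}")
```

Because every paired difference here has the same sign, the exact test yields a very small p-value, mirroring the strong significance observed against DE, FPA, HHO, MFO, and TLBO.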
Complementing the p-values, Cliff’s Delta, shown in Table 13, was used to quantify the effect size, providing a measure of the magnitude of the difference between the algorithms. A negative Cliff’s Delta value indicates that the second algorithm (the comparative algorithm) is stochastically dominated by the first algorithm (ETOSO). In other words, a negative Cliff’s Delta signifies that ETOSO consistently outperformed the other algorithm. Notably, Cliff’s Delta values approaching −1 were observed for comparisons between ETOSO and DE, FPA, HHO, MFO, and TLBO across all 15 functions. These values indicate a substantial effect size, demonstrating that ETOSO’s performance was consistently and significantly superior to these algorithms. Specifically, a Cliff’s Delta of −1 indicates that ETOSO stochastically dominates the other algorithm, meaning that ETOSO’s values are consistently smaller than the other algorithm’s values across all paired comparisons.
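Under this sign convention, Cliff’s Delta can be computed with a small helper such as the following (our own sketch with hypothetical data, not the study’s code):

```python
# Cliff's Delta as used in Table 13: with x = ETOSO's results on a minimization
# problem, a value of -1 means every ETOSO value is smaller than every value of
# the other algorithm (stochastic dominance by ETOSO).
import numpy as np

def cliffs_delta(x, y):
    """(#{x_i > y_j} - #{x_i < y_j}) / (n*m), over all n*m cross pairs."""
    x = np.asarray(x, dtype=float)[:, None]   # shape (n, 1)
    y = np.asarray(y, dtype=float)[None, :]   # shape (1, m)
    greater = np.sum(x > y)
    less = np.sum(x < y)
    return (greater - less) / (x.shape[0] * y.shape[1])

# Hypothetical example: ETOSO strictly below the competitor on every run.
print(cliffs_delta([1e-9, 0.0, 2e-9], [0.7, 1.3]))  # -1.0
```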
In comparisons with GWO, PDO, ROA, SCA, and WOA, the statistical analysis reveals that ETOSO demonstrates neither a statistically significant advantage nor disadvantage across all benchmark functions. To provide further differentiation of algorithm performance where the statistical analysis identified comparable results, the detailed performance metrics in Table 14 will be utilized for further assessment. These metrics are defined as follows:
  • Number of Perfect Hits: This metric counts the instances in which an algorithm’s solution precisely matches the known optimal solution;
  • Number of Times Closest to Minimum: This metric counts the benchmark functions on which an algorithm’s average performance is closer to the known minimum than that of any other compared algorithm, indicating superior near-optimal convergence;
  • Number of Std Dev ≤ Threshold: This metric reflects an algorithm’s consistency by counting the instances in which the standard deviation of its performance falls at or below a threshold, defined as 10⁻⁶ × |known minimum| for non-zero minima and 10⁻⁶ for zero minima;
  • Average Normalized Error (NAE): This metric is the mean of the individual normalized errors, each computed as the absolute difference between the average performance and the known minimum, divided by the absolute known minimum (or by 1 for zero minima), providing a scale-invariant error measure;
  • Trimmed NAE (TNAE): This metric is the NAE recomputed with the single worst outlier removed from each algorithm’s results, ensuring that comparisons reflect typical rather than extreme behavior;
  • Average Speed: This metric represents the mean execution time in seconds across all benchmarks, indicating computational efficiency.
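The error and consistency metrics above might be computed as in the following sketch. The function names are ours, and it assumes the TNAE “outlier” is the single largest normalized error per algorithm:

```python
# Minimal sketch of the NAE, TNAE, and std-dev threshold metrics of Table 14.
import numpy as np

def normalized_error(avg, fmin):
    """|avg - fmin| / |fmin|, with the denominator set to 1 when fmin = 0."""
    denom = abs(fmin) if fmin != 0 else 1.0
    return abs(avg - fmin) / denom

def nae(avgs, fmins):
    """Average Normalized Error over all benchmark functions."""
    return float(np.mean([normalized_error(a, m) for a, m in zip(avgs, fmins)]))

def tnae(avgs, fmins):
    """Trimmed NAE: NAE with the single largest normalized error removed."""
    errs = sorted(normalized_error(a, m) for a, m in zip(avgs, fmins))
    return float(np.mean(errs[:-1]))

def is_consistent(std, fmin, tol=1e-6):
    """Std-dev test: threshold is tol*|fmin| for non-zero minima, tol otherwise."""
    return std <= tol * (abs(fmin) if fmin != 0 else 1.0)
```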
Analyzing the results presented in Table 14, ETOSO demonstrates strong performance across multiple metrics. ETOSO achieved a high number of perfect hits (9), tying for first place with several other algorithms. Furthermore, it exhibited the highest number of times closest to the minimum (9), showcasing its robust convergence behavior. ETOSO also demonstrated the highest level of consistency, with 14 instances of the standard deviation meeting the predefined threshold. ETOSO’s Average Normalized Error (NAE) of 1.87 × 10³ (4.03 when the outlier is removed for each algorithm) is lower than that of the majority of the comparative algorithms, demonstrating its ability to find solutions close to the optimal values. Additionally, ETOSO exhibited a very low average execution time (0.852 s), ranking second only to HHO, indicating its high computational efficiency and consistent execution time.
These results, combined with the statistical significance demonstrated in the preceding analysis, highlight ETOSO’s superior performance across a range of benchmark functions. Importantly, ETOSO’s performance was consistent even with a significant reduction in population size (from 30 to 10) in a high-dimensional problem space (D = 50). This insensitivity to changes in population size demonstrates ETOSO’s robustness and scalability, making it a powerful and reliable optimization algorithm for challenging optimization problems.

6.4. Computational Complexity and Overhead Analysis

This section presents a comprehensive analysis of the computational complexity and practical overhead inherent in the optimization algorithms under study. While original research papers often lack formal complexity analyses, we derive these complexities based on the algorithms’ structural operations and scaling properties with problem dimensionality (D), population size (ps), and function evaluations (FE). The computational complexity of these algorithms is fundamentally determined by FE, D, and, for ETOSO, ps. All algorithms, except ETOSO, exhibit a complexity of O(FE⋅D), reflecting the positional updates in D-dimensional space. ETOSO, in contrast, demonstrates a complexity of O(FE⋅(D + log ps)), where the D term arises from positional updates and the log ps term stems from the sorting operations for neighbor selection. In practical terms, the log ps term in ETOSO has a minimal impact on scalability for typical population sizes, particularly in high-dimensional problems where D significantly outweighs log ps. For instance, with ps = 30, log ps = ln 30 ≈ 3.4, which is negligible compared to D = 500. In this case, the D term dominates ETOSO’s complexity, making the log ps term insignificant. However, for ps = 100 and D = 10, log ps ≈ 4.6, which becomes more relevant because it is nearly half the size of D. In such lower-dimensional problems, the log ps term contributes more significantly to the overall complexity. Nevertheless, for large-dimension optimization scenarios, the D term continues to dominate ETOSO’s complexity, rendering the log ps term relatively minor.
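The dominance argument can be checked with quick arithmetic; the quoted values correspond to the natural logarithm:

```python
# Relative weight of the log(ps) sorting term in a per-evaluation cost of
# D + log(ps), for the two cases discussed in the text.
import math

for ps, D in [(30, 500), (100, 10)]:
    log_ps = math.log(ps)               # ln(30) ~ 3.4, ln(100) ~ 4.6
    share = log_ps / (D + log_ps)       # fraction of the cost due to sorting
    print(f"ps={ps:>3}, D={D:>3}: log(ps)={log_ps:.2f}, sorting share={share:.1%}")
```

For ps = 30 and D = 500 the sorting term accounts for well under 1% of the per-evaluation cost, while for ps = 100 and D = 10 it rises to roughly a third, matching the discussion above.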
Beyond theoretical complexity, practical overhead from hidden operations and constant factors significantly influences algorithm performance. These hidden operations include computationally expensive mathematical functions (trigonometric, exponential, logarithmic), sorting, neighbor selection, random number generation, and conditional logic. Based on code analysis, the algorithms can be categorized by overhead, as shown in Table 15.
This direct correlation between overhead classification and algorithm performance, despite similar O(FE⋅D) complexities for all algorithms other than ETOSO, underscores the importance of considering practical overhead alongside theoretical complexity. Algorithms with low overhead, such as HHO, exhibit faster performance, while those with high overhead, like FPA and ROA, are slower. ETOSO’s slightly higher theoretical complexity due to the log ps term is mitigated by its efficient implementation. This analysis reinforces that Big O complexity alone does not fully explain observed performance differences, highlighting the necessity of assessing practical overhead for accurate algorithm evaluation. Indeed, the empirical average-speed results in the last column of Table 14 corroborate the overhead classification in Table 15.

7. Discussion and Limitations

ETOSO, with a much simpler structure, presents significant advancements over its predecessor, TOSO, as demonstrated by an extensive benchmarking study against 25 competitive algorithms. The empirical evaluations across diverse benchmark functions indicate improved performance, suggesting that ETOSO is a strong contender in the optimization landscape.
Initial comparisons between TOSO and ETOSO reveal that both algorithms perform well on simpler unimodal functions, such as the Sphere and Zakharov benchmarks, consistently achieving optimal values of zero across all tested dimensions. However, when transitioning to multimodal functions like Schwefel’s Problem 2.26 and Rosenbrock, a significant distinction emerges. While TOSO demonstrates competent performance, ETOSO sets itself apart with enhanced speed and robustness at dimensions D = 5 and D = 10. This finding underscores the effectiveness of ETOSO’s design improvements, particularly the linear weight increment, which promotes a more stable and gradual exploitation strategy.
The extensive evaluations against 25 leading algorithms further solidify ETOSO’s position as a frontrunner in swarm optimization. The comparative results reveal that ETOSO outperformed competitors such as GWO and HHO, achieving the best overall performance rank among the tested algorithms. Particularly in high-dimensional spaces (D = 30, D = 50, D = 100, and D = 200), ETOSO not only maintained a competitive edge in solution quality but also exhibited faster convergence rates. The performance metrics indicate that ETOSO consistently converges to optimal or near-optimal solutions even in challenging multimodal landscapes characterized by multiple peaks and valleys.
A notable advantage of ETOSO is its robustness; it demonstrated lower variability in results, suggesting that the algorithm is less susceptible to fluctuations and thus offers greater consistency across multiple replications. This reliability is especially significant in practical optimization scenarios where the stability of results can influence decision-making processes. Furthermore, ETOSO’s adaptability is underscored by its performance on shifted and rotated function benchmarks. The algorithm’s ability to generalize across these challenging transformations further confirms its robustness and versatility compared to both TOSO and the competing algorithms. This adaptability points to the potential of ETOSO to effectively tackle a wide variety of real-world optimization problems.
The comprehensive statistical analyses using the Wilcoxon signed-rank test and Cliff’s Delta provided robust evidence of ETOSO’s superior performance, highlighting statistically significant differences in its effectiveness compared to other algorithms across various benchmarks. Additionally, the examination of computational complexity revealed that while ETOSO’s theoretical complexity includes a log (ps) term, its practical efficiency and reduced overhead allow it to maintain competitive execution speeds in high-dimensional optimization tasks.
While ETOSO demonstrates significant performance improvements, several limitations warrant further study. First, a more thorough analysis of ETOSO’s computational complexity is needed to fully assess its scalability and practicality in resource-constrained settings. This analysis should include a detailed comparison of its computational cost (time and memory usage) against other leading algorithms to determine its suitability for various application scenarios and computational budgets. Second, the benchmark functions used, while diverse, may not fully encompass the wide range of complexities inherent in real-world optimization problems. Therefore, evaluating ETOSO’s performance on a broader set of benchmarks, such as those suggested by CEC2017 and CEC2019, in addition to real-world datasets, is crucial to confirm its generalizability and robustness in less-idealized scenarios and to identify any potential limitations in diverse applications.

8. Conclusions and Future Research

This study introduces ETOSO as a noteworthy advancement in swarm optimization, demonstrating improvements over its predecessor, TOSO, and performing competitively against a range of 25 algorithms. The enhancements, such as the implementation of a linear weight increment for exploitation and simplification of the algorithm to become parameter-free, have led to faster convergence rates and greater robustness. The benchmarking results and statistical analyses suggest that ETOSO shows encouraging performance across various benchmarks and dimensions, indicating potential advantages in the optimization landscape. Furthermore, ETOSO’s ability to sustain good performance with reduced population sizes implies its adaptability for practical applications, making it a viable option for complex optimization tasks in fields like engineering, finance, and artificial intelligence. However, further investigation into its performance in real-world scenarios and additional problem contexts would be beneficial to fully understand its capabilities and limitations.
Future research should explore the application of ETOSO in dynamic and noisy environments while assessing its performance in real-world settings. Investigating the unique mechanisms behind ETOSO’s success may offer deeper insights into optimization strategies that can benefit various applications. Specifically, applying ETOSO to practical problems is essential, including its effectiveness in engineering design optimization (e.g., optimizing designs under constraints or for multi-objective optimization), supply chain management (enhancing efficiency through inventory and transportation considerations), financial modeling (improving market predictions and portfolio optimization), and machine learning (optimizing hyperparameters). Additionally, conducting a formal theoretical analysis of ETOSO’s convergence characteristics and exploration–exploitation balance, alongside comparisons to other top-performing algorithms, will be crucial for understanding its advantages. Evaluating ETOSO’s robustness under dynamic conditions, such as time-varying objective functions and uncertain parameters, will also help determine its resilience in less predictable real-world scenarios.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is contained within the article.

Acknowledgments

The author acknowledges the Deanship of Graduate Studies and Research of the Islamic University of Madinah for their support with publication-related fees.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Rani, R.; Jain, S.; Garg, H. A review of nature-inspired algorithms on single-objective optimization problems from 2019 to 2023. Artif. Intell. Rev. 2024, 57, 126. [Google Scholar] [CrossRef]
  2. Jiao, L.; Zhao, J.; Wang, C.; Liu, X.; Liu, F.; Li, L.; Shang, R.; Li, Y.; Ma, W.; Yang, S. Nature-Inspired Intelligent Computing: A Comprehensive Survey. Research 2024, 7, 0442. [Google Scholar] [CrossRef]
  3. Vinod Chandra, V.C.; Anand, H.S. Nature inspired meta heuristic algorithms for optimization problems. Computing 2022, 104, 251–269. [Google Scholar] [CrossRef]
  4. Hafiz, F.M.F.; Abdennour, A. A team-oriented approach to particle swarms. Appl. Soft Comput. 2013, 13, 3776–3791. [Google Scholar] [CrossRef]
  5. Yang, X.-S. A New Metaheuristic Bat-Inspired Algorithm. arXiv 2010, arXiv:1004.4170. [Google Scholar] [CrossRef]
  6. Wang, Y.; Wang, P.; Zhang, J.; Cui, Z.; Cai, X.; Zhang, W.; Chen, J. A Novel Bat Algorithm with Multiple Strategies Coupling for Numerical Optimization. Mathematics 2019, 7, 135. [Google Scholar] [CrossRef]
  7. Pham, D.T.; Ghanbarzadeh, A.; Koç, E.; Otri, S.; Rahim, S.; Zaidi, M. The Bees Algorithm—A Novel Tool for Complex Optimisation Problems. In Intelligent Production Machines and Systems; Elsevier: Amsterdam, The Netherlands, 2006; pp. 454–459. [Google Scholar] [CrossRef]
  8. Yang, X.-S.; Karamanoglu, M.; He, X. Flower pollination algorithm: A novel approach for multiobjective optimization. Eng. Optim. 2014, 46, 1222–1237. [Google Scholar] [CrossRef]
  9. Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  10. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  11. Yang, X.-S.; Deb, S. Cuckoo Search via Levy Flights. arXiv 2010, arXiv:1003.1594. [Google Scholar] [CrossRef]
  12. Arora, S.; Singh, S. Butterfly optimization algorithm: A novel approach for global optimization. Soft Comput. 2019, 23, 715–734. [Google Scholar] [CrossRef]
  13. Hakli, H. A novel approach based on elephant herding optimization for constrained optimization problems. Selcuk Univ. J. Eng. Sci. Technol. 2019, 7, 405–419. [Google Scholar] [CrossRef]
  14. Yang, X.-S. Firefly Algorithm, Stochastic Test Functions and Design Optimisation. arXiv 2010, arXiv:1003.1409. [Google Scholar] [CrossRef]
  15. Yang, X.S.; He, X. Firefly algorithm: Recent advances and applications. Int. J. Swarm Intell. 2013, 1, 36. [Google Scholar] [CrossRef]
  16. Meraihi, Y.; Gabis, A.B.; Mirjalili, S.; Ramdane-Cherif, A. Grasshopper Optimization Algorithm: Theory, Variants, and Applications. IEEE Access 2021, 9, 50001–50024. [Google Scholar] [CrossRef]
  17. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  18. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  19. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  20. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  21. Wang, Z.; Ding, H.; Yang, Z.; Li, B.; Guan, Z.; Bao, L. Rank-driven salp swarm algorithm with orthogonal opposition-based learning for global optimization. Appl. Intell. 2022, 52, 7922–7964. [Google Scholar] [CrossRef]
  22. Askarzadeh, A. A novel metaheuristic method for solving constrained engineering optimization problems: Crow search algorithm. Comput. Struct. 2016, 169, 1–12. [Google Scholar] [CrossRef]
  23. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  24. Rao, R.V.; Waghmare, G.G. Complex constrained design optimisation using an elitist teaching-learning-based optimisation algorithm. Int. Metaheuristics J. 2014, 3, 81. [Google Scholar] [CrossRef]
  25. Karami, H.; Anaraki, M.V.; Farzin, S.; Mirjalili, S. Flow Direction Algorithm (FDA): A Novel Optimization Approach for Solving Optimization Problems. Comput. Ind. Eng. 2021, 156, 107224. [Google Scholar] [CrossRef]
  26. Rashedi, E.; Nezamabadi-pour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  27. Brabazon, A.; Cui, W.; O’Neill, M. The raven roosting optimisation algorithm. Soft Comput. 2016, 20, 525–545. [Google Scholar] [CrossRef]
  28. Halsema, M.; Vermetten, D.; Bäck, T.; Van Stein, N. A Critical Analysis of Raven Roost Optimization. In Proceedings of the Genetic and Evolutionary Computation Conference Companion, Melbourne, Australia, 14–18 July 2024; pp. 1993–2001. [Google Scholar] [CrossRef]
  29. Devi, R.V.; Sathya, S.S.; Kumar, N. Monkey algorithm for robot path planning and vehicle routing problems. In Proceedings of the 2017 International Conference on Information Communication and Embedded Systems (ICICES), Chennai, India, 23–24 February 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–6. [Google Scholar] [CrossRef]
  30. Mucherino, A.; Seref, O.; Seref, O.; Kundakcioglu, O.E.; Pardalos, P. Monkey search: A novel metaheuristic search for global optimization. In AIP Conference Proceedings; AIP: Woodbury, NY, USA, 2007; pp. 162–173. [Google Scholar] [CrossRef]
  31. Zhao, R.; Tang, W. Monkey Algorithm for Global Numerical Optimization. J. Uncertain Syst. 2008, 2, 165–176. [Google Scholar]
  32. Devi, R.V.; Sathya, S.S. Monkey Behavior Based Algorithms—A Survey. Int. J. Intell. Syst. Appl. 2017, 9, 67–86. [Google Scholar] [CrossRef]
  33. Wang, S.; Rao, H.; Wen, C.; Jia, H.; Wu, D.; Liu, Q.; Abualigah, L. Improved Remora Optimization Algorithm with Mutualistic Strategy for Solving Constrained Engineering Optimization Problems. Processes 2022, 10, 2606. [Google Scholar] [CrossRef]
  34. Zheng, R.; Jia, H.; Abualigah, L.; Wang, S.; Wu, D. An improved remora optimization algorithm with autonomous foraging mechanism for global optimization problems. Math. Biosci. Eng. 2022, 19, 3994–4037. [Google Scholar] [CrossRef]
  35. Storn, R.; Price, K. Differential Evolution—A simple and efficient adaptive scheme for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  36. Yu, H.; Wang, Y.; Jia, H.; Abualigah, L. Modified prairie dog optimization algorithm for global optimization and constrained engineering problems. Math. Biosci. Eng. 2023, 20, 19086–19132. [Google Scholar]
  37. Dhiman, G.; Kumar, V. Seagull optimization algorithm: Theory and its applications for large-scale industrial engineering problems. Knowl.-Based Syst. 2019, 165, 169–196. [Google Scholar] [CrossRef]
  38. Yu, S.; Zhu, S.; Ma, Y.; Mao, D. A variable step size firefly algorithm for numerical optimization. Appl. Math. Comput. 2015, 263, 214–220. [Google Scholar] [CrossRef]
  39. Ibrahim, R.A.; Ewees, A.A.; Oliva, D.; Abd Elaziz, M.; Lu, S. Improved salp swarm algorithm based on particle swarm optimization for feature selection. J. Ambient Intell. Humaniz. Comput. 2019, 10, 3155–3169. [Google Scholar] [CrossRef]
  40. Nenavath, H.; Jatoth, R.K. Hybrid SCA–TLBO: A novel optimization algorithm for global optimization and visual tracking. Neural Comput. Appl. 2019, 31, 5497–5526. [Google Scholar] [CrossRef]
  41. Long, W.; Cai, S.; Jiao, J.; Xu, M.; Wu, T. A new hybrid algorithm based on grey wolf optimizer and cuckoo search for parameter extraction of solar photovoltaic models. Energy Convers. Manag. 2020, 203, 112243. [Google Scholar] [CrossRef]
  42. Nabil, E. A Modified Flower Pollination Algorithm for Global Optimization. Expert Syst. Appl. 2016, 57, 192–203. [Google Scholar] [CrossRef]
  43. Rao, R.V.; Patel, V. An improved teaching-learning-based optimization algorithm for solving unconstrained optimization problems. Sci. Iran. 2012, 20, 710–720. [Google Scholar] [CrossRef]
  44. Choo, Y.H.; Cai, Z.; Le, V.; Johnstone, M.; Creighton, D.; Lim, C.P. Enhancing the Harris’ Hawk optimiser for single- and multi-objective optimisation. Soft Comput. 2023, 27, 16675–16715. [Google Scholar] [CrossRef]
  45. Sun, Y.; Huang, Q.; Liu, T.; Cheng, Y.; Li, Y. Multi-Strategy Enhanced Harris Hawks Optimization for Global Optimization and Deep Learning-Based Channel Estimation Problems. Mathematics 2023, 11, 390. [Google Scholar] [CrossRef]
  46. Yang, T.; Fang, J.; Jia, C.; Liu, Z.; Liu, Y. An improved harris hawks optimization algorithm based on chaotic sequence and opposite elite learning mechanism. PLoS ONE 2023, 18, e0281636. [Google Scholar] [CrossRef] [PubMed]
  47. Huang, L.; Fu, Q.; Tong, N. An Improved Harris Hawks Optimization Algorithm and Its Application in Grid Map Path Planning. Biomimetics 2023, 8, 428. [Google Scholar] [CrossRef] [PubMed]
  48. He, J.; Peng, Z.; Zhang, L.; Zuo, L.; Cui, D.; Li, Q. Enhanced crow search algorithm with multi-stage search integration for global optimization problems. Soft Comput. 2023, 27, 14877–14907. [Google Scholar] [CrossRef]
  49. Karaboga, D.; Akay, B. A modified Artificial Bee Colony (ABC) algorithm for constrained optimization problems. Appl. Soft Comput. 2011, 11, 3021–3031. [Google Scholar] [CrossRef]
  50. Wang, M.; Chen, H.; Yang, B.; Zhao, X.; Hu, L.; Cai, Z.; Huang, H.; Tong, C. Toward an optimal kernel extreme learning machine using a chaotic moth-flame optimization strategy with applications in medical diagnoses. Neurocomputing 2017, 267, 69–84. [Google Scholar] [CrossRef]
  51. Peng, L.; Cai, Z.; Heidari, A.A.; Zhang, L.; Chen, H. Hierarchical Harris hawks optimizer for feature selection. J. Adv. Res. 2023, 53, 261–278. [Google Scholar] [CrossRef]
  52. Li, Y.; Li, W.; Yuan, Q.; Shi, H.; Han, M. Multi-strategy Improved Seagull Optimization Algorithm. Int. J. Comput. Intell. Syst. 2023, 16, 154. [Google Scholar] [CrossRef]
  53. Ewees, A.A.; Abd Elaziz, M.; Houssein, E.H. Improved grasshopper optimization algorithm using opposition-based learning. Expert Syst. Appl. 2018, 112, 156–172. [Google Scholar] [CrossRef]
  54. Tan, L.; Liu, D.; Liu, X.; Wu, W.; Jiang, H. Efficient Grey Wolf Optimization: A High-Performance Optimizer with Reduced Memory Usage and Accelerated Convergence. Comput. Sci. Math. 2025. [Google Scholar] [CrossRef]
  55. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; pp. 1942–1948. [Google Scholar]
  56. Luo, W.; Lin, X.; Li, C.; Yang, S.; Shi, Y. Benchmark Functions for CEC 2022 Competition on Seeking Multiple Optima in Dynamic Environments. arXiv 2022, arXiv:2201.00523. [Google Scholar] [CrossRef]
  57. Biedrzycki, R. Revisiting CEC 2022 ranking: A new ranking method and influence of parameter tuning. Swarm Evol. Comput. 2024, 89, 101623. [Google Scholar] [CrossRef]
  58. Tang, K.; Li, X.; Suganthan, P.N.; Yang, Z.; Weise, T. Benchmark Functions for the CEC’2008 Special Session and Competition on Large Scale Global Optimization. In Technical Report; USTC: Hefei, China, 2007. [Google Scholar]
  59. Liang, J.J.; Qin, A.K.; Suganthan, P.N.; Baskar, S. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans. Evol. Comput. 2006, 10, 281–295. [Google Scholar] [CrossRef]
Figure 1. Exploitation function of ETOSO.
Figure 2. Pseudocode of ETOSO.
Figure 3. Three-dimensional plots of the benchmark functions.
Figure 4. ETOSO vs. TOSO convergence plots for D = 30 and 25 replications.
Figure 5. Convergence plots for D = 30 for f7 (multimodal) and f15 (rotated) averaged over 25 replications.
Table 1. Benchmark functions (D is the dimension).
Test FunctionRangeOpt. xfmin
Unimodalf1: Sphere f 1 ( x ) = i = 1 D x i 2 [−100, 100]{0} D0
f2: Zakharov f 2 ( x ) = i = 1 D x i 2 + i = 1 D 0.5 i x i 2 + i = 1 D 0.5 i x i 4 [−5, 10]{0} D0
f3: Sum squares f 3 ( x ) = i = 1 D i x i 2 [−10, 10]{0} D0
Multimodalf4: Schwefel’s Problem 2.22 f 4 x = i = 1 D x i + i = 1 D x i [−10, 10]{0} D0
f5: Schwefel’s Problem 2.26 f 5 x = i = 1 D x i s i n x i [−500, 500]{420.96} D−418.9829 * D
f6: Rosenbrock f 6 x = i = 1 D 1 100 x i + 1 x i 2 2 + x i 1 2 [−30, 30]{1} D0
f7: Ackley f 7 x = 20 e x p 0.2 1 D i = 1 D x i 2 e x p 1 D i = 1 D cos 2 π x i + 20 + e [−32, 32]{0} D0
f8: Rastrigin f 8 x = i = 1 D x i 2 10 cos 2 π x i + 10 [−5.12, 5.12]{0} D0
f9: Griewank f 9 x = 1 4000 i = 1 D x i 2 i = 1 D c o s x i i + 1 [−600, 600]{0} D0
f10: Bent cigar f 10 x = x 1 2 + 10 6 i = 2 D x i 2 [−100, 100]{0} D0
Shifted af11: Shifted Ackley f 11 x = 20 e x p 0.2 1 D i = 1 D z i 2 e x p 1 D i = 1 D cos 2 π z i + 20 + e + f b 1 [−32, 32]o−140
f12: Shifted Rosenbrock f 12 x = i = 1 D 1 100 z i + 1 z i 2 2 + z i 1 2 + f b 2 [−100, 100]o390
f13: Shifted Rastrigin f 13 x = i = 1 D z i 2 10 cos 2 π z i + 10 + f b 3 [−5, 5]o−330
f14: Shifted Griewank f 14 x = 1 4000 i = 1 D z i 2 i = 1 D c o s z i i + 1 + f b 4 [−600, 600]o−180
Rotated bf15: Rotated Griewank f 15 x = 1 4000 i = 1 D y i 2 i = 1 D c o s y i i + 1 [−600, 600]{0} D0
a z = x − o, o is a shift added to global optimum and fbi is a bias added to the original function [58]. b y = M*x, where M is an orthogonal rotation matrix [59].
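As a sanity check when reproducing these benchmarks, several of the functions in Table 1 can be coded directly from their definitions. The sketch below is illustrative only (not the author's test harness); the shift vector o and rotation matrix M for f11–f15 come from the cited CEC data sets and are not shown here.

```python
import math

def sphere(x):       # f1: sum of squares
    return sum(xi * xi for xi in x)

def ackley(x):       # f7: note the square root inside the first exponential
    d = len(x)
    s1 = sum(xi * xi for xi in x) / d
    s2 = sum(math.cos(2 * math.pi * xi) for xi in x) / d
    return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e

def rastrigin(x):    # f8
    return sum(xi * xi - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

def griewank(x):     # f9: product term divides each coordinate by sqrt(i)
    s = sum(xi * xi for xi in x) / 4000
    p = 1.0
    for i, xi in enumerate(x, start=1):
        p *= math.cos(xi / math.sqrt(i))
    return s - p + 1
```

All four evaluate to their known minimum of 0 at the origin, which is a quick way to validate an implementation before running a full benchmark.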
Table 2. Optimal parameter settings for comparative algorithms.

| Algorithm | Parameters |
|---|---|
| Bat Algorithm (BAT) [5,6] | A = 0.95, r0 = 0.9, Qmin = 0, Qmax = 5, α = 0.99, γ = 0.9, strategies = 8 |
| Bees Algorithm (BEE) [7] | m = 3, e = 1, nep = 7, nsp = 2, ngh = 3 |
| Butterfly (BOA) [12] | c = 0.01, a = 0.05, p = 0.5 |
| Crow Search (CSA) [22] | AP = 0.1, FL = 2 |
| Cuckoo Search (CS) [11] | pa = 0.25, β = 1 |
| Differential Evolution (DE) [35] | CR = 0.9, F = 0.5 |
| Elephant Herding (EHO) [13] | α = rand(0, 1), β = rand(0, 1) |
| Firefly (FA) [14,15] | α = 1, γ = 1, β0 = 0.2 |
| Flow Direction (FDA) [25] | α = 25, β = 3 |
| Flower Pollination (FPA) [8] | p = 0.8, λ = 1.5 |
| Grasshopper (GOA) [16] | C decreasing linearly from 1 to 0.00001 |
| Gravity Search (GSA) [26] | G0 = 100, α = 20 |
| Grey Wolf Optimizer (GWO) [23] | a = 2 to 0 linearly, c = 1 − 0.00009·t/T, t = current iteration, T = max iterations |
| Harris Hawks (HHO) [17] | E0 = rand(−1, 1), E = 2·E0·(1 − t/T), r = rand(0, 1) |
| Moth–Flame (MFO) [18] | b = −1, t = rand(−1, 1) |
| Monkey King (MKA) [29,30,31,32] | population = 10, step length = 1, climb number = 10, eyesight = 0.1, somersault interval = [−1, 1] |
| Prairie Dog (PDO) [36] | ρ = 0.1, δ = 0.005 |
| Remora (ROA) [33,34] | A0 = 0.9, Af = 0.1, Ps = 0.499 |
| Raven Roosting (RRO) [27,28] | R = search radius, D = dimension, Nsteps = 5, Pfollow = 0.2, Pstop = 0.1, Rpcpt = 3.6·R·sqrt(D), Rleader = 1.8·R·sqrt(D) |
| Sine Cosine (SCA) [9] | a = 2, r1 = a(1 − t/T), r2 = rand(0, 2π), r3 = rand(0, 2), r4 = rand(0, 1), p = 0.5, t = current iteration, T = max iterations |
| Seagull Optimizer (SOA) [37] | fc = 2, u = 1, v = 1 |
| Slime Mold (SMA) [19] | vamax = 1, vamin = 0.01, va = vamax − (vamax − vamin)·t/T |
| Salp Swarm (SSA) [20] | C = 2e^−(4t/T)² |
| Teaching–Learning (TLBO) [24] | No algorithm-specific parameters |
| Whale Optimizer (WOA) [10] | a = 2 to 0 linearly decreasing, l = rand(−1, 1), A = a(2l − 1), C = rand(0, 2), b = 1 |
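To illustrate how the settings in Table 2 are consumed in practice, here is a minimal DE/rand/1/bin sketch using the listed CR = 0.9 and F = 0.5. The population size, bounds, generation count, and objective in the usage example are illustrative choices, not values taken from the paper.

```python
import random

def de_optimize(f, bounds, pop_size=20, max_gens=200, F=0.5, CR=0.9, seed=1):
    """Minimal DE/rand/1/bin; returns (best solution, best fitness)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(ind) for ind in pop]
    for _ in range(max_gens):
        for i in range(pop_size):
            # three distinct donors, none equal to the target index
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)           # force at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)      # clamp to the search range
                else:
                    v = pop[i][j]
                trial.append(v)
            ft = f(trial)
            if ft <= fit[i]:                     # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```

For example, on a 5-dimensional sphere function with bounds [−100, 100], this sketch converges to a near-zero objective value within a few hundred generations.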
Table 3. ETOSO versus TOSO for f1 to f10 with D = 2, 5, and 10 (5000D FEs and 25 replications).

| Function | Metric | TOSO (D = 2) | ETOSO (D = 2) | TOSO (D = 5) | ETOSO (D = 5) | TOSO (D = 10) | ETOSO (D = 10) |
|---|---|---|---|---|---|---|---|
| f1 | Opt. | 0 | 0 | 0 | 0 | 0 | 0 |
| | Mean | 0 | 0 | 0 | 0 | 0 | 0 |
| | Std. | 0 | 0 | 0 | 0 | 0 | 0 |
| | Speed | 1.10 × 10^−2 | 1.08 × 10^−2 | 2.89 × 10^−2 | 2.73 × 10^−2 | 1.82 × 10^−1 | 5.81 × 10^−2 |
| f2 | Opt. | 0 | 0 | 0 | 0 | 0 | 0 |
| | Mean | 0 | 0 | 0 | 0 | 0 | 0 |
| | Std. | 0 | 0 | 0 | 0 | 0 | 0 |
| | Speed | 1.21 × 10^−2 | 1.15 × 10^−2 | 3.65 × 10^−2 | 1.00 × 10^−1 | 6.58 × 10^−2 | 6.14 × 10^−2 |
| f3 | Opt. | 0 | 0 | 0 | 0 | 0 | 0 |
| | Mean | 0 | 0 | 0 | 0 | 0 | 0 |
| | Std. | 0 | 0 | 0 | 0 | 0 | 0 |
| | Speed | 1.13 × 10^−2 | 1.06 × 10^−2 | 8.09 × 10^−2 | 8.48 × 10^−2 | 6.14 × 10^−2 | 6.07 × 10^−2 |
| f4 | Opt. | 0 | 0 | 0 | 0 | 0 | 0 |
| | Mean | 0 | 0 | 0 | 0 | 0 | 0 |
| | Std. | 0 | 0 | 0 | 0 | 0 | 0 |
| | Speed | 1.11 × 10^−2 | 1.08 × 10^−2 | 7.63 × 10^−2 | 1.02 × 10^−1 | 5.91 × 10^−2 | 5.75 × 10^−2 |
| f5 | Opt. | −8.38 × 10^2 | −8.38 × 10^2 | −2.09 × 10^3 | −2.09 × 10^3 | −4.19 × 10^3 | −4.19 × 10^3 |
| | Mean | −8.38 × 10^2 | −8.38 × 10^2 | −2.09 × 10^3 | −2.09 × 10^3 | −4.19 × 10^3 | −4.19 × 10^3 |
| | Std. | 3.78 × 10^−5 | 2.09 × 10^−5 | 1.11 × 10^−4 | 8.08 × 10^−5 | 8.93 × 10^−5 | 1.41 × 10^−4 |
| | Speed | 1.14 × 10^−2 | 1.13 × 10^−2 | 1.10 × 10^−1 | 8.46 × 10^−2 | 8.02 × 10^−2 | 7.64 × 10^−2 |
| f6 | Opt. | 0 | 0 | 0 | 0 | 0 | 0 |
| | Mean | 3.35 × 10^−1 | 3.43 × 10^−1 | 2.38 × 10^0 | 2.41 × 10^0 | 7.33 × 10^0 | 7.38 × 10^0 |
| | Std. | 3.06 × 10^−2 | 2.88 × 10^−2 | 1.47 × 10^−1 | 1.26 × 10^−1 | 1.38 × 10^−1 | 1.66 × 10^−1 |
| | Speed | 1.18 × 10^−2 | 1.08 × 10^−2 | 1.01 × 10^−1 | 1.03 × 10^−1 | 6.18 × 10^−2 | 5.81 × 10^−2 |
| f7 | Opt. | 0 | 0 | 0 | 0 | 0 | 0 |
| | Mean | 0 | 0 | 0 | 0 | 0 | 0 |
| | Std. | 0 | 0 | 0 | 0 | 0 | 0 |
| | Speed | 1.14 × 10^−2 | 1.13 × 10^−2 | 1.06 × 10^−1 | 1.10 × 10^−1 | 8.02 × 10^−2 | 7.64 × 10^−2 |
| f8 | Opt. | 0 | 0 | 0 | 0 | 0 | 0 |
| | Mean | 0 | 0 | 0 | 0 | 0 | 0 |
| | Std. | 0 | 0 | 0 | 0 | 0 | 0 |
| | Speed | 1.12 × 10^−2 | 1.13 × 10^−2 | 9.45 × 10^−2 | 9.47 × 10^−2 | 7.72 × 10^−2 | 7.41 × 10^−2 |
| f9 | Opt. | 0 | 0 | 0 | 0 | 0 | 0 |
| | Mean | 0 | 0 | 0 | 0 | 0 | 0 |
| | Std. | 0 | 0 | 0 | 0 | 0 | 0 |
| | Speed | 1.27 × 10^−2 | 1.20 × 10^−2 | 7.91 × 10^−2 | 1.03 × 10^−1 | 6.54 × 10^−2 | 5.92 × 10^−2 |
| f10 | Opt. | 0 | 0 | 0 | 0 | 0 | 0 |
| | Mean | 0 | 0 | 0 | 0 | 0 | 0 |
| | Std. | 0 | 0 | 0 | 0 | 0 | 0 |
| | Speed | 1.21 × 10^−2 | 1.18 × 10^−2 | 1.10 × 10^−1 | 9.19 × 10^−2 | 5.80 × 10^−2 | 5.66 × 10^−2 |
Table 4. ETOSO versus TOSO for f11 to f15 with D = 30, 50, 100, and 200 (5000D FEs and 25 replications).

| Function | Metric | TOSO (D = 30) | ETOSO (D = 30) | TOSO (D = 50) | ETOSO (D = 50) | TOSO (D = 100) | ETOSO (D = 100) | TOSO (D = 200) | ETOSO (D = 200) |
|---|---|---|---|---|---|---|---|---|---|
| f11 | Opt. | −140.00 | −140.00 | −140.00 | −140.00 | −140.00 | −140.00 | −140.00 | −140.00 |
| | Mean | −140.00 | −140.00 | −140.00 | −140.00 | −140.00 | −140.00 | −140.00 | −140.00 |
| | Std. | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| | Speed | 0.31 | 0.30 | 0.57 | 0.55 | 1.37 | 1.32 | 4.09 | 3.74 |
| f12 | Opt. | 390.00 | 390.00 | 390.00 | 390.00 | 390.00 | 390.00 | 390.00 | 390.00 |
| | Mean | 418.24 | 418.24 | 438.05 | 438.05 | 487.57 | 487.58 | 586.63 | 586.65 |
| | Std. | 0.00 | 0.01 | 0.01 | 0.00 | 0.01 | 0.01 | 0.01 | 0.01 |
| | Speed | 0.21 | 0.21 | 0.40 | 0.38 | 1.01 | 0.96 | 2.85 | 2.71 |
| f13 | Opt. | −330.00 | −330.00 | −330.00 | −330.00 | −330.00 | −330.00 | −330.00 | −330.00 |
| | Mean | −330.00 | −330.00 | −330.00 | −330.00 | −330.00 | −330.00 | −330.00 | −330.00 |
| | Std. | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| | Speed | 0.31 | 0.29 | 0.56 | 0.54 | 1.32 | 1.28 | 3.79 | 3.38 |
| f14 | Opt. | −180.00 | −180.00 | −180.00 | −180.00 | −180.00 | −180.00 | −180.00 | −180.00 |
| | Mean | −179.98 | −179.96 | −179.98 | −179.98 | −179.99 | −179.98 | −179.99 | −179.98 |
| | Std. | 0.02 | 0.04 | 0.03 | 0.03 | 0.01 | 0.02 | 0.03 | 0.02 |
| | Speed | 0.27 | 0.26 | 0.55 | 0.52 | 1.57 | 1.45 | 4.96 | 4.57 |
| f15 | Opt. | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| | Mean | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| | Std. | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| | Speed | 0.28 | 0.26 | 0.66 | 0.61 | 1.89 | 1.72 | 6.33 | 5.79 |
Table 5. Benchmark results for functions f1–f8 with D = 5 (using 5000D function evaluations and 25 replications).

| Alg. | Metric | f1 | f2 | f3 | f4 | f5 | f6 | f7 | f8 |
|---|---|---|---|---|---|---|---|---|---|
| Opt. | | 0 | 0 | 0 | 0 | −2.09 × 10^3 | 0 | 0 | 0 |
| ETOSO | Best | 0 | 0 | 0 | 0 | −2.09 × 10^3 | 2.11 | 0 | 0 |
| | Mean | 0 | 0 | 0 | 0 | −2.09 × 10^3 | 2.41 | 0 | 0 |
| BAT | Best | 9.98 × 10^−11 | 4.73 × 10^−10 | 7.71 × 10^−10 | 1.27 × 10^−5 | −1.98 × 10^3 | 3.47 × 10^−2 | 2.38 × 10^−5 | 2.98 |
| | Mean | 8.55 × 10^−10 | 3.30 × 10^−9 | 5.01 × 10^−9 | 6.64 × 10^−5 | −1.56 × 10^3 | 1.17 × 10^2 | 2.51 × 10^−1 | 1.03 × 10^1 |
| BEE | Best | 3.00 × 10^−4 | 2.00 × 10^−4 | 3.00 × 10^−4 | 2.65 × 10^−2 | −1.98 × 10^3 | 1.16 × 10^−1 | 1.38 × 10^1 | 3.22 |
| | Mean | 6.00 × 10^−4 | 8.00 × 10^−4 | 1.30 × 10^−3 | 4.23 × 10^−2 | −1.58 × 10^3 | 4.66 × 10^−1 | 1.71 × 10^1 | 1.35 × 10^1 |
| BOA | Best | 6.50 × 10^−3 | 1.70 × 10^−5 | 3.17 × 10^−5 | 1.13 × 10^−2 | −1.57 × 10^3 | 1.92 × 10^−1 | 1.39 × 10^1 | 3.00 |
| | Mean | 1.92 × 10^−2 | 5.71 × 10^−5 | 2.00 × 10^−4 | 2.39 × 10^−2 | −1.13 × 10^3 | 1.62 × 10^1 | 1.71 × 10^1 | 2.13 × 10^1 |
| CSA | Best | 1.32 × 10^1 | 1.04 × 10^−1 | 3.40 × 10^−1 | 6.11 × 10^−1 | −2.02 × 10^3 | 4.49 × 10^1 | 3.78 | 2.83 |
| | Mean | 4.70 × 10^1 | 6.38 × 10^−1 | 1.06 | 1.04 | −1.73 × 10^3 | 8.27 × 10^2 | 5.01 | 7.35 |
| CS | Best | 2.58 × 10^2 | 1.26 | 5.11 | 2.68 | −1.73 × 10^3 | 2.51 × 10^3 | 8.51 | 1.02 × 10^1 |
| | Mean | 4.99 × 10^2 | 6.19 | 1.25 × 10^1 | 4.10 | −1.58 × 10^3 | 2.28 × 10^4 | 1.07 × 10^1 | 1.61 × 10^1 |
| DE | Best | 2.94 × 10^−42 | 1.97 × 10^−36 | 1.05 × 10^−43 | 1.49 × 10^−21 | −2.09 × 10^3 | 1.40 × 10^−23 | 0 | 0 |
| | Mean | 2.15 × 10^−39 | 5.54 × 10^−34 | 3.06 × 10^−41 | 3.45 × 10^−20 | −1.98 × 10^3 | 3.51 × 10^−21 | 1.13 × 10^−15 | 2.00 × 10^−1 |
| EHO | Best | 6.18 × 10^−9 | 5.00 × 10^−4 | 8.55 × 10^−9 | 1.10 × 10^−3 | −2.08 × 10^3 | 8.70 × 10^−3 | 1.79 × 10^−6 | 5.45 × 10^−6 |
| | Mean | 2.87 | 6.13 × 10^−1 | 1.95 × 10^−1 | 2.71 × 10^−1 | −1.64 × 10^3 | 1.40 × 10^2 | 1.43 | 6.36 |
| FA | Best | 7.40 × 10^−3 | 1.17 × 10^−2 | 1.89 × 10^−2 | 1.62 × 10^−1 | −1.74 × 10^3 | 1.20 | 1.49 × 10^1 | 2.95 |
| | Mean | 2.88 × 10^−2 | 3.75 × 10^−2 | 6.11 × 10^−2 | 2.87 × 10^−1 | −1.59 × 10^3 | 3.70 | 1.76 × 10^1 | 6.62 |
| FDA | Best | 3.90 × 10^−4 | 5.10 × 10^−4 | 4.20 × 10^−4 | 2.88 × 10^−2 | −1.28 × 10^5 | 1.06 × 10^−1 | 1.97 × 10^−2 | 1.04 |
| | Mean | 1.43 × 10^−3 | 2.21 × 10^−3 | 3.49 × 10^−3 | 6.20 × 10^−2 | −8.49 × 10^4 | 2.34 × 10^1 | 6.09 × 10^−2 | 5.38 |
| FPA | Best | 5.32 × 10^−42 | 5.85 × 10^−26 | 5.82 × 10^−43 | 2.49 × 10^−26 | −2.09 × 10^3 | 5.03 × 10^−4 | 0 | 0 |
| | Mean | 8.24 × 10^−38 | 2.17 × 10^−23 | 1.20 × 10^−39 | 8.78 × 10^−25 | −1.99 × 10^3 | 6.45 × 10^−1 | 2.13 × 10^−15 | 3.58 × 10^−1 |
| GOA | Best | 2.10 × 10^3 | 8.89 × 10^−11 | 3.42 × 10^−13 | 1.79 × 10^−6 | −1.35 × 10^3 | 4.00 × 10^−4 | 1.40 × 10^−6 | 9.95 × 10^−1 |
| | Mean | 5.54 × 10^3 | 4.75 × 10^−10 | 4.37 × 10^−11 | 5.07 × 10^−5 | −9.41 × 10^2 | 3.51 × 10^1 | 1.39 × 10^−1 | 7.72 |
| GSA | Best | 2.10 × 10^3 | 4.54 × 10^−8 | 4.84 × 10^−7 | 9.00 × 10^−4 | −1.35 × 10^3 | 1.71 × 10^4 | 1.61 × 10^1 | 3.70 × 10^−4 |
| | Mean | 5.56 × 10^3 | 1.90 × 10^−6 | 2.20 × 10^−6 | 1.97 × 10^−3 | −9.41 × 10^2 | 4.29 × 10^6 | 1.86 × 10^1 | 3.08 |
| GWO | Best | 1.10 × 10^−192 | 2.90 × 10^−144 | 4.90 × 10^−198 | 9.70 × 10^−107 | −2.09 × 10^3 | 4.37 × 10^−3 | 0 | 0 |
| | Mean | 1.00 × 10^−177 | 1.90 × 10^−131 | 6.10 × 10^−184 | 2.30 × 10^−99 | −1.73 × 10^3 | 1.53 | 1.56 × 10^−15 | 0 |
| HHO | Best | 1.02 × 10^−8 | 3.56 × 10^−7 | 5.09 × 10^−9 | 1.10 × 10^−4 | −2.09 × 10^3 | 5.14 × 10^−2 | 4.50 × 10^−5 | 3.72 × 10^−7 |
| | Mean | 3.20 × 10^−6 | 1.71 × 10^−3 | 5.62 × 10^−7 | 1.44 × 10^−3 | −1.91 × 10^3 | 9.45 × 10^−1 | 6.67 × 10^−2 | 3.42 |
| MFO | Best | 3.40 × 10^−223 | 1.30 × 10^−159 | 1.70 × 10^−221 | 9.60 × 10^−118 | −2.09 × 10^3 | 1.55 × 10^−3 | 0 | 0 |
| | Mean | 4.70 × 10^−211 | 1.38 | 8.40 × 10^−213 | 6.10 × 10^−114 | −1.72 × 10^3 | 1.34 × 10^1 | 6.58 × 10^−2 | 2.79 |
| MKA | Best | 1.70 × 10^−4 | 4.47 × 10^−5 | 5.00 × 10^−4 | 1.96 × 10^−2 | −1.98 × 10^3 | 1.09 × 10^−1 | 2.59 × 10^−2 | 1.27 × 10^−1 |
| | Mean | 6.30 × 10^−4 | 9.00 × 10^−4 | 1.27 × 10^−3 | 4.01 × 10^−2 | −1.58 × 10^3 | 1.33 | 4.42 × 10^−2 | 2.14 |
| PDO | Best | 0 | 2.87 | 0 | 0 | −1.49 × 10^3 | 4.40 × 10^−2 | 0 | 0 |
| | Mean | 0 | 1.38 × 10^2 | 0 | 0 | −9.23 × 10^2 | 3.83 × 10^−1 | 0 | 0 |
| RRO | Best | 2.38 × 10^−1 | 2.47 × 10^−3 | 3.28 × 10^−2 | 7.97 × 10^−2 | −2.09 × 10^3 | 2.93 × 10^−1 | 1.72 | 8.29 × 10^−2 |
| | Mean | 5.71 × 10^1 | 7.11 × 10^−1 | 1.40 | 1.35 | −2.09 × 10^3 | 6.48 × 10^2 | 5.11 | 6.46 |
| ROA | Best | 0 | 7.60 × 10^−273 | 3.60 × 10^−303 | 7.30 × 10^−162 | −2.09 × 10^3 | 7.90 × 10^−6 | 0 | 0 |
| | Mean | 3.80 × 10^−215 | 3.30 × 10^−224 | 3.50 × 10^−221 | 3.10 × 10^−117 | −2.09 × 10^3 | 1.23 | 0 | 0 |
| SCA | Best | 5.50 × 10^−183 | 1.30 × 10^−192 | 5.80 × 10^−204 | 6.20 × 10^−108 | −2.09 × 10^3 | 1.60 | 0 | 0 |
| | Mean | 3.55 × 10^−76 | 9.39 × 10^−1 | 3.86 × 10^−82 | 3.30 × 10^−82 | −2.09 × 10^3 | 3.09 | 0 | 0 |
| SOA | Best | 9.96 × 10^−1 | 4.20 | 1.99 | 3.03 | −1.96 × 10^3 | 4.35 × 10^2 | 3.84 | 1.46 × 10^1 |
| | Mean | 1.29 × 10^2 | 4.41 × 10^1 | 4.40 × 10^1 | 1.09 × 10^1 | −1.44 × 10^3 | 2.98 × 10^5 | 1.27 × 10^1 | 3.58 × 10^1 |
| SMA | Best | 1.17 | 4.10 × 10^−4 | 4.80 × 10^−4 | 2.94 × 10^−2 | −1.82 × 10^3 | 2.00 | 1.38 × 10^1 | 4.20 |
| | Mean | 1.61 × 10^3 | 1.85 × 10^−3 | 2.78 × 10^−3 | 5.83 × 10^−2 | −1.40 × 10^3 | 1.97 × 10^3 | 1.72 × 10^1 | 1.93 × 10^1 |
| SSA | Best | 3.93 × 10^−6 | 1.48 × 10^−1 | 1.37 × 10^−2 | 5.01 × 10^−2 | −1.39 × 10^3 | 4.33 × 10^1 | 1.24 | 1.60 × 10^−4 |
| | Mean | 3.61 × 10^2 | 5.77 | 5.56 | 2.72 | −1.13 × 10^3 | 1.85 × 10^4 | 8.46 | 9.04 |
| TLBO | Best | 9.11 × 10^−38 | 1.14 × 10^−26 | 1.77 × 10^−37 | 2.77 × 10^−23 | −2.09 × 10^3 | 1.40 × 10^−4 | 3.55 × 10^−15 | 0 |
| | Mean | 1.60 × 10^−28 | 2.45 × 10^−20 | 1.47 × 10^−29 | 6.06 × 10^−21 | −1.95 × 10^3 | 1.77 | 6.58 × 10^−2 | 1.13 |
| WOA | Best | 8.00 × 10^−301 | 7.80 × 10^−115 | 1.90 × 10^−299 | 1.40 × 10^−161 | −2.09 × 10^3 | 3.14 × 10^−2 | 0 | 0 |
| | Mean | 3.20 × 10^−284 | 2.90 × 10^−66 | 2.00 × 10^−282 | 2.00 × 10^−154 | −1.74 × 10^3 | 2.54 | 2.70 × 10^−15 | 7.10 × 10^−17 |
Table 6. Benchmark results for functions f9–f15 with D = 5 (using 5000D function evaluations and 25 replications).

| Alg. | Metric | f9 | f10 | f11 | f12 | f13 | f14 | f15 |
|---|---|---|---|---|---|---|---|---|
| Opt. | | 0 | 0 | −1.40 × 10^2 | 3.90 × 10^2 | −3.30 × 10^2 | −1.80 × 10^2 | 0 |
| ETOSO | Best | 0 | 0 | −1.40 × 10^2 | 3.93 × 10^2 | −3.30 × 10^2 | −1.80 × 10^2 | 0 |
| | Mean | 0 | 0 | −1.40 × 10^2 | 3.93 × 10^2 | −3.30 × 10^2 | −1.80 × 10^2 | 0 |
| BAT | Best | 9.11 × 10^−2 | 4.41 | −1.40 × 10^2 | 3.93 × 10^2 | −3.28 × 10^2 | −1.80 × 10^2 | 2.83 × 10^−1 |
| | Mean | 5.68 × 10^−1 | 1.47 × 10^3 | −1.40 × 10^2 | 7.13 × 10^2 | −3.19 × 10^2 | −1.79 × 10^2 | 9.30 × 10^−1 |
| BEE | Best | 1.21 × 10^1 | 1.58 × 10^2 | −1.27 × 10^2 | 3.93 × 10^2 | −3.28 × 10^2 | −1.62 × 10^2 | 5.15 × 10^1 |
| | Mean | 4.66 × 10^1 | 6.77 × 10^2 | −1.22 × 10^2 | 3.94 × 10^2 | −3.10 × 10^2 | −1.18 × 10^2 | 1.10 × 10^2 |
| BOA | Best | 8.27 × 10^−2 | 1.58 × 10^4 | −1.25 × 10^2 | 3.94 × 10^2 | −3.03 × 10^2 | −1.67 × 10^2 | 8.84 × 10^−2 |
| | Mean | 2.88 × 10^−1 | 1.72 × 10^5 | −1.21 × 10^2 | 3.99 × 10^2 | −2.65 × 10^2 | −1.26 × 10^2 | 2.29 × 10^−1 |
| CSA | Best | 8.55 × 10^−1 | 1.29 × 10^6 | −1.37 × 10^2 | 1.28 × 10^3 | −3.26 × 10^2 | −1.79 × 10^2 | 1.27 |
| | Mean | 1.28 | 9.41 × 10^6 | −1.35 × 10^2 | 2.91 × 10^3 | −3.22 × 10^2 | −1.79 × 10^2 | 2.14 |
| CS | Best | 2.54 | 2.42 × 10^7 | −1.33 × 10^2 | 5.57 × 10^3 | −3.22 × 10^2 | −1.77 × 10^2 | 2.32 |
| | Mean | 5.18 | 1.27 × 10^8 | −1.29 × 10^2 | 2.27 × 10^4 | −3.12 × 10^2 | −1.76 × 10^2 | 1.11 × 10^1 |
| DE | Best | 1.19 × 10^−2 | 1.63 × 10^−38 | −1.40 × 10^2 | 3.93 × 10^2 | −3.30 × 10^2 | −1.80 × 10^2 | 3.94 × 10^−2 |
| | Mean | 8.56 × 10^−2 | 4.60 × 10^−35 | −1.40 × 10^2 | 3.93 × 10^2 | −3.30 × 10^2 | −1.80 × 10^2 | 1.95 × 10^−1 |
| EHO | Best | 1.76 × 10^−1 | 3.95 | −1.40 × 10^2 | 6.46 × 10^2 | −3.25 × 10^2 | −1.80 × 10^2 | 8.20 × 10^−3 |
| | Mean | 5.21 × 10^−1 | 6.89 × 10^4 | −1.35 × 10^2 | 4.32 × 10^3 | −3.18 × 10^2 | −1.79 × 10^2 | 8.33 × 10^−1 |
| FA | Best | 1.31 × 10^1 | 6.05 × 10^3 | −1.25 × 10^2 | 3.95 × 10^2 | −3.27 × 10^2 | −1.60 × 10^2 | 5.20 × 10^1 |
| | Mean | 4.72 × 10^1 | 1.42 × 10^4 | −1.22 × 10^2 | 4.14 × 10^2 | −3.24 × 10^2 | −1.16 × 10^2 | 9.65 × 10^1 |
| FDA | Best | 1.76 × 10^−2 | 2.05 × 10^2 | −1.40 × 10^2 | 3.93 × 10^2 | −3.30 × 10^2 | −1.80 × 10^2 | 3.56 × 10^−2 |
| | Mean | 1.17 × 10^−1 | 6.08 × 10^3 | −1.39 × 10^2 | 3.94 × 10^2 | −3.26 × 10^2 | −1.80 × 10^2 | 1.38 × 10^−1 |
| FPA | Best | 3.70 × 10^−9 | 1.50 × 10^−36 | −1.40 × 10^2 | 3.93 × 10^2 | −3.30 × 10^2 | −1.80 × 10^2 | 1.74 × 10^−2 |
| | Mean | 4.69 × 10^−2 | 4.65 × 10^−34 | −1.40 × 10^2 | 3.93 × 10^2 | −3.29 × 10^2 | −1.80 × 10^2 | 1.26 × 10^−1 |
| GOA | Best | 1.31 × 10^1 | 2.78 × 10^8 | −1.40 × 10^2 | 1.37 × 10^5 | −3.29 × 10^2 | −1.60 × 10^2 | 5.20 × 10^1 |
| | Mean | 4.81 × 10^1 | 2.83 × 10^9 | −1.40 × 10^2 | 3.11 × 10^5 | −3.24 × 10^2 | −1.16 × 10^2 | 1.29 × 10^2 |
| GSA | Best | 1.31 × 10^1 | 2.78 × 10^8 | −1.25 × 10^2 | 1.37 × 10^5 | −3.29 × 10^2 | −1.60 × 10^2 | 5.20 × 10^1 |
| | Mean | 4.81 × 10^1 | 2.88 × 10^8 | −1.21 × 10^2 | 3.11 × 10^5 | −3.25 × 10^2 | −1.16 × 10^2 | 1.29 × 10^2 |
| GWO | Best | 0 | 4.73 × 10^−188 | −1.40 × 10^2 | 3.93 × 10^2 | −3.29 × 10^2 | −1.80 × 10^2 | 9.38 × 10^−3 |
| | Mean | 1.29 × 10^−2 | 3.36 × 10^−176 | −1.39 × 10^2 | 1.28 × 10^3 | −3.27 × 10^2 | −1.80 × 10^2 | 6.32 × 10^−2 |
| HHO | Best | 8.37 × 10^−6 | 1.81 × 10^−2 | −1.40 × 10^2 | 3.94 × 10^2 | −3.29 × 10^2 | −1.80 × 10^2 | 4.82 × 10^−2 |
| | Mean | 2.07 × 10^−1 | 3.84 × 10^3 | −1.39 × 10^2 | 4.40 × 10^2 | −3.22 × 10^2 | −1.80 × 10^2 | 3.70 × 10^−1 |
| MFO | Best | 7.40 × 10^−3 | 6.90 × 10^−214 | −1.40 × 10^2 | 3.93 × 10^2 | −3.30 × 10^2 | −1.80 × 10^2 | 4.68 × 10^−2 |
| | Mean | 6.92 × 10^−2 | 3.60 × 10^3 | −1.38 × 10^2 | 5.33 × 10^2 | −3.25 × 10^2 | −1.80 × 10^2 | 2.48 × 10^−1 |
| MKA | Best | 2.84 × 10^−1 | 7.30 × 10^1 | −1.40 × 10^2 | 3.93 × 10^2 | −3.29 × 10^2 | −1.79 × 10^2 | 1.43 × 10^−1 |
| | Mean | 5.60 | 6.54 × 10^2 | −1.26 × 10^2 | 3.94 × 10^2 | −3.27 × 10^2 | −1.71 × 10^2 | 4.08 |
| PDO | Best | 0 | 0 | −1.25 × 10^2 | 1.52 × 10^5 | −2.98 × 10^2 | −1.61 × 10^2 | 0 |
| | Mean | 0 | 0 | −1.23 × 10^2 | 2.44 × 10^5 | −2.78 × 10^2 | −1.39 × 10^2 | 0 |
| RRO | Best | 9.95 × 10^−1 | 2.55 × 10^5 | −1.36 × 10^2 | 2.03 × 10^3 | −3.24 × 10^2 | −1.79 × 10^2 | 9.87 × 10^−1 |
| | Mean | 1.58 | 2.14 × 10^7 | −1.34 × 10^2 | 6.43 × 10^3 | −3.20 × 10^2 | −1.78 × 10^2 | 2.37 |
| ROA | Best | 0 | 1.38 × 10^−282 | −1.37 × 10^2 | 5.96 × 10^2 | −3.26 × 10^2 | −1.80 × 10^2 | 0 |
| | Mean | 0 | 5.94 × 10^−209 | −1.30 × 10^2 | 1.67 × 10^4 | −3.14 × 10^2 | −1.77 × 10^2 | 7.81 × 10^−2 |
| SCA | Best | 0 | 1.17 × 10^−182 | −1.39 × 10^2 | 4.53 × 10^2 | −3.29 × 10^2 | −1.80 × 10^2 | 0 |
| | Mean | 0 | 1.05 × 10^−75 | −1.37 × 10^2 | 1.22 × 10^3 | −3.23 × 10^2 | −1.79 × 10^2 | 4.44 × 10^−18 |
| SOA | Best | 3.54 × 10^−1 | 1.34 × 10^2 | −1.33 × 10^2 | 4.03 × 10^2 | −3.18 × 10^2 | −1.79 × 10^2 | 4.73 × 10^−1 |
| | Mean | 1.58 | 3.99 × 10^6 | −1.26 × 10^2 | 2.21 × 10^4 | −2.95 × 10^2 | −1.72 × 10^2 | 1.17 × 10^1 |
| SMA | Best | 1.21 × 10^1 | 1.73 × 10^3 | −1.27 × 10^2 | 3.45 × 10^3 | −3.28 × 10^2 | −1.62 × 10^2 | 5.15 × 10^1 |
| | Mean | 4.65 × 10^1 | 5.79 × 10^8 | −1.22 × 10^2 | 7.08 × 10^4 | −3.09 × 10^2 | −1.17 × 10^2 | 1.08 × 10^2 |
| SSA | Best | 1.21 × 10^−1 | 1.63 × 10^4 | −1.37 × 10^2 | 1.95 × 10^3 | −3.22 × 10^2 | −1.79 × 10^2 | 2.58 × 10^−1 |
| | Mean | 3.73 | 1.02 × 10^8 | −1.30 × 10^2 | 2.59 × 10^4 | −3.07 × 10^2 | −1.73 × 10^2 | 1.23 × 10^1 |
| TLBO | Best | 2.31 × 10^−2 | 3.81 × 10^−30 | −1.40 × 10^2 | 3.93 × 10^2 | −3.30 × 10^2 | −1.80 × 10^2 | 2.96 × 10^−2 |
| | Mean | 9.34 × 10^−2 | 4.00 × 10^2 | −1.40 × 10^2 | 3.93 × 10^2 | −3.29 × 10^2 | −1.80 × 10^2 | 1.64 × 10^−1 |
| WOA | Best | 0 | 1.14 × 10^−297 | −1.40 × 10^2 | 5.27 × 10^2 | −3.29 × 10^2 | −1.80 × 10^2 | 0 |
| | Mean | 1.81 × 10^−2 | 1.05 × 10^−278 | −1.30 × 10^2 | 1.23 × 10^4 | −3.11 × 10^2 | −1.78 × 10^2 | 2.87 × 10^−1 |
Table 7. Ranking of all algorithms on the basis of performance for D = 2, 5, and 10.

| Algorithm | Rank for D = 2 | Rank for D = 5 | Rank for D = 10 | Cumulative Rank | Overall Rank |
|---|---|---|---|---|---|
| ETOSO | 1 | 1 | 1 | 3 | 1 |
| FPA | 2 | 2 | 2 | 6 | 2 |
| DE | 4 | 2 | 3 | 9 | 3 |
| GWO | 5 | 4 | 4 | 13 | 4 |
| TLBO | 3 | 6 | 6 | 15 | 5 |
| ROA | 9 | 5 | 5 | 19 | 6 |
| SCA | 6 | 7 | 7 | 20 | 7 |
| WOA | 8 | 8 | 8 | 24 | 8 |
| MFO | 7 | 9 | 11 | 27 | 9 |
| HHO | 10 | 11 | 9 | 30 | 10 |
| PDO | 11 | 10 | 11 | 32 | 11 |
| MKA | 13 | 13 | 10 | 36 | 12 |
| FDA | 12 | 12 | 13 | 37 | 13 |
| BAT | 14 | 13 | 14 | 41 | 14 |
| EHO | 16 | 15 | 15 | 46 | 15 |
| CSA | 17 | 17 | 19 | 53 | 16 |
| RRO | 19 | 18 | 16 | 53 | 17 |
| BEE | 21 | 16 | 17 | 54 | 18 |
| BOA | 18 | 19 | 20 | 57 | 19 |
| FA | 20 | 21 | 18 | 59 | 20 |
| GOA | 15 | 20 | 26 | 61 | 21 |
| CS | 22 | 24 | 23 | 69 | 22 |
| SSA | 25 | 23 | 21 | 69 | 23 |
| GSA | 24 | 22 | 24 | 70 | 24 |
| SMA | 26 | 25 | 22 | 73 | 25 |
| SOA | 23 | 26 | 25 | 74 | 26 |
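Table 7's cumulative and overall ranks follow mechanically from the per-dimension ranks: the cumulative rank is the sum across dimensions, and the overall rank orders the cumulative values (with ties, such as CSA/RRO at 53, broken by listing order). A small sketch of that aggregation, with illustrative input:

```python
def overall_ranking(per_dim_ranks):
    """per_dim_ranks maps algorithm -> list of per-dimension ranks.

    Returns [(algorithm, cumulative_rank, overall_rank)] sorted best-first.
    Ties in the cumulative rank keep insertion order (sorted() is stable)."""
    cumulative = {alg: sum(ranks) for alg, ranks in per_dim_ranks.items()}
    ordered = sorted(cumulative.items(), key=lambda kv: kv[1])
    return [(alg, cum, i + 1) for i, (alg, cum) in enumerate(ordered)]
```

Feeding in the first three rows of Table 7 (ETOSO 1/1/1, FPA 2/2/2, DE 4/2/3) reproduces the cumulative ranks 3, 6, and 9 and overall ranks 1, 2, and 3.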
Table 8. Top algorithms evaluation for f1 to f8 and D = 200 (25 replications and 5000D FEs).

| Alg. | Metric | f1 | f2 | f3 | f4 | f5 | f6 | f7 | f8 |
|---|---|---|---|---|---|---|---|---|---|
| Opt. | | 0 | 0 | 0 | 0 | −8.38 × 10^4 | 0 | 0 | 0 |
| ETOSO | Best | 0 | 0 | 0 | 0 | −8.38 × 10^4 | 1.95 × 10^2 | 0 | 0 |
| | Mean | 0 | 0 | 0 | 0 | −8.38 × 10^4 | 1.95 × 10^2 | 0 | 0 |
| | Std | 0 | 0 | 0 | 0 | 5.50 × 10^−4 | 1.68 × 10^−1 | 0 | 0 |
| | Speed | 2.27 | 2.51 | 2.36 | 2.35 | 3.84 | 2.44 | 3.43 | 3.41 |
| DE | Best | 1.00 × 10^−4 | 1.58 × 10^3 | 1.00 × 10^−4 | 4.10 × 10^−4 | −6.02 × 10^4 | 7.67 × 10^2 | 1.99 × 10^1 | 2.76 × 10^2 |
| | Mean | 2.28 × 10^4 | 2.42 × 10^3 | 1.85 × 10^−2 | 6.12 × 10^−2 | −5.53 × 10^4 | 2.10 × 10^9 | 1.99 × 10^1 | 3.63 × 10^2 |
| | Std | 1.13 × 10^5 | 2.77 × 10^2 | 3.76 × 10^−2 | 1.70 × 10^−1 | 2.36 × 10^3 | 1.07 × 10^9 | 4.00 × 10^−4 | 5.29 × 10^1 |
| | Speed | 1.35 × 10^1 | 1.48 × 10^1 | 1.42 × 10^1 | 1.40 × 10^1 | 1.55 × 10^1 | 1.49 × 10^1 | 1.54 × 10^1 | 1.57 × 10^1 |
| FPA | Best | 3.26 × 10^5 | 1.96 × 10^3 | 2.98 × 10^5 | 3.06 × 10^2 | −4.45 × 10^4 | 1.44 × 10^9 | 1.99 × 10^1 | 2.17 × 10^3 |
| | Mean | 3.62 × 10^5 | 3.08 × 10^3 | 3.42 × 10^5 | 2.40 × 10^52 | −4.07 × 10^4 | 1.62 × 10^9 | 1.99 × 10^1 | 2.41 × 10^3 |
| | Std | 1.45 × 10^4 | 4.24 × 10^2 | 1.81 × 10^4 | 1.20 × 10^53 | 1.82 × 10^3 | 9.78 × 10^7 | 1.32 × 10^−3 | 1.01 × 10^2 |
| | Speed | 3.02 × 10^1 | 3.19 × 10^1 | 3.11 × 10^1 | 3.08 × 10^1 | 3.34 × 10^1 | 3.23 × 10^1 | 3.41 × 10^1 | 3.39 × 10^1 |
| GWO | Best | 0 | 3.60 × 10^−176 | 0 | 0 | −3.08 × 10^4 | 1.96 × 10^2 | 7.10 × 10^−15 | 0 |
| | Mean | 0 | 2.93 × 10^−158 | 0 | 0 | −2.53 × 10^4 | 1.98 × 10^2 | 7.10 × 10^−15 | 0 |
| | Std | 0 | 1.19 × 10^−157 | 0 | 0 | 2.75 × 10^3 | 6.26 × 10^−1 | 0 | 0 |
| | Speed | 2.06 × 10^1 | 2.07 × 10^1 | 2.11 × 10^1 | 2.15 × 10^1 | 2.55 × 10^1 | 2.09 × 10^1 | 2.40 × 10^1 | 2.30 × 10^1 |
| HHO | Best | 3.93 × 10^−6 | 5.93 × 10^1 | 1.87 × 10^−5 | 5.54 × 10^−2 | −8.38 × 10^4 | 1.97 × 10^2 | 7.90 × 10^−4 | 4.00 × 10^−4 |
| | Mean | 3.20 × 10^−4 | 6.17 × 10^2 | 5.39 × 10^−3 | 8.67 × 10^−1 | −7.13 × 10^4 | 1.97 × 10^2 | 2.05 × 10^−3 | 8.24 × 10^1 |
| | Std | 2.50 × 10^−4 | 4.24 × 10^2 | 8.06 × 10^−3 | 2.15 | 1.22 × 10^4 | 6.18 × 10^−3 | 7.60 × 10^−4 | 8.46 × 10^1 |
| | Speed | 1.66 | 1.96 | 1.74 | 1.79 | 3.48 | 2.08 | 3.35 | 3.08 |
| MFO | Best | 3.00 × 10^4 | 4.16 × 10^3 | 3.40 × 10^4 | 2.10 × 10^2 | −5.24 × 10^4 | 7.45 × 10^1 | 1.94 × 10^1 | 1.60 × 10^3 |
| | Mean | 6.48 × 10^4 | 5.05 × 10^3 | 9.05 × 10^4 | 4.27 × 10^2 | −4.37 × 10^4 | 6.98 × 10^8 | 1.97 × 10^1 | 1.84 × 10^3 |
| | Std | 2.62 × 10^4 | 4.65 × 10^2 | 2.43 × 10^4 | 9.77 × 10^1 | 4.20 × 10^3 | 1.10 × 10^9 | 1.76 × 10^−1 | 1.51 × 10^2 |
| | Speed | 2.32 × 10^1 | 2.44 × 10^1 | 2.41 × 10^1 | 2.35 × 10^1 | 2.60 × 10^1 | 2.55 × 10^1 | 2.59 × 10^1 | 2.58 × 10^1 |
| PDO | Best | 0 | 4.52 × 10^8 | 0 | 0 | −4.60 × 10^4 | 1.97 × 10^2 | 0 | 0 |
| | Mean | 0 | 5.55 × 10^14 | 0 | 0 | −4.23 × 10^4 | 1.97 × 10^2 | 0 | 0 |
| | Std | 0 | 1.73 × 10^15 | 0 | 0 | 2.90 × 10^3 | 1.18 × 10^−2 | 0 | 0 |
| | Speed | 1.52 × 10^1 | 1.64 × 10^1 | 1.60 × 10^1 | 1.56 × 10^1 | 1.89 × 10^1 | 1.73 × 10^1 | 1.73 × 10^1 | 1.62 × 10^1 |
| ROA | Best | 0 | 0 | 0 | 0 | −8.38 × 10^4 | 6.60 × 10^−7 | 0 | 0 |
| | Mean | 0 | 0 | 0 | 0 | −8.38 × 10^4 | 3.30 × 10^−3 | 0 | 0 |
| | Std | 0 | 0 | 0 | 0 | 4.27 × 10^−3 | 5.30 × 10^−3 | 0 | 0 |
| | Speed | 1.54 × 10^1 | 1.60 × 10^1 | 1.60 × 10^1 | 1.48 × 10^1 | 1.69 × 10^1 | 1.66 × 10^1 | 1.54 × 10^1 | 1.54 × 10^1 |
| SCA | Best | 0 | 6.57 | 0 | 0 | −8.38 × 10^4 | 1.97 × 10^2 | 0 | 0 |
| | Mean | 0 | 1.42 × 10^2 | 0 | 0 | −8.38 × 10^4 | 9.65 × 10^7 | 0 | 0 |
| | Std | 0 | 1.40 × 10^2 | 0 | 0 | 1.18 | 2.80 × 10^8 | 0 | 0 |
| | Speed | 1.49 × 10^1 | 1.52 × 10^1 | 1.52 × 10^1 | 1.46 × 10^1 | 1.66 × 10^1 | 1.63 × 10^1 | 1.55 × 10^1 | 1.51 × 10^1 |
| TLBO | Best | 1.40 × 10^−3 | 1.88 × 10^3 | 1.37 × 10^−1 | 5.40 × 10^−3 | −5.44 × 10^4 | 1.55 × 10^3 | 1.34 × 10^1 | 1.82 × 10^2 |
| | Mean | 1.07 | 3.22 × 10^3 | 4.38 | 2.70 × 10^−1 | −4.61 × 10^4 | 5.91 × 10^3 | 1.45 × 10^1 | 2.40 × 10^2 |
| | Std | 2.67 | 1.81 × 10^3 | 6.68 | 6.99 × 10^−1 | 3.61 × 10^3 | 4.63 × 10^3 | 5.90 × 10^−1 | 3.08 × 10^1 |
| | Speed | 1.57 × 10^1 | 1.65 × 10^1 | 1.63 × 10^1 | 1.61 × 10^1 | 1.87 × 10^1 | 1.75 × 10^1 | 1.91 × 10^1 | 1.78 × 10^1 |
| WOA | Best | 0 | 6.03 × 10^2 | 0 | 0 | −8.38 × 10^4 | 1.97 × 10^2 | 0 | 0 |
| | Mean | 0 | 2.95 × 10^3 | 0 | 0 | −8.38 × 10^4 | 1.97 × 10^2 | 2.55 × 10^−15 | 0 |
| | Std | 0 | 6.71 × 10^2 | 0 | 0 | 8.94 × 10^−2 | 1.27 × 10^−1 | 2.41 × 10^−15 | 0 |
| | Speed | 1.57 × 10^1 | 1.55 × 10^1 | 1.63 × 10^1 | 1.49 × 10^1 | 1.70 × 10^1 | 1.63 × 10^1 | 1.54 × 10^1 | 1.53 × 10^1 |
Table 9. Top algorithms evaluation for f9 to f15 and D = 200 (25 replications and 5000D FEs).

| Alg. | Metric | f9 | f10 | f11 | f12 | f13 | f14 | f15 |
|---|---|---|---|---|---|---|---|---|
| Opt. | | 0 | 0 | −1.40 × 10^2 | 3.90 × 10^2 | −3.30 × 10^2 | −1.80 × 10^2 | 0 |
| ETOSO | Best | 0 | 0 | −1.40 × 10^2 | 5.87 × 10^2 | −3.30 × 10^2 | −1.80 × 10^2 | 0 |
| | Mean | 0 | 0 | −1.40 × 10^2 | 5.87 × 10^2 | −3.30 × 10^2 | −1.80 × 10^2 | 0 |
| | Std | 0 | 0 | 2.32 × 10^−4 | 1.46 × 10^−2 | 8.72 × 10^−5 | 2.12 × 10^−2 | 0 |
| | Speed | 3.16 | 2.37 | 4.09 | 2.87 | 3.92 | 4.72 | 1.11 × 10^1 |
| DE | Best | 3.55 × 10^−5 | 4.81 × 10^1 | −1.36 × 10^2 | 5.87 × 10^2 | 6.47 × 10^1 | −1.80 × 10^2 | 2.05 × 10^−1 |
| | Mean | 4.32 × 10^−1 | 4.76 × 10^6 | −1.33 × 10^2 | 5.87 × 10^2 | 2.24 × 10^2 | −1.80 × 10^2 | 5.00 × 10^−1 |
| | Std | 1.16 | 2.37 × 10^7 | 2.39 | 6.46 × 10^−1 | 6.30 × 10^1 | 2.99 × 10^−1 | 2.64 × 10^−1 |
| | Speed | 1.65 × 10^1 | 1.43 × 10^1 | 1.91 × 10^1 | 1.87 × 10^1 | 1.87 × 10^1 | 2.16 × 10^1 | 5.75 × 10^1 |
| FPA | Best | 2.90 × 10^3 | 3.21 × 10^11 | −1.20 × 10^2 | 3.22 × 10^7 | 1.81 × 10^3 | 2.57 × 10^3 | 9.10 × 10^3 |
| | Mean | 3.23 × 10^3 | 3.60 × 10^11 | −1.19 × 10^2 | 4.14 × 10^7 | 2.36 × 10^3 | 3.26 × 10^3 | 1.06 × 10^4 |
| | Std | 1.64 × 10^2 | 1.79 × 10^10 | 4.22 × 10^−1 | 5.18 × 10^6 | 1.78 × 10^2 | 2.91 × 10^2 | 7.08 × 10^2 |
| | Speed | 3.63 × 10^1 | 3.16 × 10^1 | 3.80 × 10^1 | 3.61 × 10^1 | 3.76 × 10^1 | 3.97 × 10^1 | 1.21 × 10^2 |
| GWO | Best | 0 | 0 | −1.21 × 10^2 | 1.49 × 10^7 | 1.64 × 10^3 | 1.35 × 10^3 | 0 |
| | Mean | 0 | 0 | −1.21 × 10^2 | 1.76 × 10^7 | 1.86 × 10^3 | 1.72 × 10^3 | 0 |
| | Std | 0 | 0 | 2.19 × 10^−1 | 1.45 × 10^6 | 1.13 × 10^2 | 1.68 × 10^2 | 0 |
| | Speed | 2.20 × 10^1 | 2.10 × 10^1 | 2.54 × 10^1 | 2.09 × 10^1 | 2.49 × 10^1 | 3.85 × 10^1 | 4.88 × 10^1 |
| HHO | Best | 1.16 × 10^−5 | 1.20 × 10^1 | −1.21 × 10^2 | 6.11 × 10^2 | 1.30 × 10^3 | −1.80 × 10^2 | 6.22 × 10^−5 |
| | Mean | 6.87 × 10^−3 | 2.73 × 10^2 | −1.21 × 10^2 | 9.60 × 10^3 | 1.53 × 10^3 | −1.80 × 10^2 | 1.19 × 10^−2 |
| | Std | 2.61 × 10^−2 | 2.96 × 10^2 | 8.60 × 10^−2 | 8.98 × 10^3 | 1.08 × 10^2 | 4.12 × 10^−2 | 1.34 × 10^−2 |
| | Speed | 5.43 | 1.78 | 3.95 | 2.44 | 3.56 | 7.22 | 3.54 × 10^1 |
| MFO | Best | 1.80 × 10^2 | 2.99 × 10^10 | −1.20 × 10^2 | 2.07 × 10^7 | 1.77 × 10^3 | 2.10 × 10^3 | 2.38 × 10^3 |
| | Mean | 6.20 × 10^2 | 5.56 × 10^10 | −1.20 × 10^2 | 3.12 × 10^7 | 2.32 × 10^3 | 2.86 × 10^3 | 4.28 × 10^3 |
| | Std | 3.12 × 10^2 | 2.02 × 10^10 | 2.09 × 10^−1 | 4.88 × 10^6 | 2.56 × 10^2 | 5.10 × 10^2 | 9.37 × 10^2 |
| | Speed | 2.50 × 10^1 | 2.46 × 10^1 | 2.91 × 10^1 | 2.89 × 10^1 | 2.88 × 10^1 | 3.84 × 10^1 | 3.73 × 10^1 |
| PDO | Best | 0 | 0 | −1.19 × 10^2 | 5.01 × 10^7 | 3.28 × 10^3 | 5.41 × 10^3 | 0 |
| | Mean | 0 | 0 | −1.19 × 10^2 | 5.02 × 10^7 | 3.38 × 10^3 | 5.42 × 10^3 | 0 |
| | Std | 0 | 0 | 3.53 × 10^−2 | 2.50 × 10^4 | 6.16 × 10^1 | 2.48 | 0 |
| | Speed | 1.79 × 10^1 | 1.64 × 10^1 | 2.21 × 10^1 | 2.10 × 10^1 | 2.14 × 10^1 | 2.96 × 10^1 | 2.71 × 10^1 |
| ROA | Best | 0 | 0 | −1.19 × 10^2 | 4.42 × 10^7 | 2.74 × 10^3 | 4.67 × 10^3 | 0 |
| | Mean | 0 | 0 | −1.19 × 10^2 | 4.64 × 10^7 | 2.86 × 10^3 | 5.01 × 10^3 | 0 |
| | Std | 0 | 0 | 4.61 × 10^−2 | 1.47 × 10^6 | 7.66 × 10^1 | 1.70 × 10^2 | 0 |
| | Speed | 1.58 × 10^1 | 1.62 × 10^1 | 2.17 × 10^1 | 2.02 × 10^1 | 2.09 × 10^1 | 2.90 × 10^1 | 2.70 × 10^1 |
| SCA | Best | 0 | 0 | −1.20 × 10^2 | 2.34 × 10^7 | 2.29 × 10^3 | 2.70 × 10^3 | 0 |
| | Mean | 0 | 0 | −1.19 × 10^2 | 3.01 × 10^7 | 2.53 × 10^3 | 3.22 × 10^3 | 0 |
| | Std | 0 | 0 | 6.54 × 10^−2 | 2.40 × 10^6 | 1.01 × 10^2 | 3.37 × 10^2 | 0 |
| | Speed | 1.65 × 10^1 | 1.60 × 10^1 | 2.09 × 10^1 | 1.98 × 10^1 | 2.03 × 10^1 | 3.15 × 10^1 | 2.76 × 10^1 |
| TLBO | Best | 1.20 × 10^−4 | 2.83 × 10^3 | −1.21 × 10^2 | 6.32 × 10^2 | 1.21 × 10^3 | −1.76 × 10^2 | 7.72 × 10^−1 |
| | Mean | 3.23 × 10^−2 | 9.31 × 10^5 | −1.21 × 10^2 | 7.98 × 10^4 | 1.36 × 10^3 | −1.50 × 10^2 | 1.03 |
| | Std | 4.27 × 10^−2 | 4.01 × 10^6 | 1.66 × 10^−1 | 8.21 × 10^4 | 9.44 × 10^1 | 1.77 × 10^1 | 1.59 × 10^−1 |
| | Speed | 1.74 × 10^1 | 1.70 × 10^1 | 2.31 × 10^1 | 2.11 × 10^1 | 2.17 × 10^1 | 3.08 × 10^1 | 2.92 × 10^1 |
| WOA | Best | 0 | 0 | −1.19 × 10^2 | 4.27 × 10^7 | 2.76 × 10^3 | 4.39 × 10^3 | 0 |
| | Mean | 0 | 0 | −1.19 × 10^2 | 4.58 × 10^7 | 2.99 × 10^3 | 4.77 × 10^3 | 8.88 × 10^−18 |
| | Std | 0 | 0 | 3.52 × 10^−2 | 1.27 × 10^6 | 7.11 × 10^1 | 1.74 × 10^2 | 3.07 × 10^−17 |
| | Speed | 1.56 × 10^1 | 1.70 × 10^1 | 2.19 × 10^1 | 2.13 × 10^1 | 2.07 × 10^1 | 3.05 × 10^1 | 2.55 × 10^1 |
Table 10. Ranking of top algorithms in terms of performance (P), speed (S), and consistency (C).

| Alg. | P (D = 30) | S (D = 30) | C (D = 30) | P (D = 50) | S (D = 50) | C (D = 50) | P (D = 100) | S (D = 100) | C (D = 100) | P (D = 200) | S (D = 200) | C (D = 200) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ETOSO | 1 | 2 | 1 | 1 | 2 | 1 | 1 | 2 | 1 | 1 | 2 | 1 |
| DE | 2 | 4 | 6 | 4 | 4 | 5 | 4 | 4 | 6 | 6 | 3 | 8 |
| FPA | 10 | 11 | 10 | 10 | 11 | 10 | 11 | 11 | 10 | 11 | 11 | 10 |
| GWO | 3 | 3 | 4 | 2 | 3 | 6 | 2 | 3 | 7 | 2 | 9 | 5 |
| HHO | 8 | 1 | 8 | 6 | 1 | 8 | 6 | 1 | 8 | 3 | 1 | 7 |
| MFO | 11 | 10 | 11 | 11 | 10 | 11 | 10 | 10 | 11 | 10 | 10 | 11 |
| PDO | 9 | 9 | 2 | 9 | 9 | 3 | 9 | 9 | 2 | 9 | 7 | 3 |
| ROA | 4 | 6 | 3 | 3 | 6 | 2 | 3 | 6 | 3 | 4 | 5 | 2 |
| SCA | 5 | 5 | 5 | 4 | 5 | 4 | 4 | 5 | 4 | 4 | 4 | 6 |
| TLBO | 7 | 8 | 8 | 7 | 8 | 9 | 8 | 8 | 9 | 8 | 8 | 9 |
| WOA | 6 | 7 | 7 | 8 | 7 | 7 | 7 | 7 | 5 | 6 | 6 | 4 |
Table 11. p-values from Wilcoxon signed-rank test with D = 50 and ps = 10 (using 5000D function evaluations and 25 replications).

| Function | ETOSO | DE | FPA | GWO | HHO | MFO | PDO | ROA | SCA | TLBO | WOA |
|---|---|---|---|---|---|---|---|---|---|---|---|
| f1 | 1 | 9.73 × 10^−11 | 9.73 × 10^−11 | 1 | 9.73 × 10^−11 | 9.73 × 10^−11 | 1 | 1 | 1 | 9.73 × 10^−11 | 1 |
| f2 | 1 | 9.73 × 10^−11 | 9.73 × 10^−11 | 0 | 9.73 × 10^−11 | 9.73 × 10^−11 | 1 | 1 | 9.73 × 10^−11 | 9.73 × 10^−11 | 9.73 × 10^−11 |
| f3 | 1 | 9.73 × 10^−11 | 9.73 × 10^−11 | 1 | 9.73 × 10^−11 | 9.73 × 10^−11 | 1 | 1 | 1 | 9.73 × 10^−11 | 1 |
| f4 | 1 | 9.73 × 10^−11 | 9.73 × 10^−11 | 1 | 9.73 × 10^−11 | 9.73 × 10^−11 | 1 | 1 | 1 | 9.73 × 10^−11 | 1 |
| f5 | 1 | 1.42 × 10^−9 | 1.42 × 10^−9 | 1.42 × 10^−9 | 1.42 × 10^−9 | 1.42 × 10^−9 | 1.42 × 10^−9 | 4.25 × 10^−6 | 2.57 × 10^−8 | 1.42 × 10^−9 | 3.67 × 10^−9 |
| f6 | 1 | 2.57 × 10^−8 | 1.42 × 10^−9 | 2.44 × 10^−2 | 1.42 × 10^−9 | 4.26 × 10^−6 | 1.42 × 10^−9 | 1.42 × 10^−9 | 1.42 × 10^−9 | 1.42 × 10^−9 | 1.42 × 10^−9 |
| f7 | 1 | 9.65 × 10^−11 | 9.73 × 10^−11 | 9.61 × 10^−11 | 9.73 × 10^−11 | 9.73 × 10^−11 | 1 | 1 | 1 | 2.33 × 10^−11 | 2.08 × 10^−2 |
| f8 | 1 | 9.73 × 10^−11 | 9.73 × 10^−11 | 1 | 9.73 × 10^−11 | 9.73 × 10^−11 | 1 | 1 | 1 | 9.73 × 10^−11 | 1 |
| f9 | 1 | 9.73 × 10^−11 | 9.73 × 10^−11 | 1 | 9.73 × 10^−11 | 9.73 × 10^−11 | 1 | 1 | 1 | 9.73 × 10^−11 | 3.37 × 10^−1 |
| f10 | 1 | 9.73 × 10^−11 | 9.73 × 10^−11 | 1 | 9.73 × 10^−11 | 9.73 × 10^−11 | 1 | 1 | 1 | 9.73 × 10^−11 | 0.00 |
| f11 | 1 | 1.42 × 10^−9 | 1.42 × 10^−9 | 1.42 × 10^−9 | 1.42 × 10^−9 | 1.42 × 10^−9 | 1.42 × 10^−9 | 1.42 × 10^−9 | 1.42 × 10^−9 | 1.42 × 10^−9 | 1.42 × 10^−9 |
| f12 | 1 | 1.41 × 10^−9 | 3.88 × 10^−5 | 1.41 × 10^−9 | 2.56 × 10^−8 | 1.41 × 10^−9 | 1.41 × 10^−9 | 1.41 × 10^−9 | 1.41 × 10^−9 | 1.41 × 10^−9 | 1.41 × 10^−9 |
| f13 | 1 | 1.42 × 10^−9 | 1.42 × 10^−9 | 1.42 × 10^−9 | 1.42 × 10^−9 | 1.42 × 10^−9 | 1.42 × 10^−9 | 1.42 × 10^−9 | 1.42 × 10^−9 | 1.42 × 10^−9 | 1.42 × 10^−9 |
| f14 | 1 | 9.18 × 10^−10 | 1.31 × 10^−6 | 9.18 × 10^−10 | 9.18 × 10^−10 | 9.18 × 10^−10 | 9.18 × 10^−10 | 9.18 × 10^−10 | 9.18 × 10^−10 | 9.18 × 10^−10 | 9.18 × 10^−10 |
| f15 | 1 | 9.73 × 10^−11 | 9.73 × 10^−11 | 1 | 9.73 × 10^−11 | 9.73 × 10^−11 | 1 | 1 | 1 | 1.62 × 10^−1 | 9.73 × 10^−11 |
Table 12. Wilcoxon significance table with D = 50 and ps = 10 (using 5000D function evaluations and 25 replications).

| Function | ETOSO | DE | FPA | GWO | HHO | MFO | PDO | ROA | SCA | TLBO | WOA |
|---|---|---|---|---|---|---|---|---|---|---|---|
| f1 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 |
| f2 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 1 |
| f3 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 |
| f4 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 |
| f5 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| f6 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| f7 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 1 |
| f8 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 |
| f9 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 |
| f10 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 1 |
| f11 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| f12 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| f13 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| f14 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| f15 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 |
Table 13. Cliff's Delta table with D = 50 and ps = 10 (using 5000D function evaluations and 25 replications).

| Function | ETOSO | DE | FPA | GWO | HHO | MFO | PDO | ROA | SCA | TLBO | WOA |
|---|---|---|---|---|---|---|---|---|---|---|---|
| f1 | 0 | −1 | −1 | 0 | −1 | −1 | 0 | 0 | 0 | −1 | 0 |
| f2 | 0 | −1 | −1 | −1 | −1 | −1 | 0 | 0 | −1 | −1 | −1 |
| f3 | 0 | −1 | −1 | 0 | −1 | −1 | 0 | 0 | 0 | −1 | 0 |
| f4 | 0 | −1 | −1 | 0 | −1 | −1 | 0 | 0 | 0 | −1 | 0 |
| f5 | 0 | −1 | −1 | −1 | −1 | −1 | −1 | −0.76 | −0.92 | −1 | −0.9744 |
| f6 | 0 | −0.92 | −1 | −0.3728 | −1 | −0.76 | −1 | 1 | −1 | −1 | −1 |
| f7 | 0 | −1 | −1 | −1 | −1 | −1 | 0 | 0 | 0 | −1 | −0.2 |
| f8 | 0 | −1 | −1 | 0 | −1 | −1 | 0 | 0 | 0 | −1 | 0 |
| f9 | 0 | −1 | −1 | 0 | −1 | −1 | 0 | 0 | 0 | −1 | −0.04 |
| f10 | 0 | −1 | −1 | 0 | −1 | −1 | 0 | 0 | 0 | −1 | 0 |
| f11 | 0 | −1 | −1 | −1 | −1 | −1 | −1 | −1 | −1 | −1 | −1 |
| f12 | 0 | −1 | −0.68 | −1 | −0.92 | −1 | −1 | −1 | −1 | −1 | −1 |
| f13 | 0 | −1 | −1 | −1 | −1 | −1 | −1 | −1 | −1 | −1 | −1 |
| f14 | 0 | −1 | −0.7872 | −1 | −1 | −1 | −1 | −1 | −1 | −1 | −1 |
| f15 | 0 | −1 | −1 | 0 | −1 | −1 | 0 | 0 | −0.08 | −1 | 0 |
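The effect sizes in Table 13 can be reproduced with a direct implementation of Cliff's delta, δ = (#{x > y} − #{x < y}) / (m·n). The sketch below is a straightforward O(m·n) version for illustration, not the study's own code; with xs holding a competitor's errors and ys holding ETOSO's, δ = 1 means the competitor's error was always larger.

```python
def cliffs_delta(xs, ys):
    """Cliff's delta effect size between samples xs and ys, in [-1, 1].

    Positive values mean xs tends to be larger than ys; -1 and +1
    indicate complete separation of the two samples."""
    gt = sum(1 for x in xs for y in ys if x > y)  # pairs where x dominates
    lt = sum(1 for x in xs for y in ys if x < y)  # pairs where y dominates
    return (gt - lt) / (len(xs) * len(ys))
```

For example, two fully separated samples give ±1, and identical samples give 0, matching the extreme values that dominate Table 13.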
Table 14. Performance metrics of optimization algorithms with D = 50 and ps = 10 (using 5000D function evaluations and 25 replications).

| Algorithm | Perfect Hits | Closest to Min | Std Dev ≤ Threshold | NAET | NAE | Avg Speed (s) |
|---|---|---|---|---|---|---|
| ETOSO | 9 | 9 | 14 | 1.87 × 10^3 | 4.03 | 8.52 × 10^−1 |
| DE | 0 | 1 | 1 | 3.82 × 10^6 | 3.25 × 10^3 | 4.20 |
| FPA | 0 | 1 | 1 | 1.35 × 10^21 | 3.25 × 10^9 | 5.47 |
| GWO | 7 | 7 | 8 | 1.96 × 10^3 | 5.31 | 1.97 |
| HHO | 0 | 1 | 1 | 5.74 × 10^3 | 2.08 × 10^3 | 7.74 × 10^−1 |
| MFO | 0 | 1 | 0 | 1.04 × 10^9 | 2.56 × 10^8 | 4.71 |
| PDO | 9 | 9 | 9 | 2.18 × 10^3 | 4.37 | 4.45 |
| ROA | 9 | 9 | 10 | 12.10 × 10^3 | 8.53 × 10^−1 | 8.36 |
| SCA | 7 | 7 | 8 | 2.01 × 10^8 | 9.58 × 10^3 | 4.10 |
| TLBO | 0 | 1 | 1 | 7.99 × 10^3 | 5.52 × 10^3 | 4.30 |
| WOA | 6 | 6 | 8 | 1.08 × 10^4 | 4.83 | 4.15 |
Table 15. Comparative analysis of algorithm computational complexity and overhead.

| Algorithm | Computational Complexity | Overhead Category | Key Operations Contributing to Overhead |
|---|---|---|---|
| ETOSO | O(FE·(D + log ps)) | Moderate | Moderate overhead due to efficient exploitation mechanisms, including sorting. |
| HHO | O(FE·D) | Low | Avoids expensive mathematical operations; relies on simple arithmetic and conditional logic. |
| DE | O(FE·D) | Moderate | Involves mutation and crossover operations. |
| TLBO | O(FE·D) | Moderate | Requires mean calculations and duplicated loops. |
| GWO | O(FE·D) | Moderate | Involves sorting and best-candidate calculations. |
| WOA | O(FE·D) | High | Uses trigonometric and exponential functions in spiral updates. |
| MFO | O(FE·D) | High | Relies on logarithmic spiral calculations. |
| SCA | O(FE·D) | High | Uses trigonometric functions for position updates. |
| FPA | O(FE·D) | Very High | Involves Lévy flight calculations. |
| ROA | O(FE·D) | Very High | Combines exponential and trigonometric functions with host-switching logic. |
| PDO | O(FE·D) | Moderate | Chaotic tent initialization and loop-based positional updates. |
BenAbdennour, A. An Enhanced Team-Oriented Swarm Optimization Algorithm (ETOSO) for Robust and Efficient High-Dimensional Search. Biomimetics 2025, 10, 222. https://doi.org/10.3390/biomimetics10040222