1. Introduction
Optimization plays a crucial role across diverse scientific and engineering disciplines. From machine learning and artificial intelligence to logistics and finance, the ability to efficiently find optimal solutions is important. However, many real-world optimization problems are characterized by high dimensionality and complex landscapes, posing significant challenges to traditional optimization techniques. Nature-inspired optimization algorithms (NIOAs) have emerged as a powerful paradigm for tackling such complex optimization problems. Inspired by natural processes such as swarm behavior and physical phenomena, NIOAs offer robust and adaptable search strategies [1,2,3]. Algorithms like Particle Swarm Optimization (PSO) have achieved notable success across a wide range of applications, often outperforming traditional gradient-based methods in complex, non-convex search spaces. Their inherent parallelism, stochastic nature, and ability to escape local optima make them particularly well suited for challenging optimization tasks.
PSO, inspired by the social foraging behavior of birds and fish, has gained widespread popularity due to its simplicity and effectiveness. It simulates a swarm of particles exploring the search space, with each particle adjusting its trajectory based on its own best-found solution and the best solution found by the entire swarm. While PSO has proven effective in numerous applications, it is susceptible to suboptimal convergence, particularly in complex, multimodal landscapes, where the algorithm can become trapped in local optima, hindering the discovery of the true global optimum.
To address the limitations of the standard PSO, the Team-Oriented Swarm Optimization algorithm (TOSO) was introduced [4]. TOSO incorporates a team-based approach, dividing the swarm into two distinct teams: explorers and exploiters. The explorers navigate a wider area within the search space while the exploiters refine the current best position. This way, the diversity of the population and the robustness of the search process are maintained. This division of labor promotes a more balanced exploration–exploitation trade-off, diminishing the risk of early stagnation and improving the overall search efficiency.
While TOSO represents a significant advancement over the standard PSO, with remarkable performance in optimizing a wide range of benchmark functions, it faces challenges in certain scenarios, particularly in parameter tuning, which can significantly influence performance. This limitation motivates the development of an Enhanced Team-Oriented Swarm Optimization algorithm (ETOSO). ETOSO is a novel enhancement of TOSO that incorporates several key improvements to further enhance exploration, exploitation, simplicity, and overall optimization performance. ETOSO builds upon the team-based structure of TOSO but introduces mechanisms to make the algorithm parameter-free, simplifying implementation and enhancing robustness. These enhancements are designed to address the limitations of TOSO and achieve superior performance in complex optimization problems. The main contributions of this paper are as follows:
The development of ETOSO, an enhanced version of TOSO with improved exploration and exploitation strategies and a parameter-free design;
A comprehensive experimental evaluation of ETOSO on a diverse suite of 15 benchmark functions, demonstrating its superior performance compared to TOSO and 25 state-of-the-art NIOAs;
A detailed computational and statistical analysis of the ETOSO algorithm, providing insights into its behavior and performance characteristics.
The paper is organized as follows: Section 2 reviews the related work. Section 3 provides a detailed background on TOSO, including its mathematical formulations and underlying principles. Section 4 describes the proposed ETOSO algorithm, highlighting the key enhancements and their rationale. Section 5 presents the experimental setup and benchmark functions used for evaluation. Section 6 compares ETOSO with TOSO and other state-of-the-art algorithms. It also provides a statistical and computational complexity analysis. The discussion and limitations are presented in Section 7. Finally, Section 8 concludes the paper and outlines future research directions.
2. Related Work
NIOAs have emerged as powerful paradigms for addressing complex optimization challenges, drawing inspiration from natural phenomena to navigate complex solution landscapes. These algorithms offer a dynamic and adaptable approach, proving particularly valuable in tackling nonlinear and high-dimensional problems where traditional optimization techniques often fail. However, while NIOAs present a compelling alternative, their original formulations are not without inherent limitations. Refining NIOAs’ applicability requires understanding the limitations related to parameter sensitivity, exploration–exploitation balance, constraint handling, scalability, and generalizability.
One of the most common challenges across a spectrum of NIOAs is the strong dependence on precise parameter tuning. Algorithms such as the Bat Algorithm (BAT) [5,6], Bees Algorithm (BEE) [7], Flower Pollination Algorithm (FPA) [8], Sine Cosine Algorithm (SCA) [9], Whale Optimization Algorithm (WOA) [10], and Cuckoo Search (CS) [11] can be particularly susceptible to this issue. The performance of these algorithms may vary dramatically with slight changes in control parameters, demanding careful adjustments to achieve optimal results. This sensitivity may complicate their application in real-world scenarios and may raise questions about their robustness and generalizability.
Furthermore, a significant proportion of original NIOAs may exhibit an inherent difficulty in maintaining a balanced equilibrium between exploration and exploitation. Algorithms like the Butterfly Optimization Algorithm (BOA) [12], Elephant Herding Optimization (EHO) [13], Firefly Algorithm (FA) [14,15], Grasshopper Optimization Algorithm (GOA) [16], Harris Hawks Optimization (HHO) [17], Moth–Flame Optimization (MFO) [18], Slime Mold Algorithm (SMA) [19], and Salp Swarm Algorithm (SSA) [20,21] can frequently struggle to navigate this trade-off effectively in some contexts. Over-emphasis on exploration can lead to inefficient searches, while excessive exploitation can result in premature convergence and entrapment in local optima. This underscores the need for more sophisticated search strategies that dynamically adapt to the problem landscape.
Moreover, the practical deployment of a number of NIOAs is frequently restricted by limitations in constraint handling and scalability. Algorithms such as the Crow Search Algorithm (CSA) [22], Grey Wolf Optimizer (GWO) [23], Teaching–Learning-Based Optimization (TLBO) [24], Flow Direction Algorithm (FDA) [25], Gravity Search Algorithm (GSA) [26], Raven Roost Optimization (RRO) [27,28], Monkey King Algorithm (MKA) [29,30,31,32], and Remora Optimization Algorithm (ROA) [33,34] may demonstrate difficulties in effectively managing constraints and scaling to high-dimensional problems in certain applications. These limitations can pose significant challenges in real-world applications where constraints are prevalent and problem dimensions are large, often leading to increased computational complexity and diminished solution quality.
Finally, a subset of NIOAs faces concerns regarding their theoretical foundations and generalizability. Differential Evolution (DE) [35], for instance, lacks comprehensive theoretical insights into its convergence properties, while the Prairie Dog Optimization (PDO) algorithm [36] and the Seagull Optimization Algorithm (SOA) [37] require more extensive comparative analyses to validate their performance across diverse problem domains. This lack of robust theoretical studies and empirical validation may limit their broader applicability.
In response to the various limitations inherent to some original NIOAs, researchers have dedicated considerable effort to developing enhancements and hybridizations, aiming to augment performance and expand applicability. To address parameter sensitivity, a prevalent strategy has been the implementation of adaptive parameter control. Adaptive versions of the Bat Algorithm and FPA, for example, dynamically adjust parameters based on search progress, thereby enhancing robustness and accelerating convergence. Specific implementations, such as the dynamic adaptation of the Levy flight step size in CS and the variable step size in FA [38], demonstrate this approach. These adaptive mechanisms seek to reduce the reliance on manual parameter tuning, fostering greater autonomy and adaptability within the algorithms, though their efficacy remains dependent on the specific problem landscape.
To address the persistent challenges in achieving a balanced equilibrium between exploration and exploitation, hybrid approaches have gained significant traction. These methods combine the strengths of different algorithms to optimize search efficacy. Examples of such an approach include hybridizations of SSA with PSO [39], SCA with TLBO [40], and GWO with CS [41]. Furthermore, enhancements to FPA [42] and TLBO [43] have focused on refining the balance between exploration and exploitation, thereby improving convergence and solution quality. However, the increased complexity of these hybrid approaches can raise concerns about computational efficiency.
Advancements in constraint handling and scalability have been realized through various modifications. Enhanced versions of CSA and TLBO have incorporated strategies to manage infeasible solutions more effectively, thereby improving performance in constrained optimization problems. For high-dimensional problems, enhancements to HHO [44,45,46,47] and CSA [48] have yielded promising results through multi-strategy enhancements and improved search mechanisms. Similarly, enhanced BEE algorithms have integrated Deb’s rules [49] to improve constraint-handling capabilities. These modifications demonstrate a commitment to expanding the applicability of NIOAs to real-world problems.
Furthermore, efforts have also been directed toward enhancing the theoretical foundations and generalizability of NIOAs. Modifications to MFO [50], SCA [51], SOA [52], and GWO [53,54] have focused on improving convergence, accuracy, and general applicability across diverse optimization landscapes. These enhancements aim to solidify the theoretical underpinnings of NIOAs and broaden their practical deployment, though standardized benchmarking remains a critical need for rigorous performance evaluation.
Despite these advancements, several challenges remain. The increasing complexity of hybrid approaches raises concerns regarding computational efficiency and practical applicability. Consequently, future research should prioritize the development of robust, efficient, and generalizable NIOAs characterized by reduced parameter sensitivity and transparent tuning methodologies. There remains a pressing need for algorithms that exhibit diminished reliance on precise parameter adjustments, or that at least provide a clear and methodical tuning approach, while delivering superior performance across a spectrum of optimization problems; this is an area of ongoing research.
3. The TOSO Algorithm
The original PSO is a simple population-based optimization algorithm inspired by the social behavior of bird flocks. In a D-dimensional search space, the position and velocity of the ith particle in the dth dimension are updated at each iteration using the following equations [55]:

vi,d(t + 1) = w·vi,d(t) + c1·r1·(pbesti,d − xi,d(t)) + c2·r2·(gbestd − xi,d(t))

xi,d(t + 1) = xi,d(t) + vi,d(t + 1)

where
xi,d: Position of the ith particle in the dth dimension;
vi,d: Velocity of the ith particle in the dth dimension;
w: Inertia weight, controlling the influence of the previous velocity;
c1, c2: Acceleration constants, influencing the attraction toward personal and global best positions;
pbesti,d: Personal best position of the ith particle in the dth dimension;
gbestd: Global best position of the swarm in the dth dimension;
r1, r2: Uniform random numbers in [0, 1].

PSO balances exploration and exploitation through the careful selection of the acceleration constants (c1 and c2) and the inertia weight (w).
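For readers who prefer code, a minimal Python sketch of this standard update is given below. The default inertia weight and acceleration constants (w = 0.7, c1 = c2 = 1.5) are common illustrative choices and are not settings taken from this paper.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One standard PSO iteration for a swarm of shape (ps, D).

    x, v      : current positions and velocities, shape (ps, D)
    pbest     : personal best positions, shape (ps, D)
    gbest     : global best position, shape (D,)
    w, c1, c2 : inertia weight and acceleration constants (illustrative values)
    """
    rng = rng if rng is not None else np.random.default_rng()
    ps, D = x.shape
    r1 = rng.random((ps, D))  # uniform random numbers in [0, 1], drawn per dimension
    r2 = rng.random((ps, D))
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x_new = x + v_new         # the position simply follows the updated velocity
    return x_new, v_new
```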
TOSO is an innovative enhanced version of PSO that aims to improve its performance by decoupling exploration and exploitation. In PSO, a single swarm explores the solution space. In contrast, TOSO divides the swarm into two teams: an exploration team responsible for discovering new areas of the search space and an exploitation team dedicated to refining potential solutions. The only information shared between the teams is the current best position. This decoupling allows TOSO to balance exploration and exploitation more effectively, resulting in performance enhancement in navigating complex search spaces, particularly in multimodal problems where multiple optimal solutions may exist. This enhancement allowed for a much-improved performance over a large number of benchmarks, even for very high dimensions [4].
3.1. TOSO Exploration Team
The exploration team aims to discover promising new regions within the search space. The explorers are guided by the local best (lbest) model instead of the PSO global best (gbest) model. This enhances exploration by focusing on the best local positions, preventing rapid convergence, and maintaining diversity within the team. TOSO also uses a ring topology for neighbor selection instead of the PSO star topology for the same reason. Moreover, TOSO does not use a velocity update; the new position is determined directly from the previous one. Given these modifications, the new position of each explorer is computed from lbest, the best position found by the explorer’s local neighbors within the ring topology; r1,d, a uniform random number between 0 and 1 that differs for each dimension; and an exploration factor (α). The exploration factor is dynamically adjusted to balance exploration and exploitation and is computed from the vector containing the fitness values of all explorers. This dynamic adjustment ensures that the team maintains a balance between exploiting promising regions and exploring new areas of the search space. To prevent premature convergence and maintain diversity within the exploration team, TOSO applies a random mutation (rebirth) operator to explorers with a predefined probability pm. The rebirth reinitializes the selected explorer uniformly at random within the search space, where Rmin and Rmax represent the lower and upper bounds of the search space, respectively, and r2,d is a uniform random number between 0 and 1, different for each dimension.
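The ring-topology neighbor selection can be illustrated with the short Python sketch below, which returns, for each explorer, the best position among itself and its two index-adjacent neighbors (previous and next, wrapping around). The function and variable names are illustrative, and the sketch does not reproduce TOSO’s exact position-update equation.

```python
import numpy as np

def ring_lbest(positions, fitness):
    """Return, for each explorer, the best neighbor position in a ring topology.

    positions : explorer positions, shape (ps, D)
    fitness   : fitness values (lower is better), shape (ps,)
    """
    ps = len(fitness)
    lbest = np.empty_like(positions)
    for i in range(ps):
        # Neighborhood = previous, current, and next explorer by index (ring).
        neighborhood = [(i - 1) % ps, i, (i + 1) % ps]
        best = min(neighborhood, key=lambda j: fitness[j])
        lbest[i] = positions[best]
    return lbest
```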
3.2. TOSO Exploitation Team
The exploitation team refines the search around the current best solution. Instead of moving toward the best position, each exploiter is initialized at the best solution and then undergoes small, controlled movements within a localized region. The magnitude of these movements is determined by an exploitation factor (
βj), which is individually assigned to each exploiter based on its relative performance within the exploitation team. The position update for each exploiter is given by the following:
where
r3 is a random number drawn from a standard normal distribution (to favor exploiting closer proximities) and
Mj,d is a small perturbation motion for dimension
d. Denoting
k to be the particle’s relative position within the swarm, determined by its fitness level, and
ps being the swarm size, the exploitation function is given by the following:
where
γ1,
γ2, and
γ3 are constants that control the search radius of the exploiters. TOSO makes the best particle exploit within 0.01% of the range and the worst one within 50% of the range, relative to their current position. Under these conditions, the parameters are determined to be [
55]
γ1 = 0.001,
γ2 = 0.499, and
γ3 = 2.
4. ETOSO Algorithm
ETOSO incorporates several key enhancements to TOSO aimed at improving its exploration and exploitation capabilities. These enhancements include a linear weight increment for exploitation, indirect refined neighbor selection, the complete removal of the explorers’ rebirth step, and the elimination of any need for parameter settings. With these enhancements, ETOSO becomes a simple, parameter-free algorithm while maintaining competitive performance.
4.1. Linear Weight Increment for Exploitation
TOSO employs an exponential weight increment for exploiters, governed by the three parameters γ1, γ2, and γ3 introduced in Section 3.2. Besides the inconvenience of having to set three parameters, this exponential weighting strategy can lead to significant disadvantages. While the weights guide exploiters toward refining the global best solution, the nonlinear nature of exponential weighting makes the algorithm overly sensitive to changes in fitness rankings, resulting in abrupt alterations in the weighted influence applied to the positions of the exploiters. This behavior may prompt exploiters to converge too quickly to local optima, thereby restricting their capacity to adaptively navigate the search space and explore alternative promising solutions.
ETOSO instead adopts a much simpler linear weighting strategy for the exploitation function. The slope and bias of the linear function are selected so that the starting and ending weights of ETOSO are similar to those of TOSO, as shown in Figure 1 for a population of 30; for any population size ps, the slope and bias are computed directly from ps.
The linear schedule, aside from being simpler and less computationally demanding, produces gradual and consistent weight adjustments, promoting smoother movements and reducing potential oscillations among exploiters. While both strategies rely on particle rankings, the linear approach lets exploiters refine good solutions without the erratic fluctuations that can hinder convergence: the exploitation influence grows progressively, allowing the algorithm to explore the search space effectively while gradually focusing on promising regions. This enables ETOSO to maintain a balanced and effective search process, adapt better to changes in the fitness landscape, and improve the algorithm’s overall robustness and performance.
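As an illustration of this linear schedule, the sketch below assigns rank-based exploitation weights that grow linearly from roughly 0.001 for the best-ranked exploiter to roughly 0.5 for the worst-ranked one, endpoints borrowed from the TOSO constants quoted in Section 3.2. The function name and the assumption that these are the intended endpoints are ours; the sketch does not reproduce the paper’s exact slope and bias formulas.

```python
import numpy as np

def linear_exploitation_weights(ps, beta_best=0.001, beta_worst=0.5):
    """Rank-based exploitation weights that increase linearly with rank.

    ps         : number of exploiters (team/population size)
    beta_best  : weight of the best-ranked exploiter (illustrative endpoint)
    beta_worst : weight of the worst-ranked exploiter (illustrative endpoint)

    Returns an array beta[k] for ranks k = 1..ps (rank 1 = best).
    """
    ranks = np.arange(1, ps + 1)
    slope = (beta_worst - beta_best) / (ps - 1)  # chosen so the endpoints match
    bias = beta_best - slope                     # weight at rank 1 equals beta_best
    return slope * ranks + bias

# Example: for a population of 30, the weights run from 0.001 to 0.5.
print(linear_exploitation_weights(30)[[0, -1]])
```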
4.2. Dynamic Exploration Enhancement
TOSO incorporates randomization through a mutation probability pm. While this introduces some degree of exploration, the level of exploration is heavily dependent on the predetermined value of pm, which may not be optimal across different problem contexts. The authors of TOSO themselves acknowledged the lack of justification behind their choice of the value of pm. In reality, a fixed mutation probability may not adequately serve all optimization scenarios, as higher mutation rates might be required to effectively escape local optima, while lower rates could help maintain focus on promising regions.
However, after examining the effect of the mutation of the explorers, it was found that it does not contribute anything useful to the search process. In fact, it adds to the complexity of TOSO, increases the average execution time, and introduces an additional variable to tune without significant benefit. Therefore, ETOSO employs only the basic exploration update described in Section 3.1, with no rebirth. This simplifies the algorithm while offering potential improvements in performance, speed, and consistency.
4.3. Indirect Refined Neighbor Selection
While both TOSO and ETOSO engage in neighbor selection for each explorer based on their indices (including previous, current, and next explorers), ETOSO’s incorporation of a linear weight increment for exploitation can indirectly enhance the influence of neighbor interactions. The gradual adjustment of weights provides a more stable and predictable movement pattern for exploiters, leading to a smoother transition between exploration and exploitation phases. Consequently, this consistency may facilitate more effective utilization of information from neighboring explorers during the exploration process. By capitalizing on knowledge from nearby particle performances, ETOSO can achieve more informed search directions, resulting in improved exploration efficiency within the optimization task. The pseudocode for the ETOSO algorithm is presented in Figure 2.
5. Experimental Setup
The experimental setup was designed to rigorously evaluate the performance of the proposed algorithm. This section describes the benchmark functions, the comparative algorithms, and the test configuration used in the study.
5.1. Benchmark Functions
A diverse set of benchmark functions was selected in accordance with CEC guidelines [56,57] to validate the proposed algorithm. These functions include unimodal functions (f1: Sphere, f2: Zakharov, f3: Sum Squares), multimodal functions (f4: Schwefel’s Problem 2.22, f5: Schwefel’s Problem 2.26, f6: Rosenbrock, f7: Ackley, f8: Rastrigin, f9: Griewank, f10: Bent Cigar), shifted functions (f11: Shifted Ackley, f12: Shifted Rosenbrock, f13: Shifted Rastrigin, f14: Shifted Griewank), and rotated functions (f15: Rotated Griewank). The equations for all benchmark functions are provided in Table 1. Unimodal functions (f1–f3) are primarily used to evaluate the exploitation capability of the algorithm, while multimodal functions (f4–f10) assess its exploration effectiveness and ability to avoid local optima. The shifted (f11–f14) and rotated (f15) functions introduce additional complexity by altering the location and orientation of the global optimum, making them more challenging and realistic for evaluating algorithm performance.
To provide a visual understanding of the functions, 3D plots of all 15 benchmark functions are included in Figure 3. These plots illustrate the unique characteristics of each function, such as the number of local optima, symmetry, and overall landscape complexity.
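For reference, the standard forms of three of these benchmarks (Sphere, Rastrigin, and Ackley) are sketched below in Python; the exact parameterizations, search bounds, and shift/rotation variants used in the experiments are those listed in Table 1.

```python
import numpy as np

def sphere(x):
    """f1: Sphere -- unimodal, global minimum 0 at the origin."""
    return np.sum(x ** 2)

def rastrigin(x):
    """f8: Rastrigin -- highly multimodal, global minimum 0 at the origin."""
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

def ackley(x):
    """f7: Ackley -- multimodal, global minimum 0 at the origin."""
    d = x.size
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / d) + 20 + np.e)

# Quick check at the known optimum (a D = 30 zero vector).
x0 = np.zeros(30)
print(sphere(x0), rastrigin(x0), ackley(x0))
```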
5.2. Comparative Algorithms and Parameter Settings
ETOSO was compared against a wide range of state-of-the-art optimization algorithms, namely BAT, BEE, BOA, CSA, CS, DE, EHO, FA, FDA, FPA, GOA, GSA, GWO, HHO, MFO, MKA, PDO, ROA, RRO, SCA, SOA, SMA, SSA, TLBO, and WOA. These algorithms were selected to provide a comprehensive comparison across different optimization paradigms, including swarm intelligence, evolutionary algorithms, and physics-inspired methods. The optimal parameter settings for each algorithm, obtained from the literature, are summarized in Table 2. These settings were carefully chosen to ensure fair and accurate comparisons.
5.3. Test Configuration
The experiments were conducted using MATLAB R2024a on a system equipped with an Intel(R) Core(TM) Ultra 9 185H processor (2.30 GHz), 32.0 GB RAM, and a 64-bit operating system. All algorithms used a population size of 30 individuals (except for MKA, which uses 10 due to its many additional internal iterations). The maximum number of function evaluations (FEs) for each dimension (D) was set to 5000 × D, following standard CEC guidelines [56,57]. All results were averaged over 25 independent runs for each algorithm and benchmark function to ensure statistical robustness. Each benchmark function was evaluated across multiple dimensions (D = 2, 5, 10, 30, 50, 100, and 200). To facilitate reproduction of the results, the ETOSO algorithm code is available at https://github.com/adel468/ETOSO.
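A minimal sketch of this evaluation protocol (population of 30, 25 independent runs, and a budget of 5000 × D function evaluations per tested dimension) is shown below; it is an illustrative reconstruction, not code from the linked repository.

```python
# Illustrative reconstruction of the test configuration (not the repository code).
DIMENSIONS = [2, 5, 10, 30, 50, 100, 200]
POP_SIZE = 30          # 10 for MKA, per the text
N_RUNS = 25            # independent replications per algorithm/function pair

def fe_budget(D):
    """Maximum number of function evaluations for dimension D (CEC-style rule)."""
    return 5000 * D

for D in DIMENSIONS:
    print(f"D = {D:>3}: FE budget = {fe_budget(D):,}")
```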
6. Experimental Results
In this section, we evaluate the performance of the ETOSO algorithm through four complementary approaches. The first compares ETOSO with TOSO to identify the improvements and differences that ETOSO presents. The second expands this analysis by contrasting ETOSO with 25 other algorithms, providing insights into its relative strengths and possible applications. The third focuses on statistical significance and sensitivity to population size. Finally, the fourth examines the computational complexity. Together, these evaluations assess ETOSO’s effectiveness and its significance within the broader context of algorithmic approaches.
6.1. Comparative Analysis of ETOSO and TOSO
Figure 4 shows the convergence behavior of the two algorithms for sample benchmark functions with D = 30. Table 3 and Table 4 present the results of the experimental evaluation for dimensions D = 2, 5, 10, 30, 50, 100, and 200. Each table provides the best performance, mean performance, performance standard deviation, and average speed (over 25 replications).
The results show that both algorithms demonstrate strong performance in simpler unimodal functions, consistently achieving optimal values of zero across various dimensions. This indicates their capacity for effective exploration and exploitation in landscapes characterized by a single peak. In contrast, the performance in multimodal functions reveals that while both algorithms maintain a comparable level of effectiveness, ETOSO occasionally outperforms TOSO in speed, especially with increasing dimensions. This suggests that ETOSO may possess enhanced adaptability to complex solution spaces with multiple optima.
Notably, in challenging functions such as the shifted and rotated benchmarks, both algorithms demonstrate their capacity to handle various complexities. However, ETOSO shows significantly lower variability in its results across higher dimensions, indicating greater stability and reliability compared to TOSO. The analysis underscores ETOSO’s performance in tackling complex problems with multiple local optima and deceptive features, notably in higher dimensions (D = 30, D = 50, D = 100, and D = 200). It highlights relatively faster convergence rates and robustness, particularly in navigating rugged landscapes, while maintaining a lower standard deviation compared to TOSO. Overall, while both algorithms managed simpler problems, ETOSO emerges as a robust and versatile option for complex, high-dimensional optimization tasks, balancing considerations of speed, stability, and adaptability in diverse solution landscapes exceptionally well.
The key takeaway from this first comparative test is that ETOSO performs as well as TOSO and often outperforms it, with greater consistency, in various scenarios. This enhanced performance is attained through a simpler structure and without the necessity for parameter tuning. Such improvements position ETOSO as a strong contender for optimization problems, which can be validated by benchmarking it against recently developed algorithms.
6.2. Benchmarking ETOSO Against Other Algorithms
This subsection presents the results of the benchmarking study.
Figure 5 displays the convergence plots for the 25 algorithms across 25 replications, specifically for dimensionality D = 30. This figure highlights the sample performance dynamics of each algorithm for one multimodal function (f7) and one rotated function (f15). It is important to note that the FDA algorithm has been excluded from this figure due to its considerably slower convergence, which would distort the comparative analysis of the other algorithms.
Table 5 and Table 6 provide comprehensive results for all algorithms for D = 5, evaluated over 25 replications with a budget of 5000 × D FEs. Table 5 focuses on functions f1 through f8, while Table 6 covers functions f9 through f15. Furthermore, Table 7 ranks all algorithms based on their average performances for dimensionalities D = 2, D = 5, and D = 10, with a focus on identifying the top-performing algorithms for further analysis, particularly in higher-dimensional scenarios. This ranking aims to distill key insights regarding the strengths of ETOSO and its competitors, thus guiding subsequent investigations into their behaviors under more complex conditions. Together, these results provide a clearer understanding of ETOSO’s capabilities and its positioning within the broader algorithmic landscape.
A significant observation was the performance degradation of many algorithms as the dimensionality of the problem increased. This highlights the inherent challenge of effectively exploring and exploiting the search space in higher-dimensional problems. The search space expands exponentially with increasing dimensionality, making it more difficult for algorithms to locate the global optimum. Based on the results, ETOSO and the 25 comparison algorithms can be grouped into three tiers: top performers, average performers, and low performers.
The first tier includes ETOSO, FPA, DE, GWO, TLBO, ROA, and SCA. This tier encompasses algorithms that consistently demonstrate high-ranking performance across all dimensions. These algorithms exhibited a strong balance of exploration and exploitation capabilities, effectively navigating the search space and converging toward optimal or near-optimal solutions. ETOSO, in particular, consistently achieved top rankings, highlighting its robustness and adaptability to varying problem complexities.
The second tier includes WOA, MFO, HHO, MKA, FDA, PDO, BAT, EHO, RRO, BEE, BOA, FA, and GOA. This tier includes algorithms that exhibited moderate performance across different dimensions. While they achieved reasonable results, their performance might degrade in higher-dimensional problems, suggesting potential limitations in their capacity to thoroughly explore and exploit the search space in more complex scenarios.
The third tier includes CSA, CS, SSA, GSA, SMA, and SOA. This tier comprises algorithms that consistently ranked among the low performers across all dimensions, indicating significant limitations in their exploration and exploitation capabilities. These algorithms often struggled to escape local optima, leading to suboptimal solutions or even divergence, particularly in higher-dimensional problems.
Based on the rankings in Table 7, we have selected the top 10 performing algorithms for a detailed evaluation of how ETOSO performs against them in more complex scenarios. This assessment will specifically explore ETOSO’s effectiveness and examine any limitations in higher-dimensional and multimodal optimization problems. By comparing its performance to these established algorithms, we aim to highlight ETOSO’s strengths and identify areas for improvement, providing a comprehensive understanding of its competitive position in challenging optimization landscapes.
Table 8 details the evaluation of these algorithms for functions f1 through f8, with a dimensionality of D = 200, offering insights based on 25 replications and 5000 × D FEs. Similarly, Table 9 presents the evaluation for functions f9 through f15 under the same conditions. These tables collectively provide a deep dive into the capabilities and behavior of the algorithms in high-dimensional settings. Subsequently, Table 10 ranks the top algorithms in terms of their overall performance (P), speed (S), and consistency (C) for dimensions D = 30, 50, 100, and 200.
As seen in Table 10, high-dimensional scenarios posed significant challenges for most algorithms, negatively impacting their performance. The complexity associated with these high dimensions emphasizes the need for robust optimization strategies. As the dimensionality of the problems increased, most algorithms exhibited a general decline in performance, with FPA and MFO being particularly vulnerable in these scenarios.
Based on the evaluation, the order of the algorithms from the top performer down is ETOSO, GWO, HHO, ROA, DE, SCA, TLBO, WOA, FPA, MFO, and PDO. ETOSO emerged as the best overall performer, demonstrating exceptional speed and consistency across all dimensions. GWO followed closely with strong performance and good speed, while HHO was noted as the fastest algorithm. ROA exhibited satisfactory performance along with reliable consistency. DE and SCA showed moderate performance levels, with TLBO being slightly less competitive in higher dimensions. WOA also demonstrated reasonable performance, though not as strong as the top contenders. FPA and MFO struggled in complex scenarios, particularly in higher dimensions, and PDO consistently displayed lower performance with reliability issues.
6.3. Analysis of Statistical Significance and Population Size Sensitivity
A comprehensive statistical analysis was undertaken to evaluate the performance of ETOSO in comparison to the 10 top-performing algorithms across a range of benchmark functions (f1–f15), particularly focusing on the impact of significantly reducing the population size from 30 to 10 in a high-dimensional problem space (D = 50). The analysis incorporated the Wilcoxon signed-rank test, p-values, and Cliff’s Delta to provide a robust assessment of statistical significance and effect size for 25 replications.
The Wilcoxon signed-rank test, a non-parametric statistical hypothesis test, was employed to determine whether there were statistically significant differences in the performance of ETOSO compared to the other algorithms. The test assesses whether pairs of observations (in this case, performance metrics of two algorithms on the same function) are drawn from the same distribution. The resulting p-values, presented in Table 11, indicate the probability of observing the obtained results (or more extreme results) if there were no actual differences between the algorithms. A p-value below a predetermined significance level (α = 0.05) suggests that the null hypothesis (no difference) can be rejected, indicating a statistically significant difference in performance. In this study, statistically significant differences (p ≤ 0.05) were observed in 74% of the comparisons between ETOSO and the other algorithms across the benchmark functions, indicating that ETOSO’s performance is not equivalent to the other algorithms for a majority of the tested cases. Specifically, statistically significant differences were observed in the performance of ETOSO compared to DE, FPA, HHO, MFO, and TLBO across all 15 benchmark functions. Very low p-values strongly suggest that the performance of ETOSO is significantly different from these algorithms. The Wilcoxon significance results, shown in Table 12, further reinforce this, with values of 1 consistently appearing for these five algorithms when compared to ETOSO, confirming the statistical significance.
Complementing the p-values, Cliff’s Delta, shown in Table 13, was used to quantify the effect size, providing a measure of the magnitude of the difference between the algorithms. A negative Cliff’s Delta value indicates that the second algorithm (the comparative algorithm) is stochastically dominated by the first algorithm (ETOSO). In other words, a negative Cliff’s Delta signifies that ETOSO consistently outperformed the other algorithm. Notably, Cliff’s Delta values approaching −1 were observed for comparisons between ETOSO and DE, FPA, HHO, MFO, and TLBO across all 15 functions. These values indicate a substantial effect size, demonstrating that ETOSO’s performance was consistently and significantly superior to these algorithms. Specifically, a Cliff’s Delta of −1 indicates that ETOSO stochastically dominates the other algorithm, meaning that ETOSO’s values are consistently smaller than the other algorithm’s values across all paired comparisons.
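The statistical comparison can be reproduced with standard tools, as in the hedged sketch below: SciPy’s Wilcoxon signed-rank test supplies the p-values, and Cliff’s Delta is computed from pairwise comparisons of the two result samples. The sign convention (negative when the first sample, here ETOSO, tends to be smaller) matches the description above; the function names and toy data are illustrative.

```python
import numpy as np
from scipy.stats import wilcoxon

def cliffs_delta(a, b):
    """Cliff's Delta for samples a and b: negative when a tends to be smaller."""
    a, b = np.asarray(a), np.asarray(b)
    greater = np.sum(a[:, None] > b[None, :])
    less = np.sum(a[:, None] < b[None, :])
    return (greater - less) / (a.size * b.size)

def compare(etoso_results, other_results, alpha=0.05):
    """Paired comparison over replications of the same benchmark function."""
    stat, p = wilcoxon(etoso_results, other_results)  # Wilcoxon signed-rank test
    delta = cliffs_delta(etoso_results, other_results)
    return {"p_value": p, "significant": p <= alpha, "cliffs_delta": delta}

# Illustrative usage with made-up replication results (25 runs each).
rng = np.random.default_rng(0)
print(compare(rng.normal(0.0, 1e-6, 25), rng.normal(0.5, 0.1, 25)))
```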
In comparisons with GWO, PDO, ROA, SCA, and WOA, the statistical analysis reveals that ETOSO demonstrates neither a statistically significant advantage nor a disadvantage across the benchmark functions. To further differentiate algorithm performance where the statistical analysis identified comparable results, the detailed performance metrics in Table 14 are used for further assessment. These metrics are defined as follows (a short sketch of the error-metric computations follows the list):
Number of Perfect Hits: This metric counts the instances in which an algorithm solution precisely matches the known optimal solution;
Number of Times Closest to Minimum: This metric assesses an algorithm’s ability to achieve the average performance closer than any other compared algorithm to the known minimum across benchmark functions, indicating superior near-optimal convergence;
Number of Std Dev ≤ Threshold: This metric reflects the algorithm’s consistency by counting instances where the performance standard deviation meets a threshold, defined as 1e-6 multiplied by the absolute known minimum for non-zero minima and 1e-6 for zero minima;
Average Normalized Error (NAE): This metric is calculated as the mean of individual normalized errors, derived from the absolute difference between average performance and known minimum, divided by the absolute known minimum or 1 for zero minima, providing a scale-invariant error measure;
Trimmed NAE (TNAE): This metric is the Average Normalized Error (NAE) with the outlier from each algorithm result removed. This is used to provide a more accurate and fair representation of performance by ensuring that comparisons reflect typical results rather than extreme cases;
Average Speed: This metric represents the mean execution time in seconds across all benchmarks, indicating computational efficiency.
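The sketch below shows one way to compute the normalized error metrics just defined. It assumes each algorithm contributes one average-performance value per benchmark and treats the single largest normalized error as the outlier removed for TNAE, which is an assumption about the trimming rule rather than a detail stated in the text.

```python
import numpy as np

def normalized_errors(avg_perf, known_min):
    """Per-benchmark |avg - min| divided by |min|, or by 1 when the minimum is zero."""
    avg_perf = np.asarray(avg_perf, dtype=float)
    known_min = np.asarray(known_min, dtype=float)
    denom = np.where(known_min == 0.0, 1.0, np.abs(known_min))
    return np.abs(avg_perf - known_min) / denom

def nae(avg_perf, known_min):
    """Average Normalized Error over all benchmarks."""
    return normalized_errors(avg_perf, known_min).mean()

def tnae(avg_perf, known_min):
    """Trimmed NAE: drop the single largest normalized error (assumed trimming rule)."""
    errs = np.sort(normalized_errors(avg_perf, known_min))
    return errs[:-1].mean()

# Example with hypothetical averages for four benchmarks whose minima are 0, 0, -418.98, 0.
print(nae([1e-8, 2.0, -418.97, 0.0], [0.0, 0.0, -418.98, 0.0]))
print(tnae([1e-8, 2.0, -418.97, 0.0], [0.0, 0.0, -418.98, 0.0]))
```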
Analyzing the results presented in Table 14, ETOSO demonstrates strong performance across multiple metrics. ETOSO achieved a high number of perfect hits (9), tying for first place with several other algorithms. Furthermore, it exhibited the highest number of times closest to the minimum (9), showcasing its robust convergence behavior. ETOSO also demonstrated the highest level of consistency, with 14 instances of the standard deviation meeting the predefined threshold. ETOSO’s Average Normalized Error (NAE) of 1.87 × 10³ (4.03 if the outlier is removed for each algorithm) is lower than that of the majority of the comparative algorithms, demonstrating its ability to find solutions that are close to the optimal values. Additionally, ETOSO exhibited a very low average speed (0.852 s), ranking second only to HHO, indicating its high computational efficiency and consistent execution time.
These results, combined with the statistical significance demonstrated in the preceding analysis, highlight ETOSO’s superior performance across a range of benchmark functions. Importantly, ETOSO’s performance was consistent even with a significant reduction in population size (from 30 to 10) in a high-dimensional problem space (D = 50). This insensitivity to changes in population size demonstrates ETOSO’s robustness and scalability, making it a powerful and reliable optimization algorithm for challenging optimization problems.
6.4. Computational Complexity and Overhead Analysis
This section presents a comprehensive analysis of the computational complexity and practical overhead inherent in the optimization algorithms under study. While original research papers often lack formal complexity analyses, we derive these complexities based on the algorithms’ structural operations and their scaling with problem dimensionality (D), population size (ps), and function evaluations (FE). The computational complexity of these algorithms is fundamentally determined by FE, D, and, for ETOSO, ps. All algorithms except ETOSO exhibit a complexity of O(FE·D), reflecting the positional updates in D-dimensional space. ETOSO, in contrast, has a complexity of O(FE·(D + log ps)), where the D term arises from positional updates and the log ps term stems from the sorting operations used for neighbor selection. In practical terms, the log ps term has a minimal impact on scalability for typical population sizes, particularly in high-dimensional problems where D significantly outweighs log ps. For instance, with ps = 30, log ps ≈ 3.4, which is negligible compared to D = 500; in this case, the D term dominates ETOSO’s complexity, making the log ps term insignificant. However, for ps = 100 and D = 10, log ps ≈ 4.6, which becomes more relevant because it is nearly half the size of D, so in such lower-dimensional problems the log ps term contributes more significantly to the overall complexity. Nevertheless, for large-dimension optimization scenarios, the D term continues to dominate ETOSO’s complexity, rendering the log ps term relatively minor.
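The relative weight of the two terms can be checked with a line of arithmetic, as in the short sketch below, which compares D against log(ps) (natural logarithm, matching the figures quoted above) for a few representative settings.

```python
import math

# Compare the D and log(ps) contributions to ETOSO's O(FE * (D + log ps)) cost.
for D, ps in [(500, 30), (10, 100), (200, 30)]:
    log_ps = math.log(ps)
    share = log_ps / (D + log_ps)
    print(f"D={D:>3}, ps={ps:>3}: log(ps)={log_ps:.1f}, "
          f"share of (D + log ps) = {share:.1%}")
```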
Beyond theoretical complexity, practical overhead from hidden operations and constant factors significantly influence algorithm performance. These hidden operations include computationally expensive mathematical functions (trigonometric, exponential, logarithmic), sorting, neighbor selection, random number generation, and conditional logic. Based on code analysis, algorithms can be categorized by overhead, as shown in Table 15.
This direct correlation between overhead classification and algorithm performance, despite the similar O(FE·D) complexities of all algorithms other than ETOSO, underscores the importance of considering practical overhead alongside theoretical complexity. Algorithms with low overhead, such as HHO, exhibit faster performance, while those with high overhead, like FPA and ROA, are slower. ETOSO’s slightly higher theoretical complexity due to the log ps term is mitigated by its efficient implementation. This analysis reinforces that Big O complexity alone does not fully explain the observed performance differences, highlighting the necessity of assessing practical overhead for accurate algorithm evaluation. In fact, the empirical average-speed results in the last column of Table 14 are consistent with the overhead classification in Table 15.
7. Discussion and Limitations
ETOSO, with a much simpler structure, presents significant advancements over its predecessor, TOSO, as demonstrated by an extensive benchmarking study against 25 competitive algorithms. The empirical evaluations across diverse benchmark functions suggest that ETOSO delivers improved performance and may serve as a strong contender in the optimization landscape.
Initial comparisons between TOSO and ETOSO reveal that both algorithms perform well on simpler unimodal functions, such as the Sphere and Zakharov benchmarks, consistently achieving optimal values of zero across all tested dimensions. However, when transitioning to multimodal functions like Schwefel’s Problem 2.26 and Rosenbrock, a significant distinction emerges. While TOSO demonstrates competent performance, ETOSO sets itself apart with enhanced speed and robustness at dimensions D = 5 and D = 10. This finding underscores the effectiveness of ETOSO’s design improvements, particularly the linear weight increment, which promotes a more stable and gradual exploitation strategy.
The extensive evaluations against 25 leading algorithms further solidify ETOSO’s position as a frontrunner in swarm optimization. The comparative results reveal that ETOSO outperformed competitors such as GWO and HHO, achieving the best overall performance rank among the tested algorithms. Particularly in high-dimensional spaces (D = 30, D = 50, D = 100, and D = 200), ETOSO not only maintained a competitive edge in solution quality but also exhibited faster convergence rates. The performance metrics indicate that ETOSO consistently converges to optimal or near-optimal solutions even in challenging multimodal landscapes characterized by multiple peaks and valleys.
A notable advantage of ETOSO is its robustness; it demonstrated lower variability in results, suggesting that the algorithm is less susceptible to fluctuations and thus offers greater consistency across multiple replications. This reliability is especially significant in practical optimization scenarios where the stability of results can influence decision-making processes. Furthermore, ETOSO’s adaptability is underscored by its performance on shifted and rotated function benchmarks. The algorithm’s ability to generalize across these challenging transformations further confirms its robustness and versatility compared to both TOSO and the competing algorithms. This adaptability points to the potential of ETOSO to effectively tackle a wide variety of real-world optimization problems.
The comprehensive statistical analyses using the Wilcoxon signed-rank test and Cliff’s Delta provided robust evidence of ETOSO’s superior performance, highlighting statistically significant differences in its effectiveness compared to other algorithms across various benchmarks. Additionally, the examination of computational complexity revealed that while ETOSO’s theoretical complexity includes a log (ps) term, its practical efficiency and reduced overhead allow it to maintain competitive execution speeds in high-dimensional optimization tasks.
While ETOSO demonstrates significant performance improvements, several limitations warrant further study. First, a more thorough analysis of ETOSO’s computational complexity is needed to fully assess its scalability and practicality in resource-constrained settings. This analysis should include a detailed comparison of its computational cost (time and memory usage) against other leading algorithms to determine its suitability for various application scenarios and computational budgets. Second, the benchmark functions used, while diverse, may not fully encompass the wide range of complexities inherent in real-world optimization problems. Therefore, evaluating ETOSO’s performance on a broader set of benchmarks, such as those suggested by CEC2017 and CEC2019, in addition to real-world datasets, is crucial to confirm its generalizability and robustness in less-idealized scenarios and to identify any potential limitations in diverse applications.
8. Conclusions and Future Research
This study introduces ETOSO as a noteworthy advancement in swarm optimization, demonstrating improvements over its predecessor, TOSO, and performing competitively against a range of 25 algorithms. The enhancements, such as the implementation of a linear weight increment for exploitation and simplification of the algorithm to become parameter-free, have led to faster convergence rates and greater robustness. The benchmarking results and statistical analyses suggest that ETOSO shows encouraging performance across various benchmarks and dimensions, indicating potential advantages in the optimization landscape. Furthermore, ETOSO’s ability to sustain good performance with reduced population sizes implies its adaptability for practical applications, making it a viable option for complex optimization tasks in fields like engineering, finance, and artificial intelligence. However, further investigation into its performance in real-world scenarios and additional problem contexts would be beneficial to fully understand its capabilities and limitations.
Future research should explore the application of ETOSO in dynamic and noisy environments while assessing its performance in real-world settings. Investigating the unique mechanisms behind ETOSO’s success may offer deeper insights into optimization strategies that can benefit various applications. Specifically, applying ETOSO to practical problems is essential, including its effectiveness in engineering design optimization (e.g., optimizing designs under constraints or for multi-objective optimization), supply chain management (enhancing efficiency through inventory and transportation considerations), financial modeling (improving market predictions and portfolio optimization), and machine learning (optimizing hyperparameters). Additionally, conducting a formal theoretical analysis of ETOSO’s convergence characteristics and exploration–exploitation balance, alongside comparisons to other top-performing algorithms, will be crucial for understanding its advantages. Evaluating ETOSO’s robustness under dynamic conditions, such as time-varying objective functions and uncertain parameters, will also help determine its resilience in less predictable real-world scenarios.