Article

Advancing Engineering Solutions with Protozoa-Based Differential Evolution: A Hybrid Optimization Approach

1 Department of Data Science and Artificial Intelligence, Faculty of Information Technology, University of Petra, Amman 11196, Jordan
2 Information Science and Technology Department, Sultan Qaboos University, Muscat, Oman
3 Information Science Department, School of Educational Sciences, The University of Jordan, Amman 11196, Jordan
4 Computer Science Department, King Abdullah II School of Information Technology, The University of Jordan, Amman 11196, Jordan
5 Department of Journalism, Media, and Digital Communication, School of Arts, The University of Jordan, Amman 11196, Jordan
* Author to whom correspondence should be addressed.
Automation 2025, 6(2), 13; https://doi.org/10.3390/automation6020013
Submission received: 5 February 2025 / Revised: 6 March 2025 / Accepted: 11 March 2025 / Published: 28 March 2025

Abstract

This paper presents a novel Hybrid Artificial Protozoa Optimizer with Differential Evolution (HPDE), combining the biologically inspired principles of the Artificial Protozoa Optimizer (APO) with the powerful optimization strategies of Differential Evolution (DE) to address complex engineering design challenges. The HPDE algorithm is designed to balance exploration and exploitation, utilizing innovative mechanisms such as autotrophic and heterotrophic foraging behaviors, dormancy, and reproduction processes alongside the DE strategy. The performance of HPDE was evaluated on the CEC2014 benchmark functions, and it was compared against two sets of state-of-the-art optimizers comprising 23 different algorithms. The results demonstrate HPDE’s strong performance: it outperformed its competitors on 24 of the 30 functions against the first set and on 23 functions against the second set. Additionally, HPDE has been successfully applied to a range of complex engineering design problems, including robot gripper optimization, welded beam design, pressure vessel design, spring design, speed reducer design, cantilever beam design, and three-bar truss design. The results consistently showcase HPDE’s strong performance on these engineering problems when compared with the competing algorithms.

1. Introduction

Optimization is a fundamental task across all scientific and engineering disciplines, where it serves to determine the optimal parameter values within systems to achieve desired outcomes efficiently [1]. As technologies evolve and systems become increasingly complex, optimization challenges have intensified, requiring more sophisticated approaches [2,3]. Traditional methods often face limitations, including susceptibility to converging on local optima, difficulties in handling high-dimensional or unknown search spaces, and typically operating with single-solution approaches. These challenges highlight the need for developing advanced algorithms that can navigate complex landscapes more effectively [4].
Metaheuristic algorithms have emerged as powerful tools to address these complex optimization challenges [5]. They provide flexible, high-level strategies designed to explore large and intricate search spaces systematically and efficiently. Unlike traditional optimization techniques, metaheuristics do not rely on gradients or derivatives, which makes them applicable to a wider range of problems, including those that are non-differentiable, discontinuous, or stochastic. This capability is especially critical in real-world applications where the objective functions and constraints may not be precisely defined or may change over time [6].
The development of metaheuristic algorithms is often inspired by natural and social phenomena. Evolutionary Algorithms (EAs) [7] simulate the process of natural selection, where the fittest individuals are selected for reproduction in order to produce offspring of the next generation. Swarm Intelligence (SI) algorithms [8] are inspired by the social behavior of animals, such as birds flocking and fish schooling, and are particularly noted for their robustness and ability to converge rapidly to a good solution. Physics-based methods [9] use metaphors from physics, such as the laws of gravity and motion, to guide the search process, while human-inspired algorithms [10] often simulate human decision-making or social behavior.
These algorithms are particularly valued for their versatility and robustness, allowing them to perform well across a diverse array of problem types and environments. They are capable of balancing exploration (diversifying the search across the global landscape to avoid local optima) with exploitation (intensifying the search around promising areas) [11], which is crucial for achieving near-optimal solutions efficiently. Additionally, the scalability of metaheuristics makes them suitable for solving large-scale problems that are beyond the reach of conventional optimization methods [12].
Moreover, metaheuristics are often hybridized with other optimization techniques to form even more powerful algorithms [13]. For example, combining the global search capability of a metaheuristic with the local search capabilities of a more deterministic method can yield a hybrid algorithm that leverages the strengths of both approaches. This is particularly effective in refining solutions to very high accuracies, which are often required in engineering and industrial applications [14].
Metaheuristic algorithms, inspired by natural phenomena, operate through various mechanisms, yet they fundamentally rely on two core concepts: exploration (diversification) and exploitation (intensification). As stated by Eiben and Schippers, “exploration and exploitation are the two cornerstones of problem-solving by search” [15]. Exploration involves creating diverse solutions to thoroughly investigate the global search space, while exploitation focuses on refining the search around promising areas to find optimal solutions. These two components are inherently conflicting, where an emphasis on exploration can weaken exploitation and vice versa. Despite its importance, achieving the optimal balance between exploration and exploitation remains a challenging task, and no standard approach for achieving it has been established [16]. Furthermore, the “No Free Lunch” (NFL) theorem asserts that no single heuristic can consistently outperform others across all problem domains [17]. This theorem implies that the performance of any algorithm will average out when applied to different problems, suggesting that an algorithm effective for certain problems may not perform as well for others [18]. This understanding has driven the continuous development and proposal of new algorithms, each seeking to better balance exploration and exploitation for improved optimization performance across varied problem sets [18].
The main research contributions of this paper are as follows:
  • A new hybrid optimization algorithm named Hybrid Protozoa Differential Evolution (HPDE) is designed, combining the strengths of the Artificial Protozoa Optimizer (APO) and Differential Evolution (DE) to enhance the optimization process.
  • The HPDE algorithm’s mathematical models are based on the survival behaviors of protozoa and the evolutionary strategies of DE. Specifically, APO’s mechanisms of foraging, dormancy, and reproduction are integrated with DE’s mutation and crossover strategies. Autotrophic foraging and dormancy contribute to exploration, while heterotrophic foraging, reproduction, and DE’s evolutionary strategies enhance exploitation.
  • The HPDE is implemented and evaluated using unimodal, multimodal, hybrid, and composition functions from the CEC2014 benchmark suite of the IEEE Congress on Evolutionary Computation. Experimental results verify that HPDE outperforms 23 state-of-the-art algorithms.
  • The effectiveness of HPDE is further demonstrated by addressing challenging real-world problems, including robot gripper optimization and six engineering design problems.
  • HPDE consistently outperformed leading algorithms across a variety of optimization challenges.
  • Key results include the following: first, HPDE achieved the lowest error rates across the majority of CEC2014 benchmark functions, with significant advantages on multimodal and high-dimensional problems; second, it demonstrated strong robustness and reliability, as evidenced by low standard deviations and standard errors in its results; third, it achieved top rankings in engineering design problems, with optimal or near-optimal solutions that outperformed competitors such as APO, GWO, and WOA.
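For readers unfamiliar with the DE operators mentioned in these contributions, a minimal DE/rand/1/bin trial-vector step can be sketched as follows. This is an illustrative sketch only: the scale factor F and crossover rate CR shown here are common textbook defaults, not the settings used in this paper.

```python
import numpy as np

def de_mutation_crossover(pop, i, F=0.5, CR=0.9, rng=None):
    """Build one DE/rand/1/bin trial vector for individual i.

    pop : (ps, dim) array of candidate solutions.
    F   : differential weight scaling the difference vector.
    CR  : per-dimension crossover probability.
    """
    if rng is None:
        rng = np.random.default_rng()
    ps, dim = pop.shape
    # Pick three distinct individuals, all different from i.
    candidates = [j for j in range(ps) if j != i]
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    # Mutation: v = x_r1 + F * (x_r2 - x_r3)
    v = pop[r1] + F * (pop[r2] - pop[r3])
    # Binomial crossover; j_rand guarantees at least one mutant component.
    mask = rng.random(dim) < CR
    mask[rng.integers(dim)] = True
    return np.where(mask, v, pop[i])
```

The trial vector is then typically compared against `pop[i]` and kept only if its fitness is better (greedy selection).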

Motivation of This Work

Recent advancements in metaheuristic optimization have introduced several adaptive algorithms, each with unique mechanisms for enhancing search performance. Among these, the Artificial Protozoa Optimizer (APO) [19] has gained attention due to its biological inspiration drawn from protozoa survival behaviors, including foraging, dormancy, and reproduction. These behaviors enable APO to adapt its search process dynamically, allowing it to balance the exploration and exploitation phases more effectively. Such adaptive mechanisms are not entirely unique to APO; other algorithms also incorporate adaptive behaviors. However, APO’s distinct process of switching between foraging and dormancy phases provides a flexible strategy for avoiding premature convergence and local optima traps, which are common issues in optimization.
One primary motivation for selecting APO lies in its versatility and robustness when tackling complex optimization tasks. Although Differential Evolution (DE) algorithms have exhibited substantial success in a variety of applications, they often face limitations with highly nonlinear and multimodal functions, which can result in slow convergence and suboptimal solutions [20]. To address these shortcomings, we propose a novel hybrid algorithm that combines APO with DE, termed the Hybrid Protozoa Differential Evolution (HPDE). The integration of APO’s adaptive mechanisms with DE’s mutation and crossover strategies aims to enhance the optimizer’s global convergence, allowing it to balance exploration and exploitation more effectively while reducing stagnation in the search process.
Unlike many traditional algorithms, such as the Genetic Algorithm (GA) [21] and Particle Swarm Optimization (PSO) [22], which often suffer from premature convergence [23], APO inherently incorporates adaptive dormancy and foraging behaviors that reduce the problem of premature convergence. While bio-inspired algorithms like the Grey Wolf Optimizer (GWO) [24] and Whale Optimization Algorithm (WOA) [25] have achieved commendable balances between exploration and exploitation, they frequently require extensive parameter tuning, which can be computationally costly and dependent on specific problem characteristics [26]. Additionally, these algorithms often exhibit sensitivity to initial conditions and parameter settings, resulting in inconsistent performance across diverse problem domains [27].
The proposed HPDE algorithm leverages APO’s adaptability to dynamically alter its search behavior, addressing many of the limitations noted in traditional and bio-inspired algorithms. By integrating APO’s adaptive mechanism with DE’s robust search capabilities, HPDE demonstrates an improved convergence rate and enhanced search capability over a range of benchmark functions and engineering design problems.
Table 1 provides a comparative summary of optimization algorithms that, like the bio-inspired APO, draw on natural or biological processes to guide the search process, detailing each algorithm’s main contributions, advantages, and limitations.
Furthermore, while some hybrid algorithms attempt to integrate multiple optimization strategies to enhance performance, they often introduce increased computational complexity and may still struggle with maintaining population diversity, thereby risking convergence to suboptimal solutions. Another notable limitation is the inadequate handling of constraints in many algorithms, which restricts their applicability to real-world engineering problems that inherently involve complex, multidimensional constraints [3].
Addressing these gaps necessitates the development of an optimizer that not only leverages the exploratory capabilities of bio-inspired algorithms but also incorporates robust mechanisms for exploitation to refine solutions effectively. To address these challenges, we propose the Hybrid Artificial Protozoa Optimizer combined with Differential Evolution (HPDE). This hybrid approach synergistically integrates the Artificial Protozoa Optimizer (APO) [19], with its diverse foraging and reproduction behaviors, with DE, renowned for its efficient mutation and crossover operations that bolster exploitation. By combining these strengths, HPDE aims to achieve an effective balance between exploration and exploitation, reduce the likelihood of premature convergence, and maintain population diversity more effectively. Additionally, the hybrid framework is designed to handle both continuous and discrete optimization problems with constraints, thereby broadening its applicability to a wider range of real-world scenarios. Through this integration, HPDE aspires to overcome the limitations observed in existing algorithms, offering enhanced convergence speed, improved solution quality, and greater robustness across diverse optimization landscapes.
The primary advantage of our proposed work (HPDE) compared with other optimization algorithms lies in its enhanced ability to balance exploration and exploitation during the search process. By integrating the bio-inspired mechanisms of the APO with the robust mutation and crossover operations of DE, our method leverages the strengths of both algorithms to achieve strong performance. Specifically, the HPDE demonstrates improved convergence speed and solution quality, as evidenced by its performance on the CEC2014 benchmark suite consisting of 30 functions. Comparative experiments show that the HPDE consistently outperforms the compared state-of-the-art algorithms, including ChOA, GWO, FOX, WOA, MVO, DOA, MFO, RIME, DBO, WSO, RTH, SHIO, COA, OHO, SCA, PSO, SHO, SDE, and RSA. The hybrid approach effectively combines the exploratory capabilities of APO with the exploitative efficiency of DE, resulting in an optimizer that is robust across diverse optimization problems, including continuous and discrete spaces with constraints.

2. Literature Review

In the diverse landscape of computational optimization, a plethora of innovative metaheuristic algorithms have been developed, each drawing inspiration from various natural behaviors and phenomena. These algorithms, designed to solve complex optimization problems, emulate the adaptive strategies of animals, the dynamic interactions of social insects, and the physical principles observed in the natural world.
Evolution and genetics-inspired optimizers include Evolution Strategies (ES) [44] and Genetic Programming (GP) [45]. These algorithms are based on the principles of biological evolution and genetic variation, respectively, simulating the process of natural selection and genetic operations to optimize complex systems [46]. Moreover, the bird-inspired optimizers are represented by the Falcon Optimization Algorithm (FOA) [47], Greylag Goose Optimization (GGO) [48], Northern Goshawk Optimization (NGO) [49], and Artificial Hummingbird Algorithm (AHA) [50]. Furthermore, the aquatic animal-inspired optimizers feature the Beluga Whale Optimization (BWO) [51], which is inspired by the social and hunting behaviors of beluga whales, and the Jellyfish Search (JS) [52], which mimics the passive drifting mechanism of jellyfish. Additionally, the Whale Optimization Algorithm (WOA) [25] models the bubble-net hunting strategy of humpback whales.
In addition, the plant-inspired optimizer includes the Invasive Weed Optimization algorithm (IWO) [53], which models the spreading and reproductive strategies of invasive weed species, adapting these strategies to solve optimization problems. On the other hand, the disease model-inspired optimizer is the Liver Cancer Algorithm (LCA) [54], which draws from the growth patterns and characteristics of liver cancer cells to develop robust search mechanisms in optimization landscapes.
Moreover, the mammal-inspired optimizers feature the Puma Optimizer (PO) [55], (BMO) [56], Grey Wolf Optimizer (GWO) [24], Adaptive Fox Optimization (AFO) [57], and Honey Badger Algorithm (HBA) [58]. Insect-inspired optimizers include Ant Colony Optimization (ACO) [59], which emulates the pheromone-laying and path-finding behavior of ants; Aphid–Ant Mutualism (AAM) [60], which models the mutualistic relationship between aphids and ants; and Artificial Bee Colony (ABC) [61], inspired by the foraging behavior of honeybees. Ant Lion Optimizer (ALO) [62] draws inspiration from the predatory behavior of antlions, which create traps to capture prey. Physics-based optimizers include the Artificial Electric Field Algorithm (AEFA) [63], Black Hole Algorithm (BH) [64], Electromagnetic Field Optimization (EFO) [65], and Gravitational Search Algorithm (GSA) [66]. These algorithms utilize principles from physics, such as electric fields, black hole dynamics, electromagnetic principles, and gravitational forces, to guide the search process in optimization tasks.
Primate-inspired optimizers include the Artificial Gorilla Troops Optimizer (GTO) [67], which mimics the social structure and collaborative behavior of gorilla troops in their natural habitat. In addition, the activity- and sport-inspired optimizers feature the Alpine Skiing Optimization (ASO) [68], which draws inspiration from the strategic and dynamic movements involved in alpine skiing. Furthermore, the Swarm Intelligence optimizers include Particle Swarm Optimization (PSO) [69], which emulates the social behavior of birds and fish, adapting it to solve optimization problems effectively.
In addition, the natural process and physics-based optimizers feature the Snow Ablation Optimizer (SAO) [70], String Theory Algorithm (STA) [71], Water Cycle Algorithm (WCA) [72], Atom Search Optimization (ASO) [73], Chemical Reaction Optimization (CRO) [74], and Thermal Exchange Optimization (TEO) [75]. The fitness and distance-inspired optimizers include Distance-Fitness Learning (DFL) [76], which leverages the correlation between distance and fitness to inform optimization; materials and chemical structure-based optimizers include the Crystal Structure Algorithm (CryStAl) [77], Equilibrium Optimizer (EO) [78], Henry Gas Solubility Optimization (HGSO) [79], and Nuclear Reaction Optimization (NRO) [80].
The Farmland Fertility Algorithm (FFA) is inspired by the process of enhancing soil fertility in agriculture, modeling optimization as a balance between exploration and exploitation to improve solutions iteratively [81]. The African Vultures Optimization Algorithm (AVOA) mimics the collaborative hunting strategies of vultures, leveraging their unique soaring and keen sight to focus on promising regions in the search space [82]. The Mountain Gazelle Optimizer (MGO) is based on the fast and evasive movements of gazelles in mountainous terrain, using their speed and agility to avoid local optima while searching for better solutions [83]. The Artificial Gorilla Troops Optimizer (GTO) imitates the social intelligence and group behaviors of gorillas, balancing leadership and individual contributions to enhance convergence toward optimal solutions.
In addition, Differential Evolution (DE) algorithms have seen extensive development, especially in addressing complex engineering optimization problems. Researchers have thus developed various DE enhancements through hybrid methodologies, advanced constraint-handling mechanisms, and adaptive control strategies, some of which are listed below.
Cantú et al. [84] examined constraint-handling techniques for DE, particularly suited for process engineering problems characterized by non-convex and discontinuous constraints. This study demonstrated that the gradient-based repair technique was highly effective, underscoring the importance of appropriate constraint-handling mechanisms in optimizing DE’s performance in complex, constrained environments.
Nguyen et al. [85] introduced a Classification-assisted Differential Evolution (CaDE) approach, where an AdaBoost classifier discards infeasible solutions early in the evaluation process. By reducing unnecessary fitness evaluations, this machine learning integration enhances DE’s computational efficiency, making it particularly valuable for constrained engineering tasks that benefit from such strategic filtering.
Kizilay et al. [86] presented a Q-Learning-assisted DE variant (DE–QL), which dynamically adapts mutation strategies and crossover rates. By integrating Q-Learning as a guiding mechanism, DE–QL effectively balances exploration within feasible and infeasible regions, demonstrating its utility for constrained engineering problems that require nuanced constraint navigation.
Samal et al. [87] developed a Modified Differential Evolution (MDE) that dynamically adjusts the scaling factor and crossover ratio, improving DE’s performance on multimodal functions. Experimental results show that MDE outperforms well-established optimization algorithms, such as Particle Swarm Optimization (PSO) and Cuckoo Search, emphasizing the effectiveness of adaptive parameter tuning in navigating complex, multimodal landscapes.
Zhang et al. [88] proposed a hybrid Cuckoo Search and Differential Evolution algorithm (CSDE) to address premature convergence issues by dividing the population into subgroups, allowing Cuckoo Search and DE to operate independently while sharing information. This hybrid model exemplifies how combining complementary optimization techniques can improve convergence rates and solution accuracy, especially in engineering problems requiring a balance between global and local search.
Dragoi and Curteanu [89] provided an extensive review of DE applications in chemical engineering, highlighting DE’s versatility in handling varied constraints and objectives within model and process optimization. This review underscores DE’s potential across engineering fields and its adaptability to optimize complex processes.
In a similar context, Zuo and Gao [90] introduced a dual-population DE variant, ADPDE, where a dynamic population division based on individual potential enhances convergence speed and solution accuracy. By adopting distinct mutation strategies within subgroups, ADPDE achieves robust performance on constrained optimization tasks, demonstrating the utility of population management in DE.
Tang and Wang [91] extended DE’s capability by integrating it into a Whale Optimization Algorithm (WOAAD) based on an atom-like structure. WOAAD’s design incorporates DE-inspired modifications, effectively addressing local convergence issues, which proves advantageous for optimizing engineering design problems requiring precision and adaptive convergence.
Alshinwan et al. [92] proposed a hybrid approach, combining Prairie Dog Optimization with DE (PDO–DE), demonstrating its applicability in both engineering design and network intrusion detection. By enhancing PDO’s search mechanisms with DE’s mutation and crossover operators, PDO–DE strikes an effective balance between exploration and exploitation, underscoring DE’s adaptability across different optimization domains.
De Melo and Carosio [93] explored a Multi-View Differential Evolution (MVDE) approach, in which multiple mutation strategies are applied to generate different population views at each iteration. A winner-takes-all approach merges these views, balancing exploration and exploitation effectively. MVDE’s competitive performance on constrained engineering problems illustrates the benefits of combining varied mutation strategies within DE.
In advancing mutation schemes, Mohamed et al. [94] proposed Enhanced Directed Differential Evolution (EDDE), an algorithm that utilizes both high-quality and low-quality population members to balance exploration and exploitation. EDDE’s robustness and efficiency in constrained domains highlight the importance of mutation scheme customization in optimizing DE’s performance.
Zeng et al. [95] developed a Competitive Mechanism-Integrated Multi-Objective Whale Optimization Algorithm with Differential Evolution (CMWOA) for multi-objective optimization. By incorporating a competitive mechanism, CMWOA uses a refined crowding distance calculation for improved population density depiction and guides population updates more effectively. DE’s adaptive parameters further diversify the population, with testing on multiple benchmark functions demonstrating CMWOA’s ability to outperform other methods in convergence and accuracy. CMWOA’s application to real-world problems further verifies its practicality in diverse optimization settings, highlighting DE’s flexibility in hybrid configurations.
Despite the clear success of these strategies in tackling diverse optimization tasks, several deficiencies remain. Many existing approaches confront the problem of stagnation in local minima, often due to inadequate exploration mechanisms or static parameter settings. Furthermore, hybrid methods sometimes lack well-defined theoretical guidelines for combining distinct search strategies or for adjusting parameters adaptively, thereby limiting scalability and robustness in more complex or higher-dimensional tasks. Real-world engineering applications, in particular, can pose dynamic constraints that many algorithms are neither explicitly designed to handle nor systematically tested against. These gaps motivate the introduction of improved hybrid and bio-inspired metaheuristics that pursue a better exploration–exploitation trade-off, integrate adaptive parameter tuning, and scale effectively to large, constrained, or time-critical problems.
In this context, the present study proposes a new hybrid optimization method, termed Hybrid Protozoa Differential Evolution (HPDE), which combines the survival behaviors observed in protozoa with the evolutionary strengths of DE. Protozoa-inspired foraging, dormancy, and reproduction mechanisms (as modeled by the Artificial Protozoa Optimizer, APO) are merged with DE’s well-known mutation and crossover strategies. Autotrophic foraging and dormancy bolster the exploration capabilities of the algorithm, while heterotrophic foraging and reproduction interact synergistically with DE’s evolutionary operators to reinforce exploitation. This fusion is carefully formulated to overcome local minima entrapment and to maintain a flexible, adaptive framework for parameter adjustments in real-world scenarios.
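The control flow described above — protozoa-style behaviors interleaved with DE operators — might be organized roughly as follows. This is only a schematic sketch: the dormancy/foraging updates are simplified placeholders for APO’s actual models (given later in the paper), and the pf, F, and CR values are illustrative assumptions rather than the algorithm’s published settings.

```python
import numpy as np

def hpde_sketch(f, lb, ub, ps=20, iters=100, pf=0.1, F=0.5, CR=0.9, seed=0):
    """Schematic hybrid loop: protozoa-style behaviors plus DE trial vectors.

    Placeholder dynamics only; illustrates the overall control flow,
    not the actual HPDE update equations.
    """
    rng = np.random.default_rng(seed)
    dim = len(lb)
    X = rng.uniform(lb, ub, size=(ps, dim))       # initial population
    fit = np.array([f(x) for x in X])
    for _ in range(iters):
        for i in range(ps):
            if rng.random() < pf:
                # Dormancy/reproduction-style move (exploration):
                # re-seed the individual anywhere in the search space.
                cand = rng.uniform(lb, ub, size=dim)
            else:
                # DE/rand/1/bin trial vector (exploitation).
                r1, r2, r3 = rng.choice(
                    [j for j in range(ps) if j != i], size=3, replace=False)
                v = X[r1] + F * (X[r2] - X[r3])
                mask = rng.random(dim) < CR
                mask[rng.integers(dim)] = True
                cand = np.clip(np.where(mask, v, X[i]), lb, ub)
            fc = f(cand)
            if fc < fit[i]:          # greedy selection keeps the better solution
                X[i], fit[i] = cand, fc
    best = int(np.argmin(fit))
    return X[best], fit[best]
```

On a simple test function such as the 2D sphere, this loop steadily drives the best fitness down, since greedy selection makes improvement monotone.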

2.1. Overview of Solving Highly Nonlinear and Complex Engineering Design Optimization Problems Using Metaheuristic Tools

Recent work has focused on improving optimization algorithms for complex engineering problems. Akl et al. proposed an improved Harris Hawks Optimization (IHHO), incorporating logarithmic and exponential strategies to avoid local optima [96]. Yu et al. introduced the Improved Adaptive Grey Wolf Optimization (IAGWO), enhancing convergence speed and accuracy with velocity and Inverse Multiquadratic Function adjustments [97]. Liu et al. developed the PRPCSO algorithm, combining Padé Approximation with intelligent population reduction [98]. Garg et al. proposed the LX-TLA, using the Laplacian operator to improve Teaching Learning Algorithm performance [99]. Gopi and Mohapatra introduced OLCA, utilizing opposition-based learning for global optimization [100]. Finally, Wang et al. presented LGJO, which leverages chaotic mapping and Gaussian mutation for industrial applications [101].
Furthermore, Moustafa et al. proposed the Subtraction-Average-Based Optimizer (SAOA), enhanced with cooperative learning, which outperformed GWO and PSO in power system applications [102]. El-Shorbagy and Elazeem’s Convex Combination Search Algorithm (CCS) effectively balanced exploration and exploitation in multimodal tests and engineering challenges [103]. Givi et al.’s Red Panda Optimization (RPO), inspired by red panda behaviors, surpassed several state-of-the-art algorithms in benchmarks and engineering problems [104]. Similarly, Gharehchopogh et al. introduced the Chaotic Quasi-Oppositional Farmland Fertility Algorithm (CQFFA), improving exploration and convergence with chaotic maps and learning mechanisms [105].
Pan et al.’s Gannet Optimization Algorithm (GOA) demonstrated success in large-scale constrained optimization [106], while Tang et al.’s hybrid PSODO improved global search and convergence speed [91]. Recent advancements in optimization algorithms have shown significant improvements in solving complex engineering problems. Hu et al. proposed the ACEPSO, which introduced adaptive population grouping and co-evolved mechanisms to enhance diversity and avoid local optima [107]. Ewees introduced a harmony-driven method, GBOHS, integrating Harmony Search (HS) with a gradient-based optimizer to improve convergence and accuracy in global optimization and feature selection [108]. Pan et al. improved the Gannet Optimization Algorithm (GOA) by incorporating parallel and compact strategies, enhancing memory efficiency, and avoiding local solutions in engineering tasks [109]. Che and He enhanced the Seagull Optimization Algorithm (SOA) by introducing mutualism and commensalism mechanisms, improving the algorithm’s exploitation capabilities in complex engineering challenges [110].

2.2. Overview of Optimization Problems

Optimization problems are fundamental in various fields of science and engineering, where the goal is to find the best solution among all feasible solutions. These problems arise in numerous applications such as machine learning, operations research, engineering design, finance, logistics, and many others. The primary objective is to optimize a performance criterion, which is represented mathematically by an objective function [111].
A general optimization problem can be formulated as shown in Equation (1):
$\min_{x} \; f(x) \quad \text{subject to} \quad x \in S,$
where $f(x)$ is the objective function to be minimized, $x = [x_1, x_2, \ldots, x_{dim}]$ is the vector of decision variables, and $S$ represents the feasible solution space defined by the constraints of the problem.
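For concreteness, the formulation in Equation (1) can be instantiated with a simple objective. The sphere function and box-shaped feasible region below are chosen purely for illustration; they are not drawn from the paper’s benchmark suite.

```python
import numpy as np

# Objective f(x): the sphere function, minimized at the origin.
def f(x):
    return float(np.sum(np.asarray(x) ** 2))

# Feasible set S: a box defined by per-variable lower and upper bounds.
x_min = np.array([-5.0, -5.0, -5.0])
x_max = np.array([5.0, 5.0, 5.0])

def in_S(x):
    return bool(np.all(x >= x_min) and np.all(x <= x_max))

x = np.array([1.0, -2.0, 0.5])
assert in_S(x)
f(x)  # 1 + 4 + 0.25 = 5.25
```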
The feasible solution space $S$ is typically defined by a set of constraints, which may include the following.
Boundary constraints: variables are bounded within lower and upper limits as shown in Equation (2):
$x_{\min} \le x \le x_{\max},$
where $x_{\min}$ and $x_{\max}$ are vectors containing the lower and upper bounds for each decision variable.
Equality constraints: functions that must be satisfied exactly as shown in Equation (3):
$h_j(x) = 0, \quad j = 1, 2, \ldots, n_e,$
Here, $h_j(x)$ are the equality constraint functions.
Inequality constraints: functions that impose upper or lower limits, as shown in Equation (4):
$g_k(x) \le 0, \quad k = 1, 2, \ldots, n_i,$
Here, $g_k(x)$ are the inequality constraint functions.
In many real-world applications, the optimization problem involves complex, nonlinear, and high-dimensional objective functions with multiple local minima or maxima. These characteristics make it challenging to find the global optimum using traditional optimization methods [112].
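One common way metaheuristics handle the constraints in Equations (2)–(4) is a static penalty function, which converts a constrained problem into an unconstrained one by adding a cost proportional to the total constraint violation. A minimal sketch follows; the penalty weight `rho` is an illustrative assumption, and the paper’s own constraint-handling scheme may differ.

```python
import numpy as np

def penalized(f, x, h_list=(), g_list=(), rho=1e6):
    """Penalized objective: f(x) plus rho times the total violation.

    h_list: equality constraints   h_j(x) = 0
    g_list: inequality constraints g_k(x) <= 0
    """
    violation = sum(abs(h(x)) for h in h_list)        # |h_j(x)| for equalities
    violation += sum(max(0.0, g(x)) for g in g_list)  # only positive g_k count
    return f(x) + rho * violation

# Example: minimize x0^2 + x1^2 subject to x0 + x1 - 1 = 0.
f = lambda x: x[0] ** 2 + x[1] ** 2
h = lambda x: x[0] + x[1] - 1.0
penalized(f, np.array([0.5, 0.5]), h_list=[h])  # feasible point: returns 0.5
penalized(f, np.array([0.0, 0.0]), h_list=[h])  # infeasible: heavily penalized
```

Any unconstrained metaheuristic can then minimize `penalized(...)` directly, at the cost of tuning `rho`.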
To address these challenges, metaheuristic algorithms have been developed. In this research, we focus on applying and analyzing the hybrid Artificial Protozoa Optimizer (APO) with DE, i.e., HPDE, for solving continuous optimization problems as formulated in Equation (1), as well as for solving engineering design problems.

2.3. Overview of Protozoa Optimizer

The Artificial Protozoa Optimizer (APO) [19] is a bio-inspired optimization algorithm that mimics the survival behaviors of protozoa, particularly focusing on foraging, dormancy, and reproduction. This optimizer is designed to solve continuous optimization problems by balancing exploration and exploitation through mathematical models derived from protozoa's biological activities.
Foraging behavior in protozoa involves both autotrophic and heterotrophic mechanisms to obtain nutrients. The mathematical models for these modes are defined as follows.

2.4. APO Mathematical Models

This section introduces the mathematical framework of APO. The solution set is represented by a population of protozoa, where each protozoan occupies a position in a multidimensional space defined by $dim$ variables.

2.4.1. Notations and Nomenclature

The notations and symbols utilized in the model are summarized as follows. The population size is denoted by p s , while d i m represents the number of decision variables. The dimension index, indicated by d i , ranges from 1 to d i m . The notation n p stands for the number of neighbor pairs within the model, and f refers to the foraging factor. Weight factors in autotrophic and heterotrophic modes are represented by w a and w h , respectively.
The proportion fraction of dormancy and reproduction is denoted as p f , while p a h and p d r represent the probability of autotrophic and heterotrophic behavior and the probability of dormancy and reproduction, respectively. A random number within the range [ 0 , 1 ] is noted as r a n d . The current iteration number is indicated by i t e r , with i t e r max representing the maximum iteration count permissible.
The position of the ith protozoan is denoted by X i , whereas X min and X max denote the lower and upper bounds of the decision variables, respectively. The mapping vectors used in foraging and reproduction are represented by M f and M r . An index vector within dormancy and reproduction processes is denoted by D r index , while a random vector containing elements within [ 0 , 1 ] is represented by R a n d .
The ceiling and floor functions are represented as $\lceil \cdot \rceil$ and $\lfloor \cdot \rfloor$, respectively. The fitness function is denoted by $f(\cdot)$, and $\mathrm{sort}(\cdot)$ is used to arrange fitness values in ascending order. Additionally, $\mathrm{rand\_perm}(n, l)$ generates a vector of $l$ unique integers selected randomly within the range 1 to $n$. Lastly, the Hadamard (element-wise) product is represented by the symbol $\odot$. This notation provides a clear and concise reference for interpreting the model's various parameters and operations.

2.4.2. Foraging Mechanism

The foraging behavior in protozoa, central to this optimization model, is driven by both internal and external factors. Internal factors are related to each protozoan’s individual foraging characteristics, while external factors capture environmental influences, such as interactions with neighboring protozoa.

2.4.3. Autotrophic Mode

In autotrophic mode, protozoa synthesize nutrients when exposed to light, prompting movement toward regions with optimal light conditions. Protozoa in areas of high light intensity will move to positions with reduced intensity and vice versa. Assuming that the light conditions around a given protozoan, say the jth protozoan, are favorable for photosynthesis, the mathematical model guiding the movement of the ith protozoan toward this target is as shown in Equation (5) [19]:
$$X_i^{new} = X_i + f \cdot (X_j - X_i) + \frac{1}{np} \sum_{k=1}^{np} w_a \cdot (X_{k-} - X_{k+}) \odot M_f$$
Here, $X_i^{new}$ and $X_i$ represent the updated and current positions of the $i$th protozoan, respectively. $X_j$ is the position of the selected protozoan acting as a target. The terms $X_{k-}$ and $X_{k+}$ denote neighboring protozoa chosen based on rank relative to $i$: $X_{k-}$ refers to a protozoan with a rank lower than $i$, while $X_{k+}$ has a rank higher than $i$. This structure enables adaptive movement within the foraging process.
The spatial positions and ranking of protozoa are specified as shown in Equations (6) and (7) [19]:
$$X_i = [x_i^1, x_i^2, \ldots, x_i^{dim}]$$
$$X \leftarrow \mathrm{sort}(X) \quad \text{(population sorted by ascending fitness)}$$
The foraging factor f dynamically adjusts based on the iteration count as shown in Equation (8) [19]:
$$f = rand \cdot \left(1 + \cos\left(\frac{iter}{iter_{\max}} \cdot \pi\right)\right)$$
Additional parameters defining neighborhood structure and weight are given in Equations (9) and (10) [19]:
$$np_{\max} = \left\lfloor \frac{ps - 1}{2} \right\rfloor$$
$$w_a = e^{-\left|\frac{f(X_{k-})}{f(X_{k+}) + eps}\right|}$$
The mapping vector M f , which dictates the selection of dimensions during foraging, is defined in Equation (11) [19]:
$$M_f[di] = \begin{cases} 1, & \text{if } di \in \mathrm{rand\_perm}\left(dim, \lceil dim \cdot i/ps \rceil\right) \\ 0, & \text{otherwise} \end{cases}$$
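For illustration only (this sketch is ours, not code from [19]), the autotrophic update of Equation (5) can be written compactly with NumPy; the pair weighting follows our reading of Equation (10), and the mapping vector `Mf` and neighbor pairs are assumed to be supplied by the caller:

```python
import numpy as np

rng = np.random.default_rng(0)

def autotroph_update(X, fit, i, j, pairs, f, Mf, eps=1e-12):
    """One autotrophic move for protozoan i toward target j (Eq. 5).

    X    : (ps, dim) population, assumed sorted by ascending fitness
    pairs: list of (k_minus, k_plus) neighbor index pairs
    f    : foraging factor (Eq. 8); Mf: 0/1 mapping vector (Eq. 11)
    """
    drift = np.zeros(X.shape[1])
    for k_m, k_p in pairs:
        w_a = np.exp(-abs(fit[k_m] / (fit[k_p] + eps)))  # weight factor (Eq. 10)
        drift += w_a * (X[k_m] - X[k_p])
    drift /= max(len(pairs), 1)
    return X[i] + f * (X[j] - X[i]) + drift * Mf         # Hadamard product with Mf

# Toy usage: 4 protozoa, 3 dimensions, one neighbor pair
X = rng.uniform(-1, 1, size=(4, 3))
fit = np.sort(rng.uniform(0, 1, 4))
f = rng.random() * (1 + np.cos(np.pi * 1 / 100))         # Eq. 8 at iter = 1
Mf = np.array([1.0, 0.0, 1.0])
x_new = autotroph_update(X, fit, i=2, j=0, pairs=[(1, 3)], f=f, Mf=Mf)
print(x_new.shape)  # (3,)
```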

2.4.4. Heterotrophic Mode

In low-light environments, protozoa can absorb organic nutrients directly from the surroundings. Given that $X_{near}$ represents a nutrient-rich location nearby, the heterotrophic mode governs the protozoan's movement toward this point using the model shown in Equation (12) [19]:
$$X_i^{new} = X_i + f \cdot (X_{near} - X_i) + \frac{1}{np} \sum_{k=1}^{np} w_h \cdot (X_{i-k} - X_{i+k}) \odot M_f$$
The location of X near is determined as in Equation (13) [19]:
$$X_{near} = \left(1 \pm Rand \cdot \left(1 - \frac{iter}{iter_{\max}}\right)\right) \odot X_i$$
The weight factor w h for the heterotrophic mode is calculated as in Equation (14) [19]:
$$w_h = e^{-\left|\frac{f(X_{i-k})}{f(X_{i+k}) + eps}\right|}$$
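A minimal sketch of the heterotrophic move (ours, under the same assumptions as the autotrophic sketch: caller supplies the neighbor pairs and the mapping vector) combines Equations (12)–(14):

```python
import numpy as np

rng = np.random.default_rng(1)

def heterotroph_update(X, fit, i, pairs, f, Mf, iter_, iter_max, eps=1e-12):
    """Heterotrophic move for protozoan i, following Equations (12)-(14)."""
    dim = X.shape[1]
    sign = rng.choice([-1.0, 1.0])
    # nutrient-rich point near X_i; the perturbation shrinks as iterations grow (Eq. 13)
    x_near = (1.0 + sign * rng.random(dim) * (1.0 - iter_ / iter_max)) * X[i]
    drift = np.zeros(dim)
    for k_m, k_p in pairs:
        w_h = np.exp(-abs(fit[k_m] / (fit[k_p] + eps)))  # weight factor (Eq. 14)
        drift += w_h * (X[k_m] - X[k_p])
    drift /= max(len(pairs), 1)
    return X[i] + f * (x_near - X[i]) + drift * Mf

X = rng.uniform(-1, 1, size=(4, 3))
fit = np.sort(rng.uniform(0, 1, 4))
y = heterotroph_update(X, fit, i=1, pairs=[(0, 2)], f=0.5, Mf=np.ones(3),
                       iter_=50, iter_max=100)
print(y.shape)  # (3,)
```

Note how `x_near` collapses onto the current position as `iter_` approaches `iter_max`, shifting the move from exploration to exploitation.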

2.5. Dormancy

Dormancy is an adaptive survival response activated during unfavorable conditions: a protozoan in a dormant state is replaced by a newly generated one, maintaining the population size. The dormancy model is defined as in Equation (15) [19]:
$$X_i^{new} = X_{\min} + Rand \odot (X_{\max} - X_{\min})$$
Here, X min and X max represent the lower and upper bounds, respectively, defined as shown in Equation (16) [19]:
$$X_{\min} = [lb_1, lb_2, \ldots, lb_{dim}], \quad X_{\max} = [ub_1, ub_2, \ldots, ub_{dim}]$$

2.6. Reproduction

Reproduction is modeled as binary fission, where a protozoan divides into two identical copies. This division is simulated by duplicating the protozoan and applying a perturbation. The reproduction model is shown in Equation (17) [19]:
$$X_i^{new} = X_i \pm rand \cdot \left(X_{\min} + Rand \odot (X_{\max} - X_{\min})\right) \odot M_r$$
The mapping vector for reproduction, M r , specifies the dimensions involved in the perturbation:
$$M_r[di] = \begin{cases} 1, & \text{if } di \in \mathrm{rand\_perm}\left(dim, \lceil dim \cdot rand \rceil\right) \\ 0, & \text{otherwise} \end{cases}$$
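The dormancy and reproduction operators can be sketched as follows (our illustration; the number of dimensions perturbed by $M_r$ is an assumed interpretation of the randomized permutation above):

```python
import numpy as np

rng = np.random.default_rng(2)

def dormancy(x_min, x_max, dim):
    """Dormant protozoan replaced by a fresh random position (Eq. 15)."""
    return x_min + rng.random(dim) * (x_max - x_min)

def reproduce(x, x_min, x_max):
    """Binary fission: duplicate x and apply a masked random perturbation (Eq. 17)."""
    dim = x.size
    # mapping vector M_r: a random subset of dimensions is perturbed
    n_dims = max(1, int(np.ceil(rng.random() * dim)))  # assumed size of rand_perm(dim, rand)
    Mr = np.zeros(dim)
    Mr[rng.permutation(dim)[:n_dims]] = 1.0
    sign = rng.choice([-1.0, 1.0])                     # the +/- in Eq. 17
    return x + sign * rng.random() * (x_min + rng.random(dim) * (x_max - x_min)) * Mr

x_min, x_max = -1.0, 1.0
child = reproduce(np.zeros(5), x_min, x_max)
fresh = dormancy(x_min, x_max, 5)
print(child.shape, fresh.shape)  # (5,) (5,)
```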

2.7. Overview of Differential Evolution

Differential Evolution (DE) is a population-based optimization algorithm where a population of candidate solutions iteratively evolves toward optimal solutions by applying mutation, crossover, and selection operators. The DE process is defined mathematically through several equations, each governing a specific stage of the evolution process.
The DE algorithm begins with the initialization of a population of p s candidate solutions, where each candidate is represented by a vector X i with d i m decision variables. Let X i ( g ) = { x i , 1 ( g ) , x i , 2 ( g ) , , x i , d i m ( g ) } denote the position of the ith individual in the population at generation g. Each candidate vector is initialized randomly within the bounds specified by X min and X max .
  • Mutation
The mutation process creates a mutant vector for each target vector X i ( g ) by combining the position vectors of other randomly selected candidates. As shown in Equation (19), this mutation strategy enables the exploration of new solutions based on the diversity within the population.
$$V_i^{(g)} = X_{r_1}^{(g)} + F \cdot \left(X_{r_2}^{(g)} - X_{r_3}^{(g)}\right),$$
where $V_i^{(g)}$ represents the mutant vector; $X_{r_1}^{(g)}$, $X_{r_2}^{(g)}$, and $X_{r_3}^{(g)}$ are randomly selected distinct individuals from the population such that $r_1 \ne r_2 \ne r_3 \ne i$. The parameter $F$ is a scaling factor (typically within the range $[0, 2]$) that controls the amplification of the differential variation between individuals.
  • Crossover
To increase diversity, DE applies a crossover operator that combines elements of the mutant vector $V_i^{(g)}$ and the target vector $X_i^{(g)}$ to produce a trial vector $U_i^{(g)}$. The crossover is typically defined as shown in Equation (20); this strategy helps maintain diversity by blending solutions from different vectors.
$$U_{i,j}^{(g)} = \begin{cases} V_{i,j}^{(g)}, & \text{if } rand_j \le C_r \text{ or } j = j_{rand}, \\ X_{i,j}^{(g)}, & \text{otherwise}, \end{cases}$$
where $j$ represents the dimension index, $C_r$ is the crossover probability (within $[0, 1]$), $rand_j$ is a random number generated for each dimension $j$, and $j_{rand}$ is a randomly chosen index that ensures at least one component from the mutant vector $V_i^{(g)}$ is included in the trial vector $U_i^{(g)}$.
  • Selection
The selection step determines whether the trial vector U i ( g ) should replace the target vector X i ( g ) in the next generation. This decision is based on the fitness values of the vectors, where the fitness function f ( · ) evaluates the objective to be minimized. The selection process is defined as follows:
$$X_i^{(g+1)} = \begin{cases} U_i^{(g)}, & \text{if } f(U_i^{(g)}) \le f(X_i^{(g)}), \\ X_i^{(g)}, & \text{otherwise}. \end{cases}$$
As shown in Equation (21), the trial vector U i ( g ) replaces the target vector X i ( g ) if it yields a lower fitness value, thus ensuring that better solutions are retained in the population for subsequent generations.
  • Termination
The algorithm iterates through the mutation, crossover, and selection steps until a predefined stopping criterion is met, typically either a maximum number of generations $iter_{\max}$ or an acceptable error threshold. Let $iter$ denote the current iteration; the termination condition can then be represented as in Equation (22): the algorithm concludes when either the maximum iteration count is reached or the target accuracy is achieved.
$$iter \ge iter_{\max} \quad \text{or} \quad |f(X_{best}) - f_{target}| \le \epsilon,$$
where X best is the best solution found, f target is the target fitness, and ϵ is a small threshold value.
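The four stages above fit in a few lines of Python. The sketch below is a generic DE/rand/1/bin implementation (our illustration with typical parameter values, not the authors' code):

```python
import numpy as np

def de(f_obj, dim, ps=30, F=0.8, Cr=0.9, iters=300, lo=-5.0, hi=5.0, seed=0):
    """Classic DE/rand/1/bin following Equations (19)-(21)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, size=(ps, dim))
    fit = np.array([f_obj(x) for x in X])
    for _ in range(iters):
        for i in range(ps):
            # mutation (Eq. 19): three distinct individuals, none equal to i
            r1, r2, r3 = rng.choice([j for j in range(ps) if j != i], 3, replace=False)
            v = X[r1] + F * (X[r2] - X[r3])
            # binomial crossover (Eq. 20): j_rand guarantees one mutant component
            mask = rng.random(dim) <= Cr
            mask[rng.integers(dim)] = True
            u = np.clip(np.where(mask, v, X[i]), lo, hi)
            # greedy selection (Eq. 21)
            fu = f_obj(u)
            if fu <= fit[i]:
                X[i], fit[i] = u, fu
    best = fit.argmin()
    return X[best], fit[best]

sphere = lambda x: float(np.sum(x**2))
x_best, f_best = de(sphere, dim=5)
print(x_best.shape)  # (5,)
```

On this toy sphere function, the run above converges to a fitness near zero well within the iteration budget.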

3. Proposed HPDE Optimization Algorithm

In this section, the details of the new hybrid algorithm are presented. The APO primarily focuses on mimicking biological behaviors to explore and exploit the search space; however, its exploration capabilities can be limited, especially in high-dimensional or complex landscapes. By integrating DE's mutation and crossover operations, the hybrid algorithm gains enhanced exploration capabilities, allowing it to escape local optima and traverse the search space more effectively.
Maintaining diversity in the population is crucial for avoiding premature convergence. APO's dormancy and reproduction forms introduce diversity by reinitializing or modifying protozoa positions, and the addition of DE operations enhances it further by generating new candidate solutions from combinations of multiple individuals. This hybrid approach helps preserve population diversity and improves the algorithm's ability to find global optima.

Furthermore, APO adapts its behavior based on the current state of the population and the optimization process. The incorporation of DE introduces an additional adaptive layer, where DE operations are applied with a certain probability. This probabilistic application ensures that the hybrid algorithm can dynamically balance exploration and exploitation according to the optimization stage, leading to more efficient convergence.

The hybrid algorithm thus combines APO's biologically inspired mechanisms with DE's evolutionary strategies: APO's foraging forms (autotroph and heterotroph) and DE's mutation and crossover operations complement each other, resulting in a more robust optimization process. The hybridization ensures that the strengths of both approaches are utilized effectively, leading to improved overall performance.

3.1. Mathematical Model of HPDE

  • Initialization
The initial population of protozoa is generated randomly within the bounds X min and X max , as shown in Equation (23):
$$x_i^0 = X_{\min} + r_i \cdot (X_{\max} - X_{\min}), \quad i = 1, 2, \ldots, ps$$
where r i is a vector of uniformly distributed random numbers in the interval [0, 1].
  • Fitness Evaluation
The fitness of each protozoa is evaluated using the objective function f, as shown in Equation (24):
$$f_i^0 = f(x_i^0), \quad i = 1, 2, \ldots, ps$$
  • Main Loop (iter = 1 to iter_max)
Sort by fitness: the population is sorted by fitness in ascending order, as shown in Equation (25):
$$\{x_i, f_i\} \leftarrow \mathrm{sort}(\{x_i, f_i\}), \quad i = 1, 2, \ldots, ps$$
Proportion fraction ($pf$): the proportion fraction $pf$ is calculated as shown in Equation (26):
$$pf = pf_{\max} \cdot r, \quad r \sim U(0, 1)$$
where U ( 0 , 1 ) is a uniform random variable. p f max is the maximum proportion fraction, a predefined parameter that sets the upper limit for p f , and r is a random number drawn from a uniform distribution in the interval [ 0 , 1 ] , denoted as r U ( 0 , 1 ) .
The role of p f max in the algorithm is to act as a control parameter that determines the maximum possible value of the proportion fraction p f , effectively controlling the maximum proportion of the population that can participate in a specific behavior during each iteration. By adjusting p f max , one can balance the exploration and exploitation capabilities of the algorithm: a higher p f max allows more protozoa to engage in behaviors like reproduction, enhancing exploration, while a lower p f max focuses the algorithm on exploitation by limiting the number of protozoa undergoing certain behaviors. The multiplication by a random variable r introduces stochasticity, allowing p f to vary between 0 and p f max in each iteration, which helps prevent premature convergence and maintains diversity in the population. Selecting p f max is typically performed through empirical tuning or recommendations from the literature, with common values ranging between 0.05 and 0.3, but the optimal value can depend on the specific problem and characteristics of the search space.
Random indices for dormancy/reproduction: Random indices for protozoa entering dormancy or reproduction forms are selected as shown in Equation (27):
$$R = \{i_1, i_2, \ldots, i_k\}, \quad k = \lceil ps \cdot pf \rceil$$
where $ps$ is the population size, $pf$ is the proportion fraction determining the fraction of the population to undergo dormancy or reproduction, and $\lceil \cdot \rceil$ denotes the ceiling function. The set $R$ contains $k$ unique random indices $i_j$ selected from the population, where each $i_j$ satisfies $1 \le i_j \le ps$. These indices correspond to the protozoa that will engage in dormancy or reproduction behaviors during the current iteration of the algorithm.
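The $pf$ draw of Equation (26) and the index selection of Equation (27) can be sketched as follows (our illustration; `pf_max = 0.1` is an example value within the 0.05–0.3 range discussed below, and zero-based indices are used instead of the paper's one-based ones):

```python
import numpy as np

rng = np.random.default_rng(3)

def select_dr_indices(ps, pf_max=0.1):
    """Draw pf (Eq. 26) and the dormancy/reproduction index set R (Eq. 27)."""
    pf = pf_max * rng.random()                  # pf = pf_max * r, r ~ U(0, 1)
    k = int(np.ceil(ps * pf))                   # k = ceil(ps * pf)
    R = set(rng.permutation(ps)[:k].tolist())   # k unique indices (0-based here)
    return pf, R

pf, R = select_dr_indices(ps=50)
print(len(R) == int(np.ceil(50 * pf)))  # True
```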
  • Dormancy/Reproduction Forms
Dormancy/reproduction probability: the probability $p_{dr}$ of a protozoan being in dormancy or reproduction form is given as shown in Equation (28):
$$p_{dr} = \frac{1}{2}\left(1 + \cos\left(\pi\left(1 - \frac{i}{ps}\right)\right)\right), \quad i \in R$$
  • Dormancy Form
If a protozoan is in dormancy form, its new position is randomly reinitialized, as shown in Equation (29):
$$x_i = X_{\min} + r_i \cdot (X_{\max} - X_{\min})$$
where x i is the updated position vector of the i-th protozoan after dormancy; X min and X max are the minimum and maximum bounds of the search space, respectively; r i is a random vector associated with the i-th protozoan, where each element is independently drawn from a uniform distribution in the interval [ 0 , 1 ] ; and the symbol · denotes element-wise multiplication between vectors.
  • Reproduction Form
If a protozoan is in reproduction form, it follows the reproduction mechanism. As shown in Equation (30), the mapping vector is
$$M_r = [m_{r1}, m_{r2}, \ldots, m_{rd}], \quad m_{rj} = \begin{cases} 1, & \text{with probability } p, \\ 0, & \text{otherwise} \end{cases}$$
where M r is the reproduction mapping vector determining which dimensions will be modified during reproduction; m r j corresponds to the j-th dimension and is set to 1 with probability p and 0 otherwise; d denotes the dimensionality of the problem (number of decision variables); and p is the probability of selecting a dimension for updating.
The reproduction form is defined as in Equations (31) and (32):
$$x_i^{new} = x_i + \lambda \cdot \left(X_{\min} + r_i \cdot (X_{\max} - X_{\min})\right) \cdot M_r$$
where $\lambda$ is a random factor of $\pm 1$. Equivalently, in the population-level notation,
$$X_i^{new} = X_i \pm rand \cdot \left(X_{\min} + Rand \cdot (X_{\max} - X_{\min})\right) \cdot M_r$$
where $x_i^{new}$ is the updated position vector of the $i$-th protozoan after reproduction; $x_i$ is the current position vector of the $i$-th protozoan; $\lambda$ is a random factor that takes the value $+1$ or $-1$ with equal probability; $r_i$ is a random vector where each element is drawn from a uniform distribution in $[0, 1]$; $M_r$ is the reproduction mapping vector from Equation (30); and the symbol $\cdot$ between vectors denotes element-wise multiplication.
  • Foraging Forms
Foraging factor: the foraging factor f is calculated as shown in Equation (33):
$$f = r \cdot \left(1 + \cos\left(\pi \frac{iter}{iter_{\max}}\right)\right), \quad r \sim U(0, 1)$$
where f is the foraging factor influencing the step size during foraging; r is a random number drawn from a uniform distribution over [ 0 , 1 ] , denoted r U ( 0 , 1 ) ; i t e r is the current iteration number; i t e r max is the maximum number of iterations; cos denotes the cosine function; and π is the mathematical constant pi.
Autotroph/heterotroph probability: the probability p a h of protozoa being in autotroph or heterotroph form is given by Equation (34):
$$p_{ah} = \frac{1}{2}\left(1 + \cos\left(\pi \frac{iter}{iter_{\max}}\right)\right)$$
where p a h is the probability of a protozoan being in autotroph or heterotroph form; i t e r is the current iteration number; i t e r max is the maximum number of iterations; and the function oscillates between 0 and 1 over iterations.
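The two cosine schedules of Equations (33) and (34) can be evaluated directly. This small check (ours) confirms that $p_{ah}$ decays from 1 at the first iteration to 0 at the last, so autotrophic (exploratory) moves dominate early and heterotrophic moves dominate late:

```python
import numpy as np

def foraging_factor(iter_, iter_max, r):
    """Foraging factor f = r * (1 + cos(pi * iter / iter_max)) (Eq. 33)."""
    return r * (1.0 + np.cos(np.pi * iter_ / iter_max))

def autotroph_probability(iter_, iter_max):
    """p_ah = 0.5 * (1 + cos(pi * iter / iter_max)) (Eq. 34)."""
    return 0.5 * (1.0 + np.cos(np.pi * iter_ / iter_max))

p0 = autotroph_probability(0, 100)     # start of the run
p1 = autotroph_probability(100, 100)   # end of the run (numerically ~0)
f_mid = foraging_factor(50, 100, r=0.5)
print(p0, round(p1, 12), round(f_mid, 12))  # 1.0 0.0 0.5
```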
  • Autotroph Form
If a protozoan is in an autotrophic form, it follows the autotrophic mechanism.
The weight factor is defined as in Equation (35):
$$w_a = \exp\left(-\left|\frac{f_{k_m}}{f_{k_p} + \epsilon}\right|\right), \quad k_m, k_p \text{ are neighbor indices}$$
where $w_a$ is the weight factor influencing the movement in autotroph form; $f_{k_m}$ and $f_{k_p}$ are the fitness values of protozoa at the neighbor indices $k_m$ and $k_p$, respectively; $\epsilon$ is a small constant to avoid division by zero; $|\cdot|$ denotes the absolute value; and $\exp$ denotes the exponential function.
The effect of phototaxis is
$$ep_n = w_a \cdot (x_{k_m} - x_{k_p})$$
where $ep_n$ is the phototaxis effect vector for the $n$-th neighbor pair; $x_{k_m}$ and $x_{k_p}$ are the positions of the neighbor protozoa at indices $k_m$ and $k_p$, respectively; $w_a$ is the weight factor from Equation (35); and $\cdot$ denotes element-wise multiplication.
The new position for the autotroph form is
$$x_i^{new} = x_i + f \cdot (x_j - x_i) + \frac{1}{np} \sum_{n=1}^{np} ep_n \cdot M_f$$
where $x_i^{new}$ is the updated position vector of the $i$-th protozoan in autotroph form; $x_i$ is the current position vector of the $i$-th protozoan; $f$ is the foraging factor from Equation (33); $x_j$ is the position vector of a randomly selected protozoan $j$; $np$ is the number of neighbor pairs considered; $\sum_{n=1}^{np} ep_n$ is the summation of phototaxis effects over all neighbor pairs; $M_f$ is the foraging mapping vector determining which dimensions are updated; and $\cdot$ denotes element-wise multiplication.
  • Heterotroph Form
If a protozoan is in a heterotrophic form, it follows the heterotrophic mechanism.
As shown in Equation (38), the weight factor is
$$w_h = \exp\left(-\left|\frac{f_{imk}}{f_{ipk} + \epsilon}\right|\right)$$
where $w_h$ is the weight factor influencing the movement in heterotroph form; $f_{imk}$ and $f_{ipk}$ are the fitness values of protozoa at indices $imk$ and $ipk$, respectively; $\epsilon$ is a small constant to avoid division by zero; $|\cdot|$ denotes the absolute value; and $\exp$ denotes the exponential function.
The effect of chemotaxis is
$$ep_n = w_h \cdot (x_{imk} - x_{ipk})$$
where $ep_n$ is the chemotaxis effect vector for the $n$-th neighbor pair; $x_{imk}$ and $x_{ipk}$ are the position vectors of the neighbor protozoa at indices $imk$ and $ipk$, respectively; $w_h$ is the weight factor from Equation (38); and $\cdot$ denotes element-wise multiplication.
Furthermore, the nearby position is applied as shown in Equation (40):
$$x_{near} = \left(1 + \lambda \cdot r \cdot \left(1 - \frac{iter}{iter_{\max}}\right)\right) \cdot x_i$$
where x near is the nearby position vector for the i-th protozoan; λ is a random factor taking the value + 1 or 1 with equal probability; r is a random number drawn from a uniform distribution over [ 0 , 1 ] ; i t e r is the current iteration number; i t e r max is the maximum number of iterations; and · denotes element-wise multiplication.
Moreover, the new position for the heterotroph form is applied as shown in Equation (41):
$$x_i^{new} = x_i + f \cdot (x_{near} - x_i) + \frac{1}{np} \sum_{n=1}^{np} ep_n \cdot M_f$$
where $x_i^{new}$ is the updated position vector of the $i$-th protozoan in heterotroph form; $x_i$ is the current position vector of the $i$-th protozoan; $f$ is the foraging factor defined previously; $x_{near}$ is the nearby position vector from Equation (40); $np$ is the number of neighbor pairs considered; $\sum_{n=1}^{np} ep_n$ is the summation of chemotaxis effects over all neighbor pairs; $M_f$ is the foraging mapping vector determining which dimensions are updated; and $\cdot$ denotes element-wise multiplication.
  • Applying Differential Evolution (DE) Operations
The DE mutation and crossover operations are applied as shown in Equation (42):
$$\mathbf{mutant} = \mathbf{a} + F \cdot (\mathbf{b} - \mathbf{c}), \quad \mathbf{a}, \mathbf{b}, \mathbf{c} \text{ randomly selected vectors}$$
where mutant is the mutant vector generated in the mutation operation; a , b , and c are randomly selected distinct vectors from the current population; F is the mutation scaling factor controlling the differential variation; and · denotes scalar multiplication.
The crossover operation is defined as shown in Equation (43):
$$trial_j = \begin{cases} mutant_j, & \text{if } r_j < CR, \\ x_{i,j}, & \text{otherwise}, \end{cases} \quad j = 1, 2, \ldots, dim$$
where trial j is the j-th component of the trial vector; mutant j is the j-th component of the mutant vector from Equation (42); x i j is the j-th component of the current vector x i ; r j is a random number drawn from a uniform distribution over [ 0 , 1 ] ; C R is the crossover probability controlling the rate of crossover; d i m is the dimensionality of the problem.
The fitness evaluation for the trial vector is performed as shown in Equation (44):
f trial = f ( trial )
where f trial is the fitness value (objective function evaluation) of the trial vector trial; f ( · ) denotes the objective function being minimized.
If f trial < f i , the selection operation is applied as shown in Equation (45):
x i = trial , f i = f trial
where x i is updated to the trial vector if it has a better fitness value; f i is the fitness value of the i-th vector in the current population.
Boundary control: the new positions of the protozoa are ensured to be within the boundaries, as shown in Equation (46):
$$x_i = \max\left(\min\left(x_i, X_{\max}\right), X_{\min}\right), \quad i = 1, 2, \ldots, ps$$
where x i is the updated position vector of the i-th protozoan after boundary control; X min and X max are the minimum and maximum bounds of the search space, respectively; min ( · ) and max ( · ) are element-wise minimum and maximum functions; p s is the population size.
Finally, the population is updated based on the fitness of the new positions, as shown in Equation (47):
$$x_i \leftarrow \begin{cases} x_i^{new}, & \text{if } f(x_i^{new}) < f(x_i), \\ x_i, & \text{otherwise}, \end{cases} \quad i = 1, 2, \ldots, ps$$
where $x_i$ is the retained position vector of the $i$-th protozoan in the population; $x_i^{new}$ is the candidate position produced by the update mechanisms; $f(x_i^{new})$ and $f(x_i)$ are the fitness values of the candidate and current positions, respectively; and $ps$ is the population size.
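To make the DE layer of Equations (42)–(47) concrete, the following sketch (our illustration, using the 0.2 application probability from Algorithm 1 and example values for F and CR) performs one hybrid pass over a population, with greedy selection and boundary clipping:

```python
import numpy as np

rng = np.random.default_rng(5)

def hybrid_de_pass(X, fit, f_obj, lo, hi, F=0.5, Cr=0.9, p_de=0.2):
    """One DE pass inside HPDE: mutation (Eq. 42), crossover (Eq. 43),
    evaluation (Eq. 44), selection (Eq. 45), boundary control (Eq. 46).
    Each individual receives a trial vector with probability p_de."""
    ps, dim = X.shape
    for i in range(ps):
        if rng.random() < p_de:
            a, b, c = X[rng.choice([j for j in range(ps) if j != i], 3, replace=False)]
            mutant = a + F * (b - c)                    # Eq. 42
            mask = rng.random(dim) < Cr
            mask[rng.integers(dim)] = True              # keep >= 1 mutant component
            trial = np.where(mask, mutant, X[i])        # Eq. 43
            trial = np.clip(trial, lo, hi)              # Eq. 46
            f_trial = f_obj(trial)                      # Eq. 44
            if f_trial < fit[i]:                        # Eq. 45
                X[i], fit[i] = trial, f_trial
    return X, fit

sphere = lambda x: float(np.sum(x**2))
X = rng.uniform(-5, 5, size=(10, 4))
fit = np.array([sphere(x) for x in X])
fit_before = fit.copy()
X, fit = hybrid_de_pass(X, fit, sphere, -5.0, 5.0)
print(bool(np.all(fit <= fit_before)))  # True: greedy selection never worsens fitness
```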

3.2. Design Rationale of the Hybrid APO with DE Algorithm

The design of the Hybrid Artificial Protozoa Optimizer with Differential Evolution (HPDE) algorithm is centered around integrating the strengths of both the Artificial Protozoa Optimizer (APO) and Differential Evolution (DE). APO mimics the biological behaviors of protozoa, which include foraging, reproduction, and dormancy. However, while APO excels in exploration, its exploitation capabilities can be limited, especially in complex and high-dimensional search spaces. To address this limitation, DE is incorporated into the hybrid algorithm to enhance its search capabilities and to improve the balance between exploration and exploitation.
In the design of HPDE, the APO’s natural behaviors are preserved and enhanced by DE’s mutation and crossover strategies. The DE operations are not applied uniformly but with a certain probability, allowing the algorithm to adapt dynamically to the optimization process. This ensures that the algorithm can intensify the search in promising regions of the search space while maintaining diversity in the population to avoid premature convergence.
The process begins with the initialization of the population, followed by the evaluation of fitness. The population is then sorted based on fitness values, ensuring that the best solutions are prioritized. The algorithm iteratively updates the population, applying APO’s dormancy and reproduction mechanisms, as well as DE’s mutation and crossover operations.
One of the key innovations in HPDE is the use of proportion fractions and random indices to introduce diversity into the population. This is particularly important in maintaining a broad search space and preventing the algorithm from getting trapped in local optima. Additionally, the algorithm adapts the application of DE operations based on the current state of the population, ensuring a balance between exploration and exploitation throughout the optimization process.
The pseudocode reflects this hybrid approach, with APO’s biological behaviors forming the core of the algorithm, while DE operations are integrated strategically to enhance performance. By combining the strengths of both APO and DE, HPDE is capable of addressing a wide range of optimization problems, from benchmark functions to real-world engineering design challenges.

3.3. Discussion on Hybrid APO with DE Algorithm Design and Pseudocode

The design of the Hybrid Artificial Protozoa Optimizer with Differential Evolution (HPDE) algorithm consists of different stages, including initialization, fitness evaluation, exploration through APO’s biological behaviors, and exploitation enhanced by DE’s mutation and crossover operations.
Initialization: In the first stage, the population of protozoa is initialized randomly within the given bounds. This initialization sets the stage for the search process by distributing potential solutions across the search space. Each protozoan represents a candidate solution, and their positions are determined based on uniform random distribution. This diversity in the initial population ensures a broad exploration of the search space from the outset.
Fitness Evaluation: Following initialization, the fitness of each protozoan is evaluated using the objective function. This step assigns a fitness value to each candidate solution, which reflects its quality in terms of the optimization goal. The population is then sorted based on these fitness values, prioritizing the best solutions for subsequent stages.
Exploration: The next stage focuses on exploration, driven primarily by APO’s biologically inspired mechanisms. APO simulates the behaviors of protozoa, including dormancy, reproduction, and foraging. Dormancy and reproduction introduce diversity into the population by reinitializing or modifying protozoa positions, thereby preventing premature convergence. Foraging, on the other hand, allows protozoa to move within the search space based on autotroph and heterotroph behaviors, guiding the search toward promising regions.
Exploitation: To enhance exploitation, the DE algorithm is integrated into the hybrid approach. DE’s mutation and crossover operations are applied to generate new candidate solutions by combining existing ones. These operations are not applied uniformly across the population but are introduced with a certain probability, allowing the algorithm to adapt to the current optimization stage. This probabilistic application ensures that the algorithm can focus on intensifying the search in areas where promising solutions have been identified while still maintaining sufficient diversity in the population.
Population Update and Convergence: After applying both APO and DE operations, boundary control is performed to ensure that the solutions remain within the defined bounds. The population is then updated based on the fitness of the newly generated solutions. This process iterates until the maximum number of iterations is reached or another stopping criterion is satisfied.
The pseudocode (see Algorithm 1) demonstrates the flow of the algorithm. Starting with the initialization of the population, the algorithm progresses through fitness evaluation, exploration via APO, and exploitation through DE, with careful management of population diversity throughout the process.
The HPDE algorithm effectively combines the exploration strengths of APO with the exploitation capabilities of DE, resulting in a robust optimization method. The algorithm's design ensures that it can handle a wide range of optimization problems, from standard benchmark functions to complex real-world engineering challenges. By maintaining a balance between exploration and exploitation, HPDE is able to avoid local optima and achieve good performance across different problem domains. In addition, the flowchart of HPDE is provided in Figure 1.
Algorithm 1 Pseudocode and algorithm steps of HPDE
1: Input: Objective function $f$, dimension $dim$, population size $ps$, maximum iterations $iter_{\max}$, lower bound $X_{\min}$, upper bound $X_{\max}$
2: Output: Best solution $x_{best}$, best fitness value $f_{best}$
3: Initialize population $x_i \in [X_{\min}, X_{\max}]$ for $i = 1, 2, \ldots, ps$     ▹ See Equation (23)
4: Evaluate fitness $f_i = f(x_i)$ for $i = 1, 2, \ldots, ps$     ▹ See Equation (24)
5: Sort population by fitness     ▹ See Equation (25)
6: for $iter = 1$ to $iter_{\max}$ do
7:     Calculate proportion fraction $pf$     ▹ See Equation (26)
8:     Select random indices for dormancy/reproduction $R$     ▹ See Equation (27)
9:     for each $i \in R$ do
10:         Calculate dormancy/reproduction probability $p_{dr}$     ▹ See Equation (28)
11:         if random value $< p_{dr}$ then
12:             Perform dormancy form     ▹ See Equation (29)
13:         else
14:             Perform reproduction form     ▹ See Equations (30) and (31)
15:         end if
16:     end for
17:     for each $i \notin R$ do
18:         Calculate foraging factor $f$     ▹ See Equation (33)
19:         Calculate autotroph/heterotroph probability $p_{ah}$     ▹ See Equation (34)
20:         if random value $< p_{ah}$ then
21:             Perform autotroph form     ▹ See Equations (35)–(37)
22:         else
23:             Perform heterotroph form     ▹ See Equations (38)–(41)
24:         end if
25:     end for
26:     for $i = 1$ to $ps$ do
27:         if random value $< 0.2$ then
28:             Apply Differential Evolution (DE)     ▹ See Equations (42)–(45)
29:         end if
30:     end for
31:     Ensure boundary control     ▹ See Equation (46)
32:     Update population based on fitness     ▹ See Equation (47)
33: end for
34: return Best solution $x_{best}$, best fitness value $f_{best}$

3.4. Exploration and Exploitation Features of HPDE Algorithm

Exploration and exploitation are fundamental concepts in metaheuristic optimization algorithms, and they are crucial for the performance of the HPDE Hybrid algorithm. Exploration refers to the algorithm’s ability to search the global solution space broadly, ensuring diverse candidate solutions are considered. Exploitation, on the other hand, focuses on intensively searching around the best solutions found so far, refining them to achieve optimality. The HPDE Hybrid algorithm integrates mechanisms to balance these two aspects effectively.
Exploration in the HPDE Hybrid algorithm is primarily achieved through the foraging behavior of protozoa and the dormancy forms. During the foraging process, protozoa move through the solution space based on autotrophic and heterotrophic mechanisms. The autotrophic mode, where protozoa move towards suitable light conditions, ensures diverse sampling of the search space by allowing protozoa to explore new regions. This is modeled by Equations (35)–(37). The heterotrophic mode, where protozoa absorb organic matter, also contributes to exploration by moving protozoa towards nutrient-rich areas, modeled by Equations (38)–(41). Additionally, the dormancy form, as described in Equation (29), allows protozoa to be reinitialized to new random positions, injecting further diversity into the population.
Exploitation is facilitated by the reproduction forms and the selective application of Differential Evolution (DE) operations. The reproduction form, detailed in Equations (30) and (31), allows protozoa to create new solutions by perturbing their current positions, focusing the search around promising areas. This mechanism ensures that good solutions are refined over iterations. The DE operations, involving mutation, crossover, and selection as described by Equations (42)–(45), further enhance exploitation. The mutation operation generates new candidate solutions by combining existing ones, while crossover blends these candidates to create trial solutions. The selection process ensures that only solutions with improved fitness values are retained in the population. These DE operations intensively exploit the solution space around the best-found solutions, driving the search towards optimality.
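The mutation, crossover, and selection steps described above can be illustrated with a compact stand-alone sketch. The scheme below is the classic DE/rand/1/bin variant with assumed control parameters F = 0.5 and CR = 0.9; the paper's exact update rules (Equations (42)–(45)) and parameter settings may differ.

```python
import random

def de_step(pop, fitness, f=0.5, cr=0.9, lb=-100.0, ub=100.0):
    """One DE/rand/1/bin generation: mutation, crossover, greedy selection."""
    ps, dim = len(pop), len(pop[0])
    next_pop = []
    for i in range(ps):
        # Mutation: perturb one random individual with the scaled
        # difference of two others (all three distinct from i).
        a, b, c = random.sample([j for j in range(ps) if j != i], 3)
        mutant = [pop[a][d] + f * (pop[b][d] - pop[c][d]) for d in range(dim)]
        # Binomial crossover: mix mutant and parent; j_rand guarantees
        # at least one component is taken from the mutant.
        j_rand = random.randrange(dim)
        trial = [mutant[d] if (d == j_rand or random.random() < cr) else pop[i][d]
                 for d in range(dim)]
        # Boundary control: clamp the trial back into the search box.
        trial = [min(max(v, lb), ub) for v in trial]
        # Greedy selection: keep the better of parent and trial.
        next_pop.append(trial if fitness(trial) <= fitness(pop[i]) else pop[i])
    return next_pop

# Usage: minimize the sphere function with a tiny population.
random.seed(1)
sphere = lambda x: sum(v * v for v in x)
pop = [[random.uniform(-100.0, 100.0) for _ in range(5)] for _ in range(10)]
init_best = min(sphere(x) for x in pop)
for _ in range(50):
    pop = de_step(pop, sphere)
final_best = min(sphere(x) for x in pop)
```

Because selection is greedy, the best fitness in the population can never worsen from one generation to the next, which is exactly the exploitation property the text attributes to the DE operations.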
The balance between exploration and exploitation is dynamically managed throughout the algorithm’s iterations. The proportion fraction (pf), as calculated by Equation (26), determines the ratio of protozoa undergoing dormancy or reproduction versus those in foraging modes. This adaptive mechanism ensures that exploration is emphasized in the early stages of the search, while exploitation becomes more prominent as the search progresses and the algorithm converges towards optimal solutions.
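Equation (26) itself is not reproduced in this section; as a purely hypothetical stand-in, the sketch below uses a linearly decaying fraction to show how such a pf value can route a share of the population to dormancy/reproduction while the rest continues foraging.

```python
def proportion_fraction(iteration, max_iter, pf_max=0.3):
    """Hypothetical stand-in for Equation (26): a fraction that decays
    linearly over iterations, shifting the population balance as the
    search progresses. The real schedule in the paper may differ."""
    return pf_max * (1.0 - iteration / max_iter)

def split_population(ps, pf):
    """Partition ps protozoa into dormancy/reproduction vs. foraging modes."""
    n_dormancy_repro = max(1, round(pf * ps))
    return n_dormancy_repro, ps - n_dormancy_repro

# Usage: 30 protozoa at the first of 200 iterations.
n_dr, n_forage = split_population(30, proportion_fraction(0, 200))
```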

3.5. Solving Single-Objective Optimization Problems Using HPDE

Single-objective optimization problems typically involve one minimum point, which could either be the sole global optimum with no local points or include several local points with one global optimum [113]. To resolve such a problem using HPDE, it must first be mathematically formulated as displayed in Equation (48):
X* = min f(x), subject to g(x) ≤ 0, h(x) = 0, X_min ≤ x ≤ X_max
In this equation, f(x) represents the objective function to be minimized over the decision vector x = (x_1, …, x_n), where n denotes the total number of variables. The variable bounds are defined as X_min ≤ x ≤ X_max, with the inequality constraints being g(x) ≤ 0 and the equality constraints h(x) = 0.
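A common way to feed a constrained formulation of this kind to an unconstrained optimizer such as HPDE is a penalty function. The static quadratic penalty below is a generic sketch, not necessarily the constraint-handling scheme used in the paper; the penalty weight rho and the equality tolerance eps are illustrative assumptions.

```python
def penalized(f, gs, hs, x, rho=1e6, eps=1e-4):
    """Static penalty: add rho * (squared constraint violations) to f(x).
    Inequalities g(x) <= 0 are violated when g(x) > 0; equalities h(x) = 0
    are treated as satisfied when |h(x)| <= eps."""
    pen = sum(max(0.0, g(x)) ** 2 for g in gs)
    pen += sum(max(0.0, abs(h(x)) - eps) ** 2 for h in hs)
    return f(x) + rho * pen

# Usage: minimize x0^2 + x1^2 subject to x0 + x1 - 1 <= 0.
f = lambda x: x[0] ** 2 + x[1] ** 2
g = lambda x: x[0] + x[1] - 1.0
val_feasible = penalized(f, [g], [], [0.2, 0.3])    # constraint satisfied
val_infeasible = penalized(f, [g], [], [2.0, 2.0])  # constraint violated
```

A feasible point keeps its raw objective value, while an infeasible one is pushed far away from the minimizer, so the search is steered back into the feasible region.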
The process of solving a single-objective optimization problem using HPDE starts with the initial step, which encompasses setting up the algorithm parameters. Then, the fitness value is computed using the formulated single-objective problem, and these values are stored and arranged. The top two values are selected and placed in temporary variables. In the fourth step, the iterative process begins with entry into the while loop. In this phase, the particle moves toward the single global optimum point. The first and second-best positions are updated, and with each iteration, the fitness function is calculated, and the boundary limits of the search space are verified. Once the loop concludes, the results are reported, and the global optimum point is identified.

4. Evaluation of HPDE in Solving Various Engineering Design Problems

This section delves into the application of HPDE for solving various engineering design problems. It provides a comprehensive comparison of HPDE’s effectiveness against a broad spectrum of other optimization algorithms. The evaluation focuses on the ability of these algorithms to handle complex, real-world engineering challenges, showcasing HPDE’s strengths and potential areas for improvement. Through detailed analysis and empirical results, this section highlights the advantages of using HPDE in engineering optimization, underscoring its robustness, efficiency, and accuracy in finding optimal solutions.

4.1. Robot Gripper Problem

The Robot Gripper problem models a robotic gripper’s mechanisms as shown in Figure 2, aiming to optimize its design and functionality by configuring physical dimensions and angles to maximize grip efficiency and stability. The mathematical model defines several parameters and constraints to simulate realistic operational conditions.
A mathematical model for the Robotic Gripper optimization problem is given in Appendix A.
The results of the Robot Gripper optimization problem are shown in Table 2. The HPDE optimizer achieves a competitive result with a best value of 2.54714. Compared with the other optimization algorithms, HPDE demonstrates good performance, illustrating its effectiveness in optimizing this engineering design. Algorithms such as APO and CSA, with best values of 2.653614 and 2.554382, respectively, offer close competition but still fall slightly behind HPDE. Notably, some algorithms, like GA, perform significantly worse, with a best value reaching as high as 9.77 × 10^21, indicating a substantially inferior result compared with the optimal solution found by HPDE.
The HPDE optimizer ranks first among all tested optimizers, as indicated in Table 3, with a mean value of 2.68537, a minimum (Min) of 2.54714, and a maximum (Max) of 3.087935. When compared with other algorithms, HPDE not only achieves the lowest minimum value but also maintains a lower mean value than its competitors, such as APO and CSA, which have higher mean values of 2.797473 and 3.100101, respectively. Noteworthy is HPDE’s standard deviation of 0.226658, signifying consistent performance across trials. This optimizer also excels in computational efficiency with a relatively moderate computational time of 35.68552 s, outperforming several other algorithms like APO and CSA in terms of both effectiveness and efficiency.

4.2. Welded Beam Design Optimization

In the field of structural engineering, the welded beam design presents an archetypal problem that seeks to find an optimal balance between cost-efficiency and stringent safety standards. The design must adhere to multiple constraints without sacrificing cost-effectiveness [114].
The welded beam design problem, as illustrated in Figure 3, is an optimization problem in structural engineering. The mathematical model for the weld beam problem is shown in Appendix B.
In the welded beam design optimization problem, HPDE exhibits strong performance with a best value of 1.673507, as illustrated in Table 4. This result places HPDE among the top performers in the list, narrowly trailing the CSA, which has a best of 1.671121, and ahead of GWO with 1.677278. The variables optimized by HPDE, including x1, x2, x3, and x4, are well within competitive ranges when compared with those of GWO and CSA, ensuring a balanced design solution. Notably, HPDE performs significantly better than other advanced algorithms such as AVOA, COOT, and HGS, indicating its effectiveness in handling the complexities of beam design. Algorithms such as GA and WOA, on the other hand, show much higher best values, reflecting less optimal solutions in this specific engineering context.
In the welded beam design optimization problem, the performance of HPDE is notably good, as evidenced in Table 5. The algorithm achieves a mean best value of 1.674353, with a minimal standard deviation of 0.000774, indicating a high level of consistency across runs. The minimal and maximal best values reported for HPDE are 1.673507 and 1.675425, respectively, which are better than those achieved by most competitors. For instance, the closest contender, GWO, has a slightly higher mean best of 1.679066 and a wider best range, reflecting less consistency. Notably, HPDE also ranks first among all tested algorithms, emphasizing its efficiency in exploring and exploiting the design space. This is further complemented by its relatively short computational time of 0.509625 s, which is efficient compared with other algorithms, particularly in contrast to GA, which exhibits a much higher mean best of 6.502565 with a significantly longer computational time of 63.05204 s.

4.3. Pressure Vessel Design Optimization

Optimizing the design of pressure vessels is a critical endeavor in mechanical engineering, focusing on determining optimal dimensions that ensure structural integrity, safety, and cost-effectiveness. This section describes the mathematical formulation for the pressure vessel design optimization problem [115].
The objective of the optimization is to minimize the cost associated with the materials and manufacturing of the pressure vessel, constrained by the vessel’s design and performance requirements [116].
The pressure vessel design optimization problem ( See Figure 4) involves the cylindrical and hemispherical parts of a pressure vessel. The objective is to minimize the total cost, including material, forming, and welding, subject to constraints on the vessel’s physical dimensions and operational safety.
In the pressure vessel design optimization problem, HPDE demonstrates a strong performance with a best score of 5885.363, as detailed in Table 6. This score is notably lower than those of most of the other competing algorithms, including SHIO and FVIM, which register scores of 6109.215 and 5927.990, respectively, indicating a more optimized solution by HPDE. Additionally, HPDE closely rivals the scores of RTH and CSA and slightly outperforms WSO, which has a nearly identical score of 5885.912. Among the rest, algorithms like RSO and ChOA exhibit significantly higher scores of 9442.609 and 9613.718, reflecting less optimal solutions compared with HPDE. The low variability in the components optimized by HPDE (x1 through x4), compared with the higher variability seen in COA and MFO, also highlights HPDE’s capability to consistently converge to efficient solutions.
In the pressure vessel design optimization problem, HPDE exhibits outstanding efficiency and consistency, achieving a minimal score of 5885.363 with a mean very close to the minimum at 5885.421 and an extremely narrow range extending up to a maximum of 5885.505, as shown in Table 7. This slight variation is underscored by an impressively low standard deviation of 0.06051, signifying HPDE’s robust performance and ability to consistently find near-optimal solutions. Compared with other algorithms such as SHIO, which has a wider spread in results indicated by a standard deviation of 618.9455, and even CSA with a deviation of 133.1088, HPDE maintains good precision in optimization.

4.4. Spring Design Optimization

The optimization of tension/compression spring design is a quintessential problem in mechanical engineering that entails determining optimal dimensions to meet specific performance criteria while minimizing material usage. This section delineates the mathematical formulation of the optimization problem associated with the tension/compression spring design [117].
The design optimization problem is characterized by the objective of minimizing the spring’s weight, subject to constraints arising from the spring’s physical and geometrical properties. The design variables are constrained within predefined bounds to ensure practical feasibility and adherence to engineering standards [118].
The spring design optimization problem, illustrated in Figure 5, involves determining the optimal dimensions of a helical spring to achieve desired mechanical properties and performance. The objective can include minimizing the weight or material cost while ensuring that the spring can withstand specified loads without failing due to excessive stress or deformation. The mathematical model of the spring design problem is shown in Appendix D.
The objective is to find the set of design variables within the specified bounds that minimize the objective function while satisfying all the imposed constraints. This optimization problem is fundamental in the design and manufacturing of mechanical springs, facilitating the development of efficient, reliable, and cost-effective components [119].
In the spring design optimization problem, HPDE demonstrates an exceptional ability to refine the design to a near-optimal configuration with a best score of 0.012665, as depicted in Table 8. This score positions HPDE as a leading optimizer, matched by the COOT, which delivers a score of 0.012665 as well. The variables optimized by HPDE show that it effectively balances the trade-offs between wire diameter, mean coil diameter, and active coils to achieve an efficient design. When compared with other algorithms like SHIO and FVIM, which record slightly higher scores of 0.012702 and 0.012688, respectively, HPDE illustrates good optimization efficiency. Particularly, HPDE’s performance is starkly better than that of RSO, which shows an anomalously high best score of 1122611, indicating a significant deviation from optimal design values.
In the spring design optimization problem, HPDE showcases outstanding optimization performance, achieving an extremely low minimum score of 0.012665, a mean score of 0.012666, and a maximum score of 0.012667, as shown in Table 9. The exceptionally low standard deviation of 6.84 × 10^−7 indicates an extraordinary level of consistency and precision in finding the optimal solution across different runs. Compared with other algorithms, HPDE stands out for its ability to maintain near-perfect consistency; WSO also matches the minimum score, reporting a standard deviation of 1.57 × 10^−7 of comparable magnitude. Notably, other algorithms, such as SHIO and FVIM, show higher variability in their results, with standard deviations of 0.000586 and 0.0000518, respectively, reflecting a less stable performance.

4.5. Cantilever Beam Design Optimization

The design optimization of a cantilever beam focuses on finding the optimal dimensions to minimize the beam’s weight while ensuring it withstands specific loading conditions without surpassing the allowable deflection or stress limits. This section presents the mathematical formulation for the cantilever beam design optimization problem.
The objective of the optimization problem is to minimize the total weight of the cantilever beam, subject to constraints that ensure the beam’s structural integrity under prescribed loading conditions [120].
The cantilever beam design optimization problem, illustrated in Figure 6, involves a beam that is fixed at one end and free at the other. This setup is frequently used in engineering to evaluate structural integrity under various loads. The figure shows a beam divided into five sections, each with a constant thickness labeled X1.
The optimization aims to determine the dimensions of the beam that minimize its weight while conforming to the deflection constraint, striking a balance between material efficiency and structural robustness. This optimization process is vital in ensuring the economic and structural efficiency of cantilever beam designs in various engineering applications [121]. The mathematical model of the cantilever beam is described in Appendix E.
In the cantilever beam design optimization challenge, the HPDE algorithm demonstrates exceptional precision, achieving one of the best scores of 1.339956, as shown in Table 10. This score is matched only by the Artificial Protozoa Optimizer (APO), suggesting a high level of efficacy in these algorithms for optimizing complex structural designs. Both HPDE and APO provide virtually identical and optimal beam dimensions, indicating their effective search capabilities in the solution space. In comparison, other algorithms like FOX and GWO achieve slightly higher scores, such as 1.339966 and 1.339986, respectively, pointing to a marginal but noticeable difference in optimization quality. Notably, the variance in results is significant across algorithms, with BWO, ChOA, and BO recording much higher best scores of 1.343529, 1.354911, and 1.498807, respectively, illustrating the challenges faced in beam optimization.
In the cantilever beam design optimization problem, the HPDE algorithm achieves remarkable success, as evidenced in Table 11. HPDE demonstrates exceptional accuracy and consistency, achieving the best performance with minimum, mean, and maximum scores all tightly clustered at 1.339956. The incredibly small standard deviation of 1.33 × 10^−13 further emphasizes the algorithm’s precision and stability across multiple runs. This performance stands out when compared with other top contenders, such as APO, which, while matching the minimum and maximum values, shows a slightly higher standard deviation of 7.11 × 10^−10 and a longer computation time. Other algorithms, like CSA and FOX, show slightly less optimized results, with minimum values of 1.339957 and 1.339966, respectively, and higher standard deviations, indicating less consistency in achieving optimal results. HPDE not only ranks first in terms of performance but also excels in efficiency, with a relatively quick computation time of 0.460858 s.

5. Comparative Analysis of HPDE

This section outlines the experiment conducted to assess the computational performance of the HPDE optimizer on established benchmark optimization problems. The outcomes of HPDE are comprehensively reviewed, analyzed, and contrasted with other prominent metaheuristic algorithms that have shown notable performance in academic studies.

5.1. IEEE Congress on Evolutionary Computation CEC2014 Benchmark Functions

The CEC2014 benchmark suite, introduced during the IEEE Congress on Evolutionary Computation in 2014, is an essential collection of test functions designed for the comprehensive evaluation of evolutionary algorithms and other optimization techniques. This suite provides a broad spectrum of functions characterized by various complexities and properties such as unimodality, multimodality, separability, and non-separability, catering to the need for assessing the adaptability and efficiency of algorithms under diverse operational conditions.
Designed to simulate real-world scenarios, the CEC2014 functions are crafted to challenge optimization algorithms on multiple fronts, including their capacity to effectively locate global optima, handle complex, high-dimensional spaces, and demonstrate robustness across various problem structures. The suite’s diverse array of functions ensures that it is an excellent tool for benchmarking algorithms intended for a wide range of practical applications, from engineering optimizations to complex data analysis tasks.
Each function in the suite targets specific strengths and vulnerabilities of optimization algorithms, evaluating attributes such as convergence rate, accuracy, escape from local optima, and sensitivity to initialization parameters. The suite encompasses both continuous and discrete problem types, reflecting the versatile application landscape of modern computational problems.
The functions are organized into distinct categories like unimodal, multimodal, and hybrid, each offering unique challenges that help in assessing the comprehensive performance of algorithms across different types of landscapes. Additionally, the suite includes composition functions that combine elements from several basic functions to create more intricate and challenging optimization environments.

5.2. Parameters for Algorithm Benchmarking

Consistent evaluation of optimization algorithms requires a controlled benchmarking environment for CEC2014. Standard parameters are employed across these benchmarks to create a uniform testing ground where the efficacy of various evolutionary algorithms can be assessed. The details of these parameters are summarized in Table 12, whereas the parameter settings for the compared algorithms are presented in Table 13.
A population size of 30, along with a dimensionality of 10, balances the computational load with the complexity necessary for effective algorithm testing. Limiting the function evaluations to 1000 ensures that the algorithms can demonstrate their convergence capabilities without undue computational burden. The search range of [−100, 100]^D provides a comprehensive and consistent field for algorithmic operations.
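Under these settings, population initialization and the boundary check reduce to a few lines; a minimal sketch, with the clamping repair rule assumed (Equation (46) is not reproduced in this section, and other repair rules such as random reinitialization are equally plausible).

```python
import random

def init_population(ps=30, dim=10, lb=-100.0, ub=100.0, seed=0):
    """Uniformly sample ps candidate solutions inside the CEC2014
    search range [-100, 100]^D used in the benchmark setup."""
    rng = random.Random(seed)
    return [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(ps)]

def clamp(x, lb=-100.0, ub=100.0):
    """Boundary control: project an out-of-range solution back into the box."""
    return [min(max(v, lb), ub) for v in x]

# Usage: the benchmark configuration of Table 12 (ps = 30, D = 10).
pop = init_population()
repaired = clamp([150.0, -250.0, 3.0] + [0.0] * 7)
```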
Rotational and shifting manipulations are incorporated into the functions to elevate their complexity, thereby simulating more intricate optimization scenarios. By omitting noise, the focus remains squarely on the algorithms’ ability to navigate complex environments, enhancing the assessment of their robustness and efficacy. This structured approach ensures that comparisons between different algorithms are fair and highlight their respective strengths and weaknesses within a broad spectrum of benchmark settings.
This study involved a comprehensive assessment and comparison of the effectiveness of a wide range of optimization algorithms.

6. Quantitative Analysis of HPDE Performance

To assess the performance of hybrid HPDE in comparison with other optimizers, we applied key statistical tools: the mean, standard error of the mean, and the standard deviation. The mean, a central measure, captures the average result achieved by the algorithms over multiple trials, providing a comprehensive view of their standard operational levels. Additionally, the standard deviation assesses the spread of results around this mean, illuminating the consistency and reliability of the algorithms across different instances. These statistics are essential for appraising the algorithms’ stability and their performance predictability.
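These three statistics can be computed with the standard library alone; the run values below are made-up illustrative data, not results from the tables.

```python
import math
import statistics

def summarize(runs):
    """Mean, sample standard deviation, and standard error of the mean
    for a set of independent best-fitness results."""
    mean = statistics.mean(runs)
    std = statistics.stdev(runs)       # sample std (n - 1 denominator)
    sem = std / math.sqrt(len(runs))   # standard error of the mean
    return mean, std, sem

# Usage: eight illustrative best-fitness values from independent runs.
mean, std, sem = summarize([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
```

A small standard deviation signals consistency across runs, and the standard error shrinks with the square root of the number of runs, which is why 51 independent runs give a tight estimate of the mean.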

6.1. Statistical Results of HPDE with 51 Independent Runs on the IEEE CEC2014

The HPDE optimizer’s performance on the CEC2014 benchmark functions showcases significant strengths, as shown in Table 14 and Table 15, consistently ranking among the top performers with low mean values that indicate an efficient approach to the global optimum. Its robustness is evidenced by low standard deviations and standard errors, highlighting its reliability and consistency across multiple runs. The successful hybridization of the Artificial Protozoa Optimizer (APO) and Differential Evolution (DE) in HPDE enhances its capabilities; APO contributes diverse explorative strategies, while DE enhances exploitation, giving HPDE a better balance between exploration and exploitation than algorithms like the Grey Wolf Optimizer (GWO) and the Whale Optimization Algorithm (WOA). Additionally, HPDE competes well against more established algorithms like Genetic Algorithms, excelling in unimodal functions through effective exploitation and demonstrating strong performance in multimodal and composite functions by avoiding premature convergence and dynamically adapting its strategies.
The error measurement results, as shown in Table 16 for the HPDE optimizer over CEC2014 benchmark functions (F1–F30), underscore HPDE’s good performance. Demonstrating remarkably low mean errors, HPDE outperforms competitors like GWO, DBO, and APO, especially in complex multimodal and high-dimensional scenarios such as functions F17, F18, and F29. For instance, in F17, where other algorithms show errors ranging from hundreds to hundreds of thousands, HPDE maintains a much lower error, illustrating its effective global optimization capability and ability to avoid local minima. In simpler function scenarios like F1 through F6, HPDE often achieves near-zero errors, showcasing outstanding exploitation capabilities. This consistent performance across various function types, from unimodal to multimodal and separable to non-separable, indicates HPDE’s adaptability and reliability. These attributes mark HPDE as a versatile and efficient optimization tool suitable for a broad range of applications requiring precise and robust solutions, outshining other algorithms that exhibit higher variability and less consistent performance across the benchmark suite.
HPDE optimizer showcases good performance across two sets of comparative analyses over the CEC2014 benchmark functions (F1–F30), as shown in Table 17 and Table 18. In both groups of competitors, HPDE consistently ranks at or near the top, emphasizing its robustness and effectiveness in handling complex optimization tasks. In the first group (F1–F15), HPDE demonstrates significant efficiency by maintaining competitive rankings and low mean errors against algorithms like SHIO and ChOA, particularly in simpler functions such as F5 and F6, where its precision and exploitation capabilities shine. In the second set of functions (F16–F30), HPDE continues to excel, notably outperforming other optimizers in complex scenarios like F17 and F21, where it deals adeptly with multimodal and high-dimensional challenges. These consistent top rankings across a variety of function types underline HPDE’s versatility and reliability as a potent tool for both academic research and practical applications, positioning it as a good choice for demanding optimization environments.
The results of the HPDE optimizer with a second group of optimizers across the CEC2014 benchmark functions (F1–F30), as shown in Table 19, illustrate HPDE’s dominant performance in a broad range of scenarios. Remarkably, HPDE maintains an impressively low mean error across nearly all functions, often outperforming other algorithms by orders of magnitude. For instance, in direct comparisons, HPDE’s error rates are significantly lower than those of SHIO, ChOA, and even advanced algorithms like COA and OHO across various functions. This is particularly evident in functions where other optimizers, such as RTH and SCA, show vulnerability in handling more complex optimization landscapes, where HPDE exhibits robustness and precision.
The consistency of HPDE is highlighted in functions with extremely challenging environments, such as F17 and F30, where it achieves minimal errors compared with competitors that struggle with much higher errors, showcasing its effectiveness in dealing with high-dimensional and multimodal problems. Even in the simpler functions, HPDE’s performance is consistently good, maintaining top rankings and showing an unparalleled ability to navigate the solution space efficiently.
Table 20 offers a concise illustration of the average rankings produced by Friedman’s test when HPDE is compared with the other methods in a 10-dimensional setting on the CEC2014 functions. On the CEC2014 test suite, each method’s mean error values are first collected, and its rank on each function is determined. By taking the average of these ranks, one can compare the algorithms via Friedman’s test. Under Friedman’s framework, if the p-value is found to be less than or equal to the chosen α = 0.05, the null hypothesis (asserting no significant performance difference) is rejected, implying that at least one algorithm performs significantly better or worse than the others. Within this ranking approach, the method with the lowest rank is considered the best, whereas the highest rank denotes the worst. The algorithm identified as the best is then used as a reference (control) for additional analyses.
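The average-rank computation that Friedman’s test builds on can be sketched directly (the chi-square statistic and p-value step is omitted here); the error matrix below is illustrative, not data from Table 20.

```python
def average_ranks(errors):
    """errors[f][a] = mean error of algorithm a on function f.
    Returns the Friedman average rank of each algorithm (1 = best);
    ties receive the mean of the tied rank positions."""
    n_alg = len(errors[0])
    totals = [0.0] * n_alg
    for row in errors:
        order = sorted(range(n_alg), key=lambda a: row[a])
        ranks = [0.0] * n_alg
        i = 0
        while i < n_alg:
            # Extend j over the run of algorithms tied with position i.
            j = i
            while j + 1 < n_alg and row[order[j + 1]] == row[order[i]]:
                j += 1
            mean_rank = (i + j) / 2.0 + 1.0  # mean of tied positions, 1-based
            for k in range(i, j + 1):
                ranks[order[k]] = mean_rank
            i = j + 1
        for a in range(n_alg):
            totals[a] += ranks[a]
    return [t / len(errors) for t in totals]

# Illustrative data: 3 algorithms on 4 functions (lower error is better).
ranks = average_ranks([
    [0.1, 0.5, 0.9],
    [0.2, 0.2, 0.8],
    [0.0, 0.3, 0.3],
    [0.4, 0.6, 0.5],
])
```

The first algorithm ends up with the lowest average rank, so under the convention described above it would be used as the control method.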

6.2. HPDE Accuracy

The accuracy of the HPDE algorithm for each benchmark function was calculated using a comparative error analysis relative to the worst-performing algorithm on that function. For each function F_i, we first determined the error of the HPDE algorithm, denoted as E_HPDE,i, and the worst error among all compared algorithms, denoted as E_worst,i.
The error for the HPDE algorithm on function F_i is shown in Equation (49):
E_HPDE,i = | M_HPDE,i − O_i |,
where M_HPDE,i is the mean result obtained by the HPDE algorithm on F_i, and O_i is the known optimal value of the function F_i.
Similarly, the worst error among all algorithms for function F_i is calculated as shown in Equation (50):
E_worst,i = max { | M_alg,i − O_i | : over all compared algorithms },
where M_alg,i represents the mean result of each algorithm on F_i.
The accuracy A_i of the HPDE algorithm on function F_i is then computed using Equation (51), which quantifies the performance of HPDE relative to the worst-performing algorithm:
A_i = 1 − E_HPDE,i / E_worst,i.
By applying Equation (51) to each benchmark function, we obtained the accuracy values presented in Table 21. The accuracy metric ranges between 0 and 1, where values closer to 1 indicate higher accuracy and better performance of the HPDE algorithm. It effectively normalizes the HPDE’s error, providing a relative measure of its optimization capability compared with the least effective algorithm for each function.
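Equations (49)–(51) translate directly into code; a minimal sketch, where the mean-result matrix and optima below are illustrative stand-ins for the values behind Table 21.

```python
def hpde_accuracy(mean_results, optima, hpde_index=0):
    """Per-function accuracy of Equations (49)-(51): 1 - E_HPDE / E_worst,
    where E = |mean result - known optimum| and E_worst is the largest
    error among all compared algorithms on that function."""
    acc = []
    for f_idx, opt in enumerate(optima):
        errors = [abs(m[f_idx] - opt) for m in mean_results]
        e_hpde = errors[hpde_index]
        e_worst = max(errors)
        acc.append(1.0 - e_hpde / e_worst if e_worst > 0 else 1.0)
    return acc

# Illustrative data: rows = algorithms (row 0 = HPDE), columns = functions.
means = [
    [100.5, 200.1],   # HPDE
    [150.0, 210.0],   # competitor A
    [300.0, 205.0],   # competitor B
]
acc = hpde_accuracy(means, optima=[100.0, 200.0])
```

Values near 1 mean HPDE’s error is negligible next to the worst competitor’s, matching the interpretation of Table 21.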
The accuracy table (Table 21) showcases the performance of the HPDE algorithm across the CEC2014 benchmark functions F1 to F30. The HPDE algorithm demonstrates exceptionally high accuracy on numerous functions, achieving values very close to 1.0. Specifically, for functions F1, F2, F3, F7, F8, F15, F17, F18, F20, F21, and F29, the accuracy exceeds 0.9999, indicating good optimization performance with minimal error relative to the worst-performing algorithms on these functions. For instance, on function F1, the HPDE accuracy is approximately 0.999999723, highlighting its effectiveness in finding solutions close to the optimal.
On functions F4, F6, F9, F10, F11, F13, F14, F19, F22, F27, F28, and F30, the HPDE algorithm maintains high accuracy levels ranging from approximately 0.89 to 0.99. This suggests consistent and reliable performance, although there is slightly more room for improvement compared with the functions where near-perfect accuracy was achieved. These results indicate that while HPDE effectively minimizes the error in these functions, some inherent challenges may prevent it from reaching the highest possible accuracy.
Conversely, the HPDE algorithm exhibits lower accuracy on functions F5, F16, F23, F24, F25, and F26, with values ranging from approximately 0.0566 to 0.5497. These lower accuracy scores suggest that the algorithm encounters difficulties in optimizing these specific functions, possibly due to factors such as complex landscapes, multimodality, or high-dimensional search spaces that challenge the convergence capabilities of the algorithm.

7. Qualitative Diagram Analysis

The convergence figures displayed for various benchmark functions from the CEC2014 set exhibit distinct behaviors, as shown in Figure 7, highlighting the efficiency and optimization capabilities of the algorithms in question. Most figures start with a steep drop, indicating a rapid improvement in optimization scores in initial iterations, which is typical in scenarios where gross optimization potentials are quickly exploited. The plateauing or flattening of the figures in the later stages, visible across most functions, suggests that finer improvements are more challenging, requiring more iterations for minimal gains. This trend is especially pronounced in the diagram that levels off at around iteration 20 and maintains a nearly constant score, emphasizing the algorithm’s early saturation in optimization potential. Conversely, the gradual declines in some figures without abrupt plateaus suggest a more consistent optimization process, potentially indicative of more complex problem spaces where incremental gains continue to be possible across iterations. The variance in the rate of convergence across these functions underscores the importance of algorithm selection based on specific problem characteristics and desired optimization precision within computational constraints.

7.1. Results of Convergence Diagram

The convergence figures, as shown in Figure 7 and Figure 8, illustrate the convergence behavior of the algorithm across different functions over 200 iterations, with each diagram showing the best solution found so far. In the F1 chart, the best score starts around 3600, exhibiting a stepwise decline with sharp drops, eventually converging to a score below 2000, indicating significant progress within the first 100 iterations, followed by smaller refinements. In the F2 chart, the algorithm starts with an initial score of 1060, dropping rapidly in the first 30 iterations to 940 and eventually converging just below 900, suggesting quick identification of a good solution with minor refinements afterward. F3 shows an initial score of 930, which quickly drops to around 820 with incremental improvements in subsequent iterations, reflecting a stepwise refinement process. In F4, the score starts at 880 and sharply declines to 720 in the first 30 iterations, indicating rapid convergence with minimal improvements thereafter. F5 begins at 613 and steadily declines to 606 over the 200 iterations, reflecting consistent but gradual improvement throughout the process. In F6, the algorithm starts at 521.2 and shows a stepwise decline, eventually converging around 520.6 after early significant drops, while in F7, the score starts at 2600 and drops dramatically to below 600 within the first 20 iterations, indicating effective early convergence. F8 starts at 10^7 and drops drastically within the first 30 iterations, stabilizing near 10^4, showing the algorithm’s ability to handle large-scale problems with rapid early progress. F9 begins at 10^9 and quickly drops to 10^8 within the first 50 iterations, with further incremental improvements indicating a stepwise convergence pattern. In F10, the initial score of 1204 decreases incrementally, reaching around 1201.5 after 200 iterations, with improvements occurring at specific points.
F11 shows an initial score of 3800, which declines sharply early on, stabilizing around 2900 after iteration 50, indicating that the algorithm converges early with minimal further improvements. Finally, in F12, the score starts at 1204 and steadily improves to 1201.5 over 200 iterations, with the stepwise pattern suggesting the gradual identification of better solutions. These figures highlight the algorithm’s efficiency in handling different functions, with early significant improvements followed by slower refinements as it approaches optimal or near-optimal solutions. The stepwise patterns reflect a balance between exploration and exploitation, with key improvements made throughout the optimization process.
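Best-so-far curves of this kind are generated by recording the running minimum of the objective value at each iteration. The sketch below is purely illustrative: it uses a placeholder random-search update and a simple sphere objective standing in for the HPDE update rules and the actual CEC2014 functions.

```python
import random

def best_so_far_curve(objective, dim, iterations, pop_size=20, seed=1):
    """Record the best objective value found so far at each iteration."""
    rng = random.Random(seed)
    best = float("inf")
    curve = []
    for _ in range(iterations):
        # Placeholder update: sample a fresh random population each iteration
        # (a stand-in for the HPDE foraging/dormancy/reproduction/DE steps).
        for _ in range(pop_size):
            x = [rng.uniform(-100, 100) for _ in range(dim)]
            best = min(best, objective(x))
        curve.append(best)  # non-increasing by construction
    return curve

# Sphere function standing in for a CEC2014 benchmark function.
curve = best_so_far_curve(lambda x: sum(v * v for v in x), dim=2, iterations=50)
```

Plotting such a curve on a log scale reproduces the characteristic steep early drop followed by a plateau described above.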

7.2. Results of Search History Diagram

The search history, as shown in Figure 9 and Figure 10 for the CEC2014 benchmark functions, showcases the HPDE optimizer’s exploration and exploitation processes across different dimensions, illustrating convergence towards global minima. Each plot vividly displays the scatter of search points where the red dot signifies the optimal or near-optimal solution found by the optimizer.
Figures 9 and 10 illustrate the search behavior of the HPDE over CEC2014 (F1 to F12). Each diagram plots the search points in a 2D plane, with the x-axis representing x_1 and the y-axis representing x_2. In the first diagram (F1), the search points are densely clustered around the origin with a few scattered points further out, indicating effective concentration around the global optimum. The second diagram (F2) shows a more dispersed search pattern with a larger spread of points, suggesting a larger search space or multiple local minima. The third diagram (F3) displays a dense cluster of search points around a slightly off-center position, indicating focused refinement within a localized region. For function F4, the diagram shows a broad horizontal spread with a dense cluster near the optimal solution, suggesting significant variation in one dimension. In the fifth diagram (F5), the search points are concentrated in a narrow band along the x-axis with the optimal solution near the center, indicating a steep gradient in the x-direction. The sixth diagram (F6) reveals a dense cluster near the origin with a few scattered points, indicating efficient convergence with some exploration. For function F7, the search points form a dense cluster around the optimal solution, suggesting effective local search and quick convergence. The eighth diagram (F8) shows similar dense clustering, indicating effective local search and convergence. The ninth diagram (F9) shows a more even spread with concentration around the optimal solution, suggesting a complex landscape with multiple local minima. The tenth diagram (F10) exhibits a narrow vertical spread with a dense cluster near the optimal solution, indicating significant variation in the y-direction.

7.3. HPDE Trajectory Diagram

The trajectory figures of the first particle are shown in Figure 11 for selected benchmark functions (F1, F11, F12, F13, F15, F17) of the CEC2014 suite using the HPDE optimizer, exhibiting unique behaviors across different optimization landscapes. For function F1, the trajectory stabilizes quickly, showing rapid convergence to a low error value, which is indicative of a smoother landscape or a function where HPDE quickly finds the region of the global optimum. In contrast, functions like F11 and F13 display more fluctuation initially, which suggests a more rugged landscape or the presence of local optima that challenge the optimizer before reaching a stable region. Functions F12 and F15 show a steady decrease in value before leveling off, which could indicate a need for fine-tuning the algorithm’s parameters to escape potential plateaus more efficiently. Function F17 shows a very gradual descent, potentially illustrating a complex optimization surface or a function that benefits from prolonged exploration to escape local minima. These trajectories collectively highlight the adaptive nature of the HPDE algorithm and its varying performance across different types of functions within the CEC2014 benchmark suite, underscoring its effectiveness and areas for potential optimization strategy enhancements.

7.4. Results of Average Fitness Diagram

The average fitness curves shown in Figure 12 for selected functions F13, F14, F15, F19, F22, and F23 from the CEC2014 benchmark set show a rapid initial improvement in fitness, which stabilizes after the first few iterations. This behavior is typical in optimization scenarios where early explorations quickly lead to areas near the optimum, followed by a more gradual and challenging fine-tuning phase. The sharp decline in the fitness value at the start of the iterations suggests that the optimizer effectively identifies promising regions of the search space. Subsequent iterations, showing less dramatic changes, indicate the HPDE optimizer’s transition from exploration to exploitation, refining solutions within the identified promising regions. The differences in scale and stabilization points across these functions likely reflect varying landscape complexities and the HPDE optimizer’s ability to handle them. While functions with smoother landscapes might show steady convergence and near-optimal values, more rugged landscapes could lead to more fluctuations and a higher settling point.

7.5. Results of Heat Map Diagram

The sensitivity analysis heatmaps for the HPDE optimizer applied to CEC2014 benchmark functions (F1 to F12), as shown in Figure 13 and Figure 14, demonstrate varied response characteristics across different settings of search agents and maximum iterations. The heatmaps encapsulate the sensitivity of the optimizer’s performance to these parameters. For simpler functions like F1 and F2, increasing the number of search agents generally results in a lower spread of final fitness values, indicating that a larger population provides a more robust search capability. However, for more complex functions like F3, the influence of increasing the number of agents is not as linear, suggesting the complexity and ruggedness of the landscape affect the optimizer’s efficiency. Interestingly, the maximum number of iterations does not always correspond with improved performance, which could be due to early convergence or the optimizer getting stuck in local minima. These heatmaps are crucial for understanding the behavior of the HPDE optimizer and can guide the tuning of its parameters to enhance optimization performance on specific types of functions.

8. Conclusions

In this paper, we presented the Hybrid Artificial Protozoa Optimizer with Differential Evolution (HPDE) algorithm, which integrates the biologically inspired principles of the Artificial Protozoa Optimizer (APO) with the robust optimization mechanisms of Differential Evolution (DE). The HPDE algorithm is designed to balance exploration and exploitation through mechanisms such as autotrophic and heterotrophic foraging behaviors, dormancy and reproduction processes, and DE operations. The experimental evaluation on the CEC2014 benchmark functions demonstrates the strong optimization performance of the HPDE algorithm when compared with 23 state-of-the-art optimizers. HPDE consistently outperforms its competitors, achieving the best results in 24 out of 30 benchmark functions in one comparison set and in 23 functions in another, underscoring its robustness and versatility across diverse optimization challenges. Moreover, HPDE has been applied to several complex engineering design problems, where it ranked first among all compared optimizers.
Beyond benchmark tests, the potential impact of HPDE in real-world applications is significant. The algorithm’s ability to adaptively explore diverse search landscapes and maintain strong convergence makes it suitable for a range of complex, large-scale problems. Potential areas of application include industrial process optimization, supply-chain management, energy systems planning, and various engineering design tasks, where HPDE can offer a robust means of navigating highly multimodal solution spaces under real-world constraints. As decision-making processes in these domains often demand reliable solutions under uncertainty, the biologically inspired core of HPDE, coupled with DE’s proven track record, provides a powerful synergy that can lead to both high-quality and efficient solutions.
Furthermore, an important direction for future work is extending HPDE to tackle multi-objective optimization problems. Many real-world challenges, including those encountered in engineering, economics, and environmental management, involve competing objectives that need to be optimized simultaneously. Accordingly, future research will focus on adapting the HPDE algorithm to handle multiple objectives, for instance, by integrating Pareto dominance or decomposition-based strategies to maintain a diverse set of optimal trade-off solutions. Another promising avenue is to investigate HPDE’s performance under real-time and dynamic conditions, such as when problem constraints or objectives evolve over time. Additionally, exploring hybridization with machine-learning techniques for parameter tuning and incorporating uncertainty quantification methods could further expand HPDE’s applicability.

Author Contributions

Conceptualization, H.N.F., A.I., F.H., A.H., N.H. and S.N.F.; Methodology, H.N.F., A.I., F.H., A.H., N.H. and S.N.F.; Software, H.N.F., A.I., F.H., A.H., N.H. and S.N.F.; Formal analysis, H.N.F., A.I., F.H., A.H., N.H. and S.N.F.; Investigation, H.N.F., A.I., F.H., A.H., N.H. and S.N.F.; Writing—original draft, H.N.F., A.I., F.H., A.H., N.H. and S.N.F.; Writing—review & editing, H.N.F., A.I., F.H., A.H., N.H. and S.N.F.; Visualization, H.N.F.; Project administration, H.N.F., A.I., F.H., A.H., N.H. and S.N.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Mathematical Formulation of Robotic Gripper Optimization Problem

Appendix A.1. Variable Definitions

  • a, b, c: Lengths of different segments of the gripper’s arm, where a and b could represent lengths from the pivot to the grip point, and c the length from the grip point to an endpoint.
  • e: Distance from the base of the gripper to a key operational point, which impacts the gripper’s reach and efficiency.
  • f_f: Additional length parameter that might adjust for mechanical offsets or extensions beyond the basic arm length.
  • l: Represents the total length of the gripper when extended.
  • δ: Angle adjustment factor, possibly accounting for mechanical tilt or alignment errors.
  • λ = 10^{20}: A large penalty factor used to heavily penalize any constraint violations in the optimization process.
  • Y_min, Y_max, Y_G: Operational limits for the gripper in the y-dimension, specifying minimum, maximum, and goal positions.
  • Z_max: Maximum extension in the z-dimension, representing the furthest point the gripper can reach.
  • x: Typically used to represent a position or value in formulas, here set to 100 for scaling or normalization purposes.

Appendix A.2. Kinematic Equations

These equations calculate the initial and modified angles based on the current configuration of the gripper’s arms and its extension:
\[ \alpha_0 = \cos^{-1}\left( \frac{a^2 + l^2 + e^2 - b^2}{2a\sqrt{l^2 + e^2}} \right) + \tan^{-1}\left( \frac{e}{l} \right), \]
\[ \beta_0 = \cos^{-1}\left( \frac{b^2 + l^2 + e^2 - a^2}{2b\sqrt{l^2 + e^2}} \right) - \tan^{-1}\left( \frac{e}{l} \right), \]
\[ \alpha_m = \cos^{-1}\left( \frac{a^2 + (l - Z_{\max})^2 + e^2 - b^2}{2a\sqrt{(l - Z_{\max})^2 + e^2}} \right) + \tan^{-1}\left( \frac{e}{l - Z_{\max}} \right), \]
\[ \beta_m = \cos^{-1}\left( \frac{b^2 + (l - Z_{\max})^2 + e^2 - a^2}{2b\sqrt{(l - Z_{\max})^2 + e^2}} \right) - \tan^{-1}\left( \frac{e}{l - Z_{\max}} \right). \]

Appendix A.3. Objective Function

The objective function aims to maximize the gripper’s operational efficiency by minimizing the negative sum of specific function evaluations:
\[ f = \mathrm{OBJ}_{11}(PP, 2) - \mathrm{OBJ}_{11}(PP, 1). \]

Appendix A.4. Constraints

The constraints ensure the gripper operates within specified physical limits and maintains mechanical stability:
\[ Y_{x\min} = 2\left( e + f_f + c \sin(\beta_m + \delta) \right), \]
\[ Y_{x\max} = 2\left( e + f_f + c \sin(\beta_0 + \delta) \right), \]
\[ g(1) = Y_{x\min} - Y_{\min}, \]
\[ g(2) = -Y_{x\min}, \]
\[ g(3) = Y_{\max} - Y_{x\max}, \]
\[ g(4) = Y_{x\max} - Y_G, \]
\[ g(5) = l^2 + e^2 - (a + b)^2, \]
\[ g(6) = b^2 - (a - e)^2 - (l - Z_{\max})^2, \]
\[ g(7) = Z_{\max} - l. \]

Appendix A.5. Penalty for Constraint Violation

The penalty for constraint violation is calculated as a quadratic sum of violations, scaled by the penalty factor λ , ensuring that any non-compliant configuration is deemed undesirable:
\[ \text{Penalty} = \lambda \sum_{i=1}^{\mathrm{length}(g)} g(i)^2 \cdot \mathrm{GetInequality}(g(i)). \]
The overall fitness of a configuration is then evaluated by combining the objective function and the penalty term:
Fit = f + Penalty .
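As an illustration, the penalty scheme above can be sketched in a few lines of Python. The names `penalized_fitness` and `get_inequality` are our own, not taken from the authors’ code; here `get_inequality` returns 1 when an inequality constraint g(i) ≤ 0 is violated and 0 otherwise, which is the role GetInequality plays in the formula.

```python
LAMBDA = 1e20  # penalty factor lambda from the formulation

def get_inequality(gi):
    """Return 1 if the constraint g(i) <= 0 is violated, else 0."""
    return 1.0 if gi > 0 else 0.0

def penalized_fitness(f, g):
    """Fit = f + lambda * sum of squared violations (violated terms only)."""
    penalty = LAMBDA * sum(gi ** 2 * get_inequality(gi) for gi in g)
    return f + penalty

fit_ok = penalized_fitness(-1.5, [-0.2, -3.0, 0.0])  # feasible: no penalty, -1.5
fit_bad = penalized_fitness(-1.5, [-0.2, 0.1, 0.0])  # one violation dominates f
```

Because λ is so large, any infeasible configuration receives a fitness many orders of magnitude worse than any feasible one, which is exactly the intent of the formulation.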

Appendix B. Weld Beam Mathematical Model

The HPDE aims to design a beam that is cost-effective while ensuring structural integrity under applied loads. The problem considers a welded beam fixed to a rigid support at one end and subjected to a load P at the other end. We optimize four critical dimensions of the welded beam: the weld thickness (x_1), the effective length of the beam (x_2), and the cross-sectional dimensions, specifically the height (x_3) and thickness (x_4) of the beam. The primary goal is to minimize the cost, which includes material and welding expenses. The objective function is defined as follows:
\[ f(x) = 1.10471\, x_1^2 x_2 + 0.04811\, x_3 x_4 (14.0 + x_2) + 10^5 (v + g) \]
This function penalizes deviations from specified constraints through the terms v and g [122]. The design must satisfy several performance-related constraints to ensure the beam’s safety and functionality:
\[ \sqrt{t_1^2 + 2 t_1 t_2 \frac{x_2}{2R} + t_2^2} \le t_{\max} \]
\[ \frac{6PL}{x_4 x_3^2} \le s_{\max} \]
\[ \frac{4PL^3}{E x_4 x_3^3} \le d_{\max} \]
\[ \frac{4.013\, E \sqrt{x_3^2 x_4^6 / 36}}{L^2} \left( 1 - \frac{x_3}{2L} \sqrt{\frac{E}{4G}} \right) \ge P \]
To confirm that the design meets these constraints, the following formulas are employed:
\[ t_1 = \frac{P}{\sqrt{2}\, x_1 x_2} \]
\[ t_2 = \frac{MR}{J} \]
\[ M = P \left( L + \frac{x_2}{2} \right) \]
\[ R = \sqrt{\frac{x_2^2}{4} + \left( \frac{x_1 + x_3}{2} \right)^2} \]
\[ J = 2 \left\{ \sqrt{2}\, x_1 x_2 \left[ \frac{x_2^2}{12} + \left( \frac{x_1 + x_3}{2} \right)^2 \right] \right\} \]
The complexity and nonlinearity of the problem necessitate the use of advanced optimization techniques, such as genetic algorithms or particle swarm optimization, to navigate the design space effectively and find an optimal set of dimensions that minimize cost while ensuring compliance with all safety standards [123].
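A minimal Python sketch of the welded-beam model is given below. The load, material, and limit constants (P, L, E, G, t_max, s_max, d_max) are the values commonly used for this benchmark in the literature; the paper does not list them explicitly, so treat them as assumptions.

```python
import math

# Commonly used welded-beam constants (assumed, not stated in the paper).
P, L = 6000.0, 14.0            # load (lb) and beam length (in)
E, G = 30e6, 12e6              # Young's and shear moduli (psi)
t_max, s_max, d_max = 13600.0, 30000.0, 0.25  # stress/deflection limits

def welded_beam(x):
    """Return the cost and the constraints (g <= 0 form) for x = (x1..x4)."""
    x1, x2, x3, x4 = x
    cost = 1.10471 * x1**2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)
    t1 = P / (math.sqrt(2) * x1 * x2)                    # primary shear
    M = P * (L + x2 / 2)                                 # bending moment
    R = math.sqrt(x2**2 / 4 + ((x1 + x3) / 2) ** 2)
    J = 2 * (math.sqrt(2) * x1 * x2 * (x2**2 / 12 + ((x1 + x3) / 2) ** 2))
    t2 = M * R / J                                       # secondary shear
    tau = math.sqrt(t1**2 + 2 * t1 * t2 * x2 / (2 * R) + t2**2)
    sigma = 6 * P * L / (x4 * x3**2)                     # bending stress
    delta = 4 * P * L**3 / (E * x4 * x3**3)              # tip deflection
    pc = (4.013 * E * math.sqrt(x3**2 * x4**6 / 36) / L**2) * (
        1 - x3 / (2 * L) * math.sqrt(E / (4 * G)))       # buckling load
    g = [tau - t_max, sigma - s_max, delta - d_max, P - pc]
    return cost, g

# Frequently cited near-optimal design (rounded).
cost, g = welded_beam((0.2057, 3.4705, 9.0366, 0.2057))
```

At this rounded design the cost evaluates to roughly 1.72; because the coordinates are truncated, the stress and buckling constraints are only approximately active rather than exactly satisfied.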

Appendix C. Mathematical Model of Pressure Vessel Optimization Problem

The optimization problem is defined with four design variables, each corresponding to a significant dimension affecting the vessel’s structural and operational integrity. The variables are constrained within specified bounds, reflecting practical and manufacturing limitations:
  • Lower Bound (LB): [0.51, 0.51, 10, 10], indicating the minimum allowable values for the design variables.
  • Upper Bound (UB): [99.49, 99.49, 200, 200], representing the maximum allowable values for the design variables.
The cost minimization objective function is defined as follows:
\[ \text{Minimize } z = 0.6224\, x_1 x_3 x_4 + 1.7781\, x_2 x_3^2 + 3.1661\, x_1^2 x_4 + 19.84\, x_1^2 x_3 \]
where x_1, x_2, x_3, and x_4 represent the design variables.
The design is subject to the following constraints, ensuring the vessel’s functionality and compliance with safety standards:
\[ g_1(x) = -x_1 + 0.0193\, x_3 \le 0, \]
\[ g_2(x) = -x_2 + 0.00954\, x_3 \le 0, \]
\[ g_3(x) = -\pi x_3^2 x_4 - \frac{4}{3} \pi x_3^3 + 1296000 \le 0, \]
\[ g_4(x) = x_4 - 240 \le 0. \]
We aim to find the set of design variables within the defined bounds that minimize the objective function while adhering to all the constraints, ensuring the pressure vessel’s design is both cost-effective and compliant with requisite safety standards [124].
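The objective and constraints can be evaluated directly. The sketch below assumes the standard g(x) ≤ 0 convention stated above; the test design is a frequently reported good solution from the literature, rounded for illustration, so the volume constraint is only approximately active.

```python
import math

def pressure_vessel(x):
    """Return the cost z and constraints (g <= 0 form) for x = (x1..x4)."""
    x1, x2, x3, x4 = x
    z = (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
         + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)
    g = [
        -x1 + 0.0193 * x3,                                          # shell thickness
        -x2 + 0.00954 * x3,                                         # head thickness
        -math.pi * x3**2 * x4 - (4 / 3) * math.pi * x3**3 + 1296000,  # volume
        x4 - 240,                                                   # length limit
    ]
    return z, g

# Frequently reported good design (rounded for illustration).
z, g = pressure_vessel((0.8125, 0.4375, 42.0984, 176.6366))
```

The cost at this point is close to the value of about 6060 commonly reported for this benchmark, which provides a quick sanity check of the transcription.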

Appendix D. Mathematical Formulation of Spring Design Problem

The spring design problem is formulated with three design variables, each representing a critical dimension influencing the spring’s mechanical properties. These variables are bounded as follows to ensure the solutions remain within practical and manufacturable limits:
  • Lower Bound (LB): [0.05, 0.25, 2.00], indicating the minimum allowable values for the design variables.
  • Upper Bound (UB): [2, 1.3, 15.0], representing the maximum allowable values for the design variables.
The objective function is formulated to minimize the spring’s weight, which is a function of the design variables:
\[ \text{Minimize } z = x_1^2 x_2 (x_3 + 2) \]
where x_1, x_2, and x_3 are the design variables.
The design is subject to the following constraints, which ensure the spring’s mechanical and geometrical viability:
\[ g_1(x) = 1 - \frac{x_2^3 x_3}{71785\, x_1^4} \le 0, \]
\[ g_2(x) = \frac{4 x_2^2 - x_1 x_2}{12566 (x_2 x_1^3 - x_1^4)} + \frac{1}{5108\, x_1^2} - 1 \le 0, \]
\[ g_3(x) = 1 - \frac{140.45\, x_1}{x_2^2 x_3} \le 0, \]
\[ g_4(x) = \frac{x_1 + x_2}{1.5} - 1 \le 0. \]
These constraints ensure that the spring’s dimensions are physically viable and meet specified performance criteria.
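A direct Python transcription of the weight function and the four constraints is shown below; the sample design point is a commonly reported good solution from the literature (rounded), included only to exercise the code, not taken from the paper.

```python
def spring(x):
    """Return the weight z and constraints (g <= 0 form) for x = (x1, x2, x3)."""
    x1, x2, x3 = x  # wire diameter, coil diameter, number of active coils
    z = x1**2 * x2 * (x3 + 2)
    g = [
        1 - x2**3 * x3 / (71785 * x1**4),                  # shear stress
        (4 * x2**2 - x1 * x2) / (12566 * (x2 * x1**3 - x1**4))
        + 1 / (5108 * x1**2) - 1,                          # surge/stress
        1 - 140.45 * x1 / (x2**2 * x3),                    # surge frequency
        (x1 + x2) / 1.5 - 1,                               # outer diameter
    ]
    return z, g

# Commonly reported good design (rounded for illustration).
z, g = spring((0.0517, 0.3567, 11.289))
```

The weight at this point is near the value of about 0.0127 often cited for this problem; rounding leaves the first constraint only approximately active.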

Appendix E. Mathematical Formulation of Cantilever Beam

The cantilever is characterized by variables representing the beam’s dimensions, with specified lower and upper bounds:
  • Lower bound (LB): [0.01, 0.01, 0.01, 0.01, 0.01], which are the minimum allowable dimensions to ensure structural viability.
  • Upper bound (UB): [100, 100, 100, 100, 100], which are the maximum allowable dimensions to prevent impractical designs.
The objective function, representing the total weight of the beam to be minimized, is defined as follows:
\[ \text{Minimize } z = 0.0624 (x_1 + x_2 + x_3 + x_4 + x_5) \]
where x_1, x_2, x_3, x_4, and x_5 denote the design variables related to the beam’s dimensions.
The beam must satisfy a deflection constraint to ensure its structural performance under the applied load:
\[ g_1(x) = \frac{61}{x_1^3} + \frac{37}{x_2^3} + \frac{19}{x_3^3} + \frac{7}{x_4^3} + \frac{1}{x_5^3} - 1 \le 0 \]
This constraint ensures that the beam’s deflection does not exceed permissible limits, safeguarding the structure’s integrity and functionality.
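The cantilever model reduces to one objective and one constraint, which makes it easy to verify numerically. The sample dimensions below are a commonly reported near-optimal design from the literature (rounded, not taken from the paper), at which the deflection constraint is nearly active.

```python
def cantilever(x):
    """Return the weight z and the deflection constraint g1 (g1 <= 0 form)."""
    z = 0.0624 * sum(x)
    g1 = (61 / x[0]**3 + 37 / x[1]**3 + 19 / x[2]**3
          + 7 / x[3]**3 + 1 / x[4]**3 - 1)
    return z, g1

# Commonly reported near-optimal design (rounded for illustration).
z, g1 = cantilever((6.016, 5.309, 4.494, 3.502, 2.153))
```

The resulting weight is close to the value of about 1.34 typically cited for this benchmark, with g1 within numerical tolerance of zero.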

References

  1. Lange, K. Optimization; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013; Volume 95. [Google Scholar]
  2. Floudas, C.A.; Akrotirianakis, I.G.; Caratzoulas, S.; Meyer, C.A.; Kallrath, J. Global optimization in the 21st century: Advances and challenges. Comput. Chem. Eng. 2005, 29, 1185–1202. [Google Scholar]
  3. Fakhouri, H.N.; Alawadi, S.; Awaysheh, F.M.; Hamad, F. Novel hybrid success history intelligent optimizer with gaussian transformation: Application in CNN hyperparameter tuning. Clust. Comput. 2024, 27, 3717–3739. [Google Scholar] [CrossRef]
  4. Altay, E.V.; Altay, O.; Özçevik, Y. A Comparative Study of Metaheuristic Optimization Algorithms for Solving Real-World Engineering Design Problems. CMES-Comput. Model. Eng. Sci. 2023, 139, 1039–1094. [Google Scholar] [CrossRef]
  5. Abdel-Basset, M.; Abdel-Fatah, L.; Sangaiah, A.K. Metaheuristic algorithms: A comprehensive review. In Computational Intelligence for Multimedia Big Data on the Cloud with Engineering Applications; Elsevier: Amsterdam, The Netherlands, 2018; pp. 185–231. [Google Scholar]
  6. Talbi, E.G. Machine learning into metaheuristics: A survey and taxonomy. ACM Comput. Surv. (CSUR) 2021, 54, 129. [Google Scholar]
  7. Bartz-Beielstein, T.; Branke, J.; Mehnen, J.; Mersmann, O. Evolutionary algorithms. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2014, 4, 178–195. [Google Scholar]
  8. Eberhart, R.C.; Shi, Y.; Kennedy, J. Swarm Intelligence; Elsevier: Amsterdam, The Netherlands, 2001. [Google Scholar]
  9. Sands, T. Physics-based control methods. Advances in Spacecraft Systems and Orbit Determination; InTech Publishers: London, UK, 2012; pp. 29–54. [Google Scholar]
  10. Zhang, L.M.; Dahlmann, C.; Zhang, Y. Human-inspired algorithms for continuous function optimization. In Proceedings of the 2009 IEEE International Conference on Intelligent Computing and Intelligent Systems, Shanghai, China, 20–22 November 2009; IEEE: Piscataway, NJ, USA, 2009; Volume 1, pp. 318–321. [Google Scholar]
  11. Črepinšek, M.; Liu, S.H.; Mernik, M. Exploration and exploitation in evolutionary algorithms: A survey. ACM Comput. Surv. (CSUR) 2013, 45, 35. [Google Scholar] [CrossRef]
  12. Hashemi, A.; Dowlatshahi, M.B.; Nezamabadi-Pour, H. Gravitational Search Algorithm: Theory, Literature Review, and Applications. In Handbook of AI-based Metaheuristics; CRC Press: Boca Raton, FL, USA, 2021; pp. 119–150. [Google Scholar]
  13. Blum, C.; Roli, A. Hybrid metaheuristics: An introduction. In Hybrid Metaheuristics: An Emerging Approach to Optimization; Springer: Berlin/Heidelberg, Germany, 2008; pp. 1–30. [Google Scholar]
  14. Fakhouri, H.N.; Awaysheh, F.M.; Alawadi, S.; Alkhalaileh, M.; Hamad, F. Four vector intelligent metaheuristic for data optimization. Computing 2024, 106, 2321–2359. [Google Scholar]
  15. Eiben, A.E.; Schippers, C.A. On evolutionary exploration and exploitation. Fundam. Informaticae 1998, 35, 35–50. [Google Scholar]
  16. Lundin, N.B.; Todd, P.M.; Jones, M.N.; Avery, J.E.; O’Donnell, B.F.; Hetrick, W.P. Semantic search in psychosis: Modeling local exploitation and global exploration. Schizophr. Bull. Open 2020, 1, sgaa011. [Google Scholar]
  17. Adam, S.P.; Alexandropoulos, S.A.N.; Pardalos, P.M.; Vrahatis, M.N. No free lunch theorem: A review. In Approximation and Optimization: Algorithms, Complexity and Applications; Springer: Berlin/Heidelberg, Germany, 2019; pp. 57–82. [Google Scholar]
  18. Fakhouri, H.N.; Hwaitat, A.K.A.; Ryalat, M.; Hamad, F.; Zraqou, J.; Maaita, A.; Alkalaileh, M.; Sirhan, N.N. Improved Path Testing Using Multi-Verse Optimization Algorithm and the Integration of Test Path Distance. Int. J. Interact. Mob. Technol. 2023, 17, 38–59. [Google Scholar]
  19. Wang, X.; Snášel, V.; Mirjalili, S.; Pan, J.S.; Kong, L.; Shehadeh, H.A. Artificial Protozoa Optimizer (APO): A novel bio-inspired metaheuristic algorithm for engineering optimization. Knowl.-Based Syst. 2024, 295, 111737. [Google Scholar] [CrossRef]
  20. Farda, I.; Thammano, A. An Improved Differential Evolution Algorithm for Numerical Optimization Problems. Hightech Innov. J. 2023, 4, 434–452. [Google Scholar] [CrossRef]
  21. Mathew, T. Genetic Algorithm. Available online: https://datajobs.com/data-science-repo/Genetic-Algorithm-Guide-[Tom-Mathew].pdf (accessed on 17 July 2024).
  22. Wang, D.; Tan, D.; Liu, L. Particle swarm optimization algorithm: An overview. Soft Comput. 2018, 22, 387–408. [Google Scholar] [CrossRef]
  23. Nakisa, B.; Ahmad Nazri, M.Z.; Rastgoo, M.N.; Abdullah, S. A survey: Particle swarm optimization based algorithms to solve premature convergence problem. J. Comput. Sci. 2014, 10, 1758–1765. [Google Scholar] [CrossRef]
  24. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  25. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  26. Sharma, I.; Kumar, V.; Sharma, S. A comprehensive survey on grey wolf optimization. Recent Adv. Comput. Sci. Commun. (Former. Recent Patents Comput. Sci.) 2022, 15, 323–333. [Google Scholar]
  27. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl. Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  28. Alzoubi, S.; Abualigah, L.; Sharaf, M.; Daoud, M.; Khodadadi, N.; Jia, H. Synergistic Swarm Optimization Algorithm; Tech Science Press: Henderson, NV, USA, 2024. [Google Scholar]
  29. Falahah, I.; Al-Baik, O.; Alomari, S.; Bektemyssova, G.; Gochhait, S.; Leonova, I.; Malik, O.; Werner, F.; Dehghani, M. Frilled lizard optimization: A novel nature-inspired metaheuristic algorithm for solving optimization problems. Preprints 2024, 2024030898. [Google Scholar] [CrossRef]
  30. Zhang, F.; Wu, S.; Cen, P. The past, present and future of the pangolin in mainland China. Glob. Ecol. Conserv. 2022, 33, e01995. [Google Scholar] [CrossRef]
  31. Jahn, J. Vector Optimization; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  32. Fakhouri, H.N.; Hamad, F.; Alawamrah, A. Success history intelligent optimizer. J. Supercomput. 2022, 78, 6461–6502. [Google Scholar]
  33. Mohapatra, S.; Mohapatra, P. American zebra optimization algorithm for global optimization problems. Sci. Rep. 2023, 13, 5211. [Google Scholar] [CrossRef] [PubMed]
  34. Bairwa, A.; Joshi, S.; Singh, D. Dingo optimizer: A nature-inspired metaheuristic approach for engineering problems. Math. Probl. Eng. 2021, 2021, 2571863. [Google Scholar]
  35. Jia, H.; Peng, X.; Lang, C. Remora optimization algorithm. Expert Syst. Appl. 2021, 185, 115665. [Google Scholar]
  36. Abualigah, L.; Yousri, D.; Abd Elaziz, M.; Ewees, A.A.; Al-Qaness, M.A.; Gandomi, A.H. Aquila Optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250. [Google Scholar]
  37. Khishe, M.; Mosavi, M. Chimp optimization algorithm. Expert Syst. Appl. 2020, 149, 113338. [Google Scholar] [CrossRef]
  38. Singh, A.; Sharma, A.; Rajput, S.; Mondal, A.; Bose, A.; Ram, M. Parameter extraction of solar module using the sooty tern optimization algorithm. Electronics 2022, 11, 564. [Google Scholar] [CrossRef]
  39. Dhiman, G.; Kumar, V. Seagull optimization algorithm: Theory and its applications for large-scale industrial engineering problems. Knowl.-Based Syst. 2019, 165, 169–196. [Google Scholar]
  40. Wu, D.; Rao, H.; Wen, C.; Jia, H.; Liu, Q.; Abualigah, L. Modified sand cat swarm optimization algorithm for solving constrained engineering optimization problems. Mathematics 2022, 10, 4350. [Google Scholar] [CrossRef]
  41. Mirjalili, S.; Jangir, P.; Saremi, S. Multi-objective ant lion optimizer: A multi-objective optimization algorithm for solving engineering problems. Appl. Intell. 2017, 46, 79–95. [Google Scholar]
  42. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl. Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  43. Nikolaev, A.; Jacobson, S. Simulated annealing. In Handbook of Metaheuristics; Springer: Berlin/Heidelberg, Germany, 2010; pp. 1–39. [Google Scholar]
  44. Beyer, H.G.; Schwefel, H.P. Evolution strategies–a comprehensive introduction. Nat. Comput. 2002, 1, 3–52. [Google Scholar] [CrossRef]
  45. Banzhaf, W.; Francone, F.D.; Keller, R.E.; Nordin, P. Genetic Programming: An Introduction: On the Automatic Evolution of Computer Programs and Its Applications; Morgan Kaufmann Publishers Inc.: Burlington, MA, USA, 1998. [Google Scholar]
  46. Fakhouri, H.N.; Ishtaiwi, A.; Makhadmeh, S.N.; Al-Betar, M.A.; Alkhalaileh, M. Novel hybrid crayfish optimization algorithm and self-adaptive differential evolution for solving complex optimization problems. Symmetry 2024, 16, 927. [Google Scholar] [CrossRef]
  47. de Vasconcelos Segundo, E.H.; Mariani, V.C.; dos Coelho, L.S. Design of heat exchangers using Falcon Optimization Algorithm. Appl. Therm. Eng. 2019, 156, 119–144. [Google Scholar] [CrossRef]
  48. El-kenawy, E.S.M.; Khodadadi, N.; Mirjalili, S.; Abdelhamid, A.A.; Eid, M.M.; Ibrahim, A. Greylag goose optimization: Nature-inspired optimization algorithm. Expert Syst. Appl. 2024, 238, 122147. [Google Scholar] [CrossRef]
  49. Dehghani, M.; Hubálovský, Š.; Trojovský, P. Northern goshawk optimization: A new swarm-based algorithm for solving optimization problems. IEEE Access 2021, 9, 162059–162080. [Google Scholar] [CrossRef]
  50. Zhao, W.; Wang, L.; Mirjalili, S. Artificial hummingbird algorithm: A new bio-inspired optimizer with its engineering applications. Comput. Methods Appl. Mech. Eng. 2022, 388, 114194. [Google Scholar] [CrossRef]
  51. Zhong, C.; Li, G.; Meng, Z. Beluga whale optimization: A novel nature-inspired metaheuristic algorithm. Knowl. Based Syst. 2022, 251, 109215. [Google Scholar] [CrossRef]
  52. Chou, J.S.; Truong, D.N. A novel metaheuristic optimizer inspired by behavior of jellyfish in ocean. Appl. Math. Comput. 2021, 389, 125535. [Google Scholar] [CrossRef]
  53. Xing, B.; Gao, W.J. Invasive Weed Optimization Algorithm. In Innovative Computational Intelligence: A Rough Guide to 134 Clever Algorithms; Springer International Publishing: Berlin/Heidelberg, Germany, 2014; pp. 177–181. [Google Scholar]
  54. Houssein, E.H.; Oliva, D.; Samee, N.A.; Mahmoud, N.F.; Emam, M.M. Liver cancer algorithm: A novel bio-inspired optimizer. Comput. Biol. Med. 2023, 165, 107389. [Google Scholar] [CrossRef]
  55. Abdollahzadeh, B.; Khodadadi, N.; Barshandeh, S.; Trojovský, P.; Gharehchopogh, F.S.; El-kenawy, E.S.M.; Abualigah, L.; Mirjalili, S. Puma optimizer (PO): A novel metaheuristic optimization algorithm and its application in machine learning. Cluster Comput. 2024, 27, 5235–5283. [Google Scholar] [CrossRef]
  56. Sulaiman, M.H.; Mustaffa, Z.; Saari, M.M.; Daniyal, H. Barnacles mating optimizer: A new bio-inspired algorithm for solving engineering optimization problems. Eng. Appl. Artif. Intell. 2020, 87, 103330. [Google Scholar] [CrossRef]
  57. ALRahhal, H.; Jamous, R. AFOX: A new adaptive nature-inspired optimization algorithm. Artif. Intell. Rev. 2023. [Google Scholar] [CrossRef]
  58. Hashim, F.A.; Houssein, E.H.; Hussain, K.; Mabrouk, M.S.; Al-Atabany, W. Honey badger algorithm: New metaheuristic algorithm for solving optimization problems. Math. Comput. Simul. 2022, 192, 84–110. [Google Scholar] [CrossRef]
  59. Dorigo, M.; Birattari, M.; Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39. [Google Scholar] [CrossRef]
  60. Eslami, N.; Yazdani, S.; Mirzaei, M.; Hadavandi, E. Aphid-Ant Mutualism: A novel nature-inspired metaheuristic algorithm for solving optimization problems. Math. Comput. Simul. 2022, 201, 362–395. [Google Scholar] [CrossRef]
  61. Kang, F.; Li, J.; Ma, Z. Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions. Inf. Sci. 2011, 181, 3508–3531.
  62. Mirjalili, S. The ant lion optimizer. Adv. Eng. Softw. 2015, 83, 80–98.
  63. Yadav, A. AEFA: Artificial electric field algorithm for global optimization. Swarm Evol. Comput. 2019, 48, 93–108.
  64. Hatamlou, A. Black hole: A new heuristic optimization approach for data clustering. Inf. Sci. 2013, 222, 175–184.
  65. Abedinpourshotorban, H.; Mariyam Shamsuddin, S.; Beheshti, Z.; Jawawi, D.N.A. Electromagnetic field optimization: A physics-inspired metaheuristic optimization algorithm. Swarm Evol. Comput. 2016, 26, 8–22.
  66. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248.
  67. Abdollahzadeh, B.; Soleimanian Gharehchopogh, F.; Mirjalili, S. Artificial gorilla troops optimizer: A new nature-inspired metaheuristic algorithm for global optimization problems. Int. J. Intell. Syst. 2021, 36, 5887–5958.
  68. Yuan, Y.; Ren, J.; Wang, S.; Wang, Z.; Mu, X.; Zhao, W. Alpine skiing optimization: A new bio-inspired optimization algorithm. Adv. Eng. Softw. 2022, 170, 103158.
  69. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN'95 International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; IEEE: Piscataway, NJ, USA, 1995; Volume 4, pp. 1942–1948.
  70. Deng, L.; Liu, S. Snow ablation optimizer: A novel metaheuristic technique for numerical optimization and engineering design. Expert Syst. Appl. 2023, 225, 120069.
  71. Rodriguez, L.; Castillo, O.; Garcia, M.; Soria, J. A new meta-heuristic optimization algorithm based on a paradigm from physics: String theory. J. Intell. Fuzzy Syst. 2021, 41, 1657–1675.
  72. Eskandar, H.; Sadollah, A.; Bahreininejad, A.; Hamdi, M. Water cycle algorithm—A novel metaheuristic optimization method for solving constrained engineering optimization problems. Comput. Struct. 2012, 110, 151–166.
  73. Zhao, W.; Wang, L.; Zhang, Z. Atom search optimization and its application to solve a hydrogeologic parameter estimation problem. Knowl. Based Syst. 2019, 163, 283–304.
  74. Lam, A.Y.S.; Li, V.O.K. Chemical-reaction-inspired metaheuristic for optimization. IEEE Trans. Evolut. Comput. 2009, 14, 381–399.
  75. Kaveh, A.; Dadras, A. A novel meta-heuristic optimization algorithm: Thermal exchange optimization. Adv. Eng. Softw. 2017, 110, 69–84.
  76. Xu, Y.; Peng, Y.; Su, X.; Yang, Z.; Ding, C.; Yang, X. Improving teaching–learning-based-optimization algorithm by a distance-fitness learning strategy. Knowl. Based Syst. 2022, 257, 108271.
  77. Talatahari, S.; Azizi, M.; Tolouei, M.; Talatahari, B.; Sareh, P. Crystal structure algorithm (CryStAl): A metaheuristic optimization method. IEEE Access 2021, 9, 71244–71261.
  78. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl. Based Syst. 2020, 191, 105190.
  79. Hashim, F.A.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W.; Mirjalili, S. Henry gas solubility optimization: A novel physics-based algorithm. Future Gener. Comput. Syst. 2019, 101, 646–667.
  80. Wei, Z.; Huang, C.; Wang, X.; Han, T.; Li, Y. Nuclear reaction optimization: A novel and powerful physics-based algorithm for global optimization. IEEE Access 2019, 7, 66084–66109.
  81. Shayanfar, H.; Gharehchopogh, F.S. Farmland fertility: A new metaheuristic algorithm for solving continuous optimization problems. Appl. Soft Comput. 2018, 71, 728–746.
  82. Abdollahzadeh, B.; Gharehchopogh, F.S.; Mirjalili, S. African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems. Comput. Ind. Eng. 2021, 158, 107408.
  83. Abdollahzadeh, B.; Gharehchopogh, F.S.; Khodadadi, N.; Mirjalili, S. Mountain gazelle optimizer: A new nature-inspired metaheuristic algorithm for global optimization problems. Adv. Eng. Softw. 2022, 174, 103282.
  84. Cantú, V.H.; Azzaro-Pantel, C.; Ponsich, A. Constraint-handling techniques within differential evolution for solving process engineering problems. Appl. Soft Comput. 2021, 108, 107442.
  85. Nguyen, T.H.; Nguyen, H.D.; Vu, A.T. Solving Engineering Optimization Problems Using Machine Learning Classification-Assisted Differential Evolution. In Proceedings of the International Conference of Steel and Composite for Engineering Structures, Lecce, Italy, 20–21 November 2023; LNCE Volume 317, pp. 1–23.
  86. Kizilay, D.; Tasgetiren, M.F.; Oztop, H.; Kandiller, L.; Suganthan, P. A Differential Evolution Algorithm with Q-Learning for Solving Engineering Design Problems. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020.
  87. Samal, P.; Swain, R.R.; Jena, C.; Sinha, P.; Mishra, S.; Swain, S.C. A Modified Differential Evolution Algorithm Solving for Engineering Optimization Problems. In Proceedings of the 2022 3rd International Conference for Emerging Technology (INCET), Belgaum, India, 27–29 May 2022.
  88. Zhang, Z.; Ding, S.; Jia, W. A hybrid optimization algorithm based on cuckoo search and differential evolution for solving constrained engineering problems. Eng. Appl. Artif. Intell. 2019, 85, 254–268.
  89. Dragoi, E.N.; Curteanu, S. The use of differential evolution algorithm for solving chemical engineering problems. Rev. Chem. Eng. 2016, 32, 14–180.
  90. Zuo, W.; Gao, Y. Solving numerical and engineering optimization problems using a dynamic dual-population differential evolution algorithm. Int. J. Mach. Learn. Cybern. 2024, 16, 1701–1760.
  91. Tang, J.; Wang, L. A whale optimization algorithm based on atom-like structure differential evolution for solving engineering design problems. Sci. Rep. 2024, 14, 795.
  92. Alshinwan, M.; Khashan, O.A.; Khader, M.; Tarawneh, O.; Shdefat, A.; Mostafa, N.; AbdElminaam, D.S. Enhanced Prairie Dog Optimization with Differential Evolution for solving engineering design problems and network intrusion detection system. Heliyon 2024, 10, e36663.
  93. De Melo, V.V.; Carosio, G.L. Investigating Multi-View Differential Evolution for solving constrained engineering design problems. Expert Syst. Appl. 2013, 40, 3370–3377.
  94. Mohamed, A.W.; Mohamed, A.K.; Elfeky, E.Z.; Saleh, M. Enhanced directed differential evolution algorithm for solving constrained engineering optimization problems. Int. J. Appl. Metaheuristic Comput. 2019, 10, 1–28.
  95. Zeng, N.; Song, D.; Li, H.; You, Y.; Liu, Y.; Alsaadi, F.E. A competitive mechanism integrated multi-objective whale optimization algorithm with differential evolution. Neurocomputing 2021, 432, 170–182.
  96. Akl, D.T.; Saafan, M.M.; Haikal, A.Y.; El-Gendy, E.M. IHHO: An improved Harris Hawks optimization algorithm for solving engineering problems. Neural Comput. Appl. 2024, 36, 12185–12298.
  97. Yu, M.; Xu, J.; Liang, W.; Qiu, Y.; Bao, S.; Tang, L. Improved multi-strategy adaptive Grey Wolf Optimization for practical engineering applications and high-dimensional problem solving. Artif. Intell. Rev. 2024, 57, 277.
  98. Liu, T.; Li, Y.; Qin, X. A Padé approximation and intelligent population shrinkage chicken swarm optimization algorithm for solving global optimization and engineering problems. Math. Biosci. Eng. 2024, 21, 984–1016.
  99. Garg, V.; Deep, K.; Bansal, S. Improved Teaching Learning Algorithm with Laplacian operator for solving nonlinear engineering optimization problems. Eng. Appl. Artif. Intell. 2023, 124, 106549.
  100. Gopi, S.; Mohapatra, P. Opposition-based Learning Cooking Algorithm (OLCA) for solving global optimization and engineering problems. Int. J. Mod. Phys. C 2024, 35, 2450051.
  101. Wang, J.; Wang, W.C.; Chau, K.W.; Qiu, L.; Hu, X.X.; Zang, H.F.; Xu, D.M. An Improved Golden Jackal Optimization Algorithm Based on Multi-strategy Mixing for Solving Engineering Optimization Problems. J. Bionic Eng. 2024, 21, 1092–1115.
  102. Moustafa, G.; Tolba, M.A.; El-Rifaie, A.M.; Ginidi, A.; Shaheen, A.M.; Abid, S. A Subtraction-Average-Based Optimizer for Solving Engineering Problems with Applications on TCSC Allocation in Power Systems. Biomimetics 2023, 8, 332.
  103. El-Shorbagy, M.; Elazeem, A.A. Convex combination search algorithm: A novel metaheuristic optimization algorithm for solving global optimization and engineering design problems. J. Eng. Res. 2024, in press.
  104. Givi, H.; Dehghani, M.; Hubalovsky, S. Red Panda Optimization Algorithm: An Effective Bio-Inspired Metaheuristic Algorithm for Solving Engineering Optimization Problems. IEEE Access 2023, 11, 57203–57227.
  105. Gharehchopogh, F.S.; Nadimi-Shahraki, M.H.; Barshandeh, S.; Abdollahzadeh, B.; Zamani, H. CQFFA: A Chaotic Quasi-oppositional Farmland Fertility Algorithm for Solving Engineering Optimization Problems. J. Bionic Eng. 2023, 20, 158–183.
  106. Pan, J.S.; Zhang, L.G.; Wang, R.B.; Snášel, V.; Chu, S.C. Gannet optimization algorithm: A new metaheuristic algorithm for solving engineering optimization problems. Math. Comput. Simul. 2022, 202, 343–373.
  107. Hu, G.; Cheng, M.; Sheng, G.; Wei, G. ACEPSO: A multiple adaptive co-evolved particle swarm optimization for solving engineering problems. Adv. Eng. Inform. 2024, 61, 102516.
  108. Ewees, A.A. Harmony-driven technique for solving optimization and engineering problems. J. Supercomput. 2024, 80, 17980–18008.
  109. Pan, J.S.; Sun, B.; Chu, S.C.; Zhu, M.; Shieh, C.S. A Parallel Compact Gannet Optimization Algorithm for Solving Engineering Optimization Problems. Mathematics 2023, 11, 439.
  110. Che, Y.; He, D. An enhanced seagull optimization algorithm for solving engineering optimization problems. Appl. Intell. 2022, 52, 13043–13081.
  111. Abualigah, L.; Elaziz, M.A.; Khasawneh, A.M.; Alshinwan, M.; Ibrahim, R.A.; Al-Qaness, M.A.; Mirjalili, S.; Sumari, P.; Gandomi, A.H. Meta-heuristic optimization algorithms for solving real-world mechanical engineering design problems: A comprehensive survey, applications, comparative analysis, and results. Neural Comput. Appl. 2022, 34, 4081–4110.
  112. Dokeroglu, T.; Sevinc, E.; Kucukyilmaz, T.; Cosar, A. A survey on new generation metaheuristic algorithms. Comput. Ind. Eng. 2019, 137, 106040.
  113. Fakhouri, H.N.; Hudaib, A.; Sleit, A. Multivector particle swarm optimization algorithm. Soft Comput. 2020, 24, 11695–11713.
  114. Kamil, A.T.; Saleh, H.M.; Abd-Alla, I.H. A multi-swarm structure for particle swarm optimization: Solving the welded beam design problem. J. Phys. Conf. Ser. 2021, 1804, 012012.
  115. Moss, D.R. Pressure Vessel Design Manual; Elsevier: Amsterdam, The Netherlands, 2004.
  116. Annaratone, D. Pressure Vessel Design; Springer: Berlin/Heidelberg, Germany, 2007.
  117. Dehghani, M.; Montazeri, Z.; Dhiman, G.; Malik, O.; Morales-Menendez, R.; Ramirez-Mendoza, R.A.; Dehghani, A.; Guerrero, J.M.; Parra-Arroyo, L. A spring search algorithm applied to engineering optimization problems. Appl. Sci. 2020, 10, 6173.
  118. Tzanetos, A.; Blondin, M. A qualitative systematic review of metaheuristics applied to tension/compression spring design problem: Current situation, recommendations, and research direction. Eng. Appl. Artif. Intell. 2023, 118, 105521.
  119. Çelik, Y.; Kutucu, H. Solving the Tension/Compression Spring Design Problem by an Improved Firefly Algorithm. IDDM 2018, 1, 1–7.
  120. Canbaz, B.; Yannou, B.; Yvars, P.A. A new framework for collaborative set-based design: Application to the design problem of a hollow cylindrical cantilever beam. In Proceedings of the International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Washington, DC, USA, 28–31 August 2011; Volume 54822, pp. 197–206.
  121. McLaughlin, R.J. Systematic design of cantilever beams for muscle research. J. Appl. Physiol. 1977, 42, 786–794.
  122. Deb, K. Optimal design of a welded beam via genetic algorithms. AIAA J. 1991, 29, 2013–2015.
  123. Sarkar, M.; Roy, T.K. Optimization of welded beam structure using neutrosophic optimization technique: A comparative study. Int. J. Fuzzy Syst. 2018, 20, 847–860.
  124. Zeman, J.L.; Rauscher, F.; Schindler, S. Pressure Vessel Design: The Direct Route; Elsevier: Amsterdam, The Netherlands, 2006.
Figure 1. HPDE flowchart.
Figure 2. Illustration of Robot Gripper Design Parameters.
Figure 3. Geometric representation of the welded beam design problem.
Figure 4. Pressure vessel design optimization.
Figure 5. Spring design optimization.
Figure 6. Segmented model of a cantilever beam used for optimization studies.
Figure 7. Convergence diagram analysis over CEC2014 benchmark functions (F1–F15).
Figure 8. Convergence curve analysis over CEC2022 benchmark functions (F7–F12).
Figure 9. Search History diagram analysis over CEC2014 benchmark functions (F1–F6).
Figure 10. Search History diagram analysis over CEC2014 benchmark functions (F7–F12).
Figure 11. Trajectory diagram analysis over selected functions {1,11,12,13,15,17} of CEC2014 benchmark functions.
Figure 12. Average fitness diagram analysis over selected functions {13,14,15,19,22,23} of CEC2014 benchmark functions.
Figure 13. Sensitivity analysis over CEC2014 benchmark functions (F1–F6).
Figure 14. Sensitivity analysis over CEC2014 benchmark functions (F7–F12).
Table 1. Comparative summary of optimization algorithms.
Acronym | Algorithm Name | Description | Disadvantages | Year
SSOA | Synergistic Swarm Optimization [28] | Combines multiple swarm behaviors for enhanced optimization | Increased computational complexity | 2024
FLO | Frilled Lizard Optimizer [29] | Mimics the foraging behavior of frilled lizards | May require parameter tuning | 2024
CPO | Chinese Pangolin Optimizer [30] | Inspired by the digging and defense behaviors of Chinese pangolins | Potentially sensitive to parameters | 2024
FVIM | Four Vector Optimizer [31] | Utilizes four vector operations for optimization | May get trapped in local optima | 2024
SHIO | Success History Intelligent Optimizer [32] | Adapts the search based on historical successes | May have complex parameter settings | 2022
ZOA | Zebra Optimizer [33] | Simulates the herd behavior and movement patterns of zebras | May converge slowly in some cases | 2022
DOA | Dingo Optimizer [34] | Models the cooperative hunting strategies of dingoes | May need careful parameter adjustment | 2021
ROA | Remora Optimizer [35] | Inspired by the symbiotic relationships of remoras | May converge slowly during exploitation | 2021
AO | Aquila Optimizer [36] | Mimics the hunting strategies of eagles | May require parameter tuning | 2021
CHIMP | Chimp Optimizer [37] | Simulates the intelligence and hunting behavior of chimpanzees | May need careful parameter tuning | 2020
STOA | Sooty Tern Optimizer [38] | Inspired by the migration and hunting of sooty terns | May converge prematurely | 2019
SOA | Seagull Optimizer [39] | Models the soaring and attacking behavior of seagulls | May require improved convergence speed | 2019
SCSO | Sand Cat Optimizer [40] | Simulates the hunting strategies of sand cats | May need parameter adjustment | 2023
MVO | Multi-Verse Optimizer [41] | Based on multiverse concepts in physics | May require improved convergence speed | 2016
WOA | Whale Optimizer [25] | Mimics the bubble-net feeding of humpback whales | May converge prematurely | 2016
SCA | Sine Cosine Algorithm [27] | Utilizes sine and cosine functions for search movements | May lack exploitation efficiency | 2016
MFO | Moth-Flame Optimizer [42] | Simulates moth navigation using transverse orientation | May get trapped in local optima | 2015
GWO | Grey Wolf Optimizer [24] | Mimics leadership and hunting in grey wolves | May need improved convergence speed | 2014
PSO | Particle Swarm Optimizer [22] | Models the social behavior of bird flocking | Prone to premature convergence | 1995
SA | Simulated Annealing [43] | Based on the annealing process in metallurgy | Slow convergence; sensitive to the cooling schedule | 1983
GA | Genetic Algorithm [21] | Inspired by natural selection and genetics | Slow or premature convergence | 1960
Table 2. Robot Gripper optimization problem.
Optimizer | Best | x1 | x2 | x3 | x4 | x5 | x6 | x7
HPDE2.54714150149.8812001.86E-07149.7948101.12922.302793
APO2.653614149.1163145.92382003.051837109.4974103.05032.124082
CSA2.554382149.9889149.8681199.68851.68E-18140.3496101.30162.261251
WSO2.616181148.6972148.1965199.99470.3515280.33606103.5181.954732
GWO3.115494149.9551134.6845198.906614.5216141.7656125.85552.49649
COOT3.116414148.1836122.3659181.986425.6268146.6658103.18182.704846
SA2.557381150149.80162000.073718138.4248101.67572.258888
HGS2.992719150149.21772000.003539148.1204129.042.465132
ChOA3.434886150149.6112200014.72397123.51761.694239
SCA4.289317150150200010.676761001.60428
WOA3.730008149.9999149.7655144.57890149.7574109.84562.700656
AVOA4.124148149.8944127.8498155.08421.1210264.67494128.12022.133409
HHO4.487444149.9995149.8802113.5487010101.16211.545976
DBO4.289317150150200026.356731001.702452
PSO5.58727473.4757639.34421177.183913.6001947.21313209.88072.843062
BO5.49897627.8172549.13919159.535810.0240853.56557138.54731.760682
AO5.73049149.9376130.2317141.8499.274051104.9349183.82832.607121
HHO4.010135150144.0767169.9663.06146118.85633155.01491.823084
FOX7.05106149.9991124.0087131.34460.21687590.06024206.03212.484178
RIME5.348892143.2657107.797165.841637.03908122.388161.39813.128836
BWO4.28931715015020001501002.401751
COA4.260015147.746994.43728196.995849.24896147.7469148.55833.092834
GA9.77E+2110101010101010
Table 3. Robot Gripper optimization problem: statistical comparison results.
Optimizer | Mean | Std | Min | Max | Time | Rank
HPDE2.685370.2266582.547143.08793535.685521
APO2.7974730.1633472.6536143.05257462.748482
CSA3.1001010.4342252.5543823.58173826.73663
WSO3.1225430.3634522.6161813.60674321.821774
GWO3.3810120.2717153.1154943.70168530.426725
COOT3.3817560.2153373.1164143.7135230.460246
SA3.503650.5765832.5573814.00873149.292517
HGS4.042670.9076442.9927195.31095128.770368
ChOA4.0757280.5091093.4348864.68853655.299989
SCA4.28931704.2893174.28931759.0113710
WOA4.6953080.9005983.7300086.04308728.9117611
AVOA4.7799110.6863984.1241485.63951730.1208212
HHO5.9772741.5291064.4874448.49685667.0544113
DBO6.1626951.9385554.2893178.57863443.385614
PSO9.5998682.3350645.58727411.236142.494515
BO11.929297.8757585.49897625.5376271.9875916
AO18.4097313.265285.7304937.7442464.925117
HHO18.4477830.374684.01013572.7734467.54418
FOX85.05408131.37197.05106317.396328.7701719
RIME2.34E+165.23E+165.3488921.17E+1754.5484820
BWO9.3E+212.08E+224.2893174.65E+2243.89821
COA1.34E+222.28E+224.2600155.25E+2285.6855822
GA1.07E+262.16E+269.77E+214.92E+2638.7899623
Table 4. Welded beam design optimization comparison results.
Optimizer | Best | x1 | x2 | x3 | x4
HPDE1.6735070.1979833.3697759.1904330.198903
GWO1.6772780.2001323.325199.1638090.200329
AVOA1.688190.1837743.6471239.1895390.19894
COOT1.6771980.1931093.4521229.1894410.198944
CSA1.6711210.1979563.3540149.1920210.198832
ChOA1.7773030.1977953.6643578.9286640.213358
SCA1.8153140.1921973.615917100.196785
DBO1.8519480.1463244.552712100.19542
HGS1.6778090.1952853.3944959.2327980.198642
HHO1.7141810.196853.5744429.098640.202935
AO1.8864450.1800034.4376279.9539330.195663
PSO2.2262360.8697791.2280954.2574630.398624
FOX1.9486450.2661712.7649657.8236850.274511
COA1.9760530.17.4784419.2131660.198886
BWO2.1753290.18.1744568.7622480.223053
BO2.1733460.2864572.1484160.5563140.198662
RIME1.9642530.1014837.1708839.3070580.198604
SA2.611810.4258091.7457797.0129980.425809
WOA2.8509730.3286092.7205565.745790.546603
GA4.9830520.10.10.10.1
Table 5. Welded beam design optimization statistical comparison results.
Optimizer | Mean | Std | Min | Max | Time | Rank
HPDE1.6743530.0007741.6735071.6754250.5096251
GWO1.6790660.0020041.6772781.6825050.2593232
AVOA1.7932660.1188041.688191.9254140.2678543
COOT1.7987150.1505121.6771982.033910.2443424
CSA1.8198320.1213141.6711211.9284180.2310745
ChOA1.8472550.040441.7773031.8790540.3701656
SCA1.8629780.0293241.8153141.8917560.2443427
DBO1.943090.0698371.8519482.0293210.2772238
HGS1.9794320.2459411.6778092.2525950.2427029
HHO2.0878930.2460081.7141812.3597610.56603810
AO2.2047880.2865631.8864452.5756390.50404211
PSO2.2697610.0332282.2262362.297541.33546512
FOX2.2852260.2968451.9486452.7079090.25545413
COA2.3450340.2308451.9760532.5596880.58676414
BWO2.4247740.2658482.1753292.8128010.28914815
BO2.5605280.3009712.1733462.8925721.63530716
RIME2.8089090.7100271.9642533.8254040.46018417
SA3.252230.5522282.611814.0993350.44240918
WOA3.9308120.9662422.8509734.9501460.25413619
GA6.5025652.0728724.98305210.0383163.0520420
Table 6. Pressure vessel design optimization comparison results.
Optimizer | Best Score | x1 | x2 | x3 | x4
HDPE5885.36312.450886.15450840.3202199.992
SHIO6109.21512.791516.55422540.70604196.8767
FVIM5927.9912.729716.33360241.20091188.151
RTH5888.66212.481786.16975140.42027198.6035
COA6080.02714.026556.93457845.40416139.6319
CSA5894.42212.516676.19097440.53311197.1072
MFO5981.27113.290766.5696343.04002165.3082
RIME5946.18412.982716.42525142.04245177.3318
WSO5885.91212.453826.15592240.32681199.901
RSO9442.60919.3228611.8505159.6915661.08678
SCA6303.98913.339226.72702240.98908191.001
GWO5928.28212.525926.3219940.5147197.5136
WOA5898.94812.576866.21674740.72816194.3899
PSO6044.52512.583526.82600140.62535196.5722
RIME5885.33312.45076.15438740.31962200
BWO6298.22315.5057.66412750.2104896.68401
FOX5941.53312.900766.4096441.77709180.6599
MVO6065.60213.664656.76966944.01309154.4982
COOT5885.34812.450736.15438840.31963199.9999
ChOA9613.71815.2598719.4514940.34886200
Table 7. Pressure vessel design optimization statistical comparison results.
Optimizer | Min | Mean | Max | Std
HPDE5885.3635885.4215885.5050.06051
SHIO6109.2156734.6937419.164618.9455
FVIM5927.996002.536273.525151.672
RTH5888.6626176.9396423.756217.2817
COA6080.0276280.3066636.188216.628
CSA5894.4225991.5946203.661133.1088
MFO5981.2716759.4147319.001685.775
RIME5946.1846404.4847127.972561.4591
WSO5885.9125886.4675886.8290.386159
RSO9442.60914771.420188.884282.191
SCA6303.9897303.4497953.458862.6954
GWO5928.2825960.0486051.64951.61522
WOA5898.9486325.3446712.783374.9533
PSO6044.5256754.6177391.859667.2422
RIME5885.3336248.1886685.679390.0295
BWO6298.2236572.5986752.056179.1332
FOX5941.5336457.3567244.257538.2857
MVO6065.6026309.8686494.06196.3284
COOT5885.3485995.7566212.356139.8568
ChOA9613.71815549.2125397.646761.925
Table 8. Spring design optimization comparison results.
Optimizer | Best Score | x1 | x2 | x3
HPDE0.0126650.0517570.35835411.19372
SHIO0.0127020.0510.33995212.36581
FVIM0.0126880.0510.34034612.33284
RTH0.0126730.0510440.34140412.24668
COA0.0127020.0522790.37083410.53267
CSA0.0126680.0518990.36176211.00061
MFO0.0126740.0510.34036612.31619
RIME0.0129070.0510.33647612.7483
WSO0.0126650.0517910.35918611.14589
SCA0.0131790.0510.33420913.16073
GWO0.0126860.0510.34019712.33711
WOA0.0126740.0510.34036612.31619
PSO0.0126910.0510.34033712.33682
RIME0.0126740.0510030.34042712.31205
BWO0.0126750.0510.34035512.31741
FOX0.0126670.0520070.36441110.85178
MVO0.0128510.0510.3374512.64155
COOT0.0126650.0517620.35848511.18613
ChOA0.0141010.0510.31891615
Table 9. Spring design optimization statistical comparison results.
Optimizer | Min | Mean | Max | Std
HPDE0.0126650.0126660.0126676.84E-07
SHIO0.0127020.0130180.0140655.86E-04
FVIM0.0126880.0127280.0128165.18E-05
RTH0.0126730.0127250.0129311.15E-04
COA0.0127020.0131850.0135944.37E-04
CSA0.0126680.0127410.0128406.90E-05
MFO0.0126740.0136390.0143107.37E-04
RIME0.0129070.0135840.0140834.65E-04
WSO0.0126650.0126660.0126661.57E-07
SCA0.0131790.0134920.0139452.81E-04
GWO0.0126860.0127900.0131502.02E-04
WOA0.0126740.0132950.0142278.41E-04
PSO0.0126910.0127970.0130411.42E-04
RIME0.0126740.0127350.0128448.03E-05
BWO0.0126750.0128300.0132322.37E-04
FOX0.0126670.0135910.0150459.73E-04
MVO0.0128510.0171470.0185002.41E-03
COOT0.0126650.0127240.0128457.56E-05
ChOA0.0141010.0268700.0773922.82E-02
Table 10. Cantilever beam design optimization comparison results.
Optimizer | Best | x1 | x2 | x3 | x4 | x5
HPDE1.3399566.0160175.3091754.494333.5014742.152664
APO1.3399566.0165.3092024.4943493.5014422.152667
CSA1.3399576.0119035.3100164.4946713.5045112.15257
FOX1.3399666.0299225.2980974.4994913.4966312.149675
GWO1.3399866.0082065.3140824.4865633.4953652.169925
AVOA1.3399996.047255.2827694.4862813.51072.147349
HGS1.3400615.9993415.3549034.4826183.5125752.125902
SMA1.3400386.0593685.2790124.4723433.504232.16002
COOT1.3400796.0634415.2666294.518313.4804222.146828
DBO1.3403016.0049865.2921424.5836263.4647092.133728
SA1.3400876.0718335.310734.491193.4613692.140635
HHO1.340296.0477775.2964934.5563663.4817642.096603
AO1.3406146.0839295.2984214.4412673.4435072.217077
BWO1.3435296.2013725.3748554.5313233.3362242.087143
ChOA1.3549115.8722295.7623814.2942683.639852.144588
SCA1.3849636.104225.6189645.3076633.0677862.096291
GA1.3745275.9158524.8391875.0726293.9007562.299245
WOA1.3583275.5791136.1669044.4816493.4179392.122457
COA1.3809575.8058225.8607655.1160422.8665862.481502
RIME1.3775886.220256.4224433.8555273.422872.155643
BO1.4988076.9979577.960974.53744421.8220515.10797
PSO1.5832874.8498511.7873122.7743974.0419813.876916
Table 11. Cantilever beam design optimization statistical comparison results.
Optimizer | Mean | Std | Min | Max | Time | Rank
HPDE1.3399561.33E-131.3399561.3399560.4608581
APO1.3399567.11E-101.3399561.3399561.1084322
CSA1.339979.55E-061.3399571.339980.56813
FOX1.3399861.78E-051.3399661.3400040.6019934
GWO1.3400890.0001041.3399861.3402310.5831045
AVOA1.3401020.0001371.3399991.3403090.5671146
HGS1.3401710.0001411.3400611.3404170.5135577
SMA1.3401779.5E-051.3400381.3402670.9435018
COOT1.3403710.0003061.3400791.340860.5676389
DBO1.3405040.0001931.3403011.3407430.62673810
SA1.3416710.0018951.3400871.3446980.96711711
HHO1.3424340.0013211.340291.3433881.20946912
AO1.3428380.0015611.3406141.3449421.06794213
BWO1.3497590.0037021.3435291.3530330.66001714
ChOA1.3634350.007461.3549111.3715091.0576315
SCA1.3993970.0144821.3849631.4175140.48316816
GA1.4049580.044261.3745271.4812580.56980117
WOA1.4114890.0320751.3583271.4379950.49656518
COA1.4173390.0533711.3809571.504691.3309119
RIME1.6069550.4451041.3775882.3998860.99089220
BO1.8304520.2287871.4988072.1283325.0474621
PSO1.9839590.2383771.5832872.1648864.08780722
Table 12. Parameter settings for CEC benchmark functions.
Parameter | Value
Population Size | 30
Maximum Function Evaluations | 1000
Dimensionality (D) | 10
Search Range | [−100, 100]^D
Rotation | Applied to all rotated functions
Shift | Applied to all shifted functions
Table 13. Parameter settings for compared algorithms.
Algorithm | Parameter(s)
ChOA | r1 = random(0, 1), r2 = random(0, 1), r3 = random(0, 1), r4 = random(0, 1)
GWO | Convergence constant a decreases linearly from 2 to 0
FOX | c_max = 1, c_min = 0.00004
WOA | Convergence constant a decreases linearly from 2 to 0; b = 1
MVO | WEP_max = 1, WEP_min = 0.2
DOA | Not provided
MFO | a linearly decreases from −1 to −2
RIME | w = 0.9, c = 0.1
DBO | U = 0.05, V = 0.05, L = 0.05
WSO | a linearly decreases from −1 to −2
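Several entries in Table 13 are time-varying controls rather than fixed constants: GWO's and WOA's convergence constant a shrinks linearly from 2 to 0 over the run, while MFO's and WSO's a decreases from −1 to −2. A minimal sketch of such a linear schedule (the helper name is illustrative, not taken from any of the cited implementations):

```python
def linear_schedule(start, end, t, t_max):
    """Linearly interpolate a control parameter from `start` to `end`
    as iteration t goes from 0 to t_max."""
    return start + (end - start) * t / t_max

# GWO/WOA-style convergence constant: 2 -> 0 over the run
a_gwo = [linear_schedule(2.0, 0.0, t, 10) for t in range(11)]

# MFO/WSO-style constant: -1 -> -2 over the run
a_mfo = [linear_schedule(-1.0, -2.0, t, 10) for t in range(11)]
```

Schedules like these bias the search toward exploration early (large step scales) and exploitation late, which is the rationale behind most of the decreasing parameters listed above.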
Table 14. Statistical results of HPDE over 51 independent runs on the CEC2014 benchmark (F1–F15) with 30 agents and 1000 FEs (first set of optimizers).
Fun | Analysis | HPDE | APO | ChOA | GWO | FOX | WOA | MVO | DOA | MFO | RIME | DBO | WSO
F1Mean1.13E+024.41E+047.75E+045.96E+072.72E+058.51E+062.10E+051.43E+051.50E+066.11E+024.76E+076.32E+02
Std2.47E+015.82E+047.97E+049.32E+075.97E+056.75E+061.26E+051.15E+051.19E+067.35E+022.94E+071.01E+03
SEM7.80E+001.84E+042.52E+042.95E+071.89E+052.13E+064.00E+043.65E+043.78E+052.32E+029.29E+063.19E+02
Rank145128107692113
F2Mean2.00E+027.56E+021.24E+037.95E+092.75E+035.51E+056.63E+034.22E+036.88E+032.07E+023.43E+092.09E+02
Std9.06E-111.24E+031.64E+031.98E+092.63E+032.30E+053.91E+034.88E+034.76E+031.46E+011.52E+091.17E+01
SEM2.87E-113.92E+025.17E+026.25E+088.33E+027.27E+041.24E+031.54E+031.51E+034.61E+004.82E+083.71E+00
Rank145126108792113
F3Mean3.00E+023.00E+023.49E+021.80E+048.29E+024.17E+043.64E+028.17E+021.48E+043.00E+021.08E+043.00E+02
Std0.00E+005.61E-024.18E+017.28E+037.24E+021.64E+043.69E+015.62E+027.75E+032.29E-012.36E+032.04E-02
SEM0.00E+001.77E-021.32E+012.30E+032.29E+025.17E+031.17E+011.78E+022.45E+037.23E-027.47E+026.44E-03
Rank135118126710492
F4Mean4.21E+024.35E+024.35E+021.86E+034.32E+024.35E+024.30E+024.19E+024.25E+024.22E+021.84E+034.14E+02
Std1.10E+017.82E-071.23E-131.40E+031.61E+012.27E+011.36E+011.65E+011.58E+011.63E+016.18E+021.76E+01
SEM3.47E+002.47E-073.88E-144.42E+025.09E+007.19E+004.32E+005.23E+004.99E+005.16E+001.95E+025.58E+00
Rank398127106254111
F5Mean5.18E+025.18E+025.20E+025.20E+025.20E+025.20E+025.20E+025.20E+025.20E+025.20E+025.20E+025.20E+02
Std6.33E+005.37E+006.66E-027.76E-022.36E-029.84E-021.56E-021.82E-021.35E-016.14E-028.93E-028.84E-02
SEM2.00E+001.70E+002.11E-022.45E-027.46E-033.11E-024.94E-035.76E-034.28E-021.94E-022.82E-022.80E-02
Rank129103854671112
F6Mean6.00E+026.00E+026.01E+026.10E+026.10E+026.07E+026.02E+026.05E+026.04E+026.05E+026.06E+026.01E+02
Std6.19E-014.26E-011.37E+002.66E+001.48E+001.85E+001.49E+002.28E+001.70E+002.18E+001.11E+001.03E+00
SEM1.96E-011.35E-014.34E-018.43E-014.68E-015.84E-014.70E-017.21E-015.37E-016.88E-013.51E-013.26E-01
Rank124111210586793
F7Mean7.00E+027.00E+027.00E+028.45E+027.00E+027.01E+027.00E+027.00E+027.01E+027.00E+028.88E+027.00E+02
Std1.24E-021.31E-026.64E-028.14E+013.30E-014.64E-011.53E-011.35E-011.21E+001.88E-014.96E+014.11E-02
SEM3.92E-034.15E-032.10E-022.57E+011.04E-011.47E-014.84E-024.27E-023.82E-015.96E-021.57E+011.30E-02
Rank124118107596123
F8Mean8.00E+028.00E+028.06E+028.92E+028.39E+028.39E+028.14E+028.07E+028.20E+028.21E+028.61E+028.04E+02
Std1.10E-093.94E-052.47E+001.43E+011.55E+011.27E+012.32E+004.80E+007.39E+004.39E+009.53E+001.44E+00
SEM3.48E-101.25E-057.82E-014.51E+004.90E+004.02E+007.33E-011.52E+002.34E+001.39E+003.01E+004.55E-01
Rank124121096578113
F9Mean9.03E+029.03E+029.13E+029.71E+029.58E+029.58E+029.16E+029.24E+029.29E+029.33E+029.55E+029.08E+02
Std1.17E+009.88E-014.60E+001.73E+011.54E+011.92E+015.46E+007.28E+001.06E+011.16E+015.55E+002.97E+00
SEM3.71E-013.12E-011.45E+005.47E+004.87E+006.07E+001.73E+002.30E+003.34E+003.68E+001.76E+009.40E-01
Rank124121011567893
F10Mean1.04E+031.04E+031.14E+032.00E+032.20E+031.53E+031.31E+031.19E+031.30E+031.54E+032.55E+031.56E+03
Std8.95E+001.01E+018.90E+013.83E+022.66E+023.29E+022.42E+021.12E+021.44E+022.80E+021.82E+024.05E+02
SEM2.83E+003.20E+002.82E+011.21E+028.40E+011.04E+027.66E+013.53E+014.54E+018.86E+015.74E+011.28E+02
Rank123101176458129
F11Mean1.24E+031.29E+031.59E+032.51E+032.53E+032.05E+031.75E+031.93E+032.16E+032.02E+032.66E+031.38E+03
Std1.16E+029.07E+013.23E+023.43E+023.52E+023.14E+022.82E+022.17E+023.06E+022.36E+021.02E+023.53E+02
SEM3.65E+012.87E+011.02E+021.08E+021.11E+029.92E+018.93E+016.87E+019.69E+017.46E+013.22E+011.12E+02
Rank124101185697123
F12Mean1.20E+031.20E+031.20E+031.20E+031.20E+031.20E+031.20E+031.20E+031.20E+031.20E+031.20E+031.20E+03
Std4.00E-023.29E-021.15E-014.84E-016.83E-014.60E-011.34E-012.49E-011.88E-011.84E-012.92E-013.86E-01
SEM1.26E-021.04E-023.63E-021.53E-012.16E-011.45E-014.25E-027.89E-025.94E-025.83E-029.22E-021.22E-01
Rank134109826751211
F13Mean1.30E+031.30E+031.30E+031.30E+031.30E+031.30E+031.30E+031.30E+031.30E+031.30E+031.30E+031.30E+03
Std1.65E-021.38E-021.61E-021.19E+002.74E-012.00E-017.63E-021.95E-011.01E-017.11E-026.16E-016.08E-02
SEM5.22E-034.38E-035.08E-033.75E-018.65E-026.32E-022.41E-026.18E-023.21E-022.25E-021.95E-011.92E-02
Rank123119105876124
F14Mean1.40E+031.40E+031.40E+031.43E+031.40E+031.40E+031.40E+031.40E+031.40E+031.40E+031.44E+031.40E+03
Std3.81E-025.87E-026.07E-021.53E+012.36E-011.68E-014.70E-021.84E-012.58E-011.66E-017.05E+007.11E-02
SEM1.20E-021.86E-021.92E-024.85E+007.45E-025.30E-021.49E-025.82E-028.15E-025.24E-022.23E+002.25E-02
Rank153111072986124
F15Mean1.50E+031.50E+031.50E+031.68E+041.51E+031.51E+031.50E+031.50E+031.50E+031.50E+033.61E+031.50E+03
Std1.10E-011.11E-012.41E-011.81E+042.21E+002.70E+005.40E-011.11E+006.79E-017.24E-011.64E+036.74E-01
SEM3.48E-023.51E-027.62E-025.73E+037.00E-018.54E-011.71E-013.51E-012.15E-012.29E-015.19E+022.13E-01
Rank123129104857116
Table 15. Statistical results of HPDE over 51 independent runs on the CEC2014 (F16–F30) functions with 30 agents and 1000 FEs (first set of optimizers).
Fun   Analysis   HPDE   APO   ChOA   GWO   FOX   WOA   MVO   DOA   MFO   RIME   DBO   WSO
F16Mean1.60E+031.60E+031.60E+031.60E+031.60E+031.60E+031.60E+031.60E+031.60E+031.60E+031.60E+031.60E+03
Std4.41E-014.60E-017.25E-013.50E-015.10E-013.32E-014.34E-013.32E-013.70E-015.24E-012.26E-012.55E-01
SEM1.39E-011.45E-012.29E-011.11E-011.61E-011.05E-011.37E-011.05E-011.17E-011.66E-017.15E-028.07E-02
Rank123111286795104
F17Mean1.72E+031.81E+033.45E+036.91E+052.88E+035.84E+046.39E+036.73E+034.65E+042.13E+031.89E+051.82E+03
Std1.46E+017.74E+011.95E+033.14E+056.87E+028.28E+045.41E+033.34E+036.12E+042.53E+021.59E+056.40E+01
SEM4.62E+002.45E+016.16E+029.93E+042.17E+022.62E+041.71E+031.05E+031.93E+048.00E+015.04E+042.02E+01
Rank126125107894113
F18Mean1.80E+031.80E+035.84E+032.81E+051.92E+032.28E+041.51E+046.23E+031.20E+041.84E+031.57E+041.81E+03
Std3.50E-011.92E+003.25E+034.05E+055.42E+011.47E+049.37E+035.60E+031.16E+042.71E+016.57E+038.88E+00
SEM1.11E-016.08E-011.03E+031.28E+051.71E+014.66E+032.96E+031.77E+033.66E+038.56E+002.08E+032.81E+00
Rank126125119784103
F19Mean1.90E+031.90E+031.90E+031.94E+031.91E+031.91E+031.90E+031.90E+031.90E+031.90E+031.93E+031.90E+03
Std4.37E-014.28E-018.00E-012.71E+011.35E+001.30E+006.97E-017.12E-019.13E-011.66E+001.46E+019.01E-01
SEM1.38E-011.35E-012.53E-018.57E+004.28E-014.11E-012.20E-012.25E-012.89E-015.26E-014.60E+002.85E-01
Rank214121096578113
F20Mean2.00E+032.00E+032.12E+032.52E+052.36E+037.19E+032.18E+032.84E+031.17E+042.02E+036.79E+032.01E+03
Std3.86E-013.03E-019.84E+013.47E+052.89E+024.31E+034.24E+021.44E+031.18E+041.89E+012.18E+035.22E+00
SEM1.22E-019.60E-023.11E+011.10E+059.14E+011.36E+031.34E+024.56E+023.72E+035.97E+006.90E+021.65E+00
Rank215127106811493
F21Mean2.10E+032.10E+032.39E+037.97E+052.68E+037.24E+045.28E+035.74E+033.98E+032.40E+036.27E+042.12E+03
Std1.90E-011.67E+002.04E+021.75E+063.27E+021.31E+052.50E+032.13E+032.19E+032.46E+024.63E+042.79E+01
SEM6.02E-025.27E-016.45E+015.55E+051.03E+024.13E+047.91E+026.74E+026.93E+027.77E+011.46E+048.81E+00
Rank124126118975103
F22Mean2.20E+032.20E+032.22E+032.57E+032.46E+032.30E+032.31E+032.31E+032.24E+032.23E+032.38E+032.24E+03
Std6.27E+006.20E+009.37E+001.50E+021.54E+027.16E+011.09E+026.90E+011.16E+017.73E+006.34E+012.96E+01
SEM1.98E+001.96E+002.96E+004.75E+014.87E+012.26E+013.45E+012.18E+013.66E+002.44E+002.01E+019.37E+00
Rank123121179864105
F23Mean2.62E+032.63E+032.63E+032.71E+032.50E+032.63E+032.63E+032.63E+032.63E+032.50E+032.50E+032.63E+03
Std4.79E-134.26E-116.06E-139.43E+010.00E+003.05E+001.03E-035.61E-045.27E+000.00E+000.00E+002.00E-06
SEM1.52E-131.35E-111.92E-132.98E+010.00E+009.65E-013.24E-041.77E-041.67E+000.00E+000.00E+006.34E-07
Rank465121109811117
F24Mean2.51E+032.51E+032.53E+032.60E+032.58E+032.57E+032.53E+032.54E+032.53E+032.57E+032.59E+032.51E+03
Std1.71E+004.71E+002.69E+011.75E+012.35E+012.88E+016.56E+002.64E+011.15E+013.66E+012.03E+015.85E+00
SEM5.41E-011.49E+008.49E+005.54E+007.42E+009.12E+002.07E+008.35E+003.62E+001.16E+016.42E+001.85E+00
Rank215121094768113
F25Mean2.64E+032.65E+032.69E+032.70E+032.70E+032.69E+032.69E+032.69E+032.70E+032.68E+032.70E+032.66E+03
Std3.75E+013.54E+012.39E+017.28E-011.52E+001.18E+013.32E+011.11E+018.76E+002.92E+017.60E-012.63E+01
SEM1.19E+011.12E+017.57E+002.30E-014.80E-013.72E+001.05E+013.52E+002.77E+009.22E+002.40E-018.33E+00
Rank126121075894113
F26Mean2.70E+032.70E+032.70E+032.70E+032.70E+032.70E+032.70E+032.70E+032.70E+032.70E+032.71E+032.70E+03
Std2.17E-021.84E-021.31E-023.20E+002.12E-011.41E-016.23E-021.20E-011.24E-016.70E-021.40E+015.03E-02
SEM6.86E-035.82E-034.15E-031.01E+006.70E-024.46E-021.97E-023.80E-023.94E-022.12E-024.42E+001.59E-02
Rank132111095876124
F27Mean2.85E+032.96E+032.98E+033.18E+032.90E+033.11E+033.04E+033.06E+033.07E+032.80E+032.76E+032.94E+03
Std1.59E+021.38E+021.50E+027.08E+010.00E+001.58E+021.24E+021.29E+021.29E+021.03E+021.86E+011.64E+02
SEM5.04E+014.37E+014.75E+012.24E+010.00E+005.01E+013.92E+014.08E+014.09E+013.27E+015.87E+005.20E+01
Rank367124118910215
F28Mean3.18E+033.18E+033.19E+033.86E+033.00E+033.35E+033.25E+033.28E+033.19E+033.00E+033.27E+033.22E+03
Std3.50E+004.18E+003.23E+014.68E+020.00E+001.65E+026.43E+014.44E+011.12E+010.00E+001.99E+021.21E+02
SEM1.11E+001.32E+001.02E+011.48E+020.00E+005.21E+012.03E+011.40E+013.53E+000.00E+006.28E+013.83E+01
Rank435121118106197
F29Mean3.13E+033.21E+033.80E+051.22E+063.42E+034.04E+032.00E+054.01E+034.05E+033.28E+035.13E+033.13E+03
Std5.71E+001.03E+027.98E+052.34E+063.17E+025.06E+026.21E+057.06E+025.82E+021.81E+023.67E+031.45E+01
SEM1.81E+003.25E+012.52E+057.41E+051.00E+021.60E+021.96E+052.23E+021.84E+025.72E+011.16E+034.59E+00
Rank131112571068492
F30Mean3.48E+033.50E+033.73E+033.44E+044.66E+035.05E+034.05E+034.71E+033.98E+033.81E+035.08E+033.59E+03
Std2.26E+012.32E+013.53E+024.52E+046.15E+029.79E+025.11E+026.09E+022.51E+023.21E+021.46E+038.82E+01
SEM7.15E+007.34E+001.12E+021.43E+041.94E+023.10E+021.62E+021.92E+027.92E+011.02E+024.60E+022.79E+01
Rank124128107965113
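The Mean, Std, and SEM rows in the statistical tables above follow the standard definitions: sample mean, sample standard deviation, and standard error of the mean (SEM = Std / sqrt(n)). A minimal sketch with hypothetical final-fitness values from independent runs (the run data below is illustrative, not taken from the tables):

```python
import math

def summarize(runs):
    """Return (mean, sample std, standard error of the mean) for a list of run results."""
    n = len(runs)
    mean = sum(runs) / n
    var = sum((x - mean) ** 2 for x in runs) / (n - 1)  # sample variance (n - 1 denominator)
    std = math.sqrt(var)
    sem = std / math.sqrt(n)  # standard error of the mean
    return mean, std, sem

# Hypothetical final fitness values of one optimizer on one benchmark function
runs = [600.3, 600.9, 600.5, 601.2, 600.7]
mean, std, sem = summarize(runs)
print(f"Mean={mean:.2E}  Std={std:.2E}  SEM={sem:.2E}")
```

The Rank rows then order the optimizers by mean fitness on each function, with rank 1 for the best (lowest) mean.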
Table 16. Error measurement comparison of HPDE with the first set of optimizers over the CEC2014 (F1–F30) benchmark functions.
Fun   HPDE   APO   ChOA   GWO   FOX   WOA   MVO   DOA   MFO   RIME   DBO   WSO
F11.32E+014.40E+047.74E+045.96E+072.72E+058.51E+062.10E+051.43E+051.50E+065.11E+024.76E+075.32E+02
F23.38E-115.56E+021.04E+037.95E+092.55E+035.50E+056.43E+034.02E+036.68E+036.84E+003.43E+098.99E+00
F30.00E+001.78E-024.93E+011.77E+045.29E+024.14E+046.45E+015.17E+021.45E+049.34E-021.05E+046.63E-03
F42.13E+013.48E+013.48E+011.46E+033.20E+013.54E+013.03E+011.92E+012.50E+012.22E+011.44E+031.43E+01
F51.80E+011.83E+012.02E+012.04E+012.00E+012.01E+012.00E+012.00E+012.01E+012.01E+012.04E+012.04E+01
F63.67E-013.82E-011.24E+009.88E+001.01E+016.94E+001.91E+005.29E+004.02E+005.02E+006.21E+008.66E-01
F71.03E-021.45E-026.25E-021.45E+024.47E-011.03E+003.09E-011.96E-015.30E-012.22E-011.88E+026.20E-02
F83.93E-101.96E-055.77E+009.17E+013.91E+013.88E+011.40E+016.77E+002.04E+012.07E+016.14E+013.99E+00
F92.57E+003.13E+001.29E+017.11E+015.77E+015.80E+011.62E+012.45E+012.93E+013.28E+015.51E+017.56E+00
F103.57E+013.85E+011.44E+029.96E+021.20E+035.27E+023.08E+021.87E+022.99E+025.36E+021.55E+035.64E+02
F111.43E+021.95E+024.85E+021.41E+031.43E+039.48E+026.50E+028.30E+021.06E+039.21E+021.56E+032.78E+02
F121.68E-012.47E-012.49E-011.27E+001.09E+009.90E-011.73E-013.25E-013.25E-012.68E-011.55E+001.36E+00
F134.30E-024.62E-025.50E-023.45E+003.83E-014.46E-011.80E-013.64E-012.84E-012.11E-014.14E+001.65E-01
F142.17E-012.89E-012.57E-013.08E+013.96E-013.36E-012.21E-013.58E-013.47E-012.91E-013.84E+012.77E-01
F155.13E-015.43E-019.64E-011.53E+045.65E+006.07E+001.27E+002.11E+001.43E+001.88E+002.11E+031.43E+00
F161.72E+001.77E+001.91E+003.75E+003.82E+003.32E+003.04E+003.12E+003.41E+002.64E+003.45E+002.27E+00
F171.75E+011.09E+021.75E+036.90E+051.18E+035.67E+044.69E+035.03E+034.48E+044.31E+021.87E+051.25E+02
F184.88E-012.66E+004.04E+032.79E+051.22E+022.10E+041.33E+044.43E+031.02E+043.71E+011.39E+041.39E+01
F198.42E-017.00E-011.71E+003.56E+016.34E+005.78E+002.45E+002.31E+002.60E+002.61E+002.80E+011.34E+00
F204.87E-014.82E-011.24E+022.50E+053.61E+025.19E+031.79E+028.44E+029.72E+032.48E+014.79E+037.25E+00
F216.36E-011.59E+002.86E+027.94E+055.80E+027.03E+043.18E+033.64E+031.88E+032.99E+026.06E+041.79E+01
F222.81E+003.22E+001.76E+013.68E+022.59E+021.05E+021.14E+021.07E+024.09E+012.65E+011.80E+023.74E+01
F233.20E+023.29E+023.29E+024.10E+022.00E+023.34E+023.29E+023.29E+023.34E+022.00E+022.00E+023.29E+02
F241.09E+021.08E+021.25E+022.01E+021.84E+021.74E+021.25E+021.40E+021.34E+021.66E+021.87E+021.12E+02
F251.43E+021.51E+021.88E+022.00E+022.00E+021.93E+021.86E+021.93E+021.98E+021.82E+022.00E+021.59E+02
F261.00E+021.00E+021.00E+021.05E+021.00E+021.00E+021.00E+021.00E+021.00E+021.00E+021.06E+021.00E+02
F271.53E+022.59E+022.80E+024.81E+022.00E+024.10E+023.39E+023.59E+023.72E+021.02E+025.69E+012.37E+02
F283.79E+023.78E+023.89E+021.06E+032.00E+025.54E+024.53E+024.82E+023.90E+022.00E+024.71E+024.18E+02
F292.25E+023.11E+023.77E+051.22E+065.18E+021.14E+031.97E+051.11E+031.15E+033.82E+022.23E+032.29E+02
F304.81E+025.01E+027.32E+023.14E+041.66E+032.05E+031.05E+031.71E+039.80E+028.08E+022.08E+035.89E+02
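The errors in Table 16 measure the distance from the known CEC2014 optimum, which for function Fi is the bias value 100·i (for example, the F6 optimum is 600, so a mean fitness of 600.367 gives an error of 3.67E-01, consistent with the Mean rows in the statistical tables). A minimal sketch of this measurement:

```python
def cec2014_error(function_index: int, achieved_fitness: float) -> float:
    """Error = achieved fitness minus the known CEC2014 optimum (100 * i for function Fi)."""
    optimum = 100.0 * function_index
    return achieved_fitness - optimum

# F6: a mean fitness of 600.367 corresponds to an error of 0.367
print(round(cec2014_error(6, 600.367), 3))  # → 0.367
```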
Table 17. Statistical results of HPDE over 51 independent runs on the CEC2014 (F1–F15) functions with 30 agents and 1000 FEs (second set of optimizers).
FunAnalysisHPDERTHSHIOChOACOAOHOChOASCAPSOSHOSDEWOARSA
F1Mean1.13E+021.00E+026.91E+064.55E+059.18E+043.94E+081.76E+078.59E+064.83E+061.17E+071.08E+021.83E+031.17E+08
Std2.47E+012.46E-013.65E+062.21E+056.45E+041.73E+081.26E+074.47E+062.16E+064.40E+064.98E+001.89E+035.24E+07
SEM7.80E+007.79E-021.15E+067.00E+042.04E+045.46E+073.98E+061.41E+066.84E+051.39E+061.58E+005.97E+021.66E+07
Rank31865131197102412
F2Mean2.00E+022.00E+029.30E+073.76E+022.17E+037.14E+094.08E+095.90E+083.48E+083.19E+065.28E+022.08E+026.46E+09
Std9.06E-111.28E-112.94E+081.36E+022.22E+031.17E+092.17E+092.65E+084.37E+083.61E+063.09E+022.39E+019.90E+08
SEM2.87E-114.04E-129.30E+074.29E+017.02E+023.70E+086.87E+088.39E+071.38E+081.14E+069.77E+017.55E+003.13E+08
Rank21846131110975312
F3Mean3.00E+023.00E+021.10E+047.34E+035.17E+031.24E+042.01E+043.28E+036.82E+036.14E+033.00E+023.00E+028.75E+03
Std0.00E+009.72E-113.60E+032.75E+033.65E+031.56E+035.28E+031.93E+034.56E+033.19E+031.59E-012.10E-023.33E+03
SEM0.00E+003.07E-111.14E+038.71E+021.15E+034.95E+021.67E+036.12E+021.44E+031.01E+035.02E-026.64E-031.05E+03
Rank12119612135874310
F4Mean4.21E+024.25E+024.44E+024.35E+024.18E+023.23E+031.22E+034.58E+024.21E+024.59E+024.05E+024.28E+021.18E+03
Std1.10E+011.61E+013.15E+012.71E-011.73E+011.10E+036.51E+021.40E+011.30E+013.44E+011.07E+011.47E+014.94E+02
SEM3.47E+005.11E+009.95E+008.57E-025.49E+003.47E+022.06E+024.43E+004.11E+001.09E+013.39E+004.64E+001.56E+02
Rank45872131293101611
F5Mean5.18E+025.20E+025.20E+025.20E+025.20E+025.20E+025.20E+025.20E+025.20E+025.20E+025.20E+025.20E+025.20E+02
Std6.33E+001.23E-017.20E-021.75E-016.99E-028.16E-023.18E-028.56E-021.13E-013.35E-027.16E-028.13E-029.87E-02
SEM2.00E+003.89E-022.28E-025.53E-022.21E-022.58E-021.01E-022.71E-023.56E-021.06E-022.26E-022.57E-023.12E-02
Rank15137394121121068
F6Mean6.00E+026.06E+026.04E+026.00E+026.05E+026.11E+026.09E+026.07E+026.05E+026.05E+026.02E+026.04E+026.10E+02
Std6.19E-011.44E+001.45E+007.55E-012.46E+003.97E-011.17E+001.42E+001.06E+001.43E+001.02E+001.35E+008.94E-01
SEM1.96E-014.55E-014.59E-012.39E-017.77E-011.25E-013.71E-014.50E-013.34E-014.52E-013.24E-014.26E-012.83E-01
Rank19527131110683412
F7Mean7.00E+027.00E+027.01E+027.00E+027.00E+028.98E+027.86E+027.11E+027.06E+027.07E+027.00E+027.00E+027.84E+02
Std1.24E-021.24E-011.05E+002.76E-013.41E-025.78E+013.26E+013.02E+004.21E+009.33E+009.45E-021.80E-012.58E+01
SEM3.92E-033.92E-023.30E-018.74E-021.08E-021.83E+011.03E+019.55E-011.33E+002.95E+002.99E-025.70E-028.17E+00
Rank13762131210894511
F8Mean8.00E+028.19E+028.27E+028.19E+028.03E+028.70E+028.33E+028.42E+028.18E+028.08E+028.09E+028.08E+028.72E+02
Std1.10E-097.59E+009.20E+001.07E+012.60E+004.88E+001.08E+015.99E+008.66E+001.05E+015.19E+002.93E+006.74E+00
SEM3.48E-102.40E+002.91E+003.39E+008.21E-011.54E+003.41E+001.89E+002.74E+003.33E+001.64E+009.28E-012.13E+00
Rank18972121011635413
F9Mean9.03E+029.29E+029.18E+029.14E+029.40E+029.63E+029.39E+029.44E+029.32E+029.27E+029.09E+029.19E+029.59E+02
Std1.17E+001.16E+018.79E+001.15E+011.22E+013.79E+001.32E+015.60E+009.63E+004.13E+003.50E+007.18E+004.03E+00
SEM3.71E-013.67E+002.78E+003.65E+003.85E+001.20E+004.17E+001.77E+003.05E+001.31E+001.11E+002.27E+001.27E+00
Rank17431013911862512
F10Mean1.04E+031.62E+031.43E+031.81E+031.51E+032.43E+031.62E+032.00E+031.52E+031.26E+031.67E+031.21E+032.20E+03
Std8.95E+002.36E+022.56E+023.19E+023.88E+025.68E+012.78E+022.14E+022.75E+021.44E+022.87E+021.50E+021.59E+02
SEM2.83E+007.46E+018.09E+011.01E+021.23E+021.80E+018.79E+016.76E+018.69E+014.55E+019.09E+014.76E+015.04E+01
Rank18410513711639212
F11Mean1.24E+031.86E+031.90E+031.71E+031.79E+032.52E+031.79E+032.45E+031.64E+031.70E+031.93E+031.88E+032.53E+03
Std1.16E+023.56E+023.89E+024.32E+023.48E+021.58E+023.37E+021.20E+022.43E+022.75E+023.13E+023.68E+021.64E+02
SEM3.65E+011.13E+021.23E+021.36E+021.10E+024.99E+011.07E+023.78E+017.67E+018.71E+019.88E+011.16E+025.19E+01
Rank17945126112310813
F12Mean1.20E+031.20E+031.20E+031.20E+031.20E+031.20E+031.20E+031.20E+031.20E+031.20E+031.20E+031.20E+031.20E+03
Std4.00E-022.06E-013.08E-018.72E-013.21E-012.64E-011.62E-012.32E-014.25E-011.22E-012.12E-011.96E-013.62E-01
SEM1.26E-026.50E-029.74E-022.76E-011.02E-018.35E-025.14E-027.33E-021.35E-013.86E-026.70E-026.18E-021.15E-01
Rank13485137119212610
F13Mean1.30E+031.30E+031.30E+031.30E+031.30E+031.30E+031.30E+031.30E+031.30E+031.30E+031.30E+031.30E+031.30E+03
Std1.65E-021.39E-017.98E-022.63E-027.14E-028.52E-017.03E-017.17E-024.62E-021.44E-014.43E-023.21E-021.03E+00
SEM5.22E-034.39E-022.52E-028.31E-032.26E-022.69E-012.22E-012.27E-021.46E-024.56E-021.40E-021.01E-023.26E-01
Rank18624131110793512
F14Mean1.40E+031.40E+031.40E+031.40E+031.40E+031.44E+031.41E+031.40E+031.40E+031.40E+031.40E+031.40E+031.41E+03
Std3.81E-022.61E-012.23E-017.08E-022.03E-017.27E+006.22E+002.62E-012.08E-015.18E-014.14E-027.54E-025.84E+00
SEM1.20E-028.24E-027.06E-022.24E-026.42E-022.30E+001.97E+008.29E-026.57E-021.64E-011.31E-022.38E-021.85E+00
Rank37624131210891511
F15Mean1.50E+031.50E+031.50E+031.50E+031.50E+035.76E+031.66E+031.51E+031.51E+031.50E+031.50E+031.50E+033.73E+03
Std1.10E-011.36E+002.53E+008.91E-011.41E-012.21E+036.72E+011.13E+011.13E+019.32E-013.84E-011.05E+001.55E+03
SEM3.48E-024.30E-018.01E-012.82E-014.45E-027.00E+022.13E+013.57E+003.58E+002.95E-011.21E-013.31E-014.89E+02
Rank16832131110974512
Table 18. Statistical results of HPDE over 51 independent runs on the CEC2014 (F16–F30) functions with 30 agents and 1000 FEs (second set of optimizers).
FunctionAnalysisHPDERTHSHIOChOACOAOHOChOASCAPSOSHOSDEWOARSA
F16Mean1.60E+031.60E+031.60E+031.60E+031.60E+031.60E+031.60E+031.60E+031.60E+031.60E+031.60E+031.60E+031.60E+03
Std4.41E-014.33E-016.41E-013.78E-015.77E-011.69E-012.10E-012.61E-014.40E-014.82E-013.98E-013.32E-011.95E-01
SEM1.39E-011.37E-012.03E-011.20E-011.82E-015.36E-026.65E-028.25E-021.39E-011.53E-011.26E-011.05E-016.18E-02
Rank18942131110563712
F17Mean1.72E+032.28E+034.11E+043.49E+034.56E+033.93E+051.52E+052.70E+041.06E+054.63E+041.87E+032.19E+035.07E+05
Std1.46E+013.20E+021.10E+058.49E+021.70E+034.23E+041.03E+052.35E+041.60E+055.13E+047.15E+012.47E+021.04E+05
SEM4.62E+001.01E+023.49E+042.68E+025.36E+021.34E+043.27E+047.44E+035.06E+041.62E+042.26E+017.81E+013.30E+04
Rank14856121171092313
F18Mean1.80E+031.86E+031.28E+049.56E+033.27E+032.20E+049.17E+031.43E+041.30E+049.19E+031.83E+031.87E+031.67E+05
Std3.50E-012.82E+019.23E+032.91E+031.72E+031.30E+046.91E+034.12E+038.36E+033.16E+031.18E+015.96E+014.38E+05
SEM1.11E-018.93E+002.92E+039.19E+025.45E+024.10E+032.19E+031.30E+032.64E+031.00E+033.73E+001.88E+011.38E+05
Rank13985126111072413
F19Mean1.90E+031.90E+031.90E+031.90E+031.90E+031.94E+031.92E+031.91E+031.90E+031.90E+031.90E+031.90E+031.92E+03
Std4.37E-011.27E+001.37E+007.02E-011.66E+002.19E+011.90E+014.86E-017.96E-011.24E+005.94E-011.07E+001.50E+01
SEM1.38E-014.01E-014.33E-012.22E-015.26E-016.94E+006.00E+001.54E-012.52E-013.92E-011.88E-013.37E-014.75E+00
Rank14867131210952311
F20Mean2.00E+032.05E+036.33E+035.79E+032.43E+039.61E+036.93E+034.64E+033.74E+035.87E+032.01E+032.06E+031.05E+04
Std3.86E-014.74E+013.12E+031.58E+032.60E+021.12E+035.61E+032.81E+031.87E+037.97E+025.27E+004.25E+014.33E+03
SEM1.22E-011.50E+019.87E+024.99E+028.24E+013.55E+021.77E+038.89E+025.92E+022.52E+021.67E+001.34E+011.37E+03
Rank13108512117692413
F21Mean2.10E+032.50E+038.02E+036.15E+033.23E+031.05E+061.12E+041.13E+049.42E+039.39E+032.18E+032.27E+032.25E+05
Std1.90E-012.22E+024.44E+031.56E+038.57E+028.64E+051.07E+046.09E+034.05E+033.99E+033.75E+011.77E+021.99E+05
SEM6.02E-027.01E+011.40E+034.94E+022.71E+022.73E+053.38E+031.92E+031.28E+031.26E+031.18E+015.61E+016.30E+04
Rank14765131011982312
F22Mean2.20E+032.31E+032.30E+032.37E+032.23E+032.66E+032.40E+032.28E+032.30E+032.28E+032.23E+032.23E+032.36E+03
Std6.27E+006.30E+016.62E+011.41E+013.47E+018.42E+011.27E+024.14E+015.77E+015.95E+011.60E+001.56E+015.62E+01
SEM1.98E+001.99E+012.09E+014.45E+001.10E+012.66E+014.01E+011.31E+011.82E+011.88E+015.07E-014.92E+001.78E+01
Rank19811413125763210
F23Mean2.62E+032.50E+032.63E+032.63E+032.50E+032.52E+032.50E+032.64E+032.64E+032.61E+032.63E+032.63E+032.50E+03
Std4.79E-130.00E+004.96E+001.29E-040.00E+002.47E+000.00E+002.77E+005.07E+005.68E+012.82E-054.79E-130.00E+00
SEM1.52E-130.00E+001.57E+004.07E-050.00E+007.81E-010.00E+008.76E-011.60E+001.80E+018.91E-061.52E-130.00E+00
Rank71111015113126981
F24Mean2.51E+032.56E+032.55E+032.52E+032.59E+032.60E+032.57E+032.55E+032.54E+032.56E+032.52E+032.53E+032.60E+03
Std1.71E+002.45E+013.80E+014.48E+002.93E+011.59E-012.39E+017.29E+002.54E+012.97E+013.67E+001.19E+018.25E+00
SEM5.41E-017.75E+001.20E+011.42E+009.25E+005.01E-027.57E+002.31E+008.03E+009.41E+001.16E+003.75E+002.61E+00
Rank18631113107592412
F25Mean2.64E+032.69E+032.69E+032.68E+032.70E+032.70E+032.70E+032.69E+032.69E+032.69E+032.64E+032.69E+032.70E+03
Std3.75E+011.64E+011.71E+013.56E+010.00E+007.48E-027.64E-011.84E+019.81E+001.53E+011.56E+011.56E+010.00E+00
SEM1.19E+015.18E+005.41E+001.12E+010.00E+002.36E-022.42E-015.83E+003.10E+004.84E+004.92E+004.92E+000.00E+00
Rank18431113105962711
F26Mean2.70E+032.70E+032.70E+032.70E+032.70E+032.72E+032.70E+032.70E+032.70E+032.70E+032.70E+032.70E+032.70E+03
Std2.17E-022.72E-014.99E-024.01E-022.73E-023.18E+019.31E-011.32E-018.18E-021.61E-013.68E-025.40E-021.58E+00
SEM6.86E-038.59E-021.58E-021.27E-028.62E-031.00E+012.94E-014.16E-022.59E-025.10E-021.17E-021.71E-025.00E-01
Rank19623131110785412
F27Mean2.85E+032.90E+033.01E+033.02E+032.90E+032.95E+032.89E+033.10E+033.01E+033.03E+032.82E+033.01E+032.97E+03
Std1.59E+020.00E+001.66E+021.52E+010.00E+001.06E+013.76E+011.23E+001.61E+021.66E+021.92E+021.65E+021.18E+02
SEM5.04E+010.00E+005.23E+014.79E+000.00E+003.37E+001.19E+013.89E-015.10E+015.25E+016.07E+015.21E+013.72E+01
Rank24911463138121107
F28Mean3.18E+033.00E+033.36E+033.34E+033.00E+033.06E+033.00E+033.25E+033.28E+033.33E+033.18E+033.18E+033.23E+03
Std3.50E+000.00E+001.17E+023.58E+020.00E+001.50E+010.00E+004.11E+019.32E+017.77E+011.14E+018.27E+001.86E+02
SEM1.11E+000.00E+003.69E+011.13E+020.00E+004.75E+000.00E+001.30E+012.95E+012.46E+013.61E+002.62E+005.88E+01
Rank51131214191011678
F29Mean3.13E+033.18E+031.76E+053.88E+033.15E+051.47E+074.75E+041.43E+041.77E+053.69E+033.13E+033.18E+031.96E+05
Std5.71E+004.30E+015.45E+054.84E+029.86E+051.59E+061.37E+051.01E+045.45E+057.67E+024.32E+005.45E+014.02E+05
SEM1.81E+001.36E+011.72E+051.53E+023.12E+055.04E+054.32E+043.18E+031.72E+052.43E+021.37E+001.72E+011.27E+05
Rank13961213871052411
F30Mean3.48E+033.93E+034.90E+035.33E+033.63E+031.51E+051.77E+044.53E+034.43E+035.02E+033.50E+033.77E+031.71E+04
Std2.26E+012.13E+028.42E+022.23E+025.10E+021.80E+052.70E+044.68E+026.13E+024.39E+024.16E+012.92E+021.91E+04
SEM7.15E+006.75E+012.66E+027.06E+011.61E+025.68E+048.53E+031.48E+021.94E+021.39E+021.32E+019.24E+016.03E+03
Rank15810313127692411
Table 19. Error measurement comparison of HPDE with the second set of optimizers over the CEC2014 (F1–F30) benchmark functions.
FunHPDERTHSHIOChOACOAOHOChOASCAPSOSHOSDEWOARSA
F11.32E+011.12E-016.91E+064.55E+059.17E+043.94E+081.76E+078.59E+064.83E+061.17E+078.23E+001.73E+031.17E+08
F23.38E-111.01E-119.30E+071.76E+021.97E+037.14E+094.08E+095.90E+083.48E+083.19E+063.28E+027.59E+006.46E+09
F30.00E+007.13E-111.07E+047.04E+034.87E+031.21E+041.98E+042.98E+036.52E+035.84E+031.16E-011.11E-028.45E+03
F42.13E+012.48E+014.44E+013.51E+011.84E+012.83E+038.22E+025.84E+012.11E+015.92E+014.80E+002.78E+017.78E+02
F51.80E+012.01E+012.04E+012.03E+012.01E+012.04E+012.01E+012.04E+012.04E+012.00E+012.04E+012.01E+012.04E+01
F63.67E-015.69E+004.48E+004.74E-014.66E+001.06E+018.78E+006.54E+004.63E+005.27E+002.43E+003.68E+009.61E+00
F71.03E-022.57E-011.39E+003.59E-018.87E-021.98E+028.60E+011.08E+016.06E+007.22E+002.72E-013.56E-018.39E+01
F83.93E-101.93E+012.66E+011.87E+012.69E+007.02E+013.28E+014.21E+011.79E+018.17E+009.13E+008.39E+007.23E+01
F92.57E+002.85E+011.81E+011.38E+014.02E+016.29E+013.88E+014.38E+013.17E+012.66E+019.48E+001.93E+015.87E+01
F103.57E+016.25E+024.35E+028.08E+025.14E+021.43E+036.19E+021.00E+035.15E+022.63E+026.73E+022.14E+021.20E+03
F111.43E+027.58E+027.97E+026.09E+026.85E+021.42E+036.93E+021.35E+035.43E+025.98E+028.31E+027.84E+021.43E+03
F121.68E-012.92E-013.29E-018.60E-013.80E-011.38E+004.50E-011.20E+001.06E+002.57E-011.22E+004.42E-011.09E+00
F134.30E-023.39E-012.10E-019.07E-021.82E-014.61E+001.91E+006.04E-012.69E-013.92E-011.82E-011.99E-013.46E+00
F142.17E-014.40E-013.65E-011.93E-012.62E-013.89E+011.50E+019.65E-015.34E-017.13E-011.35E-012.68E-011.37E+01
F155.13E-012.45E+003.39E+001.25E+001.20E+004.26E+031.56E+021.05E+016.36E+003.34E+001.87E+001.94E+002.23E+03
F161.72E+003.15E+003.24E+002.85E+002.52E+003.84E+003.50E+003.28E+002.90E+003.05E+002.68E+003.10E+003.78E+00
F171.75E+015.79E+023.94E+041.79E+032.86E+033.92E+051.50E+052.53E+041.04E+054.46E+041.69E+024.94E+025.05E+05
F184.88E-016.34E+011.10E+047.76E+031.47E+032.02E+047.37E+031.25E+041.12E+047.39E+033.31E+017.39E+011.65E+05
F198.42E-012.98E+003.63E+003.13E+003.21E+003.58E+012.42E+015.16E+003.65E+003.13E+001.51E+002.22E+001.68E+01
F204.87E-014.63E+014.33E+033.79E+034.25E+027.61E+034.93E+032.64E+031.74E+033.87E+031.36E+015.96E+018.49E+03
F216.36E-013.98E+025.92E+034.05E+031.13E+031.04E+069.09E+039.20E+037.32E+037.29E+038.49E+011.71E+022.22E+05
F222.81E+001.08E+021.02E+021.65E+023.17E+014.56E+021.99E+027.56E+019.98E+017.75E+012.77E+012.65E+011.56E+02
F233.20E+022.00E+023.34E+023.29E+022.00E+022.18E+022.00E+023.40E+023.40E+023.07E+023.29E+023.29E+022.00E+02
F241.09E+021.57E+021.49E+021.16E+021.86E+022.01E+021.75E+021.54E+021.38E+021.58E+021.15E+021.34E+021.96E+02
F251.43E+021.95E+021.91E+021.76E+022.00E+022.00E+022.00E+021.94E+021.95E+021.95E+021.44E+021.95E+022.00E+02
F261.00E+021.00E+021.00E+021.00E+021.00E+021.19E+021.02E+021.01E+021.00E+021.00E+021.00E+021.00E+021.03E+02
F271.53E+022.00E+023.11E+023.15E+022.00E+022.47E+021.88E+024.04E+023.08E+023.28E+021.22E+023.14E+022.73E+02
F283.79E+022.00E+025.61E+025.38E+022.00E+022.56E+022.00E+024.48E+024.76E+025.30E+023.80E+023.81E+024.31E+02
F292.25E+022.75E+021.73E+059.82E+023.12E+051.47E+074.46E+041.14E+041.74E+057.85E+022.29E+022.77E+021.94E+05
F304.81E+029.27E+021.90E+032.33E+036.27E+021.48E+051.47E+041.53E+031.43E+032.02E+034.96E+027.73E+021.41E+04
Table 20. Average ranking of 12 optimizers using Friedman's test based on CEC2014.
Algorithm   Rank
APO         3.12654239
ChOA        4.74218367
GWO         5.36894271
FOX         2.84519632
WOA         6.95126748
MVO         7.41652389
DOA         8.02756324
MFO         5.84372916
RIME        4.21537894
DBO         6.32481905
WSO         9.53762143
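The Friedman average ranks in Table 20 are obtained by ranking all optimizers on each benchmark function and then averaging the per-function ranks. A minimal sketch with hypothetical scores (lower is better; note this sketch breaks ties by dictionary order rather than assigning averaged ranks, unlike the full Friedman procedure):

```python
def friedman_average_ranks(scores):
    """scores: dict mapping algorithm name -> list of per-function results (lower is better).
    Returns dict mapping algorithm name -> average rank across all functions."""
    algos = list(scores)
    n_funcs = len(next(iter(scores.values())))
    totals = {a: 0.0 for a in algos}
    for f in range(n_funcs):
        # Rank the algorithms on function f: best (lowest) result gets rank 1
        ordered = sorted(algos, key=lambda a: scores[a][f])
        for rank, a in enumerate(ordered, start=1):
            totals[a] += rank
    return {a: totals[a] / n_funcs for a in algos}

# Hypothetical per-function errors for three optimizers on four functions
scores = {"A": [1.0, 2.0, 1.5, 1.0], "B": [2.0, 1.0, 2.5, 3.0], "C": [3.0, 3.0, 0.5, 2.0]}
print(friedman_average_ranks(scores))
```

For the associated significance test, scipy.stats.friedmanchisquare can be applied to the same per-function results.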
Table 21. Accuracy comparison of HPDE over the CEC2014 (F1–F30) benchmark functions.
Function   HPDE Error   Worst Error   HPDE Accuracy (1 - HPDE Error/Worst Error)
F1    1.32E+01    4.76E+07    1 - 1.32E+01/4.76E+07 = 0.999999723
F2    3.38E-11    7.95E+09    1 - 3.38E-11/7.95E+09 = 1.0
F3    0.00E+00    4.14E+04    1 - 0.00E+00/4.14E+04 = 1.0
F4    2.13E+01    1.46E+03    1 - 2.13E+01/1.46E+03 = 0.985
F5    1.80E+01    2.04E+01    1 - 1.80E+01/2.04E+01 = 0.1176
F6    3.67E-01    1.01E+01    1 - 3.67E-01/1.01E+01 = 0.9636
F7    1.03E-02    1.88E+02    1 - 1.03E-02/1.88E+02 = 0.9999451
F8    3.93E-10    9.17E+01    1 - 3.93E-10/9.17E+01 = 1.0
F9    2.57E+00    7.11E+01    1 - 2.57E+00/7.11E+01 = 0.9638
F10   3.57E+01    1.55E+03    1 - 3.57E+01/1.55E+03 = 0.9769
F11   1.43E+02    1.56E+03    1 - 1.43E+02/1.56E+03 = 0.9083
F12   1.68E-01    1.55E+00    1 - 1.68E-01/1.55E+00 = 0.8916
F13   4.30E-02    4.14E+00    1 - 4.30E-02/4.14E+00 = 0.9896
F14   2.17E-01    3.84E+01    1 - 2.17E-01/3.84E+01 = 0.9943
F15   5.13E-01    1.53E+04    1 - 5.13E-01/1.53E+04 = 0.9999664
F16   1.72E+00    3.82E+00    1 - 1.72E+00/3.82E+00 = 0.5497
F17   1.75E+01    6.90E+05    1 - 1.75E+01/6.90E+05 = 0.9999746
F18   4.88E-01    2.79E+05    1 - 4.88E-01/2.79E+05 = 0.99999825
F19   8.42E-01    3.56E+01    1 - 8.42E-01/3.56E+01 = 0.9763
F20   4.87E-01    2.50E+05    1 - 4.87E-01/2.50E+05 = 0.99999805
F21   6.36E-01    7.94E+05    1 - 6.36E-01/7.94E+05 = 0.99999920
F22   2.81E+00    3.68E+02    1 - 2.81E+00/3.68E+02 = 0.9924
F23   3.20E+02    4.10E+02    1 - 3.20E+02/4.10E+02 = 0.2195
F24   1.09E+02    2.01E+02    1 - 1.09E+02/2.01E+02 = 0.4567
F25   1.43E+02    2.00E+02    1 - 1.43E+02/2.00E+02 = 0.2850
F26   1.00E+02    1.06E+02    1 - 1.00E+02/1.06E+02 = 0.0566
F27   1.53E+02    4.81E+02    1 - 1.53E+02/4.81E+02 = 0.6820
F28   3.79E+02    1.06E+03    1 - 3.79E+02/1.06E+03 = 0.6435
F29   2.25E+02    1.22E+06    1 - 2.25E+02/1.22E+06 = 0.9998157
F30   4.81E+02    3.14E+04    1 - 4.81E+02/3.14E+04 = 0.9847
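The accuracy figures in Table 21 normalize HPDE's error by the worst error recorded among the compared optimizers on the same function, i.e., accuracy = 1 - (HPDE error / worst error). A minimal sketch of this calculation:

```python
def hpde_accuracy(hpde_error: float, worst_error: float) -> float:
    """Accuracy = 1 - (HPDE error / worst competitor error), as defined for Table 21."""
    return 1.0 - hpde_error / worst_error

# F5 from Table 21: 1 - 1.80E+01 / 2.04E+01
print(round(hpde_accuracy(1.80e1, 2.04e1), 4))  # → 0.1176
```

Values close to 1 indicate HPDE's error is negligible relative to the worst competitor; the small values on F5, F23–F26 reflect functions where all optimizers, including the worst, remain near the same error level.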


Fakhouri, H.N.; Hamad, F.; Ishtaiwi, A.; Hudaib, A.; Halalsheh, N.; Fakhouri, S.N. Advancing Engineering Solutions with Protozoa-Based Differential Evolution: A Hybrid Optimization Approach. Automation 2025, 6, 13. https://doi.org/10.3390/automation6020013

