1. Introduction
Optimization is a fundamental task across all scientific and engineering disciplines, where it serves to determine the optimal parameter values within systems to achieve desired outcomes efficiently [1]. As technologies evolve and systems become increasingly complex, optimization challenges have intensified, requiring more sophisticated approaches [2,3]. Traditional methods often face limitations, including susceptibility to convergence on local optima, difficulty in handling high-dimensional or unknown search spaces, and reliance on single-solution search. These challenges highlight the need for advanced algorithms that can navigate complex landscapes more effectively [4].
Metaheuristic algorithms have emerged as powerful tools to address these complex optimization challenges [5]. They provide flexible, high-level strategies designed to explore large and intricate search spaces systematically and efficiently. Unlike traditional optimization techniques, metaheuristics do not rely on gradients or derivatives, which makes them applicable to a wider range of problems, including those that are non-differentiable, discontinuous, or stochastic. This capability is especially critical in real-world applications where the objective functions and constraints may not be precisely defined or may change over time [6].
The development of metaheuristic algorithms is often inspired by natural and social phenomena. Evolutionary Algorithms (EAs) [7] simulate the process of natural selection, where the fittest individuals are selected for reproduction in order to produce offspring of the next generation. Swarm Intelligence (SI) algorithms [8] are inspired by the social behavior of animals, such as birds flocking and fish schooling, and are particularly noted for their robustness and ability to converge rapidly to a good solution. Physics-based methods [9] use metaphors from physics, such as the laws of gravity and motion, to guide the search process, while human-inspired algorithms [10] often simulate human decision-making or social behavior.
These algorithms are particularly valued for their versatility and robustness, allowing them to perform well across a diverse array of problem types and environments. They are capable of balancing exploration (diversifying the search across the global landscape to avoid local optima) with exploitation (intensifying the search around promising areas) [11], which is crucial for achieving near-optimal solutions efficiently. Additionally, the scalability of metaheuristics makes them suitable for solving large-scale problems that are beyond the reach of conventional optimization methods [12].
Moreover, metaheuristics are often hybridized with other optimization techniques to form even more powerful algorithms [13]. For example, combining the global search capability of a metaheuristic with the local search capabilities of a more deterministic method can yield a hybrid algorithm that leverages the strengths of both approaches. This is particularly effective in refining solutions to very high accuracies, which are often required in engineering and industrial applications [14].
Metaheuristic algorithms, inspired by natural phenomena, operate through various mechanisms, yet they fundamentally rely on two core concepts: exploration (diversification) and exploitation (intensification). As stated by Eiben and Schippers, “exploration and exploitation are the two cornerstones of problem-solving by search” [15]. Exploration involves creating diverse solutions to thoroughly investigate the global search space, while exploitation focuses on refining the search around promising areas to find optimal solutions. These two components are inherently conflicting: an emphasis on exploration can weaken exploitation, and vice versa. Despite its importance, achieving the optimal balance between exploration and exploitation remains a challenging task, and no standard procedure for striking this balance has been established [16]. Furthermore, the “No Free Lunch” (NFL) theorem asserts that no single heuristic can consistently outperform others across all problem domains [17]. The theorem implies that the performance of any algorithm averages out across different problems, so an algorithm effective for certain problems may not perform as well for others [18]. This understanding has driven the continuous development and proposal of new algorithms, each seeking to better balance exploration and exploitation for improved optimization performance across varied problem sets [18].
The main research contributions of this paper are as follows:
A new hybrid optimization algorithm named Hybrid Protozoa Differential Evolution (HPDE) is designed, combining the strengths of the Artificial Protozoa Optimizer (APO) and Differential Evolution (DE) to enhance the optimization process.
The HPDE algorithm’s mathematical models are based on the survival behaviors of protozoa and the evolutionary strategies of DE. Specifically, APO’s mechanisms of foraging, dormancy, and reproduction are integrated with DE’s mutation and crossover strategies. Autotrophic foraging and dormancy contribute to exploration, while heterotrophic foraging, reproduction, and DE’s evolutionary strategies enhance exploitation.
The HPDE is implemented and evaluated using unimodal, multimodal, hybrid, and composition functions from the 2022 IEEE Congress on Evolutionary Computation benchmark. Experimental results verify that HPDE outperforms 23 state-of-the-art algorithms.
The effectiveness of HPDE is further demonstrated by addressing challenging real-world problems, including the Robot Gripper and six engineering design problems.
HPDE consistently outperformed leading algorithms across a variety of optimization challenges.
Key results include the following: First, HPDE achieved the lowest error rates across the majority of the CEC2014 benchmark functions, with significant advantages on multimodal and high-dimensional problems. Second, it demonstrated strong robustness and reliability, as evidenced by the low standard deviations and standard errors of its results. Third, it achieved top rankings in engineering design problems, producing optimal or near-optimal solutions that outperformed competitors such as APO, GWO, and WOA.
Motivation of This Work
Recent advancements in metaheuristic optimization have introduced several adaptive algorithms, each with unique mechanisms for enhancing search performance. Among these, the Artificial Protozoa Optimizer (APO) [19] has gained attention due to its biological inspiration drawn from protozoa survival behaviors, including foraging, dormancy, and reproduction. These behaviors enable APO to adapt its search process dynamically, allowing it to balance the exploration and exploitation phases more effectively. Such adaptive mechanisms are not entirely unique to APO; other algorithms also incorporate adaptive behaviors. However, APO’s distinct process of switching between foraging and dormancy phases provides a flexible strategy for avoiding premature convergence and local optima traps, which are common issues in optimization.
One primary motivation for selecting APO lies in its versatility and robustness when tackling complex optimization tasks. Although Differential Evolution (DE) algorithms have exhibited substantial success in a variety of applications, they often face limitations with highly nonlinear and multimodal functions, which can result in slow convergence and suboptimal solutions [20]. To address these shortcomings, we propose a novel hybrid algorithm that combines APO with DE, termed the Hybrid Protozoa Differential Evolution (HPDE). The integration of APO’s adaptive mechanisms with DE’s mutation and crossover strategies aims to enhance the optimizer’s global convergence, allowing it to balance exploration and exploitation more effectively while reducing stagnation in the search process.
Unlike many traditional algorithms, such as the Genetic Algorithm (GA) [21] and Particle Swarm Optimization (PSO) [22], which often suffer from premature convergence [23], APO inherently incorporates adaptive dormancy and foraging behaviors that reduce the problem of premature convergence. While bio-inspired algorithms like the Grey Wolf Optimizer (GWO) [24] and Whale Optimization Algorithm (WOA) [25] have achieved commendable balances between exploration and exploitation, they frequently require extensive parameter tuning, which can be computationally costly and dependent on specific problem characteristics [26]. Additionally, these algorithms often exhibit sensitivity to initial conditions and parameter settings, resulting in inconsistent performance across diverse problem domains [27].
The proposed HPDE algorithm leverages APO’s adaptability to dynamically alter its search behavior, addressing many of the limitations noted in traditional and bio-inspired algorithms. By integrating APO’s adaptive mechanism with DE’s robust search capabilities, HPDE demonstrates an improved convergence rate and enhanced search capability over a range of benchmark functions and engineering design problems.
Table 1 provides a comparative summary of optimization algorithms that, like the bio-inspired APO, draw on natural or biological processes to guide the search process, detailing each algorithm’s main contributions, advantages, and limitations.
Furthermore, while some hybrid algorithms attempt to integrate multiple optimization strategies to enhance performance, they often introduce increased computational complexity and may still struggle to maintain population diversity, thereby risking convergence to suboptimal solutions. Another notable limitation is the inadequate handling of constraints in many algorithms, which restricts their applicability to real-world engineering problems that inherently involve complex, multidimensional constraints [3].
Addressing these gaps necessitates the development of an optimizer that not only leverages the exploratory capabilities of bio-inspired algorithms but also incorporates robust mechanisms for exploitation to refine solutions effectively. To address these challenges, we propose the Hybrid Artificial Protozoa Optimizer combined with Differential Evolution (HPDE). This hybrid approach synergistically integrates the Artificial Protozoa Optimizer (APO) [19], with its diverse foraging and reproduction behaviors, and DE, renowned for its efficient mutation and crossover operations that bolster exploitation. By combining these strengths, HPDE aims to achieve a good balance between exploration and exploitation, reduce the likelihood of premature convergence, and maintain population diversity more effectively. Additionally, the hybrid framework is designed to handle both continuous and discrete optimization problems with constraints, thereby broadening its applicability to a wider range of real-world scenarios. Through this integration, HPDE aspires to overcome the limitations observed in existing algorithms, offering enhanced convergence speed, improved solution quality, and greater robustness across diverse optimization landscapes.
The primary advantage of our proposed work (HPDE) compared with other optimization algorithms lies in its enhanced ability to balance exploration and exploitation during the search process. By integrating the bio-inspired mechanisms of the APO with the robust mutation and crossover operations of DE, our method leverages the strengths of both algorithms to achieve good performance. Specifically, the HPDE demonstrates improved convergence speed and solution quality, as evidenced by its performance on the CEC2014 benchmark suite consisting of 30 functions. Comparative experiments show that the HPDE consistently outperforms the compared state-of-the-art algorithms, including ChOA, GWO, FOX, WOA, MVO, DOA, MFO, RIME, DBO, WSO, RTH, SHIO, COA, OHO, SCA, PSO, SHO, SDE, and RSA. The hybrid approach effectively combines the exploratory capabilities of APO with the exploitative efficiency of DE, resulting in an optimizer that is robust across diverse optimization problems, including continuous and discrete spaces with constraints.
2. Literature Review
In the diverse landscape of computational optimization, a plethora of innovative metaheuristic algorithms have been developed, each drawing inspiration from various natural behaviors and phenomena. These algorithms, designed to solve complex optimization problems, emulate the adaptive strategies of animals, the dynamic interactions of social insects, and the physical principles observed in the natural world.
Evolution and genetics-inspired optimizers include Evolution Strategies (ES) [44] and Genetic Programming (GP) [45]. These algorithms are based on the principles of biological evolution and genetic variation, respectively, simulating the process of natural selection and genetic operations to optimize complex systems [46].
Moreover, the bird-inspired optimizers are represented by the Falcon Optimization Algorithm (FOA) [47], Greylag Goose Optimization (GGO) [48], Northern Goshawk Optimization (NGO) [49], and Artificial Hummingbird Algorithm (AHA) [50]. Furthermore, the aquatic animal-inspired optimizers feature the Beluga Whale Optimization (BWO) [51], which is inspired by the social and hunting behaviors of beluga whales, and the Jellyfish Search (JS) [52], which mimics the passive drifting mechanism of jellyfish. Additionally, the Whale Optimization Algorithm (WOA) [25] models the bubble-net hunting strategy of humpback whales.
In addition, the plant-inspired optimizer includes the Invasive Weed Optimization algorithm (IWO) [53], which models the spreading and reproductive strategies of invasive weed species, adapting these strategies to solve optimization problems. On the other hand, the disease model-inspired optimizer is the Liver Cancer Algorithm (LCA) [54], which draws from the growth patterns and characteristics of liver cancer cells to develop robust search mechanisms in optimization landscapes.
Moreover, the mammal-inspired optimizers feature the Puma Optimizer (PO) [55], (BMO) [56], Grey Wolf Optimizer (GWO) [24], Adaptive Fox Optimization (AFO) [57], and Honey Badger Algorithm (HBA) [58]. Insect-inspired optimizers include Ant Colony Optimization (ACO) [59], which emulates the pheromone-laying and path-finding behavior of ants; Aphid–Ant Mutualism (AAM) [60], which models the mutualistic relationship between aphids and ants; and Artificial Bee Colony (ABC) [61], inspired by the foraging behavior of honeybees. The Ant Lion Optimizer (ALO) [62] draws inspiration from the predatory behavior of antlions, which build traps to capture prey. Physics-based optimizers include the Artificial Electric Field Algorithm (AEFA) [63], Black Hole Algorithm (BH) [64], Electromagnetic Field Optimization (EFO) [65], and Gravitational Search Algorithm (GSA) [66]. These algorithms apply principles from physics, such as electric fields, black hole dynamics, electromagnetism, and gravitational forces, to guide the search process in optimization tasks.
Primate-inspired optimizers include the Artificial Gorilla Troops Optimizer (GTO) [67], which mimics the social structure and collaborative behavior of gorilla troops in their natural habitat. In addition, the activity- and sport-inspired optimizers feature the Alpine Skiing Optimization (ASO) [68], which draws inspiration from the strategic and dynamic movements involved in alpine skiing. Furthermore, the Swarm Intelligence optimizers include Particle Swarm Optimization (PSO) [69], which emulates the social behavior of birds and fish, adapting it to solve optimization problems effectively.
In addition, the natural process and physics-based optimizers feature the Snow Ablation Optimizer (SAO) [70], String Theory Algorithm (STA) [71], Water Cycle Algorithm (WCA) [72], Atom Search Optimization (ASO) [73], Chemical Reaction Optimization (CRO) [74], and Thermal Exchange Optimization (TEO) [75]. The fitness and distance-inspired optimizers include Distance-Fitness Learning (DFL) [76], which leverages the correlation between distance and fitness to inform optimization. Materials and chemical structure-based optimizers include the Crystal Structure Algorithm (CryStAl) [77], Equilibrium Optimizer (EO) [78], Henry Gas Solubility Optimization (HGSO) [79], and Nuclear Reaction Optimization (NRO) [80].
The Farmland Fertility Algorithm (FFA) is inspired by the process of enhancing soil fertility in agriculture, modeling optimization as a balance between exploration and exploitation to improve solutions iteratively [81]. The African Vultures Optimization Algorithm (AVOA) mimics the collaborative hunting strategies of vultures, leveraging their unique soaring and keen sight to focus on promising regions in the search space [82]. The Mountain Gazelle Optimizer (MGO) is based on the fast and evasive movements of gazelles in mountainous terrain, using their speed and agility to avoid local optima while searching for better solutions [83]. The Artificial Gorilla Troops Optimizer (GTO) imitates the social intelligence and group behaviors of gorillas, balancing leadership and individual contributions to enhance convergence toward optimal solutions.
In addition, Differential Evolution (DE) algorithms have seen extensive development, especially in addressing complex engineering optimization problems. Researchers have thus developed various DE enhancements through hybrid methodologies, advanced constraint-handling mechanisms, and adaptive control strategies, some of which are listed below.
Cantú et al. [84] examined constraint-handling techniques for DE, particularly suited for process engineering problems characterized by non-convex and discontinuous constraints. This study demonstrated that the gradient-based repair technique was highly effective, underscoring the importance of appropriate constraint-handling mechanisms in optimizing DE’s performance in complex, constrained environments.
Nguyen et al. [85] introduced a Classification-assisted Differential Evolution (CaDE) approach, where an AdaBoost classifier discards infeasible solutions early in the evaluation process. By reducing unnecessary fitness evaluations, this machine learning integration enhances DE’s computational efficiency, making it particularly valuable for constrained engineering tasks that benefit from such strategic filtering.
Kizilay et al. [86] presented a Q-Learning-assisted DE variant (DE–QL), which dynamically adapts mutation strategies and crossover rates. By integrating Q-Learning as a guiding mechanism, DE–QL effectively balances exploration within feasible and infeasible regions, demonstrating its utility for constrained engineering problems that require nuanced constraint navigation.
Samal et al. [87] developed a Modified Differential Evolution (MDE) that dynamically adjusts the scaling factor and crossover ratio, improving DE’s performance on multimodal functions. Experimental results show that MDE outperforms well-established optimization algorithms, such as Particle Swarm Optimization (PSO) and Cuckoo Search, emphasizing the effectiveness of adaptive parameter tuning in navigating complex, multimodal landscapes.
Zhang et al. [88] proposed a hybrid Cuckoo Search and Differential Evolution algorithm (CSDE) to address premature convergence issues by dividing the population into subgroups, allowing Cuckoo Search and DE to operate independently while sharing information. This hybrid model exemplifies how combining complementary optimization techniques can improve convergence rates and solution accuracy, especially in engineering problems requiring a balance between global and local search.
Dragoi and Curteanu [89] provided an extensive review of DE applications in chemical engineering, highlighting DE’s versatility in handling varied constraints and objectives within model and process optimization. This review underscores DE’s potential across engineering fields and its adaptability to optimize complex processes.
In a similar context, Zuo and Gao [90] introduced a dual-population DE variant, ADPDE, where a dynamic population division based on individual potential enhances convergence speed and solution accuracy. By adopting distinct mutation strategies within subgroups, ADPDE achieves robust performance on constrained optimization tasks, demonstrating the utility of population management in DE.
Tang and Wang [91] extended DE’s capability by integrating it into a Whale Optimization Algorithm (WOAAD) based on an atom-like structure. WOAAD’s design incorporates DE-inspired modifications, effectively addressing local convergence issues, which proves advantageous for optimizing engineering design problems requiring precision and adaptive convergence.
Alshinwan et al. [92] proposed a hybrid approach, combining Prairie Dog Optimization with DE (PDO–DE), demonstrating its applicability in both engineering design and network intrusion detection. By enhancing PDO’s search mechanisms with DE’s mutation and crossover operators, PDO–DE strikes an effective balance between exploration and exploitation, underscoring DE’s adaptability across different optimization domains.
De Melo and Carosio [93] explored a Multi-View Differential Evolution (MVDE) approach, in which multiple mutation strategies are applied to generate different population views at each iteration. A winner-takes-all approach merges these views, balancing exploration and exploitation effectively. MVDE’s competitive performance on constrained engineering problems illustrates the benefits of combining varied mutation strategies within DE.
In advancing mutation schemes, Mohamed et al. [94] proposed Enhanced Directed Differential Evolution (EDDE), an algorithm that utilizes both high-quality and low-quality population members to balance exploration and exploitation. EDDE’s robustness and efficiency in constrained domains highlight the importance of mutation scheme customization in optimizing DE’s performance.
Zeng et al. [95] developed a Competitive Mechanism-Integrated Multi-Objective Whale Optimization Algorithm with Differential Evolution (CMWOA) for multi-objective optimization. By incorporating a competitive mechanism, CMWOA uses a refined crowding distance calculation for improved population density depiction and guides population updates more effectively. DE’s adaptive parameters further diversify the population, with testing on multiple benchmark functions demonstrating CMWOA’s ability to outperform other methods in convergence and accuracy. CMWOA’s application to real-world problems further verifies its practicality in diverse optimization settings, highlighting DE’s flexibility in hybrid configurations.
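All of the variants surveyed above build on the same canonical DE loop. The following minimal sketch of DE/rand/1/bin is ours, for illustration only (function and parameter names are not taken from any cited work; the cited variants modify the mutation, crossover, or parameter-control steps): it performs differential mutation v = a + F·(b − c), binomial crossover with a guaranteed crossover dimension, and greedy one-to-one selection.

```python
import random

def de_rand_1_bin(obj, bounds, pop_size=20, F=0.8, CR=0.9, max_iter=200, seed=0):
    """Minimal DE/rand/1/bin sketch: minimize obj over box constraints `bounds`."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [obj(x) for x in pop]
    for _ in range(max_iter):
        for i in range(pop_size):
            # Pick three distinct donor vectors, all different from the target i.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # ensures at least one mutated component
            trial = []
            for j, (lo, hi) in enumerate(bounds):
                if j == jrand or rng.random() < CR:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])  # differential mutation
                    v = min(max(v, lo), hi)                      # clamp to bounds
                else:
                    v = pop[i][j]                                # inherit from target
                trial.append(v)
            f_trial = obj(trial)
            if f_trial <= fit[i]:  # greedy selection keeps the better vector
                pop[i], fit[i] = trial, f_trial
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```

On a 5-dimensional sphere function, for example, `de_rand_1_bin(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 5)` converges to near zero within a few thousand evaluations.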
Despite the clear success of these strategies in tackling diverse optimization tasks, several deficiencies remain. Many existing approaches confront the problem of stagnation in local minima, often due to inadequate exploration mechanisms or static parameter settings. Furthermore, hybrid methods sometimes lack well-defined theoretical guidelines for combining distinct search strategies or for adjusting parameters adaptively, thereby limiting scalability and robustness in more complex or higher-dimensional tasks. Real-world engineering applications, in particular, can pose dynamic constraints that many algorithms are neither explicitly designed to handle nor systematically tested against. These gaps motivate the introduction of improved hybrid and bio-inspired metaheuristics that pursue a better exploration–exploitation trade-off, integrate adaptive parameter tuning, and scale effectively to large, constrained, or time-critical problems.
In this context, the present study proposes a new hybrid optimization method, termed Hybrid Protozoa Differential Evolution (HPDE), which combines the survival behaviors observed in protozoa with the evolutionary strengths of DE. Protozoa-inspired foraging, dormancy, and reproduction mechanisms (as modeled by the Artificial Protozoa Optimizer, APO) are merged with DE’s well-known mutation and crossover strategies. Autotrophic foraging and dormancy bolster the exploration capabilities of the algorithm, while heterotrophic foraging and reproduction interact synergistically with DE’s evolutionary operators to reinforce exploitation. This fusion is carefully formulated to overcome local minima entrapment and to maintain a flexible, adaptive framework for parameter adjustments in real-world scenarios.
2.1. Overview of Solving Highly Nonlinear and Complex Engineering Design Optimization Problems Using Metaheuristic Tools
Recent work has focused on improving optimization algorithms for complex engineering problems. Akl et al. proposed an improved Harris Hawks Optimization (IHHO), incorporating logarithmic and exponential strategies to avoid local optima [96]. Yu et al. introduced the Improved Adaptive Grey Wolf Optimization (IAGWO), enhancing convergence speed and accuracy with velocity and Inverse Multiquadratic Function adjustments [97]. Liu et al. developed the PRPCSO algorithm, combining Padé Approximation with intelligent population reduction [98]. Garg et al. proposed the LX-TLA, using the Laplacian operator to improve Teaching Learning Algorithm performance [99]. Gopi and Mohapatra introduced OLCA, utilizing opposition-based learning for global optimization [100]. Finally, Wang et al. presented LGJO, which leverages chaotic mapping and Gaussian mutation for industrial applications [101].
Furthermore, Moustafa et al. proposed the Subtraction-Average-Based Optimizer (SAOA), enhanced with cooperative learning, which outperformed GWO and PSO in power system applications [102]. El-Shorbagy and Elazeem’s Convex Combination Search Algorithm (CCS) effectively balanced exploration and exploitation in multimodal tests and engineering challenges [103]. Givi et al.’s Red Panda Optimization (RPO), inspired by red panda behaviors, surpassed several state-of-the-art algorithms in benchmarks and engineering problems [104]. Similarly, Gharehchopogh et al. introduced the Chaotic Quasi-Oppositional Farmland Fertility Algorithm (CQFFA), improving exploration and convergence with chaotic maps and learning mechanisms [105].
Pan et al.’s Gannet Optimization Algorithm (GOA) demonstrated success in large-scale constrained optimization [106], while Tang et al.’s hybrid PSODO improved global search and convergence speed [91]. Recent advancements in optimization algorithms have shown significant improvements in solving complex engineering problems. Hu et al. proposed the ACEPSO, which introduced adaptive population grouping and co-evolved mechanisms to enhance diversity and avoid local optima [107]. Ewees introduced a harmony-driven method, GBOHS, integrating Harmony Search (HS) with a gradient-based optimizer to improve convergence and accuracy in global optimization and feature selection [108]. Pan et al. improved the Gannet Optimization Algorithm (GOA) by incorporating parallel and compact strategies, enhancing memory efficiency, and avoiding local solutions in engineering tasks [109]. Che and He enhanced the Seagull Optimization Algorithm (SOA) by introducing mutualism and commensalism mechanisms, improving the algorithm’s exploitation capabilities in complex engineering challenges [110].
2.2. Overview of Optimization Problems
Optimization problems are fundamental in various fields of science and engineering, where the goal is to find the best solution among all feasible solutions. These problems arise in numerous applications such as machine learning, operations research, engineering design, finance, logistics, and many others. The primary objective is to optimize a performance criterion, which is represented mathematically by an objective function [111].
A general optimization problem can be formulated as shown in Equation (1):

$$\min_{\mathbf{x} \in S} f(\mathbf{x}) \tag{1}$$

where $f(\mathbf{x})$ is the objective function to be minimized, $\mathbf{x}$ is the vector of decision variables, and $S$ represents the feasible solution space defined by the constraints of the problem.
The feasible solution space is typically defined by a set of constraints, which may include the following.

Boundary constraints: variables are bounded within lower and upper limits, as shown in Equation (2):

$$\mathbf{lb} \le \mathbf{x} \le \mathbf{ub} \tag{2}$$

where $\mathbf{lb}$ and $\mathbf{ub}$ are vectors containing the lower and upper bounds for each decision variable.
Equality constraints: functions that must be satisfied exactly, as shown in Equation (3):

$$h_j(\mathbf{x}) = 0, \quad j = 1, \ldots, p \tag{3}$$

Here, the $h_j(\mathbf{x})$ are the equality constraint functions.
Inequality constraints: functions that impose upper or lower limits, as shown in Equation (4):

$$g_k(\mathbf{x}) \le 0, \quad k = 1, \ldots, m \tag{4}$$

Here, the $g_k(\mathbf{x})$ are the inequality constraint functions.
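The general formulation above (an objective with bounds, equality constraints, and inequality constraints) can be captured in a small data structure for experimentation. This is an illustrative sketch only; the `Problem` class and `is_feasible` helper are our own names, not from any cited work. A point is treated as feasible when it satisfies the bounds, every equality constraint to within a tolerance, and every inequality constraint.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Sequence

@dataclass
class Problem:
    """Generic problem: minimize objective(x) subject to bounds,
    equality constraints h_j(x) = 0, and inequality constraints g_k(x) <= 0."""
    objective: Callable[[Sequence[float]], float]
    lower: List[float]
    upper: List[float]
    eq_constraints: List[Callable[[Sequence[float]], float]] = field(default_factory=list)
    ineq_constraints: List[Callable[[Sequence[float]], float]] = field(default_factory=list)

    def is_feasible(self, x: Sequence[float], tol: float = 1e-6) -> bool:
        in_bounds = all(lo <= v <= hi for v, lo, hi in zip(x, self.lower, self.upper))
        eq_ok = all(abs(h(x)) <= tol for h in self.eq_constraints)   # h(x) = 0 within tol
        ineq_ok = all(g(x) <= tol for g in self.ineq_constraints)    # g(x) <= 0 within tol
        return in_bounds and eq_ok and ineq_ok
```

For instance, a two-variable problem with the single inequality x1 + x2 ≤ 1 reports `is_feasible([0.2, 0.3])` as `True` and `is_feasible([0.9, 0.9])` as `False`.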
In many real-world applications, the optimization problem involves complex, nonlinear, and high-dimensional objective functions with multiple local minima or maxima. These characteristics make it challenging to find the global optimum using traditional optimization methods [112].
To address these challenges, metaheuristic algorithms have been developed. In this research, we focus on applying and analyzing the hybrid of the Artificial Protozoa Optimizer (APO) with DE for solving continuous optimization problems as formulated in Equation (1); we also focus on applying HPDE to solve engineering problems.
2.3. Overview of Protozoa Optimizer
The Artificial Protozoa Optimizer (APO) [19] is a bio-inspired optimization algorithm that mimics the survival behaviors of protozoa, particularly foraging, dormancy, and reproduction. This optimizer is designed to solve continuous optimization problems by balancing exploration and exploitation through mathematical models derived from protozoa’s biological activities.

Foraging behavior in protozoa involves both autotrophic and heterotrophic mechanisms to obtain nutrients. The mathematical models for these modes are defined as follows.
2.4. APO Mathematical Models
This section introduces the mathematical framework of APO. The candidate solution set is represented by a population of protozoa, where each protozoan occupies a position in a multidimensional search space whose dimensionality equals the number of decision variables.
2.4.1. Notations and Nomenclature
The notations and symbols utilized in the model are summarized as follows. The population size is denoted by $n$, while $d$ represents the number of decision variables. The dimension index, indicated by $j$, ranges from 1 to $d$. The notation $np$ stands for the number of neighbor pairs within the model, and f refers to the foraging factor. Weight factors in the autotrophic and heterotrophic modes are represented by $w_a$ and $w_h$, respectively.

The proportion fraction of dormancy and reproduction is denoted by $pf$, while $p_{ah}$ and $p_{dr}$ represent the probability of autotrophic versus heterotrophic behavior and the probability of dormancy versus reproduction, respectively. A random number within the range $[0, 1]$ is denoted by $rand$. The current iteration number is indicated by $iter$, with $iter_{max}$ representing the maximum permissible iteration count.

The position of the $i$th protozoan is denoted by $X_i$, whereas $lb$ and $ub$ denote the lower and upper bounds of the decision variables, respectively. The mapping vectors used in foraging and reproduction are represented by $M_f$ and $M_r$. An index vector used within the dormancy and reproduction processes is denoted by $D_i$, while a random vector containing elements within $[0, 1]$ is represented by $Rand$.

The ceiling and flooring functions are represented as $\lceil \cdot \rceil$ and $\lfloor \cdot \rfloor$, respectively. The fitness function is denoted by $F$, and $sort(\cdot)$ is used to arrange fitness values in ascending order. Additionally, $randperm(n, l)$ generates a vector of l unique integers selected randomly within the range 1 to n. Lastly, the Hadamard product is represented by the symbol ⊙. This notation provides a clear and concise reference for interpreting the model’s various parameters and operations.
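To make the roles of the dormancy/reproduction fraction and the behavior probabilities described above concrete, the following schematic routes a population into APO’s four behaviors for one iteration. It is a simplified illustration only, not the published APO update rules (those are given in Equations (5)–(14) and [19]); the variable names `pf`, `p_ah`, and `p_dr` are our own shorthand for the parameters described above.

```python
import math
import random

def route_behaviors(ps, pf, p_ah, p_dr, rng=None):
    """Schematic per-iteration routing of a protozoa population (illustration only).

    A proportion of roughly pf of the individuals is diverted to dormancy or
    reproduction (split by probability p_dr); the remainder forage, choosing
    the autotrophic or heterotrophic mode with probability p_ah.
    """
    rng = rng or random.Random(0)
    n_dr = math.ceil(ps * pf * rng.random())      # count sent to dormancy/reproduction
    dr_index = set(rng.sample(range(ps), n_dr))   # randomly chosen individuals
    roles = {}
    for i in range(ps):
        if i in dr_index:
            roles[i] = "dormancy" if rng.random() < p_dr else "reproduction"
        else:
            roles[i] = "autotrophic" if rng.random() < p_ah else "heterotrophic"
    return roles
```

A call such as `route_behaviors(ps=10, pf=0.3, p_ah=0.5, p_dr=0.5)` assigns each of the ten individuals one of the four behaviors, with at most three diverted to dormancy or reproduction.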
2.4.2. Foraging Mechanism
The foraging behavior in protozoa, central to this optimization model, is driven by both internal and external factors. Internal factors are related to each protozoan’s individual foraging characteristics, while external factors capture environmental influences, such as interactions with neighboring protozoa.
2.4.3. Autotrophic Mode
In autotrophic mode, protozoa synthesize nutrients when exposed to light, prompting movement toward regions with optimal light conditions. Protozoa in areas of high light intensity will move to positions with reduced intensity, and vice versa. Assuming that the light conditions around a given protozoan, say the jth protozoan, are favorable for photosynthesis, the mathematical model guiding the movement of the ith protozoan toward this target is shown in Equation (5) [19]. Here, X_i^new and X_i represent the updated and current positions of the ith protozoan, respectively, and X_j is the position of the selected protozoan acting as a target. The terms X_{k-} and X_{k+} denote neighboring protozoa chosen based on rank relative to i: specifically, X_{k-} refers to a protozoan with a rank lower than i, while X_{k+} has a rank higher than i. This structure enables adaptive movement within the foraging process.
The spatial positions and ranking of protozoa are specified as shown in Equations (6) and (7) [19]. The foraging factor f dynamically adjusts with the iteration count, as shown in Equation (8) [19]. Additional parameters defining the neighborhood structure and the weight factors are given in Equations (9) and (10) [19].
The mapping vector M_f, which dictates the selection of dimensions during foraging, is defined in Equation (11) [19].
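To make the autotrophic update concrete, the following Python sketch implements the move described above: a target protozoan X_j, rank-based neighbor pairs weighted by w_a, and the mapping vector M_f that restricts the update to selected dimensions. This is an illustrative reading of Equation (5), not the reference implementation from [19]; the function name, the neighbor-index clamping at the population edges, and the exact weight expression are assumptions.

```python
import numpy as np

def autotroph_update(X, fit, i, j, f, np_pairs, Mf, eps=1e-12):
    """Sketch of the autotrophic foraging move for protozoan i.

    X        : (ps, d) population matrix, rows assumed sorted by fitness rank
    fit      : (ps,) fitness values aligned with the rows of X
    i        : index of the protozoan being moved
    j        : index of the target protozoan (favorable light conditions)
    f        : foraging factor
    np_pairs : number of neighbor pairs np
    Mf       : (d,) binary mapping vector selecting the updated dimensions
    """
    ps, d = X.shape
    drift = np.zeros(d)
    for k in range(1, np_pairs + 1):
        k_minus = max(i - k, 0)        # neighbor ranked better than i
        k_plus = min(i + k, ps - 1)    # neighbor ranked worse than i
        # weight factor w_a as described for Equation (9): an exponential of a
        # (negative, absolute) fitness ratio between the paired neighbors
        wa = np.exp(-abs(fit[k_minus] / (fit[k_plus] + eps)))
        drift += wa * (X[k_minus] - X[k_plus])
    step = X[j] - X[i] + drift / np_pairs
    return X[i] + f * step * Mf        # Hadamard product with the mapping vector
```

Because the step is multiplied by M_f, coordinates in unselected dimensions (M_f = 0) are left unchanged.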
2.4.4. Heterotrophic Mode
In low-light environments, protozoa can absorb organic nutrients directly from their surroundings. Given that X_near represents a nutrient-rich location nearby, the heterotrophic mode governs the protozoan's movement toward this point using the model shown in Equation (12) [19]. The location of X_near is determined as in Equation (13) [19], and the weight factor w_h for the heterotrophic mode is calculated as in Equation (14) [19].
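A matching sketch for the heterotrophic mode follows the description above: a nearby nutrient-rich point X_near whose perturbation shrinks as iterations progress, and index-based neighbor pairs weighted by w_h. The function names and the exact perturbation form are assumptions to be checked against [19].

```python
import numpy as np

def x_near(Xi, iter_, iter_max, rng=None):
    """Sketch of the nearby nutrient-rich position (Equation (13)):
    a random perturbation of X_i that vanishes at the final iteration."""
    rng = np.random.default_rng() if rng is None else rng
    flag = rng.choice([-1.0, 1.0])        # +1 or -1 with equal probability
    r = rng.random(Xi.shape)              # uniform random vector in [0, 1)
    return (1.0 + flag * r * (1.0 - iter_ / iter_max)) * Xi

def heterotroph_update(X, fit, i, f, np_pairs, Mf, Xnear, eps=1e-12):
    """Heterotrophic move toward the nutrient-rich point Xnear (Equation (12))."""
    ps, d = X.shape
    drift = np.zeros(d)
    for k in range(1, np_pairs + 1):
        i_minus = max(i - k, 0)
        i_plus = min(i + k, ps - 1)
        # weight factor w_h as described for Equation (14)
        wh = np.exp(-abs(fit[i_minus] / (fit[i_plus] + eps)))
        drift += wh * (X[i_minus] - X[i_plus])
    return X[i] + f * (Xnear - X[i] + drift / np_pairs) * Mf
```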
2.5. Dormancy
Dormancy is an adaptive survival response activated during unfavorable conditions, where a protozoan in a dormant state is replaced by a newly generated one, maintaining the population size. The dormancy model is defined in Equation (15) [19]. Here, lb and ub represent the lower and upper bounds, respectively, defined as shown in Equation (16) [19].
2.6. Reproduction
Reproduction is modeled as binary fission, where a protozoan divides into two identical copies. This division is simulated by duplicating the protozoan and applying a perturbation. The reproduction model is shown in Equation (17) [19]. The mapping vector for reproduction, M_r, specifies the dimensions involved in the perturbation.
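Dormancy and reproduction can be sketched directly from the descriptions above: dormancy replaces a protozoan with a fresh random position in the search box, and reproduction perturbs a duplicated protozoan only in the dimensions selected by M_r. The perturbation form below is a plausible reading of Equation (17), not the exact formula from [19].

```python
import numpy as np

def dormancy(lb, ub, rng=None):
    """A dormant protozoan is replaced by a fresh random position (Equation (15))."""
    rng = np.random.default_rng() if rng is None else rng
    return lb + rng.random(lb.shape) * (ub - lb)

def reproduction(Xi, lb, ub, Mr, rng=None):
    """Binary fission: duplicate X_i and apply a masked random perturbation
    (Equation (17)); the exact perturbation form follows reference [19]."""
    rng = np.random.default_rng() if rng is None else rng
    flag = rng.choice([-1.0, 1.0])                     # random sign
    perturb = lb + rng.random(Xi.shape) * (ub - lb)    # random point in the box
    return Xi + flag * rng.random() * perturb * Mr     # masked by M_r
```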
2.7. Overview of Differential Evolution
Differential Evolution (DE) is a population-based optimization algorithm where a population of candidate solutions iteratively evolves toward optimal solutions by applying mutation, crossover, and selection operators. The DE process is defined mathematically through several equations, each governing a specific stage of the evolution process.
The DE algorithm begins with the initialization of a population of candidate solutions, where each candidate is represented by a vector of d decision variables. Let X_i^g denote the position of the ith individual in the population at generation g. Each candidate vector is initialized randomly within the bounds specified by lb and ub.
The mutation process creates a mutant vector for each target vector X_i^g by combining the position vectors of other randomly selected candidates, as shown in Equation (19):

V_i^g = X_{r1}^g + F · (X_{r2}^g − X_{r3}^g),

where V_i^g represents the mutant vector; X_{r1}^g, X_{r2}^g, and X_{r3}^g are randomly selected individuals from the population such that r1 ≠ r2 ≠ r3 ≠ i; and the parameter F is a scaling factor (typically within the range [0, 2]) that controls the amplification of the differential variation between individuals. This mutation strategy enables the exploration of new solutions based on the diversity within the population.
To increase diversity, DE applies a crossover operator that combines elements of the mutant vector V_i^g and the target vector X_i^g to produce a trial vector U_i^g. The crossover is typically defined as shown in Equation (20):

U_{i,j}^g = V_{i,j}^g if rand_j ≤ CR or j = j_rand, and U_{i,j}^g = X_{i,j}^g otherwise,

where j represents the dimension index; CR is the crossover probability (within [0, 1]); rand_j is a random number generated for each dimension j; and j_rand is a randomly chosen index that ensures at least one component from the mutant vector V_i^g is included in the trial vector U_i^g. This crossover strategy helps maintain diversity by blending solutions from different vectors.
The selection step determines whether the trial vector U_i^g should replace the target vector X_i^g in the next generation. This decision is based on the fitness values of the vectors, where the fitness function f evaluates the objective to be minimized. The selection process is defined in Equation (21):

X_i^{g+1} = U_i^g if f(U_i^g) ≤ f(X_i^g), and X_i^{g+1} = X_i^g otherwise.

As shown in Equation (21), the trial vector U_i^g replaces the target vector X_i^g if it yields a lower fitness value, thus ensuring that better solutions are retained in the population for subsequent generations.
The algorithm iterates through the mutation, crossover, and selection steps until a predefined stopping criterion is met, typically either a maximum number of generations g_max or an acceptable error threshold. Let g denote the current generation. As indicated in Equation (22), the algorithm concludes when either the maximum iteration count is reached or the target accuracy is achieved:

g ≥ g_max or |f(X_best) − f*| ≤ ε,

where X_best is the best solution found, f* is the target fitness, and ε is a small threshold value.
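The three DE operators described above (Equations (19)-(21)) combine into one generation of the classic DE/rand/1/bin scheme, sketched below. The parameter names follow the text; the function name and loop structure are standard DE rather than anything specific to this paper.

```python
import numpy as np

def de_step(pop, fit, F, CR, func, rng=None):
    """One generation of DE/rand/1/bin: mutation (Eq. (19)), binomial
    crossover (Eq. (20)), and greedy selection (Eq. (21))."""
    rng = np.random.default_rng() if rng is None else rng
    ps, d = pop.shape
    new_pop, new_fit = pop.copy(), fit.copy()
    for i in range(ps):
        # mutation: three distinct individuals, all different from i
        r1, r2, r3 = rng.choice([k for k in range(ps) if k != i], 3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])
        # binomial crossover with one guaranteed dimension j_rand
        j_rand = rng.integers(d)
        mask = rng.random(d) < CR
        mask[j_rand] = True
        trial = np.where(mask, mutant, pop[i])
        # greedy selection keeps the better of trial and target
        f_trial = func(trial)
        if f_trial < fit[i]:
            new_pop[i], new_fit[i] = trial, f_trial
    return new_pop, new_fit
```

Because selection is greedy, the best fitness in the population never worsens from one generation to the next.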
3. Proposed HPDE Optimization Algorithm
In this section, the details of the new hybrid algorithm are presented. The APO primarily focuses on mimicking biological behaviors to explore and exploit the search space, but its exploration capabilities can be limited, especially in high-dimensional or complex landscapes. By integrating DE's mutation and crossover operations, the hybrid algorithm gains enhanced exploration capabilities, allowing it to escape local optima and explore the search space more effectively.
Maintaining diversity in the population is crucial for avoiding premature convergence. APO’s dormancy and reproduction forms introduce diversity by reinitializing or modifying protozoa positions. The addition of DE operations further enhances diversity by generating new candidate solutions through the combination of multiple individuals. This hybrid approach helps in preserving population diversity and improving the algorithm’s ability to find global optima. Furthermore, APO adapts its behavior based on the current state of the population and the optimization process. The incorporation of DE introduces an additional adaptive layer, where DE operations are applied with a certain probability. This probabilistic application of DE ensures that the hybrid algorithm can dynamically balance exploration and exploitation based on the optimization stage, leading to more efficient convergence. The hybrid algorithm combines the APO’s biologically inspired mechanisms and DE’s evolutionary strategies. APO’s foraging forms (autotroph and heterotroph) and DE’s mutation and crossover operations complement each other, resulting in a more robust optimization process. The hybridization ensures that the strengths of both approaches are utilized effectively, leading to improved overall performance.
3.1. Mathematical Model of HPDE
The initial population of protozoa is generated randomly within the bounds lb and ub, as shown in Equation (23):

X_i = lb + Rand ⊙ (ub − lb),

where Rand is a vector of uniformly distributed random numbers in the interval [0, 1].
The fitness of each protozoan is evaluated using the objective function f, as shown in Equation (24). Sort by fitness: the population is then sorted by fitness in ascending order, as shown in Equation (25).
Proportion fraction (pf): the proportion fraction pf is calculated as shown in Equation (26):

pf = pf_max · r,

where pf_max is the maximum proportion fraction, a predefined parameter that sets the upper limit for pf, and r is a random number drawn from a uniform distribution in the interval [0, 1], denoted as r ~ U(0, 1).
The role of pf_max in the algorithm is to act as a control parameter that determines the maximum possible value of the proportion fraction pf, effectively controlling the maximum proportion of the population that can participate in a specific behavior during each iteration. By adjusting pf_max, one can balance the exploration and exploitation capabilities of the algorithm: a higher pf_max allows more protozoa to engage in behaviors like reproduction, enhancing exploration, while a lower pf_max focuses the algorithm on exploitation by limiting the number of protozoa undergoing certain behaviors. The multiplication by the random variable r introduces stochasticity, allowing pf to vary between 0 and pf_max in each iteration, which helps prevent premature convergence and maintains diversity in the population. Selecting pf_max is typically performed through empirical tuning or recommendations from the literature, with common values ranging between 0.05 and 0.3, but the optimal value can depend on the specific problem and the characteristics of the search space.
Random indices for dormancy/reproduction: random indices for protozoa entering dormancy or reproduction forms are selected as shown in Equation (27):

ri = randperm(⌈ps · pf⌉, ps),

where ps is the population size, pf is the proportion fraction determining the fraction of the population to undergo dormancy or reproduction, and ⌈·⌉ denotes the ceiling function. The set ri contains k = ⌈ps · pf⌉ unique random indices selected from the population, where each index satisfies 1 ≤ ri_m ≤ ps. These indices correspond to the protozoa that will engage in dormancy or reproduction behaviors during the current iteration of the algorithm.
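Equations (26) and (27) together decide how many protozoa, and which ones, enter dormancy or reproduction in a given iteration. A minimal sketch, assuming randperm-style sampling without replacement (the function name is illustrative):

```python
import numpy as np

def dr_indices(ps, pf_max, rng=None):
    """Select which protozoa enter dormancy/reproduction this iteration
    (Equations (26) and (27)): pf = pf_max * r, then ceil(ps * pf) unique
    random indices. pf_max is a tuning parameter (commonly 0.05-0.3)."""
    rng = np.random.default_rng() if rng is None else rng
    pf = pf_max * rng.random()                      # proportion fraction
    k = int(np.ceil(ps * pf))                       # number of selected protozoa
    return rng.choice(ps, size=k, replace=False)    # unique indices in [0, ps)
```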
Dormancy/reproduction probability: the probability p_dr of a protozoan being in dormancy or reproduction form is given in Equation (28):
If a protozoan is in dormancy form, its new position is randomly reinitialized, as shown in Equation (29):

X_i^new = lb + Rand_i ⊙ (ub − lb),

where X_i^new is the updated position vector of the i-th protozoan after dormancy; lb and ub are the minimum and maximum bounds of the search space, respectively; Rand_i is a random vector associated with the i-th protozoan, where each element is independently drawn from a uniform distribution in the interval [0, 1]; and the symbol ⊙ denotes element-wise multiplication between vectors.
If a protozoan is in reproduction form, it follows the reproduction mechanism. As shown in Equation (30), the mapping vector is

M_r[j] = 1 with probability p, and M_r[j] = 0 otherwise, for j = 1, …, d,

where M_r is the reproduction mapping vector determining which dimensions will be modified during reproduction; M_r[j] corresponds to the j-th dimension; d denotes the dimensionality of the problem (the number of decision variables); and p is the probability of selecting a dimension for updating.
The reproduction form is defined in Equations (31) and (32), where X_i^new is the updated position vector of the i-th protozoan after reproduction; X_i is the current position vector of the i-th protozoan; flag is a random factor that takes the value +1 or −1 with equal probability; Rand_i is a random vector whose elements are drawn from a uniform distribution in [0, 1]; M_r is the reproduction mapping vector from Equation (30); and the symbol ⊙ denotes element-wise multiplication.
Foraging factor: the foraging factor f is calculated as shown in Equation (33):

f = r · (1 + cos(iter/iter_max · π)),

where f is the foraging factor influencing the step size during foraging; r is a random number drawn from a uniform distribution over [0, 1], denoted r ~ U(0, 1); iter is the current iteration number; iter_max is the maximum number of iterations; cos denotes the cosine function; and π is the mathematical constant pi.
Autotroph/heterotroph probability: the probability p_ah of protozoa being in autotroph or heterotroph form is given by Equation (34):

p_ah = 1/2 · (1 + cos(iter/iter_max · π)),

where p_ah is the probability of a protozoan being in autotroph or heterotroph form; iter is the current iteration number; iter_max is the maximum number of iterations; and the function varies between 0 and 1 over the iterations.
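The two cosine schedules above are easy to verify numerically. The sketch below assumes the forms f = r · (1 + cos(iter/iter_max · π)) and p_ah = 1/2 · (1 + cos(iter/iter_max · π)) inferred from the descriptions of Equations (33) and (34); the exact expressions should be confirmed against [19].

```python
import numpy as np

def foraging_factor(iter_, iter_max, rng=None):
    """Foraging factor f (Equation (33)): a random step scale modulated by a
    cosine schedule, shrinking on average as the search progresses."""
    rng = np.random.default_rng() if rng is None else rng
    return rng.random() * (1.0 + np.cos(iter_ / iter_max * np.pi))

def p_autotroph(iter_, iter_max):
    """Probability of autotroph vs. heterotroph form (Equation (34)),
    a cosine schedule bounded in [0, 1]."""
    return 0.5 * (1.0 + np.cos(iter_ / iter_max * np.pi))
```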
If a protozoan is in an autotrophic form, it follows the autotrophic mechanism.
The weight factor is defined in Equation (35):

w_a = exp(−|F(X_{k-}) / (F(X_{k+}) + ε)|),

where w_a is the weight factor influencing the movement in autotroph form; F(X_{k-}) and F(X_{k+}) are the fitness values of the protozoa at neighbor indices k- and k+, respectively; ε is a small constant that avoids division by zero; |·| denotes the absolute value; and exp denotes the exponential function.
The effect of phototaxis is

PE_n = w_a · (X_{k-} − X_{k+}),

where PE_n is the phototaxis effect vector for the n-th neighbor pair; X_{k-} and X_{k+} are the positions of the neighbor protozoa at indices k- and k+, respectively; w_a is the weight factor from Equation (35); and · denotes element-wise multiplication.
The new position for the autotroph form is

X_i^new = X_i + f · (X_j − X_i + (1/np) Σ_{n=1}^{np} PE_n) ⊙ M_f,

where X_i^new is the updated position vector of the i-th protozoan in autotroph form; X_i is the current position vector of the i-th protozoan; f is the foraging factor from Equation (33); X_j is the position vector of a randomly selected protozoan j; np is the number of neighbor pairs considered; Σ_{n=1}^{np} PE_n is the summation of the phototaxis effects over all neighbor pairs; M_f is the foraging mapping vector determining which dimensions are updated; and ⊙ denotes element-wise multiplication.
If a protozoan is in a heterotrophic form, it follows the heterotrophic mechanism.
As shown in Equation (38), the weight factor is

w_h = exp(−|F(X_{i-k}) / (F(X_{i+k}) + ε)|),

where w_h is the weight factor influencing the movement in heterotroph form; F(X_{i-k}) and F(X_{i+k}) are the fitness values of the protozoa at indices i-k and i+k, respectively; ε is a small constant that avoids division by zero; |·| denotes the absolute value; and exp denotes the exponential function.
The effect of chemotaxis is

CE_n = w_h · (X_{i-k} − X_{i+k}),

where CE_n is the chemotaxis effect vector for the n-th neighbor pair; X_{i-k} and X_{i+k} are the position vectors of the neighbor protozoa at indices i-k and i+k, respectively; w_h is the weight factor from Equation (38); and · denotes element-wise multiplication.
Furthermore, the nearby position is computed as shown in Equation (40):

X_near = (1 + flag · r · (1 − iter/iter_max)) ⊙ X_i,

where X_near is the nearby position vector for the i-th protozoan; flag is a random factor taking the value +1 or −1 with equal probability; r is a random number drawn from a uniform distribution over [0, 1]; iter is the current iteration number; iter_max is the maximum number of iterations; and ⊙ denotes element-wise multiplication.
Moreover, the new position for the heterotroph form is computed as shown in Equation (41):

X_i^new = X_i + f · (X_near − X_i + (1/np) Σ_{n=1}^{np} CE_n) ⊙ M_f,

where X_i^new is the updated position vector of the i-th protozoan in heterotroph form; X_i is the current position vector of the i-th protozoan; f is the foraging factor defined previously; X_near is the nearby position vector from Equation (40); np is the number of neighbor pairs considered; Σ_{n=1}^{np} CE_n is the summation of the chemotaxis effects over all neighbor pairs; M_f is the foraging mapping vector determining which dimensions are updated; and ⊙ denotes element-wise multiplication.
The DE mutation and crossover operations are applied as shown in Equation (42):

mutant = X_{r1} + F · (X_{r2} − X_{r3}),

where mutant is the mutant vector generated in the mutation operation; X_{r1}, X_{r2}, and X_{r3} are randomly selected, mutually distinct vectors from the current population; F is the mutation scaling factor controlling the differential variation; and · denotes scalar multiplication.
The crossover operation is defined as shown in Equation (43):

trial_j = mutant_j if rand_j ≤ CR or j = j_rand, and trial_j = X_{i,j} otherwise, for j = 1, …, d,

where trial_j is the j-th component of the trial vector; mutant_j is the j-th component of the mutant vector from Equation (42); X_{i,j} is the j-th component of the current vector X_i; rand_j is a random number drawn from a uniform distribution over [0, 1]; CR is the crossover probability controlling the rate of crossover; j_rand is a randomly chosen index guaranteeing that at least one component is taken from the mutant vector; and d is the dimensionality of the problem.
The fitness evaluation for the trial vector is performed as shown in Equation (44):

F_trial = f(trial),

where F_trial is the fitness value (objective function evaluation) of the trial vector, and f denotes the objective function being minimized.
If F_trial < F_i, the selection operation is applied as shown in Equation (45):

X_i ← trial,

where X_i is updated to the trial vector if the latter has a better fitness value, and F_i is the fitness value of the i-th vector in the current population.
Boundary control: the new positions of the protozoa are kept within the boundaries, as shown in Equation (46):

X_i^new = min(max(X_i^new, lb), ub), for i = 1, …, ps,

where X_i^new is the updated position vector of the i-th protozoan after boundary control; lb and ub are the minimum and maximum bounds of the search space, respectively; max and min are element-wise maximum and minimum functions; and ps is the population size.
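The boundary control of Equation (46) is a plain element-wise clamp, sketched here:

```python
import numpy as np

def boundary_control(X, lb, ub):
    """Clamp every protozoan back into the search box (Equation (46)):
    element-wise max with lb, then element-wise min with ub."""
    return np.minimum(np.maximum(X, lb), ub)
```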
Finally, the population is updated based on the fitness of the new positions, as shown in Equation (47):

X_i ← X_i^new if F(X_i^new) < F(X_i), and X_i is retained otherwise, for i = 1, …, ps,

where X_i is the updated position vector of the i-th protozoan in the population; X_i^new is the new position vector produced by the update mechanisms; F(X_i^new) and F(X_i) are the fitness values of the new and current positions, respectively; and ps is the population size.
3.2. Discussion on Hybrid APO with DE Algorithm Design and Pseudocode
The design of the Hybrid Artificial Protozoa Optimizer with Differential Evolution (HPDE) algorithm is centered around integrating the strengths of both the Artificial Protozoa Optimizer (APO) and Differential Evolution (DE). APO mimics the biological behaviors of protozoa, which include foraging, reproduction, and dormancy. However, while APO excels in exploration, its exploitation capabilities can be limited, especially in complex and high-dimensional search spaces. To address this limitation, DE is incorporated into the hybrid algorithm to enhance its search capabilities and to improve the balance between exploration and exploitation.
In the design of HPDE, the APO’s natural behaviors are preserved and enhanced by DE’s mutation and crossover strategies. The DE operations are not applied uniformly but with a certain probability, allowing the algorithm to adapt dynamically to the optimization process. This ensures that the algorithm can intensify the search in promising regions of the search space while maintaining diversity in the population to avoid premature convergence.
The process begins with the initialization of the population, followed by the evaluation of fitness. The population is then sorted based on fitness values, ensuring that the best solutions are prioritized. The algorithm iteratively updates the population, applying APO’s dormancy and reproduction mechanisms, as well as DE’s mutation and crossover operations.
One of the key innovations in HPDE is the use of proportion fractions and random indices to introduce diversity into the population. This is particularly important in maintaining a broad search space and preventing the algorithm from getting trapped in local optima. Additionally, the algorithm adapts the application of DE operations based on the current state of the population, ensuring a balance between exploration and exploitation throughout the optimization process.
The pseudocode reflects this hybrid approach, with APO’s biological behaviors forming the core of the algorithm, while DE operations are integrated strategically to enhance performance. By combining the strengths of both APO and DE, HPDE is capable of addressing a wide range of optimization problems, from benchmark functions to real-world engineering design challenges.
3.3. Algorithm Stages of HPDE
The design of the Hybrid Artificial Protozoa Optimizer with Differential Evolution (HPDE) algorithm consists of different stages, including initialization, fitness evaluation, exploration through APO’s biological behaviors, and exploitation enhanced by DE’s mutation and crossover operations.
Initialization: In the first stage, the population of protozoa is initialized randomly within the given bounds. This initialization sets the stage for the search process by distributing potential solutions across the search space. Each protozoan represents a candidate solution, and their positions are determined based on uniform random distribution. This diversity in the initial population ensures a broad exploration of the search space from the outset.
Fitness Evaluation: Following initialization, the fitness of each protozoan is evaluated using the objective function. This step assigns a fitness value to each candidate solution, which reflects its quality in terms of the optimization goal. The population is then sorted based on these fitness values, prioritizing the best solutions for subsequent stages.
Exploration: The next stage focuses on exploration, driven primarily by APO’s biologically inspired mechanisms. APO simulates the behaviors of protozoa, including dormancy, reproduction, and foraging. Dormancy and reproduction introduce diversity into the population by reinitializing or modifying protozoa positions, thereby preventing premature convergence. Foraging, on the other hand, allows protozoa to move within the search space based on autotroph and heterotroph behaviors, guiding the search toward promising regions.
Exploitation: To enhance exploitation, the DE algorithm is integrated into the hybrid approach. DE’s mutation and crossover operations are applied to generate new candidate solutions by combining existing ones. These operations are not applied uniformly across the population but are introduced with a certain probability, allowing the algorithm to adapt to the current optimization stage. This probabilistic application ensures that the algorithm can focus on intensifying the search in areas where promising solutions have been identified while still maintaining sufficient diversity in the population.
Population Update and Convergence: After applying both APO and DE operations, boundary control is performed to ensure that the solutions remain within the defined bounds. The population is then updated based on the fitness of the newly generated solutions. This process iterates until the maximum number of iterations is reached or another stopping criterion is satisfied.
The pseudocode (see Algorithm 1) demonstrates the flow of the algorithm. Starting with the initialization of the population, the algorithm progresses through fitness evaluation, exploration via APO, and exploitation through DE, with careful management of population diversity throughout the process.
The HPDE algorithm effectively combines the exploration strengths of APO with the exploitation capabilities of DE, resulting in a robust optimization method. The algorithm's design ensures that it can handle a wide range of optimization problems, from standard benchmark functions to complex real-world engineering challenges. By maintaining a balance between exploration and exploitation, HPDE is able to avoid local optima and achieve good performance across different problem domains. In addition, the flowchart of HPDE is provided in Figure 1.
Algorithm 1 Pseudocode and algorithm steps of HPDE
1: Input: Objective function f, dimension d, population size ps, maximum iterations iter_max, lower bound lb, upper bound ub
2: Output: Best solution X_best, best fitness value F_best
3: Initialize population X_i for i = 1, …, ps ▹ See Equation (23)
4: Evaluate fitness F(X_i) for i = 1, …, ps ▹ See Equation (24)
5: Sort population by fitness ▹ See Equation (25)
6: for iter = 1 to iter_max do
7:   Calculate proportion fraction pf ▹ See Equation (26)
8:   Select random indices ri for dormancy/reproduction ▹ See Equation (27)
9:   for each i ∈ ri do
10:    Calculate dormancy/reproduction probability p_dr ▹ See Equation (28)
11:    if random value < p_dr then
12:      Perform dormancy form ▹ See Equation (29)
13:    else
14:      Perform reproduction form ▹ See Equations (30) and (31)
15:    end if
16:   end for
17:   for each i ∉ ri do
18:    Calculate foraging factor f ▹ See Equation (33)
19:    Calculate autotroph/heterotroph probability p_ah ▹ See Equation (34)
20:    if random value < p_ah then
21:      Perform autotroph form ▹ See Equations (35)–(37)
22:    else
23:      Perform heterotroph form ▹ See Equations (38)–(41)
24:    end if
25:   end for
26:   for i = 1 to ps do
27:    if random value < 0.2 then
28:      Apply Differential Evolution (DE) ▹ See Equations (42)–(45)
29:    end if
30:   end for
31:   Ensure boundary control ▹ See Equation (46)
32:   Update population based on fitness ▹ See Equation (47)
33: end for
34: return Best solution X_best, best fitness value F_best
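As a compact illustration of Algorithm 1, the following Python skeleton mirrors its control flow. The APO operators here are deliberately simplified stand-ins (a faithful implementation would use the full forms of Equations (29)-(41)), while the DE step follows the standard DE/rand/1/bin scheme of Equations (42)-(45); all function names, parameter defaults, and the 0.1 reproduction scale are illustrative assumptions.

```python
import numpy as np

def hpde(func, d, ps=20, iters=200, lb=-10.0, ub=10.0,
         pf_max=0.1, F=0.5, CR=0.9, de_prob=0.2, seed=0):
    """Structural sketch of Algorithm 1 (HPDE), with simplified APO operators."""
    rng = np.random.default_rng(seed)
    lb, ub = np.full(d, float(lb)), np.full(d, float(ub))
    X = lb + rng.random((ps, d)) * (ub - lb)          # Eq. (23): initialization
    fit = np.array([func(x) for x in X])              # Eq. (24): evaluation
    best_x, best_f = X[np.argmin(fit)].copy(), fit.min()
    for it in range(1, iters + 1):
        order = np.argsort(fit)                       # Eq. (25): sort by fitness
        X, fit = X[order], fit[order]
        # dormancy / reproduction on a random fraction (Eqs. (26)-(31))
        pf = pf_max * rng.random()
        for i in rng.choice(ps, int(np.ceil(ps * pf)), replace=False):
            if rng.random() < 0.5:                    # dormancy: reinitialize
                X[i] = lb + rng.random(d) * (ub - lb)
            else:                                     # reproduction: perturb a copy
                X[i] = np.clip(X[i] + rng.choice([-1.0, 1.0])
                               * rng.random(d) * (ub - lb) * 0.1, lb, ub)
            fit[i] = func(X[i])
        # foraging: simplified greedy move toward the current best (Eqs. (33)-(41))
        f = rng.random() * (1.0 + np.cos(it / iters * np.pi))
        for i in range(1, ps):
            cand = np.clip(X[i] + f * (X[0] - X[i]) * (rng.random(d) < 0.5), lb, ub)
            fc = func(cand)
            if fc < fit[i]:
                X[i], fit[i] = cand, fc
        # probabilistic DE step (Eqs. (42)-(45))
        for i in range(ps):
            if rng.random() < de_prob:
                r1, r2, r3 = rng.choice([k for k in range(ps) if k != i],
                                        3, replace=False)
                mutant = X[r1] + F * (X[r2] - X[r3])
                mask = rng.random(d) < CR
                mask[rng.integers(d)] = True          # guaranteed crossover dim
                trial = np.clip(np.where(mask, mutant, X[i]), lb, ub)
                ft = func(trial)
                if ft < fit[i]:                       # greedy selection
                    X[i], fit[i] = trial, ft
        if fit.min() < best_f:                        # track the best-so-far
            best_f = fit.min()
            best_x = X[np.argmin(fit)].copy()
    return best_x, best_f
```

Running the sketch on a separable test function such as the sphere shows the expected steady fitness improvement over iterations.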
3.4. Exploration and Exploitation Features of HPDE Algorithm
Exploration and exploitation are fundamental concepts in metaheuristic optimization algorithms, and they are crucial for the performance of the HPDE Hybrid algorithm. Exploration refers to the algorithm’s ability to search the global solution space broadly, ensuring diverse candidate solutions are considered. Exploitation, on the other hand, focuses on intensively searching around the best solutions found so far, refining them to achieve optimality. The HPDE Hybrid algorithm integrates mechanisms to balance these two aspects effectively.
Exploration in the HPDE Hybrid algorithm is primarily achieved through the foraging behavior of protozoa and the dormancy forms. During the foraging process, protozoa move through the solution space based on autotrophic and heterotrophic mechanisms. The autotrophic mode, where protozoa move towards suitable light conditions, ensures diverse sampling of the search space by allowing protozoa to explore new regions; this is modeled by Equations (35)–(37). The heterotrophic mode, where protozoa absorb organic matter, also contributes to exploration by moving protozoa towards nutrient-rich areas, as modeled by Equations (38)–(41). Additionally, the dormancy form, as described in Equation (29), allows protozoa to be reinitialized to new random positions, injecting further diversity into the population.
Exploitation is facilitated by the reproduction forms and the selective application of Differential Evolution (DE) operations. The reproduction form, detailed in Equations (30) and (31), allows protozoa to create new solutions by perturbing their current positions, focusing the search around promising areas. This mechanism ensures that good solutions are refined over iterations. The DE operations, involving mutation, crossover, and selection as described by Equations (42)–(45), further enhance exploitation. The mutation operation generates new candidate solutions by combining existing ones, while crossover blends these candidates to create trial solutions. The selection process ensures that only solutions with improved fitness values are retained in the population. These DE operations intensively exploit the solution space around the best-found solutions, driving the search towards optimality.
The balance between exploration and exploitation is dynamically managed throughout the algorithm's iterations. The proportion fraction pf, as calculated by Equation (26), determines the ratio of protozoa undergoing dormancy or reproduction versus those in foraging modes. This adaptive mechanism ensures that exploration is emphasized in the early stages of the search, while exploitation becomes more prominent as the search progresses and the algorithm converges towards optimal solutions.
3.5. Solving Single-Objective Optimization Problems Using HPDE
Single-objective optimization problems typically involve one minimum point, which could either be the sole global optimum with no local points or include several local points with one global optimum [113]. To solve such a problem using HPDE, it must first be mathematically formulated as displayed in Equation (48):

minimize f(x), x = (x_1, …, x_n), subject to g_i(x) ≤ 0, h_j(x) = 0, and lb_k ≤ x_k ≤ ub_k for k = 1, …, n.

In this equation, f(x) represents the objective function to be minimized, while n denotes the total number of variables. The variable bounds are defined as lb_k ≤ x_k ≤ ub_k, with the inequality constraints being g_i(x) ≤ 0 and the equality constraints h_j(x) = 0.
The process of solving a single-objective optimization problem using HPDE starts with setting up the algorithm parameters. Then, the fitness value of each protozoan is computed using the formulated single-objective problem, and these values are stored and sorted. The top two solutions are selected and placed in temporary variables. In the fourth step, the iterative process begins with entry into the while loop: in this phase, each protozoan moves toward the single global optimum point, the first- and second-best positions are updated, and at each iteration the fitness function is recalculated and the boundary limits of the search space are verified. Once the loop concludes, the results are reported and the global optimum point is identified.
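Since Equation (48) includes inequality and equality constraints, a practical way to feed such a problem to HPDE is to fold the constraints into the fitness. The sketch below uses a static quadratic penalty, which is one common choice among several constraint-handling schemes; the function names, the penalty weight rho, and the example constraint are illustrative assumptions, not part of the paper.

```python
import numpy as np

def penalized(f, gs=(), hs=(), rho=1e6, tol=1e-6):
    """Wrap a constrained problem of the form in Equation (48) into an
    unconstrained fitness: violations of g(x) <= 0 and h(x) = 0 are
    added to f(x) as quadratic penalties weighted by rho."""
    def fitness(x):
        pen = sum(max(0.0, g(x)) ** 2 for g in gs)               # g(x) <= 0
        pen += sum(max(0.0, abs(h(x)) - tol) ** 2 for h in hs)   # h(x) = 0
        return f(x) + rho * pen
    return fitness

# hypothetical example: minimize the sphere subject to x0 + x1 >= 1,
# rewritten in the <= 0 form of Equation (48) as 1 - x0 - x1 <= 0
fit = penalized(lambda x: float(np.sum(x ** 2)),
                gs=[lambda x: 1.0 - x[0] - x[1]])
```

The wrapped `fit` can then be passed to the optimizer in place of the raw objective; feasible points incur no penalty, while infeasible ones are pushed away by the rho term.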