1. Introduction
In the field of engineering design, the quest for optimal solutions represents a formidable challenge. This difficulty arises primarily due to the complex, multi-dimensional nature of design spaces, where multiple variables and interdependencies must be navigated simultaneously [1]. Moreover, the intricate constraints that govern these spaces—ranging from material properties and environmental considerations to budgetary and time constraints—further complicate the optimization process [2]. Traditional optimization methods, such as gradient-based techniques and linear programming, often grapple with several limitations that undermine their effectiveness in such complex scenarios [3].
One of the most significant limitations of traditional methods is their tendency to converge to local optima rather than global ones. This issue is particularly problematic in complex design landscapes that feature numerous feasible solutions separated by suboptimal regions [4]. Furthermore, traditional methods often entail high computational costs, especially as the dimensionality of the problem increases, which can make them impractical for large-scale applications or for use in real-time scenarios [5]. Additionally, these methods typically rely on the availability and accuracy of gradient information, which may not be obtainable for all types of problems, such as those involving discontinuous or non-differentiable spaces [6].
In recent years, there has been significant growth in the development of metaheuristic algorithms to address these challenges [7]. These algorithms provide a powerful, adaptable, and efficient approach to solving optimization problems in diverse fields [8]. Unlike traditional methods, metaheuristics operate independently of gradient information, employing strategies inspired by natural or social processes to locate optimal solutions [9]. Their flexibility enables their application across a broad spectrum of problems, ranging from abstract mathematical models to real-world engineering design challenges [10,11]. Additionally, metaheuristics are highly regarded for their capability to overcome local optima, ensuring a more thorough exploration of the solution space and enhancing the probability of identifying global optima [12]. Due to their stochastic and dynamic nature, these algorithms are particularly well suited for addressing the complexities and uncertainties characteristic of engineering design, offering innovative and efficient solutions in a rapidly advancing technological landscape [13].
The Arctic Puffin Optimization (APO) algorithm, introduced in [14], is a recently developed metaheuristic inspired by the distinctive foraging strategies of Arctic puffins. This algorithm draws on the puffins’ ability to navigate and adapt to the harsh and variable conditions of the Arctic, effectively balancing exploration and exploitation phases. Despite its potential, APO, like many nature-inspired algorithms, can benefit from enhancements to address complex, multimodal engineering design problems more effectively.
In this study, we propose an innovative hybrid algorithm that synergizes the strengths of Arctic Puffin Optimization with JADE, an advanced differential evolution algorithm characterized by its dynamic self-adaptive parameter control mechanism [15]. JADE’s dynamic parameter tuning significantly accelerates convergence and improves the precision of the evolutionary process, making it particularly advantageous for continuous optimization tasks. By integrating JADE’s adaptive strategies with the robust exploratory capabilities of APO, the resulting hybrid algorithm is designed to enhance performance in tackling challenging engineering design problems.
This research introduces the Hybrid Arctic Puffin Optimization with JADE (APO-JADE) algorithm, demonstrating its application in solving engineering design problems. Through comprehensive comparative evaluations with established optimization techniques, the results underscore the superior efficiency, convergence speed, and solution quality of APO-JADE. Furthermore, the versatility of the proposed algorithm is validated through its application to real-world engineering problems, highlighting its potential as a robust and effective tool for complex design optimization challenges.
1.1. Research Contribution
The primary contributions of this research are outlined as follows:
Proposal of the Hybrid APO-JADE Algorithm: This study introduces the Hybrid Arctic Puffin Optimization with JADE (APO-JADE) algorithm, integrating the exploratory capabilities of APO with the adaptive evolutionary mechanisms of JADE to enhance optimization performance and reliability.
Superior Performance on Standardized Benchmarks: Through extensive evaluations on the CEC2022 benchmark functions, APO-JADE is demonstrated to outperform existing algorithms, achieving superior convergence speed, solution accuracy, and computational efficiency.
Practical Applications in Engineering Design: The APO-JADE algorithm has been effectively applied to optimize the design of planetary gear trains and three-bar truss structures, validating its robustness and practical utility in solving complex engineering optimization problems.
Statistical Analysis and Comparison with the State of the Art: This research includes a thorough statistical comparison of APO-JADE with other state-of-the-art optimization techniques, highlighting its superior performance across various metrics.
Versatility and Applicability: The adaptability of APO-JADE is showcased through its ability to handle diverse optimization landscapes, emphasizing its potential as a general-purpose optimization technique for a variety of industrial applications.
1.2. Paper Structure
This paper is structured as follows: The Literature Review section examines advancements in engineering design optimization, with a particular focus on hybrid algorithms developed to address the limitations of standalone optimization methods. This section emphasizes improvements in solution accuracy, convergence rates, and the ability to overcome local optima by leveraging the complementary strengths of multiple algorithms. The Hybrid Arctic Puffin Optimization with JADE (APO-JADE) section introduces the conceptual framework of the APO-JADE algorithm, elaborating on the integration of APO’s nature-inspired mechanisms with JADE’s adaptive parameter control and external archive strategies. The Mathematical Model section presents the mathematical formulations underpinning the APO-JADE algorithm, encompassing initialization, population dynamics, hybridization mechanisms, and the equations guiding its exploration and exploitation phases.
The Results and Discussion section provides a comprehensive performance analysis of APO-JADE based on rigorous testing using CEC2022 benchmark functions. This analysis includes statistical evaluations, convergence behavior, search dynamics, fitness assessments, diversity metrics, and box plot visualizations. The Engineering Design Optimization Applications section demonstrates the practical applicability of APO-JADE by addressing two real-world engineering optimization problems: the design of a planetary gear train and a three-bar truss structure. This section includes detailed problem formulations, fitness functions, constraints, and optimization outcomes. Finally, the Conclusion section summarizes the key findings, underscores the significant enhancements achieved by APO-JADE, and proposes potential avenues for future research and development.
2. Literature Review
In the domain of engineering design optimization, a variety of hybrid algorithms have been developed to address the limitations of standalone optimization methods. These hybrid strategies aim to enhance solution precision, accelerate convergence rates, and improve the capability to evade local optima by integrating the complementary strengths of multiple algorithms.
Hu et al. [16] proposed the Dynamic Hybrid Dandelion Optimizer (DETDO), which incorporates dynamic tent chaotic mapping, differential evolution (DE), and dynamic t-distribution perturbation. This method addresses the original dandelion optimizer’s shortcomings, such as limited exploitation capabilities and slow convergence rates. Experimental findings reveal that DETDO delivers superior optimization accuracy and faster convergence, establishing its suitability for real-world engineering challenges. Saberi et al. [17] introduced a biomimetic electrospun cartilage decellularized matrix (CDM)/chitosan nanofiber hybrid material for tissue engineering, optimized using the Box–Behnken design to achieve optimal mechanical properties and structural characteristics. This hybrid material demonstrated improved cell proliferation and enhanced nanofiber properties, making it a promising solution for tissue engineering applications.
Verma and Parouha [18] developed the haDEPSO algorithm, a hybrid approach combining advanced differential evolution (aDE) and particle swarm optimization (aPSO). This algorithm achieves a balance between global and local search capabilities, leading to superior solutions for intricate engineering optimization problems. Similarly, Hashim et al. [19] introduced AOA-BSA, a hybrid optimization algorithm that merges the Archimedes Optimization Algorithm (AOA) with the Bird Swarm Algorithm (BSA). This integration enhances the exploitation phase while maintaining a balance between exploration and exploitation, demonstrating exceptional performance in solving both constrained and unconstrained engineering problems. Zhang et al. [20] presented the CSDE hybrid algorithm, which combines Cuckoo Search (CS) with differential evolution (DE). By segmenting the population into subgroups and independently applying CS and DE, the algorithm avoids premature convergence and achieves superior global optima for constrained engineering problems.
Sun [21] proposed a hybrid role-engineering optimization method that integrates natural language processing with integer linear programming to construct optimal role-based access control systems, significantly improving security. Verma and Parouha [22] further extended their work on haDEPSO for constrained function optimization, demonstrating its effectiveness in solving complex engineering challenges by employing a multi-population strategy. This approach combines advanced differential evolution with Particle Swarm Optimization, outperforming other state-of-the-art algorithms. Lastly, Panagant et al. [23] developed the HMPANM algorithm, which integrates the Marine Predators Optimization Algorithm with the Nelder–Mead method. This hybrid algorithm has proven highly effective in optimizing structural design problems within the automotive industry, showcasing its practical application in industrial component optimization.
Yildiz and Mehta [24] proposed the HTSSA-NM and MRFO algorithms to optimize the structural and shape parameters of automobile brake pedals. These algorithms demonstrated strong performance in achieving lightweight, efficient designs, outperforming several established metaheuristics. Similarly, Duan and Yu [25] introduced a collaboration-based hybrid GWO-SCA optimizer (cHGWOSCA), which integrates the Grey Wolf Optimizer (GWO) and the Sine Cosine Algorithm (SCA). This hybrid method enhances global exploration and local exploitation, achieving notable success in global optimization and solving constrained engineering design problems. Barshandeh et al. [26] developed the HMPA, a hybrid multi-population algorithm that combines artificial ecosystem-based optimization with Harris Hawks Optimization. By dynamically exchanging solutions among sub-populations, this approach effectively balances exploration and exploitation, solving a wide array of engineering optimization challenges.
Uray et al. [27] presented a hybrid harmony search algorithm augmented by the Taguchi method to optimize algorithm parameters for engineering design problems. This combination improves the robustness and effectiveness of the optimization process by leveraging statistical methods for parameter estimation. Varaee et al. [28] introduced a hybrid algorithm that combines Particle Swarm Optimization (PSO) with the Generalized Reduced Gradient (GRG) algorithm, achieving a balance between exploration and exploitation. This method exhibited competitive results when applied to benchmark optimization problems and constrained engineering challenges. Fakhouri et al. [29] proposed a novel hybrid evolutionary algorithm that integrates PSO, the Sine Cosine Algorithm (SCA), and the Nelder–Mead Simplex (NMS) optimization method, significantly enhancing the search process and demonstrating superior performance in solving engineering design problems.
Dhiman [30] introduced the SSC algorithm, a hybrid metaheuristic combining sine–cosine functions with the Spotted Hyena Optimizer’s attack strategy and the Chimp Optimization Algorithm. This approach proved effective in addressing real-world complex problems and engineering applications. Kundu and Garg [31] developed the LSMA-TLBO algorithm, which integrates the Slime Mould Algorithm (SMA) with Teaching–Learning-Based Optimization (TLBO) and employs Lévy flight-based mutation. This hybrid approach achieved remarkable performance in numerical optimization and engineering design problems. Yang et al. [32] optimized a ladder-shaped hybrid anode for GaN-on-Si Schottky Barrier Diodes, achieving reduced reverse leakage current and exceptional electrical characteristics.
Yang et al. [33] proposed a hybrid proxy model for optimizing engineering parameters in deflagration fracturing for shale reservoirs. This model effectively balances reservoir failure degree with stimulation range, providing an efficient solution for multi-objective optimization in deflagration fracturing engineering. Zhong et al. [34] introduced the Hybrid Remora Crayfish Optimization Algorithm (HRCOA), designed to address continuous optimization problems and wireless sensor network coverage optimization. The algorithm demonstrated scalability and effectiveness across various optimization scenarios. Yildiz and Erdaş [35] developed the Hybrid Taguchi–Salp Swarm Optimization Algorithm (HTSSA), specifically aimed at enhancing the optimization of structural design problems in industry. This algorithm achieved superior results in shape optimization challenges when compared to recent optimization techniques.
Cheng et al. [36] proposed a robust optimization methodology for engineering structures with hybrid probabilistic and interval uncertainties. By incorporating stochastic and interval uncertain system parameters, the approach utilized a multi-layered refining Latin hypercube sampling-based Monte Carlo simulation and a novel genetic algorithm to solve robust optimization problems. The method was validated through complex engineering structural applications. Finally, Huang and Hu [37] developed the Hybrid Beluga Whale Optimization Algorithm (HBWO), which integrates Quasi-Oppositional-Based Learning (QOBL), dynamic and spiral predation strategies, and the Nelder–Mead Simplex search method. The HBWO algorithm demonstrated exceptional feasibility and effectiveness in solving practical engineering problems.
Tang et al. [38] introduced the Multi-Strategy Particle Swarm Optimization Hybrid Dandelion Optimization Algorithm (PSODO), which addresses challenges such as slow convergence rates and susceptibility to local optima. This algorithm demonstrated substantial improvements in global optimization accuracy, convergence speed, and computational efficiency. Similarly, Chagwiza et al. [39] developed a hybrid matheuristic algorithm by integrating the Grotschel–Holland and Max–Min Ant System algorithms. This approach proved effective in solving complex design and network engineering problems by increasing the certainty of achieving optimal solutions. Liu et al. [40] proposed a hybrid algorithm that combines the Seeker Optimization Algorithm with Particle Swarm Optimization, achieving superior performance on benchmark functions and in constrained engineering optimization scenarios.
Adegboye and Ülker [41] presented the AEFA-CSR, a hybrid algorithm that integrates the Cuckoo Search Algorithm with Refraction Learning into the Artificial Electric Field Algorithm. This integration enhances convergence rates and solution precision, yielding promising results across benchmark functions and engineering applications. Wang et al. [42] proposed the Improved Hybrid Aquila Optimizer and Harris Hawks Algorithm (IHAOHHO), which showed exceptional performance in standard benchmark functions and industrial engineering design problems. Kundu and Garg [43] introduced the TLNNABC, a hybrid algorithm combining the Artificial Bee Colony (ABC) algorithm with the Neural Network Algorithm (NNA) and Teaching–Learning-Based Optimization (TLBO). This approach demonstrated remarkable effectiveness in reliability optimization and engineering design applications.
Knypiński et al. [44] employed hybrid variations of the Cuckoo Search (CS) and Grey Wolf Optimization (GWO) algorithms to optimize steady-state functional parameters of LSPMSMs while adhering to non-linear constraint functions. The primary goal was to optimize motor performance parameters, such as efficiency and operational stability. The hybridization leveraged the exploratory capabilities of one algorithm and the exploitative strengths of the other, resulting in a balanced search mechanism capable of escaping local optima and improving overall optimization outcomes.
Dhiman [45] proposed the Emperor Salp Algorithm (ESA), a hybrid bio-inspired metaheuristic optimization method that integrates the strengths of the Emperor Penguin Optimizer with the Salp Swarm Algorithm. This hybrid approach demonstrated superior robustness and the ability to achieve optimal solutions, outperforming several competing algorithms in comparative evaluations.
2.1. Overview of Arctic Puffin Optimization (APO)
The Arctic Puffin Optimization (APO) algorithm [14] is a bio-inspired metaheuristic approach developed to address complex engineering design optimization challenges. This algorithm draws inspiration from the survival strategies and foraging behaviors of Arctic puffins, incorporating two distinct phases: aerial flight (exploration) and underwater foraging (exploitation). These phases are meticulously designed to achieve a balance between global exploration and local exploitation, thereby improving the algorithm’s efficiency in locating optimal solutions [14].
The conceptual foundation of the APO algorithm is rooted in the behavior of Arctic puffins, which exhibit highly coordinated flight patterns and group foraging strategies to enhance their hunting efficiency. These birds fly at low altitudes, dive underwater to capture prey, and dynamically adapt their behaviors based on environmental conditions. When food resources are scarce, puffins adjust their underwater positions strategically to maximize foraging success and use signaling mechanisms to communicate with one another, thereby minimizing risks from predators [14].
2.2. Mathematical Model
2.2.1. Population Initialization
In the Arctic Puffin Optimization (APO) algorithm, each Arctic puffin symbolizes a candidate solution. The initial positions of these solutions are generated randomly within predefined bounds, as described by Equation (1) [14]:

X_i = lb + rand · (ub − lb), i = 1, 2, …, N    (1)

Here, X_i represents the position of the i-th puffin, rand denotes a random value uniformly distributed between 0 and 1, ub and lb indicate the upper and lower bounds of the search space, respectively, and N corresponds to the total population size.
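This initialization can be sketched in a few lines of Python; the function name and the NumPy-based vectorization are illustrative rather than part of the original algorithm description:

```python
import numpy as np

def initialize_population(n_pop, dim, lb, ub, rng=None):
    """Random uniform initialization within [lb, ub] (Eq. (1))."""
    rng = np.random.default_rng() if rng is None else rng
    # Scalar bounds are broadcast to full dim-length vectors.
    lb = np.broadcast_to(np.asarray(lb, dtype=float), (dim,))
    ub = np.broadcast_to(np.asarray(ub, dtype=float), (dim,))
    # X_i = lb + rand * (ub - lb), one row per puffin
    return lb + rng.random((n_pop, dim)) * (ub - lb)
```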
2.2.2. Aerial Flight Stage (Exploration)
This stage emulates the coordinated flight and searching behaviors exhibited by Arctic puffins, employing techniques such as Lévy flight and velocity factors to enable efficient exploration of the solution space.
Aerial Search Strategy
The positions of the puffins are updated using the Lévy flight mechanism, as represented in Equation (2) [14]. In this equation, X_i(t) denotes the position of the i-th puffin at iteration t, the Lévy flight distribution supplies a random step value, and R incorporates a normal distribution factor to introduce stochasticity and enhance exploration.
Swooping Predation Strategy
The swooping predation strategy, modeled by Equation (3), adjusts the displacement of the puffin to simulate a rapid dive for capturing prey [14], where S is a velocity coefficient.
2.2.3. Merging Candidate Positions
The positions obtained from the exploration and exploitation stages are combined and ranked based on their fitness values. From this sorted pool, the top N individuals are selected to constitute the updated population. This merging and selection process is mathematically represented by Equations (4)–(6) [14].
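Assuming the standard sort-and-truncate elitism described above (Equations (4)–(6)), the merging step can be sketched as follows; the helper name and the fitness-evaluation style are illustrative:

```python
import numpy as np

def merge_and_select(Y, Z, f, n_keep):
    """Concatenate exploration (Y) and exploitation (Z) candidates,
    rank them by fitness, and keep the best n_keep individuals."""
    pool = np.vstack([Y, Z])                      # merged candidate pool
    fit = np.apply_along_axis(f, 1, pool)         # fitness of every candidate
    order = np.argsort(fit)[:n_keep]              # indices of the n_keep best
    return pool[order], fit[order]
```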
2.3. Underwater Foraging Stage (Exploitation)
The underwater foraging stage involves strategies that enhance the algorithm’s local search capabilities. These include gathering foraging, intensifying search, and avoiding predators.
2.3.1. Gathering Foraging
In the gathering foraging strategy, puffins update their positions based on cooperative behavior, as shown in Equation (7), where F is the cooperative factor, set to 0.5.
2.3.2. Intensifying Search
The intensifying search strategy adjusts the search behavior when food resources are depleted, as described by Equations (8) and (9).
In these equations, T represents the total number of iterations, while t denotes the current iteration. The factor f is dynamically adjusted based on the iteration progress and incorporates a random component to enhance search intensification.
2.3.3. Avoiding Predators
The strategy for avoiding predators is modeled by Equation (10), which involves a uniformly distributed random variable within the range [0, 1].
The Arctic Puffin Optimization (APO) algorithm leverages a variety of exploration and exploitation mechanisms inspired by the natural behaviors of Arctic puffins. By incorporating Lévy flight for efficient global exploration, swooping predation for accelerated search, and adaptive underwater foraging strategies for intensified local search, the algorithm achieves a well-calibrated balance between exploration and exploitation. Additionally, the merging and selection processes further enhance the refinement of solutions, establishing APO as a highly effective approach for addressing complex optimization challenges.
2.4. Overview of Dynamic Differential Evolution with Optional External Archive (JADE)
The JADE optimizer (Dynamic Differential Evolution with Optional External Archive) [15] is a sophisticated extension of the traditional differential evolution (DE) algorithm. This advanced variant integrates dynamic parameter control mechanisms alongside an optional external archive to maintain and enhance population diversity. These features enable JADE to effectively address complex optimization challenges by achieving a balanced trade-off between exploration and exploitation [15].
2.5. Inspiration and Motivation
The development of the JADE optimizer stems from the objective of enhancing the conventional differential evolution algorithm by incorporating dynamic parameter adaptation and preserving diversity through the use of an external archive. This innovative approach significantly improves the algorithm’s efficiency and effectiveness in addressing a wide range of optimization challenges [15].
2.6. Mathematical Model
2.6.1. Population Initialization
In the JADE optimizer, the initial population is randomly generated within specified bounds, as represented by Equation (11) [15]:

X_i = lb + rand · (ub − lb), i = 1, 2, …, N    (11)

Here, X_i denotes the position of the i-th individual, rand is a random value uniformly distributed between 0 and 1, ub and lb represent the upper and lower bounds, respectively, and N is the population size.
2.6.2. Mutation Strategy
JADE employs a current-to-pbest mutation strategy, described by Equation (12):

v_i = x_i + F · (x_pbest − x_i) + F · (x_r1 − x_r2)    (12)

In this equation, v_i represents the mutant vector, x_i is the current vector, x_pbest is a randomly selected vector from the top-ranked portion of the population, x_r1 and x_r2 are randomly chosen vectors from the population, and F is the scaling factor.
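A minimal Python sketch of the current-to-pbest/1 mutation follows; for brevity it draws x_r2 from the current population only, omitting JADE's optional external archive, and the function name is illustrative:

```python
import numpy as np

def current_to_pbest_mutation(X, fitness, F, p=0.1, rng=None):
    """current-to-pbest/1 mutation (Eq. (12)), archive omitted for brevity."""
    rng = np.random.default_rng() if rng is None else rng
    n, dim = X.shape
    n_pbest = max(1, int(np.ceil(p * n)))
    pbest_idx = np.argsort(fitness)[:n_pbest]     # indices of the top 100p% individuals
    V = np.empty_like(X)
    for i in range(n):
        xp = X[rng.choice(pbest_idx)]             # random p-best vector
        others = np.delete(np.arange(n), i)       # exclude the current individual
        r1, r2 = rng.choice(others, size=2, replace=False)
        V[i] = X[i] + F[i] * (xp - X[i]) + F[i] * (X[r1] - X[r2])
    return V
```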
2.6.3. Crossover Strategy
The crossover operation in JADE is defined by Equation (13) [15]:

u_{i,j} = v_{i,j}, if rand_j ≤ CR or j = j_rand; otherwise u_{i,j} = x_{i,j}    (13)

Here, u_i represents the trial vector, v_i is the mutant vector, x_i is the current vector, rand_j is a uniformly distributed random value, CR denotes the crossover rate, and j_rand is a randomly chosen index to ensure at least one element from the mutant vector is selected.
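The binomial crossover of Equation (13) can be sketched as follows; the forced index j_rand guarantees that each trial vector inherits at least one component from its mutant vector (the function name is illustrative):

```python
import numpy as np

def binomial_crossover(X, V, CR, rng=None):
    """Binomial crossover (Eq. (13)): take the mutant gene where rand_j <= CR_i
    or at one forced index j_rand; otherwise keep the current gene."""
    rng = np.random.default_rng() if rng is None else rng
    n, dim = X.shape
    U = X.copy()
    for i in range(n):
        mask = rng.random(dim) <= CR[i]
        mask[rng.integers(dim)] = True            # j_rand forces one mutant gene
        U[i, mask] = V[i, mask]
    return U
```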
2.6.4. Selection Strategy
The selection strategy determines the individuals for the next generation based on fitness evaluation, as expressed in Equation (14) [15]:

x_i(t+1) = u_i(t), if f(u_i(t)) ≤ f(x_i(t)); otherwise x_i(t+1) = x_i(t)    (14)

Here, f represents the fitness function used to evaluate the solutions.
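The one-to-one greedy selection can be sketched as follows (function name illustrative):

```python
import numpy as np

def greedy_selection(X, U, f):
    """Greedy selection (Eq. (14)): keep the trial vector when it is
    no worse than its parent; otherwise retain the parent."""
    fx = np.apply_along_axis(f, 1, X)             # parent fitness values
    fu = np.apply_along_axis(f, 1, U)             # trial fitness values
    better = fu <= fx
    X_next = np.where(better[:, None], U, X)      # row-wise replacement
    return X_next, np.where(better, fu, fx)
```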
2.6.5. Parameter Adaptation
JADE dynamically adjusts the parameters F and CR using historical data and a learning process, as modeled by Equations (15) and (16):

CR_i = randn(μ_CR, 0.1)    (15)
F_i = randc(μ_F, 0.1)    (16)

In these equations, randn(μ_CR, 0.1) denotes a normal distribution with mean μ_CR and standard deviation 0.1, and randc(μ_F, 0.1) denotes a Cauchy distribution with location parameter μ_F and scale 0.1. The means μ_CR and μ_F are updated at each generation from the parameter values of successful individuals.
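Assuming the standard JADE settings (standard deviation and scale fixed at 0.1, learning rate c = 0.1, arithmetic mean for successful CR values and Lehmer mean for successful F values), the adaptation step might look like this sketch; the function names are illustrative:

```python
import numpy as np

def sample_parameters(mu_CR, mu_F, n, rng=None):
    """Per-individual CR_i ~ N(mu_CR, 0.1) truncated to [0, 1] and
    F_i ~ Cauchy(mu_F, 0.1) truncated to 1 / regenerated when non-positive."""
    rng = np.random.default_rng() if rng is None else rng
    CR = np.clip(rng.normal(mu_CR, 0.1, n), 0.0, 1.0)
    F = mu_F + 0.1 * np.tan(np.pi * (rng.random(n) - 0.5))   # standard Cauchy draw
    F = np.minimum(F, 1.0)
    while np.any(F <= 0.0):                                  # regenerate bad draws
        bad = F <= 0.0
        F[bad] = mu_F + 0.1 * np.tan(np.pi * (rng.random(int(bad.sum())) - 0.5))
        F = np.minimum(F, 1.0)
    return CR, F

def update_means(mu_CR, mu_F, S_CR, S_F, c=0.1):
    """Adapt the means: arithmetic mean for CR, Lehmer mean for F."""
    if len(S_CR) > 0:
        mu_CR = (1 - c) * mu_CR + c * np.mean(S_CR)
    if len(S_F) > 0:
        S_F = np.asarray(S_F, dtype=float)
        mu_F = (1 - c) * mu_F + c * (np.sum(S_F ** 2) / np.sum(S_F))
    return mu_CR, mu_F
```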
2.6.6. External Archive
JADE incorporates an external archive to maintain a set of inferior solutions, enhancing population diversity and guiding the mutation process. The archive is updated by adding new solutions and removing older ones based on predefined criteria.
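One simple archive-update policy consistent with this description is sketched below; the random-eviction rule on overflow follows common JADE implementations, and the function name is illustrative:

```python
def update_archive(archive, failed_parents, max_size, rng):
    """Add parents replaced during selection to the archive; when it
    overflows, evict random members so its size stays at most max_size."""
    archive.extend(failed_parents)
    while len(archive) > max_size:
        archive.pop(rng.integers(len(archive)))   # random eviction
    return archive
```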
3. Hybrid Arctic Puffin Optimization (APO) with JADE
The hybrid Arctic Puffin Optimization (APO) with JADE represents a novel optimization algorithm that combines the complementary strengths of two distinct methods: Arctic Puffin Optimization (APO) and JADE (Dynamic Differential Evolution with Optional External Archive). APO draws inspiration from the natural behaviors of Arctic puffins, particularly their foraging and predation strategies, which are translated into exploration and exploitation mechanisms within the optimization framework [14]. Conversely, JADE enhances the traditional differential evolution (DE) algorithm by introducing dynamic parameter control and an external archive, significantly improving diversity maintenance and convergence efficiency.
By integrating these approaches, the hybrid algorithm benefits from JADE’s adaptive mechanisms, which dynamically adjust control parameters and incorporate an external archive of inferior solutions to enhance population diversity. Simultaneously, it capitalizes on APO’s robust exploration and exploitation strategies, inspired by the puffins’ efficient foraging behaviors. This combination results in a hybrid algorithm capable of navigating complex optimization landscapes, escaping local optima, and converging to high-quality solutions.
The development of the APO-JADE algorithm is motivated by the need for a more robust and efficient optimization tool that integrates the strengths of APO and JADE. The hybrid approach recognizes that different optimization strategies offer unique benefits, which, when combined, can effectively address their respective limitations.
JADE contributes to the hybrid algorithm through its dynamic parameter control and external archive mechanisms. The dynamic parameter control adjusts the crossover rate (CR) and scaling factor (F) based on the evolving search environment, ensuring adaptability and maintaining a balance between exploration (searching new regions) and exploitation (refining current solutions). Furthermore, the external archive stores inferior solutions, which can reintroduce diversity into the population, preventing premature convergence and enabling the algorithm to escape local optima.
APO enhances the hybrid algorithm through its bio-inspired mechanisms, which mimic the natural behaviors of Arctic puffins. These behaviors include collective foraging and dynamic adjustments to search strategies based on environmental feedback. During the exploration phase, modeled after puffins’ aerial flight, Lévy flights are employed to facilitate long-distance jumps in the solution space, enabling a broad search. In the exploitation phase, inspired by underwater foraging, the algorithm fine-tunes solutions around the best-found candidates, ensuring effective utilization of promising regions within the search space.
By integrating these complementary strategies, the APO-JADE algorithm achieves a comprehensive search capability that effectively balances exploration and exploitation. This results in a robust optimization process capable of addressing complex, high-dimensional problem spaces and discovering high-quality solutions.
3.1. Mathematical Model
3.1.1. JADE Parameters
The JADE parameters are initialized to ensure effective adaptation and diversity:
If the upper bound (ub) and lower bound (lb) are scalar values, they are expanded into vectors with a dimension equal to the problem’s dimensionality (dim). This ensures consistent boundary constraints across all dimensions of the search space.
3.1.2. Population Initialization
The population is generated using an initialization function that creates a random distribution of candidate solutions within the specified bounds. Here, N represents the population size, dim denotes the dimensionality of the problem, and ub and lb correspond to the upper and lower bounds, respectively.
3.1.3. Hybrid JADE-APO Loop
The optimization process alternates between the mechanisms of JADE and APO, iteratively refining the solution. This loop continues for a specified maximum number of iterations (T_max).
JADE Mechanism
During the initial half of the iterations, the algorithm operates under the JADE mechanism:

Evaluate the fitness of the current population:

fitness_i = f(P(i, :)), i = 1, 2, …, N

where fitness_i represents the fitness value of the i-th individual in the population P, computed using the fitness function f. The notation P(i, :) denotes the i-th individual across all dimensions.
Dynamically update CR and F using normal and Cauchy distributions:

CR_i = randn(μ_CR, 0.1), F_i = randc(μ_F, 0.1)

As outlined in the equations, CR_i denotes the crossover probability for the i-th individual, sampled from a normal distribution with mean μ_CR and standard deviation 0.1. Similarly, F_i represents the scaling factor for differential mutation, sampled from a Cauchy distribution with location μ_F and scale 0.1. Both parameters are dynamically constrained to remain within their valid ranges.
Mutant vectors are generated using the current-to-pbest mutation strategy, as described in Equation (27):

V_i(t) = X_i(t) + F · (X_pbest(t) − X_i(t)) + F · (X_r1(t) − X_r2(t))    (27)

Here, V_i(t) represents the mutant vector for the i-th individual at generation t, X_i(t) is the current position of the i-th individual, X_pbest(t) is the position of the p-best individual, and X_r1(t) and X_r2(t) are the positions of two randomly selected individuals. F serves as the scaling factor for mutation.
Crossover is applied to generate trial vectors, as expressed in Equation (28):

U_{i,j}(t) = V_{i,j}(t), if rand_j ≤ CR_i or j = j_rand; otherwise U_{i,j}(t) = X_{i,j}(t)    (28)

In this equation, U_i(t) represents the trial vector generated for the i-th individual. The crossover operation combines the mutant vector V_i(t) and the current vector X_i(t) based on a crossover probability CR_i. A randomly selected index j_rand ensures that at least one dimension is taken from the mutant vector.
The fitness of trial vectors is evaluated, and the population is updated as shown in Equation (29):

X_i(t+1) = U_i(t), if f(U_i(t)) ≤ f(X_i(t)); otherwise X_i(t+1) = X_i(t)    (29)

Here, X_i(t+1) represents the updated position of the i-th individual. If the fitness value of the trial vector U_i(t) is better than or equal to the fitness of the current vector X_i(t), the trial vector replaces the current vector; otherwise, the current vector is retained.
APO Mechanism
During the second half of the iterations, the algorithm employs the APO mechanism.
The behavioral conversion factor B is calculated using Equation (30). Here, B represents the factor that modulates the balance between exploration and exploitation, rand is a uniformly distributed random number, l is the current iteration, and T_max denotes the maximum number of iterations.
Positions are updated using Lévy flight and swooping strategies, as expressed in Equation (31). Here, Y represents the updated position, X is the current position, Levy denotes the Lévy flight operator, X_r is a randomly selected position, and randn is a normally distributed random number.
The updated positions are bounded within the search space using Equation (32):
Here, X represents the adjusted position, and the SpaceBound function ensures that all positions remain within the predefined upper ($ub$) and lower ($lb$) bounds.
The APO-JADE algorithm effectively integrates the dynamic parameter control and external archive features of JADE with the exploration and exploitation strategies of APO. This hybridization ensures a robust optimization process capable of addressing complex and multi-dimensional optimization problems. The APO mechanism also includes detailed behavioral modeling equations inspired by the natural foraging behaviors of Arctic puffins.
3.1.4. Behavioral Conversion Factor
The behavioral conversion factor B governs the transition between the exploration and exploitation phases, as shown in Equation (33):
3.1.5. Lévy Flight
The Lévy flight mechanism enables large, random jumps in the search space to enhance global exploration, as described in Equation (34):

$$Levy(\beta) = \frac{u}{|v|^{1/\beta}}, \qquad u \sim N(0, \sigma_u^2), \quad v \sim N(0, 1) \qquad (34)$$
Here, u and v are random variables sampled from normal distributions, and $\beta$ represents the stability parameter that controls the step-size distribution.
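A standard way to realize such Lévy-stable steps is Mantegna's algorithm; the sketch below assumes the common choice $\beta = 1.5$ (the paper's actual value may differ):

```python
import numpy as np
from math import gamma, pi, sin

rng = np.random.default_rng(3)

def levy_step(dim, beta=1.5):
    """Mantegna's algorithm for Levy-stable step sizes with stability parameter beta."""
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size=dim)   # numerator sample
    v = rng.normal(0.0, 1.0, size=dim)       # denominator sample
    return u / np.abs(v) ** (1 / beta)       # heavy-tailed step: occasional long jumps

steps = levy_step(10000)
```

The heavy tail is what produces the occasional long jumps that help the search escape local optima.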
3.1.6. Swooping Strategy
The swooping strategy mimics rapid predation behavior, facilitating local exploitation, as expressed in Equation (35):
In this equation, R is a randomly generated vector, and tan denotes the tangent function, used to simulate sharp directional changes during the swoop.
3.1.7. Bounding Positions
To ensure feasibility, the positions of individuals are bounded within the predefined search space, as shown in Equation (36):
Here, SpaceBound is a function that adjusts X so that it remains within the upper bound ($ub$) and lower bound ($lb$).
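One common realization of SpaceBound simply clamps each coordinate to the bounds; other variants re-sample out-of-bound coordinates uniformly instead. A minimal sketch:

```python
import numpy as np

def space_bound(x, lb, ub):
    """Clamp each coordinate of x back into [lb, ub] (clipping variant of SpaceBound)."""
    return np.minimum(np.maximum(x, lb), ub)

x = np.array([-7.0, 0.5, 12.0])
bounded = space_bound(x, -5.0, 5.0)
```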
3.2. Hybrid APO-JADE Algorithm
The Hybrid APO-JADE algorithm (see Algorithm 1) enhances the optimization process by dynamically adjusting the crossover rate ($CR$) and scaling factor ($F$) using normal and Cauchy distributions, respectively. Additionally, it employs an external archive to maintain diversity and mitigate premature convergence. The algorithm alternates between the JADE and APO mechanisms in two distinct phases, ensuring a robust balance between exploration and exploitation; it thereby efficiently navigates complex optimization landscapes and converges to high-quality solutions. The steps of the algorithm are shown in
Figure 1.
Algorithm 1 Hybrid APO-JADE algorithm.
- 1: Initialize: Set parameters $\mu_{CR}$, $\mu_F$, $p$, $c$
- 2: Initialize archive A as empty
- 3: Initialize iteration counter $l \leftarrow 1$
- 4: Generate initial population P of size $N$ with $D$ dimensions using bounds $lb$ and $ub$
- 5: Evaluate fitness of initial population
- 6: while $l \le l_{max}$ do
- 7: if $l \le l_{max}/2$ then
- 8: JADE Mechanism:
- 9: for each individual i in population do
- 10: Compute $CR_i$ and ensure $CR_i \in [0, 1]$
- 11: Compute $F_i$ and ensure $F_i \in (0, 1]$
- 12: end for
- 13: Identify the best individual and select the top $100p\%$ of individuals as the p-best set
- 14: for each individual i do
- 15: Generate mutant vector using Equation (27)
- 16: Generate trial vector using crossover (Equation (28))
- 17: end for
- 18: for each individual i do
- 19: Bound the trial vector using Equation (36)
- 20: Evaluate fitness of the trial vector
- 21: Update population and archive if the trial vector improves the fitness
- 22: end for
- 23: Update control parameters $\mu_{CR}$ and $\mu_F$
- 24: else
- 25: APO Mechanism:
- 26: for each individual i do
- 27: Compute behavioral conversion factor B (Equation (30))
- 28: if B indicates exploration then
- 29: Perform Lévy flight using Equation (34)
- 30: else
- 31: Apply dynamic search strategies
- 32: end if
- 33: Bound positions using Equation (36)
- 34: Update population based on fitness
- 35: end for
- 36: end if
- 37: Update the best solution found so far
- 38: Increment iteration counter l
- 39: end while
- 40: Output: Best solution and its fitness value
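The two-phase structure of Algorithm 1 can be condensed into a minimal runnable sketch. This is an assumption-laden simplification: a plain current-to-best differential step stands in for the full JADE phase, a decaying random perturbation stands in for the APO phase, and the archive and adaptive $\mu_{CR}$/$\mu_F$ updates are omitted:

```python
import numpy as np

def apo_jade_sketch(objective, lb, ub, dim, n_pop=20, l_max=100, seed=0):
    """Simplified two-phase hybrid: DE-style updates for the first half of the
    iterations, perturbation-based updates for the second half."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lb, ub, size=(n_pop, dim))
    fit = np.array([objective(x) for x in pop])
    for l in range(1, l_max + 1):
        for i in range(n_pop):
            if l <= l_max // 2:  # JADE-style phase, cf. Equations (27)-(29)
                best = pop[int(np.argmin(fit))]
                others = [j for j in range(n_pop) if j != i]
                r1, r2 = rng.choice(others, size=2, replace=False)
                trial = pop[i] + 0.5 * (best - pop[i]) + 0.5 * (pop[r1] - pop[r2])
            else:                # APO-style phase, cf. Equations (30)-(32)
                b = 2.0 * np.log(1.0 / rng.random()) * (1.0 - l / l_max)
                trial = pop[i] + b * rng.standard_normal(dim)
            trial = np.clip(trial, lb, ub)          # SpaceBound, Equation (36)
            f_trial = objective(trial)
            if f_trial <= fit[i]:                   # greedy replacement
                pop[i], fit[i] = trial, f_trial
    best_idx = int(np.argmin(fit))
    return pop[best_idx], fit[best_idx]

x_best, f_best = apo_jade_sketch(lambda x: float(np.sum(x * x)), -5.0, 5.0, dim=5)
```

Even this stripped-down version illustrates the key design choice: a contraction-heavy first phase followed by a perturbation phase whose step sizes shrink over the remaining iterations.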
9. Planetary Gear Train Design Optimization Problem
The provided code snippet addresses an optimization problem related to the design of a planetary gear train. The objective is to optimize the gear ratios and dimensions while ensuring adherence to all specified constraints. Below is a detailed explanation of the code’s components.
Initially, the code defines a very large penalty factor, which is utilized to impose substantial penalties within the fitness function for any violations of the constraints.
9.1. Parameter Initialization
The variables are initialized based on the input vector x:
x is rounded to ensure integer values, as gears must have integer numbers of teeth.
Two predefined arrays hold the admissible values for the number of planets and for the module sizes, respectively.
$N_1$, $N_2$, $N_3$, $N_4$, $N_5$, and $N_6$ represent the numbers of teeth on the different gears.
p is the number of planets, and $m_1$ and $m_2$ are the module sizes for the two gear pairs, selected from the array of admissible modules.
9.2. Fitness Function
The fitness function is designed to minimize the deviation between the actual and desired gear ratios, as follows:
$i_1$ represents the gear ratio of the first stage.
$i_2$ denotes the gear ratio of the second stage.
$i_R$ corresponds to the gear ratio of the ring gear.
The desired gear ratios are denoted as $i_{01}$, $i_{02}$, and $i_{0R}$.
The fitness function f is formulated as the maximum deviation of the three gear ratios from their desired values.
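The max-of-deviations formulation can be sketched as follows; the target values passed in here are illustrative placeholders, not the problem's actual desired ratios:

```python
def ratio_deviation_fitness(i1, i2, ir, targets):
    """Maximum absolute deviation of the three realized gear ratios from their
    desired values: f = max(|i1 - i01|, |i2 - i02|, |iR - i0R|)."""
    i01, i02, i0r = targets
    return max(abs(i1 - i01), abs(i2 - i02), abs(ir - i0r))

# Illustrative targets only; a design hitting all three exactly has zero deviation.
f_exact = ratio_deviation_fitness(3.11, 1.84, -3.11, targets=(3.11, 1.84, -3.11))
f_off = ratio_deviation_fitness(3.21, 1.84, -3.11, targets=(3.11, 1.84, -3.11))
```

Minimizing the maximum deviation (rather than, say, their sum) forces all three ratios to be close to target simultaneously.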
9.3. Constraints
Various constraints are defined to ensure the gear design is feasible:
$D_{max}$ is the maximum allowable diameter.
Six predefined constants represent the allowable deviations used in the individual constraints.
$\beta$ is the angle calculated based on the gear geometry.
The constraints $g_1$ to $g_{11}$ are defined as follows:
$g_1$: constraint on the maximum diameter for the first gear pair and its module.
$g_2$: constraint on the maximum diameter for the second gear pair and its module.
$g_3$: constraint on the maximum diameter for the third gear pair and its module.
$g_4$: constraint on the compatibility of the meshing gear sizes.
$g_5$: minimum tooth-addendum constraint for the first meshing gear pair.
$g_6$: minimum tooth-addendum constraint for the second meshing gear pair.
$g_7$: minimum tooth-addendum constraint for the third meshing gear pair.
$g_8$: geometric constraint involving the angle $\beta$, ensuring the gear arrangement is physically feasible.
$g_9$: constraint on the positioning of the planet gear relative to its adjacent gears.
$g_{10}$: constraint on the positioning of the planet gear relative to its other adjacent gears.
$g_{11}$: constraint ensuring that the relevant tooth-count difference is a multiple of p, so that the planets can be equally spaced.
9.4. Penalty Calculation
A penalty term is calculated to heavily penalize any violation of the constraints:
The penalty term accumulates the squared violations of each constraint, scaled by the large penalty factor.
The GetInequality function (assumed to be defined elsewhere) likely returns a boolean indicating whether a constraint is violated.
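An exterior-penalty scheme of this kind can be sketched as follows; the magnitude of R and the use of the positive part of each constraint value are assumptions based on the description above:

```python
import numpy as np

R = 1e10  # large penalty factor (illustrative magnitude)

def penalised_fitness(f, constraints):
    """Add R times the sum of squared violations. Constraints follow the
    convention g <= 0 means satisfied, so only the positive part of each
    g contributes (a standard exterior penalty)."""
    violations = np.maximum(np.asarray(constraints, dtype=float), 0.0)
    return f + R * float(np.sum(violations ** 2))

feasible = penalised_fitness(0.5, [-1.0, -0.2])    # no constraint violated
infeasible = penalised_fitness(0.5, [2.0, -1.0])   # first constraint violated by 2
```

Because violations are squared and scaled by R, even a small infeasibility dominates the raw fitness, steering the search back into the feasible region.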
9.5. Penalized Fitness Function
The final fitness function integrates the raw fitness f and the penalty term, as expressed in Equation (56):
This formulation ensures that any design violating the constraints incurs a significantly higher fitness value due to the large penalty factor, effectively discouraging such solutions. The objective of the optimization process is to minimize this penalized fitness value, thereby identifying a planetary gear train design that aligns with the desired gear ratios while satisfying all imposed constraints.
As presented in Table 9 and Table 10, the results for the planetary gear train design optimization problem demonstrate the performance of the various algorithms, measured by the best fitness value achieved, alongside the corresponding design variables.
The APO-JADE algorithm attained the best fitness value of 0.525769; the corresponding design variables are listed in Table 10. This outcome indicates that APO-JADE identified the best solution, minimizing the deviation in gear ratios while adhering to all constraints.
The Grey Wolf Optimizer (GWO) closely followed APO-JADE with a fitness value of 0.526281, indicating competitive performance with a slightly larger deviation. The Slime Mould Algorithm (SMA), with a fitness value of 0.537059, performed marginally worse than APO-JADE and GWO, reflecting a larger deviation from the target gear ratios. The Whale Swarm Optimization (WSO) and the Whale Optimization Algorithm (WOA) both achieved fitness values of around 0.53, demonstrating reasonable, though less optimal, performance compared to APO-JADE.
Other algorithms, such as COOT and ChOA, produced fitness values of around 0.537059, while the Owl Optimization Algorithm (OOA) and Binary Wolf Optimization (BWO) reported significantly higher values of 0.774379 and 0.868333, respectively, indicating suboptimal solutions in comparison to APO-JADE.