Article

Hybrid Arctic Puffin Algorithm for Solving Design Optimization Problems

by Hussam N. Fakhouri 1,*, Mohannad S. Alkhalaileh 2, Faten Hamad 3,4, Najem N. Sirhan 5 and Sandi N. Fakhouri 6
1 Data Science and Artificial Intelligence Department, Faculty of Information Technology, University of Petra, Amman 11932, Jordan
2 College of Education, Humanities and Social Sciences, Al Ain University, Al-Ain P.O. Box 64141, United Arab Emirates
3 Information Studies Department, College of Arts and Social Sciences, Sultan Qaboos University, Muscat 123, Oman
4 Library and Information Science Department, School of Educational Sciences, University of Jordan, Amman 11962, Jordan
5 Computer Science Department, Faculty of Information Technology, University of Petra, Amman 11932, Jordan
6 Computer Science Department, King Abdullah II School of Information Technology, University of Jordan, Amman 11962, Jordan
* Author to whom correspondence should be addressed.
Algorithms 2024, 17(12), 589; https://doi.org/10.3390/a17120589
Submission received: 10 November 2024 / Revised: 14 December 2024 / Accepted: 16 December 2024 / Published: 20 December 2024
(This article belongs to the Section Algorithms for Multidisciplinary Applications)

Abstract:
This study presents an innovative hybrid evolutionary algorithm that combines the Arctic Puffin Optimization (APO) algorithm with the JADE dynamic differential evolution framework. The APO algorithm, inspired by the foraging patterns of Arctic puffins, demonstrates certain challenges, including a tendency to converge prematurely at local minima, a slow rate of convergence, and an insufficient equilibrium between the exploration and exploitation processes. To mitigate these drawbacks, the proposed hybrid approach incorporates the dynamic features of JADE, which enhances the exploration–exploitation trade-off through adaptive parameter control and the use of an external archive. By synergizing the effective search mechanisms modeled after the foraging behavior of Arctic puffins with JADE’s advanced dynamic strategies, this integration significantly improves global search efficiency and accelerates the convergence process. The effectiveness of APO-JADE is demonstrated through benchmark tests against well-known IEEE CEC 2022 unimodal and multimodal functions, showing superior performance over 32 competing optimization algorithms. Additionally, APO-JADE is applied to complex engineering design problems, including the optimization of engineering structures and mechanisms, revealing its practical utility in navigating challenging, multi-dimensional search spaces typically encountered in real-world engineering problems. The results confirm that APO-JADE outperformed all of the compared optimizers, effectively addressing the challenges of unknown and complex search areas in engineering design optimization.

1. Introduction

In the field of engineering design, the quest for optimal solutions represents a formidable challenge. This difficulty arises primarily due to the complex, multi-dimensional nature of design spaces, where multiple variables and interdependencies must be navigated simultaneously [1]. Moreover, the intricate constraints that govern these spaces—ranging from material properties and environmental considerations to budgetary and time constraints—further complicate the optimization process [2]. Traditional optimization methods, such as gradient-based techniques and linear programming, often grapple with several limitations that undermine their effectiveness in such complex scenarios [3].
One of the most significant limitations of traditional methods is their tendency to converge to local optima rather than global ones. This issue is particularly problematic in complex design landscapes that feature numerous feasible solutions, separated by suboptimal regions [4]. Furthermore, traditional methods often entail high computational costs, especially as the dimensionality of the problem increases, which can make them impractical for large-scale applications or for use in real-time scenarios [5]. Additionally, these methods typically rely on the availability and accuracy of gradient information, which may not be obtainable for all types of problems, such as those involving discontinuous or non-differentiable spaces [6].
In recent years, there has been significant growth in the development of metaheuristic algorithms to address these challenges [7]. These algorithms provide a powerful, adaptable, and efficient approach to solving optimization problems in diverse fields [8]. Unlike traditional methods, metaheuristics operate independently of gradient information, employing strategies inspired by natural or social processes to locate optimal solutions [9]. Their flexibility enables their application across a broad spectrum of problems, ranging from abstract mathematical models to real-world engineering design challenges [10,11]. Additionally, metaheuristics are highly regarded for their capability to overcome local optima, ensuring a more thorough exploration of the solution space and enhancing the probability of identifying global optima [12]. Due to their stochastic and dynamic nature, these algorithms are particularly well suited for addressing the complexities and uncertainties characteristic of engineering design, offering innovative and efficient solutions in a rapidly advancing technological landscape [13].
The Arctic Puffin Optimization (APO) algorithm, introduced by [14], is a recently developed metaheuristic inspired by the distinctive foraging strategies of Arctic puffins. This algorithm draws on the puffins’ ability to navigate and adapt to the harsh and variable conditions of the Arctic, effectively balancing exploration and exploitation phases. Despite its potential, APO, like many nature-inspired algorithms, can benefit from enhancements to address complex, multimodal engineering design problems more effectively.
In this study, we propose an innovative hybrid algorithm that synergizes the strengths of Arctic Puffin Optimization with JADE, an advanced differential evolution algorithm characterized by its dynamic self-adaptive parameter control mechanism [15]. JADE’s dynamic parameter tuning significantly accelerates convergence and improves the precision of the evolutionary process, making it particularly advantageous for continuous optimization tasks. By integrating JADE’s adaptive strategies with the robust exploratory capabilities of APO, the resulting hybrid algorithm is designed to enhance performance in tackling challenging engineering design problems.
This research introduces the Hybrid Arctic Puffin Optimization with JADE (APO-JADE) algorithm, demonstrating its application in solving engineering design problems. Through comprehensive comparative evaluations with established optimization techniques, the results underscore the superior efficiency, convergence speed, and solution quality of APO-JADE. Furthermore, the versatility of the proposed algorithm is validated through its application to real-world engineering problems, highlighting its potential as a robust and effective tool for complex design optimization challenges.

1.1. Research Contribution

The primary contributions of this research are outlined as follows:
  • Proposal of the Hybrid APO-JADE Algorithm: This study introduces the Hybrid Arctic Puffin Optimization with JADE (APO-JADE) algorithm, integrating the exploratory capabilities of APO with the adaptive evolutionary mechanisms of JADE to enhance optimization performance and reliability.
  • Superior Performance on Standardized Benchmarks: Through extensive evaluations on the CEC2022 benchmark functions, APO-JADE is demonstrated to outperform existing algorithms, achieving superior convergence speed, solution accuracy, and computational efficiency.
  • Practical Applications in Engineering Design: The APO-JADE algorithm has been effectively applied to optimize the design of planetary gear trains and three-bar truss structures, validating its robustness and practical utility in solving complex engineering optimization problems.
  • Statistical Analysis and Comparison with the State of the Art: This research includes a thorough statistical comparison of APO-JADE with other state-of-the-art optimization techniques, highlighting its superior performance across various metrics.
  • Versatility and Applicability: The adaptability of APO-JADE is showcased through its ability to handle diverse optimization landscapes, emphasizing its potential as a general-purpose optimization technique for a variety of industrial applications.

1.2. Paper Structure

This paper is structured as follows: The Literature Review section examines advancements in engineering design optimization, with a particular focus on hybrid algorithms developed to address the limitations of standalone optimization methods. This section emphasizes improvements in solution accuracy, convergence rates, and the ability to overcome local optima by leveraging the complementary strengths of multiple algorithms. The Hybrid Arctic Puffin Optimization with JADE (APO-JADE) section introduces the conceptual framework of the APO-JADE algorithm, elaborating on the integration of APO’s nature-inspired mechanisms with JADE’s adaptive parameter control and external archive strategies. The Mathematical Model section presents the mathematical formulations underpinning the APO-JADE algorithm, encompassing initialization, population dynamics, hybridization mechanisms, and the equations guiding its exploration and exploitation phases.
The Results and Discussion section provides a comprehensive performance analysis of APO-JADE based on rigorous testing using CEC2022 benchmark functions. This analysis includes statistical evaluations, convergence behavior, search dynamics, fitness assessments, diversity metrics, and box plot visualizations. The Engineering Design Optimization Applications section demonstrates the practical applicability of APO-JADE by addressing two real-world engineering optimization problems: the design of a planetary gear train and a three-bar truss structure. This section includes detailed problem formulations, fitness functions, constraints, and optimization outcomes. Finally, the Conclusion section summarizes the key findings, underscores the significant enhancements achieved by APO-JADE, and proposes potential avenues for future research and development.

2. Literature Review

In the domain of engineering design optimization, a variety of hybrid algorithms have been developed to address the limitations of standalone optimization methods. These hybrid strategies aim to enhance solution precision, accelerate convergence rates, and improve the capability to evade local optima by integrating the complementary strengths of multiple algorithms.
Hu et al. [16] proposed the Dynamic Hybrid Dandelion Optimizer (DETDO), which incorporates dynamic tent chaotic mapping, differential evolution (DE), and dynamic t-distribution perturbation. This method addresses the original dandelion optimizer’s shortcomings, such as limited exploitation capabilities and slow convergence rates. Experimental findings reveal that DETDO delivers superior optimization accuracy and faster convergence, establishing its suitability for real-world engineering challenges. Similarly, Saberi et al. [17] introduced a biomimetic electrospun cartilage decellularized matrix (CDM)/chitosan nanofiber hybrid material for tissue engineering, optimized using the Box–Behnken design to achieve optimal mechanical properties and structural characteristics. This hybrid material demonstrated improved cell proliferation and enhanced nanofiber properties, making it a promising solution for tissue engineering applications.
Verma and Parouha [18] developed the haDEPSO algorithm, a hybrid approach combining advanced differential evolution (aDE) and particle swarm optimization (aPSO). This algorithm achieves a balance between global and local search capabilities, leading to superior solutions for intricate engineering optimization problems. Similarly, Hashim et al. [19] introduced AOA-BSA, a hybrid optimization algorithm that merges the Archimedes Optimization Algorithm (AOA) with the Bird Swarm Algorithm (BSA). This integration enhances the exploitation phase while maintaining a balance between exploration and exploitation, demonstrating exceptional performance in solving both constrained and unconstrained engineering problems. Zhang et al. [20] presented the CSDE hybrid algorithm, which combines Cuckoo Search (CS) with differential evolution (DE). By segmenting the population into subgroups and independently applying CS and DE, the algorithm avoids premature convergence and achieves superior global optima for constrained engineering problems.
Sun [21] proposed a hybrid role-engineering optimization method that integrates natural language processing with integer linear programming to construct optimal role-based access control systems, significantly improving security. Verma and Parouha [22] further extended their work on haDEPSO for constrained function optimization, demonstrating its effectiveness in solving complex engineering challenges by employing a multi-population strategy. This approach combines advanced differential evolution with Particle Swarm Optimization, outperforming other state-of-the-art algorithms. Lastly, Panagant et al. [23] developed the HMPANM algorithm, which integrates the Marine Predators Optimization Algorithm with the Nelder–Mead method. This hybrid algorithm has proven highly effective in optimizing structural design problems within the automotive industry, showcasing its practical application in industrial component optimization.
Yildiz and Mehta [24] proposed the HTSSA-NM and MRFO algorithms to optimize the structural and shape parameters of automobile brake pedals. These algorithms demonstrated strong performance in achieving lightweight, efficient designs, outperforming several established metaheuristics. Similarly, Duan and Yu [25] introduced a collaboration-based hybrid GWO-SCA optimizer (cHGWOSCA), which integrates the Grey Wolf Optimizer (GWO) and the Sine Cosine Algorithm (SCA). This hybrid method enhances global exploration and local exploitation, achieving notable success in global optimization and solving constrained engineering design problems. Barshandeh et al. [26] developed the HMPA, a hybrid multi-population algorithm that combines artificial ecosystem-based optimization with Harris Hawks Optimization. By dynamically exchanging solutions among sub-populations, this approach effectively balances exploration and exploitation, solving a wide array of engineering optimization challenges.
Uray et al. [27] presented a hybrid harmony search algorithm augmented by the Taguchi method to optimize algorithm parameters for engineering design problems. This combination improves the robustness and effectiveness of the optimization process by leveraging statistical methods for parameter estimation. Varaee et al. [28] introduced a hybrid algorithm that combines Particle Swarm Optimization (PSO) with the Generalized Reduced Gradient (GRG) algorithm, achieving a balance between exploration and exploitation. This method exhibited competitive results when applied to benchmark optimization problems and constrained engineering challenges. Fakhouri et al. [29] proposed a novel hybrid evolutionary algorithm that integrates PSO, the Sine Cosine Algorithm (SCA), and the Nelder–Mead Simplex (NMS) optimization method, significantly enhancing the search process and demonstrating superior performance in solving engineering design problems.
Dhiman [30] introduced the SSC algorithm, a hybrid metaheuristic combining sine–cosine functions with the Spotted Hyena Optimizer’s attack strategy and the Chimp Optimization Algorithm. This approach proved effective in addressing real-world complex problems and engineering applications. Kundu and Garg [31] developed the LSMA-TLBO algorithm, which integrates the Slime Mould Algorithm (SMA) with Teaching–Learning-Based Optimization (TLBO) and employs Lévy flight-based mutation. This hybrid approach achieved remarkable performance in numerical optimization and engineering design problems. Yang et al. [32] optimized a ladder-shaped hybrid anode for GaN-on-Si Schottky Barrier Diodes, achieving reduced reverse leakage current and exceptional electrical characteristics.
Yang et al. [33] proposed a hybrid proxy model for optimizing engineering parameters in deflagration fracturing for shale reservoirs. This model effectively balances reservoir failure degree with stimulation range, providing an efficient solution for multi-objective optimization in deflagration fracturing engineering. Zhong et al. [34] introduced the Hybrid Remora Crayfish Optimization Algorithm (HRCOA), designed to address continuous optimization problems and wireless sensor network coverage optimization. The algorithm demonstrated scalability and effectiveness across various optimization scenarios. Yildiz and Erdaş [35] developed the Hybrid Taguchi–Salp Swarm Optimization Algorithm (HTSSA), specifically aimed at enhancing the optimization of structural design problems in industry. This algorithm achieved superior results in shape optimization challenges when compared to recent optimization techniques.
Cheng et al. [36] proposed a robust optimization methodology for engineering structures with hybrid probabilistic and interval uncertainties. By incorporating stochastic and interval uncertain system parameters, the approach utilized a multi-layered refining Latin hypercube sampling-based Monte Carlo simulation and a novel genetic algorithm to solve robust optimization problems. The method was validated through complex engineering structural applications. Finally, Huang and Hu [37] developed the Hybrid Beluga Whale Optimization Algorithm (HBWO), which integrates Quasi-Oppositional-Based Learning (QOBL), dynamic and spiral predation strategies, and the Nelder–Mead Simplex search method. The HBWO algorithm demonstrated exceptional feasibility and effectiveness in solving practical engineering problems.
Tang et al. [38] introduced the Multi-Strategy Particle Swarm Optimization Hybrid Dandelion Optimization Algorithm (PSODO), which addresses challenges such as slow convergence rates and susceptibility to local optima. This algorithm demonstrated substantial improvements in global optimization accuracy, convergence speed, and computational efficiency. Similarly, Chagwiza et al. [39] developed a hybrid matheuristic algorithm by integrating the Grotschel–Holland and Max–Min Ant System algorithms. This approach proved effective in solving complex design and network engineering problems by increasing the certainty of achieving optimal solutions. Liu et al. [40] proposed a hybrid algorithm that combines the Seeker Optimization Algorithm with Particle Swarm Optimization, achieving superior performance on benchmark functions and in constrained engineering optimization scenarios.
Adegboye and Ülker [41] presented the AEFA-CSR, a hybrid algorithm that integrates the Cuckoo Search Algorithm with Refraction Learning into the Artificial Electric Field Algorithm. This integration enhances convergence rates and solution precision, yielding promising results across benchmark functions and engineering applications. Wang et al. [42] proposed the Improved Hybrid Aquila Optimizer and Harris Hawks Algorithm (IHAOHHO), which showed exceptional performance in standard benchmark functions and industrial engineering design problems. Kundu and Garg [43] introduced the TLNNABC, a hybrid algorithm combining the Artificial Bee Colony (ABC) algorithm with the Neural Network Algorithm (NNA) and Teaching–Learning-Based Optimization (TLBO). This approach demonstrated remarkable effectiveness in reliability optimization and engineering design applications.
Knypiński et al. [44] employed hybrid variations of the Cuckoo Search (CS) and Grey Wolf Optimization (GWO) algorithms to optimize steady-state functional parameters of LSPMSMs while adhering to non-linear constraint functions. The primary goal was to minimize design parameters related to motor performance, such as efficiency and operational stability. The hybridization leveraged the exploratory capabilities of one algorithm and the exploitative strengths of the other, resulting in a balanced search mechanism capable of escaping local optima and improving overall optimization outcomes.
Dhiman [45] proposed the Emperor Salp Algorithm (ESA), a hybrid bio-inspired metaheuristic optimization method that integrates the strengths of the Emperor Penguin Optimizer with the Salp Swarm Algorithm. This hybrid approach demonstrated superior robustness and the ability to achieve optimal solutions, outperforming several competing algorithms in comparative evaluations.

2.1. Overview of Arctic Puffin Optimization (APO)

The Arctic Puffin Optimization (APO) algorithm [14] is a bio-inspired metaheuristic approach developed to address complex engineering design optimization challenges. This algorithm draws inspiration from the survival strategies and foraging behaviors of Arctic puffins, incorporating two distinct phases: aerial flight (exploration) and underwater foraging (exploitation). These phases are meticulously designed to achieve a balance between global exploration and local exploitation, thereby improving the algorithm’s efficiency in locating optimal solutions [14].
The conceptual foundation of the APO algorithm is rooted in the behavior of Arctic puffins, which exhibit highly coordinated flight patterns and group foraging strategies to enhance their hunting efficiency. These birds fly at low altitudes, dive underwater to capture prey, and dynamically adapt their behaviors based on environmental conditions. When food resources are scarce, puffins adjust their underwater positions strategically to maximize foraging success and use signaling mechanisms to communicate with one another, thereby minimizing risks from predators [14].

2.2. Mathematical Model

2.2.1. Population Initialization

In the Arctic Puffin Optimization (APO) algorithm, each Arctic puffin symbolizes a candidate solution. The initial positions of these solutions are generated randomly within predefined bounds, as described by Equation (1) [14]:
$$X_i^t = \text{rand} \cdot (ub - lb) + lb, \quad i = 1, 2, 3, \ldots, N \tag{1}$$
Here, $X_i^t$ represents the position of the $i$-th puffin, rand denotes a random value uniformly distributed between 0 and 1, $ub$ and $lb$ indicate the upper and lower bounds of the search space, respectively, and $N$ corresponds to the total population size.
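As a minimal sketch of Equation (1) (assuming Python/NumPy; the function name is our own, not the authors'):

```python
import numpy as np

def initialize_population(N, dim, lb, ub, rng=None):
    """Equation (1): X_i = rand * (ub - lb) + lb for each of the N puffins."""
    if rng is None:
        rng = np.random.default_rng()
    # One uniform random draw per puffin and per dimension, scaled into [lb, ub].
    return lb + rng.random((N, dim)) * (ub - lb)

X = initialize_population(N=30, dim=5, lb=-10.0, ub=10.0)
```

Each row of `X` is one candidate solution; scalar bounds broadcast over all dimensions.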

2.2.2. Aerial Flight Stage (Exploration)

This stage emulates the coordinated flight and searching behaviors exhibited by Arctic puffins, employing techniques such as Lévy flight and velocity factors to enable efficient exploration of the solution space.

Aerial Search Strategy

The positions of the puffins are updated using the Lévy flight mechanism, as represented in Equation (2) [14]:
$$Y_i^{t+1} = X_i^t + (X_i^t - X_r^t) \cdot L(D) + R \tag{2}$$
In this equation, $X_i^t$ denotes the position of the $i$-th puffin at iteration $t$, $X_r^t$ is the position of a randomly selected puffin, $L(D)$ represents a random value generated based on the Lévy flight distribution, and $R$ incorporates a normal distribution factor to introduce stochasticity and enhance exploration.
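Equation (2) can be sketched as follows (an illustrative Python/NumPy implementation; the source only names a "Lévy flight distribution", so Mantegna's algorithm with $\beta = 1.5$ is an assumed concrete choice):

```python
import math
import numpy as np

def levy_flight(dim, beta=1.5, rng=None):
    """Levy-distributed step L(D) via Mantegna's algorithm (assumed form)."""
    if rng is None:
        rng = np.random.default_rng()
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def aerial_search(X, i, rng=None):
    """Equation (2): Y_i = X_i + (X_i - X_r) * L(D) + R."""
    if rng is None:
        rng = np.random.default_rng()
    r = rng.integers(len(X))              # index of a random peer X_r
    R = rng.normal(0.0, 1.0, X.shape[1])  # normally distributed factor R
    return X[i] + (X[i] - X[r]) * levy_flight(X.shape[1], rng=rng) + R

X = np.random.default_rng(1).random((10, 4))
Y0 = aerial_search(X, i=0)
```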

Swooping Predation Strategy

The swooping predation strategy, modeled by Equation (3), adjusts the displacement of the puffin to simulate a rapid dive for capturing prey [14]:
$$Z_i^{t+1} = Y_i^{t+1} \cdot S \tag{3}$$
where S is a velocity coefficient.

2.2.3. Merging Candidate Positions

The positions obtained from the exploration and exploitation stages are combined and ranked based on their fitness values. From this sorted pool, the top N individuals are selected to constitute the updated population. This merging and selection process is mathematically represented by Equations (4)–(6) [14]:
$$P_i^{t+1} = Y_i^{t+1} \cup Z_i^{t+1} \tag{4}$$
$$\text{new} = \text{sort}(P_i^{t+1}) \tag{5}$$
$$X_i^{t+1} = \text{new}(1{:}N) \tag{6}$$
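The merging and selection of Equations (4)–(6) can be sketched in Python/NumPy (an illustrative implementation; minimization and the function names are our assumptions):

```python
import numpy as np

def merge_and_select(Y, Z, fobj, N):
    """Equations (4)-(6): pool both candidate sets, sort by fitness,
    and keep the best N individuals (minimization assumed)."""
    P = np.vstack([Y, Z])                      # Equation (4): pool Y and Z
    order = np.argsort([fobj(p) for p in P])   # Equation (5): sort by fitness
    return P[order[:N]]                        # Equation (6): new(1:N)

sphere = lambda x: float(np.sum(x ** 2))
Y = np.array([[2.0, 2.0], [0.1, 0.1]])
Z = np.array([[1.0, 1.0], [3.0, 3.0]])
best = merge_and_select(Y, Z, sphere, N=2)
```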

2.3. Underwater Foraging Stage (Exploitation)

The underwater foraging stage involves strategies that enhance the algorithm’s local search capabilities. These include gathering foraging, intensifying search, and avoiding predators.

2.3.1. Gathering Foraging

In the gathering foraging strategy, puffins update their positions based on cooperative behavior, as shown in Equation (7):
$$W_i^{t+1} = \begin{cases} X_{r1} + F \cdot L(D) \cdot (X_{r2} - X_{r3}) & \text{if } \text{rand} \ge 0.5 \\ X_{r1} + F \cdot (X_{r2} - X_{r3}) & \text{if } \text{rand} < 0.5 \end{cases} \tag{7}$$
where $F$ is the cooperative factor, set to 0.5, and $X_{r1}$, $X_{r2}$, and $X_{r3}$ are three distinct, randomly selected individuals from the population.
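A sketch of Equation (7) in Python/NumPy (the Lévy factor $L(D)$ is approximated here by a heavy-tailed Cauchy draw, an assumption rather than the authors' exact generator):

```python
import numpy as np

def gathering_foraging(X, F=0.5, rng=None):
    """Equation (7): build W_i from three distinct random peers r1, r2, r3."""
    if rng is None:
        rng = np.random.default_rng()
    r1, r2, r3 = rng.choice(len(X), size=3, replace=False)
    if rng.random() >= 0.5:
        L = np.abs(rng.standard_cauchy(X.shape[1]))  # stand-in for L(D)
        return X[r1] + F * L * (X[r2] - X[r3])
    return X[r1] + F * (X[r2] - X[r3])

X = np.random.default_rng(2).random((8, 3))
W0 = gathering_foraging(X)
```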

2.3.2. Intensifying Search

The intensifying search strategy adjusts the search strategy when food resources are depleted, as described by Equations (8) and (9):
$$Y_i^{t+1} = W_i^{t+1} \cdot (1 + f) \tag{8}$$
$$f = 0.1 \cdot (\text{rand} - 1) \cdot \frac{T - t}{T} \tag{9}$$
In these equations, T represents the total number of iterations, while t denotes the current iteration. The factor f is dynamically adjusted based on the iteration progress and incorporates a random component to enhance search intensification.
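Equations (8) and (9) translate directly into code (an illustrative Python/NumPy sketch; the function name is ours):

```python
import numpy as np

def intensify(W, t, T, rng=None):
    """Equations (8)-(9): Y_i = W_i * (1 + f), where the perturbation
    f = 0.1 * (rand - 1) * (T - t) / T shrinks toward zero as t approaches T."""
    if rng is None:
        rng = np.random.default_rng()
    f = 0.1 * (rng.random() - 1.0) * (T - t) / T
    return W * (1.0 + f)

Y = intensify(np.ones(3), t=10, T=100)
```

Since rand is in [0, 1), $f$ is a small negative perturbation whose magnitude decays linearly over the iterations.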

2.3.3. Avoiding Predators

The strategy for avoiding predators is modeled by Equation (10):
$$Z_i^{t+1} = \begin{cases} X_i + F \cdot L(D) \cdot (X_{r1} - X_{r2}) & \text{if } \text{rand} \ge 0.5 \\ X_i + \beta \cdot (X_{r1} - X_{r2}) & \text{if } \text{rand} < 0.5 \end{cases} \tag{10}$$
Here, β represents a uniformly distributed random variable within the range [0, 1].
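A sketch of Equation (10) in Python/NumPy (again approximating $L(D)$ with a heavy-tailed Cauchy draw, an assumed stand-in):

```python
import numpy as np

def avoid_predators(X, i, F=0.5, rng=None):
    """Equation (10): perturb X_i using two distinct random peers r1, r2."""
    if rng is None:
        rng = np.random.default_rng()
    r1, r2 = rng.choice(len(X), size=2, replace=False)
    if rng.random() >= 0.5:
        L = np.abs(rng.standard_cauchy(X.shape[1]))  # stand-in for L(D)
        return X[i] + F * L * (X[r1] - X[r2])
    beta = rng.random()                              # beta ~ U[0, 1]
    return X[i] + beta * (X[r1] - X[r2])

X = np.random.default_rng(3).random((8, 3))
Z0 = avoid_predators(X, i=0)
```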
The Arctic Puffin Optimization (APO) algorithm leverages a variety of exploration and exploitation mechanisms inspired by the natural behaviors of Arctic puffins. By incorporating Lévy flight for efficient global exploration, swooping predation for accelerated search, and adaptive underwater foraging strategies for intensified local search, the algorithm achieves a well-calibrated balance between exploration and exploitation. Additionally, the merging and selection processes further enhance the refinement of solutions, establishing APO as a highly effective approach for addressing complex optimization challenges.

2.4. Overview of Dynamic Differential Evolution with Optional External Archive (JADE)

The JADE optimizer (Dynamic Differential Evolution with Optional External Archive) [15] is a sophisticated extension of the traditional differential evolution (DE) algorithm. This advanced variant integrates dynamic parameter control mechanisms alongside an optional external archive to maintain and enhance population diversity. These features enable JADE to effectively address complex optimization challenges by achieving a balanced trade-off between exploration and exploitation [15].

2.5. Inspiration and Motivation

The development of the JADE optimizer stems from the objective of enhancing the conventional differential evolution algorithm by incorporating dynamic parameter adaptation and preserving diversity through the use of an external archive. This innovative approach significantly improves the algorithm’s efficiency and effectiveness in addressing a wide range of optimization challenges [15].

2.6. Mathematical Model

2.6.1. Population Initialization

In the JADE optimizer, the initial population is randomly generated within specified bounds, as represented by Equation (11) [15]:
$$X_i^0 = \text{rand} \cdot (ub - lb) + lb, \quad i = 1, 2, 3, \ldots, N \tag{11}$$
Here, $X_i^0$ denotes the position of the $i$-th individual, rand is a random value uniformly distributed between 0 and 1, $ub$ and $lb$ represent the upper and lower bounds, respectively, and $N$ is the population size.

2.6.2. Mutation Strategy

JADE employs a current-to-pbest mutation strategy, described by Equation (12):
$$V_i^{t+1} = X_i^t + F \cdot (X_{pbest}^t - X_i^t) + F \cdot (X_{r1}^t - X_{r2}^t) \tag{12}$$
In this equation, $V_i^{t+1}$ represents the mutant vector, $X_i^t$ is the current vector, $X_{pbest}^t$ is a randomly selected vector from the top $p\%$ of the population, $X_{r1}^t$ and $X_{r2}^t$ are randomly chosen vectors from the population, and $F$ is the scaling factor.
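The current-to-pbest mutation of Equation (12) can be sketched as follows (an illustrative Python/NumPy implementation; minimization and the function name are our assumptions):

```python
import numpy as np

def current_to_pbest_mutation(X, fitness, i, F, p=0.05, rng=None):
    """Equation (12): V_i = X_i + F*(X_pbest - X_i) + F*(X_r1 - X_r2),
    with X_pbest drawn uniformly from the best p% of the population."""
    if rng is None:
        rng = np.random.default_rng()
    n_top = max(1, int(round(p * len(X))))
    top = np.argsort(fitness)[:n_top]      # indices of the best p% individuals
    pbest = X[rng.choice(top)]
    r1, r2 = rng.choice(len(X), size=2, replace=False)
    return X[i] + F * (pbest - X[i]) + F * (X[r1] - X[r2])

rng = np.random.default_rng(4)
X = rng.random((20, 5))
fit = (X ** 2).sum(axis=1)
V = current_to_pbest_mutation(X, fit, i=0, F=0.5, rng=rng)
```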

2.6.3. Crossover Strategy

The crossover operation in JADE is defined by Equation (13) [15]:
$$U_i^{t+1} = \begin{cases} V_i^{t+1} & \text{if } \text{rand}_j \le C_r \text{ or } j = j_{rand} \\ X_i^t & \text{if } \text{rand}_j > C_r \text{ and } j \ne j_{rand} \end{cases} \tag{13}$$
Here, $U_i^{t+1}$ represents the trial vector, $V_i^{t+1}$ is the mutant vector, $X_i^t$ is the current vector, $\text{rand}_j$ is a uniformly distributed random value drawn for the $j$-th component, $C_r$ denotes the crossover rate, and $j_{rand}$ is a randomly chosen index to ensure at least one element from the mutant vector is selected.
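A component-wise sketch of the binomial crossover in Equation (13) (Python/NumPy; the function name is ours):

```python
import numpy as np

def binomial_crossover(x, v, Cr, rng=None):
    """Equation (13): take the mutant component when rand_j <= Cr or
    j == j_rand; otherwise keep the current component."""
    if rng is None:
        rng = np.random.default_rng()
    dim = len(x)
    j_rand = rng.integers(dim)            # guarantees one gene from the mutant
    mask = rng.random(dim) <= Cr
    mask[j_rand] = True
    return np.where(mask, v, x)

u_all = binomial_crossover(np.zeros(6), np.ones(6), Cr=1.0)  # all mutant genes
u_one = binomial_crossover(np.zeros(6), np.ones(6), Cr=0.0)  # only j_rand gene
```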

2.6.4. Selection Strategy

The selection strategy determines the individuals for the next generation based on fitness evaluation, as expressed in Equation (14) [15]:
$$X_i^{t+1} = \begin{cases} U_i^{t+1} & \text{if } f(U_i^{t+1}) \le f(X_i^t) \\ X_i^t & \text{otherwise} \end{cases} \tag{14}$$
Here, f represents the fitness function used to evaluate the solutions.
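The greedy selection of Equation (14) is a one-liner in Python (minimization assumed, as elsewhere in this sketch):

```python
import numpy as np

def greedy_selection(x, u, fobj):
    """Equation (14): keep the trial vector u only when it is at least as
    good as the current vector x."""
    return u if fobj(u) <= fobj(x) else x

sphere = lambda z: float(np.sum(z ** 2))
kept = greedy_selection(np.array([2.0, 2.0]), np.array([1.0, 1.0]), sphere)
rejected = greedy_selection(np.array([1.0, 1.0]), np.array([2.0, 2.0]), sphere)
```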

2.6.5. Parameter Adaptation

JADE dynamically adjusts the parameters F and C r using historical data and a learning process, as modeled by Equations (15) and (16):
$$F_i^{t+1} = N(\mu_F, \sigma_F) \tag{15}$$
$$C_r^{t+1} = N(\mu_{C_r}, \sigma_{C_r}) \tag{16}$$
In these equations, N represents a normal distribution with means μ F and μ C r , and standard deviations σ F and σ C r , respectively.
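Equations (15) and (16) can be sketched as below (Python/NumPy; clipping to [0, 1] is an added assumption, and note that canonical JADE samples $F$ from a Cauchy distribution rather than a normal one):

```python
import numpy as np

def sample_parameters(mu_F, mu_Cr, sigma=0.1, rng=None):
    """Equations (15)-(16): draw F and Cr from normal distributions
    centered on the adaptive means, clipped to the valid range."""
    if rng is None:
        rng = np.random.default_rng()
    F = float(np.clip(rng.normal(mu_F, sigma), 0.0, 1.0))
    Cr = float(np.clip(rng.normal(mu_Cr, sigma), 0.0, 1.0))
    return F, Cr

F, Cr = sample_parameters(0.5, 0.5)
```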

2.6.6. External Archive

JADE incorporates an external archive to maintain a set of inferior solutions, enhancing population diversity and guiding the mutation process. The archive is updated by adding new solutions and removing older ones based on predefined criteria.
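The archive update can be sketched as follows (Python; the source leaves the removal criterion unspecified, so random eviction, one common choice, is assumed here):

```python
import numpy as np

def update_archive(archive, replaced_parent, max_size, rng=None):
    """Append a replaced (inferior) parent to the archive; when the archive
    grows beyond max_size, evict a random member."""
    if rng is None:
        rng = np.random.default_rng()
    archive.append(np.asarray(replaced_parent, dtype=float))
    if len(archive) > max_size:
        archive.pop(rng.integers(len(archive)))  # random eviction (assumed rule)
    return archive

A = []
for k in range(5):
    A = update_archive(A, [float(k)], max_size=3)
```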

3. Hybrid Arctic Puffin Optimization (APO) with JADE

The hybrid Arctic Puffin Optimization (APO) with JADE represents a novel optimization algorithm that combines the complementary strengths of two distinct methods: Arctic Puffin Optimization (APO) and JADE (Dynamic Differential Evolution with Optional External Archive). APO draws inspiration from the natural behaviors of Arctic puffins, particularly their foraging and predation strategies, which are translated into exploration and exploitation mechanisms within the optimization framework [14]. Conversely, JADE enhances the traditional differential evolution (DE) algorithm by introducing dynamic parameter control and an external archive, significantly improving diversity maintenance and convergence efficiency.
By integrating these approaches, the hybrid algorithm benefits from JADE’s adaptive mechanisms, which dynamically adjust control parameters and incorporate an external archive of inferior solutions to enhance population diversity. Simultaneously, it capitalizes on APO’s robust exploration and exploitation strategies, inspired by the puffins’ efficient foraging behaviors. This combination results in a hybrid algorithm capable of navigating complex optimization landscapes, escaping local optima, and converging to high-quality solutions.
The development of the APO-JADE algorithm is motivated by the need for a more robust and efficient optimization tool that integrates the strengths of APO and JADE. The hybrid approach recognizes that different optimization strategies offer unique benefits, which, when combined, can effectively address their respective limitations.
JADE contributes to the hybrid algorithm through its dynamic parameter control and external archive mechanisms. The dynamic parameter control adjusts the crossover rate ($C_r$) and scaling factor ($F$) based on the evolving search environment, ensuring adaptability and maintaining a balance between exploration (searching new regions) and exploitation (refining current solutions). Furthermore, the external archive stores inferior solutions, which can reintroduce diversity into the population, preventing premature convergence and enabling the algorithm to escape local optima.
APO enhances the hybrid algorithm through its bio-inspired mechanisms, which mimic the natural behaviors of Arctic puffins. These behaviors include collective foraging and dynamic adjustments to search strategies based on environmental feedback. During the exploration phase, modeled after puffins’ aerial flight, Lévy flights are employed to facilitate long-distance jumps in the solution space, enabling a broad search. In the exploitation phase, inspired by underwater foraging, the algorithm fine-tunes solutions around the best-found candidates, ensuring effective utilization of promising regions within the search space.
By integrating these complementary strategies, the APO-JADE algorithm achieves a comprehensive search capability that effectively balances exploration and exploitation. This results in a robust optimization process capable of addressing complex, high-dimensional problem spaces and discovering high-quality solutions.

3.1. Mathematical Model

3.1.1. JADE Parameters

The JADE parameters are initialized to ensure effective adaptation and diversity:
  • l = 1 (iteration counter; started at 1 to avoid a zero index)
  • u_CR = 0.5 (initial mean of the crossover rate)
  • u_F = 0.5 (initial mean of the scaling factor)
  • p_0 = 0.05 (proportion of top individuals)
  • top = 1 (number of top individuals considered)
  • A = ∅ (external archive, initially empty)
  • t = 1 (archive counter)
If the upper bound (ub) and lower bound (lb) are scalar values, they are expanded into vectors with a dimension equal to the problem’s dimensionality (dim). This ensures consistent boundary constraints across all dimensions of the search space.

3.1.2. Population Initialization

The population is generated using an initialization function that creates a random distribution of candidate solutions within the specified bounds:
P = initialization_SSA(pop, dim, ub, lb)
Here, pop represents the population size, dim denotes the dimensionality of the problem, and ub and lb correspond to the upper and lower bounds, respectively.

3.1.3. Hybrid JADE-APO Loop

The optimization process alternates between the mechanisms of JADE and APO, iteratively refining the solution. This loop continues for a specified maximum number of iterations (Max_iter).

JADE Mechanism

During the initial half of the iterations, the algorithm operates under the JADE mechanism:
Evaluate the fitness of the current population:
fitnessP(i) = f_obj(P(i, :))
where fitnessP(i) represents the fitness value of the i-th individual in the population P, computed using the fitness function f_obj. The notation P(i, :) denotes the i-th individual across all dimensions.
Dynamically update CR and F using normal and Cauchy distributions:
CR(i) = N(u_CR, 0.1), resampled while CR(i) > 1 or CR(i) < 0
F(i) = C(u_F, 0.1), resampled while F(i) ≤ 0
As outlined in the equations, CR(i) denotes the crossover probability for the i-th individual, sampled from a normal distribution N with mean u_CR and standard deviation 0.1. Similarly, F(i) represents the scaling factor for differential mutation, sampled from a Cauchy distribution C with location u_F and scale 0.1. Both parameters are dynamically constrained to remain within their valid ranges.
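As a concrete illustration, the sampling rules above can be sketched as follows; the function names are ours, and the resampling loops follow the while-conditions exactly as stated in the equations:

```python
import numpy as np

def sample_cr(u_cr, rng):
    """Draw CR(i) from N(u_CR, 0.1); resample while it lies outside [0, 1]."""
    cr = rng.normal(u_cr, 0.1)
    while cr > 1.0 or cr < 0.0:
        cr = rng.normal(u_cr, 0.1)
    return cr

def sample_f(u_f, rng):
    """Draw F(i) from a Cauchy distribution with location u_F and scale 0.1;
    resample while F(i) <= 0."""
    f = u_f + 0.1 * rng.standard_cauchy()
    while f <= 0.0:
        f = u_f + 0.1 * rng.standard_cauchy()
    return f
```

In the original JADE formulation, out-of-range CR values are often truncated rather than resampled; the resampling above follows the conditions as this paper states them.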
Mutant vectors are generated using the current-to-pbest mutation strategy, as described in Equation (27):
V_i(t+1) = X_i(t) + F · (X_pbest(t) − X_i(t)) + F · (X_r1(t) − X_r2(t))
Here, V_i(t+1) represents the mutant vector for the i-th individual at generation t+1, X_i(t) is the current position of the i-th individual, X_pbest(t) is the position of the p-best individual, and X_r1(t) and X_r2(t) are the positions of two randomly selected individuals. F serves as the scaling factor for mutation.
Crossover is applied to generate trial vectors, as expressed in Equation (28):
U_{i,j}(t+1) = { V_{i,j}(t+1)  if rand_j ≤ Cr or j = j_rand
              { X_{i,j}(t)    if rand_j > Cr and j ≠ j_rand
In this equation, U_i(t+1) represents the trial vector generated for the i-th individual. The crossover operation combines the mutant vector V_i(t+1) and the current vector X_i(t) dimension-wise based on a crossover probability Cr. A randomly selected index j_rand ensures that at least one dimension is taken from the mutant vector.
The fitness of trial vectors is evaluated, and the population is updated as shown in Equation (29):
X_i(t+1) = { U_i(t+1)  if f(U_i(t+1)) ≤ f(X_i(t))
           { X_i(t)    otherwise
Here, X_i(t+1) represents the updated position of the i-th individual. If the fitness value of the trial vector f(U_i(t+1)) is better than or equal to the fitness of the current vector f(X_i(t)), the trial vector replaces the current vector; otherwise, the current vector is retained.
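Equations (27)–(29) together form one JADE generation. A minimal Python sketch follows; the external archive and the r1 ≠ r2 ≠ i index checks of full JADE are omitted for brevity, and the helper name `jade_step` is ours, not from the paper's code:

```python
import numpy as np

def jade_step(P, fitness, f_obj, CR, F, p_frac, rng):
    """One generation: current-to-pbest/1 mutation (Eq. 27), binomial
    crossover (Eq. 28), and greedy selection (Eq. 29)."""
    pop, dim = P.shape
    order = np.argsort(fitness)                  # ascending: best first
    n_pbest = max(1, int(p_frac * pop))
    new_P, new_fit = P.copy(), fitness.copy()
    for i in range(pop):
        pbest = P[rng.choice(order[:n_pbest])]   # one of the top p% individuals
        r1, r2 = rng.choice(pop, size=2, replace=False)
        v = P[i] + F[i] * (pbest - P[i]) + F[i] * (P[r1] - P[r2])
        j_rand = rng.integers(dim)               # guarantees one mutant gene
        mask = rng.random(dim) <= CR[i]
        mask[j_rand] = True
        u = np.where(mask, v, P[i])              # trial vector
        fu = f_obj(u)
        if fu <= fitness[i]:                     # greedy selection
            new_P[i], new_fit[i] = u, fu
    return new_P, new_fit
```

Because selection is greedy, the per-individual fitness sequence is non-increasing from one generation to the next.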

APO Mechanism

During the second half of the iterations, the algorithm employs the APO mechanism.
The behavioral conversion factor B is calculated using Equation (30):
B = 2 · log(1/rand) · (1 − l/Max_iter)
Here, B represents the factor that modulates the balance between exploration and exploitation, rand is a uniformly distributed random number, l is the current iteration, and Max_iter denotes the maximum number of iterations.
Positions are updated using Lévy flight and swooping strategies, as expressed in Equation (31):
Y = X + Levy(dim) · (X − X_rand) + round(0.5 · (0.05 + rand)) · randn
Here, Y represents the updated position, X is the current position, Levy(dim) is the Lévy flight operator, X_rand is a randomly selected position, and randn is a normally distributed random number.
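A sketch of this exploration move. The paper gives Levy(D) = u/|v|^(1/β) without fixing the scale of u, so the Mantegna normalization with β = 1.5 used below is our assumption:

```python
import numpy as np
from math import gamma, sin, pi

def levy(dim, beta=1.5, rng=None):
    """Levy step u / |v|^(1/beta) with the Mantegna scale for u (assumed)."""
    rng = rng if rng is not None else np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def aerial_update(X, X_rand, rng):
    """Exploration move of Equation (31): Levy-driven jump away from a
    random peer plus a rounded random perturbation."""
    dim = X.shape[0]
    step = levy(dim, rng=rng) * (X - X_rand)
    perturb = round(0.5 * (0.05 + rng.random())) * rng.standard_normal(dim)
    return X + step + perturb
```

The rounded factor evaluates to 0 or 1, so the Gaussian perturbation is switched on only part of the time.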
The updated positions are bounded within the search space using Equation (32):
X = SpaceBound(X, ub, lb)
Here, X represents the adjusted position, and the SpaceBound function ensures that all positions remain within the predefined upper (ub) and lower (lb) bounds.
The APO-JADE algorithm effectively integrates the dynamic parameter control and external archive features of JADE with the exploration and exploitation strategies of APO. This hybridization ensures a robust optimization process capable of addressing complex and multi-dimensional optimization problems. The APO mechanism also includes detailed behavioral modeling equations inspired by the natural foraging behaviors of Arctic puffins.

3.1.4. Behavioral Conversion Factor

The behavioral conversion factor B transitions between exploration and exploitation phases, as shown in Equation (33):
B = 2 · log(1/rand) · (1 − l/Max_iter)

3.1.5. Levy Flight

The Levy flight mechanism enables large, random jumps in the search space to enhance global exploration, as described in Equation (34):
Levy(D) = u / |v|^(1/β)
Here, u and v are random variables sampled from normal distributions, and β represents the stability parameter that controls the step size.

3.1.6. Swooping Strategy

The swooping strategy mimics rapid predation behavior, facilitating local exploitation, as expressed in Equation (35):
Z = Y · tan((R − 0.5) · π)
In this equation, R is a randomly generated vector, and tan denotes the tangent function used to simulate sharp directional changes.
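A one-line sketch of the swooping move, reading Equation (35) as Z = Y · tan((R − 0.5) · π) with R a uniform random vector the same shape as Y (the helper name is ours):

```python
import numpy as np

def swoop(Y, rng):
    """Swooping move: sharp, tangent-driven directional changes around Y."""
    R = rng.random(Y.shape)
    return Y * np.tan((R - 0.5) * np.pi)
```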

3.1.7. Bounding Positions

To ensure feasibility, the positions of individuals are bounded within the predefined search space, as shown in Equation (36):
X = SpaceBound(X, ub, lb)
Here, SpaceBound is a function that adjusts X to ensure it remains within the upper bound (ub) and lower bound (lb).
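SpaceBound can be realized as simple clipping; some implementations instead re-randomize out-of-bounds coordinates, so the sketch below is one common choice rather than the definitive one:

```python
import numpy as np

def space_bound(X, ub, lb):
    """Clamp each coordinate of X into [lb, ub] (Equation (36))."""
    return np.minimum(np.maximum(X, lb), ub)
```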

3.2. Hybrid APO-JADE Algorithm

The Hybrid APO-JADE algorithm (see Algorithm 1) enhances the optimization process by dynamically adjusting the crossover rate (CR) and scaling factor (F) using normal and Cauchy distributions, respectively. Additionally, it employs an external archive to maintain diversity and mitigate premature convergence. The algorithm alternates between the JADE and APO mechanisms in two distinct phases, ensuring a robust balance between exploration and exploitation, thereby efficiently navigating complex optimization landscapes and converging to high-quality solutions. The steps of the algorithm are shown in Figure 1.
Algorithm 1 Hybrid APO-JADE algorithm.
 1: Initialize: set parameters u_CR = 0.5, u_F = 0.5, p_0 = 0.05, top = 1
 2: Initialize archive A as empty
 3: Initialize iteration counter l = 1
 4: Generate initial population P of size pop with dimension dim using bounds ub and lb
 5: Evaluate fitness of initial population
 6: while l ≤ Max_iter do
 7:     if l ≤ Max_iter/2 then
 8:         JADE Mechanism:
 9:         for each individual i in population do
10:             Compute CR(i) = N(u_CR, 0.1) and ensure 0 ≤ CR(i) ≤ 1
11:             Compute F(i) = C(u_F, 0.1) and ensure F(i) > 0
12:         end for
13:         Identify the best individual X_best and select the top p% as X_pbest
14:         for each individual i do
15:             Generate mutant vector V_i using Equation (27)
16:             Generate trial vector U_i using crossover (Equation (28))
17:         end for
18:         for each individual i do
19:             Bound U_i using Equation (36)
20:             Evaluate fitness of U_i
21:             Update population and archive if U_i improves the fitness
22:         end for
23:         Update control parameters u_CR and u_F
24:     else
25:         APO Mechanism:
26:         for each individual i do
27:             Compute behavioral conversion factor B (Equation (30))
28:             if B > 0.5 then
29:                 Perform Levy flight using Equation (34)
30:             else
31:                 Apply dynamic search strategies
32:             end if
33:             Bound positions using Equation (36)
34:             Update population based on fitness
35:         end for
36:     end if
37:     Update the best solution found so far
38:     Increment iteration counter l
39: end while
40: Output: Best solution and its fitness value

4. Exploration and Exploitation

The APO-JADE algorithm employs a strategic combination of exploration and exploitation to efficiently navigate the search space and identify optimal solutions.
Exploration involves investigating diverse regions of the search space to identify promising solutions. This is achieved through mechanisms like Levy flights, which enable large, random jumps, facilitating escape from local optima (Equation (34)). The behavioral conversion factor B dynamically adjusts the focus on exploration based on the iteration progress (Equation (30)).
Exploitation refines the identified promising solutions by focusing on specific regions of the search space. This is accomplished through the swooping strategy, which intensively searches around the best solutions (Equation (35)), and JADE’s dynamic parameter control, which fine-tunes CR and F based on algorithm performance (Equations (28) and (29)).

5. Experimental Setup

All optimization algorithms were assessed using the benchmark functions provided in the Congress on Evolutionary Computation (CEC) 2022 suite. For each function, every algorithm was executed across 30 independent runs. Each run employed a population size of N, where N =  (specify value here). The initial population for all algorithms was randomly initialized within the specified boundaries of each function, ensuring consistency by using the same random seed across all experiments.
The performance metrics reported include the mean, standard deviation (Std), and standard error of the mean (SEM), which are defined as follows:
  • Mean: The arithmetic mean of the best fitness values obtained over the 30 runs.
  • Standard deviation (Std): A measure of the variability in the fitness values across the 30 runs.
  • Standard error of the mean (SEM): Computed as
    SEM = Std / √30
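These statistics can be computed as follows for n = 30 runs; using the sample standard deviation (ddof = 1) is one reasonable convention, as the paper does not specify which estimator was used:

```python
import numpy as np

def run_statistics(best_values):
    """Mean, sample standard deviation, and SEM = Std / sqrt(n) over runs."""
    best = np.asarray(best_values, dtype=float)
    n = best.size
    std = best.std(ddof=1)           # sample standard deviation
    return best.mean(), std, std / np.sqrt(n)
```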

IEEE CEC2022 Benchmark Functions

The CEC2022 benchmark suite (see Table 1) comprises 12 standard optimization functions that are widely recognized for evaluating the performance of optimization algorithms. These functions present a diverse range of mathematical challenges, including unimodal and multimodal landscapes, as well as hybrid and composition functions. Each function is defined within a 10-dimensional search space with boundaries [ 100 , 100 ] . The suite includes simpler functions such as Zakharov (F1) and Rosenbrock (F2), alongside more complex problems like Rastrigin’s (F4), Levy (F5), and a series of hybrid (F6–F8) and composition functions (F9–F12). The minimum known objective values ( F min ) for these functions vary, offering a spectrum of difficulties designed to assess the robustness and efficiency of optimization algorithms.

6. Results and Discussion

This section presents a comprehensive evaluation of the proposed APO-JADE algorithm, benchmarked against an extensive set of state-of-the-art optimization methods including FLO, STOA, SOA, SPBO, AO, SSOA, TTHHO, ChOA, CPO [46], and ROA [47], as summarized in Table 2, with the corresponding parameter settings presented in Table 3. These well-established approaches draw upon a wide array of evolutionary paradigms—ranging from natural and biological inspirations to principles grounded in physical or mathematical processes. The analysis relies on standard performance metrics derived from the CEC 2022 benchmark suite, ensuring a consistent and rigorous comparison of algorithmic effectiveness. The results, presented in Table 4, Table 5 and Table 6, offer a detailed statistical comparison of APO-JADE’s performance.

6.1. Convergence Curve

As illustrated in Figure 2 and Figure 3, the convergence curves of the APO-JADE algorithm for the CEC2022 benchmark functions (F1–F12) effectively highlight its capability to navigate the search space and achieve optimal solutions. For F1, the curve exhibits a rapid initial decline, signifying that the algorithm quickly identifies a promising region of the search space, with convergence observed around iteration 400. Similarly, for F2, a sharp decrease is evident in the early iterations, with the algorithm stabilizing swiftly and maintaining the optimal solution after approximately 100 iterations. In the case of F3, the curve demonstrates a consistent reduction in the best fitness value obtained, with steady improvements culminating in convergence around iteration 250. Similar to F1, the F4 curve exhibits rapid initial improvement followed by gradual flattening, suggesting effective exploration and subsequent exploitation. The F5 curve shows very steep initial decline, reaching near-optimal solutions within the first 50 iterations, followed by minimal further improvement. The curve for F6 demonstrates a steep initial drop with gradual improvement, stabilizing around iteration 300. For F7, the curve indicates rapid early improvement, steady decline, and flattening around iteration 200, highlighting effective exploration and exploitation. The F9 curve exhibits a sharp initial decline, stabilizing early, which indicates efficient identification and maintenance of optimal solutions. The F11 curve shows steep initial decline followed by gradual stabilization around iteration 200, suggesting robust exploration and fine-tuning. Lastly, the F12 curve shows rapid initial improvement and early stabilization, indicating quick convergence to a good solution.

6.2. JADE APO Search History Diagram

As depicted in Figure 4, the search history curves for the Hybrid APO-JADE algorithm across various test functions (F1–F12) from the CEC2022 benchmark suite provide critical insights into its exploration and exploitation dynamics. For F1, the search history reveals a dense clustering of points, indicative of intensive exploration around the best solution identified. The horizontal and vertical coverage of the search space ensures comprehensive exploration. In the case of F2, the search history exhibits a pronounced focus around the optimal solutions, with a tightly concentrated cluster, reflecting effective exploitation following an initial exploratory phase. For F3, the search history shows a vertically aligned distribution of points, signifying a concentrated search along a specific dimension, likely influenced by the structural characteristics of the function. Similarly, the search history for F4 displays vertical clustering, emphasizing intensive exploitation around the optimal region. The F6 search history illustrates a dense central cluster with some dispersed points, demonstrating a well-balanced approach between exploration and exploitation. Finally, for F8, the search history indicates a strong vertical clustering with minimal dispersion, highlighting a thorough and targeted search in specific regions of the solution space. The F9 search history curve shows a concentrated cluster around the optimal solution, indicating efficient exploitation after initial exploration. The F10 search history presents a circular clustering around the best solutions, highlighting effective exploitation with minimal exploration outside the optimal region. For F11, the search history curve demonstrates a focused search with some dispersion, indicating both exploration and exploitation activities. Finally, the F12 search history shows a dense clustering around the optimal solution with a few dispersed points, suggesting a strong focus on exploitation after initial exploration.

6.3. JADE APO Average Fitness Diagram

As illustrated in Figure 5, the average fitness curves of the APO-JADE algorithm over the CEC2022 benchmark functions provide valuable insights into its performance and convergence characteristics. For F1, the average fitness begins at a high value and exhibits a significant drop during the early iterations, signifying rapid initial improvement. This is followed by a steady decline and a slight increase towards the end, reflecting the transition from exploration to exploitation. In F2, the average fitness experiences a sharp decline within the first 50 iterations and stabilizes at a low value, indicating efficient convergence to a near-optimal solution in the early stages. The curve for F3 follows a similar trajectory, with an initial steep drop and a gradual decrease, highlighting effective exploitation of the search space. The curve for F4 demonstrates a consistent reduction, reaching a low point around iteration 350, indicative of a delayed yet robust convergence process. For F5 and F6, the curves exhibit significant reductions in average fitness early on, followed by a plateau and a final dip, underscoring the algorithm’s capacity to refine solutions progressively over iterations. In the case of F8, the average fitness decreases rapidly before stabilizing, showcasing efficient convergence. The curves for F9 and F10 display an initial steep decline, followed by minor oscillations and eventual stabilization, reflecting a balance between exploration and fine-tuning. Finally, the curve for F12 mirrors the trends observed in other functions, with a rapid decrease in fitness values, early stabilization, and sustained robustness, demonstrating the algorithm’s effective convergence behavior.

6.4. APO-JADE First Particle Trajectory Diagram

As illustrated in Figure 6, the trajectory curves of the first particle in the APO-JADE algorithm for various functions within the CEC2022 benchmark offer valuable insights into the algorithm’s search dynamics. For function F1, the particle exhibits a trajectory characterized by high variance during the initial stages, reflecting extensive exploration. This variance diminishes progressively as the algorithm converges toward the global optimum, indicating a transition from exploration to exploitation. Similarly, in F2, the trajectory exhibits large oscillations initially, signifying a broad search space coverage before settling into a more refined search pattern as the iterations progress. For F3 and F4, the trajectories show a rapid convergence towards the global optimum with minimal oscillations, reflecting a quick and steady exploitation phase. The trajectory for F5 demonstrates a combination of exploration and exploitation phases, with initial large oscillations followed by a steady approach towards the optimum. F6 and F7 display significant initial oscillations, indicating extensive exploration, which later stabilizes as the particles converge. Functions F9 and F11 exhibit similar patterns of high initial variance with eventual stabilization, highlighting the algorithm’s ability to transition from exploration to exploitation effectively. Finally, for F12, the trajectory shows rapid convergence with minimal oscillations, indicating an efficient search process.

6.5. APO-JADE Exploitation Diagram

As illustrated in Figure 7, the exploitation curves of the APO-JADE algorithm across the CEC2022 benchmark functions reveal its proficiency in concentrating search efforts on promising regions of the solution space over successive iterations. For function F1, the exploitation metric exhibits a rapid decrease within the first 10 iterations, stabilizing near a lower bound. This behavior indicates efficient early exploration of the search space. Similarly, functions F2, F3, and F4 demonstrate a steep decline in the exploitation metric during the initial iterations, reflecting the algorithm’s swift convergence toward local optima. Notably, for functions F2 and F6, the exploitation metric undergoes a pronounced drop, emphasizing the algorithm’s robust ability to exploit high-potential regions effectively. The patterns observed in other functions, such as F8, F9, F10, and F11, also show a rapid initial reduction in the exploitation metric, followed by stabilization. This trend underscores the algorithm’s consistent capacity to exploit viable solutions across diverse problem landscapes. The uniform pattern of an early, rapid decrease followed by a plateau across all benchmark functions suggests that the APO-JADE algorithm efficiently narrows the search space at the outset, enabling a focused refinement process in subsequent iterations. The algorithm’s ability to maintain this behavior across various optimization challenges highlights its adaptability and effectiveness in balancing exploration and exploitation, ultimately leading to convergence on optimal solutions.

6.6. JADE APO Diversity Diagram

As illustrated in Figure 8, the diversity curves of the APO-JADE algorithm across the CEC2022 benchmark functions reveal a distinct pattern of diminishing diversity over successive iterations. At the outset, the standard deviation of particle positions across all dimensions is relatively high, indicating extensive exploration of the search space. However, as the optimization process advances, a rapid decline in diversity is observed within the first 20 to 30 iterations. This reduction signifies that the particles are converging toward promising regions within the search space, reflecting a transition from broad exploration to focused exploitation. This rapid convergence phase is followed by a more gradual reduction in diversity, stabilizing around a low value after approximately 60 iterations. Each dimension follows a similar pattern, although the extent of diversity and the rate of decrease can vary slightly among different dimensions. This consistent pattern across all benchmark functions indicates that APO-JADE effectively balances exploration and exploitation, quickly narrowing down the search to optimal regions while maintaining enough diversity to avoid premature convergence. The final low but non-zero diversity suggests that the algorithm still retains some exploration capacity, potentially aiding in fine-tuning the final solutions.

6.7. JADE APO Box Plot Analysis

As shown in Figure 9, the box plot of fitness scores provides a statistical overview of the fitness values obtained by the algorithm across various functions. For instance, the box plot for F1 demonstrates minimal variance and a lack of significant outliers, indicating consistent performance. In contrast, F2 exhibits a broader range of fitness values, suggesting some variability in the algorithm’s performance. For F3, the box plot shows a narrow range of fitness scores concentrated near 600, with a few minor outliers slightly exceeding this value. This reflects a high degree of consistency, with occasional deviations. Similarly, F4 displays a compact distribution with a median around 812, signifying stable performance with slight variability in the upper range. The F5 box plot highlights almost no variability, with fitness values tightly clustered around 900, showcasing exceptional consistency. In the case of F8, the interquartile range is wider, and some outliers are present, with the median fitness value around 2216. This suggests greater variability and occasional suboptimal runs. The occurrence of outliers in functions such as F3 and F12 indicates occasional deviations, which may be attributed to the complexity of the search space or the presence of multiple local optima.

6.8. JADE APO Heat Map Analysis

As depicted in Figure 10, the sensitivity analysis heat maps for the APO-JADE algorithm applied to the CEC2022 benchmark functions illustrate performance variations based on the number of search agents and the maximum number of iterations. For the F1 function, the heat map shows substantial performance improvement as the number of iterations increases, particularly when the number of search agents exceeds 20, resulting in optimal values around 300. Similarly, for the F2 function, a stable region emerges where increasing iterations beyond 200 provides negligible performance gains, maintaining a best fitness score near 40. The F3 function demonstrates consistent performance across varying numbers of agents and iterations, with fitness values stabilizing around 600. For the F4 function, the heat map indicates that increasing both the number of agents and iterations generally enhances performance, with best fitness scores improving to approximately 810 as configurations optimize. The F5 function exhibits strong sensitivity to the number of agents and iterations, with scores converging near 900. In contrast, the F6 function reveals a more gradual improvement in performance, achieving optimal scores of approximately 1800 with higher iterations and agents. The F7 function demonstrates stability, with slight performance improvements as iterations increase, stabilizing around 2000. For the F8 function, the sensitivity analysis indicates consistent performance around 2215, showcasing robust behavior across varying configurations. Finally, the F10 and F12 functions reveal that performance improves with increasing iterations and agents, with fitness scores stabilizing at approximately 2500 and 2865, respectively. These heat maps provide valuable insights into the impact of algorithm parameters on performance, highlighting the importance of balancing search agents and iterations for different functions.

6.9. JADE APO Histogram Analysis

As illustrated in Figure 11, the histogram analysis of final fitness values for functions F1–F12 using the hybrid APO-JADE algorithm reveals distinct distributions, showcasing the variability in optimization outcomes for each function. For F1, the fitness values are highly concentrated around 300, with a prominent probability peak, indicating consistent optimization performance. In contrast, F2 demonstrates a bimodal distribution with peaks near 400 and 410, reflecting variability in the optimization process. The fitness values for F3 exhibit a skewed distribution leaning toward 600, whereas F4 presents a more symmetric distribution centered around 810. The histogram for F5 demonstrates a narrow peak around 900, indicating highly consistent optimization results. F6 shows a predominant peak at 1800 with a few higher values, reflecting occasional deviations in the optimization outcome. F8 has a broad distribution with a peak around 2210, indicating variability in the final fitness values. The distribution for F9 is extremely narrow around 2530, similar to F5, reflecting consistent optimization results. Finally, F11 and F12 show broader distributions centered around 2600 and 2864, respectively, indicating some variability in the final fitness values.

7. Application of APO-JADE for Planetary Gear Train Design Optimization Problem

The planetary gear train design model (see Figure 12) aims to optimize the gear ratios of a planetary gear train while ensuring compliance with mechanical and geometric constraints. This model incorporates multiple variables, a defined fitness function, a set of constraints, and a penalty function to address any violations of the constraints [64].

7.1. Parameter Initialization

The design variables include the number of teeth on each gear and the module of gear pairs, which are initialized as follows [64]:
N_1, N_2, N_3, N_4, N_5, N_6: number of teeth on the gears
p: number of planets, chosen from the predefined options P_ind
m_1, m_2: modules for the gear pairs, chosen from the predefined options m_ind

7.2. Fitness Function

The fitness function is formulated to minimize the deviation between the actual gear ratios and their desired target values [64]:
i_1 = N_6 / N_4,  i_01 = 3.11
i_2 = N_6 · (N_1·N_3 + N_2·N_4) / (N_1·N_3 · (N_6 − N_4)),  i_02 = 1.84
i_R = (N_2·N_6) / (N_1·N_3),  i_0R = 3.11
f = max{ |i_1 − i_01|, |i_2 − i_02|, |i_R − i_0R| }
Equation (43) calculates the maximum deviation of actual gear ratios from their desired values.
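The ratio computations and the max-deviation objective above can be sketched directly; the helper name is ours, and N_5 does not enter any of the three ratios:

```python
def gear_ratio_deviation(N1, N2, N3, N4, N6, i01=3.11, i02=1.84, i0R=3.11):
    """Maximum deviation of the actual gear ratios from their targets."""
    i1 = N6 / N4
    i2 = N6 * (N1 * N3 + N2 * N4) / (N1 * N3 * (N6 - N4))
    iR = N2 * N6 / (N1 * N3)
    return max(abs(i1 - i01), abs(i2 - i02), abs(iR - i0R))
```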

7.3. Constraints

The design must satisfy several geometric and operational constraints:
g(1) = m_2 · (N_6 + 2.5) − D_max ≤ 0
g(2) = m_1 · (N_1 + N_2) + m_1 · (N_2 + 2) − D_max ≤ 0
g(3) = m_2 · (N_4 + N_5) + m_2 · (N_5 + 2) − D_max ≤ 0
g(4) = |m_1 · (N_1 + N_2) − m_2 · (N_6 − N_3)| − (m_1 + m_2) ≤ 0

7.4. Penalty Function

A penalty function is applied to heavily penalize constraint violations, as shown in Equation (48):
Penalty = λ · Σ_{i=1}^{n} g_i² · GetInequality(g_i)
where GetInequality(g_i) equals 1 if g_i > 0 (i.e., the constraint is violated) and 0 otherwise.
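The penalty of Equation (48) reduces to a sum over violated constraints; interpreting GetInequality(g_i) as 1 when g_i > 0, for constraints written in the form g_i ≤ 0 (the function name below is ours):

```python
def penalty(g_values, lam=1e20):
    """Quadratic exterior penalty: only violated constraints (g_i > 0)
    contribute lam * g_i**2; feasible ones contribute nothing."""
    return lam * sum(g * g for g in g_values if g > 0.0)
```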

7.5. Overall Fitness Function

The overall fitness function combines the deviation in gear ratios with a penalty for any constraint violations. Since the primary objective f (the maximum ratio deviation) is minimized, the penalty term is added so that constraint violations worsen the overall fitness. The modified fitness function is defined as
Fit = f + Penalty
Equation (49) defines the fitness of a design, which is minimized during the optimization process while driving constraint violations to zero.

8. Three-Bar Truss Design Optimization Problem

The Three-Bar Truss Design problem (refer to Figure 13) focuses on optimizing the material distribution within a truss system to minimize the total material length while adhering to stress constraints under an applied load. This structural optimization task aims to determine the optimal cross-sectional areas that can withstand the specified loading conditions without exceeding the permissible material stress limits.

8.1. Variables

  • x_1: Cross-sectional area of the horizontal bar.
  • x_2: Cross-sectional area of the diagonal bars.

8.2. Fitness Function

The objective is to minimize the total length of material used, as shown in Equation (50):
f = (2·√2 · x_1 + x_2) · l
where l is the length of each truss member, assumed to be 100 units for simplification.

8.3. Constraints

The structure is subject to the following stress constraints under a load P, as shown in Equations (51)–(53):
g_1 = ((√2·x_1 + x_2) / (√2·x_1² + 2·x_1·x_2)) · P − σ ≤ 0
g_2 = (x_2 / (√2·x_1² + 2·x_1·x_2)) · P − σ ≤ 0
g_3 = (1 / (√2·x_2 + x_1)) · P − σ ≤ 0
where P is the applied load and σ represents the allowable stress.

8.4. Penalty Function

To ensure compliance with the constraints, a penalty function is incorporated into the fitness function. This function imposes a significant penalty when any constraints are violated, as expressed in Equation (54):
Penalty = λ · Σ_{i=1}^{3} g_i² · GetInequality(g_i)
Here, GetInequality(g_i) is a function that evaluates to 1 if g_i > 0 (indicating a violation of a constraint of the form g_i ≤ 0) and 0 otherwise. The parameter λ is a large number (e.g., 10^20) to ensure significant penalization.

8.5. Overall Fitness Function

The overall fitness function that needs to be minimized combines the fitness function and the penalty for any constraint violations, as shown in Equation (55):
Fit = f + Penalty
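Putting Equations (50)–(55) together, a penalized objective for the truss can be sketched as follows; the load P, allowable stress σ, and member length l below are illustrative assumptions, not the settings used in the paper's experiments:

```python
import numpy as np

def truss_fitness(x1, x2, P=2.0, sigma=2.0, l=100.0, lam=1e20):
    """Three-bar truss: material objective plus quadratic penalty for any
    violated stress constraint (g_i > 0)."""
    f = (2.0 * np.sqrt(2.0) * x1 + x2) * l
    g1 = (np.sqrt(2.0) * x1 + x2) / (np.sqrt(2.0) * x1**2 + 2.0 * x1 * x2) * P - sigma
    g2 = x2 / (np.sqrt(2.0) * x1**2 + 2.0 * x1 * x2) * P - sigma
    g3 = 1.0 / (np.sqrt(2.0) * x2 + x1) * P - sigma
    pen = lam * sum(g * g for g in (g1, g2, g3) if g > 0.0)
    return f + pen
```

For a feasible design the penalty vanishes and only the material term remains; any violated constraint dominates the fitness because of the large λ.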
Table 7 and Table 8 show results from various optimization algorithms. APO-JADE achieved the best fitness value (BestF) of 1.339959, with design variables x1 at 6.017209 and x2 at 5.316636, indicating that it found the most optimal truss design. FOX closely followed APO-JADE with a fitness value of 1.339962, while Grey Wolf Optimizer (GWO) and SMA also performed well with slightly higher fitness values. Other algorithms like AVOA were competitive, achieving values close to APO-JADE. As the list progresses, algorithms like AO, HHO, SA, and ChOA have higher BestF values, indicating less optimal designs. Significant increases in BestF for optimizers like SCA, COA, OOA, WOA, BO, and GA show poorer performance compared to APO-JADE.

9. Planetary Gear Train Design Optimization Problem

This problem concerns the design of a planetary gear train. The objective is to optimize the gear ratios and dimensions while ensuring adherence to all specified constraints. The components of the formulation are explained below.
Initially, a significantly large penalty factor (λ = 10^{20}) is defined, which is used within the fitness function to impose substantial penalties for any violations of the constraints.

9.1. Parameter Initialization

The variables are initialized based on the input vector x:
  • x is rounded to ensure integer values, as gears must have integer numbers of teeth.
  • P_ind and m_ind are predefined arrays representing the possible numbers of planets and module sizes, respectively.
  • N_1, N_2, N_3, N_4, N_5, and N_6 represent the numbers of teeth on the different gears.
  • p is the number of planets, and m_1 and m_2 are the module sizes of the two gear pairs, selected from m_ind.
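The initialization step can be sketched as follows. The contents of P_ind and m_ind and the exact layout of the candidate vector x are assumptions for illustration, since the paper does not list them in this section:

```python
# Assumed candidate pools; the paper's actual P_ind and m_ind values may differ.
P_IND = [3, 4, 5]                 # possible numbers of planets
M_IND = [1.75, 2.0, 2.25, 2.5]    # possible module sizes

def decode(x):
    """Round a real-valued candidate x into discrete design variables.

    Assumed layout: x[0:6] -> tooth counts N1..N6, x[6] -> index into P_IND,
    x[7] and x[8] -> indices into M_IND for the modules m1 and m2.
    """
    xi = [int(round(v)) for v in x]   # gears need integer numbers of teeth

    def clamp(i, n):
        return max(0, min(i, n - 1))  # keep indices within the pool bounds

    N1, N2, N3, N4, N5, N6 = xi[:6]
    p = P_IND[clamp(xi[6], len(P_IND))]
    m1 = M_IND[clamp(xi[7], len(M_IND))]
    m2 = M_IND[clamp(xi[8], len(M_IND))]
    return N1, N2, N3, N4, N5, N6, p, m1, m2
```

Rounding plus index clamping lets a continuous optimizer such as APO-JADE search over the discrete design choices without leaving the feasible index range.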

9.2. Fitness Function

The fitness function is designed to minimize the deviation between the actual and desired gear ratios:
  • i_1 represents the gear ratio of the first stage.
  • i_2 denotes the gear ratio of the second stage.
  • i_R corresponds to the gear ratio of the ring gear.
  • The desired gear ratios are denoted i_01, i_02, and i_0R.
The fitness function f is formulated as the maximum deviation among the three gear ratios.
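The max-deviation objective can be sketched as below. The ratio expressions and target values follow the formulation commonly used for this benchmark (e.g., in Filiz et al. [64]); they are assumptions here, as the paper does not reproduce them in this section:

```python
# Target ratios (assumed standard values for this benchmark).
I01, I02, I0R = 3.11, 1.84, -3.11

def ratio_deviation(N1, N2, N3, N4, N5, N6):
    """f: maximum deviation between the actual stage ratios and the targets.

    N5 enters only through the constraints in the standard formulation,
    so it does not appear in the ratio expressions below.
    """
    i1 = N6 / N4                                             # first-stage ratio
    i2 = N6 * (N1 * N3 + N2 * N4) / (N1 * N3 * (N6 - N4))    # second-stage ratio
    iR = -(N2 * N6) / (N1 * N3)                              # ring-gear ratio
    return max(abs(i1 - I01), abs(i2 - I02), abs(iR - I0R))
```

Minimizing the maximum of the three deviations forces all stage ratios toward their targets simultaneously rather than trading one off against another.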

9.3. Constraints

Various constraints are defined to ensure the gear design is feasible:
  • D_max is the maximum allowable diameter.
  • δ_22, δ_33, δ_55, δ_35, δ_34, and δ_56 are predefined constants representing allowable deviations.
  • β is an angle computed from the gear geometry.
The constraints g(1) to g(11) are defined as follows:
  • Constraint on the maximum diameter involving m_2 and N_6.
  • Constraint on the maximum diameter involving m_1 and N_1, N_2.
  • Constraint on the maximum diameter involving m_2 and N_4, N_5.
  • Constraint on the compatibility of the gear sizes m_1 and m_2.
  • Constraint on the minimum tooth addendum for gears N_1 and N_2.
  • Constraint on the minimum tooth addendum for gears N_6 and N_3.
  • Constraint on the minimum tooth addendum for gears N_4 and N_5.
  • Geometric constraint involving β to ensure the gear arrangement is physically feasible.
  • Constraint on the positioning of gear N_6 relative to N_3 and N_4.
  • Constraint on the positioning of gear N_6 relative to N_4 and N_5.
  • Constraint ensuring that N_6 − N_4 is a multiple of p.

9.4. Penalty Calculation

A penalty term is calculated to heavily penalize any violation of the constraints:
  • The penalty term accumulates the squared violations of each constraint, scaled by λ.
  • The GetInequality function, defined as in Section 8.4, returns 1 if a constraint is violated and 0 otherwise.

9.5. Overall Fitness Function

The final fitness function Fit integrates the objective f and the penalty term, as expressed in Equation (56):
Fit = f + Penalty
This formulation ensures that any design violating the constraints incurs a significantly higher fitness value due to the large penalty factor λ, effectively discouraging such solutions. The objective of the optimization process is to minimize Fit, thereby identifying a planetary gear train design that matches the desired gear ratios while satisfying all imposed constraints.
As presented in Table 9 and Table 10, the results for the planetary gear train design optimization problem show the performance of the various algorithms, measured by the best fitness value (BestF) achieved, alongside the corresponding design variables (x_1 to x_7).
The APO-JADE algorithm attained the best fitness value (BestF) of 0.525769, with the design variables x_1 = 35.4056, x_2 = 26.0947, x_3 = 24.5062, x_4 = 24.4957, x_5 = 22.1659, x_6 = 87.2760, and x_7 = 0.5529. This outcome indicates that APO-JADE identified the best solution by minimizing the deviation in gear ratios while adhering to all constraints.
The Grey Wolf Optimizer (GWO) closely followed APO-JADE, achieving a fitness value of 0.526281, indicating competitive performance but with a slightly higher deviation. The Slime Mould Algorithm (SMA), with a fitness value of 0.537059, performed marginally worse than APO-JADE and GWO, indicating a larger deviation from the target gear ratios. The Whale Swarm Optimization (WSO) and Whale Optimization Algorithm (WOA) both achieved fitness values around 0.53, demonstrating reasonable performance, though less optimal compared to APO-JADE.
Other algorithms, such as COOT and ChOA, also produced fitness values near 0.537, while the Owl Optimization Algorithm (OOA) and Binary Wolf Optimization (BWO) reported significantly higher values of 0.774379 and 0.868333, respectively, indicating suboptimal solutions in comparison to APO-JADE.

10. Conclusions

The Hybrid Arctic Puffin Optimization with JADE (APO-JADE) algorithm, developed in this study, marks a significant advancement in the field of engineering optimization. By combining the unique exploration strategies of the Arctic Puffin Optimization (APO) with the adaptive and evolutionary characteristics of the JADE algorithm, APO-JADE addresses critical limitations found in traditional optimization methods. Our extensive testing on benchmark functions demonstrates that APO-JADE significantly outperforms existing algorithms in terms of convergence speed, accuracy, and computational efficiency. Moreover, when applied to real-world engineering problems, APO-JADE consistently delivered superior solutions, highlighting its ability to effectively navigate and optimize complex, multimodal design spaces. This capability is crucial for engineering applications where design parameters are highly interdependent and optimal solutions are difficult to delineate using standard methods.
Future research will focus on refining APO-JADE’s adaptability and exploring its application across a broader range of industrial problems. Additionally, integrating machine learning techniques to predict algorithm parameters dynamically could further enhance its performance and robustness.

Author Contributions

Conceptualization, H.N.F., M.S.A., N.N.S., F.H. and S.N.F.; methodology, H.N.F., M.S.A., N.N.S., F.H. and S.N.F.; software, H.N.F., M.S.A., N.N.S., F.H. and S.N.F.; validation, H.N.F., M.S.A., N.N.S., F.H. and S.N.F.; formal analysis, H.N.F., M.S.A., N.N.S., F.H. and S.N.F.; investigation, H.N.F., M.S.A., N.N.S., F.H. and S.N.F.; writing—original draft preparation, H.N.F., M.S.A., N.N.S., F.H. and S.N.F.; writing—review and editing, H.N.F., M.S.A., N.N.S., F.H. and S.N.F.; visualization, H.N.F., M.S.A., N.N.S., F.H. and S.N.F.; supervision, H.N.F.; project administration, H.N.F.; funding acquisition, H.N.F., M.S.A., N.N.S., F.H. and S.N.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Fogel, L.J.; Owens, A.J.; Walsh, M.J. Artificial Intelligence Through Simulated Evolution; Wiley: New York, NY, USA, 1966. [Google Scholar]
  2. Koza, J. Genetic Programming as a Means for Programming Computers by Natural Selection. Stat. Comput. 1994, 4, 87–112. [Google Scholar] [CrossRef]
  3. Lučić, P.; Teodorović, D. Computing with Bees: Attacking Complex Transportation Engineering Problems. Int. J. Artif. Intell. Tools 2003, 12, 375–394. [Google Scholar] [CrossRef]
  4. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by Simulated Annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef]
  5. Rabanal, P.; Rodríguez, I.; Rubio, F. Using River Formation Dynamics to Design Heuristic Algorithms. In Unconventional Computation; Springer: Berlin/Heidelberg, Germany, 2007; pp. 163–177. [Google Scholar]
  6. Rashedi, E.; Nezamabadi-pour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  7. Fakhouri, H.N.; Alawadi, S.; Awaysheh, F.M.; Hamad, F. Novel hybrid success history intelligent optimizer with gaussian transformation: Application in CNN hyperparameter tuning. Clust. Comput. 2023, 27, 3717–3739. [Google Scholar] [CrossRef]
  8. Civicioglu, P. Transforming Geocentric Cartesian Coordinates to Geodetic Coordinates by Using Differential Search Algorithm. Comput. Geosci. 2012, 46, 229–247. [Google Scholar] [CrossRef]
  9. Fakhouri, H.N.; Al-Shamayleh, A.S.; Ishtaiwi, A.; Makhadmeh, S.N.; Fakhouri, S.N.; Hamad, F. Hybrid Four Vector Intelligent Metaheuristic with Differential Evolution for Structural Single-Objective Engineering Optimization. Algorithms 2024, 17, 417. [Google Scholar] [CrossRef]
  10. Jung, S.H. Queen-bee Evolution for Genetic Algorithms. Electron. Lett. 2003, 39, 575–576. [Google Scholar] [CrossRef]
  11. Fakhouri, H.N.; Alawadi, S.; Awaysheh, F.M.; Alkhabbas, F.; Zraqou, J. A cognitive deep learning approach for medical image processing. Sci. Rep. 2024, 14, 4539. [Google Scholar] [CrossRef] [PubMed]
  12. Fakhouri, H.; Awaysheh, F.; Alawadi, S.; Alkhalaileh, M.; Hamad, F. Four vector intelligent metaheuristic for data optimization. Computing 2024, 106, 2321–2359. [Google Scholar] [CrossRef]
  13. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471. [Google Scholar] [CrossRef]
  14. Wang, W.c.; Tian, W.c.; Xu, D.m.; Zang, H.f. Arctic puffin optimization: A bio-inspired metaheuristic algorithm for solving engineering design optimization. Adv. Eng. Softw. 2024, 195, 103694. [Google Scholar] [CrossRef]
  15. Zhang, J.; Sanderson, A.C. JADE: Adaptive differential evolution with optional external archive. IEEE Trans. Evol. Comput. 2009, 13, 945–958. [Google Scholar] [CrossRef]
  16. Hu, G.; Zheng, Y.; Abualigah, L.; Hussien, A.G. DETDO: An adaptive hybrid dandelion optimizer for engineering optimization. Adv. Eng. Inform. 2023, 57, 102004. [Google Scholar] [CrossRef]
  17. Saberi, A.; Khodaverdi, E.; Kamali, H.; Movaffagh, J.; Mohammadi, M.; Yari, D.; Moradi, A.; Hadizadeh, F. Fabrication and Characterization of Biomimetic Electrospun Cartilage Decellularized Matrix (CDM)/Chitosan Nanofiber Hybrid for Tissue Engineering Applications: Box-Behnken Design for Optimization. J. Polym. Environ. 2024, 32, 1573–1592. [Google Scholar] [CrossRef]
  18. Verma, P.; Parouha, R.P. Engineering Design Optimization Using an Advanced Hybrid Algorithm. Int. J. Swarm Intell. Res. (IJSIR) 2022, 13, 18. [Google Scholar] [CrossRef]
  19. Hashim, F.A.; Khurma, R.A.; Albashish, D.; Amin, M.; Hussien, A.G. Novel hybrid of AOA-BSA with double adaptive and random spare for global optimization and engineering problems. Alex. Eng. J. 2023, 73, 543–577. [Google Scholar] [CrossRef]
  20. Zhang, Z.; Ding, S.; Jia, W. A hybrid optimization algorithm based on cuckoo search and differential evolution for solving constrained engineering problems. Eng. Appl. Artif. Intell. 2019, 85, 254–268. [Google Scholar] [CrossRef]
  21. Sun, W. Hybrid Role-Engineering Optimization with Multiple Cardinality Constraints Using Natural Language Processing and Integer Linear Programming Techniques. Mob. Inf. Syst. 2022, 2022, 3453041. [Google Scholar] [CrossRef]
  22. Verma, P.; Parouha, R.P. An advanced hybrid algorithm for constrained function optimization with engineering applications. J. Ambient. Intell. Humaniz. Comput. 2023, 14, 8185–8217. [Google Scholar] [CrossRef]
  23. Panagant, N.; Ylldlz, M.; Pholdee, N.; Ylldlz, A.R.; Bureerat, S.; Sait, S.M. A novel hybrid marine predators-Nelder-Mead optimization algorithm for the optimal design of engineering problems. Mater. Test. 2021, 63, 453–457. [Google Scholar] [CrossRef]
  24. Yildiz, A.R.; Mehta, P. Manta ray foraging optimization algorithm and hybrid Taguchi salp swarm-Nelder-Mead algorithm for the structural design of engineering components. Mater. Test. 2022, 64, 706–713. [Google Scholar] [CrossRef]
  25. Duan, Y.; Yu, X. A collaboration-based hybrid GWO-SCA optimizer for engineering optimization problems. Expert Syst. Appl. 2023, 213, 119017. [Google Scholar] [CrossRef]
  26. Barshandeh, S.; Piri, F.; Sangani, S.R. HMPA: An innovative hybrid multi-population algorithm based on artificial ecosystem-based and Harris Hawks optimization algorithms for engineering problems. Eng. Comput. 2022, 38, 1581–1625. [Google Scholar] [CrossRef]
  27. Uray, E.; Carbas, S.; Geem, Z.W.; Kim, S. Parameters Optimization of Taguchi Method Integrated Hybrid Harmony Search Algorithm for Engineering Design Problems. Mathematics 2022, 10, 327. [Google Scholar] [CrossRef]
  28. Varaee, H.; Safaeian Hamzehkolaei, N.; Safari, M. A hybrid generalized reduced gradient-based particle swarm optimizer for constrained engineering optimization problems. J. Soft Comput. Civ. Eng. 2021, 5, 86–119. [Google Scholar] [CrossRef]
  29. Fakhouri, H.N.; Hudaib, A.; Sleit, A. Hybrid Particle Swarm Optimization with Sine Cosine Algorithm and Nelder–Mead Simplex for Solving Engineering Design Problems. Arab. J. Sci. Eng. 2020, 45, 3091–3109. [Google Scholar] [CrossRef]
  30. Dhiman, G. SSC: A hybrid nature-inspired meta-heuristic optimization algorithm for engineering applications. Knowledge-Based Syst. 2021, 222, 106926. [Google Scholar] [CrossRef]
  31. Kundu, T.; Garg, H. LSMA-TLBO: A hybrid SMA-TLBO algorithm with lévy flight based mutation for numerical optimization and engineering design problems. Adv. Eng. Softw. 2022, 172, 103185. [Google Scholar] [CrossRef]
  32. Yang, C.Y.; Wu, J.H.; Chung, C.H.; You, J.Y.; Yu, T.C.; Ma, C.J.; Lee, C.T.; Ueda, D.; Hsu, H.T.; Chang, E.Y. Optimization of Forward and Reverse Electrical Characteristics of GaN-on-Si Schottky Barrier Diode Through Ladder-Shaped Hybrid Anode Engineering. IEEE Trans. Electron Devices 2022, 69, 6644–6649. [Google Scholar] [CrossRef]
  33. Yang, X.; Guo, T.; Yu, M.; Chen, M. Optimization of engineering parameters of deflagration fracturing in shale reservoirs based on hybrid proxy model. Geoenergy Sci. Eng. 2023, 231, 212318. [Google Scholar] [CrossRef]
  34. Zhong, R.; Fan, Q.; Zhang, C.; Yu, J. Hybrid remora crayfish optimization for engineering and wireless sensor network coverage optimization. Clust. Comput. 2024, 27, 10141–10168. [Google Scholar] [CrossRef]
  35. Yildiz, A.R.; Erdaş, M.U. A new Hybrid Taguchi-salp swarm optimization algorithm for the robust design of real-world engineering problems. Mater. Test. 2021, 63, 157–162. [Google Scholar] [CrossRef]
  36. Cheng, J.; Lu, W.; Liu, Z.; Wu, D.; Gao, W.; Tan, J. Robust optimization of engineering structures involving hybrid probabilistic and interval uncertainties. Struct. Multidiscip. Optim. 2021, 63, 1327–1349. [Google Scholar] [CrossRef]
  37. Huang, J.; Hu, H. Hybrid beluga whale optimization algorithm with multi-strategy for functions and engineering optimization problems. J. Big Data 2024, 11, 1–55. [Google Scholar] [CrossRef]
  38. Tang, W.; Cao, L.; Chen, Y.; Chen, B.; Yue, Y. Solving Engineering Optimization Problems Based on Multi-Strategy Particle Swarm Optimization Hybrid Dandelion Optimization Algorithm. Biomimetics 2024, 9, 298. [Google Scholar] [CrossRef] [PubMed]
  39. Chagwiza, G.; Jones, B.; Hove-Musekwa, S.; Mtisi, S. A new hybrid matheuristic optimization algorithm for solving design and network engineering problems. Int. J. Manag. Sci. Eng. Manag. 2018, 13, 11–19. [Google Scholar] [CrossRef]
  40. Liu, H.; Duan, S.; Luo, H. A hybrid engineering algorithm of the seeker algorithm and particle swarm optimization. Mater. Test. 2022, 64, 1051–1089. [Google Scholar] [CrossRef]
  41. Adegboye, O.R.; Deniz Ülker, E. Hybrid artificial electric field employing cuckoo search algorithm with refraction learning for engineering optimization problems. Sci. Rep. 2023, 13, 4098. [Google Scholar] [CrossRef] [PubMed]
  42. Wang, S.; Jia, H.; Abualigah, L.; Liu, Q.; Zheng, R. An improved hybrid aquila optimizer and harris hawks algorithm for solving industrial engineering optimization problems. Processes 2021, 9, 1551. [Google Scholar] [CrossRef]
  43. Kundu, T.; Garg, H. A hybrid TLNNABC algorithm for reliability optimization and engineering design problems. Eng. Comput. 2022, 38, 5251–5295. [Google Scholar] [CrossRef]
  44. Knypiński, Ł.; Devarapalli, R.; Gillon, F. The hybrid algorithms in constrained optimization of the permanent magnet motors. IET Sci. Meas. Technol. 2024, 18, 455–461. [Google Scholar] [CrossRef]
  45. Dhiman, G. ESA: A hybrid bio-inspired metaheuristic optimization approach for engineering problems. Eng. Comput. 2021, 37, 323–353. [Google Scholar] [CrossRef]
  46. Jia, H.; Rao, H.; Wen, C.; Mirjalili, S. Crayfish optimization algorithm. Artif. Intell. Rev. 2023, 56, 1919–1979. [Google Scholar] [CrossRef]
  47. Jia, H.; Peng, X.; Lang, C. Remora optimization algorithm. Expert Syst. Appl. 2021, 185, 115665. [Google Scholar] [CrossRef]
  48. Falahah, I.A.; Al-Baik, O.; Alomari, S.; Bektemyssova, G.; Gochhait, S.; Leonova, I.; Malik, O.P.; Werner, F.; Dehghani, M. Frilled Lizard Optimization: A Novel Bio-Inspired Optimizer for Solving Engineering Applications. Comput. Mater. Contin. 2024, 79, 3631–3678. [Google Scholar] [CrossRef]
  49. Abualigah, L.; Yousri, D.; Abd Elaziz, M.; Ewees, A.A.; Al-Qaness, M.A.; Gandomi, A.H. Aquila optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250. [Google Scholar] [CrossRef]
  50. Alzoubi, S.; Abualigah, L.; Sharaf, M.; Daoud, M.S.; Khodadadi, N.; Jia, H. Synergistic swarm optimization algorithm. CMES-Comput. Model. Eng. Sci. 2024, 139, 2557. [Google Scholar] [CrossRef]
  51. Su, H.; Zhao, D.; Heidari, A.A.; Liu, L.; Zhang, X.; Mafarja, M.; Chen, H. RIME: A physics-based optimization. Neurocomputing 2023, 532, 183–214. [Google Scholar] [CrossRef]
  52. Fakhouri, H.N.; Hamad, F.; Alawamrah, A. Success history intelligent optimizer. J. Supercomput. 2022, 78, 6461–6502. [Google Scholar] [CrossRef]
  53. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  54. Aribowo, W.; Suprianto, B.; Kartini, U.T.; Prapanca, A. Dingo optimization algorithm for designing power system stabilizer. Indones. J. Electr. Eng. Comput. Sci. (IJEECS) 2023, 29, 1–7. [Google Scholar] [CrossRef]
  55. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  56. Seyyedabbasi, A.; Kiani, F. Sand Cat swarm optimization: A nature-inspired algorithm to solve global optimization problems. Eng. Comput. 2023, 39, 2627–2651. [Google Scholar] [CrossRef]
  57. Fan, J.; Li, Y.; Wang, T. An improved African vultures optimization algorithm based on tent chaotic mapping and time-varying mechanism. PLoS ONE 2021, 16, e0260725. [Google Scholar] [CrossRef] [PubMed]
  58. Mohammed, H.; Rashid, T. FOX: A FOX-inspired optimization algorithm. Appl. Intell. 2023, 53, 1030–1050. [Google Scholar] [CrossRef]
  59. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  60. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  61. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  62. Trojovská, E.; Dehghani, M.; Trojovský, P. Zebra optimization algorithm: A new bio-inspired optimization algorithm for solving optimization problems. IEEE Access 2022, 10, 49445–49473. [Google Scholar] [CrossRef]
  63. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S.; Faris, H. MTDE: An effective multi-trial vector-based differential evolution algorithm and its applications for engineering design problems. Appl. Soft Comput. 2020, 97, 106761. [Google Scholar] [CrossRef]
  64. Filiz, I.H.; Olguner, S.; Evyapan, E. A study on optimization of planetary gear trains. In Proceedings of the Special issue of the 3rd International Conference on Computational and Experimental Science and Engineering (ICCESEN 2016), Antalya, Turkey, 19–24 October 2016; Volume 132, pp. 728–733. [Google Scholar]
Figure 1. Flowchart of the Hybrid APO-JADE algorithm.
Figure 2. Convergence curve analysis with CEC2022 benchmark functions (F1–F8).
Figure 3. Convergence curve analysis with CEC2022 benchmark functions (F9–F12).
Figure 4. Search history analysis for CEC2022 (F1–F12).
Figure 5. Average Fitness analysis for CEC2022 (F1–F12).
Figure 6. APO-JADE first particle trajectory diagram for CEC2022 (F1–F12).
Figure 7. APO-JADE exploitation diagram for CEC2022 (F1–F12).
Figure 8. Diversity analysis for CEC2022 (F1–F12).
Figure 9. Box plot analysis of CEC2022 (F1–F12).
Figure 10. Sensitivity analysis of CEC2022 (F1–F12).
Figure 11. Histogram analysis of CEC2022 (F1–F12).
Figure 12. Planetary gear train design.
Figure 13. Three-Bar Truss Design.
Table 1. CEC2022 benchmark functions.
Problem No. | Problem Name | Dim | Range | F_min
F1 | Zakharov Function | 10 | [−100, 100] | 300
F2 | Rosenbrock's Function | 10 | [−100, 100] | 400
F3 | Schaffer's F7 | 10 | [−100, 100] | 600
F4 | Rastrigin's Function | 10 | [−100, 100] | 800
F5 | Levy Function | 10 | [−100, 100] | 900
F6 | Hybrid Function 1 | 10 | [−100, 100] | 1800
F7 | Hybrid Function 2 | 10 | [−100, 100] | 1900
F8 | Hybrid Function 3 | 10 | [−100, 100] | 2000
F9 | Composition Function 1 | 10 | [−100, 100] | 2300
F10 | Composition Function 2 | 10 | [−100, 100] | 2400
F11 | Composition Function 3 | 10 | [−100, 100] | 2600
F12 | Composition Function 4 | 10 | [−100, 100] | 2700
Table 2. Compared optimizers.
Acronym | Full Name | Year
FLO | Frilled Lizard Optimization [48] | 2024
AO | Aquila Optimizer [49] | 2021
SSOA | Synergistic Swarm Optimization Algorithm [50] | 2024
CPO | Crayfish Optimization Algorithm [46] | 2023
ROA | Remora Optimization Algorithm [47] | 2021
RIME | RIME: A Physics-Based Optimization [51] | 2023
FVIM | Four Vector Intelligent Metaheuristic [52] | 2022
SCA | Sine Cosine Algorithm [53] | 2016
DOA | Dingo Optimization Algorithm [54] | 2023
HHO | Harris Hawks Optimization [55] | 2019
SCSO | Sand Cat Optimization Algorithm [56] | 2023
OMA | Optimization of Multiphase Algorithm | 2022
AVOA | African Vultures Optimization Algorithm [57] | 2021
COA | Crayfish Optimization Algorithm [46] | 2023
FOX | Fox Optimization Algorithm [58] | 2023
GWO | Grey Wolf Optimizer [59] | 2014
WOA | Whale Optimization Algorithm [60] | 2016
MFO | Moth Flame Optimization [61] | 2015
SHIO | Success History Intelligent Optimizer [52] | 2022
ZOA | Zebra Optimization Algorithm [62] | 2022
MTDE | Multi-Trial Vector-Based Differential Evolution [63] | 2020
Table 3. Optimizers with parameter settings.
Optimizer | Parameter Settings
FLO | Diffusion constant = 1, Exploration weight = 0.6 [48]
SPBO | p_min = 0.2, Exploration decay = 0.5
AO | a = 5, μ = 0.499 [49]
SSOA | A = [2, 0], f_c = 2 [50]
TTHHO | Escape energy = 0.5, Convergence rate = 0.2
ChOA | Hunting pressure = 0.8, Escape rate = 0.4
CPO | C_3 = 3, μ = 25, σ = 3 [46]
ROA | C = 0.1 [47]
RIME | Internal parameter w = 5 [51]
SCA | a = 5, μ = 0.5 [53]
DOA | a = 5, b = 100, β = 0.5 [54]
SDE | Scaling factor F = 0.8, Crossover probability CR = 0.9
HHO | Escape energy E ∈ [−1, 1], Convergence constant = 0.5 [55]
SCSO | r_g ∈ [0, 2] [56]
OMA | a = 2, Local search weight = 0.5
AVOA | s = 0.5, Flight distance = 1.2 [57]
COA | C_3 = 3, μ = 25, σ = 3 [46]
FOX | f = 0.3, p = 0.7 [58]
GWO | Pack size = 30, α, β, δ leadership weights [59]
WOA | b = 1, Spiral coefficient = 1.5 [60]
MFO | Flame decay constant = 1.2, Levy flight parameter = 1.5 [61]
ZOA | Convergence rate a = [2, 0] [62]
BBO | Migration rate = 0.4, Mutation rate = 0.1
Table 4. Statistical comparison results of Congress on Evolutionary Computation (CEC) 2022.
FunMeasureJADEAPOFLOSTOASOASPBOAOSSOATTHHOChimpCPOROA
F1Mean300.009072.96877.201920.1939,702.412226.5515,024.07460.672385.853241.359754.18
Std0.001195.30600.091522.995971.71661.035291.74124.78990.942016.30809.51
SEM0.00534.56268.37681.102670.63295.622366.5455.80443.16901.72362.02
Rank12413153017278182025
F2Mean402.571182.73440.85442.391375.08441.191922.55557.62594.39432.20933.59
Std4.88670.3330.8939.54373.0467.95377.92175.85125.9336.05636.21
SEM2.18299.7813.8117.68166.8330.39169.0178.6456.3216.12284.52
Rank228151829173024261427
F3Mean600.00653.20605.85610.25678.21613.76658.90636.93636.53655.00648.58
Std0.0010.884.274.237.619.468.5612.827.3118.1616.22
SEM0.004.871.911.893.404.233.835.733.278.127.25
Rank12791230142923222825
F4Mean805.97859.21829.20820.73910.79826.18860.49829.26847.04832.88852.37
Std0.707.1011.477.068.127.8511.815.556.800.0813.43
SEM0.313.175.133.163.633.515.282.483.040.046.01
Rank12816630132917252026
F5Mean900.001342.96982.001039.824218.421052.091649.431478.541372.761641.311515.39
Std0.00145.4931.84110.90583.74124.12234.03109.64149.38185.74209.49
SEM0.0065.0714.2449.60261.0655.51104.6649.0366.8083.0693.69
Rank120101430152823222725
F6Mean1807.4870,773,129.5827,599.9329,245.19454,989,772.7811,049.68136,023,579.416254.731,375,673.355821.332,518,986.46
Std7.8245,936,702.2511,090.4616,864.13414,115,250.198997.5557,133,732.282270.631,847,548.632475.175,383,447.21
SEM3.5020,543,517.784959.817541.87185,197,969.994023.8325,550,981.841015.46826,248.861106.932,407,550.78
Rank128212230202919231824
F7Mean2000.002103.632040.692038.802202.612039.452122.282084.732061.822172.782101.92
Std0.0025.8313.6912.0548.357.4624.0421.524.5135.5719.77
SEM0.0011.556.125.3921.623.3410.759.622.0215.918.84
Rank12512930102723202924
F8Mean2204.652260.102228.832229.712328.672228.002346.342242.962307.422280.542246.98
Std2.8918.653.132.1261.062.8967.2218.6563.7471.0026.51
SEM1.298.341.400.9527.311.2930.068.3428.5131.7511.86
Rank123131428122920272621
F9Mean2529.282808.222551.132558.412751.102587.302822.712600.802582.092549.812704.90
Std0.0092.1836.1024.7256.7139.6885.4727.8920.5712.7146.92
SEM0.0041.2316.1511.0625.3617.7538.2212.479.205.6820.98
Rank1299112817301815827
F10Mean2500.322595.272500.612528.182587.892593.612700.022551.312740.252744.102617.03
Std0.0396.980.1461.4714.3852.1469.2369.06521.58253.8299.97
SEM0.0243.370.0627.496.4323.3230.9630.89233.26113.5144.71
Rank1203918192611282921
F11Mean2630.093712.752830.992745.733718.212698.953579.672825.403433.802834.393046.23
Std67.27415.52214.6025.35592.99139.24444.31140.25286.00158.92180.10
SEM30.09185.8395.9711.34265.1962.27198.7062.72127.9071.0780.54
Rank3291883052817271926
F12Mean2863.693150.982865.202864.712893.842868.803118.922893.162872.272890.702888.44
Std0.54141.890.991.747.743.37117.5610.9213.2714.6517.90
SEM0.2463.450.440.783.461.5152.584.895.946.558.01
Rank130632282921142018
Table 5. Statistical comparison results of Congress on Evolutionary Computation (CEC) 2022.
FunMeasureCOACMAESFOXGWOWOAMFOSHIOZOAMTDEBBO
F1Mean674.0324,779.12648.241972.9120,054.165150.766520.75764.6410,598.09300.09
Std497.4015,963.94483.421689.119073.093913.803846.63617.264855.850.05
SEM222.447139.29216.19755.394057.611750.301720.26276.052171.600.02
Rank102991628212212265
F2Mean416.33588.87418.81430.81451.89416.74428.72440.93485.91403.68
Std33.2964.9529.1328.1568.9119.6519.6540.5019.702.27
SEM14.8929.0513.0312.5930.828.798.7918.118.811.01
Rank6259111971016223
F3Mean600.57621.47652.61600.52630.44604.38609.81625.59620.21600.22
Std0.4529.9810.240.6316.534.264.504.133.860.49
SEM0.2013.414.580.287.391.902.011.851.730.22
Rank6172652181119164
F4Mean824.61822.53833.63814.43834.01840.99821.10814.27858.37820.89
Std5.839.893.023.8912.4220.319.708.3612.539.72
SEM2.614.421.351.745.569.084.343.745.604.35
Rank1211214222382277
F5Mean909.12900.001500.97932.111725.751082.38941.331033.731356.56974.20
Std11.950.0085.4769.71971.82312.6927.8498.73218.92136.22
SEM5.350.0038.2231.18434.61139.8412.4544.1597.9060.92
Rank612472916813219
F6Mean4557.7030,665,655.834439.163552.023032.534897.174311.562307.202,660,772.462455.26
Std1503.8241,808,974.442410.341382.622094.772298.45860.57697.971,875,669.65994.42
SEM672.5318,697,541.781077.93618.32936.811027.90384.86312.14838,824.97444.72
Rank1527148616124255
F7Mean2018.632114.932144.812044.572069.692033.242052.002039.962058.332029.87
Std8.3780.5160.3618.5415.5915.5220.3121.5117.455.70
SEM3.7436.0126.998.296.976.949.089.627.802.55
Rank32628142171511176
F8Mean2224.722262.872460.292224.652232.252227.742265.612224.312236.902220.65
Std3.3122.92126.612.173.696.5251.062.192.640.43
SEM1.4810.2556.620.971.652.9222.830.981.180.19
Rank7243061510255184
F9Mean2529.282538.942626.012557.372621.152534.052658.232637.662602.482529.28
Std0.007.7234.3937.4463.478.6840.3627.3631.830.00
SEM0.003.4515.3816.7428.393.8818.0512.2314.230.00
Rank4723102262625192
F10Mean2771.592729.962657.352567.712563.242581.162623.792577.912509.462523.72
Std404.37220.0491.6761.4888.2974.1415.1970.346.4651.81
SEM180.8498.4041.0027.4939.4933.166.7931.462.8923.17
Rank302724131215221467
F11Mean2750.402887.192770.112798.292847.252699.172780.292743.542888.942600.03
Std150.1727.40178.87215.43158.6291.0982.8374.28225.510.01
SEM67.1612.2579.9996.3470.9440.7337.0433.22100.850.01
Rank9231214206137241
F12Mean2869.092873.773010.742871.692896.062865.652915.542936.532879.352888.84
Std7.851.2867.9810.9531.423.0631.3023.024.4310.94
SEM3.510.5730.404.9014.051.3714.0010.301.984.89
Rank915281223724271719
Table 6. Statistical comparison results of Congress on Evolutionary Computation (CEC) 2022.
| Fun | Measure | RIME | FVIM | SCA | DOA | SDE | HHO | SCSO | OMA | AVOA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| F1 | Mean | 300.09 | 2906.65 | 1243.81 | 8171.97 | 300.05 | 316.98 | 709.52 | 300.00 | 301.32 |
| F1 | Std | 0.12 | 2589.27 | 413.13 | 5251.31 | 0.06 | 10.81 | 561.28 | 0.00 | 2.94 |
| F1 | SEM | 0.05 | 1157.96 | 184.76 | 2348.46 | 0.03 | 4.84 | 251.01 | 0.00 | 1.31 |
| F1 | Rank | 4 | 19 | 14 | 23 | 3 | 7 | 11 | 2 | 6 |
| F2 | Mean | 405.47 | 431.46 | 456.62 | 500.90 | 404.50 | 467.58 | 431.25 | 401.65 | 417.32 |
| F2 | Std | 4.73 | 34.77 | 21.90 | 102.98 | 3.17 | 68.03 | 34.17 | 2.17 | 26.54 |
| F2 | SEM | 2.11 | 15.55 | 9.79 | 46.05 | 1.42 | 30.42 | 15.28 | 0.97 | 11.87 |
| F2 | Rank | 5 | 13 | 20 | 23 | 4 | 21 | 12 | 1 | 8 |
| F3 | Mean | 600.07 | 609.17 | 618.34 | 627.28 | 601.35 | 638.27 | 622.81 | 600.15 | 611.16 |
| F3 | Std | 0.04 | 10.30 | 1.37 | 9.61 | 1.63 | 12.23 | 17.05 | 0.31 | 5.22 |
| F3 | SEM | 0.02 | 4.61 | 0.61 | 4.30 | 0.73 | 5.47 | 7.62 | 0.14 | 2.33 |
| F3 | Rank | 2 | 10 | 15 | 20 | 7 | 24 | 18 | 3 | 13 |
| F4 | Mean | 821.49 | 814.28 | 843.19 | 829.09 | 819.59 | 827.56 | 829.87 | 821.83 | 830.25 |
| F4 | Std | 6.70 | 3.17 | 8.84 | 6.63 | 8.26 | 7.51 | 2.42 | 3.04 | 11.87 |
| F4 | SEM | 2.99 | 1.42 | 3.95 | 2.96 | 3.69 | 3.36 | 1.08 | 1.36 | 5.31 |
| F4 | Rank | 9 | 3 | 24 | 15 | 5 | 14 | 18 | 10 | 19 |
| F5 | Mean | 900.18 | 984.34 | 1000.63 | 1150.00 | 900.11 | 1524.28 | 1103.10 | 901.62 | 1104.87 |
| F5 | Std | 0.28 | 70.27 | 25.55 | 185.19 | 0.06 | 58.97 | 272.96 | 2.79 | 109.06 |
| F5 | SEM | 0.13 | 31.43 | 11.43 | 82.82 | 0.03 | 26.37 | 122.07 | 1.25 | 48.77 |
| F5 | Rank | 4 | 11 | 12 | 19 | 3 | 26 | 17 | 5 | 18 |
| F6 | Mean | 4399.19 | 3495.87 | 5,808,489.95 | 4947.62 | 1849.91 | 4132.32 | 3857.61 | 2146.41 | 4081.42 |
| F6 | Std | 2599.33 | 1458.86 | 5,096,731.30 | 3138.47 | 29.20 | 1659.07 | 1521.81 | 392.62 | 1340.72 |
| F6 | SEM | 1162.45 | 652.42 | 2,279,327.53 | 1403.57 | 13.06 | 741.96 | 680.57 | 175.59 | 599.59 |
| F6 | Rank | 13 | 7 | 26 | 17 | 2 | 11 | 9 | 3 | 10 |
| F7 | Mean | 2012.67 | 2038.62 | 2059.40 | 2075.70 | 2026.35 | 2058.66 | 2054.84 | 2021.90 | 2040.97 |
| F7 | Std | 10.92 | 17.55 | 18.40 | 41.53 | 3.74 | 23.57 | 21.75 | 7.85 | 24.01 |
| F7 | SEM | 4.88 | 7.85 | 8.23 | 18.57 | 1.67 | 10.54 | 9.73 | 3.51 | 10.74 |
| F7 | Rank | 2 | 8 | 19 | 22 | 5 | 18 | 16 | 4 | 13 |
| F8 | Mean | 2214.10 | 2251.27 | 2234.17 | 2227.88 | 2216.69 | 2237.81 | 2226.05 | 2224.84 | 2233.58 |
| F8 | Std | 9.48 | 54.36 | 2.82 | 6.17 | 8.32 | 11.20 | 5.25 | 3.24 | 12.68 |
| F8 | SEM | 4.24 | 24.31 | 1.26 | 2.76 | 3.72 | 5.01 | 2.35 | 1.45 | 5.67 |
| F8 | Rank | 2 | 22 | 17 | 11 | 3 | 19 | 9 | 8 | 16 |
| F9 | Mean | 2558.67 | 2603.74 | 2575.75 | 2632.52 | 2529.28 | 2609.01 | 2585.77 | 2529.28 | 2558.67 |
| F9 | Std | 65.71 | 36.24 | 25.24 | 20.68 | 0.00 | 63.07 | 32.59 | 0.00 | 65.71 |
| F9 | SEM | 29.39 | 16.21 | 11.29 | 9.25 | 0.00 | 28.21 | 14.58 | 0.00 | 29.39 |
| F9 | Rank | 13 | 20 | 14 | 24 | 3 | 21 | 16 | 5 | 12 |
| F10 | Mean | 2549.96 | 2688.33 | 2502.81 | 2631.53 | 2500.38 | 2581.96 | 2526.85 | 2500.61 | 2585.73 |
| F10 | Std | 68.06 | 300.34 | 0.70 | 133.65 | 0.09 | 74.15 | 58.80 | 0.13 | 77.72 |
| F10 | SEM | 30.44 | 134.32 | 0.31 | 59.77 | 0.04 | 33.16 | 26.29 | 0.06 | 34.76 |
| F10 | Rank | 10 | 25 | 5 | 23 | 2 | 16 | 8 | 4 | 17 |
| F11 | Mean | 2630.70 | 2878.35 | 2866.53 | 2913.21 | 2610.62 | 2755.09 | 2807.27 | 2758.15 | 2807.11 |
| F11 | Std | 66.93 | 195.35 | 199.28 | 75.02 | 14.82 | 111.66 | 167.58 | 10.26 | 220.35 |
| F11 | SEM | 29.93 | 87.36 | 89.12 | 33.55 | 6.63 | 49.93 | 74.95 | 4.59 | 98.54 |
| F11 | Rank | 4 | 22 | 21 | 25 | 2 | 10 | 16 | 11 | 15 |
| F12 | Mean | 2864.87 | 2871.79 | 2869.36 | 2926.84 | 2863.93 | 2927.13 | 2874.57 | 2869.62 | 2865.05 |
| F12 | Std | 2.00 | 3.75 | 1.68 | 69.58 | 1.35 | 42.65 | 10.82 | 2.53 | 2.41 |
| F12 | SEM | 0.89 | 1.68 | 0.75 | 31.12 | 0.60 | 19.07 | 4.84 | 1.13 | 1.08 |
| F12 | Rank | 4 | 13 | 10 | 25 | 2 | 26 | 16 | 11 | 5 |
Table 7. Results of the Three-Bar Truss Design optimization problem.
| Optimizer | Best F | x1 | x2 |
| --- | --- | --- | --- |
| APO-JADE | 1.339959 | 6.017209 | 5.316636 |
| FOX | 1.339962 | 6.029312 | 5.305661 |
| GWO | 1.340021 | 6.049798 | 5.3041 |
| SMA | 1.339982 | 6.041642 | 5.29299 |
| AVOA | 1.339995 | 6.042115 | 5.319658 |
| HGS | 1.340285 | 5.946415 | 5.324573 |
| WSO | 1.339956 | 6.015926 | 5.309176 |
| DBO | 1.340462 | 6.07012 | 5.289151 |
| MFO | 1.340154 | 6.047904 | 5.32697 |
| COOT | 1.340588 | 5.952375 | 5.329455 |
| MVO | 1.340813 | 6.005045 | 5.346861 |
| AO | 1.343217 | 6.259365 | 5.263324 |
| HHO | 1.341548 | 6.192283 | 5.161075 |
| SA | 1.340366 | 5.926017 | 5.344088 |
| ChOA | 1.353242 | 5.889933 | 5.586923 |
| BWO | 1.353168 | 5.961007 | 5.7868 |
| SCA | 1.37952 | 6.100923 | 4.739083 |
| COA | 1.414806 | 5.411593 | 5.341151 |
| OOA | 1.49929 | 5.033407 | 5.455155 |
| WOA | 1.512381 | 5.003057 | 7.375842 |
| BO | 1.601885 | 9.338224 | 9.939113 |
| GA | 1.524657 | 4.709916 | 5.967707 |
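For context, the three-bar truss problem in its widely used textbook form minimizes structural weight subject to three stress constraints. The variable ranges in Table 7 suggest this paper uses a differently scaled parameterization, so the sketch below, with the classic constants L = 100, P = 2, and allowable stress 2, illustrates the problem class rather than the exact objective optimized above:

```python
import math

L, P, SIGMA = 100.0, 2.0, 2.0  # bar length, applied load, allowable stress

def weight(x1, x2):
    """Objective: structure weight, proportional to member cross-sections."""
    return (2.0 * math.sqrt(2.0) * x1 + x2) * L

def constraints(x1, x2):
    """Three stress constraints; the design is feasible when all are <= 0."""
    d = math.sqrt(2.0) * x1 * x1 + 2.0 * x1 * x2
    g1 = (math.sqrt(2.0) * x1 + x2) / d * P - SIGMA
    g2 = x2 / d * P - SIGMA
    g3 = P / (math.sqrt(2.0) * x2 + x1) - SIGMA
    return (g1, g2, g3)

def penalized(x1, x2, rho=1e6):
    """Static-penalty objective, a common way to hand constraints to a metaheuristic."""
    return weight(x1, x2) + rho * sum(max(0.0, g) for g in constraints(x1, x2))
```

At the classic optimum x ≈ (0.7887, 0.4082), the weight is about 263.9 and the first stress constraint is active, which is why penalty-based handling is the usual choice for this benchmark.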
Table 8. Statistical results of the Three-Bar Truss Design optimization problem.
| Optimizer | Mean | Std | Min | Max | Time | Rank |
| --- | --- | --- | --- | --- | --- | --- |
| APO-JADE | 1.34 | 5.79 × 10^−5 | 1.339959 | 1.340101 | 0.287393 | 1 |
| FOX | 1.340001 | 4.61 × 10^−5 | 1.339962 | 1.340079 | 0.316325 | 2 |
| GWO | 1.340052 | 3.35 × 10^−5 | 1.340021 | 1.340102 | 0.305842 | 3 |
| SMA | 1.340226 | 0.000229 | 1.339982 | 1.340586 | 0.506082 | 4 |
| AVOA | 1.340281 | 0.000278 | 1.339995 | 1.340595 | 0.336832 | 5 |
| HGS | 1.340547 | 0.000346 | 1.340285 | 1.341138 | 0.28181 | 6 |
| WSO | 1.34061 | 0.001461 | 1.339956 | 1.343224 | 0.257556 | 7 |
| DBO | 1.340808 | 0.000389 | 1.340462 | 1.341479 | 0.344311 | 8 |
| MFO | 1.341458 | 0.001184 | 1.340154 | 1.343152 | 0.29497 | 9 |
| COOT | 1.341584 | 0.001556 | 1.340588 | 1.344343 | 0.304864 | 10 |
| MVO | 1.341609 | 0.000717 | 1.340813 | 1.342728 | 0.347133 | 11 |
| AO | 1.346045 | 0.003281 | 1.343217 | 1.351575 | 0.590193 | 12 |
| HHO | 1.348612 | 0.005612 | 1.341548 | 1.356826 | 0.662697 | 13 |
| SA | 1.349451 | 0.010565 | 1.340366 | 1.367014 | 0.5275 | 14 |
| ChOA | 1.366061 | 0.007504 | 1.353242 | 1.372855 | 0.568655 | 15 |
| BWO | 1.369965 | 0.017329 | 1.353168 | 1.397601 | 0.367454 | 16 |
| SCA | 1.399921 | 0.015393 | 1.37952 | 1.419273 | 0.271111 | 17 |
| COA | 1.450365 | 0.057703 | 1.414806 | 1.553001 | 0.690296 | 18 |
| OOA | 1.534791 | 0.022988 | 1.49929 | 1.55805 | 0.568777 | 19 |
| WOA | 1.734372 | 0.180097 | 1.512381 | 1.975023 | 0.291645 | 20 |
| BO | 2.10571 | 0.359286 | 1.601885 | 2.477508 | 2.664379 | 21 |
| GA | 2.128989 | 0.523069 | 1.524657 | 2.923766 | 0.313305 | 22 |
Table 9. Results of the planetary gear train design optimization problem.
| Optimizer | Best F | x1 | x2 | x3 | x4 | x5 | x6 | x7 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| APO-JADE | 0.525769 | 35.4056 | 26.09467 | 24.50622 | 24.49569 | 22.16595 | 87.27599 | 0.552916 |
| GWO | 0.526281 | 37.08876 | 22.04243 | 19.60519 | 24.06343 | 24.83011 | 86.85415 | 0.56138 |
| SMA | 0.537059 | 51.89215 | 31.39729 | 13.51 | 16.81067 | 13.73118 | 62.34633 | 3.35882 |
| WSO | 0.53 | 46.92745 | 28.58478 | 23.84963 | 27.74762 | 29.71729 | 99.81658 | 1.411459 |
| WOA | 0.53 | 38.56415 | 28.43977 | 24.66321 | 24.89985 | 24.16159 | 91.37937 | 0.855906 |
| COOT | 0.537059 | 23.58742 | 14.1981 | 13.71687 | 16.68069 | 13.70893 | 61.84528 | 0.539953 |
| ChOA | 0.530858 | 48.29306 | 31.82713 | 27.43576 | 28.66865 | 23.84089 | 105.0577 | 1.600849 |
| MVO | 0.543846 | 49.22918 | 25.86212 | 19.22518 | 25.73422 | 21.99727 | 94.81407 | 0.801071 |
| CSA | 0.537059 | 21.18168 | 13.80307 | 16.30397 | 17.26642 | 16.06311 | 61.69202 | 1.466052 |
| HGS | 0.537059 | 39.33023 | 22.72602 | 13.51 | 16.51 | 13.51 | 62.28936 | 0.9005 |
| SCA | 0.626873 | 16.51 | 13.51 | 25.04317 | 23.41888 | 13.51 | 80.32202 | 0.766665 |
| OOA | 0.774379 | 34.26634 | 24.48169 | 24.44834 | 26.08502 | 22.86613 | 79.74747 | 1.188417 |
| MFO | 0.600415 | 28.44194 | 16.78863 | 13.51 | 16.51 | 13.51403 | 58.92139 | 0.51 |
| BWO | 0.868333 | 16.51 | 13.89767 | 13.51 | 16.99064 | 13.51 | 65.36427 | 0.51 |
| DBO | 0.668876 | 26.09791 | 15.68695 | 13.51 | 16.51 | 13.51 | 55.76995 | 0.553839 |
Table 10. Statistical results of the planetary gear train design optimization problem.
| Optimizer | Mean | Std | Min | Max | Time | Rank |
| --- | --- | --- | --- | --- | --- | --- |
| APO-JADE | 0.534044 | 0.008255 | 0.525769 | 0.546293 | 0.346536 | 1 |
| GWO | 0.536622 | 0.008531 | 0.526281 | 0.546226 | 0.314501 | 2 |
| SMA | 0.537747 | 0.001538 | 0.537059 | 0.540499 | 0.495669 | 3 |
| WSO | 0.538528 | 0.009277 | 0.53 | 0.550827 | 0.265397 | 4 |
| WOA | 0.543474 | 0.012613 | 0.53 | 0.56027 | 0.317191 | 5 |
| COOT | 0.549901 | 0.009018 | 0.537059 | 0.560148 | 0.326915 | 6 |
| ChOA | 0.55487 | 0.047682 | 0.530858 | 0.640035 | 0.592802 | 7 |
| MVO | 0.556714 | 0.009149 | 0.543846 | 0.569722 | 0.311267 | 8 |
| CSA | 0.575204 | 0.044916 | 0.537059 | 0.626842 | 0.311805 | 9 |
| HGS | 0.632534 | 0.134272 | 0.537059 | 0.831176 | 0.323582 | 10 |
| SCA | 0.728357 | 0.085562 | 0.626873 | 0.850079 | 0.309534 | 11 |
| OOA | 0.91252 | 0.127154 | 0.774379 | 1.106128 | 0.597909 | 12 |
| MFO | 0.959623 | 0.632534 | 0.600415 | 2.087053 | 0.294518 | 13 |
| BWO | 2 × 10^19 | 4.47 × 10^19 | 0.868333 | 1 × 10^20 | 0.400448 | 14 |
| DBO | 3.6 × 10^20 | 4.93 × 10^20 | 0.668876 | 9 × 10^20 | 0.341972 | 15 |
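The Rank columns in Tables 8 and 10 appear to follow from ordering the optimizers by mean best fitness, with rank 1 assigned to the lowest mean. A minimal sketch of that ranking rule, using a few illustrative values taken from Table 10:

```python
# Mean best-fitness values for a subset of optimizers (from Table 10)
means = {
    "APO-JADE": 0.534044,
    "GWO": 0.536622,
    "SMA": 0.537747,
    "MFO": 0.959623,
}

ranked = sorted(means, key=means.get)                   # ascending mean fitness
ranks = {name: i + 1 for i, name in enumerate(ranked)}  # rank 1 = best
```

For the per-function rank rows in Tables 5 and 6, the same rule would be applied function by function across all compared algorithms.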
