Article

Hybrid Artificial Protozoa-Based JADE for Attack Detection

by Ahmad K. Al Hwaitat 1,* and Hussam N. Fakhouri 2

1 King Abdullah II School of Information Technology, The University of Jordan, Amman 11942, Jordan
2 Data Science and Artificial Intelligence Department, Faculty of Information Technology, University of Petra, Amman 11196, Jordan
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(18), 8280; https://doi.org/10.3390/app14188280
Submission received: 5 August 2024 / Revised: 27 August 2024 / Accepted: 30 August 2024 / Published: 13 September 2024
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract:
This paper presents a novel hybrid optimization algorithm that combines JADE (Adaptive Differential Evolution) with the Artificial Protozoa Optimizer (APO) to solve complex optimization problems and detect attacks. The proposed Hybrid APO-JADE Algorithm leverages JADE’s adaptive exploration capabilities and APO’s intensive exploitation strategies, ensuring a robust search process that balances global and local optimization. Initially, the algorithm employs JADE’s mutation and crossover operations, guided by adaptive control parameters, to explore the search space and prevent premature convergence. As the optimization progresses, a dynamic transition to the APO mechanism is implemented, where Levy flights and adaptive change factors are utilized to refine the best solutions identified during the exploration phase. This integration of exploration and exploitation phases enhances the algorithm’s ability to converge to high-quality solutions efficiently. The performance of APO-JADE was verified via experimental simulations and compared with state-of-the-art algorithms on the IEEE Congress on Evolutionary Computation (CEC) 2021 and 2022 benchmark suites. The results indicate that APO-JADE outperformed the compared algorithms. To demonstrate its practicality, the proposed APO-JADE was applied to a real-world attack-detection problem and tested on the DS2OS, UNSW-NB15, and ToNIoT datasets, confirming its robust performance.

1. Introduction

As modern problems become more intricate and their outcomes more nuanced, the need for advanced optimization algorithms has surged. At its core, optimization is about finding the most efficient solution to a problem, whether that is the minimum or the maximum of a particular function [1]. The solution, represented by a variable x, lies within a predefined search space and is refined over several iterations to approach the best possible outcome, known as the optimal solution. Each round of optimization tweaks the solution based on a set of heuristics or rules, aiming to improve it compared to the previous best. However, challenges arise in ensuring these algorithms do not become trapped in local optima—solutions that are best in a nearby region but not in the entire search space—or stray too far from the solution range. As such, the design of these algorithms requires a delicate balance, updating solutions carefully to navigate toward the global optimum, the absolute best solution across the entire landscape. This iterative, heuristic-based approach is at the core of metaheuristic optimization, a method that is increasingly vital in navigating complex, multi-dimensional problem spaces across various disciplines [2].
Metaheuristic algorithms stand out for their simplicity and adaptability, especially when contrasted with traditional optimization methods. Traditional approaches like linear, integer, and mixed programming are well suited for structured problems with clear definitions and constraints [3]. They offer precise solutions and are analytically approachable, revealing insights into their computational complexity and convergence behaviors. However, these methods often falter in complex scenarios characterized by multiple potential solutions or extremes. Here, metaheuristic algorithms shine by offering a versatile balance between exploration and exploitation [4]. They adeptly navigate through multiple solutions, efficiently avoiding local optima—suboptimal points that are better than adjacent possibilities but inferior to the best overall solution. This flexibility enhances their ability to locate the global optimum or the absolute best solution across the entire problem space. The strategic design of metaheuristics allows them to adapt and perform where traditional methods might struggle, making them particularly effective for a wide array of complex optimization problems [5].
The effectiveness and versatility of metaheuristic algorithms are evidenced by their widespread adoption across various sectors. Their ability to reliably find global optima makes them invaluable, particularly as they are not overly reliant on initial conditions and demonstrate robust performance across different solution domains. Such features make them highly adaptable and robust, leading to significant applications in complex, real-world problems. For instance, they have been successfully employed in optimizing travel routes, enhancing image segmentation, streamlining ship routing and scheduling, selecting features in datasets, and more [6].
Notably, the development of metaheuristic algorithms is inspired by a range of natural phenomena, leading to a rich diversity of approaches categorized broadly into swarm intelligence, physics-based, evolutionary, and human-inspired algorithms. Each category draws from different aspects of natural and human-made systems, leveraging their unique principles to navigate the search space efficiently. The continuous evolution and specialization of these algorithms have made them a go-to solution for problems that traditional optimization methods struggle to solve, highlighting their growing importance in both academic research and practical applications [7].
The No Free Lunch (NFL) theorem for optimization, proposed by Wolpert and Macready, is a fundamental principle that underscores the limitations of metaheuristic algorithms. It states that no single optimization algorithm is universally superior for all problems. Essentially, the theorem posits that when averaged over all possible problems, all algorithms perform equally well. This means that an algorithm showing excellent performance on certain types of problems may not necessarily perform as well on others [8].
The NFL theorem emphasizes the importance of understanding the specific nature and requirements of the problem at hand when selecting or designing an optimization algorithm. It suggests that the effectiveness of an algorithm is highly dependent on how well suited its mechanisms are to the particularities of the problem’s landscape. As a result, the field of optimization involves a constant search for new algorithms or the adaptation of existing ones to better fit the unique challenges of different problems [9].
This understanding has significant implications for the design and application of optimization methods. It encourages diversity in algorithm development and promotes a more nuanced approach to algorithm selection and problem solving. Instead of seeking a “one-size-fits-all” solution, researchers and practitioners are more inclined to consider a variety of strategies, tailor approaches to specific contexts, and remain open to innovation and adaptation in algorithm design [10].

1.1. Contributions

The key contributions of this paper are as follows:
  • Proposes a novel Hybrid APO-JADE optimization algorithm that integrates the strengths of the Artificial Protozoa Optimizer (APO) and JADE (Adaptive Differential Evolution) to effectively balance exploration and exploitation in optimization tasks.
  • Demonstrates the effectiveness of the Hybrid APO-JADE algorithm through comprehensive testing across multiple benchmark suites from the IEEE Congress on Evolutionary Computation (CEC) for the years 2017, 2021, and 2022, showcasing superior performance in terms of convergence rates and solution accuracy.
  • Applies the Hybrid APO-JADE algorithm to a real-world problem in cybersecurity, specifically for attack detection using the DS2OS, UNSW-NB15, and ToNIoT datasets, highlighting the algorithm’s practical applicability and robustness in diverse scenarios.
  • Introduces an innovative approach to hyperparameter tuning using the APO-JADE algorithm, enhancing the performance of deep learning models in detecting cybersecurity threats across different datasets.
  • Provides detailed experimental results and analyses, including accuracy, precision, recall, and F1 score metrics, to validate the proposed algorithm’s effectiveness in both binary and multiclass classification tasks.

1.2. Paper Structure

The rest of this paper is organized as follows. Section 2 contains the Literature Review, providing an overview of the foundational components, including the Artificial Protozoa Optimizer (APO), Differential Evolution (DE), and JADE: Adaptive Differential Evolution with an Optional External Archive. Section 3 presents the Hybrid APO-JADE Optimization Algorithm, covering aspects such as initialization, behavioral strategies (foraging, dormancy, reproduction, autotroph, and heterotroph), adaptive mechanisms, and the integration of JADE with APO. Next, Section 4 evaluates the algorithm’s performance across multiple IEEE Congress on Evolutionary Computation (CEC) benchmark suites from 2017, 2021, and 2022, using various metrics and diagrams to illustrate the results. Section 5 presents the case study on the application of APO-JADE for attack detection, demonstrating the algorithm’s use in cybersecurity on the DS2OS, UNSW-NB15, and ToNIoT datasets, with detailed discussions of data preparation, pre-processing, normalization, and hyperparameter tuning. Section 5.8 discusses the metrics used to evaluate the model, including accuracy, precision, recall, and F1 score. This is followed by Section 5.9, which outlines the experimental setup, methodology, and results, with a particular focus on the DS2OS dataset. The paper closes with the Conclusion, which summarizes the findings and highlights the contributions of the Hybrid APO-JADE algorithm.

2. Literature Review

2.1. Overview of Artificial Protozoa Optimizer (APO)

The Artificial Protozoa Optimizer (APO) [11] is a novel bio-inspired metaheuristic algorithm designed for engineering optimization. The algorithm mimics the survival behaviors of protozoa, specifically their foraging, dormancy, and reproductive activities. The APO is inspired by the biological characteristics and behaviors of protozoa, particularly the Euglena species. Protozoa exhibit both autotrophic (photosynthesis) and heterotrophic (consumption of organic matter) behaviors, making them versatile in different environmental conditions. This adaptability is harnessed in the APO to perform effective optimization in both continuous and discrete spaces. The APO simulates three primary behaviors: foraging, dormancy, and reproduction. Foraging behavior is modeled by both autotrophic and heterotrophic modes. In the autotrophic mode, protozoa move toward suitable light conditions for photosynthesis. In the heterotrophic mode, they move toward areas rich in organic nutrients. Dormancy occurs when environmental conditions are unfavorable, leading protozoa to enter a dormant state by forming cysts to survive until conditions improve. Reproduction is modeled as asexual reproduction through binary fission, creating identical offspring. These behaviors are mathematically modeled to balance exploration and exploitation during the optimization process. The APO uses mathematical models to simulate the above behaviors: Autotrophic Foraging, where protozoa move toward light conditions suitable for photosynthesis, modeled using a foraging factor and the random selection of neighboring protozoa; Heterotrophic Foraging, where protozoa move toward nutrient-rich areas, with movements influenced by environmental factors and nearby food sources; Dormancy, where protozoa enter a dormant state, represented by randomly generating new positions to simulate surviving harsh conditions; and Reproduction, where protozoa split into two, with slight perturbations to simulate genetic diversity.

2.2. Overview of Differential Evolution (DE)

Differential Evolution (DE) is a population-based metaheuristic optimization algorithm introduced by Storn and Price in 1995. It is widely recognized for its simplicity and effectiveness in solving continuous optimization problems. DE is inspired by the principles of natural evolution, particularly the processes of mutation, crossover, and selection.
DE begins with a randomly generated population of candidate solutions, each represented by a vector of parameters. The population size remains constant throughout the optimization process. Each individual in the population is evaluated using a fitness function, which quantifies the quality of the solution with respect to the optimization objective.
For each individual in the population, a mutant vector is created by adding the weighted difference between two randomly selected population vectors to a third vector. This mutation operation is critical for introducing diversity into the population and is mathematically expressed as
v_i = x_{r_1} + F \cdot (x_{r_2} - x_{r_3}),
where v_i is the mutant vector; x_{r_1}, x_{r_2}, and x_{r_3} are randomly selected vectors from the population (with r_1 \neq r_2 \neq r_3 \neq i); and F is a mutation factor typically in the range [0, 2]. This operation helps with exploring the search space effectively by generating new candidate solutions that are different from the existing ones.
A crossover operation is then performed between the mutant vector and the target vector (the current population member) to generate a trial vector. This operation ensures that the trial vector inherits parameters from both the target and mutant vectors. The binomial crossover is commonly used, which is defined as
u_{i,j} = \begin{cases} v_{i,j} & \text{if } \mathrm{rand}(0,1) \le C_r \text{ or } j = j_{rand}, \\ x_{i,j} & \text{otherwise}, \end{cases}
where u_{i,j} is the trial vector, v_{i,j} is the mutant vector, x_{i,j} is the target vector, C_r is the crossover probability, and j_{rand} is a randomly chosen index to ensure that at least one parameter is taken from the mutant vector. This crossover mechanism is crucial for combining the explorative power of the mutation operation with the exploitative power of the target vector.
The trial vector is then compared to the target vector using a selection mechanism based on their fitness values. If the trial vector has a lower (or higher, depending on the optimization objective) objective function value than the target vector, it replaces the target vector in the next generation. This can be represented as
x_i^{(t+1)} = \begin{cases} u_i & \text{if } f(u_i) \le f(x_i), \\ x_i & \text{otherwise}, \end{cases}
where x_i^{(t+1)} is the individual in the next generation, u_i is the trial vector, and f(\cdot) is the objective function. This selection process ensures that only the best solutions are carried forward to the next generation, guiding the population toward better solutions over successive generations.
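To make the DE generation step concrete, the following is a minimal NumPy sketch of one DE/rand/1/bin iteration for a minimization problem; the function name de_generation and its arguments are illustrative rather than taken from the paper.

```python
import numpy as np

def de_generation(pop, fitness, fobj, F=0.5, Cr=0.9):
    """One generation of classic DE/rand/1/bin (minimization), as a sketch."""
    n, dim = pop.shape
    new_pop, new_fit = pop.copy(), fitness.copy()
    for i in range(n):
        # Mutation: v_i = x_r1 + F * (x_r2 - x_r3) with r1 != r2 != r3 != i
        r1, r2, r3 = np.random.choice([j for j in range(n) if j != i], 3, replace=False)
        v = pop[r1] + F * (pop[r2] - pop[r3])
        # Binomial crossover: at least one component comes from the mutant
        j_rand = np.random.randint(dim)
        mask = np.random.rand(dim) <= Cr
        mask[j_rand] = True
        u = np.where(mask, v, pop[i])
        # Greedy selection
        fu = fobj(u)
        if fu <= fitness[i]:
            new_pop[i], new_fit[i] = u, fu
    return new_pop, new_fit

# Example usage on the sphere function:
# pop = np.random.uniform(-5, 5, (20, 10)); fit = np.array([np.sum(x**2) for x in pop])
# pop, fit = de_generation(pop, fit, lambda x: np.sum(x**2))
```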

2.3. Overview of JADE: Adaptive Differential Evolution with Optional External Archive

JADE (Adaptive Differential Evolution with Optional External Archive) is an advanced variant of the Differential Evolution (DE) algorithm, which is designed to enhance the optimization performance by introducing adaptive control parameters and an external archive for better exploration and exploitation. Proposed by J. Zhang and A. Sanderson in 2009 [12], JADE addresses some of the limitations of traditional DE by adapting mutation and crossover rates dynamically and utilizing historical information to guide the search process more effectively.
JADE retains the core structure of DE, including population initialization, mutation, crossover, and selection, but it introduces several innovative features. The key improvement in JADE is the adaptation of the mutation factor F and the crossover rate C r during the optimization process. These parameters are crucial for balancing exploration and exploitation. Instead of using fixed values, JADE adjusts F and C r based on the success of previous generations, allowing the algorithm to adapt to different stages of the search process. This adaptability helps in maintaining diversity in the population and avoiding premature convergence to local optima.
The mutation operation in JADE is enhanced with an external archive, which stores inferior solutions that were replaced in previous generations. The archive is used to provide additional information for generating mutant vectors, thus enriching the diversity of potential solutions. The mutation strategy in JADE can be expressed as
v_i = x_i + F \cdot (x_p - x_i) + F \cdot (x_{r_1} - x_{r_2}),
where v_i is the mutant vector, x_i is the target vector, x_p is the best solution in the current population, x_{r_1} and x_{r_2} are randomly selected vectors, and F is the mutation factor. The use of the best solution x_p guides the search towards promising regions, while the random vectors x_{r_1} and x_{r_2} introduce variability.
Crossover in JADE is performed similarly to traditional DE, but with an adaptive crossover rate C r , which is adjusted based on the performance of the previous generations. The crossover operation combines the mutant vector and the target vector to generate a trial vector u i :
u_{i,j} = \begin{cases} v_{i,j} & \text{if } \mathrm{rand}(0,1) \le C_r \text{ or } j = j_{rand}, \\ x_{i,j} & \text{otherwise}, \end{cases}
where u_{i,j} is the trial vector, v_{i,j} is the mutant vector, x_{i,j} is the target vector, C_r is the crossover probability, and j_{rand} is a randomly chosen index to ensure that at least one parameter is taken from the mutant vector.
Selection in JADE follows the DE principle, where the trial vector u i competes with the target vector x i . The vector with the better fitness value is selected for the next generation:
x_i^{(t+1)} = \begin{cases} u_i & \text{if } f(u_i) \le f(x_i), \\ x_i & \text{otherwise}, \end{cases}
where x_i^{(t+1)} is the individual in the next generation, u_i is the trial vector, and f(\cdot) is the objective function.
JADE’s adaptive mechanism and the use of an external archive significantly enhance its performance on various optimization problems. The adaptability of F and C r allows JADE to fine-tune its search strategy dynamically, improving its ability to find global optima and avoid stagnation. The external archive contributes to maintaining diversity and preventing the loss of valuable information throughout the optimization process.
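As a rough illustration of the archive-assisted mutation described above, the sketch below builds a JADE-style mutant vector; drawing the second difference vector from the union of the population and the archive follows the original JADE design, and all names (jade_mutation, archive) are illustrative.

```python
import numpy as np

def jade_mutation(pop, i, F, best_idx, archive):
    """JADE-style mutant v_i = x_i + F*(x_p - x_i) + F*(x_r1 - x_r2), where
    x_r2 may come from the population or the external archive (sketch;
    distinctness checks on the indices are omitted for brevity)."""
    n = len(pop)
    x_p = pop[best_idx]                                    # best current solution
    r1 = np.random.choice([j for j in range(n) if j != i])
    union = np.vstack([pop, archive]) if len(archive) else pop
    r2 = np.random.randint(len(union))
    return pop[i] + F * (x_p - pop[i]) + F * (pop[r1] - union[r2])
```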

3. Hybrid APO-JADE Optimization Algorithm

3.1. Algorithm Framework

The hybrid APO-JADE algorithm combines the diverse behavioral strategies of the Artificial Protozoa Optimizer (APO) with the adaptive parameter control mechanisms of Adaptive Differential Evolution (JADE). This hybrid approach leverages the strengths of both algorithms to enhance the optimization process.

3.1.1. Artificial Protozoa Optimizer (APO)

APO is inspired by the behaviors of protozoa, single-celled organisms that exhibit complex behaviors such as foraging, dormancy, reproduction, autotrophy, and heterotrophy. These behaviors allow the algorithm to explore the search space in diverse ways:
Foraging refers to protozoa moving toward food sources, which, in optimization terms, translates to individuals moving toward promising areas in the search space. This movement helps with the local exploration and exploitation of potential solutions.
Dormancy occurs when protozoa enter a state of low activity in unfavorable conditions. In the context of optimization, this corresponds to individuals exploring new random positions in the search space, thereby enhancing diversity and avoiding local optima.
Reproduction involves protozoa generating offspring, which is analogous to the generation of new candidate solutions through perturbations of existing ones. This behavior promotes localized exploration within the search space.
Autotrophy is the process by which protozoa produce their own food through photosynthesis. In optimization, this can be seen as individuals adjusting their positions toward a peer in order to balance exploration and exploitation.
Lastly, heterotrophy occurs when protozoa feed on other organisms. In an optimization context, this means individuals adjust their positions based on the differences between two peers, thereby enhancing diversity in the search process.

3.1.2. Adaptive Differential Evolution (JADE)

JADE enhances traditional Differential Evolution by introducing adaptive mechanisms for the crossover rate (CR) and mutation factor (F). These parameters are dynamically adjusted based on the historical performance of solutions, allowing the algorithm to adapt to different optimization landscapes:
The Crossover Rate (CR) controls the probability of components being recombined from different parent solutions. By adapting CR, JADE maintains a balance between exploration and exploitation, ensuring that the algorithm searches effectively across the solution space.
The Mutation Factor (F) determines the step size for mutations. An adaptive F allows the algorithm to adjust the search intensity dynamically, based on the progress made toward optimal solutions. This adaptability helps JADE fine-tune its exploration and exploitation throughout the optimization process.
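The text above does not specify how the per-individual CR and F values are sampled; a common choice, taken from the original JADE paper and assumed here only as a sketch, draws CR from a normal distribution and F from a Cauchy distribution centered on the adaptive means (all names are illustrative).

```python
import numpy as np

def sample_jade_parameters(mu_cr, mu_f, pop_size):
    """Per-individual CR and F values around the adaptive means (assumed
    normal/Cauchy scheme from the original JADE; a sketch only)."""
    cr = np.clip(np.random.normal(mu_cr, 0.1, pop_size), 0.0, 1.0)
    f = np.clip(mu_f + 0.1 * np.random.standard_cauchy(pop_size), 1e-3, 1.0)
    return cr, f
```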

3.1.3. Hybrid Approach

The hybrid APO-JADE algorithm integrates these two methodologies to create a robust optimization framework. The key steps in the algorithm are outlined below:
  • Initialization: A population of candidate solutions is generated uniformly within the defined lower and upper bounds, ensuring a diverse starting point.
  • Behavioral Strategy Selection: Each individual in the population randomly selects one of the APO-inspired behaviors (foraging, dormancy, reproduction, autotrophy, heterotrophy) in each iteration.
  • Adaptive Parameter Adjustment: JADE’s adaptive mechanisms are applied to adjust CR and F, enhancing the search capabilities.
  • Crossover and Mutation: The selected APO behavior and JADE’s crossover method are used to generate new candidate solutions.
  • Fitness Evaluation: New candidate solutions are evaluated using the objective function.
  • Selection and Archive Update: Individuals are replaced with better-performing candidates, and an archive of replaced solutions is maintained to preserve diversity and guide parameter adjustments.
  • Parameter Adaptation: CR and F are updated based on the performance of the archive solutions, ensuring the algorithm remains adaptable throughout the optimization process.
  • Repeat: The process continues for a predefined number of iterations or until convergence criteria are met.
This hybrid framework allows the algorithm to effectively explore the search space using APO’s diverse behaviors while dynamically adjusting search parameters using JADE’s adaptive mechanisms. The combination of these strategies aims to achieve superior convergence rates and solution accuracy across a variety of optimization problems.

Initialization

The algorithm begins by initializing a population of candidate solutions uniformly within the defined lower and upper bounds. This ensures a diverse starting point for the optimization process, which is critical for effective exploration.
Let
  • P be the population matrix of size pop × dim ;
  • lb and ub be the lower and upper bounds, respectively.
The initialization is defined as shown in Equation (7):
P_i = lb + r_i \cdot (ub - lb)
where r_i is a random vector with values uniformly distributed in [0, 1].
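A minimal NumPy sketch of this initialization, assuming scalar bounds for simplicity; the variable names follow Equation (7), and the sizes and bounds shown are example values only.

```python
import numpy as np

pop, dim = 30, 10            # population size and problem dimension (example values)
lb, ub = -100.0, 100.0       # lower and upper bounds (example values)
P = lb + np.random.rand(pop, dim) * (ub - lb)   # Equation (7): uniform in [lb, ub]
```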

Behavioral Strategies

The hybrid algorithm incorporates five distinct behaviors inspired by APO.

Foraging

Individuals adjust their positions based on a randomly chosen peer, promoting exploration as shown in Equation (8).
X_{new} = X_i + F \cdot (X_j - X_i)
where j is a randomly chosen index different from i.

Dormancy

Individuals randomly relocate within the search space, enhancing diversity as shown in Equation (9).
X_{new} = lb + r \cdot (ub - lb)
where r is a random vector with values uniformly distributed in [0, 1].

Reproduction

Individuals generate new positions based on Gaussian perturbations, enabling localized exploration as shown in Equation (10).
X_{new} = X_i + N(0, 1) \cdot (ub - lb) \cdot F
where N(0, 1) represents a vector of normally distributed random variables.

Autotroph

Individuals adjust positions toward a selected peer, balancing exploration and exploitation as shown in Equation (11).
X_{new} = X_i + F \cdot (X_j - X_i)
where j is a randomly chosen index different from i.

Heterotroph

Individuals move based on differences between two peers, promoting diverse search patterns as shown in Equation (12).
X_{new} = X_i + F \cdot (X_j - X_k)
where j and k are randomly chosen indices different from i and each other.
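The five position updates above can be condensed into a single helper, sketched below; apply_behavior and its arguments are illustrative names rather than the paper’s implementation.

```python
import numpy as np

def apply_behavior(behavior, X, i, F, lb, ub):
    """Candidate position for individual i under one APO-inspired behavior,
    following Equations (8)-(12) (sketch)."""
    n, dim = X.shape
    j, k = np.random.choice([m for m in range(n) if m != i], 2, replace=False)
    if behavior in ("foraging", "autotroph"):            # Equations (8) and (11)
        return X[i] + F * (X[j] - X[i])
    if behavior == "dormancy":                           # Equation (9)
        return lb + np.random.rand(dim) * (ub - lb)
    if behavior == "reproduction":                       # Equation (10)
        return X[i] + np.random.randn(dim) * (ub - lb) * F
    return X[i] + F * (X[j] - X[k])                      # heterotroph, Equation (12)
```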

Adaptive Mechanisms

The hybrid algorithm incorporates JADE’s adaptive mechanisms for the crossover rate (CR) and mutation factor (F), which are adjusted based on historical performance to optimize search effectiveness.

Crossover and Mutation (JADE)

JADE’s adaptive CR and F are applied to modify the positions of individuals. The crossover operation is defined as shown in Equation (14).
U_i = X_i, \qquad U_{i,j} = V_{i,j} \ \text{if } rand_j < CR \ \text{or } j = j_{rand},
where V is the mutant vector, rand_j is a uniformly distributed random number, and j_{rand} is a randomly chosen dimension.

Fitness Evaluation

Each candidate solution is evaluated using the objective function f ( X ) , determining its fitness within the current population.

Selection and Replacement Strategies

The algorithm employs a greedy selection mechanism, where individuals are replaced if the new candidate solutions provide better fitness. An archive of replaced solutions is maintained to preserve diversity and guide adaptive parameter adjustments.
The selection is defined as shown in Equation (15).
X_i = \begin{cases} U_i & \text{if } f(U_i) < f(X_i), \\ X_i & \text{otherwise}. \end{cases}
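A short sketch of this greedy replacement together with the archive update; names such as select_and_archive are illustrative, and the random truncation of a full archive is one simple policy assumed here.

```python
import numpy as np

def select_and_archive(x, fx, u, fu, archive, max_archive):
    """Greedy selection per Equation (15); the replaced solution is stored in
    the external archive, which is truncated at random when full (sketch)."""
    if fu < fx:
        archive.append(x.copy())
        if len(archive) > max_archive:
            archive.pop(np.random.randint(len(archive)))
        return u, fu
    return x, fx
```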

Parameter Adaptation

The CR and F values are updated based on the performance of the archive solutions, ensuring that the algorithm remains adaptable to the problem landscape throughout the optimization process.
The adaptation is defined as shown in Equations (16) and (17).
CR = (1 - c) \cdot CR + c \cdot \text{mean}(CR_{successful})
F = (1 - c) \cdot F + c \cdot \frac{\sum F_{successful}^2}{\sum F_{successful}}
where c is a constant, and CR_{successful} and F_{successful} are the crossover rates and mutation factors of successful mutations.
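A minimal sketch of Equations (16) and (17), assuming the F update uses the Lehmer mean of the successful mutation factors (the standard JADE choice); the function and argument names are illustrative.

```python
import numpy as np

def adapt_parameters(CR, F, cr_successful, f_successful, c=0.1):
    """Update CR with the arithmetic mean and F with the Lehmer mean of the
    successful values, per Equations (16) and (17) (sketch)."""
    if len(cr_successful):
        CR = (1 - c) * CR + c * np.mean(cr_successful)
    if len(f_successful):
        f = np.asarray(f_successful, dtype=float)
        F = (1 - c) * F + c * (np.sum(f**2) / np.sum(f))
    return CR, F
```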

3.2. APO-JADE Optimization Algorithm Description and Pseudocode

The hybrid APO-JADE optimization algorithm (Algorithm 1) integrates the diverse behavioral strategies of the Artificial Protozoa Optimizer (APO) with the adaptive parameter control mechanisms of Adaptive Differential Evolution (JADE) to enhance optimization performance. The algorithm starts by initializing a population of candidate solutions uniformly within the defined bounds, ensuring diversity. Each individual in the population randomly selects one of five APO-inspired behaviors (foraging, dormancy, reproduction, autotrophy, or heterotrophy) to adjust its position, balancing exploration and exploitation. JADE’s adaptive mechanisms dynamically adjust the crossover rate ( C R ) and mutation factor (F) based on historical performance, allowing the algorithm to adapt to the optimization landscape. Fitness evaluation determines the quality of each candidate solution, and a greedy selection mechanism ensures that only better-performing solutions replace existing ones. An archive of replaced solutions is maintained to preserve diversity and guide adaptive parameter adjustments. The hybrid algorithm iterates through these steps, continuously updating the population and parameters, until convergence criteria are met or the maximum number of iterations is reached. By combining APO’s exploratory behaviors with JADE’s adaptability, the hybrid algorithm aims to achieve superior convergence rates and solution accuracy across various optimization problems.
Algorithm 1: Pseudocode for Hybrid APO-JADE Optimization Algorithm.
1: Input: Population size (pop), Maximum iterations (Max_iter), Lower bounds (lb), Upper bounds (ub), Dimension (dim), Objective function (fobj)
2: Output: Best solution found (Best_pos), Best fitness value (Best_score)
3: Initialize the population P using Equation (7)
4: Evaluate initial fitness of the population
5: Initialize archive A as empty
6: Set initial crossover rate CR = 0.5
7: Set initial mutation factor F = 0.5
8: for gen = 1 to Max_iter do
9:   Generate adaptive CR and F for each individual based on historical success
10:   for i = 1 to pop do
11:     Select a behavioral strategy for individual i:
12:     if behavior = foraging then
13:       Update position using Equation (8)
14:     else if behavior = dormancy then
15:       Update position using Equation (9)
16:     else if behavior = reproduction then
17:       Update position using Equation (10)
18:     else if behavior = autotroph then
19:       Update position using Equation (11)
20:     else if behavior = heterotroph then
21:       Update position using Equation (12)
22:     end if
23:     Perform crossover to generate new candidate solution using Equation (14)
24:     Evaluate the fitness of the new candidate solution
25:     if new candidate solution is better then
26:       Update archive A with current solution
27:       Replace current solution with new candidate solution
28:     end if
29:   end for
30:   Update adaptive parameters CR and F using Equations (16) and (17)
31:   Record the best solution and fitness value of the current generation
32:   Manage the size of archive A to maintain diversity
33: end for
34: return Best solution (Best_pos) and best fitness value (Best_score)
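To complement Algorithm 1, the following is a self-contained, simplified Python sketch of the hybrid loop. It follows the steps above but fixes several details that Algorithm 1 leaves open (behavior selection probabilities, CR/F sampling, archive truncation), so it should be read as one plausible realization rather than the authors’ implementation.

```python
import numpy as np

def apo_jade_sketch(fobj, lb, ub, dim, pop_size=30, max_iter=200, seed=0):
    """Simplified sketch of the hybrid APO-JADE loop (minimization)."""
    rng = np.random.default_rng(seed)
    pop = lb + rng.random((pop_size, dim)) * (ub - lb)                # Equation (7)
    fit = np.array([fobj(x) for x in pop])
    archive, mu_cr, mu_f, c = [], 0.5, 0.5, 0.1
    for _ in range(max_iter):
        cr = np.clip(rng.normal(mu_cr, 0.1, pop_size), 0.0, 1.0)      # adaptive CR
        F = np.clip(mu_f + 0.1 * rng.standard_cauchy(pop_size), 1e-3, 1.0)  # adaptive F
        ok_cr, ok_f = [], []
        for i in range(pop_size):
            b = rng.integers(5)                                        # pick a behavior
            j, k = rng.choice([m for m in range(pop_size) if m != i], 2, replace=False)
            if b in (0, 3):                                            # foraging / autotroph
                v = pop[i] + F[i] * (pop[j] - pop[i])
            elif b == 1:                                               # dormancy
                v = lb + rng.random(dim) * (ub - lb)
            elif b == 2:                                               # reproduction
                v = pop[i] + rng.normal(size=dim) * (ub - lb) * F[i]
            else:                                                      # heterotroph
                v = pop[i] + F[i] * (pop[j] - pop[k])
            jr = rng.integers(dim)                                     # crossover, Eq. (14)
            mask = rng.random(dim) < cr[i]
            mask[jr] = True
            u = np.clip(np.where(mask, v, pop[i]), lb, ub)
            fu = fobj(u)
            if fu < fit[i]:                                            # selection, Eq. (15)
                archive.append(pop[i].copy())
                pop[i], fit[i] = u, fu
                ok_cr.append(cr[i]); ok_f.append(F[i])
        if ok_f:                                                       # Eqs. (16) and (17)
            mu_cr = (1 - c) * mu_cr + c * np.mean(ok_cr)
            mu_f = (1 - c) * mu_f + c * (np.sum(np.square(ok_f)) / np.sum(ok_f))
        archive = archive[-pop_size:]                                  # bound archive size
    best = int(np.argmin(fit))
    return pop[best], fit[best]

# Example: minimize the sphere function in 10 dimensions
# best_x, best_f = apo_jade_sketch(lambda x: np.sum(x**2), -100.0, 100.0, 10)
```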

3.3. Exploration and Exploitation

Exploration refers to the ability of an optimization algorithm to search through diverse regions of the solution space to discover potentially optimal areas. In the hybrid APO-JADE algorithm, exploration is primarily facilitated by the behavioral strategies inspired by the Protozoa Optimizer (APO). Each behavior promotes exploration in different ways. For example, the foraging behavior enables individuals to adjust their positions based on randomly chosen peers, helping to explore new areas in the solution space. This is mathematically represented by Equation (8), where an individual’s position is updated by moving toward a peer’s position scaled by a mutation factor F. Similarly, the dormancy behavior introduces randomness by relocating individuals to entirely new positions within the search space, as described in Equation (9). This random relocation helps in avoiding local optima and enhances the global search capability of the algorithm. Additionally, reproduction involves generating new positions through Gaussian perturbations around the current position, as shown in Equation (10). This localized random search helps in exploring the neighborhood of current solutions, contributing to the diversity of the population.
Exploitation, on the other hand, refers to the ability of the algorithm to intensively search around promising regions of the solution space to refine solutions and find the optimal solution. The hybrid APO-JADE algorithm incorporates exploitation through both APO’s autotroph and heterotroph behaviors, as well as JADE’s adaptive parameter mechanisms. In the autotroph behavior, individuals adjust their positions toward a selected peer, balancing exploration and exploitation, as detailed in Equation (11). This behavior helps in exploiting known good areas of the search space while still allowing some exploration. The heterotroph behavior involves moving based on differences between two peers, which promotes diverse search patterns and helps in refining the search in promising regions, as described in Equation (12).
JADE’s adaptive mechanisms dynamically adjust the crossover rate ( C R ) and mutation factor (F), ensuring that the algorithm can exploit promising regions effectively. The crossover operation, as defined in Equation (14), and the adaptive parameter updates, as shown in Equations (16) and (17), allow the algorithm to fine-tune the search process based on historical performance, enhancing exploitation capabilities. By combining these strategies, the algorithm achieves a balance between exploration and exploitation.
Achieving a balance between exploration and exploitation is crucial for the success of any optimization algorithm. The hybrid APO-JADE algorithm addresses this balance by combining APO’s exploratory behaviors with JADE’s adaptive exploitation mechanisms. By dynamically adjusting C R and F and incorporating diverse behavioral strategies, the algorithm maintains population diversity while focusing search efforts on promising regions. This hybrid approach ensures that the algorithm can effectively navigate complex optimization landscapes, avoid premature convergence, and find high-quality solutions. The interplay between exploration and exploitation in the hybrid APO-JADE algorithm is designed to leverage the strengths of both APO and JADE, resulting in an optimization process that is both robust and adaptive to various types of optimization problems.

4. Testing and Comparison

In order to conduct a rigorous statistical evaluation of the performance of various optimization algorithms, we utilized several fundamental statistical measures: mean and standard deviation. The mean, which represents the measure of central tendency, provides an average outcome achieved by the algorithms across multiple trials, giving a general view of their typical performance levels. The standard deviation, on the other hand, measures the variability or dispersion from the mean, offering insights into the consistency and reliability of the results obtained across different runs. This metric is essential for assessing the stability and predictability of the algorithms’ performance.
The effectiveness of different optimization algorithms was assessed and compared. The algorithms evaluated include the Fox Optimization Algorithm (FOX) [13], the Arithmetic Optimization Algorithm (AOA) [14], the Artificial Hummingbird Algorithm [15], and the Beluga Whale Optimization (BWO) [16]. Additionally, the Gray Wolf Optimizer (GWO) [17], the Optical Microscope Algorithm (OMA) [18], and the Sand Cat Optimization Algorithm (SCSO) [19] were included. The Moth-Flame Optimization (MFO) [20], the Multi-Trial vector-based Differential Evolution Algorithm (MTDE) [21], and the Multi-Verse Optimizer (MVO) [22] were also evaluated. Furthermore, the Chimp Optimization Algorithm (ChOA) [23], the Sine Cosine Algorithm (SCA) [24], the Whale Optimization Algorithm (WOA) [25], and the White Shark Optimizer (WSO) [26] were part of the comparison.

4.1. IEEE Congress on Evolutionary Computation (CEC) Benchmark Suites

The benchmark functions from the IEEE Congress on Evolutionary Computation (CEC) across the years 2017, 2021, and 2022 are standardized test problems designed to rigorously evaluate the performance of optimization algorithms. These suites, as discussed in [27,28,29], categorize functions into Unimodal, Multimodal, Hybrid, and Composition types, each presenting distinct challenges. Unimodal functions feature a single global optimum, which is ideal for assessing the convergence behavior of algorithms. Multimodal functions, with multiple local optima, test an algorithm’s ability to escape local minima and find the global optimum. Hybrid functions combine several basic functions, creating complex landscapes that simulate real-world scenarios, while Composition functions further increase complexity by integrating multiple hybrid functions, each with unique characteristics. These benchmark suites serve as critical tools in competitions and research, enabling a comparative analysis and validation of various optimization techniques.

4.2. Comprehensive Testing across Different IEEE Benchmarks over Different Years

We tested APO-JADE on multiple CEC benchmark suites, specifically CEC 2022 (as shown in Figure 1), CEC 2021, and CEC 2017, rather than relying on a single suite, to ensure a comprehensive and robust evaluation of the optimization algorithms. Each of these benchmark suites presents a unique set of functions, with different characteristics and challenges, designed to test various aspects of algorithm performance.
The CEC2022 benchmark functions [27] are categorized into Unimodal, Multimodal, Hybrid, and Composition functions. These functions have been specifically designed to reflect more recent advancements and emerging challenges in optimization, providing a modern set of problems that can highlight the strengths and weaknesses of contemporary algorithms.
In contrast, the CEC2021 benchmark suite [28] offers a diverse set of functions that test the convergence, robustness, and versatility of optimization algorithms. Testing on this suite allows for a comparative analysis against the previous year’s functions, providing insights into the consistency and adaptability of the algorithms when confronted with new but similar challenges.
The CEC2017 benchmark functions [29], though older, have been extensively used in the optimization community and offer a well-established standard for algorithm comparison. These functions are essential for ensuring that the tested algorithms perform well against challenging problems, thus providing a baseline for evaluating improvements over time.

4.3. Diverse Problem Landscapes

Each benchmark suite includes a variety of problem landscapes:
  • Unimodal Functions: Ideal for testing the convergence speed and precision of algorithms.
  • Multimodal Functions: Challenge the algorithm’s ability to avoid local optima and locate the global optimum.
  • Hybrid Functions: Combine multiple basic functions to create complex landscapes, simulating real-world scenarios.
  • Composition Functions: Integrate multiple hybrid functions to create highly complex optimization challenges, testing an algorithm’s robustness and versatility.
By testing across the CEC 2022, CEC 2021, and CEC 2017 suites, the evaluation covers a broader spectrum of optimization problems. This approach ensures that the algorithms are not only effective on a specific set of problems but also versatile and robust across different types of challenges, both historical and contemporary. It provides a more complete and rigorous assessment of the algorithms’ performance, offering valuable insights into their generalizability and potential for real-world application.
Furthermore, using multiple benchmark suites enables the identification of trends in algorithm performance over time, helping to understand whether improvements in optimization techniques are consistent with advances in benchmark design.

4.4. Evaluated Metrics

For evaluation in Table 1, Table 2, Table 3 and Table 4, we used “mean”, “std”, and “SEM”, which are statistical metrics calculated for the results obtained from running various optimization algorithms on benchmark functions.
Mean: The mean (or average) represents the average fitness score achieved by an algorithm over multiple runs (denoted by the variable RUN). This metric provides a central tendency of the results, indicating the overall performance of the algorithm across different runs. The mean is calculated as shown in Equation (18):
\text{Mean} = \mu = \frac{1}{n} \sum_{i=1}^{n} x_i
where n is the total number of runs, and x_i represents the fitness score from the i-th run.
Std (Standard Deviation): The standard deviation is a measure of the variability or dispersion of the fitness scores from the mean. A low standard deviation indicates that the fitness scores are close to the mean, implying consistent performance across runs. Conversely, a high standard deviation suggests more variability in the results. The standard deviation is calculated as shown in Equation (19):
\text{Std} = \sigma = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (x_i - \mu)^2}
where μ is the mean fitness score, and n is the number of runs.
SEM (Standard Error of the Mean): The standard error of the mean is calculated by dividing the standard deviation by the square root of the number of runs (RUN). SEM provides an estimate of how much the mean fitness score is expected to vary if the experiment were repeated multiple times. A lower SEM indicates that the mean is a more reliable estimate of the true average performance. The SEM is calculated as shown in Equation (20):
\text{SEM} = \frac{\sigma}{\sqrt{n}}
where σ is the standard deviation and n is the number of runs.
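As a small illustration, these three statistics can be computed from the recorded fitness scores of repeated runs as follows; the function name is illustrative, and Equation (19) is implemented with the population form of the standard deviation (the 1/n divisor used in the text).

```python
import numpy as np

def summarize_runs(scores):
    """Mean, standard deviation, and SEM of fitness scores from repeated runs,
    per Equations (18)-(20) (sketch)."""
    scores = np.asarray(scores, dtype=float)
    n = scores.size
    mean = scores.mean()
    std = scores.std()          # 1/n divisor, matching Equation (19)
    sem = std / np.sqrt(n)
    return mean, std, sem

# Example: summarize_runs([3.00e2, 3.05e2, 2.98e2])
```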

4.5. Results on IEEE Congress on Evolutionary Computation (CEC) 2022 Benchmark Suites

Table 1 shows that the APO-JADE optimizer significantly outperforms other algorithms across multiple functions in the CEC2022 benchmark set. APO-JADE consistently achieves top ranks, notably in functions F1, F3, F4, F5, F7, F8, F9, F11, and F12, demonstrating superior performance. Not only does APO-JADE often report the lowest mean values, indicating its efficiency in finding solutions closer to the global optimum, but it also shows lower or comparable standard deviations and standard errors of the mean, which highlight its consistency and reliability across different runs. Such attributes underscore its robustness and versatility, enabling it to handle effectively the complex optimization landscapes characteristic of the diverse test functions in the CEC2022 set. This evidence strongly supports the conclusion that APO-JADE is a superior choice for tackling the intricate and varied optimization challenges presented.
Table 1. Results on IEEE Congress on Evolutionary Computation (CEC) 2022 benchmark suites.
Function | Statistics | APO-JADE | PSO | OMA | BWO | COOT | SHIO | ChOA | DBO | MTDE | MFO | SCA
F1Mean3.000E+021.060E+034.835E+032.155E+032.102E+032.164E+038.307E+033.703E+021.451E+046.409E+031.076E+03
Std3.769E-031.281E+036.720E+021.613E+032.027E+032.472E+031.860E+033.287E+014.418E+038.441E+034.990E+02
SEM1.192E-034.052E+022.125E+025.102E+026.409E+027.819E+025.882E+021.039E+011.397E+032.669E+031.578E+02
Rank1386571021194
F2Mean4.132E+024.132E+024.943E+024.285E+024.397E+024.266E+028.643E+024.096E+025.522E+024.169E+024.657E+02
Std2.211E+011.357E+011.665E+012.738E+014.983E+012.380E+012.385E+022.031E+013.711E+011.558E+012.238E+01
SEM6.991E+004.290E+005.265E+008.657E+001.576E+017.528E+007.543E+016.421E+001.173E+014.926E+007.077E+00
Rank2396751111048
F3Mean6.003E+026.006E+026.267E+026.077E+026.113E+026.063E+026.443E+026.150E+026.291E+026.016E+026.188E+02
Std8.379E-011.034E+004.732E+006.862E+007.233E+004.910E+005.282E+009.467E+009.666E+002.119E+003.489E+00
SEM2.650E-013.268E-011.496E+002.170E+002.287E+001.553E+001.670E+002.994E+003.057E+006.701E-011.103E+00
Rank1295641171038
F4Mean8.171E+028.173E+028.327E+028.288E+028.208E+028.172E+028.489E+028.207E+028.671E+028.329E+028.375E+02
Std7.897E+007.602E+003.034E+007.700E+004.822E+001.255E+017.216E+006.144E+001.146E+011.135E+018.011E+00
SEM2.497E+002.404E+009.594E-012.435E+001.525E+003.969E+002.282E+001.943E+003.624E+003.590E+002.533E+00
Rank1376521041189
F5Mean9.001E+029.159E+029.976E+029.475E+021.038E+039.186E+021.443E+031.045E+031.604E+031.083E+031.009E+03
Std2.889E-012.390E+013.686E+013.656E+011.040E+022.613E+011.242E+021.471E+021.696E+021.886E+022.782E+01
SEM9.137E-027.558E+001.166E+011.156E+013.290E+018.262E+003.928E+014.652E+015.363E+015.965E+018.798E+00
Rank1254731081196
F6Mean4.121E+035.349E+032.448E+066.764E+034.267E+034.368E+035.471E+073.476E+031.326E+074.685E+032.380E+06
Std2.261E+032.944E+032.337E+062.565E+031.462E+031.700E+032.421E+072.442E+037.369E+062.234E+032.027E+06
SEM7.149E+029.308E+027.392E+058.113E+024.623E+025.377E+027.657E+067.722E+022.330E+067.064E+026.410E+05
Rank2697341111058
F7Mean2.021E+032.024E+032.065E+032.045E+032.032E+032.051E+032.129E+032.031E+032.087E+032.024E+032.053E+03
Std6.156E+009.864E+001.091E+011.665E+011.293E+011.801E+011.777E+017.335E+001.450E+014.318E+005.858E+00
SEM1.947E+003.119E+003.449E+005.266E+004.088E+005.695E+005.619E+002.320E+004.584E+001.366E+001.852E+00
Rank1396571141028
F8Mean2.216E+032.223E+032.233E+032.225E+032.223E+032.238E+032.240E+032.216E+032.242E+032.223E+032.231E+03
Std3.963E+017.155E+002.937E+005.979E+001.840E+003.620E+015.594E+009.302E+004.381E+001.742E+004.247E+00
SEM1.253E+012.263E+009.288E-011.891E+005.818E-011.145E+011.769E+002.942E+001.385E+005.509E-011.343E+00
Rank1486391021157
F9Mean2.529E+032.572E+032.597E+032.606E+032.585E+032.595E+032.723E+032.534E+032.650E+032.538E+032.572E+03
Std1.886E-024.164E+012.330E+014.539E+011.665E+014.142E+014.592E+011.289E+013.752E+012.578E+013.481E+01
SEM5.963E-031.317E+017.367E+001.435E+015.265E+001.310E+011.452E+014.075E+001.187E+018.153E+001.101E+01
Rank1589671121034
F10Mean2.524E+032.547E+032.521E+032.549E+032.541E+032.711E+032.632E+032.549E+032.536E+032.528E+032.502E+03
Std6.236E+016.025E+014.434E+016.267E+016.247E+013.416E+022.185E+026.284E+013.223E+015.684E+015.606E-01
SEM1.972E+011.905E+011.402E+011.982E+011.975E+011.080E+026.909E+011.987E+011.019E+011.798E+011.773E-01
Rank3728611109541
F11Mean2.631E+032.851E+032.791E+032.858E+032.742E+032.791E+033.192E+032.689E+032.899E+032.738E+032.768E+03
Std9.536E+011.954E+021.868E+012.143E+021.710E+021.785E+023.755E+026.309E+017.767E+017.368E+011.362E+01
SEM3.015E+016.178E+015.906E+006.776E+015.409E+015.643E+011.187E+021.995E+012.456E+012.330E+014.308E+00
Rank1879461121035
F12Mean2.863E+032.864E+032.897E+032.870E+032.890E+032.896E+032.948E+032.865E+032.896E+032.864E+032.869E+03
Std2.509E+001.108E+004.688E+009.132E+001.650E+012.632E+016.632E+014.185E+001.380E+011.577E+001.972E+00
SEM7.934E-013.503E-011.483E+002.888E+005.218E+008.322E+002.097E+011.323E+004.363E+004.987E-016.235E-01
Rank1310678114925

4.6. Results on IEEE Congress on Evolutionary Computation (CEC) 2021 Benchmark Suites

On the CEC2021 benchmark suite, as shown in Table 2, APO-JADE distinctly outperformed the other algorithms, demonstrating its exceptional efficacy in navigating complex optimization landscapes. Notably, APO-JADE often achieved the lowest mean values and consistently ranked at or near the top across various functions, clearly outpacing other algorithms like MVO, WOA, and SCA, and even edging out newer algorithms such as MFO and BWO in certain scenarios. This superior performance is highlighted by its minimal standard deviations and errors, suggesting that APO-JADE not only finds optimal solutions but does so with remarkable consistency and reliability. The algorithm’s information-sharing mechanism, which allows particles to share knowledge and adapt their search strategies dynamically, likely contributes significantly to its success, enabling it to efficiently explore and exploit the solution space. Overall, APO-JADE’s robustness and adaptability make it a standout performer in the CEC2021 evaluations, showcasing its potential as a powerful tool for solving a wide array of complex optimization problems.
Table 2. Results on IEEE Congress on Evolutionary Computation (CEC) 2021 benchmark suites.
Fun | Statistics | APO-JADE | PSO | WOA | MPA | GWO | FOX | BWO | SHIO | MFO | SCA
C1Mean9.7E-1915176.3193.7E-1565.83E-532.6E-122420.60921.1E-2263.4E-12960002.85E-22
Std02390.7998.2E-1565.65E-533.9E-122383.839107.6E-1295477.2263.95E-22
SEM01069.1983.7E-1562.53E-531.7E-122171.658103.4E-1292449.491.77E-22
Rank29365814107
C2Mean0703.8813154.403400.359157386.23280107.8942275.96741.14E-10
Std0428.0812345.256600.803098291.19290154.851395.92862.55E-10
SEM0191.4437154.403400.359157130.2254069.251642.900571.14E-10
Rank11071591684
C3Mean032.836230020.0721540.6936034.4480124.6449818.50074
Std07.971007004.04133510.38824020.0697811.1307741.3689
SEM03.564743001.807344.64576208.9754794.97783418.50074
Rank18116101975
C4Mean01.269271000.5733580.88346101.1066341.2000054.44E-17
Std00.459856000.3729010.16002200.5680780.7680289.93E-17
SEM00.205654000.1667660.07156400.2540520.3434734.44E-17
Rank11011671895
C5Mean1.49E-274281.4295.3E-192.58E-230.116914282.4713.1E-2272.591568309.39224.63E-21
Std1.44E-273875.5251.19E-183.81E-230.2614193597.06101.68483188.75571.02E-20
SEM6.42E-281733.1885.3E-191.7E-230.116911608.65400.75347984.414124.58E-21
Rank29536101784
C6Mean2.19E-19111.36480.075661.84E-050.97305474.877110.0106781.43919550.332960.228576
Std4.89E-19102.13130.0437661.3E-050.97649375.784080.0230892.01044446.874920.354379
SEM2.19E-1945.674520.0195735.82E-060.43670133.891670.0103260.89909820.96310.158483
Rank11042693785
C7Mean2.28E-141103.7010.0181970.0002070.0122542136.8730.0007130.675402106.26650.002972
Std5.11E-14992.16090.0240130.0002610.010472827.73990.0009090.86993198.82620.003538
SEM2.28E-14443.70780.0107390.0001170.004683370.17650.0004070.38904588.917780.001582
Rank19625103784
C8Mean0230.83000351.304408.365755388.42210
Std0218.0023000344.3144018.7064371.83020
SEM097.49361000153.982108.365755166.28750
Rank18111917101
C9Mean5.33E-151.1001187.11E-159.88E-518.88E-150.9321423.55E-151.42E-140.9320231.28E-13
Std4.86E-152.0280267.43E-152.13E-5002.0844.86E-154.86E-152.0840672.51E-13
SEM2.18E-150.9069613.32E-159.54E-5100.9319932.18E-152.18E-150.9320231.12E-13
Rank31041592687
C10Mean3.1E-16948.891670.05298628.8937549.634149.5291310.3768255.0426148.6392961.18547
Std00.2158880.02749126.37610.5500220.71676523.2003711.505470.3598923.670574
SEM00.0965480.01229411.795750.2459780.32054710.375525.1454040.1609491.64153
Rank16248739510

4.7. Results on IEEE Congress on Evolutionary Computation (CEC) 2017 Benchmark Suites

The results of APO-JADE over the CEC2017 benchmark functions F1–F15 as shown in Table 3 demonstrate its superior performance and consistency across various optimization problems. APO-JADE frequently achieved the best results, securing the top rank in most functions such as F1, F2, F3, F4, F5, F6, F11, F12, F13, F14, and F15. Notably, APO-JADE had the lowest mean values in these functions, indicating its effectiveness. The standard deviation (Std) and standard error of the mean (SEM) values for APO-JADE were generally low, suggesting stable and reliable performance. In functions F7, F8, F9, and F10, APO-JADE still performed competitively, ranking second or third. These results underscore the robustness and versatility of APO-JADE in handling a wide range of optimization tasks, making it a strong candidate for various applications. The comprehensive comparison with other algorithms like WOA, COA, GWO, SCSO, OMA, BWO, SHO, SHIO, ChOA, DBO, MTDE, and SCA highlights the consistent superiority of APO-JADE across different benchmark functions.
Table 3. Results on IEEE Congress on Evolutionary Computation (CEC) 2017 benchmark suites (F1–F15).
Fun | Statistics | APO-JADE | WOA | COA | GWO | SCSO | OMA | BWO | COOT | SHIO | ChOA | DBO | MTDE | SCA
F1Mean2.02E+032.48E+064.36E+035.46E+063.43E+035.91E+081.79E+081.60E+083.95E+071.03E+105.45E+052.91E+119.13E+08
Std2.06E+033.07E+064.09E+031.46E+073.40E+032.14E+082.16E+081.93E+081.13E+084.28E+091.07E+061.62E+103.55E+08
SEM6.51E+029.72E+051.29E+034.61E+061.08E+036.76E+076.82E+076.11E+073.58E+071.35E+093.38E+055.13E+091.12E+08
Rank15362109871241311
F2Mean2.00E+024.92E+042.00E+021.22E+052.00E+025.24E+073.19E+061.40E+055.56E+072.80E+121.14E+047.54E+911.46E+07
Std5.72E-043.20E+049.97E-032.36E+052.32E-016.50E+079.40E+062.60E+051.48E+085.27E+121.15E+041.55E+923.61E+07
SEM1.81E-041.01E+043.15E-037.47E+047.33E-022.06E+072.97E+068.23E+044.69E+071.67E+123.64E+034.90E+911.14E+07
Rank15263108711124139
F3Mean3.00E+022.50E+033.02E+021.25E+033.00E+021.43E+032.20E+033.06E+034.52E+039.48E+033.77E+024.80E+051.51E+03
Std1.12E-123.50E+033.63E+001.32E+031.57E-027.10E+022.44E+032.14E+033.90E+033.34E+034.91E+015.07E+048.40E+02
SEM3.53E-131.11E+031.15E+004.17E+024.96E-032.24E+027.71E+026.77E+021.23E+031.06E+031.55E+011.60E+042.66E+02
Rank19352681011124137
F4Mean4.00E+024.22E+024.04E+024.13E+024.10E+024.64E+024.33E+024.28E+024.28E+021.00E+034.20E+021.59E+054.44E+02
Std1.27E-013.25E+011.66E+001.79E+012.17E+011.34E+012.42E+012.59E+012.27E+013.46E+022.02E+011.00E+041.76E+01
SEM4.03E-021.03E+015.26E-015.66E+006.86E+004.24E+007.66E+008.20E+007.17E+001.09E+026.39E+003.16E+035.58E+00
Rank16243119871251310
F5Mean5.12E+025.55E+025.16E+025.16E+025.71E+025.52E+025.30E+025.26E+025.24E+025.87E+025.35E+021.80E+035.48E+02
Std9.66E+002.23E+017.76E+004.64E+001.94E+015.46E+001.36E+016.85E+001.39E+011.68E+011.57E+013.66E+015.56E+00
SEM3.05E+007.06E+002.46E+001.47E+006.13E+001.73E+004.30E+002.17E+004.40E+005.32E+004.97E+001.16E+011.76E+00
Rank11032119654127138
F6Mean6.03E+026.30E+026.07E+026.00E+026.48E+026.25E+026.08E+026.10E+026.09E+026.45E+026.10E+027.43E+026.19E+02
Std5.53E+001.18E+011.17E+014.63E-015.65E+004.76E+004.84E+005.88E+009.00E+001.01E+017.41E+004.28E+002.56E+00
SEM1.75E+003.72E+003.70E+001.46E-011.79E+001.50E+001.53E+001.86E+002.85E+003.19E+002.34E+001.35E+008.09E-01
Rank21031129475116138
F7Mean7.36E+027.83E+027.64E+027.28E+027.97E+027.70E+027.45E+027.49E+027.44E+028.10E+027.67E+026.47E+037.73E+02
Std1.49E+012.80E+012.35E+011.35E+011.86E+014.91E+001.36E+018.91E+009.65E+007.57E+002.44E+013.61E+027.97E+00
SEM4.71E+008.86E+007.43E+004.25E+005.90E+001.55E+004.31E+002.82E+003.05E+002.39E+007.72E+001.14E+022.52E+00
Rank21061118453127139
F8Mean8.27E+028.35E+028.29E+028.15E+028.40E+028.37E+028.27E+028.23E+028.19E+028.50E+028.17E+022.40E+038.39E+02
Std9.30E+009.18E+009.92E+004.83E+001.14E+014.34E+008.03E+005.11E+007.59E+001.08E+016.02E+003.46E+018.42E+00
SEM2.94E+002.90E+003.14E+001.53E+003.61E+001.37E+002.54E+001.62E+002.40E+003.41E+001.90E+001.09E+012.66E+00
Rank68711195431221310
F9Mean9.86E+021.37E+039.97E+029.20E+021.63E+031.01E+031.00E+031.06E+039.80E+021.56E+031.07E+038.78E+041.03E+03
Std1.96E+014.02E+022.74E+025.73E+012.68E+023.46E+011.19E+021.47E+028.30E+012.06E+021.43E+026.44E+031.28E+02
SEM6.21E+001.27E+028.66E+011.81E+018.47E+011.09E+013.75E+014.64E+012.63E+016.50E+014.52E+012.04E+034.05E+01
Rank31041126582119137
F10Mean1.51E+032.09E+031.85E+031.42E+032.32E+032.49E+031.86E+031.69E+031.85E+032.61E+031.58E+031.69E+042.23E+03
Std2.58E+023.23E+024.48E+022.41E+023.15E+021.36E+024.91E+022.28E+023.44E+022.14E+022.54E+025.39E+022.13E+02
SEM8.15E+011.02E+021.42E+027.61E+019.96E+014.31E+011.55E+027.21E+011.09E+026.77E+018.02E+011.71E+026.74E+01
Rank28511011746123139
F11Mean1.12E+031.19E+031.14E+031.12E+031.19E+031.27E+031.58E+031.14E+031.60E+034.69E+031.15E+037.17E+041.20E+03
Std1.01E+014.52E+015.28E+011.23E+015.07E+013.52E+011.39E+033.81E+011.38E+031.70E+033.87E+017.71E+032.64E+01
SEM3.18E+001.43E+011.67E+013.87E+001.60E+011.11E+014.38E+021.21E+014.35E+025.37E+021.22E+012.44E+038.36E+00
Rank17426910311125138
F12Mean1.37E+042.34E+061.50E+046.70E+051.44E+041.09E+076.09E+056.15E+054.02E+052.78E+083.70E+041.75E+111.15E+07
Std1.13E+043.41E+061.06E+048.10E+051.29E+045.66E+068.27E+058.31E+058.18E+052.73E+083.66E+041.42E+108.40E+06
SEM3.58E+031.08E+063.36E+032.56E+054.08E+031.79E+062.62E+052.63E+052.59E+058.64E+071.16E+044.48E+092.66E+06
Rank19382106751241311
F13Mean1.47E+032.30E+045.26E+031.15E+041.97E+032.30E+048.53E+036.63E+031.74E+042.25E+071.98E+031.14E+112.60E+04
Std2.38E+021.43E+047.19E+039.47E+033.00E+021.90E+045.34E+031.52E+031.16E+043.00E+073.96E+025.85E+091.13E+04
SEM7.53E+014.53E+032.27E+033.00E+039.49E+015.99E+031.69E+034.80E+023.67E+039.49E+061.25E+021.85E+093.58E+03
Rank11047296581231311
F14Mean1.45E+031.90E+031.52E+032.69E+031.57E+032.22E+032.15E+033.20E+034.12E+035.78E+031.46E+036.04E+081.62E+03
Std2.62E+011.11E+035.16E+011.96E+031.29E+027.34E+021.40E+031.48E+031.81E+034.64E+032.60E+011.11E+089.87E+01
SEM8.28E+003.52E+021.63E+016.19E+024.08E+012.32E+024.43E+024.69E+025.72E+021.47E+038.21E+003.51E+073.12E+01
Rank16394871011122135
F15Mean1.55E+034.76E+031.86E+034.49E+032.44E+035.76E+033.80E+032.77E+033.27E+039.24E+031.68E+034.92E+102.72E+03
Std3.64E+013.14E+032.19E+022.14E+031.64E+031.24E+031.72E+031.14E+031.46E+035.38E+039.84E+016.67E+091.20E+03
SEM1.15E+019.92E+026.92E+016.77E+025.19E+023.92E+025.42E+023.60E+024.61E+021.70E+033.11E+012.11E+093.79E+02
Rank11039411867122135
Table 4 shows the results of APO-JADE over the CEC2017 benchmark functions F16–F30, indicating its competitive performance across a range of optimization problems. For F16, APO-JADE secured the second rank with a mean value of 1.70 × 10^3, showcasing its effectiveness and consistency. In F17, APO-JADE ranked first with the lowest mean value of 1.74 × 10^3, further demonstrating its superior performance. Similarly, for F18 and F19, APO-JADE achieved top ranks with mean values of 1.95 × 10^3 and 1.93 × 10^3, respectively, outperforming most other algorithms. The trend continued with F20, where APO-JADE ranked second with a mean of 2.02 × 10^3. In F21, APO-JADE again led the rankings with a mean of 2.20 × 10^3. For F22, it maintained a high rank, securing second place with a mean of 2.30 × 10^3. In F23 and F24, APO-JADE continued its strong performance, ranking third and second, respectively, with mean values of 2.62 × 10^3 and 2.68 × 10^3. APO-JADE also performed well in F25 and F26, ranking first and second with mean values of 2.92 × 10^3 and 2.98 × 10^3, respectively. In F27, APO-JADE achieved the top rank with a mean of 3.09 × 10^3, and it ranked second in F28 with a mean of 3.26 × 10^3. For F29, APO-JADE secured first place with a mean value of 3.16 × 10^3. Lastly, in F30, APO-JADE ranked third with a mean of 3.24 × 10^5.
Table 4. Results on IEEE Congress on Evolutionary Computation (CEC) 2017 benchmark suites (F16–F30).
Fun | Statistics | APO-JADE | WOA | COA | GWO | SCSO | OMA | BWO | COOT | SHIO | ChOA | DBO | MTDE | SCA
F16 | Mean | 1.70E+03 | 1.86E+03 | 1.62E+03 | 1.70E+03 | 1.97E+03 | 1.88E+03 | 1.79E+03 | 1.78E+03 | 1.82E+03 | 2.07E+03 | 1.72E+03 | 2.76E+04 | 1.74E+03
F16 | Std | 7.24E+01 | 1.69E+02 | 4.17E+01 | 6.46E+01 | 1.47E+02 | 5.79E+01 | 1.61E+02 | 1.46E+02 | 1.20E+02 | 1.25E+02 | 1.04E+02 | 1.61E+03 | 8.65E+01
F16 | SEM | 2.29E+01 | 5.33E+01 | 1.32E+01 | 2.04E+01 | 4.64E+01 | 1.83E+01 | 5.10E+01 | 4.62E+01 | 3.78E+01 | 3.95E+01 | 3.28E+01 | 5.08E+02 | 2.74E+01
F16 | Rank | 2 | 9 | 1 | 3 | 11 | 10 | 7 | 6 | 8 | 12 | 4 | 13 | 5
F17 | Mean | 1.74E+03 | 1.82E+03 | 1.74E+03 | 1.75E+03 | 1.89E+03 | 1.78E+03 | 1.76E+03 | 1.75E+03 | 1.76E+03 | 1.84E+03 | 1.76E+03 | 2.40E+06 | 1.78E+03
F17 | Std | 2.05E+01 | 4.35E+01 | 8.10E+00 | 1.99E+01 | 1.20E+02 | 7.01E+00 | 2.17E+01 | 1.53E+01 | 2.83E+01 | 3.57E+01 | 1.55E+01 | 1.09E+06 | 1.32E+01
F17 | SEM | 6.50E+00 | 1.38E+01 | 2.56E+00 | 6.29E+00 | 3.80E+01 | 2.22E+00 | 6.85E+00 | 4.84E+00 | 8.95E+00 | 1.13E+01 | 4.91E+00 | 3.43E+05 | 4.17E+00
F17 | Rank | 1 | 10 | 2 | 4 | 12 | 8 | 6 | 3 | 7 | 11 | 5 | 13 | 9
F18 | Mean | 1.95E+03 | 1.49E+04 | 1.74E+04 | 2.12E+04 | 4.14E+03 | 5.18E+05 | 3.52E+04 | 1.78E+04 | 2.40E+04 | 5.59E+07 | 2.00E+03 | 1.12E+09 | 9.75E+04
F18 | Std | 1.19E+02 | 1.17E+04 | 1.19E+04 | 1.49E+04 | 2.45E+03 | 4.15E+05 | 1.19E+04 | 9.52E+03 | 1.04E+04 | 1.04E+08 | 1.40E+02 | 2.80E+08 | 6.29E+04
F18 | SEM | 3.77E+01 | 3.71E+03 | 3.75E+03 | 4.71E+03 | 7.76E+02 | 1.31E+05 | 3.75E+03 | 3.01E+03 | 3.28E+03 | 3.28E+07 | 4.44E+01 | 8.84E+07 | 1.99E+04
F18 | Rank | 1 | 4 | 5 | 7 | 3 | 11 | 9 | 6 | 8 | 12 | 2 | 13 | 10
F19 | Mean | 1.93E+03 | 2.87E+04 | 2.15E+03 | 8.47E+03 | 2.02E+03 | 9.31E+03 | 9.39E+03 | 2.74E+04 | 4.50E+03 | 4.57E+05 | 1.92E+03 | 8.83E+09 | 5.16E+03
F19 | Std | 2.14E+01 | 5.46E+04 | 3.01E+02 | 6.26E+03 | 9.53E+01 | 5.07E+03 | 6.45E+03 | 7.24E+04 | 4.90E+03 | 6.59E+05 | 2.04E+01 | 8.36E+08 | 4.95E+03
F19 | SEM | 6.78E+00 | 1.73E+04 | 9.52E+01 | 1.98E+03 | 3.01E+01 | 1.60E+03 | 2.04E+03 | 2.29E+04 | 1.55E+03 | 2.09E+05 | 6.46E+00 | 2.65E+08 | 1.57E+03
F19 | Rank | 2 | 11 | 4 | 7 | 3 | 8 | 9 | 10 | 5 | 12 | 1 | 13 | 6
F20 | Mean | 2.02E+03 | 2.21E+03 | 2.02E+03 | 2.06E+03 | 2.23E+03 | 2.16E+03 | 2.07E+03 | 2.07E+03 | 2.16E+03 | 2.26E+03 | 2.06E+03 | 5.62E+03 | 2.10E+03
F20 | Std | 3.30E+01 | 5.93E+01 | 1.02E+01 | 4.03E+01 | 9.23E+01 | 3.01E+01 | 4.28E+01 | 4.99E+01 | 9.58E+01 | 7.43E+01 | 2.59E+01 | 1.91E+02 | 2.38E+01
F20 | SEM | 1.04E+01 | 1.88E+01 | 3.24E+00 | 1.28E+01 | 2.92E+01 | 9.52E+00 | 1.35E+01 | 1.58E+01 | 3.03E+01 | 2.35E+01 | 8.18E+00 | 6.04E+01 | 7.52E+00
F20 | Rank | 2 | 10 | 1 | 3 | 11 | 8 | 6 | 5 | 9 | 12 | 4 | 13 | 7
F21 | Mean | 2.20E+03 | 2.29E+03 | 2.28E+03 | 2.31E+03 | 2.36E+03 | 2.27E+03 | 2.30E+03 | 2.33E+03 | 2.33E+03 | 2.30E+03 | 2.26E+03 | 3.59E+03 | 2.22E+03
F21 | Std | 1.40E+00 | 7.86E+01 | 5.67E+01 | 6.31E+00 | 1.40E+01 | 3.31E+01 | 5.14E+01 | 8.61E+00 | 1.05E+01 | 5.01E+01 | 6.95E+01 | 3.95E+01 | 3.99E+01
F21 | SEM | 4.44E-01 | 2.49E+01 | 1.79E+01 | 1.99E+00 | 4.42E+00 | 1.05E+01 | 1.63E+01 | 2.72E+00 | 3.34E+00 | 1.58E+01 | 2.20E+01 | 1.25E+01 | 1.26E+01
F21 | Rank | 1 | 6 | 5 | 9 | 12 | 4 | 8 | 11 | 10 | 7 | 3 | 13 | 2
F22 | Mean | 2.30E+03 | 2.32E+03 | 2.30E+03 | 2.34E+03 | 2.42E+03 | 2.43E+03 | 2.37E+03 | 2.36E+03 | 2.31E+03 | 2.94E+03 | 2.30E+03 | 1.78E+04 | 2.37E+03
F22 | Std | 3.54E+00 | 1.22E+01 | 5.80E-01 | 1.16E+02 | 3.65E+02 | 3.00E+01 | 5.66E+01 | 5.13E+01 | 2.70E+01 | 2.14E+02 | 3.26E+01 | 8.63E+02 | 3.20E+01
F22 | SEM | 1.12E+00 | 3.84E+00 | 1.83E-01 | 3.67E+01 | 1.15E+02 | 9.49E+00 | 1.79E+01 | 1.62E+01 | 8.53E+00 | 6.77E+01 | 1.03E+01 | 2.73E+02 | 1.01E+01
F22 | Rank | 2 | 5 | 1 | 6 | 10 | 11 | 9 | 7 | 4 | 12 | 3 | 13 | 8
F23 | Mean | 2.62E+03 | 2.65E+03 | 2.62E+03 | 2.61E+03 | 2.67E+03 | 2.68E+03 | 2.63E+03 | 2.65E+03 | 2.63E+03 | 2.69E+03 | 2.65E+03 | 7.24E+03 | 2.65E+03
F23 | Std | 1.42E+01 | 2.01E+01 | 5.36E+00 | 7.56E+00 | 4.03E+01 | 1.02E+01 | 1.10E+01 | 1.64E+01 | 1.39E+01 | 8.98E+00 | 2.24E+01 | 2.01E+02 | 8.54E+00
F23 | SEM | 4.48E+00 | 6.35E+00 | 1.70E+00 | 2.39E+00 | 1.27E+01 | 3.23E+00 | 3.46E+00 | 5.20E+00 | 4.41E+00 | 2.84E+00 | 7.08E+00 | 6.35E+01 | 2.70E+00
F23 | Rank | 3 | 6 | 2 | 1 | 10 | 11 | 4 | 8 | 5 | 12 | 7 | 13 | 9
F24 | Mean | 2.68E+03 | 2.75E+03 | 2.73E+03 | 2.74E+03 | 2.82E+03 | 2.56E+03 | 2.76E+03 | 2.78E+03 | 2.76E+03 | 2.84E+03 | 2.59E+03 | 6.99E+03 | 2.76E+03
F24 | Std | 1.23E+02 | 7.90E+01 | 7.93E+01 | 5.10E+00 | 3.84E+01 | 1.75E+01 | 1.09E+01 | 1.54E+01 | 1.79E+01 | 3.59E+01 | 1.10E+02 | 1.89E+02 | 6.80E+01
F24 | SEM | 3.90E+01 | 2.50E+01 | 2.51E+01 | 1.61E+00 | 1.22E+01 | 5.55E+00 | 3.45E+00 | 4.86E+00 | 5.65E+00 | 1.13E+01 | 3.49E+01 | 5.97E+01 | 2.15E+01
F24 | Rank | 3 | 6 | 4 | 5 | 11 | 1 | 8 | 10 | 7 | 12 | 2 | 13 | 9
F25 | Mean | 2.92E+03 | 2.94E+03 | 2.94E+03 | 2.93E+03 | 2.93E+03 | 2.97E+03 | 2.95E+03 | 2.93E+03 | 2.95E+03 | 3.29E+03 | 2.93E+03 | 4.89E+04 | 2.97E+03
F25 | Std | 2.43E+01 | 4.15E+01 | 2.21E+01 | 2.21E+01 | 2.35E+01 | 2.33E+01 | 2.06E+01 | 6.63E+01 | 2.79E+01 | 1.47E+02 | 2.34E+01 | 3.37E+03 | 1.07E+01
F25 | SEM | 7.68E+00 | 1.31E+01 | 7.00E+00 | 6.98E+00 | 7.43E+00 | 7.37E+00 | 6.51E+00 | 2.10E+01 | 8.84E+00 | 4.64E+01 | 7.40E+00 | 1.06E+03 | 3.39E+00
F25 | Rank | 1 | 6 | 7 | 2 | 5 | 11 | 8 | 4 | 9 | 12 | 3 | 13 | 10
F26 | Mean | 2.98E+03 | 3.21E+03 | 2.98E+03 | 3.02E+03 | 3.61E+03 | 3.21E+03 | 3.18E+03 | 3.04E+03 | 3.28E+03 | 3.94E+03 | 2.95E+03 | 6.88E+04 | 3.09E+03
F26 | Std | 1.23E+02 | 3.77E+02 | 1.63E+02 | 3.28E+02 | 4.34E+02 | 7.41E+01 | 3.15E+02 | 2.01E+02 | 3.15E+02 | 2.93E+02 | 1.85E+02 | 3.90E+03 | 2.98E+01
F26 | SEM | 3.90E+01 | 1.19E+02 | 5.17E+01 | 1.04E+02 | 1.37E+02 | 2.34E+01 | 9.97E+01 | 6.37E+01 | 9.96E+01 | 9.26E+01 | 5.85E+01 | 1.23E+03 | 9.44E+00
F26 | Rank | 2 | 9 | 3 | 4 | 11 | 8 | 7 | 5 | 10 | 12 | 1 | 13 | 6
F27 | Mean | 3.09E+03 | 3.11E+03 | 3.11E+03 | 3.09E+03 | 3.18E+03 | 3.13E+03 | 3.10E+03 | 3.13E+03 | 3.11E+03 | 3.18E+03 | 3.10E+03 | 2.18E+04 | 3.10E+03
F27 | Std | 2.96E+00 | 2.46E+01 | 3.22E+01 | 1.78E+00 | 6.57E+01 | 1.09E+01 | 1.57E+01 | 2.38E+01 | 2.90E+01 | 8.81E+01 | 7.50E+00 | 2.41E+03 | 2.62E+00
F27 | SEM | 9.35E-01 | 7.79E+00 | 1.02E+01 | 5.64E-01 | 2.08E+01 | 3.43E+00 | 4.98E+00 | 7.53E+00 | 9.18E+00 | 2.79E+01 | 2.37E+00 | 7.63E+02 | 8.29E-01
F27 | Rank | 1 | 7 | 6 | 2 | 12 | 10 | 5 | 9 | 8 | 11 | 3 | 13 | 4
F28 | Mean | 3.26E+03 | 3.31E+03 | 3.33E+03 | 3.31E+03 | 3.39E+03 | 3.33E+03 | 3.43E+03 | 3.36E+03 | 3.37E+03 | 3.76E+03 | 3.23E+03 | 2.88E+04 | 3.31E+03
F28 | Std | 1.41E+02 | 1.16E+02 | 1.30E+02 | 1.08E+02 | 1.42E+02 | 9.39E+01 | 7.02E+01 | 2.25E+02 | 1.42E+02 | 1.20E+02 | 1.23E+02 | 1.26E+03 | 8.52E+01
F28 | SEM | 4.46E+01 | 3.65E+01 | 4.10E+01 | 3.43E+01 | 4.49E+01 | 2.97E+01 | 2.22E+01 | 7.12E+01 | 4.50E+01 | 3.78E+01 | 3.88E+01 | 3.99E+02 | 2.69E+01
F28 | Rank | 2 | 3 | 7 | 4 | 10 | 6 | 11 | 8 | 9 | 12 | 1 | 13 | 5
F29 | Mean | 3.16E+03 | 3.33E+03 | 3.22E+03 | 3.23E+03 | 3.43E+03 | 3.26E+03 | 3.22E+03 | 3.23E+03 | 3.24E+03 | 3.37E+03 | 3.20E+03 | 2.13E+06 | 3.21E+03
F29 | Std | 2.21E+01 | 9.40E+01 | 6.20E+01 | 6.06E+01 | 1.76E+02 | 3.11E+01 | 6.51E+01 | 2.67E+01 | 7.22E+01 | 1.19E+02 | 2.82E+01 | 1.04E+06 | 1.29E+01
F29 | SEM | 6.99E+00 | 2.97E+01 | 1.96E+01 | 1.92E+01 | 5.58E+01 | 9.84E+00 | 2.06E+01 | 8.44E+00 | 2.28E+01 | 3.76E+01 | 8.93E+00 | 3.30E+05 | 4.08E+00
F29 | Rank | 1 | 10 | 4 | 6 | 12 | 9 | 5 | 7 | 8 | 11 | 2 | 13 | 3
F30 | Mean | 3.24E+05 | 7.45E+05 | 6.17E+05 | 4.00E+05 | 2.28E+06 | 2.33E+05 | 6.52E+05 | 8.82E+05 | 5.75E+05 | 3.86E+06 | 9.04E+03 | 1.56E+10 | 9.62E+05
F30 | Std | 3.63E+05 | 5.90E+05 | 6.81E+05 | 5.81E+05 | 3.09E+06 | 2.36E+05 | 7.69E+05 | 8.13E+05 | 1.01E+06 | 3.98E+06 | 6.75E+03 | 1.79E+09 | 5.03E+05
F30 | SEM | 1.15E+05 | 1.87E+05 | 2.15E+05 | 1.84E+05 | 9.78E+05 | 7.46E+04 | 2.43E+05 | 2.57E+05 | 3.18E+05 | 1.26E+06 | 2.14E+03 | 5.66E+08 | 1.59E+05
F30 | Rank | 3 | 8 | 6 | 4 | 11 | 2 | 7 | 9 | 5 | 12 | 1 | 13 | 10

4.8. APO-JADE Convergence Diagram

The convergence curves for the APO-JADE optimizer on the CEC2022 benchmark functions (F1 to F10), shown in Figure 2 and Figure 3, exhibit the optimizer’s performance over 500 iterations. For F1, the curve shows a rapid initial decrease in the best value obtained, dropping from over 10^6 to below 10^5 within the first 50 iterations, which is followed by a flattening curve. In F2, the optimizer demonstrates a steady decrease from around 2000 to 600, indicating consistent improvement. The F3 curve drops sharply from around 740 to 670 within the first 50 iterations and then decreases more slowly. For F4, the best value drops from 920 to 830 within 100 iterations with minor improvements thereafter. In F5, the optimizer reduces the best value from 6500 to 1500 within 100 iterations with the curve flattening out afterwards. The F6 curve starts at 10^9 and drops dramatically to 10^5 within 300 iterations, showing strong initial performance. For F7, the best value decreases from 2350 to 2100 over 500 iterations with a noticeable flattening after 200 iterations. In F8, the curve drops from 8000 to 3000 within the first 50 iterations; then, it flattens out. The F9 curve shows a decrease from 3800 to 2800 within 100 iterations with subsequent flattening. Finally, the F10 curve starts at 4500 and drops sharply to 3000 within the first 50 iterations, which is followed by diminishing returns. Overall, the APO-JADE optimizer shows a consistent pattern of rapid initial improvements followed by slower gains across all functions, which is typical of many optimization algorithms where easy-to-find improvements are made quickly, but further refinements require more time and effort.

4.9. APO-JADE Search History Diagram

As shown in Figure 4 and Figure 5, the search history analysis of the APO-JADE optimizer on the CEC2022 benchmark functions reveals various patterns in the search space exploration, where the red dot represents the best solution found. The plots show a tendency for the optimizer to focus its search efforts on specific regions of the search space, which likely correspond to promising areas identified during the optimization process. For instance, the search history for several functions shows dense clusters of search points, indicating areas where the optimizer has concentrated its efforts. This behavior is observed in the dense concentration of search points around specific coordinates. In some cases, the optimizer exhibits a broader spread in its search, suggesting an exploration of a wider range of values. However, even in these cases, there are often still noticeable areas of higher density, indicating focused exploration. This pattern of combining broad exploration with targeted search in promising regions is indicative of an effective optimization strategy, where the optimizer initially explores a wide range of potential solutions and then homes in on areas that appear most promising based on initial findings. Overall, the APO-JADE optimizer shows a consistent pattern of efficiently balancing exploration and exploitation, which is crucial for effectively navigating complex search spaces.

4.10. APO-JADE Average Fitness Diagram

The average fitness score serves as a key indicator of how effectively a particular solution performs in comparison to others within the scope of the optimization problem at hand. When evaluating the performance of the APO-JADE algorithm, the fitness score is determined based on the value of the objective function that the algorithm seeks to minimize or maximize. In the context of minimization, a lower fitness score typically signifies a superior solution, whereas in maximization, a higher fitness score denotes a better outcome. This fitness value reflects the quality or suitability of a solution, steering the optimizer toward progressively better-performing solutions over successive iterations.
The performance of the optimizer as shown in Figure 6 and Figure 7 across the different fitness functions in the CEC2022 benchmark suite can be observed through the variation in the average fitness of all particles over 500 iterations. In most cases, a rapid decline in the average fitness occurs within the initial iterations, demonstrating a quick convergence toward a local or global optimum. However, the scale and behavior of this convergence differ significantly across functions, which suggests varying levels of problem complexity and landscape ruggedness. Some functions show a relatively smooth and continuous improvement, while others exhibit fluctuations indicating possible re-exploration or escaping from local optima, which are particularly noticeable in functions where average fitness temporarily increases before continuing a downward trend. This behavior emphasizes the optimizer’s capability to adapt and search efficiently across diverse problem spaces, although the extent of optimization and stability varies, highlighting the need for adaptive strategies or parameter tuning tailored to specific types of optimization landscapes.
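To make the quantity plotted in these figures concrete, the sketch below shows how an average-fitness curve can be computed from a population's per-iteration fitness records. The array layout and function name are illustrative assumptions and are not part of the published implementation, which was written in Matlab.

```python
import numpy as np

def average_fitness_curve(fitness_history: np.ndarray) -> np.ndarray:
    """fitness_history[i, t] holds the fitness of candidate i at iteration t.
    The returned vector (one value per iteration) is the kind of curve shown
    in the average-fitness plots."""
    return fitness_history.mean(axis=0)

# Example: 30 candidates tracked over 500 iterations
# curve = average_fitness_curve(np.random.rand(30, 500))
```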

4.11. APO-JADE Box-Plot Diagram

The box plots (see Figure 8 and Figure 9) for the APO-JADE optimizer provide a visual representation of the distribution of the best fitness scores obtained for each benchmark function (F1 to F10). For F1, the median fitness score is around 2 × 10^4 with a noticeable spread between the first and third quartiles, indicating variability, and a few outliers above 4 × 10^4. For F2, the median score is slightly above 400 with a wider interquartile range extending up to approximately 480, suggesting performance variability and no significant outliers. The F3 box plot shows a median score around 630, with an interquartile range from approximately 620 to 650 and a single outlier around 660, indicating consistent performance with occasional deviations. For F4, the median score is approximately 840, with a narrow interquartile range from 830 to 850, suggesting stable performance and no significant outliers. For F5, the median score is around 1500, with a wide interquartile range extending up to 2000 and a few outliers around 2500, indicating higher score variability. The F6 plot shows a median score around 4000, with a broad interquartile range extending up to approximately 7000, suggesting high variability. For F7, the median score is around 2080, with an interquartile range from 2060 to 2100, indicating stable performance and no significant outliers. For F8, the median score is around 2330, with the interquartile range extending to approximately 2250, and several outliers above 2270. For F9, the median score is around 2600, with the interquartile range extending from approximately 2580 to 2640, indicating moderate variability with no significant outliers. Lastly, for F10, the median score is around 2700, with a narrow interquartile range and a few outliers above 3500, suggesting stable performance with occasional deviations. Overall, these box plots demonstrate the APO-JADE optimizer’s performance across different benchmark functions, highlighting its ability to achieve consistent fitness scores with occasional variability depending on the function’s complexity and nature.

4.12. APO-JADE Heat Map Diagram

As shown in Figure 10 and Figure 11, the heat maps of the sensitivity analysis for different population sizes and iterations provide information on how the performance of the APO-JADE optimizer varies with these parameters in different benchmark functions (F1 to F10). For F1, the heat map shows that with 10 search agents and 100 iterations, the fitness score is highest at 5.59 × 10^4, indicating poor performance. As the number of iterations increases, the fitness scores generally decrease, with the best performance around 500 iterations and 40 agents, where the fitness score drops to 1.648 × 10^4. This trend suggests that higher iterations and a moderate number of search agents improve performance. For F2, the highest fitness score (713.7) occurs with 10 search agents and 100 iterations. The performance improves significantly as the number of iterations and search agents increase, with the best fitness score (446.8) observed at 50 search agents and 400 iterations, indicating that both parameters contribute to better optimization results. In F3, the fitness score is highest (662.2) with 10 search agents and 100 iterations, showing initial poor performance. The scores improve with more iterations and agents, reaching a better score (636.4) at 40 agents and 300 iterations, suggesting that increasing both parameters enhances optimization. The heat map for F4 shows that the highest fitness score (871) occurs with 10 agents and 100 iterations, and performance improves with increased iterations and agents. The best score (829.4) is at 50 agents and 500 iterations, indicating that a larger population and more iterations lead to better results. For F5, the highest score (2159) is with 10 agents and 100 iterations, showing poor initial performance. Performance improves as both parameters increase, with the best score (1437) at 50 agents and 500 iterations, highlighting the benefits of larger populations and more iterations. In F6, the highest score (5.208 × 10^6) is with 10 agents and 100 iterations, and performance improves with increased iterations and agents, with the best score (4313) at 50 agents and 500 iterations, suggesting that larger populations and more iterations significantly enhance performance. For F7, the highest score (2142) is with 20 agents and 100 iterations, and the performance generally improves with more iterations and agents, reaching the best score (2066) at 50 agents and 300 iterations, indicating the benefit of larger populations and more iterations. The F8 heat map shows that the highest score (2301) is with 10 agents and 100 iterations, and performance improves with increased iterations and agents with the best score (2233) at 50 agents.
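The grid search behind such heat maps is straightforward to reproduce. The sketch below assumes a user-supplied run_optimizer(pop_size, max_iter) callable that performs one independent run of the optimizer and returns its best fitness; it is an illustrative outline under that assumption, not the authors' Matlab code.

```python
import numpy as np

def sensitivity_grid(run_optimizer,
                     pop_sizes=(10, 20, 30, 40, 50),
                     iteration_budgets=(100, 200, 300, 400, 500),
                     runs=5):
    """Mean best fitness for every (population size, iteration budget) pair.
    run_optimizer(pop_size, max_iter) must return the best fitness of one run."""
    grid = np.zeros((len(pop_sizes), len(iteration_budgets)))
    for i, n_pop in enumerate(pop_sizes):
        for j, n_iter in enumerate(iteration_budgets):
            grid[i, j] = np.mean([run_optimizer(n_pop, n_iter) for _ in range(runs)])
    return grid  # plot with, e.g., matplotlib's imshow to obtain a heat map
```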

5. Case Study: Application of APO-JADE for Attack Detection

In this section, we delve into the comprehensive structure of the attack detection APO-JADE model. The model is inspired by DRaNN_PSO [30].
The inception phase of our proposed methodology, which utilizes the hybrid APO-JADE optimizer, begins with data assimilation and careful observation. To ensure the robustness and adaptability of the model [31], three state-of-the-art security datasets are employed: DS2OS, UNSW-NB15, and ToN_IoT. Each dataset possesses distinct characteristics and features, rendering them suitable for diverse types of analysis. We provide a concise exposition on each of these datasets [30]:
  • DS2OS;
  • UNSW-NB15;
  • ToN_IoT.
This diverse dataset collection facilitates a comprehensive and multifaceted analysis, contributing to the development of a more effective and resilient attack detection system.

5.1. DS2OS Dataset

In 2018, Marc-Oliver P. and François-X. introduced the DS2OS dataset, which is a contemporary Industrial Internet of Things (IIoT) security dataset [32]. As an open-source resource, it is an invaluable asset for assessing the competency of artificial intelligence-centric cybersecurity paradigms, particularly in the realms of smart industries, urban intelligent systems (smart cities), and various IIoT applications [30].
The DS2OS dataset consists of a cumulative 357,952 samples. Out of these, 347,935 are categorized as standard samples, while the remaining 10,017 are designated as anomalous entries. Structurally, DS2OS is composed of 13 features and is classified into 8 distinct categories. This diverse and comprehensive dataset offers a rich resource for developing and testing cybersecurity models in IIoT contexts, providing both typical and atypical scenarios for robust analysis [30].

5.1.1. UNSW-NB15 Dataset

Originating from the Cyber Range Lab of the Australian Centre for Cyber Security, the UNSW-NB15 dataset was introduced to the public in 2015 by Moustafa et al. [33]. Renowned in cybersecurity research, this dataset comprises an exhaustive collection of 257,673 samples. Delving into its composition, 93,000 of these samples are classified as regular, while a substantial 164,673 are identified as malicious entries [30].
In terms of features, the UNSW-NB15 dataset is embedded with 49 distinct characteristics and is divided into 10 categorical classes. This rich and diverse dataset is pivotal for cybersecurity research, offering a comprehensive set of data points for the development and validation of advanced cybersecurity models [30].

5.1.2. ToN_IoT Dataset

A novel addition to the security datasets tailored for IoT/IIoT applications is the ToN_IoT dataset, which was introduced by the Cyber Range and IoT Labs of the University of New South Wales, Australia, in 2019 [34]. The ToN_IoT dataset is paramount for evaluating the effectiveness and accuracy of various cybersecurity solutions, especially those underpinned by Machine Learning (ML) and Deep Learning (DL) architectures.
The dataset is extensive, housing a total of 1,379,274 samples. Of these, 270,279 are classified as normal instances, while the remaining 1,108,995 are categorized as anomalous readings. In terms of its structure, the ToN_IoT dataset is organized into 10 distinct classes. This comprehensive and diverse dataset is crucial for developing and testing advanced cybersecurity models in the context of IoT and IIoT environments.

5.1.3. Data Preparation and Curation

The act of preparing datasets is a pivotal stage in the research process, particularly when dealing with artificial intelligence (AI) systems. Properly curated data are essential for accelerating model training and enhancing the accuracy and efficiency of the resultant model. The data preparation process encompasses various intricate operations, such as eliminating non-essential attributes, converting categorical values into numerical formats, and applying imputation strategies for missing or incomplete data points.
To ensure the completeness and integrity of the dataset, we employed the Mean Imputation method to address missing or incomplete data. In this approach, missing values in numerical features were replaced with the mean value of the respective feature. This method was chosen for its simplicity and effectiveness in maintaining the central tendency of the data without introducing significant bias.
For each feature with missing values, the mean was calculated using the available data points, and these mean values were then used to fill in the gaps where data was missing. By doing so, we ensured that the dataset remained robust and suitable for further analysis, allowing our models to train on a complete dataset without the potential distortions that might arise from more complex imputation methods.
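A minimal pandas sketch of this imputation step is given below; column names and types are dataset-specific, and the snippet is illustrative rather than the exact preprocessing script used in the study.

```python
import pandas as pd

def mean_impute(df: pd.DataFrame) -> pd.DataFrame:
    """Fill missing values in every numeric column with that column's mean."""
    out = df.copy()
    numeric_cols = out.select_dtypes(include="number").columns
    out[numeric_cols] = out[numeric_cols].fillna(out[numeric_cols].mean())
    return out
```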
In our study, we meticulously employed a bifurcated procedure for data preparation, which is broadly classified into two methodologies: pre-processing and normalization. Pre-processing involves steps to make the raw data more suitable for model building, while normalization focuses on scaling the data to a specific range to ensure consistent data representation. This approach ensures the data are optimally prepared for effective and efficient analysis within AI systems.

5.1.4. Pre-Processing

The nuances of pre-processing in our research involve strategic transformations of data types, ensuring seamless integration with the architecture at hand, especially the neural network’s input layer. One of the key challenges we faced was the presence of categorical attributes, which required conversion into a numerical format for effective processing. The technique employed for this transformation was label encoding. Label encoding is particularly advantageous as it translates categorical data into a format more amenable to Machine Learning algorithms.
Furthermore, it is worth noting that certain attributes commonly found in many datasets, such as time, date, and timestamps, were determined to be inconsequential for the specific objective of attack detection in our study. Therefore, these attributes were judiciously excised from the dataset to streamline the data and focus on the most relevant features. This selective approach in pre-processing ensures that the data fed into the neural network are optimally structured for the purpose of attack detection.
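The sketch below illustrates these two pre-processing steps in pandas; the attribute names listed for removal (time, date, timestamp) are placeholders, since the exact column names differ between DS2OS, UNSW-NB15, and ToN_IoT.

```python
import pandas as pd

def preprocess(df: pd.DataFrame, drop_cols=("time", "date", "timestamp")) -> pd.DataFrame:
    """Drop attributes deemed irrelevant to attack detection and label-encode categorical columns."""
    out = df.drop(columns=[c for c in drop_cols if c in df.columns])
    for col in out.select_dtypes(include=["object", "category"]).columns:
        out[col] = out[col].astype("category").cat.codes  # one integer code per category
    return out
```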

5.1.5. Normalization

In data analytics, datasets often harbor attributes with vastly differing scales and magnitudes. When these discrepancies are left unchecked, they can inadvertently skew the model, leading to biased outcomes and potentially compromising the integrity of the results. The normalization process emerges as a crucial remedy to this predicament. It ensures a uniform scaling of the dataset attributes, mapping them onto a consistent range between 0.0 and 1.0. This transformation is accomplished without distorting the inherent relationships and patterns within the data.
The normalization technique of min–max scaling was judiciously adopted. Min–max scaling is a method that rescales the range of features to align with the smallest and largest values for each feature. This technique effectively maintains data consistency and ensures that all features contribute equitably during the modeling process. By implementing this approach, we mitigated the risk of skewed models and biased outcomes, enhancing the reliability and integrity of our research findings.
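A compact sketch of min–max scaling onto the [0, 1] range is shown below; it is a generic illustration of the normalization described above, not the study's exact script.

```python
import pandas as pd

def min_max_scale(df: pd.DataFrame) -> pd.DataFrame:
    """Rescale every numeric feature to the range [0, 1]."""
    out = df.copy()
    numeric_cols = out.select_dtypes(include="number").columns
    col_min = out[numeric_cols].min()
    col_range = (out[numeric_cols].max() - col_min).replace(0, 1)  # guard against constant columns
    out[numeric_cols] = (out[numeric_cols] - col_min) / col_range
    return out
```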

5.2. Hyperparameter Tuning and Configuration

Hyperparameters play an instrumental role in delineating the architecture of a neural network and presiding over its training dynamics. In our study, while the foundational architecture of the Deep Recursive Artificial Neural Network remains invariant, the task of ascertaining the optimal values for the hyperparameters is entrusted to the APO-JADE algorithm. This approach is intended to fine-tune the network for enhanced attack detection precision.
The hyperparameters under consideration include the learning rate, number of epochs, momentum, batch size, and dropout rate. We provide a comprehensive elucidation of these parameters, followed by their optimal values as determined by the hybrid APO-JADE algorithm across the different datasets. The optimization of these hyperparameters is pivotal for the effectiveness of the neural network in accurately detecting attacks, ensuring that the model is not only robust but also sensitive to the nuances of the data it processes.
The optimization of these hyperparameters is achieved using the APO-JADE algorithm. This method combines the adaptive behavior of Artificial Protozoa with the efficiency of the JADE algorithm to iteratively search for the optimal combination of hyperparameters. Each candidate solution within the population adjusts its position in the search space based on its experience and the collective experiences of neighboring candidates, emulating the dynamic adaptability of protozoa.
Given the distinct characteristics and variability of the datasets employed—DS2OS, UNSW-NB15, and ToN_IoT—the optimal set of hyperparameters often varies across different problems. The APO-JADE algorithm is designed to dynamically explore the hyperparameter space, adapting to the unique features and demands of each dataset. For instance, the DS2OS dataset, with its focus on Industrial Internet of Things (IIoT) security, requires different hyperparameter settings compared to the UNSW-NB15 dataset, which is centered on traditional cybersecurity threats, or the ToN_IoT dataset, which is tailored for IoT/IIoT environments.
The APO-JADE algorithm is executed separately for each dataset, with its parameters (such as the population size and the adaptive control parameters governing mutation, crossover, and the exploration-to-exploitation transition) fine-tuned to the specific characteristics of the dataset. This process results in hyperparameters that are not only tailored to the particularities of each dataset but are also optimized to enhance the neural network’s performance in detecting cybersecurity threats across diverse contexts.
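To make the tuning loop concrete, the sketch below shows one way a candidate position can be decoded into the five hyperparameters and scored. The bounds, the decode/fitness names, and the train_and_validate callable are illustrative assumptions; the APO-JADE search loop itself is not reproduced here.

```python
import numpy as np

# Illustrative search ranges; the study's exact bounds are not listed in the paper.
BOUNDS = {
    "learning_rate": (1e-4, 1e-1),
    "epochs":        (10, 200),
    "momentum":      (0.5, 0.99),
    "batch_size":    (16, 512),
    "dropout":       (0.0, 0.5),
}

def decode(position: np.ndarray) -> dict:
    """Map a candidate vector in [0, 1]^5 onto concrete hyperparameter values."""
    hyper = {}
    for x, (name, (lo, hi)) in zip(position, BOUNDS.items()):
        value = lo + float(np.clip(x, 0.0, 1.0)) * (hi - lo)
        hyper[name] = int(round(value)) if name in ("epochs", "batch_size") else value
    return hyper

def fitness(position: np.ndarray, train_and_validate) -> float:
    """Objective minimized by the optimizer: 1 - validation accuracy of the detector."""
    return 1.0 - train_and_validate(**decode(position))
```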

5.3. Learning Rate

Serving as a linchpin in Deep Learning (DL) algorithms, the learning rate is a critical hyperparameter that dictates the pace at which a model adapts during the training phase. The choice of the learning rate involves a trade-off: a diminutive learning rate might lead to more refined learning, yet it may concurrently prolong the training duration. Conversely, an elevated learning rate might expedite the learning process but could result in potentially large prediction errors. Consequently, discerning the optimal learning rate is one of the cardinal challenges in the design of DL models.
This challenge is particularly pronounced in the context of DL, where models are often complex and sensitive to the rate of learning. The learning rate impacts the convergence of the training process, with implications for both the accuracy and efficiency of the model. Thus, carefully calibrating the learning rate is essential for achieving a balance between rapid convergence and the accuracy of the learned model.

5.4. Number of Epochs

The ‘number of epochs’ hyperparameter is representative of the iterations in neural network training, demarcating the frequency with which the entire dataset is parsed by the learning algorithm. This hyperparameter is critical as it determines the number of times the weights and biases within the neural network architecture undergo updates. An appropriate selection of the number of epochs ensures that the model converges to an optimal solution without succumbing to overfitting or underfitting the data.
The optimal number of epochs is vital for the efficacy of the training process. Too few epochs might result in an undertrained model that fails to capture the complexity of the data, whereas too many epochs can lead to overfitting, where the model becomes overly tailored to the training data and performs poorly on unseen data. Thus, finding the right balance in the number of epochs is essential for developing a robust and generalizable neural network model.

5.5. Momentum

The momentum hyperparameter acts as a guiding force in neural network training, amalgamating information from preceding iterations to shape the trajectory of subsequent steps in the learning process. This strategic incorporation of ‘historical knowledge’ serves to accelerate convergence and introduce stability into the model. Specifically, momentum helps to ameliorate the erratic oscillations that can plague weight updates, thereby enabling a smoother transition toward the global minimum of the loss function.
The efficacy of the momentum hyperparameter lies in its ability to navigate the parameter space more effectively. By considering the gradients of past iterations, momentum prevents the model from getting stuck in local minima and mitigates the risk of erratic updates. Consequently, it plays a pivotal role in enhancing the model’s convergence rate and improving its overall performance.
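For reference, a common formulation of the momentum update is shown below, where θ denotes the network weights, η the learning rate, μ the momentum coefficient, and L the loss function; the exact variant used by the underlying network is not specified in this paper.
v_{t+1} = \mu\, v_t - \eta\, \nabla_{\theta} L(\theta_t), \qquad \theta_{t+1} = \theta_t + v_{t+1}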

5.6. Batch Size

The batch size is a pivotal hyperparameter in the realm of deep learning. It determines the number of training samples to be processed before the model’s intrinsic parameters, such as weights, are updated. The choice of batch size significantly influences both the computational efficiency of the training process and the granularity of the model’s weight updates.
A smaller batch size tends to lead to more frequent updates, offering a more refined and granular learning curve. This can be beneficial for capturing subtle patterns in the data but may increase computational demands. On the other hand, a larger batch size typically provides a more generalized update at each step and can potentially lead to faster convergence in terms of epochs. However, it may overlook finer nuances in the data and requires more memory.
Thus, selecting an appropriate batch size is crucial, as it strikes a balance between the accuracy of the learning process and computational efficiency. This decision is often guided by the specific characteristics of the dataset and the computational resources available.

5.7. Dropout

Dropout is a prominent regularization technique employed during the training phase of neural networks. It functions by stochastically ‘turning off’ a fraction of neurons within the network. This sporadic disabling of neurons is designed to compel the model to develop robust and diversified internal representations, reducing its reliance on any single neuron or specific feature set.
The primary function of dropout is to serve as a preventive measure against overfitting. By temporarily removing neurons during training, dropout ensures that the neural network does not become overly specialized to the training data. This approach enhances the model’s ability to generalize, thus improving its performance on previously unseen data. Consequently, dropout is an essential technique for maintaining the versatility and general applicability of neural network models.
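The snippet below sketches the standard "inverted dropout" formulation at training time; it is a generic illustration of the technique, not the exact implementation used in the study.

```python
import numpy as np

def dropout(activations: np.ndarray, rate: float, training: bool,
            rng: np.random.Generator = np.random.default_rng()) -> np.ndarray:
    """Inverted dropout: zero a fraction `rate` of units during training and rescale the rest."""
    if not training or rate <= 0.0:
        return activations                      # at inference time no units are dropped
    keep_prob = 1.0 - rate
    mask = rng.random(activations.shape) < keep_prob
    return activations * mask / keep_prob       # rescaling keeps the expected activation unchanged
```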

5.8. Performance Assessment Metrics

To discern the robustness and adaptability of our proposed design, we employed a repertoire of performance metrics. These metrics are essential in gauging the congruence between the model’s predicted outcomes and the actual ground truths. Central to our evaluation methodology are the constructs of True Positives (TPs), False Positives (FPs), True Negatives (TNs), and False Negatives (FNs).
  • True Positives (TPs): Quantify instances where the model accurately identifies actual intrusions.
  • False Positives (FPs): Correspond to instances where the model erroneously labels normal activities as intrusions.
  • True Negatives (TNs): Represent instances where the model correctly identifies benign behaviors.
  • False Negatives (FNs): Occur when the model fails to detect actual intrusive activities.
Building upon these foundational metrics, subsequent measures such as accuracy, precision, recall, and the F1 score are derived to provide a comprehensive snapshot of the model’s performance. These metrics collectively offer a holistic view of the model’s effectiveness in correctly classifying and identifying various activities, illuminating its strengths and areas for improvement in the context of intrusion detection.

5.8.1. Accuracy

As a seminal metric in model evaluation, accuracy encapsulates the proportion of instances where the model’s predictions align with actual events. This includes correctly identifying both malicious intrusions and legitimate actions. Mathematically, accuracy is evaluated by the formula in Equation (21):
Accuracy = \frac{TP + TN}{TP + TN + FP + FN}        (21)
In this formula, TP represents True Positive, TN represents True Negative, FP represents False Positive, and FN represents False Negative. The sum of TPs and TNs is divided by the total number of predictions (the sum of TPs, TNs, FPs, and FNs), yielding the accuracy rate. This rate effectively measures the overall correctness of the model in classifying and predicting both intrusion and non-intrusion events.

5.8.2. Precision

Precision is a crucial performance metric that quantifies a model’s ability to correctly identify anomalous observations. It is calculated as the fraction of True Positives (TPs), which are correctly classified anomalies, relative to the sum of True Positives and False Positives (FPs), where FPs denote incorrectly classified normal observations. Mathematically, precision can be expressed as shown in Equation (22):
Precision = \frac{TP}{TP + FP}        (22)
This metric is instrumental in gauging the trustworthiness of positive identifications made by the model. A higher precision value indicates that the model is more reliable in correctly identifying anomalous events, minimizing the likelihood of false alarms or misclassifications.

5.8.3. Recall (Sensitivity)

Recall, often referred to as sensitivity, is a crucial metric that offers insights into a model’s ability to flag anomalies. It calculates the proportion of True Positives (TPs), which are correctly identified anomalies, to the sum of True Positives and False Negatives (FNs), where FNs represent actual anomalies that the model failed to detect. The formula for recall is presented in Equation (23):
Recall = \frac{TP}{TP + FN}        (23)
Recall is an invaluable metric in scenarios where failing to detect an anomaly could result in severe consequences. It emphasizes the model’s capability to cover and correctly identify actual anomalies, serving as a critical measure of the model’s effectiveness in sensitive applications.

5.8.4. F1 Score

The F1 score is a comprehensive metric that harmoniously merges both precision and recall to produce a singular measure, balancing the trade-off between these two metrics. It is particularly relevant in scenarios characterized by an uneven class distribution. The F1 score is computed as the harmonic mean of precision and recall, offering a more holistic evaluation of the model’s performance, especially when both False Positives and False Negatives carry significant implications. The computation of the F1 score is given in Equation (24):
F1\ Score = \frac{2 \times Precision \times Recall}{Precision + Recall}        (24)
The F1 score achieves its best value at 1, indicating perfect precision and recall, and its worst at 0. This metric is particularly useful for assessing the balance between precision and recall, providing a more nuanced understanding of the model’s overall predictive accuracy in the context of anomaly detection.
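The four measures in Equations (21)–(24) follow directly from the confusion counts. The helper below computes them; the guards against empty denominators are an implementation convenience, not part of the equations.

```python
def detection_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Accuracy, precision, recall, and F1 score from confusion-matrix counts (Equations (21)-(24))."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# Example with hypothetical counts: detection_metrics(tp=950, fp=25, tn=9800, fn=45)
```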

5.9. Experimental Procedure and Resultant Insights

This segment is dedicated to elucidating the experimental design underpinning our study alongside offering a cursory exploration of the emergent findings associated with the proposed method. The experimental setup is meticulously crafted to evaluate the efficacy and robustness of the proposed model, encompassing various scenarios and datasets to ensure a comprehensive assessment.
The subsequent sections will delve into the specifics of the experimental procedures, including the configuration of the model, the datasets employed, and the metrics used for evaluation. Additionally, we will present an initial overview of the findings that have emerged from the application of our proposed method, highlighting its potential implications and contributions to the field.

5.9.1. Implementation Methodology

The devised model was instantiated on a hardware setup comprising a Lenovo machine with a Core i7 processor, supplemented by 32 GB of DDR4 RAM. This hardware configuration provided the necessary computational power and memory capacity to efficiently run the model and process the datasets.
For the software scaffolding of the proposed algorithm, Matlab was chosen as the primary development environment. The choice of Matlab was due to its robust numerical computing capabilities and extensive libraries, making it well suited for implementing complex algorithms and data processing tasks. The entire development and testing of the algorithm were conducted on a Windows 11 Professional operating environment, ensuring a stable and powerful platform for the execution of the model.

5.9.2. Delving into the Experimental Results

The proficiency of the proposed method has been exhaustively scrutinized over three distinct datasets in both binary and multiclass contexts. Our experimental approach was anchored in the widely used 5-fold cross-validation technique, which offers a robust and impartial platform for evaluating a wide array of ML/DL algorithms; a generic sketch of this protocol is provided at the end of this subsection.
  • Binary context analysis;
  • Multiclass context analysis.
Subsequent sections will dissect the insights culled from each dataset, providing a detailed exploration of the results. This comprehensive analysis will illuminate the strengths and areas for improvement of our proposed method in various classification scenarios.
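As referenced above, the sketch below outlines a generic 5-fold evaluation loop using scikit-learn. Whether the folds were stratified, and how the detector itself is built, are details not fixed by the paper, so build_model is a user-supplied callable and the snippet is an illustrative protocol rather than the authors' implementation.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def five_fold_accuracies(build_model, X: np.ndarray, y: np.ndarray, seed: int = 42) -> np.ndarray:
    """Train a fresh detector on each of 5 folds and return the per-fold test accuracy."""
    splitter = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    scores = []
    for train_idx, test_idx in splitter.split(X, y):
        model = build_model()                      # new, untrained model for every fold
        model.fit(X[train_idx], y[train_idx])
        scores.append(model.score(X[test_idx], y[test_idx]))
    return np.asarray(scores)
```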

5.9.3. Evaluative Insights for the DS2OS Dataset

For our analysis focusing on the DS2OS dataset, we implemented a 70:30 train–test split. The pivotal hyperparameters were meticulously calibrated using the proposed APO-JADE algorithm. Our intrusion detection model was trained on 10 salient features and iteratively refined over 100 epochs.
In the binary classification analysis, we employed a 5-fold cross-validation mechanism. Here, the training dataset was divided into five folds based on sample volume, with a detailed breakdown of results corresponding to each fold. Notably, the APO-JADE paradigm reached its peak attack detection accuracy during the fifth fold, which had the highest sample volume, achieving accuracies of 97.42% in training and 97.51% in testing. In contrast, the first fold, with fewer samples, recorded the lowest accuracies of 95.21% (training) and 94.22% (testing). The intermediate folds marked accuracies of 96.47%, 96.42%, and 97.33%. Additionally, the fifth fold demonstrated the superiority of APO-JADE through auxiliary metrics such as precision, recall, and the F1 score.
In multiclass classification, as shown in Table 5, the APO-JADE architecture excelled in distinguishing seven distinct attack classes alongside normal traffic. The model showed remarkable precision in identifying categories such as ‘Denial of Service (DoS)’, ‘Scan’, and ‘Wrong setup’, with accuracies of 96.54%, 95.24%, and 92.68%, respectively. Other attack vectors such as ‘Malicious operation’, ‘Spying’, ‘Malicious control’, and ‘Data type probing’ were identified with accuracies of 97.52%, 94.74%, 96.87%, and 93.81%, respectively. The ‘Normal’ class was accurately categorized with a 90.65% success rate, with a minor 1.46% being erroneously flagged as malicious.

6. Conclusions

In this paper, we have presented the Hybrid APO-JADE optimizer, a novel metaheuristic that integrates the strengths of JADE (adaptive differential evolution) and the Artificial Protozoa Optimizer (APO) to effectively tackle complex optimization problems. The proposed algorithm is designed to balance the crucial aspects of exploration and exploitation, enhancing its ability to find high-quality solutions efficiently. The initial phase of the algorithm utilizes JADE’s adaptive mechanisms to explore the search space comprehensively. By dynamically adjusting the control parameters and employing differential mutation and crossover operations, JADE prevents premature convergence and ensures a diverse set of candidate solutions. This global exploration phase is crucial for identifying promising regions in the search space. As the optimization progresses, the algorithm transitions to the APO mechanism, which focuses on intensifying the search around the best solutions identified by JADE. The use of Levy flights and adaptive change factors in the APO phase enhances local exploitation, allowing for thorough refinement of the solutions. This dynamic transition between JADE and APO, governed by a predefined iteration threshold, ensures that the algorithm effectively shifts from exploration to exploitation at the appropriate time. The APO-JADE algorithm was evaluated using benchmark functions from CEC2017, CEC2021, and CEC2022, demonstrating notable improvements in convergence rates and accuracy compared with state-of-the-art metaheuristics. Furthermore, the application of APO-JADE to real-world attack detection scenarios using the DS2OS, UNSW-NB15, and ToN_IoT datasets showcased its robust performance. The experimental results highlighted APO-JADE’s capability to effectively navigate complex optimization landscapes and achieve high-quality solutions.

Author Contributions

Conceptualization, A.k.A.H. and H.N.F.; Methodology, A.k.A.H. and H.N.F.; Software, H.N.F.; Validation, H.N.F.; Formal analysis, A.k.A.H.; Investigation, A.k.A.H.; Writing—original draft, A.k.A.H. and H.N.F.; Writing—review & editing, A.k.A.H. and H.N.F.; Visualization, A.k.A.H. and H.N.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Security Management Technology Group (SMT), grant number 20243.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

We thank Samir M. Abu Tahoun, Security Management Technology Group (SMT) (http://www.smtgroup.org/) for the financial support of our research project.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Fakhouri, H.N.; Awaysheh, F.M.; Alawadi, S.; Alkhalaileh, M.; Hamad, F. Four vector intelligent metaheuristic for data optimization. Computing 2024, 106, 2321–2359. [Google Scholar] [CrossRef]
  2. Abualigah, L.; Elaziz, M.A.; Khasawneh, A.M.; Alshinwan, M.; Ibrahim, R.A.; Al-Qaness, M.A.; Mirjalili, S.; Sumari, P.; Gandomi, A.H. Meta-heuristic optimization algorithms for solving real-world mechanical engineering design problems: A comprehensive survey, applications, comparative analysis, and results. Neural Comput. Appl. 2022, 34, 4081–4110. [Google Scholar] [CrossRef]
  3. Fakhouri, H.N.; Alawadi, S.; Awaysheh, F.M.; Hamad, F. Novel hybrid success history intelligent optimizer with gaussian transformation: Application in CNN hyperparameter tuning. Clust. Comput. 2024, 27, 3717–3739. [Google Scholar] [CrossRef]
  4. Al Hwaitat, A.K.; Fakhouri, H.N. The OX Optimizer: A Novel Optimization Algorithm and Its Application in Enhancing Support Vector Machine Performance for Attack Detection. Symmetry 2024, 16, 966. [Google Scholar] [CrossRef]
  5. Fahimnia, B.; Davarzani, H.; Eshragh, A. Planning of complex supply chains: A performance comparison of three meta-heuristic algorithms. Comput. Oper. Res. 2018, 89, 241–252. [Google Scholar] [CrossRef]
  6. Fakhouri, H.N.; Alawadi, S.; Awaysheh, F.M.; Hamad, F. Novel Hybrid Crayfish Optimization Algorithm and Self-Adaptive Differential Evolution for Solving Complex Optimization Problems. Symmetry 2024, 16, 927. [Google Scholar] [CrossRef]
  7. Parouha, R.P.; Verma, P. Design and applications of an advanced hybrid meta-heuristic algorithm for optimization problems. Artif. Intell. Rev. 2021, 54, 5931–6010. [Google Scholar]
  8. Hamad, F.; Fakhouri, H.N.; Alzghoul, F.; Zraqou, J. Development and Design of Object Avoider Robot and Object, Path Follower Robot Based on Artificial Intelligence. Arab. J. Sci. Eng. 2024, 1–22. [Google Scholar] [CrossRef]
  9. Ryalat, M.H.; Fakhouri, H.N.; Zraqou, J.; Hamad, F.; Alzboun, M.S. Enhanced multi-verse optimizer (TMVO) and applying it in test data generation for path testing. Int. J. Adv. Comput. Sci. Appl. 2023, 14, 662–673. [Google Scholar]
  10. Adam, S.P.; Alexandropoulos, S.A.N.; Pardalos, P.M.; Vrahatis, M.N. No free lunch theorem: A review. In Approximation and Optimization: Algorithms, Complexity and Applications; Springer: Cham, Switzerland, 2019; pp. 57–82. [Google Scholar]
  11. Wang, X.; Snášel, V.; Mirjalili, S.; Pan, J.S.; Kong, L.; Shehadeh, H.A. Artificial Protozoa Optimizer (APO): A novel bio-inspired metaheuristic algorithm for engineering optimization. Knowl.-Based Syst. 2024, 295, 111737. [Google Scholar] [CrossRef]
  12. Zhang, J.; Sanderson, A.C. JADE: Adaptive differential evolution with optional external archive. IEEE Trans. Evol. Comput. 2009, 13, 945–958. [Google Scholar] [CrossRef]
  13. Mirjalili, S. The ant lion optimizer. Adv. Eng. Softw. 2015, 83, 80–98. [Google Scholar] [CrossRef]
  14. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar]
  15. Zhao, W.; Wang, L.; Mirjalili, S. Artificial hummingbird algorithm: A new bio-inspired optimizer with its engineering applications. Comput. Methods Appl. Mech. Eng. 2022, 388, 114194. [Google Scholar] [CrossRef]
  16. Chopra, N.; Ansari, M.M. Golden jackal optimization: A novel nature-inspired optimizer for engineering applications. Expert Syst. Appl. 2022, 198, 116924. [Google Scholar] [CrossRef]
  17. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar]
  18. Hashim, F.A.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W.; Mirjalili, S. Henry gas solubility optimization: A novel physics-based algorithm. Future Gener. Comput. Syst. 2019, 101, 646–667. [Google Scholar] [CrossRef]
  19. Peraza-Vázquez, H.; Peña-Delgado, A.; Merino-Treviño, M.; Morales-Cepeda, A.B.; Sinha, N. A novel metaheuristic inspired by horned lizard defense tactics. Artif. Intell. Rev. 2024, 57, 59. [Google Scholar] [CrossRef]
  20. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  21. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S.; Faris, H. MTDE: An effective multi-trial vector-based differential evolution algorithm and its applications for engineering design problems. Appl. Soft Comput. 2020, 97, 106761. [Google Scholar] [CrossRef]
  22. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-Verse Optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513. [Google Scholar] [CrossRef]
  23. Abualigah, L.; Elaziz, M.A.; Sumari, P.; Geem, Z.W.; Gandomi, A.H. Reptile search algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2022, 191, 116158. [Google Scholar] [CrossRef]
  24. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar]
  25. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  26. Braik, M.; Hammouri, A.; Atwan, J.; Al-Betar, M.A.; Awadallah, M.A. White Shark Optimizer: A novel bio-inspired meta-heuristic algorithm for global optimization problems. Knowl.-Based Syst. 2022, 243, 108457. [Google Scholar]
  27. Luo, W.; Lin, X.; Li, C.; Yang, S.; Shi, Y. Benchmark functions for CEC 2022 competition on seeking multiple optima in dynamic environments. arXiv 2022, arXiv:2201.00523. [Google Scholar]
  28. Mohamed, A.W.; Sallam, K.M.; Agrawal, P.; Hadi, A.A.; Mohamed, A.K. Evaluating the performance of meta-heuristic algorithms on CEC 2021 benchmark problems. Neural Comput. Appl. 2023, 35, 1493–1517. [Google Scholar] [CrossRef]
  29. Stanovov, V.; Akhmedova, S.; Semenkin, E. LSHADE algorithm with rank-based selective pressure strategy for solving CEC 2017 benchmark problems. In Proceedings of the 2018 IEEE Congress on Evolutionary Computation (CEC), Rio de Janeiro, Brazil, 8–13 July 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–8. [Google Scholar]
  30. Ahmad, J.; Shah, S.A.; Latif, S.; Ahmed, F.; Zou, Z.; Pitropakis, N. DRaNN_PSO: A deep random neural network with particle swarm optimization for intrusion detection in the industrial internet of things. J. King Saud-Univ.-Comput. Inf. Sci. 2022, 34, 8112–8121. [Google Scholar] [CrossRef]
  31. Al Hwaitat, A.K.; Almaiah, M.A.; Almomani, O.; Al-Zahrani, M.; Al-Sayed, R.M.; Asaifi, R.M.; Adhim, K.K.; Althunibat, A.; Alsaaidah, A. Improved security particle swarm optimization (PSO) algorithm to detect radio jamming attacks in mobile networks. Int. J. Adv. Comput. Sci. Appl. 2020, 11, 614–625. [Google Scholar] [CrossRef]
  32. Benaddi, H.; Jouhari, M.; Ibrahimi, K.; Ben Othman, J.; Amhoud, E.M. Anomaly Detection in Industrial IoT Using Distributional Reinforcement Learning and Generative Adversarial Networks. Sensors 2022, 22, 8085. [Google Scholar] [CrossRef]
  33. Moustafa, N.; Slay, J. UNSW-NB15: A comprehensive datasets for network intrusion detection systems (UNSW-NB15 network data set). In Proceedings of the 2015 Military Communications and Information Systems Conference (MilCIS), Canberra, ACT, Australia, 10–12 November 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 1–6. [Google Scholar]
  34. Alsaedi, A.; Moustafa, N.; Tari, Z.; Mahmood, A.; Anwar, A. TON_IoT telemetry dataset: A new generation dataset of IoT and IIoT for data-driven intrusion detection systems. IEEE Access 2020, 8, 165130–165150. [Google Scholar] [CrossRef]
Figure 1. Objective space of CEC2022 benchmark functions (F1–F6).
Figure 2. APO-JADE convergence diagram on CEC2022 benchmark (F1–F6).
Figure 3. APO-JADE convergence diagram on CEC2022 benchmark (F7–F12).
Figure 4. APO-JADE search history diagram on CEC2022 benchmark (F1–F6).
Figure 5. APO-JADE search history diagram on CEC2022 benchmark (F7–F12).
Figure 6. APO-JADE average fitness diagram on CEC2022 (F1–F6).
Figure 7. APO-JADE average fitness diagram on CEC2022 (F7–F12).
Figure 8. APO-JADE box-plot diagram on CEC2022 benchmark (F1–F6).
Figure 9. APO-JADE box-plot diagram on CEC2022 benchmark (F7–F12).
Figure 10. APO-JADE heat map diagram on CEC2022 benchmark suite (F1–F6).
Figure 11. APO-JADE heat map diagram on CEC2022 benchmark (F7–F12).
Table 5. Multiclass classification APO-JADE results.
Category | Accuracy (%)
DoS | 96.54
Scan | 95.24
Wrong setup | 92.68
Malicious operation | 97.52
Spying | 94.74
Malicious control | 96.87
Data type probing | 93.81
Normal | 90.65