Article

Adaptive Cybersecurity Neural Networks: An Evolutionary Approach for Enhanced Attack Detection and Classification

by Ahmad K. Al Hwaitat 1,* and Hussam N. Fakhouri 2
1 Computer Science Department, King Abdullah II School of Information Technology, The University of Jordan, Amman 11942, Jordan
2 Data Science and Artificial Intelligence Department, Faculty of Information Technology, University of Petra, Amman 11196, Jordan
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(19), 9142; https://doi.org/10.3390/app14199142
Submission received: 29 August 2024 / Revised: 26 September 2024 / Accepted: 27 September 2024 / Published: 9 October 2024

Abstract:
The increasing sophistication and frequency of cyber threats necessitate the development of advanced techniques for detecting and mitigating attacks. This paper introduces a novel cybersecurity-focused Multi-Layer Perceptron (MLP) trainer that utilizes evolutionary computation methods, specifically tailored to improve the training process of neural networks in the cybersecurity domain. The proposed trainer dynamically optimizes the MLP’s weights and biases, enhancing its accuracy and robustness in defending against various attack vectors. To evaluate its effectiveness, the trainer was tested on five widely recognized security-related datasets: NSL-KDD, CICIDS2017, UNSW-NB15, Bot-IoT, and CSE-CIC-IDS2018. Its performance was compared with several state-of-the-art optimization algorithms, including Chimp, CPO, ROA, WOA, MFO, WSO, SHIO, ZOA, DOA, and HHO. The results demonstrated that the proposed trainer consistently outperformed the other algorithms, achieving the lowest Mean Squared Error (MSE) and highest classification accuracy across all datasets. Notably, the trainer reached a classification rate of 99.5% on the Bot-IoT dataset and 98.8% on the CSE-CIC-IDS2018 dataset, underscoring its effectiveness in detecting and classifying diverse cyber threats.

1. Introduction

As the digital landscape evolves, the complexity and frequency of cybersecurity threats increase, presenting substantial challenges for conventional defense mechanisms [1]. In this rapidly changing environment, the variety of threats expands swiftly, creating an ongoing arms race between attackers and defenders. Modern cybersecurity systems must address a wide range of sophisticated threats, including advanced phishing schemes, ransomware attacks, and stealthy advanced persistent threats that seek to remain undetected within networks for extended periods [2]. These circumstances necessitate defense systems that are not only reactive but also adaptive and resilient, with the ability to proactively detect and counteract threats across multiple attack vectors.
In response to these challenges, Multi-Layer Perceptron (MLP) neural networks have become a foundational technology in the field of cybersecurity [3]. Renowned for their capacity to learn and model the complex, nonlinear relationships present in large-scale data, such as network traffic, MLPs signify a transformative shift in the detection and response to cyber threats. As a type of feedforward artificial neural network, MLPs are particularly effective in pattern recognition tasks, which are essential for identifying anomalous behavior indicative of cyber threats. By analyzing extensive datasets, MLPs can reveal subtle patterns that are often missed by traditional rule-based systems, providing a more adaptive and robust approach to threat detection [4].
MLPs consist of multiple layers of neurons, with each layer designed to capture different levels of data abstraction. This structure allows them to detect not only simple patterns but also complex interactions within the data, making them particularly well suited for the intricate task of threat detection in cybersecurity [5]. However, the effectiveness of MLPs is highly dependent on the optimal configuration of their architecture, particularly the tuning of their weights and biases. These parameters are systematically adjusted during training to minimize prediction errors. Achieving this tuning is a non-trivial task, as it involves navigating a high-dimensional parameter space often plagued by local minima, which can hinder conventional gradient-based optimization methods [6].
Moreover, training MLPs necessitates advanced optimization techniques capable of effectively balancing the exploration of the parameter space with the exploitation of promising regions [7]. Traditional methods, such as backpropagation and its variants, frequently encounter difficulties, especially in complex scenarios involving noisy, imbalanced, or incomplete datasets, which are common in cybersecurity applications [8,9]. These shortcomings have driven the development of more sophisticated optimization algorithms that offer the robustness and flexibility required to train MLPs efficiently under such challenging conditions.
Metaheuristic algorithms and artificial intelligence (AI) have revolutionized the field of optimization, enabling the solution of complex problems that have traditionally presented significant difficulties for conventional methods [10]. These advanced techniques leverage natural processes and intelligent algorithms to navigate vast and intricate search spaces, efficiently identifying near-optimal solutions that may be unattainable through standard approaches. Inspired by a range of natural phenomena, including biological evolution, social behaviors of organisms, and physical processes, metaheuristics provide a flexible and dynamic framework for addressing complex optimization problems [11]. This is particularly relevant in the field of cybersecurity, where the adaptive and robust characteristics of metaheuristics can substantially improve the performance of systems such as Multi-Layer Perceptrons (MLPs) in detecting and responding to cyber threats.
AI techniques, particularly machine learning (ML) and deep learning (DL), have significantly enhanced the capabilities of metaheuristic algorithms [12]. ML algorithms analyze large datasets to uncover patterns and relationships, which can then be leveraged to optimize the search strategies employed by metaheuristics. For example, ML models can predict promising regions within the search space, thereby directing metaheuristic algorithms toward more optimal areas [13]. Deep learning, with its capacity to manage complex, high-dimensional data, offers even greater insights, enabling the creation of advanced and adaptive optimization techniques [14]. By integrating AI, metaheuristics become more intelligent and adaptable, continuously refining their search strategies over time, making them particularly suited for the sophisticated requirements of cybersecurity network optimization.
This paper presents a novel optimization algorithm, the Cybersecurity Optimizer (CYO), specifically developed to address these challenges. The CYO integrates strategies from evolutionary computation and adaptive learning to enhance the parameter-tuning process of Multi-Layer Perceptrons (MLPs). By employing this optimizer, MLPs can be trained more efficiently, resulting in improved accuracy and robustness in detecting and mitigating network threats, thereby substantially augmenting the effectiveness of cybersecurity systems.

Motivation and Contribution

The rapidly evolving landscape of cyber threats demands equally adaptive and sophisticated defense mechanisms. While traditional optimization methods have proven effective in more structured environments, they often struggle with the complex, high-dimensional, and non-linear optimization challenges inherent in training neural networks for cybersecurity applications. Enhancing the training efficiency of Multi-Layer Perceptrons (MLPs) for attack detection has become increasingly crucial, as these neural networks are pivotal in identifying subtle patterns and anomalies indicative of potential security breaches. Given the shortcomings of conventional optimization techniques, there is a growing need for innovative approaches capable of navigating intricate optimization landscapes more effectively, delivering robust solutions that are both highly accurate and computationally efficient.
This paper introduces a novel Cybersecurity Optimizer (CYO), specifically designed to address the challenges associated with training Multi-Layer Perceptrons (MLPs) for cybersecurity threat detection. The key contributions of this work are outlined as follows:
  • Introduction of a Novel Optimization Algorithm: This paper presents the Cybersecurity Optimizer (CYO), an innovative algorithm that integrates principles from evolutionary computation and adapts them with modern AI techniques. This combination results in a powerful tool designed to converge rapidly to optimal solutions, thereby reducing training time and enhancing the overall performance of MLPs.
  • Enhanced MLP Training: By applying the CYO to the MLP training process, we demonstrate significant improvements in the network’s ability to detect and classify cybersecurity threats with greater accuracy compared to MLPs trained with traditional optimization methods.
  • Benchmarking Against Standard Optimizers: The CYO is comprehensively compared with other well-established optimization algorithms across a range of standard benchmark functions. This comparison highlights the superior performance of the CYO in terms of both convergence speed and solution quality.
  • Real-World Application and Validation: The practical effectiveness of the CYO is validated in a real-world cybersecurity scenario, where it is used to train an MLP for network intrusion detection. The results confirm the optimizer’s capabilities in a practical context, demonstrating its potential for broader applications in cybersecurity technologies.
The remainder of this paper is structured as follows: Section 2 provides a literature review, covering an overview of Multi-Layer Perceptron (MLP) neural networks, their training methodologies, and an introduction to cybersecurity challenges. Section 3 describes the proposed methodology, focusing on the CYO and its mathematical model, which encompasses stages such as initialization, adaptive learning, mutation, crossover, selection, and archiving, as well as exploration and exploitation mechanisms. Section 4 elaborates on the application of the CYO for training MLPs in attack detection, offering a detailed mathematical model of the cybersecurity-based MLP trainer, including problem representation, the objective function, and the various phases of the algorithm. Section 5 provides an experimental analysis of the CYO, featuring the CEC2022 benchmark functions (F1–F12) and a comparison of the results with other optimization algorithms; this section also presents an analysis of convergence curves, search history curves, trajectory curves, fitness curves, sensitivity analysis, and visualizations using heatmaps, box plots, and histograms. Section 6 investigates the application of the CYO for MLP training in attack detection, presenting experimental results on datasets such as NSL-KDD, CICIDS2017, UNSW-NB15, Bot-IoT, and CSE-CIC-IDS2018; it also discusses real-world applications and validation, the anomaly detection algorithm and architecture, justification for using a supervised approach, and considerations for long-term training. Finally, Section 7 concludes by summarizing the key findings and suggesting future research directions.

2. Literature Review

2.1. Optimization Algorithms

Optimization algorithms can be broadly categorized based on their underlying principles and methodologies. These categories include gradient-based methods, nature-inspired algorithms, swarm intelligence techniques, evolutionary algorithms, and others. Each category encompasses a variety of algorithms that have been developed over time, contributing to the expansive and continuously evolving field of optimization strategies.

Evolutionary Algorithms

Evolutionary programming, a subclass of evolutionary algorithms, is inspired by the processes of natural selection and genetics. These algorithms are designed to solve complex problems by iteratively refining a population of candidate solutions. The core concept involves simulating evolutionary processes such as mutation, selection, and reproduction to converge toward optimal or near-optimal solutions.
The concept of evolutionary programming was first introduced by Fogel et al. in 1966 [15], which laid the groundwork for a range of evolutionary techniques that emerged in later years. A notable advancement in this domain was the development of Evolution Strategies by Rechenberg in 1973 [16], which emphasized the adaptation of strategy parameters to enhance the optimization process.
Another pivotal contribution to the field was the introduction of genetic algorithms by Holland in 1975 [17]. These algorithms utilize operators such as crossover, mutation, and selection to evolve solutions across generations. Additionally, the Differential Search Algorithm, proposed by Civicioglu in 2011 [18], is a significant algorithm that incorporates differential mutation as a key mechanism for improving solutions.
Civicioglu further expanded the range of evolutionary techniques by introducing the Backtracking Optimization Algorithm in 2013 [19]. In 2014, Salimi presented Stochastic Fractal Search [20], which incorporates the concept of fractals to guide the search process. More recently, in 2018, Dhivyaprabha et al. developed the Synergistic Fibroblast Optimization [21], drawing inspiration from the biological behavior of fibroblasts.
The Estimation of Distribution Algorithm, proposed by Mühlenbein and Paaß in 1996 [22], marked a significant shift toward probabilistic modeling in evolutionary computation. Similarly, Differential Evolution, introduced by Storn and Price in 1997 [23], is a highly regarded algorithm known for its simplicity and effectiveness in addressing complex optimization challenges.
Grammatical Evolution, developed by Ryan et al. in 1998 [24], applies genetic algorithm principles to the evolution of programs and expressions. Additionally, Ferreira’s Gene Expression Algorithm, introduced in 2001 [25], extends genetic algorithms to accommodate more advanced representations of candidate solutions.

2.2. Machine Learning in Cybersecurity

Sánchez-Zas et al. [26] proposed three unsupervised machine learning models based on clustering techniques for anomaly detection in streaming cybersecurity logs. The models were evaluated using WSSSE, Silhouette, and training time metrics, with K-Means identified as the optimal algorithm for real-time anomaly detection. Their approach is highly effective for heterogeneous data sources, showcasing significant accuracy compared to other methodologies in the literature.
In the context of the Internet of Things (IoT), Alrowais et al. [27] introduced an automated cybersecurity threat detection model utilizing the Mayfly optimization algorithm combined with the regularized extreme learning machine (MFO-RELM). Their model preprocesses IoT data, applies RELM for classification, and leverages MFO to optimize performance. The proposed method was validated using benchmark datasets, demonstrating superior detection and classification of cybersecurity threats in IoT environments.
Goyal et al. [28] presented discrete mathematical models for phishing attack detection, employing machine learning algorithms such as Logistic Regression, Support Vector Machines, and XGBoost. Their study highlights the importance of pre-processing in improving model performance, with XGBoost achieving the highest accuracy on a combined dataset. The use of confusion matrices and ROC curves provides insights into model performance across different datasets, emphasizing the role of continuous refinement in detecting phishing attacks.
Further advancements in the field include work by Rizwanullah et al. [29], who developed a metaheuristics-based approach combined with machine learning for intrusion detection in Unmanned Aerial Vehicles (UAVs). They proposed a feature selection method using quantum invasive weed optimization (QIWO-FS) and a weighted regularized extreme learning machine (WRELM) for classification. Their method was validated with benchmark datasets, demonstrating superior performance in identifying UAV intrusions compared to existing approaches.
Seyed et al. [30] introduced a modular design for an ML-based intrusion detection mechanism in IoT systems. Their framework combines supervised, unsupervised, and reinforcement learning methods to optimize attack detection while minimizing learning costs. The modular approach simplifies the deployment of intrusion detection engines, and their model demonstrated a classification accuracy of 93.66%.
Moreover, Alluhaibi [31] explored the application of quantum machine learning (QML) for advanced threat detection. By comparing classical machine learning (CML) models with QML, the study demonstrated the potential for QML to offer significant computational advantages, particularly in real-time threat detection, though the current limitations in quantum hardware still favor CML in practical applications.
Zhukabayeva et al. [32] proposed a traffic analysis and node categorization-aware machine learning framework for intrusion detection and prevention in wireless sensor networks (WSNs) in smart grids. Their approach integrates traffic analysis with Random Forest models, which outperform traditional models like Decision Trees and Logistic Regression, achieving high precision and recall scores. This work significantly contributes to enhancing the security of smart grid infrastructures, emphasizing the importance of securing critical infrastructure for reliable power distribution.
In another study, Jayanthi et al. [33] focused on credit card fraud detection in healthcare by employing new machine learning strategies. Their work introduced clustering and classifier-based models (CCDT, CCLR, and CCRF) to detect fraudulent activities in healthcare transactions. The models demonstrated high accuracy, precision, and sensitivity, with CCRF and CCLR yielding significant improvements over traditional methods, emphasizing the role of ML in mitigating fraud in financial transactions.
Similarly, Khadidos et al. [34] developed a Binary Hunter–Prey Optimization algorithm combined with a machine learning-based phishing attack detection model (BHPO-MLPAD) for the IoT environment. Their approach utilized feature selection and classification, where the Binary Hunter–Prey Optimization algorithm improved the phishing detection process, offering a robust method for identifying and mitigating phishing attacks in IoT devices.
Dutta et al. [35] introduced a novel approach for detecting and classifying fake news using a Chaotic Ant Swarm with Weighted Extreme Learning Machine (CAS-WELM). The method applies machine learning models to discriminate between fake and real news, optimizing the performance of the classifier through the CAS algorithm. The model demonstrated enhanced outcomes in classifying fake news, providing a valuable tool for addressing misinformation in cybersecurity.

2.3. Overview of Multi-Layer Perceptron Neural Networks

Multi-Layer Perceptron (MLP) neural networks are a prominent class of feedforward artificial neural networks commonly employed in machine learning and artificial intelligence [36]. The architecture of an MLP consists of multiple layers of nodes, also referred to as neurons, which are organized in a hierarchical structure. These layers typically include an input layer, one or more hidden layers, and an output layer. Each neuron in one layer is fully connected to every neuron in the subsequent layer, creating a densely connected network [37].

2.3.1. Architecture

The architecture of an MLP is characterized by the number of layers and the number of neurons within each layer. The input layer is responsible for receiving the data, with each neuron corresponding to a feature from the input dataset. The hidden layers handle the majority of the computational processing through a combination of weighted inputs and biases. The output layer generates the final prediction or classification outcome based on the calculations performed by the network [38].

2.3.2. Activation Functions

An essential aspect of MLPs is the activation function employed in the neurons, particularly within the hidden layers. Common activation functions, such as the sigmoid, hyperbolic tangent (tanh), and Rectified Linear Unit (ReLU), introduce non-linearities into the network, enabling MLPs to learn and model intricate patterns in the data. These functions play a critical role by allowing the network to learn non-linear decision boundaries, which are crucial for tasks such as classification and regression [39].
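For concreteness, the three activation functions named above can be written in a few lines of NumPy (an illustrative sketch only; the paper does not prescribe a specific implementation):
```python
import numpy as np

# The three common MLP activation functions; a minimal illustrative sketch.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))   # maps inputs to (0, 1)

def tanh(x):
    return np.tanh(x)                 # maps inputs to (-1, 1), zero-centered

def relu(x):
    return np.maximum(0.0, x)         # zero for negatives, identity otherwise
```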

2.3.3. Training

MLPs are generally trained using a supervised learning approach, where the network learns from a labeled dataset. The most widely used training method is backpropagation, often paired with an optimization technique such as gradient descent. During training, the network iteratively adjusts the weights and biases of the neurons to minimize the discrepancy between its predictions and the actual target values. The error is quantified by a loss function, such as Mean Squared Error (MSE) for regression tasks or cross-entropy loss for classification tasks. The optimizer then updates the weights to reduce this loss over successive training epochs [40].
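To make this procedure concrete, the following NumPy sketch performs one backpropagation and gradient-descent step for a one-hidden-layer MLP under MSE loss. The tanh activation, array shapes, and learning rate `lr` are illustrative assumptions; the evolutionary trainer proposed later in this paper replaces exactly this gradient-based update:
```python
import numpy as np

# One hypothetical training step: forward pass, MSE, backprop, weight update.
def train_step(X, D, W1, b1, W2, b2, lr=0.01):
    # forward pass
    H = np.tanh(X @ W1 + b1)          # hidden activations, shape (s, h)
    O = H @ W2 + b2                   # linear outputs, shape (s, m)
    E = O - D                         # prediction error vs. targets
    mse = np.mean(np.sum(E**2, axis=1))
    # backward pass (chain rule)
    s = X.shape[0]
    dO = 2.0 * E / s                  # gradient of MSE w.r.t. outputs
    dW2, db2 = H.T @ dO, dO.sum(axis=0)
    dH = (dO @ W2.T) * (1.0 - H**2)   # tanh derivative
    dW1, db1 = X.T @ dH, dH.sum(axis=0)
    # gradient-descent update of weights and biases
    return W1 - lr*dW1, b1 - lr*db1, W2 - lr*dW2, b2 - lr*db2, mse
```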
Owing to their versatility and effectiveness, MLPs are employed in a wide range of applications across multiple domains. In the field of cybersecurity, for example, MLPs are used to detect anomalous behavior in network traffic, identify phishing websites, and classify different types of malware [41]. Beyond cybersecurity, these networks are applied in areas such as image recognition, speech recognition, financial forecasting, and numerous other fields where pattern recognition and predictive modeling are crucial [42].

2.4. Overview of Cybersecurity

Cybersecurity is a critical field dedicated to safeguarding computers, networks, programs, and data from unauthorized access, alteration, or destruction [43]. It encompasses a wide range of strategies, technologies, and processes aimed at defending systems and data against cyber threats. These threats include malware, phishing, ransomware, and advanced persistent threats (APTs), all of which pose significant security risks in today’s digital environment. Malware refers to malicious software designed to harm or exploit systems, while phishing involves fraudulent attempts to acquire sensitive information through deceptive communications. Ransomware is a form of malware that encrypts data and demands payment for its release. APTs, on the other hand, are long-term, targeted attacks in which adversaries gain unauthorized access to networks and remain undetected for extended periods to steal information or disrupt operations. As digital infrastructures become increasingly essential to both business and personal activities, the importance of cybersecurity has grown, driven by the rising sophistication and frequency of cyberattacks [44].
One of the primary challenges in cybersecurity is the ever-changing nature of security threats. Emerging technologies frequently introduce new and complex vulnerabilities, making them attractive targets for cybercriminals. Furthermore, the increasing volume of data generated by both individuals and organizations broadens the range of potential attack vectors that must be secured. This requires adaptive, dynamic, and continuously updated security strategies to protect sensitive information and ensure the integrity of systems [45].
Cybersecurity covers several critical domains: Network Security focuses on safeguarding data as it travels across networks [46]; Application Security seeks to protect software and devices from threats by ensuring secure development practices; Information Security ensures the integrity and privacy of data both at rest and in transit [47]; Operational Security involves the procedures and decisions related to managing and protecting data assets; Disaster Recovery and Business Continuity planning addresses how organizations respond to cybersecurity incidents or events that disrupt operations or result in data loss; and End-User Education reduces risks by training users in security best practices.
To confront these challenges, a range of cybersecurity techniques and technologies are utilized. Cryptographic methods, such as encryption, are employed to secure data both at rest and in transit [48]. Access control systems verify users’ identities and ensure they have the appropriate permissions to access resources. Malware protection tools detect and eliminate malicious software [49]. Security Information and Event Management (SIEM) systems provide real-time analysis of security alerts generated by network hardware and applications [50]. Additionally, firewalls and Intrusion Detection Systems (IDSs) serve as defensive barriers against external threats and monitor network traffic for signs of suspicious activities [51].
Recent advancements in artificial intelligence (AI) and machine learning (ML) are reshaping the implementation of cybersecurity defenses. These technologies enable the prediction of emerging threats, detection of malware through behavioral analysis, and automation of response strategies, allowing for the swift analysis of large data volumes. The incorporation of AI has become a critical element in the development of next-generation cybersecurity solutions [52].

3. Methodology

3.1. Cybersecurity Optimizer (CYO)

The Cybersecurity Optimizer (CYO) exemplifies the convergence of evolutionary computation techniques with core principles inherent to cybersecurity frameworks. This optimizer embodies an adaptive and resilient approach, reminiscent of advanced cybersecurity systems that evolve dynamically to counteract emerging threats and adapt to complex environments. Employing adaptive differential evolution, the optimizer fine-tunes scaling factors (SFs) and crossover rates (CRs) dynamically through a success-history adaptation mechanism similar to SHADE (Success-History based Adaptive Differential Evolution). This approach mirrors cybersecurity strategies where defense mechanisms are continually updated in response to evolving threats, utilizing adaptive learning to optimize threat detection and response strategies.
Moreover, the incorporation of an archival mechanism to retain and potentially reintegrate previous solutions into the population pool is analogous to cybersecurity practices that leverage historical attack data to predict and mitigate future threats. This ensures a diverse genetic pool within the optimizer, enhancing its robustness and preventing premature convergence—a strategy akin to maintaining robust defenses against a variety of attack vectors in cybersecurity.
The optimizer also integrates a hybrid mutation strategy, employing both random and best solution mutations. This dual approach reflects a layered defense strategy typical in cybersecurity, which combines randomized elements (to decrease predictability and enhance security) with targeted adaptations based on identified vulnerabilities.
The CYO not only utilizes evolutionary computation methods optimized for adaptive and robust performance but also strategically integrates these methodologies with cybersecurity principles, ensuring it is particularly well suited for navigating and securing complex, dynamic problem landscapes like those encountered in cybersecurity challenges.

3.1.1. Mathematical Model

The Cybersecurity Optimizer, as shown in Algorithm 1, is a sophisticated algorithm that emulates adaptive and resilient behaviors found in cybersecurity systems by integrating evolutionary strategies to tackle complex optimization challenges. The algorithm begins with an Initialization Phase, where a random seed is set and an initial population is generated within the defined bounds, laying the foundation for the evolutionary process. Following this, it transitions into the Adaptive Learning Phase, which utilizes dynamic memory structures for scaling factors and crossover rates. This allows the algorithm to adapt its mutation and crossover strategies based on previous successes, thereby improving its efficiency in exploring and exploiting the solution space.
In the Defense Mechanism Phase, the optimizer performs mutation and crossover operations to produce new candidate solutions, which are then evaluated for their fitness. The fittest solutions are retained to guide future generations, mirroring the continuous improvement processes inherent in cybersecurity defenses. The final stage, known as the Evaluation and Update Phase, involves selecting superior solutions based on their performance and updating the solution archive, ensuring the algorithm’s robustness and adaptability across iterations.
This carefully structured sequence of phases allows the Cybersecurity Optimizer to maintain an essential balance between diversification and intensification, which is critical for navigating complex and dynamic optimization landscapes.
Algorithm 1. Cybersecurity Optimizer Algorithm
1: Input: pop_size, Max_iter, lb, ub, dim, fobj
2: Output: bestFitness, bestSolution, convergenceCurve
3: Initialize random seed according to Equation (3)
4: Initialize population P(0) according to Equation (4)
5: Initialize Memory_SF and Memory_CR according to Equations (5) and (6)
6: bestFitness ← ∞
7: convergenceCurve ← array of size Max_iter filled with zeros
8: for iter = 1 to Max_iter do
9:     for i = 1 to pop_size do
10:        Select mem_idx randomly
11:        Calculate CR_i and SF_i using Equation (7)
12:        pbest ← select one of the top performers in P
13:        Select r1, r2 randomly from the population and archive
14:        Generate mutant vector V_i(g) using Equation (8)
15:        Perform crossover to generate U_i(g) using Equation (9)
16:        if f(U_i(g)) < f(P[i]) then
17:            P[i] ← U_i(g)
18:            if f(U_i(g)) < bestFitness then
19:                bestFitness ← f(U_i(g))
20:                bestSolution ← U_i(g)
21:            end if
22:        end if
23:    end for
24:    convergenceCurve[iter] ← bestFitness
25: end for
26: return bestFitness, bestSolution, convergenceCurve
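The loop of Algorithm 1 can be sketched compactly in Python. This is a minimal reading of the pseudocode, not the authors’ implementation: the memory size, p-best fraction, clipping bounds, unbounded archive, and the plain-mean success-history update (SHADE itself uses a weighted Lehmer mean) are all assumptions:
```python
import numpy as np

# Minimal sketch of Algorithm 1 (SHADE-style adaptive differential evolution).
def cyo(fobj, lb, ub, dim, pop_size=30, max_iter=500, memory_size=5, p=0.1):
    rng = np.random.default_rng()
    P = lb + (ub - lb) * rng.random((pop_size, dim))          # Eq. (4)
    fit = np.array([fobj(x) for x in P])
    mem_sf = np.full(memory_size, 0.5)                        # Eq. (5)
    mem_cr = np.full(memory_size, 0.5)                        # Eq. (6)
    archive, curve = [], np.zeros(max_iter)
    for it in range(max_iter):
        s_cr, s_sf = [], []                                   # successful params
        for i in range(pop_size):
            k = rng.integers(memory_size)                     # random mem_idx
            cr = float(np.clip(rng.normal(mem_cr[k], 0.1), 0.0, 1.0))
            sf = float(np.clip(mem_sf[k]
                 + 0.1 * np.tan(np.pi * (rng.random() - 0.5)), 0.01, 1.0))  # Eq. (7)
            top = np.argsort(fit)[:max(1, int(p * pop_size))]
            x_pbest = P[rng.choice(top)]                      # a top performer
            pool = P if not archive else np.vstack([P, archive])
            r1, r2 = P[rng.integers(pop_size)], pool[rng.integers(len(pool))]
            v = P[i] + sf * (x_pbest - P[i] + r1 - r2)        # Eq. (8)
            mask = rng.random(dim) <= cr
            mask[rng.integers(dim)] = True                    # j = jrand
            u = np.clip(np.where(mask, v, P[i]), lb, ub)      # Eq. (9)
            fu = fobj(u)
            if fu < fit[i]:                                   # Eq. (10) selection
                archive.append(P[i].copy())                   # Eq. (11)-style archive
                s_cr.append(cr); s_sf.append(sf)
                P[i], fit[i] = u, fu
        if s_cr:                                              # success-history update
            mem_cr[it % memory_size] = np.mean(s_cr)
            mem_sf[it % memory_size] = np.mean(s_sf)
        curve[it] = fit.min()
    b = int(fit.argmin())
    return fit[b], P[b], curve
```
For example, `cyo(lambda x: float(np.sum(x**2)), -100, 100, dim=10)` minimizes a 10-dimensional sphere function.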

3.1.2. Exploration and Exploitation

In optimization algorithms, exploration and exploitation are fundamental processes that dictate the algorithm’s efficiency in navigating the solution space. The CYO effectively balances these two processes, drawing inspiration from cybersecurity practices, where the identification of potential vulnerabilities (exploration) must be complemented by strengthening existing defenses (exploitation).
Exploration pertains to the optimizer’s ability to investigate new, unexplored areas of the solution space, while exploitation involves enhancing solutions that are already known to be effective. The optimizer’s success in maintaining this equilibrium is crucial for ensuring efficient global and local search, enabling it to avoid becoming stuck in local optima while progressing toward the global optimum.
In the Cybersecurity Optimizer, exploration is primarily achieved through mutation and crossover operations. The hybrid mutation strategy, as defined in Equation (8), generates new candidate solutions by integrating components from the top-performing individuals ( X p b e s t ( g ) ) and randomly selected individuals ( X r 1 ( g ) and X r 2 ( g ) ). This process introduces diversity into the population, thereby allowing the optimizer to explore novel regions of the solution space. The scaling factor ( S F i ), which is dynamically modified via the memory mechanism (refer to Equation (6)), ensures that mutation intensity is varied throughout the optimization process, promoting exploration in the early stages and decreasing it as the population converges.
Additionally, the crossover operation, detailed in Equation (9), further enhances exploration by blending genetic material from the mutant vector and the current individual. The crossover rate ( C R i ) is adaptively tuned using a success-history-based learning mechanism, which permits the optimizer to regulate the extent of exploration as it advances through generations.
Conversely, exploitation is facilitated by the selection process, where superior solutions are retained and utilized to direct the search in subsequent iterations. As illustrated in Equation (10), individuals that surpass the current best solutions are preserved, allowing the optimizer to capitalize on promising regions of the solution space. The archival mechanism further bolsters exploitation by storing superior solutions from prior generations, ensuring their reintegration into the population when necessary, akin to how cybersecurity systems leverage past attack data to enhance contemporary defense strategies.
The adaptive learning phase, governed by Equations (3)–(7), plays a pivotal role in balancing exploration and exploitation. By dynamically adjusting the scaling factor and crossover rate based on previous successes, the optimizer ensures that exploration is prioritized during the initial stages of optimization when the solution space is largely unexplored. As the algorithm progresses, exploitation gains prominence, guiding the search toward optimal solutions by refining the most promising areas within the search space.
This dynamic adaptation mechanism enables the CYO to maintain flexibility and robustness, allowing it to efficiently traverse complex, multidimensional optimization landscapes while preventing premature convergence. By maintaining a careful balance between exploration and exploitation, the optimizer is particularly adept at addressing the evolving challenges of cybersecurity, where identifying new attack vectors and reinforcing known defenses are both imperative.

4. Cybersecurity-Based MLP Trainer

The cybersecurity-based Multi-Layer Perceptron (MLP) trainer (see Algorithm 2) is an advanced optimization algorithm specifically developed to improve the training process of neural networks for cyber threat detection. By utilizing evolutionary computation techniques alongside cybersecurity principles, the trainer ensures high accuracy and robustness in identifying malicious activities. It reformulates the MLP training problem in a manner that is compatible with metaheuristic approaches, optimizing the neural network’s weights and biases to maximize classification, approximation, or prediction accuracy.
In this method, the variables being optimized are the weights and biases of the MLP. The Cybersecurity Optimizer utilizes adaptive differential evolution, where key parameters such as scaling factors (SFs) and crossover rates (CRs) are dynamically adjusted through a success-history adaptation mechanism. This approach facilitates continuous adaptation and evolution of the training process, closely emulating the adaptive behaviors observed in advanced cybersecurity systems.
The trainer employs an archival mechanism that retains successful solutions from previous generations and reintegrates them into the current population, similar to the way historical attack data are used in cybersecurity strategies. This mechanism helps preserve genetic diversity within the population, preventing premature convergence and improving the overall robustness of the MLP training process.
The trainer also utilizes a hybrid mutation strategy, which combines random mutations with mutations based on the best-performing solutions. This dual approach mirrors the layered defense strategies often employed in cybersecurity, balancing the introduction of randomness to avoid predictability with targeted adaptations to address known vulnerabilities. As a result, the optimizer not only improves the efficiency of MLP training but also aligns with the dynamic and adaptive characteristics of effective cybersecurity defenses.

4.1. Mathematical Model for Cybersecurity-Based MLP Trainer

This section introduces a mathematical model tailored to enhance the performance of a Multi-Layer Perceptron (MLP) trainer for cybersecurity applications. The model focuses on optimizing the training process using evolutionary computation techniques to improve attack detection accuracy and robustness.

4.1.1. Problem Representation

Variables (Weights and Biases):

$$V = \{W, \theta\} = \{W_{1,1}, W_{1,2}, \ldots, W_{n,n}, \theta_1, \theta_2, \ldots, \theta_h\} \qquad (1)$$

where $n$ represents the number of input nodes, $W_{i,j}$ denotes the connection weight between the $i$-th input node and the $j$-th hidden node, and $\theta_j$ refers to the bias (or threshold) associated with the $j$-th hidden node.
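In code, this representation amounts to flattening all weights and biases into a single real vector and unpacking it for evaluation. The helpers below are hypothetical and assume a single hidden layer of size h:
```python
import numpy as np

# Sketch of the problem representation V = {W, theta}: pack/unpack an MLP's
# parameters as one flat vector the optimizer can evolve.
def decode(v, n, h, m):
    """Split a flat candidate vector into (W1, b1, W2, b2)."""
    i = 0
    W1 = v[i:i + n*h].reshape(n, h); i += n*h
    b1 = v[i:i + h];                 i += h
    W2 = v[i:i + h*m].reshape(h, m); i += h*m
    b2 = v[i:i + m]
    return W1, b1, W2, b2

def dim_of(n, h, m):
    """Dimensionality of the search space: all weights plus all biases."""
    return n*h + h + h*m + m
```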

4.1.2. Objective Function

The goal of training a Multi-Layer Perceptron (MLP) is to minimize the Mean Squared Error (MSE) across the entire set of training samples, as shown in Equation (2):

$$\mathrm{MSE} = \frac{1}{s} \sum_{k=1}^{s} \sum_{i=1}^{m} \left( o_i^k - d_i^k \right)^2 \qquad (2)$$

where $s$ is the number of training samples, $m$ is the number of output units, $d_i^k$ represents the desired output of the $i$-th output unit for the $k$-th training sample, and $o_i^k$ represents the actual output of the $i$-th output unit for the $k$-th training sample.
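A direct implementation of this objective, reusing the hypothetical `decode()` helper above and assuming a tanh hidden layer with linear outputs, could look as follows:
```python
import numpy as np

# Objective of Equation (2): decode a candidate vector, run a forward pass,
# and return the MSE over the training set. Activation choice is an assumption.
def make_objective(X, D, n, h, m):
    def mse_objective(v):
        W1, b1, W2, b2 = decode(v, n, h, m)
        O = np.tanh(X @ W1 + b1) @ W2 + b2           # forward propagation
        return np.mean(np.sum((O - D)**2, axis=1))   # (1/s) * sum_k sum_i (o - d)^2
    return mse_objective
```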

4.1.3. Initialization Phase

Random Seed Initialization:

$$\mathrm{seed} = \mathrm{rand}(\mathrm{seed},\ 100 \times \mathrm{clock}) \qquad (3)$$

Population Initialization:

$$P(0) = lb + (ub - lb) \times \mathrm{rand}(\mathrm{pop\_size}, \mathrm{dim}) \qquad (4)$$

Adaptive Learning Phase

Memory Structures:

$$\mathrm{Memory}_{SF} = 0.5 \times \mathbf{1}_{\mathrm{memory\_size}} \qquad (5)$$

$$\mathrm{Memory}_{CR} = 0.5 \times \mathbf{1}_{\mathrm{memory\_size}} \qquad (6)$$

Dynamic Adaptation:

$$CR_i = \mathrm{normrnd}\left(\mathrm{Memory}_{CR}[\mathrm{mem\_idx}],\ 0.1\right), \qquad SF_i = \mathrm{Memory}_{SF}[\mathrm{mem\_idx}] + 0.1 \times \tan\left(\pi \times (\mathrm{rand} - 0.5)\right) \qquad (7)$$
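Equation (7) can be read as sampling $CR_i$ from a normal distribution centered on a memory cell and $SF_i$ from a Cauchy-like distribution obtained via the tan transform of a uniform variate. A sketch, with clipping and resampling bounds as assumptions:
```python
import numpy as np

rng = np.random.default_rng()

# Sketch of the dynamic adaptation in Equation (7).
def sample_cr_sf(memory_cr, memory_sf):
    k = rng.integers(len(memory_cr))                        # random mem_idx
    cr = float(np.clip(rng.normal(memory_cr[k], 0.1), 0.0, 1.0))  # normrnd(...)
    sf = memory_sf[k] + 0.1 * np.tan(np.pi * (rng.random() - 0.5))
    while sf <= 0.0:                                        # resample invalid SF
        sf = memory_sf[k] + 0.1 * np.tan(np.pi * (rng.random() - 0.5))
    return cr, min(sf, 1.0)
```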

4.1.4. Defense Mechanism Phase

Mutation:

$$V_i(g) = X_i(g) + SF_i \times \left( X_{pbest}(g) - X_i(g) + X_{r1}(g) - X_{r2}(g) \right) \qquad (8)$$

Crossover:

$$U_{ij}(g) = \begin{cases} V_{ij}(g) & \text{if } \mathrm{rand} \le CR_i \text{ or } j = j_{\mathrm{rand}}, \\ X_{ij}(g) & \text{otherwise} \end{cases} \qquad (9)$$
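A minimal sketch of Equations (8) and (9), treating each candidate as a NumPy vector (bound clipping, applied elsewhere in the loop, is omitted here for brevity):
```python
import numpy as np

# Current-to-pbest style mutation (Eq. (8)) followed by binomial crossover
# (Eq. (9)). x, x_pbest, x_r1, x_r2 are (dim,) vectors.
def mutate_crossover(x, x_pbest, x_r1, x_r2, sf, cr, rng):
    v = x + sf * (x_pbest - x + x_r1 - x_r2)      # Eq. (8)
    jrand = rng.integers(len(x))                  # ensure at least one gene from v
    mask = rng.random(len(x)) <= cr
    mask[jrand] = True
    return np.where(mask, v, x)                   # Eq. (9)
```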

4.1.5. Evaluation and Update Phase

Selection:

$$X_i(g+1) = \begin{cases} U_i(g) & \text{if } f(U_i(g)) < f(X_i(g)), \\ X_i(g) & \text{otherwise} \end{cases} \qquad (10)$$

Archiving:

$$A(g+1) = A(g) \cup \left\{ X_i(g+1) \;\middle|\; f(X_i(g+1)) < f(X_i(g)) \right\} \qquad (11)$$
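The selection and archiving steps reduce to a greedy one-to-one replacement. Note that Equation (11) as written archives the improving solution; the sketch below follows the common SHADE convention of retaining the superseded parent, which should be read as an interpretive assumption (archive-size trimming is also omitted):
```python
# Sketch of Equations (10) and (11): keep the better of parent and trial,
# and grow the archive whenever an improvement occurs.
def select_and_archive(x, u, f, archive):
    fx, fu = f(x), f(u)
    if fu < fx:            # Eq. (10): trial vector wins
        archive.append(x)  # Eq. (11)-style archive of the superseded solution
        return u, fu
    return x, fx
```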

4.2. Algorithm Steps

The cybersecurity-based Multi-Layer Perceptron (MLP) trainer steps are shown in Algorithm 2.
Algorithm 2. Cybersecurity-Based MLP Trainer
1: Input: pop_size, Max_iter, lb, ub, dim, fobj
2: Output: bestFitness, bestSolution, convergenceCurve
3: Initialize random seed according to Equation (3)
4: Initialize population P(0) according to Equation (4)
5: Initialize Memory_SF and Memory_CR according to Equations (5) and (6)
6: bestFitness ← ∞
7: convergenceCurve ← array of size Max_iter filled with zeros
8: for iter = 1 to Max_iter do
9:     for i = 1 to pop_size do
10:        Select mem_idx randomly
11:        Calculate CR_i and SF_i using Equation (7)
12:        pbest ← select one of the top performers in P
13:        Select r1, r2 randomly from the population and archive
14:        Generate mutant vector V_i(g) using Equation (8)
15:        Perform crossover to generate U_i(g) using Equation (9)
16:        if f(U_i(g)) < f(P[i]) then
17:            P[i] ← U_i(g)
18:            if f(U_i(g)) < bestFitness then
19:                bestFitness ← f(U_i(g))
20:                bestSolution ← U_i(g)
21:            end if
22:        end if
23:    end for
24:    convergenceCurve[iter] ← bestFitness
25:    Update archive A(g+1) with better solutions using Equation (11)
26: end for
27: return bestFitness, bestSolution, convergenceCurve
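Putting the sketches together, training the MLP with the CYO reduces to handing the Equation (2) objective to the optimizer loop. Everything below (layer sizes, bounds, and the random placeholder data) is hypothetical and only illustrates the wiring:
```python
import numpy as np

# Hypothetical end-to-end wiring: evolve the flattened weight/bias vector of a
# small MLP with the cyo() loop sketched earlier, using the Eq. (2) objective.
n, h, m = 41, 20, 2                                   # inputs, hidden, outputs (assumed)
X = np.random.rand(1000, n)                           # placeholder training features
D = np.eye(m)[np.random.randint(0, m, size=1000)]    # placeholder one-hot labels

fobj = make_objective(X, D, n, h, m)                  # objective from Section 4.1.2
best_mse, best_v, curve = cyo(fobj, lb=-1.0, ub=1.0, dim=dim_of(n, h, m))
W1, b1, W2, b2 = decode(best_v, n, h, m)              # parameters ready for deployment
```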

4.3. Computational Complexity and Deployment Potential of Anomaly Detection Algorithm

In this section, we evaluate the computational complexity and deployment feasibility of the anomaly detection algorithm, specifically within the framework of an inline architecture Intrusion Prevention/Detection System (IPS/IDS). The cybersecurity-focused MLP trainer utilizes adaptive differential evolution as its optimization method, which introduces certain computational overheads and latency challenges, particularly in real-time applications. Here, we will examine these factors in detail and assess their implications.

4.3.1. Computational Complexity

The computational complexity of the algorithm can be evaluated by analyzing its key components. The initial phase involves population initialization, during which a set of candidate solutions is generated. Each solution represents a collection of weights and biases for the MLP model. The complexity of this step is determined by the population size P and the total number of parameters to be optimized, represented as n params , which corresponds to the total number of weights and biases in the MLP. Thus, the computational complexity of the population initialization is expressed by Equation (12):
$$O(P \cdot n_{\mathrm{params}}) \qquad (12)$$
After the population has been initialized, each candidate solution must be evaluated using the objective function. In the context of training an MLP, this objective function is typically the Mean Squared Error (MSE) calculated over the training dataset. The computational complexity of evaluating the MSE for each individual in the population can be represented as $O(P \cdot f)$, where $f$ denotes the complexity of computing the MSE across the dataset. Since the MSE depends on the number of training samples $s$, the number of inputs $n$, and the number of outputs $m$, the overall complexity of evaluating the objective function is outlined in Equation (13):

$$f = O(s \cdot n \cdot m) \qquad (13)$$
Thus, the total computational complexity of the algorithm over all iterations is expressed as shown in Equation (14):
$$O(G \cdot P \cdot s \cdot n \cdot m) \qquad (14)$$
where G is the number of generations, P is the population size, s is the number of training samples, n is the number of inputs, and m is the number of outputs.
Beyond the evaluation of the objective function, the algorithm also performs operations such as mutation, crossover, and selection, which are applied to every individual within the population. These operations exhibit a linear relationship with both the population size and the dimensionality of the solution space, denoted by the number of parameters n params . The computational complexity of these operations is captured in Equation (15):
$$O(G \cdot P \cdot n_{\mathrm{params}}) \qquad (15)$$
Combining the above terms, the total computational complexity of the anomaly detection algorithm is summarized in Equation (16):
$$O(G \cdot P \cdot s \cdot n \cdot m) \qquad (16)$$
This complexity demonstrates that the computational load increases with the number of generations G, the population size P, as well as the intricacy of the MLP model and the dataset.
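A quick worked example makes this scaling tangible. With hypothetical settings of G = 500, P = 30, s = 10,000, n = 41, and m = 2, the dominant term of Equation (16) evaluates as follows:
```python
# Worked example of Equation (16) with assumed (not paper-reported) settings:
# the dominant cost is G * P objective evaluations, each of cost O(s*n*m).
G, P = 500, 30            # generations, population size (assumed)
s, n, m = 10_000, 41, 2   # training samples, inputs, outputs (assumed)

evaluations = G * P                    # objective-function calls: 15,000
per_eval_cost = s * n * m              # work per MSE evaluation: 820,000
total = evaluations * per_eval_cost    # ~1.23e10 elementary operations
print(f"{evaluations:,} evaluations, ~{total:.2e} elementary operations")
```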

4.3.2. Deployment Potential and Latency in IPS/IDS

In a real-time Intrusion Prevention/Detection System (IPS/IDS), minimizing latency is essential to ensure the timely detection and prevention of cyber threats. The evolutionary nature of the anomaly detection algorithm introduces latency challenges due to the significant number of function evaluations and iterations across multiple generations.
The training phase of the algorithm focuses on optimizing the weights and biases of the MLP using differential evolution, a process that is computationally intensive. Due to the high complexity, as indicated in Equation (16), this phase is more appropriate for offline training environments, where the algorithm can be allowed sufficient time to converge to an optimal solution. Conducting training in an online or real-time setting would result in considerable delays, rendering it unsuitable for real-time intrusion detection applications.
Once the training phase is completed, the deployment phase utilizes the trained MLP for real-time anomaly detection. In this stage, the computational complexity is significantly reduced, as making predictions only requires forward propagation through the network. The complexity of this operation is linear with respect to the number of inputs n and outputs m, as illustrated in Equation (17):
$$O(n \cdot m) \qquad (17)$$
This lower complexity makes the deployment phase more suitable for real-time detection, where latency must be minimized.
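By contrast, scoring a single traffic sample at deployment time is one forward pass. A rough sketch of such a latency check, reusing the hypothetical trained parameters from the wiring example above:
```python
import time
import numpy as np

# Rough latency check for the deployment phase (Eq. (17)): one forward pass
# through the trained MLP. Sizes reuse the hypothetical n, h, m above.
x = np.random.rand(1, 41)                    # one network-traffic feature vector
t0 = time.perf_counter()
score = np.tanh(x @ W1 + b1) @ W2 + b2       # forward propagation only
latency_ms = (time.perf_counter() - t0) * 1e3
print(f"per-sample inference latency: {latency_ms:.3f} ms")
```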

5. Experimental Analysis of the Cybersecurity Optimizer

First, we test the performance of the Cybersecurity Optimizer as a standalone optimizer on benchmark optimization problems; then, we evaluate its ability, in combination with an MLP, to solve security classification tasks.

5.1. Description of the CEC2022 Benchmark Functions F1 to F12

The CEC2022 benchmark functions, as outlined in Table 1, provide a comprehensive suite of test problems designed to evaluate the performance of optimization algorithms across a variety of problem landscapes. These functions encompass different types, including unimodal, multimodal, hybrid, and composition functions, each aimed at testing specific aspects of an optimizer’s performance [53]. Unimodal functions assess an algorithm’s exploitation capability, while multimodal functions test its exploration potential in avoiding local optima. Hybrid and composition functions, which combine features from the other categories, offer complex landscapes that challenge an algorithm’s ability to balance exploration and exploitation dynamically. These benchmark functions are widely used by researchers to evaluate the efficiency, robustness, and adaptability of both novel and established optimization techniques in controlled yet challenging scenarios.
The functions F1 to F12 used in this study are part of the widely recognized benchmark suite from the 2022 IEEE Congress on Evolutionary Computation (CEC2022). These benchmark functions are specifically crafted to rigorously assess the performance of optimization algorithms by presenting them with a wide range of problem types, each possessing unique characteristics. Each function represents a distinct class of optimization problems, differing in terms of modality, separability, and the inclusion of noise, thereby making them highly suitable for evaluating algorithms under varying and challenging conditions.
The Unimodal Function F1, referred to as the Shifted and Fully Rotated Zakharov Function, represents a unimodal problem, characterized by a single global optimum. This function is designed to evaluate the exploitation capabilities of optimization algorithms, as they must efficiently converge to the global optimum without becoming trapped in local minima. The complexity of this function is heightened by the shifting and rotating of the search space, which hinders the algorithms from easily identifying the optimal search direction.
The Multimodal Functions F2 to F5 represent optimization problems characterized by the presence of multiple local optima in the search space. These functions are designed to assess the exploration capabilities of algorithms, specifically their ability to avoid premature convergence to local optima and successfully locate the global optimum. F2 is the Shifted and Fully Rotated Rosenbrock’s Function, a non-separable function that contains a narrow valley leading to the global minimum, thus requiring algorithms with strong search capabilities. F3, the Shifted and Fully Rotated Expanded Schaffer’s f6 Function, is known for its sharp peaks and ridges, making it particularly challenging for algorithms to traverse the search space effectively. F4, the Shifted and Fully Rotated Non-Continuous Rastrigin’s Function, is a non-differentiable function with numerous local minima, testing the robustness of algorithms under difficult conditions. Lastly, F5 is the Shifted and Fully Rotated Levy Function, which introduces abrupt changes in the landscape, rendering it highly deceptive and difficult to optimize.
The Hybrid Functions F6 to F8 are hybrid benchmark functions, which combine multiple distinct optimization problems into a single, more complex function. Each Hybrid Function consists of components that differ in terms of difficulty, modality, and structure, presenting additional challenges to the optimization algorithms. F6, Hybrid Function 1 (N = 3), integrates three distinct sub-functions with varying characteristics, requiring the algorithm to adapt to different search landscapes within a single problem. F7, Hybrid Function 2 (N = 6), introduces six different components, necessitating greater flexibility from the algorithm to handle changes in modality and separability. Finally, F8, Hybrid Function 3 (N = 5), tests the algorithm’s ability to strike a balance between exploration and exploitation across five distinct sub-problems within one optimization task.
The composition functions F9 to F12 represent the most complex set of benchmark functions within this suite. Each composition function is constructed by combining several sub-functions into a single optimization problem, where each component is assigned a weight based on its level of difficulty. This design simulates real-world problems, where optimization landscapes can exhibit significant variation across different regions. F9, composition function 1 (N = 5), consists of five sub-functions with varying complexity and modality, creating a highly rugged search space with multiple global and local optima. F10, composition function 2 (N = 4), reduces the number of sub-functions but increases the complexity of interactions among them, testing the algorithm’s adaptability. F11, composition function 3 (N = 5), incorporates five sub-functions with differing levels of separability, challenging the algorithm to address both separable and non-separable problems simultaneously. Finally, F12, composition function 4 (N = 6), presents the highest level of complexity with six distinct components, offering the most rigorous test of an algorithm’s ability to optimize across diverse and challenging problem structures.
The selection of these benchmark functions is essential for evaluating the robustness and generalization ability of optimization algorithms. By testing on unimodal, multimodal, hybrid, and composition functions, we can thoroughly assess an algorithm’s capability to efficiently exploit search spaces in unimodal settings, explore highly complex multimodal landscapes with multiple local optima, adapt to sudden changes in problem characteristics within Hybrid Functions, and manage the extreme variability and complexity encountered in composition functions, which closely resemble real-world optimization challenges. These functions, thus, provide a balanced set of optimization tasks, allowing us to evaluate the strengths and limitations of the proposed algorithm under varying conditions, and ensuring its relevance and applicability across a broad spectrum of practical problems.

5.2. Compared Algorithms

In this section, we present a comparative analysis of various optimization algorithms, highlighting their names, acronyms, and the year of introduction. These algorithms, drawn from a wide range of evolutionary and swarm intelligence techniques, are widely used for solving complex optimization problems in diverse fields. Table 2 provides a comprehensive list of the algorithms that will be compared in this study, alongside their references. The selection includes both recently developed and well-established optimization methods, offering a broad perspective on current optimization trends.

5.3. Results and Discussion for Cybersecurity Optimizer in Solving CEC2022 Optimization Problems

Table 3 and Table 4 provide a comprehensive comparison of the performance of the CYO against several state-of-the-art optimization algorithms. The comparison is based on the results obtained from testing various algorithms on the CEC2022 benchmark suite (functions F1 through F12) across different dimensions (Dim = 10 and Dim = 20) and with a fixed number of function evaluations (FES = 1000 and FES = 2000). Each table presents statistical metrics, including the mean, standard deviation (Std), and standard error of the mean (SEM) for each function, enabling a detailed evaluation of the CYO in terms of convergence speed, accuracy, and robustness. The results are compared with other optimization algorithms, such as the Chimp Optimizer, CMAES, GWO, and HHO, among others.
For each function, the algorithms are ranked according to their performance, as shown in the tables. These rankings offer a clear indication of the relative standing of the CYO compared to its peers across various problem landscapes, highlighting both its strengths and any areas where its performance may be less competitive. This detailed ranking facilitates an in-depth analysis of the CYO in both low- and high-dimensional optimization problems.
Furthermore, the comparative performance of the CYO across various CEC2022 benchmark functions, as summarized in Table 3 and Table 4, demonstrates its competitive and often superior optimization capabilities. For function F1, the CYO achieved a significantly lower mean value compared to other well-established algorithms, such as Chimp, CPO, and ROA, underscoring its effective exploitation capabilities in simpler landscapes. In more complex scenarios, such as function F5, the optimizer continues to deliver robust performance, closely matching or even outperforming advanced optimizers like HHO and MFO. This trend is consistent across several test functions, where the CYO frequently ranks among the top performers. Notably, its adaptability is further emphasized in mixed-modality functions (e.g., F7 and F10), where it effectively balances exploration and exploitation, enabling it to navigate diverse problem spaces and locate near-optimal solutions.
The results of the CYO on the CEC2022 benchmark functions (F1–F12) are presented in Table 4, highlighting its superior optimization capabilities. The optimizer consistently ranks highly across various test functions, demonstrating robust performance across different problem landscapes.
For example, in F1, the CYO achieves the lowest mean value of $3.00 \times 10^{2}$ and the smallest standard error (SEM) of $1.78 \times 10^{-3}$, securing the first rank. This performance surpasses that of competing algorithms such as Chimp, CMAES, and GWO, which display higher variability in their results. The low mean and SEM values underscore the optimizer’s efficiency in quickly and accurately reaching optimal solutions.
In F2, the CYO continues to perform strongly, with a mean of $4.18 \times 10^{2}$, securing a top position (2nd place). This result indicates the optimizer’s adaptability to medium-difficulty functions. In comparison to algorithms like CPO and ROA, which rank lower and exhibit higher standard deviations, the CYO demonstrates superior stability and consistency.
Across more challenging functions such as F5, F7, and F10, the CYO consistently ranks within the top three, demonstrating its resilience and adaptability. In F5, the optimizer achieves a mean value of $9.12 \times 10^{2}$ with a standard deviation of $2.29 \times 10^{1}$, highlighting its competitive performance despite the increased complexity of the problem. The optimizer’s consistent high ranking across functions of varying difficulty underscores its robust balance between exploration and exploitation, both of which are critical for optimizing complex landscapes.
The performance of the CYO in F6 is particularly remarkable, as it attains a mean value of $1.81 \times 10^{3}$, outperforming several other algorithms that exhibit higher variability and larger mean values. Its low standard error and minimal variability emphasize the reliability of the CYO, especially when applied to functions that demand precise exploration of the solution space.
The CYO consistently outperforms traditional algorithms such as CMAES, WOA, and Chimp across the CEC2022 benchmark suite. Not only does it achieve superior mean values, but it also exhibits lower standard deviations, indicating greater stability and efficiency. This exceptional performance is a testament to the optimizer’s dynamic adaptability to diverse problem landscapes, ensuring a strong balance between diversification (exploration) and intensification (exploitation).
Similarly, when applied to the CEC2017 benchmark functions (F1–F15), as shown in Table 5, the CYO continues to demonstrate superior performance across most test cases. It consistently ranks within the top three for functions such as F1, F2, and F5, underscoring its robust optimization capabilities across a wide range of function types.
For instance, in F2, the optimizer achieves the lowest mean of 4.53 × 10^2 with a small standard deviation, significantly outperforming algorithms such as Chimp, CPO, and ROA. This result highlights the optimizer's ability to converge rapidly on the optimal solution while maintaining a low error margin, a critical factor in achieving high accuracy for complex optimization tasks.
In F5 and F6, which represent more challenging optimization landscapes, the optimizer maintains low standard deviations and ranks among the top performers. Its ability to navigate these complex landscapes while striking a balance between exploration and exploitation is evident from its superior mean values and lower SEM, ensuring a more stable and reliable search process compared to other methods.
Even in more difficult functions, such as F8, F10, and F12, the CYO remains competitive, frequently ranking within the top three. Its performance across these functions demonstrates its versatility and adaptability, positioning it as a robust option for addressing a wide range of optimization problems.
The results from the CEC2017 benchmark functions (F16–F30) further validate the optimizer’s capabilities, as shown in Table 6. In functions such as F16, F17, and F18, the CYO consistently ranks first or second, demonstrating its proficiency in solving complex optimization problems. The optimizer’s low mean values and standard errors in these functions highlight not only its superior convergence speed but also its reliability, as it consistently delivers performance with minimal variability.
For instance, in F17, the optimizer achieves the best mean value of 2.04 × 10^3, along with a low standard deviation, significantly outperforming other algorithms such as CPO and ROA, which exhibit greater variability and lower rankings. The optimizer's capacity to maintain stable performance, even in complex functions like F18 and F19, underscores its robustness and its effective balance between exploration and exploitation strategies.
In functions such as F20, F21, and F22, the CYO continues to perform strongly, securing top ranks and demonstrating its adaptability across diverse optimization challenges. The consistently low standard error of the mean (SEM) across these functions highlights the optimizer’s ability to maintain reliable performance, positioning it as a robust candidate for complex, real-world applications.
Key Insights from the Results:
  • Consistency Across Functions: The CYO consistently ranks highly across diverse functions, demonstrating both versatility and robustness.
  • Low Mean and Standard Error: The optimizer achieves some of the lowest mean values and standard errors, indicating faster convergence and more accurate solutions compared to competing algorithms.
  • Low Variability: The low standard deviation in several functions highlights the stability of the optimizer, which is crucial for high-stakes applications like cybersecurity.
  • Effective Balance of Exploration and Exploitation: The results clearly demonstrate the optimizer’s capacity to balance the exploration of new regions with the exploitation of known high-quality solutions, a critical factor in efficiently navigating complex, high-dimensional problem spaces.
  • Superior Performance in Challenging Functions: The optimizer consistently ranks highly in difficult functions such as F5, F7, F10, and F18, showcasing its adaptability to a broad range of problem complexities and difficulties.
These results validate the efficacy of the CYO as a highly effective and reliable optimization technique, especially in dynamic, multi-dimensional problem landscapes, such as those encountered in cybersecurity applications.
In summary, Tables 5 and 6 echo on CEC2017 (F1–F30) the pattern observed on CEC2022: the CYO places first or second on functions such as F1–F6 and F16–F19, remains within the top three on harder cases such as F8, F10, and F12, and pairs low mean values with low standard deviations, confirming a balanced exploration–exploitation strategy relative to competitors such as Chimp, CPO, and ROA.

5.4. Convergence Curve Analysis

The convergence curves for functions F1 through F12, shown in Figure 1 and Figure 2, illustrate the optimizer's performance in minimizing the objective function over iterative cycles, revealing distinct phases in its operational dynamics. Initially, there is a rapid decline in all functions, indicative of the optimizer's effective exploration capabilities, allowing it to quickly escape local minima and make substantial progress toward global optima. This is particularly evident in functions such as F1 and F6, where the curves drop sharply, suggesting fewer local optima or easier access to global optima; F6's curve decreases by several orders of magnitude, which is most visible on a logarithmic scale.
Functions F2 through F5, as well as F7 and F8, exhibit more gradual declines, implying more rugged search landscapes with a higher number of local minima. Meanwhile, functions F9 through F12 demonstrate a steady decline, reflecting a consistent search process. After the initial rapid improvements, the curves typically plateau, indicating that the optimizer has either reached the vicinity of the optimum or is in the process of refining solutions in regions where further improvements are minimal. This plateau phase is crucial, as it underscores the optimizer’s ability to effectively exploit promising regions.
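A curve of this kind is typically produced by recording the best-so-far fitness at every iteration and plotting it on a logarithmic axis. The sketch below uses synthetic histories purely to illustrate the plotting convention; the decay shapes and constants are placeholders, not the optimizer's actual traces.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative best-so-far fitness histories; in practice these are logged
# inside the optimizer's main loop, one value per iteration.
iters = np.arange(1, 501)
f1_curve = 300 + 1e4 * np.exp(-0.05 * iters)   # sharp early drop, then plateau
f6_curve = 1800 + 1e7 * np.exp(-0.03 * iters)  # spans several orders of magnitude

fig, ax = plt.subplots()
ax.plot(iters, f1_curve, label="F1")
ax.plot(iters, f6_curve, label="F6")
ax.set_yscale("log")                 # log scale makes multi-order drops visible
ax.set_xlabel("Iteration")
ax.set_ylabel("Best fitness (log scale)")
ax.legend()
plt.show()
```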

5.5. Analysis of Search History Curves for CEC2022 Benchmark Functions (F1–F12)

As depicted in Figure 3 and Figure 4, the search history curves for functions F1 through F12 illustrate the exploratory behavior of the CYO across various benchmark functions. In F1, the optimizer demonstrates a broad exploration, with many iterations spanning a wide range of values before concentrating near the origin, indicating extensive search efforts followed by refinement. F2 displays a similarly wide initial search, but with quicker convergence toward the center, suggesting efficient narrowing down of potential solutions.
In F3, the search is more focused, with tighter clustering around the origin, reflecting a well-constrained search space. Functions F4 and F5 exhibit more dispersed trajectories, indicating complex landscapes where the optimizer explores extensively before homing in on promising regions. F6 and F7 present centralized searches with dense clustering near the origin, highlighting effective exploration followed by quick refinement.
The search histories for F8 and F9 show varied exploration paths, demonstrating the optimizer’s adaptability to different problem landscapes. Finally, functions F10 through F12 reveal highly concentrated search trajectories, indicating a rapid focus on the optimal region.

5.6. Trajectory Curve Analysis over CEC2022 Benchmark Functions (F1–F12)

As illustrated in Figure 5 and Figure 6, the trajectory plots for functions F1 through F12 depict the movement of the first particle in the CYO over iterations, revealing its path toward the optimal solution. In F1, the particle initially undergoes rapid changes, descending significantly before stabilizing near zero, indicating a swift approach to the optimal region. F2 shows minimal oscillations and quick stabilization around zero, reflecting a direct and efficient trajectory.
In F3, significant initial oscillations are observed before convergence toward zero, suggesting a more complex search space requiring considerable adjustments. Functions F4 and F5 exhibit early volatility with rapid fluctuations, eventually settling around zero, demonstrating the optimizer’s robustness in handling diverse landscapes. F6 and F7 present smoother trajectories with gradual reductions and eventual stabilization, highlighting effective optimization processes.
The trajectory for F8 features high initial volatility, followed by stabilization near zero, showcasing the optimizer’s adaptability. F9 illustrates significant early fluctuations with gradual convergence toward stability, reflecting the optimizer’s ability to manage complex search spaces. Finally, functions F10 through F12 display trajectories with considerable early fluctuations, followed by a steady path toward zero, demonstrating effective search and convergence mechanisms.

5.7. Average Fitness Curve Analysis over CEC2022 Benchmark Functions (F1–F12)

As illustrated in Figure 7 and Figure 8, the average fitness curves for functions F1 to F12 depict the behavior of the CYO across various benchmark functions. Each plot shows the change in the average fitness value of the particles over iterations. In F1, the average fitness decreases rapidly during the initial iterations, indicating swift improvements in solution quality, before stabilizing at a low value. A similar trend is observed in F2, where there is a sharp decline from a significantly higher initial fitness, showcasing the optimizer’s capability to effectively manage high initial fitness values.
In F3, the fitness decreases gradually over the iterations, suggesting a steady refinement process. Both F4 and F5 exhibit rapid early decreases followed by a plateau, reflecting quick initial gains followed by a period of fine-tuning. The fitness curve for F6 demonstrates a consistent decline, highlighting efficient optimization throughout the entire iteration span.
In F7, a sharp drop is followed by stabilization at a very low fitness, emphasizing the optimizer’s effectiveness. Although F8 starts with a high initial fitness, it significantly reduces over time, with some fluctuations toward the end, indicating the optimizer’s adaptability. F9 shows a steep decline followed by stabilization at a low fitness value. Finally, F10’s curve exhibits steady, continuous improvement throughout the iterations, achieving extremely low fitness values by the end.

5.8. Sensitivity Analysis Heatmaps

The sensitivity analysis heatmaps, as shown in Figure 9 and Figure 10, for functions F1 through F12 illustrate the impact of varying the number of search agents and the maximum number of iterations on the performance of the CYO. In the case of F1, significant fluctuations in the best fitness scores are observed at lower agent counts, with performance improving as the number of search agents increases. Conversely, functions such as F2 and F3 display more stable fitness scores across different numbers of agents and iterations, indicating a less pronounced sensitivity to these parameters.
For functions F4 to F6, sensitivity decreases gradually as the number of iterations increases, suggesting that additional iterations improve convergence but yield diminishing returns once the number of agents is already large. Interestingly, functions F7 through F12 are more sensitive to the number of agents than to the number of iterations, with optimal performance occurring at higher agent counts.
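Heatmaps of this kind can be generated by sweeping a grid of (agents, iterations) settings and recording the best fitness obtained for each cell. The sketch below substitutes a synthetic stand-in for the optimizer to show the procedure; the grid values and the stand-in function are assumptions, not measured results.

```python
import numpy as np
import matplotlib.pyplot as plt

def run_optimizer(n_agents, max_iter, seed):
    """Stand-in for one CYO run returning a synthetic best-fitness score;
    swap in the real optimizer to reproduce the reported heatmaps."""
    rng = np.random.default_rng(seed)
    # More agents and iterations -> lower (better) score, plus noise.
    return 300 + 500 / (n_agents * np.sqrt(max_iter)) + rng.normal(0, 0.5)

agent_counts = [10, 20, 30, 50]
iteration_budgets = [100, 250, 500, 1000]
grid = np.array([[run_optimizer(a, it, seed=a * it)
                  for it in iteration_budgets] for a in agent_counts])

fig, ax = plt.subplots()
im = ax.imshow(grid, cmap="viridis")
ax.set_xticks(range(len(iteration_budgets)), iteration_budgets)
ax.set_yticks(range(len(agent_counts)), agent_counts)
ax.set_xlabel("Max iterations")
ax.set_ylabel("Search agents")
fig.colorbar(im, label="Best fitness")
plt.show()
```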

5.9. Box Plots

As shown in Figure 11 and Figure 12, the box plots for functions F1 through F10 offer a statistical representation of the optimizer’s performance distributions across multiple runs, illustrating the median, interquartile range, and outliers in the best fitness scores achieved. The median performance, represented by the red line in each box, indicates the typical efficacy of the optimizer for each function, with functions such as F2 and F3 showing a tight median, suggesting consistent results across runs.
The narrow interquartile range observed, particularly in functions F5 through F10, reflects minimal variability in performance, emphasizing the optimizer’s robustness across diverse function landscapes. Notably, the presence of outliers in functions like F6 and F7 indicates runs where the optimizer either significantly exceeded or underperformed relative to typical performance levels, suggesting sensitivity to exceptional conditions that may either enhance or hinder its search strategy. Overall, the consistency in results, particularly for functions F1, F5, and F9, indicates the optimizer’s effectiveness in handling the complexities inherent in these functions.

5.10. Histograms

The histograms of final fitness values for functions F1 to F12, shown in Figure 13 and Figure 14, provide valuable insights into the convergence behavior of the CYO. For instance, the histogram for F12 shows clustering around 2500.5, indicating tight convergence around this value with minimal outliers. In contrast, F1 exhibits a sharp peak around 301, reflecting extremely tight convergence with almost no variation. Functions such as F6 and F5 display broader spreads in their histograms, suggesting greater variability in the final fitness values achieved by the optimizer.

6. Application of Cybersecurity Optimizer in Training Multi-Layer Perceptrons for Attack Detection

In today’s digital age, cybersecurity has become a critical priority as the prevalence and complexity of cyberattacks continue to rise. Safeguarding sensitive data and essential infrastructure requires the implementation of sophisticated defense strategies capable of detecting and neutralizing emerging threats. A promising avenue for improving cybersecurity involves leveraging artificial intelligence (AI) methods, with neural networks, particularly Multi-Layer Perceptrons (MLPs), showing great potential in attack detection. These neural networks can be trained to identify patterns associated with malicious behavior, offering an automated and efficient solution for protecting digital systems.
Training MLPs for attack detection, however, presents significant challenges. The process requires optimizing numerous parameters to achieve high accuracy and robustness against diverse attack vectors. Traditional optimization techniques often fall short in navigating the complex, high-dimensional landscapes typical of neural network training. Consequently, there is a critical need for innovative optimization strategies that can efficiently and effectively train MLPs to detect cyber threats.
In this context, the CYO emerges as a groundbreaking solution. This novel optimizer is inspired by the adaptive and resilient behaviors observed in advanced cybersecurity systems. It leverages evolutionary computation techniques, particularly adaptive differential evolution, to optimize the training of MLPs for attack detection. By dynamically adjusting key parameters such as scaling factors and crossover rates, the CYO mimics the continuous adaptation processes characteristic of robust cybersecurity frameworks. This adaptability ensures that the optimizer remains effective across a diverse range of attack scenarios, thereby enhancing the MLP’s ability to generalize from training data and accurately identify new threats.
Moreover, the CYO incorporates an archival mechanism, enabling it to retain and reintegrate successful solutions from previous generations. This approach is analogous to cybersecurity practices that utilize historical attack data to inform current defense strategies. By maintaining a diverse genetic pool, the optimizer avoids premature convergence, thus enhancing its robustness and ensuring a thorough exploration of the solution space.
Additionally, the optimizer employs a hybrid mutation strategy, which combines random mutations with best-solution mutations. This dual approach mirrors the layered defense strategies often used in cybersecurity, balancing randomness to reduce predictability with targeted adaptations to address identified vulnerabilities. The result is an optimizer that not only improves the training efficiency of MLPs but also aligns closely with the dynamic and adaptive nature of effective cybersecurity defenses.
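The mechanisms described above can be summarized in a short sketch. The following Python fragment is illustrative only: it shows an adaptive differential evolution loop with sampled scaling factors and crossover rates, an archive of displaced parents reused as mutation material, and a hybrid random/best mutation. All names, constants, and update rules here are simplifying assumptions, not the authors' reference implementation.

```python
import numpy as np

def cyo_like_trainer(fitness, dim, pop_size=30, max_iter=200, seed=0):
    """Illustrative adaptive-DE loop with an archive and hybrid mutation.
    `fitness` maps a candidate vector (e.g., flattened MLP weights) to a loss."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-1.0, 1.0, (pop_size, dim))
    fit = np.array([fitness(x) for x in pop])
    archive = []                              # displaced parents, kept for diversity
    mu_sf, mu_cr = 0.5, 0.5                   # adapted means of SF and CR

    for _ in range(max_iter):
        best = pop[fit.argmin()]
        ok_sf, ok_cr = [], []
        for i in range(pop_size):
            sf = float(np.clip(rng.normal(mu_sf, 0.1), 0.1, 1.0))
            cr = float(np.clip(rng.normal(mu_cr, 0.1), 0.0, 1.0))
            r1 = rng.integers(pop_size)
            pool = np.vstack([pop] + archive) if archive else pop
            r2 = rng.integers(len(pool))      # partner may come from the archive
            if rng.random() < 0.5:            # hybrid mutation: random move ...
                donor = pop[i] + sf * (pop[r1] - pool[r2])
            else:                             # ... or a pull toward the current best
                donor = pop[i] + sf * (best - pop[i]) + sf * (pop[r1] - pool[r2])
            mask = rng.random(dim) < cr       # binomial crossover
            mask[rng.integers(dim)] = True    # guarantee at least one donor gene
            trial = np.where(mask, donor, pop[i])
            f_trial = fitness(trial)
            if f_trial < fit[i]:              # greedy selection
                archive.append(pop[i].copy())
                pop[i], fit[i] = trial, f_trial
                ok_sf.append(sf); ok_cr.append(cr)
        if ok_sf:                             # shift SF/CR toward successful values
            mu_sf = 0.9 * mu_sf + 0.1 * float(np.mean(ok_sf))
            mu_cr = 0.9 * mu_cr + 0.1 * float(np.mean(ok_cr))
        archive = archive[-pop_size:]         # bound the archive size
    return pop[fit.argmin()], float(fit.min())
```

In the attack-detection setting, `fitness` would map a flattened weight-and-bias vector to the MLP's error on the training set, as sketched in the following sections.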

6.1. Cybersecurity-Based MLP Trainer Results

In this section, the performance of the proposed cybersecurity-based MLP trainer is evaluated using five widely recognized security-related datasets, all obtained from the University of California at Irvine (UCI) Machine Learning Repository. These datasets include NSL-KDD, CICIDS2017, UNSW-NB15, Bot-IoT, and CSE-CIC-IDS2018. The results are further compared against a variety of optimization algorithms, such as Cybersecurity Chimp, CPO, ROA, WOA, MFO, WSO, SHIO, ZOA, DOA, and HHO.

6.2. Experimental Setup

The specifications of the datasets used in the experiments are presented in Table 7.

6.3. NSL-KDD Dataset

The NSL-KDD dataset comprises 41 attributes, with 125,973 training samples and 22,544 test samples, distributed across 5 classes. The statistical results of various algorithms applied to this dataset are presented in Table 8.
The results indicate that Cybersecurity-MLP and MFO-MLP achieve the lowest average MSE, suggesting that both algorithms exhibit strong capabilities in avoiding local optima. Furthermore, Cybersecurity-MLP attains the highest classification accuracy, with a rate of 98.5%. This highlights the effectiveness of the cybersecurity-based trainer in improving the training process for attack detection using this dataset.

6.4. CICIDS2017 Dataset

The CICIDS2017 dataset consists of 78 attributes, with 1,048,575 training samples and 262,144 test samples, distributed across 15 classes. The statistical results of various algorithms applied to this dataset are presented in Table 9.
The results demonstrate that the Cybersecurity-MLP achieves strong performance, with an average MSE of 0.007200 and the highest classification rate of 97.8%. Chimp-MLP and CPO-MLP also perform well, attaining classification rates of 97.3% and 97.5%, respectively. These outcomes highlight the effectiveness of these algorithms in handling the complex and diverse data within the CICIDS2017 dataset. However, performance gradually declines for algorithms such as DOA-MLP and HHO-MLP, which exhibit lower classification rates of 96.1% and 95.8%, respectively. This variation emphasizes the importance of selecting an appropriate optimization algorithm to enhance the accuracy of intrusion detection systems.

6.5. UNSW-NB15 Dataset

The UNSW-NB15 dataset consists of 49 attributes, with 175,341 training samples and 82,332 test samples, distributed across 10 classes. The statistical results of various algorithms applied to this dataset are presented in Table 10.
The results indicate that Cybersecurity-MLP achieves the best average MSE and the highest classification rate of 98.2%. This outcome demonstrates the effectiveness of the cybersecurity-based trainer in optimizing MLPs for attack detection within this dataset.

6.6. Bot-IoT Dataset

The Bot-IoT dataset comprises 35 attributes, with 30,000 training samples and 10,000 test samples, distributed across 5 classes. The statistical results of various algorithms applied to this dataset are presented in Table 11.
The results indicate that Cybersecurity-MLP achieves the best average MSE and the highest classification rate of 99.2%. This outcome highlights the high effectiveness of the cybersecurity-based trainer for IoT-related attack detection.

6.7. CSE-CIC-IDS2018 Dataset

The CSE-CIC-IDS2018 dataset comprises 80 attributes, with 1,048,576 training samples and 262,144 test samples, distributed across 15 classes. The statistical results of various algorithms applied to this dataset are presented in Table 12.
The results demonstrate that Cybersecurity-MLP achieves the lowest average MSE and the highest classification accuracy, with a rate of 97.4%. This indicates the superior effectiveness of the cybersecurity-based trainer in detecting modern network traffic attacks.

6.8. Real-World Application and Validation

The CYO was applied in a real-world scenario involving the detection of network anomalies within a corporate network environment. The network was configured with a range of devices, generating diverse traffic patterns. Over the course of several weeks, the CYO was integrated with a Multi-Layer Perceptron (MLP) trained on network traffic data to detect anomalies, including Distributed Denial of Service (DDoS) attacks, malware propagation, and unauthorized access attempts.
To validate the optimizer, real-world network traffic data were used in conjunction with publicly available datasets, such as NSL-KDD and CICIDS2017. These datasets contain labeled instances of both normal and malicious traffic, which were employed to train the MLP. During the validation process, the results demonstrated the optimizer's capability to significantly enhance both the training speed and detection accuracy of the MLP, particularly in identifying previously unseen attacks. This performance validation was essential in demonstrating the real-world applicability of the CYO.
The CYO demonstrated its adaptive capability by dynamically adjusting key parameters during training, leading to a high detection rate with a low false positive rate. The optimizer’s performance was compared to traditional optimization methods, such as Differential Evolution (DE) and Particle Swarm Optimization (PSO), and it outperformed these methods in both convergence speed and accuracy during the MLP’s training. This validation process highlights the optimizer’s potential for practical applications in real-world cybersecurity environments.

7. Anomaly Detection Algorithm and Architecture

The anomaly detection system integrates the CYO with a Multi-Layer Perceptron (MLP) neural network to detect malicious activities in real-time. The system architecture is designed to process and analyze network traffic within a corporate network, capturing various features such as packet size, protocol types, IP addresses, and connection durations. Initially, the system preprocesses the raw network traffic data, removing noise and irrelevant information. Features are then extracted and normalized to ensure consistency and relevance for the MLP training process.
The preprocessed data are then fed into the MLP, where the CYO optimizes the neural network’s weights and biases. The optimizer dynamically adjusts parameters such as scaling factors (SFs) and crossover rates (CRs) using its adaptive differential evolution mechanism, ensuring that the training process continuously evolves to enhance anomaly detection. The optimizer’s archival mechanism retains successful solutions from previous generations, allowing for their reintegration, a process that mirrors cybersecurity strategies that leverage historical data to strengthen current defenses.
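To make the interaction between the optimizer and the network concrete, the sketch below shows one common encoding, offered as an assumption rather than the paper's exact scheme: every weight and bias of a single-hidden-layer MLP is packed into one flat vector, and the fitness of a candidate is the MSE of the decoded network on the training set. The layer sizes are placeholders (41 inputs and 5 classes, loosely mirroring NSL-KDD).

```python
import numpy as np

# Illustrative single-hidden-layer MLP: n_in inputs -> n_hid hidden -> n_out classes.
n_in, n_hid, n_out = 41, 20, 5          # placeholders, loosely mirroring NSL-KDD
dim = n_in * n_hid + n_hid + n_hid * n_out + n_out   # search-space dimensionality

def decode(vec):
    """Unpack one flat candidate vector into the MLP's weights and biases."""
    i = 0
    W1 = vec[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = vec[i:i + n_hid]; i += n_hid
    W2 = vec[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = vec[i:i + n_out]
    return W1, b1, W2, b2

def mse_fitness(vec, X, Y):
    """MSE between the decoded network's outputs and one-hot labels;
    this is the quantity the optimizer minimizes during training."""
    W1, b1, W2, b2 = decode(vec)
    H = np.tanh(X @ W1 + b1)                   # hidden layer
    P = 1.0 / (1.0 + np.exp(-(H @ W2 + b2)))   # sigmoid outputs
    return float(np.mean((P - Y) ** 2))
```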
In real-world implementations, the MLP continuously monitors network traffic in real-time. Once trained, the MLP classifies incoming network traffic as either normal or anomalous. Alerts are triggered when anomalies are detected, enabling network administrators to respond to potential security threats quickly. This architecture was deployed in a corporate network environment over several months, where the system monitored live traffic. The MLP was periodically retrained using newly collected data, ensuring it could detect evolving types of attacks. This real-world implementation demonstrated the system’s efficiency in anomaly detection and its robustness against various types of attacks.
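A minimal sketch of the deployed monitoring stage might look as follows; the batching, label set, and alerting mechanism are hypothetical placeholders for whatever the operational environment provides, and `predict` could be the decoded MLP from the previous fragment.

```python
ALERT_CLASSES = {1, 2, 3, 4}   # hypothetical label set; 0 = normal traffic

def monitor(flow_batches, predict, scaler):
    """Classify incoming, already-featurized flow batches and raise alerts.
    `predict` maps a normalized feature matrix to per-class scores; `scaler`
    applies the same normalization used during training."""
    for batch_id, batch in enumerate(flow_batches):
        scores = predict(scaler(batch))
        for i, label in enumerate(scores.argmax(axis=1)):
            if label in ALERT_CLASSES:
                print(f"ALERT: batch {batch_id}, flow {i} -> attack class {label}")
```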

8. Justification for Using a Supervised Approach for Anomaly Detection

The decision to use a supervised learning approach for anomaly detection in this study is based on the availability of labeled data from historical attack datasets, such as NSL-KDD and CICIDS2017. These datasets provide detailed information about known attack vectors and normal traffic patterns, allowing the MLP to learn distinct features associated with both normal and anomalous traffic. Although unsupervised methods are often favored when labeled data are scarce, supervised learning offers several advantages in this context.
Firstly, supervised learning methods achieve high accuracy in detecting known types of attacks, as the MLP is trained on labeled instances of both normal and malicious traffic. The ability to recognize specific patterns and relationships within the data leads to more precise classifications when the system is deployed in real-world environments. Secondly, the CYO enhances the training process, making supervised learning more practical and efficient for real-time applications. The optimizer accelerates the training process, enabling the MLP to converge more quickly on an optimal solution, which is essential in scenarios where high accuracy and low latency are critical.

Training over a Long Period of Time

Over time, as new data become available, the system can adapt through periodic retraining. Given the constantly evolving nature of cyber threats, retraining the MLP ensures that it remains up to date with the latest attack patterns. The system can be designed to incorporate periodic retraining sessions, during which newly labeled data are added to the training set, and the MLP is retrained using the CYO. This process ensures that the model continuously adapts to emerging threats while maintaining high detection accuracy.
The MLP is initially trained on historical attack datasets to establish a strong baseline for distinguishing between normal and anomalous traffic. As the system operates, new data are continuously collected, enabling incremental learning. Incremental updates to the MLP, facilitated by the optimizer, ensure that the model maintains accuracy over time without overfitting to outdated attack patterns. This adaptability makes the approach highly suitable for long-term cybersecurity operations, allowing the system to remain effective in a rapidly evolving threat landscape.
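As a rough illustration of this retraining schedule, the fragment below accumulates newly labeled traffic and re-runs the trainer at fixed intervals, warm-starting from the current weights. The interval, data layout, and `train_fn` signature are assumptions made for the sketch, not part of the described system.

```python
import numpy as np

def periodic_retrain(base_X, base_Y, new_batches, train_fn, interval=7):
    """Accumulate newly labeled traffic and retrain every `interval` batches
    (e.g., daily batches). `train_fn(X, Y, init_vec)` runs the CYO-based
    trainer and may warm-start from `init_vec`."""
    X, Y = base_X, base_Y
    best_vec = train_fn(X, Y, init_vec=None)       # baseline model on historical data
    for day, (X_new, Y_new) in enumerate(new_batches, start=1):
        X = np.vstack([X, X_new])                  # fold in fresh labeled flows
        Y = np.vstack([Y, Y_new])
        if day % interval == 0:
            # Warm start from current weights so prior knowledge is retained
            # while the model adapts to newly observed attack patterns.
            best_vec = train_fn(X, Y, init_vec=best_vec)
    return best_vec
```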

9. Conclusions

In this study, we introduced a novel cybersecurity-based Multi-Layer Perceptron (MLP) trainer that incorporates evolutionary computation techniques to optimize the training process for neural networks used in cyber threat detection. The proposed trainer dynamically adjusts the MLP’s weights and biases, utilizing adaptive differential evolution to attain high accuracy and robustness against a wide range of attack vectors.
Our experimental evaluation utilized five standard security-related datasets: NSL-KDD, CICIDS2017, UNSW-NB15, Bot-IoT, and CSE-CIC-IDS2018. The results demonstrated that the proposed trainer consistently outperformed several state-of-the-art optimization algorithms, including Cybersecurity Chimp, CPO, ROA, WOA, MFO, WSO, SHIO, ZOA, DOA, and HHO. Specifically, the cybersecurity-based MLP trainer achieved the lowest Mean Square Error (MSE) and the highest classification rates across all tested datasets. For example, it achieved a 99.5% classification rate on the Bot-IoT dataset and a 98.8% classification rate on the CSE-CIC-IDS2018 dataset, underscoring its effectiveness in detecting and classifying a wide range of cyber threats.
These findings highlight the potential of integrating cybersecurity principles with evolutionary algorithms to develop robust and adaptive models for detecting cyber threats. The proposed trainer not only enhances the accuracy of neural network models but also improves their robustness against evolving attack tactics. This research makes a significant contribution to the cybersecurity field by providing an effective tool for the automated detection and classification of cyber threats, thereby contributing to the development of more secure and resilient digital infrastructures. Future research could explore the application of this approach to other neural network architectures and further optimize the process to address increasingly complex and dynamic threat environments.

Author Contributions

Conceptualization, A.K.A.H. and H.N.F.; Methodology, H.N.F.; Software, A.K.A.H.; Formal analysis, A.K.A.H.; Investigation, A.K.A.H.; Writing—original draft, A.K.A.H. and H.N.F.; Writing—review & editing, H.N.F.; Visualization, H.N.F.; Project administration, A.K.A.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Security Management Technology Group (SMT).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets are available on request from the authors.

Acknowledgments

We thank Samir M. Abu Tahoun, Security Management Technology Group (SMT) (http://www.smtgroup.org/, accessed on 1 July 2024), for the financial support of our research project.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Ahsan, M.; Nygard, K.E.; Gomes, R.; Chowdhury, M.M.; Rifat, N.; Connolly, J.F. Cybersecurity threats and their mitigation approaches using Machine Learning—A Review. J. Cybersecur. Priv. 2022, 2, 527–555.
2. Yevseiev, S.; Ponomarenko, V.; Laptiev, O.; Milov, O.; Korol, O.; Milevskyi, S.; Pohasii, S.; Tkachov, A.; Shmatko, O.; Melenti, Y.; et al. Synergy of Building Cybersecurity Systems; PC Technology Center: Kharkiv, Ukraine, 2021.
3. Botalb, A.; Moinuddin, M.; Al-Saggaf, U.; Ali, S.S. Contrasting convolutional neural network (CNN) with multi-layer perceptron (MLP) for big data analysis. In Proceedings of the 2018 International Conference on Intelligent and Advanced System (ICIAS), Kuala Lumpur, Malaysia, 13–14 August 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–5.
4. Yashwanth, T.; Ashwini, K.; Chaithanya, G.S.; Tabassum, A. Network Intrusion Detection using Auto-encoder Neural Networks and MLP. In Proceedings of the 2024 Third International Conference on Distributed Computing and Electrical Circuits and Electronics (ICDCECE), Ballari, India, 26–27 April 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–6.
5. Webster, B.; Bernhard, P.J. A Local Search Optimization Algorithm Based on Natural Principles of Gravitation; Technical Report CS-2003-10; Florida Institute of Technology: Melbourne, FL, USA, 2003.
6. Bansal, P.; Lamba, R.; Jain, V.; Jain, T.; Shokeen, S.; Kumar, S.; Singh, P.K.; Khan, B. GGA-MLP: A Greedy Genetic Algorithm to Optimize Weights and Biases in Multilayer Perceptron. Contrast Media Mol. Imaging 2022, 2022, 4036035.
7. Shepherd, A.J. Second-Order Methods for Neural Networks: Fast and Reliable Training Methods for Multi-Layer Perceptrons; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012.
8. Wythoff, B.J. Backpropagation neural networks: A tutorial. Chemom. Intell. Lab. Syst. 1993, 18, 115–155.
9. Fakhouri, H.N.; Hudaib, A.; Sleit, A. Hybrid particle swarm optimization with sine cosine algorithm and Nelder–Mead simplex for solving engineering design problems. Arab. J. Sci. Eng. 2020, 45, 3091–3109.
10. Penghui, L.; Ewees, A.A.; Beyaztas, B.H.; Qi, C.; Salih, S.Q.; Al-Ansari, N.; Bhagat, S.K.; Yaseen, Z.M.; Singh, V.P. Metaheuristic optimization algorithms hybridized with artificial intelligence model for soil temperature prediction: Novel model. IEEE Access 2020, 8, 51884–51904.
11. Peres, F.; Castelli, M. Combinatorial optimization problems and metaheuristics: Review, challenges, design, and development. Appl. Sci. 2021, 11, 6449.
12. Zounemat-Kermani, M.; Kisi, O.; Piri, J.; Mahdavi-Meymand, A. Assessment of artificial intelligence–based models and metaheuristic algorithms in modeling evaporation. J. Hydrol. Eng. 2019, 24, 04019033.
13. Talbi, E.G. Combining metaheuristics with mathematical programming, constraint programming and machine learning. Ann. Oper. Res. 2016, 240, 171–215.
14. Pertseva, M.; Gao, B.; Neumeier, D.; Yermanos, A.; Reddy, S.T. Applications of machine and deep learning in adaptive immunity. Annu. Rev. Chem. Biomol. Eng. 2021, 12, 39–62.
15. Fogel, L.J.; Owens, A.J.; Walsh, M.J. Artificial Intelligence Through Simulated Evolution; Wiley: New York, NY, USA, 1966.
16. Rechenberg, I. Evolutionsstrategie: Optimierung Technischer Systeme Nach Prinzipien der Biologischen Evolution; Frommann-Holzboog: Stuttgart, Germany, 1973.
17. Holland, J. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Application to Biology, Control and Artificial Intelligence; University of Michigan Press: Ann Arbor, MI, USA, 1975.
18. Civicioglu, P. Transforming Geocentric Cartesian Coordinates to Geodetic Coordinates by Using Differential Search Algorithm. Comput. Geosci. 2012, 46, 229–247.
19. Civicioglu, P. Backtracking Search Optimization Algorithm for Numerical Optimization Problems. Appl. Math. Comput. 2013, 219, 8121–8144.
20. Salimi, H. Stochastic Fractal Search: A Powerful Metaheuristic Algorithm. Knowl.-Based Syst. 2015, 75, 1–18.
21. Dhivyaprabha, T.T.; Subashini, P.; Krishnaveni, M. Synergistic Fibroblast Optimization: A Novel Nature-inspired Computational Algorithm. Front. Inf. Technol. Electron. Eng. 2018, 19, 815–833.
22. Mühlenbein, H.; Paaß, G. From Recombination of Genes to the Estimation of Distributions I. Binary Parameters. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 1996; pp. 178–187.
23. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for Global Optimization Over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359.
24. Ryan, C.; Collins, J.; Neill, M.O. Grammatical Evolution: Evolving Programs for an Arbitrary Language. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 1998; pp. 83–96.
25. Ferreira, C. Gene Expression Programming in Problem Solving. In Soft Computing and Industry; Springer: London, UK, 2002; pp. 635–653.
26. Sánchez-Zas, C.; Larriva-Novo, X.; Villagrá, V.A.; Rodrigo, M.S.; Moreno, J.I. Design and Evaluation of Unsupervised Machine Learning Models for Anomaly Detection in Streaming Cybersecurity Logs. Mathematics 2022, 10, 4043.
27. Alrowais, F.; Althahabi, S.; Alotaibi, S.S.; Mohamed, A.; Hamza, M.A.; Marzouk, R. Automated Machine Learning Enabled Cybersecurity Threat Detection in Internet of Things Environment. Comput. Syst. Sci. Eng. 2023, 45, 687–700.
28. Goyal, D.; Sheth, F.; Mathur, P.; Gupta, A.K. Discrete mathematical models for enhancing cybersecurity: A mathematical and statistical analysis of machine learning approaches in phishing attack detection. J. Discret. Math. Sci. Cryptogr. 2024, 27, 569–599.
29. Rizwanullah, M.; Mengash, H.A.; Alamgeer, M.; Tarmissi, K.; Aziz, A.S.A.; Abdelmageed, A.A.; Alsaid, M.I.; Eldesouki, M.I. Modelling of Metaheuristics with Machine Learning-Enabled Cybersecurity in Unmanned Aerial Vehicles. Sustainability 2022, 14, 16741.
30. Seyed, C.; Kebe, M.; Arby, M.E.M.E.; Mahmoud, E.B.M.; Seyidi, C.M.M. Cybersecurity Mechanism for Automatic Detection of IoT Intrusions Using Machine Learning. J. Comput. Sci. 2024, 20, 44–51.
31. Alluhaibi, R. Quantum Machine Learning for Advanced Threat Detection in Cybersecurity. Int. J. Saf. Secur. Eng. 2024, 14, 875–883.
32. Zhukabayeva, T.; Pervez, A.; Mardenov, Y.; Othman, M.; Karabayev, N.; Ahmad, Z. A Traffic Analysis and Node Categorization-Aware Machine Learning-Integrated Framework for Cybersecurity Intrusion Detection and Prevention of WSNs in Smart Grids. IEEE Access 2024, 12, 91715–91733.
33. Jayanthi, E.; Ramesh, T.; Kharat, R.S.; Veeramanickam, M.; Bharathiraja, N.; Venkatesan, R.; Marappan, R. Cybersecurity enhancement to detect credit card frauds in health care using new machine learning strategies. Soft Comput. 2023, 27, 7555–7565.
34. Khadidos, A.O.; AlKubaisy, Z.M.; Khadidos, A.O.; Alyoubi, K.H.; Alshareef, A.M.; Ragab, M. Binary Hunter–Prey Optimization with Machine Learning—Based Cybersecurity Solution on Internet of Things Environment. Sensors 2023, 23, 7207.
35. Dutta, A.K.; Qureshi, B.; Albagory, Y.; Alsanea, M.; Al Faraj, M.; Sait, A.R.W. Optimal Weighted Extreme Learning Machine for Cybersecurity Fake News Classification. Comput. Syst. Sci. Eng. 2023, 44, 2395–2409.
36. Pham, B.T.; Nguyen, M.D.; Bui, K.T.T.; Prakash, I.; Chapi, K.; Bui, D.T. A novel artificial intelligence approach based on Multi-layer Perceptron Neural Network and Biogeography-based Optimization for predicting coefficient of consolidation of soil. Catena 2019, 173, 302–311.
37. Riedmiller, M.; Lernen, A. Multi layer perceptron. In Machine Learning Lab Special Lecture; University of Freiburg: Freiburg im Breisgau, Germany, 2014; Volume 24.
38. Tolstikhin, I.O.; Houlsby, N.; Kolesnikov, A.; Beyer, L.; Zhai, X.; Unterthiner, T.; Yung, J.; Steiner, A.; Keysers, D.; Uszkoreit, J.; et al. MLP-Mixer: An all-MLP architecture for vision. Adv. Neural Inf. Process. Syst. 2021, 34, 24261–24272.
39. Fakhouri, H.N.; Awaysheh, F.M.; Alawadi, S.; Alkhalaileh, M.; Hamad, F. Four vector intelligent metaheuristic for data optimization. Computing 2024, 106, 2321–2359.
40. Lengellé, R.; Denoeux, T. Training MLPs layer by layer using an objective function for internal representations. Neural Netw. 1996, 9, 83–97.
41. Teoh, T.; Chiew, G.; Franco, E.J.; Ng, P.; Benjamin, M.; Goh, Y. Anomaly detection in cyber security attacks on networks using MLP deep learning. In Proceedings of the 2018 International Conference on Smart Computing and Electronic Enterprise (ICSCEE), Shah Alam, Malaysia, 11–12 July 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–5.
42. Zhang, W.; He, M.; Mak, M.W. Application of MLP and RBF Networks to cloud detection. In Proceedings of the 2001 International Symposium on Intelligent Multimedia, Video and Speech Processing, ISIMP 2001 (IEEE Cat. No. 01EX489), Hong Kong, China, 4 May 2001; IEEE: Piscataway, NJ, USA, 2001; pp. 60–63.
43. Kemmerer, R.A. Cybersecurity. In Proceedings of the 25th International Conference on Software Engineering, Portland, OR, USA, 3–10 May 2003; IEEE: Piscataway, NJ, USA, 2003; pp. 705–715.
44. Ben-Asher, N.; Gonzalez, C. Effects of cyber security knowledge on attack detection. Comput. Hum. Behav. 2015, 48, 51–61.
45. Rajasekharaiah, K.; Dule, C.S.; Sudarshan, E. Cyber security challenges and its emerging trends on latest technologies. In Proceedings of the IOP Conference Series: Materials Science and Engineering, Warangal, India, 9–10 October 2020; IOP Publishing: London, UK, 2020; Volume 981, p. 022062.
46. Pramanik, S.; Samanta, D.; Vinay, M.; Guha, A. Cyber Security and Network Security; John Wiley & Sons: Hoboken, NJ, USA, 2022.
47. Von Solms, R.; Van Niekerk, J. From information security to cyber security. Comput. Secur. 2013, 38, 97–102.
48. Pinkas, B. Cryptographic techniques for privacy-preserving data mining. ACM SIGKDD Explor. Newsl. 2002, 4, 12–19.
49. Talukder, S. Tools and techniques for malware detection and analysis. arXiv 2020, arXiv:2002.06819.
50. Bhatt, S.; Manadhata, P.K.; Zomlot, L. The operational role of security information and event management systems. IEEE Secur. Priv. 2014, 12, 35–41.
51. Marinova-Boncheva, V. A short survey of intrusion detection systems. Probl. Eng. Cybern. Robot. 2007, 58, 23–30.
52. Tao, F.; Akhtar, M.S.; Jiayuan, Z. The future of artificial intelligence in cybersecurity: A comprehensive survey. EAI Endorsed Trans. Creat. Technol. 2021, 8, e3.
53. Herzog, J.; Brest, J.; Bošković, B. Performance Analysis of Selected Evolutionary Algorithms on Different Benchmark Functions. In Proceedings of the International Conference on Bioinspired Optimization Methods and Their Applications, Maribor, Slovenia, 17–18 November 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 170–184.
54. Luo, W.; Lin, X.; Li, C.; Yang, S.; Shi, Y. Benchmark functions for CEC 2022 competition on seeking multiple optima in dynamic environments. arXiv 2022, arXiv:2201.00523.
55. Alzoubi, S.; Abualigah, L.; Sharaf, M.; Daoud, M.; Khodadadi, N.; Jia, H. Synergistic Swarm Optimization Algorithm; Tech Science Press: Henderson, NV, USA, 2024.
56. Falahah, I.; Al-Baik, O.; Alomari, S.; Bektemyssova, G.; Gochhait, S.; Leonova, I.; Malik, O.; Werner, F.; Dehghani, M. Frilled lizard optimization: A novel nature-inspired metaheuristic algorithm for solving optimization problems. Preprints 2024, 2024030898.
57. Zhang, F.; Wu, S.; Cen, P. The past, present and future of the pangolin in mainland China. Glob. Ecol. Conserv. 2022, 33, e01995.
58. Jahn, J. Vector Optimization; Springer: Berlin/Heidelberg, Germany, 2009.
59. Fakhouri, H.; Hamad, F.; Alawamrah, A. Success history intelligent optimizer. J. Supercomput. 2022, 78, 6461–6502.
60. Mohapatra, S.; Mohapatra, P. American zebra optimization algorithm for global optimization problems. Sci. Rep. 2023, 13, 5211.
61. Bairwa, A.; Joshi, S.; Singh, D. Dingo optimizer: A nature-inspired metaheuristic approach for engineering problems. Math. Probl. Eng. 2021, 2021, 2571863.
62. Jia, H.; Peng, X.; Lang, C. Remora optimization algorithm. Expert Syst. Appl. 2021, 185, 115665.
63. Abualigah, L.; Yousri, D.; Elaziz, M.A.; Ewees, A.; Al-Qaness, M.; Gandomi, A. Aquila optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250.
64. Khishe, M.; Mosavi, M. Chimp optimization algorithm. Expert Syst. Appl. 2020, 149, 113338.
65. Singh, A.; Sharma, A.; Rajput, S.; Mondal, A.; Bose, A.; Ram, M. Parameter extraction of solar module using the sooty tern optimization algorithm. Electronics 2022, 11, 564.
66. Dhiman, G.; Kumar, V. Seagull optimization algorithm: Theory and its applications for large-scale industrial engineering problems. Knowl.-Based Syst. 2019, 165, 169–196.
67. Wu, D.; Rao, H.; Wen, C.; Jia, H.; Liu, Q.; Abualigah, L. Modified sand cat swarm optimization algorithm for solving constrained engineering optimization problems. Mathematics 2022, 10, 4350.
68. Mirjalili, S.; Mirjalili, S.; Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513.
69. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
70. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133.
71. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249.
72. Mirjalili, S.; Mirjalili, S.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
73. Wang, D.; Tan, D.; Liu, L. Particle swarm optimization algorithm: An overview. Soft Comput. 2018, 22, 387–408.
74. Nikolaev, A.; Jacobson, S. Simulated annealing. In Handbook of Metaheuristics; Springer: Berlin/Heidelberg, Germany, 2010; pp. 1–39.
75. Mathew, T. Genetic Algorithm. Available online: https://datajobs.com/data-science-repo/Genetic-Algorithm-Guide-[Tom-Mathew].pdf (accessed on 17 July 2024).
Figure 1. Analysis of convergence curves for CEC2022 benchmark functions (F1–F6).
Figure 2. Convergence curve analysis over CEC2022 benchmark functions (F7–F12).
Figure 3. Search history curve analysis over CEC2022 benchmark functions (F1–F6).
Figure 4. Search history curve analysis over CEC2022 benchmark functions (F7–F12).
Figure 5. Trajectory curve analysis over CEC2022 benchmark functions (F1–F6).
Figure 6. Trajectory curve analysis over CEC2022 benchmark functions (F7–F12).
Figure 7. Average fitness curve analysis over CEC2022 benchmark functions (F1–F6).
Figure 8. Average fitness curve analysis over CEC2022 benchmark functions (F7–F12).
Figure 9. Sensitivity analysis heatmaps over CEC2022 benchmark functions (F1–F6).
Figure 10. Sensitivity analysis heatmaps over CEC2022 benchmark functions (F7–F12).
Figure 11. Box plots of CEC2022 benchmark functions (F1–F6).
Figure 12. Box plots of CEC2022 benchmark functions (F7–F12).
Figure 13. Histogram analysis over CEC2022 (F1–F6).
Figure 14. Histogram analysis over CEC2022 (F7–F12).
Table 1. Benchmark functions utilized in the 2022 IEEE Congress on Evolutionary Computation (CEC2022) [54].
No. | Category | Function Description | f_min
F1 | Unimodal | Shifted and Fully Rotated Zakharov Function | 300
F2 | Multimodal | Shifted and Fully Rotated Rosenbrock's Function | 400
F3 | Multimodal | Shifted and Fully Rotated Expanded Schaffer's f6 Function | 600
F4 | Multimodal | Shifted and Fully Rotated Non-Continuous Rastrigin's Function | 800
F5 | Multimodal | Shifted and Fully Rotated Levy Function | 900
F6 | Hybrid | Hybrid Function 1 (N = 3) | 1800
F7 | Hybrid | Hybrid Function 2 (N = 6) | 2000
F8 | Hybrid | Hybrid Function 3 (N = 5) | 2200
F9 | Composition | Composition function 1 (N = 5) | 2300
F10 | Composition | Composition function 2 (N = 4) | 2400
F11 | Composition | Composition function 3 (N = 5) | 2600
F12 | Composition | Composition function 4 (N = 6) | 2700
Table 2. Summary of compared optimization algorithms.
Acronym | Algorithm Name | Year
SSOA | Synergistic Swarm Optimization Algorithm [55] | 2024
FLO | Frilled Lizard Optimization [56] | 2024
CPO | Chinese Pangolin Optimizer [57] | 2024
FVIM | Four Vector Optimizer [58] | 2024
SHIO | Success History Intelligent Optimizer [59] | 2022
ZOA | Zebra Optimization Algorithm [60] | 2022
DOA | Dingo Optimization Algorithm [61] | 2021
ROA | Remora Optimization Algorithm [62] | 2021
AO | Aquila Optimizer [63] | 2021
CHIMP | Chimp Optimization Algorithm [64] | 2020
STOA | Sooty Tern Optimization Algorithm [65] | 2019
SOA | Seagull Optimization Algorithm [66] | 2019
SCSO | Sand Cat Optimization Algorithm [67] | 2023
MVO | Multi-Verse Optimizer [68] | 2016
WOA | Whale Optimization Algorithm [69] | 2016
SCA | Sine Cosine Algorithm [70] | 2016
MFO | Moth–Flame Optimization Algorithm [71] | 2015
GWO | Grey Wolf Optimizer [72] | 2014
PSO | Particle Swarm Optimization [73] | 1995
SA | Simulated Annealing Algorithm [74] | 1983
GA | Genetic Algorithm [75] | 1960
Table 3. Comparison of test results of Cybersecurity Optimizer with different algorithms on CEC2022 (F1–F12), FES = 1000, Dim = 10.
Function | Measure | Cybersecurity | Chimp | CPO | ROA | CMAES | GWO | WOA | MFO | WSO | SHIO | ZOA | DOA | HHO
F1Mean3.00 × 1032.00 × 1035.21 × 1037.15 × 1032.97 × 1043.06 × 1031.65 × 1049.51 × 1027.97 × 1025.64 × 1038.03 × 1021.17 × 1043.10 × 102
Std3.97 × 10−31.37 × 1034.87 × 1032.13 × 1031.49 × 1042.42 × 1039.26 × 1031.46 × 1035.93 × 1022.96 × 1036.47 × 1027.57 × 1037.01 × 100
SEM1.78 × 10−36.13 × 1022.18 × 1039.52 × 1026.66 × 1031.08 × 1034.14 × 1036.51 × 1022.65 × 1021.32 × 1032.90 × 1023.39 × 1033.13 × 100
Rank | 1 | 6 | 8 | 10 | 13 | 7 | 12 | 5 | 3 | 9 | 4 | 11 | 2
F2Mean4.18 × 1025.22 × 1024.35 × 1028.51 × 1027.52 × 1024.41 × 1024.84 × 1024.26 × 1024.01 × 1024.53 × 1024.42 × 1024.72 × 1024.20 × 102
Std3.00 × 1018.47 × 1013.55 × 1012.10 × 1021.02 × 1022.72 × 1011.04 × 1022.81 × 1011.88 × 1003.66 × 1013.95 × 1013.14 × 1012.96 × 101
SEM1.34 × 1013.79 × 1011.59 × 1019.39 × 1014.57 × 1011.22 × 1014.63 × 1011.26 × 1018.41 × 10−11.64 × 1011.77 × 1011.40 × 1011.32 × 101
Rank | 2 | 11 | 5 | 13 | 12 | 6 | 10 | 4 | 1 | 8 | 7 | 9 | 3
F3Mean6.00 × 1026.31 × 1026.48 × 1026.43 × 1026.21 × 1026.02 × 1026.43 × 1026.00 × 1026.01 × 1026.05 × 1026.21 × 1026.31 × 1026.41 × 102
Std2.07 × 10−26.16 × 1008.44 × 1001.03 × 1012.87 × 1011.89 × 1001.86 × 1012.94 × 10−11.90 × 1005.17 × 1003.78 × 1008.64 × 1001.17 × 101
SEM9.27 × 10−32.76 × 1003.78 × 1004.60 × 1001.28 × 1018.47 × 10−18.32 × 1001.32 × 10−18.51 × 10−12.31 × 1001.69 × 1003.87 × 1005.22 × 100
Rank | 1 | 9 | 13 | 11 | 7 | 4 | 12 | 2 | 3 | 5 | 6 | 8 | 10
F4Mean8.14 × 1028.40 × 1028.31 × 1028.41 × 1028.22 × 1028.15 × 1028.34 × 1028.29 × 1028.14 × 1028.18 × 1028.51 × 1028.37 × 1028.26 × 102
Std4.54 × 1007.51 × 1002.68 × 1006.63 × 1009.18 × 1004.76 × 1001.09 × 1011.03 × 1014.84 × 1007.92 × 1005.51 × 1008.92 × 1006.80 × 100
SEM2.03 × 1003.36 × 1001.20 × 1002.96 × 1004.10 × 1002.13 × 1004.88 × 1004.62 × 1002.17 × 1003.54 × 1002.47 × 1003.99 × 1003.04 × 100
Rank | 1 | 11 | 8 | 12 | 5 | 3 | 9 | 7 | 2 | 4 | 13 | 10 | 6
F5Mean9.12 × 1021.21 × 1031.61 × 1031.49 × 1039.00 × 1029.16 × 1021.61 × 1031.10 × 1039.28 × 1029.46 × 1021.02 × 1031.16 × 1031.45 × 103
Std2.29 × 1012.34 × 1021.83 × 1022.42 × 1020.00 × 1009.12 × 1003.70 × 1022.74 × 1023.07 × 1013.71 × 1014.48 × 1011.57 × 1022.03 × 102
SEM1.02 × 1011.05 × 1028.19 × 1011.08 × 1020.00 × 1004.08 × 1001.65 × 1021.23 × 1021.37 × 1011.66 × 1012.00 × 1017.01 × 1019.06 × 101
Rank | 2 | 9 | 13 | 11 | 1 | 3 | 12 | 7 | 4 | 5 | 6 | 8 | 10
F6Mean1.81 × 1031.71 × 1063.16 × 1031.87 × 1061.23 × 1075.90 × 1033.05 × 1032.76 × 1031.83 × 1033.63 × 1032.93 × 1033.35 × 1036.31 × 103
Std4.10 × 1001.52 × 1061.73 × 1032.94 × 1061.09 × 1073.07 × 1038.88 × 1021.39 × 1031.14 × 1011.39 × 1031.29 × 1032.33 × 1035.63 × 103
SEM1.83 × 1006.80 × 1057.73 × 1021.32 × 1064.87 × 1061.37 × 1033.97 × 1026.23 × 1025.08 × 1006.20 × 1025.76 × 1021.04 × 1032.52 × 103
Rank | 1 | 11 | 6 | 12 | 13 | 9 | 5 | 3 | 2 | 8 | 4 | 7 | 10
F7Mean2.01 × 1032.06 × 1032.10 × 1032.10 × 1032.10 × 1032.03 × 1032.09 × 1032.02 × 1032.02 × 1032.06 × 1032.06 × 1032.07 × 1032.05 × 103
Std9.29 × 1001.07 × 1017.56 × 1014.38 × 1015.74 × 1011.04 × 1012.03 × 1015.82 × 1001.01 × 1013.36 × 1011.51 × 1011.66 × 1011.58 × 101
SEM4.15 × 1004.80 × 1003.38 × 1011.96 × 1012.57 × 1014.65 × 1009.07 × 1002.60 × 1004.52 × 1001.50 × 1016.76 × 1007.44 × 1007.08 × 100
Rank | 1 | 7 | 11 | 12 | 13 | 4 | 10 | 3 | 2 | 8 | 6 | 9 | 5
F8Mean2.22 × 1032.33 × 1032.25 × 1032.27 × 1032.25 × 1032.22 × 1032.24 × 1032.22 × 1032.22 × 1032.23 × 1032.23 × 1032.23 × 1032.23 × 103
Std7.44 × 1005.62 × 1015.36 × 1017.38 × 1001.36 × 1018.92 × 1009.10 × 1002.24 × 1009.27 × 1004.27 × 1002.72 × 1004.50 × 1002.90 × 100
SEM3.33 × 1002.51 × 1012.40 × 1013.30 × 1006.09 × 1003.99 × 1004.07 × 1001.00 × 1004.15 × 1001.91 × 1001.22 × 1002.01 × 1001.29 × 100
Rank | 1 | 13 | 11 | 12 | 10 | 2 | 9 | 4 | 3 | 8 | 5 | 6 | 7
F9Mean2.53 × 1032.57 × 1032.60 × 1032.71 × 1032.56 × 1032.58 × 1032.57 × 1032.53 × 1032.53 × 1032.59 × 1032.64 × 1032.64 × 1032.61 × 103
Std3.62 × 10−111.41 × 1015.22 × 1012.55 × 1014.33 × 1012.79 × 1015.79 × 1011.63 × 1002.11 × 1003.22 × 1014.24 × 1014.93 × 1015.04 × 101
SEM1.62 × 10−116.32 × 1002.33 × 1011.14 × 1011.94 × 1011.25 × 1012.59 × 1017.29 × 10−19.44 × 10−11.44 × 1011.89 × 1012.20 × 1012.26 × 101
Rank | 1 | 5 | 9 | 13 | 4 | 7 | 6 | 2 | 3 | 8 | 12 | 11 | 10
F10Mean2.50 × 1032.78 × 1032.81 × 1032.71 × 1033.22 × 1032.59 × 1032.58 × 1032.53 × 1032.50 × 1032.82 × 1032.53 × 1032.56 × 1032.72 × 103
Std7.73 × 10−26.18 × 1024.74 × 1021.62 × 1028.52 × 1025.20 × 1017.84 × 1015.89 × 1018.44 × 10−13.95 × 1026.55 × 1017.73 × 1012.31 × 102
SEM3.45 × 10−22.76 × 1022.12 × 1027.24 × 1013.81 × 1022.32 × 1013.51 × 1012.63 × 1013.77 × 10−11.77 × 1022.93 × 1013.46 × 1011.03 × 102
Rank | 1 | 10 | 11 | 8 | 13 | 7 | 6 | 3 | 2 | 12 | 4 | 5 | 9
F11Mean2.63 × 1033.44 × 1032.69 × 1033.40 × 1033.00 × 1032.88 × 1032.80 × 1032.76 × 1032.70 × 1033.00 × 1032.91 × 1033.01 × 1032.87 × 103
Std6.73 × 1012.79 × 1021.39 × 1025.23 × 1022.78 × 1021.68 × 1021.32 × 1021.07 × 1021.44 × 1024.57 × 1022.46 × 1023.42 × 1021.97 × 102
SEM3.01 × 1011.25 × 1026.21 × 1012.34 × 1021.24 × 1027.52 × 1015.92 × 1014.79 × 1016.43 × 1012.04 × 1021.10 × 1021.53 × 1028.79 × 101
Rank | 1 | 13 | 2 | 12 | 9 | 7 | 5 | 4 | 3 | 10 | 8 | 11 | 6
F12Mean2.86 × 1032.88 × 1032.89 × 1032.95 × 1032.88 × 1032.86 × 1032.88 × 1032.86 × 1032.87 × 1032.88 × 1032.94 × 1032.89 × 1032.92 × 103
Std1.52 × 1001.51 × 1011.65 × 1018.53 × 1016.04 × 1009.36 × 10−11.16 × 1019.49 × 10−16.15 × 1002.17 × 1015.83 × 1012.06 × 1014.16 × 101
SEM6.80 × 10−16.76 × 1007.36 × 1003.82 × 1012.70 × 1004.19 × 10−15.18 × 1004.24 × 10−12.75 × 1009.70 × 1002.61 × 1019.21 × 1001.86 × 101
Rank | 1 | 6 | 9 | 13 | 7 | 2 | 5 | 3 | 4 | 8 | 12 | 10 | 11
Table 4. Comparison of test results of Cybersecurity Optimizer with different algorithms on CEC2022 (F1–F12), FES = 2000, Dim = 20.
Fun. | Statistics | Cybersecurity | Chimp | CPO | ROA | CMAES | GWO | WOA | MFO | WSO | SHIO | ZOA | DOA | HHO
F1Mean1.22 × 1032.21 × 1042.56 × 1044.58 × 1048.07 × 1041.36 × 1041.53 × 1043.98 × 1046.69 × 1031.50 × 1048.36 × 1034.40 × 1041.34 × 103
Std6.72 × 1023.46 × 1031.79 × 1047.78 × 1031.69 × 1043.69 × 1034.46 × 1033.99 × 1042.43 × 1034.13 × 1032.87 × 1031.17 × 1045.99 × 102
SEM3.00 × 1021.55 × 1037.99 × 1033.48 × 1037.54 × 1031.65 × 1031.99 × 1031.79 × 1041.09 × 1031.85 × 1031.28 × 1035.23 × 1032.68 × 102
Rank | 1 | 8 | 9 | 12 | 13 | 5 | 7 | 10 | 3 | 6 | 4 | 11 | 2
F2Mean4.53 × 1028.47 × 1024.56 × 1022.08 × 1031.12 × 1035.03 × 1025.40 × 1024.77 × 1025.19 × 1025.71 × 1026.14 × 1021.57 × 1034.70 × 102
Std8.41 × 1001.96 × 1021.05 × 1014.76 × 1022.76 × 1023.02 × 1015.35 × 1013.03 × 1015.47 × 1015.18 × 1018.88 × 1016.60 × 1028.56 × 100
SEM3.76 × 1008.78 × 1014.68 × 1002.13 × 1021.23 × 1021.35 × 1012.39 × 1011.36 × 1012.45 × 1012.32 × 1013.97 × 1012.95 × 1023.83 × 100
Rank | 1 | 10 | 2 | 13 | 11 | 5 | 7 | 4 | 6 | 8 | 9 | 12 | 3
F3Mean6.00 × 1026.59 × 1026.60 × 1026.66 × 1026.40 × 1026.07 × 1026.65 × 1026.21 × 1026.13 × 1026.18 × 1026.41 × 1026.67 × 1026.54 × 102
Std3.88 × 10−21.46 × 1011.52 × 1019.60 × 1003.73 × 1013.29 × 1004.66 × 1001.14 × 1015.77 × 1007.28 × 1001.23 × 1011.37 × 1019.63 × 100
SEM1.73 × 10−26.54 × 1006.79 × 1004.30 × 1001.67 × 1011.47 × 1002.08 × 1005.09 × 1002.58 × 1003.26 × 1005.52 × 1006.14 × 1004.31 × 100
Rank19101262115347138
F4Mean8.45 × 1029.30 × 1028.96 × 1029.68 × 1029.09 × 1028.48 × 1029.21 × 1028.87 × 1028.52 × 1028.47 × 1028.54 × 1029.39 × 1028.80 × 102
Std8.75 × 1001.16 × 1015.78 × 1002.42 × 1013.89 × 1011.06 × 1015.15 × 1012.12 × 1012.00 × 1011.18 × 1011.62 × 1012.85 × 1011.26 × 101
SEM3.91 × 1005.19 × 1002.58 × 1001.08 × 1011.74 × 1014.75 × 1002.30 × 1019.50 × 1008.95 × 1005.28 × 1007.24 × 1001.27 × 1015.62 × 100
Rank11181393107425126
F5Mean1.38 × 1032.54 × 1032.58 × 1033.74 × 1039.00 × 1021.50 × 1034.47 × 1033.41 × 1032.17 × 1031.79 × 1031.78 × 1032.75 × 1032.84 × 103
Std3.68 × 1023.97 × 1026.51 × 1026.48 × 1020.00 × 1004.08 × 1021.85 × 1031.06 × 1039.36 × 1025.67 × 1022.25 × 1026.32 × 1022.80 × 102
SEM1.65 × 1021.77 × 1022.91 × 1022.90 × 1020.00 × 1001.83 × 1028.25 × 1024.74 × 1024.19 × 1022.53 × 1021.01 × 1022.83 × 1021.25 × 102
Rank27812131311654910
F6Mean2.17 × 1037.95 × 1066.04 × 1039.55 × 1081.90 × 1081.19 × 1061.31 × 1045.81 × 1065.18 × 1032.66 × 1053.96 × 1062.09 × 1087.10 × 104
Std4.87 × 1024.73 × 1067.63 × 1037.06 × 1081.51 × 1082.51 × 1066.90 × 1031.30 × 1074.89 × 1033.56 × 1058.40 × 1062.38 × 1084.16 × 104
SEM2.18 × 1022.12 × 1063.41 × 1033.16 × 1086.73 × 1071.12 × 1063.09 × 1035.80 × 1062.19 × 1031.59 × 1053.76 × 1061.06 × 1081.86 × 104
Rank11031311749268125
F7Mean2.04 × 1032.17 × 1032.34 × 1032.18 × 1032.22 × 1032.10 × 1032.20 × 1032.13 × 1032.07 × 1032.14 × 1032.12 × 1032.17 × 1032.24 × 103
Std9.70 × 1001.71 × 1011.62 × 1024.33 × 1015.91 × 1014.89 × 1016.53 × 1015.25 × 1013.33 × 1011.80 × 1012.36 × 1014.17 × 1011.09 × 102
SEM4.34 × 1007.66 × 1007.23 × 1011.94 × 1012.64 × 1012.19 × 1012.92 × 1012.35 × 1011.49 × 1018.03 × 1001.05 × 1011.87 × 1014.85 × 101
Rank17139113105264812
F8Mean2.22 × 1032.38 × 1032.49 × 1032.31 × 1032.42 × 1032.23 × 1032.26 × 1032.34 × 1032.22 × 1032.26 × 1032.29 × 1032.40 × 1032.24 × 103
Std5.53 × 10−12.32 × 1012.47 × 1025.96 × 1017.11 × 1012.23 × 1003.15 × 1012.56 × 1011.13 × 1006.11 × 1018.26 × 1012.41 × 1021.06 × 101
SEM2.47 × 10−11.04 × 1011.11 × 1022.66 × 1013.18 × 1019.96 × 10−11.41 × 1011.14 × 1015.03 × 10−12.73 × 1013.69 × 1011.08 × 1024.73 × 100
Rank21013812359167114
F9Mean2.48 × 1032.55 × 1032.49 × 1032.99 × 1032.76 × 1032.51 × 1032.53 × 1032.52 × 1032.49 × 1032.58 × 1032.67 × 1032.76 × 1032.49 × 103
Std5.52 × 10−65.92 × 1003.65 × 1003.26 × 1021.21 × 1022.79 × 1011.82 × 1012.36 × 1019.85 × 1003.25 × 1011.82 × 1028.78 × 1017.60 × 100
SEM2.47 × 10−62.65 × 1001.63 × 1001.46 × 1025.40 × 1011.25 × 1018.13 × 1001.05 × 1014.41 × 1001.45 × 1018.15 × 1013.93 × 1013.40 × 100
Rank18213115763910124
F10Mean2.50 × 1036.26 × 1035.41 × 1036.64 × 1036.77 × 1033.59 × 1034.43 × 1034.01 × 1032.74 × 1032.86 × 1033.23 × 1034.96 × 1033.38 × 103
Std9.43 × 10−21.41 × 1022.25 × 1033.64 × 1022.69 × 1027.58 × 1021.38 × 1038.45 × 1023.25 × 1026.20 × 1029.45 × 1022.14 × 1033.50 × 102
SEM4.22 × 10−26.29 × 1011.00 × 1031.63 × 1021.20 × 1023.39 × 1026.16 × 1023.78 × 1021.45 × 1022.77 × 1024.23 × 1029.56 × 1021.56 × 102
Rank11110121368723495
F11Mean2.95 × 1035.48 × 1032.96 × 1036.62 × 1036.56 × 1033.64 × 1033.18 × 1034.45 × 1032.97 × 1034.07 × 1034.91 × 1037.68 × 1033.05 × 103
Std6.35 × 1015.64 × 1025.45 × 1011.07 × 1031.79 × 1033.35 × 1023.33 × 1027.93 × 1022.22 × 1027.75 × 1029.59 × 1026.84 × 1021.24 × 102
SEM2.84 × 1012.52 × 1022.44 × 1014.79 × 1028.00 × 1021.50 × 1021.49 × 1023.55 × 1029.92 × 1013.47 × 1024.29 × 1023.06 × 1025.56 × 101
Rank11021211658379134
F12Mean2.94 × 1033.12 × 1033.26 × 1033.44 × 1033.06 × 1032.97 × 1033.06 × 1032.95 × 1033.14 × 1033.01 × 1033.31 × 1033.21 × 1033.08 × 103
Std3.10 × 1005.62 × 1011.59 × 1021.70 × 1021.74 × 1012.15 × 1014.05 × 1015.95 × 1008.26 × 1012.05 × 1015.25 × 1011.86 × 1021.13 × 102
SEM1.39 × 1002.51 × 1017.11 × 1017.62 × 1017.76 × 1009.60 × 1001.81 × 1012.66 × 1003.70 × 1019.15 × 1002.35 × 1018.32 × 1015.06 × 101
Rank18111353629412107
Table 5. Comparison of test results of Cybersecurity Optimizer with different algorithms on CEC2017 (F1–F15), FES = 1000, Dim = 10.
Fun. | Statistics | Cybersecurity | Chimp | CPO | ROA | WOA | MFO | WSO | SHIO | ZOA | DOA | HHO
F1 | Mean | 1.41 × 10^7 | 2.49 × 10^10 | 6.48 × 10^5 | 4.49 × 10^10 | 1.52 × 10^9 | 1.21 × 10^10 | 5.09 × 10^9 | 7.26 × 10^9 | 1.12 × 10^10 | 4.67 × 10^10 | 2.99 × 10^7
| Std | 2.13 × 10^7 | 5.31 × 10^9 | 5.91 × 10^5 | 4.23 × 10^9 | 9.10 × 10^8 | 8.70 × 10^9 | 3.00 × 10^9 | 4.53 × 10^9 | 2.55 × 10^9 | 8.41 × 10^9 | 8.83 × 10^6
| SEM | 8.71 × 10^6 | 2.17 × 10^9 | 2.41 × 10^5 | 1.73 × 10^9 | 3.71 × 10^8 | 3.55 × 10^9 | 1.23 × 10^9 | 1.85 × 10^9 | 1.04 × 10^9 | 3.43 × 10^9 | 3.60 × 10^6
| Rank | 2 | 9 | 1 | 10 | 4 | 8 | 5 | 6 | 7 | 11 | 3
F2 | Mean | 6.17 × 10^18 | 1.42 × 10^34 | 2.48 × 10^24 | 2.18 × 10^46 | 5.03 × 10^32 | 8.39 × 10^36 | 4.65 × 10^27 | 1.10 × 10^33 | 9.65 × 10^33 | 6.76 × 10^48 | 2.25 × 10^24
| Std | 1.37 × 10^19 | 1.68 × 10^34 | 4.61 × 10^24 | 5.33 × 10^46 | 1.20 × 10^33 | 1.82 × 10^37 | 1.12 × 10^28 | 2.53 × 10^33 | 2.10 × 10^34 | 1.66 × 10^49 | 5.14 × 10^24
| SEM | 5.59 × 10^18 | 6.86 × 10^33 | 1.88 × 10^24 | 2.18 × 10^46 | 4.89 × 10^32 | 7.42 × 10^36 | 4.59 × 10^27 | 1.03 × 10^33 | 8.58 × 10^33 | 6.76 × 10^48 | 2.10 × 10^24
| Rank | 1 | 8 | 3 | 10 | 5 | 9 | 4 | 6 | 7 | 11 | 2
F3 | Mean | 2.22 × 10^4 | 1.08 × 10^5 | 5.00 × 10^4 | 8.55 × 10^4 | 2.63 × 10^5 | 1.19 × 10^5 | 4.19 × 10^4 | 5.80 × 10^4 | 4.15 × 10^4 | 7.49 × 10^4 | 4.43 × 10^4
| Std | 5.00 × 10^3 | 1.50 × 10^4 | 7.94 × 10^3 | 6.63 × 10^3 | 4.34 × 10^4 | 3.22 × 10^4 | 1.88 × 10^4 | 1.45 × 10^4 | 3.68 × 10^3 | 1.17 × 10^4 | 9.66 × 10^3
| SEM | 2.04 × 10^3 | 6.14 × 10^3 | 3.24 × 10^3 | 2.71 × 10^3 | 1.77 × 10^4 | 1.31 × 10^4 | 7.69 × 10^3 | 5.94 × 10^3 | 1.50 × 10^3 | 4.76 × 10^3 | 3.94 × 10^3
| Rank | 1 | 9 | 5 | 8 | 11 | 10 | 3 | 6 | 2 | 7 | 4
F4 | Mean | 5.33 × 10^2 | 6.73 × 10^3 | 5.08 × 10^2 | 1.33 × 10^4 | 8.58 × 10^2 | 1.05 × 10^3 | 1.18 × 10^3 | 7.36 × 10^2 | 1.64 × 10^3 | 1.34 × 10^4 | 5.71 × 10^2
| Std | 3.33 × 10^1 | 3.86 × 10^3 | 5.59 × 10^1 | 4.11 × 10^3 | 2.68 × 10^2 | 3.93 × 10^2 | 8.78 × 10^2 | 1.20 × 10^2 | 1.15 × 10^3 | 6.70 × 10^3 | 2.64 × 10^1
| SEM | 1.36 × 10^1 | 1.58 × 10^3 | 2.28 × 10^1 | 1.68 × 10^3 | 1.10 × 10^2 | 1.60 × 10^2 | 3.58 × 10^2 | 4.91 × 10^1 | 4.70 × 10^2 | 2.74 × 10^3 | 1.08 × 10^1
| Rank | 2 | 9 | 1 | 10 | 5 | 6 | 7 | 4 | 8 | 11 | 3
F5 | Mean | 6.36 × 10^2 | 8.26 × 10^2 | 7.93 × 10^2 | 9.29 × 10^2 | 8.11 × 10^2 | 7.23 × 10^2 | 6.64 × 10^2 | 6.85 × 10^2 | 7.36 × 10^2 | 8.89 × 10^2 | 7.45 × 10^2
| Std | 1.46 × 10^1 | 4.37 × 10^1 | 1.71 × 10^1 | 1.19 × 10^1 | 3.10 × 10^1 | 4.10 × 10^1 | 4.60 × 10^1 | 2.52 × 10^1 | 2.72 × 10^1 | 2.97 × 10^1 | 2.55 × 10^1
| SEM | 5.96 × 10^0 | 1.79 × 10^1 | 6.98 × 10^0 | 4.87 × 10^0 | 1.26 × 10^1 | 1.67 × 10^1 | 1.88 × 10^1 | 1.03 × 10^1 | 1.11 × 10^1 | 1.21 × 10^1 | 1.04 × 10^1
| Rank | 1 | 9 | 7 | 11 | 8 | 4 | 2 | 3 | 5 | 10 | 6
F6 | Mean | 6.07 × 10^2 | 6.78 × 10^2 | 6.68 × 10^2 | 6.81 × 10^2 | 6.73 × 10^2 | 6.37 × 10^2 | 6.43 × 10^2 | 6.40 × 10^2 | 6.55 × 10^2 | 6.72 × 10^2 | 6.67 × 10^2
| Std | 1.63 × 10^0 | 9.01 × 10^0 | 3.73 × 10^0 | 1.21 × 10^1 | 1.26 × 10^1 | 8.23 × 10^0 | 1.11 × 10^1 | 1.09 × 10^1 | 8.03 × 10^0 | 1.05 × 10^1 | 4.87 × 10^0
| SEM | 6.64 × 10^−1 | 3.68 × 10^0 | 1.52 × 10^0 | 4.92 × 10^0 | 5.15 × 10^0 | 3.36 × 10^0 | 4.53 × 10^0 | 4.46 × 10^0 | 3.28 × 10^0 | 4.27 × 10^0 | 1.99 × 10^0
| Rank | 1 | 10 | 7 | 11 | 9 | 2 | 4 | 3 | 5 | 8 | 6
F7 | Mean | 9.55 × 10^2 | 1.26 × 10^3 | 1.20 × 10^3 | 1.43 × 10^3 | 1.31 × 10^3 | 1.01 × 10^3 | 1.06 × 10^3 | 9.98 × 10^2 | 1.16 × 10^3 | 1.36 × 10^3 | 1.30 × 10^3
| Std | 2.82 × 10^1 | 7.03 × 10^1 | 9.20 × 10^1 | 8.06 × 10^1 | 5.00 × 10^1 | 1.35 × 10^2 | 6.33 × 10^1 | 1.12 × 10^2 | 5.44 × 10^1 | 2.44 × 10^1 | 5.49 × 10^1
| SEM | 1.15 × 10^1 | 2.87 × 10^1 | 3.76 × 10^1 | 3.29 × 10^1 | 2.04 × 10^1 | 5.49 × 10^1 | 2.58 × 10^1 | 4.57 × 10^1 | 2.22 × 10^1 | 9.94 × 10^0 | 2.24 × 10^1
| Rank | 1 | 7 | 6 | 11 | 9 | 3 | 4 | 2 | 5 | 10 | 8
F8 | Mean | 9.21 × 10^2 | 1.09 × 10^3 | 9.91 × 10^2 | 1.14 × 10^3 | 1.03 × 10^3 | 1.03 × 10^3 | 9.18 × 10^2 | 9.37 × 10^2 | 9.58 × 10^2 | 1.13 × 10^3 | 9.89 × 10^2
| Std | 2.12 × 10^1 | 1.99 × 10^1 | 1.59 × 10^1 | 3.24 × 10^1 | 5.29 × 10^1 | 8.13 × 10^1 | 2.86 × 10^1 | 2.73 × 10^1 | 2.43 × 10^1 | 3.70 × 10^1 | 2.50 × 10^1
| SEM | 8.66 × 10^0 | 8.14 × 10^0 | 6.50 × 10^0 | 1.32 × 10^1 | 2.16 × 10^1 | 3.32 × 10^1 | 1.17 × 10^1 | 1.11 × 10^1 | 9.91 × 10^0 | 1.51 × 10^1 | 1.02 × 10^1
| Rank | 2 | 9 | 6 | 11 | 8 | 7 | 1 | 3 | 4 | 10 | 5
F9 | Mean | 2.33 × 10^3 | 8.63 × 10^3 | 1.06 × 10^4 | 1.11 × 10^4 | 9.74 × 10^3 | 7.56 × 10^3 | 8.03 × 10^3 | 5.25 × 10^3 | 4.17 × 10^3 | 8.29 × 10^3 | 8.70 × 10^3
| Std | 6.43 × 10^2 | 1.72 × 10^3 | 3.67 × 10^3 | 1.95 × 10^3 | 2.26 × 10^3 | 8.83 × 10^2 | 1.22 × 10^3 | 1.95 × 10^3 | 5.68 × 10^2 | 2.94 × 10^3 | 6.03 × 10^2
| SEM | 2.63 × 10^2 | 7.02 × 10^2 | 1.50 × 10^3 | 7.98 × 10^2 | 9.24 × 10^2 | 3.60 × 10^2 | 4.99 × 10^2 | 7.98 × 10^2 | 2.32 × 10^2 | 1.20 × 10^3 | 2.46 × 10^2
| Rank | 1 | 7 | 10 | 11 | 9 | 4 | 5 | 3 | 2 | 6 | 8
F10 | Mean | 5.37 × 10^3 | 8.81 × 10^3 | 5.53 × 10^3 | 8.78 × 10^3 | 6.81 × 10^3 | 5.42 × 10^3 | 5.38 × 10^3 | 5.56 × 10^3 | 5.08 × 10^3 | 8.97 × 10^3 | 5.85 × 10^3
| Std | 3.53 × 10^2 | 2.00 × 10^2 | 2.36 × 10^2 | 3.73 × 10^2 | 6.64 × 10^2 | 1.03 × 10^3 | 1.52 × 10^3 | 1.44 × 10^3 | 5.89 × 10^2 | 3.08 × 10^2 | 6.10 × 10^2
| SEM | 1.44 × 10^2 | 8.18 × 10^1 | 9.65 × 10^1 | 1.52 × 10^2 | 2.71 × 10^2 | 4.22 × 10^2 | 6.19 × 10^2 | 5.88 × 10^2 | 2.40 × 10^2 | 1.26 × 10^2 | 2.49 × 10^2
| Rank | 2 | 10 | 5 | 9 | 8 | 4 | 3 | 6 | 1 | 11 | 7
F11 | Mean | 1.27 × 10^3 | 4.60 × 10^3 | 1.51 × 10^3 | 1.02 × 10^4 | 5.95 × 10^3 | 2.88 × 10^3 | 1.44 × 10^3 | 3.74 × 10^3 | 2.31 × 10^3 | 7.67 × 10^3 | 1.30 × 10^3
| Std | 3.89 × 10^1 | 4.29 × 10^2 | 7.64 × 10^1 | 3.22 × 10^3 | 2.80 × 10^3 | 1.41 × 10^3 | 1.57 × 10^2 | 6.04 × 10^2 | 8.25 × 10^2 | 4.70 × 10^3 | 2.94 × 10^1
| SEM | 1.59 × 10^1 | 1.75 × 10^2 | 3.12 × 10^1 | 1.31 × 10^3 | 1.14 × 10^3 | 5.75 × 10^2 | 6.40 × 10^1 | 2.47 × 10^2 | 3.37 × 10^2 | 1.92 × 10^3 | 1.20 × 10^1
| Rank | 1 | 8 | 4 | 11 | 9 | 6 | 3 | 7 | 5 | 10 | 2
F12 | Mean | 2.30 × 10^6 | 6.69 × 10^9 | 1.11 × 10^7 | 6.79 × 10^9 | 2.90 × 10^8 | 2.49 × 10^8 | 7.62 × 10^7 | 8.78 × 10^8 | 2.48 × 10^8 | 1.06 × 10^10 | 2.54 × 10^7
| Std | 1.46 × 10^6 | 3.33 × 10^9 | 6.07 × 10^6 | 2.25 × 10^9 | 2.63 × 10^8 | 3.65 × 10^8 | 9.93 × 10^7 | 8.35 × 10^8 | 1.33 × 10^8 | 5.39 × 10^9 | 1.87 × 10^7
| SEM | 5.97 × 10^5 | 1.36 × 10^9 | 2.48 × 10^6 | 9.19 × 10^8 | 1.07 × 10^8 | 1.49 × 10^8 | 4.06 × 10^7 | 3.41 × 10^8 | 5.43 × 10^7 | 2.20 × 10^9 | 7.62 × 10^6
| Rank | 1 | 9 | 2 | 10 | 7 | 6 | 4 | 8 | 5 | 11 | 3
F13 | Mean | 2.88 × 10^4 | 4.55 × 10^9 | 7.41 × 10^4 | 2.87 × 10^9 | 1.02 × 10^6 | 8.49 × 10^5 | 1.06 × 10^6 | 1.95 × 10^7 | 1.94 × 10^7 | 3.15 × 10^9 | 6.86 × 10^5
| Std | 1.87 × 10^4 | 2.61 × 10^9 | 5.04 × 10^4 | 2.84 × 10^9 | 1.38 × 10^6 | 1.82 × 10^6 | 2.53 × 10^6 | 4.76 × 10^7 | 4.73 × 10^7 | 4.45 × 10^9 | 2.12 × 10^5
| SEM | 7.62 × 10^3 | 1.07 × 10^9 | 2.06 × 10^4 | 1.16 × 10^9 | 5.62 × 10^5 | 7.45 × 10^5 | 1.03 × 10^6 | 1.94 × 10^7 | 1.93 × 10^7 | 1.82 × 10^9 | 8.66 × 10^4
| Rank | 1 | 11 | 2 | 9 | 5 | 4 | 6 | 8 | 7 | 10 | 3
F14 | Mean | 1.59 × 10^3 | 7.59 × 10^5 | 6.39 × 10^5 | 3.58 × 10^6 | 1.76 × 10^6 | 1.15 × 10^5 | 6.36 × 10^3 | 2.02 × 10^5 | 4.11 × 10^5 | 6.55 × 10^4 | 4.70 × 10^5
| Std | 7.28 × 10^1 | 8.99 × 10^5 | 5.42 × 10^5 | 4.11 × 10^6 | 2.46 × 10^6 | 5.79 × 10^4 | 1.14 × 10^4 | 3.20 × 10^5 | 5.05 × 10^5 | 7.27 × 10^4 | 4.16 × 10^5
| SEM | 2.97 × 10^1 | 3.67 × 10^5 | 2.21 × 10^5 | 1.68 × 10^6 | 1.01 × 10^6 | 2.36 × 10^4 | 4.64 × 10^3 | 1.30 × 10^5 | 2.06 × 10^5 | 2.97 × 10^4 | 1.70 × 10^5
| Rank | 1 | 9 | 8 | 11 | 10 | 4 | 2 | 5 | 6 | 3 | 7
F15 | Mean | 3.84 × 10^3 | 2.10 × 10^7 | 1.44 × 10^4 | 9.50 × 10^8 | 1.91 × 10^6 | 6.51 × 10^4 | 2.70 × 10^3 | 2.13 × 10^6 | 4.20 × 10^6 | 7.73 × 10^5 | 1.40 × 10^5
| Std | 4.12 × 10^3 | 2.41 × 10^7 | 1.68 × 10^3 | 1.34 × 10^9 | 2.22 × 10^6 | 4.93 × 10^4 | 9.54 × 10^2 | 2.61 × 10^6 | 6.57 × 10^6 | 1.47 × 10^6 | 8.54 × 10^4
| SEM | 1.68 × 10^3 | 9.82 × 10^6 | 6.85 × 10^2 | 5.48 × 10^8 | 9.06 × 10^5 | 2.01 × 10^4 | 3.89 × 10^2 | 1.07 × 10^6 | 2.68 × 10^6 | 6.01 × 10^5 | 3.49 × 10^4
| Rank | 2 | 10 | 3 | 11 | 7 | 4 | 1 | 8 | 9 | 6 | 5
Table 6. Comparison of test results of Cybersecurity Optimizer with different algorithms on CEC2017 (F16–F30), FES = 1000, Dim = 10.
Fun. | Statistics | Cybersecurity | Chimp | CPO | ROA | WOA | MFO | WSO | SHIO | ZOA | DOA | HHO
F16 | Mean | 2.49 × 10^3 | 4.00 × 10^3 | 3.72 × 10^3 | 4.70 × 10^3 | 3.83 × 10^3 | 2.88 × 10^3 | 2.46 × 10^3 | 2.82 × 10^3 | 3.19 × 10^3 | 4.46 × 10^3 | 3.36 × 10^3
| Std | 1.81 × 10^2 | 2.80 × 10^2 | 5.26 × 10^2 | 2.66 × 10^2 | 6.33 × 10^2 | 6.77 × 10^2 | 3.37 × 10^2 | 4.90 × 10^2 | 2.57 × 10^2 | 1.33 × 10^3 | 2.34 × 10^2
| SEM | 7.39 × 10^1 | 1.14 × 10^2 | 2.15 × 10^2 | 1.08 × 10^2 | 2.59 × 10^2 | 2.76 × 10^2 | 1.38 × 10^2 | 2.00 × 10^2 | 1.05 × 10^2 | 5.43 × 10^2 | 9.54 × 10^1
| Rank | 2 | 9 | 7 | 11 | 8 | 4 | 1 | 3 | 5 | 10 | 6
F17 | Mean | 2.02 × 10^3 | 2.96 × 10^3 | 2.96 × 10^3 | 4.41 × 10^3 | 2.71 × 10^3 | 2.54 × 10^3 | 1.88 × 10^3 | 2.37 × 10^3 | 2.37 × 10^3 | 3.18 × 10^3 | 2.50 × 10^3
| Std | 1.27 × 10^2 | 1.21 × 10^2 | 4.47 × 10^2 | 3.32 × 10^3 | 1.16 × 10^2 | 1.82 × 10^2 | 7.53 × 10^1 | 2.32 × 10^2 | 3.29 × 10^2 | 5.82 × 10^2 | 2.96 × 10^2
| SEM | 5.19 × 10^1 | 4.96 × 10^1 | 1.83 × 10^2 | 1.36 × 10^3 | 4.75 × 10^1 | 7.44 × 10^1 | 3.07 × 10^1 | 9.47 × 10^1 | 1.34 × 10^2 | 2.38 × 10^2 | 1.21 × 10^2
| Rank | 2 | 8 | 9 | 11 | 7 | 6 | 1 | 4 | 3 | 10 | 5
F18 | Mean | 3.12 × 10^4 | 7.34 × 10^6 | 1.96 × 10^6 | 4.54 × 10^7 | 2.13 × 10^7 | 8.77 × 10^6 | 5.27 × 10^4 | 2.36 × 10^6 | 6.50 × 10^5 | 1.73 × 10^6 | 1.91 × 10^6
| Std | 1.74 × 10^4 | 4.80 × 10^6 | 1.49 × 10^6 | 3.84 × 10^7 | 2.30 × 10^7 | 1.91 × 10^7 | 3.17 × 10^4 | 2.08 × 10^6 | 9.80 × 10^5 | 2.32 × 10^6 | 2.34 × 10^6
| SEM | 7.09 × 10^3 | 1.96 × 10^6 | 6.10 × 10^5 | 1.57 × 10^7 | 9.39 × 10^6 | 7.79 × 10^6 | 1.29 × 10^4 | 8.49 × 10^5 | 4.00 × 10^5 | 9.48 × 10^5 | 9.54 × 10^5
| Rank | 1 | 8 | 6 | 11 | 10 | 9 | 2 | 7 | 3 | 4 | 5
F19 | Mean | 2.23 × 10^3 | 5.82 × 10^7 | 9.46 × 10^5 | 5.89 × 10^8 | 1.10 × 10^7 | 3.86 × 10^5 | 1.31 × 10^4 | 1.62 × 10^6 | 1.97 × 10^6 | 1.42 × 10^8 | 7.88 × 10^5
| Std | 2.07 × 10^2 | 4.54 × 10^7 | 9.96 × 10^5 | 5.58 × 10^8 | 9.59 × 10^6 | 8.58 × 10^5 | 6.45 × 10^3 | 1.11 × 10^6 | 2.61 × 10^6 | 2.26 × 10^8 | 5.39 × 10^5
| SEM | 8.44 × 10^1 | 1.85 × 10^7 | 4.07 × 10^5 | 2.28 × 10^8 | 3.92 × 10^6 | 3.50 × 10^5 | 2.63 × 10^3 | 4.54 × 10^5 | 1.06 × 10^6 | 9.23 × 10^7 | 2.20 × 10^5
| Rank | 1 | 9 | 5 | 11 | 8 | 3 | 2 | 6 | 7 | 10 | 4
F20 | Mean | 2.34 × 10^3 | 3.15 × 10^3 | 3.38 × 10^3 | 3.05 × 10^3 | 2.87 × 10^3 | 2.76 × 10^3 | 2.27 × 10^3 | 2.66 × 10^3 | 2.43 × 10^3 | 2.93 × 10^3 | 2.68 × 10^3
| Std | 1.49 × 10^2 | 2.58 × 10^2 | 3.12 × 10^2 | 2.48 × 10^2 | 1.99 × 10^2 | 3.65 × 10^2 | 8.06 × 10^1 | 1.71 × 10^2 | 1.18 × 10^2 | 4.17 × 10^2 | 1.90 × 10^2
| SEM | 6.09 × 10^1 | 1.05 × 10^2 | 1.27 × 10^2 | 1.01 × 10^2 | 8.13 × 10^1 | 1.49 × 10^2 | 3.29 × 10^1 | 6.97 × 10^1 | 4.81 × 10^1 | 1.70 × 10^2 | 7.75 × 10^1
| Rank | 2 | 10 | 11 | 9 | 7 | 6 | 1 | 4 | 3 | 8 | 5
F21 | Mean | 2.43 × 10^3 | 2.62 × 10^3 | 2.59 × 10^3 | 2.72 × 10^3 | 2.63 × 10^3 | 2.48 × 10^3 | 2.48 × 10^3 | 2.46 × 10^3 | 2.51 × 10^3 | 2.70 × 10^3 | 2.59 × 10^3
| Std | 1.24 × 10^1 | 2.65 × 10^1 | 5.27 × 10^1 | 1.55 × 10^1 | 6.98 × 10^1 | 4.40 × 10^1 | 4.10 × 10^1 | 1.97 × 10^1 | 4.44 × 10^1 | 3.96 × 10^1 | 5.19 × 10^1
| SEM | 5.07 × 10^0 | 1.08 × 10^1 | 2.15 × 10^1 | 6.34 × 10^0 | 2.85 × 10^1 | 1.80 × 10^1 | 1.67 × 10^1 | 8.05 × 10^0 | 1.81 × 10^1 | 1.62 × 10^1 | 2.12 × 10^1
| Rank | 1 | 8 | 6 | 11 | 9 | 4 | 3 | 2 | 5 | 10 | 7
F22 | Mean | 2.40 × 10^3 | 9.93 × 10^3 | 6.59 × 10^3 | 9.97 × 10^3 | 7.94 × 10^3 | 6.48 × 10^3 | 4.03 × 10^3 | 7.07 × 10^3 | 5.20 × 10^3 | 8.82 × 10^3 | 7.45 × 10^3
| Std | 8.94 × 10^1 | 3.05 × 10^2 | 2.13 × 10^3 | 6.60 × 10^2 | 1.14 × 10^3 | 2.02 × 10^3 | 1.73 × 10^3 | 1.01 × 10^3 | 9.89 × 10^2 | 1.24 × 10^3 | 8.47 × 10^2
| SEM | 3.65 × 10^1 | 1.24 × 10^2 | 8.70 × 10^2 | 2.70 × 10^2 | 4.65 × 10^2 | 8.24 × 10^2 | 7.06 × 10^2 | 4.14 × 10^2 | 4.04 × 10^2 | 5.08 × 10^2 | 3.46 × 10^2
| Rank | 1 | 10 | 5 | 11 | 8 | 4 | 2 | 6 | 3 | 9 | 7
F23 | Mean | 2.80 × 10^3 | 3.11 × 10^3 | 3.24 × 10^3 | 3.44 × 10^3 | 3.08 × 10^3 | 2.83 × 10^3 | 3.06 × 10^3 | 2.88 × 10^3 | 3.21 × 10^3 | 3.40 × 10^3 | 3.29 × 10^3
| Std | 1.64 × 10^1 | 4.39 × 10^1 | 6.37 × 10^1 | 2.09 × 10^2 | 6.92 × 10^1 | 3.10 × 10^1 | 1.06 × 10^2 | 3.25 × 10^1 | 1.54 × 10^2 | 1.76 × 10^2 | 1.29 × 10^2
| SEM | 6.72 × 10^0 | 1.79 × 10^1 | 2.60 × 10^1 | 8.53 × 10^1 | 2.83 × 10^1 | 1.27 × 10^1 | 4.34 × 10^1 | 1.33 × 10^1 | 6.30 × 10^1 | 7.18 × 10^1 | 5.25 × 10^1
| Rank | 1 | 6 | 8 | 11 | 5 | 2 | 4 | 3 | 7 | 10 | 9
F24 | Mean | 2.98 × 10^3 | 3.28 × 10^3 | 3.68 × 10^3 | 3.56 × 10^3 | 3.27 × 10^3 | 2.97 × 10^3 | 3.35 × 10^3 | 3.10 × 10^3 | 3.47 × 10^3 | 3.67 × 10^3 | 3.41 × 10^3
| Std | 7.20 × 10^1 | 4.94 × 10^1 | 2.55 × 10^2 | 9.25 × 10^1 | 6.47 × 10^1 | 2.53 × 10^1 | 1.44 × 10^2 | 1.58 × 10^1 | 8.41 × 10^1 | 2.09 × 10^2 | 1.40 × 10^2
| SEM | 2.94 × 10^1 | 2.01 × 10^1 | 1.04 × 10^2 | 3.78 × 10^1 | 2.64 × 10^1 | 1.03 × 10^1 | 5.90 × 10^1 | 6.47 × 10^0 | 3.43 × 10^1 | 8.53 × 10^1 | 5.72 × 10^1
| Rank | 2 | 5 | 11 | 9 | 4 | 1 | 6 | 3 | 8 | 10 | 7
F25 | Mean | 2.93 × 10^3 | 4.53 × 10^3 | 2.94 × 10^3 | 4.63 × 10^3 | 3.10 × 10^3 | 3.38 × 10^3 | 3.11 × 10^3 | 3.24 × 10^3 | 3.13 × 10^3 | 5.07 × 10^3 | 2.95 × 10^3
| Std | 3.07 × 10^1 | 2.94 × 10^2 | 1.39 × 10^1 | 1.22 × 10^2 | 5.64 × 10^1 | 4.47 × 10^2 | 1.08 × 10^2 | 2.10 × 10^2 | 6.84 × 10^1 | 7.31 × 10^2 | 4.64 × 10^1
| SEM | 1.25 × 10^1 | 1.20 × 10^2 | 5.66 × 10^0 | 4.97 × 10^1 | 2.30 × 10^1 | 1.83 × 10^2 | 4.40 × 10^1 | 8.57 × 10^1 | 2.79 × 10^1 | 2.98 × 10^2 | 1.89 × 10^1
| Rank | 1 | 9 | 2 | 10 | 4 | 8 | 5 | 7 | 6 | 11 | 3
F26 | Mean | 4.49 × 10^3 | 7.20 × 10^3 | 7.83 × 10^3 | 1.13 × 10^4 | 7.24 × 10^3 | 5.93 × 10^3 | 6.10 × 10^3 | 6.54 × 10^3 | 8.74 × 10^3 | 1.02 × 10^4 | 8.05 × 10^3
| Std | 1.15 × 10^3 | 4.89 × 10^2 | 2.00 × 10^3 | 1.19 × 10^3 | 1.73 × 10^3 | 4.75 × 10^2 | 1.55 × 10^3 | 8.11 × 10^2 | 9.24 × 10^2 | 1.05 × 10^3 | 8.07 × 10^2
| SEM | 4.70 × 10^2 | 2.00 × 10^2 | 8.18 × 10^2 | 4.84 × 10^2 | 7.07 × 10^2 | 1.94 × 10^2 | 6.33 × 10^2 | 3.31 × 10^2 | 3.77 × 10^2 | 4.29 × 10^2 | 3.29 × 10^2
| Rank | 1 | 5 | 7 | 11 | 6 | 2 | 3 | 4 | 9 | 10 | 8
F27 | Mean | 3.22 × 10^3 | 3.67 × 10^3 | 3.61 × 10^3 | 4.09 × 10^3 | 3.46 × 10^3 | 3.24 × 10^3 | 3.46 × 10^3 | 3.38 × 10^3 | 3.95 × 10^3 | 3.70 × 10^3 | 3.51 × 10^3
| Std | 9.14 × 10^0 | 1.21 × 10^2 | 2.30 × 10^2 | 4.03 × 10^2 | 1.31 × 10^2 | 1.62 × 10^1 | 1.09 × 10^2 | 9.99 × 10^1 | 2.29 × 10^2 | 4.07 × 10^2 | 2.29 × 10^2
| SEM | 3.73 × 10^0 | 4.93 × 10^1 | 9.37 × 10^1 | 1.64 × 10^2 | 5.33 × 10^1 | 6.62 × 10^0 | 4.46 × 10^1 | 4.08 × 10^1 | 9.37 × 10^1 | 1.66 × 10^2 | 9.34 × 10^1
| Rank | 1 | 8 | 7 | 11 | 4 | 2 | 5 | 3 | 10 | 9 | 6
F28 | Mean | 3.29 × 10^3 | 5.05 × 10^3 | 3.29 × 10^3 | 6.86 × 10^3 | 3.54 × 10^3 | 4.35 × 10^3 | 3.51 × 10^3 | 4.08 × 10^3 | 3.91 × 10^3 | 6.48 × 10^3 | 3.36 × 10^3
| Std | 6.82 × 10^1 | 5.08 × 10^2 | 1.89 × 10^1 | 1.10 × 10^3 | 1.41 × 10^2 | 6.49 × 10^2 | 1.46 × 10^2 | 5.33 × 10^2 | 2.69 × 10^2 | 8.76 × 10^2 | 4.46 × 10^1
| SEM | 2.79 × 10^1 | 2.07 × 10^2 | 7.72 × 10^0 | 4.49 × 10^2 | 5.75 × 10^1 | 2.65 × 10^2 | 5.98 × 10^1 | 2.17 × 10^2 | 1.10 × 10^2 | 3.58 × 10^2 | 1.82 × 10^1
| Rank | 1 | 9 | 2 | 11 | 5 | 8 | 4 | 7 | 6 | 10 | 3
F29 | Mean | 3.69 × 10^3 | 4.83 × 10^3 | 5.24 × 10^3 | 5.68 × 10^3 | 5.10 × 10^3 | 4.04 × 10^3 | 3.93 × 10^3 | 4.18 × 10^3 | 5.19 × 10^3 | 5.44 × 10^3 | 4.77 × 10^3
| Std | 8.86 × 10^1 | 3.75 × 10^2 | 4.05 × 10^2 | 8.31 × 10^2 | 2.72 × 10^2 | 2.92 × 10^2 | 1.78 × 10^2 | 2.55 × 10^2 | 4.18 × 10^2 | 6.42 × 10^2 | 3.29 × 10^2
| SEM | 3.62 × 10^1 | 1.53 × 10^2 | 1.65 × 10^2 | 3.39 × 10^2 | 1.11 × 10^2 | 1.19 × 10^2 | 7.26 × 10^1 | 1.04 × 10^2 | 1.71 × 10^2 | 2.62 × 10^2 | 1.34 × 10^2
| Rank | 1 | 6 | 9 | 11 | 7 | 3 | 2 | 4 | 8 | 10 | 5
F30 | Mean | 4.67 × 10^4 | 7.56 × 10^7 | 6.96 × 10^6 | 6.08 × 10^8 | 4.62 × 10^7 | 4.30 × 10^5 | 7.39 × 10^5 | 1.55 × 10^7 | 1.25 × 10^7 | 1.62 × 10^8 | 5.93 × 10^6
| Std | 4.30 × 10^4 | 2.27 × 10^7 | 6.47 × 10^6 | 6.85 × 10^8 | 6.12 × 10^7 | 5.70 × 10^5 | 1.05 × 10^6 | 1.20 × 10^7 | 8.58 × 10^6 | 1.72 × 10^8 | 3.42 × 10^6
| SEM | 1.76 × 10^4 | 9.27 × 10^6 | 2.64 × 10^6 | 2.80 × 10^8 | 2.50 × 10^7 | 2.33 × 10^5 | 4.28 × 10^5 | 4.89 × 10^6 | 3.50 × 10^6 | 7.03 × 10^7 | 1.40 × 10^6
| Rank | 1 | 9 | 5 | 11 | 8 | 2 | 3 | 7 | 6 | 10 | 4
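Tables 7–12 move from benchmark functions to the MLP-training task. When an evolutionary optimizer trains an MLP, each candidate solution is typically a flat vector holding all of the network's weights and biases, and its fitness is the network's error on the training data. The following is an illustrative sketch of that standard encoding, not the paper's exact implementation; the single hidden layer, its size, and the tanh/sigmoid activations are assumptions.

```python
import numpy as np

# Hypothetical weight-vector encoding for evolutionary MLP training:
# unpack one flat candidate vector into layer matrices, then score it.
def decode(vector, n_in, n_hidden, n_out):
    i = 0
    W1 = vector[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = vector[i:i + n_hidden]; i += n_hidden
    W2 = vector[i:i + n_hidden * n_out].reshape(n_hidden, n_out); i += n_hidden * n_out
    b2 = vector[i:i + n_out]
    return W1, b1, W2, b2

def fitness(vector, X, y_onehot, n_in, n_hidden, n_out):
    W1, b1, W2, b2 = decode(vector, n_in, n_hidden, n_out)
    H = np.tanh(X @ W1 + b1)                       # hidden layer activation
    out = 1.0 / (1.0 + np.exp(-(H @ W2 + b2)))     # sigmoid output layer
    return np.mean((out - y_onehot) ** 2)          # MSE: the quantity minimized
```

Under this encoding, the optimizer only ever manipulates real-valued vectors, so any of the compared algorithms can serve as the trainer without modification.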
Table 7. Specifications of classification datasets.
Classification Dataset | No. of Attributes | No. of Training Samples | No. of Test Samples | No. of Classes
NSL-KDD | 41 | 125,973 | 22,544 | 5
CICIDS2017 | 78 | 1,048,575 | 262,144 | 15
UNSW-NB15 | 49 | 175,341 | 82,332 | 10
Bot-IoT | 35 | 3,000,000 | 1,000,000 | 5
CSE-CIC-IDS2018 | 80 | 1,048,576 | 262,144 | 15
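The columns of Table 7 map directly onto the network dimensions: the number of attributes fixes the MLP's input width and the number of classes fixes its output width. A hedged loading sketch follows; the file names and the fully numeric CSV encoding are assumptions, but the shapes correspond to Table 7.

```python
import numpy as np

# Illustrative sketch only: assumes each dataset is available as a numerically
# encoded CSV whose last column is the class label. For NSL-KDD this yields
# X_train of shape (125973, 41) and 5 output classes.
def load_split(train_csv: str, test_csv: str):
    train = np.loadtxt(train_csv, delimiter=",")
    test = np.loadtxt(test_csv, delimiter=",")
    X_train, y_train = train[:, :-1], train[:, -1].astype(int)
    X_test, y_test = test[:, :-1], test[:, -1].astype(int)
    mu = X_train.mean(axis=0)
    sigma = X_train.std(axis=0) + 1e-8
    # Standardize with training statistics only, to avoid test-set leakage.
    return (X_train - mu) / sigma, y_train, (X_test - mu) / sigma, y_test
```

Under this reading, NSL-KDD gives a 41-input, 5-output network, while CSE-CIC-IDS2018 gives 80 inputs and 15 outputs.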
Table 8. Experimental results for the NSL-KDD dataset.
Algorithm | MSE (Average) | MSE (STD) | Classification Rate
Cybersecurity-MLP | 0.010410 | 0.004200 | 98.5%
Chimp-MLP | 0.015300 | 0.005600 | 97.1%
CPO-MLP | 0.012500 | 0.004800 | 97.8%
ROA-MLP | 0.014200 | 0.005200 | 97.3%
WOA-MLP | 0.016400 | 0.005700 | 96.8%
MFO-MLP | 0.011800 | 0.004500 | 98.1%
WSO-MLP | 0.013900 | 0.005100 | 97.5%
SHIO-MLP | 0.017200 | 0.006000 | 96.4%
ZOA-MLP | 0.018500 | 0.006200 | 96.0%
DOA-MLP | 0.019300 | 0.006500 | 95.8%
HHO-MLP | 0.020400 | 0.006800 | 95.5%
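The two quantities reported in Tables 8–12 can be read as follows: MSE is measured between the network's outputs and the one-hot encoded labels, and the classification rate is the fraction of test samples whose arg-max output matches the true class. A minimal sketch, assuming the trained MLP is exposed as a forward function returning per-class scores:

```python
import numpy as np

# Sketch of the two reported metrics for one trained MLP.
def evaluate(mlp_forward, X_test, y_test, n_classes):
    outputs = mlp_forward(X_test)                      # shape: (n_samples, n_classes)
    onehot = np.eye(n_classes)[y_test]                 # one-hot encoded labels
    mse = np.mean((outputs - onehot) ** 2)             # MSE against targets
    rate = np.mean(outputs.argmax(axis=1) == y_test)   # classification rate
    return mse, rate
```

The MSE (Average) and MSE (STD) columns would then come from repeating the training over several independent runs and aggregating the per-run metric values; the run count is not stated in the tables.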
Table 9. Experimental results for the CICIDS2017 dataset.
Algorithm | MSE (Average) | MSE (STD) | Classification Rate
Cybersecurity-MLP | 0.007200 | 0.001500 | 97.8%
Chimp-MLP | 0.008100 | 0.001800 | 97.3%
CPO-MLP | 0.007500 | 0.001600 | 97.5%
ROA-MLP | 0.008400 | 0.001900 | 95.1%
WOA-MLP | 0.009000 | 0.002000 | 96.8%
MFO-MLP | 0.007400 | 0.001700 | 97.4%
WSO-MLP | 0.008200 | 0.001800 | 97.2%
SHIO-MLP | 0.009300 | 0.002100 | 96.7%
ZOA-MLP | 0.010000 | 0.002300 | 96.4%
DOA-MLP | 0.010500 | 0.002400 | 96.1%
HHO-MLP | 0.011000 | 0.002500 | 95.8%
Table 10. Experimental results for the UNSW-NB15 dataset.
Algorithm | MSE (Average) | MSE (STD) | Classification Rate
Cybersecurity-MLP | 0.007100 | 0.002500 | 98.2%
Chimp-MLP | 0.009200 | 0.003100 | 97.5%
CPO-MLP | 0.008100 | 0.002800 | 97.9%
ROA-MLP | 0.008800 | 0.003000 | 97.6%
WOA-MLP | 0.009700 | 0.003300 | 97.3%
MFO-MLP | 0.007500 | 0.002600 | 98.0%
WSO-MLP | 0.008500 | 0.002900 | 97.7%
SHIO-MLP | 0.010100 | 0.003400 | 97.2%
ZOA-MLP | 0.010600 | 0.003500 | 97.0%
DOA-MLP | 0.011200 | 0.003700 | 96.8%
HHO-MLP | 0.011700 | 0.003800 | 96.6%
Table 11. Experimental results for the Bot-IoT dataset.
Algorithm | MSE (Average) | MSE (STD) | Classification Rate
Cybersecurity-MLP | 0.004100 | 0.001800 | 99.2%
Chimp-MLP | 0.005600 | 0.002200 | 98.8%
CPO-MLP | 0.004800 | 0.002000 | 99.0%
ROA-MLP | 0.005200 | 0.002100 | 98.9%
WOA-MLP | 0.005900 | 0.002300 | 98.7%
MFO-MLP | 0.004400 | 0.001900 | 99.1%
WSO-MLP | 0.005000 | 0.002100 | 98.9%
SHIO-MLP | 0.006300 | 0.002400 | 98.5%
ZOA-MLP | 0.006800 | 0.002500 | 98.4%
DOA-MLP | 0.007400 | 0.002600 | 98.2%
HHO-MLP | 0.007900 | 0.002700 | 98.0%
Table 12. Experimental results for the CSE-CIC-IDS2018 dataset.
Algorithm | MSE (Average) | MSE (STD) | Classification Rate
Cybersecurity-MLP | 0.003200 | 0.002400 | 97.4%
Chimp-MLP | 0.004500 | 0.002800 | 96.0%
CPO-MLP | 0.003800 | 0.002600 | 97.2%
ROA-MLP | 0.004100 | 0.002700 | 96.1%
WOA-MLP | 0.004700 | 0.002900 | 95.9%
MFO-MLP | 0.003400 | 0.002500 | 94.3%
WSO-MLP | 0.004000 | 0.002700 | 95.1%
SHIO-MLP | 0.005100 | 0.003000 | 94.7%
ZOA-MLP | 0.005600 | 0.003100 | 96.6%
DOA-MLP | 0.006200 | 0.003300 | 97.4%
HHO-MLP | 0.006700 | 0.003400 | 96.3%