Electronics
  • Article
  • Open Access
13 May 2025

Performance Analysis of Cloud Computing Task Scheduling Using Metaheuristic Algorithms in DDoS and Normal Environments

Department of Computer Engineering, Konya Technical University, Selçuklu, 42250 Konya, Turkey
Author to whom correspondence should be addressed.
This article belongs to the Section Computer Science & Engineering

Abstract

Cloud computing has emerged as a fundamental pillar of modern technology, enabling large-scale data management, computational efficiency, and operational flexibility. However, critical challenges persist, particularly concerning security and performance. DDoS attacks severely impact cloud infrastructure by degrading system performance and causing service disruptions. These persistent threats raise concerns about cloud system reliability and underscore the necessity for advanced security measures. This study investigates the cloud computing task scheduling problem, recognized as NP-hard, and explores the impact of adversarial conditions such as DDoS attacks on system performance. To address this challenge, metaheuristic algorithms are employed. The research evaluates the effectiveness of traditional approaches, including genetic algorithms (GAs), particle swarm optimization (PSO), and artificial bee colony (ABC), while also introducing a GA–PSO algorithm designed to enhance task scheduling efficiency. The proposed method aims to minimize makespan by optimizing task allocation across virtual machines (VMs) within cloud environments. A comparative analysis of scheduling performance under both normal and DDoS-affected conditions reveals that metaheuristic techniques contribute significantly to system resilience. Furthermore, the GA–PSO algorithm demonstrates notable improvements at specific iteration levels. The findings underscore the potential of advanced scheduling methods to enhance cloud computing sustainability while offering practical solutions to mitigate real-world security threats.

1. Introduction

Cloud computing has emerged as a fundamental component of modern technology, offering scalable solutions for data management, computational power, and operational flexibility across various industries [1]. As organizations migrate their operations and data to cloud environments, the need for efficient and resilient cloud infrastructures has significantly increased. Cloud services enable users to dynamically scale resources, manage large datasets, and optimize performance, transforming how businesses and individuals interact with technology [2]. However, alongside these advantages, significant security and performance challenges persist. One of the most pressing concerns in cloud computing is ensuring system reliability against security threats. Distributed denial-of-service (DDoS) attacks, in particular, have become one of the most prevalent methods of targeting cloud environments. These attacks aim to disrupt network resources and compromise system functionality, often through malware infections such as viruses [3]. By overloading system resources, DDoS attacks can severely degrade performance, cause service disruptions, and, in severe cases, lead to prolonged outages that impact both providers and users [4]. Such interruptions not only threaten the operational stability of cloud systems but also undermine user confidence in these platforms, especially when handling sensitive data and critical computational tasks. Additionally, inefficient task scheduling in virtualized environments exacerbates these issues by increasing task completion time (makespan), ultimately raising costs for cloud consumers. Prolonged makespan further contributes to increased energy consumption and maintenance costs for cloud providers. Given these challenges, the implementation of advanced scheduling mechanisms and security strategies is essential to enhancing cloud system resilience while ensuring optimal performance under adversarial conditions.
Given the critical importance of maintaining uninterrupted cloud services, this study examines task scheduling challenges in cloud computing with a primary focus on security-driven performance optimization. Task scheduling, which involves the organization and management of computational workloads within a cloud environment, plays a pivotal role in ensuring both system efficiency and security [5]. Traditional scheduling approaches often fall short in mitigating the effects of malicious activities, necessitating the adoption of more advanced and adaptive strategies [6,7,8]. In response to these limitations, this study explores the use of metaheuristic algorithms, renowned for their effectiveness in solving complex optimization problems, to develop scheduling mechanisms under simulated DDoS attack conditions.
This research aims to provide a comprehensive analysis of how cloud task scheduling can be optimized to withstand malicious activities. By examining the use of metaheuristic algorithms under adversarial conditions, this study not only offers new insights into resilient cloud scheduling practices but also contributes to the development of proactive security strategies. As discussed by Verma and Kaushal, task scheduling in cloud computing is classified as an NP-hard problem [9,10,11,12,13,14,15,16]. Due to its complexity, metaheuristic approaches are often preferred over deterministic methods for achieving performance-efficient solutions. However, search-based metaheuristic algorithms are prone to becoming trapped in local optima, a challenge exacerbated by increasing task and virtual machine counts, as well as the presence of malware, including DDoS attacks.
To address these challenges, this study leverages well-established metaheuristic techniques, including GAs, PSO, and ABC algorithms, which are commonly employed in job-shop scheduling problems. These algorithms are evaluated in both normal and DDoS-affected cloud environments to determine their effectiveness in task scheduling under varying conditions. Additionally, a hybrid approach is included among the metaheuristic algorithms to assess its relative effectiveness under adversarial conditions.
The findings of this study address a notable gap in the existing literature by offering practical insights into improving the resilience of cloud computing environments against evolving cybersecurity threats. By demonstrating the applicability of hybrid metaheuristic scheduling techniques in hostile settings, this research contributes to the development of robust and adaptable cloud computing frameworks capable of sustaining high performance in the face of security threats.
This study differentiates itself from prior works by evaluating the stability and adaptability of the hybrid GA–PSO algorithm specifically under adversarial conditions caused by DDoS attacks. Furthermore, the integration of a stagnation-based control mechanism within the hybrid model, which has not been explored in previous studies, represents a novel contribution aimed at improving scheduling consistency in disrupted cloud environments. This focus on adversarial environments and the inclusion of stagnation control provide a unique perspective on how hybrid algorithms can be fine-tuned for better performance under real-world cybersecurity threats.
The remainder of this study is structured to provide a comprehensive analysis of cloud task scheduling challenges and possible solutions. Section 2 details the problem and examines the current state of workflow scheduling, with a particular emphasis on existing scheduling algorithms and the challenges posed by infrastructure-as-a-service (IaaS) environments. Section 3 presents the details of the simulation environment and experimental setup, outlining various cloud computing scenarios designed to evaluate scheduling performance under different conditions. Section 4 offers an in-depth performance evaluation of task scheduling, comparing results in both normal and DDoS-affected cloud environments. This section also discusses experimental findings, performance metrics, and their broader implications. Finally, Section 5 concludes the study by summarizing key insights and proposing future research directions aimed at optimizing cloud task scheduling while enhancing security and system resilience.

3. Materials and Methods

In this section, we first introduce the dataset used in the study, followed by a detailed explanation of the experimental environment and the methodological steps applied to enhance the performance of PSO, GAs, ABC, and the GA–PSO hybrid algorithm. To ensure a controlled and efficient analysis, all computational experiments were conducted on a DELL Latitude 3540 system equipped with a 13th-generation Intel Core i5-1335U processor (1.30 GHz), 8 GB RAM, and a 500 GB SSD, operating on Windows 11 Pro. To support reproducibility, the simulation was implemented using CloudSim v3.0.3, an open-source, Java-based framework developed by the CLOUDS Laboratory at the University of Melbourne, Melbourne, Australia. Parameters such as the number of tasks, number of VMs, and number of repetitions used in the CloudSim platform are shown in Table 1.
Table 1. Overview of the simulation environment configuration.
All experiments were executed deterministically using a fixed base seed of 0. For each independent repetition, the random number generator was seeded by adding the repetition index to this base seed. This approach ensures that rerunning any given scenario with the same seed and repetition index reproduces the exact same sequence of random values, cloudlet workloads, and algorithm behaviors, allowing bit-for-bit replication of all results.
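The seeding scheme described above can be sketched in a few lines; `BASE_SEED` and `make_rng` are illustrative names for this sketch, not part of the CloudSim API:

```python
import random

BASE_SEED = 0  # fixed base seed used for all experiments

def make_rng(repetition_index: int) -> random.Random:
    """Deterministic RNG for one repetition: base seed + repetition index."""
    return random.Random(BASE_SEED + repetition_index)

# Rerunning the same repetition reproduces the same random cloudlet workload.
rng_a = make_rng(3)
rng_b = make_rng(3)
workload_a = [rng_a.randint(1000, 10000) for _ in range(5)]  # cloudlet lengths in MI
workload_b = [rng_b.randint(1000, 10000) for _ in range(5)]
```

Because each repetition derives its own seed, repetitions remain independent of one another while any single repetition is bit-for-bit reproducible.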
This setup provided a stable and consistent environment for evaluating algorithmic efficiency and reliability under different conditions. The hardware configuration was chosen to match a standardized mid-range workstation commonly used in comparable studies, ensuring accurate and reproducible results.

3.1. The Proposed Algorithm and Methods

To address the complexities of task scheduling in cloud computing, this study utilizes GAs, PSO, and ABC as benchmark metaheuristic methods [31]. The research examines the resilience and efficiency of these approaches under both standard and DDoS-compromised cloud environments. Recognizing the individual strengths and weaknesses of GAs and PSO, a hybrid GA–PSO algorithm was introduced to enhance scheduling performance and system robustness. The hybrid GA–PSO method used in this study is detailed in Algorithm 1. This approach combines the global search capability of the GA with the convergence speed of PSO.
Algorithm 1: Algorithm Steps
1. Initialization:
   - GA: ‘CreateRandomPopulation()’
   - PSO: ‘CreateRandomSwarm()’
2. Iteration Loop:
   - PSO: update ‘velocity’ & ‘position’, evaluate fitness.
   - GA: selection, crossover, mutation, evaluate fitness.
   - If PSO is stagnant → inject GA’s fittest individual into PSO.
3. Termination:
   - When max iterations or evaluations are reached → return global best (‘g_best’).
PSO offers rapid convergence, but it tends to become trapped in local optima. On the other hand, the GA provides a more extensive global search capability but suffers from slow convergence, which can be computationally expensive. The proposed hybrid GA–PSO algorithm integrates the GA’s genetic diversity with PSO’s fast convergence to achieve an optimal balance. The GA’s mutation and crossover mechanisms introduce variation when PSO encounters stagnation, while PSO enhances the GA’s search efficiency, enabling quicker and more effective task scheduling solutions.
A common challenge in PSO is its susceptibility to early convergence at suboptimal solutions, whereas the GA often requires a large number of generations to reach a global optimum, increasing computational overheads. This issue is particularly pronounced in cloud computing, where large-scale scheduling tasks demand a high degree of efficiency. Furthermore, in DDoS-affected cloud environments, the complexity of task allocation increases as a result of dynamic disruptions, making traditional methods less effective. The hybrid GA–PSO approach mitigates these challenges by leveraging PSO’s adaptability and the GA’s evolutionary search efficiency, ensuring a more adaptive and resilient scheduling strategy capable of handling both routine and adversarial conditions in cloud systems.
The workflow of the hybrid GA–PSO algorithm is depicted in Figure 1. The algorithm initiates by generating a random population, where each individual represents a candidate solution for the cloud computing task scheduling problem. Each solution corresponds to a distinct allocation of tasks across VMs within the cloud environment.
Figure 1. The flowchart of the proposed hybrid GA–PSO algorithm.
For the GA, PSO, and ABC algorithms, the number of iterations is denoted as n. However, to maintain computational efficiency and ensure a balanced comparison across methods, the hybrid GA–PSO algorithm adopts n/2 iterations. This reduction prevents excessive complexity while enabling meaningful performance evaluation.
To determine the most effective configuration, preliminary experiments were conducted, systematically adjusting iteration counts, cloudlet numbers, and other parameters to assess their impact on scheduling performance. The results of these experiments guided the selection of optimal parameter values, as outlined in Table 1 and Table 2. The final parameter tuning ensures that the hybrid GA–PSO algorithm maintains a balance between computational efficiency and scheduling effectiveness, allowing it to perform optimally across diverse cloud computing scenarios.
Table 2. Simulation parameters for all algorithms.

3.1.1. Population Initialization

In the case of the hybrid GA–PSO algorithm, the initialization phase begins by generating a random population for the GA and a swarm for PSO. Each PSO particle is assigned a random velocity and position, while the GA initializes its population with randomly generated individuals representing potential scheduling solutions. Once the population is established, the GA’s fittest individual is selected to assist in guiding PSO particles during their optimization process. Within PSO, the best local (pBest) and global (gBest) solutions are continuously updated. If PSO encounters stagnation and becomes trapped in a local optimum, the GA introduces genetic diversity to restore exploration capabilities. The GA population undergoes evolutionary adaptation, incorporating crossover and mutation operations to refine the search process, improving the global exploration of the solution space. This interaction between the GA and PSO ensures a balanced trade-off between exploration and exploitation, enabling a more adaptive and efficient task scheduling mechanism in cloud computing environments. The population initialization of this hybrid approach is detailed in Algorithm 2, where both GA and PSO populations are created and evaluated to set the foundation for the optimization process.
Algorithm 2: Initialize Population
Input Initial GA population, PSO swarm, max iterations/evaluations
Output Best solution found (fitness)
GA_population = CreateRandomPopulation()
PSO_swarm = CreateRandomSwarm()
For each individual in GA_population:
  EvaluateFitness(individual)
End For
For each particle in PSO_swarm:
  EvaluateFitness(particle)
End For
Fittest_GA = GetFittestIndividual(GA_population)
g_best = GetGlobalBest(PSO_swarm)
p_best = GetLocalBests(PSO_swarm)
This step initializes the GA population using CreateRandomPopulation() and the PSO swarm using CreateRandomSwarm(). It then evaluates each GA individual and PSO particle by computing their fitness based on the makespan of their task-to-VM mappings, establishing the initial pBest and gBest values for the subsequent optimization process.
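As a minimal sketch of this initialization, assuming each candidate solution is encoded as a task-to-VM index list and fitness is the makespan of that mapping (the function names below are illustrative, not the study's actual code):

```python
import random

def create_random_population(pop_size, n_tasks, n_vms, rng):
    """Each individual maps task index -> VM index (one candidate schedule)."""
    return [[rng.randrange(n_vms) for _ in range(n_tasks)] for _ in range(pop_size)]

def makespan(individual, task_mi, vm_mips):
    """Fitness: the latest VM finish time under this task-to-VM mapping."""
    finish = [0.0] * len(vm_mips)
    for task, vm in enumerate(individual):
        finish[vm] += task_mi[task] / vm_mips[vm]  # execution time = MI / MIPS
    return max(finish)

rng = random.Random(0)
task_mi = [4000, 8000, 2000, 6000]  # cloudlet lengths (million instructions)
vm_mips = [1000, 2000]              # VM speeds (MIPS)
population = create_random_population(5, len(task_mi), len(vm_mips), rng)
best = min(population, key=lambda ind: makespan(ind, task_mi, vm_mips))
```

The same encoding and fitness function serve both the GA population and the PSO swarm, so the two can exchange individuals directly.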

3.1.2. Hybridization Strategy

In the case of the hybrid GA–PSO algorithm, optimization is enhanced by establishing a mutual reinforcement mechanism between the GA and PSO. The hybridization strategy extends the mechanism described in Section 3.1.1 by reinforcing PSO with the GA’s best individuals and vice versa.
By integrating the GA’s evolutionary operators with PSO’s adaptive learning, this hybrid approach balances exploration and exploitation, mitigating the risk of premature convergence while maintaining computational efficiency. The synergy between the GA’s genetic adaptability and PSO’s rapid optimization ensures solution diversity and enhanced task scheduling accuracy in dynamic cloud computing environments, particularly under adversarial conditions such as those relating to DDoS attacks. The hybridization strategy employed in this study is detailed in Algorithm 3.
Algorithm 3: Hybridization Strategy
While eval < max_evaluations:
  UpdateSwarmWithGA(PSO_swarm, Fittest_GA)
  If IsStagnant(PSO_swarm) Then
    GA_population = ApplyCrossoverAndMutation(GA_population)
    Fittest_GA = GetFittestIndividual(GA_population)
  End If
  g_best = GetGlobalBest(PSO_swarm)
  DefineGAPopulation(GA_population, g_best)
End While
In this step, the GA’s fittest individual is used to guide the PSO swarm; if the swarm remains stagnant (no improvement for the specified stagnation count), the GA applies crossover and mutation to reintroduce genetic diversity and restore exploration capability.
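One simple way to implement the stagnation check described here is a counter that resets whenever the global best improves; the `StagnationMonitor` class below is a hypothetical sketch, not the paper's implementation:

```python
class StagnationMonitor:
    """Flags stagnation when gBest fails to improve for `threshold` consecutive checks."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.best = float("inf")  # minimizing makespan, so lower is better
        self.stalled = 0

    def update(self, g_best_fitness):
        if g_best_fitness < self.best:
            self.best = g_best_fitness
            self.stalled = 0        # improvement resets the counter
        else:
            self.stalled += 1
        return self.stalled >= self.threshold  # True -> inject GA's fittest individual

monitor = StagnationMonitor(threshold=3)
flags = [monitor.update(f) for f in [10.0, 9.0, 9.0, 9.0, 9.0]]
```

With a threshold of 3, the first two updates improve the best makespan, and only after three consecutive non-improving checks does the monitor signal stagnation.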

3.1.3. Iterative Optimization Loop

The hybrid GA–PSO algorithm follows an iterative optimization process, where the GA and PSO work in tandem to refine task scheduling solutions dynamically. In each iteration, the GA evolves the population, searching for the most optimal solution through selection, crossover, and mutation. Meanwhile, PSO adjusts the velocity and position of particles, ensuring efficient exploration and fast convergence towards promising solutions. The specific steps of this iterative co-optimization process are outlined in Algorithm 4.
This loop continues the integrated GA–PSO interaction described in Section 3.1.1, leveraging genetic operations to ensure convergence stability when PSO stagnates.
Algorithm 4: Iterative Optimization Algorithm
While eval < max_evaluations:
  For each individual in GA_population:
    ApplySelection(individual)
    ApplyCrossover(individual)
    ApplyMutation(individual)
    EvaluateFitness(individual)
  End For
  Fittest_GA = GetFittestIndividual(GA_population)
  For each particle in PSO_swarm:
    UpdateVelocity(particle, g_best, p_best)
    UpdatePosition(particle)
    EvaluateFitness(particle)
  End For
  g_best = GetGlobalBest(PSO_swarm)
  p_best = GetLocalBests(PSO_swarm)
End While
In each iteration, the GA evolves its population via selection, crossover, and mutation while PSO simultaneously updates particle velocities and positions; after evaluating both GA individuals and PSO particles, the global best solution is updated to drive convergence.
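A common way to apply PSO's continuous velocity/position update to a discrete task-to-VM encoding is to round the updated position and wrap it to a valid VM index. The sketch below uses standard parameter choices (w = 0.7, c1 = c2 = 1.5); it is one possible discretization, not necessarily the exact one used in the study:

```python
import random

def update_particle(position, velocity, p_best, g_best, n_vms, rng,
                    w=0.7, c1=1.5, c2=1.5):
    """One PSO velocity/position update; positions are wrapped back to VM indices."""
    new_pos, new_vel = [], []
    for d in range(len(position)):
        r1, r2 = rng.random(), rng.random()
        v = (w * velocity[d]
             + c1 * r1 * (p_best[d] - position[d])   # cognitive pull toward pBest
             + c2 * r2 * (g_best[d] - position[d]))  # social pull toward gBest
        new_vel.append(v)
        new_pos.append(int(round(position[d] + v)) % n_vms)  # discretize to a VM index
    return new_pos, new_vel

rng = random.Random(0)
pos, vel = update_particle([0, 1, 2], [0.0, 0.0, 0.0], [1, 1, 1], [2, 0, 1], 3, rng)
```

The modulo keeps every dimension a valid VM assignment, so the updated particle remains a feasible schedule that can be evaluated with the same makespan fitness as the GA individuals.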

3.1.4. Termination and Result

The optimization process runs until the maximum iteration threshold is reached. Throughout the execution, the GA and PSO continuously exchange their best solutions, refining their search strategies to achieve higher efficiency and adaptability. Upon reaching the termination condition, the algorithm selects the final optimized solution, representing the best task scheduling configuration obtained during the process. This solution reflects an optimal balance between execution time, resource allocation, and computational efficiency, ensuring improved performance in dynamic cloud computing environments.

3.1.5. Hybrid GA–PSO Summary

The unified pseudocode summarizing the overall hybrid GA–PSO workflow is given in Algorithm 5. The detailed step-by-step algorithms remain in Section 3.1.1, Section 3.1.2, Section 3.1.3 and Section 3.1.4 for reference.
The unified pseudocode first initializes both the GA population and the PSO swarm, then evaluates their fitness to establish initial pBest and gBest values. During each iteration, PSO particles update their velocities and positions and refresh pBest/gBest, while the GA population undergoes selection, crossover, and mutation to produce new candidate solutions. If the PSO swarm remains stagnant for the specified threshold, the algorithm injects the fittest GA individual into the swarm to re-introduce diversity and avoid local optima. After all iterations, the best overall solution (gBest) is returned as the optimal task scheduling configuration.
Algorithm 5: Hybrid GA–PSO Algorithm
Input:
   - pop_size (population/swarm size)
   - MaxIter (maximum iterations)
   - StagThresh (stagnation threshold)
Output:
   - g_best (best scheduling solution)
1.   GA_pop ← CreateRandomPopulation(pop_size)
   PSO_swarm ← CreateRandomSwarm(pop_size)
2.   EvaluateFitness(GA_pop)
   EvaluateFitness(PSO_swarm)
3.   pBest, gBest ← UpdateBests(PSO_swarm)
4.   for iter = 1 to MaxIter do
   a. PSO update
      for each particle in PSO_swarm do
      UpdateVelocity(particle, pBest, gBest)
      UpdatePosition(particle)
      EvaluateFitness(particle)
      end for
      UpdateBests(PSO_swarm)
   b. GA evolution
     GA_pop ← Selection(GA_pop)
     GA_pop ← Crossover(GA_pop)
     GA_pop ← Mutation(GA_pop)
     EvaluateFitness(GA_pop)
     fittestGA ← GetFittest(GA_pop)
   c. Hybridization
     if IsStagnant(PSO_swarm, StagThresh) then
      ReplaceWorstParticle(PSO_swarm, fittestGA)
     end if
     if Fitness(fittestGA) < Fitness(gBest) then gBest ← fittestGA
   end for
5. return gBest
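Under simplifying assumptions (integer task-to-VM encoding, rounding-based PSO discretization, tournament selection with one-point crossover), Algorithm 5 can be condensed into a runnable sketch; all function names and parameter values below are illustrative, not the study's actual implementation:

```python
import random

def makespan(ind, task_mi, vm_mips):
    """Fitness to minimize: latest VM finish time under this task-to-VM mapping."""
    finish = [0.0] * len(vm_mips)
    for t, vm in enumerate(ind):
        finish[vm] += task_mi[t] / vm_mips[vm]
    return max(finish)

def hybrid_ga_pso(task_mi, vm_mips, pop_size=20, max_iter=50, stag_thresh=5, seed=0):
    rng = random.Random(seed)
    n_tasks, n_vms = len(task_mi), len(vm_mips)
    rand_ind = lambda: [rng.randrange(n_vms) for _ in range(n_tasks)]
    fit = lambda ind: makespan(ind, task_mi, vm_mips)

    ga_pop = [rand_ind() for _ in range(pop_size)]
    swarm = [rand_ind() for _ in range(pop_size)]
    vel = [[0.0] * n_tasks for _ in range(pop_size)]
    p_best = [p[:] for p in swarm]
    g_best = min(swarm, key=fit)
    stalled = 0

    for _ in range(max_iter):
        # a. PSO update: move particles, refresh pBest
        for i, p in enumerate(swarm):
            for d in range(n_tasks):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (p_best[i][d] - p[d])
                             + 1.5 * r2 * (g_best[d] - p[d]))
                p[d] = int(round(p[d] + vel[i][d])) % n_vms
            if fit(p) < fit(p_best[i]):
                p_best[i] = p[:]
        improved = min(p_best, key=fit)
        if fit(improved) < fit(g_best):
            g_best, stalled = improved[:], 0
        else:
            stalled += 1
        # b. GA evolution: tournament selection, one-point crossover, mutation
        new_pop = []
        while len(new_pop) < pop_size:
            a, b = (min(rng.sample(ga_pop, 2), key=fit) for _ in range(2))
            cut = rng.randrange(1, n_tasks)
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:
                child[rng.randrange(n_tasks)] = rng.randrange(n_vms)
            new_pop.append(child)
        ga_pop = new_pop
        fittest_ga = min(ga_pop, key=fit)
        # c. Hybridization: inject GA's fittest into the swarm on stagnation
        if stalled >= stag_thresh:
            worst = max(range(pop_size), key=lambda i: fit(swarm[i]))
            swarm[worst] = fittest_ga[:]
            stalled = 0
        if fit(fittest_ga) < fit(g_best):
            g_best = fittest_ga[:]
    return g_best, fit(g_best)

best, ms = hybrid_ga_pso([4000, 8000, 2000, 6000], [1000, 2000],
                         pop_size=10, max_iter=15)
```

Because both populations share one encoding and one fitness function, the injection step is a plain list copy, and the returned gBest is directly a task-to-VM schedule.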

3.2. The Approach

This section provides a comprehensive overview of the proposed methodology, detailing the workflow and implementation steps. The approach is structured into six distinct stages, each playing a crucial role in optimizing task scheduling and resource management. The workflow of the initial stage is depicted in Figure 2, outlining the fundamental processes that form the foundation of the overall strategy. These stages collectively ensure a structured and efficient approach to enhancing scheduling performance in cloud environments while addressing potential computational and security challenges.
Figure 2. Overall workflow of the study.

3.2.1. Phase 1: Setting up the Simulation Environment

In this phase, a simulation environment was developed using the CloudSim framework to evaluate the scheduling approach under different operational conditions. The setup involved defining data centers, physical hosts, and VMs while also configuring cloudlets, which represent computational tasks. These tasks were designed as input for scheduling, utilizing metaheuristic algorithms to enhance efficiency. The environment was structured to ensure effective resource allocation and task management, supporting adaptability in cloud-based systems.
To examine system performance, simulations were conducted under two distinct scenarios:
  • Standard cloud environment: simulates normal operating conditions, where system performance is assessed without disruptions.
  • DDoS-affected cloud environment: evaluates system behavior under DDoS attack conditions, analyzing its impact on task execution and resource management.
In our DDoS simulation, volumetric flooding was emulated purely by scaling cloudlet resource demands: each “attack” cloudlet’s execution length was multiplied by 100, and both its input and output file sizes were increased from 300 MB to 3000 MB. This artificial overload targets CPU and network bandwidth to mimic high-volume attack behavior commonly used in CloudSim studies. Although we did not incorporate a real-world traffic trace, this parameter-scaling approach is a recognized method of emulating DDoS conditions.
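The parameter scaling used for attack cloudlets can be sketched as follows; the `Cloudlet` dataclass is a simplified stand-in for CloudSim's Cloudlet class, not its actual API:

```python
from dataclasses import dataclass

ATTACK_LENGTH_FACTOR = 100  # execution length multiplied by 100
ATTACK_FILE_SIZE_MB = 3000  # input/output file sizes raised from 300 MB

@dataclass
class Cloudlet:              # simplified stand-in for CloudSim's Cloudlet
    length_mi: int           # execution length in million instructions
    input_mb: int = 300
    output_mb: int = 300

def as_attack(c: Cloudlet) -> Cloudlet:
    """Scale a normal cloudlet into a high-load 'attack' cloudlet."""
    return Cloudlet(length_mi=c.length_mi * ATTACK_LENGTH_FACTOR,
                    input_mb=ATTACK_FILE_SIZE_MB,
                    output_mb=ATTACK_FILE_SIZE_MB)

attack = as_attack(Cloudlet(length_mi=5000))
```

A 5000 MI cloudlet thus becomes a 500,000 MI cloudlet with 3000 MB input and output files, loading both CPU and bandwidth.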
Detailed explanations regarding the execution of these scenarios are presented in a later section of the study. This controlled experimental setup allows for a comparative assessment of scheduling performance, providing insights into the resilience and efficiency of different optimization techniques under varying network conditions.

3.2.2. Phase 2: Scheduling Process

After configuring the simulation environment and initializing the metaheuristic algorithms, the task scheduling phase commenced. The primary goal was to enhance scheduling efficiency, optimize system performance, and reduce execution time by systematically assigning tasks to VMs.
Metaheuristic algorithms played a critical role in this process by dynamically adapting to workload conditions, ensuring efficient resource utilization and balanced task distribution. These optimization techniques enabled flexible and adaptive scheduling, thereby improving overall computational efficiency. The outcomes of this phase provided a basis for assessing how different algorithms perform under varying operational scenarios, including normal conditions and DDoS-induced disruptions.

3.2.3. Phase 3: Simulation of Scheduled Tasks and Results Analysis

At this stage, the scheduled tasks were incorporated into the simulation environment, allowing for the replication of task execution in a cloud computing system. The performance of the system was then evaluated, and the results were analyzed to measure the effectiveness of the proposed scheduling method.
A key aspect of the evaluation was makespan analysis, which quantifies the total time required for task completion. Makespan serves as a crucial metric in determining system efficiency, as it captures the time span between the task start and the task completion. The output of the scheduling algorithm gives this duration, indicating how efficiently tasks are assigned and executed.
To assess makespan performance, four statistical indicators were employed:
  • Min makespan: the shortest execution time recorded across all test runs (expressed in seconds);
  • Max makespan: the longest observed makespan value, indicating the worst-case execution time;
  • Avg makespan: the mean makespan across multiple trials, representing the overall scheduling efficiency;
  • Std: a measure of variation in makespan values, providing insight into the stability of the scheduling approach.
The makespan of a single run is defined as

$C_{\max} = T_{\mathrm{finish\_max}} - T_{\mathrm{start\_min}}$

The four indicators are then computed over the makespan values obtained across the independent runs, where $C_i$ denotes the makespan obtained in the i-th run:

Min value:
$C_{\min} = \min\{C_1, C_2, \ldots, C_n\}$

Max value:
$C_{\max} = \max\{C_1, C_2, \ldots, C_n\}$

Avg value:
$\mathrm{Avg} = \frac{1}{N}\sum_{i=1}^{N} C_i$, where $N$ is the total number of runs

Std value (standard deviation):
$\mathrm{Std} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(C_i - \mathrm{Avg})^2}$
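The four indicators can be computed directly from the per-run makespan values; note that the Std definition above divides by N (population standard deviation), which the sketch below follows:

```python
def summarize_makespans(makespans):
    """Min / Max / Avg / Std over the makespans of N independent runs."""
    n = len(makespans)
    avg = sum(makespans) / n
    # population standard deviation: Std = sqrt((1/N) * sum (C_i - Avg)^2)
    std = (sum((c - avg) ** 2 for c in makespans) / n) ** 0.5
    return {"min": min(makespans), "max": max(makespans), "avg": avg, "std": std}

summary = summarize_makespans([12.0, 10.0, 11.0, 13.0])
```

For these four example runs, the mean is 11.5 s; a small Std relative to the mean indicates a stable scheduling approach across repetitions.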
Another key focus of the evaluation was resource utilization, which measures the efficiency of VM usage within a cloud computing system. This metric indicates the proportion of utilized resources relative to the total available capacity over a defined time period, offering insights into how well computational resources are distributed and managed.
Resource utilization is represented as a percentage and is calculated using the following formula:
$\mathrm{Resource\ Utilization} = \frac{\text{VM time to live}}{\mathrm{Makespan} \times \text{total VM number}}$
An effective scheduling strategy ensures balanced resource allocation, preventing issues such as underutilization, which leads to wasted computational power, or overloading, which may degrade performance. By analyzing resource utilization trends, this study evaluates the impact of different scheduling techniques on overall system stability and efficiency under both normal and DDoS-affected conditions.
Figure 3 presents the structural foundation of CloudSim, highlighting its significance in simulating and modeling cloud computing environments. CloudSim provides an adaptable framework for replicating key cloud components, including data centers, physical hosts, VMs, and task scheduling processes.
Figure 3. The basic framework of CloudSim.
The simulation process begins with the creation of cloudlets, representing workloads assigned to VMs for execution. At the core of this framework is the broker, which serves as an intermediary between tasks and available resources. The broker’s role is to allocate tasks based on predetermined rules or scheduling algorithms, ensuring efficient task execution and resource distribution. By facilitating a structured and controlled simulation, CloudSim allows for a detailed evaluation of scheduling strategies in various operational scenarios, including those affected by DDoS attacks.
In this study, we enhanced task scheduling efficiency in CloudSim by integrating metaheuristic algorithms, introducing an advanced optimization mechanism beyond the default broker-based approach. Unlike static task allocation, these algorithms dynamically analyze task complexity, resource availability, and overall system performance to determine the most efficient VM assignments.
The scheduling process begins with metaheuristic techniques evaluating task demands and current resource conditions. Once an optimized allocation strategy is established, tasks are assigned to the most suitable VMs, and the simulation proceeds accordingly. This adaptive scheduling approach not only optimizes resource usage but also enhances system responsiveness to changing workloads, resulting in more realistic and efficient cloud simulations.
By incorporating advanced optimization methods, CloudSim extends its functionality, enabling more accurate modeling of complex cloud environments. This refinement not only improves task scheduling effectiveness but also establishes a robust platform for further exploration in cloud computing research and performance evaluation.
Figure 4 illustrates a DDoS attack scenario modeled within CloudSim, where an unauthorized entity targets the IaaS layer to disrupt task scheduling and resource allocation. The purpose of this simulation is to analyze how such an attack influences system efficiency and stability, utilizing CloudSim’s environment to assess the impact of security threats on cloud computing performance.
Figure 4. DDoS attack scenario simulated in CloudSim.
In this simulation, user identity modifications lead to the creation of high-load cloudlets that place excessive demand on system resources. These cloudlets are configured to exhibit prolonged execution times and require substantial input–output operations, causing increased bandwidth consumption and computational strain. Expanding the parameters, such as page size and file size, intensifies network congestion, delaying application processing and reducing overall system efficiency.
A major objective of this study is to evaluate how scheduling algorithms adapt to DDoS-induced system stress and whether they can effectively manage task execution while minimizing delays. This analysis provides valuable insights into the resilience and adaptability of scheduling techniques under extreme workload conditions.
By simulating attack scenarios, this research identifies potential security weaknesses in cloud computing environments and offers a structured framework for assessing and improving scheduling mechanisms. The ultimate goal is to enhance task scheduling strategies, ensuring that cloud systems maintain efficiency and stability, even under malicious cyber threats or operational disruptions.

3.3. Simulation Environment

The CloudSim simulation framework, widely utilized in cloud computing research, was chosen as the experimental platform. By allowing the simulation of VMs and tasks, this framework has enabled the implementation and analysis of metaheuristic algorithms for solving the task scheduling problem.
In the CloudSim simulation framework, tasks are represented as cloudlets, which mimic real-world workloads in a cloud computing environment. To assess system performance in this study, 50 to 100 cloudlets were generated following a uniform distribution. These cloudlets were assigned to VMs and executed within a simulated cloud infrastructure. The simulation environment, detailed in Table 1, consists of 10 VMs, a single data center, and one physical host. This setup closely reflects a practical scenario where tasks are scheduled and executed on virtualized resources.
Each VM in the cloud possesses a specific computational capacity, quantified in MIPS. Tasks are characterized by their size in million instructions (MI) and are allocated to VMs accordingly. In this context, the execution time of a task depends on both its size in MI and the MIPS rating of the assigned VM. The set of tasks assigned to the m-th VM can be expressed as $T_m = \{MI_0, MI_1, \ldots, MI_k\}$, where $k$ denotes a specific task and $m$ identifies the VM it is assigned to.
$$ET_k^m = \frac{MI_k}{MIPS_m} \tag{7}$$
$$TET_m = \sum_{k=0}^{K} ET_k^m \tag{8}$$
In this study, the execution time of the k-th task assigned to the m-th VM is determined using Equation (7), while the total execution time of all tasks on a VM is calculated with Equation (8). Among all VMs, the highest total execution time, as expressed in Equation (9), represents the makespan value.
$$\mathit{Makespan} = \max_{m}\{TET_m\}, \quad m \in \{VM_1, VM_2, \ldots\} \tag{9}$$
Explanation for Equations (7)–(9):
  • $MI_k$: million instructions of the k-th cloudlet.
  • $MIPS_m$: million instructions per second of the m-th VM.
  • $T_m$: set of cloudlets assigned to VM $m$.
The makespan is defined as the maximum total execution time across all VMs.
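Equations (7)–(9) can be computed in a few lines. The toy MI and MIPS values in the example below are our own illustration, not the paper's configuration.

```python
def makespan(assignment, mi, mips):
    # assignment[k] = index m of the VM running cloudlet k
    # mi[k]         = length of cloudlet k in million instructions
    # mips[m]       = speed of VM m in million instructions per second
    tet = [0.0] * len(mips)                 # Eq. (8): per-VM execution totals
    for k, m in enumerate(assignment):
        tet[m] += mi[k] / mips[m]           # Eq. (7): ET_k^m = MI_k / MIPS_m
    return max(tet)                         # Eq. (9): makespan = max TET_m

# Toy instance: cloudlets 0 and 1 on VM 0 (1000 MIPS), cloudlet 2 on VM 1 (2000 MIPS)
ms = makespan([0, 0, 1], mi=[4000, 6000, 8000], mips=[1000, 2000])
# VM 0 finishes at (4000 + 6000) / 1000 = 10.0 s; VM 1 at 8000 / 2000 = 4.0 s
```

This `makespan` function is exactly the fitness value the metaheuristics minimize when evaluating a candidate task-to-VM assignment.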
Table 2 provides an overview of the simulation parameters applied across all algorithms, detailing the fundamental configurations and settings used during the experimental process. These parameters have been standardized to ensure consistency and comparability in performance evaluation, ensuring a reliable assessment of each algorithm’s effectiveness.
Table 2 shows the parameter selection rationale as follows:
  • Population/swarm size = 75: pre-tests showed that smaller sizes lacked exploration, while larger sizes significantly increased computation time.
  • GA crossover rate = 0.5 and mutation rate = 0.015: common literature values, confirmed by preliminary experiments to balance exploration and exploitation.
  • PSO inertia = 0.9, C₁ = C₂ = 1.5: standard canonical PSO settings, chosen for fast convergence.
The algorithms were initialized with a population of 75 randomly generated individuals. For the GA, parameters were set as follows: a mutation rate of 0.015, a crossover rate of 0.5, a tournament size of 15, and the application of elitism. Additionally, an enhancement was introduced by incorporating a stagnation threshold of 0.5 and a maximum stagnation count of 15. These modifications led to improved performance compared to the traditional GA and contributed positively to the hybrid algorithm. For the PSO algorithm, the inertia factor was set at 0.9, with both C₁ and C₂ assigned a value of 1.5. Similarly, in the ABC algorithm, the swarm size and other parameters were adjusted to align with the simulation environment.
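The canonical PSO update with these settings can be sketched as follows. Treating particle positions as continuous values and clamping them to the valid VM index range is our assumed discretization, since the paper does not spell out its solution encoding.

```python
import random

def pso_step(pos, vel, pbest, gbest, n_vms, w=0.9, c1=1.5, c2=1.5, rng=None):
    # One canonical PSO velocity/position update with the paper's settings
    # (inertia w = 0.9, C1 = C2 = 1.5). Clamping continuous positions to the
    # VM index range is our own discretization choice, not the paper's.
    rng = rng or random.Random(0)
    new_pos, new_vel = [], []
    for x, v, pb, gb in zip(pos, vel, pbest, gbest):
        v = w * v + c1 * rng.random() * (pb - x) + c2 * rng.random() * (gb - x)
        x = min(max(x + v, 0.0), n_vms - 1)   # keep inside [0, n_vms - 1]
        new_pos.append(x)
        new_vel.append(v)
    return new_pos, new_vel

# Two cloudlets moving toward their personal and global best VM indices
pos, vel = pso_step(pos=[3.0, 7.5], vel=[0.0, 0.0],
                    pbest=[2.0, 8.0], gbest=[4.0, 6.0], n_vms=10)
```

Rounding each clamped position to the nearest integer would yield the VM index used when evaluating the makespan of the candidate schedule.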
To comprehensively analyze task scheduling algorithms, experiments were conducted across different iteration counts. Specifically, each algorithm was tested at 10, 50, and 100 iterations to examine performance variations as the iteration count increased. This experimental design facilitated the simulation of diverse operational conditions, providing insights into the scalability and robustness of the algorithms when handling different workloads.
To enhance the reliability of the findings, each algorithm was executed independently 10 times for each iteration setting. Given the inherent randomness in metaheuristic algorithms, different outcomes were observed in each run. Repeated execution helped mitigate the influence of stochastic variations, ensuring statistical significance and offering a clearer assessment of algorithm efficiency and stability. The final results were obtained by computing the arithmetic mean of the recorded values.
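The repetition protocol can be sketched as below. The stand-in `run_once` callable and its toy makespan values are hypothetical; in the actual experiments each call would be one full metaheuristic run in CloudSim.

```python
import random
import statistics

def averaged_makespan(run_once, n_runs=10, base_seed=0):
    # Execute a stochastic scheduler n_runs times with distinct seeds and
    # report the arithmetic mean of the recorded makespans, mirroring the
    # paper's 10-repetition protocol.
    results = [run_once(base_seed + i) for i in range(n_runs)]
    return statistics.mean(results), results

# Toy stand-in for one metaheuristic run (hypothetical makespan in seconds)
mean_ms, all_ms = averaged_makespan(lambda seed: 100 + random.Random(seed).random() * 10)
```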
Table 3 outlines the number of trials, iteration counts, and the task distribution across the simulated scenarios.
Table 3. Simulation scenarios.
The simulation environment and testing methodology were systematically structured to assess key performance indicators such as task completion times, resource utilization, and scheduling efficiency. This well-defined experimental framework enabled a thorough evaluation of the proposed scheduling mechanisms, offering valuable insights into their effectiveness in terms of managing workloads within cloud computing environments.

3.4. Experimental Setup and Scenarios

The experimental framework was designed to assess the efficiency and adaptability of metaheuristic algorithms under different operational conditions. Two distinct scenarios were examined to evaluate task scheduling performance.

3.4.1. Normal Cloud Environment

In this scenario, tasks were processed in a standard cloud computing environment without external disturbances. The following metaheuristic algorithms were employed to optimize task scheduling and resource allocation:
  • PSO algorithm;
  • GA;
  • GA–PSO hybrid algorithm;
  • ABC algorithm.

3.4.2. DDoS-Attacked Cloud Environment

To analyze the resilience of task scheduling algorithms under high-stress conditions, a DDoS attack scenario was simulated. This environment was characterized by excessive resource consumption and increased task execution delays. The same metaheuristic algorithms utilized in the standard cloud computing scenario were applied to assess their performance and adaptability in mitigating the impact of resource saturation.

3.5. Workflow Execution

The task scheduling process followed a structured execution sequence to ensure efficient resource utilization and performance evaluation (Figure 4).
The workflow consisted of the following steps:
  • Initialization: The cloud simulation environment was configured, including the generation of cloudlets and VMs.
  • Algorithm execution: Metaheuristic algorithms were applied to determine optimal task assignments based on resource availability and computational efficiency.
  • Fitness evaluation: Task scheduling outcomes were assessed using predefined performance metrics such as makespan, resource utilization, and execution time.
  • Iterative optimization: Algorithm parameters, including positional and velocity adjustments in heuristic-based approaches, were refined iteratively to enhance solution accuracy.
  • Termination conditions: The execution process concluded upon reaching an optimal solution or satisfying predefined stopping criteria.
  • Result documentation and analysis: Performance data were systematically recorded, compared, and analyzed across both scenarios to evaluate algorithm effectiveness in different cloud computing environments.
This structured workflow ensured a rigorous and consistent evaluation of scheduling strategies, allowing for an in-depth comparison of algorithmic efficiency across varying operational conditions.
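The six steps above can be sketched as a driver loop. All function names here are illustrative placeholders, not CloudSim APIs, and the improvement-based stopping rule is one possible reading of the stated termination conditions.

```python
def run_experiment(init_env, algorithms, fitness, max_iters=100, tol=1e-6):
    # `algorithms` maps a label to a callable (env, incumbent) -> candidate
    # schedule; `fitness` returns the makespan of a candidate. All names are
    # illustrative, not CloudSim APIs.
    env = init_env()                            # 1. Initialization
    results = {}
    for name, algo in algorithms.items():       # 2. Algorithm execution
        best, best_fit = None, float("inf")
        for _ in range(max_iters):              # 4. Iterative optimization
            candidate = algo(env, best)
            fit = fitness(env, candidate)       # 3. Fitness evaluation (makespan)
            if fit < best_fit - tol:
                best, best_fit = candidate, fit
            else:
                break                           # 5. Termination: no further improvement
        results[name] = best_fit                # 6. Result documentation
    return results

# Toy run: one fixed "algorithm" on a two-VM environment with constant fitness
res = run_experiment(
    init_env=lambda: [1000, 2000],                       # toy VM MIPS list
    algorithms={"random": lambda env, prev: [0, 1]},     # fixed toy schedule
    fitness=lambda env, cand: 5.0,                       # constant toy fitness
)
```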

3.6. Data Analysis

The evaluation of each algorithm’s performance was primarily based on makespan, which represents the total time required to complete all scheduled tasks. This analysis aimed to assess the algorithms’ effectiveness in responding to different operational challenges, including both standard execution conditions and high-stress scenarios induced by DDoS attacks. While the CloudSim framework is primarily designed to support makespan-based task scheduling analysis, the presence of a DDoS attack inherently affects resource consumption, which may indirectly influence factors such as energy efficiency and overall system utilization.
It is important to note that this study specifically focuses on makespan as the primary performance metric. Although resource utilization and energy efficiency are inevitably impacted in such conditions, they were not explicitly analyzed or optimized in this research. A more in-depth examination of these aspects would require distinct methodologies and targeted investigations. The findings presented align with the analytical capabilities of CloudSim, maintaining a focused approach on task completion times across varying operational scenarios.
While broader performance metrics such as throughput, energy efficiency, and SLA violations are important for comprehensive evaluation, our study focuses on makespan and resource utilization due to limitations inherent in the CloudSim framework. CloudSim primarily supports scheduling-based metrics and does not model energy consumption or SLA enforcement at a fine-grained level. Incorporating such metrics would require extensive customization or integration with specialized simulators, which is beyond the scope of this study.

4. Experimental Results

This section details the experimental procedures undertaken to evaluate the proposed approach and presents the corresponding findings. The experiments were systematically designed to assess the methodology’s performance under diverse operational conditions, including both standard execution scenarios and environments impacted by security threats such as DDoS attacks.
To determine the efficiency and robustness of the proposed solution, key performance indicators, namely execution time, accuracy, and resource utilization, were analyzed. The obtained results were then compared with existing methodologies reported in the literature to highlight the achieved improvements and substantiate the effectiveness of the proposed approach. This comparative evaluation provides insights into the advantages and potential applications of the method in real-world cloud computing environments.
Figure 5 shows how different metaheuristic algorithms managed task scheduling in a cloud computing environment under normal conditions, using 10 iterations and 50 cloudlets. When compared against the attack scenario, the results highlighted clear differences in how each algorithm responded to disruptions, particularly in the presence of DDoS attacks.
Figure 5. Performance analysis of cloud computing task scheduling using metaheuristic algorithms in a normal environment (10 iterations of 50 cloudlets; (a) ABC algorithm, (b) GA, (c) hybrid GA–PSO algorithm, (d) PSO algorithm).
The ABC algorithm displayed complete consistency, maintaining the same task assignments regardless of external interference. This suggests that its scheduling approach remains rigid and unaffected, ensuring stability but lacking adaptability to changing conditions. In contrast, the GA exhibited considerable instability, with over 30 out of 50 cloudlets reassigned to different VMs under attack. This shift indicated that the GA failed to sustain a structured task allocation, leading to unpredictable assignments.
A comparison between the GA and GA–PSO revealed that the hybrid approach achieved greater scheduling balance. While the GA struggled with frequent disruptions, the GA–PSO maintained a more controlled task distribution. However, fluctuations still occurred, suggesting that although the hybrid method improved reliability, it did not entirely prevent task reallocation when subjected to attacks.
Further examination of the GA–PSO and PSO algorithms under DDoS influence demonstrated PSO’s superior stability. While the GA–PSO performed more consistently than the GA, PSO experienced only minor task shifts, making it the most resilient approach in the comparison.
These findings indicate that the ABC maintained an unchanged structure, PSO experienced minimal disruptions, the GA suffered significant inconsistencies, and the GA–PSO provided an intermediate level of stability. To fully assess the impact of these variations on system efficiency, further analysis of execution time and makespan differences would be essential.
Figure 5 shows the task assignments in the normal environment; relative to the DDoS case, the ABC (a) remains unchanged, the GA (b) shows significant task reassignments, and the hybrid GA–PSO (c) and PSO (d) exhibit intermediate stability.
Figure 6 shows that under DDoS attacks, PSO maintains the most stable task assignments, the GA shows drastic shifts, the ABC is static, and the hybrid GA–PSO mitigates the GA’s instability to some extent.
Figure 6. Performance analysis of cloud computing task scheduling using metaheuristic algorithms in a DDoS attack environment (10 iterations of 50 cloudlets; (a) ABC algorithm, (b) GA, (c) hybrid GA–PSO algorithm, (d) PSO algorithm).
Figure 6 also shows an evaluation of how metaheuristic algorithms handle task scheduling in a cloud computing environment under DDoS attack conditions. The analysis covers four key approaches: ABC, GA, hybrid GA–PSO, and PSO, each tested over 10 iterations with 50 cloudlets.
The results reveal clear distinctions in terms of how these algorithms respond to system disruptions. ABC maintains an identical scheduling structure to its normal state, showing no adaptation to attack-induced variability. This suggests that while the ABC remains consistent, it lacks responsiveness to dynamic threats.
In contrast, the GA undergoes significant disruption, with over 30 cloudlets being reassigned to different VMs. This instability indicates that the GA’s scheduling mechanism is highly sensitive to external interference, leading to a loss of structured task distribution. The GA–PSO, however, demonstrates greater stability, mitigating some of the scheduling inconsistencies seen in GA. Although the GA–PSO still exhibits some variations, its hybrid approach enhances resilience, offering a more controlled allocation strategy.
Among all the algorithms, PSO proves to be the most stable under attack conditions. Most cloudlets retain their original VM assignments, with only minor shifts observed. This suggests that PSO inherently provides a more structured and resilient task scheduling mechanism, making it less prone to attack-induced inefficiencies. The analysis indicates that PSO maintains the most stable and efficient scheduling performance in a disrupted environment, whereas the GA exhibits significant instability, struggling to preserve task assignments under attack conditions. The GA–PSO mitigates some of the GA’s weaknesses by providing a more balanced allocation, though it still experiences fluctuations. In contrast, the ABC remains entirely unchanged, neither adapting to external threats nor experiencing performance degradation.

4.1. Overall Performance Degradation

This section examines the overall performance degradation of metaheuristic algorithms under DDoS attack conditions, focusing on their ability to maintain scheduling stability and system efficiency. The findings reveal notable differences in how each algorithm responds to external disruptions, impacting task allocation, execution time, and makespan performance.
Among the evaluated algorithms, PSO demonstrates the highest level of stability, with only minor task assignment variations observed. Most cloudlets remain allocated to their original VMs, indicating a robust scheduling mechanism that resists disruptions. Conversely, the GA experiences substantial instability, with over 30 out of 50 cloudlets being reassigned under attack conditions, reflecting its high sensitivity to scheduling inconsistencies. The ABC, on the other hand, remains entirely static, neither adapting to the attack nor suffering from task displacement, suggesting a rigid but predictable scheduling approach. Increasing the number of iterations from 10 to 100 improved the GA–PSO makespan by approximately 15%, while PSO and the ABC showed less than 5% improvement. This indicates that the hybrid model benefits more significantly from increased exploration time.
To fully assess the impact of DDoS attacks on overall system efficiency, key performance indicators such as execution time and makespan must be closely analyzed. PSO’s scheduling consistency may help minimize processing delays, whereas the GA’s unpredictability could lead to extended execution times and inefficient resource utilization. The GA–PSO, despite its relative improvement over the GA, requires further investigation to determine whether its structured scheduling translates into measurable efficiency gains. The ABC’s unchanged scheduling may ensure predictability, but its lack of adaptability raises concerns about how well it would perform in dynamic, real-world cloud environments.
By analyzing execution time, makespan variations, and resource utilization under attack conditions, this study aims to provide a comprehensive evaluation of how different scheduling strategies contribute to cloud computing resilience in the face of cyber threats.
  • DDoS Environment
DDoS attacks disrupt cloud-based systems by overwhelming network and computational resources, causing delays, resource exhaustion, and instability in task execution. In such an environment, cloud infrastructures experience increased latency, irregular resource allocation, and unpredictable execution times, making efficient task scheduling a significant challenge.
Different scheduling algorithms respond to DDoS-induced disruptions in varying ways. PSO maintains a relatively stable task allocation, minimizing deviations in assignments, while the GA exhibits high volatility, frequently reassigning tasks and losing scheduling consistency. The GA–PSO offers a more structured approach than the GA, yet still experiences fluctuations that affect performance. The ABC, on the other hand, remains completely static, showing neither degradation nor adaptation, which raises concerns about its responsiveness in dynamic cloud environments.
To assess how well these algorithms function under attack conditions, key performance indicators such as execution time, makespan, and system resource utilization need to be analyzed. Understanding how each algorithm handles scheduling in an unstable environment provides critical insights into its ability to sustain efficiency and operational reliability in cloud computing systems.

4.2. Impact by Threat Type

This section explores how different threat types, particularly DDoS attacks, influence the performance of task scheduling algorithms. The analysis emphasizes variations in algorithm stability and adaptability, offering insights into their effectiveness in minimizing performance disruptions under adverse conditions.
The effects of DDoS attacks are most pronounced in short-term task execution, where network congestion and resource exhaustion create scheduling inconsistencies and irregular execution times. These conditions disrupt system efficiency, forcing algorithms to adapt in real time to maintain workload distribution. While the GA exhibits significant instability, frequently reassigning tasks and struggling to maintain structured scheduling, PSO demonstrates greater resilience by preserving task consistency with minimal deviations.
The GA–PSO improves system stability, particularly with regard to long-term optimization tasks in dynamic conditions.
The inconsistent performance of the GA, particularly under DDoS conditions, may be attributed to its reliance on stochastic genetic operations such as mutation and crossover. These operations introduce significant variability between runs and are sensitive to noise caused by resource saturation. In contrast, PSO maintains a population-based memory of best solutions, enabling more consistent and stable performance across varying scenarios. The GA–PSO mitigates the GA’s randomness by integrating PSO’s convergence behavior, yet some residual fluctuation remains due to genetic diversity mechanisms.
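To illustrate this stochasticity, the sketch below applies GA-style mutation at the paper's rate of 0.015: different random seeds produce different offspring from the same parent, which is precisely the run-to-run variability discussed above. The `mutate` helper and toy assignment are our own illustration.

```python
import random

def mutate(assignment, n_vms, rate=0.015, rng=None):
    # GA-style mutation at the paper's rate (0.015): each gene, i.e. one
    # cloudlet-to-VM mapping, is independently re-drawn with probability
    # `rate`. This helper is an illustrative sketch, not the paper's code.
    rng = rng or random.Random()
    return [rng.randrange(n_vms) if rng.random() < rate else gene
            for gene in assignment]

# Same parent, different seeds -> potentially different offspring schedules
base = [0] * 50
child_a = mutate(base, n_vms=10, rng=random.Random(1))
child_b = mutate(base, n_vms=10, rng=random.Random(2))
```

A PSO particle, by contrast, always moves deterministically toward its stored personal and global bests given the same random draws, which is why its assignments drift far less between runs.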
To fully assess the impact of threats on scheduling efficiency, it is essential to analyze execution time fluctuations, makespan stability, and resource utilization. These factors provide a deeper understanding of how different scheduling strategies can be refined to improve resilience in dynamic cloud environments.

4.3. Critical Insights and Resilience of Hybrid Algorithms

Building on the comparative results, the GA–PSO hybrid strategy has demonstrated strong performance in addressing the complexities of cloud-based task scheduling.
By seamlessly combining the advantages of PSO and GA, the hybrid GA–PSO algorithm effectively adapts to dynamic cloud environments characterized by fluctuating workloads and variable resource availability. Its resilience remains intact even in the presence of network disruptions or security threats, making it a highly adaptive scheduling approach.
Furthermore, this hybrid strategy has demonstrated its ability to address the complexities of cloud-based task scheduling by optimizing cloudlet allocation to VMs and minimizing execution time, even in scenarios where security vulnerabilities compromise system stability. The integration of local and global optimization techniques, coupled with stagnation mitigation strategies, enables high adaptability and resilience in resource-constrained and high-risk environments.
By tackling critical challenges in cloud systems that are susceptible to security threats, the hybrid GA–PSO algorithm provides a reliable framework for ensuring stable and efficient task scheduling. Its adaptability and robustness make it a valuable tool for researchers and practitioners striving to enhance both the performance and security of cloud computing infrastructures.
Interestingly, although the ABC algorithm displays faster execution times, it consistently reports lower resource utilization. This behavior suggests that the ABC may be prioritizing speed by allocating tasks unevenly, potentially leaving some VMs underutilized. While this results in low makespan values in certain trials, it raises concerns about fairness and scalability. In contrast, PSO and the GA–PSO exhibit better workload balance and are more resilient under dynamic conditions, even if their execution times are slightly higher.
Table 4 presents a comparative analysis of the average execution times of various algorithms across different studies, evaluating their performance under identical or similar simulation settings. The results demonstrate that the proposed hybrid GA–PSO algorithm achieves significantly lower execution times compared to conventional approaches such as PSO, GAs, and the ABC. This improvement underscores the effectiveness of hybrid methodologies in optimizing task scheduling and enhancing computational efficiency, particularly in complex and resource-constrained cloud environments. These findings highlight the potential of hybrid algorithms in addressing the limitations of traditional scheduling techniques by balancing exploration and exploitation to improve overall system performance.
Table 4. Average results in makespan of experiments performed in phase 1.
The findings from the study were evaluated using Min, Max, Avg, and Std values. However, the assessments in Table 4, Table 5 and Table 6 were based on average makespan (Avg) results, as they provide a more balanced representation of algorithmic performance across different scenarios. The study highlights the performance differences of the GA, PSO, ABC, and the proposed hybrid GA–PSO algorithm under both normal and DDoS attack conditions.
A particularly noteworthy observation is the unexpected performance of the ABC algorithm. While the ABC exhibits higher makespan values in terms of Avg results, its resource utilization remains significantly lower compared to the other algorithms (Table 7, Table 8 and Table 9). This suggests that the ABC runs quickly but does not efficiently allocate system resources or distribute workloads evenly. The low resource utilization raises the question of whether the ABC benefits from an advantageous scheduling mechanism or whether an experimental inconsistency is influencing the results.
The irregularities observed in the ABC's performance warrant further investigation. Despite showing high makespan values, its lower resource utilization suggests an imbalance between execution speed and efficiency. This discrepancy may be attributed to a scheduling anomaly, an unintended modeling advantage, or a fundamental characteristic of the algorithm. As a result, it is crucial to determine whether the ABC's behavior is an inherent feature of its scheduling logic or an experimental inconsistency that requires further validation.
The role of the ABC in the comparative analysis also requires careful consideration. Excluding the ABC entirely could weaken the comparative strength of the study, as it would narrow the scope of evaluation to the GA, PSO, and GA–PSO. However, the ABC's unusually low resource utilization despite its high execution speed raises the possibility of an underlying anomaly in task execution.
A balanced approach would be to retain the ABC in the analysis while clearly addressing this anomaly, ensuring a comprehensive yet transparent evaluation.
Table 5. Average results in makespan of experiments performed in phase 2.
Table 6. Average results in makespan of experiments performed in phase 3.
Table 7. The usage rate of phase 1.
Table 8. The usage rate of phase 2.
Table 9. The usage rate of phase 3.
Meanwhile, the GA, PSO, and GA–PSO exhibit more consistent performance trends, making their comparisons more reliable. Removing the ABC from the assessment could limit the study's depth, but including it without verifying its results may introduce misleading conclusions. The most rigorous approach is to highlight the ABC's unusual performance while conducting additional validation to determine the accuracy and reliability of its reported outcomes.
Ultimately, a thorough investigation into the ABC's unexpected performance trends is essential to ensure an accurate evaluation of all algorithms. Without further verification, any comparative analysis involving the ABC could introduce uncertainty regarding the validity of the findings. By examining the interplay between execution time, resource allocation, and workload distribution, a clearer understanding of the ABC's role in the study can be achieved, maintaining both scientific integrity and analytical precision.
The results of this study were analyzed using Min, Max, Avg, and Std values. However, in Table 4, Table 5 and Table 6, the evaluation was based on average makespan (Avg) values, as they offer a more stable reference for comparing algorithmic performance under different conditions. The findings highlight the differences in scheduling efficiency among the GA, PSO, ABC, and the proposed hybrid GA–PSO algorithm in both normal and DDoS environments. One of the most striking observations is the unexpected performance of the ABC algorithm. While the ABC consistently demonstrates lower resource utilization compared to the GA, PSO, and GA–PSO (Table 7, Table 8 and Table 9), it simultaneously achieves the fastest execution times. This discrepancy suggests that the ABC completes tasks quickly but does not fully utilize available system resources, raising questions about whether its efficiency is due to optimized scheduling or an imbalance in workload distribution. Unlike PSO and the GA–PSO, which maintain higher resource utilization, the ABC prioritizes speed over computational resource engagement, making its scheduling mechanism notably different from the others.
The execution time analysis (Table 10, Table 11 and Table 12) further illustrates this distinction. The ABC consistently outperforms the other algorithms in terms of speed, achieving significantly shorter execution times. However, this rapid task completion does not necessarily indicate a more efficient scheduling strategy, as it may result from an uneven distribution of computational workload rather than true optimization. PSO and the GA–PSO, while slightly slower, demonstrate a more stable workload balance, making them preferable choices in scenarios where predictable and well-distributed execution is required.
Table 10. Execution time phase 1.
Table 11. Execution time phase 2.
Table 12. Execution time phase 3.
A critical finding of this study is the impact of DDoS attacks on scheduling performance. The expectation was that task allocation would remain consistent between normal and DDoS environments. This expectation held true for PSO, the GA–PSO, and the ABC, which maintained stable scheduling structures despite the attacks. However, the GA exhibited significant task reassignment variations under DDoS conditions, indicating its susceptibility to external disruptions. This instability underscores the GA’s vulnerability in adversarial conditions, reinforcing the GA–PSO’s advantages, as it mitigates the GA’s weaknesses while retaining PSO’s adaptive capabilities.
To further examine the consistency of scheduling under both normal and DDoS conditions, graphical representations were created for scenarios involving 50 cloudlets (10 iterations, 50 cloudlets; 50 iterations, 50 cloudlets; and 100 iterations, 50 cloudlets). Due to the large number of cloudlets, direct visualization may not be entirely clear. To address this, a color-based approach without algorithm labels was used, ensuring the visualization remains comprehensible while preserving meaningful comparisons.
The findings emphasize critical trade-offs between execution speed, resource utilization, and scheduling stability:
  • The ABC completes tasks at a much faster rate but does not efficiently allocate resources, raising concerns about workload distribution fairness.
  • PSO and the GA–PSO demonstrate superior resilience under DDoS conditions, making them more suitable for cloud environments where scheduling stability is essential.
  • The GA’s instability under attack conditions suggests the need for more robust scheduling mechanisms, particularly in environments where external disruptions are a risk factor.
Given these findings, further analysis is required to determine whether the ABC’s lower resource utilization is a natural characteristic of its scheduling logic or if it benefits from an unintended advantage in the execution model. Additionally, visualizing task scheduling consistency under normal and DDoS conditions is crucial for validating how each algorithm adapts to adversarial conditions.
Table 13 provides a comprehensive comparison of makespan results across different algorithms, emphasizing the enhanced performance of hybrid approaches in optimizing task scheduling. The results consistently indicate that hybrid algorithms, particularly the hybrid GA–PSO, outperform conventional methods by significantly reducing makespan in diverse computational settings.
Table 13. Average results in makespan for the different algorithms.
Previous research supports these findings. In a study conducted by Guo et al. [13], the hybrid stagnation PSO–GA achieved a makespan of 18.704, significantly outperforming standalone PSO (38.69) and other PSO variations, such as CM-PSO (51.53) and L-PSO (329.96). Likewise, findings from Alsaidy et al. [14] indicate that hybrid methods resulted in makespan values as low as 63.714, surpassing conventional scheduling techniques such as PSO (155.9) and heuristic-based approaches such as Min–Min (155.3) and Max–Min (172.4).
These results highlight the efficiency and adaptability of hybrid algorithms, even under varying conditions such as different population sizes, cloud infrastructures, and resource constraints. The key strength of hybrid methods lies in their ability to combine global search capabilities with local optimization techniques, allowing for better task distribution and reduced execution time. This balanced approach makes hybrid scheduling particularly well-suited for complex cloud computing environments, where task scheduling efficiency directly impacts overall system performance.
As shown in Table 13, the proposed hybrid GA–PSO outperforms conventional methods in terms of makespan performance across various simulation settings, confirming its effectiveness with regard to both optimization and adaptability.
Table 13 presents a comparative analysis between the makespan values reported in selected prior studies and the results obtained with the proposed hybrid GA–PSO algorithm under equivalent experimental conditions. For instance, Guo et al. [13] reported makespans of 38.69 s for PSO, 51.53 s for CM-PSO, and 329.96 s for L-PSO, while our hybrid GA–PSO achieved a significantly lower makespan of 20.17 s under the same workload and configuration. In the study by Bhagwan et al. [15], various CSO variants yielded makespans ranging from 151.23 s to 181.09 s, compared to the 161.51 s achieved by our approach. Alsaidy et al. [14] documented makespans between 155.3 s and 172.4 s for different PSO and heuristic variants, while the hybrid GA–PSO achieved a substantially lower value of 59.09 s. Furthermore, Tamilarasu et al. [6] reported makespans between 213.74 s and 245.14 s for the ICOATS and WHACO-TSM methods, compared to the 126.46 s achieved in our experiments. Likewise, Amalarethinam et al. [12] observed makespans of 142 s (HEFT) and 111 s (WSGA), whereas the hybrid GA–PSO completed scheduling tasks in only 47.93 s. Finally, Alsaidy et al. [14] obtained makespans ranging from 20.6 s to 22.0 s with the SJFP-PSO and LJFP-PSO methods, while our hybrid GA–PSO achieved an even lower makespan of 11.75 s, outperforming the best PSO variants. These results collectively demonstrate that, across multiple independent studies, the hybrid GA–PSO consistently outperforms prior techniques in terms of execution time. Moreover, the experimental conditions, simulation parameters, and workload settings were carefully synchronized with those used in previous studies, ensuring a direct and reliable comparison; the significant reductions in makespan therefore confirm that the proposed hybrid GA–PSO algorithm not only accelerates scheduling processes but also maintains robust performance even under dynamic and complex cloud computing environments.
Additionally, unlike previous studies that primarily assess performance in static conditions, this research focuses on dynamic and unpredictable cloud environments, demonstrating enhanced resilience in the face of disruptions. These findings not only align with existing literature but also expand upon it, underscoring the need for integrated, robust scheduling frameworks that can optimize performance in complex cloud computing systems (Table 13).
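The makespan objective underlying these comparisons can be sketched in a few lines of Python. This is an illustrative computation only, not the paper's CloudSim configuration: the `makespan` function and all task sizes (MI) and VM capacities (MIPS) below are hypothetical values chosen for clarity.

```python
# Illustrative makespan computation for a task-to-VM assignment.
# All numeric values are hypothetical, not taken from the experiments.

def makespan(assignment, task_mi, vm_mips):
    """assignment[i] is the index of the VM that runs task i.
    Returns the maximum completion time C_m over all VMs."""
    completion = [0.0] * len(vm_mips)
    for task, vm in enumerate(assignment):
        # Execution time of one task = task length (MI) / VM speed (MIPS).
        completion[vm] += task_mi[task] / vm_mips[vm]
    return max(completion)

task_mi = [4000, 6000, 2000, 8000]   # cloudlet lengths in MI
vm_mips = [1000, 2000]               # VM capacities in MIPS
# VM0: 4.0 s + 2.0 s = 6.0 s; VM1: 3.0 s + 4.0 s = 7.0 s
print(makespan([0, 1, 0, 1], task_mi, vm_mips))  # prints 7.0
```

A scheduler (GA, PSO, ABC, or the hybrid) searches over `assignment` vectors to minimize this maximum; the percentage gaps reported in Table 13 are differences in exactly this quantity.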

4.4. Alternative Mechanisms for Performance Enhancement

Beyond traditional hybrid metaheuristics, other complementary strategies can further improve exploration, convergence speed, and robustness against local optima.
For example:
  • Chaos-based optimization: adapts control parameters via deterministic chaotic maps.
If the PSO inertia weight were dynamically updated with logistic or tent chaotic maps, oscillations between iterations would diminish, and the convergence rate could increase.
  • Stochastic models: introduce probabilistic operators to escape suboptimal traps.
Simulated-annealing-style probabilistic acceptance rules can increase the rate of escape from local-optimum traps by allowing PSO particles to occasionally accept worse solutions.
  • Diversity enhancement techniques: periodically generate long jumps to explore new regions.
Our study focuses on the intensification aspect of the GA–PSO hybrid. Adding diversity strategies such as random restarts or opposition-based learning could promote the discovery of new solution sets during stagnation periods and, based on our findings, further reduce the deviation in worst-case makespan.
  • Memetic algorithms: combining global search with local refinement (hill-climbing) can fine-tune promising solutions more effectively.
If short-term hill-climbing were applied to selected individuals while the hybrid performs its global PSO–GA search, the best makespan values could be further reduced.
While our current work focuses on GA–PSO hybridization, future model extensions will integrate and compare these mechanisms—individually and in combination—to assess their impact on scheduling performance in both normal and adversarial cloud environments.
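Two of the mechanisms listed above, chaotic parameter control and simulated-annealing-style acceptance, can be sketched as standalone operators that would plug into a PSO loop. This is a minimal illustration rather than an implementation from this study; the function names, the logistic map with r = 4, and the inertia range (`w_min = 0.4`, `w_max = 0.9`) are common choices assumed here for the sketch.

```python
import math
import random

def logistic_map(x, r=4.0):
    """One step of the logistic chaotic map on (0, 1)."""
    return r * x * (1.0 - x)

def chaotic_inertia(x, w_min=0.4, w_max=0.9):
    """Map the chaotic state x into the usual PSO inertia-weight range."""
    return w_min + (w_max - w_min) * x

def sa_accept(delta, temperature, rng=random):
    """Metropolis rule: always accept improvements (delta <= 0);
    accept a worse solution with probability exp(-delta / T)."""
    return delta <= 0 or rng.random() < math.exp(-delta / temperature)

# Inside a PSO iteration, the inertia weight would be refreshed chaotically:
x = 0.7                      # chaotic seed; avoid the map's fixed points (0, 0.25, 0.5, 0.75)
for t in range(3):
    x = logistic_map(x)
    w = chaotic_inertia(x)
    # ... use w in the velocity update; apply sa_accept() when a particle
    #     considers replacing its personal best with a worse candidate ...
```

The design intent is that the chaotic map replaces a fixed or linearly decaying inertia schedule, while the acceptance rule occasionally tolerates a makespan increase early on (high temperature) and becomes strict as the temperature cools.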

5. Conclusions

Cloud computing has emerged as a fundamental pillar of modern technology, enabling scalable data management, high computational power, and operational flexibility across various industries. However, its widespread adoption also introduces significant challenges, particularly in terms of security and performance optimization. Threats such as DDoS attacks and malware (e.g., viruses) present substantial risks by degrading system efficiency and potentially disrupting service availability. These challenges emphasize the critical need for robust and secure solutions to maintain the reliability and efficiency of cloud infrastructure.
This study examined task scheduling in cloud environments through the application of metaheuristic algorithms, with a particular focus on the impact of DDoS scenarios. The findings reveal that while security threats negatively affect system performance, their influence varies in scope and severity. DDoS attacks primarily overload network resources, causing immediate yet temporary disruptions that manifest as delays and inefficiencies. However, they also exhaust essential computational resources such as CPU and RAM, leading to sustained performance degradation, particularly in scenarios requiring prolonged execution cycles.
The hybrid GA–PSO approach showed consistent and resilient performance across both normal and adversarial cloud scenarios. By effectively combining global exploration and local optimization strategies, hybrid metaheuristic approaches enhance task scheduling efficiency and resilience in the face of malicious disruptions. In contrast, conventional algorithms such as ABC, PSO, and GA exhibited higher susceptibility to resource-intensive attacks, highlighting the need for further enhancements to improve their robustness and efficiency in dynamic cloud environments.
From a practical standpoint, this study reinforces the need for designing cloud systems that are both secure and resource-efficient. Strengthening resilience against evolving security threats while optimizing resource allocation can lead to reduced operational costs and improved service reliability. The findings presented in this study address a critical gap in the literature by providing a detailed analysis of DDoS impact on task scheduling, offering valuable insights into the intersection of cloud security and performance optimization.
In conclusion, this research not only emphasizes the importance of balancing security and performance but also offers a unique perspective by evaluating scheduling algorithms specifically under adversarial conditions such as DDoS attacks. The integration of a hybrid GA–PSO algorithm, with its focus on maintaining scheduling efficiency and structure under cyber stress, presents a novel approach in cloud security and task scheduling optimization. These findings contribute to the development of more resilient cloud systems and offer practical strategies for improving both performance and security in the face of dynamic and evolving threats.
Future work will integrate real-world DDoS traffic datasets to rigorously validate the robustness of the proposed algorithms and expand the evaluation framework to include additional metrics—such as energy consumption, throughput, and SLA adherence—thereby offering a more comprehensive assessment of task scheduling performance in adversarial cloud environments. In addition, to address uncertainty in user request patterns, we plan to implement and evaluate the chameleon and remora search optimization algorithm (CRSOA) [23] under stochastic workload arrivals; although CRSOA presents a promising solution for handling task uncertainty, its integration was beyond the scope of the present study and will be undertaken as part of our ongoing framework enhancements.

Author Contributions

Conceptualization, F.K. and A.B.; methodology, F.K. and A.B.; software, F.K.; validation, F.K. and A.B.; formal analysis, F.K.; investigation, F.K.; resources, F.K.; data curation, F.K.; writing—original draft preparation, F.K.; writing—review and editing, F.K. and A.B.; visualization, F.K.; supervision, A.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

Nomenclature

Symbol/Abbreviation: Description
MI: Million Instructions (task size)
MIPS: Million Instructions Per Second (VM computational capacity)
T_m: Set of cloudlet sizes {MI_0, MI_1, …, MI_k} assigned to VM m
C_m: Total execution time on VM m
Makespan: Maximum completion time across all VMs
N: Total number of independent simulation runs
C_i: Makespan value obtained in the i-th run
CPU: Central Processing Unit
RAM: Random Access Memory
BW: Bandwidth
VM: Virtual Machine
VMM: Virtual Machine Monitor
OS: Operating System
DC: Data Center
DDoS: Distributed Denial-of-Service
IaaS: Infrastructure-as-a-Service
NP-hard: Nondeterministic Polynomial-time hard
SLA: Service Level Agreement
PSO: Particle Swarm Optimization
GA: Genetic Algorithm
ABC: Artificial Bee Colony
FA: Firefly Algorithm
GWO: Grey Wolf Optimization
DE: Differential Evolution
CMA-ES: Covariance Matrix Adaptation Evolution Strategy
SVD: Singular Value Decomposition
RAD: Runtime Reliability Anomalies Detection
EDoS: Economic Denial of Sustainability
FES: Function Evaluations (termination criterion)
ETC: Expected Time to Completion
ETC_MATRIX: Matrix of Expected Execution Times

References

  1. Aldawood, M. Secure Virtual Machines Allocation in Cloud Computing Environments. Ph.D. Thesis, University of Warwick, Coventry, UK, 2021. [Google Scholar]
  2. Kanbar, A.B.; Faraj, K. Region aware dynamic task scheduling and resource virtualization for load balancing in IoT–fog multi-cloud environment. Future Gener. Comput. Syst. 2022, 137, 70–86. [Google Scholar] [CrossRef]
  3. Prathyusha, D.J.; Govinda, K. Securing virtual machines from DDoS attacks using hash-based detection techniques. Multiagent Grid Syst. 2019, 15, 121–135. [Google Scholar] [CrossRef]
  4. Karthik, S.; Shah, J.J. Analysis of simulation of DDOS attack in cloud. In Proceedings of the International Conference on Information Communication and Embedded Systems (ICICES2014), New York, NY, USA, 27–28 February 2014; pp. 1–5. [Google Scholar]
  5. Mangalampalli, S.; Karri, G.R.; Satish, G.N. Efficient workflow scheduling algorithm in cloud computing using whale optimization. Procedia Comput. Sci. 2023, 218, 1936–1945. [Google Scholar] [CrossRef]
  6. Tamilarasu, P.; Singaravel, G. Quality of service aware improved coati optimization algorithm for efficient task scheduling in cloud computing environment. J. Eng. Res. 2023, 12, 768–780. [Google Scholar] [CrossRef]
  7. Sood, S.K.; Deep Singh, K. Identification of a malicious optical edge device in the SDN-based optical fog/cloud computing network. J. Opt. Commun. 2021, 42, 91–102. [Google Scholar] [CrossRef]
  8. Shishido, H.Y.; Estrella, J.C.; Toledo, C.F.M.; Reiff-Marganiec, S. A CloudSim extension for evaluating security overhead in workflow execution in clouds. In Proceedings of the 2018 Sixth International Symposium on Computing and Networking (CANDAR), New York, NY, USA, 23–27 November 2018; pp. 174–180. [Google Scholar]
  9. Nayebalsadr, S.; Analoui, M. Simulating the Resource Freeing Attack: Using Cloudsim Simulator. J. Comput. Secur. 2015, 2, 293–302. [Google Scholar]
  10. Bazm, M.; Khatoun, R.; Begriche, Y.; Khoukhi, L.; Chen, X.; Serhrouchni, A. Malicious virtual machines detection through a clustering approach. In Proceedings of the 2015 International Conference on Cloud Technologies and Applications (CloudTech), New York, NY, USA, 2–4 June 2015; pp. 1–8. [Google Scholar]
  11. Manasrah, A.M.; Ba Ali, H. Workflow scheduling using hybrid GA-PSO algorithm in cloud computing. Wirel. Commun. Mob. Comput. 2018, 2018, 1934784. [Google Scholar] [CrossRef]
  12. Amalarethinam, D.G.; Beena, T.L.A. Workflow scheduling for public cloud using genetic algorithm (WSGA). IOSR J. (IOSR J. Comput. Eng.) 2016, 1, 23–27. [Google Scholar]
  13. Guo, L.; Zhao, S.; Shen, S.; Jiang, C. Task scheduling optimization in cloud computing based on heuristic algorithm. J. Netw. 2012, 7, 547. [Google Scholar] [CrossRef]
  14. Alsaidy, S.A.; Abbood, A.D.; Sahib, M.A. Heuristic initialization of PSO task scheduling algorithm in cloud computing. J. King Saud Univ. Comput. Inf. Sci. 2022, 34, 2370–2382. [Google Scholar] [CrossRef]
  15. Bhagwan, J.; Kumar, S. Independent task scheduling in cloud computing using meta-heuristic HC-CSO algorithm. Int. J. Adv. Comput. Sci. Appl. 2021, 12, 281–291. [Google Scholar] [CrossRef]
  16. Verma, A.; Kaushal, S. Cost-time efficient scheduling plan for executing workflows in the cloud. J. Grid Comput. 2015, 13, 495–506. [Google Scholar] [CrossRef]
  17. Wang, L.; Chen, S.; Zhang, X.; Liu, J. Runtime reliability fractional distribution change analytics against cloud-based systems DDoS attacks. J. Syst. Softw. 2025, 220, 112265. [Google Scholar] [CrossRef]
  18. Hezavehi, S.M.; Rahmani, R. Interactive anomaly-based DDoS attack detection method in cloud computing environments using a third-party auditor. J. Parallel Distrib. Comput. 2023, 178, 82–99. [Google Scholar] [CrossRef]
  19. Huang, H.; Sun, B.; Hu, L. A task offloading approach based on risk assessment to mitigate edge DDoS attacks. Comput. Secur. 2024, 140, 103789. [Google Scholar] [CrossRef]
  20. Mason, K.; Duggan, M.; Barrett, E.; Duggan, J.; Howley, E. Predicting host CPU utilization in the cloud using evolutionary neural networks. Future Gener. Comput. Syst. 2018, 86, 162–173. [Google Scholar] [CrossRef]
  21. Behera, I.; Sobhanayak, S. HTSA: A novel hybrid task scheduling algorithm for heterogeneous cloud computing environment. Simul. Model. Pract. Theory 2024, 137, 103014. [Google Scholar] [CrossRef]
  22. Behera, I.; Sobhanayak, S. Task scheduling optimization in heterogeneous cloud computing environments: A hybrid GA-GWO approach. J. Parallel Distrib. Comput. 2024, 183, 104766. [Google Scholar] [CrossRef]
  23. Pabitha, P.; Nivitha, K.; Gunavathi, C.; Panjavarnam, B. A chameleon and remora search optimization algorithm for handling task scheduling uncertainty problem in cloud computing. Sustain. Comput. Inform. Syst. 2024, 41, 100944. [Google Scholar] [CrossRef]
  24. Bhardwaj, A.; Mangat, V.; Vig, R.; Halder, S.; Conti, M. Distributed denial of service attacks in cloud: State-of-the-art of scientific and commercial solutions. Comput. Sci. Rev. 2021, 39, 100332. [Google Scholar] [CrossRef]
  25. Kara, I.; Ok, M.; Ozaday, A. Characteristics of understanding URLs and domain names features: The detection of phishing websites with machine learning methods. IEEE Access 2022, 10, 124420–124428. [Google Scholar] [CrossRef]
  26. Alsadie, D. A metaheuristic framework for dynamic virtual machine allocation with optimized task scheduling in cloud data centers. IEEE Access 2021, 9, 74218–74233. [Google Scholar] [CrossRef]
  27. Prity, F.S.; Uddin, K.A.; Nath, N. Exploring swarm intelligence optimization techniques for task scheduling in cloud computing: Algorithms, performance analysis, and future prospects. Iran J. Comput. Sci. 2024, 7, 337–358. [Google Scholar] [CrossRef]
  28. Saif, F.A.; Latip, R.; Hanapi, Z.M.; Shafinah, K. Multi-objective grey wolf optimizer algorithm for task scheduling in cloud-fog computing. IEEE Access 2023, 11, 20635–20646. [Google Scholar] [CrossRef]
  29. Nabi, S.; Ahmad, M.; Ibrahim, M.; Hamam, H. AdPSO: Adaptive PSO-based task scheduling approach for cloud computing. Sensors 2022, 22, 920. [Google Scholar] [CrossRef]
  30. Sandhu, R.; Faiz, M.; Kaur, H.; Srivastava, A.; Narayan, V. Enhancement in performance of cloud computing task scheduling using optimization strategies. Clust. Comput. 2024, 27, 6265–6288. [Google Scholar] [CrossRef]
  31. Abraham, O.L.; Ngadi, M.A.; Sharif, J.B.M.; Sidik, M.K.M. Multi-Objective Optimization Techniques in Cloud Task Scheduling: A Systematic Literature Review. IEEE Access 2025, 13, 12255–12291. [Google Scholar] [CrossRef]
