Article

Network-, Cost-, and Renewable-Aware Ant Colony Optimization for Energy-Efficient Virtual Machine Placement in Cloud Datacenters

by Ali Mohammad Baydoun 1 and Ahmed Sherif Zekri 2,*
1 Department of Mathematics & Computer Science, Beirut Arab University, Beirut 1107, Lebanon
2 Department of Mathematics & Computer Science, Alexandria University, Alexandria 21526, Egypt
* Author to whom correspondence should be addressed.
Future Internet 2025, 17(6), 261; https://doi.org/10.3390/fi17060261
Submission received: 2 May 2025 / Revised: 6 June 2025 / Accepted: 9 June 2025 / Published: 14 June 2025

Abstract
Virtual machine (VM) placement in cloud datacenters is a complex multi-objective challenge involving trade-offs among energy efficiency, carbon emissions, and network performance. This paper proposes NCRA-DP-ACO (Network-, Cost-, and Renewable-Aware Ant Colony Optimization with Dynamic Power Usage Effectiveness (PUE)), a bio-inspired metaheuristic that optimizes VM placement across geographically distributed datacenters. The approach integrates real-time solar energy availability, dynamic PUE modeling, and multi-criteria decision-making to enable environmentally and cost-efficient resource allocation. The experimental results show that NCRA-DP-ACO reduces power consumption by 13.7%, carbon emissions by 6.9%, and live VM migrations by 48.2% compared to state-of-the-art methods while maintaining Service Level Agreement (SLA) compliance. These results indicate the algorithm’s potential to support more environmentally and cost-efficient cloud management across dynamic infrastructure scenarios.

Graphical Abstract

1. Introduction

Cloud computing operates through thousands of globally distributed datacenters and offers three main service models: Software as a Service (SaaS), where applications are hosted and provided by a third party; Platform as a Service (PaaS), providing software and hardware tools to the user; and Infrastructure as a Service (IaaS), which enables users to access resources such as storage and memory through virtual machines (VMs). These services are offered on a subscription basis [1]. A primary issue in cloud datacenters is their high power consumption, which increases both the environmental impact and cost [2]. Accordingly, some countries have implemented carbon taxes on CO2 emissions to encourage sustainability [3]. Datacenter total energy consumption is typically modeled as the sum of IT equipment power (such as servers and switches) and infrastructure overhead (including cooling systems and lighting). The Power Usage Effectiveness (PUE) metric quantifies how efficiently a datacenter uses energy by comparing the total power to the power consumed solely by IT equipment. The PUE captures cooling, UPS losses, and other facility overheads; a lower PUE (closer to 1.0) indicates greater energy efficiency.
Cloud computing relies on virtualization, which allows a single physical machine (PM) to run multiple VMs simultaneously, increasing resource usage, lowering costs, and simplifying server management. However, challenges related to power consumption persist, primarily for cooling, computing resources, and network components. Cooling alone accounts for about 40% of a datacenter's energy budget [4]. Datacenters worldwide consume about 3% of all the electricity generated globally, corresponding to an annual consumption exceeding 416 terawatt-hours. For example, the U.S. alone consumed more than 73 billion kWh in 2020 [5], illustrating the significance of addressing energy consumption.
Power management for datacenters can take either hardware or software forms. One important software method is the optimization of VM allocation to PMs, known as VM placement. VMs can run concurrently on a virtualized platform, competing among themselves for physical resources that are scheduled by a hypervisor [6]. Active research is being conducted on designing efficient scheduling algorithms for VM placement. Cloud providers must carefully design VM-to-PM mapping strategies to ensure energy efficiency while meeting user requirements, whose specifications are mostly captured in Service Level Agreements (SLAs). VM placement techniques have evolved, but existing approaches usually employ a single-objective optimization strategy and consequently neglect other objectives, resulting in suboptimal trade-offs. For example, energy-aware algorithms may neglect network performance, while network-aware strategies may increase energy consumption.
Green VM placement aims to align workload execution with periods or locations of abundant renewable energy and lower carbon intensity. The integration of renewable energy sources into VM placement has not been adequately examined in the literature. In this paper, we propose NCRA-DP-ACO (Network-, Cost-, and Renewable-Aware Dynamic-PUE ACO), a multi-objective framework that jointly balances energy consumption, carbon emissions, network performance, and cost efficiency.
Traditional exact optimization methods (e.g., Integer Linear Programming) may guarantee optimality, but they become computationally infeasible as the scale of the VM placement problems grows, especially in large and dynamic datacenter environments. Because VM placement is NP-hard, exact real-time solutions are impractical. Therefore, heuristic and metaheuristic techniques are favored due to their ability to efficiently explore large solution spaces and produce near-optimal solutions within acceptable time limits.
Ant Colony Optimization (ACO) is a flexible, decentralized metaheuristic inspired by ant foraging behavior. Multiple candidate solutions are constructed in parallel, and pheromone-guided probabilistic selection helps avoid premature convergence to suboptimal solutions. These traits make ACO well suited for multi-objective VM placement (VMP). In Section 4, we describe the algorithm framework, the pheromone-update process, and the multi-objective heuristic development in detail.
Our primary contribution is a network-, cost-, and renewable-aware VMP framework that leverages ACO to dynamically optimize VM allocation across multiple datacenters. Unlike traditional placement strategies, the proposed NCRA-DP-ACO algorithm integrates real-world solar energy availability, carbon pricing, the network transmission cost, and the dynamic PUE per datacenter into a unified multi-objective model. We evaluate placement decisions using four real-world U.S. datacenters with a heterogeneous infrastructure and variable solar availability. Using authentic Metacentrum workload traces [7], we show that NCRA-DP-ACO reduces energy consumption by 13.7%, carbon emissions by 6.9%, and live VM migrations by 48.2% versus a baseline ACO approach while preserving SLA compliance. The remainder of this paper is organized as follows. Section 2 reviews related work. Section 3 details the system model and problem formulation. Section 4 presents our NCRA-DP-ACO design. Section 5 describes the experimental setup and result analysis. Finally, Section 6 concludes this paper and outlines future directions.

Materials and Methods

During the preparation of this work, the authors used ChatGPT-4o to edit and polish the writing of the final manuscript. After using this tool, the authors reviewed and edited the content as needed, and they take full responsibility for the content of this publication.

2. Related Work

VMP focuses on optimizing resource utilization while minimizing energy consumption through several scheduling algorithms. Efficient VM placement is central to managing virtualized environments, directly impacting energy usage and system performance. It involves trade-offs among high resource utilization, low energy consumption, and overall system performance. Research in the area continuously develops heuristics and metaheuristics to solve the complex problems of VMP.
Over time, researchers have proposed adaptive models [8], approaches from graph theory [9], and communication-aware metaheuristic algorithms [10]. These techniques target energy savings in network traffic and resource optimization while considering SLAs, communication patterns, and traffic exchange between VMs.
Among metaheuristic algorithms, Particle Swarm Optimization (PSO) has been applied to VMP, modeling each solution as a particle moving toward optimal placement by learning from local and global experiences. These techniques have been applied to enhance VM placement with respect to energy consumption and network bandwidth [11,12,13]. However, PSO alone can be ineffective for large-scale VM placement because it often converges prematurely to local optima and is highly sensitive to parameter settings, making it difficult to maintain robust performance as the problem size grows.
Simulated Annealing (SA), another metaheuristic, originates from an analogous annealing technique in metallurgy. It occasionally accepts uphill moves, allowing it to explore the solution space freely without getting trapped in local optima. Although effective, SA's inherently slow convergence limits its applicability in real-time decision-making environments where decisions must be made quickly [14,15,16].
Evolutionary algorithms like Genetic Algorithms (GAs) have also been widely explored. GAs mimic evolutionary processes, applying selection, crossover, and mutation to evolve optimal placement strategies. An improved cluster-based GA was proposed in [17], aiming to reduce the number of active PMs needed for VM hosting by enhancing population clustering, achieving improved crossover performance and energy efficiency. Another advanced version, I-GA [18], integrated a virtual hierarchy architecture into the GA framework to decrease energy consumption and increase resource availability. Simulation results showed that datacenter energy efficiency improved remarkably while high availability was maintained, compared with benchmark results from other VMP algorithms. However, GA approaches are often computationally expensive, limiting their use in real-time dynamic cloud environments.
Multi-objective evolutionary algorithms have also been studied. Gopu et al. [19] applied NSGA-III for energy-efficient VMP in distributed clouds, minimizing resource wastage, power consumption, and network transmission delay simultaneously. Yet, evolutionary methods often suffer from long convergence times and require careful parameter tuning.
ACO algorithms emerge as a strong alternative for optimizing VM placement. These algorithms are known for their efficient information exchange, adaptability, and ability to handle multiple objectives. Such characteristics make ACO well suited for the changing conditions of datacenter environments, where VM placement must be flexible. ACO algorithms can optimize VM placement by considering several factors, including energy consumption, communication costs, and quality-of-service requirements. MoOuACO [20] applies multi-objective ACO to minimize unused resources and communication costs by co-locating cooperating VMs; the results show improvements in both communication costs and energy consumption. ACOBF [21] combines ACO with bacterial foraging to enhance energy efficiency and resource utilization; however, it focuses on task-level scheduling without addressing VM placement dynamics and scalability across multi-datacenter environments. AP-ACO [22] enhances ACO convergence and search capabilities via adaptive parameter settings but focuses on traffic-aware placement without renewable factors. Hybrid ACO+Sine Cosine [23] improves exploration and exploitation for VM placement, targeting power consumption and resource wastage, yet it lacks renewable energy and SLA constraints. Inverted ACO (IACO) [24] improves cloud workflow scheduling, reducing the total execution time and operational cost, but it does not incorporate renewable energy sources or SLA constraints. ETA-ACO [25] optimized both the power consumption of PMs and networking components concurrently using a weighted sum, outperforming several heuristics and metaheuristics. ACOGA [26] combines ACO and GA principles in a communication-aware framework to optimize the total traffic flow (TTF) and reduce the number of active servers. Improved ACO for virtual resource scheduling [27] addresses node distribution challenges but lacks broader environmental and cost objectives, limiting its applicability to green cloud computing.
In this paper, we propose a novel scheme that incorporates renewable energy sources (solar) into VM placement through an ACO-based algorithm using solar irradiance data. Our energy-aware algorithm prioritizes minimizing energy consumption, supplemented by bandwidth optimization. We evaluate metrics such as the carbon footprint, energy consumption, and total cost to ensure both environmental and economic benefits. In addition to this, we implement VM migration for server consolidation to improve resource utilization, contributing to an effective and more sustainable cloud computing operation.

3. Problem Formulation

VM placement involves assigning VMs to PMs, either for initial placement or relocation after migration. When a user requests a VM, the scheduling algorithm looks for suitable hosts based on resource requirements such as CPU, RAM, bandwidth, and storage. CPU utilization is the primary dynamic factor, while other resources are typically fixed. To initiate a VM, the virtual machine image is sent or mapped to a PM using a predefined template, and algorithms determine whether the VM can be admitted and where it should be executed [28].
Table 1 summarizes the symbols and notations used throughout the optimization model. Assume there are $N$ VMs and $M$ PMs. The set of VMs is denoted $VM = \{VM_1, VM_2, \ldots, VM_N\}$, and the set of PMs is denoted $PM = \{PM_1, PM_2, \ldots, PM_M\}$. Each VM $VM_j \in VM$ has CPU, RAM, and bandwidth ($BW$) requirements represented by $VM_j^{CPU}$, $VM_j^{RAM}$, and $VM_j^{BW}$, respectively. Similarly, each server $PM_i \in PM$ has CPU, RAM, and BW capacities denoted $PM_i^{CPU}$, $PM_i^{RAM}$, and $PM_i^{BW}$, respectively.
Figure 1 shows the overall system architecture. It illustrates the interaction between the cloud broker, datacenters, Cloud Information System (CIS), and renewable energy sources. The objective is to find an assignment solution, denoted as Sa, with the minimum number of active servers, aiming to improve energy efficiency, reduce the network communication cost, and reduce the carbon cost. In the solution, each VM is placed on one and only one server.
The assignment is represented by a zero–one adjacency matrix $X$, constrained by Equation (1), where element $x_{ij}$ indicates whether $VM_j$ is assigned to $PM_i$. If $VM_j$ is placed on server $PM_i$, then $x_{ij} = 1$; otherwise, $x_{ij} = 0$.
$$\sum_{i=1}^{M} x_{ij} = 1, \quad \forall j \in VM \qquad (1)$$
It is important to ensure that each server has sufficient resources to meet the demand of all VMs assigned to it. It is assumed that no VM requires more resources than the capacity of a single server, and VMs cannot be assigned across servers. Equations (2)–(4) define the CPU, RAM, and BW constraints for server capacity.
$$\sum_{j=1}^{N} VM_j^{CPU} \cdot x_{ij} \le PM_i^{CPU}, \quad \forall i \in PM \qquad (2)$$
$$\sum_{j=1}^{N} VM_j^{RAM} \cdot x_{ij} \le PM_i^{RAM}, \quad \forall i \in PM \qquad (3)$$
$$\sum_{j=1}^{N} VM_j^{BW} \cdot x_{ij} \le PM_i^{BW}, \quad \forall i \in PM \qquad (4)$$
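To make these constraints concrete, the following minimal Python sketch (with hypothetical data structures; it is an illustration, not the paper's CloudSimPlus implementation) checks whether placing one more VM on a server respects Equations (2)–(4):
```python
from dataclasses import dataclass

@dataclass
class Resources:
    cpu: float  # e.g., total MIPS
    ram: float  # MB
    bw: float   # Mbps

def fits(capacity: Resources, hosted: list[Resources], vm: Resources) -> bool:
    """Equations (2)-(4): total demand of hosted VMs plus the candidate VM
    must not exceed the server's CPU, RAM, and bandwidth capacities."""
    return (sum(v.cpu for v in hosted) + vm.cpu <= capacity.cpu and
            sum(v.ram for v in hosted) + vm.ram <= capacity.ram and
            sum(v.bw for v in hosted) + vm.bw <= capacity.bw)
```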

3.1. Server Power Model

Each server in the system has a different capacity to host VMs based on its configuration and the sizes of the VMs. The power consumption of a server is influenced by the incoming load it receives, which can vary over time depending on the scheduling policy. The relationship between server power consumption and CPU utilization can exhibit different patterns, such as constant, cubic, or quadratic [29]. The goal of energy-efficient server design is to achieve energy proportionality, where servers consume power only when there is a workload present. In contemporary servers, the idle power consumption ($PM_i^{minPower}$) is typically half of the peak power consumption ($PM_i^{maxPower}$). In this work, we model the relationship between server power consumption and server utilization using data derived from the SpecPower benchmark [3]. The benchmark data reveal that as server utilization increases, the total power drawn by the server follows a linear growth pattern. Consequently, in this paper, we directly link server utilization to CPU utilization as in Equation (5), recognizing their close relationship.
$$PM_{i,t}^{Power} = PM_i^{minPower} + \left( PM_i^{maxPower} - PM_i^{minPower} \right) \times PM_{i,current}^{CPU} \qquad (5)$$
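As a concrete illustration, Equation (5) amounts to a one-line function; the wattage figures in the usage example are hypothetical, chosen only to reflect the idle-at-half-peak observation above:
```python
def server_power(min_power_w: float, max_power_w: float, cpu_util: float) -> float:
    """Equation (5): power grows linearly with CPU utilization (cpu_util in [0, 1])."""
    return min_power_w + (max_power_w - min_power_w) * cpu_util

# Hypothetical server idling at half of its 300 W peak, running at 40% utilization:
print(server_power(150.0, 300.0, 0.4))  # -> 210.0 W
```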

3.2. Datacenter Power Efficiency

The PUE metric is a valuable parameter used to assess the power utilization of a datacenter [3]. Within a datacenter, the energy consumed by non-IT equipment sources like cooling systems, power supplies, and lighting is categorized as overhead power (calculated through Equations (6) and (7)). The PUE is calculated by dividing the total power consumption of the datacenter (Pdc) by the power consumption of the IT equipment (PIT).
$$PUE = \frac{Total\ Power}{IT\ Devices'\ Power} = \frac{P_{dc}}{P_{IT}} \qquad (6)$$
This ratio provides insights into how efficiently a datacenter utilizes power resources. A PUE value greater than one indicates the presence of overhead power, while a value of one represents an ideal scenario where all power consumption is attributed to IT equipment.
In this study, we consider both the increase in IT power consumption and the impact of overhead power after the initial placement of VMs. Instead of using a fixed PUE value, we adopt a dynamic PUE model proposed by [3] to accurately calculate the overhead power and minimize energy consumption in datacenters.
$$PUE(U_t, H_t) = 1 + \frac{0.2 + 0.01 \cdot U_t + 0.01 \cdot U_t \cdot H_t}{U_t} \qquad (7)$$
The constants 0.2 and 0.01 are adopted directly from [3]. In their work, these values were obtained by linear regression on measured PUE data collected across four geographically dispersed U.S. datacenters, covering various utilization levels and ambient temperatures.
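A minimal sketch of the dynamic PUE computation follows, assuming our reconstruction of Equation (7) with $U_t$ as the IT utilization fraction and $H_t$ as the ambient temperature; it also shows how the resulting PUE converts an IT power increase into overhead power:
```python
def dynamic_pue(u_t: float, h_t: float) -> float:
    """Equation (7): dynamic PUE from IT utilization u_t (0, 1] and ambient
    temperature h_t; the constants 0.2 and 0.01 follow the regression in [3]."""
    return 1.0 + (0.2 + 0.01 * u_t + 0.01 * u_t * h_t) / u_t

def overhead_power(it_power_w: float, u_t: float, h_t: float) -> float:
    """Non-IT (cooling, UPS, lighting) power implied by the PUE: P_IT * (PUE - 1)."""
    return it_power_w * (dynamic_pue(u_t, h_t) - 1.0)
```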

Integration of Renewable Energy and Dynamic PUE

To achieve sustainable cloud operations, our framework embeds real-time solar energy availability and dynamic PUE modeling into the ACO algorithm. These two inputs influence both the heuristic evaluation and objective function during VM placement decisions.
  • Dynamic PUE Modeling
This work adopts a dynamic PUE model that varies with external temperature and workload intensity. Overhead power, calculated using Equation (7), is updated in each placement cycle based on the datacenter’s current state. This adjustment allows the algorithm to penalize inefficient datacenters with higher overhead power due to less favorable operating conditions.
Figure 2 below illustrates the flow of how solar energy and dynamic PUE modeling are integrated into the ACO optimization process:
  • Solar Energy Availability
Each datacenter is associated with hourly solar irradiance profiles obtained from national datasets. The available solar energy is converted into usable power based on panel area and efficiency parameters and is treated as a free and zero-emission energy source. For each placement decision, the algorithm first attempts to satisfy demand using the available solar power. Any remaining unmet demand is assumed to be drawn from the grid, thereby incurring energy costs and carbon emissions. During placement, the algorithm prioritizes solutions that maximize solar energy utilization, especially when this leads to a reduced total cost and carbon footprint. This green-energy-first principle drives inter-datacenter selection within the optimization process; a minimal code sketch of this accounting follows this list.
  • Incorporating Wind Energy into the Renewable-Aware Framework
Although this work concentrates on solar irradiance as the primary renewable input, the underlying framework can be readily extended to accommodate additional energy sources such as wind. In practice, we would augment the hourly solar irradiance curves described in Section 5.1.3 with wind-speed profiles obtained from meteorological datasets. A standard turbine-power conversion model would then translate wind speed into usable power output. By defining a composite renewable-availability metric, the placement algorithm can prioritize on-site renewables and only draw from the grid to satisfy any residual demand. In this way, datacenters would be ranked by a combined renewable curve, further reducing reliance on grid power and lowering carbon emissions, particularly at times when wind energy is available but solar irradiance is low. Although we do not incorporate wind energy in the current experiments, this extension highlights the model’s potential for more general multi-renewable integration in future work.
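As a concrete illustration of the green-energy-first accounting described above, the following Python sketch (function names and flat-rate constants are our illustrative assumptions, not the paper's implementation) serves demand from on-site solar first and charges only the grid remainder with energy and carbon costs:
```python
def split_energy(demand_kwh: float, solar_kwh: float) -> tuple[float, float]:
    """Serve demand from available on-site solar first; the rest comes from the grid."""
    from_solar = min(demand_kwh, solar_kwh)
    return from_solar, demand_kwh - from_solar

def grid_cost_and_carbon(grid_kwh: float, price_usd_per_kwh: float,
                         carbon_ton_per_kwh: float, tax_usd_per_ton: float):
    """Only grid energy incurs cost and emissions; solar is free and zero-emission."""
    energy_cost = grid_kwh * price_usd_per_kwh
    carbon_cost = grid_kwh * carbon_ton_per_kwh * tax_usd_per_ton
    return energy_cost, carbon_cost

# Example: 100 kWh of demand against 60 kWh of solar, at illustrative rates
solar, grid = split_energy(100.0, 60.0)
print(grid_cost_and_carbon(grid, 0.07, 0.0005, 50.0))
```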

4. Proposed Algorithm (NCRA-DP-ACO)

4.1. System Architecture

Inspired by ant foraging, ACO finds short paths using pheromone trails. When establishing a path between their nest and a food source, real ants rely on pheromone cues deposited along the way as a form of collective memory. Thus, pheromones in ACO store collective search experiences, guiding future ants toward promising solutions. Heuristic information is also injected into the search to enhance its performance. When addressing a problem such as the Travelling Salesman Problem (TSP), ants construct their solution by visiting cities one at a time until all cities are visited; the choice of the next city is guided probabilistically by combining pheromone and heuristic information. Similarly, in the VMP context, the ACO algorithm places VMs step by step onto suitable servers. This stepwise construction closely mirrors the behavior of real ants, making ACO applicable to VMP problems [30].

4.2. Pheromone Management

ACO algorithms use pheromone trails as a collective memory, guiding future solutions based on past successes, similar to real ants. The pheromone values in VMP thus indicate how desirable it is to assign certain VMs to certain PMs. To make full use of this collective knowledge, ACO employs two kinds of pheromone updates.

4.2.1. Local Updates

After an ant completes its solution, local pheromone updates are applied to discourage repeating previous assignments, promoting diversity. This decreases the attraction to previously assigned servers and motivates ants to explore alternative VM-PM combinations, enabling better exploration of the VMP solution space. Specifically, the local pheromone updating operation is applied to each requested VM-PM pair. The updating rule for the local pheromone is defined in Equation (8).
$$\tau_{ij}(t+1) = (1 - \rho) \cdot \tau_{ij}(t) + \rho \cdot \tau_0 \qquad (8)$$
The pheromone decay parameter, denoted as $\rho$, controls how quickly the pheromone trails diminish over time. The initial pheromone value is represented by $\tau_0$, indicating the starting level of the pheromone trails. It is important to highlight that the changes made during the local pheromone updating process are only observed by the ants in the current iteration.

4.2.2. Global Updates

After each iteration, the best solution discovered up to that point, referred to as $S_{Best}$, undergoes a global pheromone reinforcement update, as in Equations (9) and (10). This step reinforces the pheromone trails of the best solution to guide future ants. Augmented pheromones steer subsequent ants towards potentially advantageous server assignments and direct their search towards more optimal solutions.
$$\tau_{ij}(t+1) = (1 - \rho) \cdot \tau_{ij}(t) + \rho \cdot \Delta\tau_{ij}, \quad \text{if } (i, j) \in S_{Best} \qquad (9)$$
$$\Delta\tau_{ij} = \frac{1}{f(S_{Best}) + 1} \qquad (10)$$
where $f(S_{Best})$ represents the normalized total increase in power consumption of the solution.

4.2.3. Initialization

The initialization phase sets parameters and assigns a uniform initial pheromone value, denoted as $\tau_0$, calculated as in Equation (11). This ensures that all pheromone trails start with the same initial concentration.
$$\tau_0 = \frac{1}{M} \qquad (11)$$
where M is the number of available PMs.
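The pheromone bookkeeping of Equations (8)–(11) reduces to a few lines; this sketch assumes a dense pheromone matrix indexed by (PM, VM) pairs, as an illustration rather than the paper's exact data structures:
```python
import numpy as np

def init_pheromone(num_pms: int, num_vms: int) -> np.ndarray:
    """Equation (11): uniform initial pheromone tau_0 = 1/M for all trails."""
    return np.full((num_pms, num_vms), 1.0 / num_pms)

def local_update(tau: np.ndarray, i: int, j: int, rho: float, tau0: float) -> None:
    """Equation (8): decay the trail of a just-used (PM_i, VM_j) pair toward tau_0,
    discouraging other ants in this iteration from repeating the assignment."""
    tau[i, j] = (1.0 - rho) * tau[i, j] + rho * tau0

def global_update(tau: np.ndarray, best_pairs, f_best: float, rho: float) -> None:
    """Equations (9)-(10): reinforce every (i, j) pair of the best solution found,
    where f_best is its normalized total increase in power consumption."""
    delta = 1.0 / (f_best + 1.0)
    for i, j in best_pairs:
        tau[i, j] = (1.0 - rho) * tau[i, j] + rho * delta
```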

4.2.4. Solution Construction

Each ant independently constructs a complete VM-to-PM assignment, corresponding to its own candidate solution. To encourage exploration, each ant first randomly shuffles the list of requested VMs, preventing any fixed order during placement. The ant then sequentially selects a PM for each VM, carefully evaluating server suitability to ensure compatibility with the available CPU, RAM, and bandwidth resources according to Equation (12).
$$\sum_{k=1}^{N} VM_k^{CPU} \cdot x_{ik} + VM_j^{CPU} \le PM_i^{CPU}$$
$$\sum_{k=1}^{N} VM_k^{RAM} \cdot x_{ik} + VM_j^{RAM} \le PM_i^{RAM}$$
$$\sum_{k=1}^{N} VM_k^{BW} \cdot x_{ik} + VM_j^{BW} \le PM_i^{BW}, \quad \forall i \in PM \qquad (12)$$
To prevent server overloading after VM placement, Equation (13) is used, incorporating an overutilization threshold ($Th_{over}$). If the server is currently empty, the constraint compares the VM's threshold-scaled demand against the threshold-scaled server capacity; if the server already hosts VMs, the equation additionally accounts for their current CPU utilization.
The purpose of this constraint is to maintain a balance between the CPU utilization of the server and the VMs placed on it. By enforcing the overutilization threshold, the algorithm ensures that the server's CPU utilization does not exceed the predefined limit, so a new VM is only placed on a server when doing so cannot cause overloading.
$$PM_j^{available} = \begin{cases} \sum_{k=1}^{N} x_{ik} \cdot VM_{k,current}^{CPU} + VM_{j,current}^{CPU} \cdot Th_{over} \le PM_{i,current}^{CPU} \cdot Th_{over}, & \text{if } \sum_{i=1}^{M} x_{ij} = 0 \\ \sum_{k=1}^{N} x_{ik} \cdot VM_{k,current}^{CPU} + VM_{j,current}^{CPU} \le PM_{i,current}^{CPU} \cdot Th_{over}, & \text{otherwise} \end{cases} \qquad (13)$$
The heuristic information associated with each VM assignment measures the potential utilization improvement that $VM_j$ can bring to $PM_i$. The specific calculation of this heuristic value is conducted as in Equation (14):
$$\eta_{ij} = \theta \times \eta e_{ij} + \gamma \times \eta c_{ij} + \delta \times \eta b_{ij} \qquad (14)$$
The weighted combination approach allows for fine-tuning objective prioritization by combining multiple objective factors using weights. It offers flexibility, balancing conflicting objectives, and solution diversity.
$\eta e_{ij}$ is the energy heuristic favoring placements with lower power consumption, defined as in Equation (15):
$$\eta e_{ij} = \frac{1}{PM_{ij}^{Power} + PM_i^{minPower}} \qquad (15)$$
where $PM_{ij}^{Power}$ is the estimated power consumption of $VM_j$ after placement on $PM_i$, and $PM_i^{minPower}$ is the idle power of the server. A higher $\eta e_{ij}$ value indicates a lower estimated power consumption for placing $VM_j$ on $PM_i$, which makes the placement more desirable from the energy-saving perspective.
$\eta c_{ij}$ is the carbon emission heuristic favoring placements with lower carbon emissions, defined as in Equation (16):
$$\eta c_{ij} = \frac{1}{C_{ij}^{emission} \times R_d^{Tax}} \qquad (16)$$
where $C_{ij}^{emission}$ is the estimated carbon emissions of $VM_j$ when placed on $PM_i$, and $R_d^{Tax}$ represents the cost associated with carbon emissions.
$\eta b_{ij}$ is the bandwidth heuristic favoring placements with a lower bandwidth impact, defined as in Equation (17):
$$\eta b_{ij} = \log\left( \frac{PM_{i,available}^{BW}}{VM_{j,estimated}^{BW}} \right) \qquad (17)$$
where $PM_{i,available}^{BW}$ is the server's available network bandwidth, and $VM_{j,estimated}^{BW}$ is the estimated bandwidth usage of $VM_j$. $\eta b_{ij}$ represents the bandwidth efficiency of placing $VM_j$ on $PM_i$. A higher value (1) signifies more available bandwidth relative to the VM's usage, (2) indicates a smaller contribution to network utilization on $PM_i$, and (3) implies a favored placement from the network perspective. In other words, a higher ratio suggests that placing $VM_j$ on $PM_i$ would have a smaller negative impact on overall network bandwidth usage, making it a desirable placement with respect to the network optimization goal. Min-Max normalization is applied to each sub-heuristic, scaling values to the range [0, 1], to ensure comparable scales across the different objectives.
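The weighted combination of Equations (14)–(17) can be sketched as follows; per-candidate Min-Max normalization is omitted for brevity, and all argument names are ours:
```python
import math

def heuristic(theta: float, gamma: float, delta: float,
              est_power: float, idle_power: float,   # Equation (15) inputs
              est_emission: float, tax_rate: float,  # Equation (16) inputs
              avail_bw: float, vm_bw: float) -> float:
    """Equation (14): weighted sum of energy, carbon, and bandwidth sub-heuristics."""
    eta_e = 1.0 / (est_power + idle_power)   # Equation (15): energy
    eta_c = 1.0 / (est_emission * tax_rate)  # Equation (16): carbon
    eta_b = math.log(avail_bw / vm_bw)       # Equation (17): bandwidth
    return theta * eta_e + gamma * eta_c + delta * eta_b
```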
To determine which VM to assign next, the ant uses a pseudo-random proportional rule [30]. This rule considers two factors, the current pheromone concentration τ and a heuristic value η , and is calculated as in Equation (18):
$$i = \begin{cases} \arg\max_{k \in PM_j^{available}} \left\{ \tau_{kj} \times \eta_{kj}^{\beta} \right\}, & \text{if } q \le q_0 \\ P_{ij}, & \text{otherwise} \end{cases} \qquad (18)$$
where $q_0$ is a predefined parameter ($q_0 \in [0, 1]$) that regulates the ant's exploration and exploitation behaviors, and $\beta$ ($\beta > 0$) is a predefined parameter that determines the relative importance of the heuristic information. If $q \le q_0$, an exploitative ant selects the server whose pheromone–heuristic product $\tau_{ij} \times \eta_{ij}^{\beta}$ is maximal. Otherwise, the ant behaves exploratively, selecting server $PM_i$ according to the probability distribution modeled by Equation (19):
$$P_{ij} = \frac{\tau_{ij} \times \eta_{ij}^{\beta}}{\sum_{i \in PM_j^{available}} \tau_{ij} \times \eta_{ij}^{\beta}}, \quad i \in PM_j^{available} \qquad (19)$$
The pheromone concentration conveys the desirability of a particular VM-to-PM assignment based on previous solutions, while the heuristic guides the ants towards the most promising servers according to the predefined criteria.
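A compact sketch of the pseudo-random proportional rule of Equations (18) and (19), assuming pheromone and heuristic matrices indexed as [i][j] (an illustration under those assumptions, not the paper's code):
```python
import random

def select_server(candidates: list[int], tau, eta, beta: float, q0: float, j: int) -> int:
    """Choose a PM for VM j: exploit the best pheromone-heuristic product with
    probability q0 (Equation (18)); otherwise sample proportionally (Equation (19))."""
    scores = {i: tau[i][j] * (eta[i][j] ** beta) for i in candidates}
    if random.random() <= q0:
        return max(scores, key=scores.get)   # exploitation
    total = sum(scores.values())             # exploration: roulette-wheel sampling
    r, acc = random.uniform(0.0, total), 0.0
    for i, s in scores.items():
        acc += s
        if acc >= r:
            return i
    return candidates[-1]  # numerical safety fallback
```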

4.2.5. System Parameters

We employ a weighted-sum method to combine our three optimization objectives—energy consumption, network bandwidth, and carbon cost—into a single heuristic function. The multi-objective weights (θ, γ, δ) were determined by grid-search experiments over five representative workload scenarios, resulting in θ = 0.3 for energy, γ = 0.4 for network bandwidth, and δ = 0.3 for carbon cost.
To identify appropriate values for the ACO parameters, we conducted a systematic tuning study in which each parameter was varied across a range commonly found in the literature [25,31,32,33,34,35,36,37]. Specifically, we tested the heuristic influence β ∈ {2, 3, 5}, pheromone-evaporation rate ρ ∈ {0.2, 0.3, 0.4}, number of ants $N_{ants}$ ∈ {10, 20, 50}, and iteration counts I ∈ {1, 2, 5}. For each candidate combination, we ran the ACO algorithm on five representative workloads and recorded the resulting average energy consumption and SLA violation ratio. We then selected the parameter set that delivered the lowest energy usage while maintaining acceptable SLA compliance. This process ensured that our final settings were grounded in empirical performance rather than chosen arbitrarily. Specifically, the pheromone evaporation rate was set to ρ = 0.3, the heuristic influence coefficient was set to β = 2, and the ant colony comprised $N_{ants}$ = 20 agents performing I = 2 iterations per placement round with an exploitation probability of q₀ = 0.7.
Migration thresholds were likewise determined empirically, with underutilization and overutilization bounds set to 30% and 90%, respectively, to balance the convergence speed against the solution quality. While structured methods such as AHP or TOPSIS could automate weight derivation, our choice of a simple weighted sum supports faster simulation runs; the exploration of adaptive or ML-based weight tuning is left for future work. A full list of parameters appears in Table 2.

4.2.6. Objective Function

When placing VMs in datacenters, choosing the best configuration is crucial. This study focuses on reducing energy consumption, reducing the carbon footprint, and lowering network costs. Our goal is to find the most cost-effective way to run the workload in the cloud while remaining mindful of both the environment and the budget. We prioritize lower energy use and reduced carbon emissions for the cloud provider while also controlling network costs through smart VM placement.
The cost of running all VMs at different sites is formulated in Equation (20).
$$C_T = \sum_{j \in VM} \sum_{i \in PM} C(VM_{ij}) \times x_{ij} \qquad (20)$$
$C_T$ comprises the cost of the energy used by the VMs in the datacenters plus the cost of the carbon footprint at each site according to the associated tax, as shown in Equation (21).
$$C_T = C_{EC} + C_{CF} \qquad (21)$$
where $C_{EC}$ is the cost of energy consumption modeled in Equation (22), and $C_{CF}$ is the cost of the carbon footprint determined using Equation (24).
$$C_{EC} = \sum_{i=1}^{|PM|} \left( PM_{i,t^+}^{power} - PM_{i,t}^{power} + PM_{i,t^+}^{overhead} \right) \times z_i \qquad (22)$$
where $PM_{i,t}^{power}$ is the server power consumption at time $t$, $PM_{i,t^+}^{power}$ is the server power consumption after running the new VM, and $z_i$ indicates whether the PM belongs to the ant's solution. The overhead power, calculated according to Equation (23), is expressed as
$$PM_{i,t^+}^{overhead} = \left( PM_{i,t^+}^{power} - PM_{i,t}^{power} \right) \times \left( PUE(U_t, H_t) - 1 \right) \qquad (23)$$
Equation (7) supplies the dynamic PUE term $PUE(U_t, H_t)$, so that $PUE(U_t, H_t) - 1$ in Equation (23) captures the future overhead factor of the datacenter.
$$C_{CF} = C_{EC} \times R_d^{Carbon} \times R_d^{Tax} \qquad (24)$$
where $R_d^{Carbon}$ and $R_d^{Tax}$ are the carbon intensity and the carbon tax rate of datacenter $d$, respectively.
With all the definitions in the previous sections, the overall multi-objective optimization goal integrating all aspects is formulated in Equation (25):
$$\min C_T \quad \text{subject to constraints (1)–(4)} \qquad (25)$$
The objective function is a fitness measure for evaluating solution suitability. Solutions are primarily ordered with respect to their green energy use to choose the datacenter. Within a datacenter, the solutions are sorted according to their overall fitness value per objective function. The solution with the best overall fitness is selected.
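For a single server, the cost model of Equations (21)–(24) can be sketched as follows; applying the site electricity price inside $C_{EC}$ and treating the carbon rates as plain multipliers are simplifying assumptions for illustration:
```python
def placement_cost(delta_it_kwh: float, pue: float, price_usd_per_kwh: float,
                   carbon_ton_per_kwh: float, tax_usd_per_ton: float) -> float:
    """Single-server instance of the objective's cost terms."""
    overhead_kwh = delta_it_kwh * (pue - 1.0)                  # Equation (23)
    c_ec = (delta_it_kwh + overhead_kwh) * price_usd_per_kwh   # Equation (22), priced
    c_cf = c_ec * carbon_ton_per_kwh * tax_usd_per_ton         # Equation (24)
    return c_ec + c_cf                                         # Equation (21)
```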

4.2.7. Algorithm

ACO is deployed in this work to solve the problem of VMP across geographically distributed datacenters. Algorithm 1 describes the initial placement strategy, in which ACO selects an appropriate set of PMs for new VMs depending on server availability and cost estimation. The core iterative optimization process is then provided in Algorithm 2, whereby artificial ants investigate and refine placement solutions through pheromone updating and heuristic-guided search. Algorithm 3 monitors each datacenter’s hosts and—when utilization thresholds are crossed—collects the VMs to migrate, invokes the ACO-based placement on that set, and issues live migrations to re-optimize the system.
Algorithm 1: Initial Placement
Input: D, VMnew
Output: Deployment or Failure of VMs
Compute:
While (true) do
| If (VMnew is not empty) then
| | solList = new List() // Initialize solution list
| | For (each datacenter d in D) do
| | | PM = getAvailableServers(d) // (Equation (12))
| | | Sa = runACO(PM, VMnew)
| | | solList = solList.add(Sa) // Combine solutions from all datacenters
| | End
| | If (solList is not empty) then
| | | // Step 1: Calculate costs for each solution
| | | solutionCostMap = new Map()
| | | For (each solution Sa in solList) do
| | | | solutionCost = calculateSolutionCost(Sa) // (Equation (21))
| | | | solutionCostMap.put(Sa, solutionCost)
| | | End
| | | // Step 2: Find the best solution
| | | Sbest = null
| | | maxGreenEnergy = −1
| | | solTotalCost = infinity
| | | For (each entry in solutionCostMap) do
| | | | Sa = entry.key
| | | | cost = entry.value
| | | | If (cost.usedGreenEnergy > maxGreenEnergy) then
| | | | | maxGreenEnergy = cost.usedGreenEnergy
| | | | | Sbest = Sa
| | | | | solTotalCost = cost.totalCost
| | | | Else if (cost.usedGreenEnergy == maxGreenEnergy and cost.totalCost < solTotalCost) then
| | | | | Sbest = Sa
| | | | | solTotalCost = cost.totalCost
| | | | End
| | | End
| | | // Step 3: Deploy VMs using the best solution
| | | createVMsAccordingTo(Sbest)
| | End
| | Else
| | | failVMs(VMnew)
| | End
| End
End
The algorithm continuously monitors incoming VM requests. When requests are pending (VMnew is not empty), it initializes an empty solution list (solList) to hold the candidate placement configurations and iterates through each datacenter in D. For each datacenter, it obtains the list of eligible servers (PM) and applies the ACO algorithm (runACO) to produce a solution (Sa) for placing the requested VMs on those servers. Each Sa is added to solList, and this repeats for all datacenters.
After all datacenters have produced solutions, the algorithm checks whether any solutions were discovered (solList is not empty). If so, it calculates the cost of every solution, which includes the green energy used and the total cost. It initializes a solutionCostMap to store these costs and iterates through each solution in solList, computing its cost via calculateSolutionCost(Sa) and inserting the solution and its cost into solutionCostMap.
The algorithm then selects the best solution. It initializes Sbest, maxGreenEnergy, and solTotalCost and iterates over the entries of solutionCostMap, fetching each solution Sa and its corresponding cost. Solutions with higher green energy usage are prioritized; among solutions with equal green energy usage, the one with the lowest total cost is chosen.
After obtaining the best solution (Sbest), the algorithm creates the corresponding VMs via createVMsAccordingTo(Sbest). If no suitable solutions are found (solList is empty), the pending requests are failed via failVMs(VMnew). The entire process repeats cyclically.
The time complexity of Algorithm 1, which performs initial virtual machine placement across multiple datacenters, is primarily governed by three components: the server availability assessment, execution of the ACO algorithm, and evaluation of candidate placement solutions. For each decision interval across $D$ datacenters, the algorithm first identifies available PMs, with a worst-case complexity of $O(m \cdot VM_{max})$, where $m$ is the number of servers and $VM_{max}$ is the maximum number of VMs hosted on a single server. It then invokes the ACO algorithm in each datacenter to determine a candidate VM-to-PM mapping, with complexity $O(ACO(m, n))$, where $n$ is the number of VMs to be placed. Finally, the algorithm computes the cost of each solution based on the number of servers used ($m$) and selects the best-performing datacenter, contributing an additional $O(m)$ and $O(D)$, respectively. Combining these, the overall time complexity is
$$O\left( D \cdot \left( m \cdot VM_{max} + ACO(m, n) + m \right) \right) \qquad (26)$$
where $O(ACO(m, n))$ is defined in Equation (27) and further detailed in Algorithm 2.
Algorithm 2: Ant Colony Optimization for VMP
Input: PM, VM
Output: Sbest
Initialize:
τ0 = 1/|PM|
Sbest = ∅
i = 1
Compute:
While (i ≤ I) do
| a = 1
| While (a ≤ Nants) do
| | VMnew = shuffleList(VMnew)
| | Sa = ∅
| | For (each VMj in VMnew) do
| | | PMj_available = getAvailableServersForVm(VMj)
| | | If (PMj_available is empty) then
| | | | Continue
| | | End
| | | For (each PMi in PMj_available) do
| | | | Calculate heuristic information (Equation (14))
| | | End
| | | Generate a random number q ∈ [0, 1]
| | | If (q ≤ q0) then
| | | | Select server PMi for VMj (Equation (18))
| | | Else
| | | | Select server PMi for VMj (Equation (19))
| | | End
| | | Sa.add((PMi, VMj))
| | End
| |
| | If (Sa contains all VMs in VMnew) then
| | | Sa.objective_value = calculateObjectiveFunctionValue(Sa)
| | | If (Sbest == ∅ or Sa.objective_value < Sbest.objective_value) then
| | | | Sbest = Sa
| | | End
| | End
| | Do local pheromone updating (Equation (8))
| | a = a + 1
| End
| If (Sbest ≠ ∅) then
| | Do global pheromone update (Equation (9))
| End
| i = i + 1
End
Return Sbest
This algorithm aims to find the optimal placement of VMs on PMs within a distributed computing environment. It utilizes the ACO approach inspired by the foraging behavior of ants. Each PM is initialized with a pheromone value indicating its VM hosting potential. It also establishes an empty “best solution” placeholder (Initialization).
The algorithm then runs through a predefined number of iterations. In each iteration, several “ants” explore the solution space by individually trying to assign VMs to PMs. They consider both the pheromone trails on each PM and their own heuristic preferences (Iterative Improvement).
For each VM, an ant finds the most suitable PM based on a combination of pheromone levels and its specific requirements. This process builds a candidate solution for that ant (Solution Construction).
If an ant manages to place all VMs onto PMs successfully, its solution is evaluated using an objective function. This function measures the quality of the solution based on desired criteria (Fitness Evaluation).
The best solution is updated whenever a superior one is found. This ensures the algorithm continuously improves its findings throughout the iterations (Best Solution Update).
After all ants have completed their exploration, the pheromone levels on PMs are updated. This update mechanism strengthens trails associated with the best solutions and weakens those linked to less successful ones, guiding future ants towards promising directions (Pheromone Updates).
By running through these steps repeatedly, the algorithm gradually converges towards a near-optimal placement of VMs across the available PMs, balancing individual VM requirements with overall system efficiency. The time complexity of the proposed ACO algorithm can be derived by analyzing its nested loop structure. For each of the iterations, the algorithm launches Nants ants. Each ant constructs a complete placement solution by iterating over the set of VMnew. For every VM, the algorithm evaluates a subset of available PMs, which in the worst case includes all ∣PM∣ servers. Within this process, heuristic information is computed, and server selection is performed based on a probabilistic rule. Therefore, the overall time complexity is expressed as in Equation (27).
$$O(I \cdot N_{ants} \cdot n \cdot m) \qquad (27)$$
where n is the number of requested VMs, and m is the total number of PMs.
Algorithm 3: Dynamic Placement
Input: D
Compute:
While (true) do
| For (each datacenter d in D) do
| | vmlistToMigrate = new List()
| | For (each PMi in getAvailableServers(d)) do
| | | If (PMi_uCPU > Th_over) then
| | | | vmList = getMigrationVMs(PMi)
| | | | vmlistToMigrate.addAll(vmList)
| | | End
| | | If (PMi_uCPU ≤ Th_under) then
| | | | vmList = getAllVMs(PMi)
| | | | vmlistToMigrate.addAll(vmList)
| | | End
| | End
| |
| | If (vmlistToMigrate is not empty) then
| | | Sa = runACO(PM, vmlistToMigrate)
| | | If (Sa ≠ ∅) then
| | | | MigrateVMs(Sa)
| | | End
| | End
| End
End
This dynamic VM placement algorithm is designed to continuously monitor and optimize the placement of VMs across multiple datacenters, with the goal of achieving balanced resource utilization and preventing server over- or underutilization. Algorithm 3 iterates through each datacenter in the system. For each datacenter, the algorithm initializes an empty list called vmlistToMigrate. This list will be used to store VMs that need to be migrated for optimization purposes. Next, the algorithm iterates through each PM within the datacenter that is allowed for VM placement. It evaluates the CPU utilization of each PM and performs the following checks:
If the CPU utilization of the PM exceeds a certain threshold ($Th_{over}$), the server is overloaded. In this case, the algorithm retrieves a list of VMs that can be migrated from the PM using the getMigrationVMs() function; the VMs obtained from getMigrationVMs($PM_i$) are then added to vmlistToMigrate.
If the CPU utilization of the PM falls below another threshold ($Th_{under}$), the server is underloaded. The algorithm retrieves all the VMs placed on that PM using the getAllVMs() function and adds them to vmlistToMigrate.
After evaluating all the PMs within the datacenter, the algorithm checks whether vmlistToMigrate is empty. If there are VMs on the list, it proceeds to optimize the placement. The optimization attempt leverages the ACO algorithm. If the ACO algorithm generates a non-empty solution (Sa), an optimization opportunity has been identified. The algorithm then executes the recommended VM migrations using the MigrateVMs() function, effectively moving the VMs to their new optimized placements and switching PMs into standby mode where necessary. The algorithm's time complexity is defined in Equation (28).
$$O(D \cdot m \cdot ACO(m, n)) \qquad (28)$$
where D is the number of datacenters, m is the number of PMs per datacenter, and n is the number of VMs identified for live migration.

5. Experimental Setup and Results

5.1. Experimental Setup

All experiments were implemented using CloudSimPlus 5.4.2 [38], a cloud simulation toolkit in Java 1.8. To support energy-aware scheduling and sustainability analysis, we extended CloudSimPlus to incorporate green energy modeling, solar datasets, and carbon tax metrics.

5.1.1. Datacenter Configuration

We consider four U.S. datacenters (Dallas, Richmond, San Jose, Portland) in different time zones, similar to [3]. According to the evaluated workload, each datacenter has 126 heterogeneous PMs of six configurations based on four parameters: number of cores, core speed (GHz), memory (GB), and storage (GB). Detailed characteristics are shown in Table 3.
The simulation environment includes a heterogeneous set of PMs to reflect the diversity commonly found in real-world datacenters. These server types differ in terms of processing power, memory capacity, and energy efficiency, providing a realistic basis for evaluating VM placement decisions. Incorporating multiple server configurations allows for a more robust assessment of the algorithm’s performance in balancing resource utilization and energy consumption. The detailed specifications of the PMs used in the simulation are provided in Table 4.

5.1.2. VM Instances

To mimic a realistic cloud environment, different VM configurations were created, reflecting the demands different users place on CPU, memory, and storage. These VM types represent typical requests for services encountered on Infrastructure-as-a-Service (IaaS) platforms. Details of the types of VMs used in the simulation are given in Table 5.

5.1.3. Solar Energy

Solar energy data for the four datacenters were procured from an NREL project [40]. Global Horizontal Irradiance (GHI) data for south-facing flat-plate collectors at fixed tilt were considered, measured in W/m². We simulated a five-day interval (26–30 May 2014). The total area of the solar panels is 2684 m² [3]. The hourly solar energy output in kWh was calculated from these data, as shown in Figure 3.
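The hourly conversion from irradiance to usable energy can be sketched as follows; the panel conversion efficiency is our illustrative assumption, since the setup fixes only the total collector area:
```python
PANEL_AREA_M2 = 2684      # total collector area used in the experiments [3]
PANEL_EFFICIENCY = 0.20   # assumed conversion efficiency (illustrative)

def solar_energy_kwh(ghi_w_per_m2: float, hours: float = 1.0) -> float:
    """Convert Global Horizontal Irradiance (W/m^2) into usable energy (kWh)."""
    return ghi_w_per_m2 * PANEL_AREA_M2 * PANEL_EFFICIENCY * hours / 1000.0
```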

5.1.4. Temperature

Hourly temperatures for each datacenter location (26–30 May 2014) were retrieved from Weatherbase [41]. These profiles serve as H t in Equation (7) to compute each site’s dynamic PUE at every hour. The hourly temperatures for the sites are represented in Figure 4.

5.1.5. Energy Price

Energy prices were obtained from the U.S. Energy Information Administration [42]. The average industrial electricity price for each site is listed in Table 3. We considered datacenters as an industrial sector and used the average price of electricity. On-site solar has zero incremental cost in our model, since capital and maintenance expenses are fixed.

5.1.6. Carbon Tax and Carbon Footprint

Datacenter carbon intensities (ton CO2/MWh) were sourced from the EPA’s eGRID database (eGRID) [43]. Carbon tax rates (USD/ton CO2) are from the Carbon Tax Center [44]. Each site’s carbon intensity and tax rate appear in Table 3.

5.1.7. PUE Model

For all datacenters, we use the dynamic PUE model discussed in Section 3.2. We assume that the efficiency of the site infrastructure is the same across all datacenters. The PUE model is shown in Table 3.

5.1.8. Server Power Consumption

As mentioned in Section 3.1, we adopt an approximate linear relation between power consumption and server utilization, fitted to SpecPower results for two HP ProLiant servers (HP ProLiant ML110 G4 and G5); their respective power models are stated in Table 6.

5.1.9. Workload

This work uses real workload traces from the Metacentrum cloud provider, a distributed computing environment providing resources for scientific research, including high-performance computing (HPC) and cloud computing.
The workload is represented in SWF (Standard Workload Format). It contains different types of jobs, including bag-of-tasks jobs, long-running computational tasks, and batch jobs, as summarized in Table 7.
Most jobs are batch-style, submitted to a queue. A substantial portion are bag-of-tasks jobs—many independent, short-lived tasks ideal for parallel execution—reflecting the characteristics of distributed computing. The workload also contains long-running, compute- or memory-intensive jobs (scientific simulations, data analysis, machine learning). This diversity makes the Metacentrum traces well suited for evaluating VM placement in hybrid cloud/HPC environments.
After extensive experiments, the cloudlet submission interval was set to 600 s (10 min), particularly because the ACO algorithm searches for the best placement at every submission. This interval gives the ACO algorithm sufficient time to process a significant batch of cloudlets, for example, up to 500 cloudlets (VM requests), rather than being overwhelmed by frequent, small submissions (fewer than 10 cloudlets) at shorter intervals.

5.2. Result Analysis

The experimental analysis evaluates the efficiency of our proposed ACO-based VMP in reducing energy consumption, minimizing carbon emissions, ensuring SLA compliance, and reducing operational costs. All experiments were performed in a simulated cloud environment. ACO was compared against well-established heuristic methods—Best Fit Decreasing (BFD) and First Fit Decreasing (FFD) [39]—which are widely adopted baselines in the literature, as well as against Cost and Renewable-Aware with Dynamic PUE (CRADP) [3] and the metaheuristic Unified Ant Colony System (UACS) [45].
CRADP was selected because it explicitly models both renewable energy availability and dynamic PUE in its cost function, penalizing placements with low solar input or high cooling overheads. This aligns directly with our objectives of minimizing energy and carbon emissions under variable environmental conditions.
UACS, on the other hand, represents a recent, state-of-the-art ACO variant that extends classical pheromone-update rules with multi-objective weighting and adaptive heuristic factors.
To ensure a direct comparison, we re-implemented all algorithms (BFD, FFD, CRADP, UACS, and NCRA-DP-ACO) from scratch in CloudSimPlus. We used identical server configurations, workload traces, and algorithm parameters for UACS and NCRA-DP-ACO. Running all algorithms under the same simulation conditions allows us to directly compare their responses across different workload intensities (500–14,000 VMs).
Because NCRA-DP-ACO is nondeterministic, each workload configuration was run 30 times to ensure statistical robustness. The reported performance metrics represent the mean values across these runs. Performance trends are visualized and discussed in detail in subsequent sections.

5.2.1. Energy Consumption

For the sustainability of clouds, energy efficiency is of significant importance. The proposed ACO placement is expected to consume much less energy because its dynamic VM consolidation strategy minimizes the number of servers in an active state. Figure 5 depicts the total energy consumed by each algorithm for different VM workload sizes.
Figure 5 shows that the ACO algorithm achieves lower energy consumption than the best baseline heuristics. The advantage becomes more evident as the number of VMs increases; for example, at 14,000 VMs, ACO saves approximately 18% energy compared to BFD. This is because ACO uses pheromone-guided search, which efficiently consolidates VMs onto fewer active PMs and thus reduces both dynamic and idle power consumption.

5.2.2. Carbon Footprint (kg CO2)

Carbon emissions are strongly correlated with energy consumption: lower energy use results in a lower carbon footprint. Figure 6 gives a comparative analysis of the carbon emissions of the different algorithms for various workloads.
The slight carbon footprint spike for ACO at 500–1000 VM workloads occurs because consolidation is less effective at low scales: ACO may keep more servers active to accommodate renewable usage, resulting in higher idle power.
As workloads exceed 1000 VMs, ACO’s carbon emission savings surpass 15% relative to BFD and FFD, due to energy-efficient consolidation and renewable-first placement. These results confirm ACO’s scalability for green cloud computing.

5.2.3. Total Cost

The total cost (energy + carbon tax) is a major financial parameter for cloud service providers. Figure 7 compares the total operational cost outcomes of the different algorithms.
The ACO algorithm achieved the lowest total cost across the various workloads, as depicted in Figure 7. This cost advantage grows with workload size, reaching up to 17% relative to the standard heuristics. The cost savings are primarily attributed to improved energy efficiency and reduced emissions, establishing the new method as an economically viable solution for green cloud computing.

5.2.4. Number of Live Migrations

In our experimental evaluation of live VM migrations, we focused on comparing the NCRA-DP-ACO algorithm against UACS because both approaches implement dynamic VM migration strategies during operation. In contrast, traditional heuristic algorithms such as BFD, FFD, and CRADP perform static initial VM placements without continuous migration. These static heuristics do not actively monitor server utilization after placement or initiate migrations to optimize energy consumption or SLA compliance. Therefore, including them in the live migration comparison would not add insight, as their designs inherently lack dynamic re-optimization features. NCRA-DP-ACO implemented a balanced migration strategy, combining dynamic placement optimization with minimal SLA impact, whereas UACS exhibited a higher migration frequency without corresponding SLA or cost benefits, as shown in Figure 8.

5.2.5. Average Host Uptime

Host uptime sums the total active time of PMs during placements. An effective VMP strategy consolidates workloads onto fewer hosts, reducing the active time, energy consumption, and cost.
As illustrated by Figure 9, the proposed ACO presents substantially lower average host uptimes. ACO consolidates the VMs into fewer active servers while allowing the remaining machines to go low-power or even idle, thus saving energy.

5.2.6. SLA Violations

The following areas were considered regarding SLA violations:
  • PDM (Performance Degradation due to Migrations)
PDM measures the performance degradation suffered by VMs due to their live migrations. Live migrations are essential operations that move VMs from one host to another without shutting them down. Performance degradation usually refers to a reduction in the quality of service (QoS) or performance of a VM during or after migration. This may refer to increased latency, reduced throughput, or higher response times.
This metric helps operators understand trade-offs between migrating VMs for load balancing and preserving performance.
$$PDM = \frac{1}{|VM|} \sum_{j \in VM} \frac{VM_j^{pdm}}{VM_j^{MIPS}}$$
where $|VM|$ is the number of VMs; $VM_j^{pdm}$ is the estimate of the performance degradation of $VM_j$ caused by migrations (taken as 10% as in [39]); and $VM_j^{MIPS}$ is the total CPU capacity (in MIPS) requested by $VM_j$ during its lifetime. In other words, PDM is a percentage that measures how good the initial VM placement decision was; a lower PDM percentage means better placement and consequently fewer migrations needed. Figure 10 shows the stability of the placement quality using the PDM metric.
  • SLATAH (SLA Time per Active Host)
SLATAH captures the average fraction of time that each active host spends in SLA violation. An SLA violation occurs when a host fails to meet the performance or availability standards of its VMs; typical causes include resource contention, host overloading, or other issues degrading VM performance.
$$SLATAH = \frac{1}{|PM|} \sum_{i=1}^{|PM|} \frac{T_i^{sla}}{T_i^{active}}$$
where $|PM|$ is the number of hosts, $T_i^{sla}$ is the total time during which $PM_i$ experienced 100% CPU utilization (causing SLA violations), and $T_i^{active}$ is the total time $PM_i$ was active serving VMs [39].
SLAV is a composite index that represents the total SLA violations calculated from performance degradation (PDM) and the time during which the SLA is violated (SLATAH) [39].
$$SLAV = PDM \times SLATAH$$
It provides insight into the effect of migrations and resource contention on overall SLA compliance. Figure 11 illustrates the SLA violations.
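Under the same illustrative bookkeeping assumptions as the PDM sketch above, SLATAH and the composite SLAV index can be computed as follows:

```python
from dataclasses import dataclass

@dataclass
class HostRecord:
    active_seconds: float      # Ti^active: total time serving VMs
    saturated_seconds: float   # Ti^sla: time spent at 100% CPU utilization

def slatah(hosts: list[HostRecord]) -> float:
    """Mean fraction of active time each host spent in SLA violation."""
    used = [h for h in hosts if h.active_seconds > 0]
    if not used:
        return 0.0
    return sum(h.saturated_seconds / h.active_seconds for h in used) / len(used)

def slav(pdm_value: float, slatah_value: float) -> float:
    """Composite SLA violation index, SLAV = PDM x SLATAH [39]."""
    return pdm_value * slatah_value

print(slav(0.1, slatah([HostRecord(3600.0, 36.0)])))   # 0.1 * 0.01 = 0.001
```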

5.2.7. Total Computational Overhead

To evaluate the computational efficiency of NCRA-DP-ACO, we measured its cumulative execution time over a five-day simulation period across varying VM workload sizes, with each configuration tested over 30 independent runs (see Table 8). Experiments were conducted on a Lenovo (Lenovo Group Limited, Beijing, China) system with an Intel(R) Core(TM) (Intel Corporation, Santa Clara, CA, USA) i5-8250U CPU (1.60 GHz base, 1.80 GHz boost), 16.0 GB RAM, and NVMe (Samsung Electronics Co., Ltd., Suwon, South Korea) storage.
The algorithm showed scalable performance, with total execution times increasing from 4983.96 s (about 83 min) at 500 VMs to 18,655.12 s (about 5.2 h) at 14,000 VMs. This trend reflects the natural growth in the solution space and constraint complexity as the workload size increases. A 13.6% reduction in time at 10,000 VMs, relative to the trend between 5000 and 14,000 VMs, suggests accelerated convergence due to pheromone reinforcement under balanced resource conditions. Overall, the performance consistency was robust: the standard deviation remained below 15% of the mean (8.6% [428.23 s] at 500 VMs and 10.4% [1934.12 s] at 14,000 VMs). This low variance persisted even under dynamic workload distributions and simulated renewable energy fluctuations. These results confirm that NCRA-DP-ACO is well suited for deployment in energy-conscious datacenters operating with batch-style placement cycles, deadline-sensitive cloud environments, and scenarios requiring high-quality VM allocation with predictable overhead.

5.2.8. Comparisons with Hybrid and Multi-Objective Algorithms

To strengthen the validation of NCRA-DP-ACO, we compared its energy efficiency (kWh/VM) with a range of state-of-the-art VM placement algorithms reported in the literature. The inferred energy-per-VM values for the baseline algorithms are listed in Table 9. The comparison includes multi-objective evolutionary algorithms (NSGA-III and GA + threshold), a hybrid metaheuristic (ABGWO), and other metaheuristics (ETA-ACO, ACO, and PSO) that target energy-aware scheduling. Where direct implementation was not feasible, our comparisons rely on published values and approximations.
For several baseline algorithms where the energy consumption (in kWh) was not directly reported, we inferred the values using standard power modeling assumptions based on information provided in the original papers. For instance, in [46], the ABGWO algorithm utilized approximately 210 PMs to host 2000 VMs under Azure workload traces. Assuming a typical PM power draw of 200 watts at moderate utilization (40–50%, as suggested by cloud datacenter standards), the total system power consumption was approximated as 42 kW. Assuming a 1-h simulation duration—a default in many CloudSim-based experimental setups—the estimated energy usage is 42 kWh. Dividing this by 2000 VMs yields an approximate energy-per-VM value of 0.210 kWh. This method was similarly applied to other studies that reported power values in watts or server usage rather than in kilowatt-hours [47]. We normalized all per-VM energy values to a baseline of 5000 VMs to align with the scale at which our algorithm achieves efficiency using the following formula:
$$\text{Normalized}\left(\frac{kWh}{VM}\right) = \text{Reported}\left(\frac{kWh}{VM}\right) \times \frac{5000}{\text{Original\_VM\_Count}}$$
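A small sketch of this inference pipeline follows (the function names are ours; the per-baseline constants come from the respective papers, as in the ABGWO example above):

```python
def inferred_kwh_per_vm(pm_count: int, watts_per_pm: float,
                        hours: float, vm_count: int) -> float:
    """Estimate per-VM energy from reported server count, power, duration."""
    total_kwh = pm_count * watts_per_pm / 1000.0 * hours
    return total_kwh / vm_count

def normalized_kwh_per_vm(reported_kwh_per_vm: float,
                          original_vm_count: int,
                          baseline_vm_count: int = 5000) -> float:
    """Scale a reported per-VM figure to the 5000-VM comparison baseline."""
    return reported_kwh_per_vm * baseline_vm_count / original_vm_count
```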
Table 9. Energy consumption per VM (kWh) at 5000 VMs for NCRA-DP-ACO and published baselines.
| Algorithm | Original VM Count | Energy/VM (kWh) | Total Energy at 5000 VMs (kWh) | Trace Type | Source |
|---|---|---|---|---|---|
| NCRA-DP-ACO | 5000 | 0.169 | 845.10 | Metacentrum | This study |
| NSGA-III | 200 | 2.13 | 10,650 | Synthetic | [19] |
| ETA-ACO | 500 | 0.490 | 2450 | Synthetic | [25] |
| Standard ACO | 500 | 0.606 | 3030 | Synthetic | [25] |
| Standard PSO | 500 | 0.533 | 3190 | Synthetic | [25] |
| ABGWO (GWO Hybrid) | 2000 | 0.210 | 1050 | Azure (real) | [46] |
| GA + Threshold | 300 | 0.533 | 2666 | Synthetic | [48] |
Under this normalized evaluation, NCRA-DP-ACO achieves the lowest per-VM energy at 5000 VMs—0.169 kWh/VM—demonstrating its scalability and energy awareness in large-scale cloud environments. When scaled further, its efficiency continues to improve, dropping to 0.062 kWh/VM at 14,000 VMs. We note, however, that real-world outcomes may vary due to workload diversity, hardware heterogeneity, and cooling-system differences not captured by synthetic-trace simulations.

5.3. Discussion of Results

The experimental results demonstrate that NCRA-DP-ACO excels across multiple objectives—energy efficiency, SLA compliance, carbon emission reduction, and operational cost savings. The ACO algorithm dynamically adapts to the system load, renewable energy availability, and resource heterogeneity among datacenters. These adaptive features are particularly effective as the system scales from medium to large workloads (e.g., 5000+ VMs).
The simulated infrastructure comprises four geographically distributed datacenters, each with 126 heterogeneous PMs (504 total) connected through a backbone network. A centralized broker and CIS coordinate placement decisions using up-to-date site states.
Under light workloads (500–1000 VMs), NCRA-DP-ACO may incur slightly higher carbon emissions and costs than simple heuristics because it prioritizes renewable availability (often requiring more active servers at a low scale). In contrast, heuristic methods typically aim to minimize the number of active servers, yielding marginally better results at small scales.
However, as workload intensity grows, the advantages of NCRA-DP-ACO become dominant thanks to better overall consolidation and green energy prioritization. Traditional approaches begin to incur markedly higher SLA violations and energy consumption because they lack adaptive placement mechanisms, whereas at such VM densities ACO keeps SLA violation rates below 0.006. This trade-off reflects the proposed algorithm’s bias toward long-term environmental sustainability over short-term server consolidation efficiency, and the advantages become clearer at scale. The advantages are as follows:
  • Scalability to Workload Intensity
The framework was tested on VM loads ranging from 500 to 14,000 VMs, demonstrating stable behavior and low overheads across scales. Key findings include the following:
  • Energy savings reached up to 18% compared to traditional heuristics (see Figure 5).
  • Carbon emissions were reduced by more than 15% in large-scale deployments due to intelligent VM consolidation and datacenter selection (see Figure 6).
  • Live migrations remained low (under 2.3% of total VMs), showing that the solution is stable and does not rely on excessive re-optimization (see Figure 8).
These outcomes (see Table 10) confirm that NCRA-DP-ACO scales efficiently and maintains optimization quality across a broad spectrum of cloud loads.
  • Sensitivity to Solar Energy Availability
Our proposed algorithm incorporates solar energy availability into every placement decision in real time. For each VM request, the framework evaluates the current solar output across all datacenters and prioritizes placement to sites with the highest available green energy. This is executed in the initial stages of our algorithm and ensures that no post hoc rebalancing is needed. This responsiveness was evident during simulation across four geographically diverse U.S. datacenters (Dallas, Richmond, San Jose, and Portland), each with distinct solar irradiance levels. When solar generation peaked in San Jose, placement density increased there. When irradiance was low across all locations, our algorithm shifted focus toward datacenters with lower carbon intensity and lower electricity and carbon tax rates.
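This site-selection logic can be sketched as follows under simplifying assumptions (the attributes and the fallback ordering are illustrative; the full heuristic also folds these signals into the ACO pheromone and heuristic terms):

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    solar_kwh_available: float   # current renewable budget at the site
    carbon_intensity: float      # ton CO2 per MWh (Table 3)
    energy_price: float          # cents per kWh (Table 3)
    carbon_tax: float            # USD per ton CO2 (Table 3)

def preferred_site(sites: list[Site], demand_kwh: float) -> Site:
    """Prefer the site with the most spare solar energy; if none can cover
    the demand renewably, fall back to the cleanest, then cheapest, grid."""
    green = [s for s in sites if s.solar_kwh_available >= demand_kwh]
    if green:
        return max(green, key=lambda s: s.solar_kwh_available)
    return min(sites, key=lambda s: (s.carbon_intensity,
                                     s.energy_price, s.carbon_tax))

sites = [Site("Dallas", 0.0, 0.335, 6.38, 24.0),
         Site("San Jose", 12.5, 0.199, 19.8, 8.59)]
print(preferred_site(sites, 1.0).name)   # San Jose while its solar peaks
```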
  • Environmental and Economic Effectiveness
Our algorithm’s consolidation behavior not only reduces energy usage but also minimizes carbon emissions and carbon tax liabilities by favoring green-powered datacenters. This environmentally aware strategy translates into significant cost savings (see Figure 7).
Placement quality, measured by the PDM metric, remained below 1% across all workloads. Even at 14,000 VMs, the PDM reached only 0.54% (see Figure 10), well within acceptable thresholds, demonstrating minimal post-placement migration and strong SLA compliance.
In conclusion, NCRA-DP-ACO shows remarkable scalability and robustness. The advantages become ever more significant with an increased system load, making it a promising solution for modern cloud infrastructures where resource demand is highly dynamic and sustainability is a priority.

6. Conclusions and Future Work

We present NCRA-DP-ACO, a multi-objective ACO framework for VM placement across geographically distributed datacenters. By incorporating real-time renewable energy availability and dynamic PUE modeling, it achieves more energy- and cost-efficient resource allocation while reducing carbon emissions and network overhead. Simulated experiments under diverse workloads demonstrate significant gains in energy savings, cost reduction, and SLA compliance, especially at higher scales where sustainability objectives are more impactful.
Future enhancements will focus on the following directions: First, we aim to incorporate adaptive pheromone strategies and reinforcement learning techniques to improve the convergence speed and responsiveness of the algorithm under rapidly changing cloud conditions.
Second, we plan to introduce a dynamic weighting mechanism that adjusts the trade-off between green energy usage and operational cost based on workload intensity and renewable energy availability, enabling more flexible and context-aware optimization.
Finally, we plan to integrate the proposed ACO-based VMP framework into real-world cloud orchestration platforms such as Kubernetes v1.33 and KubeEdge v1.20. This integration will enable real-time, scalable deployment across distributed edge–cloud environments. Specifically, we aim to expose our optimization module as a custom scheduler extension within Kubernetes, allowing it to dynamically manage pod-to-node assignments based on energy, carbon, and network considerations. Real-time telemetry such as host load, temperature, and renewable energy availability will be gathered through monitoring tools like Prometheus and Node Exporter, enabling data-driven decision-making. This step will help bridge simulation and deployment, allowing the algorithm to operate on live data streams and adapt to changing infrastructure conditions.
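As a preliminary illustration of that integration path, the sketch below outlines a Kubernetes scheduler-extender “prioritize” endpoint that scores candidate nodes from a renewable-energy annotation. The annotation key, scoring rule, and port are hypothetical; a real deployment would register the service in the kube-scheduler extender configuration.

```python
# Hypothetical kube-scheduler extender sketch using Flask.
from flask import Flask, jsonify, request

app = Flask(__name__)
ANNOTATION = "example.com/renewable-energy-score"    # hypothetical key

@app.post("/prioritize")
def prioritize():
    args = request.get_json()                        # ExtenderArgs payload
    nodes = (args.get("nodes") or {}).get("items", [])
    priorities = []
    for node in nodes:
        meta = node.get("metadata", {})
        raw = meta.get("annotations", {}).get(ANNOTATION, "0")
        score = max(0, min(10, int(float(raw))))     # extender scores: 0-10
        priorities.append({"host": meta.get("name", ""), "score": score})
    return jsonify(priorities)                       # HostPriorityList

if __name__ == "__main__":
    app.run(port=8888)
```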

Author Contributions

Conceptualization, A.S.Z. and A.M.B.; Methodology, A.S.Z. and A.M.B.; Software, A.M.B.; Validation, A.M.B.; Formal analysis, A.M.B.; Resources, A.S.Z.; Writing—original draft, A.M.B.; Writing—review & editing, A.S.Z.; Visualization, A.M.B.; Supervision, A.S.Z.; Project administration, A.S.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are contained within the article. The base simulation framework, CloudSimPlus, is publicly available on GitHub at https://github.com/cloudsimplus. Additional modifications developed for the proposed ACO-based placement mechanism are available from the corresponding author upon reasonable request.

Acknowledgments

The authors would like to acknowledge the use of ChatGPT GPT-4o to edit and polish the writing of the final manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. National Institute of Standards and Technology. The NIST Definition of Cloud Computing. In NIST Special Publication 800-145; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2011. [Google Scholar] [CrossRef]
  2. Bliedy, D.; Mazen, S.; Ezzat, E. Datacentre Total Cost of Ownership (TCO) Models: A Survey. Int. J. Comput. Sci. Eng. Appl. 2018, 8, 47–62. [Google Scholar] [CrossRef]
  3. Khosravi, A.; Andrew, L.L.H.; Buyya, R. Dynamic VM Placement Method for Minimizing Energy and Carbon Cost in Geographically Distributed Cloud Data Centers. IEEE Trans. Sustain. Comput. 2017, 2, 183–196. [Google Scholar] [CrossRef]
  4. Borah, A.D.; Muchahary, D.; Singh, S.K.; Borah, J. Power saving strategies in green cloud computing systems. Int. J. Grid Distrib. Comput. 2015, 8, 299–306. [Google Scholar] [CrossRef]
  5. Data Center Power Market Size & Share|North America. Available online: https://www.globenewswire.com/en/news-release/2021/05/12/2227889/0/en/Data-Center-Power-Market-Size-Share-North-America-Europe-APAC-Industry-Forecasts-2026-Graphical-Research.html (accessed on 25 June 2021).
  6. García-Valls, M.; Cucinotta, T.; Lu, C. Challenges in real-time virtualization and predictable cloud computing. J. Syst. Archit. 2014, 60, 726–740. [Google Scholar] [CrossRef]
  7. METACENTRUM. MetaCentrum Workload Traces for Distributed and Cloud Computing Research. Available online: https://www.cs.huji.ac.il/labs/parallel/workload/l_metacentrum2/index.html (accessed on 15 August 2024).
  8. Yadav, R.; Zhang, W.; Kaiwartya, O.; Singh, P.R.; Elgendy, I.A.; Tian, Y.C. Adaptive Energy-Aware Algorithms for Minimizing Energy Consumption and SLA Violation in Cloud Computing. IEEE Access 2018, 6, 55923–55936. [Google Scholar] [CrossRef]
  9. Dias, D.S.; Costa, H.M.K. Online Traffic-aware Virtual Machine Placement in Data Center Networks. In Proceedings of the 2012 Global Information Infrastructure and Networking Symposium (GIIS), Choroni, Venezuela, 17–19 December 2012. [Google Scholar]
  10. Zhang, B.; Qian, Z.; Huang, W.; Li, X.; Lu, S. Minimizing communication traffic in data centers with power-aware VM placement. In Proceedings of the 6th International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing, Palermo, Italy, 4–6 July 2012. [Google Scholar] [CrossRef]
  11. Shi, T.; Ma, H.; Chen, G. Energy-Aware Container Consolidation Based on PSO in Cloud Data Centers. In Proceedings of the 2018 IEEE Congress on Evolutionary Computation, CEC 2018—Proceedings, Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–8. [Google Scholar] [CrossRef]
  12. Tripathi, A.; Pathak, I.; Vidyarthi, D.P. Energy Efficient VM Placement for Effective Resource Utilization using Modified Binary PSO. Comput. J. 2018, 61, 832–846. [Google Scholar] [CrossRef]
  13. Xiong, A.P.; Xu, C.X. Energy efficient multiresource allocation of virtual machine based on PSO in cloud data center. Math. Probl. Eng. 2014, 2014, 816518. [Google Scholar] [CrossRef]
  14. Dubey, K.; Sharma, S.C.; Nasr, A.A. A Simulated Annealing based Energy-Efficient VM Placement Policy in Cloud Computing. In Proceedings of the 2020 International Conference on Emerging Trends in Information Technology and Engineering (ic-ETITE), Vellore, India, 24–25 February 2020; pp. 1–5. [Google Scholar]
  15. Addya, S.K.; Turuk, A.K.; Sahoo, B.; Sarkar, M.; Biswash, S.K. Simulated annealing based VM placement strategy to maximize the profit for Cloud Service Providers. Eng. Sci. Technol. Int. J. 2017, 20, 1249–1259. [Google Scholar] [CrossRef]
  16. Marotta, A.; Avallone, S. A Simulated Annealing Based Approach for Power Efficient Virtual Machines Consolidation. In Proceedings of the 2015 IEEE 8th International Conference on Cloud Computing, CLOUD, New York, NY, USA, 27 June–2 July 2015; pp. 445–452. [Google Scholar] [CrossRef]
  17. Zhang, B.; Wang, X.; Wang, H. Virtual machine placement strategy using cluster-based genetic algorithm. Neurocomputing 2021, 428, 310–316. [Google Scholar] [CrossRef]
  18. Lu, J.; Zhao, W.; Zhu, H.; Li, J.; Cheng, Z.; Xiao, G. Optimal machine placement based on improved genetic algorithm in cloud computing. J. Supercomput. 2022, 78, 3448–3476. [Google Scholar] [CrossRef]
  19. Gopu, A.; Thirugnanasambandam, K.; AlGhamdi, A.S.; Alshamrani, S.S.; Maharajan, K.; Rashid, M. Energy-efficient virtual machine placement in distributed cloud using NSGA-III algorithm. J. Cloud Comput. 2023, 12, 124. [Google Scholar] [CrossRef]
  20. Karmakar, K.; Das, R.K.; Khatua, S. An ACO-based multi-objective optimization for cooperating VM placement in cloud data center. J. Supercomput. 2022, 78, 3093–3121. [Google Scholar] [CrossRef]
  21. Zambuk, F.U.; Gital, A.Y.U.; Jiya, M.; Gari, N.A.S.; Ja’afaru, B.; Muhammad, A. Efficient Task Scheduling in Cloud Computing using Multi-objective Hybrid Ant Colony Optimization Algorithm for Energy Efficiency. Int. J. Adv. Comput. Sci. Appl. 2021, 12, 450–456. [Google Scholar] [CrossRef]
  22. Wei, W.; Gu, H.; Lu, W.; Zhou, T.; Liu, X. Energy Efficient Virtual Machine Placement with an Improved Ant Colony Optimization over Data Center Networks. IEEE Access 2019, 7, 60617–60625. [Google Scholar] [CrossRef]
  23. Vijaya, C.; Srinivasan, P. Multi-objective Meta-heuristic Technique for Energy Efficient Virtual Machine Placement in Cloud Computing Data Centers. Informatica 2024, 48, 1–18. [Google Scholar] [CrossRef]
  24. Ding, H.; Zhang, Y. Efficient Cloud Workflow Scheduling with Inverted Ant Colony Optimization Algorithm. Int. J. Adv. Comput. Sci. Appl. 2023, 14, 2023. [Google Scholar] [CrossRef]
  25. Xing, H.; Zhu, J.; Qu, R.; Dai, P.; Luo, S.; Iqbal, M.A. An ACO for energy-efficient and traffic-aware virtual machine placement in cloud computing. Swarm. Evol. Comput. 2022, 68, 101012. [Google Scholar] [CrossRef]
  26. Farzai, S.; Shirvani, M.H.; Rabbani, M. Communication-Aware Traffic Stream Optimization for Virtual Machine Placement in Cloud Datacenters with VL2 Topology. J. Adv. Comput. Res. Q. 2020, 11, 1–21. [Google Scholar]
  27. Zhong, C.; Yang, G. An Improved Ant Colony Algorithm for Virtual Resource Scheduling in Cloud Computing Methods to Improve the Performance of Virtual Resource Scheduling. Int. J. Adv. Comput. Sci. Appl. 2025, 14, 2023. [Google Scholar]
  28. Sun, A.; Ji, T.; Yue, Q.; Xiong, F. IaaS Public Cloud Computing Platform Scheduling Model and Optimization Analysis. Int. J. Commun. Netw. Syst. Sci. 2011, 4, 803–811. [Google Scholar] [CrossRef]
  29. Pelley, S.; Meisner, D.; Wenisch, T.F.; VanGilder, J.W. Understanding and abstracting total data center power. In Workshop on Energy-Efficient Design, Ann Arbor, MI, USA, 2009. Available online: https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=c9f3ff0a7c7c823ca3994144f33eb7ee1b31f396 (accessed on 2 May 2025).
  30. Dorigo, M.; Gambardella, L.M. Ant colony system: A cooperative learning approach to the traveling salesman problem. IEEE Trans. Evol. Comput. 1997, 1, 53–66. [Google Scholar] [CrossRef]
  31. Tawfeek, M.A.; El-Sisi, A.B.; Keshk, A.E.; Torkey, F.A. Virtual Machine Placement Based on Ant Colony Optimization for Minimizing Resource Wastage. Commun. Comput. Inf. Sci. 2014, 488, 153–164. [Google Scholar] [CrossRef]
  32. Tawfeek, M.A.; El-Sisi, A.; Keshk, A.E.; Torkey, F.A. Cloud Task Scheduling Based on Ant Colony Optimization. In Proceedings of the 2013 8th International Conference on Computer Engineering & Systems (ICCES), Cairo, Egypt, 26–28 November 2013. [Google Scholar]
  33. Gao, Y.; Guan, H.; Qi, Z.; Hou, Y.; Liu, L. A multi-objective ant colony system algorithm for virtual machine placement in cloud computing. J. Comput. Syst. Sci. 2013, 79, 1230–1242. [Google Scholar] [CrossRef]
  34. Alharbi, F.; Tian, Y.C.; Tang, M.; Zhang, W.Z.; Peng, C.; Fei, M. An Ant Colony System for energy-efficient dynamic Virtual Machine Placement in data centers. Expert Syst. Appl. 2019, 120, 228–238. [Google Scholar] [CrossRef]
  35. Pang, S.; Xu, K.; Wang, S.; Wang, M.; Wang, S. Energy-Saving Virtual Machine Placement Method for User Experience in Cloud Environment. Math. Probl. Eng. 2020, 2020, 4784191. [Google Scholar] [CrossRef]
  36. Khodayarseresht, E.; Shameli-Sendi, A. A multi-objective cloud energy optimizer algorithm for federated environments. J. Parallel. Distrib. Comput. 2023, 174, 81–99. [Google Scholar] [CrossRef]
  37. Shabeera, T.P.; Kumar, S.D.M.; Salam, S.M.; Krishnan, K.M. Optimizing VM allocation and data placement for data-intensive applications in cloud using ACO metaheuristic algorithm. Eng. Sci. Technol. Int. J. 2017, 20, 616–628. [Google Scholar] [CrossRef]
  38. Filho, M.C.S.; Oliveira, R.L.; Monteiro, C.C.; Inácio, P.R.M.; Freire, M.M. CloudSim Plus: A cloud computing simulation framework pursuing software engineering principles for improved modularity, extensibility and correctness. In Proceedings of the IM 2017—2017 IFIP/IEEE International Symposium on Integrated Network and Service Management, Lisbon, Portugal, 8–12 May 2017; pp. 400–406. [Google Scholar] [CrossRef]
  39. Beloglazov, A.; Buyya, R. Optimal online deterministic algorithms and adaptive heuristics for energy and performance efficient dynamic consolidation of virtual machines in Cloud data centers. Concurr. Comput. Pract. Exp. 2012, 24, 1397–1420. [Google Scholar] [CrossRef]
  40. NSRDB. NSRDB: National Solar Radiation Database. National Renewable Energy Laboratory. Available online: https://nsrdb.nrel.gov/data-viewer (accessed on 15 August 2024).
  41. Weatherbase. Available online: https://www.weatherbase.com/ (accessed on 15 August 2024).
  42. US Energy Information Administration. International Energy Outlook. Available online: http://large.stanford.edu/courses/2010/ph240/riley2/docs/EIA-0484-2010.pdf (accessed on 5 March 2025).
  43. Environmental Protection Agency; Emissions & Generation Resource Integrated Database (eGRID). United States Environmental Protection Agency. Available online: https://www.epa.gov/sites/production/files/2017-02/documents/egrid2014_summarytables_v2.pdf (accessed on 5 March 2025).
  44. Carbon Tax Center. International Affairs. Available online: https://www.carbontax.org/ (accessed on 15 August 2024).
  45. Liu, X.F.; Zhan, Z.H.; Zhang, J. An energy aware unified ant colony system for dynamic virtual machine placement in cloud computing. Energies 2017, 10, 609. [Google Scholar] [CrossRef]
  46. Feng, H.; Li, H.; Liu, Y.; Cao, K.; Zhou, X. A novel virtual machine placement algorithm based on grey wolf optimization. J. Cloud Comput. 2025, 14, 7. [Google Scholar] [CrossRef]
  47. Beloglazov, A.; Abawajy, J.; Buyya, R. Energy-aware resource allocation heuristics for efficient management of data centers for Cloud computing. Future Gener. Comput. Syst. 2012, 28, 755–768. [Google Scholar] [CrossRef]
  48. Alourani, A.; Khalid, A.; Tahir, M.; Sardaraz, M. Energy efficient virtual machines placement in cloud datacenters using genetic algorithm and adaptive thresholds. PLoS ONE 2024, 19, e0296399. [Google Scholar] [CrossRef]
Figure 1. System model for geographically distributed green cloud computing environment.
Figure 2. Dataflow diagram for dynamic PUE and real-time solar energy integration into the ACO heuristic calculation.
Figure 3. Hourly solar energy (kWh) profiles over five days at four datacenter sites.
Figure 4. Hourly outside temperature (°C) over five days at four datacenter sites.
Figure 5. Energy consumption comparison across algorithms for varying workloads.
Figure 6. Carbon footprint trends for proposed ACO vs. baseline algorithms under differing request numbers.
Figure 7. Total cost comparison for varying workload sizes across proposed ACO and baseline algorithms.
Figure 8. Number of live VM migrations of proposed ACO versus UACS across varying workload sizes.
Figure 9. Average host uptime (s) for proposed ACO and baseline algorithms for increasing workload sizes.
Figure 10. Placement degradation metric (PDM) comparison for proposed ACO and UACS across varying workload sizes.
Figure 11. SLA violation rates for proposed ACO and UACS for rising request numbers.
Table 1. Notations and symbols.

| Notation | Description | Notation | Description |
|---|---|---|---|
| $D$ | Datacenter sites | $\tau_{ij}$ | Pheromone value between $VM_j$ and $PM_i$ |
| $PM$ | List of servers in a datacenter | $\tau_0$ | Initial pheromone value |
| $PM_i^{CPU}$ | Total CPU of $PM_i$ | $\rho$ | Pheromone decay parameter ($0 < \rho < 1$) |
| $PM_{i,current}^{CPU}$ | $PM_i$ current CPU utilization | $\Delta\tau_{ij}$ | Pheromone reinforcement value |
| $PM_i^{RAM}$ | Total RAM of $PM_i$ | $S_{Best}$ | Current generation’s best solution |
| $PM_i^{BW}$ | Total bandwidth of $PM_i$ | $N_{ants}$ | Number of ants |
| $PM_{i,t}^{power}$ | $PM_i$ power consumption at time $t$ | $q_0$ | Exploration/exploitation behavior of ants |
| $PM_{i,current}^{BW}$ | Current $PM_i$ bandwidth | $q$ | Random number in $[0, 1]$ |
| $PM_{i,available}^{BW}$ | $PM_i$ available network bandwidth | $\eta_{ij}$ | Heuristic information between $VM_j$ and $PM_i$ |
| $PM_{ij}^{Power}$ | Estimated power consumption of $VM_j$ after placement on $PM_i$ | $\beta$ | Importance of heuristic information |
| $PM_j^{available}$ | Set of available PMs for placement | $P_{ij}$ | Probability of placing $VM_j$ on $PM_i$ |
| $PM_{i,min}^{Power}$ | $PM_i$ idle power | $PM_{i,t+}^{overhead}$ | Overhead power consumption of a server at time $t$ |
| $PM_{i,max}^{Power}$ | Peak power of $PM_i$ | $H_t$ | Datacenter outside temperature at time $t$ |
| $VM_{j,estimated}^{BW}$ | Estimated bandwidth usage of $VM_j$ | $C^{T}$ | Cost of running all VMs at the different sites |
| $PM_{i,t+}^{power}$ | Power consumption of $PM_i$ at time $t$ after placing a new VM | $C_{ij}^{emission}$ | Estimated carbon emissions of placing $VM_j$ on $PM_i$ |
| $VM$ | List of running VMs | $R_d^{Tax}$ | Carbon tax in datacenter $d$ |
| $VM^{new}$ | List of VMs to be (re)placed | $R_d^{Carbon}$ | Carbon footprint in datacenter $d$ |
| $VM_j^{CPU}$ | Required CPU for $VM_j$ | $C^{EC}$ | Energy consumption cost |
| $VM_j^{RAM}$ | Required RAM for $VM_j$ | $C^{CF}$ | Carbon footprint cost |
| $VM_j^{BW}$ | Required bandwidth for $VM_j$ | $Th_{under}$ | Underutilization threshold |
| $VM_{j,current}^{CPU}$ | $VM_j$ current CPU utilization | $x_{ij}$ | Matrix element indicating the VM-to-PM mapping |
| $Th_{over}$ | Overutilization threshold | $S$ | A VM assignment solution |
Table 2. System parameters.

| Symbol | $\rho$ | $\theta, \gamma, \delta$ | $\beta$ | $A$ | $I$ | $q_0$ | $Th_{under}$ | $Th_{over}$ |
|---|---|---|---|---|---|---|---|---|
| Value | 0.3 | 0.3, 0.4, 0.3 | 2 | 20 | 2 | 0.7 | 30% | 90% |
Table 3. Datacenter characteristics.

| Site Characteristics | Dallas | Richmond | San Jose | Portland |
|---|---|---|---|---|
| Server Power Model | Calculated using (5) based on the SpecPower benchmark [39] (all sites) | | | |
| PUE Model | $PUE(U_t, H_t) = 1 + \frac{0.2 + 0.01 \cdot U_t + 0.01 \cdot U_t \cdot H_t}{U_t}$ (all sites) | | | |
| Carbon Intensity (ton CO2/MWh) | 0.335 | 0.268 | 0.199 | 0.287 |
| Carbon Tax (USD/ton CO2) | 24 | 17.63 | 8.59 | 25.75 |
| Energy Price (cents/kWh) | 6.38 | 8.62 | 19.8 | 7.7 |
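Read as code, the dynamic PUE model above behaves as in this minimal sketch (the fraction grouping is our reading of the extracted formula, with $U_t$ as IT utilization in percent and $H_t$ as the outside temperature in °C):

```python
def dynamic_pue(u_t: float, h_t: float) -> float:
    """Dynamic PUE: facility overhead grows with load and temperature."""
    if u_t <= 0:
        raise ValueError("IT utilization must be positive")
    overhead = 0.2 + 0.01 * u_t + 0.01 * u_t * h_t
    return 1.0 + overhead / u_t

print(round(dynamic_pue(50.0, 20.0), 3))   # 1.214 at 50% load, 20 degC
```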
Table 4. Server types.

| Server Type | CPU Cores | Memory (GB) | Storage (GB) |
|---|---|---|---|
| Type 1 | 2 | 16 | 2000 |
| Type 2 | 4 | 32 | 6000 |
| Type 3 | 8 | 32 | 7000 |
| Type 4 | 8 | 64 | 7000 |
| Type 5 | 16 | 128 | 9000 |
| Type 6 | 32 | 128 | 12,000 |
Table 5. VM types.

| VM Type | Number of PEs (CPU Cores) | Memory (GB) | Storage (GB) |
|---|---|---|---|
| Type 1 A1_Medium | 1 | 1 | 100 |
| Type 2 m5.large | 2 | 2 | 200 |
| Type 3 m5.xlarge | 4 | 4 | 500 |
| Type 4 m5.2xlarge | 8 | 8 | 1000 |
| Type 5 m5.4xlarge | 16 | 64 | 2000 |
Table 6. Power modeling.

| Server Model (Power in W) | 0% | 10% | 20% | 30% | 40% | 50% | 60% | 70% | 80% | 90% | 100% CPU Utilization |
|---|---|---|---|---|---|---|---|---|---|---|---|
| G4 | 86 | 89.4 | 92.6 | 96 | 99.5 | 102 | 106 | 108 | 112 | 114 | 117 |
| G5 | 93.7 | 97 | 101 | 105 | 110 | 116 | 121 | 125 | 129 | 133 | 135 |
Table 7. Workload characteristics.

| Cloudlet PEs | VM Type | Cloudlets Percentage | Example Workload |
|---|---|---|---|
| 1 | Type 1 | 40% | Small web apps, APIs, development environments |
| 2 | Type 2 | 30% | Medium-sized apps, databases, caching servers |
| 4 | Type 3 | 20% | Enterprise apps, high-traffic web servers |
| 8 | Type 4 | 8% | Video encoding, data processing |
| 16+ | Type 5 | 2% | Machine learning, big data |
Table 8. Average runtime per workload size.

| VM Workload Size | Average Execution Time (s) | Standard Deviation (s) |
|---|---|---|
| 500 | 4983.96 | 428.23 |
| 1000 | 9151.37 | 944.57 |
| 5000 | 18,160.69 | 581.48 |
| 10,000 | 13,640.21 | 1259.35 |
| 14,000 | 18,655.12 | 1934.12 |
Table 10. Key metric comparison.

| Metric | NCRA-DP-ACO | BFD | FFD | CRADP | UACS |
|---|---|---|---|---|---|
| Energy Consumption (Max) | −18% | Baseline | −10% | −6% | −8% |
| Carbon Emissions (Max) | −15% | Baseline | −8% | −5% | −6% |
| Total Cost (Max) | −17% | Baseline | −9% | −7% | −9% |
| Live Migrations | −48.2% vs. UACS | N/A | N/A | N/A | Baseline |
| PDM (Placement Quality) | <1% | 2–4% | 3–5% | 1.7% | 1.4% |
| SLAV | Very low | Moderate | Moderate | Low | High |