Search Results (65)

Search Parameters:
Keywords = virtual machine migration

18 pages, 1338 KB  
Article
EVMC: An Energy-Efficient Virtual Machine Consolidation Approach Based on Deep Q-Networks for Cloud Data Centers
by Peiying Zhang, Jingfei Gao, Jing Liu and Lizhuang Tan
Electronics 2025, 14(19), 3813; https://doi.org/10.3390/electronics14193813 - 26 Sep 2025
Abstract
As the mainstream computing paradigm, cloud computing breaks the physical rigidity of traditional resource models and provides heterogeneous computing resources, better meeting the diverse needs of users. However, the frequent creation and termination of virtual machines (VMs) tend to induce resource fragmentation, resulting in resource wastage in cloud data centers. Virtual machine consolidation (VMC) technology effectively improves resource utilization by intelligently migrating virtual machines onto fewer physical hosts. However, most existing approaches lack rational host detection mechanisms and efficient migration strategies, often neglecting quality of service (QoS) guarantees while optimizing energy consumption, which can easily lead to Service Level Agreement Violations (SLAVs). To address these challenges, this paper proposes an energy-efficient virtual machine consolidation method (EVMC). First, a co-location coefficient model is constructed to detect the fewest suitable VMs on hosts. Then, leveraging the environment-aware decision-making capability of a DQN agent, dynamic VM migration strategies are implemented. Experimental results demonstrate that EVMC outperforms existing state-of-the-art approaches in terms of energy consumption and SLAV rate, showcasing its effectiveness and potential for practical application.
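The packing problem EVMC tackles can be made concrete with a small baseline. The sketch below is a greedy first-fit-decreasing consolidation pass, not the paper's DQN agent; the hosts, demands, and 0.8 utilization target are invented for illustration.

```python
# Hypothetical baseline: first-fit-decreasing consolidation. EVMC's DQN agent
# learns this VM-to-host mapping instead; this only shows the packing problem.

def consolidate(hosts, host_capacity=1.0, target_util=0.8):
    """Repack VM CPU demands (fractions of one host) onto as few hosts as possible."""
    vms = sorted((d for h in hosts for d in h), reverse=True)
    bins = []  # each bin is [used_capacity, [vm demands]]
    for demand in vms:
        for b in bins:
            if b[0] + demand <= target_util * host_capacity:
                b[0] += demand
                b[1].append(demand)
                break
        else:
            bins.append([demand, [demand]])  # open a new host
    return [b[1] for b in bins]

if __name__ == "__main__":
    before = [[0.2, 0.1], [0.3], [0.15, 0.05], [0.1]]
    after = consolidate(before)
    print(f"{len(before)} hosts -> {len(after)} hosts: {after}")
```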

20 pages, 1550 KB  
Article
Strategy for Precopy Live Migration and VM Placement in Data Centers Based on Hybrid Machine Learning
by Taufik Hidayat, Kalamullah Ramli and Ruki Harwahyu
Informatics 2025, 12(3), 71; https://doi.org/10.3390/informatics12030071 - 15 Jul 2025
Viewed by 1172
Abstract
Data center virtualization has grown rapidly alongside the expansion of application-based services but continues to face significant challenges, such as downtime caused by suboptimal hardware selection, load balancing, power management, incident response, and resource allocation. To address these challenges, this study proposes a combined machine learning method that uses an MDP to choose which VMs to move, the RF method to sort the VMs according to load, and NSGA-III to achieve multiple optimization objectives, such as reducing downtime, improving SLA, and increasing energy efficiency. For this model, the GWA-Bitbrains dataset was used, on which it had a classification accuracy of 98.77%, a MAPE of 7.69% in predicting migration duration, and an energy efficiency improvement of 90.80%. The results of real-world experiments show that the hybrid machine learning strategy could significantly reduce the data center workload, increase the total migration time, and decrease the downtime. The results of hybrid machine learning affirm the effectiveness of integrating the MDP, RF method, and NSGA-III for providing holistic solutions in VM placement strategies for large-scale data centers.
(This article belongs to the Section Machine Learning)
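As a rough illustration of the RF stage, the hedged sketch below trains a random forest on synthetic VM features and ranks migration candidates by predicted priority; the feature set and labels are assumptions, not the GWA-Bitbrains schema.

```python
# Sketch under assumptions: a random forest ranking VMs by migration priority,
# standing in for the paper's RF stage. Features and labels are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# features per VM: [cpu_util, mem_util, net_load], all in [0, 1]
X = rng.random((500, 3))
# toy label: "migrate" when the weighted load is high
y = (0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * X[:, 2] > 0.55).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

candidates = rng.random((5, 3))
priority = clf.predict_proba(candidates)[:, 1]  # probability of "migrate"
for rank, i in enumerate(np.argsort(-priority), 1):
    print(f"rank {rank}: VM{i} load={candidates[i].round(2)} p={priority[i]:.2f}")
```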

30 pages, 1687 KB  
Article
Network-, Cost-, and Renewable-Aware Ant Colony Optimization for Energy-Efficient Virtual Machine Placement in Cloud Datacenters
by Ali Mohammad Baydoun and Ahmed Sherif Zekri
Future Internet 2025, 17(6), 261; https://doi.org/10.3390/fi17060261 - 14 Jun 2025
Viewed by 664
Abstract
Virtual machine (VM) placement in cloud datacenters is a complex multi-objective challenge involving trade-offs among energy efficiency, carbon emissions, and network performance. This paper proposes NCRA-DP-ACO (Network-, Cost-, and Renewable-Aware Ant Colony Optimization with Dynamic Power Usage Effectiveness (PUE)), a bio-inspired metaheuristic that optimizes VM placement across geographically distributed datacenters. The approach integrates real-time solar energy availability, dynamic PUE modeling, and multi-criteria decision-making to enable environmentally and cost-efficient resource allocation. The experimental results show that NCRA-DP-ACO reduces power consumption by 13.7%, carbon emissions by 6.9%, and live VM migrations by 48.2% compared to state-of-the-art methods while maintaining Service Level Agreement (SLA) compliance. These results indicate the algorithm’s potential to support more environmentally and cost-efficient cloud management across dynamic infrastructure scenarios.
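A minimal ant-colony placement loop conveys the mechanics, assuming a single power objective instead of the paper's network-, cost-, and renewable-aware score; all parameters and the power model below are illustrative.

```python
# Minimal ACO sketch for VM placement with an idle-plus-linear power model.
# alpha/beta heuristics are omitted; pheromones alone guide construction.
import random

VMS = [0.2, 0.4, 0.1, 0.3, 0.25]       # CPU demand per VM
HOSTS, CAP = 3, 1.0
P_IDLE, P_MAX = 100.0, 250.0            # watts (assumed)

def power(util):
    return 0.0 if util == 0 else P_IDLE + (P_MAX - P_IDLE) * util

def energy(assign):
    loads = [0.0] * HOSTS
    for vm, h in enumerate(assign):
        loads[h] += VMS[vm]
    if any(l > CAP for l in loads):
        return float("inf")              # infeasible placement
    return sum(power(l) for l in loads)

tau = [[1.0] * HOSTS for _ in VMS]      # pheromone per (vm, host)
best = [vm % HOSTS for vm in range(len(VMS))]   # feasible round-robin start
best_e = energy(best)
for _ in range(50):                      # iterations
    for _ in range(10):                  # ants
        assign = [random.choices(range(HOSTS), weights=tau[vm])[0]
                  for vm in range(len(VMS))]
        e = energy(assign)
        if e < best_e:
            best, best_e = assign, e
    for vm in range(len(VMS)):           # evaporate, then reinforce best
        for h in range(HOSTS):
            tau[vm][h] *= 0.9
        tau[vm][best[vm]] += 1.0

print("placement:", best, "watts:", round(best_e, 1))
```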

24 pages, 2188 KB  
Article
Optimizing Energy Efficiency in Cloud Data Centers: A Reinforcement Learning-Based Virtual Machine Placement Strategy
by Abdelhadi Amahrouch, Youssef Saadi and Said El Kafhali
Network 2025, 5(2), 17; https://doi.org/10.3390/network5020017 - 27 May 2025
Viewed by 1735
Abstract
Cloud computing faces growing challenges in energy consumption due to the increasing demand for services and resource usage in data centers. To address this issue, we propose a novel energy-efficient virtual machine (VM) placement strategy that integrates reinforcement learning (Q-learning), a Firefly optimization algorithm, and a VM sensitivity classification model based on random forest and self-organizing map. The proposed method, RLVMP, classifies VMs as sensitive or insensitive and dynamically allocates resources to minimize energy consumption while ensuring compliance with service level agreements (SLAs). Experimental results using the CloudSim simulator, adapted with data from Microsoft Azure, show that our model significantly reduces energy consumption. Specifically, under the lr_1.2_mmt strategy, our model achieves a 5.4% reduction in energy consumption compared to PABFD, 12.8% compared to PSO, and 12% compared to genetic algorithms. Under the iqr_1.5_mc strategy, the reductions are even more significant: 12.11% compared to PABFD, 15.6% compared to PSO, and 18.67% compared to genetic algorithms. Furthermore, our model reduces the number of live migrations, which helps minimize SLA violations. Overall, the combination of Q-learning and the Firefly algorithm enables adaptive, SLA-compliant VM placement with improved energy efficiency.
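The Q-learning core of such a strategy fits in a few lines. The toy below omits the Firefly search and the RF/SOM sensitivity classifier, and its states, rewards, and penalties are invented; it only shows how a placement policy can be learned from discretized host utilizations.

```python
# Tabular Q-learning toy for VM placement: favor already-active hosts,
# penalize waking idle ones and overloading. All numbers are assumptions.
import random
from collections import defaultdict

HOSTS = 3
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2
Q = defaultdict(float)                  # Q[(state, host)]

def place(loads, demand):
    state = tuple(round(l, 1) for l in loads)   # discretized utilizations
    if random.random() < EPS:
        a = random.randrange(HOSTS)             # explore
    else:
        a = max(range(HOSTS), key=lambda h: Q[(state, h)])
    loads[a] += demand
    # reward: penalty for waking an idle host, linear power cost, overload penalty
    was_idle = loads[a] == demand
    reward = -100 * was_idle - 150 * demand - (1e6 if loads[a] > 1 else 0)
    nxt = tuple(round(l, 1) for l in loads)
    Q[(state, a)] += ALPHA * (reward + GAMMA * max(Q[(nxt, h)] for h in range(HOSTS))
                              - Q[(state, a)])
    return a

for episode in range(2000):             # train on random VM arrival sequences
    loads = [0.0] * HOSTS
    for _ in range(6):
        place(loads, random.choice([0.1, 0.2, 0.3]))
print("learned Q-table entries:", len(Q))
```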

14 pages, 397 KB  
Article
Service Function Chain Migration: A Survey
by Zhiping Zhang and Changda Wang
Computers 2025, 14(6), 203; https://doi.org/10.3390/computers14060203 - 22 May 2025
Viewed by 1060
Abstract
As a core technology emerging from the convergence of Network Function Virtualization (NFV) and Software-Defined Networking (SDN), Service Function Chaining (SFC) enables the dynamic orchestration of Virtual Network Functions (VNFs) to support diverse service requirements. However, in dynamic network environments, SFC faces significant challenges, such as resource fluctuations, user mobility, and fault recovery. To ensure service continuity and optimize resource utilization, an efficient migration mechanism is essential. This paper presents a comprehensive review of SFC migration research, analyzing it across key dimensions including migration motivations, strategy design, optimization goals, and core challenges. Existing approaches have demonstrated promising results in both passive and active migration strategies, leveraging techniques such as reinforcement learning for dynamic scheduling and digital twins for resource prediction. Nonetheless, critical issues remain—particularly regarding service interruption control, state consistency, algorithmic complexity, and security and privacy concerns. Traditional optimization algorithms often fall short in large-scale, heterogeneous networks due to limited computational efficiency and scalability. While machine learning enhances adaptability, it encounters limitations in data dependency and real-time performance. Future research should focus on deeply integrating intelligent algorithms with cross-domain collaboration technologies, developing lightweight security mechanisms, and advancing energy-efficient solutions. Moreover, coordinated innovation in both theory and practice is crucial to addressing emerging scenarios like 6G and edge computing, ultimately paving the way for a highly reliable and intelligent network service ecosystem.

23 pages, 3481 KB  
Article
Evaluating QoS in Dynamic Virtual Machine Migration: A Multi-Class Queuing Model for Edge-Cloud Systems
by Anna Kushchazli, Kseniia Leonteva, Irina Kochetkova and Abdukodir Khakimov
J. Sens. Actuator Netw. 2025, 14(3), 47; https://doi.org/10.3390/jsan14030047 - 25 Apr 2025
Viewed by 1170
Abstract
The efficient migration of virtual machines (VMs) is critical for optimizing resource management, ensuring service continuity, and enhancing resiliency in cloud and edge computing environments, particularly as 6G networks demand higher reliability and lower latency. This study addresses the challenges of dynamically balancing server loads while minimizing downtime and migration costs under stochastic task arrivals and variable processing times. We propose a queuing theory-based model employing continuous-time Markov chains (CTMCs) to capture the interplay between VM migration decisions, server resource constraints, and task processing dynamics. The model incorporates two migration policies—one minimizing projected post-migration server utilization and another prioritizing current utilization—to evaluate their impact on system performance. The numerical results show that the blocking probability for the first VM under Policy 1 is 2.1% lower than under Policy 2, and the same metric for the second VM is 4.7% lower. The average server’s resource utilization increased by up to 11.96%. The framework’s adaptability to diverse server–VM configurations and stochastic demands demonstrates its applicability to real-world cloud systems. These results highlight predictive resource allocation’s role in dynamic environments. Furthermore, the study lays the groundwork for extending this framework to multi-access edge computing (MEC) environments, which are integral to 6G networks.
(This article belongs to the Section Communications and Networking)
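The CTMC machinery underlying such models is compact: build a generator matrix, replace one balance equation with the normalization constraint, and solve the linear system. The 3-state chain below is an assumed toy (idle/serving/migrating), not the paper's two-policy model.

```python
# Solve pi @ Q = 0 with sum(pi) = 1 for a small CTMC generator matrix.
import numpy as np

# generator: rows sum to zero; off-diagonals are transition rates (1/s)
Q = np.array([[-0.5,  0.5,  0.0],   # idle -> serving
              [ 0.3, -0.7,  0.4],   # serving -> idle or migrating
              [ 1.0,  0.0, -1.0]])  # migrating -> idle

# replace one balance equation with the normalization constraint
A = np.vstack([Q.T[:-1], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)
print("stationary distribution:", pi.round(4))
# blocking-style metrics then follow as sums of pi over "full" states
```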

25 pages, 3341 KB  
Article
Adaptive BBU Migration Based on Deep Q-Learning for Cloud Radio Access Network
by Sura F. Ismail and Dheyaa Jasim Kadhim
Appl. Sci. 2025, 15(7), 3494; https://doi.org/10.3390/app15073494 - 22 Mar 2025
Viewed by 758
Abstract
The efficiency of the current cellular network is limited due to the imbalance between resource availability and traffic demand. To overcome these limitations, baseband units (BBUs) are deployed on virtual machines (VMs) to form a virtual pool of BBUs. This setup enables the pooling of hardware resources, reducing the costs associated with building base stations (BSs) and simplifying both management and control. However, extreme levels of server resource use within the pool can increase physical maintenance costs and impact virtual BBU performance. This study introduces an adaptive, threshold-based dynamic migration strategy for virtual BBUs within the iCanCloud framework by setting upper and lower limits on the servers’ resource usage in the pool. The proposed method determines whether to initiate a migration by evaluating resource usage on each compute node and identifies the target node for migration if required. The aim is to balance server load and cut energy consumption while avoiding unnecessary migrations caused by momentarily high or low server load, triggering migration at the right time rather than on a single instantaneous peak of server resource utilization. The paper uses a deep Q-network learning method to predict resource utilization and make accurate migration decisions based on a historical dataset. Experimental results show that, compared with Kalman-filter prediction and other traditional methods, this model can effectively lower the cost of VM migration by reducing migration time and frequency, enhancing overall performance while reducing energy consumption.
(This article belongs to the Section Computing and Artificial Intelligence)
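A stripped-down version of the threshold logic, with a moving-average predictor standing in for the paper's deep Q-network, might look like the following; the thresholds and window size are assumptions.

```python
# Migrate only when *predicted* utilization breaches the upper/lower
# thresholds, so one instantaneous spike does not trigger a move.
from collections import deque

UPPER, LOWER, WINDOW = 0.85, 0.25, 5    # assumed limits and history length

class Node:
    def __init__(self, name):
        self.name = name
        self.history = deque(maxlen=WINDOW)

    def observe(self, util):
        self.history.append(util)

    def migration_needed(self):
        if len(self.history) < WINDOW:
            return None                  # not enough evidence yet
        u = sum(self.history) / len(self.history)   # moving-average predictor
        if u > UPPER:
            return "offload"             # move a virtual BBU away
        if u < LOWER:
            return "drain"               # empty this node, power it down
        return None

node = Node("pool-3")
for sample in [0.4, 0.5, 0.95, 0.5, 0.45, 0.5]:  # one spike, no sustained load
    node.observe(sample)
    print(sample, "->", node.migration_needed())
```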

15 pages, 2730 KB  
Article
Deep Learning for Network Intrusion Detection in Virtual Networks
by Daniel Spiekermann, Tobias Eggendorfer and Jörg Keller
Electronics 2024, 13(18), 3617; https://doi.org/10.3390/electronics13183617 - 11 Sep 2024
Cited by 2 | Viewed by 2528
Abstract
As organizations increasingly adopt virtualized environments for enhanced flexibility and scalability, securing virtual networks has become a critical part of current infrastructures. This research paper addresses the challenges related to intrusion detection in virtual networks, with a focus on various deep learning techniques. Since physical networks do not use encapsulation, but virtual networks do, packet analysis based on rules or machine learning outcomes for physical networks cannot be transferred directly to virtual environments. Encapsulation methods in current virtual networks include VXLAN (Virtual Extensible LAN), EVPN (Ethernet Virtual Private Network), and NVGRE (Network Virtualization using Generic Routing Encapsulation). This paper analyzes the performance and effectiveness of network intrusion detection in virtual networks. It delves into challenges inherent in virtual network intrusion detection with deep learning, including issues such as traffic encapsulation, VM migration, and changing network internals inside the infrastructure. Experiments on detection performance demonstrate the differences between intrusion detection in virtual and physical networks.
(This article belongs to the Special Issue Network Intrusion Detection Using Deep Learning)
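Encapsulation is the crux: a detector must first peel the outer layer. The sketch below parses the fixed 8-byte VXLAN header defined in RFC 7348 to recover the VNI and the inner Ethernet frame; the sample packet is synthetic.

```python
# Parse the VXLAN header (RFC 7348): 8-bit flags, 24 reserved bits,
# 24-bit VNI, 8 reserved bits; the inner Ethernet frame follows.
import struct

def parse_vxlan(udp_payload: bytes):
    if len(udp_payload) < 8:
        raise ValueError("truncated VXLAN header")
    flags, vni_and_rsvd = struct.unpack("!B3xI", udp_payload[:8])
    if not flags & 0x08:
        raise ValueError("VNI flag (I bit) not set")
    vni = vni_and_rsvd >> 8              # upper 24 bits carry the VNI
    inner_frame = udp_payload[8:]        # the encapsulated tenant frame
    return vni, inner_frame

# toy packet: flags=0x08, reserved bytes, VNI=42, then a stub inner frame
pkt = bytes([0x08, 0, 0, 0]) + (42 << 8).to_bytes(4, "big") + b"\xaa" * 14
vni, inner = parse_vxlan(pkt)
print("VNI:", vni, "inner frame bytes:", len(inner))
```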

17 pages, 576 KB  
Article
Elevating Security in Migration: An Enhanced Trusted Execution Environment-Based Generic Virtual Remote Attestation Scheme
by Jie Yuan, Yinghua Shen, Rui Xu, Xinghai Wei and Dongxiao Liu
Information 2024, 15(8), 470; https://doi.org/10.3390/info15080470 - 7 Aug 2024
Cited by 1 | Viewed by 2264
Abstract
Cloud computing, as the most widely applied and prominent domain of distributed systems, has brought numerous advantages to users, including high resource sharing efficiency, strong availability, and excellent scalability. However, the complexity of cloud computing environments also introduces various risks and challenges. In the current landscape with numerous cloud service providers and diverse hardware configurations in cloud environments, addressing challenges such as establishing trust chains, achieving general-purpose virtual remote attestation, and ensuring secure virtual machine migration becomes a crucial issue that traditional remote attestation architectures cannot adequately handle. Confronted with these issues in a heterogeneous multi-cloud environment, we present a targeted solution—a secure migration-enabled generic virtual remote attestation architecture based on an improved TEE. We introduce a hardware trusted module to establish and bind with a Virtual Root of Trust (VRoT), addressing the challenge of trust chain establishment. Simultaneously, our architecture utilizes the VRoT within TEE to realize a general-purpose virtual remote attestation solution across heterogeneous hardware configurations. Furthermore, we design a controller deployed in the trusted domain to verify migration conditions, facilitate key exchange, and manage the migration process, ensuring the security and integrity of virtual machine migration. Lastly, we conduct rigorous experiments to measure the overhead and performance of our proposed remote attestation scheme and virtual machine secure migration process. The results unequivocally demonstrate that our architecture provides better generality and migration security with only marginal overhead compared to other traditional remote attestation solutions.
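As a loose illustration of the trust-chain idea, the sketch below applies the classic TPM-style extend operation, where each boot stage folds its hash into a running measurement so any tampered component changes the final digest; the stage names and use of SHA-256 are assumptions, not the paper's VRoT design.

```python
# PCR-style measurement chain: new = H(old || H(component)).
import hashlib

def extend(measurement: bytes, component: bytes) -> bytes:
    return hashlib.sha256(measurement + hashlib.sha256(component).digest()).digest()

measurement = b"\x00" * 32              # initial root-of-trust value
for stage in [b"bootloader-v2", b"hypervisor-v5", b"vm-image-prod"]:
    measurement = extend(measurement, stage)
print("an attestation quote would sign:", measurement.hex()[:32], "...")
```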

15 pages, 1744 KB  
Article
Machine Learning to Estimate Workload and Balance Resources with Live Migration and VM Placement
by Taufik Hidayat, Kalamullah Ramli, Nadia Thereza, Amarudin Daulay, Rushendra Rushendra and Rahutomo Mahardiko
Informatics 2024, 11(3), 50; https://doi.org/10.3390/informatics11030050 - 19 Jul 2024
Cited by 4 | Viewed by 3655
Abstract
Currently, utilizing virtualization technology in data centers often imposes an increasing burden on the host machine (HM), leading to a decline in VM performance. To address this issue, live virtual migration (LVM) is employed to alleviate the load on the VM. This study introduces a hybrid machine learning model designed to estimate live pre-copy migration of virtual machines within the data center. The proposed model integrates Markov Decision Process (MDP), genetic algorithm (GA), and random forest (RF) algorithms to forecast the prioritized movement of virtual machines and identify the optimal host machine target. The hybrid models achieve a 99% accuracy rate with quicker training times compared to previous studies that utilized K-nearest neighbor, decision tree classification, support vector machines, logistic regression, and neural networks. The authors recommend further exploration of a deep learning (DL) approach to address other data center performance issues. This paper outlines promising strategies for enhancing virtual machine migration in data centers. The hybrid models demonstrate high accuracy and faster training times than previous research, indicating the potential for optimizing virtual machine placement and minimizing downtime. The authors emphasize the significance of considering data center performance and propose further investigation. Moreover, it would be beneficial to delve into the practical implementation and dissemination of the proposed model in real-world data centers.
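The MDP stage can be illustrated with a two-state toy solved by value iteration: a host is either normal or overloaded, and the action is wait or migrate. The transition probabilities and costs below are invented; the expected policy is to wait when normal and migrate when overloaded.

```python
# Value iteration over an assumed two-state migration MDP.
STATES = ["normal", "overloaded"]
ACTIONS = ["wait", "migrate"]
GAMMA = 0.95

# P[s][a] = list of (prob, next_state); R[s][a] = immediate reward
P = {"normal":     {"wait":    [(0.9, "normal"), (0.1, "overloaded")],
                    "migrate": [(1.0, "normal")]},
     "overloaded": {"wait":    [(0.2, "normal"), (0.8, "overloaded")],
                    "migrate": [(0.9, "normal"), (0.1, "overloaded")]}}
R = {"normal":     {"wait":   0.0, "migrate": -5.0},   # migration has a cost
     "overloaded": {"wait": -10.0, "migrate": -5.0}}   # SLA risk while waiting

V = {s: 0.0 for s in STATES}
for _ in range(200):                                   # value iteration
    V = {s: max(R[s][a] + GAMMA * sum(p * V[s2] for p, s2 in P[s][a])
                for a in ACTIONS) for s in STATES}
policy = {s: max(ACTIONS, key=lambda a: R[s][a] + GAMMA *
                 sum(p * V[s2] for p, s2 in P[s][a])) for s in STATES}
print(policy)   # expect: wait when normal, migrate when overloaded
```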

20 pages, 501 KB  
Article
Efficient Resource Management in Cloud Environments: A Modified Feeding Birds Algorithm for VM Consolidation
by Deafallah Alsadie and Musleh Alsulami
Mathematics 2024, 12(12), 1845; https://doi.org/10.3390/math12121845 - 13 Jun 2024
Cited by 7 | Viewed by 1259
Abstract
Cloud data centers play a vital role in modern computing infrastructure, offering scalable resources for diverse applications. However, managing costs and resources efficiently in these centers has become a crucial concern due to the exponential growth of cloud computing. User applications exhibit complex behavior, leading to fluctuations in system performance and increased power usage. To tackle these obstacles, we introduce the Modified Feeding Birds Algorithm (ModAFBA) as an innovative solution for virtual machine (VM) consolidation in cloud environments. The primary objective is to enhance resource management and operational efficiency in cloud data centers. ModAFBA incorporates adaptive position update rules and strategies specifically designed to minimize VM migrations, addressing the unique challenges of VM consolidation. The experimental findings demonstrated substantial improvements in key performance metrics. Specifically, the ModAFBA method exhibited significant enhancements in energy usage, SLA compliance, and the number of VM migrations compared to benchmark algorithms such as TOPSIS, SVMP, and PVMP methods. Notably, the ModAFBA method achieved reductions in energy usage of 49.16%, 55.76%, and 65.13% compared to the TOPSIS, SVMP, and PVMP methods, respectively. Moreover, the ModAFBA method resulted in decreases of around 83.80%, 22.65%, and 89.82% in the number of VM migrations in contrast to the aforementioned benchmark techniques. The results demonstrate that ModAFBA outperforms these benchmarks by significantly reducing energy consumption, operational costs, and SLA violations. These findings highlight the effectiveness of ModAFBA in optimizing VM placement and consolidation, offering a robust and scalable approach to improving the performance and sustainability of cloud data centers.

20 pages, 4067 KB  
Article
Toward Optimal Virtualization: An Updated Comparative Analysis of Docker and LXD Container Technologies
by Daniel Silva, João Rafael and Alexandre Fonte
Computers 2024, 13(4), 94; https://doi.org/10.3390/computers13040094 - 9 Apr 2024
Cited by 2 | Viewed by 5274
Abstract
Traditional hypervisor-assisted virtualization is a leading virtualization technology in data centers, providing cost savings (CapEx and OpEx), high availability, and disaster recovery. However, its inherent overhead may hinder performance and seems not to scale or be flexible enough for certain applications, such as microservices, where deploying an application using a virtual machine is a lengthy and resource-intensive process. Container-based virtualization has received attention, especially with Docker, as an alternative, which also facilitates continuous integration/continuous deployment (CI/CD). Meanwhile, LXD has reactivated the interest in Linux LXC containers, which provides unique operations, including live migration and full OS emulation. A careful analysis of both options is crucial for organizations to decide which best suits their needs. This study revisits key concepts about containers, exposes the advantages and limitations of each container technology, and provides an up-to-date performance comparison between both types of containers (applicational vs. system). Using extensive benchmarks and well-known workload metrics such as CPU scores, disk speed, and network throughput, we assess their performance and quantify their virtualization overhead. Our results show a clear overall trend toward meritorious performance and the maturity of both technologies (Docker and LXD), with low overhead and scalable performance. Notably, LXD shows greater stability, with more consistent performance variability.
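A harness for one of the simpler metrics, cold-start latency, could look like the sketch below. It times standard `docker run --rm` invocations; the LXD side would swap in `lxc launch`/`lxc delete`. This measures CLI round-trip only, not the CPU, disk, or network scores used in the study.

```python
# Time container cold starts N times via the docker CLI (must be installed).
import subprocess, time, statistics

def cold_start(image="alpine:3.20", runs=5):
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        subprocess.run(["docker", "run", "--rm", image, "true"],
                       check=True, capture_output=True)
        samples.append(time.perf_counter() - t0)
    return statistics.mean(samples), statistics.stdev(samples)

if __name__ == "__main__":
    mean, stdev = cold_start()
    print(f"cold start: {mean * 1000:.0f} ms +/- {stdev * 1000:.0f} ms")
```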

20 pages, 847 KB  
Article
Queuing Model with Customer Class Movement across Server Groups for Analyzing Virtual Machine Migration in Cloud Computing
by Anna Kushchazli, Anastasia Safargalieva, Irina Kochetkova and Andrey Gorshenin
Mathematics 2024, 12(3), 468; https://doi.org/10.3390/math12030468 - 1 Feb 2024
Cited by 10 | Viewed by 2100
Abstract
The advancement of cloud computing technologies has positioned virtual machine (VM) migration as a critical area of research, essential for optimizing resource management, bolstering fault tolerance, and ensuring uninterrupted service delivery. This paper offers an exhaustive analysis of VM migration processes within cloud infrastructures, examining various migration types, server load assessment methods, VM selection strategies, ideal migration timing, and target server determination criteria. We introduce a queuing theory-based model to scrutinize VM migration dynamics between servers in a cloud environment. By reinterpreting resource-centric migration mechanisms into a task-processing paradigm, we accommodate the stochastic nature of resource demands, characterized by random task arrivals and variable processing times. The model is specifically tailored to scenarios with two servers and three VMs. Through numerical examples, we elucidate several performance metrics: task blocking probability, average tasks processed by VMs, and average tasks managed by servers. Additionally, we examine the influence of task arrival rates and average task duration on these performance measures.
(This article belongs to the Special Issue Modeling and Analysis of Queuing Systems)
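In the simplest single-class case, task blocking probability reduces to the Erlang-B formula, computed with the standard recursion below; the paper's two-server, three-VM model is richer, so these numbers are purely illustrative.

```python
# Erlang-B recursion: B(0) = 1, B(n) = a*B(n-1) / (n + a*B(n-1)).
def erlang_b(servers: int, offered_load: float) -> float:
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

# offered load a = lambda / mu (arrival rate over service rate)
for a in (1.0, 2.0, 4.0):
    print(f"a={a}: blocking with 3 channels = {erlang_b(3, a):.4f}")
```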

12 pages, 1257 KB  
Proceeding Paper
Sustainable Power Prediction and Demand for Hyperscale Datacenters in India
by Ashok Pomnar, Anand Singh Rajawat, Nisha S. Tatkar and Pawan Bhaladhare
Eng. Proc. 2023, 59(1), 124; https://doi.org/10.3390/engproc2023059124 - 27 Dec 2023
Cited by 1 | Viewed by 2939
Abstract
Data localization, data explosion, data security, data protection, and data acceleration are important driving forces in India’s datacenter revolution, which has raised a demand for datacenter expansion in the country. In addition, the pandemic has pushed the need for technology adoption, digitization across industries, and migration to cloud-based services across the globe. The launch of 5G services, digital payments, big data analytics, smartphone usage, digital data access, IoT services, and other technologies like AI (artificial intelligence), AR (augmented reality), ML (machine learning), 5G, VR (virtual reality), and Blockchain have been a strong driving force for datacenter investments in India. However, the rapid expansion of these datacenters presents unique challenges, particularly in predicting and managing their power requirements. This paper focuses on understanding the power prediction and demand aspects specific to hyperscale datacenters in India. The study aims to analyze historical power consumption data from existing hyperscale datacenters in India and develop predictive models to estimate future power requirements. Factors such as server density, workload patterns, cooling systems, and energy-efficient technologies will be considered in the analysis. Datacenters negatively impact the environment because of their large power consumption, contributing around 2% of global greenhouse gas emissions. Given the increasing cost of power, datacenter players are naturally encouraged to save energy, as power is a high datacenter operational expenditure cost. Additionally, this research will explore the impact of renewable energy integration, backup power solutions, and demand–response mechanisms to optimize energy usage and reduce reliance on conventional power sources. Many datacenter providers globally have started using power from renewable energy like solar and wind energy through Power Purchase Agreements (PPA) to reduce their carbon footprint and work towards a sustainable environment. In addition, today’s datacenter industry constantly looks for ways to become more energy-efficient through real innovation to reduce its carbon footprint.
(This article belongs to the Proceedings of Eng. Proc., 2023, RAiSE-2023)
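As a hedged sketch of the forecasting step, the snippet below fits a linear trend to invented annual demand figures and extrapolates; a real study would use the historical consumption data and the workload, cooling, and density factors the abstract lists.

```python
# Least-squares trend line over synthetic annual power demand.
import numpy as np

years = np.arange(2018, 2024)
mw = np.array([290, 350, 447, 520, 640, 770], dtype=float)  # invented demand

slope, intercept = np.polyfit(years, mw, 1)     # fit degree-1 polynomial
for y in (2024, 2025, 2026):
    print(f"{y}: predicted demand ~{slope * y + intercept:.0f} MW")
```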

16 pages, 2871 KB  
Article
Service Reliability Based on Fault Prediction and Container Migration in Edge Computing
by Lizhao Liu, Longyu Kang, Xiaocui Li and Zhangbing Zhou
Appl. Sci. 2023, 13(23), 12865; https://doi.org/10.3390/app132312865 - 30 Nov 2023
Cited by 3 | Viewed by 1660
Abstract
With improvements in the computing capability of edge devices and the emergence of edge computing, an increasing number of services are being deployed on the edge side, and container-based virtualization is used to deploy services to improve resource utilization. This has led to challenges in reliability because services deployed on edge nodes are prone to failure owing to hardware faults and a lack of technical support. To solve this reliability problem, we propose a solution based on fault prediction combined with container migration to address the service failure problem caused by node failure. This approach comprises two major steps: fault prediction and container migration. Fault prediction collects the log of services on edge nodes and uses these data to conduct time-sequence modeling. Machine-learning algorithms are chosen to predict faults on the edge. Container migration is modeled as an optimization problem. A migration node selection approach based on a genetic algorithm is proposed to determine the most suitable migration target to migrate container services on the device and ensure the reliability of the services. Simulation results show that the proposed approach can effectively predict device faults and migrate services based on the optimal container migration strategy to avoid service failures deployed on edge devices and ensure service reliability.
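The GA-based target selection can be sketched compactly: individuals encode a node assignment for the containers on a failing node, and fitness penalizes overload and distance. All node data below is invented.

```python
# Minimal genetic algorithm for migration-node selection.
import random

CONTAINERS = [0.3, 0.2, 0.4]                     # CPU demand to re-home
NODES = [{"free": 0.5, "dist": 1}, {"free": 0.9, "dist": 3},
         {"free": 0.6, "dist": 2}]

def fitness(ind):
    used = [0.0] * len(NODES)
    cost = 0.0
    for c, n in zip(CONTAINERS, ind):
        used[n] += c
        cost += NODES[n]["dist"]                 # prefer nearby nodes
    overload = sum(max(0.0, used[i] - NODES[i]["free"]) for i in range(len(NODES)))
    return -(cost + 100 * overload)              # higher is better

def evolve(pop_size=30, gens=40, mut=0.2):
    pop = [[random.randrange(len(NODES)) for _ in CONTAINERS]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]         # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(CONTAINERS))
            child = a[:cut] + b[cut:]            # one-point crossover
            if random.random() < mut:            # point mutation
                child[random.randrange(len(CONTAINERS))] = random.randrange(len(NODES))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

print("best assignment:", evolve())
```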