Article

GEECO: Green Data Centers for Energy Optimization and Carbon Footprint Reduction

by Sudipto Mondal, Fashat Bin Faruk, Dibosh Rajbongshi, Mohammad Masum Khondhoker Efaz and Md. Motaharul Islam *
Department of Computer Science and Engineering, United International University, Dhaka 1212, Bangladesh
* Author to whom correspondence should be addressed.
Sustainability 2023, 15(21), 15249; https://doi.org/10.3390/su152115249
Submission received: 10 September 2023 / Revised: 14 October 2023 / Accepted: 20 October 2023 / Published: 25 October 2023

Abstract: Cloud computing has revolutionized data storage, processing, and access in modern data center operations. Conventional data centers consume enormous amounts of energy for server operation, power supply, and cooling. Processors generate heat while processing data, increasing the center’s carbon footprint, and the rising energy usage and carbon emissions caused by data centers pose serious environmental challenges. Under these circumstances, energy-efficient green data centers have emerged as a powerful driver of sustainable modernization. This study proposes the Green Energy Efficiency and Carbon Optimization (GEECO) model for enhancing energy usage. Within the data center, the GEECO model dynamically adjusts workload distribution and task assignment to balance performance and manage service-level reconciliation. Real-time monitoring of energy usage and workload demand makes it possible to identify opportunities for energy optimization and carbon emission reduction. The results reveal a considerable increase in energy efficiency, with significant decreases in energy usage and related costs. The GEECO model delivers substantial improvements in energy consumption and carbon emission reduction across the introduced scenarios, and these quantitative improvements support its transition to practical application. The approach also benefits the environment by reducing carbon emissions. The resilience and practicality of the solution are analyzed as well, highlighting its potential for widespread adoption and the associated advances in sustainable cloud computing.

1. Introduction

In today’s technologically advanced world, data centers are crucial as the foundation of modern businesses, organizations, and institutions. Because of the exponential growth of digital data and the growing reliance on information technology, data centers have become a necessary piece of infrastructure for storing, processing, and managing enormous volumes of data. To illustrate the crucial role that data centers play in facilitating the digital transformation of many companies and society as a whole, this paper examines their essential relevance in sectors such as Information Technology (IT), healthcare, telecommunications, energy, and education. Organizations store and manage their important data and applications in data centers, which serve as centralized hubs. They offer a dependable and safe setting for hosting servers, networking hardware, and storage devices.
Recent studies have estimated that the energy consumed by data centers is increasing and its demand is growing rapidly due to the increase in popularity of internet services and distributed computing platforms such as clusters, grids, and clouds. It is estimated that cloud data centers consume more than 2.4% of electricity worldwide, with a global economic impact of USD 30 billion [1]. Despite the evolution in IT equipment efficiencies, the electricity consumption of data centers is expected to grow by 15–20% each year [2]. Furthermore, Cloud Data Centers (CDCs) are responsible for the emission of greenhouse gasses produced during the process of electricity generation and Information Technology (IT) equipment manufacturing and disposal [3,4]. It is also estimated that data centers were responsible for 78.7 million metric tons of carbon dioxide (CO2) emissions, equal to 2% of global emissions, in 2011 [5,6]. In particular, data centers are large in capacity, including tens of thousands of computing servers, data storage, various pieces of cooling equipment, and power transformers [7,8]. A 56% increase in electricity use by data centers was observed worldwide from 2005 to 2010 [9]. Regarding the efficiency of data centers, studies have found that, on average, nearly 55% of the energy in a data center is consumed by the computing system, and the rest is consumed by support systems such as cooling and uninterruptible power supplies [10,11]. As a result, energy consumption has become a major concern, and considerable research has been dedicated to reducing the energy consumption of data centers by integrating green data centers. The EPA states that American data centers consume the same amount of energy annually as five power plants (U.S. Environmental Protection Agency, 2007) [12]. Therefore, it is essential for data centers to be energy efficient.
A Green Data Center (GDC) functions like any other data center, serving as a storage, management, and distribution hub for data.
Data centers and high-performance computing facilities significantly contribute to climate change, emitting 100 mega-tonnes of CO2 annually, comparable to American commercial aircraft [13]. According to Strubell et al. [14], creating and honing translation engines may produce between 0.6 and 280 tonnes of CO2. For instance, it has been calculated that Australian astronomers using supercomputers produce 15 kilotons of CO2 each year, or 22 tons for each researcher [15]. The energy consumption of hyperscale data centers is expected to nearly double between 2015 and 2023, making hyperscale facilities the largest share of global data center energy consumption [16]. The energy consumption of data centers is projected to rise from 200 TWh in 2016 to 2967 TWh in 2030 [17]. Despite the COVID-19 issue, the global market for Internet Data Centers is anticipated to grow at a CAGR of 13.4% between 2020 and 2027, from an estimated USD 59.3 billion in 2020 to USD 143.4 billion by 2027 [18]. The US Internet Data Center market was projected to reach USD 16 billion in 2020. China, the second-largest economy in the world, is anticipated to have a data center market of USD 32 billion by 2027, growing at a rate of 17.5% between 2020 and 2027 [18]. Additionally, Xiao et al. [19] looked at the Input/Output (I/O) virtualization paradigm and the VM scheduling approach in terms of optimizing energy efficiency. They offered a unique I/O offset mechanism and a power-fairness credit sequencing methodology to enable quick I/O performance for promoting energy saving. The most recent report [20] states that Google’s carbon footprint for 2015 was 2.99 million metric tons, which is 79.04% more than it was for 2011. Therefore, the urgent issue facing sustainable data centers is how to lessen carbon emissions.
Since traditional electrical grids (often referred to as “brown energy”) constitute the primary source of power for most data centers, carbon emissions are still kept at a fairly high level [21,22]. In recent times, some of the largest Cloud Service Providers (CSPs), such as Google, Microsoft [23], and Facebook, have opted for green cloud centers powered by renewable energy.
According to Mata-Toledo and Young (2010) [12], reducing the carbon footprint of computer technology is a major goal of green technologies. As previously mentioned, power plants contribute significantly to greenhouse gas emissions. Therefore, it is important to decrease the amount of electricity needed globally, particularly in data centers for computing purposes. According to Gowri (2005) [24] and the American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc. (ASHRAE), a “green” data center is one that is designed to achieve maximum energy efficiency and minimum environmental impact through the simultaneous design of its mechanical, electrical, and computer systems. A green data center differs from a conventional data center in aspects such as energy efficiency, cooling systems, renewable energy, hardware efficiency, monitoring optimization, environmental impacts, and cost efficiency.
Data centers play a crucial role in our digitalized world, serving as the backbone of modern technology and facilitating various online services and applications. However, they are not without challenges. One of the primary concerns is their high energy consumption, as data centers demand substantial power to operate and cool the servers, resulting in significant operational costs and environmental implications due to increased carbon emissions. Additionally, traditional data centers often face space constraints, which can hinder the ability to accommodate the growing number of servers and equipment, leading to potential capacity limitations and difficulties in scaling up operations. The heat generated by the servers poses another challenge, necessitating advanced cooling systems to maintain optimal temperatures and prevent hardware failures. Moreover, scalability and flexibility are essential for effectively meeting fluctuating demands. However, conventional data centers may struggle to quickly expand the infrastructure, leading to potential bottlenecks and service disruptions during peak periods. Regular hardware maintenance and upgrades also add to the complexity of managing data centers. They require careful planning and execution to ensure smooth operations. Addressing these challenges is crucial to harnessing the full potential of data centers and ensuring their sustainability and efficiency in the digital age.
We are proposing a model named Green Energy Efficiency and Carbon Optimization (GEECO). The proposed solution involves a series of modules that work cohesively to optimize energy consumption in data centers while maintaining high performance levels. The workflow begins with the User Interface (UI) module, where tasks are received from cloudlets and forwarded to the data center layer for processing. In the dependency-check state, tasks are checked for dependencies, and if none exist, they proceed to the task scheduling module. Here, tasks are categorized using the Shortest Processing Time (SPT), Longest Processing Time (LPT), and Longest Setup Times First (LSTF) algorithms to prioritize and schedule them efficiently.
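The three scheduling policies named above amount to different sort orders over the task queue. The following is an illustrative sketch rather than the authors' implementation; the `Task` fields (`processing_time`, `setup_time`) are assumptions made for the example.

```python
# Illustrative sketch of the SPT / LPT / LSTF policies as sort orders.
# Task fields are assumed for the example; not the paper's implementation.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    processing_time: float  # estimated run time of the task
    setup_time: float       # time to prepare resources before running

def schedule(tasks, policy):
    """Return the tasks ordered according to the given policy."""
    if policy == "SPT":    # Shortest Processing Time first
        return sorted(tasks, key=lambda t: t.processing_time)
    if policy == "LPT":    # Longest Processing Time first
        return sorted(tasks, key=lambda t: t.processing_time, reverse=True)
    if policy == "LSTF":   # Longest Setup Times First
        return sorted(tasks, key=lambda t: t.setup_time, reverse=True)
    raise ValueError(f"unknown policy: {policy}")

tasks = [Task("a", 5, 1), Task("b", 2, 4), Task("c", 8, 2)]
print([t.name for t in schedule(tasks, "SPT")])  # shortest job comes first
```

SPT favors quick turnaround for short jobs, LPT helps balance long jobs across machines early, and LSTF front-loads expensive setup work; which order wins depends on the workload mix.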
Next, the resource estimation module employs historical, statistical, and machine-learning algorithms to accurately estimate resource requirements. The best option is then finalized based on these estimations. The data information provider plays a crucial role in monitoring and providing real-time data for decision making.
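As a hedged illustration of the "historical" estimation mentioned above, the sketch below predicts the next resource demand as an exponentially weighted moving average of past observations; the smoothing factor `alpha` is an assumption for the example, not a value from this work.

```python
# Hypothetical sketch of a historical resource estimator: exponentially
# weighted moving average over past demand samples. alpha is an assumption.
def estimate_demand(history, alpha=0.5):
    """Return a smoothed estimate of the next demand from past samples."""
    estimate = history[0]
    for observed in history[1:]:
        # blend the newest observation with the running estimate
        estimate = alpha * observed + (1 - alpha) * estimate
    return estimate

print(estimate_demand([100, 120, 110, 130]))  # -> 120.0
```

A statistical or machine-learning estimator, as the text suggests, would replace this smoothing step with a fitted model, but the interface (history in, predicted demand out) stays the same.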
To ensure resource availability, the resource-available check continuously monitors resource levels. If resources are scarce, load balancing and dynamic scaling strategies are employed to distribute tasks across multiple data centers and optimize resource usage.
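The resource-available check with its load-balancing and scaling fallback can be sketched as follows; the uniform capacity threshold and numeric load values are illustrative assumptions, not details from the model.

```python
# Minimal sketch of the resource-available check: place a task on the
# least-loaded data center with spare capacity, or "scale up" a new one.
# Capacity and load values are illustrative assumptions.
def place_task(task_load, center_loads, capacity):
    """Return the index of the center that should run the task."""
    # candidates: centers that can absorb the task without exceeding capacity
    candidates = [i for i, load in enumerate(center_loads)
                  if load + task_load <= capacity]
    if candidates:
        # load balancing: pick the least-loaded candidate
        return min(candidates, key=lambda i: center_loads[i])
    # no center fits: dynamic scaling conceptually brings up a new center
    center_loads.append(0)
    return len(center_loads) - 1

centers = [70, 40, 90]
print(place_task(20, centers, 100))  # least-loaded center that fits -> 1
```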
The execution module oversees task execution, and the final stage involves performing calculations to evaluate energy consumption, cost, carbon emission rates, and overall performance. By integrating these modules, our proposed solution achieves energy efficiency, cost-effectiveness, and sustainability in data center operations.
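The final calculation stage can be illustrated with a toy evaluation function over per-server power draw and runtime; the electricity tariff and carbon emission factor below are placeholder assumptions, not values reported in this study.

```python
# Illustrative metrics stage: energy, cost, and carbon from power and time.
# The tariff and emission factor are placeholder assumptions.
def evaluate_run(power_watts, hours, price_per_kwh=0.10, kg_co2_per_kwh=0.4):
    """Derive energy (kWh), cost (USD), and CO2 (kg) for one run."""
    energy_kwh = power_watts * hours / 1000.0
    return {
        "energy_kwh": energy_kwh,
        "cost_usd": energy_kwh * price_per_kwh,
        "co2_kg": energy_kwh * kg_co2_per_kwh,
    }

# a hypothetical 500 W server running for a day
print(evaluate_run(power_watts=500, hours=24))
```

Comparing such figures before and after an optimization pass is one way to quantify the energy, cost, and emission improvements the model targets.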
The major contributions of this paper are summarized below:
  • We have proposed an innovative model to improve energy efficiency. This model dynamically adjusts workload distribution and resource allocation within the data center to reduce energy consumption and manage service-level reconciliation.
  • We have analyzed the performance and evaluated the effectiveness of our proposed model. The results showed a significant increase in energy efficiency and a considerable decrease in energy use and related costs.
  • Our research emphasizes the importance of green computing and green IT practices, including using a balanced approach of performance evaluation and strategically placed data centers to enhance energy efficiency and reduce environmental impact.
  • We have reviewed studies that propose energy efficiency metrics for data centers, evaluating factors such as energy consumption, carbon emission factor, performance, and cost-related measurements.
These key points collectively demonstrate the contribution of our research in promoting sustainable cloud computing practices through the implementation of green data centers and energy optimization techniques. This study highlights the significance of reducing energy consumption and carbon emissions in data centers, ensuring efficient performance and cost-effectiveness.
The rest of the paper is organized as follows: Section 2 presents the literature review related to energy efficiency and carbon footprint reduction, and Section 3 provides an overview of our proposed framework, which includes the workflow. Section 4 describes our proposed optimization model methodology, which is called GEECO, and the model used to estimate the service request’s energy consumption, performance, cost, and carbon footprint reduction. Our methodology’s performance evaluation is presented in Section 5, which also includes the overall results of the research. In Section 6, we have featured our future work. Finally, Section 7 concludes our proposed work.

2. Literature Review

For this review, we sought analyses that are influential in addressing energy consumption and environmental implications. Some related evaluations periodically accentuate the advancement of sustainable approaches and algorithms for cloud data centers. To enhance different cloud data center features, some studies offer particular techniques and algorithms [6]. Certain algorithms, such as support vector machines and random forest-based feature selection, are used for workload adjustment, job scheduling, and virtual machine deployment. These techniques decrease energy usage, expenses, and carbon footprint rates while taking service-level agreement assurances into account [25]. The use of software-defined networking strategies, exclusive routing, and a flow scheduling strategy that uses less energy than fair-sharing routing is another frequent practice [26]. Performance assessments and software-defined networking/OpenFlow protocol use are emphasized [26]. Environmentally friendly networking techniques are also being analyzed regarding data centers that facilitate the Internet of Things (IoT). Data centers emphasize the use of Network Simulator version 2 for operating energy efficiency [27]. To address energy waste and latency delay concerns, mobile cloud computing maximizes resource usage, e.g., a dynamic energy-aware mobile cloud computing model and optimization-based virtual machine allocation mechanism for a typical Infrastructure-as-a-Service (IaaS) cloud provider system are also presented [28]. These systems seek to balance user needs, quality of service, and energy usage through optimizing job scheduling, virtual machine placement, and resource allocation. Waste heat recovery, renewable energy sources, and strategically placed data centers are highlighted as essential elements of sustainable cloud data centers.
The necessity of sustainable design and construction approaches in data centers and the significance of energy efficiency measures are finally examined, with an emphasis on prevailing high performance, reducing energy consumption, and measuring sustainability [29]. The unifying themes of these conclusions are sustainable techniques, algorithms, cloud data centers, and overcoming energy-related difficulties.
Green computing is both relevant and necessary for solving energy usage and environmental issues in the IT sector, achieving sustainable power management, and boosting energy efficiency. This study will elaborate on the necessity of additional study in sustainability, cost-effectiveness, and server virtualization. It will also provide an overview of green computing and cloud computing and several fields of green IT. However, two factors make it difficult to accomplish carbon capping by 50 percent while keeping costs to a minimum. One of them is that it is challenging to decide when to schedule online batteries in order to minimize operating costs due to dynamically shifting elements such as energy prices, workload arrivals, and renewable energy [20]. This study also examines the environmental viability of cloud computing and proposes strategies to reduce carbon emissions through the adoption of green energy sources [30]. Additionally, challenges such as data center placement and provided solutions are addressed here [31]. Moving on to data centers supporting the Internet of Things, the authors discuss the requirement for environmentally friendly networking strategies for managing energy-efficient data centers. The effectiveness of the framework was evaluated using Network Simulator version 2 [27]. In addition, a dynamic energy-aware cloudlet-based mobile cloud computing model to tackle energy waste and latency delays in mobile cloud computing is proposed. It emphasizes the use of dynamic programming and cloudlets to optimize cloud resources and achieve green computing [32]. Overall, these articles conjointly contribute to the understanding and advancement of green computing practices in cloud computing and data center environments.
Several ideas and strategies are presented for achieving sustainable development and lowering carbon emissions in data centers [33]. A conceptual approach that integrates multiple small and geographically distributed data centers with renewable energy sources is proposed to achieve green and sustainable data centers [34]. Previous studies have stressed the significance of effective resource allocation, workload forecasting, and task-conditioned models for Central Processing Unit (CPU) usage optimization and stability [35]. Furthermore, we investigate the integration of renewable energy sources, such as wind turbines, solar panels, and waste heat reuse systems, to enhance energy efficiency and reduce environmental effects. Microgrid layouts, cost issues, and the need to balance economic development, national security, and environmental sustainability are discussed in the research [36]. By combining insights from these papers, researchers and practitioners can gain a comprehensive understanding of sustainable practices, optimization techniques, and energy management strategies for green data centers [37].
Data center energy efficiency is a critical issue, as data centers consume a significant amount of energy, and researchers are developing and implementing a range of innovations and techniques to reduce data center energy consumption. The energy consumption rates and cost-cutting measures of rack arrangements with a vertical cooling airflow system are studied and compared with discrete cooling techniques using computer room air conditioning and economizer techniques [38]. In addition, an optimization-based virtual machine allocation plan called Strategy-based User Requirement (SSUR) is introduced, which considers user needs, energy use rate, and quality of service [25]. The method includes virtual machine allocation, which depends on hardware resource usage, virtual machine migration, and Power Management (PM) shutdown strategies to increase dependability and reduce energy consumption. Work on energy efficiency parameters in data center communication systems provides a set of parameters to increase performance levels and decrease energy consumption rates. Four different designs, DCell, BCube, Hypercube, and Fat Tree three-tier, are used to examine these metrics, to assess whether they are effective in reaching the targets of green computing and decreasing the carbon footprint in data centers [39]. A framework for uniform categories of indicators includes energy consumption, well-organized infrastructure for data centers, airflow techniques, cooling systems, energy efficiency, carbon emissions, and cost-related measurements [40].

Gap Analysis

With an emphasis on energy efficiency, cost efficiency, performance, carbon reduction, and energy modeling, we conducted a gap analysis, as shown in Table 1, and evaluated a selection of research publications. However, there are still certain gaps, such as energy efficiency transitions and the lack of thorough cost-benefit analyses. Studies have shown that energy interventions can improve performance; however, there is a gap in the absence of defined performance indicators. The evaluation of the plans of carbon reduction for long-term environmental effects and the incorporation of real-time data into energy modeling were two other notable areas of inadequacy. This study helps to focus future research on filling these gaps and promoting more effective and all-encompassing strategies for sustainable energy management.
We have examined many techniques and strategies that have been employed in the past for a variety of research goals, and they are presented in Table 1. To emphasize the contrast and gap analysis, we have selected a few relevant works. Here are a few distinct strategies that most of the researchers have employed. The Dynamic Energy aware Cloudlet-based Mobile (DECM) cloud computing model was employed by Keke Gai et al. [32]. Stephen Bird et al. [34] used the Distributed Green Data Center (DGDC) as their main approach. Jianxiong Wan et al. [37] worked with Combined Cooling, Heating, and Power (CCHP) and Waste Heat Reuse (WHR). Shanchen Pang et al. [42] built a model for a dynamic energy management system for cloud data centers that included a Dynamic Voltage Scaling (DVS) management module, analyzed the scheduling procedure, and proposed a task-oriented resource allocation method (LET-ACO). In contrast, this study develops the Green Energy Efficiency and Carbon Optimization (GEECO) model, which we compare against these alternatives.

3. Proposed Framework

3.1. Model Architecture

Our study proposes an architecture for green computing in which we aim to achieve efficient energy use by using green data centers. As shown in Figure 1, our architecture presents multiple end devices that send requests to the data centers for data and information. Between the end device layer and the green data centers, we use a cloudlet layer, which contains multiple cloudlets. The cloudlets help to collect information from the data centers faster. The cloudlet layer comprises a cache storage system for frequent and common requests from end devices. There is a workload measurement system in the cloudlet layer to distribute the workload evenly. The cloudlet layer also contains an encryption and decryption layer for data security. The request is encrypted and sent to the data centers through the cloudlet layer. Figure 2 provides a graphical display of the entire proposed architecture.
When an end user makes a request to our architecture, it moves through different layers. Figure 1 represents the interactions among the components of the cloudlet layer, which include encryption and decryption, the cache store, and workload measuring. For security purposes, the encryption and decryption layer is the first layer the request travels through. Then, the process moves on to the cache store, where frequently used data are retrieved to speed up data retrieval. If the data are not in the cache, the request goes to the cloudlet layer’s workload management component, which oversees the capacity and job distribution of the cloudlet. The cloudlet chooses which data center will handle the request, reducing the burden on the data center layer. The response is delivered back in the same way to the end user after the data center completes the necessary processing. The cache mechanism stores responses for faster processing of future operations.

3.1.1. End User Layer

The end device layer comprises a variety of consumer devices, including computers, cell phones, and Internet of Things gadgets, as mentioned in Figure 1 and Figure 2. These devices produce data requests that are sent to the cloudlet layers for processing. To protect data during transmission, end devices connect with the cloudlet layer via secure communication protocols. Between the endpoints and green data centers, the cloudlet layer serves as a connecting tier. It consists of many cloudlets that are intentionally placed close to end devices to lower data latency and speed up reaction times.

3.1.2. Cloudlet Layer

Encryption and Decryption Layer

A critical component of the design is data security. The cloudlet layer incorporates an encryption and decryption layer to guarantee the secrecy and integrity of the data while they are being sent. A data request is encrypted using robust cryptographic techniques at the end device, before transmission. As the encrypted request traverses the cloudlet layer and green data centers, critical data are shielded from potential threats. Upon reaching its destination, the encrypted data request is meticulously decrypted using advanced cryptographic methods, ensuring the secure and reliable retrieval of the requested information, as shown in Figure 1.

Cache Storage System

Each cloudlet’s cache storage mechanism is essential for improving the speed of data retrieval. To satisfy recurrent demands from end devices, it maintains frequently requested and common data. As shown in Figure 1, by maintaining a local cache, the cloudlet will reply quickly to frequent requests without contacting the data centers, saving energy and lowering the overhead associated with data transfer.
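A minimal sketch of such a cloudlet cache is given below, assuming a simple dictionary store and a fetch callback standing in for the data center round trip; this is an illustration of the idea, not the architecture's implementation.

```python
# Illustrative cloudlet cache: serve repeated requests locally and only
# contact the data center on a miss. A dict stands in for the cache store.
class CloudletCache:
    def __init__(self, fetch_from_datacenter):
        self.store = {}                  # locally cached responses
        self.fetch = fetch_from_datacenter
        self.hits = 0                    # requests served without a round trip

    def get(self, key):
        if key in self.store:
            self.hits += 1               # hit: no data-center contact needed
            return self.store[key]
        value = self.fetch(key)          # miss: forward to the data center
        self.store[key] = value          # remember it for future requests
        return value

cache = CloudletCache(lambda k: f"data:{k}")  # stand-in data-center lookup
cache.get("report"); cache.get("report")
print(cache.hits)  # the second identical request is a hit -> 1
```

Every hit avoids one cloudlet-to-data-center exchange, which is exactly where the energy and transfer-overhead savings described above come from; a production cache would also bound its size and evict stale entries.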

Workload Measurement System

A dynamic workload-measuring system is used to achieve load balancing and distribute the burden equally across cloudlets. Figure 1 shows that this system periodically evaluates each cloudlet’s processing power and workload. Based on these measurements, incoming data requests are sent to the cloudlet that is least busy, maximizing resource use and lowering the possibility of bottlenecks.

3.1.3. Data Center Layer

The foundation of our energy-efficient computing infrastructure is the green data center layer, as presented in Figure 1. To reduce their carbon impact, these data centers use energy-efficient techniques, renewable energy sources, and cutting-edge cooling systems. The green data centers decrypt and analyze data requests received from the cloudlet layer, obtain the necessary data, and encrypt the responses before sending them back.
The design of our model addresses the urgent issue of conventional data centers’ high energy usage and the rising demand for effective data processing and delivery in the digital era. This technology creates a decentralized and distributed computing architecture, enabling smooth connectivity between end devices and green data centers through the smart integration of cloudlet layers. To maximize resource efficiency and encourage load balancing, this strategy ensures that data requests are intelligently forwarded to the most appropriate cloudlet based on workload assessments. In addition to addressing the pressing issue of excessive energy consumption in conventional data centers, our proposed model design also addresses the growing need for efficient data processing and transmission in the digital era.
Our methodology heavily emphasizes data security and privacy in addition to energy economy. Sensitive data are kept safe during the entire data transmission process because of the encryption and decryption layers implemented in the cloudlet layer. In this model, data are protected from potential risks and unlawful access by using strong encryption techniques that promote user confidence in the system.
The use of green data centers strengthens our dedication to environmentally friendly computing methods. These green data centers drastically minimize their carbon footprint by using energy-efficient technology, cutting-edge cooling methods, and renewable energy sources. This model supports worldwide efforts to lessen the environmental effect of information technology and encourages a more sustainable future by integrating this design with green data centers.

3.2. Flowchart

3.2.1. User Interface Module

Tasks arrive at the data center from cloudlets through the User Interface (UI) module. The UI module, the very first state of the workflow, as shown in Figure 3, provides an interface for users to interact with the data center through cloudlets. It includes web-based portals, command-line interfaces, or Graphical User Interfaces (GUIs) that allow users to submit tasks, view status updates, and access the various services offered by the data center.
The UI module plays a pivotal role in the data center’s operations. It gathers crucial task information, such as the type of task, resources needed, input data, and job priority, and then smoothly sends that information to the cloudlet layer and the data center layer for further processing, as presented in Figure 3. The UI module also provides a user-friendly interface for tracking the status and advancement of submitted tasks or jobs once they have been submitted. Users are provided with real-time information on job execution, completion, and any faults or failures that may have occurred. The module also provides task management tools that enable users to pause, cancel, or alter jobs in compliance with data center regulations and capabilities. To guarantee that only authorized users have access to the data center’s resources and services, the UI module manages user authentication and access control. This is carried out by using procedures such as username–password authentication and multi-factor authentication. When problems occur during task submission or processing, the UI module instantly issues error warnings and reports to keep users informed and direct them toward viable solutions.

3.2.2. Cloudlet Layer

The cloudlet layer in this flowchart is crucial in choosing the best data center to handle user requests. Cloudlets evaluate the present load and capacity of several data centers by utilizing a dynamic workload-measuring mechanism, as presented in Figure 3. When requests are initiated by users, the cloudlet layer automatically routes them to the most appropriate data center based on parameters such as location, available resources, and workload. By effectively distributing the computing load among data centers, this method lessens the strain on any one data center. This strategy also improves the system’s overall responsiveness by reducing the requirement for intensive intercommunication between data centers. As a clever middleman, the cloudlet layer optimizes job distribution and contributes to the distributed infrastructure’s smooth and effective operation.

3.2.3. Data Center Layer

The data center layer is where the GEECO model operates. This layer serves as the hub where all user requests are handled and managed and is essential to this operation. Its main goal is to manage resource allocation to successfully satisfy customer requests. This entails evaluating the resources at hand, classifying and arranging jobs deftly, and guaranteeing optimal resource use. It also manages task dependencies to preserve the orderly progression of events, as presented in Figure 3.
Additionally, the data information provider and resource availability check contribute to informed decision making. If resources are insufficient, dynamic resource scaling and load balancing are employed. The execution phase handles task execution, with subsequent measurement modules gauging performance, energy consumption, cost, and carbon emissions.
The data center infrastructure is scalable and fault-tolerant to respond quickly to changing demands and recover from any faults. Overall, the data center layer acts as the brain of the workflow, coordinating a variety of tasks to provide effective, secure, and high-quality data center services.

3.2.4. Dependency Check

At the dependency-check state, tasks are examined for any dependencies or inter-task relationships that may affect scheduling, as presented in Figure 3. At this crucial phase, identifying and resolving any constraints or prerequisites so that activities execute in the proper order helps to optimize the project workflow and resource allocation.
The dependency-check state ensures that required activities are completed before dependent ones can run, which is crucial to task management. Based on these dependencies, it methodically determines the execution order. Tasks without dependencies or those whose requirements have already been fulfilled move on to the task scheduling module, whereas those whose dependencies are yet unmet are temporarily put on hold until the required prerequisites are completed. This ensures a smooth and organized task execution process.
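The prerequisite check described above can be sketched as a topological ordering pass over the task graph. The following is a minimal illustration; the function and field names are ours, not taken from the GEECO implementation:

```python
from collections import deque

def resolve_execution_order(dependencies):
    """Return a valid execution order for tasks, or None if unmet
    (circular) dependencies make scheduling impossible.

    dependencies: dict mapping task -> set of prerequisite tasks.
    """
    # Count unmet prerequisites for every task.
    indegree = {t: len(prereqs) for t, prereqs in dependencies.items()}
    # Reverse map: prerequisite -> tasks waiting on it.
    dependents = {t: [] for t in dependencies}
    for task, prereqs in dependencies.items():
        for p in prereqs:
            dependents[p].append(task)

    # Tasks with no dependencies move on to scheduling immediately.
    ready = deque(t for t, d in indegree.items() if d == 0)
    order = []
    while ready:
        task = ready.popleft()
        order.append(task)
        for waiting in dependents[task]:
            indegree[waiting] -= 1
            if indegree[waiting] == 0:
                ready.append(waiting)

    # Leftover tasks stay on hold: their prerequisites never complete.
    return order if len(order) == len(dependencies) else None

# "clean" depends on "ingest"; "report" depends on both.
deps = {"ingest": set(), "clean": {"ingest"}, "report": {"ingest", "clean"}}
print(resolve_execution_order(deps))  # ['ingest', 'clean', 'report']
```

Tasks whose prerequisites cannot be fulfilled are simply never emitted, mirroring the "temporarily put on hold" behavior described above.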

Dependency Handle

In Figure 3, the dependency-handle state acts as a watchful keeper of task dependencies, alerting users or administrators through error messages or alerts as soon as there are any unresolved dependency problems. By using the distributed deadlock detection algorithm to detect and resolve deadlock situations, it efficiently manages complicated scenarios, such as circular dependencies or interdependent processes. The algorithm is specifically designed for distributed systems, where resources and processes span multiple nodes, to ensure a continuous workflow. To detect deadlocks, it relies on communication among the distributed components, and because it is scalable, it is appropriate for a large data center.
This distributed data center’s deadlock detection algorithm depends on several crucial parameters. The wait-for graph, which has nodes for processes and directed edges for wait-for connections, shows the relationships between transactions. Wait-die and wound-wait are two crucial methods that control transactions according to age and priority; transaction priority is mostly determined by assigned timestamps. Each transaction carries a status, such as active, blocked, or aborted, that reflects its current state. Dependencies between transactions and resources are depicted in the resource allocation graph. Timeouts add a temporal component by defining the length of time after which a transaction is deemed to be in a deadlock. Together, this comprehensive methodology improves the effectiveness of identifying and controlling deadlocks in our distributed system.
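As an illustration of the cycle-detection core of such an algorithm, the sketch below searches a wait-for graph for a cycle using depth-first search. In a real distributed deployment, the edges would first be collected from the participating nodes; all names here are illustrative:

```python
def find_deadlock(wait_for):
    """Detect a cycle in a wait-for graph using iterative DFS.

    wait_for: dict mapping a transaction to the set of transactions
    it is waiting on. Returns True if a cycle (deadlock) exists.
    """
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on DFS stack / finished
    color = {t: WHITE for t in wait_for}

    def visits(start):
        stack = [(start, iter(wait_for.get(start, ())))]
        color[start] = GRAY
        while stack:
            node, it = stack[-1]
            advanced = False
            for nxt in it:
                if color.get(nxt, WHITE) == GRAY:
                    return True  # back edge to a node on the stack: cycle
                if color.get(nxt, WHITE) == WHITE:
                    color[nxt] = GRAY
                    stack.append((nxt, iter(wait_for.get(nxt, ()))))
                    advanced = True
                    break
            if not advanced:
                color[node] = BLACK  # fully explored, no cycle through here
                stack.pop()
        return False

    return any(color[t] == WHITE and visits(t) for t in wait_for)

# T1 waits on T2 and T2 waits on T1: a classic deadlock.
print(find_deadlock({"T1": {"T2"}, "T2": {"T1"}}))  # True
print(find_deadlock({"T1": {"T2"}, "T2": set()}))   # False
```

A detected cycle would then be broken by aborting one transaction, for example the youngest under a wait-die policy.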

3.2.5. Task Scheduling Module

The task scheduling module prioritizes tasks based on various criteria, such as importance, deadline, and resource requirements. It ensures that critical tasks are given higher priority to meet service-level agreements and deadlines. The state implements task scheduling algorithms to determine the order of task execution and resource allocation. Various scheduling algorithms, such as Longest Process Time, Shortest Processing Time, and Longest Setup Times First, have been used based on the specific requirements of the data center. The state also employs deadline-aware scheduling techniques to prioritize tasks with approaching deadlines.
This module incorporates energy-aware scheduling techniques to optimize energy consumption in the data center. It can schedule tasks on energy-efficient servers or consolidate tasks to reduce overall power usage. The state supports dynamic rescheduling of tasks in response to changes in task priorities, resource availability, or system load. The state monitors the progress of task execution and updates the status of the tasks once they are completed, as presented in Figure 3.

3.2.6. Categorizing Tasks

Longest Process Time (LPT)

In the LPT scheduling algorithm, shown in Figure 3, tasks with the longest processing times are given higher priority. This approach prioritizes tasks that require more processing resources and time, potentially reducing the overall processing time of the task queue. In a data center context, this algorithm is used when there are tasks with varying complexities or when some tasks have higher processing requirements. By scheduling longer tasks first, the data center can focus on completing resource-intensive tasks early and potentially reduce the waiting time for other tasks in the queue.

Shortest Processing Time (SPT)

The SPT scheduling algorithm works in contrast to the LPT. It prioritizes and schedules tasks with the shortest processing times first. This approach minimizes the average waiting time for tasks in the queue and maximizes task throughput, as in Figure 3. In a data center, we have used the SPT algorithm when the goal is to process tasks quickly and efficiently. It is particularly useful for tasks with short execution times, as it ensures that these tasks are completed promptly, which can lead to improved overall performance and responsiveness of the data center.

Longest Setup Times First (LSTF)

The LSTF scheduling algorithm considers the setup times required for each task before execution, as shown in Figure 3. Setup time refers to the time required to prepare resources or contexts for task execution. In LSTF, tasks with longer setup times are given higher priority. This approach is beneficial when tasks have significant setup overheads and should therefore be arranged based on setup time. In a data center, we have used this algorithm when there are tasks with significant start or preparation requirements before actual processing. By scheduling tasks with longer setup times first, the data center can optimize resource utilization and reduce the idle time between task executions.
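The three orderings can be expressed as simple sort keys. The helper below is an illustrative sketch, not the paper's implementation; the Task fields mirror the p_time/s_time notation used later in the listings:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    p_time: float  # processing time
    s_time: float  # setup time

def order_tasks(tasks, policy):
    """Order tasks under one of the three scheduling policies."""
    keys = {
        "LPT": lambda t: -t.p_time,   # longest processing time first
        "SPT": lambda t: t.p_time,    # shortest processing time first
        "LSTF": lambda t: -t.s_time,  # longest setup time first
    }
    return sorted(tasks, key=keys[policy])

tasks = [Task("a", 4, 1), Task("b", 9, 3), Task("c", 2, 5)]
print([t.name for t in order_tasks(tasks, "SPT")])   # ['c', 'a', 'b']
print([t.name for t in order_tasks(tasks, "LPT")])   # ['b', 'a', 'c']
print([t.name for t in order_tasks(tasks, "LSTF")])  # ['c', 'b', 'a']
```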

3.2.7. Resource Estimation Module

The resource estimation module performs a thorough evaluation after classifying tasks into LPT, SPT, or LSTF, as mentioned in Figure 3. It makes estimations for the amount of processing power, memory, storage, and network bandwidth required for each activity. Additionally, the module performs cost estimates for supplying the necessary resources, taking into account elements such as resource use, pricing models, and service level agreements. This module assesses resource allocation as well as the carbon footprint that results from the provision and use of these resources. It takes into account factors such as the energy sources used in the data center, their carbon emissions, and the overall energy efficiency of the infrastructure.

3.2.8. Resource Availability Check

Load Balancing and Dynamic Scaling

The even allocation of work among the available resources is the key idea of load balancing, as in Figure 3. Load balancing minimizes the risk of resource bottlenecks and maximizes resource usage by properly managing job allocation. In doing so, it ensures that no resource is overused while others remain underused. As a consequence, data center operations are efficient, activities are completed quickly, and overall performance is optimized, making the computing environment more dependable and responsive.
Dynamic resource scaling is a pivotal strategy for enhancing the efficiency of data centers and resource management, as presented in Figure 3. This approach allows the data center to respond dynamically to shifting workload demands. For instance, during periods of high workload, the data center can seamlessly allocate additional computing nodes to ensure tasks are processed efficiently. Conversely, when the workload decreases, resources can be released and operating expenses reduced. By coordinating resource allocation with current demand, dynamic resource scaling enhances both efficiency and cost-effectiveness.
According to the flowchart presented in Figure 4, the procedure starts when a request is received from outside sources. The system then determines whether the required resources are accessible. If resources are available, the model dynamically modifies the workload distribution to ensure an ideal job assignment. The system then determines whether any tasks need to be allocated. If tasks are present, they are dynamically assigned and executed. After a job has been completed, the system measures performance and transitions into the data information provider state. This state involves ongoing observation and data archiving for projections. In the absence of assignable tasks, the system watches for incoming requests. If resources are initially insufficient, the system performs load balancing, dynamic scaling, task execution, and performance monitoring via essential management, and then enters the data information provider state. The flowchart shows the sequential processes a request goes through while emphasizing the dynamic adjustments made along the way to improve effectiveness and performance.

3.2.9. Measurements

After execution, the measurement module has several elements and is essential for job completion. It systematically assesses task performance, taking into account processing time, response time, and other important performance metrics to determine efficiency, as mentioned in Figure 3. It closely examines power utilization while a job is being carried out, carefully tracking it across all processing nodes and parts to calculate energy usage. Another crucial process is cost computation, which measures the cost of each activity by taking into account resource allocation, energy use, and other costs. In addition, the module measures carbon emissions attributable to task execution by computing carbon dioxide emissions resulting from energy consumption, in line with green computing principles.

3.2.10. Data Information Provider

The data information provider has a variety of responsibilities in the data center setting. First, it gathers information extensively from many sources, including measurements of resource use, energy consumption, job execution times, and performance. It can also expand the sources from which it collects data, including the weather and energy prices. The data is then thoroughly processed and analyzed using a variety of methods, including data analytics and machine-learning algorithms. Its analytical capability enables data-driven decision making by uncovering insightful trends, patterns, and anomalies.
From Figure 3, it can be seen that the data information provider also takes on a crucial function in performance monitoring, supervising the effectiveness, resource utilization, job execution times, and response times of the data center. It carefully monitors Key Performance Indicators (KPIs) for the assessment of the efficiency of the data center. In addition, it specializes in monitoring and evaluating energy consumption trends in the context of green data centers, using measures such as Power Usage Effectiveness (PUE) to evaluate energy-saving activities. Additionally, it carries out environmental impact analyses and calculates carbon emissions brought on by energy use, which promotes sustainability initiatives.
The data information provider also provides helpful support through reporting and visualization, utilizing dashboards, charts, graphs, and other visualization tools to make it easier for users to understand the data. It expands resource management suggestions based on data and analysis, offering methods for boosting energy effectiveness, cutting costs, and improving overall performance. In addition, it forecasts resource demands, energy consumption patterns, and possible performance bottlenecks using historical data and predictive modeling. In certain cases, it even offers real-time monitoring, warning managers of important occurrences and assisting preemptive replies for the best possible data center operations. In the end, it provides data-driven decision assistance for resource allocation, workload management, and energy-saving techniques to data center administrators.

Real-Time Monitoring Tools, Data Collection, and Analysis Techniques

A powerful open-source toolkit called Prometheus is renowned for its scalability and dependability in monitoring and alerting, and it is good at gathering and analyzing time-series data. Used in conjunction with Grafana, which allows configurable dashboards, the combination enables a dynamic and visually appealing way to evaluate the gathered data. For log and event data, the ELK (Elasticsearch, Logstash, and Kibana) Stack is useful as it enables centralized logging and real-time analytics.
Technologies such as Apache Kafka enable continuous updates as events happen for real-time data streaming. This strategy enables the system to adapt to sudden changes. In addition, a thorough and timely data-gathering approach includes periodic polling at certain intervals, made possible by technologies such as Telegraf or custom scripts, to handle metrics that do not require real-time updates. Machine-Learning (ML) methods are used to improve the system’s analytical skills. Algorithms such as clustering, regression, and anomaly detection help in trend prediction, anomaly identification, and resource allocation optimization. Utilizing statistical analysis techniques also helps with identifying outliers, comprehending data distribution, and supporting well-informed decision-making processes. Finally, these analytical methods enable the system to draw valuable conclusions from the gathered data.
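As one concrete instance of the statistical outlier identification mentioned above, a z-score filter can flag abnormal energy readings. The sketch below is illustrative; the trace and threshold are invented:

```python
import statistics

def flag_anomalies(samples, threshold=3.0):
    """Flag samples whose z-score exceeds the threshold.

    samples: list of numeric readings (e.g., kWh per interval).
    Returns the list of (index, value) pairs deemed anomalous.
    """
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []  # a perfectly flat trace has no outliers
    return [(i, v) for i, v in enumerate(samples)
            if abs(v - mean) / stdev > threshold]

# A steady energy trace with one abnormal spike at index 5.
readings = [5.0, 5.1, 4.9, 5.2, 5.0, 12.0, 5.1]
print(flag_anomalies(readings, threshold=2.0))  # [(5, 12.0)]
```

In a monitoring pipeline, such flags would typically feed the alerting layer rather than be printed.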
The development of a reliable monitoring system includes a number of essential elements. First, easy data flow between system components is made possible by seamless interaction with task management systems through APIs. This guarantees quick reactions to changing situations. Flexibility and efficiency are increased by utilizing particular database technologies, such as InfluxDB for time-stamped data and MongoDB or Cassandra for massive amounts of heterogeneous data storage. The creation of a feedback loop is also essential. The system may learn from past data thanks to this loop, which is supported by continuous improvement approaches. As a result, algorithms and settings are regularly updated. Critical metrics are tracked in almost real-time, while other metrics are analyzed at regular intervals in accordance with the system’s performance goals and objectives.

3.3. Algorithm

The Categorize_Processes function in Algorithm 1 categorizes processes and accepts an array of processes as input. The algorithm initializes SPT, LPT, and LSTF to store the categorized tasks for the Shortest Processing Time, Longest Processing Time, and Longest Setup Times First categories, respectively. Iterating over the tasks, processes are listed in ascending order of their processing times for the SPT category; in descending order of their processing times, prioritizing longer tasks, for the LPT category; and in order of decreasing setup times, favoring processes with longer setup times, for the LSTF category.
Algorithm 1 Process Categorizing Algorithm
1:  function Categorize_Processes(processes)
2:      spt, lpt, lstf ← ∅
3:      for p ∈ processes do
4:          syncedSPT ← p.p_time + p.s_time
5:          spt ← spt + syncedSPT
6:      end for
7:      for p ∈ processes do
8:          syncedLPT ← p.p_time + p.s_time
9:          lpt ← lpt + syncedLPT
10:     end for
11:     for p ∈ processes do
12:         syncedLSTF ← p.p_time + p.s_time
13:         lstf ← lstf + syncedLSTF
14:     end for
15:     return spt, lpt, lstf
16: end function

3.3.1. Process Categorizing Algorithm

After categorizing all tasks, the function returns the three categories of processes: SPT, LPT, and LSTF. This categorization can aid decision making and task scheduling based on the selected criteria.
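Taken literally, the listing accumulates the combined processing and setup time of every process into each of the three category totals. A minimal Python transcription, under the assumption that each process record carries p_time and s_time fields, might look like:

```python
def categorize_processes(processes):
    """Transcription of Algorithm 1: accumulate the combined
    processing + setup time of each process into the SPT, LPT,
    and LSTF category totals. (In the listing, all three loops
    apply the same p_time + s_time accumulation.)"""
    spt = lpt = lstf = 0
    for p in processes:
        spt += p["p_time"] + p["s_time"]
    for p in processes:
        lpt += p["p_time"] + p["s_time"]
    for p in processes:
        lstf += p["p_time"] + p["s_time"]
    return spt, lpt, lstf

procs = [{"p_time": 4, "s_time": 1}, {"p_time": 2, "s_time": 3}]
print(categorize_processes(procs))  # (10, 10, 10)
```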

3.3.2. Best Process Determining Algorithm

In Algorithm 2, the function Determine_Best_Process takes a list of options as input and aims to determine the best option based on the estimated criteria. For each option, the values for energy consumption, cost, performance, and carbon emission are estimated using historical data analysis, statistical analysis, and machine-learning algorithms. After estimating these aspects, it reports to the data information provider for further processing. The report_data_information_provider function, mentioned in Algorithm 2, then provides the total information about the resources and other necessary details.
Algorithm 2 Best Process Determining Algorithm
1:  function Determine_Best_Process(processes)
2:      selected_process ← None
3:      for p ∈ all_processes do
4:          x ← estimated_energy_consumption(p)
5:          y ← estimated_cost(p)
6:          w ← estimated_performance(p)
7:          z ← estimated_carbon_emission(p)
8:      end for
9:      report_data_information_provider(x, y, w, z)
10:     return selected_process
11: end function

3.3.3. Load Balancing and Dynamic Scaling Algorithm

Perform Load Balancing Function: The Perform_Load_Balancing function performs load balancing by calculating the total available resources across all data centers and obtaining the average available resources by dividing that total by the number of data centers, as in Algorithm 3.
If the available resources of a data center exceed the average, that data center has an excess of resources. The algorithm calculates the difference between the available and average resources, and then distributes the excess to other data centers with below-average resources. This redistribution continues until the excess has been fully distributed.
Perform Dynamic Scaling Function: The Perform_Dynamic_Scaling function monitors each data center’s available resources and scales them down if the data center is overloaded or up if it is underutilized, as mentioned in Algorithm 3.
When the available resources of a data center are above a high threshold, it is considered overloaded and is scaled down by reducing its available resources by a predefined scale amount. In the opposite case, when the available resources fall below a low threshold, the data center is scaled up by the same amount.
Algorithm 3 Load Balancing and Dynamic Scaling Algorithm
1:  function Perform_Load_Balancing(dc, n)
2:      final_resources ← 0
3:      for i ← 0 to n do
4:          final_resources ← final_resources + dc[i].a_resources
5:      end for
6:      avg_r ← final_resources / n
7:      for i ← 0 to n do
8:          if dc[i].a_resources ≥ avg_r then
9:              d ← dc[i].a_resources − avg_r
10:             dc[i].a_resources ← avg_r
11:             for j ← 0 to n and d > 0 do
12:                 if i ≠ j and dc[j].a_resources < avg_r then
13:                     t_resources ← avg_r − dc[j].a_resources
14:                     if t_resources ≤ d then
15:                         dc[j].a_resources ← dc[j].a_resources + t_resources
16:                         d ← d − t_resources
17:                     else
18:                         dc[j].a_resources ← dc[j].a_resources + d
19:                         d ← 0
20:                     end if
21:                 end if
22:             end for
23:         end if
24:     end for
25: end function
26: function Perform_Dynamic_Scaling(dc, n, ht, lt, tca)
27:     for i ← 0 to n do
28:         if dc[i].a_resources > ht then
29:             dc[i].a_resources ← dc[i].a_resources − tca
30:         else if dc[i].a_resources < lt then
31:             dc[i].a_resources ← dc[i].a_resources + tca
32:         end if
33:     end for
34: end function
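Under the assumption that each data center record exposes an a_resources field, the two functions above can be transcribed into runnable Python roughly as follows; the thresholds and scale amount are illustrative:

```python
def perform_load_balancing(dc):
    """Redistribute resources so over-provisioned data centers
    donate their excess to under-provisioned ones (Algorithm 3)."""
    n = len(dc)
    avg_r = sum(c["a_resources"] for c in dc) / n
    for i in range(n):
        if dc[i]["a_resources"] >= avg_r:
            d = dc[i]["a_resources"] - avg_r   # excess to give away
            dc[i]["a_resources"] = avg_r
            for j in range(n):
                if d <= 0:
                    break
                if i != j and dc[j]["a_resources"] < avg_r:
                    need = avg_r - dc[j]["a_resources"]
                    give = min(need, d)        # fill up to the average
                    dc[j]["a_resources"] += give
                    d -= give

def perform_dynamic_scaling(dc, ht, lt, tca):
    """Scale each data center by tca when outside the [lt, ht] band."""
    for c in dc:
        if c["a_resources"] > ht:
            c["a_resources"] -= tca
        elif c["a_resources"] < lt:
            c["a_resources"] += tca

centers = [{"a_resources": 90.0}, {"a_resources": 30.0}, {"a_resources": 60.0}]
perform_load_balancing(centers)
print([c["a_resources"] for c in centers])  # [60.0, 60.0, 60.0]
```

The `min(need, d)` branch collapses the if/else of the listing into one step; the behavior is the same.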

3.3.4. Evaluation Metrics

During the iteration over each process execution, the code calculates energy consumption, cost, performance, and carbon emission, as shown in Algorithm 4. The energy consumption of the current process execution is computed with a specific calculation method and stored; it is also used to calculate the associated cost, based on the energy cost per unit and any additional operational costs. The performance of the current process execution is calculated using appropriate metrics, such as execution time, throughput, or response time. The energy consumption calculated earlier is then used to estimate the carbon emissions it produces, considering the carbon intensity of the energy sources used. Finally, the algorithm computes the average response time as the performance measure and informs the data information provider.
Algorithm 4 Metrics Measuring Algorithm
1:  function Measure_Metrics_After_Execution(ped)
2:      x ← 0
3:      y ← 0
4:      w ← 0
5:      z ← 0
6:      for each pe ∈ ped do
7:          e ← calculate_energy_consumption(pe)
8:          c ← calculate_cost(e)
9:          p ← calculate_performance(pe)
10:         cer ← calculate_carbon_emission(e)
11:         x ← x + e
12:         y ← y + c
13:         w ← w + p
14:         z ← z + cer
15:     end for
16:     avg_p ← w / ped.length
17:     report_data_information_provider(x, y, avg_p, z)
18: end function
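A runnable sketch of this measurement loop is shown below, with the four metric calculations injected as placeholder functions; the unit prices and intensities are invented for illustration, since the paper leaves the concrete calculation methods abstract:

```python
def measure_metrics_after_execution(ped, energy_fn, cost_fn, perf_fn, carbon_fn):
    """Aggregate energy, cost, performance, and carbon emission over
    a batch of process executions (Algorithm 4). The four metric
    functions are injected so any concrete model can be plugged in."""
    x = y = w = z = 0.0
    for pe in ped:
        e = energy_fn(pe)       # kWh for this execution
        y += cost_fn(e)         # cost derived from energy
        w += perf_fn(pe)        # e.g., response time
        z += carbon_fn(e)       # CO2 derived from energy
        x += e
    avg_p = w / len(ped)
    return x, y, avg_p, z       # would be reported to the data information provider

executions = [{"kwh": 2.0, "resp": 0.5}, {"kwh": 4.0, "resp": 1.5}]
totals = measure_metrics_after_execution(
    executions,
    energy_fn=lambda pe: pe["kwh"],
    cost_fn=lambda e: e * 0.25,   # illustrative $/kWh
    perf_fn=lambda pe: pe["resp"],
    carbon_fn=lambda e: e * 0.5,  # illustrative kgCO2/kWh
)
print(totals)  # (6.0, 1.5, 1.0, 3.0)
```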

4. Optimization Model Methodology

4.1. Energy Consumption

Energy Consumption (EC) in our model encapsulates the intricate relationship between energy consumption, Power Usage Effectiveness (PUE), and Total Energy Input (TEI) in data centers. PUE represents the efficiency of energy usage, i.e., the ratio of the total energy input, including Information Technology (IT) equipment and auxiliary systems, to the energy consumed solely by IT equipment. This equation underscores how energy consumption is profoundly influenced by the interplay of PUE and the various energy inputs, emphasizing the need for efficient resource allocation and optimization strategies to curtail energy consumption, enhance data center performance, and mitigate carbon emissions.
Energy Consumption (EC):
$$EC = PUE \times TEI$$
$$PUE = \frac{\sum_{i=1}^{3} E_i}{cpu_{load}}$$
where $E_1 = E_c$, $E_2 = L_e$, $E_3 = D_l$.
PUE indicates the effectiveness of power usage and is defined by two quantities: the summation of the E_i terms and cpu_load. E_i refers to the sources of energy consumption, encompassing all components: the equipment energy consumption (E_c); the lighting energy (L_e), which is utilized for maintenance, security, operational chores, and other functions, since data center buildings, like any other commercial or industrial location, need adequate illumination and their lighting systems depend on electricity to operate; and the distribution loss (D_l) and other infrastructure overheads. The central processing units in the data center are subject to a total computing load or demand, represented by cpu_load; it can be calculated using computational jobs, processing power, or other pertinent indicators illustrating the burden handled by the servers.
$$TEI = cpu_{load} + cooling_{energy} + distribution_{loss} + backup_{energy}$$
Total Energy Input (TEI) is the combination of several parameters: cpu_load is the total computing load of the processors; cooling_energy represents the energy used to cool down the servers; and distribution_loss refers to the energy losses that occur when power is distributed throughout the system. Distribution losses, which lower the overall efficiency of energy distribution, generally result from resistance in cables, transformers, and other parts of the distribution network. Here, backup_energy refers to the energy produced or stored by backup systems inside the facility; Uninterruptible Power Supply (UPS) units, generators, or energy storage devices (such as batteries) are frequently used as backup energy sources.
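As a worked illustration of the EC, PUE, and TEI relationships, consider the computation below; all figures are invented for the example, not taken from the paper's dataset:

```python
def energy_consumption(e_c, l_e, d_l, cpu_load, cooling, dist_loss, backup):
    """EC = PUE x TEI, with PUE = (Ec + Le + Dl) / cpu_load and
    TEI = cpu_load + cooling + dist_loss + backup."""
    pue = (e_c + l_e + d_l) / cpu_load
    tei = cpu_load + cooling + dist_loss + backup
    return pue * tei

# Illustrative figures in kWh: PUE = 1200/1000 = 1.2, TEI = 1500 kWh.
ec = energy_consumption(e_c=900, l_e=200, d_l=100, cpu_load=1000,
                        cooling=350, dist_loss=100, backup=50)
print(ec)  # 1800.0
```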

4.2. Cost

The cost equation in our model is of paramount importance because it quantifies the financial implications of energy consumption in data centers. This study highlights the critical connection between energy usage and operational expenses, enabling decision-makers to understand the economic impact of energy optimization strategies. By considering the cost factors, organizations can make informed choices that align with their budgetary constraints while pursuing energy-efficient practices. The equation steps involve multiplying the total energy consumption by the cost per kilowatt-hour, resulting in a clear representation of the direct relationship between energy usage and financial expenditure. This insight aids in identifying cost-efficient approaches that optimize energy consumption while maintaining effective task execution and operational performance.
Cost (C):
$$C = \frac{\sum_{i=1}^{3} E_i}{cpu_{load}} \times \left( cpu_{load} + cooling_{energy} + distribution_{loss} + backup_{energy} \right) \times cost_{per\,kWh}$$
In this cost function, all the parameters have already been introduced through the PUE and TEI terms. The cost_per_kWh term refers to the cost per kilowatt-hour (kWh), the price that an energy supplier or utility company charges for the use of one kilowatt-hour of electrical energy; it is the standard way to measure how much power is used and what it costs.

4.3. Performance

Performance (p) evaluation is a crucial aspect of our model and assesses the efficiency of task execution within data centers. It provides insights into how well the system can handle workloads and deliver timely responses to user requests. By calculating performance using the reciprocal of the Average Response Time (ART), we capture the responsiveness of the data center’s operations. This metric allows us to gauge the system’s ability to meet user demands promptly. The performance equation and assessment contribute to optimizing resource allocation, ensuring that tasks are executed efficiently and user satisfaction is maintained. It also aids in making informed decisions about load balancing, task scheduling, and other strategies to enhance the overall performance of the data center environment.
Performance (p):
$$p = \frac{1}{\frac{1}{n}\sum_{i=1}^{n} p_i} = \frac{n}{p_1 + p_2 + \cdots + p_n}$$
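A quick numeric check of this definition, with invented response times:

```python
def performance(response_times):
    """p = 1 / ART, where ART is the mean of the per-task
    response times p_1 ... p_n."""
    art = sum(response_times) / len(response_times)
    return 1.0 / art

# Four tasks with response times in seconds: ART = 2.0 s.
print(performance([1.0, 2.0, 2.0, 3.0]))  # 0.5
```

Lower average response times thus map directly to higher performance values.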

4.4. Carbon Emissions

The carbon emission factor is a critical element in our model, addressing the environmental impact of data center operations. It quantifies the amount of carbon emissions produced per unit of energy consumption. By decomposing the Carbon Emission Factor (C_ef) into the product of the Carbon Intensity of Energy Source (Cies_i) and Energy Demand Intensity (Edi_i), we can capture the emissions associated with both the energy source used and the energy demand of the data center. This decomposition offers a granular understanding of the carbon footprint, allowing us to target specific areas for improvement. It aligns with our model’s goal of reducing carbon emissions by optimizing energy usage, adopting greener energy sources, and following energy-efficient practices. By manipulating the carbon emission factor, we can assess the environmental impact of different strategies and select the most sustainable options for data center operations.
Carbon Emission (CE):
$$C_{ef} = EC \times \sum_{i=1}^{n} Cies_i \times Edi_i$$
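An illustrative computation of the factor, with an invented grid mix; the intensities and demand shares below are examples only:

```python
def carbon_emission_factor(ec, sources):
    """C_ef = EC x sum(Cies_i * Edi_i): each energy source contributes
    its carbon intensity (kgCO2/kWh) weighted by its share of the
    energy demand."""
    return ec * sum(cies * edi for cies, edi in sources)

# 1800 kWh consumed; 70% grid power at 0.5 kgCO2/kWh and
# 30% on-site solar at 0.05 kgCO2/kWh.
sources = [(0.5, 0.7), (0.05, 0.3)]
print(carbon_emission_factor(1800.0, sources))  # ~657.0 kg CO2
```

Shifting demand toward the lower-intensity source directly shrinks the factor, which is the lever the Carbon-Neutral Objective scenario exploits.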

4.5. Comparative Analysis

In the comparative analysis, we have evaluated four scenarios, Balanced Approach, Energy-Efficient Focus, Performance-Driven Strategy, and Carbon-Neutral Objective, against the conventional baseline. Each aspect (Energy Consumption, Cost, Performance, Carbon Emission) is examined in the conventional data center.
When considering the Balanced Approach, the data center operates with moderate cost, energy consumption, and carbon emissions. Performance is moderate, indicating efficient task execution. The Energy-Efficient Focus scenario highlights energy efficiency as both energy consumption and carbon emissions are significantly reduced. Performance remains moderate, demonstrating effective resource allocation and task scheduling.
By implementing a performance-driven strategy, the focus is on maximizing performance, resulting in higher cost, energy consumption, and carbon emissions. The Carbon-Neutral Objective scenario aims for carbon neutrality by reducing both energy consumption and carbon emissions, with a balanced cost and performance. The comparative analysis provides insights into the trade-offs between different scenarios, allowing us to make informed decisions regarding resource allocation and energy optimization strategies in data centers.

4.6. Annotations and Explanatory Notes

In the context of our research, several key factors play pivotal roles in shaping data center sustainability and performance. For example, Power Usage Effectiveness (PUE) serves as a benchmark for energy efficiency, while Total Energy Input (TEI) comprehensively captures energy consumption. Our approach is enhanced by the Energy Dependency Index (EDI), which quantifies renewable energy reliance, and the Carbon Intensity of Energy Source (CIES), which reflects energy sustainability. To gauge holistic efficiency, we introduce Overall Efficiency (OE), a composite metric involving PUE, Average Response Time (ART), and Carbon Emission Factor. In tandem, these parameters drive our task scheduling techniques, optimizing resource allocation for heightened performance, energy efficiency, and overall system productivity.

5. Performance Evaluation

The current case examines the particular difficulties that Bangladeshi data centers face, with a particular emphasis on energy use and carbon emissions. Because of the current setting, in which environmental concerns are not the primary focus of data center operations in the nation, this study uses a synthetic data technique. The goal is to model situations, since direct surveys or interviews would not offer full information owing to Bangladeshi data center authorities’ low focus on energy efficiency and carbon footprint reduction. The illustrative case aims to provide a detailed analysis highlighting potential difficulties and suggesting speculative solutions through the use of synthetic data. In a setting where environmental concerns are not yet prominent, the case presentation, which is based on simulated key stakeholders and events, provides a narrative backdrop for candidate solutions to improve energy efficiency and minimize carbon emissions. The concluding and suggested remedies are supported by synthetic data analysis, offering a basis for further reflection on the alignment of Bangladeshi data centers with global sustainability objectives.
Table 2 provides a meticulous comparative analysis of four specific scenarios within our model, namely Balanced Approach, Energy-Efficient Focus, Performance-Driven Strategy, and Carbon-Neutral Objective. Additionally, a scenario representing a conventional data center is included for reference in Table 2. It is crucial to clarify that all data presented in the table are synthetic, i.e., artificially generated rather than obtained from real-world observations or measurements, and purposefully constructed to simulate realistic scenarios for thorough analysis. Through these synthetically constructed scenarios, we showcase key metrics encompassing energy consumption, cost, performance, and carbon emissions. Our calculations involve the pertinent equations that account for critical factors such as Power Usage Effectiveness (PUE), Average Response Time (ART), and Carbon Emission Factor (CEF). This synthetic analysis provides nuanced insights into the trade-offs and advantages linked with diverse strategies, thereby facilitating well-informed decision making for the adept management of efficient and sustainable data centers.

5.1. Data Generation Methodology

To simulate scenarios for in-depth analysis, we used a statistical data-generation technique that applies randomization within defined bounds to emulate the characteristics of real-world environments. The technique combines numerical simulation with statistical distributions: to replicate the variability seen in real-world events, we used random sampling from well-defined probability distributions, while numerical simulations captured the dynamic characteristics and interactions between the various factors.

For each scenario, the procedure sets factors related to energy use, cost, performance, and carbon emissions, as in Table 2, and then applies controlled randomization to them, guaranteeing a varied yet realistic dataset for our studies. The goal is not to replicate particular real-world examples but to construct a flexible dataset that captures the character of the various scenarios. Using a synthetic data-creation approach lets us study the potential outcomes and trends of different techniques without being bound to existing datasets, offering both a thorough grasp of the ramifications of alternative approaches and flexibility in scenario research.
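A minimal sketch of the controlled-randomization step described above, assuming uniform sampling within scenario-specific bounds; the bounds, metric names, and seed handling here are illustrative, not those used in the study:

```python
import random

# Hypothetical bounds per scenario: each metric is drawn uniformly within a
# scenario-specific range, so the dataset is varied yet stays realistic.
SCENARIO_BOUNDS = {
    "Balanced Approach":      {"energy_kwh": (4800, 5300), "carbon_kg": (1500, 1800)},
    "Energy-Efficient Focus": {"energy_kwh": (3800, 4300), "carbon_kg": (1300, 1550)},
}

def generate_sample(scenario, seed=None):
    """Draw one synthetic data point for a scenario; seeding keeps runs reproducible."""
    rng = random.Random(seed)
    bounds = SCENARIO_BOUNDS[scenario]
    return {metric: rng.uniform(lo, hi) for metric, (lo, hi) in bounds.items()}

sample = generate_sample("Balanced Approach", seed=42)
```

Repeating the call with the same seed reproduces the same sample, which is useful when a scenario run must be audited later.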

5.2. Energy Consumption Comparison

The “Energy-Efficient Focus” scenario in Table 2 and Figure 5 demonstrates the lowest energy consumption among the scenarios, 4025 kWh, reflecting a strong emphasis on energy optimization. The “Performance-Driven Strategy” scenario exhibits higher consumption, 5500 kWh, the trade-off for improved task execution. The “Balanced Approach” scenario consumes an acceptable 5050 kWh, and the “Carbon-Neutral Objective” consumes 4400 kWh, also within an acceptable range. All four scenarios consume less energy than the conventional data center, which at 6500 kWh has the highest consumption of all.

5.3. Cost Comparison

The “Energy-Efficient Focus” scenario in Table 2 and Figure 6 yields the lowest cost, USD 393.75, due to its reduced energy consumption. Conversely, the “Performance-Driven Strategy” scenario incurs a higher cost of USD 775, emphasizing the cost implications of prioritizing performance. The Balanced Approach and Carbon-Neutral Objective scenarios operate at moderate costs of USD 562.50 and USD 450, respectively. The conventional data center costs USD 825, higher than all four scenarios of our approach.

5.4. Performance Comparison

The “Performance-Driven Strategy” scenario in Table 2 and Figure 7 achieves the lowest response time, 0.145 ms, showcasing efficient resource allocation. In contrast, the “Energy-Efficient Focus” scenario has the highest response time among the proposed scenarios, 0.25 ms, highlighting the performance compromise accepted in exchange for energy efficiency. The “Balanced Approach” scenario exhibits a moderate response time of 0.20 ms, and the “Carbon-Neutral Objective” scenario responds in 0.1818 ms. All four proposed scenarios respond faster than the conventional data center, whose response time is 0.27 ms.

5.5. Carbon Emission Comparison

Table 2 and Figure 8 show that the “Carbon-Neutral Objective” scenario excels at minimizing carbon emissions, with 1200 kg of CO2, aligning with its environmental focus. Conversely, the “Performance-Driven Strategy” scenario results in the highest emissions among the proposed scenarios, 2250 kg, reflecting the sustainability cost of prioritizing performance. The Balanced Approach and Energy-Efficient Focus scenarios produce 1650 kg and 1412.5 kg of CO2, respectively. The conventional data center produces 2400 kg of CO2, far more than any scenario in our approach.

5.6. Narrative Interpretation and Summary

The analysis of these scenarios in Table 2 highlights the essential trade-offs between energy consumption, cost, performance, and carbon emissions. The “Energy-Efficient Focus” scenario is the superior choice for eco-conscious data centers because it excels in both energy efficiency and economic effectiveness. The “Performance-Driven Strategy” scenario, in contrast, gives optimal job execution and performance a higher priority than energy efficiency. The “Carbon-Neutral Objective” scenario effectively reduces carbon emissions, advancing sustainability, while the “Balanced Approach” strikes a middle ground, maintaining reasonable levels across all four areas. This research therefore emphasizes the value of an energy optimization model in directing data center operations, matching decisions to particular goals within a strategic approach to data center management.
The suggested methodology consists of four distinct focal strategies customized to meet particular system needs. Although the synthetic data do not replicate real-world datasets exactly, quantitative analysis of them produced important insights for the goals of this study. Using the values in Table 2 and contrasting our approach with conventional systems, we determined the percentage reductions in resource usage, energy consumption, and carbon emissions. The strategy demonstrated notable progress on the two central concerns of this study, energy use and carbon emissions, as presented in Table 3. The Balanced Approach yielded a 22.31% decrease in energy usage and a 31.25% reduction in carbon emissions. The Energy-Efficient Focus strategy demonstrated impressive drops of 38.08% in energy use and 41.15% in carbon emissions. The Performance-Driven Strategy reduced energy usage by 15.38% and carbon emissions by 6.25%. Finally, the Carbon-Neutral Objective achieved a noteworthy 50% decrease in carbon emissions alongside a 32.31% reduction in energy use.
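The percentage reductions reported in Table 3 follow directly from the Table 2 values via reduction = (conventional − scenario) / conventional; the snippet below reproduces that arithmetic:

```python
# Reproducing the percentage-reduction arithmetic behind Table 3 from the
# synthetic Table 2 values (energy in kWh, carbon in kg CO2).
TABLE_2 = {
    "Balanced Approach":           (5050, 1650),
    "Energy-Efficient Focus":      (4025, 1412.5),
    "Performance-Driven Strategy": (5500, 2250),
    "Carbon-Neutral Objective":    (4400, 1200),
}
CONVENTIONAL = (6500, 2400)  # conventional-data-center baseline

def reductions(scenario):
    """Return (energy reduction %, carbon reduction %) vs. the baseline."""
    energy, carbon = TABLE_2[scenario]
    e_ref, c_ref = CONVENTIONAL
    return (round(100 * (e_ref - energy) / e_ref, 2),
            round(100 * (c_ref - carbon) / c_ref, 2))

reductions("Balanced Approach")  # -> (22.31, 31.25), matching Table 3
```

The same call on the other scenarios recovers the remaining Table 3 rows (38.08/41.15, 15.38/6.25, and 32.31/50).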

5.7. Decision

In the Balanced Approach scenario, the focus falls on all necessary aspects: energy efficiency, cost-effectiveness, performance, and carbon emission rate, ensuring that all four provide acceptable service. The Energy-Efficient Focus scenario places a strong emphasis on reducing energy consumption while maintaining acceptable performance levels; the primary goal is to minimize the data center’s energy usage, leading to cost savings and lower carbon emissions. In the Performance-Driven Strategy scenario, the primary concern is maximizing the performance and responsiveness of the data center. Energy efficiency and cost may take a backseat to ensure that applications and tasks run at peak levels, making this approach suitable where performance is critical, such as high-performance computing environments. Given the environmental stakes of carbon emissions today, the Carbon-Neutral Objective scenario minimizes the carbon footprint and environmental impact of the data center, reducing the emissions associated with energy consumption through measures such as renewable energy sources, optimized cooling systems, and energy-efficient hardware.
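One way to make the scenario choice above concrete is a simple weighted scoring of the Table 2 metrics, where an operator's priorities become weights. This helper is a hypothetical illustration of such a decision rule, not part of the GEECO model itself:

```python
# Hypothetical decision helper: normalize each Table 2 metric by its maximum
# (lower is better for all four) and pick the scenario with the lowest
# weighted score. Weights encode operator priorities.
SCENARIOS = {  # energy kWh, cost USD, response time ms, carbon kg CO2
    "Balanced Approach":           (5050, 562.50, 0.20,   1650),
    "Energy-Efficient Focus":      (4025, 393.75, 0.25,   1412.5),
    "Performance-Driven Strategy": (5500, 775.00, 0.145,  2250),
    "Carbon-Neutral Objective":    (4400, 450.00, 0.1818, 1200),
}

def best_scenario(weights):
    """weights: (energy, cost, performance, carbon), summing to 1."""
    maxima = [max(vals[i] for vals in SCENARIOS.values()) for i in range(4)]
    def score(vals):
        return sum(w * v / m for w, v, m in zip(weights, vals, maxima))
    return min(SCENARIOS, key=lambda s: score(SCENARIOS[s]))

best_scenario((0.05, 0.05, 0.85, 0.05))  # -> 'Performance-Driven Strategy'
best_scenario((0.05, 0.05, 0.05, 0.85))  # -> 'Carbon-Neutral Objective'
```

With a heavy weight on a single axis, the rule recovers the intuitively matching scenario; more balanced weights favor the Balanced Approach or Carbon-Neutral Objective, mirroring the trade-offs discussed above.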
It is impossible to overstate the importance of adopting sustainability in the areas of energy efficiency and carbon footprint reduction. The widespread waste of energy and the unrestrained generation of carbon emissions have put us on a dangerous path. The data show the stark truth: excessive energy use has not only strained our limited resources but also contributed to a startling rise in carbon emissions, accelerating climate change to unprecedented levels. Global energy-related CO2 emissions grew by 0.9% in 2022, reaching a record high of approximately 36.8 billion metric tons, a sobering indicator of the consequences of our energy-intensive lifestyles. Continuing down this path could lead to irreversible environmental damage that endangers global ecosystems, weather patterns, and human well-being. Adopting sustainability is therefore not only necessary but imperative. By focusing on energy efficiency, implementing renewable energy sources, and reducing carbon emissions, we can chart a more sustainable route for future generations, protect the fragile ecological balance of our planet, and lessen the effects of a looming climate crisis.
Within the scope of sustainable development, energy efficiency is a crucial pillar. It embodies a strategy for producing the same output with less energy, yielding significant economic gains and clear environmental advantages. This fundamental tenet, that less energy input can sustain the same production, carries ever greater weight as energy efficiency takes the lead in advancing a sustainable global energy paradigm.
We structure our strategy around a design that is sustainable and efficient, with energy optimization at its core. As the model advances, it pairs energy optimization with the complementary goal of reducing carbon footprints, forging a united front against environmental degradation and resource depletion and marking a crucial step toward a more sustainable presence on our planet.
Our concept thus ventures into sustainable development with energy efficiency as its compass, coordinating carbon emission reduction and energy optimization simultaneously. By diligently addressing these linked aspects, our methodology integrates pragmatic economic considerations with environmentally conscious action. Beyond promoting financial savings, this convergence of objectives advances the larger cause of protecting our planet’s natural resources.

6. Limitations and Future Work

6.1. Implementation Challenges and Overcome Methods

6.1.1. Implementation Challenges

Navigating the complexities of task dependencies in a dynamic context is difficult, as it requires effective dependency verification and delay-free resolution techniques. Developing an adaptive scheduler that optimizes both efficiency and speed is likewise hard once dependencies, resource availability, and prioritization are taken into account. Estimating metrics such as energy consumption, cost, and performance adds another degree of complexity, since it requires predictive models that balance accuracy with real-time responsiveness. Maintaining a real-time monitoring system without sacrificing performance is challenging, particularly in dynamic circumstances where continuous monitoring can strain resources. When resources are inadequate, the problem becomes harder still, calling for efficient load balancing and dynamic scaling to manage changing workloads and improve job allocation. A fine balance must be struck between task execution and resource optimization, matching task priority with resource limits to avoid under- or over-using resources. Computing numerous interrelated metrics at once, such as energy, cost, carbon emissions, and performance, introduces computational complexity that demands efficient methodologies. A final problem is balancing the provision of real-time data against the preservation of historical data, which requires a well-organized and effective data management system that keeps real-time data easily accessible despite the volume of historical data that must be retained.

6.1.2. Overcome Methodology

Overcoming these obstacles requires several strategic initiatives. First, a thorough dependency graph must be developed that updates dynamically as activities complete, with efficient dependency checks; unexpected dependencies are handled through fallback mechanisms and alternative routes, and resolution techniques are kept up-to-date and tuned for timeliness. Heuristic methods are needed to build an intelligent task scheduler that accounts for dependencies, priority levels, and resource limits. Dynamic scheduling enables optimal resource use, continually improving based on performance feedback and adjusting in real time to changing situations. Combining historical data with real-time inputs is essential for predictive modeling; online machine-learning algorithms are continuously updated with the most recent data and performance feedback to ensure accuracy. To reduce overhead while providing timely updates, a distributed monitoring architecture with strategically placed monitoring agents is used. For equitable resource use, anticipatory dynamic scaling techniques and intelligent load-balancing algorithms that adjust quickly and effectively to changing workloads are essential. Predictive analytics optimizes resource allocation, while execution approaches that adapt dynamically to resource availability prioritize jobs wisely, taking dependencies and criticality into account. Parallelized calculation divides complicated computations into concurrent jobs, using optimal methodologies for fast and precise computing. Finally, a tiered storage design solves the data storage problem by keeping real-time data in high-performance tiers and archiving historical data in a scalable way that enables easy access and retrieval.
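The dependency-graph scheduling idea described above can be sketched as a priority-aware topological dispatch: a task becomes ready once all of its prerequisites finish, and ready tasks are dispatched in priority order. This is a hypothetical illustration of the concept, not the paper's implementation:

```python
import heapq

def schedule(tasks, deps, priority):
    """tasks: iterable of task ids; deps: {task: set of prerequisite tasks};
    priority: {task: number, lower runs first}. Returns an execution order."""
    remaining = {t: set(deps.get(t, ())) for t in tasks}
    # Tasks with no prerequisites are ready immediately, ordered by priority.
    ready = [(priority[t], t) for t, d in remaining.items() if not d]
    heapq.heapify(ready)
    order = []
    while ready:
        _, task = heapq.heappop(ready)
        order.append(task)
        # Completing a task may unblock its dependents.
        for t, d in remaining.items():
            if task in d:
                d.remove(task)
                if not d:
                    heapq.heappush(ready, (priority[t], t))
    if len(order) != len(remaining):
        raise ValueError("cyclic or unresolved dependency detected")
    return order

# Example: B and C both depend on A; C has higher priority (lower number).
schedule(["A", "B", "C"], {"B": {"A"}, "C": {"A"}}, {"A": 0, "B": 2, "C": 1})
# -> ['A', 'C', 'B']
```

The cycle check at the end corresponds to the "unexpected dependencies" case above: in a production scheduler it would trigger a fallback route rather than an exception.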
There are still interesting areas for further research and development in the effort to increase the contributions of this model to the disciplines of energy efficiency, carbon footprint reduction, and balanced performance improvement. Our upcoming efforts, which aim to create a thorough and reliable framework that effortlessly harmonizes these objectives, are focused on the interaction of these fundamental elements.

6.2. Holistic Parameter Integration

Moving forward, our study will focus on developing a single model that holistically integrates energy efficiency, carbon footprint reduction, and balanced performance enhancement. These three factors are intimately intertwined, and improvements in one area may affect the others. We aim to develop a model that optimizes energy use and cuts carbon emissions while preserving, or even improving, the system’s performance metrics. Achieving this will require complex algorithms and novel approaches, including a deep dive into dynamic resource allocation, predictive analysis, and intelligent load-balancing techniques to accomplish all three goals concurrently.

6.3. Advanced Algorithm Development

The creation of sophisticated algorithms will play a vital role in the success of our model as the study progresses. The thorough design of algorithms that automatically react to changes in workload and resource availability in real-time, dynamically optimize energy usage, reduce carbon emissions, and enhance system performance will be the focus of our future work. This intricate algorithmic orchestration will provide the groundwork for a model that not only satisfies the requirements of modern computing but also acts as a standard for future efforts in environmentally friendly and effective technology.

6.4. Real-World Implementation and Practical Validation

Validating and applying our developed model in practical contexts will be a key component of our future efforts. Collaborations with industry leaders and data centers will be crucial throughout this phase, providing insight into the real-world issues and opportunities that our model can address. Thorough testing, both in controlled environments and in operational data centers, will produce empirical evidence of the effectiveness of our methodology in delivering energy savings, carbon footprint reduction, and improved performance. This validation will strengthen the credibility of our work and open the door to wider industry adoption, spurring the shift to greener and more sustainable data center operations.
We are dedicated to creating a strong base that will usher in a new era of environmentally responsible computing by leveraging cutting-edge tools, methods, and defining guidelines for both technical advancement and environmental responsibility.

7. Conclusions

In this study, we have proposed an energy-efficient and carbon-reducing system model based on task scheduling. The primary objectives of this study were to achieve energy efficiency and reduce the carbon footprint of data center operations. Through an exhaustive analysis of various scenarios, we have effectively showcased the significant potential for improving energy efficiency while concurrently mitigating environmental impact. Notably, in the GEECO model, the ‘Balanced Approach’ scenario represents a harmonious equilibrium among energy consumption, cost-effectiveness, performance, and carbon emissions. This approach provides a robust framework for organizations seeking comprehensive optimization.
Moreover, the ‘Energy-Efficient Focus’ and ‘Carbon-Neutral Objective’ scenarios delve deeper into energy-efficient practices, demonstrating this model’s ability to substantially reduce energy consumption and carbon emissions. These findings align seamlessly with the objectives of sustainable and environmentally conscious data center management, making our model an invaluable tool for organizations prioritizing sustainability. It empowers data center managers to make well-informed decisions, optimizing not only energy usage and costs but also significantly reducing their carbon footprint.
Our model continually monitors energy consumption to eliminate inefficiencies, evaluates performance to ensure optimal responsiveness, and incorporates cost estimates to guide economically sound decisions. Furthermore, the meticulous tracking of carbon emission rates aligns with our commitment to ecological awareness, uniting sustainability with operational success. In conclusion, our approach responds to the increasing demands for enhanced energy efficiency and reduced carbon footprints within data center infrastructures. By strategically integrating energy-efficient practices, predictive data, and dynamic resource allocation, our concept heralds a paradigm shift in data center operations.

Author Contributions

Conceptualization, S.M., F.B.F., D.R. and M.M.K.E.; methodology, S.M., F.B.F., D.R. and M.M.K.E.; software, M.M.K.E.; validation, M.M.I.; formal Analysis, S.M., F.B.F., D.R. and M.M.K.E.; investigation, S.M., F.B.F., D.R. and M.M.K.E.; resources, S.M., F.B.F., D.R. and M.M.K.E.; data curation, M.M.K.E.; writing—original draft, S.M., F.B.F., D.R. and M.M.K.E.; writing—review & editing, S.M., F.B.F., D.R., M.M.K.E. and M.M.I.; visualization, S.M., F.B.F., D.R. and M.M.K.E.; supervision, M.M.I.; project administration, M.M.I.; funding acquisition, M.M.I. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Institute for Advanced Research Publications, United International University (Ref. No. IAR-2023-Pub-041).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study.

References

  1. Meijer, G.I. Cooling energy-hungry data centers. Science 2010, 328, 318–319.
  2. Ebrahimi, K.; Jones, G.F.; Fleischer, A.S. A review of data center cooling technology, operating conditions and the corresponding low-grade waste heat recovery opportunities. Renew. Sustain. Energy Rev. 2014, 31, 622–638.
  3. Ahmad, R.W.; Gani, A.; Hamid, S.H.A.; Xia, F.; Shiraz, M. A review on mobile application energy profiling: Taxonomy, state-of-the-art, and open research issues. J. Netw. Comput. Appl. 2015, 58, 42–59.
  4. Masanet, E.; Shehabi, A.; Koomey, J. Characteristics of low-carbon data centres. Nat. Clim. Chang. 2013, 3, 627–630.
  5. Brown, R.E.; Brown, R.; Masanet, E.; Nordman, B.; Tschudi, B.; Shehabi, A.; Stanley, J.; Koomey, J.; Sartor, D.; Chan, P.; et al. Report to Congress on Server and Data Center Energy Efficiency: Public Law 109-431; Technical Report; Lawrence Berkeley National Lab. (LBNL): Berkeley, CA, USA, 2007.
  6. Shuja, J.; Gani, A.; Shamshirband, S.; Ahmad, R.W.; Bilal, K. Sustainable cloud data centers: A survey of enabling techniques and technologies. Renew. Sustain. Energy Rev. 2016, 62, 195–214.
  7. Katz, R.H. Tech titans building boom. IEEE Spectr. 2009, 46, 40–54.
  8. Ghamkhari, M.; Mohsenian-Rad, H. Energy and performance management of green data centers: A profit maximization approach. IEEE Trans. Smart Grid 2013, 4, 1017–1025.
  9. Chen, H.; Coskun, A.K.; Caramanis, M.C. Real-time power control of data centers for providing regulation service. In Proceedings of the 52nd IEEE Conference on Decision and Control, Firenze, Italy, 10–13 December 2013; pp. 4314–4321.
  10. Juarez, F.; Ejarque, J.; Badia, R.M. Dynamic energy-aware scheduling for parallel task-based application in cloud computing. Future Gener. Comput. Syst. 2018, 78, 257–271.
  11. Buyya, R.; Beloglazov, A.; Abawajy, J. Energy-efficient management of data center resources for cloud computing: A vision, architectural elements, and open challenges. arXiv 2010, arXiv:1006.0308.
  12. Mata-Toledo, R.; Gupta, P. Green data center: How green can we perform. J. Technol. Res. Acad. Bus. Res. Inst. 2010, 2, 1–8.
  13. Lannelongue, L.; Grealey, J.; Inouye, M. Green algorithms: Quantifying the carbon footprint of computation. Adv. Sci. 2021, 8, 2100707.
  14. Strubell, E.; Ganesh, A.; McCallum, A. Energy and policy considerations for deep learning in NLP. arXiv 2019, arXiv:1906.02243.
  15. Stevens, A.R.; Bellstedt, S.; Elahi, P.J.; Murphy, M.T. The imperative to reduce carbon emissions in astronomy. Nat. Astron. 2020, 4, 843–851.
  16. Building Energy Codes Working Group. International Review of Energy Efficiency in Data Centres for IEA EBC Building Energy Codes Working Group; Ballarat Consulting for the Building Energy Codes Working Group (BECWG): Paris, France, 2022.
  17. Koot, M.; Wijnhoven, F. Usage impact on data center electricity needs: A system dynamic forecasting model. Appl. Energy 2021, 291, 116798.
  18. Katal, A.; Dahiya, S.; Choudhury, T. Energy efficiency in cloud computing data centers: A survey on software technologies. Clust. Comput. 2023, 26, 1845–1875.
  19. Xiao, P.; Ni, Z.; Liu, D.; Hu, Z. Improving the energy-efficiency of virtual machines by I/O compensation. J. Supercomput. 2021, 77, 11135–11159.
  20. He, H.; Shen, H. Minimizing the operation cost of distributed green data centers with energy storage under carbon capping. J. Comput. Syst. Sci. 2021, 118, 28–52.
  21. Huang, P.; Copertaro, B.; Zhang, X.; Shen, J.; Löfgren, I.; Rönnelid, M.; Fahlen, J.; Andersson, D.; Svanfeldt, M. A review of data centers as prosumers in district energy systems: Renewable energy integration and waste heat reuse for district heating. Appl. Energy 2020, 258, 114109.
  22. Hu, X.; Li, P.; Wang, K.; Sun, Y.; Zeng, D.; Wang, X.; Guo, S. Joint workload scheduling and energy management for green data centers powered by fuel cells. IEEE Trans. Green Commun. Netw. 2019, 3, 397–406.
  23. DiCaprio, T. Becoming Carbon Neutral—How Microsoft Is Striving to Become Leaner, Greener, and More Accountable; Microsoft Corporation: Albuquerque, NM, USA, 2012.
  24. Gowri, K. Desktop tools for sustainable design. ASHRAE J. 2005, 47, 42–46.
  25. Huang, Y.; Xu, H.; Gao, H.; Ma, X.; Hussain, W. SSUR: An approach to optimizing virtual machine allocation strategy based on user requirements for cloud data center. IEEE Trans. Green Commun. Netw. 2021, 5, 670–681.
  26. Li, D.; Shang, Y.; Chen, C. Software defined green data center network with exclusive routing. In Proceedings of the IEEE INFOCOM 2014—IEEE Conference on Computer Communications, Toronto, ON, Canada, 27 April–2 May 2014; pp. 1743–1751.
  27. Peoples, C.; Parr, G.; McClean, S.; Scotney, B.; Morrow, P. Performance evaluation of green data centre management supporting sustainable growth of the internet of things. Simul. Model. Pract. Theory 2013, 34, 221–242.
  28. Kaur, K.; Garg, S.; Aujla, G.S.; Kumar, N.; Zomaya, A.Y. A multi-objective optimization scheme for job scheduling in sustainable cloud data centers. IEEE Trans. Cloud Comput. 2019, 10, 172–186.
  29. Moud, H.I.; Hariharan, J.; Hakim, H.; Kibert, C.; Flood, I. Sustainability assessment of data centers beyond LEED. In Proceedings of the 2020 IEEE Green Technologies Conference (GreenTech), Oklahoma City, OK, USA, 1–3 April 2020; pp. 62–64.
  30. Patel, Y.S.; Mehrotra, N.; Soner, S. Green cloud computing: A review on Green IT areas for cloud computing environment. In Proceedings of the 2015 International Conference on Futuristic Trends on Computational Analysis and Knowledge Management (ABLAZE), Noida, India, 25–27 February 2015; pp. 327–332.
  31. Wu, Y.; Tornatore, M.; Ferdousi, S.; Mukherjee, B. Green data center placement in optical cloud networks. IEEE Trans. Green Commun. Netw. 2017, 1, 347–357.
  32. Gai, K.; Qiu, M.; Zhao, H.; Tao, L.; Zong, Z. Dynamic energy-aware cloudlet-based mobile cloud computing model for green computing. J. Netw. Comput. Appl. 2016, 59, 46–54.
  33. Wang, S.; Sun, Y.; Shi, X.; Zhu, S.; Ma, L.T.; Zhang, J.; Zheng, Y.; Liu, J. Full scaling automation for sustainable development of green data centers. arXiv 2023, arXiv:2305.00706.
  34. Bird, S.; Achuthan, A.; Maatallah, O.A.; Hu, W.; Janoyan, K.; Kwasinski, A.; Matthews, J.; Mayhew, D.; Owen, J.; Marzocca, P. Distributed (green) data centers: A new concept for energy, computing, and telecommunications. Energy Sustain. Dev. 2014, 19, 83–91.
  35. Haddad, M.; Da Costa, G.; Nicod, J.M.; Péra, M.C.; Pierson, J.M.; Rehn-Sonigo, V.; Stolf, P.; Varnier, C. Combined IT and power supply infrastructure sizing for standalone green data centers. Sustain. Comput. Inform. Syst. 2021, 30, 100505.
  36. Rahmani, R.; Moser, I.; Cricenti, A.L. Modelling and optimisation of microgrid configuration for green data centres: A metaheuristic approach. Future Gener. Comput. Syst. 2020, 108, 742–750.
  37. Wan, J.; Zhou, J.; Gui, X. Sustainability analysis of green data centers with CCHP and waste heat reuse systems. IEEE Trans. Sustain. Comput. 2020, 6, 155–167.
  38. Bhattacharya, T.; Qin, X. Modeling energy efficiency of future green data centers. In Proceedings of the 2020 11th International Green and Sustainable Computing Workshops (IGSC), Pullman, WA, USA, 19–22 October 2020; pp. 1–3.
  39. Vatsal, S.; Agarwal, S. Energy efficiency metrics for safeguarding the performance of data centre communication systems by green cloud solutions. In Proceedings of the 2019 International Conference on Cutting-edge Technologies in Engineering (ICon-CuTE), Uttar Pradesh, India, 14–16 November 2019; pp. 136–140.
  40. Song, Z.; Zhang, X.; Eriksson, C. Data center energy and cost saving evaluation. Energy Procedia 2015, 75, 1255–1260.
  41. Wu, R.; Chen, W.; Li, L.; Zhao, K. Energy efficient optimization method for green data center based on cloud computing. In Proceedings of the 2015 4th National Conference on Electrical, Electronics and Computer Engineering, Xi’an, China, 12–13 December 2015; pp. 1494–1498.
  42. Pang, S.; Zhang, W.; Ma, T.; Gao, Q. Ant colony optimization algorithm to dynamic energy management in cloud data center. Math. Probl. Eng. 2017, 2017, 4810514.
  43. Kaur, M.; Singh, P. Energy efficient green cloud: Underlying structure. In Proceedings of the 2013 International Conference on Energy Efficient Technologies for Sustainability, Nagercoil, India, 10–12 April 2013; pp. 207–212.
  44. Kansal, N.J.; Chana, I. Cloud load balancing techniques: A step towards green computing. IJCSI Int. J. Comput. Sci. Issues 2012, 9, 238–246.
Figure 1. Proposed model architecture.
Figure 2. Model architecture visual representation.
Figure 3. Proposed work flowchart.
Figure 4. Dynamically adjusted workload distribution and task assignment diagram.
Figure 5. Energy consumption comparison graph.
Figure 6. Cost comparison graph.
Figure 7. Performance comparison graph.
Figure 8. Carbon emission comparison graph.
Table 1. Gap analysis.

| Sl No. | Author | Approach | Energy Efficiency | Cost Efficiency | Performance | Carbon Reduction | Energy Model |
|---|---|---|---|---|---|---|---|
| 1 | Runze Wu et al. [41] | Energy profit maximization | | | | | |
| 2 | Keke Gai et al. [32] | DECM | | | | | |
| 3 | Stephen Bird et al. [34] | DGDC | | | | | |
| 4 | R. Rahmani et al. [36] | Green microgrid system | | | | | |
| 5 | Jianxiong Wan et al. [37] | CCHP and WHR | | | | | |
| 6 | Shanchen Pang et al. [42] | LET-ACO | | | | | |
| 7 | Manjot Kaur et al. [43] | Virtualized CPU model | | | | | |
| 8 | Nidhi Jain Kansal et al. [44] | Load balancing approach | | | | | |
| 9 | Our Approach | GEECO | | | | | |
Table 2. Performance evaluation values.

| Scenario | Energy Consumption (kWh) | Cost (USD) | Performance (ART, ms) | Carbon Emission (kg CO2) |
|---|---|---|---|---|
| Balanced Approach | 5050 | 562.50 | 0.20 | 1650 |
| Energy-Efficient Focus | 4025 | 393.75 | 0.25 | 1412.5 |
| Performance-Driven Strategy | 5500 | 775 | 0.145 | 2250 |
| Carbon-Neutral Objective | 4400 | 450 | 0.1818 | 1200 |
| Conventional | 6500 | 825 | 0.27 | 2400 |
Table 3. Performance improvement table.

| Scenario | Energy Consumption Reduction | Carbon Emission Reduction |
|---|---|---|
| Balanced Approach | 22.31% | 31.25% |
| Energy-Efficient Focus | 38.08% | 41.15% |
| Performance-Driven Strategy | 15.38% | 6.25% |
| Carbon-Neutral Objective | 32.31% | 50% |