Article

An Advanced Job Scheduling Algorithmic Architecture to Reduce Energy Consumption and CO2 Emissions in Multi-Cloud

1
School of Computer Applications, Lovely Professional University, Phagwara 144411, India
2
Department of Computer Science, Baba Farid College, Bathinda 151001, India
3
School of Computer Science and Engineering, Lovely Professional University, Phagwara 144411, India
4
Department of Computer Science, Akal University, Talwandi Sabo, Bathinda 151302, India
5
Department of Computer Science and Engineering, Uttaranchal University, Dehradun 248007, India
6
School of Electronic and Information Engineering, Kunsan National University, Gunsan 54150, Republic of Korea
7
Department of Artificial Intelligence and Big Data, Woosong University, Daejeon 34606, Republic of Korea
8
School of Computer, Information, and Communication Engineering, Kunsan National University, Gunsan 54150, Republic of Korea
*
Authors to whom correspondence should be addressed.
Electronics 2023, 12(8), 1810; https://doi.org/10.3390/electronics12081810
Submission received: 16 February 2023 / Revised: 26 March 2023 / Accepted: 4 April 2023 / Published: 11 April 2023
(This article belongs to the Section Computer Science & Engineering)

Abstract

Cloud computing is one of the emerging fields of the modern world. As the volume of job requests has grown, job schedulers have been updated incrementally. The introduction of machine learning into cloud scheduling has had a significant impact on cost reduction in terms of energy consumption and makespan. This article presents a two-phase scheduling architecture for cloud computing in which Physical Machines (PMs) are the main working units and users are assigned to PMs according to each PM's resource capabilities. In the preliminary phase, users are allocated to PMs at minimum cost. A clustered approach combining k-means and Q-learning then migrates users from one PM to another based on Quality of Service (QoS) parameters. The proposed work also incorporates CO2 emissions as a major evaluation parameter alongside energy consumption. To support resource sharing, a multi-cloud deployment model is used. The proposed work is evaluated against recently proposed state-of-the-art techniques on the basis of QoS parameters and proves to be more efficient.

1. Introduction

Cloud Computing (CC) has been one of the fastest-emerging areas of the computation world, where users are associated with a cloud platform to obtain services from it. On any computation platform, requests are viewed as jobs and are processed in the data centers of the CC. A CC data center is a physically measurable component that uses computational elements to process user requests. Physical Machines (PMs) are the main handlers of the jobs and receive instructions from the data center. For example, Netflix is a cloud platform that provides a significant number of movies to its associated and authorized users, and it uses hardware resources to store and process the computation of the jobs. A data center left in working mode for an hour can consume as much energy as 25,000 households. Hence, energy consumption and carbon emissions are among the serious issues in CC architecture. In general, a CC architecture is made up of three layers, namely Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). The data center handles the jobs and assigns them at the IaaS layer through the SaaS layer. In the early days of its development, CC architecture was designed only to speed up computation, but due to the increasing load and the energy consumed by cloud data centers, energy efficiency has become a vital issue. Jobs are supplied to the data center with a deadline constraint, and hence several algorithmic architectures base their operation on execution time [1]. For example, Minimum Execution Time (MET) selects the PM based on the time contracts associated with the job. For connected jobs, i.e., when a job depends on the outcome of another job, it also becomes vital to execute the dependent jobs before the primary job.
Based on the connected-job models, the CyberShake algorithm was introduced in early 2013 and has received updates over time [2]. The CyberShake algorithm uses a dependency map based on the Heterogeneous Earliest Finish Time (HEFT) heuristic to check the overall cost of execution, and it utilizes Statistical Machine Learning (S-ML) to evaluate the total cost of a job over different physical machines or data centers. Machine learning has spread into various sectors of computation, including cloud computing and big data analytics [3]. Machine learning works on the behavior pattern of the entire process and requires a bulk amount of data to learn from and reach a conclusion. The ordinal measures of machine learning, Minimum Execution Time (MET), and the CyberShake algorithm are provided below.
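As background on how HEFT ranks dependent jobs: the upward rank of a task is its average execution cost plus the maximum, over its successors, of communication cost plus the successor's rank, and tasks are then scheduled in decreasing rank order so that every job's dependencies come first. The sketch below illustrates this on a toy diamond-shaped workflow; the task and communication costs are invented for illustration and are not taken from CyberShake.

```python
# Minimal sketch of HEFT's upward-rank computation on a DAG of dependent
# jobs. Task costs and edge (communication) costs are illustrative values.

def upward_rank(task, succ, w, c, memo=None):
    """rank_u(t) = w(t) + max over successors s of (c(t, s) + rank_u(s))."""
    if memo is None:
        memo = {}
    if task in memo:
        return memo[task]
    followers = succ.get(task, [])
    tail = max((c[(task, s)] + upward_rank(s, succ, w, c, memo)
                for s in followers), default=0)
    memo[task] = w[task] + tail
    return memo[task]

# Toy workflow: A feeds B and C, both feed D (a diamond dependency).
succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
w = {"A": 5, "B": 3, "C": 4, "D": 2}                 # average execution costs
c = {("A", "B"): 1, ("A", "C"): 2, ("B", "D"): 1, ("C", "D"): 1}

order = sorted(w, key=lambda t: upward_rank(t, succ, w, c), reverse=True)
print(order)  # jobs are scheduled before the jobs that depend on them
```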

1.1. MET

MET is one of the simplest algorithm architectures for the scheduling of jobs in a cloud server. MET looks for the minimum execution time based on the total number of buffers in the buffer list and the total number of jobs in each buffer. Based on the computational complexity of the job, the total time to execute the job is calculated. Each PM submits its cost to the scheduler, and the PM with the minimum cost is selected for the allocation. One of the major advantages of this algorithm architecture is its low computational complexity, but at the same time, it does not create or utilize a history of allocations. This characteristic bars the architecture from expanding in terms of elasticity [4].
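The MET selection rule described above can be sketched in a few lines; the PM names and cost values below are hypothetical.

```python
# Minimal sketch of Minimum Execution Time (MET) selection, assuming each
# PM reports a cost (its expected execution time) for the job.

def met_select(pm_costs):
    """Pick the PM offering the minimum execution time for a job."""
    return min(pm_costs, key=pm_costs.get)

# Each PM submits its cost to the scheduler; the cheapest one is chosen.
costs = {"PM1": 12.5, "PM2": 7.8, "PM3": 9.1}
print(met_select(costs))  # PM2
```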

1.2. ML in Cloud Computing

The cloud computing industry has been transformed by the emergence of the multi-cloud as a successor to single-cloud CC architectures. Under this idea, the performance of different cloud designs can be improved by having multiple cloud components pool their resources together.
In multi-cloud computing, where resources are frequently underutilized due to a lack of effective resource allocation methodologies, efficient resource allocation is a crucial challenge. The current methods have difficulty optimizing both available and utilized cloud resources, which can result in costly performance inefficiencies for end users. The purpose of this work is to develop and apply a novel learning pattern that utilizes both ideal and active machines to optimize resource use.
To this end, the major objectives are outlined below.
  • To begin, a labeling architecture will be created so that the system may be trained to recognize labels of interest.
  • Second, a Q-learning-based pattern of behavior analysis is used to help make decisions at any stage of the work distribution process.
  • Finally, the suggested method will be investigated for Quality of Service (QoS) characteristics and compared with state-of-the-art algorithms to demonstrate its superior performance.
When applied to multi-cloud computing environments, the proposed learning pattern has the potential to greatly increase the efficiency with which computer resources are allocated. Compared to standard practices, the learning pattern can save time, cost, and effort by making use of both unused and underutilized cloud computing resources. The overarching goal of this effort is to present a more efficient and effective method for allocating multi-cloud computing resources that will be cost-effective for end users and boost overall performance.
The rest of the paper is organized in the following manner. Section 2 contains a literature survey covering job allocation and handling architectures. It also covers modeling improvements through machine learning and the evolution of machine learning in the field of multi-cloud architecture systems. Section 3 describes the proposed system model along with the implementation design. The evaluation of the results and the comparison with other state-of-the-art techniques are demonstrated in Section 4, and the paper is concluded in Section 5.

2. Related Works

In recent years, cloud computing’s popularity has skyrocketed due to its capacity to supply customers with versatile and inexpensive computing resources. Multi-cloud computing, in which many cloud providers are employed to host applications, increases the potential benefits of cloud computing. The benefits include increased scalability, dependability, and cost-effectiveness. However, there are also substantial obstacles to overcome when it comes to optimizing resource allocation in multi-cloud computing. There is a need for more advanced methodologies that can optimize resource allocation across several clouds in a way that is more energy-efficient, cost-effective, and environmentally friendly than traditional approaches to multi-cloud computing resource allocation.
While several researchers have developed models for allocating resources in multi-cloud environments, these approaches frequently have their own set of problems.
Some models, for instance, rely on a single centralized decision-maker, which can introduce instability and prevent the system from scaling. Other models may not account for the heterogeneity of cloud resources, which can lead to inefficient use of those assets. The majority of the currently available models also fail to account for fluctuations in the workload, which can lead to subpar performance and unnecessary resource consumption. The contributions of several researchers in this field are as follows.
Hu et al. (2018) proposed a multi-objective scheduling algorithm for multi-cloud environments to reduce the workflow makespan and scheduling cost of data centers. Their approach utilized Particle Swarm Optimization to customize job scheduling based on location and order of data transmission. Although their simulation study showed promising results, their algorithm did not consider the impact of renewable energy sources or the carbon footprint of data centers [1]. Xu and Buyya (2020) proposed a workload shift approach that addressed the CO2 emissions issue by shifting jobs among multi-clouds located in different time zones. They also utilized renewable energy sources, such as solar energy, to reduce the usage of non-renewable energy. Their approach was effective in reducing CO2 emissions by 40% while maintaining a near-average response time for user requests. However, their approach did not consider load balancing or resource utilization, which are important factors in multi-cloud environments [2]. Cai et al. (2021) proposed a distributed job scheduling approach for multi-cloud environments that considered multiple objectives, including cost, energy consumption, time consumed, throughput, load balancing, and resource utilization. Their approach utilized intelligent algorithms based on the aforementioned objectives, as well as a sine function for model implementation. Although their simulation analysis demonstrated high scheduling efficiency with enhanced security, their approach did not consider the impact of renewable energy sources on energy consumption or CO2 emissions [3]. Renugadevi and Geetha (2021) developed a model for a geographically distributed multi-cloud environment that utilized solar energy as the main source of energy. Their model considered electricity prices, favorable location, CO2 emissions, and carbon taxes in energy management and resource allocation. 
The type of task was customized in response to its deadline, which resulted in adaptive management of the multi-cloud model and workload algorithm. However, their approach did not consider load balancing or resource utilization, which are critical factors for optimal performance in multi-cloud environments [4]. Gaurang Patel et al. (2015) presented a study of task scheduling algorithms and a modification of the load-balanced min-min algorithm. The proposed algorithm is based on a survey of load-balancing algorithms for static meta-task scheduling in grid computing. It selects tasks based on maximum completion time and improves resource utilization and makespan [5]. Zhang et al. (2021) proposed a distributed deployment approach for tasks in multi-cloud environments based on reinforcement learning. Their approach performed two steps: job offloading and task scheduling based on cloud-center decisions. Their simulation analysis showed that their approach had the lowest latency in a heterogeneous environment compared to existing approaches. However, their approach did not consider the impact of energy sources on energy consumption or CO2 emissions [6]. Sangeetha et al. (2022) utilized deep learning Neural Networks to control routing capabilities in multi-cloud environments to manage space and task allocation. Their approach minimized the delay associated with data processing and storage across clouds. Their simulation analysis demonstrated improved performance in terms of delay and the costs associated with resource allocation in multi-cloud environments. However, their approach did not consider the impact of renewable/non-renewable energy sources on energy consumption or CO2 emissions [7]. Cao et al. (2022) presented the carbon footprint of data centers as a major source of intensive CO2 emissions and suggested that data centers increasingly switch to renewable energy to minimize the negative effects of CO2 emissions and boost energy circulation.
They further suggested the integration of Artificial Intelligence to address the challenges and to put the framework in place in real-world scenarios. However, they did not propose a specific approach to integrate AI or to address the challenges of load balancing or resource utilization in multi-cloud environments [8]. Jun-Qing Li et al. (2021) proposed a hybrid greedy and simulated-annealing technique for scheduling the crane transportation process. The objective was to reduce the completion time and energy consumption. It works in two steps: the first step is the scheduling of jobs and the second step is the assignment of machines [9]. Yu Du et al. (2022) developed a multi-objective optimization algorithm based on an estimation-of-distribution algorithm and a deep Q-network to solve the job-shop scheduling problem, which involves machine processing speed, idle time, setup time, and transportation between machines. The proposed model was validated on CPLEX, and the results showed that the method performed effectively for job-shop scheduling [10]. Ju Du (2022) presented a deep Q-network (DQN) model for multi-objective flexible job-shop scheduling with crane transportation and setup times. It included 12 states and 7 actions for the scheduling process, and the results showed that the method produced effective and efficient output; the DQN can also apply dispatching rules according to the situation [11]. Haider Ali et al. (2021) explored IoT and its applications in the area of smart cities, discussed wireless sensors, and examined the multiprocessor system-on-chip (MPSoC) used in everyday IoT-based gadgets. At the end of this survey, the authors also outlined future directions [12]. Umair Ullah Tariq et al. (2021) proposed a novel energy-aware scheduler with constraints on Voltage Frequency Island (VFI)-based heterogeneous NoC-MPSoCs, deploying re-timing integrated with DVFS for real-time streaming applications.
The authors proposed R-CTG which is a task-level timing approach integrated with non-linear programming and a voltage-level approach. The comparison results showed that the proposed method performed better in terms of latency and energy consumption [13].
More recent efforts have aimed to optimize multi-cloud resource allocation using machine learning and deep reinforcement learning (DRL) approaches. However, there is still a large knowledge gap in the application of DRL to multi-cloud resource allocation, especially in the areas of energy efficiency, CO2 emissions, and cost optimization.
We present a unique DRL-based methodology for optimal resource allocation in multi-cloud computing that accounts for energy efficiency, CO2 emissions, and cost. We consider the dynamic and complicated character of multi-cloud computing systems, and our proposed model uses DRL algorithms to learn optimal resource allocation techniques in real time. Extensive experimental and simulation results show that our approach provides significant improvements over current state-of-the-art methods in terms of energy efficiency, cost, and carbon dioxide (CO2) emissions.

3. Materials and Methods

The proposed work is a novel model that aims to reduce the overall energy consumption and carbon emissions of the processing required to complete the supplied jobs. The following definitions illustrate the proposed architecture; the important notations and symbols used in the model are shown in Table 1.
The model is based on a set of definitions and architectural components that characterize the relationships between the many relevant elements.
The ‘Z’ symbol represents the concept of a job, which is central to the model. A job is a Million Instructions Per Second (MIPS) sequence that is given to a data center (DC). The job list is denoted as Z_L = {Z_1, Z_2, …, Z_g}, where the index ranges from 1 to g, g is the total number of given jobs, Z_g ⊆ Z_L is the subset assigned to a given DC, and Z_L may vary from one DC to another. The DC divides jobs among execution machines, also referred to as PMs (physical machines). A PM is responsible for the sequential execution of one or more jobs; under the suggested approach, one PM may hold more than two jobs in a row. The model incorporates numerous clouds in addition to jobs and PMs; ‘s’ represents the number of clouds. A cloud is a cluster of PMs that share common attributes, such as RAM, CPU utilization capacity, and location.
Based on the component module, the following is used to represent a cloud structure:
Clouds = ∑i=1..d ∑j=1..n PM(R, C, L)ij
This equation demonstrates that a cloud is a collection of physical machines (PMs), each having its own RAM, CPU usage capacity, and data-center location, and Clouds denotes the cloud's representation as a set.
The collection of PMs is represented by the summation symbols in the equation. The outer sum runs over the total number of data centers, ‘d’, whereas the inner sum runs over the number of PMs within each data center, ‘n’. The variable ‘i’ indicates the data-center number, and the variable ‘j’ represents the PM number within that data center. The notation ‘PM(R, C, L)ij’ denotes the characteristics of the jth PM in the ith data center: ‘R’ indicates the RAM connected to the PM, ‘C’ represents the CPU usage capacity, and ‘L’ reflects the PM's location inside the data center.
Consequently, this equation represents the cloud structure in the suggested model: the collection of PMs associated with the ith data center together with their RAM R, CPU utilization capacity C, and location L, where ‘s’ is the number of clouds in the system as a whole. This equation defines the cloud's structure in an ordered and systematic manner, which is essential for the effective administration of data centers and jobs.
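One way to render this structure concretely is as nested collections of PM records; the field names follow the article's (R, C, L) notation, while the values are invented for illustration.

```python
# Illustrative rendering of the cloud structure described above: a cloud is
# a collection of PMs, each with RAM (R), CPU capacity (C), and location (L).

from dataclasses import dataclass

@dataclass
class PM:
    ram_gb: float        # R: RAM attached to the PM
    cpu_capacity: float  # C: CPU utilization capacity
    location: tuple      # L: PM's location inside the data center

# Clouds = sum over data centers i (1..d) of PMs j (1..n) with attrs (R, C, L)
clouds = [
    [PM(32, 1.0, (0, 0)), PM(64, 0.8, (0, 1))],   # data center i = 1
    [PM(16, 0.5, (5, 5))],                        # data center i = 2
]

total_pms = sum(len(dc) for dc in clouds)
print(total_pms)  # 3
```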
Figure 1 shows the important components of the multi-cloud environment. The proposed work is divided into two segments from here. The first segment performs a pollination process to manage the job list JLij based on R, C, and L and executes the jobs using the Best Fit Decreasing (BFD) algorithm proposed by Baker in 1981, which was later modified into the Modified Best Fit Decreasing (MBFD) algorithm during the development of CC architecture [2,14]. The first segment is termed the preliminary allocation of jobs to the data-center execution machines, based on the pollination process with improved fitness values, and the second segment is termed the advanced allocation, which utilizes the propagated Q-learning method. The illustration is as follows.

3.1. The Preliminary Allocation

The preliminary allocation utilizes the attribute set{R,C,L} as the primary measure considering CyberShake and MBFD as the base of allocation. To do so, the proposed work can be viewed with the help of the process diagram that is shown in Figure 2.
The preliminary allocation phase is divided into six steps, and the numbers mentioned in this figure represent the flow of work as follows:
  • The user sends a request to the CC without needing to know the intercommunication architecture of the cloud. The cloud architecture has multiple clouds that are interconnected to share resources.
  • The CC transfers the request to the IaaS layer.
  • The IaaS layer broadcasts the requirement to the job manager, and the job manager sorts the jobs by CPU utilization in descending order. The IaaS layer broadcasts the sorted requirements to the data centers, and the data centers compute the overall expected cost based on CPU utilization.
  • The IaaS layer passes the responses of the data centers to the job manager, and the job manager identifies the minimum cost offered by a data center.
  • The job manager also evaluates the location compatibility of the user and the data center and returns the evaluated values to the IaaS layer.
  • With the help of the SaaS allocation architecture, the job is allocated to the data center offering the minimum cost.
The pre-allocation phase can be algorithmically written as in Algorithm 1.
Algorithm 1: Preliminary Allocation
Require: UL, dcd, where UL is the user list and dcd is the data-center description
Output: allocation, the resulting allocation architecture
tu = UL.count // fetch total number of users
CU = UL.CPUU // fetch CPU utilization from user demands
RU = UL.RAMU // fetch RAM demand from the users
nr = norm(CU, RU) // perform normalization over the parameters
demand = nr.CU + nr.RU
SD = sort(demand, descending) // sort the demand vector into descending order
for usr = 1 : tu
  dcc = 1 // initialize data-center counter to 1
  while dcc ≤ dcd.count
    pma = dcd.PMs; pmc = pma.count
    min(ec) = max // assign the maximum value to the minimum energy consumption
    pmcr = 1 // PM counter
    while pmcr ≤ pmc
      ec = EC(pma[pmcr]) // compute energy consumption
      if ec ≤ min(ec)
        [ux, uy] = location(usr) // fetch location of the user
        [dcx, dcy] = location(dcc) // fetch location of the data center
        dist = √((dcx − ux)² + (dcy − uy)²) (1)
        if dist ≤ th // th is the location threshold
          min(ec) = ec
          allocation = allocate(usr, dcc.pmcr)
        end if
      end if
      pmcr = pmcr + 1
    end while
    dcc = dcc + 1
  end while
end for
Output: allocation
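A runnable sketch of Algorithm 1's core loop is given below, assuming hypothetical user demands, PM energy-cost estimates, and locations: it normalizes demand, sorts users in descending order, and assigns each user to the cheapest PM among data centers within the location threshold.

```python
# Sketch of the preliminary allocation: sort users by normalized demand
# (BFD-style), then pick the lowest-energy PM within the location threshold.
# All demand, energy, and location values are illustrative placeholders.

import math

def preliminary_allocation(users, datacenters, th):
    """users: dicts with id, cpu, ram, loc; datacenters: dicts with loc and
    pms, where each PM carries an 'ec' energy-cost estimate."""
    max_cpu = max(u["cpu"] for u in users) or 1
    max_ram = max(u["ram"] for u in users) or 1
    # normalized demand, sorted in descending order
    order = sorted(users,
                   key=lambda u: u["cpu"] / max_cpu + u["ram"] / max_ram,
                   reverse=True)
    allocation = {}
    for u in order:
        best, min_ec = None, math.inf
        for di, dc in enumerate(datacenters):
            if math.dist(u["loc"], dc["loc"]) > th:
                continue  # data center too far from the user
            for pi, pm in enumerate(dc["pms"]):
                if pm["ec"] < min_ec:
                    min_ec, best = pm["ec"], (di, pi)
        allocation[u["id"]] = best
    return allocation

users = [{"id": "u1", "cpu": 0.6, "ram": 8, "loc": (0, 0)},
         {"id": "u2", "cpu": 0.9, "ram": 4, "loc": (4, 4)}]
dcs = [{"loc": (1, 0), "pms": [{"ec": 5.0}, {"ec": 3.5}]},
       {"loc": (5, 4), "pms": [{"ec": 4.0}]}]
print(preliminary_allocation(users, dcs, th=3.0))
```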
Once the allocation is performed for a user to a PM, the PM starts executing the job assigned to it and by the end of the execution, the following parameters have been evaluated:
  • Distance from user to PM.
  • Total consumed energy in the allocation as well as in the execution against total number of supplied instructions.
  • The total amount of CO2 emission in the processing of the job at the PM.
Each cloud data center has its own energy consumption profile. To estimate the energy consumption of a multi-cloud environment, we can use the following equation:
ECtotal = ∑ DCi (Efixed_DCi + Edyn_DCi)
where:
  • ECtotal: total energy consumption of the multi-cloud environment.
  • DCi: a specific data center in the multi-cloud environment.
  • Efixed_DCi: the fixed energy consumption of the data center, which is independent of the actual workload and is related to the power consumed by the cooling and networking equipment.
  • Edyn_DCi: the dynamic energy consumption of the data center, which is directly related to the actual workload and is caused by the power consumed by the servers and other computing resources.
To compute Edyn_DCi, we can use the following equation:
Edyn_DCi = ∑a∈CNi Edyn_a + ∑b∈SNi Edyn_b
where:
  • Edyn_DCi: the dynamic energy consumption of data center DCi.
  • a: a compute node in the data center that is used to run cloud services and virtual machines.
  • CNi: the set of compute nodes in data center DCi that are used to run cloud services and virtual machines.
  • Edyn_a: the dynamic energy consumption of compute node a, which is proportional to its CPU utilization and the power consumed by its CPU and memory.
  • b: a storage node in the data center that is used to store data and provide access to it.
  • SNi: the set of storage nodes in data center DCi that are used to store data and provide access to it.
  • Edyn_b: the dynamic energy consumption of storage node b, which is proportional to the power consumed by its disk drives and other storage components.
To compute Edyn_a and Edyn_b, we can use the following equations:
Edyn_a = T · [(P_cores_a,j+1 − P_cores_a,j) · k · u_a + ((j + 1) · P_cores_a,j − j · P_cores_a,j+1)]
Edyn_b = T · [(P_disks_b,j+1 − P_disks_b,j) · k · u_b + ((j + 1) · P_disks_b,j − j · P_disks_b,j+1)]
where:
  • Edyn_a: the dynamic energy consumption of compute node a.
  • Edyn_b: the dynamic energy consumption of storage node b.
  • T: the time interval for which the energy consumption is being estimated.
  • P_cores_a,j: the power consumed by the CPU of compute node a at a time interval j.
  • P_disks_b,j: the power consumed by the disk drives of storage node b at time interval j.
  • k: a constant that reflects the power consumption per unit of CPU utilization or disk I/O activity.
  • u_a: the average CPU utilization of compute node a over the time interval T.
  • u_b: the average disk I/O activity of storage node b over the time interval.
  • j: the time interval index, which varies from 1 to N, where N is the total number of time intervals in T.
The above equations can be used to estimate the energy consumption of a multi-cloud environment by computing the fixed and dynamic energy consumption of each data center and summing them up over all data centers. Note that the accuracy of these equations depends on the accuracy of the input parameters, such as the power consumption profiles of the computing and storage resources, the CPU utilization and disk I/O activity levels, and the time interval T.
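The fixed-plus-dynamic decomposition above can be sketched numerically as follows; the data-center figures are invented placeholders, with dynamic energy split between compute and storage nodes as in the equations.

```python
# Hedged numerical sketch of the energy model: total consumption is the
# fixed plus dynamic energy summed over data centers, with dynamic energy
# split across compute and storage nodes. All values are illustrative.

def ec_total(datacenters):
    """ECtotal = sum over DCi of (Efixed_DCi + Edyn_DCi)."""
    return sum(dc["e_fixed"]
               + sum(dc["e_dyn_compute"])   # compute nodes a in CNi
               + sum(dc["e_dyn_storage"])   # storage nodes b in SNi
               for dc in datacenters)

dcs = [
    {"e_fixed": 120.0, "e_dyn_compute": [40.0, 35.0], "e_dyn_storage": [10.0]},
    {"e_fixed": 90.0, "e_dyn_compute": [25.0], "e_dyn_storage": [8.0, 7.0]},
]
print(ec_total(dcs))  # 335.0
```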
The above study shows that a physical machine in an idle state consumes about 70% of the energy of a fully utilized physical machine in a cloud data center [15,16,17]. This enormous consumption of energy results in massive emissions of carbon dioxide. Resource efficiency and the carbon footprint therefore depend on the CO2 emitted from different sources, such as active physical machines, idle physical machines, and other sources. The following equation shows the relationship between the energy consumed by the machines and the emission of CO2; the energy consumed by the machines is directly related to the emission of carbon dioxide, given as follows [18,19,20]:
TCO2 = ECtotal × I
where TCO2 is the total emission of CO2, ECtotal is the total power consumed by the machines, and I is the intensity of greenhouse-gas emissions. In the given equation, ECtotal is obtained from Table 2 and I is the CO2 emitted by each physical server. According to [14,15], I is 12.62 MT CO2/month in a cloud data center with 100 servers. The total emission of CO2 is shown in Table 2. Furthermore, the energy consumed and the carbon emitted in the allocation and processing of a job at a PM are computed per machine using the following two equations:
EC = ∑m=1..M (I_m × ece_j + s_m × I_m × ecs_j)
where M is the total number of instructions on one PM ‘j’, ece_j is the execution cost at the jth PM, s_m is the storage required for the mth instruction, and ecs_j is the total energy consumed in the storage of one instruction at the jth PM. In a similar fashion, the CO2 emission is computed by the following equation.
CE = ∑m=1..M (I_m × cee_j + s_m × I_m × ces_j)
where cee_j is the CO2 emitted in the execution of the mth instruction set on the jth PM and ces_j is the CO2 emitted in its storage.
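The per-PM energy and CO2 sums above can be sketched as below; the instruction counts, per-instruction costs, and storage fractions are illustrative placeholders, not measured values.

```python
# Sketch of the per-PM energy (EC) and CO2 (CE) sums. The instruction set
# and the per-instruction cost coefficients are invented for illustration.

def pm_energy(instr, ece_j, ecs_j):
    """EC = sum over m of (I_m * ece_j + s_m * I_m * ecs_j).
    instr: list of (I_m, s_m) pairs for PM j."""
    return sum(i_m * ece_j + s_m * i_m * ecs_j for i_m, s_m in instr)

def pm_co2(instr, cee_j, ces_j):
    """CE = sum over m of (I_m * cee_j + s_m * I_m * ces_j)."""
    return sum(i_m * cee_j + s_m * i_m * ces_j for i_m, s_m in instr)

instructions = [(1000, 0.5), (2000, 0.25)]   # (I_m, storage fraction s_m)
print(pm_energy(instructions, ece_j=0.002, ecs_j=0.001))
print(pm_co2(instructions, cee_j=0.0005, ces_j=0.0002))
```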
The proposed algorithm architecture introduces a separation mechanism that is attained using a modified k-means algorithm. The ordinal measures are provided in the next section.

3.2. The Elasticity

The concept of elasticity was introduced to the cloud computing environment in early 2012 when the load management concept was introduced to the scheduling architecture of cloud computing. The proposed work continues the effort in the same area, particularly in minimal energy consumption and CO2 emissions. To do so, the proposed work utilizes the concept of a Q-learning algorithm architecture by creating the following aspects [16,17,18,19,20].
  • Environment;
  • States;
  • Actions.
To create the environment, the proposed algorithm enhances the existing k-means algorithm architecture in which the data is clustered based on the Euclidean distance. The ordinal measures of k-means are as follows.

Updated K-Means

To update the existing k-means algorithm, two major changes were made as follows.
  • Adding cosine similarity as another measure used in the distance calculation to maintain the records in any cluster. Cosine similarity is commonly used in parameter distance evaluations [21,22,23,24,25]. It is the cosine of the angular difference between two supplied vectors with the same number of attributes. In the proposed work, the cosine similarity is evaluated between two vectors containing four attributes:
    v = {EC, CO2, d, M}
    where EC is the energy consumption, CO2 represents the carbon dioxide emission, ‘d’ represents the distance between the user and the location of the data center, and M is the total supplied instruction set from users.
  • Removing random centroid selection for the first convergence. To speed up convergence, the proposed algorithm architecture uses the mean value of every attribute as the first centroid and applies a 20% margin to create the other two centroids at first glance.
The updated k-means algorithm aims to improve performance and accuracy. The first update adds cosine similarity to the distance computation used to maintain records in any cluster; cosine similarity helps the algorithm locate groups with comparable properties. Second, random centroid selection is removed for the initial convergence, because random centroid selection can slow convergence and produce poor clustering solutions. The proposed design uses the mean value of every attribute as the first centroid and a 20% margin to build the other two centroids at first glance. This gives the algorithm a better starting point and lets it converge faster to a better clustering solution.
The rest of the process runs in a similar fashion: each data point is placed in the cluster with the nearest hybrid distance. The process continues until every data point is assigned to a cluster.
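A minimal sketch of the two modifications is given below: a hybrid distance that mixes Euclidean distance with cosine dissimilarity over the vector v = {EC, CO2, d, M}, and deterministic initial centroids built from the attribute means with a ±20% margin. The sample points and the equal weighting of the two distance terms are assumptions for illustration.

```python
# Sketch of the updated k-means ingredients: hybrid (Euclidean + cosine)
# distance and mean-based deterministic centroids. Data are invented.

import math

def hybrid_distance(a, b):
    """Euclidean distance plus (1 - cosine similarity), so both magnitude
    and orientation of the attribute vectors contribute."""
    euclid = math.dist(a, b)
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.hypot(*a) * math.hypot(*b)
    cos_sim = dot / norms if norms else 0.0
    return euclid + (1.0 - cos_sim)

def initial_centroids(points):
    """Mean vector as the first centroid; +/-20% margins give the others."""
    mean = [sum(col) / len(points) for col in zip(*points)]
    return [mean,
            [x * 1.2 for x in mean],
            [x * 0.8 for x in mean]]

# Each point is v = (EC, CO2, d, M)
points = [(10.0, 2.0, 5.0, 100.0),
          (12.0, 2.5, 6.0, 120.0),
          (40.0, 9.0, 1.0, 400.0)]
cents = initial_centroids(points)
labels = [min(range(3), key=lambda k: hybrid_distance(p, cents[k]))
          for p in points]
print(labels)
```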
The clustered elements are now considered as an environment for the Q-learning algorithm architecture. The Q-learning is propagated by the weight method to generate actions against each state of the environment. The proposed algorithm architecture considers three states for one cloud and the environment is considered for multiple cloud environments to support load and resource sharing [26,27,28,29,30,31,32]. The proposed process can be understood using the workflow shown in Figure 3.
The reward repository is created by updating Bellman's equation for reward and penalty generation to integrate the propagation-based reward mechanism. The proposed algorithm therefore differs from existing ones in how propagation, rewards, and penalties are generated, which improves the efficiency and effectiveness of decision-making for the migration and allocation of users in a cloud computing environment. The reward-generation procedure is illustrated below.
The proposed algorithm uses two time slots for weight generation, where t represents the current state value and t − 1 the previous state value; in the workflow, numbers denote the flow and Y/N denote yes/no decisions. This allows a more accurate assessment of the current and past states, which in turn supports better migration and allocation decisions.
The reward and penalty generated by the algorithm are used to decide whether a user should be migrated from the current physical machine (PM) or not. Additionally, the reward and penalty are also used for the allocation of a user to a PM. If the reward of a PM is higher than the penalties, the user will be allocated to that PM. However, if the reward is lower than the penalties, the user will remain on the existing PM.
The proposed algorithm has been evaluated on several Quality of Service (QoS) parameters and compared with existing algorithm architectures from the literature; the results show that it outperformed them in terms of efficiency, effectiveness, and accuracy. The algorithmic description is as follows.
Q-learning: learn function Q: X × A → R
Require:
   States X = {1, 2, 3} (the proposed work has 3 states)
   Actions A = {1, 2, 3}, A: X ⇒ A (the proposed work has 3 actions)
   Reward function R: X × A → R
   Gradient-based (weighted-sum) transition function T: X × A → X, with weight w = ax + b
   Learning rate α ∈ [0, 1], typically α = 0.1
   Discount factor γ ∈ [0, 1]
procedure QLEARNING(X, A, R, T, α, γ)
   Initialize Q: X × A → R arbitrarily, with a = 0.01, b = 0.02, and x = norm(PC, CO2, d, M)
   while Q is not converged do
      Start in state s ∈ X
      while s is not terminal do
         Calculate π according to w = ow + nw, where nw = ax + b and ow = nw(t − 1), t being the time state of the current value, and an exploration strategy (e.g., π(x) ← arg maxa Q(x, a))
         a ← π(s)
         r ← R(s, a) or P(s, a) ▷ Receive the reward or penalty
         s′ ← T(s, a) ▷ Receive the new state
         Q(s, a) ← (1 − α)·Q(s, a) + α·(r + γ·maxa′ Q(s′, a′))
         s ← s′
   return Q
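The update loop in the listing can be sketched in code. Everything below is a minimal illustration under assumed toy values: the three states and actions and the weight constants a = 0.01, b = 0.02 follow the listing, while the reward/penalty tables, the normalized state feature, and the transition rule are hypothetical.

```python
import numpy as np

ALPHA, GAMMA = 0.1, 0.9          # learning rate and discount factor
N_STATES, N_ACTIONS = 3, 3       # the proposed work uses 3 states and 3 actions
A, B = 0.01, 0.02                # constants of the new weight nw = a*x + b

def weighted_policy(Q, s, x, ow):
    """Greedy action choice scaled by the propagated weight w = ow + nw."""
    nw = A * x + B
    w = ow + nw
    return int(np.argmax(Q[s] * w)), nw   # nw becomes ow at the next step

def q_update(Q, s, a, r, s_next):
    """Bellman-style update: Q(s,a) <- (1-α)Q(s,a) + α(r + γ max_a' Q(s',a'))."""
    Q[s, a] = (1 - ALPHA) * Q[s, a] + ALPHA * (r + GAMMA * Q[s_next].max())

# Hypothetical reward and penalty tables indexed by (state, action).
R = np.array([[1.0, 0.2, 0.1], [0.3, 1.0, 0.2], [0.1, 0.4, 1.0]])
P = np.array([[0.1, 0.5, 0.6], [0.4, 0.1, 0.5], [0.6, 0.3, 0.1]])

Q = np.zeros((N_STATES, N_ACTIONS))
x, ow = 0.5, 0.0                 # normalized state feature and previous weight
s = 0
for _ in range(200):
    a, ow = weighted_policy(Q, s, x, ow)
    r = R[s, a] - P[s, a]        # net reward-or-penalty signal
    s_next = a % N_STATES        # toy transition: the action indexes the next state
    q_update(Q, s, a, r, s_next)
    s = s_next
```

The reward-minus-penalty signal mirrors the decision rule in the text: a PM whose accumulated rewards exceed its penalties attracts the user, otherwise the user stays where it is.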
In our proposed approach, the preliminary phase and the elasticity phase follow each other. To understand this better, consider the flow of execution shown in Figure 3. When a user sends a request to the cloud controller (CC), the CC transfers the request to the Infrastructure as a Service (IaaS) layer. The IaaS layer then broadcasts the requirement to the job manager, which sorts the jobs in descending order of CPU utilization. At this stage, we use the updated k-means algorithm to cluster the jobs based on their CPU requirements and geographical locations. This reduces overall energy consumption by allocating jobs to the nearest data centers, which shortens the distance for data transmission.
Next, the IaaS layer broadcasts the sorted requirements to the data centers, and the data centers compute the overall expected cost based on CPU utilization. At this stage, we use the Q-learning algorithm to optimize the energy consumption of the data centers: it determines the optimal operating point by weighing the tradeoff between energy consumption and performance, minimizing energy use while maintaining acceptable performance. After the data centers compute the overall expected cost, the IaaS layer passes the responses to the job manager. The job manager inspects the minimum cost offered by the data centers and also evaluates the location compatibility of the user and the data center using the updated k-means algorithm. Finally, with the help of the Software as a Service (SaaS) allocation architecture, the job is allocated to the data center with the minimum cost.
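The job-manager step above (filter data centers by location compatibility, then pick the cheapest quote) can be sketched as follows. The data-center records, the cost formula, and the distance threshold are illustrative assumptions, not the simulator's actual interfaces.

```python
import math

def expected_cost(dc, mips):
    """Expected cost a data center quotes for a job of `mips` instructions
    (illustrative: unit execution cost scaled up by current load)."""
    return dc["unit_cost"] * mips * (1 + dc["load"])

def distance(dc, user):
    """Euclidean distance between data-center and user coordinates."""
    return math.hypot(dc["x"] - user["x"], dc["y"] - user["y"])

def allocate(job, user, datacenters, max_distance=100.0):
    """Pick the cheapest location-compatible data center for a job,
    mirroring the job-manager step in the workflow above."""
    compatible = [dc for dc in datacenters if distance(dc, user) <= max_distance]
    return min(compatible, key=lambda dc: expected_cost(dc, job["mips"]))

datacenters = [
    {"name": "dc1", "unit_cost": 0.004, "load": 0.30, "x": 10, "y": 5},
    {"name": "dc2", "unit_cost": 0.003, "load": 0.10, "x": 60, "y": 40},
    {"name": "dc3", "unit_cost": 0.002, "load": 0.90, "x": 300, "y": 300},
]
job = {"mips": 10_000}
user = {"x": 20, "y": 10}
best = allocate(job, user, datacenters)  # dc3 is cheapest but filtered out by distance
```

Filtering by distance before comparing costs is what lets the scheme cut transmission distance (and hence energy) without sacrificing the minimum-cost objective among the remaining candidates.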

4. Results and Discussion

The proposed work model minimizes total data center energy consumption and carbon emissions by employing a Q-learning algorithm for job allocation and migration among physical machines (PMs). The proposed algorithm was evaluated based on QoS parameters using the following formulas.
  • Energy Consumption: The energy consumption of the proposed model is calculated from the total amount of energy used in a given interval of time. The total consumed energy is the sum of idle energy consumption and processing energy consumption, as shown in Equation (9):
    EC = ECidle + ECprocessing
    where ECidle is the energy consumed by the data center in the idle state and ECprocessing is the energy consumed during the processing of jobs. The processing energy consumption can be further defined using Equation (10):
    ECprocessing = ECiu × MIPS + ECius × MIPS
    where ECiu is the unit cost of execution of one MIPS, ECius is the unit cost of storage of the executed instruction set, and MIPS is the total amount of processing done in the given time interval.
  • CO2 emission: The CO2 emission is calculated using Equation (11):
    CO2e = CO2eidle + CO2eprocessing
    where CO2eidle is the amount of CO2 emission released in the idle state and CO2eprocessing is the amount of CO2 emission released during the processing of jobs. The amount of CO2 emission released during job processing can be defined as
    CO2eprocessing = CO2iu × MIPS + CO2ius × MIPS
    where CO2iu is the unit amount of CO2 emitted during the execution of one MIPS and CO2ius is the unit amount of CO2 emitted during the storage of the executed instruction set.
  • Overall cost: This is evaluated from the total cost of implementation and the overall cost of storage. The overall cost is calculated using Equation (13):
    Overall Cost = EC × α + β
    where α is the unit cost to produce the EC amount of energy consumed and β is the malicious processing cost.
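Equations (9)–(13) combine into a small cost model, sketched below; the numeric unit costs are placeholders for illustration only.

```python
def energy_consumption(ec_idle, ec_iu, ec_ius, mips):
    """Eq. (9)-(10): EC = EC_idle + (EC_iu + EC_ius) * MIPS."""
    return ec_idle + (ec_iu + ec_ius) * mips

def co2_emission(co2_idle, co2_iu, co2_ius, mips):
    """Eq. (11)-(12): CO2e = CO2e_idle + (CO2_iu + CO2_ius) * MIPS."""
    return co2_idle + (co2_iu + co2_ius) * mips

def overall_cost(ec, alpha, beta):
    """Eq. (13): Overall Cost = EC * alpha + beta."""
    return ec * alpha + beta

# Placeholder unit costs for a 10,000 MIPS interval.
ec = energy_consumption(ec_idle=0.005, ec_iu=2e-6, ec_ius=1e-6, mips=10_000)
co2 = co2_emission(co2_idle=0.5, co2_iu=1e-3, co2_ius=5e-4, mips=10_000)
cost = overall_cost(ec, alpha=1500.0, beta=5.0)
```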
In the context of comparing our proposed work model to other architectures using Q-learning and Neural Networks, it should be noted that the use of Neural Networks for this kind of problem is a common approach. However, the typical feedforward Neural Network architecture used for supervised learning in our comparison did not perform as well as expected. This is largely due to its static allocation mechanism, which did not adapt well to changes in user demand.
Our proposed work model utilizing Q-learning is intended to allocate resources dynamically based on current user demand and migration trends. This method enables more efficient and effective resource usage, resulting in superior overall performance compared to static allocation processes such as those utilized in Neural Network designs.
Furthermore, our proposed work model was assessed in two distinct scenarios. The first scenario involves adding users incrementally, with a minimum of 50 and a maximum of 1000. In this case, 10 PMs were maintained for 50 users, and for every 10 users, two additional PMs were deployed to accommodate network demand. The proposed work model outperformed the Neural Network architecture by a margin of 7 to 9 percent in every area due to the updated allocation and reward mechanism that determines the most suitable PM for the allocation of users and the migration of users from one PM to another.
In the second scenario, users are added incrementally, beginning with 50 and ending with 1000, while the load is maintained at 10,000 MIPS in all scenarios.
Consequently, we believe that our proposed work model utilizing Q-learning provides a more efficient and effective solution for resource allocation in cloud computing environments, particularly when compared to traditional static allocation mechanisms such as those utilized in typical feedforward Neural Network architectures.
The results shown in Table 2 depict the energy consumption and CO2 emissions of a cloud data center as the number of users increases. There may be several reasons why the energy consumption is lower for 50 users than for 60 users, such as resource heterogeneity, workload types, and resource allocation strategies. Table 2 was generated through simulations or experiments designed to measure the energy consumption and CO2 emissions of a cloud data center as the number of users increases. These experiments may involve executing a set of workloads on the cloud data center and using power meters and carbon footprint calculators to measure energy consumption and CO2 emissions. These experiments are typically repeated several times to obtain an average value for each data point in the table.
Furthermore, the experiments may consider various scenarios, such as different resource allocation strategies, workload characteristics, and server utilization rates, to obtain a comprehensive understanding of the energy consumption and CO2 emissions of the cloud data center. The results from these experiments presented in Table 2 demonstrate the relationship between the number of users and the energy consumption and CO2 emissions of the cloud data center.
For 1000 users, the Q-learning algorithm consumed 0.02284044 mJ whereas the Neural Network algorithm consumed 0.02745898 mJ, an improvement of [(0.02745898 − 0.02284044)/0.02745898] × 100 ≈ 16%. For CO2 emissions, an improvement of [(2.80097843 − 2.72712565)/2.80097843] × 100 ≈ 2.6% was observed.
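The relative-improvement figures quoted throughout the results all take the form (baseline − proposed)/baseline. Using the 1000-user row of Table 2:

```python
def improvement_pct(baseline, proposed):
    """Relative improvement of `proposed` over `baseline`, in percent."""
    return (baseline - proposed) / baseline * 100

# Energy (mJ) and CO2 values for 1000 users, Neural Network vs. Q-learning.
energy_gain = improvement_pct(0.02745898, 0.02284044)  # ~16.8%
co2_gain = improvement_pct(2.80097843, 2.72712565)     # ~2.6%
```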
The second part of the evaluation compares the proposed work with other state of the art techniques and is illustrated as follows.
As shown in Table 3, energy consumption increased as the load (in MIPS) increased. For this comparison and all the others made here, the user count was kept constant at 150. The proposed work was compared with three state-of-the-art techniques, namely MOS (Multi-Objective Scheduling) [1], MOIA (Many-Objective Intelligent Algorithm) [3], and MET. The average energy consumption, from the proposed algorithm through MOIA, is shown in Figure 4.
Similarly, CO2 emission was also calculated and compared with existing state of the art techniques, as illustrated in Table 4.
CO2 emission depends on energy consumption and the supplied number of jobs. The average CO2 emission for the proposed and state of the art techniques is shown in Figure 5. The proposed algorithm outperformed the state of the art techniques by a significant margin. For the same load scenario, the average CO2 emission for the proposed algorithm was 75.99 units whereas MET demonstrated an overall emission of 89.05 units. Compared to MOS [1], the average improvement was [(85.10 − 75.99)/85.10] × 100 = 10.70%, whereas compared to MOIA [3], the proposed algorithm demonstrated an overall improvement of 13%.
The improvement in the case of the proposed algorithm is due to the dynamic nature of load adaptation and the adjustment of jobs across higher, moderate, and lower utilization. Due to the presence of the Q-table in the system, the processes after pre-allocation are updated in terms of policy selection, and the proposed algorithm selects the best PM for the job when a user has to be migrated from one PM to another.
In a similar context, the overall cost was also evaluated using Equation (13) and is illustrated in Table 5.
Due to its effective management of energy efficiency, the proposed algorithm consumed less energy than the other architectures, and hence the overall cost was also lower, as the cost depends on energy consumption. The average cost in all scenarios is shown in Figure 6.
For all the considered scenarios, the average cost of the proposed algorithm was just below 120 units, whereas, apart from MOIA, all the other algorithm architectures consumed more than 140 units. To be precise, the proposed algorithm was 17.04% more efficient than MET (140.29 vs. 119.83 units), 20.63% more efficient than MOS (140.56 vs. 119.83 units), and 13.81% more cost-effective than MOIA (136.38 vs. 119.83 units) in terms of average cost.

5. Conclusions

This research article presented a simulation architecture with multiple clouds sharing resources and load among each other. The data centers of the cloud are equipped with physical machines that handle users in terms of their supplied instruction sets. The proposed algorithm architecture is divided into two phases, namely preliminary allocation and support for elasticity. In the preliminary phase, the user is allocated to the PM that offers the lowest proposed cost, based on the distance of the user from the PM and the resource capabilities of the PM. Once the users are allocated, a threshold is set for the simulations and Q-learning takes in the data aggregated so far. The Q-learning algorithm architecture creates a Q-table whose rewards are updated through the presented propagation architecture. Q-learning is supplied with three states, and the Q-table is updated by Bellman's reward policy. The proposed solution to the job scheduling problem reduced overall energy consumption and CO2 emission compared to the state of the art techniques: it outperformed the MOS architecture by 10.70% in terms of energy consumption, and an improvement of 13% was observed compared to MOIA. The proposed work also calculated the overall cost of the system to execute the jobs, showing an improvement of more than 13% and just under 21% in every compared scenario. The proposed work opens several future directions, including merging modern Q-learning algorithms with swarm-based algorithms such as PSO.

Author Contributions

Conceptualization, R.K., U.K. and D.A.; Methodology, R.K., U.K., D.A., S.V. and K.; Software, R.K.; Validation, U.K. and D.A.; Formal Analysis, U.K. and D.A.; Investigation, R.K., U.K. and D.A.; Resources, U.K. and D.A.; Data Curation, U.K. and D.A.; Writing—Original Draft Preparation, R.K.; Supervision, D.A. and U.K.; Project Administration, U.K. and D.A.; Writing—Review and Editing, S.-W.P., A.S.M.S.H. and I.-H.R.; Funding Acquisition, A.S.M.S.H. and I.-H.R. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by a National Research Foundation of Korea (NRF) grant funded by Korean Government (MSIT) (No. 2021R1A2C2014333).

Data Availability Statement

Data sharing is not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hu, H.; Li, Z.; Hu, H.; Chen, J.; Ge, J.; Li, C.; Chang, V. Multi-objective scheduling for scientific workflow in multicloud environment. J. Netw. Comput. Appl. 2018, 114, 108–122. [Google Scholar] [CrossRef]
  2. Xu, M.; Buyya, R. Managing renewable energy and carbon footprint in multi-cloud computing environments. J. Parallel Distrib. Comput. 2020, 135, 191–202. [Google Scholar] [CrossRef]
  3. Cai, X.; Geng, S.; Wu, D.; Cai, J.; Chen, J. A multicloud-model-based many-objective intelligent algorithm for efficient task scheduling in internet of things. IEEE Internet Things J. 2020, 8, 9645–9653. [Google Scholar] [CrossRef]
  4. Renugadevi, T.; Geetha, K. Task aware optimized energy cost and carbon emission-based virtual machine placement in sustainable data centers. J. Intell. Fuzzy Syst. 2021, 41, 5677–5689. [Google Scholar] [CrossRef]
  5. Patel, G.; Mehta, R.; Bhoi, U. Enhanced load balanced min-min algorithm for static meta task scheduling in cloud computing. Procedia Comput. Sci. 2015, 57, 545–553. [Google Scholar] [CrossRef]
  6. Zhang, Y.; Di, B.; Zheng, Z.; Lin, J.; Song, L. Distributed multi-cloud multi-access edge computing by multi-agent reinforcement learning. IEEE Trans. Wirel. Commun. 2020, 20, 2565–2578. [Google Scholar] [CrossRef]
  7. Sangeetha, S.B.; Sabitha, R.; Dhiyanesh, B.; Kiruthiga, G.; Yuvaraj, N.; Raja, R.A. Resource management framework using deep neural networks in multi-cloud environment. In Operationalizing Multi-Cloud Environments; Springer: Cham, Switzerland, 2022; pp. 89–104. [Google Scholar]
  8. Cao, Z.; Zhou, X.; Hu, H.; Wang, Z.; Wen, Y. Towards a Systematic Survey for Carbon Neutral Data Centers. IEEE Commun. Surv. Tutor. 2022, 24, 895–936. [Google Scholar] [CrossRef]
  9. Li, J.; Du, Y.; Gao, K.; Duan, P.; Gong, D.; Pan, Q.; Suganthan, P. A Hybrid Iterated Greedy Algorithm for a Crane Transportation Flexible Job Shop Problem. IEEE Trans. Autom. Sci. Eng. 2022, 19, 2153–2170. [Google Scholar] [CrossRef]
  10. Du, Y.; Li, J.; Chen, X.; Duan, P.; Pan, Q. Knowledge-Based Reinforcement Learning and Estimation of Distribution Algorithm for Flexible Job Shop Scheduling Problem. IEEE Trans. Emerg. Top. Comput. Intell. 2022, 1–15. [Google Scholar] [CrossRef]
  11. Du, Y.; Li, J.; Li, C.; Duan, P. A Reinforcement Learning Approach for Flexible Job Shop Scheduling Problem with Crane Transportation and Setup Times. IEEE Trans. Neural Netw. Learn. Syst. 2022, 1–15. [Google Scholar] [CrossRef]
  12. Ali, H.; Tariq, U.; Hardy, J.; Zhai, X.; Lu, L.; Zheng, Y.; Bensaali, F.; Amira, A.; Fatema, K.; Antonopoulos, N. A Survey on system level energy optimisation for MPSoCs in IOT and consumer electronics. Comput. Sci. Rev. 2021, 41, 100416. [Google Scholar] [CrossRef]
  13. Tariq, U.; Ali, H.; Lu, L.; Hardy, J.; Kazim, M.; Ahmed, W. Energy-Aware Scheduling of Streaming Applications on Edge-Devices in IoT-Based Healthcare. IEEE Trans. Green Commun. Netw. 2021, 5, 803–815. [Google Scholar] [CrossRef]
  14. Gupta, R.; Verma, S.; Kavita. Test driven software development technique for software engineering. IJREISS 2012, 2, 81–91. [Google Scholar]
  15. Bansal, K.; Singh, A.; Verma, S.; Kavita; Jhanjhi, N.Z.; Shorfuzzaman, M.; Masud, M. Evolving cnn with paddy field algorithm for geographical landmark recognition. Electronics 2022, 11, 1075. [Google Scholar] [CrossRef]
  16. Kusic, D.; Kephart, J.O.; Hanson, J.E.; Kandasamy, N.; Jiang, G. Power and performance management of virtualized computing environments via lookahead control. In Proceedings of the 2008 International Conference on Autonomic Computing, Chicago, IL, USA, 2–6 June 2008; Volume 12, pp. 1–15. [Google Scholar]
  17. Verma, A.; Ahuja, P.; Neogi, A. pMapper: Power and migration cost aware application placement in virtualized systems. In Proceedings of the ACM/IFIP/USENIX International Conference on Distributed Systems Platforms and Open Distributed Processing, Leuven, Belgium, 1–5 December 2008; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar]
  18. Kaur, D.; Singh, S.; Mansoor, W.; Kumar, Y.; Verma, S.; Dash, S.; Koul, A. Computational intelligence and metaheuristic techniques for brain tumor detection through IoMT-Enabled MRI Devices. Wirel. Commun. Mob. Comput. 2022, 2022, 1519198. [Google Scholar] [CrossRef]
  19. Yadav, A.L.; Verma, S.; Kavita. Grip on the cloud and service grid technologies some pain points that clouds and service grids address. IJECS 2013, 2, 3384–3388. [Google Scholar]
  20. Rani, P.; Verma, S.; Kaur, N.; Wozniak, M.; Shafi, J.; Ijaz, M.F. Robust and secure data transmission using artificial intelligence techniques in ad-hoc networks. Sensors 2022, 22, 251. [Google Scholar] [CrossRef]
  21. Tong, Z.; Chen, H.; Deng, X.; Li, K.; Li, K. A scheduling scheme in the cloud computing environment using deep Q-learning. Inf. Sci. 2020, 512, 1170–1191. [Google Scholar] [CrossRef]
  22. Uma, J.; Vivekanandan, P.; Shankar, S. Optimized intellectual resource scheduling using deep reinforcement Q-learning in cloud computing. Trans. Emerg. Telecommun. Technol. 2022, 33, e4463. [Google Scholar] [CrossRef]
  23. Tran, C.H.; Bui, T.K.; Pham, T.V. Virtual machine migration policy for multi-tier application in cloud computing based on Q-learning algorithm. Computing 2022, 104, 1285–1306. [Google Scholar] [CrossRef]
  24. Naeem, S.; Rahman, D.; Haider, S.M.; Mughal, A.B. Exploiting Transliterated Words for Finding Similarity in Inter-Language News Articles using Machine Learning. arXiv 2022, arXiv:2206.11860. [Google Scholar]
  25. Jianhua, L.; Zhiheng, W. A hybrid sparrow search algorithm based on constructing similarity. IEEE Access 2021, 9, 117581–117595. [Google Scholar] [CrossRef]
  26. Rafiq, A.; Ping, W.; Min, W.; Muthanna, M.S. Fog assisted 6TiSCH tri-layer network architecture for adaptive scheduling and energy-efficient offloading using rank-based Q-learning in smart industries. IEEE Sens. J. 2021, 21, 25489–25507. [Google Scholar] [CrossRef]
  27. Ghosh, G.; Verma, S.; Jhanjhi, N.Z.; Talib, M.N. Secure surveillance system using chaotic image encryption technique. IOP Conf. Ser. Mater. Sci. Eng. 2020, 993, 012062. [Google Scholar] [CrossRef]
  28. Yang, G.; Jan, M.A.; Rehman, A.U.; Babar, M.; Aimal, M.M.; Verma, S. Interoperability and Data Storage in Internet of Multimedia Things: Investigating Current Trends, Research Challenges and Future Directions. IEEE Access 2020, 8, 124382–124401. [Google Scholar] [CrossRef]
  29. Kumar, M.; Raju, K.S.; Kumar, D.; Goyal, N.; Verma, S.; Singh, A. An efficient framework using visual recognition for IoT based smart city surveillance. Multimed. Tools Appl. 2021, 80, 31277–31295. [Google Scholar] [CrossRef]
  30. Kumar, S.; Shanker, R.; Verma, S. Context Aware Dynamic Permission Model: A Retrospect of Privacy and Security in Android System. In Proceedings of the 2018 International Conference on Intelligent Circuits and Systems (ICICS), Phagwara, India, 19–20 April 2018; pp. 324–329. [Google Scholar] [CrossRef]
  31. Gupta, R.; Verma, S.; Kavita. Solving ipv4 (32 bits) address shortage problem using ipv6 (128 bits). IJREISS 2012, 2, 58–68. [Google Scholar]
  32. Rani, P.; Kavita; Verma, S.; Rawat, D.B.; Dash, S. Mitigation of black hole attacks using firefly and artificial neural network. Neural Comput. Appl. 2022, 34, 15101–15111. [Google Scholar] [CrossRef]
Figure 1. Components of proposed work.
Figure 2. Allocation process at the preliminary allocation.
Figure 3. Proposed workflow.
Figure 4. Average Energy Consumption.
Figure 5. Average CO2 emissions of proposed vs. state of the art techniques.
Figure 6. Average Cost.
Table 1. Important Notations and Explanations.
Notation: Explanation
PMs: Physical Machines
QoS: Quality of Service
CC: Cloud Computing
IaaS: Infrastructure as a Service
PaaS: Platform as a Service
SaaS: Software as a Service
MET: Minimum Execution Time
HEFT: Heterogeneous Earliest Finish Time
S-ML: Statistical Machine Learning
MIPS: Million Instructions Per Second
DC: Data center
ZL: Job list
S: Total number of clouds
R: Associated RAM
C: CPU utilization capacity
L: Location of the jth PM
D: Total number of data centers in one cloud
N: Total number of PMs under one cloud
U: Total number of users under one cloud
dx and dy: Location of the data center
ux and uy: User location
BFD: Best Fit Decreasing
MBFD: Modified Best Fit Decreasing
VM: Virtual Machine
ECPU: Energy consumed by the CPU in the physical computer
Ememory: Energy consumed by the memory in the physical server
Efixedserver: Fixed power consumed by the server
Edynamic: Dynamic energy consumed
Pdynamic: Dynamic power consumed
TCO2: Total emission of CO2
ECtotal: Total power consumed by the machines
I: Intensity of emission of greenhouse gases
M: Total number of instructions under 1 PM
Ece: Execution cost at the jth PM
S: Storage required for the mth instruction
Ecs: Total consumed energy in the storage of one instruction at the jth PM
Cee: CO2 emitted in the execution of the mth instruction set on the jth PM
Ces: CO2 emitted in the storage
EC: Energy consumption
CO2: Carbon emission
d: Distance between the user and the location of the datacenter
M: Total supplied instruction set from users
t: Current state value
t − 1: Previous state value
ECiu: Unit cost of execution of one MIPS
ECius: Unit cost of storage of the executed instruction set
CO2idle: Amount of CO2 emission released in the idle state
CO2ius: Amount of CO2 emission released in storing the executed instruction set
α: Unit cost to produce the EC amount of energy to be consumed
β: Malicious processing cost
MOS: Multi-Objective Scheduling
MOIA: Many-Objective Intelligent Algorithm
Table 2. Result illustration for variation in user base.
Number of Users | Q-Learning Energy Consumption (mJ) | Q-Learning CO2 Emission | Q-Learning Cost | Neural Energy Consumption | Neural CO2 Emission | Neural Cost
500.021068226.42986867521.5725540.02203526.5413934545.511394
600.0221797914.430218242.51450.0221797917.130218242.5145
700.0216434210.5698643551.4193520.0221346810.8097756563.935287
800.0236028111.2820382541.5833540.0236028114.283821541.583354
900.021068226.42986867521.5725540.02203526.5413934545.511394
1000.0221797914.430218242.51450.0221797917.130218242.5145
1100.021261496.1577718721.9988750.022007386.3383637747.327726
1200.021220076.65439454526.1092690.022319886.99928378553.376878
1300.021123218.23285148158.4809550.022196988.24468813166.537091
1400.021658949.4244620690.86827770.0227259.4453542495.3408492
1500.0220650711.6133645550.5806140.227184717.4150556566.884408
1600.021660655.5407717901.2465740.022017595.63207531916.097766
1700.02396551.80370788665.6217010.02396551.80370788665.621701
1800.021592852.97493584845.6139120.022041453.03674146863.181919
1900.022069867.80330226707.0914220.022128587.82406631708.972944
2000.0219623310.0295133191.4431180.0228201411.0697247198.920631
2100.022014328.5942338517.7079150.0220143211.5942338517.707915
2200.0221551215.159949171.9253460.022155120.11159949171.925346
2300.022229792.95209659624.9684450.022888033.03950965643.474075
2400.02134010.00819311291.8816890.022081910.00847791302.02796
2500.022179120.4468032351.4645320.022547490.45422409357.301957
2600.021402261.18694688219.0565450.022183251.23025979227.050143
2700.0222981.67893868183.6360690.0222981.67893868183.636069
2800.023022630.56100439357.6862640.023022630.56100439357.686264
2900.0221332371.0216162698.7511990.022420671.9437083707.823267
3000.0227010772.7395481637.1705750.0230261673.781194646.294994
3100.022118871.66328916284.4145360.022118871.66328916284.414536
3200.022040710.58939459131.3594520.022040710.58939459131.359452
3300.022173189.09875558772.712840.023004749.43998375801.691681
3400.022015540.47459599255.2407770.022015540.47459599255.240777
3500.02372440.1483026522.6710160.02372440.1483026522.671016
3600.022167497.70194038681.4160680.022647637.86876229696.175353
3700.022296213.36930451730.6447650.022742573.43675669745.271993
3800.0224985279.2656184560.800460.0229382280.8147721571.760648
3900.022496720.63418433184.6356170.022496720.63418433184.635617
4000.022840442.72712565538.1814070.023458982.80097843552.755798
4100.0225191.28845628283.5352460.0225191.28845628283.535246
4200.02182472.6149194542.4073960.0229020176.2018021569.200124
4300.0224375867.4184839725.643220.0227545468.3708582735.893879
4400.022101432.19684776247.2497020.022101432.19684776247.249702
4500.02206490.2008478674.75940370.02206490.2008478674.7594037
4600.022689737.71539389711.4689340.023142317.86928955725.660301
4700.0212359761.9374793669.1971340.0221087164.4829288696.699182
4800.022100450.47457086150.6500280.022100450.47457086150.650028
4900.021991890.42806465415.8145660.022579110.43949459426.917409
5000.022222790.00588392160.8085410.0282222790.00588392160.808541
5100.0220650711.6133645550.5806140.227184717.4150556566.884408
5200.0221551215.159949171.9253460.022155120.11159949171.925346
5300.022229792.95209659624.9684450.022888033.03950965643.474075
5400.02134010.00819311291.8816890.022081910.00847791302.02796
5500.022179120.4468032351.4645320.022547490.45422409357.301957
5600.021402261.18694688219.0565450.022183251.23025979227.050143
5700.0222981.67893868183.6360690.0222981.67893868183.636069
5800.023022630.56100439357.6862640.023022630.56100439357.686264
5900.0221332371.0216162698.7511990.022420671.9437083707.823267
6000.0227010772.7395481637.1705750.0230261673.781194646.294994
6100.022118871.66328916284.4145360.022118871.66328916284.414536
620 | 0.02204071 | 0.58939459 | 131.359452 | 0.02204071 | 0.58939459 | 131.359452
630 | 0.02217318 | 9.09875558 | 772.71284 | 0.02300474 | 9.43998375 | 801.691681
640 | 0.02201554 | 0.47459599 | 255.240777 | 0.02201554 | 0.47459599 | 255.240777
650 | 0.0237244 | 0.14830265 | 22.671016 | 0.0237244 | 0.14830265 | 22.671016
660 | 0.02216749 | 7.70194038 | 681.416068 | 0.02264763 | 7.86876229 | 696.175353
670 | 0.02229621 | 3.36930451 | 730.644765 | 0.02274257 | 3.43675669 | 745.271993
680 | 0.02249852 | 79.2656184 | 560.80046 | 0.02293822 | 80.8147721 | 571.760648
690 | 0.02249672 | 0.63418433 | 184.635617 | 0.02249672 | 0.63418433 | 184.635617
700 | 0.02284044 | 2.72712565 | 538.181407 | 0.02345898 | 2.80097843 | 552.755798
710 | 0.022519 | 1.28845628 | 283.535246 | 0.022519 | 1.28845628 | 283.535246
720 | 0.021824 | 72.6149194 | 542.407396 | 0.02290201 | 76.2018021 | 569.200124
730 | 0.02243758 | 67.4184839 | 725.64322 | 0.02275454 | 68.3708582 | 735.893879
740 | 0.02210143 | 2.19684776 | 247.249702 | 0.02210143 | 2.19684776 | 247.249702
750 | 0.0220649 | 0.20084786 | 74.7594037 | 0.0220649 | 0.20084786 | 74.7594037
760 | 0.02268973 | 7.71539389 | 711.468934 | 0.02314231 | 7.86928955 | 725.660301
770 | 0.02123597 | 61.9374793 | 669.197134 | 0.02210871 | 64.4829288 | 696.699182
780 | 0.02210045 | 0.47457086 | 150.650028 | 0.02210045 | 0.47457086 | 150.650028
790 | 0.02199189 | 0.42806465 | 415.814566 | 0.02257911 | 0.43949459 | 426.917409
800 | 0.02222279 | 0.00588392 | 160.808541 | 0.028222279 | 0.00588392 | 160.808541
810 | 0.02140226 | 1.18694688 | 219.056545 | 0.02218325 | 1.23025979 | 227.050143
820 | 0.022298 | 1.67893868 | 183.636069 | 0.022298 | 1.67893868 | 183.636069
830 | 0.02302263 | 0.56100439 | 357.686264 | 0.02302263 | 0.56100439 | 357.686264
840 | 0.02213323 | 71.0216162 | 698.751199 | 0.0224206 | 71.9437083 | 707.823267
850 | 0.02270107 | 72.7395481 | 637.170575 | 0.02302616 | 73.781194 | 646.294994
860 | 0.02211887 | 1.66328916 | 284.414536 | 0.02211887 | 1.66328916 | 284.414536
870 | 0.02204071 | 0.58939459 | 131.359452 | 0.02204071 | 0.58939459 | 131.359452
880 | 0.02217318 | 9.09875558 | 772.71284 | 0.02300474 | 9.43998375 | 801.691681
890 | 0.02201554 | 0.47459599 | 255.240777 | 0.02201554 | 0.47459599 | 255.240777
900 | 0.0237244 | 0.14830265 | 22.671016 | 0.0237244 | 0.14830265 | 22.671016
910 | 0.02216749 | 7.70194038 | 681.416068 | 0.02264763 | 7.86876229 | 696.175353
920 | 0.02229621 | 3.36930451 | 730.644765 | 0.02774257 | 3.43675669 | 745.271993
930 | 0.02249852 | 79.2656184 | 560.80046 | 0.02293822 | 80.8147721 | 571.760648
940 | 0.02249672 | 0.63418433 | 184.635617 | 0.02249672 | 0.63418433 | 184.635617
950 | 0.02284044 | 2.72712565 | 538.181407 | 0.02545898 | 2.80097843 | 552.755798
960 | 0.022519 | 1.28845628 | 283.535246 | 0.022519 | 1.28845628 | 283.535246
970 | 0.021824 | 72.6149194 | 542.407396 | 0.02290201 | 76.2018021 | 569.200124
980 | 0.02243758 | 67.4184839 | 725.64322 | 0.02275454 | 68.3708582 | 735.893879
990 | 0.02210143 | 2.19684776 | 247.249702 | 0.02210143 | 2.19684776 | 247.249702
1000 | 0.02284044 | 2.72712565 | 538.181407 | 0.02745898 | 2.80097843 | 552.755798
Table 3. Energy consumption of proposed and state-of-the-art techniques in mJ.

Total Work Load in MIPS | EC Proposed | EC MET | EC MOS | EC MOIA
10,000 | 0.08643221 | 0.11053343 | 0.09705079 | 0.08644106
20,000 | 0.14009598 | 0.17507796 | 0.16331992 | 0.18034036
30,000 | 0.21644788 | 0.21808101 | 0.24335856 | 0.26394136
40,000 | 0.27240836 | 0.348205 | 0.27660185 | 0.32084257
50,000 | 0.34129012 | 0.37545341 | 0.36479844 | 0.43713668
60,000 | 0.41864937 | 0.51451122 | 0.52244715 | 0.49067343
70,000 | 0.48390781 | 0.57315099 | 0.62780087 | 0.51347926
80,000 | 0.54976218 | 0.68848141 | 0.60823041 | 0.62069295
90,000 | 0.61699444 | 0.6681455 | 0.7321855 | 0.72589921
100,000 | 0.68230112 | 0.75269047 | 0.70917145 | 0.73564827
Table 4. CO2 emissions of proposed vs. state-of-the-art techniques.

Total Workload in MIPS | CO2 Emission Proposed | CO2 Emission MET | CO2 Emission MOS | CO2 Emission MOIA
10,000 | 15.4968466 | 16.4625975 | 16.5163079 | 17.0115375
20,000 | 30.2379999 | 33.4719869 | 35.1946301 | 33.3828888
30,000 | 43.125496 | 47.8773122 | 52.7623347 | 54.6693355
40,000 | 53.9849289 | 67.5633117 | 56.6986584 | 62.1183083
50,000 | 69.3994218 | 89.4304641 | 78.5179474 | 88.9770678
60,000 | 83.1703267 | 91.3802658 | 88.7459483 | 90.9647089
70,000 | 96.6421697 | 116.23719 | 103.859875 | 110.436456
80,000 | 108.955624 | 131.863262 | 133.2245 | 133.728346
90,000 | 123.853874 | 127.045688 | 142.450492 | 143.234144
100,000 | 135.115958 | 169.327299 | 143.069083 | 147.432519
Table 5. Overall cost of proposed vs. state-of-the-art techniques.

Total Workload in MIPS | Total Cost Proposed | Total Cost MET | Total Cost MOS | Total Cost MOIA
10,000 | 21.9474096 | 26.8693378 | 26.8757594 | 25.5246142
20,000 | 46.6526045 | 57.8019894 | 57.0917066 | 48.4095822
30,000 | 65.5526617 | 68.1012897 | 69.9788013 | 72.4359649
40,000 | 91.729042 | 101.509389 | 93.02456 | 97.6087062
50,000 | 106.704006 | 126.240668 | 132.347404 | 114.162025
60,000 | 131.74095 | 164.541054 | 143.220692 | 147.65082
70,000 | 151.887415 | 179.201553 | 188.364215 | 156.675002
80,000 | 173.157091 | 195.458122 | 222.836448 | 202.921307
90,000 | 192.577062 | 244.842466 | 243.67097 | 217.938865
100,000 | 216.426419 | 238.359854 | 268.270447 | 280.541938
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Kaur, R.; Anand, D.; Kaur, U.; Verma, S.; Kavita; Park, S.-W.; Hosen, A.S.M.S.; Ra, I.-H. An Advanced Job Scheduling Algorithmic Architecture to Reduce Energy Consumption and CO2 Emissions in Multi-Cloud. Electronics 2023, 12, 1810. https://doi.org/10.3390/electronics12081810