**2. Background Survey**

A variety of methods exist for the development of cloud services and smart meters; this paper discusses a few that apply to our problem statement. In [3], smart meter analytics were examined from a software performance perspective, and a performance benchmark covering typical smart meter analytics tasks was designed. The system uses both an offline model for feature extraction and online anomaly detection. Because privacy issues restrict access to real data, an algorithm is presented to generate large realistic datasets from a small set of real measurements.

In [4], a thorough analysis of 100 anonymized five-minute commercial building meter datasets was used to explore time series of electricity consumption with a simple forecast model. This method improves energy management by supporting grid control and provides a model for detecting anomalies.

In [5], a novel smart metering system was developed as a graphical user interface (GUI)-based non-technical loss (NTL) detection platform. A three-tier detection algorithm is proposed that combines three complementary mechanisms for enhanced performance. The triangulation technique validates detection results by cross-verifying the three sources of measurement data. Furthermore, the system supports built-in or externally developed AI methods and offers a user-friendly GUI-based platform to monitor and analyze the NTL status of the power grid in real time for revenue recovery.

In [6], a smart metering technique with different technologies for capturing data from smart meters is presented. In [7], user privacy is maintained using tools from information theory and a hidden Markov model for the measurements; the work further addresses the trade-off between privacy and utility. In [8], open-source tools are used for smart meter data measurement and storage, and an analytics ecosystem based on a publicly available test dataset is studied.

Reference [9] presented an efficient tabu search algorithm to identify the locations of data and software components in data clouds. The main goal of this approach is to minimize operational cost and emissions, modelled using mixed-integer programming; the proposed model is solved with the tabu search. Virtual data center embedding (VDCE) across distributed infrastructures (DI) was introduced to make the infrastructure user-friendly and increase provider revenue. Reference [10] analyzes different VM-based cloud environments such as Eucalyptus, OpenNebula, and Nimbus; all of these platforms have been evaluated with High-Level Petri Nets (HLPN).

In [11], a formal analysis, modelling, and verification of three open-source state-of-the-art VM-based cloud platforms—Eucalyptus, OpenNebula, and Nimbus—is provided. HLPN are used to model and analyze the structural and behavioral properties of the systems. Moreover, to verify the models, the authors use the Satisfiability Modulo Theories Library (SMT-Lib) and the Z3 solver.

In that work, about 100 VMs were modelled to verify the correctness and feasibility of the proposed models. The results reveal that the models function correctly; moreover, increasing the number of VMs does not affect their operation, which indicates their practicability in a highly scalable and flexible environment.

Reference [12] analyzes the robustness of advanced data center networks (DCNs). First, the authors present multi-layered graph models of various DCNs and study traditional robustness metrics under different failure scenarios to perform a comparative analysis. They then describe the inadequacy of conventional network robustness metrics for evaluating DCN robustness and propose new procedures to quantify it, noting that no detailed study centering on DCN robustness was previously available.

Reference [13] designs a protocol for secure, robust, cheating-resistant, and efficient outsourcing of matrix inversion computation (MIC) to a malicious cloud. The main idea is to protect privacy by applying transformations to the original matrix to get an encrypted matrix, which is sent to the cloud, and then transforming the result returned from the cloud to get the correct inversion of the original matrix. A randomized Monte Carlo verification algorithm with one-sided error then handles result verification; the paper demonstrates the value of this technique for designing inexpensive result-verification algorithms for secure outsourcing. The authors analytically show that the proposed protocol simultaneously fulfils the goals of correctness, security, robust cheating-resistance, and high efficiency, and extensive theoretical analysis and experimental evaluation confirm its efficiency and immediate practicability.
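The one-sided-error Monte Carlo verification step can be illustrated with a Freivalds-style check: instead of fully multiplying matrices to confirm that a returned candidate inverse B actually inverts A, the client multiplies both sides by cheap random vectors. The plain-Python sketch below illustrates that idea only (the function names are ours, and the actual protocol in [13] additionally encrypts the matrix before outsourcing, which is omitted here):

```python
import random

def mat_vec(M, v):
    """Multiply a square matrix (list of rows) by a vector."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def verify_inverse(A, B, rounds=30, tol=1e-9):
    """Freivalds-style randomized check that B is the inverse of A.

    One-sided error: a correct inverse always passes; an incorrect one
    slips through a single round with probability at most 1/2, so the
    chance of a wrong "True" after `rounds` rounds is at most 2**-rounds.
    Each round costs only two matrix-vector products, O(n^2), instead of
    an O(n^3) full matrix multiplication.
    """
    n = len(A)
    for _ in range(rounds):
        r = [float(random.randint(0, 1)) for _ in range(n)]
        lhs = mat_vec(A, mat_vec(B, r))   # A (B r) should equal I r = r
        if any(abs(lhs[i] - r[i]) > tol for i in range(n)):
            return False                  # caught a discrepancy: B is wrong
    return True
```

For A = [[2, 0], [1, 1]], the check accepts the true inverse [[0.5, 0], [-0.5, 1]] every time and rejects a wrong candidate with overwhelming probability.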

In [14], Cloud Capacity Manager (CCM), a prototype system and its methods for dynamically multiplexing the computing capacity of virtualized data centers at scales of thousands of machines and for diverse workloads with variable demands, is presented. Extending prior studies primarily concerned with accurate capacity allocation and acceptable application performance, CCM also sheds light on the trade-offs due to two unavoidable issues in large-scale commodity data centers: (i) maintaining low operational overhead given the variable cost of the management operations necessary to allocate resources, and (ii) coping with the increased incidence of failures of these operations.

Reference [15] adopts the intuitive idea of High-QoS First-Replication (HQFR) to perform QoS-aware data replication (QADR). However, this greedy algorithm cannot minimize the data replication cost and the number of QoS-violated data replicas. To achieve these two minimization objectives, a second algorithm transforms the QADR problem into the well-known minimum-cost maximum-flow (MCMF) problem; applying an existing MCMF algorithm then produces the optimal solution to the QADR problem in polynomial time, although it takes more computational time than the first algorithm.
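The QADR-to-MCMF transformation can be pictured on a toy instance: a source feeds one unit of flow per data block, blocks connect only to QoS-qualified storage nodes with replication costs on the edges, and nodes connect to a sink with their storage capacities. The sketch below is a minimal successive-shortest-path MCMF solver in Python; the node layout and costs are illustrative, not taken from [15]:

```python
from collections import deque

class MinCostMaxFlow:
    """Successive-shortest-path min-cost max-flow (SPFA shortest paths)."""

    def __init__(self, n):
        self.n = n
        self.adj = [[] for _ in range(n)]  # edge = [to, capacity, cost, rev_index]

    def add_edge(self, u, v, cap, cost):
        self.adj[u].append([v, cap, cost, len(self.adj[v])])
        self.adj[v].append([u, 0, -cost, len(self.adj[u]) - 1])

    def solve(self, s, t):
        flow = cost = 0
        while True:
            dist = [float("inf")] * self.n
            prev = [None] * self.n          # (node, edge index) on cheapest path
            inq = [False] * self.n
            dist[s] = 0
            q = deque([s])
            inq[s] = True
            while q:                        # Bellman-Ford with a queue (SPFA)
                u = q.popleft()
                inq[u] = False
                for i, (v, cap, c, _) in enumerate(self.adj[u]):
                    if cap > 0 and dist[u] + c < dist[v]:
                        dist[v] = dist[u] + c
                        prev[v] = (u, i)
                        if not inq[v]:
                            q.append(v)
                            inq[v] = True
            if prev[t] is None:             # no augmenting path remains
                return flow, cost
            f, v = float("inf"), t          # bottleneck along cheapest path
            while v != s:
                u, i = prev[v]
                f = min(f, self.adj[u][i][1])
                v = u
            v = t                           # push flow, update residual edges
            while v != s:
                u, i = prev[v]
                self.adj[u][i][1] -= f
                self.adj[v][self.adj[u][i][3]][1] += f
                v = u
            flow += f
            cost += f * dist[t]

# Toy QADR instance: blocks 1-2 each need one replica on a qualified node.
g = MinCostMaxFlow(6)
s, t = 0, 5
g.add_edge(s, 1, 1, 0)   # block 1 requires one replica
g.add_edge(s, 2, 1, 0)   # block 2 requires one replica
g.add_edge(1, 3, 1, 2)   # block 1 may replicate to node 3 at cost 2
g.add_edge(1, 4, 1, 5)   # ... or to node 4 at cost 5
g.add_edge(2, 4, 1, 1)   # block 2 is only QoS-qualified for node 4
g.add_edge(3, t, 1, 0)   # node storage capacities
g.add_edge(4, t, 1, 0)
print(g.solve(s, t))     # → (2, 3): both replicas placed, total cost 3
```

The optimum places block 2 on node 4 (its only qualified node) and block 1 on node 3, which a greedy HQFR-style pass that sends block 1 to node 4 first would miss.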

Reference [16] utilizes Voronoi partitions to determine the data center to which requests should be routed, based on the relative priorities of the cloud operator. In [17], the ability to forecast electricity demand, respond to peak-load events, and improve the sustainable use of energy by consumers is made possible by energy informatics. Information and software-system techniques for a smarter power grid include pattern mining and machine learning over complex events and integrated semantic information, distributed stream processing for low-latency response, cloud platforms for scalable operations, and privacy policies to mitigate information leakage in an information-rich environment. Reference [18] proposes a new prototype system in which cloud computing is combined with a Trusted Platform Support Service (TSS) based on a Trusted Platform Module; this design achieves better authentication, role-based access, and data protection in a cloud-computing environment.

Reference [19] presents a pruning algorithm in which a threshold parameter qualitatively controls the trade-off between computation time and solution accuracy. The algorithm is iterative with decoupled state values in each iteration, and the paper parallelizes the state estimations to reduce overall computation time. Examples illustrate that the pruning algorithm reduces computation time significantly without losing much precision in the game solutions, and that parallelization reduces the computation time further.

An interdisciplinary MIT study [20] focused on integrating and evaluating existing knowledge rather than performing original research and analysis. Whereas its predecessors focused on the implications of national policies limiting carbon emissions, this study makes no assumptions about future carbon-policy initiatives and instead mainly considers the impact of a set of ongoing trends and existing policies. Reference [21] identifies and reviews several low-cost technology products that enable various load-control functions, and builds an innovative prepaid power meter that can direct cash exchanges through remote intervention, the goal being to enable the customer to recharge his account from home. The user interface comprises an LCD that shows the power used and the amount of the bill to be paid, and sounds an alert over GSM when the balance falls below a specified amount. Prepaid meters are already on the market and widely used in several African and European nations; additionally, this helps service organizations keep track of power theft.

The review concludes that interval metering is not necessary to carry out load-control functions: available technology can remotely switch loads without requiring a connection to a meter. While one-way communication is essential for remote load switching, two-way communication is not. Metering, in some form, is required to settle the financial transactions associated with load-control programs.

Reference [22] examined uncertainty in demand response (DR) baseline models and variability in automated responses to dynamic pricing. It defined several DR parameters that characterize changes in electricity use on DR days and then presented a method for computing the error associated with DR parameter estimates. In addition to analyzing the magnitude of DR parameter errors, the authors develop a metric to determine how much observed DR parameter variability is attributable to real event-to-event variability versus baseline-model error alone. Using data from 38 commercial and industrial (C&I) facilities that participated in an automated DR program in California, they found that DR parameter errors are significant. For most facilities, observed DR parameter variability is more likely explained by baseline-model errors than by real DR parameter variability; however, several facilities do exhibit real variability.

Reference [23] proposes a mathematical model for the dynamic evolution of supply, demand, and clearing prices under a class of real-time pricing mechanisms that pass the real-time wholesale prices on to the end consumers. The effects such mechanisms could have on the stability and efficiency of the entire system are investigated, and several stability criteria are discussed. It is shown that relaying real-time wholesale electricity prices to end consumers creates a closed-loop feedback system that can be unstable or lack robustness, leading to extreme price volatility. Finally, a result is presented that characterizes the efficiency losses incurred when, to achieve stability, the wholesale prices are adjusted by a static pricing function before being passed on to the retail consumers.

Fault-current coefficient and time-delay assignment for a microgrid protection system with a central protection unit was discussed in [24], which utilizes relays at distributed generators to provide more protection between generators. This approach performs critical parameter assignments such as the fault-current coefficient and the relay hierarchy, and it overcomes the difficulty of manually configuring distributed generation (DG) and protection units.

Reference [25] proposed an approach for distribution network reinforcement using a time-segmentation algorithm, which reduces the computation overhead. Discrete particle swarm optimization is used to handle the nonlinear, discrete optimization problem.

Reference [26] focused on load models of appliances. Loads for different appliances are generated using maintained profiles and validated against actual distribution circuits. Demand-sensitive load models are then used to reduce consumption at different consumer ports; the work improves such load models at the appliance level and incorporates traditional controllable loads, i.e., space cooling/space heating, water heaters, clothes dryers, and electric vehicles. The appliance-level load models are validated by comparing their output with the actual power-consumption data for the corresponding appliance. The appliance-level models are then combined to create load profiles for a distribution circuit, which are validated against the load profiles of a real distribution circuit. The DR-sensitive load models are used to examine changes in power consumption at both the household and distribution levels, given a set of customer behaviors and additional actions by a utility.

In [27], an approach for real-time voltage-stability margin control via reactive power reserve sensitivities is proposed. Specifically, a man-in-the-loop control method is used to boost reactive power reserves (RPRs) while maintaining a minimum voltage-stability margin and bus-voltage limits. The objective is to determine the most effective control actions to reestablish critical RPRs across the system. First, the concept of reactive-power-reserve sensitivity with respect to control actions is introduced. A control approach based on convex quadratic optimization is then used to find the minimal control effort necessary to raise RPRs above their pre-specified (offline) levels, with voltage-stability-margin constraints incorporated through a linear approximation of critical RPRs.

Reference [28] discussed an evaluation method for renewable DG in distribution networks; this approach allocates DGs to maximize the benefit of the connections to the local distribution company and its customers. Reference [29] presents the algorithms and associated analysis, but guidelines, rules, and implementation considerations are also discussed, especially for the more complicated situations where mathematical analysis is difficult. In general, it is challenging to codify and taxonomize scheduling knowledge because there are many performance metrics, task characteristics, and system configurations; adding to the complexity, different algorithms are designed for different combinations of these considerations. In spite of recent advances, there are still gaps in the solution space and a need to integrate the available solutions.

The security-aware scheduling strategy for real-time applications on clusters (SAREC) [30] integrates security requirements into scheduling for real-time applications by employing a security overhead model.

Scheduling real-time data-intensive applications (SARDIG) [31] is a security-attentive dynamic real-time scheduling algorithm and grid architecture for securing real-time data-intensive applications. It proposes a grid architecture that describes the scheduling framework for such applications. The authors also introduce a mathematical model for securing real-time data-intensive applications and a security gain function to quantitatively measure the security enhancement for applications running in the grid sites, and they prove that the SARDIG algorithm always provides optimal security for real-time data-intensive applications.

From all of the above approaches, it can be concluded that service selection becomes a problem when there is a considerable number of customers from different locations; hence, a new location-based service selection approach is proposed in this paper. Section 3 discusses the proposed location-based service selection approach, while Section 4 discusses the results achieved by the proposed method.

#### **3. User-Aware Power Regulatory System with a Location-Based Service Selection Scheme**

The proposed user-aware power regulatory scheme receives updates about users' power usage through smart meters, which upload the usage details to the cloud service. Based on these details, the power regulatory model provides inputs to the users to control the power usage of the connection. Selecting a location-based service to provide additional comprehensive and consistent services must be approached tactically so that the time complexity is reduced. The method audits the identity of the user through an auditing service and, once the user has cleared the audit, performs service selection. The entire process is split into several stages, namely Location-Based Service Selection, Hash Function for Key Generation, Signature Verification, Trust Management, and Profile-Based Access Restriction; each functional stage is explained in detail in this section. Figure 1 presents the architecture of the proposed user-aware power regulatory model with a location-based service selection framework and shows the various stages of the proposed model.

**Figure 1.** The architecture of the LBSS Model.
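As a rough illustration of how the audit-then-select flow above might fit together, the Python sketch below chains signature verification (an HMAC over the user identity, standing in for the hash-based key generation and signature stages) with nearest-service selection; all service names, coordinates, and function signatures are hypothetical, not taken from the implemented system.

```python
import hashlib
import hmac
import math

# Hypothetical registry of cloud service instances and their coordinates.
SERVICES = {
    "svc-north": (13.08, 80.27),
    "svc-south": (8.52, 76.94),
}

def audit_user(user_id: str, signature: str, secret: bytes) -> bool:
    """Signature-verification stage: recompute an HMAC over the user
    identity and compare it with the signature the user presented."""
    expected = hmac.new(secret, user_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def select_service(user_location, services=SERVICES):
    """Location-based selection stage: choose the registered service
    closest to the user (plain Euclidean distance, for illustration)."""
    return min(services, key=lambda name: math.dist(user_location, services[name]))

def handle_request(user_id, signature, user_location, secret):
    """Audit first; only users who clear the audit reach service selection."""
    if not audit_user(user_id, signature, secret):
        return None  # access restricted: identity could not be verified
    return select_service(user_location)
```

A request with a valid signature from a user near Chennai-like coordinates would be routed to "svc-north", while a forged signature is rejected before any service is selected.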

### *3.1. User-Aware Power Regulatory Model*

The smart meter accesses the cloud service intermittently to send power usage details: the previous month's usage and cost, the monthly average units, the number of units consumed at peak times, the number of units consumed during off-peak times, and their respective offered prices. Additionally, the shutdown date, registered complaints, and aggregate unit counts are included, and user notifications are shown on the smart meter display. The power regulatory model controls the flow of electric supply to the user connection; at regular intervals, it produces electricity bills and regulates the power supply. Whenever the payment for the electricity supply has not been made, the substation sends a control message to the smart meter to disconnect the supply. Once the user pays the bill, the substation sends a turn-on message to the meter, which restores the power. The power regulatory model uses the ZigBee protocol for communication between the smart meter and the substation.

**Algorithm 1:** User-Aware Power Regulatory System

Input: User usage details. Output: Power to be regulated.

```
Start
  Initialize the value of time slot Ti; Tv = value of time slot Ti
  For each time window Ti
    Read current usage units Cu
    Compute number of units at peak time NPTu
    Compute number of units at off-peak time NOPTu
    Compute cost of power usage CPU
    Compute previous month usage Pmu
    Compute previous month cost Pmc
    Compute number of units in the previous month Npm
    Compute number of units at peak time Nupt
    Compute number of units at off-peak time Nuopt
    Identify registered complaints Compl
    Compute shutdown date Sd
    Access cloud service Cs
    Cloud-Update-Service(Cu, NPTu, NOPTu, CPU, Pmu, Pmc, Npm, Nupt, Nuopt, Compl)
    Display details on the smart meter
  End
  If the payment date is alive
    Continue the power supply
  Else
    Access the Payment-Detail Service
    If payment received then
      Reset the payment date
    Else
      Disconnect the power supply
    End
  End
Stop
```

Algorithm 1 above generates user awareness. It accesses the cloud service to update it, so the smart meter can also show the most recently registered energy-consumption values to the user. The model controls the power supply by checking the payment status through the cloud service; based on the state of the payment, the power status controls the electric supply through the smart meter.
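The per-window loop of Algorithm 1 can be expressed as a minimal Python sketch (the class and method names are ours, not part of the paper's implementation; the real system communicates over ZigBee and a cloud API that are not modelled here):

```python
from dataclasses import dataclass, field

@dataclass
class UsageSnapshot:
    """Per-window readings the meter pushes to the cloud; the fields
    mirror a subset of the quantities named in Algorithm 1."""
    current_units: float   # Cu
    peak_units: float      # NPTu
    off_peak_units: float  # NOPTu
    cost: float            # CPU

@dataclass
class CloudService:
    """Stand-in for the cloud update and payment-detail services."""
    history: list = field(default_factory=list)
    payment_ok: bool = True

    def update(self, snapshot):
        self.history.append(snapshot)  # Cloud-Update-Service

    def payment_current(self):
        return self.payment_ok         # Payment-Detail Service

def regulate(snapshot, cloud):
    """One time window of Algorithm 1: push the readings to the cloud,
    then gate the supply on payment status. Returns True while the
    supply stays connected; False means the substation would send the
    ZigBee disconnect message to the meter."""
    cloud.update(snapshot)
    return cloud.payment_current()
```

Each call records the window's readings regardless of payment state, so the meter's display stays current even while the supply is disconnected.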
