Review

Distributed Learning Applications in Power Systems: A Review of Methods, Gaps, and Challenges

by Nastaran Gholizadeh 1,* and Petr Musilek 1,2
1 Electrical and Computer Engineering, University of Alberta, Edmonton, AB T6G 1H9, Canada
2 Applied Cybernetics, University of Hradec Králové, 500 03 Hradec Králové, Czech Republic
* Author to whom correspondence should be addressed.
Energies 2021, 14(12), 3654; https://doi.org/10.3390/en14123654
Submission received: 12 May 2021 / Revised: 12 June 2021 / Accepted: 17 June 2021 / Published: 19 June 2021
(This article belongs to the Topic Power System Modeling and Control)

Abstract:
In recent years, machine learning methods have found numerous applications in power systems for load forecasting, voltage control, power quality monitoring, anomaly detection, etc. Distributed learning is a subfield of machine learning and a descendant of the multi-agent systems field. It is a decentralized, collaborative machine learning approach designed to handle large data sizes, solve complex learning problems, and increase privacy. Moreover, it can reduce the risk of a single point of failure compared to fully centralized approaches and lower the bandwidth and central storage requirements. This paper introduces three existing distributed learning frameworks and reviews the applications that have been proposed for them in power systems so far. It summarizes the methods, benefits, and challenges of distributed learning frameworks in power systems and identifies the gaps in the literature for future studies.

1. Introduction

Large-scale integration of renewable energy resources, electric vehicles, demand-side management techniques, and dynamic electricity tariffs has dramatically increased power system complexity [1]. Moreover, the proliferation of advanced metering infrastructure (AMI) and digital assets has led to the production of significant amounts of data in power systems [2]. Hence, new data and energy management techniques are required to efficiently handle this complexity and volume of data. Machine learning approaches are finding extensive applications in power systems in areas such as power quality disturbance classification, transient stability assessment, voltage stability assessment, and SCADA network vulnerabilities [3]. The main advantage of machine learning-based methods is their ability to handle high volumes of data and their easy implementation without requiring specific system models and parameters [4]. Therefore, they can be a promising solution for fast and reliable power system operation.
Nowadays, the public is paying more attention to data security than ever [5]. Therefore, preventing data leakage is a paramount priority in data management. Various methodologies have been used in the literature to protect power system data privacy. Keshk et al. [6] proposed a two-level data privacy framework in which the first level used an enhanced proof-of-work technique based on blockchain to authenticate data records and prevent cyberattacks that alter data. The second level used a variational autoencoder to convert the data to an encoded format to prevent inference attacks. To secure information such as local energy consumption, power generation, and cost function parameters in the optimal power flow problem, a privacy-preserving alternating direction method of multipliers (ADMM) framework was proposed by Liu et al. [7]. An energy management method was developed for smart homes by Yang et al. [8], which used rechargeable batteries to hide the energy consumption patterns of electricity customers. Alsharif et al. [9] adapted the multidimension and multisubset (MDMS) scheme for data collection from AMIs, where bill computation was delegated to the AMI network's gateway for better scalability and privacy.
Other privacy-related studies include the use of masking approaches for data aggregation [10], consortium blockchain framework for electric vehicles’ power trading data [11], inner product encryption (IPE) for data sharing [12], differential privacy (DP) technique for load data privacy [13], generative adversarial networks (GANs) for power generation data [14], decomposition algorithm-based decentralized transactive control for peak demand reduction and preserving data privacy [15], and Benders decomposition method for integrated power and natural gas distribution in networked energy hubs while maintaining data security [16].
Distributed learning is a subfield of machine learning used mainly to address the data island, privacy, bandwidth, and data storage problems [17]. In this method, multiple clients perform the local learning process using edge devices and then send the learned model to a central server. The server performs model updates and sends the final model back to the clients. This scheme eliminates the need for high volumes of data exchange and central data storage, and allows data privacy to be preserved [18]. Distributed learning is often used interchangeably with federated learning, and it has wide applications in the areas of communications [19], mobile Internet technology [17], natural disaster analysis [20], heavy haul railway control [21], and others.
Considering the distributed nature of the power system components (smart meters, generators, etc.), distributed learning can be a promising solution for a wide variety of challenges existent in power systems. To explore power system applications that have benefited from distributed learning architectures, this paper provides a literature review of research integrating distributed learning schemes in power systems. In addition, it identifies the benefits, challenges, and knowledge gaps in this area. Although some of the previous review studies explored the potential of distributed learning algorithms for energy trading [22], machine learning methods for power system security and stability [23], deep learning applications in power systems [24,25], and intersection of deep learning and edge computing [26], no previous study has focused on the applications of distributed learning frameworks in power systems. The goal of this paper is to present a systematic overview of this area to attract the attention of power system researchers to the distributed learning framework as a highly promising research topic that has been rapidly expanding in other research areas such as communications, healthcare, and many others.
The remainder of this paper is organized as follows. Section 2 describes the review methodology. Section 3 defines the distributed learning mechanism and introduces three major variants of distributed learning. Section 4 discusses distributed learning applications that have been implemented in power systems so far. Gaps and challenges associated with distributed learning in power systems are discussed in Section 5. Finally, concluding remarks are drawn in Section 6.

2. Review Methodology

This section describes the review question that this paper aims to answer, the methodology used to search for references, and the criteria for including/excluding individual studies identified in the initial search.
The main question posed by this review paper is: How have distributed, federated, and assisted learning been implemented in power systems? This central question raised several other lines of inquiry:
  • Are there any publications that have implemented various forms of distributed learning in power systems?
  • What applications can distributed learning frameworks have in power systems?
  • What are the main benefits of distributed learning for power systems?
  • What kind of data is exchanged in distributed learning methods in power systems?
  • What are some possible research areas for implementing distributed learning in power systems?
Four main criteria were considered to include an article in this review.
1. The article should have a learning-based structure. This could include any type of learning algorithm where the aim is to construct a mathematical representation for an unknown model.
2. The article should focus on solving a power system-related problem.
3. It should use a distributed structure where there is data exchange between multiple agents or between agents and a central server.
4. Only research articles that have tested their algorithms on a case study and have presented the results should be included.
Articles that used centralized machine learning methods in power systems were not considered in the review. If a research paper used a distributed structure but did not include any type of learning method, it was likewise excluded unless it provided some other useful information. Distributed, federated, and assisted learning methods applied to fields other than power systems were excluded as well. The search for articles was performed in the following databases.
  • Multidisciplinary databases:
    –  MDPI;
    –  Elsevier;
    –  Springer;
    –  arXiv; and
    –  Wiley Online Library.
  • Specific databases:
    –  ACM Digital Library; and
    –  IEEE Xplore Library.
In addition, search results from Google Scholar and ResearchGate were used during the review process. The search was conducted using combinations (the Cartesian product) of the following keyword sets.
  • Machine learning keywords: [learning, distributed learning, federated learning, assisted learning, ADMM, dual decomposition, primal decomposition, consensus gradient, and privacy.]
  • Power system keywords: [power system, voltage control, resiliency, renewable energy, energy, energy management, electric vehicle, and agent.]
Multiple searches were performed between February 2021 and April 2021. Therefore, publications published or made available after April 2021 were not included in this review. During the screening process, papers were accepted/rejected based on their title, abstract, and keywords. In cases where the title, abstract, and keywords were not sufficiently specific about the content, the paper was judged based on the full text.

3. Distributed Learning Overview

The basic idea behind distributed learning is to perform the learning process in edge devices, communicate the learned model to the central server, and then receive model updates from the server, as illustrated by Figure 1. In this method, the privacy of the data is maintained since no raw data are exchanged between the edge devices and the central server. Moreover, the bandwidth and central data storage requirements of this method are significantly lowered [27] due to the reduced amount of data exchange.
Consider a graph $G(J, \xi)$ with $M$ nodes, where $J = \{1, 2, \ldots, M\}$ represents the nodes of the graph and $\xi$ denotes the set of edges between the nodes. In this context, edges are considered as the communication links for data transfer between nodes. Assume that the local training set of each node $j$ is $S_j = \{(x_j^n, y_j^n)\}_{n=1}^{|S_j|}$, where $x$ denotes the training features and $y$ the labels. In distributed learning, it is assumed that the training samples are independent and identically distributed (i.i.d.) according to an unknown distribution $D$. Moreover, the sizes of the training sets are assumed to be equal.
In traditional machine learning, the datasets are collected in one central unit and centralized training is performed. A popular representation of this kind of learning is $y = w^T x + b$, where $w$ is the vector of training weights and $b$ is the bias of the network representing the model. To find the best value of $w$, a loss function needs to be minimized. This can be written as

$$ \text{Minimize } F^s(w, (x, y)) = L(w, (x, y)) + R(w), \tag{1} $$

where $L(w,(x,y))$ is the loss function and $R(w)$ is a regularizer that prevents overfitting. The function $F^s(w,(x,y))$ is called the statistical risk [28]. The new weight values in iteration $k+1$ are calculated by

$$ w_{k+1} = w_k - \gamma \nabla F^s(w, (x, y)), \tag{2} $$

where $\gamma$ is the learning rate.
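As a concrete sketch of this centralized procedure, the following example runs gradient descent on a squared loss with an L2 regularizer for the linear model above. The dataset, step size, and function names are illustrative assumptions, not part of the reviewed paper.

```python
import numpy as np

def centralized_gd(X, y, lr=0.05, reg=0.0, epochs=2000):
    """Gradient descent on the regularized loss L(w,(x,y)) + R(w)
    for the linear model y = w^T x + b (squared loss, L2 regularizer)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        err = X @ w + b - y                     # model error on the full dataset
        grad_w = X.T @ err / len(y) + reg * w   # gradient of loss + regularizer
        grad_b = err.mean()
        w -= lr * grad_w                        # w_{k+1} = w_k - gamma * gradient
        b -= lr * grad_b
    return w, b

# Fit y = 2x + 1 on a tiny, centrally collected dataset
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = 2.0 * X[:, 0] + 1.0
w, b = centralized_gd(X, y)
```

Because all data reside in one place, a single unit evaluates the full gradient each iteration; the distributed variants below remove exactly this assumption.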
In distributed learning, the distribution $D$ is not known. Therefore, it is not possible to calculate $w$ directly by minimizing the statistical risk. One way is to minimize the empirical risk instead of the statistical risk [29], as follows:

$$ \text{Minimize } F_j^e(w, (x, y)) = \frac{1}{N} \sum_{n=1}^{N} L(w, (x_{n,j}, y_{n,j})) + R(w_j), \tag{3} $$

where $N = |S_j|$ is the size of the training set of each node. It can be proven that minimization of the empirical risk converges to the optimal solution with high probability at a rate of $O(1/\sqrt{MN})$ [30,31].
The main goal is to solve the following optimization problem:

$$ \text{Minimize } F^e(w, (x, y)) = \sum_{j=1}^{M} F_j^e(w, (x, y)). \tag{4} $$

To update the weights, two methods can be used. In the first method, the gradient is updated in a decentralized manner:

$$ w_{k+1} = w_k - \gamma \sum_{j=1}^{M} \nabla F_j^e(w, (x, y)). \tag{5} $$

In this method, the nodes either broadcast their gradients and perform the gradient update locally, or they send their gradients to the central server, which performs the update [32]. In the second method, the nodes cooperate to solve (4) as a single optimization problem. For this purpose, various methods such as ADMM [33,34], dual decomposition [35,36], and distributed consensus gradient [37,38] have been used.
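The first, gradient-exchange method can be sketched as follows: each node computes the gradient of its local empirical risk, and the summed gradients drive one shared update. The two-node dataset, squared loss, and step size are illustrative assumptions.

```python
import numpy as np

def local_gradient(w, X, y):
    """Gradient of one node's local empirical risk (squared loss, no regularizer)."""
    return X.T @ (X @ w - y) / len(y)

def decentralized_gd(node_data, lr=0.02, rounds=2000):
    """Each node computes only its local gradient; only gradients are exchanged,
    and the shared weights take one step along their sum."""
    w = np.zeros(node_data[0][0].shape[1])
    for _ in range(rounds):
        total = sum(local_gradient(w, X, y) for X, y in node_data)
        w = w - lr * total      # shared step along the summed local gradients
    return w

# Two nodes, each holding half of a dataset generated from y = 2x
node_data = [
    (np.array([[1.0], [2.0]]), np.array([2.0, 4.0])),
    (np.array([[3.0], [4.0]]), np.array([6.0, 8.0])),
]
w = decentralized_gd(node_data)
```

No raw samples cross node boundaries here; only gradient vectors are communicated, which is the privacy and bandwidth advantage discussed above.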
Distributed learning is a general term. Federated learning, which was first introduced by Google [39] in 2017, is a variant of distributed learning [40]. In federated learning, the central server coordinates multiple clients to perform a single learning task in parallel [41]. However, the usual approach in distributed learning is that the central server partitions the training data between multiple clients and then entrusts each client with a separate learning subtask which is a part of the main learning task. Moreover, clients in distributed learning can communicate with each other, in contrast to federated learning. Another key difference is that clients in federated learning are also data generation nodes (smartphones, sensors, etc.) while the clients in distributed learning are only processing units [42]. Figure 2 shows the federated learning scheme.
Currently, federated averaging (FedAvg) is the best-known global update method in federated learning. In this approach, each agent takes one step of local gradient update and then sends the obtained weights to the central server. The server then takes the weighted average of the received weights to compute new weights for each agent and sends them back to the agents. The FedAvg update rule can be written as

$$ w_{k+1} = w_k - \gamma \sum_{c=1}^{C} \frac{n_c}{n} \nabla F_c(w, (x, y)), \tag{6} $$

where $n$ is the total number of data points used for training the global model, $n_c$ is the number of data points used for training by a single agent $c$, and $\nabla F_c(w,(x,y))$ is the average gradient on the agent's local data. This update weights each agent's contribution to the global model by the share of the data it holds.
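A compact sketch of this rule is given below: each client takes one local gradient step from the current global weights, and the server averages the returned weights by data share. The squared loss, client datasets, and function names are illustrative assumptions.

```python
import numpy as np

def client_step(w, X, y, lr):
    """One local gradient step on a client's own data (squared loss)."""
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def fedavg(clients, lr=0.05, rounds=200):
    """Server loop: broadcast the global weights, collect the locally updated
    weights, and average them weighted by each client's data share n_c / n."""
    n_total = sum(len(y) for _, y in clients)
    w = np.zeros(clients[0][0].shape[1])
    for _ in range(rounds):
        local_ws = [client_step(w, X, y, lr) for X, y in clients]
        w = sum((len(y) / n_total) * wl for wl, (_, y) in zip(local_ws, clients))
    return w

# Two clients whose private data follow the same model, y = 3x
clients = [
    (np.array([[1.0], [2.0]]), np.array([3.0, 6.0])),
    (np.array([[3.0], [4.0]]), np.array([9.0, 12.0])),
]
w = fedavg(clients)
```

With one local step per round, averaging the clients' weights is algebraically the same as stepping along the data-weighted average gradient, which is exactly the update rule above.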
From the viewpoint of data partition, federated learning has three main categories: (1) horizontal federated learning; (2) vertical federated learning; and (3) federated transfer learning [43]. In horizontal federated learning, the clients' datasets share the same features but contain different samples [44]. In vertical federated learning, the features differ while the samples partially overlap. Federated transfer learning is used when neither the features nor the samples overlap [45].
Another variant of distributed learning is assisted learning developed in 2020 [46]. In this method, the clients do not exchange any data with the central server, and they have a protocol to assist each other’s private learning tasks by iteratively exchanging nonsensitive information such as fitted residuals.
In assisted learning, a client can seek assistance from other clients by sharing a few key statistics in an iterative communication process. Client A, who seeks assistance, sends a query to a selected list of other clients. If Client B agrees to help, it fits a model to the latest statistics $e_{B,k}$ of Client A and sends the obtained residuals back to Client A. Based on the collected responses, Client A initializes the next round of assistance. After this iterative process converges, the training for Client A is complete. In the prediction stage, Client A combines its own prediction with the prediction results from Client B for a new feature vector to form a final prediction. Different methods, such as weighted summation, can be used for combining the prediction results. Table 1 summarizes the differences between distributed, federated, and assisted learning.
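The residual-exchange protocol can be sketched as a backfitting loop between two clients that hold different features of the same samples. This is a simplified toy (linear models, least-squares fits, synthetic data), not the exact protocol of [46]; only fitted residuals cross the client boundary.

```python
import numpy as np

def fit_residual(X, r):
    """Least-squares fit of the received residual r on local features X."""
    return np.linalg.lstsq(X, r, rcond=None)[0]

def assisted_learning(X_a, y, X_b, rounds=10):
    """Client A holds (X_a, y); Client B holds only X_b for the same samples.
    Each round, A fits its residual locally, then B assists by fitting
    the residual A sends over; raw features never leave their owner."""
    pred = np.zeros_like(y)
    for _ in range(rounds):
        w_a = fit_residual(X_a, y - pred)   # A's local fit of its residual
        pred = pred + X_a @ w_a
        w_b = fit_residual(X_b, y - pred)   # B fits A's latest residual
        pred = pred + X_b @ w_b
    return pred

rng = np.random.default_rng(0)
X_a, X_b = rng.normal(size=(50, 1)), rng.normal(size=(50, 1))
y = X_a[:, 0] + 2.0 * X_b[:, 0]            # ground truth uses both feature sets
pred = assisted_learning(X_a, y, X_b)
```

The final prediction is the sum of the two clients' partial models, one simple instance of the prediction-combination step described above.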

4. Applications of Distributed Learning

It is estimated that the proliferation of advanced metering devices such as smart meters in power systems will produce more than 2 petabytes of data annually by 2022 [47]. To take advantage of this amount of data, various big data and machine learning methods are being developed and described in the literature. In power systems, distributed learning is an emerging area which can be used to perform forecasting or control tasks without the need to transfer large amounts of data. This section provides an overview of distributed learning applications that have been proposed in power systems so far.

4.1. Voltage Control

Tousi et al. [48] used distributed reinforcement learning to control the voltage magnitude of the IEEE 39-bus New England power system. In this scheme, four static compensators (STATCOMs) were considered as servicing agents that received voltage values from bus agents. Each time a voltage deviation occurred in the system, the servicing agents performed primary or secondary voltage control to restore the voltage magnitude. The servicing agents used Q-learning to decide on the most appropriate voltage control action. In this method, if an agent in state $s_t$ chooses action $a_t$, it enters a new state $s_{t+1}$ and receives reward $r$. A table of expected aggregate future rewards was constructed and, based on the reward values, it was determined how good it is to take action $a_t$ in state $s_t$. The actions were the STATCOMs' reactive power injection into, or absorption from, the buses.
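The tabular Q-learning update underlying such servicing agents has the generic form below. The state names, actions, and reward value are illustrative placeholders, not those of [48].

```python
def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q.get((s_next, b), 0.0) for b in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
    return Q

# A servicing agent observes a voltage deviation, injects reactive power,
# and receives a reward of 1 for restoring the voltage
actions = ["inject", "absorb", "hold"]
Q = {}
Q = q_update(Q, "low_voltage", "inject", 1.0, "nominal", actions)
```

Each agent maintains such a table over its own states and actions, which is what makes the scheme naturally distributable across the four STATCOM agents.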
Distributed learning was preferred over centralized learning because the action space was very large [48]. Four variants of multiagent tabular Q-learning were studied: Markov decision process (MDP) learners, independent learners, coordinated reinforcement learning, and distributed value functions. The results show that the MDP learners performed better than all other methods. However, since the joint action space grows exponentially with the number of agents, this method was not feasible for large problems. Coordinated reinforcement learning showed the next best performance. The independent learners method only achieved acceptable performance when the rewards of all agents were equal to the global system reward.
Similarly, an iterative distributed learning approach based on approximate dynamic programming was developed by Liu et al. [49] to perform secondary voltage control in a DC microgrid. Each of the sources inside the microgrid had a local primary and secondary voltage control unit. Each secondary voltage controller unit was an agent that exchanged power flow information with the environment and made decisions based on the actor–critic framework. The effectiveness of the proposed approach was verified using a DC microgrid with four sources.
Other studies include performing secondary voltage control via synchronous generators in an islanded microgrid [50], and via flexible AC transmission systems (FACTS) in the IEEE 39-bus New England power system [51]. Karim et al. [50] used K-means clustering to first classify the system state (as stable or unstable) after a sudden disturbance in the system. After that, a distributed neural network structure was used to predict the reference rotor speed and reference field voltage for individual synchronous generators. Tousi et al. [51] used distributed SARSA Q-learning algorithm to assist FACTS devices in making better secondary voltage control decisions.

4.2. Renewable Energy Forecast

Wind power forecasting up to a few hours ahead is paramount for participation in electricity markets and for maintaining the reliability of power systems [52]. Statistical learning approaches require sharing confidential data with third parties and are therefore not popular. As a result, distributed learning algorithms have gained prominence in wind power forecasting. Online ADMM and mirror-descent-inspired algorithms were implemented by Sommer et al. [53] to forecast wind power generation in a distributed manner. Each wind operator site that wanted to perform the forecast acted as a central agent. The central agent could contact a number of other operator sites for information and subsequently enter a learning contract. The AR-X model expresses the power output at site j as a function of past power generation at site j and at the contracted sites. Therefore, the pieces of information exchanged between the central agent and the other agents were the partial power predictions, explanatory variables, model coefficients of the sites, and the information encryption matrix. A similar approach was presented by Pinson [54]; however, the proposed method was not online, and only learned model parameters were exchanged between agents. Moreover, Goncalves et al. [55] used the same approach for solar energy forecasting.
Probabilistic wind power forecasting was formulated as a distributed framework by Zhang and Wang [56,57] using the ADMM algorithm. In this scheme, the agents were the wind farm operators, and they only exchanged partial power predictions with the central collector, which was the power system operator. A quantile regression model was used to formulate the wind farm power output as a function of on-site and off-site input features.
In wind farms, wind turbines are placed close to each other due to limited land availability. This causes the power output of downstream wind turbines to decrease as a result of the operation of upstream wind turbines, a phenomenon called the wake effect [58]. To address this issue, Bui et al. [59] used double deep Q-learning to construct a state vector for each wind turbine generator based on state information including pitch angle, tip speed ratio, and wind speed. Each wind turbine generator in state $s_k$ performed an action $a_k$ (changing pitch angle and tip speed ratio based on the wind speed) and then entered a new state $s_{k+1}$ with a reward of $r_k$. The purpose was to teach the wind turbine generators to choose the best actions in each state by maximizing the common rewards. The proposed approach increased the output power of the wind turbines by approximately 1.99% to 4.11% compared to the maximum power point tracking method.

4.3. Demand Prediction

Simultaneous electric vehicle (EV) charging can create energy transfer congestion for electric utilities. Therefore, charging station providers predict EV demand in advance to reserve the energy needed by EVs in real time. Charging stations may not be willing to share their local data with the charging station provider for EV demand prediction purposes. Therefore, a method based on federated learning was developed by Saputra et al. [60] to predict EV demand without sharing private information between charging stations and the charging station provider. Charging stations were the agents, and they only exchanged trained models (gradient information) with the central agent, which was the charging station provider. The provider aggregated all trained models, updated the global model using deep learning, and sent the new model back to the agents. To increase prediction accuracy, the charging stations were clustered into groups based on their location before performing the prediction. Similarly, Wang et al. [61] used weather conditions, geography, vehicle characteristics, and driving style as inputs to a federated learning model to predict EV charging demand while considering the charging stations as agents.
Distributed Q-learning was used by Ebell et al. [62] to facilitate energy sharing among households where the agents were only aware of their own actions but received a common reward. An edge computing architecture for energy sharing between smart houses was presented by Albataineh et al. [63] that also used the decision tree learning method to calculate the electricity usage by each edge.

4.4. Energy Management

Microgrids are combinations of distributed energy resources and loads. They can operate in both grid-connected and islanded modes and are often accompanied by a control or management system. Due to the distributed nature of the resources inside microgrids, distributed learning has recently gained prominence in the control and management of energy in microgrids. Kohn et al. [64] proposed an intelligent control and management system for the elements of a microgrid based on a distributed architecture. In this model, the microgrid management server was the central server that supervised the control actions of the element controllers. The dynamic behavior of the loads and resources was learned using localized Hamiltonians, with the variables being operational cost and voltage–current relationships. These relationships were constructed as rules in the Hamiltonians and used in control algorithms. To model the interactions between control elements, one virtual control element was proposed that interacted with all other elements. This interaction was formulated as an optimal control problem subject to constraints and was iterated by playing a two-person Pareto game until a Pareto equilibrium was reached.
Hu and Kwasinski [65] designed an energy management system for microgrids using a hierarchical game machine learning algorithm combined with reinforcement learning. Each microgrid consisted of several base stations that searched for their optimal load-ratio policies. Each base station chose an initial load-ratio and then conducted a two-player game with a virtual user to find the resulting system status (state of charge of batteries, peak signal-to-noise ratio, etc.) and compute the reward. Later, it updated the load-ratio policy according to a rule that maximized the reward. Gao et al. [66] proposed a similar study for energy management of wind–photovoltaic (PV) power systems using distributed reinforcement learning, where each wind turbine or PV system acted as an agent and decided its own action strategy while observing the action history of other agents.

4.5. Transient Stability Enhancement

Power system stability margins are decreasing due to the increasing utilization of tie line capacity, power exchange over long distances, environmental and economic restrictions on building new transmission lines, uncoordinated controls, growth in demand, and increasing power system complexity [67,68]. This has made it difficult to implement a central controller for power system automation and has therefore motivated the application of distributed mechanisms. To this end, Hadidi and Jeyasurya [69] proposed a multiagent control based on reinforcement learning to enhance the damping of interarea oscillations and increase power system stability margins. The agents in this model were the generator excitation systems and power system stabilizers at the generator locations. The action considered in reinforcement learning was a discrete signal to the excitation reference of the generator, and the system state comprised the system oscillations (measured by the magnitude of the one-machine infinite bus speed deviation) and the synchronism between the generators in case of severe incidents. The value of the angular separation between the two groups of machines was used to determine the penalty, and the area under the speed deviation signal was used to determine the reward associated with each action.

4.6. Resilience Enhancement

Power system resilience is the capability of a power system to restore its original condition after a major disturbance, such as an extreme weather event [70]. According to the US Department of Energy, such events play the most significant role in causing blackouts [71] and are therefore known as high-impact, low-probability (HILP) events. Distributed approaches for service restoration decrease the computational burden by dividing tasks between independent agents. Karim et al. [72] developed a power system restoration method based on distributed machine learning. After a fault occurred in the system, the power network was divided into several groups based on rotor speed data using the K-means clustering algorithm. A corrective control using supervised machine learning was then applied to restore the system. An ensemble of algorithms including Random Forest and Random Subspace, together with Bagging and Boosting, was used. The selected features were terminal voltages, frequency values, and the sensitivity of active power generation to voltage at the faulted bus. The final decision, a combination of machine learning outputs from the different groups (or agents), consisted of the active power reference for the governors of the generators and the amount of load shedding in each group.
Ghorbani et al. [73,74,75] proposed a distributed restoration strategy that used Q-learning to select the actions that restore power to the largest number of loads. The presented structure consisted of multiple feeder agents which could perform learning and load prediction and could communicate with each other or the substation agent for decision making. A similar approach was proposed by Hong [76] to find the best switching configurations for service restoration using Q-learning.

4.7. Economic Dispatch

Economic dispatch allocates demand among power system generators in a way that minimizes power generation costs. As a result of the distributed nature of power supply and demand, distributed mechanisms form a promising solution for power dispatch in the current power system. Kim [77] developed a formulation for the economic dispatch of distributed generators using a multiagent learning scheme. Lagrangian multipliers were used in the primal update to perform a local optimization (distributed learning) for each generator to reduce generation costs. In the dual update stage, the central agent received the Lagrangian multipliers and updated them to satisfy the demand–supply balance.

4.8. Energy Storage Systems Control

The utilization of regenerative braking energy in urban railway systems is essential to prevent pantograph voltage rise. Zhu et al. [78] proposed a decentralized cooperative control for energy storage systems in urban railways using deep reinforcement learning. The training phase was carried out in a centralized manner, aiming at minimizing the global loss function. In the execution phase, each agent performed $\epsilon$-greedy action selection in a decentralized manner based on local observations and a local Q-function. The common goal was to keep the line voltage at the charge voltage threshold during train braking and at the discharge voltage threshold during train powering. The value of the loss function was determined based on the substation's output energy increase and the braking resistor loss. Al-Saffar and Musilek [79] used Q-learning in the same manner to control energy storage units and mitigate overvoltage issues caused by high penetration of PV systems.
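The decentralized $\epsilon$-greedy selection step used in such execution phases takes the generic form below; the Q-values and action count are placeholders, not the exact implementation of [78].

```python
import random

def epsilon_greedy(q_values, epsilon=0.1, rng=random):
    """With probability epsilon, explore a random action; otherwise exploit
    the action with the highest Q-value under local observations only."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=q_values.__getitem__)

# Greedy choice when epsilon = 0: always pick the best local action
best = epsilon_greedy([0.2, 0.9, 0.5], epsilon=0.0)
```

Because each agent evaluates only its own Q-values, this selection requires no communication at execution time, which is what makes the control decentralized.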

4.9. Other Applications

In addition to the applications discussed above, distributed learning has been used for wide-area monitoring [80], optimal allocation of DC/AC converters and synchronous machines in power systems [81], and technology deployment inside energy hubs [82]. A log-linear learning algorithm was used in a game-theoretic context by Jouini and Sun [81] to perform the DC/AC converter and synchronous machine allocation between generation units by minimizing the steady-state angle deviations of these units from their optimum values. For technology deployment inside energy hubs, both Q-learning and the continuous actor–critic learning automaton were tested by Bollinger and Evins [82], treating each technology type (gas boiler, battery, combined heat and power (CHP) unit, etc.) as an agent.
Gusrialdi et al. [83] developed a distributed learning method for estimating the eigenvectors of the power system after a small-signal disturbance. The system was divided into several regions, each with its own local estimator that computes the average of the electro-mechanical states. This information was exchanged between local estimators to estimate the eigenvectors of the original system. Al-Saffar and Musilek [84,85] used deep reinforcement learning to solve the optimal power flow (OPF) problem, treating the microgrids as agents. Federated learning has also been used to increase cybersecurity [86] and customer privacy [87] in non-intrusive load monitoring (NILM). A summary of the discussed distributed learning applications in power systems is given in Table 2.
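The neighbor-to-neighbor information exchange used by the local estimators above is, at its core, a consensus-averaging iteration: each region repeatedly mixes its estimate with those of its neighbors until all regions agree on the network-wide average. A minimal sketch with a hypothetical three-region line topology:

```python
# Sketch of consensus averaging between local estimators: each region
# repeatedly averages its estimate with its neighbors', converging to the
# network-wide mean. Topology and initial values are hypothetical.

def consensus(values, neighbors, steps=100, weight=0.3):
    x = list(values)
    for _ in range(steps):
        x = [xi + weight * sum(x[j] - xi for j in neighbors[i])
             for i, xi in enumerate(x)]
    return x

# Line topology of three regions: 0 -- 1 -- 2
est = consensus([1.0, 2.0, 6.0], neighbors={0: [1], 1: [0, 2], 2: [1]})
print([round(v, 3) for v in est])  # all regions converge to the mean, 3.0
```

Because the update is symmetric across each link, the sum of the estimates is conserved at every step, which is what guarantees convergence to the true average rather than to some other common value.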

5. Research Gaps and Challenges

Machine learning has found numerous applications in power systems. It has been used for designing demand response programs [88], consumer behavior modeling [89], fault location detection [90,91] and protection [92], cybersecurity [93], electricity price forecasting [94], demand prediction [95], renewable energy generation forecasting [96,97], transient stability assessment [98], voltage control [79,99], bad data detection [100], energy theft detection [101], grid topology identification [102], outage identification [103], microgrid energy management [104], emergency management [105], power flow estimation [106], optimal power flow prediction [107], unit commitment [108], state estimation [109], reliability management [110], event classification [111], power fluctuation identification [112], energy disaggregation [113], and power quality disturbance classification [114]. However, most of the presented works use a centralized learning framework, and, despite these accomplishments, research on distributed learning architectures in power systems remains very limited. Therefore, there are many applications for which distributed, federated, and assisted learning can be implemented.
Although distributed Q-learning and federated learning have already been used for energy sharing and load forecasting problems, no previous study has used distributed or federated learning for consumer behavior modeling. Numerous studies have used multi-layer perceptrons to learn the behavior of heating, ventilation, and air conditioning (HVAC) systems in a centralized or individual manner. However, none of the distributed learning architectures has been tested on this problem yet. In such a model, consumers serve as agents and the utility acts as the central server. The aim is to design the best demand response program for each customer based on their energy usage behavior and network constraints, without directly sharing the data with the utility. Blockchain-based federated learning was used by Zhao et al. [115] to learn the behavior of customers based on their smart home system data. This allowed smart home system manufacturers to receive feedback from customers to better understand their needs and enhance their designs while preserving customer privacy. Federated learning can be used in a similar way to design customized demand response programs for consumers, and it is recommended as a future research direction.
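The consumer-as-agent, utility-as-server arrangement described above follows the federated averaging (FedAvg) pattern: each client trains on its own data and only model weights reach the server. The following toy sketch, with a hypothetical one-parameter linear model and made-up usage data, illustrates the mechanics; it is not a behavior model from the cited works.

```python
# Minimal sketch of federated averaging (FedAvg): each consumer fits a
# local model on its own usage data; the utility (server) aggregates only
# model weights, never raw data. Model and data are hypothetical toys.

def local_update(w, data, lr=0.1, epochs=20):
    """One client's gradient-descent steps on y = w*x under squared loss."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def fed_avg(w_global, clients, rounds=10):
    for _ in range(rounds):
        # Clients train locally; the server averages weights by sample count.
        locals_ = [(local_update(w_global, d), len(d)) for d in clients]
        n = sum(k for _, k in locals_)
        w_global = sum(w * k for w, k in locals_) / n
    return w_global

clients = [[(1.0, 2.1), (2.0, 3.9)],   # consumer A's (feature, target) pairs
           [(1.0, 1.9), (3.0, 6.2)]]   # consumer B's pairs
w = fed_avg(0.0, clients)
print(round(w, 2))  # close to the shared underlying slope of ~2
```

The same structure scales to neural models: `local_update` becomes several epochs of SGD on the client, and the server averages parameter vectors instead of a scalar.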
Fault localization in power systems using artificial intelligence is a complex problem. Most articles presented in the literature use small case studies to test their centralized machine learning algorithms for fault location detection and ignore the effect that the rest of the system might have on the test region. Evaluating the effectiveness of distributed learning architectures in fault location detection can be a focus of future research. In this problem, the network can be divided into different regions, where each region is an agent and the final goal is to design a general fault detector for the network by minimizing the detection errors in smaller parts of the system. Blockchain-based federated learning was used by Zhang et al. [116] to detect device failure in Industrial Internet of Things (IIoT). A similar approach can be adapted to fit fault location detection, equipment failure detection, and equipment failure prediction in power systems.
Similarly, the anomaly detection, grid topology identification, and state and optimal power flow estimation methods that are present in the literature have only been tested on small case studies while ignoring the effects of the rest of the system. Federated learning can be a good solution for these problems. Considering that it has not been applied to any of these areas yet, it can be a very interesting topic for future studies. In these cases, the network can be divided into different regions to minimize local training errors and reduce the bandwidth and central data storage requirements needed for transferring data to the utility. Nguyen et al. [117] used federated learning to detect anomalies and attacks in IoT systems. In this study, each security gateway was a local training point for anomaly detection and the IoT security service was the central agent. The detection was performed by analyzing the density of the network traffic. A similar method can be developed for anomaly detection in power systems.
It often happens in power systems that various organizations and operators do not want to share raw data with each other for privacy or competition reasons. An example is wind power generation forecasting, where wind turbine operators prefer not to share data with each other. Although this problem has already been addressed in the literature using ADMM and mirror-descent algorithms, it has not been investigated using federated or assisted learning methods. Federated learning is simpler to implement than the ADMM and mirror-descent algorithms. Therefore, comparing the accuracy and time requirements of these algorithms can be a very interesting topic for future studies.
Perhaps the greatest benefit of federated learning in power systems would be to collect heterogeneous data from various sources (AMIs, renewable energy sites, generators, protection equipment, tweets, weather forecasts, etc.) and incorporate these data into a federated framework for better training results or for the development of new software tools and a holistic approach for big data management in power systems. This approach has not been studied yet. No previous study has implemented assisted learning in power systems. Therefore, for future studies, it is recommended to explore the potential of assisted learning in power systems applications as well.
Although distributed/federated learning frameworks significantly increase user privacy by eliminating the need for raw data exchange, they are still susceptible to adversarial attacks [118], poisoning attacks [119], and privacy leakage due to the exchange of gradients [120]. This issue can be addressed by using differential privacy and data obfuscation methods. However, this comes at the cost of reduced convergence rate and accuracy [121,122]. Therefore, further research is needed in this area. Another associated challenge is that, in practice, the data in federated learning are non-i.i.d. Therefore, the locally stored data may not represent the population distribution. This further leads to convergence problems when there are missing updates from some clients [123,124]. Moreover, better model aggregation methods for optimizing the performance of distributed/federated learning need to be developed.
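A common form of the differential-privacy mitigation mentioned above is the Gaussian mechanism: each client clips its model update to bound its sensitivity and then adds calibrated noise before sharing it. The sketch below illustrates this step only; the clip norm and noise multiplier are hypothetical tuning values, and a full DP accounting (privacy budget tracking) is omitted.

```python
import random

# Sketch of the Gaussian mechanism applied to a client's model update:
# clip the update to a maximum L2 norm, then add Gaussian noise before
# it leaves the client. Clip norm and noise scale are hypothetical.

def privatize(update, clip=1.0, sigma=0.5, rng=random):
    norm = sum(u * u for u in update) ** 0.5
    scale = min(1.0, clip / norm) if norm > 0 else 1.0
    clipped = [u * scale for u in update]            # bound sensitivity
    return [u + rng.gauss(0.0, sigma * clip) for u in clipped]

noisy = privatize([3.0, 4.0])   # norm 5 -> clipped to norm 1, then noised
print(len(noisy))
```

The trade-off noted in the text is visible here: larger sigma strengthens the privacy guarantee but injects more noise into the aggregated model, slowing convergence and reducing accuracy.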

6. Conclusions

This paper provides an overview of distributed learning applications in power systems. It first defines three major variants of distributed learning and points out their differences. Then, the studies that have already implemented distributed and federated learning in power systems are discussed. Finally, the challenges, gaps, and potential research directions in this area are identified. We conclude that a major study area in power systems would be to incorporate heterogeneous data from AMIs, renewable energy sites, generators, protection equipment, tweets, weather forecasts, etc. into training models using federated learning. This is a promising path toward a holistic, data-driven approach to electric power utility operation.

Author Contributions

Conceptualization, P.M.; methodology, N.G.; investigation, N.G.; resources, P.M.; writing—original draft preparation, N.G.; writing—review and editing, P.M.; supervision, P.M.; project administration, P.M.; and funding acquisition, P.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Sciences and Engineering Research Council (NSERC) of Canada grant number ALLRP 549804-19 and by Alberta Electric System Operator, AltaLink, ATCO Electric, ENMAX, EPCOR Inc., and FortisAlberta.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Siebert, L.C.; Aoki, A.R.; Lambert-Torres, G.; Lambert-de Andrade, N.; Paterakis, N.G. An Agent-Based Approach for the Planning of Distribution Grids as a Socio-Technical System. Energies 2020, 13, 4837.
2. Marinakis, V. Big Data for Energy Management and Energy-Efficient Buildings. Energies 2020, 13, 1555.
3. Alimi, O.A.; Ouahada, K.; Abu-Mahfouz, A.M. A Review of Machine Learning Approaches to Power System Security and Stability. IEEE Access 2020, 8, 113512–113531.
4. Hossain, E.; Khan, I.; Un-Noor, F.; Sikander, S.S.; Sunny, M.S.H. Application of Big Data and Machine Learning in Smart Grid, and Associated Security Concerns: A Review. IEEE Access 2019, 7, 13960–13988.
5. Wu, N.; Peng, C.; Niu, K. A Privacy-Preserving Game Model for Local Differential Privacy by Using Information-Theoretic Approach. IEEE Access 2020, 8, 216741–216751.
6. Keshk, M.; Turnbull, B.; Moustafa, N.; Vatsalan, D.; Choo, K.R. A Privacy-Preserving-Framework-Based Blockchain and Deep Learning for Protecting Smart Power Networks. IEEE Trans. Ind. Inform. 2020, 16, 5110–5118.
7. Liu, E.; Cheng, P. Mitigating Cyber Privacy Leakage for Distributed DC Optimal Power Flow in Smart Grid With Radial Topology. IEEE Access 2018, 6, 7911–7920.
8. Yang, L.; Chen, X.; Zhang, J.; Poor, H.V. Cost-Effective and Privacy-Preserving Energy Management for Smart Meters. IEEE Trans. Smart Grid 2015, 6, 486–495.
9. Alsharif, A.; Nabil, M.; Sherif, A.; Mahmoud, M.; Song, M. MDMS: Efficient and Privacy-Preserving Multidimension and Multisubset Data Collection for AMI Networks. IEEE Internet Things J. 2019, 6, 10363–10374.
10. Knirsch, F.; Eibl, G.; Engel, D. Error-Resilient Masking Approaches for Privacy Preserving Data Aggregation. IEEE Trans. Smart Grid 2018, 9, 3351–3361.
11. Li, Y.; Hu, B. A Consortium Blockchain-Enabled Secure and Privacy-Preserving Optimized Charging and Discharging Trading Scheme for Electric Vehicles. IEEE Trans. Ind. Inform. 2021, 17, 1968–1977.
12. Zhang, Q.; Fan, W.; Qiu, Z.; Liu, Z.; Zhang, J. A New Identification Approach of Power System Vulnerable Lines Based on Weighed H-Index. IEEE Access 2019, 7, 121421–121431.
13. Mak, T.W.K.; Fioretto, F.; Shi, L.; Van Hentenryck, P. Privacy-Preserving Power System Obfuscation: A Bilevel Optimization Approach. IEEE Trans. Power Syst. 2020, 35, 1627–1637.
14. Feng, X.; Lan, J.; Peng, Z.; Huang, Z.; Guo, Q. A Novel Privacy Protection Framework for Power Generation Data based on Generative Adversarial Networks. In Proceedings of the 2019 IEEE PES Asia-Pacific Power and Energy Engineering Conference (APPEEC), Macao, China, 1–4 December 2019; pp. 1–5.
15. Ge, Y.; Ye, H.; Loparo, K.A. Agent-Based Privacy Preserving Transactive Control for Managing Peak Power Consumption. IEEE Trans. Smart Grid 2020, 11, 4883–4890.
16. Li, Y.; Li, Z.; Wen, F.; Shahidehpour, M. Privacy-Preserving Optimal Dispatch for an Integrated Power Distribution and Natural Gas System in Networked Energy Hubs. IEEE Trans. Sustain. Energy 2019, 10, 2028–2038.
17. Ye, Y.; Li, S.; Liu, F.; Tang, Y.; Hu, W. EdgeFed: Optimized Federated Learning Based on Edge Computing. IEEE Access 2020, 8, 209191–209198.
18. Shen, P.; Li, C.; Zhang, Z. Distributed Active Learning. IEEE Access 2016, 4, 2572–2579.
19. Mowla, N.I.; Tran, N.H.; Doh, I.; Chae, K. AFRL: Adaptive federated reinforcement learning for intelligent jamming defense in FANET. J. Commun. Netw. 2020, 22, 244–258.
20. Ahmed, L.; Ahmad, K.; Said, N.; Qolomany, B.; Qadir, J.; Al-Fuqaha, A. Active Learning Based Federated Learning for Waste and Natural Disaster Image Classification. IEEE Access 2020, 8, 208518–208531.
21. Hua, G.; Zhu, L.; Wu, J.; Shen, C.; Zhou, L.; Lin, Q. Blockchain-Based Federated Learning for Intelligent Control in Heavy Haul Railway. IEEE Access 2020, 8, 176830–176839.
22. Wang, N.; Li, J.; Ho, S.S.; Qiu, C. Distributed machine learning for energy trading in electric distribution system of the future. Electr. J. 2021, 34, 106883.
23. Farhoumandi, M.; Zhou, Q.; Shahidehpour, M. A review of machine learning applications in IoT-integrated modern power systems. Electr. J. 2021, 34, 106879.
24. Zhang, D.; Han, X.; Deng, C. Review on the research and practice of deep learning and reinforcement learning in smart grids. CSEE J. Power Energy Syst. 2018, 4, 362–370.
25. Khodayar, M.; Liu, G.; Wang, J.; Khodayar, M.E. Deep learning in power systems research: A review. CSEE J. Power Energy Syst. 2020, 1–13.
26. Chen, J.; Ran, X. Deep Learning With Edge Computing: A Review. Proc. IEEE 2019, 107, 1655–1674.
27. Chiu, T.C.; Shih, Y.Y.; Pang, A.C.; Wang, C.S.; Weng, W.; Chou, C.T. Semisupervised Distributed Learning With Non-IID Data for AIoT Service Platform. IEEE Internet Things J. 2020, 7, 9266–9277.
28. Yang, Z.; Bajwa, W.U. ByRDiE: Byzantine-Resilient Distributed Coordinate Descent for Decentralized Learning. IEEE Trans. Signal Inf. Process. Netw. 2019, 5, 611–627.
29. Wu, X.; Zhang, J.; Wang, F. Stability-Based Generalization Analysis of Distributed Learning Algorithms for Big Data. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 801–812.
30. Li, H.; Zhang, H.; Wang, Z.; Zhu, Y.; Han, Q. Distributed consensus-based multi-agent convex optimization via gradient tracking technique. J. Frankl. Inst. 2019, 356, 3733–3761.
31. Shalev-Shwartz, S.; Shamir, O.; Srebro, N.; Sridharan, K. Stochastic Convex Optimization; COLT: Berlin, Germany, 2009.
32. Magnússon, S.; Shokri-Ghadikolaei, H.; Li, N. On Maintaining Linear Convergence of Distributed Learning and Optimization Under Limited Communication. IEEE Trans. Signal Process. 2020, 68, 6101–6116.
33. Huang, Z.; Hu, R.; Guo, Y.; Chan-Tin, E.; Gong, Y. DP-ADMM: ADMM-Based Distributed Learning With Differential Privacy. IEEE Trans. Inf. Forensics Secur. 2020, 15, 1002–1012.
34. Zhang, T.; Zhu, Q. Dynamic Differential Privacy for ADMM-Based Distributed Classification Learning. IEEE Trans. Inf. Forensics Secur. 2017, 12, 172–187.
35. Gu, C.; Li, J.; Wu, Z. An adaptive online learning algorithm for distributed convex optimization with coupled constraints over unbalanced directed graphs. J. Frankl. Inst. 2019, 356, 7548–7570.
36. Falsone, A.; Margellos, K.; Garatti, S.; Prandini, M. Dual decomposition for multi-agent distributed optimization with coupling constraints. Automatica 2017, 84, 149–158.
37. Niu, Y.; Wang, H.; Wang, Z.; Xia, D.; Li, H. Primal-dual stochastic distributed algorithm for constrained convex optimization. J. Frankl. Inst. 2019, 356, 9763–9787.
38. Yang, Q.; Chen, G. Primal-Dual Subgradient Algorithm for Distributed Constraint Optimization Over Unbalanced Digraphs. IEEE Access 2019, 7, 85190–85202.
39. McMahan, H.B.; Moore, E.; Ramage, D.; Hampson, S.; y Arcas, B.A. Communication-Efficient Learning of Deep Networks from Decentralized Data. arXiv 2017, arXiv:cs.LG/1602.05629.
40. Bonawitz, K.; Eichner, H.; Grieskamp, W.; Huba, D.; Ingerman, A.; Ivanov, V.; Kiddon, C.; Konečný, J.; Mazzocchi, S.; McMahan, H.B.; et al. Towards Federated Learning at Scale: System Design. arXiv 2019, arXiv:cs.LG/1902.01046.
41. Savazzi, S.; Nicoli, M.; Rampa, V. Federated Learning With Cooperating Devices: A Consensus Approach for Massive IoT Networks. IEEE Internet Things J. 2020, 7, 4641–4654.
42. Shen, S.; Zhu, T.; Wu, D.; Wang, W.; Zhou, W. From distributed machine learning to federated learning: In the view of data privacy and security. In Concurrency and Computation: Practice and Experience; Wiley Online Library: Hoboken, NJ, USA, 2020.
43. Yang, Q.; Liu, Y.; Chen, T.; Tong, Y. Federated Machine Learning: Concept and Applications. ACM Trans. Intell. Syst. Technol. 2019, 10.
44. Li, L.; Fan, Y.; Tse, M.; Lin, K.Y. A review of applications in federated learning. Comput. Ind. Eng. 2020, 149, 106854.
45. Zhang, C.; Xie, Y.; Bai, H.; Yu, B.; Li, W.; Gao, Y. A survey on federated learning. Knowl. Based Syst. 2021, 216, 106775.
46. Xian, X.; Wang, X.; Ding, J.; Ghanadan, R. Assisted Learning: A Framework for Multi-Organization Learning. In Advances in Neural Information Processing Systems; Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2020; Volume 33, pp. 14580–14591.
47. Mor, G.; Vilaplana, J.; Danov, S.; Cipriano, J.; Solsona, F.; Chemisana, D. EMPOWERING, a Smart Big Data Framework for Sustainable Electricity Suppliers. IEEE Access 2018, 6, 71132–71142.
48. Tousi, M.; Hosseinian, S.H.; Menhaj, M.B. A Multi-agent-based voltage control in power systems using distributed reinforcement learning. Simulation 2011, 87, 581–599.
49. Liu, X.; Jiang, H.; Wang, Y.; He, H. A Distributed Iterative Learning Framework for DC Microgrids: Current Sharing and Voltage Regulation. IEEE Trans. Emerg. Top. Comput. Intell. 2020, 4, 119–129.
50. Karim, M.A.; Currie, J.; Lie, T. A distributed machine learning approach for the secondary voltage control of an Islanded micro-grid. In Proceedings of the 2016 IEEE Innovative Smart Grid Technologies—Asia (ISGT-Asia), Melbourne, VIC, Australia, 28 November–1 December 2016; pp. 611–616.
51. Tousi, M.R.; Hosseinian, S.H.; Menhaj, M.B. Voltage Coordination of FACTS Devices in Power Systems Using RL-Based Multi-Agent Systems. AUT J. Electr. Eng. 2009, 41, 39–49.
52. da Silva, R.G.; Ribeiro, M.H.D.M.; Moreno, S.R.; Mariani, V.C.; dos Santos Coelho, L. A novel decomposition-ensemble learning framework for multi-step ahead wind energy forecasting. Energy 2021, 216, 119174.
53. Sommer, B.; Pinson, P.; Messner, J.W.; Obst, D. Online distributed learning in wind power forecasting. Int. J. Forecast. 2021, 37, 205–223.
54. Pinson, P. Introducing distributed learning approaches in wind power forecasting. In Proceedings of the 2016 International Conference on Probabilistic Methods Applied to Power Systems (PMAPS), Beijing, China, 16–20 October 2016; pp. 1–6.
55. Goncalves, C.; Bessa, R.J.; Pinson, P. Privacy-preserving Distributed Learning for Renewable Energy Forecasting. IEEE Trans. Sustain. Energy 2021, 1.
56. Zhang, Y.; Wang, J. A Distributed Approach for Wind Power Probabilistic Forecasting Considering Spatio-Temporal Correlation Without Direct Access to Off-Site Information. IEEE Trans. Power Syst. 2018, 33, 5714–5726.
57. Zhang, Y.; Wang, J. A Distributed Approach for Wind Power Probabilistic Forecasting Considering Spatiotemporal Correlation without Direct Access to Off-site Information. In Proceedings of the 2020 IEEE Power Energy Society General Meeting (PESGM), Montreal, QC, Canada, 2–6 August 2020; p. 1.
58. Howlader, A.M.; Senjyu, T.; Saber, A.Y. An Integrated Power Smoothing Control for a Grid-Interactive Wind Farm Considering Wake Effects. IEEE Syst. J. 2015, 9, 954–965.
59. Bui, V.H.; Nguyen, T.T.; Kim, H.M. Distributed Operation of Wind Farm for Maximizing Output Power: A Multi-Agent Deep Reinforcement Learning Approach. IEEE Access 2020, 8, 173136–173146.
60. Saputra, Y.M.; Hoang, D.T.; Nguyen, D.N.; Dutkiewicz, E.; Mueck, M.D.; Srikanteswara, S. Energy Demand Prediction with Federated Learning for Electric Vehicle Networks. In Proceedings of the 2019 IEEE Global Communications Conference (GLOBECOM), Waikoloa, HI, USA, 9–13 December 2019; pp. 1–6.
61. Wang, Z.; Ogbodo, M.; Huang, H.; Qiu, C.; Hisada, M.; Abdallah, A.B. AEBIS: AI-Enabled Blockchain-Based Electric Vehicle Integration System for Power Management in Smart Grid Platform. IEEE Access 2020, 8, 226409–226421.
62. Ebell, N.; Gütlein, M.; Pruckner, M. Sharing of Energy Among Cooperative Households Using Distributed Multi-Agent Reinforcement Learning. In Proceedings of the 2019 IEEE PES Innovative Smart Grid Technologies Europe (ISGT-Europe), Bucharest, Romania, 29 September–2 October 2019; pp. 1–5.
63. Albataineh, H.; Nijim, M.; Bollampall, D. The Design of a Novel Smart Home Control System using Smart Grid Based on Edge and Cloud Computing. In Proceedings of the 2020 IEEE 8th International Conference on Smart Energy Grid Engineering (SEGE), Oshawa, ON, Canada, 12–14 August 2020; pp. 88–91.
64. Kohn, W.; Zabinsky, Z.B.; Nerode, A. A Micro-Grid Distributed Intelligent Control and Management System. IEEE Trans. Smart Grid 2015, 6, 2964–2974.
65. Hu, R.; Kwasinski, A. Energy Management for Microgrids Using a Hierarchical Game-Machine Learning Algorithm. In Proceedings of the 2019 1st International Conference on Control Systems, Mathematical Modelling, Automation and Energy Efficiency (SUMMA), Lipetsk, Russia, 20–22 November 2019; pp. 546–551.
66. Gao, L.m.; Zeng, J.; Wu, J.; Li, M. Cooperative reinforcement learning algorithm to distributed power system based on Multi-Agent. In Proceedings of the 2009 3rd International Conference on Power Electronics Systems and Applications (PESA), Hong Kong, China, 20–22 May 2009; pp. 1–4.
67. Ernst, D.; Glavic, M.; Wehenkel, L. Power systems stability control: Reinforcement learning framework. IEEE Trans. Power Syst. 2004, 19, 427–435.
68. Rogers, G. Power System Oscillations; Springer US: Boston, MA, USA, 2000; pp. 1–6.
69. Hadidi, R.; Jeyasurya, B. A real-time multiagent wide-area stabilizing control framework for power system transient stability enhancement. In Proceedings of the 2011 IEEE Power and Energy Society General Meeting, Detroit, MI, USA, 24–28 July 2011; pp. 1–8.
70. Mohamed, M.A.; Chen, T.; Su, W.; Jin, T. Proactive Resilience of Power Systems Against Natural Disasters: A Literature Review. IEEE Access 2019, 7, 163778–163795.
71. Mahzarnia, M.; Moghaddam, M.P.; Baboli, P.T.; Siano, P. A Review of the Measures to Enhance Power Systems Resilience. IEEE Syst. J. 2020, 14, 4059–4070.
72. Karim, M.A.; Currie, J.; Lie, T.T. Distributed Machine Learning on Dynamic Power System Data Features to Improve Resiliency for the Purpose of Self-Healing. Energies 2020, 13, 3494.
73. Ghorbani, M.J.; Choudhry, M.A.; Feliachi, A. A Multiagent Design for Power Distribution Systems Automation. IEEE Trans. Smart Grid 2016, 7, 329–339.
74. Ghorbani, M.J. A Multi-Agent Design for Power Distribution Systems Automation. Ph.D. Thesis, West Virginia University, Morgantown, WV, USA, 2014.
75. Ghorbani, J.; Choudhry, M.A.; Feliachi, A. A MAS learning framework for power distribution system restoration. In Proceedings of the 2014 IEEE PES T D Conference and Exposition, Chicago, IL, USA, 14–17 April 2014; pp. 1–6.
76. Hong, J. A Multiagent Q-Learning-Based Restoration Algorithm for Resilient Distribution System Operation. Master's Thesis, University of Central Florida, Orlando, FL, USA, 2017.
77. Kim, K.K.K. Distributed Learning Algorithms and Lossless Convex Relaxation for Economic Dispatch with Transmission Losses and Capacity Limits. Math. Probl. Eng. 2019, 2019.
78. Zhu, F.; Yang, Z.; Lin, F.; Xin, Y. Decentralized Cooperative Control of Multiple Energy Storage Systems in Urban Railway Based on Multiagent Deep Reinforcement Learning. IEEE Trans. Power Electron. 2020, 35, 9368–9379.
79. Al-Saffar, M.; Musilek, P. Reinforcement Learning-Based Distributed BESS Management for Mitigating Overvoltage Issues in Systems With High PV Penetration. IEEE Trans. Smart Grid 2020, 11, 2980–2994.
80. Li, K.; Guo, Y.; Laverty, D.; He, H.; Fei, M. Distributed Adaptive Learning Framework for Wide Area Monitoring of Power Systems Integrated with Distributed Generations. Energy Power Eng. 2013, 5, 962–969.
81. Jouini, T.; Sun, Z. Distributed learning for optimal allocation of synchronous and converter-based generation. arXiv 2021, arXiv:math.OC/2009.13857.
82. Bollinger, L.; Evins, R. Multi-Agent Reinforcement Learning for Optimizing Technology Deployment in Distributed Multi-Energy Systems; European Group for Intelligent Computing in Engineering: Plymouth, UK, 2016.
83. Gusrialdi, A.; Chakrabortty, A.; Qu, Z. Distributed Learning of Mode Shapes in Power System Models. In Proceedings of the 2018 IEEE Conference on Decision and Control (CDC), Miami, FL, USA, 17–19 December 2018; pp. 4002–4007.
84. Al-Saffar, M.; Musilek, P. Distributed Optimization for Distribution Grids With Stochastic DER Using Multi-Agent Deep Reinforcement Learning. IEEE Access 2021, 9, 63059–63072.
85. Al-Saffar, M.; Musilek, P. Distributed Optimal Power Flow for Electric Power Systems with High Penetration of Distributed Energy Resources. In Proceedings of the 2019 IEEE Canadian Conference of Electrical and Computer Engineering (CCECE), Edmonton, AB, Canada, 5–8 May 2019; pp. 1–5.
86. You, S. A Cyber-secure Framework for Power Grids Based on Federated Learning. engrXiv 2020.
87. Cao, H.; Liu, S.; Zhao, R.; Xiong, X. IFed: A novel federated learning framework for local differential privacy in Power Internet of Things. Int. J. Distrib. Sens. Netw. 2020, 16, 1550147720919698.
88. Afzalan, M.; Jazizadeh, F. A Machine Learning Framework to Infer Time-of-Use of Flexible Loads: Resident Behavior Learning for Demand Response. IEEE Access 2020, 8, 111718–111730.
89. Gholizadeh, N.; Abedi, M.; Nafisi, H.; Marzband, M.; Loni, A. Fair-optimal bi-level transactive energy management for community of microgrids. IEEE Syst. J. 2021, 15, 1–11.
90. Mamuya, Y.D.; Lee, Y.D.; Shen, J.W.; Shafiullah, M.; Kuo, C.C. Application of Machine Learning for Fault Classification and Location in a Radial Distribution Grid. Appl. Sci. 2020, 10, 4965.
91. Gush, T.; Bukhari, S.B.A.; Mehmood, K.K.; Admasie, S.; Kim, J.S.; Kim, C.H. Intelligent Fault Classification and Location Identification Method for Microgrids Using Discrete Orthonormal Stockwell Transform-Based Optimized Multi-Kernel Extreme Learning Machine. Energies 2019, 12, 4504.
92. Karim, M.A.; Currie, J.; Lie, T.T. Dynamic Event Detection Using a Distributed Feature Selection Based Machine Learning Approach in a Self-Healing Microgrid. IEEE Trans. Power Syst. 2018, 33, 4706–4718.
93. Cui, M.; Wang, J.; Chen, B. Flexible Machine Learning-Based Cyberattack Detection Using Spatiotemporal Patterns for Distribution Systems. IEEE Trans. Smart Grid 2020, 11, 1805–1808.
94. Rafiei, M.; Niknam, T.; Khooban, M.H. Probabilistic Forecasting of Hourly Electricity Price by Generalization of ELM for Usage in Improved Wavelet Neural Network. IEEE Trans. Ind. Inform. 2017, 13, 71–79.
95. Li, W.; Yang, X.; Li, H.; Su, L. Hybrid Forecasting Approach Based on GRNN Neural Network and SVR Machine for Electricity Demand Forecasting. Energies 2017, 10, 44.
96. Zhu, H.; Lian, W.; Lu, L.; Dai, S.; Hu, Y. An Improved Forecasting Method for Photovoltaic Power Based on Adaptive BP Neural Network with a Scrolling Time Window. Energies 2017, 10, 1542.
97. Khodayar, M.; Kaynak, O.; Khodayar, M.E. Rough Deep Neural Architecture for Short-Term Wind Speed Forecasting. IEEE Trans. Ind. Inform. 2017, 13, 2770–2779.
98. Shi, Z.; Yao, W.; Zeng, L.; Wen, J.; Fang, J.; Ai, X.; Wen, J. Convolutional neural network-based power system transient stability assessment and instability mode prediction. Appl. Energy 2020, 263, 114586.
99. Duan, J.; Shi, D.; Diao, R.; Li, H.; Wang, Z.; Zhang, B.; Bian, D.; Yi, Z. Deep-Reinforcement-Learning-Based Autonomous Voltage Control for Power Grid Operations. IEEE Trans. Power Syst. 2020, 35, 814–817.
100. Araya, D.B.; Grolinger, K.; ElYamany, H.F.; Capretz, M.A.; Bitsuamlak, G. An ensemble learning framework for anomaly detection in building energy consumption. Energy Build. 2017, 144, 191–206.
101. Jindal, A.; Dua, A.; Kaur, K.; Singh, M.; Kumar, N.; Mishra, S. Decision Tree and SVM-Based Data Analytics for Theft Detection in Smart Grid. IEEE Trans. Ind. Inform. 2016, 12, 1005–1016.
102. Zhao, J.; Li, L.; Xu, Z.; Wang, X.; Wang, H.; Shao, X. Full-Scale Distribution System Topology Identification Using Markov Random Field. IEEE Trans. Smart Grid 2020, 11, 4714–4726.
103. Zhao, Y.; Chen, J.; Poor, H.V. A Learning-to-Infer Method for Real-Time Power Grid Multi-Line Outage Identification. IEEE Trans. Smart Grid 2020, 11, 555–564.
104. Venayagamoorthy, G.K.; Sharma, R.K.; Gautam, P.K.; Ahmadi, A. Dynamic Energy Management System for a Smart Microgrid. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 1643–1656.
105. Wang, X.Z.; Zhou, J.; Huang, Z.L.; Bi, X.L.; Ge, Z.Q.; Li, L. A multilevel deep learning method for big data analysis and emergency management of power system. In Proceedings of the 2016 IEEE International Conference on Big Data Analysis (ICBDA), Hangzhou, China, 12–14 March 2016; pp. 1–5.
106. Donnot, B.; Guyon, I.; Schoenauer, M.; Panciatici, P.; Marot, A. Introducing machine learning for power system operation support. arXiv 2017, arXiv:stat.ML/1709.09527.
107. Fioretto, F.; Mak, T.W.K.; Hentenryck, P.V. Predicting AC Optimal Power Flows: Combining Deep Learning and Lagrangian Dual Methods. arXiv 2019, arXiv:eess.SP/1909.10461.
108. Dalal, G.; Mannor, S. Reinforcement learning for the unit commitment problem. In Proceedings of the 2015 IEEE Eindhoven PowerTech, Eindhoven, The Netherlands, 29 June–2 July 2015; pp. 1–6.
109. Zhang, L.; Wang, G.; Giannakis, G.B. Real-Time Power System State Estimation and Forecasting via Deep Unrolled Neural Networks. IEEE Trans. Signal Process. 2019, 67, 4069–4077.
110. Duchesne, L.; Karangelos, E.; Wehenkel, L. Machine learning of real-time power systems reliability management response. In Proceedings of the 2017 IEEE Manchester PowerTech, Manchester, UK, 18–22 June 2017; pp. 1–6.
111. Kim, D.I.; Wang, L.; Shin, Y.J. Data Driven Method for Event Classification via Regional Segmentation of Power Systems. IEEE Access 2020, 8, 48195–48204.
112. Wen, S.; Wang, Y.; Tang, Y.; Xu, Y.; Li, P.; Zhao, T. Real-Time Identification of Power Fluctuations Based on LSTM Recurrent Neural Network: A Case Study on Singapore Power System. IEEE Trans. Ind. Inform. 2019, 15, 5266–5275.
113. Khodayar, M.; Wang, J.; Wang, Z. Energy Disaggregation via Deep Temporal Dictionary Learning. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 1696–1709.
  114. Khokhar, S.; Mohd Zin, A.A.; Memon, A.P.; Mokhtar, A.S. A new optimal feature selection algorithm for classification of power quality disturbances using discrete wavelet transform and probabilistic neural network. Measurement 2017, 95, 246–259. [Google Scholar] [CrossRef]
  115. Zhao, Y.; Zhao, J.; Jiang, L.; Tan, R.; Niyato, D.; Li, Z.; Lyu, L.; Liu, Y. Privacy-Preserving Blockchain-Based Federated Learning for IoT Devices. IEEE Internet Things J. 2021, 8, 1817–1829. [Google Scholar] [CrossRef]
  116. Zhang, W.; Lu, Q.; Yu, Q.; Li, Z.; Liu, Y.; Lo, S.K.; Chen, S.; Xu, X.; Zhu, L. Blockchain-Based Federated Learning for Device Failure Detection in Industrial IoT. IEEE Internet Things J. 2021, 8, 5926–5937. [Google Scholar] [CrossRef]
  117. Nguyen, T.D.; Marchal, S.; Miettinen, M.; Fereidooni, H.; Asokan, N.; Sadeghi, A.R. DÏoT: A Federated Self-learning Anomaly Detection System for IoT. In Proceedings of the 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS), Dallas, TX, USA, 7–10 July 2019; pp. 756–767. [Google Scholar] [CrossRef] [Green Version]
  118. Jiang, J.C.; Kantarci, B.; Oktug, S.; Soyata, T. Federated Learning in Smart City Sensing: Challenges and Opportunities. Sensors 2020, 20, 6230. [Google Scholar] [CrossRef] [PubMed]
  119. Kang, J.; Xiong, Z.; Niyato, D.; Zou, Y.; Zhang, Y.; Guizani, M. Reliable Federated Learning for Mobile Networks. IEEE Wirel. Commun. 2020, 27, 72–80. [Google Scholar] [CrossRef] [Green Version]
  120. Melis, L.; Song, C.; De Cristofaro, E.; Shmatikov, V. Exploiting Unintended Feature Leakage in Collaborative Learning. In Proceedings of the 2019 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 19–23 May 2019; pp. 691–706. [Google Scholar] [CrossRef] [Green Version]
  121. McMahan, H.B.; Ramage, D.; Talwar, K.; Zhang, L. Learning Differentially Private Recurrent Language Models. arXiv 2018, arXiv:cs.LG/1710.06963. [Google Scholar]
  122. Li, T.; Sahu, A.K.; Talwalkar, A.; Smith, V. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Process. Mag. 2020, 37, 50–60. [Google Scholar] [CrossRef]
  123. Prakash, S.; Dhakal, S.; Akdeniz, M.; Avestimehr, A.S.; Himayat, N. Coded Computing for Federated Learning at the Edge. arXiv 2020, arXiv:cs.DC/2007.03273. [Google Scholar]
  124. Wu, Q.; He, K.; Chen, X. Personalized Federated Learning for Intelligent IoT Applications: A Cloud-Edge Based Framework. IEEE Open J. Comput. Soc. 2020, 1, 35–44. [Google Scholar] [CrossRef]
Figure 1. Distributed learning scheme.
Figure 2. Federated learning scheme.
Table 1. Comparison of distributed, federated, and assisted learning.
| Method | Data Source | Communication with Central Server | Communication between Agents |
|---|---|---|---|
| Distributed Learning | Central server | ✓ | ✓ |
| Federated Learning | Agents | ✓ | × |
| Assisted Learning | Agents | × | ✓ |
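The federated column of Table 1 (data stays with the agents; only model parameters travel to and from the central server) can be illustrated with a minimal federated averaging sketch. The environment, model, and all names below are illustrative assumptions, not taken from any of the cited works:

```python
# Minimal federated averaging sketch: each agent trains a linear model on
# private local data, and the server only ever sees model weights.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One agent's round: gradient descent on local data for y ≈ X @ w.
    The raw data (X, y) never leaves the agent."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(global_w, agents):
    """Server step: broadcast global weights, collect local updates,
    and average them weighted by each agent's sample count."""
    updates, sizes = [], []
    for X, y in agents:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Three agents with private data drawn from the same ground truth w* = [2, -1]
true_w = np.array([2.0, -1.0])
agents = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    agents.append((X, X @ true_w + 0.01 * rng.normal(size=50)))

w = np.zeros(2)
for _ in range(30):
    w = fedavg_round(w, agents)
print(np.round(w, 2))  # converges close to the ground truth [2, -1]
```

The same loop structure maps onto the other two columns: in distributed learning the server would also hold and partition the data, while in assisted learning there is no `fedavg_round` server step and agents exchange statistics with each other directly.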
Table 2. Distributed learning applications summary.
| Ref. | Application | Agents | Central Server | Machine Learning Algorithm | Exchanged Data |
|---|---|---|---|---|---|
| [48] | Voltage control | STATCOMs | - | Q-learning | Rewards, value functions |
| [49] | Voltage control | Voltage control units | - | Actor–critic framework | Power flow information |
| [50] | Voltage control | Synchronous generators | Virtual server | Multilayer perceptron | Control actions |
| [51] | Voltage control | FACTS devices | - | SARSA Q-learning | Rewards, value functions |
| [53] | Wind power forecast | Neighbor wind turbine operators | Wind turbine operator | ADMM, mirror-descent | Partial power predictions, model coefficients of sites encryption matrix |
| [54] | Wind power forecast | Neighbor wind turbine operators | Wind turbine operator | ADMM | Partial power predictions |
| [56,57] | Wind power forecast | Wind farm operators | Power system operator | ADMM | Partial power predictions |
| [59] | Wind power maximization | Wind turbine operators | Transmission system operator | Deep Q-learning | Rewards |
| [60,61] | EV demand prediction | Charging stations | Charging station provider | Federated learning | Gradient information |
| [62] | Energy sharing among households | Households | Utility | Q-learning | Rewards |
| [64] | Microgrid energy management | Element controllers | Microgrid management server | Hamiltonians | Control variables |
| [65] | Microgrid energy management | Element controllers | Virtual server | Reinforcement learning | Load ratio |
| [66] | Wind–PV management | Wind turbines, PV systems | - | Reinforcement learning | Action history |
| [69] | Increasing power system stability margins | Generator excitation systems, power system stabilizers | - | Reinforcement learning | States, rewards |
| [72] | Resiliency enhancement | Network regions | - | Ensemble learning | Rotor angle |
| [73,74,75,76] | Resiliency enhancement | Feeder agents | Substation agent | Q-learning | Measurements |
| [77] | Economic dispatch | Generators | Transmission system operator | Primal-dual decomposition | Lagrange multipliers |
| [78,79] | Energy storage control | Energy storage systems | Virtual server | Q-learning | Rewards |
| [80] | Wide-area monitoring | Synchrophasors | Virtual server | Incremental learning | Measured data |
| [81] | Optimal allocation | Generation units | - | Log-linear learning | Generation types |
| [82] | Technology deployment | Technology types | Market agent | Q-learning, actor–critic framework | Energy prices, production prices |
| [83] | Mode shapes estimation | Local estimators | - | Linear regression | Electro-mechanical states |
| [84,85] | OPF | Microgrids | Central critic server | Deep reinforcement learning | Loss gradients |
| [87] | NILM | Households | Utility | Federated learning | Gradient information |
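Several rows of Table 2 share the same pattern: agents run tabular Q-learning locally and exchange only rewards or value estimates rather than raw measurements. A minimal single-agent sketch of that update is below; the toy environment and all dimensions are illustrative assumptions, not drawn from the cited papers:

```python
# Tabular Q-learning sketch of the reward/value-function exchange pattern:
# learning needs only (state, action, reward, next state) tuples, which is
# why agents can cooperate by sharing rewards instead of underlying data.
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions = 4, 2
alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration

def step(s, a):
    """Toy environment: action 1 yields reward 1 in every state,
    and the state advances cyclically."""
    return (s + 1) % n_states, float(a == 1)

Q = np.zeros((n_states, n_actions))
s = 0
for _ in range(2000):
    # epsilon-greedy action selection
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
    s2, r = step(s, a)
    # Standard Q-learning temporal-difference update
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    s = s2

print(Q.argmax(axis=1))  # learned greedy policy: action 1 in every state
```

In the multi-agent settings surveyed above, each agent would maintain its own Q-table for its local control device (e.g., a STATCOM or an energy storage system) and broadcast rewards or value functions to its neighbors or to a virtual server.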
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
