Article

A Predictive Model Using Long Short-Time Memory (LSTM) Technique for Power System Voltage Stability

by Muhammad Jamshed Abbass, Robert Lis * and Waldemar Rebizant *
Faculty of Electrical Engineering, Wrocław University of Science and Technology, 27 Wybrzeże Stanisława Wyspiańskiego St., 50-370 Wrocław, Poland
* Authors to whom correspondence should be addressed.
Appl. Sci. 2024, 14(16), 7279; https://doi.org/10.3390/app14167279
Submission received: 27 June 2024 / Revised: 23 July 2024 / Accepted: 15 August 2024 / Published: 19 August 2024
(This article belongs to the Section Electrical, Electronics and Communications Engineering)

Abstract:
The stability of power system operation is essential to ensure a continuous supply of electricity that meets the system load. In the operational process, voltage stability (VS) should be recognized and predicted as a basic requirement. Deep learning and machine learning algorithms have found widespread applications in electrical systems; these algorithms can learn from previous data to detect and predict future scenarios of potential instability. This study introduces the long short-term memory (LSTM) technique to predict the voltage stability of the power system. Based on the results, the recommended LSTM technique achieved the highest accuracy target of 99.5%. In addition, the LSTM model outperforms other machine learning (ML) and deep learning techniques, i.e., support vector machines (SVMs), Naive Bayes (NB), and convolutional neural networks (CNNs), when comparing the accuracy of the VS forecast. The results show that the LSTM method is useful for predicting the voltage stability of an electrical system. Tests on the IEEE 33-bus system indicate that the recommended approach can rapidly and precisely determine the system stability category. Furthermore, the proposed method outperforms conventional assessment methods that rely on shallow learning.

1. Introduction

Maintaining a continuous and reliable power supply for consumer loads hinges on the crucial task of ensuring the operational stability of the power grid. Modern power systems, as shown in Figure 1, are intricate systems that are vulnerable to a variety of factors, including time-varying loads, Renewable Energy Sources (RESs), equipment failures, and unexpected events such as natural disasters. These factors can cause significant variations in power generation and consumption, leading to instability in power systems [1]. Early investigations developed several methodologies for evaluating the VS of the power system, including the analytical method [2,3,4], the honey badger algorithm [5,6] for DG allocation and sizing, the continuation power flow method [7], the singular value decomposition method [8], and predictive control [9]. In recent years, deep learning (DL) and machine learning (ML) methods have shown great potential in real-world applications such as natural language processing, speech recognition, autonomous vehicles, image and vision processing, and electric load and energy prediction [10]. These techniques have also been applied in electric power systems for Volt-VAR control (VVC) in microgrid integration, smart inverters, microgrid energy management, solar photovoltaic power forecasting, fault diagnosis of active distribution grids, VS forecasting of the power system [11], and the overall stability and security of power systems [10]. In [12,13], deep reinforcement learning (DRL) techniques were used to optimize VVC in active distribution networks (DNs), addressing issues such as power loss and bus voltage deviations. In [14], a DRL model based on a soft actor–critic algorithm was created to enhance the VVC of smart solar photovoltaic inverters operating under droop control; the proposed multiagent DRL method coordinates optimal VVC for solar photovoltaic inverters and static var compensators, regulating nodal voltages in distribution networks. In [15], a deep Q network and a deep deterministic policy gradient method were used to stabilize the voltage of a 200-bus power system across various operating conditions, aiming to improve both the training efficiency and the precision of the proposed algorithm.
The frequency stability of electrical systems based on ML and DL algorithms has also garnered considerable interest [16,17,18,19]. In [19], a hybrid technique was introduced that combines fuzzy logic, deep learning, and a grey wolf optimizer based on ordered position to enhance microgrid frequency stability. In [20], a K-vector nearest-neighbor algorithm was described for dynamic frequency control, accounting for changes in wind turbine speed and time-varying loads. In [21,22,23,24,25], a spiking neural network was employed to tune the coefficients of a nonlinear integral backstepping controller and address frequency deviations in an autonomous microgrid caused by the intermittent nature of RESs, such as solar and wind energy. The long short-term memory (LSTM) network is designed to eliminate vanishing gradients, a common issue in regular recurrent neural networks (RNNs) that can hinder the capture of long-term dependencies within datasets. With its gating mechanism, the LSTM allows for selective retention or discarding of information from previous time steps, making it well suited for modeling sequences with persistent dependencies. The high precision of the LSTM classifier suggests that the sequential structure of the data plays a significant role in predicting the target variable. In this work, we propose using LSTM to forecast the VS of the IEEE 33-bus system. The LSTM model can determine the stability or instability of the power system based on historical inputs, such as load demand, generation capacity, and line parameters. Over the years, LSTM has become a popular choice for addressing various real-world problems due to its efficiency in handling both linear and non-linear data patterns.
This article proposes a long short-term memory (LSTM) model to forecast the VS of a 33-bus power network. The LSTM technique was evaluated by comparing its VS forecasting performance with that of other methods, including the support vector machine (SVM), Naive Bayes (NB), and convolutional neural network (CNN). The major contributions of this paper are as follows:
  • Designing a model that can predict the electrical grid voltage stability by combining deep learning and machine learning techniques. The proposed LSTM technique for VS assessment outperforms conventional models that rely on shallow machine learning methods, such as SVM, NB and CNN, in terms of accuracy and response time in simulations run on the IEEE 33-bus scheme.
  • Validating that the LSTM-based voltage stability evaluation method is more accurate and has a faster response time compared to conventional techniques based on traditional machine learning, such as artificial neural networks (ANNs), CNN, SVM, and decision trees (DTs), as demonstrated by the simulation outcomes of the IEEE 33-bus scheme.
  • Enhancing the performance of each model through hyper-tuning of the parameters.
The remaining parts of this research are organized as follows: Section 2 describes the methods used to resolve the VS assessment issue in the electrical power system. Section 3 presents the analysis and discussion of the findings. Section 4 concludes with a summary of the findings and proposals for future research.

2. The Proposed System Methodology

In this system, a constant voltage source represents an infinite bus, and the line and load impedances are denoted by $Z$ and $Z_x$, respectively. Until the maximum power transfer point is reached, more power can be delivered to the load by lowering the load impedance $Z_x$. Beyond that point, as $Z_x$ is decreased further, the voltage drop becomes more pronounced while the power demand continues to rise, which eventually results in less power being supplied to the load.
The P-V curve represents the process illustrated in Figure 2. At this point, the load receives an active power $P_0$. $V_x$ and $V_y$ are the voltages at the receiving and sending ends; $P_x$ and $P_y$ are the active powers at the receiving and sending ends; $S_x$ and $S_y$ are the apparent powers at the receiving and sending ends; $Q_x$ and $Q_y$ are the reactive powers at the receiving and sending ends; $\delta_x$ and $\delta_y$ are the voltage phase angles at the receiving and sending ends; $Y$ is the admittance; $R$ is the resistance; $X$ is the reactance; $Z$ is the impedance; $\vartheta$ is the impedance angle of the line; and $I$ is the current flowing through the line.
$S_x = P_{xi} + j Q_{xi} = V_x I^{*}$
$I = \dfrac{V_y \angle \delta_y - V_x \angle \delta_x}{Z \angle \vartheta}$
$S_x = \dfrac{V_x V_y \angle (\delta_x - \delta_y) - V_x^2 \angle 0}{Z \angle -\vartheta}$
$S_x = \dfrac{V_x V_y}{Z} \angle (\vartheta + \delta_x - \delta_y) - \dfrac{V_x^2}{Z} \angle \vartheta$
If $\delta = \delta_x - \delta_y$, the real and reactive power at the receiving end can be expressed as:
$P_{xi} = \dfrac{V_x V_y}{Z} \cos(\vartheta + \delta) - \dfrac{V_x^2}{Z} \cos\vartheta$
$Q_{xi} = \dfrac{V_x V_y}{Z} \sin(\vartheta + \delta) - \dfrac{V_x^2}{Z} \sin\vartheta$
The voltage stability margin is then
$V_s = \dfrac{P_{\max} - P_0}{P_{\max}}$
A classical load model is included in the system model used by the continuation power flow (CPF) approach [16]:
$P_{xi} = P_{x0i} + \lambda_0 P_{xdi}, \qquad Q_{xi} = Q_{x0i} + \lambda_0 Q_{xdi}$
where $P_{xi}$ and $Q_{xi}$ represent the active and reactive powers supplied to load bus $i$; the parameter $\lambda_0$ defines the system loading level; and $P_{xdi}$ and $Q_{xdi}$ represent the rates at which the active and reactive powers of load bus $i$ change, respectively.
$P_i = V_i^2 L_{ii} + V_i \sum_{c} V_c \left( L_{ic}\cos\theta_{ic} + M_{ic}\sin\theta_{ic} \right)$
$Q_i = V_i^2 M_{ii} + V_i \sum_{c} V_c \left( L_{ic}\cos\theta_{ic} - M_{ic}\sin\theta_{ic} \right)$
where $V_i$ is the voltage magnitude at bus $i$; $\theta_{ic}$ is the voltage angle difference between buses $i$ and $c$; and $L_{ic}$ and $M_{ic}$ are the real and imaginary parts of the $(i,c)$ element of the system admittance matrix. The changes in active and reactive power as the parameter $\lambda_0$ varies determine the direction of load growth. The CPF approach raises $\lambda_0$ gradually, increasing the system load until the maximum loading limit is reached. At the bifurcation point, $\lambda_0$, $P_x$, and $Q_x$ attain their maximum values $\lambda_{\max}$, $P_{\max}$, and $Q_{\max}$, respectively.
$V_x = \sqrt{\dfrac{V_y^2}{2} - P_i X \tan\vartheta \pm \sqrt{\dfrac{V_y^4}{4} - P_i^2 X^2 - X V_y^2 P_i \tan\vartheta}}$
where $\vartheta$ is the power-factor angle, and $V_x$ and $P_i$ describe the constant-power-factor P-V curve.
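To make the use of these expressions concrete, the short Python sketch below traces the upper branch of the P-V curve from the closed-form voltage formula above and evaluates the stability margin $V_s = (P_{\max} - P_0)/P_{\max}$. The per-unit values for the sending-end voltage, line reactance, power factor, and operating point are illustrative assumptions, not parameters of the test system used in this paper.

```python
# Minimal sketch: trace the P-V curve from the closed-form expression above and
# estimate the stability margin VS = (P_max - P_0) / P_max.
# Assumed illustrative per-unit values: V_y (sending-end voltage), X (line reactance),
# power-factor angle, and operating point P_0.
import numpy as np

V_y = 1.0                  # sending-end voltage magnitude [p.u.]
X = 0.3                    # line reactance [p.u.]
theta = np.arccos(0.95)    # power-factor angle for a 0.95 lagging power factor

def receiving_end_voltage(P, V_y, X, theta):
    """Upper-branch solution of the P-V curve at a constant power factor."""
    inner = V_y**4 / 4 - P**2 * X**2 - X * V_y**2 * P * np.tan(theta)
    if inner < 0:
        return np.nan      # beyond the nose point: no real solution (voltage collapse)
    return np.sqrt(V_y**2 / 2 - P * X * np.tan(theta) + np.sqrt(inner))

P_range = np.linspace(0.0, 2.0, 2001)
V_curve = np.array([receiving_end_voltage(P, V_y, X, theta) for P in P_range])

P_max = P_range[~np.isnan(V_curve)].max()   # loadability limit (nose of the curve)
P_0 = 0.6                                   # assumed current operating point [p.u.]
VS_margin = (P_max - P_0) / P_max
print(f"P_max = {P_max:.3f} p.u., stability margin VS = {VS_margin:.2%}")
```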
The importance of implementing an artificial intelligence strategy in the electric power system is growing, driven by its effectiveness across various domains, including load prediction, power system assessment, and fault detection. To handle the required data processing and create visual representations, three machine learning frameworks were used: scikit-learn, TensorFlow, and PyTorch.
The proposed model for assessing VS using LSTM consists of five primary components: an input layer, an LSTM layer, a fully connected (FC) layer, a softmax layer, and an output layer. The tools utilized in this model include the standard scaler for statistical normalization, the confusion matrix for performance evaluation, and K-fold cross-validation. Figure 3 illustrates the typical structure of the deep learning models used. The LSTM approach was used to forecast the P-V curves based on VS assessments of the electrical system, as shown in Figure 4. In addition, a comparison was made with other machine learning (ML) and deep learning (DL) models, including support vector machines (SVMs), Naive Bayes, K-nearest neighbors (KNN), and convolutional neural networks (CNNs), to evaluate the efficacy of the long short-term memory (LSTM) algorithm. Power–voltage (P-V) curve analysis is a widely used method for assessing voltage stability in power systems. The curve illustrates the relationship between the power transferred over the lines and the voltage magnitudes at the buses: a steep slope reveals high voltage sensitivity, while a flattening slope indicates that the system is approaching instability. During the data analysis procedure, the scikit-learn library was employed with a labeled input dataset; in this case, a dataset consisting of 30,000 observations was used. In this framework, a numerical value of 1 represents a stable state, whereas a value of 0 indicates an unstable state.
The assessment of features involved generating graphical representations of the 12 features, providing visual insights into their distributions and their correlation with the dependent variable 'Control Action'. The dataset was divided into two distinct subsets, one for training and the other for evaluation: the algorithm was trained on a first subset of 24,000 data points and then tested on the remaining 6000 data points. The predictor variables in the dataset included the nominal power that each participant in the network generated (positive) or consumed (negative), ranging from −3.0 to −0.5 for consuming nodes and labeled against the power grid load value used to determine the stability or instability of the IEEE 33-bus system, and the reaction time of each participant, ranging from 0.5 to 1.0. The dataset also included the price elasticity of demand for each node in the network as a continuous variable ranging from 0.05 to 1.0.
The principle of energy conservation means that the overall load demand equals the total generation capacity. A non-linear activation function was used to help the model adapt its input responses to its environment. The dataset underwent an initial filtering process to eliminate missing values, such as blank or zero entries within its features, thereby ensuring the accuracy and validity of the results. After the missing values were removed, a thorough search was conducted to identify artifacts such as outliers, duplicates, or atypical patterns that could affect the analysis. The data were divided into training and testing sets with a 70:30 ratio, and a system model was constructed to evaluate the effectiveness of the training and testing data. Assigning 70% of the dataset for training and reserving 30% for testing helped to prevent overfitting of the model to the training data and improved its capacity to generalize to new, unseen data. In any ML technique, it is critical to select the appropriate number of layers, types of layers, and activation functions; the settings and parameters of these techniques are discussed in further detail below.
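A minimal sketch of this preprocessing pipeline is shown below, assuming the 30,000-sample dataset is available as a CSV file with a 'Control Action' label column; the file name and the random seed are placeholders rather than details taken from the paper.

```python
# Sketch of the preprocessing described above (file/column names are placeholders).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Load the labeled dataset; 'Control Action' is the stability label: 1 = stable, 0 = unstable.
data = pd.read_csv("voltage_stability_dataset.csv")

# Remove rows with missing (blank/NaN) feature values, as described in the text.
data = data.dropna()

X = data.drop(columns=["Control Action"]).values
y = data["Control Action"].values

# 70:30 training/testing split (24,000 / 6,000 samples for the 30,000-row dataset).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=42, stratify=y)

# Statistical normalization with the standard scaler, fitted on the training data only.
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Reshape to (samples, time steps, features) for the LSTM input shape used later, e.g. (13, 1).
X_train_lstm = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))
X_test_lstm = X_test.reshape((X_test.shape[0], X_test.shape[1], 1))
```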
a. LSTM-Based Power System Voltage Stability Assessment Model
The LSTM concepts and the proposed LSTM algorithm for predicting voltage stability are explained below. The forget, input, and output gates of an LSTM cell are computed as follows:
$a_i = \alpha\left( X_a \cdot [h_{i-1}, x_i] + j_a \right)$
$b_i = \alpha\left( X_b \cdot [h_{i-1}, x_i] + j_b \right)$
$c_i = \alpha\left( X_c \cdot [h_{i-1}, x_i] + j_c \right)$
An LSTM cell comprises an input gate, a forget gate, and an output gate. The memory cell in an LSTM determines how information is added to or removed from the cell state at each time step. The components are as follows: $a_i$ is the forget gate, $b_i$ is the input gate, and $c_i$ is the output gate; $X_a$, $X_b$, $X_c$, $X_e$ and $j_a$, $j_b$, $j_c$, $j_e$ are the associated weight matrices and bias vectors, respectively; $\alpha(\cdot)$ is the sigmoid function. In summary, the LSTM network learns how to manage and update the cell state through these gates and their associated weights and biases. The sigmoid function controls the amount of information retained or discarded, which is a crucial part of the learning process.
The complete update of the cell state is expressed as follows:
$\tilde{E}_i = \tanh\left( X_e \cdot [h_{i-1}, x_i] + j_e \right)$
$E_i = a_i * E_{i-1} + b_i * \tilde{E}_i$
where $*$ denotes element-wise multiplication, $\tilde{E}_i$ is the candidate cell state, and $E_{i-1}$ and $E_i$ denote the cell state at time steps $i-1$ and $i$, respectively.
The hidden state hi is evaluated as follows:
$h_i = c_i \times \tanh(E_i)$
The non-linear activation function is represented by $\tanh$. The LSTM network can extract correlations over extended time horizons thanks to its special architecture, which replaces the hidden-layer nodes with memory cells. The dynamics of the post-contingency system serve as the input to this evaluation model. The VS of the IEEE 33-bus system was predicted using the LSTM algorithm. The network used the rectified linear unit (ReLU) activation function from the first to the fifth layer, improving its ability to represent non-linear relationships and capture intrinsic patterns.
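The gate and state equations above can be read as a single recurrence; the NumPy sketch below implements one cell step in that notation ($a_i$ forget gate, $b_i$ input gate, $c_i$ output gate, $\tilde{E}_i$ candidate cell state). The toy dimensions and random weights are purely illustrative.

```python
# Minimal NumPy sketch of one LSTM cell step using the gate equations above.
# a: forget gate, b: input gate, c: output gate, E_tilde: candidate cell state.
# Weight matrices X_* act on the concatenation [h_{i-1}, x_i]; sizes are illustrative.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell_step(x_i, h_prev, E_prev, params):
    """One time step of an LSTM cell; params holds X_a..X_e and j_a..j_e."""
    z = np.concatenate([h_prev, x_i])                      # [h_{i-1}, x_i]
    a = sigmoid(params["X_a"] @ z + params["j_a"])         # forget gate
    b = sigmoid(params["X_b"] @ z + params["j_b"])         # input gate
    c = sigmoid(params["X_c"] @ z + params["j_c"])         # output gate
    E_tilde = np.tanh(params["X_e"] @ z + params["j_e"])   # candidate cell state
    E = a * E_prev + b * E_tilde                           # cell state update
    h = c * np.tanh(E)                                     # hidden state
    return h, E

# Toy dimensions: 1 input feature, 4 hidden units (illustrative only).
rng = np.random.default_rng(0)
n_in, n_hid = 1, 4
params = {f"X_{g}": rng.standard_normal((n_hid, n_hid + n_in)) * 0.1 for g in "abce"}
params.update({f"j_{g}": np.zeros(n_hid) for g in "abce"})

h, E = np.zeros(n_hid), np.zeros(n_hid)
for x_t in np.array([[0.2], [0.5], [0.9]]):                # short input sequence
    h, E = lstm_cell_step(x_t, h, E, params)
print(h)
```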
$V_x = \mathrm{softmax}\left( X_s \cdot h_t + j_s \right)$
where $h_t$ is the hidden feature, and $X_s$ and $j_s$ are the weights and biases tuned on the training data throughout the procedure. The final output of the structure, $V_x$, indicates the voltage stability status of the electrical power system. The sigmoid activation function in the last layer allows the network to produce an output within a limited range, usually between 0 and 1, which is suitable for predicting voltage stability, as shown in Table 1. The initial layer of the LSTM model was an LSTM layer with 32 units, the ReLU activation function, the return-sequence parameter set to true, and an input shape of (13, 1). The second layer was an additional LSTM layer with 24 units, the ReLU activation function, and the return-sequence parameter set to true. The third LSTM layer consisted of 16 units with the ReLU activation function. The fourth layer was a densely connected layer with 24 units and the ReLU activation function, and the fifth was a dense layer with 48 units. Finally, the output layer was a dense layer with a single unit, to which the sigmoid activation function was applied to produce the desired result. The proposed LSTM model comprised a total of 14,105 trainable parameters, as shown in Table 1, indicating the number of adjustable weights and biases within the model. The model used the Adam optimization algorithm with a learning rate of 0.01, the binary cross-entropy loss function to measure the difference between the predicted and true labels, and accuracy as the evaluation metric.
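For reference, the following sketch shows how the layer stack described above can be assembled in TensorFlow/Keras; the layer sizes, activations, optimizer, and loss follow the description and Table 1, while the variable names and the call to model.summary() are illustrative. Built this way, the model reproduces the 14,105 trainable parameters reported in Table 1.

```python
# Sketch of the described LSTM stack (assumes TensorFlow/Keras is available);
# layer sizes, activations, optimizer, and loss follow the description above.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.LSTM(32, activation='relu', return_sequences=True, input_shape=(13, 1)),  # LSTM_0
    layers.LSTM(24, activation='relu', return_sequences=True),                       # LSTM_1
    layers.LSTM(16, activation='relu'),                                              # LSTM_2
    layers.Dense(24, activation='relu'),                                             # Dense_0
    layers.Dense(48, activation='relu'),                                             # Dense_1
    layers.Dense(1, activation='sigmoid'),                                           # Dense_2: 1 = stable, 0 = unstable
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
              loss='binary_crossentropy',
              metrics=['accuracy'])
model.summary()  # total trainable parameters: 14,105, matching Table 1
```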
Each ML technique was trained with a hyperparameter search to find the optimal values of the corresponding parameters. The performance of the SVM, Naive Bayes, and CNN models obtained by tuning their hyperparameters is shown in Table 2.
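As an illustration of this kind of search, the sketch below tunes the SVM kernel and regularization strength C (including the value of 2 reported in the conclusions) and a smoothing parameter for Gaussian Naive Bayes using scikit-learn's GridSearchCV. The grids are illustrative, and X_train and y_train are assumed to come from the preprocessing sketch above.

```python
# Sketch of hyperparameter tuning for the shallow classifiers (grids are illustrative).
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB

# SVM: search over the kernels listed in Table 2 and a few regularization strengths C.
svm_grid = GridSearchCV(
    SVC(),
    param_grid={"kernel": ["rbf", "linear", "sigmoid"], "C": [0.5, 1, 2, 4]},
    scoring="accuracy", cv=5)
svm_grid.fit(X_train, y_train)
print("best SVM params:", svm_grid.best_params_)

# Gaussian Naive Bayes: a smoothing-parameter sweep (var_smoothing used here as the tunable knob).
nb_grid = GridSearchCV(
    GaussianNB(),
    param_grid={"var_smoothing": [1e-9, 1e-7, 1e-5, 1e-3]},
    scoring="accuracy", cv=5)
nb_grid.fit(X_train, y_train)
print("best NB params:", nb_grid.best_params_)
```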
The DL models mentioned above were trained for 30 epochs on the training set. After training, they were verified on the test dataset, and accuracy was assessed using the accuracy score from sklearn.metrics and displayed on the console. Using pre-existing models saves time and computation, as they have prior exposure to extensive data, eliminating the need for training from scratch, which is time-consuming and computationally demanding.
b. The Voltage Stability Assessment Indicators
In this paper, the proposed approach is thoroughly evaluated using statistical indicators such as the AUC and the F1 score, in addition to accuracy [20]. A stable sample classified as stable is a true positive (TP), while a stable sample classified as unstable is a false negative (FN). A true negative (TN) is produced when an unstable sample is determined to be unstable, and a false positive (FP) when the opposite is true. These metrics are used to determine how well the ML models work [17]. The accuracy indicator measures the alignment between the model predictions and the actual results, representing the proportion of correct predictions among the total number of predictions, as shown below:
$\mathrm{Accuracy} = \dfrac{TP + TN}{TP + TN + FP + FN}$
Precision is the proportion of accurate positive predictions to the total number of positive predictions. This indicator, referred to here as the actual positive rate (APR), measures the percentage of predicted positives that are correctly identified by the model:
$\mathrm{APR} = \dfrac{TP}{TP + FP}$
Recall represents the ratio of true positives to the sum of true positives and false negatives:
$\mathrm{Recall} = \dfrac{TP}{TP + FN}$
The $F_1$ score is the harmonic mean of precision and recall. It serves as a metric to assess the trade-off between precision and recall, especially in situations with a notable class imbalance. Mathematically, it is defined as follows:
$F_1\ \mathrm{score} = \dfrac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$
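These indicators can be computed directly from the confusion matrix of the test-set predictions; the sketch below does so both by hand and with scikit-learn, assuming the fitted model and the held-out split from the earlier sketches are available.

```python
# Compute the evaluation indicators defined above from test-set predictions.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, confusion_matrix)

# y_test: true labels (1 = stable, 0 = unstable); y_prob/y_pred: model outputs.
y_prob = model.predict(X_test_lstm).ravel()   # sigmoid probabilities from the LSTM sketch above
y_pred = (y_prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)          # accuracy
precision = tp / (tp + fp)                          # actual positive rate (APR)
recall = tp / (tp + fn)                             # recall
f1 = 2 * precision * recall / (precision + recall)  # F1 score

# The same values via scikit-learn, plus the AUC mentioned in the text.
print(accuracy_score(y_test, y_pred), precision_score(y_test, y_pred),
      recall_score(y_test, y_pred), f1_score(y_test, y_pred),
      roc_auc_score(y_test, y_prob))
```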

3. Results and Discussion of the Machine Learning and Deep Learning Algorithms

The proposed scheme was experimentally implemented using the Python programming language in a Jupyter Notebook environment. The LSTM was run on a personal computer (PC) with the Microsoft Windows 11 operating system, an Intel Core i7 processor at 2.2 GHz, and 16 GB of memory. The cross-correlation of all features present in the dataset was evaluated.
The figures below show significant correlations among the features. In machine learning, classification is a type of supervised learning whose aim is to predict the class labels of test data using patterns learned from the training data. There are several classification algorithms, including support vector machines (SVMs), Naive Bayes, K-nearest neighbors (KNN), long short-term memory (LSTM), convolutional neural networks (CNNs), and TabNet. These algorithms can be evaluated using the accuracy metric, which quantifies the ratio of correctly classified instances to the total number of instances, and their efficiency was measured in these terms. The LSTM model was trained and validated to predict the VS of the system, as shown in Figure 5.
The figure shows the accuracy achieved by the LSTM classifier, reaching 0.9995 (99.95%). Over time, clear patterns and connections emerged between the input factors and the target outcome, which explains this result.
The LSTM algorithm excels at integrating data from previous time steps to improve its predictive capabilities. Due to this feature, the model can capture long-term temporal correlations and make more accurate predictions. However, an accuracy of 0.9995 may indicate that the model has memorized the training data rather than properly applying its knowledge to unknown data, which would be evidence of overfitting. The generalizability of a model can be established by testing it on data that were not used to create it. Overfitting-related problems can be remedied by experimenting with regularization strategies such as dropout or weight decay, as sketched below.
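A sketch of such regularization, shown here as dropout inside the recurrent layers, L2 weight decay on a dense layer, and early stopping on the validation loss, is given below; the rates, penalty, and patience values are illustrative and are not taken from the paper.

```python
# Illustrative regularization options for the LSTM sketch above: dropout inside the
# recurrent layers, L2 weight decay on a dense layer, and early stopping on validation loss.
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

regularized = models.Sequential([
    layers.LSTM(32, activation='relu', return_sequences=True,
                input_shape=(13, 1), dropout=0.2, recurrent_dropout=0.2),
    layers.LSTM(16, activation='relu', dropout=0.2),
    layers.Dense(24, activation='relu', kernel_regularizer=regularizers.l2(1e-4)),
    layers.Dense(1, activation='sigmoid'),
])
regularized.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5,
                                              restore_best_weights=True)
# history = regularized.fit(X_train_lstm, y_train, validation_split=0.2,
#                           epochs=30, callbacks=[early_stop])
```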
Figure 6 shows the loss curves of the LSTM model during both the training and validation on the VS dataset. The voltage stability dataset was analyzed using the proposed LSTM algorithm. Figure 7 displays the resulting confusion matrix and Figure 8 visually depicts the remarkable accuracy of the CNN algorithm of 0.9610 in predicting the variable VS.
This study demonstrates the successful application of CNN models, typically used for image recognition and processing tasks, to tabular data classification. Under these circumstances, it is likely that the CNN model exploits the underlying spatial relationships in the input features, which enabled it to identify significant patterns and resulted in a noticeable improvement in accuracy. Non-linear activation functions, such as ReLU, introduce non-linearity to augment the capabilities of the model. Furthermore, the pooling layers decreased the size of the output, thereby reducing the number of parameters and preventing overfitting. This study used a one-dimensional convolutional neural network (1DCNN) to forecast VS on the given dataset. Figure 9 displays the loss curves for the CNN model. The results show that the model performed well, with a total loss of 0.0962 and an accuracy of 0.9604 on the training set, and a loss of 0.1078 and an accuracy of 0.9570 on the validation set. Figure 10 shows a visual representation of the confusion matrix. These results highlight the ability of the CNN algorithm to predict VS on this dataset.
The reliability and ability to generalize to new data are shown by the remarkable accuracy attained in the validation set. With an overall accuracy of 0.9610 in the voltage stability dataset, the CNN model proved that it could learn and extract significant patterns from input features, leading to very accurate predictions. The ML and DL models mentioned above are compared in Figure 11 to forecast the VS of the IEEE 33-bus system.

4. Conclusions

In this work, the voltage stability of the 33-bus power scheme was predicted using several ML and DL algorithms. The SVM model showed a high degree of accuracy, particularly for the linear kernel with the regularization parameter C set to 2. The strong performance of the Gaussian NB algorithm was demonstrated by the high accuracy of 0.98 achieved by the Naive Bayes model over the entire range of alpha values examined. The results provide strong evidence that the proposed LSTM model achieved an accuracy of 99.95%. Moreover, the LSTM model clearly outperforms the SVM, Naive Bayes, and CNN models in predictive accuracy for voltage stability.

Author Contributions

Conceptualization, M.J.A. and R.L.; methodology, M.J.A.; software, M.J.A.; validation, M.J.A., R.L. and W.R.; formal analysis, R.L.; investigation, M.J.A.; resources, M.J.A.; data curation, M.J.A.; writing—original draft preparation, M.J.A. and R.L.; writing—review and editing, R.L. and W.R.; visualization, M.J.A.; supervision, R.L. and W.R.; project administration, R.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Alam, M.M.; Rahman, M.H.; Ahmed, M.F.; Chowdhury, M.Z.; Jang, Y.M. Optimal energy management based on deep learning for the integrated home microgrid system for photovoltaic and battery energy storage. Sci. Rep. 2022, 12, 15133.
2. Amani, A.M.; Sajjadi, S.S.; Somaweera, W.A.; Jalili, M.; Yu, X. Data-driven model predictive control of community batteries for voltage regulation in power grids subject to EV charging. Energy Rep. 2023, 9, 236–244.
3. Bai, H.; Ajjarapu, V. A Novel Online Load Shedding Strategy for Mitigating Fault-Induced Delayed Voltage Recovery. IEEE Trans. Power Syst. 2011, 26, 294–304.
4. Li, Y.; Yang, Z.; Li, G.; Zhao, D.; Tian, W. Optimal Scheduling of an Isolated Microgrid With Battery Storage Considering Load and Renewable Generation Uncertainties. IEEE Trans. Ind. Electron. 2019, 66, 1565–1575.
5. Li, Y.; Li, K. Incorporating Demand Response of Electric Vehicles in Scheduling of Isolated Microgrids With Renewables Using a Bi-Level Programming Approach. IEEE Access 2019, 7, 116256–116266.
6. Praprost, K.L.; Loparo, K.A. An energy function method for determining voltage collapse during a power system transient. IEEE Trans. Circuits Syst. I Fundam. Theory Appl. 1994, 41, 635–651.
7. Kawabe, K.; Tanaka, K. Analytical Method for Short-Term Voltage Stability Using the Stability Boundary in the P-V Plane. IEEE Trans. Power Syst. 2014, 29, 3041–3047.
8. Li, Y.; Yang, Z. Application of EOS-ELM With Binary Jaya-Based Feature Selection to Real-Time Transient Stability Assessment Using PMU Data. IEEE Access 2017, 5, 23092–23101.
9. Xu, Y.; Zhang, R.; Zhao, J.; Dong, Z.Y.; Wang, D.; Yang, H.; Wong, K.P. Assessing Short-Term Voltage Stability of Electric Power Systems by a Hierarchical Intelligent System. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 1686–1696.
10. Zhu, L.; Lu, C.; Sun, Y. Time Series Shapelet Classification Based Online Short-Term Voltage Stability Assessment. IEEE Trans. Power Syst. 2016, 31, 1430–1439.
11. Zhu, L.; Lu, C.; Dong, Z.Y.; Hong, C. Imbalance Learning Machine-Based Power System Short-Term Voltage Stability Assessment. IEEE Trans. Ind. Inform. 2017, 13, 2533–2543.
12. Zhang, Y.; Xu, Y.; Dong, Z.Y.; Zhang, R. A Hierarchical Self-Adaptive Data-Analytics Method for Real-Time Power System Short-Term Voltage Stability Assessment. IEEE Trans. Ind. Inform. 2019, 15, 74–84.
13. Ren, C.; Xu, Y.; Zhang, Y.; Zhang, R. A Hybrid Randomized Learning System for Temporal-Adaptive Voltage Stability Assessment of Power Systems. IEEE Trans. Ind. Inform. 2020, 16, 3672–3684.
14. Greff, K.; Srivastava, R.K.; Koutník, J.; Steunebrink, B.R.; Schmidhuber, J. LSTM: A Search Space Odyssey. IEEE Trans. Neural Netw. Learn. Syst. 2017, 28, 2222–2232.
15. Abbass, M.J.; Lis, R.; Mushtaq, Z. Artificial Neural Network (ANN)-Based Voltage Stability Prediction of Test Microgrid Grid. IEEE Access 2023, 11, 58994–59001.
16. Alimi, O.A.; Ouahada, K.; Abu-Mahfouz, A.M. A review of machine learning approaches to power system security and stability. IEEE Access 2020, 8, 113512–113531.
17. Abbass, M.J.; Lis, R.; Awais, M.; Nguyen, T.X. Convolutional Long Short-Term Memory (ConvLSTM)-Based Prediction of Voltage Stability in a Microgrid. Energies 2024, 17, 1999.
18. Nazari-Heris, M.; Asadi, S.; Mohammadi-Ivatloo, B.; Abdar, M.; Jebelli, H.; Sadat-Mohammadi, M. Application of Machine Learning and Deep Learning Methods to Power System Problems; Springer: Berlin/Heidelberg, Germany, 2021.
19. Shafiullah, M.; AlShumayri, K.A.; Alam, M.S. Machine learning tools for active distribution grid fault diagnosis. Adv. Eng. Softw. 2022, 173, 103279.
20. Sharma, D. Fuzzy with adaptive membership function and deep learning model for frequency control in PV-based microgrid system. Soft Comput. 2022, 26, 9883–9896.
21. Shanthamallu, U.S.; Spanias, A. Machine and Deep Learning Algorithms and Applications; Morgan & Claypool Publishers: San Rafael, CA, USA, 2021.
22. Singh, P. Fundamentals and Methods of Machine and Deep Learning: Algorithms, Tools, and Applications; John Wiley & Sons: Hoboken, NJ, USA, 2022.
23. Venkatesan, R.; Li, B. Convolutional Neural Networks in Visual Computing: A Concise Guide; CRC Press: Boca Raton, FL, USA, 2017.
24. Weedy, B.M.; Cory, B.J.; Jenkins, N.; Ekanayake, J.B.; Strbac, G. Electric Power Systems; John Wiley & Sons: Hoboken, NJ, USA, 2012.
25. Wang, X.; Luo, Y.; Qin, B.; Guo, L. Power allocation strategy for urban rail HESS based on deep reinforcement learning sequential decision optimization. IEEE Trans. Transp. Electrif. 2022, 9, 2693–2710.
Figure 1. Architecture of modern electric power systems.
Figure 2. The P-V curve [16].
Figure 3. Structure of deep learning models.
Figure 4. Structure of the evaluation and labeling process (IEEE bus system).
Figure 5. Accuracy in training and validation of long short-term memory.
Figure 6. Loss of long short-term memory training and validation.
Figure 7. Confusion matrix of the LSTM algorithm for forecasting VS.
Figure 8. CNN model training and validation accuracy for the IEEE 33-bus system.
Figure 9. CNN model training and validation loss for the IEEE 33-bus system.
Figure 10. Confusion matrix of the voltage stability dataset for SVM, Naive Bayes and 1DCNN.
Figure 11. The assessment of the accuracy of all classifiers.
Table 1. The proposed parameters of the LSTM prototype.

Primary Layer (Type) | System Output Shape | System Parameters
LSTM_0 | (None, 13, 32) | 4352
LSTM_1 | (None, 13, 24) | 5472
LSTM_2 | (None, 16) | 2624
Dense_0 | (None, 24) | 408
Dense_1 | (None, 48) | 1200
Dense_2 | (None, 1) | 49
Total parameters: 14,105
Trainable parameters: 14,105
Non-trainable parameters: 0
Table 2. Performance of the SVM and Naive Bayes system models.

Classifier | Porter (Stemmer) | Lovin (Stemmer) | Wordnet (Stemmer)
SVM (RBF) | 78.0% | 93.0% | 80.0%
SVM (Linear) | 81.0% | 94.0% | 80.3%
SVM (Sigmoid) | 82.0% | 95.0% | 82.1%
Naive Bayes | 90.0% | 94.5% | 96.3%