Article

Artificial Neural Networks: Multilayer Perceptron and Radial Basis to Obtain Post-Contingency Loading Margin in Electrical Power Systems

by Alfredo Bonini Neto 1, Dilson Amancio Alves 2 and Carlos Roberto Minussi 2,*
1 School of Sciences and Engineering, São Paulo State University (Unesp), Tupã 17602-496, Brazil
2 School of Engineering, São Paulo State University (Unesp), Ilha Solteira 15385-000, Brazil
* Author to whom correspondence should be addressed.
Energies 2022, 15(21), 7939; https://doi.org/10.3390/en15217939
Submission received: 17 September 2022 / Revised: 19 October 2022 / Accepted: 20 October 2022 / Published: 26 October 2022

Abstract

This paper presents an artificial neural network (ANN) approach for obtaining the complete P-V curves of electrical power systems subjected to contingencies. Two networks were used: the multilayer perceptron (MLP) and the radial basis function (RBF) network. The distinguishing feature of our methodology is the speed with which it obtains all the P-V curves of the system. The great advantage of ANN models is that they capture the nonlinear characteristics of the studied system and thus avoid iterative procedures. The applicability and effectiveness of the proposed methodology were investigated on the IEEE 14-bus test system and compared with the continuation power flow, which obtains the post-contingency loading margin starting from the base-case solution. The ANN performed well, with a mean squared error (MSE) in training below the specified value. The network estimated 98.4% of the voltage magnitude values within the established range, with residues around 10^-4 and an agreement between the desired and obtained outputs of approximately 98%, with better results for the RBF network than for the MLP.

1. Introduction

Electrical power dispatch has become more complicated in recent years with the development of large electrical grids, ultra-high voltage, and long-distance power transmission, making electrical power system stability problems more serious, with voltage collapse accidents occurring frequently [1]. These accidents have caused enormous economic losses and social impacts, making the issue of voltage stability more important.
In studies of the static voltage stability of electrical systems, companies in the electrical sector recommend obtaining bus voltage profiles (P-V curves) as a function of system loading. These profiles are used to (1) determine the limits of power transfer between areas of a system; (2) adjust safety margins; (3) observe the behavior of the voltage in the buses of the analyzed electrical system; and (4) compare planning strategies for the adequate expansion and reinforcement of electrical networks to avoid loss of load. The P-V curves allow a qualitative evaluation of the different operating conditions of the electrical system under different loading and contingency conditions. Tracing the P-V curve is the most appropriate methodology for calculating stability margins. Thus, one of the main objectives of this study is to obtain the maximum loading point of the system.
Traditionally, P-V curves have been traced with conventional power flow based on the Newton method, successively increasing the load until the iterative process fails to converge (singularity of the Jacobian matrix). For practical purposes, this point is taken as the maximum loading point.
Continuation power flow (CPF) is the most common static voltage stability analysis method [2,3,4] and includes four steps: prediction, step-size control, parameterization, and correction.
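To make the predictor-corrector idea concrete, the following sketch traces the P-V curve of a toy two-bus, lossless, unity-power-factor system. All numeric values (E, X, P0, step sizes, the switching threshold) are illustrative assumptions, not taken from the paper; the parameterization step is modeled by switching the continuation parameter from the loading factor to the voltage near the nose, where the Jacobian becomes singular.

```python
import numpy as np

E, X, P0 = 1.0, 0.5, 1.0       # slack voltage, line reactance, base load (toy values)

def f(V, lam):
    # mismatch of the 2-bus unity-power-factor model: V^4 - (E V)^2 + (lam P0 X)^2 = 0
    return V**4 - (E * V)**2 + (lam * P0 * X)**2

dfdV = lambda V, lam: 4 * V**3 - 2 * E**2 * V
dfdlam = lambda V, lam: 2 * lam * (P0 * X)**2

# predictor: step one parameter; corrector: Newton on the remaining unknown.
# Near the nose (df/dV ~ 0) the parameterization switches from lam to V.
V, lam, param = 1.0, 0.0, 'lam'
curve = [(lam, V)]
for _ in range(200):
    if param == 'lam':
        lam += 0.05                              # predictor step in the loading factor
        for _ in range(25):                      # corrector: Newton on V
            d = dfdV(V, lam)
            if abs(d) < 1e-9:
                break
            V -= f(V, lam) / d
        if abs(dfdV(V, lam)) < 0.3:              # Jacobian nearly singular: reparameterize
            param = 'V'
    else:
        V -= 0.02                                # predictor step in the voltage
        for _ in range(25):                      # corrector: Newton on lam
            lam -= f(V, lam) / dfdlam(V, lam)
        if V <= 0.1:
            break
    curve.append((lam, V))

lam_max = max(l for l, _ in curve)
print(round(lam_max, 3))                         # nose point: E^2 / (2 X P0) = 1.0
```

With these toy values the method traces the upper branch, passes the nose at λ = 1.0, and continues down the lower branch, which a plain Newton power flow cannot do.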
Systems that operate close to their operational limits are subject to the occurrence of a higher number of contingencies. In this context, security analysis is critical to identify contingencies that may affect the system. An electrical power system is subjected to a large number of contingencies, but few are severe enough to cause instability [5].
The Western System Coordinating Council (WSCC) [6] requires its companies to evaluate voltage stability margins by P-V and V-Q analyses, requiring at least a 5% real power margin in any simple contingency situation. On the other hand, the WSCC requires a 2.5% real power margin for severe contingencies (N-2).
In Brazil, the operational procedures of electrical grids of the National Electric System Operator (ONS) [7] suggest as a criterion for expansion planning that the stability margin be higher than or equal to 6% for simple contingency situations, using the same WSCC criteria.
These processes have motivated companies to invest in tools that aim to improve the operation of electrical power systems (EPS). Artificial neural networks (ANN) are one of these tools.
A relatively new and promising learning algorithm called an extreme learning machine (ELM) was proposed in [8] for a more accurate and efficient voltage stability margin prediction. The inputs to the prediction model are the system operating parameters and loading direction, and the output is the voltage stability margin. The average percent error using the algorithm is only 3.32% and the average error is only 0.0495, both of which are satisfactory for practical use.
Promising results were found in [9], whose ANN reproduced the same results with the same high precision and speed as conventional methods of voltage stability calculation. For this purpose, the loading parameter and the voltage stability margin index were calculated using eight different input variables and fourteen different training functions. This allowed for verifying which training function was the fastest and had the best resource to predict the loading margin and the voltage stability index.
Many works involving artificial neural networks have been proposed in the literature, especially the MLP and RBF networks. Reference [10] used artificial intelligence (AI) to identify predictive biomarkers for diffuse large B-cell lymphoma prognoses. Two neural networks using the MLP and RBF networks show the methodology for efficiently identifying biomarkers. Reference [11] presents an application of a multilayer perceptron (MLP) neural network model for fast and automatic prediction of the geometry of finished products. The results indicate that the training and testing were accurate (accuracy rate exceeded 92%), demonstrating the feasibility of the proposed method.
In this context, this study presents a different approach to obtaining post-contingency P-V curves. The use of multilayer perceptron (MLP) and radial basis function (RBF) artificial neural networks (ANN) is proposed to estimate the voltage magnitude in a system subjected to a simple or severe contingency.

2. Materials and Methods

The system studied in this research corresponds to the IEEE 14-bus configuration shown in Figure 1. The 1890 samples used for training, validation, and testing were obtained using the method presented in [3]. Each sample comprises 18 values: the 4 ANN inputs, represented by the loading factor λ, the real and reactive power generated at the reference bus (Pgslack and Qgslack), and the branch number (transmission line or transformer); and the 14 outputs, represented by the voltage magnitudes of all buses in the system.
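This sample layout can be sketched as follows. The numeric values here are random placeholders standing in for the continuation power flow results; only the shape (21 cases of 90 samples, 4 inputs + 14 outputs per row) reflects the paper.

```python
import numpy as np

n_buses, cases, per_case = 14, 21, 90          # r0 (base case) plus contingencies r1..r20
rng = np.random.default_rng(0)

samples = []
for branch in range(cases):                    # branch 0 denotes the pre-contingency case
    for _ in range(per_case):
        lam = rng.uniform(0.0, 1.8)            # loading factor (placeholder value)
        pg, qg = rng.uniform(0, 3), rng.uniform(-1, 1)   # slack-bus P and Q (placeholders)
        x = [lam, pg, qg, branch]              # the 4 ANN inputs
        v = rng.uniform(0.5, 1.1, n_buses)     # the 14 outputs: bus voltage magnitudes
        samples.append(np.concatenate([x, v]))

data = np.array(samples)
print(data.shape)                              # (1890, 18)
```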
The IEEE 14-bus system has 20 branches, as shown in Figure 1. Each branch can be represented by a transmission line or a transformer connecting two buses of the system. Ninety samples were obtained for each branch removed from the system, representing the applied contingency. Branch 1 (r1) is subjected to a severe contingency N-2 (double contingency) when it is removed from the system, resulting in a drastic reduction in the system loading margin, as shown in the results. The other contingencies are considered simple (N-1).
The ANNs used were an MLP [12] trained with the backpropagation algorithm [13] and an RBF network [14,15], both with three layers: an input layer with 4 neurons (the loading factor λ, the real and reactive power generated at the reference bus, Pgslack and Qgslack, and the branch number); an intermediate layer with 15 neurons for the MLP network and s neurons for the RBF network, where s is the number of centers, which also equals the number of radial basis functions (1 ≤ s ≤ p, where p is the number of samples); and an output layer with 14 neurons (the voltage magnitudes of all buses in the system), as shown in Figure 2. The data preparation and results were produced in Matlab® [16].
The RBF network (Figure 2b) differs significantly from the MLP network (Figure 2a): its hidden layer uses radial basis functions whose activation depends on the distance between the input vector and a prototype vector (the centers). This gives it several advantages, such as faster and more efficient training and no need to start from random weights, because training is incremental. RBF networks perform better than MLP in function approximation and time series, while MLP performs better in classification problems. These networks belong to the family of Gaussian function techniques [17].
Unlike a classification problem, the continuation power flow consists of a series of nonlinear equations whose solution is obtained by varying the loading factor; i.e., for each loading factor, a voltage magnitude value is obtained at each bus of the system. For this type of data, MLP networks, and especially RBF networks, which are better at approximating nonlinear functions such as those of the continuation power flow, are well suited.
The hyperbolic tangent, radial, and linear activation functions used in the two networks are given in (1), (2), and (3), respectively:

f(u) = (1 − e^(−tu)) / (1 + e^(−tu))    (1)

where t is an arbitrary constant, corresponding to the slope of the curve.

f(n1) = e^(−n1²)    (2)

f(u) = u    (3)
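The three activation functions are straightforward to implement; a minimal sketch:

```python
import numpy as np

def tanh_sigmoid(u, t=1.0):
    # Eq. (1): hyperbolic-tangent sigmoid; t sets the slope of the curve
    return (1 - np.exp(-t * u)) / (1 + np.exp(-t * u))

def radial(n1):
    # Eq. (2): Gaussian radial basis function
    return np.exp(-n1**2)

def linear(u):
    # Eq. (3): identity (linear) output activation
    return u

print(tanh_sigmoid(0.0), radial(0.0), linear(2.5))   # 0.0 1.0 2.5
```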
The distance between the central neuron and the input pattern is calculated when an input pattern is presented to the RBF network, and the network output to the central neuron is the result of applying the radial basis function at this distance. The mean square error (MSE) vector of the neural networks is calculated using (4):
MSE = (1/p) ∑_{i=1}^{p} (Yob_i − Ydes_i)²    (4)
in which Yob and Ydes are the obtained and desired outputs of the ANN, compared during the network training (MLP) and creation (RBF). The more similar they are to each other, the lower the error and the better the weight adjustment.
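Eq. (4) can be sketched directly; the two sample pairs below are hypothetical values for illustration:

```python
import numpy as np

def mse(y_ob, y_des):
    # Eq. (4): mean squared error over the p samples
    y_ob, y_des = np.asarray(y_ob), np.asarray(y_des)
    p = len(y_ob)
    return np.sum((y_ob - y_des) ** 2) / p

print(mse([0.60, 0.84], [0.61, 0.85]))   # ~1e-4
```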
Figure 3 presents the flowchart of the networks used in this paper. For both networks, R corresponds to the number of inputs (4: the loading factor λ, the real and reactive power generated at the reference bus, Pgslack and Qgslack, and the branch number), p to the number of training samples (1322), and T to the number of desired outputs, or targets (1322, the same as the number of training samples). For the RBF network only, s is the number of centers (weights), which is the quantity optimized for better performance; 108 centers were used. For the MLP network only, m corresponds to the number of neurons in the middle layer (15) and i to the number of neurons in the output layer (14). ‖W1x‖ represents the Euclidean distance weight function.
Neural networks that use the backpropagation algorithm, like many other types of artificial neural networks, can be seen as “black boxes”: it is largely unknown why the network reaches a certain result, since the models do not justify their answers. With this in mind, many studies have aimed to extract knowledge from artificial neural networks and to create explanatory procedures that justify the network’s behavior in certain situations [18]. Note that a slightly different result is obtained each time the network is retrained [18]. The values specified for the training and the number of layers were chosen after several tests; other values could be used.
During training with the backpropagation algorithm, the network operates in a two-step sequence. First, a pattern is presented to the network input layer. The resulting activity flows through the network, layer by layer, until the response is produced by the output layer. In the second step, the obtained output is compared to the desired output for that particular pattern. If this is not correct, the error is calculated. The error is propagated from the output layer to the input layer, and the connections’ weights of the internal layer units are modified as the error is backpropagated. This result emphasizes the ANN application potential, which can act both as a classification and as a prediction tool.
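The two-step sequence can be sketched for the 4-15-14 topology used here. This toy version uses tanh as the hidden activation, random weights and data, and an assumed learning rate, so it illustrates the mechanics of one forward/backward pass rather than the paper's exact training:

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(0, 0.1, (15, 4))     # input -> hidden weights (4-15-14 topology)
W2 = rng.normal(0, 0.1, (14, 15))    # hidden -> output weights
x, y_des = rng.uniform(size=4), rng.uniform(size=14)   # one toy training pattern
eta = 0.1                            # learning rate (assumed)

def forward(x):
    h = np.tanh(W1 @ x)              # hidden layer activity
    return h, W2 @ h                 # linear output layer

# step 1: propagate the pattern forward, layer by layer, to the output
h, y_ob = forward(x)
e0 = float(np.mean((y_ob - y_des) ** 2))

# step 2: compare with the desired output and backpropagate the error,
# adjusting the connection weights layer by layer
delta_out = y_ob - y_des                       # output-layer error
delta_hid = (W2.T @ delta_out) * (1 - h**2)    # error propagated through tanh
W2 -= eta * np.outer(delta_out, h)
W1 -= eta * np.outer(delta_hid, x)

_, y_new = forward(x)
e1 = float(np.mean((y_new - y_des) ** 2))
print(e1 < e0)                                 # the error decreases after the update
```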
For the RBF network, the error is propagated from the output layer to the input layer and the number of centers s is modified (increased) until the value of W2 is optimized and the error is minimized. The maximum value that could be used for the number of centers in this study is 1322 (number of samples p), which also corresponds to the number of radial basis functions. In this paper, only 108 centers (radial basis functions) were needed for MSE < 0.001.
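A sketch of this incremental center growth on a 1-D toy function (sin(x) standing in for the voltage-versus-loading relationship; the greedy center-selection rule and Gaussian width are assumptions for illustration): centers are added at the worst-fit sample and the output weights W2 are re-solved by least squares until MSE < 0.001.

```python
import numpy as np

X = np.linspace(-3, 3, 200).reshape(-1, 1)     # p = 200 toy samples
y = np.sin(X).ravel()                          # toy target function

def design(X, centers, width=1.0):
    # Gaussian radial basis: phi_ij = exp(-||x_i - c_j||^2 / width)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / width)

used = [0]                                     # start from a single center
for s in range(1, len(X) + 1):                 # s grows toward at most p
    Phi = design(X, X[used])
    W2, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # output weights by least squares
    err = float(np.mean((Phi @ W2 - y) ** 2))
    if err < 0.001:                            # stopping criterion used in the paper
        break
    resid = np.abs(Phi @ W2 - y)
    resid[used] = -1.0                         # never re-add an existing center
    used.append(int(np.argmax(resid)))

print(len(used), err < 0.001)
```

On this toy problem only a handful of centers are needed, mirroring how 108 centers out of 1322 samples sufficed in the paper.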

3. Results and Discussion

From the 1890 samples, 70% were randomly selected for training (1322 samples), 15% for validation (284 samples), and 15% for testing (284 samples). The results are shown for 100% of the samples, i.e., all three phases simultaneously. Figure 4 shows the training performances of the artificial neural networks used in this study. Figure 4a shows the MSE for each iteration of the MLP network: a value of 0.00092773, below the specified tolerance (0.001), was obtained after 48 iterations and 7 s of processing. Figure 4b shows the histogram of the error (Yob relative to the desired Ydes), with 20 intervals for the 26,460 data points in the training; the errors were around zero for most of the data. The error in estimating the voltage magnitudes using the MLP network was 1.023%. Better results were found for the RBF network, with an MSE of 0.000163906, well below the specified tolerance (0.001), as shown in Figure 4c, and with a higher accumulation of errors around zero in the histogram, as shown in Figure 4d. The error in estimating the voltage magnitudes using the RBF network was 0.14%. Table 1 confirms these results. The training parameters were iterations, time, performance, and correlation. For the MLP network, after 7 s and 48 iterations (48 comparisons of |Yob − Ydes| < 0.001), the error was smaller than the established 0.001, reaching 0.00092773, with a correlation between the desired and obtained outputs of 0.98977. Similar results are presented for the RBF network.
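The random 70/15/15 partition can be sketched at the index level as follows (the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
idx = rng.permutation(1890)                    # shuffle all sample indices
n_tr, n_va = 1322, 284                         # 70% / 15% / 15% split used in the paper
train, val, test = idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]
print(len(train), len(val), len(test))         # 1322 284 284
```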
Reference [10] also compared the MLP and RBF networks, and there the MLP network performed better than the RBF network. This is explained by the fact that [10] focuses on data classification, in which the MLP network, depending on the established parameters, can outperform the RBF network. The application in this paper, however, focuses on function approximation, for which the RBF network is better and faster than the MLP [10,17].
Figure 5 shows a correlation analysis for both methodologies in the training, validation, and testing of the networks. Figure 5a shows the correlation between the obtained and desired outputs for the MLP network, and Figure 5b for the RBF network. Both networks had good results, with a slight improvement for the RBF network, whose R value is very close to 1 (0.9986), indicating that over 99% of the variation in Yob is explained by Ydes.
Figure 6 shows the voltage magnitudes of critical bus 14 for all the samples (1890); that is, all the contingencies of the system, with desired output (Ydes) vs. obtained output (Yob) via ANN in training, validation, and testing. Figure 6a,b show the performance of the MLP and RBF networks, respectively. Again, the best result can be observed for the RBF network.
The number of centers s automatically adopted to create the RBF network was 108 for vector W1, and the processing time was 3 s after 100 iterations, as shown in Table 1. Although the number of iterations is higher than for the MLP network, the number of calculations performed is lower, resulting in less CPU time [19]. Other numbers of centers could be adopted, but 108 were already sufficient to reach the adopted criterion (0.001). The higher the number of centers, the more accurate the obtained value Yob.
Figure 7 shows the P-V curves of critical bus 14 for all the contingencies of the studied system, also presenting the similarity between the desired (Ydes) and the obtained outputs (Yob) by the MLP neural network, whose MSE was 0.00092773.
In the operating phase (validation and testing), the network estimated samples that were not part of the training process. We observed errors around 0.01945 between the network output (Yob) and the desired values (Ydes), showing that the network can act as an estimator of nodal voltage magnitudes.
For instance, Figure 7 shows three curves in the foreground relative to contingency r1. One corresponds to the pre-contingency P-V curve and the other two to the post-contingency curves (Ydes and Yob). Contingency r1, corresponding to the outage of the transmission line between buses 1 and 2 (N-2) (Figure 1), presents a significant reduction in the loading margin relative to the base case. Discrepancies can be observed at some points between the obtained and desired P-V curves. This behavior no longer occurs in Figure 8, where the P-V curves obtained by the RBF neural network show a much higher similarity to the desired ones. This is due to the better training of the RBF network, whose MSE was around 0.000163906, well below that of the MLP network; the obtained P-V curves (Yob) are practically identical to the desired curves (Ydes). Table 2 shows the values of the maximum loading points of bus 14 for both networks. Column 1 presents all the contingencies of the IEEE 14-bus system (r1–r20), plus r0, related to pre-contingency (normal operating conditions). Column 2 presents the loading factor (λ) of the system at the maximum loading point. Column 3 presents the voltage magnitude values (V14) obtained by the method presented in [3], representing the desired output (Ydes). Columns 4 and 5 present the voltage magnitude values (V14) obtained by the MLP and RBF networks (Yob). Better performance can be observed for the RBF network. The values at the maximum loading points correspond to the voltage magnitude at the maximum value of the loading factor λ (the critical point of the P-V curve). The desired output values (Ydes) were obtained by the method proposed in [3], a modified parameterized Newton method, with a precision of 10^-5.
Table 3 shows the differences between the values of the desired maximum loading points and those obtained in bus 14. The average error value was 0.0266 for the MLP network and 0.0122 for the RBF network, proving its best performance in obtaining the maximum loading point and, consequently, the loading margins and P-V curves of the studied system. Similar results were obtained for the other buses of the system.

4. Conclusions

This study presented MLP and RBF artificial neural network methodologies for obtaining the voltage magnitudes and complete P-V curves of electrical power systems subjected to a contingency, as functions of the loading factor λ, the real and reactive powers generated at the reference bus (Pgslack and Qgslack), and the branch number. The MLP network trained well, reaching an MSE of 0.00092773 at the forty-eighth iteration, a training time of 7 s, and an R value of 0.98977, on average, for all contingencies; the difference between the desired and obtained outputs was 0.026643 at bus 14 and similar at the other buses of the system. The RBF network achieved a lower MSE of 0.000163906 in 3 s of training, with the desired output very close to that obtained. That is, the RBF network outperformed the MLP network, with a difference between the desired and obtained outputs of around 0.012257 at bus 14 and the other buses, across training, validation, and testing. In the operating phase (validation and testing), in which the network estimated samples that were not part of the training process, we observed errors around 0.01945 between the network output (Yob) and the desired values (Ydes), showing that the network can act as an estimator. Moreover, the loading margins, i.e., all pre- and post-contingency P-V curves, were obtained, showing that the proposed tools have high potential for estimating nodal voltage magnitudes.

Author Contributions

A.B.N.: Conceptualization, Writing—Original Draft, Writing—Review and Editing Investigation. D.A.A.: Conceptualization, Writing—Original Draft, Writing—Review and Editing Investigation. C.R.M.: Writing—Review and Editing, Funding Acquisition, Supervision. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Brazilian Research Funding Agencies: CNPq (process 408630/2018-3), FAPESP (process 2018/12353-9), and the postgraduate program in Electrical Engineering (UNESP, Ilha Solteira).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare that they have no conflict of interest.

Nomenclature

CPF: Continuation Power Flow
PF: Power Flow
ANN: Artificial Neural Networks
MLP: Multilayer Perceptron Network
RBF: Radial Basis Function Network
MSE: Mean Square Error
P-V: Voltage versus real power curve
V-Q: Voltage versus reactive power curve
EPS: Electric Power Systems
LM: Loading Margin
TL: Transmission lines
PV: Generation bus
PQ: Load bus
λ: Loading factor
ONS: National Electric System Operator
WSCC: Western System Coordinating Council

References

  1. Ruan, C.; Wang, X.; Wang, X.; Gao, F.; Li, Y. Improved continuation power flow calculation method based on coordinated combination of parameterization. In Proceedings of the 2018 IEEE 2nd International Electrical and Energy Conference (CIEEC), Beijing, China, 4–6 November 2018; pp. 207–211. [Google Scholar] [CrossRef]
  2. Tostado-Véliz, M.; Kamel, S.; Jurado, F. Development and comparison of efficient newton-like methods for voltage stability assessment. Electr. Power Compon. Syst. 2020, 48, 1798–1813. [Google Scholar] [CrossRef]
  3. Bonini Neto, A.; Piazentin, J.C.; Alves, D.A. Vandermonde interpolating as nonlinear predictor applied to continuation method. IEEE Lat. Am. Trans. 2018, 16, 2954–2962. [Google Scholar] [CrossRef]
  4. Canossa, J.H.; Bonini Neto, A.; Alves, D.A.; Putti, F.F.; Gabriel Filho, L.R.A. Development of an interactive program to study of the continuation power flow. IEEE Lat. Am. Trans. 2018, 16, 1227–1235. [Google Scholar] [CrossRef]
  5. Matarucco, R.R.; Bonini Neto, A.; Alves, D.A. Assessment of branch outage contingencies using the continuation method. Int. J. Electr. Power Energy Syst. 2014, 55, 74–81. [Google Scholar] [CrossRef]
  6. Western System Coordinating Council (WSCC); Reactive Power Reserve Work Group. Final Report: Voltage Stability Criteria, Undervoltage Load Shedding Strategy, and Reactive Power Reserve Monitoring Methodology; Western System Coordinating Council (WSCC): Salt Lake City, UT, USA, May 1998; 154p. [Google Scholar]
  7. Operador Nacional do Sistema Elétrico (ONS). Procedimentos de Rede, Sub-Módulo 23.3, Diretrizes e Critérios Para Estudos Elétricos. 2002. Available online: www.ons.org.br (accessed on 15 April 2022).
  8. Zhang, R.; Xu, Y.; Dong, Z.Y.; Zang, P.; Wong, K.P. Voltage stability margin prediction by ensemble based extreme learning machine. In Proceedings of the IEEE Power & Energy Society General Meeting, Vancouver, BC, Canada, 21–25 July 2013; pp. 1–5. [Google Scholar] [CrossRef]
  9. Aydin, F.; Gümüş, B. Study of different ANN algorithms for voltage stability analysis. In Proceedings of the Innovations in Intelligent Systems and Applications Conference (ASYU), Istanbul, Turkey, 15–17 October 2020; pp. 1–5. [Google Scholar] [CrossRef]
  10. Carreras, J.; Kikuti, Y.Y.; Miyaoka, M.; Hiraiwa, S.; Tomita, S.; Ikoma, H.; Kondo, Y.; Ito, A.; Nakamura, N.; Hamoudi, R. A combination of multilayer perceptron, radial basis function artificial neural networks and machine learning image segmentation for the dimension reduction and the prognosis assessment of diffuse large B-cell lymphoma. AI 2021, 2, 106–134. [Google Scholar] [CrossRef]
  11. Ke, K.-C.; Huang, M.-S. Quality prediction for injection molding by using a multilayer perceptron neural network. Polymers 2020, 12, 1812. [Google Scholar] [CrossRef] [PubMed]
  12. Haykin, S. Neural Networks and Learning Machines, 3rd ed.; Prentice-Hall: Hoboken, NJ, USA, 2008. [Google Scholar]
  13. Werbos, P.J. Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences. Ph.D. Thesis, Harvard University, Cambridge, MA, USA, 1974. [Google Scholar]
  14. Park, J.; Sandberg, I.W. Universal approximation using radial-basis-function networks. Neural Comput. 1991, 3, 246–257. [Google Scholar] [CrossRef] [PubMed]
  15. Bodyanskiy, Y.; Pirus, A.; Deineko, A. Multilayer radial-basis function network and its learning. In Proceedings of the 2020 IEEE 15th International Conference on Computer Sciences and Information Technologies (CSIT), Zbarazh, Ukraine, 23–26 September 2020; pp. 92–95. [Google Scholar] [CrossRef]
  16. Mathworks. Available online: http://www.mathworks.com (accessed on 2 July 2022).
  17. IBM SPSS Neural Networks. New Tools for Building Predictive Models; IBM Software Business Analytics; IBM Corporation Software Group: Somers, NY, USA, 2017. [Google Scholar]
  18. Souza, A.V.; Bonini Neto, A.; Piazentin, J.C.; Dainese Junior, B.J.; Gomes, E.P.; Bonini, C.S.B.; Putti, F.F. Artificial neural network modelling in the prediction of bananas’ harvest. Sci. Hortic. 2019, 257, 108724. [Google Scholar] [CrossRef]
  19. Bonini Neto, A.; Magalhaes, E.M.; Alves, D.A. Dishonest Newton method applied in continuation power flow through a geometric parameterization technique. Revista IEEE América Lat. 2016, 14, 161–170. [Google Scholar] [CrossRef]
Figure 1. IEEE 14-bus system with the respective branches r.
Figure 2. ANN used in this study; (a) MLP, (b) RBF.
Figure 3. Flowchart of the MLP and RBF networks used in this article.
Figure 4. ANN performance; (a) training performance (MSE) of the MLP network, (b) error histogram (YdesYob) for the MLP network with 20 intervals for the 1890 output samples, (c) training performance (MSE) of the RBF network, (d) error histogram (YdesYob) for the RBF network with 20 intervals for the 1890 output samples.
Figure 5. Correlation analysis between the variables: desired output (Ydes) and obtained output (Yob) in training, validation, and test; (a) MLP, (b) RBF.
Figure 6. Voltage magnitudes of load bus 14 for all samples (1890), i.e., for all system contingencies, desired output (Ydes) vs. output obtained (Yob) via ANN in training; (a) MLP, (b) RBF.
Figure 7. Desired output (Ydes) and obtained output (Yob) by the MLP network, P-V curves for all contingencies r of the system for bus 14.
Figure 8. Desired output (Ydes) and obtained output (Yob) by the RBF network, P-V curves for all contingencies r of the system for bus 14.
Table 1. Values specified and achieved in training MLP and RBF networks compared to Ydes output.
MLP                  Specified Values    Achieved Values
Iterations           100                 48
Time (s)             20                  7
Performance (MSE)    0.001               0.00092773 *
Correlation (R)      1.0                 0.98977

RBF                  Specified Values    Achieved Values
Iterations           100                 100
Time (s)             20                  3
Performance (MSE)    0.001               0.000163906 *
Correlation (R)      1.0                 0.9986
* Achieved criterion.
Table 2. Values of voltage magnitudes of the maximum loading point of bus 14 obtained via the MLP and RBF network for each contingency.
Contingencies    λ         V14 (Ydes)    MLP V14 (Yob)    RBF V14 (Yob)
r0               1.7680    0.6063        0.5854           0.6206
r1               0.9810    0.8348        0.8563           0.8463
r2               1.3970    0.6448        0.6495           0.6479
r3               1.2991    0.7828        0.8063           0.7665
r4               1.5872    0.6266        0.6111           0.6452
r5               1.6600    0.6059        0.6040           0.6227
r6               1.7166    0.6804        0.6759           0.6670
r7               1.5924    0.6415        0.6296           0.6482
r8               1.5685    0.6488        0.6318           0.6677
r9               1.6789    0.6065        0.5981           0.6004
r10              1.3421    0.7023        0.6932           0.7086
r11              1.7508    0.6068        0.6019           0.6115
r12              1.7433    0.6160        0.6364           0.6297
r13              1.6611    0.5924        0.6371           0.5787
r14              1.6792    0.5879        0.6246           0.5643
r15              1.4666    0.6319        0.6715           0.6231
r16              1.7213    0.6556        0.6319           0.6605
r17              1.6181    0.5287        0.6619           0.5050
r18              1.7656    0.6141        0.5947           0.6206
r19              1.7666    0.6133        0.5675           0.6288
r20              1.7478    0.5830        0.5308           0.5933
Table 3. Differences between the maximum loading point values of bus 14 obtained via MLP and RBF network for each contingency.
Contingencies    MLP |Ydes − Yob|    RBF |Ydes − Yob|
r0               0.0209              0.0143
r1               0.0215              0.0115
r2               0.0047              0.0031
r3               0.0235              0.0163
r4               0.0155              0.0186
r5               0.0019              0.0168
r6               0.0045              0.0134
r7               0.0119              0.0067
r8               0.0170              0.0189
r9               0.0084              0.0061
r10              0.0091              0.0063
r11              0.0049              0.0047
r12              0.0204              0.0137
r13              0.0447              0.0137
r14              0.0367              0.0236
r15              0.0396              0.0088
r16              0.0237              0.0049
r17              0.1332              0.0237
r18              0.0194              0.0065
r19              0.0458              0.0155
r20              0.0522              0.0103
Average          0.026643            0.012257
