Article

Comparative Analysis of Deep Learning Methods for Fault Avoidance and Predicting Demand in Electrical Distribution

by
Karla Schröder
1,
Gonzalo Farias
1,
Sebastián Dormido-Canto
2,* and
Ernesto Fabregas
2
1
Escuela de Ingeniería Eléctrica, Pontificia Universidad Católica de Valparaíso, Av. Brasil 2147, Valparaíso 2362804, Chile
2
Departamento de Informática y Automática, Universidad Nacional de Educación a Distancia, Juan del Rosal 16, 28040 Madrid, Spain
*
Author to whom correspondence should be addressed.
Energies 2024, 17(11), 2709; https://doi.org/10.3390/en17112709
Submission received: 16 May 2024 / Revised: 28 May 2024 / Accepted: 30 May 2024 / Published: 3 June 2024
(This article belongs to the Section D: Energy Storage and Application)

Abstract

In recent years, the distribution network in Chile has undergone various modifications to meet new demands and integrate new technologies. However, these improvements often do not last as long as expected due to inaccurate forecasting, resulting in frequent equipment changes and service interruptions. These issues affect project investment, unsold energy, and penalties for poor quality of supply. Understanding the electricity market, especially in distribution, is crucial and requires linking technical quality standards with service quality factors, such as the frequency and duration of interruptions, to assess their impact on regulated distribution customers. In this context, a comparative study is carried out between Long Short-Term Memory (LSTM) and transformer architectures, with the aim of improving the sizing of distribution transformers and preventing failures when determining the nominal power of the transformer to be installed. Variables such as the voltages and operating currents of transformers installed between 2020 and 2021 in the Valparaíso region, Chile, are used, along with the type and number of connected customers, the maximum and minimum temperatures of the sectors of interest, and seasonality considerations. The compilation of previous studies and the identification of key variables help to propose solutions based on error percentages to optimise the accuracy of transformer sizing.

1. Introduction

Efficient sizing of electrical distribution transformers is a critical element of the electrical infrastructure as it directly affects the stability, reliability, and quality of the power supply. The ability to accurately predict power demand and size transformers accordingly is essential to avoid service interruptions, optimise network performance, and reduce operating and maintenance costs [1,2,3].
In this context, various techniques and approaches have emerged to improve accuracy in transformer sizing, particularly in the use of tools based on artificial intelligence and machine learning [4]. Two of the most prominent architectures in this field are Long Short-Term Memory (LSTM) neural networks and transformer models.
LSTM neural networks have excelled in capturing temporal dependencies in data sequences, which is especially relevant in applications where behaviour over time is crucial [5]. On the other hand, transformer models, driven by attention mechanisms, have demonstrated excellent results in natural language processing tasks and have also been successfully applied to sequential problems in other domains [6,7,8,9].
This article focuses on comparing and analysing the performance of these two techniques, LSTM and transformers, specifically applied to the sizing of electrical distribution transformers. The selection of these techniques is based on their demonstrated ability to capture complex patterns in sequential data and their potential to effectively model the dynamic behaviour of transformers in real-world settings [10,11,12,13].
Moreover, the scalability of these models is a crucial aspect of their practical implementation. These techniques can be adapted to various scales of power prediction, ranging from household consumption to the sizing of distribution elements, and even to the generation of energy. This adaptability allows for a comprehensive approach to transformer sizing, addressing both small-scale and large-scale power demands. By extending the application of these models to include scenarios like power generation from electric generators and considering additional factors such as water demand, the complexity and robustness of the predictive models are further enhanced [14,15,16].
Working with real-world data is crucial, as it ensures that the models are tested and validated under realistic conditions, enhancing their reliability and applicability in practical scenarios, as in [17,18], where real data from electric vehicles are used to evaluate future scenarios.
The main contributions of this study can be summarised as follows:
  • The study demonstrates the potential of deep learning models to accurately predict electricity consumption. The work includes an analysis of the use of two different methods and the considerations that need to be taken into account when applying deep learning techniques in the context of a real operating environment. The use of such techniques could help electrical distribution companies to improve the estimation of the required capacity of electrical transformers.
  • Insights are generated into the effectiveness and suitability of LSTM and transformer techniques for sizing electrical distribution transformers. The study aims to delineate the strengths and limitations of each technique.
  • This work uses real data from electrical distribution transformers, encompassing variables like voltages, operating currents, connected customer types and quantities, and ambient temperatures, among others. This data was collected from transformers installed between 2020 and 2021 in the Valparaíso region, Chile. The incorporation of real data strengthens the validity and relevance of the results obtained, facilitating a precise evaluation of the compared techniques’ performance in an operational real-world setting.
The article is structured as follows: Section 2 provides a comprehensive review of the literature related to transformer sizing and the use of machine learning techniques in this context. Section 3 describes in detail the methodology used, including the configuration of LSTM and transformer models. In Section 4, data processing is extensively addressed, delving into the details of how to manage and prepare the data for analysis. Section 5 presents the results of the comparison between LSTM and transformer techniques, including performance metrics and qualitative analysis. Finally, Section 6 presents the conclusions of the study, discusses the implications of the results, and suggests areas for future research and practical applications in the electrical distribution domain.

2. Related Works

Between 2017 and 2019, distributors in the region faced a number of significant challenges, according to data collected in the incident logbook [19]. In 2017, there was a notable frequency of failures, primarily caused by tree-related problems, totalling 1177 incidents, followed by self-produced failures at 1092 and maintenance at 1013. Accidents ranked fourth with 907 cases, followed by overloads at 587. In the following year, 2018, there was a decrease in the overall frequency of failures, although tree-related failures remained the main cause with 1019 cases, followed by maintenance at 976 and accidents at 873. However, overloads experienced a notable increase, with 727 cases recorded. Lastly, in 2019, the data showed a general decrease in the frequency of failures, with maintenance as the main cause at 648 cases, followed by accidents at 498 and self-produced failures at 481. These findings can largely be attributed to the durability of transformers and issues related to their sizing. The decrease in failure frequency over these years suggests an improvement in the management and maintenance of this critical equipment. However, the increase in overloads between 2017 and 2018 indicates the need to review load and discharge protocols as well as preventive maintenance practices to mitigate such problems in the future. Overall, these data underscore the importance of a comprehensive approach to managing electrical assets to ensure a reliable and safe supply of electrical power.
Figure 1 shows the power system; if the voltage-reducing (step-down) transformer experiences any complications, the quality of service for both residential and industrial customers is immediately affected.
Various techniques have been used to estimate transformer power. In 2018 [20], a statistical power estimation model based on linear regression was implemented. Subsequently, in 2019 [21], a model was generated based on the analysis of the power response of different modules to the climatic behaviour of an area over a year. In 2020, a forecast model based on Monte Carlo simulation was used [22]. These models produced power estimates that yielded better results than those obtained by the distributor with the sizing factor mechanism it used.
The accurate estimation of electrical consumption is crucial for efficient energy management and decision-making in the electrical sector. Time series analysis, which involves studying electrical data over time, is a fundamental tool for understanding variability and trends in electrical energy consumption. In [23,24], a detailed overview of modern techniques for time series analysis is presented, including advanced modelling and forecasting approaches that are essential for understanding and predicting electrical consumption behaviour over time.
The use of neural networks, particularly Long Short-Term Memory (LSTM) networks and transformer models, has emerged as an effective strategy in electrical consumption estimation. In [25], the effectiveness of LSTMs in capturing complex temporal patterns in electrical data is highlighted. Additionally, transformer models have shown great potential in sequence processing, as discussed in [26], making them valuable tools for accurate electrical consumption prediction and energy management optimization.
Although the historical data used in some studies may be synthetic rather than derived from real records, the use of synthetic data remains relevant for evaluating and comparing the performance of different electrical consumption estimation techniques. In [27], the importance of synthetic data in evaluating machine learning models is discussed, especially when real data may be limited or unavailable.
The application of machine learning techniques, such as LSTMs and transformers, in electrical consumption estimation represents a significant advancement in energy management optimization and informed decision-making. These techniques offer greater adaptability to the complexity and variability of electrical data, resulting in more precise estimation and overall improvement in energy efficiency and strategic resource planning.

3. Neural Networks

In the field of electrical distribution transformer sizing, the use of neural networks, especially Long Short-Term Memory (LSTM) networks and transformer models, has gained prominence due to their ability to capture complex patterns in sequential data and adapt to temporal changes. Below, we delve deeper into the role and functioning of these neural networks in accurately estimating electrical consumption and optimizing transformer sizing.

3.1. LSTM

Neural networks are a set of algorithms inspired by the functioning of the human brain, designed to recognize patterns in data and perform machine learning tasks. These networks are composed of basic units called artificial neurons that are interconnected in layers. Each neuron takes inputs, performs weighted calculations, and produces an output that is transmitted to the subsequent layers. As neural networks are trained with data, they adjust their weights and connections to learn and improve their ability to perform specific tasks such as classification, pattern recognition, or content generation [28].
Within neural networks, there are various types, and one of the most commonly used is Recurrent Neural Networks (RNNs). RNNs are well suited for handling sequences of data, as they allow information to flow through neurons in loops, giving them a kind of short-term memory. However, traditional RNNs struggle to maintain long-term information due to the vanishing gradient problem.
To address this problem, Long Short-Term Memory (LSTM) neural networks were developed. LSTMs are a variant of RNNs that include specialized memory units, allowing the network to learn and retain long-term information. Each LSTM unit has three gates (input, output, and forget) that regulate the flow of information, giving them the ability to remember and process sequences more effectively. LSTMs are widely used in tasks such as natural language processing, automatic translation, and text generation, where a deep contextual understanding of data sequences is required. In summary, LSTM neural networks are a powerful extension of recurrent neural networks that have revolutionized machines’ ability to effectively understand and generate sequential data [29].
In a Long Short-Term Memory (LSTM) neuron, the fundamental structure consists of a memory unit that enables the retention of long-term information, setting it apart from neurons in traditional Recurrent Neural Networks (RNNs). Each LSTM unit has three main gates: the input gate, the output gate, and the forget gate, as visualized in Figure 2. These gates are essential for regulating the flow of information within the neuron. The input gate controls how much new information should be stored in the cell’s memory, the output gate regulates how much information is sent as the neuron’s output, and the forget gate determines how much prior information should be deleted or “forgotten” from the memory. This gate and internal memory structure allows LSTMs to learn and retain relevant information across temporal sequences, making them particularly suitable for tasks involving long-term dependencies in sequential data, such as natural language processing or speech recognition.
LSTM (Long Short-Term Memory) networks are equipped with specialized units called cells that enable them to capture and remember long-term dependencies in sequential data. One of the key equations in the LSTM cell is the update equation for the cell state $c_t$ at time step $t$, which is given by [30]:

$$c_t = f_t \cdot c_{t-1} + i_t \cdot \tilde{c}_t,$$

where:
  • $c_t$ is the cell state at time step $t$;
  • $f_t$ is the forget gate at time step $t$, which determines how much of the previous state $c_{t-1}$ to forget;
  • $i_t$ is the input gate at time step $t$, which determines how much of the new candidate state $\tilde{c}_t$ to incorporate into the current cell state;
  • $\tilde{c}_t$ is the new candidate state at time step $t$, calculated using the tanh activation function and the input values at that moment.
This equation illustrates the core of the long-term memory and information control process in an LSTM cell, making it an excellent choice for explaining the operation of these models.
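To make the gate interactions concrete, the following minimal NumPy sketch implements a single LSTM time step using the cell state update above; the stacked weight layout and variable names are illustrative assumptions, not taken from the model used in this study.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM time step. W, U and b stack the parameters of the
    input (i), forget (f), output (o) and candidate (g) transformations."""
    n = h_prev.shape[0]
    z = W @ x_t + U @ h_prev + b            # stacked pre-activations, shape (4*n,)
    i_t = sigmoid(z[0 * n:1 * n])           # input gate
    f_t = sigmoid(z[1 * n:2 * n])           # forget gate
    o_t = sigmoid(z[2 * n:3 * n])           # output gate
    c_tilde = np.tanh(z[3 * n:4 * n])       # new candidate state
    c_t = f_t * c_prev + i_t * c_tilde      # cell state update (equation above)
    h_t = o_t * np.tanh(c_t)                # hidden state emitted by the cell
    return h_t, c_t
```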

3.2. Transformers

Transformers are a type of neural network architecture that has gained significant traction in the field of natural language processing and sequence modelling [31,32]. Unlike traditional recurrent neural networks (RNNs) and their variants like LSTM, which process sequences in a sequential manner, transformers operate using an attention mechanism that allows them to consider relationships between all elements in a sequence simultaneously, as visualized in Figure 3. This attention mechanism enables transformers to capture long-range dependencies and contextual information effectively, making them highly suitable for tasks requiring a deep understanding of sequential data, such as language translation, text generation, and sentiment analysis [33,34].
At the core of transformers are self-attention layers, which allow the network to weigh the importance of each element in the input sequence when generating an output. This mechanism enables transformers to process inputs in parallel, significantly improving their efficiency compared to traditional sequential processing in RNNs. Additionally, transformers incorporate positional encoding to retain information about the order of elements in the input sequence, addressing one of the limitations of traditional attention mechanisms.
A key feature of the transformer architecture is its encoder–decoder structure, consisting of stacked encoder and decoder layers. The encoder processes the input sequence and generates a representation that captures the contextual information, while the decoder uses this representation to generate the output sequence. This modular architecture, along with the attention mechanism, allows transformers to achieve state-of-the-art performance in various natural language processing tasks, surpassing previous limitations in handling long-range dependencies and contextual information in sequential data.
As noted above, unlike traditional recurrent neural networks (RNNs) and their variants such as LSTMs, which process sequences sequentially, transformers rely on an attention mechanism that allows them to consider relationships between all elements of a sequence simultaneously.
One of the key components of a transformer model is the multi-head attention mechanism, which is used to compute attention scores between input tokens. The simplified equation for multi-head attention can be expressed as [35]:
$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V,$$

where:
  • $Q$ is the query matrix;
  • $K$ is the key matrix;
  • $V$ is the value matrix;
  • $d_k$ is the dimension of the key vectors.
This equation illustrates how attention scores are calculated by taking the dot product of the query and key matrices, scaling by the square root of the key dimension, applying the softmax function to obtain attention weights, and finally multiplying by the value matrix to obtain the weighted sum of values.
The multi-head attention mechanism in transformers allows them to capture long-range dependencies and contextual information effectively, making them suitable for tasks that require a deep understanding of sequential data.
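As an illustration of the equation above, the following minimal NumPy sketch implements scaled dot-product attention, the building block that the multi-head mechanism applies in parallel; the matrix sizes in the example are arbitrary and chosen only to mirror the 12-step, 8-feature sequences used later in this paper.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V                                # weighted sum of values

# Illustrative sizes: 12 time steps, dimension 8
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(12, 8)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)           # shape (12, 8)
```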

4. Data Processing

The data used in this study were derived from equipment located in the Valparaíso region between July 2020 and June 2021, with a temporal resolution of 10 min. The variables considered are the following: operational voltage and current, customer type, customer count, customer consumption, daily variability, and maximum and minimum temperatures in the location areas.
To ensure confidentiality, the data has been normalized on a scale from 0 to 1. This means that real values have been transformed to preserve the privacy of the original data.
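A minimal sketch of this 0-to-1 (min-max) scaling is given below, assuming the measurements are held in a pandas DataFrame; the column names and values are illustrative only, since the real data are confidential.

```python
import pandas as pd

def min_max_normalize(df: pd.DataFrame) -> pd.DataFrame:
    """Scale every column to the [0, 1] range, hiding the original magnitudes."""
    return (df - df.min()) / (df.max() - df.min())

# Illustrative values; the real measurements are not disclosed.
raw = pd.DataFrame({"voltage": [219.0, 221.5, 230.0],
                    "current": [10.2, 14.8, 9.1]})
normalized = min_max_normalize(raw)   # every entry now lies between 0 and 1
```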
Atmospheric information from the Agrometeorology Service [36] was used to complement the transformer operation data. It is important to note that, to simplify the presentation and analysis, 20 transformers were selected in a representative manner from the total sample.
One of the challenges encountered when working with this real data was the presence of gaps in the measurements. These gaps were addressed using interpolation techniques. Interpolation is a mathematical technique used to estimate values within a range of known data points. For this study, linear interpolation was applied, which is a simple and commonly used method. The formula for linear interpolation is as follows [37,38]:
$$y = y_1 + (x - x_1) \cdot \frac{y_2 - y_1}{x_2 - x_1},$$

where:
  • $x$ is the value for which interpolation is desired;
  • $x_1$ and $x_2$ are the known data points on either side of $x$;
  • $y_1$ and $y_2$ are the values corresponding to $x_1$ and $x_2$, respectively;
  • $y$ is the estimated value at $x$.
Although linear interpolation provides a basic estimation, it is acknowledged that more sophisticated techniques such as Markov processes can offer more accurate results by considering dependencies and sequential patterns in the data.
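As an example of this gap-filling step, the sketch below assumes the 10-min measurements are stored in a pandas Series indexed by timestamp; pandas' built-in linear interpolation applies the formula above between the known neighbours of each gap. The timestamps and values are illustrative.

```python
import numpy as np
import pandas as pd

# Illustrative 10-minute series with missing measurements (NaN).
idx = pd.date_range("2020-07-01 00:00", periods=6, freq="10min")
current = pd.Series([0.42, np.nan, np.nan, 0.48, 0.50, 0.51], index=idx)

# Each gap is estimated from its known neighbours:
# y = y1 + (x - x1) * (y2 - y1) / (x2 - x1)
filled = current.interpolate(method="linear")
print(filled)   # the two NaN values become 0.44 and 0.46
```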
For better understanding, Table 1 presents a sample of the normalized data from the selected transformers. This table provides an overview of the data used in the study, highlighting the normalization of values and relevant information for each selected transformer.
The properties of the computer on which the training and subsequent testing of both techniques were carried out are specified in Table 2.

5. Results and Analysis

For this study, data from 111 distribution transformers located in the Valparaíso region between the years 2020 and 2021 were utilized; however, the results of 20 randomly selected transformers are presented in this work. The focus of this analysis is on predicting the nominal power of these transformers based on real data collected between July 2020 and June 2021 at 10-min intervals. This dataset includes a range of parameters, specifically operational voltage and current, types of customers, customer count, consumption patterns, daily fluctuations, as well as the maximum and minimum temperatures in the region.

5.1. LSTM

The model uses a recurrent neural network (LSTM) to process data sequences, such as time series. The LSTM layer, with 100 units and ReLU activation, captures patterns over time and handles long-term dependencies in the data. After the LSTM layer, dense layers are applied for prediction. The model is trained using the RMSprop optimizer in Python [39], and its performance is evaluated with metrics such as the mean squared error (MSE). In summary, the LSTM model learns complex temporal relationships to predict future steps in time series data. The parameters of the model are found in Table 3.
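The following is a minimal Keras sketch consistent with this description and with the parameters in Table 3 (input shape (12, 8), one LSTM layer of 100 units with ReLU activation, dense layers, RMSprop optimizer, MSE loss, batch size 72, 1000 epochs). The width of the intermediate dense layer is not reported and is an assumption, and the training data shown are random placeholders.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_lstm_model(input_shape=(12, 8)):
    model = keras.Sequential([
        layers.Input(shape=input_shape),
        layers.LSTM(100, activation="relu"),  # captures temporal dependencies
        layers.Dense(32, activation="relu"),  # assumed width; not stated in the paper
        layers.Dense(1),                      # predicted power for the next step
    ])
    model.compile(optimizer="rmsprop", loss="mse", metrics=["mse", "mae"])
    return model

model = build_lstm_model()
# Placeholder data: (samples, 12 time steps, 8 features) -> scalar target
X = np.random.rand(500, 12, 8)
y = np.random.rand(500, 1)
model.fit(X, y, batch_size=72, epochs=1000, verbose=0)
```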
The analysis of the training results in Table 4 reveals that the recurrent neural network (LSTM) model demonstrates strong performance on the training set with low RMSE values. However, concerns arise regarding its generalization ability, as the metrics on the validation and test sets are significantly higher, as shown in Figure 4. This suggests the presence of potential overfitting, especially given that the standard deviations for these metrics are also higher in the validation and test sets compared to the training set. The model architecture, comprising an LSTM layer followed by dense layers, appears to lack regularization techniques such as Dropout, which could contribute to this disparity among the data sets. Additionally, the use of the MSE metrics during training provides detailed insight into the model’s learning process. In summary, it is recommended to explore additional strategies, such as incorporating regularization and adjusting the model architecture, to enhance its generalization capability and mitigate the observed potential overfitting in the validation and test data.

5.2. Transformers

This model is a transformer implemented in Python [39] that uses multi-head attention and convolutions to learn patterns in input sequences of 12 steps with 8 features each. Each layer of the model comprises a multi-head attention block followed by a convolution layer and normalization, allowing the model to capture complex relationships in the data. After several layers, a Global Average Pooling layer is used to obtain an aggregated representation of the sequence, followed by dense layers with ReLU activations and a linear output layer for regression. The model is trained using the Adam optimizer and the MSE loss function, with the option to save the model for later use. The parameters of the model are found in Table 5.
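A minimal Keras sketch of this architecture is shown below, using the parameters in Table 5 (input shape (12, 8), 4 layers, 4 attention heads, feed-forward dimension 4, Adam with learning rate 0.001, MSE loss, 10 epochs, batch size 72, validation split 0.33). The mapping of Table 5's d_model to Keras' key_dim, the residual/normalization arrangement, and the width of the dense head are assumptions made for illustration.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def transformer_encoder(x, key_dim, num_heads, ff_dim):
    # Multi-head self-attention with a residual connection and normalization
    attn = layers.MultiHeadAttention(num_heads=num_heads, key_dim=key_dim)(x, x)
    x = layers.LayerNormalization()(x + attn)
    # Position-wise feed-forward block implemented with 1D convolutions
    ff = layers.Conv1D(ff_dim, kernel_size=1, activation="relu")(x)
    ff = layers.Conv1D(x.shape[-1], kernel_size=1)(ff)
    return layers.LayerNormalization()(x + ff)

def build_transformer_model(input_shape=(12, 8), num_layers=4,
                            num_heads=4, key_dim=1, ff_dim=4):
    inputs = keras.Input(shape=input_shape)
    x = inputs
    for _ in range(num_layers):
        x = transformer_encoder(x, key_dim, num_heads, ff_dim)
    x = layers.GlobalAveragePooling1D()(x)      # aggregated sequence representation
    x = layers.Dense(16, activation="relu")(x)  # assumed width; not stated in the paper
    outputs = layers.Dense(1)(x)                # linear output for regression
    model = keras.Model(inputs, outputs)
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001), loss="mse")
    return model

model = build_transformer_model()
X = np.random.rand(500, 12, 8)
y = np.random.rand(500, 1)
model.fit(X, y, batch_size=72, epochs=10, validation_split=0.33, verbose=0)
```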
The analysis of the results in Table 6 shows the model's performance across the three datasets for the different transformers, identified by their ID. The metric used to evaluate the model's performance is the root mean squared error (RMSE).
We observe significant variations in RMSE values among different transformers. For instance, the transformer with ID 1 has an RMSE of 0.2118 in the training set, while the transformer with ID 10 has an RMSE of 0.1848 in the same set. This indicates that the model's performance varies depending on the specific transformer. When the metrics are consistent across all three sets, as shown in Figure 5, this suggests good generalization capability.
Finally, the analysis conducted in the previous LSTM table and the current transformer table reveals some key differences in the performance and generalization of the models.
In the LSTM table, it can be observed that the model exhibits solid performance on the training set with low RMSE values. However, the metrics on the validation and test sets are higher, indicating possible errors, which are visualized in Figure 4. This is further verified by the higher standard deviations in the validation and test sets compared to the training set.
On the other hand, in the transformer table, we see that the RMSE metrics vary considerably among different transformers. This suggests that the model’s performance varies depending on the specific transformer. Additionally, comparing the training, validation, and test metrics provides insights into the model’s generalization capability.
The analysis of RMSE test values in the two techniques reveals a significant difference: when using transformers, errors remain below 0.2, as opposed to LSTM, where they can easily exceed 0.4. These findings suggest that the choice of technique can substantially influence accuracy, with a potential 50 % reduction in RMSE by changing the technique, as illustrated in Figure 6.

6. Conclusions

In summary, this study reveals that the LSTM model performs well on the training set but faces challenges in generalization on the validation and test sets, suggesting possible overfitting issues. On the other hand, the transformer model shows significant variability in its performance depending on the specific transformer used, implying a differential adaptability of the model to different contexts. A key contribution of this work lies in comparing these two models in a specific prediction context, providing valuable insights into their strengths and weaknesses. Additionally, it is important to note that this study was conducted using real transformer distribution data, adding relevance and practical applicability to the obtained results.
The analysis of RMSE values in the test set for both techniques reveals a significant difference when using transformers. These findings suggest that the choice of technique can substantially influence accuracy, with a potential 50 % reduction in RMSE by changing the technique. For future work, exploring regularization strategies to improve the generalization of both models and delving into the transformer model’s adaptability to different data configurations is recommended.
This work demonstrates the potential of deep learning models to accurately predict electricity consumption in the context of a real operating environment. The use of such techniques could help electrical distribution companies improve the estimation of the required capacity of electrical transformers. Additionally, it is crucial to consider electromobility and net billing in future work due to their significant impact on power consumption in distribution transformers.

Author Contributions

Conceptualization, G.F. and S.D.-C.; methodology, G.F.; software, K.S.; validation, E.F., K.S. and G.F.; formal analysis, S.D.-C.; investigation, K.S.; resources, S.D.-C.; data curation, K.S.; writing—original draft preparation, K.S.; writing—review and editing, E.F. and G.F.; visualization, E.F.; supervision, G.F.; project administration, S.D.-C.; funding acquisition, S.D.-C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the Chilean Research and Development Agency (ANID) under Project FONDECYT 1191188; the Ministry of Science and Innovation of Spain under Project PID2022-137680OB-C32; the Agencia Estatal de Investigación (AEI) under Project PID2022-139187OB-I00.

Data Availability Statement

Data is unavailable due to privacy restrictions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Arango, A.R.; Aguilar, J.; R-Moreno, M.D. Deep reinforcement learning approaches for the hydro-thermal economic dispatch problem considering the uncertainties of the context. Sustain. Energy Grids Netw. 2023, 35, 101109. [Google Scholar] [CrossRef]
  2. Wang, S.; Zhuge, C.; Shao, C.; Wang, P.; Yang, X.; Wang, S. Short-term electric vehicle charging demand prediction: A deep learning approach. Appl. Energy 2023, 340, 121032. [Google Scholar] [CrossRef]
  3. Schau, H.; Novitskiy, A. Economic transformer load estimation considering power quality. In Proceedings of the 2008 13th International Conference on Harmonics and Quality of Power, Wollongong, NSW, Australia, 28 September–1 October 2008; pp. 1–5. [Google Scholar] [CrossRef]
  4. Agudelo, L.; Velilla, E.; López, J.M. Estimación de la carga de transformadores de potencia utilizando una red neuronal artificial. Inf. Tecnológica 2014, 25, 15–23. [Google Scholar] [CrossRef]
  5. Liu, X.; Wu, X.; Sang, J.; Huang, K.; Feng, G.; Song, M.; Wang, X. Research on the heat supply prediction method of a heat pump system based on timing analysis and a neural network. Energy Built Environ. 2024; in press. [Google Scholar] [CrossRef]
  6. Farias, G.; Fabregas, E.; Dormido-Canto, S.; Vega, J.; Vergara, S. Automatic recognition of anomalous patterns in discharges by recurrent neural networks. Fusion Eng. Des. 2020, 154, 111495. [Google Scholar] [CrossRef]
  7. Oliveira, H.S.; Oliveira, H.P. Transformers for Energy Forecast. Sensors 2023, 23, 6840. [Google Scholar] [CrossRef] [PubMed]
  8. Chowdary, M.K.; Anitha, J.; Hemanth, D.J. Emotion Recognition from EEG Signals Using Recurrent Neural Networks. Electronics 2022, 11, 2387. [Google Scholar] [CrossRef]
  9. Moussad, B.; Roche, R.; Bhattacharya, D. The transformative power of transformers in protein structure prediction. Proc. Natl. Acad. Sci. USA 2023, 120, e2303499120. [Google Scholar] [CrossRef] [PubMed]
  10. Nazir, A.; Shaikh, A.K.; Shah, A.S.; Khalil, A. Forecasting energy consumption demand of customers in smart grid using Temporal Fusion Transformer (TFT). Results Eng. 2023, 17, 100888. [Google Scholar] [CrossRef]
  11. Koohfar, S.; Woldemariam, W.; Kumar, A. Prediction of electric vehicles charging demand: A transformer-based deep learning approach. Sustainability 2023, 15, 2105. [Google Scholar] [CrossRef]
  12. Mahmood, A.M.; Abdul Zahra, M.M.; Hamed, W.; Bashar, B.S.; Abdulaal, A.H.; Alawsi, T.; Adhab, A.H. Electricity Demand Prediction by a Transformer-Based Model. Majlesi J. Electr. Eng. 2022, 16, 97–102. [Google Scholar] [CrossRef]
  13. L’Heureux, A.; Grolinger, K.; Capretz, M.A. Transformer-based model for electrical load forecasting. Energies 2022, 15, 4993. [Google Scholar] [CrossRef]
  14. Villao Paredes, K.A. Diseño de un Prototipo de Sistema de Monitoreo y Predicción del Consumo Eléctrico en Zonas Residenciales Usando Redes Neuronales Artificiales. Bachelor’s Thesis, Universidad Estatal Península de Santa Elena, La Libertad, Ecuador, 2023. [Google Scholar]
  15. Ibargüengoytia-González, P.H.; Reyes-Ballesteros, A.; Borunda-Pacheco, M.; García-López, U.A. Predicción de potencia eólica utilizando técnicas modernas de Inteligencia Artificial. Ing. Investig. Tecnol. 2018, 19, 1–11. [Google Scholar] [CrossRef]
  16. López, J.D.O.; Castellanos, L.J.R.; Jiménez, B.J.C.; Escalante, R.J.P. Predicción De Potencia Fotovoltaica Mediante Redes Neuronales Wavelet. Pist. Educ. 2018, 39, 1224–1236. [Google Scholar]
  17. De Cauwer, C.; Van Mierlo, J.; Coosemans, T. Energy consumption prediction for electric vehicles based on real-world data. Energies 2015, 8, 8573–8593. [Google Scholar] [CrossRef]
  18. Gallet, M.; Massier, T.; Hamacher, T. Estimation of the energy demand of electric buses based on real-world data for large-scale public transport networks. Appl. Energy 2018, 230, 344–356. [Google Scholar] [CrossRef]
  19. Castro Arredondo, J.C. Predicción del Índice de Choques a Postes por Calle. Bachelor’s Thesis, Pontificia Universidad Católica de Valparaíso, Valparaíso, Chile, 2019. [Google Scholar]
  20. Paniego, J.M.; Libutti, L.; Pi Puig, M.; Chichizola, F.; De Giusti, L.C.; Naiouf, M.; De Giusti, A.E. Modelado estadístico de potencia usando contadores de rendimiento sobre Raspberry Pi. In Proceedings of the XXIV Congreso Argentino de Ciencias de la Computación (La Plata, 2018), Tandil, Argentina, 8–12 October 2018; pp. 113–123. [Google Scholar]
  21. Eraso, F.J.; Erazo, O.F.; Escobar, E. Modelo para la estimacion de potencia electrica en modulos fotovoltaicos de tecnologia basada en silicio. Ingeniare Rev. Chil. Ing. 2019, 27, 188–196. [Google Scholar] [CrossRef]
  22. Gandica de Roa, E.M. Potencia y Robustez en Pruebas de Normalidad con Simulación Montecarlo. Rev. Sci. 2020, 5, 108–119. [Google Scholar] [CrossRef]
  23. Liao, W.; Porte-Agel, F.; Fang, J.; Rehtanz, C.; Wang, S.; Yang, D.; Yang, Z. TimeGPT in Load Forecasting: A Large Time Series Model Perspective. arXiv 2024, arXiv:2404.04885. [Google Scholar] [CrossRef]
  24. Moreno, S.R.; Seman, L.O.; Stefenon, S.F.; dos Santos Coelho, L.; Mariani, V.C. Enhancing wind speed forecasting through synergy of machine learning, singular spectral analysis, and variational mode decomposition. Energy 2024, 292, 130493. [Google Scholar] [CrossRef]
  25. Luzia, R.; Rubio, L.; Velasquez, C.E. Sensitivity analysis for forecasting Brazilian electricity demand using artificial neural networks and hybrid models based on Autoregressive Integrated Moving Average. Energy 2023, 274, 127365. [Google Scholar] [CrossRef]
  26. Kim, T.Y.; Cho, S.B. Predicting residential energy consumption using CNN-LSTM neural networks. Energy 2019, 182, 72–81. [Google Scholar] [CrossRef]
  27. Pérez-Porras, F.J.; Triviño-Tarradas, P.; Cima-Rodríguez, C.; Meroño-de Larriva, J.E.; García-Ferrer, A.; Mesas-Carrascosa, F.J. Machine learning methods and synthetic data generation to predict large wildfires. Sensors 2021, 21, 3694. [Google Scholar] [CrossRef] [PubMed]
  28. Zhou, S.; Zhou, L.; Mao, M.; Tai, H.M.; Wan, Y. An optimized heterogeneous structure LSTM network for electricity price forecasting. IEEE Access 2019, 7, 108161–108173. [Google Scholar] [CrossRef]
  29. Yu, Y.; Si, X.; Hu, C.; Zhang, J. A review of recurrent neural networks: LSTM cells and network architectures. Neural Comput. 2019, 31, 1235–1270. [Google Scholar] [CrossRef] [PubMed]
  30. Zha, W.; Liu, Y.; Wan, Y.; Luo, R.; Li, D.; Yang, S.; Xu, Y. Forecasting monthly gas field production based on the CNN-LSTM model. Energy 2022, 260, 124889. [Google Scholar] [CrossRef]
  31. Zeng, A.; Chen, M.; Zhang, L.; Xu, Q. Are transformers effective for time series forecasting? In Proceedings of the 37th AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; Volume 37, pp. 11121–11128. [Google Scholar] [CrossRef]
  32. Liu, Y.; Wu, H.; Wang, J.; Long, M. Non-stationary transformers: Exploring the stationarity in time series forecasting. In Proceedings of the 36th Conference on Neural Information Processing Systems, New Orleans, LA, USA, 28 November–9 December 2022; Volume 35, pp. 9881–9893. [Google Scholar]
  33. Raghu, M.; Unterthiner, T.; Kornblith, S.; Zhang, C.; Dosovitskiy, A. Do Vision Transformers See Like Convolutional Neural Networks? In Proceedings of the Advances in Neural Information Processing Systems; Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P., Vaughan, J.W., Eds.; Curran Associates, Inc.: Glasgow, UK, 2021; Volume 34, pp. 12116–12128. [Google Scholar]
  34. Wu, B.; Wang, L.; Zeng, Y.R. Interpretable wind speed prediction with multivariate time series and temporal fusion transformers. Energy 2022, 252, 123990. [Google Scholar] [CrossRef]
  35. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention Is All You Need. In Proceedings of the 31st Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017. [Google Scholar] [CrossRef]
  36. Agrometeorologia. Available online: https://agrometeorologia.cl/ (accessed on 15 April 2024).
  37. Sarabia, H.; Villarino, E. Interpolación de secciones eficaces para el cálculo de reactores de múltiple dependecias. Mecánica Comput. 2014, 33, 3065–3079. [Google Scholar]
  38. Arévalo-Ovalle, D.; Bernal-Yermanos, M.A.; Posada-Restrepo, J.A. Interpolación; Editorial Institución Universitaria Politécnico Grancolombiano: Bogotá, Colombia, 2021; pp. 47–72. [Google Scholar]
  39. De Smedt, T.; Daelemans, W. Pattern for python. J. Mach. Learn. Res. 2012, 13, 2063–2067. [Google Scholar]
Figure 1. Power System (SEP).
Figure 2. LSTM Structure.
Figure 3. Transformers Structure.
Figure 4. Power predictions across the entire signal with LSTM for transformer ID 11 considering normalization of Section 4 (Blue: train, Green: validation, Red: test).
Figure 5. Predictions of power across the entire signal with transformers for transformer ID 11 considering normalization of Section 4 (Blue: train, Green: validation, Red: test).
Figure 6. Comparison of RMSE test between LSTM and transformers: Blue: LSTM, Red: transformers.
Table 1. Example of normalized data from selected transformers.

ID     V (Normalized)    I (Normalized)    Customer Consumption    T (°C)
T1     0.75              0.68              120 kWh                 20
T20    0.70              0.55              110 kWh                 21
Table 2. Computer Specifications.

Component    Specifications
CPU          Intel(R) Xeon(R) CPU @ 2.30 GHz; Cores: 1 per processor, 2 in total; Cache Memory: 46,080 KB; Architecture: 46 bits physical, 48 bits virtual
RAM          Capacity: 13.6 GB
GPU          Model: Tesla T4; Memory: 15,360 MiB; CUDA Driver: Version 12.2
Table 3. Parameters of the LSTM model.

Parameter              Value
input_shape            (12, 8)
output_dim             1
activation function    ReLU
optimizer              RMSprop
loss function          MSE
metrics                MSE, MAE
batch_size             72
epochs                 1000
Table 4. Twenty Training Results with LSTM.

ID    Time (s)    Train RMSE    Val RMSE    Test RMSE
1     399.58      0.0718        1.0155      0.6224
2     398.72      0.0728        0.8266      0.4336
3     395.44      0.0735        0.8464      0.4517
4     274.65      0.0748        0.8236      0.4437
5     391.44      0.0739        0.8334      0.4535
6     391.89      0.0742        0.8293      0.4418
7     390.58      0.0746        0.8268      0.4440
8     396.89      0.0842        0.8399      0.4479
9     392.27      0.1310        0.8283      0.4537
10    391.48      0.0793        0.8321      0.4436
11    394.01      0.1506        0.8198      0.4548
12    397.98      0.0857        0.8291      0.4680
13    396.87      0.0779        0.8780      0.4734
14    393.84      0.0747        0.8163      0.4386
15    400.09      0.0738        0.8307      0.4560
16    398.10      0.0742        0.8553      0.4572
20    397.57      0.0767        0.8189      0.4438
21    397.40      0.0785        0.8357      0.4563
22    396.35      0.0810        0.9594      0.5207
23    396.11      0.0768        0.8447      0.4418
Table 5. Parameters of the Transformers Model.

Parameter           Value
input_shape         (12, 8)
d_model             1
nhead               4
num_layers          4
ff_dim              4
Optimizer           Adam
Learning_rate       0.001
Loss                MSE
Epochs              10
Batch_size          72
Validation_split    0.33
Table 6. Twenty Training Results with Transformers.

ID    Time (s)    Train RMSE    Val RMSE    Test RMSE
1     27.20       0.2118        0.2202      0.1840
2     21.55       0.2177        0.2071      0.1589
3     26.40       0.2179        0.2072      0.1592
4     21.02       0.2350        0.2161      0.1745
5     21.07       0.2166        0.2457      0.1942
6     21.07       0.2382        0.2187      0.1799
7     20.07       0.1979        0.1826      0.1555
8     25.92       0.1863        0.1736      0.1300
9     22.59       0.1991        0.2155      0.1569
10    26.01       0.1848        0.1740      0.1284
11    21.35       0.1993        0.2102      0.1523
12    26.62       0.1863        0.1755      0.1298
13    21.29       0.1886        0.1810      0.1332
14    22.20       0.2074        0.2356      0.1776
15    26.97       0.2116        0.2424      0.1855
16    21.98       0.1988        0.2046      0.1602
20    23.00       0.2124        0.2059      0.1550
21    18.87       0.2244        0.2116      0.1654
22    21.93       0.2313        0.2137      0.1709
23    26.13       0.2158        0.1962      0.1617