Article

Development of a Short-Term Electrical Load Forecasting in Disaggregated Levels Using a Hybrid Modified Fuzzy-ARTMAP Strategy

by Leonardo Brain García Fernández, Anna Diva Plasencia Lotufo and Carlos Roberto Minussi *
Electrical Engineering Department, UNESP—São Paulo State University, Av. Brasil 56, Ilha Solteira 15385-000, SP, Brazil
* Author to whom correspondence should be addressed.
Energies 2023, 16(10), 4110; https://doi.org/10.3390/en16104110
Submission received: 5 April 2023 / Revised: 5 May 2023 / Accepted: 10 May 2023 / Published: 16 May 2023

Abstract

In recent years, electrical systems have evolved, creating uncertainties in short-term economic dispatch programming due to demand fluctuations from self-generating companies. This paper proposes a flexible Machine Learning (ML) approach to address electrical load forecasting at various levels of disaggregation in the Peruvian Interconnected Electrical System (SEIN). The novelty of this approach includes utilizing meteorological data for training, employing an adaptable methodology with easily modifiable internal parameters, achieving low computational cost, and demonstrating high performance in terms of MAPE. The methodology combines modified Fuzzy ARTMAP Neural Network (FAMM) and hybrid Support Vector Machine FAMM (SVMFAMM) methods in a parallel process, using data decomposition through the Wavelet filter db20. Experimental results show that the proposed approach outperforms state-of-the-art models in prediction accuracy across different time intervals.

1. Introduction

Electrical load forecasting in deregulated electricity markets is a valuable tool for managing power systems [1,2]. Managing such systems, whether in operation, maintenance, or planning, is a complex and challenging task [3], a consequence of the technological evolution of electrical systems [4]. This evolution [5] is due to the incorporation of non-conventional renewable resources [6] and the growing participation of industrial users in self-supply.
To ensure the efficient management of current electrical systems, it is important to optimize their operation. This can be achieved through economic dispatch, which aims to minimize costs while ensuring safety and maximizing the utilization of energy resources [2]. As a result, it becomes crucial to accurately monitor the balance between supply and demand at various busbars of electrical systems in the short, medium, and long term.
Predicting this demand in the short term [7], following the dispatch schedule for the next day, at different busbars is a complex and challenging task, given the large variety of disaggregated loads [8], self-supply from users, and energy injection from non-conventional sources [5]. Therefore, the next-day market programming needs to have load profiles with high efficiency for various levels of disaggregation, so that optimization of the programming can ensure security, avoiding the operation of inflexible power plants which generate high variable costs and, consequently, high operating costs [2].
The difficulty of the forecasts depends on the level of load aggregation. At high levels of aggregation, traditional forecasting methods achieve high accuracy and largely overcome the uncertainties. At the various levels of disaggregation, however, given the current conditions of electrical systems, modeling the characteristic load patterns becomes considerably more complex [8]. For this reason, fluctuating consumption patterns motivate the exploration of new forecasting tools with more efficient and robust methods to minimize errors.
ML is a field of Artificial Intelligence (AI) that encompasses a set of methods that can automatically detect and generalize patterns in datasets, thus supporting the prediction of the future or decision-making under various conditions of uncertainty [9]. The AI field has been used worldwide to solve problems in various areas of knowledge [10]. In the last two decades, several models have been applied to energy systems to solve problems such as energy prediction, solar irradiation and wind speed forecasting, urban heating, classification of power quality disturbances, and electricity market price prediction, among others [11].
This paper is motivated by the application of ML techniques to model, design, and forecast electric loads at various levels of disaggregation. We propose a flexible methodology that can adapt to the current dynamics of the electrical system at various levels of disaggregation.
In addition to this introduction, this paper includes a section on related works for various levels of disaggregation in the day-ahead load forecasting field, along with our contributions. Section 2 presents the materials and methods employed. After that, Section 3 presents the methodology. The results are shown in Section 4, providing a detailed comparison of the load forecasts between our proposal and the benchmark methods. The conclusion is presented in Section 5, along with limitations and future recommendations.

1.1. Related Works

The load forecasting models in the literature, as outlined in [12], can be categorized into three groups: (i) Average Method: involves extrapolating time series data using simple moving average and exponential smoothing techniques. (ii) Mathematical Model-based method: uses statistical techniques such as regression and autoregressive methods to forecast demand by establishing a relationship between load consumption and variables that can affect demand, such as weather, consumer behavior, and occupancy. (iii) Artificial Intelligence method: learns consumption patterns from historical load data and is commonly used to discover nonlinear relationships between inputs and outputs without prior assumptions about the correlation between data. Therefore, based on this categorization, the current state-of-the-art in this field is focused on enhancing forecast accuracy at various levels of disaggregation using robust methods that propose an efficient utilization of all available natural resources to achieve an optimal supply schedule.
In the Average Method approach, moving average and exponential smoothing techniques are simple ways of extrapolating time series data in forecasting problems, as observed in the literature [13,14,15,16]. In recent research, structural combinations that use average methods as base models have been proposed to outperform competitive benchmarks [17,18]. In these combinations, model uncertainty, data uncertainty, and parameter uncertainty are handled together with the base models to guarantee strong performance.
Mathematical Model-based methods are also applied in load forecasting studies, such as simple and multiple linear regression models [19,20,21,22], multivariate linear regressions [23,24] such as multivariate adaptive regression splines (MARS), and autoregressive models [25,26,27]. All these mathematical methods are extensively used for accuracy in load forecasting problems. One of their highlights compared to average methods is their capacity to establish relationships between load consumption and external variables that can affect demand.
In recent years, there has been a growing demand for load forecasting models that utilize Machine Learning methods applied in diverse load-disaggregation levels [8,28]. This is primarily due to their efficiency and flexibility in discovering non-linear associations between data, which are caused by the electrical system’s evolution, unlike the Average Method and Mathematical Model-based methods.
Among the Machine Learning (ML) methods applied to day-ahead load-disaggregation forecasting are Artificial Neural Networks [29,30,31], Support Vector Machines (SVM) [32,33], Fuzzy Logic (FL) [34], Ensemble Learning [35,36,37], and Decision Trees [38]. These ML methods offer various advantages in identifying and modeling nonlinear relationships in the context of the evolving electrical system and have demonstrated superior performance compared to traditional approaches, as shown in Table 1. In addition, various hybrid methods have been applied to address similar problems. These approaches combine multiple techniques to enhance the performance of load forecasting models. The references mentioned below showcase the application of such hybrid methods in the context of load-disaggregation forecasting. Barman and Dev Choudhury [39] proposed an innovative approach to improve short-term load forecasting by incorporating seasonality effects using a season-specific similarity concept (SSSC) based on a firefly algorithm (FA) and SVM. Eseye et al. [40] used a Feedforward Artificial Neural Network (FFANN) based on a Binary Genetic Algorithm and Gaussian Process Regression (BGA-GPR) to forecast aggregate customer classes in a decentralized energy system in Finland. Yan et al. [41] proposed a hybrid ensemble deep learning framework that combines Long Short-Term Memory (LSTM) and the Stationary Wavelet Transform (SWT) to accurately forecast energy consumption for individual households. Wang et al. [42] proposed a novel ensemble Hidden Markov Model (e-HMM) framework based on bagging ensemble learning to improve the accuracy of Short-Term Load Forecasting (STLF) for individual industrial customers. Amorim et al. [43] proposed a reverse training concept for the Fuzzy-ARTMAP (FAM) ANN, applying it to global and multinodal load forecasting up to 24 h ahead. Müller et al. [44] proposed a method to improve electrical load forecasting at disaggregated levels up to 24 h ahead by combining singular spectrum analysis (SSA) and the FAM ANN. Jin et al. [45] proposed a hybrid model combining singular spectrum analysis (SSA) and a Parallel Long Short-Term Memory (PLSTM) ANN for STLF of residential electricity consumption.
From the related works, it is clear that hybrid methods have been investigated and employed for disaggregated load forecasting. However, only a limited number of studies have specifically focused on adapting ML models for multi-level disaggregated load forecasting. This paper aims to present a versatile approach that can be tailored to various levels of disaggregation.

1.2. Contributions

The paper presents several contributions to multi-level disaggregated load forecasting. Firstly, it introduces SVMFAMM, a hybrid ML framework that provides reliable and accurate load forecasting. The internal parameters of SVMFAMM are set using a cross-validation process applied in FAMM, which generates a geometric region for the best parameters related to multi-level feature disaggregated load data.
In addition, the paper proposes a methodology based on Wavelet Parallel Training between FAMM and SVMFAMM, called WPT-SVMFAMM. This methodology manages and organizes inputs through multi-resolution analysis via Wavelet decomposition, resulting in accurate electricity load-disaggregation forecasting.
To validate the effectiveness of WPT-SVMFAMM for reliable load forecasting, it is applied to various levels of disaggregation using real data acquired from the Peruvian electrical system operator (COES) website, and meteorological data are taken into consideration as input patterns. The paper also assesses the performance of WPT-SVMFAMM compared to FAMM and the Hybrid Method between Least Squares and FAMM (MMQFAMM) for load forecasting models, in terms of RMSE, MSE, MAE, and MAPE for the day-ahead electricity market. Overall, the contributions of this paper provide a valuable framework for improving the accuracy and reliability of multi-level disaggregated load forecasting.

2. Material and Methods

In this section, we will provide a detailed explanation of the data and methods utilized in the present paper.

2.1. Data Description

The publicly available data sources employed in this study include electrical load and meteorological data, gathered from September 2017 to December 2019.
Electrical load data were obtained from the COES’s Daily Operation Evolution Report (IEOD) [46], which includes global loads, northern area, southwest sub-area, and large free users in central and southern areas. The global aggregation level shows predictable weekly patterns, while disaggregated levels introduce fluctuations and complexities.
The National Service of Meteorology and Hydrology of Peru (SENAMHI), a Peruvian public institution, provides real-time meteorological variables on its website [47], such as temperature, humidity, precipitation, and wind speed. In this paper, we utilized temperature and humidity data as exogenous variables due to their degree of correlation with disaggregated electrical loads, unlike precipitation and wind speed data, which show a weaker relationship.
The construction of the input structures in the proposal is based on a moving window approach [48]. In this sense, the input vector $a$ is defined as follows:
The input for the global, north area, and southwest sub-area electrical load levels is defined below; these inputs include meteorological values due to their correlation with the load:
$a = [\,t \;\; d_S \;\; d_{At} \;\; n_1^T \;\; n_2^T \;\; L_{t-3} \;\; L_{t-2} \;\; L_{t-1} \;\; L_t\,]^T, \qquad a \in \mathbb{R}^m$
The input for the electrical load levels of the large free users, the large free users of the northern area, and the southern area is defined below; in this situation, there is no correlation with the meteorological data:
$a = [\,t \;\; d_S \;\; d_{At} \;\; n_2^T \;\; L_{t-3} \;\; L_{t-2} \;\; L_{t-1} \;\; L_t\,]^T, \qquad a \in \mathbb{R}^{m_1}$
where $t$ is the time encoding in 30-min intervals; $d_S$ is the day of the week; $d_{At}$ is the coding of normal and atypical days; $L_{t-n}$ is the load at instant $t-n$; $n_1$ is a vector that stores temperature and humidity; and $n_2$ is a vector that stores $L_{t-1334}$, $L_{t-1008}$, $L_{t-672}$, and $L_{t-336}$. The output vector $b$ corresponds to time $(t+1)$, that is, the load value 30 min ahead:
$b = [\,L_{t+1}\,], \qquad b \in \mathbb{R}^1$
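To make the window construction concrete, the sketch below assembles one sample per time step following the definitions above. It is a minimal Python/NumPy illustration (the authors' implementation was carried out in MATLAB); the array names, the time and day encodings, and the helper function itself are hypothetical, and the lags are taken as listed in the text.

```python
import numpy as np

def build_samples(load, temp, hum, time_code, day_code, atypical_code, t_start, t_end):
    """Moving-window construction for the levels that use meteorological inputs.
    All series are aligned 30-min NumPy arrays; names and encodings are illustrative."""
    lags = [1334, 1008, 672, 336]          # weekly-scale load lags as listed in the text
    X, y = [], []
    for t in range(t_start, t_end):
        n1 = [temp[t], hum[t]]             # exogenous block n1 (temperature, humidity)
        n2 = [load[t - k] for k in lags]   # past-load block n2
        a = [time_code[t], day_code[t], atypical_code[t],
             *n1, *n2,
             load[t - 3], load[t - 2], load[t - 1], load[t]]
        X.append(a)                        # input vector a
        y.append(load[t + 1])              # output b: load 30 min ahead
    return np.array(X), np.array(y)
```

For the load levels without meteorological correlation, the same construction applies with the n1 block simply omitted.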

2.2. Wavelet Transform

The Wavelet transform (WT) [49,50,51] enables a multiresolution analysis [52,53], which adapts the signal resolution at various frequencies. It decomposes the signal into different components, changing the time and frequency resolution to address signal-processing problems that require varying time-frequency resolutions. Thus, the WT is designed to provide high time resolution with low frequency resolution for high-frequency components, and high frequency resolution with lower time resolution for low-frequency components. The two most used classes of WT are the Continuous Wavelet Transform (CWT) and the Discrete Wavelet Transform (DWT) [54].
The development of the DWT algorithm led to the theory of multiresolution analysis [52,53], which forms the basis for implementing filter banks [55]. These filters consist of a low-pass and a high-pass filter, and when the original signal passes through these filters, the resulting output coefficients are $c_{j,k}$ and $d_{j,k}$, respectively. Decomposing a signal into various levels can be achieved by passing the scaling coefficients obtained in the previous filtering stage through a pair of identical filters, resulting in the coefficients of the next level.
The multiresolution analysis of signals, which uses the orthogonal dyadic Wavelet Transform, is a multi-step WT process. This corresponds to sequences of nested subspaces in the band-pass filters. In this process, the coefficients are divided into approximation (A) and detail (D) coefficients, which are the results of sampling operators such as downsampling in the decomposition process and upsampling in the signal reconstruction [55]. The approximation coefficients represent the high-scale values, corresponding to the low-frequency components of the signal and are associated with the scaling function determined with a low-pass filter. Conversely, the detail coefficients are the low-scale values corresponding to the high-frequency components and are associated with the Wavelet function determined as a high-pass filter.
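As a small illustration of the filter bank described above, the sketch below applies one DWT stage with the PyWavelets package (an assumption made for illustration; the paper reports MATLAB's Wavelet Toolbox) and then cascades the same filter pair on the approximation coefficients to reach the next level.

```python
import numpy as np
import pywt

# One stage of the DWT filter bank: the signal passes through a low-pass and a
# high-pass filter followed by downsampling, yielding the approximation (c_{j,k})
# and detail (d_{j,k}) coefficients. The toy signal is illustrative only.
signal = np.sin(np.linspace(0, 8 * np.pi, 512)) + 0.1 * np.random.randn(512)
cA1, cD1 = pywt.dwt(signal, 'db20')   # first-level approximation and detail

# Feeding the approximation through the same filter pair gives the next level.
cA2, cD2 = pywt.dwt(cA1, 'db20')
```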

2.3. Modified Fuzzy ARTMAP

The modified Fuzzy ARTMAP (FAMM) model [56] is a special proposal within the Adaptive Resonance Theory (ART) neural networks [57,58] that provides a rational and economical way to improve the training process. The principles of this proposal are based on a process analogous to how the brain reacts to new knowledge: the brain contains a large number of inactive neurons, which become active as a result of learning, and this process occurs when new knowledge is required. Additionally, FAMM retains a special property of ART that solves the plasticity/stability dilemma faced by all intelligent systems, where plasticity is the capacity to incorporate new patterns without losing previously acquired knowledge and stability is the ability to respond appropriately to irrelevant events [59].
The management of the plasticity/stability dilemma relies on three internal modules based on supervised training, which provide an associative memory with an auto-regulatory mechanism, as shown in Figure 1. The Fuzzy $ART_a$ and $ART_b$ modules receive arbitrary input data to create stable recognition categories, where $ART_a$ processes the input vector and $ART_b$ processes the expected output vector. The final module, the associative Inter-ART module, verifies an active category correspondence between the $ART_a$ and $ART_b$ modules. This module works with an auto-regulating match-tracking mechanism that allows the $ART_a$ and $ART_b$ modules to match under a special tuning parameter condition [57].
Moreover, to make the training process more rational and economical, FAMM has an algorithm that activates internal neurons according to the new inputs [56]. The process starts with just one cluster, whose data are those of the first input, and the structure expands during training to accommodate new patterns. As a result, the calculation of the activation function is performed for a reduced number of clusters and for an established vigilance criterion.
The FAMM structure consists of various parameters that require prior tuning. These parameters are inherently linked to the problem and thus have a significant impact on the system’s learning speed, efficiency, and effectiveness [58].
The model's internal parameters consist of the choice parameter ($\alpha$), responsible for controlling the search sequence among the nodes of layer $F_2$; its value must be positive ($\alpha > 0$), according to Carpenter et al. [58]. The training rate ($\beta$), within the interval (0,1], determines the speed at which the weights can adapt in the network; lower values result in slower learning. The vigilance parameter ($\rho$), within the interval (0,1], controls resonance in the neural network and is related to the number of categories created; higher values result in more categories but less generalization of the network, according to Carpenter et al. [58]. The vigilance criterion, or analysis of resonance occurrence, applies to the $ART_a$ ($\rho_a$), $ART_b$ ($\rho_b$), and Inter-ART ($\rho_{ab}$) modules.
In the following paragraph, we outline the training algorithm for the modified Fuzzy ART [56]. To assist with the explanation, Figure 2 depicts the flowchart utilized during the training of the FAMM, which comprises two Fuzzy ART modules and an Inter-ART.
Step 1: Reading input data:
The following vector represents the input data:
$a = [\,a_1 \;\; a_2 \;\; \cdots \;\; a_M\,]^T$
The vector $a$ is $M$-dimensional. The input vector is normalized to avoid the proliferation of categories, as follows:
$\bar{a} = \dfrac{a}{|a|}$
where $\bar{a}$ is the normalized vector and
$|a| = \sum_{i=1}^{M} a_i$
Step 2: Input vector encoding:
Complement coding is performed to preserve the breadth of information.
$\bar{a}_i^c = 1 - \bar{a}_i$
where the vector $\bar{a}^c$ is complementary to the normalized input vector. In this way, the encoded vector is $2M$-dimensional and is represented by:
$I = [\,\bar{a}^T \;\; (\bar{a}^c)^T\,]^T$
Note that:
$|I| = \sum_{i=1}^{M} \bar{a}_i + \sum_{i=1}^{M} \bar{a}_i^c = M$
All input vectors, after normalization and coding, have the same magnitude $M$.
Step 3: Activity vector:
The activity vector of layer $F_2$ is represented by the following relationship:
$y = [\,y_1 \;\; y_2 \;\; \cdots \;\; y_N\,]^T$
where $N$ is the number of categories created in $F_2$, so that:
$y_j = \begin{cases} 1, & \text{if node } j \text{ of } F_2 \text{ is active} \\ 0, & \text{otherwise} \end{cases}$
Step 4: Network parameters:
The parameters used in the processing of the modified Fuzzy ART network are:
  • Choice parameter: $\alpha > 0$
  • Training rate: $\beta \in (0, 1]$
  • Vigilance parameter: $\rho \in (0, 1]$
Step 5: Initialization of weights:
Compared to conventional Fuzzy ART, this network starts with only one cluster that contains the data of the first input vector, thus:
$w_{11}(0) = I_{11}; \quad w_{12}(0) = I_{12}; \quad \ldots; \quad w_{1,2M}(0) = I_{1,2M}$
This means that only one cluster is active, containing the data of the first pattern.
Step 6: Cluster counter initialization:
$N = 1$
where $N$ indicates the number of active clusters.
Start of Training.
Step 7: Counter initialization:
To verify that every existing cluster has been tested for the current input, and to create a new cluster when none of the existing clusters is able to accommodate it, it is necessary to include a counter:
$Cont = 1$
Each time an entry is presented to the network, this counter is initialized.
Step 8: Calculation of the choice function:
With vector $I$ presented at $F_1$, the choice function for each node in $F_2$ is determined by:
$T_j = \dfrac{\left| I^T \wedge w_j \right|}{\alpha + \left| w_j \right|}$
where $\wedge$ denotes the fuzzy intersection or conjunction (AND) operator.
Cluster search.
Step 9: Category choice:
The chosen category is the active node $J$, that is:
$J = \arg\max \left\{ T_j : j = 1, 2, \ldots, N \right\}$
When using this equation, it is possible to find more than one active category; the selected category will be the one with the lowest index.
Step 10: Vigilance test:
The vigilance criterion is represented by:
$\dfrac{\left| I^T \wedge w_j \right|}{\left| I^T \right|} \geq \rho$
If the criterion defined in this equation is satisfied, go to Step 13.
Step 11: Verification that all clusters have been tested.
Checking whether all clusters have already been tested is performed using the equation:
$Cont < N$
If the criterion defined in this equation is not satisfied, go to Step 12. Otherwise, the counter is updated as follows:
$Cont^{new} = Cont^{old} + 1$
Then, the reset takes place. On reset, node $J$ of $F_2$ is excluded from the search process through the following relation:
$T_J = 0$
Then, return to Step 9.
Step 12: Creating a Cluster:
For the case in which the criterion of Step 11 is not satisfied, it means that none of the existing clusters supports the current entry. In this way, it is necessary to create a new cluster, so the cluster counter is updated:
$N^{new} = N^{old} + 1$
and
$J = N$
Therefrom,
$w_J = I^T$
After creating the cluster, go to Step 14.
Step 13: Update of weights (Training):
In this step, the training takes place and the weight vector is updated as follows:
$w_J^{new} = \beta \left( I^T \wedge w_J^{old} \right) + (1 - \beta)\, w_J^{old}$
where $J$ is the active category; $w_J^{old}$ is the weight vector before the update; and $w_J^{new}$ is the updated weight vector.
Step 14: Activity vector:
The activity vector of $F_2$ is symbolized by:
$y = [\,y_1 \;\; y_2 \;\; \cdots \;\; y_N\,]^T$
In this case, $N$ is the number of clusters created in $F_2$. In this way, the following relation holds:
$y_j = \begin{cases} 1, & \text{if node } j \text{ of } F_2 \text{ is active} \\ 0, & \text{otherwise} \end{cases}$
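The steps above can be condensed into a compact training loop. The following is a minimal Python/NumPy sketch of the modified Fuzzy ART module only; the $ART_b$ and Inter-ART match-tracking parts of Figure 1 are omitted, the parameter values are illustrative, and the function is not the authors' implementation.

```python
import numpy as np

def fuzzy_and(x, y):
    """Fuzzy AND: component-wise minimum."""
    return np.minimum(x, y)

def train_modified_fuzzy_art(patterns, alpha=0.05, beta=1.0, rho=0.95):
    """Sketch of the modified Fuzzy ART training loop (Steps 1-14).
    `patterns` holds the raw input vectors a, one per row; values are assumed nonnegative.
    Returns the weight matrix W, one row per created cluster."""
    W = None
    for a in patterns:
        a_bar = a / np.sum(a)                    # Step 1: normalization
        I = np.concatenate([a_bar, 1 - a_bar])   # Step 2: complement coding, |I| = M
        if W is None:
            W = I.reshape(1, -1).copy()          # Steps 5-6: first cluster stores the first pattern
            continue
        N = W.shape[0]
        T = np.array([np.sum(fuzzy_and(I, w)) / (alpha + np.sum(w)) for w in W])  # Step 8: choice function
        cont = 1                                 # Step 7: counter of tested clusters
        while True:
            J = int(np.argmax(T))                # Step 9: category choice (lowest index on ties)
            if np.sum(fuzzy_and(I, W[J])) / np.sum(I) >= rho:        # Step 10: vigilance test
                W[J] = beta * fuzzy_and(I, W[J]) + (1 - beta) * W[J]  # Step 13: weight update
                break
            if cont < N:                         # Step 11: clusters still left to test
                cont += 1
                T[J] = -np.inf                   # reset: exclude node J from the search
            else:                                # Step 12: all clusters tested, create a new one
                W = np.vstack([W, I])
                break
    return W
```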

2.4. Support Vector Machines

The Support Vector Machine (SVM) is a universal method for solving pattern recognition problems, including nonlinear problems, with good generalization capability, as evidenced in studies such as that of Cortes and Vapnik [60]. The method has also proven to be a valuable tool in regression estimation [61]. The generic estimating function of SVM regression is expressed by the following equation:
$\hat{y} = \hat{w}_0 + \hat{w}^T x$
where $\hat{y}$ is the estimating function; $\hat{w}_0$ is the bias scalar; $\hat{w}$ is the weight vector; and $x$ is the input vector.
Therefore, SVM regression is a useful technique for solving multidimensional function estimation problems [61]. The technique consists of minimizing a quadratic function of $w$ subject to linear constraints. The corresponding objective function is usually written as:
$J = C \sum_{i=1}^{N} L(y_i, \hat{y}_i) + \dfrac{1}{2} \| w \|^2$
where $J$ is the objective function; $y_i$ are the labeled training patterns; $\hat{y}$ is the estimating function; and $C$ is a regularization constant. Vapnik [62] proposed the epsilon-insensitive loss function, which renders the estimation not only robust but also sparse, defined by:
$L(y, \hat{y}) = \begin{cases} 0, & \text{if } |y - \hat{y}| < \epsilon \\ |y - \hat{y}| - \epsilon, & \text{otherwise} \end{cases}$
where $\epsilon \geq 0$ represents an acceptable error margin.
This is the most commonly used cost function; any point lying outside the $\epsilon$-insensitive tube is penalized. The resulting objective problem is convex and unconstrained, but not differentiable because of the $\epsilon$-insensitive loss. A popular approach is to introduce slack variables that represent the degree to which each point lies outside the tube:
$J = C \sum_{i=1}^{N} \left( \xi_i^+ + \xi_i^- \right) + \dfrac{1}{2} \| w \|^2$
Subject to
$y_i \leq f(x_i) + \epsilon + \xi_i^+$
$y_i \geq f(x_i) - \epsilon - \xi_i^-$
$\xi_i^+ \geq 0; \quad \xi_i^- \geq 0$
where $\xi_i^+$ refers to points above the $\epsilon$-insensitive tube and $\xi_i^-$ to points below it.
The performance of the generalized algorithm depends on correctly adjusting the regularization parameter C and the epsilon parameter, as well as the parameters related to the kernel [63]. The optimal solution for minimizing the objective function takes the following form:
$\hat{w} = \sum_i \alpha_i x_i$
where $\alpha_i \geq 0$ is the weight of each support vector in the feature space. Replacing this solution in the generic estimating function, we obtain the following equation:
$\hat{y} = \hat{w}_0 + \sum_i \alpha_i \, x_i^T x$
Finally, by replacing $x_i^T x$ with a kernel function $K(x_i, x)$, we obtain the kernelized solution form:
$\hat{y} = \hat{w}_0 + \sum_i \alpha_i \, K(x_i, x)$
There are basic kernel functions used in SVM [64], such as the Linear kernel, the Gaussian or Radial-Basis Function kernel, and the Sigmoid kernel. The Linear kernel, which computes a scaled dot product, is expressed by the following equation:
$K(x_i, x) = \delta \,(x_i \cdot x) + k$
where $\delta > 0$ and $k$ are constants.
The Gaussian (Radial-Basis Function) kernel is expressed by the following equation:
$K(x_i, x) = \exp\!\left( -\sigma \, \| x_i - x \|^2 \right)$
where $\sigma > 0$ is a constant.
The Sigmoid kernel is expressed by the following equation:
$K(x_i, x) = \tanh\!\left( \delta \,(x_i \cdot x) + k \right)$
where $\delta > 0$ and $k < 0$ are constants.
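For reference, the sketch below fits epsilon-SVR models with the three kernel families above using scikit-learn (an assumption made for illustration; the paper reports MATLAB's SVM Toolbox). The data and the hyperparameter values (C, epsilon, gamma, coef0) are placeholders only.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 4))                       # toy regression data
y = X @ np.array([0.5, -0.2, 0.8, 0.1]) + 0.05 * rng.standard_normal(200)

models = {
    "linear": SVR(kernel="linear", C=10.0, epsilon=0.01),
    "gaussian": SVR(kernel="rbf", gamma=0.5, C=10.0, epsilon=0.01),   # K = exp(-gamma ||xi - x||^2)
    "sigmoid": SVR(kernel="sigmoid", gamma=0.5, coef0=-1.0, C=10.0, epsilon=0.01),
}
for name, model in models.items():
    model.fit(X, y)
    print(name, model.score(X, y))   # R^2 on the training data, only to inspect the fit
```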

2.5. SVMFAMM Forecasting Framework

To address the challenge of disaggregated multi-level load forecasting, our proposal utilizes a hybrid model called SVMFAMM. This forecasting framework combines two methods, FAMM and SVM. We used FAMM in the training process to adapt the neural network, and SVM regression in the diagnostic process to adjust the internal category data and improve the load forecasting.
The training process for SVMFAMM was the same as that used for FAMM, as shown in Figure 3. In this case, exogenous variables and historical loads were used as inputs in the training process through the moving window procedure. After the training was complete, we moved on to the diagnostic process, which focuses on forecasting new load values. This process is important because it allows us to save the best candidates among the categories for use in the SVM regression. We iteratively searched and stored the categories created during the diagnostic process for values greater than a cutoff of the vigilance parameter $\rho_a$.
Finally, the categories vector was used in the SVM regression process to obtain a generic estimation function. This estimation function uses input data to generate load forecasting. The flowchart for the diagnostic process is shown in Figure 4.
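One possible reading of this diagnostic step is sketched below: the categories whose match with the test pattern exceeds the cutoff on $\rho_a$ are collected, an SVR is fitted on them, and its prediction is taken as the forecast. The function, the variable names, and the fallback rule are hypothetical illustrations under that assumption, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVR

def svmfamm_diagnose(I_test, W_a, y_cat, cutoff=0.95):
    """Hedged sketch of the SVMFAMM diagnosis: W_a holds the ART_a category weights and
    y_cat the load value associated with each category (both hypothetical NumPy arrays)."""
    match = np.array([np.sum(np.minimum(I_test, w)) / np.sum(I_test) for w in W_a])
    keep = match >= cutoff                    # categories passing the vigilance cutoff
    if not keep.any():                        # fallback: keep the best-matching category
        keep = match == match.max()
    svr = SVR(kernel="rbf", C=10.0, epsilon=0.001)
    svr.fit(W_a[keep], y_cat[keep])           # estimation function over the kept categories
    return float(svr.predict(I_test.reshape(1, -1))[0])
```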

3. Methodology

In summary, the methodology utilized in this paper involved dividing the raw data into smoothed and noise data via Wavelet filters. In a parallel training approach, we used the smoothed data as the training set for FAMM, and the noisy data as the training set for SVMFAMM. The load forecasting results are obtained by superimposing the outputs from both models.

3.1. Wavelet Decomposition

In this step, we utilized the multiresolution analysis methodology, a powerful method for extracting specific features from data. To decompose the electricity load data, we divided the raw data into smoothed and noise components using the Wavelet filter. The smoothed component contains the approximation part, while the noise component contains the detail coefficients up to the third level.
To accomplish this, we opted to use the Wavelet Daubechies in db20, applied to three decomposition levels, due to its ability to form an orthonormal basis, have compact support, and maintain low computational cost [11]. The resulting coefficients were then reconstructed. This procedure is illustrated in Figure 5.
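The decomposition and reconstruction step can be sketched as follows with PyWavelets (again an assumption; the paper reports MATLAB's Wavelet Toolbox): a three-level db20 decomposition whose approximation is reconstructed as the smoothed series, with the remaining detail up to the third level treated as the noise component.

```python
import numpy as np
import pywt

def split_smooth_noise(load, wavelet="db20", level=3):
    """Split the raw load series into a smoothed part (level-3 approximation)
    and a noise part (the summed detail components). Illustrative sketch only."""
    coeffs = pywt.wavedec(load, wavelet, level=level)            # [cA3, cD3, cD2, cD1]
    approx_only = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    smoothed = pywt.waverec(approx_only, wavelet)[: len(load)]   # reconstruct the approximation
    noise = np.asarray(load) - smoothed                          # details up to the third level
    return smoothed, noise
```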
The reconstructed coefficients were stored in vectors and used as inputs for the proposed models of this methodology. It is worth noting that we repeated this process for all six levels of disaggregation proposed in this paper, ensuring that the electricity load data were analyzed in detail at all levels.
To summarize, the Wavelet transform allows for the analysis of time series databases in multiresolution, making it a critical component of this methodology.

3.2. Hybrid FAMM Methodology Implementation Strategy

The hybrid methodology consists of a parallel process of training and testing between the FAMM and SVMFAMM methods, shown in Figure 6. Wavelet decomposition is used to generate inputs based on reconstructed data at various levels of disaggregation, to use them in the parallel process. The approximation data from the Wavelet reconstruction are applied in the training process linked to FAMM, while all the detailed data obtained are added and applied in the training process linked to SVMFAMM. Once the parallel training process is complete, the testing process begins for both methods, in accordance with their respective established procedures.

3.3. Perform Evaluation

Several statistical measures are available in the literature to evaluate forecasting performance [65]. The forecasting results are compared using the RMSE, MSE, MAE, and MAPE. Additionally, the Pearson correlation coefficient (PCC) is used to measure the linear correlation between the real load data and the forecasted values [66]:
$\mathrm{RMSE} = \sqrt{\dfrac{1}{n} \sum_{i=1}^{n} \left( \Upsilon_i - \hat{\Upsilon}_i \right)^2}$
$\mathrm{MSE} = \dfrac{1}{n} \sum_{i=1}^{n} \left( \Upsilon_i - \hat{\Upsilon}_i \right)^2$
$\mathrm{MAE} = \dfrac{1}{n} \sum_{i=1}^{n} \left| \Upsilon_i - \hat{\Upsilon}_i \right|$
$\mathrm{MAPE} = \dfrac{1}{n} \sum_{i=1}^{n} \left| \dfrac{\Upsilon_i - \hat{\Upsilon}_i}{\Upsilon_i} \right| \times 100$
$\mathrm{PCC} = \dfrac{\mathrm{cov}(\Upsilon, \hat{\Upsilon})}{\sigma_{\Upsilon} \, \sigma_{\hat{\Upsilon}}}$
In the above equations, $\hat{\Upsilon}_i$ represents the forecasted electricity load and $\Upsilon_i$ the real load value. Additionally, $n$ represents the number of inputs presented during the diagnosis phase.
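For completeness, these measures can be computed directly, as in the short NumPy helper below (an illustrative sketch, not the authors' evaluation code).

```python
import numpy as np

def forecast_metrics(y_true, y_pred):
    """Compute the error measures defined above for two aligned series."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_true - y_pred
    mse = np.mean(err ** 2)
    return {
        "RMSE": float(np.sqrt(mse)),
        "MSE": float(mse),
        "MAE": float(np.mean(np.abs(err))),
        "MAPE": float(np.mean(np.abs(err / y_true)) * 100.0),
        "PCC": float(np.corrcoef(y_true, y_pred)[0, 1]),
    }
```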

4. Results

In this section, we present the results based on the information provided in Section 2 and Section 3, as shown in Table 2. The evaluation of the forecasts was carried out for the day-ahead horizon in 30-min intervals, covering forty-eight load values for the day. The methodologies were trained with 4032 items of historical data from 20 May to 11 August 2019. To assess the performance of the established methods in Section 3.3, appropriate metrics were applied. Furthermore, all simulations were conducted on a computer equipped with an Intel(R) Core (TM) i7-8550U CPU @ 1.80 GHz 2.00 GHz and 8.00 GB of RAM using MATLAB software with Wavelet and SVM Toolboxes.
The methodologies in Table 2 used the FAMM training process shown in Figure 2. Internal parameter selection in the training process was important to achieve accurate results; the parameter $\rho_a$ was tuned using $k$-fold cross-validation [68] with $k = 5$ over the interval 0.85 to 0.99 in steps of 0.01. The other parameters were kept fixed at the following values: 0.99 for $\rho_b$, 0.99 for $\rho_{ab}$, 1 for $\beta$, 0.05 for $\alpha$, and 0.001 for $\varepsilon$. The cutoff on $\rho_a$ in the testing process of SVMFAMM was optimized in the interval 0.90 to 0.99 with steps of 0.01. Furthermore, Table 3 presents the computational time required for the process that yielded the best result. Detailed insights into the evaluation process and its relation to the computational time for each level of disaggregation are provided in the following.
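A hedged sketch of this tuning step is shown below: each candidate $\rho_a$ is scored by 5-fold cross-validation and the value with the lowest mean MAPE is kept. The wrapper `train_and_forecast` is a hypothetical stand-in for the FAMM training and diagnostic routines, and `X`, `y` are the windowed inputs and targets built earlier.

```python
import numpy as np
from sklearn.model_selection import KFold

def tune_rho_a(X, y, train_and_forecast):
    """Grid search over the vigilance parameter rho_a with 5-fold cross-validation."""
    best_rho, best_mape = None, np.inf
    for rho_a in np.arange(0.85, 0.99 + 1e-9, 0.01):
        fold_mape = []
        for tr, va in KFold(n_splits=5).split(X):
            y_hat = train_and_forecast(X[tr], y[tr], X[va], rho_a=rho_a)
            fold_mape.append(np.mean(np.abs((y[va] - y_hat) / y[va])) * 100)
        if np.mean(fold_mape) < best_mape:
            best_rho, best_mape = rho_a, float(np.mean(fold_mape))
    return best_rho, best_mape
```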
Table 4, Table 5, Table 6, Table 7, Table 8 and Table 9 provide a comparison of the results obtained for diverse levels of load forecasting. For each comparison, the best result obtained from tuned vigilance parameters is presented. The vigilance parameters ρ a are indicated in the header of all tables.
Table 4 presents the global load forecast results, which demonstrate that the WPT-SVMFAMM with a Gaussian kernel outperforms the benchmark methods. The weekly average of MAPE improved slightly, with a decrease of 1% compared to FAMM and a drop of 18% compared to MMQFAMM. In daily comparisons, our proposed method exhibited acceptable accuracy for 12–16 August, with a maximal reduction of 42% in terms of MAPE when compared to FAMM. Furthermore, the daily comparison between our method and MMQFAMM showed a high performance for 13–18 August, with an up to 35% drop in terms of MAPE.
On the other hand, when comparing the computational time between methods for this level, Table 3 indicates that the FAMM method requires less effort compared to the others, due to a non-exhaustive search for patterns within the data related to the vigilance parameter.
The results presented in Table 5 demonstrate the superior performance of WPT-SVMFAMM with a Gaussian kernel over the benchmark methods for load forecasting in the north area. The weekly average of MAPE indicated a slight improvement, with a 22% decrease compared to FAMM and a 24% drop compared to MMQFAMM. Daily comparisons showed that our proposed method achieved acceptable accuracy for 12, 15, 17, and 18 August, with a maximal reduction of 48% in terms of MAPE compared to FAMM. Furthermore, the daily comparison between our method and MMQFAMM demonstrated a remarkable performance for 12 August and the period of 15–18 August, with an up to 50% drop in terms of MAPE.
Alternatively, when examining the computational time for this level among different methods, Table 3 reveals that the FAMM method demands less effort compared to the others. This is a result of a less comprehensive search for patterns in the data, which is related to the vigilance parameter.
Table 6 presents the results of the load forecast for the southwest sub-area, highlighting the superior performance of WPT-SVMFAMM using a Gaussian kernel over the benchmark methods. The weekly average of MAPE showed a slight improvement, with a 7% decrease compared to FAMM. In the daily comparisons, our proposed method exhibited acceptable accuracy for 12, 14, 17, and 18 August, achieving a maximal reduction of 49% in terms of MAPE compared to FAMM. Additionally, the daily comparison between our method and MMQFAMM demonstrated remarkable performance for 12, 17, and 18 August, with a drop of up to 46% to the MAPE metric.
On the other hand, when assessing the computational cost of the methods, Table 3 reveals that the computational effort of our WPT-SVMFAMM Linear and Gaussian proposal surpasses the benchmark methods. In this specific analysis, the methods were assessed using closely related high vigilance parameters, which led to longer processing times.
Table 7 highlights the superior performance of WPT-SVMFAMM with a Gaussian kernel over the benchmark methods in the load forecast for the north area. The weekly average of MAPE showed a slight improvement, with an 11% decrease compared to FAMM and a 15% drop compared to MMQFAMM. Regarding daily comparisons, our proposed method exhibited acceptable accuracy for 12–16 August, achieving a maximal reduction of 46% in terms of MAPE compared to FAMM. Moreover, the daily comparison between our method and MMQFAMM showed a remarkable performance for 12–15 August, with an up to 44% drop in terms of MAPE.
On the other hand, when assessing the computational cost of the methods, Table 3 reveals that the computational effort of our WPT-SVMFAMM Linear and Gaussian proposal outperforms the benchmark methods. In this analysis, the MMQFAMM method exhibited a high computational cost due to its direct relationship with the vigilance parameter, which resulted from an exhaustive search for characteristic patterns. In contrast, the other methods, with lower parameters, were able to significantly improve their speed in the training and diagnostic process.
In Table 8, the load forecast results for large free users in the central area are presented, showing the superior performance of WPT-SVMFAMM with a Gaussian kernel over the benchmark methods. The weekly average of MAPE demonstrated a slight improvement, with a 6% decrease compared to FAMM and a 60% drop compared to MMQFAMM. For daily comparisons, our proposed method achieved acceptable accuracy during 12–15 August and 18 August, with a maximal reduction of 19% in terms of MAPE compared to FAMM. Furthermore, the comparison between our method and MMQFAMM showed a remarkable performance during 12–18 August, with an up to 73% drop in terms of MAPE.
Alternatively, when evaluating the computational costs of the various methods, Table 3 demonstrates that our WPT-SVMFAMM Linear and Gaussian approach surpasses the benchmark methods in terms of computational effort. In this analysis, the MMQFAMM method displayed a high computational cost, stemming from its close connection with the vigilance parameter and the exhaustive search for distinctive patterns. On the other hand, the remaining methods, possessing lower parameters, managed to enhance their speed during the training and diagnostic phases.
The results of the load forecast for large free users in the southern area are presented in Table 9, highlighting the superior performance of WPT-SVMFAMM using an optimization process over the benchmark methods. The weekly average of MAPE showed a slight improvement, with a decrease of 11% compared to FAMM and a drop of 5% compared to MMQFAMM. Regarding daily comparisons, our proposed method showed acceptable accuracy for the period of 12–14 August and 16 August, achieving a maximal reduction of 28% in terms of MAPE compared to FAMM. Furthermore, the daily comparison between our method and MMQFAMM showed a remarkable performance for the period of 12–14 August, with an up to 19% drop in terms of MAPE.
In contrast, when examining the computational expenses associated with the different methods, Table 3 shows that the computational effort required by our WPT-SVMFAMM Linear and Gaussian approach shows better performance than the benchmark methods. In this specific analysis, the methods were assessed using closely related high vigilance parameters, which led to longer processing times.
The load curves in Figure 7 exhibit a high degree of correlation with the real data, showing that WPT-SVMFAMM provides better visual agreement with the actual values. Figure 7 also indicates that the FAMM method demonstrates greater fluctuations, exceeding the real values, thus raising doubts about the system's response to unusual peaks during certain periods.
In Figure 8, our proposed method outperformed all benchmark methods in terms of curves related to Pearson correlation. All benchmark methods exhibited large fluctuations in load forecasts in daily profile curves. Additionally, the MMQFAMM methods in the central area lost their ability to generate stable results due to insufficient data in the storage vector linked to the diagnostic process. In summary, our proposed methodology, which employs a Gaussian kernel, allows for more accurate tracking of the curve in the day-ahead horizon in 30-min intervals.
Additionally, the statistical significance of the error measures shown in Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7 was verified through a Diebold–Mariano (DM) test [69]. The results of the test for the best-performing method are shown in Figure 9, which reports the DM test statistic values. A 5% significance level was used in this test, and the corresponding critical values are ±1.96. The DM values are interpreted using the following intervals: a DM statistic below −1.96 implies significant outperformance, a DM statistic above 1.96 implies significant underperformance, and no significant out- or underperformance is observed otherwise. These results are visually represented in Figure 9, where green signifies either significant outperformance or underperformance, while gold indicates a lack of significant difference in performance.
The test process considered the results of the weekly projection carried out on a 24-h day-ahead forecasting horizon with 30-min intervals. According to Figure 9, the proposed methodology, in general, shows an acceptable level of outperformance compared to the FAMM and MMQFAMM methods. However, in the southwest sub-area, the statistical test does not guarantee a significant performance difference between the compared methods, because the statistic does not exceed the critical value.
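A simple form of the DM statistic used in this comparison can be sketched as follows (squared-error loss, horizon h, asymptotic normal approximation); this is a generic textbook version offered for reference, not the authors' exact test script.

```python
import numpy as np
from scipy.stats import norm

def diebold_mariano(e1, e2, h=1):
    """DM test on two forecast-error series; negative statistics favour the first method.
    Returns the DM statistic and a two-sided p-value."""
    d = np.asarray(e1, float) ** 2 - np.asarray(e2, float) ** 2   # loss differential (squared errors)
    n = len(d)
    d_bar = d.mean()
    # Long-run variance using autocovariances up to lag h-1.
    gamma = [np.sum((d[k:] - d_bar) * (d[:n - k] - d_bar)) / n for k in range(h)]
    var_d = (gamma[0] + 2 * sum(gamma[1:])) / n
    dm = d_bar / np.sqrt(var_d)
    return dm, 2 * (1 - norm.cdf(abs(dm)))
```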

5. Conclusions, Limitations, and Future Direction

This paper proposes a hybrid ML methodology that combines FAMM and SVMFAMM models in a parallel process to address the multi-level disaggregation load forecasting problem. Our proposed methodology achieved the best results by using approximation and detailed data from the Wavelet reconstruction process, outperforming the benchmark methods established for day-ahead forecasting in a weekly window. The complexity of the data presents a challenge for modeling due to the variability in user load requirements at various levels of disaggregation and the occurrence of atypical operations in the electrical power system. To address this challenge, we reconstructed the historical data using Wavelet filters, resulting in a flexible electrical forecasting model that can forecast the electrical loads at various levels of disaggregation for the following day. We evaluated our proposed methodology against two benchmarks using FAMM and MMQFAMM models at all the levels of disaggregation established. We carefully analyzed the internal parameters of the models via cross-validation and used various performance evaluation metrics such as RMSE, PCC, MSE, MAE, and MAPE to assess the results of the forecasts; additionally, the DM Test was used for comparing methods.
Addressing concerns about the computational effort and performance of the proposed implementation, our methodology demonstrates a relatively low computational cost. This is achieved through a combination of efficient ML techniques and a parallel processing approach, allowing for improved performance without adding significant computational overhead. Furthermore, the flexible nature of the methodology allows for the easy modification of internal parameters, enabling optimization of the trade-off between computational effort and forecasting accuracy. The results of our study indicate that the proposed approach not only achieves high accuracy across various time intervals but also maintains a manageable computational burden, making it a suitable solution for electrical load forecasting in fluctuating contexts. After comparing and analyzing the results, as well as the computational effort, the proposed methodology utilizing the Gaussian kernel was determined to be the most efficient and flexible forecasting model that can effectively adapt to the various levels of electrical load disaggregation in this study.
In summary, this paper contributes to improving day-ahead forecasting for a weekly window at various levels of disaggregation, identifying crucial factors, developing pre-processing data methodologies, and evaluating methodologies for diverse kernels and benchmarks. Our proposed methodology can be applied to other electrical systems to improve operational efficiency and reduce associated costs.
Although the methodology proposed in this paper has achieved satisfactory load forecasting results, there are still some limitations that need to be addressed. First, the optimal selection of internal parameters to improve performance requires further study. Second, additional research is required to determine the optimal selection of exogenous variables and the appropriate quantity of past electrical load data for the input. Third, the evaluation of various Wavelet functions in the filter should be explored. Finally, the prediction ability of the methodology needs to be further strengthened and its universality tested.
As future recommendations, we suggest the following: The current study does not consider the optimal permutation of input values between exogenous variables that can potentially improve the performance of our methodology. Investigating this effect could be a future research direction. Additionally, while our methodology employs the FAMM method, there are various other Fuzzy ART methods in the literature that can be hybridized with SVM to further enhance our proposed methodology.

Author Contributions

L.B.G.F.: Conceptualization, writing—original draft, writing—review and editing, investigation; A.D.P.L.: Conceptualization, writing—original draft, writing—review and editing, investigation; C.R.M.: Writing—review and editing, funding acquisition, supervision. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data that support the findings of this study are openly available in COES—Daily Operation Evaluation Report at http://www.coes.org.pe/Portal/PostOperacion/Reportes/Ieod (accessed on 7 February 2023) and the National Service of Meteorology and Hydrology of Peru (SENAMHI). Available online: https://www.senamhi.gob.pe/?p=pronostico-meteorologico (accessed on 7 February 2023).

Acknowledgments

The authors are grateful for the financial support provided by Brazilian Funding Agencies CNPq process 302896/2022-8, and UNESP, Edital PROPG 37/2023.

Conflicts of Interest

The authors declare no conflict of interest.

Nomenclature

ML: Machine Learning
SEIN: Peruvian Interconnected Electrical System
FAMM: Modified Fuzzy ARTMAP Neural Network
SVMFAMM: Hybrid Support Vector Machine and FAMM
AI: Artificial Intelligence
COES: Peruvian Electrical System Operator
MMQFAMM: Hybrid Method between Least Squares and FAMM
RMSE: Root Mean Squared Error
MSE: Mean Squared Error
MAE: Mean Absolute Error
MAPE: Mean Absolute Percentage Error
PCC: Pearson Correlation Coefficient
IEOD: Daily Operation Evolution Report
SENAMHI: National Service of Meteorology and Hydrology of Peru
CWT: Continuous Wavelet Transform
DWT: Discrete Wavelet Transform
ART: Adaptive Resonance Theory
SVM: Support Vector Machine
WPT-SVMFAMM: Wavelet Parallel methodology structure between FAMM and SVMFAMM
DM: Diebold and Mariano test

References

  1. Kyriakides, E.; Polycarpou, M. Short Term Electric Load Forecasting: A Tutorial. In Trends in Neural Computation; Studies in Computational Intelligence; Chen, K., Wang, L., Eds.; Springer: Berlin/Heidelberg, Germany, 2007; Volume 35, pp. 391–418. ISBN 978-3-540-36121-3. [Google Scholar]
  2. Wood, A.J.; Wollenberg, B.F.; Sheblé, G.B. Power Generation, Operation, and Control, 3rd ed.; Wiley-IEEE: Hoboken, NJ, USA, 2013; ISBN 978-1-118-73391-2. [Google Scholar]
  3. Delboni, L.F.N.; Marujo, D.; Balestrassi, P.P.; Oliveira, D.Q. Electrical Power Systems: Evolution from Traditional Configuration to Distributed Generation and Microgrids. In Microgrids Design and Implementation; Zambroni de Souza, A.C., Castilla, M., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 1–25. ISBN 978-3-319-98686-9. [Google Scholar]
  4. Hatziargyriou, N.; Milanovic, J.; Rahmann, C.; Ajjarapu, V.; Canizares, C.; Erlich, I.; Hill, D.; Hiskens, I.; Kamwa, I.; Pal, B.; et al. Definition and Classification of Power System Stability—Revisited & Extended. IEEE Trans. Power Syst. 2021, 36, 3271–3281. [Google Scholar] [CrossRef]
  5. Ogimoto, K.; Wani, H. Making Renewables Work: Operational Practices and Future Challenges for Renewable Energy as a Major Power Source in Japan. IEEE Power Energy Mag. 2020, 18, 47–63. [Google Scholar] [CrossRef]
  6. Kroposki, B.; Johnson, B.; Zhang, Y.; Gevorgian, V.; Denholm, P.; Hodge, B.-M.; Hannegan, B. Achieving a 100% Renewable Grid: Operating Electric Power Systems with Extremely High Levels of Variable Renewable Energy. IEEE Power Energy Mag. 2017, 15, 61–73. [Google Scholar] [CrossRef]
  7. Feinberg, E.A.; Genethliou, D. Load Forecasting. In Applied Mathematics for Restructured Electric Power Systems; Power Electronics and Power Systems; Chow, J.H., Wu, F.F., Momoh, J., Eds.; Kluwer Academic Publishers: Boston, MA, USA, 2005; pp. 269–285. ISBN 978-0-387-23470-0. [Google Scholar]
  8. Sevlian, R.; Rajagopal, R. A Scaling Law for Short Term Load Forecasting on Varying Levels of Aggregation. Int. J. Electr. Power Energy Syst. 2018, 98, 350–361. [Google Scholar] [CrossRef]
  9. Murphy, K.P. Machine Learning: A Probabilistic Perspective; Adaptive Computation and Machine Learning Series; MIT Press: Cambridge, MA, USA, 2012; ISBN 978-0-262-01802-9. [Google Scholar]
  10. Lakemeyer, G.; Nebel, B. Exploring Artificial Intelligence in the New Millennium, 1st ed.; Morgan Kaufmann Publishers: San Diego, CA, USA, 2003; ISBN 978-1-55860-811-5. [Google Scholar]
  11. Mosavi, A.; Salimi, M.; Faizollahzadeh Ardabili, S.; Rabczuk, T.; Shamshirband, S.; Varkonyi-Koczy, A. State of the Art of Machine Learning Models in Energy Systems, a Systematic Review. Energies 2019, 12, 1301. [Google Scholar] [CrossRef]
  12. Islam, M.A.; Che, H.S.; Hasanuzzaman, M.; Rahim, N.A. Energy Demand Forecasting. In Energy for Sustainable Development; Elsevier: Amsterdam, The Netherlands, 2020; pp. 105–123. ISBN 978-0-12-814645-3. [Google Scholar]
  13. Göb, R.; Lurz, K.; Pievatolo, A. Electrical Load Forecasting by Exponential Smoothing with Covariates. Appl. Stoch. Model. Bus. Ind. 2013, 29, 629–645. [Google Scholar] [CrossRef]
  14. Lee, Y.W.; Tay, K.G.; Choy, Y.Y. Forecasting Electricity Consumption Using Time Series Model. IJET 2018, 7, 218. [Google Scholar] [CrossRef]
  15. Sobhani, M.; Campbell, A.; Sangamwar, S.; Li, C.; Hong, T. Combining Weather Stations for Electric Load Forecasting. Energies 2019, 12, 1510. [Google Scholar] [CrossRef]
  16. Rendon-Sanchez, J.F.; De Menezes, L.M. Structural Combination of Seasonal Exponential Smoothing Forecasts Applied to Load Forecasting. Eur. J. Oper. Res. 2019, 275, 916–924. [Google Scholar] [CrossRef]
  17. Bergmeir, C.; Hyndman, R.J.; Benítez, J.M. Bagging Exponential Smoothing Methods Using STL Decomposition and Box–Cox Transformation. Int. J. Forecast. 2016, 32, 303–312. [Google Scholar] [CrossRef]
  18. Petropoulos, F.; Hyndman, R.J.; Bergmeir, C. Exploring the Sources of Uncertainty: Why Does Bagging for Time Series Forecasting Work? Eur. J. Oper. Res. 2018, 268, 545–554. [Google Scholar] [CrossRef]
  19. Fumo, N.; Rafe Biswas, M.A. Regression Analysis for Prediction of Residential Energy Consumption. Renew. Sustain. Energy Rev. 2015, 47, 332–343. [Google Scholar] [CrossRef]
  20. Dudek, G. Pattern-Based Local Linear Regression Models for Short-Term Load Forecasting. Electr. Power Syst. Res. 2016, 130, 139–147. [Google Scholar] [CrossRef]
  21. Saber, A.Y.; Alam, A.K.M.R. Short Term Load Forecasting Using Multiple Linear Regression for Big Data. In Proceedings of the 2017 IEEE Symposium Series on Computational Intelligence (SSCI), Honolulu, HI, USA, 27 November–1 December 2017; pp. 1–6. [Google Scholar]
  22. Ciulla, G.; D’Amico, A. Building Energy Performance Forecasting: A Multiple Linear Regression Approach. Appl. Energy 2019, 253, 113500. [Google Scholar] [CrossRef]
  23. Li, Y.; He, Y.; Su, Y.; Shu, L. Forecasting the Daily Power Output of a Grid-Connected Photovoltaic System Based on Multivariate Adaptive Regression Splines. Appl. Energy 2016, 180, 392–401. [Google Scholar] [CrossRef]
  24. Selvi, M.V.; Mishra, S. Investigation of Performance of Electric Load Power Forecasting in Multiple Time Horizons with New Architecture Realized in Multivariate Linear Regression and Feed-Forward Neural Network Techniques. IEEE Trans. Ind. Appl. 2020, 56, 5603–5612. [Google Scholar] [CrossRef]
  25. Shah, I.; Iftikhar, H.; Ali, S. Modeling and Forecasting Electricity Demand and Prices: A Comparison of Alternative Approaches. J. Math. 2022, 2022, 1–14. [Google Scholar] [CrossRef]
  26. Jan, F.; Shah, I.; Ali, S. Short-Term Electricity Prices Forecasting Using Functional Time Series Analysis. Energies 2022, 15, 3423. [Google Scholar] [CrossRef]
  27. Shah, I.; Jan, F.; Ali, S. Functional Data Approach for Short-Term Electricity Demand Forecasting. Math. Probl. Eng. 2022, 2022, 6709779. [Google Scholar] [CrossRef]
  28. Peng, Y.; Wang, Y.; Lu, X.; Li, H.; Shi, D.; Wang, Z.; Li, J. Short-Term Load Forecasting at Different Aggregation Levels with Predictability Analysis. In Proceedings of the 2019 IEEE Innovative Smart Grid Technologies—Asia (ISGT Asia), Chengdu, China, 21–24 May 2019; pp. 3385–3390. [Google Scholar]
29. Dagdougui, H.; Bagheri, F.; Le, H.; Dessaint, L. Neural Network Model for Short-Term and Very-Short-Term Load Forecasting in District Buildings. Energy Build. 2019, 203, 109408.
30. Sajjad, M.; Khan, Z.A.; Ullah, A.; Hussain, T.; Ullah, W.; Lee, M.Y.; Baik, S.W. A Novel CNN-GRU-Based Hybrid Approach for Short-Term Residential Load Forecasting. IEEE Access 2020, 8, 143759–143768.
31. Khwaja, A.S.; Anpalagan, A.; Naeem, M.; Venkatesh, B. Joint Bagged-Boosted Artificial Neural Networks: Using Ensemble Machine Learning to Improve Short-Term Electricity Load Forecasting. Electr. Power Syst. Res. 2020, 179, 106080.
32. Zhang, S.; Zhang, N.; Zhang, Z.; Chen, Y. Electric Power Load Forecasting Method Based on a Support Vector Machine Optimized by the Improved Seagull Optimization Algorithm. Energies 2022, 15, 9197.
33. Zulfiqar, M.; Kamran, M.; Rasheed, M.B.; Alquthami, T.; Milyani, A.H. Hyperparameter Optimization of Support Vector Machine Using Adaptive Differential Evolution for Electricity Load Forecasting. Energy Rep. 2022, 8, 13333–13352.
34. Li, C. A Fuzzy Theory-Based Machine Learning Method for Workdays and Weekends Short-Term Load Forecasting. Energy Build. 2021, 245, 111072.
35. Yang, Y.; Hong, W.; Li, S. Deep Ensemble Learning Based Probabilistic Load Forecasting in Smart Grids. Energy 2019, 189, 116324.
36. Sideratos, G.; Ikonomopoulos, A.; Hatziargyriou, N.D. A Novel Fuzzy-Based Ensemble Model for Load Forecasting Using Hybrid Deep Neural Networks. Electr. Power Syst. Res. 2020, 178, 106025.
37. Massaoudi, M.; Refaat, S.S.; Chihi, I.; Trabelsi, M.; Oueslati, F.S.; Abu-Rub, H. A Novel Stacked Generalization Ensemble-Based Hybrid LGBM-XGB-MLP Model for Short-Term Load Forecasting. Energy 2021, 214, 118874.
38. Yu, Z.; Haghighat, F.; Fung, B.C.M.; Yoshino, H. A Decision Tree Method for Building Energy Demand Modeling. Energy Build. 2010, 42, 1637–1646.
39. Barman, M.; Dev Choudhury, N.B. Season Specific Approach for Short-Term Load Forecasting Based on Hybrid FA-SVM and Similarity Concept. Energy 2019, 174, 886–896.
40. Eseye, A.T.; Lehtonen, M.; Tukia, T.; Uimonen, S.; John Millar, R. Machine Learning Based Integrated Feature Selection Approach for Improved Electricity Demand Forecasting in Decentralized Energy Systems. IEEE Access 2019, 7, 91463–91475.
41. Yan, K.; Li, W.; Ji, Z.; Qi, M.; Du, Y. A Hybrid LSTM Neural Network for Energy Consumption Forecasting of Individual Households. IEEE Access 2019, 7, 157633–157642.
42. Wang, Y.; Kong, Y.; Tang, X.; Chen, X.; Xu, Y.; Chen, J.; Sun, S.; Guo, Y.; Chen, Y. Short-Term Industrial Load Forecasting Based on Ensemble Hidden Markov Model. IEEE Access 2020, 8, 160858–160870.
43. Amorim, A.J.; Abreu, T.A.; Tonelli-Neto, M.S.; Minussi, C.R. A New Formulation of Multinodal Short-Term Load Forecasting Based on Adaptive Resonance Theory with Reverse Training. Electr. Power Syst. Res. 2020, 179, 106096.
44. Müller, M.R.; Gaio, G.; Carreno, E.M.; Lotufo, A.D.P.; Teixeira, L.A. Electrical Load Forecasting in Disaggregated Levels Using Fuzzy ARTMAP Artificial Neural Network and Noise Removal by Singular Spectrum Analysis. SN Appl. Sci. 2020, 2, 1218.
45. Jin, N.; Yang, F.; Mo, Y.; Zeng, Y.; Zhou, X.; Yan, K.; Ma, X. Highly Accurate Energy Consumption Forecasting Model Based on Parallel LSTM Neural Networks. Adv. Eng. Inform. 2022, 51, 101442.
46. Daily Operation Evolution Report (IEOD)—Peruvian Electrical System Operator (COES). Available online: https://www.coes.org.pe/portal/postoperacion/reportes/ieod (accessed on 7 February 2023).
47. National Service of Meteorology and Hydrology of Peru (SENAMHI). Available online: https://www.senamhi.gob.pe/?P=pronostico-meteorologico (accessed on 7 February 2023).
48. Park, D.C.; El-Sharkawi, M.A.; Marks, R.J.; Atlas, L.E.; Damborg, M.J. Electric Load Forecasting Using an Artificial Neural Network. IEEE Trans. Power Syst. 1991, 6, 442–449.
49. Morlet, J.; Arens, G.; Fourgeau, E.; Giard, D. Wave Propagation and Sampling Theory—Part II: Sampling Theory and Complex Waves. Geophysics 1982, 47, 222–236.
50. Grossmann, A.; Morlet, J. Decomposition of Hardy Functions into Square Integrable Wavelets of Constant Shape. SIAM J. Math. Anal. 1984, 15, 723–736.
51. Meyer, Y. Principe d’incertitude, bases hilbertiennes et algèbres d’opérateurs. Astérisque 1987, 145–146, 209–223.
52. Mallat, S.G. Multiresolution Approximations and Wavelet Orthonormal Bases of L2(R). Trans. Am. Math. Soc. 1989, 315, 69.
53. Mallat, S.G. A Theory for Multiresolution Signal Decomposition: The Wavelet Representation. IEEE Trans. Pattern Anal. Mach. Intell. 1989, 11, 674–693.
54. Daubechies, I. Ten Lectures on Wavelets; CBMS-NSF Regional Conference Series in Applied Mathematics; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1992; ISBN 978-0-89871-274-2.
55. Vetterli, M.; Herley, C. Wavelets and Filter Banks: Theory and Design. IEEE Trans. Signal Process. 1992, 40, 2207–2232.
56. Moreno, A.L. Transient Stability Analysis Based on Modified Fuzzy Euclidean ART-ARTMAP Neural Network with Continuous Training. Ph.D. Thesis, UNESP-São Paulo State University, Ilha Solteira, Brazil, 2010.
57. Carpenter, G.A.; Grossberg, S.; Rosen, D.B. Fuzzy ART: Fast Stable Learning and Categorization of Analog Patterns by an Adaptive Resonance System. Neural Netw. 1991, 4, 759–771.
58. Carpenter, G.A.; Grossberg, S.; Markuzon, N.; Reynolds, J.H.; Rosen, D.B. Fuzzy ARTMAP: A Neural Network Architecture for Incremental Supervised Learning of Analog Multidimensional Maps. IEEE Trans. Neural Netw. 1992, 3, 698–713.
59. Carpenter, G.A.; Grossberg, S. The ART of Adaptive Pattern Recognition by a Self-Organizing Neural Network. Computer 1988, 21, 77–88.
60. Cortes, C.; Vapnik, V. Support-Vector Networks. Mach. Learn. 1995, 20, 273–297.
61. Vapnik, V.; Golowich, S.E.; Smola, A.J. Support Vector Method for Function Approximation, Regression Estimation and Signal Processing. In Proceedings of the 9th International Conference on Neural Information Processing Systems (NIPS’96), Denver, CO, USA, 3–5 December 1996.
62. Vapnik, V. The Nature of Statistical Learning Theory, 2nd ed.; Springer Science & Business Media: Berlin/Heidelberg, Germany, 1999; ISBN 978-1-4757-3264-1.
63. Smola, A.J.; Schölkopf, B. A Tutorial on Support Vector Regression. Stat. Comput. 2004, 14, 199–222.
64. Schölkopf, B.; Smola, A.J. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond; Adaptive Computation and Machine Learning; MIT Press: Cambridge, MA, USA, 2002; ISBN 978-0-262-19475-4.
65. Hyndman, R.J.; Koehler, A.B. Another Look at Measures of Forecast Accuracy. Int. J. Forecast. 2006, 22, 679–688.
66. Pearson, K. LIII. On Lines and Planes of Closest Fit to Systems of Points in Space. Lond. Edinb. Dublin Philos. Mag. J. Sci. 1901, 2, 559–572.
67. Alves, M.F. Mixed Non-Residential Electrical Loads Forecast via Fuzzy ARTMAP Neural Networks. Ph.D. Thesis, UNESP-São Paulo State University, Ilha Solteira, Brazil, 2010.
68. Jung, Y.; Hu, J. A K-Fold Averaging Cross-Validation Procedure. J. Nonparametr. Stat. 2015, 27, 167–179.
69. Diebold, F.X.; Mariano, R.S. Comparing Predictive Accuracy. J. Bus. Econ. Stat. 1995, 13, 253.
Figure 1. The internal FAMM architecture.
Figure 2. Flowchart of the FAMM training process.
Figure 3. Structure and process of SVMFAMM in load forecasting. (a) Inputs for the SVMFAMM training process; (b) SVMFAMM process for load forecasting.
Figure 4. Flowchart of the SVMFAMM testing (diagnostic) process.
Figure 5. Coefficient reconstruction using the Daubechies wavelet db20 with three decomposition levels for global loads.
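For readers who wish to reproduce the kind of decomposition illustrated in Figure 5, the following minimal Python sketch performs a three-level db20 decomposition and per-level reconstruction with the PyWavelets package; the synthetic load series and variable names are illustrative assumptions, not the exact pipeline used in this work.

```python
# Minimal sketch of a three-level db20 decomposition and per-level
# reconstruction, in the spirit of Figure 5 (PyWavelets assumed available).
import numpy as np
import pywt

# Illustrative hourly load series (placeholder for the SEIN global load).
rng = np.random.default_rng(0)
t = np.linspace(0, 14 * np.pi, 1008)
load = 6500 + 400 * np.sin(t) + rng.normal(0, 50, t.size)

wavelet, level = "db20", 3
coeffs = pywt.wavedec(load, wavelet, level=level)  # [cA3, cD3, cD2, cD1]

# Reconstruct each component back to the original length so that the
# approximation and detail signals can be forecast in parallel.
components = []
for i in range(len(coeffs)):
    kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    components.append(pywt.waverec(kept, wavelet)[: load.size])

approximation, details = components[0], components[1:]
print(approximation.shape, [d.shape for d in details])
```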
Figure 6. Structure and parallel training process of WPT-SVMFAMM in load forecasting. (a) Inputs for the SVMFAMM training process; (b) SVMFAMM process for load forecasting.
Figure 7. Comparison of day-ahead load forecast in weekly windows. (a) Global loads (SEIN); (b) North area; (c) Southwest sub-area.
Figure 8. Comparison of day-ahead load forecast of free users in weekly windows. (a) Large free users (Global); (b) Large free users of the central area; (c) Large free users of the southern area.
Figure 9. Diebold–Mariano Test for FAMM, MMQFAMM, WPT-SVMFAMM Optimized, WPT-SVMFAMM Linear, and WPT-SVMFAMM Gaussian.
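Figure 9 summarizes pairwise Diebold–Mariano comparisons between the evaluated models. As a reference for how such a comparison can be computed, the sketch below implements the basic DM statistic under squared-error loss for a one-step horizon, following [69]; the function name, the normal approximation, and the synthetic forecasts are assumptions introduced only for illustration.

```python
# Minimal Diebold-Mariano test sketch (squared-error loss, horizon h = 1),
# in the spirit of Diebold and Mariano [69]; forecast arrays are illustrative.
import numpy as np
from scipy import stats

def diebold_mariano(actual, forecast_a, forecast_b, h=1):
    """Return the DM statistic and two-sided p-value for equal accuracy."""
    d = (np.asarray(actual) - np.asarray(forecast_a)) ** 2 \
        - (np.asarray(actual) - np.asarray(forecast_b)) ** 2
    n = d.size
    d_bar = d.mean()
    # Long-run variance of the loss differential with h-1 autocovariance terms.
    gamma = [np.cov(d[k:], d[: n - k])[0, 1] if k > 0 else d.var() for k in range(h)]
    var_d = (gamma[0] + 2 * sum(gamma[1:])) / n
    dm = d_bar / np.sqrt(var_d)
    p_value = 2 * stats.norm.sf(abs(dm))
    return dm, p_value

# Example with synthetic data (one week of hourly observations).
rng = np.random.default_rng(1)
y = rng.normal(size=168)
dm_stat, p = diebold_mariano(y, y + rng.normal(0, 0.1, 168), y + rng.normal(0, 0.2, 168))
print(round(dm_stat, 3), round(p, 3))
```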
Table 1. Related Works.

[29] Proposal: Artificial Neural Network (ANN). Dataset: District building. Horizon: Very-short-term; short-term. Performance metrics: Mean Absolute Percentage Error (MAPE). Contributions: (1) Assessment of various neural networks and learning methods for forecasting load consumption accuracy in a district of buildings. (2) Evaluation of forecast performance depending on the time horizon used for generating the forecast. (3) Examination of forecast performance for single buildings and aggregated buildings.

[30] Proposal: Hybrid Convolution Neural Network (CNN) and Gated Recurrent Units (GRU). Dataset: Appliances Energy Prediction; Individual Household Electric Power Consumption. Horizon: Short-term. Performance metrics: Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE). Contributions: (1) Application and comparison of diverse machine learning and deep learning models for prediction accuracy. (2) Development of a hybrid CNN–GRU model for hourly electricity consumption prediction. (3) Evaluation of the proposed model’s high performance on the dataset.

[31] Proposal: Combines both bagging and boosting to train Bagged–Boosted ANNs (Bag-BoostNN). Dataset: The New England Pool region. Horizon: Short-term. Performance metrics: MAPE. Contributions: (1) Improved short-term load forecasting technique using Bag-BoostNN. (2) Demonstrated reduction in bias and variance using real data, compared to single ANN, Bagged ANN, and Boosted ANN.

[32] Proposal: Improved Seagull Optimization Algorithm (ISOA) that optimizes Support Vector Machines (SVM) (ISOA-SVM). Dataset: Power plant in eastern Slovakia. Horizon: Short-term. Performance metrics: MSE. Contributions: (1) Constructed a power load forecasting model based on the ISOA-SVM approach. (2) Utilized ISOA to optimize the SVM’s internal parameters, addressing the issue of random parameter selection affecting the model’s performance. (3) Proposed three strategies to improve the optimization performance and convergence accuracy of the Seagull Optimization Algorithm (SOA). (4) Developed the ISOA with better optimization performance and higher convergence accuracy. (5) Established a load forecasting model based on ISOA-SVM using Mean Square Error (MSE) as the objective function. (6) Demonstrated better prediction performance and accuracy of the ISOA-SVM model compared to other models, providing guidance for power generation and power consumption planning.

[33] Proposal: Hybrid model that integrates the multivariate empirical modal decomposition (MEMD) and adaptive differential evolution (ADE) algorithm with a Support Vector Machine (SVM). Dataset: Independent System Operator (ISO) New England (ISO-NE) energy sector. Horizon: Short-term. Performance metrics: Directional Accuracy (DA), MAPE, RMSE, coefficient of determination (R2). Contributions: (1) Transition towards hybridization in the approach. (2) Utilized the ADE technique for hyperparameter modification/adaptation. (3) Implemented innovative performance evaluation criteria.

[34] Proposal: Improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN). Dataset: Victoria in Australia. Horizon: Short-term. Performance metrics: Average Error (AE), MAE, RMSE, MAPE, Direction Accuracy (DA), Fractional Bias (FB), Theil’s Inequality Coefficient (TIC), Internal Forecasting Coverage Probability (IFCP), Internal Forecasting Average Width (IFAW). Contributions: (1) Conducted data cleaning to eliminate noise and mine hidden characteristics using data de-noising methods and FTS. (2) Applied a multi-objective optimization algorithm to optimize ANNs for enhanced forecasting stability and accuracy. (3) Considered seasonal and workday–weekend temporal patterns in the analysis.

[35] Proposal: A scalable and flexible deep ensemble learning framework. Dataset: Irish Commission for Energy Regulation. Horizon: Short-term. Performance metrics: Pinball loss score, Winkler score. Contributions: (1) Proposed an end-to-end deep ensemble learning model for probabilistic load forecasting without the need for additional feature extraction and selection; well-suited for distributed computing and large-scale industry applications. (2) Formulated a LASSO-based quantile forecast combination strategy for the deep ensemble learning model to elevate performance by refining individual forecasts.

[36] Proposal: Fuzzy-based ensemble model that uses hybrid deep learning neural networks. Dataset: Hellenic interconnected power system; the isolated power system of Crete. Horizon: Short-term. Performance metrics: MAPE, MAE, RMSE. Contributions: (1) Developed a novel, hybrid structure for week-ahead load forecasting combining ensemble forecasting, artificial neural networks, and deep learning architectures. (2) Implemented a new Fuzzy clustering method to create an ensemble prediction by clustering input data. (3) Applied a new regression approach to model the load forecasting problem locally for each cluster. (4) Utilized a two-stage approach, involving training a radial basis function neural network (RBFNN) using three-fold cross-validation, followed by using a convolutional neural network (CNN) with the transformed input data. (5) Constructed a neural network with an RBF, a convolutional, a pooling, and two fully-connected layers, trained using the Adam optimization algorithm within the Tensorflow deep learning framework. (6) Designed the model to predict hourly load for the next seven days and evaluated its effectiveness in two different case studies.

[37] Proposal: Stacked generalization ensemble-based hybrid Light Gradient Boosting Machine (LGBM), eXtreme Gradient Boosting machine (XGB), and Multi-Layer Perceptron (MLP) (LGBM-XGB-MLP). Dataset: Power supply industry in the city of Johor. Horizon: Short-term. Performance metrics: MAE, RMSE, R2, Mean Squared Logarithmic Error (MSLE), Median Absolute Error (MdAE), MAPE. Contributions: (1) Proposed a novel Stacked XGB-LGBM-MLP model to improve overall regression performance. (2) Explored and developed a novel short-term load forecasting (STLF) technique. (3) Conducted a comprehensive comparative analysis of five hybrid optimization (HO) algorithms for STLF. (4) Assessed the proposed technique using two real datasets. (5) Performed a comparative study with recent benchmark techniques.
Table 2. Information on the methods used for the evaluation process.

Methodologies | ML Methods
Benchmark 1 | FAMM
Benchmark 2 | MMQFAMM [67]
Proposal | WPT-SVMFAMM (Linear/Gaussian/Optimized)
Day-ahead forecast period: 12 to 18 August 2019. Trained data: 4032.
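To make the Linear and Gaussian variants listed in Table 2 concrete, the sketch below fits a support vector regression stage with linear and Gaussian (RBF) kernels on lagged load and temperature features using scikit-learn; the feature construction, hyperparameter values, and synthetic data are illustrative assumptions and do not reproduce the WPT-SVMFAMM pipeline itself.

```python
# Minimal sketch of an SVM-regression stage with linear and Gaussian (RBF)
# kernels, echoing the Linear/Gaussian variants in Table 2. The features
# (lagged load plus a temperature column) are illustrative assumptions.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
hours = np.arange(24 * 60)                       # 60 days of hourly samples
load = 6500 + 500 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 80, hours.size)
temperature = 20 + 5 * np.sin(2 * np.pi * hours / 24 - 1) + rng.normal(0, 1, hours.size)

# Features: load 24 h and 168 h earlier, plus the same-hour temperature.
lag = 168
X = np.column_stack([load[lag - 24:-24], load[:-lag], temperature[lag:]])
y = load[lag:]

for kernel in ("linear", "rbf"):
    model = make_pipeline(StandardScaler(), SVR(kernel=kernel, C=10.0, epsilon=0.1))
    model.fit(X[:-24], y[:-24])                  # hold out the last day
    pred = model.predict(X[-24:])
    mape = 100 * np.mean(np.abs((y[-24:] - pred) / y[-24:]))
    print(f"{kernel}: day-ahead MAPE = {mape:.2f}%")
```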
Table 3. Computational time for the analysis during the training and diagnostic process.

ML Methods | Day-Ahead Average Computational Time (s)
Global | North Area | Southwest Sub-Area | Large Free Users (LFU) | Central Area LFU | Southern Area LFU
FAMM70773917463280
MMQFAMM124120340120115265
WPT-SVMFAMM
Linear
1502052825363234
Gaussian1522092855465230
Optimized19226645811088277
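Tables 4–9 below report RMSE, PCC, MSE, MAE, and MAPE per day and per method. As a reference for how these quantities can be computed, the following minimal NumPy sketch evaluates them from paired observed and forecast load series; the function and the synthetic example are illustrative only.

```python
# Minimal sketch of the evaluation metrics reported in Tables 4-9
# (RMSE, PCC, MSE, MAE, MAPE); the load arrays below are illustrative.
import numpy as np

def evaluation_metrics(actual, forecast):
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    error = actual - forecast
    mse = np.mean(error ** 2)
    return {
        "RMSE": np.sqrt(mse),
        "PCC": np.corrcoef(actual, forecast)[0, 1],   # Pearson correlation
        "MSE": mse,
        "MAE": np.mean(np.abs(error)),
        "MAPE": 100.0 * np.mean(np.abs(error / actual)),
    }

# Example: one day of hourly observed vs. forecast load (MW), illustrative.
rng = np.random.default_rng(2)
observed = 6500 + 500 * np.sin(np.linspace(0, 2 * np.pi, 24))
predicted = observed + rng.normal(0, 60, 24)
print({k: round(v, 2) for k, v in evaluation_metrics(observed, predicted).items()})
```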
Table 4. Evaluation metrics by methods—Global loads (SEIN).

DAY | METRICS | FAMM | MMQFAMM | WPT-SVMFAMM Opt | WPT-SVMFAMM Linear | WPT-SVMFAMM Gauss
0.93 | 0.97 | 0.98A/0.99D 0.91 | 0.98A/0.99D 0.90 | 0.98A/0.99D 0.90
12 August 2019RMSE110.6097.98104.67105.86103.65
PCC0.990.990.990.990.99
MSE12,233.329600.3310,955.4311,205.5110,742.97
MAE83.5177.9282.7683.5479.79
MAPE1.511.361.451.461.41
13 August 2019RMSE107.88154.57108.25105.3097.86
PCC0.980.960.980.970.98
MSE11,638.3223,893.3211,717.6311,087.349576.81
MAE85.07118.0087.8087.1079.31
MAPE1.421.941.471.451.32
14 August 2019RMSE136.09143.85117.46117.57122.27
PCC0.980.970.980.970.98
MSE18,521.2420,693.3113,797.9413,822.6414,951.15
MAE112.73117.1292.0893.1499.78
MAPE1.881.981.541.561.68
15 August 2019RMSE94.03109.4285.4985.2384.25
PCC0.980.970.980.980.98
MSE8842.2211,972.217309.187264.907097.56
MAE72.9685.5270.5669.8467.16
MAPE1.201.401.181.171.12
16 August 2019RMSE104.5692.5659.3159.7863.47
PCC0.970.980.990.990.99
MSE10,932.118566.663517.123573.134028.41
MAE85.3676.5949.4249.6950.64
MAPE1.411.260.810.810.82
17 August 2019RMSE71.80156.16123.88126.01129.44
PCC0.980.950.980.980.98
MSE5155.3324,385.9415,345.9015,878.3716,754.35
MAE53.22131.39107.59109.89111.61
MAPE0.892.191.781.821.84
18 August 2019RMSE94.34106.5198.7195.7694.52
PCC0.970.970.970.970.97
MSE8900.9111,345.209742.989170.908934.71
MAE76.6087.3079.5276.3576.91
MAPE1.391.601.461.401.41
Table 5. Evaluation metrics by methods—North area.

DAY | METRICS | FAMM | MMQFAMM | WPT-SVMFAMM Opt | WPT-SVMFAMM Linear | WPT-SVMFAMM Gauss
0.96 | 0.96 | 0.99A/0.99D 0.94 | 0.99A/0.99D 0.90 | 0.99A/0.99D 0.90
12 August 2019RMSE38.3440.3223.9124.7120.22
PCC0.960.930.970.970.98
MSE1470.191625.58571.92610.70408.88
MAE31.3533.2318.7619.4315.98
MAPE3.934.122.392.472.05
13 August 2019RMSE45.6845.1345.8946.7746.79
PCC0.910.870.920.920.93
MSE2086.792036.372106.162187.402189.29
MAE36.3833.2837.4238.2138.25
MAPE4.263.864.304.394.40
14 August 2019RMSE31.5140.1038.8539.2136.54
PCC0.950.860.930.930.94
MSE992.781607.841509.601537.391335.07
MAE26.0628.1731.1331.5029.20
MAPE3.043.243.533.583.32
15 August 2019RMSE34.7039.0724.7324.1824.31
PCC0.950.900.950.950.95
MSE1203.991526.44611.64584.73590.76
MAE25.4431.1717.6217.7117.82
MAPE3.083.662.102.112.13
16 August 2019RMSE23.9327.9625.4624.2123.05
PCC0.960.930.950.950.96
MSE572.85781.96648.41585.98531.42
MAE18.3920.9720.1119.4318.92
MAPE2.142.392.322.242.20
17 August 2019RMSE36.5037.7826.1825.0123.66
PCC0.960.920.950.950.96
MSE1331.991427.54685.17625.63559.93
MAE30.2928.2421.1319.9019.25
MAPE3.553.252.452.312.25
18 August 2019RMSE47.8851.7133.6133.9331.17
PCC0.940.900.940.940.95
MSE2292.742673.431129.431151.01971.86
MAE38.7739.1825.4325.2123.87
MAPE4.654.633.033.022.86
Table 6. Evaluation metrics by methods—Southwest sub-area.

DAY | METRICS | FAMM | MMQFAMM | WPT-SVMFAMM Opt | WPT-SVMFAMM Linear | WPT-SVMFAMM Gauss
0.99 | 0.99 | 0.99A/0.99D 0.90 | 0.99A/0.99D 0.97 | 0.99A/0.99D 0.90
12 August 2019RMSE52.7050.9145.9445.8948.12
PCC0.730.760.820.820.80
MSE2777.792592.172110.792105.832315.67
MAE43.4139.9735.3435.2936.63
MAPE3.673.403.003.003.10
13 August 2019RMSE58.5856.3955.8856.7955.96
PCC0.190.010.480.490.46
MSE3431.623180.353122.733225.573131.07
MAE47.7744.5847.3648.5347.73
MAPE4.043.774.004.094.03
14 August 2019RMSE58.2456.0357.1158.1855.90
PCC0.240.220.180.200.20
MSE3391.743139.003261.453384.443124.71
MAE50.1948.5250.4450.6148.39
MAPE4.384.234.404.424.23
15 August 2019RMSE108.97103.43119.86120.57121.14
PCC0.240.260.220.180.17
MSE11,874.8110,697.2114,366.6114,535.9614,673.70
MAE77.3973.5197.0296.6397.40
MAPE7.256.909.028.999.06
16 August 2019RMSE63.0455.9462.3362.1562.17
PCC0.770.800.820.820.80
MSE3973.523129.393885.613862.083864.70
MAE52.8846.8654.4454.4753.54
MAPE4.483.974.614.614.54
17 August 2019RMSE44.1639.4631.3031.2431.36
PCC0.570.590.830.830.83
MSE1950.101556.93979.68975.83983.43
MAE37.0333.5526.8126.6826.78
MAPE3.112.812.232.222.22
18 August 2019RMSE68.2166.8135.5936.0035.90
PCC0.280.100.470.480.47
MSE4652.064463.641266.361295.651288.71
MAE56.4053.9528.9629.7829.42
MAPE4.934.732.502.572.54
Table 7. Evaluation metrics by methods—Large free users (Global).

DAY | METRICS | FAMM | MMQFAMM | WPT-SVMFAMM Opt | WPT-SVMFAMM Linear | WPT-SVMFAMM Gauss
0.92 | 0.97 | 0.92A/0.99D 0.90 | 0.92A/0.99D 0.90 | 0.92A/0.99D 0.97
12 August 2019RMSE79.65110.0979.7681.0082.67
PCC0.870.650.780.780.80
MSE6344.2412,118.916361.466560.816835.11
MAE61.1493.2459.7360.5858.83
MAPE3.104.733.013.052.96
13 August 2019RMSE86.6888.1683.5584.3873.06
PCC0.850.820.810.810.86
MSE7513.267772.866981.367120.615337.67
MAE61.8666.7563.8163.5459.01
MAPE3.113.393.243.222.98
14 August 2019RMSE104.35104.3359.6860.2258.41
PCC0.650.610.860.860.87
MSE10,889.5410,885.643561.633626.223412.13
MAE86.8982.1147.2047.6447.13
MAPE4.634.452.502.522.48
15 August 2019RMSE88.4491.2577.9979.2281.40
PCC0.790.780.790.790.77
MSE7822.028326.976081.936275.576625.84
MAE64.0172.9160.6462.2062.37
MAPE3.343.863.183.263.27
16 August 2019RMSE87.8474.5579.9376.7581.46
PCC0.870.860.890.890.90
MSE7715.705557.556388.445889.876635.45
MAE75.2160.1266.9963.8369.96
MAPE3.883.023.393.223.54
17 August 2019RMSE81.2386.1991.0090.6784.17
PCC0.700.660.570.580.66
MSE6599.127429.018281.008220.697084.54
MAE61.8862.7770.0370.0764.30
MAPE3.113.283.633.633.33
18 August 2019RMSE61.1049.0952.7956.9956.88
PCC0.500.570.560.540.57
MSE3733.372410.212786.543247.583234.86
MAE48.0140.7944.1647.5048.20
MAPE2.311.962.122.282.31
Table 8. Evaluation metrics by methods—Large free users of the central area.

DAY | METRICS | FAMM | MMQFAMM | WPT-SVMFAMM Opt | WPT-SVMFAMM Linear | WPT-SVMFAMM Gauss
0.93 | 0.97 | 0.93A/0.99D 0.90 | 0.93A/0.99D 0.93 | 0.93A/0.99D 0.99
12 August 2019RMSE67.55108.9252.4353.4546.69
PCC0.830.780.900.890.92
MSE4563.1511,864.412748.952856.942179.92
MAE47.4594.4944.0543.3838.07
MAPE5.8011.495.505.374.73
13 August 2019RMSE68.34177.0356.6156.8051.45
PCC0.820.450.820.820.85
MSE4670.6431,338.283204.343226.612647.07
MAE47.39159.9848.4148.4742.22
MAPE5.9619.436.076.105.38
14 August 2019RMSE74.1983.7464.5165.1467.26
PCC0.790.750.850.850.85
MSE5504.007011.694161.094242.724524.39
MAE60.4772.2252.7353.1855.16
MAPE8.199.867.377.447.65
15 August 2019RMSE75.2295.2167.0567.4368.23
PCC0.680.680.700.710.71
MSE5657.319065.314495.284547.314655.71
MAE61.5378.5654.7855.3255.63
MAPE8.0710.517.157.237.23
16 August 2019RMSE67.09233.9674.0375.6873.16
PCC0.800.710.860.860.88
MSE4501.4654,736.325480.105728.025351.75
MAE52.55223.8062.1363.8561.73
MAPE7.0429.427.958.178.02
17 August 2019RMSE66.75181.2975.9477.0369.66
PCC0.700.410.690.710.73
MSE4455.7032,867.845767.225933.834852.88
MAE52.41163.3661.5364.4955.18
MAPE7.1220.958.158.407.25
18 August 2019RMSE72.11138.4460.8363.3663.28
PCC0.06−0.05−0.14−0.06−0.07
MSE5200.4419,166.673700.124014.964004.97
MAE56.70123.3949.4251.0949.73
MAPE6.7214.175.816.025.84
Table 9. Evaluation metrics by methods—Large free users of the southern area.

DAY | METRICS | FAMM | MMQFAMM | WPT-SVMFAMM Opt | WPT-SVMFAMM Linear | WPT-SVMFAMM Gauss
0.99 | 0.99 | 0.99A/0.99D 0.91 | 0.99A/0.99D 0.93 | 0.99A/0.99D 0.93
12 August 2019RMSE83.7182.3872.5472.3872.67
PCC−0.10−0.15−0.03−0.03−0.01
MSE7007.126787.235262.145239.265280.70
MAE70.3569.3155.8755.7655.92
MAPE7.026.885.615.595.61
13 August 2019RMSE68.3066.7359.9659.9962.15
PCC0.700.700.760.770.75
MSE4665.284452.953595.453598.503862.63
MAE55.9954.3845.7745.8247.92
MAPE5.685.514.664.674.88
14 August 2019RMSE86.1366.9951.3351.6350.85
PCC0.090.340.490.490.50
MSE7418.554487.512634.772666.152585.51
MAE63.3951.1144.0444.2143.37
MAPE6.195.004.484.504.42
15 August 2019RMSE55.9655.7554.3354.0954.99
PCC0.610.600.620.620.61
MSE3131.543107.812951.322926.223024.07
MAE37.1437.8338.0037.9838.81
MAPE3.793.863.873.863.96
16 August 2019RMSE50.8746.2449.3149.3549.44
PCC0.450.540.500.500.51
MSE2587.482138.162431.762435.452443.85
MAE42.3638.5441.9141.7241.82
MAPE4.133.764.074.054.06
17 August 2019RMSE20.4118.1122.6123.0222.54
PCC0.720.720.530.540.55
MSE416.47327.84511.43530.07508.21
MAE17.1914.6717.5117.6917.73
MAPE1.621.381.661.671.68
18 August 2019RMSE65.3162.6265.8666.8567.71
PCC−0.25−0.32−0.16−0.15−0.16
MSE4264.993921.364337.854468.564584.65
MAE51.2248.5652.7053.8054.61
MAPE4.974.725.115.215.29