1. Introduction
Predicting energy demand has a significant effect on the operation and scheduling of any power system, regardless of its size. Whether the system is a microgrid or a traditional power network, forecasting electricity demand is required if a reliable and efficient power system is desired. Many elements, such as energy price, unit commitment, load dispatch for generators and so on, depend on the values obtained from load forecasters [1].
Due to the deregulation of the energy market and new free-competition policies, the number of stakeholders involved in the power system has increased markedly. Moreover, traditional energy storage systems are not able to store the overproduction of energy generated at a specific moment if reversible hydropower plants are not taken into account. For this reason, it is necessary to balance the power generation and energy demand curves so that frequency or voltage issues are avoided and power system stability is guaranteed. Therefore, large power system operators and microgrid control algorithms need reliable and accurate forecasts for their demand response programs, keeping in mind that uncertainty can be reduced but not entirely removed [2].
Microgrids can be defined as a small-scale replication of the traditional power system, in which distributed energy generators, renewable and/or non-renewable resources, storage technologies and loads are integrated. In addition, they can be connected to or islanded from the main grid, depending on policies and the interests of the microgrid’s owner and the power system operator. Microgrids can provide certain benefits, such as fewer distribution and transmission power losses, lower dependence on volatile energy prices, the use of renewable resources, and so on [3]. Because of these advantages, the number of microgrids has increased and they have been applied in different environments, such as support for traditional networks, university campuses, islanded regions and military command centres.
The development of microgrids makes it necessary to improve current control algorithms by introducing forecasters so that microgrids can operate under optimal conditions and maximise their efficiency. In particular, obtaining accurate energy demand forecasts is one of the key challenges that researchers are trying to overcome using different techniques [4,5,6,7] to develop better control strategies. Nevertheless, predicting energy demand in microgrids is usually more complex than in conventional grids because the load curves of a microgrid are much more volatile than those of a traditional power system [8]. Although the forecaster proposed in this paper has been developed for microgrids, it can also be used for larger power systems owing to their lower volatility.
A review of the available literature reveals that researchers are setting their forecasters for different prediction horizons depending on what information is needed to make a good decision. In terms of prediction horizons, forecasters are classified into four different categories:
Very Short-Term Forecast (VSTF): The forecaster makes predictions for a few minutes ahead, and the prognosticated values are given to the control unit for real-time dispatch. This type of forecasting is used to respond quickly to intra-day energy demand fluctuations [9,10].
Short-Term Forecast (STF): The forecaster makes predictions from a few hours to days ahead, and the results are used for a wide range of decisions related to unit commitment, economic dispatch and power system operation [11,12]. Currently, VSTF and STF have become more relevant due to the need for accurate forecasters, and thus in the last few years many studies have addressed different forecasters in this area.
Medium-Term Forecast: In this case, the forecaster gives information from a few hours to a few weeks ahead. The obtained predictions provide information about weekly fluctuations, and this information is mainly used for scheduling maintenance for the power system’s stakeholders, such as power plants, transmission and distribution lines, transformers, and so on [13,14].
Long-Term Forecast: The forecaster provides predictions from a few weeks to months ahead, and the given information is commonly used for power assessment or to analyse the need for new power lines [15,16].
This paper presents a very short-term load demand forecaster for implementation in a microgrid. Although several methods are proposed in the literature [9,10,17,18], this forecaster is based on an Artificial Neural Network (ANN), specifically a layer recurrent neural network. The forecaster is able to predict the energy demand that will be consumed by the loads 15 min ahead with sufficient accuracy. The predicted values obtained through the forecaster are given to the microgrid’s control unit so it can make better decisions and improve the system’s overall efficiency. Moreover, given the high standard of accuracy required of an energy demand forecaster implemented in a microgrid, the forecaster can also be used in larger electric power systems, providing information to the power system operator; the larger a power system is, the lower its energy demand volatility, due to the large number of loads.
An iterative process was used to design the final structure of the forecaster, and this paper describes the most relevant steps and the rationale for those steps. The key contributions of this paper can be summarised as follows:
A VSTF (15 min ahead) for load demand forecasting is presented for implementation in a microgrid. The forecaster is based on a layer recurrent neural network which takes into account the following input parameters to predict the future energy demand: time, season and the energy demand for the previous 24 h.
The calculated error metrics used to analyse the difference between predicted and consumed energy demonstrate that the forecaster has sufficient accuracy. When the accuracy of the forecaster is compared against the available literature, the developed forecaster provides a slight improvement. Moreover, the architecture of this tool and the steps taken to develop it are simpler than those of forecasters presented in the literature.
Because research in the field of energy demand forecasting has focused mainly on improving methods, the determination of the optimal input vector and the optimal database length has largely been ignored. In this paper, a sensitivity analysis was performed, examining different forecasters by varying these parameters to obtain the optimal ones.
3. Forecaster Development and Parameter Selection
This section presents the steps taken to develop our energy demand forecaster. To set the RNN-related parameters, such as the input variables, the length of the database and the number of neurons, a methodology developed for a similar RNN-based solar irradiation forecaster [27] was used. However, some steps were changed, and the final architecture of the forecaster is different.
3.1. Input Parameter Selection
As explained in Section 2.3.3, it was necessary to select the variables that truly characterise the process as input parameters. Firstly, because the parameter we wanted to forecast was energy demand, past values of this parameter were used as inputs to the forecaster. In this case, the energy demand values for the previous 24 h, recorded at 15-min intervals, were used. Secondly, from among the different meteorological variables, we analysed whether temperature could be a parameter that characterises the process. To examine whether there is a true relationship between temperature and energy demand, a Pearson correlation test was performed. Figure 3 shows the results obtained through this test.
Figure 3 shows the correlation analysis between the power demanded by a building and the temperature measured at a meteorological station near the building. Each dot in the graph of Figure 3 represents the Pearson correlation coefficient for a sample day on which the relationship between power demand and temperature was analysed. Temperature and power demand measurements for these sample days were taken from the different seasons of a two-year database covering 2016–2017.
As explained above, some authors [10,35] argue that meteorological variables such as temperature can be omitted in VSTF of power demand. However, from Figure 3, it can be concluded that there is a strong relationship between energy demand and temperature. Therefore, temperature was used in some forecasters as an input in order to analyse whether it would be a relevant input parameter in the final forecaster.
3.1.1. Power Demand Evolution
A building’s power demand is strongly related to the activities carried out inside it and the weather conditions [36]. Although the power curve for a building’s energy demand is more or less the same throughout the year, it is also affected by other phenomena such as the season and time of day [37]. Therefore, it was concluded that the season and the time of day, in 15-min intervals, should be analysed as possible input parameters.
3.1.2. Proposed Forecasters
As there are different combinations of the selected input parameters, and taking into account that there is no agreement on the length of the database for the training step, the forecasters proposed in Table 2 were tested to determine which was most accurate. Table 2 shows the parameters selected for each forecaster and the length of the database used.
3.2. Selecting the Optimal Forecaster
After the input parameters were combined, six possible forecasters were proposed. For options 1 and 2, 96 values were used in the training input array, X. These 96 values were related to the power demand of the building under study for the previous 24 h, in 15-min intervals. For options 3 and 4, 98 values were used in the input array: the season, the hour of the day at which the forecast was to be made, and the 96 values of the power demand. Finally, for options 5 and 6, the input array had 194 values: the season, the hour, the 96 values related to the power demand and the 96 values related to the temperature. All forecasters had a single target output, T: the power demand.
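The three input-vector layouts described above can be sketched as follows. This is a minimal illustration; the helper name and the random placeholder data are our own, not the study’s implementation:

```python
import numpy as np

def build_sample(demand_24h, season=None, hour=None, temperature_24h=None):
    """Assemble one training input vector X for the six proposed options.

    demand_24h:      96 power-demand values (previous 24 h, 15-min steps)
    season, hour:    optional scalars prepended for options 3-6
    temperature_24h: optional 96 temperature values appended for options 5-6
    """
    x = []
    if season is not None and hour is not None:
        x += [season, hour]            # 2 extra values -> 98 or 194 total
    x += list(demand_24h)              # 96 demand values
    if temperature_24h is not None:
        x += list(temperature_24h)     # 96 temperature values
    return np.asarray(x)

demand = np.random.rand(96)            # placeholder demand history
temp = np.random.rand(96)              # placeholder temperature history
print(build_sample(demand).size)                     # options 1/2: 96
print(build_sample(demand, season=1, hour=10).size)  # options 3/4: 98
print(build_sample(demand, 1, 10, temp).size)        # options 5/6: 194
```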
Moreover, to select the optimal structure of each forecaster, data from January 2016 to April 2016 and from January 2017 to April 2017 were used as an approximation, to reduce each forecaster’s training computation time, taking into account that the longer the database, the longer the training time. Once the optimal structures were defined, the databases for the whole of 2016 and 2017 were applied to develop the final forecasters. For the options in Table 2 that use a single-year database, the structure was optimised with data from 2017; for the options that use a two-year database, the data were from 2016 and 2017. The optimal structure for each forecaster was selected by examining the RMSE of the forecasts.
Because an RNN had been chosen, it was necessary first to select the delay and then the number of neurons. To select the delay, the delay parameter was varied while the number of neurons was held constant. Once the delay had been fixed, it was held constant while the number of neurons was varied.
Two ANNs designed with the same structure and trained for the same number of epochs will not yield the same forecasted values, because the networks’ weights are randomly initialised before training. Therefore, to ensure that the selected architecture is the best one, it is necessary to run each candidate structure a certain number of times and average the forecasted values. In our case, the process described in [27] was used to calculate the “RMSE Training Data” and “RMSE Validation Data” values shown in Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8. While the “RMSE Training Data” values are obtained by averaging the RMSE values from five repeated tests of the same structure for the January 2016 forecast, the “RMSE Validation Data” values are obtained by averaging the forecasts for January 2018.
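The two-stage structure search and the five-run averaging described above can be sketched roughly as follows. All names and the toy evaluation surface are illustrative assumptions, not the study’s code:

```python
import numpy as np

def avg_rmse(train_fn, n_runs=5):
    """Train the same architecture n_runs times and average the RMSE,
    smoothing out the run-to-run variability of ANN training."""
    return float(np.mean([train_fn() for _ in range(n_runs)]))

def select_structure(candidate_delays, candidate_neurons, evaluate):
    """Two-stage search: sweep the delay with the neuron count fixed,
    then sweep the neuron count with the best delay fixed.
    `evaluate(delay, neurons)` should return an averaged validation RMSE."""
    base_neurons = candidate_neurons[0]
    best_delay = min(candidate_delays,
                     key=lambda d: evaluate(d, base_neurons))
    best_neurons = min(candidate_neurons,
                       key=lambda n: evaluate(best_delay, n))
    return best_delay, best_neurons

# Toy stand-in for training: RMSE surface minimised at delay 3, 10 neurons
def fake_eval(delay, neurons):
    return (delay - 3) ** 2 + 0.1 * (neurons - 10) ** 2

print(select_structure([1, 2, 3, 4], [5, 10, 15, 20], fake_eval))  # (3, 10)
```

Note that this sequential search is cheaper than a full grid search but assumes the two parameters are only weakly coupled.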
To summarise, Table 9 contains the optimal structure for each option described in Table 2 after each forecaster’s sensitivity analysis was done (see Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8). In addition, other relevant parameters, such as the number of outputs or layers, were chosen for each forecaster without performing a sensitivity analysis. Thus, all developed forecasters have a single output, which provides the demanded energy 15 min ahead, and a three-layer structure (input, hidden and output).
Finally, a sensitivity analysis was performed in order to choose the optimal training time for each forecaster. This test relies on examining the evolution of the accuracy of each option as the epochs of the learning step increase. The database used for these tests covers January 2016–December 2017 or January 2017–December 2017, depending on the length of the database required by each option. To analyse the accuracy of these tests, the average percentage error for the forecasted period covering January 2017–April 2017 was calculated. Because each test was repeated five times, the values presented in Table 10 are an average of the results obtained for those five tests.
An analysis of the results in Table 10 revealed that the relationship between the forecasters’ improvement in accuracy and the number of training epochs resembled an exponential function. For this reason, little improvement in the average error was expected from increasing the epochs once the elbow point of the exponential had been reached. In addition, it can be seen how, in certain cases, such as options 2 and 5, the error increased if the number of epochs continued to increase. Hence, Table 11 presents the optimal epoch number chosen for each option when the final RNNs were trained on their respective whole databases.
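The elbow-based choice of training epochs can be approximated programmatically. This is a hedged sketch with an invented stopping threshold and a toy error curve, not the procedure actually used in the study:

```python
def elbow_epoch(errors_by_epoch, min_gain=0.01):
    """Pick the first epoch count after which the accuracy gain falls
    below `min_gain` (in error-percentage points), approximating the
    elbow of the exponential-looking error curve.

    errors_by_epoch: list of (epochs, avg_error_pct), sorted by epochs.
    """
    for (e0, err0), (e1, err1) in zip(errors_by_epoch, errors_by_epoch[1:]):
        if err0 - err1 < min_gain:   # improvement stalled (or worsened)
            return e0
    return errors_by_epoch[-1][0]

# Toy error curve flattening out (and slightly worsening) after 300 epochs
curve = [(100, 0.40), (200, 0.25), (300, 0.17), (400, 0.165), (500, 0.18)]
print(elbow_epoch(curve))  # 300
```

The negative-gain case also triggers the stop, which matches the observation above that some options worsened when training continued.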
4. Results and Discussion
Once the architecture and the training epochs for each forecaster had been determined, their accuracy needed to be analysed. To select the best forecaster from among the different choices, each forecaster predicted the power demand from January 2018 to August 2018. For these tests, data outside the database used in the learning steps were applied, so that the real accuracy of the developed forecasters could be examined.
Figure 4 shows the demand curve of the building under study prior to analysing the results, in order to give an idea of the order of magnitude of the demanded energy. Moreover, Figure 5, Figure 6 and Figure 7 compare the forecasters’ computed error metrics over the analysed period.
With regard to the proposed choices, three forecasters (Options 3, 4 and 6) slightly outperformed the others when the error metrics were analysed. Because the error metrics of these forecasters were quite similar, a deeper analysis was needed to select the optimal forecaster. For this new analysis, the percentage difference between the forecasted and real power demand was analysed for each sample day in the period January–August 2018 for forecasters 3, 4 and 6. While Figure 8a presents the forecasted and real demanded power curves obtained on 31 August 2018 by forecaster 3, Figure 8b shows the accumulated energy for the same sample day. On this sample day, the difference between the actual and forecast demand is 0.14%: the real energy demand was 62.00 MWh and the predicted demand was 62.09 MWh.
Moreover, the pie charts in Figure 9, Figure 10 and Figure 11 present the results of the proposed test to select the optimal forecaster from among the available choices.
An examination of the pie charts for the different options led to the conclusion that the best architecture for the forecaster is option 3. While option 3 has an average percentage error of 0.16%, options 4 and 6 have average errors of 0.19% and 0.22%, respectively.
Table 12 summarises the key parameters of the forecaster that was selected for this study.
To compare the results obtained by the forecaster developed in this study with the forecasters proposed in literature, it was necessary to check not only whether the prediction horizon time was similar but also whether the power demand values were on the same order of magnitude. The fact that small variations in power demand would have different effects on the error metrics if the analysed system demanded MW, kW or W also had to be taken into account.
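For reference, the two error metrics used in these comparisons, MAE (scale-dependent) and MAPE (scale-free), can be computed as follows; the numeric example is illustrative only:

```python
import numpy as np

def mae(actual, predicted):
    """Mean Absolute Error, in the units of the data (here MW)."""
    return float(np.mean(np.abs(np.asarray(actual) - np.asarray(predicted))))

def mape(actual, predicted):
    """Mean Absolute Percentage Error: scale-free, so it allows
    forecasters over different power ranges to be compared."""
    a, p = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.mean(np.abs((a - p) / a)) * 100)

# Illustrative MW-scale demand: the same 0.1 MW absolute error yields a
# much larger MAPE here than it would for a GW-scale system.
actual = np.array([4.0, 5.0, 3.5])
predicted = np.array([4.1, 4.9, 3.6])
print(round(mae(actual, predicted), 3))   # 0.1 (MW)
print(round(mape(actual, predicted), 2))  # 2.45 (%)
```

This scale dependence of the MAE is why the comparisons below only contrast forecasters whose power demand ranges are of the same order of magnitude.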
De Andrade et al. [8] proposed a nonlinear autoregressive model with exogenous input (NARX) neural network for predicting power demand in two cities from 5 to 25 min ahead, in steps of 5 min. As the forecaster developed in this study predicts power demand 15 min ahead, the error metrics for both cities at this horizon were compared with the error metrics presented in this study. Moreover, while the data given in [8] for City I and City II had power demand ranges of 3–9 MW and 5–15 MW, respectively, the building analysed in this study had a power demand range of 2.25–5.75 MW. Therefore, the order of magnitude was the same and the error metrics of both forecasters could be compared.
If the MAPE of both forecasters is compared, City I and City II have MAPEs of 2.60% and 1.57%, respectively, while the forecaster developed in this study has a MAPE of 1.61%. Given that the power range of the building analysed here was lower than that of the cities analysed in [8], the accuracy of the forecaster developed in this study was expected to be worse, since slight and sudden changes in power demand would have a bigger effect on the building analysed here. However, the predictions made by the forecaster developed in this study exhibited slightly greater accuracy than those made by the forecaster in [8].
De Andrade et al. [25] proposed a fuzzy neural network for predicting power demand at four different points 5 min ahead. It must be taken into account that the forecast horizon in [25] is shorter than that of the forecaster developed in this study, so the MAPE was expected to be bigger in this study. The power demand ranges analysed in [25] were 11–14 MW, 5–11 MW, 1–8 MW and 6–16 MW, and the MAPE errors were 0.84%, 1.45%, 0.47% and 1.88%. Although it seems that the forecaster proposed in [25] slightly outperforms the forecaster proposed in this study, it should be highlighted that the forecaster presented in this article predicts 15 min ahead with a degree of accuracy similar to that which the forecaster proposed in [25] achieves for 5 min ahead. Therefore, it can be concluded that the proposed forecaster is comparable to the forecaster proposed in [25], since the MAPE is similar and the prediction horizon is longer in this study.
Rana et al. [10] proposed a wavelet neural network (WNN) for forecasting power demand in the range 6000–9250 MW from 5 to 60 min ahead, in 5-min time steps. WNNs combine wavelet theory and neural networks: wavelet theory is used to decompose the input vector data into high- and low-frequency components before they are applied to an ANN. The error metrics provided by [10] for a 15-min-ahead horizon are an MAE of 39.95 MW and a MAPE of 0.45%, whereas the error metrics in this study are an MAE of 0.05 MW and a MAPE of 1.61%. Although the forecast horizon is the same, the order of magnitude of the power demand range is much higher in [10] than in this study. For this reason, the MAE is better in this study than in [10], whereas for the MAPE, [10] obtains better results than this study. To summarise, while the MAE improves from 39.95 MW to 0.05 MW, an improvement of 99%, the MAPE worsens from 0.45% to 1.61%, a worsening of 72%. As explained above, the higher the order of magnitude of the power range, the higher the MAE and the lower the MAPE tend to be. Nevertheless, the MAE improves more than the MAPE worsens, so it can be concluded that the forecaster presented in this study is comparable to the forecaster proposed in [10].
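As a sanity check on the figures quoted above, the 99% and 72% relative changes can be reproduced; note that the worsening appears to be computed relative to the new MAPE value:

```python
# Relative change of the MAE (improvement) and MAPE (worsening) reported
# when moving from the WNN forecaster [10] to the one in this study.
mae_ref, mae_new = 39.95, 0.05        # MW
mape_ref, mape_new = 0.45, 1.61       # %

mae_improvement = (mae_ref - mae_new) / mae_ref * 100
mape_worsening = (mape_new - mape_ref) / mape_new * 100

print(round(mae_improvement, 1))      # 99.9, quoted as 99% in the text
print(round(mape_worsening, 1))       # 72.0
```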
5. Conclusions
This study presents a forecaster that is able to predict power demand in the very short term, specifically 15 min ahead. The main conclusions drawn from this study are explained below.
The study presents a sensitivity analysis in which not only the optimal input vector but also the length of the database was examined. It was found that the optimal architecture for the forecaster was a layer recurrent neural network with ten neurons and a 1:3 delay ratio in the hidden layer. In addition, while the pie chart for the third forecaster demonstrates that 79% of the errors were between 0 and 0.25%, its average percentage error for the validation step (0.16%) was the lowest of the different options analysed.
Although the forecasters’ accuracy was validated with real data, the steps explained in Section 3.2 had to be taken to ensure that the optimal number of neurons and delay ratio were selected. Nevertheless, it was not necessary to repeat the process of selecting from among the different possible forecasters, because the results of forecaster 3 outperformed the predictions made by the other options.
The analysis of the results provided by each option demonstrated that the season and hour of the day were key parameters in predicting power demand. In addition, after analysing the results, it was concluded that a database for a single year provided better results than a two-year database. Unexpectedly, if temperature was included as a parameter in the input vector, the forecaster’s error rate increased. This happened because the temperature changes were more volatile than the power demand changes, thus causing the uncertainty of the forecaster to increase.
The error metrics presented in this article were compared with the available literature and it was concluded that there was a slight improvement when the prediction horizon was 15 min and the power demand range was similar to or higher than the building examined in this study.
Since microgrids and smart grids are equipped with smart meters that record energy demand in real time, the methodology proposed in this research can be applied to develop other forecasters that predict the power demand 15 min ahead with sufficient accuracy and send this information to the central control unit. This will allow the control unit to have more information about the future situation and make better decisions aimed at increasing the efficiency of the whole system.