Article

BiGTA-Net: A Hybrid Deep Learning-Based Electrical Energy Forecasting Model for Building Energy Management Systems

1 Department of ICT Convergence, Soonchunhyang University, Asan 31538, Republic of Korea
2 Department of AI and Big Data, Soonchunhyang University, Asan 31538, Republic of Korea
3 Department of Medical Science, Soonchunhyang University, Asan 31538, Republic of Korea
4 Department of Software, Sejong University, Seoul 05006, Republic of Korea
5 Department of Industrial Security, Chung-Ang University, Seoul 06974, Republic of Korea
* Authors to whom correspondence should be addressed.
Systems 2023, 11(9), 456; https://doi.org/10.3390/systems11090456
Submission received: 27 July 2023 / Revised: 22 August 2023 / Accepted: 1 September 2023 / Published: 2 September 2023

Abstract

The growth of urban areas and the management of energy resources highlight the need for precise short-term load forecasting (STLF) in energy management systems to improve economic gains and reduce peak energy usage. Traditional deep learning models for STLF present challenges in addressing these demands efficiently due to their limitations in modeling complex temporal dependencies and processing large amounts of data. This study presents a groundbreaking hybrid deep learning model, BiGTA-net, which integrates a bi-directional gated recurrent unit (Bi-GRU), a temporal convolutional network (TCN), and an attention mechanism. Designed explicitly for day-ahead 24-point multistep-ahead building electricity consumption forecasting, BiGTA-net undergoes rigorous testing against diverse neural networks and activation functions. Its performance is marked by the lowest mean absolute percentage error (MAPE) of 5.37 and a root mean squared error (RMSE) of 171.3 on an educational building dataset. Furthermore, it exhibits flexibility and competitive accuracy on the Appliances Energy Prediction (AEP) dataset. Compared to traditional deep learning models, BiGTA-net reports a remarkable average improvement of approximately 36.9% in MAPE. This advancement emphasizes the model’s significant contribution to energy management and load forecasting, accentuating the efficacy of the proposed hybrid approach in power system optimizations and smart city energy enhancements.

1. Introduction

The rapid influx of populations into urban areas presents many challenges, ranging from resource constraints to heightened traffic and escalating greenhouse gas (GHG) emissions [1]. Many cities globally are transitioning towards ‘smart cities’ to handle these multifaceted urban issues efficiently [2]. At its core, a smart city aims to enhance its inhabitants’ efficiency, safety, and living standards [3]. For example, smart cities tackle GHG emissions by reducing traffic congestion, optimizing energy usage, and incorporating alternatives, such as electric vehicles, energy storage systems (ESSs), and sustainable energy sources [4]. A significant portion of urban GHG emissions is attributed to building electricity consumption, which powers essential systems and amenities such as heating, domestic hot water (DHW), ventilation, lighting, and various electronic appliances [5]. Thus, advancing energy efficiency in urban buildings, especially through the integration of energy storage and renewable energy sources, is paramount. Recognizing this, many smart city designs have embraced integrated systems such as building energy management systems (BEMSs) to boost the energy efficiency of existing infrastructure [6].
A BEMS is a technology-driven tool that harnesses the capabilities of the Internet of Things (IoT) [7] and big data analytics [8] to specifically regulate and monitor building electricity consumption. Electricity accounts for a substantial portion of a building’s energy profile, powering everything from lighting and heating to advanced electronic appliances. One of the cardinal functions of a BEMS is short-term load forecasting (STLF) for electricity [9]. Accurate STLF is essential as it enables facilities to trade surplus electricity, foster economic benefits, and precisely manage power loads, thereby preventing blackouts by moderating peak electrical demands [10]. However, mastering STLF for electricity consumption is an intricate task. This is because buildings exhibit diverse and complex electricity consumption patterns with a non-linear relationship with several external factors such as weather, occupancy, and time of day [11]. Additionally, the inherent noise in electricity consumption data further muddles the forecasting process, making accurate predictions challenging [12]. Given these complexities, many researchers have turned to artificial intelligence (AI) as a promising approach for building electricity consumption forecasting. AI techniques excel in deciphering the recent trends in electricity consumption and processing the intricate, non-linear interactions between various influencing factors and electricity demand [13].
Recent research underscores the intricate dynamics governing building energy performance. Several factors, both internal, such as building orientation, and external, such as climatic changes, play pivotal roles. These complexities can be unraveled through mathematical modeling, leading to the formulation of more accurate regression models [14]. Delving into the digital realm, machine learning (ML) stands out as a potent tool. With the support of vast datasets and advanced computing, ML offers groundbreaking solutions for predicting energy demands [15]. Its potential is evident throughout a building’s lifecycle, impacting both its design and operation phases. However, the journey to its broad acceptance presents numerous challenges. Two notable hurdles include the necessity for extensive labeled data and concerns regarding model transferability. In response to these challenges, innovative strategies have emerged. One notable approach is the introduction of easy-to-install forecast control systems designed for heating. These systems prove especially beneficial for older structures that necessitate detailed documentation [16]. Such systems not only exemplify technical advancements but also adapt to diverse inputs, considering factors from weather conditions to occupant preferences, ensuring an optimal balance between energy efficiency and occupant comfort.
Building on the promise of ML, as highlighted in recent research, traditional AI techniques, including ML [13] and deep learning (DL) [17], have indeed been extensively employed to develop forecasting models. Several ML approaches have been explored, showcasing innovative methodologies to predict hourly or peak energy consumption. Granderson et al. [18] focused on the versatility of regression models in predicting hourly energy consumption. By emphasizing its applicability in STLF, their model showcased its potential in the broader realm of energy management. Huang et al. [19] presented a multivariate empirical mode decomposition-based model that harmoniously integrated particle swarm optimization (PSO) and support vector regression for day-ahead peak load forecasting. Li et al. [20] pioneered a data-driven strategy for STLF by integrating cluster analysis, Cubist regression models, and PSO, presenting an innovative approach that balanced multiple techniques for improved forecasting. Moon et al. [21] introduced the ranger-based online learning approach, called RABOLA, a testament to adaptive forecasting specially tailored for buildings with intricate power consumption patterns. This model prioritized real-time, multistep-ahead forecasting, demonstrating its potential in dynamic environments.
While traditional ML methods have significantly advanced energy forecasting, the advent of DL, especially convolutional neural networks (CNNs) and recurrent neural networks (RNNs), has opened new horizons. These neural networks have set a precedent for understanding intricate data patterns, paving the way for more sophisticated forecasting methodologies [22]. Compared to traditional ML and mathematical methods, these models stand out due to their learning capabilities and generalization ability [13]. Understanding the characteristics of building electricity consumption data, including time-based [23] and spatial features [24], is essential for efficiently applying these DL models. Despite these sophisticated techniques, the models often deliver unreliable forecasts [25]. They struggle with several challenges, such as short-term memory limitations, overfitting, learning from scratch, and capturing complex variable correlations. Some researchers have investigated hybrid models to overcome these challenges, as single models frequently have difficulties learning time-based and spatial features simultaneously [13].
Building upon the aforementioned advances in forecasting techniques, a multitude of studies, including those mentioned above, have delved into the realm of STLF. These collective efforts spanning several years are comprehensively summarized in Table 1. Taking a leaf from hybrid model designs, Aksan et al. [26] introduced models that combined variational mode decomposition (VMD) with DL models, such as CNN and RNNs. Their models, VMD-CNN-long short-term memory (LSTM) and VMD-CNN-gated recurrent unit (GRU), showcased versatility, adeptly managing seasonal and daily energy consumption variations. Wang et al. [27], in their recent endeavors, proposed a wavelet transform neural network that uniquely integrates time and frequency-domain information for load forecasting. Their model leveraged three cutting-edge wavelet transform techniques, encompassing VMD, empirical mode decomposition (EMD), and empirical wavelet transform (EWT), presenting a comprehensive approach to forecasting. Zhang et al. [28] emphasized the indispensable role of STLF in modern power systems. They introduced a hybrid model that combined EWT with bidirectional LSTM. Moreover, their model integrated the Bayesian hyperparameter optimization algorithm, refining the forecasting process. Saoud et al. [29] ventured into wind speed forecasting and introduced a model that amalgamated the stationary wavelet transform with quaternion-valued neural networks, marking a significant stride in renewable energy forecasting.
Kim et al. [30] seamlessly merged the strengths of RNN and one-dimensional (1D)-CNN for STLF, targeting the refinement of prediction accuracy. They adjusted the hidden state vector values to better reflect prediction times occurring close together, marking a clear evolution in prediction approaches. Jung et al. [31] delved into attention mechanisms (Att) with their Att-GRU model for STLF. Their approach was particularly noteworthy for adeptly managing sudden shifts in power consumption patterns. Zhu et al. [32] showcased an advanced LSTM-based dual-attention model, meticulously considering the myriad of influencing factors and the effects of time nodes on STLF. Liao et al. [33], with their innovative fusion of LSTM and a time pattern attention mechanism, augmented STLF methodologies, emphasizing feature extraction and model versatility. By incorporating external factors, their comprehensive approach improved feature extraction and demonstrated superior performance compared to existing methods. While effective, their model did not capitalize on the strengths of hybrid DL models, such as the GRU and temporal convolutional network (TCN), which can handle both long-term dependencies and varying input sequence lengths [34].
BiGTA-net is introduced as a novel hybrid DL model that seamlessly integrates the strengths of a bi-directional gated recurrent unit (Bi-GRU), a temporal convolutional network (TCN), and an attention mechanism. These components collectively address the persistent challenges encountered in STLF. Conventional DL models often struggle to capture intricate nonlinear dependencies. However, the amalgamation within the proposed model represents an innovative approach for capturing long-term data dependencies and effectively handling diverse input sequences. Moreover, the incorporation of the attention mechanism within BiGTA-net optimizes the weighting of features, thereby enhancing predictive accuracy. This research establishes its unique contribution within the energy management and load forecasting domains, which can be attributed to the following key contributions:
  • BiGTA-net emerges as a pioneering hybrid DL model designed to enhance day-ahead forecasting within power system operation, prioritizing accuracy.
  • The experimental framework employed for testing BiGTA-net’s capabilities is strategically devised, showcasing its adaptability and resilience across different models and configurations.
  • Utilizing data sourced from a range of building types, the approach employed in this study establishes the widespread applicability and adaptability of BiGTA-net across diverse consumption scenarios.
The structure of this paper is outlined as follows: Section 2 elaborates on the configuration of input variables that are crucial to the STLF model and discusses the proposed hybrid deep learning model, BiGTA-net. In Section 3, the performance of the model is thoroughly examined through extensive experimentation. Finally, Section 4 encapsulates the findings and provides an overview of the study.

2. Materials and Methods

This section provides an in-depth exploration of the meticulous processes utilized to structure the datasets, configure the models, and assess their performance. Serving as an initial reference, Figure 1 displays a block schema that visually encapsulates the progression of the approach from raw datasets to performance evaluation. This schematic illustration is essential in providing readers with a comprehensive perspective of the methodological steps, emphasizing critical inputs, outputs, and incorporated innovations.

2.1. Data Preprocessing

This section explains the procedure undertaken to identify crucial input variables necessary for shaping the STLF model. Central to this study is the forecast of the day-ahead hourly electricity load. This forecasting holds immense significance, primarily due to its role as a foundational element in the planning and optimization of power system operations for the upcoming day [35]. These forecasts contribute to the following aspects:
  • Demand Response: An approach centered on adjusting electricity consumption patterns rather than altering the power supply. This method ensures the power system can cater to fluctuating demands without overextending its resources.
  • ESS Scheduling: This entails critical decisions on when to conserve energy in storage systems and when to discharge it. Effective scheduling ensures optimal stored energy utilization, aligning with predicted demand peaks and troughs.
  • Renewable Electricity Production: Anticipating the forthcoming electricity demand facilitates strategic planning for harnessing renewable sources. It ensures renewable sources are optimally utilized, considering their intermittent nature.
The study explores two distinct datasets that represent divergent building types, contributing to a comprehensive understanding of power consumption patterns and enhancing the formulation of the model. The first dataset originates from Sejong University, exemplifying educational institutions [36]. In contrast, the Appliances Energy Prediction (AEP) dataset represents residential buildings [37]. The objective was to enhance the precision of the STLF model by incorporating insights from these datasets, ensuring its adaptability to various electricity consumption scenarios.
Sejong University employed the Power Planner tool, which generates electricity usage statistics, to optimize electricity consumption. These statistics include predicted bills, electricity consumption, and load pattern analysis. Five years’ worth of hourly electricity consumption data, spanning from March 2015 to February 2021, were compiled using this tool. From the collected dataset, approximately 0.006% of time points (equivalent to 275 instances) contained missing values, which were imputed based on prior research on handling missing electricity consumption data. Conversely, the publicly available AEP dataset provides residential electricity consumption data at 5 min intervals. To align with the study’s objective of predicting day-ahead hourly electricity consumption, this dataset was resampled at 1 h intervals.
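For illustration, the resampling step can be sketched in pandas as follows. The file and column names ('energydata_complete.csv', 'date', 'Appliances') are those of the public UCI release; whether the sub-hourly readings should be averaged or summed is not stated here, so the mean is an assumption.

```python
# Sketch of resampling the AEP data to hourly resolution with pandas.
# File and column names follow the public UCI release; aggregating by
# mean (rather than sum) is an assumption not stated in the paper.
import pandas as pd

aep = pd.read_csv("energydata_complete.csv", parse_dates=["date"], index_col="date")
aep_hourly = aep["Appliances"].resample("1h").mean()  # hourly consumption series
print(aep_hourly.head())
```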
Details of the building electricity consumption, including statistical analysis, data collection periods, and building locations, are presented in Table 2, while Figure 2 illustrates the electricity consumption distribution through a histogram. Figure 3 and Figure 4 illustrate boxplots representing the hourly electricity consumption. Figure 3 presents the consumption data segmented by hours for two datasets: the educational building dataset (Figure 3a) and the AEP dataset (Figure 3b). Similarly, Figure 4 provides boxplots of the same consumption data, segmented by days of the week, again for the educational building dataset (Figure 4a) and the AEP dataset (Figure 4b). The minimum and maximum values in Table 2 are omitted due to university privacy concerns. Analysis of Figure 3 revealed a clear distinction in electricity consumption during work hours and non-work hours for both datasets. While the educational building dataset showed a noticeable variation in electricity consumption between weekdays and weekends, the AEP dataset did not show such a clear distinction.

2.1.1. Timestamp Information

The study considered a spectrum of external and internal factors in determining the input variables. Among the external factors, timestamps and weather details held significance. These timestamp details encompass the month, hour, day of the week, and holiday indicators. Such details are crucial as they elucidate diverse electricity consumption patterns within buildings. For example, hourly electricity consumption can vary based on customary working hours, mealtime tendencies, and other factors. Similarly, distinct days of the week and holiday indicators can provide insights into contrasting consumption patterns, particularly when contrasting workdays with weekends.
A significant challenge emerges when considering time-related data: the disparity in representing cyclical time data. Specifically, within the hourly context, the difference between 11 p.m. and midnight, though consecutive hours, is illustrated as a substantial gap of 23 units. To address such disparities and effectively capture the cyclic essence of these variables with their inherent sequential structure, a two-dimensional projection was utilized. Equations (1) through (4) were employed to transition from representing these categorical variables in one-dimensional space to depicting them as continuous variables in two-dimensional space [30]:
Hourx = sin(360°/24 × Hour),
Houry = cos(360°/24 × Hour).
For the day of the week (DOTW) component, considering the ISO 8601 standard where Monday is denoted as one and Sunday as seven, a similar challenge emerges, with a numerical gap of six between Sunday and Monday. This numerical gap can be addressed with the following equations:
DOTWx = sin(360°/7 × DOTW),
DOTWy = cos(360°/7 × DOTW),
where the x and y subscripts in Equations (1) to (4) indicate the two-dimensional coordinates used to represent the cyclical nature of hours and days of the week. The transformation to a two-dimensional space allows for a more natural representation of cyclical time data, reducing potential discontinuities.
Beyond these considerations, the analysis also encompassed the integration of holiday indicators [36]. These indicators, denoting weekends and national holidays, were represented as binary variables: ‘1’ indicated a date falling on either a holiday or a weekend, while ‘0’ indicated a typical weekday. By incorporating these indicators, the aim was to account for the evident influence of holidays and weekends on electricity consumption patterns. Notably, the month within a year significantly affects these patterns. However, due to constraints posed by the AEP dataset, which provides data for only a single year, the incorporation of monthly variations was not feasible. As a result, monthly data were not included in the analysis for the AEP dataset.
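A minimal pandas sketch of these timestamp features is given below. Note that the 360° terms of Equations (1) to (4) become 2π radians for NumPy's trigonometric functions; the DataFrame layout and the holidays collection are illustrative assumptions, not the study's actual data structures.

```python
# Sketch of the timestamp features: cyclical hour/day-of-week encoding per
# Equations (1)-(4) (360 degrees = 2*pi radians) plus the binary holiday flag.
import numpy as np
import pandas as pd

def add_timestamp_features(df: pd.DataFrame, holidays: set) -> pd.DataFrame:
    hour = df.index.hour
    dotw = df.index.dayofweek + 1  # ISO 8601: Monday = 1, ..., Sunday = 7
    df["hour_x"] = np.sin(2 * np.pi * hour / 24)
    df["hour_y"] = np.cos(2 * np.pi * hour / 24)
    df["dotw_x"] = np.sin(2 * np.pi * dotw / 7)
    df["dotw_y"] = np.cos(2 * np.pi * dotw / 7)
    # '1' on weekends and national holidays, '0' on regular weekdays
    is_weekend = dotw >= 6
    is_holiday = np.isin(df.index.date, list(holidays))
    df["holiday"] = (is_weekend | is_holiday).astype(int)
    return df
```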

2.1.2. Climate Data

Climate conditions exert a notable influence on STLF, primarily attributed to their integral role within the operational dynamics of high-energy-consuming devices. This influence extends to heating and cooling systems, whose operational patterns align closely with fluctuations in weather conditions [38]. The AEP dataset encompasses six distinct weather variables: temperature, humidity, wind velocity, atmospheric pressure, visibility, and dew point. Conversely, the Korea Meteorological Administration (KMA) offers a comprehensive collection of weather forecast data for each region in South Korea. These data include a range of variables, such as climate observations, forecasts for rainfall likelihood and quantity, peak and trough daily temperatures, wind metrics, and humidity levels [39]. To heighten the real-world applicability of the method, the primary input variables were selectively chosen as temperature, humidity, and wind velocity. This selection was motivated by two factors: firstly, these variables are present both in the AEP dataset and in KMA’s forecasts. Secondly, their well-documented strong correlation with power consumption patterns supports their significance [30].
The climate data were collected through the KMA’s automated synoptic observing system, maintained by the Seoul Meteorological Observatory. This observatory is located within a mere 10 km of the Sejong University campus. The objective was to contextualize the climatic variables with the environmental conditions of the university’s academic buildings. To bridge the gap between raw climatic data and its tangible influence on electricity consumption—the human perceptual experience of temperature fluctuations—two distinct indices were derived. The temperature–humidity index (THI) [40], colloquially known as the discomfort index, provides insights into the perceived discomfort caused by the summer heat, thereby influencing the use of cooling systems. Conversely, the wind chill temperature (WCT) [41] encapsulates the chilling effect of winter weather, often prompting the activation of heating appliances. These perceptual aspects are formulated mathematically in Equations (5) and (6), respectively, where Temp, Humi, and WS represent temperature, humidity, and wind speed.
THI = (1.8 × Temp + 32) − [(0.55 − 0.0055 × Humi) × (1.8 × Temp − 26)].
Drawing from the feedback loop between temperature, humidity, and the human body’s thermoregulation, Equation (5) for THI has been crafted. Its constants—1.8, 32, 0.55, 0.0055, and 26—are the outcome of rigorous empirical studies that evaluated human discomfort across a spectrum of temperature and humidity gradients [40].
WCT = 13.12 + 0.6215 × Temp − 11.37 × WS^0.16 + 0.3965 × Temp × WS^0.16.
The derivation of Equation (6) for the wind chill temperature (WCT) is grounded in a model that seeks to quantify the perceived decrease in ambient temperature due to wind speed, particularly in colder regions. The constants incorporated within the equation—13.12, 0.6215, 11.37, and 0.3965—as well as the exponent 0.16 trace their origins to comprehensive field experiments conducted across various weather conditions. These experiments were designed to establish a comprehensive model for human tactile perception of cold induced by wind [41]. Taking these considerations into account, the analysis encompassed a set of ten external determinants that were carefully selected as input variables for the model’s training process.
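Transcribed directly from Equations (5) and (6), the two indices can be computed as follows; the wind speed unit is an assumption (the original WCT formulation uses km/h, which the paper does not restate).

```python
# Equations (5) and (6) transcribed into Python. Temperature is in deg C and
# humidity in percent; wind speed is assumed to be in km/h, per the original
# WCT formulation.
def thi(temp: float, humi: float) -> float:
    """Temperature-humidity (discomfort) index, Equation (5)."""
    return (1.8 * temp + 32) - (0.55 - 0.0055 * humi) * (1.8 * temp - 26)

def wct(temp: float, ws: float) -> float:
    """Wind chill temperature, Equation (6)."""
    return 13.12 + 0.6215 * temp - 11.37 * ws**0.16 + 0.3965 * temp * ws**0.16

print(round(thi(30.0, 70.0), 1))   # 81.4: a humid 30 deg C hour, high discomfort
print(round(wct(-5.0, 20.0), 1))   # -11.6: -5 deg C with 20 km/h wind feels colder
```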

2.1.3. Past Power Consumption

Past power consumption data were treated as internal factors, as they capture recent patterns in electricity usage [31]. Data from the same time point one day and one week prior were utilized. Power consumption data from the preceding day could provide insight into the most recent hourly trends, while power consumption data from the preceding week could capture the most recent weekly patterns [21]. Given the potential variation in power usage patterns between regular days and holidays, holiday indicators were also integrated for both types of power consumption [36].
Furthermore, an innovative inclusion was made of a past electricity usage value as an internal factor, effectively capturing the trend in electricity consumption leading up to the prediction time point over a span of one week [36]. To achieve this, two distinct scenarios, illustrated in Figure 5, were considered. In the first scenario, if the prediction time point fell on a regular day, the mean electricity consumption of the preceding seven regular days was computed. In the second scenario, if the prediction time point corresponded to a holiday, the average electricity consumption of the preceding seven holidays was calculated. As a result, five internal factors were incorporated for model training, and a comprehensive list of all input variables and their respective details can be found in Table 3.
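A compact pandas sketch of these five internal factors appears below; the column names and the grouping used to reproduce the two scenarios of Figure 5 are illustrative assumptions.

```python
# Sketch of the five internal factors, assuming an hourly DataFrame `df`
# with illustrative columns 'load' (consumption) and 'holiday' (0/1).
import pandas as pd

df["load_1d"] = df["load"].shift(24)          # same hour, one day earlier
df["load_1w"] = df["load"].shift(24 * 7)      # same hour, one week earlier
df["holiday_1d"] = df["holiday"].shift(24)    # day-type flags of the lagged points
df["holiday_1w"] = df["holiday"].shift(24 * 7)

# Day-type-aware trend (cf. Figure 5): at each hour, the mean load over the
# preceding seven days of the same type (regular day vs. holiday).
df["trend_1w"] = (
    df.groupby([df.index.hour, df["holiday"]])["load"]
      .transform(lambda s: s.shift(1).rolling(7, min_periods=1).mean())
)
```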

2.2. BiGTA-Net Modeling

The BiGTA-net model, illustrated in Figure 6, presents a meticulously crafted hybrid architecture that adeptly merges the advantages of both Bi-GRU and TCN, effectively transcending their respective limitations. The primary objective is to formulate a three-stage prediction model that systematically enhances predictive accuracy by harnessing the inherent strengths of these constituent components. To achieve this objective, a significant attention mechanism is seamlessly integrated to facilitate the harmonious fusion of Bi-GRU and TCN. This orchestrated synergy serves the purpose of constructing a predictive model for building electricity consumption that boasts high accuracy and encompasses multiple stages of prediction refinement. For an in-depth comprehension of the theoretical foundations underpinning Bi-GRU and TCN, readers are referred to Appendix A, which provides comprehensive details. This supplementary resource offers a thorough exploration of the conceptual underpinnings, operational principles, and pertinent prior research pertaining to these two pivotal elements within the model.

2.2.1. Bidirectional Gated Recurrent Unit

The modeling journey commences with the Bi-GRU, an advancement over traditional RNNs designed to excel in processing sequential time-series data. While conventional RNNs are recognized for their capability to recall historical sequences, they have encountered challenges such as gradient vanishing and exploding. To address these challenges, the GRU was introduced, incorporating specialized gating mechanisms to effectively manage long-term data dependencies [42]. Within the architecture, two distinct GRUs—forward and backward GRUs—are integrated to compose the Bi-GRU, enabling a comprehensive analysis of sequence dynamics [43]. Despite its computationally demanding dual-structured design, this two-pronged approach empowers the model to discern intricate temporal patterns. For an in-depth comprehension of the mathematical intricacies underpinning the Bi-GRU’s design, readers are referred to the extensive elaboration in the Keras official documentation [44].

2.2.2. Temporal Convolutional Network

The TCN emerges as a groundbreaking solution tailored explicitly for time-series data processing, offering a countermeasure to challenges encountered by sequential models such as the Bi-GRU. TCN employs causal convolutions at its core, ensuring predictions rely solely on current and past data, preserving the temporal sequence’s integrity [45]. A defining characteristic of TCNs is their adeptness in capturing long-term patterns through dilated convolutions. These convolutions expand the network’s receptive field by introducing fixed steps between neighboring filter taps, enhancing computational efficiency while capturing extended dependencies [46]. The TCN architecture also incorporates residual blocks, addressing the vanishing gradient problem and ensuring stable learning across layers. TCN’s adaptability to varying sequence lengths and seamless integration with Bi-GRU outputs form a hierarchical structure that boosts computational efficiency and learning potential. However, TCN’s lack of inherent consideration for future data points can impact tasks with significant forward-looking dependencies.
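As a worked illustration of the receptive-field arithmetic behind dilated convolutions (the kernel size and dilation rates below are illustrative, not values from the paper):

```python
# Receptive field of a stack of causal convolutions, one per dilation level:
# RF = 1 + sum((kernel_size - 1) * d) over the dilation rates d.
kernel_size = 3
dilations = [1, 2, 4, 8]          # illustrative rates, doubling per layer
rf = 1 + sum((kernel_size - 1) * d for d in dilations)
print(rf)  # 31 past time steps visible to the top layer
```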

2.2.3. Attention Mechanism

The innovation becomes prominent through the introduction of the attention mechanism, a dynamic concept within the realm of deep learning. This mechanism assigns significance or ‘attention’ to specific segments of sequences, ensuring the model captures essential features for precise predictions. Within the context of the BiGTA-net architecture, this concept has been ingeniously adapted, resulting in a distinctive approach that seamlessly integrates Bi-GRU, TCN, and the attention mechanism. The attention mechanism introduced is referred to as the dual-stage self-attention mechanism (DSSAM), situated at the junction of TCN’s output and the subsequent stages of the model [47]. By establishing correlations across various time steps and dimensions, the DSSAM enhances computational efficiency while strategically highlighting informative features.
The role of the attention mechanism is pivotal in refining the output generated by TCN. Instead of treating all features uniformly, it adeptly identifies and amplifies the most relevant and predictive elements. This dynamic allocation of attention ensures that while the Bi-GRU captures temporal dynamics and the TCN captures long-term dependencies, the attention mechanism focuses on crucial features. As a result, the model achieves enhanced predictive capabilities by synergizing the strengths of Bi-GRU, TCN, and the attention mechanism. The approach incorporates the utilization of the scaled exponential linear units (SELU) [48] activation function, a strategic choice made to address challenges linked to long-term dependencies and gradient vanishing. This integration of SELU enhances stability in the learning process and ultimately contributes to more accurate predictions [49].
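To make the three-stage design concrete, the sketch below assembles a BiGTA-net-style network in Keras. It is a minimal reading of Figure 6, not the authors' released code: the layer widths, the number of TCN levels, and the use of Keras's built-in dot-product Attention layer as a stand-in for the dual-stage self-attention mechanism are all assumptions.

```python
# A minimal Keras sketch of a BiGTA-net-style model: Bi-GRU -> TCN-style
# dilated causal convolutions with residual connections -> self-attention ->
# 24-point day-ahead output. Layer sizes and dilation rates are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_bigta_net(seq_len: int, n_features: int, horizon: int = 24) -> models.Model:
    inputs = layers.Input(shape=(seq_len, n_features))
    # Stage 1: Bi-GRU reads the sequence forwards and backwards
    x = layers.Bidirectional(layers.GRU(64, return_sequences=True))(inputs)
    # Stage 2: TCN-style blocks with growing dilation and residual shortcuts
    for rate in (1, 2, 4):
        shortcut = x
        x = layers.Conv1D(128, kernel_size=3, padding="causal",
                          dilation_rate=rate, activation="selu")(x)
        x = layers.Add()([x, shortcut])  # residual connection
    # Stage 3: self-attention re-weights informative time steps
    x = layers.Attention()([x, x])
    x = layers.GlobalAveragePooling1D()(x)
    x = layers.Dense(128, activation="selu")(x)
    outputs = layers.Dense(horizon)(x)  # 24 hourly forecasts
    return models.Model(inputs, outputs)
```

In this sketch, the SELU activation appears in the convolutional and dense layers, mirroring the activation choice described above, and the Add() shortcuts correspond to the TCN's residual blocks.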

3. Results and Discussion

3.1. Evaluation Criteria

To evaluate the predictive capabilities of the forecasting model, a variety of performance metrics were utilized, including mean absolute percentage error (MAPE), root mean square error (RMSE), and mean absolute error (MAE). These metrics hold widespread recognition and offer a robust assessment of prediction accuracy [50].
The MAPE serves as a valuable statistical measure of prediction accuracy, particularly pertinent in the context of trend forecasting. This metric quantifies the error as a percentage, rendering the outcomes intuitively interpretable. While the MAPE may become inflated when actual values approach zero, this circumstance does not apply to the dataset under consideration. The calculation of MAPE is performed using Equation (7).
MAPE = 1/n × ∑|(Yt – Ŷt)/Yt| × 100,
where Yt and Ŷt represent the actual and predicted values, respectively, and n represents the total number of observations.
The RMSE, or the root mean square deviation, aggregates the residuals to provide a single metric of predictive capability. The RMSE, calculated using Equation (8), is the square root of the average squared differences between the forecast values (Ŷt) and the actual values (Yt). For an unbiased estimator, the RMSE equals the standard deviation of the prediction errors.
RMSE = √(1/n × ∑(Yt – Ŷt)²).
The MAE is a statistical measure used to gauge the proximity of predictions or forecasts to the eventual outcomes. This metric is calculated by considering the average of the absolute differences between the predicted and actual values. Equation (9) outlines the calculation for the MAE.
MAE = 1/n × ∑|Yt – Ŷt|.
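For reference, Equations (7) to (9) transcribe directly into NumPy; the function names are merely descriptive:

```python
# Direct NumPy transcription of Equations (7)-(9).
import numpy as np

def mape(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean absolute percentage error, Equation (7)."""
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Root mean square error, Equation (8)."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean absolute error, Equation (9)."""
    return float(np.mean(np.abs(y_true - y_pred)))
```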

3.2. Experimental Design

The experiments were conducted in an environment that utilized Python (v.3.8) [51], complemented by machine learning libraries such as scikit-learn (v.1.2.1) [52] and Keras (v.2.9.0) [44,53]. The computational resources included an 11th Gen Intel(R) Core(TM) i9-11900KF CPU operating at 3.50 GHz, an NVIDIA GeForce RTX 3070 GPU, and 64.0 GB of RAM. The proposed BiGTA-net model was evaluated against various well-regarded RNN models, such as LSTM, Bi-LSTM, GRU, and GRU-TCN. The hyperparameters were standardized across all models to ensure a fair and balanced comparison. This approach minimized potential bias in the evaluation results due to model-specific preferences or advantageous parameter settings. The common set of hyperparameters for all the models included 25 training epochs, a batch size of 24, and the Adam optimizer with a learning rate of 0.001 [54]. The MAE was chosen as the key metric for evaluating the performance of the models, providing a standardized measure of comparison.
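Under these settings, training a model such as the Section 2.2 sketch would look like the following; the data arrays and the input window length are placeholders, while the epoch count, batch size, optimizer, learning rate, and loss follow the stated values.

```python
# Training configuration matching the stated hyperparameters: 25 epochs,
# batch size 24, Adam at lr=0.001, MAE loss. Continues from the
# build_bigta_net sketch in Section 2.2; data arrays are placeholders.
import numpy as np
import tensorflow as tf

X_train = np.random.rand(512, 24, 15).astype("float32")  # placeholder inputs
y_train = np.random.rand(512, 24).astype("float32")      # placeholder 24 h targets

model = build_bigta_net(seq_len=24, n_features=15)  # 10 external + 5 internal variables
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), loss="mae")
model.fit(X_train, y_train, epochs=25, batch_size=24)
```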
The training dataset for the BiGTA-net model was constructed by utilizing hourly electrical consumption data from 1 to 7 March 2015, for the educational building dataset, and from 11 to 17 January 2016, for the AEP dataset. In the case of the educational building dataset, the data spanning from 8 March 2015, to 28 February 2019, was allocated for training, while the subsequent period, 1 March 2019, to 29 February 2020, was designated as the testing set. For the AEP dataset, data ranging from 18 January to 30 April 2016, was employed for training purposes, with the timeframe between 1 and 27 May 2016, reserved for testing. The dataset was partitioned into training (in-sample) and testing (out-of-sample) subsets, maintaining an 80:20 ratio. Prior to the division, min–max scaling was applied to the training data, standardizing the raw electricity consumption values within a specific range. This scaling transformation was subsequently extended to the testing data, ensuring uniformity in the range of both training and testing datasets. This process ensured that the original data scale did not influence the model’s performance.
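The leakage-free scaling described above reduces to a few lines with scikit-learn; the feature matrices below are random placeholders.

```python
# Min-max scaling fit on the training split only, then reused on the test
# split so that no test-set statistics leak into training.
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X_train = np.random.rand(100, 15)  # placeholder feature matrices
X_test = np.random.rand(25, 15)

scaler = MinMaxScaler()
X_train_scaled = scaler.fit_transform(X_train)  # min/max learned on training data
X_test_scaled = scaler.transform(X_test)        # same min/max reused on test data
```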

3.3. Experimental Results

In the experimental outcomes, the performances of diverse model configurations were initially investigated, as presented in Table 4. Specifically, a total of 16 models with varying network architectures, activation functions, and the incorporation of the attention mechanism were evaluated. Among the specifics detailed in Table 4, the prominent focus is on the Bi-GRU-TCN-I model, alternatively known as BiGTA-net, which was proposed in this study. This particular model embraced the Bi-GRU-TCN architecture, utilized the SELU activation function, and integrated the attention mechanism, setting it apart from the remaining models.
The performance of these models was evaluated using three key metrics: MAPE, RMSE, and MAE, as presented in Table 5, Table 6, Table 7, Table 8, Table 9 and Table 10. The experimental results were divided into two main categories, results obtained from the educational building dataset and the AEP dataset.
In the context of the educational building dataset, the proposed model (Bi-GRU-TCN-I) consistently showcased superior performance in comparison to alternative model configurations. As illustrated in Table 5, the proposed model achieved the lowest MAPE, underscoring its heightened predictive accuracy. Strong corroboration for its superior performance is also substantiated by the findings presented in Table 6 and Table 7, where the proposed model demonstrates the least RMSE and MAE values, respectively, signifying a close alignment between the model’s predictions and actual values.
  • Table 5 demonstrates that among all models, the proposed Bi-GRU-TCN-I model boasts the best MAPE performance with an average of 5.37. The Bi-GRU-TCN-II model follows closely with a MAPE of 5.39. When exploring the performance of LSTM-based models, LSTM-TCN-III emerges as a top contender with a MAPE of 5.59, which, although commendable, is still higher than the leading Bi-GRU-TCN-I model. The Bi-LSTM-TCN results, on the other hand, range from 6.90 to 8.53, further emphasizing the efficacy of the BiGTA-net. Traditional GRU-TCN models displayed a wider variation in MAPE values, from 5.68 to 6.50.
  • In Table 6, when assessing RMSE values, the proposed BiGTA-net model (Bi-GRU-TCN-I) again leads the pack with a score of 171.3. This result is significantly better than all other models, with the closest competitor being Bi-GRU-TCN-II at 169.5 RMSE. Among the LSTM variants, LSTM-TCN-I holds the most promise, with an RMSE of 134.8. However, the Bi-GRU models are generally superior in predicting values closer to the actual values, underscoring their robustness.
  • Table 7, although not provided in its entirety, indicates the reliability of BiGTA-net with the lowest MAE of 122.0. Bi-GRU-TCN-II closely follows with an MAE of 122.7. As observed from previous results, other models, potentially including the LSTM and Bi-LSTM series, reported higher MAE scores, ranging between 131.6 and 153.7.
In the context of the AEP dataset, as demonstrated in Table 8, Table 9 and Table 10, the proposed model (Bi-GRU-TCN-I) showcased competitive performance. While marginal differences were observed among the various model configurations, the Bi-GRU-TCN-I model consistently outperformed the alternative models in terms of MAPE, RMSE, and MAE metrics.
  • In Table 8, which presents the MAPE comparison for the AEP dataset, the proposed model, Bi-GRU-TCN-I, still manifests the lowest average MAPE of 26.77. This result emphasizes its unparalleled predictive accuracy among all tested models. Delving into the LSTM family, the LSTM-TCN-I achieved an average MAPE of 28.42, while the Bi-LSTM-TCN-I recorded an average MAPE of 29.12. It is notable that while these models exhibit competitive performance, neither managed to outperform the BiGTA-net.
  • Table 9, focused on the RMSE comparison, depicts the Bi-GRU-TCN-I model registering an RMSE of 375.9 on step 1. This performance, when averaged, proves to be competitive with the other models, especially when considering the range for all the models, which goes as low as 369.1 for Bi-GRU-TCN-III and as high as 622.2 for Bi-LSTM-TCN-III. Looking into the LSTM family, LSTM-TCN-I kicked off with an RMSE of 473.6, whereas Bi-LSTM-TCN-I began with 431.1. This further accentuates the superiority of the BiGTA-net in terms of prediction accuracy.
  • Finally, in Table 10, where MAE values are compared, the Bi-GRU-TCN-I model still shines with an MAE of 198.4. This consistently low error rate across different evaluation metrics underscores the robustness of the BiGTA-net across various datasets.
In summary, the proposed model, Bi-GRU-TCN-I, designated as BiGTA-net, exhibited exceptional performance across both datasets, affirming its efficacy and dependability in precise electricity consumption forecasting. These outcomes serve to substantiate the benefits derived from the incorporation of the Bi-GRU-TCN architecture, utilization of the SELU activation function, and integration of the attention mechanism, thereby validating the chosen design approaches.
To evaluate the performance of the BiGTA-net model, a comprehensive comparative analysis was conducted. This analysis included models such as Att-LSTM, Att-Bi-LSTM, Att-GRU, and Att-Bi-GRU, all of which integrate the attention mechanism, a characteristic known for enhancing prediction capabilities. Furthermore, this evaluation also incorporated several state-of-the-art methodologies introduced over the past three years, offering a robust understanding of BiGTA-net’s performance relative to contemporary models:
  • Park and Hwang [55] introduced the LGBM-S2S-Att-Bi-LSTM, a two-stage methodology that merges the functionalities of the light gradient boosting machine (LGBM) and sequence-to-sequence Bi-LSTM. By employing LGBM for single-output predictions from recent electricity data, the system transitions to a Bi-LSTM reinforced with an attention mechanism, adeptly addressing multistep-ahead forecasting challenges.
  • Moon et al. [21] presented RABOLA, previously touched upon in the Introduction section. This model is an innovative ranger-based online learning strategy for electricity consumption forecasts in intricate building structures. At its core, RABOLA utilizes ensemble learning strategies, specifically bagging, boosting, and stacking. It employs tools, namely, the random forest, gradient boosting machine, and extreme gradient boosting, for STLF while integrating external variables such as temperature and timestamps for improved accuracy.
  • Khan et al. [56] unveiled the ResCNN-LSTM, a segmented framework targeting STLF. The primary phase is data driven, ensuring data quality and cleanliness. The next phase combines a deep residual CNN with stacked LSTM. This model has shown commendable performance on the Individual Household Electricity Power Consumption (IHEPC) and Pennsylvania, Jersey, and Maryland (PJM) datasets.
  • Khan et al. [57] also introduced the Att-CNN-GRU, blending CNN and GRU and enriching with a self-attention mechanism. This model specializes in analyzing refined electricity consumption data, extracting pivotal features via CNN, and subsequently transitioning the output through GRU layers to grasp the temporal dynamics of the data.
Table 11 elucidates the comparative performance of several attention-incorporated models on the educational building dataset, with the BiGTA-net model’s performance distinctly superior. Specifically, BiGTA-net records a MAPE of 5.37 (±0.44%), RMSE of 171.3 (±15.0 kWh), and MAE of 122.0 (±10.5 kWh). The Att-LSTM model, a unidirectional approach, records a MAPE of 8.38 (±1.57%), RMSE of 242.1 (±48.2 kWh), and MAE of 188.8 (±39.5 kWh). Its bidirectional sibling, the Att-Bi-LSTM, delivers a slightly better MAPE at 7.85 (±0.70%) but comparable RMSE and MAE values. Interestingly, GRU-based models, such as Att-GRU and Att-Bi-GRU, lag with higher error metrics, the former recording a MAPE of 13.42 (±3.39%). The 2023 Att-CNN-GRU model reports a MAPE of 6.35 (±0.23%) and an RMSE of 189.6 (±5.3 kWh), still falling short of BiGTA-net. The RABOLA model from 2022 registers an impressive MAPE of 7.17 (±0.63%), but again, BiGTA-net outperforms it. In essence, these results demonstrate the BiGTA-net’s unparalleled efficiency when measured against traditional unidirectional models and newer advanced techniques.
Table 12 unveils the comparative performance metrics of various attention-incorporated models using the AEP dataset. Distinctly, the BiGTA-net model consistently outperforms its peers, equipped with a sophisticated blend of the attention mechanism and SELU activation within its bidirectional framework. This model impressively returns a MAPE of 26.77 (±0.90%), RMSE of 386.5 (±6.3 Wh), and MAE of 198.4 (±3.2 Wh). The Att-LSTM model offers a MAPE of 30.91 (±1.03%), RMSE of 447.4 (±5.3 Wh), and MAE of 239.8 (±5.3 Wh). Its bidirectional counterpart, the Att-Bi-LSTM, shows a modest enhancement, delivering a MAPE of 30.54 (±2.58%), RMSE of 402.7 (±8.9 Wh), and MAE of 214.0 (±7.3 Wh). The GRU-based models present a close-knit performance. For instance, the Att-GRU model achieves a MAPE of 30.03 (±0.25%), RMSE of 443.5 (±3.9 Wh), and MAE of 234.5 (±2.4 Wh), while the Att-Bi-GRU mirrors this with slightly varied figures. The 2023 model, Att-CNN-GRU, logs a MAPE of 29.94 (±1.73%) and an RMSE of 405.1 (±9.7 Wh), yet its precision remains overshadowed by BiGTA-net. RABOLA, a 2022 entrant, exhibits metrics such as a MAPE of 35.89 (±5.78%), emphasizing the continual advancements in the domain. The disparities in performance underscore BiGTA-net’s superiority. Models that lack the refined structure of BiGTA-net falter in their forecast accuracy, thereby underscoring the merits of the introduced architecture.
The combination of Bi-GRU and TCN, along with the integration of attention mechanisms and the adoption of the SELU activation function, synergistically reinforced BiGTA-net as a robust model. The experimental results consistently demonstrated BiGTA-net’s exceptional performance across diverse datasets and metrics, highlighting the model’s efficacy and flexibility in different forecasting contexts. These results decisively endorsed the effectiveness of the hybrid approach utilized in this study.

3.4. Discussion

To highlight the effectiveness of the BiGTA-net model, rigorous statistical analysis was employed, utilizing both the Wilcoxon signed-rank [58] and the Friedman [59] tests.
  • Wilcoxon Signed-Rank Test: The Wilcoxon signed-rank test [58], a non-parametric counterpart of the paired t-test, is formulated to gauge differences between two paired samples. Mathematically, given two paired sets of observations, x and y, the differences di = yi – xi are computed. Ranks are then assigned to the absolute values of these differences, and subsequently, these ranks are attributed either positive or negative signs depending on the sign of the original difference. The test statistic W is essentially the sum of these signed ranks. Under the null hypothesis, it is assumed that W follows a specific symmetric distribution. If the computed p-value is less than the chosen significance level (often 0.05), there are grounds to reject the null hypothesis, implying a statistically significant difference between the paired samples.
  • Friedman Test: The Friedman test [59] is a non-parametric alternative to the repeated measures ANOVA. At its core, this test ranks each row (block) of data separately. The differences among the columns (treatments) are evaluated using the ranks. This expression is mathematically captured in the following expression, referred to as Equation (10).
    χ² = 12/(N × k × (k + 1)) × ∑Rj² − 3N(k + 1),
    where N is the number of blocks, k is the number of treatments, and Rj is the sum of the ranks for the jth treatment. The observed value of χ² is then compared with the critical value from the χ² distribution with k − 1 degrees of freedom. A brief SciPy illustration of both tests follows this list.
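Both tests are available off the shelf in SciPy; the sketch below substitutes hypothetical error arrays for the per-hour metric values compared in the study.

```python
# Sketch of the two significance tests via SciPy. The arrays are hypothetical
# placeholders for each model's 24 hourly metric values (e.g., MAPE per step).
import numpy as np
from scipy.stats import wilcoxon, friedmanchisquare

rng = np.random.default_rng(0)
bigta_err = rng.normal(5.4, 0.4, 24)   # illustrative hourly MAPEs of BiGTA-net
lstm_err = rng.normal(8.4, 1.5, 24)    # illustrative hourly MAPEs of a competitor
gru_err = rng.normal(13.4, 3.4, 24)

# Wilcoxon signed-rank test: paired comparison of two models
w_stat, w_p = wilcoxon(bigta_err, lstm_err)
print(f"Wilcoxon: W={w_stat:.1f}, p={w_p:.4f}")

# Friedman test: ranks all models within each block (forecast hour)
f_stat, f_p = friedmanchisquare(bigta_err, lstm_err, gru_err)
print(f"Friedman: chi2={f_stat:.1f}, p={f_p:.4f}")
```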
The meticulous validation, as demonstrated in Table 13 and Table 14, underscores the proficiency of BiGTA-net in the context of energy management. To fortify the conclusions drawn from the analyses, the approach was anchored on three crucial metrics: MAPE, RMSE, and MAE. The data were aggregated across all deep learning models, focusing on 24 h forecasts at hourly intervals. Comprehensive results stemming from the Wilcoxon and Friedman tests, each grounded in the metrics, are presented in Table 13 and Table 14. A perusal of the table illustrates the distinct advantage of BiGTA-net, with p-values consistently falling below the 0.05 significance threshold across varied scenarios and metrics.
Delving deeper into the tables, the BiGTA-net consistently outperforms other models in both datasets. The exceptionally low p-values from the Wilcoxon and Friedman tests indicate significant differences between the BiGTA-net and its competitors. In almost every instance, other models were lacking when juxtaposed against the BiGTA-net’s results. This empirical evidence is vital in understanding the superior capabilities of the BiGTA-net in energy forecasting. Furthermore, the fact that the p-values consistently fell below the conventional significance threshold of 0.05 only emphasizes the robustness and reliability of BiGTA-net. The variations in metrics, namely, MAPE, RMSE, and MAE, across Table 13 and Table 14 vividly portray the margin by which BiGTA-net leads in accuracy and precision. The unique architecture and methodology behind BiGTA-net have positioned it as a front-runner in this domain.
In the intricate realm of BEMS, the gravity of data-driven decisions cannot be overstated; they bear a twofold onus of economic viability and environmental stewardship. The need for precise and decipherable modeling is, therefore, undeniably paramount. BiGTA-net, envisaged as an advanced hybrid model, sought to meet these exacting standards. Its unique amalgamation of Bi-GRU and TCN accentuates its proficiency in parsing intricate temporal patterns, which remain at the heart of energy forecasting.
In the complex BEMS landscape, BiGTA-net’s hybrid design brings a distinctive strength in capturing intricate temporal dynamics. However, this prowess has its challenges. Particularly in industrial environments or regions heavily dependent on unpredictable renewable energy sources, the model may find it challenging to adapt to abrupt shifts in energy consumption patterns swiftly. This adaptability issue is further accentuated when considering the sheer volume of data that the energy sector typically handles. Given the influx of granular data from many sensors and IoT devices, BiGTA-net’s intricate architecture could face scalability issues, especially when implemented across vast energy distribution networks or grids. Furthermore, the predictive nature of energy management demands an acute sense of foresight, especially with the increasing reliance on renewable energy sources. In this context, the TCN’s inherent limitations in accounting for prospective data pose challenges, especially when energy matrices constantly change, demanding agile and forward-looking predictions.
Within the multifaceted environment of the BEMS domain, the continuous evolution and refinement of models such as BiGTA-net are essential. One avenue of amplification lies in broadening its scope to account for external determinants. By incorporating influential factors such as climatic fluctuations and scheduled maintenance events directly into the model’s input parameters, BiGTA-net could enhance responsiveness to unpredictable energy consumption variances. Further bolstering its real-time applicability, introducing an adaptive learning mechanism designed to self-tune based on the influx of recent data could ensure that the model remains abreast of the ever-changing energy dynamics. Additionally, enhancing the model’s interpretability is vital in a sector where transparency and clarity are paramount. Integrating principles from the “explainable AI” domain into BiGTA-net can provide a deeper understanding of its decision-making process, enabling stakeholders to discern the rationale behind specific energy consumption predictions and insights.
As the forward trajectory of BiGTA-net within the energy sector is contemplated, several avenues of research come into focus. Foremost is the potential enhancement of the model’s attention mechanism, tailored explicitly to the intricacies of energy consumption dynamics. The model’s ability to discern and emphasize critical energy patterns could be substantially elevated by tailoring attention strategies to highlight domain-specific energy patterns. Furthermore, while BiGTA-net showcases an intricate architecture, the ongoing challenge resides in seamlessly integrating its inherent complexity with optimal predictive accuracy. By addressing this balance, models could be engineered to be more streamlined and suitable for decentralized or modular BEMS frameworks, all while retaining their predictive capabilities. Lastly, a compelling proposition emerges for integrating BiGTA-net’s forecasting prowess with existing BEMS decision-making platforms. Such integration holds the promise of a future where real-time predictive insights seamlessly inform energy management strategies, thereby advancing both energy utilization efficiency and a tangible reduction in waste.
While BiGTA-net has demonstrated commendable forecasting capabilities in its initial stages, a thorough exploration of its limitations in conjunction with potential improvements and future directions can contribute to the enhancement of its role within the BEMS domain. By incorporating these insights, the relevance and adaptability of BiGTA-net can be advanced, thus positioning it as a frontrunner in the continuously evolving energy sector landscape.

4. Conclusions

Our study presents the BiGTA-net, a transformative deep-learning model tailored for urban energy management in smart cities, enhancing the accuracy and efficiency of STLF. This model harmoniously integrates the capabilities of Bi-GRU, TCN, and an attention mechanism, capturing both recurrent and convolutional data patterns effectively. A thorough examination of the BiGTA-net against other models on the educational building dataset showcased its distinct superiority. Specifically, BiGTA-net excelled with a MAPE of 5.37, RMSE of 171.3, and MAE of 122.0. Notably, the closest competitor, Bi-GRU-TCN-II, lagged slightly with metrics such as MAPE of 5.39 and MAE of 122.7. This superiority was mirrored in the AEP dataset, where BiGTA-net again led with a MAPE of 26.77, RMSE of 386.5, and MAE of 198.4. Such consistent outperformance underscores the model’s capability, especially when juxtaposed with other configurations.
Furthermore, the integration of the attention mechanism serves to enhance the performance of BiGTA-net, reinforcing its effectiveness in forecasting tasks. The distinct bidirectional architecture of BiGTA-net demonstrated superior performance, further establishing its supremacy. This performance advantage becomes notably apparent when contrasted with models such as Att-LSTM, which exhibited higher errors across pivotal metrics, highlighting the resilience and dependability of the proposed model. The evident strength of BiGTA-net lies in its innovative amalgamation of Bi-GRU and TCN, harmonized with the attention mechanism and bolstered by the SELU activation function. Its consistent dominance across diverse datasets and metrics robustly validates the efficacy of this hybrid approach.
Despite its promising results, it is important to explore the BiGTA-net’s capabilities further and identify areas for improvement. Its generalizability has yet to be extensively tested beyond the datasets used in this study, which presents a limitation. Future research should apply the model across various consumption domains, such as residential or industrial sectors, and compare its effectiveness with a wider range of advanced machine learning models. By doing so, researchers can further refine the model for specific scenarios and delve deeper into hyperparameter optimizations.

Author Contributions

Conceptualization, D.S. and J.O.; methodology, D.S.; software, D.S. and J.O.; validation, I.J. and J.M.; formal analysis, J.O. and I.J.; investigation, D.S. and J.O.; resources, J.M.; data curation, M.L.; writing—original draft preparation, D.S.; writing—review and editing, J.M. and S.R.; visualization, D.S. and J.M.; supervision, S.R.; project administration, S.R.; funding acquisition, M.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2019M3F2A1073179) and also supported by the Soonchunhyang University Research Fund.

Data Availability Statement

Per MDPI’s data-sharing policies, the manuscript provides detailed dataset information. The supporting educational building dataset associated with this research is available in “Appendix A. Supplementary data” of Reference [36]. The supplementary data can be accessed at https://doi.org/10.1016/j.seta.2022.102888. Moreover, the AEP dataset used in this study is publicly available as the “Appliances Energy Prediction Data Set” in the UCI Machine Learning Repository. It can be directly accessed at https://archive.ics.uci.edu/dataset/374/appliances+energy+prediction (accessed on 15 July 2023). The manuscript promotes transparent and reproducible research, urging readers and future researchers to utilize these datasets while appropriately citing their sources.

Acknowledgments

We would like to express our sincere gratitude to the four reviewers for their insightful and valuable feedback, which has helped us improve our work.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Bi-directional Gated Recurrent Unit (Background): RNNs are well suited to sequential time-series data because they retain and process historical sequences. However, vanishing and exploding gradients have long hindered RNNs, particularly on long sequences. The GRU, an advanced RNN variant, was designed to address these problems: it uses gating mechanisms to manage long-term dependencies more effectively. Conventional GRUs are unidirectional and process sequences only in the forward direction. The Bi-GRU overcomes this constraint by combining two distinct GRUs, one reading the sequence forward and one reading it backward, a bidirectional construction also found in bidirectional RNNs. Prior studies have demonstrated the effectiveness of such bidirectional architectures, especially in settings with high variability and complex causal dynamics. Note, however, that while the Bi-GRU provides a richer view of temporal sequences, its dual structure roughly doubles the parameter count, one GRU for each direction, and therefore requires more computational resources.
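To make this structure concrete, the following is a minimal Keras sketch of a bidirectional GRU layer; the input shape (a 24-step window of 15 features, matching Table 3) and the layer width of 64 units are illustrative assumptions, not the authors' exact configuration.

```python
import tensorflow as tf

# Illustrative input: 24 time steps x 15 features (cf. Table 3).
inputs = tf.keras.Input(shape=(24, 15))

# The Bidirectional wrapper runs one GRU over the sequence forward and a
# second GRU backward, then concatenates both outputs at every time step,
# which is why the parameter count is roughly double that of a single GRU.
x = tf.keras.layers.Bidirectional(
    tf.keras.layers.GRU(64, return_sequences=True)
)(inputs)

model = tf.keras.Model(inputs, x)
model.summary()  # output size per step is 128 = 2 directions x 64 units
```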
Temporal Convolutional Network (Background): The TCN is a convolutional architecture tailored to time-series processing. At its core, the TCN uses causal convolutions, ensuring that the forecast at a given time step depends only on present and past data, which preserves the temporal order. A key attribute of TCNs is their ability to capture long-range patterns through dilated convolutions: dilations insert fixed gaps between adjacent filter taps, expanding the receptive field without adding parameters. This lets TCNs model longer dependencies with high computational efficiency, aided by parallel computation across the time axis. The TCN also incorporates residual blocks, which mitigate the difficulties of training deep networks, such as vanishing gradients, and stabilize learning across layers. A further merit of the TCN is its ability to handle sequences of varying lengths, producing outputs that match the input length. By design, however, the TCN ignores future data points, which may limit it in settings that require a forward-looking perspective. Finally, TCN layers are stackable, and stacking amplifies the network's capacity to capture intricate temporal structure.
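The sketch below shows one dilated causal residual block and a small stack of such blocks; the filter count, kernel size, and dilation schedule (1, 2, 4) are illustrative assumptions rather than the paper's exact TCN settings.

```python
import tensorflow as tf

def tcn_residual_block(x, filters, kernel_size, dilation_rate):
    # Two dilated causal convolutions: "causal" padding guarantees that
    # the output at step t only sees inputs at steps <= t.
    y = tf.keras.layers.Conv1D(filters, kernel_size, padding="causal",
                               dilation_rate=dilation_rate,
                               activation="relu")(x)
    y = tf.keras.layers.Conv1D(filters, kernel_size, padding="causal",
                               dilation_rate=dilation_rate)(y)
    # Residual (skip) connection; a 1x1 convolution matches channel counts.
    if x.shape[-1] != filters:
        x = tf.keras.layers.Conv1D(filters, 1)(x)
    return tf.keras.layers.Activation("relu")(
        tf.keras.layers.Add()([x, y]))

inputs = tf.keras.Input(shape=(24, 15))
x = inputs
for d in (1, 2, 4):  # doubling dilations grow the receptive field cheaply
    x = tcn_residual_block(x, filters=64, kernel_size=3, dilation_rate=d)
model = tf.keras.Model(inputs, x)
```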

References

  1. Shams Esfandabadi, Z.; Ranjbari, M. Exploring Carsharing Diffusion Challenges through Systems Thinking and Causal Loop Diagrams. Systems 2023, 11, 93. [Google Scholar] [CrossRef]
  2. Secinaro, S.; Brescia, V.; Calandra, D.; Biancone, P. Towards a hybrid model for the management of smart city initiatives. Cities 2021, 116, 103278. [Google Scholar] [CrossRef]
  3. Xia, L.; Semirumi, D.T.; Rezaei, R. A Thorough Examination of Smart City Applications: Exploring Challenges and Solutions Throughout the Life Cycle with Emphasis on Safeguarding Citizen Privacy. Sustain. Cities Soc. 2023, 98, 104771. [Google Scholar] [CrossRef]
  4. Ghasemi-Marzbali, A. Fast-charging station for electric vehicles, challenges and issues: A comprehensive review. J. Energy Storage 2022, 49, 104136. [Google Scholar]
  5. Zhang, S.; Ocłoń, P.; Klemeš, J.J.; Michorczyk, P.; Pielichowska, K.; Pielichowski, K. Renewable energy systems for building heating, cooling and electricity production with thermal energy storage. Renew. Sustain. Energy Rev. 2022, 165, 112560. [Google Scholar] [CrossRef]
  6. ur Rehman, U.; Faria, P.; Gomes, L.; Vale, Z. Future of Energy Management Systems in Smart Cities: A Systematic Literature Review. Sustain. Cities Soc. 2023, 96, 104720. [Google Scholar] [CrossRef]
  7. Hashmi, S.A.; Ali, C.F.; Zafar, S. Internet of things and cloud computing-based energy management system for demand side management in smart grid. Int. J. Energy Res. 2021, 45, 1007–1022. [Google Scholar] [CrossRef]
  8. Chin, W.L.; Li, W.; Chen, H.H. Energy big data security threats in IoT-based smart grid communications. IEEE Commun. Mag. 2017, 55, 70–75. [Google Scholar] [CrossRef]
  9. Rathor, S.K.; Saxena, D. Energy management system for smart grid: An overview and key issues. Int. J. Energy Res. 2020, 44, 4067–4109. [Google Scholar] [CrossRef]
  10. Jagait, R.K.; Fekri, M.N.; Grolinger, K.; Mir, S. Load forecasting under concept drift: Online ensemble learning with recurrent neural network and ARIMA. IEEE Access 2021, 9, 98992–99008. [Google Scholar] [CrossRef]
  11. Somu, N.; MR, G.R.; Ramamritham, K. A deep learning framework for building energy consumption forecast. Renew. Sustain. Energy Rev. 2021, 137, 110591. [Google Scholar] [CrossRef]
  12. Imani, M. Electrical load-temperature CNN for residential load forecasting. Energy 2021, 227, 120480. [Google Scholar] [CrossRef]
  13. Al Mamun, A.; Sohel, M.; Mohammad, N.; Sunny, M.S.H.; Dipta, D.R.; Hossain, E. A comprehensive review of the load forecasting techniques using single and hybrid predictive models. IEEE Access 2020, 8, 134911–134939. [Google Scholar] [CrossRef]
  14. Bilous, I.; Deshko, V.; Sukhodub, I. Parametric analysis of external and internal factors influence on building energy performance using non-linear multivariate regression models. J. Build. Eng. 2018, 20, 327–336. [Google Scholar] [CrossRef]
  15. Hong, T.; Wang, Z.; Luo, X.; Zhang, W. State-of-the-art on research and applications of machine learning in the building life cycle. Energy Build. 2020, 212, 109831. [Google Scholar] [CrossRef]
  16. Cholewa, T.; Siuta-Olcha, A.; Smolarz, A.; Muryjas, P.; Wolszczak, P.; Guz, Ł.; Bocian, M.; Balaras, C.A. An easy and widely applicable forecast control for heating systems in existing and new buildings: First field experiences. J. Clean. Prod. 2022, 352, 131605. [Google Scholar] [CrossRef]
  17. Fekri, M.N.; Patel, H.; Grolinger, K.; Sharma, V. Deep learning for load forecasting with smart meter data: Online Adaptive Recurrent Neural Network. Appl. Energy 2021, 282, 116177. [Google Scholar] [CrossRef]
  18. Granderson, J.; Sharma, M.; Crowe, E.; Jump, D.; Fernandes, S.; Touzani, S.; Johnson, D. Assessment of Model-Based peak electric consumption prediction for commercial buildings. Energy Build. 2021, 245, 111031. [Google Scholar] [CrossRef]
  19. Huang, Y.; Hasan, N.; Deng, C.; Bao, Y. Multivariate empirical mode decomposition based hybrid model for day-ahead peak load forecasting. Energy 2022, 239, 122245. [Google Scholar] [CrossRef]
  20. Li, K.; Ma, Z.; Robinson, D.; Lin, W.; Li, Z. A data-driven strategy to forecast next-day electricity usage and peak electricity demand of a building portfolio using cluster analysis, Cubist regression models and Particle Swarm Optimization. J. Clean. Prod. 2020, 273, 123115. [Google Scholar] [CrossRef]
  21. Moon, J.; Park, S.; Rho, S.; Hwang, E. Robust building energy consumption forecasting using an online learning approach with R ranger. J. Build. Eng. 2022, 47, 103851. [Google Scholar] [CrossRef]
  22. Eskandari, H.; Imani, M.; Moghaddam, M.P. Convolutional and recurrent neural network based model for short-term load forecasting. Electr. Power Syst. Res. 2021, 195, 107173. [Google Scholar] [CrossRef]
  23. Sehovac, L.; Grolinger, K. Deep learning for load forecasting: Sequence to sequence recurrent neural networks with attention. IEEE Access 2020, 8, 36411–36426. [Google Scholar] [CrossRef]
  24. Fekri, M.N.; Grolinger, K.; Mir, S. Distributed load forecasting using smart meter data: Federated learning with Recurrent Neural Networks. Int. J. Electr. Power Energy Syst. 2022, 137, 107669. [Google Scholar] [CrossRef]
  25. Hong, T.; Wang, P. Artificial Intelligence for Load Forecasting: History, Illusions, and Opportunities. IEEE Power Energy Mag. 2022, 20, 14–23. [Google Scholar] [CrossRef]
  26. Aksan, F.; Suresh, V.; Janik, P.; Sikorski, T. Load Forecasting for the Laser Metal Processing Industry Using VMD and Hybrid Deep Learning Models. Energies 2023, 16, 5381. [Google Scholar] [CrossRef]
  27. Wang, Y.; Guo, P.; Ma, N.; Liu, G. Robust Wavelet Transform Neural-Network-Based Short-Term Load Forecasting for Power Distribution Networks. Sustainability 2022, 15, 296. [Google Scholar] [CrossRef]
  28. Zhang, X.; Kuenzel, S.; Colombo, N.; Watkins, C. Hybrid short-term load forecasting method based on empirical wavelet transform and bidirectional long short-term memory neural networks. J. Mod. Power Syst. Clean Energy 2022, 10, 1216–1228. [Google Scholar] [CrossRef]
  29. Saoud, L.S.; Al-Marzouqi, H.; Deriche, M. Wind speed forecasting using the stationary wavelet transform and quaternion adaptive-gradient methods. IEEE Access 2021, 9, 127356–127367. [Google Scholar] [CrossRef]
  30. Kim, J.; Moon, J.; Hwang, E.; Kang, P. Recurrent inception convolution neural network for multi short-term load forecasting. Energy Build. 2019, 194, 328–341. [Google Scholar] [CrossRef]
  31. Jung, S.; Moon, J.; Park, S.; Hwang, E. An attention-based multilayer GRU model for multistep-ahead short-term load forecasting. Sensors 2021, 21, 1639. [Google Scholar] [CrossRef] [PubMed]
  32. Zhu, K.; Li, Y.; Mao, W.; Li, F.; Yan, J. LSTM enhanced by dual-attention-based encoder-decoder for daily peak load forecasting. Electr. Power Syst. Res. 2022, 208, 107860. [Google Scholar] [CrossRef]
  33. Liao, W.; Ruan, J.; Xie, Y.; Wang, Q.; Li, J.; Wang, R.; Zhao, J. Deep Learning Time Pattern Attention Mechanism-Based Short-Term Load Forecasting Method. Front. Energy Res. 2023, 11, 1227979. [Google Scholar] [CrossRef]
  34. Cai, C.; Li, Y.; Su, Z.; Zhu, T.; He, Y. Short-Term Electrical Load Forecasting Based on VMD and GRU-TCN Hybrid Network. Appl. Sci. 2022, 12, 6647. [Google Scholar] [CrossRef]
  35. Jang, M.; Choi, H.J.; Lim, C.G.; An, B.; Sim, J. Optimization of ESS scheduling for cost reduction in commercial and industry customers in Korea. Sustainability 2022, 14, 3605. [Google Scholar] [CrossRef]
  36. Moon, J.; Rho, S.; Baik, S.W. Toward explainable electrical load forecasting of buildings: A comparative study of tree-based ensemble methods with Shapley values. Sustain. Energy Technol. Assess. 2022, 54, 102888. [Google Scholar] [CrossRef]
  37. Candanedo, L.M.; Feldheim, V.; Deramaix, D. Data driven prediction models of energy use of appliances in a low-energy house. Energy Build. 2017, 140, 81–97. [Google Scholar] [CrossRef]
  38. Li, L.; Wang, J.; Zhong, X.; Lin, J.; Wu, N.; Zhang, Z.; Meng, C.; Wang, X.; Shah, N.; Brandon, N.; et al. Combined multi-objective optimization and agent-based modeling for a 100% renewable island energy system considering power-to-gas technology and extreme weather conditions. Appl. Energy 2022, 308, 118376. [Google Scholar] [CrossRef]
  39. KMA. Dong-Nae Forecast (Digital Forecast), Korea Meteorological Administration. Available online: https://www.kma.go.kr/eng/weather/forecast/timeseries.jsp (accessed on 15 August 2023).
  40. Park, S.; Jung, S.; Jung, S.; Rho, S.; Hwang, E. Sliding window-based LightGBM model for electric load forecasting using anomaly repair. J. Supercomput. 2021, 77, 12857–12878. [Google Scholar] [CrossRef]
  41. Fahad, M.U.; Arbab, N. Factor affecting short term load forecasting. J. Clean Energy Technol. 2014, 2, 305–309. [Google Scholar] [CrossRef]
  42. Jaihuni, M.; Basak, J.K.; Khan, F.; Okyere, F.G.; Sihalath, T.; Bhujel, A.; Park, J.; Lee, D.H.; Kim, H.T. A novel recurrent neural network approach in forecasting short term solar irradiance. ISA Trans. 2022, 121, 63–74. [Google Scholar] [CrossRef]
  43. Li, X.; Ma, X.; Xiao, F.; Xiao, C.; Wang, F.; Zhang, S. Time-series production forecasting method based on the integration of Bidirectional Gated Recurrent Unit (Bi-GRU) network and Sparrow Search Algorithm (SSA). J. Pet. Sci. Eng. 2022, 208, 109309. [Google Scholar] [CrossRef]
  44. Chollet, F. Deep Learning with Python; Simon and Schuster: New York, NY, USA, 2021. [Google Scholar]
  45. Wang, Y.; Chen, J.; Chen, X.; Zeng, X.; Kong, Y.; Sun, S.; Guo, Y.; Liu, Y. Short-term load forecasting for industrial customers based on TCN-LightGBM. IEEE Trans. Power Syst. 2020, 36, 1984–1997. [Google Scholar] [CrossRef]
  46. Hewage, P.; Behera, A.; Trovati, M.; Pereira, E.; Ghahremani, M.; Palmieri, F.; Liu, Y. Temporal convolutional neural (TCN) network for an effective weather forecasting using time-series data from the local weather station. Soft Comput. 2020, 24, 16453–16482. [Google Scholar] [CrossRef]
  47. Tian, C.; Niu, T.; Wei, W. Developing a wind power forecasting system based on deep learning with attention mechanism. Energy 2022, 257, 124750. [Google Scholar] [CrossRef]
  48. Klambauer, G.; Unterthiner, T.; Mayr, A.; Hochreiter, S. Self-normalizing neural networks. Adv. Neural Inf. Process. Syst. 2017, 30. [Google Scholar]
  49. He, H.; Lu, Z.; Zhang, C.; Wang, Y.; Guo, W.; Zhao, S. A data-driven method for dynamic load forecasting of scraper conveyer based on rough set and multilayered self-normalizing gated recurrent network. Energy Rep. 2021, 7, 1352–1362. [Google Scholar] [CrossRef]
  50. Nti, I.K.; Teimeh, M.; Nyarko-Boateng, O.; Adekoya, A.F. Electricity load forecasting: A systematic review. J. Electr. Syst. Inf. Technol. 2020, 7, 13. [Google Scholar] [CrossRef]
  51. Sanner, M.F. Python: A programming language for software integration and development. J. Mol. Graph. Model. 1999, 17, 57–61. [Google Scholar] [PubMed]
  52. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
  53. Gulli, A.; Pal, S. Deep Learning with Keras; Packt Publishing Ltd.: Birmingham, UK, 2017. [Google Scholar]
  54. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference for Learning Representations, San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
  55. Park, J.; Hwang, E. A two-stage multistep-ahead electricity load forecasting scheme based on LightGBM and attention-BiLSTM. Sensors 2021, 21, 7697. [Google Scholar] [CrossRef]
  56. Khan, Z.A.; Ullah, A.; Haq, I.U.; Hamdy, M.; Mauro, G.M.; Muhammad, K.; Hijji, M.; Baik, S.W. Efficient short-term electricity load forecasting for effective energy management. Sustain. Energy Technol. Assess. 2022, 53, 102337. [Google Scholar] [CrossRef]
  57. Khan, Z.A.; Hussain, T.; Ullah, A.; Ullah, W.; Del Ser, J.; Muhammad, K.; Sajjad, M.; Baik, S.W. Modelling Electricity Consumption During the COVID19 Pandemic: Datasets, Models, Results and a Research Agenda. Energy Build. 2023, 294, 113204. [Google Scholar] [CrossRef]
  58. Fan, G.F.; Peng, L.L.; Hong, W.C. Short-term load forecasting based on empirical wavelet transform and random forest. Electr. Eng. 2022, 104, 4433–4449. [Google Scholar] [CrossRef]
  59. Alawadi, S.; Mera, D.; Fernández-Delgado, M.; Alkhabbas, F.; Olsson, C.M.; Davidsson, P. A comparison of machine learning algorithms for forecasting indoor temperature in smart buildings. Energy Syst. 2020, 13, 689–705. [Google Scholar] [CrossRef]
Figure 1. Schematic flow of data preprocessing and BiGTA-net modeling.
Figure 2. Distribution of hourly electricity consumption of a building. (a) Educational building dataset; (b) Appliances Energy Prediction dataset.
Figure 3. Boxplots for hourly electricity consumption by hours. (a) Educational building dataset; (b) Appliances Energy Prediction dataset.
Figure 4. Boxplots for hourly electricity consumption by days of the week. (a) Educational building dataset; (b) Appliances Energy Prediction dataset.
Figure 5. Average electricity use per hour for each day of the week and holidays.
Figure 6. System architecture of BiGTA-net.
Table 1. Comparative analysis of previous studies and the current research concerning short-term load forecasting.

| Researchers | Model Used | Addresses Long-Term Dependencies | Manages Varying Input Sequence Lengths | Weighs Different Features |
|---|---|---|---|---|
| Granderson et al. [18] | Regression model | No | Partially | No |
| Huang et al. [19] | MEMD-PSO-SVR | Partially | Partially | Partially |
| Li et al. [20] | Cluster analysis, Cubist regression models, PSO | No | No | Partially |
| Moon et al. [21] | RABOLA | Partially | Partially | Partially |
| Aksan et al. [26] | VMD-CNN-GRU and LSTM | Yes | Partially | Partially |
| Wang et al. [27] | Wavelet transformer, LSTM | Yes | No | Yes |
| Zhang et al. [28] | Bi-LSTM, BHO, EWT | Yes | Partially | Yes |
| Kim et al. [30] | RNN, 1D-CNN | Partially | Yes | Partially |
| Jung et al. [31] | Attention-GRU | Yes | No | Partially |
| Zhu et al. [32] | LSTM-based dual-attention model | Partially | Partially | Partially |
| Liao et al. [33] | LSTM, TPA mechanism | Yes | No | Yes |
| BiGTA-net | Bi-GRU, TCN, attention mechanism | Yes | Yes | Yes |
Table 2. Building electricity consumption dataset information.

| Statistics | Educational Building (Unit: kWh) [36] | Appliances Energy Prediction (Unit: Wh) [37] |
|---|---|---|
| Number of samples | 43,848 | 3289 |
| Mean | 2183.70 | 586.18 |
| Standard deviation | 756.41 | 488.98 |
| Median | 1950.84 | 380 |
| Trimmed mean | 2104.24 | 476.53 |
| Median absolute deviation | 708.80 | 163.09 |
| Range | 3793.32 | 3830 |
| Skew | 0.79 | 2.41 |
| Kurtosis | −0.39 | 6.49 |
| Standard error | 3.61 | 8.53 |
| Data collection period | 1 March 2015–29 February 2020 | 11 January 2016–27 May 2016 |
| Building location | Seoul, Republic of Korea | Stambruges, Belgium |
| Public access | No | Yes |
Table 3. Input variables and their information for BiGTA-net modeling.

| Variable Name | Data Type | Data Format | Description |
|---|---|---|---|
| Hourx | Numeric | Timestamp information | Sine value of the hour |
| Houry | Numeric | Timestamp information | Cosine value of the hour |
| DOTWx | Numeric | Timestamp information | Sine value of the day of the week |
| DOTWy | Numeric | Timestamp information | Cosine value of the day of the week |
| Holi | Binary | Timestamp information | Holiday indicator |
| Temp | Numeric | Climate data | Ambient temperature |
| Humi | Numeric | Climate data | Relative humidity |
| WS | Numeric | Climate data | Wind velocity |
| THI | Numeric | Climate data | Temperature–humidity index |
| WCT | Numeric | Climate data | Wind chill temperature |
| Cons1 | Numeric | Past power consumption | Power consumption one day prior |
| Holi1 | Binary | Past power consumption | Holiday indicator one day prior |
| Cons7 | Numeric | Past power consumption | Power consumption one week prior |
| Holi7 | Binary | Past power consumption | Holiday indicator one week prior |
| Consavg | Numeric | Past power consumption | Average weekly power consumption |
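As a reference for the Hourx/Houry and DOTWx/DOTWy encodings in Table 3, the sine/cosine transformation of cyclic timestamp fields is conventionally computed as below; this is a sketch of a standard technique, and the authors' exact preprocessing (including the holiday calendar and the weather-derived indices THI and WCT) is not reproduced here.

```python
import numpy as np
import pandas as pd

# Sketch of the cyclical timestamp encoding behind Table 3 (illustrative).
df = pd.DataFrame(
    {"timestamp": pd.date_range("2015-03-01", periods=48, freq="h")})
hour = df["timestamp"].dt.hour
dotw = df["timestamp"].dt.dayofweek

df["Hourx"] = np.sin(2 * np.pi * hour / 24)  # sine value of the hour
df["Houry"] = np.cos(2 * np.pi * hour / 24)  # cosine value of the hour
df["DOTWx"] = np.sin(2 * np.pi * dotw / 7)   # sine of the day of the week
df["DOTWy"] = np.cos(2 * np.pi * dotw / 7)   # cosine of the day of the week
```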
Table 4. Comparison of hybrid deep learning model architectures (O: with attention mechanism; X: without).

| Models | Neural Network | Activation Function | Attention Mechanism |
|---|---|---|---|
| LSTM-TCN-I | LSTM-TCN | SELU | O |
| LSTM-TCN-II | LSTM-TCN | SELU | X |
| LSTM-TCN-III | LSTM-TCN | ReLU | O |
| LSTM-TCN-IV | LSTM-TCN | ReLU | X |
| Bi-LSTM-TCN-I | Bi-LSTM-TCN | SELU | O |
| Bi-LSTM-TCN-II | Bi-LSTM-TCN | SELU | X |
| Bi-LSTM-TCN-III | Bi-LSTM-TCN | ReLU | O |
| Bi-LSTM-TCN-IV | Bi-LSTM-TCN | ReLU | X |
| GRU-TCN-I | GRU-TCN | SELU | O |
| GRU-TCN-II | GRU-TCN | SELU | X |
| GRU-TCN-III | GRU-TCN | ReLU | O |
| GRU-TCN-IV | GRU-TCN | ReLU | X |
| Bi-GRU-TCN-I | Bi-GRU-TCN | SELU | O |
| Bi-GRU-TCN-II | Bi-GRU-TCN | SELU | X |
| Bi-GRU-TCN-III | Bi-GRU-TCN | ReLU | O |
| Bi-GRU-TCN-IV | Bi-GRU-TCN | ReLU | X |
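For orientation, the sketch below assembles one Table 4 variant (Bi-GRU-TCN with SELU and attention, i.e., the BiGTA-net-style configuration) in Keras; the layer widths, dilation rates, and pooling choice are illustrative assumptions and do not reproduce the authors' exact hyperparameters.

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(24, 15))  # 24-step window, 15 features

# Recurrent stage: bidirectional GRU (see Appendix A).
x = tf.keras.layers.Bidirectional(
    tf.keras.layers.GRU(64, return_sequences=True))(inputs)

# Convolutional stage: dilated causal convolutions with SELU; the
# lecun_normal initializer is the standard companion to SELU [48].
for d in (1, 2):
    x = tf.keras.layers.Conv1D(64, 3, padding="causal", dilation_rate=d,
                               activation="selu",
                               kernel_initializer="lecun_normal")(x)

# Attention stage: dot-product self-attention over the time axis.
# Variants marked "X" in Table 4 would skip the next line.
x = tf.keras.layers.Attention()([x, x])

x = tf.keras.layers.GlobalAveragePooling1D()(x)
outputs = tf.keras.layers.Dense(24)(x)  # day-ahead 24-point forecast

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mae")
```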
Table 5. MAPE comparison for the educational building dataset.

| Step | LSTM-TCN-I | LSTM-TCN-II | LSTM-TCN-III | LSTM-TCN-IV | Bi-LSTM-TCN-I | Bi-LSTM-TCN-II | Bi-LSTM-TCN-III | Bi-LSTM-TCN-IV | GRU-TCN-I | GRU-TCN-II | GRU-TCN-III | GRU-TCN-IV | Bi-GRU-TCN-I | Bi-GRU-TCN-II | Bi-GRU-TCN-III | Bi-GRU-TCN-IV |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 4.51 | 5.14 | 4.09 | 23.36 | 5.61 | 7.38 | 9.83 | 22.86 | 4.64 | 4.66 | 4.35 | 5.04 | 3.88 | 3.59 | 4.64 | 4.66 |
| 2 | 4.93 | 5.64 | 4.74 | 23.35 | 5.83 | 7.04 | 10.33 | 23.03 | 4.98 | 4.70 | 5.18 | 5.00 | 4.49 | 4.41 | 4.98 | 4.70 |
| 3 | 5.00 | 5.78 | 4.80 | 23.39 | 6.21 | 7.37 | 8.45 | 23.04 | 5.37 | 4.92 | 5.57 | 5.36 | 4.76 | 4.39 | 5.37 | 4.92 |
| 4 | 5.37 | 6.20 | 5.14 | 23.40 | 6.38 | 7.30 | 7.96 | 15.38 | 5.61 | 5.16 | 5.75 | 5.50 | 4.94 | 4.68 | 5.61 | 5.16 |
| 5 | 5.76 | 6.40 | 5.34 | 23.33 | 6.29 | 7.33 | 7.40 | 12.16 | 5.76 | 5.70 | 5.74 | 5.92 | 5.12 | 5.09 | 5.76 | 5.70 |
| 6 | 5.89 | 6.69 | 5.45 | 23.17 | 6.29 | 7.71 | 6.87 | 11.20 | 5.92 | 5.75 | 5.94 | 5.87 | 5.25 | 5.21 | 5.92 | 5.75 |
| 7 | 5.89 | 6.78 | 5.68 | 22.94 | 6.28 | 7.92 | 6.99 | 11.65 | 6.01 | 6.09 | 5.82 | 6.09 | 5.32 | 5.18 | 6.01 | 6.09 |
| 8 | 5.81 | 7.01 | 5.61 | 22.66 | 6.62 | 8.62 | 7.04 | 13.19 | 6.11 | 6.04 | 5.97 | 6.15 | 5.42 | 5.19 | 6.11 | 6.04 |
| 9 | 5.87 | 7.13 | 5.61 | 22.38 | 6.43 | 9.08 | 7.54 | 13.61 | 6.23 | 6.49 | 6.07 | 6.38 | 5.48 | 5.38 | 6.23 | 6.49 |
| 10 | 5.76 | 7.47 | 5.50 | 22.11 | 6.51 | 8.63 | 7.49 | 14.10 | 6.02 | 6.70 | 6.00 | 6.43 | 5.52 | 5.35 | 6.02 | 6.70 |
| 11 | 5.89 | 7.77 | 5.61 | 21.95 | 6.41 | 8.44 | 7.49 | 14.04 | 6.09 | 6.84 | 6.15 | 6.53 | 5.53 | 5.42 | 6.09 | 6.84 |
| 12 | 6.00 | 7.83 | 5.61 | 21.88 | 6.49 | 7.74 | 7.88 | 14.06 | 6.20 | 7.06 | 6.06 | 6.72 | 5.52 | 5.46 | 6.20 | 7.06 |
| 13 | 6.14 | 8.11 | 5.74 | 21.88 | 6.47 | 7.53 | 8.04 | 12.66 | 6.42 | 7.03 | 6.09 | 6.60 | 5.66 | 5.57 | 6.42 | 7.03 |
| 14 | 6.14 | 8.15 | 5.81 | 21.94 | 6.51 | 7.73 | 9.19 | 13.16 | 6.42 | 7.40 | 6.15 | 6.79 | 5.65 | 5.52 | 6.42 | 7.40 |
| 15 | 6.34 | 8.16 | 5.92 | 22.02 | 6.73 | 7.83 | 8.53 | 12.25 | 6.54 | 7.27 | 6.19 | 7.08 | 5.81 | 5.47 | 6.54 | 7.27 |
| 16 | 6.23 | 8.18 | 5.97 | 22.10 | 7.03 | 8.67 | 8.65 | 12.58 | 6.61 | 7.41 | 6.12 | 7.15 | 5.66 | 5.44 | 6.61 | 7.41 |
| 17 | 6.44 | 8.02 | 6.01 | 22.17 | 7.17 | 8.53 | 8.05 | 13.37 | 6.55 | 7.59 | 6.32 | 7.23 | 5.66 | 5.74 | 6.55 | 7.59 |
| 18 | 6.39 | 7.93 | 6.24 | 22.28 | 7.34 | 9.18 | 7.82 | 13.92 | 6.48 | 7.61 | 6.35 | 7.33 | 5.49 | 5.70 | 6.48 | 7.61 |
| 19 | 6.46 | 7.91 | 6.00 | 22.50 | 7.74 | 9.76 | 7.91 | 15.28 | 6.46 | 7.88 | 6.31 | 7.24 | 5.59 | 5.70 | 6.46 | 7.88 |
| 20 | 6.44 | 7.85 | 6.03 | 22.77 | 7.90 | 10.08 | 8.01 | 17.06 | 6.72 | 7.41 | 6.43 | 7.43 | 5.56 | 5.76 | 6.72 | 7.41 |
| 21 | 6.65 | 7.84 | 5.93 | 23.05 | 8.25 | 9.56 | 8.01 | 18.93 | 6.61 | 7.20 | 6.32 | 7.38 | 5.59 | 6.01 | 6.61 | 7.20 |
| 22 | 6.49 | 7.91 | 5.99 | 23.30 | 8.49 | 10.23 | 7.78 | 21.15 | 6.63 | 7.09 | 6.29 | 7.11 | 5.67 | 6.18 | 6.63 | 7.09 |
| 23 | 6.32 | 7.81 | 5.76 | 23.50 | 8.31 | 10.20 | 8.13 | 23.95 | 6.68 | 7.03 | 6.28 | 6.86 | 5.66 | 6.39 | 6.68 | 7.03 |
| 24 | 5.97 | 7.61 | 5.60 | 23.62 | 8.44 | 10.88 | 8.66 | 25.87 | 6.57 | 6.81 | 6.22 | 6.86 | 5.65 | 6.45 | 6.57 | 6.81 |
| Avg. | 5.95 | 7.31 | 5.59 | 22.71 | 6.90 | 8.53 | 8.09 | 16.19 | 6.15 | 6.58 | 5.99 | 6.50 | 5.37 | 5.39 | 6.15 | 6.58 |
Table 6. RMSE comparison for the educational building dataset.

| Step | LSTM-TCN-I | LSTM-TCN-II | LSTM-TCN-III | LSTM-TCN-IV | Bi-LSTM-TCN-I | Bi-LSTM-TCN-II | Bi-LSTM-TCN-III | Bi-LSTM-TCN-IV | GRU-TCN-I | GRU-TCN-II | GRU-TCN-III | GRU-TCN-IV | Bi-GRU-TCN-I | Bi-GRU-TCN-II | Bi-GRU-TCN-III | Bi-GRU-TCN-IV |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 134.8 | 144.7 | 128.8 | 704.6 | 165.4 | 233.2 | 307.3 | 644.7 | 140.1 | 141.8 | 133.9 | 156.2 | 118.8 | 110.7 | 140.1 | 141.8 |
| 2 | 148.8 | 164.6 | 152.7 | 699.9 | 172.4 | 218.9 | 296.3 | 621.0 | 150.0 | 149.9 | 155.7 | 162.1 | 140.4 | 136.7 | 150.0 | 149.9 |
| 3 | 154.2 | 170.5 | 156.8 | 696.8 | 181.0 | 223.8 | 254.9 | 778.4 | 159.6 | 156.9 | 166.4 | 167.2 | 151.1 | 141.5 | 159.6 | 156.9 |
| 4 | 164.5 | 179.1 | 163.9 | 694.2 | 185.2 | 216.6 | 233.0 | 439.3 | 165.2 | 162.1 | 171.7 | 173.0 | 158.6 | 149.5 | 165.2 | 162.1 |
| 5 | 175.6 | 188.6 | 169.2 | 691.4 | 185.1 | 224.3 | 217.7 | 331.3 | 170.5 | 176.1 | 175.2 | 181.7 | 164.5 | 160.7 | 170.5 | 176.1 |
| 6 | 181.4 | 195.6 | 177.3 | 688.4 | 185.4 | 230.9 | 219.1 | 317.3 | 176.7 | 180.4 | 181.3 | 183.1 | 169.1 | 164.3 | 176.7 | 180.4 |
| 7 | 181.5 | 200.4 | 182.6 | 685.2 | 187.8 | 234.8 | 210.7 | 349.7 | 180.2 | 188.5 | 179.7 | 188.4 | 174.4 | 164.3 | 180.2 | 188.5 |
| 8 | 179.1 | 206.5 | 182.3 | 682.3 | 198.6 | 245.9 | 216.4 | 397.0 | 184.1 | 189.8 | 182.4 | 192.9 | 176.6 | 165.3 | 184.1 | 189.8 |
| 9 | 179.2 | 215.0 | 184.9 | 680.0 | 198.9 | 251.7 | 232.1 | 399.7 | 184.5 | 200.1 | 184.3 | 200.7 | 175.8 | 168.2 | 184.5 | 200.1 |
| 10 | 175.9 | 223.4 | 179.6 | 678.2 | 203.3 | 240.5 | 236.1 | 397.2 | 180.6 | 207.7 | 182.2 | 201.9 | 173.7 | 168.3 | 180.6 | 207.7 |
| 11 | 176.9 | 232.4 | 183.1 | 677.3 | 205.6 | 235.8 | 230.3 | 411.5 | 180.7 | 213.1 | 185.2 | 204.7 | 173.1 | 170.6 | 180.7 | 213.1 |
| 12 | 179.5 | 238.3 | 182.2 | 677.1 | 209.9 | 222.7 | 235.8 | 409.0 | 182.4 | 217.5 | 184.0 | 208.1 | 173.7 | 170.9 | 182.4 | 217.5 |
| 13 | 181.6 | 243.3 | 184.9 | 677.5 | 213.0 | 221.4 | 235.6 | 363.7 | 186.0 | 219.3 | 185.7 | 204.1 | 175.7 | 173.2 | 186.0 | 219.3 |
| 14 | 183.1 | 245.9 | 183.6 | 678.6 | 212.8 | 232.1 | 262.7 | 363.3 | 186.9 | 226.1 | 186.5 | 207.6 | 178.3 | 172.8 | 186.9 | 226.1 |
| 15 | 187.0 | 242.6 | 187.2 | 680.9 | 219.3 | 239.2 | 242.0 | 362.5 | 188.2 | 222.6 | 187.3 | 214.4 | 183.1 | 173.3 | 188.2 | 222.6 |
| 16 | 185.0 | 239.9 | 191.3 | 684.1 | 226.0 | 262.9 | 254.7 | 418.2 | 191.3 | 223.3 | 186.6 | 217.3 | 180.6 | 173.6 | 191.3 | 223.3 |
| 17 | 188.2 | 233.6 | 182.8 | 688.0 | 232.5 | 258.0 | 237.7 | 485.0 | 188.3 | 228.8 | 188.3 | 217.8 | 181.3 | 177.2 | 188.3 | 228.8 |
| 18 | 188.6 | 230.7 | 190.4 | 693.0 | 232.7 | 278.9 | 228.0 | 532.6 | 188.5 | 227.4 | 191.4 | 219.3 | 178.6 | 178.3 | 188.5 | 227.4 |
| 19 | 189.9 | 228.4 | 184.4 | 699.2 | 238.8 | 287.7 | 237.1 | 608.7 | 190.5 | 233.0 | 192.1 | 218.4 | 180.5 | 181.4 | 190.5 | 233.0 |
| 20 | 189.9 | 225.4 | 183.1 | 706.2 | 240.3 | 288.0 | 231.4 | 699.6 | 195.3 | 223.2 | 197.3 | 223.4 | 180.3 | 183.0 | 195.3 | 223.2 |
| 21 | 194.8 | 223.1 | 182.4 | 713.6 | 247.4 | 270.8 | 243.1 | 797.5 | 194.8 | 221.1 | 192.5 | 219.3 | 180.2 | 188.8 | 194.8 | 221.1 |
| 22 | 193.6 | 224.2 | 184.4 | 720.0 | 251.0 | 290.3 | 244.4 | 879.6 | 195.8 | 217.7 | 193.7 | 211.8 | 182.6 | 192.7 | 195.8 | 217.7 |
| 23 | 193.6 | 221.2 | 181.0 | 725.3 | 244.2 | 285.5 | 264.6 | 971.5 | 197.6 | 213.7 | 189.8 | 206.3 | 180.6 | 199.1 | 197.6 | 213.7 |
| 24 | 192.9 | 221.0 | 177.5 | 729.6 | 246.6 | 316.9 | 294.4 | 863.1 | 198.7 | 212.5 | 189.8 | 208.1 | 180.4 | 203.3 | 198.7 | 212.5 |
| Avg. | 179.1 | 214.1 | 177.4 | 693.8 | 211.8 | 250.5 | 244.4 | 535.1 | 181.5 | 202.2 | 181.8 | 199.5 | 171.3 | 169.5 | 181.5 | 202.2 |
Table 7. MAE comparison for the educational building dataset.

| Step | LSTM-TCN-I | LSTM-TCN-II | LSTM-TCN-III | LSTM-TCN-IV | Bi-LSTM-TCN-I | Bi-LSTM-TCN-II | Bi-LSTM-TCN-III | Bi-LSTM-TCN-IV | GRU-TCN-I | GRU-TCN-II | GRU-TCN-III | GRU-TCN-IV | Bi-GRU-TCN-I | Bi-GRU-TCN-II | Bi-GRU-TCN-III | Bi-GRU-TCN-IV |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 99.2 | 113.1 | 93.7 | 559.0 | 128.2 | 168.2 | 214.7 | 527.0 | 103.5 | 107.8 | 97.5 | 118.2 | 86.4 | 80.3 | 103.5 | 107.8 |
| 2 | 109.3 | 126.8 | 110.5 | 557.4 | 132.0 | 158.2 | 222.0 | 501.2 | 110.9 | 110.0 | 115.9 | 119.3 | 100.5 | 99.4 | 110.9 | 110.0 |
| 3 | 112.5 | 130.0 | 112.0 | 557.3 | 138.6 | 166.2 | 188.8 | 578.3 | 119.5 | 114.4 | 124.4 | 124.1 | 107.2 | 99.9 | 119.5 | 114.4 |
| 4 | 120.8 | 137.9 | 118.1 | 557.1 | 142.0 | 163.3 | 175.6 | 343.8 | 123.8 | 118.6 | 128.2 | 127.5 | 111.7 | 105.9 | 123.8 | 118.6 |
| 5 | 129.3 | 144.7 | 122.2 | 555.5 | 140.8 | 167.5 | 162.4 | 259.4 | 127.0 | 130.5 | 129.1 | 136.0 | 116.5 | 115.3 | 127.0 | 130.5 |
| 6 | 133.0 | 150.6 | 126.5 | 552.3 | 139.9 | 175.2 | 157.4 | 245.5 | 131.4 | 133.0 | 134.1 | 135.1 | 119.8 | 118.1 | 131.4 | 133.0 |
| 7 | 132.9 | 153.8 | 131.9 | 548.0 | 141.4 | 179.3 | 156.1 | 263.4 | 134.0 | 140.6 | 131.8 | 139.7 | 122.1 | 117.5 | 134.0 | 140.6 |
| 8 | 130.9 | 159.1 | 131.1 | 543.0 | 150.8 | 192.4 | 160.3 | 300.4 | 136.6 | 140.5 | 134.2 | 142.4 | 123.8 | 117.3 | 136.6 | 140.5 |
| 9 | 130.9 | 164.9 | 132.1 | 538.0 | 149.1 | 199.5 | 170.4 | 305.9 | 137.7 | 150.7 | 136.1 | 149.3 | 124.4 | 120.7 | 137.7 | 150.7 |
| 10 | 128.0 | 172.7 | 127.6 | 533.5 | 152.3 | 189.9 | 172.8 | 313.0 | 133.4 | 157.2 | 133.8 | 150.3 | 123.9 | 120.4 | 133.4 | 157.2 |
| 11 | 130.2 | 180.6 | 130.5 | 531.1 | 152.3 | 185.6 | 169.7 | 320.3 | 134.2 | 161.4 | 137.1 | 152.5 | 123.6 | 122.7 | 134.2 | 161.4 |
| 12 | 132.9 | 183.7 | 130.1 | 530.7 | 155.0 | 171.7 | 177.2 | 318.7 | 136.3 | 166.1 | 135.7 | 156.6 | 123.8 | 123.1 | 136.3 | 166.1 |
| 13 | 135.2 | 190.0 | 133.2 | 531.3 | 155.7 | 169.0 | 179.4 | 282.4 | 140.9 | 166.5 | 137.2 | 153.2 | 126.4 | 125.7 | 140.9 | 166.5 |
| 14 | 135.6 | 191.1 | 133.1 | 532.7 | 155.9 | 175.6 | 202.5 | 286.3 | 141.0 | 174.2 | 137.7 | 157.3 | 127.1 | 124.9 | 141.0 | 174.2 |
| 15 | 140.1 | 189.6 | 136.1 | 534.4 | 161.4 | 178.8 | 187.7 | 270.3 | 142.9 | 171.0 | 138.5 | 164.0 | 131.5 | 124.3 | 142.9 | 171.0 |
| 16 | 137.5 | 187.9 | 137.9 | 535.8 | 168.7 | 200.4 | 194.6 | 288.6 | 145.3 | 172.9 | 136.9 | 166.4 | 128.9 | 123.9 | 145.3 | 172.9 |
| 17 | 141.0 | 183.6 | 134.6 | 537.0 | 174.0 | 196.8 | 178.5 | 318.2 | 143.2 | 177.6 | 140.6 | 167.8 | 129.5 | 129.4 | 143.2 | 177.6 |
| 18 | 140.3 | 181.2 | 138.7 | 539.2 | 176.5 | 214.8 | 173.3 | 337.9 | 143.1 | 177.1 | 142.3 | 169.6 | 126.6 | 129.6 | 143.1 | 177.1 |
| 19 | 141.7 | 179.9 | 134.5 | 543.8 | 184.5 | 227.3 | 174.5 | 382.7 | 144.0 | 183.6 | 141.6 | 168.3 | 128.6 | 131.6 | 144.0 | 183.6 |
| 20 | 141.4 | 177.7 | 133.5 | 549.5 | 187.4 | 231.5 | 176.1 | 442.3 | 149.3 | 173.1 | 145.9 | 172.8 | 128.3 | 133.1 | 149.3 | 173.1 |
| 21 | 145.6 | 176.4 | 132.8 | 555.6 | 195.0 | 215.2 | 177.9 | 507.0 | 148.1 | 169.8 | 141.8 | 169.9 | 128.5 | 139.2 | 148.1 | 169.8 |
| 22 | 143.2 | 177.4 | 133.9 | 561.4 | 199.4 | 230.0 | 175.4 | 574.6 | 148.5 | 166.9 | 142.6 | 163.4 | 130.1 | 143.0 | 148.5 | 166.9 |
| 23 | 141.6 | 174.8 | 129.6 | 565.8 | 193.5 | 227.1 | 184.3 | 652.7 | 149.8 | 164.1 | 139.7 | 157.3 | 129.3 | 148.6 | 149.8 | 164.1 |
| 24 | 136.2 | 172.4 | 125.8 | 568.5 | 194.9 | 247.7 | 199.4 | 659.3 | 148.8 | 160.2 | 137.9 | 157.7 | 128.7 | 150.9 | 148.8 | 160.2 |
| Avg. | 132.0 | 166.7 | 127.9 | 546.6 | 161.2 | 193.0 | 180.5 | 386.6 | 136.4 | 153.7 | 134.2 | 150.8 | 122.0 | 122.7 | 136.4 | 153.7 |
Table 8. MAPE comparison for the AEP dataset.

| Step | LSTM-TCN-I | LSTM-TCN-II | LSTM-TCN-III | LSTM-TCN-IV | Bi-LSTM-TCN-I | Bi-LSTM-TCN-II | Bi-LSTM-TCN-III | Bi-LSTM-TCN-IV | GRU-TCN-I | GRU-TCN-II | GRU-TCN-III | GRU-TCN-IV | Bi-GRU-TCN-I | Bi-GRU-TCN-II | Bi-GRU-TCN-III | Bi-GRU-TCN-IV |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 29.34 | 28.39 | 28.18 | 33.29 | 33.05 | 35.48 | 36.61 | 55.79 | 26.86 | 29.18 | 30.12 | 27.24 | 23.78 | 24.64 | 26.86 | 29.18 |
| 2 | 28.85 | 26.18 | 27.53 | 32.84 | 31.20 | 32.03 | 34.83 | 37.99 | 26.06 | 28.21 | 29.93 | 28.85 | 25.62 | 26.84 | 26.06 | 28.21 |
| 3 | 27.83 | 25.92 | 27.82 | 29.84 | 32.38 | 30.19 | 36.26 | 77.86 | 26.32 | 30.59 | 29.68 | 28.22 | 26.45 | 27.15 | 26.32 | 30.59 |
| 4 | 26.99 | 27.04 | 28.15 | 28.23 | 30.96 | 30.28 | 36.39 | 50.02 | 27.62 | 29.40 | 29.35 | 29.01 | 27.23 | 30.36 | 27.62 | 29.40 |
| 5 | 26.47 | 31.27 | 29.39 | 28.02 | 28.14 | 31.88 | 34.96 | 32.90 | 29.41 | 28.42 | 29.08 | 29.12 | 26.52 | 28.81 | 29.41 | 28.42 |
| 6 | 26.22 | 30.77 | 29.10 | 27.67 | 28.04 | 27.54 | 35.89 | 77.49 | 30.12 | 29.38 | 28.68 | 30.40 | 26.22 | 29.80 | 30.12 | 29.38 |
| 7 | 26.16 | 30.52 | 29.62 | 27.34 | 26.05 | 35.42 | 32.58 | 44.10 | 30.29 | 29.83 | 28.36 | 30.58 | 26.79 | 30.09 | 30.29 | 29.83 |
| 8 | 26.29 | 35.94 | 28.40 | 26.71 | 25.73 | 30.26 | 35.78 | 32.66 | 30.61 | 29.99 | 27.99 | 30.58 | 28.87 | 30.24 | 30.61 | 29.99 |
| 9 | 26.53 | 32.77 | 29.12 | 27.27 | 25.60 | 27.77 | 34.22 | 76.03 | 29.29 | 28.63 | 27.86 | 29.53 | 27.06 | 29.96 | 29.29 | 28.63 |
| 10 | 27.32 | 29.69 | 27.48 | 27.54 | 26.61 | 27.36 | 35.70 | 36.91 | 29.02 | 27.72 | 27.90 | 28.69 | 27.18 | 29.22 | 29.02 | 27.72 |
| 11 | 27.51 | 29.95 | 28.11 | 27.44 | 26.82 | 29.70 | 32.24 | 38.45 | 29.10 | 26.61 | 28.08 | 27.97 | 27.14 | 29.48 | 29.10 | 26.61 |
| 12 | 27.79 | 28.37 | 27.81 | 26.30 | 27.85 | 32.31 | 31.44 | 72.68 | 28.10 | 26.23 | 28.28 | 27.72 | 26.95 | 30.28 | 28.10 | 26.23 |
| 13 | 28.13 | 27.94 | 27.97 | 27.06 | 35.03 | 33.45 | 33.63 | 44.07 | 27.29 | 25.37 | 28.63 | 26.34 | 26.04 | 29.86 | 27.29 | 25.37 |
| 14 | 28.40 | 28.50 | 28.67 | 26.54 | 27.46 | 32.83 | 31.19 | 33.48 | 27.05 | 25.30 | 29.03 | 26.88 | 26.40 | 28.80 | 27.05 | 25.30 |
| 15 | 29.26 | 27.95 | 29.31 | 28.29 | 30.34 | 32.13 | 31.96 | 68.84 | 28.02 | 24.65 | 29.47 | 26.73 | 27.23 | 30.16 | 28.02 | 24.65 |
| 16 | 29.73 | 28.34 | 29.15 | 31.27 | 27.45 | 31.19 | 32.80 | 34.33 | 28.07 | 24.99 | 29.74 | 26.69 | 27.10 | 29.97 | 28.07 | 24.99 |
| 17 | 29.78 | 28.81 | 29.09 | 28.16 | 27.11 | 30.01 | 34.39 | 30.08 | 28.88 | 24.59 | 30.15 | 25.71 | 26.48 | 28.68 | 28.88 | 24.59 |
| 18 | 30.03 | 28.72 | 28.18 | 26.33 | 28.20 | 30.05 | 31.65 | 59.95 | 28.63 | 24.89 | 30.44 | 25.67 | 26.35 | 27.93 | 28.63 | 24.89 |
| 19 | 29.51 | 29.10 | 28.17 | 25.18 | 28.26 | 30.33 | 33.12 | 30.48 | 28.58 | 25.47 | 30.62 | 26.51 | 26.37 | 28.31 | 28.58 | 25.47 |
| 20 | 28.37 | 28.61 | 27.54 | 25.31 | 28.64 | 29.65 | 32.98 | 27.73 | 28.40 | 25.75 | 30.64 | 26.40 | 26.71 | 26.29 | 28.40 | 25.75 |
| 21 | 28.14 | 29.26 | 27.30 | 26.92 | 31.00 | 28.73 | 34.29 | 27.75 | 28.46 | 26.94 | 30.73 | 27.51 | 27.26 | 27.53 | 28.46 | 26.94 |
| 22 | 29.03 | 29.48 | 27.38 | 27.93 | 30.83 | 28.07 | 33.48 | 30.32 | 28.58 | 27.27 | 30.53 | 27.96 | 27.64 | 27.03 | 28.58 | 27.27 |
| 23 | 30.83 | 29.50 | 27.48 | 31.79 | 31.38 | 28.27 | 31.75 | 35.26 | 28.84 | 28.32 | 30.37 | 29.33 | 27.81 | 28.06 | 28.84 | 28.32 |
| 24 | 33.64 | 30.42 | 27.94 | 36.82 | 30.64 | 29.78 | 39.80 | 53.76 | 27.54 | 27.39 | 30.28 | 29.80 | 27.35 | 28.45 | 27.54 | 27.39 |
| Avg. | 28.42 | 29.31 | 28.29 | 28.50 | 29.12 | 30.61 | 34.08 | 46.21 | 28.38 | 27.30 | 29.41 | 28.06 | 26.77 | 28.66 | 28.38 | 27.30 |
Table 9. RMSE comparison for the AEP dataset.

| Step | LSTM-TCN-I | LSTM-TCN-II | LSTM-TCN-III | LSTM-TCN-IV | Bi-LSTM-TCN-I | Bi-LSTM-TCN-II | Bi-LSTM-TCN-III | Bi-LSTM-TCN-IV | GRU-TCN-I | GRU-TCN-II | GRU-TCN-III | GRU-TCN-IV | Bi-GRU-TCN-I | Bi-GRU-TCN-II | Bi-GRU-TCN-III | Bi-GRU-TCN-IV |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 473.6 | 417.6 | 392.5 | 401.8 | 431.1 | 427.9 | 430.3 | 419.2 | 368.3 | 386.1 | 448.4 | 380.8 | 372.2 | 375.9 | 368.3 | 386.1 |
| 2 | 451.5 | 420.0 | 395.2 | 406.0 | 424.1 | 436.4 | 429.9 | 430.5 | 381.8 | 392.7 | 450.0 | 378.8 | 378.5 | 369.6 | 381.8 | 392.7 |
| 3 | 449.4 | 431.3 | 425.5 | 407.3 | 426.1 | 440.9 | 435.3 | 622.2 | 381.9 | 391.8 | 450.9 | 379.0 | 381.8 | 372.8 | 381.9 | 391.8 |
| 4 | 447.3 | 422.0 | 420.2 | 409.6 | 411.1 | 440.0 | 423.6 | 431.3 | 379.6 | 387.9 | 450.3 | 377.6 | 380.0 | 370.8 | 379.6 | 387.9 |
| 5 | 446.3 | 421.8 | 420.2 | 407.0 | 412.4 | 430.8 | 432.2 | 464.4 | 378.8 | 384.8 | 450.1 | 388.1 | 384.6 | 373.2 | 378.8 | 384.8 |
| 6 | 446.3 | 429.0 | 414.4 | 414.9 | 417.1 | 452.0 | 434.1 | 622.0 | 383.3 | 383.3 | 449.6 | 384.0 | 391.1 | 373.7 | 383.3 | 383.3 |
| 7 | 447.2 | 428.3 | 411.0 | 416.9 | 422.8 | 417.4 | 436.7 | 439.0 | 382.7 | 384.5 | 449.3 | 380.5 | 378.4 | 369.1 | 382.7 | 384.5 |
| 8 | 446.8 | 430.3 | 417.9 | 412.0 | 422.8 | 428.3 | 438.4 | 469.2 | 381.1 | 381.4 | 447.5 | 384.1 | 379.7 | 368.9 | 381.1 | 381.4 |
| 9 | 447.1 | 427.3 | 417.2 | 414.3 | 429.4 | 437.5 | 448.7 | 614.7 | 377.1 | 384.0 | 447.7 | 387.0 | 382.6 | 371.7 | 377.1 | 384.0 |
| 10 | 445.2 | 436.9 | 413.3 | 413.2 | 441.3 | 435.7 | 440.2 | 442.0 | 376.3 | 384.0 | 447.9 | 387.8 | 385.0 | 369.3 | 376.3 | 384.0 |
| 11 | 446.4 | 435.4 | 404.8 | 416.7 | 425.8 | 425.1 | 452.0 | 432.7 | 374.2 | 384.5 | 448.6 | 387.3 | 387.5 | 373.6 | 374.2 | 384.5 |
| 12 | 447.8 | 439.8 | 412.1 | 418.3 | 455.5 | 425.8 | 450.1 | 601.6 | 380.1 | 390.3 | 449.5 | 387.3 | 387.5 | 374.2 | 380.1 | 390.3 |
| 13 | 444.5 | 438.7 | 406.4 | 423.2 | 421.2 | 428.8 | 453.1 | 416.0 | 389.9 | 395.5 | 450.6 | 387.8 | 390.8 | 373.6 | 389.9 | 395.5 |
| 14 | 443.1 | 435.4 | 398.1 | 428.6 | 445.5 | 431.8 | 447.2 | 428.6 | 391.0 | 394.6 | 451.8 | 387.8 | 390.0 | 375.1 | 391.0 | 394.6 |
| 15 | 434.3 | 442.7 | 393.7 | 433.0 | 411.6 | 433.9 | 457.0 | 587.4 | 390.5 | 398.0 | 453.4 | 389.5 | 383.8 | 375.5 | 390.5 | 398.0 |
| 16 | 428.3 | 441.2 | 391.2 | 432.4 | 408.7 | 433.9 | 445.9 | 417.9 | 393.8 | 396.2 | 454.6 | 392.7 | 384.5 | 377.9 | 393.8 | 396.2 |
| 17 | 423.2 | 441.5 | 410.0 | 437.1 | 408.0 | 437.1 | 447.6 | 430.1 | 398.7 | 392.7 | 457.7 | 387.5 | 383.7 | 374.9 | 398.7 | 392.7 |
| 18 | 412.3 | 446.1 | 416.4 | 433.6 | 407.5 | 436.7 | 455.3 | 560.6 | 397.0 | 397.3 | 459.8 | 391.9 | 387.7 | 380.4 | 397.0 | 397.3 |
| 19 | 406.7 | 448.1 | 423.0 | 437.1 | 415.8 | 436.5 | 453.5 | 427.7 | 396.6 | 399.0 | 463.2 | 396.6 | 391.0 | 382.9 | 396.6 | 399.0 |
| 20 | 405.0 | 456.2 | 423.5 | 433.7 | 419.6 | 435.8 | 453.2 | 441.7 | 393.4 | 404.9 | 464.3 | 398.8 | 390.2 | 385.2 | 393.4 | 404.9 |
| 21 | 409.8 | 456.4 | 410.1 | 430.8 | 429.3 | 436.6 | 444.7 | 441.2 | 392.5 | 402.3 | 465.2 | 397.5 | 394.4 | 382.9 | 392.5 | 402.3 |
| 22 | 416.0 | 457.4 | 414.7 | 437.1 | 440.7 | 434.8 | 452.2 | 427.2 | 394.0 | 401.3 | 464.7 | 396.4 | 395.3 | 386.1 | 394.0 | 401.3 |
| 23 | 423.7 | 459.3 | 412.6 | 441.1 | 450.1 | 434.3 | 444.5 | 417.8 | 397.8 | 401.2 | 463.4 | 396.1 | 396.2 | 390.0 | 397.8 | 401.2 |
| 24 | 434.2 | 459.1 | 418.9 | 443.2 | 452.6 | 440.9 | 451.3 | 418.8 | 401.8 | 403.5 | 462.8 | 403.8 | 398.5 | 400.3 | 401.8 | 403.5 |
| Avg. | 436.5 | 438.4 | 410.9 | 422.9 | 426.3 | 434.1 | 444.0 | 475.2 | 386.8 | 392.6 | 454.2 | 388.7 | 386.5 | 377.0 | 386.8 | 392.6 |
Table 10. MAE comparison for the AEP dataset.

| Step | LSTM-TCN-I | LSTM-TCN-II | LSTM-TCN-III | LSTM-TCN-IV | Bi-LSTM-TCN-I | Bi-LSTM-TCN-II | Bi-LSTM-TCN-III | Bi-LSTM-TCN-IV | GRU-TCN-I | GRU-TCN-II | GRU-TCN-III | GRU-TCN-IV | Bi-GRU-TCN-I | Bi-GRU-TCN-II | Bi-GRU-TCN-III | Bi-GRU-TCN-IV |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 250.5 | 215.7 | 218.1 | 212.1 | 239.3 | 241.7 | 244.3 | 285.3 | 193.0 | 203.5 | 239.3 | 196.6 | 188.2 | 190.6 | 193.0 | 203.5 |
| 2 | 236.5 | 211.4 | 217.0 | 210.0 | 229.8 | 237.1 | 239.7 | 247.9 | 195.4 | 203.8 | 239.2 | 199.8 | 195.7 | 195.7 | 195.4 | 203.8 |
| 3 | 232.5 | 218.0 | 225.6 | 211.1 | 234.2 | 234.4 | 245.8 | 466.1 | 195.3 | 210.6 | 238.7 | 199.3 | 199.7 | 196.4 | 195.3 | 210.6 |
| 4 | 229.0 | 215.3 | 218.4 | 213.5 | 220.0 | 234.2 | 240.0 | 278.5 | 198.8 | 206.7 | 237.4 | 200.9 | 201.1 | 205.2 | 198.8 | 206.7 |
| 5 | 227.0 | 226.3 | 217.0 | 215.7 | 212.1 | 233.2 | 240.1 | 254.7 | 203.6 | 202.3 | 236.2 | 206.0 | 198.5 | 198.5 | 203.6 | 202.3 |
| 6 | 226.2 | 229.6 | 212.5 | 218.6 | 213.3 | 233.2 | 243.9 | 465.0 | 207.6 | 206.0 | 234.5 | 207.5 | 200.3 | 201.7 | 207.6 | 206.0 |
| 7 | 226.3 | 227.8 | 209.8 | 221.0 | 211.9 | 235.8 | 238.5 | 270.2 | 207.3 | 208.7 | 233.1 | 206.2 | 196.2 | 201.0 | 207.3 | 208.7 |
| 8 | 226.2 | 242.5 | 211.6 | 213.5 | 210.3 | 227.5 | 245.3 | 257.3 | 207.1 | 206.4 | 230.6 | 206.9 | 202.0 | 200.0 | 207.1 | 206.4 |
| 9 | 226.4 | 233.5 | 212.9 | 217.0 | 214.1 | 225.4 | 246.7 | 456.9 | 201.5 | 203.7 | 230.0 | 205.8 | 197.3 | 201.0 | 201.5 | 203.7 |
| 10 | 227.3 | 230.5 | 211.7 | 210.0 | 223.7 | 223.2 | 246.1 | 253.8 | 199.6 | 199.9 | 230.1 | 202.4 | 198.3 | 196.7 | 199.6 | 199.9 |
| 11 | 228.5 | 229.7 | 206.4 | 213.4 | 217.2 | 223.9 | 244.2 | 252.6 | 199.0 | 196.0 | 230.9 | 199.8 | 199.3 | 199.2 | 199.0 | 196.0 |
| 12 | 230.0 | 227.9 | 207.5 | 212.7 | 234.7 | 231.7 | 241.1 | 441.0 | 198.6 | 196.8 | 231.8 | 198.4 | 198.4 | 202.0 | 198.6 | 196.8 |
| 13 | 229.2 | 226.2 | 207.6 | 215.9 | 236.3 | 236.4 | 249.3 | 258.5 | 200.1 | 196.9 | 233.4 | 194.7 | 196.4 | 201.0 | 200.1 | 196.9 |
| 14 | 229.4 | 225.8 | 200.6 | 221.0 | 231.4 | 235.9 | 238.8 | 237.4 | 198.5 | 196.1 | 235.1 | 196.8 | 196.4 | 197.5 | 198.5 | 196.1 |
| 15 | 227.6 | 228.4 | 204.3 | 225.6 | 218.8 | 235.1 | 247.0 | 423.9 | 201.1 | 196.3 | 237.5 | 197.0 | 197.2 | 202.9 | 201.1 | 196.3 |
| 16 | 226.1 | 228.6 | 211.9 | 224.6 | 210.0 | 232.2 | 243.2 | 234.7 | 202.4 | 196.3 | 238.9 | 198.4 | 197.1 | 203.5 | 202.4 | 196.3 |
| 17 | 224.1 | 230.0 | 211.9 | 226.7 | 207.3 | 230.7 | 248.9 | 229.2 | 206.9 | 194.1 | 241.9 | 193.8 | 194.9 | 198.5 | 206.9 | 194.1 |
| 18 | 220.0 | 232.4 | 209.8 | 221.9 | 211.4 | 231.1 | 245.3 | 386.7 | 205.8 | 196.8 | 244.1 | 196.0 | 196.7 | 198.7 | 205.8 | 196.8 |
| 19 | 214.7 | 234.1 | 210.3 | 223.3 | 215.7 | 231.5 | 249.1 | 228.4 | 206.7 | 198.5 | 246.4 | 200.7 | 198.1 | 200.8 | 206.7 | 198.5 |
| 20 | 209.3 | 237.7 | 209.9 | 220.9 | 217.7 | 229.4 | 247.9 | 228.2 | 204.9 | 201.9 | 247.3 | 200.5 | 198.7 | 195.8 | 204.9 | 201.9 |
| 21 | 210.3 | 239.8 | 208.9 | 219.3 | 228.8 | 226.9 | 248.7 | 227.8 | 205.3 | 205.4 | 248.4 | 202.7 | 201.8 | 199.4 | 205.3 | 205.4 |
| 22 | 215.3 | 240.8 | 213.1 | 221.9 | 233.3 | 222.7 | 249.4 | 226.7 | 205.2 | 205.5 | 247.5 | 204.0 | 203.3 | 198.0 | 205.2 | 205.5 |
| 23 | 223.4 | 241.8 | 223.7 | 224.5 | 240.3 | 221.6 | 240.5 | 234.4 | 207.9 | 207.5 | 246.2 | 208.5 | 203.3 | 202.9 | 207.9 | 207.5 |
| 24 | 236.3 | 244.1 | 240.4 | 226.2 | 240.5 | 228.4 | 262.3 | 283.9 | 205.6 | 205.8 | 245.9 | 211.5 | 202.7 | 205.8 | 205.6 | 205.8 |
| Avg. | 226.3 | 229.9 | 213.4 | 218.4 | 223.0 | 231.0 | 245.3 | 297.1 | 202.4 | 201.9 | 238.5 | 201.4 | 198.4 | 199.7 | 202.4 | 201.9 |
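The MAPE, RMSE, and MAE values in Tables 5–10 follow the standard definitions of these metrics; for reference, a minimal NumPy implementation is given below.

```python
import numpy as np

def mape(y_true, y_pred):
    # Mean absolute percentage error, in % (assumes no zero actual values).
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

def rmse(y_true, y_pred):
    # Root mean squared error, in the units of the target (kWh or Wh).
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true, y_pred):
    # Mean absolute error, in the units of the target.
    return float(np.mean(np.abs(y_true - y_pred)))
```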
Table 11. Performance comparison of attention-inclusive models on the educational building dataset. Left values denote the mean values across all steps, while the values in parentheses on the right represent the corresponding standard deviations.

| Model (Year) | MAPE (Unit: %) | RMSE (Unit: kWh) | MAE (Unit: kWh) |
|---|---|---|---|
| Att-LSTM | 8.38 (1.57) | 242.1 (48.2) | 188.8 (39.5) |
| Att-Bi-LSTM | 7.85 (0.70) | 241.8 (25.1) | 176.9 (17.5) |
| Att-GRU [31] | 13.42 (3.39) | 436.5 (177.0) | 313.3 (110.8) |
| Att-Bi-GRU | 14.43 (3.07) | 433.7 (91.8) | 328.1 (72.6) |
| LGBM-S2S-Att-Bi-LSTM (2021) [55] | 7.57 (0.77) | 220.4 (19.2) | 174.4 (19.1) |
| RABOLA (2022) [21] | 7.17 (0.63) | 214.2 (13.8) | 166.5 (15.6) |
| ResCNN-LSTM (2022) [56] | 6.56 (0.28) | 201.5 (4.1) | 152.8 (6.1) |
| Att-CNN-GRU (2023) [57] | 6.35 (0.23) | 189.6 (5.3) | 142.3 (4.5) |
| BiGTA-net | 5.37 (0.44) | 171.3 (15.0) | 122.0 (10.5) |
Table 12. Performance comparison of attention-inclusive models on the AEP dataset. Left values denote the mean values across all steps, while the values in parentheses on the right represent the corresponding standard deviations.

| Model | MAPE (Unit: %) | RMSE (Unit: Wh) | MAE (Unit: Wh) |
|---|---|---|---|
| Att-LSTM | 30.91 (1.03) | 447.4 (5.3) | 239.8 (5.3) |
| Att-Bi-LSTM | 30.54 (2.58) | 402.7 (8.9) | 214.0 (7.3) |
| Att-GRU [31] | 30.03 (0.25) | 443.5 (3.9) | 234.5 (2.4) |
| Att-Bi-GRU | 30.06 (0.26) | 442.5 (4.2) | 234.4 (8.1) |
| LGBM-S2S-Att-Bi-LSTM (2021) [55] | 32.29 (3.78) | 415.7 (12.1) | 222.6 (14.2) |
| RABOLA (2022) [21] | 35.89 (5.78) | 432.1 (20.9) | 238.5 (18.9) |
| ResCNN-LSTM (2022) [56] | 29.99 (2.12) | 376.2 (7.0) | 206.2 (5.7) |
| Att-CNN-GRU (2023) [57] | 29.94 (1.73) | 405.1 (9.7) | 215.8 (4.7) |
| BiGTA-net | 26.77 (0.90) | 386.5 (6.3) | 198.4 (3.2) |
Table 13. Results of the Wilcoxon signed-rank and Friedman tests with BiGTA-net on the educational building dataset. Entries in the model rows are Wilcoxon signed-rank p-values for the comparison between each model and BiGTA-net on the given metric.

| Compared Models | MAPE | RMSE | MAE |
|---|---|---|---|
| Att-LSTM | 1.192 × 10⁻⁷ | 1.192 × 10⁻⁷ | 1.192 × 10⁻⁷ |
| Att-Bi-LSTM | 1.192 × 10⁻⁷ | 1.192 × 10⁻⁷ | 1.192 × 10⁻⁷ |
| Att-GRU [31] | 1.192 × 10⁻⁷ | 1.192 × 10⁻⁷ | 1.192 × 10⁻⁷ |
| Att-Bi-GRU | 1.192 × 10⁻⁷ | 1.192 × 10⁻⁷ | 1.192 × 10⁻⁷ |
| LGBM-S2S-Att-Bi-LSTM [55] | 1.192 × 10⁻⁷ | 1.192 × 10⁻⁷ | 1.192 × 10⁻⁷ |
| RABOLA [21] | 1.192 × 10⁻⁷ | 1.192 × 10⁻⁷ | 1.192 × 10⁻⁷ |
| ResCNN-LSTM [56] | 1.192 × 10⁻⁷ | 1.192 × 10⁻⁷ | 1.192 × 10⁻⁷ |
| Att-CNN-GRU [57] | 1.192 × 10⁻⁷ | 1.192 × 10⁻⁷ | 1.192 × 10⁻⁷ |
| Friedman test | chi-squared: 167.2; p-value: 2.2 × 10⁻¹⁶ | chi-squared: 166.86; p-value: 2.2 × 10⁻¹⁶ | chi-squared: 163.98; p-value: 2.2 × 10⁻¹⁶ |
Table 14. Results of the Wilcoxon signed-rank and Friedman tests with BiGTA-net on the AEP dataset. Entries in the model rows are Wilcoxon signed-rank p-values for the comparison between each model and BiGTA-net on the given metric.

| Compared Models | MAPE | RMSE | MAE |
|---|---|---|---|
| Att-LSTM | 1.192 × 10⁻⁷ | 1.192 × 10⁻⁷ | 1.192 × 10⁻⁷ |
| Att-Bi-LSTM | 1.192 × 10⁻⁷ | 1.192 × 10⁻⁷ | 1.192 × 10⁻⁷ |
| Att-GRU [31] | 1.192 × 10⁻⁷ | 1.192 × 10⁻⁷ | 1.192 × 10⁻⁷ |
| Att-Bi-GRU | 1.192 × 10⁻⁷ | 1.192 × 10⁻⁷ | 1.192 × 10⁻⁷ |
| LGBM-S2S-Att-Bi-LSTM [55] | 2.980 × 10⁻⁶ | 1.192 × 10⁻⁷ | 1.192 × 10⁻⁷ |
| RABOLA [21] | 2.384 × 10⁻⁷ | 1.192 × 10⁻⁷ | 1.192 × 10⁻⁷ |
| ResCNN-LSTM [56] | 1.192 × 10⁻⁷ | 1.192 × 10⁻⁷ | 1.192 × 10⁻⁶ |
| Att-CNN-GRU [57] | 1.192 × 10⁻⁷ | 1.192 × 10⁻⁷ | 1.192 × 10⁻⁷ |
| Friedman test | chi-squared: 75.5; p-value: 3.917 × 10⁻¹³ | chi-squared: 170.26; p-value: 2.2 × 10⁻¹⁶ | chi-squared: 140.54; p-value: 2.2 × 10⁻¹⁶ |
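The significance tests in Tables 13 and 14 can be reproduced with SciPy; the sketch below uses hypothetical per-step error arrays (24 values per model, standing in for the actual per-step results in Tables 5 and 8) rather than the study's real data.

```python
import numpy as np
from scipy.stats import wilcoxon, friedmanchisquare

rng = np.random.default_rng(0)
# Hypothetical per-step MAPE values (one value per forecasting step);
# substitute the real per-step errors to reproduce the reported p-values.
bigta_net = rng.normal(5.4, 0.4, size=24)
att_lstm = rng.normal(8.4, 1.6, size=24)
att_bi_lstm = rng.normal(7.9, 0.7, size=24)

# Pairwise Wilcoxon signed-rank test: BiGTA-net vs. one competitor.
stat, p = wilcoxon(bigta_net, att_lstm)
print(f"Wilcoxon statistic={stat:.1f}, p-value={p:.3e}")

# Friedman test across three or more models over the same 24 steps.
chi2, p = friedmanchisquare(bigta_net, att_lstm, att_bi_lstm)
print(f"Friedman chi-squared={chi2:.2f}, p-value={p:.3e}")
```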
