Article

A Regression Framework for Energy Consumption in Smart Cities with Encoder-Decoder Recurrent Neural Networks

Department of Industrial and Management Engineering, Incheon National University, Incheon 22012, Republic of Korea
* Author to whom correspondence should be addressed.
Energies 2023, 16(22), 7508; https://doi.org/10.3390/en16227508
Submission received: 27 September 2023 / Revised: 31 October 2023 / Accepted: 2 November 2023 / Published: 9 November 2023
(This article belongs to the Special Issue Energy – Machine Learning and Artificial Intelligence)

Abstract
A smart city should ideally be environmentally friendly and sustainable, and energy management is one way to monitor and steer sustainable use. This study investigates how a smart city can improve energy management by adopting various types of intelligent technology to enhance the energy sustainability of its infrastructure and operational efficiency. The South Korean smart city district of Songdo serves as the case study. In the first module of the proposed framework, we emphasize the data capabilities needed to generate energy statistics for each of the numerous buildings. In the second module, we use the collected data to analyze the energy behavior within the micro-city and derive characteristic features. In the third module, we construct baseline regressors against which the proposed model's efficacy is assessed. Finally, we present a deep learning regression model that addresses the problem of 48-hour-ahead energy consumption forecasting. The results show that the proposed model outperforms the other models in terms of R2, MAE, and RMSE.

1. Introduction

Modern smart buildings employ vast sensor networks to gather data that can support energy management. These sensors collect energy information that may be used to forecast a building's energy use, adjust thermostat settings, and even improve the building's resilience and safety. Controlling and evaluating the energy usage of many buildings, nevertheless, can be difficult, especially when power consumption is highly variable. Ordinary businesses and homes alone consume a large quantity of energy: the U.S. Department of Energy estimated that by 2020, Americans would consume more than 20 million megawatt hours (MWh) in the residential sector and more than 16 million MWh in the commercial sector, together accounting for 29% of all energy used in the country [1]. Around 60% of household energy is used for space cooling, space heating, and related electrical equipment, which makes up the largest component of total energy usage [2]. Cities can address this problem with energy-saving measures that make use of the built environment; these involve monitoring, managing, and improving how much energy is used in buildings as well as how other resources are used.
An essential part of managing the energy use of buildings is predicting their energy usage. Predicting building energy demand is crucial to the operation and maintenance of buildings, as well as to the deployment of energy-efficient building technologies. From the perspective of a building owner, estimating energy consumption supports decisions on the right course of action regarding operation and maintenance. Accurate estimation of energy demand is also important for deciding on suitable financial investments in a building, such as refurbishment and renovation, for example the replacement of major building components such as heating, ventilation, and air conditioning (HVAC), lighting, power generation, hot water, or refrigeration systems. Building owners can also benefit from accurate energy demand prediction for energy forecasting, demand reduction, energy audits, efficiency improvement, and demand response. Other applications of accurate energy demand estimation include building certification, testing, and inspection; building life-cycle cost management; and energy policy and planning.
The ‘2020 Energy Consumption Statistics’ published by the Korea Energy Management Corporation and the Korea Energy Agency report that energy consumption in academia is growing steadily as technology advances. In Korea, in 2020, the energy used in educational buildings was estimated at 10.2% of total annual energy consumption, and this share is expected to increase with further advances in technology. The report also notes that the annual energy consumption of educational buildings has increased by about 16.3% per year since 2000 as a share of total energy consumption [3]. In particular, the electricity consumption of laboratories is likely to increase in the future; moreover, energy consumption may vary depending on the nature of research activities, the type of experiments, and the research facility, but it is mostly driven by laboratory activity.
Universities and research facilities must reduce energy waste, manage, and analyze energy usage in order to maximize energy utilization and assess the success of energy-saving initiatives. In this work, a framework for the energy forecast of a smart city is proposed. The proposed framework, based on the integration of a cloud platform for storage, processing, and communication, integrates and unifies the data collected from the different elements and buildings inside the campus. Moreover, in addition to the storage of historical consumption of energy, the platform also includes the monitoring, control, and evaluation of energy consumption. Beyond the analysis within the university, we take into account temporal and environmental variables such as temperature or humidity. This research demonstrates how, using deep learning, we can turn raw data into meaningful information that might assist university town decision-makers in becoming more conscious of energy use in a smart city context. In addition, this study is based on the energy consumption of a university city due to the characteristic features that include the student residence, laboratories, gym, cafeterias, restaurants, and offices, representing a smart city on a smaller scale.
The symbiosis between smart industry and smart cities, as in the energy sector, is a dynamic interdependence that employs cutting-edge technologies to optimize urban development and industrial processes such as electric vehicles or batteries [4,5,6]. From a smart city perspective, one of the main outcomes is the ability to create information and produce big data using the Internet of Things and embedded systems, creating many opportunities to improve current energy systems for electricity grid operators and energy companies [7]. Furthermore, we aim to provide an effective tool to support decision-making processes and the improvement of the academic, operational, and commercial performance of university facilities with respect to efficient energy management. Building on this premise, this investigation seeks to address the following questions: (1) What can we expect from the energy consumption data set of smart buildings? (2) To what extent can we analyze a smart building through its energy data? (3) How can we enrich our data to get a better perspective on what is happening in the environment? (4) What regression model can we use? (5) How can we benefit from deep learning models? The study explores the symbiotic relationship between advancements in smart industry and the development of smart cities, specifically focusing on predicting energy consumption through the analysis of individual buildings.
In this study, our first goal was to expose the possibilities offered by campus-wide energy data for the smart energy management of a college town, transforming the data into actionable information using deep learning for energy consumption analysis and prediction. We used a structure that is typical of most small cities around the world, aiming to show that energy data can be useful for energy management on a small scale. By combining energy data collected by an energy management system between the end of 2019 and the beginning of 2021, individual building energy consumption, and related weather data, we sought to show that these three different but correlated data sets are related to each other.
Most previous literature on energy prediction is based on regression models with a few parameters [8,9,10,11,12], although some of these studies analyze the relationship between energy consumption and climate change [13,14,15,16,17]. Other studies analyze the energy consumption of Korean buildings [18,19,20,21]. Previous research on the energy of smart cities has studied the energy consumption of individual buildings but not of a city as a whole. These studies can broadly be classified into two categories: those that analyze residential buildings and those that analyze non-residential buildings. The studies that focus on residential buildings tend to cover a greater number of buildings [22,23,24,25,26]. Electricity consumption in an average home in the United States alone ranges from 800 kWh to 1000 kWh per month [2]. This number varies with factors such as the geographical area: according to the EIA (U.S. Energy Information Administration), climate causes the highest peaks of household electricity consumption to occur in both summer and winter, due to heating and air conditioning. Additionally, after the start of the COVID-19 pandemic, this consumption increased because of remote work and because users no longer needed to leave their homes to eat in restaurants. We can therefore note that the climate and the seasons of the year are important because they indicate increases or decreases in energy consumption; for this reason, these variables were considered in this study. In the second category, studies based on non-residential buildings usually address offices, hospitals, or universities [27,28,29,30,31]. Both categories of study help us understand how energy consumption behaves.
We have built our research on the significant foundation provided by both types of studies. For example, residential building studies have supported the idea that energy consumption is affected by the number of occupants and weather, both of which considerably impact energy consumption prediction. In addition, some researchers have conducted tests and verification of their models by employing synthetic data created through the utilization of Ecotect simulation software [23,24]. In their study, Dagdougui et al. [32] introduced an innovative approach to energy load forecasting. This approach involved the analysis of five buildings belonging to three distinct types, namely residential, commercial, and educational/office buildings. Numerous scholarly investigations have focused on different types of buildings; nevertheless, there is a lack of research that comprehensively explores a complete urban area. The objective of our research is to examine the efficacy of forecasting energy consumption requirements in smart urban areas through the analysis of a diverse set of buildings situated across an educational campus.
Urban facilities encompass the different parts of a city, such as office buildings, residential buildings, restaurants, shops, and schools, but it is office buildings that make up most urban areas. In this study, we approach a smart city as a space that houses these kinds of components, each of which exhibits many dynamic patterns. It is widely recognized that a primary aim of a smart city system is to enhance the efficiency of energy use and waste management. Our objective is therefore to analyze the environment of a smart city and understand its different patterns of energy consumption. This is complicated, however, because many external variables can affect the buildings' behavior. This leads to the question: how can the analysis of the individual energy consumption of each building help in predicting the energy consumption of the entire city?
In addition, the analysis of historical data is important, since different patterns can be observed within energy consumption. There are also external factors, such as weather conditions, the seasons of the year, and the working hours of the city's inhabitants. Although it is very difficult to predict all of these dynamic patterns, such analysis can support better energy management for the city as well as better public policy.
Finally, it is important to provide an analysis of the evolution of energy consumption, which can help predict it over an estimated horizon. For this reason, this study analyzes different algorithms and approaches and develops an encoder-decoder framework that integrates the different data sources and compares the resulting models. Forecasting energy while including external variables, and selecting important variables such as individual building consumption, are therefore important for generating meaningful information that helps electricity producers and the government supply the correct amount of energy.
This study proposes an encoder-decoder framework for 48-hour-ahead energy prediction that applies different types of neural networks. The framework is based on understanding the individual energy consumption patterns of each building, creating a data structure that captures them, and combining them; in this way, it is possible to obtain the complete energy consumption of the entire city. Predicting the energy consumption of a whole city can, in turn, support better policies for supplying the right amount of energy without waste. Furthermore, this study proposes an encoder-decoder model for energy consumption that incorporates external factors, such as meteorological observations. The encoding component of the model utilizes a multi-LSTM network constructed from each variable employed in this investigation. From a city perspective, this research tries to understand the individual dynamics of each building, helping the entire city reduce its energy consumption sustainably. By analyzing each building's energy consumption and identifying the factors that affect it, effective energy-saving strategies can be developed for the entire city. To validate our model, we applied it on our campus and compared the predicted energy consumption with the actual energy consumption. This comparison helps us determine the accuracy of the model and identify areas for improvement. Additionally, we used statistical techniques to evaluate the performance of the model and ensure that it meets the required standards for energy consumption prediction.
Following this introduction, the document is structured as follows: Section 2 introduces our framework, including a comprehensive discussion of the machine learning approaches compared in this study. Section 3 provides a comprehensive account of the data collection, preparation, and analytic procedures employed. In Section 4, the principal findings of our proposed model are presented and compared with those of other regression models. Section 5 presents a further discussion of the findings. Lastly, Section 6 presents the conclusions and outlines potential avenues for further research.

2. Methodology

Sustainable and resilient smart cities must efficiently manage electricity usage in the age of urbanization and rising energy demand [33,34]. Power scheduling, which optimizes electrical device power consumption, is key to this effort. Power scheduling reduces energy demand fluctuations, promotes cost-effective electricity use, and improves renewable energy integration. Advanced smart grid technologies enable time-of-use planning, load balancing, and demand response in this method. Power scheduling is crucial to synchronizing energy supply and demand as the globe strives to decrease carbon footprints and strengthen energy infrastructures [35,36]. Here we address one of the key aspects of power scheduling: the prediction of power consumption. Therefore, this section outlines the suggested methodology for predicting energy usage on a smart campus. The framework for predicting energy consumption has four primary modules: the first module starts with data collection; the second module is data preprocessing; the third module is training and validation; and the last module is the test module, as shown in Figure 1. The Data Collection Module shown in Figure 1 showcases three separate subgraphs that serve as examples of data collected over the course of a day. The first subgraph illustrates the energy consumption pattern of a specific building, while the second depicts the accompanying fluctuations in temperature throughout the day. The third subgraph provides a complete representation of the total energy consumption across all buildings during the day.
The first module, data collection, uses input from four sources. The first two datasets come from Incheon National University (INU), a national university in Incheon, Republic of Korea. INU campuses are located in distinct areas around Incheon and Seoul, but in our study we focus on the Songdo Global Campus, founded in 2009. As a newly established institution, the INU Songdo campus is still partly under construction, and several building facilities have only recently been completed. Accordingly, INU started to record information in November 2019. The INU Songdo campus is located at coordinates 37.3751° N, 126.6328° E and possesses a plottage of 456,806 m2, a total major-building area of 216,732 m2, and an underground parking lot of 35,801 m2. INU operates an energy consumption collection system that offers hourly data on energy consumption for all academic buildings, as well as comprehensive overall energy consumption for the whole campus, as seen in Figure 2. The system collects energy consumption at hourly intervals from most buildings. In Figure 2a, there are four buildings from which energy consumption is not collected: the “International Exchange Center”, “College of Urban Science”, “College of Business/School of Northeast Asian Studies”, and “College of Social Sciences/College of Global Law, Politics and Economics”.
The weather data was obtained from the Korea Meteorological Administration (KMA), getting specific data for the Songdo area that includes atmospheric pressure, temperature, dew point temperature, precipitation, wind speed, and sky condition. The temporal data was obtained from the moment in which the data was recorded in such a way as to continue to maintain the time series of the model.
Lastly, combining the data sources gives us one total energy consumption variable, nine weather variables, four temporal variables, and the energy consumption of 17 academic buildings, including two dormitories, one sports center, and one gymnasium, from 30 November 2019 to 19 January 2021. The data collection spans around 14 months, resulting in a dataset of 32 columns and 9976 entries.
The second module, the data preprocessing module, covers the explorative analysis, manipulation, and transformation of data to enhance the performance of the tested algorithms, explained in more detail in Section 3. To assess performance and optimize hyperparameters, the input data are split into three segments: a training set (70%), a validation set (20%), and a test set (10%). First, the data coming from the data preprocessing module to be used in the deep neural networks must have numerical values in a bounded range [37]. Therefore, nominal categorical variables are converted into multiple binary variables, and ordinal categorical variables use numerical label encoding. Afterward, all variables are rescaled into a mean normalization range of [−1, 1], as shown in Equation (1) [38].
x_i = \frac{x_i - \mathrm{mean}(x_i)}{x_{\max} - x_{\min}}, \quad (1)
A temporal segmentation is performed, since the first layer of the proposed Encoder-Decoder Recurrent Neural Network is an LSTM network that takes the input data as a time-series vector over the previous 24 h, \{x^v_{t-24}, x^v_{t-23}, \ldots, x^v_{t-1}, x^v_t\}, where v indexes the variables and t is the current time of the prediction. An important design choice for an LSTM network is therefore how much data should be used as delayed inputs. In our experiments, we tried different window lengths, and a 24-h lookback proved the most effective.
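As a concrete illustration of this preprocessing step, the sketch below applies mean normalization toward the [−1, 1] range and builds 24-hour input windows with 48-hour-ahead targets. It is a minimal sketch assuming a pandas DataFrame with one row per hour and a hypothetical total_energy column; the column name and constants are illustrative, not taken from the authors' code.

```python
import numpy as np
import pandas as pd

LOOKBACK = 24   # hours of history fed to the encoder (as described in the paper)
HORIZON = 48    # hours ahead to predict

def mean_normalize(df: pd.DataFrame) -> pd.DataFrame:
    """Rescale every numeric column via mean normalization, as in Equation (1)."""
    return (df - df.mean()) / (df.max() - df.min())

def make_windows(df: pd.DataFrame, target_col: str = "total_energy"):
    """Build (samples, LOOKBACK, n_features) inputs and 48-hour-ahead targets."""
    values = df.to_numpy()
    target = df[target_col].to_numpy()
    X, y = [], []
    for t in range(LOOKBACK, len(df) - HORIZON):
        X.append(values[t - LOOKBACK:t])   # previous 24 h of all variables
        y.append(target[t + HORIZON])      # energy consumption 48 h ahead
    return np.asarray(X), np.asarray(y)

# Example usage on a hypothetical hourly DataFrame `df`:
# df_norm = mean_normalize(df.select_dtypes("number"))
# X, y = make_windows(df_norm)
```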
The third module focuses on choosing optimal hyperparameters for the proposed network. First, the training and validation modules apply a grid search across the training and validation sets. To further validate the performance of the proposed Encoder-Decoder Recurrent Neural Network regression model, comparisons are made with other regression models. This study considers single regression methods such as Auto Regression (AR), auto ARIMA, SARIMA, auto SARIMA, PROPHET from Facebook, Linear Regression, and Decision Tree, as well as ensemble regression methods such as Extra Trees, Random Forest, Gradient Boosting, CatBoost, and LightGBM; a baseline example is sketched below.
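As one example of the single-regression baselines listed above, the following sketch fits Facebook's PROPHET to an hourly total-consumption series and forecasts 48 hours ahead. The series here is synthetic and the settings are placeholders; the hyperparameter candidates actually evaluated for the baselines are summarized in Table 1.

```python
import numpy as np
import pandas as pd
from prophet import Prophet  # pip install prophet

# Synthetic hourly series standing in for total campus consumption (kWh).
hours = pd.date_range("2019-11-30 10:00", periods=500, freq="H")
series = pd.DataFrame({
    "ds": hours,
    "y": 400 + 100 * np.sin(np.arange(500) * 2 * np.pi / 24),  # daily cycle
})

model = Prophet(daily_seasonality=True, weekly_seasonality=True)
model.fit(series)

# Forecast the next 48 hours and keep only the future rows.
future = model.make_future_dataframe(periods=48, freq="H")
forecast = model.predict(future).tail(48)[["ds", "yhat"]]
print(forecast.head())
```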
In the fourth module, there is a detailed comparison of the performances of selected models in the test set. Furthermore, the outcomes of each model represent the mean prediction for energy consumption 48 h in advance, based on the test data segment. Notably, the Encoder-Decoder Recurrent Neural Network regression model demonstrates the highest level of performance. Table 1 presents the evaluated hyper-parameters for the prediction algorithms.

An Encoder-Decoder Recurrent Neural Network for Energy Consumption Prediction

The proposed Encoder-Decoder Recurrent Neural Network consists of five layers: an input layer, a hidden layer, an encoder layer, a decoder layer, and an output layer, as shown in Figure 3. In the hidden layer, to build the proposed Encoder-Decoder Recurrent Neural Network [39], the most popular deep learning building blocks are combined: artificial neural networks (ANNs), i.e., dense networks when several hidden layers are connected, and recurrent neural networks (RNNs) [40].
z = w x + b, \quad (2)
z^l = a^{l-1} w^l + b^l, \quad (3)
d = a^l = \Phi(z^l), \quad (4)
\tanh(z) = \frac{1 - e^{-2z}}{1 + e^{-2z}}, \quad (5)
\mathrm{ReLU}(z) = \begin{cases} z & z > 0 \\ 0 & z \le 0 \end{cases}. \quad (6)
A single neuron, expressed as a linear regression, is given in Equation (2). A dense network consisting of multiple neurons is given in Equations (3) and (4). Here, a = (a_1, \ldots, a_m) is the input feature vector, where m is the number of input variables, w = (w_1, \ldots, w_m) is the weight matrix, b denotes the bias vector, and z = (z_1, \ldots, z_m) is the hidden vector computed from them. l indexes the hidden layers, with l \in \{1, 2, \ldots, L\}, while a^l is the output of the nonlinear activation function \Phi(z^l). Here, the initial input is all of the available x data, which will help us predict the total energy. Two types of activation function, denoted by \Phi(z), are worth mentioning: the rescaled logistic sigmoid function tanh, defined in Equation (5), and the piecewise-linear rectified linear activation function ReLU, defined in Equation (6).
To capture the temporal dependence variability of the input data, we use RNNs, where the long short-term memory (LSTM) networks are well-known to provide better performance than the vanilla RNN [41,42]. Basically, the proposed model uses RNN to infer an encoding of the input sequence of each variable by successively updating its hidden state. Thus, the LSTM architecture adopted in this work was proposed by Gers, Schraudolph, and Schmidhuber [41] and is defined in Equations (7)–(11).
i_t = \Phi(W_{id} d_t + W_{ia} a_{t-1} + W_{ic} c_{t-1} + b_i), \quad (7)
f_t = \Phi(W_{fd} d_t + W_{fa} a_{t-1} + W_{fc} c_{t-1} + b_f), \quad (8)
c_t = f_t c_{t-1} + i_t \Phi(W_{cd} d_t + W_{ca} a_{t-1} + b_c), \quad (9)
o_t = \Phi(W_{od} d_t + W_{oa} a_{t-1} + W_{oc} c_t + b_o), \quad (10)
a_t = o_t \Phi(c_t), \quad (11)
The LSTM network includes an input gate i_t, a memory cell c_t, a forget gate f_t, and an output gate o_t at time t. The input gate accepts the output from the dense layer. The weight matrices W represent the connections between the network components: W_{*d} are the weights applied to the input d_t, W_{*a} are the weights associated with the LSTM network's hidden state, and W_{*c} are the weights associated with the memory cell activations. The output gate produces the input-encoding vector once the last LSTM in the network has been processed. This input-encoding vector encapsulates the outputs obtained from the LSTM network and serves as the input to the decoder section of the model. In the decoding section, we use two neural networks to improve the accuracy of the energy regression. The first is a Deep Feed Forward (DFF) network, which stacks dense layers whose width is progressively reduced until a single hidden neuron yields the total-energy estimate. The second is an auto-encoder, which compresses the energy-related features of the encoding vector and, by looking for common patterns in the consumed energy, produces a single representative neuron. Finally, both results are combined in a last neuron, determined in our experiments, to obtain the best prediction of energy consumption.
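To make the architecture described above concrete, the Keras sketch below stacks dense layers, an LSTM encoder, and a two-branch decoder (a feed-forward branch and an autoencoder-style compression branch) whose outputs are merged into a single output neuron. Layer sizes, activations, and the optimizer are illustrative placeholders under my own assumptions, not the tuned values reported by the authors.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

N_FEATURES = 31   # hypothetical number of input variables per hour
LOOKBACK = 24     # 24-hour input window

inputs = layers.Input(shape=(LOOKBACK, N_FEATURES))

# Hidden dense layers applied at every time step before the recurrent encoder.
x = layers.TimeDistributed(layers.Dense(64, activation="relu"))(inputs)
x = layers.TimeDistributed(layers.Dense(32, activation="tanh"))(x)

# LSTM encoder: the final hidden state acts as the input-encoding vector.
encoding = layers.LSTM(32)(x)

# Decoder branch 1: deep feed-forward network reduced toward a single estimate.
dff = layers.Dense(16, activation="relu")(encoding)
dff = layers.Dense(8, activation="relu")(dff)

# Decoder branch 2: autoencoder-style compression of the encoding vector.
bottleneck = layers.Dense(4, activation="relu")(encoding)
expanded = layers.Dense(8, activation="relu")(bottleneck)

# Merge both branches and produce the 48-hour-ahead energy estimate.
merged = layers.Concatenate()([dff, expanded])
output = layers.Dense(1, activation="linear")(merged)

model = Model(inputs, output)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.summary()
```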

3. Data Preparation and Analysis

This section provides an overview of the data pretreatment and analysis techniques applied to the energy dataset obtained from Incheon National University in Songdo, situated in the Incheon Metropolitan Region of the Republic of Korea. The dataset spans from 30 November 2019 at 10:00 a.m. to 17 January 2021, resulting in a total of 9975 records. Furthermore, all variables and models employ an input data frame with a temporal resolution of one hour. Section 3.1 describes the dependent variable, which is energy power consumption, as well as the time variables. The variables pertaining to the energy consumption of each building and the climatic information are discussed in Sections 3.2 and 3.3, respectively.

3.1. Energy and Time Variables

The campus in Songdo collects hourly energy consumption data from energy meters installed throughout the campus. This building-level metering is more granular, and therefore more specific, than capturing only the total energy consumption of an entire smart city from a macro perspective. Table 2 shows the dependent variable of energy consumption per hour as well as the time variables with their respective preprocessing. The time variable is expressed in years, days, and hours. This representation is not suitable for our model; for instance, the daily cycle begins at 00:00 and ends at 23:00, and these two hours should be treated as adjacent. To perform this feature extraction, we convert these variables to their sine and cosine forms, using a function time_seconds that returns the timestamp of each measurement in seconds.
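A minimal sketch of this cyclical encoding is shown below, assuming an hourly pandas DatetimeIndex; the helper mirrors the time_seconds function mentioned above, but its exact implementation and the new column names are assumptions for illustration.

```python
import numpy as np
import pandas as pd

def add_cyclical_time_features(df: pd.DataFrame) -> pd.DataFrame:
    """Encode time of day and time of year as sine/cosine pairs so 23:00 sits next to 00:00."""
    ts = df.index                                                  # assumed hourly DatetimeIndex
    day = 24 * 3600
    year = 365.2425 * day
    seconds_of_day = ts.hour * 3600 + ts.minute * 60 + ts.second   # "time_seconds" within a day
    epoch_seconds = ts.astype("int64") // 10**9                    # seconds since the epoch
    out = df.copy()
    out["day_sin"] = np.sin(seconds_of_day * (2 * np.pi / day))
    out["day_cos"] = np.cos(seconds_of_day * (2 * np.pi / day))
    out["year_sin"] = np.sin(epoch_seconds * (2 * np.pi / year))
    out["year_cos"] = np.cos(epoch_seconds * (2 * np.pi / year))
    return out
```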
Figure 4 presents the time series and its decomposition into trend and residual plots for the energy consumption at INU. The trend component shows the low-frequency variations, giving a clearer view of the behavior of energy use, and reveals an increase in energy consumption in December 2019. Additionally, starting in 2020, when the COVID-19 pandemic began, there was a general decrease in energy consumption. A decrease in energy can also be observed at the end of 2019, when the academic cycle ends. During 2020, the energy consumption pattern increases from the end of January until mid-February, when heating is used heavily, and decreases in spring between the end of February and May. Furthermore, a slight increase in energy consumption is evident between June and September, corresponding to the summer season and the use of air conditioning. Conversely, energy consumption declines during the fall season. Hence, the period of greatest energy consumption is the winter season, owing to the heightened use of heating systems.
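The trend and residual view discussed here can be reproduced with a standard additive decomposition, for example with statsmodels. The sketch below uses a synthetic stand-in series; it is a generic illustration, not the authors' plotting code.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.seasonal import seasonal_decompose

# Synthetic stand-in for hourly total campus consumption (kWh); replace with the real series.
idx = pd.date_range("2019-11-30 10:00", periods=24 * 60, freq="H")
energy = pd.Series(
    400 + 100 * np.sin(np.arange(len(idx)) * 2 * np.pi / 24) + np.random.normal(0, 20, len(idx)),
    index=idx,
)

# Additive decomposition with a daily (24-hour) seasonal period.
result = seasonal_decompose(energy, model="additive", period=24)
result.plot()   # observed, trend, seasonal, and residual panels
plt.show()
```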
Figure 5 shows a scatter plot illustrating the regression relationship between current energy consumption and the 48-hour-ahead target. The densities of both variables are displayed at the top and right sides of the graph, respectively. The maximum density is recorded within the range of around 250–600 kWh. Additionally, the red line illustrates a significant association between current energy use and energy use 48 h ahead. The correlation coefficient (ρ) between the variables is 46%, with an associated p-value of 0.00, as seen in the upper right corner. Figure 6 shows the hourly aggregate energy use. As anticipated, there is a noticeable increase in energy use between 9 a.m. and 7 p.m., which corresponds to typical working hours. However, there are many outlier values between 7 p.m. and 9 a.m. Notably, there is a distinct solitary occurrence at 11 p.m., which may be attributed to an experiment conducted in building number four of the computer and information department.

3.2. Building Energy Consumption Variables

Data on energy use are gathered at hourly intervals from sensors installed in the designated buildings. This section presents an exploratory analysis of these data. Table 3 presents the building energy consumption variables with their descriptions, making a total of seventeen continuous variables. Figure A1 shows the full time series plot for each building, where the energy consumption behavior of each variable can be observed. As mentioned above, the outlier in the 04. Information_Computing (kWh) variable is visible. Moreover, the 07. Information_Technology (kWh), 08. College_Engineering (kWh), and 10. GuestHouse (kWh) variables present several outliers between September and December 2020. A decrease in energy consumption can be seen for the Haksan library building from mid-February to June 2020, when it remained closed as part of the measures to reduce COVID-19 infections. Additionally, roughly constant energy consumption is observed at the university headquarters, the faculty office, the central laboratory department, the College of Arts and Physical Education, and the student center. Furthermore, a decrease in energy consumption can be seen in the College of Natural Science and the student dormitory because more classes began to be held online.

3.3. Weather Variables

This study uses weather observation data for forecasting, as we focus on the possibility of exploiting information contained in weather observations that weather forecasts may not capture. Eight weather-related variables are collected for the city of Songdo, as shown in Table 4. The choice of weather observations over weather forecasts is based on several differences between them; for example, weather agencies such as the KMA report weather observations at one-hour intervals, whereas weather forecasts are issued at three-hour intervals. Figure A2 and Figure A3 present the relationship between the eight weather variables and energy consumption. Figure A2 has two sections: the upper section shows the correlation between each pair of variables, where each red dot represents a data point and the black lines show the trend; the lower section shows the density of points, with reddish colors indicating the highest density and lighter colors indicating low density. Although all variables exhibit similar patterns, Pressure has the highest correlation, with half of the variables showing a positive trend and the other half a negative trend. Feature extraction was performed for the Wind_Speed and Wind_Direction variables, which were transformed into their cosine and sine components, Wx and Wy, respectively, as shown in Table 4. Figure 7 shows all normalized variables, where the 04. Information_Computing (kWh), 07. Information_Technology (kWh), 08. College_Engineering (kWh), and 10. GuestHouse (kWh) variables present several outliers, as observed previously.
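The wind transformation mentioned above can be implemented as follows. This is a short sketch that assumes wind speed in m/s and direction in degrees; the Wind_Speed and Wind_Direction column names follow Table 4, while Wx and Wy are the derived vector components.

```python
import numpy as np
import pandas as pd

def add_wind_vector(df: pd.DataFrame) -> pd.DataFrame:
    """Convert wind speed and direction into x/y vector components (Wx, Wy)."""
    radians = np.deg2rad(df["Wind_Direction"])     # direction in degrees -> radians
    out = df.copy()
    out["Wx"] = out["Wind_Speed"] * np.cos(radians)  # cosine component
    out["Wy"] = out["Wind_Speed"] * np.sin(radians)  # sine component
    return out.drop(columns=["Wind_Speed", "Wind_Direction"])

# Example usage on a hypothetical weather DataFrame `weather`:
# weather = add_wind_vector(weather)
```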

4. Results

In this section, we compare the proposed model's prediction performance against the results of the evaluated forecasting methods. First, an outline is provided of the performance metrics employed for the comparison and evaluation of the resulting models. Next, we present the hyperparameters assessed for each method, highlighting the best result obtained for each. The performance of each model is then assessed on the test set. Finally, a comprehensive comparison is conducted between our proposed forecast model and the other models tested.
In order to evaluate the efficacy of the experiment, we employed three metrics denoted as Equations (12)–(14). These metrics encompass the root mean square error (RMSE), mean absolute error (MAE), and R2. The RMSE is a performance indicator that imposes penalties on significant and large errors, thereby measuring the extent to which predictions deviate from measured energy consumption. The MAE quantifies the average discrepancy between the predicted and observed values of energy consumption. The R2 statistic quantifies the degree of relationship between predicted and actual energy usage by calculating the squared correlation coefficient. The performance metrics are delineated in the following manner:
\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (y_i - \hat{y}_i)^2}, \quad (12)
\mathrm{MAE} = \frac{1}{N} \sum_{i=1}^{N} \left| y_i - \hat{y}_i \right|, \quad (13)
R^2 = \frac{\sum_{i=1}^{N} (\hat{y}_i - \bar{y})^2}{\sum_{i=1}^{N} (y_i - \bar{y})^2}, \quad (14)
where y_i is the measured energy consumption, \hat{y}_i is the predicted energy consumption, \bar{y} is the mean of the measured energy consumption, and N is the number of samples.
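The three metrics can be computed directly from Equations (12)–(14). The NumPy sketch below follows those equations; note that the R2 form written here (explained variation over total variation) can differ in general from scikit-learn's r2_score, which uses 1 minus the residual-to-total ratio.

```python
import numpy as np

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Root mean square error, Equation (12)."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean absolute error, Equation (13)."""
    return float(np.mean(np.abs(y_true - y_pred)))

def r2(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """R-squared as written in Equation (14): explained variation over total variation."""
    y_bar = np.mean(y_true)
    return float(np.sum((y_pred - y_bar) ** 2) / np.sum((y_true - y_bar) ** 2))

# Example usage with measured values y_test and predictions y_hat:
# scores = {"RMSE": rmse(y_test, y_hat), "MAE": mae(y_test, y_hat), "R2": r2(y_test, y_hat)}
```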
The data collection period spans more than one year, from 30 November 2019 to 17 January 2021, resulting in a total data-gathering time of 14 months and a dataset size of 2 MB. The objective is to generate forecasts of energy usage with a lead time of 48 h. The training set was used to train the model for each method and covers the period from 30 November 2019 at 10:00 to 14 September 2020 at 22:00. The validation set is used to fine-tune the hyperparameters and covers the period between 14 September 2020 at 23:00 and 6 December 2020 at 15:00. The test set is used to evaluate and compare the performance of the different models and covers the period from 6 December 2020 at 16:00 to 17 January 2021 at 00:00, as shown in Figure 8. The top plot of Figure 8 displays the complete time series from 30 November 2019 to 17 January 2021. The purple highlighted section spans from January 2020 to the end of February 2020 and is shown in detail in the bottom plot of Figure 8.
The three-fold cross-validation method was employed as a resampling procedure during the training phase. Additionally, a grid search procedure was used to determine the optimal values for the hyperparameters of each model. In order to evaluate the efficacy of our encoder-decoder regression model, a series of experiments were undertaken. The experiments were conducted using an Intel Core i9-9900KF (Intel, Santa Clara, CA, USA) central processing unit (CPU) paired with 16 gigabytes of DDR4 random access memory (RAM). The tests were conducted using the scikit-learn library in Python 3.7, which provides implementations of several machine-learning methods. The hyper-parameter tuning procedure was standardized across all data sources.
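As an illustration of this tuning setup, the sketch below runs a three-fold grid search over one of the ensemble baselines with scikit-learn. The parameter grid shown is a placeholder (scikit-learn's name for the tree count is n_estimators); the candidate values actually evaluated are listed in Table 5, and the training arrays are assumed to hold tabular features with 48-hour-ahead targets.

```python
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import GridSearchCV

# Placeholder grid; see Table 5 for the candidates evaluated in the paper.
param_grid = {
    "max_depth": [7, 8, 9],
    "n_estimators": [60, 80, 100],
}

search = GridSearchCV(
    estimator=ExtraTreesRegressor(random_state=0),
    param_grid=param_grid,
    cv=3,                                    # three-fold cross-validation
    scoring="neg_root_mean_squared_error",   # select the configuration with the lowest RMSE
    n_jobs=-1,
)

# X_train, y_train: training features and 48-hour-ahead energy targets (assumed).
# search.fit(X_train, y_train)
# print(search.best_params_, -search.best_score_)
```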
The hyper-parameter candidates evaluated in this study are shown in Table 5. The analysis encompassed seven single regression methods, two bagging ensemble methods, three boosting ensemble algorithms, and one proposed deep learning algorithm. It should be noted that the time-series and linear regression algorithms have no hyperparameters to evaluate, as they estimate their parameters directly from the time series. The results indicate that the hyper-parameters significantly affect model performance. For the Decision Tree model, increasing max_depth improved performance, but only up to a certain level; beyond a certain depth, the model started to overfit. Among the single regression models evaluated on the validation set, the Decision Tree model achieved the lowest RMSE and the highest R2 value when the maximum depth of the tree was set to 8. For the Random Forest, Extra Trees, Gradient Boosting, and LightGBM models, increasing both max_depth and num_estimators improved performance; however, the best performance was achieved with a moderate number of trees and a moderate depth. For the CatBoost model, the depth and the number of iterations had a significant impact on performance, with a deeper tree and more iterations improving accuracy; moreover, the learning rate had a considerable effect, with a small learning rate leading to better results. Among the ensemble models, the Extra Trees model obtained the best RMSE, MAE, and R2 with max_depth = 9 and num_estimators = 80. For the Encoder-Decoder Recurrent Neural Network, the number of neurons, the activation functions, and the optimizer significantly affected performance; a small number of neurons resulted in better performance, and the choice of activation functions and optimizer also affected accuracy.
Having obtained the optimal models through the three-fold cross-validation process, we proceeded to assess the efficacy of each model. To do so, we conducted evaluations on both the validation and test datasets, employing several performance measures, namely RMSE, MAE, and R2. Table 6 presents the results of the evaluated prediction models for the INU energy consumption validation dataset.
Based on the outcomes derived from the validation data, it is evident that among the single regression models, SARIMA exhibits the highest RMSE of 188.85. This finding suggests that SARIMA has relatively inferior prediction accuracy in comparison to the other models. The best result among the single regression models was obtained by PROPHET, with the lowest RMSE of 91.70. Among the ensemble models examined, Extra Trees and Gradient Boosting show the lowest RMSE values, specifically 85.60 and 91.68, respectively. However, the Encoder-Decoder Recurrent Neural Network proposed in this study achieves the lowest RMSE value overall, 83.66. This outcome suggests that the Encoder-Decoder Recurrent Neural Network possesses greater accuracy in predicting energy consumption. In terms of MAE on the validation set, among the single regression models the Decision Tree exhibits the lowest MAE of 67.36, indicating superior performance in capturing the absolute disparities between predicted and observed values. Within the ensemble models, Extra Trees and Gradient Boosting provide the lowest MAE values, specifically 58.24 and 64.35, respectively. This suggests that both models excel at reducing absolute prediction errors; it should also be mentioned that Extra Trees obtained the minimum MAE among all models evaluated on the validation set. Among the single regression models, PROPHET demonstrates the greatest validation R2 of 69.67%, suggesting a robust association between predicted and observed values. Among the ensemble models and the proposed model, Extra Trees and the Encoder-Decoder Recurrent Neural Network exhibit notable R2 values of 70.05% and 71.08%, respectively. These results indicate their strong predictive skill and suitability for the given dataset.
Figure 9 presents the RMSE and MAE results on the test data. Within the single regression model category, the PROPHET model outperformed all other models in terms of both RMSE and MAE. The next best-performing model was the Decision Tree, with relatively low RMSE and MAE values. On the other hand, the SARIMA model showed the highest RMSE and MAE values among all the models evaluated, indicating that it did not perform well on the test set. The AR, Auto ARIMA, and Auto SARIMA models also showed relatively high RMSE and MAE values, suggesting that their performance was not satisfactory compared to the other models. Among the ensemble models, LightGBM and CatBoost showed slightly higher RMSE and MAE values than Random Forest and Gradient Boosting but still performed better than most of the single regression models. As standalone machine learning models, Linear Regression and Decision Tree had relatively higher RMSE and MAE values, indicating that their performance was not as good as some of the other models. Finally, the Encoder-Decoder Recurrent Neural Network achieved the best RMSE and MAE scores among all models on the test set.
Figure 10 presents the R2 results on the test data set. The AR, Auto ARIMA, SARIMA, and Auto SARIMA models did not perform well on the test set, as they have negative R2 values; for this reason, they are not presented in Figure 10. Negative values indicate that these models did not fit the data well and performed worse than a horizontal line. The PROPHET model performed relatively well, with an R2 value of 69.67%, indicating that it was able to explain a significant portion of the variance in the data. The Linear Regression model had an R2 value of 17.81%, indicating that it was not able to explain much of the variance in the dataset. The tree-based models, including Decision Tree, Extra Trees, Random Forest, Gradient Boosting, and CatBoost, performed better than the other models, with R2 values ranging from 60.77% to 75.74%; these models were able to capture the nonlinear relationships in the data, which improved their performance. Moreover, the proposed Encoder-Decoder Recurrent Neural Network performed the best of all models, with an R2 value of 75.93%, indicating that it was able to capture the complex temporal dependencies in the data and predict values accurately.

5. Discussion

The analysis of the energy consumption dataset from Incheon National University reveals that both ensemble models and the Encoder-Decoder Recurrent Neural Network produce promising results for precise predictions, with the Deep Learning model exhibiting the highest predictive accuracy. The evaluated hyperparameters have a considerable effect on model performance, highlighting the need for meticulous tuning to achieve optimal results. Notably, the optimal combination of hyperparameters varies between models, necessitating a systematic search and experimental approach for their determination.
In terms of ensemble models, the performance metrics evaluated on both the validation and test sets demonstrate favorable performance, with RMSE and MAE scores lying within reasonable ranges. The Encoder-Decoder Recurrent Neural Network and Extra Trees models have the lowest RMSE values, indicating superior performance in predicting the target variable on the validation set. In addition, the Random Forest, Gradient Boosting, and Encoder-Decoder Recurrent Neural Network models have the highest R2 scores, indicating their suitability for predicting the target variable from unobserved data in the test set.
While Extra Trees performs exceptionally well on the validation set in terms of MAE, its performance decreases on the test set, indicating possible overfitting during training. Conversely, even though LightGBM has the highest RMSE and MAE scores among the ensemble models on the validation set, its performance improves on the test set. In conclusion, Random Forest, Gradient Boosting, and the Encoder-Decoder Recurrent Neural Network emerge as the top-performing models over the test set, with comparatively high R2 scores there and low RMSE scores over the validation set, indicative of their robust predictive capabilities.
In terms of training time, Linear Regression was the fastest in our experiments. This result is attributed to the model being trained exclusively on the energy data without evaluating other hyperparameters; however, it is important to note that this approach also introduces a limitation. In contrast, the Encoder-Decoder Recurrent Neural Network, while robust in performance, required a training time of approximately 4 to 8 min per model. Despite this, once trained, both algorithms made predictions quickly.
Therefore, the adoption of the Encoder-Decoder Recurrent Neural Network is motivated by its ability to model sequential data effectively and capture complex temporal dependencies in energy consumption patterns. The recurrent architecture enables the model to remember previous observations, allowing it to make more accurate predictions. The comparison with other current models demonstrates the accuracy and predictive performance superiority of the Encoder-Decoder Recurrent Neural Network, which justifies its selection as the preferred model for energy consumption prediction in this study.

6. Conclusions

The study presents a process for predicting energy consumption in a smart city by analyzing individual buildings using a proposed Encoder-Decoder Recurrent Neural Network. The aim is to improve energy management in smart cities, which should ideally be sustainable and environmentally friendly. The proposed framework includes three modules: gathering data to generate energy statistics for each building; conducting a data analysis of energy behavior inside micro-cities to extract characteristics; and building baseline regressors to evaluate the proposed model’s effectiveness.
Moreover, the study proposes a framework for energy consumption prediction on a smart campus consisting of four modules: data collection, data preprocessing, training and validation, and testing. Data were collected from Incheon National University, including energy consumption, weather data, and temporal data. The data were then preprocessed, including the conversion of categorical variables and normalization. To prepare the data, we partitioned the energy consumption log into three distinct subsets: a training set, a validation set, and a test set. With the training set, we trained each algorithm and then evaluated it on the validation set to optimize the hyper-parameters of 13 regression methods. Optimal hyperparameters for the proposed regression models were chosen using grid search for predicting energy consumption 48 h ahead. The recommended model, a neural network with Encoder-Decoder Recurrent connections, achieves a prediction performance of 75.93%, as measured by the R2 value on unseen data in the test set. The findings suggest that this model is superior to the other models in terms of R2, MAE, and RMSE. Therefore, the proposed process has the potential to improve energy sustainability and efficiency in smart cities.
While our research has demonstrated that the proposed Encoder-Decoder Recurrent Neural Network has the potential to improve energy consumption predictions for smart cities, a number of limitations must be considered. First, the current availability of data for smart cities is limited, which restricts the scope of our analysis. Although our study utilizes a variety of data sources, including educational, residential, and recreational buildings, it predominantly reflects the perspective of a micro-city. Future research should seek to acquire more extensive and diverse datasets representing complete smart cities, enabling a broader and more comprehensive analysis.
In addition, the Encoder-Decoder Recurrent Neural Network, a specific form of deep learning model, has been the focus of our research. In future research, it would be beneficial to investigate alternative deep learning architectures, such as transformers or retentive networks, to evaluate their efficacy in predicting energy consumption in smart cities. Diversifying the models under consideration can result in a more nuanced comprehension of their relative strengths and weaknesses.
The temporal scope of our dataset, which spans from late 2019 to early 2021, is another limitation. Future research should aim to integrate longer temporal datasets in order to strengthen the reliability of our findings and account for potential temporal variations. This extension in the temporal domain would contribute to the development of more accurate and adaptable prediction models by facilitating a more comprehensive understanding of energy consumption patterns and trends.
Finally, while our study lays the groundwork for energy consumption prediction in smart cities, addressing these limitations and conducting future research will contribute to refining and expanding the applicability of deep learning predictive models, thereby helping to build more sustainable and efficient smart city infrastructures.

Author Contributions

Conceptualization, B.C.; Writing—original draft, B.C.; Supervision, K.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by an Incheon National University (International Cooperative) Research Grant in 2018 and by the Korea Institute of Energy Technology Evaluation and Planning (KETEP) with financial resources granted by the Ministry of Trade, Industry & Energy, Republic of Korea (No. 20212020900090).

Data Availability Statement

The dataset analyzed during the current study is not publicly available due to privacy and confidentiality concerns.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Figure A1. Energy consumption by building from years 2019 to 2021 in Incheon National University, Songdo, Incheon.
Figure A2. Data visualization and analysis of weather variables versus energy consumption.
Figure A3. Heatmap analysis of weather variables versus energy consumption.

References

  1. U.S. Energy Information Administration. Energy Consumption By Sector in U.S.; U.S. Energy Information Administration: Washington, DC, USA, 2021.
  2. U.S. Energy Information Administration. Electricity Consumption in U.S. Homes; U.S. Energy Information Administration: Washington, DC, USA, 2015.
  3. Energy Statistics Related Data; Korea Energy Agency: Ulsan, Republic of Korea, 2021.
  4. Obregon, J.; Han, Y.-R.; Ho, C.W.; Mouraliraman, D.; Lee, C.W.; Jung, J.-Y. Convolutional autoencoder-based SOH estimation of lithium-ion batteries using electrochemical impedance spectroscopy. J. Energy Storage 2023, 60, 106680. [Google Scholar] [CrossRef]
  5. Athanasopoulou, L.; Bikas, H.; Papacharalampopoulos, A.; Stavropoulos, P.; Chryssolouris, G. An industry 4.0 approach to electric vehicles. Int. J. Comput. Integr. Manuf. 2023, 36, 334–348. [Google Scholar] [CrossRef]
  6. Stavropoulos, P.; Giannoulis, C.; Papacharalampopoulos, A.; Foteinopoulos, P.; Chryssolouris, G. Life cycle analysis: Comparison between different methods and optimization challenges. Procedia CIRP 2016, 41, 626–631. [Google Scholar] [CrossRef]
  7. Carrera, B.; Peyrard, S.; Kim, K. Meta-regression framework for energy consumption prediction in a smart city: A case study of Songdo in South Korea. Sustain. Cities Soc. 2021, 72, 103025. [Google Scholar] [CrossRef]
  8. Liu, Y.; Li, J. Annual Electricity and Energy Consumption Forecasting for the UK Based on Back Propagation Neural Network, Multiple Linear Regression, and Least Square Support Vector Machine. Processes 2022, 11, 44. [Google Scholar] [CrossRef]
  9. Carrera, B.; Kim, K. Comparison analysis of machine learning techniques for photovoltaic prediction using weather sensor data. Sensors 2020, 20, 3129. [Google Scholar] [CrossRef]
  10. Guo, J.; Han, M.; Zhan, G.; Liu, S. A Spatio-Temporal Deep Learning Network for the Short-Term Energy Consumption Prediction of Multiple Nodes in Manufacturing Systems. Processes 2022, 10, 476. [Google Scholar] [CrossRef]
  11. Al-Saudi, K.; Degeler, V.; Medema, M. Energy Consumption Patterns and Load Forecasting with Profiled CNN-LSTM Networks. Processes 2021, 9, 1870. [Google Scholar] [CrossRef]
  12. Qian, K.; Wang, X.; Yuan, Y. Research on regional short-term power load forecasting model and case analysis. Processes 2021, 9, 1617. [Google Scholar] [CrossRef]
  13. Andrić, I.; Koc, M.; Al-Ghamdi, S.G. A review of climate change implications for built environment: Impacts, mitigation measures and associated challenges in developed and developing countries. J. Clean. Prod. 2019, 211, 83–102. [Google Scholar] [CrossRef]
  14. Farah, S.; Whaley, D.; Saman, W.; Boland, J. Integrating climate change into meteorological weather data for building energy simulation. Energy Build. 2019, 183, 749–760. [Google Scholar] [CrossRef]
  15. Kim, M.K.; Choi, J.-H. Can increased outdoor CO2 concentrations impact on the ventilation and energy in buildings? A case study in Shanghai, China. Atmos. Environ. 2019, 210, 220–230. [Google Scholar] [CrossRef]
  16. Lupato, G.; Manzan, M. Italian TRYs: New weather data impact on building energy simulations. Energy Build. 2019, 185, 287–303. [Google Scholar] [CrossRef]
  17. Al-Hajj, R.; Assi, A.; Fouad, M.; Mabrouk, E. A hybrid LSTM-based genetic programming approach for short-term prediction of global solar radiation using weather data. Processes 2021, 9, 1187. [Google Scholar] [CrossRef]
  18. Hong, W.-H.; Kim, J.-Y.; Lee, C.-M.; Jeon, G.-Y. Energy consumption and the power saving potential of a University in Korea: Using a field survey. J. Asian Archit. Build. Eng. 2011, 10, 445–452. [Google Scholar] [CrossRef]
  19. Chung, M.H.; Rhee, E.K. Potential opportunities for energy conservation in existing buildings on university campus: A field survey in Korea. Energy Build. 2014, 78, 176–182. [Google Scholar] [CrossRef]
  20. Lee, S.; Jung, S.; Lee, J. Prediction model based on an artificial neural network for user-based building energy consumption in South Korea. Energies 2019, 12, 608. [Google Scholar] [CrossRef]
  21. Park, K.-H.; Kim, S.-M. Analysis of energy consumption of buildings in the university. Korean J. Air Cond. Refrig. Eng. 2011, 23, 633–638. [Google Scholar] [CrossRef]
  22. Kim, T.-Y.; Cho, S.-B. Predicting residential energy consumption using CNN-LSTM neural networks. Energy 2019, 182, 72–81. [Google Scholar] [CrossRef]
  23. Bui, D.-K.; Nguyen, T.N.; Ngo, T.D.; Nguyen-Xuan, H. An artificial neural network (ANN) expert system enhanced with the electromagnetism-based firefly algorithm (EFA) for predicting the energy consumption in buildings. Energy 2020, 190, 116370. [Google Scholar] [CrossRef]
  24. Tran, D.-H.; Luong, D.-L.; Chou, J.-S. Nature-inspired metaheuristic ensemble model for forecasting energy consumption in residential buildings. Energy 2020, 191, 116552. [Google Scholar] [CrossRef]
  25. Wen, L.; Zhou, K.; Yang, S. Load demand forecasting of residential buildings using a deep learning model. Electr. Power Syst. Res. 2020, 179, 106073. [Google Scholar] [CrossRef]
  26. Olu-Ajayi, R.; Alaka, H.; Sulaimon, I.; Sunmola, F.; Ajayi, S. Building energy consumption prediction for residential buildings using deep learning and other machine learning techniques. J. Build. Eng. 2022, 45, 103406. [Google Scholar] [CrossRef]
  27. Kim, M.K.; Kim, Y.-S.; Srebric, J. Predictions of Electricity Consumption in a Campus Building Using Occupant Rates and Weather Elements with Sensitivity Analysis: Artificial Neural Network vs. Linear Regression. Sustain. Cities Soc. 2020, 62, 102385. [Google Scholar] [CrossRef]
  28. Goudarzi, S.; Anisi, M.H.; Kama, N.; Doctor, F.; Soleymani, S.A.; Sangaiah, A.K. Predictive modelling of building energy consumption based on a hybrid nature-inspired optimization algorithm. Energy Build. 2019, 196, 83–93. [Google Scholar] [CrossRef]
  29. Kim, Y.; Son, H.-g.; Kim, S. Short term electricity load forecasting for institutional buildings. Energy Rep. 2019, 5, 1270–1280. [Google Scholar] [CrossRef]
  30. Ji, C.; Hong, T.; Kim, H.; Yeom, S. Effect of building energy efficiency certificate on reducing energy consumption of non-residential buildings in South Korea. Energy Build. 2022, 255, 111701. [Google Scholar] [CrossRef]
  31. Dong, Z.; Liu, J.; Liu, B.; Li, K.; Li, X. Hourly energy consumption prediction of an office building based on ensemble learning and energy consumption pattern classification. Energy Build. 2021, 241, 110929. [Google Scholar] [CrossRef]
  32. Dagdougui, H.; Bagheri, F.; Le, H.; Dessaint, L. Neural network model for short-term and very-short-term load forecasting in district buildings. Energy Build. 2019, 203, 109408. [Google Scholar] [CrossRef]
  33. De Jong, M.; Joss, S.; Schraven, D.; Zhan, C.; Weijnen, M. Sustainable–smart–resilient–low carbon–eco–knowledge cities; making sense of a multitude of concepts promoting sustainable urbanization. J. Clean. Prod. 2015, 109, 25–38. [Google Scholar] [CrossRef]
  34. Cortese, T.T.P.; Almeida, J.F.S.d.; Batista, G.Q.; Storopoli, J.E.; Liu, A.; Yigitcanlar, T. Understanding Sustainable Energy in the Context of Smart Cities: A PRISMA Review. Energies 2022, 15, 2382. [Google Scholar] [CrossRef]
  35. Ejaz, W.; Naeem, M.; Shahid, A.; Anpalagan, A.; Jo, M. Efficient energy management for the internet of things in smart cities. IEEE Commun. Mag. 2017, 55, 84–91. [Google Scholar] [CrossRef]
  36. Makhadmeh, S.N.; Khader, A.T.; Al-Betar, M.A.; Naim, S.; Abasi, A.K.; Alyasseri, Z.A.A. Optimization methods for power scheduling problems in smart home: Survey. Renew. Sustain. Energy Rev. 2019, 115, 109362. [Google Scholar] [CrossRef]
  37. Abadi, M.; Barham, P.; Chen, J.; Chen, Z.; Davis, A.; Dean, J.; Devin, M.; Ghemawat, S.; Irving, G.; Isard, M. Tensorflow: A system for large-scale machine learning. In Proceedings of the International Conference OSDI, Savannah, GA, USA, 2–4 November 2016; pp. 265–283. [Google Scholar]
  38. James, G.; Witten, D.; Hastie, T.; Tibshirani, R. An Introduction to Statistical Learning, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2021; Volume 112. [Google Scholar]
  39. Cho, K.; Van Merriënboer, B.; Bahdanau, D.; Bengio, Y. On the properties of neural machine translation: Encoder-decoder approaches. arXiv 2014, arXiv:1409.1259. [Google Scholar]
  40. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016; Volume 1. [Google Scholar]
  41. Gers, F.A.; Schraudolph, N.N.; Schmidhuber, J. Learning precise timing with LSTM recurrent networks. J. Mach. Learn. Res. 2002, 3, 115–143. [Google Scholar]
  42. Carrera, B.; Sim, M.K.; Jung, J.-Y. PVHybNet: A Hybrid Framework for Predicting Photovoltaic Power Generation Using Both Weather Forecast and Observation Data. IET Renew. Power Gener. 2020, 14, 2192–2201. [Google Scholar] [CrossRef]
Figure 1. Proposed regression framework for energy consumption in a smart campus.
Figure 2. Incheon National University energy collection system: (a) campus map of INU and (b) energy consumption of INU's College of Engineering (Building 8).
Figure 3. Proposed deep learning encoder-decoder regression model for energy consumption in a smart city.
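Figure 3's encoder-decoder recurrent network can be sketched in TensorFlow/Keras along the following lines. This is a minimal illustration rather than the paper's exact configuration: the 168-step input window, the 30 input features, the 48-step decoding horizon, and the 23-unit LSTM layers (one value from the evaluated neuron grid) are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Illustrative dimensions (assumptions, not the paper's exact configuration):
# a window of 168 hourly steps with 30 features per step.
WINDOW_STEPS, NUM_FEATURES = 168, 30

inputs = tf.keras.Input(shape=(WINDOW_STEPS, NUM_FEATURES))

# Encoder: compress the input window into a fixed-length context vector.
context = layers.LSTM(23, activation="tanh")(inputs)

# Decoder: repeat the context vector over an assumed 48-step horizon and
# run a second recurrent layer over it.
repeated = layers.RepeatVector(48)(context)
decoded = layers.LSTM(23, activation="tanh")(repeated)

# Regression head: a single output for the 48-hour-ahead total consumption (kWh).
outputs = layers.Dense(1)(decoded)

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.summary()
```

The same skeleton accepts GRU cells in place of the LSTM layers by swapping the layer class; the hyper-parameter grid in Tables 1 and 5 would then be searched over this structure.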
Figure 4. Decomposition plot of the energy consumption time series.
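A decomposition such as the one shown in Figure 4 can be generated, for example, with statsmodels; the file name, column name, and 24-hour seasonal period in this sketch are placeholders rather than details confirmed by the paper.

```python
import matplotlib.pyplot as plt
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Hypothetical hourly consumption series; the CSV path and column name are placeholders.
energy = pd.read_csv("songdo_energy.csv", parse_dates=["Date"], index_col="Date")

# Split the series into trend, seasonal, and residual components (24-hour cycle assumed).
decomposition = seasonal_decompose(energy["Energy_kW-hour"], model="additive", period=24)
decomposition.plot()
plt.show()
```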
Figure 5. Scatter plot comparing actual and 48-hour-ahead total energy consumption on the Songdo campus from 30 November 2019 to 19 January 2021.
Figure 6. Boxplot of the total hourly energy consumption on the Songdo campus from 30 November 2019 to 19 January 2021.
Figure 7. All variables used in this study, shown after normalization.
Figure 8. Fixed time series partition into training, validation, and test sets.
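The fixed partition illustrated in Figure 8 amounts to a chronological split without shuffling; the 70/15/15 fractions in this sketch are illustrative defaults, not necessarily the ratios used in the study.

```python
import pandas as pd

def chronological_split(frame: pd.DataFrame, train_frac: float = 0.70, val_frac: float = 0.15):
    """Split a time-ordered DataFrame into train/validation/test sets without shuffling."""
    n = len(frame)
    train_end = int(n * train_frac)
    val_end = int(n * (train_frac + val_frac))
    return frame.iloc[:train_end], frame.iloc[train_end:val_end], frame.iloc[val_end:]

# Usage (assuming `energy` is the hourly DataFrame sorted by timestamp):
# train_df, val_df, test_df = chronological_split(energy)
```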
Figure 9. RMSE and MAE performance results on the test data: predicted versus measured energy consumption. The optimal values are highlighted in blue and underlined.
Figure 10. R2 performance results on the test data: predicted versus measured energy consumption. The optimal values are highlighted in blue and underlined.
Table 1. Prediction algorithms with evaluated hyper-parameters.
Prediction Algorithms | Evaluated Hyper-Parameters
Single Regression Models:
  AR | -
  Auto ARIMA | -
  SARIMA | -
  Auto SARIMA | -
  PROPHET | -
  Linear regression | -
  Decision tree | max_depth = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
Ensemble Models:
  Random forest | num_estimators = {5, 10, 15, 20, 40, 80}, max_depth = {2, 5, 7, 9}
  Extra trees | num_estimators = {5, 10, 15, 20, 40, 80}, max_depth = {2, 5, 7, 9}
  Gradient boosting | num_estimators = {5, 10, 15, 20, 40, 80}, max_depth = {2, 5, 7, 9}
  CatBoost | depth = {3, 6, 8, 10}, learning_rate = {0.0001, 0.001, 0.01, 0.1}, iterations = {30, 50, 100}
  LightGBM | num_estimators = {5, 10, 15, 20, 40, 80}, max_depth = {2, 5, 7, 9}
Deep Learning Model:
  Encoder-Decoder Recurrent Neural Network | activation = {‘relu’, ‘tanh’}, recurrent_activation = {‘relu’, ‘tanh’}, neurons = {15, …, 23, …, 50}, optimizer = {adam, rmsprop, nadam}
Table 2. Energy and time variables.
Category | Variable Name | Classification | Description (Unit)
Total Energy Consumption | Energy_kW-hour | Continuous | Total energy consumed in one hour by all buildings, in kilowatt-hours (kWh). The target variable is Electricity(kWh)_48-h-ahead.
Time | Date | Categorical | Date of energy consumption in the format YYYYMMDD
Time | Year | Categorical | Measurement year (YYYY)
Time | Month | Categorical | Measurement month (MM)
Time | Day | Categorical | Measurement day (DD)
Time | Hour | Categorical | Measurement hour (00:00 to 23:00)
Time | Week_Day | Categorical | Day of the week (1–7)
Time | Num_Week | Categorical | Week number within the year (1–52)
Time | Year sin | Continuous | Year_sin = sin(time_seconds × 2π / (365 × 24 × 60 × 60))
Time | Year cos | Continuous | Year_cos = cos(time_seconds × 2π / (365 × 24 × 60 × 60))
Time | Day sin | Continuous | Day_sin = sin(time_seconds × 2π / (24 × 60 × 60))
Time | Day cos | Continuous | Day_cos = cos(time_seconds × 2π / (24 × 60 × 60))
Time | Hour sin | Continuous | Hour_sin = sin(time_seconds × 2π / (60 × 60))
Time | Hour cos | Continuous | Hour_cos = cos(time_seconds × 2π / (60 × 60))
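The cyclical encodings and the 48-hour-ahead target in Table 2 can be computed with pandas roughly as follows; the sketch assumes an hourly DataFrame indexed by timestamp with an Energy_kW-hour column, mirroring the names in the table.

```python
import numpy as np
import pandas as pd

HOUR = 60 * 60        # seconds in an hour
DAY = 24 * HOUR       # seconds in a day
YEAR = 365 * DAY      # seconds in a (non-leap) year

def add_time_features(df: pd.DataFrame) -> pd.DataFrame:
    """Add the cyclical time encodings of Table 2 and the 48-hour-ahead target."""
    seconds = df.index.map(pd.Timestamp.timestamp).to_numpy()  # POSIX time in seconds
    for name, period in [("Year", YEAR), ("Day", DAY), ("Hour", HOUR)]:
        df[f"{name}_sin"] = np.sin(seconds * (2 * np.pi / period))
        df[f"{name}_cos"] = np.cos(seconds * (2 * np.pi / period))
    # Shift the hourly total 48 rows backwards so each row carries its 48-h-ahead target.
    df["Electricity(kWh)_48-h-ahead"] = df["Energy_kW-hour"].shift(-48)
    return df
```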
Table 3. Building Energy Consumption variables.
Variable Name | Classification | Description (Unit)
01. University_Headquarters (kWh) | Continuous | Energy consumed in kilowatt-hours by the university headquarters, building number one.
02. Faculty_Hall (kWh) | Continuous | Energy consumed in kilowatt-hours by the faculty office, building number two.
04. Information_Computing (kWh) | Continuous | Energy consumed in kilowatt-hours by the computer and information department, building number four.
05. Natural_Science (kWh) | Continuous | Energy consumed in kilowatt-hours by the colleges of natural sciences, life science, and bioengineering, building number five.
06. Library (kWh) | Continuous | Energy consumed in kilowatt-hours by the Haksan library, building number six.
07. Information_Technology (kWh) | Continuous | Energy consumed in kilowatt-hours by the College of Information Technology, building number seven.
08. College_Engineering (kWh) | Continuous | Energy consumed in kilowatt-hours by the College of Engineering, building number eight.
09. Joint_Experiment (kWh) | Continuous | Energy consumed in kilowatt-hours by the central laboratory department, building number nine.
10. GuestHouse (kWh) | Continuous | Energy consumed in kilowatt-hours by the guest house, building number ten.
11. Welfare_Hall (kWh) | Continuous | Energy consumed in kilowatt-hours by the welfare and service center, including the cafeteria, building number eleven.
12. Convention (kWh) | Continuous | Energy consumed in kilowatt-hours by the convention center, building number twelve.
15. College_Humanities (kWh) | Continuous | Energy consumed in kilowatt-hours by the College of Humanities, building number fifteen.
16. Art_Sports (kWh) | Continuous | Energy consumed in kilowatt-hours by the College of Arts and Physical Education, building number sixteen.
17. Student_Hall (kWh) | Continuous | Energy consumed in kilowatt-hours by the student center, building number seventeen.
18-1. Dormitory (kWh) | Continuous | Energy consumed in kilowatt-hours by student dormitory #1, building number 18-1.
20. Sport_Center (kWh) | Continuous | Energy consumed in kilowatt-hours by the sport center and golf practice center, building number twenty.
21. Gym (kWh) | Continuous | Energy consumed in kilowatt-hours by the gymnasium, building number twenty-one.
Table 4. Weather variables related to Songdo, Incheon.
Category | Variable Name | Classification | Description (Unit)
Weather variables | Dew_Point | Continuous | Dew point temperature (°C)
Weather variables | Humidity | Continuous | Humidity (%)
Weather variables | Precipitation | Continuous | Precipitation (%)
Weather variables | Pressure | Continuous | Pressure (hPa)
Weather variables | Sky Condition | Categorical | Clear, Dangerously Windy, Dangerously Windy and Partly Cloudy, Foggy, Heavy Rain, Humid, Light Rain, Mostly Cloudy, Overcast, Possible Drizzle, Possible Light (Rain, Snow), Rain, Snow, Windy
Weather variables | Temperature | Continuous | Temperature (°C)
Weather variables | Wind_Speed | Continuous | Wind speed (mph)
Weather variables | Wind_Direction(deg) | Continuous | Wind direction in degrees (0°–360°)
Weather variables | Wx | Continuous | Wx = Wind_Speed × cos(Wind_Direction(deg) × π / 180)
Weather variables | Wy | Continuous | Wy = Wind_Speed × sin(Wind_Direction(deg) × π / 180)
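The Wx and Wy entries in Table 4 project wind speed and direction onto a wind vector, which avoids the 0°/360° discontinuity of a raw direction column; a minimal sketch, assuming the two inputs are pandas Series.

```python
import numpy as np
import pandas as pd

def wind_components(speed_mph: pd.Series, direction_deg: pd.Series) -> pd.DataFrame:
    """Decompose wind speed and direction into the Wx/Wy vector components of Table 4."""
    radians = direction_deg * np.pi / 180.0
    return pd.DataFrame({
        "Wx": speed_mph * np.cos(radians),
        "Wy": speed_mph * np.sin(radians),
    })
```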
Table 5. Hyper-parameter tuning using grid search for the algorithms.
Prediction Algorithms | Evaluated Hyper-Parameters
Single Regression Models:
  AR | -
  Auto ARIMA | -
  SARIMA | -
  Auto SARIMA | -
  PROPHET | -
  Linear regression | -
  Decision tree | max_depth = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
Ensemble Models:
  Random forest | num_estimators = {5, 10, 15, 20, 40, 80}, max_depth = {2, 5, 7, 9}
  Extra trees | num_estimators = {5, 10, 15, 20, 40, 80}, max_depth = {2, 5, 7, 9}
  Gradient boosting | num_estimators = {5, 10, 15, 20, 40, 80}, max_depth = {2, 5, 7, 9}
  CatBoost | learning_rate = {0.0001, 0.001, 0.01, 0.1}, depth = {3, 6, 8, 10}, iterations = {30, 50, 100}
  LightGBM | num_estimators = {5, 10, 15, 20, 40, 80}, max_depth = {2, 5, 7, 9}
Deep Learning Model:
  Encoder-Decoder Recurrent Neural Network | activation = {‘relu’, ‘tanh’}, recurrent_activation = {‘relu’, ‘tanh’}, neurons = {15, …, 23, …, 50}, optimizer = {adam, rmsprop, nadam}
The optimal values are emphasized using bold and underline formatting.
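For the scikit-learn ensembles, the grid in Table 5 can be swept with a plain loop over the fixed validation split of Figure 8 (the table's num_estimators corresponds to scikit-learn's n_estimators). The random forest and the synthetic arrays below are illustrative stand-ins, not the study's actual data pipeline.

```python
from itertools import product

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

# Synthetic stand-ins for the real feature matrices built from Tables 2-4.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(500, 10)), rng.normal(size=500)
X_val, y_val = rng.normal(size=(100, 10)), rng.normal(size=100)

# Hyper-parameter grid for the random forest, as listed in Table 5.
grid = {"n_estimators": [5, 10, 15, 20, 40, 80], "max_depth": [2, 5, 7, 9]}

best_rmse, best_params = float("inf"), None
for n_estimators, max_depth in product(grid["n_estimators"], grid["max_depth"]):
    model = RandomForestRegressor(n_estimators=n_estimators, max_depth=max_depth, random_state=0)
    model.fit(X_train, y_train)
    rmse = np.sqrt(mean_squared_error(y_val, model.predict(X_val)))
    if rmse < best_rmse:
        best_rmse, best_params = rmse, {"n_estimators": n_estimators, "max_depth": max_depth}

print(f"Best validation RMSE: {best_rmse:.2f} with {best_params}")
```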
Table 6. Results of Energy Power Consumption Prediction over validation dataset.
Prediction Models | Validation RMSE | Validation MAE | Validation R2
Single Regression Models:
  AR | 157.64 | 130.05 | −0.03
  Auto ARIMA | 167.04 | 141.80 | −0.15
  SARIMA | 188.85 | 159.29 | −0.43
  Auto SARIMA | 167.38 | 139.27 | −0.16
  PROPHET | 91.70 | 69.62 | 0.70
  Linear regression | 144.98 | 108.44 | 0.15
  Decision tree | 102.86 | 67.36 | 0.57
Ensemble Models:
  Random forest | 98.84 | 65.87 | 0.61
  Extra trees | 85.60 | 58.24 | 0.71
  Gradient boosting | 91.68 | 64.35 | 0.66
  CatBoost | 94.69 | 70.56 | 0.64
  LightGBM | 101.24 | 82.26 | 0.59
Deep Learning Model:
  Encoder-Decoder Recurrent Neural Network | 83.66 | 59.78 | 0.71
The optimal values are emphasized using bold and underline formatting.
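The RMSE, MAE, and R2 columns in Table 6 are the standard regression metrics; a minimal sketch with toy values, assuming y_true and y_pred hold measured and predicted hourly consumption in kWh.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# Toy arrays standing in for measured and predicted hourly consumption (kWh).
y_true = np.array([410.0, 395.5, 388.2, 402.7, 417.9])
y_pred = np.array([405.3, 401.1, 380.9, 399.0, 420.4])

rmse = np.sqrt(mean_squared_error(y_true, y_pred))  # root-mean-square error
mae = mean_absolute_error(y_true, y_pred)           # mean absolute error
r2 = r2_score(y_true, y_pred)                       # coefficient of determination

print(f"RMSE = {rmse:.2f}, MAE = {mae:.2f}, R2 = {r2:.2f}")
```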
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
