Article

Enhancing IoT-Based Environmental Monitoring and Power Forecasting: A Comparative Analysis of AI Models for Real-Time Applications

Department of Electronics Engineering, Kookmin University, Seoul 02707, Republic of Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(24), 11970; https://doi.org/10.3390/app142411970
Submission received: 13 November 2024 / Revised: 11 December 2024 / Accepted: 19 December 2024 / Published: 20 December 2024

Abstract

The Internet of Things (IoT) is transforming industries by integrating sensors and connectivity into everyday objects, enabling enhanced monitoring, management, and automation through Machine-to-Machine (M2M) communication. Despite these advancements, the IoT faces limitations in accurately predicting environmental conditions and power consumption. This study proposes an advanced IoT platform that combines real-time data collection with secure transmission and forecasting using a hybrid Long Short-Term Memory (LSTM)–Gated Recurrent Unit (GRU) model. The hybrid architecture addresses the computational inefficiencies of LSTM and the short-term dependency challenges of GRU, achieving improved accuracy and efficiency in time-series forecasting. Across all prediction use cases, the model achieves a maximum Mean Absolute Error (MAE) of 3.78%, a maximum Root Mean Square Error (RMSE) of 8.15%, and a minimum $R^2$ score of 82.04%, demonstrating the proposed model's suitability for real-life use cases. Furthermore, a comparative analysis shows that the proposed model outperforms standalone LSTM and GRU models, enhancing the IoT's reliability in real-time environmental and power forecasting.

1. Introduction

The integration of sensors and internet connectivity into everyday objects through the Internet of Things (IoT) is transforming industries and gaining widespread adoption. In essence, the IoT refers to a network of physical devices that exchange information over the internet, enabling Machine-to-Machine (M2M) communication. This capability enhances monitoring, management systems, and automation processes, thereby improving decision-making and enhancing user experiences [1,2,3]. IoT systems are built upon three primary layers: the sensor layer, the connectivity layer, and the cloud layer. The sensor layer collects real-time data, such as temperature, humidity, and power consumption, from the target environment. These data are then transmitted and stored either locally or in the cloud using internet connectivity [4]. In recent years, IoT-enabled systems combined with artificial intelligence (AI) have revolutionized environmental parameter monitoring and energy consumption forecasting [5]. These technologies enhance the efficiency of data collection and analysis, improving the accuracy of environmental monitoring while enabling predictive capabilities to optimize energy management [6]. The research underscores the importance of environmental conditions in workplace productivity and employee well-being. Rioux and Liliane [7] found that office environments significantly influence employee satisfaction, comfort, mental and physical health, and productivity. Lin et al. [8] highlighted that AI-enabled IoT systems provide valuable insights into environmental data, ensuring optimal conditions to enhance productivity. Temperature and humidity, in particular, play a critical role in workplace performance. A study reported that call center workers’ productivity declined by 2.4% for every degree Celsius increase within 21.9 °C to 28.5 °C, with a 5–7% reduction when temperatures exceeded 25 °C [9]. 
Similarly, high humidity negatively affects worker output, with surveys indicating a 14% decrease in tasks completed per hour on humid days [10]. These findings emphasize the critical need for precise environmental monitoring and control. Leveraging AI-driven IoT systems for forecasting environmental conditions can significantly enhance workplace productivity by maintaining optimal conditions and improving energy management strategies. Leal et al. [11] proposed an IoT-based Recurrent Neural Network (RNN) with backpropagation for agricultural use. Their study shows that when the RNN model is paired with IoT-based continued learning processes, model performance improves significantly; however, this study is mainly limited to agricultural uses. Wang et al. [12] proposed an LSTM-based approach for accurately forecasting energy consumption patterns in industrial settings. While their model demonstrated lower Root Mean Squared Error (RMSE) compared to backpropagation neural networks, Autoregressive Moving Average (ARMA), and autoregressive fractional integrated moving average (ARFIMA) models, it lacked IoT integration, limiting its applicability and scalability. Bouktif et al. [13] conducted a comparative analysis of various Machine Learning (ML) and deep learning models for real-life energy consumption forecasting. Their findings highlighted LSTM as the most accurate model, delivering lower error rates than other techniques. However, this study focused exclusively on load forecasting for residential houses and did not incorporate environmental monitoring or IoT-enabled data collection, which are critical for training future models and enhancing their performance. Mahjoub et al. [14] explored energy management strategies using LSTM, GRU, and Drop-GRU models, with Drop-GRU outperforming the others. 
Despite its promising results, the study omitted environmental monitoring and IoT-based data collection, factors that play a vital role in refining future model performance and ensuring its adaptability to diverse conditions. Fard et al. [15] showed that the AdaBoost algorithm performs best for IoT-based heating and cooling load forecasting, but their approach lacks AI integration.
These limitations have inspired new studies focused on improving prediction accuracy and integrating IoT-enabled smart data collection systems. To address these challenges, this study proposes a Smart Environment and Energy Monitoring and Prediction (SEEMP) system. The SEEMP system combines an AI-driven approach, a novel RNN-based hybrid LSTM-GRU model, and an IoT application server platform. It facilitates real-time data collection, curation, and centralized storage, enabling the application of advanced AI models for accurate energy consumption forecasting in real-life scenarios. Enabling power and environmental forecasting features in an IoT environment is crucial for improving its effectiveness and automation. For instance, hour-ahead temperature forecasting can optimize the control of heating or cooling elements, enhancing energy efficiency. Similarly, monitoring power consumption by these elements helps energy management systems allocate energy effectively. The IoT platform supports local and global cloud servers, offers sensor data analytics, and features a plug-in architecture that allows external systems to access data through a REST API. To ensure secure data transmission, the system employs Message Queue Telemetry Transport (MQTT) and Hypertext Transfer Protocol Secure (HTTPS) protocols. The SEEMP system was tested for predictive performance using environmental parameters such as office temperature and humidity, along with energy consumption for appliances like water dispensers, air conditioners (AC), and refrigerators. The main contributions of this work are summarized below:
  • A robust, real-time data collection framework for diverse office equipment is designed and implemented, where secure data exchange is ensured through the utilization of HTTPS and MQTT protocols.
  • The hybrid LSTM-GRU model is deployed for real-time processing and prediction on edge devices, addressing existing challenges effectively.
  • The reliability and performance of the hybrid model are evaluated with standard metrics and compared against traditional models under different forecasting conditions.
The structure of the paper is as follows: Section 2 discusses data collection and preprocessing, Section 3 reviews the model descriptions, Section 4 addresses performance evaluations, and Section 5 presents the conclusion.

2. Data Collection and Processing

The IoT system we developed, shown in Figure 1, comprises three core components: (a) Data acquisition devices are used to capture energy consumption, temperature, and humidity data from office appliances. We utilized the AM2302 sensor to collect temperature and humidity data. This sensor provides an accuracy of ±2–5% for humidity readings and ±0.5 °C for temperature measurements. For energy data collection, we measured voltage and current using the ZMPT101B voltage sensor and the SEN0211 current sensor. Energy consumption was calculated from the current and voltage measurements through mathematical equations. The data collection spanned a period of 18 months, from 4 October 2022 to 25 March 2024, resulting in a dataset of size 44,075,808 × 10. The dataset contains the energy used by each device per minute together with environmental data at each minute, organized in time-series fashion. After processing, the dataset was aggregated to calculate the daily energy consumption for each appliance, resulting in 513 rows of data per appliance. Similarly, the daily averages for room temperature and humidity were computed, also covering 513 days. (b) Edge IoT systems were used to process the data locally and run AI models for anomaly detection. Finally, (c) a centralized IoT system was used for comprehensive data analysis and AI model training.
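The conversion from minute-level voltage and current samples to daily energy totals can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function names, the unity power factor, and the 60 s sampling interval are assumptions.

```python
# Sketch: converting minute-level voltage/current samples into per-minute
# and daily energy consumption. Names and defaults are illustrative.

def minute_energy_wh(voltage_v, current_a, power_factor=1.0, interval_s=60):
    """Energy for one sampling interval in watt-hours: E = V * I * PF * t / 3600."""
    power_w = voltage_v * current_a * power_factor
    return power_w * interval_s / 3600.0

def daily_energy_kwh(minute_samples):
    """Aggregate a day's (voltage, current) minute samples into kWh."""
    total_wh = sum(minute_energy_wh(v, i) for v, i in minute_samples)
    return total_wh / 1000.0
```

For example, a 220 V appliance drawing 1 A for one minute consumes roughly 3.67 Wh, and a full day of such samples aggregates to 5.28 kWh.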
The architecture, depicted in the figures, illustrates the use of MQTT and HTTPS protocols for secure data transmission from data acquisition devices to edge systems and further to the central cloud system. Edge IoT systems employ local routers and MQTT brokers for data collection and processing, storing data in local SQL databases, and utilizing IoT infrastructure for real-time analysis. Cloud IoT systems receive data via global MQTTS connections, store the data in centralized SQL databases, and train AI models, which are then deployed back to edge systems. The term “cloud database” describes the centralized database used to store processed and aggregated data from edge devices that are housed in the cloud. On the other hand, a global database offers a unified picture of data from many geographical locations by logically integrating several cloud databases. The global dashboard presents a comprehensive picture of data collected from all areas, whereas the cloud dashboard gives targeted insights based on data stored in a particular cloud region. In our system, MQTT facilitates lightweight, real-time communication between edge IoT devices and brokers, ensuring efficient local data collection, while HTTPS enables secure interactions with the cloud system. The system’s security is ensured using Transport Layer Security (TLS) encryption and self-signed certificates generated with OpenSSL, protecting data integrity and authenticity across the network. This comprehensive setup enables real-time monitoring, anomaly detection, and secure communication, accessible globally via public Internet Protocol (IP) addresses [4].
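The edge-to-cloud messaging described above can be illustrated with a minimal payload builder. The topic scheme and field names below are assumptions, not taken from the paper; in the actual system, an MQTT client (e.g., paho-mqtt) would publish this payload to the local broker over a TLS-secured connection.

```python
import json
import time

# Sketch of one edge-to-cloud sensor message. The "seemp/..." topic
# layout and the JSON field names are illustrative assumptions.

def make_reading(device_id, temperature_c, humidity_pct, power_w, ts=None):
    """Serialize one sensor reading as (MQTT topic, JSON payload)."""
    topic = f"seemp/edge/{device_id}/reading"
    payload = json.dumps({
        "device": device_id,
        "ts": ts if ts is not None else int(time.time()),
        "temperature_c": temperature_c,
        "humidity_pct": humidity_pct,
        "power_w": power_w,
    })
    return topic, payload
```

A broker-side consumer can then parse the JSON payload and insert it into the local SQL database keyed by device and timestamp.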

2.1. Data Collection

PostgreSQL is designed for local use by applications running on the same machine. However, by configuring a global IP address on the host system, it can be accessed remotely from any location worldwide. This configuration enables clients to establish an internet connection to the database from multiple computers. PostgreSQL must be configured to allow remote connections so that data can be gathered from the database via the Internet. This is accomplished by setting an individual port for the connections, adding the IP addresses of the clients to PostgreSQL’s configuration file, and limiting access to those with permission only by demanding a specific username and password. With the help of these safeguards, regulated and safe database access is made possible, facilitating effective online data collection from distant users.
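The configuration steps above amount to a few edits in PostgreSQL's server files; the port, database, user, and client address below are illustrative placeholders, not values from the paper.

```conf
# postgresql.conf -- listen beyond localhost on a dedicated port (values illustrative)
listen_addresses = '*'
port = 5433

# pg_hba.conf -- admit only a known client address, with password authentication
# TYPE  DATABASE  USER        ADDRESS          METHOD
host    seemp_db  seemp_user  203.0.113.7/32   scram-sha-256
```

After reloading the server, only the listed client can connect, and only with valid credentials.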

2.2. Data Preprocessing

The process of data extraction, processing, and AI model training in an IoT system is designed for power forecasting. The whole data processing and model training process is shown in a flow chart in Figure 2. Initially, cloud database access is configured using PostgreSQL, allowing secure and efficient data retrieval through specific SQL queries. After retrieving the data from the database, data are cleaned by removing null values, filling NaN values through interpolation, and filtering to handle heterogeneous datasets. The term heterogeneous data refers to datasets that consist of diverse types of information, such as numerical, categorical, and temporal data, or data from different sources like energy consumption per appliance and environmental conditions such as room temperature and humidity.
This addresses the complexity of varied data types, such as energy consumption per appliance and room temperature/humidity. Filtering heterogeneous data is essential in data processing to identify data entries, ensure uniqueness, and align the data with the predefined structures required by databases. Before filtering, the dataset contained 513 days of data for most appliances. After the filtering process, the data were reduced to 451 days for the water dispenser, 489 days for the refrigerator, and 438 days for the air conditioner (AC). For environmental factors, such as room temperature and humidity, the dataset was reduced to 438 and 336 days, respectively. These reductions were necessary to ensure that only valid data were used for analysis and AI model training. These steps ensure that the data are accurate and relevant, preparing them for the next phase of analysis. However, only one year (365 days) of data were used for AI model training to preserve seasonal patterns and maintain the seasonal relationship. Data from 6 months were used to test the model. The cleaned data are then processed to calculate daily energy consumption from the minute-level energy readings for each appliance, along with average room conditions, which are crucial for accurate power forecasting. Subsequently, the data are extended and prepared for training AI models. By scaling and splitting the data into test and train sets, and defining LSTM, GRU, and the hybrid LSTM-GRU models, the system focuses on sequential patterns crucial for forecasting future energy consumption. Training individual deep learning models for each data type helps manage the complexity of varied data inputs and improves prediction accuracy. This method not only enhances model performance but also ensures reliable day-ahead power forecasting, a critical function for optimizing energy management and planning.
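Two of the cleaning steps described above, interpolating missing readings and aggregating minute-level rows into daily totals, can be sketched in plain Python. This is an illustrative reconstruction with assumed function names; the actual pipeline combines SQL queries with Python processing.

```python
# Sketch of two preprocessing steps: linear interpolation of interior
# gaps (None/NaN readings) and aggregation of minute rows into daily totals.

def interpolate_gaps(series):
    """Fill interior None gaps by linear interpolation between neighbors."""
    out = list(series)
    i = 0
    while i < len(out):
        if out[i] is None:
            j = i
            while j < len(out) and out[j] is None:
                j += 1
            if 0 < i and j < len(out):  # only interior gaps can be interpolated
                step = (out[j] - out[i - 1]) / (j - i + 1)
                for k in range(i, j):
                    out[k] = out[i - 1] + step * (k - i + 1)
            i = j
        else:
            i += 1
    return out

def daily_totals(rows):
    """Sum minute-level (day, value) rows into per-day totals."""
    totals = {}
    for day, value in rows:
        totals[day] = totals.get(day, 0.0) + value
    return totals
```

Leading or trailing gaps have no neighbor on one side, so they are left untouched here; rows for such days would be dropped by the subsequent filtering step.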

3. Proposed Methodology

In recent years, the abundance of relevant load data and advancements in parallel computing have made load prediction using big data a prominent research area. Deep Learning (DL) is increasingly being adopted by researchers to enhance power forecasting methods and improve the analysis of time series data [16].

3.1. Long Short-Term Memory

Deep learning models frequently run into the issue of vanishing gradients (which can stall learning) or exploding gradients (which can destabilize it) [17]. LSTM networks are a type of Recurrent Neural Network (RNN). LSTM networks can effectively learn long-term dependencies and are resilient to the vanishing gradient problem. The LSTM architecture in Figure 3 has three processing gates: the forget gate $f_t$, input gate $i_t$, and output gate $o_t$ [18].
LSTM has memory cells equipped with gates that control the flow of information, making LSTMs well-suited for time series forecasting [19]. Because it can accurately predict future loads based on past load sequences, the LSTM architecture is used as the foundation of the forecasting model [20]. The input layer (input feature), hidden layer, and output layer (target feature) are the most important parts of LSTM architecture. LSTM uses memory cells within the hidden layer.
$$f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f)$$
$$i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i)$$
$$\tilde{C}_t = \tanh(W_C \cdot [h_{t-1}, x_t] + b_C)$$
$$C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t$$
$$o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o)$$
$$h_t = o_t \odot \tanh(C_t)$$
where $\sigma$ denotes the sigmoid activation function; $W_f$, $W_i$, $W_C$, and $W_o$ represent weight matrices; $h_{t-1}$ represents the output at time $(t-1)$; and $b_f$, $b_i$, $b_C$, and $b_o$ represent the bias vectors.
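One step of the six equations above can be implemented directly with plain Python lists, which makes the gate interactions explicit. This is an illustrative sketch, not the authors' implementation; a real model would use a deep learning framework.

```python
import math

# Minimal LSTM cell: one time step of the equations above, with weight
# matrices defined over the concatenated vector [h_{t-1}, x_t].

def _sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def _affine(W, vec, b):
    return [sum(w * x for w, x in zip(row, vec)) + bi for row, bi in zip(W, b)]

def lstm_step(x_t, h_prev, c_prev, Wf, bf, Wi, bi, Wc, bc, Wo, bo):
    z = h_prev + x_t                                       # [h_{t-1}, x_t]
    f = [_sigmoid(v) for v in _affine(Wf, z, bf)]          # forget gate
    i = [_sigmoid(v) for v in _affine(Wi, z, bi)]          # input gate
    c_tilde = [math.tanh(v) for v in _affine(Wc, z, bc)]   # candidate state
    c = [fv * cp + iv * cv
         for fv, cp, iv, cv in zip(f, c_prev, i, c_tilde)] # new cell state
    o = [_sigmoid(v) for v in _affine(Wo, z, bo)]          # output gate
    h = [ov * math.tanh(cv) for ov, cv in zip(o, c)]       # new hidden state
    return h, c
```

With all weights and biases at zero, every gate sits at 0.5 and the cell state simply halves each step, which is a quick sanity check on the wiring.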

3.2. Gated Recurrent Unit

A simplified version of the LSTM, the GRU combines the input and forget gates into a single update gate. It addresses the limitations of vanilla RNNs and demonstrates improved performance in sequence learning tasks, particularly by offering a more efficient training process and reduced complexity [21]. Additionally, the GRU is more effective at recalling longer sequences and performs better with less training data [22]. Like the LSTM, it performs well with sequential data. GRUs are faster to train and more computationally efficient due to their reduced complexity, which makes them very helpful in real-time or resource-constrained settings. GRUs are commonly used in tasks involving time-series prediction, natural language processing, and other applications where retaining information over time is crucial [23]. Though usually not as resilient as LSTMs on very long sequences, their gating mechanisms handle the vanishing and exploding gradient problems [24]. The architecture of the GRU model is illustrated in Figure 4.
Here are the equations that describe each step in the GRU mechanism:
$$z_t = \sigma(W_z \cdot [h_{t-1}, x_t])$$
$$r_t = \sigma(W_r \cdot [h_{t-1}, x_t])$$
$$\tilde{h}_t = \tanh(W \cdot [r_t \odot h_{t-1}, x_t])$$
$$h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t$$
The operations of the GRU consist of essential components that control the flow and time-based updates of information. The reset gate $r_t$ and update gate $z_t$ operate on the current input $x_t$ and the previous hidden state $h_{t-1}$ through sigmoid activation functions $\sigma$, which regulate how much information to retain or update. The candidate hidden state $\tilde{h}_t$ is computed by applying the hyperbolic tangent function $\tanh$ to a combination of the current input and the reset-gated previous hidden state, introducing non-linearity into the operation. The final hidden state $h_t$ combines the candidate state $\tilde{h}_t$ and the previous state $h_{t-1}$, with each contribution weighted by the update gate $z_t$. This approach enables the model to effectively balance past information with new inputs, updating the state in a controlled way.
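The four GRU equations above translate into a short plain-Python step, shown here without bias terms to match the paper's formulation. This is an illustrative sketch, not the authors' code.

```python
import math

# Minimal GRU cell: one time step of the four equations above,
# bias-free to match the formulation in the text.

def _sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def _matvec(W, vec):
    return [sum(w * x for w, x in zip(row, vec)) for row in W]

def gru_step(x_t, h_prev, Wz, Wr, W):
    hx = h_prev + x_t                                   # [h_{t-1}, x_t]
    z = [_sigmoid(v) for v in _matvec(Wz, hx)]          # update gate
    r = [_sigmoid(v) for v in _matvec(Wr, hx)]          # reset gate
    rx = [rv * hv for rv, hv in zip(r, h_prev)] + x_t   # [r_t ⊙ h_{t-1}, x_t]
    h_tilde = [math.tanh(v) for v in _matvec(W, rx)]    # candidate state
    return [(1 - zv) * hv + zv * hv_t                   # convex blend by z_t
            for zv, hv, hv_t in zip(z, h_prev, h_tilde)]
```

Note there is a single hidden state and no separate cell state, which is where the GRU's efficiency over the LSTM comes from.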

3.3. Proposed Hybrid LSTM-GRU Model

LSTMs have a separate cell state and hidden state, giving them robust control over memory [25]. However, this can lead to retaining information that might not be needed. GRUs have a simpler gating mechanism and combine the cell state and hidden state, making them computationally efficient but somewhat limited in handling long-term dependencies [24]. By combining both LSTM and GRU mechanisms, this model allows for more granular control over which information is retained or discarded. This makes it more adaptable to different types of sequences, where both short- and long-term memory might be needed. The structure of the hybrid model is shown in Figure 5.
The reset gate in this hybrid model functions as it does in a GRU, deciding how much of the previous hidden state $h_{t-1}$ should be "reset" or ignored when computing the candidate state. This allows the model to selectively drop irrelevant information from previous states.
$$r_t = \sigma(W_r \cdot [h_{t-1}, x_t] + b_r)$$
where $h_{t-1}$ and $x_t$ represent the previous hidden state and input, respectively; $W_r$ is the weight matrix for the reset gate; and $b_r$ is the bias term.
Like the forget gate in a typical LSTM cell, the hybrid model's forget gate determines the proportion of the previous cell state $C_{t-1}$ that should be "forgotten" or discarded. Each element of the cell state $C_{t-1}$ is assigned a value between 0 and 1 after applying a sigmoid activation function to the previous hidden state $h_{t-1}$ and the output of the reset gate $r_t$.
$$f_t = \sigma(W_f \cdot [h_{t-1}, r_t] + b_f)$$
where $r_t$ represents the reset gate and $W_f$ and $b_f$ represent the weight matrix and bias term for the forget gate. The forget gate gives the flexibility to discard unimportant information, helping to mitigate issues like the vanishing gradient problem and enabling the model to remember important information over long time sequences.
The update gate $z_t$ controls how much of the hidden state $h_t$ should be updated with new information and how much of the previous hidden state should be retained. This gate blends the previous hidden state with the candidate hidden state (the new information) to create the next hidden state.
$$z_t = \sigma(W_z \cdot [h_{t-1}, x_t] + b_z)$$
where $z_t$ represents the update gate and $W_z$ and $b_z$ represent the weight matrix and bias term for the update gate. The update gate helps maintain the continuity of trends over time while allowing flexibility to adjust for recent changes in the time series. The input gate helps the model decide whether to integrate a recent trend or anomaly into memory based on its importance for predicting future values.
The input gate controls the extent to which the new candidate values $\tilde{C}_t$ (generated for the cell state) should influence the overall memory, allowing for selective integration of new information into the long-term cell state. This is beneficial for remembering important events or trends in the time-series data, allowing the model to capture long-term dependencies. The input gate and candidate cell state equations are given below.
$$i_t = \sigma(W_i \cdot [h_{t-1}, z_t] + b_i)$$
$$\tilde{C}_t = \tanh(W_i \cdot [h_{t-1}, z_t] + b_i)$$
where $i_t$ and $\tilde{C}_t$ represent the input gate and candidate cell state, and $W_i$ and $b_i$ represent the weight matrix and bias term of the input gate.
The cell state $C_t$ is updated by integrating the candidate cell state (modulated by the input gate) with the previous cell state (modulated by the forget gate).
$$C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t$$
The output gate regulates the output based on both the update gate and the hidden state.
$$o_t = \sigma(W_o \cdot [h_{t-1}, z_t] + b_o)$$
After the output gate modulation, $\tanh$ is applied to the cell state to produce a bounded output for the hidden state $h_t$:
$$h_t = o_t \odot \tanh(C_t)$$
This hybrid model can possibly offer quicker training and inference times without sacrificing the model’s ability to capture complex patterns by utilizing the memory cell of the LSTM for long-term dependencies and the simpler gating of the GRU for short-term dependencies. The hyperparameters of the proposed model are shown in Table 1. The selected features further enhance the model’s adaptability and performance. These features include day-level energy consumption data derived from minute-level voltage and current measurements, day-level environmental data consisting of temperature and humidity readings from the sensor, and temporal features such as the day of the week to capture seasonality and periodic patterns. The careful selection of these features ensures the model is well-equipped to handle a diverse range of sequences. These features further optimize the model’s training and inference. The hybrid approach allows for sequences with shorter, immediate requirements to benefit from GRU’s computational efficiency, while sequences with long-term dependencies are effectively managed through the LSTM’s cell state. Additionally, the GRU-inspired reset and update gates may improve gradient flow while preserving long-term dependencies, thereby enhancing training consistency and mitigating the risk of vanishing gradients over extended sequences. This combination of feature selection and hybrid architecture makes the model highly versatile for a variety of applications.
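The full hybrid cell, assembled from the equations in this subsection, can be sketched in plain Python as follows. This is an illustrative reconstruction, not the authors' implementation; note that the paper's equations reuse $W_i$ and $b_i$ for both the input gate and the candidate state, which the sketch reproduces faithfully.

```python
import math

# One step of the hybrid LSTM-GRU cell, following the equations in
# Section 3.3. Illustrative sketch; not the authors' code.

def _sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def _affine(W, vec, b):
    return [sum(w * x for w, x in zip(row, vec)) + bi for row, bi in zip(W, b)]

def hybrid_step(x_t, h_prev, c_prev, Wr, br, Wf, bf, Wz, bz, Wi, bi, Wo, bo):
    hx = h_prev + x_t                                        # [h_{t-1}, x_t]
    r = [_sigmoid(v) for v in _affine(Wr, hx, br)]           # reset gate (GRU-style)
    f = [_sigmoid(v) for v in _affine(Wf, h_prev + r, bf)]   # forget gate on [h, r]
    z = [_sigmoid(v) for v in _affine(Wz, hx, bz)]           # update gate
    hz = h_prev + z                                          # [h_{t-1}, z_t]
    pre = _affine(Wi, hz, bi)
    i = [_sigmoid(v) for v in pre]                           # input gate
    c_tilde = [math.tanh(v) for v in pre]                    # candidate (shares W_i, b_i)
    c = [fv * cp + iv * cv
         for fv, cp, iv, cv in zip(f, c_prev, i, c_tilde)]   # new cell state
    o = [_sigmoid(v) for v in _affine(Wo, hz, bo)]           # output gate
    h = [ov * math.tanh(cv) for ov, cv in zip(o, c)]         # new hidden state
    return h, c
```

The LSTM-style cell state carries long-term dependencies while the GRU-style reset and update gates keep the per-step computation light, which is the efficiency argument made above.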

3.4. Evaluation Metrics

The evaluation metrics used to assess the performance of the RNN models are Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and the Coefficient of Determination ($R^2$). Each metric provides insight into the model's accuracy, consistency, and overall fit to the data [26]. MAE calculates the average of the absolute differences between the true values $y_i$ and the predicted values $\hat{y}_i$. It indicates the average error magnitude without considering its direction (positive or negative).
$$\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| y_i - \hat{y}_i \right|$$
In time-series forecasting, MAE provides a clear interpretation of the average forecast error. It is particularly useful when each error has an equal impact on the evaluation. RMSE measures the square root of the average squared differences between the actual and predicted values. By squaring the errors before averaging, RMSE penalizes larger errors more heavily than smaller ones, making it sensitive to outliers.
$$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2}$$
$R^2$ represents the proportion of variance in the actual values that is predictable from the model's predictions. It measures how well the model's predictions approximate the actual data points.
$$R^2 = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}$$
where $\bar{y}$ is the mean of the actual values.
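The three metrics defined above are direct one-liners over paired true/predicted values; the following sketch implements them exactly as written.

```python
import math

# Direct implementations of the three evaluation metrics above.

def mae(y_true, y_pred):
    """Mean Absolute Error: average magnitude of the forecast errors."""
    return sum(abs(a - b) for a, b in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root Mean Square Error: penalizes large errors more than small ones."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true))

def r2_score(y_true, y_pred):
    """Coefficient of Determination: share of variance explained by the model."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((a - b) ** 2 for a, b in zip(y_true, y_pred))
    ss_tot = sum((a - mean) ** 2 for a in y_true)
    return 1.0 - ss_res / ss_tot
```

A perfect forecast gives MAE = RMSE = 0 and $R^2$ = 1, while a model no better than predicting the mean gives $R^2$ = 0.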

4. Results and Discussion

In this study, the performance of different AI models was scrutinized to identify the most adept approach for real-time IoT applications in energy management and environmental monitoring. By evaluating LSTM, GRU, and hybrid LSTM-GRU models, the goal was to discern which model offers the highest accuracy and resilience in forecasting. This comparative analysis is crucial to ensure the SEEMP system can consistently predict energy consumption and environmental conditions, thereby optimizing office equipment efficiency and enhancing overall system performance. The performance of the three AI models (LSTM, GRU, and our hybrid model) applied to different office appliances and weather data for power forecasting is illustrated in Figure 6, Figure 7 and Figure 8. Figure 6a,b depict the actual and predicted power values for the air conditioner and water dispenser, respectively. The LSTM-GRU model, represented by the green line, consistently demonstrates superior performance, closely aligning with the true values. This trend is also observed in Figure 7 for the refrigerator, where the hybrid model again outperforms the LSTM and GRU models, displaying lower prediction error and better tracking of the actual power usage patterns. Similarly, in Figure 8, which compares predictions for temperature and humidity, the hybrid model's predictions are the closest to the true values, highlighting its accuracy across different types of data. These results underscore the hybrid model's efficacy in real-time power forecasting, making it highly suitable for IoT applications in an office environment.
This study integrates edge and cloud AI into a sensor-node monitoring system for power forecasting in an office environment. It realizes an IoT implementation through an energy and environmental data acquisition and monitoring system with day-ahead predictions from different RNN-based models, and it supports real-time monitoring via centralized cloud and local dashboards. The performance metrics of the different AI models applied to office appliances and weather data are summarized in Table 2. Across the dataset, the hybrid model consistently outperformed the LSTM and GRU models on metrics including Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), and $R^2$, indicating its superior accuracy for real-time applications. For instance, in refrigerator forecasting, the hybrid model achieved an MAE of 2.12%, an RMSE of 7.77%, and an $R^2$ of 93.69%, surpassing both LSTM and GRU.
Figure 9 illustrates the error distribution for the prediction of actual values across five scenarios, AC, refrigerator, water dispenser, temperature, and humidity, using three models: LSTM, GRU, and the proposed model. The proposed model demonstrates a narrower and more concentrated error distribution around zero, indicating higher prediction accuracy and lower variability. On the other hand, the LSTM and GRU models exhibit wider error distributions, with some instances showing significant deviations from zero, particularly for the refrigerator and humidity predictions. This suggests that the proposed model effectively minimizes prediction errors, providing more reliable outputs compared to the LSTM and GRU models across all evaluated scenarios.
Table 3 illustrates the computational time per step, where the hybrid model is the fastest at 5 ms/step, compared to LSTM’s 7 ms/step and GRU’s 9 ms/step. These results highlight the hybrid model’s reliability, making it particularly suitable for real-time AIoT applications in energy management and environmental monitoring.

5. Conclusions

The integration of AI models into the SEEMP system has demonstrated significant potential for advancing real-time energy management and environmental monitoring in office environments. The system includes real-time data collection, secure transmission, and sophisticated AI-based forecasting, offering an effective solution for optimizing energy consumption and maintaining optimal environmental conditions in office spaces. The edge and cloud AI capabilities in a sensor-node monitoring system exemplify the potential for enhanced decision-making, improved user experience, and notable energy savings in IoT environments. Key findings from the comparative analysis of AI models, specifically the hybrid model, LSTM, and GRU, revealed that the hybrid model consistently outperforms the others in accuracy and efficiency. The hybrid model's superior performance is evident in its lower MAE and RMSE and higher $R^2$ values in applications such as refrigerator power forecasting, where it achieved an MAE of 2.12%, an RMSE of 7.77%, and an $R^2$ of 93.69%. These results underscore the hybrid model's efficacy and reliability, making it highly suitable for real-time IoT applications in office energy management and environmental monitoring. Additionally, the proposed system is mainly designed for office environments, and further customization and validation would be needed for it to be effectively applied in industrial or residential settings. Another challenge with IoT systems is that they often rely on low-cost sensors, which can introduce noise and affect the model's performance. In future work, the model could be improved by accounting for different types of noise to ensure more accurate predictions in a variety of situations.

Author Contributions

Conceptualization, M.M.R.; Methodology, M.M.R. and M.S.N.; Software, M.M.R.; Validation, M.S.N.; Formal analysis, M.I.J. and M.M.R.; Investigation, M.M.R. and M.I.J.; Data curation, M.I.J.; Writing—original draft, M.M.R. and M.S.N.; Writing—review & editing, M.M.R. and M.I.J.; Visualization, M.M.R.; Supervision, Y.M.J.; Project administration, Y.M.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding authors.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Data collection system for power forecasting.
Figure 2. Block diagram of data processing and AI model training.
Figure 3. Architecture of LSTM model.
Figure 4. Structure of GRU model.
Figure 5. Proposed architecture of LSTM-GRU hybrid model.
Figure 6. Comparison of actual and predicted value for (a) air conditioner and (b) water dispenser.
Figure 7. Comparison of actual and predicted value for refrigerator.
Figure 8. Comparison of actual and predicted value for (a) temperature and (b) humidity.
Figure 9. Comparison of actual and predicted values for (a) AC, (b) refrigerator, (c) water dispenser, (d) temperature, and (e) humidity.
Table 1. Proposed model hyperparameters.

Parameter            Value
No. of units         128, 64
Dropout rate         0.35
Loss function        MSE
Optimizer            Adam
Learning rate        0.001–0.00001
Window size          7
Epochs               300
Early stop patience  25
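The window size of 7 means that each training sample stacks the seven most recent readings as input, with the following reading as the prediction target. A minimal sliding-window sketch of this preprocessing step (the helper name is illustrative, not the authors' code; `series` is assumed to be a univariate list of readings):

```python
def make_windows(series, window_size=7):
    # Split a univariate time series into (input window, next-step target)
    # pairs, as required for training sequence models such as LSTM/GRU.
    X, y = [], []
    for i in range(len(series) - window_size):
        X.append(series[i:i + window_size])  # previous `window_size` readings
        y.append(series[i + window_size])    # value to predict
    return X, y
```

Each `(X[i], y[i])` pair can then be fed to the stacked LSTM-GRU network configured with the hyperparameters in Table 1.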
Table 2. Performance metrics for different models on office appliances and weather data.

Types             Component        Model         MAE (%)  RMSE (%)  R² (%)
Office Appliance  Water Dispenser  LSTM          6.47     13.22     58.61
                                   GRU           5.33     10.42     64.34
                                   Hybrid model  3.78     8.15      82.04
                  Air Conditioner  LSTM          3.1      6.2       95.21
                                   GRU           2.38     5.39      92.5
                                   Hybrid model  2.20     4.91      96.06
                  Refrigerator     LSTM          6.89     11.66     80.01
                                   GRU           8.24     13.84     71.64
                                   Hybrid model  2.12     7.77      93.69
Weather Data      Temperature      LSTM          8.66     15.05     47.08
                                   GRU           9.59     15.85     39.01
                                   Hybrid model  0.09     0.22      99.99
                  Humidity         LSTM          4.05     5.97      25.32
                                   GRU           4.54     6.10      17.32
                                   Hybrid model  0.08     0.26      99.98
Table 3. Computational time comparison.

Model           LSTM  GRU  Hybrid Model
Time (ms/step)  7     9    5
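Per-step inference times such as those in Table 3 can be obtained by averaging wall-clock time over a batch of predictions. A simple timing sketch (the helper and the dummy `predict_fn` are illustrative, not from the paper's benchmark code):

```python
import time

def time_per_step(predict_fn, batches):
    # Average wall-clock time per prediction step, in milliseconds.
    start = time.perf_counter()
    for batch in batches:
        predict_fn(batch)
    elapsed = time.perf_counter() - start
    return elapsed * 1000 / len(batches)
```

Averaging over many steps smooths out scheduler jitter, which matters when the differences between models are only a few milliseconds.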

Share and Cite

MDPI and ACS Style

Rahman, M.M.; Joha, M.I.; Nazim, M.S.; Jang, Y.M. Enhancing IoT-Based Environmental Monitoring and Power Forecasting: A Comparative Analysis of AI Models for Real-Time Applications. Appl. Sci. 2024, 14, 11970. https://doi.org/10.3390/app142411970
