Artificial Intelligence and Machine Learning Applications in Smart Energy Systems

A special issue of Energies (ISSN 1996-1073). This special issue belongs to the section "F5: Artificial Intelligence and Smart Energy".

Deadline for manuscript submissions: closed (10 July 2025) | Viewed by 14471

Special Issue Editors


Dr. Marcin Blachnik
Guest Editor
Department of Industrial Informatics, Faculty of Materials Engineering, Silesian University of Technology, Akademicka 2A, 44-100 Gliwice, Poland
Interests: data mining; artificial intelligence; machine learning; energy systems

Prof. Dr. Grzegorz Dudek
Guest Editor
Department of Automatic Control, Electrical Engineering and Optoelectronics, Faculty of Electrical Engineering, Częstochowa University of Technology, Al. Armii Krajowej 17, 42-200 Częstochowa, Poland
Interests: machine learning; evolutionary computation; artificial intelligence; pattern recognition; data mining and applications in forecasting, classification, regression, and optimization problems

Special Issue Information

Dear Colleagues,

We are delighted to announce a Special Issue on "Artificial Intelligence and Machine Learning Applications in Smart Energy Systems." The main goal of this Special Issue is to bring together the latest research and developments in the areas of artificial intelligence (AI) and machine learning (ML) for smart energy systems.

As the demand for energy continues to increase, smart energy systems are playing an ever greater role in addressing the challenges associated with energy generation, distribution, and consumption. AI and ML have been identified as promising approaches to these challenges, as they can improve the efficiency, reliability, and sustainability of smart energy systems. This Special Issue therefore aims to present original research articles, review papers, and case studies that demonstrate innovative applications of AI and ML in smart energy systems.

Topics of interest include, but are not limited to:

  • Machine learning for energy forecasting;
  • Artificial intelligence in demand response;
  • Intelligent control and optimization of energy systems;
  • Big data analytics for smart grids;
  • Reinforcement learning for energy management;
  • Deep learning for energy system modeling and simulation;
  • Cybersecurity and privacy in smart energy systems;
  • Human–machine interactions and decision making in smart energy systems.

We invite researchers and practitioners to submit their original research and review papers on these and other related topics. All submitted manuscripts will undergo a rigorous peer-review process to ensure that they are of high quality and original. We look forward to your valuable contributions to this Special Issue.

Dr. Marcin Blachnik
Prof. Dr. Grzegorz Dudek
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Energies is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • smart energy systems
  • modeling
  • energy market
  • smart homes
  • renewable energy sources
  • smart modeling
  • machine learning
  • optimization
  • artificial intelligence
  • forecasting
  • load management
  • renewable energy integration
  • energy efficiency optimization
  • demand response
  • power system stability
  • fault detection and diagnosis
  • cybersecurity in energy systems
  • big data analytics
  • Internet of Things (IoT)
  • distributed energy resources
  • energy storage systems

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (9 papers)


Research


32 pages, 7034 KB  
Article
Short-Term Electrical Load Forecasting Based on XGBoost Model
by Hristo Ivanov Beloev, Stanislav Radikovich Saitov, Antonina Andreevna Filimonova, Natalia Dmitrievna Chichirova, Oleg Evgenievich Babikov and Iliya Krastev Iliev
Energies 2025, 18(19), 5144; https://doi.org/10.3390/en18195144 - 27 Sep 2025
Viewed by 471
Abstract
Forecasting electricity consumption is one of the most important scientific and practical tasks in the field of electric power engineering. The forecast accuracy directly impacts the operational efficiency of the entire power system and the performance of electricity markets. This paper proposes algorithms for source data preprocessing and tuning XGBoost models to obtain the most accurate forecast profiles. The initial data included hourly electricity consumption volumes and meteorological conditions in the power system of the Republic of Tatarstan for the period from 2013 to 2025. The novelty of the study lies in defining and justifying the optimal model training period and developing a new evaluation metric for assessing model efficiency—financial losses in Balancing Energy Market operations. It was shown that the optimal depth of the training dataset is 10 years. It was also demonstrated that the use of traditional metrics (MAE, MAPE, MSE, etc.) as loss functions during training does not always yield the most effective model for market conditions. The MAPE, MAE, and financial loss values for the most accurate model, evaluated on validation data from the first 5 months of 2025, were 1.411%, 38.487 MWh, and 16,726,062 RUR, respectively. Meanwhile, the metrics for the most commercially effective model were 1.464%, 39.912 MWh, and 15,961,596 RUR, respectively. Full article
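For readers unfamiliar with this modelling setup, the sketch below illustrates the general idea of training an XGBoost regressor on lagged hourly load and calendar features and scoring it with an asymmetric cost in addition to MAPE. It is not the authors' code: the data, features, and penalty coefficients are invented placeholders standing in for the paper's balancing-market loss.

```python
# Minimal sketch (not the authors' code): hourly load forecasting with XGBoost
# using lag/calendar features and a custom evaluation metric as a stand-in for
# the paper's financial-loss criterion. Data here are synthetic.
import numpy as np
import pandas as pd
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
idx = pd.date_range("2015-01-01", "2024-12-31 23:00", freq="h")
load = 3000 + 500 * np.sin(2 * np.pi * idx.hour / 24) + rng.normal(0, 60, len(idx))
df = pd.DataFrame({"load": load}, index=idx)

# Lagged and calendar features (24 h and 168 h lags are common choices).
df["lag24"] = df["load"].shift(24)
df["lag168"] = df["load"].shift(168)
df["hour"] = df.index.hour
df["dow"] = df.index.dayofweek
df = df.dropna()

train, test = df.loc[:"2023"], df.loc["2024":]
X_cols = ["lag24", "lag168", "hour", "dow"]

model = XGBRegressor(n_estimators=500, max_depth=6, learning_rate=0.05)
model.fit(train[X_cols], train["load"])
pred = model.predict(test[X_cols])

# Stand-in "imbalance cost": penalize under-forecasts more than over-forecasts,
# loosely mimicking asymmetric balancing-market settlement (coefficients are made up).
err = test["load"].values - pred
cost = np.where(err > 0, 1.5 * err, -0.8 * err).sum()
mape = np.mean(np.abs(err) / test["load"].values) * 100
print(f"MAPE: {mape:.3f}%  stand-in imbalance cost: {cost:,.0f}")
```

The asymmetric weighting is only meant to show why a model that minimizes MAE or MAPE need not minimize market cost, which is the trade-off the paper quantifies.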

30 pages, 7088 KB  
Article
Cascade Hydropower Plant Operational Dispatch Control Using Deep Reinforcement Learning on a Digital Twin Environment
by Erik Rot Weiss, Robert Gselman, Rudi Polner and Riko Šafarič
Energies 2025, 18(17), 4660; https://doi.org/10.3390/en18174660 - 2 Sep 2025
Viewed by 556
Abstract
In this work, we propose the use of a reinforcement learning (RL) agent for the control of a cascade hydropower plant system. Generally, this job is handled by power plant dispatchers who manually adjust power plant electricity production to meet the changing demand set by energy traders. This work explores the more fundamental problem with the cascade hydropower plant operation of flow control for power production in a highly nonlinear setting on a data-based digital twin. Using deep deterministic policy gradient (DDPG), twin delayed DDPG (TD3), soft actor-critic (SAC), and proximal policy optimization (PPO) algorithms, we can generalize the characteristics of the system and determine the human dispatcher level of control of the entire system of eight hydropower plants on the river Drava in Slovenia. The creation of an RL agent that makes decisions similar to a human dispatcher is not only interesting in terms of control but also in terms of long-term decision-making analysis in an ever-changing energy portfolio. The specific novelty of this work is in training an RL agent on an accurate testing environment of eight real-world cascade hydropower plants on the river Drava in Slovenia and comparing the agent’s performance to human dispatchers. The results show that the RL agent’s absolute mean error of 7.64 MW is comparable to the general human dispatcher’s absolute mean error of 5.8 MW at a peak installed power of 591.95 MW. Full article
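As a rough illustration of how such an agent might be trained, the sketch below defines a toy single-reservoir Gymnasium environment and fits a SAC agent from stable-baselines3. It is not the paper's digital twin: the dynamics, inflow pattern, and reward are invented, and the real study covers eight cascaded plants with DDPG, TD3, SAC, and PPO.

```python
# Minimal sketch (not the authors' digital twin): a toy single-reservoir
# dispatch environment and an off-the-shelf SAC agent from stable-baselines3.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import SAC

class ToyHydroEnv(gym.Env):
    """State: [reservoir level, demand]; action: turbine flow in [0, 1]."""
    def __init__(self):
        self.observation_space = spaces.Box(low=0.0, high=1.0, shape=(2,), dtype=np.float32)
        self.action_space = spaces.Box(low=0.0, high=1.0, shape=(1,), dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.level, self.demand, self.t = 0.5, 0.5, 0
        return np.array([self.level, self.demand], dtype=np.float32), {}

    def step(self, action):
        flow = float(np.clip(action[0], 0.0, 1.0))
        inflow = 0.3 + 0.1 * np.sin(self.t / 24 * 2 * np.pi)       # assumed inflow pattern
        self.level = float(np.clip(self.level + 0.1 * (inflow - flow), 0.0, 1.0))
        power = flow * self.level                                   # crude head-times-flow proxy
        self.demand = 0.5 + 0.3 * np.sin(self.t / 24 * 2 * np.pi)
        reward = -abs(power - self.demand)                          # track the dispatch setpoint
        self.t += 1
        terminated = self.level <= 0.0
        truncated = self.t >= 240
        return np.array([self.level, self.demand], dtype=np.float32), reward, terminated, truncated, {}

model = SAC("MlpPolicy", ToyHydroEnv(), verbose=0)
model.learn(total_timesteps=10_000)
```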

29 pages, 7926 KB  
Article
Application of Artificial Intelligence Methods in the Analysis of the Cyclic Durability of Superconducting Fault Current Limiters Used in Smart Power Systems
by Sylwia Hajdasz, Marek Wróblewski, Adam Kempski and Paweł Szcześniak
Energies 2025, 18(17), 4563; https://doi.org/10.3390/en18174563 - 28 Aug 2025
Viewed by 535
Abstract
This article presents a preliminary study on the potential application of artificial intelligence methods for assessing the durability of HTS tapes in superconducting fault current limiters (SFCLs). Despite their importance for the selectivity and reliability of power networks, these devices remain at the prototype testing stage, and the phenomena occurring in HTS tapes during their operation—particularly the degradation of tapes due to cyclic transitions into the resistive state—are difficult to model owing to their highly non-linear and dynamic nature. A concept of an engineering decision support system (EDSS) has been proposed, which, based on macroscopically measurable parameters (dissipated energy and the number of transitions), aims to enable the prediction of tape parameter degradation. Within the scope of the study, five approaches were tested and compared: Gaussian process regression (GPR) with various kernel functions, k-nearest neighbours (k-NN) regression, the random forest (RF) algorithm, piecewise cubic Hermite interpolating polynomial (PCHIP) interpolation, and polynomial approximation. All models were trained on a limited set of experimental data. Despite the quantitative limitations and simplicity of the adopted methods, the results indicate that even simple GPR models can support the detection of HTS tape degradation in scenarios where direct measurement of the critical current is not feasible. This work constitutes a first step towards the construction of a complete EDSS and outlines directions for further research, including the need to expand the dataset, improve validation, analyse uncertainty, and incorporate physical constraints into the models. Full article
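A minimal sketch of the GPR idea follows: predicting a degradation indicator from dissipated energy and the number of transitions with scikit-learn, alongside a random forest baseline. The feature scales, target relationship, and sample size are synthetic assumptions, not the experimental data.

```python
# Minimal sketch (not the authors' EDSS): Gaussian process regression with an
# RBF kernel predicting a degradation indicator (e.g., normalized critical
# current) from dissipated energy and the number of quench transitions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 40                                   # small sample, mirroring limited experiments
X = np.column_stack([
    rng.uniform(0, 500, n),              # dissipated energy [J], assumed scale
    rng.integers(1, 100, n),             # number of resistive transitions
])
y = 1.0 - 0.0005 * X[:, 0] - 0.002 * X[:, 1] + rng.normal(0, 0.01, n)  # toy Ic/Ic0

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=[100.0, 20.0]) + WhiteKernel(),
                               normalize_y=True)
gpr.fit(X, y)

rf = RandomForestRegressor(n_estimators=200, random_state=1).fit(X, y)

x_new = np.array([[300.0, 50]])
mean, std = gpr.predict(x_new, return_std=True)
print(f"GPR: {mean[0]:.3f} ± {std[0]:.3f}   RF: {rf.predict(x_new)[0]:.3f}")
```

The predictive standard deviation returned by the GPR is what makes it attractive for a decision support setting where measurement of the critical current itself is not feasible.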

24 pages, 4037 KB  
Article
Modelling the Temperature of a Data Centre Cooling System Using Machine Learning Methods
by Adam Kula, Daniel Dąbrowski, Marcin Blachnik, Maciej Sajkowski, Albert Smalcerz and Zygmunt Kamiński
Energies 2025, 18(10), 2581; https://doi.org/10.3390/en18102581 - 16 May 2025
Cited by 1 | Viewed by 1011
Abstract
Reducing the energy consumption of a data centre while maintaining the requirements of the compute resources is a challenging problem that requires intelligent system design. It becomes even more challenging when dealing with an operating data centre. To achieve that goal without compromising the working conditions of the compute resources, a temperature model is needed that estimates the temperature within the hot corridor of the cooling system based on external weather conditions and internal conditions such as server energy consumption and cooling system state. In this paper, we discuss the dataset creation process as well as the process of evaluating a model for forecasting the temperature in the warm corridor of the data centre. The proposed solution compares two new neural network architectures, namely the Time-Series Dense Encoder (TiDE) and the Time-Series Mixer (TSMixer), with classical methods such as Random Forest, XGBoost, and AutoARIMA. The obtained results indicate that the lowest prediction error was achieved by the TiDE model, with an N-RMSE of 0.1270, followed by the XGBoost model with an N-RMSE of 0.1275. Additional analysis reveals a limitation of the XGBoost model, which tends to underestimate temperature as it approaches higher values; this is particularly important for avoiding safety-condition violations of the compute units. Full article
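For clarity, the snippet below shows one common way to compute the N-RMSE score used for comparison, together with a simple diagnostic for the high-temperature underestimation bias noted for XGBoost. The normalization convention (RMSE divided by the observed range) and the synthetic numbers are assumptions, not taken from the paper.

```python
# Minimal sketch (not the study's code): an N-RMSE score and a simple check for
# underestimation bias at high temperatures. y_true / y_pred stand in for
# hot-aisle temperature observations and forecasts.
import numpy as np

def n_rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """RMSE normalized by the observed range of the target (one common convention)."""
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return float(rmse / (y_true.max() - y_true.min()))

def high_temperature_bias(y_true: np.ndarray, y_pred: np.ndarray, q: float = 0.9) -> float:
    """Mean signed error on the hottest q-quantile samples; negative = underestimation."""
    hot = y_true >= np.quantile(y_true, q)
    return float(np.mean(y_pred[hot] - y_true[hot]))

# Toy usage with synthetic numbers:
rng = np.random.default_rng(0)
y_true = 28 + 4 * rng.random(1000)
y_pred = y_true - 0.3 * (y_true - y_true.mean()).clip(min=0) + rng.normal(0, 0.2, 1000)
print(n_rmse(y_true, y_pred), high_temperature_bias(y_true, y_pred))
```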

21 pages, 2816 KB  
Article
TSMixer- and Transfer Learning-Based Highly Reliable Prediction with Short-Term Time Series Data in Small-Scale Solar Power Generation Systems
by Younjeong Lee and Jongpil Jeong
Energies 2025, 18(4), 765; https://doi.org/10.3390/en18040765 - 7 Feb 2025
Cited by 3 | Viewed by 1653
Abstract
With the surge in energy demand worldwide, renewable energy is becoming increasingly important. Solar power, in particular, is positioning itself as a sustainable and environmentally friendly alternative, and is increasingly playing a role not only in large-scale power plants but also in small-scale home power generation systems. However, small-scale power generation systems face challenges in the development of efficient prediction models because of the lack of data and variability in power generation owing to weather conditions. In this study, we propose a novel forecasting framework that combines transfer learning and dynamic time warping (DTW) to address these issues. We present a transfer learning-based prediction system design that can maintain high prediction performance even in data-poor environments. In the process of developing a prediction model suitable for the target domain by utilizing multi-source data, we propose a data similarity evaluation method using DTW, which demonstrates excellent performance with low error rates in the MSE and MAE metrics compared with conventional long short-term memory (LSTM) and Transformer models. This research not only contributes to maximizing the energy efficiency of small-scale PV power generation systems and improving energy independence but also provides a methodology that can maintain high reliability in data-poor environments. Full article
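The role of DTW in source selection can be illustrated with a short sketch: candidate source PV series are ranked by DTW distance to a brief target series, and the closest one is chosen for pre-training. The series and plant names are synthetic placeholders; the paper's actual similarity procedure and transfer steps are not reproduced here.

```python
# Minimal sketch (not the paper's framework): ranking candidate source PV series by
# dynamic time warping (DTW) distance to a short target series, to choose which
# source to pre-train on before fine-tuning on the data-poor target domain.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a)*len(b)) DTW with absolute-difference local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

rng = np.random.default_rng(3)
hours = np.arange(24 * 7)
target = np.clip(np.sin(2 * np.pi * hours / 24), 0, None) + rng.normal(0, 0.05, len(hours))
sources = {
    "plant_A": np.clip(np.sin(2 * np.pi * hours / 24 + 0.2), 0, None) * 1.1,
    "plant_B": np.clip(np.sin(2 * np.pi * hours / 24), 0, None) * 0.4 + 0.3,
}
ranked = sorted(sources, key=lambda k: dtw_distance(target, sources[k]))
print("pre-train on:", ranked[0])   # the most similar source domain
```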

21 pages, 3528 KB  
Article
Short-Term Energy Generation Forecasts at a Wind Farm—A Multi-Variant Comparison of the Effectiveness and Performance of Various Gradient-Boosted Decision Tree Models
by Marcin Kopyt, Paweł Piotrowski and Dariusz Baczyński
Energies 2024, 17(23), 6194; https://doi.org/10.3390/en17236194 - 9 Dec 2024
Cited by 1 | Viewed by 1197
Abstract
High-quality short-term forecasts of wind farm generation are crucial for the dynamically developing renewable energy generation sector. This article addresses the selection of appropriate gradient-boosted decision tree models (GBDT) for forecasting wind farm energy generation with a 10-min time horizon. In most forecasting studies, authors utilize a single gradient-boosted decision tree model and compare its performance with other machine learning (ML) techniques and sometimes with a naive baseline model. This paper proposes a comprehensive comparison of all gradient-boosted decision tree models (GBDTs, eXtreme Gradient Boosting (XGBoost), Light Gradient-Boosting Machine (LightGBM), and Categorical Boosting (CatBoost)) used for forecasting. The objective is to evaluate each model in terms of forecasting accuracy for wind farm energy generation (forecasting error) and computational time during model training. Computational time is a critical factor due to the necessity of testing numerous models with varying hyperparameters to identify the optimal settings that minimize forecasting error. Forecast quality using default hyperparameters is used here as a reference. The research also seeks to determine the most effective sets of input variables for the predictive models. The article concludes with findings and recommendations regarding the preferred GBDT models. Among the four tested models, the oldest GBDT model demonstrated a significantly longer training time, which should be considered a major drawback of this implementation of gradient-boosted decision trees. In terms of model quality testing, the lowest nRMSE error was achieved by the oldest model—GBDT in its tuned version (with the best hyperparameter values obtained from exploring 40,000 combinations). Full article
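A bare-bones version of such a comparison might look like the sketch below: the four GBDT implementations are fitted with default hyperparameters on a synthetic regression task while training time and nRMSE are recorded. The dataset and timing setup are illustrative stand-ins for the article's 10-min wind generation data and tuned configurations.

```python
# Minimal sketch (not the article's experiment): timing the four GBDT families with
# default hyperparameters on a synthetic regression task and reporting nRMSE.
import time
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingRegressor
from xgboost import XGBRegressor
from lightgbm import LGBMRegressor
from catboost import CatBoostRegressor

X, y = make_regression(n_samples=20_000, n_features=12, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "GBDT (sklearn)": GradientBoostingRegressor(),
    "XGBoost": XGBRegressor(),
    "LightGBM": LGBMRegressor(),
    "CatBoost": CatBoostRegressor(verbose=0),
}

for name, model in models.items():
    t0 = time.perf_counter()
    model.fit(X_tr, y_tr)                         # default hyperparameters, as a reference
    train_time = time.perf_counter() - t0
    pred = model.predict(X_te)
    nrmse = np.sqrt(np.mean((y_te - pred) ** 2)) / (y_te.max() - y_te.min())
    print(f"{name:>16s}  train {train_time:6.2f} s   nRMSE {nrmse:.4f}")
```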

18 pages, 4375 KB  
Article
Research on Oil Well Production Prediction Based on GRU-KAN Model Optimized by PSO
by Bo Qiu, Jian Zhang, Yun Yang, Guangyuan Qin, Zhongyi Zhou and Cunrui Ying
Energies 2024, 17(21), 5502; https://doi.org/10.3390/en17215502 - 4 Nov 2024
Cited by 16 | Viewed by 2283
Abstract
Accurately predicting oil well production volume is of great significance in oilfield production. To overcome the shortcomings in the current study of oil well production prediction, we propose a hybrid model (GRU-KAN) with the gated recurrent unit (GRU) and Kolmogorov–Arnold network (KAN). The GRU-KAN model utilizes GRU to extract temporal features and KAN to capture complex nonlinear relationships. First, the MissForest algorithm is employed to handle anomalous data, improving data quality. The Pearson correlation coefficient is used to select the most significant features. These selected features are used as input to the GRU-KAN model to establish the oil well production prediction model. Then, the Particle Swarm Optimization (PSO) algorithm is used to enhance the predictive performance. Finally, the model is evaluated on the test set. The validity of the model was verified on two oil wells and the results on well F14 show that the proposed GRU-KAN model achieves a Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE) and Coefficient of Determination (R2) values of 11.90, 9.18, 6.0% and 0.95, respectively. Compared to popular single and hybrid models, the GRU-KAN model achieves higher production-prediction accuracy and higher computational efficiency. The model can be applied to the formulation of oilfield-development plans, which is of great theoretical and practical significance to the advancement of oilfield technology levels. Full article
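To make the pipeline concrete, the sketch below shows Pearson-based feature selection followed by a GRU encoder with a small feed-forward head standing in for the KAN layer; the MissForest imputation and PSO hyperparameter search are omitted, and all data, feature names, and thresholds are synthetic assumptions rather than the authors' setup.

```python
# Minimal sketch (not the authors' GRU-KAN): Pearson feature selection plus a GRU
# encoder with a simple feed-forward head used here as a stand-in for the KAN layer.
import numpy as np
import pandas as pd
import torch
import torch.nn as nn

rng = np.random.default_rng(4)
n = 500
df = pd.DataFrame({
    "choke": rng.uniform(20, 60, n),            # hypothetical operating variables
    "tubing_pressure": rng.uniform(5, 15, n),
    "water_cut": rng.uniform(0.1, 0.9, n),
    "noise_var": rng.normal(0, 1, n),
})
df["oil_rate"] = 2.0 * df["choke"] - 30 * df["water_cut"] + rng.normal(0, 5, n)

# Keep features whose absolute Pearson correlation with the target exceeds a threshold.
corr = df.corr()["oil_rate"].drop("oil_rate").abs()
features = corr[corr > 0.1].index.tolist()

class GRUForecaster(nn.Module):
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, 16), nn.SiLU(), nn.Linear(16, 1))

    def forward(self, x):                      # x: (batch, seq_len, n_features)
        out, _ = self.gru(x)
        return self.head(out[:, -1, :])        # predict next-step production

# Sliding windows of length 30 over the selected features.
seq_len = 30
vals = df[features].to_numpy(dtype=np.float32)
target = df["oil_rate"].to_numpy(dtype=np.float32)
X = np.stack([vals[i:i + seq_len] for i in range(n - seq_len)])
y = target[seq_len:]

model = GRUForecaster(len(features))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(50):                            # short full-batch demo training loop
    opt.zero_grad()
    loss = loss_fn(model(torch.from_numpy(X)).squeeze(-1), torch.from_numpy(y))
    loss.backward()
    opt.step()
```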

21 pages, 985 KB  
Article
Integrating an Ensemble Reward System into an Off-Policy Reinforcement Learning Algorithm for the Economic Dispatch of Small Modular Reactor-Based Energy Systems
by Athanasios Ioannis Arvanitidis and Miltiadis Alamaniotis
Energies 2024, 17(9), 2056; https://doi.org/10.3390/en17092056 - 26 Apr 2024
Cited by 6 | Viewed by 1568
Abstract
Nuclear Integrated Energy Systems (NIES) have emerged as a comprehensive solution for navigating the changing energy landscape. They combine nuclear power plants with renewable energy sources, storage systems, and smart grid technologies to optimize energy production, distribution, and consumption across sectors, improving efficiency, reliability, and sustainability while addressing challenges associated with variability. The integration of Small Modular Reactors (SMRs) in NIES offers significant benefits over traditional nuclear facilities, although transferring involves overcoming legal and operational barriers, particularly in economic dispatch. This study proposes a novel off-policy Reinforcement Learning (RL) approach with an ensemble reward system to optimize economic dispatch for nuclear-powered generation companies equipped with an SMR, demonstrating superior accuracy and efficiency when compared to conventional methods and emphasizing RL’s potential to improve NIES profitability and sustainability. Finally, the research attempts to demonstrate the viability of implementing the proposed integrated RL approach in spot energy markets to maximize profits for nuclear-driven generation companies, establishing NIES’ profitability over competitors that rely on fossil fuel-based generation units to meet baseload requirements. Full article
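The ensemble-reward idea can be sketched as a wrapper that blends several reward components into the scalar signal seen by an off-policy agent. The environment name, info keys, and weights below are hypothetical placeholders; the paper's actual reward design is not reproduced.

```python
# Minimal sketch (not the paper's method): a reward wrapper combining several reward
# signals into one ensemble reward, applied to a hypothetical SMR dispatch
# environment (SMRDispatchEnv and the info keys are placeholder names).
import numpy as np
import gymnasium as gym

class EnsembleRewardWrapper(gym.Wrapper):
    """Blend multiple reward functions computed from the step's info dict."""
    def __init__(self, env, reward_fns, weights=None):
        super().__init__(env)
        self.reward_fns = reward_fns
        self.weights = np.asarray(weights if weights is not None else np.ones(len(reward_fns)))

    def step(self, action):
        obs, _, terminated, truncated, info = self.env.step(action)
        components = np.array([fn(info) for fn in self.reward_fns])
        reward = float(self.weights @ components)   # weighted blend of the ensemble
        info["reward_components"] = components
        return obs, reward, terminated, truncated, info

# Example component rewards; the `info` keys are assumptions for illustration.
profit = lambda info: info.get("revenue", 0.0) - info.get("fuel_cost", 0.0)
ramping_penalty = lambda info: -abs(info.get("ramp_mw", 0.0))
imbalance_penalty = lambda info: -abs(info.get("schedule_deviation_mw", 0.0))

# env = EnsembleRewardWrapper(SMRDispatchEnv(), [profit, ramping_penalty, imbalance_penalty],
#                             weights=[1.0, 0.2, 0.5])
```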

Review


22 pages, 1082 KB  
Review
Leveraging Machine Learning in Next-Generation Climate Change Adaptation Efforts by Increasing Renewable Energy Integration and Efficiency
by Izabela Rojek, Dariusz Mikołajewski, Marek Andryszczyk, Tomasz Bednarek and Krzysztof Tyburek
Energies 2025, 18(13), 3315; https://doi.org/10.3390/en18133315 - 24 Jun 2025
Viewed by 3308
Abstract
This article examines the growing role of machine learning (ML) in promoting next-generation climate change adaptation through the improved integration and performance of renewable energy systems. As climate change accelerates, innovative solutions are urgently needed to enhance the resilience and sustainability of energy infrastructure. ML offers powerful capabilities to handle complex data sets, forecast energy supply and demand, and optimize grid operations. This review highlights key applications of ML, such as predictive maintenance, intelligent grid management, and the real-time optimization of renewable energy resources. It also examines current challenges, including data availability, model transparency, and the need for interdisciplinary collaboration, both in technology development and in policy and regulation. By synthesizing recent research and case studies, this article shows how ML can significantly improve the performance, reliability, and scalability of renewable energy systems. This review emphasizes the importance of aligning technological advances with policy and infrastructure development. Successful implementation requires not only ensuring technological capabilities (robust infrastructure, structured data sets, and interdisciplinary collaboration) but also the careful consideration and alignment of ethical and regulatory factors from strategic to regional and local levels. Machine learning is becoming a key enabler for the transition to more adaptive, efficient, and low-carbon energy systems in response to climate change. Full article
