Article

Application of Artificial Intelligence Algorithms in Multilayer Perceptron and Elman Networks to Predict Photovoltaic Power Plant Generation

1
Department of Electrical and Computer Engineering Fundamentals, Rzeszow University of Technology, Al. Powstańców Warszawy 12, 35-959 Rzeszow, Poland
2
Department of Photonics, Electronics and Lighting Technology, Bialystok University of Technology, Wiejska 45A, 15-351 Bialystok, Poland
3
Faculty of Electrical and Computer Engineering, Cracow University of Technology, Warszawska 24, 31-155 Krakow, Poland
*
Author to whom correspondence should be addressed.
Energies 2023, 16(18), 6697; https://doi.org/10.3390/en16186697
Submission received: 16 August 2023 / Revised: 11 September 2023 / Accepted: 15 September 2023 / Published: 19 September 2023
(This article belongs to the Special Issue Recent Advances in Solar Cells and Photovoltaics)

Abstract
This paper presents the models developed for the short-term forecasting of energy production by photovoltaic panels. An analysis of a set of weather factors influencing daily energy production is presented. Determining the correlation between the produced direct current (DC) energy and the individual weather parameters allowed the selection of the potentially best explanatory factors, which served as input data for the neural networks. The forecasting models were based on MLP and Elman-type networks. An appropriate selection of structures and learning parameters was carried out, as well as the process of learning the models. The models were built based on different time periods: year-round, semi-annual, and seasonal. The models were developed separately for monocrystalline and amorphous photovoltaic modules. The study compared models using predicted and directly measured insolation. In addition, complex forecasting models were developed for the photovoltaic system, which could forecast DC and AC energy simultaneously. The complex models were developed according to the rules of global and local modeling. The forecast errors of the developed models are reported. The smallest values of the DC energy forecast errors were achieved for the models designed for summer forecasts. The percentage forecast error was 1.95% using directly measured solar irradiance and 5.57% using predicted solar irradiance. The complex model for summer forecasted the AC energy with an error of 1.86%.

1. Introduction

Worldwide as well as in Poland, the number of photovoltaic installations is increasing year by year. This fact can be attributed mainly to two reasons: rising energy costs and attempts to combat climate change. A photovoltaic system allows a certain degree of independence from rising energy prices. In order to make the best use of its possibilities, the aim should be to maximize the degree of utilization of the energy produced. An important element in this case is the ability to predict energy production in the future [1].
Energy production from renewable energy sources (RESs), especially photovoltaic panels, is heavily reliant on weather conditions and solar irradiance [2]. Key factors include global horizontal solar irradiance, diffuse horizontal solar irradiance, temperature, wind speed and direction, humidity, precipitation, and hours of sunshine [3,4].

1.1. Challenges Arising from the Transition to New Power Systems

Fluctuations in RES production must be continuously compensated for by conventional sources using regulation reserves or, in the future, through energy storage facilities [5,6]. Short-term variations in RES energy production due to sudden weather changes can pose significant risks to the stability of the energy system [7]. Consequently, as the proportion of renewable energy in the overall energy mix grows, accurate short-term production forecasting and real-time optimization become increasingly vital [8,9].
Renewable energy sources are connected to the power grid using power electronic converters [10,11]. Replacing conventional generators reduces the inertia of the power grid [12]. The inclusion of a large number of RESs creates difficulties with frequency stability [13], which is a major problem in their development [14,15]. Improving the stability of the power system can also be achieved by using a virtual synchronous generator that minimizes the frequency fluctuations [12,16,17]. Battery energy storage systems (BESSs) can be used to simulate the inertia response of synchronous generators [12,18].

1.2. Artificial Intelligence Algorithms in Energy Forecasting

The utilization of artificial neural networks for predicting power generation in photovoltaic systems has gained immense popularity. Various types of neural networks, fuzzy logic, and machine learning algorithms are employed, with the selection contingent on factors such as the forecast horizon, the accessibility of diverse historical data, and forecasts derived from numerical weather prediction. Among these techniques, the multilayer perceptron (MLP) neural network stands out as the most commonly employed approach for forecasting energy production and power in photovoltaic systems. Three categories of methods are used in power forecasting for photovoltaic systems: statistical methods, physical methods, and methods using artificial intelligence [19,20]. Ehsan, Simon, and Venkateswaran proposed a multi-layer perceptron-based artificial neural network (ANN) model for power generation prediction [21]. AlShafeey and Csáki’s work analyzed ANN and multiple regression (MR) PV power forecasting performance over structural, time series, and hybrid input data methods [22]. The high relative variability of irradiance conditions makes network learning less reliable [23]. The article “One day ahead forecasting of energy generation in photovoltaic systems” presents the influence of selected weather factors on the accuracy of forecasts and the results of a study of the influence of the model structure (in the form of an MLP neural network) on the forecast accuracy [24]. It should also be taken into account that the accuracy of forecasts largely depends on the selection of explanatory data [25].
Many types of neural networks are used in forecasting, e.g., radial basis function (RBF) neural networks [19] or recurrent neural networks (RNNs) [26]. In addition, the support vector machine (SVM) algorithm [19,27], linear regression [28], or probabilistic models [29] are used in forecasting. The use of fuzzy logic in PV energy prognosis also has a strong rationale, as evidenced by publications [30]. Hybrid methods involve the fusion of multiple distinct techniques, strategically combined to leverage the strengths of each approach and overcome the limitations of individual models. By integrating diverse models with unique features, these methods aim to amplify the forecast performance significantly. The synergy achieved through this blending process enables more accurate and robust predictions, empowering decision-makers in various fields to make well-informed choices based on reliable forecasts [31,32]. The research shows that it should not be expected that a method for predicting photovoltaic power can obtain the same results regardless of the meteorological conditions at the location where it is applied [33,34]. Forecasts for the next 24 h are used in energy market transactions. Even small improvements in their quality translate into greater system security and savings for the economy [35]. Recently, algorithms based on deep learning such as long short-term memory (LSTM) [36] or convolutional neural networks [37] have also been used for energy forecasting over different time horizons.
The main objectives of the research are as follows:
  • Development and comparison of forecasting models using MLP and Elman networks for monocrystalline and amorphous panels;
  • Development of year-round, semi-annual, and seasonal models and their comparison;
  • Comparison of the accuracy of forecasting models for forecasted and measured solar irradiance;
  • Algorithm for the selection of weather factors to obtain the best forecasting models;
  • Development of a complex global forecasting model for a photovoltaic system with simultaneous DC and AC energy forecasting.
The remainder of this article is organized as follows: Section 2 describes the dataset and methods. Section 3 presents neural networks in electricity production forecasting and experimental results. Section 4 contains a discussion. Section 5 describes complex forecasting models for photovoltaic systems. Section 6 includes comparisons with the state of the art. Section 7 presents limitations. Section 8 provides conclusions drawn from the study.

2. Data and Methods

2.1. Acquisition of a Dataset for Network Learning

The dataset for the produced energy came from two photovoltaic systems installed at the Rzeszów University of Technology. The first installation comprised three monocrystalline modules with a total area of 2.16 m2 and a rated power of 330 Wp, connected to an inverter. The modules were mounted on a rigid frame at a fixed inclination of 30° relative to the horizon. The second installation consisted of modules made of amorphous silicon and the same model of inverter. The measurement system recorded voltage, current, and power. Solar irradiance was measured using a Kipp & Zonen CMP21 pyranometer. All parameters were sampled at 10 s intervals and averaged to 1 min values, from which the daily DC and AC energy production and insolation with respect to 1 m2 of module area were calculated. The measurements were carried out between 1 November 2013 and 31 October 2014. As AC energy production is influenced by additional factors depending on the design of the plant itself, such as the type of inverter used, DC energy was chosen as the predicted variable at the output of the neural networks.
Historical weather data for the aforementioned time period were downloaded using the Visual Crossing weather API, resulting in the following set of daily parameters:
  • Maximum temperature;
  • Average temperature;
  • Dew point;
  • Humidity;
  • Volume of rainfall;
  • Precipitation time;
  • Wind speed;
  • Pressure;
  • Cloudiness;
  • Visibility;
  • Solar irradiance;
  • Maximum angle of the sun above the horizon.

2.2. Comparison of Correlation

To examine the impact of the weather data on the forecasting models’ accuracy, a factor-by-factor analysis of their influence on DC power production was conducted. For this purpose, the linear correlation coefficient was calculated. The analysis was carried out separately for the monocrystalline and amorphous modules.
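As an illustration, the linear (Pearson) correlation used in this analysis can be computed as in the sketch below; the series of irradiance and energy values are hypothetical stand-ins, not the measured dataset.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson linear correlation coefficient between two equal-length series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical example: daily predicted irradiance vs. daily DC energy
irradiance = [1.2, 3.4, 5.1, 2.0, 6.3]
dc_energy = [80, 240, 390, 150, 470]
print(round(pearson_r(irradiance, dc_energy), 3))  # strong positive correlation
```

A coefficient close to +1 (as here) marks a factor worth keeping as a network input, while values near zero mark weak explanatory factors.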
Table 1 shows the calculated linear correlations between individual weather parameters and DC power production by monocrystalline and amorphous modules.
The correlation values of the individual weather factors did not differ significantly between the monocrystalline and amorphous panels. The highest correlation was between the energy produced and the insolation measured directly at the photovoltaic installation. However, this parameter is only available as a historical measurement and cannot itself serve as a forecast input. Weather parameters obtained from weather stations via the API had lower correlations.
Strong positive correlations were obtained for the parameters: the solar irradiance, maximum temperature, mean temperature, and visibility. Strong negative correlations were obtained for the parameters determining the humidity and cloud cover. Weak positive correlations were detected for the dew point and atmospheric pressure, while weak negative correlations were found for the parameters’ precipitation time, wind speed, and precipitation amount. The influence of the predicted solar irradiance is shown in Figure 1, for the monocrystalline and amorphous panels. The influence of the measured direct solar irradiance on the DC energy of the monocrystalline and amorphous panels is shown in Figure 2.
The amorphous panels generated almost half as much energy as the monocrystalline panels. The correlations of the individual weather factors and of insolation with the amount of energy generated were similar for the two module types, but the differences were not merely a matter of the scale of energy production: small discrepancies between the panel types can be observed by comparing the correlation graphs in Figure 1 and Figure 2.
Models for the monocrystalline and amorphous photovoltaic modules were developed separately because monocrystalline and amorphous modules have different energy characteristics.

3. Neural Network Learning for Electricity Production Forecasting

This stage involves training the year-round, half-yearly (October–March and April–September), and seasonal (winter, spring, summer, and autumn) models. For this purpose, datasets were created, divided according to the above ranges. These data were loaded and then randomly divided into a learning set (85% of the data) and a testing set (15%). The test data should be representative, so it was checked that the average of the test data did not deviate from the average of the learning data by more than 10%. This condition was met. The total dataset contained 362 records (362 days of the year). The split between the learning and testing data was as follows: annual model, 308/54 days; semi-annual models, 154/27 days; and seasonal models, 78/14 days.
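The split with the representativeness check described above can be sketched as follows; the data series, the function name, and the fixed seed are hypothetical illustrations, not the authors' implementation.

```python
import random

def split_with_check(records, test_frac=0.15, max_mean_dev=0.10, seed=0):
    """Randomly split records into learning/testing sets, re-drawing until the
    test-set mean stays within max_mean_dev of the learning-set mean."""
    rng = random.Random(seed)
    data = list(records)
    n_test = round(len(data) * test_frac)
    while True:
        rng.shuffle(data)
        test, learn = data[:n_test], data[n_test:]
        learn_mean = sum(learn) / len(learn)
        if abs(sum(test) / len(test) - learn_mean) <= max_mean_dev * learn_mean:
            return learn, test

# Hypothetical daily DC energy values (Wh/m2) for 362 days
days = [50 + 3 * i for i in range(362)]
learn, test = split_with_check(days)
print(len(learn), len(test))  # 308 54, matching the annual-model split
```

With 362 records and a 15% test fraction this reproduces the 308/54 annual split reported in the text.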
The learning and testing data were normalized to the range [0,1] according to Equation (1):
$$x = \frac{x - \min}{\max - \min} \quad (1)$$
where x is the dataset, min is the smallest value from the set x, and max is the largest value from the set x.
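A minimal sketch of this min-max normalization (and the inverse transform needed to map predictions back to physical units); the helper names and sample values are illustrative only.

```python
def minmax_normalize(data):
    """Scale a sequence to [0, 1] as in Equation (1); also returns (min, max)
    so that network outputs can later be mapped back to original units."""
    lo, hi = min(data), max(data)
    scaled = [(v - lo) / (hi - lo) for v in data]
    return scaled, lo, hi

def minmax_denormalize(scaled, lo, hi):
    """Inverse transform back to the original units (e.g. Wh/m2)."""
    return [v * (hi - lo) + lo for v in scaled]

values = [2.0, 4.0, 6.0, 10.0]
scaled, lo, hi = minmax_normalize(values)
print(scaled)  # [0.0, 0.25, 0.5, 1.0]
```

Keeping the (min, max) pair from the learning set is what allows forecast errors to be reported later in Wh/m2 rather than in normalized units.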
MLP and Elman-type networks with two hidden layers were used for the study. The number of neurons in each layer was selected through extensive simulation. The selection of weather factors and of the numbers of neurons in the two hidden layers was carried out in three loops of the algorithm, starting from the four basic weather factors with the highest correlation values (i.e., the maximum temperature, humidity, cloud cover, and predicted solar irradiance). The algorithm for determining the optimal structure of each model was as follows:
  • Step 1: Initialize weather factors
      Add the four base weather factors to the current weather factors;
  • Step 2: Loop for adding weather factors
      For each weather_factor in available_weather_factors:
      Add the new weather factor to the current weather factors;
  • Step 3: Loop for increasing neurons in the first hidden layer
      For num_neurons_1 = 6 to 30:
      Set the number of neurons in the first hidden layer to num_neurons_1;
  • Step 4: Loop for increasing neurons in the second hidden layer
      For num_neurons_2 = 2 to num_neurons_1:
      Set the number of neurons in the second hidden layer to num_neurons_2;
  • Step 5: Training
      Train the model with the current weather factors and the set numbers of neurons;
  • Step 6: Saving
      Save the trained model;
  • Step 7: Check the termination conditions of the algorithm
      If the conditions are not fulfilled, go to Step 2;
      If the conditions are fulfilled, go to Step 8;
  • Step 8: Choosing the best model
      Load all the saved models;
      Among all models, choose the best model.
The algorithm for determining the optimal architecture of each model is also shown in Algorithm 1 as pseudo-code.
Algorithm 1. Pseudo-code for determining the optimal architecture of models
L01: CWF[1–4] ← basic weather factors      // add maximum temperature, humidity, cloud cover, and solar irradiance; CWF—array of current weather factors
L02: For n = 5, 6, ..., N do L03–L07       // loop for adding a new weather factor; N—number of available weather factors
L03:   CWF[n] ← new weather factor         // adding a new weather factor
L04:   For i = 6, 7, ..., 30 do L05        // loop for increasing neurons in the first hidden layer
L05:     For j = 2, 3, ..., i do L06–L07   // loop for increasing neurons in the second hidden layer
L06:       Training of the neural network  // learning of the neural network
L07:       Save trained network            // saving the trained model (neural network)
L08: Among all saved models choose the best model  // choosing the best model
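The nested loops of Algorithm 1 amount to a grid search over feature sets and layer sizes. A sketch follows; `train_and_score` stands in for the actual network training and error evaluation, and `toy_score` is a purely hypothetical scoring function used only to make the example runnable.

```python
def grid_search(base_factors, extra_factors, train_and_score):
    """Enumerate (feature set, layer sizes) as in Algorithm 1 and keep the
    configuration with the lowest validation error."""
    best = (float("inf"), None)
    factors = list(base_factors)
    for f in extra_factors:               # L02-L03: add one new weather factor
        factors = factors + [f]
        for n1 in range(6, 31):           # L04: first hidden layer, 6..30 neurons
            for n2 in range(2, n1 + 1):   # L05: second hidden layer, 2..n1 neurons
                err = train_and_score(tuple(factors), n1, n2)
                if err < best[0]:
                    best = (err, (tuple(factors), n1, n2))
    return best

# Hypothetical scoring stand-in: prefers mid-sized layers and more factors
def toy_score(factors, n1, n2):
    return abs(n1 - 12) + abs(n2 - 6) + 1.0 / len(factors)

base = ["max_temp", "humidity", "cloud_cover", "irradiance"]
extra = ["dew_point", "pressure", "visibility"]
err, (feats, n1, n2) = grid_search(base, extra, toy_score)
print(n1, n2, len(feats))
```

In the actual study, `train_and_score` would train an MLP or Elman network and return its test-set error, which is far more expensive than this toy function but follows the same enumeration.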
For MLP networks, the Broyden–Fletcher–Goldfarb–Shanno (BFGS) learning algorithm was adopted [38], whereas for Elman networks, the resilient backpropagation (RPROP) algorithm [38] was utilized. In both cases, the objective function to be minimized was the sum squared error (SSE).
The predictive models were designed using the NeuroLab library, which contains many neural network algorithms in Python.
The simulations were performed on the learning and test data, and their correctness was then verified using the mean absolute percentage error and the mean absolute error.
The mean absolute percentage error (MAPE) is given by the formula [23]
$$\mathrm{MAPE} = \frac{1}{n}\sum_{i=1}^{n}\left|\frac{d_i - y_i}{d_i}\right| \cdot 100\% \quad (2)$$
The mean absolute error (MAE) is given by [23]
$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|d_i - y_i\right| \quad (3)$$
where n is the size of the dataset, di is the actual value of the predicted variable, and yi is the predicted value.
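Both metrics translate directly into a few lines of code; the sample values below are hypothetical.

```python
def mape(actual, predicted):
    """Mean absolute percentage error, Equation (2)."""
    n = len(actual)
    return 100.0 / n * sum(abs(d - y) / d for d, y in zip(actual, predicted))

def mae(actual, predicted):
    """Mean absolute error, Equation (3), in the units of the data (Wh/m2)."""
    n = len(actual)
    return sum(abs(d - y) for d, y in zip(actual, predicted)) / n

d = [100.0, 200.0, 400.0]  # hypothetical actual daily energy
y = [90.0, 210.0, 380.0]   # hypothetical forecasts
print(round(mape(d, y), 2), round(mae(d, y), 2))  # 6.67 13.33
```

Note that MAPE divides by the actual value, which is why the winter models (small energy values) can show large percentage errors even when their MAE is small.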

3.1. MLP Networks

A feed-forward MLP network consists of neurons arranged in layers with a single direction of signal flow and inter-layer connections only between successive layers. The input signals are fed into the input layer of neurons, whose outputs are the source signals for the next layer. The signal of the output layer depends on the network’s weight matrices, biases, activation functions, and the vector of input signals. A feed-forward two-layer MLP neural network is shown in Figure 3.
In the output layer of the network, the output of the j-th neuron is defined by Equation (4) [38]:
$$y_j(t) = f_2\!\left(\sum_{k=0}^{K} w_{jk}^{(2)}\, f_1\!\left(\sum_{i=0}^{I} w_{ki}^{(1)} x_i(t) + b_k^{(1)}\right) + b_j^{(2)}\right) \quad (4)$$
where xi(t) is the input to the i-th neuron, yj(t) is the output of the j-th neuron, wki(1) and wjk(2) are the weights in the individual layers, bk(1) and bj(2) are the biases of neurons in successive layers, and t is the input data sample.
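Equation (4) can be verified with a small NumPy forward pass; the layer sizes, random weights, and the choice of tanh/linear activations are illustrative assumptions, not the trained models from the study.

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2, f1=np.tanh, f2=lambda a: a):
    """Forward pass of a feed-forward network as in Equation (4):
    hidden activation f1, output activation f2 (linear here)."""
    hidden = f1(W1 @ x + b1)     # inner sum over inputs i, plus bias b^(1)
    return f2(W2 @ hidden + b2)  # outer sum over hidden neurons k, plus b^(2)

rng = np.random.default_rng(42)
x = rng.random(7)                  # e.g. 7 weather factors for one day
W1 = rng.standard_normal((10, 7))  # 10 neurons in the hidden layer
b1 = rng.standard_normal(10)
W2 = rng.standard_normal((1, 10))  # single output: forecast DC energy
b2 = rng.standard_normal(1)
print(mlp_forward(x, W1, b1, W2, b2).shape)  # (1,)
```

A two-hidden-layer model, as used in the study, simply nests one more `f1(W @ · + b)` stage between the two shown here.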

3.2. Elman Networks

The Elman network is a partially recurrent network. It has a similar structure to that of the MLP network. The difference in its structure is the existence of an additional context layer. This layer draws a signal from the outputs of the individual neurons of the first hidden layer. This signal is then stored and is used as an additional input by the neurons in the first hidden layer to form the entire input signal together with the inputs of the network. It thus has feedback between the first hidden layer and the input layer (Figure 4).
Elman networks, also referred to as simple recurrent networks (SRN), are a class of neural networks known for their ability to capture temporal dependencies in sequential data. They consist of a single-layer architecture with n inputs and m hidden layer neurons. Each hidden layer neuron is equipped with a corresponding delay context layer node, allowing the network to maintain a memory of past inputs and update its internal state during the learning process.
In the Elman network, part of the signals in the input vector come from the feedback feeding the first hidden layer signals, via the context layer, back to the input of the network. This vector can be written as [39]
$$x(t) = \left[1,\, x_1(t), \ldots, x_n(t),\, c_1(t-1), \ldots, c_m(t-1)\right] \quad (5)$$
In the time step t, the computation of the output vector can be achieved using the following equations:
$$y_i(t) = f\!\left(a_i^{o}(t)\right), \quad i = 1, 2, \ldots, n \quad (6)$$
$$a_i^{o}(t) = \sum_{j=1}^{m} W_{ij}^{o,h}(t) \cdot h_j(t), \quad i = 1, 2, \ldots, n \quad (7)$$
The complete input weight matrix expresses the relationship between the input layer, the hidden layer, and the context layer:
$$W^{h}(t) = \left[W^{i,h}(t) \;\; W^{c,h}(t)\right] \in \mathbb{R}^{m \times k} \quad (8)$$
The response of the hidden layer to the complete input vector x(t) is mathematically expressed as [39]
$$h_j(t) = f\!\left(a_j^{h}(t)\right), \quad j = 1, 2, \ldots, m \quad (9)$$
$$a_j^{h}(t) = \sum_{l=1}^{k} W_{jl}^{h}(t) \cdot x_l(t), \quad j = 1, 2, \ldots, m \quad (10)$$
where x is the input vector, h is the hidden layer vector, y is the output vector, W^{i,h} is the external input weight matrix, W^{c,h} is the context weight matrix, W^{o,h} is the output weight matrix, and f is the activation function.
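One time step of the Elman recurrence can be sketched as follows; the dimensions, random weights, and tanh activation are illustrative assumptions rather than the trained models from the study.

```python
import numpy as np

def elman_step(x, context, W_in, W_ctx, W_out, f=np.tanh):
    """One time step of an Elman network: the hidden layer sees the current
    inputs x plus the previous hidden state stored in the context layer."""
    h = f(W_in @ x + W_ctx @ context)  # hidden activation over [x, context]
    y = f(W_out @ h)                   # output layer
    return y, h                        # h becomes the next context

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 7, 5, 1
W_in = rng.standard_normal((n_hidden, n_in))
W_ctx = rng.standard_normal((n_hidden, n_hidden))
W_out = rng.standard_normal((n_out, n_hidden))

context = np.zeros(n_hidden)
for day in range(3):  # feed a short sequence of daily input vectors
    x = rng.random(n_in)
    y, context = elman_step(x, context, W_in, W_ctx, W_out)
print(y.shape, context.shape)
```

Carrying `context` from step to step is exactly what gives the Elman network its memory of past inputs, which the feed-forward MLP lacks.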
The learning process consists of applying a learning algorithm that modifies the weights of all neurons in the neural network so as to properly map the input data onto the output data. During learning, it is important to find the global minimum, which guarantees the best fit of the network parameters to the task being solved, while avoiding the pitfalls of local minima.

3.3. Monocrystalline Silicon Panels

The learning process was documented by plotting the SSE error values at each epoch and by plotting the comparison of the actual DC power production data on consecutive days from the learning and test sets with those predicted by the network, after the learning process. Figure 5 shows the course of the SSE error during the network learning process—the year-round MLP model for the monocrystalline panels. The amount of daily energy produced and forecasted for the learning data (year-round model) is shown in Figure 6. The PV energy for the testing data (year-round MLP model) is shown in Figure 7. The results of learning of the summer model using the Elman network are shown in Figure 8, Figure 9 and Figure 10.
The architectures of the best models (MLP and Elman networks) are listed in Table 2. An example network structure written as “model name”, MLP: 7-10-5-1, means that the model used seven inputs (seven explanatory factors), had 10 neurons in the first hidden layer and 5 neurons in the second hidden layer, and produced one network output. Table 2 also indicates which weather factors were used (marked with the symbol “×”).
Table 3 shows the MAPEs of models predicting the daily DC power production by monocrystalline panels for the predicted solar irradiance, and the MAEs are shown in Table 4.
The MAPEs in Table 3 for the learning and test data differ by less (often significantly less) than 10 percent, which means that the division of the dataset into a learning and a test set was correct. The MAPEs for the different types of periods varied significantly, which is entirely reasonable given the climate and geographic location of the PV system. The MAEs shown in Table 4 are proportional to the magnitudes of the MAPEs and also to the maximum energy generated in each season. Thus, the MAEs for the winter model are smaller than those for the summer model, despite the fact that the MAPEs in the summer model were significantly smaller than those in the winter model. The reason is that the summer model operated on the maximum values of the energy produced, while in the winter model, the energy quantities were typically 10 times smaller than those in the summer model.
The best prediction models obtained (the architecture, number of neurons, and set of weather factors) for the silicon monocrystalline panels were subjected to a re-learning process replacing the predicted values of solar irradiance with values directly measured at the PV installation. The MAPEs of models predicting the daily DC power production by monocrystalline panels using directly measured solar irradiance are shown in Table 5, and the MAEs are shown in Table 6.
The MAPEs in Table 5 for the learning and test data differ by less than 10 percent in both the MLP and Elman models, which means the correct optimization of the parameters of the neural models. The MAPE values (with one exception) were less than 10 percent, and for different types of periods they varied significantly, which was perfectly reasonable. The only one deviating from the mean was the winter Elman model, in which the MAPEs were larger than 10 percent. Such dissimilarity may have been caused by inconsistent initial values of the model’s parameters.
The MAEs in Table 6 are relatively small in comparison with the maximum value of energy production, which is 730 (Wh/m2). The MAE values of the MLP and Elman models were similar to each other for the models covering the same periods.
Learning and testing times for predictive models for monocrystalline panels are given in Table 7. In the case of predictive models developed for amorphous panels, learning and testing times were similar to those for monocrystalline panels.
The measured times for learning and testing predictive models refer to calculations performed on a computer with a 64-bit operating system, an Intel(R) Core(TM) i7-4702MQ processor, a CPU @2.20 GHz, and the number of CPU cores being four.
The learning time of the MLP network was quite short and in the worst case was less than 11 (s). The learning time for models built on the Elman network was usually 2–3 times longer than that for models built on the MLP network. The longest learning time (about 29 (s)) occurred in the year-round model. The learning times of the individual models were usually proportional to the number of neurons in the hidden layers. There were some anomalies here, such as significantly longer learning times for the spring MLP model compared with the summer or autumn models. This was due to the characteristics of the neural network learning algorithm used. The testing times were short, ranging from 18 (ms) to 107 (ms) in both types of models.

3.4. Amorphous Silicon Panels

The architecture of all models (MLP and Elman networks), the best models obtained for amorphous PV panels, is noted in Table 8.
The learning results of the best models obtained for predicted solar irradiance are given in Table 9. The MAEs of the forecasting models for the amorphous panels in the case of predicted solar irradiance are shown in Table 10.
In the amorphous panel models, the MAPE values (see Table 9) were similar to those of the monocrystalline panel models. The MAPE values of the MLP models were in the range of 6.123% to 14.85%, while those of the Elman models were in the range of 7.018% to 14.25%. Thus, the error values of both types of neural models were close to each other. No major anomalies were noted in the results for either the learning or the test data.
The MAE values of model predictions for amorphous panels in Table 10 were about twice as low as such errors for the monocrystalline panels. The MAE values for the learning and test data did not usually differ from each other by more than 10–15%. However, in the April–September and spring Elman models, the MAEs for the learning and test data differed from each other by more than 20 percent. The smallest values were found in the autumn MLP model, and the largest were found in Elman’s April–September model.
Models with the same architecture were retrained, swapping only one of the input parameters: the predicted solar irradiance taken from the weather API was replaced with the directly measured solar irradiance. The MAPE values of predictions for such models are shown in Table 11, and the MAEs are shown in Table 12.
The MAPEs of model predictions for amorphous panels in the case of direct measurement of solar irradiance shown in Table 11 do not show anomalies. They were quite low in models for warm months (spring, summer, and April–September) in both the MLP and Elman models and slightly larger for cold months. The smallest MAPE values were found in the spring MLP (3.505–2.867%) and April–September Elman (3.413–3.078%) models. Only in the winter Elman model were the MAPEs greater than 10%.
The MAE prediction errors in the case of direct measured solar irradiance as shown in Table 12 have small values in all models. The differences between the MAE values for the learning and test sets are small, indicating the correct tuning of model parameters in the learning process and the correct division of the data into a learning set and a test set. The smallest MAE values occurred in the spring MLP (2.38–2.49 (Wh/m2)) and winter MLP (2.84–2.90 (Wh/m2)) models.
An in-depth analysis of the obtained results is presented in Section 4, Discussion.

4. Discussion

The results obtained and their comparison showed that there were significant differences in forecast quality between the models. The smallest errors were achieved for models forecasting energy in the summer and spring months, while the largest errors were achieved in the winter and autumn months. The large percentage errors of the autumn and winter models were mainly due to the small values of the energy produced by the panels during this period: a small absolute difference between the predicted and actual energy production can result in a significant percentage error. This also raises the problem of acquiring weather data that directly affect the photovoltaic panels, as data from weather stations can only approximate these values.
By replacing the models’ predicted solar irradiance in the learning process with values from their direct measurements at the PV site, the MAPEs were significantly reduced in all types of models. Most errors dropped to values below 10%, with the exception of only one model (winter Elman). This proved the great role of acquiring accurate weather values affecting the PV plant in the forecasting process. However, we could not use these models in the application, as we were not able to forecast future direct insolation values. Neural models forecasting energy generated by the amorphous panels also showed significant differences in the forecast accuracy between models. The smallest forecast errors were found in models forecasting energy in the summer months, while the largest errors were found for energy forecasts in the winter months.
Figure 11 shows a comparison of the MAPEs calculated for the test set of all the models created (MLP and Elman networks), which were developed as part of the study using forecast solar irradiance. Figure 12 shows the MAPE forecast errors of all models obtained using directly measured irradiance.
Comparing the prediction errors in Figure 11, it can be observed that half of the models performed better (smaller errors) for monocrystalline panels, and the other half performed better for amorphous panels. In all types of models learned using directly measured solar irradiance, the forecast errors were significantly reduced. Most errors fell to values below 10%, the exception being again only one model (winter Elman), for which the forecast error is about 11–13%. The smallest error values were achieved for models designed for summer forecasts, where forecast errors were around 2–3%. The highest error values occurred in the model for forecasts in the winter months and year-round (including winter months).
Comparing the prediction errors in Figure 12, it can be noted that the vast majority of models performed better (smaller errors) for the monocrystalline panels than for the amorphous panels. Only in two cases was the opposite true. The model using the MLP networks slightly more accurately predicted energy than the Elman-type model for monocrystalline panels. The situation was different for amorphous panels, where the Elman-type networks were slightly more accurate in their predictions than the MLP networks.
The MAEs allowed direct estimation of the energy prediction errors of the individual models in each period. While the MAPE percentage errors of the monocrystalline and amorphous panels were similar in the corresponding models, the MAEs of the models for the amorphous panels were much lower than those of the monocrystalline panel models. The reason for the smaller MAEs of the amorphous panels is their lower energy efficiency: the amorphous panels generated almost half as much energy per unit area as the monocrystalline panels.
The MAEs were characterized by a large scatter across forecast periods. The smallest value of the MAE energy forecast error occurred in the autumn (MLP) model, an error of 3.61 (Wh/m2), for the monocrystalline panels. In the case of the amorphous panels, the smallest DC energy forecast error, equal to 2.49 (Wh/m2), was generated by the spring (MLP) model.
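For reference, the two error measures discussed above can be computed as in the following sketch (Python/NumPy; the energy values are hypothetical and serve only as an illustration, they are not data from the study):

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

def mae(actual, predicted):
    """Mean absolute error, in the units of the data (here Wh/m2)."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs(actual - predicted)))

# Hypothetical daily DC energy values (Wh/m2), for illustration only.
measured = [420.0, 380.0, 510.0, 260.0]
forecast = [400.0, 395.0, 500.0, 250.0]
print(round(mape(measured, forecast), 2), round(mae(measured, forecast), 2))
```

The MAPE is scale-independent, which makes it suitable for comparing models across panel types, whereas the MAE reports the error directly in energy units.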

5. Complex Forecasting Models for Photovoltaic System

Modeling complex systems presents significant challenges as traditional mathematical methods often struggle to capture their intricacies. To tackle this, a common approach involves decomposing the complex system into simpler objects. These individual components can then be modeled independently using various methods, without considering their interconnectedness within the complex system. In contrast to local modeling, global modeling takes into account the entire system and aims to determine parameters for a single model that are optimal for the system as a whole. While this approach may not be individually optimal for each element, it provides a more comprehensive representation of the system’s behavior [40,41]. This distinctive characteristic of global modeling allows for the development of a more accurate representation of the complex system compared with the decomposition method. In conclusion, the adoption of neural networks for global modeling offers a promising avenue to overcome the challenges of modeling complex systems, resulting in more robust and comprehensive representations of their behavior than traditional decomposition methods [42]. We can consider the photovoltaic system shown in Figure 13 as a complex system.
The first part of the system generates DC energy (EDC), depending on the value of solar irradiance and the weather conditions. The second part of the system (i.e., the inverter) converts the DC energy into AC energy (EAC), which can be considered the output of the whole system. Such a system can be modeled by neural networks in two ways, in both cases as a system forecasting DC and AC energy simultaneously.
The simple model refers to the indivisible components of a photovoltaic system. The simple model is the basic component of the complex model. In our case, i.e., in the complex forecasting model, there was a cascade connection of two simple models. The first simple model was the PV panel model, and the second simple model was the inverter model, which was a nonlinear object. In our research, the simple models were neural networks (MLPs).
So, using MLP neural networks, a global model was built (see Figure 14), consisting of two simple models connected in cascade. The task of the global model was to predict the DC and AC energy production simultaneously. To determine the optimal parameters of the global model, it was necessary to use a special learning algorithm. This algorithm made it possible to determine the optimal parameters of the global model while, to some extent, also optimizing the parameters of the simple models.
Furthermore, the developed global model could be seamlessly converted into a more intricate complex local model, comprising a combination of two distinct local models. The parameters of these local models were independently determined using conventional MLP neural network learning methods.
The output $\hat{y}^{(1)}$ of the first simple model is mathematically defined as

$\hat{y}^{(1)} = f_1\left(u^{(1)}, w^{(1)}\right)$    (11)
The output $\hat{y}^{(1)}$ is an input to the second simple model. The formula that describes the output $\hat{y}^{(2)}$ of the second simple model is

$\hat{y}^{(2)} = f_2\left(\hat{y}^{(1)}, w^{(2)}\right) = f_2\left(u^{(1)}, w^{(1)}, w^{(2)}\right)$    (12)
where $u^{(1)}$ is the input of the complex system and model, $w^{(1)}$ are the weights of the first simple model, $w^{(2)}$ are the weights of the second simple model, and $f_1$ and $f_2$ are the activation functions of the particular models.
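The cascade of the two simple models described above can be illustrated with the following sketch (Python/NumPy); the layer sizes and random weights are placeholders standing in for the trained networks, not the actual models from the study:

```python
import numpy as np

def simple_model(u, weights):
    """One simple MLP model: a tanh hidden layer followed by a linear output."""
    w_hidden, w_out = weights
    return w_out @ np.tanh(w_hidden @ u)

rng = np.random.default_rng(0)
u1 = rng.normal(size=8)  # input u(1): eight weather factors (placeholder values)

# w(1): weights of the first simple model (PV panel model)
w1 = (rng.normal(size=(9, 8)), rng.normal(size=(1, 9)))
# w(2): weights of the second simple model (inverter model)
w2 = (rng.normal(size=(5, 1)), rng.normal(size=(1, 5)))

y1_hat = simple_model(u1, w1)      # forecast DC energy: f1(u(1), w(1))
y2_hat = simple_model(y1_hat, w2)  # forecast AC energy: f2 applied to y1_hat
```

The key point of the cascade is that the second model receives no external input of its own; its output therefore depends on the full parameter set $w^{(1)}, w^{(2)}$.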
The formulation of the global quality criterion hinges upon the divergence observed between the outputs $\hat{y}^{(r)}$ of the r-th simple model and the outputs $y^{(r)}$ of the r-th simple system; in the case where r = 2, it is

$Q\left(w^{(1)}, w^{(2)}\right) = \frac{1}{2}\sum_{k=1}^{K}\beta_1\left(\hat{y}_k^{(1)} - y_k^{(1)}\right)^2 + \frac{1}{2}\sum_{k=1}^{K}\beta_2\left(\hat{y}_k^{(2)} - y_k^{(2)}\right)^2$    (13)
where K is the number of samples, $y^{(r)}$ is the output of the r-th simple system, and $\beta_r$ is a weighting factor taking values between 0 and 1 (in this work, $\beta_1 = \beta_2 = 0.5$ was adopted).
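A minimal sketch of this criterion follows (Python/NumPy; the sample data are placeholders, and this is an illustration of the formula only, not the Complex Rprop implementation):

```python
import numpy as np

def global_criterion(y1_hat, y1, y2_hat, y2, beta1=0.5, beta2=0.5):
    """Global quality criterion Q(w(1), w(2)): a weighted sum of squared
    errors of the first (DC) and second (AC) simple-model outputs over K samples."""
    e1 = np.asarray(y1_hat, dtype=float) - np.asarray(y1, dtype=float)
    e2 = np.asarray(y2_hat, dtype=float) - np.asarray(y2, dtype=float)
    return 0.5 * beta1 * np.sum(e1 ** 2) + 0.5 * beta2 * np.sum(e2 ** 2)
```

With β1 = β2 = 0.5, the DC and AC forecast errors contribute equally to the quantity being minimized.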
The effective determination of the global model parameters involved minimizing the global quality criterion (13). The forecasting models were developed in two variants: one for year-round predictions and another tailored specifically to seasonal summer forecasts. The optimal structure of the forecasting model was selected by analyzing the number of weather factors included in the neural network's input; additionally, preliminary simulations with varying numbers of hidden layers were performed to arrive at the most suitable architecture. The year-round global model had the network structure 8-9-3-1-5-3-1: eight weather factors as inputs, nine neurons in the first hidden layer, three in the second, one in the third, five in the fourth, three in the fifth, and one neuron in the sixth (output) layer (see Figure 14); the summer model was 8-7-3-1-5-3-1. Hidden layers 1, 2, 4, and 5 used the hyperbolic tangent as the activation function, and in layers 3 and 6 the activation function was linear. The local type models were created by dividing the global model into two parts, according to Figure 14. The global and local type models were learned with the Complex Rprop [43] and Rprop [38] algorithms, respectively, according to the model type. The numbers of learning and testing data in the complex models were identical to those in the models in Section 4. The complex models were simulated in the Matlab language and environment because of the Complex Rprop algorithm.
The MAPE values of the predictions for the complex models are shown in Table 13, and the MAEs are shown in Table 14. Table 15 shows the learning and testing time for complex models.
The MAPEs of the forecasts in the (global and local) seasonal (summer) models were smaller than those in the year-round models, as in the simple models (see Table 5 and Table 13). The MAPEs of the DC energy forecasts were smaller in the local models, while the AC energy (output energy) forecast errors were smaller in the global models. The obtained results support the thesis that global models can predict the final energy of the system more precisely than local models, which, in turn, model the individual parts of the complex system more accurately.
In the complex models of the local type, the MAPEs of the DC energy forecasts were similar to those of the models described in Section 4. In the global type models, the MAPE of the DC energy in the year-round model was lower than that of the corresponding model in Section 4, but in the summer model it was slightly higher than in the corresponding model (see Table 5). The MAPEs of the AC energy were lower than the errors of the models in [24], as well as being smaller than the MAPEs of the DC energy in the complex models.
The smallest MAE energy forecast errors occurred in the year-round global model; the error of EDC was 6.54 (Wh/m2) and that of EAC was 5.95 (Wh/m2).

6. Comparison with the State of the Art

To evaluate the performance of the proposed approach, we used the MAPE metric, which is often used for assessing the quality of predictive models. The MAPE metric allows an objective assessment of model quality regardless of the size of the PV system and the amount of energy produced. Models developed for a year-round or nearly year-round dataset, forecasting with a horizon of one day ahead (in fact, 24 h ahead), were selected for the comparison.
Dolara et al. [31] proposed, for day-ahead forecasting of the power production of photovoltaic plants, a new hybrid method called the physical hybridized artificial neural network (PHANN), which combined an artificial neural network (ANN) with an analytical physical model, the Clear Sky solar Radiation Model (CSRM), and compared the new hybrid method with a standard ANN method. For the prediction of hourly production over 24 h, a weighted mean absolute error (WMAE%), which has a similar meaning to the MAPE metric, was used as one of the metrics to evaluate the quality of the one-day-ahead power predictions. The installed capacity of the PV system was 264 (kWp). The learning set contained 120 days, and the test set contained 30 days.
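WMAE% is commonly defined as the total absolute error normalized by the total actual production; the sketch below assumes this common definition (the exact formula used in [31] may differ in details), with hypothetical hourly power values:

```python
def wmae_percent(actual, predicted):
    """Weighted mean absolute error (WMAE%): the sum of absolute errors
    divided by the total actual production, in percent (assumed definition)."""
    total_error = sum(abs(a - p) for a, p in zip(actual, predicted))
    return 100.0 * total_error / sum(actual)

# Hypothetical hourly power values (kW), for illustration only.
print(round(wmae_percent([100.0, 200.0, 300.0], [110.0, 190.0, 290.0]), 2))
```

Unlike the MAPE, this normalization weights each hour by its share of total production, so errors in low-production hours do not dominate the metric.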
Theocharides et al. [44] presented a unified methodology for hourly averaged day-ahead photovoltaic power forecasts, based on data-driven machine learning techniques and statistical post-processing. The proposed forecasting methodology framework comprised a data quality stage, data-driven power output artificial neural networks, a weather clustering assessment (K-means clustering), and post-processing output optimization (linear regressive correction method). The installed capacity of the PV system was 1.175 (kWp), with five poly-crystalline silicon PV modules. An optimal year-round ANN model was developed for predicting the system power one day ahead. The MAPE metric was used to assess the quality of the model.
Li et al. [45] evaluated and compared two common methods, artificial neural networks (ANNs) and support vector regression (SVR), for predicting the energy production of a solar photovoltaic (PV) system 24 h ahead. A hierarchical approach based on the machine learning algorithms was proposed. The accuracies of the models were determined using metrics such as the mean absolute error (MAE), root mean square error (RMSE), and mean percentage error (MPE). The forecasting models were developed for a 6 MW peak solar power plant. Year-round models were developed to forecast the amount of DC energy generated by the PV system.
Table 16 shows the MAPE values of our MLP-type models for monocrystalline panels and selected modeling methods of forecasting models by other authors, for models forecasting one day ahead.
The results obtained were very satisfactory when compared with those of other authors. The MLP models developed on the basis of measured insolation were somewhat more accurate than the models of the other authors and methods. Moreover, the quality of the MLP models developed on the basis of predicted insolation was comparable to that of the models of other authors and even surpassed two of the other methods (studies). It should be borne in mind that, when comparing the results presented in Table 16, their values were influenced by the different locations of the PV systems, the different power outputs of the systems, 24 h vs. all-day approaches, the power vs. energy forecast, and many other differences.

7. Limitations

Neural networks often need a large amount of data for effective generalization and accurate prediction. Collecting quality, comprehensive, and representative training data can be a challenge, especially for local-level forecasting. In addition, neural networks can have difficulty generalizing to new weather conditions or unexpected events, such as sudden cloud cover or extreme weather changes; this limited generalization ability can affect the reliability of short-term forecasts. Models using neural networks can also be prone to overfitting, which can lead to inaccurate forecasts. A further problem in neural models is estimating the forecast uncertainty. For energy forecasting, estimating the uncertainty is important for decision-making and risk assessment, but this is difficult to achieve with standard neural network architectures.
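Although not used in this study, a simple ensemble is one common workaround for the uncertainty problem: train several networks from different initializations and treat the spread of their predictions as a rough uncertainty estimate. The sketch below uses stand-in linear "models" purely to illustrate the idea:

```python
import numpy as np

def make_predictor(seed):
    """Stand-in for one trained network; here a slightly perturbed linear map."""
    w = 1.0 + 0.05 * np.random.default_rng(seed).normal()
    return lambda x: w * x

ensemble = [make_predictor(s) for s in range(10)]  # ten ensemble members

x = 350.0  # hypothetical daily insolation input (Wh/m2)
preds = np.array([f(x) for f in ensemble])

forecast = preds.mean()          # point forecast: ensemble mean
uncertainty = preds.std(ddof=1)  # prediction spread as a rough uncertainty
```

In practice each member would be a fully trained MLP; the ensemble mean serves as the forecast and the standard deviation as an uncertainty band for risk assessment.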
Another limitation is that finding the optimal set of hyperparameters (learning rate, network architecture, and others) can be time-consuming and require many experiments. A further limitation is the quality and availability of data: the accuracy of neural network-based forecasts strongly depends on the quality and availability of data such as measurements of solar radiation, temperature, or historical energy production, and missing or inaccurate data can lead to suboptimal forecasts. Models using neural networks can also be sensitive to changes in the distribution of the input data; small changes, such as changes in the locations of sensors or measuring devices, can affect model performance.

8. Conclusions

As part of the research work, the authors analyzed the energy production data of the monocrystalline and amorphous panels together with the weather data, calculating the correlations between the weather factors and the energy produced. A list of explanatory factors was developed. Multiple neural network learning processes were carried out with appropriately prepared input data, and the type and structure of the network and the learning parameters were selected using the simulation method. Twenty-eight forecasting models were built, which can be divided according to the forecasting period (year-round, semi-annual, and seasonal), the type of network used (MLP or Elman), and the type of photovoltaic modules whose energy production was forecast (monocrystalline or amorphous). The accuracy of the model predictions for the amorphous panels was similar to that for the monocrystalline panels. The forecast errors of the models developed with the actual insolation forecast, which reflect real operating conditions, were compared with those of the models developed on the basis of the directly measured solar irradiance (historical values). Due to the very high correlation of insolation with energy production, the models using measured insolation generated "historical forecasts" with a small error. The models using predicted insolation, however, can serve as practical models, because they generate real energy forecasts, albeit with a slightly larger error.
The smallest values of DC energy forecast errors were achieved for the models (MLP) designed for summer forecasts. The percentage forecast errors were 1.95% using directly measured solar irradiance and 5.57% using predicted solar irradiance. The complex model for summer forecasted the AC energy with an error of 1.86%. The MAPE of the DC energy forecast in the year-round model (MLP) was 3.42% (using directly measured solar irradiance) for monocrystalline panels and 12.5% using the predicted solar irradiance for amorphous panels. The MAPEs of DC and AC energy predictions in the year-round global complex model were 3.067% and 3.186%, respectively (for the case of monocrystalline panels, test data, directly measured solar irradiance).
The obtained forecast errors showed that their magnitude depended more on the accuracy of the weather forecasts than on the type of forecasting model used or the type of PV panels. In order to obtain accurate energy forecasts, it is therefore first necessary to provide accurate forecasts of the weather conditions. The photovoltaic system is a fairly simple system, but modeling it with a global model shows clear advantages in forecasting tasks; thus, the usefulness of global modeling is well justified.
Research on forecasting the energy output of PV panels is critical for optimizing renewable energy utilization, integrating solar power into existing grids, reducing costs, and making informed decisions in the energy sector. It plays a key role in advancing sustainable energy practices and transitioning to a cleaner, more resilient energy future.
The next stage of the research is to integrate the developed energy prediction models of PV systems into a model of a small smart grid, where research on optimizing the use of energy from renewable sources will be conducted.

Author Contributions

Conceptualization, G.D.; methodology, G.D. and D.M.; software, G.D., D.M. and J.D.; validation, G.D. and J.K.; formal analysis, G.D. and D.M.; investigation, G.D. and D.M.; resources, D.M. and J.K.; data curation, G.D. and J.D.; writing—original draft preparation, G.D., D.M. and J.K.; writing—review and editing, G.D. and D.M.; visualization, G.D. and J.D.; supervision, G.D.; project administration, D.M.; funding acquisition, J.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Bialystok University of Technology as part of the work WZ/WE-IA/3/2023, and also this research and the APC were funded by the Minister of Science and Higher Education of the Republic of Poland: Maintain the research potential of the discipline of automation, electronics, electrical engineering and space technologies; grant number: PB22.ET.23.001; for Department of Electrical and Computer Engineering Fundamentals, Rzeszow University of Technology.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Renewables. Analysis and Forecast to 2027. 2022. Available online: https://www.iea.org/reports/renewables-2022 (accessed on 2 April 2023).
  2. Kumari, P.; Durga, T. Deep learning models for solar irradiance forecasting: A comprehensive review. J. Clean. Prod. 2021, 318, 128566. [Google Scholar] [CrossRef]
  3. Kusznier, J. Influence of Environmental Factors on the Intelligent Management of Photovoltaic and Wind Sections in a Hybrid Power Plant. Energies 2023, 16, 1716. [Google Scholar] [CrossRef]
  4. Alhmoud, L. Why Does the PV Solar Power Plant Operate Ineffectively? Energies 2023, 16, 4074. [Google Scholar] [CrossRef]
  5. Komarnicki, P. Energy storage systems: Power grid and energy market use cases. Arch. Electr. Eng. 2016, 65, 495–511. [Google Scholar] [CrossRef]
  6. Bonkile, M.P.; Ramadesigan, V. Effects of sizing on battery life and generation cost in PV–wind battery hybrid systems. J. Clean. Prod. 2021, 340, 130341. [Google Scholar] [CrossRef]
  7. Khan, M.A.; Islam, N.; Khan, M.A.M.; Irshad, K.; Hanzala, M.; Pasha, A.A.; Mursaleen, M. Experimental and simulation analysis of grid-connected rooftop photovoltaic system for a large-scale facility. Sustain. Energy Technol. Assess. 2022, 53, 102773. [Google Scholar] [CrossRef]
  8. Le, T.H. A combined method for wind power generation forecasting. Arch. Electr. Eng. 2021, 70, 991–1009. [Google Scholar] [CrossRef]
  9. Cholewinski, M.; Fafara, J.M. Numerical assessment of energy generation from photovoltaic cells using the CM-SAF PVGIS database. Arch. Electr. Eng. 2022, 71, 227–243. [Google Scholar] [CrossRef]
  10. Fernández-Guillamón, A.; Gómez-Lázaro, E.; Muljadi, E.; Molina-García, Á. Power systems with high renewable energy sources: A review of inertia and frequency control strategies over time. Renew. Sustain. Energy Rev. 2019, 115, 109369. [Google Scholar] [CrossRef]
  11. Chen, D.; Xu, Y.; Huang, A.Q. Integration of DC microgrids as virtual synchronous machines into the AC grid. IEEE Trans. Ind. Electron. 2017, 64, 7455–7466. [Google Scholar] [CrossRef]
  12. Shadoul, M.; Ahshan, R.; AlAbri, R.S.; Al-Badi, A.; Albadi, M.; Jamil, M. A Comprehensive Review on a Virtual-Synchronous Generator: Topologies, Control Orders and Techniques, Energy Storages, and Applications. Energies 2022, 15, 8406. [Google Scholar] [CrossRef]
  13. Shah, R.; Mithulananthan, N.; Bansal, R.; Ramachandaramurthy, V. A review of key power system stability challenges for large-scale PV integration. Renew. Sustain. Energy Rev. 2015, 41, 1423–1436. [Google Scholar] [CrossRef]
  14. Ratnam, K.S.; Palanisamy, K.; Yang, G. Future low-inertia power systems: Requirements, issues, and solutions-A review. Renew. Sustain. Energy Rev. 2020, 124, 109773. [Google Scholar] [CrossRef]
  15. Bajaj, M.; Singh, A.K. Grid integrated renewable DG systems: A review of power quality challenges and state-of-the-art mitigation techniques. Int. J. Energy Res. 2020, 44, 26–69. [Google Scholar] [CrossRef]
  16. Cheema, K.M. A comprehensive review of virtual synchronous generator. Int. J. Electr. Power Energy Syst. 2020, 120, 106006. [Google Scholar] [CrossRef]
  17. Sang, W.; Guo, W.; Dai, S.; Tian, C.; Yu, S.; Teng, Y. Virtual Synchronous Generator, a Comprehensive Overview. Energies 2022, 15, 6148. [Google Scholar] [CrossRef]
  18. Obaid, Z.A.; Cipcigan, L.; Muhssin, M.T.; Sami, S.S. Control of a population of battery energy storage systems for frequency response. Int. J. Electr. Power Energy Syst. 2020, 115, 105463. [Google Scholar] [CrossRef]
  19. Ciechulski, T.; Osowski, S. Comparison of three methods of load forecasting in a small power system based on neural networks. Przegląd Elektrotechniczny 2014, 90, 148–151. [Google Scholar]
  20. Piotrowski, P. The analysis of artificial neural networks applications for short-term power output and electric energy production forecasting of photovoltaic systems. Przegląd Elektrotechniczny 2015, 91, 162–165. [Google Scholar] [CrossRef]
  21. Ehsan, R.M.; Simon, S.P.; Venkateswaran, P.R. Day-ahead forecasting of solar photovoltaic output power using multilayer perceptron. Neural Comput. Appl. 2017, 28, 3981–3992. [Google Scholar] [CrossRef]
  22. AlShafeey, M.; Csáki, C. Evaluating neural network and linear regression photovoltaic power forecasting models based on different input methods. Energy Rep. 2021, 7, 7601–7614. [Google Scholar] [CrossRef]
  23. Nespoli, A.; Ogliari, E.; Leva, S.; Massi Pavan, A.; Mellit, A.; Lughi, V.; Dolara, A. Day-ahead photovoltaic forecasting: A comparison of the most effective techniques. Energies 2019, 12, 1621. [Google Scholar] [CrossRef]
  24. Dec, G.; Drałus, G.; Mazur, D. One day ahead forecasting of energy generating in photovoltaic systems. ITM Web Conf. 2018, 21, 00023. [Google Scholar] [CrossRef]
  25. Kwiatkowski, B.; Bartman, J.; Mazur, D. The quality of data and the accuracy of energy generation forecast by artificial neural networks. Int. J. Electr. Comput. Eng. 2020, 10, 3957–3966. [Google Scholar] [CrossRef]
  26. Lee, D.; Kim, K. Recurrent Neural Network-Based Hourly Prediction of Photovoltaic Power Output Using Meteorological Information. Energies 2019, 12, 215. [Google Scholar] [CrossRef]
  27. Wang, F.; Zhen, Z.; Wang, B.; Mi, Z. Comparative Study on KNN and SVM Based Weather Classification Models for Day Ahead Short Term Solar PV Power Forecasting. Appl. Sci. 2018, 8, 28. [Google Scholar] [CrossRef]
  28. Wang, G.; Su, Y.; Shu, L. One-day-ahead daily power forecasting of photovoltaic systems based on partial functional linear regression models. Renew. Energy 2016, 96, 469–478. [Google Scholar] [CrossRef]
  29. Agoua, X.G.; Girard, R.; Kariniotakis, G. Probabilistic Models for Spatio-Temporal Photovoltaic Power Forecasting. IEEE Trans. Sustain. Energy 2018, 10, 780–789. [Google Scholar] [CrossRef]
  30. Dec, G.; Drałus, G.; Kwiatkowski, B.; Mazur, D. Forecasting Models of Energy Generation by PV Panels Using Fuzzy Logic. Energies 2021, 14, 1676. [Google Scholar] [CrossRef]
  31. Dolara, A.; Grimaccia, F.; Leva, S.; Mussetta, M.; Ogliari, E. A physical hybrid artificial neural network for short term forecasting of PV plant power output. Energies 2015, 8, 1138–1153. [Google Scholar] [CrossRef]
  32. Leva, S.; Dolara, A.; Grimaccia, F.; Mussetta, M.; Ogliari, E. Analysis and validation of 24 hours ahead neural network forecasting of photovoltaic output power. Math. Comput. Simul. 2017, 131, 88–100. [Google Scholar] [CrossRef]
  33. Collino, E.; Ronzio, D. Exploitation of a New Short-Term Multimodel Photovoltaic Power Forecasting Method in the Very Short-Term Horizon to Derive a Multi-Time Scale Forecasting System. Energies 2021, 14, 789. [Google Scholar] [CrossRef]
  34. Theocharides, S.; Theristis, M.; Makrides, G.; Kynigos, M.; Spanias, C.; Georghiou, G.E. Comparative Analysis of Machine Learning Models for Day-Ahead Photovoltaic Power Production Forecasting. Energies 2021, 14, 1081. [Google Scholar] [CrossRef]
  35. Piotrowski, P.; Baczyński, D.; Kopyt, M.; Gulczyński, T. Advanced Ensemble Methods Using Machine Learning and Deep Learning for One-Day-Ahead Forecasts of Electric Energy Production in Wind Farms. Energies 2022, 15, 1252. [Google Scholar] [CrossRef]
  36. Abdel-Nasser, M.; Mahmoud, K. Accurate photovoltaic power forecasting models using deep LSTM-RNN. Neural Comput. Appl. 2017, 31, 2727–2740. [Google Scholar] [CrossRef]
  37. Wang, K.; Qi, X.; Liu, H. A comparison of day-ahead photovoltaic power forecasting models based on deep learning neural network. Appl. Energy 2019, 251, 113315. [Google Scholar] [CrossRef]
  38. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; The MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
  39. Ren, G.; Cao, Y.; Wen, S.; Huang, T.; Zeng, Z. A Modified Elman Neural Network with a New Learning Rate Scheme. Neurocomputing 2018, 286, 11–18. [Google Scholar] [CrossRef]
  40. Drapała, J.P.; Brzostowski, K.; Świątek, J. Global identification of complex processes using neural networks. In Advances in Computational Intelligence: XVIII International Conference on Systems Science, ICSS 2013; Mizera-Pietraszko, J., Brzostowski, K., Drapała, J., Eds.; Tempo: Wrocław, Poland, 2014; pp. 7–9. [Google Scholar]
  41. Drałus, G. The Investigating of Influence of Quality Criteria Coefficients on Global Complex Models. In Artificial Intelligence and Soft Computing, LNAI 6114; Springer: Berlin/Heidelberg, Germany, 2010; Volume 2, pp. 26–33. [Google Scholar]
  42. Drałus, G.; Mazur, D. Cascade complex systems—Global modeling using neural networks. In Proceedings of the 13th Selected Issues of Electrical Engineering and Electronics (WZEE), Rzeszow, Poland, 4–8 May 2016; pp. 1–8. [Google Scholar] [CrossRef]
  43. Drałus, G.; Świątek, J. Static neural network in global modelling of complex systems. In Proceedings of the 14th International Conference on Systems Engineering, Coventry, UK, 12–14 September 2000; pp. 547–551. [Google Scholar]
  44. Theocharides, S.; Makrides, G.; Livera, A.; Theristis, M.; Kaimakis, P.; Georghiou, G.E. Day-ahead photovoltaic power production forecasting methodology based on machine learning and statistical post-processing. Appl. Energy 2020, 268, 115023. [Google Scholar] [CrossRef]
  45. Li, Z.; Rahman, S.M.; Vega, R.; Dong, B. A Hierarchical Approach Using Machine Learning Methods in Solar Photovoltaic Energy Production Forecasting. Energies 2016, 9, 55. [Google Scholar] [CrossRef]
Figure 1. Effect of solar irradiance on DC power production of (a) monocrystalline panels and (b) amorphous panels.
Figure 2. Effect of directly measured solar irradiance on DC power production of (a) monocrystalline panels and (b) amorphous panels.
Figure 3. A feed-forward neural network.
Figure 4. The Elman neural networks.
Figure 5. Learning error (SSE)—year-round MLP model for monocrystalline panels.
Figure 6. PV energy for the learning data—year-round MLP model for monocrystalline panels.
Figure 7. PV energy for the testing data—year-round MLP model for monocrystalline panels.
Figure 8. Learning error (SSE)—summer Elman model for monocrystalline panels.
Figure 9. PV energy for the learning data—summer Elman model for monocrystalline panels.
Figure 10. PV energy for the testing data—summer Elman model for monocrystalline panels.
Figure 11. The MAPEs for test data (for predicted solar irradiance).
Figure 12. The MAPEs for test data (for measured solar irradiance).
Figure 13. Scheme of the solar photovoltaic system.
Figure 14. The global model as a complex six-layer neural network.
Table 1. Correlation between weather parameters and DC energy production.
Weather Parameter | Monocrystalline Panels | Amorphous Panels
Maximum temperature | 0.71 | 0.76
Average temperature | 0.60 | 0.66
Dew point | 0.39 | 0.47
Humidity | −0.76 | −0.74
Amount of precipitation | −0.19 | −0.18
Precipitation time | −0.34 | −0.30
Wind speed | −0.21 | −0.23
Pressure | 0.04 | 0.002
Cloudiness | −0.75 | −0.71
Visibility | 0.62 | 0.62
Solar irradiance | 0.83 | 0.86
Solar panel tilt angle | 0.64 | 0.67
Measured solar irradiance | 0.99 | 0.996
Table 2. Model structure and weather conditions for modeling monocrystalline panels.
Model | Tm | Ta | Hu | Pa | Pt | Cc | Vi | Si | An
Year-round, MLP: 7-10-5-1 | × | × | × | ° | ° | × | × | × | ×
October–March, MLP: 7-12-6-1 | × | ° | × | ° | × | × | × | × | ×
April–September, MLP: 7-24-17-1 | × | ° | × | × | × | × | × | × | °
Winter, MLP: 6-25-11-1 | × | ° | × | ° | × | × | × | × | °
Spring, MLP: 9-30-28-1 | × | × | × | × | × | × | × | × | ×
Summer, MLP: 8-19-9-1 | × | × | × | × | × | × | × | × | °
Autumn, MLP: 9-8-6-1 | × | × | × | × | × | × | × | × | ×
Year-round, Elman: 7-7-3-1 | × | × | × | ° | ° | × | × | × | ×
October–March, Elman: 9-8-5-1 | × | × | × | × | × | × | × | × | ×
April–September, Elman: 8-6-2-1 | × | × | × | × | × | × | × | × | °
Winter, Elman: 6-29-27-1 | × | ° | × | × | × | × | ° | × | °
Spring, Elman: 6-12-8-1 | × | × | × | ° | ° | × | × | × | °
Summer, Elman: 7-9-4-1 | × | ° | × | × | × | × | × | × | °
Autumn, Elman: 9-18-11-1 | × | × | × | × | × | × | × | × | ×
Tm—maximum temperature; Ta—average temperature; Hu—humidity; Pa—precipitation amount; Pt—precipitation time; Cc—cloud cover; Vi—visibility; Si—solar irradiance; An—solar panel tilt angle. The symbol "×" indicates that a given factor is used in the model; the symbol "°" indicates that it is not.
Table 3. The MAPEs of model predictions for monocrystalline panels.
Model | MLP Learning Data (%) | MLP Test Data (%) | Elman Learning Data (%) | Elman Test Data (%)
Year-round | 15.08 | 15.76 | 14.15 | 14.58
October–March | 15.96 | 16.31 | 16.09 | 15.43
April–September | 6.342 | 6.743 | 6.288 | 6.125
Winter | 10.85 | 10.42 | 15.13 | 14.83
Spring | 7.551 | 7.822 | 8.437 | 8.627
Summer | 5.412 | 5.568 | 6.008 | 6.263
Autumn | 15.08 | 15.76 | 14.15 | 14.58
Table 4. The MAEs of model predictions for monocrystalline panels.
Model | MLP Learning Data (Wh/m2) | MLP Test Data (Wh/m2) | Elman Learning Data (Wh/m2) | Elman Test Data (Wh/m2)
Year-round | 49.2 | 57.8 | 60.4 | 74.5
October–March | 30.8 | 35.7 | 35.7 | 37.1
April–September | 45.9 | 42.4 | 59.7 | 70.1
Winter | 12.6 | 16.3 | 19.8 | 20.7
Spring | 21.8 | 26.1 | 42.5 | 50.3
Summer | 16.3 | 24.1 | 41.3 | 32.9
Autumn | 10.5 | 9.61 | 44.6 | 52.3
Table 5. The MAPEs of model predictions for monocrystalline panels in the case of direct measured solar irradiance.
Model | MLP Learning Data (%) | MLP Test Data (%) | Elman Learning Data (%) | Elman Test Data (%)
Year-round | 3.190 | 3.419 | 5.487 | 4.866
October–March | 5.831 | 6.347 | 6.150 | 5.244
April–September | 3.409 | 3.488 | 2.523 | 2.421
Winter | 8.372 | 7.221 | 13.30 | 12.17
Spring | 6.928 | 8.223 | 3.877 | 3.823
Summer | 2.245 | 1.950 | 2.430 | 1.986
Autumn | 2.743 | 2.911 | 3.171 | 3.254
Table 6. The MAEs of model predictions for monocrystalline panels in the case of direct measured solar irradiance.
Model | MLP Learning Data (Wh/m2) | MLP Test Data (Wh/m2) | Elman Learning Data (Wh/m2) | Elman Test Data (Wh/m2)
Year-round | 4.74 | 6.63 | 8.48 | 10.7
October–March | 3.95 | 4.02 | 6.32 | 4.98
April–September | 9.84 | 10.3 | 9.44 | 11.5
Winter | 6.55 | 4.57 | 15.0 | 17.1
Spring | 11.4 | 12.8 | 11.3 | 12.8
Summer | 7.77 | 7.60 | 7.92 | 5.78
Autumn | 2.91 | 3.61 | 4.31 | 5.35
Table 7. Times of learning and testing of forecasting models for monocrystalline panels in the case of directly measured solar irradiance.

| Model | MLP, Learning Time (s) | MLP, Test Time (s) | Elman, Learning Time (s) | Elman, Test Time (s) |
|---|---|---|---|---|
| Year-round | 10.28 | 0.0312 | 28.61 | 0.1070 |
| October–March | 4.660 | 0.0235 | 13.95 | 0.0771 |
| April–September | 7.930 | 0.0231 | 19.22 | 0.0732 |
| Winter | 5.828 | 0.0196 | 8.643 | 0.0819 |
| Spring | 10.08 | 0.0217 | 7.831 | 0.0765 |
| Summer | 3.186 | 0.0184 | 8.350 | 0.0747 |
| Autumn | 1.484 | 0.0178 | 10.72 | 0.0804 |
Table 8. Model structure and weather conditions for modeling amorphous panels.

| Model | Tm | Ta | Hu | Pa | Pt | Cc | Vi | Si | An |
|---|---|---|---|---|---|---|---|---|---|
| Year-round, MLP: 7-24-13-1 | × | × | × | ° | ° | × | × | × | × |
| October–March, MLP: 5-10-7-1 | × | ° | × | ° | ° | × | ° | × | × |
| April–September, MLP: 7-22-19-1 | × | × | × | ° | × | × | × | × | ° |
| Winter, MLP: 7-10-7-1 | × | ° | × | ° | × | × | × | × | × |
| Spring, MLP: 6-12-11-1 | × | × | × | ° | ° | × | × | × | ° |
| Summer, MLP: 6-24-20-1 | × | ° | × | ° | × | × | × | × | ° |
| Autumn, MLP: 7-25-24-1 | × | × | × | ° | ° | × | × | × | × |
| Year-round, Elman: 7-12-7- | × | × | × | ° | ° | × | × | × | × |
| October–March, Elman: 7-7-3-1 | × | × | × | ° | ° | × | × | × | × |
| April–September, Elman: 6-8-3-1 | × | × | × | ° | ° | × | × | × | ° |
| Winter, Elman: 8-17-5-1 | × | ° | × | × | × | × | × | × | ° |
| Spring, Elman: 6-17-13-1 | × | × | × | ° | ° | × | × | × | ° |
| Summer, Elman: 7-10-8-1 | × | ° | × | × | × | × | × | × | ° |
| Autumn, Elman: 9-8-3-1 | × | × | × | × | × | × | × | × | × |

Tm—maximum temperature; Ta—average temperature; Hu—humidity; Pa—precipitation amount; Pt—precipitation time; Cc—cloud cover; Vi—visibility; Si—solar irradiance; An—solar panel tilt angle. The symbol "×" denotes that a factor is used in the model; the symbol "°" denotes that it is not.
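The factor flags in Table 8 determine which weather variables form each network's input vector, and the first number in each model name is the resulting input-layer size. A hypothetical sketch of that selection step (function and variable names are illustrative; the code uses "x"/"o" in place of the table's ×/° symbols):

```python
# All candidate explanatory factors, in the column order of Table 8
FACTORS = ["Tm", "Ta", "Hu", "Pa", "Pt", "Cc", "Vi", "Si", "An"]

def build_input_vector(sample, used):
    """Keep only the factors flagged 'x' for a given model (one row of Table 8)."""
    return [sample[f] for f, flag in zip(FACTORS, used) if flag == "x"]

# Example: the Summer MLP (6-24-20-1) uses Tm, Hu, Pt, Cc, Vi, Si
used_summer_mlp = ["x", "o", "x", "o", "x", "x", "x", "x", "o"]
sample = {"Tm": 28.0, "Ta": 22.5, "Hu": 55.0, "Pa": 0.0,
          "Pt": 0.0, "Cc": 20.0, "Vi": 10.0, "Si": 850.0, "An": 35.0}

vec = build_input_vector(sample, used_summer_mlp)
print(len(vec))  # 6, matching the input-layer size in the model name
```

The sample weather values above are invented for illustration only; the paper's actual input data come from the measurement campaign described earlier.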
Table 9. The MAPEs of model predictions for amorphous panels.

| Model | MLP, Learning Data (%) | MLP, Test Data (%) | Elman, Learning Data (%) | Elman, Test Data (%) |
|---|---|---|---|---|
| Year-round | 14.31 | 14.57 | 12.95 | 12.54 |
| October–March | 14.85 | 14.26 | 14.25 | 13.92 |
| April–September | 9.005 | 9.431 | 10.18 | 10.47 |
| Winter | 12.34 | 12.96 | 13.06 | 13.98 |
| Spring | 13.03 | 12.52 | 10.24 | 9.823 |
| Summer | 6.138 | 6.235 | 7.018 | 7.412 |
| Autumn | 7.768 | 7.568 | 11.21 | 10.84 |
Table 10. The MAEs of model predictions for amorphous panels.

| Model | MLP, Learning Data (Wh/m²) | MLP, Test Data (Wh/m²) | Elman, Learning Data (Wh/m²) | Elman, Test Data (Wh/m²) |
|---|---|---|---|---|
| Year-round | 20.7 | 27.8 | 27.4 | 29.4 |
| October–March | 13.6 | 15.3 | 14.7 | 13.8 |
| April–September | 25.9 | 27.7 | 33.9 | 41.2 |
| Winter | 10.9 | 11.1 | 10.1 | 10.1 |
| Spring | 23.5 | 29.1 | 31.2 | 40.9 |
| Summer | 14.5 | 18.8 | 28.2 | 28.4 |
| Autumn | 5.7 | 6.7 | 19.1 | 20.0 |
Table 11. The MAPEs of model predictions for amorphous panels in the case of directly measured solar irradiance.

| Model | MLP, Learning Data (%) | MLP, Test Data (%) | Elman, Learning Data (%) | Elman, Test Data (%) |
|---|---|---|---|---|
| Year-round | 4.367 | 3.969 | 4.778 | 4.688 |
| October–March | 6.712 | 7.808 | 7.282 | 6.609 |
| April–September | 3.679 | 4.193 | 3.413 | 3.078 |
| Winter | 7.491 | 7.472 | 10.97 | 12.93 |
| Spring | 3.505 | 2.867 | 9.158 | 9.181 |
| Summer | 4.274 | 4.486 | 4.069 | 3.824 |
| Autumn | 8.614 | 6.918 | 5.522 | 5.553 |
Table 12. The MAEs of model predictions for amorphous panels in the case of directly measured solar irradiance.

| Model | MLP, Learning Data (Wh/m²) | MLP, Test Data (Wh/m²) | Elman, Learning Data (Wh/m²) | Elman, Test Data (Wh/m²) |
|---|---|---|---|---|
| Year-round | 2.58 | 3.20 | 4.21 | 3.89 |
| October–March | 3.06 | 3.08 | 4.15 | 3.40 |
| April–September | 6.10 | 6.80 | 4.08 | 3.45 |
| Winter | 2.84 | 2.90 | 8.40 | 8.66 |
| Spring | 2.38 | 2.49 | 6.87 | 9.02 |
| Summer | 7.05 | 7.19 | 6.90 | 5.74 |
| Autumn | 5.82 | 4.96 | 4.00 | 5.22 |
Table 13. The MAPEs of complex model predictions for monocrystalline panels in the case of directly measured solar irradiance.

| Model | Learning Data E_DC (%) | Learning Data E_AC (%) | Test Data E_DC (%) | Test Data E_AC (%) |
|---|---|---|---|---|
| Year-round—global | 2.658 | 3.067 | 2.960 | 3.186 |
| Year-round—local | 2.323 | 2.768 | 2.880 | 3.204 |
| Summer—global | 1.482 | 1.719 | 2.306 | 1.864 |
| Summer—local | 1.528 | 1.634 | 1.977 | 1.915 |
Table 14. The MAEs of complex model predictions for monocrystalline panels in the case of directly measured solar irradiance.

| Model | Learning Data E_DC (Wh/m²) | Learning Data E_AC (Wh/m²) | Test Data E_DC (Wh/m²) | Test Data E_AC (Wh/m²) |
|---|---|---|---|---|
| Year-round—global | 5.01 | 4.78 | 6.54 | 5.95 |
| Year-round—local | 5.12 | 5.05 | 6.41 | 5.99 |
| Summer—global | 6.18 | 6.03 | 10.0 | 8.24 |
| Summer—local | 6.53 | 6.13 | 9.66 | 8.36 |
Table 15. Times of learning and testing of forecasting complex models for monocrystalline panels in the case of directly measured solar irradiance.

| Model | Learning Time (s) | Test Time (s) |
|---|---|---|
| Year-round—global | 4.649 | 0.0114 |
| Year-round—local | 4.194 | 0.0077 |
| Summer—global | 1.606 | 0.0115 |
| Summer—local | 1.399 | 0.0285 |
Table 16. Comparison of forecast quality in one-day-ahead forecasting models of PV systems (for the test data).

| Model | MAPE (%) |
|---|---|
| ANN (Dolara [31]) | 30.30 (1) |
| PHANN (Dolara [31]) | 21.50 (1) |
| Optimal ANN (Theocharides [44]) | 4.700 |
| ANN (Li [45]) | 10.51 |
| SVR (Li [45]) | 10.52 |
| MLP year-round (ours, Table 3) | 15.76 |
| MLP year-round (ours, Table 5) | 3.419 |
| MLP year-round—global (ours, Table 13) | 2.960 |

(1) The weighted mean absolute error (WMAE%) [31] was calculated.
Drałus, G.; Mazur, D.; Kusznier, J.; Drałus, J. Application of Artificial Intelligence Algorithms in Multilayer Perceptron and Elman Networks to Predict Photovoltaic Power Plant Generation. Energies 2023, 16, 6697. https://doi.org/10.3390/en16186697