Article

A Global Solar Radiation Forecasting System Using Combined Supervised and Unsupervised Learning Models

Department of Marine Environmental Informatics & Center of Excellence for Ocean Engineering, National Taiwan Ocean University, Keelung 20224, Taiwan
* Author to whom correspondence should be addressed.
Energies 2023, 16(23), 7693; https://doi.org/10.3390/en16237693
Submission received: 31 October 2023 / Revised: 19 November 2023 / Accepted: 20 November 2023 / Published: 21 November 2023
(This article belongs to the Section A2: Solar Energy and Photovoltaic Systems)

Abstract
One of the most important sources of energy is the sun. Taiwan lies at 22–25° north latitude; this relatively low latitude means only a small deviation in the angle of sunlight incidence, so its geographical location provides sustainable and stable solar resources. This study uses solar radiation forecasting to maximize the benefits of solar power generation, developing methods that predict future solar radiation patterns to help reduce the costs of solar power generation. The study built supervised machine learning models, namely a deep neural network (DNN) and a long short-term memory (LSTM) neural network, and developed hybrid supervised and unsupervised models, namely cluster-based artificial neural networks (k-means clustering- and fuzzy C-means clustering-based models). After establishing these models, the study evaluated their prediction results, selected the best-performing model for each prediction period, and proposed combining them into a real-time-updated solar radiation forecast system capable of predicting the next 12 h. The study area covered Kaohsiung, Hualien, and Penghu in Taiwan. Ground-station data from the Central Weather Administration, collected between 1993 and 2021, together with the solar angle parameters of each station, were used as model inputs. The results show that different models offer advantages and disadvantages at different forecast horizons, and the hybrid prediction system predicts future solar radiation more accurately than any single model.

1. Introduction

Taiwan is located between 22° N and 25° N latitude, relatively close to the equator, and thus has a small solar-angle deviation. This advantageous geographical location provides stable sunshine, making Taiwan highly suitable for solar power development. However, Taiwan relies heavily on imported fossil fuels such as oil, coal, and natural gas, which account for up to 92% of its energy supply, while renewable energy currently contributes only 5.5% of total electricity generation. Within this renewable mix, as of 2021, solar photovoltaic power accounts for 64.8%, wind power for 10.7%, and other sources, such as hydroelectric and geothermal power, for 24.5% [1]. Therefore, more efficient development of solar energy generation has the potential to increase energy efficiency and enhance power supply reliability.
In Taiwan, the development of solar energy generation needs to take into account factors such as the angle and timing of sunlight, cloud cover, and topographical features. As a result, the energy received from solar radiation varies across regions: for example, Kaohsiung Station (120.32° E, 22.57° N) in the southern part of Taiwan, Hualien Station (121.61° E, 23.98° N) in the eastern region, and Penghu Station (119.56° E, 23.57° N) on the outlying Penghu Islands, as shown in Figure 1.
Geographically, Kaohsiung Station is located south of the Tropic of Cancer, Hualien Station is situated north of the Tropic of Cancer, and Penghu Station is located approximately on the Tropic of Cancer. Regarding topographical conditions, as shown in Figure 1, the average elevation of the Central Mountain Range (CMR) in Taiwan is about 2500 m [2]. The CMR divides Taiwan into two regions, with Hualien Station to the east of the CMR and Kaohsiung Station to the west. According to Taipower [1], the solar photovoltaic electricity generation at these three meteorological stations in 2020 was 3.41, 2.85, and 3.60 kWh per day, respectively. Although these three locations are geographically close, there is a significant difference in their solar energy generation capacity.
In recent years, due to the flourishing development of machine learning, the accuracy of climate prediction has significantly improved [3,4,5,6,7,8,9]. Lauret et al. [10] used a model to predict solar radiation in island environments and proposed the use of machine learning models to enhance the performance of linear regression models. They also suggested that machine learning performs better in less stable weather conditions. Wei [11] studied various machine learning models, such as multilayer perceptron and random forest, to analyze solar energy predictions for meteorological stations in southern Taiwan. The study compared the influence of input data from satellites, ground stations, and solar angle data on predictions. Additionally, it calculated the optimal placement angle for solar panels based on hourly solar angle data to maximize solar energy generation efficiency. Voyant et al. [12] also utilized various machine learning methods to predict solar radiation for the next 1–6 h. The study compared methods such as random forest, Gaussian processes, persistence, artificial neural networks, and support vector regressions to assess their strengths and weaknesses. The authors suggested that there is no one-size-fits-all best model, and combining multiple models in a hybrid prediction system yields superior results. Wei [13] conducted research on the application of deep neural networks to predict solar radiation. The study compared the results of backpropagation neural networks and linear regression. It also examined the impacts of different types of solar panels on electricity generation efficiency. Ali et al. [14] optimized the design of artificial neural networks for accurate global solar radiation forecasting while minimizing computational requirements. 
Neshat et al. [15] reported on a new hybrid deep residual learning and gated long short-term memory recurrent network, boosted via a differential covariance matrix adaptation evolution strategy (ADCMA), to forecast solar radiation one hour ahead.
In recent years, the recurrent neural network (RNN) architecture has flourished and found widespread application in various fields [16,17,18,19,20,21,22,23]. Qing and Niu [24] proposed using long short-term memory (LSTM) neural networks to predict solar radiation and compared the results with linear regression and backpropagation neural networks, ultimately reporting a 42.9% reduction in the root mean square error (RMSE) for the LSTM networks compared to backpropagation neural networks. Li et al. [25] utilized an RNN-based prediction model to forecast the short-term output power of a generating system; this model took only electrical data as input, without weather information, and its performance over a 90-min horizon was compared against BPNN, persistence, SVM, LSTM, and other methods. Many scholars have recently proposed applying LSTM neural networks to predict weather changes, and the literature indicates that most of these studies have achieved favorable forecasting results. Therefore, to enhance the accuracy of long-term predictions, this study incorporated the LSTM neural network model.
Ghofrani et al. [26] used a clustering approach to improve the performance of Bayesian neural networks and introduced an innovative, game-theoretic, self-organizing map (SOM) clustering method. They incorporated game theory to enhance the clustering effectiveness of the basic SOM clustering method. They also compared the results of Windows NT clustering, k-means clustering, and SOM clustering with machine-learning-derived predictions. Azimi et al. [27] proposed a k-means cluster-based algorithm to enhance the predictive performance of multilayer perceptrons. Their approach altered the initialization method of the k-means clustering algorithm to ensure consistent results each time it is trained, referred to as TB k-means. They assessed the performance of different data analysis clustering algorithms and compared the processing time required for training with different feature data. Ultimately, they suggested that this clustering approach provides better predictive results compared to directly using multilayer perceptrons.
The purpose of this study is to establish a solar radiation prediction model to accurately predict solar radiation levels. Given that solar radiation prediction is a time series problem with highly nonlinear characteristics, this research employed various algorithmic techniques, including both unsupervised and supervised algorithms, to effectively construct suitable localized prediction models. With supervised-based algorithms, this study utilized deep neural networks (DNN) and LSTM neural networks. The selection of specific deep learning models, namely DNN and LSTM, was guided by several considerations. Firstly, these models have demonstrated robust performance in similar tasks, as evidenced by studies such as [17,18,21,25]. Their architectures are well suited for effectively learning intricate patterns within our dataset, aligning with the complexity of the problem we aimed to address. Additionally, for unsupervised-based algorithms, clustering methods such as k-means clustering and fuzzy C-means clustering were employed. After clustering, subsets of data were created for each cluster, and neural-network-based prediction models were established for each group. Consequently, under the DNN model, we could establish a k-means DNN (referred to as k_DNN) and a fuzzy C-means DNN (fc_DNN). Similarly, under the LSTM model, we could create a k-means LSTM (k_LSTM) and a fuzzy C-means LSTM (fc_LSTM).

2. Study Area and Material

This study was conducted in Taiwan with test locations at Kaohsiung Station, Hualien Station, and Penghu Station (Figure 1). The research collected ten ground-level climate parameters related to solar radiation, including atmospheric pressure, surface temperature, dew temperature, relative humidity, water vapor, average wind speed, precipitation, rainfall duration, insolation duration, and global solar radiation. The above climate parameters have been suggested for use in predictive models in many studies, such as [28,29,30,31]. Therefore, we collected data on these meteorological factors as features for predicting solar radiation. The data source for these parameters was the Central Weather Administration (CWA), and the data were recorded at an hourly frequency. The data span from 1993 to 2021, totaling 29 years, resulting in a total of 254,184 hourly records. Table 1 presents the attributes, along with their respective units and statistical values.
According to [11,32,33], the addition of solar angle parameters can be used to improve the prediction of global solar radiation. Therefore, this study included five solar angle parameters, namely the declination angle, hour angle, zenith angle, elevation angle, and azimuth angle [33]. Firstly, the declination angle (δ) is the angle between the line connecting the Sun and the center of the Earth and the plane of the equator. The formula for this angle is as follows:
δ = 23.45° · sin[360(n_d − 80)/365]  (1)
The hour angle (ω) represents the angle at which the Sun moves relative to the position of the station per hour, and it can be calculated as follows:
ω = 15°(H − 12)  (2)
The zenith angle (θ) is the angle between the Sun and the vertical line to the horizontal plane, and it can be calculated using the following formula:
θ = cos⁻¹(sin λ · sin δ + cos λ · cos δ · cos ω)  (3)
The elevation angle (α) is the angle between the line connecting the Sun to the observation point and the horizontal plane.
α = 90° − θ  (4)
The azimuth angle (ξ) is the angle between the Sun’s position in its orbit and the horizontal plane. It can be calculated using the following formula:
ξ = sin⁻¹(cos δ · sin ω / sin θ)  (5)
In Equations (1)–(5) above: n_d represents the day of the year, ranging from 1 to a maximum of 365; H represents the hour of the day at which the angle is calculated, ranging from 1 to 24; and λ denotes the latitude of the test location.
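As a concrete illustration, the five solar-angle parameters can be computed directly from Equations (1)–(5). The sketch below is a minimal Python implementation (the function name and degree-based interface are our own); the zenith angle uses the standard spherical form cos θ = sin λ · sin δ + cos λ · cos δ · cos ω.

```python
import math

def solar_angles(n_d, H, lat_deg):
    """Sketch of Equations (1)-(5), returning the five angles in degrees."""
    delta = 23.45 * math.sin(math.radians(360.0 * (n_d - 80) / 365.0))  # declination
    omega = 15.0 * (H - 12)                                             # hour angle
    lam, de, om = (math.radians(v) for v in (lat_deg, delta, omega))
    # zenith angle from the standard spherical-astronomy formula
    theta = math.degrees(math.acos(
        math.sin(lam) * math.sin(de) + math.cos(lam) * math.cos(de) * math.cos(om)))
    alpha = 90.0 - theta                                                # elevation
    # azimuth; a guard against sin(theta) = 0 is omitted for brevity
    xi = math.degrees(math.asin(
        math.cos(de) * math.sin(om) / math.sin(math.radians(theta))))
    return delta, omega, theta, alpha, xi
```

For example, at solar noon on the summer solstice (n_d = 172, H = 12) at Kaohsiung's latitude (22.57° N), the hour angle and azimuth are zero and the declination is close to its 23.45° maximum.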

3. A Real-Time Prediction System

As predicting solar radiation cannot guarantee that a single model will provide the best forecast [12], this study established a real-time solar radiation prediction system. It aimed to integrate simulation results from different models, following the concept of an ensemble model to jointly determine the optimal solution. Figure 2 illustrates the real-time prediction concept process designed for this study. The simulation interval was 1 h, and the solar radiation forecast horizon ranged from 1 to 12 h. The steps are described as follows:
(1) At sunrise, set the time as t;
(2) Receive the real-time ground weather data;
(3) Generate model input patterns, including the global solar radiation attribute {S}, weather attributes {W}, and solar position attributes {P}; the dataset {P} = {δ, ω, θ, α, ξ} can be derived from Equations (1)–(5);
(4) Execute the model selection ensemble tabular (abbreviated as MSET);
(5) Retrieve the set of optimal suggested neural-network-based models from the MSET lookup table for 1 to 12 h into the future;
(6) Determine whether clustering models must be executed for the suggested neural-network-based models: if "yes", proceed to Step 7; if "no", continue with Step 9;
(7) Calculate the distances between the current sample and the cluster centers of the clustering models;
(8) Execute the cluster-based models (i.e., k_DNN, fc_DNN, k_LSTM, and fc_LSTM) and generate 1–12 h predictions;
(9) Execute the DNN and LSTM models and generate 1–12 h predictions;
(10) Generate the set of suggested neural-network-based models for 1 to 12 h in the future;
(11) If it is sunset, conclude the analysis procedure (Step 12); if not, set the time to t + 1 and return to Step 2.
In Step 4, the lookup table from MSET was used to determine the optimal model for real-time predictions at each forecasting time (1 to 12 h). Compared to having a single model decide the forecast, the collaborative decision of all models could enhance accuracy. In steps 8 and 9, the methods for establishing each model are explained as follows.
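The MSET-driven dispatch of Steps 4 and 5 can be sketched as a lookup table mapping each lead time to the model suggested for it. The table entries and model names below are purely illustrative; the actual MSET is derived from the testing results.

```python
# Hypothetical MSET lookup table: each forecast lead time (1-12 h) maps to a
# suggested model name. The entries here are illustrative only.
MSET = {h: ("fc_DNN" if h <= 3 else "fc_LSTM") for h in range(1, 13)}

def forecast_12h(models, features):
    """Dispatch each lead time h to the model named by the MSET lookup table.

    `models` maps model names to callables taking (features, h); names and
    signatures are assumptions for illustration.
    """
    return {h: models[MSET[h]](features, h) for h in range(1, 13)}
```

In the real system, the lookup table would be refreshed whenever the MSET statistics are recomputed, so the collaborative model choice stays current.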

3.1. Supervised Models

Figure 3 illustrates the modeling process for supervised models. Taking DNN as an example, DNN was developed based on the structure of deep neural networks. A deep neural network is a model with multiple layers and the advanced development of the multilayer perceptron based on the principles of the multilayer perceptron. The multilayer perceptron included an input layer, hidden layers, and an output layer. The input layer of the multilayer perceptron served as the interface for external input information, while the hidden layers and the output layer performed the actual computations. The flowchart in Figure 3 explains that the data sets consisted of the solar radiation attribute {S}, weather attributes {W}, and solar position attributes {P}. These three data sets were organized in a time sequence and then divided into a training set, a validation set, and a testing set. The training set and validation set were used for building prediction models, while the testing set was used to evaluate the model’s performance.
As shown in Figure 4, O() represents a collection of observed values, including datasets of {S}, {W}, and {P}. SP() represents a collection of predicted solar radiation values. The predictive function of the model can be written as follows:
SP(t + n) = DNN({O(t − Δt) | Δt = 0, …, d}, {SP(t + k) | k = 1, …, n − 1}), Δt ∈ [0, d], k ∈ [1, N]  (6)
where SP(t + k) represents the predicted solar radiation value for the future k hours, O(t − Δt) denotes past observed data, d is the input delay time for the model, N is the maximum prediction time length (set as a constant value in this study, N = 12 h), and n and k are parameters (indices) for the prediction time length.
In Equation (6), when predicting the solar radiation SP(t + 1) for the next 1 h, the current time data (Δt = 0) and the observation set of the past d hours were used. For predictions beyond t + 2 in the future, the previous predicted values for each time step were also incorporated.
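This recursive scheme can be sketched as follows, with `model` standing in for the trained DNN. The one-dimensional window is a simplification for illustration; the real model also takes the {W} and {P} attributes.

```python
def recursive_forecast(model, history, d, N=12):
    """Sketch of Equation (6): predictions beyond t + 1 feed earlier predicted
    values back into the input window alongside past observations.

    `model` maps a window of the last d + 1 values to the next value.
    """
    window = list(history[-(d + 1):])
    preds = []
    for _ in range(N):
        nxt = model(window)          # predict one step ahead
        preds.append(nxt)
        window = window[1:] + [nxt]  # slide the window, appending the prediction
    return preds
```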
Additionally, in this study, another supervised model was employed, namely LSTM. LSTM is an advanced model within an RNN. RNNs are recurrent networks commonly used for handling time- and sequence-related problems. However, during the modeling process, the issue of vanishing gradients or exploding gradients may occur. To address this problem, Hochreiter and Schmidhuber [34] introduced the LSTM neural network, which is an improved model incorporating memory blocks within the hidden layers of the RNN. As shown in Figure 5, while traditional RNNs have a single hidden state, ht, LSTM networks introduce memory blocks, Ct, allowing them to retain longer memories and forget less relevant information.
In the construction of the LSTM model in this study, the input data comprised the current solar radiation {S} and attributes {W} and {P}. The input–output format of the model, as shown in Figure 6, involved sequentially feeding data into the model based on the time sequence. The LSTM model was trained to predict 12 target values (i.e., solar radiation for the next 12 h), with each input data entry having a time length of d′. The model directly output predictions for the next 12 h. To assess accuracy, the data were split into three sets for training, validation, and testing, as illustrated in Figure 3, to evaluate the model’s performance.
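The input–output arrangement of Figure 6 amounts to a sliding-window transformation of the series. A sketch, assuming a univariate series for brevity (the actual model also receives the {W} and {P} attributes):

```python
import numpy as np

def make_lstm_samples(series, d_prime, horizon=12):
    """Arrange a time series into LSTM inputs of length d' and 12-step targets,
    mirroring the layout of Figure 6 (the exact shapes are an assumption)."""
    X, Y = [], []
    for t in range(d_prime, len(series) - horizon + 1):
        X.append(series[t - d_prime:t])   # past d' values as one input sequence
        Y.append(series[t:t + horizon])   # next 12 values as the target vector
    return np.asarray(X, dtype=float)[..., None], np.asarray(Y, dtype=float)
```

The resulting input tensor has the `(samples, timesteps, features)` shape that recurrent layers in Keras expect.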

3.2. Unsupervised Combined with Supervised Models

The unsupervised combined with supervised models in this study utilized unsupervised clustering algorithms in conjunction with supervised neural networks to form an integrated framework. The modularized process is depicted in Figure 7. Initially, the three data sets of {S}, {W}, and {P} were arranged based on the time sequence, and then they underwent data splitting. Subsequently, the training set was clustered, and each group of data was used to train a single supervised model (such as DNN and LSTM). After the model construction was complete, each data point in the validation set identified the cluster center with the shortest distance and utilized the corresponding ANN model for that cluster to predict solar radiation.
This study utilized two clustering methods: k-means and fuzzy C-means. The k-means method is a clustering algorithm that groups n data points into k clusters by minimizing the sum of the squared distances between all data points and their respective cluster centroids. The objective function JK is the sum of the squared distances between all the data points and their cluster centroids, and the mathematical expression is as follows:
J_K = Σ_{i=1}^{K} Σ_{j=1}^{n} w_ji ‖x_j − C_i‖²  (7)
w_ji = 1 if ‖x_j − C_i‖ ≤ ‖x_j − C_m‖ for all m ≠ i; w_ji = 0 otherwise  (8)
where K is the number of clusters, n is the number of data points, x_j is the j-th input data sample, C_i is the centroid of the i-th cluster, and w_ji is a weight that equals 1 if x_j belongs to the i-th cluster and 0 otherwise.
The fuzzy C-means algorithm applies fuzzy theory concepts to clustering methods [35]. Unlike k-means, in fuzzy C-means, the weights, W, are not binary; instead, each attribute datum is represented by a membership function to indicate the degree of belonging to each cluster. The objective function JC is the sum of the squared distances between all data points and their cluster centroids, as shown in the following equation:
J_C = Σ_{i=1}^{K} J_i = Σ_{i=1}^{K} Σ_{j=1}^{n} (w_ji)^m ‖x_j − C_i‖²  (9)
where K is the number of clusters, n is the number of data points, x_j represents the j-th sample, C_i is the centroid of the i-th cluster, w_ji is the membership weight (ranging from 0 to 1), indicating the degree of belonging to the fuzzy set, and m is the exponent coefficient, typically set to 2.
The sum of the weight coefficients should satisfy the constraint as follows:
Σ_{i=1}^{K} w_ji = 1,  j = 1, …, n  (10)
The weight formula is as follows:
w_ji = 1 / Σ_{s=1}^{K} (‖x_j − C_i‖ / ‖x_j − C_s‖)^(2/(m−1))  (11)
The number of clusters K mentioned above was determined by trial and error. Figure 8 illustrates the process of selecting a cluster and predicting solar radiation for the data in the validation and testing sets. After selecting the clustering method, the distance DIS between sample point i and each cluster center Ck was calculated, with kmin_dis denoting the cluster at the shortest distance. Based on the kmin_dis result, the predictive model associated with that cluster was used to estimate solar radiation.
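The cluster-selection step of Figure 8 and the fuzzy membership weights can be sketched as follows (function names are ours; in the fuzzy case, a sample lying exactly on a centre would need a division-by-zero guard, omitted here for brevity):

```python
import numpy as np

def nearest_cluster(x, centers):
    """kmin_dis of Figure 8: index of the cluster centre closest to sample x."""
    dis = np.linalg.norm(np.asarray(centers, float) - np.asarray(x, float), axis=1)
    return int(np.argmin(dis))

def fuzzy_memberships(x, centers, m=2):
    """Fuzzy C-means membership weights of sample x for every cluster,
    following the weight formula above; the weights sum to 1."""
    dis = np.linalg.norm(np.asarray(centers, float) - np.asarray(x, float), axis=1)
    ratios = (dis[:, None] / dis[None, :]) ** (2.0 / (m - 1))
    return 1.0 / ratios.sum(axis=1)
```

In the k-means case the hard assignment picks one cluster's model; in the fuzzy case the memberships quantify how strongly the sample belongs to each cluster.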

4. Modeling and Results

This study divided the data from the experimental locations (i.e., Kaohsiung Station, Hualien Station, and Penghu Station) into training data for 18 years (1993 to 2010), validation data for 6 years (2011 to 2016), and testing data for 5 years (2017 to 2021). The Python programming language and the Keras library were used to build machine learning models and perform computations. The evaluation metrics employed in this study included RMSE (root mean squared error) and rRMSE (relative RMSE), with the formulas as follows:
RMSE = √[(1/M) Σ_{i=1}^{M} (O_i^Pre − O_i^Obs)²]  (12)
rRMSE = RMSE / Ō^Obs  (13)
where M is the number of data points, O_i^Pre represents the i-th predicted value, O_i^Obs is the i-th observed value, and Ō^Obs is the mean of all observed values.
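Both metrics are straightforward to implement; a minimal sketch:

```python
import numpy as np

def rmse(pred, obs):
    """Root mean squared error between predictions and observations."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def rrmse(pred, obs):
    """Relative RMSE: RMSE normalised by the mean observed value."""
    return rmse(pred, obs) / float(np.mean(np.asarray(obs, float)))
```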

Parameter Calibration

The hyperparameters for the DNN model had to be tuned, including the learning rate, momentum, number of hidden layers, and number of neurons per hidden layer. The learning rate controls the step size for weight updates and affects the convergence speed: a learning rate that is too low leads to slow convergence, while one that is too high can cause oscillations. This study started with a learning rate of 0.001 and increased it gradually in steps of 0.001 to find the optimal value. Momentum is an effective way to enhance the efficiency of weight adjustments; it incorporates part of the previous update into the current weight change, which can significantly reduce the influence of extreme values or noise in the data. This study experimented with momentum values starting at 0.01 and increasing in steps of 0.01. The number of hidden layers is another crucial hyperparameter, and architectures with 1 to 10 hidden layers were tested. The number of neurons in a hidden layer influences a network's ability to generalize: too few neurons can lead to underfitting, while too many can lead to overfitting. This study explored neuron counts from 1 up to 100 to determine the optimal configuration.
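The search described above amounts to an exhaustive sweep over four hyperparameters; schematically (the `score` callback, which would train one DNN configuration and return its validation RMSE, is left abstract here):

```python
import itertools

def grid_search(score, learning_rates, momenta, hidden_layers, neurons):
    """Exhaustive sweep over the four DNN hyperparameters described above.

    `score` is assumed to evaluate one configuration and return its
    validation RMSE (lower is better).
    """
    grid = itertools.product(learning_rates, momenta, hidden_layers, neurons)
    return min(grid, key=lambda cfg: score(*cfg))
```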
In the DNN modeling process, the performance of two optimizers, namely Stochastic Gradient Descent (SGD) and Adaptive Moment Estimation (Adam), was initially compared. Figure 9 illustrates the iterative convergence process of errors using these two optimizers for Kaohsiung Station. It was evident that Adam outperformed SGD in terms of error convergence. As a result, the Adam optimizer was chosen for the subsequent steps.
Next, this study determined the delay time length, d. Figure 10 presents the best RMSE at various delay times using the Adam optimization. With the increase in d, the error reduction was less pronounced. This study used d = 7 h, 20 h, and 5 h as parameter values for Kaohsiung, Hualien, and Penghu stations, respectively.
Table 2 lists the parameter tuning results for the number of hidden layers and the number of neuron nodes in the DNN (t + 1) model using the Adam optimization for the next hour (t + 1). Using the same method, we conducted parameter tuning for 1 to 12 h into the future.
In the case of building the LSTM models, the parameter d’ represents the length of data required for the LSTM input. The values of d determined in the previous section for the DNN model can serve as reference values for d’, which are assumed to be 7, 10, and 7 h for the three experimental stations, respectively. The number of LSTM neurons was tested for forecast horizons from 1 to 12 h. Taking the example of the t + 1 forecast, Figure 11 shows that Kaohsiung, Hualien, and Penghu stations achieved stable convergence errors with 39, 85, and 36 neurons, respectively.
In the cluster-based DNN and LSTM models, during the clustering algorithm process, this study examined the number of clusters and determined the optimal number of clusters. Figure 12 shows the trial and error process for determining the number of clusters for k_DNN, fc_DNN, k_LSTM, and fc_LSTM. The results reveal that, for the DNN models, the optimal number of clusters for Kaohsiung Station, Hualien Station, and Penghu Station were four, three, and four for k_DNN and five, four, and five for fc_DNN. For the LSTM models, k_LSTM had an optimal number of clusters of five, four, and five, and fc_LSTM had an optimal number of clusters of five, five, and five. The results indicate that both DNN and LSTM models achieve their best performance with fewer than five clusters.

5. Simulation

This study evaluated the DNN, LSTM, k_DNN, fc_DNN, k_LSTM, and fc_LSTM models using the testing set. Figure 13 presents the evaluation results using the testing set, showing the rRMSE performance of each model for lead times ranging from 1 to 12 h. From Figure 13a, it can be observed that at Kaohsiung Station, for lead times of 1 to 2 h, the different models had similar rRMSE values. However, for lead times greater than 3 h, fc_LSTM exhibited superior prediction performance. In Figure 13b, at Hualien Station, for lead times of 1 to 3 h, LSTM, fc_DNN, and fc_LSTM showed comparable rRMSE values. For lead times greater than 4 h, fc_LSTM demonstrated better prediction performance. Figure 13c shows the results for Penghu Station, where, for lead times of 1 to 2 h, the k_DNN, LSTM, k_LSTM, fc_DNN, and fc_LSTM models achieved similar rRMSE values. However, for lead times greater than 3 h, fc_LSTM exhibited a more significant advantage in terms of prediction performance.
Based on the rRMSE results of the DNN model, this study defined a “model improvement rate”, as described below:
IR = (rRMSE_DNN − rRMSE) / rRMSE_DNN × 100%  (14)
After calculation, it was found that, at all three stations, the fc_LSTM model had the highest improvement rate (improving by 37.27%, 30.41%, and 29.08%, respectively) (see Table 3), followed by the k_LSTM model with the second-highest improvement rate (improving by 20.81%, 19.54%, and 29.08%, respectively).
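The improvement rate is a one-line computation; a sketch using the definition above:

```python
def improvement_rate(rrmse_dnn, rrmse_model):
    """Model improvement rate: the percentage reduction in rRMSE relative to
    the baseline DNN model."""
    return (rrmse_dnn - rrmse_model) / rrmse_dnn * 100.0
```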

Model Selection Ensemble Tabular (MSET)

As described in Section 3, the real-time prediction system used the MSET lookup table to determine the best model for each forecast time (1 to 12 h). Table 4 presents the results obtained from testing data simulations. It was observed that, for short-term predictions (lead time ≤ 3 h), a combination of clustering algorithms with DNN or LSTM models performed better at all three stations. However, for long-term predictions (lead time ≥ 4 h), combining clustering algorithms with LSTM models was the most stable choice.
This study used test data from 2017 to simulate a real-time prediction system. The best models selected from the MSET table for different forecast periods were further utilized for real-time updates in predicting solar radiation. To assess the system, ground station data and solar angle data obtained through k-means and fuzzy C-means clustering were used. Each set of data belonged to a cluster with cluster center coordinates. The model selection set table can be updated based on the required time intervals to ensure the accuracy of clustering and statistical results. In practice, current data are imported into the database every hour. The best forecast models for each upcoming hour are defined according to the model selection set table, and the results of all models are computed. Finally, the required forecast values are combined, and the current time is adjusted, enabling continuous predictions until the user terminates the process.
This study has established a real-time prediction system for solar radiation. The system’s performance was demonstrated using representative days chosen in this study: 03/20 (spring equinox), 06/21 (summer solstice), 09/22 (autumn equinox), and 12/21 (winter solstice). Figure 14, Figure 15 and Figure 16 show the real-time predictions for Kaohsiung Station, Hualien Station, and Penghu Station during these four major seasonal transitions in 2017. The orange line represents the prediction results, while the blue line represents the observed solar radiation. Each prediction starts from 6:00 AM and looks ahead to the next 12 h. The figure titles indicate the time of the current prediction, starting at 6:00 AM, and the results are updated every two hours until noon, demonstrating the results up to that time.
It was evident that, at 6:00 AM, some stations tend to underestimate the peak around noon. However, after updating the data every two hours, by 10:00 AM, the results significantly improved and could reasonably predict the noon peak. The results demonstrate that the real-time predictions roughly aligned with the actual values. Based on the results, most errors were concentrated around the high values during the day. The accurate prediction of peak values can only be achieved as the current time approaches noon due to the error propagation over the long-term forecast. In such cases, the model’s output becomes increasingly conservative, leading to the observed underestimation. The study suggests that using more precise cluster selection methods to accurately choose the deep neural network trained by the noon cluster could potentially resolve this underestimation issue.

6. Conclusions

The aim of this study was to establish a prediction model for solar radiation and develop a hybrid real-time solar energy prediction system to obtain reliable daily solar radiation forecasts every morning. The system was designed to provide hourly updates and corrections to the predictions, assisting in determining the future electricity generation from solar energy and the optimal timing for energy generation. The study covered three regions, namely Kaohsiung, Hualien, and Penghu, each equipped with an independent real-time prediction system, forecasting solar radiation for the next 1 to 12 h.
In the model development phase, multiple models were employed, including a deep neural network (DNN) and a long–short-term memory neural network (LSTM). Additionally, unsupervised-based algorithms were used, which involved clustering methods such as k-means clustering and fuzzy C-means clustering. After clustering the data, neural-network-based prediction models were established for each cluster. As a result, in the DNN model, the following models were created: k-means DNN (k_DNN) and fuzzy C-means DNN (fc_DNN). In the case of the LSTM model, the following models were developed: k-means LSTM (k_LSTM) and fuzzy C-means LSTM (fc_LSTM).
Based on the predictions of various models, this study evaluated the best models for different forecasting time intervals and proposed a real-time solar radiation prediction system. This system is capable of providing real-time predictions of solar radiation in the range of 1 to 12 h ahead. To test its practicality, simulations were conducted using data from the year 2017. The results of the research’s prediction system demonstrated strong predictive performance. Even with increased errors in long-term predictions, the system was able to dynamically adjust the predictions in real time, effectively forecasting solar radiation for the next 12 h.

Author Contributions

C.-C.W. conceived and designed the experiments and wrote the manuscript; Y.-C.Y. and C.-C.W. carried out the experiments, analyzed the data, and discussed the results. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Science and Technology Council, Taiwan, grant number NSTC112-2622-M-019-001.

Data Availability Statement

The related data were provided by Taiwan’s Data Bank for Atmospheric Hydrologic Research, which is available at https://dbar.pccu.edu.tw/ (accessed on 1 July 2021).

Acknowledgments

The authors acknowledge the data provided by Taiwan’s Central Weather Administration.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Map of the study region.
Figure 2. Flowchart of a real-time prediction process.
Figure 3. The modeling process for DNN models.
Figure 4. Input–output pattern used in the DNN model.
Figure 5. The concept of network structures in the RNN and LSTM models.
Figure 6. Input–output patterns of the LSTM model.
Figure 7. The modeling process for the unsupervised combined with supervised models.
Figure 8. Procedure for data sample clustering.
Figure 9. Error convergence trends over iterations for SGD versus the Adam optimizer.
Figure 10. Calibration of the delay time d of the DNN model at (a) Kaohsiung Station, (b) Hualien Station, and (c) Penghu Station.
Figure 11. Calibration of neuron nodes in LSTM: (a) Kaohsiung Station, (b) Hualien Station, and (c) Penghu Station.
Figure 12. Calibration of the clustering amount in the cluster-based DNN and LSTM models at (a) Kaohsiung Station, (b) Hualien Station, and (c) Penghu Station.
Figure 13. The results of rRMSE using the testing set: (a) Kaohsiung, (b) Hualien, and (c) Penghu.
Figure 14. Real-time prediction system simulation results for Kaohsiung Station.
Figure 15. Real-time prediction system simulation results for Hualien Station.
Figure 16. Real-time prediction system simulation results for Penghu Station.
Table 1. Statistics of ground-level climate attributes.

| Attribute | Unit | Kaohsiung Station (Min–Max, Mean) | Hualien Station (Min–Max, Mean) | Penghu Station (Min–Max, Mean) |
|---|---|---|---|---|
| Atmospheric pressure | hPa | 976.1–1030.9, 1012.0 | 958.8–1032.1, 1011.8 | 974.1–1033, 1011.8 |
| Surface temperature | °C | 7.1–36.9, 25.31 | 9.2–39.6, 24.66 | 7.9–34.8, 23.67 |
| Dew temperature | °C | −4.2–30.4, 20.49 | 1.5–29.2, 19.64 | −0.4–29.8, 19.95 |
| Relative humidity | % | 26–100, 75.24 | 25–100, 74.21 | 27–100, 80.12 |
| Water vapor | hPa | 4.5–43.4, 24.93 | 6.8–40.5, 23.65 | 5.9–41.9, 24.34 |
| Average wind speed | m/s | 0–18, 2.17 | 0–16.6, 1.74 | 0–25.8, 4.06 |
| Precipitation | mm | 0–119.5, 0.21 | 0–83, 0.21 | 0–94.5, 0.13 |
| Rainfall duration | h | 0–1, 0.04 | 0–1, 0.06 | 0–1, 0.05 |
| Insolation duration | h | 0–1, 0.26 | 0–1, 0.20 | 0–1, 0.24 |
| Global solar radiation | MJ/m2 | 0–3.90, 0.56 | 0–5.32, 0.63 | 0–4.34, 0.53 |
| Declination angle | degree | −23.45–23.45, −0.01 | −23.45–23.45, −0.01 | −23.45–23.45, −0.01 |
| Hour angle | degree | −165–180, 7.5 | −165–180, 7.5 | −165–180, 7.5 |
| Zenith angle | degree | 0.02–179.98, 90 | 0.028–179.97, 90 | 0.12–179.88, 90 |
| Elevation angle | degree | −89.98–89.98, 0.0 | −89.97–89.97, −0.0 | −89.88–89.88, 0.0 |
| Azimuth angle | degree | −90–90, 0.0 | −90–90, 0.0 | −90–90, 0.0 |
Table 2. Parameter tuning results for the DNN model.

| Station | Delay Time (h) | Number of Hidden Layers | Number of Neuron Nodes | RMSE (MJ/m2) |
|---|---|---|---|---|
| Kaohsiung | 7 | 3 | 18 | 0.232196 |
| Hualien | 20 | 3 | 10 | 0.262571 |
| Penghu | 5 | 3 | 9 | 0.232947 |
Table 3. Average performance levels for 1–12 h predictions.

| Station | Measure | DNN | LSTM | k_DNN | fc_DNN | k_LSTM | fc_LSTM |
|---|---|---|---|---|---|---|---|
| Kaohsiung | RMSE (MJ/m2) | 0.601 | 0.570 | 0.567 | 0.574 | 0.457 | 0.342 |
| Kaohsiung | rRMSE | 1.033 | 1.008 | 1.003 | 1.023 | 0.818 | 0.648 |
| Kaohsiung | IR | 0% | 2.42% | 2.90% | 0.97% | 20.81% | 37.27% |
| Hualien | RMSE (MJ/m2) | 0.519 | 0.454 | 0.501 | 0.497 | 0.430 | 0.364 |
| Hualien | rRMSE | 1.049 | 0.880 | 1.032 | 0.907 | 0.844 | 0.730 |
| Hualien | IR | 0% | 16.11% | 1.62% | 13.54% | 19.54% | 30.41% |
| Penghu | RMSE (MJ/m2) | 0.503 | 0.401 | 0.437 | 0.421 | 0.389 | 0.352 |
| Penghu | rRMSE | 1.035 | 0.823 | 0.933 | 0.873 | 0.763 | 0.734 |
| Penghu | IR | 0% | 20.48% | 9.86% | 15.65% | 26.28% | 29.08% |
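The metric definitions are not restated in this excerpt, but the IR column is consistent with the relative reduction in rRMSE against the DNN baseline; for example, Kaohsiung's LSTM gives (1.033 − 1.008)/1.033 ≈ 2.42%. A small sketch under that assumption (the rRMSE normalisation by the mean observation is also an assumed convention):

```python
import math

def rmse(obs, pred):
    """Root-mean-square error (here, in MJ/m2)."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def rrmse(obs, pred):
    """Relative RMSE: RMSE normalised by the mean observed value
    (a common convention, assumed here)."""
    return rmse(obs, pred) / (sum(obs) / len(obs))

def improvement_rate(rrmse_model, rrmse_baseline):
    """IR (%): relative reduction in rRMSE versus the DNN baseline."""
    return (rrmse_baseline - rrmse_model) / rrmse_baseline * 100.0

# Reproduces Table 3, Kaohsiung LSTM: IR ≈ 2.42 %
ir_lstm = improvement_rate(1.008, 1.033)
```

The same formula reproduces the other IR entries, e.g., fc_LSTM at Kaohsiung: (1.033 − 0.648)/1.033 ≈ 37.27%.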
Table 4. Model selection ensemble tabular (MSET): selected optimal model for each station.

| Lead Time | Kaohsiung Station | Hualien Station | Penghu Station |
|---|---|---|---|
| 1 h | fc_DNN | fc_DNN | fc_DNN |
| 2 h | fc_DNN | fc_DNN | fc_LSTM |
| 3 h | fc_LSTM | fc_DNN | fc_LSTM |
| 4–12 h | fc_LSTM | fc_LSTM | fc_LSTM |
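Operationally, the MSET amounts to a lookup from station and lead time to a model name, with fc_LSTM covering all leads of 4 h and beyond. A minimal sketch (the dispatch structure is illustrative, not the authors' code):

```python
# Per-station best model for the short leads; everything else falls
# through to fc_LSTM, matching the 4-12 h row of the table.
MSET = {
    "Kaohsiung": {1: "fc_DNN", 2: "fc_DNN", 3: "fc_LSTM"},
    "Hualien":   {1: "fc_DNN", 2: "fc_DNN", 3: "fc_DNN"},
    "Penghu":    {1: "fc_DNN", 2: "fc_LSTM", 3: "fc_LSTM"},
}

def select_model(station, lead_hours):
    """Return the name of the model used for a given lead time (1-12 h)."""
    if not 1 <= lead_hours <= 12:
        raise ValueError("lead time must be 1-12 h")
    return MSET[station].get(lead_hours, "fc_LSTM")
```

A 12 h forecast for one station would then call the named model once per lead time and concatenate the results.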

Share and Cite

MDPI and ACS Style

Wei, C.-C.; Yang, Y.-C. A Global Solar Radiation Forecasting System Using Combined Supervised and Unsupervised Learning Models. Energies 2023, 16, 7693. https://doi.org/10.3390/en16237693

