Article

Improvement in Solar-Radiation Forecasting Based on Evolutionary KNEA Method and Numerical Weather Prediction

1. Faculty of Geographical Science, Beijing Normal University, Beijing 100875, China
2. State Key Laboratory of Simulation and Regulation of Water Cycle in River Basin, China Institute of Water Resources and Hydropower Research, Beijing 100038, China
3. School of Hydraulic and Ecological Engineering, Nanchang Institute of Technology, Nanchang 330099, China
4. Key Laboratory of Water Cycle and Related Land Surface Processes, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, Beijing 100101, China
* Author to whom correspondence should be addressed.
Sustainability 2022, 14(11), 6824; https://doi.org/10.3390/su14116824
Submission received: 26 April 2022 / Revised: 28 May 2022 / Accepted: 30 May 2022 / Published: 2 June 2022
(This article belongs to the Special Issue Development Trends of Environmental and Energy Economics)

Abstract

Accurate forecasting of solar radiation (Rs) is significant to photovoltaic power generation and agricultural management. The National Centers for Environmental Prediction (NCEP) has released its latest Global Ensemble Forecast System version 12 (GEFSv12) prediction product; however, the capability of this numerical weather product for Rs forecasting has not been evaluated. This study establishes a coupling algorithm based on the bat algorithm (BA) and the kernel-based nonlinear extension of Arps decline (KNEA) for post-processing 1–3 d ahead Rs forecasts from GEFSv12 in Xinjiang, China. The new model is compared with two empirical statistical methods, quantile mapping (QM) and equiratio cumulative distribution function matching (EDCDFm), and with six machine-learning methods: long short-term memory (LSTM), support vector machine (SVM), XGBoost, KNEA, BA-SVM, and BA-XGBoost. The results show that the forecasting accuracy of all models decreases as the forecast horizon lengthens. Compared with the GEFS raw Rs data over the four stations, the RMSE and MAE of the QM and EDCDFm models decreased by 20% and 15%, respectively. In addition, the BA-KNEA model was superior to the GEFSv12 raw Rs data and the other post-processing methods, with R2 = 0.782–0.829, RMSE = 3.240–3.685 MJ m−2 d−1, MAE = 2.465–2.799 MJ m−2 d−1, and NRMSE = 0.152–0.173.

1. Introduction

Solar radiation is the primary source of surface energy, driving carbon and water exchanges between the atmosphere and terrestrial ecosystems [1]. Population growth, limited fossil fuels, and environmental pollution have driven the rapid development of renewable energy sources such as solar and wind power. However, many solar-energy applications require accurate information about the available solar resource [2]. Equipment for measuring solar radiation is much more expensive than that for other meteorological parameters such as temperature, relative humidity and wind speed. More than 2400 weather stations in China record meteorological data, but only about 5% of them observe global solar radiation (Rs). Therefore, models need to be developed to estimate solar radiation at stations with no radiation records [3]. Three main approaches are used to calculate daily global solar radiation: satellite-derived, stochastic, and meteorology-based methods [4]. The satellite-derived method retrieves the reflectivity of the Earth's atmosphere, inverts the daily radiation value, and can estimate solar radiation over large areas.
Nevertheless, the uncertainty of satellite-based solar-radiation remote sensing can be high in cloudy and polluted areas. Stochastic algorithms depend on history: a statistical summary of past radiation is used to infer the probability of future radiation, which requires high-quality historical radiation observations. Meteorology-based approaches establish relationships between solar radiation and other, more readily available, meteorological elements; this method is by far the most widely used.
Recently, machine-learning models, with their strong nonlinear fitting ability, have been widely used to simulate natural phenomena in agriculture, engineering and economics, including Rs prediction and forecasting. Rehman and Mohandes [5] used an artificial neural network (ANN) to estimate solar radiation in Abha, Saudi Arabia, and found that an ANN model with air temperature and relative humidity as inputs could capably estimate Rs. Quej et al. [6] assessed three approaches (SVM, ANN and ANFIS) for predicting daily Rs in Yucatán, México, and reported that SVM models performed well in warm sub-humid regions. Ghimire et al. [7] explored the feasibility of using numerical weather prediction to forecast Rs. Deo et al. [8] fed geo-temporal and satellite images into an ELM method to develop an Rs model in Australia; the ELM model outperformed the RF, M5T and MARS methods. Hassan et al. [9] evaluated four ML algorithms (MLP, ANFIS, SVM and RT) for modeling Rs in Egypt, examining sunshine-, temperature-, meteorological-parameter- and day-number-based models, and verified that the MLP algorithm excelled in comparison with the other models. On the other hand, many studies also show that ML is not always better; for example, it can be less precise than the dependency model [10]. Mohammadi et al. [11] compared an SVM model and ANFIS for predicting Rs from temperature data only in Iran and found that the SVM model with an RBF kernel function had the highest accuracy. Feng et al. [12] used six machine-learning models to map daily global solar radiation and photovoltaic power on the Loess Plateau of China. In addition, prediction of Rs by kernel-based machine-learning models has been widely reported in northwest China [12], humid regions of China [13], air-polluted regions of north China [14,15], Algeria [16], Spain [17] and other regions around the world [18], including diffuse radiation [19]. Kernel-based models have also been used to map the solar photovoltaic potential of China [20,21].
Recently, deep-learning models have been gradually applied to the prediction of solar radiation, including LSTM algorithms, which are good at mining time-series information [22,23,24], and spatial processing information [25]. In addition, ML models can also be used to identify the most significant input parameters to better understand the relationship between common meteorological factors and Rs.
Voyant et al. [26] reviewed different machine-learning technologies for solar-radiation forecasting. They pointed out that methods such as ANN and SVM were used primarily in the early stage, while methods such as regression trees and boosting trees have been used more recently. Compared with ANN, SVM, ANFIS and decision-making methods, the most significant advantage of tree-based methods is processing larger data sets faster [9]. Sun et al. [27] applied an RF method to estimate Rs in an air-polluted environment. Ibrahim and Khatib [28] coupled an RF model with FFA to predict radiation on an hourly scale. Prasad et al. [29] designed a new approach, the EEMD-ACO-RF method, for Rs forecasting: the time-lagged (t−1) data were first decomposed into signal and noise components by EEMD, then fed into an RF model optimized by the ant colony optimization (ACO) algorithm. Wu et al. [13] compared six machine-learning models (M5T, KNEA, MLP, CatBoost, RF and MARS) for predicting Rs in a sub-humid region of China; they found that the KNEA model had the highest accuracy, the MLP model the best stability, and the CatBoost model the fastest speed.
Recently, the National Centers for Environmental Prediction (NCEP) released its new product, Global Ensemble Forecast System version 12 (GEFSv12) [30]. This product provides Rs forecast data up to 35 days ahead; however, its accuracy has not been evaluated. Here, a new model based on the bat algorithm and KNEA was used to forecast Rs, with input data taken from the GEFSv12 output for 1–3 d ahead. The objectives of this study were: (1) to evaluate the 1–3 d ahead solar-radiation-forecasting performance of GEFSv12 at four stations in northwest China; (2) to build a coupling model based on the bat algorithm and KNEA (BA-KNEA); and (3) to compare the newly developed BA-KNEA model with traditional empirical models and five other machine-learning models.

2. Materials and Methods

2.1. Study Region

This study uses observational data from four radiation stations in Xinjiang, China, whose geographical locations are shown in Figure 1. The region is rich in solar-radiation resources, with an annual average of 5200–6400 MJ m−2 y−1. The annual average air temperature is 9 °C and annual precipitation is less than 200 mm y−1. These stations are affiliated with the Meteorological Data Center of the China Meteorological Administration, and the data comprise total daily surface radiation from 2006 to 2015. The data were divided into two parts: the first part (2006–2010) was used for training the models and the remainder for testing them. When the Rs of a day was higher than the extraterrestrial radiation, the data of that day were deleted [31]. The global solar radiation for different months at each station is outlined in Table 1.
NCEP implemented its next Global Ensemble Forecast System (GEFSv12) in summer 2020. This upgrade of both the deterministic and the ensemble prediction system differs substantially from previous upgrades. In the NCEP operational model, a new dynamic core (FV3) is used for the first time, replacing the previous spectral dynamic core [32]. The previous three-category Zhao–Carr microphysics scheme has also been replaced by the more advanced six-category GFDL microphysics scheme. On the ensemble side, GEFSv12 extends the prediction period to 35 days. To better represent the considerable uncertainty at this time scale, stochastic schemes based on perturbed physics tendencies and kinetic-energy backscatter replaced the original total-tendency perturbation scheme, another significant upgrade of the system [33]. Its spatial resolution is 25 km and its temporal resolution is 3 h. In this study, we used grid data averaged over the four grid points around each site, including forecast solar radiation (Rsf), maximum temperature (Tmaxf), minimum temperature (Tminf) and relative humidity (RHf) at 2 m height, and wind speed (Uf) at 10 m height, every 3 h for the next 72 h, and converted the 3 h data to daily values. That is, for 3–24 h (likewise 27–48 h and 51–72 h), the eight 3 h values were aggregated to the daily scale: Tmaxf and Tminf are the highest and lowest of the eight values in a day, RHf and Uf are their means, and Rsf is their sum. The target of the models is the measured Rs corresponding to the GEFS data on the same day.
The data were likewise divided into two parts: 2006–2010 for training the models and 2011–2015 for validation.

2.2. Quantile Mapping (QM)

QM algorithms are commonly used to correct forecast data against observations [34,35]. The QM method assumes that the forecast data have the same cumulative distribution function (CDF) as the observed data. The general equation of the QM method is defined as follows:

$$\hat{x}_{m,f}(t) = F_{o,h}^{-1}\left[ F_{m,h}\left( x_{m,f}(t) \right) \right]$$

where $\hat{x}_{m,f}(t)$ is the corrected model forecast at time $t$, $F_{m,h}$ is the CDF of the historical model data, and $F_{o,h}^{-1}$ is the inverse CDF of the historical observed data.
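For concreteness, the following is a minimal Python sketch of empirical quantile mapping; the function name, the synthetic data, and the use of empirical CDFs via `np.searchsorted` and `np.quantile` are our own illustrative choices, not the authors' implementation.

```python
import numpy as np

def quantile_mapping(forecast, hist_forecast, hist_observed):
    """Empirical QM: map each forecast value through the historical model
    CDF (F_m,h), then through the inverse observed CDF (F_o,h^-1)."""
    probs = np.searchsorted(np.sort(hist_forecast), forecast,
                            side="right") / len(hist_forecast)
    return np.quantile(hist_observed, np.clip(probs, 0.0, 1.0))

# Toy example: a "forecast" history biased 2 units high.
rng = np.random.default_rng(0)
hist_obs = rng.gamma(4.0, 4.0, size=2000)   # stand-in for observed Rs history
hist_fc = hist_obs + 2.0                    # stand-in for biased forecast history
print(quantile_mapping(np.array([20.0, 25.0]), hist_fc, hist_obs))
```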

2.3. Equiratio Cumulative Distribution Function Matching (EDCDFm)

EDCDFm is also a quantile-mapping-based method. Unlike the QM method, however, EDCDFm assumes that the observed and forecast values have different CDFs [36], so the difference between the CDFs must be considered. It is defined as follows:

$$\hat{x}_{m,f}(t) = x_{m,f}(t) + F_{o,h}^{-1}\left[F_{m,f}\left(x_{m,f}(t)\right)\right] - F_{m,h}^{-1}\left[F_{m,f}\left(x_{m,f}(t)\right)\right]$$

where $F_{m,f}$ is the CDF of the model forecast data in the future period, and $F_{m,h}^{-1}$ is the inverse CDF of the historical model forecast data.
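Under the same assumptions as the QM sketch above, the EDCDFm correction, which keeps the raw forecast and adds the quantile-dependent difference between the observed and modeled climatologies, can be sketched as follows (again an illustration, not the authors' code):

```python
import numpy as np

def edcdfm(forecast, hist_forecast, hist_observed):
    """x + F_o,h^-1(p) - F_m,h^-1(p), with p = F_m,f evaluated at each
    forecast value (the forecast sample itself stands in for F_m,f)."""
    probs = np.searchsorted(np.sort(forecast), forecast,
                            side="right") / len(forecast)
    probs = np.clip(probs, 0.0, 1.0)
    return (forecast
            + np.quantile(hist_observed, probs)
            - np.quantile(hist_forecast, probs))
```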

2.4. Machine-Learning Algorithms

2.4.1. Long-Short Term Memory (LSTM)

In recent years, owing to the advantages of the LSTM model in sequential tasks, researchers have studied it extensively [37,38,39]. LSTM is a deep-learning architecture that aims to solve the long-term dependence problem of conventional recurrent neural networks (RNNs) by introducing gating. An LSTM model can recall previous data and evaluate the relevance of features based on past data.
As shown in Figure 2, a typical LSTM network consists of one memory cell and three gates (input gate, forget gate and output gate). The input gate adjusts how much new data is stored in the cell; the output gate determines which information is read from the cell; and the forget gate determines which information can be discarded. The LSTM model weighs all of this information to make its judgments. These gates control the cell state $C_t$ and the output $h_t$; the input gate is calculated as follows:

$$\text{gate}(f_i) = \sigma_s\left(w_i x_t + u_i h_{t-1} + b_i\right)$$

where $\sigma_s$ is the sigmoid activation function, $h_{t-1}$ is the cell output at the previous time step, $w_i$ and $u_i$ are weighting factors, and $b_i$ is the bias. The forget gate is calculated as follows:

$$\text{gate}(f_t) = \sigma_s\left(w_f x_t + u_f h_{t-1} + b_f\right)$$

where $w_f$ and $u_f$ are weighting factors, and $b_f$ is the bias. The output gate is given by:

$$\text{gate}(f_o) = \sigma_s\left(w_o x_t + u_o h_{t-1} + b_o\right)$$

where $w_o$ and $u_o$ are weighting factors, and $b_o$ is the bias.
In this study, LSTM is used to forecast Rs; the input data include Rsf, Tmaxf, Tminf, RHf and Uf for the forecast target day and observed Rs values during the previous 3–6 days. The model was implemented in Python 3.7 (https://www.python.org/downloads/release/python-370/ (accessed on 25 April 2022)).
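A minimal sketch of such an LSTM regressor in Keras is shown below; the window length of six days, the five features per step, the layer width and the training settings are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np
from tensorflow import keras

# Toy shapes: 1000 samples, 6 time steps, 5 features per step
# (e.g., Rsf, Tmaxf, Tminf, RHf, Uf); target is daily observed Rs.
X = np.random.rand(1000, 6, 5).astype("float32")
y = np.random.rand(1000, 1).astype("float32")

model = keras.Sequential([
    keras.layers.Input(shape=(6, 5)),
    keras.layers.LSTM(64),    # gated memory cell with input/forget/output gates
    keras.layers.Dense(1),    # regression head: forecast Rs
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.predict(X[:3], verbose=0))
```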

2.4.2. Support Vector Machine (SVM)

SVM is an advanced statistical method based on the structural risk minimization principle and Vapnik–Chervonenkis dimension theory [40]. It can be used for both classification and regression problems. Support vector regression (SVR) extends the support vector machine to regression; it offers solid generalization ability, fast convergence, and the capacity to handle small samples and nonlinear problems. By introducing the structural-error minimization criterion, SVR achieves good robustness, generalization and learning ability. The SVR function is defined as follows:

$$f(x) = w \cdot \psi(x) + b$$

where $f(x)$ is the output, $w$ is the weight vector, $\psi(x)$ is the high-dimensional nonlinear mapping function, and $b$ is a constant. This equation is equivalent to the following objective function:

$$\min R(F) = \frac{1}{2}\|w\|^{2} + C\sum_{i=1}^{n}\left|f(x_{i}) - y_{i}\right|_{\varepsilon}$$

where $C$ is the penalty parameter, $n$ is the number of samples used to develop the model, $\varepsilon$ is the maximum allowable error, which depends on the samples, and $\left|f(x_{i}) - y_{i}\right|_{\varepsilon}$ is the $\varepsilon$-insensitive residual, defined as follows:

$$\left|f(x) - y\right|_{\varepsilon} = \max\left\{0, \left|f(x) - y\right| - \varepsilon\right\}$$

By introducing two relaxation variables ($\xi$ and $\xi^{*}$), Equation (5) can be rewritten as:

$$\min \frac{1}{2}\|w\|^{2} + C\sum_{i=1}^{n}\left(\xi_{i} + \xi_{i}^{*}\right)$$

$$\text{s.t.}\quad y_{i} - f(x_{i}) \le \varepsilon + \xi_{i},\qquad f(x_{i}) - y_{i} \le \varepsilon + \xi_{i}^{*},\qquad \xi_{i}, \xi_{i}^{*} \ge 0$$
Equation (6) can be converted into a duality problem as:
$$f(x) = \sum_{i=1}^{n}\left(\alpha_{i} - \alpha_{i}^{*}\right)K\left(x_{i}, x_{j}\right) + b$$

where $\alpha_{i}$ and $\alpha_{i}^{*}$ are the Lagrange multipliers and $K(\cdot)$ is a kernel function:

$$K\left(x_{i}, x_{j}\right) = \exp\left(-\frac{1}{2\sigma^{2}}\left\|x_{i} - x_{j}\right\|^{2}\right)$$

There are many kinds of kernel functions; in this study, we used the radial basis function, which has advantages for nonlinear problems.
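As an illustration, an RBF-kernel SVR of this form can be fitted with scikit-learn as below; the values of C, epsilon and gamma are placeholders for the ones tuned by the evolutionary algorithms (Table 2), and the random data stand in for the GEFSv12 predictors.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

X = np.random.rand(500, 5)   # stand-in for Rsf, Tmaxf, Tminf, RHf, Uf
y = np.random.rand(500)      # stand-in for observed Rs

# C is the penalty parameter of the objective above; gamma corresponds
# to 1/(2*sigma^2) in the RBF kernel.
svr = make_pipeline(StandardScaler(),
                    SVR(kernel="rbf", C=100.0, epsilon=0.1, gamma="scale"))
svr.fit(X, y)
print(svr.predict(X[:3]))
```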

2.4.3. Extreme Gradient Boosting (XGBoost)

XGBoost is the first parallel gradient-boosted decision tree (GBDT) algorithm. Based on classification and regression tree (CART) theory, XGBoost has been widely proven to be a very efficient approach to regression and classification problems [41]. After optimization, XGBoost's objective function consists of two parts, representing the deviation of the model and a regularization term to prevent overfitting. The objective function can be written as follows:
$$Obj = \sum_{i=1}^{m} l\left(y_{i}, \hat{y}_{i}^{(t-1)} + f_{t}(x_{i})\right) + \Omega\left(f_{k}\right)$$

$$\Omega\left(f_{k}\right) = \gamma T + \frac{1}{2}\lambda \|w\|^{2}$$
where $\gamma$ and $\lambda$ are parameters that measure model complexity, $T$ is the number of leaves on the CART tree, and $w$ is the weight of each leaf. More details can be found in reference [41].
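A hedged sketch with the xgboost Python package follows; the hyperparameter values are illustrative mid-range picks from Table 2, not the tuned ones.

```python
import numpy as np
import xgboost as xgb

X = np.random.rand(500, 5)   # stand-in predictors
y = np.random.rand(500)      # stand-in target Rs

model = xgb.XGBRegressor(
    n_estimators=300,     # number of trees, Table 2 range [50, 1000]
    max_depth=6,          # maximum tree depth, range [2, 50]
    learning_rate=0.1,    # range [0.01, 0.3]
    reg_lambda=1.0,       # the lambda leaf-weight penalty in the objective
    gamma=0.0,            # the gamma per-leaf complexity penalty
)
model.fit(X, y)
print(model.predict(X[:3]))
```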

2.4.4. Kernel-Based Nonlinear Extension of Arps Decline (KNEA)

KNEA is a new time-series model that has been applied to oil-production estimation, ET0 prediction and groundwater-level prediction [42,43]. KNEA synthesizes nonlinear models of the past state and present effects. Its main function can be expressed as:

$$f(x) = a f(x-1) + g\left(u(x)\right) + b$$

where $f(x)$ is the output at the present step; $f(x-1)$ is the output of the previous step; $u(x)$ denotes the variables that affect the output; $g(u(x))$ is a function of those variables; and $a$ and $b$ are constants. Usually, $g(u(x))$ is unknown, so it is converted to:

$$g\left(u(x)\right) = \omega^{T}\varphi\left(u(x)\right)$$

where $\varphi(u(x))$ is the nonlinear mapping of the variables into a new space.
After the transformation, a small error term $e_{x}$ can be introduced and the original problem becomes a minimization problem:

$$e_{x} = f(x) - a f(x-1) - \omega^{T}\varphi\left(u(x)\right) - b$$

$$\min \varsigma(a, \omega, e) = \frac{1}{2}a^{2} + \frac{1}{2}\|\omega\|^{2} + \frac{\delta}{2}\sum_{x=2}^{n} e_{x}^{2}$$

$$\text{s.t.}\quad f(x) = a f(x-1) + g\left(u(x)\right) + b + e_{x}$$
Similar to SVM, this equivalent form can also be solved by introducing Lagrange multipliers and kernel functions.
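Since no off-the-shelf KNEA implementation is standard, the following is a deliberately simplified stand-in: it folds the lagged output f(x−1) into the feature vector and solves a regularized kernel least-squares system, which approximates the constrained Lagrangian solution described above rather than reproducing it exactly. All names and parameter values are our own.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def knea_like_fit_predict(y, U, delta=10.0, sigma=1.0):
    """Simplified KNEA-style fit: regress f(x) on [f(x-1), u(x)] with a
    regularized kernel least-squares solution (delta = error weight)."""
    Z = np.hstack([y[:-1, None], U[1:]])     # lagged output + exogenous inputs
    t = y[1:]                                # targets f(x), x = 2..n
    K = rbf_kernel(Z, Z, sigma)
    alpha = np.linalg.solve(K + np.eye(len(t)) / delta, t)
    return K @ alpha                         # in-sample predictions

rng = np.random.default_rng(1)
U = rng.random((200, 4))                     # stand-in u(x) variables
y = np.cumsum(rng.random(200))               # toy autocorrelated series
print(knea_like_fit_predict(y, U)[:5])
```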

2.4.5. Bat Algorithm

Yang and He [44] proposed the bat algorithm (BA) by imitating the predation behavior of bats. The bat algorithm is highly efficient at parameter optimization. Velocity and position updates are critical for bats to find optimal solutions in the search space, and they are obtained from the following equations:

$$f_{i} = f_{\min} + \left(f_{\max} - f_{\min}\right)\beta_{0}$$

$$V_{i}^{t} = V_{i}^{t-1} + \left(X_{i}^{t-1} - X_{*}\right) f_{i}$$

$$X_{i}^{t} = X_{i}^{t-1} + V_{i}^{t}$$

where $\beta_{0}$ is a random vector with values in [−1, 1]; $X_{*}$ is the current best position among all the bats; and $f_{\min}$ and $f_{\max}$ are the coefficients that adjust the speed. After each generation, each bat produces a new position, as follows:

$$X_{new} = X_{old} + \mu A^{t}$$

where $\mu$ is also a random vector with values in [−1, 1]. The following steps implement the conditional update of the bat positions: a random number is generated and every bat of the generation is traversed; when the random number is greater than $r_{i}^{t}$ and the fitness of the bat is better than the current best fitness of the population, the new solution is accepted, and $r_{i}^{t}$ and $A_{i}^{t}$ are updated:

$$A_{i}^{t+1} = \nu A_{i}^{t}$$

$$r_{i}^{t+1} = r_{i}^{0}\left[1 - e^{-\rho t}\right]$$

where $A_{i}^{1} = 0.99$ and $r_{i}^{0} = 0.5$. Equations (20)–(25) are repeated until the maximum generation is reached. BA was used to optimize the parameters of the machine-learning models, i.e., SVM, XGBoost and KNEA. In this study, the population of the BA algorithm was set to 50 and the number of iterations to 200.
The parameter ranges of the three machine-learning models are shown in Table 2, and a sketch of the bat algorithm itself is given below.
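The compact sketch below minimizes a test function; in the study, BA would instead minimize the forecast error of SVM/XGBoost/KNEA over the parameter ranges in Table 2. The decay rate rho and the local-walk step size are illustrative choices not stated in the paper.

```python
import numpy as np

def bat_algorithm(obj, lb, ub, n_bats=50, n_iter=200,
                  f_min=0.0, f_max=2.0, a0=0.99, r0=0.5, rho=0.9, nu=0.99):
    rng = np.random.default_rng(42)
    dim = len(lb)
    X = rng.uniform(lb, ub, (n_bats, dim))   # candidate parameter vectors
    V = np.zeros((n_bats, dim))              # velocities
    A = np.full(n_bats, a0)                  # loudness A_i (initially 0.99)
    r = np.full(n_bats, r0)                  # pulse emission rate r_i
    fit = np.array([obj(x) for x in X])
    best = X[np.argmin(fit)].copy()

    for t in range(1, n_iter + 1):
        for i in range(n_bats):
            f_i = f_min + (f_max - f_min) * rng.random()   # frequency draw
            V[i] += (X[i] - best) * f_i                    # velocity update
            cand = np.clip(X[i] + V[i], lb, ub)
            if rng.random() > r[i]:   # local random walk around the best bat
                cand = np.clip(best + 0.01 * A.mean() * rng.normal(size=dim),
                               lb, ub)
            fc = obj(cand)
            if fc < fit[i] and rng.random() < A[i]:  # accept, update A_i, r_i
                X[i], fit[i] = cand, fc
                A[i] *= nu
                r[i] = r0 * (1.0 - np.exp(-rho * t))
        best = X[np.argmin(fit)].copy()
    return best, fit.min()

# Example: tune two "parameters" by minimizing the sphere function.
best, val = bat_algorithm(lambda x: float(np.sum(x ** 2)),
                          np.array([-5.0, -5.0]), np.array([5.0, 5.0]))
print(best, val)
```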

2.4.6. Particle Swarm Optimization Algorithm (PSO)

PSO is an algorithm developed by simulating group predation to find the optimal solution [45,46]. It designs a massless particle with only two attributes, speed and position, where speed represents how fast the particle moves and position represents the direction of movement. Each particle searches for the optimal solution separately in the search space and records it as its current individual extreme value; the position of this extreme value is shared with the other particles in the swarm, and when a better individual extreme value is found, the swarm's current global optimal solution is updated. The position and speed in the PSO algorithm are updated as follows:

$$Z_{i}^{t} = Z_{i}^{t-1} + u_{i}^{t}$$

$$u_{i}^{t} = \omega_{t} u_{i}^{t-1} + c_{1}\theta_{1}\left(pbest_{i} - Z_{i}^{t-1}\right) + c_{2}\theta_{2}\left(gbest_{i} - Z_{i}^{t-1}\right)$$

where $Z_{i}^{t}$ is the location of the $i$-th particle at the $t$-th iteration, $u_{i}^{t}$ is its speed at the $t$-th iteration, $c_{1}$ and $c_{2}$ are learning factors, both set to 2, and $\theta_{1}$ and $\theta_{2}$ are random numbers in the range [−1, 1]. $pbest_{i}$ is the best location of the $i$-th particle across iterations, and $gbest_{i}$ is the globally best location among all particles. $\omega_{t}$ is the momentum factor, calculated as follows:

$$\omega_{t} = \left(\omega_{ini} - \omega_{end}\right)\left(I_{\max} - t\right)/I_{\max} + \omega_{end}$$

where $\omega_{ini}$ and $\omega_{end}$ are the initial and final momentum factors, set to 0.9 and 0.4, respectively, and $I_{\max}$ is the maximum number of iterations. In this study, the population of the PSO algorithm was set to 50 and the number of iterations to 200. The ranges of the machine-learning parameters optimized by PSO were the same as for the BA algorithm. Figure 3 presents the flow chart of the three machine-learning models optimized by the evolutionary algorithms. All models except LSTM were implemented in the R language (v4.4, https://www.r-project.org/ (accessed on 25 April 2022)).
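For comparison with the BA sketch, a matching PSO sketch using the momentum schedule above is shown here; theta1 and theta2 are drawn from the conventional [0, 1] rather than the [−1, 1] stated in the text, and the test function is again illustrative.

```python
import numpy as np

def pso(obj, lb, ub, n_particles=50, n_iter=200, c1=2.0, c2=2.0,
        w_ini=0.9, w_end=0.4):
    rng = np.random.default_rng(7)
    dim = len(lb)
    Z = rng.uniform(lb, ub, (n_particles, dim))     # positions
    U = np.zeros((n_particles, dim))                # velocities
    pbest = Z.copy()
    pbest_val = np.array([obj(z) for z in Z])
    gbest = pbest[np.argmin(pbest_val)].copy()

    for t in range(n_iter):
        # Linearly decaying momentum factor omega_t, as in the equation above.
        w_t = (w_ini - w_end) * (n_iter - t) / n_iter + w_end
        theta1, theta2 = rng.random((2, n_particles, dim))
        U = w_t * U + c1 * theta1 * (pbest - Z) + c2 * theta2 * (gbest - Z)
        Z = np.clip(Z + U, lb, ub)
        vals = np.array([obj(z) for z in Z])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = Z[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

print(pso(lambda x: float(np.sum(x ** 2)),
          np.array([-5.0, -5.0]), np.array([5.0, 5.0])))
```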

2.5. Statistical Indicators

In this study, four commonly used statistical indicators were used to evaluate the forecasting performance for total surface radiation: the coefficient of determination (R2),

$$R^{2} = \frac{\left[\sum_{i=1}^{n}\left(R_{s,m} - \bar{R}_{s,m}\right)\left(R_{s,f} - \bar{R}_{s,f}\right)\right]^{2}}{\sum_{i=1}^{n}\left(R_{s,m} - \bar{R}_{s,m}\right)^{2}\sum_{i=1}^{n}\left(R_{s,f} - \bar{R}_{s,f}\right)^{2}}$$

the root mean square error (RMSE),

$$RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(R_{s,m} - R_{s,f}\right)^{2}}$$

the mean absolute error (MAE),

$$MAE = \frac{1}{n}\sum_{i=1}^{n}\left|R_{s,m} - R_{s,f}\right|$$

and the normalized RMSE (NRMSE),

$$NRMSE = RMSE / \bar{R}_{s,m}$$

where $R_{s,m}$ is the measured Rs, $R_{s,f}$ is the forecast Rs, $\bar{R}_{s,m}$ is the mean of the measured Rs, and $\bar{R}_{s,f}$ is the mean of the forecast Rs.
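These four indicators are straightforward to compute; a small Python helper, with names of our own choosing, is:

```python
import numpy as np

def metrics(rs_m, rs_f):
    """R2 (squared correlation), RMSE, MAE and NRMSE as defined above."""
    rs_m, rs_f = np.asarray(rs_m, float), np.asarray(rs_f, float)
    r2 = np.corrcoef(rs_m, rs_f)[0, 1] ** 2
    rmse = np.sqrt(np.mean((rs_m - rs_f) ** 2))
    mae = np.mean(np.abs(rs_m - rs_f))
    return {"R2": r2, "RMSE": rmse, "MAE": mae, "NRMSE": rmse / rs_m.mean()}

print(metrics([18.0, 22.5, 25.1], [17.2, 23.0, 24.4]))
```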

3. Results

3.1. Empirical Statistics Methods

Table 3 presents the statistical indicators of the GEFSv12 NWP raw Rsf forecast data and the results of the QM and EDCDFm methods. In general, as the forecast horizon lengthens, the errors of the NWP raw Rsf data and of the Rsf corrected by the QM and EDCDFm methods gradually increase. In Altay, the performance of the QM and EDCDFm methods was very similar, and both were slightly better than the NWP raw Rsf data. In Kashgar, the error of the raw Rsf data was relatively large; the QM and EDCDFm methods were superior to the raw Rsf, with RMSE decreasing by 28.2–31% and 28.6–31.5%, and MAE by 27.9–31.1% and 27.7–31.1%, respectively, for 1–3 d ahead. In Ruoqiang, the error of the raw Rsf was large, with RMSE exceeding 5 MJ m−2 d−1. After correction by the QM and EDCDFm methods, RMSE decreased by 17.4–18.5% and 19.7–20.1% for 1–3 d ahead, and MAE by 16–17.7% and 17.7–19.4%, respectively. However, the R2 of the raw Rsf was slightly higher than that of the two statistical methods, indicating that the statistical methods mainly corrected the overestimation (or underestimation) problem. The performance at Khotan station was similar to that at Ruoqiang. Compared with the raw Rsf over the four stations, the RMSE and MAE of the QM and EDCDFm models decreased by 20% and 15%, respectively. These results show that empirical statistical methods can improve forecasting accuracy.
As can be seen from the scatter plot of raw Rsf vs. ground-observed Rs (Figure 4), the scatter increases slightly from 1 d to 3 d ahead, indicating a slight decrease in accuracy. The forecast Rsf for 1–3 d ahead never exceeded 30 MJ m−2 d−1, slightly below the extreme values of observed Rs. The main problem of the GEFS data set lay in the many overestimated points when the observed value was below 25 MJ m−2 d−1. The QM and EDCDFm methods alleviated this problem, and their R2 was slightly higher than the corresponding value for the raw Rsf data.

3.2. Machine-Learning Methods

Table 4 shows the statistical indicators of the Rs forecasts by seven different machine-learning methods for 1–3 d ahead. In Altay, the average R2 on the third day decreased by 0.046, and the average RMSE and MAE increased by 13.4% and 13.1%, compared with the first day. Among the seven machine-learning models, the BA-KNEA model was superior to the others on every day: its RMSE, MAE and NRMSE were 2.1–10.3%, 2.5–12.0% and 2.8–12.4% lower than the other machine-learning models for 1 d ahead, 1.8–8.8%, 1.7–10.1% and 1.6–9.9% lower for 2 d ahead, and 2.2–8.2%, 2.2–9.6% and 2.0–9.5% lower for 3 d ahead. The BA-SVM model ranked second, followed by the BA-XGBoost, PSO-KNEA, PSO-SVM, LSTM and PSO-XGBoost models.
In Kashgar, the BA-KNEA model did not have a significant advantage over the PSO-KNEA model on the first two days, but performed slightly better on the third day. Overall, the BA-KNEA model was superior to the other models: its RMSE, MAE and NRMSE were lower by 3.8–7.0%, 0–6.2% and 0–6.3% for 1 d ahead, 3.8–8.4%, 2.8–8.6% and 2.4–8.2% for 2 d ahead, and 5.4–12.5%, 2.3–14.7% and 2.3–14.5% for 3 d ahead. In addition, the BA-XGBoost model slightly outperformed the BA-SVM model.
In Ruoqiang, the BA-KNEA model performed better than the other six models: compared with BA-KNEA, the RMSE, MAE and NRMSE of the other six models were higher by 4.6–9.6%, 4.6–10.6% and 4.5–10.5% for 1 d ahead, 7.1–10.0%, 4.9–9.5% and 6.3–9.7% for 2 d ahead, and 3.3–4.9%, 2.6–4.6% and 3.2–5.1% for 3 d ahead. The BA-SVM model performed better than the remaining models for 1 d ahead, but its advantage over the models other than BA-KNEA was not obvious on the other two days. In Khotan, the BA-KNEA model again achieved the highest accuracy, with the RMSE, MAE and NRMSE of the other models higher by 2.6–6.9%, 4.8–7.1% and 1.3–6.7% for 1 d ahead, 3.5–9.0%, 1.6–8.4% and 1.2–8.5% for 2 d ahead, and 3.0–8.5%, 1.8–12.8% and 1.8–11.8% for 3 d ahead. The BA-SVM model still performed better than the remaining models other than BA-KNEA.
The scatter plots of observed Rs vs. Rsf for the seven machine-learning models are shown in Figure 5. Among all the machine-learning models, the BA-KNEA model performed slightly better than the others, followed by the BA-SVM model. The slopes of all the regression equations in the figure were less than 1 and the intercepts greater than 0, which means that all the models underestimate when Rs is very large and overestimate when Rs is very small.
Figure 6 shows the distribution of the absolute error (AE) of the forecast Rs for the different machine-learning models 1–3 d ahead. At 1 d ahead, the proportion of days with AE < 2 MJ m−2 d−1 was around 60% for the models, with PSO-KNEA and BA-KNEA slightly higher than the others; for the proportion of days with AE > 6 MJ m−2 d−1, BA-KNEA held a slight advantage (fewer such days) over the other models. The performance at 2 d ahead was slightly worse than at 1 d ahead: the proportion of days with AE < 2 MJ m−2 d−1 fell below 60% for all models, while the number of days with AE > 6 MJ m−2 d−1 changed little compared with 1 d ahead, with the BA-KNEA model retaining its slight advantage. At 3 d ahead, the accuracy of the models continued to decline, and the BA-KNEA model again had a slightly lower proportion of days with AE > 6 MJ m−2 d−1 than the other models.
Figure 7 shows the Taylor diagrams of the different methods over the four stations. The BA-KNEA model outperformed the other methods at all stations.

3.3. Comparison of Statistical Models and Machine-Learning Models

To evaluate the performance of the different categories of models, we ranked the four statistical indicators of all models over the four stations (Table 5). The model with the highest R2 or the lowest RMSE, MAE or NRMSE ranked first, and so on; when the rankings of different statistical indicators disagreed, the model leading on more indicators ranked first. The rankings for 1–3 d ahead were identical: the BA-KNEA model was the best, followed by the BA-SVM, BA-XGBoost, PSO-KNEA, PSO-SVM, LSTM, PSO-XGBoost, EDCDFm and QM models. These results show that the machine-learning models are superior to the empirical-statistical models, and that the new BA-KNEA model has the best accuracy. In addition, the Taylor plots of the different stations on the first day of the forecast period (Figure 7) show that the results of the BA-KNEA model were closest to the observations, while the GEFS raw data had the largest error.

3.4. BA-KNEA with Different Input Combinations

To analyze how different meteorological factors affect the forecasting results, we ran the BA-KNEA model with different input combinations and compared the contributions of the factors. Table 6 shows the statistical indicators of the different input combinations of the BA-KNEA model for 1–3 d ahead. When the only input was Rsf, the BA-KNEA model was more accurate than the QM and EDCDFm methods with the same input at all four stations (Table 3), with RMSE and MAE 1.7–7.9% and 1.6–7.6% lower than the EDCDFm method over the 1–3 d horizon. This model was also better than the model built with temperature and extraterrestrial radiation as inputs (Combination 5), which shows that the solar-radiation forecasts in the GEFSv12 dataset outperform the traditional temperature-based machine-learning approach.
In Altay, when only the maximum and minimum air temperatures were used as inputs, the error was larger than for the model with Rsf input: R2 was 0.712–0.723, RMSE 4.705–4.812 MJ m−2 d−1, MAE 3.766–3.799 MJ m−2 d−1, and NRMSE 0.241–0.243. Adding RHf, Uf, or Tmaxf and Tminf to Rsf improved the prediction accuracy of Rs, with wind speed giving the largest improvement, followed by air temperature and, finally, relative humidity. Compared with Combinations 2, 3 and 4, Combination 6 was more accurate, showing that the multi-factor combination contains more nonlinear information related to Rs than the two-factor combinations and helps improve the model accuracy further. At Kashgar station, adding relative humidity to Rsf did not improve accuracy significantly, and for forecast horizons of 2 and 3 days, adding wind speed to Rsf improved accuracy only slightly. Adding temperature to Rsf improved the model's accuracy to a certain extent, with results close to those of the complete combination (Combination 6), mainly because of the limited contribution of RHf and Uf. The performance of the BA-KNEA model on the first two days at Ruoqiang station was similar to that at Altay, but on the third day Combination 3 outperformed the complete input combination: because the forecast accuracy of wind speed and relative humidity is poor, adding these factors introduces noise into the model. At Khotan station, on the first day the complete combination was close to Combinations 2, 3 and 4, but superior to them on the other two days. Overall, the complete combination was slightly better than the other combinations over the four stations.

4. Discussion

Different machine-learning models perform differently in solar-radiation forecasting, mainly for two reasons. First, different machine-learning models have different sensitivities to data distribution: kernel-based methods can perform well on low-dimensional data sets [47], tree-based models perform better with high-dimensional data and large amounts of categorical data, and deep-learning models excel at image processing [48]. Second, the parameter selection of machine-learning models may not reach the globally optimal solution. Fan et al. [31] compared the performance of SVM and XGBoost with temperature and precipitation as input factors and found that SVM was slightly better than XGBoost. Ghimire et al. [7] compared ANN, SVR, GPML and GP models for forecasting solar radiation with reanalysis data in Queensland, Australia, and highlighted that the ANN model outperformed the other ML models. Shin et al. [49] used a deep-learning model for short-term forecasting of solar radiation for photovoltaic power generation. Hu et al. [50] used ground-based images and an ANN model to forecast solar radiation. However, few studies have used weather-forecast products to forecast solar radiation in China. In this study, we evaluated the capability of the GEFSv12 product in a solar-resource-rich region of China and found that the raw solar-radiation forecast data in GEFSv12 perform poorly and are too uncertain for direct use. We therefore built a coupling model based on the bat algorithm and the KNEA model; the results show that the newly developed model is superior to the other empirical-statistical and machine-learning models. LSTM has been used to forecast Rs at hourly and other time scales [51,52]; however, we found that LSTM did not perform better than the BA-KNEA model or the other models. Daily Rs fluctuates widely in the arid regions of northwest China, and historical information is less important than the numerical weather-forecast data for the coming days, so the LSTM could not extract enough information to forecast Rs 1–3 d ahead.
Many scholars have found that various meteorological factors, such as air temperature, relative humidity, wind speed and precipitation, are closely related to solar radiation [53,54], but the effects of these factors vary in different regions of the globe [55,56]. In northwest China, air temperature is the meteorological variable most closely related to solar radiation [57], and many scholars have therefore established temperature-based solar-radiation models. Relative humidity and wind speed have also been used to improve the accuracy of solar-radiation prediction [58,59]. Although this study used a forecast data set, similar results were obtained, which suggests that the forecast data set and observation data behave similarly. The most significant difference between them lies in the forecast precision of the individual factors: temperature is forecast very accurately, but the forecast accuracy of relative humidity and wind speed is low, mainly because of a mismatch in spatial scale, i.e., the forecast data are an average over a large area, while the relative humidity and wind speed observed by a weather station are point values. At the four stations in this study, the accuracy of models including the temperature factor was generally better than that of models including wind speed or relative humidity, and the relative-humidity and wind-speed forecasts of GEFSv12 need to be improved.

5. Conclusions

Accurate forecasting of solar radiation (Rs) is significant to photovoltaic power generation and agricultural management. For the first time, this study evaluated and improved the capability of the newly released National Centers for Environmental Prediction Global Ensemble Forecast System version 12 (NCEP GEFSv12) for short-term forecasting of Rs. To achieve this goal, a new coupling model based on the bat algorithm (BA) and the kernel-based nonlinear extension of Arps decline (KNEA) was established, using data from four solar-radiation stations in Xinjiang, China as the benchmark. The new model was also compared with two empirical statistical methods (quantile mapping and equiratio cumulative distribution function matching) and five machine-learning methods: support vector machine (SVM), XGBoost, KNEA, BA-SVM and BA-XGBoost. The results show that the forecasting accuracy of all models decreases from 1 d to 3 d ahead. Compared with the GEFS raw Rs data over the four stations, the RMSE and MAE of the QM and EDCDFm models decreased by 20% and 15%, respectively. In addition, the BA-KNEA model was superior to the GEFSv12 raw Rs data and the other post-processing methods, with R2 = 0.782–0.829, RMSE = 3.240–3.685 MJ m−2 d−1, MAE = 2.465–2.799 MJ m−2 d−1, and NRMSE = 0.152–0.173.

Author Contributions

Conceptualization, L.W.; methodology, L.W. and F.L.; software, S.W.; validation, G.D.; formal analysis, G.D.; investigation, G.D.; resources, data curation, S.W.; writing—original draft preparation, L.W. and G.D.; writing—review and editing, G.D. and L.W.; supervision, F.L.; project administration, Y.W.; funding acquisition, S.W. All authors have read and agreed to the published version of the manuscript.

Funding

This study was jointly supported by the National Natural Science Foundation of China (No. 51879226, 51709143) and Jiangxi Natural Science Foundation of China (No. 20181BBG78078). The APC was funded by Jiangxi Natural Science Foundation of China (No. 20181BBG78078).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

Thanks to the National Meteorological Information Center of China Meteorological Administration for offering the meteorological data.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Liu, P.; Tong, X.; Zhang, J.; Meng, P.; Li, J.; Zheng, J. Estimation of half-hourly diffuse solar radiation over a mixed plantation in north China. Renew. Energy 2020, 149, 1360–1369.
2. Demircan, C.; Bayrakçı, H.C.; Keçebaş, A. Machine learning-based improvement of empiric models for an accurate estimating process of global solar radiation. Sustain. Energy Technol. Assess. 2020, 37, 100574.
3. Chang, K.; Zhang, Q. Improvement of the hourly global solar model and solar radiation for air-conditioning design in China. Renew. Energy 2019, 138, 1232–1238.
4. Zang, H.; Cheng, L.; Ding, T.; Cheung, K.W.; Wang, M.; Wei, Z.; Sun, G. Application of functional deep belief network for estimating daily global solar radiation: A case study in China. Energy 2019, 191, 116502.
5. Rehman, S.; Mohandes, M. Artificial neural network estimation of global solar radiation using air temperature and relative humidity. Energy Policy 2008, 36, 571–576.
6. Quej, V.H.; Almorox, J.; Arnaldo, J.A.; Saito, L. ANFIS, SVM and ANN soft-computing techniques to estimate daily global solar radiation in a warm sub-humid environment. J. Atmos. Sol.-Terr. Phys. 2017, 155, 62–70.
7. Ghimire, S.; Deo, R.C.; Raj, N.; Mi, J. Deep solar radiation forecasting with convolutional neural network and long short-term memory network algorithms. Appl. Energy 2019, 253, 113541.
8. Deo, R.C.; Şahin, M.; Adamowski, J.F.; Mi, J. Universally deployable extreme learning machines integrated with remotely sensed MODIS satellite predictors over Australia to forecast global solar radiation: A new approach. Renew. Sustain. Energy Rev. 2019, 104, 235–261.
9. Hassan, M.A.; Khalil, A.; Kaseb, S.; Kassem, M.A. Exploring the potential of tree-based ensemble methods in solar radiation modeling. Appl. Energy 2017, 203, 897–916.
10. Güçlü, Y.S.; Yeleğen, M.Ö.; Dabanlı, İ.; Şişman, E. Solar irradiation estimations and comparisons by ANFIS, Angström–Prescott and dependency models. Sol. Energy 2014, 109, 118–124.
11. Mohammadi, K.; Shamshirband, S.; Kamsin, A.; Lai, P.C.; Mansor, Z. Identifying the most significant input parameters for predicting global solar radiation using an ANFIS selection procedure. Renew. Sustain. Energy Rev. 2016, 63, 423–434.
12. Feng, Y.; Cui, N.; Chen, Y.; Gong, D.; Hu, X. Development of data-driven models for prediction of daily global horizontal irradiance in northwest China. J. Clean. Prod. 2019, 223, 136–146.
13. Wu, L.; Huang, G.; Fan, J.; Zhang, F.; Wang, X.; Zeng, W. Potential of kernel-based nonlinear extension of Arps decline model and gradient boosting with categorical features support for predicting daily global solar radiation in humid regions. Energy Convers. Manag. 2019, 183, 280–295.
14. Fan, J.; Wu, L.; Zhang, F.; Cai, H.; Wang, X.; Lu, X.; Xiang, Y. Evaluating the effect of air pollution on global and diffuse solar radiation prediction using support vector machine modeling based on sunshine duration and air temperature. Renew. Sustain. Energy Rev. 2018, 94, 732–747.
15. Fan, J.; Wu, L.; Ma, X.; Zhou, H.; Zhang, F. Hybrid support vector machines with heuristic algorithms for prediction of daily diffuse solar radiation in air-polluted regions. Renew. Energy 2020, 145, 2034–2045.
16. Belaid, S.; Mellit, A. Prediction of daily and mean monthly global solar radiation using support vector machine in an arid climate. Energy Convers. Manag. 2016, 118, 105–118.
17. Urraca, R.; Martinez-de-Pison, E.; Sanz-Garcia, A.; Antonanzas, J.; Antonanzas-Torres, F. Estimation methods for global solar radiation: Case study evaluation of five different approaches in central Spain. Renew. Sustain. Energy Rev. 2017, 77, 1098–1113.
18. Álvarez-Alvarado, J.M.; Ríos-Moreno, J.G.; Obregón-Biosca, S.A.; Ronquillo-Lomelí, G.; Ventura-Ramos, E.; Trejo-Perea, M. Hybrid techniques to predict solar radiation using support vector machine and search optimization algorithms: A review. Appl. Sci. 2021, 11, 1044.
19. Dong, J.; Wu, L.; Liu, X.; Fan, C.; Leng, M.; Yang, Q. Simulation of daily diffuse solar radiation based on three machine learning models. Comput. Model. Eng. Sci. 2020, 123, 49–73.
20. Feng, Y.; Hao, W.; Li, H.; Cui, N.; Gong, D.; Gao, L. Machine learning models to quantify and map daily global solar radiation and photovoltaic power. Renew. Sustain. Energy Rev. 2020, 118, 109393.
21. Liu, Y.; Zhou, Y.; Chen, Y.; Wang, D.; Wang, Y.; Zhu, Y. Comparison of support vector machine and copula-based nonlinear quantile regression for estimating the daily diffuse solar radiation: A case study in China. Renew. Energy 2020, 146, 1101–1112.
22. Qing, X.; Niu, Y. Hourly day-ahead solar irradiance prediction using weather forecasts by LSTM. Energy 2018, 148, 461–468.
23. Abdel-Nasser, M.; Mahmoud, K. Accurate photovoltaic power forecasting models using deep LSTM-RNN. Neural Comput. Appl. 2019, 31, 2727–2740.
24. Huang, C.; Kuo, P. Multiple-input deep convolutional neural network model for short-term photovoltaic power forecasting. IEEE Access 2019, 7, 74822–74834.
25. Kaba, K.; Sarıgül, M.; Avcı, M.; Kandırmaz, H.M. Estimation of daily global solar radiation using deep learning model. Energy 2018, 162, 126–135.
26. Voyant, C.; Notton, G.; Kalogirou, S.; Nivet, M.; Paoli, C.; Motte, L.; Fouilloy, A. Machine learning methods for solar radiation forecasting: A review. Renew. Energy 2017, 105, 569–582.
27. Sun, H.; Gui, D.; Yan, B.; Liu, Y.; Liao, W.; Zhu, Y.; Lu, C.; Zhao, N. Assessing the potential of random forest method for estimating solar radiation using air pollution index. Energy Convers. Manag. 2016, 119, 121–129.
28. Ibrahim, I.A.; Khatib, T. A novel hybrid model for hourly global solar radiation prediction using random forests technique and firefly algorithm. Energy Convers. Manag. 2017, 138, 413–425.
29. Prasad, R.; Ali, M.; Kwan, P.; Khan, H. Designing a multi-stage multivariate empirical mode decomposition coupled with ant colony optimization and random forest model to forecast monthly solar radiation. Appl. Energy 2019, 236, 778–792.
30. Hamill, T.M.; Whitaker, J.S.; Shlyaeva, A.; Bates, G.; Fredrick, S.; Pegion, P.; Sinsky, E.; Zhu, Y.; Tallapragada, V.; Guan, H.; et al. The Reanalysis for the Global Ensemble Forecast System, Version 12. Mon. Weather Rev. 2022, 150, 59–79.
31. Fan, J.; Chen, B.; Wu, L.; Zhang, F.; Lu, X.; Xiang, Y. Evaluation and development of temperature-based empirical models for estimating daily global solar radiation in humid regions. Energy 2018, 144, 903–914.
32. Zhou, X.; Zhu, Y.; Hou, D.; Fu, B.; Li, W.; Guan, H.; Sinsky, E.; Kolczynski, W.; Xue, X.; Luo, Y.; et al. The Development of the NCEP Global Ensemble Forecast System Version 12. Weather Forecast. 2022, 37, 727.
33. Tallapragada, V. Recent updates to NCEP Global Modeling Systems: Implementation of FV3 based Global Forecast System (GFS v15.1) and plans for implementation of Global Ensemble Forecast System (GEFSv12). In AGU Fall Meeting Abstracts; Astrophysics Data System: San Francisco, CA, USA, 2019; pp. A31C–A34C.
34. Lee, T.; Singh, V.P. Statistical Downscaling for Hydrological and Environmental Applications; CRC Press: Boca Raton, FL, USA, 2018.
35. Maraun, D. Bias correction, quantile mapping, and downscaling: Revisiting the inflation issue. J. Clim. 2013, 26, 2137–2143.
36. Guo, L.; Gao, Q.; Jiang, Z.; Li, L. Bias correction and projection of surface air temperature in LMDZ multiple simulation over central and eastern China. Adv. Clim. Chang. Res. 2018, 9, 81–92.
37. Yu, Y.; Si, X.; Hu, C.; Zhang, J. A review of recurrent neural networks: LSTM cells and network architectures. Neural Comput. 2019, 31, 1235–1270.
38. Yan, R.; Liao, J.; Yang, J.; Sun, W.; Nong, M.; Li, F. Multi-hour and multi-site air quality index forecasting in Beijing using CNN, LSTM, CNN-LSTM, and spatiotemporal clustering. Expert Syst. Appl. 2021, 169, 114513.
39. Ao, C.; Zeng, W.; Wu, L.; Qian, L.; Srivastava, A.K.; Gaiser, T. Time-delayed machine learning models for estimating groundwater depth in the Hetao Irrigation District, China. Agric. Water Manag. 2021, 255, 107032.
40. Vapnik, V.N. An overview of statistical learning theory. IEEE Trans. Neural Netw. 1999, 10, 988–999.
41. Chen, T.; He, T.; Benesty, M.; Khotilovich, V.; Tang, Y.; Cho, H.; Chen, K. Xgboost: Extreme Gradient Boosting; R Package Vers. 0.4-2; Xgboost: Seattle, WA, USA, 2015; pp. 1–4.
42. Ma, X.; Liu, Z. Predicting the oil production using the novel multivariate nonlinear model based on Arps decline model and kernel method. Neural Comput. Appl. 2018, 29, 579–591.
43. Lu, H.; Ma, X.; Huang, K.; Azimi, M. Prediction of offshore wind farm power using a novel two-stage model combining kernel-based nonlinear extension of the Arps decline model with a multi-objective grey wolf optimizer. Renew. Sustain. Energy Rev. 2020, 127, 109856.
44. Yang, X.; He, X. Bat algorithm: Literature review and applications. Int. J. Bio-Inspired Comput. 2013, 5, 141–149.
45. Cui, Y.; Jia, L.; Fan, W. Estimation of actual evapotranspiration and its components in an irrigated area by integrating the Shuttleworth-Wallace and surface temperature-vegetation index schemes using the particle swarm optimization algorithm. Agric. For. Meteorol. 2021, 307, 108488.
46. Wang, D.; Tan, D.; Liu, L. Particle swarm optimization algorithm: An overview. Soft Comput. 2018, 22, 387–408.
47. Erfani, S.M.; Rajasegarar, S.; Karunasekera, S.; Leckie, C. High-dimensional and large-scale anomaly detection using a linear one-class SVM with deep learning. Pattern Recognit. 2016, 58, 121–134.
48. Zang, H.; Liu, L.; Sun, L.; Cheng, L.; Wei, Z.; Sun, G. Short-term global horizontal irradiance forecasting based on a hybrid CNN-LSTM model with spatiotemporal correlations. Renew. Energy 2020, 160, 26–41.
49. Shin, D.; Ha, E.; Kim, T.; Kim, C. Short-term photovoltaic power generation predicting by input/output structure of weather forecast using deep learning. Soft Comput. 2021, 25, 771–783.
50. Hu, M.; Zhao, B.; Ao, X.; Cao, J.; Wang, Q.; Riffat, S.; Su, Y.; Pei, G. Applications of radiative sky cooling in solar energy systems: Progress, challenges, and prospects. Renew. Sustain. Energy Rev. 2022, 160, 112304.
51. Feng, Y.; Zhang, X.; Jia, Y.; Cui, N.; Hao, W.; Li, H.; Gong, D. High-resolution assessment of solar radiation and energy potential in China. Energy Convers. Manag. 2021, 240, 114265.
52. De Araujo, J.M.S. Performance comparison of solar radiation forecasting between WRF and LSTM in Gifu, Japan. Environ. Res. Commun. 2020, 2, 045002.
53. Zhou, Y.; Liu, Y.; Wang, D.; Liu, X.; Wang, Y. A review on global solar radiation prediction with machine learning models in a comprehensive perspective. Energy Convers. Manag. 2021, 235, 113960.
54. Qiu, R.; Li, L.; Wu, L.; Agathokleous, E.; Liu, C.; Zhang, B.; Luo, Y.; Sun, S. Modeling daily global solar radiation using only temperature data: Past, development, and future. Renew. Sustain. Energy Rev. 2022, 163, 112511.
55. Makade, R.G.; Chakrabarti, S.; Jamil, B. Development of global solar radiation models: A comprehensive review and statistical analysis for Indian regions. J. Clean. Prod. 2021, 293, 126208.
56. Tao, H.; Ewees, A.A.; Al-Sulttani, A.O.; Beyaztas, U.; Hameed, M.M.; Salih, S.Q.; Armanuos, A.M.; Al-Ansari, N.; Voyant, C.; Shahid, S.; et al. Global solar radiation prediction over North Dakota using air temperature: Development of novel hybrid intelligence model. Energy Rep. 2021, 7, 136–157.
57. Zhang, Y.; Cui, N.; Feng, Y.; Gong, D.; Hu, X. Comparison of BP, PSO-BP and statistical models for predicting daily global solar radiation in arid Northwest China. Comput. Electron. Agric. 2019, 164, 104905.
58. Yadav, A.K.; Chandel, S.S. Solar radiation prediction using Artificial Neural Network techniques: A review. Renew. Sustain. Energy Rev. 2014, 33, 772–781.
59. Fan, J.; Wang, X.; Wu, L.; Zhang, F.; Bai, H.; Lu, X.; Xiang, Y. New combined models for estimating daily global solar radiation based on sunshine duration in humid regions: A case study in South China. Energy Convers. Manag. 2018, 156, 618–625.
Figure 1. Location of the meteorological stations.
Figure 2. The structure of the LSTM model.
Figure 3. Flowchart of the three machine-learning models optimized by the evolutionary algorithms.
Figure 4. Scatter plots of measured Rs vs. forecast Rs at Kashgar station during the testing period: GEFSv12 raw Rs forecast data at (a) 1 d ahead, (b) 2 d ahead, (c) 3 d ahead; QM-method forecast Rs at (d) 1 d ahead, (e) 2 d ahead, (f) 3 d ahead; EDCDFm-method forecast Rs at (g) 1 d ahead, (h) 2 d ahead, (i) 3 d ahead.
Figure 5. Scatter plots of measured Rs vs. forecast Rs at Kashgar station during the testing period: LSTM forecasts at (a) 1 d ahead, (b) 2 d ahead, (c) 3 d ahead; PSO-SVM (d–f); BA-SVM (g–i); PSO-XGBoost (j–l); BA-XGBoost (m–o); PSO-KNEA (p–r); BA-KNEA (s–u), each for 1–3 d ahead.
Figure 6. Absolute error of different machine-learning models at Ruoqiang station.
Figure 7. Taylor plots of the forecasting results of the different methods.
Table 1. Global solar radiation in different months at the stations in this study.

| Station | Period | Jan. | Feb. | Mar. | Apr. | May | Jun. | Jul. | Aug. | Sept. | Oct. | Nov. | Dec. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Altay | Train | 7 ± 2.7 | 11.1 ± 3.5 | 15.7 ± 4.8 | 20.3 ± 5.9 | 23.8 ± 7.5 | 25.3 ± 7.1 | 24.2 ± 7 | 21.3 ± 6.3 | 17 ± 5.7 | 10.2 ± 4.6 | 6.1 ± 3.1 | 5.3 ± 2.4 |
| | Test | 6 ± 2.7 | 9.6 ± 3.6 | 15 ± 4.4 | 18.7 ± 5.9 | 22.5 ± 6.7 | 24.6 ± 5.6 | 23.7 ± 5.5 | 20.4 ± 5.1 | 15.9 ± 4.4 | 10 ± 4 | 6.1 ± 2.7 | 4.9 ± 2.3 |
| Kashgar | Train | 8.3 ± 2.5 | 9.6 ± 3.8 | 13.8 ± 4.6 | 19 ± 5.8 | 22.3 ± 6.2 | 26.4 ± 5 | 25.2 ± 4.6 | 21.3 ± 5 | 17.3 ± 4.4 | 13.1 ± 3.2 | 8.3 ± 2.4 | 6.2 ± 1.9 |
| | Test | 6.8 ± 2.4 | 9.1 ± 3.5 | 13 ± 4.7 | 17.2 ± 5.6 | 20.7 ± 6.1 | 24.7 ± 5 | 22.9 ± 5.6 | 19.7 ± 4.5 | 16.1 ± 4.1 | 12.3 ± 3.3 | 8.5 ± 2.5 | 6.4 ± 2 |
| Ruoqiang | Train | 9.3 ± 2.5 | 10.9 ± 2.7 | 16 ± 4.3 | 20.1 ± 4.7 | 22.1 ± 5.9 | 22.9 ± 6.6 | 24.1 ± 6.7 | 21.9 ± 6.1 | 19 ± 3.5 | 14.8 ± 3.1 | 9.5 ± 2.7 | 8 ± 1.8 |
| | Test | 8.6 ± 2.6 | 11.2 ± 2.8 | 15.4 ± 3.8 | 18.8 ± 5.2 | 21.8 ± 6 | 23 ± 5 | 21.5 ± 5.9 | 20.3 ± 5.5 | 17.9 ± 4 | 14.2 ± 2.8 | 10.6 ± 2.2 | 7.8 ± 1.9 |
| Khotan | Train | 10.1 ± 2.5 | 11.6 ± 3.3 | 15.5 ± 4 | 19.8 ± 5.4 | 23.4 ± 5.8 | 23.9 ± 6 | 22.3 ± 6.3 | 20.1 ± 5.4 | 18.6 ± 4.8 | 16.3 ± 2.8 | 11.1 ± 2.3 | 8.8 ± 2.6 |
| | Test | 9.1 ± 3 | 11.2 ± 3.8 | 15.2 ± 4.6 | 18.9 ± 5.2 | 21.5 ± 5.1 | 22.1 ± 5.4 | 21.3 ± 5.9 | 19.2 ± 4.6 | 16.2 ± 4.9 | 14.9 ± 2.8 | 10.8 ± 2.2 | 8.7 ± 1.7 |

Note: the unit of the data is MJ m−2 d−1 (mean ± standard deviation).
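Monthly summaries of the kind shown in Table 1 can be produced directly from a daily Rs series. The sketch below uses a synthetic series purely as a stand-in for a real station record.

```python
import numpy as np
import pandas as pd

# Synthetic daily Rs series (MJ m-2 d-1) standing in for a station record.
idx = pd.date_range("2001-01-01", "2015-12-31", freq="D")
doy = np.asarray(idx.dayofyear)
rs = pd.Series(17 + 9 * np.sin(2 * np.pi * (doy - 80) / 365.25), index=idx)

# One mean/std pair per calendar month, as tabulated in Table 1.
monthly = rs.groupby(rs.index.month).agg(["mean", "std"]).round(1)
print(monthly)
```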
Table 2. Parameters of the three machine-learning models and their search ranges.

| Model | Parameter Name | Range |
|---|---|---|
| SVM | Regularization coefficient | [0.01, 10,000] |
| | Kernel parameter | [0.01, 10,000] |
| XGBoost | Number of trees | [50, 1000] |
| | Maximum tree depth | [2, 50] |
| | Learning rate | [0.01, 0.3] |
| KNEA | Regularization coefficient | [0.1, 10,000] |
| | Kernel parameter | [0.1, 10,000] |
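Table 2 gives the ranges within which the optimizers search for each model's hyperparameters. As an illustration, the sketch below runs a compressed form of the standard bat-algorithm loop to tune an SVM's regularization coefficient and kernel parameter over the [0.01, 10,000] range in log space. The population size, frequency range, and loudness/pulse-rate updates are illustrative choices rather than the paper's settings, and the data are synthetic.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# Synthetic stand-in for the (predictor, observed Rs) training matrix.
X = rng.normal(size=(200, 4))
y = X @ np.array([2.0, -1.0, 0.5, 0.3]) + rng.normal(scale=0.5, size=200)

LO, HI = np.log10(0.01), np.log10(10_000)     # Table 2 range, log10 space

def fitness(pos):
    """Cross-validated RMSE of an SVR at pos = (log10 C, log10 gamma)."""
    c, g = 10.0 ** pos
    return -cross_val_score(SVR(C=c, gamma=g), X, y, cv=3,
                            scoring="neg_root_mean_squared_error").mean()

n_bats, n_iter = 10, 20
pos = rng.uniform(LO, HI, size=(n_bats, 2))   # bat positions
vel = np.zeros((n_bats, 2))
loud = np.ones(n_bats)                        # loudness A_i
pulse = np.zeros(n_bats)                      # pulse emission rate r_i
fit = np.array([fitness(p) for p in pos])
best, best_fit = pos[fit.argmin()].copy(), fit.min()

for t in range(n_iter):
    for i in range(n_bats):
        freq = rng.uniform(0.0, 2.0)          # frequency f_i
        vel[i] += (pos[i] - best) * freq
        cand = np.clip(pos[i] + vel[i], LO, HI)
        if rng.random() > pulse[i]:           # local walk around the best bat
            cand = np.clip(best + 0.1 * loud.mean() * rng.normal(size=2), LO, HI)
        f_cand = fitness(cand)
        if f_cand <= fit[i] and rng.random() < loud[i]:
            pos[i], fit[i] = cand, f_cand
            loud[i] *= 0.9                    # quieter, emits pulses more often
            pulse[i] = 1.0 - np.exp(-0.9 * (t + 1))
        if f_cand < best_fit:                 # track the global best separately
            best, best_fit = cand.copy(), f_cand

print("selected C, gamma:", 10.0 ** best, "CV RMSE:", round(best_fit, 3))
```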
Table 3. Statistical indicators of solar-radiation forecasts by the GEFS NWP raw data and the two empirical-statistics methods.

| Model | R² (1 d) | RMSE (1 d) | MAE (1 d) | NRMSE (1 d) | R² (2 d) | RMSE (2 d) | MAE (2 d) | NRMSE (2 d) | R² (3 d) | RMSE (3 d) | MAE (3 d) | NRMSE (3 d) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| **51076 Altay** | | | | | | | | | | | | |
| NWP | 0.816 | 3.939 | 3.120 | 0.250 | 0.766 | 4.313 | 3.417 | 0.274 | 0.745 | 4.582 | 3.482 | 0.292 |
| QM | **0.821** | 3.843 | 3.077 | **0.246** | 0.787 | 4.194 | 3.304 | 0.269 | **0.768** | 4.387 | 3.442 | **0.281** |
| EDCDFm | 0.820 | **3.838** | **3.071** | **0.246** | **0.788** | **4.189** | **3.301** | **0.268** | **0.768** | **4.384** | **3.437** | **0.281** |
| **51709 Kashgar** | | | | | | | | | | | | |
| NWP | 0.795 | 5.016 | 3.822 | 0.327 | 0.772 | 5.214 | 3.955 | 0.340 | 0.757 | 5.378 | 4.080 | 0.351 |
| QM | 0.816 | 3.460 | **2.633** | 0.217 | 0.792 | 3.707 | **2.798** | 0.233 | 0.776 | 3.862 | **2.943** | 0.243 |
| EDCDFm | **0.820** | **3.437** | **2.633** | **0.216** | **0.795** | **3.699** | 2.815 | **0.232** | **0.780** | **3.841** | 2.950 | **0.241** |
| **51777 Ruoqiang** | | | | | | | | | | | | |
| NWP | 0.753 | 4.547 | 3.102 | 0.280 | 0.726 | 4.859 | 3.312 | 0.299 | 0.697 | 5.156 | 3.478 | 0.317 |
| QM | 0.754 | 3.708 | 2.553 | 0.224 | 0.713 | 4.002 | 2.762 | 0.241 | 0.681 | 4.257 | 2.920 | 0.257 |
| EDCDFm | **0.758** | **3.632** | **2.499** | **0.219** | **0.719** | **3.912** | **2.709** | **0.236** | **0.688** | **4.138** | **2.864** | **0.250** |
| **51828 Khotan** | | | | | | | | | | | | |
| NWP | 0.701 | 4.822 | 3.398 | 0.296 | 0.668 | 5.127 | 3.628 | 0.315 | 0.650 | 5.354 | 3.788 | 0.329 |
| QM | 0.720 | 3.674 | 2.750 | 0.219 | 0.665 | 4.057 | 3.048 | 0.241 | 0.649 | 4.178 | **3.143** | 0.249 |
| EDCDFm | **0.721** | **3.637** | **2.733** | **0.216** | **0.669** | **4.012** | **3.044** | **0.239** | **0.652** | **4.145** | 3.151 | **0.247** |

Note: RMSE and MAE are in MJ m−2 d−1; the value in bold is the best statistical indicator among the different methods at each station and lead time. The same applies to the tables below.
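For reference, the four indicators reported in Tables 3 and 4 can be computed as below. Treating R² as the coefficient of determination and normalizing RMSE by the observed mean are assumptions about the paper's exact definitions.

```python
import numpy as np

def indicators(obs, pred):
    """R2, RMSE, MAE, and NRMSE for paired observed/forecast series."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    err = pred - obs
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    r2 = 1.0 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)
    return r2, rmse, mae, rmse / obs.mean()
```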
Table 4. Statistical indicators of solar-radiation forecasts by the different machine-learning models.

| Model | R² (1 d) | RMSE (1 d) | MAE (1 d) | NRMSE (1 d) | R² (2 d) | RMSE (2 d) | MAE (2 d) | NRMSE (2 d) | R² (3 d) | RMSE (3 d) | MAE (3 d) | NRMSE (3 d) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| **51076 Altay** | | | | | | | | | | | | |
| LSTM | 0.813 | 3.889 | 3.086 | 0.202 | 0.798 | 4.178 | 3.314 | 0.216 | 0.787 | 4.258 | 3.168 | 0.207 |
| PSO-SVM | 0.817 | 3.875 | 2.988 | 0.191 | 0.792 | 4.116 | 3.181 | 0.204 | 0.773 | 4.292 | 3.319 | 0.213 |
| BA-SVM | 0.837 | 3.627 | 2.854 | 0.183 | 0.811 | 3.910 | 3.032 | 0.194 | 0.793 | 4.091 | 3.174 | 0.203 |
| PSO-XGBoost | 0.816 | 3.917 | 3.118 | 0.200 | 0.790 | 4.178 | 3.280 | 0.210 | 0.773 | 4.330 | 3.403 | 0.218 |
| BA-XGBoost | 0.833 | 3.685 | 2.893 | 0.185 | 0.803 | 4.005 | 3.114 | 0.199 | 0.786 | 4.171 | 3.243 | 0.208 |
| PSO-KNEA | 0.826 | 3.723 | 2.903 | 0.186 | 0.794 | 4.053 | 3.088 | 0.198 | 0.770 | 4.281 | 3.268 | 0.209 |
| BA-KNEA | **0.844** | **3.552** | **2.785** | **0.178** | **0.819** | **3.839** | **2.980** | **0.191** | **0.803** | **4.002** | **3.105** | **0.199** |
| **51709 Kashgar** | | | | | | | | | | | | |
| LSTM | 0.834 | 3.485 | 2.810 | 0.177 | 0.808 | 3.735 | 2.750 | 0.173 | 0.799 | 3.908 | 3.033 | 0.191 |
| PSO-SVM | 0.838 | 3.436 | 2.641 | 0.166 | 0.809 | 3.735 | 2.863 | 0.180 | 0.789 | 3.824 | 2.886 | 0.181 |
| BA-SVM | 0.861 | 3.380 | 2.707 | 0.170 | **0.838** | 3.596 | 2.854 | 0.179 | 0.799 | 3.923 | 3.136 | 0.197 |
| PSO-XGBoost | 0.840 | 3.445 | 2.700 | 0.170 | 0.811 | 3.754 | 2.933 | 0.184 | 0.800 | 3.808 | 2.982 | 0.187 |
| BA-XGBoost | 0.845 | 3.345 | 2.550 | 0.160 | 0.819 | 3.661 | 2.775 | 0.174 | 0.808 | 3.677 | 2.796 | 0.176 |
| PSO-KNEA | 0.841 | 3.231 | 2.438 | 0.153 | 0.824 | 3.450 | **2.629** | **0.165** | 0.801 | 3.618 | 2.748 | 0.173 |
| BA-KNEA | **0.869** | **3.056** | **2.370** | **0.149** | 0.837 | **3.434** | 2.654 | 0.167 | **0.834** | **3.487** | **2.733** | **0.172** |
| **51777 Ruoqiang** | | | | | | | | | | | | |
| LSTM | 0.784 | 3.401 | 2.431 | 0.147 | 0.740 | 3.821 | 2.547 | 0.159 | 0.719 | 3.852 | 2.749 | 0.168 |
| PSO-SVM | 0.796 | 3.313 | 2.331 | 0.141 | 0.760 | 3.603 | 2.528 | 0.153 | 0.732 | 3.796 | 2.711 | 0.164 |
| BA-SVM | 0.803 | 3.266 | 2.296 | 0.139 | 0.764 | 3.592 | 2.542 | 0.153 | 0.733 | 3.811 | 2.693 | 0.163 |
| PSO-XGBoost | 0.787 | 3.423 | 2.429 | 0.147 | 0.750 | 3.688 | 2.614 | 0.158 | 0.731 | 3.822 | 2.746 | 0.166 |
| BA-XGBoost | 0.796 | 3.319 | 2.304 | 0.139 | 0.753 | 3.639 | 2.542 | 0.153 | 0.721 | 3.853 | 2.728 | 0.165 |
| PSO-KNEA | 0.785 | 3.552 | 2.410 | 0.145 | 0.739 | 3.860 | 2.600 | 0.157 | 0.717 | 4.069 | 2.736 | 0.165 |
| BA-KNEA | **0.819** | **3.123** | **2.196** | **0.133** | **0.791** | **3.354** | **2.387** | **0.144** | **0.752** | **3.674** | **2.624** | **0.158** |
| **51828 Khotan** | | | | | | | | | | | | |
| LSTM | 0.762 | 3.331 | 2.619 | 0.155 | 0.717 | 3.739 | 2.740 | 0.161 | 0.696 | 3.873 | 2.822 | 0.166 |
| PSO-SVM | 0.752 | 3.459 | 2.665 | 0.159 | 0.710 | 3.731 | 2.815 | 0.167 | 0.697 | 3.810 | 2.883 | 0.172 |
| BA-SVM | 0.771 | 3.384 | 2.664 | 0.159 | 0.737 | 3.755 | 2.968 | 0.177 | 0.704 | 3.969 | 3.116 | 0.185 |
| PSO-XGBoost | 0.755 | 3.320 | 2.621 | 0.151 | 0.723 | 3.885 | 3.003 | 0.179 | 0.703 | 3.991 | 3.197 | 0.189 |
| BA-XGBoost | 0.754 | 3.370 | 2.678 | 0.157 | 0.734 | 3.689 | 2.847 | 0.167 | 0.722 | 3.788 | 2.895 | 0.175 |
| PSO-KNEA | 0.743 | 3.506 | 2.587 | 0.154 | 0.689 | 3.834 | 2.800 | 0.167 | 0.671 | 3.929 | 2.899 | 0.172 |
| BA-KNEA | **0.783** | **3.227** | **2.509** | **0.149** | **0.754** | **3.483** | **2.676** | **0.159** | **0.737** | **3.576** | **2.732** | **0.163** |
Table 5. Rank of the empirical-statistical and machine-learning models at each forecast lead time.

| Model | 1 d | 2 d | 3 d |
|---|---|---|---|
| GEFS raw | 10 | 10 | 10 |
| QM | 9 | 9 | 9 |
| EDCDFm | 8 | 8 | 8 |
| LSTM | 6 | 6 | 6 |
| PSO-SVM | 5 | 5 | 5 |
| BA-SVM | 2 | 2 | 3 |
| PSO-XGBoost | 7 | 7 | 7 |
| BA-XGBoost | 3 | 3 | 2 |
| PSO-KNEA | 4 | 4 | 4 |
| BA-KNEA | 1 | 1 | 1 |
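An ordering like Table 5's can be approximated by ranking the ten approaches on station-averaged error. The sketch below ranks by the 1 d RMSE averaged over the four stations (values computed from Tables 3 and 4); this single-metric ranking matches Table 5 closely but not exactly (it swaps the nearly tied LSTM and PSO-XGBoost), which suggests the published ranks aggregate all four indicators.

```python
import pandas as pd

# 1 d RMSE averaged over the four stations (from Tables 3 and 4), MJ m-2 d-1.
rmse_1d = pd.Series({
    "GEFS raw": 4.581, "QM": 3.671, "EDCDFm": 3.636, "LSTM": 3.527,
    "PSO-SVM": 3.521, "BA-SVM": 3.414, "PSO-XGBoost": 3.526,
    "BA-XGBoost": 3.430, "PSO-KNEA": 3.503, "BA-KNEA": 3.240,
})
print(rmse_1d.rank(method="min").astype(int).sort_values())
```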
Table 6. Statistical indicators of the BA-KNEA model under different input combinations.

| ID | Input | R² (1 d) | RMSE (1 d) | MAE (1 d) | NRMSE (1 d) | R² (2 d) | RMSE (2 d) | MAE (2 d) | NRMSE (2 d) | R² (3 d) | RMSE (3 d) | MAE (3 d) | NRMSE (3 d) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| **51076 Altay** | | | | | | | | | | | | | |
| 1 | Rsf | 0.824 | 3.778 | 3.019 | 0.193 | 0.789 | 4.139 | 3.230 | 0.207 | 0.771 | 4.312 | 3.382 | 0.217 |
| 2 | Rsf, RHf | 0.828 | 3.741 | 2.978 | 0.191 | 0.799 | 4.051 | 3.164 | 0.203 | 0.781 | 4.226 | 3.305 | 0.212 |
| 3 | Rsf, Tmaxf, Tminf | 0.832 | 3.687 | 2.913 | 0.187 | 0.805 | 3.983 | 3.079 | 0.197 | 0.787 | 4.178 | 3.269 | 0.209 |
| 4 | Rsf, Uf | 0.835 | 3.651 | 2.862 | 0.183 | 0.808 | 3.943 | 3.057 | 0.196 | 0.792 | 4.097 | 3.169 | 0.203 |
| 5 | Tmaxf, Tminf, Ra | 0.723 | 4.705 | 3.766 | 0.241 | 0.721 | 4.757 | 3.737 | 0.239 | 0.712 | 4.812 | 3.799 | 0.243 |
| 6 | All | **0.844** | **3.552** | **2.785** | **0.178** | **0.819** | **3.839** | **2.980** | **0.191** | **0.803** | **4.002** | **3.105** | **0.199** |
| **51709 Kashgar** | | | | | | | | | | | | | |
| 1 | Rsf | 0.852 | 3.210 | 2.499 | 0.157 | 0.829 | 3.494 | 2.711 | 0.170 | 0.814 | 3.663 | 2.870 | 0.180 |
| 2 | Rsf, RHf | 0.859 | 3.230 | 2.551 | 0.160 | 0.840 | 3.456 | 2.705 | 0.170 | 0.823 | 3.632 | 2.869 | 0.180 |
| 3 | Rsf, Tmaxf, Tminf | 0.867 | 3.185 | 2.535 | 0.159 | **0.846** | **3.388** | **2.634** | **0.165** | 0.832 | 3.488 | 2.741 | 0.172 |
| 4 | Rsf, Uf | **0.870** | 3.223 | 2.550 | 0.160 | 0.841 | 3.464 | 2.701 | 0.170 | 0.826 | 3.502 | **2.705** | **0.170** |
| 5 | Tmaxf, Tminf, Ra | 0.796 | 3.958 | 3.090 | 0.194 | 0.785 | 3.809 | 2.930 | 0.184 | 0.776 | 3.838 | 2.954 | 0.186 |
| 6 | All | 0.869 | **3.056** | **2.370** | **0.149** | 0.837 | 3.434 | 2.654 | 0.167 | **0.834** | **3.487** | 2.733 | 0.172 |
| **51777 Ruoqiang** | | | | | | | | | | | | | |
| 1 | Rsf | 0.789 | 3.403 | 2.352 | 0.142 | 0.755 | 3.640 | 2.504 | 0.151 | 0.730 | 3.818 | 2.663 | 0.161 |
| 2 | Rsf, RHf | 0.798 | 3.302 | 2.286 | 0.138 | 0.767 | 3.527 | 2.479 | 0.150 | 0.741 | 3.732 | 2.639 | 0.159 |
| 3 | Rsf, Tmaxf, Tminf | 0.811 | 3.199 | 2.296 | 0.139 | 0.782 | 3.467 | 2.467 | 0.149 | **0.756** | **3.649** | **2.616** | **0.158** |
| 4 | Rsf, Uf | 0.814 | 3.222 | 2.245 | 0.135 | 0.774 | 3.511 | 2.445 | 0.148 | 0.740 | 3.746 | 2.649 | 0.160 |
| 5 | Tmaxf, Tminf, Ra | 0.745 | 3.764 | 2.792 | 0.168 | 0.724 | 3.875 | 2.871 | 0.173 | 0.702 | 4.035 | 2.960 | 0.179 |
| 6 | All | **0.819** | **3.123** | **2.196** | **0.133** | **0.791** | **3.354** | **2.387** | **0.144** | 0.752 | 3.674 | 2.624 | **0.158** |
| **51828 Khotan** | | | | | | | | | | | | | |
| 1 | Rsf | 0.747 | 3.440 | 2.607 | 0.155 | 0.694 | 3.787 | 2.827 | 0.168 | 0.671 | 3.948 | 2.998 | 0.178 |
| 2 | Rsf, RHf | 0.769 | 3.293 | 2.523 | 0.150 | 0.719 | 3.645 | 2.767 | 0.165 | 0.705 | 3.729 | 2.818 | 0.168 |
| 3 | Rsf, Tmaxf, Tminf | 0.782 | 3.236 | **2.500** | **0.149** | 0.751 | 3.564 | 2.771 | 0.165 | 0.731 | 3.678 | 2.833 | 0.169 |
| 4 | Rsf, Uf | 0.763 | 3.337 | 2.504 | **0.149** | 0.725 | 3.643 | 2.786 | 0.166 | 0.708 | 3.765 | 2.867 | 0.171 |
| 5 | Tmaxf, Tminf, Ra | 0.730 | 3.602 | 2.775 | 0.165 | 0.716 | 3.718 | 2.857 | 0.170 | 0.697 | 3.823 | 2.919 | 0.174 |
| 6 | All | **0.783** | **3.227** | 2.509 | **0.149** | **0.754** | **3.483** | **2.676** | **0.159** | **0.737** | **3.576** | **2.732** | **0.163** |
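The six scenarios of Table 6 amount to refitting the same model on different predictor subsets. A minimal sketch of that experiment is given below on synthetic data. Because KNEA has no widely available implementation, kernel ridge regression is used here purely as a kernel-method stand-in, and combination 6 ("All") is assumed to be the union of the listed predictors.

```python
import numpy as np
import pandas as pd
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Hypothetical GEFS-derived predictors and observed Rs (synthetic stand-ins).
n = 500
df = pd.DataFrame({
    "Rsf": rng.uniform(5, 30, n),      # forecast Rs
    "RHf": rng.uniform(10, 90, n),     # forecast relative humidity
    "Tmaxf": rng.uniform(5, 40, n),    # forecast max temperature
    "Tminf": rng.uniform(-10, 20, n),  # forecast min temperature
    "Uf": rng.uniform(0, 10, n),       # forecast wind speed
    "Ra": rng.uniform(15, 45, n),      # extraterrestrial radiation
})
df["Rs_obs"] = 0.8 * df["Rsf"] + 0.05 * df["Ra"] + rng.normal(0, 1.5, n)

combos = {                             # the six input sets of Table 6
    1: ["Rsf"],
    2: ["Rsf", "RHf"],
    3: ["Rsf", "Tmaxf", "Tminf"],
    4: ["Rsf", "Uf"],
    5: ["Tmaxf", "Tminf", "Ra"],
    6: ["Rsf", "RHf", "Tmaxf", "Tminf", "Uf", "Ra"],
}

for cid, cols in combos.items():
    X_tr, X_te, y_tr, y_te = train_test_split(
        df[cols], df["Rs_obs"], test_size=0.3, random_state=1)
    model = KernelRidge(kernel="rbf", alpha=1.0).fit(X_tr, y_tr)  # KNEA proxy
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(f"combination {cid}: RMSE = {rmse:.3f}")
```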
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
