Article

Ensemble Interval Prediction for Solar Photovoltaic Power Generation

School of Mathematical Sciences, Capital Normal University, Beijing 100048, China
*
Author to whom correspondence should be addressed.
Energies 2022, 15(19), 7193; https://doi.org/10.3390/en15197193
Submission received: 5 September 2022 / Revised: 23 September 2022 / Accepted: 26 September 2022 / Published: 29 September 2022

Abstract

In recent years, solar photovoltaic power generation has emerged as an essential means of energy supply. Predicting its active power is not only conducive to cost saving but can also promote the development of the solar power generation industry. However, obtaining an accurate, high-quality interval prediction of active power is challenging. Based on a dataset from the Desert Knowledge Australia Solar Centre (DKASC) in Australia, we first compare twelve interval prediction methods based on machine learning. Second, six ensemble methods, namely Ensemble-Mean, Ensemble-Median (Ensemble-Med), Ensemble-Envelope (Ensemble-En), Ensemble-Probability averaging of endpoints and simple averaging of midpoints (Ensemble-PM), Ensemble-Exterior trimming (Ensemble-TE), and Ensemble-Interior trimming (Ensemble-TI), are used to combine the forecast intervals. The results indicate that Ensemble-TE is the best method. On 15-min level data, Ensemble-TE achieves prediction interval coverage probabilities of 0.960, 0.920, 0.873, and 0.824 for confidence levels of 95%, 90%, 85%, and 80%, respectively, with narrower prediction interval normalized averaged widths of 0.066, 0.045, 0.035, and 0.028 at the same confidence levels. In addition, a higher Winkler score and a smaller coverage width-based criterion are obtained, representing high-quality intervals, and the mean prediction interval center deviation is small, approximately 0.044. These results demonstrate that the proposed approach obtains prediction intervals with better performance than existing methods.

1. Introduction

1.1. Motivation and Incitement

With the depletion of traditional non-renewable energy sources and the increasingly serious problems of environmental pollution and climate warming, it is vital to find suitable renewable energy sources, promote the low-carbon economy, and achieve carbon neutrality as soon as possible. Solar photovoltaic power generation, which uses clean energy, is a good way to promote the sustainable development of energy [1]. As a new energy source, solar energy is abundant, widely distributed, and environmentally friendly. Therefore, solar photovoltaic power generation has been widely recognized all over the world and occupies an essential position in the new energy area [2]. The penetration of solar energy in power systems around the world has also increased [3]. However, it is limited by certain physical factors, such as daylight change, local climate, and geographical location [4]. Meanwhile, active power is volatile and uncertain [5], which not only negatively influences the operation and maintenance of solar photovoltaic power generation, but also brings great challenges to its prediction. Overestimating active power increases the cost of the power supply system, while underestimating it leads to insufficient power supply for society, thus affecting people's normal lives. Therefore, it is extremely necessary to find an effective and accurate power prediction method for solar photovoltaic power generation.

1.2. Literature Review and Research Gaps

The prediction of solar photovoltaic power generation mainly consists of two parts: point prediction and interval prediction, also known as deterministic prediction and probabilistic prediction, whose results take different forms. Specifically, point prediction uses multiple variables to train a regression algorithm and obtain a series of single-point prediction values. Many studies have investigated point prediction in solar photovoltaic power generation. Wang et al. (2018) [6] used the gated recurrent unit (GRU) to predict photovoltaic power generation. Benali et al. (2019) [7] utilized random forest (RF) and artificial neural network (ANN) models to predict and compare solar power output. Dash et al. (2021) [8] used the empirical wavelet transform (EWT) and a robust minimum variance random vector functional link network (RRVFLN) for short-term solar power forecasting. Elsaraiti et al. (2022) [9] demonstrated that the long short-term memory (LSTM) network can provide reliable information for photovoltaic power forecasting. Shedbalkar et al. (2022) [10] proposed using Bayesian linear regression to solve the solar power generation forecasting problem. Elizabeth et al. (2022) [11] proposed a multi-step convolutional neural network (CNN) stacked LSTM technique to predict short-term solar power. In addition, there are dozens of related machine learning methods for point prediction of solar photovoltaic power generation [4,5]. However, single-point prediction obviously cannot meet researchers' requirements for the reliability of power generation prediction; the uncertainty and error range of the predicted value are considered essential as well. Therefore, it is important to study and develop interval prediction of solar photovoltaic power generation.
In contrast, interval prediction possesses more research value, since it provides uncertainty information that quantifies the extent to which we can trust the model's predictions. The purpose of interval prediction is to obtain a forecast interval C for the real value y at a given confidence level α such that:
P(y ∈ C) ≥ α.
It can not only quantify the uncertainty of the predicted value and provide more accurate estimates, but also help the relevant staff to regulate and plan the entire solar photovoltaic power generation system. In recent years, many related studies have been reported. Almeida et al. (2015) [12] used quantile regression forests (QRF) to construct prediction intervals. Ni et al. (2017) [13] proposed a method based on the extreme learning machine (ELM) and lower upper bound estimation (LUBE) to construct a reliable solar energy prediction interval. Huang et al. (2017) [14] used the k-nearest neighbor to obtain the interval of solar energy prediction. Pan et al. (2021) [15] used attention mechanism-gated recurrent unit-kernel density estimation (A-GRU-KDE) to predict and estimate the interval of solar power. Wang et al. (2021) [16] combined LSTM and Gaussian process regression (GPR) to obtain a reliable interval estimation in solar power generation. Ramkumar et al. (2021) [17] proposed a model of solar photovoltaic power interval forecast based on an online sequential extreme learning machine with a forgetting mechanism (FOS-ELM) algorithm. Li et al. (2022) [18] proposed a method for interval forecasting day-ahead solar power generation based on extreme gradient boosting (XGBoost) and KDE. Chen et al. (2022) [19] proposed a method using density peak clustering improved by the kernel Mahalanobis distance (KMDDPC) combined with the multivariate kernel density estimation (MKDE) method to obtain prediction intervals in four seasons. The methods mentioned above have their respective advantages, and their prediction accuracy also differs. To combine the advantages of a variety of methods and further improve prediction accuracy, the ensemble method is proposed, which also has the potential to obtain prediction intervals of higher quality.
The multi-model ensemble method, i.e., the model combination method, takes the weighted average of estimates or forecasts from multiple distinct models with suitable weights [20], which not only fully utilizes the results of each method but also yields more stable results [21]. Zhang et al. (2011) [20] chose a variety of weight selection criteria, such as the smoothed-Akaike information criterion (S-AIC), the smoothed-Bayesian information criterion (S-BIC), and an optimal weight selection method, to average models. Zhang et al. (2011) [22] used the focused information criterion (FIC) and frequentist model average (FMA) in generalized additive partial linear models (GAPLMs) to study model selection and model averaging. Gaba et al. (2017) [23] proposed the probability averaging of endpoints and simple averaging of midpoints (PM) to combine interval prediction results. These studies reveal that the ensemble method can further improve on the results of each individual method.

1.3. Major Contribution and Organization

This study proposes an ensemble interval prediction for solar photovoltaic power generation that obtains prediction intervals with higher quality than other methods. The main contributions of this paper can be stated as follows:
1.
Twelve recent interval prediction methods with good reported performance are compared side by side in this paper in order to understand their advantages and applicability and to use them effectively. To the best of our knowledge, the existing literature does not provide such a horizontal comparison of these methods.
2.
Six ensemble methods are used to combine the prediction intervals produced by a subset of the aforementioned methods, exploiting the advantages of those methods to obtain more reliable and stable interval prediction results.
3.
Compared with previous prediction results, the ensemble (combination) method of intervals described in this study further improves the quality of interval prediction: the prediction interval coverage probability (PICP) under different confidence levels is close to the given confidence level; the prediction interval normalized averaged width (PINAW) is smaller, i.e., the standardized average interval width is narrower and more accurate; the coverage width-based criterion (CWC) is smaller and the Winkler score higher, meaning the intervals have higher overall quality; and the mean prediction interval center deviation (MPICD) is smaller, i.e., the actual value is closer to the midpoint of the prediction interval.
The rest of this paper is organized as follows. Section 2 provides a detailed description of the methods of point prediction, methods of interval prediction, and ensemble methods used in this study. In Section 3, the data sources, data preprocessing, and performance evaluation indexes used in this study are presented. In Section 4, the experimental results of point prediction, interval prediction, and interval prediction using ensemble method are described and compared, and the best-combined interval forecasting method is identified. Finally, the main conclusions and contributions of this paper along with future research prospects are described in Section 5.

2. Methodology

This section includes three parts: point prediction, used to obtain the deterministic prediction of active power; interval prediction, used to obtain the probabilistic prediction of active power; and the ensemble method of interval prediction, used to combine the constructed intervals into higher-quality intervals. The flow chart describing the three kinds of methods is illustrated in Figure 1.

2.1. Point Prediction

2.1.1. Random Forest (RF)

RF is an ensemble algorithm that was first proposed by Breiman [24]. Its base learner is CART, and the ensemble method is bagging [25].
The objective of bagging is to use the bootstrap [26] method to resample the original training samples into a large number of sample subsets. The average or mode of the fitted values is then computed, averaging models with high variance and small bias so as to reduce the variance of the ensemble. Based on the binary decision tree, the CART algorithm recursively bisects each input feature to divide the input space into a finite number of regions in which predictions are made; the algorithm includes tree generation and pruning.
RF combines the ideas of CART and bagging, and then introduces random feature selection to decorrelate the trees, improving on bagging. RF draws bootstrap resamples rather than using all of the original training samples. In addition, it randomly selects a subset of the features at each split point of each decision tree. Generally, the number of selected features k is approximately the square root of the total number of features p [27], i.e., k ≈ √p. Compared with a single regression tree, RF can avoid the negative influence of noise, outliers, and overfitting and achieve more accurate and robust results.
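As a minimal sketch (not the paper's exact configuration), the k ≈ √p feature subsampling described above corresponds to scikit-learn's `max_features="sqrt"` option; the data below are synthetic and purely illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.random((200, 9))                    # 9 illustrative input features
y = 3 * X[:, 0] + rng.normal(0, 0.1, 200)   # synthetic "active power"

# max_features="sqrt" selects k ~ sqrt(p) candidate features at each split
rf = RandomForestRegressor(n_estimators=100, max_features="sqrt", random_state=0)
rf.fit(X, y)
y_hat = rf.predict(X[:5])                   # point predictions for 5 samples
```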

2.1.2. Gated Recurrent Unit (GRU)

GRU [28] is a special kind of recurrent neural network (RNN) [29], with a basic structure similar to that of an RNN. More specifically, it uses the current input x_t and the previous hidden state h_{t−1} to obtain the current output y_t and the current hidden state h_t.
Unlike the LSTM network, which also belongs to the RNN family and has three gates, GRU has only two gates: an update gate and a reset gate. The network structure of GRU is depicted in Figure 2.
In Figure 2, the reset gate r_t controls and filters the extent to which the state information of the previous time step is brought into the candidate state. The larger the value of r_t, the more state information of the previous time step is retained. The expression of r_t is as follows:
r_t = σ(W_r [h_{t−1}, x_t]),
where σ is the sigmoid function and W represents the corresponding parameters to be trained (similarly hereinafter). The update gate z_t controls how much of the candidate state h̃_t is written into the new hidden state; the smaller the value of z_t, the more of the previous hidden state is preserved. z_t and h̃_t are expressed as follows:
z_t = σ(W_z [h_{t−1}, x_t]),
h̃_t = tanh(W_h̃ [r_t ∗ h_{t−1}, x_t]),
where ∗ is the Hadamard product. The new hidden state h_t and the current output y_t are given by:
h_t = (1 − z_t) ∗ h_{t−1} + z_t ∗ h̃_t,
y_t = σ(W_o h_t).
In general, both LSTM and GRU filter and retain important features and information through their gate functions. Both can maintain long-term memory and alleviate the vanishing-gradient problem, while GRU has one gate function fewer than LSTM, which leads to a more compact and simpler structure [30]. Therefore, we chose GRU for point prediction.
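The gate equations above can be checked with a single GRU step in NumPy; the weight shapes and random initialization below are illustrative only, not a trained model.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x_t, h_prev, W_r, W_z, W_h):
    """One GRU update; [h, x] denotes concatenation, * the Hadamard product."""
    hx = np.concatenate([h_prev, x_t])
    r_t = sigmoid(W_r @ hx)                       # reset gate
    z_t = sigmoid(W_z @ hx)                       # update gate
    h_cand = np.tanh(W_h @ np.concatenate([r_t * h_prev, x_t]))
    return (1 - z_t) * h_prev + z_t * h_cand      # new hidden state

rng = np.random.default_rng(0)
d_in, d_h = 4, 8                                  # illustrative dimensions
W_r, W_z, W_h = (rng.normal(0, 0.1, (d_h, d_h + d_in)) for _ in range(3))
h = gru_step(rng.normal(size=d_in), np.zeros(d_h), W_r, W_z, W_h)
```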

2.1.3. Gradient Boosting Regression Tree (GBRT)

Similar to RF, GBRT [31], with CART as its base learner, is also an ensemble learning algorithm. Nonetheless, the ensemble method used in GBRT is different and is known as boosting. Unlike bagging, boosting does not require resampling but generates multiple trees sequentially based on modified values of the original data. Each tree learns the residuals between the results of all previous trees and the real values, gradually reducing them. GBRT provides accurate prediction and strong generalization ability without imposing high requirements on the data. In this study, the mean and median of active power are taken as the prediction targets, and the mean squared error (MSE) and pinball loss (also known as weighted absolute deviations [32]) are selected as the scoring criteria to choose hyperparameters and then predict the active power.
In addition to the methods described above, ridge regression [33] and NGB are also used in this study for point prediction. Particularly, NGB will be introduced in Section 2.2, since it can output the results of point and interval prediction simultaneously.

2.2. Interval Prediction

2.2.1. Kernel Density Estimation (KDE)

KDE [34] is a commonly used nonparametric method that uses adjacent samples to estimate the density function at a certain point, and then estimates the probability density of the whole sample. The expression of KDE estimation is as follows:
f̂_h(x) = (1/(nh)) Σ_{i=1}^{n} K((x − x_i)/h),
where K ( x ) represents kernel function and h represents bandwidth. In this study, the Gaussian kernel is selected for KDE, and its expression is as follows:
K(x) = (1/√(2π)) e^{−x²/2}.
The commonly used methods to determine the optimal bandwidth include cross-validation and the adaptive-bandwidth rule of thumb. In this study, we use 5-fold cross-validation to select the bandwidth and thus determine the KDE estimate of the sample probability density function. Next, we obtain the KDE estimate of the cumulative distribution function, F̂_h(x) = ∫_{−∞}^{x} f̂_h(t) dt = (1/(nh)) Σ_{i=1}^{n} ∫_{−∞}^{x} K((t − x_i)/h) dt, and invert it to obtain the pth quantile Q̂_p(h), which can be expressed as follows:
Q̂_p(h) = sup{x : F̂_h(x) ≤ p}.
Finally, for new samples, given the confidence level α , the constructed prediction interval C is described as follows:
C = [Q̂_{(1−α)/2}(h), Q̂_{(1+α)/2}(h)].
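The construction above can be sketched as follows: for the Gaussian kernel, the CDF estimate F̂_h is a mean of Normal CDFs, which we evaluate on a grid and invert numerically to read off the two quantiles. The fixed bandwidth h here is an illustrative stand-in for the paper's 5-fold cross-validated choice.

```python
import numpy as np
from scipy.stats import norm

def kde_cdf(x, sample, h):
    # F_h(x) = (1/n) * sum_i Phi((x - x_i) / h) for the Gaussian kernel
    return norm.cdf((x - sample[:, None]) / h).mean(axis=0)

def kde_interval(sample, alpha, h):
    grid = np.linspace(sample.min() - 3 * h, sample.max() + 3 * h, 2000)
    F = kde_cdf(grid, sample, h)
    lo = grid[np.searchsorted(F, (1 - alpha) / 2)]   # lower quantile
    hi = grid[np.searchsorted(F, (1 + alpha) / 2)]   # upper quantile
    return lo, hi

rng = np.random.default_rng(0)
lo, hi = kde_interval(rng.normal(0, 1, 500), alpha=0.9, h=0.3)
```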

2.2.2. Natural Gradient Boosting (NGB)

NGB is a gradient boosting regression method that predicts the conditional probability distribution of the target variable to construct the prediction interval. Unlike general gradient boosting, NGB performs its updates on the statistical manifold where the probability distribution is located (see [35] for the definition) and applies to multi-parameter distributions. More specifically, it comprises base learners f^(k), a parameterized conditional probability distribution P_θ(y|x), and an appropriate scoring rule S. Scoring rules calibrate the difference between model outputs and real values. If a scoring rule satisfies the following formula:
E_{y∼Q}[S(Q, y)] ≤ E_{y∼Q}[S(P, y)],
where P is the predicted distribution and Q is the true distribution, then it is called an appropriate scoring rule. Maximum likelihood estimation (MLE) is one of the common scoring rules, and it induces the Kullback–Leibler divergence. Choosing MLE as the scoring rule, the gradient with respect to the prediction parameters θ of the conditional probability density function P_θ(y|x) is denoted ∇_θ L(θ, y), and the natural gradient at each step is denoted g_i^k, obtained by solving the corresponding optimization problem. The expression of g_i^k is as follows:
g_i^k = I_L(θ_i^{k−1})^{−1} ∇_θ L(θ_i^{k−1}, y),
where I L ( θ ) is the Fisher information brought by the observed value of P θ , which can be written as:
I_L(θ) = E_{y∼P_θ}[∇_θ L(θ, y) ∇_θ L(θ, y)^T].
In the process of model learning, a group of base learners is trained using the natural gradient and the input values at each stage. In this study, the normal distribution is used; therefore, the parameters are the mean μ and standard deviation σ, and the corresponding base learners are f_μ^(k) and f_σ^(k). The parameter θ is then updated with scaling as follows:
θ = θ^(0) − η Σ_{k=1}^{n} ρ^(k) f^(k)(x),
where ρ^(k) and f^(k)(x) are the scaling coefficient and the base learner of each stage, respectively, and η is the learning rate. Finally, we obtain the two fitted parameter sets μ̂ and σ̂ of the normal distribution; that is, given a new sample x, y|x ∼ N(μ̂(x), σ̂(x)). Then, under confidence level α, the corresponding quantiles Φ^{−1}((1 − α)/2) and Φ^{−1}((1 + α)/2) are calculated and the prediction interval C is constructed as follows:
C = [μ̂(x) + σ̂(x) Φ^{−1}((1 − α)/2), μ̂(x) + σ̂(x) Φ^{−1}((1 + α)/2)].
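Given fitted per-sample parameters μ̂(x) and σ̂(x) (as NGB produces), the interval above is just two Normal quantiles. The numbers below are illustrative placeholders, not actual NGB output.

```python
import numpy as np
from scipy.stats import norm

def normal_interval(mu_hat, sigma_hat, alpha):
    # C = [mu + sigma * Phi^{-1}((1-alpha)/2), mu + sigma * Phi^{-1}((1+alpha)/2)]
    lo = mu_hat + sigma_hat * norm.ppf((1 - alpha) / 2)
    hi = mu_hat + sigma_hat * norm.ppf((1 + alpha) / 2)
    return lo, hi

# placeholder fitted parameters for one sample: mu_hat = 10, sigma_hat = 2
lo, hi = normal_interval(np.array([10.0]), np.array([2.0]), alpha=0.95)
```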

2.2.3. Jackknife+-after-bootstrap (J+ab)

J+ab [36] is a method for constructing prediction intervals that combines the bootstrap with the jackknife+ [37] method, itself a modified version of the jackknife. To obtain the prediction interval using J+ab, the training dataset is first resampled with the bootstrap. Then, the regression model μ̂ is trained on each resample, and the predicted values are aggregated; the aggregation used in this paper is the mean. Next, we calculate the absolute leave-one-out residual of each sample, R_i = |y_i − μ̂_{−i}(x_i)|, where μ̂_{−i}(x_i) is the aggregated prediction of the models not trained on sample i, i.e., of the models fitted on bootstrap resamples that do not contain sample i. Finally, for a new sample, the prediction interval C is constructed using R_i. The expression of C is as follows:
C = [q̂⁻_α(μ̂_{−i}(x) − R_i), q̂⁺_α(μ̂_{−i}(x) + R_i)],
where q̂⁻_α(·) and q̂⁺_α(·) denote the lower and upper empirical quantiles at confidence level α, respectively.
J+ab can be combined with any regression method, since it places no requirement on the underlying algorithm. Therefore, we combine J+ab with ridge regression, a multi-layer perceptron (MLP), and RF for interval prediction, respectively.
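A compact sketch of the procedure above with ridge regression as the base learner and mean aggregation; the quantile levels follow the jackknife+ convention translated to this paper's notation (α is the confidence level), and all data are synthetic.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n, B, alpha = 200, 30, 0.9
X = rng.random((n, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.1, n)
x_new = rng.random(3)

models, masks = [], []
for _ in range(B):
    idx = rng.integers(0, n, n)                     # bootstrap resample
    models.append(Ridge().fit(X[idx], y[idx]))
    masks.append(np.isin(np.arange(n), idx))        # True if sample i was drawn

all_preds = np.stack([m.predict(X) for m in models])             # (B, n)
new_preds = np.array([m.predict(x_new[None])[0] for m in models])

R, agg_new = [], []
for i in range(n):
    out = [b for b in range(B) if not masks[b][i]]  # models that never saw i
    if not out:
        continue
    R.append(abs(y[i] - all_preds[out, i].mean()))  # leave-one-out residual
    agg_new.append(new_preds[out].mean())

R, agg_new = np.array(R), np.array(agg_new)
lo = np.quantile(agg_new - R, 1 - alpha)            # lower endpoint
hi = np.quantile(agg_new + R, alpha)                # upper endpoint
```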

2.2.4. Random Forest-Out-Of-Bag (RF-OOB)

Based on RF, RF-OOB [38] is a method that uses out-of-bag (OOB) [39] samples to compute the OOB prediction errors and then uses their empirical distribution to establish the prediction interval. After fitting the RF model, we use the OOB samples corresponding to each point to compute the prediction error D_i = y_i − ŷ_i, i = 1, 2, …, n, where ŷ_i is the OOB prediction for sample i. The original samples and the prediction errors are both independent and identically distributed. Therefore, we can form the empirical distribution D of the prediction errors and then construct the prediction interval C for a new sample y. The expression of C is as follows:
C = [ŷ + D_{[n, (1−α)/2]}, ŷ + D_{[n, (1+α)/2]}],
where α is the confidence level, D_{[n, p]} is the pth quantile of D, and ŷ is the predicted value. Thus, we obtain the prediction interval of RF-OOB at a given confidence level.
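The construction above maps directly onto scikit-learn's out-of-bag machinery (`oob_score=True` exposes `oob_prediction_`); the data and settings below are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.random((300, 4))
y = 4 * X[:, 0] + rng.normal(0, 0.2, 300)

rf = RandomForestRegressor(n_estimators=200, oob_score=True, random_state=0)
rf.fit(X, y)
D = y - rf.oob_prediction_              # OOB prediction errors D_i = y_i - y_hat_i

alpha = 0.9
y_new = rf.predict(X[:1])[0]            # point forecast for a "new" sample
lo = y_new + np.quantile(D, (1 - alpha) / 2)
hi = y_new + np.quantile(D, (1 + alpha) / 2)
```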

2.2.5. Split Conformal-Random Forest (SC-RF)

SC-RF is a combination of split conformal (SC) [40] and RF. It uses split samples to separate the model fitting part from the subsequent sorting part and has no requirements for the underlying method. It can be combined with any regression method for prediction, and the computational cost is far less than the full conformal prediction [41].
SC-RF first divides the training samples into two parts, S_1 and S_2, with the same sample size. S_1 is used as the training set to build the RF model μ̂, and S_2 is used to generate the predictions of the RF model. Then, the absolute residuals R_i = |y_i − μ̂(x_i)|, i ∈ S_2, are obtained. Next, we take d to be the kth smallest value among the R_i, where k = ⌈(n/2 + 1)α⌉ and α is the confidence level. Finally, for a new sample x, we construct the prediction interval C, which can be expressed as follows:
C = [μ̂(x) − d, μ̂(x) + d].
It can be proven [40] that the prediction interval C satisfies α ≤ P(Y ∈ C) ≤ α + 2/(n + 2), which assures that the coverage of C reaches the given confidence level α.
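A minimal sketch of SC-RF on synthetic data: one half fits the forest, the other half supplies the kth smallest calibration residual d.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n, alpha = 400, 0.9
X = rng.random((n, 4))
y = 4 * X[:, 0] + rng.normal(0, 0.2, n)
S1, S2 = np.arange(0, n // 2), np.arange(n // 2, n)   # equal-size split

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[S1], y[S1])
R = np.abs(y[S2] - rf.predict(X[S2]))                 # calibration residuals
k = int(np.ceil((len(S2) + 1) * alpha))               # k-th smallest residual
d = np.sort(R)[min(k, len(S2)) - 1]

mu = rf.predict(X[:1])[0]
lo, hi = mu - d, mu + d                                # C = [mu - d, mu + d]
```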

2.2.6. Quantile Regression Forests (QRF)

Combined with RF and quantile regression [42], QRF [32] is a nonparametric conditional quantile method capable of predicting any quantile to establish a prediction interval. Compared with RF, which only retains the mean value of the observations in each leaf, QRF retains all values and evaluates the conditional distribution based on this information. The conditional distribution of the target variable given X = x is as follows:
F(y|X = x) = P(Y ≤ y|X = x) = E(I(Y ≤ y)|X = x),
where I(·) is the indicator function. Then, the weighted average of the observed values of I(Y ≤ y) is used to approximate E(I(Y ≤ y)|X = x) and obtain the estimate F̂ of the conditional distribution, whose expression is as follows:
F̂(y|X = x) = Σ_{i=1}^{n} w_i(x) I(Y_i ≤ y),
w_i(x) = (1/k) Σ_{t=1}^{k} w_i(x, θ_t),
w_i(x, θ_t) = I(X_i ∈ R_l(x, θ_t)) / #{j : X_j ∈ R_l(x, θ_t)},
where θ_t is the parameter of the tth tree, R_l(x, θ_t) is the leaf node containing x, #{j : X_j ∈ R_l(x, θ_t)} is the number of samples belonging to R_l(x, θ_t), w_i(x, θ_t) is the weight vector, whose entries sum to 1, and w_i(x) is the RF weight, i.e., the average of the weight vectors over the k decision trees. After that, the estimate of the pth quantile, Q̂_p(x), is obtained, whose expression is as follows:
Q̂_p(x) = sup{y : F̂(y|X = x) ≤ p}.
Finally, for the new sample x with the confidence level α , the constructed prediction interval C is as follows:
C = [Q̂_{(1−α)/2}(x), Q̂_{(1+α)/2}(x)].
QRF provides a complete conditional distribution of the target variable when x is given and is widely used in the field of machine learning to estimate quantiles and obtain prediction intervals.
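Since scikit-learn's RandomForestRegressor stores only leaf means, a naive version of the weighting in the equations above can be recovered from `apply()` (leaf membership). This is a simplified sketch on synthetic data, ignoring per-tree bootstrap resampling.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.random((300, 4))
y = 4 * X[:, 0] + rng.normal(0, 0.2, 300)
rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

leaves_train = rf.apply(X)                 # (n_samples, n_trees) leaf ids

def conditional_quantiles(x, ps):
    leaves_x = rf.apply(x[None])[0]        # leaf of x in each tree
    w = np.zeros(len(y))
    for t in range(leaves_train.shape[1]):
        in_leaf = leaves_train[:, t] == leaves_x[t]
        w[in_leaf] += 1.0 / in_leaf.sum()  # w_i(x, theta_t)
    w /= leaves_train.shape[1]             # average over trees -> w_i(x)
    order = np.argsort(y)
    F = np.cumsum(w[order])                # empirical conditional CDF
    return [y[order][np.searchsorted(F, p)] for p in ps]

alpha = 0.9
lo, hi = conditional_quantiles(X[0], [(1 - alpha) / 2, (1 + alpha) / 2])
```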

2.3. Ensemble Method of Interval Prediction

In this section, we introduce several ensemble methods for prediction intervals to combine a variety of constructed prediction intervals to obtain better prediction intervals. Specifically, six ensemble methods are introduced in this study, namely Ensemble-Mean, Ensemble-Med, Ensemble-En, Ensemble-TE, Ensemble-TI, and Ensemble-PM. It is assumed that m prediction intervals corresponding to n real values have been obtained, which is expressed as follows:
[L_{ij}, U_{ij}], i = 1, 2, …, n, j = 1, 2, …, m.
Then, the final ensemble interval is as follows:
[L_i, U_i] = f([L_{ij}, U_{ij}]), i = 1, 2, …, n, j = 1, 2, …, m,
where f ( x ) represents the ensemble method of m prediction intervals.
Ensemble-Mean means that the ensemble interval is the mean value of m intervals, and the formula is given by:
L_i = (1/m) Σ_{j=1}^{m} L_{ij}, U_i = (1/m) Σ_{j=1}^{m} U_{ij}.
Ensemble-Mean is one of the most commonly used ensemble methods, which is simple and easy to understand, and can represent the centralized trend of intervals.
Ensemble-Med represents the median value of m intervals, and the expression is as follows:
L_i = median_j(L_{ij}), U_i = median_j(U_{ij}).
Ensemble-Med has good stability and is not easily affected by extreme interval values; intervals with large fluctuations are thus effectively excluded.
Ensemble-En means that the lower bound of ensemble intervals takes the minimum value of the lower bound of prediction intervals, and the upper bound of ensemble intervals takes the maximum value of the upper bound of prediction intervals. In other words, the ensemble interval surrounds all the prediction intervals. The formula is given by:
L_i = min_j(L_{ij}), U_i = max_j(U_{ij}).
Ensemble-En may improve the coverage of the prediction interval. However, it may also make the interval too wide.
Ensemble-TE deletes the outermost 2k points of the prediction intervals and then averages the remainder, which is expressed as follows [23]:
k = 0 for 1 ≤ m ≤ 3; k = 1 for 4 ≤ m ≤ 7; k = 2 for 8 ≤ m ≤ 11; k = 3 for m ≥ 12,
L_i = (1/(m − k)) Σ_{j=1}^{m−k} L_{ij}^{(k min)}, U_i = (1/(m − k)) Σ_{j=1}^{m−k} U_{ij}^{(k max)},
where L i j ( k min ) represents the lower bound of the remaining prediction intervals after deleting the minimum k values and U i j ( k max ) represents the upper bound of the remaining prediction intervals after deleting the maximum k values. Ensemble-TE not only removes some outliers in the prediction intervals, making it easier to obtain a narrow ensemble interval, but also uses the advantage of mean value, representing the centralized trend of the remaining prediction interval. Thus, the entire information is used as fully as possible without being affected by larger extreme values.
Ensemble-TI deletes the innermost 2k points of the prediction intervals and then averages the remainder. The expression is as follows:
L_i = (1/(m − k)) Σ_{j=1}^{m−k} L_{ij}^{(k max)}, U_i = (1/(m − k)) Σ_{j=1}^{m−k} U_{ij}^{(k min)},
where L i j ( k max ) represents the lower bound of the remaining prediction intervals after deleting the maximum k values and U i j ( k min ) represents the upper bound of the remaining prediction intervals after deleting the minimum k values. Ensemble-TI retains the information on the outside of the prediction intervals and deletes the innermost information, which may help to appropriately improve the coverage of interval prediction.
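The combiners introduced so far (mean, median, envelope, exterior and interior trimming) can be illustrated on a toy set of m = 5 intervals for a single point; with m = 5, the trimming rule above gives k = 1. The bounds below are made-up numbers, with one deliberately wide outlier interval.

```python
import numpy as np

L = np.array([1.0, 1.2, 0.8, 1.1, 0.2])   # lower bounds from m = 5 methods
U = np.array([2.0, 2.1, 1.9, 2.2, 3.5])   # upper bounds (last one is an outlier)

mean_iv = (L.mean(), U.mean())            # Ensemble-Mean
med_iv = (np.median(L), np.median(U))     # Ensemble-Med
env_iv = (L.min(), U.max())               # Ensemble-En (envelope)

k = 1                                     # trimming count for 4 <= m <= 7
te_iv = (np.sort(L)[k:].mean(), np.sort(U)[:-k].mean())   # drop outermost 2k
ti_iv = (np.sort(L)[:-k].mean(), np.sort(U)[k:].mean())   # drop innermost 2k
```

As expected, exterior trimming yields the narrowest combined interval here, while the envelope is the widest.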
Ensemble-PM [23] obtains the ensemble interval by probability averaging of the interval endpoints followed by simple averaging of the midpoints, under the assumption that the prediction interval at each point is based on a normal distribution. If Y_i ∼ N(μ, σ), then Z_i = (Y_i − μ)/σ ∼ N(0, 1), and the pth quantile is expressed as follows:
P(Y_i ≤ y) = P(Z_i ≤ (y − μ)/σ) = Φ((y − μ)/σ) = p.
Therefore, the pth quantile on N ( μ , σ ) can be expressed as [43]:
y = μ + σ Φ^{−1}(p).
Then, the average of the m cumulative distribution functions evaluated at the ensemble interval endpoints is set equal to the corresponding quantile levels at the given confidence level α, here p = (1 − α)/2 and (1 + α)/2, so the lower and upper bounds of the ensemble interval satisfy the following formula:
(1/m) Σ_{j=1}^{m} F_{ij}(L_i) = (1 − α)/2, (1/m) Σ_{j=1}^{m} F_{ij}(U_i) = (1 + α)/2,
where F i j is the cumulative distribution function of the jth prediction interval at the ith point. Ensemble-PM makes full use of the probability information of the prediction interval.

3. Data and Evaluation Indexes

3.1. Data Description and Processing

In this section, we introduce the data sources used in this study and how we process the data, including the selection of the period of data, the selection of variables, display of variable correlation, data preprocessing, and division of training, validation, and test sets.
The data used in this study are from the Desert Knowledge Australia Solar Centre (DKASC) in Alice Springs, Australia, available at http://dkasolarcentre.com.au/locations/alice-springs (last accessed: 5 September 2022). This platform provides historical data queries, real-time displays of solar power generation, and relevant meteorological information for the power station systems of various manufacturers. In this study, we select the data of the 31st site, established in 2013. The time range is from 1 April 2014 to 31 October 2015, at intervals of 15 min and 5 min. The aim of this study is to predict the active power of solar energy. Since there is no solar power generation at night, we limit the time range of each day to 5:30–19:00. The selected variables comprise three parts. The first is the physical variables: wind speed, weather temperature (Celsius), weather relative humidity, global horizontal radiation, and diffuse horizontal radiation. The second is the time variables: hour and month. To ensure continuity, the month is mapped to the cosine and sine values of the month, represented as month_cos and month_sin. The respective expressions are as follows:
month_cos = cos(2π · month/12), month_sin = sin(2π · month/12).
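A quick check of the cyclic encoding: December (month 12) and January (month 1) become adjacent points on the unit circle rather than 11 units apart.

```python
import numpy as np

month = np.arange(1, 13)
month_cos = np.cos(2 * np.pi * month / 12)
month_sin = np.sin(2 * np.pi * month / 12)
# Dec->Jan and Jan->Feb are equal chord lengths on the unit circle
```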
Considering the autocorrelation of the solar active power, for the third part we choose four lagged values of active power, t−15, t−30, t−45, and t−60, for the data with a 15-min interval, corresponding to the power values 15, 30, 45, and 60 min before the current time. Then, we calculate the missing proportion of each variable and find that only active power has missing values (0.03%). Because this proportion is very low, the affected records are deleted. Next, we draw the heatmap of Pearson correlation coefficients between the relevant variables of the model, as demonstrated in Figure 3. Correspondingly, we select four lagged values for the data with a 5-min interval and apply the same processing. Given that the results are similar to those of the 15-min interval, we only illustrate the 15-min case here.
Figure 3 reflects that the active power has a strong linear correlation with the global horizontal radiation and the four lagged values t-15, t-30, t-45, and t-60; a weak linear correlation with diffuse horizontal radiation, wind speed, weather temperature (Celsius), and weather relative humidity; and, in particular, a very weak linear correlation with month_sin, month_cos, and hour. However, these are only linear correlations; further modeling and analysis are required for other types of dependence. For the division of the data set, we use the period from 1 April 2014 to 30 June 2015 as the training set, 1 July 2015 to 31 August 2015 as the validation set, and 1 September 2015 to 31 October 2015 as the test set.

3.2. Performance Evaluation Indexes

3.2.1. Evaluation Indexes for Point Prediction

In this section, we introduce five indexes to evaluate the accuracy and fitting degree of the point predictions: mean absolute error (MAE), root mean square error (RMSE), fitting coefficient (R²), mean absolute percentage error (MAPE), and symmetric mean absolute percentage error (SMAPE). They measure the difference between the model's predictions and the real values and are defined as follows:
\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| y_i - \hat{y}_i \right|,
\mathrm{RMSE} = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2 },
R^2 = \frac{ \sum_{i=1}^{n} \left( \hat{y}_i - \bar{y} \right)^2 }{ \sum_{i=1}^{n} \left( y_i - \bar{y} \right)^2 },
\mathrm{MAPE} = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{y_i - \hat{y}_i}{y_i} \right| \times 100\%,
\mathrm{SMAPE} = \frac{1}{n} \sum_{i=1}^{n} \frac{ \left| y_i - \hat{y}_i \right| }{ \left| y_i \right| + \left| \hat{y}_i \right| } \times 100\%,
where \bar{y} = \frac{1}{n} \sum_{i=1}^{n} y_i is the mean of the actual values, y_i is the actual value, \hat{y}_i is the predicted value, and n is the sample size. MAE, RMSE, MAPE, and SMAPE measure how far the predicted values deviate from the real values: the smaller these four indexes, the closer the predictions are to the real values and the stronger the prediction ability of the model. R² measures the fitting degree of the predicted values to the real values, that is, the proportion of the variance of the real values explained by the model's predictions. The closer R² is to 1, the better the fit of the model.
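For concreteness, the five indexes can be computed directly from their definitions. The sketch below is ours, not the paper's code; note that the SMAPE denominator convention (with or without a factor of 1/2) varies across the literature, and the form here mirrors the expression above.

```python
import math

def point_metrics(y, y_hat):
    """MAE, RMSE, R^2 (explained-variance form), MAPE and SMAPE
    for paired lists of actual and predicted values."""
    n = len(y)
    y_bar = sum(y) / n
    mae = sum(abs(a - p) for a, p in zip(y, y_hat)) / n
    rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(y, y_hat)) / n)
    r2 = sum((p - y_bar) ** 2 for p in y_hat) / sum((a - y_bar) ** 2 for a in y)
    mape = 100.0 / n * sum(abs((a - p) / a) for a, p in zip(y, y_hat))
    smape = 100.0 / n * sum(abs(a - p) / (abs(a) + abs(p)) for a, p in zip(y, y_hat))
    return {"MAE": mae, "RMSE": rmse, "R2": r2, "MAPE": mape, "SMAPE": smape}
```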

3.2.2. Evaluation Indexes for Interval Prediction

To measure the quality of the prediction interval, we select five indicators, namely PICP, PINAW, CWC, the Winkler score, and MPICD, to evaluate the reliability and accuracy of the prediction interval and the position of the real value within it.
PICP represents the proportion of prediction intervals covering the real values and measures the reliability of the interval. Denoting the prediction interval by [L_i, U_i], i = 1, 2, \ldots, n, PICP is defined as
\mathrm{PICP} = \frac{1}{n} \sum_{i=1}^{n} I\left( y_i \in [L_i, U_i] \right),
where I(\cdot) is the indicator function: I(y_i \in [L_i, U_i]) = 1 if y_i \in [L_i, U_i], and 0 otherwise. A larger PICP means that more real values are covered and the prediction interval is more reliable. Generally speaking, for a given confidence level \alpha, PICP should be greater than or equal to \alpha while remaining as close to it as possible [44].
PINAW refers to the relative average width of the prediction interval after standardization, which is used to measure the accuracy of the interval. The expression of PINAW is as follows:
\mathrm{PINAW} = \frac{1}{nR} \sum_{i=1}^{n} \left( U_i - L_i \right),
where R = \max_i(y_i) - \min_i(y_i) is the range of the data, used to standardize the average width of the interval and exclude width changes caused by the variation of the real values. A smaller PINAW corresponds to a narrower relative prediction range and a more accurate prediction interval. If PINAW is too large, the interval can easily achieve a high PICP, but the prediction range will be too wide to capture detailed changes.
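A direct implementation of these two coverage and width measures (our sketch; the function names picp and pinaw are our own):

```python
def picp(y, lower, upper):
    """Proportion of actual values covered by their prediction intervals."""
    return sum(l <= a <= u for a, l, u in zip(y, lower, upper)) / len(y)

def pinaw(y, lower, upper):
    """Average interval width, normalised by the range of the actual values."""
    r = max(y) - min(y)
    return sum(u - l for l, u in zip(lower, upper)) / (len(y) * r)
```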
However, the two indicators above are in conflict, and each captures only one characteristic of the prediction interval: when PICP is large enough, PINAW is also large, and when PINAW is small, PICP often fails to meet the confidence level. Thus, neither alone can comprehensively evaluate interval quality. Therefore, we also select the CWC and Winkler score to jointly evaluate the reliability and accuracy of the prediction interval.
CWC penalizes prediction intervals whose PICP does not reach the confidence level, and its expression is
\mathrm{CWC} = \mathrm{PINAW} \left( 1 + I(\mathrm{PICP} < \alpha) \, e^{-\eta (\mathrm{PICP} - \alpha)} \right),
where \alpha represents the confidence level and \eta is the penalty factor used to scale the difference between PICP and \alpha. When the PICP of the interval reaches the confidence level, CWC equals PINAW; when it does not, the corresponding penalty makes the CWC larger. Therefore, a smaller CWC indicates a better prediction interval. In this study, penalty factors of 25 and 50 were considered for evaluation; since an excessively large penalty factor can over-penalize coverage values that fall only slightly short, we chose a penalty factor of 25.
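Following the expression above (our sketch; when coverage meets the nominal level, the no-penalty branch simply returns PINAW):

```python
import math

def cwc(picp_value, pinaw_value, alpha, eta=25):
    """Coverage width-based criterion: equals PINAW when coverage meets the
    nominal level alpha, and inflates PINAW exponentially otherwise."""
    if picp_value < alpha:
        return pinaw_value * (1 + math.exp(-eta * (picp_value - alpha)))
    return pinaw_value
```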
The Winkler score [45] specifically penalizes each interval that does not cover the corresponding real value. The expressions are as follows:
w_i(\alpha) = U_i(\alpha) - L_i(\alpha),
S_i(\alpha) = \begin{cases} -2\alpha w_i(\alpha) - 4\left( L_i(\alpha) - y_i \right), & y_i < L_i(\alpha), \\ -2\alpha w_i(\alpha), & y_i \in [L_i(\alpha), U_i(\alpha)], \\ -2\alpha w_i(\alpha) - 4\left( y_i - U_i(\alpha) \right), & y_i > U_i(\alpha), \end{cases}
S(\alpha) = \frac{1}{n} \sum_{i=1}^{n} S_i(\alpha),
where w_i(\alpha), S_i(\alpha), and S(\alpha) denote, respectively, the absolute width of the prediction interval at confidence level \alpha, the interval score at each point, and the final Winkler score. When the real value is not covered by the prediction interval, the interval score penalizes the part exceeding the upper or lower bound. Therefore, the larger the Winkler score, the better the interval quality.
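The scoring rule can be sketched as follows (our code, using the sign convention of this paper, under which scores are negative and larger is better):

```python
def winkler_score(y, lower, upper, alpha):
    """Mean interval score: each interval pays its width, and intervals that
    miss the actual value pay an additional penalty for the excess distance."""
    total = 0.0
    for a, l, u in zip(y, lower, upper):
        s = -2 * alpha * (u - l)   # width term paid by every interval
        if a < l:
            s -= 4 * (l - a)       # penalty for undershooting the lower bound
        elif a > u:
            s -= 4 * (a - u)       # penalty for overshooting the upper bound
        total += s
    return total / len(y)
```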
In addition, MPICD is selected to measure the position of the real value in the prediction interval. The specific formula is as follows:
\mathrm{MPICD} = \frac{1}{n} \sum_{i=1}^{n} \left| y_i - \frac{L_i(\alpha) + U_i(\alpha)}{2} \right|.
This formula indicates that MPICD quantifies the average distance between the real value and the midpoint of the prediction interval. The smaller the MPICD, the closer the real value is to the midpoint of the interval and the better the prediction interval quality.
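A one-line sketch of this index (the function name mpicd is our own):

```python
def mpicd(y, lower, upper):
    """Mean absolute distance between actual values and interval midpoints."""
    return sum(abs(a - (l + u) / 2) for a, l, u in zip(y, lower, upper)) / len(y)
```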

4. Results and Discussion

In this section, we conduct empirical research on the real data with both the point and interval prediction methods introduced in Section 2, covering modeling, analysis, prediction, and model evaluation. All programs are implemented in Python 3.8.5 or R 4.1.1.

4.1. Results of Point Prediction

For point prediction, the performance of six methods, namely Ridge, RF, GRU, NGB, GBRT-Mean, and GBRT-Med, is compared. For each prediction model, the choice of hyperparameters is crucial to its performance. We use grid search with cross validation on the validation set to evaluate model performance and determine the optimal hyperparameters of each model, which are demonstrated in Table 1. The order of importance of the RF model variables is illustrated in Figure 4.
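The hyperparameter search can be sketched generically as follows. This is our illustration of the procedure, not the authors' code (in practice one would use, e.g., scikit-learn's GridSearchCV); fit and score stand in for model training and validation-set evaluation, where a larger score is better.

```python
import itertools

def grid_search(param_grid, fit, score):
    """Exhaustive search: train one model per hyperparameter combination and
    keep the combination whose validation score is highest."""
    best_params, best_score = None, float("-inf")
    for values in itertools.product(*param_grid.values()):
        params = dict(zip(param_grid.keys(), values))
        model = fit(**params)   # train with this hyperparameter combination
        s = score(model)        # evaluate on the validation set
        if s > best_score:
            best_params, best_score = params, s
    return best_params, best_score
```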
It can be observed from Figure 4 that global horizontal radiation ranks first in importance among all independent variables, accounting for more than 50% of the total variable importance, followed by t-15 and t-30.
To ensure a faster convergence rate and learning effect, we standardize the data before establishing GRU model. As a mainstream optimization algorithm, Adam [46] is opted as our optimizer. The number of iterations is set to 1000, learning rate to 0.01, hidden layer to 4, hidden node to 60, and the regularization parameter to 0.0001.
In the NGB model, we choose the Gaussian distribution, set the learning rate to 0.01 and the number of iterations to 532, and use 40% of the rows as the subsample in each boosting iteration. The importance ranking of the location parameter variables is depicted in the left panel of Figure 5. As in the RF results, global horizontal radiation ranks first in importance, followed by t-15, and month_cos is more important than month_sin. However, the third-ranked variable differs slightly: it is diffuse horizontal radiation.
In the GBRT-Mean method, we set the maximum depth of the tree to 5, the number of boosting stages to perform is 400, the minimum number of samples required to split an internal node is 10, the minimum number of samples required to be at a leaf node is 15, and the learning rate is 0.05. For the median method, GBRT-Med, we set the maximum depth of the tree to 15, the number of boosting stages to perform is 400, the minimum number of samples required to split an internal node is 15, the minimum number of samples required to be at a leaf node is 10, and the learning rate is 0.15.
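The difference between the two GBRT variants is the loss being minimised: GBRT-Mean fits the conditional mean via squared error, while GBRT-Med minimises the quantile (pinball) loss at q = 0.5, which targets the conditional median. A sketch of the pinball loss (our code, not the paper's implementation):

```python
def pinball_loss(y, y_hat, q):
    """Quantile (pinball) loss; at q = 0.5 it equals half the mean absolute
    error, so minimising it yields the conditional median."""
    total = 0.0
    for a, p in zip(y, y_hat):
        diff = a - p
        # Under-predictions are weighted by q, over-predictions by (1 - q).
        total += q * diff if diff >= 0 else (q - 1) * diff
    return total / len(y)
```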
To measure the closeness between the predicted and real values, the relevant evaluation indexes of the prediction results of the six models mentioned earlier are calculated to evaluate the prediction performance of the models, which are listed in Table 2. The best results under each evaluation index are displayed in bold according to the description in Section 3.2.1.
Table 2 reveals that, compared to the other models, the MAE, RMSE, MAPE, and SMAPE calculated from the prediction results of the RF model are the smallest and its R² is the largest, indicating that the RF model performs best. However, the differences from RF in most indicators for GBRT-Mean and GBRT-Med are negligible, indicating similar performance, followed by GRU and NGB. The worst is Ridge regression, whose accuracy is far from that of the other methods.

4.2. Results of Interval Prediction

In this study, we have compared the performance and prediction interval quality of twelve recently proposed interval prediction methods. We have then used six ensemble methods to combine the prediction intervals of a subset of these methods to obtain better prediction intervals.
Specifically, the following methods are chosen: the KDE-based methods GRU-KDE, RF-KDE, Ridge-KDE, GBRT-Mean-KDE, and GBRT-Med-KDE, built on the point prediction residuals of the corresponding models; the J+ab methods J+ab-Ridge, J+ab-MLP, and J+ab-RF; the random-forest-based methods RF-OOB, QRF, and SC-RF; and NGB.
For the KDE method, we train the point prediction models GRU, RF, Ridge, GBRT-Mean, and GBRT-Med (see Section 4.1 for the training process and hyperparameter tuning). Then, the residuals between the predicted and actual values of the five models are calculated. Next, we select the kernel density bandwidth for the residuals by cross validation over candidate values from 0.005 to 0.15; the bandwidths obtained for the five models are 0.040, 0.016, 0.072, 0.034, and 0.026, respectively. We then compute the cumulative distribution function and the corresponding quantiles. For confidence levels of 95%, 90%, 85%, and 80%, the upper and lower quantiles corresponding to the five methods are demonstrated in Table 3.
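The interval construction can be sketched as follows (our simplified code, not the paper's implementation): fit a Gaussian KDE to the point-forecast residuals, invert the smoothed residual CDF at the two tail probabilities, and shift the resulting quantiles by the point prediction. Bandwidth selection by cross validation is omitted here; the bandwidth is passed in directly.

```python
import math

def kde_interval(residuals, bandwidth, point_pred, alpha):
    """Prediction interval at confidence level alpha from a Gaussian KDE of
    residuals: point_pred plus the (1-alpha)/2 and (1+alpha)/2 quantiles."""
    def cdf(x):
        # Gaussian-kernel KDE CDF: mean of normal CDFs centred at residuals.
        return sum(0.5 * (1 + math.erf((x - r) / (bandwidth * math.sqrt(2))))
                   for r in residuals) / len(residuals)

    def quantile(p, lo=-10.0, hi=10.0):
        # Bisection on the monotone CDF (assumes the quantile lies in [lo, hi]).
        for _ in range(200):
            mid = (lo + hi) / 2
            if cdf(mid) < p:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    tail = (1 - alpha) / 2
    return point_pred + quantile(tail), point_pred + quantile(1 - tail)
```

For symmetric zero-mean residuals, the resulting interval is symmetric about the point prediction, as one would expect.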
For the J+ab method, the multi-layer perceptron in J+ab-MLP chooses the Adam optimizer, maximum number of iterations is set to 8000, activation function selected is tanh, the sizes of the three hidden layers are 50, 40 and 30 in order, and the regularization parameter is 0.001. The parameter settings of J+ab-Ridge and J+ab-RF base learners are the same as those in Section 4.1.
For the NGB method, the specific modeling process and parameters are described in Section 4.1. For the interval prediction in NGB, its scale parameters are more important. The order of importance of the variables affecting scale parameters is depicted in the right panel of Figure 5. It can be observed that the three most important factors affecting the scale parameters are the global horizontal radiation, t-15, and diffuse horizontal radiation.
For the prediction intervals established by the above methods, we plot the active power from 1 September 2015 to 4 September 2015, as illustrated in Figure 6. The corresponding weather conditions, including wind speed, temperature (Celsius), and relative humidity, are depicted in Figure 7. It can be observed from Figure 6 that, in general, the prediction intervals obtained by the twelve methods follow the variation trend of the actual values and cover most of them. Among them, the prediction intervals obtained by J+ab-Ridge, J+ab-MLP, and J+ab-RF are relatively wide, followed by Ridge-KDE, while those of the other methods are relatively narrow.
In addition, we calculate PICP, PINAW, Winkler score, CWC, and MPICD from the prediction interval of each method and the results are depicted in Figure 8. The specific values at the 95% confidence level are listed in Table 4 (see Table A1, Table A2 and Table A3 for the results at other confidence levels).
According to the results in Figure 8, under each confidence level the models with PICP close to the nominal level are J+ab-Ridge, RF-OOB, SC-RF, NGB, GRU-KDE, and Ridge-KDE. The models with narrow PINAW and high Winkler scores are RF-OOB, SC-RF, QRF, NGB, GRU-KDE, RF-KDE, GBRT-Mean-KDE, and GBRT-Med-KDE, with values less than 0.12 and greater than −1.1, respectively. The models with smaller CWC are RF-OOB, SC-RF, QRF, NGB, GRU-KDE, Ridge-KDE, and GBRT-Mean-KDE, all less than 0.23. The models with smaller MPICD are RF-OOB, SC-RF, QRF, NGB, GRU-KDE, RF-KDE, Ridge-KDE, GBRT-Mean-KDE, and GBRT-Med-KDE, all less than 0.13.
More specifically, regarding the performance of each method: although the PICP of the three J+ab methods is basically close to the confidence level, their PINAW is too large to be accurate, and their other indexes are not ideal either. The PICP of QRF and GBRT-Mean-KDE sometimes fails to reach the confidence level, although it remains relatively close, and their other indexes are relatively ideal. The PICP of RF-KDE and GBRT-Med-KDE is far from the given confidence level, and compared with the other methods their CWC is larger, while their remaining indexes are ideal. The PICP of Ridge-KDE is achieved, but its PINAW is slightly higher, resulting in a relatively wide interval. In contrast, NGB, GRU-KDE, RF-OOB, and SC-RF perform well on all indexes.
In addition, we also compare the total computational time of each method for obtaining the prediction interval under four confidence levels, as depicted in Figure 9. The training was completed by a personal computer with AMD R7-5800h CPU, 3.20 GHz processor and 16 GB memory. It can be observed from Figure 9 that the methods with the shortest time are RF-OOB, NGB, and SC-RF, which are all less than 40 s, followed by QRF, J+ab-Ridge, J+ab-MLP, and RF-KDE, all of which are less than 200 s. Meanwhile, the most time-consuming methods are J+ab-RF, GBRT-Mean-KDE, GBRT-Med-KDE, GRU-KDE, and Ridge-KDE, all of which have taken more than 200 s. In general, the KDE method takes a longer time than other methods, probably due to the time-consuming cross validation when selecting the bandwidth.
To verify the stability of the results, we repeated the tests ten times. As illustrated in Figure 10, the results obtained by the J+ab methods are unstable, with values fluctuating by more than 0.1 for PICP and 0.05 for PINAW, while the results of the other methods overlap and lie almost on a straight line, indicating greater stability; for readability, these are not shown in the graph. Accordingly, the earlier results report, for each method, the test corresponding to the fifth PICP of the ten runs at the 95% confidence level.
Based on the above results, the J+ab methods are not considered in the further ensemble with the above-mentioned methods, because J+ab is already an ensemble method, performs poorly on various indicators, and produces a number of outliers in the prediction interval.
As for the ensemble part, we have implemented six methods, namely Ensemble-Mean, Ensemble-Med, Ensemble-En, Ensemble-TE, Ensemble-TI, and Ensemble-PM, to combine the prediction intervals obtained by nine methods (RF-OOB, SC-RF, QRF, NGB, GRU-KDE, RF-KDE, Ridge-KDE, GBRT-Mean-KDE, and GBRT-Med-KDE) into ensemble prediction intervals. We then compare the ensemble results with the previously best-performing models, such as RF-OOB, SC-RF, NGB, and GRU-KDE, and calculate the five indexes PICP, PINAW, Winkler score, CWC, and MPICD. The results are reflected in Figure 11, in which the method with the best performance on the four indexes other than PICP is marked in black, and the corresponding confidence levels are indicated by dotted lines. Moreover, we list the index comparison results at the 90% confidence level in Table 5, in which the best-performing method on the four indexes other than PICP is marked in bold (see Tables A4–A6 for the results at other confidence levels). In addition, we also process the data with an interval of 5 min in the same way. The corresponding results are illustrated in Figure 12 and Table 6 (see Tables A7–A9 for the results at other confidence levels).
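To make the combination rules concrete, here is a sketch of three of the combiners (our code; the trimming count k is a hypothetical choice, and the exact trimming fraction used in the paper is not restated here). Exterior trimming discards the most extreme endpoints, i.e., the lowest lower bounds and the highest upper bounds, and so tends to narrow the combined interval:

```python
def ensemble_te(lowers, uppers, k=2):
    """Exterior trimming (Ensemble-TE-style): drop the k smallest lower bounds
    and the k largest upper bounds, then average the remaining endpoints."""
    lo = sorted(lowers)[k:]
    up = sorted(uppers)[:-k]
    return sum(lo) / len(lo), sum(up) / len(up)

def ensemble_mean(lowers, uppers):
    """Simple averaging of the component bounds (Ensemble-Mean)."""
    return sum(lowers) / len(lowers), sum(uppers) / len(uppers)

def ensemble_en(lowers, uppers):
    """Envelop (Ensemble-En): the widest interval containing all components."""
    return min(lowers), max(uppers)
```

Since exterior trimming keeps only the interior-most endpoints, the combined interval is never wider than the simple average, which is consistent with the narrower PINAW observed for Ensemble-TE.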
It can be observed from Figure 11, Figure 12, Table 5, and Table 6 that the PICP of all ensemble methods reaches the confidence level at both the 15-min and 5-min intervals. Ensemble-TE is the optimal method at all confidence levels: its PINAW and CWC are significantly smaller than those of the other methods, and its Winkler score is higher. Meanwhile, in most cases the Ensemble-TE method also has the smallest MPICD; the only exceptions are the 80% confidence level with the 15-min interval, where Ensemble-Mean is optimal for MPICD, and the 95% confidence level with the 5-min interval, where RF-OOB is optimal. However, in these two cases the difference between Ensemble-TE and the optimal method is within 10⁻³. Therefore, Ensemble-TE is considered the best interval prediction method among all those compared.
Finally, the numerical results of five interval quality indicators at different confidence levels are described in Table 7 and Table 8, respectively.
In addition, the indexes calculated from our interval prediction results also outperform those reported in previous studies. More specifically, using the A-GRU-KDE [15] method on the same data set with 15-min and 5-min intervals, the PINAW values of the prediction intervals obtained are 0.258 and 0.195, respectively, at a confidence level of 95%. In the same setting, the PINAW values of our method are 0.066 and 0.063, respectively, indicating that the relative average width of the interval is reduced by nearly three quarters with guaranteed coverage. Meanwhile, the Winkler scores of the previous method are −2.39 and −1.88, respectively, at a confidence level of 95%, whereas those of our method are −0.608 and −0.591, an improvement of about 70%, which indicates a high-quality prediction interval. Similar conclusions hold at the other confidence levels of 90%, 85%, and 80%. Thus, the proposed method yields more accurate and higher-quality prediction intervals on the same DKASC data set.

5. Conclusions

In this study, we implement six methods to obtain point predictions for solar photovoltaic power generation, namely Ridge regression, RF, GRU, NGB, GBRT-Mean, and GBRT-Med. Comparing the prediction results of these methods, we find that RF performs best under the five point prediction evaluation indexes MAE, RMSE, R², MAPE, and SMAPE.
We also compare twelve interval prediction methods in this paper, namely J+ab-Ridge, J+ab-MLP, J+ab-RF, RF-OOB, SC-RF, QRF, NGB, GRU-KDE, RF-KDE, Ridge-KDE, GBRT-Mean-KDE, and GBRT-Med-KDE. Regarding the performance of each method, although the PICP of the three J+ab methods is basically close to the confidence level, their PINAW is too large to be accurate, and their other indicators are not ideal either. The PICP of QRF and GBRT-Mean-KDE cannot reach the confidence level in some cases, but it is relatively close, and their other indicators are relatively ideal. The PICP of RF-KDE and GBRT-Med-KDE is far from the given confidence level, and their CWC is larger, while their other indicators are ideal. The PICP of Ridge-KDE is achieved; however, its PINAW is slightly higher, which leads to a relatively wide interval. In contrast, NGB, GRU-KDE, RF-OOB, and SC-RF perform well on all indicators. In terms of computational time, the fastest methods are RF-OOB, NGB, and SC-RF, all under 40 s, followed by QRF, J+ab-Ridge, J+ab-MLP, and RF-KDE, all within 200 s. The most time-consuming methods are J+ab-RF, GBRT-Mean-KDE, GBRT-Med-KDE, GRU-KDE, and Ridge-KDE, which all took more than 200 s.
Furthermore, we use six ensemble interval prediction methods, namely Ensemble-Med, Ensemble-Mean, Ensemble-En, Ensemble-TE, Ensemble-TI, and Ensemble-PM, to combine the prediction intervals obtained by nine methods (RF-OOB, SC-RF, QRF, NGB, GRU-KDE, RF-KDE, Ridge-KDE, GBRT-Mean-KDE, and GBRT-Med-KDE) into ensemble prediction intervals. We then compare the ensemble results with the previously best-performing models, such as RF-OOB, SC-RF, NGB, and GRU-KDE. We find that the prediction interval obtained by Ensemble-TE is essentially optimal under the five indicators PICP, PINAW, Winkler score, CWC, and MPICD at the four confidence levels of 95%, 90%, 85%, and 80%. Therefore, we propose an ensemble interval prediction method for solar photovoltaic power generation, that is, a combination of the above nine interval prediction methods using Ensemble-TE. The results demonstrate that, compared to the other methods, the proposed method obtains intervals with coverage close to the nominal level, higher accuracy, and real values closer to the interval centers, at both 15-min and 5-min intervals and at the four confidence levels.
In the future, we will focus on applying the ensemble or combination prediction method to solar photovoltaic power generation in other regions or countries and attempt longer-term predictions, such as 24 h ahead. In addition, we will incorporate other state-of-the-art interval prediction methods into the ensemble to obtain more reliable and accurate prediction intervals. Furthermore, we will explore the impact of weather uncertainty on the model and seek methods with good prediction performance that are insensitive to weather changes.

Author Contributions

Conceptualization, Y.Z. and T.H.; methodology, Y.Z. and T.H.; software, Y.Z.; validation, Y.Z. and T.H.; data curation, Y.Z.; writing—original draft preparation, Y.Z.; writing—review and editing, T.H.; visualization, Y.Z.; supervision, T.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Beijing Natural Science Foundation (Z210003) and the National Natural Science Foundation of China (Grant Nos. 12171328 and 11971064).

Data Availability Statement

The datasets from the Desert Knowledge Australia Solar Centre are available online at http://dkasolarcentre.com.au/locations/alice-springs.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CART: Classification and regression trees
CWC: Coverage width-based criterion
En: Envelop
GBRT: Gradient boosting regression tree
GRU: Gated recurrent unit
J+ab: Jackknife+-after-bootstrap
KDE: Kernel density estimation
LSTM: Long short-term memory
MAE: Mean absolute error
MAPE: Mean absolute percentage error
Med: Median
MLP: Multi-layer perceptron
MSE: Mean squared error
NGB: Natural gradient boosting
OOB: Out-of-bag
PICP: Prediction interval coverage probability
PINAW: Prediction interval normalized averaged width
PM: Probability averaging of endpoints and simple averaging of midpoints
QRF: Quantile regression forests
R²: Fitting coefficient
RF: Random forests
RMSE: Root mean square error
SC: Split conformal
SMAPE: Symmetric mean absolute percentage error
TE: Exterior trimming
TI: Interior trimming

Appendix A. Other Results of Twelve Interval Prediction Methods

In this section, we demonstrate other results of twelve interval prediction methods at other confidence levels, including 90%, 85%, and 80%, as a supplement to Section 4.2.
Table A1. Results of five interval quality indexes of prediction interval of twelve methods (90% confidence level).
Method | PICP | PINAW | Winkler Score | CWC | MPICD
J+ab-Ridge | 0.963 | 0.179 | −1.563 | 0.179 | 0.178
J+ab-MLP | 0.954 | 0.307 | −2.661 | 0.307 | 0.317
J+ab-RF | 0.900 | 0.352 | −3.063 | 0.709 | 0.530
RF-OOB | 0.927 | 0.065 | −0.590 | 0.065 | 0.048
SC-RF | 0.927 | 0.069 | −0.630 | 0.069 | 0.052
QRF | 0.924 | 0.077 | −0.665 | 0.077 | 0.060
NGB | 0.931 | 0.066 | −0.577 | 0.066 | 0.066
GRU-KDE | 0.925 | 0.056 | −0.501 | 0.056 | 0.055
RF-KDE | 0.736 | 0.022 | −0.257 | 1.345 | 0.045
Ridge-KDE | 0.954 | 0.146 | −1.275 | 0.146 | 0.126
GBRT-Mean-KDE | 0.891 | 0.049 | −0.447 | 0.110 | 0.052
GBRT-Med-KDE | 0.723 | 0.026 | −0.289 | 2.159 | 0.051
Table A2. Results of five interval quality indexes of prediction interval of twelve methods (85% confidence level).
Method | PICP | PINAW | Winkler Score | CWC | MPICD
J+ab-Ridge | 0.945 | 0.164 | −1.372 | 0.164 | 0.185
J+ab-MLP | 0.834 | 0.284 | −2.4 | 0.709 | 0.408
J+ab-RF | 0.823 | 0.31 | −2.581 | 0.918 | 0.499
RF-OOB | 0.892 | 0.045 | −0.413 | 0.045 | 0.048
SC-RF | 0.893 | 0.049 | −0.454 | 0.049 | 0.052
QRF | 0.901 | 0.063 | −0.514 | 0.063 | 0.056
NGB | 0.899 | 0.057 | −0.487 | 0.057 | 0.066
GRU-KDE | 0.881 | 0.046 | −0.405 | 0.046 | 0.055
RF-KDE | 0.652 | 0.017 | −0.221 | 2.371 | 0.045
Ridge-KDE | 0.935 | 0.128 | −1.071 | 0.128 | 0.126
GBRT-Mean-KDE | 0.84 | 0.041 | −0.366 | 0.093 | 0.052
GBRT-Med-KDE | 0.641 | 0.02 | −0.249 | 3.775 | 0.05
Table A3. Results of five interval quality indexes of prediction interval of twelve methods (80% confidence level).
Method | PICP | PINAW | Winkler Score | CWC | MPICD
J+ab-Ridge | 0.905 | 0.148 | −1.203 | 0.148 | 0.206
J+ab-MLP | 0.834 | 0.237 | −1.912 | 0.237 | 0.34
J+ab-RF | 0.808 | 0.275 | −2.176 | 0.275 | 0.474
RF-OOB | 0.842 | 0.033 | −0.316 | 0.033 | 0.048
SC-RF | 0.841 | 0.035 | −0.341 | 0.035 | 0.052
QRF | 0.876 | 0.053 | −0.415 | 0.053 | 0.053
NGB | 0.869 | 0.051 | −0.42 | 0.051 | 0.066
GRU-KDE | 0.845 | 0.041 | −0.347 | 0.041 | 0.055
RF-KDE | 0.588 | 0.014 | −0.202 | 2.814 | 0.045
Ridge-KDE | 0.906 | 0.114 | −0.918 | 0.114 | 0.126
GBRT-Mean-KDE | 0.786 | 0.034 | −0.308 | 0.082 | 0.052
GBRT-Med-KDE | 0.604 | 0.018 | −0.231 | 2.488 | 0.05

Appendix B. Other Results of Six Ensemble Methods and Four Interval Prediction Methods

In this section, we demonstrate the other results of the six ensemble methods and four interval prediction methods, at confidence levels of 95%, 85%, and 80%, as a supplement to Section 4.2. The method with the best performance on the four indicators other than PICP is marked in bold.
Table A4. Results of five interval quality indexes measured by four interval prediction methods and six ensemble methods (95% confidence level, 15-min interval).
Method | PICP | PINAW | Winkler Score | CWC | MPICD
RF-OOB | 0.968 | 0.108 | −0.983 | 0.108 | 0.048
SC-RF | 0.965 | 0.115 | −1.046 | 0.115 | 0.052
NGB | 0.961 | 0.078 | −0.714 | 0.078 | 0.066
GRU-KDE | 0.964 | 0.075 | −0.684 | 0.075 | 0.054
Ensemble-Med | 0.976 | 0.079 | −0.719 | 0.079 | 0.052
Ensemble-Mean | 0.983 | 0.089 | −0.809 | 0.089 | 0.049
Ensemble-En | 0.998 | 0.209 | −1.880 | 0.209 | 0.112
Ensemble-TE | 0.960 | 0.066 | −0.608 | 0.066 | 0.045
Ensemble-TI | 0.992 | 0.107 | −0.962 | 0.107 | 0.052
Ensemble-PM | 0.993 | 0.125 | −1.123 | 0.125 | 0.072
Table A5. Results of five interval quality indexes measured by four interval prediction methods and six ensemble methods (85% confidence level, 15-min interval).
Method | PICP | PINAW | Winkler Score | CWC | MPICD
RF-OOB | 0.892 | 0.045 | −0.413 | 0.045 | 0.048
SC-RF | 0.893 | 0.049 | −0.454 | 0.049 | 0.052
NGB | 0.899 | 0.057 | −0.487 | 0.057 | 0.066
GRU-KDE | 0.881 | 0.046 | −0.405 | 0.046 | 0.055
Ensemble-Med | 0.917 | 0.045 | −0.383 | 0.045 | 0.045
Ensemble-Mean | 0.941 | 0.052 | −0.437 | 0.052 | 0.048
Ensemble-En | 0.995 | 0.145 | −1.17 | 0.145 | 0.108
Ensemble-TE | 0.873 | 0.035 | −0.319 | 0.035 | 0.044
Ensemble-TI | 0.965 | 0.064 | −0.523 | 0.064 | 0.051
Ensemble-PM | 0.963 | 0.062 | −0.508 | 0.062 | 0.055
Table A6. Results of five interval quality indexes measured by four interval prediction methods and six ensemble methods (80% confidence level, 15-min interval).
Method | PICP | PINAW | Winkler Score | CWC | MPICD
RF-OOB | 0.842 | 0.033 | −0.316 | 0.033 | 0.048
SC-RF | 0.841 | 0.035 | −0.341 | 0.035 | 0.052
NGB | 0.869 | 0.051 | −0.42 | 0.051 | 0.066
GRU-KDE | 0.845 | 0.041 | −0.347 | 0.041 | 0.055
Ensemble-Med | 0.883 | 0.036 | −0.303 | 0.036 | 0.044
Ensemble-Mean | 0.91 | 0.044 | −0.358 | 0.044 | 0.048
Ensemble-En | 0.992 | 0.13 | −0.99 | 0.13 | 0.106
Ensemble-TE | 0.824 | 0.028 | −0.261 | 0.028 | 0.044
Ensemble-TI | 0.95 | 0.054 | −0.426 | 0.054 | 0.05
Ensemble-PM | 0.937 | 0.049 | −0.389 | 0.049 | 0.052
Table A7. Results of five interval quality indexes measured by four interval prediction methods and six ensemble methods (95% confidence level, 5-min interval).
Method | PICP | PINAW | Winkler Score | CWC | MPICD
RF-OOB | 0.968 | 0.103 | −0.947 | 0.103 | 0.041
SC-RF | 0.966 | 0.113 | −1.044 | 0.113 | 0.044
NGB | 0.97 | 0.074 | −0.684 | 0.074 | 0.061
GRU-KDE | 0.966 | 0.076 | −0.704 | 0.076 | 0.049
Ensemble-Med | 0.97 | 0.074 | −0.684 | 0.074 | 0.047
Ensemble-Mean | 0.978 | 0.085 | −0.775 | 0.085 | 0.043
Ensemble-En | 0.997 | 0.195 | −1.755 | 0.195 | 0.102
Ensemble-TE | 0.957 | 0.063 | −0.591 | 0.063 | 0.041
Ensemble-TI | 0.985 | 0.1 | −0.912 | 0.1 | 0.046
Ensemble-PM | 0.99 | 0.117 | −1.056 | 0.117 | 0.064
Table A8. Results of five interval quality indexes measured by four interval prediction methods and six ensemble methods (85% confidence level, 5-min interval).
Method | PICP | PINAW | Winkler Score | CWC | MPICD
RF-OOB | 0.893 | 0.038 | −0.365 | 0.038 | 0.041
SC-RF | 0.892 | 0.04 | −0.387 | 0.04 | 0.043
NGB | 0.927 | 0.054 | −0.467 | 0.054 | 0.061
GRU-KDE | 0.894 | 0.042 | −0.376 | 0.042 | 0.049
Ensemble-Med | 0.91 | 0.035 | −0.322 | 0.035 | 0.04
Ensemble-Mean | 0.936 | 0.043 | −0.377 | 0.043 | 0.042
Ensemble-En | 0.991 | 0.123 | −0.998 | 0.123 | 0.096
Ensemble-TE | 0.884 | 0.028 | −0.277 | 0.028 | 0.039
Ensemble-TI | 0.956 | 0.052 | −0.445 | 0.052 | 0.044
Ensemble-PM | 0.956 | 0.052 | −0.437 | 0.052 | 0.049
Table A9. Results of five interval quality indexes measured by four interval prediction methods and six ensemble methods (80% confidence level, 5-min interval).
Method | PICP | PINAW | Winkler Score | CWC | MPICD
RF-OOB | 0.85 | 0.026 | −0.27 | 0.026 | 0.041
SC-RF | 0.851 | 0.028 | −0.288 | 0.028 | 0.043
NGB | 0.901 | 0.048 | −0.402 | 0.048 | 0.061
GRU-KDE | 0.842 | 0.034 | −0.308 | 0.034 | 0.049
Ensemble-Med | 0.877 | 0.027 | −0.251 | 0.027 | 0.039
Ensemble-Mean | 0.914 | 0.035 | −0.305 | 0.035 | 0.042
Ensemble-En | 0.987 | 0.111 | −0.847 | 0.111 | 0.094
Ensemble-TE | 0.838 | 0.021 | −0.222 | 0.021 | 0.039
Ensemble-TI | 0.942 | 0.044 | −0.359 | 0.044 | 0.044
Ensemble-PM | 0.934 | 0.039 | −0.328 | 0.039 | 0.046

Figure 1. Flow chart of this study.
Figure 2. Network structure of GRU.
Figure 3. Heatmap of Pearson correlation coefficients between correlated variables of the model.
Figure 4. Ranking of RF variables based on their importance.
Figure 5. Ranking of NGB variables based on their importance (left pane: location parameter, right pane: scale parameter).
Figure 6. Prediction intervals of twelve methods from 1 September 2015 to 4 September 2015 (J+ab-Ridge, J+ab-MLP, J+ab-RF, RF-OOB, SC-RF, QRF, NGB, GRU-KDE, RF-KDE, Ridge-KDE, GBRT-Mean-KDE, and GBRT-Med-KDE, respectively).
Figure 7. Weather conditions including wind speed, temperature (Celsius), and relative humidity from 1 September 2015 to 4 September 2015.
Figure 8. Comparison of PICP, PINAW, Winkler score, CWC, and MPICD of twelve methods (J+ab-Ridge, J+ab-MLP, J+ab-RF, RF-OOB, SC-RF, QRF, NGB, GRU-KDE, RF-KDE, Ridge-KDE, GBRT-Mean-KDE, and GBRT-Med-KDE, respectively).
Figure 9. Comparison of total calculation time of twelve methods.
Figure 10. PICP and PINAW of three J+ab methods (80% confidence level).
Figure 11. Comparison of PICP, PINAW, Winkler score, CWC, and MPICD of nine methods (RF-OOB, SC-RF, NGB, GRU-KDE, Ensemble-Med, Ensemble-Mean, Ensemble-En, Ensemble-TE, Ensemble-TI, Ensemble-PM, respectively, 15-min interval).
Figure 11. Comparison of PICP, PINAW, Winkler score, CWC, and MPICD of nine methods (RF-OOB, SC-RF, NGB, GRU-KDE, Ensemble-Med, Ensemble-Mean, Ensemble-En, Ensemble-TE, Ensemble-TI, Ensemble-PM, respectively, 15-min interval).
Energies 15 07193 g011
Figure 12. Comparison of PICP, PINAW, Winkler score, CWC, and MPICD of nine methods (RF-OOB, SC-RF, NGB, GRU-KDE, Ensemble-Med, Ensemble-Mean, Ensemble-En, Ensemble-TE, Ensemble-TI, Ensemble-PM, respectively, 5-min interval).
Figure 12. Comparison of PICP, PINAW, Winkler score, CWC, and MPICD of nine methods (RF-OOB, SC-RF, NGB, GRU-KDE, Ensemble-Med, Ensemble-Mean, Ensemble-En, Ensemble-TE, Ensemble-TI, Ensemble-PM, respectively, 5-min interval).
Energies 15 07193 g012
Table 1. Hyperparameters of different methods in point prediction.

Forecasting Method | Hyperparameter grid
RF | Estimators = {100, 125, 150, 175, 200}; Max depth = {10, 30, 50, 70, 100}; Max features = {5, 6, 7, 8, 9, 10, 11}
GRU | Learning rate = {0.005, 0.01, 0.015}; Hidden nodes = {50, 60, 70, 80}; Hidden layers = {2, 3, 4, 5}; Regularization parameter = {0.00005, 0.0001, 0.00015, 0.0002}
GBRT | Estimators = {100, 200, 300, 400, 500}; Learning rate = {0.05, 0.1, 0.15, 0.2}; Min samples leaf = {5, 10, 15}; Min samples split = {5, 10, 15, 20}; Max depth = {5, 10, 15, 20}
NGB | Learning rate = {0.005, 0.01, 0.015}; Minibatch frac = {0.3, 0.4, 0.5}
Ridge | Alphas = {0.01, 0.1, 1}
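Grids like those in Table 1 are typically searched by cross-validation. A sketch with scikit-learn's `GridSearchCV` over a reduced version of the RF grid (the synthetic data, fold count, and scoring choice are illustrative assumptions; the full Table 1 grid works identically, only more slowly):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

# Stand-in regression data with 11 features, matching Table 1's max-features range
X, y = make_regression(n_samples=200, n_features=11, noise=0.1, random_state=0)

param_grid = {                  # subset of the Table 1 RF grid, kept small for speed
    "n_estimators": [100, 125],
    "max_depth": [10, 30],
    "max_features": [5, 11],
}
search = GridSearchCV(RandomForestRegressor(random_state=0),
                      param_grid, cv=3, scoring="neg_mean_absolute_error")
search.fit(X, y)
print(search.best_params_)      # grid point with the best cross-validated MAE
```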
Table 2. Performance of six point prediction methods.

Evaluating Indicator | GRU | NGB | RF | Ridge | GBRT-Mean | GBRT-Med
MAE | 0.054 | 0.066 | 0.045 | 0.127 | 0.052 | 0.050
RMSE | 0.081 | 0.099 | 0.079 | 0.180 | 0.081 | 0.080
R² (%) | 99.75 | 99.62 | 99.76 | 98.74 | 99.74 | 99.75
MAPE | 0.788 | 0.237 | 0.048 | 1.120 | 0.104 | 0.055
SMAPE | 0.086 | 0.086 | 0.024 | 0.154 | 0.051 | 0.028
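The indicators in Table 2 are standard point-forecast errors. A compact reference implementation follows; note that several SMAPE conventions exist, and the factor-of-two variant below is an assumption rather than the paper's stated formula.

```python
import numpy as np

def point_metrics(y, yhat):
    """Return MAE, RMSE, R^2 in percent, MAPE, and SMAPE."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    err = y - yhat
    mae = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())
    r2 = 100.0 * (1.0 - (err ** 2).sum() / ((y - y.mean()) ** 2).sum())
    mape = np.abs(err / y).mean()                              # requires y != 0
    smape = (2.0 * np.abs(err) / (np.abs(y) + np.abs(yhat))).mean()
    return mae, rmse, r2, mape, smape

y_true = [1.0, 2.0, 4.0]    # toy values, not the DKASC data
y_pred = [1.1, 1.9, 4.0]
mae, rmse, r2, mape, smape = point_metrics(y_true, y_pred)
print(mae, rmse, r2, mape, smape)
```

MAPE divides by the observation, which explains why it can diverge for near-zero nighttime power values while SMAPE stays bounded.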
Table 3. Upper and lower quantiles of five KDE methods with different confidence levels.

Method | 95% | 90% | 85% | 80%
GRU-KDE | (−0.176, 0.177) | (−0.131, 0.135) | (−0.108, 0.112) | (−0.094, 0.099)
RF-KDE | (−0.078, 0.075) | (−0.051, 0.052) | (−0.039, 0.040) | (−0.032, 0.035)
Ridge-KDE | (−0.440, 0.436) | (−0.355, 0.335) | (−0.312, 0.293) | (−0.280, 0.261)
GBRT-Mean-KDE | (−0.159, 0.157) | (−0.115, 0.117) | (−0.095, 0.097) | (−0.079, 0.081)
GBRT-Med-KDE | (−0.087, 0.095) | (−0.057, 0.064) | (−0.047, 0.049) | (−0.042, 0.044)
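The quantile pairs in Table 3 are lower and upper quantiles of a kernel density estimate fitted to each point forecaster's residuals; added to the point forecast, they yield the prediction interval. A sketch with SciPy's Gaussian KDE, where the grid-based quantile inversion and the synthetic residuals are illustrative assumptions:

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_quantile_offsets(residuals, confidence=0.95, grid_size=4096):
    """Residual quantiles (q_lo, q_hi) from a Gaussian KDE.

    The interval for a new point forecast y_hat is [y_hat + q_lo, y_hat + q_hi].
    """
    residuals = np.asarray(residuals, float)
    kde = gaussian_kde(residuals)
    pad = 3 * residuals.std()
    grid = np.linspace(residuals.min() - pad, residuals.max() + pad, grid_size)
    cdf = np.cumsum(kde(grid))
    cdf /= cdf[-1]                      # numerical CDF on the grid
    alpha = 1.0 - confidence
    q_lo = grid[np.searchsorted(cdf, alpha / 2)]
    q_hi = grid[np.searchsorted(cdf, 1 - alpha / 2)]
    return q_lo, q_hi

rng = np.random.default_rng(0)
res = rng.normal(0.0, 0.05, size=2000)  # stand-in residuals
q_lo, q_hi = kde_quantile_offsets(res, confidence=0.90)
print(q_lo, q_hi)
```

As in Table 3, a tighter residual distribution (e.g. RF-KDE) gives narrower offsets, while a poor point model (Ridge-KDE) produces wide ones.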
Table 4. Results of five interval quality indicators for the prediction intervals of twelve methods (95% confidence level).

Method | PICP | PINAW | Winkler Score | CWC | MPICD
J+ab-Ridge | 0.975 | 0.253 | −2.301 | 0.253 | 0.223
J+ab-MLP | 0.981 | 0.367 | −3.316 | 0.367 | 0.350
J+ab-RF | 0.953 | 0.402 | −3.632 | 0.402 | 0.479
RF-OOB | 0.968 | 0.108 | −0.983 | 0.108 | 0.048
SC-RF | 0.965 | 0.115 | −1.046 | 0.115 | 0.052
QRF | 0.944 | 0.106 | −0.952 | 0.227 | 0.074
NGB | 0.961 | 0.078 | −0.714 | 0.078 | 0.066
GRU-KDE | 0.964 | 0.075 | −0.684 | 0.075 | 0.054
RF-KDE | 0.838 | 0.032 | −0.342 | 0.562 | 0.045
Ridge-KDE | 0.974 | 0.185 | −1.684 | 0.185 | 0.127
GBRT-Mean-KDE | 0.951 | 0.067 | −0.617 | 0.067 | 0.052
GBRT-Med-KDE | 0.837 | 0.038 | −0.391 | 0.687 | 0.051
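The five indicators reported in Tables 4–8 can be computed as below. PICP, PINAW, and MPICD follow their usual definitions; the Winkler score is range-normalized and reported with a negative sign to match the tables, and the CWC penalty constant `eta` is an illustrative assumption, since the literature uses different values.

```python
import numpy as np

def interval_metrics(y, lo, hi, alpha=0.05, eta=50.0):
    """PICP, PINAW, Winkler score, CWC, and MPICD for prediction intervals."""
    y, lo, hi = (np.asarray(a, float) for a in (y, lo, hi))
    R = y.max() - y.min()                      # target range, for normalization
    picp = ((y >= lo) & (y <= hi)).mean()      # empirical coverage
    width = hi - lo
    pinaw = width.mean() / R
    # Winkler (1972): width plus a 2/alpha-scaled penalty for missed points
    penalty = (2.0 / alpha) * ((lo - y) * (y < lo) + (y - hi) * (y > hi))
    winkler = -np.mean(width + penalty) / R
    mu = 1.0 - alpha                           # nominal coverage level
    cwc = pinaw * (1.0 + (picp < mu) * np.exp(-eta * (picp - mu)))
    mpicd = np.abs((lo + hi) / 2.0 - y).mean() # interval-center deviation
    return picp, pinaw, winkler, cwc, mpicd

y = np.array([0.2, 0.5, 0.8])                  # toy observations
lo, hi = y - 0.2, y + 0.2                      # intervals covering every point
print(interval_metrics(y, lo, hi))
```

When PICP reaches the nominal level, CWC reduces to PINAW, as in most rows of the tables; undercoverage inflates CWC exponentially, which is why QRF and RF-KDE show CWC well above PINAW in Table 4.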
Table 5. Results of five interval quality indicators measured by four interval prediction methods and six ensemble methods (90% confidence level, 15-min interval).

Method | PICP | PINAW | Winkler Score | CWC | MPICD
RF-OOB | 0.927 | 0.065 | −0.590 | 0.065 | 0.048
SC-RF | 0.927 | 0.069 | −0.630 | 0.069 | 0.052
NGB | 0.931 | 0.066 | −0.577 | 0.066 | 0.066
GRU-KDE | 0.925 | 0.056 | −0.501 | 0.056 | 0.055
Ensemble-Med | 0.944 | 0.057 | −0.501 | 0.057 | 0.047
Ensemble-Mean | 0.961 | 0.064 | −0.559 | 0.064 | 0.049
Ensemble-En | 0.996 | 0.165 | −1.409 | 0.165 | 0.109
Ensemble-TE | 0.920 | 0.045 | −0.413 | 0.045 | 0.045
Ensemble-TI | 0.978 | 0.077 | −0.668 | 0.077 | 0.051
Ensemble-PM | 0.980 | 0.082 | −0.704 | 0.082 | 0.061
Table 6. Results of five interval quality indicators measured by four interval prediction methods and six ensemble methods (90% confidence level, 5-min interval).

Method | PICP | PINAW | Winkler Score | CWC | MPICD
RF-OOB | 0.932 | 0.059 | −0.544 | 0.059 | 0.041
SC-RF | 0.933 | 0.066 | −0.606 | 0.066 | 0.044
NGB | 0.948 | 0.062 | −0.553 | 0.062 | 0.061
GRU-KDE | 0.935 | 0.052 | −0.474 | 0.052 | 0.049
Ensemble-Med | 0.942 | 0.048 | −0.445 | 0.048 | 0.043
Ensemble-Mean | 0.955 | 0.056 | −0.502 | 0.056 | 0.043
Ensemble-En | 0.994 | 0.143 | −1.222 | 0.143 | 0.096
Ensemble-TE | 0.924 | 0.039 | −0.379 | 0.039 | 0.039
Ensemble-TI | 0.971 | 0.067 | −0.592 | 0.067 | 0.045
Ensemble-PM | 0.973 | 0.072 | −0.629 | 0.072 | 0.054
Table 7. Results of five interval quality indicators of Ensemble-TE at different confidence levels (15-min interval).

Confidence Level | PICP | PINAW | Winkler Score | CWC | MPICD
0.95 | 0.960 | 0.066 | −0.608 | 0.066 | 0.045
0.90 | 0.920 | 0.045 | −0.413 | 0.045 | 0.045
0.85 | 0.873 | 0.035 | −0.319 | 0.035 | 0.044
0.80 | 0.824 | 0.028 | −0.261 | 0.028 | 0.044
Table 8. Results of five interval quality indicators of Ensemble-TE at different confidence levels (5-min interval).

Confidence Level | PICP | PINAW | Winkler Score | CWC | MPICD
0.95 | 0.957 | 0.063 | −0.591 | 0.063 | 0.041
0.90 | 0.924 | 0.039 | −0.379 | 0.039 | 0.039
0.85 | 0.884 | 0.028 | −0.277 | 0.028 | 0.039
0.80 | 0.838 | 0.021 | −0.222 | 0.021 | 0.039

Zhang, Y.; Hu, T. Ensemble Interval Prediction for Solar Photovoltaic Power Generation. Energies 2022, 15, 7193. https://doi.org/10.3390/en15197193