Article

Evidential Extreme Learning Machine Algorithm-Based Day-Ahead Photovoltaic Power Forecasting

1
Key Laboratory of Energy Thermal Conversion and Control of Ministry of Education, School of Energy and Environment, Southeast University, Nanjing 210096, China
2
Huaiyin Institute of Technology, Huaian 223003, China
*
Author to whom correspondence should be addressed.
Energies 2022, 15(11), 3882; https://doi.org/10.3390/en15113882
Submission received: 18 April 2022 / Revised: 20 May 2022 / Accepted: 23 May 2022 / Published: 24 May 2022

Abstract: The gradually increasing penetration of photovoltaic (PV) power into electric power systems brings an urgent requirement for accurate and stable PV power forecasting methods. Existing forecasting methods are built to learn the function between weather data and power generation, ignoring the uncertainty of historical PV power. To manage the uncertainty in the forecasting process, this paper proposes a novel ensemble method, named the evidential extreme learning machine (EELM) algorithm, for deterministic and probabilistic PV power forecasting based on the extreme learning machine (ELM) and evidential regression. The proposed EELM algorithm first builds an ELM model for each of the k-nearest neighbors, and subsequently integrates the multiple models through an evidential discounting and combination process. Predictions are obtained from the forecasting outcomes of the neighbors' models and the mass function determined by the distance between the predicted point and the neighbors. The proposed EELM algorithm is verified with the real data series of a rooftop PV plant in Macau. The deterministic forecasting results demonstrate that the proposed EELM algorithm exhibits 15.45% lower nRMSE than ELM. In addition, the forecasted prediction intervals achieve better performance in the prediction interval coverage probability (PICP) and coverage width-based criterion (CWC) than the normal-distribution baseline.

1. Introduction

The critical depletion of conventional resources and the severe energy crisis nowadays encourage the exploitation of renewable energy resources [1,2], among which solar energy is regarded as a typically promising renewable energy source [3]. Photovoltaic (PV) systems, which convert global irradiance into electricity through the PV effect [4], are widely used to supply power for residential, commercial and industrial parks [5]. Meanwhile, considering the high variability and uncertainty of PV systems, rising PV penetration brings a great challenge to the economic and stable operation of the power grid [6]. Moreover, the implementation of PV systems also faces significant issues such as reactive power compensation, reliability, stability, electric power balance and frequency response of the power system [7]. Therefore, PV power forecasting is crucial for optimal planning and modelling of the energy system [8].
The variability and uncertainty of PV power are mostly affected by meteorological parameters, such as solar radiation, temperature, humidity, wind speed, precipitation and cloud coverage [9]. Owing to the volatile behavior of the weather system [10], the influential meteorological parameters change dramatically, resulting in inevitable variability of the PV power and thus a challenging task for accurate and reliable PV power forecasting [11]. Generally, forecasting methods for PV systems can be separated into physical methods and statistical methods [12,13]. Physical methods integrate a series of mathematical equations over meteorological and geological parameters with which the dynamic motion of the atmosphere can be described [14,15]. The main drawback of such models is that the attained resolution is only 16–50 km, and these models are marginally sensitive to changes in the meteorological variables, which indicates that they are unsuitable for the real-time dispatch stage of the energy system [16]. Statistical models are established from a sequence of observations in which one or more parameters are measured at successive instants in time, and they aim to establish a regression formulation between historical time-series data and PV power [17]. These methods include artificial neural networks (ANN) [18], support vector machines (SVM) [19,20], Markov chains [21], Kalman filters [22] and regression methods [23,24]. Due to their capacity for self-organizing, self-learning and powerful nonlinear regression [25], ANN models are regarded as the most commonly used approaches for complex nonlinear PV generation prediction problems [3,26,27].
The extreme learning machine (ELM), as a typical and improved ANN with a faster learning capability and excellent generalization ability [28], has been extensively applied to PV power forecasting problems, and its excellent forecasting performance has been confirmed [8,29,30,31]. Due to the volatility of PV power outputs, the distribution of forecasting errors is essential for the risk analysis and reliability evaluation of power systems [32]. The forecasting errors are generally assumed to follow certain distributions, such as the normal distribution [33], which ignores the periodic and nonlinear characteristics of PV power [34].
All of the above research works have provided useful suggestions and conclusions. However, most PV power forecasting methods concentrate on forecasting accuracy without considering the instability of a single model [8]. Meanwhile, the majority of forecasting models for PV power generation are single-task learning methods, which ignore the information that historical data from similar situations can potentially provide; such methods offer limited ability to capture the actual PV trend. Moreover, the existing forecasting methods take the information of only a single similar day into consideration, whereas multiple similar days should be integrated into the model while considering the uncertainty of the relevant PV power on those days.
To enable such knowledge sharing and transfer, multi-model frameworks should be exploited, which train multiple models simultaneously and transfer information between the separate models and the overall model. Moreover, the probabilistic performance of PV power forecasting is generally calculated from a confidence interval, which is roughly described by a certain distribution form of the forecasting error [6]. Accordingly, an accurate approach for quantifying the distribution of the PV forecasting error is quite essential.
Motivated by the existing works, this paper proposes an ensemble method for PV power forecasting. The key contributions of this study are as follows:
(1)
A novel EELM algorithm is developed to handle the highly nonlinear regression problem of PV forecasting. The proposed algorithm, which embeds multiple ELM models into the evidential theoretic framework, helps to improve the accuracy and stability of PV forecasting. Differing from other algorithms, it sufficiently considers the nearest neighbors of the predicted points and introduces their models, rather than their actual values, into the prediction process.
(2)
A new multi-model framework for PV power predicting on the basis of similar days selection is established. Basically, the proposed prediction framework reconstructs the original database by inserting the power and meteorological factors of similar days into the prediction inputs, and divides the original database into several sub-databases. The integration of multiple similar days allows the prediction model to consider the uncertainty of PV generation and contributes to the reliability of the forecasting model.
(3)
On the foundation of the proposed algorithm, both deterministic values and probabilistic intervals can be obtained. The confidence intervals are directly calculated from the forecasting results of the ELM models and their belief assignments, which helps to improve the accuracy of the probabilistic forecasting results.
The remainder of the paper is organized as follows. The following section provides a brief introduction to the ELM and evidential regression. Section 3 presents the detailed methodology and procedure of the proposed novel EELM algorithm and PV power forecasting framework. In Section 4, a dataset for a PV plant in Macau is applied, and the correlation analysis of input vectors and PV power is presented. Section 5 illustrates numerical examples to validate the performance of the proposed method. Finally, the conclusion of this study is presented in Section 6.

2. Preliminary

2.1. Extreme Learning Machine

The extreme learning machine (ELM) is a learning algorithm for generalized single-hidden layer feed forward neural networks (SLFNs) proposed by Huang [28]. The topological structure of the ELM is similar to ANNs, as shown in Figure 1.
The ELM was originally proposed to overcome the well-known drawbacks of the back-propagation (BP) learning algorithm and has obtained extensive attention in various domains. The fundamental concepts of the ELM algorithm are calculating the output weights by the least squares method and randomly assigning the hidden layer parameters instead of using iterative approaches. The approximated output of the ELM is as follows:
$$y_{E,i} = \sum_{j=1}^{L} \beta_j\, g(\omega_{E,j} \cdot x_{E,i} + b_j), \quad i = 1, \ldots, N$$
where xE,i and yE,i are the inputs and outputs of training samples, respectively. L denotes the number of hidden layer nodes, N is the number of samples, βj represents the output weights, g(·) is the activation function and ωE,j and bj are the input weights and bias, respectively. Equation (1) can be rewritten in matrix form:
$$Y_E = H\beta$$
where β is the matrix of weight, YE = [y1, y2, ⋯, yN] is the predicted output. H denotes the matrix of hidden layer output, where
$$H = \begin{bmatrix} g(\omega_{E,1} \cdot x_{E,1} + b_1) & \cdots & g(\omega_{E,L} \cdot x_{E,1} + b_L) \\ \vdots & \ddots & \vdots \\ g(\omega_{E,1} \cdot x_{E,N} + b_1) & \cdots & g(\omega_{E,L} \cdot x_{E,N} + b_L) \end{bmatrix}_{N \times L}$$
The objective of ELM can be expressed as:
$$\text{minimize: } \; \|H\beta - Y_E\|^2$$
In practical terms, the parameter L is smaller than the sample number N. Accordingly, the approximated β can be calculated using a Moore–Penrose (MP) generalized inverse operation:
$$\beta^* = H^{\dagger} Y_E$$
where $H^{\dagger}$ represents the MP generalized inverse of the hidden-layer output matrix $H$.
The standard ELM is an algorithm based on the empirical risk minimization principle, which might lead to an over-fitted model. To address this drawback, a regularized extreme learning machine (RELM) is employed according to the structural risk minimization principle [35]. The mathematical model of RELM is:
$$\min \; \frac{1}{2}\|\beta\|^2 + \frac{C}{2}\|\xi\|^2 \quad \text{s.t.} \quad h(x_{E,i})\beta - y_{E,i} = \xi_i, \; i = 1, 2, \ldots, N$$
The empirical risk is calculated from the sum of squared errors ‖ξ‖², and the structural risk is denoted by ‖β‖². Their proportion is regularized by introducing the factor C. By regulating the penalty factor C, the optimal tradeoff between structural risk and empirical risk allows ELM models to achieve better performance. The solution for β is given as:
$$\beta^* = \left(H^T H + \frac{I_L}{C}\right)^{-1} H^T Y$$
where IL is the L × L identity matrix.
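The training procedure above can be sketched in a few lines of NumPy. This is a minimal illustrative implementation under stated assumptions (tanh activation, Gaussian random input weights), not the authors' code; the output weights come from the regularized least-squares solution $\beta^* = (H^T H + I_L/C)^{-1} H^T Y$.

```python
import numpy as np

class ELM:
    """Minimal regularized ELM sketch (assumed interface, not the paper's code)."""

    def __init__(self, n_hidden=20, C=100.0, rng=None):
        self.L = n_hidden                      # number of hidden layer nodes
        self.C = C                             # regularization (penalty) factor
        self.rng = rng or np.random.default_rng(0)

    def fit(self, X, y):
        n_features = X.shape[1]
        # Randomly assigned input weights and biases -- no iterative tuning
        self.W = self.rng.standard_normal((n_features, self.L))
        self.b = self.rng.standard_normal(self.L)
        H = np.tanh(X @ self.W + self.b)       # hidden-layer output matrix
        # Regularized least-squares solve: beta = (H^T H + I_L / C)^{-1} H^T y
        A = H.T @ H + np.eye(self.L) / self.C
        self.beta = np.linalg.solve(A, H.T @ y)
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta
```

Because only the linear output layer is solved for, training reduces to one matrix factorization, which is the source of the ELM's speed advantage over back-propagation.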

2.2. Evidential Regression

2.2.1. Belief Function Theory

Belief function theory, as the fundamental background for evidential regression, is briefly introduced in this section [36,37,38]. Let Ω = {ω1, ω2, ⋯, ωc}, the frame of discernment, be a collectively exhaustive and mutually exclusive set of c hypotheses. A basic belief assignment (BBA, or mass function) is a function m: 2^Ω → [0, 1] satisfying:
$$\sum_{A \subseteq \Omega} m(A) = 1$$
A BBA satisfying m(∅) = 0 is considered normal; otherwise, it is regarded as a subnormal BBA. Any subset A of Ω satisfying m(A) > 0 is a focal element of m. For two independent sources of evidence m1 and m2, the fusion of BBAs may be defined as:
$$m(C) = \sum_{A \,\nabla\, B = C} m_1(A)\, m_2(B)$$
The conjunctive rule and disjunctive rule are obtained when ∇ = ∩ and ∇ = ∪, respectively. As the conjunctive rule, which underlies the Dempster rule ⊕, may generate subnormal BBAs, a subnormal BBA should be converted into a normal one, m*, defined as follows:
$$m^*(A) = \frac{m(A)}{1 - m(\emptyset)}$$
The belief function Bel, and the plausibility function, Pl, are evidence functions derived from BBA, which are defined as:
$$Bel(A) = \sum_{B \subseteq A} m(B)$$
$$Pl(A) = \sum_{B \cap A \neq \emptyset} m(B)$$
For these two functions, the relationship $Pl(A) = 1 - Bel(\bar{A})$ holds for all A. Any probability distribution Pd satisfying Bel(A) ≤ Pd(A) ≤ Pl(A) is considered compatible with m. As a typical compatible probability distribution, BetPm, named the pignistic probability distribution, can be described as:
$$BetP_m(\omega) = \sum_{\{A \mid \omega \in A\}} \frac{m^*(A)}{|A|}$$
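The definitions above can be made concrete with a toy frame of discernment. The sketch below (illustrative helper code, not from the paper) represents subsets as frozensets and implements the conjunctive rule, normalization, Bel, Pl and the pignistic transform exactly as defined:

```python
from itertools import product

def combine_conjunctive(m1, m2):
    """Unnormalized conjunctive rule: mass of C = sum over A ∩ B = C."""
    out = {}
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        C = A & B
        out[C] = out.get(C, 0.0) + a * b
    return out

def normalize(m):
    """Dempster normalization: discard the conflict m(∅)."""
    conflict = m.get(frozenset(), 0.0)
    return {A: v / (1 - conflict) for A, v in m.items() if A}

def bel(m, A):
    return sum(v for B, v in m.items() if B and B <= A)

def pl(m, A):
    return sum(v for B, v in m.items() if A & B)

def betp(m, w):
    """Pignistic probability of singleton w: sum of m*(A)/|A| over A containing w."""
    return sum(v / len(A) for A, v in m.items() if w in A)
```

For two simple BBAs m1 = {{a}: 0.6, Ω: 0.4} and m2 = {{b}: 0.5, Ω: 0.5} on Ω = {a, b}, combination produces conflict 0.3, which normalization redistributes; the resulting BetP sums to one, and Bel ≤ BetP ≤ Pl as expected.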

2.2.2. Evidential Regression

Differing from the form of a traditional regression framework, the training data of evidential regression are assumed to be:
$$TR = \{e_i = (x_{ER,i}, m_i)\}_{i=1}^{N}$$
where xER,i is the input vector, and mi is a mass on an ordered or continuous frame, which quantifies one’s partial knowledge or belief by taking the response value yER,i.
For an arbitrary input vector x, yER is the output to be predicted. The prediction task is to deduce useful information from the training set TR. Since the information in TR is potentially uncertain and imprecise, the output should take the form of a mass function, which is constructed in two steps: discounting of the mass functions and combination of the discounted masses. The detailed procedure of evidential regression, proposed by Petit-Renaud and Denoeux, can be summarized as follows [39,40]:
  • Step 1 Evidence construction on training data
For evidential regression problems, a piece of evidence for sample (xER,i, yER,i) can be determined as:
$$\begin{cases} m_i(\{y_{ER,i}\}) = p_i \\ m_i(\Omega) = 1 - p_i \end{cases}$$
where Ω represents all the potential values of yER, and pi is the confidence level in the possible value yER,i when the input is xER,i.
  • Step 2 Discounting of the mass function
The value of yER is highly dependent on the dissimilarity between the input vector xER and xER,i, which is measured by a distance function. With a given metric ‖·‖, yER is considered to be a value close to yER,i when xER is close to xER,i. More formally, m can be constructed as:
$$m(A \mid x_i) = \begin{cases} \alpha_{ER}\,\phi_d(d_i), & A = \{\omega_q\}, \; q \in \{1, 2, \ldots, c\} \\ 1 - \alpha_{ER}\,\phi_d(d_i), & A = \Omega \\ 0, & A \in 2^{\Omega} \setminus \{\Omega, \{\omega_q\}\} \end{cases}$$
where di = d(xER, xER,i) and αER is a real parameter in [0, 1]. ϕd is a discounting function, and a natural form for ϕ is:
$$\phi(d) = \exp(-\gamma d^2)$$
where γ is a real parameter. γ is the most important parameter, as it directly controls the decay rate of the discounting function.
  • Step 3 Combination of the discounted mass function
On the basis of the conjunctive rule for mass function, the final mass can be generated by the combination of information provided by every single element in TR. The final mass is then:
$$m_y[x_{ER}, TR] = \bigcap_{i=1}^{N} m_y[x_{ER}, e_i]$$
By normalizing my[xER,TR], m y * [xER,TR] of the final belief assignment can be obtained. As the bounded interval of domain yER is [yinf,ysup], the pignistic probability distribution BetPm can be expressed as:
$$BetP_m[x_{ER}, TR](y_{ER}) = \sum_{i=1}^{N} m_y^*[x_{ER}, TR](\{y_{ER,i}\}) + \frac{m_y^*[x_{ER}, TR](\Omega)}{y_{sup} - y_{inf}}$$
The expectation can be defined as:
$$\tilde{y}_{ER}(x_{ER}) = \sum_{i=1}^{N} m_y^*[x_{ER}, TR](\{y_{ER,i}\})\, y_{ER,i} + m_y^*[x_{ER}, TR](\Omega)\, \bar{y}$$
with y ¯ = ( y i n f + y s u p ) / 2 . The value y ¯ can be considered as an added observation, and the regression function can be rewritten as a linear function of the yER:
$$\tilde{y}_{ER}(x_{ER}) = \sum_{i=1}^{N+1} S_i\, y_{ER,i}$$
where Si is the weight associated with the vectors xER,i and x, i = 1, 2, ⋯, N + 1.
In addition, according to the BetPm of variable values to be predicted, the cumulative distribution function CDF(yER) can be acquired. For a given confidence level αp, the dynamic upper and lower bounds [ y E R , * , y E R * ] of the variable values can be calculated, which is [41,42]:
$$CDF(y_{ER}^*) = \frac{1 + \alpha_p}{2}, \qquad CDF(y_{ER,*}) = \frac{1 - \alpha_p}{2}$$
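The discounting–combination–expectation pipeline above can be sketched for the scalar-output case. The following is an assumption-laden illustration (Euclidean distance, simple BBAs with one singleton each), not the authors' implementation: each neighbor contributes a discounted mass on its own response value, the conjunctive rule leaves mass $\phi_i \prod_{j \neq i}(1-\phi_j)$ on each singleton and $\prod_j(1-\phi_j)$ on Ω, and normalization discards the conflict before taking the pignistic expectation.

```python
import numpy as np

def evidential_predict(x, X_tr, y_tr, gamma=1.0, alpha=0.95, k=5):
    """Evidential regression sketch restricted to the k-nearest neighbors."""
    d2 = np.sum((X_tr - x) ** 2, axis=1)
    idx = np.argsort(d2)[:k]                    # k nearest neighbors of x
    phi = alpha * np.exp(-gamma * d2[idx])      # discounted singleton masses
    one_minus = 1.0 - phi                       # alpha < 1 keeps these positive
    total = np.prod(one_minus)
    # Conjunctive rule: singleton {y_i} keeps phi_i * prod_{j != i}(1 - phi_j),
    # Omega keeps prod_j(1 - phi_j); the remaining mass is conflict.
    m_single = phi * total / one_minus
    m_omega = total
    Z = m_single.sum() + m_omega                # normalization drops the conflict
    m_single, m_omega = m_single / Z, m_omega / Z
    y_bar = 0.5 * (y_tr.min() + y_tr.max())     # midpoint absorbs mass on Omega
    y_hat = m_single @ y_tr[idx] + m_omega * y_bar
    return y_hat, m_single, m_omega
```

In practice γ would be tuned by the CV(γ) criterion of Remark 1, e.g. by a grid search over a log-spaced range of candidate values.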
Remark 1.
Parameter γ determines the regression outcome. Once the error criterion has been determined, the parameter γ can be optimized, which is defined as the parameter estimation process. Specified as the mean forecasting error over TR, the global selection criterion CV(γ) is minimized:
$$CV(\gamma) = \frac{1}{N} \sum_{i=1}^{N} C\left(m_i, m_{y_i}^*[x_{ER,i}, \Omega_i, \gamma]\right), \qquad \hat{\gamma} = \arg\min_{\gamma} CV(\gamma)$$
where C(·) is the squared error of the predicted and desired values.
Remark 2.
Under the worst conditions, the number of focal elements of my[xER,TR] grows exponentially for large N, which results in a heavy computational burden. To avoid this dilemma, an effective approach is to restrict the mass functions to the k-nearest neighbors $\{x_{ER,(i)}\}_{i=1}^{k}$ of x in TR:
$$m_y[x_{ER}, TR] = \bigcap_{i=1}^{k} m_y[x_{ER}, e_{(i)}]$$

3. The Proposed Evidential Extreme Learning Machine for Photovoltaic Forecasting

In order to handle the highly nonlinear regression problem of day-ahead photovoltaic forecasting, an ensemble algorithm termed the evidential extreme learning machine (EELM) is proposed in this paper, which embeds the ELM into the framework of evidential regression. The proposed algorithm not only regards the sample points as training data, but also takes the model information of the nearest neighbors into consideration, which can improve the accuracy of prediction or regression. Moreover, by integrating multiple ELM models, the stability of the prediction model is improved. The proposed EELM algorithm is employed in the proposed photovoltaic forecasting process based on similar days.

3.1. Evidential Extreme Learning Machine Algorithm

In this section, a novel regression algorithm named the evidential extreme learning machine is proposed, which can be considered an ensemble of ELM models under the framework of evidential theory. The basic concept of the proposed ensemble method is to build multiple ELM models and weigh them, with the weights obtained from the normalized mass function. The EELM algorithm is based on evidential regression with k-nearest neighbors, as shown in Figure 2.
For a testing point to be predicted, the predicted output is determined by the multiple ELM models of the k-nearest neighbors. For each neighbor, the distance between inputs, which can be regarded as the similarity, is converted into a mass through a discounting and combination process. Furthermore, instead of the original outputs of the nearest neighbor points, the predicted outputs are obtained from the neighbors' ELM models evaluated at the inputs of the point to be predicted. The general predicted value equals the expectation calculated from the mass function and the ELM prediction results. To build a suitable and accurate ELM model for each neighbor point, several data pre-processing methods can be employed, such as data clustering, which groups data with similar features for model training to ensure separate and accurate models for sample points in space. Another promising method is to reconstruct the database, through which a given database is divided into multiple sub-databases.
As the general predicted output is obtained from the outputs of the ELM models together with the mass functions, which serve as the output weights in the EELM model, the overall output can be expressed as a linear combination of the outputs of multiple ELM models:
$$Y = m_1 y_1 + m_2 y_2 + \cdots + m_K y_K + m_{\Omega} \bar{y}$$
where mi, i = 1, 2, ⋯, K, Ω is the belief assignment of each ELM model. To be more precise, the overall output can be calculated as:
$$\hat{Y}(x) = \sum_{k=1}^{K} m_{\tilde{y}}^*[x, TR](\{\tilde{y}_k\})\,\tilde{y}_k + m_{\tilde{y}}^*[x, TR](\Omega)\,\bar{y}$$
where K is the number of neighbors. y ˜ k is the estimated output with the input x through the ELM model of the k-th neighbor:
$$\tilde{y}_k = \sum_{j=1}^{L} \beta_j\, g(\omega_{E,j,k} \cdot x + b_{j,k})$$
where ωE,j,k and bj,k are the weights and bias of the j-th hidden layer node in the k-th ELM model, respectively. The forecasting error criterion of the ensemble model is:
$$CV_E = \sum_{i=1}^{N} \left(\hat{Y}(x_i) - Y_i\right)^2$$
To improve the forecasting accuracy, parameter γ can be optimized according to:
$$\hat{\gamma} = \arg\min_{\gamma} CV_E(\gamma)$$
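The combination step and the γ search above can be sketched as follows. This is a hedged illustration under stated assumptions: the neighbor distances d_k and the neighbor-model predictions ỹ_k are assumed precomputed, and γ is optimized by a simple log-spaced grid search over CV_E rather than whatever optimizer the authors used.

```python
import numpy as np

def eelm_combine(d, y_tilde, y_bar, gamma=1.0, alpha=0.95):
    """Ensemble output Y = sum_k m_k * y_tilde_k + m_Omega * y_bar."""
    phi = alpha * np.exp(-gamma * np.asarray(d) ** 2)   # discounted masses
    one_minus = 1.0 - phi
    total = np.prod(one_minus)
    m_single = phi * total / one_minus      # conjunctive rule, singleton masses
    m_omega = total                          # residual mass on Omega
    Z = m_single.sum() + m_omega             # normalization (conflict dropped)
    return (m_single @ np.asarray(y_tilde) + m_omega * y_bar) / Z

def optimize_gamma(D, Y_tilde, y_true, y_bar, grid=np.logspace(-3, 3, 25)):
    """gamma_hat = argmin_gamma sum_i (Y_hat(x_i) - Y_i)^2 over a log grid."""
    def cve(g):
        preds = [eelm_combine(d, yt, y_bar, gamma=g)
                 for d, yt in zip(D, Y_tilde)]
        return np.sum((np.asarray(preds) - np.asarray(y_true)) ** 2)
    return min(grid, key=cve)
```

Because the masses sum to one after normalization, the ensemble output is always a convex combination of the neighbor-model predictions and the midpoint ȳ.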
Thereby, the BetPm and the dynamic confidence interval $[y_*, y^*]$ are calculated according to the confidence level αpv:
$$BetP_m[x, TR](y) = \sum_{i=1}^{N} m_y^*[x, TR](\{y_i\}) + \frac{m_y^*[x, TR](\Omega)}{y_{sup} - y_{inf}}$$
$$CDF(y^*) = \frac{1 + \alpha_{pv}}{2}, \qquad CDF(y_*) = \frac{1 - \alpha_{pv}}{2}$$
The output weights of the ELM can be obtained from the desired output yi. When the desired output of each ELM is available, the output weights of the ELMs are trained independently:
$$\beta_k^* = H_k^{\dagger} Y_k^*$$
$$H_k = \begin{bmatrix} g(\omega_{E,1} \cdot x_{1,k} + b_1) & \cdots & g(\omega_{E,L} \cdot x_{1,k} + b_L) \\ \vdots & \ddots & \vdots \\ g(\omega_{E,1} \cdot x_{N_k,k} + b_1) & \cdots & g(\omega_{E,L} \cdot x_{N_k,k} + b_L) \end{bmatrix}_{N_k \times L}$$
$$Y_k^* = Y_k - m_{\tilde{y}}^*[x, TR](\Omega)\,\bar{y}$$
$$Y_k = [y_{1,k}, \ldots, y_{N_k,k}]$$
where Nk is the number of training samples in TR corresponding to the k-th ELM model. In the special situation where the desired outputs are unknown and the ELM models share the same coefficients β, the output weight matrix can be:
$$\beta^* = H^{\dagger} Y^*$$
$$H = \begin{bmatrix} \sum_{k=1}^{K} m_y^*[x,TR](\{\tilde{y}_{1,k}\})\, g(\omega_{E,1,k} \cdot x_1 + b_{1,k}) & \cdots & \sum_{k=1}^{K} m_y^*[x,TR](\{\tilde{y}_{1,k}\})\, g(\omega_{E,L,k} \cdot x_1 + b_{L,k}) \\ \vdots & \ddots & \vdots \\ \sum_{k=1}^{K} m_y^*[x,TR](\{\tilde{y}_{N,k}\})\, g(\omega_{E,1,k} \cdot x_N + b_{1,k}) & \cdots & \sum_{k=1}^{K} m_y^*[x,TR](\{\tilde{y}_{N,k}\})\, g(\omega_{E,L,k} \cdot x_N + b_{L,k}) \end{bmatrix}_{N \times L}$$
$$Y^* = Y - m_{\tilde{y}}^*[x, TR](\Omega)\,\bar{y}$$
The flowchart of the proposed EELM algorithm with certain ELM outputs is exhibited in Figure 3, and the main steps can be summarized as follows:
  • Step 1. Initialize the regression problem and algorithm parameters.
The input variables, output variables, training database, testing database, objective function and algorithm parameters are specified, including the number of hidden layer nodes L and the number of nearest neighbors K.
  • Step 2. Reconstruct the database for each ELM model.
The database, including inputs and corresponding outputs, is reconstructed and separated into several sub-databases according to specific regulations, and for each sub-database an ELM model is constructed.
  • Step 3. Initialize the weights and bias of input in ELM model.
For the ELM models, the weights and bias of the input are generated randomly, and the penalty factor is set as a constant value. Subsequently, an artificial bee colony (ABC) algorithm is employed in this model to optimize these parameters.
  • Step 4. Calculate the output weights of ELM.
Concerning the corresponding output of the ELM model, the output weights are achieved. For regression problems with a regularized term, the output weights can be calculated as:
$$\beta_k^* = \left(H_k^T H_k + \frac{I_L}{C}\right)^{-1} H_k^T Y_k$$
  • Step 5. Optimize the weights, bias and regularization coefficient.
Differing from the randomly assigned input weights and bias in a traditional ELM model, the weights, biases as well as the penalty factor are optimized by employing the ABC algorithm to enhance and ensure the accuracy of the forecasting algorithm [43]. Furthermore, if the end condition of the optimization process is satisfied, proceed to the following step; otherwise, return to Step 4 with updated parameters to continue execution.
  • Step 6. Discounting and combination of mass function.
Referring to the given metric ǁ·ǁ, k-nearest neighbors of the predicted point are searched. Afterwards, the mass function is obtained through discounting and combination by the conjunctive rule of combination.
  • Step 7. Optimize parameter γ.
As the most important parameter for regression, the parameter γ is optimized by using the leave-one-out method with an optimization objective of CVE. Unlike a traditional evidential regression algorithm, for a certain point to be predicted with an input of x, the corresponding predicted values of k-nearest neighbors are the outputs given by ELM models with the identical input x.
  • Step 8. Prediction process.
Regarding a testing point to be predicted, the output can be calculated by equations with the optimized weights and bias of ELMs as well as parameter γ of evidential regression. In accordance with the value of BetPm, the probabilistic forecasting results can be generated following Equations (30) and (31).

3.2. Photovoltaic Power Forecasting Based on Evidential Extreme Learning Machine

To guarantee economic and steady operation of power systems under growing PV penetration, precise algorithms for day-ahead PV power forecasting are essentially required. Accurate PV power forecasting is a complex issue on account of the fluctuating and volatile nature of weather. The prediction of PV power is a highly nonlinear regression problem, and the power generation depends mostly on the solar irradiance. The remaining potential influencing parameters, such as the atmospheric temperature, humidity, precipitation and atmospheric pressure, are also regarded as available inputs for PV power prediction. The day-ahead PV power forecasting is specified as:
$$y_{PV,t} = f(x_{pv,1}, x_{pv,2}, \ldots, x_{pv,n}), \quad t = 1, 2, \ldots, 24$$
where yPV,t is the photovoltaic generation at the t-th hour to be predicted; xpv,i is the corresponding influencing factors.
The proposed forecasting approach comprises five stages: data preprocessing and reconstruction; ELM model training and parameter optimization; mass function calculation and evidential parameter optimization; deterministic forecasting; and probabilistic forecasting. Figure 4 exhibits the detailed flowchart of the EELM algorithm, in which xPV represents available meteorological elements, while xPV,i and yPV,i indicate available meteorological elements and PV power of the ith similar days, respectively.
In the data preprocessing stage, the abnormal data are abandoned, and the missing parts are filled in. The optimal input vectors are arranged by Pearson correlation coefficient (PCC) [44], which is considered as a typical measurement for the interdependence of variables. The PCC between two vectors S and T can be mentioned as:
$$PCC_{ST} = \frac{\mathrm{cov}(S, T)}{\sigma_S \sigma_T} = \frac{n \sum_{i=1}^{n} s_i t_i - \left(\sum_{i=1}^{n} s_i\right)\left(\sum_{i=1}^{n} t_i\right)}{\sqrt{n \sum_{i=1}^{n} s_i^2 - \left(\sum_{i=1}^{n} s_i\right)^2}\, \sqrt{n \sum_{i=1}^{n} t_i^2 - \left(\sum_{i=1}^{n} t_i\right)^2}}$$
where cov(S, T) is the covariance between S and T, while σS and σT are the standard deviations of S and T, respectively. When the absolute value of the PCC is above 0.8, the two factors are considered to have a strong correlation. A moderate correlation is indicated by absolute values between 0.3 and 0.8. Consequently, the parameters with an absolute Pearson correlation coefficient under 0.3 are abandoned.
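This screening rule is straightforward to apply in code. The sketch below is illustrative (the feature names are hypothetical, not the paper's exact columns): it computes the PCC of each candidate series against PV power and keeps only those with |PCC| ≥ 0.3.

```python
import numpy as np

def select_inputs(features, power, threshold=0.3):
    """Keep candidate input series whose |PCC| with power meets the threshold.

    features: dict mapping a feature name to a 1-D array of observations.
    power:    1-D array of PV power, same length as each feature series.
    Returns a dict {name: pcc} of the retained features.
    """
    kept = {}
    for name, series in features.items():
        pcc = np.corrcoef(series, power)[0, 1]   # Pearson correlation coefficient
        if abs(pcc) >= threshold:
            kept[name] = pcc
    return kept
```

On the paper's data this filter would retain global irradiance, cloud amount, relative humidity and atmospheric temperature while discarding wind angle, precipitation and atmospheric pressure, matching the selection reported in Table 1.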
Since the power of similar days has an obvious influence on PV power forecasting, the PV power and meteorological parameters of similar days are added to the input variables at the database reconstruction stage. In this phase, the historical PV power and meteorological parameter series are decomposed into K sub-databases. For the k-th sub-database, the PV power generation and corresponding meteorological parameters of the k-th similar day of each sample are added to the input vectors. In general terms, the k-th database contains the PV power and related meteorological parameters of the samples, as well as the PV power and corresponding meteorological parameters of their k-th similar days.
During the training process, the EELM model is trained, and the hidden layer parameters are optimized. Specifically, an ELM model is built for every sub-database, and the weights and biases of the hidden layer are optimized by an ABC algorithm. Afterwards, a series of predicted PV power values is generated by the optimized ELM models, which can be regarded as the nearest neighbor outputs for the subsequent evidential regression process. The similarity of the sample and nearest neighbors is calculated by the Euclidean distance of the meteorological parameters, and a normalized belief assignment is obtained through discounting and combination. At the following step, a general PV power prediction is acquired from the ELM prediction results and the mass functions. By minimizing the forecasting error, the parameter γ is optimized. To emphasize, the basic concept of EELM in PV forecasting is that it replaces the power generation of similar days with the power predicted through the ELMs, calculated from the input parameters of the predicted samples and the parameters of similar days.
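The sub-database reconstruction described above can be sketched as follows. This is an assumption-laden illustration (Euclidean distance on daily weather vectors as the similarity measure, which is the choice stated in the text; the array layout is hypothetical): the k-th sub-database augments each sample's own weather with the weather and power of its k-th most similar day.

```python
import numpy as np

def build_subdatabase(W, P, k):
    """Build the k-th sub-database by appending k-th similar-day features.

    W: (n_days, n_weather) matrix of daily meteorological parameters.
    P: (n_days,) vector of daily PV power.
    Returns augmented inputs X of shape (n_days, 2*n_weather + 1) and targets P.
    """
    # Pairwise Euclidean distances between days in weather space
    d = np.linalg.norm(W[:, None, :] - W[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)                 # a day is not its own similar day
    order = np.argsort(d, axis=1)
    sim = order[:, k - 1]                       # index of each day's k-th similar day
    # Inputs: own weather + similar-day weather + similar-day power
    X = np.hstack([W, W[sim], P[sim, None]])
    return X, P
```

Training one ELM per sub-database k = 1, …, K then yields the K neighbor models whose outputs feed the evidential combination step.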
With the optimized parameters in EELM, a PV power forecasting framework is constructed. For a specific point to be predicted, the meteorological parameters are used to search for similar days, and an expectation of power forecasting can be calculated based on ELM outputs and mass function. In many cases, the forecasting outcomes are supposed to be in the form of an estimation interval. Differing from the roughly modeled normal distribution in previous studies, the EELM generates the confidence interval via the accumulation of probabilities for potential values.

4. Case Study

To verify the performance of the proposed EELM forecasting model, a series of experiments is carried out. The dataset in these experiments comes from IEEE DataPort [45]. The output power data of a rooftop PV plant located at the University of Macau with a 3 kW rated capacity, as well as the corresponding meteorological elements (i.e., solar irradiation, atmospheric pressure, atmospheric temperature, wind speed, relative humidity, precipitation and cloud amount), are utilized [45]. The above data are recorded at hourly intervals for the period from 1 January 2018 to 31 December 2018. The hourly power generation is exhibited in Figure 5. Apparently, PV power generation shows a certain degree of volatility, and the peak power generation level in summer is slightly lower than in other seasons in general.

4.1. Correlation between Photovoltaic Power Generation and Input Variables

For a PV power forecasting model, the predicting performance mainly depends on the selection of input variables, which is determined through a correlation calculation between input and output variables. Therefore, the relevancy of various meteorological factors, such as solar irradiation, wind speed, precipitation, temperature and relative humidity, to PV power generation is essential for PV power forecasting. Moreover, as similar days are inserted into the forecasting model, the correlations between the PV power of similar days and predicted days are analyzed simultaneously. The correlation analysis is presented via the PCC. The strongly related input variables are selected as input vectors. On the contrary, the weakly correlated input vectors are abandoned.
Figure 6a exhibits the comparison of solar irradiation and PV power generation on a typical day. It can be concluded that the PV power generation closely corresponds with solar irradiation. Figure 6b illustrates a visually strong correlation between PV power generation and global irradiance. Consequently, global irradiance is regarded as a necessary input factor for PV power forecasting.
Figure 7a highlights the curves of PV power generation and atmospheric temperature, and it is clear that these two factors almost match during the daylight period. Additionally, Figure 7b illustrates the relationship between PV power generation and atmospheric temperature. Clearly, the correlation is much weaker than that of global irradiance. When considering time periods with available sunlight, the PCC is 0.3584. In this case, the atmospheric temperature is regarded as a potential influencing factor for PV power.
The correlation between PV power of predicted days and similar days is described in Figure 8, and the trend of predicted PV power is highly consistent with the PV power of similar days. Apparently, the correlation is much stronger than that of the atmospheric temperature, and approximately the same as that of global irradiance.
Furthermore, meteorological parameters such as wind angle, precipitation, atmospheric pressure, relative humidity and cloud amount are also available for PV power forecasting. These parameters influence the prediction of PV power to varying degrees, and the corresponding PCC values are listed in Table 1. The cloud amount, relative humidity and atmospheric temperature have a moderate correlation with PV power generation, and solar irradiation has a remarkably high correlation value. Meanwhile, the remaining factors, i.e., wind angle, precipitation, atmospheric pressure, wind speed and wind direction, have weak correlations with PV power generation. Based on the above figures and discussion, the meteorological input vectors for PV power forecasting are set as global irradiance, relative humidity, cloud amount and atmospheric temperature. The meteorological vectors and the PV power of similar days are also appended to the input vectors.
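The PCC-based screening described above can be sketched as follows. This is a minimal illustration, not the authors' code; the 0.3 threshold and function name are our assumptions, chosen to mirror the paper's keep-moderate-and-strong, drop-weak rule.

```python
import numpy as np

def select_inputs(candidates, power, threshold=0.3):
    """Return (name, PCC) pairs whose |PCC| with PV power exceeds the threshold.

    candidates: dict mapping a variable name to its sample series.
    power: array of PV power samples aligned with the candidate series.
    The threshold of 0.3 is an assumed cut-off for "weak" correlation.
    """
    selected = []
    for name, series in candidates.items():
        pcc = np.corrcoef(series, power)[0, 1]  # Pearson correlation coefficient
        if abs(pcc) >= threshold:
            selected.append((name, round(pcc, 4)))
    return selected
```

With synthetic data, a variable tracking the power (e.g., irradiance) is kept, while an unrelated one (e.g., wind speed) is discarded.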

4.2. Preprocessing and Performance Criterion

Since the magnitudes of the various inputs and outputs vary significantly, a preprocessing procedure is essential for improving forecasting accuracy. To preprocess the input data, normalization is employed, which maps the original data from a relatively large range into [−1, 1]. The process formula is:
$$I_{\mathrm{Normal}} = \frac{2 \left( I_{\mathrm{actual}} - I_{\min} \right)}{I_{\max} - I_{\min}} - 1$$
where $I_{\mathrm{Normal}}$ is the normalized data; $I_{\mathrm{actual}}$ is the original input data; $I_{\min}$ and $I_{\max}$ are the minimum and maximum values of the input data, respectively.
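The normalization formula above can be sketched as a one-liner (a minimal illustration; the function name is ours):

```python
import numpy as np

def min_max_normalize(data):
    """Map input data into [-1, 1] per the min-max normalization formula above."""
    data = np.asarray(data, dtype=float)
    i_min, i_max = data.min(), data.max()
    return 2 * (data - i_min) / (i_max - i_min) - 1
```

For example, `min_max_normalize([0, 5, 10])` maps the minimum to −1, the midpoint to 0 and the maximum to 1.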
To evaluate the performance of the proposed EELM forecasting algorithm, evaluation metrics including the mean absolute error (MAE), the mean square error (MSE) and the normalized root mean square error (nRMSE) are calculated. The MAE is defined as:
$$\mathrm{MAE} = \frac{1}{N_p} \sum_{i=1}^{N_p} \left| P_{f,i} - P_{a,i} \right|$$
where $N_p$ represents the number of samples, and $P_{f,i}$ and $P_{a,i}$ denote the forecasted and actual power of the $i$th sample, respectively.
The MSE can be defined as:
$$\mathrm{MSE} = \frac{1}{N_p} \sum_{i=1}^{N_p} \left( P_{f,i} - P_{a,i} \right)^2$$
The nRMSE can be indicated as:
$$\mathrm{nRMSE} = \frac{\sqrt{\frac{1}{N_p} \sum_{i=1}^{N_p} \left( P_{f,i} - P_{a,i} \right)^2}}{\bar{P}_a} \times 100\%$$
where $\bar{P}_a$ is the mean value of the PV power.
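The three deterministic metrics can be sketched directly from their definitions (a minimal illustration; function names are ours):

```python
import numpy as np

def mae(p_f, p_a):
    """Mean absolute error between forecasted and actual power."""
    return np.mean(np.abs(p_f - p_a))

def mse(p_f, p_a):
    """Mean square error between forecasted and actual power."""
    return np.mean((p_f - p_a) ** 2)

def nrmse(p_f, p_a):
    """Root mean square error normalized by the mean actual power, in percent."""
    return np.sqrt(mse(p_f, p_a)) / np.mean(p_a) * 100
```

For instance, with one forecast off by 1 kW out of two samples of 2 kW actual power, MAE = 0.5, MSE = 0.5 and nRMSE ≈ 35.36%.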
To measure the reliability of the prediction interval generated by the proposed EELM algorithm, a series of metrics are adopted [46], including the prediction interval coverage probability (PICP):
$$\mathrm{PICP} = \left( \frac{1}{N_p} \sum_{i=1}^{N_p} C_i \right) \times 100\%$$
$$C_i = \begin{cases} 1, & y_i \in [L_i, U_i] \\ 0, & y_i \notin [L_i, U_i] \end{cases}$$
where $L_i$ and $U_i$ are the lower and upper bounds of the confidence interval. The nominal confidence level corresponding to the confidence interval is the prediction interval nominal confidence (PINC).
For confidence intervals under the same confidence level, narrower intervals are more informative than wide ones. The prediction interval normalized average width (PINAW) is used to measure this narrowness.
$$\mathrm{PINAW} = \frac{1}{N_p A} \sum_{i=1}^{N_p} \left( U_i - L_i \right)$$
where A is the range of predicted target values.
To integrate the narrowness of the interval and the sample coverage, the coverage width criterion (CWC) is employed.
$$\mathrm{CWC} = \mathrm{PINAW} \left( 1 + \gamma_C \, e^{-\eta \left( \mathrm{PICP} - \mathrm{PINC} \right)} \right)$$
$$\gamma_C = \begin{cases} 0, & \mathrm{PICP} \ge \mathrm{PINC} \\ 1, & \mathrm{PICP} < \mathrm{PINC} \end{cases}$$
where η is the penalty parameter. When PICP fails to meet PINC, the difference between PICP and PINC will be amplified.
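The interval metrics can be sketched as follows. This is our own minimal illustration: PICP and PINC are handled as fractions (not percentages) so that the exponential penalty stays well scaled with η = 15, which is an assumption about the intended units.

```python
import numpy as np

def picp(y, lower, upper):
    """Coverage probability: fraction of actual points inside [lower, upper]."""
    return np.mean((y >= lower) & (y <= upper))

def pinaw(y, lower, upper):
    """Average interval width normalized by the range A of the target values."""
    a = y.max() - y.min()
    return np.mean(upper - lower) / a

def cwc(y, lower, upper, pinc, eta=15.0):
    """Coverage width criterion: PINAW with an exponential penalty applied
    only when coverage falls short of the nominal confidence PINC."""
    p = picp(y, lower, upper)
    gamma = 1.0 if p < pinc else 0.0
    return pinaw(y, lower, upper) * (1.0 + gamma * np.exp(-eta * (p - pinc)))
```

When coverage meets the nominal level, the penalty term vanishes and CWC reduces to PINAW; when it falls short, the width is inflated exponentially.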

5. Results and Discussion

The experiments are carried out on a personal computer with 4.00 GB of RAM. In total, 80% of the samples are randomly assigned to the training dataset, while the remaining 20% are allocated to the testing dataset. As the overall number of samples is 362, the training dataset contains 290 samples, and the number of testing samples is 72. The hyperparameters are set through trial and error, and the results are presented and discussed in this section. The number of nearest neighbors is set as 20, and the number of hidden layer nodes is set as 25. For the optimization of the hidden layer weights and biases, the ABC algorithm is employed. The number of food sources (SN) of the ABC algorithm should lie in the range 50–100 [47], and a constant value of 50 is used. The population size (NP) is defined as 2 × SN, and the value of the limit parameter is set as 100. The penalty factor η is set as 15 [48].
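The random 80/20 split described above can be sketched as follows (a minimal illustration; the function name and seed are ours). With 362 samples, rounding 80% gives the 290/72 split used in the experiments.

```python
import numpy as np

def split_dataset(X, y, train_ratio=0.8, seed=0):
    """Randomly assign train_ratio of the samples to training, the rest to testing."""
    n = len(y)
    idx = np.random.default_rng(seed).permutation(n)  # random shuffle of indices
    n_train = int(round(n * train_ratio))
    train, test = idx[:n_train], idx[n_train:]
    return X[train], y[train], X[test], y[test]
```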

5.1. Deterministic Performance Evaluation

To verify the forecasting performance of the proposed forecasting model, a seasonal case is carried out to exhibit the actual PV power generation and the predicted results using the proposed day-ahead forecasting model. Figure 9 demonstrates the forecasting results of typical days for various seasons. The correlation between the actual measurements and those obtained through the proposed EELM algorithm for testing samples is presented in Figure 10.
Generally, it is observed that the PV power predicted by the proposed EELM model matches the actual PV output curve closely. Furthermore, although the actual PV power curves show dramatic volatility in summer and autumn, the EELM model still succeeds in predicting the PV power generation. Nevertheless, there are a few time points at which the predicted PV power fails to accurately align with the actual PV power. According to the correlation distribution of actual and predicted power for the testing samples, the PV power predicted by the proposed EELM model is highly consistent with the actual PV power curves.
The MAE values for various periods are demonstrated in Figure 11a, and the trend of the MAE within a day approximately matches the curve of PV power generation, peaking between 10 and 15 o’clock. A more detailed view of the forecasting performance for testing samples in the 11–12 o’clock period is exhibited in Figure 11b. It is evident that, although some errors occur, the results predicted by the proposed EELM method differ only slightly from the actual results under most circumstances, while the predicted value differs significantly from the actual value in only a few cases.
In summary, the proposed EELM algorithm and multi-model PV forecasting framework have high forecasting capacities and are feasible in PV power forecasting problems.

5.2. Comparison of Various Regression Algorithms

To verify the forecasting performance of the proposed EELM algorithm, comparison cases adopting the SVM, the multi-layer perceptron (MLP), evidential regression and the ELM are conducted. The hyperparameters of the proposed EELM algorithm, namely the number of neighbors K and the number of hidden layer nodes L, are chosen as 20 and 25, respectively. For a fair comparison, the parameter K of evidential regression is set as 25, and the parameter L of the ELM is set as 25. The MLP, a typical ANN used for comparison, requires a carefully chosen learning rate. Through trial and error, the feasible learning-rate range for the above-mentioned day-ahead PV forecasting problem is [0.0001, 0.005], and a value of 0.001 is employed. The number of hidden layer nodes of the MLP is 25. The log-sigmoid function is employed as the activation function in the MLP, ELM and EELM. The penalty factor of the SVM is set as 100, and the Gaussian function is applied as the kernel function.
Figure 12 graphically demonstrates the forecasting performance of the various algorithms. The inputs of the comparison algorithms are the meteorological parameters of the forecasted day, as well as the meteorological parameters and PV power generation of the most similar days, which are chosen by the Euclidean distance of meteorological parameters. For the evidential regression algorithm, only the meteorological parameters of the forecasted day are adopted.
According to the results of the five forecast models in distinct seasons, the proposed algorithm matches the PV power most closely and produces the most confident predictions, while the remaining algorithms deviate to varying degrees during peak PV generation periods. Furthermore, the SVM algorithm shows the largest deviation in general. In addition, the prediction results are worst in winter when compared among seasons. This weaker prediction accuracy might be due to the multiple weather types in winter, which introduce more uncertainty into the forecast results.
Table 2 shows the MAE, MSE and nRMSE results for the five forecast models. Compared with these typical and commonly used forecast models, the EELM algorithm has the smallest MAE, MSE and nRMSE. It is worth mentioning that the MLP, ELM and evidential regression algorithms have similar error levels, while the proposed EELM algorithm achieves a 15.45% lower nRMSE than the original ELM. To conclude, the forecasting performance of PV power is effectively improved with the proposed EELM model.

5.3. Tuning of Parameters

For the ELM, the suitable number of hidden nodes L is mostly pre-determined by a trial-and-error method. Moreover, the number of neighbors K must be pre-determined in the evidential regression algorithm. To visualize the impact of these two coefficients on the PV power prediction, a series of experiments concerning PV forecasting for the 11–12 h period are conducted, taking different values for each of these two hyperparameters. The simulation results for coefficients L and K are shown in Figure 13.
The generalization performance of the EELM algorithm is quite stable within a wide range of hidden node numbers L. The MAE of PV power decreases gradually as the value of L increases and levels off when L reaches 20. As for the hyperparameter K, the generalization performance of EELM is also relatively stable regardless of the number of neighbors, and the preferable parameter range is between 15 and 25. However, the generalization performance deteriorates when the number of hidden layer nodes is too low or too high. It can be calculated that the MAE lies in the range [0.1739, 0.1907] and the nRMSE lies between 12.87% and 14.11% when L and K take values other than 1. From the above graph and data, it can be concluded that the proposed EELM algorithm exhibits considerable predictive robustness irrespective of the hyperparameter values, which confirms the stability of the proposed algorithm in fulfilling the PV power forecasting task.
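A trial-and-error sweep of this kind can be sketched generically. Note this is a hypothetical illustration using a simple k-nearest-neighbor mean predictor as a stand-in model, not the paper's EELM; the function names and ranges are ours.

```python
import numpy as np

def knn_mean_predict(X_tr, y_tr, X_te, k):
    """Stand-in local model: mean target of the k nearest training neighbors."""
    preds = np.empty(len(X_te))
    for i, x in enumerate(X_te):
        dist = np.linalg.norm(X_tr - x, axis=1)   # Euclidean distance to training set
        preds[i] = y_tr[np.argsort(dist)[:k]].mean()
    return preds

def sweep_k(X_tr, y_tr, X_te, y_te, ks):
    """Trial-and-error sweep over the neighbor count K, reporting the MAE."""
    return {k: float(np.mean(np.abs(knn_mean_predict(X_tr, y_tr, X_te, k) - y_te)))
            for k in ks}
```

In practice, the same loop would wrap the EELM training routine and record the MAE surface over both L and K, as in Figure 13.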

5.4. Probabilistic Performance

An accurate analysis of PV power forecasting uncertainty is essential for the stable dispatch of the power system, as it allows the spinning reserve capacity to be reduced. Through the proposed EELM algorithm, the confidence interval can be calculated. Differing from a rough uncertainty distribution such as the normal distribution, the forecasting error distribution of the proposed method depends on the ELM prediction results and the belief assignments. The EELM probabilistic forecasting method, as a nonparametric method, calculates the confidence intervals of the forecasting error without assuming a distribution form, which provides better alignment with actual situations. To verify the efficiency of probabilistic PV power forecasting by the EELM algorithm, several experiments are performed in this section. One is the probabilistic PV power forecasting of the 11–12 o’clock period for testing samples, while the other demonstrates the probabilistic day-ahead PV power forecasting performance for typical days. The results are depicted in Figure 14, and confidence levels of 50%, 80%, 90% and 99% are applied.
Apparently, Figure 14 shows that the majority of actual PV power points fall within the 80% confidence interval, and all the actual PV power points fall within the 99% confidence interval. Differing from a pre-defined distribution form, the bandwidth of the confidence interval fluctuates significantly. The bandwidth is associated with the distribution of PV power generation under different meteorological parameters, and this type of confidence interval is more consistent with actual conditions than a fixed distribution form. As shown in Figure 14b, the bandwidth of the confidence interval varies between time periods. From a day-wide perspective, the confidence interval for PV power is wider during peak hours and narrower during the valley period. In general, the proposed EELM method shows high reliability in probabilistic PV power forecasting.
To evaluate the probabilistic prediction results of the proposed EELM algorithm, a normal distribution N(μ, σ2) is adopted as a comparison method, where μ is the predicted value of PV power and σ is the corresponding standard deviation. The fluctuation of PV power varies with the seasons, and the standard deviations are set as in Table 3 [49]. The confidence intervals of this normal distribution at the 50%, 80%, 90% and 99% confidence levels are shown in Figure 15. Figure 15a exhibits the confidence interval of the 11–12 h period, and Figure 15b shows the confidence interval of the normal distribution for a typical day in early March.
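The normal-distribution baseline intervals can be generated with the standard library (a minimal sketch; μ and σ are taken per sample from the point forecast and Table 3):

```python
from statistics import NormalDist

def normal_interval(mu, sigma, confidence):
    """Symmetric confidence interval of N(mu, sigma^2) at the given level.

    confidence is a fraction, e.g. 0.90 for the 90% interval.
    """
    z = NormalDist().inv_cdf(0.5 + confidence / 2.0)  # two-sided critical value
    return mu - z * sigma, mu + z * sigma
```

For example, at the 90% level the interval is μ ± 1.645σ, and at 99% it widens to μ ± 2.576σ, independent of the actual error distribution, which is exactly the rigidity the EELM intervals avoid.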
According to Figure 15, the confidence intervals of PV power under the normal distribution differ significantly from the distribution of the actual PV power. Based on the confidence intervals at the 11–12 h period, it is clear that the width of the confidence intervals is excessively narrow, and many actual PV power points still fall outside the intervals, even at the 99% confidence level. The confidence interval for a typical day in March shows that the confidence interval width of the normal distribution is wider than that of the EELM algorithm. The uncertainty prediction evaluation indicators for the same typical day and the confidence interval evaluation indicators for the 11–12 h period of different sample points are shown in Table 4 and Table 5.
It is observable from the tables that the PICP values of the normal distribution for the 11–12 h period are generally smaller than PINC, which indicates that the confidence levels are not satisfied and the confidence intervals are unreliable. Although these confidence intervals have smaller widths than those of EELM, the lower PICP values lead to higher CWC penalties, and the intervals of the normal distribution are basically unusable. In contrast, the confidence intervals obtained by the EELM algorithm have a slight defect only at 90% PINC, and the intervals are much more reliable than those of the normal distribution. For a typical day in March, all of the PICP values of the EELM algorithm meet the PINC requirement, while the normal distribution shows an 8.18% shortfall at 90% PINC. In terms of interval width, the confidence interval widths of the EELM algorithm are smaller than those of the normal distribution, except for the case of 99% PINC. In general, the EELM algorithm is more reliable than the normal distribution in probabilistic forecasting.

6. Conclusions

In this paper, a novel evidential extreme learning machine (EELM) algorithm is proposed to improve the accuracy of day-ahead PV power forecasting. The proposed EELM is a multi-model regression algorithm that builds multiple ELM models, whose weights and biases are optimized by the ABC algorithm. The k-nearest neighbors of the predicted point are searched, and the models of the neighbors are employed to acquire the forecasting results provided by the various neighbors. Subsequently, an evidential k-nearest neighbor regression rule is employed to define the relationship between the predicted point and each local neighbor, and the separate ELM models are weighted by similarity to generate the EELM forecasting framework. Furthermore, the EELM is applied to day-ahead deterministic and probabilistic power forecasting of the PV system, in which the k-nearest neighbors of the predicted days, found based on meteorological factors, are treated as k similar days, and the ELM models of the neighbors are obtained through dataset reconstruction and ELM model training. The performance of the proposed EELM algorithm is verified using a real-world dataset of a PV plant in Macau, and the results indicate that the predicted PV power closely matches the actual power curves. Notably, the proposed algorithm achieves a 15.45% lower nRMSE than the original ELM algorithm, and it is also efficient in probabilistic forecasting. Overall, the in-depth numerical case studies confirm the convincing performance of the proposed forecasting algorithm. Therefore, the PV forecasting model based on the proposed EELM algorithm has considerable potential for practical application in PV power forecasting.

Author Contributions

Conceptualization, M.W.; methodology, M.W.; validation, M.W., T.Z. and P.W.; formal analysis, M.W.; data curation, M.W.; writing—original draft preparation, M.W.; writing—review and editing, M.W.; visualization, M.W.; funding acquisition, P.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 51976032.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (51976032). We are grateful to the editors and reviewers for the significant contributions regarding our manuscript.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Abbreviations

ABC: artificial bee colony
ANN: artificial neural network
BBA: basic belief assignment
BP: back-propagation
CWC: coverage width criterion
EELM: evidential extreme learning machine
ELM: extreme learning machine
MAE: mean absolute error
MLP: multilayer perceptron
MP: Moore–Penrose
MSE: mean square error
nRMSE: normalized root mean square error
PCC: Pearson correlation coefficient
PICP: prediction interval coverage probability
PINAW: prediction interval normalized average width
PINC: prediction interval nominal confidence
PV: photovoltaic
RELM: regularized extreme learning machine
SLFN: single-hidden-layer feedforward neural network
SVM: support vector machine

References

  1. Sobri, S.; Koohi-Kamali, S.; Rahim, N.A. Solar photovoltaic generation forecasting methods: A review. Energy Convers. Manag. 2018, 156, 459–497. [Google Scholar] [CrossRef]
  2. Wang, M.; Zhang, T.; Wang, P.; Chen, X. An improved harmony search algorithm for solving day-ahead dispatch optimization problems of integrated energy systems considering time-series constraints. Energy Build. 2020, 229, 110477. [Google Scholar] [CrossRef]
  3. Zang, H.; Cheng, L.; Ding, T.; Cheung, K.; Wei, Z.; Sun, G. Day-ahead photovoltaic power forecasting approach based on deep convolutional neural networks and meta learning. Int. J. Electr. Power Energy Syst. 2019, 118, 105790. [Google Scholar] [CrossRef]
  4. Huang, C.; Cao, L.; Peng, N.; Li, S.; Zhang, J.; Wang, L.; Luo, X.; Wang, J.-H. Day-ahead forecasting of hourly photovoltaic power based on robust multilayer perception. Sustainability 2018, 10, 4863. [Google Scholar] [CrossRef] [Green Version]
  5. Al-Waeli, A.H.; Kazem, H.A.; Chaichan, M.T.; Sopian, K. Photovoltaic/Thermal (PV/T) Systems: Principles, Design, and Applications; Springer: Berlin/Heidelberg, Germany, 2019. [Google Scholar]
  6. Gu, B.; Shen, H.; Lei, X.; Hu, H.; Liu, X. Forecasting and uncertainty analysis of day-ahead photovoltaic power using a novel forecasting method. Appl. Energy 2021, 299, 117291. [Google Scholar] [CrossRef]
  7. Rakhshani, E.; Rouzbehi, K.; Sánchez, A.J.; Tobar, A.C.; Pouresmaeil, E. Integration of Large Scale PV-Based Generation into Power Systems: A Survey. Energies 2019, 12, 1425. [Google Scholar] [CrossRef] [Green Version]
  8. Zhou, Y.; Zhou, N.; Gong, L.; Jiang, M. Prediction of photovoltaic power output based on similar day analysis, genetic algorithm and extreme learning machine. Energy 2020, 204, 117894. [Google Scholar] [CrossRef]
  9. Das, U.K.; Tey, K.S.; Seyedmahmoudian, M.; Mekhilef, S.; Idris, M.Y.I.; Van Deventer, W.; Horan, B.; Stojcevski, A. Forecasting of photovoltaic power generation and model optimization: A review. Renew. Sustain. Energy Rev. 2018, 81, 912–928. [Google Scholar] [CrossRef]
  10. Wang, H.; Liu, Y.; Zhou, B.; Li, C.; Cao, G.; Voropai, N.; Barakhtenko, E. Taxonomy research of artificial intelligence for deterministic solar power forecasting. Energy Convers. Manag. 2020, 214, 112909. [Google Scholar] [CrossRef]
  11. Konstantinou, M.; Peratikou, S.; Charalambides, A. Solar Photovoltaic Forecasting of Power Output Using LSTM Networks. Atmosphere 2021, 12, 124. [Google Scholar] [CrossRef]
  12. Nespoli, A.; Ogliari, E.; Leva, S.; Massi Pavan, A.; Mellit, A.; Lughi, V.; Dolara, A. Day-ahead photovoltaic forecasting: A comparison of the most effective techniques. Energies 2019, 12, 1621. [Google Scholar] [CrossRef] [Green Version]
  13. Mellit, A.; Pavan, A.M.; Ogliari, E.; Leva, S.; Lughi, V. Advanced methods for photovoltaic output power forecasting: A review. Appl. Sci. 2020, 10, 487. [Google Scholar] [CrossRef] [Green Version]
  14. Ogliari, E.; Dolara, A.; Manzolini, G.; Leva, S. Physical and hybrid methods comparison for the day ahead PV output power forecast. Renew. Energy 2017, 113, 11–21. [Google Scholar] [CrossRef]
  15. Wang, H.; Yi, H.; Peng, J.; Wang, G.; Liu, Y.; Jiang, H.; Liu, W. Deterministic and probabilistic forecasting of photovoltaic power based on deep convolutional neural network. Energy Convers. Manag. 2017, 153, 409–422. [Google Scholar] [CrossRef]
  16. Mayer, M.J.; Gróf, G. Extensive comparison of physical models for photovoltaic power forecasting. Appl. Energy 2020, 283, 116239. [Google Scholar] [CrossRef]
  17. Ahmed, R.; Sreeram, V.; Mishra, Y.; Arif, M. A review and evaluation of the state-of-the-art in PV solar power forecasting: Techniques and optimization. Renew. Sustain. Energy Rev. 2020, 124, 109792. [Google Scholar] [CrossRef]
  18. Vagropoulos, S.I.; Chouliaras, G.I.; Kardakos, E.G.; Simoglou, C.K.; Bakirtzis, A.G. Comparison of SARIMAX, SARIMA, modified SARIMA and ANN-based models for short-term PV generation forecasting. In Proceedings of the 2016 IEEE International Energy Conference (ENERGYCON), Leuven, Belgium, 4–8 April 2016; pp. 1–6. [Google Scholar] [CrossRef]
  19. Wang, F.; Zhen, Z.; Wang, B.; Mi, Z. Comparative study on KNN and SVM based weather classification models for day ahead short term solar pv power forecasting. Appl. Sci. 2018, 8, 28. [Google Scholar] [CrossRef] [Green Version]
  20. Eseye, A.T.; Zhang, J.; Zheng, D. Short-term photovoltaic solar power forecasting using a hybrid Wavelet-PSO-SVM model based on SCADA and Meteorological information. Renew. Energy 2018, 118, 357–367. [Google Scholar] [CrossRef]
  21. Sanjari, M.J.; Gooi, H.B. Probabilistic forecast of PV power generation based on higher order markov chain. IEEE Trans. Power Syst. 2016, 32, 2942–2952. [Google Scholar] [CrossRef]
  22. Lee, H.; Kim, N.W.; Lee, J.G.; Lee, B.T. Kalman filter-based adaptive forecasting of PV power output. In Proceedings of the 2020 International Conference on Information and Communication Technology Convergence (ICTC), Jeju Island, Korea, 21–23 October 2020; pp. 1342–1347. [Google Scholar]
  23. Zamo, M.; Mestre, O.; Arbogast, P.; Pannekoucke, O. A benchmark of statistical regression methods for short-term forecasting of photovoltaic electricity production, Part I: Deterministic forecast of hourly production. Sol. Energy 2014, 105, 792–803. [Google Scholar] [CrossRef]
  24. Yadav, H.K.; Pal, Y.; Tripathi, M. Photovoltaic power forecasting methods in smart power grid. In Proceedings of the 2015 Annual IEEE India Conference (INDICON), New Delhi, India, 17–20 December 2015; pp. 1–6. [Google Scholar] [CrossRef]
  25. Liu, L.; Zhan, M.; Bai, Y. A recursive ensemble model for forecasting the power output of photovoltaic systems. Sol. Energy 2019, 189, 291–298. [Google Scholar] [CrossRef]
  26. Leva, S.; Dolara, A.; Grimaccia, F.; Mussetta, M.; Ogliari, E. Analysis and validation of 24 hours ahead neural network forecasting of photovoltaic output power. Math. Comput. Simul. 2017, 131, 88–100. [Google Scholar] [CrossRef] [Green Version]
  27. Gao, M.; Li, J.; Hong, F.; Long, D. Day-ahead power forecasting in a large-scale photovoltaic plant based on weather classification using LSTM. Energy 2019, 187, 115838. [Google Scholar] [CrossRef]
  28. Huang, G.-B.; Zhu, Q.-Y.; Siew, C.-K. Extreme learning machine: Theory and applications. Neurocomputing 2006, 70, 489–501. [Google Scholar] [CrossRef]
  29. Liu, Z.F.; Li, L.L.; Tseng, M.L.; Lim, M.K. Prediction short-term photovoltaic power using improved chicken swarm optimizer-extreme learning machine model. J. Clean. Prod. 2020, 248, 119272. [Google Scholar] [CrossRef]
  30. Behera, M.K.; Majumder, I.; Nayak, N. Solar photovoltaic power forecasting using optimized modified extreme learning machine technique. Eng. Sci. Technol. Int. J. 2018, 21, 428–438. [Google Scholar]
  31. Luo, P.; Zhu, S.; Han, L.; Chen, Q. Short-term photovoltaic generation forecasting based on similar day selection and extreme learning machine. In Proceedings of the 2017 IEEE Power & Energy Society General Meeting, Chicago, IL, USA, 16–20 July 2017; pp. 1–5. [Google Scholar] [CrossRef]
  32. Tan, Q.; Mei, S.; Dai, M.; Zhou, L.; Wei, Y.; Ju, L. A multi-objective optimization dispatching and adaptability analysis model for wind-PV-thermal-coordinated operations considering comprehensive forecasting error distribution. J. Clean. Prod. 2020, 256, 120407. [Google Scholar] [CrossRef]
  33. Kaplanis, S.; Kaplani, E. A model to predict expected mean and stochastic hourly global solar radiation I(h;nj) values. Renew. Energy 2007, 32, 1414–1425. [Google Scholar] [CrossRef]
  34. Han, Y.; Wang, N.; Ma, M.; Zhou, H.; Dai, S.; Zhu, H. A PV power interval forecasting based on seasonal model and non-parametric estimation algorithm. Sol. Energy 2019, 184, 515–526. [Google Scholar] [CrossRef]
  35. Deng, W.; Zheng, Q.; Chen, L. Regularized extreme learning machine. In Proceedings of the 2009 IEEE Symposium on Computational Intelligence and Data Mining, Nashville, TN, USA, 30 March–2 April 2009; pp. 389–395. [Google Scholar]
  36. Su, Z.-G.; Wang, Y.-F.; Wang, P.-H. Parametric regression analysis of imprecise and uncertain data in the fuzzy belief function framework. Int. J. Approx. Reason. 2013, 54, 1217–1242. [Google Scholar] [CrossRef]
  37. Dempster, A.P. Upper and lower probabilities induced by a multivalued mapping. In Classic Works of the Dempster-Shafer Theory of Belief Function; Springer: Berlin/Heidelberg, Germany, 2008; pp. 57–72. [Google Scholar]
  38. Shafer, G. A Mathematical Theory of Evidence; Princeton University Press: Princeton, NJ, USA, 1976. [Google Scholar]
  39. Petit-Renaud, S.; Denœux, T. Nonparametric regression analysis of uncertain and imprecise data using belief functions. Int. J. Approx. Reason. 2004, 35, 1–28. [Google Scholar] [CrossRef] [Green Version]
  40. Su, Z.-G.; Wang, P.-H.; Shen, J.; Yu, X.-J.; Lv, Z.-Z.; Lu, L. Multi-model strategy based evidential soft sensor model for predicting evaluation of variables with uncertainty. Appl. Soft Comput. 2011, 11, 2595–2610. [Google Scholar] [CrossRef]
  41. Wang, P.-H.; Su, Z.-G. Research on Theory of Belief Function and Modelling for Cognizing Unmeasured Parameters in Power System; Southeast University: Nanjing, China, 2010. [Google Scholar]
  42. Zhao, Y. Research on Evidence Research Modelling and Its Application of Thermal Objects; Southeast University: Nanjing, China, 2018. [Google Scholar]
  43. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471. [Google Scholar] [CrossRef]
  44. Cohen, I.; Huang, Y.; Chen, J.; Benesty, J. Pearson correlation coefficient. In Noise Reduction in Speech Processing; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  45. Qiu, Y.; Zhou, Z. 2018–2019 PV Generation of a Rooftop Plant in the University of Macau and Weather Report. IEEEDataPort. Available online: https://ieee-dataport.org/documents/2018-2019-pv-generation-rooftop-plant-university-macau-and-weather-report (accessed on 23 November 2021).
  46. Khosravi, A.; Nahavandi, S.; Creighton, D.; Atiya, A.F. Comprehensive Review of Neural Network-Based Prediction Intervals and New Advances. IEEE Trans. Neural Netw. 2011, 22, 1341–1356. [Google Scholar] [CrossRef] [PubMed]
  47. Aderhold, A.; Diwold, K.; Scheidler, K.; Middendorf, M. Artificial bee colony optimization: A new selection scheme and its performance. In Nature Inspired Cooperative Strategies for Optimization (NICSO 2010); Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  48. Li, C.; Tang, G.; Xue, X.; Chen, X.; Wang, R.; Zhang, C. Deep interval prediction model with gradient descend optimization method for short-term wind power prediction. arXiv 2019, arXiv:1911.08160. [Google Scholar]
  49. Zhou, Z.; Zhang, J.; Liu, P.; Li, Z.; Georgiadis, M.C.; Pistikopoulos, E.N. A two-stage stochastic programming model for the optimal design of distributed energy systems. Appl. Energy 2012, 103, 135–144. [Google Scholar] [CrossRef]
Figure 1. Topological structure of ELM.
Figure 2. Schematic diagram of EELM.
Figure 3. Algorithm process of EELM.
Figure 4. Schematic diagram of EELM in PV power forecasting.
Figure 5. Hourly PV power distribution in 2018.
Figure 6. PV power and global irradiance. (a) PV power and global irradiance curves of a typical day; (b) correlation between global irradiance and PV power.
Figure 7. PV power and atmospheric temperature curves. (a) PV power and atmospheric temperature curves of a typical day; (b) correlation between atmospheric temperature and PV power.
Figure 8. PV power and PV power of similar days curves. (a) PV power and PV power of similar days curves of a typical day; (b) correlation between PV power of similar days and PV power.
Figure 9. Actual and predicted PV power for various seasons: (a) spring; (b) summer; (c) autumn; (d) winter.
Figure 10. Correlation between actual and predicted PV power.
Figure 11. Actual and predicted PV power. (a) MAE of predicted PV power of a typical day; (b) predicted and actual PV power for time period of 11–12 o’clock.
Figure 12. Actual and predicted PV power of various forecasting models for various seasons: (a) spring; (b) summer; (c) autumn; (d) winter.
Figure 13. MAE of PV power prediction with different values of hyperparameters.
Figure 14. Probabilistic forecasting results of EELM. (a) Probabilistic forecasting results for time period of 11–12 o’clock; (b) probabilistic forecasting results of a typical day.
Energies 15 03882 g014
Figure 15. Confidence intervals under normal distribution. (a) Confidence intervals for time period of 11–12 h; (b) confidence intervals of a typical day in March.
Table 1. PCC values between influencing factors and PV power.
| Influencing Factor | PCC Value | Influencing Factor | PCC Value |
|---|---|---|---|
| Solar irradiation | 0.9434 | Wind speed | −0.0046 |
| Atmospheric temperature | 0.3584 | Wind direction | −0.0536 |
| PV power of similar days | 0.9389 | Wind angle | −0.2623 |
| Relative humidity | −0.5600 | Precipitation | −0.1182 |
| Cloud amount | −0.4851 | Atmospheric pressure | 0.0051 |
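The PCC values in Table 1 follow the standard Pearson correlation definition. A minimal sketch of the computation (the irradiance/power sample values below are illustrative, not data from the paper):

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative hourly samples: global irradiance (W/m^2) vs. PV power (kW)
irr = [0, 150, 420, 680, 890, 700, 350, 60]
pwr = [0.0, 0.9, 2.6, 4.3, 5.6, 4.5, 2.1, 0.4]
print(round(pearson(irr, pwr), 4))  # close to 1, as for solar irradiation in Table 1
```

Strongly correlated factors (solar irradiation, PV power of similar days) are the natural candidates for model inputs; near-zero factors such as wind speed carry little predictive information.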
Table 2. Forecasting error of various forecasting algorithms.
| Forecasting Model | EELM | MLP | ELM | Evidential Regression | SVM |
|---|---|---|---|---|---|
| MAE | 0.0182 | 0.0572 | 0.0573 | 0.0602 | 0.0900 |
| MSE | 0.0049 | 0.0167 | 0.0166 | 0.0183 | 0.0386 |
| nRMSE (%) | 18.3893 | 33.8667 | 33.8405 | 35.4932 | 51.5477 |
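The deterministic error metrics in Table 2 follow standard definitions. A minimal sketch of how they could be computed; note that normalizing the RMSE by the plant's rated capacity is an assumption here (normalization by the mean or the observed range is also common, and the paper's exact choice should be checked against its equations):

```python
from math import sqrt

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mse(y_true, y_pred):
    """Mean squared error."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def nrmse(y_true, y_pred, capacity):
    """RMSE normalized by rated capacity, in percent (normalization choice assumed)."""
    return 100.0 * sqrt(mse(y_true, y_pred)) / capacity

# Illustrative values only
actual    = [0.0, 1.2, 3.4, 4.8, 4.1, 2.0]
predicted = [0.1, 1.0, 3.6, 4.5, 4.3, 1.8]
cap = 5.0  # assumed rated capacity in kW
print(mae(actual, predicted), mse(actual, predicted), nrmse(actual, predicted, cap))
```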
Table 3. Probabilistic distributions of hourly PV power.
| Time Frame | Distribution | Standard Deviation |
|---|---|---|
| November–April, 9:00–15:00 | N(μ, σ²) | 12% μ |
| November–April, rest of the day | N(μ, σ²) | 25% μ |
| May–October, 9:00–15:00 | N(μ, σ²) | 3% μ |
| May–October, rest of the day | N(μ, σ²) | 8% μ |
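The distributions in Table 3 imply symmetric confidence intervals around the point forecast μ, with σ given as a fraction of μ. A sketch of how such intervals could be constructed (the z-values are two-sided standard-normal quantiles; clipping at zero is an assumption, since PV power cannot be negative):

```python
# Two-sided standard-normal quantiles for common PINC levels
Z = {0.50: 0.674, 0.80: 1.282, 0.90: 1.645, 0.99: 2.576}

def normal_interval(mu, sigma_ratio, pinc):
    """Symmetric interval [mu - z*sigma, mu + z*sigma] with sigma = sigma_ratio * mu."""
    sigma = sigma_ratio * mu
    z = Z[pinc]
    return (max(mu - z * sigma, 0.0), mu + z * sigma)

# A 4.0 kW noon forecast in June falls under the May-October daytime row (sigma = 3% of mu)
lo, hi = normal_interval(4.0, 0.03, 0.90)
print(round(lo, 3), round(hi, 3))
```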
Table 4. Confidence interval evaluation index for time period 11–12 h.
| Probabilistic Model | PINC | PICP | PINAW | CWC |
|---|---|---|---|---|
| EELM | 50% | 52.78% | 0.1963 | 0.1963 |
| EELM | 80% | 81.94% | 0.3927 | 0.3927 |
| EELM | 90% | 88.89% | 0.4732 | 1.0323 |
| EELM | 99% | 100% | 0.6174 | 0.6174 |
| Normal distribution | 50% | 36.11% | 0.0633 | 0.5720 |
| Normal distribution | 80% | 50% | 0.1203 | 10.9533 |
| Normal distribution | 90% | 50% | 0.1545 | 62.4678 |
| Normal distribution | 99% | 58.33% | 0.2419 | 108.0870 |
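The interval-quality indices in Tables 4 and 5 can be computed from the forecast intervals and the observations. A sketch under the usual definitions, where CWC reduces to PINAW when PICP ≥ PINC; the penalty coefficient η is an assumption (the paper's value is not restated here), and the data arrays are illustrative:

```python
from math import exp

def picp(y, lower, upper):
    """Prediction interval coverage probability: fraction of observations inside their interval."""
    return sum(l <= t <= u for t, l, u in zip(y, lower, upper)) / len(y)

def pinaw(lower, upper, y_range):
    """Prediction interval normalized average width (normalized by the observation range)."""
    return sum(u - l for l, u in zip(lower, upper)) / (len(lower) * y_range)

def cwc(picp_val, pinaw_val, pinc, eta=50.0):
    """Coverage-width criterion: penalizes intervals whose coverage falls below PINC."""
    gamma = 1.0 if picp_val < pinc else 0.0
    return pinaw_val * (1.0 + gamma * exp(-eta * (picp_val - pinc)))

# Illustrative values only
y     = [1.0, 2.0, 3.0, 4.0]
lower = [0.8, 1.5, 3.2, 3.5]
upper = [1.3, 2.4, 3.9, 4.4]
p = picp(y, lower, upper)                 # 3 of 4 observations covered -> 0.75
w = pinaw(lower, upper, max(y) - min(y))
print(p, round(w, 4), round(cwc(p, w, 0.80), 4))
```

This exponential penalty explains the pattern in Table 4: the normal-distribution rows whose PICP falls well short of PINC receive CWC values orders of magnitude larger than their PINAW.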
Table 5. Confidence interval evaluation index for a typical day in May.
| Probabilistic Model | PINC | PICP | PINAW | CWC |
|---|---|---|---|---|
| EELM | 50% | 54.55% | 0.0612 | 0.0612 |
| EELM | 80% | 100% | 0.1415 | 0.1415 |
| EELM | 90% | 100% | 0.1794 | 0.1794 |
| EELM | 99% | 100% | 0.5125 | 0.5125 |
| Normal distribution | 50% | 81.82% | 0.1261 | 0.1261 |
| Normal distribution | 80% | 81.82% | 0.2395 | 0.2395 |
| Normal distribution | 90% | 81.82% | 0.3074 | 1.3562 |
| Normal distribution | 99% | 100% | 0.4814 | 0.4814 |

Wang, M.; Wang, P.; Zhang, T. Evidential Extreme Learning Machine Algorithm-Based Day-Ahead Photovoltaic Power Forecasting. Energies 2022, 15, 3882. https://doi.org/10.3390/en15113882