Article

Research on Combined Model Based on Multi-Objective Optimization and Application in Wind Speed Forecast

1
School of Mathematics and Statistics, Lanzhou University, Lanzhou 730000, China
2
School of Statistics, Dongbei University of Finance and Economics, Dalian 116025, China
3
School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(3), 423; https://doi.org/10.3390/app9030423
Submission received: 3 December 2018 / Revised: 30 December 2018 / Accepted: 17 January 2019 / Published: 27 January 2019
(This article belongs to the Section Energy Science and Technology)

Abstract

Wind power is an important part of the power system, and its use has been increasing rapidly compared with fossil energy. However, due to the intermittence and randomness of wind speed, system operators and researchers urgently need more reliable wind-speed prediction methods. The time series of wind speed has not only linear characteristics but also nonlinear ones. In addition, most methods consider only one criterion or rule (stability or accuracy), or one objective function, which can lead to poor forecasting results. Wind-speed forecasting is therefore still a difficult and challenging problem. Forecasting models based on combination theory can adapt to time-series data and overcome the shortcomings of single models, namely poor accuracy and instability. In this paper, a combined forecasting model based on data preprocessing, a nondominated sorting genetic algorithm (NSGA-III) with three objective functions, and four models (two hybrid nonlinear models and two linear models) is proposed and successfully applied to wind-speed forecasting; it not only overcomes the issue of forecasting accuracy, but also addresses the difficulty of forecasting stability. The experimental results show that the stability and accuracy of the proposed combined model are better than those of the single models, improving the mean absolute percentage error (MAPE) by 0.007% to 2.31% and the standard deviation of the mean absolute percentage error (STDMAPE) by 0.0044 to 0.3497.

1. Introduction

In recent years, wind energy has become the focus of managers and researchers in the energy field, due to the advantages of wind power, such as renewability and cleanliness.
In 2016, newly installed global wind-power capacity exceeded 54 GW. Wind power is now installed in 90 countries, nine of which have more than 10 GW installed and 29 of which have passed the 1 GW mark [1]. Cumulative installed capacity grew by 12.6%, reaching 486.8 GW. In the United States, more than 53,000 wind turbines operate across 41 states, with a total capacity of more than 84.1 GW [2]. On the other side of the globe, wind power may become China's second-largest power source by 2050 [3].
Accurate forecasting of wind speed is an important prerequisite for using wind power for the following reasons: 1) it reduces the rotating and operating costs of wind-farm equipment; 2) it helps dispatching departments adjust their plans in time; 3) it reduces the impact on the entire power grid; 4) it effectively reduces or avoids the negative impact of wind farms on the power system; and 5) it improves the competitiveness of wind power in the electricity market.
However, because of the chaotic and random fluctuations of wind speed, obtaining satisfactory wind-speed forecasting results is a challenging and difficult task [4,5,6]. To obtain satisfactory results, reduce errors, and improve the accuracy and stability of forecasts, various wind-speed forecasting methods have been proposed and developed. These methods can be classified into four categories: 1) physical models; 2) statistical models; 3) artificial-intelligence models; and 4) spatial-correlation models [7].
Common physical models include the Weather Research and Forecasting (WRF) model, the Consortium for Small-scale Modeling (COSMO), and Mesoscale Model 5 (MM5) [8]. These methods are grounded in the physics of the atmosphere and forecast wind speed with additional background information [8]. Statistical models include fuzzy methods [9], the Autoregressive Moving Average (ARMA) model [10], the Autoregressive Integrated Moving Average (ARIMA) model [11,12], and the grey model [13], combined with neurofuzzy techniques [14] and Markov chains [15]. Statistical methods aim to mine the relationships in historical data and establish prediction models, including traditional statistical and machine-learning models, to describe the underlying behavior of the wind speed [16]. In recent years, because of their powerful nonlinear modeling ability, many artificial-intelligence forecasting methods, such as artificial neural networks (ANN) [17,18,19,20,21,22,23,24,25] and support vector machines (SVM) [26,27], have been widely applied to wind-speed forecasting. Spatial-correlation models [28] exploit the spatial relationship between the wind speeds at different sites to forecast wind speed.
However, these four categories of methods have disadvantages owing to their nature: 1) physical models need accurate numerical weather-prediction data and detailed information on the wind-farm locality, input parameters, and data acquisition, and their processing and calculation are complex [29,30]; 2) traditional statistical models can handle forecasting with linear trends well, but time-series wind speed is usually random, chaotic, and nonlinear, so the performance of these models does not satisfy researchers and managers; 3) although artificial-intelligence forecasting methods have advantages over traditional statistical models due to their nonlinear mapping capability, their main shortcomings are that they easily become trapped in local optima and suffer from overfitting and slow convergence [31]; 4) spatial-correlation methods must consider many influential factors, such as the relationship between the time-series wind-speed data at different sites and the original data from several stations, when the time series of the forecasting point and its neighbors is used for forecasting [28].
To overcome the problems of forecasting accuracy and stability while taking advantage of single models, we propose a combined model based on multi-objective optimization and combination theory to overcome their defects, namely low accuracy and weak stability. Because wind-speed forecasting is a complex task, single-objective optimization is not sufficient. Solving a single-objective optimization problem (SOP) is straightforward, but multiple-objective problems (such as wind-speed forecasting) are often complex, and the objective functions compete (or conflict) with each other. For this reason, there is no single optimal solution that simultaneously optimizes all objective functions; instead, there is a set of Pareto-optimal solutions. This set of optimal solutions, called the Pareto-optimal Set (PS), whose mapping in the objective space is named the Pareto front (PF), rather than a single optimal solution, is the goal of solving a multiple-objective optimization problem (MOP). In MOPs, researchers choose a preferred solution from the PS rather than a unique optimum. Clearly, it is difficult to select one objective function from a number of candidates and ensure that the selected function achieves both a lower mean absolute percentage error (MAPE) and stronger stability. For these reasons, multiple-objective optimization can address the low accuracy and weak stability of wind-speed forecasting by simultaneously searching for the optima of multiple functions.
In this paper, a new combined model [32] is proposed that integrates nonpositive constraint theory [33], four branch models, namely two nonlinear hybrid neural networks (the Cuckoo Search-Back-Propagation Neural Network (CS-BPNN) and the Differential Evolution-Online Extreme Learning Machine (DE-OSELM)) and two linear models (ARIMA and Holt-Winters (HW)), and a nondominated sorting genetic algorithm (NSGA-III) [34] that optimizes the weights of the branch models. Ten-minute time-series wind-speed data from three wind-farm sites were used to examine the proposed model. We also chose three combined models with two objective functions to test its performance.
The major contributions and innovations of this paper are as follows:
1. According to the observed time-series wind-speed data, the trajectory matrix was constructed, decomposed, and reconstructed to extract signals representing different components of the original time series, such as long-term trend signals, periodic signals, and noise signals, so that the structure of the time series could be analyzed and used for further forecasting.
2. Our proposed combined model is based on MOP theory, which can obtain both accuracy and stability. The wind-speed forecasting problem is a MOP, so the theory overcomes the difficulty of selecting one objective function from multiple functions to obtain higher precision and stronger stability.
3. Because the wind-speed data have both linear and nonlinear characteristics, linear models (ARIMA and HW) and nonlinear models (CS-BPNN and DE-OSELM) were chosen as the branch models of our proposed combined model. The combination of these two kinds of models can address the wind-speed forecasting problem with both linear and nonlinear characteristics.
4. Our novel developed combined model with three objective functions, which can be applied to forecast wind speed, is proposed in this paper. As compared with other models, the proposed model not only guarantees accuracy, but also has strong stability. The results of the experiments mean that the proposed integrated combined model is a more effective model for wind-speed forecasting and wind-farm management.
5. The forecasting performance of the combined model was evaluated scientifically and comprehensively. The evaluation system used five experiments and four performance metrics to effectively evaluate the forecasting accuracy and stability of the proposed combined model.
The remainder of the paper is arranged as follows. The strategy of the proposed combined model is shown in Section 2. Singular-spectrum analysis is presented in Section 3. Nonlinear back propagation, an extreme learning machine neural network, two linear models, autoregressive integrated moving average and Holt–Winters, heuristic algorithms, and the optimization procedure are introduced in Section 4. In Section 5, we show our proposed combined model. In Section 6, forecasting performance metrics, the forecasting results of individual models and of the proposed combined model, and comparisons are discussed, and the views and results of the entire paper are summarized. Finally, Section 7 concludes the study.

2. Strategy of Our Proposed Combined Model

(1) Preprocess the time-series wind-speed data from China by denoising and reconstruction.
(2) Set the training and test data according to the character of the time-series wind-speed data; the interval between consecutive data items is ten minutes.
(3) Select two kinds of models as the single models used to build the combined model: the linear models ARIMA and HW, and the nonlinear models CS-BPNN and DE-OSELM, which are optimized by cuckoo search and differential evolution, respectively. The parameter values are shown in Table 1.
(4) Build the combined forecasting model based on nonpositive constraint theory, NSGA-III (which optimizes the weights of the branch models), and the four single models mentioned above.
(5) Evaluate the forecasting results and test the combined model.

3. Singular-Spectrum Analysis (SSA)

The denoising process consists of four steps: embedding, singular-value decomposition, grouping, and diagonal averaging [35]. According to the observed time-series wind-speed data and these four steps, the trajectory matrix was constructed, then decomposed and reconstructed to extract signals representing different components of the original time series, such as long-term trend signals, periodic signals, and noise signals. To obtain satisfactory results, the noise signals were discarded. The detailed process is as follows:
(1) Embedding:
Form the trajectory matrix of the series, which is the L × K matrix
X = [X_1, \ldots, X_K] = (x_{ij})_{i,j=1}^{L,K} = \begin{bmatrix} x_1 & x_2 & x_3 & \cdots & x_K \\ x_2 & x_3 & x_4 & \cdots & x_{K+1} \\ x_3 & x_4 & x_5 & \cdots & x_{K+2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ x_L & x_{L+1} & x_{L+2} & \cdots & x_N \end{bmatrix}
where X_i = (x_i, \ldots, x_{i+L-1})^T (1 \leq i \leq K) are the lagged vectors of size L. X is a Hankel matrix, which means that it has equal elements x_{ij} on the anti-diagonals i + j = const.
(2) Singular-Value Decomposition:
Perform the singular value decomposition (SVD) of the trajectory matrix X. Set S = XX^T and denote by \lambda_1, \ldots, \lambda_L the eigenvalues of S taken in decreasing order of magnitude (\lambda_1 \geq \cdots \geq \lambda_L \geq 0), and by U_1, \ldots, U_L the orthonormal system of eigenvectors of S corresponding to these eigenvalues.
Set d = \mathrm{rank}\, X = \max\{i : \lambda_i > 0\} (note that d = L for a typical real-life series) and V_i = X^T U_i / \sqrt{\lambda_i} (i = 1, \ldots, d). In this notation, the SVD of the trajectory matrix X can be written as X = X_1 + \cdots + X_d,
where X_i = \sqrt{\lambda_i}\, U_i V_i^T are rank-one matrices, called elementary matrices. The collection (\sqrt{\lambda_i}, U_i, V_i) is the i-th eigentriple (ET) of the SVD. The vectors U_i are the left singular vectors of X, the numbers \sqrt{\lambda_i} are the singular values and form the singular spectrum of X (this gives SSA its name), and the vectors \sqrt{\lambda_i}\, V_i = X^T U_i are the vectors of principal components (PCs).
(3) Eigentriple grouping:
Partition the set of indices \{1, \ldots, d\} into m disjoint subsets I_1, \ldots, I_m.
Let I = \{i_1, \ldots, i_p\}. Then the resultant matrix X_I corresponding to group I is defined as X_I = X_{i_1} + \cdots + X_{i_p}. The resultant matrices are computed for each group, and the grouped SVD expansion of X can now be written as X = X_{I_1} + \cdots + X_{I_m}.
(4) Diagonal averaging:
Each matrix X_{I_j} of the grouped decomposition is hankelized, and then the obtained Hankel matrix is transformed into a new series of length N using the one-to-one correspondence between Hankel matrices and time series. Diagonal averaging applied to a resultant matrix X_{I_k} produces a reconstructed series \tilde{X}^{(k)} = (\tilde{x}_1^{(k)}, \ldots, \tilde{x}_N^{(k)}). In this way, the initial series x_1, \ldots, x_N is decomposed into a sum of m reconstructed subseries:
x_n = \sum_{k=1}^{m} \tilde{x}_n^{(k)}, \quad n = 1, 2, \ldots, N
This decomposition is the main result of the SSA algorithm. The decomposition is meaningful if each reconstructed subseries could be forecast as a part of either a trend, some periodic component, or noise.
The pseudocode for denoising is provided in Appendix A, Algorithm 1.
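As an illustration of the four steps above, the following is a minimal Python sketch of SSA denoising (not the authors' implementation); the window length L and the number of retained leading components r are hypothetical parameters chosen only for the example.

```python
# Minimal SSA denoising sketch; L (window length) and r (number of retained
# leading components treated as "signal") are illustration parameters.
import numpy as np

def ssa_denoise(y, L, r):
    """Reconstruct a series from its r leading SSA components."""
    y = np.asarray(y, dtype=float)
    N = len(y)
    K = N - L + 1
    # (1) Embedding: build the L x K Hankel trajectory matrix.
    X = np.column_stack([y[i:i + L] for i in range(K)])
    # (2) SVD of the trajectory matrix.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # (3) Grouping: keep the r leading elementary matrices as the signal part.
    X_signal = (U[:, :r] * s[:r]) @ Vt[:r, :]
    # (4) Diagonal (Hankel) averaging back to a series of length N.
    y_rec = np.zeros(N)
    counts = np.zeros(N)
    for i in range(L):
        for j in range(K):
            y_rec[i + j] += X_signal[i, j]
            counts[i + j] += 1
    return y_rec / counts

# Usage (hypothetical): denoised = ssa_denoise(wind_speed, L=48, r=5)
```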

4. Methods and Heuristic Algorithm

The linear and nonlinear models selected to build the combined system are presented in this section. The heuristic algorithms, cuckoo search and differential evolution, which are used to optimize BPNN and OSELM, are also introduced.

4.1. Nonlinear Models

Because of the character of wind-speed data, nonlinear models can achieve excellent forecasting performance. In this paper, two nonlinear models, BPNN and OSELM, were selected, and their parameters were optimized by heuristic algorithms to obtain better performance.

4.1.1. Back-Propagation Neural Network (BPNN) Model

BPNN is a type of multilayer feed-forward neural network with a wide variety of applications. It is based on a gradient-descent method that minimizes the sum of the squared errors between actual and desired output values. The transfer function is of the neuron type. The output function is between 0 and 1, and can transform input to output for continuous nonlinear mapping [36].
Before training, the input and output data of the BPNN are normalized to the range [-1, 1]:
x_i' = 2 \times \frac{x_i - X_{\min}}{X_{\max} - X_{\min}} - 1, \quad i = 1, 2, \ldots, n, \quad x_i' \in [-1, 1]
where X_{\min} and X_{\max} are the minimum and maximum values of the input or output vector, and x_i denotes the real value of each element.
Step 1. Calculate outputs of all hidden layer nodes:
y_j = f\left(\sum_{i} w_{ji} x_i + b_j\right) = f(net_j), \quad i = 1, \ldots, n; \; j = 1, \ldots, 2n
net_j = \sum_{i} w_{ji} x_i + b_j, \quad j = 1, \ldots, 2n
where net_j is the activation value of node j, w_{ji} is the connection weight from input node i to hidden node j, b_j is the bias of neuron j, y_j is the output of hidden-layer node j, and f is the activation function of the node, which is usually a sigmoid function.
Step 2. Calculate the output data of the neural network:
O_1 = f_0\left(\sum_{j} w_{0j} y_j + b_0\right), \quad j = 1, \ldots, 2n
where w_{0j} is the connection weight from hidden node j to the output node, b_0 is the bias of the output neuron, O_1 is the output of the network, and f_0 is the activation function of the output-layer node.
Step 3. Minimize the global error via the training algorithm:
\mathrm{Mean\ Square\ Error} = \frac{1}{m} \sum (O_1 - Z)^2
where Z is the vector of real output data and m is the number of outputs.
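To make Steps 1-3 concrete, the following is a small NumPy sketch of the normalization, hidden-layer pass, output computation, and MSE; the network size and all weights are hypothetical illustrations rather than the trained CS-BPNN used in the paper.

```python
# Minimal sketch of the BPNN normalization and forward pass (Steps 1-3);
# the dimensions, weights, and target below are hypothetical illustrations.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def normalize(x):
    # Scale inputs to [-1, 1] as in the preprocessing formula.
    return 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0

rng = np.random.default_rng(0)
n = 4                                   # number of input nodes
x = normalize(rng.uniform(0, 20, n))    # one normalized input pattern
W_hidden = rng.normal(size=(2 * n, n))  # weights w_ji, hidden layer of 2n nodes
b_hidden = rng.normal(size=2 * n)       # biases b_j
w_out = rng.normal(size=2 * n)          # weights w_0j to the single output node
b_out = rng.normal()                    # output bias b_0

# Step 1: hidden-layer outputs y_j = f(net_j).
y_hidden = sigmoid(W_hidden @ x + b_hidden)
# Step 2: network output O_1.
o1 = sigmoid(w_out @ y_hidden + b_out)
# Step 3: mean square error against a (hypothetical) target z.
z = 0.5
mse = np.mean((o1 - z) ** 2)
print(o1, mse)
```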

4.1.2. CS Algorithm

CS, which was proposed by Yang and Deb [37] in 2009, is derived from the behavior of cuckoos laying their eggs in other birds' nests so that those birds hatch the eggs for them. However, once the host birds discover the cuckoo eggs, they either throw them away or abandon their nests and build new nests elsewhere. The CS algorithm is constructed on three assumptions: a) each cuckoo randomly lays only one egg in a selected nest; b) the best nests carry over to the following generations; and c) the number of available host nests is constant, and the probability that a host bird discovers an egg laid by a cuckoo is p, with 0 < p < 1. In CS, every nest stands for a solution. The steps of BPNN optimization by CS are shown in Figure 1.
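The sketch below illustrates the CS loop under assumptions a)-c). It is not the paper's implementation: a simple Gaussian random walk is substituted for the Lévy flight for brevity, and the objective `mse_of` (e.g., the BPNN training MSE as a function of its weights) is a hypothetical placeholder.

```python
# Highly simplified Cuckoo Search sketch for tuning BPNN parameters; a Gaussian
# random walk replaces the Lévy flight, and `mse_of` is a placeholder fitness.
import numpy as np

def cuckoo_search(mse_of, dim, n_nests=15, pa=0.25, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    nests = rng.uniform(-1, 1, size=(n_nests, dim))      # each nest = a solution
    fitness = np.array([mse_of(n) for n in nests])
    for _ in range(iters):
        best = nests[fitness.argmin()].copy()
        # Generate new solutions by a random walk biased toward the best nest.
        new = nests + 0.01 * rng.normal(size=nests.shape) * (nests - best)
        new_fit = np.array([mse_of(n) for n in new])
        improved = new_fit < fitness
        nests[improved], fitness[improved] = new[improved], new_fit[improved]
        # With probability pa, a nest is discovered, abandoned, and rebuilt.
        abandon = rng.random(n_nests) < pa
        nests[abandon] = rng.uniform(-1, 1, size=(abandon.sum(), dim))
        fitness[abandon] = np.array([mse_of(n) for n in nests[abandon]])
    return nests[fitness.argmin()]

# Usage (hypothetical): best_params = cuckoo_search(lambda w: np.sum(w**2), dim=10)
```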

4.1.3. OSELM

An Extreme Learning Machine (ELM) is a simple and effective single hidden-layer feed-forward neural network proposed by Huang et al. [38]. In many practical applications, however, learning is a continuous process. For this reason, Liang et al. proposed an online learning algorithm, the OSELM [39].
For N arbitrary distinct training samples Z = \{(x_i, t_i) \mid i = 1, \ldots, N\}, where x_i = [x_{i1}, x_{i2}, \ldots, x_{in}]^T \in R^n and t_i = [t_{i1}, t_{i2}, \ldots, t_{im}]^T \in R^m, an ELM with L hidden-layer nodes and activation function g(x) can approximate the N samples with zero error by arbitrarily specifying a_j and b_j, which can be expressed as follows:
O_i = \sum_{j=1}^{L} \beta_j g(a_j, b_j, x_i) = t_i, \quad i = 1, \ldots, N
where a_j is the input weight, b_j is the threshold of the j-th hidden-layer node, x_i is the input vector, O_i is the output vector, and \beta_j is the output weight, with \beta_{L \times m} = [\beta_1, \beta_2, \ldots, \beta_L]^T.
Formula (8) can be simplified as:
H\beta = T
where \beta_{L \times m} = [\beta_1, \beta_2, \ldots, \beta_L]^T, T_{N \times m} = [t_1, t_2, \ldots, t_N]^T, and H is the hidden-layer output matrix,
H(a_1, \ldots, a_L; b_1, \ldots, b_L; x_1, \ldots, x_N) = \begin{bmatrix} g(a_1, b_1, x_1) & \cdots & g(a_L, b_L, x_1) \\ \vdots & \ddots & \vdots \\ g(a_1, b_1, x_N) & \cdots & g(a_L, b_L, x_N) \end{bmatrix}_{N \times L}
The j-th column of H is the output of the j-th hidden-layer node for the inputs x_1, x_2, \ldots, x_N.
After the pseudoinverse matrix is incorporated, the least-squares solution of the above linear system is:
\hat{\beta} = (H^T H)^{-1} H^T T
Based on a recursive least-squares algorithm, the algorithm flow of OSELM can be described as follows:
(1) Initialization Stage
Given the activation function g, the number of hidden-layer nodes L, and a small training set used to initialize the network, obtain the initial output weight \beta^{(0)}. Set k = 0, where k is the index of the data chunk presented to the network.
(2) Online Learning Stage
Given the (k + 1)-th data chunk, calculate the output weight \beta^{(k+1)}, set k = k + 1, and return to the online learning stage, continually updating the output weight until all the data have been learned.
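A minimal sketch of the two OSELM stages is given below. The initialization follows Eq. (10); the online stage assumes the standard recursive least-squares update of Liang et al., which is described but not written out above, and all data and dimensions are hypothetical.

```python
# Minimal OSELM sketch with a sigmoid hidden layer; the online stage uses the
# standard recursive least-squares update (an assumption of this sketch).
import numpy as np

rng = np.random.default_rng(1)

def hidden(X, A, b):
    # Hidden-layer output matrix H(a, b, x) with a sigmoid activation g.
    return 1.0 / (1.0 + np.exp(-(X @ A.T + b)))

n_in, L = 4, 20                       # input dimension and hidden nodes
A = rng.normal(size=(L, n_in))        # random input weights a_j
b = rng.normal(size=L)                # random hidden thresholds b_j

# Initialization stage on a small chunk (X0, T0).
X0, T0 = rng.normal(size=(50, n_in)), rng.normal(size=(50, 1))
H0 = hidden(X0, A, b)
P = np.linalg.inv(H0.T @ H0)          # (H^T H)^{-1}
beta = P @ H0.T @ T0                  # Eq. (10)

# Online learning stage: update beta for each new data chunk.
for _ in range(5):
    Xk, Tk = rng.normal(size=(10, n_in)), rng.normal(size=(10, 1))
    Hk = hidden(Xk, A, b)
    P = P - P @ Hk.T @ np.linalg.inv(np.eye(len(Xk)) + Hk @ P @ Hk.T) @ Hk @ P
    beta = beta + P @ Hk.T @ (Tk - Hk @ beta)
```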

4.1.4. Differential Evolution

The differential-evolution algorithm is a parallel, direct, random-search algorithm based on population evolution [40]. First, the population is randomly initialized in the feasible solution space of the problem. The population consists of NP (population size) individuals, each described by D (the number of decision variables) parameters x_{h,l}, where h = 1, 2, \ldots, NP and l = 1, 2, \ldots, D.
Two different individual vectors are randomly selected and subtracted to generate a difference vector; the difference vector is weighted and added to a third randomly selected individual vector to generate a mutant vector. This operation is called mutation. Using Formula (11) to apply the mutation operation to each individual x_h^t at generation t, the corresponding mutant individual v_h^{t+1} is obtained, that is:
v_h^{t+1} = x_{r_1}^t + K (x_{r_2}^t - x_{r_3}^t)
where r_1, r_2, r_3 \in \{1, 2, \ldots, NP\} are mutually different and different from h.
The mutant vector is mixed with the target vector to generate a trial vector; this process is called crossover. Formula (12) is used to cross x_h^t with the mutant individual v_h^{t+1} generated by Formula (11) to produce the trial individual u_h^{t+1}, that is:
u_{h,l}^{t+1} = \begin{cases} v_{h,l}^{t+1}, & \text{if } \mathrm{rand}(l) \leq CR \text{ or } l = \mathrm{rnbr}(h) \\ x_{h,l}^{t}, & \text{otherwise} \end{cases}
where \mathrm{rand}(l) is a uniformly distributed random number in the range [0, 1], CR is the crossover probability in the range [0, 1], and \mathrm{rnbr}(h) is a random index in \{1, 2, \ldots, D\}.
If the fitness of the trial vector is better than that of the target vector, the trial vector replaces the target vector in the next generation; this operation is called selection. The fitness values J of u_h^{t+1} and x_h^t are compared using Formula (6). For a minimization problem, the individual with the lower fitness value is selected as the individual x_h^{t+1} of the new population, that is:
x_h^{t+1} = \begin{cases} u_h^{t+1}, & \text{if } J(u_h^{t+1}) < J(x_h^t) \\ x_h^{t}, & \text{otherwise} \end{cases}
where J is the fitness function. Figure 2 shows the flow chart of the hybrid OSELM.
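The following short sketch ties Formulas (11)-(13) together; the objective `fitness` is a hypothetical placeholder (in the paper it would be the forecasting error of OSELM for a candidate parameter set).

```python
# Minimal sketch of the DE mutation, crossover, and selection steps
# (Formulas (11)-(13)); `fitness` is a placeholder objective.
import numpy as np

def de(fitness, D, NP=20, K=0.5, CR=0.9, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-1, 1, size=(NP, D))
    cost = np.array([fitness(x) for x in pop])
    for _ in range(iters):
        for h in range(NP):
            # Mutation (11): v = x_r1 + K * (x_r2 - x_r3), r1, r2, r3 distinct from h.
            r1, r2, r3 = rng.choice([i for i in range(NP) if i != h], 3, replace=False)
            v = pop[r1] + K * (pop[r2] - pop[r3])
            # Crossover (12): mix v with the target vector x_h.
            mask = rng.random(D) <= CR
            mask[rng.integers(D)] = True            # ensure index l = rnbr(h) is taken
            u = np.where(mask, v, pop[h])
            # Selection (13): keep the better of u and x_h.
            cu = fitness(u)
            if cu < cost[h]:
                pop[h], cost[h] = u, cu
    return pop[cost.argmin()]

# Usage (hypothetical): best = de(lambda x: np.sum(x**2), D=5)
```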

4.2. Linear Models

Although wind-speed data are usually nonlinear, according to our test in Experiment I, the time-series wind-speed data also have a linear character, so the linear models used to build the combined system are appropriate. In this section, two linear models are briefly introduced.

4.2.1. ARIMA

The ARIMA model is one of the most popular forecasting models [41]. The ARIMA model can be expressed as follows:
y_t = \phi_1 y_{t-1} + \phi_2 y_{t-2} + \cdots + \phi_p y_{t-p} + \varepsilon_t - \theta_1 \varepsilon_{t-1} - \cdots - \theta_q \varepsilon_{t-q}
where y_i (i = 1, 2, \ldots, t) is the actual value, \varepsilon_i (i = 1, 2, \ldots, t) is the random error at time i, \phi_i and \theta_i are the coefficients, and p and q are integers often referred to as the orders of the autoregressive and moving-average polynomials, respectively.
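As a usage illustration (not the paper's code), an ARIMA(p, d, q) model can be fitted to a wind-speed series as follows, assuming the statsmodels library; the order (2, 1, 1) and the synthetic series are hypothetical.

```python
# Sketch of fitting an ARIMA model to a wind-speed series with statsmodels
# (assumed library); the series and the order (2, 1, 1) are illustrations.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

wind_speed = np.random.default_rng(0).gamma(shape=2.0, scale=3.0, size=500)
model = ARIMA(wind_speed, order=(2, 1, 1))   # p = 2, d = 1, q = 1
fitted = model.fit()
forecast = fitted.forecast(steps=6)          # next six 10-minute-ahead values
print(forecast)
```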

4.2.2. Holt–Winters (HW)

Suppose we have a sequence of observations \{x_t\} beginning at time t = 0 with a seasonal cycle of length L. The output of the HW method is written F_{t+m}, an estimate of the value of x at time t + m (m > 0) based on the raw data up to time t. The forecasting formula and recursive updating equations are the following:
F_{t+m} = s_t + m b_t + c_{t-L+1+((m-1) \bmod L)}
where s_t = \alpha (x_t - c_{t-L}) + (1 - \alpha)(s_{t-1} + b_{t-1}), b_t = \beta (s_t - s_{t-1}) + (1 - \beta) b_{t-1}, and c_t = \gamma (x_t - s_{t-1} - b_{t-1}) + (1 - \gamma) c_{t-L}. Here \alpha is the data-smoothing factor, \beta is the trend-smoothing factor, and \gamma is the seasonal-change smoothing factor; all three parameters lie between 0 and 1. \{s_t\} is the smoothed estimate of the constant part (level) at time t, \{b_t\} is the sequence of best estimates of the linear trend superimposed on the seasonal changes, and \{c_t\} is the sequence of seasonal-correction factors [42].
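A direct sketch of these recursions is shown below; the smoothing factors, the season length, and the simple initialization are hypothetical choices for illustration only.

```python
# Direct sketch of the additive Holt-Winters recursions given above; the
# smoothing factors, season length, and initialization are illustrations.
import numpy as np

def holt_winters(x, L, alpha=0.3, beta=0.1, gamma=0.2, m=1):
    x = np.asarray(x, dtype=float)
    # Simple initialization: level from the first season, zero trend,
    # seasonal factors as deviations from the first-season mean.
    s = x[:L].mean()
    b = 0.0
    c = list(x[:L] - s)
    for t in range(L, len(x)):
        s_prev, b_prev = s, b
        s = alpha * (x[t] - c[t - L]) + (1 - alpha) * (s_prev + b_prev)
        b = beta * (s - s_prev) + (1 - beta) * b_prev
        c.append(gamma * (x[t] - s_prev - b_prev) + (1 - gamma) * c[t - L])
    # Forecast m steps ahead: F_{t+m} = s_t + m*b_t + seasonal correction.
    return s + m * b + c[len(x) - L + (m - 1) % L]

# Usage (hypothetical): f = holt_winters(wind_speed, L=144, m=1)  # 144 ten-minute steps per day
```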

5. Our Proposed Combined Model

The combined model theory, MOPs, our proposed combined system with three objective functions, and the compared combined model with two objective functions are presented in this section. The flowchart is shown in Figure 3.

5.1. Combined-Model Theory

The forecasting model based on combination, which was initiated by Bates and Granger [32], has long been considered an improvement over individual models and an efficient and simple way to improve forecasting accuracy and stability. A new combined system that consolidates the branch models, NSGA-III, and nonpositive constraint theory [34] is proposed in this paper.
Definition 1.
Let \hat{x}_{j,t} denote the unbiased out-of-sample forecast of x_t obtained by the j-th individual model. Then the combined output at time t of the combining method has the following weighted-average form [43,44]:
\hat{x}_{c,t} = \sum_{j=1}^{m} w_j \hat{x}_{j,t}, \quad t = 1, 2, \ldots
where \hat{x}_{c,t} is the combined output, m is the number of component models, and w_j is the weight on the j-th component model. These weights are not restricted to the range [0, 1]; experimental results show that the combined model can obtain desirable results when the weight vector takes values in the range [-2, 2] [33].
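A minimal sketch of this weighted combination is shown below; the branch forecasts and the candidate weight vector are hypothetical values, with the weights drawn from the range [-2, 2] that NSGA-III searches.

```python
# Minimal sketch of the weighted combination of branch forecasts; all numbers
# below are hypothetical illustrations.
import numpy as np

def combine(forecasts, weights):
    """forecasts: (m, T) array of branch forecasts; weights: length-m vector."""
    return np.asarray(weights) @ np.asarray(forecasts)

branch = np.array([                    # rows: CS-BPNN, DE-OSELM, ARIMA, HW
    [6.1, 6.3, 5.9], [6.0, 6.4, 6.1], [5.8, 6.2, 6.0], [6.2, 6.5, 6.2]])
w = np.array([0.4, 0.3, 0.1, 0.2])     # one candidate weight vector in [-2, 2]
print(combine(branch, w))
```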

5.2. Multiobjective Optimization Problem

Generally, MOPs can be classified into two groups: constrained and unconstrained problems. Constrained problems with J inequality and K equality constraints can be formulated as:
Definition 2.
\mathrm{Minimize} \; F(x) = (f_1(x), f_2(x), \ldots, f_M(x))^T
\mathrm{s.t.} \; g_j(x) \leq 0, \; j = 1, 2, \ldots, J; \quad h_k(x) = 0, \; k = 1, 2, \ldots, K; \quad x \in \Omega
where M is the number of objectives and x = (x_1, x_2, \ldots, x_n)^T is the decision vector, where n is the number of decision variables. In MOP (1), \Omega = \prod_{i=1}^{n} [x_i^L, x_i^U] \subseteq R^n is called the decision space, where x_i^L and x_i^U are the lower and upper bounds of decision variable x_i, respectively.
When the inequality and equality constraints in MOP (1) are omitted, the unconstrained problem (with box constraints only) is obtained, which can be stated as follows:
Definition 3.
\mathrm{Minimize} \; F(x) = (f_1(x), f_2(x), \ldots, f_M(x))^T
\mathrm{s.t.} \; x \in \Omega
To solve the MOP, NSGA-III is presented in the next section.

5.3. Introduction of Objective Functions and NSGA-III

Since wind-speed prediction is not a single-objective problem, multiple objectives must be considered to obtain both accuracy and stability. In this part, we present a multi-objective optimization algorithm that optimizes the weights of the four branch models under the three proposed objective functions to address this problem comprehensively.

5.3.1. Objective Functions

In this paper, we chose three functions to be the objective functions.
(1) The Theil Inequality Coefficient (TIC) can be indicated as follows:
TIC(\hat{Y}, Y) = \frac{\sqrt{\frac{1}{N}\sum_{i=1}^{N}(\hat{y}_i - y_i)^2}}{\sqrt{\frac{1}{N}\sum_{i=1}^{N}\hat{y}_i^2} + \sqrt{\frac{1}{N}\sum_{i=1}^{N}y_i^2}}
TIC always lies between 0 and 1; a lower value indicates better performance.
(2) Root mean squared error (RMSE). A smaller RMSE(\hat{Y}, Y) value indicates a more accurate forecasting performance of the system. It can be defined as follows:
RMSE(\hat{Y}, Y) = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(\hat{y}_i - y_i)^2}
(3) MAPE can be represented as:
MAPE(\hat{Y}, Y) = \frac{1}{N}\sum_{i=1}^{N}\frac{|\hat{y}_i - y_i|}{y_i}
The lower MAPE ( Y ^ ,   Y ) is, the more accurate the system.
Thus, in this combined system, the fitness function for the accuracy and stability objectives can be defined as follows:
\mathrm{Minimize} \begin{cases} f_1 = TIC(\hat{Y}, Y) \\ f_2 = RMSE(\hat{Y}, Y) \\ f_3 = MAPE(\hat{Y}, Y) \end{cases}
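For reference, the three objective functions can be computed directly as follows (a straightforward sketch, not the authors' code).

```python
# Straightforward implementations of the three objective functions (TIC, RMSE,
# MAPE) that form the fitness vector of the combined model.
import numpy as np

def tic(y_hat, y):
    y_hat, y = np.asarray(y_hat), np.asarray(y)
    return np.sqrt(np.mean((y_hat - y) ** 2)) / (
        np.sqrt(np.mean(y_hat ** 2)) + np.sqrt(np.mean(y ** 2)))

def rmse(y_hat, y):
    return np.sqrt(np.mean((np.asarray(y_hat) - np.asarray(y)) ** 2))

def mape(y_hat, y):
    y_hat, y = np.asarray(y_hat), np.asarray(y)
    return np.mean(np.abs(y_hat - y) / y)

def objectives(y_hat, y):
    # The vector of objectives minimized simultaneously by NSGA-III.
    return np.array([tic(y_hat, y), rmse(y_hat, y), mape(y_hat, y)])
```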

5.3.2. NSGA-III

NSGA-III [45] initializes a random population of size N and a set of widely distributed, prespecified M-dimensional reference points H on a unit hyperplane; the hyperplane is placed so that it intersects each objective axis at one and has a normal vector of ones, covering the entire R_+^M region. A more detailed description of NSGA-III is given in Appendix A.
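As one possible way to reproduce this setup, the sketch below runs NSGA-III over the four branch-model weights using the open-source pymoo library; pymoo is an assumption of this example (the paper does not state its implementation), and `branch_forecasts` and `y_true` are hypothetical inputs.

```python
# Sketch of NSGA-III over the branch-model weights using pymoo (assumed
# library, not the authors' implementation); inputs are hypothetical.
import numpy as np
from pymoo.algorithms.moo.nsga3 import NSGA3
from pymoo.core.problem import Problem
from pymoo.optimize import minimize
from pymoo.util.ref_dirs import get_reference_directions

class WeightProblem(Problem):
    def __init__(self, branch_forecasts, y_true):
        # One decision variable per branch model; weights searched in [-2, 2].
        super().__init__(n_var=branch_forecasts.shape[0], n_obj=3, xl=-2.0, xu=2.0)
        self.F_hat, self.y = branch_forecasts, y_true

    def _evaluate(self, W, out, *args, **kwargs):
        preds = W @ self.F_hat                       # one combined forecast per row of W
        err = preds - self.y
        rmse = np.sqrt(np.mean(err ** 2, axis=1))
        tic = rmse / (np.sqrt(np.mean(preds ** 2, axis=1)) + np.sqrt(np.mean(self.y ** 2)))
        mape = np.mean(np.abs(err) / self.y, axis=1)
        out["F"] = np.column_stack([tic, rmse, mape])

ref_dirs = get_reference_directions("das-dennis", 3, n_partitions=12)
algorithm = NSGA3(ref_dirs=ref_dirs, pop_size=92)
# res = minimize(WeightProblem(branch_forecasts, y_true), algorithm, ("n_gen", 200))
```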

5.4. Compared Combined Models with Two Objective Functions

Three combined models with two objective functions were chosen for comparison with our combined system:
(1) Combined model with objective functions (21) and (22):
\mathrm{Minimize} \begin{cases} f_1 = TIC(\hat{Y}, Y) \\ f_2 = RMSE(\hat{Y}, Y) \end{cases}
(2) Combined model with objective functions (21) and (23):
\mathrm{Minimize} \begin{cases} f_1 = TIC(\hat{Y}, Y) \\ f_2 = MAPE(\hat{Y}, Y) \end{cases}
(3) Combined model with objective functions (22) and (23):
\mathrm{Minimize} \begin{cases} f_1 = RMSE(\hat{Y}, Y) \\ f_2 = MAPE(\hat{Y}, Y) \end{cases}

6. Experiments

Several performance metrics must be described to comprehensively understand the model characteristics; the Diebold-Mariano test and forecasting effectiveness were used in this study. The experiments in this section test the data and evaluate our proposed combined system.

6.1. The Performance Metric

Some performance metrics must be described to comprehensively understand the model characteristics. Four metrics, that is, AE, MAE, MSE, and MAPE, are shown in Table 1.

6.1.1. Diebold Mariano test

A comparison test, the Diebold-Mariano (DM) test, was proposed by Diebold and Mariano [46]; it focuses on predictive accuracy and evaluates the forecasting performance of two or more time-series models.
Actual values:
\{ y_t; \; t = 1, \ldots, n + m \}
Two forecasts:
\{ \hat{y}_t^{(1)}; \; t = 1, \ldots, n + m \}
\{ \hat{y}_t^{(2)}; \; t = 1, \ldots, n + m \}
The forecast errors from the two models are:
\{ \varepsilon_{n+h}^{(1)} = y_{n+h} - \hat{y}_{n+h}^{(1)}, \; h = 1, 2, \ldots, m \}
\{ \varepsilon_{n+h}^{(2)} = y_{n+h} - \hat{y}_{n+h}^{(2)}, \; h = 1, 2, \ldots, m \}
A suitable loss function, L(\varepsilon_{n+h}^{(i)}), i = 1, 2, is applied to measure the accuracy of each forecast. Squared-error loss and absolute-deviation loss are the most popular loss functions.
Squared-error loss:
L(\varepsilon_{n+h}^{(i)}) = (\varepsilon_{n+h}^{(i)})^2
Absolute-deviation loss:
L(\varepsilon_{n+h}^{(i)}) = |\varepsilon_{n+h}^{(i)}|
The DM test statistic evaluates the forecasts under an arbitrary loss function L(\cdot):
DM = \frac{\frac{1}{m}\sum_{h=1}^{m}\left[ L(\varepsilon_{n+h}^{(1)}) - L(\varepsilon_{n+h}^{(2)}) \right]}{\sqrt{S^2 / m}}
where S^2 is an estimator of the variance of d_h = L(\varepsilon_{n+h}^{(1)}) - L(\varepsilon_{n+h}^{(2)}).
The null hypothesis is:
H_0: \; E(d_h) = 0 \;\; \forall h
versus the alternative hypothesis, which is:
H_1: \; E(d_h) \neq 0
The null hypothesis means that the two forecasts have the same accuracy. The alternative hypothesis means that the two forecasts have different levels of accuracy. Under the null hypothesis, test statistic DM is asymptotically N (0, 1) distributed. The null hypothesis of no difference is rejected if the computed DM statistic falls outside the range from -zα/2 to zα/2, that is, if
|DM| > z_{\alpha/2}
where zα/2 is the upper (or positive) z-value from the standard normal table, corresponding to half of the desired α level of the test.
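A minimal sketch of the DM test with squared-error loss is given below; it uses a simple sample-variance estimate for S^2, since the specific variance estimator is not detailed above.

```python
# Minimal DM-test sketch with squared-error loss; a plain sample-variance
# estimate of the loss differential is assumed for S^2.
import numpy as np
from scipy import stats

def dm_test(y, f1, f2, alpha=0.05):
    """Return the DM statistic and whether H0 (equal accuracy) is rejected."""
    y, f1, f2 = map(np.asarray, (y, f1, f2))
    d = (y - f1) ** 2 - (y - f2) ** 2          # loss differential d_h
    m = len(d)
    dm = d.mean() / np.sqrt(d.var(ddof=1) / m) # DM ~ N(0, 1) under H0
    z_crit = stats.norm.ppf(1 - alpha / 2)
    return dm, abs(dm) > z_crit

# Usage (hypothetical): dm, reject = dm_test(y_true, forecast_combined, forecast_arima)
```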

6.1.2. Forecasting Effectiveness

Both the sum of squared forecasting errors and the mean and mean squared deviation are used by forecasting effectiveness to measure forecasting accuracy. In some practical cases, it is necessary to further consider the kurtosis and skewness of the forecasting-accuracy distribution. On this basis, the general discrete form of forecasting effectiveness is given in this section [47].
Definition 4.
Let
\varepsilon_n = \begin{cases} -1, & (y_n - \hat{y}_n)/y_n < -1 \\ (y_n - \hat{y}_n)/y_n, & -1 \leq (y_n - \hat{y}_n)/y_n \leq 1 \\ 1, & (y_n - \hat{y}_n)/y_n > 1 \end{cases}
Definition 5.
A_n = 1 - |\varepsilon_n| is called the forecasting accuracy at time n.
Definition 6.
m_k = \sum_{n=1}^{N} Q_n A_n^k is the k-order forecasting-accuracy effectiveness unit, where k is a positive integer, \{Q_n, n = 1, 2, \ldots, N\} is a discrete probability distribution with \sum_{n=1}^{N} Q_n = 1 and Q_n > 0. In particular, if the a priori information on the discrete probability distribution is unknown, we define Q_n = 1/N, n = 1, 2, \ldots, N.
Definition 7.
m_k is the k-order forecasting-effectiveness unit, and H is a continuous function of k such units. H(m_1, m_2, \ldots, m_k) is the k-order forecasting effectiveness.
Definition 8.
When H(x) = x is a continuous function of one variable, H(m_1) = m_1 is the first-order forecasting effectiveness. When H(x, y) = x(1 - \sqrt{y - x^2}) is a continuous function of two variables, H(m_1, m_2) = m_1(1 - \sqrt{m_2 - m_1^2}) is the second-order forecasting effectiveness.
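Definitions 4-8 can be computed directly as in the following sketch, using the uniform distribution Q_n = 1/N.

```python
# Sketch of the first- and second-order forecasting effectiveness from
# Definitions 4-8, with the uniform distribution Q_n = 1/N.
import numpy as np

def forecasting_effectiveness(y, y_hat):
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat)
    eps = np.clip((y - y_hat) / y, -1.0, 1.0)    # relative error clipped to [-1, 1]
    A = 1.0 - np.abs(eps)                        # forecasting accuracy A_n
    m1, m2 = A.mean(), (A ** 2).mean()           # k-order units with Q_n = 1/N
    first_order = m1
    second_order = m1 * (1.0 - np.sqrt(m2 - m1 ** 2))
    return first_order, second_order
```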

6.2. Experiment I: Using Linear and Nonlinear Functions to Test the Features of the Wind-Speed Series

Only by better understanding the characteristics of the research data can we select an appropriate model for further work, and only by considering these characteristics can we achieve better results.
In general, a linear model fits linear data better, just as a nonlinear model fits nonlinear data better. Data may be linear, nonlinear, or both at the same time. Therefore, it is necessary to judge the linear and nonlinear character of the data used in this paper, and we carried out the following experiment. To verify the linear or nonlinear character of the wind speed, three functions were constructed: (1) the linear function f_1(x) = \pi + \sum_{i=1}^{4} b_i x_i; (2) the nonlinear function f_2(x) = 1/(\pi + \exp(\sum_{i=1}^{4} b_i x_i) + b_5); and (3) the nonlinear function f_3(x) = 2/(\pi + \exp(\sum_{i=1}^{4} b_i x_i + b_5)) - 1.
From the results in Table 2 and Table 3, the hypothesis tests show that the wind-speed data are both linear and nonlinear. Therefore, including both linear and nonlinear models in our proposed forecasting model is correct and necessary.

6.3. Experiment II: Models Tested with Wind-Speed Data From Site 1

In order to evaluate the accuracy and stability of the forecasting system, we selected the average results of 100 trials, which were divided into two parts: accuracy and stability. In the accuracy section, we compare the AE, MAE, MSE, and MAPE values for a single model and combined model (as shown in Table 4).
(a) The results of Table 4 show the following:
(1) CS-BPNN reached the best results on Tuesday, Wednesday, and Thursday compared to the other branch models, with MAPE values of 3.726%, 4.081%, and 5.173%, respectively.
(2) On Monday and Sunday, DE-OSELM obtained the lowest MAPE compared with other branch models.
(3) HW achieved the most accurate forecasting value of all branch models on Friday and Saturday.
(4) Although ARIMA could achieve relatively high forecasting precision, its performance was worse than that of the nonlinear models and HW.
(5) Our combined model had a significant improvement in forecasting accuracy with a lower MAPE compared with all branch models. The MAPE values from Monday to Sunday are 5.691%, 3.698%, 4.078%, 5.166%, 5.692%, 6.443%, and 5.082%, respectively.
(b) The results of Figure 4 show the following:
(1) Part A shows the MAE, MSE, and MAPE of five models, although our combined model did not achieve the lowest MAE every day; the lowest MSE and MAPE were obtained by our proposed model.
(2) Part B shows the forecasting results of CS-BPNN, DE-OSELM, ARIMA, HW, and the combined model.
(3) Part C also shows the 95% confidence intervals (CIs) obtained by each model; the figure indicates that the upper and lower CIs were close among the four branch models, but for the linear models there were more points inside the confidence interval. As Part C shows, the errors of the combined model were very small, and our combined model also achieved a small CI.
Remark 1.
From Table 4 and Figure 4, the results indicate that our proposed model showed better performance than the other branch models. In brief, it can be explained that SSA, which could denoise the time-series wind-speed data as a preprocessing method, and the combined model, which took advantage of the branch models, could improve forecasting accuracy.

6.4. Experiment III: The Performance of Branch Models at Each Time Point.

In this experiment, four models, two nonlinear hybrid models (CS-BPNN and DE-OSELM) and two linear models (ARIMA and HW), were tested for their performance at each time point. In this part, the wind-speed data of Tuesday from Site 2 were used in our experiments.
(a) All Tuesday results in Figure 5 show the following:
(1) Part A shows the MAPE values of the four branch models. From the figure, we can see that ARIMA performed the worst, but the MAPE values of these four models were approximately equal.
(2) From Figure 5, Part B, we can see that the MSE and MAE of the four models were not high. The values are very close between these four models.
(3) Figure 5, Part C also shows the 95% CIs obtained by the four branch models, and it indicates that both the upper and lower CI were close between the two nonlinear models. For the linear models, however, the CI of ARIMA was smaller.
(b) The hourly Tuesday results in Table 5 show the following:
(1) CS-BPNN obtained the lowest MAPE values of all branch models at 1:00, 7:00, 14:00, and 22:00, and the values are 1.87%, 3.90%, 0.07%, and 5.93%, respectively.
(2) At 1:00, 5:00, 8:00, 18:00, and 22:00, DE-OSELM reached the most accurate forecasting value.
(3) With MAPE values of 0.57%, 0.71%, 0.81%, 0.62%, 2.78%, 8.52%, 4.17%, and 1.25%, ARIMA obtained the most accurate results at eight time points on Tuesday from Site 2.
(4) HW achieved the lowest MAPE values of all branch models at 4:00, 6:00, 9:00, 13:00, 17:00, 19:00, and 20:00; among the branch models, HW had the best performance across all hours.
(5) The results reveal that there is no model that can reach the best results at every time point.
Remark 2.
No single model can reach the best results at every time point; each model has advantages and disadvantages. Combined models can aggregate the forecasting models to overcome this dilemma. This experiment provides a reason to apply combined-model theory to forecast wind speed.

6.5. Experiment IV: Stability Comparison with Branch Models

In Experiments II and III, we tested the time-point performance and accuracy of the branch models. In this experiment, we evaluated the stability of the proposed combined model by comparing the standard deviation of the MAPE (STD-MAPE) values (as shown in Table 6). Because there is no randomness in the mathematical models, we tested only the CS-BPNN and DE-OSELM branch models in terms of stability. Here, we show the wind-speed data from Sites 1 and 2 and the MAPE and STD-MAPE results from 100 experiments.
The results in Table 6 show the following:
(1) CS-BPNN performed better than DE-OSELM in accuracy on most test days.
(2) In terms of stability, CS-BPNN obtained a lower STD-MAPE than DE-OSELM at Site 1, but at Site 2, DE-OSELM was better than CS-BPNN.
(3) Our proposed combined model reached the best accuracy and stability values compared to the two other models.
(4) The lowest STD-MAPE value of CS-BPNN was achieved on Wednesday at Site 2; at the same time, the STD-MAPE values of DE-OSELM and our combined model were 0.2430 and 0.1846, respectively.
Remark 3.
From the results shown in Table 6, our combined model obtained the lowest MAPE and STD-MAPE, which means that our combined-model functions could achieve high accuracy and strong stability compared with the single models.

6.6. Experiment V: Comparison with Three Combined Models (Two Objective Functions)

We proposed three other combined models with two objective functions in Section 5.2. In this experiment, we tested the accuracy and stability between our proposed system and the other three combined models. Here, wind-speed data from Site 3 were used in our experiments, and all of the models in this part were run 100 times.
(a) The results in Table 7 and Table 8 show the following:
(1) Table 7 shows the average AE, MAE, and MSE values. It can be seen that the forecasting performance of our proposed combined model was better in terms of accuracy.
(2) The average results of the three two-objective-function models were close to those of the combined system (three objective functions). In particular, on Friday, the difference between the combined model (three objective functions) and the combined models (two objective functions) was only 0.65%.
(3) As shown in Table 8, after running the models 100 times, the standard deviation of MAPE and the MAPE range of the two-objective-function models were much higher and wider than those of the combined model. This shows that the stability of our combined model was much better than that of the combined models with two objective functions.
(b) Figure 6 shows the following:
(1) Part A shows that, with regard to the minimum, maximum, and average MAPE results from 100 experiments, our proposed combined model with three objective functions reached the lowest values compared with the other combined models on each day.
(2) Part C shows that the MAE and MSE of our proposed combined model with three objective functions were also lower than those of the other models.
Remark 4.
The results show that the performance of the combined models with two objective functions was close to that of the combined model with three objective functions when averaging the results. However, the individual results of the combined model with three objective functions can be trusted to a higher degree, as both accuracy and stability were successfully enhanced by our NSGA-III-based combined model in the forecasting problems.

6.7. Experiment VI: Forecasting Results Test

To evaluate the forecasting results of these models, two important evaluation metrics, the DM test and forecasting effectiveness, were applied in this part. We discuss the results from Site 3.
(1) The results of the DM test are shown in Table 9, and they indicate that the combined system differed from the other models at a significant level on the different datasets.
(2) As shown in Table 10, the first-order and second-order forecasting effectiveness of the proposed combined system was better than that of the other models for the seven datasets from the three regions. For example, on the Monday dataset, the first-order forecasting effectiveness values offered by the forecasting models were 0.9110, 0.9143, 0.8432, 0.8851, 0.9284, 0.9291, 0.9249, and 0.9456, respectively.

7. Conclusions

An effective, accurate, and reliable forecast of wind speed is a crucial component of wind-farm management and a significant contributor to the economic development of a nation. However, previous studies have focused on only one criterion, either accuracy or stability, so former models cannot achieve satisfactory results with both high accuracy and strong stability. Wind-speed forecasting is a multicriteria problem, and considering only one criterion (accuracy or stability) cannot produce satisfactory results; it is difficult to select one objective function from multiple objective functions and arduous to ensure that the selected function obtains a low MAPE and strong stability. In this paper, a combined forecasting model based on data preprocessing, MOP theory (which can simultaneously improve the accuracy and stability of wind-speed forecasting), and four models (two hybrid nonlinear models and two linear models) is proposed and successfully applied to wind-speed forecasting, not only overcoming the issue of forecasting accuracy but also addressing the difficulty of forecasting stability. As the results show, our proposed combined model with three objective functions achieved lower MAPE and STDMAPE values than the other models: it improved MAPE by 0.007% to 2.31% and STDMAPE by 0.0044 to 0.3497. Moreover, according to our study, the combined model could be used in large wind farms to forecast wind speed, evaluate wind-energy resources, reduce operating costs, and make better use of wind energy.

Author Contributions

S.Z. carried on programming and writing of the whole manuscript; Y.L. carried on the validation and visualization of experiment results; J.W. and C.W. provided the overall guide of conceptualization and methodology.

Funding

The research was funded by National Natural Science Foundation of China (Grants No.71671029).

Acknowledgments

This work was supported by National Natural Science Foundation of China (Grants No.71671029).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

List of abbreviations:

MM5: Mesoscale Model 5
HW: Holt-Winters
CS: Cuckoo Search algorithm
SVM: Support Vector Machine
AE: Absolute Error
MAE: Mean Absolute Error
MSE: Mean Square Error
PS: Pareto-optimal Set
PF: Pareto Front
DE: Differential Evolution
MOP: Multiple-Objective Optimization Problem
ARIMA: Autoregressive Integrated Moving Average
ARMA: Autoregressive Moving Average
BPNN: Back-Propagation Neural Network
NSGA-III: Non-dominated Sorting Genetic Algorithm III
SOP: Single-Objective Optimization Problem
OSELM: Online Extreme Learning Machine
MAPE: Mean Absolute Percentage Error
RMSE: Root Mean Squared Error
ANN: Artificial Neural Network
Algorithm 1: Denoising
Input:
Y = {y_1, y_2, ..., y_N}: a sequence of the original time series
Output:
Ỹ = {ỹ_1, ỹ_2, ..., ỹ_N}: the sequence of the decomposed (denoised) time series
Parameters:
N: the length of the time series. X: the trajectory matrix.
L: the window length. K: the number of lagged vectors.
L*: the minimum of L and K. K*: the maximum of L and K.
M: the number of repetitions of each trial.
1  /* Obtain the trajectory matrix. */
2  FOR EACH i = 1:L DO
3  |   X(i, :) = (y_i, ..., y_{K+i-1})
4  END FOR
5  X = [y_1 y_2 ... y_K; y_2 y_3 ... y_{K+1}; ... ; y_L y_{L+1} ... y_N]
6  S = X X^T   /* Singular value decomposition. */
7  /* Obtain λ_1, ..., λ_L, the eigenvalues of S (λ_1 ≥ ... ≥ λ_L ≥ 0). */
8  /* Obtain U_1, ..., U_L, the orthonormal eigenvectors of S corresponding to these eigenvalues. */
9  FOR EACH i = 1:d DO
10 |   V_i = X^T U_i / √λ_i;   X_i = √λ_i U_i V_i^T;
11 END FOR
12 X = X_1 + ... + X_d; let I = {i_1, ..., i_p}; X_I = X_{i_1} + ... + X_{i_p};
13 /* The grouping procedure partitions the matrices X_i into several disjoint subsets. */
14 /* Transform each matrix of the grouped decomposition into a new series of length N. */
15 /* Diagonal averaging. */
16 Ỹ = {ỹ_1, ỹ_2, ..., ỹ_N}

The Introduction of NSGA-III

NSGA-III performs the following operations at a generation t. First, the whole population P_t is classified into different nondomination levels in the same way as in NSGA-II, following the principle of nondominated sorting. Then, P_t creates an offspring population Q_t by means of the usual recombination and mutation operators. For every reference point, only one individual of the population is expected to be found, so no additional selection operation is needed in NSGA-III. A new population R_t is then formed by combining P_t and Q_t (R_t = P_t ∪ Q_t). Subsequently, points starting from the first nondominated front are selected for P_{t+1}, one front at a time, until a complete front can no longer be included; we denote this final front as F_L. In general, only some solutions from F_L need to be selected for P_{t+1}, using a niche-preserving operator described next. First, each population member of P_{t+1} and F_L is normalized using the current population spread so that all objective vectors and reference points have commensurate values. Subsequently, each member of P_{t+1} and F_L is associated with a specific reference point using the shortest perpendicular distance d(·) of each population member to a reference line created by joining the origin with a supplied reference point. Then, a careful niching strategy is employed to choose those F_L members that are associated with the least-represented reference points in P_{t+1}. The niching strategy emphasizes selecting a population member for as many supplied reference points as possible; a population member associated with an under-represented or unrepresented reference point is immediately preferred. With continuous emphasis on nondominated individuals, the whole process is then expected to find one population member corresponding to each supplied reference point close to the Pareto-optimal front, provided that the genetic variation operators (recombination and mutation) are capable of producing the respective solutions. The use of well-spread reference points ensures a well-distributed set of trade-off points at the end.
The pseudocode for the NSGA-III is provided in Algorithm 2.
Algorithm 2: NSGA-III
Input:
Y = { y 1 ,   y 2 ,   , y N } a sequence of time series wind speed data
Output:
Y ^ =   { y ^ 1 ,   y ^ 2 , ,   y ^ N } —a sequence of time series wind speed forecasting data
Fitness function
    \mathrm{Minimize} \begin{cases} f_1 = TIC(\hat{Y}, Y) \\ f_2 = RMSE(\hat{Y}, Y) \\ f_3 = MAPE(\hat{Y}, Y) \end{cases}
Parameters:
P_0: the initial population. NPop: the size of the population.
t: the number of iterations. W_i: the weight of the i-th single model.
Itmax: the maximum number of iterations.

References

  1. Available online: http://www.sohu.com/a/192783727_472920 (accessed on 18 September 2018 ).
  2. Wiser, R.; Bolinger, M. Wind Technologies Market Report; Tech. Rep.; Lawrence Berkeley National Laboratory: Berkeley, CA, USA, 2016.
  3. Yu, J.; Ji, F.; Zhang, L.; Chen, Y. An over painted oriental arts: Evaluation of the development of the Chinese renewable energy market using the wind power market as a model. Energy Policy 2009, 37, 5221–5225. [Google Scholar] [CrossRef]
  4. Soman, S.; Zareipour, H.; Malik, O.; Mandal, P. A review of wind power and wind speed forecasting methods with different time horizons. In Proceedings of the North American Power Symposium (NAPS), Arlington, TX, USA, 26–28 September 2010; pp. 1–8. [Google Scholar]
  5. De Giorgi, M.G.; Ficarella, A.; Tarantino, M. Assessment of the benefits of numerical weather predictions in wind power forecasting based on statistical methods. Energy 2011, 36, 3968–3978. [Google Scholar] [CrossRef]
  6. Cassola, F.; Burlando, M. Wind speed and wind energy forecast through kalman filtering of numerical weather prediction model output. Appl. Energy 2012, 99, 154–166. [Google Scholar] [CrossRef]
  7. Wang, J.; Heng, J.; Xiao, L.; Wang, C. Research and application of a combined model based on multi-objective optimization for multi-step ahead wind speed forecasting. Energy 2017, 125, 591–613. [Google Scholar] [CrossRef]
  8. Zhao, J.; Guo, Z.H.; Su, Z.Y.; Zhao, Z.Y.; Xiao, X.; Liu, F. An improved multi-step forecasting model based on WRF ensembles and creative fuzzy systems for wind speed. Appl. Energy 2016, 162, 808–826. [Google Scholar] [CrossRef]
  9. Yang, H.; Jiang, Z.; Lu, H. A Hybrid Wind Speed Forecasting System Based on a ‘Decomposition and Ensemble’ Strategy and Fuzzy Time Series. Energies 2017, 10, 1422. [Google Scholar] [CrossRef]
  10. Milligan, M.; Schwartz, M.; Wan, Y.H. Statistical Wind Power Forecasting for U.S. In Proceedings of the Wind Farms: Preprint Conference on Probability & Statistics in the Atmospheric Sciences/American Meteorological Society Meeting, Seattle, WA, USA, 11–15 January 2004. [Google Scholar]
  11. Flores, J.J.; Loaeza, R.; Rodríguez, H.; Cadenas, E. Wind Speed Forecasting Using a Hybrid Neural-Evolutive Approach. In Mexican International Conference on Artificial Intelligence; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  12. Flores, J.J.; Graff, M.; Rodriguez, H. Evolutive design of ARMA and ANN models for time series forecasting. Renew. Energy 2012, 44, 225–230. [Google Scholar] [CrossRef]
  13. El-Fouly, T.H.M.; El-Saadany, E.F.; Salama, M.M.A. Grey predictor for wind energy conversion systems output power prediction. IEEE Trans. Power Syst. 2006, 21, 1450–1452. [Google Scholar] [CrossRef]
  14. Atsalakis, G.; Nezis, D.; Zopounidis, C. Neuro-fuzzy versus traditional models for forecasting wind energy production. In Genetic Programming and Evolvable Machines, Advances in Data Analysis; Springer: Boston, MA, USA, 2010; pp. 275–287. [Google Scholar]
  15. Tanga, J.; Brousteb, A.; Tsuia, K.L. Some improvements of wind speed markov chain modeling. Renew. Energy 2015, 81, 52–56. [Google Scholar] [CrossRef]
  16. Esen, H.; Inalli, M.; Sengur, A.; Esen, M. Performance prediction of a ground-coupled heat pump system using artificial neural networks. Expert Syst. Appl. 2008, 35, 1940–1948. [Google Scholar] [CrossRef]
  17. Zhao, X.; Wang, C.; Su, J.; Wang, J. Research and application based on the swarm intelligence algorithm and artificial intelligence for wind farm decision system. Renew. Energy 2019, 134, 681–697. [Google Scholar] [CrossRef]
  18. Fu, T.; Wang, C. A Hybrid Wind Speed Forecasting Method and Wind Energy Resource Analysis Based on a Swarm Intelligence Optimization Algorithm and an Artificial Intelligence Model. Sustainability 2018, 10, 3913. [Google Scholar] [CrossRef]
  19. Yao, Z.; Wang, C. A Hybrid Model Based on A Modified Optimization Algorithm and an Artificial Intelligence Algorithm for Short-Term Wind Speed Multi-Step Ahead Forecasting. Sustainability 2018, 10, 1443. [Google Scholar] [CrossRef]
  20. Wang, Z.; Wang, C.; Wu, J. Wind Energy Potential Assessment and Forecasting Research Based on the Data Pre-Processing Technique and Swarm Intelligent Optimization Algorithms. Sustainability 2016, 8, 1191. [Google Scholar] [CrossRef]
  21. Heng, J.; Wang, C.; Zhao, X.; Xiao, L. Research and Application Based on Adaptive Boosting Strategy and Modified CGFPA Algorithm: A Case Study for Wind Speed Forecasting. Sustainability 2016, 8, 235. [Google Scholar] [CrossRef]
  22. Du, P.; Wang, J.; Guo, Z.; Yang, W. Research and application of a novel hybrid forecasting system based on multi-objective optimization for wind speed forecasting. Energy Convers. Manag. 2017, 150, 90–107. [Google Scholar] [CrossRef]
  23. Wang, J.; Du, P.; Niu, T.; Yang, W. A novel hybrid system based on a new proposed algorithm—Multi-Objective Whale Optimization Algorithm for wind speed forecasting. Appl. Energy 2017, 208, 344–360. [Google Scholar] [CrossRef]
  24. Jiang, P.; Yang, H.; Heng, J. A hybrid forecasting system based on fuzzy time series and multi-objective optimization for wind speed forecasting. Appl. Energy 2019, 235, 786–801. [Google Scholar] [CrossRef]
  25. Dalto, M.; Matuško, J.; Vašak, M. Deep neural networks for ultra- short-term wind forecasting. In Proceedings of the 2015 IEEE International Conference on Industrial Technology (ICIT), Seville, Spain, 17–19 March 2015. [Google Scholar]
  26. Liu, D.; Niu, D.; Wang, H.; Fan, L. Short-term wind speed forecasting using wavelet transform and support vector machines optimized by genetic algorithm. Renew. Energy 2014, 62, 592–597. [Google Scholar] [CrossRef]
  27. Hu, J.; Wang, J.; Ma, K. A hybrid technique for short-term wind speed prediction. Energy 2015, 81, 563–574. [Google Scholar] [CrossRef]
  28. Barbounis, T.G.; Theocharis, J.B. A locally recurrent fuzzy neural network with application to the wind speed prediction using spatial correlation. Neurocomputing 2007, 70, 1525–1542. [Google Scholar] [CrossRef]
  29. Focken, U.; Lange, M.; Waldl, H. Previento—A wind power prediction system with an innovative upscaling algorithm. In Proceedings of the European Wind Energy Conference, Copenhagen, Denmark, 2–6 July 2001; Volume 276. [Google Scholar]
  30. Landberg, L. Short-term prediction of the power production from wind farms. J. Wind Eng. Aerodyn. 1999, 80, 207–220. [Google Scholar] [CrossRef]
  31. Iversen, E.B.; Morales, J.M.; Møller, J.K.; Madsen, H. Short-term probabilistic forecasting of wind speed using stochastic differential equations. Int. J. Forecast. 2015, 32, 981–990. [Google Scholar] [CrossRef]
  32. Bates, J.M.; Granger, C.W.J. The combination of forecasts. In Essays in Econometrics; Cambridge University Press: Cambridge, UK, 2001; pp. 451–468. [Google Scholar]
  33. Xiao, L.; Wang, J.; Hou, R.; Wu, J. A combined model based on data pre-analysis and weight coefficients optimization for electrical load forecasting. Energy 2015, 82, 524–549. [Google Scholar] [CrossRef]
  34. Jain, H.; Deb, K. An evolutionary many-objective optimization algorithm using reference-point based nondominated sorting approach, part II: Handling constraints and extending to an adaptive approach. IEEE Trans. Evol. Comput. 2014, 18, 602–622. [Google Scholar] [CrossRef]
  35. Abdollahzade, M.; Miranian, A.; Hassani, H.; Iranmanesh, H. A new hybrid enhanced local linear neuro-fuzzy model based on the optimized singular spectrum analysis and its application for nonlinear and chaotic time series forecasting. Inf. Sci. 2015, 295, 107–125. [Google Scholar] [CrossRef]
  36. Guo, Z.H.; Wu, J.; Lu, H.Y.; Wang, J.Z. A case study on a hybrid wind speed forecasting method using BP neural network. Knowl.-Based Syst. 2011, 24, 1048–1056. [Google Scholar] [CrossRef]
  37. Yang, X.S.; Deb, S. Cuckoo search via Lévy flights. In Proceedings of the World Congress on Nature & Biologically Inspired Computing, Coimbatore, India, 9–11 December 2009; pp. 210–214. [Google Scholar]
  38. Huang, G.B.; Zhu, Q.Y.; Siew, C.K. Extreme learning machine: A new learning scheme of feedforward neural networks. In Proceedings of the 2004 IEEE International Joint Conference on Neural Networks (IJCNN), Budapest, Hungary, 25–29 July 2004; Volume 2, pp. 985–990. [Google Scholar]
  39. Liang, N.Y.; Huang, G.B.; Saratchandran, P.; Sundararajan, N. A fast and accurate online sequential learning algorithm for feedforward networks. IEEE Trans. Neural Netw. 2006, 17, 1411–1423. [Google Scholar] [CrossRef]
  40. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Adaptive Scheme for Global Optimization over Continuous Spaces; Tech. Rep. TR-95-012; ICSI: Berkeley, CA, USA, 1995. [Google Scholar]
  41. Wang, Y.; Wang, J.; Zhao, G.; Dong, Y. Application of residual modification approach in seasonal ARIMA for electricity demand forecasting: A case study of China. Energy Policy 2012, 48, 284–294. [Google Scholar] [CrossRef]
  42. Grubb, H.; Mason, A. Long lead-time forecasting of UK air passengers by Holt–Winters methods with damped trend. Int. J. Forecast. 2001, 17, 71–82. [Google Scholar] [CrossRef]
  43. Stock, J.H.; Watson, M.W. Combination forecasts of output growth in a seven-country data set. J. Forecast. 2004, 23, 405–430. [Google Scholar] [CrossRef]
  44. Geweke, J.; Amisano, G. Optimal prediction pools. J. Econom. 2011, 164, 130–141. [Google Scholar] [CrossRef]
  45. Das, I.; Dennis, J.E. Normal-Boundary Intersection: A New Method for Generating the Pareto Surface in Nonlinear Multicriteria Optimization Problems. SIAM J. Optim. 1998, 8, 631–657. [Google Scholar] [CrossRef]
  46. Diebold, F.X.; Mariano, R.S. Comparing predictive accuracy. J. Bus. Econ. Stat. 1995, 13, 253–263. [Google Scholar]
  47. Chen, H.Y.; Hou, D.P. Research on superior combination forecasting models based on forecasting effective measures. J. Univ. Sci. Technol. China 2002, 2, 172–180. [Google Scholar]
Figure 1. The flow chart of the hybrid Back-Propagation Neural Network (BPNN).
Figure 2. The flow chart of the hybrid Online Sequential Extreme Learning Machine (OSELM).
Figure 3. The flow chart of the proposed combined model.
Figure 4. The forecasting results from Site 1.
Figure 5. The Tuesday forecasting results of each time point from Site 2.
Figure 6. The comparison of four different combined models.
Table 1. Four metrics.

Metric | Definition | Equation
AE | The average forecast error of the N forecast results | $\mathrm{AE} = \frac{1}{N}\sum_{n=1}^{N}\left(y_{n}-\hat{y}_{n}\right)$
MAE | The average absolute forecast error of the N forecast results | $\mathrm{MAE} = \frac{1}{N}\sum_{n=1}^{N}\left|y_{n}-\hat{y}_{n}\right|$
MSE | The average of the squared prediction errors | $\mathrm{MSE} = \frac{1}{N}\sum_{n=1}^{N}\left(y_{n}-\hat{y}_{n}\right)^{2}$
MAPE | The average of the absolute percentage errors | $\mathrm{MAPE} = \frac{1}{N}\sum_{n=1}^{N}\left|\frac{y_{n}-\hat{y}_{n}}{y_{n}}\right| \times 100\%$
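To make the definitions in Table 1 concrete, the following minimal Python sketch computes the four metrics for a pair of observation and forecast vectors; the array values are hypothetical and only illustrate the formulas.

```python
import numpy as np

def error_metrics(y, y_hat):
    """Compute AE, MAE, MSE, and MAPE (Table 1) for observations y and forecasts y_hat."""
    e = y - y_hat
    ae = e.mean()                          # AE: mean forecast error (bias)
    mae = np.abs(e).mean()                 # MAE: mean absolute error
    mse = (e ** 2).mean()                  # MSE: mean squared error
    mape = np.abs(e / y).mean() * 100.0    # MAPE: mean absolute percentage error, in %
    return ae, mae, mse, mape

# Hypothetical wind-speed observations and forecasts (m/s), for illustration only
y = np.array([5.2, 6.1, 4.8, 5.9, 6.3])
y_hat = np.array([5.0, 6.4, 4.9, 5.6, 6.0])
print(error_metrics(y, y_hat))
```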
Table 2. Testing the wind-speed data by fitting linear and nonlinear functions.

Model | Site | Number of Observations | Error Degrees of Freedom | R-Squared | Adjusted R-Squared | F-Statistic vs. Constant Model | p-Value
y ~ f1(x) | Site 1 | 2008 | 1996 | 0.954 | 0.954 | 4680 | 0
y ~ f1(x) | Site 2 | 2008 | 1996 | 0.939 | 0.939 | 4300 | 0
y ~ f1(x) | Site 3 | 2008 | 1996 | 0.938 | 0.938 | 4460 | 0
y ~ f2(x) | Site 1 | 2008 | 1996 | 0.935 | 0.935 | 9840 | 0
y ~ f2(x) | Site 2 | 2008 | 1996 | 0.933 | 0.932 | 9620 | 0
y ~ f2(x) | Site 3 | 2008 | 1996 | 0.936 | 0.936 | 10710 | 0
y ~ f3(x) | Site 1 | 2008 | 1996 | 0.923 | 0.923 | 8510 | 0
y ~ f3(x) | Site 2 | 2008 | 1996 | 0.925 | 0.924 | 8350 | 0
y ~ f3(x) | Site 3 | 2008 | 1996 | 0.911 | 0.910 | 8460 | 0
Table 3. The explanations of the test parameters.

Parameter | Explanation
Number of observations | Number of rows without any NaN values.
Error degrees of freedom | n − p, where n is the number of observations and p is the number of coefficients in the model, including the intercept.
R-squared and adjusted R-squared | Coefficient of determination and adjusted coefficient of determination, respectively.
F-statistic vs. constant model | Test statistic for the F-test on the regression model; it tests for a significant regression relationship between the response variable and the predictor variables.
p-value | p-value for the F-statistic of the hypothesis test that the corresponding coefficients are equal to zero.
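For readers who wish to reproduce the kind of diagnostics listed in Tables 2 and 3, the sketch below fits a regression with statsmodels and prints the corresponding quantities. It is only an illustration under assumed data and an assumed cubic basis; the actual functions f1, f2, and f3 and the fitting tool used by the authors are described in the body of the paper.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data standing in for a wind-speed series; the real series and the
# functions f1(x), f2(x), f3(x) are defined in the paper body.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 25.0, size=2008)
y = 0.8 * x + 0.03 * x**2 + rng.normal(scale=0.5, size=x.size)

# Assumed design: cubic polynomial basis plus an intercept (p = 4 coefficients).
X = sm.add_constant(np.column_stack([x, x**2, x**3]))
fit = sm.OLS(y, X).fit()

print("Number of observations:", int(fit.nobs))
print("Error degrees of freedom:", int(fit.df_resid))       # n - p, intercept included in p
print("R-squared:", round(fit.rsquared, 3))
print("Adjusted R-squared:", round(fit.rsquared_adj, 3))
print("F-statistic vs. constant model:", round(fit.fvalue, 1))
print("p-value:", fit.f_pvalue)
```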
Table 4. The forecasting results from Site 1.

Metric | Model | Mon | Tue | Wed | Thu | Fri | Sat | Sun
AE | CS-BPNN | 0.0466 | −0.0547 | −0.0269 | −0.0265 | −0.0194 | −0.0261 | −0.0386
AE | DE-OSELM | 0.0120 | −0.0164 | −0.0108 | −0.0238 | −0.0444 | −0.0108 | −0.0297
AE | ARIMA | 0.2151 | −0.2508 | −0.2658 | −0.2054 | −0.3563 | −0.3744 | −0.3101
AE | HW | 0.0184 | 0.0178 | −0.0187 | 0.0043 | −0.0083 | −0.2792 | −0.0068
AE | Combined Model | 0.0013 | −0.0091 | −0.0158 | −0.0156 | −0.0104 | −0.0119 | 0.0047
MAE | CS-BPNN | 0.2810 | 0.2520 | 0.2129 | 0.2289 | 0.4479 | 0.4294 | 0.3179
MAE | DE-OSELM | 0.2759 | 0.2531 | 0.2144 | 0.2234 | 0.4472 | 0.4282 | 0.3182
MAE | ARIMA | 0.2770 | 0.2638 | 0.2717 | 0.2363 | 0.4559 | 0.4312 | 0.3255
MAE | HW | 0.2942 | 0.2652 | 0.2366 | 0.2324 | 0.4252 | 0.4153 | 0.3412
MAE | Combined Model | 0.2420 | 0.2520 | 0.2141 | 0.2207 | 0.4445 | 0.4196 | 0.3161
MSE | CS-BPNN | 0.1916 | 0.1074 | 0.0769 | 0.0780 | 0.4441 | 0.3709 | 0.1713
MSE | DE-OSELM | 0.1876 | 0.1075 | 0.0754 | 0.0812 | 0.4351 | 0.3705 | 0.1695
MSE | ARIMA | 0.1818 | 0.1591 | 0.0873 | 0.0821 | 0.3966 | 0.3914 | 0.1287
MSE | HW | 0.1999 | 0.1107 | 0.0926 | 0.0882 | 0.3486 | 0.2850 | 0.1901
MSE | Combined Model | 0.1812 | 0.1053 | 0.0752 | 0.0771 | 0.3322 | 0.3580 | 0.1269
MAPE | CS-BPNN | 5.908% | 3.726% | 4.081% | 5.173% | 5.729% | 6.628% | 5.103%
MAPE | DE-OSELM | 5.768% | 3.728% | 4.097% | 5.445% | 5.748% | 6.547% | 5.093%
MAPE | ARIMA | 6.227% | 4.299% | 5.513% | 5.610% | 5.967% | 6.469% | 5.522%
MAPE | HW | 6.493% | 3.999% | 4.517% | 5.509% | 5.721% | 6.457% | 5.446%
MAPE | Combined Model | 5.691% | 3.698% | 4.078% | 5.166% | 5.692% | 6.443% | 5.082%
Table 5. The Tuesday forecasting results of each time point from Site 2.

Time | CS-BPNN (MAE / MSE / MAPE) | DE-OSELM (MAE / MSE / MAPE) | ARIMA (MAE / MSE / MAPE) | HW (MAE / MSE / MAPE)
0:00 | 0.2906 / 0.0844 / 2.91% | 0.2845 / 0.0809 / 2.85% | 0.4337 / 0.1881 / 4.34% | 1.1450 / 1.3110 / 11.45%
1:00 | 0.2043 / 0.0418 / 1.87% | 0.1720 / 0.0296 / 1.58% | 0.2185 / 0.0477 / 2.00% | 0.3371 / 0.1137 / 3.09%
2:00 | 0.2459 / 0.0605 / 2.20% | 0.2792 / 0.0780 / 2.49% | 0.0642 / 0.0041 / 0.57% | 0.3456 / 0.1195 / 3.09%
3:00 | 0.2195 / 0.0482 / 1.91% | 0.2612 / 0.0682 / 2.27% | 0.0811 / 0.0066 / 0.71% | 0.2612 / 0.0682 / 2.27%
4:00 | 0.6579 / 0.4328 / 7.00% | 0.6607 / 0.4365 / 7.03% | 0.4615 / 0.2130 / 4.91% | 0.4275 / 0.1827 / 4.55%
5:00 | 0.1567 / 0.0246 / 1.65% | 0.1472 / 0.0217 / 1.55% | 0.2550 / 0.0650 / 2.68% | 0.2491 / 0.0620 / 2.62%
6:00 | 0.2231 / 0.0498 / 2.59% | 0.2047 / 0.0419 / 2.38% | 0.4538 / 0.2060 / 5.28% | 0.0373 / 0.0014 / 0.43%
7:00 | 0.3123 / 0.0975 / 3.90% | 0.3237 / 0.1048 / 4.05% | 0.5266 / 0.2773 / 6.58% | 0.3450 / 0.1190 / 4.31%
8:00 | 0.1321 / 0.0174 / 1.63% | 0.1151 / 0.0133 / 1.42% | 0.2286 / 0.0522 / 2.82% | 0.1605 / 0.0258 / 1.98%
9:00 | 0.1588 / 0.0252 / 1.87% | 0.1588 / 0.0252 / 1.87% | 0.1485 / 0.0220 / 1.75% | 0.1168 / 0.0136 / 1.37%
10:00 | 0.5914 / 0.3497 / 5.97% | 0.5923 / 0.3509 / 5.98% | 0.0805 / 0.0065 / 0.81% | 0.2916 / 0.0850 / 2.95%
11:00 | 0.6408 / 0.4106 / 6.47% | 0.6467 / 0.4182 / 6.53% | 0.0611 / 0.0037 / 0.62% | 0.5104 / 0.2605 / 5.16%
12:00 | 0.3194 / 0.1020 / 3.67% | 0.3130 / 0.0980 / 3.60% | 0.2415 / 0.0583 / 2.78% | 0.2863 / 0.0820 / 3.29%
13:00 | 0.7038 / 0.4953 / 9.78% | 0.6736 / 0.4537 / 9.36% | 0.4774 / 0.2279 / 6.63% | 0.0910 / 0.0083 / 1.26%
14:00 | 0.0005 / 0.0001 / 0.07% | 0.0098 / 0.0002 / 0.13% | 0.2089 / 0.0436 / 2.68% | 0.0553 / 0.0031 / 0.71%
15:00 | 0.6799 / 0.4622 / 10.79% | 0.6923 / 0.4793 / 10.99% | 0.5366 / 0.2879 / 8.52% | 0.5679 / 0.3225 / 9.01%
16:00 | 0.2851 / 0.0813 / 4.60% | 0.3025 / 0.0915 / 4.88% | 0.2584 / 0.0667 / 4.17% | 0.5191 / 0.2694 / 8.37%
17:00 | 0.1840 / 0.0338 / 4.09% | 0.2230 / 0.0497 / 4.95% | 0.6237 / 0.3890 / 13.86% | 0.1046 / 0.0109 / 2.32%
18:00 | 0.2602 / 0.0677 / 5.91% | 0.2561 / 0.0656 / 5.82% | 0.3537 / 0.1251 / 8.04% | 0.5095 / 0.2596 / 11.58%
19:00 | 0.0176 / 0.0003 / 0.46% | 0.0734 / 0.0054 / 1.93% | 0.3991 / 0.1593 / 10.50% | 0.0206 / 0.0004 / 0.54%
20:00 | 0.1342 / 0.0180 / 2.74% | 0.1254 / 0.0157 / 2.56% | 0.2098 / 0.0440 / 4.28% | 0.0451 / 0.0020 / 0.92%
21:00 | 0.3694 / 0.1364 / 5.96% | 0.4101 / 0.1682 / 6.61% | 0.0774 / 0.0060 / 1.25% | 0.4237 / 0.1795 / 6.83%
22:00 | 0.3561 / 0.1268 / 5.93% | 0.3627 / 0.1316 / 6.05% | 0.4976 / 0.2476 / 8.29% | 0.7679 / 0.5897 / 12.80%
23:00 | 0.1832 / 0.0336 / 2.78% | 0.0927 / 0.0086 / 1.40% | 0.1492 / 0.0223 / 2.26% | 0.5378 / 0.2892 / 8.15%
Average | 0.3188 / 0.1559 / 4.11% | 0.3254 / 0.1657 / 4.23% | 0.2927 / 0.1093 / 4.36% | 0.2821 / 0.1340 / 3.78%
Table 6. The stability and accuracy of the three models.

Metric | Day | CS-BPNN (Site 1) | DE-OSELM (Site 1) | Combined Model (Site 1) | CS-BPNN (Site 2) | DE-OSELM (Site 2) | Combined Model (Site 2)
MAPE | Mon | 5.91% | 5.77% | 5.69% | 4.55% | 4.80% | 4.20%
MAPE | Tue | 3.73% | 3.73% | 3.70% | 4.11% | 4.23% | 3.74%
MAPE | Wed | 4.08% | 4.10% | 4.08% | 4.07% | 4.19% | 3.67%
MAPE | Thu | 5.17% | 5.44% | 5.17% | 5.29% | 5.50% | 4.81%
MAPE | Fri | 5.73% | 5.75% | 5.69% | 6.15% | 6.10% | 5.38%
MAPE | Sat | 6.63% | 6.55% | 6.44% | 5.70% | 5.79% | 5.04%
MAPE | Sun | 5.10% | 5.09% | 5.08% | 4.38% | 4.40% | 3.93%
STD-MAPE | Mon | 0.4910 | 0.5220 | 0.4617 | 0.3704 | 0.3492 | 0.3448
STD-MAPE | Tue | 0.2555 | 0.2676 | 0.1871 | 0.2668 | 0.2434 | 0.2001
STD-MAPE | Wed | 0.2389 | 0.2752 | 0.2176 | 0.2051 | 0.2430 | 0.1846
STD-MAPE | Thu | 0.7592 | 0.8261 | 0.6506 | 1.1536 | 0.9245 | 0.8039
STD-MAPE | Fri | 0.3601 | 0.3754 | 0.3243 | 0.4756 | 0.4489 | 0.3702
STD-MAPE | Sat | 0.3714 | 0.4134 | 0.3636 | 0.3705 | 0.3519 | 0.3470
STD-MAPE | Sun | 0.3162 | 0.3562 | 0.2797 | 0.2827 | 0.2759 | 0.2048
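As a rough illustration of how the accuracy and stability figures in Table 6 can be obtained, the snippet below computes the absolute percentage errors of a forecast series, their mean (MAPE), and their standard deviation; here STD-MAPE is assumed to denote the standard deviation of the absolute percentage errors, and the numbers are hypothetical.

```python
import numpy as np

def ape(y, y_hat):
    """Absolute percentage error at each forecast step, in percent."""
    return np.abs((y - y_hat) / y) * 100.0

# Hypothetical observations and forecasts, for illustration only
y = np.array([5.2, 6.1, 4.8, 5.9, 6.3, 5.5])
y_hat = np.array([5.0, 6.4, 4.9, 5.6, 6.0, 5.7])

errors = ape(y, y_hat)
print("MAPE:", errors.mean())            # accuracy
print("STD-MAPE:", errors.std(ddof=1))   # stability (spread of the percentage errors)
```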
Table 7. The results for the combined model compared with the combined models using two objective functions.

Day | Combined Model * (AE / MAE / MSE) | Combined Model ** (AE / MAE / MSE) | Combined Model *** (AE / MAE / MSE) | Combined Model (AE / MAE / MSE)
Monday | 0.01815 / 0.33185 / 0.00233 | 0.01568 / 0.32888 / 0.00225 | 0.07684 / 0.34506 / 0.00590 | 0.03088 / 0.26371 / 0.00095
Tuesday | −0.03139 / 0.36454 / 0.00099 | −0.03672 / 0.36132 / 0.00135 | 0.04987 / 0.35975 / 0.00249 | 0.02445 / 0.30748 / 0.00060
Wednesday | −0.00596 / 0.28940 / 0.00204 | −0.01435 / 0.28707 / 0.00221 | 0.04960 / 0.29631 / 0.00246 | 0.03324 / 0.21287 / 0.00110
Thursday | 0.03749 / 0.32268 / 0.00341 | 0.03271 / 0.32185 / 0.00307 | 0.09834 / 0.33558 / 0.00967 | 0.04553 / 0.22493 / 0.00207
Friday | −0.05878 / 0.55131 / 0.00345 | −0.06042 / 0.54689 / 0.00365 | 0.02861 / 0.55014 / 0.00082 | 0.01317 / 0.50229 / 0.00017
Saturday | −0.06960 / 0.49825 / 0.00484 | −0.07203 / 0.49099 / 0.00519 | 0.01423 / 0.50229 / 0.00020 | 0.00269 / 0.42969 / 0.00001
Sunday | −0.05840 / 0.38680 / 0.00341 | −0.06404 / 0.38348 / 0.00410 | 0.02042 / 0.38329 / 0.00042 | 0.01426 / 0.29934 / 0.00020
Table 8. Mean absolute percentage error (MAPE) of the results for the combined model compared with the combined models using two objective functions.

Day | Combined Model * (MAX / MIN / AVE / STD) | Combined Model ** (MAX / MIN / AVE / STD) | Combined Model *** (MAX / MIN / AVE / STD) | Combined Model (MAX / MIN / AVE / STD)
Monday | 11.47% / 6.55% / 7.60% / 0.7837 | 11.08% / 5.94% / 7.43% / 0.7540 | 11.12% / 6.30% / 7.82% / 0.7138 | 8.83% / 5.01% / 5.86% / 0.6240
Tuesday | 5.92% / 4.38% / 4.86% / 0.2528 | 5.83% / 3.96% / 4.81% / 0.2772 | 5.99% / 3.93% / 4.86% / 0.2263 | 4.80% / 3.64% / 4.05% / 0.2023
Wednesday | 6.27% / 4.65% / 5.25% / 0.3012 | 6.34% / 4.37% / 5.21% / 0.3227 | 6.62% / 4.32% / 5.42% / 0.2811 | 5.69% / 3.46% / 3.93% / 0.2775
Thursday | 18.21% / 6.73% / 8.60% / 1.6167 | 16.66% / 6.19% / 8.41% / 1.6433 | 14.05% / 7.03% / 8.95% / 1.6301 | 13.98% / 4.91% / 6.64% / 1.5877
Friday | 7.98% / 5.75% / 6.60% / 0.4075 | 8.07% / 5.41% / 6.55% / 0.4319 | 8.33% / 5.30% / 6.60% / 0.3719 | 6.67% / 5.16% / 5.90% / 0.3199
Saturday | 8.03% / 5.67% / 6.62% / 0.3858 | 7.75% / 5.51% / 6.55% / 0.3903 | 8.32% / 5.36% / 6.69% / 0.3555 | 6.33% / 4.94% / 5.63% / 0.2820
Sunday | 6.08% / 4.37% / 4.94% / 0.3127 | 6.16% / 4.10% / 4.88% / 0.3043 | 6.38% / 3.94% / 4.90% / 0.3148 | 4.70% / 3.37% / 3.82% / 0.2345
* Combined Model with the objective functions TIC(Ŷ, Y) and RMSE(Ŷ, Y). ** Combined Model with the objective functions TIC(Ŷ, Y) and MAPE(Ŷ, Y). *** Combined Model with the objective functions RMSE(Ŷ, Y) and MAPE(Ŷ, Y).
Table 9. Results for the Diebold–Mariano test.

Day | CS-BPNN | DE-OSELM | ARIMA | HW | CM * | CM ** | CM ***
Mon | 3.2420 | 2.9318 | 4.6351 | 3.2245 | 2.6696 | 2.6639 | 3.0670
Tue | 6.0215 | 5.9697 | 6.5906 | 5.1321 | 3.2101 | 2.8908 | 3.3343
Wed | 7.5394 | 7.4828 | 9.4464 | 5.4080 | 5.6746 | 5.5454 | 6.7144
Thu | 7.5371 | 7.2595 | 9.5547 | 5.0369 | 6.4416 | 6.3672 | 6.4491
Fri | 5.7344 | 5.3455 | 2.9753 | 0.8264 | 1.7598 | 1.4657 | 1.6069
Sat | 5.8464 | 5.7280 | 3.9762 | 1.3795 | 1.8977 | 1.5754 | 1.6169
Sun | 5.3999 | 5.3771 | 9.1815 | 4.1056 | 4.7345 | 4.6653 | 5.2127
* Combined Model with the objective functions TIC(Ŷ, Y) and RMSE(Ŷ, Y). ** Combined Model with the objective functions TIC(Ŷ, Y) and MAPE(Ŷ, Y). *** Combined Model with the objective functions RMSE(Ŷ, Y) and MAPE(Ŷ, Y).
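The values in Table 9 come from the Diebold–Mariano (DM) test [46], which compares the forecast losses of two models. A minimal sketch of the one-step-ahead DM statistic under squared-error loss is given below (without a long-run variance correction); the error series are hypothetical.

```python
import numpy as np
from scipy import stats

def diebold_mariano(e1, e2, power=2):
    """One-step-ahead DM statistic for two forecast-error series e1, e2 under
    the loss |e|**power (power=2 corresponds to squared-error loss)."""
    d = np.abs(e1) ** power - np.abs(e2) ** power      # loss differential
    dm = d.mean() / np.sqrt(d.var(ddof=1) / d.size)    # asymptotically N(0, 1)
    p_value = 2.0 * (1.0 - stats.norm.cdf(abs(dm)))
    return dm, p_value

# Hypothetical errors of a benchmark model and of the combined model
rng = np.random.default_rng(1)
e_benchmark = rng.normal(scale=0.45, size=288)
e_combined = rng.normal(scale=0.40, size=288)
print(diebold_mariano(e_benchmark, e_combined))
```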
Table 10. Results for the forecasting effectiveness.

Day | Order | CS-BPNN | DE-OSELM | ARIMA | HW | CM * | CM ** | CM *** | CM
Mon | 1-order | 0.9110 | 0.9143 | 0.8432 | 0.8851 | 0.9284 | 0.9291 | 0.9249 | 0.9456
Mon | 2-order | 0.8311 | 0.8343 | 0.7715 | 0.8121 | 0.8611 | 0.8621 | 0.8562 | 0.8973
Tue | 1-order | 0.9453 | 0.9456 | 0.9116 | 0.9302 | 0.9520 | 0.9525 | 0.9521 | 0.9605
Tue | 2-order | 0.9023 | 0.9030 | 0.8709 | 0.8825 | 0.9154 | 0.9162 | 0.9141 | 0.9259
Wed | 1-order | 0.9390 | 0.9398 | 0.9135 | 0.9399 | 0.9481 | 0.9486 | 0.9464 | 0.9616
Wed | 2-order | 0.8973 | 0.8987 | 0.8817 | 0.9018 | 0.9112 | 0.9117 | 0.9098 | 0.9313
Thu | 1-order | 0.8985 | 0.9018 | 0.8831 | 0.9276 | 0.9230 | 0.9228 | 0.9166 | 0.9471
Thu | 2-order | 0.7773 | 0.7776 | 0.7836 | 0.8693 | 0.8281 | 0.8281 | 0.8134 | 0.8816
Fri | 1-order | 0.9269 | 0.9279 | 0.8812 | 0.9212 | 0.9351 | 0.9356 | 0.9351 | 0.9423
Fri | 2-order | 0.8623 | 0.8642 | 0.8325 | 0.8614 | 0.8784 | 0.8791 | 0.8775 | 0.8862
Sat | 1-order | 0.9262 | 0.9262 | 0.8750 | 0.9174 | 0.9348 | 0.9357 | 0.9342 | 0.9446
Sat | 2-order | 0.8657 | 0.8655 | 0.8269 | 0.8675 | 0.8817 | 0.8826 | 0.8812 | 0.8935
Sun | 1-order | 0.9468 | 0.9466 | 0.9110 | 0.9473 | 0.9513 | 0.9518 | 0.9515 | 0.9624
Sun | 2-order | 0.9041 | 0.9036 | 0.8831 | 0.9106 | 0.9127 | 0.9135 | 0.9131 | 0.9317
* Combined Model with the objective functions TIC(Ŷ, Y) and RMSE(Ŷ, Y). ** Combined Model with the objective functions TIC(Ŷ, Y) and MAPE(Ŷ, Y). *** Combined Model with the objective functions RMSE(Ŷ, Y) and MAPE(Ŷ, Y).
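Table 10 reports the first- and second-order forecasting effectiveness following Chen and Hou [47]. The sketch below uses one common formulation of that measure (an assumption here, since the exact definition appears in the paper body): the step-wise accuracy is A_t = 1 − |(y_t − ŷ_t)/y_t|, the k-th order element m_k is the mean of A_t^k, the first-order effectiveness is m1, and the second-order effectiveness is m1(1 − sqrt(m2 − m1²)).

```python
import numpy as np

def forecasting_effectiveness(y, y_hat):
    """First- and second-order forecasting effectiveness (one common formulation,
    assumed here; see [47] and the paper body for the exact definition)."""
    a = np.clip(1.0 - np.abs((y - y_hat) / y), 0.0, 1.0)   # step-wise accuracy A_t
    m1 = a.mean()                                          # first-order element
    m2 = (a ** 2).mean()                                   # second-order element
    return m1, m1 * (1.0 - np.sqrt(m2 - m1 ** 2))

# Hypothetical observations and forecasts, for illustration only
y = np.array([5.2, 6.1, 4.8, 5.9, 6.3])
y_hat = np.array([5.0, 6.4, 4.9, 5.6, 6.0])
print(forecasting_effectiveness(y, y_hat))
```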
