Article

Streamflow Forecasting Using Empirical Wavelet Transform and Artificial Neural Networks

1 School of Hydropower and Information Engineering, Huazhong University of Science and Technology, Wuhan 430074, China
2 Hubei Key Laboratory of Digital Valley Science and Technology, Wuhan 430074, China
3 School of Hydraulic and Environmental Engineering, Three Gorges University, Yichang 443002, China
* Author to whom correspondence should be addressed.
Water 2017, 9(6), 406; https://doi.org/10.3390/w9060406
Submission received: 13 April 2017 / Revised: 31 May 2017 / Accepted: 2 June 2017 / Published: 7 June 2017

Abstract:
Accurate and reliable streamflow forecasting plays an important role in many aspects of water resources management, such as reservoir scheduling and water supply. This paper develops a novel hybrid model for streamflow forecasting and demonstrates its efficiency. In the proposed hybrid model, the empirical wavelet transform (EWT) is first employed to remove redundant noise from the original streamflow series. Second, partial autocorrelation function (PACF) values are examined to identify the inputs for the artificial neural network (ANN) models. Third, the weights and biases of the ANN architecture are tuned and optimized by the multi-verse optimizer (MVO) algorithm. Finally, the simulated streamflow is obtained from the well-trained MVO-ANN model. The proposed hybrid model has been applied to annual streamflow observations from four hydrological stations in the upper reaches of the Yangtze River, China. Parallel experiments using non-denoising models, the back propagation neural network (BPNN) and the ANN optimized by particle swarm optimization (PSO-ANN), were designed and conducted for comparison with the proposed model. The results indicate that the proposed hybrid model captures the nonlinear characteristics of the streamflow time series and thus provides more accurate forecasts.

1. Introduction

Assessing and estimating the water available in a basin over long periods (from months to years) is an important precondition for efficient reservoir scheduling and water supply. For this reason, constructing a forecasting model for long-term streamflow time series is essential [1]. At present, methods for long-term streamflow forecasting fall mainly into two categories: physically-based models and data-driven models. Physically-based models usually consider multiple factors including rainfall, runoff, evaporation, soil moisture, infiltration, and snow cover and melt, while data-driven models exploit the implicit information contained in the historical streamflow time series to forecast future streamflow [2]. Physically-based models suffer from the drawbacks that the physical relationships between different hydrological features are complex, and that a lack of hydrological data sometimes makes physical modeling difficult to complete [3].
In the last few decades, numerous data-driven models for hydrological time series forecasting have been proposed to increase forecasting accuracy. The autoregressive moving average (ARMA) approach, first proposed by Box and Jenkins [4] and later popularized by Carlson et al. [5], has been one of the most widely-used methods for hydrological forecasting [6,7,8]. The ARMA technique assumes the time series to be stationary and normally distributed [4]. However, streamflow time series are usually both nonlinear and non-stationary; thus, linear time series forecasting techniques are not sufficient to capture their characteristics [9]. Artificial neural networks (ANNs) have therefore been put forward and widely exploited for hydrological forecasting to deal with the nonlinearity and instability of hydrological time series. Hornik et al. [10] demonstrated that an ANN can approximate any measurable function to a given degree of accuracy. One of the most obvious advantages of the ANN technique is that one does not need a well-defined process for algorithmically converting inputs into outputs [11]. The advantages and disadvantages of ANNs in hydrologic research have been discussed in a comprehensive review by Govindaraju [12,13].
Although there have been many achievements in the application of the ANN technique using single ANN, there is still much room to improve. One traditional method is to optimize the weights and biases of ANN. In the training process of ANN, the target is to find the optimal weights and biases to minimize the learning error. Since the behavior of ANN is highly dependent on the initial values of weights and biases, employing different heuristic searching algorithms becomes popular in the training process [14,15,16,17,18]. Mirjalili et al. [14] employed the gravitational search algorithm (GSA) and PSOGSA (particle swarm optimization, gravitational search algorithm) as training methods to train the neural networks. Khan and Sahai [15] demonstrated the superiority of the new metaheuristic bat algorithm (BA) over the genetic algorithm (GA), particle swarm optimization (PSO), the “standard” back propagation (BP) algorithm and the Levenberg-Marquardt algorithm. In Ref. [16], the GA was exploited to search for the optimal weights and biases of the back propagation neural network (BPNN). Cheng et al. [17] proposed an ANN model based on quantum-behaved PSO (ANN-QPSO) for daily runoff forecasting. In Ref. [18], a gravitational search algorithm (GSA)-based radial basis function (RBF) approach (GSA-RBF) was exploited for predicting debris flow velocity.
Another trend for improving the performance of single ANN models is to take advantage of data pre-analysis methodologies, such as eliminating stochastic volatilities from the original time series [19,20], using decomposed subseries as inputs [21,22] and employing a decompose-predict-ensemble framework [23,24]. Wu et al. [19] combined moving average (MA), singular spectrum analysis (SSA), wavelet multi-resolution analysis (WMRA) and ANNs to improve the forecasting accuracy of daily flows. In Ref. [20], the MA method, principal component analysis (PCA) and SSA were coupled with ANNs to predict rainfall time series. Besides these data preprocessing-based techniques, wavelet transforms and empirical mode decomposition (EMD) have been the most commonly-used data pre-analysis techniques in recent years. Kisi et al. [21] developed a wavelet-support vector regression (WSVR) model in which the discrete wavelet transform (DWT) was employed as a data preprocessing tool and was shown to improve forecasting accuracy. Budu et al. [22] decomposed observed reservoir inflow time series into subseries using DWT with different mother wavelet functions; appropriate subseries were then used as inputs for constructing ANN models. Fu et al. [23] developed a hybrid EMD-RBFNN model for predicting regional groundwater depth time series. Zhao et al. [24] combined a chaotic least squares support vector machine (CLSSVM) with EMD for annual runoff forecasting; the EMD-CLSSVM model attained significant improvement over the single CLSSVM model in long-term runoff time series forecasting. Nourani et al. [25] provided a comprehensive introduction to the application of wavelet-artificial intelligence (AI) conjunction models to simulating important processes of the hydrologic cycle.
Although wavelet transform (WT) and EMD have been used widely in recent years, the behavior of different WT-based forecasting models depends highly on the selection of mother wavelets, and EMD suffers from the drawbacks of a lack of mathematical theory and sensitivity to both noise and sampling. In this case, Gilles [26] proposed a new adaptive data analysis method, empirical wavelet transform (EWT), to overcome the drawbacks of these decomposition techniques. The efficiency of EWT has been demonstrated by a few benchmarks in Ref. [26]. Additionally, EWT has been applied to several forecasting models successfully since being proposed in 2013. Hu and Wang [27] combined the EWT with Gaussian process regression (GPR) for forecasting short-term wind speed. Wang and Hu [28] proposed a GPR model combined with autoregressive integrated moving average (ARIMA), extreme learning machine (ELM), SVM and LSSVM for short-term wind speed forecasting. In both Ref. [27] and Ref. [28], EWT was employed as a data pre-analysis technique to filter the noises of the original time series.
Multi-verse optimizer (MVO) is a novel nature-inspired optimization algorithm, which was originally introduced by Mirjalili et al. [29]. In Ref. [29], the efficiency of MVO has been demonstrated by 19 benchmark functions and five real engineering optimization problems. Results have proven that the MVO algorithm outperforms the best algorithms proposed in the literature on the majority of the test beds. Additionally, the MVO algorithm has exhibited good performance in optimizing the weights and biases of a feedforward multilayer perceptron neural network [30], as well as the parameters of SVM [31].
In consideration of the good performance of EWT and MVO, a hybrid EWT-MVO-ANN model is proposed to produce more accurate annual streamflow forecasts: the EWT is employed to decompose the streamflow time series into subseries and eliminate noise; the MVO algorithm is adopted to optimize the parameters of the ANN; and the ANN trained with MVO is employed for time series forecasting. Before the MVO-ANN models are constructed, the input variables are selected according to the partial autocorrelation function (PACF) values. Parallel experiments with BPNN, PSO-ANN and MVO-ANN, each with and without the EWT denoising process, have been designed and applied to annual streamflow data of four hydrological stations in China to highlight the superiority of the proposed EWT-MVO-ANN model. The experimental results demonstrate that EWT is an effective way to filter the noise of annual streamflow series and thus improve prediction accuracy, and that the MVO algorithm exhibits better performance than the Levenberg–Marquardt (LM) and PSO algorithms in training ANN models for annual streamflow forecasting. The results also demonstrate that the proposed hybrid model provides an effective modeling approach for capturing the nonlinear characteristics of annual streamflow series, thus producing more satisfactory forecasting results.

2. Methodology

2.1. Empirical Wavelet Transform

The empirical wavelet transform (EWT) was first introduced by Jerome Gilles [26] in 2013. The main idea of this method is to extract the amplitude modulated-frequency modulated (AM-FM) signals of the given signal by designing an appropriate wavelet filter bank adaptively. EWT produces intrinsic mode functions (IMFs) according to the information contained in the spectrum of the signal. The decomposition process of signal x ( t ) using EWT can be described as follows:
Step 1
Compute the Fourier spectrum F ( ω ) of the original time series x ( t ) using the fast Fourier transform algorithm (FFT).
Step 2
Extract different modes by proper segmentation of the Fourier spectrum and determine the boundaries.
Step 3
After the boundaries are determined by appropriate segmentation, calculate the approximation and detail coefficients by applying the scaling function and empirical wavelets that correspond to each detected segment.
Given the set of local maxima of the Fourier spectrum, augmented with 0 and π, the boundaries $\omega_i$ of each segment are defined as the centers between two consecutive maxima. Let $\{f_0, f_1, f_2, \ldots, f_N\}$ be the set of frequencies corresponding to the set of local maxima, with $f_0 = 0$ and $f_N = \pi$. The boundaries $\omega_i$ are then given as:
$$\omega_i = \frac{f_i + f_{i+1}}{2} \quad \text{for } 1 \le i \le N-1$$
so that the Fourier segments are $[0, \omega_1], [\omega_1, \omega_2], \ldots, [\omega_{N-1}, \pi]$. A transition phase $T_n$ of width $2\tau_n$ is defined centered around each $\omega_n$ [26].
After the set of boundaries is determined, a bank of N wavelet filters, comprising one low-pass filter and N − 1 band-pass filters, is defined based on the detected boundaries. The expressions for the empirical wavelets $\psi_i(\omega)$ and the scaling function $\varphi_1(\omega)$ are given by Gilles [26] as follows:
$$\psi_i(\omega) = \begin{cases} 1, & \text{if } (1+\gamma)\omega_i \le |\omega| \le (1-\gamma)\omega_{i+1} \\[4pt] \cos\left[\dfrac{\pi}{2}\,\beta\!\left(\dfrac{1}{2\gamma\omega_{i+1}}\left(|\omega| - (1-\gamma)\omega_{i+1}\right)\right)\right], & \text{if } (1-\gamma)\omega_{i+1} \le |\omega| \le (1+\gamma)\omega_{i+1} \\[4pt] \sin\left[\dfrac{\pi}{2}\,\beta\!\left(\dfrac{1}{2\gamma\omega_i}\left(|\omega| - (1-\gamma)\omega_i\right)\right)\right], & \text{if } (1-\gamma)\omega_i \le |\omega| \le (1+\gamma)\omega_i \\[4pt] 0, & \text{otherwise} \end{cases}$$
$$\varphi_1(\omega) = \begin{cases} 1, & \text{if } |\omega| \le (1-\gamma)\omega_1 \\[4pt] \cos\left[\dfrac{\pi}{2}\,\beta\!\left(\dfrac{1}{2\gamma\omega_1}\left(|\omega| - (1-\gamma)\omega_1\right)\right)\right], & \text{if } (1-\gamma)\omega_1 \le |\omega| \le (1+\gamma)\omega_1 \\[4pt] 0, & \text{otherwise} \end{cases}$$
where β ( x ) is an arbitrary function such that:
$$\beta(x) = \begin{cases} 0, & \text{if } x \le 0 \\ 1, & \text{if } x \ge 1 \end{cases} \qquad \text{with} \quad \beta(x) + \beta(1-x) = 1, \ \ \forall x \in [0, 1]$$
The widely-used function β ( x ) according to Ref. [32] is:
$$\beta(x) = x^4 \left( 35 - 84x + 70x^2 - 20x^3 \right)$$
Additionally, $\tau_n = \gamma \omega_n$ with $0 < \gamma < 1$; $\gamma$ can be selected using the expression:
$$\gamma < \min_i \left( \frac{\omega_{i+1} - \omega_i}{\omega_{i+1} + \omega_i} \right)$$
The detail coefficients are given by the inner products with the empirical wavelets:
$$W_x(i, t) = \left\langle x(t), \psi_i(t) \right\rangle = \int x(\tau)\, \overline{\psi_i(\tau - t)}\, d\tau = F^{-1}\!\left[ \hat{x}(\omega)\, \overline{\hat{\psi}_i(\omega)} \right]$$
while the approximation coefficients are given by the inner product with the scaling function:
$$W_x(1, t) = \left\langle x(t), \varphi_1(t) \right\rangle = \int x(\tau)\, \overline{\varphi_1(\tau - t)}\, d\tau = F^{-1}\!\left[ \hat{x}(\omega)\, \overline{\hat{\varphi}_1(\omega)} \right]$$
where F 1 denotes the inverse Fourier transform.
Finally, the reconstruction of the original signal is obtained according to:
$$x(t) = W_x(1, t) \star \varphi_1(t) + \sum_{i=2}^{N} W_x(i, t) \star \psi_i(t)$$
where $\star$ denotes convolution.
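The decomposition steps above can be sketched in a few lines of Python (the computations in this paper were carried out in MATLAB; this is an illustrative re-implementation, not the authors' code). For brevity, the sketch uses ideal (brick-wall) band-pass filters in place of the Meyer-type transition windows defined above, and the function name is our own:

```python
import numpy as np

def ewt_decompose(x, n_modes=3):
    """Simplified empirical wavelet decomposition: boundaries are placed
    midway between the largest local maxima of the Fourier magnitude
    spectrum, then each spectral segment is extracted with an ideal
    band-pass filter and transformed back to the time domain."""
    N = len(x)
    spectrum = np.fft.rfft(x)
    mag = np.abs(spectrum)

    # Local maxima of the half-spectrum (endpoints excluded).
    peaks = [i for i in range(1, len(mag) - 1)
             if mag[i] > mag[i - 1] and mag[i] > mag[i + 1]]
    # Keep the n_modes largest maxima, ordered by frequency.
    peaks = sorted(sorted(peaks, key=lambda i: mag[i])[-n_modes:])

    # Boundaries: midpoints between consecutive maxima, plus both ends.
    bounds = [0] + [(a + b) // 2 for a, b in zip(peaks, peaks[1:])] + [len(mag)]

    modes = []
    for lo, hi in zip(bounds, bounds[1:]):
        band = np.zeros_like(spectrum)
        band[lo:hi] = spectrum[lo:hi]          # ideal band-pass filtering
        modes.append(np.fft.irfft(band, n=N))  # back to the time domain
    return modes  # lowest-frequency mode first

# Two well-separated tones are split into their components.
t = np.linspace(0, 1, 256, endpoint=False)
x = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 30 * t)
modes = ewt_decompose(x, n_modes=2)
```

Because the ideal filters partition the spectrum, the extracted modes sum exactly back to the original signal, mirroring the reconstruction formula above.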

2.2. The Proposed MVO-ANN Model

2.2.1. Multi-Verse Optimizer

The multi-verse optimizer (MVO) algorithm, proposed by Mirjalili et al. [29], is inspired by the multi-verse theory in physics, which posits that more than one big bang occurred and that each big bang gave birth to a universe [29]. MVO builds a mathematical model of the white hole/black hole (wormhole) tunnels that connect pairs of universes and exchanges objects between universes using a roulette wheel mechanism. At every iteration, the universes are sorted according to their inflation rates, and the one with the highest inflation rate is the most likely to be chosen by the roulette wheel to have a white hole. MVO is mathematically modeled as follows:
Assume that:
$$U = \begin{bmatrix} x_1^1 & x_1^2 & \cdots & x_1^d \\ x_2^1 & x_2^2 & \cdots & x_2^d \\ \vdots & \vdots & \ddots & \vdots \\ x_n^1 & x_n^2 & \cdots & x_n^d \end{bmatrix}$$
where d is the number of parameters (variables) and n is the number of universes (candidate solutions):
$$x_i^j = \begin{cases} x_k^j, & r_1 < NI(U_i) \\ x_i^j, & r_1 \ge NI(U_i) \end{cases}$$
where $x_i^j$ denotes the j-th parameter of the i-th universe, $x_k^j$ denotes the j-th parameter of a universe $U_k$ selected by the roulette wheel, $NI(U_i)$ is the normalized inflation rate of the i-th universe and $r_1$ is a random number in [0, 1].
Assuming that there is a wormhole tunnel between one universe and the best universe, the transportation can be described as follows:
$$x_i^j = \begin{cases} \begin{cases} X_j + \mathrm{TDR} \times \left( (ub_j - lb_j) \times r_4 + lb_j \right), & r_3 < 0.5 \\ X_j - \mathrm{TDR} \times \left( (ub_j - lb_j) \times r_4 + lb_j \right), & r_3 \ge 0.5 \end{cases} & r_2 < \mathrm{WEP} \\ \ x_i^j, & r_2 \ge \mathrm{WEP} \end{cases}$$
where $X_j$ denotes the j-th parameter of the best universe; WEP and TDR are the wormhole existence probability and travelling distance rate, respectively; $lb_j$ and $ub_j$ denote the lower and upper bounds of the j-th variable, respectively; and $r_2$, $r_3$, $r_4$ are random numbers in [0, 1]. The formulae for WEP and TDR are defined as follows:
$$\mathrm{WEP} = \min + l \times \left( \frac{\max - \min}{L} \right)$$
$$\mathrm{TDR} = 1 - \frac{l^{1/p}}{L^{1/p}}$$
where min and max are set to 0.2 and 1, respectively; $l$ denotes the current iteration; $L$ denotes the maximum number of iterations; and $p$, which defines the exploitation accuracy over the iterations, is set to 6. In this study, the number of universes is set to 30 and the maximum number of iterations to 500.
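As a concrete illustration, the update rules above can be sketched in Python. This is a minimal, hedged re-implementation for a minimization problem, not the reference code: the rank-based roulette wheel and the boundary handling are simplifying assumptions of our own.

```python
import numpy as np

def mvo(fitness, dim, n_universes=30, max_iter=500,
        lb=-1.0, ub=1.0, wep_min=0.2, wep_max=1.0, p=6, seed=0):
    """Minimal multi-verse optimizer for minimization: white/black-hole
    exchange via a rank-based roulette wheel, plus wormhole jumps toward
    the best universe controlled by the WEP and TDR schedules."""
    rng = np.random.default_rng(seed)
    U = rng.uniform(lb, ub, size=(n_universes, dim))  # the universes
    best_x, best_f = None, np.inf

    for l in range(1, max_iter + 1):
        f = np.array([fitness(u) for u in U])         # inflation rates
        order = np.argsort(f)                         # best (lowest) first
        if f[order[0]] < best_f:
            best_f, best_x = f[order[0]], U[order[0]].copy()

        # Normalized inflation rates in [0, 1] (0 = best universe).
        ni = (f - f.min()) / (f.max() - f.min() + 1e-12)

        wep = wep_min + l * (wep_max - wep_min) / max_iter  # WEP schedule
        tdr = 1.0 - l ** (1.0 / p) / max_iter ** (1.0 / p)  # TDR schedule

        # Rank-based roulette wheel over the sorted universes.
        w = 1.0 / (np.arange(n_universes) + 1.0)
        probs = w / w.sum()

        for i in range(n_universes):
            for j in range(dim):
                if rng.random() < ni[i]:              # white-hole exchange
                    k = order[rng.choice(n_universes, p=probs)]
                    U[i, j] = U[k, j]
                if rng.random() < wep:                # wormhole jump
                    step = tdr * ((ub - lb) * rng.random() + lb)
                    U[i, j] = best_x[j] + step if rng.random() < 0.5 \
                        else best_x[j] - step
        U = np.clip(U, lb, ub)
    return best_x, best_f

# Example: minimize the 5-dimensional sphere function.
x_best, f_best = mvo(lambda v: float(np.sum(v ** 2)), dim=5, max_iter=100)
```

As TDR shrinks toward zero over the iterations, the wormhole jumps concentrate ever more tightly around the best universe, which is exactly the exploration-to-exploitation transition the WEP/TDR formulae are designed to produce.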

2.2.2. Artificial Neural Network

As a typical intelligent learning paradigm, the artificial neural network has been widely used in many areas of science and engineering, such as daily water level forecasting [33], rainfall-runoff simulation [34] and wind speed forecasting [35]. As can be seen in Figure 1, a typical feed-forward ANN consists of three layers: an input, a hidden and an output layer [36]. Each layer contains nodes that are connected to the nodes of the adjacent layer(s), and each connection is associated with a weight. For such a three-layer network, the output of the j-th hidden node is obtained as:
$$H_j = f\left( \sum_{i=1}^{n} w_{ij} x_i - a_j \right), \quad j = 1, 2, \ldots, l$$
where $f(x) = 1/(1 + e^{-x})$ is the transfer function of the hidden layer; $n$ denotes the number of nodes in the input layer; $l$ denotes the number of nodes in the hidden layer; $w_{ij}$ represents the connection weight from the i-th input node to the j-th hidden node; and $a_j$ represents the bias of the j-th hidden node.
After calculating the outputs of the hidden layer, the final output can be given as follows:
$$O_k = \sum_{j=1}^{l} H_j w_{jk} - b_k, \quad k = 1, 2, \ldots, m$$
where m denotes the number of nodes in the output layer, wjk denotes the connection weight from the j-th hidden node to the k-th output node and bk is the bias of the k-th output node.
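The two equations above amount to a single forward pass, which can be sketched in Python (illustrative only; the weight shapes and variable names are our own):

```python
import numpy as np

def forward(x, W1, a, W2, b):
    """Three-layer forward pass: H_j = f(sum_i w_ij x_i - a_j) with the
    logistic sigmoid f, then O_k = sum_j H_j w_jk - b_k."""
    H = 1.0 / (1.0 + np.exp(-(W1.T @ x - a)))  # hidden outputs, shape (l,)
    return H @ W2 - b                          # network outputs, shape (m,)

# Shapes: n input nodes, l hidden nodes, m output nodes.
n, l, m = 3, 4, 1
rng = np.random.default_rng(1)
W1 = rng.standard_normal((n, l))  # w_ij: input i -> hidden j
a = rng.standard_normal(l)        # hidden biases a_j
W2 = rng.standard_normal((l, m))  # w_jk: hidden j -> output k
b = rng.standard_normal(m)        # output biases b_k
y = forward(rng.standard_normal(n), W1, a, W2, b)
```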

2.2.3. ANN Trained with MVO

Training a neural network is a form of supervised learning in which the weights and biases of the network are updated to reduce the error whenever the training outputs do not match the expected accuracy. Unlike traditional local optimization training algorithms, such as the gradient descent method and the Levenberg–Marquardt (LM) algorithm, global optimization algorithms can overcome the drawback of local minima and obtain more efficient solutions. Here, the MVO algorithm is exploited to find an optimal combination of connection weights and biases that minimizes the fitness error. The MVO iteration begins by creating a set of random universes; when training the neural network, the weights and biases of the network are represented by the parameters of a universe, and in each iteration of MVO these parameters are updated.
In the MVO-ANN model, the ANN model is trained with MVO. In order to evaluate the performance of the MVO-ANN model, root mean square error (RMSE) is adopted as the fitness function. Define the number of training samples as q, then the fitness function can be described as follows:
$$E_i = \sum_{k=1}^{m} \left( Y_k^i - O_k^i \right)^2$$
$$fitness = \sqrt{ \frac{1}{q} \sum_{i=1}^{q} E_i }$$
where Y k i and O k i denote the predicted and observed values of the k-th output node in the i-th training sample, respectively.
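The fitness evaluation can be sketched as follows: each candidate solution (universe) is a flat vector that is unpacked into the network's weights and biases, and its fitness is the RMSE over the q training samples. This is an illustrative Python sketch; the unpacking order of the parameter vector is an assumption of our own:

```python
import numpy as np

def make_fitness(X, Y, n, l, m):
    """Build the RMSE fitness used for MVO training: a flat parameter
    vector (one universe) is unpacked into W1, a, W2, b, the network is
    run on all q training samples, and the fitness is the square root of
    the mean of the per-sample squared-error sums E_i."""
    def fitness(theta):
        i = 0
        W1 = theta[i:i + n * l].reshape(n, l); i += n * l
        a = theta[i:i + l];                    i += l
        W2 = theta[i:i + l * m].reshape(l, m); i += l * m
        b = theta[i:i + m]
        H = 1.0 / (1.0 + np.exp(-(X @ W1 - a)))  # hidden layer, (q, l)
        O = H @ W2 - b                           # outputs, (q, m)
        E = np.sum((Y - O) ** 2, axis=1)         # E_i for each sample
        return float(np.sqrt(E.mean()))          # RMSE-style fitness
    return fitness

# Search-space dimension seen by the optimizer.
n, l, m = 4, 6, 1
dim = n * l + l + l * m + m  # 37 parameters in total
fit = make_fitness(np.zeros((10, n)), np.zeros((10, m)), n, l, m)
```

The resulting `fitness` callable maps one universe to one scalar, which is exactly the interface the MVO loop needs.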

2.3. Partial Autocorrelation Function

Selecting appropriate input variables for ANN models is an essential precondition for ANN modeling, since the relationship between inputs and outputs has a significant impact on the forecasting accuracy of ANN models. To address this matter, the PACF values of the streamflow time series are used to identify the input variables. Assume $x_t$ ($t = 1, 2, \ldots, n$) to be the target variable. If the PACF values at lags greater than $k$ lie inside the 95% confidence interval $[-1.96/\sqrt{n},\ 1.96/\sqrt{n}]$, then $x_{t-1}, x_{t-2}, \ldots, x_{t-k}$ are employed as input variables. If all of the PACF values lie within the 95% confidence interval, the input variables are chosen by trial and error, systematically increasing the number of sequential antecedent values from the latest year up to the sixth year. The calculation of the PACF values of the target variable $x_t$ ($t = 1, 2, \ldots, n$) can be briefly described as follows [37]:
Firstly, let $\gamma_k$ denote the covariance at lag $k$ ($\gamma_0$ represents the variance); $\gamma_k$ can be calculated as follows:
$$\gamma_k = \frac{1}{n} \sum_{i=1}^{n-k} (x_i - \bar{x})(x_{i+k} - \bar{x}), \quad k = 0, 1, \ldots, M$$
Secondly, the autocorrelation coefficient at lag k , denoted by ρ k , is calculated:
$$\rho_k = \frac{\gamma_k}{\gamma_0}$$
Finally, the partial autocorrelation coefficient at lag k , denoted as α k k , is obtained as follows:
$$\left\{ \begin{aligned} \alpha_{11} &= \rho_1 \\ \alpha_{k+1,k+1} &= \frac{\rho_{k+1} - \sum_{j=1}^{k} \rho_{k+1-j}\, \alpha_{kj}}{1 - \sum_{j=1}^{k} \rho_j\, \alpha_{kj}} \\ \alpha_{k+1,j} &= \alpha_{kj} - \alpha_{k+1,k+1} \cdot \alpha_{k,k-j+1}, \quad j = 1, 2, \ldots, k \end{aligned} \right.$$
where k = 1 , 2 , ... , M .
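The three steps above amount to the Durbin–Levinson recursion and can be sketched in Python, together with the lag-selection rule described at the start of this subsection (an illustrative sketch; the function names are our own):

```python
import numpy as np

def pacf(x, max_lag):
    """PACF via the recursion above: sample autocovariances gamma_k,
    autocorrelations rho_k = gamma_k / gamma_0, then the Durbin-Levinson
    update for the partial autocorrelations alpha_kk."""
    x = np.asarray(x, dtype=float)
    n, xbar = len(x), x.mean()
    gamma = np.array([np.sum((x[:n - k] - xbar) * (x[k:] - xbar)) / n
                      for k in range(max_lag + 1)])
    rho = gamma / gamma[0]

    alpha = np.zeros((max_lag + 1, max_lag + 1))
    alpha[1, 1] = rho[1]
    for k in range(1, max_lag):
        num = rho[k + 1] - sum(rho[k + 1 - j] * alpha[k, j]
                               for j in range(1, k + 1))
        den = 1.0 - sum(rho[j] * alpha[k, j] for j in range(1, k + 1))
        alpha[k + 1, k + 1] = num / den
        for j in range(1, k + 1):
            alpha[k + 1, j] = alpha[k, j] - alpha[k + 1, k + 1] * alpha[k, k - j + 1]
    return np.array([alpha[k, k] for k in range(1, max_lag + 1)])

def select_lags(x, max_lag):
    """Input-selection rule above: the largest lag whose PACF value falls
    outside the 95% band [-1.96/sqrt(n), 1.96/sqrt(n)] (0 if none do)."""
    band = 1.96 / np.sqrt(len(x))
    outside = np.flatnonzero(np.abs(pacf(x, max_lag)) > band) + 1
    return int(outside[-1]) if outside.size else 0
```

For an AR(1) series, for example, only the lag-1 PACF value should exceed the 95% band, so `select_lags` would typically suggest a single antecedent value as input.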

2.4. The Hybrid EWT-MVO-ANN Model

The overall structure of the proposed hybrid EWT-MVO-ANN model is shown in Figure 2. As can be seen in Figure 2, the detailed procedure of the proposed hybrid EWT-MVO-ANN model consists of five steps:
Firstly, the EWT is exploited to decompose the original streamflow series into several detailed components and an approximated component. The approximated component corresponds to the general trend of the original series, while the detailed components are high-frequency IMFs belonging to different frequency bands of the streamflow series. Secondly, the residual series is discarded to eliminate noise and smooth the original streamflow series; the sum of the remaining modes produces the reconstructed series. Thirdly, the PACF values are calculated to determine the best combination of input variables. Fourthly, the MVO algorithm is exploited to find the optimal weights and biases of the ANN architecture that minimize the fitness error. Finally, the forecasts of the streamflow series are obtained from the well-trained MVO-ANN model.

2.5. Model Performance Evaluation

It is important to apply multiple error measures when evaluating the forecasting ability of the developed models. This paper considers four statistical measures: RMSE, mean absolute error (MAE), mean absolute percentage error (MAPE) and the coefficient of correlation (R). Among them, RMSE is sensitive to extremely large or small values of a time series and reflects the degree of variation; MAE reflects the actual forecasting error from a more balanced perspective; and MAPE is a unitless measure of accuracy. The RMSE, MAE and MAPE are defined as follows:
$$RMSE = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} \left( q_p(i) - q_o(i) \right)^2 }$$
$$MAE = \frac{1}{N} \sum_{i=1}^{N} \left| q_p(i) - q_o(i) \right|$$
$$MAPE = \frac{1}{N} \sum_{i=1}^{N} \frac{\left| q_p(i) - q_o(i) \right|}{q_o(i)} \times 100\%$$
where q p ( i ) and q o ( i ) denote the predicted and observed annual streamflow series, respectively, and N denotes the length of data.
R shows the degree to which two datasets are related to each other and ranges from −1 to 1. The larger the absolute value of R, the more the predicted and observed data are related. The coefficient of correlation (R) is defined as:
$$R = \frac{ \sum_{i=1}^{N} \left( q_p(i) - \bar{q}_p \right) \left( q_o(i) - \bar{q}_o \right) }{ \sqrt{ \sum_{i=1}^{N} \left( q_p(i) - \bar{q}_p \right)^2 \times \sum_{i=1}^{N} \left( q_o(i) - \bar{q}_o \right)^2 } }$$
where q ¯ p and q ¯ o denote the average values of the predicted and observed runoffs, respectively.
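For reference, the four measures can be computed directly from their definitions (a straightforward Python sketch; the function name and toy numbers are our own):

```python
import numpy as np

def evaluate(q_p, q_o):
    """RMSE, MAE, MAPE (%) and R, computed exactly as defined above."""
    q_p, q_o = np.asarray(q_p, float), np.asarray(q_o, float)
    rmse = np.sqrt(np.mean((q_p - q_o) ** 2))
    mae = np.mean(np.abs(q_p - q_o))
    mape = np.mean(np.abs(q_p - q_o) / q_o) * 100.0
    dp, do = q_p - q_p.mean(), q_o - q_o.mean()
    r = np.sum(dp * do) / np.sqrt(np.sum(dp ** 2) * np.sum(do ** 2))
    return rmse, mae, mape, r

# Toy predicted vs. observed annual flows (illustrative numbers only).
rmse, mae, mape, r = evaluate([110.0, 95.0, 102.0], [100.0, 98.0, 105.0])
```

Note that RMSE is always at least as large as MAE on the same data, and the gap between them widens as the error distribution becomes more heavy-tailed, which is why the two are reported together.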

3. Model Construction and Development

3.1. Study Area and Data Collection

Annual streamflow time series from four hydrological gauging stations, i.e., Xiangjiaba, Panzhihua, Luoduxi and Sanleiba, located in two river basins in the upper reach of the Yangtze River in China, are studied in this work.
The first river basin is referred to as the Jinsha River Basin. It flows through the provinces of Qinghai, Sichuan and Yunnan in western China. Originating in the uppermost of the Yangtze River, the Jinsha River has a length of 3486 km, accounting for 77% of the total length of the upper Yangtze River, and has a drainage area of 480,000 km2, accounting for 50% of the total area of the upper Yangtze River. The observed annual streamflow data of Xiangjiaba and Panzhihua stations, which lie on the lower reaches of the Jinsha River basin, are selected as the first two study areas. For Xiangjiaba station, annual streamflow data from 1959 to 2012 (54 observations) are studied. The training dataset is from 1959 to 2000 (42 observations), whilst the validation data set is from 2001 to 2012 (12 observations). For Panzhihua station, streamflow data from 1953 to 2012 (60 observations) are studied. The training dataset is from 1953 to 1998 (46 observations), whilst the validation dataset is from 1999 to 2012 (14 observations).
The second river basin is referred to as the Jialing River Basin. Jialing River is a major tributary of the Yangtze River in the Sichuan Basin. The Jialing River Basin has a main stream length of 1120 km and a drainage area of 160,000 km2, accounting for about 17% of the drainage area of the Yangtze River Basin. Two hydraulic gauging stations, which are called Luoduxi and Sanleiba, are selected in this study. Annual streamflow data from 1954 to 2008 (55 observations) for both Luoduxi and Sanleiba are studied. Data from 1954 to 1996 (43 observations) are for training, whilst those from 1997 to 2008 (12 observations) are for validation.
Figure 3 provides a schematic of the Jinsha River Basin and the Jialing River Basin, as well as the four hydrological gauging stations researched in this study. Annual streamflow data of the four stations are presented in Figure 4. Basic statistical information, such as the mean value, standard deviation (SD), minimum (Min.), maximum (Max.), skewness (Skew.) and kurtosis (Kurt.), of these datasets is illustrated in Table 1.

3.2. Decomposing and Reconstruction Using EWT

Since EWT borrows the framework of classic WT in its decomposition and reconstruction processes [27], and three-level decompositions have been found most appropriate in various studies, such as wind speed forecasting [27,28], drought index prediction [38] and daily river flow forecasting [39], the decomposition level is set to three in this study. Graphical representations of the decomposed subseries obtained with EWT for the four study stations are shown in Figure 5. As can be seen in Figure 5, the original annual streamflow series of each of the four stations is decomposed into three independent modes and a residual.
Among these modes, Mode 1 is a trend term describing the general tendency of the original annual streamflow series. The modes present changing frequencies, amplitudes and wavelengths. For all stations, Mode 1 has the lowest frequency and the longest wavelength and contributes the most to the original time series; for each subsequent subseries, the frequency increases while the wavelength and contribution decrease. After the original streamflow series is decomposed, the residual is discarded because it mainly contains noise, and the remaining modes are reconstructed into a new series. Graphical representations of the reconstructed series are shown in Figure 6. As can be seen in Figure 6, the reconstructed series is smoother after the denoising process.

3.3. Input Determination

To obtain the input variables for each model, the PACF values between each streamflow series and their antecedent values at lag k are calculated using the method described in Section 2.3, as well as the PACF values between the reconstructed streamflow series and their antecedent values. Graphical representations of the PACF values are shown in Figure 7 for the four stations.
According to the input variable selection method described in Section 2.3, the input variables of the models for the original and the reconstructed series of the four stations are finally determined as described in Table 2.

3.4. Parameter Settings

To verify the efficiency of the proposed MVO-ANN model, the back propagation neural network (BPNN) trained with the LM algorithm and ANN trained with the PSO algorithm (PSO-ANN) have been constructed for annual streamflow forecasting.
After the optimal combination of input variables is determined using the method described in Section 2.3, the streamflow data are normalized to [0, 1]. In most cases, a sigmoid function is used as the transfer function in the hidden layer, while a linear activation function is used in the output layer of ANNs [40]. In this study, all of the ANN models use the tan-sigmoid function and the purelin function as the activation functions for the hidden layer and output layer, respectively. For the single BPNN models, the LM training function (TrainLM) is exploited to speed up the training process. For the MVO-ANN models, the number of universes is set to 30 and the maximum number of iterations to 500 according to Ref. [29]. A maximum of 1000 iterations was also tried for the MVO-ANN model in the experimental process; the results showed no significant improvement in accuracy but a much longer experiment time. The WEP is increased linearly from 0.2 to 1, while the TDR is decreased from 0.6 down to zero according to Equations (13) and (14), respectively. For the PSO-ANN models, the population size is set to 30 and the maximum number of iterations to 500 for comparison with the MVO-ANN model. The optimal numbers of hidden nodes for all ANN models, including BPNN, PSO-ANN and MVO-ANN with and without EWT, are determined using a grid search (GS) algorithm, in which the search range is set to [2, 29] and the grid step to one [35]. Each experiment is repeated 10 times, and the results with the best accuracy are presented.
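The grid search over the number of hidden nodes is simply an exhaustive scan of the candidate range with step one. A schematic Python sketch, where `score` is a hypothetical callable standing in for a full train/validate run at a given hidden-layer size:

```python
def grid_search(score, lo=2, hi=29):
    """Exhaustive scan over hidden-layer sizes lo..hi (grid step one),
    returning the size with the lowest score (e.g. validation RMSE)."""
    return min(range(lo, hi + 1), key=score)

# Toy score whose minimum lies at 7 hidden nodes (illustrative only).
best_l = grid_search(lambda l: (l - 7) ** 2)
```

In practice, `score` would train the chosen model (e.g. MVO-ANN) with that hidden-layer size and return its validation error, so the scan repeats the full training once per candidate size.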

4. Results and Analysis

To evaluate the performance of these models, RMSE, MAE, MAPE and R of the BPNN, PSO-ANN and MVO-ANN models with and without the denoising process are analyzed and compared. The computations related to all of the models including BPNN, PSO-ANN, MVO-ANN, EWT-BPNN, EWT-PSO-ANN and EWT-MVO-ANN are implemented in the MATLAB environment in a computer with Intel core i5, 2.6-GHz CPU and 4 GB of RAM. The performance indices of the four stations in the training and validation periods are given in Table 3, Table 4, Table 5 and Table 6, respectively.
It is clear from Table 3, Table 4, Table 5 and Table 6 that the proposed EWT-MVO-ANN model obtains the smallest RMSE, MAE and MAPE and the largest R compared with the other five models. Comparison between the models with and without the EWT algorithm reveals that combining EWT with the ANN model is an effective way to increase forecasting accuracy. Although the proposed EWT-MVO-ANN model performs much better than the other models, its configuration is more complex on account of the data pre-processing and parameter optimization, so it requires much more computation time than the single BPNN model without the denoising and optimization processes. Detailed analyses are presented below to further evaluate the performance of these models.
Table 3 displays the forecasting results of the six models in terms of RMSE, MAE, MAPE and R for Xiangjiaba Station. Among the models without the denoising process, the MVO-ANN model outperforms the others in the training period in terms of all four indices; however, its RMSE, MAE and MAPE are slightly larger than those of the other models in the validation period. The comparison between BPNN, PSO-ANN and MVO-ANN suggests that the simple LM algorithm may be more suitable for forecasting on some occasions. Among the models with EWT, the EWT-MVO-ANN model outperforms the others in both the training and validation periods in terms of the four indices. Overall, the EWT-MVO-ANN method outperforms the other models; for example, in the validation period the R of the EWT-MVO-ANN model is 0.724, while those of the BPNN, PSO-ANN, MVO-ANN, EWT-BPNN and EWT-PSO-ANN models are 0.223, 0.227, 0.514, 0.649 and 0.665, respectively.
Table 4 displays the forecasting performance of the six models in terms of RMSE, MAE, MAPE and R for Panzhihua Station. In the validation period, the MVO-ANN model improves on the BPNN model with a 3.94% decrease in RMSE, a 7.28% decrease in MAE, a 3% decrease in MAPE and an 8.91% increase in R, while the EWT-MVO-ANN model achieves decreases of 19.97%, 17.54% and 20% in RMSE, MAE and MAPE and a 47.17% increase in R, respectively.
Table 5 displays the forecasting performance of the six models in terms of RMSE, MAE, MAPE and R for Luoduxi Station. In the validation period, the MVO-ANN model improves on the BPNN model with a 28.57% decrease in RMSE, a 27.87% decrease in MAE, a 22.38% decrease in MAPE and a 31.03% increase in R, while the EWT-MVO-ANN model achieves decreases of 53.61%, 55.73% and 52.27% in RMSE, MAE and MAPE and a 50.49% increase in R, respectively. These results demonstrate that the MVO algorithm outperforms the simple LM algorithm in training ANN models, and that EWT contributes greatly to the forecasting accuracy of streamflow time series.
Similarly, Table 6 displays the forecasting performance of the six models in terms of RMSE, MAE, MAPE and R for Sanleiba Station. In the validation period, the MVO-ANN model improves on the BPNN model with a 32.48% decrease in RMSE, a 27.16% decrease in MAE, a 29.82% decrease in MAPE and a 5.04% increase in R, while the EWT-MVO-ANN model achieves decreases of 33.92%, 28.10% and 30.41% in RMSE, MAE and MAPE and a 51.61% increase in R, respectively.
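The percentage improvements quoted above follow the usual relative-change formula. A small helper (an illustrative sketch, not code from the paper) reproduces, for example, the 19.97% RMSE reduction of the EWT-MVO-ANN model over the BPNN model at Panzhihua Station:

```python
def reduction_pct(baseline, value):
    """Percentage decrease of an error metric (RMSE, MAE, MAPE) vs. a baseline."""
    return 100.0 * (baseline - value) / baseline

def increase_pct(baseline, value):
    """Percentage increase of a goodness-of-fit metric (R) vs. a baseline."""
    return 100.0 * (value - baseline) / baseline

# Panzhihua validation figures from Table 4: BPNN vs. EWT-MVO-ANN
print(round(reduction_pct(226.40, 181.19), 2))  # 19.97 (% RMSE reduction)
print(round(increase_pct(0.494, 0.727), 2))     # 47.17 (% R increase)
```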
Table 7 compares the forecasting performance of the EWT-PSO-ANN and EWT-MVO-ANN models. It can be seen from Table 7 that the EWT-MVO-ANN model outperforms the EWT-PSO-ANN model in RMSE, MAE, MAPE and R for the four stations in both the training and validation periods, with the exception of the validation RMSE, MAE and MAPE at Sanleiba Station, which demonstrates the general superiority of the MVO algorithm in training ANNs.
To further illustrate the effectiveness of the EWT algorithm in enhancing the accuracy of ANN models for streamflow forecasting, the forecasting results of the MVO-ANN and EWT-MVO-ANN models for the four stations are shown in Figure 8. The line charts on the left side show the forecasted and observed streamflow values, and the scatter plots on the right side show the predicted versus observed values with a linear fitting line, drawn using the “Fit linear” tool in the OriginPro environment. The intercepts of the fitting lines are fixed at zero to better display the differences between the two models. The left side of Figure 8 shows that the prediction curves of the EWT-MVO-ANN model approximate the observed values better than those of the MVO-ANN model, and the right side shows that the scatters of the EWT-MVO-ANN model cluster more tightly around the fitting line. These observations demonstrate the effectiveness of EWT in forecasting annual streamflow time series.
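Fixing the intercept at zero reduces the least-squares fit to a single slope k = Σ(obs·pred)/Σ(obs²). A minimal sketch of that computation (illustrative only; the figures themselves were produced with OriginPro):

```python
import numpy as np

def origin_slope(obs, pred):
    """Least-squares slope k of pred ~ k * obs with the intercept fixed at zero."""
    obs, pred = np.asarray(obs, dtype=float), np.asarray(pred, dtype=float)
    return float(np.dot(obs, pred) / np.dot(obs, obs))
```

A slope close to 1, together with scatters clustered tightly around the line, indicates forecasts close to the observations.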
Generally, the forecasting results of this study show that the ANN model trained by LM cannot give satisfactory performance in forecasting annual streamflow series, while the MVO method achieves better performance than the LM and PSO algorithms in training ANN models. Compared with the MVO-ANN model, the EWT-MVO-ANN model performs much better in terms of the four evaluation indices, which demonstrates that EWT is an effective decomposition technique for improving the forecasting accuracy of annual streamflow time series.

5. Conclusions

This paper develops a hybrid EWT-MVO-ANN model for streamflow time series forecasting. EWT is exploited to extract useful information and eliminate stochastic volatility from the original streamflow series, and an ANN trained with the MVO method is employed as the predictor. To fully demonstrate the effectiveness of the proposed model, data from four hydrological stations with different levels of discharge and different volatilities in the upper reaches of the Yangtze River, China, are studied. Among the six models (BPNN, PSO-ANN, MVO-ANN, EWT-BPNN, EWT-PSO-ANN and EWT-MVO-ANN), the EWT-MVO-ANN model performs the best according to four statistical indices (RMSE, MAE, MAPE and R). The main conclusions are as follows:
1. Compared with the BPNN model, the ANN models trained by intelligent algorithms obtain better forecasting results, and the MVO algorithm performs better than the PSO algorithm.
2. By discarding the residue of the subseries, the forecasting ability of the EWT-MVO-ANN model is improved, which demonstrates that EWT is a suitable pre-processing technique for streamflow time series.
3. The proposed model is appropriate for forecasting annual streamflow series with different magnitudes and fluctuations, as observed at the four gauging stations in China.
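The residue-discarding step described in conclusion 2 amounts to summing the decomposed subseries while omitting the component treated as noise. A hedged sketch, assuming the residue is the last component returned by the decomposition (the actual EWT decomposition and residue identification are as described in the paper's methodology):

```python
import numpy as np

def reconstruct_without_residue(subseries):
    """Sum decomposed components, dropping the last one (assumed to be the residue)."""
    comps = np.asarray(subseries, dtype=float)  # shape: (n_components, n_samples)
    return comps[:-1].sum(axis=0)
```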
This study applies a recently proposed evolutionary algorithm (MVO) to optimize the weights and biases of a traditional ANN, but the performance of ANNs with different network structures, such as the radial basis function network, the general regression neural network and the ELM, has not been studied. More attention will be paid to the performance of different ANN architectures for forecasting in future studies.

Acknowledgments

This work is supported by the Key Program of the Major Research Plan of the National Natural Science Foundation of China (No. 91547208), the National Natural Science Foundation of China (No. 51579107), the State Key Program of the National Natural Science Foundation of China (No. 51239004) and the National Key R&D Program of China (2016YFC0402708, 2016YFC0401005). Special thanks are given to the anonymous reviewers and editors for their constructive comments.

Author Contributions

Tian Peng and Chu Zhang designed and conducted the experiments. Tian Peng wrote the draft of the paper. Chu Zhang prepared the figures for this paper. Jianzhong Zhou proposed the main structure of this study. Jianzhong Zhou and Wenlong Fu provided useful advice and made some corrections. All authors read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Architecture of a typical three-layer ANN.
Figure 2. The overall framework of the proposed empirical wavelet transform (EWT)-multi-verse optimizer (MVO)-ANN model. PACF, partial autocorrelation function; TDR, travelling distance rate; WEP, worm existence probability.
Figure 3. Locations of upper Yangtze River, as well as the two study areas and the four gauging stations.
Figure 4. Annual streamflow series for the four stations.
Figure 5. Decomposed subseries of annual streamflow series for the four stations.
Figure 6. Comparison between the original and reconstructed streamflow series for the four stations.
Figure 7. PACF plot with 95% confidence bands for the original and reconstructed series.
Figure 8. Forecasting results of the MVO-ANN and the EWT-MVO-ANN models for the four stations (left, forecasted streamflow and observed streamflow with a dotted line dividing the training and validation stages; right, scatter plot with a linear fitting line).
Table 1. The statistical information of the streamflow data of the four stations. Skew., skewness; Kurt., kurtosis.

| Station | Catchment Area (km²) | Mean (m³/s) | SD (m³/s) | Min. (m³/s) | Max. (m³/s) | Skew. | Kurt. | Training | Validation |
|---|---|---|---|---|---|---|---|---|---|
| Xiangjiaba | 458,800 | 4593 | 730 | 3245 | 6286 | 0.39 | 2.31 | 1959–2000 | 2001–2012 |
| Panzhihua | 259,177 | 1830 | 298 | 1211 | 2423 | 0.13 | 2.03 | 1953–1998 | 1999–2012 |
| Luoduxi | 38,064 | 686 | 229 | 300 | 1400 | 0.40 | 2.94 | 1954–1996 | 1997–2008 |
| Sanleiba | 28,896 | 308 | 73 | 188 | 496 | 0.58 | 2.72 | 1954–1996 | 1997–2008 |
Table 2. Input determination for the original and reconstructed series.

| Stations | Original Inputs | Reconstructed Inputs |
|---|---|---|
| Xiangjiaba | x(t−1), x(t−2) | x(t−1), x(t−2) |
| Panzhihua | x(t−1), x(t−2) | x(t−1), x(t−2), x(t−3), x(t−4) |
| Luoduxi | x(t−1), x(t−2) | x(t−1), x(t−2), x(t−3), x(t−4) |
| Sanleiba | x(t−1), x(t−2) | x(t−1), x(t−2), x(t−3) |
Table 3. Forecasting performance of the models for Xiangjiaba Station.

| Models | Training RMSE (m³/s) | Training MAE (m³/s) | Training MAPE (%) | Training R | Validation RMSE (m³/s) | Validation MAE (m³/s) | Validation MAPE (%) | Validation R |
|---|---|---|---|---|---|---|---|---|
| BPNN | 660.37 | 534.91 | 11.5 | 0.525 | 721.17 | 602.00 | 14.7 | 0.223 |
| PSO-ANN | 603.42 | 480.81 | 10.4 | 0.629 | 725.10 | 590.04 | 14.0 | 0.227 |
| MVO-ANN | 576.73 | 447.18 | 9.5 | 0.672 | 774.46 | 655.93 | 15.3 | 0.514 |
| EWT-BPNN | 578.23 | 387.91 | 8.9 | 0.708 | 577.72 | 500.92 | 11.2 | 0.649 |
| EWT-PSO-ANN | 482.79 | 363.97 | 8.1 | 0.786 | 543.31 | 465.48 | 10.6 | 0.665 |
| EWT-MVO-ANN | 431.27 | 316.94 | 7.2 | 0.838 | 513.98 | 448.51 | 10.3 | 0.724 |
Table 4. Forecasting performance of the models for Panzhihua Station.

| Models | Training RMSE (m³/s) | Training MAE (m³/s) | Training MAPE (%) | Training R | Validation RMSE (m³/s) | Validation MAE (m³/s) | Validation MAPE (%) | Validation R |
|---|---|---|---|---|---|---|---|---|
| BPNN | 285.79 | 224.09 | 12.3 | 0.443 | 226.40 | 191.85 | 10.0 | 0.494 |
| PSO-ANN | 264.85 | 214.35 | 12.2 | 0.453 | 212.22 | 176.76 | 9.6 | 0.496 |
| MVO-ANN | 251.34 | 202.00 | 11.4 | 0.537 | 217.47 | 177.89 | 9.7 | 0.538 |
| EWT-BPNN | 193.35 | 140.15 | 8.1 | 0.766 | 208.66 | 187.40 | 9.7 | 0.632 |
| EWT-PSO-ANN | 180.35 | 144.71 | 8.2 | 0.800 | 209.13 | 169.76 | 9.3 | 0.612 |
| EWT-MVO-ANN | 159.39 | 124.89 | 7.2 | 0.844 | 181.19 | 158.20 | 8.0 | 0.727 |
Table 5. Forecasting performance of the models for Luoduxi Station.

| Models | Training RMSE (m³/s) | Training MAE (m³/s) | Training MAPE (%) | Training R | Validation RMSE (m³/s) | Validation MAE (m³/s) | Validation MAPE (%) | Validation R |
|---|---|---|---|---|---|---|---|---|
| BPNN | 209.72 | 164.64 | 24.5 | 0.526 | 398.13 | 356.80 | 57.2 | 0.406 |
| PSO-ANN | 196.87 | 161.81 | 25.5 | 0.540 | 327.48 | 286.82 | 50.2 | 0.380 |
| MVO-ANN | 177.94 | 136.75 | 21.8 | 0.648 | 284.38 | 257.35 | 44.4 | 0.532 |
| EWT-BPNN | 184.41 | 137.98 | 20.6 | 0.790 | 271.71 | 216.21 | 40.6 | 0.571 |
| EWT-PSO-ANN | 122.40 | 85.17 | 13.6 | 0.854 | 241.08 | 208.13 | 33.6 | 0.623 |
| EWT-MVO-ANN | 118.85 | 91.11 | 13.8 | 0.863 | 184.66 | 157.95 | 27.3 | 0.611 |
Table 6. Forecasting performance of the models for Sanleiba Station.

| Models | Training RMSE (m³/s) | Training MAE (m³/s) | Training MAPE (%) | Training R | Validation RMSE (m³/s) | Validation MAE (m³/s) | Validation MAPE (%) | Validation R |
|---|---|---|---|---|---|---|---|---|
| BPNN | 66.88 | 45.62 | 16.2 | 0.614 | 53.33 | 41.28 | 17.1 | 0.496 |
| PSO-ANN | 65.88 | 50.16 | 16.8 | 0.601 | 38.62 | 33.59 | 13.2 | 0.489 |
| MVO-ANN | 60.32 | 49.81 | 16.3 | 0.618 | 36.01 | 30.07 | 12.0 | 0.521 |
| EWT-BPNN | 49.12 | 37.56 | 12.2 | 0.766 | 35.98 | 31.13 | 11.8 | 0.697 |
| EWT-PSO-ANN | 45.02 | 36.05 | 11.6 | 0.813 | 31.37 | 25.06 | 9.6 | 0.720 |
| EWT-MVO-ANN | 38.87 | 26.86 | 8.4 | 0.861 | 35.24 | 29.68 | 11.9 | 0.752 |
Table 7. The comparison between the EWT-PSO-ANN and EWT-MVO-ANN models for the four stations.

| Stations | Models | Training RMSE (m³/s) | Training MAE (m³/s) | Training MAPE (%) | Training R | Validation RMSE (m³/s) | Validation MAE (m³/s) | Validation MAPE (%) | Validation R |
|---|---|---|---|---|---|---|---|---|---|
| Xiangjiaba | EWT-PSO-ANN | 482.79 | 363.97 | 8.1 | 0.786 | 543.31 | 465.48 | 10.6 | 0.665 |
| Xiangjiaba | EWT-MVO-ANN | 431.27 | 316.94 | 7.2 | 0.838 | 513.98 | 448.51 | 10.3 | 0.724 |
| Panzhihua | EWT-PSO-ANN | 181.40 | 143.40 | 8.3 | 0.793 | 223.91 | 198.04 | 10.0 | 0.612 |
| Panzhihua | EWT-MVO-ANN | 159.39 | 124.89 | 7.2 | 0.844 | 181.19 | 158.20 | 8.0 | 0.727 |
| Luoduxi | EWT-PSO-ANN | 147.36 | 117.73 | 18.5 | 0.783 | 211.60 | 173.32 | 29.3 | 0.582 |
| Luoduxi | EWT-MVO-ANN | 118.85 | 91.11 | 13.8 | 0.863 | 184.66 | 157.95 | 27.3 | 0.611 |
| Sanleiba | EWT-PSO-ANN | 48.10 | 37.42 | 12.4 | 0.788 | 34.59 | 29.13 | 11.2 | 0.705 |
| Sanleiba | EWT-MVO-ANN | 38.87 | 26.86 | 8.4 | 0.861 | 35.24 | 29.68 | 11.9 | 0.752 |
