Article

Prediction of Hydropower Generation Using Grey Wolf Optimization Adaptive Neuro-Fuzzy Inference System

by Majid Dehghani 1, Hossein Riahi-Madvar 2, Farhad Hooshyaripor 3, Amir Mosavi 4,5, Shahaboddin Shamshirband 6,7,*, Edmundas Kazimieras Zavadskas 8 and Kwok-wing Chau 9
1 Technical and Engineering Department, Faculty of Civil Engineering, Vali-e-Asr University of Rafsanjan, P.O. Box 518, Rafsanjan 7718897111, Iran
2 College of Agriculture, Vali-e-Asr University of Rafsanjan, P.O. Box 518, Rafsanjan 7718897111, Iran
3 Technical and Engineering Department, Science and Research Branch, Islamic Azad University, Tehran 1477893855, Iran
4 Institute of Automation, Kando Kalman Faculty of Electrical Engineering, Obuda University, 1034 Budapest, Hungary
5 School of the Built Environment, Oxford Brookes University, Oxford OX3 0BP, UK
6 Department for Management of Science and Technology Development, Ton Duc Thang University, Ho Chi Minh City, Viet Nam
7 Faculty of Information Technology, Ton Duc Thang University, Ho Chi Minh City, Viet Nam
8 Institute of Sustainable Construction, Vilnius Gediminas Technical University, LT-10223 Vilnius, Lithuania
9 Department of Civil and Environmental Engineering, Hong Kong Polytechnic University, Hong Kong, China
* Author to whom correspondence should be addressed.
Energies 2019, 12(2), 289; https://doi.org/10.3390/en12020289
Submission received: 31 December 2018 / Revised: 14 January 2019 / Accepted: 16 January 2019 / Published: 17 January 2019

Abstract

Hydropower is among the cleanest sources of energy. However, the rate of hydropower generation is profoundly affected by the inflow to the dam reservoirs. In this study, the grey wolf optimization (GWO) method was coupled with an adaptive neuro-fuzzy inference system (ANFIS) to forecast hydropower generation. For this purpose, the average rainfall over the Dez basin was calculated using Thiessen polygons. Twenty input combinations, including the inflow to the dam, the rainfall and the hydropower generation in the previous months, were used, while the output in all scenarios was the hydropower generation one month ahead. The coupled model was then used to forecast the hydropower generation. The results indicated that the method is promising: GWO-ANFIS was capable of predicting the hydropower generation satisfactorily, while ANFIS alone failed in nine of the input-output combinations.

1. Introduction

Hydropower is a renewable source of energy derived from the flow of reservoir water through turbines. One of the main purposes of dam construction is to generate hydropower via a plant installed near the dam site. The rate of hydropower generation depends on the dam height and the inflow to the dam reservoir. Hydropower is also one of the major sources of power supply in many countries, and power consumption varies strongly during the year. Therefore, insight into the amount of hydropower energy to be produced in the coming months is an important tool for managing the electricity distribution network and operating the dam. Consequently, hydropower generation forecasting can be a key component of dam operation. Hamlet et al. [1] evaluated a long-lead forecasting model for the Columbia River and stated that it increased annual revenue by approximately $153 million compared with no forecasting. Several studies have been carried out by forecasting the inflow to the dam and then executing a reservoir operation model to determine the hydropower generation [2,3,4,5,6,7,8]. While these studies are promising, some challenges arise when implementing such models. First, the precipitation must be forecast, then the inflow to the river, and finally a reservoir model must be run. Each step, including the precipitation or inflow forecasting and the reservoir modeling, is associated with uncertainty, and the results are highly affected by the uncertainty in these models. Second, an optimization algorithm is typically needed to optimize the parameters of the predictive models.
During the past two decades, several artificial intelligence models have been utilized for hydrologic prediction [9] and hydropower streamflow forecasting [10]. Among them, ensemble models [11,12,13] and hybrid models [14] have recently become very popular. To produce novel hybrid models, different optimization algorithms have been coupled with these models to improve their performance [15,16,17,18,19,20]. Among the optimization algorithms, grey wolf optimization (GWO) has shown promising results in a wide range of applications when coupled with machine learning algorithms [21]. Consequently, in this study, to reduce the sources of uncertainty, an artificial intelligence model was used for hydropower generation forecasting. For this purpose, the adaptive neuro-fuzzy inference system (ANFIS) was coupled with GWO to forecast the monthly hydropower generation directly from the precipitation over the basin, the inflow to the dam and the hydropower generation in previous months. This approach facilitates hydropower generation forecasting. The rest of this paper is organized as follows. In Section 2, the coupled ANFIS and GWO model and the study area are presented. Section 3 presents the results of the hydropower forecasting and its reliability. Finally, Section 4 presents the conclusions of the study.

2. Methodology and Data

2.1. Study Area

The Dez dam is an arch dam constructed in 1963 on the Dez River in southwestern Iran (Figure 1). The dam is 203 m high and has a reservoir capacity of 3340 Mm3. The upstream catchment of the dam, with a mean elevation of 1915.3 m above sea level and an average slope of 0.0084, has an area of 17,843.3 km2. The catchment length is about 400 km and ends at the dam reservoir at the outlet. The flow to the reservoir was measured at the Tele-Zang hydrometric station (Figure 1). The precipitation stations used in the present study include four stations within the catchment and 10 others around it (Figure 1). The hydrometric data were taken from Iran's Water Resources Management Company (http://www.wrm.ir/index.php?l=EN) and the precipitation data are available from the Iran Meteorological Organization (http://www.irimo.ir/eng/index.php). The monthly data used here cover October 1963 to September 2017. The average inflow to the reservoir and the average precipitation are shown in Figure 2. According to Figure 2, most precipitation occurred from October to May. Winter precipitation accumulated as snowpack over the high mountainous areas, and in spring the river flow increased as a result of snowmelt. Summer was dry, with almost no considerable precipitation.
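The basin-average rainfall mentioned in the abstract was derived with Thiessen polygons. The following is a minimal sketch of the area-weighted averaging step, assuming the polygon area assigned to each station has already been obtained from a GIS analysis; the station names and values below are purely illustrative and are not the measured data of this study.

```python
# Minimal sketch of a Thiessen-weighted basin-average precipitation.
# Polygon areas (km2) per station are assumed to come from a prior GIS step;
# the names and numbers below are illustrative only.
station_area_km2 = {"S1": 4200.0, "S2": 3800.0, "S3": 5100.0, "S4": 4743.3}
monthly_precip_mm = {"S1": 55.2, "S2": 61.0, "S3": 48.7, "S4": 70.3}

total_area = sum(station_area_km2.values())
basin_avg = sum(
    monthly_precip_mm[s] * area / total_area for s, area in station_area_km2.items()
)
print(f"Basin-average precipitation: {basin_avg:.1f} mm")
```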
The primary purposes of the Dez dam are flood control, hydroelectric power generation and irrigation supply for a 125,000 ha agricultural area downstream. The Dez hydropower plant consists of eight units with a total installed capacity of 520 MW. The monthly power generation between 1963 and 2017 was gathered from Iran's Water Resources Management Company. Figure 3 illustrates the monthly hydroelectric generation of the Dez hydropower plant. Table 1 shows the statistical characteristics of the precipitation over the Dez basin, the inflow to the dam reservoir and the hydropower generation time series.

2.2. ANFIS: Adaptive Neuro-Fuzzy Inference System

Jang (1993) [22] developed ANFIS as a combination of an artificial neural network (ANN) and a fuzzy inference system [23]. The learning ability of ANNs together with fuzzy reasoning creates a valuable capability to fit a relationship between the input and output spaces [24]. In particular, ANFIS uses the training capability of an ANN to assign and adjust the membership functions. The back-propagation algorithm enables the model to adjust the parameters until an acceptable error is reached [25]. Suppose that the fuzzy inference system includes x and y as inputs and z as output. Two if-then rules can be utilized for the Sugeno model as follows:
  • Rule one: if $x$ is $A_1$ and $y$ is $B_1$, then $f_1 = p_1 x + q_1 y + r_1$.
  • Rule two: if $x$ is $A_2$ and $y$ is $B_2$, then $f_2 = p_2 x + q_2 y + r_2$.
where $A_1$, $B_1$, $A_2$ and $B_2$ are the linguistic labels, and $p_1$, $p_2$, $q_1$, $q_2$, $r_1$ and $r_2$ are the output function parameters [26].
The architecture of ANFIS is presented in Figure 4. It comprises five layers; the nodes in the first and fourth layers are adaptive, while the nodes in the remaining layers are fixed.
Layer 1: the nodes act adaptively in generating the membership grades of the inputs [24]:
$$O_{1,i} = \mu_{A_i}(x), \quad i = 1, 2, \qquad \text{or} \qquad O_{1,i} = \mu_{B_{i-2}}(y), \quad i = 3, 4$$
It should be noted that $i$ is the node index and $O_{1,i}$ to $O_{5,i}$ denote the outputs of the respective layers. Several types of membership function could be used for this purpose; among them, the Gaussian function presented in the following equation was utilized in this study:
$$\mu(x) = \exp\left[-0.5\left(\frac{x - c_i}{\sigma_i}\right)^2\right]$$
where $c_i$ and $\sigma_i$ are the adjustable premise parameters; the resulting membership grade has a maximum of one and a minimum of zero [22].
Layer 2: this layer contains the rule nodes, which apply an AND/OR operator to obtain an output called the firing strength $O_{2,i}$:
$$O_{2,i} = w_i = \mu_{A_i}(x)\,\mu_{B_i}(y), \quad i = 1, 2$$
Layer 3: this layer contains the average nodes, which compute the normalized firing strengths as follows:
$$O_{3,i} = \bar{w}_i = \frac{w_i}{w_1 + w_2}, \quad i = 1, 2$$
Layer 4: this layer contains the consequent nodes, whose parameters $p$, $q$ and $r$ are tuned during the learning process:
$$O_{4,i} = \bar{w}_i f_i = \bar{w}_i (p_i x + q_i y + r_i)$$
Layer 5: this layer contains the output node, which computes the overall output as the sum of all incoming signals [27]:
$$O_{5,i} = f = \sum_i \bar{w}_i f_i$$
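As a concrete illustration of Layers 1 to 5, the following is a minimal sketch of the forward pass of a two-input, two-rule Sugeno ANFIS with Gaussian membership functions. The parameter values are purely illustrative assumptions and are not the fitted values of this study.

```python
import numpy as np

def gaussian_mf(x, c, sigma):
    """Layer 1: Gaussian membership grade for a single input value."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

def anfis_forward(x, y, premise, consequent):
    """Forward pass of a two-input, two-rule Sugeno ANFIS (Layers 1-5)."""
    # Layer 1: membership grades of the inputs
    mu_A = [gaussian_mf(x, c, s) for c, s in premise["A"]]
    mu_B = [gaussian_mf(y, c, s) for c, s in premise["B"]]
    # Layer 2: firing strengths (product AND operator)
    w = [mu_A[i] * mu_B[i] for i in range(2)]
    # Layer 3: normalized firing strengths
    w_bar = [wi / sum(w) for wi in w]
    # Layer 4: rule consequents f_i = p_i*x + q_i*y + r_i
    f = [p * x + q * y + r for p, q, r in consequent]
    # Layer 5: overall output as the weighted sum
    return sum(wb * fi for wb, fi in zip(w_bar, f))

# Illustrative parameter values (centers/widths and consequent coefficients).
premise = {"A": [(0.2, 0.5), (0.8, 0.5)], "B": [(0.3, 0.4), (0.7, 0.4)]}
consequent = [(1.0, 2.0, 0.5), (-0.5, 1.5, 1.0)]
print(anfis_forward(0.4, 0.6, premise, consequent))
```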
While ANFIS has a high capability to map the inputs to the output as a black-box model, it suffers from long training times when assigning proper values to the membership function parameters. To overcome this problem, the grey wolf optimization algorithm is used.

2.3. Grey Wolf Optimization (GWO)

Grey wolf optimization (GWO) is an advanced nature-inspired meta-heuristic for efficient optimization [28]. The algorithm was developed by imitating the foraging behavior of grey wolves, which live in packs of 5-12 individuals and are at the top of the food chain [29]. Grey wolves strictly follow a social hierarchy.
The leaders are a breeding pair, a female and a male called alphas (α), which are in charge of decision making while hunting, resting and so on. Beta (β) is the next level; the betas help the alphas in making decisions but must obey them. The beta wolves can be male or female, and their role is to discipline the pack; they are the best candidates to substitute for the alphas when the alphas become old or die. The next level is called delta (δ); these wolves play the role of scouts, sentinels, hunters and so on. The last and weakest level is called omega (ω); these wolves act as babysitters. Although this level is the weakest, without the omega wolves internal fights may be observed in the pack. Hunting, along with the social hierarchy, is a major social behavior of grey wolves. Muro et al. [30] described three steps in grey wolf hunting:
  • Identifying, following and approaching the prey;
  • Encircling the prey;
  • Attacking the prey.
These two social behaviors are considered in the GWO algorithm [29]. In the mathematical model of the algorithm, α is considered the fittest solution, followed by β and δ, while the remaining candidate solutions are ω. The mathematical formulation of encircling can be presented as follows [28]:
$$\vec{D} = \left| \vec{C} \cdot \vec{X}_p(t) - \vec{X}(t) \right|$$
$$\vec{X}(t+1) = \vec{X}_p(t) - \vec{A} \cdot \vec{D}$$
where $\vec{A}$ and $\vec{C}$ are coefficient vectors, $\vec{X}_p$ is the position vector of the prey, $\vec{X}$ is the position vector of a grey wolf, $\vec{D}$ specifies the new position of the grey wolf and $t$ is the current iteration. $\vec{A}$ and $\vec{C}$ are formulated as [28]:
$$\vec{A} = 2\vec{a} \cdot \vec{r}_1 - \vec{a}$$
$$\vec{C} = 2\vec{r}_2$$
where $\vec{a}$ is a vector whose components decrease linearly from 2 to 0 over the course of the iterations, and $\vec{r}_1$ and $\vec{r}_2$ are random vectors in $[0, 1]$.
The α leads the hunt, while β and δ occasionally contribute to this task. For the mathematical representation of hunting, it is assumed that the alpha, beta and delta have better knowledge of the prey's location. Thus, the three best solutions obtained so far are saved, and the rest of the wolves update their positions accordingly:
$$\vec{D}_\alpha = \left| \vec{C}_1 \cdot \vec{X}_\alpha - \vec{X} \right|, \qquad \vec{D}_\beta = \left| \vec{C}_2 \cdot \vec{X}_\beta - \vec{X} \right|, \qquad \vec{D}_\delta = \left| \vec{C}_3 \cdot \vec{X}_\delta - \vec{X} \right|$$
$$\vec{X}_1 = \vec{X}_\alpha - \vec{A}_1 \cdot \vec{D}_\alpha, \qquad \vec{X}_2 = \vec{X}_\beta - \vec{A}_2 \cdot \vec{D}_\beta, \qquad \vec{X}_3 = \vec{X}_\delta - \vec{A}_3 \cdot \vec{D}_\delta$$
$$\vec{X}(t+1) = \frac{\vec{X}_1 + \vec{X}_2 + \vec{X}_3}{3}$$
When the prey stops moving, the grey wolves start to attack. The vector $\vec{A}$ takes random values in the interval $[-2a, 2a]$. $|\vec{A}| < 1$ leads the grey wolves to attack the prey, while $|\vec{A}| > 1$ forces them to move away in search of a better solution. Figure 5 shows the framework of the GWO algorithm.
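A minimal sketch of the GWO update loop described above is given below. In the present coupling, the objective function would be, for example, the training error of an ANFIS whose membership function parameters are taken from the candidate position vector; the generic implementation below is an illustration under that assumption, not the exact code used in this study.

```python
import numpy as np

def gwo_minimize(objective, dim, n_wolves=20, max_iter=100, lb=-1.0, ub=1.0, seed=0):
    """Minimal grey wolf optimizer sketch following Mirjalili et al. [28]."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n_wolves, dim))        # wolf positions
    leaders = []                                         # best-so-far (score, position)

    def update_leaders():
        nonlocal leaders
        scored = [(objective(x), x.copy()) for x in X]
        leaders = sorted(leaders + scored, key=lambda p: p[0])[:3]

    update_leaders()                                     # initial alpha, beta, delta
    for t in range(max_iter):
        a = 2.0 - 2.0 * t / max_iter                     # decreases linearly from 2 to 0
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for _, leader in leaders:                    # alpha, beta, delta
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2.0 * a * r1 - a                     # coefficient vector A
                C = 2.0 * r2                             # coefficient vector C
                D = np.abs(C * leader - X[i])            # distance to the leader
                new_pos += (leader - A * D) / 3.0        # average of X1, X2 and X3
            X[i] = np.clip(new_pos, lb, ub)
        update_leaders()                                 # refresh alpha, beta, delta
    best_score, best_pos = leaders[0]
    return best_pos, best_score

# Example use on a toy objective: best, score = gwo_minimize(lambda p: np.sum(p ** 2), dim=5)
```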

2.4. Performance Criteria

The efficiency of the proposed models, in terms of both accuracy and agreement, was evaluated using statistical criteria such as the confidence index (CI), root mean square error (RMSE), Nash-Sutcliffe efficiency (NSE), coefficient of determination (R2), index of agreement (d), relative absolute error (RAE) and mean absolute error (MAE).
RMSE and MAE are common mean error indicators that show how close the data points are to the best-fit line (see the equations at the end of this subsection) [31].
According to Nash and Sutcliffe [32], the NSE is defined as one minus the sum of the squared differences between the observed and estimated data, normalized by the variance of the observed data. As noted in [33], the range of NSE is from −∞ to one. When NSE is less than zero, the mean of the observed values would have been a better predictor than the model. The NSE describes how well the plot of observed versus estimated data fits the 1:1 line.
Furthermore, following Bravais-Pearson, R2 is the squared value of the correlation coefficient and describes how much of the observed dispersion is explained by the prediction. R2 varies between 0 and 1: a value of 0 indicates no correlation between the observed and predicted data, whereas a value of 1 indicates that the dispersion of the estimates equals that of the observations [33].
The index of agreement d [34] overcomes the insensitivity of NSE and R2 to differences in the means and variances of the observed and estimated data [35]. It represents the ratio of the mean square error to the potential error [36]. Like R2, d ranges from 0 (no correlation) to 1 (perfect fit).
The RAE is a non-negative index that indicates the overall level of agreement between the observed and estimated datasets. RAE ranges from 0 for a perfect fit to ∞, that is, it has no upper bound. The confidence index (CI) is the product of NSE and d and ranges between 1 (perfect fit) and −∞; values lower than zero mean that the mean of the observed values would have been a better predictor than the model.
The evaluation criteria were calculated based on the following equations:
$$RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(O_i - P_i)^2}, \qquad 0 \le RMSE < \infty$$
$$MAE = \frac{1}{N}\sum_{i=1}^{N}\left|O_i - P_i\right|, \qquad 0 \le MAE < \infty$$
$$R^2 = \left( \frac{\sum_{i=1}^{N}(O_i - \bar{O})(P_i - \bar{P})}{\sqrt{\sum_{i=1}^{N}(O_i - \bar{O})^2}\,\sqrt{\sum_{i=1}^{N}(P_i - \bar{P})^2}} \right)^2, \qquad 0 \le R^2 \le 1$$
$$NSE = 1 - \frac{\sum_{i=1}^{N}(O_i - P_i)^2}{\sum_{i=1}^{N}(O_i - \bar{O})^2}, \qquad -\infty < NSE \le 1$$
$$d = 1 - \frac{\sum_{i=1}^{N}(P_i - O_i)^2}{\sum_{i=1}^{N}\left(\left|P_i - \bar{O}\right| + \left|O_i - \bar{O}\right|\right)^2}, \qquad 0 \le d \le 1$$
$$PI = 1 - \frac{\sum_{i=2}^{N}(O_i - P_i)^2}{\sum_{i=2}^{N}(O_i - O_{i-1})^2}, \qquad -\infty < PI < \infty$$
$$CI = d \times NSE, \qquad CI \le 1$$
$$RAE = \frac{\sum_{i=1}^{N}\left|O_i - P_i\right|}{\sum_{i=1}^{N}\left|O_i - \bar{O}\right|}, \qquad 0 \le RAE < \infty$$
where $O_i$ is the observed value, $P_i$ is the predicted model output, $\bar{O}$ is the mean of the observations, $\bar{P}$ is the mean of the model outputs and $N$ is the number of data points.
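As a compact reference, the following is a minimal sketch of how these criteria can be computed for a pair of observed and predicted series; it is a straightforward transcription of the equations above, not the authors' original code.

```python
import numpy as np

def evaluation_criteria(obs, pred):
    """Compute RMSE, MAE, R2, NSE, d, CI and RAE for observed/predicted series."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    err = obs - pred
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    r2 = np.corrcoef(obs, pred)[0, 1] ** 2
    nse = 1.0 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)
    d = 1.0 - np.sum((pred - obs) ** 2) / np.sum(
        (np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2
    )
    ci = d * nse
    rae = np.sum(np.abs(err)) / np.sum(np.abs(obs - obs.mean()))
    return {"RMSE": rmse, "MAE": mae, "R2": r2, "NSE": nse, "d": d, "CI": ci, "RAE": rae}
```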

3. Results

In this study, the inflow to the Dez dam and the average precipitation over the whole basin were utilized to forecast the hydropower generation. For this purpose, the time series was divided into two subsets: 70% of the data were assigned to the training subset and the remaining 30% to the test subset.
Different input combinations were evaluated and used in the modeling process. The final selection of input combinations was based on the correlation analysis of the variables in Table 3, the physical nature of the variables and the applicability of the models presented in Table 2. Depending on which measured parameters are available at a dam, one can choose which model is applicable for predicting hydropower, and these different combinations strengthen the applicability of the models under different data availability conditions. Some models were based only on the inflow to the dam and the rainfall, namely M1, M2, M3, M13, M14, M15, M18, M19 and M20; these models do not require the hydropower generation of the dam at previous time steps and can predict the hydropower generation of the plant from inflow and precipitation alone. Other models used lagged values of the dam's hydropower generation at previous time steps as input vectors and do not require further information on inflow or precipitation; these lag-based models include M5, M7, M10, M11 and M12. The remaining models, such as M4, M6, M8, M9, M16 and M17, are based on combinations of lagged values of hydropower generation, inflow and precipitation.
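As an illustration, the following is a minimal sketch of building the input-output pairs of one lag-based combination (M17 in Table 2: Qt-3, Ht-2 and Ht-1 as inputs, Ht as output) and splitting them chronologically into 70% training and 30% test subsets; the function name and structure are illustrative assumptions, not the authors' code.

```python
import numpy as np

def build_m17_dataset(Q, H, train_fraction=0.7):
    """Build input-output pairs for model M17 (Qt-3, Ht-2, Ht-1 -> Ht)
    and split them chronologically into training and test subsets."""
    Q, H = np.asarray(Q, float), np.asarray(H, float)
    X, y = [], []
    for t in range(3, len(H)):                  # start where all lags exist
        X.append([Q[t - 3], H[t - 2], H[t - 1]])
        y.append(H[t])
    X, y = np.array(X), np.array(y)
    n_train = int(train_fraction * len(y))      # 70% train, 30% test, no shuffling
    return (X[:n_train], y[:n_train]), (X[n_train:], y[n_train:])
```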
According to Table 2, the discharge, precipitation and hydropower generation with different lags were used to forecast the hydropower generation of the next month. The correlation coefficients between the input variables were calculated and are presented in Table 3; they range between 0.01 and 0.67. It should be noted that Q is the inflow to the dam, not the inflow to the turbines; since the dam is multipurpose and the stored water is also used for irrigation, it is reasonable to use Qt to predict Ht. All 20 input combinations were used for modeling with both ANFIS and GWO-ANFIS to evaluate the capability of GWO in optimizing the ANFIS parameters. The results of the ANFIS modeling are presented in Table 4. Among the different input combinations, the first three models were not capable of reproducing satisfactory results; the negative values of NSE and CI indicate the poor performance of these models. The same behavior is visible in M13, M14, M15, M18, M19 and M20. However, M4 to M11, M16 and M17 performed well. Among these combinations, M4 was the best, followed by M8, M10 and M9. Although M4 was the best according to the evaluation criteria, M17 was selected as the best model based on Table 2, because all the inputs of M17, i.e., Qt-3, Ht-2 and Ht-1, have at least a one-month lag; in addition, the results of M4 and M17 were not considerably different. This pattern was repeated in the test phase. Consequently, it can be concluded that ANFIS was capable of forecasting the hydropower generation satisfactorily in these combinations.
In the next step, the coupled model of GWO-ANFIS was utilized for hydropower generation forecasting. Results are presented in Table 5. According to Table 5, the model performed well in all input combinations. As the d, NSE and CI values were positive in all the models, the new modeling technique of GWO-ANFIS provided a superior capability in forecasting hydropower generation, while the ANFIS results failed in nine models. In addition, based on the evaluation criteria, the accuracy of forecasting was higher for GWO-ANFIS.
The time series of observed and forecasted hydropower generation in the training and test phases for M4, M8, M10 and M17 are presented in Figure 6 and Figure 7. Both ANFIS and GWO-ANFIS performed well, while GWO-ANFIS was superior due to its lower error. In addition, it can be observed that M4 and M10 provided better input combinations, while M17 was more practical for dam operation and reservoir management.
Although Figure 6 and Figure 7 and Table 4 and Table 5 show the observed and forecasted values and the evaluation criteria for all models, the error distribution among the models cannot be assessed from these figures and tables alone. Therefore, box plots of the errors during the training and test phases are presented in Figure 8. It can be observed in Figure 8 that GWO-ANFIS was considerably superior to ANFIS in almost all input combinations; in particular, the error of ANFIS in the nine failing models was considerably higher than that of GWO-ANFIS.
The GWO meta-heuristic showed acceptable efficiency in optimizing the unknown parameters of ANFIS. The number of parameters to be optimized in ANFIS and GWO-ANFIS was the same, and since the main quantifier of complexity is the number of unknown parameters to be tuned during training, the numerical complexity of the two modeling approaches was similar. Furthermore, training the ANFIS models required derivative calculations for the unknown parameters, which increased the computational time and space needed for training, whereas the GWO-ANFIS models did not require derivative calculations, leading to less computation and faster convergence.

4. Conclusions

In this study, a coupled model of an adaptive neuro-fuzzy inference system and grey wolf optimization was utilized for one-month-ahead hydropower generation forecasting. For this purpose, 53 years of monthly data on the inflow to the dam reservoir and the hydropower generation were used. Twenty input-output combinations were considered to evaluate the model robustness and to find the best combination. Based on the results, GWO was capable of improving the ANFIS performance considerably: GWO-ANFIS performed well in all 20 combinations according to the evaluation criteria, while ANFIS failed in nine out of the 20 combinations. Additionally, the box plots of the errors in all combinations show the superiority of GWO-ANFIS. Overall, it can be concluded that GWO-ANFIS is capable of forecasting the hydropower generation satisfactorily, which makes it a suitable tool for policymakers. For future research, it is worth noting that not all the rules in the model architecture are essential; the complexity of the trained models should therefore be reduced by eliminating the non-contributing rules, which would lower the network's computational cost. To improve the proposed method, utilizing other optimization algorithms to create novel hybrid prediction models, as well as applying ensemble models to this problem, is suggested for future research. In fact, the potential of ensemble machine learning models has not yet been fully explored for the prediction of hydropower generation, which leaves great room for future investigations. In addition, a limitation of the proposed model is that, while the factors for which the model was implemented are the most critical ones, there may be other relevant factors that should be included; for instance, climate change and drought variations need to be separated from the general trend of the data set. The addition of these concepts is therefore left for future work.

Author Contributions

Conceptualization, M.D., H.R.-M. and F.H.; Data curation, M.D., H.R.-M. and F.H.; Formal analysis, M.D., A.M., H.R.-M. and F.H.; Methodology, M.D., H.R.-M., S.S. and F.H.; Resources, H.R.-M. and F.H.; Software, H.R.-M., M.D. and F.H.; Supervision, K.-w.C. and E.K.Z.; Visualization, F.H., A.M., S.S. and K.-w.C.; Writing—original draft, M.D., H.R.-M., F.H. and A.M.; Writing—review & editing, M.D., H.R.-M., F.H., A.M., S.S. and K.-w.C.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hamlet, A.F.; Huppert, D.; Lettenmaier, D.P. Economic value of long-lead streamflow forecasts for Columbia River hydropower. J. Water Resour. Plan. Manag. 2002, 128, 91–101. [Google Scholar] [CrossRef]
  2. Tang, G.L.; Zhou, H.C.; Li, N.; Wang, F.; Wang, Y.; Jian, D. Value of medium-range precipitation forecasts in inflow prediction and hydropower optimization. Water Resour. Manag. 2010, 24, 2721–2742. [Google Scholar] [CrossRef]
  3. Zhou, H.; Tang, G.; Li, N.; Wang, F.; Wang, Y.; Jian, D. Evaluation of precipitation forecasts from NOAA global forecast system in hydropower operation. J. Hydroinform. 2011, 13, 81–95. [Google Scholar] [CrossRef]
  4. Block, P. Tailoring seasonal climate forecasts for hydropower operations. Hydrol. Earth Syst. Sci. 2011, 15, 1355–1368. [Google Scholar] [CrossRef] [Green Version]
  5. Rheinheimer, D.E.; Bales, R.C.; Oroza, C.A.; Lund, J.R.; Viers, J.H. Valuing year-to-go hydrologic forecast improvements for a peaking hydropower system in the Sierra Nevada. Water Resour. Res. 2016, 52, 3815–3828. [Google Scholar] [CrossRef] [Green Version]
  6. Zhang, X.; Peng, Y.; Xu, W.; Wang, B. An Optimal Operation Model for Hydropower Stations Considering Inflow Forecasts with Different Lead-Times. Water Resour. Manag. 2017. [Google Scholar] [CrossRef]
  7. Peng, Y.; Xu, W.; Liu, B. Considering precipitation forecasts for real-time decision-making in hydropower operations. Int. J. Water Resour. Dev. 2017, 33, 987–1002. [Google Scholar] [CrossRef]
  8. Jiang, Z.; Li, R.; Li, A.; Ji, C. Runoff forecast uncertainty considered load adjustment model of cascade hydropower stations and its application. Energy 2018, 158, 693–708. [Google Scholar] [CrossRef]
  9. Mosavi, A.; Ozturk, P.; Chau, K.W. Flood prediction using machine learning models: Literature review. Water 2018, 10, 1536. [Google Scholar] [CrossRef]
  10. Hammid, A.T.; Sulaiman, M.H.B.; Abdalla, A.N. Prediction of small hydropower plant power production in Himreen Lake dam (HLD) using artificial neural network. Alexandria Eng. J. 2018, 57, 211–221. [Google Scholar] [CrossRef]
  11. Boucher, M.A.; Ramos, M.H. Ensemble Streamflow Forecasts for Hydropower Systems. Handb. Hydrometeorol. Ensemble Forecast. 2018, 1–19. [Google Scholar] [CrossRef]
  12. Choubin, B.; Moradi, E.; Golshan, M.; Adamowski, J.; Sajedi-Hosseini, F.; Mosavi, A. An Ensemble prediction of flood susceptibility using multivariate discriminant analysis, classification and regression trees, and support vector machines. Sci. Total Environ. 2019, 651, 2087–2096. [Google Scholar] [CrossRef] [PubMed]
  13. Shamshirband, S.; Jafari Nodoushan, E.; Adolf, J.E.; Abdul Manaf, A.; Mosavi, A.; Chau, K.W. Ensemble models with uncertainty analysis for multi-day ahead forecasting of chlorophyll a concentration in coastal waters. Eng. Appl. Comput. Fluid Mech. 2019, 13, 91–101. [Google Scholar] [CrossRef]
  14. Bui, K.T.T.; Bui, D.T.; Zou, J.; Van Doan, C.; Revhaug, I. A novel hybrid artificial intelligent approach based on neural fuzzy inference model and particle swarm optimization for horizontal displacement modeling of hydropower dam. Neural Comput. Appl. 2018, 29, 1495–1506. [Google Scholar]
  15. Kim, Y.O.; Eum, H.I.; Lee, E.G.; Ko, I.H. Optimizing Operational Policies of a Korean Multireservoir System Using Sampling Stochastic Dynamic Programming with Ensemble Streamflow Prediction. J. Water Resour. Plan Manag. 2007, 133, 4. [Google Scholar] [CrossRef]
  16. Ch, S.; Anand, N.; Panigrahi, B.K. Streamflow forecasting by SVM with quantum behaved particle swarm optimization. Neurocomputing 2013, 101, 18–23. [Google Scholar] [CrossRef]
  17. Cote, P.; Leconte, R. Comparison of Stochastic Optimization Algorithms for Hydropower Reservoir Operation with Ensemble Streamflow Prediction. J. Water Resour. Plan Manag. 2016, 142, 04015046. [Google Scholar] [CrossRef]
  18. Keshtegar, B.; Falah Allawi, M.; Afan, H.A.; El-Shafie, A. Optimized River Stream-Flow Forecasting Model Utilizing High-Order Response Surface Method. Water Resour. Manag. 2016, 30, 3899–3914. [Google Scholar] [CrossRef]
  19. Paul, M.; Negahban-Azar, M. Sensitivity and uncertainty analysis for streamflow prediction using multiple optimization algorithms and objective functions: San Joaquin Watershed, California. Model. Earth Syst. Environ. 2018, 4, 1509–1525. [Google Scholar] [CrossRef]
  20. Karballaeezadeh, N.; Mohammadzadeh, D.; Shamshirband, S.; Hajikhodaverdikhan, P.; Mosavi, A.; Chau, K.W. Prediction of remaining service life of pavement using an optimized support vector machine. Eng. Appl. Comput. Fluid Mech. 2019, 16, 120–144. [Google Scholar]
  21. Niu, M.; Wang, Y.; Sun, S.; Li, Y. A novel hybrid decomposition-and-ensemble model based on CEEMD and GWO for short-term PM2.5 concentration forecasting. Atmos. Environ. 2016, 134, 168–180. [Google Scholar] [CrossRef]
  22. Jang, J.-S.R. ANFIS: Adaptive-network-based fuzzy inference system. IEEE Trans. Syst. Man Cybern. 1993, 23, 665–685. [Google Scholar] [CrossRef]
  23. Choubin, B.; Khalighi-Sigaroodi, S.; Malekian, A.; Kişi, O. Multiple linear regression, multi-layer perceptron network and adaptive neuro-fuzzy inference system for the prediction of precipitation based on large-scale climate signals. Hydrol. Sci. J. 2016, 61, 1001–1009. [Google Scholar] [CrossRef]
  24. Firat, M.; Güngör, M. Hydrological time-series modelling using an adaptive neuro-fuzzy inference system. Hydrol. Process. 2007, 22, 2122–2132. [Google Scholar] [CrossRef]
  25. Shabri, A. A Hybrid Wavelet Analysis and Adaptive Neuro-Fuzzy Inference System for Drought Forecasting. Appl. Math. Sci. 2014, 8, 6909–6918. [Google Scholar] [CrossRef]
  26. Kisi, O.; Shiri, J. Precipitation forecasting using wavelet genetic programming and wavelet-neuro-fuzzy conjunction models. Water Resour. Manag. 2011, 25, 3135–3152. [Google Scholar] [CrossRef]
  27. Awan, J.A.; Bae, D.H. Drought prediction over the East Asian monsoon region using the adaptive neuro-fuzzy inference system and the global sea surface temperature anomalies. Int. J. Climatol. 2016, 36, 4767–4777. [Google Scholar] [CrossRef]
  28. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  29. Bozorg-Haddad, O. Advanced Optimization by Nature-Inspired Algorithms; Springer: Singapore, 2017. [Google Scholar]
  30. Muro, C.; Escobedo, R.; Spector, L.; Coppinger, R. Wolf-pack (Canis Lupus) hunting strategies emerge from simple rules in computational simulations. Behav. Process. 2011, 88, 192–197. [Google Scholar] [CrossRef]
  31. Amr, H.; El-Shafie, A.; El Mazoghi, H.; Shehata, A.; Taha, M.R. Artificial neural network technique for rainfall forecasting applied to Alexandria, Egypt. Int. J. Phys. Sci. 2011, 6, 1306–1316. [Google Scholar]
  32. Nash, J.E.; Sutcliffe, J.V. River flow forecasting through conceptual models part I—A discussion of principles. J. Hydrol. 1970, 10, 282–290. [Google Scholar] [CrossRef]
  33. Krause, P.; Boyle, D.P.; Bäse, F. Comparison of different efficiency criteria for hydrological model assessment. Adv. Geosci. 2005, 5, 89–97. [Google Scholar] [CrossRef] [Green Version]
  34. Willmott, C.J. On the validation of models. Phys. Geogr. 1981, 2, 184–194. [Google Scholar] [CrossRef]
  35. Legates, D.R.; McCabe, G.J. Evaluating the use of “goodness-of-fit” measures in hydrologic and hydroclimatic model validation. Water Resour. Res. 1999, 35, 233–241. [Google Scholar] [CrossRef]
  36. Willmott, C.J. On the evaluation of model performance in physical geography. In Spatial Statistics and Models; Springer: Dordrecht, The Netherlands, 1984; pp. 443–460. [Google Scholar]
Figure 1. Location of the Dez dam and the precipitation stations in Iran.
Figure 2. Average monthly precipitation in the catchment and mean monthly inflow to the Dez dam reservoir.
Figure 3. Average power generation in the Dez hydropower plant.
Figure 4. Architecture of adaptive neuro-fuzzy inference system in this study.
Figure 5. The flowchart of ANFIS-GWO modeling.
Figure 6. Observed and forecasted time series of hydropower generation using ANFIS.
Figure 7. Observed and forecasted time series of hydropower generation using GWO-ANFIS.
Figure 8. Box plot of errors for ANFIS and GWO-ANFIS modeling in training and testing phases. F and G refer to ANFIS and GWO-ANFIS, respectively.
Table 1. Datasets used in this study and their statistical characteristics.
Parameter | Mode | Mean | Min | S.D. | First Quartile | Median | Third Quartile | Max | Skew. | Kurtosis
Ht | 26,545.98 | 165,297.70 | 26,545.98 | 56,093.06 | 130,338.42 | 168,568.96 | 203,734.07 | 354,879.53 | −0.10 | 2.76
Qt | 63.28 | 651.71 | 63.28 | 615.23 | 209.87 | 430.22 | 842.41 | 3643.84 | 1.86 | 6.91
Pt | 0.00 | 42.82 | 0.00 | 46.91 | 0.47 | 27.97 | 71.81 | 238.47 | 1.10 | 3.69
Ht: hydroelectric energy (MWh) at month t; Qt: river inflow (m3/s) at month t; Pt: precipitation (mm) at month t; S.D.: standard deviation.
Table 2. Different input combinations used for ANFIS and GWO-ANFIS modeling.
Model | Input Parameters | Output
M1 | Qt | Ht
M2 | Qt, Pt | Ht
M3 | Qt-1, Qt | Ht
M4 | Qt-1, Qt, Ht-1 | Ht
M5 | Ht-1 | Ht
M6 | Qt-1, Qt, Pt, Ht-1 | Ht
M7 | Ht-2, Ht-1 | Ht
M8 | Qt, Ht-2, Ht-1 | Ht
M9 | Qt-1, Qt, Ht-2, Ht-1 | Ht
M10 | Ht-12, Ht-2, Ht-1 | Ht
M11 | Ht-12, Ht-1 | Ht
M12 | Ht-12 | Ht
M13 | Qt-4, Qt-3, Qt-2, Qt-1, Qt | Ht
M14 | Qt-3, Qt-2, Qt-1, Qt | Ht
M15 | Qt-2, Qt-1, Qt | Ht
M16 | Qt-3, Qt-2, Ht-12, Ht-1 | Ht
M17 | Qt-3, Ht-2, Ht-1 | Ht
M18 | Qt-4, Qt-3 | Ht
M19 | Pt-5, Pt-4, Qt-4, Qt-3, Qt-2 | Ht
M20 | Pt-5, Pt-4, Qt-3, Qt-2 | Ht
Table 3. Correlation coefficients between parameters.
 | Ht | Qt | Pt | Ht-1 | Ht-2 | Ht-3 | Ht-4 | Ht-5 | Ht-6 | Ht-12 | Qt-1 | Qt-2 | Qt-3 | Qt-4 | Qt-5 | Qt-6 | Pt-1 | Pt-2 | Pt-3 | Pt-4 | Pt-5
Ht | 1
Qt | 0.11 | 1
Pt | 0.05 | 0.13 | 1
Ht-1 | 0.66 | 0.01 | 0.1 | 1
Ht-2 | 0.34 | 0 | 0.07 | 0.66 | 1
Ht-3 | 0.15 | 0.02 | 0.02 | 0.34 | 0.66 | 1
Ht-4 | 0.06 | 0.02 | 0 | 0.16 | 0.35 | 0.66 | 1
Ht-5 | 0.02 | 0.01 | 0.02 | 0.06 | 0.16 | 0.35 | 0.67 | 1
Ht-6 | 0.01 | 0 | 0.08 | 0.02 | 0.06 | 0.16 | 0.35 | 0.66 | 1
Ht-12 | 0.18 | 0.01 | 0.06 | 0.14 | 0.08 | 0.04 | 0.02 | 0.01 | 0.01 | 1
Qt-1 | 0.27 | 0.46 | 0 | 0.11 | 0.01 | 0 | 0.02 | 0.02 | 0.01 | 0.09 | 1
Qt-2 | 0.31 | 0.14 | 0.09 | 0.27 | 0.11 | 0.01 | 0 | 0.02 | 0.02 | 0.15 | 0.46 | 1
Qt-3 | 0.31 | 0 | 0.19 | 0.31 | 0.26 | 0.11 | 0.01 | 0 | 0.02 | 0.17 | 0.14 | 0.46 | 1
Qt-4 | 0.24 | 0.02 | 0.24 | 0.31 | 0.31 | 0.26 | 0.11 | 0.01 | 0 | 0.13 | 0 | 0.14 | 0.46 | 1
Qt-5 | 0.11 | 0.11 | 0.18 | 0.24 | 0.31 | 0.31 | 0.26 | 0.11 | 0.01 | 0.05 | 0.03 | 0 | 0.14 | 0.46 | 1
Qt-6 | 0.02 | 0.17 | 0.04 | 0.11 | 0.24 | 0.31 | 0.31 | 0.26 | 0.11 | 0 | 0.11 | 0.03 | 0 | 0.14 | 0.46 | 1
Pt-1 | 0 | 0.45 | 0.23 | 0.06 | 0.1 | 0.07 | 0.02 | 0 | 0.02 | 0.01 | 0.13 | 0 | 0.09 | 0.19 | 0.24 | 0.18 | 1
Pt-2 | 0.04 | 0.39 | 0.06 | 0 | 0.06 | 0.1 | 0.07 | 0.02 | 0 | 0 | 0.45 | 0.13 | 0 | 0.09 | 0.19 | 0.24 | 0.23 | 1
Pt-3 | 0.11 | 0.29 | 0 | 0.04 | 0 | 0.06 | 0.1 | 0.07 | 0.02 | 0.03 | 0.39 | 0.45 | 0.13 | 0 | 0.09 | 0.19 | 0.06 | 0.23 | 1
Pt-4 | 0.2 | 0.14 | 0.06 | 0.1 | 0.04 | 0 | 0.06 | 0.1 | 0.07 | 0.08 | 0.28 | 0.39 | 0.45 | 0.13 | 0 | 0.09 | 0 | 0.06 | 0.23 | 1
Pt-5 | 0.21 | 0.02 | 0.2 | 0.2 | 0.1 | 0.04 | 0 | 0.06 | 0.1 | 0.12 | 0.14 | 0.28 | 0.39 | 0.45 | 0.13 | 0 | 0.06 | 0 | 0.06 | 0.23 | 1
Table 4. Results of ANFIS modeling in train and test phases.
Train | ANFIS1 | ANFIS2 | ANFIS3 | ANFIS4 | ANFIS5 | ANFIS6 | ANFIS7 | ANFIS8 | ANFIS9 | ANFIS10
RSQ | 0.08 | 0.12 | 0.17 | 0.68 | 0.62 | 0.51 | 0.63 | 0.69 | 0.66 | 0.67
RMSE | 179,477 | 179,882 | 179,305 | 29,260 | 32,165 | 38,263 | 31,488 | 29,768 | 30,164 | 29,991
MAE | 171,981 | 172,397 | 171,840 | 21,000 | 23,451 | 28,172 | 22,869 | 21,539 | 21,599 | 22,521
RAE | 4.10 | 4.11 | 4.11 | 0.50 | 0.56 | 0.67 | 0.55 | 0.51 | 0.52 | 0.54
d | 0.31 | 0.31 | 0.31 | 0.90 | 0.88 | 0.83 | 0.88 | 0.90 | 0.89 | 0.89
NSE | −11.02 | −11.08 | −11.01 | 0.68 | 0.61 | 0.45 | 0.63 | 0.67 | 0.66 | 0.67
CI | −3.41 | −3.42 | −3.41 | 0.61 | 0.54 | 0.38 | 0.56 | 0.60 | 0.59 | 0.60
Train | ANFIS11 | ANFIS12 | ANFIS13 | ANFIS14 | ANFIS15 | ANFIS16 | ANFIS17 | ANFIS18 | ANFIS19 | ANFIS20
RSQ | 0.66 | 0.12 | 0.22 | 0.26 | 0.23 | 0.64 | 0.65 | 0.35 | 0.22 | 0.32
RMSE | 30,579 | 51,150 | 179,856 | 179,647 | 179,438 | 31,253 | 30,884 | 179,512 | 180,465 | 180,470
MAE | 23,091 | 40,660 | 172,323 | 172,134 | 171,954 | 22,451 | 22,071 | 172,115 | 172,932 | 172,973
RAE | 0.55 | 0.97 | 4.12 | 4.11 | 4.11 | 0.54 | 0.53 | 4.12 | 4.13 | 4.13
d | 0.89 | 0.54 | 0.31 | 0.31 | 0.31 | 0.88 | 0.89 | 0.31 | 0.31 | 0.31
NSE | 0.66 | 0.04 | −11.06 | −11.02 | −11.01 | 0.64 | 0.64 | −11.02 | −11.11 | −11.11
CI | 0.58 | 0.02 | −3.41 | −3.41 | −3.41 | 0.56 | 0.57 | −3.41 | −3.43 | −3.43
Test | ANFIS1 | ANFIS2 | ANFIS3 | ANFIS4 | ANFIS5 | ANFIS6 | ANFIS7 | ANFIS8 | ANFIS9 | ANFIS10
RSQ | 0.12 | 0.15 | 0.21 | 0.73 | 0.70 | 0.64 | 0.67 | 0.72 | 0.69 | 0.66
RMSE | 155,453 | 155,740 | 155,021 | 31,265 | 33,984 | 36,654 | 35,367 | 32,951 | 33,508 | 35,003
MAE | 143,490 | 143,773 | 143,018 | 24,498 | 26,694 | 28,267 | 28,096 | 25,989 | 25,890 | 26,600
RAE | 2.85 | 2.85 | 2.83 | 0.49 | 0.53 | 0.56 | 0.56 | 0.51 | 0.51 | 0.53
d | 0.38 | 0.37 | 0.38 | 0.92 | 0.91 | 0.89 | 0.89 | 0.91 | 0.90 | 0.89
NSE | −5.67 | −5.69 | −5.60 | 0.73 | 0.68 | 0.63 | 0.66 | 0.70 | 0.69 | 0.66
CI | −2.13 | −2.13 | −2.11 | 0.67 | 0.62 | 0.56 | 0.59 | 0.64 | 0.62 | 0.59
Test | ANFIS11 | ANFIS12 | ANFIS13 | ANFIS14 | ANFIS15 | ANFIS16 | ANFIS17 | ANFIS18 | ANFIS19 | ANFIS20
RSQ | 0.68 | 0.15 | 0.51 | 0.42 | 0.31 | 0.68 | 0.69 | 0.45 | 0.39 | 0.37
RMSE | 34,307 | 59,395 | 154,896 | 154,929 | 154,933 | 34,157 | 33,456 | 154,793 | 155,475 | 155,535
MAE | 25,910 | 45,123 | 142,788 | 142,914 | 142,926 | 26,613 | 25,956 | 142,809 | 143,361 | 143,448
RAE | 0.51 | 0.89 | 2.82 | 2.83 | 2.83 | 0.53 | 0.51 | 2.82 | 2.83 | 2.83
d | 0.90 | 0.64 | 0.38 | 0.38 | 0.38 | 0.90 | 0.90 | 0.38 | 0.38 | 0.38
NSE | 0.68 | 0.03 | −5.56 | −5.60 | −5.60 | 0.68 | 0.69 | −5.55 | −5.61 | −5.61
CI | 0.61 | 0.02 | −2.10 | −2.11 | −2.11 | 0.61 | 0.62 | −2.10 | −2.11 | −2.11
Table 5. Results of GWO-ANFIS modeling in train and test phases. G-A is the abbreviation of GWO-ANFIS.
Train | G-A1 | G-A2 | G-A3 | G-A4 | G-A5 | G-A6 | G-A7 | G-A8 | G-A9 | G-A10
RSQ | 0.09 | 0.31 | 0.28 | 0.73 | 0.63 | 0.63 | 0.65 | 0.70 | 0.72 | 0.65
RMSE | 49,503 | 42,889 | 43,809 | 26,857 | 31,477 | 32,559 | 30,770 | 28,414 | 27,482 | 31,016
MAE | 40,463 | 33,764 | 35,873 | 19,773 | 22,984 | 25,600 | 22,453 | 20,854 | 20,365 | 22,675
RAE | 0.97 | 0.81 | 0.86 | 0.47 | 0.55 | 0.61 | 0.54 | 0.50 | 0.49 | 0.54
d | 0.39 | 0.68 | 0.66 | 0.92 | 0.88 | 0.84 | 0.88 | 0.91 | 0.91 | 0.88
NSE | 0.09 | 0.31 | 0.28 | 0.73 | 0.63 | 0.60 | 0.65 | 0.70 | 0.72 | 0.65
CI | 0.03 | 0.21 | 0.19 | 0.67 | 0.55 | 0.51 | 0.57 | 0.63 | 0.65 | 0.57
Train | G-A11 | G-A12 | G-A13 | G-A14 | G-A15 | G-A16 | G-A17 | G-A18 | G-A19 | G-A20
RSQ | 0.64 | 0.18 | 0.48 | 0.42 | 0.33 | 0.68 | 0.61 | 0.32 | 0.41 | 0.37
RMSE | 31,073 | 47,263 | 37,224 | 39,510 | 42,411 | 29,521 | 33,028 | 42,668 | 39,952 | 41,037
MAE | 23,219 | 37,781 | 29,617 | 30,810 | 33,846 | 21,076 | 23,585 | 32,684 | 30,033 | 31,964
RAE | 0.55 | 0.90 | 0.71 | 0.74 | 0.81 | 0.50 | 0.56 | 0.78 | 0.72 | 0.76
d | 0.88 | 0.53 | 0.80 | 0.75 | 0.68 | 0.90 | 0.88 | 0.68 | 0.75 | 0.73
NSE | 0.64 | 0.18 | 0.48 | 0.42 | 0.33 | 0.68 | 0.59 | 0.32 | 0.41 | 0.37
CI | 0.57 | 0.09 | 0.39 | 0.32 | 0.22 | 0.61 | 0.52 | 0.22 | 0.30 | 0.27
Test | G-A1 | G-A2 | G-A3 | G-A4 | G-A5 | G-A6 | G-A7 | G-A8 | G-A9 | G-A10
RSQ | 0.11 | 0.21 | 0.26 | 0.79 | 0.69 | 0.71 | 0.70 | 0.75 | 0.76 | 0.70
RMSE | 62,420 | 58,695 | 56,473 | 28,402 | 34,128 | 40,849 | 34,151 | 30,535 | 29,928 | 34,293
MAE | 50,699 | 45,595 | 46,289 | 21,439 | 27,079 | 32,838 | 27,480 | 24,131 | 23,811 | 27,566
RAE | 1.01 | 0.90 | 0.92 | 0.42 | 0.54 | 0.65 | 0.54 | 0.48 | 0.47 | 0.55
d | 0.44 | 0.58 | 0.54 | 0.93 | 0.89 | 0.79 | 0.89 | 0.92 | 0.92 | 0.89
NSE | −0.07 | 0.05 | 0.12 | 0.78 | 0.68 | 0.54 | 0.68 | 0.74 | 0.75 | 0.68
CI | −0.03 | 0.03 | 0.07 | 0.72 | 0.61 | 0.43 | 0.61 | 0.68 | 0.69 | 0.60
Test | G-A11 | G-A12 | G-A13 | G-A14 | G-A15 | G-A16 | G-A17 | G-A18 | G-A19 | G-A20
RSQ | 0.69 | 0.13 | 0.51 | 0.48 | 0.35 | 0.73 | 0.65 | 0.35 | 0.45 | 0.43
RMSE | 33,816 | 59,590 | 47,204 | 49,583 | 54,710 | 31,377 | 36,526 | 54,756 | 48,849 | 52,456
MAE | 26,205 | 46,902 | 37,185 | 38,988 | 43,489 | 24,547 | 28,006 | 42,243 | 37,949 | 41,320
RAE | 0.52 | 0.93 | 0.73 | 0.77 | 0.86 | 0.49 | 0.55 | 0.83 | 0.75 | 0.82
d | 0.90 | 0.51 | 0.68 | 0.64 | 0.56 | 0.92 | 0.89 | 0.55 | 0.68 | 0.61
NSE | 0.69 | 0.02 | 0.39 | 0.32 | 0.18 | 0.73 | 0.63 | 0.18 | 0.35 | 0.25
CI | 0.61 | 0.01 | 0.27 | 0.21 | 0.10 | 0.67 | 0.57 | 0.10 | 0.24 | 0.15
