Article

Runoff Probability Prediction Model Based on Natural Gradient Boosting with Tree-Structured Parzen Estimator Optimization

School of Civil and Hydraulic Engineering, Huazhong University of Science and Technology, Wuhan 430074, China
*
Author to whom correspondence should be addressed.
Water 2022, 14(4), 545; https://doi.org/10.3390/w14040545
Submission received: 10 January 2022 / Revised: 4 February 2022 / Accepted: 10 February 2022 / Published: 12 February 2022
(This article belongs to the Section Hydrology)

Abstract

Accurate and reliable runoff prediction is critical for water resource planning and management. Deterministic runoff prediction methods cannot meet the needs of risk analysis and decision making. In this study, a runoff probability prediction model based on natural gradient boosting (NGboost) with tree-structured Parzen estimator (TPE) optimization is proposed. The model obtains the probability distribution of the predicted runoff. The TPE algorithm was used to optimize the model hyperparameters and improve the predictions. The model was applied to the prediction of runoff on the monthly, weekly and daily scales at the Yichang and Pingshan stations in the upper Yangtze River. We also tested the prediction effectiveness of models using exponential, normal and lognormal distributions for different flow characteristics and time scales. The results show that, in terms of deterministic prediction, the proposed model improved on all indicators compared to the benchmark models. The root mean square error of the monthly runoff prediction was reduced by 9% on average, and that of the daily prediction by 7%. In probabilistic prediction, the proposed model provides reliable probabilistic predictions on the weekly and daily scales.

1. Introduction

Prevention of natural disasters [1,2] and the utilization of water resources [3,4,5] have long been hot topics of research. As one of the most important forms of water resources, runoff, and its prediction, is a basis for research across the field of water resources [6]. Especially in areas where precipitation data are scarce, obtaining reliable runoff predictions from limited data is of particular concern.
In order to obtain accurate runoff predictions, many scholars have conducted extensive research. Current runoff forecasting methods can be divided into two main types. The first starts from the physical mechanism of runoff formation and establishes hydrological models [7], such as the TANK model [8] and the Xinanjiang model [9]. This type of method has a clear physical meaning, but the models are complex and require a large amount of relevant data, which makes them difficult to apply in data-scarce areas. The second analyzes historical data to predict runoff. With the dramatic increase in computer performance, a large number of machine learning methods have been introduced to improve the accuracy and reliability of runoff prediction [10,11]. Among data-driven methods, in addition to the earlier regression analysis methods [14], numerous neural network models [12] and decision tree algorithms [13] have been applied to runoff prediction. In particular, extreme gradient boosting (XGboost) [15], a representative gradient boosting tree method, is widely used for prediction across disciplines and has performed well in data science competitions and in industry.
XGboost is a tree-based boosting algorithm that trains multiple weak learners and combines them linearly into a strong learner by changing the weight distribution of the training data. In addition, the method adds a regularization term to the loss function to prevent overfitting. When deriving the loss function, the method uses a second-order Taylor expansion, so the loss is calculated more accurately. This method has been applied to runoff prediction with satisfactory results [16].
To date, the predictions of all methods have fallen short of fully agreeing with the actual data. Since deterministic predictions cannot be completely reliable, the risk associated with a prediction must be quantified. In contrast to deterministic prediction, probabilistic prediction provides an interval of forecast values or a probability distribution over them. There are various methods for the probabilistic prediction of runoff. The Bayesian probabilistic hydrological forecasting method uses the results of deterministic prediction models to generate probabilistic predictions [17,18,19]. The error probability distribution method generates probabilistic predictions by assuming a distribution of model errors [20]. The quantile regression method [21], on the other hand, considers only the relationship between the quantile and the impact factor. The NGboost method uses natural gradients to achieve probabilistic prediction [22]. Usually, more than one parameter of the probability distribution must be predicted. Conventionally, a single leaf node determines all parameters simultaneously, and different parameters construct mutually independent groups of decision trees [23]; this approach can cause the optimization to fall into suboptimal solutions. Compared with the conventional gradient, the natural gradient is "pre-scaled" by the inverse of the Riemannian metric, which coordinates the relationship between multiple parameters and allows a global line search for the multi-parameter prediction of probability distributions, thus making the gradient boosting algorithm capable of probabilistic prediction. The method directly obtains the probability distribution of the forecast values, avoiding the complicated process of obtaining a deterministic prediction and then converting it to a probabilistic one by other methods.
The NGboost method has been applied in various fields of prediction problems because of its accuracy and direct access to the probability distribution of the predicted values [24]. In the energy field, the method has been applied to the probabilistic prediction of wind power [25]. In biomedical sciences, the method is applied to the probabilistic prediction of treatment frequency [26]. In the field of meteorology, the method is applied to probabilistic prediction of temperature [27]. In environmental sciences, the method is applied to the probabilistic prediction of CO2 emissions [28]. The application of the NGboost method in different fields shows that the method has broad applications in probabilistic prediction and can be applied in the probabilistic prediction of runoff.
Before building a prediction model with the NGboost method, several hyperparameters must be set. The selection of these hyperparameters significantly affects the accuracy and reliability of the model predictions. The main methods for selecting hyperparameters are grid search, random search, intelligent algorithms and Bayesian methods. Grid search is an enumeration method that evaluates all combinations within a given range of hyperparameter values; it requires substantial computational resources and is therefore difficult to use when there are many parameters or a large range of values [29]. Random search tries random combinations of parameters within the hyperparameter ranges; it is fast, but good results are not guaranteed. Intelligent algorithms are complex and computationally intensive [30]. Bayesian methods, on the other hand, obtain a better combination of hyperparameters in a smaller number of trials by tracking the results of each evaluation [31]. The TPE algorithm [32], a Bayesian method, has performed well in a variety of model hyperparameter optimization applications [33], such as predicting the energy consumption of HVAC systems [34] and predicting steam generation data from nuclear power plants [35]. In this study, the TPE algorithm was used for the hyperparameter optimization of the model to improve the prediction [36].

2. Materials and Methods

2.1. Study Area and Data

Two stations in the upper Yangtze River were selected for the study. The longest river in China and the third longest in the world, the Yangtze River has a basin area of 1.8 million km², accounting for about one-fifth of China's total land area. The Yichang station marks the boundary between the upper and middle reaches of the Yangtze River, and the Pingshan station is the main control station for the upper reaches. The two stations have different runoff characteristics. The runoff data used in the study are monthly, weekly and daily runoff data from January 1940 to April 2021. The first 80% of the data forms the training set and the remaining 20% the test set. Figure 1 shows the location of the Yichang and Pingshan stations in the Yangtze River basin. Table 1 and Table 2 show the basic statistics of the runoff data at each scale for the Yichang and Pingshan stations, respectively.
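As a minimal illustration of this chronological split (not the authors' code; the series below is a placeholder, not actual runoff data), the partition can be sketched as:

```python
def chronological_split(series, train_frac=0.8):
    """Split a time-ordered series without shuffling: the earlier part
    trains the model, the later part tests it."""
    cut = int(len(series) * train_frac)
    return series[:cut], series[cut:]

monthly_runoff = list(range(100))  # stand-in for the observed monthly series
train_set, test_set = chronological_split(monthly_runoff)
```

Keeping the split chronological matters for runoff series: shuffling would leak future information into the training set.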

2.2. Methods

2.2.1. Natural Gradient Boosting (NGboost)

Natural Gradient Boosting (NGBoost) is a method proposed by researchers at Stanford University that uses gradient boosting for probabilistic prediction. The method is based on the gradient boosting decision tree, and it uses natural gradients instead of ordinary gradients to overcome the influence of the parametrization on convergence [37,38]. Each base learner of the model fits the natural gradient; after shrinkage and additive combination, an ensemble model is obtained that fits the parameters of the conditional distribution, enabling probabilistic prediction.
This study uses the maximum likelihood estimation (MLE) scoring rule [39,40], i.e., the logarithmic score, as the loss function during model training:
$$\mathcal{L}(\theta, y) = -\log P_\theta(y),$$
where θ is the parameter of the distribution. We assume a proper scoring rule S, a predicted probability distribution P, the true probability distribution Q and observations y. The scores S(P, y), S(Q, y) should satisfy:
$$\mathbb{E}_{y \sim Q}[S(Q, y)] \le \mathbb{E}_{y \sim Q}[S(P, y)] \quad \forall P, Q.$$
The excess of the right-hand side of the inequality over the left-hand side can be interpreted as a measure of the difference between distributions Q and P, i.e., the divergence induced by this scoring rule:
$$D_S(Q \,\|\, P) = \mathbb{E}_{y \sim Q}[S(P, y)] - \mathbb{E}_{y \sim Q}[S(Q, y)].$$
For the MLE scoring rule, the induced divergence is the Kullback–Leibler divergence ($D_{KL}$):
$$D_{\mathcal{L}}(Q \,\|\, P) = \mathbb{E}_{y \sim Q}[\mathcal{L}(P, y)] - \mathbb{E}_{y \sim Q}[\mathcal{L}(Q, y)] = \mathbb{E}_{y \sim Q}\!\left[\log \frac{Q(y)}{P(y)}\right] \equiv D_{KL}(Q \,\|\, P).$$
The natural gradient was originally defined for the statistical manifold with the distance measure induced by $D_{KL}$. Duan et al. provide a more general treatment applicable to the divergences of other scoring rules. The generalized natural gradient is the direction of steepest ascent in Riemannian space, which is invariant to parametrization and is defined as:
$$\tilde{\nabla} \mathcal{L}(\theta, y) \propto \lim_{\epsilon \to 0} \operatorname*{arg\,max}_{d \,:\, D_{\mathcal{L}}(P_\theta \| P_{\theta + d}) = \epsilon} \mathcal{L}(\theta + d, y).$$
By solving the corresponding optimization problem, the natural gradient is obtained as:
$$\tilde{\nabla} \mathcal{L}(\theta, y) \propto \mathcal{I}(\theta)^{-1} \nabla \mathcal{L}(\theta, y),$$
where $\mathcal{I}(\theta)$ is the Fisher information carried by an observation of $P_\theta$, defined as:
$$\mathcal{I}(\theta) = \mathbb{E}_{y \sim P_\theta}\!\left[\nabla_\theta \mathcal{L}(\theta, y)\, \nabla_\theta \mathcal{L}(\theta, y)^{\mathsf{T}}\right] = \mathbb{E}_{y \sim P_\theta}\!\left[\nabla_\theta^2 \mathcal{L}(\theta, y)\right].$$
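As a hand-worked illustration of how the Fisher information "pre-scales" the ordinary gradient, the following sketch computes the natural gradient of the normal-distribution negative log-likelihood with respect to the parameters (μ, log σ), for which the Fisher information is diagonal, diag(1/σ², 2). This is our own worked example, not the NGBoost implementation:

```python
import math

def nll_grad_normal(y, mu, log_sigma):
    """Ordinary gradient of the negative log-likelihood of N(mu, sigma^2)
    with respect to (mu, log_sigma)."""
    sigma2 = math.exp(2 * log_sigma)
    d_mu = -(y - mu) / sigma2
    d_log_sigma = 1.0 - (y - mu) ** 2 / sigma2
    return d_mu, d_log_sigma

def natural_grad_normal(y, mu, log_sigma):
    """Natural gradient: the ordinary gradient pre-scaled by the inverse
    Fisher information, diag(sigma^2, 1/2) in this parametrization."""
    d_mu, d_log_sigma = nll_grad_normal(y, mu, log_sigma)
    sigma2 = math.exp(2 * log_sigma)
    return d_mu * sigma2, d_log_sigma * 0.5
```

Note that the natural gradient in μ reduces to −(y − μ) regardless of σ, which is exactly the scale-invariance that makes multi-parameter boosting steps well conditioned.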
Gradient boosting is a supervised learning method in which several base learners are combined additively. The model is learned sequentially: the next base learner fits the residuals of the current ensemble's training target, and its output is scaled by the learning rate and added to the ensemble. The NGboost method improves on this scheme. For multi-parameter estimation, splitting on the gradient of one parameter may be suboptimal relative to the gradient of another. In contrast to the conventional gradient, the natural gradient is "pre-scaled" by the inverse of the Riemannian metric to reconcile the multiple parameters, thus allowing the algorithm to perform multi-parameter boosting. The flow chart of the NGBoost method for runoff probability prediction is shown in Figure 2.
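The sequential fit-residuals-scale-and-add loop can be sketched in a few lines. For brevity the base learner below is the weakest possible one, a constant equal to the mean residual; real NGBoost fits decision trees to natural gradients, and all names here are illustrative:

```python
def boost(targets, n_learners=50, learning_rate=0.1):
    """Minimal additive boosting loop: each round fits a base learner to the
    current residuals, scales its output and adds it to the ensemble."""
    prediction = 0.0
    for _ in range(n_learners):
        residuals = [t - prediction for t in targets]
        base_output = sum(residuals) / len(residuals)  # constant base learner
        prediction += learning_rate * base_output       # shrink, then add
    return prediction
```

With enough rounds the ensemble prediction converges geometrically toward the target mean, at a rate governed by the learning rate.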

2.2.2. Probability Distribution

A common probability distribution can usually be expressed by a probability density function whose parameters uniquely determine the distribution. The proposed model obtains the probability distribution of the predicted values by predicting the parameters of this function. Different probability distributions have different parameters, so the type of probability distribution must be chosen before building the prediction model. Three common forms of probability distribution, namely normal, lognormal and exponential, are tested in this study to investigate the prediction effect of each distribution function for different runoff characteristics and time scales. The probability density function of the normal distribution is determined by two parameters: the mean μ and the standard deviation σ. The probability density function of the lognormal distribution is determined by two parameters: the standard deviation σ and the shape parameter s. The probability density function of the exponential distribution is determined by one parameter: the standard deviation σ.
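The three density functions can be written out directly. The sketch below follows a scipy-style parametrization (shape s and scale for the lognormal, scale for the exponential), which is our assumption for illustration rather than the paper's exact notation:

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2)."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def lognormal_pdf(x, s, scale):
    """Lognormal density with shape s and scale parameter (x > 0)."""
    return math.exp(-(math.log(x / scale) ** 2) / (2 * s ** 2)) / (x * s * math.sqrt(2 * math.pi))

def exponential_pdf(x, scale):
    """Exponential density with the given scale (x >= 0)."""
    return math.exp(-x / scale) / scale
```

In each case the prediction model only needs to emit the one or two parameters; the full forecast density then follows from these closed forms.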

2.2.3. Tree-Structured Parzen Estimator (TPE)

The tree-structured Parzen estimator (TPE) is a Bayesian method for tuning the hyperparameters of models. Gaussian-process-based approaches model p(y | θ) directly, whereas the TPE strategy models p(θ | y) and p(y) to obtain p(y | θ) indirectly. The improvement in the model after choosing a new set of hyperparameters is:
$$I(\theta) = \max(y^* - y(\theta),\, 0),$$
where θ is the hyperparameter vector of the model, y is the loss of the model and y* is a threshold value of the loss selected from the data. The expected improvement is:
$$EI_{y^*}(\theta) = \int_{-\infty}^{y^*} (y^* - y)\, p(y \mid \theta)\, dy = \int_{-\infty}^{y^*} (y^* - y)\, \frac{p(\theta \mid y)\, p(y)}{p(\theta)}\, dy,$$
where p(θ | y) is defined as:
$$p(\theta \mid y) = \begin{cases} l(\theta), & y < y^* \\ g(\theta), & y \ge y^* \end{cases},$$
where l(θ) is the probability density formed by the set of hyperparameters whose loss values are less than y*, and g(θ) is the probability density formed by the remaining set. By constructing
$$\gamma = p(y < y^*)$$
and
$$p(\theta) = \int p(\theta \mid y)\, p(y)\, dy = \gamma\, l(\theta) + (1 - \gamma)\, g(\theta),$$
we can obtain:
$$\int_{-\infty}^{y^*} (y^* - y)\, p(\theta \mid y)\, p(y)\, dy = l(\theta) \int_{-\infty}^{y^*} (y^* - y)\, p(y)\, dy = \gamma\, y^*\, l(\theta) - l(\theta) \int_{-\infty}^{y^*} y\, p(y)\, dy.$$
Furthermore, it follows that:
$$EI_{y^*}(\theta) = \frac{\gamma\, y^*\, l(\theta) - l(\theta) \int_{-\infty}^{y^*} y\, p(y)\, dy}{\gamma\, l(\theta) + (1 - \gamma)\, g(\theta)} \propto \left(\gamma + \frac{g(\theta)}{l(\theta)}\,(1 - \gamma)\right)^{-1}.$$
From Equation (14), it can be concluded that maximizing the expected improvement requires hyperparameters with a high probability under l(θ) and a probability under g(θ) that is as low as possible. The TPE algorithm therefore evaluates candidate hyperparameters by the ratio g(θ)/l(θ), smaller being better, to find the hyperparameters θ* with maximum EI. The flow chart of this algorithm is shown in Figure 3.
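The core of this selection step, splitting past trials at the γ-quantile of losses and ranking candidates by the two Parzen densities, can be sketched as follows. The fixed kernel bandwidth, γ = 0.25 and all function names are illustrative simplifications; real TPE implementations use adaptive Parzen estimators and tree-structured search spaces:

```python
import math

def parzen_density(x, samples, bandwidth=1.0):
    """Parzen-window (Gaussian kernel) density estimate from past trials."""
    def kernel(u):
        return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)
    return sum(kernel((x - s) / bandwidth) for s in samples) / (len(samples) * bandwidth)

def tpe_pick(candidates, trials, gamma=0.25):
    """Split past (theta, loss) trials at the gamma-quantile of losses, then
    pick the candidate maximizing l(theta)/g(theta), i.e. minimizing g/l."""
    trials = sorted(trials, key=lambda t: t[1])
    cut = max(1, int(gamma * len(trials)))
    good = [t[0] for t in trials[:cut]]   # losses below the threshold y*
    rest = [t[0] for t in trials[cut:]]   # losses at or above y*
    def score(x):
        return parzen_density(x, good) / (parzen_density(x, rest) + 1e-12)
    return max(candidates, key=score)
```

Given a few trials whose good losses cluster near one region of the hyperparameter axis, the sketch steers the next evaluation toward that region.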

2.2.4. Performance Measures

To test the performance of the proposed model, quantitative metrics are used to evaluate the predictions. For deterministic forecast results, three metrics are used: root mean square error (RMSE), mean relative error (MRE) and coefficient of determination (R²). They are calculated as follows.
$$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left(y_{p,i} - y_{o,i}\right)^2}.$$
$$\mathrm{MRE} = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{y_{p,i} - y_{o,i}}{y_{o,i}} \right|.$$
$$R^2 = 1 - \frac{\sum_{i=1}^{n} \left(y_{o,i} - y_{p,i}\right)^2}{\sum_{i=1}^{n} \left(y_{o,i} - \bar{y}_o\right)^2},$$
where n is the number of observations, $y_{o,i}$ is the observed runoff, $\bar{y}_o$ is the mean value of the observed runoff series and $y_{p,i}$ is the predicted runoff. To evaluate the prediction of runoff series with high autocorrelation, especially daily runoff series, the inertial root mean square error (IRMSE) [41,42] is introduced.
$$\mathrm{IRMSE} = \frac{\mathrm{RMSE}}{\sigma_\Delta},$$
where
$$\sigma_\Delta = \sqrt{\frac{1}{n}\sum_{i=1}^{n} \left(\Delta_i - \bar{\Delta}\right)^2}, \qquad \Delta_i = y_{o,i} - y_{o,i-1}, \qquad \bar{\Delta} = \frac{1}{n}\sum_{i=1}^{n} \Delta_i,$$
where $\Delta_i$ is the first-order difference of the observed series, $\bar{\Delta}$ is its mean value and $\sigma_\Delta$ is its standard deviation.
The three metrics used to evaluate the probabilistic forecast results are interval coverage probability (ICP), interval normalized averaged width (INAW) and coverage width-based criterion (CWC). The metrics for the probabilistic forecast test are calculated as follows.
$$\mathrm{ICP} = \frac{1}{n} \sum_{i=1}^{n} \varepsilon_i, \qquad \varepsilon_i = \begin{cases} 1, & y_{o,i} \in [y_{low}, y_{up}] \\ 0, & y_{o,i} \notin [y_{low}, y_{up}] \end{cases}.$$
$$\mathrm{INAW} = \frac{1}{nR} \sum_{i=1}^{n} \left(y_{up} - y_{low}\right).$$
$$\mathrm{CWC} = \mathrm{INAW}\left(1 + \gamma\, e^{-\eta\,(\mathrm{ICP} - \mu)}\right), \quad \eta > 0, \quad \gamma = \begin{cases} 0, & \mathrm{ICP} \ge \mu \\ 1, & \mathrm{ICP} < \mu \end{cases},$$
where $\varepsilon_i$ is a Boolean variable with a value of 1 when the observation falls within the prediction interval and 0 otherwise. $y_{up}$ and $y_{low}$ are the upper and lower bounds of the interval at the specified confidence level, respectively. $R = \max y_o - \min y_o$ denotes the range of the test set, used to normalize the evaluation metric. η and μ are parameters that determine the degree of the penalty: μ is determined by the confidence level, and η is a parameter greater than 0, taken as 1 in this study. ICP indicates the degree of coverage of the observed values by the prediction interval and ranges from 0 to 1; values closer to 1 indicate better coverage. However, when the prediction interval is too wide, it may completely cover the observed values but lose the value of probabilistic prediction. INAW indicates the relative width of the prediction interval and takes a value between 0 and 1; values closer to 0 indicate a narrower prediction interval and a better prediction. CWC is a comprehensive index based on ICP and INAW, which reflects the overall quality of the probabilistic prediction.
To measure the prediction effectiveness of the models, both Support Vector Machine (SVM) and Extreme Gradient Boosting (XGboost) are used to train on the same data set and validate their prediction effectiveness on the test set. These two models are introduced as benchmark models for comparative evaluation of model prediction.

2.2.5. Runoff Prediction Model

In this study, the NGboost method is used to establish the runoff prediction model, which yields both deterministic and probabilistic prediction results. The input of the model is the historical runoff series. The number of timesteps of previous observations in the input is treated as one of the hyperparameters of the model, along with four other hyperparameters of the NGBoost method: the depth of the decision tree base learner, the number of base learners, the learning rate and the percentage of subsamples used in model training. The hyperparameters are determined by TPE optimization. The output of the model is the set of parameters of the probability density function of the assumed distribution: the mean μ and standard deviation σ for the normal distribution, the standard deviation σ and shape parameter s for the lognormal distribution, and the standard deviation σ for the exponential distribution. The model for each time scale and each probability distribution is optimized over 100 rounds of TPE iterations, a limit set by the available computational resources. The best-performing combination of hyperparameters is then used to build the runoff prediction model for that case. The proposed model is a single-step prediction model that predicts the probability distribution of runoff at the next timestep. The prediction effects are tested for the normal, lognormal and exponential distributions, providing guidance for developing runoff prediction models at different time scales and for different flow characteristics. Figure 4 shows the flow chart of the proposed model.
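The lag-based input construction described above, where the number of previous timesteps is itself a hyperparameter, can be sketched as follows; the function name and the toy series are illustrative:

```python
def make_lag_samples(series, n_lags):
    """Build (input, target) pairs for single-step prediction: each input is
    the previous n_lags observations, the target is the next value."""
    X, y = [], []
    for i in range(n_lags, len(series)):
        X.append(series[i - n_lags:i])
        y.append(series[i])
    return X, y

X, y = make_lag_samples([1, 2, 3, 4, 5], n_lags=2)
```

Because `n_lags` changes the feature dimension, it must be re-chosen jointly with the other hyperparameters in each TPE trial rather than fixed in advance.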

3. Results

3.1. Hyperparameter Tuning

The prediction model uses the TPE algorithm to select the model hyperparameters. The hyperparameters of each model are the best-performing combination after 100 rounds of TPE iterations. The optimal hyperparameter combinations for the three distributions at each time scale at each station are shown in Table 3.
Runoff probability prediction models are developed separately according to each optimal combination of hyperparameters. Models are trained in the corresponding training sets, and prediction is validated in the test set.

3.2. Deterministic Predictions

Figure 5 and Figure 6 show the predicted and actual runoff processes of each model on the test set at the monthly, weekly and daily scales for the Yichang and Pingshan stations. The trends of the predicted and actual data of each model agree. The predictions improve in order from the monthly to the weekly to the daily scale. This is consistent with the observation that the correlation between adjacent runoff values increases from the monthly to the weekly to the daily scale. It can also be seen that the model prediction errors at the extreme values are large at every scale.
As seen in Table 4, the NGboost model shows a significant improvement in all prediction effectiveness metrics compared to the two benchmark models, while the metrics of the three NGboost models using different distributions are close to one another. In the monthly and weekly scale runoff prediction tests, the NGboost model with the exponential distribution gives the best predictions. According to the results of the monthly runoff prediction, the NGboost-Exponential model reduced RMSE by 11.03%, reduced MRE by 14.23% and improved R² by 8.61% compared to the XGboost model. On the weekly scale, RMSE and MRE decreased by 17.22% and 22.91%, respectively, and R² improved by 10.01%. In the daily runoff prediction tests, the NGboost model using the lognormal distribution gives the best results: compared to the XGboost benchmark model, the NGboost-LogNormal model shows a 9.46% reduction in RMSE, a 13.56% reduction in MRE and a 1.2% improvement in R². For daily runoff with high autocorrelation, the IRMSE metrics of the three NGBoost models are around 0.6, and the predictions are fair.
Table 5 shows the statistical results on the test data set at each scale for the Pingshan station. As at the Yichang station, the NGboost model significantly improved each metric compared to the two benchmark models, while the metrics of the three NGboost models using different distributions are close to one another. In the monthly and weekly scale runoff prediction tests, the NGboost model using the lognormal distribution gives the best predictions. According to the results of the monthly runoff prediction, the NGboost-LogNormal model shows an 8.96% decrease in RMSE, a 6.43% decrease in MRE and a 4.48% increase in R² compared to the XGboost baseline model. On the weekly scale, RMSE and MRE were reduced by 12.16% and 16.26%, respectively, and R² was improved by 3.95%. In the daily runoff prediction tests, the NGboost model using the normal distribution gives the best results: compared to the XGboost model, the NGboost-Normal model shows a 7.05% reduction in RMSE, a 6.24% reduction in MRE and a 0.35% improvement in R². Regarding the daily runoff with high autocorrelation, the IRMSE metrics of the three NGBoost models are about 0.59, and the predictions are fair.

3.3. Probabilistic Predictions

The data in Table 6 are the metrics of the probabilistic prediction results for the three models at a confidence level of 90%. As seen from the metrics at each time scale, although the ICP of the exponential distribution model is 1, its prediction interval is too wide, and its INAW and CWC metrics far exceed acceptable thresholds, indicating poor prediction. In the monthly runoff probability prediction test, the NGboost model with the lognormal distribution performs better, with a 50% improvement in ICP and a 1.14% decrease in CWC compared to the model with the normal distribution. On the weekly and daily scales, the predictions of the normal distribution model are better than those of the lognormal distribution model. On the weekly scale, compared to the NGboost-LogNormal model, the ICP of the NGboost-Normal model improved by 5.2% and the CWC decreased by 47.67%. On the daily scale, ICP improved by 2.51% and CWC decreased by 0.99% compared to the NGboost-LogNormal model.
Table 7, similar to Table 6, shows the statistics of the runoff probability prediction result metrics for each model in the Pingshan station test data set on different scales. In the monthly runoff probability prediction tests, both models with normal and lognormal distributions had ICP metrics that are significantly lower than the confidence level, and both predictions were poor. As in the case of the Yichang station, the prediction of the model with normal distribution is better than that of the model with lognormal distribution at weekly and daily scales. On the weekly scale, compared to the NGboost-LogNormal model, the ICP of the NGboost-Normal model improved by 10.47% and the CWC decreased by 2.73%. On the daily scale, INAW and CWC decreased by 7.05% and 5.7%, respectively, compared to the NGboost-LogNormal model.
To further compare the probabilistic prediction performance of each model on the test data, the 80%, 85% and 90% confidence interval runoff prediction processes are plotted from the probabilistic prediction results of the three NGboost models with normal, lognormal and exponential distributions on the monthly, weekly and daily scales, using the first year of the test data set (2005) at the Yichang station as an example. Figure 7, Figure 8 and Figure 9 show the runoff prediction processes on the monthly, weekly and daily scales, respectively. As the figures show, the prediction interval of the model using the exponential distribution is too wide. The prediction intervals of the models with normal and lognormal distributions cover the test data well, especially on the weekly and daily time scales. Large prediction errors occur mainly in areas with large fluctuations in the runoff process and at extreme points.
Overall, the evaluation metrics of the NGboost model prediction results on the test data set are significantly improved compared to the benchmark model. The optimization of model hyperparameters and the selection of probability distributions for different runoff characteristics and time scales are further discussed in the following sections of the paper.

4. Discussion

The above calculation results show that the different runoff characteristics of the stations have significant effects on the optimization of the model hyperparameters and the selection of the probability distribution.
Figure 10a shows the hyperparameter tuning results of the NGboost model with normal distribution under the monthly runoff at the Yichang station. As can be noticed from the figure, there are more dark lines when the number of timesteps of previous observations in the input is 10 or larger, especially 12, indicating that the models with a greater number of timesteps of previous observations perform better. The depth of the decision tree base learner has less effect on the model, and the model with a depth of 3 is slightly better. The models with the number of base learners of around 900 performed better. Models with learning rates between 0.02 and 0.04 performed better. The percentages of subsamples in the model training of the more effective models were mainly distributed at 50%, 60% and 70%.
Figure 10b plots the hyperparameter tuning results of the NGboost model with lognormal distribution under the weekly runoff of the Pingshan station. In contrast to the Yichang station, the depth of the decision tree base learner has a significant effect on the model in this condition, and the model with a depth of 2 performs better. The percentages of subsamples used in the model training of the more effective models were mainly distributed at 70%, 80% and 90%.
According to the tuning results of the hyperparameters of each model, there are some recommendations for the selection of the hyperparameters of the NGboost model for runoff prediction in similar runoff situations.
  • In the range of values allowed by the experimental arithmetic, the greater the number of timesteps of previous observations in the input, the better the performance;
  • The depth of the decision tree base learner needs to be optimally selected for different situations;
  • The number of base learners can be selected between 800 and 1200;
  • A learning rate between 0.001 and 0.03 performs better; and
  • The percentage of subsamples used in the model training has little effect on the prediction of the model.
The two stations selected for the study have different runoff characteristics. The average flow at the Yichang station exceeds 13,000 m³/s, with large flows but small variance and a relatively stable runoff process. In terms of deterministic prediction, the NGboost models using different distributions produce close results, with the exponential and lognormal distributions predicting slightly better. The reason is that the runoff process at the Yichang station is stable and less random, in accordance with the characteristics of a skewed distribution. The average flow at the Pingshan station is 4500 m³/s, with small flows but high variance and fluctuations in the runoff process. Owing to the greater randomness of its runoff, the model with the normal distribution predicts better.
The study also developed prediction models for each of the three different time scales of monthly, weekly and daily runoff prediction problems. For deterministic prediction, the model with exponential distribution predicts the best results on the monthly and weekly scales at the Yichang station. The model with lognormal distribution predicts better results on the daily scale. For the Pingshan station, the model with lognormal distribution predicted the best results on the monthly and weekly scales, while the model with normal distribution performed better on the daily scale. In terms of probabilistic prediction, all models perform poorly for the probabilistic prediction of runoff on the monthly scale due to the poor correlation of data on the adjacent time periods on the monthly scale, and there is room for improvement. Of the three time scales, the model predicts the best on weekly runoff data. On the monthly scale, the model with a lognormal distribution predicts the best. On the weekly and daily scales, the model with a normal distribution has the best prediction. As the time scale becomes smaller, the randomness of runoff becomes stronger, so more predicted runoff data conform to the characteristics of normal distribution.

5. Conclusions

Accurate runoff prediction data represent an essential foundation for other studies in the field of water resources. Many hydrological and data-driven models have been applied to predict runoff. These models often require a large amount of relevant data as input to the model. However, not all regions have complete data that meet the requirements of complex models. At the same time, an accurate prediction is difficult to obtain, and it is difficult for deterministic prediction results to meet needs such as risk assessment. In this study, the NGboost model was introduced into the runoff prediction. The model uses only the historical runoff series itself as the model input and can obtain the probability distribution of the data to be predicted. The study also used the TPE algorithm to optimize the hyperparameters of the NGboost model. The runoff probability prediction models are developed and optimized for three time scales of monthly, weekly and daily and three probability distributions of exponential, normal and lognormal distributions. The model was applied to the prediction of runoff at the Yichang and Pingshan stations in the upper reaches of the Yangtze River and achieved satisfactory prediction results. The results were obtained with the following process:
  • The NGBoost model was applied to deterministic and probabilistic runoff prediction at the monthly, weekly and daily scales and achieved good predictions;
  • The TPE algorithm was used to optimize the prediction model hyperparameters, improving prediction performance, and some recommendations for hyperparameter tuning were summarized; and
  • The prediction models with normal, lognormal and exponential distributions were analyzed for different runoff characteristics and time scales, and recommended distributions for different conditions were summarized.
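Since the only model input is the historical runoff series, the feature construction reduces to lagged windows of past flows (the "number of timesteps" hyperparameter in Table 3). A minimal sketch follows; the commented NGBoost call assumes the open-source `ngboost` package (`NGBRegressor`, `pred_dist`) and is illustrative rather than the authors' exact configuration:

```python
# Sketch: turn a 1-D runoff series into supervised (X, y) pairs, where each row
# of X holds the n_timesteps values preceding the target y. Values are toy data.
import numpy as np

def make_lag_features(runoff, n_timesteps):
    q = np.asarray(runoff, dtype=float)
    X = np.stack([q[i : i + n_timesteps] for i in range(len(q) - n_timesteps)])
    y = q[n_timesteps:]
    return X, y

series = np.array([100.0, 120.0, 90.0, 110.0, 130.0, 95.0])
X, y = make_lag_features(series, n_timesteps=3)
# X[0] = [100, 120, 90] is paired with target y[0] = 110, and so on.

# Illustrative only (assumes `pip install ngboost`):
# from ngboost import NGBRegressor
# from ngboost.distns import LogNormal
# model = NGBRegressor(Dist=LogNormal, n_estimators=1100).fit(X, y)
# dist = model.pred_dist(X)  # a full predictive distribution, not a point value
```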
The results show that the NGBoost model performed well in both deterministic and probabilistic runoff prediction. However, given the limitations of the model and the complexity of actual runoff, some questions deserve further investigation.
  • Prediction accuracy is still insufficient for high flows, especially at extreme values. The results at every time scale show room to improve accuracy in high-flow cases, and methods from extreme value analysis could be applied in subsequent work to address them specifically.
  • More probability distribution functions could be tested. This study examined three distributions (normal, lognormal and exponential); subsequent studies can introduce additional distribution forms to test the prediction performance of the model.
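Testing additional distributions would reuse the interval metrics reported in Tables 6 and 7. A minimal sketch of ICP and INAW under their common textbook definitions (the paper's exact normalization may differ, and CWC, which combines the two, is omitted here); the arrays below are toy data:

```python
# Sketch: interval coverage probability (ICP) and normalized average width
# (INAW) for a central prediction interval. Toy observations and bounds.
import numpy as np

def icp(obs, lower, upper):
    """Fraction of observations falling inside [lower, upper]."""
    obs, lower, upper = map(np.asarray, (obs, lower, upper))
    return float(np.mean((obs >= lower) & (obs <= upper)))

def inaw(obs, lower, upper):
    """Mean interval width, normalized by the range of the observations."""
    obs, lower, upper = map(np.asarray, (obs, lower, upper))
    return float(np.mean(upper - lower) / (obs.max() - obs.min()))

obs = np.array([10.0, 20.0, 30.0, 40.0])
lower = obs - 5.0
upper = obs + 5.0
upper[0] = 8.0  # the first observation now falls outside its interval
```

A wide interval trivially raises ICP (cf. the exponential models' ICP of 1.0000) while inflating INAW, which is why a combined score is needed to rank distributions.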
In general, the NGBoost runoff prediction model optimized by the TPE method can obtain good deterministic and probabilistic predictions using only the runoff series itself as data input, outperforms the benchmark models, and is worthy of further study.
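The TPE proposal step at the heart of the optimization can be sketched in one dimension. This is an assumption-laden toy (quadratic objective and candidate grid invented for illustration, not the study's search space), but it is a correct instance of the idea: split past trials into "good" and "bad" sets by a loss quantile, model each with a Parzen (Gaussian-kernel) density, and propose the candidate maximizing the ratio l(x)/g(x):

```python
# Minimal 1-D tree-structured-Parzen-estimator step (toy illustration).
import math

def parzen_density(x, samples, bandwidth=0.5):
    """Average of Gaussian kernels centred on the observed sample values."""
    return sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples) / (
        len(samples) * bandwidth * math.sqrt(2 * math.pi)
    )

def tpe_step(history, candidates, gamma=0.25):
    """One TPE proposal: split (value, loss) trials at the gamma loss quantile
    and return the candidate maximizing the density ratio l(x)/g(x)."""
    trials = sorted(history, key=lambda t: t[1])
    n_good = max(1, int(gamma * len(trials)))
    good = [x for x, _ in trials[:n_good]]
    bad = [x for x, _ in trials[n_good:]] or good
    return max(
        candidates,
        key=lambda x: parzen_density(x, good) / max(parzen_density(x, bad), 1e-12),
    )

# Toy objective with its minimum at x = 2 (think of a 1-D slice of the search
# space, e.g. one hyperparameter varied with the others held fixed).
history = [(x, (x - 2.0) ** 2) for x in [0.0, 1.0, 1.8, 2.2, 3.0, 5.0, 7.0, 9.0]]
proposal = tpe_step(history, [i * 0.25 for i in range(41)])
# The proposal lands in the cluster of low-loss trials around x = 2.
```

In practice each hyperparameter of Table 3 gets its own (tree-structured) estimator and the trials are run sequentially, each new result being appended to the history.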

Author Contributions

Conceptualization, K.S. and H.Q.; methodology, K.S. and J.Z.; software, K.S.; validation, K.S. and G.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Key Research and Development Program of China (2021YFC3200303) and the National Natural Science Foundation of China (Nos. 51979113, U1865202 and 52039004).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy.

Acknowledgments

Special thanks are given to the anonymous reviewers and editors for their constructive comments.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Locations of the Yichang and Pingshan stations.

Figure 2. Flow chart of the NGBoost method for runoff probability prediction.

Figure 3. Flow chart of the TPE algorithm.

Figure 4. Flow chart of the runoff probability prediction model.

Figure 5. Predicted and actual runoff processes of the models at the Yichang station for each time scale. (a) Predicted and actual runoff processes of the model at the monthly scale; (b) predicted and actual runoff processes of the model at the weekly scale; (c) predicted and actual runoff processes of the model at the daily scale.

Figure 6. Predicted and actual runoff processes of the models at the Pingshan station for each time scale. (a) Predicted and actual runoff processes of the model at the monthly scale; (b) predicted and actual runoff processes of the model at the weekly scale; (c) predicted and actual runoff processes of the model at the daily scale.

Figure 7. Probabilistic prediction intervals for models with different probability distributions and observations on the monthly scale of the Yichang station. (a) Probabilistic prediction intervals for the model with normal distribution and observations; (b) probabilistic prediction intervals for the model with lognormal distribution and observations; (c) probabilistic prediction intervals for the model with exponential distribution and observations.

Figure 8. Probabilistic prediction intervals for models with different probability distributions and observations on the weekly scale of the Yichang station. (a) Probabilistic prediction intervals for the model with normal distribution and observations; (b) probabilistic prediction intervals for the model with lognormal distribution and observations; (c) probabilistic prediction intervals for the model with exponential distribution and observations.

Figure 9. Probabilistic prediction intervals for models with different probability distributions and observations on the daily scale of the Yichang station. (a) Probabilistic prediction intervals for the model with normal distribution and observations; (b) probabilistic prediction intervals for the model with lognormal distribution and observations; (c) probabilistic prediction intervals for the model with exponential distribution and observations.

Figure 10. Hyperparameter tuning results of the runoff prediction models under different conditions. (a) Hyperparameter tuning results of the NGBoost model with normal distribution for monthly runoff at the Yichang station; (b) hyperparameter tuning results of the NGBoost model with lognormal distribution for weekly runoff at the Pingshan station. The rightmost axis is the MRE score of the model: the color from light to dark represents the MRE metric from large to small, so a darker color indicates that the model built with that set of hyperparameters predicts better. Each curve represents one set of hyperparameters tried, and its color represents the effectiveness of that set. The five axes on the left represent the five hyperparameters of the model, i.e., the number of timesteps of previous observations in the input, the depth of the decision tree base learner, the number of base learners, the learning rate and the percentage of subsamples used in model training.
Table 1. Basic statistical information of the runoff data from the Yichang station.

| Time Scale | Mean ¹ | Minimum ¹ | Maximum ¹ | Standard Deviation ¹ | Autocorrelation ² |
|---|---|---|---|---|---|
| monthly | 13,643 | 3058 | 52,168 | 9745 | 0.7554 |
| weekly | 13,706 | 2824 | 63,714 | 10,381 | 0.9038 |
| daily | 13,707 | 2470 | 79,881 | 10,665 | 0.9852 |

¹ The unit is m³/s. ² The value is the result of the first-order autocorrelation function.
Table 2. Basic statistical information of the runoff data from the Pingshan station.

| Time Scale | Mean ¹ | Minimum ¹ | Maximum ¹ | Standard Deviation ¹ | Autocorrelation ² |
|---|---|---|---|---|---|
| monthly | 4499 | 900 | 19,448 | 3592 | 0.7516 |
| weekly | 4520 | 770 | 26,971 | 3793 | 0.9317 |
| daily | 4520 | 640 | 28,600 | 3847 | 0.9926 |

¹ The unit is m³/s. ² The value is the result of the first-order autocorrelation function.
Table 3. Optimal hyperparameters of the prediction model for each time scale of the two stations.

| Station | Time Scale | Distributions | Number of Timesteps | Base Learner Depth | Number of Learners | Learning Rate | Percent of Subsample |
|---|---|---|---|---|---|---|---|
| Yichang | Monthly | Normal | 12 | 3 | 900 | 0.03722 | 0.6 |
| Yichang | Monthly | LogNormal | 12 | 3 | 1100 | 0.01003 | 0.5 |
| Yichang | Monthly | Exponential | 12 | 4 | 1100 | 0.00577 | 0.6 |
| Yichang | Weekly | Normal | 16 | 3 | 500 | 0.00568 | 0.5 |
| Yichang | Weekly | LogNormal | 15 | 2 | 1200 | 0.00346 | 0.9 |
| Yichang | Weekly | Exponential | 15 | 2 | 1400 | 0.00372 | 0.5 |
| Yichang | Daily | Normal | 16 | 3 | 1100 | 0.0068 | 0.7 |
| Yichang | Daily | LogNormal | 27 | 2 | 900 | 0.00864 | 1 |
| Yichang | Daily | Exponential | 23 | 2 | 700 | 0.01962 | 0.8 |
| Pingshan | Monthly | Normal | 12 | 5 | 800 | 0.01198 | 0.5 |
| Pingshan | Monthly | LogNormal | 12 | 5 | 900 | 0.05111 | 0.7 |
| Pingshan | Monthly | Exponential | 12 | 4 | 1400 | 0.01632 | 0.7 |
| Pingshan | Weekly | Normal | 13 | 3 | 1200 | 0.00384 | 0.5 |
| Pingshan | Weekly | LogNormal | 13 | 2 | 800 | 0.01528 | 0.7 |
| Pingshan | Weekly | Exponential | 14 | 2 | 900 | 0.02597 | 0.5 |
| Pingshan | Daily | Normal | 15 | 4 | 1500 | 0.01235 | 0.9 |
| Pingshan | Daily | LogNormal | 15 | 2 | 700 | 0.01047 | 1 |
| Pingshan | Daily | Exponential | 21 | 2 | 1100 | 0.0125 | 0.8 |
Table 4. Statistical results of RMSE, MRE, R² and IRMSE on the test data set for two benchmark models, SVM and XGboost, and three NGboost prediction models with normal (NGboost-Normal), lognormal (NGboost-LogNormal) and exponential (NGboost-Exponential) distributions, under monthly, weekly and daily scales at the Yichang station. The best performing model for each metric is in bold.

| Time Scale | Models | RMSE | MRE | R² | IRMSE |
|---|---|---|---|---|---|
| Monthly | SVM | 8612.16 | 0.5501 | −0.0625 | 0.9231 |
| Monthly | Xgboost | 4518.05 | 0.2036 | 0.7076 | 0.4843 |
| Monthly | Normal | 4138.10 | 0.1863 | 0.7547 | 0.4436 |
| Monthly | LogNormal | 4035.07 | **0.1703** | 0.7668 | 0.4325 |
| Monthly | Exponential | **4019.67** | 0.1747 | **0.7685** | **0.4309** |
| Weekly | SVM | 9050.41 | 0.5176 | −0.0274 | 1.6681 |
| Weekly | Xgboost | 4387.76 | 0.1905 | 0.7585 | 0.8087 |
| Weekly | Normal | 3685.47 | 0.1488 | 0.8296 | 0.6793 |
| Weekly | LogNormal | 3654.54 | 0.1473 | 0.8325 | 0.6736 |
| Weekly | Exponential | **3632.26** | **0.1468** | **0.8345** | **0.6695** |
| Daily | SVM | 6958.66 | 0.2143 | 0.4358 | 1.9935 |
| Daily | Xgboost | 2316.02 | 0.0895 | 0.9375 | 0.6635 |
| Daily | Normal | 2103.43 | **0.0771** | 0.9484 | 0.6026 |
| Daily | LogNormal | **2097.01** | 0.0774 | **0.9488** | **0.6007** |
| Daily | Exponential | 2099.29 | 0.0777 | 0.9486 | 0.6014 |
Table 5. Statistical results of the benchmark models and the NGboost prediction models with different distributions on the test data set at each scale at the Pingshan station. The best performing model for each metric is in bold.

| Time Scale | Models | RMSE | MRE | R² | IRMSE |
|---|---|---|---|---|---|
| Monthly | SVM | 3184.43 | 0.4586 | −0.1542 | 0.9276 |
| Monthly | Xgboost | 1349.63 | 0.1934 | 0.7927 | 0.3931 |
| Monthly | Normal | 1232.74 | 0.1915 | 0.8270 | 0.3591 |
| Monthly | LogNormal | **1228.67** | **0.1810** | **0.8282** | **0.3579** |
| Monthly | Exponential | 1239.29 | 0.1832 | 0.8252 | 0.3610 |
| Weekly | SVM | 3172.36 | 0.3825 | −0.0220 | 1.8276 |
| Weekly | Xgboost | 1205.08 | 0.1620 | 0.8525 | 0.6942 |
| Weekly | Normal | 1068.97 | 0.1405 | 0.8840 | 0.6158 |
| Weekly | LogNormal | **1058.50** | **0.1357** | **0.8862** | **0.6098** |
| Weekly | Exponential | 1063.74 | 0.1378 | 0.8851 | 0.6128 |
| Daily | SVM | 1596.71 | 0.1514 | 0.7511 | 2.0013 |
| Daily | Xgboost | 503.49 | 0.0687 | 0.9753 | 0.6311 |
| Daily | Normal | **467.98** | 0.0644 | **0.9786** | **0.5866** |
| Daily | LogNormal | 475.35 | 0.0642 | 0.9779 | 0.5958 |
| Daily | Exponential | 473.28 | **0.0641** | 0.9781 | 0.5932 |
Table 6. Statistical results of ICP, INAW and CWC for the three NGboost prediction models with normal, lognormal and exponential distributions on the test data set at monthly, weekly and daily scales for the Yichang station, respectively. The best performing model for each metric is in bold.

| Time Scale | Distributions | ICP | INAW | CWC |
|---|---|---|---|---|
| Monthly | Normal | 0.4388 | **0.0742** | 0.1919 |
| Monthly | LogNormal | 0.6582 | 0.0962 | **0.1897** |
| Monthly | Exponential | **1.0000** | 1.0108 | 1.0108 |
| Weekly | Normal | 0.9293 | 0.1649 | **0.1649** |
| Weekly | LogNormal | 0.8834 | **0.1562** | 0.3151 |
| Weekly | Exponential | **1.0000** | 0.6858 | 0.6858 |
| Daily | Normal | 0.7842 | 0.0428 | **0.0899** |
| Daily | LogNormal | 0.7650 | **0.0419** | 0.0908 |
| Daily | Exponential | **1.0000** | 0.4965 | 0.4965 |
Table 7. Statistical results of the probabilistic metrics for the models at each time scale at the Pingshan station. The best performing model for each metric is in bold.

| Time Scale | Distributions | ICP | INAW | CWC |
|---|---|---|---|---|
| Monthly | Normal | 0.3214 | 0.0521 | 0.1451 |
| Monthly | LogNormal | 0.0204 | **0.0060** | **0.0205** |
| Monthly | Exponential | **1.0000** | 0.9307 | 0.9307 |
| Weekly | Normal | 0.7456 | 0.1315 | **0.2849** |
| Weekly | LogNormal | 0.6749 | **0.1300** | 0.2929 |
| Weekly | Exponential | **1.0000** | 0.7808 | 0.7808 |
| Daily | Normal | 0.6876 | **0.0374** | **0.0837** |
| Daily | LogNormal | 0.7138 | 0.0403 | 0.0888 |
| Daily | Exponential | **1.0000** | 0.6108 | 0.6108 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
