Article

Prediction of Potential Evapotranspiration Using Temperature-Based Heuristic Approaches

1 State Key Laboratory of Hydrology-Water Resources and Hydraulic Engineering, Hohai University, Nanjing 210098, China
2 Faculty of Science, Agronomy Department, Hydraulics Division, University of Skikda, Skikda 21000, Algeria
3 Institute of Research and Development, Duy Tan University, Da Nang 550000, Vietnam
4 Faculty of Engineering, School of Civil Engineering, Universiti Teknologi Malaysia (UTM), Johor Bahru 81310, Malaysia
5 School of Technology, Ilia State University, Tbilisi 0162, Georgia
* Authors to whom correspondence should be addressed.
Sustainability 2021, 13(1), 297; https://doi.org/10.3390/su13010297
Submission received: 10 December 2020 / Revised: 23 December 2020 / Accepted: 24 December 2020 / Published: 31 December 2020
(This article belongs to the Special Issue Hydrometeorological Hazards and Disasters)

Abstract

The potential or reference evapotranspiration (ET0) is considered one of the fundamental variables for irrigation management, agricultural planning, and modeling different hydrological processes, and therefore its accurate prediction is highly essential. This study validates the feasibility of new temperature-based heuristic models (i.e., the group method of data handling neural network (GMDHNN), multivariate adaptive regression splines (MARS), and the M5 model tree (M5Tree)) for estimating monthly ET0. The outcomes of the newly developed models are compared with empirical formulations, including the Hargreaves-Samani (HS), calibrated HS (CHS), and Stephens-Stewart (SS) models, based on mean absolute error (MAE), root mean square error (RMSE), and Nash-Sutcliffe efficiency (NSE). Monthly maximum and minimum temperatures (Tmax and Tmin) observed at two stations in Turkey are utilized as inputs for model development. In the applications, three data division scenarios are utilized, and the effect of the periodicity component (PC) on the models’ accuracy is also examined. By importing the PC into the model inputs, the RMSE of the GMDHNN, MARS, and M5Tree models improved by 1.4%, 8%, and 6%, respectively, in one station. The GMDHNN model with periodic input provides superior performance to the other alternatives in both stations. The recommended model reduced the average error of the MARS, M5Tree, HS, CHS, and SS models with respect to RMSE by 3.7–6.4%, 10.7–3.9%, 76–75%, 10–35%, and 0.8–17%, respectively, in estimating monthly ET0. The HS model provides the worst accuracy, while its calibrated version significantly improves its accuracy. The GMDHNN, MARS, M5Tree, SS, and CHS models are also compared in estimating monthly mean ET0. The GMDHNN generally gave the best accuracy, while the CHS provided considerable over/under-estimations. The study indicated that relying on only one data-splitting scenario may mislead the modeler, and for better validation of the heuristic methods, more data-splitting scenarios should be applied.

1. Introduction

Reference evapotranspiration (ET0) is one of the major components of the hydrological cycle [1]. It contributes to rational water resources management [2,3], and it is important in agriculture for quantifying crop water requirements [4]. In addition, ET0 is used as an input for several hydrological models and adopted in climate change studies [5]. Several empirical and semi-empirical methods have been developed at different time scales for ET0 prediction. The performance of these methods varies with respect to the meteorological variables they include, ranging from temperature-based and radiation-based to combination methods [6]. One of the earliest methodologies, ET0 calculated using the standard FAO56 Penman-Monteith equation, has been adopted as the reference approach [7]. However, ET0 can also be measured directly using lysimeters [2]. Measured ET0 varies from one region to another depending on the regional climate characteristics [8]. Hence, empirical formulations present a remarkable limitation for ET0 estimation. During the last few decades, computer-aided models have shown distinguished progress in the hydrology and water resources fields [9,10,11,12,13]. Artificial intelligence (AI) models have been extensively applied as a reliable soft computing technology for ET0 estimation based on the available measured climatic variables [14,15,16].
A review of the literature indicates that numerous studies have examined the application of several AI models for estimating ET0 [17,18,19,20,21,22,23,24,25,26,27,28]. In a more detailed view of the state of the art, Yin et al. [26] introduced a new hybrid AI model based on the hybridization of a genetic algorithm with a kernel model, i.e., a support vector machine (GA-SVM), for modeling daily ET0 in China using several daily climatic variables including Tmax, Tmin, wind speed (U2), relative humidity (RH), and solar radiation (SR). Compared to classical artificial neural network (ANN) and SVM models, the authors demonstrated the superiority of the GA-SVM in predicting ET0. Jovic et al. [17] proposed a genetic programming (GP) method for estimating ET0 using eight climatic variables. Mattar [20] applied gene expression programming (GEP) for modeling monthly ET0 in Egypt using five input variables: Tmax, Tmin, RH, U2, and Rs. Tao et al. [25] introduced a hybrid method combining an adaptive neuro-fuzzy inference system (ANFIS) with a firefly algorithm (FA) (ANFIS-FA) for modeling daily ET0 in Burkina Faso. Using six input variables, namely Tmax, Tmin, maximum relative humidity (RHmax), Rs, U2, and vapor pressure deficit (VP), the authors demonstrated that the FA significantly increased the accuracy of the ANFIS method and that the hybrid ANFIS-FA provided high accuracy, with a determination coefficient (R2) of nearly 0.97 compared to an R2 of 0.91 obtained using standard ANFIS. Using data from India, Adamala [27] compared four models in predicting daily ET0 using fewer input variables: Tmax, Tmin, and extra-terrestrial radiation (Ra). The applied models were a wavelet neural network (W-ANN), ANN, multiple linear regression (MLR), and wavelet linear regression (W-MLR). The author reported that decomposing the input variables with the wavelet transform significantly improved the performance of the models, with a Nash-Sutcliffe efficiency (NSE) of 0.82. Using Tmax, Tmin, RH, U2, Rs, and sunshine hours (SH), Gavili et al. [28] demonstrated that the ANN model was better than GEP and ANFIS for predicting monthly ET0 in Iran, with an NSE value that reached 0.98 during the testing phase for all tested stations. Khoshravesh et al. [19] compared three regression methods, namely multivariate fractional polynomial (MFP), Bayesian regression (BR), and robust regression (RBR), for modeling monthly ET0 in Iran using Tmax, Tmin, mean temperature (Tmean), and Rs. Karbasi [29] built a new Gaussian process regression (GPR) model for forecasting daily ET0 several days in advance and demonstrated that the use of wavelet decomposition significantly increased the abilities of the method, with the RMSE dropping from 0.816 mm to 0.068 mm.
Among the several machine learning models explored in the literature, the group method of data handling type neural network (GMDHNN) is based on Rosenblatt’s perceptron method, as described by Farlow [30]. GMDHNN has been successfully applied in diverse engineering applications [31,32,33,34]. Within hydrology and water resources research, Najafzadeh et al. [35] developed a GMDHNN model for estimating the scour depth (SD) of pipelines due to waves; the prediction of local SD at bridge abutments in coarse sediments with thinly armored beds was conducted by Najafzadeh et al. [36]; simulation of flow discharge in straight compound channels was reported by Najafzadeh and Zahiri [37]; prediction of significant wave height was established by Shahabi et al. [38]; prediction of turbidity considering daily rainfall and discharge data was performed by Tsai and Yen [39]; an improved modeling of the discharge coefficient of triangular labyrinth lateral weirs was described by Parsaie and Haghiabi [40]; an evaluation of treated water quality in a water treatment plant was carried out by Alitaleshi and Daghbandan [41]; and a prediction of turbidity and free residual aluminum in drinking water was tested by Daghbandan et al. [42]. Based on the reviewed literature, only one study, by da Silva Carvalho and Delgado [43], has reported the implementation of GMDHNN for ET0 modeling. That study was conducted against FAO56 Penman-Monteith calculations using only previous ET0 values and a very limited amount of daily data spanning three years (January 2011 to January 2014).
The multivariate adaptive regression splines (MARS) model was introduced by Friedman [44], and the M5 model tree (M5Tree) was developed by Quinlan [45]. They form another distinguished category of data-driven models, mainly used in environmental, hydrological, irrigation, and hydraulic studies. The MARS model is one of the more sophisticated AI models, as it is non-parametric and able to identify the actual relationship between the predictors and the predicted variable, using splines to detect nonlinear patterns [46]. The MARS model has been successfully applied in many hydrological applications [47,48,49,50,51,52]. It was used to predict water pollution by Kisi and Parmar [47], to forecast sediment load by Adnan et al. [48], to model daily streamflow by Yin et al. [49], to predict evaporation by Ghaemi et al. [50], and to predict monthly river flow by Adnan et al. [51].
However, fewer applications related to ET0 modeling can be found in the literature. For instance, Mehdizadeh et al. [53] compared MARS, SVM, and GEP for modeling monthly ET0 in Iran using several climatic variables as inputs: Tmax, Tmin, Tmean, RH, U2, VP, Ra, Rs, and Rn. The authors compared several scenarios, namely temperature-based, radiation-based, mass transfer-based, and meteorological parameters-based scenarios. For the temperature-based scenarios, using only Tmax, Tmin, and Ra, the research findings confirmed the superiority of the MARS model over SVM and GEP, with an R2 of 0.944 in the validation phase. Mehdizadeh et al. [22] investigated the capacity of MARS and GEP models for estimating daily ET0 in Iran using four climatic variables: Tmean, RH, U2, and Rs. The authors demonstrated that the MARS model performed best using all climate variables, with an R2 of nearly 0.99 in the validation phase. Using four climatic variables, namely Tmean, RH, U2, and Rs, Kisi [54] compared MARS, M5Tree, and least squares support vector machines (LSSVM) for modeling monthly ET0 in Turkey and demonstrated that, in some cases, MARS is superior to the other two in terms of accuracy. Keshtegar et al. [55] and Keshtegar and Kisi [56] applied the M5Tree, ANFIS, and ANN for modeling daily ET0 in Turkey using Tmean, RH, U2, and Rs as input variables. Kisi and Kilic [57] compared M5Tree and ANN for modeling daily ET0 in the USA using Tmean, RH, U2, and Rs. Rahimikhoob [58] compared M5Tree and ANN for modeling monthly ET0 in Iran using Tmean, RH, U2, and Ra, and reported that both methods produced almost similar estimates with small differences.
The majority of the aforementioned studies were conducted using several input variables. With the exception of the investigation by Mehdizadeh et al. [53], in which the MARS model was applied for modeling ET0 using only temperature data as inputs, there are limited studies that have applied MARS and M5Tree models for ET0 utilizing only temperature inputs. Hence, the major objective of the present investigation was to assess the performance of GMDHNN, MARS, and M5Tree models in estimating ET0 using only temperature and extra-terrestrial radiation, and to validate the results against the empirical formulations (i.e., HS, CHS, and SS). The main motivation of the current study is the use of a specific type of climate data (temperature) to simulate ET0, as temperature is a highly essential and significant factor influencing ET0, while recording such data over long durations is an easy task in developing countries. The other difference of this study compared to previous ones is the use of different data-splitting scenarios and the inclusion of periodicity (the month number of the year) as an input to the GMDHNN, MARS, and M5Tree models.

2. Materials and Methods

2.1. Case Study

In the present study, monthly maximum and minimum temperatures (Tmax and Tmin), solar radiation, relative humidity, and wind speed measured at the Adana (latitude 37°00′ N, longitude 35°19′ E, altitude 27 m) and Antakya (latitude 36°33′ N, longitude 36°30′ E, altitude 100 m) stations in the Mediterranean Region of Turkey were utilized. The stations are operated by the Turkish Meteorological Organization, and their locations can be seen in Figure 1. The data periods used for Adana and Antakya are 1968–2015 and 1983–2010, respectively. The statistical parameters of the data employed in the applications are summarized in Table 1. Ra has the highest correlation with ET0, followed by Tmin and Tmax, and Ra has a higher correlation with ET0 in Antakya than in Adana. It is also visible from Table 1 that ET0 in Adana is higher than in Antakya.

2.2. Group Method of Data Handling Type Neural Network

GMDHNN is a powerful machine learning tool based on a heuristic self-organization principle, in which the system follows a single process of data importing, rearing, hybridizing, selection, and rejection. GMDH algorithms are divided into two groups: parametric and non-parametric. If the variance is low, parametric algorithms provide the best results, while for high variance, non-parametric algorithms perform better. The GMDHNN model is capable of handling multiple input variables and provides a single output. The GMDHNN model consists of different layers, each containing a set of neurons; the neurons in every layer are linked through quadratic polynomials, which generate the new neurons of the next layer [59,60,61]. For a database with M input variables and N observations, the output is defined as below:
m_i = f(y_{i,1}, y_{i,2}, y_{i,3}, \ldots, y_{i,M}), \quad i = 1, 2, 3, \ldots, N
Here, (y1, y2, y3, …, yM) is the real input of the mapping f, and mi is the real output based on that input. For identification of the problem, an approximate mapping f̂ is considered instead of f to forecast the output value m̂ instead of m, such that m̂ is close to m. The provided input (y1, y2, y3, …, yM) is used for GMDHNN training to attain the output m̂i as given below:
\hat{m}_i = \hat{f}(y_{i,1}, y_{i,2}, y_{i,3}, \ldots, y_{i,M}), \quad i = 1, 2, 3, \ldots, N
The GMDHNN algorithm works to minimize the mean square error (MSE) so as to make the model most effective for prediction. The MSE is calculated as E below and driven to its minimum:
E = \frac{1}{N} \sum_{i=1}^{N} \left[ \hat{f}(y_{i,1}, y_{i,2}, y_{i,3}, \ldots, y_{i,M}) - m_i \right]^2 \rightarrow \min
As discussed above, the neurons are connected through quadratic polynomials, while the Kolmogorov-Gabor polynomial is used to map the relation between the input and output variables [53,55,56]. The Kolmogorov-Gabor polynomial can be expressed as below:
m = d_0 + \sum_{i=1}^{M} d_i y_i + \sum_{i=1}^{M} \sum_{j=1}^{M} d_{ij} y_i y_j + \sum_{i=1}^{M} \sum_{j=1}^{M} \sum_{k=1}^{M} d_{ijk} y_i y_j y_k + \cdots
To minimize the variation between the actual output m and the estimated output m̂, a regression model is applied for each pair of input variables (yi, yj).
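To make this concrete, the following minimal Python sketch (not the authors’ implementation; the NumPy-based least-squares fit and all variable names are illustrative assumptions) shows the building block that a GMDH layer repeats for every pair of candidate inputs: fitting the six coefficients d of a quadratic polynomial neuron and scoring it with the squared-error criterion E defined above. A full GMDHNN would build such neurons for all input pairs, keep the best-performing ones, and feed their outputs to the next layer.

```python
import numpy as np

def fit_quadratic_neuron(yi, yj, m):
    """Fit m ≈ d0 + d1*yi + d2*yj + d3*yi^2 + d4*yj^2 + d5*yi*yj by least squares."""
    X = np.column_stack([np.ones_like(yi), yi, yj, yi**2, yj**2, yi * yj])
    d, *_ = np.linalg.lstsq(X, m, rcond=None)
    return d

def quadratic_neuron_output(d, yi, yj):
    """Evaluate the fitted quadratic polynomial neuron."""
    X = np.column_stack([np.ones_like(yi), yi, yj, yi**2, yj**2, yi * yj])
    return X @ d

# Hypothetical usage with two candidate inputs (e.g., Tmax and Ra) and an ET0 target.
rng = np.random.default_rng(0)
tmax = rng.uniform(15.0, 45.0, 200)
ra = rng.uniform(15.0, 42.0, 200)
et0 = 0.10 * tmax + 0.05 * ra + 0.002 * tmax * ra + rng.normal(0.0, 0.2, 200)
d = fit_quadratic_neuron(tmax, ra, et0)
E = np.mean((quadratic_neuron_output(d, tmax, ra) - et0) ** 2)  # the criterion E above
```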

2.3. Multivariate Adaptive Regression Splines

The multivariate adaptive regression splines (MARS) model was proposed by Friedman [44] as a data-driven technique that looks for any possible nonlinear and nonparametric relationship between a set of input and output variables. MARS tries to identify and automatically establish an explicit regression equation between the regressors and the dependent variable in a stepwise manner. Another important advantage of the MARS model is its ability to quantify the contribution of each predictor to the dependent variable; at the end of the training procedure, it provides a final ranking of the individual regressors [44]. A wide range of applications of the MARS model can be found in the literature, including: estimating heating load in buildings [62,63], predicting centerline segregation in steel cast products [64], predicting landslide susceptibility [65], estimating fractional snow cover (FSC) from MODIS data [66], and predicting monthly discharge and mean soil temperature [22,62]. In the MARS model, the space of regressors is divided into several subspaces separated by knots, each with its own function; splines (segments) link these knots, and the splines are grouped to form basis functions (BF). Hence, globally speaking, the MARS model is based on three major components: knots, splines, and BF, and the model development is achieved in two phases: a forward (building) phase and a backward (pruning) phase. In the forward phase, a high-dimensional model is built that contains the chosen knots and their corresponding BF. During the backward phase, the BF that contribute less to decreasing the error are pruned via generalized cross-validation (GCV) [66].
Firstly, MARS starts by building a set of BF with the following equation [44]:
BF_m(x) = \max(0, c - x) \quad \text{or} \quad BF_m(x) = \max(0, x - c)
where x is one of the regressor variables, c is the threshold value for the regressor x, and BF is the basis function. Consequently, the MARS model is developed as an ensemble of the BF as follows:
Y = f(x) = \psi_0 + \sum_{m=1}^{M} \psi_m BF_m(x)
Y is the response (dependent) variable (ET0), BF is the basis function, x is a regressor contributing to the formation of the BF, ψm is the unknown coefficient of the mth BF, and M is the total number of BF [44,66]. The GCV is expressed as follows:
GCV(M) = \frac{\frac{1}{N} \sum_{i=1}^{N} \left( y_i - f(x_i) \right)^2}{\left( 1 - \frac{c(M)}{N} \right)^2}
N is the number of patterns, M is the number of BF, yi is the target variable (ET0), f(xi) is the predicted value for pattern i, and c(M) is the penalty factor [67]. The MARS model was implemented using the MATLAB toolbox ARESLab [68].
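As a rough illustration of the forward phase, the Python sketch below (a simplified stand-in, not the ARESLab toolbox; knot placement at fixed quantiles and the absence of GCV-based pruning are assumptions made for brevity) builds mirrored hinge basis functions max(0, x − c) and max(0, c − x) at a few knots of a single predictor and estimates the coefficients ψ by least squares.

```python
import numpy as np

def hinge(x, c, direction):
    """Basis function: max(0, x - c) if direction = +1, max(0, c - x) if direction = -1."""
    return np.maximum(0.0, direction * (x - c))

def build_mars_design(x, knots):
    """Design matrix: an intercept plus a pair of mirrored hinges at each knot."""
    columns = [np.ones_like(x)]
    for c in knots:
        columns.append(hinge(x, c, +1.0))
        columns.append(hinge(x, c, -1.0))
    return np.column_stack(columns)

# Hypothetical one-predictor example: knots placed at quartiles of Tmean.
rng = np.random.default_rng(1)
tmean = np.sort(rng.uniform(0.0, 35.0, 300))
et0 = np.where(tmean > 20.0, 0.25 * (tmean - 20.0), 0.05 * tmean) + rng.normal(0.0, 0.1, 300)
knots = np.quantile(tmean, [0.25, 0.50, 0.75])
X = build_mars_design(tmean, knots)
psi, *_ = np.linalg.lstsq(X, et0, rcond=None)  # the psi_m coefficients of the equation above
et0_hat = X @ psi
```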

2.4. M5 Model Tree

The M5 model tree (M5Tree), which is an amended version of the original decision tree (DT), was proposed by Quinlan [45]. The DT was originally proposed for solving classification problems using a splitting method, in which the available information in the data is extracted via the construction of a tree composed of three kinds of nodes: internal nodes, root nodes, and leaf nodes [45]. The M5Tree has been used for solving several problems, such as predicting energy consumption in buildings [69], air quality modeling [70], predicting liquefaction-induced lateral spreading [71], forecasting solar ultraviolet radiation [72], and predicting daily water levels in rivers [73]. The M5Tree is a regression model in which the training data are apportioned into smaller subsets through the construction of a tree using a gain ratio criterion, and an individual regression model is built for each subset [45]. Once the tree has been constructed, the training process tries to determine the best separation into subsets with respect to two conditions: (i) the leaf nodes of the tree contain patterns from only one subset, or (ii) separation stops once no further improvement in the gain ratio is observed. For a given n leaf nodes corresponding to k breaking points, each subset has its own linear model as follows [73]:
Y = \begin{cases} \lambda_{01} + \lambda_{11} x, & \text{if } x \leq Z_1 \\ \lambda_{02} + \lambda_{12} x, & \text{if } x > Z_1 \\ \quad\vdots \\ \lambda_{0,n-1} + \lambda_{1,n-1} x, & \text{if } x \leq Z_k \\ \lambda_{0n} + \lambda_{1n} x, & \text{if } x > Z_k \end{cases}
where Y is the calculated ET0, x is one of the input variables selected for model development (climatic variables), λ0i and λ1i (i = 1:n) are the parameters of the linear models at the n leaves, and Z1:k are the breaking-point values. According to Quinlan [45], building an M5Tree model generally involves two major steps: the growth step (creating a DT) and the tree-pruning step, in which an overgrown tree is pruned back. The standard deviation reduction (SDR) statistical metric is used to compute the error at each node as the splitting criterion [74,75]:
SDR = sd(T) - \sum \frac{|T_i|}{|T|} \, sd(T_i)
Ti indicates the subset of examples associated with the ith possible split, T represents the set of examples reaching the node, and sd is the standard deviation of the observations. The M5Tree was applied using the MATLAB toolbox M5PrimeLab [76].
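A minimal sketch of the SDR splitting criterion is given below (illustrative only, not the M5PrimeLab toolbox; the feature, target, and helper names are assumptions). It scans the candidate thresholds of a single feature and returns the split that maximizes the standard deviation reduction at a node.

```python
import numpy as np

def sdr(target, left_mask):
    """Standard deviation reduction for one candidate split of the examples at a node."""
    subsets = [target[left_mask], target[~left_mask]]
    weighted = sum(len(t) / len(target) * np.std(t) for t in subsets if len(t) > 0)
    return np.std(target) - weighted

def best_split(feature, target):
    """Scan candidate thresholds of one feature and keep the split with the largest SDR."""
    best_threshold, best_gain = None, -np.inf
    for c in np.unique(feature)[:-1]:
        gain = sdr(target, feature <= c)
        if gain > best_gain:
            best_threshold, best_gain = c, gain
    return best_threshold, best_gain

# Hypothetical usage: splitting on monthly Tmax with ET0 as the target.
rng = np.random.default_rng(2)
tmax = rng.uniform(15.0, 45.0, 120)
et0 = 0.15 * tmax + rng.normal(0.0, 0.3, 120)
threshold, gain = best_split(tmax, et0)
```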

2.5. Stephens-Stewart Model

Stephens and Stewart’s [77] method is used for pan evaporation estimation. It can be expressed as
E_{pan} = R(a + bT_{mean})
where Epan is the pan evaporation (mm/month), R is the solar radiation (mm/month), and a and b are fitted parameters. In the present study, the SS method given above was used for ET0 estimation by using extraterrestrial radiation instead of solar radiation data:
ET_0 = R_a(a + bT_{mean})
where ET0 denotes the reference evapotranspiration (mm/month) and Ra refers to the extraterrestrial radiation (mm/month).
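A short sketch of how the two SS coefficients could be obtained is given below (an illustrative assumption, since the paper does not detail the fitting procedure; ordinary least squares against FAO-56 PM targets is used here, and the function names are hypothetical).

```python
import numpy as np

def fit_stephens_stewart(ra, tmean, et0_pm):
    """Fit a and b in ET0 = Ra * (a + b * Tmean) by least squares against FAO-56 PM values."""
    X = np.column_stack([ra, ra * tmean])  # the two columns multiply a and b respectively
    (a, b), *_ = np.linalg.lstsq(X, et0_pm, rcond=None)
    return a, b

def stephens_stewart(ra, tmean, a, b):
    """Monthly ET0 (mm/month) from extraterrestrial radiation and mean temperature."""
    return ra * (a + b * tmean)
```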

2.6. Hargreaves and Samani Model

The Hargreaves and Samani (HS) model [78,79] is a temperature-based model and needs only a few input variables: extraterrestrial radiation (Ra) (mm/day) and the maximum and minimum temperatures Tmax and Tmin (°C):
ET_0 = 0.0023 \, R_a (T_{mean} + 17.8)(T_{max} - T_{min})^{0.5}
where Tmax and Tmin are the monthly maximum and minimum temperatures (°C), respectively. The calibrated version of HS given in the following equation was also employed in this study:
ET_{0,\mathrm{calibrated}} = a + b \, ET_0
where a and b are fitted parameters.
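The sketch below (illustrative; the function names and the least-squares calibration are assumptions) computes the HS estimate from the equation above and fits the calibration coefficients a and b of the CHS model against FAO-56 PM values.

```python
import numpy as np

def hargreaves_samani(ra_mm, tmax, tmin):
    """HS reference evapotranspiration; Ra expressed as equivalent evaporation (mm/day)."""
    tmean = (tmax + tmin) / 2.0
    return 0.0023 * ra_mm * (tmean + 17.8) * np.sqrt(tmax - tmin)

def calibrate_hs(et0_hs, et0_pm):
    """Fit a and b in ET0_calibrated = a + b * ET0_HS by least squares."""
    X = np.column_stack([np.ones_like(et0_hs), et0_hs])
    (a, b), *_ = np.linalg.lstsq(X, et0_pm, rcond=None)
    return a, b
```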

2.7. Model Development by Heuristic Methods

In the present study, the three abovementioned heuristic methods were implemented for monthly ET0 estimation. Three different data division scenarios, 50–50%, 60–40%, and 75–25%, were employed in the applications so as to see the effect of the training/test size on the models’ accuracy. It is well known that data-driven methods are highly affected by the size of the training data and that more data generally produce a better model. As also mentioned in the introduction section, the studies in the existing literature generally utilize several climatic inputs, such as air temperatures (Tmax, Tmin), wind speed (U2), relative humidity (RH), and solar radiation (SR), for ET0 estimation. In developing countries, measurement of all these variables is not always possible, and therefore models requiring a limited number of inputs are necessary in such cases. As was also reported by a recent review [1], future studies are required for developing new models with limited inputs. Keeping this in mind, the following two input combinations were considered in this study:
  • Tmin, Tmax, Ra
  • Tmin, Tmax, Ra, α.
It is worth mentioning that air temperature is readily available almost everywhere and that Ra can be calculated using the Julian date, as shown in the sketch below. The developed models are useful in practical applications because they only need a small number of input variables. The periodicity (α, month number of the year) was also considered in the model inputs to see its influence, if any, on the models’ accuracy. The flowchart provided in Figure 2 summarizes the model development procedure.
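Extraterrestrial radiation depends only on latitude and the day of the year, so it can be derived from the standard FAO-56 relations; the Python sketch below is a generic implementation of those relations (not the authors’ code), and the Adana latitude and mid-July day in the usage example are purely illustrative.

```python
import numpy as np

def extraterrestrial_radiation(latitude_deg, day_of_year):
    """Daily extraterrestrial radiation Ra (MJ m-2 day-1) from the FAO-56 equations."""
    gsc = 0.0820                                                   # solar constant, MJ m-2 min-1
    phi = np.radians(latitude_deg)
    dr = 1 + 0.033 * np.cos(2 * np.pi * day_of_year / 365)         # inverse relative Earth-Sun distance
    delta = 0.409 * np.sin(2 * np.pi * day_of_year / 365 - 1.39)   # solar declination (rad)
    ws = np.arccos(-np.tan(phi) * np.tan(delta))                   # sunset hour angle (rad)
    return (24 * 60 / np.pi) * gsc * dr * (
        ws * np.sin(phi) * np.sin(delta) + np.cos(phi) * np.cos(delta) * np.sin(ws)
    )

# Hypothetical usage: mid-July (day 196) at roughly the latitude of Adana (37.0° N);
# multiplying by 0.408 converts MJ m-2 day-1 into equivalent evaporation (mm/day).
ra_july = extraterrestrial_radiation(37.0, 196)
ra_july_mm = 0.408 * ra_july
```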
In the applications, GMDHNN, MARS, and M5Tree heuristic methods were employed to estimate monthly ET0 while only utilizing temperature data as model inputs. Data of two stations, Adana and Antakya, were used for calibration of the methods. First, monthly ET0 values were calculated by the FAO–56 PM formula using data of minimum and maximum temperatures, relative humidity, solar radiation, and wind speed following the guideline of Allen et al. [7]. Then, the obtained ET0 data were used for the calibration and test of the selected models. The outcomes of GMDHNN, MARS, and M5Tree methods were compared with the empirical HS, calibrated HS (CHS), and SS regression methods.

3. Application and Results

The models were evaluated with respect to three commonly used statistics: root mean square error (RMSE), mean absolute error (MAE), and Nash-Sutcliffe efficiency (NSE) [80,81,82]. RMSE and MAE vary from 0 to positive infinity, and values of 0 indicate a perfect fit. NSE varies from negative infinity to 1, where 1 means that the model perfectly reproduces the observed values. The expressions of RMSE, MAE, and NSE are:
RMSE = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( ET_{0,i} - ET_{0,i}^{M} \right)^2}
MAE = \frac{1}{N} \sum_{i=1}^{N} \left| ET_{0,i} - ET_{0,i}^{M} \right|
NSE = 1 - \frac{\sum_{i=1}^{N} \left( ET_{0,i} - ET_{0,i}^{M} \right)^2}{\sum_{i=1}^{N} \left( ET_{0,i} - \overline{ET}_{0} \right)^2}
In the equations, N is the number of data points, ET̄0 is the average reference evapotranspiration computed by FAO–56 PM, ET0,iM is the estimated ET0, and ET0,i is the FAO–56 PM computed reference evapotranspiration.
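A minimal sketch of the three criteria is given below (for illustration only; the results reported in the tables were not produced with this code). It computes RMSE, MAE, and NSE for arrays of FAO-56 PM and estimated ET0 values.

```python
import numpy as np

def rmse(obs, est):
    """Root mean square error."""
    return np.sqrt(np.mean((obs - est) ** 2))

def mae(obs, est):
    """Mean absolute error."""
    return np.mean(np.abs(obs - est))

def nse(obs, est):
    """Nash-Sutcliffe efficiency: 1 is a perfect match, negative values indicate a poor model."""
    return 1.0 - np.sum((obs - est) ** 2) / np.sum((obs - np.mean(obs)) ** 2)
```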
For Adana Station, the GMDHNN, MARS, M5Tree, HS, CHS, and SS models are compared in Table 2 with respect to the RMSE, MAE, and NSE statistics. In the table, training and testing accuracies can be observed for three different train-test scenarios. In the three implemented heuristic methods, default structures were used, and the models were calibrated by introducing the training data; in the 1st, 2nd, and 3rd scenarios, 50%, 60%, and 75% of the whole data were utilized to obtain the optimal parameters of the models. After the calibration process, the calculated parameters of the GMDHNN, MARS, and M5Tree models were kept and directly used in the testing stage, and the models were validated with the test data; in the 1st, 2nd, and 3rd scenarios, 50%, 40%, and 25% of the whole data were utilized to assess the models’ accuracy based on the three aforementioned statistics (RMSE, MAE, and NSE). The GMDHNN2, MARS2, and M5Tree2 models were developed by adding the periodicity component (α) to the GMDHNN1, MARS1, and M5Tree1 models so as to see its effect on the models’ accuracy in estimating ET0. It is obvious that α did not have any considerable effect on the models’ accuracy at this station. The M5Tree models had better fits in the training stage, whereas the GMDHNN and MARS models performed better than the M5Tree in the testing stage. GMDHNN has better accuracy than MARS, but the difference is not large. The calibration process considerably improves the HS performance in the estimation of ET0. The average statistics in Table 2 show that the GMDHNN and SS have almost the same performance, and they are superior to the other models with respect to the three criteria. The relative differences between the GMDHNN/SS and MARS2 models with respect to average RMSE and MAE are 2.9% and 3%, respectively. Detailed results indicate that only a slight difference exists between the periodic MARS (MARS2) and SS models in the 50–50% and 60–40% train-test scenarios, while the latter performs better than the former in the 75–25% scenario. These results tell us that the use of only one data-splitting scenario may mislead modelers when evaluating the methods’ accuracy. The methods are also compared in Figure 3 with respect to RMSE and NSE in the testing stage. The variation of the criteria (RMSE, NSE) across the different splitting scenarios is parallel for all of the applied methods. NSE decreases and RMSE slightly increases from the first splitting scenario (50–50%) to the third scenario (75–25%).
Table 3 reports the training and testing statistics of the employed methods for Antakya Station. Unlike at Adana Station, including the periodicity input considerably improves the accuracy of the MARS and M5Tree methods in the testing stage. Similar to the previous station, the temperature-based GMDHNN models also show superior performance to the MARS and M5Tree models in the estimation of ET0. All heuristic methods outperform the SS method. A considerable improvement is observed for the HS method after calibration: the average RMSE and MAE decrease from 1.715 mm and 1.557 mm to 0.655 mm and 0.541 mm, respectively. The SS model has better accuracy than the HS and CHS models in the estimation of ET0 using only temperature data as inputs. The relative differences between GMDHNN2 and the MARS2/M5Tree/SS models with respect to average RMSE and MAE are 3.7%/10.7%/0.8% and 4%/11.7%/1.1%, respectively. The results of Antakya Station suggest the use of a periodicity input in model development. The RMSE and NSE values of the applied methods are also compared in Figure 4 for the testing stage. Here, the criteria also vary in similar ways for all the methods except the CHS. Unlike at Adana Station, the NSE slightly increases and the RMSE decreases from the 50–50% splitting scenario to the 75–25% scenario. Comparison of the two stations (compare Figure 3 and Figure 4, or Table 2 and Table 3) reveals that the models generally provide better estimates for Antakya Station than for Adana. The higher correlation between the inputs (Tmin, Tmax, Ra) and the output (ET0) in Antakya compared to Adana may be the reason for this.
Figure 5 illustrates the FAO–56 PM and estimated ET0 obtained by using six different methods for Adana Station. It is apparent from the scatterplots that the HS considerably overestimates ET0 while the CHS has less scattered estimates compared to HS. GMDHNN and SS methods have the least scattered estimates among the applied methods and are closely followed by the MARS method. The methods are graphically compared in Figure 6 in estimation of ET0 of Antakya Station. Here the CHS also considerably improves the HS accuracy. GMDHNN also has the least scattered estimates followed by the MARS in this station.
The monthly mean estimates of the GMDHNN, MARS, M5Tree, HS, CHS, and SS methods are compared in Figure 7. In Adana Station, the GMDHNN2, MARS2, M5Tree1, and SS model results are generally very close to each other while the CHS underestimates the ET0 of March and May and overestimates those of July and August. The models’ estimates do not considerably change with respect to splitting scenarios. In Antakya Station, however, the models’ accuracy changes for different train-test scenarios. For example, the GMDHNN2, MARS2, M5Tree1, and SS models are less successful in estimation of ET0 in the case of the 50–50% splitting scenario while the 75–25% train-test scenario provides the best estimates. This also confirms the necessity of considering different splitting scenarios in evaluating the accuracy of the applied methods in the estimation of ET0. It is apparent that the CHS has the worst estimates while the GMDHNN2 maps the mean monthly ET0 better than the other models. All the models underestimate ET0 of Antakya Station in the 50–50% splitting scenario.

4. Conclusions

The abilities of new temperature-based heuristic methods were compared with the Hargreaves-Samani, calibrated Hargreaves-Samani, and Stephens-Stewart methods in modeling monthly reference evapotranspiration. The applied models only used maximum and minimum temperatures and extraterrestrial radiation inputs that were measured or calculated for two stations in Turkey. Data division scenarios of 50–50%, 60–40%, and 75–25% were applied in the study to evaluate the aforementioned methods. The periodicity component (the month number of the year, varying from 1 for January to 12 for December) was also used as an input to the models so as to examine its effect on the models’ performance. Three commonly used criteria, RMSE, MAE, and NSE, were used for comparison of the methods. The results indicated the following conclusions:
In Adana Station, GMDHNN and the SS model performed better than the other models. In Antakya Station, however, the GMDHNN model provided the best accuracy followed by the MARS and M5Tree in modeling monthly reference evapotranspiration.
The calibration procedure considerably increased the HS model accuracy. For example, the average RMSE and MAE statistics of HS decreased from 1.715 mm and 1.557 mm to 0.655 mm and 0.541 mm for Antakya Station.
The periodicity component increased the accuracy of GMDHNN, MARS, and M5Tree models in Antakya Station only. RMSE decrements of the GMDHNN, MARS, and M5Tree models in the test stage were 1.4%, 8%, and 6%, respectively.
The applications revealed the necessity of using different data division scenarios for better evaluation of the compared models.
Comparison of the models in estimating monthly mean reference evapotranspiration revealed that the GMDHNN model generally had better accuracy than the other models, while the CHS model provided the worst estimates. The GMDHNN model reduced the average RMSE of the MARS, M5Tree, HS, CHS, and SS models by 3.7–6.4%, 10.7–3.9%, 76–75%, 10–35%, and 0.8–17%, respectively, when estimating monthly ET0.
The results of this study recommend the use of the GMDHNN model for the prediction of ET0 in regions where only temperature data are available and other meteorological data are not available or are missing for long durations.

Author Contributions

Conceptualization: R.M.A., O.K. and Z.M.Y. Formal analysis: S.H., Z.M.Y., S.S., O.K. and B.L. Validation: R.M.A., S.H., Z.M.Y., S.S., O.K. and B.L. Supervision: S.S. and O.K. Writing—original draft: R.M.A., S.H., Z.M.Y., S.S., O.K. and B.L. Visualization: R.M.A., S.H. and Z.M.Y. Investigation: R.M.A., S.H. and Z.M.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Key R&D Program of China (2016YFC0402706).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jing, W.; Yaseen, Z.M.; Shahid, S.; Saggi, M.K.; Tao, H.; Kisi, O.; Salih, S.Q.; Al-Ansari, N.; Chau, K.W. Implementation of evolutionary computing models for reference evapotranspiration modeling: Short review, assessment and possible future research directions. Eng. Appl. Comput. Fluid Mech. 2019, 13, 811–823. [Google Scholar] [CrossRef] [Green Version]
  2. Allen, R.G.; Pereira, L.S.; Howell, T.A.; Jensen, M.E. Evapotranspiration information reporting: I. Factors governing measurement accuracy. Agric. Water Manag. 2011, 98, 899–920. [Google Scholar] [CrossRef] [Green Version]
  3. Allen, R.G.; Pereira, L.S.; Howell, T.A.; Jensen, M.E. Evapotranspiration information reporting: II. Recommended documentation. Agric. Water Manag. 2011, 98, 921–929. [Google Scholar] [CrossRef]
  4. Ren, X.; Qu, Z.; Martins, D.S.; Paredes, P.; Pereira, L.S. Daily reference evapotranspiration for hyper-arid to moist sub-humid climates in inner mongolia, China: I. Assessing temperature methods and spatial variability. Water Resour. Manag. 2016, 30, 3769–3791. [Google Scholar] [CrossRef]
  5. Xing, W.; Wang, W.; Shao, Q.; Peng, S.; Yu, Z.; Yong, B.; Taylor, J. Changes of reference evapotranspiration in the Haihe River Basin: Present observations and future projection from climatic variables through multi-model ensemble. Glob. Planet. Chang. 2014, 115, 1–15. [Google Scholar] [CrossRef]
  6. Valiantzas, J.D. Temperature-and humidity-based simplified Penman’s ET0 formulae. Comparisons with temperature-based Hargreaves-Samani and other methodologies. Agric. Water Manag. 2018, 208, 326–334. [Google Scholar] [CrossRef]
  7. Allen, R.G.; Pereira, L.S.; Raes, D.; Smith, M. Crop evapotranspiration-Guidelines for computing crop water requirements-FAO Irrigation and drainage paper 56. FAO Rome 1998, 300, D05109. [Google Scholar]
  8. Yaseen, Z.M.; Sulaiman, S.O.; Deo, R.C.; Chau, K.-W. An enhanced extreme learning machine model for river flow forecasting: State-of-the-art, practical applications in water resource engineering area and future research direction. J. Hydrol. 2018, 569, 387–408. [Google Scholar] [CrossRef]
  9. Khosravinia, P.; Nikpour, M.R.; Kisi, O.; Yaseen, Z.M. Application of novel data mining algorithms in prediction of discharge and end depth in trapezoidal sections. Comput. Electron. Agric. 2020, 170, 105283. [Google Scholar] [CrossRef]
  10. Zhu, S.; Ptak, M.; Yaseen, Z.M.; Dai, J.; Sivakumar, B. Forecasting surface water temperature in lakes: A comparison of approaches. J. Hydrol. 2020, 585, 124809. [Google Scholar] [CrossRef]
  11. Yuan, X.; Chen, C.; Lei, X.; Yuan, Y.; Adnan, R.M. Monthly runoff forecasting based on LSTM–ALO model. Stoch. Environ. Res. Risk Assess. 2018, 32, 2199–2212. [Google Scholar] [CrossRef]
  12. Adnan, R.M.; Liang, Z.; Yuan, X.; Kisi, O.; Akhlaq, M.; Li, B. Comparison of LSSVR, M5RT, NF-GP, and NF-SC models for predictions of hourly wind speed and wind power based on cross-validation. Energies 2019, 12, 329. [Google Scholar] [CrossRef] [Green Version]
  13. Kisi, O.; Shiri, J.; Karimi, S.; Adnan, R.M. Three different adaptive neuro fuzzy computing techniques for forecasting long-period daily streamflows. In Big Data in Engineering Applications; Springer: Singapore, 2018; pp. 303–321. [Google Scholar]
  14. Alizamir, M.; Kisi, O.; Muhammad Adnan, R.; Kuriqi, A. Modelling reference evapotranspiration by combining neuro-fuzzy and evolutionary strategies. Acta Geophys. 2020, 68, 1113–1126. [Google Scholar] [CrossRef]
  15. Petković, B.; Petković, D.; Kuzman, B.; Milovančević, M.; Wakil, K.; Ho, L.S.; Jermsittiparsert, K. Neuro-fuzzy estimation of reference crop evapotranspiration by neuro fuzzy logic based on weather conditions. Comput. Electron. Agric. 2020, 173, 105358. [Google Scholar] [CrossRef]
  16. Zhu, B.; Feng, Y.; Gong, D.; Jiang, S.; Zhao, L.; Cui, N. Hybrid particle swarm optimization with extreme learning machine for daily reference evapotranspiration prediction from limited climatic data. Comput. Electron. Agric. 2020, 173, 105430. [Google Scholar] [CrossRef]
  17. Jovic, S.; Nedeljkovic, B.; Golubovic, Z.; Kostic, N. Evolutionary algorithm for reference evapotranspiration analysis. Comput. Electron. Agric. 2018, 150, 1–4. [Google Scholar] [CrossRef]
  18. Adnan, R.M.; Chen, Z.; Yuan, X.; Kisi, O.; El-Shafie, A.; Kuriqi, A.; Ikram, M. Reference Evapotranspiration Modeling Using New Heuristic Methods. Entropy 2020, 22, 547. [Google Scholar] [CrossRef]
  19. Khoshravesh, M.; Sefidkouhi, M.A.G.; Valipour, M. Estimation of reference evapotranspiration using multivariate fractional polynomial, Bayesian regression, and robust regression models in three arid environments. Appl. Water Sci. 2015, 7, 1911–1922. [Google Scholar] [CrossRef] [Green Version]
  20. Mattar, M.A. Using gene expression programming in monthly reference evapotranspiration modeling: A case study in Egypt. Agric. Water Manag. 2018, 198, 28–38. [Google Scholar] [CrossRef]
  21. Mehdizadeh, S. Estimation of daily reference evapotranspiration (ETo) using artificial intelligence methods: Offering a new approach for lagged ETo data-based modeling. J. Hydrol. 2018, 559, 794–812. [Google Scholar] [CrossRef]
  22. Mehdizadeh, S.; Behmanesh, J.; Khalili, K. Comprehensive modeling of monthly mean soil temperature using multivariate adaptive regression splines and support vector machine. Theor. Appl. Climatol. 2017, 133, 911–924. [Google Scholar] [CrossRef]
  23. Sanikhani, H.; Kisi, O.; Maroufpoor, E.; Yaseen, Z.M. Temperature-based modeling of reference evapotranspiration using several artificial intelligence models: Application of different modeling scenarios. Theor. Appl. Climatol. 2018. [Google Scholar] [CrossRef]
  24. Shiri, J. Improving the performance of the mass transfer-based reference evapotranspiration estimation approaches through a coupled wavelet-random forest methodology. J. Hydrol. 2018, 561, 737–750. [Google Scholar] [CrossRef]
  25. Tao, H.; Diop, L.; Bodian, A.; Djaman, K.; Ndiaye, P.M.; Yaseen, Z.M. Reference evapotranspiration prediction using hybridized fuzzy model with firefly algorithm: Regional case study in Burkina Faso. Agric. Water Manag. 2018, 208, 140–151. [Google Scholar] [CrossRef]
  26. Yin, Z.; Wen, X.; Feng, Q.; He, Z.; Zou, S.; Yang, L. Integrating genetic algorithm and support vector machine for modeling daily reference evapotranspiration in a semi-arid mountain area. Hydrol. Res. 2016, 48, 1177–1191. [Google Scholar] [CrossRef]
  27. Adamala, S. Temperature based generalized wavelet-neural network models to estimate evapotranspiration in India. Inf. Process. Agric. 2018, 5, 149–155. [Google Scholar] [CrossRef]
  28. Gavili, S.; Sanikhani, H.; Kisi, O.; Mahmoudi, M.H. Evaluation of several soft computing methods in monthly evapotranspiration modelling. Meteorol. Appl. 2017, 25, 128–138. [Google Scholar] [CrossRef] [Green Version]
  29. Karbasi, M. Forecasting of multi-step ahead reference evapotranspiration using wavelet- gaussian process regression model. Water Resour. Manag. 2017, 32, 1035–1052. [Google Scholar] [CrossRef]
  30. Farlow, S.J. The GMDH algorithm of Ivakhnenko. Am. Stat. 1981, 35, 210–215. [Google Scholar]
  31. Adnan, R.M.; Khosravinia, P.; Karimi, B.; Kisi, O. Prediction of hydraulics performance in drain envelopes using Kmeans based multivariate adaptive regression spline. Appl. Soft Comput. 2020, 100, 107008. [Google Scholar] [CrossRef]
  32. Nasir, V.; Nourian, S.; Avramidis, S.; Cool, J. Prediction of physical and mechanical properties of thermally modified wood based on color change evaluated by means of “group method of data handling”(GMDH) neural network. Holzforschung 2019, 73, 381–392. [Google Scholar] [CrossRef]
  33. Adnan, R.M.; Liang, Z.; Parmar, K.S.; Soni, K.; Kisi, O. Modeling monthly streamflow in mountainous basin by MARS, GMDH-NN and DENFIS using hydroclimatic data. Neural Comput. Appl. 2020. [Google Scholar] [CrossRef]
  34. Nkurlu, B.M.; Shen, C.; Asante-Okyere, S.; Mulashani, A.K.; Chungu, J.; Wang, L. Prediction of permeability using group method of data handling (GMDH) neural network from well log data. Energies 2020, 13, 551. [Google Scholar] [CrossRef] [Green Version]
  35. Najafzadeh, M.; Barani, G.-A.; Kermani, M.R.H. Estimation of pipeline scour due to waves by GMDH. J. Pipeline Syst. Eng. Pract. 2014, 5, 06014002. [Google Scholar] [CrossRef]
  36. Najafzadeh, M.; Barani, G.-A.; Hessami-Kermani, M.-R. Evaluation of GMDH networks for prediction of local scour depth at bridge abutments in coarse sediments with thinly armored beds. Ocean Eng. 2015, 104, 387–396. [Google Scholar] [CrossRef]
  37. Najafzadeh, M.; Zahiri, A. Neuro-fuzzy GMDH-based evolutionary algorithms to predict flow discharge in straight compound channels. J. Hydrol. Eng. 2015, 20, 4015035. [Google Scholar] [CrossRef]
  38. Shahabi, S.; Khanjani, M.-J.; Kermani, M.H. Hybrid wavelet-GMDH model to forecast significant wave height. Water Supply 2015, 16, 453–459. [Google Scholar] [CrossRef]
  39. Tsai, T.-M.; Yen, P.-H. GMDH algorithms applied to turbidity forecasting. Appl. Water Sci. 2016, 7, 1151–1160. [Google Scholar] [CrossRef] [Green Version]
  40. Parsaie, A.; Haghiabi, A.H. Improving modelling of discharge coefficient of triangular labyrinth lateral weirs using SVM, GMDH and MARS techniques. Irrig. Drain. 2017, 66, 636–654. [Google Scholar] [CrossRef]
  41. Alitaleshi, F.; Daghbandan, A. Using a multi-objective optimal design of GMDH type neural networks to evaluate the quality of treated water in a water treatment plant. Desalination Water Treat. 2019, 139, 123–132. [Google Scholar] [CrossRef]
  42. Daghbandan, A.; Khalatbari, S.; Abbasi, M.M. Applying GMDH-type neural network for modeling and prediction of turbidity and free residual aluminium in drinking water. Desalination Water Treat. 2019, 140, 118–131. [Google Scholar] [CrossRef]
  43. Da Silva Carvalho, R.L.; Delgado, A.R.S. Estimativas da evapotranspiração de referência do município de Ariquemes (RO) utilizando os métodos Penman-Monteith-FAO e Hargreaves-Samani. Rev. Bras. De Agric. Irrig. 2016, 10, 1038–1048. [Google Scholar]
  44. Friedman, J.H. Multivariate adaptive regression splines. Ann. Stat. 1991, 19, 1–67. [Google Scholar] [CrossRef]
  45. Quinlan, J.R. Learning with Continuous Classes. In Proceedings of the 5th Australian Joint Conference on Artificial Intelligence, Hobart, Tasmania, 16–18 November 1992; pp. 343–348. [Google Scholar]
  46. Adnan, R.M.; Liang, Z.; Trajkovic, S.; Zounemat-Kermani, M.; Li, B.; Kisi, O. Daily streamflow prediction using optimally pruned extreme learning machine. J. Hydrol. 2019, 577, 123981. [Google Scholar] [CrossRef]
  47. Kisi, O.; Parmar, K.S. Application of least square support vector machine and multivariate adaptive regression spline models in long term prediction of river water pollution. J. Hydrol. 2016. [Google Scholar] [CrossRef]
  48. Adnan, R.M.; Liang, Z.; El-Shafie, A.; Zounemat-Kermani, M.; Kisi, O. Prediction of suspended sediment load using data-driven models. Water 2019, 11, 2060. [Google Scholar] [CrossRef] [Green Version]
  49. Yin, Z.; Feng, Q.; Wen, X.; Deo, R.C.; Yang, L.; Si, J.; He, Z. Design and evaluation of SVR, MARS and M5Tree models for 1, 2 and 3-day lead time forecasting of river flow data in a semiarid mountainous catchment. Stoch. Environ. Res. Risk Assess. 2018, 32, 2457–2476. [Google Scholar] [CrossRef]
  50. Ghaemi, A.; Rezaie-Balf, M.; Adamowski, J.; Kisi, O.; Quilty, J. On the applicability of maximum overlap discrete wavelet transform integrated with MARS and M5 model tree for monthly pan evaporation prediction. Agric. For. Meteorol. 2019, 278, 107647. [Google Scholar] [CrossRef]
  51. Adnan, R.M.; Yuan, X.; Kisi, O.; Anam, R. Improving accuracy of river flow forecasting using LSSVR with gravitational search algorithm. Adv. Meteorol. 2017, 2017, 1–23. [Google Scholar] [CrossRef]
  52. Kisi, Ö.; Yildirim, G. Discussion of “Forecasting of reference evapotranspiration by artificial neural networks” by Slavisa Trajkovic, Branimir Todorovic, and Miomir Stankovic. J. Irrig. Drain. Eng. 2005, 131, 390–391. [Google Scholar] [CrossRef]
  53. Mehdizadeh, S.; Behmanesh, J.; Khalili, K. Using MARS, SVM, GEP and empirical equations for estimation of monthly mean reference evapotranspiration. Comput. Electron. Agric. 2017, 139, 103–114. [Google Scholar] [CrossRef]
  54. Kisi, O. Modeling reference evapotranspiration using three different heuristic regression approaches. Agric. Water Manag. 2016, 169, 162–172. [Google Scholar] [CrossRef]
  55. Keshtegar, B.; Mert, C.; Kisi, O. Comparison of four heuristic regression techniques in solar radiation modeling: Kriging method vs RSM, MARS and M5 model tree. Renew. Sustain. Energy Rev. 2018, 81, 330–341. [Google Scholar] [CrossRef]
  56. Keshtegar, B.; Kisi, O. RM5Tree: Radial basis M5 model tree for accurate structural reliability analysis. Reliab. Eng. Syst. Saf. 2018, 180, 49–61. [Google Scholar] [CrossRef]
  57. Kisi, O.; Kilic, Y. An investigation on generalization ability of artificial neural networks and M5 model tree in modeling reference evapotranspiration. Theor. Appl. Climatol. 2015, 126, 413–425. [Google Scholar] [CrossRef]
  58. Rahimikhoob, A. Comparison between M5 model tree and neural networks for estimating reference evapotranspiration in an arid environment. Water Resour. Manag. 2014, 28, 657–669. [Google Scholar] [CrossRef]
  59. Amanifard, N.; Nariman-Zadeh, N.; Farahani, M.H.; Khalkhali, A. Modelling of multiple short-length-scale stall cells in an axial compressor using evolved GMDH neural networks. Energy Convers. Manag. 2008, 49, 2588–2594. [Google Scholar] [CrossRef]
  60. Ivakhnenko, A.G. Polynomial theory of complex systems. IEEE Trans. Syst. ManCybern. 1971, SMC-1, 364–378. [Google Scholar] [CrossRef] [Green Version]
  61. Ivakhnenko, A.G.; Ivakhnenko, G.A. Problems of further development of the group method of data handling algorithms. Part I. Pattern Recognit. Image Anal. C/C Raspoznavaniye Obraz. I Anal. Izobr. 2000, 10, 187–194. [Google Scholar]
  62. Adnan, R.M.; Liang, Z.; Heddam, S.; Zounemat-Kermani, M.; Kisi, O.; Li, B. Least square support vector machine and multivariate adaptive regression splines for streamflow prediction in mountainous basin using hydro-meteorological data as inputs. J. Hydrol. 2020, 586, 124371. [Google Scholar] [CrossRef]
  63. Roy, S.S.; Roy, R.; Balas, V.E. Estimating heating load in buildings using multivariate adaptive regression splines, extreme learning machine, a hybrid model of MARS and ELM. Renew. Sustain. Energy Rev. 2018, 82, 4256–4268. [Google Scholar] [CrossRef]
  64. García Nieto, P.J.; García-Gonzalo, E.; Álvarez Antón, J.C.; Suárez, V.M.G.; Bayón, R.M.; Martín, F.M. A comparison of several machine learning techniques for the centerline segregation prediction in continuous cast steel slabs and evaluation of its performance. J. Comput. Appl. Math. 2018, 330, 877–895. [Google Scholar] [CrossRef]
  65. Pourghasemi, H.R.; Rahmati, O. Prediction of the landslide susceptibility: Which algorithm, which precision? CATENA 2018, 162, 177–192. [Google Scholar] [CrossRef]
  66. Kuter, S.; Akyurek, Z.; Weber, G.-W. Retrieval of fractional snow covered area from MODIS data by multivariate adaptive regression splines. Remote Sens. Environ. 2018, 205, 236–252. [Google Scholar] [CrossRef]
  67. Zhang, W.; Zhang, R.; Goh, A.T.C. Multivariate adaptive regression splines approach to estimate lateral wall deflection profiles caused by braced excavations in clays. Geotech. Geol. Eng. 2017. [Google Scholar] [CrossRef]
  68. Jekabsons, G. ARESLab: Adaptive Regression Splines Toolbox for Matlab/Octave Ver. 1.13.0; Riga Technical University: Riga, Latvia, 2016. [Google Scholar]
  69. Afsarian, F.; Saber, A.; Pourzangbar, A.; Olabi, A.G.; Khanmohammadi, M.A. Analysis of recycled aggregates effect on energy conservation using M5′ model tree algorithm. Energy 2018, 156, 264–277. [Google Scholar] [CrossRef]
  70. García Nieto, P.J.; García-Gonzalo, E.; Sánchez, A.B.; Miranda, A.A.R. Air quality modeling using the PSO-SVM-based approach, MLP neural network, and M5 model tree in the metropolitan area of oviedo (Northern Spain). Environ. Modeling Assess. 2017, 23, 229–247. [Google Scholar] [CrossRef]
  71. Avval, Y.J.; Derakhshani, A. New formulas for predicting liquefaction-induced lateral spreading: Model tree approach. Bull. Eng. Geol. Environ. 2018, 78, 3649–3661. [Google Scholar] [CrossRef]
  72. Deo, R.C.; Downs, N.; Parisi, A.V.; Adamowski, J.F.; Quilty, J.M. Very short-term reactive forecasting of the solar ultraviolet index using an extreme learning machine integrated with the solar zenith angle. Environ. Res. 2017, 155, 141–166. [Google Scholar] [CrossRef]
  73. Pham, H.T.; Marshall, L.; Johnson, F.; Sharma, A. Deriving daily water levels from satellite altimetry and land surface temperature for sparsely gauged catchments: A case study for the Mekong River. Remote Sens. Environ. 2018, 212, 31–46. [Google Scholar] [CrossRef]
  74. Lin, L.; Wang, Q.; Sadek, A.W. A combined M5P tree and hazard-based duration model for predicting urban freeway traffic accident durations. Accid. Anal. Prev. 2016, 91, 114–126. [Google Scholar] [CrossRef] [PubMed]
  75. Singh, G.; Sachdeva, S.N.; Pal, M. M5 model tree based predictive modeling of road accidents on non-urban sections of highways in India. Accid. Anal. Prev. 2016, 96, 108–117. [Google Scholar] [CrossRef] [PubMed]
  76. Jekabsons, G. M5PrimeLab: M5′Regression Tree and Model Tree Ensemble Toolbox for Matlab/Octave Ver. 1.7.0.; Institute of Applied Computer Systems Riga Technical University: Riga, Latvia, 2016; Available online: http://www.cs.rtu.lv/jekabsons/Files/M5PrimeLab.pdf (accessed on 20 December 2019).
  77. Stephens, J.C.; Stewart, E.H. A comparison of procedures for computing evaporation and evapotranspiration. Publication 1963, 62, 123–133. [Google Scholar]
  78. Hargreaves, G.H.; Samani, Z.A. Estimating potential evapotranspiration. J. Irrig. Drain. Div. 1982, 108, 225–230. [Google Scholar]
  79. Hargreaves, G.H.; Samani, Z.A. Reference crop evapotranspiration from temperature. Appl. Eng. Agric. 1985, 1, 96–99. [Google Scholar] [CrossRef]
  80. Tiyasha, T.; Tung, M.; Yaseen, Z.M. A survey on river water quality modelling using artificial intelligence models: 2000–2020. J. Hydrol. 2020, 585, 124670. [Google Scholar] [CrossRef]
  81. Adnan, R.M.; Yuan, X.; Kisi, O.; Yuan, Y.; Tayyab, M.; Lei, X. Application of soft computing models in streamflow forecasting. In Proceedings of the Institution of Civil Engineers-Water Management; Thomas Telford Ltd.: London, UK, 2019; Volume 172, No. 3; pp. 123–134. [Google Scholar]
  82. Bhagat, S.K.; Tung, T.M.; Yaseen, Z.M. Development of artificial intelligence for modeling wastewater heavy metal removal: State of the art, application assessment and possible future research. J. Clean. Prod. 2019, 250, 119473. [Google Scholar] [CrossRef]
Figure 1. The location of the Adana and Antakya stations.
Figure 2. Flowchart of the proposed heuristic models.
Figure 3. Comparison of different methods for estimating ET0, in the x-axis: (1) MARS1, (2) MARS2, (3) M5Tree1, (4) M5Tree2, (5) GMDHNN1, (6) GMDHNN2, (7) CHS, and (8) SS—Adana.
Figure 4. Comparison of different methods for estimating ET0, in the x-axis: (1) MARS1, (2) MARS2, (3) M5Tree1, (4) M5Tree2, (5) GMDHNN1, (6) GMDHNN2, (7) CHS and (8) SS—Antakya.
Figure 5. The FAO–56 PM and estimated ET0 by: (a) MARS, (b) M5tree, (c) GMDHNN, (d) HS, (e) CHS, and (f) SS methods—Adana.
Figure 6. The FAO–56 PM and estimated ET0 obtained by: (a) MARS, (b) M5tree, (c) GMDHNN, (d) HS, (e) CHS, and (f) SS methods—Antakya.
Figure 7. The FAO–56 PM and estimated monthly mean ET0 by MARS, M5tree, GMDHNN, CHS and SS methods: (a) Adana, (b) Antakya using different splitting scenarios: (i) 50–50%, (ii) 60–40%, (iii) 75–25%.
Table 1. The statistical parameters of climatic data used in the study.
Station | Variable | xmin | xmax | xmean | Sx | Csx | Correlation with ET0
Adana | Tmin (°C) | −3.4 | 23.4 | 9.33 | 7.70 | 0.08 | 0.828
Adana | Tmax (°C) | 17.0 | 44.0 | 31.3 | 7.02 | −0.41 | 0.850
Adana | Ra (MJ/m2) | 15.5 | 41.7 | 29.4 | 9.35 | −0.14 | 0.920
Adana | ET0 (mm) | 0.57 | 6.52 | 3.32 | 1.52 | 0.04 | 1.000
Antakya | Tmin (°C) | −4.6 | 24.8 | 9.18 | 8.16 | 0.22 | 0.860
Antakya | Tmax (°C) | 14.4 | 42.6 | 28.8 | 7.64 | −0.32 | 0.878
Antakya | Ra (MJ/m2) | 16.0 | 41.6 | 29.5 | 9.16 | −0.11 | 0.926
Antakya | ET0 (mm) | 0.28 | 7.20 | 3.39 | 1.86 | 0.06 | 1.000
Tmin, Tmax, Ra, and ET0 are minimum and maximum temperatures, extraterrestrial radiation, and reference evapotranspiration, respectively. xmin, xmax, xmean, Sx, and Csx are minimum, maximum, mean, standard deviation, and skewness, respectively.
Table 2. Root mean square error (RMSE), mean absolute error (MAE), and Nash-Sutcliffe efficiency (NSE) statistics of each model for different data splitting strategies—Adana.
Model | Input | Training RMSE (mm) | Training MAE (mm) | Training NSE | Test RMSE (mm) | Test MAE (mm) | Test NSE

50% training and 50% test
MARS1 | Tmin, Tmax, Ra | 0.454 | 0.363 | 0.908 | 0.467 | 0.359 | 0.907
MARS2 | Tmin, Tmax, Ra, α | 0.461 | 0.356 | 0.905 | 0.466 | 0.357 | 0.907
M5Tree1 | Tmin, Tmax, Ra | 0.408 | 0.301 | 0.926 | 0.518 | 0.406 | 0.885
M5Tree2 | Tmin, Tmax, Ra, α | 0.408 | 0.301 | 0.926 | 0.518 | 0.406 | 0.885
HS | Tmin, Tmax, Ra | 2.021 | 1.777 | −0.82 | 2.006 | 1.782 | −0.72
CHS | Tmin, Tmax, Ra | 0.523 | 0.407 | 0.878 | 0.510 | 0.383 | 0.889
SS | Tmin, Tmax, Ra | 0.501 | 0.390 | 0.888 | 0.463 | 0.355 | 0.909
GMDHNN1 | Tmin, Tmax, Ra | 0.448 | 0.353 | 0.898 | 0.456 | 0.347 | 0.895
GMDHNN2 | Tmin, Tmax, Ra, α | 0.443 | 0.347 | 0.901 | 0.453 | 0.343 | 0.898

60% training and 40% test
MARS1 | Tmin, Tmax, Ra | 0.435 | 0.344 | 0.916 | 0.510 | 0.389 | 0.889
MARS2 | Tmin, Tmax, Ra, α | 0.447 | 0.347 | 0.912 | 0.492 | 0.376 | 0.898
M5Tree1 | Tmin, Tmax, Ra | 0.402 | 0.288 | 0.929 | 0.529 | 0.406 | 0.881
M5Tree2 | Tmin, Tmax, Ra, α | 0.402 | 0.288 | 0.929 | 0.529 | 0.406 | 0.881
HS | Tmin, Tmax, Ra | 2.048 | 1.809 | −0.86 | 1.960 | 1.734 | −0.63
CHS | Tmin, Tmax, Ra | 0.509 | 0.396 | 0.885 | 0.527 | 0.395 | 0.882
SS | Tmin, Tmax, Ra | 0.482 | 0.376 | 0.897 | 0.482 | 0.368 | 0.901
GMDHNN1 | Tmin, Tmax, Ra | 0.428 | 0.331 | 0.909 | 0.480 | 0.368 | 0.902
GMDHNN2 | Tmin, Tmax, Ra, α | 0.424 | 0.327 | 0.910 | 0.478 | 0.366 | 0.903

75% training and 25% test
MARS1 | Tmin, Tmax, Ra | 0.438 | 0.339 | 0.916 | 0.516 | 0.408 | 0.884
MARS2 | Tmin, Tmax, Ra, α | 0.437 | 0.336 | 0.917 | 0.522 | 0.405 | 0.882
M5Tree1 | Tmin, Tmax, Ra | 0.385 | 0.279 | 0.935 | 0.550 | 0.424 | 0.869
M5Tree2 | Tmin, Tmax, Ra, α | 0.385 | 0.279 | 0.935 | 0.550 | 0.424 | 0.869
HS | Tmin, Tmax, Ra | 2.053 | 1.821 | −0.841 | 1.894 | 1.659 | −0.556
CHS | Tmin, Tmax, Ra | 0.504 | 0.388 | 0.889 | 0.552 | 0.414 | 0.868
SS | Tmin, Tmax, Ra | 0.479 | 0.370 | 0.900 | 0.491 | 0.382 | 0.896
GMDHNN1 | Tmin, Tmax, Ra | 0.421 | 0.322 | 0.914 | 0.497 | 0.385 | 0.881
GMDHNN2 | Tmin, Tmax, Ra, α | 0.420 | 0.320 | 0.915 | 0.495 | 0.384 | 0.883

Average
MARS1 | Tmin, Tmax, Ra | 0.442 | 0.349 | 0.913 | 0.498 | 0.385 | 0.893
MARS2 | Tmin, Tmax, Ra, α | 0.448 | 0.346 | 0.911 | 0.493 | 0.379 | 0.896
M5Tree1 | Tmin, Tmax, Ra | 0.398 | 0.289 | 0.930 | 0.532 | 0.412 | 0.878
M5Tree2 | Tmin, Tmax, Ra, α | 0.398 | 0.289 | 0.930 | 0.532 | 0.412 | 0.878
HS | Tmin, Tmax, Ra | 2.041 | 1.802 | −0.840 | 1.953 | 1.725 | −0.635
CHS | Tmin, Tmax, Ra | 0.512 | 0.397 | 0.884 | 0.530 | 0.397 | 0.880
SS | Tmin, Tmax, Ra | 0.487 | 0.379 | 0.895 | 0.479 | 0.368 | 0.902
GMDHNN1 | Tmin, Tmax, Ra | 0.432 | 0.335 | 0.907 | 0.478 | 0.367 | 0.893
GMDHNN2 | Tmin, Tmax, Ra, α | 0.429 | 0.331 | 0.909 | 0.475 | 0.364 | 0.895
Tmin, Tmax, Ra, and α are minimum and maximum temperatures, extraterrestrial radiation, and periodicity (month number), respectively. RMSE, MAE, and NSE are the root mean square error, mean absolute error, and efficiency coefficient, respectively.
Table 3. RMSE, MAE, and NSE statistics of each model for different data splitting strategies—Antakya.
Model | Input | Training RMSE (mm) | Training MAE (mm) | Training NSE | Test RMSE (mm) | Test MAE (mm) | Test NSE

50% training and 50% test
MARS1 | Tmin, Tmax, Ra | 0.383 | 0.290 | 0.959 | 0.635 | 0.521 | 0.872
MARS2 | Tmin, Tmax, Ra, α | 0.369 | 0.286 | 0.962 | 0.566 | 0.460 | 0.963
M5Tree1 | Tmin, Tmax, Ra | 0.341 | 0.257 | 0.968 | 0.639 | 0.527 | 0.870
M5Tree2 | Tmin, Tmax, Ra, α | 0.335 | 0.256 | 0.969 | 0.598 | 0.489 | 0.886
HS | Tmin, Tmax, Ra | 1.513 | 1.316 | 0.367 | 1.781 | 1.613 | 0.065
CHS | Tmin, Tmax, Ra | 0.641 | 0.456 | 0.886 | 0.718 | 0.603 | 0.848
SS | Tmin, Tmax, Ra | 0.438 | 0.339 | 0.947 | 0.678 | 0.572 | 0.864
GMDHNN1 | Tmin, Tmax, Ra | 0.350 | 0.268 | 0.963 | 0.552 | 0.436 | 0.912
GMDHNN2 | Tmin, Tmax, Ra, α | 0.345 | 0.263 | 0.965 | 0.550 | 0.433 | 0.913

60% training and 40% test
MARS1 | Tmin, Tmax, Ra | 0.464 | 0.359 | 0.938 | 0.468 | 0.370 | 0.933
MARS2 | Tmin, Tmax, Ra, α | 0.454 | 0.345 | 0.941 | 0.453 | 0.373 | 0.966
M5Tree1 | Tmin, Tmax, Ra | 0.406 | 0.305 | 0.953 | 0.478 | 0.380 | 0.930
M5Tree2 | Tmin, Tmax, Ra, α | 0.439 | 0.326 | 0.945 | 0.441 | 0.348 | 0.941
HS | Tmin, Tmax, Ra | 1.612 | 1.402 | 0.256 | 1.722 | 1.569 | 0.127
CHS | Tmin, Tmax, Ra | 0.676 | 0.487 | 0.869 | 0.647 | 0.538 | 0.877
SS | Tmin, Tmax, Ra | 0.526 | 0.400 | 0.921 | 0.510 | 0.436 | 0.923
GMDHNN1 | Tmin, Tmax, Ra | 0.441 | 0.339 | 0.939 | 0.426 | 0.345 | 0.943
GMDHNN2 | Tmin, Tmax, Ra, α | 0.430 | 0.335 | 0.941 | 0.424 | 0.342 | 0.945

75% training and 25% test
MARS1 | Tmin, Tmax, Ra | 0.455 | 0.351 | 0.941 | 0.368 | 0.276 | 0.957
MARS2 | Tmin, Tmax, Ra, α | 0.443 | 0.349 | 0.944 | 0.335 | 0.269 | 0.971
M5Tree1 | Tmin, Tmax, Ra | 0.390 | 0.291 | 0.957 | 0.373 | 0.304 | 0.963
M5Tree2 | Tmin, Tmax, Ra, α | 0.406 | 0.299 | 0.953 | 0.367 | 0.292 | 0.958
HS | Tmin, Tmax, Ra | 1.663 | 1.463 | 0.211 | 1.641 | 1.489 | 0.168
CHS | Tmin, Tmax, Ra | 0.677 | 0.497 | 0.869 | 0.601 | 0.481 | 0.888
SS | Tmin, Tmax, Ra | 0.526 | 0.416 | 0.621 | 0.410 | 0.327 | 0.648
GMDHNN1 | Tmin, Tmax, Ra | 0.439 | 0.347 | 0.940 | 0.318 | 0.248 | 0.968
GMDHNN2 | Tmin, Tmax, Ra, α | 0.426 | 0.337 | 0.944 | 0.304 | 0.247 | 0.969

Average
MARS1 | Tmin, Tmax, Ra | 0.434 | 0.333 | 0.946 | 0.490 | 0.389 | 0.921
MARS2 | Tmin, Tmax, Ra, α | 0.422 | 0.327 | 0.949 | 0.451 | 0.367 | 0.967
M5Tree1 | Tmin, Tmax, Ra | 0.379 | 0.284 | 0.959 | 0.497 | 0.404 | 0.921
M5Tree2 | Tmin, Tmax, Ra, α | 0.393 | 0.294 | 0.956 | 0.469 | 0.376 | 0.928
HS | Tmin, Tmax, Ra | 1.596 | 1.394 | 0.278 | 1.715 | 1.557 | 0.120
CHS | Tmin, Tmax, Ra | 0.665 | 0.480 | 0.875 | 0.655 | 0.541 | 0.871
SS | Tmin, Tmax, Ra | 0.497 | 0.385 | 0.830 | 0.533 | 0.445 | 0.812
GMDHNN1 | Tmin, Tmax, Ra | 0.410 | 0.318 | 0.947 | 0.432 | 0.343 | 0.941
GMDHNN2 | Tmin, Tmax, Ra, α | 0.401 | 0.312 | 0.950 | 0.426 | 0.341 | 0.942
Tmin, Tmax, Ra, and α are minimum and maximum temperatures, extraterrestrial radiation, and periodicity (month number), respectively. RMSE, MAE, and NSE are the root mean square error, mean absolute error, and efficiency coefficient, respectively.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
