Article

Utility of Certain AI Models in Climate-Induced Disasters

by Ritusnata Mishra *, Sanjeev Kumar, Himangshu Sarkar and Chandra Shekhar Prasad Ojha
Department of Civil Engineering, Indian Institute of Technology Roorkee, Roorkee 247667, India
* Author to whom correspondence should be addressed.
World 2024, 5(4), 865-900; https://doi.org/10.3390/world5040045
Submission received: 11 September 2024 / Revised: 22 September 2024 / Accepted: 27 September 2024 / Published: 8 October 2024

Abstract:
To address the current challenge of climate change at the local and global levels, this article discusses a few important water resources engineering topics, such as estimating the energy dissipation of flowing waters over hilly areas through the provision of regulated stepped channels, predicting the removal of silt deposition in irrigation canals, and predicting groundwater levels. Artificial intelligence (AI) in water resources engineering is now one of the most active research topics. Accordingly, multiple AI tools, such as Random Forest (RF), Random Tree (RT), M5P (M5 model trees), M5Rules, Feed-Forward Neural Networks (FFNNs), Gradient Boosting Machine (GBM), Adaptive Boosting (AdaBoost), and Support Vector Machine kernel-based models (SVM-Pearson VII Universal Kernel, Radial Basis Function), are tested in the present study using various combinations of datasets. In various circumstances, including predicting the energy dissipation of stepped channels and silt deposition in rivers, the AI techniques outperformed the traditional approaches reported in the literature. Among all the models, the GBM model performed best, both for the energy dissipation of stepped channels, with a coefficient of determination (R2) of 0.998, root mean square error (RMSE) of 0.00182, and mean absolute error (MAE) of 0.0016, and for the sediment trapping efficiency of the vortex tube ejector, with an R2 of 0.997, RMSE of 0.769, and MAE of 0.531 during testing. On the other hand, the AI techniques could not adequately capture the diversity of the groundwater level datasets built from field data at various stations. According to the current study, the AI tools work well in some fields of water resources engineering but have difficulty capturing the diversity of datasets in other domains.

1. Introduction

At local and global scales, a rise in temperatures has altered the patterns of the ocean systems, atmosphere, and land, as well as their interactions with one another [1,2,3,4]. The most significant variation occurs when the duration of rainfall changes, giving some areas more intense rainfall while others receive less. Consequently, the globe is facing severe drought in some regions and disastrous flooding in others. Several authors have described different modeling approaches and mitigation measures for extreme hydrological events [5,6].
The Indian climate system is also undergoing these kinds of changes [7,8,9]. Due to the alteration of precipitation patterns, certain areas of the Himalayan region, particularly Uttarakhand, Himachal Pradesh, and the northeastern regions of India, are experiencing extremely severe floods, while other parts of the country are experiencing a significant drop in groundwater levels. As a result, India is now dealing with two major issues: (1) landslides and silt deposition in rivers caused by high-velocity flow in hilly areas and (2) a decrease in groundwater levels due to uneven rainfall patterns.
These issues lead to further challenges in every aspect of life. During landslides, the main challenge is the breakdown of highway connectivity caused by an inadequate storm waterway system. Secondly, the erosion of river banks and the accumulation of eroded materials in the river bed allow floods to disrupt the morphology of the river, which can in turn hamper the functionality of the water management system. Thirdly, the continual lowering of the groundwater level may cause drought. As a result, technical solutions are needed to address these concerns.
To address the problem in hilly regions, one solution is to provide a stepped storm channel downstream of the rainwater drainage system beneath the underpass. Stepped chutes are useful in many civil engineering applications, including emergency spillways over embankment dams, mountain drainage systems, etc. Stepped channels are commonly used as hydraulic structures because they can dissipate more kinetic energy along their length.
Depending on the step form and flow velocity, stepped spillways can have different flow patterns. Nappe flow is the flow regime characterized by low flow rates [10,11,12]. Regimes with moderate to high flow rates are referred to as transition flow [13,14,15]. Similarly, the flow regime is referred to as skimming flow at high flow rates [16,17]. Many studies in the literature have examined the nappe flow condition [18,19,20,21,22,23,24]. The concept that energy dissipation in the nappe flow regime is larger than in the skimming flow regime is supported by a few authors. However, the slope of the channel and inflow conditions are the most governing factors to be considered for designing the stepped channel.
Several machine learning tools have been successfully employed in the field of energy dissipation of stepped channels, including genetic algorithm support vector machine regression (GA-SVR), the group method of data handling (GMDH), multilayer perceptron neural networks, the M5 algorithm, multivariate adaptive regression splines (MARS), and Support Vector Machines (SVM) [25,26,27,28,29,30]. These studies improved the outcome by combining several input parameters and are based either on experimental data or on literature-based datasets. In both situations, the datasets were acquired from various experimental setups, equipment, and energy dissipation methods. Consequently, the input parameter ranges of each dataset [25,26,27,28,29,30] must differ, and a single model built from any given dataset cannot meet all of the modeling requirements. To obtain the desired model, a unique modeling strategy that combines a wide variety of information from various sources is therefore needed.
One more major issue that arises due to climate change and heavy rainfall is sedimentation in rivers, where increasing silt accumulation is a persistent problem. Flooding has extreme impacts on irrigation channels and power channels, along with the machinery associated with these systems, resulting in significant operational and maintenance challenges. During flood events, excessive water flow can cause river banks to erode, altering the river's morphology and leading to the deposition of large amounts of sediment downstream. River morphological changes, including the formation of new channels or the deepening of existing ones, can affect the hydrodynamic conditions and require the infrastructure to be redesigned or adjusted to fit the new flow patterns. During flooding, river systems can carry a large amount of sediment. This sediment load can choke irrigation channels, reducing their capacity to transport water effectively to agricultural fields and compromising crop yields. Similarly, power channels feeding hydroelectric plants can be hampered, reducing water flow to turbines and lowering power generation efficiency. The increased sediment load can also damage machinery, causing wear and tear on pumps, turbines, and other equipment, leading to higher maintenance costs and potential downtime. These impacts highlight the critical need for effective sediment management strategies and the use of advanced predictive models to mitigate the adverse effects of flooding on irrigation and power systems.
One of the most promising devices for sediment removal is the vortex tube silt ejector. This device offers several advantages over traditional methods/devices such as settling basins, vortex settling basins, and tunnel-type ejectors. It is small, easy to install, and highly efficient at ejecting both bed load and suspended load sediments. Moreover, it achieves this with minimal water loss, with only approximately 5% to 10% of the flow escaping as flushing discharge, which is particularly beneficial in water-scarce regions [31]. Parshall [32] was the first to introduce the vortex tube silt ejector structure. Subsequent studies and examinations by researchers [32,33,34,35,36,37,38] have contributed to the understanding of this device.
The potential of artificial intelligence (AI)-based techniques to predict the trapping efficiency of vortex tube silt ejectors has been increasingly recognized. AI approaches, including Random Tree, Random Forest, support vector machine, and multivariate adaptive regression spline, have been applied to assess the sediment trapping efficiency under varying hydraulic conditions. The prediction performance of these AI methods was evaluated using metrics such as the mean absolute error (MAE), coefficient of correlation (CC), and root mean square error (RMSE). Although AI-based methods have been less commonly applied in this specific field, they have seen widespread use in related areas of hydraulics, water resources projects, and other engineering disciplines over the past few decades [39,40,41,42,43].
Recent advancements in predicting the trapping efficiency of vortex tube sediment ejectors and tunnel-type ejectors have involved a variety of sophisticated machine-learning techniques. Tiwari et al. [39] applied the Adaptive Neuro-Fuzzy Inference System (ANFIS) and Artificial Neural Networks (ANNs), showing their ability to model complex relationships in this context. Similarly, Kumar et al. [44] extended this work by employing Gaussian Processes, Support Vector Machines (SVM), and Stacked Gaussian Processes, illustrating their effectiveness across different data scenarios. Singh et al. [41] integrated ANFIS with Random Forests (RFs), aiming to enhance the predictive accuracy of these systems. Kumar et al. [43] took a comprehensive approach by using SVM, RF, Regression Trees (RTs), and Multivariate Adaptive Regression Splines (MARS) on a dataset of 144 samples, demonstrating the robustness of these methods in diverse conditions. However, the datasets from diverse experimental conditions, measurement locations, and efficiency calculation techniques varied significantly. As a result, each dataset's input parameter ranges need to be distinct, and a single model from a given dataset cannot possibly meet all of the modeling requirements. Therefore, a distinct modeling approach that combines a wide range of information from several sources is required to generate the desired model.
Groundwater is essential, meeting around one-third of global water demands, and is vital for domestic use (36%), agriculture (42%), and industry (27%). Its role is crucial in socio-economic development, yet recent studies highlight declining groundwater levels due to human activities, climate change, reduced precipitation, and higher temperatures [45,46]. India faces severe water stress, impacting economic growth and livelihoods, with significant portions of the population lacking access to safe drinking water [47,48,49]. The primary causes of this decline are attributed to human activities and the adverse impacts of climate change, such as reduced precipitation and higher temperatures during dry periods. Natural factors like evapotranspiration and hydraulic properties also contribute to seasonal fluctuations in water tables.
Artificial Intelligence (AI) has the potential to revolutionize groundwater evaluation and modeling. It can address the limitations of conventional techniques, particularly in terms of data accuracy and the complexity of managing numerous parameters [50,51]. Traditional methods often rely on limited spatially sparse data points, and they struggle with the non-linearity and heterogeneity of aquifer systems, making it challenging to predict groundwater levels accurately [52]. AI models, such as machine learning and neural networks, can integrate vast and diverse datasets, including climate, geological, and hydrological information while identifying complex patterns and relationships that conventional models may miss [53]. Moreover, AI allows for real-time data assimilation, providing more dynamic and responsive groundwater management solutions. While AI improves accuracy and predictive capabilities, its effectiveness depends on the quality of input data, and its models can be viewed as black boxes, limiting interpretability and trust in some scenarios [54,55]. However, the potential of AI to improve groundwater management is a reason for optimism about the future.
To anticipate the Groundwater Level (GWL) more precisely and reduce the need for a large number of parameters in groundwater evaluation and modeling, this work proposes a comprehensive method. The development of AI models for groundwater level prediction using spatial field gravity data is a crucial step toward this goal. Understanding the relationship between the earth’s gravity changes and groundwater level variations is vital, as fluctuations in groundwater mass or volume may be directly linked to temporal variations in gravity [56]. Therefore, nine AI-based models have been evaluated using the concept described above, and a detailed comparative analysis of their predictive performance, computational efficiency, and interpretability has been conducted, highlighting the importance of these innovative solutions.
On the other hand, advancements in computer modeling, computing power, and information processing have led to the development of practical tools for understanding complex natural systems. In hydrology, researchers have focused on the applicability of machine learning (ML) methods to improve groundwater studies [50,51,52,53,54,55,56,57,58,59]. ML, a subset of Artificial Intelligence, develops computer algorithms and statistical models to learn patterns or trends from data, which can be used for prediction. Many researchers have developed various ML models for groundwater prediction, recognizing that the behavior of ML models changes as data structures change [60,61,62]. Supervised learning techniques have shown promise for prediction purposes, but clustering and ensemble learning models are also useful, depending on the availability of high-quality datasets. However, not every ML technique is universally suitable for all groundwater problems due to the diversity of available data and scenarios where large datasets are required. In those cases, deep learning (DL) methods are much more suitable [63,64]. Some methods described in the literature may not be effective with limited, sparse, or noisy datasets [65].
The present study considers three challenging issues that are relevant to various nations across the globe: energy dissipation of high-velocity flows, sediment regulation, and the assessment of GWL. The analysis is based on data collected in and around IIT Roorkee. A few machine learning tools are used to address the specific problems described above. It is not enough to simply apply several machine learning methods and determine which model has the strongest relationship with the desired variables when it comes to water resources engineering and modeling. Rather, a model that not only gives well-predicted datasets but also preserves the flow behavior must be obtained. Therefore, it is also necessary to combine the current experimental model and the existing experimental datasets into a single platform to obtain a reliable dataset that carries the physics of the flow behavior. Thus, different AI models, such as Random Forest (RF), Random Tree (RT), M5P (M5 model trees), M5Rules, Feed-Forward Neural Networks (FFNN), Gradient Boosting Machine (GBM), Adaptive Boosting (AdaBoost), and Support Vector Machine kernel-based models such as SVM-Pearson VII Universal Kernel (PUK) and Radial Basis Function (RBF), are applied to each distinguished field of water resources engineering. Many previous studies have predicted GWL using single-well datasets, and in all such single-well cases, the AI tools have been implemented successfully. In the present study, however, the temporal gravity data of multiple wells are considered for the prediction of GWL.
Details of these are included in the next section, which is followed by the application and potential assessment of AI tools in mitigating climate-induced disasters. Information on the research area and methods is contained in Section 2, the results and discussions are outlined in Section 3 and Section 4, and the study's conclusions are presented in Section 5.

2. Materials and Methods

This section describes the datasets used in the three parts of the study: the first part covers the datasets of stepped channels, the second part covers the datasets of the sediment trapping problem in the vortex tube silt ejector, and the third part covers the datasets of groundwater levels from multiple field stations.

2.1. Data Collection (Part 1)

A total of 173 experimental data points were collected for nappe flow, as shown in Table 1. Out of these 173 datasets, 29 were taken from [18], 28 from [19], 6 from [23], and 110 from the present study for energy dissipation. Two different slopes, 1:2 and 1:2.34, were used to conduct the experimental study. A free-flow nappe flow regime was taken for detailed analysis with 0.201 ≤ yc/h ≤ 0.6. The ranges of the non-dimensional parameters are given in Table 2. For the prediction of energy dissipation, nine AI models, namely RF, RT, M5P, M5Rules, FFNN, GBM, AdaBoost, SVM-Pearson VII Universal Kernel, and Radial Basis Function, are used.
The input parameters that affect the prediction of energy dissipation of the stepped channel are ρw (density of water), g (gravitational acceleration), h (step height), l (step length), yc (critical depth of flow), tan α (slope), N (number of steps), ΔH, Hmax (maximum height of channel), w (width of the channel), ΔHcha, and Q (total discharge). Different combinations of input parameters were tried, out of which the following parameters have a better correlation with the target variable ΔH/Hmax:
$$\frac{\Delta H}{H_{max}} = f\left(\frac{y_c}{h}, \frac{\Delta H_{cha}}{y_c}, \frac{w}{y_c}, \tan\alpha, DN\right)$$
where DN = drop number $= \frac{q^2}{g H_{cha}^3}$, Hmax = maximum upstream head above the chute toe, Hres = residual energy, ΔH = Hmax − Hres (m), and ΔHcha = Hcha − Nh. A graphical representation of the correlation between the input and target variables for predicting energy dissipation of the stepped channel is shown in Figure 1.

2.2. Data Collection (Part 2)

A total of 120 datasets of trapping efficiency were collected from the laboratory, and an additional 28 datasets were collected from previous studies. The rectangular channel was 30.0 m long, 1.5 m wide, and 0.76 m deep. The vortex tube was installed at a distance of about 14.0 m from the channel's entrance. The trapping efficiency of the vortex tube silt ejector is defined as the percentage of the silt material transported in the channel that is removed by the device.
The 28 additional datasets were reported by different authors [36,66]. A vortex tube was constructed in a branch canal of the Warujeng-Kertosono irrigation scheme in Java as part of a pilot study discussed by [36]. Additionally, a study on the Chatra Canal in Nepal included measurements taken at the Chatra Canal vortex tube, as reported by [66]. The ranges of the data can be seen in Table 3 and Table 4. A graphical representation of the correlation between the input and target variables for predicting the trapping efficiency of the vortex tube silt ejector is shown in Figure 2. The mean, standard deviation, and kurtosis are defined as
$$\text{Mean}\ (\bar{X}) = \frac{1}{n}\sum_{i=1}^{n} x_i$$
$$\text{Standard deviation}\ (\sigma) = \sqrt{\frac{\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2}{n-1}}$$
$$\text{Kurtosis} = \frac{1}{n}\sum_{i=1}^{n}\left(\frac{x_i - \bar{x}}{\sigma}\right)^4$$
where $x_i$ is each observation, $\bar{x}$ is the mean of the observations, $\sigma$ is the standard deviation, and $n$ is the number of observations.
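As a quick check, the three statistics above can be computed directly; a minimal NumPy sketch (the sample values are illustrative, not taken from the study's tables):

```python
import numpy as np

def describe(x):
    """Mean, sample standard deviation, and kurtosis as defined in the text."""
    x = np.asarray(x, dtype=float)
    n = x.size
    mean = x.sum() / n                                 # (1/n) * sum(x_i)
    std = np.sqrt(((x - mean) ** 2).sum() / (n - 1))   # n-1 in the denominator
    kurt = (((x - mean) / std) ** 4).sum() / n         # fourth standardized moment
    return mean, std, kurt

m, s, k = describe([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
```

Note that the kurtosis here is the raw fourth standardized moment (a normal distribution gives about 3), not the excess kurtosis reported by some software.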

2.3. Data Collection (Part 3)

This study has been carried out in a small part of the Roorkee block in Uttarakhand, India, which lies between longitudes 77°50′30.012″ E–77°56′30.012″ E and latitudes 29°48′15.012″ N–29°55′30″ N, with a total area of 99.401 km2 (Figure 3). This area is flat, with an altitude varying between 254 m and 279 m with respect to mean sea level (MSL), and is covered by agricultural land (57%) and residential areas (39%). The depth to the water level at the mentioned wells lies between 5.30 and 9.00 m below ground level (BGL). The seasonal fluctuation ranges between 0.47 m and 3.65 m. The sub-surface formations consist of a cyclic sequence of grey micaceous sand, silt, clay/brownish grey clay, sand, and gravels with occasional pebbles and boulders of the terrace, fans, and channel alluvium of quaternary age [67].
A relative gravimeter with a precision of 0.001 mGal has been used to take gravity observations near each observation well. As the relative gravimeter gives relative gravity values, these have been converted into absolute gravity values with respect to the reference absolute gravity station available at the Earth Science department on the IITR campus. The depth of the water table is measured simultaneously using a water level indicator with a precision of 5 mm. In this study, a dual-frequency GPS receiver has been used to obtain the geographical location (latitude, longitude, and orthometric height), which is required during gravity observations. The geographical information of all gravity stations is presented in Table 5. A total of 14 bore wells were identified to measure the temporal gravity and groundwater level.
A total of 378 samples were collected for this study. The input parameters consisted of GPS time and observed gravity, while the output parameter was the groundwater level (GWL). The observed gravity data obtained from the observation well were transformed into absolute gravity values measured in Gal, and the water levels were converted to an orthometric height, expressed in meters. Additionally, the time of observation was converted to seconds. An Exploratory Data Analysis (EDA) process was conducted to ensure high data quality, which included data scaling, cleaning, and normalization. The normalization of input parameters was achieved using the min–max normalization method to ensure that all variables contributed equally to the model training process.
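The min–max normalization mentioned above maps each input variable to [0, 1]; a small sketch (the sample time and gravity values below are hypothetical, chosen only to illustrate the scaling):

```python
import numpy as np

def min_max_normalize(X):
    """Scale each column of X to [0, 1] via (x - min) / (max - min)."""
    X = np.asarray(X, dtype=float)
    x_min = X.min(axis=0)
    x_max = X.max(axis=0)
    return (X - x_min) / (x_max - x_min)

# Hypothetical columns: observation time (s) and absolute gravity (Gal)
X = np.array([[0.0, 978.1],
              [43200.0, 978.3],
              [86400.0, 978.2]])
X_norm = min_max_normalize(X)
```

After scaling, both columns span [0, 1], so neither the large time values nor the gravity values dominate model training.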

2.4. AI-Based Model

2.4.1. Random Forests (RF)

An RF approach refers to a well-organized collection of tree predictors generated from input vectors using random vector samples. At each node, various variables are arranged to form a tree with arbitrarily selected input parameters. A training dataset is constructed from randomly selected parameters for establishing specific trees, and the Gini index is used to quantify the impurity of parameters in comparison to the output [68]. RF regression involves two predefined user variables: the number of input parameters (m) to be used at a separate node to produce a tree and the number of trees grown (k) [69]. The method is based on trial and error, with the variables chosen based on the best split. The RF method constructs Random Forests by aggregating a group of Random Trees [70]. It creates separate classification trees from random data samples (i.e., bagging) and then votes to choose the most popular class [71]. The base classifier in RF is the decision tree, and RF is a hybrid of the bagging technique and the random sub-space method. This approach can handle missing values as well as continuous, categorical, and binary data, making it ideal for high-dimensional data modeling. RF has several advantages, including excellent accuracy in predicting results, interpretability, and non-parametric behavior for a variety of datasets. Because of the use of ensemble techniques and random sampling, accurate forecasts and improved generalization are achieved. Compared to a single-tree classifier, RF typically provides a significant performance boost. Its strategy for detecting inconsistencies in the data also differs from that of conventional methods [72,73].
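A minimal sketch of RF regression with the two user-set variables noted above (the number of trees k and the number of features m tried per split), using scikit-learn and synthetic data in place of the study's dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 5))          # stand-ins for five non-dimensional inputs
y = X @ np.array([0.5, 1.0, -0.3, 0.2, 0.8]) + 0.05 * rng.normal(size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# k = n_estimators (trees grown), m = max_features (inputs tried at each split)
rf = RandomForestRegressor(n_estimators=100, max_features="sqrt", random_state=0)
rf.fit(X_tr, y_tr)
r2 = rf.score(X_te, y_te)               # coefficient of determination on the test split
```

Each tree is fit on a bootstrap sample (bagging), and the forest's prediction is the average over all trees.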

2.4.2. Random Tree (RT)

An RT is a tree structure devised by a stochastic process. An RT is a tree chosen at random from several possible trees with k random characteristics at each node. Here, 'random' means that each tree in the set of trees has an equal opportunity to be selected in the sample; in other words, the distribution of trees is uniform. Random Trees can be grown efficiently, and precise models can be obtained by combining large sets of Random Trees. In the last few decades, machine learning techniques have led to the extensive development of Random Tree designs. Additionally, this method includes the option of estimating probabilities for each class (or the target mean in the case of regression) using a hold-out set (backfitting) [70].

2.4.3. M5P Model

Quinlan [74] introduced the M5 tree, a decision tree learner designed for regression problems. The M5P tree is a binary regression tree model that uses linear regression functions at its leaves to generate continuous numeric predictions [74]. When building the tree model, M5P creates a decision tree using a divergence measure as the splitting criterion. Rather than dealing with discrete classes, the M5 tree technique handles continuous class problems and can tackle tasks with extremely high dimensionality. It represents the dataset's nonlinear relations piecewise, through the linear models built to approximate them. The M5P tree algorithm measures the error value at the terminal node by analyzing the expected result, utilizing the standard deviation of that node's class values.

2.4.4. M5Rules Model

As with many other machine learning algorithms, the fundamental principle of the M5Rules model is tree learning (TL). M5Rules is a simple approach for extracting rules from model trees and has been applied to several classification and prediction problems [75,76]. M5Rules trains a pruned tree by running a tree learner over the training data. The best leaf is turned into a rule, and the tree is then discarded. Note that this action is the only difference between M5Rules and the standard procedure for creating a single rule. Using a predefined criterion, the M5 tree is constructed and the optimal rule is found. The iteration ends after all possible scenarios have been covered. The primary parameters of all these models are given in Table 6.

2.4.5. AdaBoost Model

By creating a strong classifier as a linear combination of weak classifiers with adequate weights, the AdaBoost algorithm [77,78] is suitable for accelerating machine learning algorithms and improving their performance. The key idea is to sequentially train weak learners, with each learner focusing more on the samples that previous learners misclassified or predicted poorly. The process results in a final model that aggregates the outputs of all weak learners to make accurate predictions.
The goal of AdaBoost is to combine weak learners (e.g., decision trees with shallow depth) to form a strong learner. For regression tasks, AdaBoost minimizes the prediction error iteratively, adjusting the weights of the weak learners at each iteration. After T iterations, the final prediction H(x) is a weighted sum of the predictions from all the weak learners, as follows:
$$H(x) = \sum_{t=1}^{T} \alpha_t h_t(x)$$
Thus, the overall prediction is formed by aggregating the predictions of each weak learner weighted by their coefficients αt. The AdaBoost model effectively focuses on difficult-to-predict samples by iteratively updating the sample weights and combining the outputs of multiple weak learners into a strong model. The method can be highly effective in regression tasks, where accuracy improves by handling hard-to-predict data points systematically.
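The weighted combination above can be sketched with scikit-learn's AdaBoostRegressor, whose default weak learner h_t is a depth-3 decision tree (synthetic data; hyperparameters are illustrative):

```python
import numpy as np
from sklearn.ensemble import AdaBoostRegressor

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(300, 2))
y = np.sin(2 * np.pi * X[:, 0]) + X[:, 1]   # nonlinear synthetic target

# Weak learners are trained sequentially; samples with large errors are
# re-weighted so later learners concentrate on them, and the final
# prediction is the alpha_t-weighted combination of all learners.
ada = AdaBoostRegressor(n_estimators=50, learning_rate=0.5, random_state=1)
ada.fit(X, y)
train_r2 = ada.score(X, y)
```

Boosting can be stopped early if validation error plateaus, since later learners mainly refine hard samples.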

2.4.6. Feed-Forward Neural Network (FFNN) Model

Feed-Forward Neural Networks (FFNNs) are one of the simplest types of artificial neural networks. To create a network that can solve real-world problems, the FFNN model needs a large number of neurons because each neuron can only make a basic decision [79,80]. The fundamental principle behind an FFNN is to approximate a function that maps an input x to an output y, such that the model learns the underlying relationship between the input data and the output targets through a set of weighted connections between layers of neurons. The learning process is guided by the principle of minimizing a loss function using optimization techniques, primarily gradient descent. The loss function quantifies how far the predictions are from the actual target values. For regression problems, the most common loss function is the Mean Squared Error (MSE):
$$L = \frac{1}{N}\sum_{i=1}^{N}\left(y_i - \hat{y}_i\right)^2$$
where $y_i$ is the true value and $\hat{y}_i$ is the predicted value for the ith example. After computing the loss, the network adjusts its weights and biases using gradient descent. The gradients of the loss function with respect to the model parameters are calculated through backpropagation, which relies on the chain rule of calculus to efficiently compute the partial derivatives.
The gradient of the output layer for the output neuron is as follows:
$$\frac{\partial L}{\partial w_j} = \frac{\partial L}{\partial \hat{y}} \cdot \frac{\partial \hat{y}}{\partial w_j}$$
The gradient of the hidden layer for each hidden neuron j:
$$\frac{\partial L}{\partial w_{ij}} = \frac{\partial L}{\partial h_j} \cdot \frac{\partial h_j}{\partial w_{ij}}$$
where w i j is the weight connecting input xi to neuron j in the hidden layer and w j is the weight connecting hidden neuron j to the output neuron. The output layer produces the final predictions based on the activations from the hidden layer(s). If there is only one output neuron (for regression), the prediction is
$$\hat{y} = g\left(\sum_{j=1}^{m} w_j h_j + b_0\right)$$
where w j is the weight connecting hidden neuron j to the output neuron, b 0 is the bias term for the output neuron, and g is an activation function (e.g., linear for regression, softmax for classification).
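The forward pass, MSE loss, and backpropagation equations above can be implemented directly; a minimal NumPy sketch of a one-hidden-layer FFNN trained by full-batch gradient descent on a synthetic target (the architecture and learning rate are illustrative, not those used in the study):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(2 * X[:, 0])                 # simple nonlinear regression target

# One hidden layer (tanh) and a linear output neuron for regression
W1 = rng.normal(scale=0.5, size=(1, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.1

for _ in range(5000):
    H = np.tanh(X @ W1 + b1)                     # hidden activations h_j
    y_hat = (H @ W2 + b2).ravel()                # linear output y_hat
    err = y_hat - y
    loss = (err ** 2).mean()                     # MSE loss L
    # Backpropagation via the chain rule
    d_out = (2.0 / len(X)) * err[:, None]        # dL/dy_hat
    gW2 = H.T @ d_out; gb2 = d_out.sum(0)        # dL/dw_j, dL/db_0
    d_hid = (d_out @ W2.T) * (1 - H ** 2)        # dL/dh_j * tanh'(a_j)
    gW1 = X.T @ d_hid; gb1 = d_hid.sum(0)        # dL/dw_ij
    W2 -= lr * gW2; b2 -= lr * gb2               # gradient descent update
    W1 -= lr * gW1; b1 -= lr * gb1
```

With enough iterations, the loss falls well below the variance of the target, confirming that the gradients match the derivation.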

2.4.7. Gradient Boosting Machine (GBM)

The GBM has been successfully implemented in different fields of engineering [81]. The GBM is based on the idea of boosting, where weak learners are added sequentially. Each new model corrects the errors made by the previous model, improving overall performance. The goal is to build models that complement each other rather than building independent models (as in bagging). The key principle driving GBM is the use of gradient descent. In each iteration, the algorithm fits a new weak learner (decision tree) to the negative gradient of the loss function. The negative gradient indicates the direction in which the model should adjust to minimize errors.
$$r_i^{(m)} = -\left[\frac{\partial L\left(y_i,\, \hat{y}_i^{(m-1)}\right)}{\partial \hat{y}_i^{(m-1)}}\right]$$
This gradient guides the new model in reducing the error at each step. The final prediction for a data point xi after M iterations is given by
$$\hat{y}_i^{(M)} = \hat{y}_i^{(0)} + \eta \sum_{m=1}^{M} h_m(x_i)$$
where h m ( x i ) is the prediction of the mth weak learner for data point xi and η is the learning rate controlling the contribution of each weak learner.
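The update rule above can be sketched directly: each round fits a small tree to the current residuals, which are the negative gradient of the squared-error loss, and adds it with learning rate η. A NumPy/scikit-learn sketch on synthetic data (depth, η, and M are illustrative):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(300, 2))
y = np.sin(2 * np.pi * X[:, 0]) + 0.5 * X[:, 1]

eta, M = 0.1, 100
f = np.full_like(y, y.mean())          # y_hat^(0): initial constant prediction
learners = []
for m in range(M):
    r = y - f                          # residuals = negative gradient of MSE loss
    h = DecisionTreeRegressor(max_depth=2, random_state=m).fit(X, r)
    learners.append(h)
    f = f + eta * h.predict(X)         # y_hat^(m) = y_hat^(m-1) + eta * h_m(x)

mse = ((y - f) ** 2).mean()
```

In practice one would use GradientBoostingRegressor (or similar), which adds subsampling, early stopping, and other refinements on top of this basic loop.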

2.4.8. Support Vector Machines (SVM)–Radial Basis Function (SVM_RBF)

Support Vector Machines (SVM) is a supervised learning algorithm used for classification and regression tasks. The SVM–RBF model incorporates a Radial Basis Function (RBF) kernel, which helps the model handle non-linear data effectively. The SVM–RBF algorithm works by mapping input data into a higher-dimensional space where it can separate classes or approximate relationships using a hyperplane. In many cases, the data are not linearly separable in their original feature space. To deal with this, SVM uses kernel functions to project the data into a higher-dimensional space. The Radial Basis Function (RBF) kernel is one of the most popular kernel functions due to its ability to handle non-linear relationships. The RBF kernel function is mathematically expressed as [82]
$$K(x_i, x_j) = \exp\left(-\gamma \left\| x_i - x_j \right\|^2\right)$$
where γ is a hyperparameter that controls the spread of the kernel and xi and xj are the feature vectors of the i-th and j-th samples, respectively. The original optimization problem of SVM is reformulated into its dual form by introducing Lagrange multipliers αi, as follows:
$$\max_{\alpha} \; \sum_{i=1}^{n} \alpha_i - \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} \alpha_i \alpha_j y_i y_j K(x_i, x_j)$$
subject to $0 \le \alpha_i \le C$ and
$$\sum_{i=1}^{n} \alpha_i y_i = 0$$
This form uses the kernel function K(xi,xj) to project the input data into a higher-dimensional space, allowing the SVM to find the optimal hyperplane.
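A brief sketch of SVM regression with the RBF kernel, using scikit-learn's SVR (the dataset and the gamma, C, and epsilon values are illustrative, not those used in the study):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, (150, 1))
y = np.exp(-X[:, 0] ** 2)   # a non-linear target

# gamma controls the kernel spread; C bounds the dual coefficients (0 <= alpha_i <= C)
model = SVR(kernel="rbf", gamma=0.5, C=10.0, epsilon=0.01).fit(X, y)
y_hat = model.predict(X)
rmse = np.sqrt(np.mean((y - y_hat) ** 2))
```

The bell-shaped target is not linearly separable in one dimension, but the RBF mapping lets the SVR approximate it closely.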

2.4.9. SVM-Pearson VII Universal Kernel (PUK)

The PUK kernel is a type of Radial Basis Function kernel that is more flexible and can generalize various other kernels by tuning two parameters. It is based on the Pearson VII distribution and is often used in SVMs when the relationship between input and output is nonlinear and complex. The PUK kernel is defined as
$$K(x_i, x_j) = \frac{1}{\left[1 + \left(\dfrac{2\sqrt{\left\|x_i - x_j\right\|^2}\sqrt{2^{1/\omega} - 1}}{\sigma}\right)^2\right]^{\omega}}$$
where xi and xj are the input feature vectors, σ is the kernel width, and ω is a shape parameter [82,83,84] that controls the tailing behavior of the function. This kernel allows SVM to efficiently model complex relationships, making it an effective approach for nonlinear regression and classification. The goal of the training phase is to optimize the SVM’s hyperparameters (weights w and bias b) to maximize the margin between different classes while minimizing the classification error or regression loss.
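Since most SVM libraries do not ship a PUK kernel, it can be supplied as a custom Gram-matrix callable. The sketch below assumes the standard PUK form with width σ and shape ω; the dataset and parameter values are illustrative:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import euclidean_distances

def puk_kernel(sigma=1.0, omega=1.0):
    """Pearson VII Universal Kernel as a Gram-matrix callable (sketch)."""
    def k(A, B):
        d = euclidean_distances(A, B)  # pairwise ||x_i - x_j||
        return 1.0 / (1.0 + (2.0 * d * np.sqrt(2.0 ** (1.0 / omega) - 1.0)
                             / sigma) ** 2) ** omega
    return k

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, (120, 1))
y = X[:, 0] ** 3 - X[:, 0]   # a non-linear target

model = SVR(kernel=puk_kernel(sigma=1.0, omega=1.0), C=100.0, epsilon=0.01).fit(X, y)
rmse = np.sqrt(np.mean((y - model.predict(X)) ** 2))
```

Tuning ω moves the kernel between Lorentzian-like (ω = 1) and Gaussian-like (large ω) shapes, which is the flexibility the text refers to.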

2.5. Modelling Process

After collecting datasets from different sources, the datasets are prepared for the model study using AI tools. In this approach, nine AI models, namely RF, RT, M5P, M5Rules, FFNN, GBM, AdaBoost, SVM_PUK, and SVM_RBF, are considered. Of all the datasets, 75% were used for training, while the remaining 25% were used to assess model performance. The flow diagram of the methodology is shown in Figure 4.
K-fold cross-validation is a standard method for evaluating the performance of a machine-learning model. In this technique, the dataset is divided into K equal-sized subsets or “folds”. For each iteration, the model is trained on (K − 1) of these folds and tested on the remaining fold [85,86,87,88]. This process is repeated K times, with each fold used as the test set exactly once. In the present study, K = 5 was considered, splitting the dataset into five equal parts; each part was used once as a test set, while the remaining four parts were used for training. This random division ensures that each subset represents the overall dataset well.
The main advantage of K-fold cross-validation is that it evaluates the model on different subsets of the data, giving a more thorough understanding of its performance and helping to prevent overfitting. By averaging the performance metrics (such as R2, RMSE, and NSE) over all five iterations, a more reliable assessment of the model is obtained.
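A minimal sketch of the five-fold scheme on a synthetic dataset, with GradientBoostingRegressor standing in for any of the nine models (the data and hyperparameters are illustrative):

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, (100, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.05 * rng.normal(size=100)

kf = KFold(n_splits=5, shuffle=True, random_state=42)  # K = 5, as in the study
scores = []
for train_idx, test_idx in kf.split(X):
    model = GradientBoostingRegressor(random_state=0).fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    scores.append(np.sqrt(np.mean((y[test_idx] - pred) ** 2)))  # fold RMSE

mean_rmse = np.mean(scores)  # averaged over the five folds
```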
To evaluate model performance, five statistical measures are used: the squared correlation coefficient (R2), Nash–Sutcliffe Efficiency (NSE), mean absolute error (MAE), root-mean-square error (RMSE), and relative error, expressed as
$$R^2 = \frac{\left(n\sum_{n} O_{pre} O_{obs} - \sum_{n} O_{pre} \sum_{n} O_{obs}\right)^2}{\left(n\sum_{n} O_{pre}^2 - \left(\sum_{n} O_{pre}\right)^2\right)\left(n\sum_{n} O_{obs}^2 - \left(\sum_{n} O_{obs}\right)^2\right)}, \quad 0 \le R^2 \le 1$$
$$\mathrm{NSE} = 1 - \frac{\sum_{i=1}^{n}\left(O_{obs} - O_{pre}\right)^2}{\sum_{i=1}^{n}\left(O_{obs} - \overline{O_{obs}}\right)^2}$$
$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|O_{obs} - O_{pre}\right|, \quad 0 \le \mathrm{MAE} < +\infty$$
$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{n}\left(O_{obs} - O_{pre}\right)^2}, \quad 0 \le \mathrm{RMSE} < +\infty$$
$$\text{Relative error} = \frac{O_{obs} - O_{pre}}{O_{obs}} \times 100$$
where Oobs is the observed experimental data, Opre is the predicted data, n is the number of observations, and O o b s ¯ is the mean of the observed experimental data. The best model was again taken for sensitivity analysis of the input parameters. The best combination of input parameters is taken for further study.
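These measures can be sketched in a few lines of NumPy (the observed/predicted pair below is a toy example, not the study's data):

```python
import numpy as np

def evaluate(obs, pre):
    """Compute the five performance measures used in this study (sketch)."""
    obs, pre = np.asarray(obs, float), np.asarray(pre, float)
    n = obs.size
    r2 = ((n * np.sum(pre * obs) - np.sum(pre) * np.sum(obs)) ** 2 /
          ((n * np.sum(pre ** 2) - np.sum(pre) ** 2) *
           (n * np.sum(obs ** 2) - np.sum(obs) ** 2)))
    nse = 1 - np.sum((obs - pre) ** 2) / np.sum((obs - obs.mean()) ** 2)
    mae = np.mean(np.abs(obs - pre))
    rmse = np.sqrt(np.mean((obs - pre) ** 2))
    rel_err = (obs - pre) / obs * 100   # per-observation relative error (%); obs must be nonzero
    return r2, nse, mae, rmse, rel_err

obs = [1.0, 2.0, 3.0, 4.0]
pre = [1.1, 1.9, 3.2, 3.8]
r2, nse, mae, rmse, rel = evaluate(obs, pre)
```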

3. Results

This section examines the assessment of specific AI tools in three distinct contexts related to global climate change issues, and describes the performance of AI in these three segments, as well as its benefits and drawbacks. Table 6, Table 7 and Table 8 provide a brief overview of the hyperparameters for each of the nine machine learning models: RF, RT, M5P, M5Rules, FFNN, GBM, AdaBoost, SVM_PUK, and SVM_RBF.
In AdaBoost and GBM, hyperparameters such as the number of estimators and the maximum depth are essential for regulating model complexity and avoiding overfitting. In SVM, hyperparameters such as epsilon, the maximum number of iterations, and the selected kernel function (RBF or PUK) strongly influence how well the model works and how well it can handle nonlinear relationships. For the FFNN, the model's learning and inference abilities are governed by several parameters, including the hidden layer sizes, batch size, learning rate, and maximum number of iterations. For M5P, RF, RT, and M5Rules, the batch size, minimum number of instances, and maximum depth are a few important hyperparameters to be considered for modeling. To obtain the best results and generalization across various datasets and challenging domains, these hyperparameters must be properly tuned.
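Tuning of this kind is commonly automated with a cross-validated grid search; the sketch below uses a hypothetical grid and synthetic data (the values actually used in the study are those summarized in Tables 6, 7 and 8):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, (120, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.normal(size=120)

# Hypothetical candidate values for three GBM hyperparameters
param_grid = {"n_estimators": [50, 100],
              "max_depth": [2, 3],
              "learning_rate": [0.05, 0.1]}

# Five-fold cross-validated search, scored by RMSE (negated, per sklearn convention)
search = GridSearchCV(GradientBoostingRegressor(random_state=0),
                      param_grid, cv=5,
                      scoring="neg_root_mean_squared_error")
search.fit(X, y)
best_params = search.best_params_
```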

3.1. Prediction of Energy Dissipation of a Stepped Channel Using AI Models (Part 1)

In this part, the flow over the stepped channel is analyzed using nine different AI-based models (RF, RT, M5P, M5Rules, FFNN, GBM, AdaBoost, SVM_PUK, and SVM_RBF). To model energy dissipation, five input parameters, namely $\frac{y_c}{h}$, $\frac{\Delta H_{cha}}{y_c}$, $\frac{w}{y_c}$, $\tan\alpha$, and DN, are taken. An agreement diagram of training and testing of all the models is shown in Figure 5, which reveals that the M5P model produces the most scattered predictions during both training and testing. For evaluation of model outputs, four statistical parameters (R2, RMSE, MAE, and NSE) are taken. Table 9 shows the statistical parameters for the training and testing datasets.
Table 9 shows that the GBM model performs the best, with an RMSE of 0.002, MAE of 0.0017, and a maximum R2 of 0.998 during training, and an RMSE of 0.00182, MAE of 0.0016, and R2 of 0.998 during testing. This result is further supported by the Taylor diagram in Figure 6, which shows that the GBM model outperforms the other models. Additionally, the Violin diagram (Figure 7), which displays the relative error of each model, indicates that the GBM model performs better than the others, with a maximum relative error of ±1%. The SVM_PUK is the second-best model for predicting energy dissipation, according to Table 9. The M5Rules and M5P models generate several rules and linear regression expressions that might be beneficial in further studies.

3.2. Prediction of Sediment Trapping Efficiency Using AI Models (Part 2)

This part presents the predicted results of the RF, RT, M5P, M5Rules, FFNN, GBM, AdaBoost, SVM_PUK, and SVM_RBF models. Four statistical parameters, R2, NSE, RMSE, and MAE, are selected to evaluate the performance of the above-stated AI-based models, as given in Table 10. Figure 8 depicts the optimized models using all data in the form of an agreement line diagram for the training and testing phases.
As a result, it was clear that the proposed GBM model was more accurate than the other applied models in predicting trapping efficiency; the RF, RT, M5P, M5Rules, FFNN, AdaBoost, SVM_PUK, and SVM_RBF models also performed well. The agreement plot showed that the GBM model predicted the trapping efficiency with the least error, in both the training phase (R2 = 0.997, NSE = 0.997, RMSE = 0.782, MAE = 0.602) and the testing phase (R2 = 0.997, NSE = 0.998, RMSE = 0.769, MAE = 0.531). Moreover, the SVM_PUK model also showed good agreement with the observed values, exhibiting a good correlation in the testing phase.
The M5Rules and M5P models were also observed to be efficient, as shown in Figure 8. The main advantage of the M5Rules and M5P model trees is that they provide linear models (LMs) in the form of equations with conditions. Figure 9 presents the Taylor diagram with the statistical measures for all proposed AI models; compared with the other AI models during the training and testing phases, the values of the GBM model lie closer to the observed values. As can be seen from the Violin diagram (Figure 10), which displays the relative error of each model, the GBM model performs better than the others, with a maximum relative error of ±5%.

3.3. Prediction of Groundwater Level Using AI Models (Part 3)

The performance of the developed models was evaluated based on their squared correlation coefficient (R2), root mean square error (RMSE), and mean absolute error (MAE) during both the training and testing phases. The results are summarized in Table 11, and the outcomes of the models during the training and testing phases are shown through agreement diagrams in Figure 11. These results show that the GBM and Random Tree (RT) models perform significantly better than the M5Rules and M5P models. The GBM and RT exhibit strong performance in the training phase, with extremely high correlation coefficients and low error metrics. However, their performance in the testing phase shows a noticeable decline, although GBM still maintains a relatively high R2 and lower errors compared with RT.
Another aspect of the model’s performance has been represented through the Taylor diagram in Figure 12, which shows the correlation between observed and model-predicted values during the training and testing phases. M5Rules and M5P models may have overfitted the training data due to their structure.
The comparison with the single-well, gravimetric modeling study of [89] highlights that RF performs exceptionally well in both the training and testing phases in single-well scenarios, with an RMSE of 0.182 and MAE of 0.027 during training and an RMSE of 0.257 and MAE of 0.173 during testing. The stark difference in performance when using multiple-well data can be attributed to the uneven distribution across observation wells, which introduces variability and complexity that single-well data do not encounter. The M5Rules, M5P, FFNN, AdaBoost, SVM_PUK, and SVM_RBF models did not perform as well as the Random Forest and Random Tree models for several reasons, primarily related to the nature of the data and the characteristics of these modeling techniques.
M5Rules and M5P are rule-based and tree-based regression models that can be quite sensitive to the distribution of the data. In this study, the data distribution across multiple wells was uneven, which likely introduced significant variability and complexity that these models were unable to handle effectively. Unlike ensemble methods like Random Forest, which aggregates multiple decision trees to mitigate the effects of outliers and uneven data distribution, M5Rules and M5P models rely heavily on the structure of the data they are trained on.
The GBM showed a reasonable R2 of 0.835 during training and 0.828 during testing. This suggests that the model captured patterns in the training data, although the decline in performance on unseen data points to some difficulty in generalizing, a common symptom of overfitting.
The relationships between gravity data and groundwater levels can be highly non-linear and complex. M5Rules and M5P may not have been capable of capturing these complexities effectively. These models generate linear models at the leaves of the trees, which might be insufficient to capture the nuanced relationships present in the data. On the other hand, Random Forest can handle non-linear relationships better due to the ensemble approach, which combines multiple decision trees with different structures and subsets of data.
To understand and visualize the overall difference between the model performance, a relative error has been computed and represented through the error box diagram in Figure 13. These error box diagrams demonstrate that the FFNN may have a higher bias when dealing with non-linear data structures, leading to underfitting. Conversely, it might also exhibit high variance due to sensitivity to specific data points and local patterns in the training set. This trade-off between bias and variance can result in poor performance when models are tested on new unseen data, as reflected in the high RMSE and MAE values during the testing phase.

4. Discussion

This section covers the output results of the AI tools used to model various climate challenge situations. Additionally, it suggests a few disaster mitigation strategies that might minimize the after-effects of such events.

4.1. Prediction of Energy Dissipation of the Stepped Channel Using AI Tools

One of the flood mitigation plans proposed in this part is the design of a stepped channel for drainage in a hilly region. For that, data from the present study and the literature were gathered to obtain a broad perspective; combining the two produces a wide-ranging dataset that supports reliable prediction. The datasets were therefore collected over different geometric conditions and flow rates (Table 1 and Table 2). A few existing expressions for nappe flow were also included to validate the model and its prediction ability. The expression given by Chamani and Rajaratnam [22] for energy dissipation is
$$\frac{H}{H_{max}} = 1 - \frac{(1-\alpha)^N \left(1 + 1.5\frac{y_c}{h}\right) + \sum_{i=1}^{N-1} (1-\alpha)^i}{1.5\frac{y_c}{h} + N}$$
where $\alpha = 0.30 - 0.35\frac{h}{l} - \left(0.54 + 0.27\frac{h}{l}\right)\log\frac{y_c}{h}$, and N is the number of steps. The above expression is useful for a slope of 22° to 40° (h/l = 0.421 to 0.842).
Additionally, the following expression for a nappe flow given by Chanson [10] is expressed for a fully developed nappe flow as
$$\frac{H}{H_{max}} = 1 - \frac{0.54\left(\frac{y_c}{h}\right)^{0.275} + 1.715\left(\frac{y_c}{h}\right)^{-0.55}}{1.5 + \frac{H_{dam}}{y_c}}$$
where Hdam is the dam height. Similarly, Chanson and Toombes [21] expressed the following to evaluate residual energy (slope < 5°) as
$$\frac{H_{res}}{y_c} = 3.57\left(\frac{y_c}{h}\right)^{0.36}$$
Several existing models [10,21,22] from the literature are compared with the present datasets. The statistical analysis of all the existing models against the present datasets is given in Table 12, which demonstrates that the GBM shows better agreement with the observed datasets than the existing models.
Several rules (Appendix A) offered by the M5Rules model assist in predicting the energy at the downstream end of the channel. These rules could make it more practical to design stepped channels on steep terrain to drain excess rainwater. This method of modeling supports further analysis that facilitates the construction of stepped channels, and it is likely a way to lessen the after-effects of climatic disasters and improve implementation in the real world.
Clear guidelines for real-world applications:
  • The hydrological study of the occurrence of extreme events is required to obtain an optimum design discharge (Q) to construct a stepped channel for a particular site;
  • Knowing the maximum discharge as well as the height of the channel will help in deciding the step geometry and the number of steps;
  • The known input parameters for designing a stepped channel are w, yc, and the slope, whereas the output, $\frac{H}{H_{max}}$ at different locations, can be found using the rules in Appendix A, where $\frac{H}{H_{max}} = f\left(\frac{y_c}{h}, \frac{\Delta H_{cha}}{y_c}, \frac{w}{y_c}, \tan\alpha, DN\right)$.
Yet, AI technologies are only a means of approaching practical issues more easily; they are not the ultimate solution.

4.2. Prediction of Silt Trapping Efficiency Using AI Tools

This study discussed issues generated by floods; to mitigate these issues, sediment removal devices can be installed at the bed upstream of the canal headworks as sediment excluders and, similarly, downstream of the canal headworks as sediment extractors. To set up and analyze the AI tools, a dataset was prepared; its details are summarized in Table 3 and Table 4.
AI models have become increasingly valuable in predicting sediment transport, particularly in estimating the trapping efficiency of sediment removal devices such as settling basins, vortex settling basins, and vortex tube ejectors. The comparative results of the AI tools are given in Table 10. Traditional sediment transport models often rely on empirical formulas and physical simulations, which can be restricted by their assumptions and complexity. AI, on the other hand, employs vast amounts of data and advanced algorithms to provide more accurate and dynamic predictions.
According to Table 10, the GBM model registered the best ranking among other applied models, followed by the RF, RT, M5P, M5Rules, FFNN, AdaBoost, SVM_PUK, and SVM_RBF models in the testing phase. Therefore, the GBM model has the potential to predict the trapping efficiency of the vortex tube ejector based on R2, NSE, RMSE, and MAE parameters, as shown in Table 10. A few existing models from the literature were discussed in this section. A detailed analysis of existing models and present models is shown in Table 13, which includes the expression of Curi et al. [89], Paul et al. [90], and Singh et al. [42].
Curi et al. [89], based on their experimental work on sediment extractors, gave the following expression for the trapping efficiency of vortex-type settling basins:
$$\eta = 1.74 + \ln\left[d_u^{0.11}\left(\frac{\gamma_s}{\gamma_f}\right)^{0.88} Q^{0.58}\right]$$
in which η = sediment trapping efficiency, du = diameter of the underflow outlet, γs = weight density of sediment, γf = weight density of water, and Q = discharge in the inlet channel. Paul et al. [90] examined the trapping efficiency of settling basins and proposed
$$\eta = 73.4 + 8.0\log\left(\frac{\omega}{W}\right)$$
in which η = sediment trapping efficiency, ω = sediment fall velocity, and W = vertical upward velocity at the center of the basin. A model given by Singh et al. [42] for estimating the trapping efficiency of tunnel-type extractors is
$$\eta = 192.08\, C^{-0.392}\, d_{50}^{0.5983}\, R^{0.3766}$$
in which d50 = sediment size in millimeters (mm), C = sediment concentration in parts per million (ppm), R = extraction ratio in percentage, and η = trapping efficiency in percentage.
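The Singh et al. [42] relation lends itself to a direct sketch. Note that the negative exponent on C below is an assumption inferred from the reconstructed equation (a positive exponent would give efficiencies far above 100% at typical concentrations), so treat the sign convention as illustrative:

```python
def singh_trapping_efficiency(d50_mm, c_ppm, r_pct):
    """Trapping efficiency (%) of a tunnel-type extractor, after Singh et al. [42].

    Exponent signs are assumed from the reconstructed equation; in particular,
    the negative exponent on concentration C is an inference, not verified.
    """
    return 192.08 * c_ppm ** -0.392 * d50_mm ** 0.5983 * r_pct ** 0.3766

# Hypothetical operating point: d50 = 0.3 mm, C = 1000 ppm, R = 20%
eta = singh_trapping_efficiency(d50_mm=0.3, c_ppm=1000.0, r_pct=20.0)
```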
In this study, it was found that AI tools can effectively predict sediment trapping efficiency, and the M5Rules model provided expressions that assist in this prediction (Appendix B). Tree-based models such as Random Forest and Random Tree, together with expression-based models such as M5P and M5Rules, can effectively analyze data on sediment transport, hydrological parameters, and device performance to identify patterns and relationships that may not be apparent through conventional methods. By training these models on extensive datasets, including variables like flow velocity, sediment load, particle size distribution, and device geometry, AI can predict how efficiently a sediment removal device will trap and remove sediment under various conditions. Combining AI with hydraulic/hydrological models can further improve the accuracy of sediment transport predictions as well as sediment removal, since AI can refine the input parameters for hydraulic models, leading to better calibration and validation.
Clear guidelines for real-world applications:
  • To design the sediment removal structure in the canal, particularly the vortex tube, which may help in reducing the sediment load in the canal while maintaining its design discharge;
  • The flood events data may help to obtain an optimum design discharge (Q) of sediment removal structures such as vortex tubes, which may also help in the design of the trapping sediment capacity of hydraulic structures from canals during these events;
  • Knowing the design discharge as well as the sediment flux in the canal will help in deciding the vortex tube length, diameter, etc.;
  • The known input parameters for designing a vortex tube ejector are t/d, sediment size, and sediment concentration, whereas the output is the trapping efficiency, T.E. = f(d50, C, R, t/d), which can be found using the rules in Appendix B.
AI technologies can make solving practical problems easier, but they are not a complete solution by themselves. They should be used alongside practical experience and real-world testing to achieve the best results.

4.3. Prediction of the Groundwater Level Using AI Tools

The results of this study demonstrate the potential of machine learning models, particularly Random Forest and Random Tree, in predicting groundwater levels using multiple well data. Despite the challenges posed by uneven data distribution, these models have shown promise in handling complex datasets and providing valuable insights into groundwater storage evaluation. It is noteworthy that the models demonstrated robust performance using only field gravity data as input, without considering various other parameters such as specific yield, pumping effects, precipitation, etc. This highlights the strength and versatility of the models in leveraging the available data to make accurate predictions of local groundwater storage. The ability to predict groundwater levels with high accuracy using just gravity measurements is particularly impressive and suggests that these models can be effectively employed even in situations where detailed hydrogeological data are unavailable or difficult to obtain.
The models in this study relied primarily on gravity measurements and well data, but the absence of other hydrogeological parameters (e.g., soil moisture, aquifer properties, precipitation, and pumping rates) may limit the models' ability to capture the full complexity of the groundwater system. As Kumar et al. [91] described, omitting important features can lead to models that are accurate in specific cases but do not generalize well across different regions or periods. Including more diverse data sources in future studies could improve model robustness. On the other hand, the single-well model development by Sarkar et al. [92] demonstrated that temporal gravity data gives significantly better performance than multiple-parameter-based model runs.
The study area contained an uneven distribution of data across observation wells, which can lead to bias in model training. Although techniques like cross-validation and normalization were employed to mitigate this issue, the models may still struggle with underrepresented regions, leading to local inaccuracies. Addressing this limitation requires further improvements in data collection strategies and spatial interpolation techniques to fill in data gaps across broader areas. Field data collection methods must be expanded to include more observation wells as a supplement to reduce spatial biases.
Implications for Future Groundwater Storage Evaluation: This study underscores the importance of data quality and distribution in the development and application of predictive models for groundwater storage evaluation. Future studies should focus on improving data collection methods to ensure more even distribution across observation wells. Additionally, incorporating more sophisticated techniques for data preprocessing and feature engineering may further enhance model performance. Machine learning models, especially ensemble methods like Random Forest, can be instrumental in developing robust groundwater management strategies. By accurately predicting groundwater levels, these models can aid in identifying trends and patterns that are crucial for sustainable water resource management.
Climate change poses a significant threat to global water resources, impacting groundwater recharge rates and availability. The predictive capabilities of machine learning models can play a crucial role in mitigating these effects by providing early warnings and enabling proactive measures to manage groundwater resources effectively. Implementing these models at a larger scale can help monitor groundwater levels in real time, allowing for adaptive management practices that respond to changing climate conditions. For instance, regions experiencing prolonged droughts can use these predictions to implement water-saving measures, while areas with the potential for recharge can optimize water storage and conservation efforts.
Overall, Artificial Intelligence is a powerful modeling tool. However, the AI models did not perform equally in all three sub-fields of water resource engineering. AI tools are useful for predicting energy dissipation in stepped channels and trapping efficiency in vortex tubes, where they identify the characteristics that are significant for prediction; however, when predicting the GWL for multiple wells, they fail to capture the variance. These AI technologies can provide good results, but if a model is not fed an adequate input source, the outputs may be unreliable. It is therefore essential to use a reliable dataset that accurately reflects field behavior in water resource engineering.

5. Conclusions

This study is composed of three distinct parts: the first part proposed the prediction of energy dissipation of stepped channels as a mitigation approach to climate change issues. The second part is composed of datasets related to sediment removal issues involving the vortex tube silt ejector, and the third part is composed of the prediction of groundwater level from various field data. Additionally, this study highlights the benefits and drawbacks of AI technologies.
  • Out of the different climate change mitigation approaches, providing stepped channels in hilly terrain for high-velocity rainwater drainage is studied. The AI approach to modeling in the field of water resource engineering is successfully implemented in this part of the study. The best input parameters for designing the stepped channels are $\frac{y_c}{h}$, $\frac{\Delta H_{cha}}{y_c}$, $\frac{w}{y_c}$, DN, and $\tan\alpha$;
  • The GBM model performs well for modeling purposes, whereas the M5Rules are further useful for predicting energy dissipation under any geometric condition through several regression rules;
  • The second part of this study assesses the capability of nine artificial intelligence techniques, i.e., RF, RT, M5P, GBM, M5Rules, FFNN, AdaBoost, SVM_PUK, and SVM_RBF, in modeling the trapping efficiency of the vortex tube silt ejector utilizing experimental observations. A fair amount of data from various field sites would be required to arrive at more solid conclusions in future work;
  • The GBM model outperformed the other invoked AI-based models as well as conventional empirical models in the computation of the vortex tube silt ejector trapping efficiency, followed by the RF model. The study's findings showed that estimating the trapping efficiency of the vortex tube silt ejector with conventional models produces very high errors;
  • The third part of this study demonstrated the effectiveness of the GBM and Random Tree models in predicting groundwater levels using multiple-well data, despite the challenges of uneven data distribution. The insights gained from this research can inform future groundwater storage evaluation approaches, emphasizing the need for high-quality, evenly distributed data;
  • Applying these models can significantly contribute to global efforts to manage groundwater resources sustainably, thereby reducing the adverse impacts of climate change on water availability. By leveraging advanced machine learning techniques, we can develop more resilient and adaptive groundwater management strategies that ensure water security for future generations.
Varying the step geometry in a stepped channel may be considered to create a channel that is even more effective and capable of dissipating more kinetic energy. The addition of more experimental and field data may lead to a more precise model than the existing ones, and new hybrid ML models can also enhance the modeling results. Further studies may focus on scaling AI models to larger regions, adapting them to diverse geological conditions, and testing their generalization across different environmental contexts to enhance their utility for groundwater management in other areas. Given the impacts of climate change on groundwater recharge rates and availability, future research can focus on integrating climate projections into groundwater prediction models. This would enable AI-based models to predict groundwater levels under different climate scenarios, providing valuable insights for climate adaptation strategies. These models could also be applied to simulate the effects of extreme weather events, such as droughts or floods, on groundwater availability, offering early warnings and allowing for adaptive management practices.

Author Contributions

Conceptualization, C.S.P.O., R.M. and S.K.; Methodology, R.M., S.K. and H.S.; Software, S.K. and R.M.; Model description, H.S.; Validation, C.S.P.O., R.M. and S.K.; Formal analysis, R.M., S.K. and H.S.; Investigation, S.K., R.M. and H.S.; Resources, C.S.P.O., S.K., R.M. and H.S.; Data curation, R.M., S.K. and H.S.; Writing—original draft preparation, C.S.P.O., R.M., S.K. and H.S.; Writing—review and editing, R.M., S.K. and H.S.; Visualization and Supervision, C.S.P.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All supporting data and models used during this study can be made available from the corresponding author upon reasonable request.

Acknowledgments

The authors would like to express their gratitude to the Department of Civil Engineering, the Hydraulics Lab of IIT Roorkee, and the Geomatics Lab of IIT Roorkee for the efficient experimental set-up of the channel and the simulations.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

This appendix contains the rules generated by the M5Rules model (and the corresponding M5P model tree) to predict the energy dissipation of the stepped channel. These rules and linear regression expressions might be beneficial in further studies. The M5Rules are expressed as
  • Rule 1: for $12.71 < \frac{\Delta H_{cha}}{y_c} \le 19.09$,
    $$\frac{H}{H_{max}} = 0.767 + 0.0032\frac{\Delta H_{cha}}{y_c} - 0.0306\frac{y_c}{h} + 0.168\tan\alpha + 0.005\frac{w}{y_c} - 0.6794\,DN$$
  • Rule 2: for $\frac{\Delta H_{cha}}{y_c} > 15.97$ and $\frac{y_c}{h} \le 0.309$,
    $$\frac{H}{H_{max}} = 0.870 + 0.001\frac{\Delta H_{cha}}{y_c} - 0.0391\frac{y_c}{h} + 0.0587\tan\alpha + 0.0009\frac{w}{y_c} - 0.1864\,DN$$
  • Rule 3: for $\frac{\Delta H_{cha}}{y_c} > 15.97$ and $\tan\alpha > 0.423$,
    $$\frac{H}{H_{max}} = 0.8032 + 0.0028\frac{\Delta H_{cha}}{y_c} - 0.0439\frac{y_c}{h} + 0.0801\tan\alpha + 0.0015\frac{w}{y_c}$$
  • Rule 4: for $\frac{\Delta H_{cha}}{y_c} > 16.69$,
    $$\frac{H}{H_{max}} = 0.7949 + 0.003\frac{\Delta H_{cha}}{y_c} - 0.0439\frac{y_c}{h} + 0.0022\frac{w}{y_c}$$
  • Rule 5: for $\frac{\Delta H_{cha}}{y_c} \le 8.152$,
    $$\frac{H}{H_{max}} = 0.7304 + 0.0088\frac{\Delta H_{cha}}{y_c} - 0.0773\frac{y_c}{h}$$
  • Rule 6: for $\frac{y_c}{h} \le 0.509$ and $DN > 0.001$,
    $$\frac{H}{H_{max}} = 0.7404 + 0.2203\tan\alpha$$
  • Rule 7: for $\frac{y_c}{h} > 0.509$ and $\tan\alpha \le 0.258$,
    $$\frac{H}{H_{max}} = 0.6874 + 0.012\frac{\Delta H_{cha}}{y_c} - 0.1633\tan\alpha$$
  • Rule 8: for $\frac{y_c}{h} < 0.509$ and $\frac{\Delta H_{cha}}{y_c} \le 11.397$,
    $$\frac{H}{H_{max}} = 0.597 + 0.0301\frac{\Delta H_{cha}}{y_c} - 0.2197\frac{y_c}{h}$$
  • Rule 9 (default):
    $$\frac{H}{H_{max}} = 0.0738 + 0.0609\frac{\Delta H_{cha}}{y_c}$$
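M5Rules produces an ordered rule list in which the first rule whose condition is satisfied supplies the linear model. A sketch of that first-match evaluation is given below; the minus signs on the negative terms are assumed, since operator signs were partially lost in the extracted text:

```python
def m5rules_h_ratio(dH_yc, yc_h, tan_a, w_yc, DN):
    """First-match evaluation of the Appendix A M5Rules list (sketch).

    Inputs: dH_yc = dH_cha/y_c, yc_h = y_c/h, tan_a = tan(alpha),
    w_yc = w/y_c, DN. Signs of the negative terms are assumed, as
    minus signs were lost in the extracted text.
    """
    if 12.71 < dH_yc <= 19.09:                    # Rule 1
        return 0.767 + 0.0032*dH_yc - 0.0306*yc_h + 0.168*tan_a + 0.005*w_yc - 0.6794*DN
    if dH_yc > 15.97 and yc_h <= 0.309:           # Rule 2
        return 0.870 + 0.001*dH_yc - 0.0391*yc_h + 0.0587*tan_a + 0.0009*w_yc - 0.1864*DN
    if dH_yc > 15.97 and tan_a > 0.423:           # Rule 3
        return 0.8032 + 0.0028*dH_yc - 0.0439*yc_h + 0.0801*tan_a + 0.0015*w_yc
    if dH_yc > 16.69:                             # Rule 4
        return 0.7949 + 0.003*dH_yc - 0.0439*yc_h + 0.0022*w_yc
    if dH_yc <= 8.152:                            # Rule 5
        return 0.7304 + 0.0088*dH_yc - 0.0773*yc_h
    if yc_h <= 0.509 and DN > 0.001:              # Rule 6
        return 0.7404 + 0.2203*tan_a
    if yc_h > 0.509 and tan_a <= 0.258:           # Rule 7
        return 0.6874 + 0.012*dH_yc - 0.1633*tan_a
    if yc_h < 0.509 and dH_yc <= 11.397:          # Rule 8
        return 0.597 + 0.0301*dH_yc - 0.2197*yc_h
    return 0.0738 + 0.0609*dH_yc                  # Rule 9 (default)
```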

Appendix B

This appendix lists the rules generated by the M5Rules and M5P models to forecast the removal of silt deposition in canals. The M5Rules linear expressions are given as follows:
  • Rule 1: for 0.35 < d50 ≤ 0.415,
    T.E. (%) = 19.5646 × d50 − 1.406 × R(%) + 58.7014
  • Rule 2: for t/d ≤ 0.25,
    T.E. (%) = 61.2922 × d50 + 90.9074 × (t/d) + 0.0026 × C − 0.6181 × R(%) + 3.9735
  • Rule 3 (otherwise):
    T.E. (%) = 103.536 × d50 + 2.463 × R(%) + 7.2549
The M5P model produces several linear regression expressions that may prove useful in further studies. The M5P model’s linear regressions are described as
LM1: T.E. (%) = −17.028 × d50 + 0.0009 × C − 0.798 × R + 53.024, obtained when R ≤ 16.495 and sediment size ≤ 0.195
LM2: T.E. (%) = −1.2008 × d50 − 0.001 × C − 0.798 × R + 48.9238, generated when R ≤ 16.495 and sediment size > 0.195
LM3: T.E. (%) = 35.7396 × d50 − 0.0016 × C − 1.0345 × R + 41.6065, generated when R > 16.495 and sediment size > 0.35
LM4: T.E. (%) = −7.6019 × d50 + 0.0243 × C + 0.6436 × R + 63.9772
LM5: T.E. (%) = −35.6058 × d50 − 0.0056 × C − 1.207 × R + 50.2012
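The M5P routing above can be sketched as a small piecewise function. Only the three branch conditions stated in this appendix are reproduced; observations that the tree would route to LM4 or LM5 return None, because those split conditions are not listed here:

```python
# Sketch of the Appendix B M5P tree: splits on extraction ratio R (%)
# and sediment size d50 select a leaf linear model (LM1-LM3).
def trap_efficiency(d50, C, R):
    """Return T.E. (%) where a stated branch applies, else None."""
    if R <= 16.495:
        if d50 <= 0.195:                                    # LM1
            return -17.028*d50 + 0.0009*C - 0.798*R + 53.024
        return -1.2008*d50 - 0.001*C - 0.798*R + 48.9238    # LM2
    if d50 > 0.35:                                          # LM3
        return 35.7396*d50 - 0.0016*C - 1.0345*R + 41.6065
    return None  # would route to LM4/LM5; their conditions are not listed
```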

References

  1. Collins, M.; An, S.I.; Cai, W.; Ganachaud, A.; Guilyardi, E.; Jin, F.F.; Jochum, M.; Lengaigne, M.; Power, S.; Timmermann, A.; et al. The impact of global warming on the tropical Pacific Ocean and El Niño. Nat. Geosci. 2010, 3, 391–397. [Google Scholar] [CrossRef]
  2. Soden, B.J.; Held, I.M. An Assessment of Climate Feedbacks in Coupled Ocean–Atmosphere Models. J. Clim. 2006, 19, 3354–3360. [Google Scholar] [CrossRef]
  3. Vecchi, G.A.; Soden, B.J. Global Warming and the Weakening of the Tropical Circulation. J. Clim. 2007, 20, 4316–4340. [Google Scholar] [CrossRef]
  4. Wang, C.; Xie, S.-P.; Carton, J.A. A Global Survey of Ocean–Atmosphere Interaction and Climate Variability; Wang, C., Xie, S.P., Carton, J.A., Eds.; AGU Geophysical Monograph Series; Blackwell Publishing Ltd.: Oxford, UK, 2004; pp. 1–19. [Google Scholar] [CrossRef]
  5. Kasiviswanathan, K.S.; Soundharajan, D.; Sandhya, P.; Jianxun, H.; Ojha, C.S.P. Modeling and Mitigation Measures for Managing Extreme Hydrometeorological Events under a Warming Climate; Elsevier: Amsterdam, The Netherlands, 2023; ISBN 9780443186417. [Google Scholar]
  6. Rao, Y.S.; Tian, C.Z.; Ojha, C.S.P.; Gurjar, B.; Tyagi, R.D.; Kao, C.M. Climate Change Modeling, Mitigation, and Adaptation; ASCE: Reston, VA, USA, 2013; ISBN 978-0-7844-1271-8. [Google Scholar] [CrossRef]
  7. Karki, R.; ul Hasson, S.; Gerlitz, L.; Talchabhadel, R.; Schenk, E.; Schickoff, U.; Böhner, J. WRF-based simulation of an extreme precipitation event over the Central Himalayas: Atmospheric mechanisms and their representation by microphysics parameterization schemes. Atmos. Res. 2018, 214, 21–35. [Google Scholar] [CrossRef]
  8. Bhardwaj, A.; Wasson, R.J.; Chow, W.T.L.; Ziegler, A.D. High-intensity monsoon rainfall variability and its attributes: A case study for Upper Ganges Catchment in the Indian Himalaya during 1901–2013. Nat. Hazards 2021, 105, 2907–2936. [Google Scholar] [CrossRef]
  9. Gouda, K.C.; Rath, S.S.; Singh, N.; Ghosh, S.; Lata, R. Extreme rainfall event analysis over the state of Himachal Pradesh in India. Theor. Appl. Climatol. 2022, 151, 1103–1111. [Google Scholar] [CrossRef]
  10. Chanson, H. Hydraulics of nappe flow regime above stepped chutes and spillways. Aust. Civil Eng. Trans. 1994, 36, 69–76. [Google Scholar]
  11. Chanson, H. Hydraulic Design of Stepped Cascades, Channels, Weirs and Spillways; Pergamon: Oxford, UK, 1994. [Google Scholar]
  12. Peyras, L.; Royet, P.; Degoutte, G. Flow and Energy Dissipation over Stepped Gabion Weirs. J. Hydraul. Eng. 1992, 118, 707–717. [Google Scholar] [CrossRef]
  13. Chanson, H. Prediction of the transition nappe/skimming flow on a stepped channel. J. Hydraul. Res. 1996, 34, 421–429. [Google Scholar] [CrossRef]
  14. Ohtsu, I.; Yasuda, Y. Characteristics of Flow Conditions on Stepped Channels. In Proceedings of the 27th IAHR Congress, Theme D, San Francisco, CA, USA, 10–15 August 1997; pp. 583–588. [Google Scholar]
  15. Chanson, H.; Toombes, L. Hydraulics of stepped chutes: The transition flow. J. Hydraul. Res. 2004, 42, 43–54. [Google Scholar] [CrossRef]
  16. Boes, R.M.; Hager, W.H. Hydraulic design of stepped spillways. J. Hydraul. Eng. 2003, 129, 671–679. [Google Scholar] [CrossRef]
  17. Chamani, M.R.; Rajaratnam, N. Characteristics of skimming flow over stepped spillways. J. Hydraul. Eng. 1999, 125, 361–368. [Google Scholar] [CrossRef]
  18. Essery, I.T.S.; Horner, M.W. The Hydraulic Design of Stepped Spillways, 2nd ed.; CIRIA Report No. 33; CIRIA (Construction Industry Research and Information Association): London, UK, 1978. [Google Scholar]
  19. Pinheiro, A.N.; Fael, C.S. Nappe Flow in Stepped Channels—Occurrence and Energy Dissipation. In International Workshop on Hydraulics of Stepped Spillways; Balkema: Zurich, Switzerland, 2000; pp. 119–126. [Google Scholar]
  20. Toombes, L.; Wagne, C.; Chanson, H. Flow Patterns in Nappe Flow Regime Down Low Gradient Stepped Chutes. J. Hydraul. Res. 2008, 46, 4–14. [Google Scholar] [CrossRef]
  21. Chanson, H.; Toombes, L. Energy dissipation and air entrainment in a stepped storm waterway: An experimental study. J. Irrig. Drain. Eng. 2002, 128, 305–315. [Google Scholar] [CrossRef]
  22. Chamani, M.R.; Rajaratnam, N. Jet flow on stepped spillways. J. Hydraul. Eng. 1994, 120, 254–259. [Google Scholar] [CrossRef]
  23. Felder, S.; Geuzaine, M.; Dewals, B.; Erpicum, S. Nappe flows on a stepped chute with prototype-scale steps height: Observations of flow patterns, air-water flow properties, energy dissipation and dissolved oxygen. J. Hydro-Environ. Res. 2019, 27, 1–19. [Google Scholar] [CrossRef]
  24. Horner, M.W. An Analysis of Flow on Cascades of Steps. Ph.D. Thesis, University of Birmingham, Birmingham, UK, 1969; p. 357. [Google Scholar]
  25. Salmasi, F.; Özger, M. Neuro-fuzzy approach for estimating energy dissipation in skimming flow over stepped spillways. Arab. J. Sci. Eng. 2014, 39, 6099–6108. [Google Scholar] [CrossRef]
  26. Parsaie, A.; Haghiabi, A.H.; Saneie, M.; Torabi, H. Prediction of energy dissipation on the stepped spillway using the multivariate adaptive regression splines. J. Hydraul. Eng. 2016, 22, 281–292. [Google Scholar] [CrossRef]
  27. Parsaie, A.; Haghiabi, A.H.; Saneie, M.; Torabi, H. Applications of soft computing techniques for prediction of energy dissipation on stepped spillways. Neural Comput. Appl. 2018, 29, 1393–1409. [Google Scholar] [CrossRef]
  28. Jiang, L.; Diao, M.; Xue, H.; Sun, H. Energy Dissipation Prediction for Stepped Spillway Based on Genetic Algorithm–Support Vector Regression. J. Irrig. Drain. Eng. 2018, 144, 04018003. [Google Scholar] [CrossRef]
  29. Parsaie, A.; Haghiabi, A.H.H. Evaluation of energy dissipation on stepped spillway using evolutionary computing. Appl. Water Sci. 2019, 9, 144. [Google Scholar] [CrossRef]
  30. Pujari, S.; Kaushik, V.; Kumar, S.A. Prediction of Energy Dissipation over Stepped Spillway with Baffles Using Machine Learning Techniques. Civ. Eng. Archit. 2023, 11, 2377–2391. [Google Scholar] [CrossRef]
  31. Orak, S.J.; Asareh, A. Effect of gradation on sediment extraction (trapping) efficiency in structures of vortex tube with different angles. Adv. Environ. Biol. 2015, 31, 53–58. [Google Scholar]
  32. Parshall, R.L. Model and prototype studies of sand traps. Trans. Am. Soc. Civ. Eng. 1952, 117, 204–212. [Google Scholar] [CrossRef]
  33. Blench, T. Discussion of model and prototype studies of sand traps, by, RL Parshall. Trans. Am. Soc. Civ. Eng. 1952, 117, 213. [Google Scholar] [CrossRef]
  34. Ahmed, M. Final recommendations from experiments of silt ejector of DG Kahn canal. In Hydraulics Research; IAHR: Thessaloniki, Greece, 1958. [Google Scholar]
  35. Robinson, A.R. Vortex tube sand trap. Trans. Am. Soc. Civ. Eng. 1962, 127, 391–433. [Google Scholar] [CrossRef]
  36. Lawrence, P.; Sanmuganathan, K. Field verification of vortex tube design method. In Proceedings of the South-East Asian Regional Symposium on Problems of Soil Erosion and Sedimentation, Bangkok, Thailand, 27–29 January 1981; Tingsanchali, T., Eggers, H., Eds.; [Google Scholar]
  37. Atkinson, E. Vortex-tube sediment extractors. I: Trapping efficiency. J. Hydraul. Eng. 1994, 120, 1110–1125. [Google Scholar] [CrossRef]
  38. Atkinson, E. Vortex-tube sediment extractors. II: Design. J. Hydraul. Eng. 1994, 120, 1126–1138. [Google Scholar] [CrossRef]
  39. Tiwari, N.K.; Sihag, P.; Kumar, S.; Ranjan, S. Prediction of trapping efficiency of vortex tube ejector. ISH J. Hydraul. Eng. 2018, 26, 59–67. [Google Scholar] [CrossRef]
  40. Tiwari, N.K.; Sihag, P.; Singh, B.K.; Ranjan, S.; Singh, K.K. Estimation of Tunnel Desilter Sediment Removal Efficiency by ANFIS. Iran. J. Sci. Technol. Trans. Civ. Eng. 2019, 44, 959–974. [Google Scholar] [CrossRef]
  41. Singh, B.; Sihag, P.; Singh, K.; Kumar, S. Estimation of trapping efficiency of a vortex tube silt ejector. Int. J. River Basin Manag. 2021, 19, 261–269. [Google Scholar] [CrossRef]
  42. Singh, B.K.; Tiwari, N.K.; Singh, K.K. Support vector regression-based modeling of trapping efficiency of silt ejector. J. Indian Water Resour. Soc. 2016, 36, 41–49. [Google Scholar]
  43. Kumar, S.; Ojha, C.S.P.; Tiwari, N.K.; Ranjan, S. Exploring the potential of artificial intelligence techniques in prediction of the removal efficiency of vortex tube silt ejector. Int. J. Sediment Res. 2023, 38, 615–627. [Google Scholar] [CrossRef]
  44. Kumar, M.; Sihag, P.; Kumar, S. Evaluation and analysis of trapping efficiency of vortex tube ejector using soft computing techniques. J. Indian Water Resour. Soc. 2019, 39, 1–9. [Google Scholar]
  45. Dangar, S.; Asoka, A.; Mishra, V. Causes and implications of groundwater depletion in India: A review. J. Hydrol. 2021, 596, 126103. [Google Scholar] [CrossRef]
  46. Swain, S.; Taloor, A.K.; Dhal, L.; Sahoo, S.; Al-Ansari, N. Impact of climate change on groundwater hydrology: A comprehensive review and current status of the Indian hydrogeology. Appl. Water Sci. 2022, 12, 120. [Google Scholar] [CrossRef]
  47. Bhattarai, N.; Lobell, D.B.; Singh, B.; Fishman, R.; Kustas, W.P.; Pokhrel, Y.; Jain, M. Warming temperatures exacerbate groundwater depletion rates in India. Sci. Adv. 2023, 9, eadi1401. [Google Scholar] [CrossRef]
  48. Chandra, N.A.; Sahoo, S.N. Groundwater levels and resiliency mapping under land cover and climate change scenarios: A case study of Chitravathi basin in Southern India. Environ. Monit. Assess. 2023, 195, 1394. [Google Scholar] [CrossRef]
  49. Das, S. Groundwater Sustainability, Security and equity: India today and tomorrow. J. Geol. Soc. India 2023, 99, 5–8. [Google Scholar] [CrossRef]
  50. Tao, H.; Hameed, M.M.; Marhoon, H.A.; Zounemat-Kermani, M.; Heddam, S.; Kim, S.; Sulaiman, S.O.; Tan, M.L.; Sa’adi, Z.; Mehr, A.D.; et al. Groundwater level prediction using machine learning models: A comprehensive review. Neurocomputing 2022, 489, 271–308. [Google Scholar] [CrossRef]
  51. Khan, J.; Lee, E.; Balobaid, A.S.; Kim, K. A comprehensive review of conventional, machine learning, and deep learning models for groundwater level (GWL) forecasting. Appl. Sci. 2023, 13, 2743. [Google Scholar] [CrossRef]
  52. Afrifa, S.; Zhang, T.; Appiahene, P.; Varadarajan, V. Mathematical and machine learning models for groundwater level changes: A systematic review and bibliographic analysis. Future Internet 2022, 14, 259. [Google Scholar] [CrossRef]
  53. Boo, K.B.W.; El-Shafie, A.; Othman, F.; Khan, M.M.H.; Birima, A.H.; Ahmed, A.N. Groundwater level forecasting with machine learning models: A review. Water Res. 2024, 252, 121249. [Google Scholar] [CrossRef] [PubMed]
  54. Chen, Y.; Chen, W.; Chandra Pal, S.; Saha, A.; Chowdhuri, I.; Adeli, B.; Janizadeh, S.; Dineva, A.A.; Wang, X.; Mosavi, A. Evaluation efficiency of hybrid deep learning algorithms with neural network decision tree and boosting methods for predicting groundwater potential. Geocarto Int. 2021, 37, 5564–5584. [Google Scholar] [CrossRef]
  55. Di Salvo, C. Improving results of existing groundwater numerical models using machine learning techniques: A review. Water 2022, 14, 2307. [Google Scholar] [CrossRef]
  56. Jacob, T.; Bayer, R.; Chery, J.; Le Moigne, N. Time-lapse microgravity surveys reveal water storage heterogeneity of a karst aquifer. J. Geophys. Res. Solid Earth 2010, 115, B06402. [Google Scholar] [CrossRef]
  57. Nhu, V.H.; Shahabi, H.; Nohani, E.; Shirzadi, A.; Al-Ansari, N.; Bahrami, S.; Nguyen, H. Daily water level prediction of Zrebar Lake (Iran): A comparison between M5P, random forest, random tree and reduced error pruning trees algorithms. ISPRS Int. J. Geo-Inf. 2020, 9, 479. [Google Scholar] [CrossRef]
  58. Elbeltagi, A.; Pande, C.B.; Kouadri, S.; Islam, A.R.M.T. Applications of various data-driven models for the prediction of groundwater quality index in the Akot basin, Maharashtra, India. Environ. Sci. Pollut. Res. 2022, 29, 17591–17605. [Google Scholar] [CrossRef]
  59. Xiong, J.; Guo, S.; Kinouchi, T. Leveraging machine learning methods to quantify 50 years of dwindling groundwater in India. Sci. Total Environ. 2022, 835, 155474. [Google Scholar] [CrossRef]
  60. Mirhashemi, S.H.; Mirzaei, F.; Haghighat Jou, P.; Panahi, M. Evaluation of Four Tree Algorithms in Predicting and Investigating the Changes in Aquifer Depth. Water Resour. Manag. 2022, 36, 4607–4618. [Google Scholar] [CrossRef]
  61. Masroor, M.; Rehman, S.; Sajjad, H.; Rahaman, M.H.; Sahana, M.; Ahmed, R.; Singh, R. Assessing the impact of drought conditions on groundwater potential in Godavari Middle Sub-Basin, India using analytical hierarchy process and random forest machine learning algorithm. Groundw. Sustain. Dev. 2021, 13, 100554. [Google Scholar] [CrossRef]
  62. Masroor, M.; Sajjad, H.; Kumar, P.; Saha, T.K.; Rahaman, M.H.; Choudhari, P.; Saito, O. Novel ensemble machine learning modeling approach for groundwater potential mapping in Parbhani District of Maharashtra, India. Water 2023, 15, 419. [Google Scholar] [CrossRef]
  63. Afan, H.A.; Ibrahem Ahmed Osman, A.; Essam, Y.; Ahmed, A.N.; Huang, Y.F.; Kisi, O.; El-Shafie, A. Modeling the fluctuations of groundwater level by employing ensemble deep learning techniques. Eng. Appl. Comput. Fluid Mech. 2021, 15, 1420–1439. [Google Scholar] [CrossRef]
  64. Abdi, E.; Ali, M.; Santos, C.A.G.; Olusola, A.; Ghorbani, M.A. Enhancing Groundwater Level Prediction Accuracy Using Interpolation Techniques in Deep Learning Models. Groundw. Sustain. Dev. 2024, 26, 101213. [Google Scholar] [CrossRef]
  65. Ahmadi, A.; Olyaei, M.; Heydari, Z.; Emami, M.; Zeynolabedin, A.; Ghomlaghi, A.; Sadegh, M. Groundwater level modeling with machine learning: A systematic review and meta-analysis. Water 2022, 14, 949. [Google Scholar] [CrossRef]
  66. Singh, P.N. Chatra Canal, Nepal: Vortex Tube Field Measurements; Report No. OD55; Hydraulics Research: Wallingford, UK, 1983. [Google Scholar]
  67. Pathak, S.; Gupta, S.; Ojha, C.S.P. Assessment of groundwater vulnerability to contamination with ASSIGN index: A case study in Haridwar, Uttarakhand, India. J. Hazard. Toxic Radioact. Waste 2021, 25, 04020081. [Google Scholar] [CrossRef]
  68. Breiman, L.; Friedman, J.; Olshen, R.; Stone, C. Classification and Regression Trees; Chapman and Hall/CRC: Boca Raton, FL, USA, 1984. [Google Scholar] [CrossRef]
  69. Breiman, L. Using Adaptive Bagging to Debias Regressions; Technical Report 547; Statistics Dept. UCB: Berkeley, CA, USA, 1999; Available online: https://statistics.berkeley.edu/tech-reports/547 (accessed on 28 February 1999).
  70. Erdal, H.; Karahanoğlu, İ. Bagging ensemble models for bank profitability: An emprical research on Turkish development and investment banks. Appl. Soft Comput. 2016, 49, 861–867. [Google Scholar] [CrossRef]
  71. Breiman, L. Random forests. In Machine Learning; Kluwer Academic Publishers: Dordrecht, The Netherlands, 2001; Volume 45, pp. 5–32. [Google Scholar] [CrossRef]
  72. Ali, J.; Khan, R.; Ahmad, N.; Maqsood, I. Random forests and decision trees. Int. J. Comput. Sci. Issues 2012, 9, 272–278. Available online: https://www.researchgate.net/publication/259235118 (accessed on 1 September 2012).
  73. Sattari, M.T.; Pal, M.; Mirabbasi, R.; Abraham, J. Ensemble of M5 model tree-based modelling of sodium adsorption ratio. J. AI Data Min. 2018, 6, 69–78. Available online: https://jad.shahroodut.ac.ir/article_1015_957fdfdc9de0cc89dbb4339ccf806dc4.pdf (accessed on 31 March 2018).
  74. Quinlan, J.R. Learning with continuous classes. In Proceedings of the 5th Australian Joint Conference on Artificial Intelligence, Hobart, Tasmania, 16–18 November 1992; Volume 92, pp. 343–348. Available online: https://sci2s.ugr.es/keel/pdf/algorithm/congreso/1992-Quinlan-AI.pdf (accessed on 18 November 1992).
  75. Sharma, R.; Kumar, S.; Maheshwari, R. Comparative analysis of classification techniques in data mining using different datasets. Int. J. Comput. Sci. Mob. Comput. 2015, 4, 125–134. Available online: https://api.semanticscholar.org/CorpusID:33215569 (accessed on 31 December 2015).
  76. Bayzid, S.M.; Mohamed, Y.; Al-Hussein, M. Prediction of maintenance cost for road construction equipment: A case study. Can. J. Civ. Eng. 2016, 43, 480–492. [Google Scholar] [CrossRef]
  77. Zhou, Z.H. Ensemble Methods: Foundations and Algorithms; Publisher CRC Press: Boca Raton, FL, USA, 2012; pp. 23–44. [Google Scholar]
  78. Schapire, R.E. Explaining adaboost. In Empirical Inference; Springer: Berlin/Heidelberg, Germany, 2013; pp. 37–52. [Google Scholar] [CrossRef]
  79. Burgsteiner, H. Imitation learning with spiking neural networks and real-world devices. Eng. Appl. Artif. Intell. 2006, 19, 741–752. [Google Scholar] [CrossRef]
  80. Bartolini, A.; Lombardi, M.; Milano, M.; Benini, L. Neuron Constraints to Model Complex Real-World Problems; Springer: Berlin/Heidelberg, Germany, 2011; pp. 115–129. [Google Scholar] [CrossRef]
  81. Singh, U.; Rizwan, M.; Alaraj, M.; Alsaidan, L. A Machine Learning-Based Gradient Boosting Regression Approach for Wind Power Production Forecasting: A Step towards Smart Grid Environments. Energies 2021, 14, 5196. [Google Scholar] [CrossRef]
  82. Han, S.; Qubo, C.; Meng, H. Parameter selection in SVM with RBF kernel, function. In Proceedings of the World Automation Congress, Puerto Vallarta, Mexico, 24–28 June 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 1–4. Available online: https://ieeexplore.ieee.org/document/6321759 (accessed on 4 October 2012).
  83. Sihag, P.; Jain, P.; Kumar, M. Modelling of impact of water quality on recharging rate of stormwater filter system using various kernel function-based regression. Model. Earth Syst. Environ. 2018, 4, 61–68. [Google Scholar] [CrossRef]
  84. Smola, A.J.; Schölkopf, B. A tutorial on support vector regression. Stat. Comput. 2004, 14, 199–222. [Google Scholar] [CrossRef]
  85. Namadi, P.; He, M.; Sandhu, P. Modeling ion constituents in the Sacramento-San Joaquin Delta using multiple machine learning approaches. J. Hydroinform. 2023, 25, 2541–2560. [Google Scholar] [CrossRef]
  86. Stone, M. Cross-validatory choice and assessment of statistical predictions. J. R. Stat. Soc. Ser. B Methodol. 1974, 36, 111–133. [Google Scholar] [CrossRef]
  87. Geisser, S. The predictive sample reuse method with applications. J. Am. Stat. Assoc. 1975, 70, 320–328. [Google Scholar] [CrossRef]
  88. Efron, B. Estimating the error rate of a prediction rule: Improvement on cross-validation. J. Am. Stat. Assoc. 1983, 78, 316–331. [Google Scholar] [CrossRef]
  89. Curi, K.V.; Esen, I.I.; Velioglu, S.G. Vortex type solid liquid separator. Prog. Water Technol. 1979, 7, 183–190. [Google Scholar]
  90. Paul, T.C.; Sayal, S.K.; Sakhuja, V.S.; Dhillon, G.S. Vortex-settling basin design considerations. J. Hydraul. Eng. 1991, 117, 172–189. [Google Scholar] [CrossRef]
  91. Kumar, N.; Rajagopalan, P.; Pankajakshan, P.; Bhattacharyya, A.; Sanyal, S.; Balachandran, J.; Waghmare, U.V. Machine learning constrained with dimensional analysis and scaling laws: Simple, transferable, and interpretable models of materials from small datasets. Chem. Mater. 2018, 31, 314–321. [Google Scholar] [CrossRef]
  92. Sarkar, H.; Goriwale, S.S.; Ghosh, J.K.; Ojha, C.S.P.; Ghosh, S.K. Potential of machine learning algorithms in groundwater level prediction using temporal gravity data. Groundw. Sustain. Dev. 2024, 25, 101114. [Google Scholar] [CrossRef]
Figure 1. A graphical representation illustrating the correlation between the input and target variables for predicting the energy dissipation of a stepped channel.
Figure 2. A graphical representation illustrating the correlation between the input and target variables for predicting the sediment trapping efficiency of the vortex tube silt ejector.
Figure 3. Study area map.
Figure 4. The flow diagram of the current methodology.
Figure 5. Agreement diagram of observed and predicted H/Hmax. (a) M5P; (b) M5Rules; (c) RF; (d) RT; (e) FFNN; (f) GBM; (g) AdaBoost; (h) SVM_PUK; (i) SVM_RBF.
Figure 6. Taylor’s diagram of observed and predicted H/Hmax. (a) AI model training; (b) AI model testing.
Figure 7. Distribution of relative errors of energy dissipation for all applied AI-based models in the (a) training phase and (b) testing phase.
Figure 8. Agreement diagram of observed and predicted trap efficiency. (a) M5P model; (b) M5Rules; (c) RF model; (d) RT model; (e) FFNN; (f) GBM; (g) AdaBoost; (h) SVM_PUK; (i) SVM_RBF.
Figure 9. Taylor’s diagram of observed and predicted trapping efficiency. (a) AI model training; (b) AI model testing.
Figure 10. Distribution of relative errors of trapping efficiency for all applied AI-based models (a) in the training phase and (b) testing phase.
Figure 11. Agreement diagram of observed and predicted GWL using AI models. (a) M5P model; (b) M5Rules; (c) RF model; (d) RT model; (e) FFNN; (f) GBM; (g) AdaBoost; (h) SVM_PUK; (i) SVM_RBF.
Figure 12. Taylor’s diagram of observed and predicted GWL. (a) AI model training; (b) AI model testing.
Figure 13. Distribution of relative errors of groundwater level for all applied AI-based models in the (a) training phase and (b) testing phase.
Table 1. Significant features of the nappe-flow datasets from the literature.
| Sl. No. | Reference | Shape | Unit discharge q [m²/s] | h [cm] | Number of steps | W [m] | Slope (α) | Flow depth measurement |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | [18,24] | Flat | – | 2.9–50 | 8, 10, 20, 30 | 0.15, 0.3, 0.6 | 11.3–45 | Pitot tube |
| 2 | [19] | Flat | 0.004–0.057 | 5 | 10 | 0.7 | 14, 18.4 | Indirect method |
| 4 | [23] | Flat | 0.005–0.637 | 50 | 6 | 0.2 | 15 | Conductivity probe |
| 5 | Present study | Flat | 0.0089–0.046 | 10 | 10 | 0.52 | 26.6, 23.12 | Conductivity probe |
Table 2. Significant features of all the datasets (Part 1).
| Sl. No. | Parameter | MIN | MAX | AVG | STD |
| --- | --- | --- | --- | --- | --- |
| 1 | DN | 0.0001 | 0.098 | 0.002 | 0.008 |
| 2 | ΔHcha/yc | 4.673 | 96.355 | 23.204 | 15.132 |
| 3 | W/yc | 0.583 | 67.415 | 15.863 | 9.4332 |
| 4 | Slope (α) | 0.249 | 0.5 | 0.418 | 0.078 |
| 5 | yc/h | 0.096 | 1.368 | 0.4 | 0.224 |
| 6 | H/Hmax | 0.592 | 0.973 | 0.88 | 0.076 |
Table 3. Range of collected data from the present experiment (Part 2).
| Statistic | Units | Max | Min | Mean | Std. dev. | Kurtosis | Skewness |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Flow rate (Q) | m³/s | 0.1 | 0.1 | 0.1 | 0 | 0 | 0 |
| t/d | – | 0.2 | 0.2 | 0.2 | 0 | 0 | 0 |
| Concentration | ppm | 1000 | 500 | 750 | 250 | −2.0342 | 0 |
| Extraction ratio (R) | % | 26.42 | 13.38 | 19.3826 | 4.8935 | 3.7546 | 0.2725 |
| Pipe diameter | m | 0.127 | 0.127 | 0.127 | 0 | 0 | 0 |
| Flow depth | m | 0.2245 | 0.1079 | 0.1579 | 0.0492 | −1.5127 | 0.4735 |
| Sediment size | m | 0.00058 | 0.00024 | 0.000398 | 0.00013 | −1.4276 | 0.2167 |
| Slope | – | 0.00171 | 0.00171 | 0.00171 | 0 | 0 | 0 |
| Trap efficiency (η) | % | 57.24 | 19.86 | 41.7132 | 17.8549 | 1.6574 | 1.5548 |
Table 4. Range of collected data from the literature (Part 2).
| Statistic | Units | Max | Min | Mean | Std. dev. | Kurtosis | Skewness |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Flow rate (Q) | m³/s | 46 | 0.1 | 1.8184 | 7.8775 | 25.3658 | 5.1787 |
| t/d | – | 0.3 | 0.2 | 0.2189 | 0.0393 | 0.5786 | 1.6034 |
| Concentration | ppm | 10921 | 0.49 | 4643.749 | 1347.2818 | −1.01508 | −0.4139 |
| Extraction ratio (R) | % | 26.42 | 3.7 | 18.0636 | 4.8935 | −0.0596 | −0.3849 |
| Pipe diameter | m | 0.9 | 0.127 | 0.2111 | 0.1870 | 4.4253 | 2.2331 |
| Flow depth | m | 3.2 | 0.1079 | 0.3109 | 0.5366 | 22.1232 | 4.7223 |
| Channel width | m | 27 | 1.5 | 3.0141 | 4.7506 | 19.9868 | 4.4395 |
| Sediment size | m | 0.00038 | 0.00015 | 0.000338 | 0.000089 | 1.2338 | −1.7750 |
| Slope | – | 0.00171 | 0.00019 | 0.001423 | 0.000595 | 0.58003 | −1.6037 |
| Trap efficiency (η) | % | 94 | 43.5 | 72.1071 | 16.9605 | −1.387 | −0.379 |
Table 5. Summary of the observation wells and their geographical locations (Part 3).
| Station ID | Name | Latitude (N) | Longitude (E) | Height of GWL above MSL |
| --- | --- | --- | --- | --- |
| W1 | HYDW IITR | 29°52′6.312″ | 77°53′42.576″ | 254.01 |
| W2 | Bajuberi | 29°54′5.112″ | 77°55′25.14″ | 259.23 |
| W3 | Sherpur | 29°52′55.2″ | 77°54′50.508″ | 256.31 |
| W4 | Adarsh Nagar | 29°52′32.052″ | 77°53′59.928″ | 256.84 |
| W5 | Ibrahimpur | 29°53′34.368″ | 77°52′9.372″ | 267.02 |
| W6 | Ramnagar | 29°52′33.996″ | 77°52′30.72″ | 267.9 |
| W7 | Civil Line | 29°52′42.672″ | 77°53′16.512″ | 263.99 |
| W8 | Ashafnagar | 29°49′30.252″ | 77°52′2.064″ | 260.55 |
| W9 | Saidpura | 29°49′14.016″ | 77°52′32.5128″ | 264.19 |
| W10 | Mandir | 29°51′1.08″ | 77°53′16.008″ | 254.16 |
| W11 | Dhandera | 29°49′57.108″ | 77°54′24.516″ | 265.74 |
| W12 | Rail Gate | 29°50′28.716″ | 77°54′24.4224″ | 266.58 |
| W13 | DS Barrack | 29°51′40.392″ | 77°53′25.5912″ | 266.55 |
| W14 | KV | 29°51′48.996″ | 77°54′29.196″ | 254.22 |
Table 6. Identification and application of hyperparameters for RF, RT, M5P, M5Rules, FFNN, GBM, AdaBoost, SVM_PUK, and SVM_RBF for the prediction of energy dissipation of a stepped channel.
| Model | Hyperparameter | General range | Selected value |
| --- | --- | --- | --- |
| GBM | n_estimators | [50, 100, 150] | 50 |
| GBM | Learning rate | [0.01, 0.1, 0.2] | 0.1 |
| GBM | max depth | [3, 5, 7] | 3 |
| GBM | subsample | [0.8, 0.9, 1.0] | 0.8 |
| FFNN | Hidden_layer_sizes | [(50, 20), (50, 30), (100, 100)] | (100, 100) |
| FFNN | Activation | [tanh, relu, logistic] | relu |
| FFNN | Solver | [adam, sgd, lbfgs] | adam |
| FFNN | Alpha | [0.0001, 0.01, 0.1] | 0.0001 |
| FFNN | Batch size | [32, 64, 100] | 100 |
| FFNN | Learning_rate | [constant, invscaling, adaptive] | constant |
| FFNN | Learning rate_initial | [0.001, 0.01, 0.1] | 0.001 |
| FFNN | Power_t | [0.5, 0.7, 0.9] | 0.5 |
| FFNN | Max_iter | [200, 500, 1000] | 500 |
| FFNN | Early_stopping | [True, False] | True |
| AdaBoost | n_estimators | [50, 100, 150] | 50 |
| AdaBoost | Learning rate | [0.01, 0.1, 0.2] | 0.01 |
| AdaBoost | loss | [linear, square, exponential] | exponential |
| AdaBoost | max_depth | [3, 5, 7] | 5 |
| AdaBoost | min samples split | [2, 5, 10] | 10 |
| SVM_PUK | C (regularization parameter) | [0.1, 1.0, 10.0] | 1.0 |
| SVM_PUK | ω (independent term of PUK) | [0, 1, 2] | 1.0 |
| SVM_PUK | γ (PUK kernel coefficient) | [0.001, 0.01, 0.1] | 0.001 |
| SVM_PUK | seed | [0, 1, 2] | 1 |
| SVM_RBF | C (regularization parameter) | [0.1, 1.0, 10.0] | 1.0 |
| SVM_RBF | ω (independent term) | [0, 1, 2] | 0 |
| SVM_RBF | γ (kernel coefficient) | [0.001, 0.01, 0.1] | 0.001 |
| SVM_RBF | seed | [0, 1, 2] | 1 |
| M5P | Batch size | [50, 100, 150] | 129 |
| M5P | minNumInstances | [2, 4, 7] | 6 |
| M5P | unpruned | [True, False] | False |
| M5Rules | Batch size | [50, 100, 150] | 129 |
| M5Rules | minNumInstances | [2, 4, 7] | 6 |
| M5Rules | unpruned | [True, False] | False |
| RF | K (no. of selected attributes) | [0, 1, 2] | 0 |
| RF | Batch size | [50, 100, 150] | 100 |
| RF | I (min no. of instances per leaf) | [50, 100, 150] | 200 |
| RF | max depth | [5, 7, 8] | 6 |
| RF | min num (m) | [0, 1, 2] | 1 |
| RF | minVarianceProp | [0.001, 0.01, 0.1] | 0.001 |
| RF | seed | [1, 2, 3] | 1 |
| RT | K (no. of selected attributes) | [0, 1, 2] | 0 |
| RT | Batch size | [50, 100, 150] | 100 |
| RT | I (min no. of instances per leaf) | [50, 100, 150] | 100 |
| RT | max depth | [3, 7, 8] | 3 |
| RT | min num (m) | [0, 1, 2] | 1 |
| RT | minVarianceProp | [0.001, 0.01, 0.1] | 0.01 |
| RT | seed | [1, 2, 3] | 1 |
Table 7. Identification and application of hyperparameters for RF, RT, M5P, M5Rules, FFNN, GBM, AdaBoost, SVM_PUK, and SVM_RBF for prediction of the trapping efficiency of the vortex tube silt ejector.
| Model | Hyperparameter | General range | Selected value |
| --- | --- | --- | --- |
| GBM | n_estimators | [50, 100, 150] | 150 |
| GBM | Learning rate | [0.01, 0.1, 0.2] | 0.1 |
| GBM | max depth | [3, 5, 7] | 3 |
| GBM | subsample | [0.8, 0.9, 1.0] | 0.9 |
| FFNN | Hidden_layer_sizes | [(50, 20), (50, 30), (100, 100)] | (50, 30) |
| FFNN | Activation | [tanh, relu, logistic] | relu |
| FFNN | Solver | [adam, sgd, lbfgs] | adam |
| FFNN | Alpha | [0.0001, 0.01, 0.1] | 0.0001 |
| FFNN | Batch size | [32, 64, 100] | 64 |
| FFNN | Learning_rate | [constant, invscaling, adaptive] | constant |
| FFNN | Learning rate_initial | [0.001, 0.01, 0.1] | 0.001 |
| FFNN | Power_t | [0.5, 0.7, 0.9] | 0.5 |
| FFNN | Max_iter | [200, 500, 1000] | 1000 |
| FFNN | Early_stopping | [True, False] | True |
| AdaBoost | n_estimators | [50, 100, 150] | 50 |
| AdaBoost | Learning rate | [0.01, 0.1, 0.2] | 0.2 |
| AdaBoost | loss | [linear, square, exponential] | exponential |
| AdaBoost | max_depth | [3, 5, 7] | 5 |
| AdaBoost | min samples split | [2, 5, 10] | 2 |
| SVM_PUK | C (regularization parameter) | [0.1, 1.0, 10.0] | 1.0 |
| SVM_PUK | ω (independent term of PUK) | [0, 1, 2] | 0 |
| SVM_PUK | γ (PUK kernel coefficient) | [0.001, 0.01, 0.1] | 0.001 |
| SVM_PUK | seed | [0, 1, 2] | 1 |
| SVM_RBF | C (regularization parameter) | [0.1, 1.0, 10.0] | 1.0 |
| SVM_RBF | ω (independent term) | [0, 1, 2] | 0 |
| SVM_RBF | γ (kernel coefficient) | [0.001, 0.01, 0.1] | 0.001 |
| SVM_RBF | seed | [0, 1, 2] | 1 |
| M5P | Batch size | [50, 100, 150] | 100 |
| M5P | minNumInstances | [2, 4, 7] | 4 |
| M5P | unpruned | [True, False] | False |
| M5Rules | Batch size | [50, 100, 150] | 100 |
| M5Rules | minNumInstances | [2, 4, 7] | 7 |
| M5Rules | unpruned | [True, False] | False |
| RF | K (no. of selected attributes) | [0, 1, 2] | 0 |
| RF | Batch size | [50, 100, 150] | 100 |
| RF | I (min no. of instances per leaf) | [50, 100, 150] | 100 |
| RF | max depth | [5, 7, 8] | 8 |
| RF | min num (m) | [0, 1, 2] | 1 |
| RF | minVarianceProp | [0.001, 0.01, 0.1] | 0.001 |
| RF | seed | [1, 2, 3] | 1 |
| RT | K (no. of selected attributes) | [0, 1, 2] | 0 |
| RT | Batch size | [50, 100, 150] | 100 |
| RT | I (min no. of instances per leaf) | [50, 100, 150] | 100 |
| RT | max depth | [5, 7, 8] | 8 |
| RT | min num (m) | [0, 1, 2] | 1 |
| RT | minVarianceProp | [0.001, 0.01, 0.1] | 0.001 |
| RT | seed | [1, 2, 3] | 1 |
Table 8. Identification and application of hyperparameters for RF, RT, M5P, M5Rules, FFNN, GBM, AdaBoost, SVM_PUK, and SVM_RBF for prediction of GWL.

| Model | Hyperparameter | General Range | Selected Value |
|---|---|---|---|
| GBM | n_estimators | [50, 100, 150] | 50 |
| | learning rate | [0.01, 0.1, 0.2] | 0.1 |
| | max depth | [3, 5, 7] | 3 |
| | subsample | [0.8, 0.9, 1.0] | 0.8 |
| FFNN | hidden_layer_sizes | [(50, 20), (50, 30), (100, 100)] | (100, 100) |
| | activation | ['tanh', 'relu', 'logistic'] | relu |
| | solver | ['adam', 'sgd', 'lbfgs'] | adam |
| | alpha | [0.0001, 0.01, 0.1] | 0.0001 |
| | batch size | [32, 64, 100] | 100 |
| | learning_rate | ['constant', 'invscaling', 'adaptive'] | constant |
| | learning_rate_init | [0.001, 0.01, 0.1] | 0.001 |
| | power_t | [0.5, 0.7, 0.9] | 0.5 |
| | max_iter | [200, 500, 1000] | 500 |
| | early_stopping | [True, False] | True |
| AdaBoost | n_estimators | [50, 100, 150] | 50 |
| | learning rate | [0.01, 0.1, 0.2] | 0.01 |
| | loss | [linear, square, exponential] | exponential |
| | max_depth | [3, 5, 7] | 5 |
| | min sample split | [2, 5, 10] | 10 |
| SVM_PUK | C (regularization parameter) | [0.1, 1.0, 10.0] | 1.0 |
| | ω (independent term of PUK) | [0, 1, 2] | 1 |
| | γ (coefficient for PUK kernel) | [0.001, 0.01, 0.1] | 0.001 |
| | seed | [0, 1, 2] | 1 |
| SVM_RBF | C (regularization parameter) | [0.1, 1.0, 10.0] | 1.0 |
| | ω (independent term) | [0, 1, 2] | 1 |
| | γ (coefficient for RBF kernel) | [0.001, 0.01, 0.1] | 0.01 |
| | seed | [0, 1, 2] | 1 |
| M5P | batch size | [50, 100, 150] | 150 |
| | minNumInstances | [2, 4, 7] | 4 |
| | unpruned | [True, False] | False |
| M5Rules | batch size | [50, 100, 150] | 100 |
| | minNumInstances | [2, 4, 7] | 4 |
| | unpruned | [True, False] | False |
| RF | K (no. of selected attributes) | [0, 1, 2] | 0 |
| | batch size | [50, 100, 150] | 100 |
| | I (min no. of instances per leaf) | [50, 100, 150] | 100 |
| | max depth | [5, 7, 8] | 2 |
| | min num (m) | [0, 1, 2] | 1 |
| | minVarianceProp | [0.001, 0.01, 0.1] | 0.001 |
| | seed | [1, 2, 3] | 2 |
| RT | K (no. of selected attributes) | [0, 1, 2] | 0 |
| | batch size | [50, 100, 150] | 100 |
| | I (min no. of instances per leaf) | [50, 100, 150] | 100 |
| | max depth | [5, 7, 8] | 5 |
| | min num (m) | [0, 1, 2] | 1 |
| | minVarianceProp | [0.001, 0.01, 0.1] | 0.001 |
| | seed | [1, 2, 3] | 2 |
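The grids in Table 8 can be searched exhaustively with cross-validation. A minimal sketch, assuming scikit-learn's `GradientBoostingRegressor` and synthetic stand-in data (the variable names and the data are illustrative, not the study's groundwater records):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the predictor/target records used in the study.
rng = np.random.default_rng(0)
X = rng.random((200, 4))
y = X @ np.array([0.4, 0.3, 0.2, 0.1]) + 0.01 * rng.standard_normal(200)

# "General Range" column of the GBM rows in Table 8.
param_grid = {
    "n_estimators": [50, 100, 150],
    "learning_rate": [0.01, 0.1, 0.2],
    "max_depth": [3, 5, 7],
    "subsample": [0.8, 0.9, 1.0],
}

# Exhaustive grid search with 3-fold cross-validation, scored by R2.
search = GridSearchCV(GradientBoostingRegressor(random_state=0),
                      param_grid, cv=3, scoring="r2")
search.fit(X, y)
print(search.best_params_)  # the best combination found on this data
```

On the study's real datasets the same procedure would return the "Selected Value" column of Table 8; the combination printed here depends only on the synthetic data above.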
Table 9. Evaluation metric details of various AI techniques for stepped channels.

| Models | R2 (Train) | NSE (Train) | MAE (Train) | RMSE (Train) | R2 (Test) | NSE (Test) | MAE (Test) | RMSE (Test) |
|---|---|---|---|---|---|---|---|---|
| GBM | 0.998 | 0.999 | 0.0017 | 0.00202 | 0.998 | 0.999 | 0.00160 | 0.00182 |
| SVM_PUK | 0.944 | 0.937 | 0.0094 | 0.0182 | 0.930 | 0.901 | 0.016 | 0.0246 |
| FFNN | 0.919 | 0.918 | 0.0178 | 0.0207 | 0.889 | 0.860 | 0.0289 | 0.0313 |
| AdaBoost | 0.968 | 0.969 | 0.0105 | 0.0127 | 0.810 | 0.806 | 0.0310 | 0.0369 |
| M5P | 0.876 | 0.870 | 0.0174 | 0.0261 | 0.812 | 0.754 | 0.0223 | 0.0416 |
| RF | 0.970 | 0.747 | 0.0091 | 0.0129 | 0.829 | 0.711 | 0.0227 | 0.0353 |
| M5Rules | 0.923 | 0.921 | 0.0167 | 0.0203 | 0.731 | 0.710 | 0.0272 | 0.0452 |
| RT | 0.751 | 0.741 | 0.0275 | 0.0365 | 0.714 | 0.711 | 0.0294 | 0.0451 |
| SVM_RBF | 0.759 | 0.777 | 0.0213 | 0.0342 | 0.780 | 0.701 | 0.0296 | 0.046 |
Table 10. Evaluation metric details of various AI techniques for trap efficiency (TE).

| Models | R2 (Train) | NSE (Train) | MAE (Train) | RMSE (Train) | R2 (Test) | NSE (Test) | MAE (Test) | RMSE (Test) |
|---|---|---|---|---|---|---|---|---|
| GBM | 0.997 | 0.997 | 0.602 | 0.7821 | 0.997 | 0.998 | 0.531 | 0.769 |
| AdaBoost | 0.990 | 0.989 | 1.328 | 1.7199 | 0.9512 | 0.95 | 2.981 | 4.363 |
| SVM_PUK | 0.973 | 0.973 | 1.827 | 2.8157 | 0.956 | 0.951 | 3.105 | 4.307 |
| RT | 0.996 | 0.996 | 0.699 | 1.0915 | 0.946 | 0.941 | 3.323 | 4.736 |
| RF | 0.991 | 0.991 | 1.144 | 1.1449 | 0.943 | 0.939 | 3.376 | 4.824 |
| M5Rules | 0.948 | 0.947 | 3.034 | 3.9235 | 0.928 | 0.917 | 4.384 | 5.637 |
| M5P | 0.876 | 0.859 | 4.540 | 6.4151 | 0.851 | 0.834 | 5.512 | 5.512 |
| FFNN | 0.861 | 0.861 | 4.517 | 6.3453 | 0.812 | 0.811 | 6.126 | 8.482 |
| SVM_RBF | 0.812 | 0.597 | 6.115 | 10.8331 | 0.703 | 0.49 | 8.252 | 13.941 |
Table 11. Evaluation metric details of various AI techniques for prediction of GWL.

| Models | R2 (Train) | NSE (Train) | MAE (Train) | RMSE (Train) | R2 (Test) | NSE (Test) | MAE (Test) | RMSE (Test) |
|---|---|---|---|---|---|---|---|---|
| GBM | 0.835 | 0.821 | 1.012 | 1.723 | 0.828 | 0.815 | 0.816 | 1.994 |
| SVM_PUK | 0.056 | −0.02 | 2.225 | 4.117 | 0.17 | −0.185 | 2.907 | 5.213 |
| FFNN | 0.021 | −22.189 | 10.978 | 19.630 | 0.001 | −63.572 | 15.889 | 38.476 |
| RF | 0.501 | 0.488 | 1.823 | 2.916 | 0.486 | 0.471 | 2.104 | 3.482 |
| M5P | 0.484 | 0.403 | 1.867 | 3.148 | 0.01 | −0.191 | 3.097 | 5.226 |
| AdaBoost | 0.784 | 0.769 | 1.143 | 1.958 | 0.5163 | 0.508 | 1.813 | 3.359 |
| RT | 0.775 | 0.775 | 1.024 | 1.933 | 0.514 | 0.508 | 2.206 | 3.358 |
| SVM_RBF | 0.028 | −0.064 | 2.418 | 4.205 | 0.006 | −0.202 | 3.064 | 5.249 |
| M5Rules | 0.625 | 0.567 | 1.648 | 2.682 | 0.197 | 0.166 | 2.765 | 4.3729 |
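Tables 9–11 report the same four metrics throughout. A short sketch of how they are conventionally computed, assuming R2 is the squared Pearson correlation and NSE the Nash–Sutcliffe efficiency (the function name `evaluate` is illustrative):

```python
import numpy as np

def evaluate(obs, pred):
    """R2, NSE, MAE and RMSE for observed vs. predicted values."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    r = np.corrcoef(obs, pred)[0, 1]                  # Pearson correlation
    nse = 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)
    mae = np.mean(np.abs(obs - pred))
    rmse = np.sqrt(np.mean((obs - pred) ** 2))
    return {"R2": r ** 2, "NSE": nse, "MAE": mae, "RMSE": rmse}

# A perfect prediction yields the ideal values: R2 = 1, NSE = 1, MAE = 0, RMSE = 0.
perfect = evaluate([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0])
```

The distinction matters for reading the tables: R2 only measures correlation, while NSE penalizes bias as well, which is why FFNN in Table 11 can have a near-zero R2 yet a strongly negative NSE.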
Table 12. Comparison of existing models and the AI-based GBM model for prediction of H/Hmax.

| Models | Range of Parameters | Input Parameters | R2 | MAE | RMSE |
|---|---|---|---|---|---|
| [10] | Nappe flow with fully developed hydraulic jump | H/Hmax = f(yc/h, Hdam/yc) | 0.400 | 0.0817 | 0.0999 |
| [22] | θ = 22° to 40°, yc/h < 0.8 | H/Hmax = f(yc/h, N, α) | 0.693 | 0.0555 | 0.0651 |
| [21] | θ = 3.4°, 4° | H/Hmax = f(yc/h) | 0.448 | 0.160 | 0.1925 |
| Present study (GBM) | θ = 14° to 26.6°, yc/h ≤ 1.368 | H/Hmax = f(yc/h, ΔHcha/yc, w/yc, tan θ, D/N) | 0.998 | 0.0014 | 0.0019 |
Table 13. Comparison of existing models and the AI-based GBM model for the prediction of trapping efficiency.

| Models | Input Parameters | R2 | MAE | RMSE |
|---|---|---|---|---|
| [89] | η = 1.74 + ln[(d/u)^0.11 (γs/γf)^0.88 Q^0.58] | −0.4196 | 38.351 | 42.497 |
| [90] | η = 73.4 + 8.0 log(ω/W) | −0.2758 | 32.958 | 34.734 |
| [42] | η = 192.08 C^0.392 d50^0.5983 R^0.3766 | 0.6408 | 12.434 | 15.957 |
| Present study (GBM) | η = f(d50, t/d, C, R) | 0.994 | 0.0031 | 0.0079 |

Share and Cite

MDPI and ACS Style

Mishra, R.; Kumar, S.; Sarkar, H.; Ojha, C.S.P. Utility of Certain AI Models in Climate-Induced Disasters. World 2024, 5, 865-900. https://doi.org/10.3390/world5040045
