Utility of Certain AI Models in Climate-Induced Disasters
Abstract
1. Introduction
2. Materials and Methods
2.1. Data Collection (Part 1)
2.2. Data Collection (Part 2)
2.3. Data Collection (Part 3)
2.4. AI-Based Model
2.4.1. Random Forests (RF)
2.4.2. Random Tree (RT)
2.4.3. M5P Model
2.4.4. M5Rules Model
2.4.5. AdaBoost Model
2.4.6. Feed-Forward Neural Network (FFNN) Model
2.4.7. Gradient Boosting Machine (GBM)
2.4.8. Support Vector Machines (SVM)–Radial Basis Function (SVM_RBF)
2.4.9. SVM-Pearson VII Universal Kernel (PUK)
2.5. Modelling Process
3. Results
3.1. Prediction of Energy Dissipation of a Stepped Channel Using AI Models (Part 1)
3.2. Prediction of Sediment Trapping Efficiency Using AI Models (Part 2)
3.3. Prediction of Groundwater Level Using AI Models (Part 3)
4. Discussion
4.1. Prediction of Energy Dissipation of the Stepped Channel Using AI Tools
- A hydrological study of the occurrence of extreme events is required to obtain an optimum design discharge (Q) for constructing a stepped channel at a particular site;
- Knowing the maximum discharge and the height of the channel helps in deciding the step geometry and the number of steps;
- The known input parameters for designing a stepped channel are w, yc, and slope, and the output at different locations can then be obtained using the regression rules in Appendix A.
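The rule-based design procedure above can be sketched in code. This is a minimal illustration of how M5Rules-style piecewise regression rules are applied in order of priority; the thresholds and linear coefficients below are hypothetical placeholders, not the fitted rules of Appendix A.

```python
# Hypothetical sketch of M5Rules-style prediction: each rule pairs a
# condition on the dimensionless inputs with a linear model, and the
# first matching rule fires. All numbers are illustrative placeholders.

def predict_energy_dissipation(DN, slope, yc_h):
    """Return a relative energy-dissipation estimate from ordered rules."""
    rules = [
        # (condition, linear model) -- placeholder splits and coefficients
        (lambda: DN > 0.001, lambda: 0.95 - 2.0 * DN - 0.1 * yc_h),
        (lambda: slope > 0.4, lambda: 0.90 - 0.2 * yc_h),
    ]
    for cond, model in rules:
        if cond():
            return model()
    return 0.88  # default rule: fall back to the sample mean

print(predict_energy_dissipation(DN=0.002, slope=0.45, yc_h=0.4))  # 0.906
```

The ordering matters: like M5Rules output, each rule only applies to cases not already covered by an earlier rule.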
4.2. Prediction of Silt Trapping Efficiency Using AI Tools
- Designing a sediment removal structure in the canal, particularly a vortex tube, may help reduce the sediment load of the canal while maintaining its design discharge;
- Flood event data may help obtain an optimum design discharge (Q) for sediment removal structures such as vortex tubes, and may also inform the design sediment-trapping capacity of hydraulic structures on canals during such events;
- Knowing the design discharge and the sediment flux in the canal helps in deciding the vortex tube length, diameter, etc.;
- The known input parameters for designing a vortex tube ejector are t/d, sediment size, and sediment concentration, and the output is the trapping efficiency, T.E. = f (d50, C, R, t/d), which can be obtained using the rules in Appendix B.
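The relation T.E. = f (d50, C, R, t/d) can likewise be expressed as a short ordered rule set. The splits and linear models below are hypothetical placeholders standing in for the fitted Appendix B rules.

```python
# Illustrative evaluation of Appendix B style rules for trapping
# efficiency T.E. = f(d50, C, R, t/d). The three rules below are
# placeholders, not the actual fitted M5Rules output.

def trapping_efficiency(d50, C, R, t_d):
    """Trapping efficiency (%) from sediment size d50 (m),
    concentration C (ppm), extraction ratio R (%), and t/d."""
    if d50 > 0.0003:            # Rule 1 (hypothetical split on sediment size)
        return 40.0 + 1.2 * R + 0.01 * C
    if t_d > 0.25:              # Rule 2 (hypothetical split on t/d)
        return 35.0 + 1.0 * R
    return 30.0 + 0.8 * R       # Rule 3 (default linear model)

print(trapping_efficiency(d50=0.0004, C=750, R=19.4, t_d=0.2))  # 70.78
```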
4.3. Prediction of the Groundwater Level Using AI Tools
5. Conclusions
- Among the various climate change mitigation approaches, this study examines providing stepped channels in hilly terrain to drain high-velocity rainwater. The AI modeling approach in the field of water resources engineering is successfully implemented in this part of the study. The best input parameters that may help design the stepped channels are DN, Hcha/yc, W/yc, slope (α), and yc/h;
- The GBM model performs well for modeling purposes, whereas the M5Rules model is further useful for predicting energy dissipation under any geometric condition through its regression rules;
- The second part of this study assesses the capability of nine artificial intelligence techniques, i.e., RF, RT, M5P, GBM, M5Rules, FFNN, AdaBoost, SVM_PUK, and SVM_RBF, in modeling the trapping efficiency of the vortex tube silt ejector using experimental observations. A fair amount of data from various field sites would be required to reach firmer conclusions in future work;
- The GBM model outperformed the other invoked AI-based models as well as conventional empirical models in computing the trapping efficiency of the vortex tube silt ejector, followed by the RF model. The study’s findings showed that estimating the trapping efficiency of the vortex tube silt ejector with conventional models produces very high errors;
- The third part of this study demonstrated the effectiveness of GBM and Random Tree models in predicting groundwater levels using multiple well data, despite the challenges of uneven data distribution. The insights gained from this research can inform future groundwater storage evaluation approaches, emphasizing the need for high-quality evenly distributed data;
- Applying these models can significantly contribute to global efforts to manage groundwater resources sustainably, thereby reducing the adverse impacts of climate change on water availability. By leveraging advanced machine learning techniques, we can develop more resilient and adaptive groundwater management strategies that ensure water security for future generations.
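As an illustration of the GBM setup referred to in the conclusions, the sketch below uses scikit-learn with the selected Part 1 hyperparameters from the tuning tables (n_estimators = 50, learning_rate = 0.1, max_depth = 3, subsample = 0.8). The synthetic data merely stands in for the experimental observations; the feature layout and target relation are assumptions for the demo.

```python
# Minimal GBM sketch with the selected Part 1 hyperparameters.
# Synthetic data: five columns stand in for DN, Hcha/yc, W/yc, slope, yc/h.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((200, 5))
y = 0.9 - 0.2 * X[:, 4] + 0.01 * rng.standard_normal(200)  # toy target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
gbm = GradientBoostingRegressor(n_estimators=50, learning_rate=0.1,
                                max_depth=3, subsample=0.8, random_state=0)
gbm.fit(X_tr, y_tr)
print(round(gbm.score(X_te, y_te), 3))  # coefficient of determination R^2
```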
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A
- Rule 1
- Rule 2
- Rule 3
- Rule 4
- Rule 5
- Rule 6: for DN > 0.001
- Rule 7
- Rule 8
- Rule 9
Appendix B
- Rule 1
- Rule 2
- Rule 3
References
- Collins, M.; An, S.I.; Cai, W.; Ganachaud, A.; Guilyardi, E.; Jin, F.F.; Jochum, M.; Lengaigne, M.; Power, S.; Timmermann, A.; et al. The impact of global warming on the tropical Pacific Ocean and El Niño. Nat. Geosci. 2010, 3, 391–397.
- Soden, B.J.; Held, I.M. An Assessment of Climate Feedbacks in Coupled Ocean–Atmosphere Models. J. Clim. 2006, 19, 3354–3360.
- Vecchi, G.A.; Soden, B.J. Global Warming and the Weakening of the Tropical Circulation. J. Clim. 2007, 20, 4316–4340.
- Wang, C.; Xie, S.-P.; Carton, J.A. A Global Survey of Ocean–Atmosphere Interaction and Climate Variability; Wang, C., Xie, S.P., Carton, J.A., Eds.; AGU Geophysical Monograph Series; Blackwell Publishing Ltd.: Oxford, UK, 2004; pp. 1–19.
- Kasiviswanathan, K.S.; Soundharajan, D.; Sandhya, P.; Jianxun, H.; Ojha, C.S.P. Modeling and Mitigation Measures for Managing Extreme Hydrometeorological Events under a Warming Climate; Elsevier: Amsterdam, The Netherlands, 2023; ISBN 9780443186417.
- Rao, Y.S.; Tian, C.Z.; Ojha, C.S.P.; Gurjar, B.; Tyagi, R.D.; Kao, C.M. Climate Change Modeling, Mitigation, and Adaptation; ASCE: Reston, VA, USA, 2013; ISBN 978-0-7844-1271-8.
- Karki, R.; ul Hasson, S.; Gerlitz, L.; Talchabhadel, R.; Schenk, E.; Schickoff, U.; Böhner, J. WRF-based simulation of an extreme precipitation event over the Central Himalayas: Atmospheric mechanisms and their representation by microphysics parameterization schemes. Atmos. Res. 2018, 214, 21–35.
- Bhardwaj, A.; Wasson, R.J.; Chow, W.T.L.; Ziegler, A.D. High-intensity monsoon rainfall variability and its attributes: A case study for Upper Ganges Catchment in the Indian Himalaya during 1901–2013. Nat. Hazards 2021, 105, 2907–2936.
- Gouda, K.C.; Rath, S.S.; Singh, N.; Ghosh, S.; Lata, R. Extreme rainfall event analysis over the state of Himachal Pradesh in India. Theor. Appl. Climatol. 2022, 151, 1103–1111.
- Chanson, H. Hydraulics of nappe flow regime above stepped chutes and spillways. Aust. Civil Eng. Trans. 1994, 36, 69–76.
- Chanson, H. Hydraulic Design of Stepped Cascades, Channels, Weirs and Spillways; Pergamon: Oxford, UK, 1994.
- Peyras, L.; Royet, P.; Degoutte, G. Flow and Energy Dissipation over Stepped Gabion Weirs. J. Hydraul. Eng. 1992, 118, 707–717.
- Chanson, H. Prediction of the transition nappe/skimming flow on a stepped channel. J. Hydraul. Res. 1996, 34, 421–429.
- Ohtsu, I.; Yasuda, Y. Characteristics of Flow Conditions on Stepped Channels. In Proceedings of the 27th IAHR Congress, Theme D, San Francisco, CA, USA, 10–15 August 1997; pp. 583–588.
- Chanson, H.; Toombes, L. Hydraulics of stepped chutes: The transition flow. J. Hydraul. Res. 2004, 42, 43–54.
- Boes, R.M.; Hager, W.H. Hydraulic design of stepped spillways. J. Hydraul. Eng. 2003, 129, 671–679.
- Chamani, M.R.; Rajaratnam, N. Characteristics of skimming flow over stepped spillways. J. Hydraul. Eng. 1999, 125, 361–368.
- Essery, I.T.S.; Horner, M.W. The Hydraulic Design of Stepped Spillways, 2nd ed.; CIRIA Report No. 33; CIRIA (Construction Industry Research and Information Association): London, UK, 1978.
- Pinheiro, A.N.; Fael, C.S. Nappe Flow in Stepped Channels—Occurrence and Energy Dissipation. In International Workshop on Hydraulics of Stepped Spillways; Balkema: Zurich, Switzerland, 2000; pp. 119–126.
- Toombes, L.; Wagne, C.; Chanson, H. Flow Patterns in Nappe Flow Regime Down Low Gradient Stepped Chutes. J. Hydraul. Res. 2008, 46, 4–14.
- Chanson, H.; Toombes, L. Energy dissipation and air entrainment in a stepped storm waterway: An experimental study. J. Irrig. Drain. Eng. 2002, 128, 305–315.
- Chamani, M.R.; Rajaratnam, N. Jet flow on stepped spillways. J. Hydraul. Eng. 1994, 120, 254–259.
- Felder, S.; Geuzaine, M.; Dewals, B.; Erpicum, S. Nappe flows on a stepped chute with prototype-scale steps height: Observations of flow patterns, air-water flow properties, energy dissipation and dissolved oxygen. J. Hydro-Environ. Res. 2019, 27, 1–19.
- Horner, M.W. An Analysis of Flow on Cascades of Steps. Ph.D. Thesis, University of Birmingham, Birmingham, UK, 1969; p. 357.
- Salmasi, F.; Özger, M. Neuro-fuzzy approach for estimating energy dissipation in skimming flow over stepped spillways. Arab. J. Sci. Eng. 2014, 39, 6099–6108.
- Parsaie, A.; Haghiabi, A.H.; Saneie, M.; Torabi, H. Prediction of energy dissipation on the stepped spillway using the multivariate adaptive regression splines. J. Hydraul. Eng. 2016, 22, 281–292.
- Parsaie, A.; Haghiabi, A.H.; Saneie, M.; Torabi, H. Applications of soft computing techniques for prediction of energy dissipation on stepped spillways. Neural Comput. Appl. 2018, 29, 1393–1409.
- Jiang, L.; Diao, M.; Xue, H.; Sun, H. Energy Dissipation Prediction for Stepped Spillway Based on Genetic Algorithm–Support Vector Regression. J. Irrig. Drain. Eng. 2018, 144, 04018003.
- Parsaie, A.; Haghiabi, A.H.H. Evaluation of energy dissipation on stepped spillway using evolutionary computing. Appl. Water Sci. 2019, 9, 144.
- Pujari, S.; Kaushik, V.; Kumar, S.A. Prediction of Energy Dissipation over Stepped Spillway with Baffles Using Machine Learning Techniques. Civ. Eng. Archit. 2023, 11, 2377–2391.
- Orak, S.J.; Asareh, A. Effect of gradation on sediment extraction (trapping) efficiency in structures of vortex tube with different angles. Adv. Environ. Biol. 2015, 31, 53–58.
- Parshall, R.L. Model and prototype studies of sand traps. Trans. Am. Soc. Civ. Eng. 1952, 117, 204–212.
- Blench, T. Discussion of model and prototype studies of sand traps, by R.L. Parshall. Trans. Am. Soc. Civ. Eng. 1952, 117, 213.
- Ahmed, M. Final recommendations from experiments of silt ejector of DG Khan canal. In Hydraulics Research; IAHR: Thessaloniki, Greece, 1958.
- Robinson, A.R. Vortex tube sand trap. Trans. Am. Soc. Civ. Eng. 1962, 127, 391–433.
- Lawrence, P.; Sanmuganathan, K. Field verification of vortex tube design method. In Proceedings of the South-East Asian Regional Symposium on Problems of Soil Erosion and Sedimentation, Bangkok, Thailand, 27–29 January 1981; Tingsanchali, T., Eggers, H., Eds.
- Atkinson, E. Vortex-tube sediment extractors. I: Trapping efficiency. J. Hydraul. Eng. 1994, 120, 1110–1125.
- Atkinson, E. Vortex-tube sediment extractors. II: Design. J. Hydraul. Eng. 1994, 120, 1126–1138.
- Tiwari, N.K.; Sihag, P.; Kumar, S.; Ranjan, S. Prediction of trapping efficiency of vortex tube ejector. ISH J. Hydraul. Eng. 2018, 26, 59–67.
- Tiwari, N.K.; Sihag, P.; Singh, B.K.; Ranjan, S.; Singh, K.K. Estimation of Tunnel Desilter Sediment Removal Efficiency by ANFIS. Iran. J. Sci. Technol. Trans. Civ. Eng. 2019, 44, 959–974.
- Singh, B.; Sihag, P.; Singh, K.; Kumar, S. Estimation of trapping efficiency of a vortex tube silt ejector. Int. J. River Basin Manag. 2021, 19, 261–269.
- Singh, B.K.; Tiwari, N.K.; Singh, K.K. Support vector regression-based modeling of trapping efficiency of silt ejector. J. Indian Water Resour. Soc. 2016, 36, 41–49.
- Kumar, S.; Ojha, C.S.P.; Tiwari, N.K.; Ranjan, S. Exploring the potential of artificial intelligence techniques in prediction of the removal efficiency of vortex tube silt ejector. Int. J. Sediment Res. 2023, 38, 615–627.
- Kumar, M.; Sihag, P.; Kumar, S. Evaluation and analysis of trapping efficiency of vortex tube ejector using soft computing techniques. J. Indian Water Resour. Soc. 2019, 39, 1–9.
- Dangar, S.; Asoka, A.; Mishra, V. Causes and implications of groundwater depletion in India: A review. J. Hydrol. 2021, 596, 126103.
- Swain, S.; Taloor, A.K.; Dhal, L.; Sahoo, S.; Al-Ansari, N. Impact of climate change on groundwater hydrology: A comprehensive review and current status of the Indian hydrogeology. Appl. Water Sci. 2022, 12, 120.
- Bhattarai, N.; Lobell, D.B.; Singh, B.; Fishman, R.; Kustas, W.P.; Pokhrel, Y.; Jain, M. Warming temperatures exacerbate groundwater depletion rates in India. Sci. Adv. 2023, 9, eadi1401.
- Chandra, N.A.; Sahoo, S.N. Groundwater levels and resiliency mapping under land cover and climate change scenarios: A case study of Chitravathi basin in Southern India. Environ. Monit. Assess. 2023, 195, 1394.
- Das, S. Groundwater Sustainability, Security and equity: India today and tomorrow. J. Geol. Soc. India 2023, 99, 5–8.
- Tao, H.; Hameed, M.M.; Marhoon, H.A.; Zounemat-Kermani, M.; Heddam, S.; Kim, S.; Sulaiman, S.O.; Tan, M.L.; Sa’adi, Z.; Mehr, A.D.; et al. Groundwater level prediction using machine learning models: A comprehensive review. Neurocomputing 2022, 489, 271–308.
- Khan, J.; Lee, E.; Balobaid, A.S.; Kim, K. A comprehensive review of conventional, machine learning, and deep learning models for groundwater level (GWL) forecasting. Appl. Sci. 2023, 13, 2743.
- Afrifa, S.; Zhang, T.; Appiahene, P.; Varadarajan, V. Mathematical and machine learning models for groundwater level changes: A systematic review and bibliographic analysis. Future Internet 2022, 14, 259.
- Boo, K.B.W.; El-Shafie, A.; Othman, F.; Khan, M.M.H.; Birima, A.H.; Ahmed, A.N. Groundwater level forecasting with machine learning models: A review. Water Res. 2024, 252, 121249.
- Chen, Y.; Chen, W.; Chandra Pal, S.; Saha, A.; Chowdhuri, I.; Adeli, B.; Janizadeh, S.; Dineva, A.A.; Wang, X.; Mosavi, A. Evaluation efficiency of hybrid deep learning algorithms with neural network decision tree and boosting methods for predicting groundwater potential. Geocarto Int. 2021, 37, 5564–5584.
- Di Salvo, C. Improving results of existing groundwater numerical models using machine learning techniques: A review. Water 2022, 14, 2307.
- Jacob, T.; Bayer, R.; Chery, J.; Le Moigne, N. Time-lapse microgravity surveys reveal water storage heterogeneity of a karst aquifer. J. Geophys. Res. Solid Earth 2010, 115, B06402.
- Nhu, V.H.; Shahabi, H.; Nohani, E.; Shirzadi, A.; Al-Ansari, N.; Bahrami, S.; Nguyen, H. Daily water level prediction of Zrebar Lake (Iran): A comparison between M5P, random forest, random tree and reduced error pruning trees algorithms. ISPRS Int. J. Geo-Inf. 2020, 9, 479.
- Elbeltagi, A.; Pande, C.B.; Kouadri, S.; Islam, A.R.M.T. Applications of various data-driven models for the prediction of groundwater quality index in the Akot basin, Maharashtra, India. Environ. Sci. Pollut. Res. 2022, 29, 17591–17605.
- Xiong, J.; Guo, S.; Kinouchi, T. Leveraging machine learning methods to quantify 50 years of dwindling groundwater in India. Sci. Total Environ. 2022, 835, 155474.
- Mirhashemi, S.H.; Mirzaei, F.; Haghighat Jou, P.; Panahi, M. Evaluation of Four Tree Algorithms in Predicting and Investigating the Changes in Aquifer Depth. Water Resour. Manag. 2022, 36, 4607–4618.
- Masroor, M.; Rehman, S.; Sajjad, H.; Rahaman, M.H.; Sahana, M.; Ahmed, R.; Singh, R. Assessing the impact of drought conditions on groundwater potential in Godavari Middle Sub-Basin, India using analytical hierarchy process and random forest machine learning algorithm. Groundw. Sustain. Dev. 2021, 13, 100554.
- Masroor, M.; Sajjad, H.; Kumar, P.; Saha, T.K.; Rahaman, M.H.; Choudhari, P.; Saito, O. Novel ensemble machine learning modeling approach for groundwater potential mapping in Parbhani District of Maharashtra, India. Water 2023, 15, 419.
- Afan, H.A.; Ibrahem Ahmed Osman, A.; Essam, Y.; Ahmed, A.N.; Huang, Y.F.; Kisi, O.; El-Shafie, A. Modeling the fluctuations of groundwater level by employing ensemble deep learning techniques. Eng. Appl. Comput. Fluid Mech. 2021, 15, 1420–1439.
- Abdi, E.; Ali, M.; Santos, C.A.G.; Olusola, A.; Ghorbani, M.A. Enhancing Groundwater Level Prediction Accuracy Using Interpolation Techniques in Deep Learning Models. Groundw. Sustain. Dev. 2024, 26, 101213.
- Ahmadi, A.; Olyaei, M.; Heydari, Z.; Emami, M.; Zeynolabedin, A.; Ghomlaghi, A.; Sadegh, M. Groundwater level modeling with machine learning: A systematic review and meta-analysis. Water 2022, 14, 949.
- Singh, P.N. Chatra Canal, Nepal: Vortex Tube Field Measurements; Report No. OD55; Hydraulics Research: Wallingford, UK, 1983.
- Pathak, S.; Gupta, S.; Ojha, C.S.P. Assessment of groundwater vulnerability to contamination with ASSIGN index: A case study in Haridwar, Uttarakhand, India. J. Hazard. Toxic Radioact. Waste 2021, 25, 04020081.
- Breiman, L.; Friedman, J.; Olshen, R.; Stone, C. Classification and Regression Trees; Chapman and Hall/CRC: Boca Raton, FL, USA, 1984.
- Breiman, L. Using Adaptive Bagging to Debias Regressions; Technical Report 547; Statistics Dept. UCB: Berkeley, CA, USA, 1999; Available online: https://statistics.berkeley.edu/tech-reports/547 (accessed on 28 February 1999).
- Erdal, H.; Karahanoğlu, İ. Bagging ensemble models for bank profitability: An empirical research on Turkish development and investment banks. Appl. Soft Comput. 2016, 49, 861–867.
- Breiman, L. Random forests. In Machine Learning; Kluwer Academic Publishers: Dordrecht, The Netherlands, 2001; Volume 45, pp. 5–32.
- Ali, J.; Khan, R.; Ahmad, N.; Maqsood, I. Random forests and decision trees. Int. J. Comput. Sci. Issues 2012, 9, 272–278. Available online: https://www.researchgate.net/publication/259235118 (accessed on 1 September 2012).
- Sattari, M.T.; Pal, M.; Mirabbasi, R.; Abraham, J. Ensemble of M5 model tree-based modelling of sodium adsorption ratio. J. AI Data Min. 2018, 6, 69–78. Available online: https://jad.shahroodut.ac.ir/article_1015_957fdfdc9de0cc89dbb4339ccf806dc4.pdf (accessed on 31 March 2018).
- Quinlan, J.R. Learning with continuous classes. In Proceedings of the 5th Australian Joint Conference on Artificial Intelligence, Hobart, Tasmania, 16–18 November 1992; Volume 92, pp. 343–348. Available online: https://sci2s.ugr.es/keel/pdf/algorithm/congreso/1992-Quinlan-AI.pdf (accessed on 18 November 1992).
- Sharma, R.; Kumar, S.; Maheshwari, R. Comparative analysis of classification techniques in data mining using different datasets. Int. J. Comput. Sci. Mob. Comput. 2015, 4, 125–134. Available online: https://api.semanticscholar.org/CorpusID:33215569 (accessed on 31 December 2015).
- Bayzid, S.M.; Mohamed, Y.; Al-Hussein, M. Prediction of maintenance cost for road construction equipment: A case study. Can. J. Civ. Eng. 2016, 43, 480–492.
- Zhou, Z.H. Ensemble Methods: Foundations and Algorithms; CRC Press: Boca Raton, FL, USA, 2012; pp. 23–44.
- Schapire, R.E. Explaining adaboost. In Empirical Inference; Springer: Berlin/Heidelberg, Germany, 2013; pp. 37–52.
- Burgsteiner, H. Imitation learning with spiking neural networks and real-world devices. Eng. Appl. Artif. Intell. 2006, 19, 741–752.
- Bartolini, A.; Lombardi, M.; Milano, M.; Benini, L. Neuron Constraints to Model Complex Real-World Problems; Springer: Berlin/Heidelberg, Germany, 2011; pp. 115–129.
- Singh, U.; Rizwan, M.; Alaraj, M.; Alsaidan, L. A Machine Learning-Based Gradient Boosting Regression Approach for Wind Power Production Forecasting: A Step towards Smart Grid Environments. Energies 2021, 14, 5196.
- Han, S.; Qubo, C.; Meng, H. Parameter selection in SVM with RBF kernel function. In Proceedings of the World Automation Congress, Puerto Vallarta, Mexico, 24–28 June 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 1–4. Available online: https://ieeexplore.ieee.org/document/6321759 (accessed on 4 October 2012).
- Sihag, P.; Jain, P.; Kumar, M. Modelling of impact of water quality on recharging rate of stormwater filter system using various kernel function-based regression. Model. Earth Syst. Environ. 2018, 4, 61–68.
- Smola, A.J.; Schölkopf, B. A tutorial on support vector regression. Stat. Comput. 2004, 14, 199–222.
- Namadi, P.; He, M.; Sandhu, P. Modeling ion constituents in the Sacramento-San Joaquin Delta using multiple machine learning approaches. J. Hydroinform. 2023, 25, 2541–2560.
- Stone, M. Cross-validatory choice and assessment of statistical predictions. J. R. Stat. Soc. Ser. B Methodol. 1974, 36, 111–133.
- Geisser, S. The predictive sample reuse method with applications. J. Am. Stat. Assoc. 1975, 70, 320–328.
- Efron, B. Estimating the error rate of a prediction rule: Improvement on cross-validation. J. Am. Stat. Assoc. 1983, 78, 316–331.
- Curi, K.V.; Esen, I.I.; Velioglu, S.G. Vortex type solid liquid separator. Prog. Water Technol. 1979, 7, 183–190.
- Paul, T.C.; Sayal, S.K.; Sakhuja, V.S.; Dhillon, G.S. Vortex-settling basin design considerations. J. Hydraul. Eng. 1991, 117, 172–189.
- Kumar, N.; Rajagopalan, P.; Pankajakshan, P.; Bhattacharyya, A.; Sanyal, S.; Balachandran, J.; Waghmare, U.V. Machine learning constrained with dimensional analysis and scaling laws: Simple, transferable, and interpretable models of materials from small datasets. Chem. Mater. 2018, 31, 314–321.
- Sarkar, H.; Goriwale, S.S.; Ghosh, J.K.; Ojha, C.S.P.; Ghosh, S.K. Potential of machine learning algorithms in groundwater level prediction using temporal gravity data. Groundw. Sustain. Dev. 2024, 25, 101114.
Sl. No. | Reference | Shape | Unit Discharge q [m2/s] | h [cm] | Number of Steps | W [m] | Slope (α) | Flow Depth Measurement |
---|---|---|---|---|---|---|---|---|
1 | [18,24] | Flat | - | 2.9–50 | 8, 10, 20, 30 | 0.15, 0.3, 0.6 | 11.3–45 | Pitot tube |
2 | [19] | Flat | 0.004–0.057 | 5 | 10 | 0.7 | 14, 18.4 | Indirect method |
4 | [23] | Flat | 0.005–0.637 | 50 | 6 | 0.2 | 15 | Conductivity probe |
5 | Present study | Flat | 0.0089–0.0460 | 10 | 10 | 0.52 | 26.6, 23.12 | Conductivity probe |
Sl. No. | Reference | MIN | MAX | AVG | STD |
---|---|---|---|---|---|
1 | DN | 0.0001 | 0.098 | 0.002 | 0.008 |
2 | Hcha/yc | 4.673 | 96.355 | 23.204 | 15.132 |
3 | W/yc | 0.583 | 67.415 | 15.863 | 9.4332 |
4 | Slope (α) | 0.249 | 0.5 | 0.418 | 0.078 |
5 | yc/h | 0.096 | 1.368 | 0.4 | 0.224 |
6 | - | 0.592 | 0.973 | 0.88 | 0.076 |
Statistics | Units | Max | Min | Mean | Std. Dev. | Kurtosis | Skewness |
---|---|---|---|---|---|---|---|
Flow rate (Q) | m3/s | 0.1 | 0.1 | 0.1 | 0 | 0 | 0 |
t/d | - | 0.2 | 0.2 | 0.2 | 0 | 0 | 0 |
Concentration | ppm | 1000 | 500 | 750 | 250 | −2.0342 | 0 |
Extraction Ratio (R) | % | 26.42 | 13.38 | 19.3826 | 4.8935 | 3.7546 | 0.2725 |
Pipe diameter | m | 0.127 | 0.127 | 0.127 | 0 | 0 | 0 |
Flow depth | m | 0.2245 | 0.1079 | 0.1579 | 0.0492 | −1.5127 | 0.4735 |
Sediment size | m | 0.00058 | 0.00024 | 0.000398 | 0.00013 | −1.4276 | 0.2167 |
slope | - | 0.00171 | 0.00171 | 0.00171 | 0 | 0 | 0 |
Trap efficiency | % | 57.24 | 19.86 | 41.7132 | 17.8549 | 1.6574 | 1.5548 |
Statistics | Units | Max | Min | Mean | Std. Dev. | Kurtosis | Skewness |
---|---|---|---|---|---|---|---|
Flow rate (Q) | m3/s | 46 | 0.1 | 1.8184 | 7.8775 | 25.3658 | 5.1787 |
t/d | - | 0.3 | 0.2 | 0.2189 | 0.0393 | 0.5786 | 1.6034 |
Concentration | ppm | 1092 | 10.494 | 643.7491 | 347.2818 | −1.01508 | −0.4139 |
Extraction Ratio (R) | % | 26.42 | 3.7 | 18.0636 | 4.8935 | −0.0596 | −0.3849 |
Pipe diameter | m | 0.9 | 0.127 | 0.2111 | 0.1870 | 4.4253 | 2.2331 |
Flow depth | m | 3.2 | 0.1079 | 0.3109 | 0.5366 | 22.1232 | 4.7223 |
Channel width | m | 27 | 1.5 | 3.0141 | 4.7506 | 19.9868 | 4.4395 |
Sediment size | m | 0.00038 | 0.00015 | 0.000338 | 0.000089 | 1.2338 | −1.7750 |
slope | - | 0.00171 | 0.00019 | 0.001423 | 0.000595 | 0.58003 | −1.6037 |
Trap efficiency | % | 94 | 43.5 | 72.1071 | 16.9605 | −1.387 | −0.379 |
Station ID | Name | Geographical Information | ||
---|---|---|---|---|
Latitude (N) | Longitude (E) | Height of GWL Above MSL | ||
W1 | HYDW IITR | 29°52′6.312″ | 77°53′42.576″ | 254.01 |
W2 | Bajuberi | 29°54′5.112″ | 77°55′25.14″ | 259.23 |
W3 | Sherpur | 29°52′55.2″ | 77°54′50.508″ | 256.31 |
W4 | Adarsh Nagar | 29°52′32.052″ | 77°53′59.928″ | 256.84 |
W5 | Ibrahimpur | 29°53′34.368″ | 77°52′9.372″ | 267.02 |
W6 | Ramnagar | 29°52′33.996″ | 77°52′30.72″ | 267.9 |
W7 | Civil Line | 29°52′42.672″ | 77°53′16.512″ | 263.99 |
W8 | Ashafnagar | 29°49′30.252″ | 77°52′2.064″ | 260.55 |
W9 | Saidpura | 29°49′14.016″ | 77°52′32.5128″ | 264.19 |
W10 | Mandir | 29°51′1.08″ | 77°53′16.008″ | 254.16 |
W11 | Dhandera | 29°49′57.108″ | 77°54′24.516″ | 265.74 |
W12 | Rail Gate | 29°50′28.716″ | 77°54′24.4224″ | 266.58 |
W13 | DS Barrack | 29°51′40.392″ | 77°53′25.5912″ | 266.55 |
W14 | KV | 29°51′48.996″ | 77°54′29.196″ | 254.22 |
Model | Hyperparameter | Search Range | Selected Value |
---|---|---|---|
GBM | n_estimators Learning rate max depth subsample | [50, 100, 150] [0.01, 0.1, 0.2] [3, 5, 7] [0.8, 0.9, 1.0] | 50 0.1 3 0.8 |
FFNN | Hidden_layer_sizes Activation Solver Alpha Batch size Learning_rate Learning rate_initial Power_t Max_iter Early_stopping | [(50, 20), (50, 30), (100, 100)], [‘tanh’, ‘relu’, ‘logistic’], [‘adam’, ‘sgd’, ‘lbfgs’], [0.0001, 0.01, 0.1] [32, 64, 100] [‘constant’, ‘invscaling’, ‘adaptive’] [0.001, 0.01, 0.1] [0.5, 0.7, 0.9] [200, 500, 1000] [True, False] | [100 100] relu adam 0.0001 100 Constant 0.001 0.5 500 True |
ADABOOST | n_estimators Learning rate loss max_depth min sample split | [50, 100, 150] [0.01, 0.1, 0.2] [linear, square, exponential] [3, 5, 7] [2, 5, 10] | 50 0.01 Exponential 5 10 |
SVM_PUK | C (regularization parameter) ω (independent term of PUK kernel) γ (PUK kernel coefficient) seed | [0.1, 1.0, 10.0] [0, 1, 2] [0.001, 0.01, 0.1] [0, 1, 2] | 1.0 1.0 0.001 1 |
SVM_RBF | C (regularization parameter) ω (independent term) γ (RBF kernel coefficient) seed | [0.1, 1.0, 10.0] [0, 1, 2] [0.001, 0.01, 0.1] [0, 1, 2] | 1.0 0 0.001 1 |
M5P | Batch size minNumInstances unpruned | [50, 100, 150] [2, 4, 7] [True, False] | 129 6 False |
M5Rules | Batch size minNumInstances unpruned | [50, 100, 150] [2, 4, 7] [True, False] | 129 6 False |
RF | K (No of selected Attributes) Batch size I (Min No. of Instances per leaf) max Depth min num (m) minVarianceProp seed | [0, 1, 2] [50, 100, 150] [50, 100, 150] [5, 7, 8] [0, 1, 2] [0.001, 0.01, 0.1] [1, 2, 3] | 0 100 200 6 1 0.001 1 |
RT | K (No of selected Attributes) Batch size I (Min No. of Instances per leaf) max Depth min num (m) minVarianceProp seed | [0, 1, 2] [50, 100, 150] [50, 100, 150] [3, 7, 8] [0, 1, 2] [0.001, 0.01, 0.1] [1, 2, 3] | 0 100 100 3 1 0.01 1 |
Model | Hyperparameter | Search Range | Selected Value |
---|---|---|---|
GBM | n_estimators Learning rate max depth subsample | [50, 100, 150] [0.01, 0.1, 0.2] [3, 5, 7] [0.8, 0.9, 1.0] | 150 0.1 3 0.9 |
FFNN | Hidden_layer_sizes Activation Solver Alpha Batch size Learning_rate Learning rate_initial Power_t Max_iter Early_stopping | [(50, 20), (50, 30), (100, 100)], [‘tanh’, ‘relu’, ‘logistic’], [‘adam’, ‘sgd’, ‘lbfgs’], [0.0001, 0.01, 0.1] [32, 64, 100] [‘constant’, ‘invscaling’, ‘adaptive’] [0.001, 0.01, 0.1] [0.5, 0.7, 0.9] [200, 500, 1000] [True, False] | [50 30] relu adam 0.0001 64 Constant 0.001 0.5 1000 True |
ADABOOST | n_estimators Learning rate loss max_depth min sample split | [50, 100, 150] [0.01, 0.1, 0.2] [linear, square, exponential] [3, 5, 7] [2, 5, 10] | 50 0.2 Exponential 5 2 |
SVM_PUK | C (regularization parameter) ω (independent term of PUK kernel) γ (PUK kernel coefficient) seed | [0.1, 1.0, 10.0] [0, 1, 2] [0.001, 0.01, 0.1] [0, 1, 2] | 1.0 0 0.001 1 |
SVM_RBF | C (regularization parameter) ω (independent term) γ (RBF kernel coefficient) seed | [0.1, 1.0, 10.0] [0, 1, 2] [0.001, 0.01, 0.1] [0, 1, 2] | 1.0 0 0.001 1 |
M5P | Batch size minNumInstances unpruned | [50, 100, 150] [2, 4, 7] [True, False] | 100 4 False |
M5Rules | Batch size minNumInstances unpruned | [50, 100, 150] [2, 4, 7] [True, False] | 100 7 False |
RF | K (No of selected Attributes) Batch size I (Min No. of Instances per leaf) max Depth min num (m) minVarianceProp seed | [0, 1, 2] [50, 100, 150] [50, 100, 150] [5, 7, 8] [0, 1, 2] [0.001, 0.01, 0.1] [1, 2, 3] | 0 100 100 8 1 0.001 1 |
RT | K (No of selected Attributes) Batch size I (Min No. of Instances per leaf) max Depth min num (m) minVarianceProp seed | [0, 1, 2] [50, 100, 150] [50, 100, 150] [5, 7, 8] [0, 1, 2] [0.001, 0.01, 0.1] [1, 2, 3] | 0 100 100 8 1 0.001 1 |
Model | Hyperparameter | Search Range | Selected Value |
---|---|---|---|
GBM | n_estimators Learning rate max depth subsample | [50, 100, 150] [0.01, 0.1, 0.2] [3, 5, 7] [0.8, 0.9, 1.0] | 50 0.1 3 0.8 |
FFNN | Hidden_layer_sizes Activation Solver Alpha Batch size Learning_rate Learning rate_initial Power_t Max_iter Early_stopping | [(50, 20), (50, 30), (100, 100)], [‘tanh’, ‘relu’, ‘logistic’], [‘adam’, ‘sgd’, ‘lbfgs’], [0.0001, 0.01, 0.1] [32, 64, 100] [‘constant’, ‘invscaling’, ‘adaptive’] [0.001, 0.01, 0.1] [0.5, 0.7, 0.9] [200, 500, 1000] [True, False] | [100 100] relu adam 0.0001 100 Constant 0.001 0.5 500 True |
ADABOOST | n_estimators Learning rate loss max_depth min sample split | [50, 100, 150] [0.01, 0.1, 0.2] [linear, square, exponential] [3, 5, 7] [2, 5, 10] | 50 0.01 Exponential 5 10 |
SVM_PUK | C (regularization parameter) ω (independent term of PUK kernel) γ (PUK kernel coefficient) seed | [0.1, 1.0, 10.0] [0, 1, 2] [0.001, 0.01, 0.1] [0, 1, 2] | 1.0 1 0.001 1 |
SVM_RBF | C (regularization parameter) ω (independent term) γ (RBF kernel coefficient) seed | [0.1, 1.0, 10.0] [0, 1, 2] [0.001, 0.01, 0.1] [0, 1, 2] | 1.0 1 0.01 1 |
M5P | Batch size minNumInstances unpruned | [50, 100, 150] [2, 4, 7] [True, False] | 150 4 False |
M5Rules | Batch size minNumInstances unpruned | [50, 100, 150] [2, 4, 7] [True, False] | 100 4 False |
RF | K (No of selected Attributes) Batch size I (Min No. of Instances per leaf) max Depth min num (m) minVarianceProp seed | [0, 1, 2] [50, 100, 150] [50, 100, 150] [5, 7, 8] [0, 1, 2] [0.001, 0.01, 0.1] [1, 2, 3] | 0 100 100 2 1 0.001 2 |
RT | K (No of selected Attributes) Batch size I (Min No. of Instances per leaf) max Depth min num (m) minVarianceProp seed | [0, 1, 2] [50, 100, 150] [50, 100, 150] [5, 7, 8] [0, 1, 2] [0.001, 0.01, 0.1] [1, 2, 3] | 0 100 100 5 1 0.001 2 |
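The tabulated search ranges can be reproduced with an exhaustive grid search over cross-validated RMSE. The sketch below is illustrative only: it uses a synthetic dataset in place of the study's data and assumes a scikit-learn GBM implementation, but the grid itself is taken verbatim from the GBM row above.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the study's dataset (illustrative only).
X, y = make_regression(n_samples=200, n_features=3, noise=0.1, random_state=1)

# Search grid copied from the GBM row of the hyperparameter table.
param_grid = {
    "n_estimators": [50, 100, 150],
    "learning_rate": [0.01, 0.1, 0.2],
    "max_depth": [3, 5, 7],
    "subsample": [0.8, 0.9, 1.0],
}

# 5-fold cross-validation; the combination minimising RMSE is retained.
search = GridSearchCV(
    GradientBoostingRegressor(random_state=1),
    param_grid,
    cv=5,
    scoring="neg_root_mean_squared_error",
)
search.fit(X, y)
print(search.best_params_)
```

The same pattern applies to the other models; the Weka-based learners (M5P, M5Rules, RF, RT, SVM with PUK/RBF kernels) would use Weka's CVParameterSelection or an equivalent wrapper instead.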
Models | R² (Training) | NSE (Training) | MAE (Training) | RMSE (Training) | R² (Testing) | NSE (Testing) | MAE (Testing) | RMSE (Testing)
---|---|---|---|---|---|---|---|---
GBM | 0.998 | 0.999 | 0.0017 | 0.00202 | 0.998 | 0.999 | 0.00160 | 0.00182
SVM_PUK | 0.944 | 0.937 | 0.0094 | 0.0182 | 0.930 | 0.901 | 0.016 | 0.0246
FFNN | 0.919 | 0.918 | 0.0178 | 0.0207 | 0.889 | 0.860 | 0.0289 | 0.0313
AdaBoost | 0.968 | 0.969 | 0.0105 | 0.0127 | 0.810 | 0.806 | 0.0310 | 0.0369
M5P | 0.876 | 0.870 | 0.0174 | 0.0261 | 0.812 | 0.754 | 0.0223 | 0.0416
RF | 0.970 | 0.747 | 0.0091 | 0.0129 | 0.829 | 0.711 | 0.0227 | 0.0353
M5Rules | 0.923 | 0.921 | 0.0167 | 0.0203 | 0.731 | 0.710 | 0.0272 | 0.0452
RT | 0.751 | 0.741 | 0.0275 | 0.0365 | 0.714 | 0.711 | 0.0294 | 0.0451
SVM_RBF | 0.759 | 0.777 | 0.0213 | 0.0342 | 0.780 | 0.701 | 0.0296 | 0.046
Models | R² (Training) | NSE (Training) | MAE (Training) | RMSE (Training) | R² (Testing) | NSE (Testing) | MAE (Testing) | RMSE (Testing)
---|---|---|---|---|---|---|---|---
GBM | 0.997 | 0.997 | 0.602 | 0.7821 | 0.997 | 0.998 | 0.531 | 0.769
AdaBoost | 0.990 | 0.989 | 1.328 | 1.7199 | 0.9512 | 0.95 | 2.981 | 4.363
SVM_PUK | 0.973 | 0.973 | 1.827 | 2.8157 | 0.956 | 0.951 | 3.105 | 4.307
RT | 0.996 | 0.996 | 0.699 | 1.0915 | 0.946 | 0.941 | 3.323 | 4.736
RF | 0.991 | 0.991 | 1.144 | 1.1449 | 0.943 | 0.939 | 3.376 | 4.824
M5Rules | 0.948 | 0.947 | 3.034 | 3.9235 | 0.928 | 0.917 | 4.384 | 5.637
M5P | 0.876 | 0.859 | 4.540 | 6.4151 | 0.851 | 0.834 | 5.512 | 5.512
FFNN | 0.861 | 0.861 | 4.517 | 6.3453 | 0.812 | 0.811 | 6.126 | 8.482
SVM_RBF | 0.812 | 0.597 | 6.115 | 10.8331 | 0.703 | 0.49 | 8.252 | 13.941
Models | R² (Training) | NSE (Training) | MAE (Training) | RMSE (Training) | R² (Testing) | NSE (Testing) | MAE (Testing) | RMSE (Testing)
---|---|---|---|---|---|---|---|---
GBM | 0.835 | 0.821 | 1.012 | 1.723 | 0.828 | 0.815 | 0.816 | 1.994
SVM_PUK | 0.056 | −0.02 | 2.225 | 4.117 | 0.17 | −0.185 | 2.907 | 5.213
FFNN | 0.021 | −22.189 | 10.978 | 19.630 | 0.001 | −63.572 | 15.889 | 38.476
RF | 0.501 | 0.488 | 1.823 | 2.916 | 0.486 | 0.471 | 2.104 | 3.482
M5P | 0.484 | 0.403 | 1.867 | 3.148 | 0.01 | −0.191 | 3.097 | 5.226
AdaBoost | 0.784 | 0.769 | 1.143 | 1.958 | 0.5163 | 0.508 | 1.813 | 3.359
RT | 0.775 | 0.775 | 1.024 | 1.933 | 0.514 | 0.508 | 2.206 | 3.358
SVM_RBF | 0.028 | −0.064 | 2.418 | 4.205 | 0.006 | −0.202 | 3.064 | 5.249
M5Rules | 0.625 | 0.567 | 1.648 | 2.682 | 0.197 | 0.166 | 2.765 | 4.3729
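The four scores reported in the tables above follow their standard definitions: R² is the squared Pearson correlation between observed and simulated values, NSE (Nash–Sutcliffe efficiency) compares residual variance against the variance of the observations (so it can go negative, as for FFNN and the SVM variants in Part 3, when a model predicts worse than the observed mean), and MAE/RMSE are the usual absolute-error and root-mean-square-error measures. A minimal sketch of these definitions:

```python
import numpy as np

def nse(obs, sim):
    """Nash–Sutcliffe efficiency: 1 − residual variance / observed variance."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2))

def rmse(obs, sim):
    """Root-mean-square error."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

def mae(obs, sim):
    """Mean absolute error."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.mean(np.abs(obs - sim)))

def r2(obs, sim):
    """Squared Pearson correlation coefficient."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.corrcoef(obs, sim)[0, 1] ** 2)
```

Note that a uniformly biased prediction can still score R² = 1 while NSE drops below 1, which is why the tables report both.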
Models | Range of Parameters | Input Parameters | R² | MAE | RMSE
---|---|---|---|---|---
[10] | Nappe flow with fully developed hydraulic jump | | 0.400 | 0.0817 | 0.0999
[22] | | | 0.693 | 0.0555 | 0.0651
[21] | | | 0.448 | 0.160 | 0.1925
Present study (GBM) | | | 0.998 | 0.0014 | 0.0019
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Mishra, R.; Kumar, S.; Sarkar, H.; Ojha, C.S.P. Utility of Certain AI Models in Climate-Induced Disasters. World 2024, 5, 865-900. https://doi.org/10.3390/world5040045