Utilizing Deep Learning and Object-Based Image Analysis to Search for Low-Head Dams in Indiana, USA
Abstract
1. Introduction
1.1. Public Safety at Low-Head Dams
1.2. Identification
1.3. Automated Object Detection
1.4. Deep Learning and Low-Head Dam Identification
2. Materials and Methods
2.1. Study Area
2.2. Conceptualization
2.3. Data Acquisition
2.4. Training Data Preparation
2.5. Training Deep Learning Model
2.6. Model Performance and Validation
| | Actual Positive | Actual Negative | Total |
|---|---|---|---|
| Model prediction: Positive | True positive (accurately located low-head dam), TP | False positive (incorrectly located low-head dam), FP | Total low-head dam locations identified by the model, δ |
| Model prediction: Negative | False negative, FN | True negative, TN | Total locations not identified by the model, Δ |
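The quantities in this table map directly onto the precision and recall metrics used to evaluate the trained model (a minimal sketch; the function name is illustrative):

```python
def detection_metrics(tp, fp, fn):
    """Compute precision and recall from confusion-matrix counts.

    tp + fp is the total number of locations the model identified
    (delta in the table); tp + fn is the total number of actual dams.
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall
```

A model that flags most true dams but also many false sites will show high recall and low precision, which is the trade-off discussed in the results.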
3. Results
3.1. Model Training Outcomes
3.2. Trained Deep Learning Model Validation and Final Performance
4. Discussion
- The number of pixels in the training images significantly affected the computer memory needed to train the model. On the computer used, images of 1024 pixels could only be trained on backbone models with fewer than 30 layers, and only with a batch size of one or two, whereas images of 400 or 500 pixels could be trained on all models with batch sizes of four or eight.
- When using the Export Training Data for Deep Learning tool with limited extents, the tool must fit an image within the specified area. If the extents provided are smaller than the width or height of a single pixel in the raster image, the tool disregards them and instead exports images based on the full size of the raster. Note that the extents are defined in the units of the map's coordinate system, which may differ from the raster image's internal coordinate system, so they should be set carefully in the map's coordinate units to ensure the desired area is exported.
- It was not possible to add additional metadata to the training dataset, such as elevation information or distance to the nearest stream or river.
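The silent fallback described in the second bullet can be guarded against with a simple pre-check before exporting (a minimal sketch; the function and its arguments are illustrative, not part of the ArcGIS tool):

```python
def extent_spans_a_pixel(extent, pixel_width, pixel_height):
    """Return True if an export extent (xmin, ymin, xmax, ymax), given in
    the map's coordinate units, covers at least one raster pixel.

    If it does not, the export tool ignores the extents and exports
    images from the full raster instead.
    """
    xmin, ymin, xmax, ymax = extent
    return (xmax - xmin) >= pixel_width and (ymax - ymin) >= pixel_height
```

Running this check on each requested extent before export avoids accidentally generating chips for the entire raster.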
5. Conclusions
- The tools within ArcGIS Pro were able to train a deep learning model when the training images adequately depicted a low-head dam, for example by centering the dam in each image.
- An image classification scheme was developed to help model users prepare images for scanning other regions of the USA.
- Residual Network models are the pretrained models with the highest accuracy for the low-head dam application.
- The pretrained models with fewer layers tended to have lower model accuracy.
- Low-head dams with sufficient visibility can be located within the study area of Indiana.
- The visibility criteria greatly influenced the model's ability to locate a given dam.
- Object identification over a large area was feasible with sufficient computational resources (i.e., a single robust desktop may require more than a month, depending on the size of the region). Additional machines can reduce run time, with each machine scanning a subset of the desired area.
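The last point, dividing a large search region among several machines, can be sketched as a simple partition of the region's bounding extent (illustrative only; an actual deployment might instead split along county or watershed boundaries):

```python
def split_extent(extent, n_machines):
    """Split a bounding extent (xmin, ymin, xmax, ymax) into vertical
    strips, one per machine, so each machine scans an equal share."""
    xmin, ymin, xmax, ymax = extent
    strip = (xmax - xmin) / n_machines
    return [(xmin + i * strip, ymin, xmin + (i + 1) * strip, ymax)
            for i in range(n_machines)]
```

Each strip can then be handed to a separate machine running the same trained model, shrinking total wall-clock time roughly in proportion to the number of machines.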
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
| Backbone Model | Batch Normalization | Single Shot Detector | RetinaNet | FasterRCNN |
|---|---|---|---|---|
| ResNet-18 | | X | X | X |
| ResNet-34 | | X | X | X |
| ResNet-50 | | X | X | X |
| ResNet-101 | | X | X | X |
| ResNet-152 | | X | X | X |
| DenseNet-121 | | X | | |
| DenseNet-169 | | X | | |
| DenseNet-161 | | X | | |
| DenseNet-201 | | X | | |
| VGG-11 | | X | | |
| VGG-11 | X | X | | |
| VGG-13 | | X | | |
| VGG-13 | X | X | | |
| VGG-16 | | X | | |
| VGG-16 | X | X | | |
| VGG-19 | | X | | |
| VGG-19 | X | X | | |
| MobileNet version 2 | | X | | |
| DarkNet-53 | | X | | |
| | Class 0 | Class 1 | Class 2 | Class 3 | Total |
|---|---|---|---|---|---|
| Training | 8 | 18 | 33 | 33 | 92 |
| Validation | 9 | 16 | 29 | 24 | 78 |
| Total | 17 | 34 | 62 | 57 | 170 |
Iteration | Overview |
---|---|
1 | The majority of training images did not showcase a low-head dam. Zero accuracy.
2 | Training images included at least half of a low-head dam. Approximately 10% accuracy and 10% recall.
3 | Training images centered on a low-head dam. Two additional training classes: (1) water and (2) not water. Approximately 50% accuracy and high recall.
4 | One additional training class added: (3) forest. Search limited to a 100 m wide corridor along NHD streams/rivers. Approximately 1% accuracy and 32.1% recall.
| Model Type | Backbone Model | TP | FP |
|---|---|---|---|
| RetinaNet | ResNet-101 | 2 of 2 | 0 |
| RetinaNet | ResNet-152 | 2 of 2 | 1 |
| FasterRCNN | ResNet-50 | 2 of 2 | 13 |
| RetinaNet | ResNet-50 | 1 of 2 | 0 |
| FasterRCNN | ResNet-18 | 1 of 2 | 0 |
| | Actual Positive | Actual Negative | Total |
|---|---|---|---|
| Model prediction: Positive | True positive, TP = 25 | False positive, FP = 3443 | Total low-head dam locations identified by the model, δ |
| Model prediction: Negative | False negative, FN = 53 | True negative, TN | Total locations not identified by the model, Δ |
Visibility Class | # Dams | TP | FN | Expected Recall | Actual Recall |
---|---|---|---|---|---|
0 | 9 | 0 | 9 | 0.00 | 0.00 |
1 | 16 | 0 | 16 | 0.00 | 0.00 |
2 | 29 | 5 | 24 | 0.50 | 0.17 |
3 | 24 | 20 | 4 | 0.95 | 0.83 |
Total | 78 | 25 | 53 | 0.48 | 0.32 |
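The reported totals can be cross-checked directly from the confusion-matrix counts (TP = 25, FP = 3443, FN = 53):

```python
# Reported counts from the validation confusion matrix
TP, FP, FN = 25, 3443, 53

precision = TP / (TP + FP)   # ~0.0072: roughly 1 true dam per 139 detections
recall = TP / (TP + FN)      # 25/78 ~ 0.32, the total actual recall

# Per-class recall for the two most visible classes
recall_class2 = 5 / (5 + 24)    # ~0.17
recall_class3 = 20 / (20 + 4)   # ~0.83
```

The arithmetic reproduces the table's actual-recall column and the roughly 1% accuracy noted for the final training iteration.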
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Crookston, B.M.; Arnold, C.R. Utilizing Deep Learning and Object-Based Image Analysis to Search for Low-Head Dams in Indiana, USA. Water 2025, 17, 876. https://doi.org/10.3390/w17060876