Unsupervised Color-Based Flood Segmentation in UAV Imagery
Abstract
1. Introduction
- Novelty in Approach: To our knowledge, this is the first fully unsupervised method for flood area segmentation in color images captured by UAVs. Our work addresses flood segmentation using parameter-free calculated masks and unsupervised image analysis techniques without any need for training.
- Probability Optimization: Flood areas are identified as solutions to a probability optimization problem, with an isocontour evolution starting from high-confidence areas and gradually growing according to the hysteresis thresholding method.
- Robust Algorithm: The proposed formulation results in a robust, simple, and effective unsupervised algorithm for flood segmentation.
- Dataset Categorization: We introduce a categorization of the dataset according to the depicted scenery (rural vs. urban/peri-urban) and the camera rotation angle (no sky vs. with sky).
- Efficiency and Real-Time Processing: The framework is efficient and suitable for on-board execution on UAVs, enabling real-time processing and decision-making during flight. The processing time per image is approximately 0.5 s, without the need for pre-processing, substantial computational resources, or specialized GPU capabilities.
2. Materials
3. Methodology
3.1. System Overview
3.2. RGB Vegetation Index Mask
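As a concrete reference, the RGB Vegetation Index of Bendig et al. [37] is defined as RGBVI = (G² − R·B)/(G² + R·B). The minimal sketch below (Python/NumPy is assumed; the original implementation is not given here) computes the index and a binary vegetation mask; the cutoff value is an illustrative assumption, not the paper's setting.

```python
import numpy as np

def rgbvi_mask(img_rgb, threshold=0.0):
    """RGB Vegetation Index (Bendig et al. [37]):
    RGBVI = (G^2 - R*B) / (G^2 + R*B), computed per pixel.
    `threshold` is an illustrative cutoff, not the paper's value."""
    r, g, b = (img_rgb[..., i].astype(np.float64) for i in range(3))
    num = g * g - r * b
    den = g * g + r * b
    # Guard against division by zero on black pixels
    rgbvi = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
    return rgbvi, rgbvi > threshold  # index map and vegetation mask
```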
3.3. LAB Components Masks
3.4. Flood Dominant Color Estimation
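For illustration only, one plausible way to estimate a dominant flood color is to take the peak of a 2D histogram over the a*/b* chromaticity of candidate flood pixels in the CIE 1976 L*a*b* space. The sketch below is a hypothetical stand-in, not the authors' estimator; the bin count and the histogram-peak rule are assumptions.

```python
import numpy as np
from skimage.color import rgb2lab

def dominant_flood_color(img_rgb, candidate_mask, bins=32):
    """Estimate a dominant flood color as the peak of the 2D a*/b*
    histogram over candidate flood pixels. A simplified, hypothetical
    stand-in for the estimator of Sec. 3.4; `bins` is an assumption."""
    lab = rgb2lab(img_rgb)            # L* in [0, 100]; a*, b* roughly [-128, 127]
    a = lab[..., 1][candidate_mask]
    b = lab[..., 2][candidate_mask]
    hist, a_edges, b_edges = np.histogram2d(a, b, bins=bins)
    ia, ib = np.unravel_index(np.argmax(hist), hist.shape)
    # Return the center of the most populated (a*, b*) bin
    return (0.5 * (a_edges[ia] + a_edges[ia + 1]),
            0.5 * (b_edges[ib] + b_edges[ib + 1]))
```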
3.5. Hysteresis Thresholding
- We adapt the process for region growing: the high threshold T_high is applied to the entire probability map to identify pixels with p > T_high as flood (strong flood pixels). These regions belong to flood areas with high confidence, so they can serve as seeds in the region-growing process described below.
- Next, a connectivity-based approach is used to track the flood. Starting from the strong pixels identified in the first step, the algorithm examines their neighbors: if a neighboring pixel has a probability value higher than the low threshold T_low (weak flood pixel), it is also considered part of the flood.
- This process continues recursively until no more connected pixels above the low threshold are found (see the sketch below).
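Operationally, the three steps above reduce to keeping every connected component of the weak mask (p > T_low) that contains at least one strong seed (p > T_high). A minimal sketch, with illustrative thresholds rather than the paper's values:

```python
import numpy as np
from scipy import ndimage

def hysteresis_flood(prob, t_low=0.4, t_high=0.8):
    """Hysteresis thresholding as region growing: a weak flood pixel
    (p > t_low) is kept only if its connected component contains a strong
    seed (p > t_high). Threshold values here are illustrative.
    skimage.filters.apply_hysteresis_threshold(prob, t_low, t_high)
    computes the same result."""
    strong = prob > t_high
    weak = prob > t_low                  # strong pixels are a subset of weak
    labels, _ = ndimage.label(weak)      # connected components of the weak mask
    seeded = np.unique(labels[strong])   # component ids that contain a seed
    return np.isin(labels, seeded[seeded > 0])
```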
3.6. Final Segmentation
4. Results and Discussion
4.1. Evaluation Metrics
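The result tables below are consistent with pixel-wise accuracy, precision, recall, F1-score, and IoU computed on the flood class (e.g., the listed F1 values match 2PR/(P + R) for the listed precision and recall). A minimal sketch of these measures, under that assumption:

```python
import numpy as np

def flood_metrics(pred, gt):
    """Pixel-wise accuracy, precision, recall, F1-score, and IoU for the
    flood class, assuming binary prediction and ground-truth masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    tn = np.sum(~pred & ~gt)
    accuracy  = (tp + tn) / pred.size
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall    = tp / (tp + fn) if tp + fn else 0.0
    f1  = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1, "iou": iou}
```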
4.2. Implementation
4.3. Flood Area Dataset: Experimental Results and Discussion
4.4. Exploring the Impact of Environmental Zones and Camera Rotation Angle
- Environmental zone:
- (a)
- Rural, predominantly featuring fields, hills, rugged mountainsides, scattered housing structures reminiscent of villages or rural settlements, and sparse roads. It comprises 87 of the 290 images in the dataset.
- (b)
- Urban and peri-urban, distinctly showcasing urban landscapes characterized by well-defined infrastructure that conforms to urban planning guidelines, a dense network of roads, and high population density reflected in numerous buildings and structures. It comprises 203 images.
- Camera rotation angle:
- (a)
- No sky (almost top-down view, low camera rotation angle), distinguished by the absence of any sky elements; these images entirely lack any portion of sky or clouds in their composition. It comprises 182 images of the dataset.
- (b)
- With sky (bird’s-eye view, high camera rotation angle), where sky elements, such as clouds or open expanses of sky, are visibly present in the image composition. It comprises the remaining 103 images of the dataset.
4.5. Ablation Study
- L (UFS-HT-REM-L, with F1-score = 74.8%)
- A (UFS-HT-REM-A, with F1-score = 68.9%)
- B (UFS-HT-REM-B, with F1-score = 70.8%)
4.6. Comparison with DL Approaches
4.7. Flood Semantic Segmentation Dataset: Experimental Results and Discussion
5. Conclusions and Future Work
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
ADNet | Attentive Decoder Network |
AI | Artificial Intelligence |
APLS | Average Path Length Similarity |
BCNN | Bayesian Convolutional Neural Network |
BiT | Bitemporal image Transformer |
CNN | Convolutional Neural Network |
CV | Computer Vision |
DCFAM | Densely Connected Feature Aggregation Module |
DELTA | Deep Earth Learning, Tools, and Analysis |
DL | Deep Learning |
DNN | Deep Neural Network |
DSM | Digital Surface Model |
EDN | Encoder–Decoder Network |
EDR | Encoder–Decoder Residual |
EHAAC | Encoder–Decoder High-Accuracy Activation Cropping |
ENet | Efficient Neural Network |
HR | High Resolution |
ISPRS | International Society for Photogrammetry and Remote Sensing |
LSTM | Long Short-Term Memory |
ML | Machine Learning |
MRF | Markov Random Field |
NDWI | Normalized Difference Water Index |
PFA | Potential Flood Area |
PSPNet | Pyramid Scene Parsing Network |
RGBVI | RGB Vegetation Index |
ResNet | Residual Network |
SAM | Segment Anything Model |
SAR | Synthetic Aperture Radar |
UAV | Unmanned Aerial Vehicle |
VH | Vertical-Horizontal |
VV | Vertical-Vertical |
WSSS | Weakly Supervised Semantic Segmentation |
References
- Ritchie, H.; Rosado, P. Natural Disasters. 2022. Available online: https://ourworldindata.org/natural-disasters (accessed on 10 May 2024).
- Kondratyev, K.Y.; Varotsos, C.A.; Krapivin, V.F. Natural Disasters as Components of Global Ecodynamics; Springer: Berlin/Heidelberg, Germany, 2006.
- Algiriyage, N.; Prasanna, R.; Stock, K.; Doyle, E.E.; Johnston, D. Multi-source multimodal data and deep learning for disaster response: A systematic review. SN Comput. Sci. 2022, 3, 1–29.
- Linardos, V.; Drakaki, M.; Tzionas, P.; Karnavas, Y.L. Machine learning in disaster management: Recent developments in methods and applications. Mach. Learn. Knowl. Extr. 2022, 4, 446–473.
- Chouhan, A.; Chutia, D.; Aggarwal, S.P. Attentive decoder network for flood analysis using sentinel 1 images. In Proceedings of the 2023 International Conference on Communication, Circuits, and Systems (IC3S), Bhubaneswar, India, 26–28 May 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–5.
- Drakonakis, G.I.; Tsagkatakis, G.; Fotiadou, K.; Tsakalides, P. OmbriaNet—Supervised flood mapping via convolutional neural networks using multitemporal sentinel-1 and sentinel-2 data fusion. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 2341–2356.
- Dong, Z.; Liang, Z.; Wang, G.; Amankwah, S.O.Y.; Feng, D.; Wei, X.; Duan, Z. Mapping inundation extents in Poyang Lake area using Sentinel-1 data and transformer-based change detection method. J. Hydrol. 2023, 620, 129455.
- Hänsch, R.; Arndt, J.; Lunga, D.; Gibb, M.; Pedelose, T.; Boedihardjo, A.; Petrie, D.; Bacastow, T.M. Spacenet 8-the detection of flooded roads and buildings. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–20 June 2022; pp. 1472–1480.
- He, Y.; Wang, J.; Zhang, Y.; Liao, C. An efficient urban flood mapping framework towards disaster response driven by weakly supervised semantic segmentation with decoupled training samples. ISPRS J. Photogramm. Remote Sens. 2024, 207, 338–358.
- Hernández, D.; Cecilia, J.M.; Cano, J.C.; Calafate, C.T. Flood detection using real-time image segmentation from unmanned aerial vehicles on edge-computing platform. Remote Sens. 2022, 14, 223.
- Hertel, V.; Chow, C.; Wani, O.; Wieland, M.; Martinis, S. Probabilistic SAR-based water segmentation with adapted Bayesian convolutional neural network. Remote Sens. Environ. 2023, 285, 113388.
- Ibrahim, N.; Sharun, S.; Osman, M.; Mohamed, S.; Abdullah, S. The application of UAV images in flood detection using image segmentation techniques. Indones. J. Electr. Eng. Comput. Sci. 2021, 23, 1219.
- Inthizami, N.S.; Ma’sum, M.A.; Alhamidi, M.R.; Gamal, A.; Ardhianto, R.; Jatmiko, W.; Kurnianingsih. Flood video segmentation on remotely sensed UAV using improved Efficient Neural Network. ICT Express 2022, 8, 347–351.
- Li, Z.; Demir, I. U-net-based semantic classification for flood extent extraction using SAR imagery and GEE platform: A case study for 2019 central US flooding. Sci. Total Environ. 2023, 869, 161757.
- Lo, S.W.; Wu, J.H.; Lin, F.P.; Hsu, C.H. Cyber surveillance for flood disasters. Sensors 2015, 15, 2369–2387.
- Munawar, H.S.; Ullah, F.; Qayyum, S.; Heravi, A. Application of deep learning on uav-based aerial images for flood detection. Smart Cities 2021, 4, 1220–1242.
- Park, J.C.; Kim, D.G.; Yang, J.R.; Kang, K.S. Transformer-Based Flood Detection Using Multiclass Segmentation. In Proceedings of the 2023 IEEE International Conference on Big Data and Smart Computing (BigComp), Jeju, Republic of Korea, 13–16 February 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 291–292.
- Rahnemoonfar, M.; Chowdhury, T.; Sarkar, A.; Varshney, D.; Yari, M.; Murphy, R.R. Floodnet: A high resolution aerial imagery dataset for post flood scene understanding. IEEE Access 2021, 9, 89644–89654.
- Şener, A.; Doğan, G.; Ergen, B. A novel convolutional neural network model with hybrid attentional atrous convolution module for detecting the areas affected by the flood. Earth Sci. Inform. 2024, 17, 193–209.
- Shastry, A.; Carter, E.; Coltin, B.; Sleeter, R.; McMichael, S.; Eggleston, J. Mapping floods from remote sensing data and quantifying the effects of surface obstruction by clouds and vegetation. Remote Sens. Environ. 2023, 291, 113556.
- Wang, L.; Li, R.; Duan, C.; Zhang, C.; Meng, X.; Fang, S. A novel transformer based semantic segmentation scheme for fine-resolution remote sensing images. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
- Wieland, M.; Martinis, S.; Kiefl, R.; Gstaiger, V. Semantic segmentation of water bodies in very high-resolution satellite and aerial images. Remote Sens. Environ. 2023, 287, 113452.
- Bauer-Marschallinger, B.; Cao, S.; Tupas, M.E.; Roth, F.; Navacchi, C.; Melzer, T.; Freeman, V.; Wagner, W. Satellite-Based Flood Mapping through Bayesian Inference from a Sentinel-1 SAR Datacube. Remote Sens. 2022, 14, 3673.
- Filonenko, A.; Hernández, D.C.; Seo, D.; Jo, K.H. Real-time flood detection for video surveillance. In Proceedings of the IECON 2015-41st Annual Conference of the IEEE Industrial Electronics Society, Yokohama, Japan, 9–12 November 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 004082–004085.
- Landuyt, L.; Verhoest, N.E.; Van Coillie, F.M. Flood mapping in vegetated areas using an unsupervised clustering approach on sentinel-1 and -2 imagery. Remote Sens. 2020, 12, 3611.
- McCormack, T.; Campanyà, J.; Naughton, O. A methodology for mapping annual flood extent using multi-temporal Sentinel-1 imagery. Remote Sens. Environ. 2022, 282, 113273.
- Trombini, M.; Solarna, D.; Moser, G.; Dellepiane, S. A goal-driven unsupervised image segmentation method combining graph-based processing and Markov random fields. Pattern Recognit. 2023, 134, 109082.
- Bentivoglio, R.; Isufi, E.; Jonkman, S.N.; Taormina, R. Deep learning methods for flood mapping: A review of existing applications and future research directions. Hydrol. Earth Syst. Sci. 2022, 26, 4345–4378.
- Kumar, V.; Azamathulla, H.M.; Sharma, K.V.; Mehta, D.J.; Maharaj, K.T. The state of the art in deep learning applications, challenges, and future prospects: A comprehensive review of flood forecasting and management. Sustainability 2023, 15, 10543.
- Yu, F.; Koltun, V. Multi-Scale Context Aggregation by Dilated Convolutions. arXiv 2016, arXiv:1511.07122.
- Kirillov, A.; Mintun, E.; Ravi, N.; Mao, H.; Rolland, C.; Gustafson, L.; Xiao, T.; Whitehead, S.; Berg, A.C.; Lo, W.Y.; et al. Segment Anything. arXiv 2023, arXiv:2304.02643.
- Tarpanelli, A.; Mondini, A.C.; Camici, S. Effectiveness of Sentinel-1 and Sentinel-2 for flood detection assessment in Europe. Nat. Hazards Earth Syst. Sci. 2022, 22, 2473–2489.
- Guo, Z.; Wu, L.; Huang, Y.; Guo, Z.; Zhao, J.; Li, N. Water-body segmentation for SAR images: Past, current, and future. Remote Sens. 2022, 14, 1752.
- O’Mahony, N.; Campbell, S.; Carvalho, A.; Harapanahalli, S.; Hernandez, G.V.; Krpalkova, L.; Riordan, D.; Walsh, J. Deep learning vs. traditional computer vision. In Proceedings of the Advances in Computer Vision: Proceedings of the 2019 Computer Vision Conference (CVC), Las Vegas, NV, USA, 2–3 May 2019; Springer: Berlin/Heidelberg, Germany, 2020; Volume 2, pp. 128–144.
- Karim, F.; Sharma, K.; Barman, N.R. Flood Area Segmentation. Available online: https://www.kaggle.com/datasets/faizalkarim/flood-area-segmentation (accessed on 10 May 2024).
- Yang, L. Flood Semantic Segmentation Dataset. Available online: https://www.kaggle.com/datasets/lihuayang111265/flood-semantic-segmentation-dataset (accessed on 10 May 2024).
- Bendig, J.; Yu, K.; Aasen, H.; Bolten, A.; Bennertz, S.; Broscheit, J.; Gnyp, M.L.; Bareth, G. Combining UAV-based plant height from crop surface models, visible, and near infrared vegetation indices for biomass monitoring in barley. Int. J. Appl. Earth Obs. Geoinf. 2015, 39, 79–87.
- Markaki, S.; Panagiotakis, C. Unsupervised Tree Detection and Counting via Region-Based Circle Fitting. In Proceedings of the ICPRAM, Lisbon, Portugal, 22–24 February 2023; pp. 95–106.
- Ashapure, A.; Jung, J.; Chang, A.; Oh, S.; Maeda, M.; Landivar, J. A comparative study of RGB and multispectral sensor-based cotton canopy cover modelling using multi-temporal UAS data. Remote Sens. 2019, 11, 2757.
- Chavolla, E.; Zaldivar, D.; Cuevas, E.; Perez, M.A. Color spaces advantages and disadvantages in image color clustering segmentation. In Advances in Soft Computing and Machine Learning in Image Processing; Springer: Berlin/Heidelberg, Germany, 2018; pp. 3–22.
- Hernandez-Lopez, J.J.; Quintanilla-Olvera, A.L.; López-Ramírez, J.L.; Rangel-Butanda, F.J.; Ibarra-Manzano, M.A.; Almanza-Ojeda, D.L. Detecting objects using color and depth segmentation with Kinect sensor. Procedia Technol. 2012, 3, 196–204.
- Colorimetry—Part 4: CIE 1976 L*a*b* Colour Space. Available online: https://cie.co.at/publications/colorimetry-part-4-cie-1976-lab-colour-space-0 (accessed on 10 May 2024).
- Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 4th ed.; Global Edition; Pearson: London, UK, 2018; pp. 405–420.
- Canny, J. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, PAMI-8, 679–698.
- Fabbri, R.; Costa, L.D.F.; Torelli, J.C.; Bruno, O.M. 2D Euclidean distance transform algorithms: A comparative survey. ACM Comput. Surv. (CSUR) 2008, 40, 1–44.
- Xu, X.; Xu, S.; Jin, L.; Song, E. Characteristic analysis of Otsu threshold and its applications. Pattern Recognit. Lett. 2011, 32, 956–961.
- Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, Part III 18. Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241.
- Sun, K.; Xiao, B.; Liu, D.; Wang, J. Deep high-resolution representation learning for human pose estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 5693–5703.
- Grinias, I.; Panagiotakis, C.; Tziritas, G. MRF-based segmentation and unsupervised classification for building and road detection in peri-urban areas of high-resolution satellite images. ISPRS J. Photogramm. Remote Sens. 2016, 122, 145–166.
Authors | Year | Approach | Imagery | Method |
---|---|---|---|---|
Chouhan, A. et al. [5] | 2023 | Supervised | Sentinel-1 | Multi-scale ADNet |
Drakonakis, G.I. et al. [6] | 2022 | Supervised | Sentinel-1, 2 | CNN change detection |
Dong, Z. et al. [7] | 2023 | Supervised | Sentinel-1 | STANets, SNUNet, BiT |
Hänsch, R. et al. [8] | 2022 | Supervised | HR satellite RGB | U-Net |
He, Y. et al. [9] | 2024 | Weakly supervised | HR aerial RGB | End-to-end WSSS framework with structure constraints and self-distillation |
Hernández, D. et al. [10] | 2021 | Supervised | UAV RGB | Optimized DNN |
Hertel, V. et al. [11] | 2023 | Supervised | SAR | BCNN |
Ibrahim, N. et al. [12] | 2021 | Semi-supervised | UAV RGB | RGB and HSI color models, k-means clustering, region growing |
Inthizami, N.S. et al. [13] | 2022 | Supervised | UAV video | Improved ENet |
Li, Z. et al. [14] | 2023 | Supervised | Sentinel-1 | U-Net |
Lo, S.W. et al. [15] | 2015 | Semi-supervised | RGB (surveillance camera) | HSV color model, seeded region growing |
Munawar, H.S. et al. [16] | 2021 | Supervised | UAV RGB | Landmark-based feature selection, CNN hybrid |
Park, J.C. et al. [17] | 2023 | Supervised | HR satellite RGB | Swin transformer in a Siamese-UNet |
Rahnemoonfar, M. et al. [18] | 2021 | Supervised | UAV RGB | InceptionNetv3, ResNet50, XceptionNet, PSPNet, ENet, DeepLabv3+ |
Şener, A. et al. [19] | 2024 | Supervised | UAV RGB | ED network with EDR block and atrous convolutions (FASegNet) |
Shastry, A. et al. [20] | 2023 | Supervised | WorldView 2, 3 multispectral | CNN with atrous convolutions |
Wang, L. et al. [21] | 2022 | Supervised | True Orthophoto (near infrared), DSM | Swin transformer and DCFAM |
Wieland, M. et al. [22] | 2023 | Supervised | Satellite and aerial | U-Net model with MobileNet-V3 backbone pre-trained on ImageNet |
Bauer-Marschallinger, B. et al. [23] | 2022 | Unsupervised | SAR | Datacube, time series-based detection, Bayes classifier |
Filonenko, A. et al. [24] | 2015 | Unsupervised | RGB (surveillance camera) | Change detection, color probability calculation |
Landuyt, L. et al. [25] | 2020 | Unsupervised | Sentinel-1, 2 | K-means clustering, region growing |
McCormack, T. et al. [26] | 2022 | Unsupervised | Sentinel-1 | Histogram thresholding, multi-temporal and contextual filters |
Trombini, M. et al. [27] | 2023 | Unsupervised | SAR | Graph-based MRF segmentation |
Category | Accuracy | Precision | Recall | F1-score | IoU |
---|---|---|---|---|---|
1.(a) Rural | 83.6% | 82.3% | 78.4% | 80.3% | 78.2% |
1.(b) Urban/peri-urban | 85.4% | 78.4% | 78.7% | 78.5% | 76.9% |
2.(a) No sky | 85.2% | 81.6% | 78.0% | 79.8% | 77.9% |
2.(b) With sky | 84.4% | 76.1% | 79.5% | 77.7% | 76.1% |
All | 84.9% | 79.5% | 78.6% | 79.1% | 77.3% |
Method | Accuracy | Precision | Recall | F1-score | IoU |
---|---|---|---|---|---|
UFS-HT-REM | 84.9% | 79.5% | 78.6% | 79.1% | 77.3% |
UFS-HT-REM (Equation (10)) | 84.7% | 79.2% | 78.5% | 78.8% | 77.0% |
UFS-HT-REM (Equation (9)) | 84.2% | 78.6% | 78.7% | 78.6% | 76.7% |
UFS-HT | 83.4% | 76.3% | 79.4% | 77.8% | 75.9% |
UFS-REM | 82.5% | 75.7% | 79.1% | 77.3% | 74.9% |
UFS-HT-REM—L | 79.9% | 68.9% | 82.0% | 74.8% | 72.8% |
UFS | 78.6% | 68.9% | 81.1% | 74.5% | 71.6% |
Mask | 76.0% | 64.9% | 82.7% | 72.7% | 69.8% |
UFS-HT-REM—B | 74.6% | 66.3% | 76.1% | 70.8% | 67.8% |
UFS-HT-REM—A | 68.6% | 56.7% | 88.0% | 68.9% | 66.0% |
UFS(Otsu)-HT-REM | 72.6% | 66.9% | 36.8% | 47.5% | 43.5% |
Method | Accuracy | Precision | Recall | F1-score | Tr. Par. |
---|---|---|---|---|---|
FASegNet | 91.5% | 91.4% | 90.3% | 90.9% | 0.64 M |
UNet | 90.7% | 90.0% | 90.1% | 90.0% | 31.05 M |
HRNet | 88.6% | 84.8% | 92.0% | 88.3% | 28.60 M |
Ours | 84.9% | 79.5% | 78.6% | 79.1% | 0 M |
Dataset | Images | Accuracy | Precision | Recall | F1-score | IoU |
---|---|---|---|---|---|---|
FSSD | 663 | 88.5% | 79.8% | 83.7% | 81.7% | 79.4% |
FAD | 290 | 84.9% | 79.5% | 78.6% | 79.1% | 77.3% |
FSSD ∪ FAD | 953 | 87.4% | 79.7% | 82.2% | 80.9% | 78.8% |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).