Orthophoto-Based Vegetation Patch Analyses—A New Approach to Assess Segmentation Quality
Abstract
1. Introduction
2. Materials and Methods
- Conducting additional drone flights with different parameters, for example, from a lower altitude (to obtain higher-resolution images) or using a different camera (e.g., multispectral instead of RGB);
- Monitoring changes in the extent of the plant over time (e.g., in connection with an attempt to eradicate it);
- Sending people to these locations to verify the presence of the plant on the ground and possibly undertake its removal.
- Three-channel orthophoto (red, green, blue);
- Five-channel orthophoto (red, green, blue, red edge, near-infrared).
2.1. Standard Segmentation Quality Metrics Used
1. IoU (intersection over union): other terms used in the literature include the Jaccard index and the Jaccard similarity coefficient.
2. Dice's coefficient, other names: Sørensen–Dice coefficient, Dice similarity coefficient (DSC), and F1 score.
3. Tau coefficient, other names: Kendall rank correlation coefficient and Kendall's Tau coefficient. In this study, the Tau-b variant was used. The Tau coefficient [28] was proposed as a simpler and more efficient alternative to Cohen's Kappa coefficient [44], which is another widely used measure of accuracy. Let $P$ denote the number of concordant pairs, $Q$ the number of discordant pairs, $X_0$ the number of ties in $X$, and $Y_0$ the number of ties in $Y$; if a tie occurred for the same pair in both $X$ and $Y$, it was added to neither $X_0$ nor $Y_0$. The Tau-b coefficient is given by the following equation:

$$\tau_b = \frac{P - Q}{\sqrt{(P + Q + X_0)(P + Q + Y_0)}}$$
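As a minimal sketch (toy array values are our own, not from the study), the three metrics can be computed from binary mask/prediction rasters as follows, with SciPy's `kendalltau` (whose default variant is Tau-b) standing in for the Tau computation:

```python
import numpy as np
from scipy.stats import kendalltau  # default variant is Tau-b

def iou(mask: np.ndarray, pred: np.ndarray) -> float:
    """Intersection over union (Jaccard index) of two boolean rasters."""
    inter = np.logical_and(mask, pred).sum()
    union = np.logical_or(mask, pred).sum()
    return inter / union

def dice(mask: np.ndarray, pred: np.ndarray) -> float:
    """Dice (Sorensen-Dice) coefficient, equivalent to the F1 score."""
    inter = np.logical_and(mask, pred).sum()
    return 2.0 * inter / (mask.sum() + pred.sum())

# Toy 2x4 ground-truth mask and prediction (illustrative values only).
mask = np.array([[1, 1, 0, 0], [1, 1, 0, 0]], dtype=bool)
pred = np.array([[1, 0, 0, 0], [1, 1, 0, 0]], dtype=bool)

tau_b, _ = kendalltau(mask.ravel(), pred.ravel())
print(iou(mask, pred))   # 0.75   (3 shared pixels / 4 pixels in the union)
print(dice(mask, pred))  # ~0.857 (2*3 / (4 + 3))
```

Note that all three metrics treat the rasters pixel-wise; none of them distinguishes how the hit pixels are distributed across patches, which is the gap the metric proposed in this paper addresses.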
2.2. Research Preparation and Problem Identification
- UNet fed with 3-channel images (RGB),
- UNet powered by 5-channel data (RGB, red edge, and near-infrared).
- Blue is a true positive (TP; an object exists in the mask and has been indicated in the prediction).
- Black is a false negative (FN; an object exists in the mask but has not been indicated in the prediction).
- Red is a false positive (FP; no object exists in the mask, but one has been indicated in the prediction).
- White is a true negative (TN; no object exists in the mask, and none has been indicated in the prediction).
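The four-colour comparison image described above can be generated with a short NumPy sketch (the colour assignments follow the list; the function name and toy data are our own):

```python
import numpy as np

def agreement_map(mask: np.ndarray, pred: np.ndarray) -> np.ndarray:
    """Return an RGB image colouring each pixel by its confusion category."""
    out = np.empty(mask.shape + (3,), dtype=np.uint8)
    out[mask & pred] = (0, 0, 255)        # blue: TP, in mask and predicted
    out[mask & ~pred] = (0, 0, 0)         # black: FN, in mask, missed
    out[~mask & pred] = (255, 0, 0)       # red: FP, predicted, not in mask
    out[~mask & ~pred] = (255, 255, 255)  # white: TN, in neither
    return out

# Toy 2x2 example covering all four categories.
mask = np.array([[1, 1], [0, 0]], dtype=bool)
pred = np.array([[1, 0], [1, 0]], dtype=bool)
img = agreement_map(mask, pred)  # shape (2, 2, 3)
```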
2.3. Proposed New Metric
- M+ — positive region coefficient, calculated for the TP regions and taking values from 0 to 1;
- M− — negative region coefficient, calculated for the FP regions and taking values from 0 to 1;
- Δ — the difference M+ − M−, taking values from −1 to 1.
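The combined score is simply the difference between the two region coefficients; a minimal sketch (the function name is ours; the input values are reproduced from the Prediction 3C/3D results reported in this paper, with α = β = 5):

```python
def delta(m_plus: float, m_minus: float) -> float:
    """Combined quality score: the difference M+ - M-, in [-1, 1]."""
    return m_plus - m_minus

print(round(delta(0.856, 0.368), 3))  # 0.488 (Prediction 3C)
print(round(delta(0.839, 0.023), 3))  # 0.816 (Prediction 3D)
```

A value near 1 thus indicates that most ground-truth regions were hit while few spurious regions were predicted; a negative value indicates that false-positive regions dominate.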
3. Results and Discussion
3.1. Cases I with Fixed Dice and IoU Values
3.2. Cases II—Hits in the Same Percentage of the Ground Truth Mask Area
3.3. Cases III Where the Same Ground Truth Mask Regions Are Hit
3.4. Cases IV of Hits on All Ground Truth Mask Regions
3.5. Cases V with a Fixed Number of FP Regions
4. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Long, J.; Shelhamer, E.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7 June 2015; pp. 3431–3440.
- Rizzoli, G.; Barbato, F.; Zanuttigh, P. Multimodal Semantic Segmentation in Autonomous Driving: A Review of Current Approaches and Future Perspectives. Technologies 2022, 10, 90.
- Treml, M.; Arjona-Medina, J.; Unterthiner, T.; Durgesh, R.; Friedmann, F.; Schuberth, P.; Mayr, A.; Heusel, M.; Hofmarcher, M.; Widrich, M.; et al. Speeding up Semantic Segmentation for Autonomous Driving. In Proceedings of the 29th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain, 5–10 December 2016.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In LNCS; Springer International Publishing: Berlin, Germany, 2015; Volume 9351, pp. 234–241.
- Huang, S.-Y.; Hsu, W.-L.; Hsu, R.-J.; Liu, D.-W. Fully Convolutional Network for the Semantic Segmentation of Medical Images: A Survey. Diagnostics 2022, 12, 2765.
- Pedrayes, O.; Lema, D.; Garcia, F.D.; Usamentiaga, R.; Alonso, A. Evaluation of Semantic Segmentation Methods for Land Use with Spectral Imaging Using Sentinel-2 and PNOA Imagery. Remote Sens. 2021, 13, 2292.
- Huang, L.; Jiang, B.; Lv, S.; Liu, Y.; Fu, Y. Deep-Learning-Based Semantic Segmentation of Remote Sensing Images: A Survey. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 17, 8370–8396.
- Wu, B.; Gu, Z.; Zhang, W.; Fu, Q.; Zeng, M.; Li, A. Investigator Accuracy: A Center-Weighted Metric for Evaluating the Location Accuracy of Image Segments in Land Cover Classification. Int. J. Appl. Earth Obs. Geoinf. 2023, 122, 103402.
- Lin, J.; Jing, W.; Song, H.; Chen, G. ESFNet: Efficient Network for Building Extraction From High-Resolution Aerial Images. IEEE Access 2019, 7, 54285–54294.
- Audebert, N.; Le Saux, B.; Lefèvre, S. Segment-before-Detect: Vehicle Detection and Classification through Semantic Segmentation of Aerial Images. Remote Sens. 2017, 9, 368.
- Śledziowski, J.; Terefenko, P.; Giza, A.; Forczmański, P.; Łysko, A.; Maćków, W.; Stępień, G.; Tomczak, A.; Kurylczyk, A. Application of Unmanned Aerial Vehicles and Image Processing Techniques in Monitoring Underwater Coastal Protection Measures. Remote Sens. 2022, 14, 458.
- Altaweel, M.; Khelifi, A.; Li, Z.; Squitieri, A.; Basmaji, T.; Ghazal, M. Automated Archaeological Feature Detection Using Deep Learning on Optical UAV Imagery: Preliminary Results. Remote Sens. 2022, 14, 553.
- Bouguettaya, A.; Zarzour, H.; Kechida, A.; Taberkit, A.M. Deep Learning Techniques to Classify Agricultural Crops through UAV Imagery: A Review. Neural Comput. Appl. 2022, 34, 9511–9536.
- Chen, Y.; Ribera, J.; Boomsma, C.; Delp, E.J. Plant Leaf Segmentation for Estimating Phenotypic Traits. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 3884–3888.
- Marzialetti, F.; Frate, L.; De Simone, W.; Frattaroli, A.R.; Acosta, A.; Carranza, M. Unmanned Aerial Vehicle (UAV)-Based Mapping of Acacia Saligna Invasion in the Mediterranean Coast. Remote Sens. 2021, 13, 3361.
- Nair, S.; Sharifzadeh, S.; Palade, V. Farmland Segmentation in Landsat 8 Satellite Images Using Deep Learning and Conditional Generative Adversarial Networks. Remote Sens. 2024, 16, 823.
- Reckling, W.; Mitasova, H.; Wegmann, K.; Kauffman, G.; Reid, R. Efficient Drone-Based Rare Plant Monitoring Using a Species Distribution Model and AI-Based Object Detection. Drones 2021, 5, 110.
- Baena, S.; Moat, J.; Whaley, O.; Boyd, D. Identifying Species from the Air: UAVs and the Very High Resolution Challenge for Plant Conservation. PLoS ONE 2017, 12, e0188714.
- Zhang, Y.J. A Survey on Evaluation Methods for Image Segmentation. Pattern Recognit. 1996, 29, 1335–1346.
- Chen, Y.; Ming, D.; Zhao, L.; Lv, B.; Zhou, K.; Qing, Y. Review on High Spatial Resolution Remote Sensing Image Segmentation Evaluation. Photogramm. Eng. Remote Sens. 2018, 84, 629–646.
- Gao, H.; Tang, Y.; Jing, L.; Li, H.; Ding, H. A Novel Unsupervised Segmentation Quality Evaluation Method for Remote Sensing Images. Sensors 2017, 17, 2427.
- Wang, Z.; Wang, E.; Zhu, Y. Image Segmentation Evaluation: A Survey of Methods. Artif. Intell. Rev. 2020, 53, 5637–5674.
- Hsieh, C.-H.; Chia, T.-L. Analysis of Evaluation Metrics for Image Segmentation. J. Inf. Hiding Multimed. Signal Process. 2018, 9, 1559–1576.
- Müller, D.; Soto-Rey, I.; Kramer, F. Towards a Guideline for Evaluation Metrics in Medical Image Segmentation. BMC Res. Notes 2022, 15, 210.
- Wang, H.; Zhuang, C.; Zhao, J.; Shi, R.; Jiang, H.; Yuan, Y.; Guo, X.; Xue, Z. Research on Evaluation Method of Aerial Image Segmentation Algorithm. In Proceedings of the 2022 7th International Conference on Signal and Image Processing (ICSIP), Suzhou, China, 20–22 July 2022; pp. 415–419.
- Janušonis, E.; Kazakeviciute-Januskeviciene, G.; Bausys, R. Selection of Optimal Segmentation Algorithm for Satellite Images by Intuitionistic Fuzzy PROMETHEE Method. Appl. Sci. 2024, 14, 644.
- Kazakeviciute-Januskeviciene, G.; Janušonis, E.; Bausys, R. Evaluation of the Segmentation of Remote Sensing Images. In Proceedings of the 2021 IEEE Open Conference of Electrical, Electronic and Information Sciences (eStream), Vilnius, Lithuania, 22 April 2021; pp. 1–7.
- Ma, Z.; Redmond, R.L. Tau Coefficients for Accuracy Assessment of Classification of Remote Sensing Data. Photogramm. Eng. Remote Sens. 1995, 61, 435–439.
- Cohen, J. A Coefficient of Agreement for Nominal Scales. Educ. Psychol. Meas. 1960, 20, 37–46.
- Tariku, G.; Ghiglieno, I.; Gilioli, G.; Gentilin, F.; Armiraglio, S.; Serina, I. Automated Identification and Classification of Plant Species in Heterogeneous Plant Areas Using Unmanned Aerial Vehicle-Collected RGB Images and Transfer Learning. Drones 2023, 7, 599.
- Pichai, K.; Park, B.; Bao, A.; Yin, Y. Automated Segmentation and Classification of Aerial Forest Imagery. Analytics 2022, 1, 135–143.
- Lin, C.-W.; Lin, M.; Hong, Y. Aerial and Optical Images-Based Plant Species Segmentation Using Enhancing Nested Downsampling Features. Forests 2021, 12, 1695.
- Xia, L.; Zhang, R.; Chen, L.; Li, L.; Yi, T.; Yao, W.; Ding, C.; Xie, C. Evaluation of Deep Learning Segmentation Models for Detection of Pine Wilt Disease in Unmanned Aerial Vehicle Images. Remote Sens. 2021, 13, 3594.
- Fuentes-Pacheco, J.; Torres, J.; Roman-Rangel, E.; Cervantes, S.; Juarez-Lopez, P.; Hermosillo, J.; Rendon-Mancha, J. Fig Plant Segmentation from Aerial Images Using a Deep Convolutional Encoder-Decoder Network. Remote Sens. 2019, 11, 1157.
- Gallmann, J.; Schüpbach, B.; Jacot, K.; Albrecht, M.; Winizki, J.; Kirchgessner, N.; Aasen, H. Flower Mapping in Grasslands With Drones and Deep Learning. Front. Plant Sci. 2022, 12, 774965.
- Lake, T.; Runquist, R.; Moeller, D. Deep Learning Detects Invasive Plant Species across Complex Landscapes Using Worldview-2 and Planetscope Satellite Imagery. Remote Sens. Ecol. Conserv. 2022, 8, 875–889.
- Asner, G. Applications of Remote Sensing to Alien Invasive Plant Studies. Sensors 2009, 9, 4869–4889.
- Gałczyńska, M.; Gamrat, R.; Łysko, A. Impact of Invasive Species of the Genus Heracleum spp. (Apiaceae) on Environment and Human Health. Kosmos 2016, 65, 591–599.
- Sužiedelytė Visockienė, J.; Tumelienė, E.; Maliene, V. Identification of Heracleum Sosnowskyi-Invaded Land Using Earth Remote Sensing Data. Sustainability 2020, 12, 759.
- Powers, D. Evaluation: From Precision, Recall and F-Factor to ROC, Informedness, Markedness & Correlation. Mach. Learn. Technol. 2008, 2, 1–24.
- Sokolova, M.; Lapalme, G. A Systematic Analysis of Performance Measures for Classification Tasks. Inf. Process. Manag. 2009, 45, 427–437.
- Zhang, Y.; Guindon, B. Application of the Dice Coefficient to Accuracy Assessment of Object-Based Image Classification. Can. J. Remote Sens. 2017, 43, 48–61.
- Yan, J.; Wang, H.; Yan, M.; Wenhui, D.; Sun, X.; Li, H. IoU-Adaptive Deformable R-CNN: Make Full Use of IoU for Multi-Class Object Detection in Remote Sensing Imagery. Remote Sens. 2019, 11, 286.
- Setiawan, A.W. Image Segmentation Metrics in Skin Lesion: Accuracy, Sensitivity, Specificity, Dice Coefficient, Jaccard Index, and Matthews Correlation Coefficient. In Proceedings of the 2020 International Conference on Computer Engineering, Network, and Intelligent Multimedia (CENIM), Surabaya, Indonesia, 17–18 November 2020; pp. 97–102.
- Ataş, İ. Performance Evaluation of Jaccard-Dice Coefficient on Building Segmentation from High Resolution Satellite Images. Balk. J. Electr. Comput. Eng. 2023, 11, 100–106.
Image | Tau | IoU | Dice |
---|---|---|---|
Prediction 3C | 0.788 | 0.408 | 0.58 |
Prediction 3D | 0.856 | 0.547 | 0.707 |
Image | Mask Regions, Number | Mask Regions, Total Area | TP Regions, Number | TP Regions, Total Area | FP Regions, Number | FP Regions, Total Area |
---|---|---|---|---|---|---|
Mask 3B | 611 | 5,665,854 | n/a | n/a | n/a | n/a |
Prediction 3C | 495 | 6,926,924 | 301 | 3,655,480 | 453 | 3,271,444 |
Prediction 3D | 336 | 4,224,048 | 292 | 3,506,152 | 297 | 717,896 |
Image | M+ (α = 5), Positive Region Coefficient | M− (β = 5), Negative Region Coefficient | Δ, Difference between the Region Coefficients |
---|---|---|---|
Prediction 3C | 0.856 | 0.368 | 0.488 |
Prediction 3D | 0.839 | 0.023 | 0.816 |
Case | TP% | FP% | Tau | IoU | Dice | M+ (α = 5) | M− (β = 5) | Δ |
---|---|---|---|---|---|---|---|---|
I-A | 25 | 0 | 0.743086 | 0.25 | 0.4 | 0.25 | 0.0 | 0.25 |
I-B | 25 | 0 | 0.743086 | 0.25 | 0.4 | 0.351572 | 0.0 | 0.351572 |
I-C | 25 | 0 | 0.743086 | 0.25 | 0.4 | 0.453143 | 0.0 | 0.453143 |
I-D | 25 | 0 | 0.743086 | 0.25 | 0.4 | 0.656287 | 0.0 | 0.656287 |
I-E | 25 | 0 | 0.743086 | 0.25 | 0.4 | 0.757858 | 0.0 | 0.757858 |
I-F | 30 | 20 | 0.697491 | 0.25 | 0.4 | 0.3 | 0.4 | −0.1 |
Case | I-A | I-B | I-C | I-D | I-E |
---|---|---|---|---|---|
TP% | 25 | 25 | 25 | 25 | 25 |
TP regions | 5 | 8 | 11 | 17 | 20 |
M+ (α = 1) | 0.25 | 0.25 | 0.25 | 0.25 | 0.25 |
M+ (α = 2) | 0.25 | 0.3 | 0.35 | 0.45 | 0.5 |
M+ (α = 3) | 0.25 | 0.326 | 0.402 | 0.554 | 0.63 |
M+ (α = 4) | 0.25 | 0.341 | 0.433 | 0.616 | 0.707 |
M+ (α = 5) | 0.25 | 0.352 | 0.453 | 0.656 | 0.758 |
M+ (α = 6) | 0.25 | 0.359 | 0.467 | 0.685 | 0.794 |
M+ (α = 7) | 0.25 | 0.364 | 0.478 | 0.706 | 0.82 |
M+ (α = 8) | 0.25 | 0.368 | 0.486 | 0.723 | 0.841 |
M+ (α = 9) | 0.25 | 0.371 | 0.493 | 0.736 | 0.857 |
M+ (α = 10) | 0.25 | 0.374 | 0.498 | 0.746 | 0.871 |
Case | TP% | FP% | Tau | IoU | Dice | M+ (α = 5) | M− (β = 5) | Δ |
---|---|---|---|---|---|---|---|---|
II-A | 25.0 | 0.0 | 0.743 | 0.25 | 0.4 | 0.25 | 0.0 | 0.25 |
II-B | 25.0 | 0.0 | 0.743 | 0.25 | 0.4 | 0.758 | 0.0 | 0.758 |
II-C | 25.0 | 0.0 | 0.743 | 0.25 | 0.4 | 0.352 | 0.0 | 0.352 |
II-D | 25.0 | 75.0 | 0.596 | 0.143 | 0.25 | 0.758 | 0.237 | 0.521 |
Case | TP% | FP% | Tau | IoU | Dice | M+ (α = 5) | M− (β = 5) | Δ |
---|---|---|---|---|---|---|---|---|
III-A | 10.0 | 0.0 | 0.653 | 0.1 | 0.182 | 0.303 | 0.0 | 0.303 |
III-B | 25.0 | 0.0 | 0.743 | 0.25 | 0.4 | 0.352 | 0.0 | 0.352 |
III-C | 40.0 | 0.0 | 0.809 | 0.4 | 0.571 | 0.4 | 0.0 | 0.4 |
III-D | 40.0 | 120.0 | 0.625 | 0.182 | 0.308 | 0.4 | 0.237 | 0.163 |
Case | TP% | FP% | Tau | IoU | Dice | M+ (α = 5) | M− (β = 5) | Δ |
---|---|---|---|---|---|---|---|---|
IV-A | 100.0 | 0.0 | 1.0 | 1.0 | 1.0 | 1.0 | 0.0 | 1.0 |
IV-B | 70.0 | 0.0 | 0.914 | 0.7 | 0.824 | 0.903 | 0.0 | 0.903 |
IV-C | 25.0 | 0.0 | 0.743 | 0.25 | 0.4 | 0.758 | 0.0 | 0.758 |
IV-D | 9.0 | 0.0 | 0.645 | 0.09 | 0.165 | 0.618 | 0.0 | 0.618 |
IV-E | 4.0 | 0.0 | 0.597 | 0.04 | 0.077 | 0.525 | 0.0 | 0.525 |
IV-F | 1.0 | 0.0 | 0.548 | 0.01 | 0.02 | 0.398 | 0.0 | 0.398 |
Case | TP% | FP% | Tau | IoU | Dice | M+ (α = 5) | M− (β = 5) | Δ |
---|---|---|---|---|---|---|---|---|
V-A | 100.0 | 0.0 | 1.0 | 1.0 | 1.0 | 1.0 | 0.0 | 1.0 |
V-B | 100.0 | 20.0 | 0.953 | 0.833 | 0.909 | 1.0 | 0.167 | 0.833 |
V-C | 25.0 | 0.0 | 0.743 | 0.25 | 0.4 | 0.758 | 0.0 | 0.758 |
V-D | 25.0 | 5.0 | 0.719 | 0.238 | 0.385 | 0.758 | 0.167 | 0.591 |
V-E | 70.0 | 0.0 | 0.914 | 0.7 | 0.824 | 0.903 | 0.0 | 0.903 |
V-F | 70.0 | 20.0 | 0.859 | 0.583 | 0.737 | 0.903 | 0.222 | 0.681 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Maćków, W.; Bondarewicz, M.; Łysko, A.; Terefenko, P. Orthophoto-Based Vegetation Patch Analyses—A New Approach to Assess Segmentation Quality. Remote Sens. 2024, 16, 3344. https://doi.org/10.3390/rs16173344