UAV-Based Multi-Sensor Data Fusion for Urban Land Cover Mapping Using a Deep Convolutional Neural Network
Abstract
1. Introduction
2. Proposed Classification Approach
2.1. Orthomosaic Image Generation
2.2. Optimized LOAM SLAM
2.3. Conventional Maximum Likelihood Classifier
2.4. Support Vector Machine Classifier
2.5. DCNN Architecture
2.6. Accuracy Assessment
3. Data Acquisition and Preprocessing
3.1. Data Processing of the First Dataset
3.2. Data Processing of the Second Dataset
4. Results and Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
| Class | RGB PA (%) | RGB UA (%) | RGB + Intensity PA (%) | RGB + Intensity UA (%) | RGB + Elevation PA (%) | RGB + Elevation UA (%) | RGB + LiDAR PA (%) | RGB + LiDAR UA (%) | Multispectral PA (%) | Multispectral UA (%) | Multispectral + Elevation PA (%) | Multispectral + Elevation UA (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Building | 35.51 | 36.54 | 56.07 | 55.05 | 71.96 | 77.00 | 83.18 | 85.58 | 27.10 | 35.37 | 80.37 | 82.69 |
| Road | 75.94 | 90.45 | 86.32 | 85.12 | 98.11 | 83.53 | 97.17 | 83.40 | 90.09 | 69.96 | 91.98 | 82.69 |
| Vegetation | 54.10 | 40.24 | 52.46 | 48.48 | 44.26 | 58.70 | 45.90 | 70.00 | 45.90 | 71.79 | 47.54 | 59.18 |
| Snow | 75.24 | 62.20 | 69.52 | 73.00 | 72.38 | 90.48 | 75.24 | 92.94 | 40.00 | 65.63 | 76.19 | 83.33 |
| Cars | 33.33 | 55.56 | 26.67 | 40.00 | 66.67 | 47.62 | 73.33 | 45.83 | 13.33 | 5.26 | 40.00 | 42.86 |
| OA (%) | 63.20 | | 70.40 | | 79.60 | | 82.60 | | 58.40 | | 79.20 | |
| Kappa (%) | 49.63 | | 58.45 | | 70.79 | | 75.11 | | 40.12 | | 70.34 | |
| Class | RGB PA (%) | RGB UA (%) | RGB + Intensity PA (%) | RGB + Intensity UA (%) | RGB + Elevation PA (%) | RGB + Elevation UA (%) | RGB + LiDAR PA (%) | RGB + LiDAR UA (%) | Multispectral PA (%) | Multispectral UA (%) | Multispectral + Elevation PA (%) | Multispectral + Elevation UA (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Building | 55.14 | 55.14 | 65.51 | 53.27 | 92.47 | 80.37 | 89.36 | 88.50 | 40.00 | 37.38 | 89.79 | 82.24 |
| Road | 89.02 | 72.64 | 90.44 | 75.94 | 89.74 | 82.54 | 95.47 | 84.19 | 91.77 | 68.39 | 89.55 | 84.90 |
| Vegetation | 47.96 | 77.05 | 51.81 | 70.49 | 55.69 | 72.13 | 48.19 | 75.57 | 46.38 | 52.46 | 52.24 | 57.38 |
| Snow | 72.72 | 53.33 | 71.84 | 70.48 | 87.91 | 76.19 | 74.28 | 91.28 | 56.84 | 51.43 | 83.87 | 74.28 |
| Cars | 22.22 | 66.66 | 24.49 | 80.00 | 33.33 | 93.33 | 80.00 | 46.09 | 12.82 | 66.67 | 26.83 | 73.33 |
| OA (%) | 65.20 | | 69.40 | | 79.80 | | 83.80 | | 59.20 | | 79.40 | |
| Kappa (%) | 53.51 | | 58.95 | | 72.51 | | 77.13 | | 42.50 | | 70.40 | |
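The producer's accuracy (PA), user's accuracy (UA), overall accuracy (OA), and kappa coefficient reported in the tables above are standard confusion-matrix metrics. As a minimal sketch of how they are computed — using a hypothetical 3-class confusion matrix, not the study's data:

```python
import numpy as np

# Hypothetical 3-class confusion matrix (rows = reference, cols = predicted);
# illustrative values only, not results from the tables above.
cm = np.array([
    [50,  5,  5],
    [ 4, 60,  6],
    [ 6,  4, 60],
], dtype=float)

total = cm.sum()
row_sums = cm.sum(axis=1)   # reference totals per class
col_sums = cm.sum(axis=0)   # predicted totals per class
diag = np.diag(cm)          # correctly classified counts

pa = diag / row_sums        # producer's accuracy (recall per reference class)
ua = diag / col_sums        # user's accuracy (precision per predicted class)
oa = diag.sum() / total     # overall accuracy

# Cohen's kappa: observed agreement discounted by chance agreement
pe = (row_sums * col_sums).sum() / total ** 2
kappa = (oa - pe) / (1 - pe)
```

PA measures omission error (how much of the reference class was found), UA measures commission error (how much of the mapped class is correct), and kappa rewards only the agreement beyond what random labeling would achieve.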
| Combination | Accuracy (%) | Precision (%) | Recall (%) | F-Measure (%) |
|---|---|---|---|---|
| RGB | 90.72 | 72.25 | 71.95 | 72.10 |
| RGB + intensity | 96.27 | 88.92 | 88.64 | 88.78 |
| RGB + elevation | 96.52 | 89.80 | 89.25 | 89.52 |
| RGB + LiDAR | 97.53 | 90.46 | 90.68 | 90.57 |
| Combination | Accuracy (%) | Precision (%) | Recall (%) | F-Measure (%) |
|---|---|---|---|---|
| RGB | 93.77 | 80.34 | 80.24 | 80.29 |
| RGB + intensity | 96.94 | 90.86 | 90.76 | 90.81 |
| RGB + elevation | 97.31 | 91.96 | 91.91 | 91.93 |
| RGB + LiDAR | 97.75 | 93.27 | 93.24 | 93.25 |
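The precision, recall, and F-measure columns above are typically macro-averaged over the land cover classes when reported for semantic segmentation. A sketch under that assumption, with hypothetical per-class counts of true positives, false positives, and false negatives:

```python
import numpy as np

# Hypothetical per-class counts (illustrative, not the study's results)
tp = np.array([80., 70., 90.])  # true positives per class
fp = np.array([10., 20.,  5.])  # false positives per class
fn = np.array([ 5., 15., 10.])  # false negatives per class

precision = tp / (tp + fp)                          # per-class precision
recall = tp / (tp + fn)                             # per-class recall
f1 = 2 * precision * recall / (precision + recall)  # per-class F-measure

# Macro-averaged scores: each class contributes equally regardless of size
macro_p, macro_r, macro_f = precision.mean(), recall.mean(), f1.mean()
```

Macro-averaging weights small classes (e.g., cars) the same as large ones (e.g., roads), which is why the F-measure can sit well below the pixel-wise accuracy in the tables.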
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Elamin, A.; El-Rabbany, A. UAV-Based Multi-Sensor Data Fusion for Urban Land Cover Mapping Using a Deep Convolutional Neural Network. Remote Sens. 2022, 14, 4298. https://doi.org/10.3390/rs14174298