Recent Applications of Convolutional Neural Networks (CNNs) in Vegetation Remote Sensing

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing in Agriculture and Vegetation".

Deadline for manuscript submissions: 26 May 2024

Special Issue Editor


Dr. Abolfazl Abdollahi
Guest Editor
Postdoctoral Research Fellow in Remote Sensing and Environment, ANU College of Science, Canberra, ACT 2601, Australia
Interests: artificial intelligence; earth and space science informatics; environmental assessment and monitoring; photogrammetry and remote sensing; natural hazards; image processing; machine learning

Special Issue Information

Dear Colleagues,

Vegetation analysis and mapping are critical components of monitoring the Earth's ecosystems and understanding the impact of environmental changes on biodiversity and ecosystem services. Accurate vegetation mapping enables researchers and managers to identify and track changes in vegetation cover over time, detect the onset of ecosystem degradation, and identify areas in need of restoration or conservation. Furthermore, the analysis of vegetation data provides critical information for climate change research, land-use planning, and agricultural management.

Remote sensing has revolutionized vegetation mapping and trend analysis, providing data on vegetation cover at global scales and at a range of spatial and spectral resolutions. However, the accurate interpretation of remote sensing data requires advanced analytical techniques that can handle the complexity and scale of the data. In recent years, Convolutional Neural Networks (CNNs), a type of deep learning algorithm, have emerged as a powerful approach for analysing remote sensing data and extracting valuable information about vegetation patterns and dynamics. CNNs enable researchers to extract complex features from large-scale remote sensing datasets, providing critical insights into vegetation distribution, composition, and dynamics. The ability of CNNs to accurately classify vegetation types and detect changes in vegetation cover over time has the potential to transform our understanding of global vegetation dynamics and their response to environmental change.
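To make the kind of workflow described above concrete, the short sketch below shows a minimal patch-based CNN classifier for multispectral imagery in PyTorch. It is an illustrative assumption only: the band count, patch size, and number of vegetation classes are placeholders and do not refer to any particular study in this Special Issue.

```python
# Minimal sketch (illustrative only): a small CNN that classifies fixed-size
# multispectral patches into vegetation types. Band count, patch size, and
# class count are assumptions, not values from any paper in this issue.
import torch
import torch.nn as nn

class PatchVegCNN(nn.Module):
    def __init__(self, n_bands: int = 4, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_bands, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 16x16 -> 8x8
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Usage: a batch of 8 patches with 4 bands (e.g., R, G, B, NIR), 32x32 pixels each.
model = PatchVegCNN(n_bands=4, n_classes=5)
logits = model(torch.randn(8, 4, 32, 32))                 # -> shape (8, 5)
```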

The aim of this Special Issue (SI) is to highlight the latest developments and applications of CNNs in vegetation remote sensing. The SI welcomes all manuscript types (e.g., original research articles and review articles), with added value placed on the use of time-series remote sensing data for the mapping, change detection, trend analysis, and study of the drivers of vegetation change in all ecosystems using CNNs. Suggested themes and topics for submission include, but are not limited to, the following:

  • CNN architectures for vegetation remote sensing:
    • Novel CNN architectures specifically designed for vegetation classification, segmentation, or change detection.
    • Comparative studies of different CNN architectures for vegetation remote sensing.
  • Applications of CNNs in vegetation remote sensing:
    • Use of CNNs for vegetation mapping, classification, and segmentation in different regions and ecosystems.
    • Analysis of the performance of CNNs compared to traditional remote sensing methods in vegetation mapping and monitoring.
    • Retrieving time series of biophysical parameters for vegetation monitoring using CNNs.
    • Integration of CNNs with other remote sensing data sources, such as LiDAR or hyperspectral data, to improve vegetation mapping accuracy.
  • CNNs for monitoring vegetation dynamics:
    • Development of time-series CNN models for vegetation change monitoring and trend analysis.
    • Analysis of the spatio-temporal patterns of vegetation changes using CNNs.
  • CNNs for addressing key challenges in vegetation remote sensing:
    • Use of CNNs for accurate classification of mixed pixel areas, such as urban or agricultural landscapes.
    • Response of vegetation dynamics to changes in climatic variables.
    • Analysis of the effects of different data preprocessing techniques on the performance of CNNs in vegetation remote sensing.
    • Development of CNN-based techniques for handling missing data in vegetation remote sensing datasets.

Dr. Abolfazl Abdollahi
Guest Editor

Dr. Chandrama Sarker
Guest Editor Assistant
Environmental Unit, The Commonwealth Scientific and Industrial Research Organisation (CSIRO), Brisbane, Australia
Email: [email protected]

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • convolutional neural networks (CNNs)
  • remote sensing
  • vegetation mapping
  • vegetation dynamics and trend analysis
  • time series analysis
  • change detection
  • deep learning
  • mixed pixel classification
  • land use change
  • environmental change
  • accuracy assessment

Published Papers (2 papers)


Research

24 pages, 14907 KiB  
Article
Multisource High-Resolution Remote Sensing Image Vegetation Extraction with Comprehensive Multifeature Perception
by Yan Li, Songhan Min, Binbin Song, Hui Yang, Biao Wang and Yongchuang Wu
Remote Sens. 2024, 16(4), 712; https://doi.org/10.3390/rs16040712 - 18 Feb 2024
Abstract
High-resolution remote sensing image-based vegetation monitoring is a hot topic in remote sensing technology and applications. However, when facing large-scale monitoring across different sensors in broad areas, the current methods suffer from fragmentation and weak generalization capabilities. To address this issue, this paper proposes a multisource high-resolution remote sensing image-based vegetation extraction method that considers the comprehensive perception of multiple features. First, this method utilizes a random forest model to perform feature selection for the vegetation index, selecting an index that enhances the separability between vegetation and other land features. Based on this, a multifeature synthesis perception convolutional network (MSCIN) is constructed, which enhances the extraction of multiscale feature information, global information interaction, and feature cross-fusion. The MSCIN network simultaneously constructs dual-branch parallel networks for spectral features and vegetation index features, strengthening multiscale feature extraction while reducing the loss of detailed features by simplifying the dense connection module. Furthermore, to facilitate global information interaction between the original spectral information and vegetation index features, a dual-path multihead cross-attention fusion module is designed. This module enhances the differentiation of vegetation from other land features and improves the network's generalization performance, enabling vegetation extraction from multisource high-resolution remote sensing data. To validate the effectiveness of this method, we randomly selected six test areas within Anhui Province and compared the results with three different data sources and other typical methods (NDVI, RFC, OCBDL, and HRNet). The results demonstrate that the MSCIN method proposed in this paper, using only GF2 satellite images as samples, exhibits robust extraction accuracy across different sensors. It overcomes the rapid degradation of accuracy observed in other methods when applied to various sensors and addresses issues such as internal fragmentation, false positives, and false negatives caused by sample generalization and image diversity.
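The abstract above describes a dual-branch design in which spectral features and vegetation-index features are fused through multihead cross-attention. The sketch below illustrates that general idea only; it is not the authors' MSCIN implementation, and the channel sizes, depth, and fusion head are assumptions.

```python
# Simplified sketch of a dual-branch encoder with multihead cross-attention
# fusion of spectral and vegetation-index features. NOT the authors' MSCIN
# network: layer sizes, depth, and the fusion head are assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class DualBranchCrossAttention(nn.Module):
    def __init__(self, spectral_bands: int = 4, index_bands: int = 2,
                 dim: int = 64, heads: int = 4, n_classes: int = 2):
        super().__init__()
        self.spec_branch = nn.Sequential(conv_block(spectral_bands, dim), conv_block(dim, dim))
        self.index_branch = nn.Sequential(conv_block(index_bands, dim), conv_block(dim, dim))
        # Two cross-attention paths: spectral features attend to index features and vice versa.
        self.spec_to_idx = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.idx_to_spec = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Conv2d(2 * dim, n_classes, kernel_size=1)

    def forward(self, spectral: torch.Tensor, indices: torch.Tensor) -> torch.Tensor:
        fs = self.spec_branch(spectral)               # (B, C, H, W)
        fi = self.index_branch(indices)               # (B, C, H, W)
        b, c, h, w = fs.shape
        ts = fs.flatten(2).transpose(1, 2)            # (B, H*W, C) token sequence
        ti = fi.flatten(2).transpose(1, 2)
        a_s, _ = self.spec_to_idx(ts, ti, ti)         # spectral queries, index keys/values
        a_i, _ = self.idx_to_spec(ti, ts, ts)         # index queries, spectral keys/values
        fused = torch.cat([a_s, a_i], dim=2).transpose(1, 2).reshape(b, 2 * c, h, w)
        return self.head(fused)                       # per-pixel class logits

# Usage: 4 spectral bands plus 2 vegetation-index layers on 32x32 tiles.
net = DualBranchCrossAttention()
out = net(torch.randn(2, 4, 32, 32), torch.randn(2, 2, 32, 32))  # -> (2, 2, 32, 32)
```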

27 pages, 26547 KiB  
Article
Self-Adaptive-Filling Deep Convolutional Neural Network Classification Method for Mountain Vegetation Type Based on High Spatial Resolution Aerial Images
by Shiou Li, Xianyun Fei, Peilong Chen, Zhen Wang, Yajun Gao, Kai Cheng, Huilong Wang and Yuanzhi Zhang
Remote Sens. 2024, 16(1), 31; https://doi.org/10.3390/rs16010031 - 20 Dec 2023
Abstract
The composition and structure of mountain vegetation are complex and changeable, and thus urgently require the integration of Object-Based Image Analysis (OBIA) and Deep Convolutional Neural Networks (DCNNs). However, while studies of such integration techniques continue to increase, few have classified mountain vegetation by combining OBIA and DCNNs, because it is difficult to obtain enough samples to exploit the potential of DCNNs for mountain vegetation type classification, especially using high-spatial-resolution remote sensing images. To address this issue, we propose a self-adaptive-filling method (SAF) that incorporates OBIA to improve the performance of DCNNs in mountain vegetation type classification using high-spatial-resolution aerial images. SAF produces enough regular sample data for DCNNs by filling the irregular objects created by image segmentation with interior adaptive pixel blocks. Meanwhile, non-sample segmented image objects are shaped into different regular rectangular blocks via SAF. The classification result is then determined by voting over the DCNN predictions. Compared to traditional OBIA methods, SAF generates more samples for the DCNN and fully utilizes every pixel of the DCNN input. We designed experiments to compare SAF with traditional OBIA and semantic segmentation methods, such as U-net, MACU-net, and SegNeXt. The results show that our SAF-DCNN outperforms traditional OBIA in terms of accuracy and is comparable to the best-performing semantic segmentation method, while reducing the "pretzel" phenomenon (black-and-white noise generated in classification) common in semantic segmentation results. Overall, the SAF-based OBIA using DCNNs proposed in this paper is superior to other commonly used methods for vegetation classification in mountainous areas.
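The core idea reported above, covering an irregular segmented object with regular pixel blocks and voting over per-block CNN predictions, can be illustrated with the rough sketch below. It is an interpretation for illustration only, not the paper's SAF algorithm; the block size, placement rule, and dummy classifier are placeholders.

```python
# Rough illustration of filling an irregular segmented object with regular,
# fixed-size pixel blocks that lie fully inside the object, classifying each
# block, and voting over the results. Interpretation for illustration only,
# not the paper's SAF algorithm; block size, placement rule, and the dummy
# classifier are placeholders.
import numpy as np

def interior_blocks(mask: np.ndarray, block: int = 16, stride: int = 8):
    """Yield (row, col) corners of block x block windows fully inside the object mask."""
    h, w = mask.shape
    for r in range(0, h - block + 1, stride):
        for c in range(0, w - block + 1, stride):
            if mask[r:r + block, c:c + block].all():
                yield r, c

def classify_object(image: np.ndarray, mask: np.ndarray, predict_block, block: int = 16) -> int:
    """Majority vote over per-block class predictions for one segmented object."""
    votes = [predict_block(image[:, r:r + block, c:c + block])
             for r, c in interior_blocks(mask, block)]
    if not votes:                    # object too small or thin for any full block
        return -1                    # caller would fall back to another rule
    return int(np.bincount(votes).argmax())

# Usage with a dummy stand-in for a trained DCNN classifier:
rng = np.random.default_rng(0)
image = rng.random((4, 128, 128))                    # 4-band image tile
mask = np.zeros((128, 128), dtype=bool)
mask[20:90, 30:100] = True                           # one segmented object
label = classify_object(image, mask, predict_block=lambda p: int(p.mean() > 0.5))
```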
