
High Spatial Resolution Remote Sensing: Data, Analysis, and Applications

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (1 May 2021) | Viewed by 19857

Special Issue Editor


Dr. Melanie Vanderhoof
Guest Editor
Research Geographer, U.S. Geological Survey, Geosciences and Environmental Change Science Center, DFC, MS980, Denver, CO 80225, USA
Interests: high-resolution and moderate-resolution remote sensing; wetlands; fire; image processing and analysis; object-based remote sensing; multi-source remote sensing

Special Issue Information

Dear Colleagues,

The rapidly increasing availability of multispectral, high spatial resolution (≤5 m) imagery, collected by satellites, CubeSats, and airborne sensors, presents an opportunity to detect landscape change with increased spatial detail. This additional detail can enhance our ability to monitor small or narrow landscape features (e.g., coastlines, riparian corridors, forest fragments, wetlands), as well as detect greater heterogeneity in landscape change. Current efforts to process multiple high spatial resolution images are frequently challenged by temporally inconsistent quality control, which results in variable band reflectance values and inconsistent spatial positioning, particularly compared to more established sensors such as Landsat, AVHRR, or MODIS. The smaller pixel size has also been shown to create greater spectral heterogeneity within a given cover class, which has increased the popularity of object-based image processing approaches. To date, such approaches are frequently labor-intensive and in need of simplification and automation to allow for greater repeatability. These challenges all point to the need for improved image processing approaches specific to multispectral, high spatial resolution imagery.

This Special Issue welcomes original and innovative papers that explore how image processing of multispectral, high spatial resolution imagery can be improved to enhance and potentially operationalize high spatial resolution change detection and time series analysis. Applications of such imagery to diverse ecosystem types are welcome. Topics of interest include, but are not limited to, the following:

  • Operational use of high spatial resolution datasets for disturbance (e.g., fire, insect, climate, anthropogenic activities) detection in forests, grasslands, surface water, snow, or ice;
  • Time series analysis using high spatial resolution imagery;
  • Change detection using high spatial resolution imagery;
  • Method development related to operationalizing object-based image analysis;
  • Machine learning image processing approaches;
  • Method development or applications using novel sensors and platforms;
  • Integration of high spatial resolution multispectral imagery with other remote measurements (e.g., SAR, lidar, UAVs, Sentinel);
  • Integration of high spatial resolution multispectral imagery with ground-based datasets.

Dr. Melanie Vanderhoof
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Remote sensing of environment
  • Time series analysis
  • Change detection
  • Spatial analysis
  • Object-based image analysis
  • Machine learning
  • Image processing

Published Papers (4 papers)


Research

40 pages, 11642 KiB  
Article
Multiple Pedestrians and Vehicles Tracking in Aerial Imagery Using a Convolutional Neural Network
by Seyed Majid Azimi, Maximilian Kraus, Reza Bahmanyar and Peter Reinartz
Remote Sens. 2021, 13(10), 1953; https://doi.org/10.3390/rs13101953 - 17 May 2021
Cited by 11 | Viewed by 2561
Abstract
In this paper, we address various challenges in multi-pedestrian and vehicle tracking in high-resolution aerial imagery through an intensive evaluation of a number of traditional and Deep Learning based Single- and Multi-Object Tracking methods. We also describe our proposed Deep Learning based Multi-Object Tracking method, AerialMPTNet, which fuses appearance, temporal, and graphical information using a Siamese Neural Network, a Long Short-Term Memory, and a Graph Convolutional Neural Network module for more accurate and stable tracking. Moreover, we investigate the influence of Squeeze-and-Excitation layers and Online Hard Example Mining on the performance of AerialMPTNet. To the best of our knowledge, we are the first to use these two techniques for regression-based Multi-Object Tracking. Additionally, we compare the L1 and Huber loss functions. In our experiments, we extensively evaluate AerialMPTNet on three aerial Multi-Object Tracking datasets, namely the AerialMPT and KIT AIS pedestrian and vehicle datasets. Qualitative and quantitative results show that AerialMPTNet outperforms all previous methods on the pedestrian datasets and achieves competitive results on the vehicle dataset. The Long Short-Term Memory and Graph Convolutional Neural Network modules enhance tracking performance, whereas Squeeze-and-Excitation and Online Hard Example Mining help significantly in some cases while degrading results in others. According to the results, the L1 loss yields better results than the Huber loss in most scenarios. The presented results provide deep insight into the challenges and opportunities of the aerial Multi-Object Tracking domain, paving the way for future research.
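For readers unfamiliar with the L1 versus Huber comparison mentioned in the abstract, the following minimal sketch contrasts the two losses on hypothetical bounding-box regression residuals. The delta threshold and the example values are assumptions chosen for illustration, not parameters taken from the paper.

# Illustrative comparison of the L1 and Huber losses; delta and residuals are assumptions.
import numpy as np

def l1_loss(pred, target):
    """Mean absolute error."""
    return np.mean(np.abs(pred - target))

def huber_loss(pred, target, delta=1.0):
    """Quadratic for small residuals, linear for large ones (delta assumed here)."""
    r = np.abs(pred - target)
    quad = 0.5 * r ** 2
    lin = delta * (r - 0.5 * delta)
    return np.mean(np.where(r <= delta, quad, lin))

# Hypothetical box-coordinate predictions and targets (pixels).
pred = np.array([10.2, 48.7, 33.0, 72.5])
target = np.array([10.0, 50.0, 30.0, 70.0])
print("L1   :", l1_loss(pred, target))
print("Huber:", huber_loss(pred, target))

Because the Huber loss down-weights large residuals, it reacts less strongly to poorly localized boxes than L1, which is one possible reason the two losses behave differently across scenarios.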

20 pages, 6116 KiB  
Article
DE-Net: Deep Encoding Network for Building Extraction from High-Resolution Remote Sensing Imagery
by Hao Liu, Jiancheng Luo, Bo Huang, Xiaodong Hu, Yingwei Sun, Yingpin Yang, Nan Xu and Nan Zhou
Remote Sens. 2019, 11(20), 2380; https://doi.org/10.3390/rs11202380 - 14 Oct 2019
Cited by 64 | Viewed by 4620
Abstract
Deep convolutional neural networks have driven significant progress in building extraction from high-resolution remote sensing imagery. While most such work focuses on modifying existing image segmentation networks from computer vision, in this paper we propose a new network, the Deep Encoding Network (DE-Net), designed specifically for this problem and built on recently introduced image segmentation techniques. Four modules are used to construct DE-Net: inception-style downsampling modules combining a striding convolution layer and a max-pooling layer, encoding modules comprising six linear residual blocks with a scaled exponential linear unit (SELU) activation function, compressing modules reducing the feature channels, and a densely upsampling module that enables the network to encode spatial information inside feature maps. DE-Net achieves state-of-the-art performance on the WHU Building Dataset in recall, F1-score, and intersection over union (IoU) metrics without pre-training. It also outperforms several segmentation networks on our self-built Suzhou Satellite Building Dataset. The experimental results validate the effectiveness of DE-Net on building extraction from both aerial and satellite imagery. They also suggest that, given enough training data, designing and training a network from scratch may outperform fine-tuning models pre-trained on datasets unrelated to building extraction.
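As a rough illustration of the "inception-style downsampling module" described in the abstract, the PyTorch sketch below concatenates a strided-convolution branch with a max-pooling branch and applies a SELU activation. The channel counts and kernel sizes are assumptions; the paper's exact layer configuration is not reproduced here.

# Minimal sketch of an inception-style downsampling block (channel counts assumed).
import torch
import torch.nn as nn

class DownsamplingModule(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # Convolution branch halves the spatial resolution with stride 2.
        self.conv = nn.Conv2d(in_ch, out_ch - in_ch, kernel_size=3, stride=2, padding=1)
        # Pooling branch keeps the input channels and also halves the resolution.
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        self.act = nn.SELU()

    def forward(self, x):
        # Concatenate the two branches along the channel dimension.
        y = torch.cat([self.conv(x), self.pool(x)], dim=1)
        return self.act(y)

# Example: downsample a 4-band 256x256 patch to 64 channels at 128x128.
x = torch.randn(1, 4, 256, 256)
print(DownsamplingModule(4, 64)(x).shape)  # torch.Size([1, 64, 128, 128])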

23 pages, 9825 KiB  
Article
WorldView-2 Data for Hierarchical Object-Based Urban Land Cover Classification in Kigali: Integrating Rule-Based Approach with Urban Density and Greenness Indices
by Theodomir Mugiraneza, Andrea Nascetti and Yifang Ban
Remote Sens. 2019, 11(18), 2128; https://doi.org/10.3390/rs11182128 - 12 Sep 2019
Cited by 22 | Viewed by 5590
Abstract
The emergence of high-resolution satellite data, such as WorldView-2, has opened the opportunity for urban land cover mapping at fine resolution. However, it is not straightforward to map detailed urban land cover and to detect deprived urban areas, such as informal settlements, in complex urban environments based merely on high-resolution spectral features. Approaches integrating hierarchical segmentation and rule-based classification strategies can therefore play a crucial role in producing high-quality urban land cover maps. This study aims to evaluate the potential of WorldView-2 high-resolution multispectral and panchromatic imagery for detailed urban land cover classification in Kigali, Rwanda, a complex urban area characterized by a subtropical highland climate. A multi-stage object-based classification was performed using support vector machines (SVM) and a rule-based approach to derive 12 land cover classes with the input of WorldView-2 spectral bands, spectral indices, gray-level co-occurrence matrix (GLCM) texture measures, and a digital terrain model (DTM). In the initial classification, confusion existed among the informal settlements and the high- and low-density built-up areas, as well as between the upland and lowland agriculture. To improve the classification accuracy, a framework based on a geometric ruleset and two newly defined indices (urban density and greenness density indices) was developed. The novel framework resulted in an overall classification accuracy of 85.36% with a kappa coefficient of 0.82. The confusion between high- and low-density built-up areas decreased significantly, while informal settlements were successfully extracted with producer's and user's accuracies of 77% and 90%, respectively. The results reveal that integrating an object-based SVM classification of WorldView-2 feature sets and the DTM with the geometric ruleset and the urban density and greenness indices yields better class separability and thus higher classification accuracies in complex urban environments.
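The abstract does not give the formulas for the urban density and greenness density indices, so the sketch below is only one plausible interpretation: for each pixel, the fraction of built-up (or vegetated) pixels from an initial classification within a square moving window. The window size, the input masks, and the index definitions are all assumptions for illustration.

# Illustrative window-based density indices (definitions and window size assumed).
import numpy as np
from scipy.ndimage import uniform_filter

def density_index(class_mask, window=25):
    """Fraction of pixels of the class within a window x window neighbourhood."""
    return uniform_filter(class_mask.astype(np.float32), size=window)

# Hypothetical binary masks from an initial SVM classification.
built_up_mask = np.random.rand(500, 500) > 0.7    # placeholder built-up mask
vegetation_mask = np.random.rand(500, 500) > 0.5  # placeholder vegetation mask (e.g., NDVI above a threshold)

urban_density = density_index(built_up_mask)
greenness_density = density_index(vegetation_mask)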

28 pages, 7729 KiB  
Article
CNN-Based Land Cover Classification Combining Stratified Segmentation and Fusion of Point Cloud and Very High-Spatial Resolution Remote Sensing Image Data
by Keqi Zhou, Dongping Ming, Xianwei Lv, Ju Fang and Min Wang
Remote Sens. 2019, 11(17), 2065; https://doi.org/10.3390/rs11172065 - 2 Sep 2019
Cited by 37 | Viewed by 5562
Abstract
Traditional and convolutional neural network (CNN)-based geographic object-based image analysis (GeOBIA) land-cover classification methods have prospered in remote sensing and produced numerous notable achievements. However, a bottleneck has emerged that hinders further improvements in classification results, owing to the limited information provided by very high-spatial resolution images (VHSRIs). Specifically, different objects sharing similar spectra and the lack of topographic (height) information are inherent drawbacks of VHSRIs, which makes multisource data a promising way forward. Firstly, for data fusion, this paper proposes a standard normalized digital surface model (StdnDSM) method, essentially a digital elevation model derived from a digital terrain model (DTM) and a digital surface model (DSM), to break through the bottleneck by fusing VHSRIs and point clouds. It smooths and improves the fusion of point clouds and VHSRIs and thus performs well in the follow-up classification. The fused data were then used to perform multiresolution segmentation (MRS) and served as training data for the CNN. Moreover, the grey-level co-occurrence matrix (GLCM) was introduced for a stratified MRS. Secondly, for data processing, the stratified MRS was more efficient than unstratified MRS, and its outcome was theoretically more rational and explainable than traditional global segmentation. Finally, the class of each segmented polygon was determined by majority voting. Compared to pixel-based and traditional object-based classification methods, the majority voting strategy is more robust and avoids misclassifications caused by a few misclassified centre points. Experimental results suggest that the proposed method is promising for object-based classification.
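The sketch below illustrates the general idea of deriving an above-ground height layer from a DSM and a DTM and stacking it onto the image bands, in the spirit of the StdnDSM fusion described in the abstract. The paper's exact standardization is not given here; min-max scaling to [0, 1] is an assumption used only for illustration, as are the array shapes.

# Minimal sketch: normalized surface height fused with image bands (scaling assumed).
import numpy as np

def normalized_dsm(dsm, dtm):
    """Above-ground height (nDSM = DSM - DTM), min-max scaled to [0, 1]."""
    ndsm = np.clip(dsm - dtm, 0, None)     # object heights above the terrain
    rng = ndsm.max() - ndsm.min()
    return (ndsm - ndsm.min()) / rng if rng > 0 else np.zeros_like(ndsm)

def fuse(image_bands, dsm, dtm):
    """Stack the height layer onto the spectral bands (bands, rows, cols)."""
    height = normalized_dsm(dsm, dtm)[np.newaxis, ...]
    return np.concatenate([image_bands, height], axis=0)

# Hypothetical inputs resampled to a common grid.
bands = np.random.rand(4, 512, 512).astype(np.float32)   # placeholder VHSRI bands
dsm = np.random.rand(512, 512).astype(np.float32) * 30
dtm = np.random.rand(512, 512).astype(np.float32) * 5
fused = fuse(bands, dsm, dtm)   # shape (5, 512, 512)

The fused stack can then feed the segmentation and CNN training steps described in the abstract.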
