Very High Resolution (VHR) Satellite Imagery: Processing and Applications

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (1 February 2019) | Viewed by 83297

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors


Prof. Francisco Eugenio
Guest Editor
Institute of Oceanography and Global Change, University of Las Palmas of Gran Canaria (ULPGC), 35001 Las Palmas de Gran Canaria, Las Palmas, Spain
Interests: multi/hyperspectral remote sensing; high resolution image preprocessing and analysis; change detection; multisensor registration; land and shallow waters applications

Dr. Javier Marcello
Guest Editor
Institute of Oceanography and Global Change, University of Las Palmas of Gran Canaria (ULPGC), 35001 Las Palmas de Gran Canaria, Las Palmas, Spain
Interests: ecosystem monitoring; coastal areas assessment; multi/hyperspectral sensors; image fusion; classification and change detection

Special Issue Information

Dear Colleagues,

Optical sensors nowadays provide multispectral and panchromatic imagery at much finer spatial resolutions than in previous decades. Ikonos, launched on 24 September 1999, was the first commercial high-resolution satellite sensor to break the one-meter mark. Since then, QuickBird, GeoEye, Pléiades, KOMPSAT and many other very high resolution (VHR) satellites have been launched. Another important milestone was the 2009 launch of WorldView-2, the first VHR satellite to provide eight spectral channels in the visible to near-infrared range. Very high-resolution SAR, on the other hand, finally became available in 2007 with the launch of the Italian COSMO-SkyMed and the German TerraSAR-X, both providing X-band imagery at 1-m resolution.

As a consequence, recent advances in sensor technology and algorithm development enable the use of VHR remote sensing to quantitatively study biophysical and biogeochemical processes in coastal and inland waters. Beyond water areas, VHR data can be fundamental for monitoring complex land ecosystems for biodiversity conservation, and for precision agriculture in the management of soils, crops and pests.

In this context, recent very high-resolution satellite technologies and image processing algorithms offer the opportunity to develop quantitative techniques with the potential to improve upon traditional approaches in terms of cost, mapping fidelity, and objectivity. Typical applications include multi-temporal classification, recognition and tracking of specific patterns, multisensor data fusion, analysis of land and marine ecosystem processes, and environmental monitoring.

This Special Issue aims to collect new developments, methodologies and applications of very high-resolution satellite data for remote sensing. We welcome submissions that provide the community with the most recent advances on all aspects of VHR satellite remote sensing, including but not limited to:

  • Image preprocessing (pansharpening, atmospheric modeling, sunglint correction, feature extraction, etc.).
  • Data fusion and integration of multiresolution and multiplatform data.
  • Image segmentation and classification.
  • Change detection and multi-temporal analysis.
  • Vegetation monitoring in complex ecosystems.
  • Precision agriculture.
  • Urban mapping.
  • Seafloor mapping in shallow waters.
  • Bathymetry of shallow waters.
  • Water quality in shallow waters.
  • Any other application of very high-resolution satellite imagery.
Prof. Francisco Eugenio
Dr. Javier Marcello
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Very high-resolution multiplatform imagery
  • Atmospheric modeling and sunglint techniques
  • Data fusion
  • Segmentation and classification
  • Coastal applications
  • Land applications

Published Papers (13 papers)


Research


22 pages, 9118 KiB  
Article
Remote Sensing Image Scene Classification Using CNN-CapsNet
by Wei Zhang, Ping Tang and Lijun Zhao
Remote Sens. 2019, 11(5), 494; https://doi.org/10.3390/rs11050494 - 28 Feb 2019
Cited by 311 | Viewed by 12781
Abstract
Remote sensing image scene classification is one of the most challenging problems in understanding high-resolution remote sensing images. Deep learning techniques, especially the convolutional neural network (CNN), have improved the performance of remote sensing image scene classification due to the powerful perspective of feature learning and reasoning. However, several fully connected layers are always added to the end of CNN models, which is not efficient in capturing the hierarchical structure of the entities in the images and does not fully consider the spatial information that is important to classification. Fortunately, capsule network (CapsNet), which is a novel network architecture that uses a group of neurons as a capsule or vector to replace the neuron in the traditional neural network and can encode the properties and spatial information of features in an image to achieve equivariance, has become an active area in the classification field in the past two years. Motivated by this idea, this paper proposes an effective remote sensing image scene classification architecture named CNN-CapsNet to make full use of the merits of these two models: CNN and CapsNet. First, a CNN without fully connected layers is used as an initial feature maps extractor. In detail, a pretrained deep CNN model that was fully trained on the ImageNet dataset is selected as a feature extractor in this paper. Then, the initial feature maps are fed into a newly designed CapsNet to obtain the final classification result. The proposed architecture is extensively evaluated on three public challenging benchmark remote sensing image datasets: the UC Merced Land-Use dataset with 21 scene categories, AID dataset with 30 scene categories, and the NWPU-RESISC45 dataset with 45 challenging scene categories. The experimental results demonstrate that the proposed method can lead to a competitive classification performance compared with the state-of-the-art methods. Full article
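
The architecture above pairs a pretrained CNN, stripped of its fully connected layers, with a capsule network whose vector outputs are length-normalised by a squashing function. The minimal PyTorch sketch below illustrates only those two ingredients (backbone feature maps reshaped into capsules plus the standard squash nonlinearity); the capsule dimensionality, input size and routing are illustrative assumptions, not the configuration used in the paper.

```python
import torch
import torchvision

# CNN backbone without fully connected layers, used purely as a feature-map extractor.
# In practice the ImageNet-pretrained weights would be loaded (e.g. weights="IMAGENET1K_V1");
# weights=None keeps this sketch runnable offline.
backbone = torchvision.models.vgg16(weights=None).features

def squash(s, dim=-1, eps=1e-8):
    # Capsule squashing: preserves vector orientation, maps length into [0, 1).
    norm_sq = (s ** 2).sum(dim=dim, keepdim=True)
    return (norm_sq / (1.0 + norm_sq)) * s / torch.sqrt(norm_sq + eps)

x = torch.randn(1, 3, 256, 256)                        # dummy VHR image patch
fmap = backbone(x)                                     # (1, 512, 8, 8) feature maps
caps = fmap.permute(0, 2, 3, 1).reshape(1, -1, 8)      # group channels into 8-D "primary capsules"
caps = squash(caps)                                    # capsule activations fed to the CapsNet head
print(fmap.shape, caps.shape)
```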

13 pages, 5382 KiB  
Article
Long-Term Satellite Monitoring of the Slumgullion Landslide Using Space-Borne Synthetic Aperture Radar Sub-Pixel Offset Tracking
by Donato Amitrano, Raffaella Guida, Domenico Dell’Aglio, Gerardo Di Martino, Diego Di Martire, Antonio Iodice, Mario Costantini, Fabio Malvarosa and Federico Minati
Remote Sens. 2019, 11(3), 369; https://doi.org/10.3390/rs11030369 - 12 Feb 2019
Cited by 15 | Viewed by 4401
Abstract
Kinematic characterization of a landslide at large, small, and detailed scale is today still rare and challenging, especially for long periods, due to the difficulty in implementing demanding ground surveys with adequate spatiotemporal coverage. In this work, the suitability of space-borne synthetic aperture radar sub-pixel offset tracking for the long-term monitoring of the Slumgullion landslide in Colorado (US) is investigated. This landslide is classified as a debris slide and has so far been monitored through ground surveys and, more recently, airborne remote sensing, while satellite images are scarcely exploited. The peculiarity of this landslide is that it is subject to displacements of several meters per year. Therefore, it cannot be monitored with traditional synthetic aperture radar differential interferometry, as this technique has limitations related to the loss of interferometric coherence and to the maximum observable displacement gradient/rate. In order to overcome these limitations, space-borne synthetic aperture radar sub-pixel offset tracking is applied to pairs of images acquired with a time span of one year between August 2011 and August 2013. The obtained results are compared with those available in the literature, both at landslide scale, retrieved through field surveys, and at point scale, using airborne synthetic aperture radar imaging and GPS. The comparison showed full congruence with the past literature. A consistency check covering the full observation period is also implemented to confirm the reliability of the technique, which results in a cheap and effective methodology for the long-term monitoring of large landslide-induced movements. Full article
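
Sub-pixel offset tracking locates the cross-correlation peak between co-registered image patches with sub-pixel precision and converts the resulting offsets into displacement. Below is a minimal sketch with scikit-image's phase cross-correlation on synthetic data; it is a generic illustration, not the processing chain used for the Slumgullion time series.

```python
import numpy as np
from scipy.ndimage import shift as subpixel_shift
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(0)
master = rng.random((128, 128))                       # stand-in for a SAR amplitude patch
slave = subpixel_shift(master, shift=(2.4, -1.7))     # simulated ground motion between acquisitions

# Upsampled cross-correlation recovers the offset to a fraction of a pixel.
offset, error, _ = phase_cross_correlation(master, slave, upsample_factor=100)
print(offset)   # ~[-2.4, 1.7] with this sign convention; scale by pixel spacing / time span for velocity
```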

21 pages, 35070 KiB  
Article
Evaluating the Effects of Image Texture Analysis on Plastic Greenhouse Segments via Recognition of the OSI-USI-ETA-CEI Pattern
by Yao Yao and Shixin Wang
Remote Sens. 2019, 11(3), 231; https://doi.org/10.3390/rs11030231 - 23 Jan 2019
Cited by 8 | Viewed by 3990
Abstract
Compared to multispectral or panchromatic bands, fusion imagery contains both the spectral content of the former and the spatial resolution of the latter. Even though the Estimation of Scale Parameter (ESP), the ESP 2 tool, and some segmentation evaluation methods have been introduced to simplify the choice of scale parameter (SP), shape, and compactness, many challenges remain, including obtaining the natural border of plastic greenhouses (PGs) from a GaoFen-2 (GF-2) fusion imagery, accelerating the progress of follow-up texture analysis, and accurately evaluating over-segmentation and under-segmentation of PG segments in geographic object-based image analysis. Considering the features of high-resolution images, the heterogeneity of fusion imagery was compressed using texture analysis before calculating the optimal scale parameter in ESP 2 in this study. As a result, we quantified the effects of image texture analysis, including increasing averaging operator size (AOS) and decreasing greyscale quantization level (GQL) on PG segments via recognition of a proposed Over-Segmentation Index (OSI)-Under-Segmentation Index (USI)-Error Index of Total Area (ETA)-Composite Error Index (CEI) pattern. The proposed pattern can be used to reasonably evaluate the quality of PG segments obtained from GF-2 fusion imagery and its derivative images, showing that appropriate texture analysis can effectively change the heterogeneity of a fusion image for better segmentation. The optimum setup of GQL and AOS are determined by comparing CEI and visual analysis. Full article
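
Two of the texture-analysis controls studied above are the averaging operator size (AOS) and the greyscale quantization level (GQL). The short sketch below shows, with assumed window sizes and level counts, how increasing AOS and decreasing GQL reduce the heterogeneity of a band before segmentation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(1)
band = rng.integers(0, 2048, size=(512, 512)).astype(float)   # stand-in for one GF-2 fusion band

def preprocess(band, aos=5, gql=32):
    """Smooth with an averaging operator of size `aos`, then requantize to `gql` grey levels."""
    smoothed = uniform_filter(band, size=aos)                  # increasing AOS lowers local heterogeneity
    lo, hi = smoothed.min(), smoothed.max()
    levels = np.floor((smoothed - lo) / (hi - lo + 1e-12) * (gql - 1))
    return levels.astype(np.uint8)                             # fewer levels -> coarser texture

quantized = preprocess(band, aos=7, gql=16)
print(quantized.min(), quantized.max())    # 0 .. 15
```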

13 pages, 14375 KiB  
Article
The Outlining of Agricultural Plots Based on Spatiotemporal Consensus Segmentation
by Angel Garcia-Pedrero, Consuelo Gonzalo-Martín, Mario Lillo-Saavedra and Dionisio Rodríguez-Esparragón
Remote Sens. 2018, 10(12), 1991; https://doi.org/10.3390/rs10121991 - 08 Dec 2018
Cited by 16 | Viewed by 3684
Abstract
The outlining of agricultural land is an important task for obtaining primary information used to create agricultural policies, estimate subsidies and agricultural insurance, and update agricultural geographical databases, among others. Most of the automatic and semi-automatic methods used for outlining agricultural plots using remotely sensed imagery are based on image segmentation. However, these approaches are usually sensitive to intra-plot variability and depend on the selection of the correct parameters, resulting in a poor performance due to the variability in the shape, size, and texture of the agricultural landscapes. In this work, a new methodology based on consensus image segmentation for outlining agricultural plots is presented. The proposed methodology combines segmentation at different scales—carried out using a superpixel (SP) method—and different dates from the same growing season to obtain a single segmentation of the agricultural plots. A visual and numerical comparison of the results provided by the proposed methodology with field-based data (ground truth) shows that the use of segmentation consensus is promising for outlining agricultural plots in a semi-supervised manner. Full article
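
The methodology merges superpixel segmentations obtained at different scales and dates into a single consensus partition. The sketch below uses skimage SLIC superpixels and a simple label-intersection rule as a stand-in for the paper's consensus procedure; the scales, dates and combination rule are assumptions.

```python
import numpy as np
from skimage import data
from skimage.segmentation import slic

image = data.astronaut()                       # stand-in for one acquisition of a growing season
image_t2 = np.roll(image, 2, axis=0)           # pretend second date of the same season

# Superpixels at two scales and two dates.
segs = [
    slic(image, n_segments=200, compactness=10),
    slic(image, n_segments=800, compactness=10),
    slic(image_t2, n_segments=400, compactness=10),
]

# Consensus by intersection: pixels stay together only if every segmentation keeps them together.
consensus = np.zeros(segs[0].shape, dtype=np.int64)
for seg in segs:
    consensus = consensus * (seg.max() + 1) + seg
_, consensus = np.unique(consensus, return_inverse=True)
consensus = consensus.reshape(segs[0].shape)
print(consensus.max() + 1, "consensus regions")
```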

21 pages, 5093 KiB  
Article
Two-Step Urban Water Index (TSUWI): A New Technique for High-Resolution Mapping of Urban Surface Water
by Wei Wu, Qiangzi Li, Yuan Zhang, Xin Du and Hongyan Wang
Remote Sens. 2018, 10(11), 1704; https://doi.org/10.3390/rs10111704 - 29 Oct 2018
Cited by 36 | Viewed by 6575
Abstract
Urban surface water mapping is essential for studying its role in urban ecosystems and local microclimates. However, fast and accurate extraction of urban water remains a great challenge due to the limitations of conventional water indexes and the presence of shadows. Therefore, we proposed a new urban water mapping technique named the Two-Step Urban Water Index (TSUWI), which combines an Urban Water Index (UWI) and an Urban Shadow Index (USI). These two subindexes were established based on spectral analysis and linear Support Vector Machine (SVM) training of pure pixels from eight training sites across China. The performance of the TSUWI was compared with that of the Normalized Difference Water Index (NDWI), High Resolution Water Index (HRWI) and SVM classifier at twelve test sites. The results showed that this method consistently achieved good performance with a mean Kappa Coefficient (KC) of 0.97 and a mean total error (TE) of 5.82%. Overall, classification accuracy of TSUWI was significantly higher than that of the NDWI, HRWI, and SVM (p-value < 0.01). At most test sites, TSUWI improved accuracy by decreasing the TEs by more than 45% compared to NDWI and HRWI, and by more than 15% compared to SVM. In addition, both UWI and USI were shown to have more stable optimal thresholds that are close to 0 and maintain better performance near their optimum thresholds. Therefore, TSUWI can be used as a simple yet robust method for urban water mapping with high accuracy. Full article
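
TSUWI applies a water index first and a shadow index second, both with thresholds close to zero. The published UWI/USI coefficients are not reproduced here; the sketch below only illustrates the two-step masking idea, using the standard NDWI and a hypothetical brightness-based shadow test.

```python
import numpy as np

def ndwi(green, nir):
    """Normalized Difference Water Index: (G - NIR) / (G + NIR)."""
    return (green - nir) / (green + nir + 1e-12)

def two_step_water_mask(blue, green, red, nir,
                        water_threshold=0.0, shadow_floor=0.03):
    """Two-step scheme in the spirit of TSUWI: flag water candidates, then drop shadow-like
    pixels. NDWI and the visible-brightness shadow proxy are illustrative stand-ins for the
    published UWI/USI, whose coefficients are not reproduced here."""
    candidates = ndwi(green, nir) > water_threshold
    visible_brightness = (blue + green + red) / 3.0
    not_shadow = visible_brightness > shadow_floor       # shadows are dark in every band
    return candidates & not_shadow

# Toy reflectance scene: [water, vegetation] / [building shadow, bare soil]
blue  = np.array([[0.080, 0.040], [0.020, 0.10]])
green = np.array([[0.100, 0.070], [0.020, 0.14]])
red   = np.array([[0.060, 0.050], [0.015, 0.18]])
nir   = np.array([[0.020, 0.400], [0.010, 0.30]])
print(two_step_water_mask(blue, green, red, nir))
# [[ True False]
#  [False False]]
```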

23 pages, 2905 KiB  
Article
Deep Distillation Recursive Network for Remote Sensing Imagery Super-Resolution
by Kui Jiang, Zhongyuan Wang, Peng Yi, Junjun Jiang, Jing Xiao and Yuan Yao
Remote Sens. 2018, 10(11), 1700; https://doi.org/10.3390/rs10111700 - 29 Oct 2018
Cited by 95 | Viewed by 5457
Abstract
Deep convolutional neural networks (CNNs) have been widely used and achieved state-of-the-art performance in many image or video processing and analysis tasks. In particular, for image super-resolution (SR) processing, previous CNN-based methods have led to significant improvements, when compared with shallow learning-based methods. However, previous CNN-based algorithms with simple direct or skip connections are of poor performance when applied to remote sensing satellite images SR. In this study, a simple but effective CNN framework, namely deep distillation recursive network (DDRN), is presented for video satellite image SR. DDRN includes a group of ultra-dense residual blocks (UDB), a multi-scale purification unit (MSPU), and a reconstruction module. In particular, through the addition of rich interactive links in and between multiple-path units in each UDB, features extracted from multiple parallel convolution layers can be shared effectively. Compared with classical dense-connection-based models, DDRN possesses the following main properties. (1) DDRN contains more linking nodes with the same convolution layers. (2) A distillation and compensation mechanism, which performs feature distillation and compensation in different stages of the network, is also constructed. In particular, the high-frequency components lost during information propagation can be compensated in MSPU. (3) The final SR image can benefit from the feature maps extracted from UDB and the compensated components obtained from MSPU. Experiments on Kaggle Open Source Dataset and Jilin-1 video satellite images illustrate that DDRN outperforms the conventional CNN-based baselines and some state-of-the-art feature extraction approaches. Full article
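
DDRN is built from densely interconnected residual blocks followed by reconstruction through sub-pixel upsampling. The PyTorch sketch below is a toy dense residual block with a pixel-shuffle tail, shown as a generic member of this family of SR networks rather than the published DDRN layout (block counts, widths and the distillation/compensation units are simplified).

```python
import torch
import torch.nn as nn

class DenseResidualBlock(nn.Module):
    """Toy densely connected residual block: every conv sees all earlier features."""
    def __init__(self, channels=32, growth=16, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        c = channels
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(nn.Conv2d(c, growth, 3, padding=1),
                                             nn.ReLU(inplace=True)))
            c += growth
        self.fuse = nn.Conv2d(c, channels, 1)          # 1x1 "distillation" back to the trunk width

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return x + self.fuse(torch.cat(feats, dim=1))  # residual connection

class ToySRNet(nn.Module):
    def __init__(self, scale=4, channels=32, n_blocks=2):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.body = nn.Sequential(*[DenseResidualBlock(channels) for _ in range(n_blocks)])
        self.tail = nn.Sequential(nn.Conv2d(channels, 3 * scale ** 2, 3, padding=1),
                                  nn.PixelShuffle(scale))   # sub-pixel upsampling

    def forward(self, lr):
        return self.tail(self.body(self.head(lr)))

sr = ToySRNet()(torch.randn(1, 3, 48, 48))
print(sr.shape)   # torch.Size([1, 3, 192, 192])
```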

22 pages, 17755 KiB  
Article
Impact of the Acquisition Geometry of Very High-Resolution Pléiades Imagery on the Accuracy of Canopy Height Models over Forested Alpine Regions
by Livia Piermattei, Mauro Marty, Wilfried Karel, Camillo Ressl, Markus Hollaus, Christian Ginzler and Norbert Pfeifer
Remote Sens. 2018, 10(10), 1542; https://doi.org/10.3390/rs10101542 - 25 Sep 2018
Cited by 26 | Viewed by 5350
Abstract
This work focuses on the accuracy estimation of canopy height models (CHMs) derived from image matching of Pléiades stereo imagery over forested mountain areas. To determine the height above ground and hence canopy height in forest areas, we use normalised digital surface models (nDSMs), computed as the differences between external high-resolution digital terrain models (DTMs) and digital surface models (DSMs) from Pléiades image matching. With the overall goal of testing the operational feasibility of Pléiades images for forest monitoring over mountain areas, two questions guide this work whose answers can help in identifying the optimal acquisition planning to derive CHMs. Specifically, we want to assess (1) the benefit of using tri-stereo images instead of stereo pairs, and (2) the impact of different viewing angles and topography. To answer the first question, we acquired new Pléiades data over a study site in Canton Ticino (Switzerland), and we compare the accuracies of CHMs from Pléiades tri-stereo and from each stereo pair combination. We perform the investigation on different viewing angles over a study area near Ljubljana (Slovenia), where three stereo pairs were acquired at one-day offsets. We focus the analyses on open stable and on tree covered areas. To evaluate the accuracy of Pléiades CHMs, we use CHMs from aerial image matching and airborne laser scanning as reference for the Ticino and Ljubljana study areas, respectively. For the two study areas, the statistics of the nDSMs in stable areas show median values close to the expected value of zero. The smallest standard deviation based on the median of absolute differences (σMAD) was 0.80 m for the forward-backward image pair in Ticino and 0.29 m in Ljubljana for the stereo images with the smallest absolute across-track angle (−5.3°). The differences between the highest accuracy Pléiades CHMs and their reference CHMs show a median of 0.02 m in Ticino with a σMAD of 1.90 m and in Ljubljana a median of 0.32 m with a σMAD of 3.79 m. The discrepancies between these results are most likely attributed to differences in forest structure, particularly tree height, density, and forest gaps. Furthermore, it should be taken into account that temporal vegetational changes between the Pléiades and reference data acquisitions introduce additional, spurious CHM differences. Overall, for narrow forward–backward angle of convergence (12°) and based on the used software and workflow to generate the nDSMs from Pléiades images, the results show that the differences between tri-stereo and stereo matching are rather small in terms of accuracy and completeness of the CHM/nDSMs. Therefore, a small angle of convergence does not constitute a major limiting factor. More relevant is the impact of a large across-track angle (19°), which considerably reduces the quality of Pléiades CHMs/nDSMs. Full article
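
The accuracy figures above rest on two simple computations: the nDSM, obtained as DSM minus DTM, and the robust dispersion measure σMAD = 1.4826 · median(|x − median(x)|). A small NumPy sketch of both follows, with synthetic rasters standing in for the Pléiades DSM and the reference DTM.

```python
import numpy as np

def ndsm(dsm, dtm):
    """Normalised DSM: height above ground, i.e. the canopy height over forest."""
    return dsm - dtm

def sigma_mad(residuals):
    """Robust standard-deviation estimate based on the median of absolute differences."""
    med = np.median(residuals)
    return 1.4826 * np.median(np.abs(residuals - med))

rng = np.random.default_rng(42)
dtm = rng.normal(800.0, 5.0, size=(200, 200))            # terrain heights (m)
dsm = dtm + rng.normal(0.0, 0.8, size=dtm.shape)         # matching errors over open, stable ground

heights = ndsm(dsm, dtm)
print(round(float(np.median(heights)), 2), round(float(sigma_mad(heights)), 2))
# median ~0 m and sigma_MAD ~0.8 m, analogous to the stable-area statistics reported above
```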

22 pages, 5560 KiB  
Article
Building Detection from VHR Remote Sensing Imagery Based on the Morphological Building Index
by Yongfa You, Siyuan Wang, Yuanxu Ma, Guangsheng Chen, Bin Wang, Ming Shen and Weihua Liu
Remote Sens. 2018, 10(8), 1287; https://doi.org/10.3390/rs10081287 - 15 Aug 2018
Cited by 34 | Viewed by 7374
Abstract
Automatic detection of buildings from very high resolution (VHR) satellite images is a current research hotspot in remote sensing and computer vision. However, many irrelevant objects with similar spectral characteristics to buildings will cause a large amount of interference to the detection of buildings, thus making the accurate detection of buildings still a challenging task, especially for images captured in complex environments. Therefore, it is crucial to develop a method that can effectively eliminate these interferences and accurately detect buildings from complex image scenes. To this end, a new building detection method based on the morphological building index (MBI) is proposed in this study. First, the local feature points are detected from the VHR remote sensing imagery and they are optimized by the saliency index proposed in this study. Second, a voting matrix is calculated based on these optimized local feature points to extract built-up areas. Finally, buildings are detected from the extracted built-up areas using the MBI algorithm. Experiments confirm that our proposed method can effectively and accurately detect buildings in VHR remote sensing images captured in complex environments. Full article
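
The morphological building index is derived from white top-hat profiles, which highlight bright, locally contrasted structures such as rooftops. The published MBI uses multi-directional linear structuring elements; the sketch below simplifies these to multi-scale disks, so it is only a rough stand-in for the index used in the paper.

```python
import numpy as np
from skimage import data, img_as_float
from skimage.morphology import disk, white_tophat

pan = img_as_float(data.camera())        # stand-in for a panchromatic VHR band

# Differential morphological profile: white top-hat responses at increasing scales.
scales = [2, 4, 8, 16]
tophats = [white_tophat(pan, disk(r)) for r in scales]
profile = np.diff(np.stack(tophats, axis=0), axis=0)

# Simplified building index: mean positive differential response across scales.
mbi_like = np.clip(profile, 0, None).mean(axis=0)
buildings = mbi_like > mbi_like.mean() + 2 * mbi_like.std()   # crude threshold for illustration
print(buildings.mean())   # fraction of pixels flagged as building-like
```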

21 pages, 12078 KiB  
Article
Seabed Mapping in Coastal Shallow Waters Using High Resolution Multispectral and Hyperspectral Imagery
by Javier Marcello, Francisco Eugenio, Javier Martín and Ferran Marqués
Remote Sens. 2018, 10(8), 1208; https://doi.org/10.3390/rs10081208 - 02 Aug 2018
Cited by 38 | Viewed by 5786
Abstract
Coastal ecosystems experience multiple anthropogenic and climate change pressures. To monitor the variability of the benthic habitats in shallow waters, the implementation of effective strategies is required to support coastal planning. In this context, high-resolution remote sensing data can be of fundamental importance to generate precise seabed maps in coastal shallow water areas. In this work, satellite and airborne multispectral and hyperspectral imagery were used to map benthic habitats in a complex ecosystem. In it, submerged green aquatic vegetation meadows have low density, are located at depths up to 20 m, and the sea surface is regularly affected by persistent local winds. A robust mapping methodology has been identified after a comprehensive analysis of different corrections, feature extraction, and classification approaches. In particular, atmospheric, sunglint, and water column corrections were tested. In addition, to increase the mapping accuracy, we assessed the use of derived information from rotation transforms, texture parameters, and abundance maps produced by linear unmixing algorithms. Finally, maximum likelihood (ML), spectral angle mapper (SAM), and support vector machine (SVM) classification algorithms were considered at the pixel and object levels. In summary, a complete processing methodology was implemented, and results demonstrate the better performance of SVM but the higher robustness of ML to the nature of information and the number of bands considered. Hyperspectral data increases the overall accuracy with respect to the multispectral bands (4.7% for ML and 9.5% for SVM) but the inclusion of additional features, in general, did not significantly improve the seabed map quality. Full article
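
Among the classifiers compared above, the spectral angle mapper assigns each pixel to the endmember whose spectrum forms the smallest angle with the pixel spectrum. A minimal NumPy sketch with made-up four-band endmembers (all band values are purely illustrative):

```python
import numpy as np

def spectral_angle(pixels, endmembers):
    """Spectral Angle Mapper: angle (radians) between each pixel and each endmember.

    pixels: (N, B) array of spectra, endmembers: (K, B) array."""
    p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    e = endmembers / np.linalg.norm(endmembers, axis=1, keepdims=True)
    cosines = np.clip(p @ e.T, -1.0, 1.0)
    return np.arccos(cosines)                     # (N, K)

# Hypothetical 4-band (B, G, R, NIR) bottom reflectance endmembers.
endmembers = np.array([[0.18, 0.22, 0.20, 0.05],   # bright sand
                       [0.04, 0.09, 0.05, 0.02]])  # seagrass / dense vegetation
pixels = np.array([[0.16, 0.20, 0.19, 0.04],
                   [0.05, 0.10, 0.06, 0.02]])

angles = spectral_angle(pixels, endmembers)
labels = angles.argmin(axis=1)                    # 0 = sand, 1 = seagrass
print(np.degrees(angles).round(1), labels)
```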

28 pages, 11446 KiB  
Article
Applying High-Resolution Imagery to Evaluate Restoration-Induced Changes in Stream Condition, Missouri River Headwaters Basin, Montana
by Melanie K. Vanderhoof and Clifton Burt
Remote Sens. 2018, 10(6), 913; https://doi.org/10.3390/rs10060913 - 09 Jun 2018
Cited by 13 | Viewed by 6095
Abstract
Degradation of streams and associated riparian habitat across the Missouri River Headwaters Basin has motivated several stream restoration projects across the watershed. Many of these projects install a series of beaver dam analogues (BDAs) to aggrade incised streams, elevate local water tables, and create natural surface water storage by reconnecting streams with their floodplains. Satellite imagery can provide a spatially continuous mechanism to monitor the effects of these in-stream structures on stream surface area. However, remote sensing-based approaches to map narrow (e.g., <5 m wide) linear features such as streams have been under-developed relative to efforts to map other types of aquatic systems, such as wetlands or lakes. We mapped pre- and post-restoration (one to three years post-restoration) stream surface area and riparian greenness at four stream restoration sites using Worldview-2 and 3 images as well as a QuickBird-2 image. We found that panchromatic brightness and eCognition-based outputs (0.5 m resolution) provided high-accuracy maps of stream surface area (overall accuracy ranged from 91% to 99%) for streams as narrow as 1.5 m wide. Using image pairs, we were able to document increases in stream surface area immediately upstream of BDAs as well as increases in stream surface area along the restoration reach at Robb Creek, Alkali Creek and Long Creek (South). Although Long Creek (North) did not show a net increase in stream surface area along the restoration reach, we did observe an increase in riparian greenness, suggesting increased water retention adjacent to the stream. As high-resolution imagery becomes more widely collected and available, improvements in our ability to provide spatially continuous monitoring of stream systems can effectively complement more traditional field-based and gage-based datasets to inform watershed management. Full article
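
The monitoring variable here is the mapped stream surface area before and after restoration. Below is a tiny NumPy sketch of converting binary water masks at a known pixel size into areas and a relative change; the masks and pixel size are synthetic placeholders, not the classified WorldView/QuickBird outputs.

```python
import numpy as np

def surface_area_m2(water_mask, pixel_size_m=0.5):
    """Surface area of a binary water mask (True = water) in square metres."""
    return water_mask.sum() * pixel_size_m ** 2

rng = np.random.default_rng(7)
pre  = rng.random((1000, 1000)) < 0.010     # pre-restoration water mask (synthetic)
post = rng.random((1000, 1000)) < 0.013     # post-restoration: slightly wider wetted channel

a_pre, a_post = surface_area_m2(pre), surface_area_m2(post)
print(f"pre: {a_pre:.0f} m2, post: {a_post:.0f} m2, change: {100 * (a_post - a_pre) / a_pre:+.1f}%")
```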

22 pages, 8622 KiB  
Article
Method Based on Edge Constraint and Fast Marching for Road Centerline Extraction from Very High-Resolution Remote Sensing Images
by Lipeng Gao, Wenzhong Shi, Zelang Miao and Zhiyong Lv
Remote Sens. 2018, 10(6), 900; https://doi.org/10.3390/rs10060900 - 07 Jun 2018
Cited by 28 | Viewed by 4541
Abstract
In recent decades, road extraction from very high-resolution (VHR) remote sensing images has become popular and has attracted extensive research efforts. However, the very high spatial resolution, complex urban structure, and contextual background effect of road images complicate the process of road extraction. For example, shadows, vehicles, or other objects may occlude a road located in a developed urban area. To address the problem of occlusion, this study proposes a semiautomatic approach for road extraction from VHR remote sensing images. First, guided image filtering is employed to reduce the negative effects of nonroad pixels while preserving edge smoothness. Then, an edge-constraint-based weighted fusion model is adopted to trace and refine the road centerline. An edge-constraint fast marching method, which sequentially links discrete seed points, is presented to maintain road-point connectivity. Six experiments with eight VHR remote sensing images (spatial resolution of 0.3 m/pixel to 2 m/pixel) are conducted to evaluate the efficiency and robustness of the proposed approach. Compared with state-of-the-art methods, the proposed approach presents superior extraction quality, time consumption, and seed-point requirements. Full article
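
The centerline is obtained by linking seed points along minimum-cost paths constrained to road-like pixels via fast marching. The sketch below uses skimage's minimum-cost-path routine as a generic stand-in for the edge-constrained fast marching step; the cost image and seed points are synthetic.

```python
import numpy as np
from skimage.graph import route_through_array

# Synthetic cost image: low cost along a sinuous "road" corridor, high cost elsewhere.
cost = np.full((100, 100), 100.0)
rows = (50 + 10 * np.sin(np.linspace(0, 2 * np.pi, 100))).astype(int)
cost[rows, np.arange(100)] = 1.0
cost[np.clip(rows - 1, 0, 99), np.arange(100)] = 1.0
cost[np.clip(rows + 1, 0, 99), np.arange(100)] = 1.0

seed_a, seed_b = (rows[0], 0), (rows[-1], 99)      # two road seed points to be linked
path, total_cost = route_through_array(cost, seed_a, seed_b,
                                       fully_connected=True, geometric=True)
centerline = np.array(path)                        # (row, col) chain approximating the centerline
print(len(centerline), round(total_cost, 1))
```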

Other


9 pages, 6113 KiB  
Letter
Use of WorldView-2 Along-Track Stereo Imagery to Probe a Baltic Sea Algal Spiral
by George Marmorino and Wei Chen
Remote Sens. 2019, 11(7), 865; https://doi.org/10.3390/rs11070865 - 10 Apr 2019
Cited by 7 | Viewed by 3152
Abstract
The general topic here is the application of very high-resolution satellite imagery to the study of ocean phenomena having horizontal spatial scales of the order of 1 kilometer, which is the realm of the ocean submesoscale. The focus of the present study is the use of WorldView-2 along-track stereo imagery to probe a submesoscale feature in the Baltic Sea that consists of an apparent inward spiraling of surface aggregations of algae. In this case, a single pair of images is analyzed using an optical-flow velocity algorithm. Because such image data generally have a much lower dynamic range than in land applications, the impact of residual instrument noise (e.g., data striping) is more severe and requires attention; we use a simple scheme to reduce the impact of such noise. The results show that the spiral feature has at its core a cyclonic vortex, about 1 km in radius and having a vertical vorticity of about three times the Coriolis frequency. Analysis also reveals that an individual algal aggregation corresponds to a velocity front having both horizontal shear and convergence, while wind-accelerated clumps of surface algae can introduce fine-scale signatures into the velocity field. Overall, the analysis supports the interpretation of algal spirals as evidence of a submesoscale eddy and of algal aggregations as indicating areas of surface convergence. Full article
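
The surface velocity field is estimated by optical flow between the two along-track acquisitions, and the vertical vorticity ζ = ∂v/∂x − ∂u/∂y is then compared with the Coriolis frequency f = 2Ω sin(latitude). The sketch below uses OpenCV's Farnebäck optical flow on synthetic frames as a generic stand-in for the paper's optical-flow algorithm; the pixel size, stereo time offset and latitude are assumed values.

```python
import numpy as np
import cv2

# Two synthetic 8-bit frames with a small rigid shift standing in for the along-track pair.
rng = np.random.default_rng(3)
frame1 = (rng.random((256, 256)) * 255).astype(np.uint8)
frame2 = np.roll(frame1, shift=(0, 2), axis=(0, 1))      # 2-pixel eastward displacement

# Farneback parameters: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
flow = cv2.calcOpticalFlowFarneback(frame1, frame2, None, 0.5, 3, 15, 3, 5, 1.2, 0)

pixel_size = 2.0        # m, roughly the WorldView-2 multispectral GSD
dt = 10.0               # s, hypothetical along-track stereo time offset
u = flow[..., 0] * pixel_size / dt                       # eastward velocity (m/s)
v = flow[..., 1] * pixel_size / dt                       # northward velocity (m/s)

dv_dx = np.gradient(v, pixel_size, axis=1)
du_dy = np.gradient(u, pixel_size, axis=0)
zeta = dv_dx - du_dy                                     # vertical vorticity (1/s)

f = 2 * 7.2921e-5 * np.sin(np.deg2rad(57.0))             # Coriolis frequency near the Baltic Sea
print(float(np.median(u)), float(np.median(zeta / f)))
```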

13 pages, 1659 KiB  
Letter
Deformable Faster R-CNN with Aggregating Multi-Layer Features for Partially Occluded Object Detection in Optical Remote Sensing Images
by Yun Ren, Changren Zhu and Shunping Xiao
Remote Sens. 2018, 10(9), 1470; https://doi.org/10.3390/rs10091470 - 14 Sep 2018
Cited by 88 | Viewed by 8121
Abstract
The region-based convolutional networks have shown their remarkable ability for object detection in optical remote sensing images. However, the standard CNNs are inherently limited to model geometric transformations due to the fixed geometric structures in its building modules. To address this, we introduce a new module named deformable convolution that is integrated into the prevailing Faster R-CNN. By adding 2D offsets to the regular sampling grid in the standard convolution, it learns the augmenting spatial sampling locations in the modules from target tasks without additional supervision. In our work, a deformable Faster R-CNN is constructed by substituting the standard convolution layer with a deformable convolution layer in the last network stage. Besides, top-down and skip connections are adopted to produce a single high-level feature map of a fine resolution, on which the predictions are to be made. To make the model robust to occlusion, a simple yet effective data augmentation technique is proposed for training the convolutional neural network. Experimental results show that our deformable Faster R-CNN improves the mean average precision by a large margin on the SORSI and HRRS dataset. Full article
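
Deformable convolution augments the fixed sampling grid of a standard convolution with 2-D offsets predicted from the input feature map. The minimal PyTorch sketch below pairs an offset-predicting convolution with torchvision's DeformConv2d; it shows a single layer only, not the full detector or the training procedure described above.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableBlock(nn.Module):
    """One deformable 3x3 convolution: a plain conv predicts the sampling offsets."""
    def __init__(self, in_channels, out_channels, kernel_size=3, padding=1):
        super().__init__()
        # 2 offsets (dy, dx) per kernel position.
        self.offset_conv = nn.Conv2d(in_channels, 2 * kernel_size * kernel_size,
                                     kernel_size, padding=padding)
        self.deform_conv = DeformConv2d(in_channels, out_channels,
                                        kernel_size, padding=padding)

    def forward(self, x):
        offsets = self.offset_conv(x)          # learned per-location sampling offsets
        return self.deform_conv(x, offsets)

feat = torch.randn(1, 256, 32, 32)             # feature map from a backbone stage
out = DeformableBlock(256, 256)(feat)
print(out.shape)                               # torch.Size([1, 256, 32, 32])
```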
