
Satellite Image Processing and Applications

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (30 April 2021) | Viewed by 61254

Special Issue Editors


Guest Editor
Department of Forest Sciences, University of Helsinki, 00014 Helsinki, Finland
Interests: remote sensing; machine learning; deep learning; object-based image analysis; urban mapping; vegetation species detection

Guest Editor
1. Earth Change Observation Laboratory, Department of Geosciences and Geography, University of Helsinki, 00100 Helsinki, Finland
2. Taita Research Station in Kenya, University of Helsinki, 00014 Helsinki, Finland
Interests: remote sensing; mapping; climate change; sustainability and development; land use/land cover change detection

Guest Editor
Civil Engineering Department, Faculty of Engineering, University Putra Malaysia, Serdang 43400, Malaysia
Interests: remote sensing; environmental monitoring; machine learning; geoinformation; satellite image analysis; land use planning; digital mapping; urban development

Special Issue Information

Dear Colleagues,

Satellite remote sensing data from a wide range of sensors have come into rapid use and play an important role in monitoring Earth-surface materials. Most optical satellite sensors provide multispectral bands together with a finer-spatial-resolution panchromatic band. Landsat-8 and Sentinel-2A/B are among the freely available satellite data sources. The Landsat-8 Operational Land Imager (OLI), launched in 2013, improved upon the Landsat-7 Enhanced Thematic Mapper Plus (ETM+) in calibration, signal-to-noise ratio, radiometric resolution, and spectral wavebands. The European Space Agency (ESA) launched the Sentinel-2A and Sentinel-2B satellites in 2015 and 2017, respectively, providing multispectral imagery in 13 spectral bands at spatial resolutions from 10 to 60 m. Commercially available very-high-resolution (VHR) sensors such as IKONOS, QuickBird, GeoEye-1, and WorldView-2/3, among many others, have contributed to finer, more detailed characterization of Earth-surface features. Moreover, SAR imagery is available from several missions, including COSMO-SkyMed, Sentinel-1, and TerraSAR-X.

As a consequence, advances in sensor technology and image processing algorithms make it possible to develop novel methodologies and to improve upon traditional processing methods in terms of cost, quantitative and qualitative accuracy, and objectivity. Satellite image processing spans a wide spectrum of applications, including image classification, multi-temporal image classification, multi-sensor data fusion, characterization of Earth ecosystem processes, and environmental monitoring.

The goal of this special issue is to collect the latest developments, methodologies, and applications of satellite image data for remote sensing. We welcome submissions that provide the community with the most recent advancements in all aspects of satellite remote sensing processing and applications, including but not limited to:

  • Data fusion and integration of multi-sensor data
  • Image segmentation and classification algorithms
  • Feature selection algorithms
  • Machine learning techniques
  • Geographic Object-Based Image Analysis
  • Deep learning
  • Change detection and multi-temporal analysis
  • Urban mapping
  • Vegetation and species detection within complex environments
  • Impervious surface detection
  • Natural hazard assessment
Dr. Alireza Hamedianfar
Prof. Petri Pellikka
Assoc. Prof. Dr. Helmi Shafri
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing applications
  • machine learning
  • image classification
  • optimization
  • image segmentation
  • neural networks
  • feature selection
  • computer vision

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (12 papers)


Research


27 pages, 12934 KiB  
Article
Spectral Temporal Information for Missing Data Reconstruction (STIMDR) of Landsat Reflectance Time Series
by Zhipeng Tang, Giuseppe Amatulli, Petri K. E. Pellikka and Janne Heiskanen
Remote Sens. 2022, 14(1), 172; https://doi.org/10.3390/rs14010172 - 31 Dec 2021
Cited by 11 | Viewed by 4330
Abstract
The number of Landsat time-series applications has grown substantially because of its approximately 50-year history and relatively high spatial resolution for observing long term changes in the Earth’s surface. However, missing observations (i.e., gaps) caused by clouds and cloud shadows, orbit and sensing geometry, and sensor issues have broadly limited the development of Landsat time-series applications. Due to the large area and temporal and spatial irregularity of time-series gaps, it is difficult to find an efficient and highly precise method to fill them. The Missing Observation Prediction based on Spectral-Temporal Metrics (MOPSTM) method has been proposed and delivered good performance in filling large-area gaps of single-date Landsat images. However, it can be less practical for a time series longer than one year due to the lack of mechanics that exclude dissimilar data in time series (e.g., different phenology or changes in land cover). To solve this problem, this study proposes a new gap-filling method, Spectral Temporal Information for Missing Data Reconstruction (STIMDR), and examines its performance in Landsat reflectance time series. Two groups of experiments, including 2000 × 2000 pixel Landsat single-date images and Landsat time series acquired from four sites (Kenya, Finland, Germany, and China), were performed to test the new method. We simulated artificial gaps to evaluate predicted pixel values with real observations. Quantitative and qualitative evaluations of gap-filled images through comparisons with other state-of-the-art methods confirmed the more robust and accurate performance of the proposed method. In addition, the proposed method was also able to fill gaps contaminated by extreme cloud cover for a period (e.g., winter in high-latitude areas). 
A downstream task of random forest supervised classification, run on both gap-filled simulated datasets and the original valid datasets, verified that STIMDR-generated products are relevant to the user community for land cover applications. Full article
(This article belongs to the Special Issue Satellite Image Processing and Applications)
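The gap-simulation protocol described in the abstract above (masking valid pixels, predicting them, then scoring predictions against the held-out truth) can be sketched as follows. This is an illustrative evaluation harness only: the per-pixel temporal mean used as the predictor is a simple stand-in, not STIMDR itself, and the array shapes are assumed.

```python
import numpy as np

def simulate_gaps(image, gap_fraction=0.3, seed=0):
    """Mask a random fraction of valid pixels to create artificial gaps."""
    rng = np.random.default_rng(seed)
    mask = rng.random(image.shape) < gap_fraction
    gapped = image.copy()
    gapped[mask] = np.nan
    return gapped, mask

def fill_from_time_series(gapped, series):
    """Fill NaN pixels with the per-pixel mean of a reflectance time series
    (a stand-in for the actual STIMDR predictor)."""
    filled = gapped.copy()
    missing = np.isnan(filled)
    temporal_mean = np.nanmean(series, axis=0)
    filled[missing] = temporal_mean[missing]
    return filled

def rmse_on_gaps(truth, filled, mask):
    """Evaluate predictions only on the artificially removed pixels."""
    return float(np.sqrt(np.mean((truth[mask] - filled[mask]) ** 2)))
```

With a clean reference image, artificial gaps are cut, filled, and scored; real studies such as the one above additionally compare against other state-of-the-art gap-filling methods.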

27 pages, 11995 KiB  
Article
Super-Resolution Restoration of Spaceborne Ultra-High-Resolution Images Using the UCL OpTiGAN System
by Yu Tao and Jan-Peter Muller
Remote Sens. 2021, 13(12), 2269; https://doi.org/10.3390/rs13122269 - 10 Jun 2021
Cited by 10 | Viewed by 4693
Abstract
We introduce a robust and light-weight multi-image super-resolution restoration (SRR) method and processing system, called OpTiGAN, using a combination of a multi-image maximum a posteriori approach and a deep learning approach. We show the advantages of using a combined two-stage SRR processing scheme for significantly reducing inference artefacts and improving effective resolution in comparison to other SRR techniques. We demonstrate the optimality of OpTiGAN for SRR of ultra-high-resolution satellite images and video frames from 31 cm/pixel WorldView-3, 75 cm/pixel Deimos-2 and 70 cm/pixel SkySat. Detailed qualitative and quantitative assessments are provided for the SRR results on a CEOS-WGCV-IVOS geo-calibration and validation site at Baotou, China, which features artificial permanent optical targets. Our measurements have shown a 3.69 times enhancement of effective resolution from 31 cm/pixel WorldView-3 imagery to 9 cm/pixel SRR. Full article

17 pages, 10135 KiB  
Article
MixChannel: Advanced Augmentation for Multispectral Satellite Images
by Svetlana Illarionova, Sergey Nesteruk, Dmitrii Shadrin, Vladimir Ignatiev, Maria Pukalchik and Ivan Oseledets
Remote Sens. 2021, 13(11), 2181; https://doi.org/10.3390/rs13112181 - 3 Jun 2021
Cited by 20 | Viewed by 5536
Abstract
Usage of multispectral satellite imaging data opens vast possibilities for monitoring and quantitatively assessing properties or objects of interest on a global scale. Machine learning and computer vision (CV) approaches show themselves as promising tools for automatizing satellite image analysis. However, there are limitations in using CV for satellite data; the crucial one is the amount of data available for model training. This paper presents a novel image augmentation approach called MixChannel that helps to address this limitation and improve the accuracy of solving segmentation and classification tasks with multispectral satellite images. The core idea is to utilize the fact that there is usually more than one image for each location in remote sensing tasks, and this extra data can be mixed in to achieve more robust performance of the trained models. The proposed approach substitutes some channels of the original training image with channels from other images of the same location to mix in auxiliary data. This augmentation technique preserves the spatial features of the original image and, with some probability, adds natural color variability. We also show an efficient algorithm to tune channel substitution probabilities. We report that the MixChannel image augmentation method provides a noticeable increase in performance of all the considered models in the studied forest type classification problem. Full article
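The core substitution step described in the abstract above (replacing some bands of a training image with the same bands from another acquisition of the same location, each with some probability) can be sketched roughly as follows; the per-band probabilities here are illustrative placeholders, not the tuned values from the paper.

```python
import numpy as np

def mix_channel(image, alternatives, probs, rng=None):
    """MixChannel-style augmentation (sketch).

    image        -- array of shape (bands, H, W), the original training image
    alternatives -- list of co-registered images of the same location,
                    each of shape (bands, H, W)
    probs        -- per-band probability of substituting that band
    """
    rng = rng or np.random.default_rng()
    augmented = image.copy()
    for band, p in enumerate(probs):
        if rng.random() < p:
            donor = alternatives[rng.integers(len(alternatives))]
            # Swap in the same spectral band from another date: spatial
            # features are preserved, natural colour variability is added.
            augmented[band] = donor[band]
    return augmented
```

Because the donor image shows the same location, the spatial structure the segmentation model relies on is untouched; only the spectral appearance varies between epochs.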

16 pages, 6106 KiB  
Article
Assessing Wildfire Burn Severity and Its Relationship with Environmental Factors: A Case Study in Interior Alaska Boreal Forest
by Christopher W Smith, Santosh K Panda, Uma S Bhatt, Franz J Meyer, Anushree Badola and Jennifer L Hrobak
Remote Sens. 2021, 13(10), 1966; https://doi.org/10.3390/rs13101966 - 18 May 2021
Cited by 11 | Viewed by 5700
Abstract
In recent years, there have been rapid improvements in both remote sensing methods and satellite image availability that have the potential to massively improve burn severity assessments of the Alaskan boreal forest. In this study, we utilized recent pre- and post-fire Sentinel-2 satellite imagery of the 2019 Nugget Creek and Shovel Creek burn scars located in Interior Alaska to both assess burn severity across the burn scars and test the effectiveness of several remote sensing methods for generating accurate map products: Normalized Difference Vegetation Index (NDVI), Normalized Burn Ratio (NBR), and Random Forest (RF) and Support Vector Machine (SVM) supervised classification. We used 52 Composite Burn Index (CBI) plots from the Shovel Creek burn scar and 28 from the Nugget Creek burn scar for training classifiers and product validation. For the Shovel Creek burn scar, the RF and SVM machine learning (ML) classification methods outperformed the traditional spectral indices that use linear regression to separate burn severity classes (RF and SVM accuracy, 83.33%, versus NBR accuracy, 73.08%). However, for the Nugget Creek burn scar, the NDVI product (accuracy: 96%) outperformed the other indices and ML classifiers. In this study, we demonstrated that when sufficient ground truth data are available, ML classifiers can be very effective for reliable mapping of burn severity in the Alaskan boreal forest. Since the performance of ML classifiers is dependent on the quantity of ground truth data, the ML classification methods are better suited for assessing burn severity when sufficient ground truth data are available, whereas with limited ground truth data the traditional spectral indices are better suited. We also looked at the relationship between burn severity, fuel type, and topography (aspect and slope) and found that the relationship is site-dependent. Full article
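The spectral indices compared in the abstract above are simple band ratios. A sketch of NBR and the differenced NBR (dNBR) commonly used for burn severity follows; mapping these to specific Sentinel-2 bands (e.g. B8A for NIR, B12 for SWIR) is an assumption about the setup, not a detail taken from the paper.

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR)."""
    nir = np.asarray(nir, dtype=float)
    swir = np.asarray(swir, dtype=float)
    return (nir - swir) / (nir + swir + 1e-10)  # epsilon avoids 0/0

def dnbr(nir_pre, swir_pre, nir_post, swir_post):
    """Differenced NBR: higher values indicate more severe burning."""
    return nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)
```

Severity classes are then obtained by thresholding or regressing dNBR against field measurements such as the CBI plots used above.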

37 pages, 3606 KiB  
Article
Probabilistic Mapping and Spatial Pattern Analysis of Grazing Lawns in Southern African Savannahs Using WorldView-3 Imagery and Machine Learning Techniques
by Kwame T. Awuah, Paul Aplin, Christopher G. Marston, Ian Powell and Izak P. J. Smit
Remote Sens. 2020, 12(20), 3357; https://doi.org/10.3390/rs12203357 - 15 Oct 2020
Cited by 9 | Viewed by 4340
Abstract
Savannah grazing lawns are a key food resource for large herbivores such as blue wildebeest (Connochaetes taurinus), hippopotamus (Hippopotamus amphibius) and white rhino (Ceratotherium simum), and impact herbivore densities, movement and recruitment rates. They also exert a strong influence on fire behaviour including frequency, intensity and spread. Thus, variation in grazing lawn cover can have a profound impact on broader savannah ecosystem dynamics. However, knowledge of their present cover and distribution is limited. Importantly, we lack a robust, broad-scale approach for detecting and monitoring grazing lawns, which is critical to enhancing understanding of the ecology of these vital grassland systems. We selected two sites in the Lower Sabie and Satara regions of Kruger National Park, South Africa with mesic and semiarid conditions, respectively. Using spectral and texture features derived from WorldView-3 imagery, we (i) parameterised and assessed the quality of Random Forest (RF), Support Vector Machines (SVM), Classification and Regression Trees (CART) and Multilayer Perceptron (MLP) models for general discrimination of plant functional types (PFTs) within a sub-area of the Lower Sabie landscape, and (ii) compared model performance for probabilistic mapping of grazing lawns in the broader Lower Sabie and Satara landscapes. Further, we used spatial metrics to analyse spatial patterns in grazing lawn distribution in both landscapes along a gradient of distance from waterbodies. All machine learning models achieved high F-scores (F1) and overall accuracy (OA) scores in general savannah PFTs classification, with RF (F1 = 95.73±0.004%, OA = 94.16±0.004%), SVM (F1 = 95.64±0.002%, OA = 94.02±0.002%) and MLP (F1 = 95.71±0.003%, OA = 94.27±0.003%) forming a cluster of the better performing models and marginally outperforming CART (F1 = 92.74±0.006%, OA = 90.93±0.003%). 
Grazing lawn detection accuracy followed a similar trend within the Lower Sabie landscape, with RF, SVM, MLP and CART achieving F-scores of 0.89, 0.93, 0.94 and 0.81, respectively. Transferring the models to the Satara landscape, however, resulted in somewhat lower, though still high, grazing lawn detection accuracies across models (RF = 0.87, SVM = 0.88, MLP = 0.85 and CART = 0.75). Results from spatial pattern analysis revealed a relatively higher proportion of grazing lawn cover under semiarid savannah conditions (Satara) compared to the mesic savannah landscape (Lower Sabie). Additionally, the results show a strong negative correlation between grazing lawn spatial structure (fractional cover, patch size and connectivity) and distance from waterbodies, with larger and more contiguous grazing lawn patches occurring in close proximity to waterbodies in both landscapes. The proposed machine learning approach provides a novel and robust workflow for accurate and consistent landscape-scale monitoring of grazing lawns, while our findings and research outputs provide timely information critical for understanding habitat heterogeneity in southern African savannahs. Full article

21 pages, 6120 KiB  
Article
Cloud Detection of SuperView-1 Remote Sensing Images Based on Genetic Reinforcement Learning
by Xiaolong Li, Hong Zheng, Chuanzhao Han, Haibo Wang, Kaihan Dong, Ying Jing and Wentao Zheng
Remote Sens. 2020, 12(19), 3190; https://doi.org/10.3390/rs12193190 - 29 Sep 2020
Cited by 7 | Viewed by 3113
Abstract
Cloud pixels have massively reduced the utilization of optical remote sensing images, highlighting the importance of cloud detection. According to the current remote sensing literature, methods such as the threshold method, statistical methods and deep learning (DL) have been applied in cloud detection tasks. As some cloud areas are translucent, areas blurred by these clouds still retain some ground feature information, which blurs the spectral or spatial characteristics of these areas and makes it difficult for existing methods to detect cloud areas accurately. To solve this problem, this study presents a cloud detection method based on genetic reinforcement learning. Firstly, the factors that directly affect the classification of pixels in remote sensing images are analyzed, and the concept of the pixel environmental state (PES) is proposed. Then, PES information and the algorithm's marking action are integrated into the "PES-action" data set. Subsequently, a "reward-penalty" rule is introduced and the "PES-action" strategy with the highest cumulative return is learned by a genetic algorithm (GA). By virtue of the strong adaptability of reinforcement learning (RL) to the environment and the global optimization ability of the GA, cloud regions are detected accurately through the learned "PES-action" strategy. In the experiment, multi-spectral remote sensing images from SuperView-1 were collected to build the data set, on which clouds were then accurately detected. The overall accuracy (OA) of the proposed method on the test set reached 97.15%, and satisfactory cloud masks were obtained. Compared with the best disclosed DL method and the random forest (RF) method, the proposed method is superior in precision, recall, false positive rate (FPR) and OA for the detection of clouds. This study aims to improve the detection of cloud regions, providing a reference for researchers interested in cloud detection of remote sensing images.
Full article

19 pages, 2774 KiB  
Article
GF-1 Satellite Observations of Suspended Sediment Injection of Yellow River Estuary, China
by Ru Yao, LiNa Cai, JianQiang Liu and MinRui Zhou
Remote Sens. 2020, 12(19), 3126; https://doi.org/10.3390/rs12193126 - 23 Sep 2020
Cited by 21 | Viewed by 3736
Abstract
We analyzed the distribution of suspended sediment concentration (SSC) in the Yellow River Estuary based on data from GaoFen-1 (GF-1), a high-resolution satellite carrying a wide field-of-view (WFV) sensor and a panchromatic and multispectral (PMS) sensor. A new SSC retrieval model for the wide field-of-view sensor (M-WFV) was established based on the relationship between in-situ SSC and the reflectance in blue and near-infrared bands. SSC obtained from 16 WFV1 images was analyzed in the Yellow River Estuary. The results show that (1) SSC in the study area is mainly 100–3500 mg/L, with the highest value being around 4500 mg/L. (2) The details of the suspended sediment injection phenomenon were observed in the Yellow River Estuary. The SSC distribution in the coastal water takes two forms: either the high-SSC water is distributed evenly near the coast with a similar SSC gradient along it, or the high-SSC water concentrates over a significantly larger area on the right side of the estuary (Laizhou Bay). Usually, there is a clear-water notch at the left side of the estuary. (3) Currents clearly influenced the SSC distribution in the Yellow River Estuary. The SSC gradient in the estuary was high against the local current direction and, on the contrary, small along it. Coastal erosion and resuspension of bottom sediments, together with currents, are the major factors influencing the SSC distribution in nearshore water in the Yellow River Estuary. Full article

22 pages, 26805 KiB  
Article
Reconstruction of Cloud-free Sentinel-2 Image Time-series Using an Extended Spatiotemporal Image Fusion Approach
by Fuqun Zhou, Detang Zhong and Rihana Peiman
Remote Sens. 2020, 12(16), 2595; https://doi.org/10.3390/rs12162595 - 12 Aug 2020
Cited by 34 | Viewed by 4642
Abstract
Time-series of medium spatial resolution satellite imagery are a valuable resource for environmental assessment and monitoring at regional and local scales. Sentinel-2 satellites from the European Space Agency (ESA) feature a multispectral instrument (MSI) with 13 spectral bands and spatial resolutions from 10 m to 60 m, offering a revisit interval of 5 days at the equator that shortens toward the poles. Since their launch, Sentinel-2 MSI image time-series have been used widely in various environmental studies. However, the value of Sentinel-2 image time-series has not been fully realized, and their usage is impeded by cloud contamination, especially in cloudy regions. To increase cloud-free image availability and usage of the time-series, this study attempted to reconstruct a cloud-free Sentinel-2 image time-series using an extended spatiotemporal image fusion approach. First, a spatiotemporal image fusion model was applied to predict synthetic Sentinel-2 images when clear-sky images were not available. Second, the cloud and cloud shadow pixels of the contaminated images were identified based on analysis of the differences between the synthetic and observed image pairs. Third, the cloud and cloud shadow pixels were replaced by the corresponding pixels of the synthetic image. Lastly, the pixels from the synthetic image were radiometrically calibrated to the observed image via a normalization process. With these processes, we can reconstruct a full-length cloud-free Sentinel-2 MSI image time-series that maximizes the value of the observations by keeping observed cloud-free pixels and calibrating the synthesized pixels against them. Full article
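The four reconstruction steps listed in the abstract above (predict a synthetic image, flag cloud and shadow pixels from the synthetic-observation difference, substitute them, then normalize the substituted values against clear pixels) can be sketched for a single band as follows; the fixed difference threshold and the linear calibration are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def reconstruct_cloud_free(observed, synthetic, diff_threshold=0.1):
    """Replace cloud-contaminated pixels of `observed` with calibrated
    values from `synthetic` (single band; arrays of the same shape)."""
    # Step 2: flag pixels where observation and prediction disagree strongly
    # (clouds brighten, shadows darken the observation).
    cloudy = np.abs(observed - synthetic) > diff_threshold

    # Step 4 (fitted first): linear calibration of the synthetic image
    # against the observed clear-sky pixels.
    clear = ~cloudy
    slope, intercept = np.polyfit(synthetic[clear], observed[clear], 1)
    calibrated = slope * synthetic + intercept

    # Step 3: keep observed clear pixels, substitute calibrated predictions.
    result = observed.copy()
    result[cloudy] = calibrated[cloudy]
    return result
```

Keeping every observed clear pixel and calibrating only the substituted ones is what lets the reconstructed series retain the maximum amount of real observation information.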

24 pages, 75106 KiB  
Article
Detection of Parking Cars in Stereo Satellite Images
by Sebastian Zambanini, Ana-Maria Loghin, Norbert Pfeifer, Elena Màrmol Soley and Robert Sablatnig
Remote Sens. 2020, 12(13), 2170; https://doi.org/10.3390/rs12132170 - 7 Jul 2020
Cited by 19 | Viewed by 8989
Abstract
In this paper, we present a remote sensing approach to localizing parking cars in a city in order to enable the development of parking space availability models. We propose to use high-resolution stereo satellite images for this problem, as they provide enough detail to make individual cars recognizable, and the time interval between the stereo shots allows us to reason about the moving or static condition of a car. Consequently, we describe a complete processing pipeline where raw satellite images are georeferenced, ortho-rectified, equipped with a digital surface model and an inclusion layer generated from OpenStreetMap vector data, and finally analyzed for parking cars by means of an adapted Faster R-CNN oriented bounding box detector. As a test site for the proposed approach, a new publicly available dataset of the city of Barcelona labeled with parking cars is presented. On this dataset, a Faster R-CNN model directly trained on the two ortho-rectified stereo images achieves an average precision of 0.65 for parking car detection. Finally, an extensive empirical and analytical evaluation shows the validity of our idea, as parking space occupancy can be broadly derived in fully visible areas, whereas moving cars are efficiently ruled out. Our evaluation also includes an in-depth analysis of the stereo occlusion problem in view of our application scenario, as well as the suitability of using a reconstructed Digital Surface Model (DSM) as an additional data modality for car detection. While an additional adoption of the DSM in our pipeline does not provide a beneficial cue for the detection task, the stereo images essentially provide two views of the dynamic scene at different timestamps. Therefore, for future studies, we recommend a satellite image acquisition geometry with smaller incidence angles to decrease occlusions by buildings and thus improve the results with respect to completeness. Full article

21 pages, 7713 KiB  
Article
An Effective Cloud Detection Method for Gaofen-5 Images via Deep Learning
by Junchuan Yu, Yichuan Li, Xiangxiang Zheng, Yufeng Zhong and Peng He
Remote Sens. 2020, 12(13), 2106; https://doi.org/10.3390/rs12132106 - 1 Jul 2020
Cited by 28 | Viewed by 4469
Abstract
Recent developments in hyperspectral satellites have dramatically promoted the wide application of large-scale quantitative remote sensing. As an essential part of preprocessing, cloud detection is of great significance for subsequent quantitative analysis. For Gaofen-5 (GF-5) data producers, the daily cloud detection of hundreds of scenes is a challenging task. Traditional cloud detection methods cannot meet the strict demands of large-scale data production, especially for GF-5 satellites, which have massive data volumes. Deep learning technology, however, is able to perform cloud detection efficiently for massive repositories of satellite data and can even dramatically speed up processing by utilizing thumbnails. Inspired by the outstanding learning capability of convolutional neural networks (CNNs) for feature extraction, we propose a new dual-branch CNN architecture for cloud segmentation for GF-5 preview RGB images, termed a multiscale fusion gated network (MFGNet), which introduces pyramid pooling attention and spatial attention to extract both shallow and deep information. In addition, a new gated multilevel feature fusion module is also employed to fuse features at different depths and scales to generate pixelwise cloud segmentation results. The proposed model is extensively trained on hundreds of globally distributed GF-5 satellite images and compared with current mainstream CNN-based detection networks. The experimental results indicate that our proposed method has a higher F1 score (0.94) and fewer parameters (7.83 M) than the compared methods. Full article
(This article belongs to the Special Issue Satellite Image Processing and Applications)
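The gated fusion idea described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' MFGNet implementation; the shapes, the function name `gated_fusion`, and the fixed gate values are hypothetical. A learned sigmoid gate weights shallow against deep features per pixel:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(shallow, deep, gate_logits):
    """Blend shallow and deep feature maps with a per-pixel sigmoid gate.

    shallow, deep: (H, W, C) feature maps at the same spatial resolution.
    gate_logits:   (H, W, 1) pre-activation gate values (learned in practice).
    """
    g = sigmoid(gate_logits)             # gate in (0, 1), broadcast over channels
    return g * shallow + (1.0 - g) * deep

# Toy example: a strongly positive gate keeps mostly the shallow features.
shallow = np.ones((4, 4, 8))
deep = np.zeros((4, 4, 8))
fused = gated_fusion(shallow, deep, np.full((4, 4, 1), 10.0))
```

In a real network the gate logits would be produced by a small convolutional layer, so the blending ratio is learned jointly with the features rather than fixed.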

19 pages, 2453 KiB  
Article
A Classified Adversarial Network for Multi-Spectral Remote Sensing Image Change Detection
by Yue Wu, Zhuangfei Bai, Qiguang Miao, Wenping Ma, Yuelei Yang and Maoguo Gong
Remote Sens. 2020, 12(13), 2098; https://doi.org/10.3390/rs12132098 - 30 Jun 2020
Cited by 23 | Viewed by 3398
Abstract
Adversarial training has demonstrated advanced capabilities for image generation models. In this paper, we propose a deep neural network, named a classified adversarial network (CAN), for multi-spectral image change detection. This network is based on generative adversarial networks (GANs). The generator captures the distribution of the bitemporal multi-spectral image data and transforms it into change detection results, which are fed to the discriminator as the fake data, while the results obtained by pre-classification serve as the real data. Adversarial training thus helps the generator learn the transformation from a bitemporal image pair to a change map. The proposed method is completely unsupervised: the only inputs are the preprocessed data obtained from pre-classification and training-sample selection. Through adversarial training, the generator learns the relationship between the bitemporal multi-spectral image data and the corresponding labels, and the well-trained generator is then applied to the raw bitemporal multi-spectral images to obtain the final change map (CM). The effectiveness and robustness of the proposed method were verified by experimental results on real high-resolution multi-spectral image data sets.
(This article belongs to the Special Issue Satellite Image Processing and Applications)
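The pre-classification step that supplies the discriminator's "real" samples can be sketched in NumPy. This is a simplified stand-in, not the authors' procedure; the function name `preclassify` and the mean-plus-k-sigma threshold are assumptions. It labels pixels whose spectral change magnitude is anomalously large as "changed":

```python
import numpy as np

def preclassify(img_t1, img_t2, k=1.5):
    """Crude pre-classification for a bitemporal image pair: label pixels
    whose per-pixel spectral change magnitude exceeds mean + k*std as
    'changed' (1), all others as 'unchanged' (0).

    img_t1, img_t2: (H, W, B) multi-spectral images of the same scene.
    Returns an (H, W) uint8 pseudo change map.
    """
    diff = np.linalg.norm(img_t2.astype(float) - img_t1.astype(float), axis=-1)
    thresh = diff.mean() + k * diff.std()
    return (diff > thresh).astype(np.uint8)

# Toy example: only a small 2x2 patch changes between the two dates.
t1 = np.zeros((8, 8, 4))
t2 = t1.copy()
t2[2:4, 2:4, :] = 5.0
cm = preclassify(t1, t2)
```

Pseudo-labels like these are noisy; the point of the adversarial setup is that the generator can learn a cleaner mapping than the pre-classification itself provides.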

Other

14 pages, 7636 KiB  
Technical Note
Automatic Generation of Seamless Mosaics Using Invariant Features
by Prajowal Manandhar, Ahmad Jalil, Khaled AlHashmi and Prashanth Marpu
Remote Sens. 2021, 13(16), 3094; https://doi.org/10.3390/rs13163094 - 5 Aug 2021
Cited by 10 | Viewed by 3253
Abstract
The acquisition of satellite images over a wide area is often carried out across seasons because of satellite orbits and atmospheric conditions (e.g., cloud cover, dust, etc.). This results in spectral mismatch between adjacent scenes, as the sun angle and the atmospheric conditions differ between acquisitions. In this work, we developed an approach to generate seamless mosaics using the Scale-Invariant Feature Transform (SIFT). We make use of the overlapping areas between two adjacent scenes and map the spectral values of one scene to the other based on the filtered points detected by SIFT to create a seamless mosaic. The Random Sample Consensus (RANSAC) method is applied successively, first to filter the SIFT points matched across adjacent tiles and then to remove spectral outliers in each band of an image. Several high-resolution satellite images acquired with the WorldView-2 and Dubaisat-2 satellites, as well as medium-resolution Sentinel-2 imagery, are used for experimentation. The experimental results show that the proposed approach can generate good seamless mosaics. Furthermore, Sentinel-2 level 2A (L2A) surface reflectance data are used to adjust the spectral values for color consistency.
(This article belongs to the Special Issue Satellite Image Processing and Applications)
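The per-band spectral mapping with RANSAC-style outlier removal can be sketched in NumPy. This is a simplified illustration, not the paper's pipeline: it assumes the matched pixel values from the overlap are already paired (the SIFT matching step is skipped), and the function name `fit_band_mapping` and its parameters are hypothetical. It fits a gain/offset mapping from one scene's band to the other's while rejecting gross spectral outliers:

```python
import numpy as np

def fit_band_mapping(src, dst, n_iters=100, tol=2.0, seed=0):
    """Estimate a linear mapping dst ~ a*src + b for one band, RANSAC-style:
    repeatedly fit a line through two random point pairs, keep the fit with
    the most inliers, then refit on all inliers by least squares.

    src, dst: 1-D arrays of corresponding pixel values in the overlap area.
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iters):
        i, j = rng.choice(len(src), size=2, replace=False)
        if src[i] == src[j]:
            continue                      # degenerate sample, skip
        a = (dst[j] - dst[i]) / (src[j] - src[i])
        b = dst[i] - a * src[i]
        inliers = np.abs(a * src + b - dst) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    a, b = np.polyfit(src[best_inliers], dst[best_inliers], 1)
    return a, b

# Toy overlap: dst follows 1.2*src + 10, with a few gross spectral outliers.
src = np.linspace(0, 100, 50)
dst = 1.2 * src + 10.0
dst[::10] += 80.0                         # inject outliers
a, b = fit_band_mapping(src, dst)
```

Once `a` and `b` are estimated per band, applying `a*src + b` to the source scene brings its spectral values into line with the destination scene before blending the mosaic.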