Remote Sensing Data Fusion as a Strategy to Add Value to Earth Observation Data

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (30 November 2021) | Viewed by 38614

Special Issue Editors


Dr. Consuelo Gonzalo-Martín
Guest Editor
Physics, Universidad Politécnica de Madrid, Center of Biomedical Technology, Campus de Montegancedo, Pozuelo de Alarcón, 28233 Madrid, Spain
Interests: remote sensing; artificial neural networks; image analysis; medical imaging; object-based image analysis

Dr. Mario Lillo-Saavedra
Guest Editor
Electrical Engineering, Universidad de Concepción, Avda. Vicente Mendez 595, Chillán 3812120, Chile
Interests: image processing; image fusion; data fusion; data analytics and machine learning applied to agricultural water management

Dr. Angel M. Garcia-Pedrero
Guest Editor
Universidad Politécnica de Madrid, Madrid, Spain
Interests: image analysis; time series analysis; machine learning; deep learning; image segmentation

Special Issue Information

Dear Colleagues,

The continuous improvement of remote sensors in applications such as environmental monitoring and management, precision agriculture, and security and defense generates large volumes of data that must be processed to extract useful and valuable information. Data fusion is the process of integrating multiple data sources to produce information that is more consistent, accurate, and useful than that provided by any individual source. In the particular case of image fusion, multiple images from single or multiple imaging modalities are merged to improve imaging quality, preserving or enhancing specific features and eliminating redundant information. To increase the value of remotely sensed images, in addition to fusing different modalities, they can also be merged with other types of Earth observation data. The aim of this Special Issue is to highlight the latest advances and trends in strategies for fusing remote sensing data, images, and any other types of data that can add value to raw Earth observation data.

This Special Issue welcomes articles dedicated to all aspects of multi-sensor (e.g., multispectral, hyperspectral, SAR) and multi-temporal remote sensing data fusion, as well as the fusion of remotely sensed images with other kinds of data. Articles may focus on, but are not limited to, novel image fusion algorithms based on transform-domain methods, machine learning, and other theoretical approaches. Applications in urban areas, the environment, agriculture, and natural resources management in a climate change scenario are also of interest.

Dr. Consuelo Gonzalo-Martín
Dr. Mario Lillo-Saavedra
Dr. Angel M. Garcia-Pedrero
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Image fusion
  • Data fusion
  • Transform domain
  • Machine learning
  • Deep learning
  • Multi-resolution analysis
  • Remote sensing optical data
  • SAR data

Published Papers (8 papers)


Research

19 pages, 5966 KiB  
Article
Fusion of China ZY-1 02D Hyperspectral Data and Multispectral Data: Which Methods Should Be Used?
by Han Lu, Danyu Qiao, Yongxin Li, Shuang Wu and Lei Deng
Remote Sens. 2021, 13(12), 2354; https://doi.org/10.3390/rs13122354 - 16 Jun 2021
Cited by 13 | Viewed by 2700
Abstract
ZY-1 02D is China’s first civil hyperspectral (HS) operational satellite, developed independently and successfully launched in 2019. It can collect HS data with a spatial resolution of 30 m, 166 spectral bands, a spectral range of 400–2500 nm, and a swath width of 60 km. Its competitive advantages over other on-orbit or planned satellites are its high spectral resolution and large swath width. Unfortunately, the relatively low spatial resolution may limit its applications. As a result, fusing ZY-1 02D HS data with high-spatial-resolution multispectral (MS) data is required to improve spatial resolution while maintaining spectral fidelity. This paper conducted a comprehensive evaluation study on the fusion of ZY-1 02D HS data with ZY-1 02D MS data (10-m spatial resolution), based on visual interpretation and quantitative metrics. Datasets from Hebei, China, were used in this experiment, and the performances of six common data fusion methods, namely Gram-Schmidt (GS), High Pass Filter (HPF), Nearest-Neighbor Diffusion (NND), Modified Intensity-Hue-Saturation (IHS), Wavelet Transform (Wavelet), and Color Normalized Sharpening (Brovey), were compared. The experimental results show that: (1) the HPF and GS methods are better suited for the fusion of ZY-1 02D HS and MS data, (2) the IHS and Brovey methods can effectively improve the spatial resolution of ZY-1 02D HS data but introduce spectral distortion, and (3) the Wavelet and NND results have high spectral fidelity but poor spatial detail representation. The findings of this study could serve as a good reference for the practical application of ZY-1 02D HS data fusion.
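
For readers unfamiliar with the HPF approach ranked highly here, the following minimal Python sketch shows the basic idea: high-frequency spatial detail is extracted from the fine-resolution band and injected into an upsampled coarse band. The synthetic arrays, filter size, and gain rule are illustrative assumptions, not the implementation evaluated in the paper.

```python
# Minimal sketch of high-pass-filter (HPF) fusion on synthetic arrays.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
hs_band = rng.random((60, 60))      # low-res hyperspectral band (e.g., 30 m)
ms_band = rng.random((180, 180))    # high-res multispectral band (e.g., 10 m)

# 1. Upsample the HS band to the MS grid (factor 3, bilinear).
hs_up = ndimage.zoom(hs_band, 3, order=1)

# 2. Extract spatial detail from the MS band with a high-pass filter
#    (original minus a low-pass, box-filtered version).
low_pass = ndimage.uniform_filter(ms_band, size=5)
detail = ms_band - low_pass

# 3. Inject the detail, weighted so the injected energy roughly matches
#    the HS band statistics (a simple, assumed gain rule).
gain = hs_up.std() / ms_band.std()
fused = hs_up + gain * detail
```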

14 pages, 3004 KiB  
Article
Diurnal Cycle of Passive Microwave Brightness Temperatures over Land at a Global Scale
by Zahra Sharifnezhad, Hamid Norouzi, Satya Prakash, Reginald Blake and Reza Khanbilvardi
Remote Sens. 2021, 13(4), 817; https://doi.org/10.3390/rs13040817 - 23 Feb 2021
Cited by 4 | Viewed by 3054
Abstract
Satellite-borne passive microwave radiometers provide brightness temperature (TB) measurements in a large spectral range that includes a number of frequency channels and generally two polarizations: horizontal and vertical. These TBs are widely used to retrieve several atmospheric and surface variables and parameters, such as precipitation, soil moisture, water vapor, air temperature profile, and land surface emissivity. Since TBs are measured at different microwave frequencies with various instruments and at various incidence angles, spatial resolutions, and radiometric characteristics, a mere direct integration of data from different microwave sensors would not necessarily provide consistency. However, when appropriately harmonized, they can provide a complete dataset to estimate the diurnal cycle. This study first constructs the diurnal cycle of land TBs using the non-sun-synchronous Global Precipitation Measurement (GPM) Microwave Imager (GMI) observations by utilizing a cubic spline fit. The acquisition times of GMI vary from day to day and, therefore, the shape (amplitude and phase) of the diurnal cycle for each month is obtained by merging several days of measurements. This diurnal pattern is used as a point of reference when intercalibrated TBs from other passive microwave sensors with daily fixed acquisition times (e.g., Special Sensor Microwave Imager/Sounder and Advanced Microwave Scanning Radiometer 2) are used to modify and tune the monthly diurnal cycle to a daily diurnal cycle at a global scale. Since the GMI does not cover polar regions, the proposed method estimates a consistent diurnal cycle of land TBs at the global scale. Results show that the shape and peak of the constructed TB diurnal cycle are similar to those of the diurnal cycle of land surface temperature. The diurnal brightness temperature range for different land cover types has also been explored using the derived diurnal cycle of TBs. In general, a large diurnal TB range of more than 15 K has been observed for the grassland, shrubland, and tundra land cover types, whereas it is less than 5 K over forests. Furthermore, seasonal variations in the diurnal TB range for different land cover types show a more consistent result over the Southern Hemisphere than over the Northern Hemisphere. The calibrated TB diurnal cycle may then be used to consistently estimate the diurnal cycle of land surface emissivity. Moreover, since changes in land surface emissivity are related to moisture change and freeze–thaw (FT) transitions in high-latitude regions, the results of this study enhance temporal detection of FT state, particularly during the transition times when multiple FT changes may occur within a day.
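
The core of the harmonization step, fitting a smooth 24-hour cycle through irregular overpass times, can be sketched with a periodic cubic spline. The hours and TB values below are made-up placeholders, not GMI data.

```python
# Sketch: fitting a periodic cubic spline to passive-microwave TB samples
# taken at irregular local acquisition times, standing in for merging
# several days of GMI overpasses into one monthly diurnal cycle.
import numpy as np
from scipy.interpolate import CubicSpline

hours = np.array([1.5, 4.0, 7.5, 10.0, 13.5, 16.0, 19.5, 22.0])   # overpass times
tb = np.array([265.0, 263.0, 268.0, 275.0, 281.0, 279.0, 272.0, 267.0])  # TB in K

# Enforce 24 h periodicity by repeating the first sample at t + 24 h.
x = np.append(hours, hours[0] + 24.0)
y = np.append(tb, tb[0])
spline = CubicSpline(x, y, bc_type='periodic')

t = np.linspace(0, 24, 97)
diurnal_cycle = spline(t)                                   # TB at any local time
diurnal_range = diurnal_cycle.max() - diurnal_cycle.min()   # diurnal TB range
```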

21 pages, 1565 KiB  
Article
Fusion of Rain Radar Images and Wind Forecasts in a Deep Learning Model Applied to Rain Nowcasting
by Vincent Bouget, Dominique Béréziat, Julien Brajard, Anastase Charantonis and Arthur Filoche
Remote Sens. 2021, 13(2), 246; https://doi.org/10.3390/rs13020246 - 13 Jan 2021
Cited by 27 | Viewed by 4544
Abstract
Short- or mid-term rainfall forecasting is a major task with several environmental applications, such as agricultural management or flood risk monitoring. Existing data-driven approaches, especially deep learning models, have shown significant skill at this task, using only rainfall radar images as inputs. In order to determine whether using other meteorological parameters such as wind would improve forecasts, we trained a deep learning model on a fusion of rainfall radar images and wind velocity produced by a weather forecast model. The network was compared to a similar architecture trained only on radar data, to a basic persistence model, and to an approach based on optical flow. Our network outperforms the optical-flow approach by 8% in F1-score on moderate and heavier rain events for forecasts at a 30-min horizon. Furthermore, it outperforms the same architecture trained using only rainfall radar images by 7%. Merging rain and wind data also proved to stabilize the training process and enabled significant improvements, especially on difficult-to-predict high-precipitation events.
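
A common way to implement this kind of fusion is to stack the radar frames and the forecast wind components as input channels of a convolutional network. The toy PyTorch model below illustrates that channel-concatenation pattern only; the layer sizes, channel counts, and shapes are assumptions, not the architecture of the paper.

```python
# Sketch: fusing rain-radar frames with wind components by channel
# concatenation before a small convolutional network (hypothetical shapes).
import torch
import torch.nn as nn

class RainWindNet(nn.Module):
    def __init__(self, n_radar=4, n_wind=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_radar + n_wind, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),           # next-frame rainfall map
        )

    def forward(self, radar, wind):
        # Fusion happens here: radar and wind enter as extra channels.
        return self.net(torch.cat([radar, wind], dim=1))

model = RainWindNet()
radar = torch.rand(8, 4, 64, 64)   # batch of past radar frames
wind = torch.rand(8, 2, 64, 64)    # u/v wind from a forecast model
pred = model(radar, wind)          # (8, 1, 64, 64)
```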

20 pages, 9630 KiB  
Article
A Fast Three-Dimensional Convolutional Neural Network-Based Spatiotemporal Fusion Method (STF3DCNN) Using a Spatial-Temporal-Spectral Dataset
by Mingyuan Peng, Lifu Zhang, Xuejian Sun, Yi Cen and Xiaoyang Zhao
Remote Sens. 2020, 12(23), 3888; https://doi.org/10.3390/rs12233888 - 27 Nov 2020
Cited by 8 | Viewed by 2853 | Correction
Abstract
With the growing development of remote sensors, huge volumes of remote sensing data are being utilized in related applications, bringing new challenges to the efficiency and capability of processing huge datasets. Spatiotemporal remote sensing data fusion can restore high-spatial- and high-temporal-resolution remote sensing data from multiple remote sensing datasets. However, the current methods require long computing times and are of low efficiency, especially the newly proposed deep learning-based methods. Here, we propose a fast three-dimensional convolutional neural network-based spatiotemporal fusion method (STF3DCNN) using a spatial-temporal-spectral dataset. This method is able to fuse low-spatial, high-temporal-resolution data (HTLS) and high-spatial, low-temporal-resolution data (HSLT) in a four-dimensional spatial-temporal-spectral dataset with increased efficiency, while ensuring accuracy. The method was tested using three datasets, and the network parameters were discussed. In addition, the method was compared with commonly used spatiotemporal fusion methods to verify our conclusions.
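
The distinctive ingredient here is convolving jointly over space and time. The fragment below shows what a 3D convolution over a (band, time, y, x) stack looks like in PyTorch; the residual formulation and all shapes are illustrative assumptions rather than the published STF3DCNN design.

```python
# Sketch: a 3D convolution over a (bands, time, y, x) stack, the core
# operation behind 3D-CNN spatiotemporal fusion (shapes are illustrative).
import torch
import torch.nn as nn

# 4-D dataset: batch, spectral bands as channels, time, height, width
x = torch.rand(1, 6, 5, 64, 64)    # e.g., 6 bands, 5 acquisition dates

conv3d = nn.Sequential(
    nn.Conv3d(6, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 6, kernel_size=3, padding=1),
)
residual = conv3d(x)               # detail learned across space and time
fused = x + residual               # assumed residual-style prediction
```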

21 pages, 10447 KiB  
Article
An Effective High Spatiotemporal Resolution NDVI Fusion Model Based on Histogram Clustering
by Xuegang Xing, Changzhen Yan, Yanyan Jia, Haowei Jia, Junfeng Lu and Guangjie Luo
Remote Sens. 2020, 12(22), 3774; https://doi.org/10.3390/rs12223774 - 17 Nov 2020
Cited by 6 | Viewed by 2496
Abstract
The normalized difference vegetation index (NDVI) is a powerful tool for understanding past vegetation, monitoring its current state, and predicting its future. Due to technological and budget limitations, existing global NDVI time-series data cannot simultaneously meet the needs of high spatial and temporal resolution. This study proposes a high-spatiotemporal-resolution NDVI fusion model based on histogram clustering (NDVI_FMHC), which uses a new spatiotemporal fusion framework to predict phenological and shape changes. The model also uses four strategies to reduce error: the construction of an overdetermined linear mixed model, multiscale prediction, residual distribution, and Gaussian filtering. Five groups of real MODIS_NDVI and Landsat_NDVI datasets were used to verify the predictive performance of NDVI_FMHC. The results indicate that NDVI_FMHC has higher accuracy and robustness in forest areas (r = 0.9488 and ADD = 0.0229) and cultivated land areas (r = 0.9493 and ADD = 0.0605), while the prediction effect is relatively weak in areas subject to shape changes, such as flooded areas (r = 0.8450 and ADD = 0.0968), urban areas (r = 0.8855 and ADD = 0.0756), and fire areas (r = 0.8417 and ADD = 0.0749). Compared with ESTARFM, NDVI_LMGM, and FSDAF, NDVI_FMHC has the highest prediction accuracy, the best spatial detail retention, and the strongest ability to capture shape changes. NDVI_FMHC can therefore provide NDVI time-series data with high spatiotemporal resolution, supporting research on long-term land surface dynamic processes in complex environments.
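
As background for the histogram-clustering idea, the snippet below computes NDVI from red and near-infrared bands and then groups pixels by the histogram bin their NDVI falls into, a crude stand-in for the class maps such methods derive from the coarse image; the data and bin count are synthetic assumptions.

```python
# Sketch: NDVI from red/NIR bands, then a simple histogram-based grouping
# of NDVI values (synthetic arrays; not the paper's clustering algorithm).
import numpy as np

rng = np.random.default_rng(1)
red = rng.random((100, 100)) * 0.3   # synthetic red reflectance
nir = rng.random((100, 100)) * 0.6   # synthetic NIR reflectance

ndvi = (nir - red) / (nir + red + 1e-9)

# Group pixels by which histogram bin their NDVI falls into.
counts, edges = np.histogram(ndvi, bins=8)
labels = np.digitize(ndvi, edges[1:-1])   # class map, values 0..7
```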

37 pages, 13981 KiB  
Article
Super-Resolution of Sentinel-2 Images Using Convolutional Neural Networks and Real Ground Truth Data
by Mikel Galar, Rubén Sesma, Christian Ayala, Lourdes Albizua and Carlos Aranda
Remote Sens. 2020, 12(18), 2941; https://doi.org/10.3390/rs12182941 - 10 Sep 2020
Cited by 22 | Viewed by 9106
Abstract
Earth observation data is becoming more accessible and affordable thanks to the Copernicus programme and its Sentinel missions. Every location worldwide can be freely monitored approximately every 5 days using the multi-spectral images provided by Sentinel-2. The spatial resolution of these images for RGBN (RGB + near-infrared) bands is 10 m, which is more than enough for many tasks but falls short for many others. For this reason, if their spatial resolution could be enhanced without additional costs, any subsequent analyses based on these images would benefit. Previous works have mainly focused on increasing the resolution of the lower-resolution bands of Sentinel-2 (20 m and 60 m) to 10 m. In those cases, super-resolution is supported by bands captured at finer resolutions (RGBN at 10 m). In contrast, this paper focuses on the problem of increasing the spatial resolution of the 10 m bands to either 5 m or 2.5 m without additional information available. This problem is known as single-image super-resolution. For standard images, deep learning techniques have become the de facto standard for learning the mapping from lower- to higher-resolution images due to their learning capacity. However, super-resolution models learned for standard images do not work well with satellite images, and hence a specific model for this problem needs to be learned. The main challenge this paper aims to solve is how to train a super-resolution model for Sentinel-2 images when no ground truth exists (Sentinel-2 images at 5 m or 2.5 m). Our proposal consists of using a reference satellite with high spectral-band similarity to Sentinel-2, but with higher spatial resolution, to create image pairs at both the source and target resolutions. This way, we can train a state-of-the-art convolutional neural network to recover details not present in the original RGBN bands. An exhaustive experimental study is carried out to validate our proposal, including a comparison with the most extended strategy for super-resolving Sentinel-2, which consists of learning a model to super-resolve from an under-sampled version at either 40 m or 20 m to the original 10 m resolution and then applying this model to super-resolve from 10 m to 5 m or 2.5 m. Finally, we also show that the spectral radiometry of the native bands is maintained when super-resolving images, in such a way that they can be used for any subsequent processing as if they were images acquired by Sentinel-2.
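
The pairing strategy described above, degrading a finer-resolution reference image to the source resolution to obtain supervised (LR, HR) training pairs, can be sketched as follows; the band count, scale factor, blur, and array contents are assumptions for illustration.

```python
# Sketch: building (LR, HR) training pairs from a higher-resolution
# reference image by degrading it to the source resolution
# (synthetic array, hypothetical 2x factor).
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
hr_reference = rng.random((4, 512, 512))   # 4 bands (RGBN) at e.g. 5 m

# Degrade: blur then decimate to approximate the coarser sensor.
lr_input = np.stack([
    ndimage.zoom(ndimage.gaussian_filter(band, sigma=1.0), 0.5, order=1)
    for band in hr_reference
])                                          # (4, 256, 256), e.g. 10 m

# (lr_input, hr_reference) now form one supervised training pair.
```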

27 pages, 21972 KiB  
Article
Super-Resolution of Sentinel-2 Imagery Using Generative Adversarial Networks
by Luis Salgueiro Romero, Javier Marcello and Verónica Vilaplana
Remote Sens. 2020, 12(15), 2424; https://doi.org/10.3390/rs12152424 - 28 Jul 2020
Cited by 59 | Viewed by 9705
Abstract
Sentinel-2 satellites provide multi-spectral optical remote sensing images with four bands at 10 m spatial resolution. These images, due to the open data distribution policy, are becoming an important resource for several applications. However, for small-scale studies, the spatial detail of these images might not be sufficient. On the other hand, WorldView commercial satellites offer multi-spectral images with a very high spatial resolution, typically less than 2 m, but their use can be impractical for large areas or multi-temporal analysis due to their high cost. To exploit the free availability of Sentinel imagery, it is worth considering deep learning techniques for single-image super-resolution tasks, allowing the spatial enhancement of low-resolution (LR) images by recovering high-frequency details to produce high-resolution (HR) super-resolved images. In this work, we implement and train a model based on the Enhanced Super-Resolution Generative Adversarial Network (ESRGAN) with pairs of WorldView–Sentinel images to generate a super-resolved multispectral Sentinel-2 output with a scaling factor of 5. Our model, named RS-ESRGAN, removes the upsampling layers of the network to make it feasible to train with co-registered remote sensing images. The results outperform state-of-the-art models on standard metrics such as PSNR, SSIM, ERGAS, SAM, and CC. Moreover, qualitative visual analysis shows spatial improvements as well as the preservation of the spectral information, allowing the super-resolved Sentinel-2 imagery to be used in studies requiring very high spatial resolution.
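
Two of the metrics cited here, PSNR and SAM, are easy to compute directly. The functions below follow their standard definitions, applied to synthetic reference and super-resolved arrays.

```python
# Sketch: PSNR and mean spectral angle (SAM) on synthetic image pairs.
import numpy as np

def psnr(ref, test, data_range=1.0):
    # Peak signal-to-noise ratio in dB.
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def sam(ref, test):
    # Mean spectral angle (radians) between per-pixel spectra, bands last.
    dot = np.sum(ref * test, axis=-1)
    denom = np.linalg.norm(ref, axis=-1) * np.linalg.norm(test, axis=-1)
    return np.mean(np.arccos(np.clip(dot / (denom + 1e-12), -1.0, 1.0)))

rng = np.random.default_rng(3)
ref = rng.random((64, 64, 4))                                 # reference image
test = np.clip(ref + rng.normal(0, 0.01, ref.shape), 0, 1)    # noisy estimate
print(psnr(ref, test), sam(ref, test))
```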

Other

14 pages, 4815 KiB  
Technical Note
Ex Post Analysis of Water Supply Demand in an Agricultural Basin by Multi-Source Data Integration
by Mario Lillo-Saavedra, Viviana Gavilán, Angel García-Pedrero, Consuelo Gonzalo-Martín, Felipe de la Hoz, Marcelo Somos-Valenzuela and Diego Rivera
Remote Sens. 2021, 13(11), 2022; https://doi.org/10.3390/rs13112022 - 21 May 2021
Cited by 2 | Viewed by 2191
Abstract
In this work, we present a new methodology integrating data from multiple sources, such as observations from the Landsat-8 (L8) and Sentinel-2 (S2) satellites, with information gathered in field campaigns and information derived from different public databases, in order to characterize the water demand of crops (potential and estimated) in a spatially and temporally distributed manner. This methodology is applied to a case study corresponding to the basin of the Longaví River, located in south-central Chile. Potential and estimated demands, aggregated at different spatio-temporal scales, are compared to the streamflow of the Longaví River, as well as extractions from the groundwater system. The results allow us to conclude that spatio-temporally distributed information on paired water availability and demand makes it possible to close the water gap (i.e., the difference between supply and demand), enabling better management of water resources in a watershed.
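
The aggregation-and-comparison step can be illustrated with a toy pandas computation of the monthly water gap; the field areas, crop evapotranspiration values, and supply volumes are invented placeholders, not data from the Longaví basin.

```python
# Sketch: comparing aggregated crop water demand with river supply to
# expose the "water gap" (toy numbers only).
import pandas as pd

demand = pd.DataFrame({
    "month": ["Jan", "Jan", "Feb", "Feb"],
    "field_id": [1, 2, 1, 2],
    "etc_mm": [120.0, 90.0, 110.0, 85.0],   # crop evapotranspiration demand
    "area_ha": [10.0, 25.0, 10.0, 25.0],
})
# Volume demanded per field per month: 1 mm over 1 ha = 10 m^3.
demand["volume_m3"] = demand["etc_mm"] * demand["area_ha"] * 10

supply = pd.Series({"Jan": 35000.0, "Feb": 30000.0}, name="supply_m3")

monthly_demand = demand.groupby("month")["volume_m3"].sum()
gap = supply - monthly_demand   # positive: surplus; negative: deficit
print(gap)
```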
