
Multi-Sensor Fusion Technology in Remote Sensing: Datasets, Algorithms and Applications

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (15 January 2023) | Viewed by 42205

Special Issue Editors


Guest Editor

Guest Editor
Department of Computer Science, Kingston University, London, UK
Interests: computer vision; machine learning; pattern recognition; video and motion analysis; human motion analysis

Special Issue Information

Dear Colleagues,

Multi-sensor fusion technology is commonly used in various real-world applications, such as remote sensing, military, robotics, and autonomous driving. Extensive research has been dedicated to the effective use of intelligent and advanced multi-sensor fusion methods for accurate monitoring, complete information acquisition, and optimal decision-making. However, multi-sensor fusion methods face three main challenges: (1) automatically calibrating the sensors so that their readings share a common coordinate frame, (2) extracting features from heterogeneous types of sensory data, and (3) selecting a suitable fusion level.

The aim of this Special Issue is to provide an opportunity to explore these challenges in multi-sensor fusion for remote sensing. Topics include, but are not limited to, multi-sensor, multi-source, and multi-process information fusion. Articles are expected to emphasize one or more of three facets: data, architectures, and algorithms. Applications of multi-sensor fusion technologies and systems are also welcome.

Dr. Fahimeh Farahnakian
Prof. Dr. Jukka Heikkonen
Prof. Dr. Dimitrios Makris
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Multi-sensor fusion
  • Image fusion
  • Data fusion
  • Multi-source fusion
  • Remote sensing
  • Machine learning
  • Deep learning
  • Applications

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (12 papers)


Research


17 pages, 13316 KiB  
Article
An Effective Infrared and Visible Image Fusion Approach via Rolling Guidance Filtering and Gradient Saliency Map
by Liangliang Li, Ming Lv, Zhenhong Jia, Qingxin Jin, Minqin Liu, Liangfu Chen and Hongbing Ma
Remote Sens. 2023, 15(10), 2486; https://doi.org/10.3390/rs15102486 - 9 May 2023
Cited by 7 | Viewed by 2034
Abstract
To address the loss of brightness and detail information in infrared and visible image fusion, this paper proposes an effective fusion method based on rolling guidance filtering and a gradient saliency map. Rolling guidance filtering decomposes the input images into approximate layers and residual layers; an energy-attribute fusion model fuses the approximate layers; and a gradient saliency map is introduced, with corresponding weight matrices constructed to fuse the residual layers. The fused image is generated by reconstructing the fused approximate-layer and residual-layer sub-images. Experimental results demonstrate the superiority of the proposed infrared and visible image fusion method.
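The decompose/fuse/reconstruct pipeline described in the abstract can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation: a plain Gaussian filter stands in for the rolling guidance filter (which is an iterative joint filter), and the energy-attribute and gradient-saliency weights are simplified.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(img, sigma=3.0, iters=4):
    # Simplified rolling-guidance-style smoothing: iterate a Gaussian blur
    # guided by the previous iterate. A true rolling guidance filter uses
    # joint bilateral filtering; this stand-in only mimics the iteration.
    base = gaussian_filter(img, sigma)
    for _ in range(iters):
        base = gaussian_filter(0.5 * (base + img), sigma)
    return base, img - base  # approximate layer, residual layer

def fuse_ir_vis(ir, vis):
    b_ir, r_ir = decompose(ir)
    b_vis, r_vis = decompose(vis)
    # Energy-based fusion of approximate layers: weight by local energy.
    e_ir = gaussian_filter(b_ir ** 2, 5.0)
    e_vis = gaussian_filter(b_vis ** 2, 5.0)
    w = e_ir / (e_ir + e_vis + 1e-12)
    fused_base = w * b_ir + (1.0 - w) * b_vis
    # Gradient-saliency fusion of residual layers: at each pixel, keep the
    # residual with the larger local gradient magnitude.
    def grad_mag(x):
        gy, gx = np.gradient(x)
        return np.hypot(gx, gy)
    fused_resid = np.where(grad_mag(r_ir) >= grad_mag(r_vis), r_ir, r_vis)
    return fused_base + fused_resid
```

The fused image is simply the sum of the fused base and residual layers, mirroring the reconstruction step in the abstract.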

34 pages, 5693 KiB  
Article
A Comprehensive Study of Clustering-Based Techniques for Detecting Abnormal Vessel Behavior
by Farshad Farahnakian, Florent Nicolas, Fahimeh Farahnakian, Paavo Nevalainen, Javad Sheikh, Jukka Heikkonen and Csaba Raduly-Baka
Remote Sens. 2023, 15(6), 1477; https://doi.org/10.3390/rs15061477 - 7 Mar 2023
Cited by 13 | Viewed by 3302
Abstract
Abnormal behavior detection is currently receiving much attention because of the availability of marine equipment and data that allow maritime agents to track vessels. One of the most popular tools for developing an efficient anomaly detection system is the Automatic Identification System (AIS). The aim of this paper is to explore the performance of existing, well-known clustering methods for detecting the two most dangerous abnormal behaviors based on AIS data. The methods include K-means, Density-Based Spatial Clustering of Applications with Noise (DBSCAN), Affinity Propagation (AP), and the Gaussian Mixture Model (GMM). To evaluate the clustering methods, we used AIS data of vessels collected through the Finnish Transport Agency from the whole Baltic Sea over three months. Whereas most existing studies focus on ocean route recognition, deviations from regulated ocean routes, or irregular speed, we focused on dark ships, i.e., vessels that turn off their AIS transponders to perform illegal activities, and on spiral vessel movements. The experimental results demonstrate that the K-means clustering method can effectively detect dark ships and spiral vessel movements, which are the most threatening events for maritime safety.
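The core idea of clustering vessels by behavior and flagging the outlying cluster can be sketched as follows. The features here (mean speed, turn-rate variance) and the synthetic data are hypothetical stand-ins; the paper's actual AIS feature set and thresholds are not reproduced.

```python
import numpy as np

def kmeans(X, k, iters=100):
    # Minimal K-means with greedy farthest-point initialization,
    # then Lloyd's assignment/update iterations.
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Hypothetical per-vessel features: [mean speed (knots), turn-rate variance].
# Normal traffic clusters tightly; spiral movements show high turn variance.
rng = np.random.default_rng(42)
normal = np.column_stack([rng.normal(12, 1, 200), rng.normal(0.1, 0.02, 200)])
spiral = np.column_stack([rng.normal(4, 1, 10), rng.normal(2.0, 0.3, 10)])
X = np.vstack([normal, spiral])
labels, centers = kmeans(X, k=2)
# Flag the smaller cluster as anomalous.
anomalous = labels == np.bincount(labels).argmin()
```

A real system would derive such features from AIS position reports and treat AIS gaps (dark periods) as a separate signal rather than a clustered feature.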

25 pages, 7789 KiB  
Article
Multispectral and Hyperspectral Image Fusion Based on Regularized Coupled Non-Negative Block-Term Tensor Decomposition
by Hao Guo, Wenxing Bao, Kewen Qu, Xuan Ma and Meng Cao
Remote Sens. 2022, 14(21), 5306; https://doi.org/10.3390/rs14215306 - 23 Oct 2022
Cited by 13 | Viewed by 2677
Abstract
The problem of multispectral and hyperspectral image fusion (MHF) is to reconstruct images by fusing the spatial information of multispectral images and the spectral information of hyperspectral images. Because the hyperspectral canonical polyadic decomposition model and the Tucker model cannot introduce a physical interpretation of the latent factors into the framework, it is difficult to use the known properties and abundances of endmembers to generate high-quality fused images. This paper proposes a new fusion algorithm. A coupled non-negative block-term tensor model is used to estimate the ideal high-spatial-resolution hyperspectral image; its sparsity is characterized by adding an ℓ1-norm, and total variation (TV) is introduced to describe piecewise smoothness. Difference operators in two directions are defined and introduced to characterize piecewise smoothness. Finally, the proximal alternating optimization (PAO) algorithm and the alternating direction method of multipliers (ADMM) are used to solve the model iteratively. Experiments on two standard datasets and two local datasets show that this method outperforms state-of-the-art methods.
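Two building blocks of such ℓ1/TV-regularized ADMM solvers are elementwise soft-thresholding (the proximal operator of the ℓ1-norm) and first-order difference operators in two directions. A generic sketch, not the paper's full coupled tensor solver:

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of tau * ||x||_1: shrink each entry toward zero by tau.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def tv_differences(img):
    # First-order difference operators in two directions, as used by
    # anisotropic TV regularizers.
    dx = np.diff(img, axis=1)  # horizontal differences
    dy = np.diff(img, axis=0)  # vertical differences
    return dx, dy
```

In an ADMM iteration, the ℓ1 subproblem is solved exactly by soft_threshold, while the TV term penalizes the magnitudes of dx and dy.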

22 pages, 7242 KiB  
Article
AMM-FuseNet: Attention-Based Multi-Modal Image Fusion Network for Land Cover Mapping
by Wanli Ma, Oktay Karakuş and Paul L. Rosin
Remote Sens. 2022, 14(18), 4458; https://doi.org/10.3390/rs14184458 - 7 Sep 2022
Cited by 22 | Viewed by 5166
Abstract
Land cover mapping provides spatial information on the physical properties of the Earth's surface for various classes such as wetlands, artificial surfaces and constructions, vineyards, and water bodies. Reliable land cover information is crucial to developing solutions to a variety of environmental problems, such as the destruction of important wetlands/forests and the loss of fish and wildlife habitats. This has made land cover mapping one of the most widespread applications in remote sensing computational imaging. However, due to differences between modalities in resolution, content, and sensors, integrating the complementary information that multi-modal remote sensing imagery exhibits into a robust and accurate system remains challenging, and classical segmentation approaches generally do not give satisfactory results for land cover mapping. In this paper, we propose a novel dynamic deep network architecture, AMM-FuseNet, that promotes the use of multi-modal remote sensing images for land cover mapping. The proposed network exploits a hybrid of the channel attention mechanism and densely connected atrous spatial pyramid pooling (DenseASPP). In the experimental analysis, to verify the validity of the proposed method, we test AMM-FuseNet on three datasets and compare it with six state-of-the-art models: DeepLabV3+, PSPNet, UNet, SegNet, DenseASPP, and DANet. In addition, we demonstrate the capability of AMM-FuseNet under minimal training supervision (a reduced number of training samples), achieving less accuracy loss than the state of the art even with 1/20 of the training samples.
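The channel attention mechanism mentioned above can be illustrated with a generic squeeze-and-excitation-style gate; AMM-FuseNet's exact attention module is not specified here, so this NumPy sketch shows only the general idea of re-weighting feature channels.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    # Squeeze-and-excitation-style channel attention on a (C, H, W) map:
    # global-average-pool each channel, pass through a small bottleneck MLP,
    # and rescale the channels with a sigmoid gate.
    s = feat.mean(axis=(1, 2))              # squeeze: (C,)
    z = np.maximum(w1 @ s, 0.0)             # excitation bottleneck + ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ z)))  # per-channel sigmoid gate
    return feat * gate[:, None, None]       # rescale channels
```

In a multi-modal fusion network, such gates let the model emphasize the channels of whichever modality is most informative for a given scene.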

19 pages, 2560 KiB  
Article
Knowledge Graph Representation Learning-Based Forest Fire Prediction
by Jiahui Chen, Yi Yang, Ling Peng, Luanjie Chen and Xingtong Ge
Remote Sens. 2022, 14(17), 4391; https://doi.org/10.3390/rs14174391 - 3 Sep 2022
Cited by 11 | Viewed by 4035
Abstract
Forest fires destroy the ecological environment and cause large property losses. Much research in the field of geographic information revolves around forest fires. Traditional forest fire prediction methods hardly consider multi-source data fusion and therefore ignore the complex spatiotemporal dependencies and correlations that usually carry valuable information for prediction. Although knowledge graph methods have been used to model forest fire data, they mainly rely on artificially defined inference rules to make predictions; a representation and reasoning method for forest fire knowledge graphs is currently lacking. To address these issues, this paper proposes a forest fire prediction method based on knowledge graphs and representation learning. First, we designed a schema for the forest fire knowledge graph to fuse multi-source data, including time, space, and influencing factors. Then, we propose RotateS2F, a method that learns vector-based knowledge graph representations of forest fires. Finally, we leverage a link prediction algorithm to predict the forest fire burned area. We performed an experiment on the Montesinho Natural Park forest fire dataset, which contains 517 fires. The results show that our method reduces the mean absolute deviation by 28.61% and the root-mean-square error by 53.62% compared with previous methods.
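RotateS2F is the paper's own method; the name suggests it builds on the RotatE family of knowledge graph embeddings, whose core scoring idea is shown below as a generic illustration (this is standard RotatE, not the paper's model). A relation is a rotation in the complex plane, and a triple (h, r, t) is plausible when rotating h by r lands near t.

```python
import numpy as np

def rotate_score(h, phase, t):
    # RotatE-style score: relation r = elementwise rotation e^{i*phase}.
    # Higher (less negative) scores indicate more plausible triples.
    return -np.sum(np.abs(h * np.exp(1j * phase) - t))

rng = np.random.default_rng(0)
h = rng.standard_normal(16) + 1j * rng.standard_normal(16)
phase = rng.uniform(0, 2 * np.pi, 16)
t_true = h * np.exp(1j * phase)  # tail entity that exactly satisfies the rotation
t_rand = rng.standard_normal(16) + 1j * rng.standard_normal(16)
```

Link prediction then ranks candidate tails by this score; in the paper's setting, the predicted link encodes the burned-area outcome.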

16 pages, 2726 KiB  
Article
An Unsupervised Cascade Fusion Network for Radiometrically-Accurate Vis-NIR-SWIR Hyperspectral Sharpening
by Sihan Huang and David Messinger
Remote Sens. 2022, 14(17), 4390; https://doi.org/10.3390/rs14174390 - 3 Sep 2022
Cited by 1 | Viewed by 1969
Abstract
Hyperspectral sharpening is an important topic in many Earth observation applications. Many studies have addressed the Visible-Near-Infrared (Vis-NIR) hyperspectral sharpening problem, but there is little research on hyperspectral sharpening that includes short-wave infrared (SWIR) bands, despite many hyperspectral imaging systems capturing this wavelength range. In this paper, we introduce a novel method to achieve full-spectrum hyperspectral sharpening by fusing a high-resolution (HR) Vis-NIR multispectral image (MSI) with a Vis-NIR-SWIR low-resolution (LR) hyperspectral image (HSI). The novelty of the proposed approach lies in three points. Firstly, our model is designed to sharpen the full-spectrum HSI with high radiometric accuracy. Secondly, unlike most big-dataset-driven deep learning models, we need only one LR-HSI and HR-MSI pair for training. Lastly, per-pixel classification is implemented to test the spectral accuracy of the results.

21 pages, 7981 KiB  
Article
Spatio-Temporal Knowledge Graph Based Forest Fire Prediction with Multi Source Heterogeneous Data
by Xingtong Ge, Yi Yang, Ling Peng, Luanjie Chen, Weichao Li, Wenyue Zhang and Jiahui Chen
Remote Sens. 2022, 14(14), 3496; https://doi.org/10.3390/rs14143496 - 21 Jul 2022
Cited by 28 | Viewed by 4513
Abstract
Forest fires occur frequently and cause great harm to people's lives. Many researchers use machine learning techniques to predict forest fires from spatio-temporal data features. However, it is difficult to efficiently obtain such features from large-scale, multi-source, heterogeneous data, and a method is lacking that can effectively extract the features required by machine learning-based forest fire prediction from multi-source spatio-temporal data. This paper proposes a forest fire prediction method that integrates spatio-temporal knowledge graphs and machine learning models. The method fuses multi-source heterogeneous spatio-temporal forest fire data by constructing a forest fire semantic ontology and a knowledge graph-based spatio-temporal framework. Domain expertise for forest fire analysis is defined as semantic rules of the knowledge graph, and a rule-based reasoning method is proposed to obtain the corresponding data for specific machine learning-based forest fire prediction methods, tailored to real-time prediction scenarios. We performed experiments on real-world data from the Xichang and Yanyuan areas in Sichuan province. The results show that the proposed method benefits the fusion of multi-source spatio-temporal data and substantially improves prediction performance in real forest fire prediction scenarios.

22 pages, 11596 KiB  
Article
Encoding Geospatial Vector Data for Deep Learning: LULC as a Use Case
by Marvin Mc Cutchan and Ioannis Giannopoulos
Remote Sens. 2022, 14(12), 2812; https://doi.org/10.3390/rs14122812 - 11 Jun 2022
Cited by 2 | Viewed by 3580
Abstract
Geospatial vector data with semantic annotations are a promising but complex data source for spatial prediction tasks such as land use and land cover (LULC) classification. These data describe the geometries and the types (i.e., semantics) of geo-objects, such as a Shop or an Amenity. Unlike the raster data commonly used for such prediction tasks, geospatial vector data are irregular and heterogeneous, making it challenging for deep neural networks to learn from them. This work tackles the problem by introducing novel encodings that quantify geospatial vector data, allowing deep neural networks to learn from them and to predict spatially. The encodings are evaluated on a specific use case, LULC classification: we classify LULC using the different encodings as input to an attention-based deep neural network (the Perceiver). Based on the accuracy assessments, the potential of the encodings is compared. Furthermore, the influence of object semantics on classification performance is analyzed by pruning the ontology describing the semantics and repeating the LULC classification. The results suggest that the encoding of the geography and the semantic granularity of geospatial vector data influence classification performance overall and at the level of individual LULC classes. Nevertheless, the proposed encodings are not restricted to LULC classification and can be applied to other spatial prediction tasks. In general, this work highlights that geospatial vector data with semantic annotations are a rich data source unlocking new potential for spatial prediction. However, we also show that this potential depends on how much is known about the semantics and how the geography is presented to the deep neural network.
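The basic idea of quantifying vector data for an attention model can be illustrated with a minimal token encoding: one token per geo-object, combining coordinates with a one-hot semantic type. The paper's actual encodings are more elaborate; this is a hypothetical sketch of the concept only.

```python
import numpy as np

def encode_objects(objects, type_vocab):
    # Each geo-object becomes a token [x, y, one-hot(semantic type)], giving a
    # set-based model such as the Perceiver a fixed-width view of irregular,
    # heterogeneous vector data.
    tokens = np.zeros((len(objects), 2 + len(type_vocab)))
    for i, (x, y, obj_type) in enumerate(objects):
        tokens[i, :2] = (x, y)
        tokens[i, 2 + type_vocab.index(obj_type)] = 1.0
    return tokens

vocab = ["Shop", "Church", "Peak"]
tokens = encode_objects([(16.37, 48.21, "Shop"), (16.40, 48.19, "Peak")], vocab)
```

Pruning the ontology corresponds to coarsening type_vocab, which directly shrinks the semantic part of each token.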

23 pages, 12127 KiB  
Article
The Survey of Lava Tube Distribution in Jeju Island by Multi-Source Data Fusion
by Jung-Rack Kim, Shih-Yuan Lin and Jong-Woo Oh
Remote Sens. 2022, 14(3), 443; https://doi.org/10.3390/rs14030443 - 18 Jan 2022
Viewed by 3104
Abstract
Lava tubes, a major geomorphic element of volcanic terrain, have recently been highlighted both as testbeds for habitable environments and as natural hazards prone to unpredictable collapse. In this case study, we detected and monitored the risk of lava tube collapse on Jeju, an island off the southern tip of the Korean peninsula with more than 200 lava tubes, by conducting Interferometric Synthetic Aperture Radar (InSAR) time series analysis and synthesizing its outputs with spatial clues. Using Sentinel-1 time series processing over a three-year period, we identified deformations of up to 10 mm/year over InSAR Persistent Scatterers (PSs) along a specific geological unit. Using machine learning algorithms trained on time series deformations of samples together with clues from the spatial background, we classified candidate potential lava tube networks, primarily over coastal lava flows. The detections were validated by comparison with geophysical and ground surveys. Given that cavities in lava tubes pose serious risks, detailed physical exploration and threat assessment of potential cave groups are required before the planned intensive construction of infrastructure on Jeju Island. We also recommend using the approach established in this study to detect undiscovered collapse risks in cavities, especially over lava tube networks, and to explore lava tubes on planetary surfaces using proposed terrestrial and planetary InSAR sensors.

24 pages, 2702 KiB  
Article
Semantic Boosting: Enhancing Deep Learning Based LULC Classification
by Marvin Mc Cutchan, Alexis J. Comber, Ioannis Giannopoulos and Manuela Canestrini
Remote Sens. 2021, 13(16), 3197; https://doi.org/10.3390/rs13163197 - 12 Aug 2021
Cited by 6 | Viewed by 4203
Abstract
The classification of land use and land cover (LULC) is a well-studied task in remote sensing and geographic information science. It traditionally relies on remotely sensed imagery and therefore models land cover classes with respect to their electromagnetic reflectances, aggregated in pixels. This paper introduces a methodology that enables the inclusion of geographical object semantics (from vector data) in the LULC classification procedure. As such, information on the types of geographic objects (e.g., Shop, Church, Peak) can improve LULC classification accuracy. We demonstrate how semantics can be fused with imagery to classify LULC. Three experiments were performed to explore and highlight the impact and potential of semantics for this task. In each experiment, CORINE LULC data served as ground truth and was predicted using imagery from Sentinel-2 and semantics from LinkedGeoData with deep learning. Our results reveal that LULC can be classified from semantics alone and that fusing semantics with imagery (Semantic Boosting) improved classification, with significantly higher LULC accuracies. Some LULC classes are better predicted using only semantics, others with just imagery, and, importantly, much of the improvement was due to the ability to separate similar land use classes. A number of key considerations are discussed.

27 pages, 14641 KiB  
Article
A Quantitative Validation of Multi-Modal Image Fusion and Segmentation for Object Detection and Tracking
by Nicholas LaHaye, Michael J. Garay, Brian D. Bue, Hesham El-Askary and Erik Linstead
Remote Sens. 2021, 13(12), 2364; https://doi.org/10.3390/rs13122364 - 17 Jun 2021
Cited by 2 | Viewed by 2627
Abstract
In previous work, we showed the efficacy of using Deep Belief Networks, paired with clustering, to identify distinct classes of objects within remotely sensed data via cluster analysis and qualitative comparison of the output with reference data. In this paper, we quantitatively validate the methodology against datasets currently generated and used within the remote sensing community, and show the capabilities and benefits of the data fusion methodologies used. The experiments map the output of our unsupervised fusion and segmentation methodology to various labeled datasets at different levels of global coverage and granularity, in order to test our models' ability to represent structure at finer and broader scales using many different kinds of instrumentation that can be fused when applicable. In all cases tested, our models show a strong ability to segment the objects within input scenes, use multiple fused datasets where appropriate to improve results, and, at times, outperform the pre-existing datasets. This success will allow the methodology to be used in concrete use cases and to become the basis for future dynamic object tracking across datasets from various remote sensing instruments.

Other


17 pages, 6286 KiB  
Technical Note
Multi-Sensor Fusion of SDGSAT-1 Thermal Infrared and Multispectral Images
by Lintong Qi, Zhuoyue Hu, Xiaoxuan Zhou, Xinyue Ni and Fansheng Chen
Remote Sens. 2022, 14(23), 6159; https://doi.org/10.3390/rs14236159 - 5 Dec 2022
Cited by 5 | Viewed by 2761
Abstract
Thermal infrared imagery plays an important role in a variety of fields, such as surface temperature inversion and urban heat island effect analysis, but its spatial resolution has severely restricted further applications. Data fusion combines data from multiple sensors, and the fused information often yields better results than any single sensor alone. Since multi-resolution analysis is considered an effective method of image fusion, we propose an MTF-GLP-TAM model to combine the thermal infrared (30 m) and multispectral (10 m) information of SDGSAT-1. Firstly, the multispectral bands most relevant to each thermal infrared band are found. Secondly, to obtain better performance, the high-resolution multispectral bands are histogram-matched to each thermal infrared band. Finally, the spatial details of the multispectral bands are injected into the thermal infrared bands with an MTF Gaussian filter and an additive injection model. Despite the lack of spectral overlap between the thermal infrared and multispectral bands, the fused image improves spatial resolution while maintaining the thermal infrared spectral properties, as shown by subjective and objective experimental analyses.
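The histogram-matching and detail-injection steps described above can be sketched as follows. This is a simplified illustration, not the paper's implementation: the Gaussian sigma is a placeholder (a true MTF-GLP filter is derived from the sensor's measured MTF and the resolution ratio), and no injection-gain model is applied.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def histogram_match(src, ref):
    # Exact histogram matching: give src the sorted pixel values of ref,
    # aligning the multispectral band radiometrically with the thermal band.
    out = np.empty(src.size)
    out[np.argsort(src, axis=None)] = np.sort(ref, axis=None)
    return out.reshape(src.shape)

def mtf_glp_fuse(tir_lr, ms_hr, ratio=3, mtf_sigma=1.5):
    tir_up = zoom(tir_lr, ratio, order=3)          # upsample 30 m -> 10 m
    ms = histogram_match(ms_hr, tir_up)            # radiometric alignment
    detail = ms - gaussian_filter(ms, mtf_sigma)   # MTF-shaped high-pass
    return tir_up + detail                         # additive injection
```

The fused product keeps the thermal band's low-frequency radiometry (from tir_up) while borrowing high-frequency spatial detail from the multispectral band.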
