Multispectral Image Acquisition, Processing and Analysis

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (30 June 2019) | Viewed by 56330

Special Issue Editors


Guest Editor
Institute of Electronics and Telecommunications IETR UMR CNRS 6164, University of Rennes, 22305 Lannion, France
Interests: blind estimation of degradation characteristics (noise, PSF); blind restoration of multicomponent images; multimodal image correction; multicomponent image compression; multi-channel adaptive processing of signals and images; unsupervised machine learning and deep learning; multi-mode remote sensing data processing; remote sensing

Special Issue Information

Dear Colleagues,

Thanks to continual advances in lightweight, less expensive multispectral sensors and in remote sensing platform technology in recent years, end-users are provided with a multitude of timely observational capabilities for better sensing and monitoring of the Earth's surface.

To benefit from the full potential of these ever-advancing systems in a flexible and smart way across many applied fields, we need to continue improving our analysis and processing capabilities accordingly. Joint efforts toward fully automated, easy-to-use and efficient systems are a key direction for facilitating a mature, operational use of remote sensing.

This Special Issue is thus intended to cover the latest advances in the following primary topics of interest (but is not limited to them) related to Multispectral Image Acquisition, Processing and Analysis:

  • State-of-the-art and emerging multispectral technologies, including new platforms (satellite, aerial, Unmanned Aerial Vehicles) and sensors with:
    • spatial, spectral, temporal sensing abilities
    • georeferencing and navigation abilities
    • cooperative sensing
  • Advanced multispectral image/data analysis and processing:
    • lossless/lossy compression, denoising,
    • geometrical processing, registration, georeferencing,
    • feature extraction, classification, object recognition, change detection, domain adaptation
  • Multisource data fusion
    • optical-radar fusion, pan-sharpening
    • field sensing
    • crowd sensing

A wide spectrum of recent and emerging applications of Multispectral Image Acquisition, Processing and Analysis is also targeted, including biodiversity assessment; vegetation and environmental monitoring (identification of diversity in grassland species, invasive plants, biomass estimation, wetlands); precision agriculture in agricultural ecosystems and crop management; water resource and quality management in nearshore coastal waters (mapping near-surface water constituents, benthic habitats) and inland waters (analysis and surveying of rivers and lakes); sustainable forestry and agroforestry (forest preservation and mapping of forest species, wildfire detection); mapping of archaeological areas; urban development and management; and hazard monitoring.

Dr. Benoit Vozel
Prof. Vladimir Lukin
Dr. Yakoub Bazi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Imaging sensors and platforms
  • Cooperative sensing
  • Multispectral data analysis
  • Multispectral data processing
  • Multisource data fusion
  • Deep learning strategies

Published Papers (12 papers)

Editorial

4 pages, 158 KiB  
Editorial
Editorial to Special Issue “Multispectral Image Acquisition, Processing, and Analysis”
by Benoit Vozel, Vladimir Lukin and Yakoub Bazi
Remote Sens. 2019, 11(19), 2310; https://doi.org/10.3390/rs11192310 - 04 Oct 2019
Cited by 1 | Viewed by 2322
Abstract
This Special Issue was announced in March 2018 [...]
(This article belongs to the Special Issue Multispectral Image Acquisition, Processing and Analysis)

Research

24 pages, 35367 KiB  
Article
Spectral Super-Resolution with Optimized Bands
by Utsav B. Gewali, Sildomar T. Monteiro and Eli Saber
Remote Sens. 2019, 11(14), 1648; https://doi.org/10.3390/rs11141648 - 11 Jul 2019
Cited by 14 | Viewed by 5039
Abstract
Hyperspectral (HS) sensors sample the reflectance spectrum at very high resolution, which allows us to examine material properties in very fine detail. However, their widespread adoption has been hindered because they are very expensive. Reflectance spectra of real materials are high-dimensional but sparse signals. By utilizing prior information about the statistics of real HS spectra, many previous studies have reconstructed HS spectra from multispectral (MS) signals (which can be obtained from cheaper, lower spectral resolution sensors). However, most of these techniques assume that the MS bands are known a priori and do not optimize the MS bands to produce more accurate reconstructions. In this paper, we propose a new end-to-end fully convolutional residual neural network architecture that simultaneously learns both the MS bands and the transformation to reconstruct HS spectra from MS signals by analyzing a large quantity of HS data. The learned bands can be implemented in hardware to obtain an MS sensor that collects the data best suited to reconstructing HS spectra using the learned transformation. Using a diverse set of real-world datasets, we show how the proposed approach of optimizing MS bands along with the transformation can drastically increase the reconstruction accuracy. Additionally, we also investigate the prospects of using reconstructed HS spectra for land cover classification.
(This article belongs to the Special Issue Multispectral Image Acquisition, Processing and Analysis)
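
To make the idea of jointly optimizing MS bands and the MS-to-HS mapping more concrete, here is a minimal PyTorch sketch (not the authors' fully convolutional residual architecture): the sensor is modeled as a learnable non-negative band-response matrix, followed by a small network that maps the simulated MS measurement back to the HS spectrum. All layer sizes, names and the softmax band parameterization are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BandOptimizedSR(nn.Module):
    """Toy joint model: learnable MS band responses + MS-to-HS reconstruction net."""
    def __init__(self, n_hs_bands=200, n_ms_bands=8, hidden=256):
        super().__init__()
        # Softmax over the HS axis keeps each simulated MS band response
        # non-negative and normalized to unit area.
        self.band_logits = nn.Parameter(torch.randn(n_ms_bands, n_hs_bands))
        self.net = nn.Sequential(
            nn.Linear(n_ms_bands, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_hs_bands),
        )

    def forward(self, hs_spectra):                      # (batch, n_hs_bands)
        responses = F.softmax(self.band_logits, dim=1)  # (n_ms_bands, n_hs_bands)
        ms = hs_spectra @ responses.t()                 # simulated MS measurement
        return self.net(ms)                             # reconstructed HS spectrum

model = BandOptimizedSR()
hs = torch.rand(16, 200)                                # fake reflectance spectra in [0, 1]
loss = F.mse_loss(model(hs), hs)                        # learn to reproduce the HS input
loss.backward()
```

Training such a model end-to-end on a library of HS spectra optimizes the band responses and the reconstruction mapping jointly, which is the core idea summarized in the abstract.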

19 pages, 13147 KiB  
Article
A Local Feature Descriptor Based on Oriented Structure Maps with Guided Filtering for Multispectral Remote Sensing Image Matching
by Tao Ma, Jie Ma and Kun Yu
Remote Sens. 2019, 11(8), 951; https://doi.org/10.3390/rs11080951 - 20 Apr 2019
Cited by 19 | Viewed by 3389
Abstract
Multispectral image matching plays a very important role in remote sensing image processing and can be applied to register the complementary information captured by different sensors. Due to the nonlinear intensity differences in multispectral images, many classic descriptors designed for images of the same spectrum do not work well. To cope with this problem, this paper proposes a new local feature descriptor termed histogram of oriented structure maps (HOSM) for multispectral image matching tasks. The proposed method consists of three steps. First, we propose a new method based on local contrast to construct structure guidance images from the multispectral images by transferring the significant contours from the source images to the results. Second, we calculate oriented structure maps with guided image filtering. In detail, we first construct edge maps with progressive Sobel filters to extract the common structure characteristics from the multispectral images, and then we compute the oriented structure maps by performing guided filtering on the edge maps with the structure guidance images constructed in the first step. Finally, we build the HOSM descriptor by calculating the histogram of oriented structure maps in a local region around each interest point and normalizing the feature vector. The proposed HOSM descriptor was evaluated on three commonly used datasets and compared with several state-of-the-art methods. The experimental results demonstrate that the HOSM descriptor is robust to the nonlinear intensity differences in multispectral images and outperforms other methods.
(This article belongs to the Special Issue Multispectral Image Acquisition, Processing and Analysis)
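
The following NumPy/SciPy sketch illustrates the general flavor of an oriented-structure-map descriptor: a Sobel edge map is smoothed by a guided filter steered by a guidance image, and a normalized orientation histogram is collected around one point. The local-contrast guidance construction and the progressive Sobel filtering of the paper are omitted, and all parameter values and function names are assumptions.

```python
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-3):
    """Edge-preserving smoothing of `src` steered by `guide` (He et al.-style)."""
    size = 2 * radius + 1
    mean = lambda x: uniform_filter(x, size)
    m_g, m_s = mean(guide), mean(src)
    a = (mean(guide * src) - m_g * m_s) / (mean(guide * guide) - m_g * m_g + eps)
    b = m_s - a * m_g
    return mean(a) * guide + mean(b)

def hosm_like_descriptor(img, guide, patch=32, n_bins=8):
    """Simplified HOSM-style descriptor around the image centre:
    Sobel edge map -> guided filtering -> orientation histogram."""
    edge = np.hypot(sobel(img, axis=1), sobel(img, axis=0))
    structure = guided_filter(guide, edge)              # "oriented structure map"
    ori = np.arctan2(sobel(structure, axis=0), sobel(structure, axis=1)) % np.pi
    cy, cx = img.shape[0] // 2, img.shape[1] // 2       # one interest point only
    win = ori[cy - patch // 2:cy + patch // 2, cx - patch // 2:cx + patch // 2]
    hist, _ = np.histogram(win, bins=n_bins, range=(0.0, np.pi))
    return hist / (np.linalg.norm(hist) + 1e-12)

img = np.random.rand(128, 128)      # toy band from one spectral channel
guide = np.random.rand(128, 128)    # toy structure guidance image
descriptor = hosm_like_descriptor(img, guide)
```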

20 pages, 5755 KiB  
Article
Pansharpening Using Guided Filtering to Improve the Spatial Clarity of VHR Satellite Imagery
by Jaewan Choi, Honglyun Park and Doochun Seo
Remote Sens. 2019, 11(6), 633; https://doi.org/10.3390/rs11060633 - 15 Mar 2019
Cited by 21 | Viewed by 5418
Abstract
Pansharpening algorithms are designed to enhance the spatial resolution of multispectral images using panchromatic images with high spatial resolutions. The panchromatic and multispectral images acquired from very high resolution (VHR) satellite sensors and used as input data in the pansharpening process are characterized by spatial dissimilarities due to differences in their spectral/spatial characteristics and time lags between the panchromatic and multispectral sensors. In this manuscript, a new pansharpening framework is proposed to improve the spatial clarity of VHR satellite imagery. This algorithm aims to remove the spatial dissimilarity between panchromatic and multispectral images using guided filtering (GF) and to generate optimal local injection gains for pansharpening. First, we generate optimal multispectral images with spatial characteristics similar to those of the panchromatic images using GF. Then, multiresolution analysis (MRA)-based pansharpening is applied using normalized difference vegetation index (NDVI)-based optimal injection gains and spatial details obtained through GF. The algorithm is applied to Korea multipurpose satellite (KOMPSAT)-3/3A sensor data, and the experimental results show that the pansharpened images obtained with the proposed algorithm exhibit superior spatial quality and preserve spectral information better than those based on existing algorithms.
(This article belongs to the Special Issue Multispectral Image Acquisition, Processing and Analysis)
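
As a rough illustration of MRA-style detail injection (using a Gaussian low-pass and one global gain per band instead of the paper's guided filtering and NDVI-based local gains), a minimal sketch might look as follows; band counts, sigma and the gain formula are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def mra_pansharpen(ms, pan, ratio=4, sigma=2.0):
    """Minimal MRA-style pansharpening: inject high-pass PAN details into each
    upsampled MS band with a single global gain per band."""
    ms_up = np.stack([zoom(band, ratio, order=1) for band in ms])   # (B, H, W)
    pan_low = gaussian_filter(pan, sigma)
    details = pan - pan_low                                         # spatial details to inject
    fused = np.empty_like(ms_up)
    for k, band in enumerate(ms_up):
        # Gain from how strongly this band covaries with the low-pass PAN image.
        gain = np.cov(band.ravel(), pan_low.ravel())[0, 1] / (pan_low.var() + 1e-12)
        fused[k] = band + gain * details
    return fused

ms = np.random.rand(4, 64, 64)     # toy 4-band MS image
pan = np.random.rand(256, 256)     # toy PAN image at 4x the MS resolution
sharpened = mra_pansharpen(ms, pan)
```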

16 pages, 8522 KiB  
Article
Enhancement of Component Images of Multispectral Data by Denoising with Reference
by Sergey Abramov, Mikhail Uss, Vladimir Lukin, Benoit Vozel, Kacem Chehdi and Karen Egiazarian
Remote Sens. 2019, 11(6), 611; https://doi.org/10.3390/rs11060611 - 13 Mar 2019
Cited by 6 | Viewed by 3312
Abstract
Multispectral remote sensing data may contain component images that are heavily corrupted by noise, and a pre-filtering (denoising) procedure is often applied to enhance these component images. To do this, one can use reference images, i.e., component images of relatively high quality that are similar to the image subject to pre-filtering. Here, we study the following problems: how to select component images that can be used as references (e.g., for Sentinel multispectral remote sensing data) and how to perform the actual denoising. We demonstrate that component images of the same resolution as well as component images of a better resolution can be used as references. To provide high denoising efficiency, reference images have to be transformed using linear or nonlinear transformations. This paper proposes a practical approach to doing so. Denoising examples for test and real-life images demonstrate the high efficiency of the proposed approach.
(This article belongs to the Special Issue Multispectral Image Acquisition, Processing and Analysis)
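
A toy sketch of the reference-based idea, under strong simplifying assumptions: the reference band is mapped to the noisy band with a global linear fit, and a block-wise Wiener-style blend decides how much of the noisy residual to keep. The paper's actual transformations and filter are considerably more sophisticated, and all names and parameters below are illustrative.

```python
import numpy as np

def denoise_with_reference(noisy, reference, noise_var, block=8):
    """Toy reference-based denoising: predict the noisy band from the reference
    band with a global linear fit, then blend prediction and data block-wise."""
    slope, intercept = np.polyfit(reference.ravel(), noisy.ravel(), deg=1)
    pred = slope * reference + intercept          # structural prediction from the reference
    out = np.empty_like(noisy)
    h, w = noisy.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            n = noisy[i:i + block, j:j + block]
            d = n - pred[i:i + block, j:j + block]
            sig = max(d.var() - noise_var, 0.0)   # signal power not explained by the reference
            k = sig / (sig + noise_var + 1e-12)   # Wiener-style shrinkage of the residual
            out[i:i + block, j:j + block] = pred[i:i + block, j:j + block] + k * d
    return out

clean = np.random.rand(64, 64)
reference = clean + 0.02 * np.random.randn(64, 64)     # similar, higher-quality band
noisy = clean + 0.2 * np.random.randn(64, 64)          # heavily corrupted band
denoised = denoise_with_reference(noisy, reference, noise_var=0.2 ** 2)
```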

18 pages, 4651 KiB  
Article
Fusion of Multispectral and Panchromatic Images via Spatial Weighted Neighbor Embedding
by Kai Zhang, Feng Zhang and Shuyuan Yang
Remote Sens. 2019, 11(5), 557; https://doi.org/10.3390/rs11050557 - 07 Mar 2019
Cited by 15 | Viewed by 4546
Abstract
Fusing a panchromatic (PAN) image with low spatial-resolution multispectral (LR MS) images is an effective technique for generating high spatial-resolution MS (HR MS) images. Several image-fusion methods inspired by neighbor embedding (NE) have been proposed and produce competitive results. These methods generally adopt the Euclidean distance to determine the neighbors. However, a smaller Euclidean distance does not necessarily imply greater similarity in spatial structure. In this paper, we propose a spatial weighted neighbor embedding (SWNE) approach for PAN and MS image fusion, which exploits manifold structures in the observed LR MS images that are similar to those of the HR MS images. In SWNE, the spatial neighbors of each LR patch are found first. Second, the weights of these neighbors are estimated by the alternating direction method of multipliers (ADMM), in which the neighbors and their weights are determined simultaneously. Finally, the HR patches are reconstructed as the weighted sum of the HR patches corresponding to the LR neighbors. Due to the introduction of spatial structures in the objective function, outlier patches can be eliminated effectively by ADMM. Compared with other NE-based methods, more reasonable neighbor patches and weights are estimated simultaneously. Experiments are conducted on datasets collected by the QuickBird and GeoEye-1 satellites to validate the effectiveness of SWNE, and the results demonstrate its better performance in spatial and spectral information preservation.
(This article belongs to the Special Issue Multispectral Image Acquisition, Processing and Analysis)
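
The sketch below shows a plain neighbor-embedding reconstruction step for one patch, with non-negative least squares standing in for the paper's spatially weighted ADMM estimation; dictionary sizes, patch sizes, the normalization and all function names are assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def ne_reconstruct_patch(lr_patch, lr_dict, hr_dict, k=5):
    """Plain neighbor-embedding step: k nearest LR dictionary patches, non-negative
    reconstruction weights, and transfer of those weights to the paired HR patches."""
    x = lr_patch.ravel()
    idx = np.argsort(np.linalg.norm(lr_dict - x, axis=1))[:k]   # Euclidean neighbors
    w, _ = nnls(lr_dict[idx].T, x)                              # non-negative weights
    w = w / (w.sum() + 1e-12)                                   # make weights sum to one
    return (w[:, None] * hr_dict[idx]).sum(axis=0)              # weighted HR combination

lr_dict = np.random.rand(500, 16)       # 500 flattened 4x4 LR training patches
hr_dict = np.random.rand(500, 256)      # paired flattened 16x16 HR training patches
hr_patch = ne_reconstruct_patch(np.random.rand(4, 4), lr_dict, hr_dict)
```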

16 pages, 1610 KiB  
Article
Hyperspectral Image Classification with Multi-Scale Feature Extraction
by Bing Tu, Nanying Li, Leyuan Fang, Danbing He and Pedram Ghamisi
Remote Sens. 2019, 11(5), 534; https://doi.org/10.3390/rs11050534 - 05 Mar 2019
Cited by 29 | Viewed by 4420
Abstract
Spectral features alone cannot effectively reflect the differences among ground objects or distinguish their boundaries in hyperspectral image (HSI) classification. Multi-scale feature extraction can solve this problem and improve the accuracy of HSI classification. The Gaussian pyramid can effectively decompose an HSI into multi-scale structures and efficiently extract features at different scales by stepwise filtering and downsampling. Therefore, this paper proposes a Gaussian-pyramid-based multi-scale feature extraction (MSFE) classification method for HSI. First, the HSI is decomposed into several Gaussian pyramid layers to extract multi-scale features. Second, we construct probability maps in each layer of the Gaussian pyramid and employ edge-preserving filtering (EPF) algorithms to further optimize the details. Finally, the final classification map is acquired by majority voting. Compared with other spectral-spatial classification methods, the proposed method can not only extract features at different scales but also better preserve detailed structures and the edge regions of the image. Experiments performed on three real hyperspectral datasets show that the proposed method can achieve competitive classification accuracy.
(This article belongs to the Special Issue Multispectral Image Acquisition, Processing and Analysis)
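
A minimal sketch of the multi-scale skeleton (Gaussian pyramid plus majority voting), with a trivial threshold standing in for per-scale classification and no edge-preserving filtering; all sizes, parameters and function names are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def gaussian_pyramid(cube, levels=3, sigma=1.0):
    """Gaussian pyramid of an HSI cube (H, W, B): smooth spatially, downsample by 2."""
    pyr = [cube]
    for _ in range(1, levels):
        smoothed = gaussian_filter(pyr[-1], sigma=(sigma, sigma, 0))
        pyr.append(smoothed[::2, ::2, :])
    return pyr

def majority_vote(label_maps):
    """Fuse per-scale label maps (already resized to full size) by majority vote."""
    stack = np.stack(label_maps)                    # (levels, H, W)
    n_classes = int(stack.max()) + 1
    votes = np.stack([(stack == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)

cube = np.random.rand(64, 64, 30)                   # toy HSI cube
pyramid = gaussian_pyramid(cube)
# Stand-in for per-scale classification: threshold one band, resize back to full size.
maps = [zoom((level[..., 0] > 0.5).astype(int), cube.shape[0] / level.shape[0], order=0)
        for level in pyramid]
classification = majority_vote(maps)                # (64, 64) fused label map
```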

16 pages, 3678 KiB  
Article
A Multiscale Hierarchical Model for Sparse Hyperspectral Unmixing
by Jinlin Zou and Jinhui Lan
Remote Sens. 2019, 11(5), 500; https://doi.org/10.3390/rs11050500 - 01 Mar 2019
Cited by 6 | Viewed by 2708
Abstract
Due to the complex background and low spatial resolution of hyperspectral sensors, the observed ground reflectance is often mixed at the pixel level. Hyperspectral unmixing (HU) is a hot issue in the remote sensing field because it can decompose the observed mixed pixel reflectance. Traditional sparse hyperspectral unmixing often leads to an ill-posed inverse problem, which can be circumvented by spatial regularization approaches. However, their adoption has come at the expense of a massive increase in computational cost. In this paper, a novel multiscale hierarchical model for sparse hyperspectral unmixing is proposed. The paper decomposes HU into two domain problems: one in an approximation-scale representation obtained by resampling, and the other in the original domain. The use of multiscale spatial resampling for HU leads to an effective strategy that deals with spectral variability and computational cost. Furthermore, a hierarchical strategy with a sparse abundance representation in each layer aims to obtain the globally optimal solution. Both simulated and real hyperspectral data experiments show that the proposed method outperforms previous methods in endmember extraction and abundance fraction estimation, and promotes piecewise homogeneity in the estimated abundance without compromising sharp discontinuities among neighboring pixels. Additionally, compared with total variation regularization, the proposed method effectively reduces the computational time.
(This article belongs to the Special Issue Multispectral Image Acquisition, Processing and Analysis)
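
For orientation, the baseline problem that the paper accelerates can be sketched as per-pixel non-negative least squares against an endmember library; this omits the sparsity prior, the multiscale resampling and the hierarchical layers, and the library here is random toy data.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_nnls(cube, library):
    """Per-pixel abundances by non-negative least squares against a spectral library."""
    h, w, bands = cube.shape
    abundances = np.zeros((h, w, library.shape[1]))
    for i in range(h):
        for j in range(w):
            a, _ = nnls(library, cube[i, j])            # non-negativity constraint
            abundances[i, j] = a / (a.sum() + 1e-12)    # approximate sum-to-one
    return abundances

library = np.random.rand(50, 6)        # toy library: 50 bands x 6 candidate endmembers
cube = np.random.rand(16, 16, 50)      # toy hyperspectral cube
abund = unmix_nnls(cube, library)
```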

24 pages, 3943 KiB  
Article
FCM Approach of Similarity and Dissimilarity Measures with α-Cut for Handling Mixed Pixels
by Sayan Mukhopadhaya, Anil Kumar and Alfred Stein
Remote Sens. 2018, 10(11), 1707; https://doi.org/10.3390/rs10111707 - 29 Oct 2018
Cited by 4 | Viewed by 3829
Abstract
In this paper, the fuzzy c-means (FCM) classifier is studied with 12 similarity and dissimilarity measures: Manhattan, chessboard, Bray–Curtis, Canberra, cosine, correlation, mean absolute difference, median absolute difference, Euclidean, Mahalanobis, diagonal Mahalanobis and normalised squared Euclidean distances. Both single and composite modes were used with a varying weight constant (m*) and at different α-cuts. The two best single measures were combined to study the effect of composite measures on the datasets used. An image-to-image accuracy check was conducted to assess the accuracy of the classified images. A fuzzy error matrix (FERM) was applied to measure the accuracy assessment outcomes for a Landsat-8 dataset with respect to the Formosat-2 dataset. To conclude, the FCM classifier with the cosine measure performed better than with the conventional Euclidean measure. However, because of the inability of the FCM classifier to handle noise properly, the classification accuracy was around 75%.
(This article belongs to the Special Issue Multispectral Image Acquisition, Processing and Analysis)
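
A compact fuzzy c-means sketch with a pluggable dissimilarity measure shows how such a comparison can be set up; only three of the twelve measures are included, and the weight constant, iteration count and initialization are assumptions.

```python
import numpy as np

def fcm(X, c=3, m=2.0, dist="euclidean", iters=50, seed=0):
    """Fuzzy c-means with a pluggable dissimilarity measure (Euclidean, Manhattan, cosine)."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                                   # memberships sum to one per pixel
    for _ in range(iters):
        Um = U ** m
        centers = Um @ X / Um.sum(axis=1, keepdims=True)
        if dist == "euclidean":
            D = np.linalg.norm(X[None] - centers[:, None], axis=2)
        elif dist == "manhattan":
            D = np.abs(X[None] - centers[:, None]).sum(axis=2)
        else:                                            # cosine dissimilarity
            num = centers @ X.T
            den = np.linalg.norm(centers, axis=1)[:, None] * np.linalg.norm(X, axis=1)[None]
            D = 1.0 - num / (den + 1e-12)
        D = np.fmax(D, 1e-12)
        U = D ** (-2.0 / (m - 1)) / np.sum(D ** (-2.0 / (m - 1)), axis=0)
    return U, centers

X = np.random.rand(200, 4)              # toy pixels with 4 spectral bands
U, centers = fcm(X, c=3, dist="cosine")
labels = U.argmax(axis=0)               # hard labels from fuzzy memberships
```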

21 pages, 14083 KiB  
Article
Diffuse Skylight as a Surrogate for Shadow Detection in High-Resolution Imagery Acquired Under Clear Sky Conditions
by Mark Cameron and Lalit Kumar
Remote Sens. 2018, 10(8), 1185; https://doi.org/10.3390/rs10081185 - 27 Jul 2018
Cited by 12 | Viewed by 5826
Abstract
An alternative technique for shadow detection and abundance estimation is presented for high spatial resolution imagery acquired under clear sky conditions from airborne/spaceborne sensors. The method, termed the Scattering Index (SI), uses Rayleigh scattering principles to create a diffuse skylight vector as a shadow reference. Using linear algebra, the proportion of diffuse skylight in each image pixel provides a per-pixel measure of shadow extent and abundance. We performed a comparative evaluation against two other methods: first valley detection thresholding (extent) and physics-based unmixing (extent and abundance). Overall accuracy and F-score measures are used to evaluate shadow extent on both WorldView-3 and ADS40 images captured over a common scene. Image subsets are selected to capture objects well documented as shadow detection anomalies, e.g., dark water bodies. Results showed improved accuracies and F-scores for shadow extent, and qualitative evaluation of abundance shows that the method is invariant to scene and sensor characteristics. SI avoids shadow misclassifications by avoiding the use of pixel intensity and the associated limitations of binary thresholding. The method negates the need for complex sun-object-sensor corrections, is simple to apply, and is invariant to the exponential increase in scene complexity associated with higher-resolution imagery.
(This article belongs to the Special Issue Multispectral Image Acquisition, Processing and Analysis)
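
One plausible reading of the Scattering Index idea, sketched below under loose assumptions: build a Rayleigh (lambda^-4) diffuse-skylight spectrum from the band centre wavelengths and estimate, per pixel, how much of the signal that spectrum explains relative to a sunlit reference. The actual SI formulation in the paper may differ in important details; all names, wavelengths and the sunlit reference choice are assumptions.

```python
import numpy as np

def scattering_index(img, wavelengths_nm, sunlit_ref=None):
    """Per-pixel fraction of the signal explained by a Rayleigh diffuse-skylight
    spectrum (proportional to lambda^-4) relative to a sunlit reference spectrum."""
    h, w, b = img.shape
    sky = np.asarray(wavelengths_nm, dtype=float) ** -4.0
    sky /= sky.sum()                                          # diffuse skylight vector
    if sunlit_ref is None:
        sunlit_ref = img.reshape(-1, b).mean(axis=0)          # scene-average "sunlit" spectrum
    A = np.stack([sky, sunlit_ref], axis=1)                   # (bands, 2) endmember matrix
    coeffs, *_ = np.linalg.lstsq(A, img.reshape(-1, b).T, rcond=None)
    frac_sky = coeffs[0] / (np.abs(coeffs).sum(axis=0) + 1e-12)
    return frac_sky.reshape(h, w)                             # larger values ~ more shadowed

img = np.random.rand(32, 32, 4)                               # toy 4-band image
si_map = scattering_index(img, wavelengths_nm=[480, 560, 660, 830])
```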

21 pages, 2667 KiB  
Article
A Cloud Detection Method for Landsat 8 Images Based on PCANet
by Yue Zi, Fengying Xie and Zhiguo Jiang
Remote Sens. 2018, 10(6), 877; https://doi.org/10.3390/rs10060877 - 05 Jun 2018
Cited by 81 | Viewed by 7716
Abstract
Cloud detection is often a necessary process for remote sensing images, because clouds are widespread in optical remote sensing images and cause difficulties for many remote sensing activities, such as land cover monitoring, environmental monitoring and target recognition. In this paper, a novel cloud detection method is proposed for multispectral remote sensing images from Landsat 8. First, the color composite image of Bands 6, 3 and 2 is divided into superpixel sub-regions through the Simple Linear Iterative Clustering (SLIC) method. Second, a two-step superpixel classification strategy is used to predict each superpixel as cloud or non-cloud. Third, a fully connected Conditional Random Field (CRF) model is used to refine the cloud detection result, and accurate cloud borders are obtained. In the two-step superpixel classification strategy, the bright and thick cloud superpixels, as well as the obvious non-cloud superpixels, are first separated from potential cloud superpixels through a threshold function, which greatly speeds up the detection. The designed double-branch PCA Network (PCANet) architecture then extracts the high-level information of clouds and, combined with a Support Vector Machine (SVM) classifier, the potential superpixels are correctly classified. Visual and quantitative comparison experiments are conducted on the Landsat 8 Cloud Cover Assessment (L8 CCA) dataset; the results indicate that the proposed method can accurately detect clouds under different conditions and is more effective and robust than the compared state-of-the-art methods.
(This article belongs to the Special Issue Multispectral Image Acquisition, Processing and Analysis)
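
The first, threshold-based stage of the two-step superpixel strategy can be sketched as follows; the SLIC parameters and brightness thresholds are illustrative, and the double-branch PCANet + SVM classification of the remaining superpixels is omitted.

```python
import numpy as np
from skimage.segmentation import slic

def two_step_precheck(composite, t_cloud=0.7, t_clear=0.2, n_segments=400):
    """Stage one of a two-step superpixel strategy: label obviously bright superpixels
    as cloud and obviously dark ones as clear; the rest would go to a learned classifier."""
    labels = slic(composite, n_segments=n_segments, compactness=10)
    brightness = composite.mean(axis=2)
    cloud, clear, uncertain = [], [], []
    for sp in np.unique(labels):
        mean_b = brightness[labels == sp].mean()
        if mean_b > t_cloud:
            cloud.append(sp)
        elif mean_b < t_clear:
            clear.append(sp)
        else:
            uncertain.append(sp)         # candidates for the PCANet + SVM stage (omitted)
    return labels, cloud, clear, uncertain

composite = np.random.rand(128, 128, 3)  # toy bands 6/3/2 composite scaled to [0, 1]
labels, cloud_sp, clear_sp, uncertain_sp = two_step_precheck(composite)
```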

Other

7 pages, 37625 KiB  
Technical Note
Thermal Airborne Optical Sectioning
by Indrajit Kurmi, David C. Schedl and Oliver Bimber
Remote Sens. 2019, 11(14), 1668; https://doi.org/10.3390/rs11141668 - 13 Jul 2019
Cited by 14 | Viewed by 4460
Abstract
We apply a multi-spectral (RGB and thermal) camera drone for synthetic aperture imaging to computationally remove occluding vegetation and reveal hidden objects, as required in archeology, search-and-rescue, animal inspection, and border control applications. The radiated heat signal of strongly occluded targets, such as human bodies hidden in dense shrub, can be made visible by integrating multiple thermal recordings from slightly different perspectives, while being entirely invisible in RGB recordings or unidentifiable in single thermal images. We collect bits of heat radiation through the occluder volume over a wide synthetic aperture range and computationally combine them into a clear image. This requires precise estimation of the drone's position and orientation for each capturing pose, which is supported by applying computer vision algorithms to the high-resolution RGB images.
(This article belongs to the Special Issue Multispectral Image Acquisition, Processing and Analysis)
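
A toy version of the integration step, assuming the per-frame transforms into a common reference view are already known as homographies (in the paper they come from pose estimation on the RGB images): warp the thermal frames and average them, so that occluders blur out while persistent heat signals accumulate. Frame sizes and identity homographies below are placeholders.

```python
import numpy as np
import cv2

def integrate_thermal_frames(frames, homographies, out_shape):
    """Warp each thermal frame into a common reference view and average the stack,
    so that occluders blur out while persistent heat signals accumulate."""
    acc = np.zeros(out_shape, dtype=np.float64)
    weight = np.zeros(out_shape, dtype=np.float64)
    for frame, H in zip(frames, homographies):
        size = (out_shape[1], out_shape[0])                      # (width, height) for OpenCV
        acc += cv2.warpPerspective(frame.astype(np.float32), H, size)
        weight += cv2.warpPerspective(np.ones_like(frame, dtype=np.float32), H, size)
    return acc / np.maximum(weight, 1e-6)

frames = [np.random.rand(240, 320).astype(np.float32) for _ in range(8)]   # toy thermal stack
homographies = [np.eye(3) for _ in frames]                                 # assumed known transforms
integral_image = integrate_thermal_frames(frames, homographies, (240, 320))
```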
