
Pattern Recognition and Image Processing for Remote Sensing

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (29 February 2020) | Viewed by 38855

Special Issue Editors


Guest Editor
Department of Electrical and Computer Engineering, Mississippi State University, 406 Hardy Road, 216 Simrall Hall, Mississippi State, MS 39762, USA
Interests: Advanced Driver Assistance Systems (ADAS); scene understanding; sensor processing (Radar, LiDAR, camera, thermal); machine learning; digital image and signal processing

Guest Editor
Department of Electrical and Computer Engineering, Mississippi State University, Mississippi State, MS 39762, USA
Interests: signal processing and pattern recognition; automated target detection; image fusion; image information mining

Guest Editor
Department of Electrical and Computer Engineering, Mississippi State University, 406 Hardy Road, 216 Simrall Hall, Mississippi State, MS 39762, USA
Interests: machine learning; compressive sensing; computational imaging; radar and array signal processing; digital signal and image processing; remote sensing

Special Issue Information

Dear Colleagues,

There is a great need for pattern recognition and image processing in remote sensing. The field seeks to answer important questions such as: What is the land use in a certain area? What land cover classes make up a region? How much of a certain crop is planted in the southern part of a state? How many vehicles drive on this highway? The list is practically endless. The algorithms, methods, and procedures used to answer these questions rely heavily on pattern recognition and image processing, which is why we proposed this Special Issue.

This Special Issue is geared towards high-quality papers in the broad area of pattern recognition and image processing in remote sensing. Papers should have both a theoretical and an experimental component, thereby advancing both the state of the art and practical implementation.

Scope: All imaging modalities (hyperspectral, multispectral, synthetic aperture radar (SAR), LiDAR, radar, thermal, multitemporal, etc.) are welcome. Remote sensing platforms include unmanned aerial systems (UASs), airplanes, satellites, robots, undersea vehicles, and autonomous vehicles, as well as any situation in which the sensors are not in contact with the sensed environment.

Topics for this Special Issue include, but are not limited to, the following categories:

  • Classification
  • Pattern recognition
  • Object recognition and detection
  • Land cover/land use
  • Anomaly detection
  • Scene understanding
  • Multi-sensor processing and fusion
  • Deep learning and other data-driven methods
  • Machine learning techniques
  • Optimization techniques
  • Parameter estimation techniques
  • Systems or subsystems for use in robotics, UAS, or autonomous vehicle navigation and mapping
  • Uncertainty characterization
  • Probabilistic methods
  • Sparsity-based techniques

Dr. John Ball
Dr. Nicolas H. Younan
Dr. Ali C. Gurbuz
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Remote Sensing 
  • Pattern Recognition 
  • Image Processing 
  • Deep Learning 
  • LiDAR
  • Hyperspectral 
  • Synthetic Aperture Radar 
  • Fusion

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.


Published Papers (7 papers)


Research

19 pages, 10400 KiB  
Article
An Improved Boundary-Aware Perceptual Loss for Building Extraction from VHR Images
by Yan Zhang, Weihong Li, Weiguo Gong, Zixu Wang and Jingxi Sun
Remote Sens. 2020, 12(7), 1195; https://doi.org/10.3390/rs12071195 - 8 Apr 2020
Cited by 19 | Viewed by 4335
Abstract
With the development of deep learning technology, an enormous number of convolutional neural network (CNN) models have been proposed to address the challenging building extraction task from very high-resolution (VHR) remote sensing images. However, searching for better CNN architectures is time-consuming, and the robustness of a new CNN model cannot be guaranteed. In this paper, an improved boundary-aware perceptual (BP) loss is proposed to enhance the building extraction ability of CNN models. The proposed BP loss consists of a loss network and transfer loss functions. The boundary-aware perceptual loss is used in two stages. In the training stage, the loss network learns structural information by circularly transferring between the building mask and the corresponding building boundary. In the refining stage, the learned structural information is embedded into the building extraction models via the transfer loss functions, without additional parameters or postprocessing. We verify the effectiveness and efficiency of the proposed BP loss on both the challenging WHU aerial dataset and the INRIA dataset. Substantial performance improvements are observed with two representative CNN architectures, PSPNet and UNet, which are widely used for pixel-wise labelling tasks. With BP loss, UNet with ResNet101 achieves 90.78% and 76.62% IoU (intersection over union) scores on the WHU aerial dataset and the INRIA dataset, respectively, which are 1.47% and 1.04% higher than those obtained by training with the cross-entropy loss alone. Similar improvements (0.64% on the WHU aerial dataset and 1.69% on the INRIA dataset) are also observed with PSPNet, which strongly supports the robustness of the proposed BP loss. Full article
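The IoU scores quoted in this abstract are straightforward to reproduce for binary building masks. A minimal sketch (illustrative only, not the authors' code; the toy masks are made up):

```python
def mask_iou(pred, truth):
    """Intersection-over-union between two binary masks, given as flat
    sequences of 0/1 pixel labels."""
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union if union else 1.0

pred  = [1, 1, 0, 0, 1, 0]   # toy 'building' mask predicted by a model
truth = [1, 0, 0, 1, 1, 0]   # toy ground-truth mask
print(mask_iou(pred, truth))  # 0.5 -> 2 shared pixels out of 4 in the union
```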
(This article belongs to the Special Issue Pattern Recognition and Image Processing for Remote Sensing)

22 pages, 5568 KiB  
Article
Extended Feature Pyramid Network with Adaptive Scale Training Strategy and Anchors for Object Detection in Aerial Images
by Wei Guo, Weihong Li, Weiguo Gong and Jinkai Cui
Remote Sens. 2020, 12(5), 784; https://doi.org/10.3390/rs12050784 - 1 Mar 2020
Cited by 14 | Viewed by 5373
Abstract
Multi-scale object detection is a basic challenge in computer vision. Although many advanced methods based on convolutional neural networks have succeeded in natural images, progress in aerial images has been relatively slow, mainly due to the considerable scale variations of objects and the many densely distributed small objects. In this paper, considering that the semantic information of small objects may be weakened or even disappear in the deeper layers of a neural network, we propose a new detection framework called the Extended Feature Pyramid Network (EFPN) to strengthen the information extraction ability of the neural network. In the EFPN, we first design a multi-branched dilated bottleneck (MBDB) module in the lateral connections to capture more semantic information. Then, we devise an attention pathway for better locating objects. Finally, an augmented bottom-up pathway is added to make shallow-layer information easier to propagate and to further improve performance. Moreover, we present an adaptive scale training strategy to enable the network to better recognize multi-scale objects, along with a novel clustering method to obtain adaptive anchors and help the network better learn data features. Experiments on public aerial datasets indicate that the presented method obtains state-of-the-art performance. Full article
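The MBDB module relies on dilated convolutions to gather more semantic context. The paper's exact configuration is not given here, but the receptive-field arithmetic behind dilation is simple to illustrate (kernel size and rates below are assumptions for the sketch):

```python
def dilated_kernel_extent(k, d):
    """Spatial extent covered by a k x k kernel with dilation rate d."""
    return k + (k - 1) * (d - 1)

# Branches with different rates widen the receptive field without
# adding weights: a 3x3 kernel at rates 1, 2, 4, 8 spans 3, 5, 9, 17 pixels.
for d in (1, 2, 4, 8):
    print(d, dilated_kernel_extent(3, d))
```

This is why multi-branch dilated designs are a cheap way to cover both small, densely packed objects and much larger ones in the same feature map.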
(This article belongs to the Special Issue Pattern Recognition and Image Processing for Remote Sensing)

29 pages, 22756 KiB  
Article
Precise and Robust Ship Detection for High-Resolution SAR Imagery Based on HR-SDNet
by Shunjun Wei, Hao Su, Jing Ming, Chen Wang, Min Yan, Durga Kumar, Jun Shi and Xiaoling Zhang
Remote Sens. 2020, 12(1), 167; https://doi.org/10.3390/rs12010167 - 2 Jan 2020
Cited by 126 | Viewed by 6601
Abstract
Ship detection in high-resolution synthetic aperture radar (SAR) imagery is a challenging problem in complex environments, especially inshore and offshore scenes. Existing SAR ship detection methods mainly use low-resolution representations obtained by classification networks, or recover high-resolution representations from low-resolution ones. Because this representation learning is characterized by low resolution, and the resulting loss of resolution makes spatially accurate predictions difficult, these networks are not well suited to region-level ship detection. In this paper, a novel ship detection method based on a high-resolution ship detection network (HR-SDNet) for high-resolution SAR imagery is proposed. The HR-SDNet adopts a novel high-resolution feature pyramid network (HRFPN) to take full advantage of the feature maps of high-resolution and low-resolution convolutions for SAR image ship detection. In this scheme, the HRFPN connects high-to-low-resolution subnetworks in parallel and can maintain high resolution. Next, Soft Non-Maximum Suppression (Soft-NMS) is used to improve on standard NMS, thereby improving detection of densely packed ships. Then, we introduce the Microsoft Common Objects in Context (COCO) evaluation metrics, which provide not only the higher-quality average precision (AP) metric for more accurate bounding-box regression but also metrics for small, medium, and large targets, allowing the detection performance of our method to be evaluated precisely. Finally, the experimental results on the SAR ship detection dataset (SSDD) and TerraSAR-X high-resolution images reveal that (1) our approach based on the HRFPN has superior detection performance for both inshore and offshore scenes of high-resolution SAR imagery, achieving nearly 4.3% performance gains over the feature pyramid network (FPN) in inshore scenes, thus proving its effectiveness; (2) compared with existing algorithms, our approach is more accurate and robust for ship detection in high-resolution SAR imagery, especially inshore and offshore scenes; (3) with the Soft-NMS algorithm, our network performs better, achieving nearly 1% gains in terms of AP; (4) the COCO evaluation metrics are effective for SAR image ship detection; and (5) the chosen thresholds within a certain range have a significant impact on the robustness of ship detectors. Full article
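Soft-NMS, as referenced in this abstract, decays the scores of detections that overlap an accepted box instead of discarding them outright, which helps with densely packed ships. A minimal Gaussian-decay sketch (illustrative only, not the HR-SDNet implementation; the boxes and scores below are made up):

```python
import math

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def soft_nms(boxes, scores, sigma=0.5, thresh=0.001):
    """Gaussian Soft-NMS: decay the scores of boxes overlapping an
    accepted detection instead of discarding them outright."""
    scores = list(scores)
    remaining = list(range(len(boxes)))
    keep = []
    while remaining:
        i = max(remaining, key=lambda k: scores[k])
        if scores[i] < thresh:
            break
        keep.append(i)
        remaining.remove(i)
        for j in remaining:
            scores[j] *= math.exp(-iou(boxes[i], boxes[j]) ** 2 / sigma)
    return keep

# Two overlapping ships plus one isolated one: hard NMS would drop box 1,
# Soft-NMS keeps it with a decayed score.
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
print(soft_nms(boxes, [0.9, 0.8, 0.7]))  # [0, 2, 1]
```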
(This article belongs to the Special Issue Pattern Recognition and Image Processing for Remote Sensing)

37 pages, 20613 KiB  
Article
Depthwise Separable Convolution Neural Network for High-Speed SAR Ship Detection
by Tianwen Zhang, Xiaoling Zhang, Jun Shi and Shunjun Wei
Remote Sens. 2019, 11(21), 2483; https://doi.org/10.3390/rs11212483 - 24 Oct 2019
Cited by 166 | Viewed by 7650
Abstract
As an active microwave imaging sensor for high-resolution Earth observation, synthetic aperture radar (SAR) has been extensively applied in military, agriculture, geology, ecology, oceanography, and other fields, owing to its prominent advantage of all-weather, day-and-night operation. In the marine field especially, SAR can provide numerous high-quality services for fishery management, traffic control, sea-ice monitoring, marine environmental protection, etc. Among these, ship detection in SAR images has attracted more and more attention on account of the urgent requirements of maritime rescue and military strategy formulation. Most current research focuses on improving ship detection accuracy, while detection speed is frequently neglected, whether with traditional feature extraction methods or modern deep learning (DL) methods. However, high-speed SAR ship detection is of great practical value because it enables real-time maritime disaster rescue and emergency military planning. To address this problem, we propose a novel high-speed SAR ship detection approach based mainly on a depthwise separable convolution neural network (DS-CNN). In this approach, we integrate a multi-scale detection mechanism, a concatenation mechanism, and an anchor box mechanism to establish a brand-new lightweight network architecture for high-speed SAR ship detection. We use DS-CNN, which consists of a depthwise convolution (D-Conv2D) and a pointwise convolution (P-Conv2D), as a substitute for the conventional convolution neural network (C-CNN). In this way, the number of network parameters is markedly decreased, and the ship detection speed is dramatically improved. We experimented on an open SAR ship detection dataset (SSDD) to validate the correctness and feasibility of the proposed method. To verify the strong migration capacity of our method, we also carried out actual ship detection on a wide-region, large-size Sentinel-1 SAR image. Ultimately, on the same hardware platform with an NVIDIA RTX2080Ti GPU, the experimental results indicated that the ship detection speed of our proposed method is faster than that of other methods, while the detection accuracy is only slightly sacrificed compared with state-of-the-art object detectors. Our method has great application value in real-time maritime disaster rescue and emergency military planning. Full article
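The parameter saving that makes a depthwise separable convolution fast is easy to quantify. A back-of-the-envelope sketch (channel counts and kernel size are illustrative, not taken from the paper):

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def ds_conv_params(c_in, c_out, k):
    """Depthwise k x k (one filter per input channel) plus a 1 x 1
    pointwise convolution that mixes channels."""
    return k * k * c_in + c_in * c_out

std = conv_params(128, 256, 3)      # 294912 weights
ds = ds_conv_params(128, 256, 3)    # 1152 + 32768 = 33920 weights
print(round(std / ds, 1))           # 8.7 -> roughly 8.7x fewer parameters
```

The saving grows with kernel size and output channels, which is why depthwise separable designs trade only a small amount of accuracy for a large speed-up.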
(This article belongs to the Special Issue Pattern Recognition and Image Processing for Remote Sensing)

16 pages, 11313 KiB  
Article
Improvement of EPIC/DSCOVR Image Registration by Means of Automatic Coastline Detection
by Víctor Molina García, Sruthy Sasi, Dmitry S. Efremenko and Diego Loyola
Remote Sens. 2019, 11(15), 1747; https://doi.org/10.3390/rs11151747 - 25 Jul 2019
Cited by 4 | Viewed by 3938
Abstract
In this work, we address the image geolocation issue that is present in the imagery of EPIC/DSCOVR (Earth Polychromatic Imaging Camera/Deep Space Climate Observatory) Level 1B version 2. To solve it, we develop an algorithm that automatically computes a registration correction consisting of a motion (translation plus rotation) and a radial distortion. The correction parameters are retrieved for every image by means of a regularised non-linear optimisation process, in which the spatial distances between the theoretical and actual locations of chosen features are minimised. The actual features are found along the coastlines automatically by using computer vision techniques. The retrieved correction parameters show a behaviour that is related to the period of DSCOVR orbiting around the Lagrangian point L1. With this procedure, the EPIC coastlines are collocated with an accuracy of about 1.5 pixels, thus significantly improving the original registration of about 5 pixels from the imagery of EPIC/DSCOVR Level 1B version 2. Full article
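A generic motion-plus-radial-distortion correction of the kind described can be sketched as a point transform. The parameterisation below (a single k1 distortion coefficient about a centre point) is an assumption for illustration; the paper's exact model may differ:

```python
import math

def apply_correction(x, y, dx, dy, theta, k1, cx=0.0, cy=0.0):
    """Map an image point through a rigid motion (rotation by theta plus
    translation (dx, dy)), then a radial distortion r -> r * (1 + k1 * r^2)
    about the centre (cx, cy). Illustrative parameterisation only."""
    # rigid motion
    xr = math.cos(theta) * x - math.sin(theta) * y + dx
    yr = math.sin(theta) * x + math.cos(theta) * y + dy
    # radial distortion about the chosen centre
    u, v = xr - cx, yr - cy
    s = 1.0 + k1 * (u * u + v * v)
    return cx + u * s, cy + v * s

# Identity parameters leave a point unchanged:
print(apply_correction(1.0, 0.0, 0.0, 0.0, 0.0, 0.0))  # (1.0, 0.0)
```

In the paper's setting, such parameters would be fitted per image by minimising the distances between detected and theoretical coastline features.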
(This article belongs to the Special Issue Pattern Recognition and Image Processing for Remote Sensing)

23 pages, 9895 KiB  
Article
Combing Triple-Part Features of Convolutional Neural Networks for Scene Classification in Remote Sensing
by Hong Huang and Kejie Xu
Remote Sens. 2019, 11(14), 1687; https://doi.org/10.3390/rs11141687 - 17 Jul 2019
Cited by 57 | Viewed by 4633
Abstract
High spatial resolution remote sensing (HSRRS) images contain complex geometrical structures and spatial patterns, and thus HSRRS scene classification has become a significant challenge in the remote sensing community. In recent years, convolutional neural network (CNN)-based methods have attracted tremendous attention and obtained excellent performance in scene classification. However, traditional CNN-based methods focus on processing original red-green-blue (RGB) image-based features or CNN-based single-layer features to achieve the scene representation, and ignore the fact that texture images and individual CNN layers also contain discriminating information. To address these drawbacks, a CaffeNet-based method termed CTFCNN is proposed in this paper to effectively explore the discriminating ability of a pre-trained CNN. First, the pre-trained CNN model is employed as a feature extractor to obtain convolutional features from multiple layers, fully connected (FC) features, and local binary pattern (LBP)-based FC features. Then, a new improved bag-of-view-word (iBoVW) coding method is developed to represent the discriminating information from each convolutional layer. Finally, weighted concatenation is employed to combine the different features for classification. Experiments on the UC-Merced dataset and the Aerial Image Dataset (AID) demonstrate that the proposed CTFCNN method performs significantly better than some state-of-the-art methods, with overall accuracies reaching 98.44% and 94.91%, respectively. This indicates that the proposed framework can provide a discriminating description for HSRRS images. Full article
(This article belongs to the Special Issue Pattern Recognition and Image Processing for Remote Sensing)

19 pages, 12398 KiB  
Article
Ship Detection from Optical Remote Sensing Images Using Multi-Scale Analysis and Fourier HOG Descriptor
by Chao Dong, Jinghong Liu, Fang Xu and Chenglong Liu
Remote Sens. 2019, 11(13), 1529; https://doi.org/10.3390/rs11131529 - 28 Jun 2019
Cited by 44 | Viewed by 4710
Abstract
Automatic ship detection by Unmanned Airborne Vehicles (UAVs) and satellites is one of the fundamental challenges in maritime research due to the variable appearances of ships and complex sea backgrounds. To address this issue, in this paper, a novel multi-level ship detection algorithm is proposed to detect various types of offshore ships more precisely and quickly under all possible imaging variations. Our object detection system consists of two phases. First, in the category-independent region proposal phase, the steerable pyramid for multi-scale analysis is performed to generate a set of saliency maps in which the candidate region pixels are assigned to high salient values. Then, the set of saliency maps is used for constructing the graph-based segmentation, which can produce more accurate candidate regions compared with the threshold segmentation. More importantly, the proposed algorithm can produce a rather smaller set of candidates in comparison with the classical sliding window object detection paradigm or the other region proposal algorithms. Second, in the target identification phase, a rotation-invariant descriptor, which combines the histogram of oriented gradients (HOG) cells and the Fourier basis together, is investigated to distinguish between ships and non-ships. Meanwhile, the main direction of the ship can also be estimated in this phase. The overall algorithm can account for large variations in scale and rotation. Experiments on optical remote sensing (ORS) images demonstrate the effectiveness and robustness of our detection system. Full article
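The rotation invariance of a Fourier-based descriptor such as the one combined with HOG cells here stems from a standard DFT property: cyclically shifting a circular orientation histogram changes only the phases of its DFT, not the magnitudes. A minimal illustration (not the paper's full Fourier HOG descriptor; the histogram values are made up):

```python
import cmath

def fourier_magnitudes(hist):
    """|DFT| of a circular orientation histogram: invariant to cyclic
    shifts of the bins, i.e. to in-plane rotation by whole bins."""
    n = len(hist)
    return [abs(sum(h * cmath.exp(-2j * cmath.pi * k * m / n)
                    for m, h in enumerate(hist)))
            for k in range(n)]

h = [4, 1, 0, 2, 3, 0, 1, 1]
rotated = h[3:] + h[:3]          # same gradient pattern, shifted by 3 bins
a, b = fourier_magnitudes(h), fourier_magnitudes(rotated)
print(all(abs(x - y) < 1e-9 for x, y in zip(a, b)))  # True
```

The discarded phase, meanwhile, encodes the shift itself, which is consistent with the abstract's note that the ship's main direction can also be estimated in this phase of the pipeline.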
(This article belongs to the Special Issue Pattern Recognition and Image Processing for Remote Sensing)