

Pattern Analysis and Recognition in Remote Sensing

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (30 November 2018) | Viewed by 87682

Special Issue Editor


Prof. Nikolaos Doulamis, Guest Editor

Special Issue Information

Dear Colleagues,

Recent advances in optics and photonics have stimulated the deployment of sensing devices with high spatio-temporal and spectral resolution. These sensors are now placed on satellite, aerial, UAV, and ground acquisition platforms and used for material, object, and terrain detection and classification. A variety of research topics has been, and continues to be, a focal point for remote sensing and pattern recognition researchers, such as:

  • automatic identification of 2D and 3D patterns in unimodal and multimodal remote sensing data, including multi-scale aerial and satellite data, multi- and hyperspectral imagery, and SAR, radargrammetric, and SAR-tomography data;
  • recognition of temporal patterns in remote sensing data, e.g., image-based flow estimation and learning from InSAR data (traffic, glaciers, currents, etc.);
  • identification of spatial patterns in remote sensing data exploiting 3D and 4D models (e.g., GIS, CAD).

Although high spatial and spectral resolution improves classification accuracy, it also imposes several research challenges stemming from the so-called "curse of dimensionality": the difficulties that arise when we need to analyze and organize data in high-dimensional spaces. At the same time, the surge of increasingly complex and powerful machine learning techniques (including deep learning and tensor-based classifiers) has brought about a totally new landscape as regards the boundaries of pattern recognition methods. This Special Issue aims to explore the potential of new ideas and technologies from the fields of machine learning and pattern recognition in remote sensing applications, and to further investigate the overlap between remote sensing and computer vision/image analysis.

Prof. Nikolaos Doulamis
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Hyperspectral/multispectral image classification
  • SAR, radargrammetric, and SAR-tomography data
  • Pattern recognition/machine learning/deep learning for remote sensing
  • Tensor-based classification
  • Geospatial big data analytics
  • Hyperspectral unmixing
  • Automatic target recognition
  • Multisensor information fusion
  • Temporal/spatial pattern analysis
  • Semi-supervised learning
  • Object segmentation/classification

Published Papers (16 papers)


Research

22 pages, 3265 KiB  
Article
COSMO-SkyMed Staring Spotlight SAR Data for Micro-Motion and Inclination Angle Estimation of Ships by Pixel Tracking and Convex Optimization
by Biondi Filippo
Remote Sens. 2019, 11(7), 766; https://doi.org/10.3390/rs11070766 - 29 Mar 2019
Cited by 22 | Viewed by 4685
Abstract
Past research has tackled the problem of maritime target detection and motion parameter estimation. This new research contributes by estimating the micro-motion of ships while they are anchored in port or stationed at the roadstead for logistic operations. Target motion detection is usually solved by along-track interferometry (ATI), which observes the scene with two radars spatially separated by a baseline extended in the azimuth direction. For spaceborne missions, performing ATI requires at least two simultaneous SAR observations separated by an along-track baseline. For spotlight spaceborne SAR, re-synthesizing two ATI observations from a single raw data set is problematic because the received electromagnetic bursts are not oversampled (to save onboard memory), so the data appear as a white random process and the interlaced Doppler bands are completely disjoint. After range-Doppler focusing, this phenomenon causes decorrelation in the ATI interferometric phase returned by distributed targets, so that only small, highly coherent targets located within the same radar resolution cell can be considered. This paper proposes a new approach in which the micro-motion of ships occupying thousands of pixels is measured by processing the sub-pixel tracking information generated during the coregistration of two re-synthesized, time-domain, partially overlapping sub-apertures, obtained by splitting the raw data of a single wide-Doppler-band staring spotlight (ST) SAR acquisition. The inclination of ships is calculated by low-rank plus sparse decomposition and a Radon transform of selected regions of interest. Experiments are performed on one set of COSMO-SkyMed ST SAR data.
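The shift estimation at the heart of the coregistration step can be illustrated with a basic phase-correlation sketch: the peak of the inverse FFT of the normalized cross-power spectrum gives the displacement between two looks. This is a generic, integer-pixel illustration of sub-pixel tracking's underlying idea, not the paper's implementation; all names are illustrative.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer-pixel shift of image a relative to b
    via the peak of the normalized cross-power spectrum."""
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    cross = A * np.conj(B)
    cross /= np.abs(cross) + 1e-12           # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peaks past the midpoint to negative shifts
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(int(v) for v in shifts)

# a synthetic 64x64 scene shifted by (3, -2)
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(3, -2), axis=(0, 1))
print(phase_correlation_shift(shifted, img))  # → (3, -2)
```

In practice, sub-pixel accuracy is obtained by oversampling or fitting around the correlation peak; the wrapped-shift handling above is the detail most often got wrong in ad hoc implementations.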
(This article belongs to the Special Issue Pattern Analysis and Recognition in Remote Sensing)

18 pages, 3160 KiB  
Article
MultiCAM: Multiple Class Activation Mapping for Aircraft Recognition in Remote Sensing Images
by Kun Fu, Wei Dai, Yue Zhang, Zhirui Wang, Menglong Yan and Xian Sun
Remote Sens. 2019, 11(5), 544; https://doi.org/10.3390/rs11050544 - 06 Mar 2019
Cited by 54 | Viewed by 7267
Abstract
Aircraft recognition in remote sensing images has long been a meaningful topic. Most related methods treat entire images as a whole and do not concentrate on the features of parts. In fact, many aircraft types have small interclass variance, and the main evidence for classifying subcategories lies in a few discriminative object parts. In this paper, we introduce the idea of fine-grained visual classification (FGVC) and attempt to make full use of the features from discriminative object parts. First, multiple class activation mapping (MultiCAM) is proposed to extract the discriminative parts of aircraft of different categories. Second, we present a mask filter (MF) strategy to enhance the discriminative object parts and filter out background interference from the original images. Third, a selective connected feature fusion method is proposed to fuse the features extracted from two networks focusing on the original images and the results of MF, respectively. Compared with the single predicted category used in class activation mapping (CAM), MultiCAM makes full use of the predictions of all categories to overcome the incorrect discriminative parts produced by a wrong single predicted category. Additionally, the designed MF preserves object scale information and helps the network concentrate on the object itself rather than the interfering background. Experiments on a challenging dataset show that our method achieves state-of-the-art performance.
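The class activation mapping that MultiCAM builds on combines the last convolutional layer's feature maps with a class's fully-connected weights; a minimal numpy sketch, with an assumed probability-weighted fusion over all classes standing in for the paper's exact MultiCAM rule (shapes and names are illustrative):

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """CAM for one class: weighted sum of the K feature maps using
    that class's classifier weights.
    feature_maps: (K, H, W), fc_weights: (num_classes, K)."""
    w = fc_weights[class_idx]                    # (K,)
    cam = np.tensordot(w, feature_maps, axes=1)  # (H, W)
    cam = np.maximum(cam, 0)                     # keep positive evidence
    return cam / (cam.max() + 1e-12)             # normalize to [0, 1]

def multi_cam(feature_maps, fc_weights, probs):
    """Fuse per-class CAMs, weighting each by its predicted probability
    instead of trusting the top-1 class alone (assumed fusion rule)."""
    maps = np.stack([class_activation_map(feature_maps, fc_weights, c)
                     for c in range(len(probs))])
    return np.tensordot(probs, maps, axes=1)

rng = np.random.default_rng(1)
feats = rng.random((8, 7, 7))              # last-conv feature maps
weights = rng.standard_normal((3, 8))      # classifier weights, 3 classes
probs = np.array([0.7, 0.2, 0.1])          # softmax output
fused = multi_cam(feats, weights, probs)   # (7, 7) attention map
```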

25 pages, 866 KiB  
Article
Entropy-Mediated Decision Fusion for Remotely Sensed Image Classification
by Baofeng Guo
Remote Sens. 2019, 11(3), 352; https://doi.org/10.3390/rs11030352 - 10 Feb 2019
Cited by 6 | Viewed by 3632
Abstract
To better classify remotely sensed hyperspectral imagery, we study hyperspectral signatures from a different view, in which the discriminatory information is divided into reflectance features and absorption features. Based on this categorization, we put forward an information fusion approach in which the reflectance features and the absorption features are processed by different algorithms. Their outputs are treated as initial decisions and then fused by a decision-level algorithm, where the entropy of the classification output is used to balance the two decisions. The final decision is reached by modifying the decision of the reflectance features via the results of the absorption features. Simulations are carried out to assess classification performance on two AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) hyperspectral datasets. The results show that the proposed method increases classification accuracy compared with state-of-the-art methods.
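The entropy-mediated balancing can be sketched as follows: when the reflectance-branch posterior is uncertain (high entropy), weight shifts to the absorption branch. The blending rule below is an assumption for illustration; the paper's exact formula may differ.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a probability vector, in nats."""
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log(p + 1e-12))

def fuse(p_reflectance, p_absorption, n_classes):
    """Blend two posteriors: the weight on the absorption branch grows
    with the normalized entropy of the reflectance decision."""
    h = entropy(p_reflectance) / np.log(n_classes)   # in [0, 1]
    p = (1 - h) * np.asarray(p_reflectance) + h * np.asarray(p_absorption)
    return p / p.sum()

confident = fuse([0.9, 0.05, 0.05], [0.2, 0.5, 0.3], 3)   # low entropy
uncertain = fuse([0.34, 0.33, 0.33], [0.2, 0.5, 0.3], 3)  # high entropy
print(confident.argmax(), uncertain.argmax())  # confident keeps class 0
```

A confident reflectance decision survives the fusion, while a near-uniform one defers to the absorption branch.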

25 pages, 4215 KiB  
Article
Superpixel-Guided Layer-Wise Embedding CNN for Remote Sensing Image Classification
by Han Liu, Jun Li, Lin He and Yu Wang
Remote Sens. 2019, 11(2), 174; https://doi.org/10.3390/rs11020174 - 17 Jan 2019
Cited by 16 | Viewed by 4537
Abstract
Irregular spatial dependency is one of the major characteristics of remote sensing images, which brings challenges for classification tasks. Deep supervised models such as convolutional neural networks (CNNs) have shown great capacity for remote sensing image classification. However, they generally require a huge labeled training set to fine-tune a deep neural network. To handle the irregular spatial dependency of remote sensing images and mitigate the conflict between limited labeled samples and training demand, we design a superpixel-guided layer-wise embedding CNN (SLE-CNN) for remote sensing image classification, which can efficiently exploit the information from both labeled and unlabeled samples. With the superpixel-guided sampling strategy for unlabeled samples, the neighborhood covering of the spatial dependency system is determined automatically, adapting to the real scenes of remote sensing images. In the designed network, two types of loss are combined for training the CNN: a supervised cross-entropy cost and an unsupervised reconstruction cost, applied to labeled and unlabeled samples, respectively. Experiments are conducted on three types of remote sensing data, including hyperspectral, multispectral, and synthetic aperture radar (SAR) images. The designed SLE-CNN achieves excellent classification performance in all cases with a limited labeled training set, suggesting its good potential for remote sensing image classification.
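The combined training objective pairs a supervised cross-entropy term with an unsupervised reconstruction term; a minimal numpy sketch of such a joint loss (the weighting and exact form are assumptions for illustration, not taken from the paper):

```python
import numpy as np

def joint_loss(probs, labels, recon, inputs, weight=0.5):
    """Supervised cross-entropy on labeled samples plus a weighted
    unsupervised reconstruction error on all samples.
    probs: (n_labeled, C) predicted class probabilities
    labels: (n_labeled,) integer class labels
    recon, inputs: (n_all, d) reconstructions and original features."""
    ce = -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))
    rec = np.mean((recon - inputs) ** 2)
    return ce + weight * rec

probs = np.array([[0.8, 0.2], [0.3, 0.7]])   # two labeled samples
labels = np.array([0, 1])
x = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])  # all samples
loss = joint_loss(probs, labels, x * 0.9, x, weight=0.5)
```

The unlabeled samples contribute only through the reconstruction term, which is what lets the network learn from them without labels.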

17 pages, 2005 KiB  
Article
Automatic Ship Detection in Optical Remote Sensing Images Based on Anomaly Detection and SPP-PCANet
by Nan Wang, Bo Li, Qizhi Xu and Yonghua Wang
Remote Sens. 2019, 11(1), 47; https://doi.org/10.3390/rs11010047 - 29 Dec 2018
Cited by 25 | Viewed by 6034
Abstract
Automatic ship detection in optical remote sensing images has a wide range of applications in civilian and military fields. Among the most important challenges encountered in ship detection, we focus on three: (a) ships with low contrast; (b) sea surfaces in complex situations; and (c) false-alarm interference such as clouds and reefs. To overcome these challenges, this paper proposes a coarse-to-fine ship detection strategy based on anomaly detection and spatial pyramid pooling PCANet (SPP-PCANet). The anomaly detection algorithm, based on the multivariate Gaussian distribution, regards a ship as an abnormal marine area and effectively extracts candidate ship regions. Subsequently, we combine PCANet and spatial pyramid pooling to reduce the number of false positives and improve the detection rate. Furthermore, a non-maximum suppression strategy is adopted to eliminate overlapping detection boxes on the same ship. To validate the effectiveness of the proposed method, GF-1 and GF-2 images covering the three scenarios mentioned above were used in the experiments. Extensive experiments demonstrate that our method obtains superior performance against complex sea backgrounds and has a certain degree of robustness to external factors, such as uneven illumination and low contrast, on the GF-1 and GF-2 satellite image data.
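The multivariate-Gaussian anomaly detector amounts to flagging pixels whose squared Mahalanobis distance from the background statistics exceeds a threshold; a minimal sketch, with the single-Gaussian background model and the threshold value being illustrative assumptions:

```python
import numpy as np

def gaussian_anomaly_mask(pixels, threshold):
    """pixels: (N, d) per-pixel feature vectors (e.g., spectral bands).
    Fits one multivariate Gaussian to all pixels and flags those with
    squared Mahalanobis distance above the threshold as anomalies."""
    mu = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False)
    cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(pixels.shape[1]))
    centered = pixels - mu
    d2 = np.einsum('nd,de,ne->n', centered, cov_inv, centered)
    return d2 > threshold

rng = np.random.default_rng(2)
sea = rng.normal(0.0, 1.0, size=(1000, 4))   # background "sea" pixels
ships = rng.normal(8.0, 1.0, size=(5, 4))    # bright "ship" pixels
mask = gaussian_anomaly_mask(np.vstack([sea, ships]), threshold=30.0)
```

The rare bright pixels sit far outside the background distribution and survive the threshold, mirroring how the coarse stage extracts candidate ship regions before the fine SPP-PCANet stage prunes false alarms.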

20 pages, 15666 KiB  
Article
A Region Merging Segmentation with Local Scale Parameters: Applications to Spectral and Elevation Data
by Maria Dekavalla and Demetre Argialas
Remote Sens. 2018, 10(12), 2024; https://doi.org/10.3390/rs10122024 - 12 Dec 2018
Cited by 9 | Viewed by 3262
Abstract
Region merging is among the most effective methods for the segmentation of remote sensing data. The quality and the size of the resulting image objects are controlled by a global heterogeneity threshold, termed the scale parameter. However, the multidimensional nature of the visible features in a scene defies the use of even an optimal single global scale parameter. In this study, a novel region merging segmentation method is proposed, in which a local scale parameter is defined for each image object from its internal and external heterogeneity measures (i.e., local variance and Moran's I). This method allows image objects with low internal and external heterogeneity to be further merged at higher scale parameter values, since they are more likely to be part of an adjacent object than objects with high internal and external heterogeneity. The proposed method was applied to spectral and elevation data, and its results were evaluated visually and with supervised and unsupervised evaluation methods. Comparison with multi-resolution segmentation (MRS) showed that the proposed region merging method can produce improved segmentation results in terms of maximizing intra-object homogeneity and inter-object heterogeneity, as well as in the delimitation of specific target objects present in spectral and elevation data. The unsupervised evaluation of the (1) Côte d'Azur, (2) Manchester, and (3) Szada images from the SZTAKI-INRIA building detection dataset showed that the proposed method (overall goodness, OGf (1): 0.7375, (2): 0.7923, (3): 0.7967) performs better than MRS (OGf (1): 0.7224, (2): 0.7648, (3): 0.7823). The higher OGf values indicate the ability to produce segmentation results with reduced over-segmentation effects and without the need for presegmented input data, in contrast to the objective heterogeneity and relative homogeneity (OHRH) hybrid segmentation method (OGf (1): 0.5864, (2): 0.5151, (3): 0.6983).
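Moran's I, the external-heterogeneity measure used here to set the local scale parameter, can be computed over a segment adjacency graph as follows. This is the generic global formulation on segment mean values; the paper's exact weighting scheme may differ.

```python
import numpy as np

def morans_i(values, weights):
    """Global Moran's I for segment attribute values.
    values: (n,) attribute per segment (e.g., mean intensity).
    weights: (n, n) symmetric adjacency/contiguity matrix."""
    v = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    n = len(v)
    z = v - v.mean()
    num = n * np.sum(w * np.outer(z, z))   # spatially weighted cross-products
    den = w.sum() * np.sum(z ** 2)
    return num / den

# four segments in a chain with alternating values:
# strong negative spatial autocorrelation
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
print(morans_i([1.0, 9.0, 1.0, 9.0], adj))  # → -1.0
```

Values near +1 mean a segment resembles its neighbors (a candidate for further merging); values near -1 mean it stands out from them.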

16 pages, 9545 KiB  
Article
Road Extraction from High-Resolution Remote Sensing Imagery Using Deep Learning
by Yongyang Xu, Zhong Xie, Yaxing Feng and Zhanlong Chen
Remote Sens. 2018, 10(9), 1461; https://doi.org/10.3390/rs10091461 - 13 Sep 2018
Cited by 185 | Viewed by 13704
Abstract
The road network plays an important role in the modern traffic system; as development occurs, the road structure changes frequently. Owing to advancements in high-resolution remote sensing and the success of semantic segmentation using deep learning in computer vision, extracting the road network from high-resolution remote sensing imagery is becoming increasingly popular and has become a new tool for updating geospatial databases. Considering that the training samples of a deep convolutional neural network are clipped to a fixed size, which leads to roads running through each sample, and that different road types have different widths, this work provides a segmentation model based on densely connected convolutional networks (DenseNet) that introduces local and global attention units. The aim of this work is to propose a novel road extraction method that can efficiently extract the road network from remote sensing imagery using local and global information. A dataset from Google Earth was used to validate the method, and experiments showed that the proposed deep convolutional neural network can extract the road network accurately and effectively. The method also achieves a higher harmonic mean of precision and recall than other machine learning and deep learning methods.

18 pages, 6367 KiB  
Article
Development of Shoreline Extraction Method Based on Spatial Pattern Analysis of Satellite SAR Images
by Takashi Fuse and Takashi Ohkura
Remote Sens. 2018, 10(9), 1361; https://doi.org/10.3390/rs10091361 - 27 Aug 2018
Cited by 11 | Viewed by 5100
Abstract
The extensive monitoring of shorelines is becoming important for investigating the impact of coastal erosion. Satellite synthetic aperture radar (SAR) images can cover wide areas independently of weather or time of day. The recent development of high-resolution satellite SAR images has made observations more detailed. Shoreline extraction from high-resolution images, however, is challenging because of the influence of speckle, crest lines, patterns in sandy beaches, etc. We develop a shoreline extraction method based on the spatial pattern analysis of satellite SAR images. The proposed method consists of image decomposition, smoothing, sea/land segmentation, and shoreline refinement. The image decomposition step, in which the image is decomposed into its texture and outline components, is based on morphological component analysis, into which a learning process over spatial patterns is introduced. The outline images are smoothed using a non-local means filter, and the images are then segmented into sea and land areas using the graph-cut technique; the boundary between the two areas can be regarded as the shoreline. Finally, the snakes algorithm is applied to refine the positional accuracy. The proposed method is applied to satellite SAR images of coasts in Japan, and experiments confirm that it successfully extracts the shorelines.

24 pages, 10375 KiB  
Article
Unsupervised Segmentation Evaluation Using Area-Weighted Variance and Jeffries-Matusita Distance for Remote Sensing Images
by Yongji Wang, Qingwen Qi and Ying Liu
Remote Sens. 2018, 10(8), 1193; https://doi.org/10.3390/rs10081193 - 30 Jul 2018
Cited by 30 | Viewed by 6017
Abstract
Image segmentation is an important process and a prerequisite for object-based image analysis. Thus, evaluating the performance of segmentation algorithms is essential to identify effective segmentation methods and to optimize the scale. In this paper, we propose an unsupervised evaluation (UE) method using the area-weighted variance (WV) and the Jeffries-Matusita (JM) distance to compare two image partitions and thereby evaluate segmentation quality. The two measures were calculated based on local measure criteria, and the JM distance was improved by considering the contribution of the common border between adjacent segments and the area of each segment, which makes the heterogeneity measure more effective and objective. The two measures were then plotted as curves as the scale changed from 8 to 20, reflecting segmentation quality under both over- and under-segmentation. Furthermore, the WV and JM distance measures were combined using three different strategies. The effectiveness of the combined indicators was illustrated through supervised evaluation (SE) methods to clearly reveal the segmentation quality and capture the trade-off between the two measures. In these experiments, the multiresolution segmentation (MRS) method was adopted for evaluation. The proposed UE method was compared with two existing UE methods to further confirm its capabilities. The visual and quantitative SE results demonstrate that the proposed UE method can improve segmentation quality.
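For two segments modeled as univariate Gaussians, the Jeffries-Matusita distance reduces to JM = 2(1 - e^(-B)), with B the Bhattacharyya distance; a minimal sketch of that core quantity (the paper's border- and area-weighted refinement is not included):

```python
import numpy as np

def jm_distance(mu1, var1, mu2, var2):
    """Jeffries-Matusita distance between two 1-D Gaussians:
    JM = 2 * (1 - exp(-B)), B the Bhattacharyya distance."""
    b = (0.25 * (mu1 - mu2) ** 2 / (var1 + var2)
         + 0.5 * np.log((var1 + var2) / (2.0 * np.sqrt(var1 * var2))))
    return 2.0 * (1.0 - np.exp(-b))

print(jm_distance(0.0, 1.0, 0.0, 1.0))   # identical segments → 0.0
print(jm_distance(0.0, 1.0, 10.0, 1.0))  # well separated → close to 2.0
```

The saturation at 2 is what makes JM attractive as a heterogeneity measure: unlike raw Bhattacharyya distance, it does not let one extremely separable pair dominate the evaluation.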

19 pages, 1542 KiB  
Article
Band Priority Index: A Feature Selection Framework for Hyperspectral Imagery
by Wenqiang Zhang, Xiaorun Li and Liaoying Zhao
Remote Sens. 2018, 10(7), 1095; https://doi.org/10.3390/rs10071095 - 10 Jul 2018
Cited by 14 | Viewed by 3889
Abstract
Hyperspectral band selection (BS) aims to select a few informative and distinctive bands to represent the whole image cube. In this paper, an unsupervised BS framework named the band priority index (BPI) is proposed. The basic idea of BPI is to find the bands with large amounts of information and low correlation. Sequential forward search (SFS) is used to avoid an exhaustive search, and the objective function of BPI consists of two parts: an information metric and a correlation metric. We propose a new band correlation metric, the joint correlation coefficient (JCC), to estimate the joint correlation between a single band and multiple bands. JCC uses the angle between a band and the hyperplane determined by a band set to evaluate the correlation between them. To estimate the amount of information, the variance and the entropy are each used as the information metric for BPI. Since BPI is a framework for BS, other information metrics and different mathematical functions of the angle can also be used in the model, which means there are various implementations of BPI. The BPI-based methods have the following advantages: (1) the selected bands are informative and distinctive; (2) the BPI-based methods usually have good computational efficiency; (3) these methods have the potential to determine the number of bands to be selected. Experimental results on different real hyperspectral datasets demonstrate that the BPI-based methods are highly efficient and accurate BS methods.
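The angle-based joint correlation can be sketched with a least-squares projection: the correlation between a candidate band and an already-selected band set follows from the angle between the band vector and the subspace the set spans. This is an illustrative reconstruction of the idea, not the paper's exact JCC formula.

```python
import numpy as np

def joint_correlation(band, selected):
    """Cosine of the angle between a band vector and the subspace
    spanned by the selected bands: 1 = fully redundant, 0 = orthogonal.
    band: (n,), selected: (n, k) with bands as columns."""
    coef, *_ = np.linalg.lstsq(selected, band, rcond=None)
    fitted = selected @ coef                 # projection onto the span
    return np.linalg.norm(fitted) / np.linalg.norm(band)

rng = np.random.default_rng(3)
b1 = rng.random(100)
b2 = rng.random(100)
redundant = 0.6 * b1 + 0.4 * b2              # lies in span(b1, b2)
sel = np.column_stack([b1, b2])
print(joint_correlation(redundant, sel))     # → 1.0 (up to rounding)
```

A band whose value is near 1 adds almost no new information over the selected set and would be skipped by the sequential forward search.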

37 pages, 13846 KiB  
Article
A Level Set Method for Infrared Image Segmentation Using Global and Local Information
by Minjie Wan, Guohua Gu, Jianhong Sun, Weixian Qian, Kan Ren, Qian Chen and Xavier Maldague
Remote Sens. 2018, 10(7), 1039; https://doi.org/10.3390/rs10071039 - 02 Jul 2018
Cited by 17 | Viewed by 4512
Abstract
Infrared image segmentation plays a significant role in many burgeoning remote sensing applications, such as environmental monitoring, traffic surveillance, and air navigation. However, precision is limited by the blurred edges, low contrast, and intensity inhomogeneity caused by infrared imaging. To overcome these challenges, a level set method using global and local information is proposed in this paper. In our method, a hybrid signed pressure function is constructed by adaptively fusing a global term and a local term. The global term is represented by the global average intensity, which effectively accelerates the evolution when the evolving curve is far from the object. The local term is represented by a multi-feature-based signed driving force, which accurately guides the curve to approach the real boundary when it is near the object. The two terms are then integrated via an adaptive weight matrix calculated from the range value of each pixel. Under the framework of the geodesic active contour model, a new level set formula is obtained by substituting the proposed signed pressure function for the edge stopping function. In addition, a Gaussian convolution is applied to regularize the level set function, avoiding the computationally expensive re-initialization. Through iteration, the object of interest is segmented when the level set function converges. Both qualitative and quantitative experiments verify that our method outperforms other state-of-the-art level set methods in terms of accuracy and robustness, even with a randomly set initial contour.
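The global ingredient of such a hybrid model follows the classic signed pressure function built from the mean intensities inside and outside the curve; a one-function sketch of that term alone (the paper's full SPF additionally folds in the local multi-feature driving force, which is not reproduced here):

```python
import numpy as np

def global_spf(image, inside_mask):
    """Global signed pressure function: positive where a pixel is
    brighter than the midpoint of the inside/outside mean intensities,
    negative where darker; normalized to [-1, 1]."""
    c1 = image[inside_mask].mean()     # mean intensity inside the curve
    c2 = image[~inside_mask].mean()    # mean intensity outside the curve
    spf = image - (c1 + c2) / 2.0
    return spf / np.abs(spf).max()

img = np.zeros((8, 8)); img[2:6, 2:6] = 1.0                  # bright object
init = np.zeros((8, 8), dtype=bool); init[1:7, 1:7] = True   # initial curve
spf = global_spf(img, init)
```

The sign of the SPF tells the curve whether to expand or shrink at each pixel, which is why a random initial contour can still converge to the object.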

14 pages, 8273 KiB  
Article
A Component-Based Multi-Layer Parallel Network for Airplane Detection in SAR Imagery
by Chu He, Mingxia Tu, Dehui Xiong, Feng Tu and Mingsheng Liao
Remote Sens. 2018, 10(7), 1016; https://doi.org/10.3390/rs10071016 - 25 Jun 2018
Cited by 32 | Viewed by 3277
Abstract
In this paper, a component-based multi-layer parallel network is proposed for airplane detection in synthetic aperture radar (SAR) imagery. In response to the sparsity and diversity problems caused by the SAR scattering mechanism, the presented algorithm utilizes depth characteristics and component structure. Compared with traditional features, depth characteristics have better descriptive ability for dealing with diversity, while component information contributes to detecting complete targets. The proposed algorithm consists of two parallel networks and a constraint layer. First, component information is introduced into the network by labeling. Then, the overall target and the corresponding components are detected by the trained model. In the subsequent discriminative constraint layer, the maximum probability and prior information are adopted to filter out wrong detections. Comparative experiments are conducted on TerraSAR-X SAR imagery; the results indicate that the proposed network achieves higher accuracy for airplane detection.

21 pages, 93260 KiB  
Article
Bidirectional Long Short-Term Memory Network for Vehicle Behavior Recognition
by Jiasong Zhu, Ke Sun, Sen Jia, Weidong Lin, Xianxu Hou, Bozhi Liu and Guoping Qiu
Remote Sens. 2018, 10(6), 887; https://doi.org/10.3390/rs10060887 - 06 Jun 2018
Cited by 15 | Viewed by 5442
Abstract
Vehicle behavior recognition is an attractive research field which is useful for many computer vision and intelligent traffic analysis tasks. This paper presents an all-in-one behavior recognition framework for moving vehicles based on the latest deep learning techniques. Unlike traditional traffic analysis methods, which rely on low-resolution videos captured by road cameras, we capture 4K (3840 × 2178) traffic videos at a busy road intersection of a modern megacity by flying an unmanned aerial vehicle (UAV) during rush hours, and we manually annotate the locations and types of road vehicles. The proposed method consists of three steps: (1) vehicle detection and type recognition based on deep neural networks; (2) vehicle tracking by data association and vehicle trajectory modeling; (3) vehicle behavior recognition by nearest neighbor search and by a bidirectional long short-term memory network, respectively. The paper also presents experimental results of the proposed framework in comparison with state-of-the-art approaches on the 4K testing traffic video, demonstrating the effectiveness and superiority of the proposed method.
(This article belongs to the Special Issue Pattern Analysis and Recognition in Remote Sensing)

24 pages, 4602 KiB  
Article
The Generalized Gamma-DBN for High-Resolution SAR Image Classification
by Zhiqiang Zhao, Lei Guo, Meng Jia and Lei Wang
Remote Sens. 2018, 10(6), 878; https://doi.org/10.3390/rs10060878 - 05 Jun 2018
Cited by 13 | Viewed by 3954
Abstract
With the increase in resolution, effective characterization of synthetic aperture radar (SAR) images has become one of the most critical problems in many earth observation applications. Inspired by deep learning and probability mixture models, a generalized Gamma deep belief network (gΓ-DBN) is proposed in this work for SAR image statistical modeling and land-cover classification. Specifically, a generalized Gamma-Bernoulli restricted Boltzmann machine (gΓB-RBM) is proposed to capture high-order statistical characteristics of SAR images after introducing the generalized Gamma distribution. After stacking the gΓB-RBM and several standard binary RBMs in a hierarchical manner, a gΓ-DBN is constructed to learn high-level representations of different SAR land covers. Finally, a discriminative neural network is built by adding a prediction layer for the different land covers on top of the constructed deep structure. The performance of the proposed approach is evaluated in several experiments on high-resolution SAR image patch sets and on two large-scale scenes captured by the ALOS PALSAR-2 and COSMO-SkyMed satellites, respectively.
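The generalized Gamma distribution underlying the gΓB-RBM can be written in the Stacy form with scale a, shape d, and power p; this is a standard parameterization and not necessarily the exact one used in the paper. A minimal numerical sanity check of the density:

```python
import numpy as np
from math import gamma

def ggamma_pdf(x, a, d, p):
    """Stacy generalized Gamma density for x > 0:
        f(x; a, d, p) = p * x**(d-1) * exp(-(x/a)**p) / (a**d * gamma(d/p)).
    Special cases: p = 1 gives the Gamma distribution, d = p the Weibull,
    and d = p = 1 the exponential."""
    x = np.asarray(x, dtype=float)
    return p * x ** (d - 1) * np.exp(-(x / a) ** p) / (a ** d * gamma(d / p))

# Sanity check: the density should integrate to one over (0, inf);
# a Riemann sum over a wide interval approximates the integral.
xs = np.linspace(1e-6, 50.0, 200_000)
mass = ggamma_pdf(xs, a=2.0, d=3.0, p=1.5).sum() * (xs[1] - xs[0])
```

The extra power parameter p is what lets the distribution track the heavy-tailed amplitude statistics of high-resolution SAR land covers more flexibly than a plain Gamma model.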
(This article belongs to the Special Issue Pattern Analysis and Recognition in Remote Sensing)

26 pages, 28077 KiB  
Article
Region Merging Considering Within- and Between-Segment Heterogeneity: An Improved Hybrid Remote-Sensing Image Segmentation Method
by Yongji Wang, Qingyan Meng, Qingwen Qi, Jian Yang and Ying Liu
Remote Sens. 2018, 10(5), 781; https://doi.org/10.3390/rs10050781 - 18 May 2018
Cited by 30 | Viewed by 5750
Abstract
Image segmentation is an important process and a prerequisite for object-based image analysis, but segmenting an image into meaningful geo-objects is a challenging problem. Recently, some scholars have focused on hybrid methods that employ an initial segmentation followed by region merging, since hybrid methods consider both boundary and spatial information. However, the existing merging criteria (MC) consider only the heterogeneity between adjacent segments when calculating the merging cost, which limits the goodness-of-fit between segments and geo-objects, because the homogeneity within segments and the heterogeneity between segments should be treated equally. To overcome this limitation, this paper proposes a hybrid remote-sensing image segmentation method whose MC considers both objective heterogeneity and relative homogeneity (OHRH) during region merging. The OHRH method is applied to five different study areas and compared to our region-merging method based on objective heterogeneity (OH) alone, as well as to the full lambda-schedule algorithm (FLSA). Unsupervised evaluation indicated that the OHRH method was more accurate than the OH and FLSA methods, and the visual results showed that OHRH could distinguish both small and large geo-objects. Its segments also showed greater size variation than those of the other methods, demonstrating the benefit of considering both within- and between-segment heterogeneity.
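The idea of weighing within-segment homogeneity against between-segment heterogeneity can be illustrated with a toy merging cost. The statistics and weights below are a hypothetical simplification, not the OHRH criterion itself:

```python
import numpy as np

def merging_cost(seg_a, seg_b, w_within=0.5, w_between=0.5):
    """Toy merging criterion for two single-band segments given as arrays of
    pixel values; a lower cost marks a better merge candidate.
    - within term: area-weighted increase in internal variance caused by the
      merge (how much homogeneity the merged segment would lose);
    - between term: absolute difference of segment means (spectrally similar
      neighbours are cheaper to merge)."""
    a, b = np.asarray(seg_a, float), np.asarray(seg_b, float)
    merged = np.concatenate([a, b])
    within = merged.size * merged.var() - (a.size * a.var() + b.size * b.var())
    between = abs(a.mean() - b.mean())
    return w_within * within + w_between * between
```

At each region-merging step, the pair of adjacent segments with the lowest cost is merged first, so a criterion with only the between term (as in the existing MC the abstract criticizes) would ignore the homogeneity the merge destroys.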
(This article belongs to the Special Issue Pattern Analysis and Recognition in Remote Sensing)

16 pages, 4420 KiB  
Article
Salient Object Detection via Recursive Sparse Representation
by Yongjun Zhang, Xiang Wang, Xunwei Xie and Yansheng Li
Remote Sens. 2018, 10(4), 652; https://doi.org/10.3390/rs10040652 - 23 Apr 2018
Cited by 14 | Viewed by 4882
Abstract
Object-level saliency detection is an attractive research field that is useful for many content-based computer vision and remote-sensing tasks. This paper introduces an efficient unsupervised approach to salient object detection from the perspective of recursive sparse representation. The reconstruction error determined by foreground and background dictionaries, rather than the common local and global contrasts, is used as the saliency indicator, which effectively alleviates shortcomings in object integrity. The proposed method consists of the following four steps: (1) regional feature extraction; (2) extraction of background and foreground dictionaries according to the initial saliency map and image boundary constraints; (3) sparse representation and saliency measurement; and (4) recursive processing, in which the current saliency map updates the initial saliency map of step 2 and step 3 is repeated. This paper also presents experimental results comparing the proposed method with seven state-of-the-art saliency detection methods on three benchmark datasets, as well as on several satellite and unmanned aerial vehicle remote-sensing images; the results confirm that the proposed method is more effective than current methods and achieves more favorable performance in detecting multiple objects while maintaining the integrity of the object area.
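The saliency measurement of step (3) can be sketched as follows. Plain least squares stands in for a true sparse solver (e.g. OMP or l1 minimization) to keep the sketch dependency-free, and the combination of the two dictionary errors is an illustrative choice, not the paper's exact formula:

```python
import numpy as np

def reconstruction_error(feature, dictionary):
    """Code a region feature over a dictionary (columns = atoms) and return
    the residual norm; a large residual means the dictionary explains the
    region poorly."""
    D = np.asarray(dictionary, float)
    f = np.asarray(feature, float)
    coef, *_ = np.linalg.lstsq(D, f, rcond=None)
    return np.linalg.norm(f - D @ coef)

def saliency_map(features, bg_dict, fg_dict):
    """Per-region saliency: a region that the background dictionary fits
    badly and the foreground dictionary fits well is salient. Scores are
    normalised to [0, 1]."""
    s = np.asarray([reconstruction_error(f, bg_dict)
                    - reconstruction_error(f, fg_dict) for f in features])
    return (s - s.min()) / (np.ptp(s) + 1e-9)
```

In the recursive scheme, the resulting map replaces the initial saliency map, better dictionaries are extracted from it, and the measurement is repeated until the map stabilizes.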
(This article belongs to the Special Issue Pattern Analysis and Recognition in Remote Sensing)
