
Deep Learning for Target Object Detection and Identification in Remote Sensing Data

A special issue of Remote Sensing (ISSN 2072-4292).

Deadline for manuscript submissions: closed (31 December 2017) | Viewed by 109931

Special Issue Editors

Guest Editor
Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, 20A Datun Rd, Beijing 100101, China
Interests: remote sensing; change detection; satellite image time series analysis; spatio-temporal model analysis

Guest Editor
College of Information Science and Technology, Beijing University of Chemical Technology, Beijing 100029, China
Interests: remote sensing; feature extraction; pattern classification; hyperspectral imagery

Guest Editor
1. School of Computer Science and Applied Mathematics, University of the Witwatersrand, Johannesburg 2000, South Africa
2. School of Information Science and Technology, Southwest Jiaotong University, Chengdu, China
Interests: signal/image/video processing; visual computing; machine learning; cognitive computing; remote sensing data modelling and processing

Guest Editor
Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, 20A Datun Rd, Beijing 100101, China
Interests: object detection; satellite image classification; machine learning

Special Issue Information

Dear Colleagues,

Remote Sensing Technology (RST) mainly focuses on the acquisition of information about the Earth’s surface and atmosphere using sensors onboard airborne or spaceborne platforms. RST has been widely used in ground mapping, resource regulation, environmental protection, urban planning, geological research, disaster relief and emergency response, military reconnaissance, and other fields.

Amongst the applications of RST, the detection and identification of target objects from multi-source and multi-modal remote sensing data plays a key role. Target object detection and identification is usually achieved using a combination of signal/image processing techniques and statistical models. However, because of the volume, variety, and velocity of RS data acquired via airborne or spaceborne platforms, the sophisticated signal/image processing techniques used for feature extraction (feature engineering) and the associated statistical models need to be adapted or redesigned according to the characteristics of the new data.

Recent advances in deep learning architectures have shown promising results over their statistical counterparts in target object detection and identification. Although such architectures depend heavily on computing resources, they are easier to use than sophisticated statistical models. Furthermore, deep learning architectures subsume feature engineering into the learning process itself, which makes them extremely powerful for target object detection and identification.

This Special Issue focuses on target object detection and identification using deep learning architectures on multi-source and multi-modal remote sensing data captured from both active and passive sensors onboard airborne or spaceborne platforms. The Special Issue covers the following topics, applied to target object detection and identification from remote sensing data, but is not limited to them:

  • Feature extraction
  • Feature design
  • Feature learning
  • Design of deep learning architectures
  • Theory of deep learning architectures
  • Efficient training of deep learning architectures
  • Deep convolutional networks
  • Efficient object search methods on remote sensing images

Authors are requested to check and follow the Instructions for Authors, available at https://www.mdpi.com/journal/remotesensing/instructions.

We look forward to receiving your submissions in this interesting area of specialization.

Best wishes,

Dr. Yu Meng
Dr. Wei Li
Dr. Turgay Celik
Dr. Anzhi Yue
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing

  • deep learning

  • object detection

  • machine learning

  • feature learning

Published Papers (12 papers)


Research


20 pages, 18501 KiB  
Article
Fast Automatic Airport Detection in Remote Sensing Images Using Convolutional Neural Networks
by Fen Chen, Ruilong Ren, Tim Van de Voorde, Wenbo Xu, Guiyun Zhou and Yan Zhou
Remote Sens. 2018, 10(3), 443; https://doi.org/10.3390/rs10030443 - 12 Mar 2018
Cited by 82 | Viewed by 9649
Abstract
Fast and automatic detection of airports from remote sensing images is useful for many military and civilian applications. In this paper, a fast automatic detection method is proposed to detect airports from remote sensing images based on convolutional neural networks using the Faster R-CNN algorithm. This method first applies a convolutional neural network to generate candidate airport regions. Based on the features extracted from these proposals, it then uses another convolutional neural network to perform airport detection. By taking the typical elongated linear geometric shape of airports into consideration, some specific improvements to the method are proposed. These approaches successfully improve the quality of positive samples and achieve a better accuracy in the final detection results. Experimental results on an airport dataset, Landsat 8 images, and a Gaofen-1 satellite scene demonstrate the effectiveness and efficiency of the proposed method.
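
As a concrete illustration of the two-stage pipeline described above (proposal generation followed by region classification, with anchors adapted to elongated runways), here is a minimal sketch using torchvision's Faster R-CNN implementation rather than the authors' code; the wide anchor aspect ratios and the single "airport" foreground class are illustrative assumptions, and torchvision >= 0.13 is assumed.

```python
# Sketch: fine-tuning a two-stage detector for a single "airport" class.
import torch
import torchvision
from torchvision.models.detection.rpn import AnchorGenerator

# Wide aspect ratios reflect the elongated, linear geometry of runways.
anchor_gen = AnchorGenerator(
    sizes=((32,), (64,), (128,), (256,), (512,)),   # one size per FPN level
    aspect_ratios=((0.2, 0.5, 1.0, 2.0, 5.0),) * 5,
)
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, weights_backbone="DEFAULT",       # pre-trained backbone only
    num_classes=2,                                  # background + airport
    rpn_anchor_generator=anchor_gen,
)

model.eval()
with torch.no_grad():
    preds = model([torch.rand(3, 800, 800)])        # one RGB scene in [0, 1]
print(preds[0]["boxes"].shape, preds[0]["scores"].shape)
```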

17 pages, 2668 KiB  
Article
Automatic Kernel Size Determination for Deep Neural Networks Based Hyperspectral Image Classification
by Chen Ding, Ying Li, Yong Xia, Lei Zhang and Yanning Zhang
Remote Sens. 2018, 10(3), 415; https://doi.org/10.3390/rs10030415 - 08 Mar 2018
Cited by 6 | Viewed by 5280
Abstract
Considering kernels in Convolutional Neural Networks (CNNs) as detectors for local patterns, the K-means neural network clusters local patches extracted from training images and then fixes its kernels as the representative patches of each cluster, without further training. The number of labeled samples needed for training can thus be greatly reduced. One key property of these kernels is their spatial size, which determines their capacity for detecting local patterns and is expected to be task-specific. However, most of the literature determines the spatial size of these kernels heuristically. To address this problem, we propose to automatically determine the kernel size in order to better adapt the K-means neural network to hyperspectral imagery classification. Specifically, a novel kernel-size determination scheme is developed by measuring the clustering performance of local patches of different sizes. With a kernel of the determined size, more discriminative local patterns can be detected in the hyperspectral imagery, with which the classification performance of the K-means neural network is noticeably improved. Experimental results on two datasets demonstrate the effectiveness of the proposed method.
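
The kernel-size determination scheme lends itself to a compact sketch: extract patches at several candidate sizes, cluster each set with K-means, and keep the size whose clustering scores best. The silhouette score below is an assumed stand-in for the paper's clustering-performance measure, and the data are synthetic.

```python
# Sketch: pick the kernel (patch) size whose patch clustering scores best.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
band = rng.random((145, 145))           # stand-in for one hyperspectral band
candidate_sizes = [3, 5, 7, 9]
k = 32                                  # number of kernels / clusters

best_size, best_score = None, -1.0
for s in candidate_sizes:
    patches = extract_patches_2d(band, (s, s), max_patches=2000, random_state=0)
    X = patches.reshape(len(patches), -1)
    X = X - X.mean(axis=1, keepdims=True)           # per-patch normalisation
    labels = KMeans(n_clusters=k, n_init=4, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels, sample_size=1000, random_state=0)
    if score > best_score:
        best_size, best_score = s, score

print(f"selected kernel size: {best_size} (silhouette {best_score:.3f})")
# Re-fitting K-means at best_size gives cluster centres that then serve as
# fixed convolution kernels, as in the K-means neural network.
```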

24 pages, 3904 KiB  
Article
When Low Rank Representation Based Hyperspectral Imagery Classification Meets Segmented Stacked Denoising Auto-Encoder Based Spatial-Spectral Feature
by Cong Wang, Lei Zhang, Wei Wei and Yanning Zhang
Remote Sens. 2018, 10(2), 284; https://doi.org/10.3390/rs10020284 - 12 Feb 2018
Cited by 28 | Viewed by 6204
Abstract
When confronted with limited labelled samples, most studies adopt an unsupervised feature learning scheme and incorporate the extracted features into a traditional classifier (e.g., a support vector machine, SVM) to deal with hyperspectral imagery classification. However, these methods have limitations in generalizing well in challenging cases due to the limited representative capacity of the shallow feature learning model, as well as the insufficient robustness of a classifier that only depends on the supervision of labelled samples. To address these two problems simultaneously, we present an effective low-rank representation-based classification framework for hyperspectral imagery. In particular, a novel unsupervised segmented stacked denoising auto-encoder-based feature learning model is proposed to depict the spatial-spectral characteristics of each pixel in the imagery with a deep hierarchical structure. With the extracted features, a low-rank representation-based robust classifier is then developed, which takes advantage of both the supervision provided by labelled samples and the unsupervised correlation (e.g., intra-class similarity and inter-class dissimilarity) among unlabelled samples. Both the deep unsupervised feature learning and the robust classifier contribute to improving the classification accuracy with limited labelled samples. Extensive experiments on hyperspectral imagery classification demonstrate the effectiveness of the proposed framework.
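
A minimal sketch of one denoising auto-encoder layer, the building block from which the segmented stacked model above is assembled; layer sizes and the noise level are illustrative assumptions, and the low-rank classifier is not reproduced.

```python
# Sketch: one denoising auto-encoder layer of the kind the paper stacks.
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self, in_dim=200, hid_dim=64, noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std
        self.enc = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Sigmoid())
        self.dec = nn.Linear(hid_dim, in_dim)

    def forward(self, x):
        noisy = x + self.noise_std * torch.randn_like(x)  # corrupt the input
        h = self.enc(noisy)                               # robust code
        return self.dec(h), h

x = torch.rand(16, 200)                 # 16 pixels, 200 spectral bands
model = DenoisingAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                      # a few unsupervised steps
    recon, _ = model(x)
    loss = nn.functional.mse_loss(recon, x)   # reconstruct the clean input
    opt.zero_grad(); loss.backward(); opt.step()
```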

20 pages, 8900 KiB  
Article
An Efficient Hyperspectral Image Retrieval Method: Deep Spectral-Spatial Feature Extraction with DCGAN and Dimensionality Reduction Using t-SNE-Based NM Hashing
by Jing Zhang, Lu Chen, Li Zhuo, Xi Liang and Jiafeng Li
Remote Sens. 2018, 10(2), 271; https://doi.org/10.3390/rs10020271 - 10 Feb 2018
Cited by 34 | Viewed by 7445
Abstract
Hyperspectral images are one of the most important fundamental and strategic information resources, imaging the same ground object with hundreds of spectral bands varying from the ultraviolet to the microwave. With the emergence of huge volumes of high-resolution hyperspectral images produced by all sorts of imaging sensors, processing and analysis of these images requires effective retrieval techniques. Ensuring both retrieval accuracy and efficiency is a challenging task in the field of hyperspectral image retrieval. In this paper, an efficient hyperspectral image retrieval method is proposed. In principle, our method includes the following steps: (1) in order to obtain powerful representations for hyperspectral images, deep spectral-spatial features are extracted with the Deep Convolutional Generative Adversarial Networks (DCGAN) model; (2) considering the high dimensionality of deep spectral-spatial features, t-Distributed Stochastic Neighbor Embedding-based Nonlinear Manifold (t-SNE-based NM) hashing is utilized to perform dimensionality reduction by learning compact binary codes embedded on the intrinsic manifolds of the deep spectral-spatial features, balancing learning efficiency against retrieval accuracy; and (3) multi-index hashing in Hamming space is used to find similar hyperspectral images. Five comparative experiments are conducted to verify the effectiveness of the deep spectral-spatial features, the dimensionality reduction of t-SNE-based NM hashing, and the similarity measurement of multi-index hashing. The experimental results using NASA datasets show that our hyperspectral image retrieval method can achieve comparable or superior performance with less computational time.
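
Step (3), multi-index hashing, can be sketched compactly: split each binary code into disjoint substrings, index each substring in its own hash table, probe the tables to gather candidates, and re-rank candidates by full Hamming distance. The codes below are random placeholders for the learned t-SNE-based NM hash codes.

```python
# Sketch: multi-index hashing over binary codes for nearest-code retrieval.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
codes = rng.integers(0, 2, size=(1000, 64), dtype=np.uint8)  # database codes
m = 4                                   # number of disjoint substrings
chunks = np.split(codes, m, axis=1)     # four 16-bit substrings per code

# One hash table per substring (the "multi-index" part).
tables = [defaultdict(list) for _ in range(m)]
for i in range(m):
    for idx, sub in enumerate(chunks[i]):
        tables[i][sub.tobytes()].append(idx)

def query(q, top=10):
    # Pigeonhole principle: a code within Hamming radius r of q matches q
    # exactly in at least one substring once r < m; exact-match probing of
    # each table is the simplified candidate step shown here.
    cands = set()
    for i, sub in enumerate(np.split(q, m)):
        cands.update(tables[i].get(sub.tobytes(), []))
    ranked = sorted((int(np.sum(codes[c] != q)), c) for c in cands)
    return ranked[:top]                 # (distance, index) pairs

print(query(codes[42])[:3])             # the query's own code ranks first
```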

15 pages, 8383 KiB  
Article
End-to-End Airplane Detection Using Transfer Learning in Remote Sensing Images
by Zhong Chen, Ting Zhang and Chao Ouyang
Remote Sens. 2018, 10(1), 139; https://doi.org/10.3390/rs10010139 - 18 Jan 2018
Cited by 168 | Viewed by 11299
Abstract
Airplane detection in remote sensing images remains a challenging problem due to the complexity of backgrounds. In recent years, with the development of deep learning, object detection has also achieved great breakthroughs. For object detection tasks in natural images, such as the PASCAL (Pattern Analysis, Statistical Modelling and Computational Learning) VOC (Visual Object Classes) Challenge, the major trend of current development is to use a large amount of labeled classification data to pre-train a deep neural network as a base network, and then use a small amount of annotated detection data to fine-tune the network for detection. In this paper, we use object detection technology based on deep learning for airplane detection in remote sensing images. Drawing on characteristics specific to remote sensing images, we also propose some new data augmentation techniques. We further use transfer learning and adopt a single deep convolutional neural network with limited training samples to implement end-to-end trainable airplane detection. Classification and positioning are no longer divided into multistage tasks; end-to-end detection attempts to combine them for joint optimization, which ensures an optimal solution for the final stage. In our experiment, we use remote sensing images of airports collected from Google Earth. The experimental results show that the proposed algorithm is highly accurate and meaningful for remote sensing object detection.
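
The abstract does not spell out the augmentation techniques, so the sketch below shows only a generic, assumed version of rotation-based augmentation that exploits the arbitrary headings of aerial targets; for detection training, the bounding boxes would of course have to be transformed along with the image.

```python
# Sketch: generic rotation-based augmentation for nadir-view image chips.
# An assumed illustration, not the paper's specific techniques.
import torchvision.transforms as T
from PIL import Image

augment = T.Compose([
    T.RandomRotation(degrees=180),                # any heading is plausible
    T.RandomHorizontalFlip(p=0.5),
    T.RandomVerticalFlip(p=0.5),                  # valid for top-down views
    T.ColorJitter(brightness=0.2, contrast=0.2),  # sensor/illumination shift
    T.ToTensor(),
])

chip = Image.new("RGB", (256, 256))               # stand-in for an image chip
batch = [augment(chip) for _ in range(8)]         # eight augmented views
```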

2247 KiB  
Article
Effective Fusion of Multi-Modal Remote Sensing Data in a Fully Convolutional Network for Semantic Labeling
by Wenkai Zhang, Hai Huang, Matthias Schmitz, Xian Sun, Hongqi Wang and Helmut Mayer
Remote Sens. 2018, 10(1), 52; https://doi.org/10.3390/rs10010052 - 29 Dec 2017
Cited by 34 | Viewed by 6731
Abstract
In recent years, Fully Convolutional Networks (FCN) have led to a great improvement of semantic labeling for various applications including multi-modal remote sensing data. Although different fusion strategies have been reported for multi-modal data, there is no in-depth study of the reasons for performance limits. For example, it is unclear why an early fusion of multi-modal data in FCN does not lead to a satisfying result. In this paper, we investigate the contribution of individual layers inside FCN and propose an effective fusion strategy for the semantic labeling of color or infrared imagery together with elevation (e.g., Digital Surface Models). The sensitivity and contribution of layers concerning classes and multi-modal data are quantified by recall and descent rate of recall in a multi-resolution model. The contribution of different modalities to the pixel-wise prediction is analyzed, explaining the poor performance caused by the plain concatenation of different modalities. Finally, based on this analysis, an optimized scheme for the fusion of layers with image and elevation information into a single FCN model is derived. Experiments are performed on the ISPRS Vaihingen 2D Semantic Labeling dataset (infrared and RGB imagery as well as elevation) and the Potsdam dataset (RGB imagery and elevation). Comprehensive evaluations demonstrate the potential of the proposed approach.
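
A minimal sketch of the general idea of fusing modalities at the feature level inside a single FCN, rather than concatenating raw imagery and elevation at the input; the branch depths, channel counts, and the six-class head are illustrative assumptions, not the paper's derived architecture.

```python
# Sketch: feature-level fusion of imagery and elevation in one FCN.
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())

class FusionFCN(nn.Module):
    def __init__(self, n_classes=6):
        super().__init__()
        self.img_branch = block(3, 32)   # IR-R-G or RGB imagery
        self.dsm_branch = block(1, 32)   # normalised elevation (DSM)
        self.head = nn.Sequential(block(64, 64), nn.Conv2d(64, n_classes, 1))

    def forward(self, img, dsm):
        # Fuse learned features, not raw inputs ("early fusion").
        f = torch.cat([self.img_branch(img), self.dsm_branch(dsm)], dim=1)
        return self.head(f)              # per-pixel class scores

net = FusionFCN()
scores = net(torch.rand(1, 3, 128, 128), torch.rand(1, 1, 128, 128))
print(scores.shape)                      # torch.Size([1, 6, 128, 128])
```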

5079 KiB  
Article
Deformable ConvNet with Aspect Ratio Constrained NMS for Object Detection in Remote Sensing Imagery
by Zhaozhuo Xu, Xin Xu, Lei Wang, Rui Yang and Fangling Pu
Remote Sens. 2017, 9(12), 1312; https://doi.org/10.3390/rs9121312 - 13 Dec 2017
Cited by 109 | Viewed by 8928
Abstract
Convolutional neural networks (CNNs) have demonstrated their ability in object detection on very high resolution (VHR) remote sensing images. However, CNNs have obvious limitations in modeling geometric variations of remote sensing targets. In this paper, we introduce a CNN structure, namely deformable ConvNet, to address geometric modeling in object recognition. By adding offsets to the convolution layers, the feature maps of a CNN can be sampled at unfixed locations, enhancing the CNN's understanding of visual appearance. In our work, a deformable region-based fully convolutional network (R-FCN) was constructed by substituting the regular convolution layer with a deformable convolution layer. To use this deformable convolutional neural network (ConvNet) efficiently, a training mechanism was developed. We first set the pre-trained R-FCN natural image model as the default network parameters of the deformable R-FCN. Then, this deformable ConvNet was fine-tuned on VHR remote sensing images. To remedy the increase in line-like false region proposals, we developed aspect ratio constrained non-maximum suppression (arcNMS), which improves the precision of deformable ConvNet in detecting objects. An end-to-end approach was then developed by combining the deformable R-FCN, a smart fine-tuning strategy, and aspect ratio constrained NMS. The developed method outperformed a state-of-the-art benchmark in object detection without data augmentation.
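
Aspect-ratio-constrained NMS can be sketched as standard greedy NMS preceded by the rejection of extremely elongated boxes; the IoU and ratio thresholds below are illustrative assumptions, not the paper's values.

```python
# Sketch: aspect-ratio-constrained NMS in the spirit of arcNMS.
import numpy as np

def arc_nms(boxes, scores, iou_thr=0.5, max_ratio=6.0):
    """boxes: (N, 4) as [x1, y1, x2, y2]; returns kept original indices."""
    w = boxes[:, 2] - boxes[:, 0]
    h = boxes[:, 3] - boxes[:, 1]
    ratio = np.maximum(w / h, h / w)
    valid = np.where(ratio <= max_ratio)[0]      # drop line-like proposals
    boxes, scores = boxes[valid], scores[valid]

    order = scores.argsort()[::-1]               # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(valid[i])
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_o = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_o - inter)
        order = order[1:][iou <= iou_thr]        # suppress overlapping boxes
    return np.array(keep)
```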

17964 KiB  
Article
Arbitrary-Oriented Vehicle Detection in Aerial Imagery with Single Convolutional Neural Networks
by Tianyu Tang, Shilin Zhou, Zhipeng Deng, Lin Lei and Huanxin Zou
Remote Sens. 2017, 9(11), 1170; https://doi.org/10.3390/rs9111170 - 14 Nov 2017
Cited by 116 | Viewed by 10478
Abstract
Vehicle detection with orientation estimation in aerial images has received widespread interest, as it is important for intelligent traffic management. This is a challenging task, not only because of the complex background and relatively small size of the targets, but also because of the various orientations of vehicles in aerial images captured from the top view. Existing methods for oriented vehicle detection need several post-processing steps to generate final detection results with orientation, which are not efficient enough. Moreover, they can only obtain discrete orientation information for each target. In this paper, we present an end-to-end single convolutional neural network that generates arbitrarily-oriented detection results directly. Our approach, named Oriented_SSD (Single Shot MultiBox Detector, SSD), uses a set of default boxes with various scales on each feature map location to produce detection bounding boxes. Meanwhile, offsets are predicted for each default box to better match the object shape; these contain the angle parameter for generating oriented bounding boxes. Evaluation results on the public DLR Vehicle Aerial dataset and the Vehicle Detection in Aerial Imagery (VEDAI) dataset demonstrate that our method can detect both the location and orientation of the vehicle with high accuracy and at fast speed. For test images in the DLR Vehicle Aerial dataset with a size of 5616 × 3744, our method achieves 76.1% average precision (AP) and 78.7% correct direction classification in 5.17 s on an NVIDIA GTX-1060.
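
An angle-regressing detector such as Oriented_SSD ultimately has to turn a (cx, cy, w, h, angle) prediction into four corner points for evaluation; the sketch below shows that decoding step as plain geometry, with illustrative values.

```python
# Sketch: decode an oriented box (cx, cy, w, h, angle) into corner points.
import numpy as np

def obb_corners(cx, cy, w, h, theta_deg):
    t = np.deg2rad(theta_deg)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    # Axis-aligned corners around the origin, then rotate and translate.
    half = np.array([[-w, -h], [w, -h], [w, h], [-w, h]]) / 2.0
    return half @ R.T + np.array([cx, cy])

print(obb_corners(100.0, 50.0, 40.0, 16.0, 30.0))  # 4 x 2 corner array
```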

10268 KiB  
Article
Transfer Learning with Deep Convolutional Neural Network for SAR Target Classification with Limited Labeled Data
by Zhongling Huang, Zongxu Pan and Bin Lei
Remote Sens. 2017, 9(9), 907; https://doi.org/10.3390/rs9090907 - 31 Aug 2017
Cited by 341 | Viewed by 19565
Abstract
Tremendous progress has been made in object recognition with deep convolutional neural networks (CNNs), thanks to the availability of large-scale annotated datasets. With their ability to learn highly hierarchical image feature extractors, deep CNNs are also expected to solve Synthetic Aperture Radar (SAR) target classification problems. However, the limited amount of labeled SAR target data is a handicap to training a deep CNN. To solve this problem, we propose a transfer learning based method, making knowledge learned from sufficient unlabeled SAR scene images transferable to labeled SAR target data. We design an assembled CNN architecture consisting of a classification pathway and a reconstruction pathway, together with a feedback bypass. Instead of training a deep network from scratch with the limited dataset, a large number of unlabeled SAR scene images are first used to train the reconstruction pathway with stacked convolutional auto-encoders (SCAE). Then, these pre-trained convolutional layers are reused to transfer knowledge to SAR target classification tasks, with the feedback bypass introducing the reconstruction loss simultaneously. The experimental results demonstrate that transfer learning leads to better performance when labeled training data are scarce, and that the additional feedback bypass with reconstruction loss helps to boost the capability of the classification pathway.
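
A minimal sketch of the two-phase idea: pre-train convolutional layers as a convolutional auto-encoder on unlabeled scenes, then reuse the encoder under a classification head on labeled target chips. Shapes are illustrative, the feedback bypass joining the two pathways is not reproduced, and the 10-class head assumes an MSTAR-style target set.

```python
# Sketch: SCAE-style pre-training, then encoder reuse for classification.
import torch
import torch.nn as nn

encoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
)
decoder = nn.Sequential(
    nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),
    nn.ReLU(),
    nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),
)

# 1) Unsupervised phase: reconstruct unlabeled SAR scene patches.
scenes = torch.rand(32, 1, 64, 64)
recon = decoder(encoder(scenes))
pretrain_loss = nn.functional.mse_loss(recon, scenes)

# 2) Supervised phase: reuse the pre-trained encoder, add a classifier head.
classifier = nn.Sequential(encoder, nn.Flatten(),
                           nn.Linear(32 * 16 * 16, 10))
logits = classifier(torch.rand(8, 1, 64, 64))    # 8 labeled target chips
print(logits.shape)                               # torch.Size([8, 10])
```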

Other


13 pages, 14851 KiB  
Technical Note
Long Short-Term Memory Neural Networks for Online Disturbance Detection in Satellite Image Time Series
by Yun-Long Kong, Qingqing Huang, Chengyi Wang, Jingbo Chen, Jiansheng Chen and Dongxu He
Remote Sens. 2018, 10(3), 452; https://doi.org/10.3390/rs10030452 - 13 Mar 2018
Cited by 58 | Viewed by 8096
Abstract
A satellite image time series (SITS) contains a significant amount of temporal information. By analysing this type of data, the pattern of changes in the object of concern can be explored. The natural change in the Earth’s surface is relatively slow and exhibits a pronounced pattern. Some natural events (for example, fires, floods, plant diseases, and insect pests) and human activities (for example, deforestation and urbanisation) disturb this pattern and cause relatively profound changes on the Earth’s surface. These events are usually referred to as disturbances. However, disturbances in ecosystems are not easy to detect from SITS data, because SITS contain combined information on disturbances, phenological variations, and noise in the remote sensing data. In this paper, a novel framework is proposed for online disturbance detection from SITS. The framework is based on long short-term memory (LSTM) networks. First, LSTM networks are trained on historical SITS. The trained LSTM networks are then used to predict new time series data. Finally, the predicted data are compared with the observed data, and noticeable deviations reveal disturbances. Experimental results using 16-day composites of the Moderate Resolution Imaging Spectroradiometer product MOD13Q1 illustrate the effectiveness and stability of the proposed approach for online disturbance detection.
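
The three-step framework (train an LSTM on historical series, predict the next observation, flag large deviations) reduces to a short sketch; the series, network size, and deviation threshold below are synthetic placeholders.

```python
# Sketch: LSTM forecasting with deviation thresholding for one pixel's series.
import torch
import torch.nn as nn

class Forecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                    # x: (batch, time, 1)
        h, _ = self.lstm(x)
        return self.out(h[:, -1])            # predict the next observation

model = Forecaster()
history = torch.rand(1, 23, 1)               # one year of 16-day composites
pred = model(history)

observed = torch.tensor([[0.9]])             # newly acquired value
threshold = 0.3                              # assumed deviation tolerance
if (observed - pred).abs().item() > threshold:
    print("possible disturbance at this pixel")
```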

16 pages, 9106 KiB  
Technical Note
Remote Sensing Scene Classification Based on Convolutional Neural Networks Pre-Trained Using Attention-Guided Sparse Filters
by Jingbo Chen, Chengyi Wang, Zhong Ma, Jiansheng Chen, Dongxu He and Stephen Ackland
Remote Sens. 2018, 10(2), 290; https://doi.org/10.3390/rs10020290 - 13 Feb 2018
Cited by 47 | Viewed by 6480
Abstract
Semantic-level land-use scene classification is a challenging problem, in which deep learning methods, e.g., convolutional neural networks (CNNs), have shown remarkable capacity. However, a lack of sufficient labeled images has proved a hindrance to increasing the land-use scene classification accuracy of CNNs. To address this problem, this paper proposes a CNN pre-training method guided by a human visual attention mechanism. Specifically, a computational visual attention model is used to automatically extract salient regions from unlabeled images. Then, sparse filters are adopted to learn features from these salient regions, and the learnt parameters are used to initialize the convolutional layers of the CNN. Finally, the CNN is further fine-tuned on labeled images. Experiments are performed on the UCMerced and AID datasets, which show that when combined with a demonstrative CNN, our method can achieve 2.24% higher accuracy than a plain CNN, and can obtain an overall accuracy of 92.43% when combined with AlexNet. The results indicate that the proposed method can effectively improve CNN performance using easy-to-access unlabeled images, and thus will enhance land-use scene classification performance, especially when a large-scale labeled dataset is unavailable.
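
Sparse filtering itself has a very small objective, shown below in the standard formulation (normalise each feature, then each sample, then minimise the L1 norm of the result); the attention-guided extraction of salient patches is not reproduced, and the patch and filter sizes are illustrative.

```python
# Sketch: the standard sparse-filtering objective on linear features.
import torch

def sparse_filtering_loss(f, eps=1e-8):
    f = f.abs()                                    # activation magnitudes
    f = f / (f.norm(dim=0, keepdim=True) + eps)    # normalise each feature
    f = f / (f.norm(dim=1, keepdim=True) + eps)    # normalise each sample
    return f.sum()                                 # L1 sparsity of the result

patches = torch.randn(1000, 64)                    # salient 8x8 patches, flattened
W = torch.randn(64, 256, requires_grad=True)       # 256 linear filters
loss = sparse_filtering_loss(patches @ W)
loss.backward()                 # W.grad now drives any gradient-based update
```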

3284 KiB  
Technical Note
Remote Sensing Image Classification Based on Stacked Denoising Autoencoder
by Peng Liang, Wenzhong Shi and Xiaokang Zhang
Remote Sens. 2018, 10(1), 16; https://doi.org/10.3390/rs10010016 - 22 Dec 2017
Cited by 71 | Viewed by 7596
Abstract
To address the accuracy bottleneck that conventional remote sensing image classification methods have run into, a new remote sensing image classification method inspired by deep learning is proposed, based on the Stacked Denoising Autoencoder. First, the deep network model is built by stacking layers of denoising autoencoders. Then, with noisy input, an unsupervised greedy layer-wise training algorithm is used to train each layer in turn, yielding more robust representations; features are then refined by supervised learning with a Back Propagation (BP) neural network, and the whole network is optimized by error back propagation. Finally, Gaofen-1 satellite (GF-1) remote sensing data are used for evaluation; the total accuracy and kappa coefficient reach 95.7% and 0.955, respectively, which are higher than those of a Support Vector Machine and a BP neural network. The experimental results show that the proposed method can effectively improve the accuracy of remote sensing image classification.
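
The greedy layer-wise scheme described above can be sketched in a few lines: train each denoising auto-encoder layer in isolation on the (clean) output of the previous layer, then stack the encoders for supervised fine-tuning. Layer sizes, noise level, and the four-class head are illustrative assumptions.

```python
# Sketch: greedy layer-wise pre-training of stacked denoising auto-encoders.
import torch
import torch.nn as nn

dims = [64, 32, 16]                       # input bands -> hidden sizes
x = torch.rand(256, dims[0])              # 256 pixels as training samples

features, encoders = x, []
for d_in, d_out in zip(dims[:-1], dims[1:]):
    enc = nn.Sequential(nn.Linear(d_in, d_out), nn.Sigmoid())
    dec = nn.Linear(d_out, d_in)
    opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-3)
    for _ in range(100):                  # train this layer in isolation
        noisy = features + 0.1 * torch.randn_like(features)
        loss = nn.functional.mse_loss(dec(enc(noisy)), features)
        opt.zero_grad(); loss.backward(); opt.step()
    features = enc(features).detach()     # input to the next layer
    encoders.append(enc)

# The stacked encoders are then fine-tuned end-to-end with a supervised
# BP network on labeled samples, as described in the abstract.
stack = nn.Sequential(*encoders, nn.Linear(dims[-1], 4))  # e.g., 4 classes
```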
