A Review of Computer Vision for Remote Sensing Imagery

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (29 January 2022) | Viewed by 9836

Special Issue Editors

Dr. Bo Du
School of Computer Science, Wuhan University, Wuhan 430072, China
Interests: artificial intelligence; data mining; computer vision; image processing

Dr. Shuo Shang
School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
Interests: big data; artificial intelligence; urban computing

Dr. Chang Xu
School of Computer Science, University of Sydney, Camperdown, NSW 2006, Australia
Interests: machine learning algorithms; applications in computer vision

Dr. Yutian Lin
School of Computer Science, Wuhan University, Wuhan 430072, China
Interests: computer vision; image retrieval; representation learning

Special Issue Information

Dear Colleagues,

Today, an exponentially increasing amount of Earth-observation data are being collected automatically, including remote sensing data, multi-modality images, traffic streaming data, vehicle trajectories, and so on. These data comprise both images and video sequences at different resolutions, constantly monitoring the Earth's surface, the mobility of humans, and the interactions between humans and the Earth. Nevertheless, how to make full use of these abundant Earth-observation data remains an open challenge. On the other hand, with the rapid development of computing, computer vision and deep learning techniques have proven effective in many applications. This Special Issue aims to exploit massive Earth-observation datasets to deliver information for numerous applications, such as urban planning, intelligent transportation, and public safety, through novel computer vision and deep learning methods. Papers presenting fundamental theoretical analyses as well as those demonstrating applications to real-world problems are welcome. For this Special Issue, we welcome the most recent advances related to, but not limited to, the following topics:

  • Computer vision methods for remote sensing
  • Deep learning architectures for remote sensing
  • Machine learning for remote sensing
  • Classification/Detection/Segmentation
  • Anomaly/novelty detection for remote sensing
  • Remote sensing data analysis
  • Synthetic remote sensing data generation
  • Explainable deep learning for time series/image/multimedia data
  • Traffic pattern analysis and intelligent transportation
  • Novel applications/Metrics/Benchmark datasets

Dr. Bo Du
Dr. Shuo Shang
Dr. Chang Xu
Dr. Yutian Lin
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Computer vision methods for remote sensing
  • Deep learning architectures for remote sensing
  • Machine learning for remote sensing
  • Classification/detection/segmentation
  • Anomaly/novelty detection for remote sensing
  • Remote sensing data analysis
  • Synthetic remote sensing data generation
  • Explainable deep learning for time series/image/multimedia data
  • Traffic pattern analysis and intelligent transportation
  • Novel applications/metrics/benchmark datasets

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)


Research

25 pages, 3476 KiB  
Article
Decision-Level Fusion with a Pluginable Importance Factor Generator for Remote Sensing Image Scene Classification
by Junge Shen, Chi Zhang, Yu Zheng and Ruxin Wang
Remote Sens. 2021, 13(18), 3579; https://doi.org/10.3390/rs13183579 - 8 Sep 2021
Cited by 6 | Viewed by 2375
Abstract
Remote sensing image scene classification is an important task in remote sensing image applications and benefits from the strong performance of deep convolutional neural networks (CNNs). When applying deep models to this task, the challenges are, on the one hand, that targets with highly different scales may exist in the same image and small targets can be lost in the deep feature maps of CNNs; and, on the other hand, that remote sensing image data exhibit high inter-class similarity and high intra-class variance. Both factors can limit the performance of deep models, which motivates us to develop an adaptive decision-level information fusion framework that can be incorporated with any CNN backbone. Specifically, given a CNN backbone that predicts multiple classification scores based on the feature maps of different layers, we develop a pluginable importance factor generator that predicts a factor for each score. The factors measure how confident the scores in different layers are with respect to the final output. Formally, the final score is obtained by a class-wise, weighted summation of the scores and the corresponding factors. To reduce the co-adaptation effect among the scores of different layers, we propose a stochastic decision-level fusion training strategy that enables each classification score to randomly participate in the decision-level fusion. Experiments on four popular datasets, including the UC Merced Land-Use dataset, the RSSCN7 dataset, the AID dataset, and the NWPU-RESISC45 dataset, demonstrate the superiority of the proposed method over other state-of-the-art methods. Full article
(This article belongs to the Special Issue A Review of Computer Vision for Remote Sensing Imagery)
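As a rough illustration of the decision-level fusion idea described in the abstract, the following PyTorch-style sketch weights per-layer classification scores with learned importance factors and lets each score randomly participate during training. The head structure, the factor generator, and the drop probability are illustrative assumptions, not the authors' exact design.

    # Minimal sketch of decision-level fusion with per-layer importance factors (assumed design).
    import torch
    import torch.nn as nn

    class DecisionLevelFusion(nn.Module):
        def __init__(self, num_classes, feat_dims, drop_prob=0.5):
            super().__init__()
            # One classifier head per backbone layer (scores from different depths).
            self.heads = nn.ModuleList(nn.Linear(d, num_classes) for d in feat_dims)
            # Pluginable importance-factor generator: one factor per layer score.
            self.factor_gen = nn.ModuleList(nn.Linear(d, 1) for d in feat_dims)
            self.drop_prob = drop_prob

        def forward(self, pooled_feats):  # list of per-layer pooled features, each (B, d_i)
            scores = [head(f) for head, f in zip(self.heads, pooled_feats)]
            factors = torch.cat([g(f) for g, f in zip(self.factor_gen, pooled_feats)], dim=1)
            factors = torch.softmax(factors, dim=1)                  # (B, L)
            if self.training:
                # Stochastic decision-level fusion: each layer's score randomly
                # participates, reducing co-adaptation among layers.
                mask = (torch.rand_like(factors) > self.drop_prob).float()
                factors = factors * mask
                factors = factors / factors.sum(dim=1, keepdim=True).clamp(min=1e-6)
            stacked = torch.stack(scores, dim=1)                     # (B, L, C)
            return (factors.unsqueeze(-1) * stacked).sum(dim=1)      # weighted summation over layers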

17 pages, 7701 KiB  
Article
Object-Oriented Building Contour Optimization Methodology for Image Classification Results via Generalized Gradient Vector Flow Snake Model
by Jingxin Chang, Xianjun Gao, Yuanwei Yang and Nan Wang
Remote Sens. 2021, 13(12), 2406; https://doi.org/10.3390/rs13122406 - 19 Jun 2021
Cited by 10 | Viewed by 3264
Abstract
Building boundary optimization is an essential post-processing step for building extraction (by image classification). However, current boundary optimization methods based on smoothing or line-fitting principles are unable to optimize complex buildings. In response to this limitation, this paper proposes an object-oriented building contour optimization method via an improved generalized gradient vector flow (GGVF) snake model, starting from the initial building contours obtained by a classification method. First, to reduce interference from adjacent non-building objects, each building object is clipped via its extended minimum bounding rectangle (MBR). Second, adaptive-threshold Canny edge detection is applied to each building image to detect edges, and the progressive probabilistic Hough transform (PPHT) is applied to the edge result to extract line segments. For cases with missing or wrong line segments along some edges, a hierarchical line segment reconstruction method is designed to obtain complete contour constraint segments. Third, accurate contour constraint segments for the GGVF snake model are designed to quickly find the target contour. With the help of the initial contour and the constraint edge map, the GGVF force field is computed, and the related optimization principle can be applied to complex buildings. Experimental results validate the robustness and effectiveness of the proposed method, which achieves higher contour optimization accuracy and better overall performance than the reference methods. The method can serve as an effective post-processing step to strengthen the accuracy of building extraction results. Full article
(This article belongs to the Special Issue A Review of Computer Vision for Remote Sensing Imagery)
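For readers unfamiliar with the edge and line-segment extraction stage mentioned in the abstract, the sketch below shows an OpenCV-based version of adaptive-threshold Canny detection followed by PPHT. The median-based threshold heuristic and all parameters are illustrative assumptions; the hierarchical segment reconstruction and the GGVF snake optimization are not reproduced here.

    # Minimal sketch of adaptive Canny + PPHT line-segment extraction (assumed parameters).
    import cv2
    import numpy as np

    def building_edge_segments(clipped_patch):
        """clipped_patch: grayscale image of one building object clipped via its extended MBR."""
        gray = clipped_patch.astype(np.uint8)
        # Adaptive Canny thresholds derived from the median intensity (a common heuristic).
        median = np.median(gray)
        lower = int(max(0, 0.66 * median))
        upper = int(min(255, 1.33 * median))
        edges = cv2.Canny(gray, lower, upper)
        # Progressive probabilistic Hough transform (PPHT) to extract candidate contour segments.
        segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                                   threshold=30, minLineLength=15, maxLineGap=5)
        if segments is None:
            segments = np.empty((0, 1, 4), dtype=int)
        return edges, segments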

17 pages, 1578 KiB  
Article
Unsupervised Learning of Depth from Monocular Videos Using 3D-2D Corresponding Constraints
by Fusheng Jin, Yu Zhao, Chuanbing Wan, Ye Yuan and Shuliang Wang
Remote Sens. 2021, 13(9), 1764; https://doi.org/10.3390/rs13091764 - 1 May 2021
Cited by 3 | Viewed by 2168
Abstract
Depth estimation can provide tremendous help for object detection, localization, path planning, etc. However, existing methods based on deep learning have high computing-power requirements and often cannot be applied directly on autonomous moving platforms (AMPs). Fifth-generation (5G) mobile and wireless communication systems have attracted the attention of researchers because they provide the network foundation for cloud and edge computing, which makes it possible to utilize deep learning methods on AMPs. This paper proposes a depth prediction method for AMPs based on unsupervised learning, which learns from video sequences and simultaneously estimates the depth structure of the scene and the ego-motion. Compared with existing unsupervised learning methods, our method makes the spatial correspondence among pixels consistent with the image regions by smoothing the 3D correspondence vector field based on the 2D image, which effectively improves the depth prediction ability of the neural network. Experiments on the KITTI driving dataset demonstrate that our method outperforms previous learning-based methods, and results on the Apolloscape and Cityscapes datasets show that the proposed method generalizes well. Full article
(This article belongs to the Special Issue A Review of Computer Vision for Remote Sensing Imagery)
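One way to read the 3D-2D corresponding constraint described in the abstract is as an edge-aware smoothness term on a per-pixel 3D correspondence field, modulated by image gradients. The sketch below shows such a term in PyTorch; the tensor shapes, the exponential weighting, and the function name are illustrative assumptions rather than the authors' exact formulation.

    # Minimal sketch of an edge-aware smoothness loss on a 3D correspondence field (assumed form).
    import torch

    def edge_aware_smoothness(corr_field, image):
        """corr_field: (B, 3, H, W) per-pixel 3D correspondence vectors.
        image: (B, 3, H, W) reference frame used to modulate the smoothing."""
        # First-order differences of the 3D field along x and y.
        d_field_x = (corr_field[:, :, :, 1:] - corr_field[:, :, :, :-1]).abs()
        d_field_y = (corr_field[:, :, 1:, :] - corr_field[:, :, :-1, :]).abs()
        # Image gradients: smooth strongly in homogeneous regions, weakly at edges,
        # so the field stays consistent with the 2D image structure.
        d_img_x = (image[:, :, :, 1:] - image[:, :, :, :-1]).abs().mean(1, keepdim=True)
        d_img_y = (image[:, :, 1:, :] - image[:, :, :-1, :]).abs().mean(1, keepdim=True)
        w_x = torch.exp(-d_img_x)
        w_y = torch.exp(-d_img_y)
        return (d_field_x * w_x).mean() + (d_field_y * w_y).mean()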
