Remote Sensing for Spatial Information Extraction and Process

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Remote Sensors".

Deadline for manuscript submissions: closed (25 February 2024) | Viewed by 3061

Special Issue Editors


Guest Editor
1. Myyrmäki Campus, School of Smart and Clean Solutions, Metropolia University of Applied Sciences, Leiritie 1, Fi-01600 Vantaa, Finland
2. Finnish Geospatial Research Institute (FGI), National Land Survey of Finland (NLS), Vuorimiehentie 5, FI-02150 Espoo, Finland
Interests: remote sensing; photogrammetry; forest science; deep learning; image processing; computer vision

Guest Editor
Senior Research Scientist, National Land Survey of Finland, Helsinki, Finland
Interests: photogrammetry; remote sensing

Special Issue Information

Dear Colleagues,

Spatial information extraction is a complex process that relies on advanced sensors, specialized computing hardware, and state-of-the-art methods. Over the past few decades, it has been revolutionized by advances on several fronts, such as novel data-capturing platforms, improved algorithms, and powerful computers. This progress has led to more accurate, application-focused systems that are able to extract up-to-date and accurately georeferenced data and information.

This Special Issue aims to collect papers that address any aspect of this complex ecosystem. Submissions may report progress in, for example, sensor development, hardware design, or advanced algorithms for any spatial information processing paradigm. Articles may address, but are not limited to, the following topics:

  • Machine learning applications in spatial data generation and processing;
  • Recent advances in mobile mapping system designs and applications;
  • Novel methods in object detection from aerial and satellite images;
  • Novel hardware designs in UAV systems;
  • Novel methods for processing satellite data;
  • Deep learning applications in spatial information generation and processing;
  • Novel spatial system calibration methods;
  • Geospatial model development;
  • Optical system design;
  • Infrastructure extraction (road, building, etc.) from aerial and satellite data;
  • Semantic segmentation from aerial and satellite images;
  • Novel real-time algorithms for generating/processing spatial information.

Dr. Ehsan Khoramshahi
Dr. Roope Näsi
Dr. Yuwei Chen
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • spatial information processing
  • machine learning
  • mobile mapping system
  • UAV methods
  • semantic segmentation
  • real-time processing
  • satellite remote sensing
  • deep learning
  • road extraction

Published Papers (3 papers)


Research

18 pages, 27356 KiB  
Article
Deep Ordinal Classification in Forest Areas Using Light Detection and Ranging Point Clouds
by Alejandro Morales-Martín, Francisco-Javier Mesas-Carrascosa, Pedro Antonio Gutiérrez, Fernando-Juan Pérez-Porras, Víctor Manuel Vargas and César Hervás-Martínez
Sensors 2024, 24(7), 2168; https://doi.org/10.3390/s24072168 - 28 Mar 2024
Viewed by 471
Abstract
Recent advances in Deep Learning and aerial Light Detection And Ranging (LiDAR) have offered the possibility of refining the classification and segmentation of 3D point clouds to contribute to the monitoring of complex environments. In this context, the present study focuses on developing an ordinal classification model in forest areas where LiDAR point clouds can be classified into four distinct ordinal classes: ground, low vegetation, medium vegetation, and high vegetation. To do so, an effective soft labeling technique based on a novel proposed generalized exponential function (CE-GE) is applied to the PointNet network architecture. Statistical analyses based on Kolmogorov–Smirnov and Student’s t-test reveal that the CE-GE method achieves the best results for all the evaluation metrics compared to other methodologies. Regarding the confusion matrices of the best alternative conceived and the standard categorical cross-entropy method, the smoothed ordinal classification obtains a more consistent classification compared to the nominal approach. Thus, the proposed methodology significantly improves the point-by-point classification of PointNet, reducing the errors in distinguishing between the middle classes (low vegetation and medium vegetation).
(This article belongs to the Special Issue Remote Sensing for Spatial Information Extraction and Process)
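The soft-labeling idea described in this abstract can be illustrated generically: instead of a one-hot target, each ordinal class receives label mass that decays with its ordinal distance from the true class. The exact CE-GE function is defined in the paper; the sketch below uses a plain exponential decay, and the decay rate `alpha` is an illustrative assumption, not the authors' parameterization.

```python
import numpy as np

def ordinal_soft_labels(true_class: int, n_classes: int, alpha: float = 2.0) -> np.ndarray:
    """Soft label vector whose mass decays exponentially with ordinal distance
    from the true class (a generic stand-in for the paper's CE-GE labels)."""
    distances = np.abs(np.arange(n_classes) - true_class)
    weights = np.exp(-alpha * distances)
    return weights / weights.sum()  # normalize to a probability distribution

# Four ordinal classes: ground, low, medium, and high vegetation
labels = ordinal_soft_labels(true_class=1, n_classes=4)
```

Training against such a target with cross-entropy penalizes a prediction of "high vegetation" more heavily than "ground" when the true class is "low vegetation", which is the behavior an ordinal loss is meant to encode.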

15 pages, 17077 KiB  
Article
Extraction of Coastal Levees Using U-Net Model with Visible and Topographic Images Observed by High-Resolution Satellite Sensors
by Hao Xia and Hideyuki Tonooka
Sensors 2024, 24(5), 1444; https://doi.org/10.3390/s24051444 - 23 Feb 2024
Viewed by 605
Abstract
Coastal levees play a role in protecting coastal areas from storm surges and high waves, and they provide important input information for inundation damage simulations. However, coastal levee data with uniformity and sufficient accuracy for inundation simulations are not always well developed. Against this background, this study proposed a method to extract coastal levees by inputting high spatial resolution optical satellite image products (RGB images, digital surface models (DSMs), and slope images that can be generated from DSM images), which have high data availability at the locations and times required for simulation, into a deep learning model. The model is based on U-Net, and post-processing for noise removal was introduced to further improve its accuracy. We also proposed a method to calculate levee height using a local maximum filter by giving DSM values to the extracted levee pixels. The validation was conducted in the coastal area of Ibaraki Prefecture in Japan as a test area. The levee mask images for training were manually created by combining these data with satellite images and Google Street View, because the levee GIS data created by the Ibaraki Prefectural Government were incomplete in some parts. First, the deep learning models were compared and evaluated, and it was shown that U-Net was more accurate than Pix2Pix and BBS-Net in identifying levees. Next, three cases of input images were evaluated: (Case 1) RGB image only, (Case 2) RGB and DSM images, and (Case 3) RGB, DSM, and slope images. Case 3 was found to be the most accurate, with an average Matthews correlation coefficient of 0.674. The effectiveness of noise removal post-processing was also demonstrated. In addition, an example of the calculation of levee heights was presented and evaluated for validity. In conclusion, this method was shown to be effective in extracting coastal levees. The evaluation of generalizability and use in actual inundation simulations are future tasks. 
(This article belongs to the Special Issue Remote Sensing for Spatial Information Extraction and Process)
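The slope images used as the third input channel in this study can be derived from a DSM with finite differences. A minimal sketch follows; the pixel size and the exact slope convention used by the authors are assumptions here.

```python
import numpy as np

def slope_degrees(dsm: np.ndarray, pixel_size: float = 1.0) -> np.ndarray:
    """Per-pixel slope (in degrees) from a DSM via central-difference gradients."""
    dz_dy, dz_dx = np.gradient(dsm, pixel_size)  # gradients along rows, then columns
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# A synthetic ramp rising 1 m per 1 m pixel in x should slope at 45 degrees
ramp = np.tile(np.arange(5, dtype=float), (5, 1))
slope = slope_degrees(ramp)
```

Stacking the RGB bands, the DSM, and this derived slope layer gives the three-input configuration (Case 3) that the abstract reports as most accurate.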

21 pages, 16243 KiB  
Article
The Use of Deep Learning Methods for Object Height Estimation in High Resolution Satellite Images
by Szymon Glinka, Jarosław Bajer, Damian Wierzbicki, Kinga Karwowska and Michal Kedzierski
Sensors 2023, 23(19), 8162; https://doi.org/10.3390/s23198162 - 29 Sep 2023
Cited by 3 | Viewed by 1554
Abstract
Processing single high-resolution satellite images may provide a lot of important information about the urban landscape or other applications related to the inventory of high-altitude objects. Unfortunately, the direct extraction of specific features from single satellite scenes can be difficult. However, the appropriate use of advanced processing methods based on deep learning algorithms allows us to obtain valuable information from these images. The height of buildings, for example, may be determined based on the extraction of shadows from an image and taking into account other metadata, e.g., the sun elevation angle and satellite azimuth angle. Classic methods of processing satellite imagery based on thresholding or simple segmentation are not sufficient because, in most cases, satellite scenes are not spectrally heterogenous. Therefore, the use of classical shadow detection methods is difficult. The authors of this article explore the possibility of using high-resolution optical satellite data to develop a universal algorithm for a fully automated estimation of object heights within the land cover by calculating the length of the shadow of each founded object. Finally, a set of algorithms allowing for a fully automatic detection of objects and shadows from satellite and aerial imagery and an iterative analysis of the relationships between them to calculate the heights of typical objects (such as buildings) and atypical objects (such as wind turbines) is proposed. The city of Warsaw (Poland) was used as the test area. LiDAR data were adopted as the reference measurement. As a result of final analyses based on measurements from several hundred thousand objects, the global accuracy obtained was ±4.66 m.
(This article belongs to the Special Issue Remote Sensing for Spatial Information Extraction and Process)
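The height-from-shadow relation this abstract describes reduces, in its simplest form, to trigonometry: on flat terrain, an object's height is its shadow length times the tangent of the sun elevation angle. The sketch below shows only this idealized case; corrections for viewing geometry (which the paper handles via the satellite azimuth angle) and for sloping ground are omitted.

```python
import math

def height_from_shadow(shadow_length_m: float, sun_elevation_deg: float) -> float:
    """Object height from shadow length on flat terrain: h = L * tan(elevation)."""
    return shadow_length_m * math.tan(math.radians(sun_elevation_deg))

# A 20 m shadow under a sun at 45 degrees elevation implies a ~20 m object
h = height_from_shadow(20.0, 45.0)
```

The hard part of the pipeline is therefore not this formula but reliably detecting each object and measuring its shadow length in the image, which is where the deep learning detection stage comes in.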
