Latest Developments in 3D Mapping with Unmanned Aerial Vehicles

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Urban Remote Sensing".

Deadline for manuscript submissions: closed (31 October 2020) | Viewed by 37768

Special Issue Editors


Dr. Friedrich Fraundorfer
Guest Editor
Graz University of Technology, Institute of Computer Graphics & Vision, Inffeldgasse 16/II, 8010 Graz, Austria
Interests: computer vision; unmanned aerial vehicles; robotics; artificial intelligence

Dr. Fabio Remondino
Guest Editor
3D Optical Metrology (3DOM) Unit, Bruno Kessler Foundation (FBK), 38123 Trento, Italy
Interests: photogrammetry; laser scanning; optical metrology; 3D; AI; quality control

Special Issue Information

Dear Colleagues,

Unmanned aerial vehicles (UAVs) have become indispensable for remote sensing applications. They have opened up fascinating possibilities to gather data in ways that were not feasible before. In many scenarios, UAVs are now a viable alternative to traditional airborne sensors and, moreover, they have extended the use of aerial data to application scenarios where it had previously not been employed.

One specific field of application that has benefited from these developments is 3D reconstruction and mapping. Thanks to their small size and weight, UAVs can carry sensors (cameras, laser scanners, etc.) very close to the structures of interest and thus produce high-resolution measurements. UAVs can move into positions that provide unobstructed views of the surveyed structures and can easily provide full coverage.

These properties make UAVs very well suited for use, e.g., in urban environments, as they allow for the generation of high-fidelity maps, orthoimages, or 3D digital reconstructions of as-built structures like buildings or other man-made infrastructure. Other application domains where UAVs are frequently used are environmental 3D mapping and monitoring, change detection analyses, the digitalization of natural heritage, precision farming, digital twinning, etc.

Algorithms and techniques for proper sensor handling and data processing play a major role in the success of UAV platforms and data. Algorithms for 3D reconstruction and 3D mapping are crucial for generating high-quality 3D measurements, and despite many years of UAV investigations and applications, various R&D issues related to such platforms remain open.

The Special Issue is proposed as a cross-discipline, cross-sector Issue, with the aim of advancing the state of knowledge on UAVs for 3D mapping. In particular, we solicit papers presenting investigations with UAV platforms and remote sensing data acquired with these platforms on the following topics:

- Large-scale mapping and 3D reconstruction;
- Autonomous navigation;
- 3D documentation of complex scenarios;
- Onboard SLAM;
- Online and real-time processing;
- Data fusion (integration of UAV data with other sources);
- Machine/deep learning for UAV perception (real-time object detection, semantic classification for navigation, etc.);
- Applications in non-topographic fields (agriculture, forestry, etc.).

Dr. Friedrich Fraundorfer
Dr. Fabio Remondino
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, you can access the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • UAV
  • Drone
  • Photogrammetry
  • 3D reconstruction
  • 3D mapping
  • Stereo matching
  • Tie-point matching
  • Semantic 3D
  • Orthophoto generation
  • DSM generation
  • SLAM
  • Data fusion

Published Papers (9 papers)


Research

17 pages, 9469 KiB  
Article
Investigating the Susceptibility to Failure of a Rock Cliff by Integrating Structure-from-Motion Analysis and 3D Geomechanical Modelling
by Michele Perrotti, Danilo Godone, Paolo Allasia, Marco Baldo, Nunzio Luciano Fazio and Piernicola Lollino
Remote Sens. 2020, 12(23), 3994; https://doi.org/10.3390/rs12233994 - 06 Dec 2020
Cited by 5 | Viewed by 2800
Abstract
Multi-temporal UAV and digital photo surveys have been acquired between 2017 and 2020 on a coastal cliff in soft rocks in South-Eastern Italy for hazard assessment, and the corresponding point clouds have been processed and compared. The multi-temporal survey results provide indications of a progressive deepening process of erosion and detachment of blocks from the mid-height portion of the cliff, with the upper, stiffer rock stratum provisionally working as a shelf against the risk of general collapse. Based on the DEM obtained, a three-dimensional geomechanical finite element model has been created and analyzed in order to investigate the general stability of the cliff and to detect the rock portions that are more susceptible to failure. Concerning the evolving erosion process active in the cliff, the photogrammetric analyses and the modeling simulations are in agreement, indicating a proneness to both local and general instabilities.
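As a rough illustration of the cloud-to-cloud comparison underlying such multi-temporal analyses (not the authors' processing chain), the sketch below flags candidate erosion/detachment areas between two co-registered survey epochs using Open3D; the file names and the 5 cm change threshold are hypothetical.

```python
# Minimal sketch of cloud-to-cloud change detection between two co-registered
# survey epochs (illustration only, not the authors' pipeline). File names and
# the 5 cm change threshold are hypothetical.
import numpy as np
import open3d as o3d

def detect_changes(epoch_a_path, epoch_b_path, threshold=0.05):
    """Flag points of epoch B farther than `threshold` metres from epoch A."""
    cloud_a = o3d.io.read_point_cloud(epoch_a_path)   # e.g. 2017 survey
    cloud_b = o3d.io.read_point_cloud(epoch_b_path)   # e.g. 2020 survey

    # Nearest-neighbour distance from every point of B to cloud A.
    distances = np.asarray(cloud_b.compute_point_cloud_distance(cloud_a))
    changed = distances > threshold                   # candidate erosion/detachment
    print(f"{changed.sum()} of {len(distances)} points exceed {threshold} m")
    return cloud_b.select_by_index(np.where(changed)[0].tolist())

# changed = detect_changes("cliff_2017.ply", "cliff_2020.ply")
```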

25 pages, 21074 KiB  
Article
Experimental Fire Measurement with UAV Multimodal Stereovision
by Vito Ciullo, Lucile Rossi and Antoine Pieri
Remote Sens. 2020, 12(21), 3546; https://doi.org/10.3390/rs12213546 - 29 Oct 2020
Cited by 9 | Viewed by 2706
Abstract
In wildfire research, systems that are able to estimate the geometric characteristics of fire, in order to understand and model the behavior of this spreading and dangerous phenomenon, are required. Over the past decade, there has been a growing interest in the use of computer vision and image processing technologies. The majority of these works have considered multiple mono-camera systems, merging the information obtained from each camera. Recent studies have introduced the use of stereovision in this field; for example, a framework with multiple ground stereo pairs of cameras has been developed to measure fires spreading for about 10 meters. This work proposes an unmanned aerial vehicle multimodal stereovision framework which allows for estimation of the geometric characteristics of fires propagating over long distances. The vision system is composed of two cameras operating simultaneously in the visible and infrared spectral bands. The main result of this work is the development of a portable drone system which is able to obtain georeferenced stereoscopic multimodal images associated with a method for the estimation of fire geometric characteristics. The performance of the proposed system is tested through various experiments, which reveal its efficiency and potential for use in monitoring wildfires.
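The geometric core of any such stereovision measurement is two-view triangulation. The sketch below shows only that principle, with assumed projection matrices and a made-up point pair; the authors' multimodal (visible/infrared) georeferencing pipeline is not reproduced.

```python
# Generic two-view triangulation sketch (the geometric core of a stereo
# measurement). The projection matrices and the single point pair below are
# made-up values; the paper's multimodal system is not reproduced.
import numpy as np
import cv2

def triangulate(P1, P2, pts1, pts2):
    """P1, P2: 3x4 projection matrices; pts1, pts2: Nx2 matched pixel coordinates."""
    X_h = cv2.triangulatePoints(P1, P2,
                                pts1.T.astype(np.float64),
                                pts2.T.astype(np.float64))
    return (X_h[:3] / X_h[3]).T                       # Nx3 Euclidean points

# Assumed calibration: identical cameras, 1 m baseline along X.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
pts1 = np.array([[320.0, 240.0]])                     # same fire-front point
pts2 = np.array([[300.0, 240.0]])                     # seen with 20 px disparity
print(triangulate(P1, P2, pts1, pts2))                # ~[0, 0, 40] metres
```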

25 pages, 23255 KiB  
Article
Reliable and Efficient UAV Image Matching via Geometric Constraints Structured by Delaunay Triangulation
by San Jiang, Wanshou Jiang, Lelin Li, Lizhe Wang and Wei Huang
Remote Sens. 2020, 12(20), 3390; https://doi.org/10.3390/rs12203390 - 16 Oct 2020
Cited by 11 | Viewed by 2471
Abstract
Outlier removal is a crucial step in local feature-based unmanned aerial vehicle (UAV) image matching. Inspired by our previous work, this paper proposes a method for reliable and efficient outlier removal in UAV image matching. The inputs of the method are only two images without any other auxiliary data. The core idea is to design local geometric constraints within the neighboring structure via the Delaunay triangulation and use a two-stage method for outlier removal and match refinement. In the filter stage, initial matches are first organized as the Delaunay triangulation (DT) and its corresponding graph, and their dissimilarity scores are computed from the affine-invariant spatial angular order (SAO), which is used to achieve hierarchical outlier removal. In addition, by using the triangle constraint between the refined Delaunay triangulation and its corresponding graph, missed inliers are recovered through match expansion. In the verification stage, retained matches are refined using the RANSAC-based global geometric constraint. Therefore, the two-stage algorithm is termed DTSAO-RANSAC. Finally, using four datasets, DTSAO-RANSAC is comprehensively analyzed and compared with other methods in feature matching and image orientation tests. The experimental results demonstrate that, compared with the LO-RANSAC algorithm, DTSAO-RANSAC can achieve efficient outlier removal with speedup ratios ranging from 4 to 16 and can provide reliable matching results for image orientation of UAV datasets.
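The sketch below conveys the general two-stage idea, a local consistency check over Delaunay neighbors followed by RANSAC verification, using SciPy and OpenCV. It is a simplified stand-in, not the published DTSAO-RANSAC implementation: the affine-invariant SAO score is replaced by a basic median-flow test, and all thresholds are assumptions.

```python
# Simplified two-stage match filter in the spirit of a Delaunay-constrained local
# check followed by RANSAC verification. This is NOT the published DTSAO-RANSAC
# code: the SAO dissimilarity score is replaced by a simple median-flow test.
import numpy as np
import cv2
from scipy.spatial import Delaunay

def filter_matches(pts1, pts2, motion_tol=0.5, ransac_thresh=1.0):
    """pts1, pts2: Nx2 arrays of putative correspondences (pixel coordinates)."""
    pts1 = np.asarray(pts1, dtype=np.float64)
    pts2 = np.asarray(pts2, dtype=np.float64)
    tri = Delaunay(pts1)                       # neighbourhood structure on image 1
    neighbors = {i: set() for i in range(len(pts1))}
    for simplex in tri.simplices:              # adjacency from shared triangles
        for a in simplex:
            neighbors[a].update(simplex)

    flow = pts2 - pts1                         # displacement of each match
    keep = np.zeros(len(pts1), dtype=bool)
    for i, nb in neighbors.items():
        nb = list(nb - {i})
        if not nb:
            continue
        # Local filter: keep a match if its motion agrees with the median motion
        # of its Delaunay neighbours (relative tolerance plus a 3 px floor).
        median_flow = np.median(flow[nb], axis=0)
        keep[i] = np.linalg.norm(flow[i] - median_flow) < (
            motion_tol * np.linalg.norm(median_flow) + 3.0)

    p1, p2 = pts1[keep], pts2[keep]
    if len(p1) < 8:                            # not enough matches for RANSAC
        return p1, p2
    # Global verification: epipolar geometry estimated with RANSAC.
    F, mask = cv2.findFundamentalMat(p1, p2, cv2.FM_RANSAC, ransac_thresh, 0.999)
    if mask is None:
        return p1, p2
    inl = mask.ravel() == 1
    return p1[inl], p2[inl]
```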

23 pages, 29946 KiB  
Article
3D Reconstruction of a Complex Grid Structure Combining UAS Images and Deep Learning
by Vladimir A. Knyaz, Vladimir V. Kniaz, Fabio Remondino, Sergey Y. Zheltov and Armin Gruen
Remote Sens. 2020, 12(19), 3128; https://doi.org/10.3390/rs12193128 - 23 Sep 2020
Cited by 20 | Viewed by 5833
Abstract
The latest advances in the technical characteristics of unmanned aerial systems (UAS) and their onboard sensors have opened the way for smart flying vehicles exploiting new application areas and performing missions that seemed impossible before. One of these complicated tasks is the 3D reconstruction and monitoring of large-size, complex, grid-like structures such as radio or television towers. Although an image-based 3D survey contains a lot of visual and geometrical information useful for making preliminary conclusions on construction health, standard photogrammetric processing fails to perform dense and robust 3D reconstruction of complex large-size mesh structures. The main problem with such objects is their repeated and self-occluding similar elements, which result in false feature matching. This paper presents a method developed for an accurate Multi-View Stereo (MVS) dense 3D reconstruction of the Shukhov Radio Tower in Moscow (Russia) based on a UAS photogrammetric survey. A key element for the successful image-based 3D reconstruction is the developed WireNetV2 neural network model for robust automatic semantic segmentation of wire structures. The proposed neural network provides high matching quality due to an accurate masking of the tower elements. The main contributions of the paper are: (1) a deep learning WireNetV2 convolutional neural network model that outperforms the state-of-the-art results of semantic segmentation on a dataset containing images of grid structures of complicated topology with repeated elements, holes, and self-occlusions, thus providing robust grid structure masking and, as a result, accurate 3D reconstruction; (2) an advanced image-based pipeline aided by a neural network for the accurate 3D reconstruction of large-size and complex grid structures, evaluated on UAS imagery of the Shukhov radio tower in Moscow.
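One way such a segmentation mask feeds back into image-based reconstruction is by restricting feature extraction to structure pixels. The minimal sketch below shows that gating step with OpenCV SIFT as a stand-in detector; the WireNetV2 network and the full MVS pipeline are not reproduced, and the file paths are placeholders.

```python
# Minimal sketch of mask-gated feature extraction: tie points are only detected
# on pixels labelled as structure by a segmentation network. SIFT is a stand-in
# detector; WireNetV2 and the MVS pipeline are not reproduced, and the file
# paths are placeholders.
import cv2
import numpy as np

def masked_features(image_path, mask_path):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)   # wire/structure = white
    mask = (mask > 127).astype(np.uint8) * 255           # binarize to 0/255

    sift = cv2.SIFT_create()
    # Keypoints (and thus matches) are restricted to the masked structure pixels.
    keypoints, descriptors = sift.detectAndCompute(img, mask)
    return keypoints, descriptors

# kp, desc = masked_features("tower_view_012.jpg", "tower_view_012_mask.png")
```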

20 pages, 6915 KiB  
Article
Bridge Inspection Using Unmanned Aerial Vehicle Based on HG-SLAM: Hierarchical Graph-Based SLAM
by Sungwook Jung, Duckyu Choi, Seungwon Song and Hyun Myung
Remote Sens. 2020, 12(18), 3022; https://doi.org/10.3390/rs12183022 - 16 Sep 2020
Cited by 37 | Viewed by 6889
Abstract
With the increasing demand for autonomous systems in the field of inspection, the use of unmanned aerial vehicles (UAVs) to replace human labor is becoming more frequent. However, the Global Positioning System (GPS) signal is usually denied in environments near or under bridges, which makes the manual operation of a UAV difficult and unreliable in these areas. This paper addresses a novel hierarchical graph-based simultaneous localization and mapping (SLAM) method for fully autonomous bridge inspection using an aerial vehicle, as well as a technical method for UAV control for actually conducting bridge inspections. Due to the harsh environment involved and the corresponding limitations on GPS usage, a graph-based SLAM approach using a tilted 3D LiDAR (Light Detection and Ranging) and a monocular camera to localize the UAV and map the target bridge is proposed. Each visual-inertial state estimate and the corresponding LiDAR sweep are combined into a single subnode. These subnodes make up a “supernode” that consists of state estimations and accumulated scan data for robust and stable node generation in graph SLAM. The constraints are generated from LiDAR data using the normal distribution transform (NDT) and generalized iterative closest point (G-ICP) matching. The feasibility of the proposed method was verified on two different types of bridges: on the ground and offshore.
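To make the supernode idea concrete, the sketch below merges groups of consecutive, already odometry-aligned LiDAR sweeps into submaps, aligns consecutive submaps with plain ICP, and optimizes the resulting pose graph with Open3D. This is a generic illustration under simplifying assumptions (point-to-point ICP, identity information matrices), not the HG-SLAM implementation, which builds constraints with NDT and G-ICP.

```python
# Sketch of the "supernode" idea: several consecutive, odometry-aligned LiDAR
# sweeps are merged into one submap, consecutive submaps are registered with
# plain ICP, and the constraints are optimized in a pose graph. Generic Open3D
# illustration only -- HG-SLAM itself uses NDT and G-ICP constraints.
import copy
import numpy as np
import open3d as o3d

def build_supernodes(sweeps, poses, group_size=5, voxel=0.2):
    """sweeps: list of o3d.geometry.PointCloud; poses: matching 4x4 odometry poses."""
    supernodes = []
    for i in range(0, len(sweeps), group_size):
        merged = o3d.geometry.PointCloud()
        for cloud, pose in zip(sweeps[i:i + group_size], poses[i:i + group_size]):
            merged += copy.deepcopy(cloud).transform(pose)   # accumulate scans
        supernodes.append(merged.voxel_down_sample(voxel))
    return supernodes

def optimize_supernode_graph(supernodes, max_corr=1.0):
    reg = o3d.pipelines.registration
    graph = reg.PoseGraph()
    odometry = np.eye(4)
    graph.nodes.append(reg.PoseGraphNode(odometry))
    for i in range(len(supernodes) - 1):
        # Align supernode i (source) to supernode i+1 (target).
        result = reg.registration_icp(
            supernodes[i], supernodes[i + 1], max_corr, np.eye(4),
            reg.TransformationEstimationPointToPoint())
        odometry = result.transformation @ odometry
        graph.nodes.append(reg.PoseGraphNode(np.linalg.inv(odometry)))
        # Identity information matrix used here for simplicity.
        graph.edges.append(reg.PoseGraphEdge(i, i + 1, result.transformation,
                                             np.eye(6), uncertain=False))
    reg.global_optimization(
        graph,
        reg.GlobalOptimizationLevenbergMarquardt(),
        reg.GlobalOptimizationConvergenceCriteria(),
        reg.GlobalOptimizationOption(max_correspondence_distance=max_corr,
                                     reference_node=0))
    return graph
```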

26 pages, 22612 KiB  
Article
Refining the Joint 3D Processing of Terrestrial and UAV Images Using Quality Measures
by Elisa Mariarosaria Farella, Alessandro Torresani and Fabio Remondino
Remote Sens. 2020, 12(18), 2873; https://doi.org/10.3390/rs12182873 - 04 Sep 2020
Cited by 21 | Viewed by 3746
Abstract
The paper presents an efficient photogrammetric workflow to improve the 3D reconstruction of scenes surveyed by integrating terrestrial and Unmanned Aerial Vehicle (UAV) images. In recent years, the integration of these two types of images has shown clear advantages for the complete and detailed 3D representation of large and complex scenarios. Nevertheless, their photogrammetric integration often raises several issues in the image orientation and dense 3D reconstruction processes. Noisy and erroneous 3D reconstructions are the typical result of inaccurate orientation results. In this work, we propose an automatic filtering procedure which works at the sparse point cloud level and takes advantage of photogrammetric quality features. The filtering step removes low-quality 3D tie points before refining the image orientation in a new adjustment and generating the final dense point cloud. Our method generalizes to many datasets, as it employs statistical analyses of quality feature distributions to identify suitable filtering thresholds. Reported results show the effectiveness and reliability of the method, verified using both internal and external quality checks, as well as visual qualitative comparisons. We made the filtering tool publicly available on GitHub.
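A minimal sketch of the underlying idea, filtering sparse tie points by quality features with data-driven thresholds, is given below; the feature names, percentile, and limits are illustrative assumptions, and the authors' publicly released tool is not reproduced here.

```python
# Generic sketch of quality-driven tie-point filtering with data-driven thresholds.
# Feature names, the percentile, and the limits are illustrative assumptions; the
# authors' released tool is not reproduced here.
import numpy as np

def filter_tie_points(reproj_error, multiplicity, intersection_angle_deg,
                      err_pct=90, min_views=3, min_angle_deg=2.0):
    """Each argument is a 1-D array with one entry per sparse 3D tie point."""
    # Data-driven threshold: drop points whose reprojection error lies in the
    # worst decile of this dataset's error distribution.
    err_thresh = np.percentile(reproj_error, err_pct)
    keep = ((reproj_error <= err_thresh)
            & (multiplicity >= min_views)                 # observed in enough images
            & (intersection_angle_deg >= min_angle_deg))  # avoid glancing ray intersections
    return keep   # boolean mask; re-run the adjustment on the retained points

# Synthetic example:
rng = np.random.default_rng(0)
mask = filter_tie_points(rng.gamma(2.0, 0.5, 1000),
                         rng.integers(2, 10, 1000),
                         rng.uniform(0.5, 30.0, 1000))
print(f"{mask.mean():.0%} of tie points retained")
```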

25 pages, 3781 KiB  
Article
Survey of 8 UAV Set-Covering Algorithms for Terrain Photogrammetry
by Joshua E. Hammond, Cory A. Vernon, Trent J. Okeson, Benjamin J. Barrett, Samuel Arce, Valerie Newell, Joseph Janson, Kevin W. Franke and John D. Hedengren
Remote Sens. 2020, 12(14), 2285; https://doi.org/10.3390/rs12142285 - 16 Jul 2020
Cited by 6 | Viewed by 3621
Abstract
Remote sensing with unmanned aerial vehicles (UAVs) facilitates photogrammetry for environmental and infrastructural monitoring. Models are created with less computational cost by reducing the number of photos required. Optimal camera locations for reducing the number of photos needed for structure-from-motion (SfM) are determined through eight mathematical set-covering algorithms as constrained by solve time. The algorithms examined are: traditional greedy, reverse greedy, carousel greedy (CG), linear programming, particle swarm optimization, simulated annealing, genetic, and ant colony optimization. Coverage and solve time are investigated for these algorithms. CG is the best method for choosing optimal camera locations, as it balances the number of photos required and the time required to calculate camera positions, as shown through an analysis similar to a Pareto front. CG obtains a statistically significant 3.2 fewer cameras per modeled area than the base greedy algorithm while requiring just one additional order of magnitude of solve time. For comparison, linear programming is capable of fewer cameras than base greedy but takes at least three orders of magnitude longer to solve. A grid independence study serves as a sensitivity analysis of the CG algorithm's α (iteration number) and β (percentage to be recalculated) parameters that adjust traditional greedy heuristics, and a case study at the Rock Canyon collection dike in Provo, UT, USA, compares the results of all eight algorithms and the uniqueness (in terms of percentage comparisons based on location/angle metadata and qualitative visual comparison) of each selected set. Though this specific study uses SfM, the principles could apply to other instruments such as multi-spectral cameras or aerial LiDAR.
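For context, the baseline of this comparison is the textbook greedy set-cover heuristic sketched below: each candidate camera covers a set of surface points, and cameras are picked until every point is covered; carousel greedy then revisits and replaces the earliest greedy picks. The visibility sets in the example are made up.

```python
# Textbook greedy set-cover heuristic for camera-network selection: each candidate
# camera "covers" a set of surface points; cameras are picked until everything is
# covered. The visibility sets below are made up. (Carousel greedy additionally
# discards and re-chooses the earliest greedy picks over several passes.)
def greedy_camera_selection(coverage, n_points):
    """coverage: dict camera_id -> set of surface-point indices visible from it."""
    uncovered = set(range(n_points))
    chosen = []
    while uncovered:
        # Pick the camera that covers the most still-uncovered points.
        best = max(coverage, key=lambda cam: len(coverage[cam] & uncovered))
        gained = coverage[best] & uncovered
        if not gained:           # remaining points are not visible from any camera
            break
        chosen.append(best)
        uncovered -= gained
    return chosen

# Toy example: 6 surface points, 4 candidate viewpoints.
views = {0: {0, 1, 2}, 1: {2, 3}, 2: {3, 4, 5}, 3: {1, 4}}
print(greedy_camera_selection(views, 6))   # -> [0, 2]
```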

26 pages, 14518 KiB  
Article
LiDAR-Aided Interior Orientation Parameters Refinement Strategy for Consumer-Grade Cameras Onboard UAV Remote Sensing Systems
by Tian Zhou, Seyyed Meghdad Hasheminasab, Radhika Ravi and Ayman Habib
Remote Sens. 2020, 12(14), 2268; https://doi.org/10.3390/rs12142268 - 15 Jul 2020
Cited by 16 | Viewed by 3505
Abstract
Unmanned aerial vehicles (UAVs) are quickly emerging as a popular platform for 3D reconstruction/modeling in various applications such as precision agriculture, coastal monitoring, and emergency management. For such applications, LiDAR and frame cameras are the two most commonly used sensors for 3D mapping of the object space. For example, point clouds for the area of interest can be directly derived from LiDAR sensors onboard UAVs equipped with integrated global navigation satellite systems and inertial navigation systems (GNSS/INS). Imagery-based mapping, on the other hand, is considered to be a cost-effective and practical option and is often conducted by generating point clouds and orthophotos using structure from motion (SfM) techniques. Mapping with photogrammetric approaches requires accurate camera interior orientation parameters (IOPs), especially when direct georeferencing is utilized. Most state-of-the-art approaches for determining/refining camera IOPs depend on ground control points (GCPs). However, establishing GCPs is expensive and labor-intensive, and more importantly, the distribution and number of GCPs are usually less than optimal to provide adequate control for determining and/or refining camera IOPs. Moreover, consumer-grade cameras with unstable IOPs have been widely used for mapping applications. Therefore, in such scenarios, where frequent camera calibration or IOP refinement is required, GCP-based approaches are impractical. To eliminate the need for GCPs, this study uses LiDAR data as a reference surface to perform in situ refinement of camera IOPs. The proposed refinement strategy is conducted in three main steps. An image-based sparse point cloud is first generated via a GNSS/INS-assisted SfM strategy. Then, LiDAR points corresponding to the resultant image-based sparse point cloud are identified through an iterative plane fitting approach and are referred to as LiDAR control points (LCPs). Finally, IOPs of the utilized camera are refined through a GNSS/INS-assisted bundle adjustment procedure using LCPs. Seven datasets over two study sites with a variety of geomorphic features are used to evaluate the performance of the developed strategy. The results illustrate the ability of the proposed approach to achieve an object space absolute accuracy of 3–5 cm (i.e., 5–10 times the ground sampling distance) at a 41 m flying height.
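A simplified reading of the LCP idea is sketched below: for each image-based sparse point, a plane is fitted by RANSAC to its LiDAR neighborhood and the point is projected onto it. This is an illustrative Open3D sketch under assumed radii and thresholds, not the paper's iterative strategy or the subsequent GNSS/INS-assisted bundle adjustment.

```python
# Sketch of deriving "LiDAR control points": for each image-based sparse point,
# fit a plane to its LiDAR neighbourhood with RANSAC and project the point onto
# it. Illustrative only; radii and thresholds are assumptions, and the bundle
# adjustment that consumes the LCPs is not shown.
import numpy as np
import open3d as o3d

def lidar_control_points(sfm_points, lidar_cloud, radius=1.0, min_neighbors=20):
    """sfm_points: Nx3 array; lidar_cloud: o3d.geometry.PointCloud (same frame)."""
    tree = o3d.geometry.KDTreeFlann(lidar_cloud)
    lcps = []
    for p in sfm_points:
        k, idx, _ = tree.search_radius_vector_3d(p, radius)
        if k < min_neighbors:
            continue                                  # too sparse to fit a plane
        patch = lidar_cloud.select_by_index(idx)
        (a, b, c, d), _ = patch.segment_plane(distance_threshold=0.05,
                                              ransac_n=3, num_iterations=200)
        n = np.array([a, b, c])
        # Project the SfM point onto the fitted LiDAR plane -> control point.
        lcps.append(p - (np.dot(n, p) + d) / np.dot(n, n) * n)
    return np.asarray(lcps)
```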

23 pages, 19467 KiB  
Article
Automated 3D Reconstruction Using Optimized View-Planning Algorithms for Iterative Development of Structure-from-Motion Models
by Samuel Arce, Cory A. Vernon, Joshua Hammond, Valerie Newell, Joseph Janson, Kevin W. Franke and John D. Hedengren
Remote Sens. 2020, 12(13), 2169; https://doi.org/10.3390/rs12132169 - 07 Jul 2020
Cited by 17 | Viewed by 4745
Abstract
Unsupervised machine learning algorithms (clustering, genetic, and principal component analysis) automate Unmanned Aerial Vehicle (UAV) missions as well as the creation and refinement of iterative 3D photogrammetric models with a next best view (NBV) approach. The novel approach uses Structure-from-Motion (SfM) to achieve convergence to a specified orthomosaic resolution by identifying edges in the point cloud and planning cameras that “view” the holes identified by edges without requiring an initial model. This iterative UAV photogrammetric method successfully runs in various Microsoft AirSim environments. Simulated ground sampling distance (GSD) of models reaches as low as 3.4 cm per pixel, and generally, successive iterations improve resolution. Besides analogous application in simulated environments, a field study of a retired municipal water tank illustrates the practical application and advantages of automated UAV iterative inspection of infrastructure, using 63% fewer photographs than a comparable manual flight while obtaining point clouds of comparable density with a GSD of less than 3 cm per pixel. Each iteration qualitatively increases resolution according to a logarithmic regression, reduces holes in models, and adds details to model edges.
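The hole-seeking behavior at the heart of such NBV planning can be illustrated by flagging boundary points in a sparse cloud: points whose neighbors leave a large angular gap in the local tangent plane typically border holes that a new camera should view. The sketch below (NumPy/SciPy) is a generic illustration with assumed parameters, not the authors' clustering/genetic/PCA pipeline or its AirSim integration.

```python
# Sketch of detecting hole/boundary points in a sparse cloud, the kind of cue a
# next-best-view planner can target with additional cameras. Parameters are
# illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree

def boundary_points(points, k=20, gap_deg=90.0):
    """points: Nx3. A point is flagged if its neighbours leave a large angular
    gap around it in the local tangent plane (typical of hole borders)."""
    tree = cKDTree(points)
    flags = np.zeros(len(points), dtype=bool)
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k + 1)
        nb = points[idx[1:]] - p                  # neighbours relative to p
        # Local tangent plane from PCA: the two largest principal directions.
        _, _, vt = np.linalg.svd(nb - nb.mean(axis=0), full_matrices=False)
        u, v = vt[0], vt[1]
        angles = np.sort(np.arctan2(nb @ v, nb @ u))
        gaps = np.diff(np.concatenate([angles, angles[:1] + 2 * np.pi]))
        flags[i] = np.degrees(gaps.max()) > gap_deg
    return flags

# Example: a planar patch with a square hole in the middle.
g = np.stack(np.meshgrid(np.linspace(0, 1, 40), np.linspace(0, 1, 40)), -1).reshape(-1, 2)
g = g[~((abs(g[:, 0] - 0.5) < 0.15) & (abs(g[:, 1] - 0.5) < 0.15))]
cloud = np.column_stack([g, np.zeros(len(g))])
print(boundary_points(cloud).sum(), "boundary / hole-border points found")
```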
