Techniques and Applications of UAV-Based Photogrammetric 3D Mapping II

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (15 February 2023) | Viewed by 13279

Special Issue Editors

Guest Editor
1. School of Computer Sciences, China University of Geosciences, Wuhan 430074, China
2. Department of Land Surveying and Geo-Informatics, The Hong Kong Polytechnic University, Hong Kong 999077, China
Interests: image retrieval; image matching; structure from motion; multi-view stereo; deep learning

Guest Editor
State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430072, China
Interests: SLAM and real-time photogrammetry; multi-source data fusion; 3D reconstruction; building extraction and intelligent 3D mapping

Guest Editor
State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430072, China
Interests: image registering; image classification; change detection; 3D reconstruction

Special Issue Information

Dear Colleagues, 

Three-dimensional (3D) mapping plays a critical role in a variety of photogrammetric applications. In the last decade, unmanned aerial vehicle (UAV) images have become one of the most important sources of remote sensing data due to the high flexibility of UAV platforms and the extensive use of low-cost cameras. Additionally, the rapid development of techniques such as SfM (structure from motion) for offline image orientation, SLAM (simultaneous localization and mapping) for online UAV navigation, and deep learning (DL)-embedded 3D reconstruction pipelines has pushed UAV-based 3D mapping toward greater automation and intelligence. Recent years have witnessed the explosive development of UAV-based photogrammetric 3D mapping techniques and their wide application, from traditional surveying and mapping to other related fields (e.g., autonomous driving, structural inspection).

This Special Issue focuses on techniques for UAV-based 3D mapping, especially trajectory planning for UAV data acquisition in complex environments; recent algorithms for feature matching between aerial and ground images; SfM and SLAM for efficient image orientation; the use of DL techniques in the 3D mapping pipeline; and applications of UAV-based 3D mapping, such as crack detection in civil structures, automatic inspection of transmission lines, precision crop management, and archaeological and cultural heritage documentation.

This is the second edition of this Special Issue; experts and scholars in related fields are welcome to submit their original work.

Dr. San Jiang
Dr. Xiongwu Xiao
Dr. Wanshou Jiang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • UAV
  • trajectory planning
  • photogrammetry
  • aerial triangulation
  • dense image matching
  • 3D mapping
  • structure from motion
  • simultaneous localization and mapping
  • deep learning

Published Papers (7 papers)

Research

12 pages, 7989 KiB  
Communication
Texture-Mapping Error Removal Based on the BRIEF Operator in Image-Based Three-Dimensional Reconstruction
by Junxing Yang, Lu Lu, Ge Peng, He Huang, Jian Wang and Fei Deng
Remote Sens. 2023, 15(2), 536; https://doi.org/10.3390/rs15020536 - 16 Jan 2023
Cited by 2 | Viewed by 1421
Abstract
In image-based three-dimensional (3D) reconstruction, texture-mapping techniques give the model realistic textures. When the geometric surface in some regions is not reconstructed, as with moving cars, powerlines, and telegraph poles, the corresponding image textures are mapped onto other regions, resulting in errors. To solve this problem, this letter proposes an image consistency detection method based on the Binary Robust Independent Elementary Features (BRIEF) descriptor. The method is composed of two parts. First, each triangle in the mesh and its neighboring triangles are sampled uniformly to obtain sampling points. These sampled points are projected into each image in which the triangle is visible, and the corresponding sampled points and their RGB color values are obtained on that image. Based on the sampled points, a BRIEF descriptor is calculated for each image corresponding to the triangle. Second, the Hamming distances between these BRIEF descriptors are calculated, and outlier and noisy images are removed accordingly. In addition, we propose adding semantic information to the Markov energy optimization to further reduce errors. The two methods effectively reduce texture-mapping errors caused by unreconstructed objects, improving the texture quality of 3D models.
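
As a rough illustration of the consistency check described above, the sketch below builds a BRIEF-style binary descriptor from the gray values sampled at a triangle's projected points in each visible image and rejects views whose descriptor is far, in Hamming distance, from the others. This is a minimal sketch, not the authors' implementation; the random test pairs, the median-based scoring, and the distance threshold are assumptions.

```python
import numpy as np

def brief_descriptor(intensities, pairs):
    """Bit i is 1 if the sampled intensity at pairs[i][0] exceeds that at pairs[i][1]."""
    return np.array([intensities[a] > intensities[b] for a, b in pairs], dtype=np.uint8)

def hamming(d1, d2):
    """Hamming distance between two binary descriptors."""
    return int(np.count_nonzero(d1 != d2))

def filter_views(per_view_intensities, pairs, max_distance=40):
    """Keep views whose descriptor agrees with the other views of the same triangle.
    per_view_intensities: dict view_id -> 1D array of gray values sampled on that view."""
    descs = {v: brief_descriptor(i, pairs) for v, i in per_view_intensities.items()}
    views = list(descs)
    if len(views) < 2:
        return views
    # Score each view by its median Hamming distance to the remaining views.
    scores = {v: np.median([hamming(descs[v], descs[u]) for u in views if u != v])
              for v in views}
    return [v for v in views if scores[v] <= max_distance]

# Usage: sample the same 64 points of a triangle in every image where it is visible,
# pass the gray values here, and drop the views reported as inconsistent.
rng = np.random.default_rng(0)
pairs = [tuple(rng.integers(0, 64, size=2)) for _ in range(128)]  # random index pairs
```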

24 pages, 144801 KiB  
Article
A Novel Method for Digital Orthophoto Generation from Top View Constrained Dense Matching
by Zhihao Zhao, Guang Jiang and Yunsong Li
Remote Sens. 2023, 15(1), 177; https://doi.org/10.3390/rs15010177 - 28 Dec 2022
Cited by 1 | Viewed by 1966
Abstract
A digital orthophoto is an image with both map geometric accuracy and image characteristics, and it is commonly used in geographic information systems (GIS) as a background image. Existing methods for digital orthophoto generation are generally based on 3D reconstruction. However, the digital orthophoto is only the top view of the 3D reconstruction result at a certain spatial resolution; the computation spent on surfaces perpendicular to the ground and on details smaller than the spatial resolution is redundant for digital orthophoto generation. This study presents a novel method for digital orthophoto generation based on top view constrained dense matching (TDM). We first reconstruct sparse points from features in the image sequence using the structure-from-motion (SfM) method. Second, we use a raster to locate the sparse 3D points. Each cell corresponds to a pixel of the output digital orthophoto, and the cell size is determined by the required spatial resolution. Cells that receive initial values from the sparse 3D points are treated as seed cells. The values of the remaining cells are computed by a top-down propagation from the seed cells based on color constraints and occlusion detection across the related multi-view images. The propagation continues until the entire raster is occupied. Since TDM operates on a raster and only one point is saved in each cell, it effectively eliminates the redundant computation. We tested TDM on various scenes and compared it with commercial software. The experiments show that our method achieves the same accuracy as the commercial software, with computation time decreasing as the spatial resolution decreases.
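
To make the propagation step more concrete, the following sketch fills a raster outward from seed cells in best-first order, with the multi-view color check and occlusion detection abstracted into a user-supplied scoring function. It is a simplified stand-in for TDM under stated assumptions (the `score_fn` interface and the 4-neighbour propagation are hypothetical), not the authors' code.

```python
import heapq
import numpy as np

def propagate_raster(height, color, seed_mask, score_fn):
    """Fill a raster from seed cells outward.

    height: (H, W) array with valid values where seed_mask is True.
    color:  (H, W, 3) array with valid values where seed_mask is True.
    score_fn(r, c, h_prev, rgb_prev) -> (h_guess, rgb_guess, cost) is assumed to
    encapsulate the multi-view color consistency and occlusion checks.
    """
    H, W = height.shape
    filled = seed_mask.copy()
    heap = [(0.0, r, c) for r, c in zip(*np.nonzero(seed_mask))]
    heapq.heapify(heap)
    while heap:
        _, r, c = heapq.heappop(heap)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < H and 0 <= nc < W and not filled[nr, nc]:
                # Hypothesize the neighbor's height/color from the current cell,
                # then let score_fn rate the hypothesis against the images.
                h_guess, rgb_guess, cost = score_fn(nr, nc, height[r, c], color[r, c])
                height[nr, nc] = h_guess
                color[nr, nc] = rgb_guess
                filled[nr, nc] = True
                heapq.heappush(heap, (cost, nr, nc))
    return height, color
```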

23 pages, 10041 KiB  
Article
Efficient SfM for Large-Scale UAV Images Based on Graph-Indexed BoW and Parallel-Constructed BA Optimization
by Sikang Liu, San Jiang, Yawen Liu, Wanchang Xue and Bingxuan Guo
Remote Sens. 2022, 14(21), 5619; https://doi.org/10.3390/rs14215619 - 07 Nov 2022
Cited by 5 | Viewed by 2265
Abstract
Structure from Motion (SfM) for large-scale UAV (unmanned aerial vehicle) images has been widely used in the fields of photogrammetry and computer vision. Its efficiency, however, decreases dramatically, and its memory occupation rises steeply, due to the explosion of data volume and the iterative BA (bundle adjustment) optimization. In this paper, an efficient SfM solution is proposed to solve the low-efficiency and high-memory-consumption problems. First, an algorithm is designed to find UAV image match pairs based on a graph-indexed bag-of-words (BoW) model (GIBoW), which treats visual words as vertices and link relations as edges to build a small-world graph structure. This structure can be used to search for the nearest-neighbor visual word of a query feature with extremely high efficiency, and reliable UAV image match pairs effectively improve feature-matching efficiency. Second, a central bundle adjustment with object-point-wise parallel construction of the Schur complement (PSCBA) is proposed, which combines the LM (Levenberg–Marquardt) algorithm with preconditioned conjugate gradients (PCG). PSCBA dramatically reduces the memory consumption of both the error and normal equations and improves efficiency. Finally, the effectiveness of the proposed SfM solution is verified through comprehensive analysis and comparison on four UAV datasets. The experimental results show that, compared with Colmap-Bow and Dbow2, the proposed graph-indexed BoW retrieval algorithm improves the efficiency of image match pair selection with an acceleration ratio ranging from 3 to 7. Meanwhile, the parallel-constructed BA optimization achieves accurate bundle adjustment results with an acceleration ratio of 2 to 7 and reduces memory occupation by a factor of 2 to 3 compared with BA optimization using the Ceres solver. For large-scale UAV images, the proposed method is an effective and reliable solution.
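
The key memory and efficiency idea, accumulating the Schur complement one object point at a time, can be sketched as follows. The dense storage, the 6-parameter camera blocks, and the integer camera ids starting at 0 are simplifying assumptions; the actual PSCBA works on sparse structures inside an LM/PCG loop and also handles the gradient, which is omitted here.

```python
import numpy as np

def schur_complement(cam_blocks, point_blocks, cross_blocks, obs):
    """Build the reduced camera system S = B - E C^{-1} E^T by accumulating one
    small contribution per 3D point, the step that can be parallelized per point.

    cam_blocks:   dict cam_id -> 6x6 camera Hessian block B_j (ids 0..n_cam-1)
    point_blocks: dict pt_id  -> 3x3 point Hessian block C_i
    cross_blocks: dict (cam_id, pt_id) -> 6x3 cross block E_ji
    obs:          dict pt_id -> list of cam_ids observing that point
    """
    n_cam = len(cam_blocks)
    S = np.zeros((6 * n_cam, 6 * n_cam))
    # Camera-only part of the Hessian is block diagonal.
    for j, Bj in cam_blocks.items():
        S[6*j:6*j+6, 6*j:6*j+6] += Bj
    # Each point contributes independently, so this loop parallelizes over points.
    for i, cams in obs.items():
        C_inv = np.linalg.inv(point_blocks[i])
        for j in cams:
            Eji = cross_blocks[(j, i)]
            for k in cams:
                Eki = cross_blocks[(k, i)]
                S[6*j:6*j+6, 6*k:6*k+6] -= Eji @ C_inv @ Eki.T
    return S
```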

24 pages, 17777 KiB  
Article
Urban Building Mesh Polygonization Based on Plane-Guided Segmentation, Topology Correction and Corner Point Clump Optimization
by Yawen Liu, Bingxuan Guo, Shuo Wang, Sikang Liu, Ziming Peng and Demin Li
Remote Sens. 2022, 14(17), 4300; https://doi.org/10.3390/rs14174300 - 01 Sep 2022
Cited by 2 | Viewed by 2038
Abstract
The lightweight representation of 3D building models plays an increasingly important role in the comprehensive application of urban 3D models. Polygonization is a compact and lightweight representation for which a fundamental challenge is the fidelity of building models. In this paper, we propose an improved polyhedralization method for 3D building models based on plane-guided segmentation, topology correction, and corner point clump optimization. The improvements arise from three aspects: (1) a plane-guided segmentation method improves the simplicity and reliability of planar extraction; (2) based on the structural characteristics of a building, incorrect topological connections of thin-plate planes are corrected and the lamellar structure is recovered; and (3) optimization based on corner point clumps reduces redundant corner points and improves the realism of the polyhedral building model. We conducted detailed qualitative and quantitative analyses of building mesh models from multiple datasets; the results show that our method obtains concise and reliable segmented planes, produces high-fidelity building polygonal models, and improves the structural perception of building polygonization.
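
As an illustration of the plane-guided segmentation stage (aspect 1 above), the sketch below grows near-planar face clusters on a triangle mesh using only a normal-angle threshold. It is a much-simplified stand-in for the paper's method; the threshold, the seeding order, and the data layout are assumptions.

```python
import numpy as np
from collections import deque

def plane_guided_segmentation(face_normals, face_adjacency, angle_thresh_deg=10.0):
    """Greedy region growing of mesh faces into near-planar segments.

    face_normals:   (F, 3) array of unit face normals.
    face_adjacency: list of F lists, neighbors sharing an edge with each face.
    Returns an (F,) array of segment labels.
    """
    cos_thresh = np.cos(np.radians(angle_thresh_deg))
    labels = -np.ones(len(face_normals), dtype=int)
    seg = 0
    for seed in range(len(face_normals)):
        if labels[seed] != -1:
            continue
        labels[seed] = seg
        seg_normal = face_normals[seed]          # segment reference normal
        queue = deque([seed])
        while queue:
            f = queue.popleft()
            for nb in face_adjacency[f]:
                # Absorb the neighbor if its normal deviates less than the threshold.
                if labels[nb] == -1 and face_normals[nb] @ seg_normal >= cos_thresh:
                    labels[nb] = seg
                    queue.append(nb)
        seg += 1
    return labels
```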

16 pages, 7704 KiB  
Article
A New Line Matching Approach for High-Resolution Line Array Remote Sensing Images
by Jingxue Wang, Suyan Liu and Ping Zhang
Remote Sens. 2022, 14(14), 3287; https://doi.org/10.3390/rs14143287 - 08 Jul 2022
Cited by 3 | Viewed by 1399
Abstract
In this paper, a new line matching approach for high-resolution line array remote sensing images is presented. The approach establishes the correspondence of straight lines between two images by combining multiple constraints. First, three geometric constraints (epipolar, direction, and the point-line geometric relationship) are used in turn to reduce the number of matching candidates. Then, two similarity constraints, the double-line descriptor and the point-line distance, are used to determine the optimal matches. Finally, the collinearity constraint is used to check one-to-many and many-to-one correspondences in the results. The proposed approach is tested on eight representative image patches selected from ZY-3 line array satellite images, and the results are compared with those of two state-of-the-art approaches. The experiments demonstrate the superiority and potential of the proposed approach, which yields higher accuracy and a greater number of matches in most cases.
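
The sketch below shows how the direction and point-line constraints might be applied to prune candidate line segments. It assumes roughly aligned (e.g., epipolar-resampled) image geometry, and the thresholds and the per-candidate list of corresponding points are hypothetical, so it should be read as an illustration of the idea rather than the authors' procedure.

```python
import numpy as np

def direction_deg(seg):
    """Orientation of a 2D segment ((x1, y1), (x2, y2)) in [0, 180) degrees."""
    (x1, y1), (x2, y2) = seg
    return np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180.0

def point_line_distance(pt, seg):
    """Perpendicular distance from pt to the infinite line through seg."""
    (x1, y1), (x2, y2) = seg
    n = np.array([y2 - y1, x1 - x2], dtype=float)   # normal to the segment direction
    return abs(n @ (np.asarray(pt, dtype=float) - (x1, y1))) / (np.linalg.norm(n) + 1e-12)

def filter_candidates(ref_seg, candidates, matched_points,
                      max_dir_diff=10.0, max_point_dist=3.0):
    """Keep candidates whose direction agrees with the reference line and that stay
    close to already-matched corresponding points on the second image."""
    kept = []
    for cand, pts in zip(candidates, matched_points):
        dir_diff = abs(direction_deg(ref_seg) - direction_deg(cand))
        dir_diff = min(dir_diff, 180.0 - dir_diff)
        if dir_diff <= max_dir_diff and all(
                point_line_distance(p, cand) <= max_point_dist for p in pts):
            kept.append(cand)
    return kept
```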

26 pages, 27844 KiB  
Article
CAISOV: Collinear Affine Invariance and Scale-Orientation Voting for Reliable Feature Matching
by Haihan Luo, Kai Liu, San Jiang, Qingquan Li, Lizhe Wang and Wanshou Jiang
Remote Sens. 2022, 14(13), 3175; https://doi.org/10.3390/rs14133175 - 01 Jul 2022
Cited by 1 | Viewed by 1322
Abstract
Reliable feature matching plays an important role in the fields of computer vision and photogrammetry. Due to the complex transformation model caused by photometric and geometric deformations, and the limited discriminative power of local feature descriptors, initial matches with high outlier ratios cannot be handled very well. This study proposes a reliable outlier-removal algorithm that combines two affine-invariant geometric constraints. First, a very simple geometric constraint, namely CAI (collinear affine invariance), is implemented, based on the observation that collinearity is preserved under affine transformation. Second, after the first-step outlier removal based on the CAI constraint, the SOV (scale-orientation voting) scheme is adopted to remove the remaining outliers and recover lost inliers, in which the peaks of both the scale and orientation voting define the parameters of the geometric transformation model. Finally, match expansion is executed using the Delaunay triangulation of the refined matches. Using close-range (rigid and non-rigid images) and UAV (unmanned aerial vehicle) datasets, a comprehensive comparison and analysis is conducted. The results demonstrate that the proposed outlier-removal algorithm achieves the best overall performance compared with RANSAC-like and local geometric constraint-based methods, and it can also be applied for reliable outlier removal in the workflow of SfM-based UAV image orientation.
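
A minimal sketch of the CAI idea is given below: for three putative matches, if the three points are (nearly) collinear in one image they must also be (nearly) collinear in the other, since affine transformations preserve collinearity. The tolerance and the notion of testing matches in triplets are assumptions for illustration, not the paper's exact procedure.

```python
import math

def signed_area(a, b, c):
    """Twice the signed area of triangle (a, b, c); zero iff the three points are collinear."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)

def cai_consistent(m1, m2, m3, tol=1.0):
    """Check that three putative matches m_i = (p_i, q_i) respect collinear affine
    invariance: p3 lying (nearly) on the line p1-p2 must imply q3 lying (nearly) on
    the line q1-q2, and vice versa. tol is a point-to-line distance in pixels."""
    (p1, q1), (p2, q2), (p3, q3) = m1, m2, m3
    # Normalize twice-the-area by the baseline length to obtain a point-to-line distance.
    dp = abs(signed_area(p1, p2, p3)) / (math.hypot(p2[0] - p1[0], p2[1] - p1[1]) + 1e-12)
    dq = abs(signed_area(q1, q2, q3)) / (math.hypot(q2[0] - q1[0], q2[1] - q1[1]) + 1e-12)
    return (dp <= tol) == (dq <= tol)

# Example: an outlier match that breaks collinearity in the second image.
m1 = ((0.0, 0.0), (10.0, 10.0))
m2 = ((100.0, 0.0), (110.0, 10.0))
m3 = ((50.0, 0.0), (60.0, 45.0))   # p3 lies on the line p1-p2, q3 is far from q1-q2
print(cai_consistent(m1, m2, m3))  # False -> flagged as inconsistent
```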

18 pages, 6108 KiB  
Article
Optimal Self-Calibration Strategies in the Combined Bundle Adjustment of Aerial–Terrestrial Integrated Images
by Linfu Xie, Han Hu, Qing Zhu, Xiaoming Li, Xiang Ye, Renzhong Guo, Yeting Zhang, Xiaoqiong Qin and Weixi Wang
Remote Sens. 2022, 14(9), 1969; https://doi.org/10.3390/rs14091969 - 19 Apr 2022
Cited by 2 | Viewed by 1722
Abstract
Accurate combined bundle adjustment (BA) is a fundamental step in the integration of aerial and terrestrial images captured from complementary platforms. In traditional photogrammetry pipelines, self-calibrated bundle adjustment (SCBA) improves BA accuracy by simultaneously refining the interior orientation parameters (IOPs), including lens distortion parameters, and the exterior orientation parameters (EOPs). Aerial and terrestrial images that have been processed separately through SCBA then need to be fused using BA, so the IOPs in the aerial–terrestrial BA must be treated properly. On the one hand, the IOPs of the images in one flight should be physically identical; on the other hand, adjusting the IOPs in the cross-platform combined BA may mathematically improve the degree of aerial–terrestrial image co-registration in 3D space. In this paper, the impact of self-calibration strategies on the co-registration accuracy of the combined BA of aerial and terrestrial image blocks is investigated. To this end, aerial and terrestrial images captured from seven study areas were tested under four aerial–terrestrial BA scenarios: the IOPs of both aerial and terrestrial images fixed; the IOPs of only the aerial images fixed; the IOPs of only the terrestrial images fixed; and the IOPs of both image sets adjusted. The cross-platform co-registration accuracy of the BA was evaluated using independent checkpoints visible from both platforms. The experimental results reveal that the recovered IOPs of the aerial images should be fixed during the BA. However, when the tie points of the terrestrial images are comprehensively distributed in image space and the aerial image networks are sufficiently stable, refining the IOPs of the terrestrial cameras during the BA may improve the co-registration accuracy; otherwise, fixing the IOPs is the best solution.
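
Conceptually, "fixing" an IOP set in the combined BA simply means leaving it out of the vector of unknowns. The sketch below packs the unknowns for the four scenarios studied in the paper; the parameter layout, array shapes, and names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def pack_parameters(aerial_iops, terr_iops, eops, fix_aerial=True, fix_terr=False):
    """Assemble the unknown vector for a combined aerial-terrestrial BA.

    aerial_iops, terr_iops: 1D arrays of interior orientation parameters per camera set.
    eops: (N, 6) array of exterior orientation parameters, always refined.
    An IOP set that is 'fixed' is simply excluded from the unknowns.
    """
    unknowns = [eops.ravel()]
    layout = {"eops": eops.size}
    if not fix_aerial:
        unknowns.append(np.ravel(aerial_iops))
        layout["aerial_iops"] = np.size(aerial_iops)
    if not fix_terr:
        unknowns.append(np.ravel(terr_iops))
        layout["terr_iops"] = np.size(terr_iops)
    return np.concatenate(unknowns), layout

# The four scenarios compared in the paper:
scenarios = {
    "fix both":             dict(fix_aerial=True,  fix_terr=True),
    # Refining terrestrial IOPs only can help when their tie points are well
    # distributed and the aerial network is stable (per the paper's findings).
    "fix aerial only":      dict(fix_aerial=True,  fix_terr=False),
    "fix terrestrial only": dict(fix_aerial=False, fix_terr=True),
    "adjust both":          dict(fix_aerial=False, fix_terr=False),
}
```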