
Automatic Segmentation, Reconstruction, and Modelling from Laser Scanning Data

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 15 June 2024 | Viewed by 6771

Special Issue Editors


Guest Editor
Institute of Photogrammetry and Remote Sensing, Chinese Academy of Surveying & Mapping, Beijing 100830, China
Interests: LiDAR and hyperspectral remote sensing

Guest Editor
Faculty of Applied Science, University of British Columbia, Vancouver, BC, Canada
Interests: transportation engineering; highway design; traffic safety; infrastructure management; LiDAR

Special Issue Information

Dear Colleagues,

With the development of unmanned aerial vehicles (UAVs), autonomous driving systems, and robotics, laser scanning technology has become a critical component in the efficient operation of such systems. At the same time, LiDAR components are becoming increasingly miniaturized and highly integrated, while the performance of laser scanning systems continues to improve. This has resulted in an influx of massive, very high-density, high-precision point cloud data at relatively low cost. These datasets contain an accurate 3D representation of the real-world environment and can be collected at local and regional scales, in outdoor, indoor, and underground environments. They have opened up a broad range of new applications in a variety of disciplines, e.g., urban development, natural resource management, transportation, electric power, energy, and heritage conservation, for 3D scene modeling, autonomous driving, high-precision localization and navigation, facility and infrastructure management, etc.

One challenge in dealing with laser scanning data is that these datasets are unorganized and extremely large. The efficient and automatic segmentation, classification, reconstruction, and modelling of point clouds collected with laser scanning technology has therefore been the focus of much research over the past few years. Identifying and recognizing the different elements in a 3D scene is a challenging task due to the variety of scenarios and data acquisition systems. The documented approaches, however, mainly focus on a particular kind of object or on the detection of learned invariant surface shapes, e.g., street-adjacent objects, modelling of building facades and roofs, or detailed modelling of trees, and are not necessarily suitable for arbitrary tasks given the varying requirements of different fields of research. In addition, advanced AI and machine learning methods for automatic segmentation, reconstruction, and 3D modelling remain to be explored.

This Special Issue of Remote Sensing aims to attract innovative and well-documented contributions showcasing recent achievements in the field of LiDAR point cloud segmentation, reconstruction, and modelling applications, as well as to identify the obstacles that still lie ahead. Submitted manuscripts may cover, but are not limited to, the following topics:

  • LiDAR point cloud segmentation and reconstruction methods and algorithms;
  • Combining LiDAR point cloud and multispectral/hyperspectral image data for segmentation, reconstruction and modelling;
  • Machine/deep learning algorithms for point cloud segmentation and clustering;
  • Application of 3D reconstructed models generated from LiDAR point cloud data;
  • Quality assessment of the segmentation, reconstruction, and modelling process.

Prof. Dr. Zhengjun Liu
Dr. Suliman Gargoum
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (3 papers)


Research

23 pages, 24808 KiB  
Article
Classification of Hyperspectral and LiDAR Data Using Multi-Modal Transformer Cascaded Fusion Net
by Shuo Wang, Chengchao Hou, Yiming Chen, Zhengjun Liu, Zhenbei Zhang and Geng Zhang
Remote Sens. 2023, 15(17), 4142; https://doi.org/10.3390/rs15174142 - 24 Aug 2023
Viewed by 1260
Abstract
With the continuous development of surface observation methods and technologies, we can acquire multiple sources of data more effectively in the same geographic area. The quality and availability of these data have also significantly improved. Consequently, how to better utilize multi-source data to represent ground information has become an important research question in the field of geoscience. In this paper, a novel model called multi-modal transformer cascaded fusion net (MMTCFN) is proposed for the fusion and classification of multi-modal remote sensing data, namely Hyperspectral Imagery (HSI) and LiDAR data. The model consists of two stages: feature extraction and feature fusion. First, in the feature extraction stage, a three-branch cascaded Convolutional Neural Network (CNN) framework is employed to fully leverage the advantages of convolutional operators in extracting shallow-level local features. Based on this, multi-modal long-range integrated deep features are generated by the transformer-based vectorized pixel group transformer (VPGT) module during the feature fusion stage. In the VPGT block, we design a vectorized pixel group embedding that preserves the global features extracted from the three branches in a non-overlapping multi-space manner. Moreover, we introduce the DropKey mechanism into the multi-head self-attention (MHSA) to alleviate overfitting caused by insufficient training samples. Finally, we employ a probabilistic decision fusion strategy to integrate multiple class estimations, assigning a specific category to each pixel. The model was evaluated on three HSI-LiDAR datasets with balanced and unbalanced training samples. The proposed model outperforms seven other SOTA approaches in terms of OA, demonstrating the superiority of MMTCFN for the HSI-LiDAR classification task.
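The DropKey mechanism mentioned in the abstract masks random attention keys before the softmax, rather than dropping attention weights after it, so the surviving keys are renormalized. The following is a minimal NumPy sketch under our own assumptions, not the authors' MMTCFN code; the function names and drop ratio are illustrative:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def dropkey_attention(q, k, v, drop_ratio=0.1, training=True, rng=None):
    """Scaled dot-product attention with DropKey: random attention
    logits are set to -inf *before* the softmax, regularizing the
    attention map itself (useful with few training samples)."""
    rng = rng or np.random.default_rng(0)
    d = q.shape[-1]
    logits = q @ k.swapaxes(-2, -1) / np.sqrt(d)   # (..., Lq, Lk)
    if training and drop_ratio > 0:
        mask = rng.random(logits.shape) < drop_ratio
        logits = np.where(mask, -1e9, logits)      # drop these keys
    attn = softmax(logits, axis=-1)
    return attn @ v, attn
```

Because the masking happens before the softmax, each attention row still sums to one over the keys that remain.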

20 pages, 20731 KiB  
Article
Sharp Feature-Preserving 3D Mesh Reconstruction from Point Clouds Based on Primitive Detection
by Qi Liu, Shibiao Xu, Jun Xiao and Ying Wang
Remote Sens. 2023, 15(12), 3155; https://doi.org/10.3390/rs15123155 - 16 Jun 2023
Cited by 1 | Viewed by 2299
Abstract
High-fidelity mesh reconstruction from point clouds has long been a fundamental research topic in computer vision and computer graphics. Traditional methods require dense triangle meshes to achieve high fidelity, but excessively dense triangles may lead to unnecessary storage and computational burdens, while also struggling to capture clear, sharp, and continuous edges. This paper argues that the key to high-fidelity reconstruction lies in preserving sharp features. Therefore, we introduce a novel sharp-feature-preserving reconstruction framework based on primitive detection. It includes an improved deep-learning-based primitive detection module and two novel mesh splitting and selection modules that we propose. Our framework can accurately and reasonably segment primitive patches, fit meshes in each patch, and split overlapping meshes at the triangle level to ensure true sharpness while obtaining lightweight mesh models. Quantitative and visual experimental results demonstrate that our framework outperforms both state-of-the-art learning-based primitive detection methods and traditional reconstruction methods. Moreover, our designed modules are plug-and-play: they not only apply to learning-based primitive detectors but can also be combined with other point cloud processing tasks, such as edge extraction or random sample consensus (RANSAC), to achieve high-fidelity results.
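As a rough illustration of the kind of primitive detection such frameworks build on, the RANSAC approach named in the abstract can be sketched for the simplest primitive, a plane. This is a generic textbook sketch, not the paper's improved detector; the function names, iteration count, and distance tolerance are our own:

```python
import numpy as np

def fit_plane(pts):
    """Least-squares plane through >= 3 points: returns unit normal n
    and offset d such that n . x + d = 0."""
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)
    n = vt[-1]                      # direction of smallest variance
    return n, -n @ c

def ransac_plane(points, n_iters=200, tol=0.02, rng=None):
    """Detect the dominant planar primitive: repeatedly fit a plane to
    3 random points and keep the hypothesis with the most inliers."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n, d = fit_plane(sample)
        inliers = np.abs(points @ n + d) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    n, d = fit_plane(points[best_inliers])  # refine on all inliers
    return n, d, best_inliers
```

Running this on each unsegmented region in turn (removing inliers after each detection) yields a crude multi-primitive segmentation.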

21 pages, 7827 KiB  
Article
Framework for Geometric Information Extraction and Digital Modeling from LiDAR Data of Road Scenarios
by Yuchen Wang, Weicheng Wang, Jinzhou Liu, Tianheng Chen, Shuyi Wang, Bin Yu and Xiaochun Qin
Remote Sens. 2023, 15(3), 576; https://doi.org/10.3390/rs15030576 - 18 Jan 2023
Cited by 11 | Viewed by 2250
Abstract
Road geometric information and a digital model based on light detection and ranging (LiDAR) can provide accurate geometric inventories and three-dimensional (3D) descriptions of as-built roads and infrastructure. However, unorganized point clouds and complex road scenarios reduce the accuracy of geometric information extraction and digital modeling. There is a need for a standardized workflow for information extraction and 3D model construction that integrates point cloud processing and digital modeling. This paper develops a framework from semantic segmentation to geometric information extraction and digital modeling based on LiDAR data. A semantic segmentation network is improved for the purpose of dividing the road surface and infrastructure. The road boundary and centerline are extracted by the alpha-shape and Voronoi diagram methods based on the semantic segmentation results. The road geometric information is obtained by a coordinate transformation matrix and the least squares method. Subsequently, adaptive road components are constructed using Revit software. Thereafter, the road route, road entity model, and various infrastructure components are generated from the extracted geometric information through Dynamo and Revit software. Finally, a detailed digital model of the road scenario is developed. The Toronto-3D and Semantic3D datasets are utilized for analysis through training and testing. The overall accuracy (OA) of the proposed net for the two datasets is 95.3% and 95.0%, and the IoU of segmented road surfaces is 95.7% and 97.9%, indicating that the proposed net achieves superior performance in the semantic segmentation of point clouds. The mean absolute errors between the extracted and manually measured geometric information are marginal, demonstrating the effectiveness and accuracy of the proposed extraction methods. Thus, the proposed framework could provide a reference for accurate extraction and modeling from LiDAR data.
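The alpha-shape step used here for road boundary extraction can be illustrated with a generic 2D sketch: keep the Delaunay triangles whose circumradius is below 1/alpha, then take the edges that belong to exactly one kept triangle, which trace the (possibly concave) outline. This is our own minimal SciPy version, not the authors' implementation; the parameterization is illustrative:

```python
import numpy as np
from scipy.spatial import Delaunay

def alpha_shape_edges(points, alpha):
    """Boundary edges of a 2D alpha-shape: filter Delaunay triangles by
    circumradius < 1/alpha, then return edges used by exactly one
    surviving triangle (the outline of the point set)."""
    tri = Delaunay(points)
    edge_count = {}
    for ia, ib, ic in tri.simplices:
        a, b, c = points[ia], points[ib], points[ic]
        la = np.linalg.norm(b - c)
        lb = np.linalg.norm(a - c)
        lc = np.linalg.norm(a - b)
        ab, ac = b - a, c - a
        area = 0.5 * abs(ab[0] * ac[1] - ab[1] * ac[0])
        if area == 0:
            continue  # degenerate (collinear) triangle
        # circumradius R = |ab| |bc| |ca| / (4 * area)
        if la * lb * lc / (4.0 * area) < 1.0 / alpha:
            for e in ((ia, ib), (ib, ic), (ic, ia)):
                key = tuple(sorted(e))
                edge_count[key] = edge_count.get(key, 0) + 1
    return [e for e, n in edge_count.items() if n == 1]
```

As alpha tends to zero the result degenerates to the convex hull; larger alpha values carve out concavities such as the curved edge of a road surface.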
