Review

A Survey of Mobile Laser Scanning Applications and Key Techniques over Urban Areas

1 National-Local Joint Engineering Laboratory of Geo-Spatial Information Technology, Hunan University of Science and Technology, No.1 Taoyuan Road, Xiangtan 411201, China
2 Department of Geography and Environment, University of Hawaii at Mānoa, 2424 Maile Way, Honolulu, HI 96822, USA
3 Faculty of Geosciences and Environmental Engineering, Southwest Jiaotong University, Gaoxin West District, Chengdu 611756, China
4 Collaborative Innovation Center for Geospatial Technology, 129 Luoyu Road, Wuhan 430079, China
5 Department of Geography, University of Cincinnati, Braunstein Hall, 400E, Cincinnati, OH 45221, USA
6 School of Geographic Science, Center of Geo-Informatics for Public Security, Guangzhou University, 230 Guangzhou University City Outer Ring Road, Guangzhou 510006, China
* Author to whom correspondence should be addressed.
Remote Sens. 2019, 11(13), 1540; https://doi.org/10.3390/rs11131540
Submission received: 14 May 2019 / Revised: 25 June 2019 / Accepted: 26 June 2019 / Published: 28 June 2019
(This article belongs to the Section Urban Remote Sensing)

Abstract: Urban planning and management need accurate three-dimensional (3D) data such as light detection and ranging (LiDAR) point clouds. Mobile laser scanning (MLS) data, with up to millimeter-level accuracy and point densities of a few thousand points/m², have gained increasing attention in urban applications, and substantial research has been conducted in the past decade. This paper presents a comprehensive survey of urban applications and key techniques based on MLS point clouds. We first introduce the key characteristics of MLS systems and the corresponding point clouds, and present the challenges and opportunities of using the data. Next, we summarize the current applications of MLS over urban areas, including transportation infrastructure mapping, building information modeling, utility surveying and mapping, vegetation inventory, and autonomous vehicle driving. Then, we review common key issues in processing and analyzing MLS point clouds, including classification methods, object recognition, data registration, data fusion, and 3D city modeling. Finally, we discuss the future prospects for MLS technology and urban applications.


1. Introduction

Accurate three-dimensional (3D) point cloud data have been an important data source for 3D urban models, which are an integral part of urban planning, simulation, mapping and visualization, emergency response training, and so on [1]. Mobile laser scanning (MLS) is an emerging technology for generating highly accurate 3D point clouds and thus has many potential urban applications [2]. An MLS system consists of a 3D laser scanner, a global navigation satellite system (GNSS) receiver, an inertial measurement unit (IMU), and cameras. These components are usually mounted on a moving platform (e.g., vehicle, boat, backpack, robot) and collect 3D surface information along the driving path. Due to the short measurement range and the flexibility of data acquisition, an MLS system can acquire very accurate (millimeter-level) point clouds with high point density (up to a few thousand points/m²) [3,4,5].
Given those advantages, MLS data have been used in recent years in a wide range of urban applications, including urban land cover analysis [6,7,8,9], digital 3D city modeling [10,11], urban environment monitoring [12,13,14,15,16], and autonomous vehicle driving [17,18,19]. The use of these data has involved techniques such as data fusion and classification (e.g., using machine learning approaches) developed in the remote sensing, computer vision, and robotics communities [20,21]. We searched the Web of Science Core Collection with the topics “mobile laser scanning”, “mobile LiDAR and urban”, and “mobile LiDAR and classification” from 2008 to 2018 (Figure 1). The results show that the number of research papers on mobile LiDAR has been accelerating, while those on MLS urban applications and data classification have been steadily increasing.
However, in contrast to the rapid development of MLS technology and its huge potential in various applications, we still lack a comprehensive review of the latest progress, common issues, and remaining challenges of MLS applications over urban areas. Several previous studies have reviewed different aspects of LiDAR technologies. For example, Meng et al. (2010) and Han et al. (2017) reviewed the issues of ground filtering of airborne laser scanning (ALS) data and the filtering of 3D point clouds [22,23], respectively. Grilli et al. (2017) surveyed point cloud segmentation and classification algorithms [24]. Zhang and Lin (2016) focused on the methodology for fusing optical images and LiDAR point clouds, and reviewed its various applications, including registration, classification, 3D reconstruction, and change detection [25]. Wang et al. (2018a) reviewed existing 3D urban reconstruction techniques from the computer graphics, computer vision, and photogrammetry disciplines, focusing on architectural modeling and generic mesh reconstruction [1]. Puente et al. (2013) introduced mobile mapping and surveying technologies and reviewed mobile mapping systems (MMSs) and their main categories [3]. Guan et al. (2016) reported on the advancements of MLS technologies and their applications in road information inventory [26]. However, these studies mostly focused on ALS point clouds, remote sensing images, and certain types of data processing methods. No reviews have been dedicated to the various MLS applications over urban areas. This article aims to fill this gap by summarizing the recent developments of MLS systems, introducing their typical applications in urban areas, and discussing the key issues of data processing and analysis.
MLS has been developing rapidly, and a review of all aspects of its technologies and applications would be challenging and is thus beyond the scope of this article. Instead, we focus on the applications of MLS technologies over urban areas, where MLS has been most frequently used. The review is organized as follows: Section 2 introduces the characteristics of MLS systems and their point clouds and presents the relevant opportunities and challenges. Section 3 summarizes MLS applications in urban areas. Section 4 discusses the key issues for MLS data processing and analysis. The future developments of MLS are discussed in Section 5, and conclusions are drawn in Section 6.

2. Characteristics of MLS Systems and Data

2.1. Mobile Laser Scanning Systems

Mobile laser scanning systems can be mounted on vehicles, trolleys, backpacks, or boats, with the former three often being used in urban areas [2,27]. The key components of an MLS system include 3D laser scanners, GNSS, IMU, and cameras. Therefore, the major suppliers of MLS systems are companies that can manufacture and integrate these products, such as TOPCON, TRIMBLE, OPTECH, RIEGL, and Hexagon/Leica [3,5]. Table 1 briefly summarizes some of the latest (as of 15 June 2019) MMSs from these vendors. The 3D laser scanners included in these MMSs are mostly eye-safe, compact, and lightweight, with full-circle fields of view.

2.2. Characteristics of MLS Point Clouds

The main advantage of MLS point clouds is their ultra-high point density, which can reach a few thousand points per square meter with centimeter-level point spacing. In contrast, the point density of ALS data is usually <10 points/m² and typical point spacing is 30–50 cm. The point density of terrestrial laser scanning (TLS) data can reach the same level as MLS, but TLS point clouds are often captured in a stop-and-go mode, often without an IMU, which makes it difficult to directly register the point clouds collected at different scanning positions. Moreover, TLS data have a very inhomogeneous point distribution and much lower productivity than MLS. The continuous collection of MLS point clouds at high point density allows the capture of detailed road features such as curbs and road surface markings (Figure 2).
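For an approximately uniform point distribution, point spacing and point density are related by spacing ≈ 1/√density, which is how the figures above correspond; a minimal sketch of this back-of-the-envelope conversion (assuming uniform sampling, which real scans only approximate):

```python
import math

def spacing_from_density(density_pts_per_m2: float) -> float:
    """Approximate mean point spacing (m) for a roughly uniform point density."""
    return 1.0 / math.sqrt(density_pts_per_m2)

print(spacing_from_density(2500))  # MLS-like density: ~0.02 m (centimeter level)
print(spacing_from_density(10))    # ALS-like density: ~0.32 m (30-50 cm range)
```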
However, MLS data pose some challenges for their broader application in urban areas: (1) the ultra-high point density of mobile LiDAR data comes at the expense of large file sizes (on the order of gigabytes for every mile of data), which makes the data relatively slow to process and render [27]; (2) automatic methods are still not efficient for detecting finer and more connected features such as road curbs and markings, which are crucial for autonomous vehicle driving and intelligent transportation; and (3) despite its high sampling density over roadways, MLS cannot map out-of-sight terrain beyond the roadways due to occlusions caused by buildings and trees. In contrast, ALS data from a high vantage point or high-resolution imagery can more easily achieve complete coverage over a large geographical extent. Therefore, MLS data are complementary to other data sources for large-area urban applications.

3. MLS Applications over Urban Areas

Urban areas have a mix of built-up and natural objects, including bare ground, grass cover, trees, roads, and buildings. The analysis of urban land cover and land use has conventionally used optical imagery and, in the 21st century, ALS point clouds [7]. In recent years, the advantages of MLS for extracting finer-scale objects with detailed information have been recognized [28]; see Figure 3 for a sample of features that can be extracted from MLS point clouds. Many urban applications based on MLS have emerged, including road environment reconstruction [5,29,30], single tree stem modeling and pole-like furniture detection [31,32,33,34], building roof segmentation [35], and the reconstruction of other utilities [1,36,37,38]. Here, we organize the urban applications of MLS data into several categories: transportation infrastructure mapping, vegetation mapping and inventory, building information modeling, urban utility surveying and mapping, and autonomous vehicle driving.

3.1. Transportation Infrastructure Mapping

MLS data have considerable use for road surface object classification, road environment modeling, and 3D road asset inventory [5,14,26,39,40,41]. The possibility of mapping detailed transportation features such as railroad center lines [29], road edges [42], street curbs [43], road markings [40,44,45], road cross sections [41], curved façades [46], cars [47], and traffic signs [48,49] has been investigated. The detected information provides basic support for transportation infrastructure management.
Early studies reported that road markings, zebra crossings, centerlines, and curbstones could be identified with mean accuracies of about 80% [50,51,52]. Road surface objects, including urban road manholes and left and right road edges, were also extracted from MLS point clouds [38,40,42]. These studies mostly applied image-processing algorithms and supervised classification methods. Some recent studies have shown that the accuracy of extracting roads can reach 90% by tracking and refining the detected curb points [39,53,54,55]. Finer vertical profiles, including linear parameters (slope and vertical curves) and road cross-sections, could also be modeled [56]; this automatic processing approach used a hierarchical strategy, and the error of the experimental results was under 3.5%. Some studies mapped detailed road surface features and road marking types with high accuracy [14,41,57], in which the average correctness for road surfaces reached 98.3%. Validated on real test sites with fully automatic processing, these methods have shown satisfactory and feasible performance. The extracted transportation infrastructure information is useful for urban facility maintenance, autonomous vehicle assistance, and navigation.
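To illustrate the style of rule-based curb detection used in such studies, below is a minimal, hypothetical sketch (not a reproduction of any cited method) that flags curb candidates on one ordered scan line from the height jump between consecutive points; the 0.05–0.25 m bounds are illustrative assumptions for typical curb heights:

```python
import numpy as np

def curb_candidates(scanline_xyz: np.ndarray,
                    min_jump: float = 0.05,
                    max_jump: float = 0.25) -> np.ndarray:
    """Return indices where the elevation change between consecutive points
    on a scan line looks like a curb step.
    scanline_xyz: (N, 3) array of points ordered along the scan line."""
    dz = np.abs(np.diff(scanline_xyz[:, 2]))      # height change between neighbors
    mask = (dz >= min_jump) & (dz <= max_jump)    # step within plausible curb height
    return np.flatnonzero(mask) + 1               # index of the upper point of each jump

# Example: flat road with a 0.15 m curb step at index 50
line = np.zeros((100, 3))
line[:, 0] = np.arange(100) * 0.02
line[50:, 2] = 0.15
print(curb_candidates(line))  # -> [50]
```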

3.2. Building Information Modeling

MLS data with high point density allow the extraction and 3D reconstruction of buildings for building information modeling (BIM) [58,59,60,61,62]. MLS data have been used to identify building shapes, structures, boundaries, façades, roofs, and outdoor/indoor environments [12,14,30,63,64,65,66] and for street-level building change detection [32,67], and thus have gained attention in the fields of architecture, engineering, and construction [68,69]. One automatic method reconstructs permanent structures, such as walls, floors, and ceilings, by combining feature-primitive extraction with a graph-cut formulation [64]. Different wall directions could be extracted accurately, and the Hausdorff distance to the ground truth was under 2.6 cm. To evaluate the robustness of the method, down-sampled instances with 50%, 20%, 10%, and 5% of the original point cloud were produced. With adjusted parameters, the experiments showed that the reconstruction from the 50% data set provided the same amount of detail as the original one. Some wearable devices and autonomous robots mounted with 3D laser sensors have been used for navigation in outdoor environments [70] or simultaneous localization and mapping (SLAM)-driven robotic mapping in indoor environments [71,72,73,74,75]. Conventionally, MLS has been used for mapping buildings and roofs [58,76,77,78]. In recent years, new applications, such as generating building models with multiple levels of detail for accessibility diagnosis, have begun to emerge [79,80].

3.3. Utility Surveying and Mapping

The extraction of street-scene objects from mobile LiDAR point clouds is a crucial task for urban utility surveying and mapping [39,53]. A few studies have tried to detect and classify pole-like objects (such as light poles and utility poles) and street-side car parks from MLS point clouds [33,81,82]. Moreover, some studies have used pillar structure and geometric index features to detect vertical elements in complex road environments [83,84], with classification rates of up to 90%. However, misclassification often occurred for man-made poles adjacent to trees or low-height traffic signs. By considering spatial relationships, the detection rate for pole-like road furniture reached approximately 95% [85], and the optimized decomposition improved the correctness by 7.3% and 18.4% in the test areas. Furthermore, studies have investigated heuristic segmentation, various classifiers, and the decomposition of road furniture into different components [85,86,87], including traffic signs, traffic lights, advertisement panels, and trees. The overall accuracy of the classified poles reached 96%, although advertisement panels might be classified as traffic signs due to their small panel size and shape similarity.
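As a hedged illustration of the pillar-structure idea behind these detectors, the toy test below (illustrative assumptions only, not a cited algorithm) labels a segmented cluster as pole-like when it has a large vertical extent and a compact horizontal footprint:

```python
import numpy as np

def is_pole_like(cluster_xyz: np.ndarray,
                 min_height: float = 2.0,
                 max_radius: float = 0.3) -> bool:
    """Toy pillar test: tall vertical extent and compact horizontal footprint.
    The thresholds (2 m height, 0.3 m radius) are illustrative assumptions."""
    height = cluster_xyz[:, 2].max() - cluster_xyz[:, 2].min()
    centroid_xy = cluster_xyz[:, :2].mean(axis=0)
    radius = np.linalg.norm(cluster_xyz[:, :2] - centroid_xy, axis=1).max()
    return height >= min_height and radius <= max_radius
```

In practice, such a test would be applied per segmented cluster after ground removal, with trees filtered out by additional shape or intensity cues.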

3.4. Vegetation Mapping and Inventory

Vegetation mapping and inventory is another important application field. MLS has been used to derive information related to vegetation structure or species. For example, 3D individual trees could be segmented from MLS data using image processing techniques [53]. Puttonen et al. (2011) combined MLS and hyperspectral data for tree species classification, found that the combined dataset outperformed single-sensor data, and separated coniferous and deciduous trees with an overall classification accuracy of 83% [88]. To classify tree species, the point clouds were usually first converted to 3D voxels, followed by segmentation or classification of these voxels [89,90], and the overall accuracy of urban tree species classification reached 86.1%. By applying a deep learning model, the average classification accuracy could exceed 93.1%, although the number of projection images used for rasterization affected the training and testing stages; more effective ways of representing 3D objects should therefore be explored.
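The voxel conversion step mentioned above can be sketched in a few lines; the following is a minimal, generic voxelization (the voxel size is an illustrative parameter, not a value from the cited studies):

```python
import numpy as np

def voxelize(points: np.ndarray, voxel_size: float = 0.25):
    """Map each point to an integer (i, j, k) voxel index and return the
    occupied voxels with their per-voxel point counts."""
    idx = np.floor(points / voxel_size).astype(np.int64)         # per-point voxel index
    voxels, counts = np.unique(idx, axis=0, return_counts=True)  # occupied voxels
    return voxels, counts

# Synthetic tree-sized extent: 2 m x 2 m footprint, 10 m tall
pts = np.random.rand(10000, 3) * np.array([2.0, 2.0, 10.0])
vox, cnt = voxelize(pts)
print(vox.shape, cnt.sum())  # (num_occupied_voxels, 3), 10000
```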

3.5. Autonomous Vehicle Driving

MLS data can be used for perceiving street environments [82] and detecting the precise position, orientation, and geometric features of vehicles [91]. Therefore, MLS is a crucial component of the visual perception systems of autonomous vehicles [17]. For autonomous vehicle driving, three MLS applications are particularly important: vehicle and pedestrian detection, lane detection, and drivable surface detection [18,19,92]. In experiments with parked and moving autonomous vehicle platforms in real urban environments, the true positive rates of pedestrian recognition were over 96% and 93%, respectively. Based on a convolutional neural network and image upsampling, cars, cyclists, pedestrians, and trucks were classified with an average accuracy of 97%; compared to traditional RGB images, the four-channel RGB-LiDAR image data reached approximately 5% higher accuracy over different scales of training/testing sets. MLS is often combined with cameras (optical sensors) and RADARs (electromagnetic sensors) for the timely prediction of pedestrian motions, and even for constructing a visual perception system [19,93,94,95,96,97,98,99]. With MLS, it is possible to determine the real-time navigation paths of autonomous mobile robots in unknown environments [3,100], while detecting obstacles and reconstructing and mapping the environment [36,95,101].

4. Key Issues for MLS Application and Technology

For the applications described in Section 3, many key technologies of MLS data processing and analysis are involved (Figure 4). First, the raw MLS point cloud often needs to be classified to identify the class for each point. Second, individual objects can be identified from MLS raw or classified point clouds. Third, MLS data are often fused with other data sources such as ALS point clouds and optical imagery to improve the classification and detection accuracy. Therefore, the registration of MLS with ALS point clouds and/or imagery is important. Finally, the detected and classified objects from MLS point cloud can be used for 3D city modeling.

4.1. Classification Methods

The semantic classification of MLS point clouds is a crucial prerequisite for various applications, including transportation infrastructure mapping, vegetation mapping and inventory, building information modeling, and utility surveying. The classification methods can be grouped as follows:
(1) Unsupervised classification or segmentation with prior knowledge.
This group of methods classifies points based on multiple thresholds, which can be predetermined from the characteristics of the point cloud data or from prior knowledge (e.g., related to height, density, or intensity). The procedures usually involve transforming point clouds into 2D images or a triangular irregular network (TIN) and then voting in the parameter space. The specific methods vary from simple statistical analysis [33,102], the Hough transform (HT) [103], and random sample consensus (RANSAC) [31,104], to more sophisticated methods such as morphological processing [22,57], regression and clustering [23,41,42], principal component analysis (PCA) [77], and Markov networks [40].
Some methods detect road points only. For example, Yang et al. [39,53,54,55] used a partition strategy of consecutive “scanning lines” and moving-window filtering to extract roads. In contrast, other methods classify points belonging to different objects. For example, mean-shift algorithms with different input variables (such as range, smoothed normals, texture information, and RGB image color) were utilized for urban scene segmentation [24,105,106].
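To make the RANSAC approach mentioned in this group concrete, here is a minimal plane-fitting sketch in the textbook form (a generic illustration, not a specific method from the cited studies; the iteration count and inlier threshold are illustrative):

```python
import numpy as np

def ransac_plane(points: np.ndarray, n_iter: int = 200, thresh: float = 0.05):
    """Fit a dominant plane (e.g., a road surface) by random sample consensus.
    Returns ((unit normal n, offset d) of the plane n.x + d = 0, inlier mask)."""
    rng = np.random.default_rng(0)
    best_inliers, best_model = None, None
    for _ in range(n_iter):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p2 - p1, p3 - p1)            # plane normal from 3 samples
        norm = np.linalg.norm(n)
        if norm < 1e-9:                           # degenerate (collinear) sample
            continue
        n /= norm
        d = -np.dot(n, p1)
        dist = np.abs(points @ n + d)             # point-to-plane distances
        inliers = dist < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers
```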
(2) Supervised classification.
Compared to unsupervised methods, supervised methods classify points using rules learnt from training data. The choices of classifier and input features are important factors for supervised classification methods [107,108]. Before classification, raster images are often generated by interpolating the height and intensity of MLS data. Then, supervised methods such as random forests (RF), support vector machines (SVM), Bayesian networks, and neural networks (NN, including deep learning) are applied for urban object classification [38]. The extraction of distinct features from the LiDAR point cloud is a crucial step for many classifiers. These features are often computed from points and their neighbors, representing the spatial structure of the local context. Different neighborhood types and scales have been proposed for computing the spatial structure around each point under consideration, including spherical, cylindrical, and k-nearest-neighbor neighborhoods at single or multiple scales [109]. In particular, optimal eigenvalue-based and slanted cylindrical neighborhoods were shown to increase the distinctiveness of features and to be more suitable for certain object classes [110,111,112]. The extraction of local spatial features is computation-intensive, so a key challenge is to balance the accuracy and the computational cost of these procedures.
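The eigenvalue-based neighborhood features referred to above are commonly derived from the covariance matrix of each point's local neighborhood; a minimal sketch using a k-nearest-neighbor neighborhood, following the common linearity/planarity/sphericity convention (k = 20 is an illustrative choice):

```python
import numpy as np
from scipy.spatial import cKDTree

def eigen_features(points: np.ndarray, k: int = 20) -> np.ndarray:
    """Per-point linearity, planarity, and sphericity from the eigenvalues of
    the covariance matrix of each point's k nearest neighbors."""
    tree = cKDTree(points)
    _, nn_idx = tree.query(points, k=k)            # k-NN indices per point
    feats = np.empty((len(points), 3))
    for i, idx in enumerate(nn_idx):
        nbrs = points[idx]
        cov = np.cov(nbrs.T)                       # 3x3 local covariance
        l3, l2, l1 = np.linalg.eigvalsh(cov)       # ascending: l3 <= l2 <= l1
        l1 = max(l1, 1e-12)
        feats[i] = [(l1 - l2) / l1,                # linearity
                    (l2 - l3) / l1,                # planarity
                    l3 / l1]                       # sphericity
    return feats
```

Such feature vectors, possibly computed at multiple scales, would then be fed to a classifier such as RF or SVM.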

4.2. Object Recognition

Object recognition is the task of extracting or detecting geographical objects of particular types from point clouds, which may involve 3D reconstruction and shape restoration. For successful object detection and recognition, it is critical to extract affine-invariant descriptors of geometric structure [113]. In this regard, researchers have introduced covariance-based feature descriptors [114,115,116]. Content-based shape descriptors have also been proposed to identify 3D objects such as street light poles, road markings, zebra crossings, and tunnel luminaires [26,51,113,117,118]. However, current object recognition research mainly focuses on a single object type or on refinement after point cloud classification. Further studies are needed to develop frameworks that semantically classify point clouds, extract 3D shape descriptors, and detect objects simultaneously.
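As a toy illustration of covariance-style object descriptors and template matching (a simplified stand-in, not the cited descriptors), one can summarize each segmented object by a few covariance and extent statistics and assign the label of the nearest template:

```python
import numpy as np

def object_descriptor(cluster: np.ndarray) -> np.ndarray:
    """Toy rotation-invariant descriptor: height, mean footprint radius, and
    normalized covariance eigenvalues of the whole object."""
    h = cluster[:, 2].max() - cluster[:, 2].min()
    r = np.linalg.norm(cluster[:, :2] - cluster[:, :2].mean(0), axis=1).mean()
    ev = np.sort(np.linalg.eigvalsh(np.cov(cluster.T)))[::-1]
    ev = ev / max(ev[0], 1e-12)
    return np.array([h, r, ev[1], ev[2]])

def match(query: np.ndarray, templates: dict) -> str:
    """Assign the query object the label of the nearest template descriptor."""
    d = object_descriptor(query)
    return min(templates, key=lambda name: np.linalg.norm(templates[name] - d))
```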

4.3. Data Registration

Optical imagery and ALS data can be combined with MLS to provide complementary information for urban environmental mapping. However, accurate geometric registration of different data sources is a crucial issue for “geospatial big data” applications [119,120]. The GNSS/IMU system commonly onboard a mobile mapping system makes it possible to directly georeference the laser scanner and optical camera data into a common map coordinate system. However, the georeferencing quality is often challenged by GNSS signal perturbation, or even denial, caused by tall buildings, trees, and tunnels. A synergistic use of GNSS, IMU, and odometer information is often proposed to improve the estimation of the position and orientation of the platform. To achieve the best accuracy when registering different sensor data via direct georeferencing, the misalignment among sensors needs to be carefully calibrated (through either indirect or direct sensor orientation), and their clocks need to be rigorously synchronized. In addition, the data registration among different sensors can be performed as discussed in the following studies:
(1) Registration of MLS with other point clouds. The point clouds from different sources can be aligned in a common, global coordinate system by pairwise registration and individual transformations, using the overlap of the point clouds to establish a link [121,122,123] (a minimal sketch of the fine-registration step is given after this list). Some studies have introduced artificial targets (such as spheres) in the registration of multi-view point clouds [124,125], in which the centroids of the detected targets were used for rough registration, and their position, orientation, and geometric shapes were used for refining the results. Cheng et al. (2015, 2018) proposed a hierarchical, coarse-to-fine strategy for registering laser scanning point clouds [122,123]: road networks and 3D building contours were used for the initial coarse registration and the fine registration, respectively. Local feature descriptors and geometric key point descriptors have often been designed for efficient 3D point cloud registration, involving a comprehensive description of local shape geometry in terms of local depth, point density, and angles between normals [114,126]. An automatic non-rigid registration method for multi-strip MLS point clouds was also proposed by Yan et al. (2018) [125]. This method was verified with MLS data scanned in city road and highway environments, and the root mean square errors were about 4–5 cm.
(2) Registration of MLS with images. The combination of MLS point clouds with optical images is another important form of data registration [127]. Some recent studies have introduced semantic-feature-based, sensor-constellation-based, and image-to-geometry registration methods [127,128,129]. The first method applies semantic features extracted from panoramic images and point clouds; consumer-grade GPS- and IMU-aided structure from motion was used to estimate the rotation parameters, and the final errors were less than three pixels on two challenging urban scenes. The sensor-constellation method first applies the intersection feature points, which lie on the connecting line between the global positioning system antenna and the panoramic camera, to separate the point clouds into blocks; the accurate corresponding pixels in the relevant images are then found from the position and orientation relationships between the different sensors [128]. The image-to-geometry method consists of generating a wide-angle synthetic image, translation in the image plane, and fine registration with a new image metric [128,129].
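The fine-registration step referenced in the list above is typically some variant of the iterative closest point (ICP) algorithm; below is a minimal point-to-point ICP sketch in the textbook (Kabsch/SVD) form, assuming a reasonable initial alignment from coarse registration:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src: np.ndarray, dst: np.ndarray, n_iter: int = 30):
    """Align src (N,3) to dst (M,3): nearest-neighbor correspondences, then
    the SVD (Kabsch) solution for the rigid transform, iterated."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(n_iter):
        _, idx = tree.query(cur)                  # closest dst point per src point
        matched = dst[idx]
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_d)     # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
        R_step = Vt.T @ D @ U.T
        t_step = mu_d - R_step @ mu_s
        cur = cur @ R_step.T + t_step             # apply incremental transform
        R, t = R_step @ R, R_step @ t + t_step    # accumulate total transform
    return R, t
```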

4.4. Data Integration

Because of the differences in sensor scanning angle, acquisition time, and pose, different data sources (MLS, ALS, and imagery) can be combined to complement each other for geospatial applications [7,130,131,132,133,134,135]. For example, optical imagery, captured by an unmanned aerial vehicle (UAV) system or an aerospace sensor, and LiDAR point clouds can be combined for DTM generation, land use and land cover classification, building extraction, and 3D reconstruction [6,25,136,137]. Conventionally, aerial images and ALS are commonly utilized for urban land-use and land cover mapping [6,130,138]. Research has shown that the synergistic use of LiDAR point clouds, multispectral imagery, and normalized difference vegetation index (NDVI) data provides significant advantages through the additional channels, and can thus achieve more accurate urban land cover classification results than single-sensor data. However, these aerial datasets might not provide sufficient details for analysis over complex urban areas [139]. MLS can be combined with other data sources to address this issue. The fusion of 3D point clouds and 2D images can be applied to the semantic segmentation of large-scale urban scenes, obstacle recognition, road detection, and autonomous vehicle driving [19,97,140,141,142]. Table 2 summarizes the different levels of data integration between MLS point clouds and other data sources.
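A common low-level fusion step is projecting LiDAR points into a calibrated camera image to attach a color to each point; a minimal pinhole-projection sketch (K, R, and t are assumed to come from prior sensor calibration; occlusion handling is omitted for brevity):

```python
import numpy as np

def colorize_points(points, image, K, R, t):
    """Attach an RGB value to each 3D point visible in a calibrated image.
    K: 3x3 camera intrinsics; R, t: world-to-camera rotation and translation."""
    cam = points @ R.T + t                         # world -> camera frame
    in_front = cam[:, 2] > 1e-6                    # keep points in front of camera
    uvw = cam @ K.T                                # pinhole projection (homogeneous)
    uv = np.zeros((len(points), 2), dtype=int)
    uv[in_front] = (uvw[in_front, :2] / uvw[in_front, 2:3]).round().astype(int)
    h, w = image.shape[:2]
    valid = in_front & (uv[:, 0] >= 0) & (uv[:, 0] < w) & \
            (uv[:, 1] >= 0) & (uv[:, 1] < h)
    colors = np.zeros((len(points), 3), dtype=image.dtype)
    colors[valid] = image[uv[valid, 1], uv[valid, 0]]   # row = v, column = u
    return colors, valid
```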

4.5. 3D City Modeling

Research on 3D digital city modeling is becoming increasingly popular [145]. With MLS data of high point density, not only can different types of objects (e.g., ground, roads, trees, buildings, utilities) be classified and detected, but detailed city models can also be geometrically regularized and reconstructed in 3D. MLS point clouds can overcome the limitations of very low productivity and low geometric accuracy in real-world, high-quality 3D city modeling [146]. The images and point clouds acquired by an MLS system are usually combined for texture mapping and semantic labeling when constructing 3D city meshes [143,144]. Moreover, MLS data provide an efficient solution for automatic geometry generation and shape detection for BIM and city models [32,69,147]. Recently, multiple cooperative robots with 3D lasers have also been applied to automatic large-scale 3D modeling [148,149].

5. Future Developments of MLS

5.1. Emergence of New MLS Systems and Applications

MLS data have been increasingly used in urban sensing and recognition applications, such as transportation infrastructure mapping and autonomous vehicle driving. However, their applications have been largely limited to roads where vehicles can drive. To access other areas in complex urban environments, there is a growing need to develop and use MLS systems on non-vehicle platforms, such as backpack MLS, to collect data over areas that vehicle MLS cannot reach. Another trend is the integration of MLS with other instruments, such as robot probe units, road disease detection robots, guiding robots, and UAVs, for in-depth diagnosis and more detailed measurements of urban environments (e.g., detecting the surface health of roads and monitoring micro-level shape changes). The applications of MLS have also been expanding from conventional street environment mapping and perception to other areas such as reverse engineering for industrial control, SLAM, and the exploration of hazardous environments. MLS data can also be integrated with high-resolution satellite imagery for urban analysis over relatively large areas.

5.2. Data Processing Framework Tailored to MLS

Compared to ALS, the much higher point density of MLS can provide more detailed 3D spatial information for object classification and environment modeling, for example of road surfaces [50,54,85,150], buildings [53,76,77], and power lines [151,152]. However, MLS brings the challenge of processing very large data volumes. Few analysis methods have been tailored to processing massive MLS point clouds while taking full advantage of their high point density [153]. MLS data provide more profile information along the scanning paths, with the laser beam sweeping across the direction of travel, whereas ALS provides a vertical, top-down representation of the earth surface. The registration between MLS and ALS data therefore also needs efficient methods and further study. Moreover, MLS has been increasingly applied to indoor/underground real-time navigation and SLAM, which means that accurate real-time processing algorithms are needed.

5.3. Machine Learning Based Classification Methods

Classifying 3D LiDAR point clouds is often a critical step for many urban applications. To address this issue, machine learning methods are increasingly applied [90,154,155,156,157,158]. Although these methods have been successfully used for high-resolution 2D imagery classification [159,160,161,162,163,164], extending them to 3D MLS data needs to account for the informative yet irregular 3D distribution of point clouds. Some relevant key issues include:
First, it is important to select suitable classifiers and features. Classifiers such as SVM, RF, and NN have been used for road, building, pole-like object, and power line extraction from MLS [82,86,87,151,165]. Semantic constraints, roughness parameters, topological structural information, and clustering algorithms have been considered for unsupervised classification [166,167,168]. However, identifying useful features and classifiers for a wide range of applications deserves more investigation.
Second, deep learning, especially convolutional neural networks, has experienced explosive growth in computer vision for image perception and is thus a promising technique for the automatic classification of point clouds. However, most existing deep learning frameworks were developed for processing 2D images with pixels on a regular grid, not for 3D point clouds that are irregularly distributed in 3D space. Therefore, it is difficult to directly apply commonly used neural network architectures to the semantic classification of MLS data. Some studies proposed methods for transforming 3D point clouds into 2D images before using them as inputs to multi-layer neural networks; another alternative was dimensionality reduction of the data. However, such processing results in information loss, larger classification errors, and more computation. Therefore, fast, accurate, and efficient deep learning frameworks tailored to processing massive MLS point clouds are urgently needed.
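A minimal sketch of the point-to-image transformation described above, rasterizing a point cloud into a bird's-eye-view maximum-height image that a standard 2D CNN could consume (the 40 m extent and 0.1 m resolution are illustrative assumptions):

```python
import numpy as np

def bev_height_image(points: np.ndarray, res: float = 0.1,
                     extent: float = 40.0) -> np.ndarray:
    """Rasterize (N,3) points into a top-down max-height image of
    (extent/res) x (extent/res) pixels centered on the origin."""
    n = int(extent / res)
    img = np.full((n, n), -np.inf)
    ij = np.floor((points[:, :2] + extent / 2) / res).astype(int)
    ok = (ij >= 0).all(axis=1) & (ij < n).all(axis=1)
    # keep the maximum z per cell ("max pooling" over the vertical axis)
    np.maximum.at(img, (ij[ok, 1], ij[ok, 0]), points[ok, 2])
    img[np.isinf(img)] = 0.0                   # empty cells -> 0
    return img
```

Additional channels (e.g., intensity or point count per cell) can be stacked analogously, at the cost of the information loss noted above.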
Third, automation and efficiency are further concerns for classification methods. For many supervised or semi-supervised classification methods, it is very costly and time-consuming to produce ground truth datasets, especially when point-wise class labels are assigned manually. The strategy for selecting training, validation, and test datasets is also crucial for the results. Therefore, fully automated, efficient, and unsupervised classification methods need to be developed in the future.

6. Conclusions

With the recent developments in sensing technology, MLS point clouds have attracted growing attention and have been increasingly used in various urban management and planning applications. Data processing, data mining, and knowledge discovery from MLS data have become an active area of research. This paper conducted a comprehensive review of MLS applications and key techniques in urban areas. As a state-of-the-art technology for obtaining detailed 3D geographical surface information, MLS provides fine-scale and highly accurate point clouds for urban environment surveying and mapping. We introduced the characteristics of MLS systems and data, and investigated the emerging urban applications based on MLS data, such as transportation infrastructure mapping, vegetation mapping and inventory, building information modeling, utility surveying and mapping, and autonomous vehicle driving. To realize these applications, several key issues should be addressed, including classification methods, object recognition, data registration, data fusion, and 3D city modeling. In large-scale, complex, and changing urban areas, new MLS systems will emerge for potential applications. Moreover, it is also necessary to develop rapid, automated, and intelligent techniques, such as specialized processing frameworks and machine learning-based methods.

Author Contributions

Y.W. and Q.C. together designed the research and organized the literature. Y.W. conducted the analysis and wrote the manuscript. Q.C. contributed the LiDAR point clouds and assisted with the manuscript writing. Q.Z. assisted with refining the research design and the manuscript writing. L.L. assisted with refining the framework and the literature analysis. C.L. and D.Z. assisted with the analysis and the interpretation of the results.

Funding

This work is supported by the National Natural Science Foundation of China (grant numbers 41601426 and 41771462), the National Key R&D Program of China (grant number 2018YFB0504501), the Natural Science Foundation of Hunan Province (grant numbers 2018JJ3155 and 2018JJ3151), and the Open Foundation of Key Laboratory of Digital Mapping and Land Information Application of National Administration of Surveying, Mapping and Geoinformation, Wuhan University (grant number GCWD201806).

Acknowledgments

Sincere thanks are given for the comments and contributions of anonymous reviewers and members of the editorial team.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, R.; Peethambaran, J.; Chen, D. Lidar point clouds to 3-d urban models: A review. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 606–627. [Google Scholar] [CrossRef]
  2. Kukko, A.; Kaartinen, H.; Hyyppä, J.; Chen, Y. Multiplatform mobile laser scanning: Usability and performance. Sensors 2012, 12, 11712–11733. [Google Scholar] [CrossRef]
  3. Puente, I.; González-Jorge, H.; Martínez-Sánchez, J.; Arias, P. Review of mobile mapping and surveying technologies. Measurement 2013, 46, 2127–2145. [Google Scholar] [CrossRef]
  4. Rodríguez-Gonzálvez, P.; Jiménez Fernández-Palacios, B.; Muñoz-Nieto, Á.L.; Arias-Sanchez, P.; Gonzalez-Aguilera, D. Mobile lidar system: New possibilities for the documentation and dissemination of large cultural heritage sites. Remote Sens. 2017, 9, 189. [Google Scholar] [CrossRef]
  5. Sairam, N.; Nagarajan, S.; Ornitz, S. Development of mobile mapping system for 3d road asset inventory. Sensors 2016, 16, 367. [Google Scholar] [CrossRef] [PubMed]
  6. Hartfield, K.A.; Landau, K.I.; van Leeuwen, W.J.D. Fusion of high resolution aerial multispectral and lidar data: Land cover in the context of urban mosquito habitat. Remote Sens. 2011, 3, 2364–2383. [Google Scholar] [CrossRef]
  7. Yan, W.Y.; Shaker, A.; El-Ashmawy, N. Urban land cover classification using airborne lidar data: A review. Remote Sens. Environ. 2015, 158, 295–310. [Google Scholar] [CrossRef]
  8. Zou, X.; Zhao, G.; Li, J.; Yang, Y.; Fang, Y. Object based image analysis combining high spatial resolution imagery and laser point clouds for urban land cover. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 733–739. [Google Scholar] [CrossRef]
  9. Matikainen, L.; Karila, K.; Hyyppä, J.; Litkey, P.; Puttonen, E.; Ahokas, E. Object-based analysis of multispectral airborne laser scanner data for land cover classification and map updating. ISPRS J. Photogramm. Remote Sens. 2017, 128, 298–313. [Google Scholar] [CrossRef]
  10. Guo, L.; Chehata, N.; Mallet, C.; Boukir, S. Relevance of airborne lidar and multispectral image data for urban scene classification using random forests. ISPRS J. Photogramm. Remote Sens. 2011, 66, 56–66. [Google Scholar] [CrossRef]
  11. Chen, G.; Weng, Q.; Hay, G.J.; He, Y. Geographic object-based image analysis (geobia): Emerging trends and future opportunities. Gisci. Remote Sens. 2018, 55, 159–182. [Google Scholar] [CrossRef]
  12. Rusu, R.B.; Marton, Z.C.; Blodow, N.; Dolha, M.; Beetz, M. Towards 3d point cloud based object maps for household environments. Robot. Auton. Syst. 2008, 56, 927–941. [Google Scholar] [CrossRef]
  13. Boyko, A.; Funkhouser, T. Extracting roads from dense point clouds in large scale urban environment. Isprs J. Photogramm. Remote Sens. 2011, 66, S2–S12. [Google Scholar] [CrossRef] [Green Version]
  14. Yadav, M.; Singh, A.K.; Lohani, B. Extraction of road surface from mobile lidar data of complex road environment. Int. J. Remote Sens. 2017, 38, 4655–4682. [Google Scholar] [CrossRef]
  15. Jeong, J.; Yoon, T.S.; Park, J.B. Multimodal sensor-based semantic 3d mapping for a large-scale environment. Expert Syst. Appl. 2018, 105, 1–10. [Google Scholar] [CrossRef]
  16. Soilan, M.; Riveiro, B.; Sanchez-Rodriguez, A.; Arias, P. Safety assessment on pedestrian crossing environments using mls data. Accid. Anal. Prev. 2018, 111, 328–337. [Google Scholar] [CrossRef] [PubMed]
  17. Shi, W.; Alawieh, M.B.; Li, X.; Yu, H. Algorithm and hardware implementation for visual perception system in autonomous vehicle: A survey. Integr. VLSI J. 2017, 59, 148–156. [Google Scholar] [CrossRef]
  18. Wang, H.; Wang, B.; Liu, B.; Meng, X.; Yang, G. Pedestrian recognition and tracking using 3d lidar for autonomous vehicle. Robot. Auton. Syst. 2017, 88, 71–78. [Google Scholar] [CrossRef]
  19. Gao, H.; Cheng, B.; Wang, J.; Li, K.; Zhao, J.; Li, D. Object classification using cnn-based fusion of vision and lidar in autonomous vehicle environment. IEEE Trans. Ind. Inform. 2018, 14, 4224–4231. [Google Scholar] [CrossRef]
  20. Maxwell, A.E.; Warner, T.A.; Fang, F. Implementation of machine-learning classification in remote sensing: An applied review. Int. J. Remote Sens. 2018, 39, 2784–2817. [Google Scholar] [CrossRef]
  21. Zhu, X.X.; Tuia, D.; Mou, L.; Xia, G.-S.; Zhang, L.; Xu, F.; Fraundorfer, F. Deep learning in remote sensing: A review. arXiv 2017, arXiv:1710.03959. [Google Scholar]
  22. Meng, X.; Currit, N.; Zhao, K. Ground filtering algorithms for airborne lidar data: A review of critical issues. Remote Sens. 2010, 2, 833–860. [Google Scholar] [CrossRef]
  23. Han, X.-F.; Jin, J.S.; Wang, M.-J.; Jiang, W.; Gao, L.; Xiao, L. A review of algorithms for filtering the 3d point cloud. Signal Process. Image Commun. 2017, 57, 103–112. [Google Scholar] [CrossRef]
  24. Grilli, E.; Menna, F.; Remondino, F. A review of point clouds segmentation and classification algorithms. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 339–344. [Google Scholar] [CrossRef]
  25. Zhang, J.; Lin, X. Advances in fusion of optical imagery and lidar point cloud applied to photogrammetry and remote sensing. Int. J. Image Data Fusion 2016, 8, 1–31. [Google Scholar] [CrossRef]
  26. Guan, H.; Li, J.; Cao, S.; Yu, Y. Use of mobile lidar in road information inventory: A review. Int. J. Image Data Fusion 2016, 7, 219–242. [Google Scholar] [CrossRef]
  27. Brédif, M.; Vallet, B.; Ferrand, B. Distributed dimensonality-based rendering of lidar point clouds. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 40, 559–564. [Google Scholar] [CrossRef]
  28. Rau, J.-Y.; Jhan, J.-P.; Hsu, Y.-C. Analysis of oblique aerial images for land cover and point cloud classification in an urban environment. IEEE Trans. Geosci. Remote Sens. 2014, 53, 1304–1319. [Google Scholar] [CrossRef]
  29. Beger, R.; Gedrange, C.; Hecht, R.; Neubert, M. Data fusion of extremely high resolution aerial imagery and lidar data for automated railroad centre line reconstruction. ISPRS J. Photogramm. Remote Sens. 2011, 66, S40–S51. [Google Scholar] [CrossRef]
  30. Xiang, B.; Yao, J.; Lu, X.; Li, L.; Xie, R.; Li, J. Segmentation-based classification for 3d point clouds in the road environment. Int. J. Remote Sens. 2018, 39, 6182–6212. [Google Scholar] [CrossRef]
  31. Olofsson, K.; Holmgren, J.; Olsson, H. Tree stem and height measurements using terrestrial laser scanning and the ransac algorithm. Remote Sens. 2014, 6, 4323–4344. [Google Scholar] [CrossRef]
  32. Qin, R.; Gruen, A. 3d change detection at street level using mobile laser scanning point clouds and terrestrial images. ISPRS J. Photogramm. Remote Sens. 2014, 90, 23–35. [Google Scholar] [CrossRef]
  33. Cabo, C.; Ordoñez, C.; García-Cortés, S.; Martínez, J. An algorithm for automatic detection of pole-like street furniture objects from mobile laser scanner point clouds. ISPRS J. Photogramm. Remote Sens. 2014, 87, 47–56. [Google Scholar] [CrossRef]
  34. De Conto, T.; Olofsson, K.; Görgens, E.B.; Rodriguez, L.C.E.; Almeida, G. Performance of stem denoising and stem modelling algorithms on single tree point clouds from terrestrial laser scanning. Comput. Electron. Agric. 2017, 143, 165–176. [Google Scholar] [CrossRef]
  35. Chen, D.; Zhang, L.; Mathiopoulos, P.T.; Huang, X. A methodology for automated segmentation and reconstruction of urban 3-d buildings from als point clouds. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 4199–4217. [Google Scholar] [CrossRef]
  36. Dairi, A.; Harrou, F.; Senouci, M.; Sun, Y. Unsupervised obstacle detection in driving environments using deep-learning-based stereovision. Robot. Auton. Syst. 2018, 100, 287–301. [Google Scholar] [CrossRef] [Green Version]
  37. Zhi, S.; Liu, Y.; Li, X.; Guo, Y. Toward real-time 3d object recognition: A lightweight volumetric cnn framework using multitask learning. Comput. Graph. 2018, 71, 199–207. [Google Scholar] [CrossRef]
  38. Yu, Y.; Guan, H.; Ji, Z. Automated detection of urban road manhole covers using mobile laser scanning data. IEEE Trans. Intell. Transp. Syst. 2015, 16, 3258–3269. [Google Scholar] [CrossRef]
  39. Yang, B.; Fang, L.; Li, J. Semi-automated extraction and delineation of 3d roads of street scene from mobile laser scanning point clouds. ISPRS J. Photogramm. Remote Sens. 2013, 79, 80–93. [Google Scholar] [CrossRef]
  40. Yu, Y.; Li, J.; Guan, H.; Jia, F.; Wang, C. Learning hierarchical features for automated extraction of road markings from 3-d mobile lidar point clouds. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 709–726. [Google Scholar] [CrossRef]
  41. Gargoum, S.A.; El-Basyouny, K.; Froese, K.; Gadowski, A. A fully automated approach to extract and assess road cross sections from mobile lidar data. IEEE Trans. Intell. Transp. Syst. 2018, 19, 3507–3516. [Google Scholar] [CrossRef]
  42. Kumar, P.; McElhinney, C.P.; Lewis, P.; McCarthy, T. An automated algorithm for extracting road edges from terrestrial mobile lidar data. ISPRS J. Photogramm. Remote Sens. 2013, 85, 44–55. [Google Scholar] [CrossRef]
  43. Rodríguez-Cuenca, B.; García-Cortés, S.; Ordóñez, C.; Alonso, M.C. An approach to detect and delineate street curbs from mls 3d point cloud data. Autom. Constr. 2015, 51, 103–112. [Google Scholar] [CrossRef]
  44. Hervieu, A.; Soheilian, B. Road side detection and reconstruction using lidar sensor. In Proceedings of the 2013 IEEE Intelligent Vehicles Symposium (IV), Gold Coast, Australia, 23–26 June 2013; pp. 1247–1252. [Google Scholar]
  45. Hervieu, A.; Soheilian, B.; Brédif, M. Road marking extraction using a model&data-driven rj-mcmc. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 2, 47–54. [Google Scholar]
  46. Iman Zolanvari, S.M.; Laefer, D.F. Slicing method for curved façade and window extraction from point clouds. ISPRS J. Photogramm. Remote Sens. 2016, 119, 334–346. [Google Scholar] [CrossRef]
  47. Yu, Y.; Li, J.; Guan, H.; Wang, C. Automated detection of three-dimensional cars in mobile laser scanning point clouds using dbm-hough-forests. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4130–4142. [Google Scholar] [CrossRef]
  48. Soilán, M.; Riveiro, B.; Martínez-Sánchez, J.; Arias, P. Traffic sign detection in mls acquired point clouds for geometric and image-based semantic inventory. ISPRS J. Photogramm. Remote Sens. 2016, 114, 92–101. [Google Scholar] [CrossRef]
  49. Arcos-García, Á.; Soilán, M.; Álvarez-García, J.A.; Riveiro, B. Exploiting synergies of mobile mapping sensors and deep learning for traffic sign recognition systems. Expert Syst. Appl. 2017, 89, 286–295. [Google Scholar] [CrossRef]
  50. Jaakkola, A.; Hyyppä, J.; Hyyppä, H.; Kukko, A. Retrieval algorithms for road surface modelling using laser-based mobile mapping. Sensors 2008, 8, 5238–5249. [Google Scholar] [CrossRef]
  51. Guan, H.; Li, J.; Yu, Y.; Ji, Z.; Wang, C. Using mobile lidar data for rapidly updating road markings. IEEE Trans. Intell. Transp. Syst. 2015, 16, 2457–2466. [Google Scholar] [CrossRef]
  52. Yang, B.; Fang, L.; Li, Q.; Li, J. Automated extraction of road markings from mobile lidar point clouds. Photogramm. Eng. Remote Sens. 2012, 78, 331–338. [Google Scholar] [CrossRef]
  53. Yang, B.; Wei, Z.; Li, Q.; Li, J. Automated extraction of street-scene objects from mobile lidar point clouds. Int. J. Remote Sens. 2012, 33, 5839–5861. [Google Scholar] [CrossRef]
  54. Yang, B.; Dong, Z.; Liu, Y.; Liang, F.; Wang, Y. Computing multiple aggregation levels and contextual features for road facilities recognition using mobile laser scanning data. ISPRS J. Photogramm. Remote Sens. 2017, 126, 180–194. [Google Scholar] [CrossRef]
  55. Yang, B.; Liu, Y.; Dong, Z.; Liang, F.; Li, B.; Peng, X. 3d local feature bkd to extract road information from mobile laser scanning point clouds. ISPRS J. Photogramm. Remote Sens. 2017, 130, 329–343. [Google Scholar] [CrossRef]
  56. Holgado-Barco, A.; Gonzalez-Aguilera, D.; Arias-Sanchez, P.; Martinez-Sanchez, J. An automated approach to vertical road characterisation using mobile lidar systems: Longitudinal profiles and cross-sections. ISPRS J. Photogramm. Remote Sens. 2014, 96, 28–37. [Google Scholar] [CrossRef]
  57. Guo, J.; Tsai, M.-J.; Han, J.-Y. Automatic reconstruction of road surface features by using terrestrial mobile lidar. Autom. Constr. 2015, 58, 165–175. [Google Scholar] [CrossRef]
  58. Wang, Y.; Cheng, L.; Chen, Y.; Wu, Y.; Li, M. Building point detection from vehicle-borne lidar data based on voxel group and horizontal hollow analysis. Remote Sens. 2016, 8, 419. [Google Scholar] [CrossRef]
  59. Du, S.; Zhang, Y.; Qin, R.; Yang, Z.; Zou, Z.; Tang, Y.; Fan, C. Building change detection using old aerial images and new lidar data. Remote Sens. 2016, 8, 1030. [Google Scholar] [CrossRef]
  60. Sohn, G.; Dowman, I. Data fusion of high-resolution satellite imagery and lidar data for automatic building extraction. ISPRS J. Photogramm. Remote Sens. 2007, 62, 43–63. [Google Scholar] [CrossRef]
  61. Hussain, E.; Ural, S.; Kim, K.; Fu, C.S.; Shan, J. Building extraction and rubble mapping for city port-au-prince post-2010 earthquake with geoeye-1 imagery and lidar data. Photogramm. Eng. Remote Sens. 2011, 77, 1011–1023. [Google Scholar]
  62. Chen, K.; Lu, W.; Xue, F.; Tang, P.; Li, L.H. Automatic building information model reconstruction in high-density urban areas: Augmenting multi-source data with architectural knowledge. Autom. Constr. 2018, 93, 22–34. [Google Scholar] [CrossRef]
  63. Gong, F.-Y.; Zeng, Z.-C.; Zhang, F.; Li, X.; Ng, E.; Norford, L.K. Mapping sky, tree, and building view factors of street canyons in a high-density urban environment. Build. Environ. 2018, 134, 155–167. [Google Scholar] [CrossRef]
  64. Oesau, S.; Lafarge, F.; Alliez, P. Indoor scene reconstruction using feature sensitive primitive extraction and graph-cut. ISPRS J. Photogramm. Remote Sens. 2014, 90, 68–82. [Google Scholar] [CrossRef] [Green Version]
  65. Laefer, D.F.; Truong-Hong, L. Toward automatic generation of 3d steel structures for building information modelling. Autom. Constr. 2017, 74, 66–77. [Google Scholar] [CrossRef]
  66. Li, Y.; Hu, Q.; Wu, M.; Liu, J.; Wu, X. Extraction and simplification of building façade pieces from mobile laser scanner point clouds for 3d street view services. ISPRS Int. J. Geo-Inf. 2016, 5, 231. [Google Scholar] [CrossRef]
  67. Qin, R.; Tian, J.; Reinartz, P. 3d change detection—Approaches and applications. ISPRS J. Photogramm. Remote Sens. 2016, 122, 41–56. [Google Scholar] [CrossRef]
  68. Nguyen, L.V.; La, H.M.; Sanchez, J.; Vu, T. A smart shoe for building a real-time 3d map. Autom. Constr. 2016, 71, 2–12. [Google Scholar] [CrossRef]
  69. Thomson, C.; Boehm, J. Automatic geometry generation from point clouds for bim. Remote Sens. 2015, 7, 11753–11775. [Google Scholar] [CrossRef]
  70. Krajník, T.; Cristóforis, P.; Kusumam, K.; Neubert, P.; Duckett, T. Image features for visual teach-and-repeat navigation in changing environments. Robot. Auton. Syst. 2017, 88, 127–141. [Google Scholar] [CrossRef] [Green Version]
  71. Kim, P.; Chen, J.; Cho, Y.K. Slam-driven robotic mapping and registration of 3d point clouds. Autom. Constr. 2018, 89, 38–48. [Google Scholar] [CrossRef]
  72. Chen, C.; Zou, X.; Tian, M.; Li, J.; Wu, W.; Song, Y.; Dai, W.; Yang, B. Low cost multi-sensor robot laser scanning system and its accuracy investigations for indoor mapping application. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 83–85. [Google Scholar] [CrossRef]
  73. Bartkevicius, S.; Fiodorova, O.; Knys, A.; Derviniene, A.; Dervinis, G.; Raudonis, V.; Lipnickas, A.; Baranauskas, V.; Sarkauskas, K.; Balasevicius, L. Mobile robots navigation modeling in known 2d environment based on petri nets. Intell. Autom. Soft Comput. 2017. [Google Scholar] [CrossRef]
  74. Macher, H.; Landes, T.; Grussenmeyer, P. From point clouds to building information models: 3d semi-automatic reconstruction of indoors of existing buildings. Appl. Sci. 2017, 7, 1030. [Google Scholar] [CrossRef]
  75. Hong, S.; Jung, J.; Kim, S.; Cho, H.; Lee, J.; Heo, J. Semi-automated approach to indoor mapping for 3d as-built building information modeling. Comput. Environ. Urban Syst. 2015, 51, 34–46. [Google Scholar] [CrossRef]
  76. Zhu, L.L.; Hyyppa, J.; Kukko, A.; Kaartinen, H.; Chen, R.Z. Photorealistic building reconstruction from mobile laser scanning data. Remote Sens. 2011, 3, 1406–1426. [Google Scholar] [CrossRef]
  77. Yang, B.; Wei, Z.; Li, Q.; Li, J. Semiautomated building facade footprint extraction from mobile lidar point clouds. IEEE Geosci. Remote Sens. Lett. 2013, 10, 766–770. [Google Scholar] [CrossRef]
  78. Du, S.J.; Zhang, Y.S.; Zou, Z.R.; Xu, S.H.; He, X.; Chen, S.Y. Automatic building extraction from lidar data fusion of point and grid-based features. ISPRS J. Photogramm. Remote Sens. 2017, 130, 294–307. [Google Scholar] [CrossRef]
  79. Xu, J.; Hou, W.; Sun, Y.; Lee, Y.-S. Plsp based layered contour generation from point cloud for additive manufacturing. Robot. Comput. Integr. Manuf. 2018, 49, 1–12. [Google Scholar] [CrossRef]
  80. Balado, J.; Díaz-Vilariño, L.; Arias, P.; Soilán, M. Automatic building accessibility diagnosis from point clouds. Autom. Constr. 2017, 82, 103–111. [Google Scholar] [CrossRef] [Green Version]
  81. Huang, J.; You, S. Pole-like object detection and classification from urban point clouds. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation, ICRA 2015, Seattle, WA, USA, 26–30 May 2015; pp. 3032–3038. [Google Scholar]
  82. Xiao, W.; Vallet, B.; Schindler, K.; Paparoditis, N. Street-side vehicle detection, classification and change detection using mobile laser scanning data. ISPRS J. Photogramm. Remote Sens. 2016, 114, 166–178. [Google Scholar] [CrossRef]
  83. Rodriguez-Cuenca, B.; Garcia-Cortes, S.; Ordonez, C.; Alonso, M.C. Automatic detection and classification of pole-like objects in urban point cloud data using an anomaly detection algorithm. Remote Sens. 2015, 7, 12680–12703. [Google Scholar] [CrossRef]
84. Yadav, M.; Lohani, B.; Singh, A.K.; Husain, A. Identification of pole-like structures from mobile lidar data of complex road environment. Int. J. Remote Sens. 2016, 37, 4748–4777.
85. Li, F.; Oude Elberink, S.; Vosselman, G. Pole-like road furniture detection and decomposition in mobile laser scanning data based on spatial relations. Remote Sens. 2018, 10, 531.
86. Ordonez, C.; Cabo, C.; Sanz-Ablanedo, E. Automatic detection and classification of pole-like objects for urban cartography using mobile laser scanning data. Sensors 2017, 17, 10.
87. Yan, L.; Li, Z.; Liu, H.; Tan, J.; Zhao, S.; Chen, C. Detection and classification of pole-like road objects from mobile lidar data in motorway environment. Opt. Laser Technol. 2017, 97, 272–283.
88. Puttonen, E.; Jaakkola, A.; Litkey, P.; Hyyppa, J. Tree classification with fused mobile laser scanning and hyperspectral data. Sensors 2011, 11, 5158–5182.
89. Guan, H.; Yu, Y.; Ji, Z.; Li, J.; Zhang, Q. Deep learning-based tree classification using mobile lidar data. Remote Sens. Lett. 2015, 6, 864–873.
90. Zou, X.; Cheng, M.; Wang, C.; Xia, Y.; Li, J. Tree classification in complex forest point clouds based on deep learning. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2360–2364.
91. Varela-González, M.; González-Jorge, H.; Riveiro, B.; Arias, P. Automatic filtering of vehicles from mobile lidar datasets. Measurement 2014, 53, 215–223.
92. Borcs, A.; Nagy, B.; Benedek, C. Instant object detection in lidar point clouds. IEEE Geosci. Remote Sens. Lett. 2017, 14, 992–996.
93. Chen, Y.; Zhao, D.; Lv, L.; Zhang, Q. Multi-task learning for dangerous object detection in autonomous driving. Inf. Sci. 2018, 432, 559–571.
94. Stiller, C.; Hipp, J.; Rössig, C.; Ewald, A. Multisensor obstacle detection and tracking. Image Vis. Comput. 2000, 18, 389–396.
95. Asvadi, A.; Premebida, C.; Peixoto, P.; Nunes, U. 3d lidar-based static and moving obstacle detection in driving environments: An approach based on voxels and multi-region ground planes. Robot. Auton. Syst. 2016, 83, 299–311.
96. Zhang, L.; Li, Q.; Li, M.; Mao, Q.; Nüchter, A. Multiple vehicle-like target tracking based on the velodyne lidar. Proc. IFAC Intell. Auton. Veh. 2013, 46, 126–131.
97. Zhao, G.; Xiao, X.; Yuan, J.; Ng, G.W. Fusion of 3d-lidar and camera data for scene parsing. J. Vis. Commun. Image Represent. 2014, 25, 165–183.
98. Mancini, A.; Frontoni, E.; Zingaretti, P. Embedded multisensor system for safe point-to-point navigation of impaired users. IEEE Trans. Intell. Transp. Syst. 2015, 16, 3543–3555.
99. Gruyer, D.; Magnier, V.; Hamdi, K.; Claussmann, L.; Orfila, O.; Rakotonirainy, A. Perception, information processing and modeling: Critical stages for autonomous driving applications. Annu. Rev. Control 2017, 44, 323–341.
100. Jende, P.; Nex, F.; Gerke, M.; Vosselman, G. A fully automatic approach to register mobile mapping and airborne imagery to support the correction of platform trajectories in gnss-denied urban areas. ISPRS J. Photogramm. Remote Sens. 2018, 141, 86–99.
101. Einhorn, E.; Schröter, C.; Gross, H.M. Attention-driven monocular scene reconstruction for obstacle detection, robot navigation and map building. Robot. Auton. Syst. 2011, 59, 296–309.
102. Cheng, L.; Tong, L.; Wang, Y.; Li, M. Extraction of urban power lines from vehicle-borne lidar data. Remote Sens. 2014, 6, 3302–3320.
103. Liu, W.; Zhang, Z.; Li, S.; Tao, D. Road detection by using a generalized hough transform. Remote Sens. 2017, 9, 590.
104. Li, L.; Yang, F.; Zhu, H.; Li, D.; Li, Y.; Tang, L. An improved ransac for 3d point cloud plane segmentation based on normal distribution transformation cells. Remote Sens. 2017, 9, 433.
105. Barnea, S.; Filin, S. Segmentation of terrestrial laser scanning data using geometry and image information. ISPRS J. Photogramm. Remote Sens. 2013, 76, 33–48.
106. Awrangjeb, M.; Zhang, C.; Fraser, C.S. Automatic extraction of building roofs using lidar data and multispectral imagery. ISPRS J. Photogramm. Remote Sens. 2013, 83, 1–18.
107. Li, M.; Ma, L.; Blaschke, T.; Cheng, L.; Tiede, D. A systematic comparison of different object-based classification techniques using high spatial resolution imagery in agricultural environments. Int. J. Appl. Earth Obs. Geoinf. 2016, 49, 87–98.
108. Zhao, W.; Du, S.; Wang, Q.; Emery, W.J. Contextually guided very-high-resolution imagery classification with semantic segments. ISPRS J. Photogramm. Remote Sens. 2017, 132, 48–60.
109. Weinmann, M.; Schmidt, A.; Mallet, C.; Hinz, S.; Rottensteiner, F.; Jutzi, B. Contextual classification of point cloud data by exploiting individual 3d neighbourhoods. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 2, 271–278.
110. Dittrich, A.; Weinmann, M.; Hinz, S. Analytical and numerical investigations on the accuracy and robustness of geometric features extracted from 3d point cloud data. ISPRS J. Photogramm. Remote Sens. 2017, 126, 195–208.
111. Weinmann, M.; Urban, S.; Hinz, S.; Jutzi, B.; Mallet, C. Distinctive 2d and 3d features for automated large-scale scene analysis in urban areas. Comput. Graph. 2015, 49, 47–57.
112. Weinmann, M.; Jutzi, B.; Hinz, S.; Mallet, C. Semantic point cloud interpretation based on optimal neighborhoods, relevant features and efficient classifiers. ISPRS J. Photogramm. Remote Sens. 2015, 105, 286–304.
113. Yu, Y.; Li, J.; Guan, H.; Wang, C.; Yu, J. Semiautomated extraction of street light poles from mobile lidar point-clouds. IEEE Trans. Geosci. Remote Sens. 2015, 53, 1374–1386.
114. Yang, J.; Cao, Z.; Zhang, Q. A fast and robust local descriptor for 3d point cloud registration. Inf. Sci. 2016, 346–347, 163–179.
115. Fehr, D.; Beksi, W.J.; Zermas, D.; Papanikolopoulos, N. Covariance based point cloud descriptors for object detection and recognition. Comput. Vis. Image Underst. 2016, 142, 80–93.
116. Beserra Gomes, R.; Ferreira da Silva, B.M.; Rocha, L.K.D.M.; Aroca, R.V.; Velho, L.C.P.R.; Gonçalves, L.M.G. Efficient 3d object recognition using foveated point clouds. Comput. Graph. 2013, 37, 496–508.
117. Riveiro, B.; González-Jorge, H.; Martínez-Sánchez, J.; Díaz-Vilariño, L.; Arias, P. Automatic detection of zebra crossings from mobile lidar data. Opt. Laser Technol. 2015, 70, 63–70.
118. Puente, I.; González-Jorge, H.; Martínez-Sánchez, J.; Arias, P. Automatic detection of road tunnel luminaires using a mobile lidar system. Measurement 2014, 47, 569–575.
119. Eo, Y.D.; Pyeon, M.W.; Kim, S.W.; Kim, J.R.; Han, D.Y. Coregistration of terrestrial lidar points by adaptive scale-invariant feature transformation with constrained geometry. Autom. Constr. 2012, 25, 49–58.
120. Gressin, A.; Mallet, C.; Demantké, J.; David, N. Towards 3d lidar point cloud registration improvement using optimal neighborhood knowledge. ISPRS J. Photogramm. Remote Sens. 2013, 79, 240–251.
121. Weber, T.; Hänsch, R.; Hellwich, O. Automatic registration of unordered point clouds acquired by kinect sensors using an overlap heuristic. ISPRS J. Photogramm. Remote Sens. 2015, 102, 96–109.
122. Cheng, L.; Wu, Y.; Tong, L.; Chen, Y.; Li, M. Hierarchical registration method for airborne and vehicle lidar point cloud. Remote Sens. 2015, 7, 13921–13944.
123. Cheng, L.; Chen, S.; Liu, X.; Xu, H.; Wu, Y.; Li, M.; Chen, Y. Registration of laser scanning point clouds: A review. Sensors 2018, 18, 1641.
124. Yun, D.; Kim, S.; Heo, H.; Ko, K.H. Automated registration of multi-view point clouds using sphere targets. Adv. Eng. Inform. 2015, 29, 930–939.
125. Yan, L.; Tan, J.X.; Liu, H.; Xie, H.; Chen, C.J. Automatic non-rigid registration of multi-strip point clouds from mobile laser scanning systems. Int. J. Remote Sens. 2018, 39, 1713–1728.
126. Bueno, M.; González-Jorge, H.; Martínez-Sánchez, J.; Lorenzo, H. Automatic point cloud coarse registration using geometric keypoint descriptors for indoor scenes. Autom. Constr. 2017, 81, 134–148.
127. Li, J.; Yang, B.; Chen, C.; Huang, R.; Dong, Z.; Xiao, W. Automatic registration of panoramic image sequence and mobile laser scanning data using semantic features. ISPRS J. Photogramm. Remote Sens. 2018, 136, 41–57.
128. Guislain, M.; Digne, J.; Chaine, R.; Monnier, G. Fine scale image registration in large-scale urban lidar point sets. Comput. Vis. Image Underst. 2017, 157, 90–102.
129. Yao, L.; Wu, H.; Li, Y.; Meng, B.; Qian, J.; Liu, C.; Fan, H. Registration of vehicle-borne point clouds and panoramic images based on sensor constellations. Sensors 2017, 17, 837.
130. Man, Q.; Dong, P.; Guo, H. Pixel- and feature-level fusion of hyperspectral and lidar data for urban land-use classification. Int. J. Remote Sens. 2015, 36, 1618–1644.
131. Parent, J.R.; Volin, J.C.; Civco, D.L. A fully-automated approach to land cover mapping with airborne lidar and high resolution multispectral imagery in a forested suburban landscape. ISPRS J. Photogramm. Remote Sens. 2015, 104, 18–29.
132. Huang, H.; Chen, Y.; Clinton, N.; Wang, J.; Wang, X.; Liu, C.; Gong, P.; Yang, J.; Bai, Y.; Zheng, Y.; et al. Mapping major land cover dynamics in beijing using all landsat images in google earth engine. Remote Sens. Environ. 2017, 202, 166–176.
133. Zhang, W.; Li, W.; Zhang, C.; Hanink, D.M.; Li, X.; Wang, W. Parcel-based urban land use classification in megacity using airborne lidar, high resolution orthoimagery, and google street view. Comput. Environ. Urban Syst. 2017, 64, 215–228.
134. Wang, H.; Glennie, C. Fusion of waveform lidar data and hyperspectral imagery for land cover classification. ISPRS J. Photogramm. Remote Sens. 2015, 108, 1–11.
135. Singh, K.K.; Vogler, J.B.; Shoemaker, D.A.; Meentemeyer, R.K. Lidar-landsat data fusion for large-area assessment of urban land cover: Balancing spatial resolution, data volume and mapping accuracy. ISPRS J. Photogramm. Remote Sens. 2012, 74, 110–121.
136. Zhou, G.; Zhou, X. Seamless fusion of lidar and aerial imagery for building extraction. IEEE Trans. Geosci. Remote Sens. 2014, 52, 7393–7407.
137. Li, J.; Yang, B.; Wu, W.; Dai, W.; Chen, C.; Zou, X.; Tian, M. 3D mobile mapping with a low cost uav system. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 127–132.
138. Qin, Y.C.; Li, S.H.; Vu, T.T.; Niu, Z.; Ban, Y.F. Synergistic application of geometric and radiometric features of lidar data for urban land cover mapping. Opt. Express 2015, 23, 13761–13775.
139. Zhu, L.; Hyyppa, J. The use of airborne and mobile laser scanning for modeling railway environments in 3d. Remote Sens. 2014, 6, 3075–3100.
140. Zhang, R.; Li, G.; Li, M.; Wang, L. Fusion of images and point clouds for the semantic segmentation of large-scale 3d scenes based on deep learning. ISPRS J. Photogramm. Remote Sens. 2018, 143, 85–96.
141. Budzan, S.; Kasprzyk, J. Fusion of 3d laser scanner and depth images for obstacle recognition in mobile applications. Opt. Lasers Eng. 2016, 77, 230–240.
142. Xiao, L.; Wang, R.; Dai, B.; Fang, Y.; Liu, D.; Wu, T. Hybrid conditional random field based camera-lidar fusion for road detection. Inf. Sci. 2018, 432, 543–558.
143. Boussaha, M.; Fernandez-Moral, E.; Vallet, B.; Rives, P. On the production of semantic and textured 3d meshes of large scale urban environments from mobile mapping images and lidar scans. In Proceedings of the Reconnaissance des Formes, Image, Apprentissage et Perception (RFIAP), Marne-la-Vallée, France, 26–28 June 2018.
144. Babahajiani, P.; Fan, L.X.; Kamarainen, J.K.; Gabbouj, M. Urban 3d segmentation and modelling from street view images and lidar point clouds. Mach. Vis. Appl. 2017, 28, 679–694.
145. Biljecki, F.; Stoter, J.; Ledoux, H.; Zlatanova, S.; Çöltekin, A. Applications of 3d city models: State of the art review. ISPRS Int. J. Geo-Inf. 2015, 4, 2842–2889.
146. Heo, J.; Jeong, S.; Park, H.-K.; Jung, J.; Han, S.; Hong, S.; Sohn, H.-G. Productive high-complexity 3d city modeling with point clouds collected from terrestrial lidar. Comput. Environ. Urban Syst. 2013, 41, 26–38.
147. Wang, J.; Xu, K. Shape detection from raw lidar data with subspace modeling. IEEE Trans. Vis. Comput. Graph. 2017, 23, 2137–2150.
148. Kurazume, R.; Oshima, S.; Nagakura, S.; Jeong, Y.; Iwashita, Y. Automatic large-scale three dimensional modeling using cooperative multiple robots. Comput. Vis. Image Underst. 2017, 157, 25–42.
149. Zhao, Y.; Chai, X.; Gao, F.; Qi, C. Obstacle avoidance and motion planning scheme for a hexapod robot octopus-III. Robot. Auton. Syst. 2018, 103, 199–212.
150. Yan, L.; Liu, H.; Tan, J.; Li, Z.; Chen, C. A multi-constraint combined method for ground surface point filtering from mobile lidar point clouds. Remote Sens. 2017, 9, 958.
151. Wang, Y.; Chen, Q.; Liu, L.; Li, X.; Sangaiah, A.K.; Li, K. Systematic comparison of power line classification methods from als and mls point cloud data. Remote Sens. 2018, 10, 1222.
152. Yadav, M.; Chousalkar, C.G. Extraction of power lines using mobile lidar data of roadway environment. Remote Sens. Appl. Soc. Environ. 2017, 8, 258–265.
153. Lin, Y.; Hyyppä, J. Geometrically modeling 2d scattered points: A review of the potential for methodologically improving mobile laser scanning in data processing. Int. J. Digit. Earth 2014, 7, 432–449.
154. Wang, Z.; Zhang, L.; Zhang, L.; Li, R.; Zheng, Y.; Zhu, Z. A deep neural network with spatial pooling (dnnsp) for 3-d point cloud classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 4594–4604.
155. Zhao, R.; Pang, M.; Wang, J. Classifying airborne lidar point clouds via deep features learned by a multi-scale convolutional neural network. Int. J. Geogr. Inf. Sci. 2018, 32, 960–979.
156. Zhang, L.; Zhang, L. Deep learning-based classification and reconstruction of residential scenes from large-scale point clouds. IEEE Trans. Geosci. Remote Sens. 2017, 56, 1887–1897.
157. Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017.
158. Hu, X.; Yuan, Y. Deep-learning-based classification for dtm extraction from als point cloud. Remote Sens. 2016, 8, 730.
159. Li, W.; Fu, H.; Yu, L.; Cracknell, A. Deep learning based oil palm tree detection and counting for high-resolution remote sensing images. Remote Sens. 2016, 9, 22.
160. Zhao, W.; Du, S. Learning multiscale and deep representations for classifying remotely sensed imagery. ISPRS J. Photogramm. Remote Sens. 2016, 113, 155–165.
161. Li, Y.; Xie, W.; Li, H. Hyperspectral image reconstruction by deep convolutional neural network for classification. Pattern Recognit. 2017, 63, 371–383.
162. Paoletti, M.E.; Haut, J.M.; Plaza, J.; Plaza, A. A new deep convolutional neural network for fast hyperspectral image classification. ISPRS J. Photogramm. Remote Sens. 2018, 145, 120–147.
163. Huang, B.; Zhao, B.; Song, Y. Urban land-use mapping using a deep convolutional neural network with high spatial resolution multispectral remote sensing imagery. Remote Sens. Environ. 2018, 214, 73–86.
164. Liu, X.; Jiao, L.; Zhao, J.; Zhao, J.; Zhang, D.; Liu, F.; Yang, S.; Tang, X. Deep multiple instance learning-based spatial–spectral classification for pan and ms imagery. IEEE Trans. Geosci. Remote Sens. 2018, 56, 461–473.
165. Lao, Y. Topology Based Classification of Mobile Laser Data with Conditional Random Fields. Master's Thesis, University of Twente, Enschede, The Netherlands, 2016.
166. Kang, Z.; Yang, J. A probabilistic graphical model for the classification of mobile lidar point clouds. ISPRS J. Photogramm. Remote Sens. 2018, 143, 108–123.
167. Diaz-Vilarino, L.; Gonzalez-Jorge, H.; Bueno, M.; Arias, P.; Puente, I. Automatic classification of urban pavements using mobile lidar data and roughness descriptors. Constr. Build. Mater. 2016, 102, 208–215.
168. Pu, S.; Zhan, Q. Classification of mobile terrestrial laser point clouds using semantic constraints. In Proceedings of the Videometrics, Range Imaging, and Applications X, San Diego, CA, USA, 2–3 August 2009; p. 74470D.
Figure 1. The number of published papers per year from 2008 to 2018. The bars and the numbers above them indicate the number of papers published in the corresponding year. Blue bars represent the topic of mobile LiDAR, red bars the topic of MLS and urban areas, and yellow bars the topic of MLS and classification. The red curve is a trend fit of the MLS-and-urban topic obtained by spline interpolation.
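The spline trend fit mentioned in the caption can be reproduced in a few lines. The sketch below is illustrative only: the yearly counts are placeholders rather than the numbers behind Figure 1, and cubic B-spline interpolation via SciPy is assumed as the fitting method.

```python
# Minimal sketch of a spline trend fit over yearly publication counts,
# assuming SciPy's cubic B-spline interpolation. The counts below are
# illustrative placeholders, NOT the actual values reported in the survey.
import numpy as np
from scipy.interpolate import make_interp_spline

years = np.arange(2008, 2019)                                    # 2008-2018
counts = np.array([5, 8, 12, 15, 22, 30, 41, 55, 70, 90, 110])  # placeholder

spline = make_interp_spline(years, counts, k=3)  # cubic interpolating spline
dense_years = np.linspace(years.min(), years.max(), 200)
trend = spline(dense_years)                      # smooth trend curve

# e.g., overlay on the bar chart with matplotlib:
# import matplotlib.pyplot as plt
# plt.bar(years, counts); plt.plot(dense_years, trend, "r-"); plt.show()
```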
Figure 2. Comparison of airborne and mobile LiDAR data of the same area. (a) Airborne LiDAR (ALS = airborne laser scanning); (b) mobile LiDAR (MLS = mobile laser scanning).
Figure 3. Urban scene mapping and modeling based on MLS data. The labeled objects are detected from the detailed MLS point clouds: A marks road markings, B road curbs, C zebra crossings, D trees, E street poles, F traffic lights or advertising boards, and G buildings.
Figure 4. Data processing pipeline of the key issues for MLS data.
Table 1. Summary of some of the latest mobile mapping systems ¹.

Company: SITECO
Product: Road-Scanner 4
Characteristics: Modular and scalable design: can integrate (1) up to three Faro Focus laser scanners (range up to 350 m at 90% reflectivity, measurement rate up to 976,000 points per second) or two Z+F/RIEGL/OPTECH laser scanners; (2) up to eight high-resolution cameras or a spherical LadyBug5 camera; and (3) a wide range of GNSS-inertial systems.
Applications: Surveying and mapping for urban road inventories, infrastructure monitoring, and pavement management.

Company: SITECO
Product: Pave-Scanner
Characteristics: Millimeter-resolution pavement inspection.
Applications: Inspection of road cracks, lane markings, potholes, etc.

Company: TOPCON
Product: IP-S3
Characteristics: Includes (1) a single Velodyne HDL-32E laser scanner (measurement rate of 700,000 points per second, range of 100 m, 5–20 Hz frame rate), (2) a LadyBug5 spherical camera, and (3) a GNSS-inertial system consisting of a dual-frequency GNSS receiver, an IMU, and a vehicle wheel encoder.
Applications: Roadway surface condition assessment, pipeline and utility mapping, construction progress monitoring, and risk management.

Company: TRIMBLE
Product: MX9
Characteristics: Includes (1) one or two RIEGL VUX-1HA laser scanners (measurement rate up to 1 million pulses per second, maximum range of 120 m at 10% reflectivity and 420 m at 80% reflectivity, scan speed of 10–250 lines per second), (2) one spherical camera, two side-looking cameras, and one backward/downward-looking camera, and (3) a Trimble AP60 or AP40 GNSS-inertial system. The sensors weigh 31 kg (single scanner) to 37 kg (two scanners).
Applications: Surveying and mapping for road assets, cross sections, 3D models, and volumetric analysis.

Company: 3D LASER MAPPING Ltd.
Product: ROBIN
Characteristics: Can be switched among walk, drive, and fly modes and used across multiple platforms (backpack, vehicle, or aircraft). The basic system includes (1) a RIEGL VUX-1HA laser scanner, (2) a FLIR Grasshopper3 camera (12 MP), and (3) dual GNSS receivers with a MEMS (micro-electro-mechanical system) IMU. In drive mode, it can be extended with a LadyBug5 spherical camera, a FOG (fiber-optic gyroscope) IMU, and an optical odometer.
Applications: The walk and drive modes are applied to asset, heritage, and highway mapping, geohazard monitoring, urban asset management, and rail infrastructure mapping.

Company: 3D LASER MAPPING Ltd.
Product: StreetMapper IV
Characteristics: Includes (1) one or two RIEGL VUX-1HA laser scanners, (2) a MEMS or FOG IMU, and (3) a LadyBug spherical camera.
Applications: Asset mapping, city surveying, monitoring and change detection, tunnel mapping, civil engineering, and surface maintenance.

Company: RIEGL
Product: VMX-2HA
Characteristics: Includes (1) two RIEGL VUX-1HA laser sensors, (2) a modular camera system connecting up to nine cameras (RIEGL, LadyBug 5+, or DSLR), and (3) a high-grade GNSS/IMU.
Applications: Transportation infrastructure and rail mapping, road surface measurement, city modeling, and as-built surveying.

Company: Renishaw
Product: Dynascan S250
Characteristics: Includes (1) one or two patented laser scanners (maximum range 250 m, field of view 360°, range resolution 1 cm, scan rate up to 30 Hz, measurement rate 36,000 pulses per second per scanner), (2) up to 10 cameras (10 frames per second (fps) at 5 MP, or 30 fps at 1080p HD resolution), and (3) an IMU and dual GNSS receivers. No PC or external computer is needed for operation because of its on-board Linux operating system.
Applications: Urban surveying and construction industry applications, such as utilities, transport/highway infrastructure surveys, and urban brownfield updates and revamps.

Company: OPTECH
Product: Lynx HS600
Characteristics: Includes (1) one or two OPTECH LiDAR sensors (up to 1.6 million pulses per second, 4 returns, and 600 lines per second per sensor; maximum range 250 m at 10% reflectivity), (2) a LadyBug5 spherical camera, and (3) a GNSS/IMU system.
Applications: Urban asset, transportation, and utilities management; rail/civil engineering; urban mapping and modeling.

Company: Hexagon/Leica
Product: Leica Pegasus: Two Ultimate
Characteristics: Includes (1) a laser scanner, either a Z+F 9012 PROFILER (range 119 m, more than 1 million pulses per second, 200 lines per second) or a Leica ScanStation, (2) one spherical camera (24 MP, with two fish-eye cameras), four built-in frame cameras, and one or two optional external frame cameras, and (3) a triple-band GNSS with single or dual antenna support and a FOG IMU.
Applications: Improved smart city applications, survey-grade reality capture, pavement assessment, and underground asset detection with additional sensors (pavement camera, ground-penetrating radar, etc.).

¹ The detailed characteristics of each MMS include the data processing software, technical specifications (e.g., pulse repetition frequency, valid range, point density), and the most suitable applications. Detailed information can be found at the following websites: (1) Road-Scanner 4: https://www.sitecoinf.it/en/solutions/road-scanner; (2) Pave-Scanner: https://www.sitecoinf.it/en/solutions/pave-scanner; (3) IP-S3: https://www.topconpositioning.com/mass-data-and-volume-collection/mobile-mapping/ip-s3; (4) MX9: https://geospatial.trimble.com/products-and-solutions/trimble-mx9; (5) ROBIN: https://www.3dlasermapping.com/robin-mobile-mapping-system; (6) StreetMapper IV: https://www.3dlasermapping.com/streetmapper-lidar-system; (7) VMX-2HA: http://www.riegl.com/nc/products/mobile-scanning/produktdetail/product/scanner/56; (8) Dynascan S250: http://www.renishaw.com/sv/27332.aspx; (9) Lynx HS600: https://www.teledyneoptech.com/en/products/mobile-survey/lynx-hs600/; (10) Leica Pegasus: Two Ultimate (with Z+F 9012 laser scanner): https://leica-geosystems.com/products/mobile-sensor-platforms/capture-platforms/leica-pegasus_two-ultimate.
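The scanner specifications in Table 1 also allow rough, back-of-envelope estimates of achievable point spacing. The Python sketch below is illustrative only: the vehicle speed, target range, and full 360° field-of-view are our assumptions, and actual point density further depends on scan geometry and surface reflectivity.

```python
# Back-of-envelope point-spacing estimate from Table 1 specs (illustrative).
# Pulse and line rates follow the RIEGL VUX-1HA entry; speed, range, and a
# 360-degree field of view are assumed values, not manufacturer figures.
import math

pulse_rate = 1_000_000   # pulses per second (up to, per Table 1)
line_rate = 250          # scan lines per second (upper end of 10-250)
speed_kmh = 50.0         # assumed vehicle speed, km/h
target_range = 10.0      # assumed distance to the scanned surface, m

speed_ms = speed_kmh / 3.6
along_track = speed_ms / line_rate             # spacing between profiles, m
points_per_line = pulse_rate / line_rate
angular_step = 2 * math.pi / points_per_line   # radians, assuming 360° FOV
in_profile = target_range * angular_step       # spacing within a profile, m

print(f"along-track spacing ~ {along_track * 100:.1f} cm")  # ~5.6 cm
print(f"in-profile spacing  ~ {in_profile * 1000:.1f} mm")  # ~15.7 mm
```

Under these assumptions, centimeter-level spacing between profiles and millimeter-level spacing within a profile are consistent with the point densities of a few thousand points/m² cited for MLS data.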
Table 2. Overview of the frameworks for integrating MLS point clouds with other data sources.

Integration level: Low
Description: The MLS point clouds and the other data sources are first processed independently; their intermediate results are then merged to obtain the final results.
Advantages/Limitations: The processing procedures are simple and easy to implement; however, independent methods must be developed for each data source, and the accuracy of the results is usually not very high.
References: [32,76,88,97,139,141,143]

Integration level: Medium
Description: One data source is first processed to obtain useful features or background knowledge; based on this information, the other data are then analyzed to classify or extract the final results.
Advantages/Limitations: The reference information improves the accuracy of the results; however, the serial procedures are difficult to design and execute, and the processing steps must be closely coupled.
References: [5,29,48,127,129,140,144]

Integration level: High
Description: Features are first extracted directly from both the MLS point clouds and the other data sources; the combined features are then analyzed and classified as a single data source.
Advantages/Limitations: The processing can fully exploit all data sources and achieve high accuracy; however, it requires complex methods and expert knowledge.
References: [19,49,128,142]
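To make the "high" integration level concrete, the sketch below fuses hypothetical per-point geometric features from MLS data with spectral features sampled from co-registered imagery and classifies them jointly. All arrays are placeholders, and the random forest is just one possible classifier, not a method prescribed by the surveyed works.

```python
# Minimal sketch of the "high" integration level in Table 2: features from
# MLS point clouds and co-registered imagery are concatenated and classified
# jointly. All arrays are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

n_points = 1000
# Hypothetical per-point geometric features from the MLS point cloud
# (e.g., height above ground, planarity, local point density).
mls_features = np.random.rand(n_points, 3)
# Hypothetical spectral features projected onto each point from imagery
# (e.g., RGB values).
img_features = np.random.rand(n_points, 3)
labels = np.random.randint(0, 4, n_points)   # e.g., road/building/tree/pole

fused = np.hstack([mls_features, img_features])   # feature-level fusion
clf = RandomForestClassifier(n_estimators=100).fit(fused, labels)

# By contrast, the "low" level would classify mls_features and img_features
# separately and merge the two label maps afterwards, while the "medium"
# level would use one source's output to guide the analysis of the other.
```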
